From 33b107e6c8445c43c4b44c4694bdb89fd3e977c5 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 2 Mar 2018 15:04:35 -0500 Subject: [PATCH 0001/2289] start a vignette to track HB-PDA workflow --- .../vignettes/MultiSitePDAVignette.Rmd | 322 ++++++++++++++++++ 1 file changed, 322 insertions(+) create mode 100644 modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd new file mode 100644 index 00000000000..24b691ee80c --- /dev/null +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -0,0 +1,322 @@ +--- +title: "HB-PDA vignette" +author: "Istem Fer" +date: "3/2/2018" +output: html_document +--- + +```{r setup, include=FALSE} +knitr::opts_chunk$set(echo = TRUE) +``` + +## Multi-Site Hierarchical PDA + +This vignette documents the steps and settings of Hierarchical Bayesian Parameter Data Assimilation (HB-PDA) analysis on multiple sites. + +## Start with multi-settings + +Add the `` tag in your `pecan.xml` for the groups of sites you are interested in. In this vignette, we will use HB-PDA site group in BETYdb which consists of the temperate decidious broadleaf forest (DBF) sites of Ameriflux network. + +``` + + 1000000022 + +``` + +In the meantime, your `` section should specify ``, `` and `` information accordingly. E.g.: + + +``` + + + + 796 + 2005/01/01 + 2011/12/31 + Bartlett Experimental Forest (US-Bar) + 44.06464 + -71.288077 + + 2005/01/01 + 2011/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-796/AMF_US-Bar_BASE_HH_4-1.2005-01-01.2011-12-31.clim + + + + + + 758 + 2010/01/01 + 2015/12/31 + Harvard Forest EMS Tower/HFR1 (US-Ha1) + 42.5378 + -72.1715 + + 2010/01/01 + 2015/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-758/AMF_US-Ha1_BASE_HR_10-1.2010-01-01.2015-12-31.clim + + + + + + 767 + 2001/01/01 + 2014/12/31 + Morgan Monroe State Forest (US-MMS) + 39.3231 + -86.4131 + + 2001/01/01 + 2014/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim + + + + + + 768 + 2005/01/01 + 2015/12/31 + Missouri Ozark Site/BREA (US-MOz) + 38.7441 + -92.2 + + 2005/01/01 + 2015/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-768/AMF_US-MOz_BASE_HH_7-1.2005-01-01.2015-12-31.clim + + + + + + 776 + 2007/01/01 + 2014/12/31 + Univiversity of Michigan Biological Station (US-UMB) + 45.5598 + -84.7138 + + 2007/01/01 + 2014/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-776/AMF_US-UMB_BASE_HH_10-1.2007-01-01.2014-12-31.clim + + + + + + 676 + 1999/01/01 + 2006/12/31 + Willow Creek (US-WCr) + 45.805925 + -90.07961 + + 1999/01/01 + 2006/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-676/AMF_US-WCr_BASE_HH_11-1.1999-01-01.2006-12-31.clim + + + + + + 1000000109 + 2013/01/01 + 2015/12/31 + Ontario - Turkey Point Mature Deciduous (CA-TPD) + 42.6353 + -80.5577 + + 2013/01/01 + 2015/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-109/AMF_CA-TPD_BASE_HH_1-1.2013-01-01.2015-12-31.clim + + + + + + 740 + 1997/01/01 + 2010/12/31 + BOREAS SSA Old Aspen (CA-Oas) + 53.6289 + -106.198 + + 1997/01/01 + 2010/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-740/AMF_CA-Oas_BASE_HH_1-1.1997-01-01.2010-12-31.clim + + + + +``` + +If requested, your `` and `` sections would look similar to this in your `pecan.CONFIGS.xml`: + + +``` + + + 100 + NEE + 1000017326 + 2005 + 2011 + + + 100 + NEE + 1000017326 
+ 2010 + 2015 + + + 100 + NEE + 1000017326 + 2001 + 2014 + + + 100 + NEE + 1000017326 + 2005 + 2015 + + + 100 + NEE + 1000017326 + 2007 + 2014 + + + 100 + NEE + 1000017326 + 1999 + 2006 + + + 100 + NEE + 1000017326 + 2013 + 2015 + + + 100 + NEE + 1000017326 + 1997 + 2010 + + + + + + -1 + 0 + 1 + + NEE + 1000017325 + 2005 + 2011 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 2010 + 2015 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 2001 + 2014 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 2005 + 2015 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 2007 + 2014 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 1999 + 2006 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 2013 + 2015 + + + + -1 + 0 + 1 + + NEE + 1000017325 + 1997 + 2010 + + +``` From ea7caf58ccd94bf522360a2a31dc200431749463 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 2 Mar 2018 16:15:16 -0500 Subject: [PATCH 0002/2289] add HB-PDA example tags --- .../vignettes/MultiSitePDAVignette.Rmd | 193 +++++++++++++++++- 1 file changed, 192 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 24b691ee80c..2a903ed27f0 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -11,7 +11,7 @@ knitr::opts_chunk$set(echo = TRUE) ## Multi-Site Hierarchical PDA -This vignette documents the steps and settings of Hierarchical Bayesian Parameter Data Assimilation (HB-PDA) analysis on multiple sites. +This vignette documents the steps and settings of Hierarchical Bayesian Parameter Data Assimilation (HB-PDA) analysis on multiple sites. If you haven't done the standard PDA check that vignette first modules/assim.batch/vignettes/AssimBatchVignette.Rmd ## Start with multi-settings @@ -320,3 +320,194 @@ If requested, your `` and `` sections would look ``` + +## Tags for HB-PDA + +Similar to PDA settings, we will contain HB-PDA settings under the `` section. Likewise standard PDA, if you have chosen the parameters you want to target in the calibration, open the `pecan.CONFIGS.xml` file, insert the tag below, and save as `pecan.HBPDA.xml`: + +``` + + emulator.ms + hierarchical + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + psnTOpt + + + + 100 + 0.1 + 0.3 + + +``` + +Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be defined under the `` tag above: local, global and hierarchical. 
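+
+As a quick orientation, the `mode` tag roughly maps onto three run flags inside `emulator.ms`. The snippet below is a minimal sketch of that dispatch, assuming the settings structure shown above (an empty or unrecognized mode falls through to running all three calibrations):
+
+```
+pda.mode <- settings$assim.batch$mode
+if (pda.mode == "local") {
+  local <- TRUE
+  global <- hierarchical <- FALSE
+} else if (pda.mode == "global") {
+  global <- TRUE
+  local <- hierarchical <- FALSE
+} else if (pda.mode == "hierarchical") {
+  hierarchical <- TRUE
+  local <- global <- FALSE
+} else {
+  # run everything
+  local <- global <- hierarchical <- TRUE
+}
+```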
+ +Now we will extend the settings above to include information about data to assimilate and likelihoods, similar to the standard-PDA setup, except for multiple sites: + + +``` + + emulator.ms + hierarchical + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + + + 100 + 0.1 + 0.3 + + + + + 1000013322 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-796/AMF_US-Bar_BASE_HH_4-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 2000000128 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-758/AMF_US-Ha1_BASE_HR_10-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 1000013317 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-767/AMF_US-MMS_BASE_HR_8-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 1000013316 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-768/AMF_US-MOz_BASE_HH_7-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 1000013832 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-776/AMF_US-UMB_BASE_HH_10-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 1000012965 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-676/AMF_US-WCr_BASE_HH_11-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 1000013836 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_1-109/AMF_CA-TPD_BASE_HH_1-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + + + + 1000013838 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-740/AMF_CA-Oas_BASE_HH_1-1.csv + + Laplace + 1000000042 + + FC + UST + + + + + +``` \ No newline at end of file From dd723a2ac60fc739891781a72916f1dd67b5bc6f Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 10:02:15 -0500 Subject: [PATCH 0003/2289] introduce ms function in utils --- modules/assim.batch/R/pda.utils.R | 2 ++ modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd | 6 +++--- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 99ce798cac6..134e24d9344 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -23,6 +23,8 @@ assim.batch <- function(settings) { settings <- pda.mcmc.bs(settings) } else if (settings$assim.batch$method == "emulator") { settings <- pda.emulator(settings) + } else if (settings$assim.batch$method == "emulator.ms") { + settings <- pda.emulator.ms(settings) } else if (settings$assim.batch$method == "bayesian.tools") { settings <- pda.bayesian.tools(settings) } else { diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 2a903ed27f0..01594d95760 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -328,7 +328,7 @@ Similar to PDA settings, we will contain HB-PDA settings under the ` emulator.ms - hierarchical + local 60 10000 3 @@ -352,7 +352,7 @@ Similar to PDA settings, we will contain HB-PDA settings under the ` ``` -Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be defined under the `` tag above: local, global and hierarchical. +Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be selected under the `` tag above: local, global and hierarchical. We will start with local mode, i.e. site-level calibration. 
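+
+In local mode each site is calibrated independently; conceptually, the multi-site function just loops the single-site emulator workflow over the multi-settings list (a sketch of the intended behavior, not the final implementation):
+
+```
+for (s in seq_along(multi.settings)) {
+  settings <- multi.settings[[s]]
+  # site-level knot runs, GP fitting and MCMC, as in the standard PDA vignette
+  multi.settings[[s]] <- pda.emulator(settings)
+}
+```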
Now we will extend the settings above to include information about data to assimilate and likelihoods, similar to the standard-PDA setup, except for multiple sites: @@ -360,7 +360,7 @@ Now we will extend the settings above to include information about data to assim ``` emulator.ms - hierarchical + local 60 10000 3 From b9e0e10319792415ad7f8f32c5bba6fecd67de3f Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 10:06:50 -0500 Subject: [PATCH 0004/2289] dont want papply over, but the ms function itself to have control over the multisettings --- modules/assim.batch/R/pda.utils.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 134e24d9344..62e03336d4b 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -37,9 +37,9 @@ assim.batch <- function(settings) { ##' @export runModule.assim.batch <- function(settings) { - if (is.MultiSettings(settings)) { + if (is.MultiSettings(settings) && settings$assim.batch$method != "emulator.ms") { return(papply(settings, runModule.assim.batch)) - } else if (is.Settings(settings)) { + } else if (is.Settings(settings) || settings$assim.batch$method == "emulator.ms") { return(assim.batch(settings)) } else { stop("runModule.assim.batch only works with Settings or MultiSettings") From 86e4c7b611bbce257e6cf86b04238bc1fc430034 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 10:33:06 -0500 Subject: [PATCH 0005/2289] start ms function --- modules/assim.batch/R/pda.emulator.ms.R | 49 +++++++++++++++++++++++++ 1 file changed, 49 insertions(+) create mode 100644 modules/assim.batch/R/pda.emulator.ms.R diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R new file mode 100644 index 00000000000..9818861e9e0 --- /dev/null +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -0,0 +1,49 @@ +##' Paramater Data Assimilation using emulator on multiple sites in three modes: local, global, hierarchical +##' First draft, not complete yet +##' +##' @title Paramater Data Assimilation using emulator on multiple sites +##' @param settings = a pecan settings list +##' +##' @return settings +##' +##' @author Istem Fer +##' @export +pda.emulator.ms <- function(settings, external.priors = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, + chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, + ar.target = NULL, jvar = NULL, n.knot = NULL) { + + ## this bit of code is useful for defining the variables passed to this function if you are + ## debugging + if (FALSE) { + params.id <- param.names <- prior.id <- chain <- iter <- NULL + n.knot <- adapt <- adj.min <- ar.target <- jvar <- external.priors <- NULL + } + + + # check mode + pda.mode <- settings$assim.batch$mode + + # how many sites + nsites <- length(settings) + + if(pda.mode == "local"){ + local <- TRUE + global <- hierarchical <- FALSE + }else if(pda.mode == "global"){ + global <- TRUE + local <- hierarchical <- FALSE + }else if(pda.mode == "hierarchical"){ + hieararchical <- TRUE + local <- global <- FALSE + }else{ + local <- global <- hierarchical <- TRUE + } + + + + + + + + +} \ No newline at end of file From 85ccf4de75f245ac6cc846fd3e677236e0d0f104 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 14:21:54 -0500 Subject: [PATCH 0006/2289] change utils handling --- modules/assim.batch/R/pda.utils.R | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R 
b/modules/assim.batch/R/pda.utils.R index 62e03336d4b..e6443fa3d6a 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -23,8 +23,6 @@ assim.batch <- function(settings) { settings <- pda.mcmc.bs(settings) } else if (settings$assim.batch$method == "emulator") { settings <- pda.emulator(settings) - } else if (settings$assim.batch$method == "emulator.ms") { - settings <- pda.emulator.ms(settings) } else if (settings$assim.batch$method == "bayesian.tools") { settings <- pda.bayesian.tools(settings) } else { @@ -37,8 +35,13 @@ assim.batch <- function(settings) { ##' @export runModule.assim.batch <- function(settings) { - if (is.MultiSettings(settings) && settings$assim.batch$method != "emulator.ms") { - return(papply(settings, runModule.assim.batch)) + if (is.MultiSettings(settings)) { + pda.method <- unique(sapply(settings$assim.batch,`[[`, "method")) + if(pda.method == "emulator.ms"){ + return(pda.emulator.ms(settings)) + }else{ + return(papply(settings, runModule.assim.batch)) + } } else if (is.Settings(settings) || settings$assim.batch$method == "emulator.ms") { return(assim.batch(settings)) } else { From 96da9726329658aa28dbafcee4139b5fffda7211 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 14:22:40 -0500 Subject: [PATCH 0007/2289] delete remnant --- modules/assim.batch/R/pda.utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index e6443fa3d6a..5e6de425142 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -42,7 +42,7 @@ runModule.assim.batch <- function(settings) { }else{ return(papply(settings, runModule.assim.batch)) } - } else if (is.Settings(settings) || settings$assim.batch$method == "emulator.ms") { + } else if (is.Settings(settings)) { return(assim.batch(settings)) } else { stop("runModule.assim.batch only works with Settings or MultiSettings") From 30286db2405c9112d6b2709b837f1978c7c8a940 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 14:26:47 -0500 Subject: [PATCH 0008/2289] initialization --- modules/assim.batch/R/pda.emulator.ms.R | 44 ++--- .../vignettes/MultiSitePDAVignette.Rmd | 179 +++++++++++++++--- 2 files changed, 171 insertions(+), 52 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 9818861e9e0..a541835928a 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -2,29 +2,18 @@ ##' First draft, not complete yet ##' ##' @title Paramater Data Assimilation using emulator on multiple sites -##' @param settings = a pecan settings list +##' @param multi.settings = a pecan multi-settings list ##' ##' @return settings ##' ##' @author Istem Fer ##' @export -pda.emulator.ms <- function(settings, external.priors = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, - chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, - ar.target = NULL, jvar = NULL, n.knot = NULL) { - - ## this bit of code is useful for defining the variables passed to this function if you are - ## debugging - if (FALSE) { - params.id <- param.names <- prior.id <- chain <- iter <- NULL - n.knot <- adapt <- adj.min <- ar.target <- jvar <- external.priors <- NULL - } - +pda.emulator.ms <- function(multi.settings) { + ## -------------------------------------- Initialization --------------------------------------------------- + # check mode - pda.mode <- settings$assim.batch$mode - - # how many sites - 
nsites <- length(settings) + pda.mode <- unique(sapply(multi.settings$assim.batch,`[[`, "mode")) if(pda.mode == "local"){ local <- TRUE @@ -39,11 +28,22 @@ pda.emulator.ms <- function(settings, external.priors = NULL, params.id = NULL, local <- global <- hierarchical <- TRUE } - - - - - - + # how many sites + nsites <- length(multi.settings) + + + # lists to collect emulators and run MCMC per site later + SS.stack <- vector("list", nsites) + gp.stack <- vector("list", nsites) + prior.stack <- vector("list", nsites) + nstack <- vector("list", nsites) + + ## -------------------------------------- Runs and build emulator ------------------------------------------ + + for(s in seq_along(multi.settings)){ # site runs - loop begin + + settings <- multi.settings[[s]] + + } # site runs - loop end } \ No newline at end of file diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 01594d95760..320019569fa 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -344,44 +344,44 @@ Similar to PDA settings, we will contain HB-PDA settings under the `psnTOpt - - 100 - 0.1 - 0.3 - ``` -Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be selected under the `` tag above: local, global and hierarchical. We will start with local mode, i.e. site-level calibration. +Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be selected under the `` tag above: local, global and hierarchical. If left empty, the function will go through all three modes. But note that, for both global and hierarhical calibration, we need site-level runs. We will start with local mode, i.e. site-level calibration. -Now we will extend the settings above to include information about data to assimilate and likelihoods, similar to the standard-PDA setup, except for multiple sites: +Now we will extend the settings above to include information about data to assimilate and likelihoods, similar to the standard-PDA setup, except for multiple sites. First, add `` tag to the multisettings in the bottom of your `pecan.HBPDA.xml`: +``` + + assim.batch + ensemble + sensitivity.analysis + run + +``` + +Now we can insert multi-site settings in the `` section accordingly (note that now the chunk above where we defined `method`, `mode`, `param.names` etc. 
will go under each `setting.*`): ``` - - emulator.ms - local - 60 - 10000 - 3 - - - som_respiration_rate - soil_respiration_Q10 - soilWHC - - - half_saturation_PAR - dVPDSlope - leafGrowth - - - - 100 - 0.1 - 0.3 - + + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000013322 @@ -398,6 +398,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 2000000128 @@ -414,6 +431,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000013317 @@ -430,6 +464,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000013316 @@ -446,6 +497,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000013832 @@ -462,6 +530,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000012965 @@ -478,6 +563,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000013836 @@ -494,6 +596,23 @@ Now we will extend the settings above to include information about data to assim + emulator.ms + local + 60 + 10000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + soilWHC + + + half_saturation_PAR + dVPDSlope + leafGrowth + + 1000013838 From ddd26ff3862b6aad955e6a8f6aec1cb31f1281a1 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 14:54:43 -0500 Subject: [PATCH 0009/2289] intorduce ensemble type --- modules/assim.batch/R/pda.utils.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 5e6de425142..cdbd0196c0e 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -345,7 +345,8 @@ pda.create.ensemble <- function(settings, con, workflow.id) { settings$assim.batch$method == "bruteforce.bs" | settings$assim.batch$method == "bayesian.tools") { ensemble.type <- "pda.MCMC" - } else if (settings$assim.batch$method == "emulator") { + } else if (settings$assim.batch$method == "emulator"| + settings$assim.batch$method == "emulator.ms") { ensemble.type <- "pda.emulator" } From a8febb36cb175ee4f0f22dce5104cb4911e30ca7 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 3 Mar 2018 15:10:22 -0500 Subject: [PATCH 0010/2289] hacks for development until we read FC --- base/db/inst/bety_mstmip_lookup.csv | 2 +- modules/assim.batch/R/pda.emulator.R | 7 ++++++- .../assim.batch/vignettes/MultiSitePDAVignette.Rmd | 12 ++++++++++-- 3 files changed, 17 insertions(+), 4 deletions(-) diff --git 
a/base/db/inst/bety_mstmip_lookup.csv b/base/db/inst/bety_mstmip_lookup.csv index cc739d20db8..e1b99b6b02a 100644 --- a/base/db/inst/bety_mstmip_lookup.csv +++ b/base/db/inst/bety_mstmip_lookup.csv @@ -30,7 +30,7 @@ LWdown,W/m2,LWdown LWdown,W/m2,surface_downwelling_longwave_flux_in_air Lwnet,W m-2,Lwnet NEE,kg C m-2 s-1,NEE -NEE,kg C m-2 s-1,FC +FC,kg C m-2 s-1,FC NPP,kg C m-2 s-1,NPP NPP,kg C m-2 s-1,NPP_2_C NPP,kg C m-2 s-1,NPP_1_C diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 84eec0e80db..099220834ef 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -14,7 +14,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, - ar.target = NULL, jvar = NULL, n.knot = NULL) { + ar.target = NULL, jvar = NULL, n.knot = NULL, local = TRUE) { ## this bit of code is useful for defining the variables passed to this function if you are ## debugging @@ -22,6 +22,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, external.data <- external.priors <- NULL params.id <- param.names <- prior.id <- chain <- iter <- NULL n.knot <- adapt <- adj.min <- ar.target <- jvar <- NULL + local <- TRUE } # handle extention flags @@ -100,6 +101,10 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, inputs <- external.data } + if(TRUE){ # until FC read is fixed + + } + n.input <- length(inputs) diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 320019569fa..0e45d232ba7 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -321,7 +321,7 @@ If requested, your `` and `` sections would look ``` -## Tags for HB-PDA +## Settings for HB-PDA Similar to PDA settings, we will contain HB-PDA settings under the `` section. 
Likewise standard PDA, if you have chosen the parameters you want to target in the calibration, open the `pecan.CONFIGS.xml` file, insert the tag below, and save as `pecan.HBPDA.xml`:
@@ -629,4 +629,12 @@ Now we can insert multi-site settings in the `` section accordingly
 
 
 
-```
\ No newline at end of file
+```
+
+The following section briefly explains what happens next:
+```
+multi.settings <- read.settings("pecan.HBPDA.xml")
+pda.emulator.ms(multi.settings)
+```
+
+## The workflow for HB-PDA
\ No newline at end of file

From f4c39ba2191a19bc9512182df8e00499eb43667d Mon Sep 17 00:00:00 2001
From: istfer
Date: Sat, 3 Mar 2018 15:47:12 -0500
Subject: [PATCH 0011/2289] local flag

---
 modules/assim.batch/R/pda.emulator.R    | 4 ----
 modules/assim.batch/R/pda.emulator.ms.R | 7 ++++++-
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R
index 099220834ef..3fecf8d8101 100644
--- a/modules/assim.batch/R/pda.emulator.R
+++ b/modules/assim.batch/R/pda.emulator.R
@@ -101,10 +101,6 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
     inputs <- external.data
   }
   
-  if(TRUE){ # until FC read is fixed
-    
-  }
-  
   n.input <- length(inputs)
   
 
diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R
index a541835928a..102f7e87375 100644
--- a/modules/assim.batch/R/pda.emulator.ms.R
+++ b/modules/assim.batch/R/pda.emulator.ms.R
@@ -43,7 +43,12 @@ pda.emulator.ms <- function(multi.settings) {
   
   for(s in seq_along(multi.settings)){ # site runs - loop begin
     
     settings <- multi.settings[[s]]
-    
+    # NOTE: local flag is not used currently, prepearation for future use
+    # if this flag is TRUE, pda.emulator will not fit GP and run MCMC,
+    # but will run LHC ensembles, calculate SS and return settings list with saved SS paths etc. 
+ # this requires some re-arrangement in pda.emulator, + # for now we will always run site-level calibration + settings <- pda.emulator(settings, local = local) } # site runs - loop end } \ No newline at end of file From 733f3ddaa3d4ec741dd625ae2f7567a8628b9a76 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 4 Mar 2018 09:10:28 -0500 Subject: [PATCH 0012/2289] start saving after ensemble id is given --- modules/assim.batch/R/pda.emulator.R | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 3fecf8d8101..a8650233173 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -53,10 +53,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, prior.id=prior.id, chain=chain, iter=iter, adapt=adapt, adj.min=adj.min, ar.target=ar.target, jvar=jvar, n.knot=n.knot, run.round) - ## history restart - pda.restart.file <- file.path(settings$outdir,paste0("history.pda", - settings$assim.batch$ensemble.id, ".Rdata")) - current.step <- "START" + ## will be used to check if multiplicative Gaussian is requested any.mgauss <- sapply(settings$assim.batch$inputs, `[[`, "likelihood") @@ -139,6 +136,10 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ## Create an ensemble id settings$assim.batch$ensemble.id <- pda.create.ensemble(settings, con, workflow.id) + ## history restart + pda.restart.file <- file.path(settings$outdir,paste0("history.pda", + settings$assim.batch$ensemble.id, ".Rdata")) + current.step <- "START" ## Set up likelihood functions llik.fn <- pda.define.llik.fn(settings) From 1d11faaff8a9303dbf88aa50b65b4a0e1d7a8e0f Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 4 Mar 2018 11:50:46 -0500 Subject: [PATCH 0013/2289] start global calibration --- modules/assim.batch/R/pda.emulator.ms.R | 92 ++++++++++++++++++++++++- 1 file changed, 89 insertions(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 102f7e87375..5229cf81faf 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -1,7 +1,6 @@ ##' Paramater Data Assimilation using emulator on multiple sites in three modes: local, global, hierarchical ##' First draft, not complete yet ##' -##' @title Paramater Data Assimilation using emulator on multiple sites ##' @param multi.settings = a pecan multi-settings list ##' ##' @return settings @@ -38,17 +37,104 @@ pda.emulator.ms <- function(multi.settings) { prior.stack <- vector("list", nsites) nstack <- vector("list", nsites) - ## -------------------------------------- Runs and build emulator ------------------------------------------ + ## -------------------------------------- Local runs and calibration ------------------------------------------ for(s in seq_along(multi.settings)){ # site runs - loop begin settings <- multi.settings[[s]] # NOTE: local flag is not used currently, prepearation for future use - # if this flag is TRUE, pda.emulator will not fit GP and run MCMC, + # if this flag is FALSE, pda.emulator will not fit GP and run MCMC, # but will run LHC ensembles, calculate SS and return settings list with saved SS paths etc. 
# this requires some re-arrangement in pda.emulator, # for now we will always run site-level calibration settings <- pda.emulator(settings, local = local) + multi.settings[[s]] <- settings } # site runs - loop end + #PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDAMS.xml') + + ## -------------------------------------- Global calibration -------------------------------------------------- + if(global){ # global - if begin + + # collect SS matrices + for(s in seq_along(multi.settings)){ + load(multi.settings[[s]]$assim.batch$ss.path) + SS.stack[[s]] <- SS + remove(SS) + } + + # double check indices and dimensions for multiple variables later + SS <- lapply(seq_along(SS.stack[[1]]), + function(l) matrix(sapply(SS.stack,`[[`,l), ncol = ncol(SS.stack[[1]][[l]]), byrow = FALSE)) + + # pass colnames using the first SS.stack (all should be the same) + for(c in seq_along(SS)){ + colnames(SS[[c]]) <- colnames(SS.stack[[1]][[c]]) + } + + ## Fit emulator on SS from multiple sites ## + + # start the clock + ptm.start <- proc.time() + + # prepare for parallelization + dcores <- parallel::detectCores() - 1 + ncores <- min(max(dcores, 1), length(SS)) + + cl <- parallel::makeCluster(ncores, type="FORK") + + ## Parallel fit for GPs + GPmodel <- parallel::parLapply(cl, SS, function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) + # GPmodel <- lapply(SS, function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) + + + parallel::stopCluster(cl) + + # Stop the clock + ptm.finish <- proc.time() - ptm.start + PEcAn.logger::logger.info(paste0("GP fitting took ", paste0(round(ptm.finish[3])), " seconds.")) + + + gp <- GPmodel + + + # start the clock + ptm.start <- proc.time() + + # prepare for parallelization + dcores <- parallel::detectCores() - 1 + ncores <- min(max(dcores, 1), settings$assim.batch$chain) + # + logger.setOutputFile(file.path(settings$outdir, "pda.log")) + # + cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(settings$outdir, "pda.log")) + + ## Sample posterior from emulator + mcmc.out <- parallel::parLapply(cl, 1:settings$assim.batch$chain, function(chain) { + mcmc.GP(gp = gp, ## Emulator(s) + x0 = init.list[[chain]], ## Initial conditions + nmcmc = settings$assim.batch$iter, ## Number of reps + rng = rng, ## range + format = "lin", ## "lin"ear vs "log" of LogLikelihood + mix = mix, ## Jump "each" dimension independently or update them "joint"ly + jmp0 = jmp.list[[chain]], ## Initial jump size + ar.target = settings$assim.batch$jump$ar.target, ## Target acceptance rate + priors = prior.fn.all$dprior[prior.ind.all], ## priors + settings = multi.settings[[s]], # this is just for checking llik functions downstream + run.block = TRUE, + n.of.obs = unlist(nstack), + llik.fn = llik.fn, + resume.list = resume.list[[chain]] + ) + }) + + parallel::stopCluster(cl) + + # Stop the clock + ptm.finish <- proc.time() - ptm.start + logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(settings$assim.batch$iter), " iterations.")) + + + } # global - if end + } \ No newline at end of file From 6a4ddd4a9087c93daa9054592aa46f62c9c4e515 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 4 Mar 2018 15:33:27 -0500 Subject: [PATCH 0014/2289] load previous objects from local calibration --- modules/assim.batch/R/pda.emulator.ms.R | 41 +++++++++++++++++++------ modules/assim.batch/R/pda.utils.R | 
12 ++++++++ 2 files changed, 43 insertions(+), 10 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 5229cf81faf..7b03e4ffd6e 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -53,6 +53,28 @@ pda.emulator.ms <- function(multi.settings) { #PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDAMS.xml') + ## -------------------------------- Prepare for Global and Hierarchical ----------------------------------------- + + + # we need some objects that are common to all calibrations + need_obj <- load_pda_history(workdir = multi.settings$outdir, + ensemble.id = multi.settings[[8]]$assim.batch$ensemble.id, + objects = c("init.list", "rng", "jmp.list", "prior.fn.all", "prior.ind.all", "llik.fn", "settings")) + + init.list <- need_obj$init.list + rng <- need_obj$rng + jmp.list <- need_obj$jmp.list + prior.fn.all <- need_obj$prior.fn.all + prior.ind.all <- need_obj$prior.ind.all + llik.fn <- need_obj$llik.fn + tmp.settings <- need_obj$settings + + + resume.list <- vector("list", multi.settings[[1]]$assim.batch$chain) + + + + ## -------------------------------------- Global calibration -------------------------------------------------- if(global){ # global - if begin @@ -97,32 +119,31 @@ pda.emulator.ms <- function(multi.settings) { gp <- GPmodel - # start the clock ptm.start <- proc.time() # prepare for parallelization dcores <- parallel::detectCores() - 1 - ncores <- min(max(dcores, 1), settings$assim.batch$chain) + ncores <- min(max(dcores, 1), multi.settings[[1]]$assim.batch$chain) # - logger.setOutputFile(file.path(settings$outdir, "pda.log")) + logger.setOutputFile(file.path(multi.settings$outdir, "pda.log")) # - cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(settings$outdir, "pda.log")) + cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(multi.settings$outdir, "pda.log")) ## Sample posterior from emulator - mcmc.out <- parallel::parLapply(cl, 1:settings$assim.batch$chain, function(chain) { + mcmc.out <- parallel::parLapply(cl, 1:multi.settings[[1]]$assim.batch$chain, function(chain) { mcmc.GP(gp = gp, ## Emulator(s) x0 = init.list[[chain]], ## Initial conditions - nmcmc = settings$assim.batch$iter, ## Number of reps + nmcmc = 100000, ## Number of reps rng = rng, ## range format = "lin", ## "lin"ear vs "log" of LogLikelihood - mix = mix, ## Jump "each" dimension independently or update them "joint"ly + mix = "joint", ## Jump "each" dimension independently or update them "joint"ly jmp0 = jmp.list[[chain]], ## Initial jump size - ar.target = settings$assim.batch$jump$ar.target, ## Target acceptance rate + ar.target = 0.3, ## Target acceptance rate priors = prior.fn.all$dprior[prior.ind.all], ## priors - settings = multi.settings[[s]], # this is just for checking llik functions downstream + settings = tmp.settings, # this is just for checking llik functions downstream run.block = TRUE, - n.of.obs = unlist(nstack), + n.of.obs = NULL, # need this for Gaussian likelihoods, keep it NULL for now llik.fn = llik.fn, resume.list = resume.list[[chain]] ) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index cdbd0196c0e..6ee6a1439ef 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -784,3 +784,15 @@ return_hyperpars <- function(assim.settings, inputs){ return(hyper.pars) } # return_hyperpars + + + +##' Helper function that loads history from previous PDA run, but 
returns only requested objects +##' @author Istem Fer +##' @export +load_pda_history <- function(workdir, ensemble.id, objects){ + load(paste0(workdir, "/history.pda", ensemble.id,".Rdata")) + alist <- lapply(objects, function(x) assign(x, get(x))) + names(alist) <- objects + return(alist) +} From a8b3964d6f833a31f938082266dd434f6f85bd63 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 08:12:27 -0500 Subject: [PATCH 0015/2289] need to handle dimensions differently while passing more gp than input length --- modules/assim.batch/R/pda.define.llik.R | 15 ++++++++++----- modules/emulator/R/minimize.GP.R | 3 ++- 2 files changed, 12 insertions(+), 6 deletions(-) diff --git a/modules/assim.batch/R/pda.define.llik.R b/modules/assim.batch/R/pda.define.llik.R index 04b21b280ed..a58e30a7d7b 100644 --- a/modules/assim.batch/R/pda.define.llik.R +++ b/modules/assim.batch/R/pda.define.llik.R @@ -171,7 +171,10 @@ pda.calc.llik <- function(pda.errors, llik.fn, llik.par) { LL.vec <- numeric(n.var) for (k in seq_len(n.var)) { - LL.vec[k] <- llik.fn[[k]](pda.errors[k], llik.par[[k]]) + + j <- (k-1) %% length(llik.fn) + 1 + + LL.vec[k] <- llik.fn[[j]](pda.errors[k], llik.par[[k]]) } LL.total <- sum(LL.vec) @@ -198,13 +201,15 @@ pda.calc.llik.par <-function(settings, n, error.stats, hyper.pars){ for(k in seq_along(error.stats)){ + j <- (k-1) %% length(settings$assim.batch$inputs) + 1 + llik.par[[k]] <- list() - if (settings$assim.batch$inputs[[k]]$likelihood == "Gaussian" | - settings$assim.batch$inputs[[k]]$likelihood == "multipGauss") { + if (settings$assim.batch$inputs[[j]]$likelihood == "Gaussian" | + settings$assim.batch$inputs[[j]]$likelihood == "multipGauss") { - llik.par[[k]]$par <- rgamma(1, hyper.pars[[k]]$parama + n[k]/2, - hyper.pars[[k]]$paramb + error.stats[k]/2) + llik.par[[k]]$par <- rgamma(1, hyper.pars[[j]]$parama + n[k]/2, + hyper.pars[[j]]$paramb + error.stats[k]/2) names(llik.par[[k]]$par) <- paste0("tau.", names(n)[k]) } diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R index 276173aef8c..e2f3f6f1a18 100644 --- a/modules/emulator/R/minimize.GP.R +++ b/modules/emulator/R/minimize.GP.R @@ -125,8 +125,9 @@ get_ss <- function(gp, xnew, pos.check) { for(igp in seq_along(gp)){ Y <- mlegp::predict.gp(gp[[igp]], newData = X[, 1:ncol(gp[[igp]]$X), drop=FALSE], se.fit = TRUE) + j <- (igp %% length(pos.check)) + 1 - if(pos.check[igp]){ + if(pos.check[j]){ if(Y$fit < 0){ return(-Inf) } From 4c1dd8ad20d1a1aa5a6630da287b982f0f4cc206 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 08:47:21 -0500 Subject: [PATCH 0016/2289] global calibration --- modules/assim.batch/R/pda.emulator.ms.R | 173 +++++++++++++++++------- 1 file changed, 125 insertions(+), 48 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 7b03e4ffd6e..80ba903aac6 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -57,68 +57,69 @@ pda.emulator.ms <- function(multi.settings) { # we need some objects that are common to all calibrations + obj_names <- c("init.list", "rng", "jmp.list", "prior.fn.all", "prior.ind.all", "llik.fn", + "settings", "prior.ind.all.ns", "sf", "prior.list", "n.param.orig", "pname", "prior.ind.orig", + "hyper.pars") + need_obj <- load_pda_history(workdir = multi.settings$outdir, ensemble.id = multi.settings[[8]]$assim.batch$ensemble.id, - objects = c("init.list", "rng", "jmp.list", "prior.fn.all", "prior.ind.all", "llik.fn", "settings")) - - init.list <- 
need_obj$init.list - rng <- need_obj$rng - jmp.list <- need_obj$jmp.list - prior.fn.all <- need_obj$prior.fn.all - prior.ind.all <- need_obj$prior.ind.all - llik.fn <- need_obj$llik.fn - tmp.settings <- need_obj$settings - + objects = obj_names) + + init.list <- need_obj$init.list + rng <- need_obj$rng + jmp.list <- need_obj$jmp.list + prior.list <- need_obj$prior.list + prior.fn.all <- need_obj$prior.fn.all + prior.ind.all <- need_obj$prior.ind.all + prior.ind.all.ns <- need_obj$prior.ind.all.ns + llik.fn <- need_obj$llik.fn + tmp.settings <- need_obj$settings + sf <- need_obj$sf + n.param.orig <- need_obj$n.param.orig + prior.ind.orig <- need_obj$prior.ind.orig + pname <- need_obj$pname + hyper.pars <- need_obj$hyper.pars resume.list <- vector("list", multi.settings[[1]]$assim.batch$chain) + ## Open database connection + if (settings$database$bety$write) { + con <- try(db.open(settings$database$bety), silent = TRUE) + if (is(con, "try-error")) { + con <- NULL + } else { + on.exit(db.close(con)) + } + } else { + con <- NULL + } + ## Get the workflow id + if ("workflow" %in% names(tmp.settings)) { + workflow.id <- tmp.settings$workflow$id + } else { + workflow.id <- -1 + } + + ## Create an ensemble id + tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) ## -------------------------------------- Global calibration -------------------------------------------------- if(global){ # global - if begin - # collect SS matrices + # collect GPs for(s in seq_along(multi.settings)){ - load(multi.settings[[s]]$assim.batch$ss.path) - SS.stack[[s]] <- SS - remove(SS) - } - - # double check indices and dimensions for multiple variables later - SS <- lapply(seq_along(SS.stack[[1]]), - function(l) matrix(sapply(SS.stack,`[[`,l), ncol = ncol(SS.stack[[1]][[l]]), byrow = FALSE)) - - # pass colnames using the first SS.stack (all should be the same) - for(c in seq_along(SS)){ - colnames(SS[[c]]) <- colnames(SS.stack[[1]][[c]]) + + load(multi.settings[[s]]$assim.batch$emulator.path) + gp.stack[[s]] <- gp + remove(gp) + } - ## Fit emulator on SS from multiple sites ## - - # start the clock - ptm.start <- proc.time() - - # prepare for parallelization - dcores <- parallel::detectCores() - 1 - ncores <- min(max(dcores, 1), length(SS)) - - cl <- parallel::makeCluster(ncores, type="FORK") - - ## Parallel fit for GPs - GPmodel <- parallel::parLapply(cl, SS, function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) - # GPmodel <- lapply(SS, function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) - - - parallel::stopCluster(cl) - - # Stop the clock - ptm.finish <- proc.time() - ptm.start - PEcAn.logger::logger.info(paste0("GP fitting took ", paste0(round(ptm.finish[3])), " seconds.")) - - - gp <- GPmodel - + gp <- unlist(gp.stack, recursive = FALSE) + + # start the clock ptm.start <- proc.time() @@ -145,6 +146,7 @@ pda.emulator.ms <- function(multi.settings) { run.block = TRUE, n.of.obs = NULL, # need this for Gaussian likelihoods, keep it NULL for now llik.fn = llik.fn, + hyper.pars = hyper.pars, resume.list = resume.list[[chain]] ) }) @@ -155,6 +157,81 @@ pda.emulator.ms <- function(multi.settings) { ptm.finish <- proc.time() - ptm.start logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(settings$assim.batch$iter), " iterations.")) + mcmc.samp.list <- sf.samp.list <- list() + + for (c 
in seq_len(tmp.settings$assim.batch$chain)) { + + m <- matrix(NA, nrow = nrow(mcmc.out[[c]]$mcmc.samp), ncol = length(prior.ind.all.ns)) + + if(!is.null(sf)){ + sfm <- matrix(NA, nrow = nrow(mcmc.out[[c]]$mcmc.samp), ncol = length(sf)) + # give colnames but the order can change, we'll overwrite anyway + colnames(sfm) <- paste0(sf, "_SF") + } + ## Set the prior functions back to work with actual parameter range + + prior.all <- do.call("rbind", prior.list) + prior.fn.all <- pda.define.prior.fn(prior.all) + + # retrieve rownames separately to get rid of var_name* structures + prior.all.rownames <- unlist(sapply(prior.list, rownames)) + + sc <- 1 + for (i in seq_along(prior.ind.all.ns)) { + sf.check <- prior.all.rownames[prior.ind.all.ns][i] + idx <- grep(sf.check, rownames(prior.all)[prior.ind.all]) + if(any(grepl(sf.check, sf))){ + + m[, i] <- eval(prior.fn.all$qprior[prior.ind.all.ns][[i]], + list(p = mcmc.out[[c]]$mcmc.samp[, idx])) + if(sc <= length(sf)){ + sfm[, sc] <- mcmc.out[[c]]$mcmc.samp[, idx] + colnames(sfm)[sc] <- paste0(sf.check, "_SF") + sc <- sc + 1 + } + + }else{ + m[, i] <- mcmc.out[[c]]$mcmc.samp[, idx] + } + } + + colnames(m) <- prior.all.rownames[prior.ind.all.ns] + mcmc.samp.list[[c]] <- m + + if(!is.null(sf)){ + sf.samp.list[[c]] <- sfm + } + + resume.list[[c]] <- mcmc.out[[c]]$chain.res + } + + # Separate each PFT's parameter samples (and bias term) to their own list + mcmc.param.list <- list() + ind <- 0 + for (i in seq_along(n.param.orig)) { + mcmc.param.list[[i]] <- lapply(mcmc.samp.list, function(x) x[, (ind + 1):(ind + n.param.orig[i]), drop = FALSE]) + ind <- ind + n.param.orig[i] + } + + # Collect non-model parameters in their own list + if(length(mcmc.param.list) > length(tmp.settings$pfts)) { + # means bias parameter was at least one bias param in the emulator + # it will be the last list in mcmc.param.list + # there will always be at least one tau for bias + for(c in seq_len(tmp.settings$assim.batch$chain)){ + mcmc.param.list[[length(mcmc.param.list)]][[c]] <- cbind( mcmc.param.list[[length(mcmc.param.list)]][[c]], + mcmc.out[[c]]$mcmc.par) + } + + } else if (ncol(mcmc.out[[1]]$mcmc.par) != 0){ + # means no bias param but there are still other params, e.g. 
Gaussian + mcmc.param.list[[length(mcmc.param.list)+1]] <- list() + for(c in seq_len(tmp.settings$assim.batch$chain)){ + mcmc.param.list[[length(mcmc.param.list)]][[c]] <- mcmc.out[[c]]$mcmc.par + } + } + + tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig) } # global - if end From db26119073f169530ac5a3d5085e9cbfa3f2361a Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 09:47:48 -0500 Subject: [PATCH 0017/2289] enable adding a suffix to filenames --- modules/assim.batch/R/pda.emulator.ms.R | 11 +++++---- modules/assim.batch/R/pda.postprocess.R | 24 ++++++++++--------- .../vignettes/MultiSitePDAVignette.Rmd | 6 ++++- 3 files changed, 24 insertions(+), 17 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 80ba903aac6..227791caeac 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -101,13 +101,14 @@ pda.emulator.ms <- function(multi.settings) { workflow.id <- -1 } - ## Create an ensemble id - tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) - - + ## -------------------------------------- Global calibration -------------------------------------------------- if(global){ # global - if begin + ## Get an ensemble id for global calibration + tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) + + # collect GPs for(s in seq_along(multi.settings)){ @@ -231,7 +232,7 @@ pda.emulator.ms <- function(multi.settings) { } } - tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig) + tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_global") } # global - if end diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index 79da0a28d34..640904d4454 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -7,7 +7,7 @@ ##' ##' @author Ryan Kelly, Istem Fer ##' @export -pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior.ind) { +pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior.ind, sffx = NULL) { # prepare for non-model params if(length(mcmc.param.list) > length(settings$pfts)){ @@ -28,7 +28,7 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. par.file.name <- NULL } - params.subset <- pda.plot.params(settings, mcmc.param.list, prior.ind, par.file.name) + params.subset <- pda.plot.params(settings, mcmc.param.list, prior.ind, par.file.name, sffx) for (i in seq_along(settings$pfts)) { @@ -38,7 +38,7 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id, - ".Rdata")) + sffx, ".Rdata")) params.pft <- params.subset[[i]] save(params.pft, file = filename.mcmc) @@ -64,9 +64,11 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. 
post.distns <- PEcAn.MA::approx.posterior(trait.mcmc = params.subset[[i]], priors = prior[[i]], outdir = settings$pfts[[i]]$outdir, - filename.flag = paste0(".pda.", settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id)) + filename.flag = paste0(".pda.", settings$pfts[[i]]$name, "_", + settings$assim.batch$ensemble.id, sffx)) filename <- file.path(settings$pfts[[i]]$outdir, - paste0("post.distns.pda.", settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id, ".Rdata")) + paste0("post.distns.pda.", settings$pfts[[i]]$name, "_", + settings$assim.batch$ensemble.id, sffx, ".Rdata")) save(post.distns, file = filename) dbfile.insert(dirname(filename), basename(filename), "Posterior", posteriorid, con) @@ -97,7 +99,7 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. paste0("trait.mcmc.pda.", settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id, - ".Rdata")) + sffx, ".Rdata")) save(trait.mcmc, file = filename) dbfile.insert(dirname(filename), basename(filename), "Posterior", posteriorid, con) } #end of loop over PFTs @@ -119,7 +121,7 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. ##' ##' @author Ryan Kelly, Istem Fer ##' @export -pda.plot.params <- function(settings, mcmc.param.list, prior.ind, par.file.name = NULL) { +pda.plot.params <- function(settings, mcmc.param.list, prior.ind, par.file.name = NULL, sffx) { params.subset <- list() @@ -150,12 +152,12 @@ pda.plot.params <- function(settings, mcmc.param.list, prior.ind, par.file.name paste0("mcmc.diagnostics.pda.", settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id, - ".pdf"))) + sffx, ".pdf"))) } else { pdf(file.path(par.file.name, paste0("mcmc.diagnostics.pda.par_", settings$assim.batch$ensemble.id, - ".pdf"))) + sffx, ".pdf"))) } layout(matrix(c(1, 2, 3, 4, 5, 6), ncol = 2, byrow = TRUE)) @@ -181,11 +183,11 @@ pda.plot.params <- function(settings, mcmc.param.list, prior.ind, par.file.name paste0("mcmc.diagnostics.pda.", settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id, - ".txt")) + sffx, ".txt")) } else { filename.mcmc.temp <- file.path(par.file.name, paste0("mcmc.diagnostics.pda.par_", - settings$assim.batch$ensemble.id,".txt")) + settings$assim.batch$ensemble.id, sffx, ".txt")) } diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 0e45d232ba7..665d6639d89 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -637,4 +637,8 @@ multi.settings <- read.settings("pecan.HBPDA.xml") pda.emulator.ms(multi.settings) ``` -## The workflow for HB-PDA \ No newline at end of file +## The workflow for HB-PDA + +HB-PDA workflow for multiple sites starts with fitting models locally. This is done by looping over the PEcAn multi-settings list using the `pda.emulator` function. During this step, site-level runs, GP fitting and MCMC are done. Components of these runs are saved and will be used in global and hierarhical fitting. + +In global calibration, we estimate posterior using all site-level GPs at once at each iteration. Proposed values are plugged-in all GPs, and accepted/rejected according to all estimated likelihoods. 
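+
+In other words, the joint log-likelihood of a proposed parameter vector is the sum of the emulator-based log-likelihoods over sites. A conceptual sketch of that evaluation (`llik.par` here stands in for the site-specific likelihood parameters; the actual computation happens inside `mcmc.GP` on the combined list of site GPs):
+
+```
+joint.llik <- function(xnew, gp.stack, llik.fn, llik.par, pos.check) {
+  ll <- 0
+  for (s in seq_along(gp.stack)) {
+    # predicted sufficient statistics from site s's emulator(s)
+    SS <- get_ss(gp.stack[[s]], xnew, pos.check)
+    ll <- ll + pda.calc.llik(SS, llik.fn, llik.par[[s]])
+  }
+  return(ll)
+}
+```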
\ No newline at end of file From 7a8e2fd43c3eb69f69e6599cdb7b7e634260e2ee Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 11:03:03 -0500 Subject: [PATCH 0018/2289] start hierarchical --- modules/assim.batch/R/pda.emulator.ms.R | 235 +++++++++++++++++- .../vignettes/MultiSitePDAVignette.Rmd | 4 +- 2 files changed, 226 insertions(+), 13 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 227791caeac..5e00dd05cd2 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -81,6 +81,15 @@ pda.emulator.ms <- function(multi.settings) { hyper.pars <- need_obj$hyper.pars resume.list <- vector("list", multi.settings[[1]]$assim.batch$chain) + + # collect GPs + for(s in seq_along(multi.settings)){ + + load(multi.settings[[s]]$assim.batch$emulator.path) + gp.stack[[s]] <- gp + remove(gp) + + } ## Open database connection if (settings$database$bety$write) { @@ -108,19 +117,8 @@ pda.emulator.ms <- function(multi.settings) { ## Get an ensemble id for global calibration tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) - - # collect GPs - for(s in seq_along(multi.settings)){ - - load(multi.settings[[s]]$assim.batch$emulator.path) - gp.stack[[s]] <- gp - remove(gp) - - } - gp <- unlist(gp.stack, recursive = FALSE) - # start the clock ptm.start <- proc.time() @@ -236,4 +234,217 @@ pda.emulator.ms <- function(multi.settings) { } # global - if end -} \ No newline at end of file + ## -------------------------------------- Hierarchical MCMC ------------------------------------------ + if(hierarchical){ # hierarchical - if begin + + ## Get an ensemble id for hierarchical calibration + tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) + + + ########### hierarchical MCMC function with Gibbs ############## + + + hier.mcmc <- function(settings, gp.stack, nstack, nmcmc, rng, + global.prior.fn.all, mu0, jmp0, prior.ind.all, nsites){ + + nparam <- length(prior.ind.all) + + pos.check <- sapply(settings$assim.batch$inputs, `[[`, "ss.positive") + + if(length(unlist(pos.check)) == 0){ + # if not passed from settings assume none + pos.check <- rep(FALSE, length(settings$assim.batch$inputs)) + }else if(length(unlist(pos.check)) != length(settings$assim.batch$inputs)){ + # maybe one provided, but others are forgotten + # check which ones are provided in settings + from.settings <- sapply(seq_along(pos.check), function(x) !is.null(pos.check[[x]])) + tmp.check <- rep(FALSE, length(settings$assim.batch$inputs)) + # replace those with the values provided in the settings + tmp.check[from.settings] <- as.logical(unlist(pos.check)) + pos.check <- tmp.check + }else{ + pos.check <- as.logical(pos.check) + } + + ################################################################ + # + # mu_site : site level parameters (nsite x nparam) + # tau_site : site level precision (nsite x nsite) + # mu_global : global parameters (nparam) + # tau_global : global precision matrix (nparam x nparam) + # + ################################################################ + + + # mu_f + mu_f <- unlist(mu0) + + + # initialize jcov.arr (jump variances per site) + jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) + for(j in seq_len(nsites)) jcov.arr[,,j] <- diag(jmp0) + + # initialize mu_global (nparam) + mu_global <- mvtnorm::rmvnorm(1, mu_f, diag(jmp0)) + + + # initialize tau_global (nparam x nparam) + tau_global <- diag(1, nparam) + + # initialize 
mu_site (nsite x nparam) + mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) + mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= n.param) + for(ns in 1:nsites){ + repeat{ + mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean + check.that <- sapply(seq_len(nparam), function(x) { + chk <- (mu_site_curr[ns,x] > rng[x, 1] & mu_site_curr[ns,x] < rng[x, 2]) + return(chk)}) + + if(all(check.that)) break + } + } + + # values for each site will be accepted/rejected in themselves + currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) + # force it to be nvar x nsites matrix + currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) + currllp <- lapply(seq_len(nsites), function(v) PEcAn.assim.batch::pda.calc.llik.par(settings, nstack[[v]], currSS[,v])) + + # storage + mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) + # tau_site_samp <- array(NA_real_, c(nmcmc, nsites, nsites)) + mu_global_samp <- matrix(NA_real_, nrow = nmcmc, ncol= nparam) + tau_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) + + musite.accept.count <- rep(0, nsites) + muglobal.accept.count <- 0 + tauglobal.accept.count <- 0 + + for(g in seq_len(nmcmc)){ + + # adapt + if ((g > 2) && ((g - 1) %% settings$assim.batch$jump$adapt == 0)) { + + # update site level jvars + params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] + #colnames(params.recent) <- names(x0) + jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], accept.count[v], params.recent[,,v])) + jcov.arr <- abind(jcov.list, along=3) + musite.accept.count <- rep(0, nsites) # Reset counter + + # update global jvars + params.recent <- mu_global_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] + #colnames(params.recent) <- names(x0) + jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], accept.count[v], params.recent[,,v])) + jcov.arr <- abind(jcov.list, along=3) + musite.accept.count <- rep(0, nsites) # Reset counter + + } + + + + ######################################## + # update tau_global | mu_global, mu_site + # + # tau_global ~ W(tau_df, tau_sigma) + # + # tau_global : error precision matrix + + # sum of pairwise deviation products + sum_term <- matrix(0, ncol = nparam, nrow = nparam) + for(i in seq_len(nsites)){ + pairwise_deviation <- as.matrix(mu_site_curr[i,] - mu_global) + sum_term <- sum_term + t(pairwise_deviation) %*% pairwise_deviation + } + + tau_sigma <- solve(V_inv + sum_term) + + # update tau + tau_global <- rWishart(1, df = tau_df, Sigma = tau_sigma)[,,1] # site precision + sigma_global <- solve(tau_global) # site covariance, new prior sigma to be used below for prior prob. calc. + + + ######################################## + # update mu_global | mu_site, tau_global + # + # mu_global ~ MVN(global_mu, global_Sigma) + # + # mu_global : global parameters + # global_mu : precision weighted average between the data (mu_site) and prior mean (mu_f) + # global_Sigma : sum of mu_site and mu_f precision + + + global_Sigma <- solve(P_f_inv + (nsites * tau_global)) + + global_mu <- global_Sigma %*% ((nsites * tau_global %*% colMeans(mu_site_curr)) + (P_f_inv %*% mu_f)) + + mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. 
+ + + # site level M-H + ######################################## + + # propose mu_site + + for(ns in seq_len(nsites)){ + repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation + mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,s]) + check.that <- sapply(seq_len(sum(n.param)), function(x) { + chk <- (mu_site_new[ns,x] > rng[x, 1] & mu_site_new[ns,x] < rng[x, 2]) + return(chk)}) + + if(all(check.that)) break + } + } + + + # re-predict current SS + currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,])) + + # calculate posterior + currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) + # use new priors for calculating prior probability + currPrior <- mvtnorm::dmvnorm(mu_site_curr, mu_global, sigma_global, log = TRUE) + currPost <- currLL + currPrior + + + # predict new SS + newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,])) + + # calculate posterior + newllp <- lapply(seq_len(nsites), function(v) pda.calc.llik.par(settings, nstack[[v]], newSS[,v])) + newLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(newSS[,v], llik.fn, newllp[[v]])) + # use new priors for calculating prior probability + newPrior <- dmvnorm(mu_site_new, mu_global, sigma_global, log = TRUE) + newPost <- newLL + newPrior + + ar <- is.accepted(currPost, newPost) + mu_site_curr[ar, ] <- mu_site_new[ar, ] + accept.count <- accept.count + ar + + + mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)] + # tau_site_samp <- + mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs + tau_global_samp[g,,] <- tau_global # 100% acceptance for gibbs + + } + + return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp)) + } # hier.mcmc + + + ## Sample posterior from emulator + mcmc.out <- parallel::parLapply(cl, seq_len(settings$assim.batch$chain), function(chain) { + hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, nstack = NULL, + nmcmc = tmp.settings$assim.batch$iter, rng = rng, + global.prior.fn.all = global.prior.fn.all, + mu0 = init.list[[chain]], jmp0 = jmp.list[[chain]], + prior.ind.all = prior.ind.all, nsites = nsites) + }) + + } # hierarchical - if end + +} + diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 665d6639d89..5b78b5c8a2b 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -641,4 +641,6 @@ pda.emulator.ms(multi.settings) HB-PDA workflow for multiple sites starts with fitting models locally. This is done by looping over the PEcAn multi-settings list using the `pda.emulator` function. During this step, site-level runs, GP fitting and MCMC are done. Components of these runs are saved and will be used in global and hierarhical fitting. -In global calibration, we estimate posterior using all site-level GPs at once at each iteration. Proposed values are plugged-in all GPs, and accepted/rejected according to all estimated likelihoods. \ No newline at end of file +In global calibration, we estimate posterior using all site-level GPs at once at each iteration. Proposed values are plugged-in all GPs, and accepted/rejected according to all estimated likelihoods. + +Hierarhical calibration, uses a similar function to `mcmc.GP` but modified for hierarchihcal fitting, called `hier.mcmc`. 
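In sketch form, the hierarchical model that `hier.mcmc` samples can be summarized as follows (notation follows the comments in `pda.emulator.ms.R`; this is an illustrative outline, not runnable code):

```
mu_global   ~ MVN(mu_f, P_f)                     # across-site mean, Gibbs
tau_global  ~ Wishart(tau_df, tau_V)             # across-site precision, Gibbs
mu_site[s,] ~ MVN(mu_global, solve(tau_global))  # parameters of site s, M-H
data[s]     ~ site-level likelihood via the emulated sufficient statistics
```

The global quantities are drawn from closed-form conditionals, while each site's parameter vector is updated by Metropolis-Hastings against that site's Gaussian process emulator.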
\ No newline at end of file From 71585509003678559f68787172a4e6c86318b614 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 12:11:22 -0500 Subject: [PATCH 0019/2289] hier.mcmc function --- modules/assim.batch/R/pda.emulator.ms.R | 87 ++++++++++++++++--------- 1 file changed, 58 insertions(+), 29 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 5e00dd05cd2..8bc407ce2df 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -245,10 +245,8 @@ pda.emulator.ms <- function(multi.settings) { hier.mcmc <- function(settings, gp.stack, nstack, nmcmc, rng, - global.prior.fn.all, mu0, jmp0, prior.ind.all, nsites){ - - nparam <- length(prior.ind.all) - + mu0, jmp0, nparam, nsites){ + pos.check <- sapply(settings$assim.batch$inputs, `[[`, "ss.positive") if(length(unlist(pos.check)) == 0){ @@ -276,20 +274,42 @@ pda.emulator.ms <- function(multi.settings) { ################################################################ - # mu_f - mu_f <- unlist(mu0) - - # initialize jcov.arr (jump variances per site) - jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) - for(j in seq_len(nsites)) jcov.arr[,,j] <- diag(jmp0) + ###### priors + # mu_f : prior mean vector + # P_f : prior covariance matrix + # P_f_inv : prior precision matrix + # + # mu_global ~ MVN (mu_f, P_f) + # + + mu_f <- unlist(mu0) + P_f <- diag(jmp0) + P_f_inv <- solve(P_f) # initialize mu_global (nparam) - mu_global <- mvtnorm::rmvnorm(1, mu_f, diag(jmp0)) + mu_global <- mvtnorm::rmvnorm(1, mu_f, P_f) + + ###### priors cont'd + # tau_df : Wishart degrees of freedom + # tau_V : Wishart scale matrix + # tau_global ~ W (tau_df, tau_scale) + # sigma_global <- solve(tau_global) + # + # initialize tau_global + tau_df <- nsites + nparam + 1 + tau_V <- diag(1, nparam) + V_inv <- solve(tau_V) # will be used in gibbs updating # initialize tau_global (nparam x nparam) - tau_global <- diag(1, nparam) + tau_global <- rWishart(1, tau_df, tau_V)[,,1] + + + # initialize jcov.arr (jump variances per site) + jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) + for(j in seq_len(nsites)) jcov.arr[,,j] <- diag(jmp0) + # initialize mu_site (nsite x nparam) mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) @@ -318,28 +338,21 @@ pda.emulator.ms <- function(multi.settings) { tau_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) musite.accept.count <- rep(0, nsites) - muglobal.accept.count <- 0 - tauglobal.accept.count <- 0 + + ########################## Start MCMC ######################## for(g in seq_len(nmcmc)){ - # adapt + # jump adaptation step if ((g > 2) && ((g - 1) %% settings$assim.batch$jump$adapt == 0)) { # update site level jvars params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] #colnames(params.recent) <- names(x0) jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], accept.count[v], params.recent[,,v])) - jcov.arr <- abind(jcov.list, along=3) - musite.accept.count <- rep(0, nsites) # Reset counter - - # update global jvars - params.recent <- mu_global_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] - #colnames(params.recent) <- names(x0) - jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], accept.count[v], params.recent[,,v])) - jcov.arr <- abind(jcov.list, along=3) + jcov.arr <- abind::abind(jcov.list, along=3) musite.accept.count <- rep(0, nsites) # Reset counter - + } @@ -364,6 +377,7 @@ 
pda.emulator.ms <- function(multi.settings) { tau_global <- rWishart(1, df = tau_df, Sigma = tau_sigma)[,,1] # site precision sigma_global <- solve(tau_global) # site covariance, new prior sigma to be used below for prior prob. calc. + ######################################## # update mu_global | mu_site, tau_global @@ -400,7 +414,8 @@ pda.emulator.ms <- function(multi.settings) { # re-predict current SS - currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,])) + currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) + currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) # calculate posterior currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) @@ -410,7 +425,8 @@ pda.emulator.ms <- function(multi.settings) { # predict new SS - newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,])) + newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,], pos.check)) + newSS <- matrix(newSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) # calculate posterior newllp <- lapply(seq_len(nsites), function(v) pda.calc.llik.par(settings, nstack[[v]], newSS[,v])) @@ -421,7 +437,7 @@ pda.emulator.ms <- function(multi.settings) { ar <- is.accepted(currPost, newPost) mu_site_curr[ar, ] <- mu_site_new[ar, ] - accept.count <- accept.count + ar + musite.accept.count <- musite.accept.count + ar mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)] @@ -434,16 +450,29 @@ pda.emulator.ms <- function(multi.settings) { return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp)) } # hier.mcmc + # prepare for parallelization + dcores <- parallel::detectCores() - 1 + ncores <- min(max(dcores, 1), settings$assim.batch$chain) + # + logger.setOutputFile(file.path(settings$outdir, "pda.log")) + # + cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(settings$outdir, "pda.log")) + ## Sample posterior from emulator mcmc.out <- parallel::parLapply(cl, seq_len(settings$assim.batch$chain), function(chain) { hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, nstack = NULL, nmcmc = tmp.settings$assim.batch$iter, rng = rng, - global.prior.fn.all = global.prior.fn.all, mu0 = init.list[[chain]], jmp0 = jmp.list[[chain]], - prior.ind.all = prior.ind.all, nsites = nsites) + nparam = length(prior.ind.all), nsites = nsites) }) + parallel::stopCluster(cl) + + # Stop the clock + ptm.finish <- proc.time() - ptm.start + logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) + } # hierarchical - if end } From 6c738d087f21f99fd3f55698ff96f3cfbeeec0b2 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 15:17:39 -0500 Subject: [PATCH 0020/2289] add function to sort hierarchical MCMC samples --- modules/assim.batch/R/pda.emulator.ms.R | 48 +++++++++------- modules/assim.batch/R/pda.postprocess.R | 73 +++++++++++++++++++++++++ 2 files changed, 102 insertions(+), 19 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 8bc407ce2df..51fc59f2b0a 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -92,8 +92,8 @@ pda.emulator.ms <- function(multi.settings) { } ## Open database connection - if (settings$database$bety$write) { - con <- 
try(db.open(settings$database$bety), silent = TRUE) + if (multi.settings$database$bety$write) { + con <- try(db.open(multi.settings$database$bety), silent = TRUE) if (is(con, "try-error")) { con <- NULL } else { @@ -287,8 +287,12 @@ pda.emulator.ms <- function(multi.settings) { P_f <- diag(jmp0) P_f_inv <- solve(P_f) - # initialize mu_global (nparam) - mu_global <- mvtnorm::rmvnorm(1, mu_f, P_f) + # # initialize mu_global (nparam) + repeat{ + mu_global <- mvtnorm::rmvnorm(1, mu_f, P_f) + check.that <- (mu_global > rng[, 1] & mu_global < rng[, 2]) + if(all(check.that)) break + } ###### priors cont'd # tau_df : Wishart degrees of freedom @@ -317,10 +321,7 @@ pda.emulator.ms <- function(multi.settings) { for(ns in 1:nsites){ repeat{ mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean - check.that <- sapply(seq_len(nparam), function(x) { - chk <- (mu_site_curr[ns,x] > rng[x, 1] & mu_site_curr[ns,x] < rng[x, 2]) - return(chk)}) - + check.that <- (mu_site_curr[ns,] > rng[, 1] & mu_site_curr[ns,] < rng[, 2]) if(all(check.that)) break } } @@ -389,12 +390,16 @@ pda.emulator.ms <- function(multi.settings) { # global_Sigma : sum of mu_site and mu_f precision - global_Sigma <- solve(P_f_inv + (nsites * tau_global)) + global_Sigma <- P_f_inv + (nsites * tau_global) - global_mu <- global_Sigma %*% ((nsites * tau_global %*% colMeans(mu_site_curr)) + (P_f_inv %*% mu_f)) - - mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. + global_mu <- solve(global_Sigma) %*% ((P_f_inv %*% mu_f) + tau_global %*% colSums(mu_site_curr) ) + #mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. + repeat{ + mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) + check.that <- (mu_global > rng[, 1] & mu_global < rng[, 2]) + if(all(check.that)) break + } # site level M-H ######################################## @@ -403,11 +408,8 @@ pda.emulator.ms <- function(multi.settings) { for(ns in seq_len(nsites)){ repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,s]) - check.that <- sapply(seq_len(sum(n.param)), function(x) { - chk <- (mu_site_new[ns,x] > rng[x, 1] & mu_site_new[ns,x] < rng[x, 2]) - return(chk)}) - + mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) + check.that <- (mu_site_new[ns,] > rng[, 1] & mu_site_new[ns,] < rng[, 2]) if(all(check.that)) break } } @@ -450,12 +452,15 @@ pda.emulator.ms <- function(multi.settings) { return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp)) } # hier.mcmc + # start the clock + ptm.start <- proc.time() + # prepare for parallelization dcores <- parallel::detectCores() - 1 ncores <- min(max(dcores, 1), settings$assim.batch$chain) - # + logger.setOutputFile(file.path(settings$outdir, "pda.log")) - # + cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(settings$outdir, "pda.log")) @@ -473,6 +478,11 @@ pda.emulator.ms <- function(multi.settings) { ptm.finish <- proc.time() - ptm.start logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) + + # Collect global params in their own list and postprocess + mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, 
prior.fn.all) + tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") + } # hierarchical - if end } diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index 640904d4454..3364e19b873 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -251,3 +251,76 @@ write_sf_posterior <- function(sf.samp.list, sf.prior, sf.filename){ return(sf.post.distns) } # write_sf_posterior + + +##' Function to sort Hierarchical MCMC samples +##' @export +pda.sort.params <- function(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, + sf = NULL, n.param.orig, prior.list, prior.fn.all){ + + mcmc.samp.list <- list() + + for (c in seq_len(settings$assim.batch$chain)) { + + if(sub.sample == "mu_global_samp"){ + m <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]]), ncol = length(prior.ind.all.ns)) + }else if(sub.sample == "mu_site_samp"){ + m <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]][,,ns]), ncol = length(prior.ind.all.ns)) + } + + # TODO: make this sf compatible for multi site + if(!is.null(sf)){ + sfm <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]]), ncol = length(sf)) + } + + # TODO: get back to this when scaling factor is used + # # retrieve rownames separately to get rid of var_name* structures + prior.all.rownames <- unlist(sapply(prior.list, rownames)) + + sc <- 1 + for (i in seq_along(prior.ind.all.ns)) { + sf.check <- prior.all.rownames[prior.ind.all.ns][i] + idx <- grep(sf.check, rownames(prior.all)[prior.ind.all]) + if(any(grepl(sf.check, sf))){ + + m[, i] <- eval(prior.fn.all$qprior[prior.ind.all.ns][[i]], + list(p = mcmc.out[[c]][[sub.sample]][, idx])) + + + if(sc <= length(sf)){ + sfm[, sc] <- mcmc.out[[c]][[sub.sample]][, idx] + sc <- sc + 1 + } + + }else{ + + if(sub.sample == "mu_global_samp"){ + m[, i] <- mcmc.out[[c]][[sub.sample]][, idx] + }else if(sub.sample == "mu_site_samp"){ + m[, i] <- mcmc.out[[c]][[sub.sample]][, idx, ns] + } + + + } + } + + colnames(m) <- prior.all.rownames[prior.ind.all.ns] + mcmc.samp.list[[c]] <- m + + if(!is.null(sf)){ + colnames(sfm) <- paste0(sf, "_SF") + sf.samp.list[[c]] <- sfm + } + + } + + # Separate each PFT's parameter samples (and bias term) to their own list + mcmc.param.list <- list() + ind <- 0 + for (i in seq_along(n.param.orig)) { + mcmc.param.list[[i]] <- lapply(mcmc.samp.list, function(x) x[, (ind + 1):(ind + n.param.orig[i]), drop = FALSE]) + ind <- ind + n.param.orig[i] + } + + return(mcmc.param.list) +} # pda.sort.params From 1acca2f59b37b9640712a85cd8b69cf431501313 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Mar 2018 17:04:32 -0500 Subject: [PATCH 0021/2289] roxygenise --- modules/assim.batch/NAMESPACE | 3 +++ modules/assim.batch/R/pda.emulator.ms.R | 7 +++--- modules/assim.batch/man/load_pda_history.Rd | 14 ++++++++++++ modules/assim.batch/man/pda.emulator.Rd | 2 +- modules/assim.batch/man/pda.emulator.ms.Rd | 22 +++++++++++++++++++ modules/assim.batch/man/pda.plot.params.Rd | 3 ++- modules/assim.batch/man/pda.postprocess.Rd | 3 ++- modules/assim.batch/man/pda.sort.params.Rd | 13 +++++++++++ .../vignettes/MultiSitePDAVignette.Rmd | 2 +- 9 files changed, 62 insertions(+), 7 deletions(-) create mode 100644 modules/assim.batch/man/load_pda_history.Rd create mode 100644 modules/assim.batch/man/pda.emulator.ms.Rd create mode 100644 modules/assim.batch/man/pda.sort.params.Rd diff --git a/modules/assim.batch/NAMESPACE 
b/modules/assim.batch/NAMESPACE index 630ac2f28ba..c699eae254e 100644 --- a/modules/assim.batch/NAMESPACE +++ b/modules/assim.batch/NAMESPACE @@ -8,6 +8,7 @@ export(gelman_diag_mw) export(getBurnin) export(load.L2Ameriflux.cf) export(load.pda.data) +export(load_pda_history) export(makeMCMCList) export(pda.adjust.jumps) export(pda.adjust.jumps.bs) @@ -21,6 +22,7 @@ export(pda.create.ensemble) export(pda.define.llik.fn) export(pda.define.prior.fn) export(pda.emulator) +export(pda.emulator.ms) export(pda.generate.knots) export(pda.generate.sf) export(pda.get.model.output) @@ -35,6 +37,7 @@ export(pda.plot.params) export(pda.postprocess) export(pda.settings) export(pda.settings.bt) +export(pda.sort.params) export(return.bias) export(return_hyperpars) export(runModule.assim.batch) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 51fc59f2b0a..096406f3212 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -390,9 +390,9 @@ pda.emulator.ms <- function(multi.settings) { # global_Sigma : sum of mu_site and mu_f precision - global_Sigma <- P_f_inv + (nsites * tau_global) + global_Sigma <- solve(P_f_inv + (nsites * tau_global)) - global_mu <- solve(global_Sigma) %*% ((P_f_inv %*% mu_f) + tau_global %*% colSums(mu_site_curr) ) + global_mu <- global_Sigma %*% ((P_f_inv %*% mu_f) + tau_global %*% colSums(mu_site_curr) ) #mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. repeat{ @@ -449,7 +449,8 @@ pda.emulator.ms <- function(multi.settings) { } - return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp)) + return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp, + musite.accept.count = musite.accept.count)) } # hier.mcmc # start the clock diff --git a/modules/assim.batch/man/load_pda_history.Rd b/modules/assim.batch/man/load_pda_history.Rd new file mode 100644 index 00000000000..fcdccf90fef --- /dev/null +++ b/modules/assim.batch/man/load_pda_history.Rd @@ -0,0 +1,14 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.utils.R +\name{load_pda_history} +\alias{load_pda_history} +\title{Helper function that loads history from previous PDA run, but returns only requested objects} +\usage{ +load_pda_history(workdir, ensemble.id, objects) +} +\description{ +Helper function that loads history from previous PDA run, but returns only requested objects +} +\author{ +Istem Fer +} diff --git a/modules/assim.batch/man/pda.emulator.Rd b/modules/assim.batch/man/pda.emulator.Rd index 178e8545504..e71b65a5297 100644 --- a/modules/assim.batch/man/pda.emulator.Rd +++ b/modules/assim.batch/man/pda.emulator.Rd @@ -7,7 +7,7 @@ pda.emulator(settings, external.data = NULL, external.priors = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, ar.target = NULL, - jvar = NULL, n.knot = NULL) + jvar = NULL, n.knot = NULL, local = TRUE) } \arguments{ \item{settings}{= a pecan settings list} diff --git a/modules/assim.batch/man/pda.emulator.ms.Rd b/modules/assim.batch/man/pda.emulator.ms.Rd new file mode 100644 index 00000000000..9843eca56a6 --- /dev/null +++ b/modules/assim.batch/man/pda.emulator.ms.Rd @@ -0,0 +1,22 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.emulator.ms.R +\name{pda.emulator.ms} +\alias{pda.emulator.ms} 
+\title{Paramater Data Assimilation using emulator on multiple sites in three modes: local, global, hierarchical +First draft, not complete yet} +\usage{ +pda.emulator.ms(multi.settings) +} +\arguments{ +\item{multi.settings}{= a pecan multi-settings list} +} +\value{ +settings +} +\description{ +Paramater Data Assimilation using emulator on multiple sites in three modes: local, global, hierarchical +First draft, not complete yet +} +\author{ +Istem Fer +} diff --git a/modules/assim.batch/man/pda.plot.params.Rd b/modules/assim.batch/man/pda.plot.params.Rd index ee9269c6522..b86f51c8e70 100644 --- a/modules/assim.batch/man/pda.plot.params.Rd +++ b/modules/assim.batch/man/pda.plot.params.Rd @@ -4,7 +4,8 @@ \alias{pda.plot.params} \title{Plot PDA Parameter Diagnostics} \usage{ -pda.plot.params(settings, mcmc.param.list, prior.ind, par.file.name = NULL) +pda.plot.params(settings, mcmc.param.list, prior.ind, par.file.name = NULL, + sffx) } \arguments{ \item{all}{params are the identically named variables in pda.mcmc / pda.emulator} diff --git a/modules/assim.batch/man/pda.postprocess.Rd b/modules/assim.batch/man/pda.postprocess.Rd index 0b3b8c71a0e..375282b6ec8 100644 --- a/modules/assim.batch/man/pda.postprocess.Rd +++ b/modules/assim.batch/man/pda.postprocess.Rd @@ -4,7 +4,8 @@ \alias{pda.postprocess} \title{Postprocessing for PDA Results} \usage{ -pda.postprocess(settings, con, mcmc.param.list, pname, prior, prior.ind) +pda.postprocess(settings, con, mcmc.param.list, pname, prior, prior.ind, + sffx = NULL) } \arguments{ \item{all}{params are the identically named variables in pda.mcmc / pda.emulator} diff --git a/modules/assim.batch/man/pda.sort.params.Rd b/modules/assim.batch/man/pda.sort.params.Rd new file mode 100644 index 00000000000..582f21a393d --- /dev/null +++ b/modules/assim.batch/man/pda.sort.params.Rd @@ -0,0 +1,13 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.postprocess.R +\name{pda.sort.params} +\alias{pda.sort.params} +\title{Function to sort Hierarchical MCMC samples} +\usage{ +pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, + prior.all, prior.ind.all.ns, sf = NULL, n.param.orig, prior.list, + prior.fn.all) +} +\description{ +Function to sort Hierarchical MCMC samples +} diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 5b78b5c8a2b..d4f6e81db03 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -643,4 +643,4 @@ HB-PDA workflow for multiple sites starts with fitting models locally. This is d In global calibration, we estimate posterior using all site-level GPs at once at each iteration. Proposed values are plugged-in all GPs, and accepted/rejected according to all estimated likelihoods. -Hierarhical calibration, uses a similar function to `mcmc.GP` but modified for hierarchihcal fitting, called `hier.mcmc`. \ No newline at end of file +Hierarhical calibration, uses a similar function to `mcmc.GP` but modified for hierarchihcal fitting, called `hier.mcmc`. Within this function, both site-level and (hierarchically modeled) global parameters are fitted. 
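For reference, each chain returned by `hier.mcmc` is a list whose structure is sketched below (dimensions follow the allocations in the function; the `str()` output shown is illustrative, not verbatim):

```
str(mcmc.out[[1]])
# List of 4
#  $ mu_site_samp       : num [nmcmc, nparam, nsites]  site-level samples (M-H)
#  $ mu_global_samp     : num [nmcmc, nparam]          global mean samples (Gibbs)
#  $ tau_global_samp    : num [nmcmc, nparam, nparam]  global precision samples (Gibbs)
#  $ musite.accept.count: num [nsites]                 M-H acceptance counters
```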
\ No newline at end of file From 44d3fe0758e351689215ac9b8c01e7af328466ed Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 6 Mar 2018 15:31:53 -0500 Subject: [PATCH 0022/2289] fix covariance-precision --- modules/assim.batch/R/pda.emulator.ms.R | 47 +++++++++++++------------ 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 096406f3212..d07b55f2a12 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -32,10 +32,8 @@ pda.emulator.ms <- function(multi.settings) { # lists to collect emulators and run MCMC per site later - SS.stack <- vector("list", nsites) gp.stack <- vector("list", nsites) - prior.stack <- vector("list", nsites) - nstack <- vector("list", nsites) + #nstack <- vector("list", nsites) ## -------------------------------------- Local runs and calibration ------------------------------------------ @@ -62,7 +60,7 @@ pda.emulator.ms <- function(multi.settings) { "hyper.pars") need_obj <- load_pda_history(workdir = multi.settings$outdir, - ensemble.id = multi.settings[[8]]$assim.batch$ensemble.id, + ensemble.id = multi.settings[[1]]$assim.batch$ensemble.id, objects = obj_names) init.list <- need_obj$init.list @@ -275,7 +273,8 @@ pda.emulator.ms <- function(multi.settings) { - ###### priors + ###### (hierarchical) global mu priors + # # mu_f : prior mean vector # P_f : prior covariance matrix # P_f_inv : prior precision matrix @@ -294,14 +293,15 @@ pda.emulator.ms <- function(multi.settings) { if(all(check.that)) break } - ###### priors cont'd + ###### (hierarchical) global tau priors + # # tau_df : Wishart degrees of freedom # tau_V : Wishart scale matrix + # # tau_global ~ W (tau_df, tau_scale) # sigma_global <- solve(tau_global) # - # initialize tau_global tau_df <- nsites + nparam + 1 tau_V <- diag(1, nparam) V_inv <- solve(tau_V) # will be used in gibbs updating @@ -366,11 +366,8 @@ pda.emulator.ms <- function(multi.settings) { # tau_global : error precision matrix # sum of pairwise deviation products - sum_term <- matrix(0, ncol = nparam, nrow = nparam) - for(i in seq_len(nsites)){ - pairwise_deviation <- as.matrix(mu_site_curr[i,] - mu_global) - sum_term <- sum_term + t(pairwise_deviation) %*% pairwise_deviation - } + pairwise_deviation <- apply(mu_site_curr, 1, function(r) r - mu_global) + sum_term <- pairwise_deviation %*% t(pairwise_deviation) tau_sigma <- solve(V_inv + sum_term) @@ -390,16 +387,16 @@ pda.emulator.ms <- function(multi.settings) { # global_Sigma : sum of mu_site and mu_f precision - global_Sigma <- solve(P_f_inv + (nsites * tau_global)) - - global_mu <- global_Sigma %*% ((P_f_inv %*% mu_f) + tau_global %*% colSums(mu_site_curr) ) + global_Sigma <- solve(P_f + (nsites * sigma_global)) - #mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. - repeat{ - mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) - check.that <- (mu_global > rng[, 1] & mu_global < rng[, 2]) - if(all(check.that)) break - } + global_mu <- global_Sigma %*% ((sigma_global %*% colSums(mu_site_curr)) + (P_f %*% mu_f)) + + mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. 
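# Editor's sketch (standalone, hypothetical values; not part of the patch):
# the vectorized sum of pairwise deviation products introduced above computes
# the same quantity as the per-site loop it replaces.
ms <- matrix(rnorm(8), nrow = 4, ncol = 2)   # stand-in for mu_site_curr
mg <- c(0.5, -0.5)                           # stand-in for mu_global
pd <- apply(ms, 1, function(r) r - mg)       # nparam x nsites
vec_sum  <- pd %*% t(pd)
loop_sum <- matrix(0, 2, 2)
for (i in 1:4) {
  d <- ms[i, ] - mg
  loop_sum <- loop_sum + d %*% t(d)          # outer product per site
}
all.equal(vec_sum, loop_sum)                 # TRUE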
+ # repeat{ + # mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) + # check.that <- (mu_global > rng[, 1] & mu_global < rng[, 2]) + # if(all(check.that)) break + # } # site level M-H ######################################## @@ -447,6 +444,7 @@ pda.emulator.ms <- function(multi.settings) { mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs tau_global_samp[g,,] <- tau_global # 100% acceptance for gibbs + if(g %% 500 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") } return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp, @@ -466,7 +464,7 @@ pda.emulator.ms <- function(multi.settings) { ## Sample posterior from emulator - mcmc.out <- parallel::parLapply(cl, seq_len(settings$assim.batch$chain), function(chain) { + mcmc.out <- parallel::parLapply(cl, seq_len(tmp.settings$assim.batch$chain), function(chain) { hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, nstack = NULL, nmcmc = tmp.settings$assim.batch$iter, rng = rng, mu0 = init.list[[chain]], jmp0 = jmp.list[[chain]], @@ -484,6 +482,11 @@ pda.emulator.ms <- function(multi.settings) { mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") + # Collect site-level params in their own list and postprocess + for(ns in seq_len(nsites)){ + mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_site_samp", ns = ns, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + settings <- pda.postprocess(settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = paste0("_hierarchical_SL",ns)) + } } # hierarchical - if end } From 62d709c8fcb0194d9639b5335505409a8e6ba3f3 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 10 Mar 2018 13:11:01 -0500 Subject: [PATCH 0023/2289] implement normal transformation --- modules/assim.batch/R/pda.emulator.ms.R | 51 +++++++++++++++------ modules/assim.batch/R/pda.utils.R | 59 +++++++++++++++++++++++++ 2 files changed, 97 insertions(+), 13 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index d07b55f2a12..3f52f955ef1 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -33,7 +33,8 @@ pda.emulator.ms <- function(multi.settings) { # lists to collect emulators and run MCMC per site later gp.stack <- vector("list", nsites) - #nstack <- vector("list", nsites) + SS.stack <- vector("list", nsites) + #nstack <- vector("list", nsites) ## -------------------------------------- Local runs and calibration ------------------------------------------ @@ -80,12 +81,14 @@ pda.emulator.ms <- function(multi.settings) { resume.list <- vector("list", multi.settings[[1]]$assim.batch$chain) - # collect GPs + # collect GPs and SSs for(s in seq_along(multi.settings)){ load(multi.settings[[s]]$assim.batch$emulator.path) + load(multi.settings[[s]]$assim.batch$ss.path) gp.stack[[s]] <- gp - remove(gp) + SS.stack[[s]] <- SS + remove(gp, SS) } @@ -238,6 +241,29 @@ pda.emulator.ms <- function(multi.settings) { ## Get an ensemble id for hierarchical calibration tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) + ## Transform values from non-normal distributions to standard Normal + ## it won't do anything if all priors are already normal + 
norm_transform <- norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list, rng) + if(!norm_transform$normF){ # means SS values are transformed + ## get new SS.stack with transformed values + SS.stack <- norm_transform$normSS + + ## re-fit GP on new param space + for(i in seq_along(SS.stack)){ + GPmodel <- lapply(SS.stack[[i]], function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], verbose = 0)) + gp.stack[[i]] <- GPmodel + } + + ## re-define rng + rng <- norm_transform$rng + + ## get new init.list and jmp.list + init.list <- norm_transform$init + jmp.list <- norm_transform$jmp + + + } + ########### hierarchical MCMC function with Gibbs ############## @@ -319,9 +345,10 @@ pda.emulator.ms <- function(multi.settings) { mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= n.param) for(ns in 1:nsites){ + mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean repeat{ mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean - check.that <- (mu_site_curr[ns,] > rng[, 1] & mu_site_curr[ns,] < rng[, 2]) + check.that <- (mu_site_curr[ns,] > rng[, 1] & mu_site_curr[ns,] < rng[, 2]) if(all(check.that)) break } } @@ -374,7 +401,7 @@ pda.emulator.ms <- function(multi.settings) { # update tau tau_global <- rWishart(1, df = tau_df, Sigma = tau_sigma)[,,1] # site precision sigma_global <- solve(tau_global) # site covariance, new prior sigma to be used below for prior prob. calc. - + ######################################## @@ -386,17 +413,14 @@ pda.emulator.ms <- function(multi.settings) { # global_mu : precision weighted average between the data (mu_site) and prior mean (mu_f) # global_Sigma : sum of mu_site and mu_f precision - + global_Sigma <- solve(P_f + (nsites * sigma_global)) global_mu <- global_Sigma %*% ((sigma_global %*% colSums(mu_site_curr)) + (P_f %*% mu_f)) - + mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. 
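# Editor's sketch (standalone, hypothetical prior; not part of the patch):
# the normal-score transformation introduced in this patch, applied by
# norm_transform_priors() and undone by back_transform_posteriors(), is a
# quantile match against N(0,1), shown here for a Gamma prior; the round
# trip is exact up to floating point.
x <- rgamma(5, shape = 2, rate = 1)           # knots from a non-normal prior
z <- qnorm(pgamma(x, shape = 2, rate = 1))    # map to standard-normal space
x_back <- qgamma(pnorm(z), shape = 2, rate = 1)
all.equal(x, x_back)                          # TRUE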
- # repeat{ - # mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) - # check.that <- (mu_global > rng[, 1] & mu_global < rng[, 2]) - # if(all(check.that)) break - # } + + # site level M-H ######################################## @@ -404,9 +428,10 @@ pda.emulator.ms <- function(multi.settings) { # propose mu_site for(ns in seq_len(nsites)){ + mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) - check.that <- (mu_site_new[ns,] > rng[, 1] & mu_site_new[ns,] < rng[, 2]) + check.that <- (mu_site_new[ns,] > rng[, 1] & mu_site_new[ns,] < rng[, 2]) if(all(check.that)) break } } diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 6ee6a1439ef..6a2fd0a06ff 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -796,3 +796,62 @@ load_pda_history <- function(workdir, ensemble.id, objects){ names(alist) <- objects return(alist) } + +##' Helper function that transforms the values of each parameter into N(0,1) equivalent +##' @author Istem Fer +##' @export +norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list, rng){ + + # check for non-normals + prior.all <- do.call("rbind", prior.list) + psel <- prior.all[prior.ind.all, 1] != "norm" + norm.check <- all(!psel) # if all are norm do nothing + + + if(!norm.check){ + + rng[psel,1] <- qnorm(1e-05) + rng[psel,2] <- qnorm(0.99999) + + # need to modify init.list and jmp.list as well + parnames <- names(init.list[[1]]) + + for(c in seq_along(init.list)){ + init.list[[c]] <- lapply(seq_along(init.list[[c]]), function(x){ + if(psel[c]) init.list[[c]][[x]] <- rnorm(1) + }) + names(init.list[[c]]) <- parnames + jmp.list[[c]][psel] <- 0.1 * diff(qnorm(c(0.05, 0.95))) + } + + psel <- c(psel, FALSE) # the last column is SS, always FALSE + + for(i in seq_along(SS.stack)){ + # NOTE: there might be differences in dimensions, + # some SS-matrices might have likelihood params such as bias + # check for that later + SS.tmp <- lapply(SS.stack[[i]], function(ss){ + for(p in seq_along(psel)){ + if(psel[p]){ + prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[p]]], list(q = ss[,p])) + stdnorm.vals <- qnorm(prior.quantiles) + ss[,p] <- stdnorm.vals + } + } + return(ss) + }) + SS.stack[[i]] <- SS.tmp + } + } + + return(list(normSS = SS.stack, normF = norm.check, init = init.list, jmp = jmp.list, rng = rng)) + +} # norm_transform_priors + + +##' Helper function that transforms the samples back to their original prior distribution equivalents +##' @author Istem Fer +##' @export +back_transform_posteriors <- function(){ + +} From f07cea373a678479bb439274a646b53500845963 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 11 Mar 2018 17:32:47 -0400 Subject: [PATCH 0024/2289] Back transformation --- modules/assim.batch/R/pda.emulator.ms.R | 20 +++++----- modules/assim.batch/R/pda.utils.R | 50 ++++++++++++++++++++++--- 2 files changed, 54 insertions(+), 16 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 3f52f955ef1..b33213d90de 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -243,14 +243,14 @@ pda.emulator.ms <- function(multi.settings) { ## Transform values from non-normal distributions to standard Normal ## it won't do anything if all priors are already 
normal - norm_transform <- norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list, rng) + norm_transform <- norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list) if(!norm_transform$normF){ # means SS values are transformed ## get new SS.stack with transformed values SS.stack <- norm_transform$normSS ## re-fit GP on new param space for(i in seq_along(SS.stack)){ - GPmodel <- lapply(SS.stack[[i]], function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], verbose = 0)) + GPmodel <- lapply(SS.stack[[i]], function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) gp.stack[[i]] <- GPmodel } @@ -315,7 +315,7 @@ pda.emulator.ms <- function(multi.settings) { # # initialize mu_global (nparam) repeat{ mu_global <- mvtnorm::rmvnorm(1, mu_f, P_f) - check.that <- (mu_global > rng[, 1] & mu_global < rng[, 2]) + check.that <- (mu_global > apply(rng,1:2,max)[, 1] & mu_global < apply(rng,1:2,min)[, 2]) if(all(check.that)) break } @@ -345,10 +345,9 @@ pda.emulator.ms <- function(multi.settings) { mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= n.param) for(ns in 1:nsites){ - mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean repeat{ mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean - check.that <- (mu_site_curr[ns,] > rng[, 1] & mu_site_curr[ns,] < rng[, 2]) + check.that <- (mu_site_curr[ns,] > rng[, 1, ns] & mu_site_curr[ns,] < rng[, 2, ns]) if(all(check.that)) break } } @@ -428,10 +427,9 @@ pda.emulator.ms <- function(multi.settings) { # propose mu_site for(ns in seq_len(nsites)){ - mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) - check.that <- (mu_site_new[ns,] > rng[, 1] & mu_site_new[ns,] < rng[, 2]) + check.that <- (mu_site_new[ns,] > rng[, 1, ns] & mu_site_new[ns,] < rng[, 2, ns]) if(all(check.that)) break } } @@ -502,15 +500,17 @@ pda.emulator.ms <- function(multi.settings) { ptm.finish <- proc.time() - ptm.start logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) + # transform samples from std normal to prior quantiles + mcmc.out2 <- back_transform_posteriors(prior.list, prior.fn.all, prior.ind.all, mcmc.out) # Collect global params in their own list and postprocess - mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") # Collect site-level params in their own list and postprocess for(ns in seq_len(nsites)){ - mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_site_samp", ns = ns, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) - settings <- pda.postprocess(settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = paste0("_hierarchical_SL",ns)) + mcmc.param.list <- 
pda.sort.params(mcmc.out2, sub.sample = "mu_site_samp", ns = ns, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = paste0("_hierarchical_SL",ns)) } } # hierarchical - if end diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 6a2fd0a06ff..77ddcfd3b18 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -800,7 +800,7 @@ load_pda_history <- function(workdir, ensemble.id, objects){ ##' Helper function that transforms the values of each parameter into N(0,1) equivalent ##' @author Istem Fer ##' @export -norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list, rng){ +norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list){ # check for non-normals prior.all <- do.call("rbind", prior.list) @@ -810,15 +810,18 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st if(!norm.check){ - rng[psel,1] <- qnorm(1e-05) - rng[psel,2] <- qnorm(0.99999) - + rng <- array(NA, dim = c(length(prior.ind.all),2,length(SS.stack))) + # need to modify init.list and jmp.list as well parnames <- names(init.list[[1]]) for(c in seq_along(init.list)){ init.list[[c]] <- lapply(seq_along(init.list[[c]]), function(x){ - if(psel[c]) init.list[[c]][[x]] <- rnorm(1) + if(psel[x]){ + init.list[[c]][[x]] <- rnorm(1) + }else{ + init.list[[c]][[x]] <- init.list[[c]][[x]] + } }) names(init.list[[c]]) <- parnames jmp.list[[c]][psel] <- 0.1 * diff(qnorm(c(0.05, 0.95))) @@ -841,6 +844,8 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st return(ss) }) SS.stack[[i]] <- SS.tmp + rng.tmp <- apply(SS.tmp[[1]],2,range) + rng[,,i] <- t(rng.tmp[,-ncol(rng.tmp)]) } } @@ -852,6 +857,39 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st ##' Helper function that transforms the samples back to their original prior distribution equivalents ##' @author Istem Fer ##' @export -back_transform_posteriors <- function(){ +back_transform_posteriors <- function(prior.list, prior.fn.all, prior.ind.all, mcmc.out){ + + # check for non-normals + prior.all <- do.call("rbind", prior.list) + psel <- prior.all[prior.ind.all, 1] != "norm" + norm.check <- all(!psel) # if all are norm do nothing + + if(!norm.check){ + for(i in seq_along(mcmc.out)){ + mu_global_samp <- mcmc.out[[i]]$mu_global_samp + mu_site_samp <- mcmc.out[[i]]$mu_site_samp + + mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)), mu_site_samp, along = 3) + for(ms in seq_len(dim(mu_sample_tmp)[3])){ + mcmc.vals <- mu_sample_tmp[,,ms] + stdnorm.quantiles <- pnorm(mcmc.vals[, psel]) + pc <- 1 # counter, because all cols might not need transforming back + for(ps in seq_along(psel)){ + if(psel[ps]){ + prior.quantiles <- eval(prior.fn.all$qprior[[prior.ind.all[ps]]], list(p = stdnorm.quantiles[,pc])) + mcmc.vals[,ps] <- prior.quantiles + pc <- pc + 1 + } + } + mu_sample_tmp[,,ms] <- mcmc.vals + } + + + mcmc.out[[i]]$mu_global_samp <- mu_sample_tmp[,,1] + mcmc.out[[i]]$mu_site_samp <- mu_sample_tmp[,,-1] + } + } + + return(mcmc.out) } From dcdb8c7a8310e7ffc4f5cd6c5b93f6c87f48b4e3 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 12 Mar 2018 13:35:38 -0400 Subject: [PATCH 0025/2289] roxygenise --- modules/assim.batch/NAMESPACE | 2 ++ .../assim.batch/man/back_transform_posteriors.Rd | 14 
++++++++++++++ modules/assim.batch/man/norm_transform_priors.Rd | 15 +++++++++++++++ 3 files changed, 31 insertions(+) create mode 100644 modules/assim.batch/man/back_transform_posteriors.Rd create mode 100644 modules/assim.batch/man/norm_transform_priors.Rd diff --git a/modules/assim.batch/NAMESPACE b/modules/assim.batch/NAMESPACE index c699eae254e..c368b285e44 100644 --- a/modules/assim.batch/NAMESPACE +++ b/modules/assim.batch/NAMESPACE @@ -2,6 +2,7 @@ export(assim.batch) export(autoburnin) +export(back_transform_posteriors) export(correlationPlot) export(gelman_diag_gelmanPlot) export(gelman_diag_mw) @@ -10,6 +11,7 @@ export(load.L2Ameriflux.cf) export(load.pda.data) export(load_pda_history) export(makeMCMCList) +export(norm_transform_priors) export(pda.adjust.jumps) export(pda.adjust.jumps.bs) export(pda.autocorr.calc) diff --git a/modules/assim.batch/man/back_transform_posteriors.Rd b/modules/assim.batch/man/back_transform_posteriors.Rd new file mode 100644 index 00000000000..c4cda869d26 --- /dev/null +++ b/modules/assim.batch/man/back_transform_posteriors.Rd @@ -0,0 +1,14 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.utils.R +\name{back_transform_posteriors} +\alias{back_transform_posteriors} +\title{Helper function that transforms the samples back to their original prior distribution equivalents} +\usage{ +back_transform_posteriors(prior.list, prior.fn.all, prior.ind.all, mcmc.out) +} +\description{ +Helper function that transforms the samples back to their original prior distribution equivalents +} +\author{ +Istem Fer +} diff --git a/modules/assim.batch/man/norm_transform_priors.Rd b/modules/assim.batch/man/norm_transform_priors.Rd new file mode 100644 index 00000000000..76a0cc14d52 --- /dev/null +++ b/modules/assim.batch/man/norm_transform_priors.Rd @@ -0,0 +1,15 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.utils.R +\name{norm_transform_priors} +\alias{norm_transform_priors} +\title{Helper function that transforms the values of each parameter into N(0,1) equivalent} +\usage{ +norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, + init.list, jmp.list) +} +\description{ +Helper function that transforms the values of each parameter into N(0,1) equivalent +} +\author{ +Istem Fer +} From 73da4ea4abeac892ee142e87973d62ec7bc66406 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 26 Mar 2018 09:01:16 -0400 Subject: [PATCH 0026/2289] if not all are normal, transform all even if some are normal --- modules/assim.batch/R/pda.utils.R | 44 +++++++++++++------------------ 1 file changed, 18 insertions(+), 26 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 77ddcfd3b18..09c8dab8880 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -815,31 +815,17 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st # need to modify init.list and jmp.list as well parnames <- names(init.list[[1]]) - for(c in seq_along(init.list)){ - init.list[[c]] <- lapply(seq_along(init.list[[c]]), function(x){ - if(psel[x]){ - init.list[[c]][[x]] <- rnorm(1) - }else{ - init.list[[c]][[x]] <- init.list[[c]][[x]] - } - }) - names(init.list[[c]]) <- parnames - jmp.list[[c]][psel] <- 0.1 * diff(qnorm(c(0.05, 0.95))) - } - - psel <- c(psel, FALSE) # the last column is SS, always FALSE - for(i in seq_along(SS.stack)){ # NOTE: there might be differences in dimensions, # some SS-matrices might have likelihood 
params such as bias # check for that later SS.tmp <- lapply(SS.stack[[i]], function(ss){ for(p in seq_along(psel)){ - if(psel[p]){ - prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[p]]], list(q = ss[,p])) - stdnorm.vals <- qnorm(prior.quantiles) - ss[,p] <- stdnorm.vals - } + #we transform all + prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[p]]], list(q = ss[,p])) + stdnorm.vals <- qnorm(prior.quantiles) + ss[,p] <- stdnorm.vals + } return(ss) }) @@ -847,6 +833,15 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st rng.tmp <- apply(SS.tmp[[1]],2,range) rng[,,i] <- t(rng.tmp[,-ncol(rng.tmp)]) } + + for(c in seq_along(init.list)){ + init.list[[c]] <- lapply(seq_along(init.list[[c]]), function(x){ + init.list[[c]][[x]] <- rnorm(1) + }) + names(init.list[[c]]) <- parnames + jmp.list[[c]][psel] <- 0.1 * diff(qnorm(c(0.05, 0.95))) + } + } return(list(normSS = SS.stack, normF = norm.check, init = init.list, jmp = jmp.list, rng = rng)) @@ -872,14 +867,11 @@ back_transform_posteriors <- function(prior.list, prior.fn.all, prior.ind.all, m mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)), mu_site_samp, along = 3) for(ms in seq_len(dim(mu_sample_tmp)[3])){ mcmc.vals <- mu_sample_tmp[,,ms] - stdnorm.quantiles <- pnorm(mcmc.vals[, psel]) - pc <- 1 # counter, because all cols might not need transforming back + stdnorm.quantiles <- pnorm(mcmc.vals) + # counter, because all cols need transforming back for(ps in seq_along(psel)){ - if(psel[ps]){ - prior.quantiles <- eval(prior.fn.all$qprior[[prior.ind.all[ps]]], list(p = stdnorm.quantiles[,pc])) - mcmc.vals[,ps] <- prior.quantiles - pc <- pc + 1 - } + prior.quantiles <- eval(prior.fn.all$qprior[[prior.ind.all[ps]]], list(p = stdnorm.quantiles[,ps])) + mcmc.vals[,ps] <- prior.quantiles } mu_sample_tmp[,,ms] <- mcmc.vals } From 5ed001e888ff595995d0290b383866e554578622 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 11 Apr 2018 07:39:04 -0400 Subject: [PATCH 0027/2289] delete the ea/sa bit --- .../vignettes/MultiSitePDAVignette.Rmd | 166 +----------------- 1 file changed, 7 insertions(+), 159 deletions(-) diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index d4f6e81db03..394be923868 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -167,159 +167,6 @@ In the meantime, your `` section should specify ``, `` ``` -If requested, your `` and `` sections would look similar to this in your `pecan.CONFIGS.xml`: - - -``` - - - 100 - NEE - 1000017326 - 2005 - 2011 - - - 100 - NEE - 1000017326 - 2010 - 2015 - - - 100 - NEE - 1000017326 - 2001 - 2014 - - - 100 - NEE - 1000017326 - 2005 - 2015 - - - 100 - NEE - 1000017326 - 2007 - 2014 - - - 100 - NEE - 1000017326 - 1999 - 2006 - - - 100 - NEE - 1000017326 - 2013 - 2015 - - - 100 - NEE - 1000017326 - 1997 - 2010 - - - - - - -1 - 0 - 1 - - NEE - 1000017325 - 2005 - 2011 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 2010 - 2015 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 2001 - 2014 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 2005 - 2015 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 2007 - 2014 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 1999 - 2006 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 2013 - 2015 - - - - -1 - 0 - 1 - - NEE - 1000017325 - 1997 - 2010 - - -``` ## Settings for HB-PDA @@ -327,21 +174,22 @@ Similar to PDA settings, we will contain HB-PDA settings under the ` - emulator.ms - 
local - 60 - 10000 - 3 + emulator.ms + local + 115 + 100000 + 3 som_respiration_rate soil_respiration_Q10 - soilWHC half_saturation_PAR dVPDSlope psnTOpt + AmaxFrac + leafGrowth From 95aa998f913bb76a6a421fc2d8583e0c8c6c524f Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 11 Apr 2018 07:39:44 -0400 Subject: [PATCH 0028/2289] papply for the local runs --- modules/assim.batch/R/pda.emulator.ms.R | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index b33213d90de..38394ac12a9 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -38,17 +38,14 @@ pda.emulator.ms <- function(multi.settings) { ## -------------------------------------- Local runs and calibration ------------------------------------------ - for(s in seq_along(multi.settings)){ # site runs - loop begin - - settings <- multi.settings[[s]] + if(local){ # NOTE: local flag is not used currently, prepearation for future use # if this flag is FALSE, pda.emulator will not fit GP and run MCMC, # but will run LHC ensembles, calculate SS and return settings list with saved SS paths etc. # this requires some re-arrangement in pda.emulator, # for now we will always run site-level calibration - settings <- pda.emulator(settings, local = local) - multi.settings[[s]] <- settings - } # site runs - loop end + multi.settings <- papply(multi.settings, pda.emulator, local = local) + } #PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDAMS.xml') @@ -328,13 +325,15 @@ pda.emulator.ms <- function(multi.settings) { # sigma_global <- solve(tau_global) # - tau_df <- nsites + nparam + 1 + tau_df <- nparam + 1 tau_V <- diag(1, nparam) V_inv <- solve(tau_V) # will be used in gibbs updating # initialize tau_global (nparam x nparam) tau_global <- rWishart(1, tau_df, tau_V)[,,1] + # df to use while updating tau later, setting here one out of the loop + tau_df <- tau_df + nsites # initialize jcov.arr (jump variances per site) jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) From c54d05c6be0e136df119394050b3202f966e341d Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 11 Apr 2018 07:40:26 -0400 Subject: [PATCH 0029/2289] save inputs with neff calculation --- modules/assim.batch/R/pda.emulator.R | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index a8650233173..135015556f1 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -673,6 +673,14 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ".Rdata")) save(resume.list, file = settings$assim.batch$resume.path) + # save inputs list, this object has been processed for autocorrelation correction + # this can take a long time depending on the data, re-load and skip in next iteration + external.data <- inputs + save(external.data, file = file.path(settings$outdir, + paste0("external.", + settings$assim.batch$ensemble.id, + ".Rdata"))) + # save prior.list with bias term if(any(unlist(any.mgauss) == "multipGauss")){ settings$assim.batch$bias.path <- file.path(settings$outdir, From 9f9eca7362b3531d6534987bf84fd36389ef6cf0 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 11 Apr 2018 07:50:02 -0400 Subject: [PATCH 0030/2289] and reload --- modules/assim.batch/R/pda.emulator.R | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.emulator.R 
b/modules/assim.batch/R/pda.emulator.R index 135015556f1..31a80e977d4 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -53,7 +53,11 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, prior.id=prior.id, chain=chain, iter=iter, adapt=adapt, adj.min=adj.min, ar.target=ar.target, jvar=jvar, n.knot=n.knot, run.round) - + # load inputs with neff if this is another round + if(!run.normal){ + load(file.path(settings$outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata"))) + } + ## will be used to check if multiplicative Gaussian is requested any.mgauss <- sapply(settings$assim.batch$inputs, `[[`, "likelihood") From 24c2c400769f5cb9ea1a9afebd9bd46dd340358f Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 11 Apr 2018 08:01:00 -0400 Subject: [PATCH 0031/2289] and delete the file --- modules/assim.batch/R/pda.emulator.R | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 31a80e977d4..b9ad6acb303 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -55,7 +55,10 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # load inputs with neff if this is another round if(!run.normal){ - load(file.path(settings$outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata"))) + external_data_path <- file.path(settings$outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata")) + load(external_data_path) + # and maybe delete the file afterwards because it will be re-written with a new ensemble id in the end + file.remove(external_data_path) } From 082bcf64050853a54246d09fc498e330b32127f6 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 12 Apr 2018 11:45:02 -0400 Subject: [PATCH 0032/2289] using individual, joint instead of local, global --- modules/assim.batch/R/pda.emulator.R | 6 ++--- modules/assim.batch/R/pda.emulator.ms.R | 36 ++++++++++++------------- modules/assim.batch/man/pda.emulator.Rd | 2 +- 3 files changed, 22 insertions(+), 22 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index b9ad6acb303..5aacca918d5 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -14,7 +14,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, - ar.target = NULL, jvar = NULL, n.knot = NULL, local = TRUE) { + ar.target = NULL, jvar = NULL, n.knot = NULL, individual = TRUE) { ## this bit of code is useful for defining the variables passed to this function if you are ## debugging @@ -22,7 +22,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, external.data <- external.priors <- NULL params.id <- param.names <- prior.id <- chain <- iter <- NULL n.knot <- adapt <- adj.min <- ar.target <- jvar <- NULL - local <- TRUE + individual <- TRUE } # handle extention flags @@ -57,7 +57,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, if(!run.normal){ external_data_path <- file.path(settings$outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata")) load(external_data_path) - # and maybe delete the file afterwards because it will be re-written with a new ensemble id in the end + # and delete the file afterwards because it will be 
re-written with a new ensemble id in the end
+    file.remove(external_data_path)
+  }

diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R
index 38394ac12a9..0230b0341aa 100644
--- a/modules/assim.batch/R/pda.emulator.ms.R
+++ b/modules/assim.batch/R/pda.emulator.ms.R
@@ -14,17 +14,17 @@ pda.emulator.ms <- function(multi.settings) {
   # check mode
   pda.mode <- unique(sapply(multi.settings$assim.batch,`[[`, "mode"))
-  if(pda.mode == "local"){
-    local <- TRUE
-    global <- hierarchical <- FALSE
-  }else if(pda.mode == "global"){
-    global <- TRUE
-    local <- hierarchical <- FALSE
+  if(pda.mode == "individual"){
+    individual <- TRUE
+    joint <- hierarchical <- FALSE
+  }else if(pda.mode == "joint"){
+    joint <- TRUE
+    individual <- hierarchical <- FALSE
   }else if(pda.mode == "hierarchical"){
     hierarchical <- TRUE
-    local <- global <- FALSE
+    individual <- joint <- FALSE
   }else{
-    local <- global <- hierarchical <- TRUE
+    individual <- joint <- hierarchical <- TRUE
   }

   # how many sites
@@ -36,20 +36,20 @@ pda.emulator.ms <- function(multi.settings) {
   SS.stack <- vector("list", nsites)
   #nstack <- vector("list", nsites)

-  ## -------------------------------------- Local runs and calibration ------------------------------------------
+  ## -------------------------------------- Individual runs and calibration ------------------------------------------

-  if(local){
-    # NOTE: local flag is not used currently, preparation for future use
+  if(individual){
+    # NOTE: individual flag is not used currently, preparation for future use
     # if this flag is FALSE, pda.emulator will not fit GP and run MCMC,
     # but will run LHC ensembles, calculate SS and return settings list with saved SS paths etc.
     # this requires some re-arrangement in pda.emulator,
     # for now we will always run site-level calibration
-    multi.settings <- papply(multi.settings, pda.emulator, local = local)
+    multi.settings <- papply(multi.settings, pda.emulator, individual = individual)
   }

   #PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDAMS.xml')

-  ## -------------------------------- Prepare for Global and Hierarchical -----------------------------------------
+  ## -------------------------------- Prepare for Joint and Hierarchical -----------------------------------------

   # we need some objects that are common to all calibrations
@@ -109,8 +109,8 @@ pda.emulator.ms <- function(multi.settings) {
   }

-  ## -------------------------------------- Global calibration --------------------------------------------------
-  if(global){ # global - if begin
+  ## -------------------------------------- Joint calibration --------------------------------------------------
+  if(joint){ # joint - if begin

     ## Get an ensemble id for global calibration
     tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id)
@@ -152,7 +152,7 @@ pda.emulator.ms <- function(multi.settings) {
     # Stop the clock
     ptm.finish <- proc.time() - ptm.start
-    logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(settings$assim.batch$iter), " iterations."))
+    logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations."))

     mcmc.samp.list <- sf.samp.list <- list()

@@ -228,9 +228,9 @@ pda.emulator.ms <- function(multi.settings) {
       }
     }

-    tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_global")
+    tmp.settings <- pda.postprocess(tmp.settings, con,
mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_joint") - } # global - if end + } # joint - if end ## -------------------------------------- Hierarchical MCMC ------------------------------------------ if(hierarchical){ # hierarchical - if begin diff --git a/modules/assim.batch/man/pda.emulator.Rd b/modules/assim.batch/man/pda.emulator.Rd index e71b65a5297..b9ced428e04 100644 --- a/modules/assim.batch/man/pda.emulator.Rd +++ b/modules/assim.batch/man/pda.emulator.Rd @@ -7,7 +7,7 @@ pda.emulator(settings, external.data = NULL, external.priors = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, ar.target = NULL, - jvar = NULL, n.knot = NULL, local = TRUE) + jvar = NULL, n.knot = NULL, individual = TRUE) } \arguments{ \item{settings}{= a pecan settings list} From 9db169e5bd474ba5401dba76c54aaee42319cc24 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 13 Apr 2018 11:37:54 -0400 Subject: [PATCH 0033/2289] return all is necessary to retrieve back the pda posteriors for post-pda ensembles --- base/utils/R/run.write.configs.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/run.write.configs.R b/base/utils/R/run.write.configs.R index 3e4cda644bf..c30c2145ae6 100644 --- a/base/utils/R/run.write.configs.R +++ b/base/utils/R/run.write.configs.R @@ -40,7 +40,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo if (!is.null(settings$pfts[[i]]$posteriorid)) { files <- PEcAn.DB::dbfile.check("Posterior", settings$pfts[[i]]$posteriorid, - con, settings$host$name) + con, settings$host$name, return.all = TRUE) pid <- grep("post.distns.*Rdata", files$file_name) ## is there a posterior file? if (length(pid) == 0) { pid <- grep("prior.distns.Rdata", files$file_name) ## is there a prior file? 
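For context on the `return.all = TRUE` change above: a single posterior id can have several files registered for it (a `prior.distns.Rdata` plus one or more `post.distns.*.Rdata` written by PDA), and the `grep()` fallback only works if it can see all of them rather than a single record. A minimal sketch of the intended selection logic, with illustrative file names that are not from the patch:

```r
# Illustrative only: prefer a PDA posterior when present, else fall back to the prior
pick_distns <- function(file_names) {
  pid <- grep("post.distns.*Rdata", file_names)   # any PDA posterior file(s)?
  if (length(pid) == 0) {
    pid <- grep("prior.distns.Rdata", file_names) # no posterior yet: use the prior
  }
  file_names[pid]
}

pick_distns(c("prior.distns.Rdata", "post.distns.PDA.1000017326.Rdata"))
# [1] "post.distns.PDA.1000017326.Rdata"
```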
From 9bf70ea5336638f30970a7b1fe5a66fccfdf9494 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 16 Apr 2018 13:56:42 -0400 Subject: [PATCH 0034/2289] changes for better working std normal transformation --- modules/assim.batch/NAMESPACE | 1 + modules/assim.batch/R/pda.emulator.R | 10 ++- modules/assim.batch/R/pda.utils.R | 63 +++++++++++++++++-- .../man/back_transform_posteriors.Rd | 2 +- modules/assim.batch/man/sample_MCMC.Rd | 15 +++++ 5 files changed, 83 insertions(+), 8 deletions(-) create mode 100644 modules/assim.batch/man/sample_MCMC.Rd diff --git a/modules/assim.batch/NAMESPACE b/modules/assim.batch/NAMESPACE index c368b285e44..b9f111284d2 100644 --- a/modules/assim.batch/NAMESPACE +++ b/modules/assim.batch/NAMESPACE @@ -43,6 +43,7 @@ export(pda.sort.params) export(return.bias) export(return_hyperpars) export(runModule.assim.batch) +export(sample_MCMC) export(write_sf_posterior) import(IDPmisc) import(ellipse) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 5aacca918d5..a366c6f30cc 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -190,8 +190,8 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # loads the posteriors of the the previous emulator run temp.round <- pda.load.priors(settings, con, run.round) prior.round.list <- temp.round$prior - - + + prior.round.fn <- lapply(prior.round.list, pda.define.prior.fn) ## Propose a percentage (if not specified 80%) of the new parameter knots from the posterior of the previous run @@ -217,6 +217,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, probs.round.sf <- NULL } + ## set prior distribution functions for posterior of the previous emulator run ## need to do two things here: ## 1) for non-SF parameters, use the posterior of previous emulator @@ -253,6 +254,11 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, } } + # TODO: I need to revise this later, most of the code above is unnecessary (load posteriors, propose from them etc.) 
+ # but for now going with the simplest and hopefully bug-free version (NEEDs CHECKING SF VERSION) + # sample from MCMC + knots.params.temp <- sample_MCMC(settings$assim.batch$mcmc.path, n.param.orig, prior.ind.orig, n.post.knots, knots.params.temp) + # mixture of knots mix.knots <- sample(settings$assim.batch$n.knot, (settings$assim.batch$n.knot - n.post.knots)) for (i in seq_along(settings$pfts)) { diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 09c8dab8880..63b58353432 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -815,6 +815,8 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st # need to modify init.list and jmp.list as well parnames <- names(init.list[[1]]) + prior.fn.all <- pda.define.prior.fn(prior.all) # setup prior functions again + for(i in seq_along(SS.stack)){ # NOTE: there might be differences in dimensions, # some SS-matrices might have likelihood params such as bias @@ -825,7 +827,6 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[p]]], list(q = ss[,p])) stdnorm.vals <- qnorm(prior.quantiles) ss[,p] <- stdnorm.vals - } return(ss) }) @@ -834,9 +835,12 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st rng[,,i] <- t(rng.tmp[,-ncol(rng.tmp)]) } + all_knots <- do.call("rbind", lapply(SS.stack,`[[`,1)) + rand_ind <- sample(seq_len(nrow(all_knots)), length(init.list)) + for(c in seq_along(init.list)){ init.list[[c]] <- lapply(seq_along(init.list[[c]]), function(x){ - init.list[[c]][[x]] <- rnorm(1) + init.list[[c]][[x]] <- all_knots[rand_ind[c],x] }) names(init.list[[c]]) <- parnames jmp.list[[c]][psel] <- 0.1 * diff(qnorm(c(0.05, 0.95))) @@ -844,7 +848,8 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st } - return(list(normSS = SS.stack, normF = norm.check, init = init.list, jmp = jmp.list, rng = rng)) + return(list(normSS = SS.stack, normF = norm.check, init = init.list, jmp = jmp.list, + rng = rng, prior.all = prior.all, prior.fn.all = prior.fn.all)) } # norm_transform_priors @@ -852,10 +857,9 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st ##' Helper function that transforms the samples back to their original prior distribution equivalents ##' @author Istem Fer ##' @export -back_transform_posteriors <- function(prior.list, prior.fn.all, prior.ind.all, mcmc.out){ +back_transform_posteriors <- function(prior.all, prior.fn.all, prior.ind.all, mcmc.out){ # check for non-normals - prior.all <- do.call("rbind", prior.list) psel <- prior.all[prior.ind.all, 1] != "norm" norm.check <- all(!psel) # if all are norm do nothing @@ -885,3 +889,52 @@ back_transform_posteriors <- function(prior.list, prior.fn.all, prior.ind.all, m return(mcmc.out) } + + +##' Helper function to sample from previous MCMC chain while proposing new knots +##' @author Istem Fer +##' @export +sample_MCMC <- function(mcmc_path, n.param.orig, prior.ind.orig, n.post.knots, knots.params.temp){ + + PEcAn.logger::logger.info("Sampling from previous round's MCMC") + + load(mcmc_path) + + mcmc.param.list <- params.subset <- list() + ind <- 0 + for (i in seq_along(n.param.orig)) { + mcmc.param.list[[i]] <- lapply(mcmc.samp.list, function(x) x[, (ind + 1):(ind + n.param.orig[i]), drop = FALSE]) + ind <- ind + n.param.orig[i] + } + + burnins <- rep(NA, length(mcmc.param.list)) + for (i in seq_along(mcmc.param.list)) { 
+ params.subset[[i]] <- as.mcmc.list(lapply(mcmc.param.list[[i]], mcmc)) + + burnin <- getBurnin(params.subset[[i]], method = "gelman.plot") + burnins[i] <- max(burnin, na.rm = TRUE) + } + maxburn <- max(burnins) + + collect_samples <- list() + for (i in seq_along(mcmc.samp.list)) { + collect_samples[[i]] <- window(mcmc.samp.list[[i]], start = maxburn) + } + + mcmc_samples <- do.call(rbind, collect_samples) + + get_samples <- sample(1:nrow(mcmc_samples), n.post.knots) + new_knots <- mcmc_samples[get_samples,] + + ind <- 0 + for(i in seq_along(n.param.orig)){ + knots.params.temp[[i]][, prior.ind.orig[[i]]] <- new_knots[, (ind + 1):(ind + n.param.orig[i]), drop = FALSE] + ind <- ind + n.param.orig[i] + } + + return(knots.params.temp) + +} + + + diff --git a/modules/assim.batch/man/back_transform_posteriors.Rd b/modules/assim.batch/man/back_transform_posteriors.Rd index c4cda869d26..4338996fd0b 100644 --- a/modules/assim.batch/man/back_transform_posteriors.Rd +++ b/modules/assim.batch/man/back_transform_posteriors.Rd @@ -4,7 +4,7 @@ \alias{back_transform_posteriors} \title{Helper function that transforms the samples back to their original prior distribution equivalents} \usage{ -back_transform_posteriors(prior.list, prior.fn.all, prior.ind.all, mcmc.out) +back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out) } \description{ Helper function that transforms the samples back to their original prior distribution equivalents diff --git a/modules/assim.batch/man/sample_MCMC.Rd b/modules/assim.batch/man/sample_MCMC.Rd new file mode 100644 index 00000000000..ef9a9c930e5 --- /dev/null +++ b/modules/assim.batch/man/sample_MCMC.Rd @@ -0,0 +1,15 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.utils.R +\name{sample_MCMC} +\alias{sample_MCMC} +\title{Helper function to sample from previous MCMC chain while proposing new knots} +\usage{ +sample_MCMC(mcmc_path, n.param.orig, prior.ind.orig, n.post.knots, + knots.params.temp) +} +\description{ +Helper function to sample from previous MCMC chain while proposing new knots +} +\author{ +Istem Fer +} From 46680dc51406062e55ea933cfc7896228de348ea Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 16 Apr 2018 13:58:37 -0400 Subject: [PATCH 0035/2289] save environment in ms too --- modules/assim.batch/R/pda.emulator.ms.R | 49 ++++++++++++++++++++----- modules/assim.batch/R/pda.postprocess.R | 2 +- 2 files changed, 41 insertions(+), 10 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 0230b0341aa..e04aaf4b593 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -47,8 +47,10 @@ pda.emulator.ms <- function(multi.settings) { multi.settings <- papply(multi.settings, pda.emulator, individual = individual) } - #PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDAMS.xml') + # write multi.settings with individual-pda info + PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDA_MS.xml') + ## -------------------------------- Prepare for Joint and Hierarchical ----------------------------------------- @@ -115,6 +117,10 @@ pda.emulator.ms <- function(multi.settings) { ## Get an ensemble id for global calibration tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) + ## history restart + hbpda.restart.file <- file.path(settings$outdir,paste0("history.hbpda", + settings$assim.batch$ensemble.id, ".Rdata")) + gp <- unlist(gp.stack, recursive = 
FALSE) # start the clock @@ -154,6 +160,9 @@ pda.emulator.ms <- function(multi.settings) { ptm.finish <- proc.time() - ptm.start logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) + current.step <- "END OF JOINT MCMC" + save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + mcmc.samp.list <- sf.samp.list <- list() for (c in seq_len(tmp.settings$assim.batch$chain)) { @@ -230,6 +239,9 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_joint") + current.step <- "JOINT - END" + save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + } # joint - if end ## -------------------------------------- Hierarchical MCMC ------------------------------------------ @@ -238,10 +250,19 @@ pda.emulator.ms <- function(multi.settings) { ## Get an ensemble id for hierarchical calibration tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) + ## history restart + hbpda.restart.file <- file.path(settings$outdir,paste0("history.hbpda", + settings$assim.batch$ensemble.id, ".Rdata")) + + ## Transform values from non-normal distributions to standard Normal ## it won't do anything if all priors are already normal norm_transform <- norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list) if(!norm_transform$normF){ # means SS values are transformed + + prior.all <- norm_transform$prior.all + prior.fn.all <- norm_transform$prior.fn.all + ## get new SS.stack with transformed values SS.stack <- norm_transform$normSS @@ -261,6 +282,9 @@ pda.emulator.ms <- function(multi.settings) { } + current.step <- "HIERARCHICAL MCMC PREP" + save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + ########### hierarchical MCMC function with Gibbs ############## @@ -306,7 +330,7 @@ pda.emulator.ms <- function(multi.settings) { # mu_f <- unlist(mu0) - P_f <- diag(jmp0) + P_f <- diag(1, nparam) P_f_inv <- solve(P_f) # # initialize mu_global (nparam) @@ -342,7 +366,7 @@ pda.emulator.ms <- function(multi.settings) { # initialize mu_site (nsite x nparam) mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) - mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= n.param) + mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) for(ns in 1:nsites){ repeat{ mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean @@ -375,7 +399,7 @@ pda.emulator.ms <- function(multi.settings) { # update site level jvars params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] #colnames(params.recent) <- names(x0) - jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], accept.count[v], params.recent[,,v])) + jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], params.recent[,,v])) jcov.arr <- abind::abind(jcov.list, along=3) musite.accept.count <- rep(0, nsites) # Reset counter @@ -478,11 +502,11 @@ pda.emulator.ms <- function(multi.settings) { # prepare for parallelization dcores <- parallel::detectCores() - 1 - ncores <- min(max(dcores, 1), settings$assim.batch$chain) + ncores <- min(max(dcores, 1), tmp.settings$assim.batch$chain) - logger.setOutputFile(file.path(settings$outdir, "pda.log")) + logger.setOutputFile(file.path(tmp.settings$outdir, "pda.log")) - cl 
<- parallel::makeCluster(ncores, type="FORK", outfile = file.path(settings$outdir, "pda.log")) + cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(tmp.settings$outdir, "pda.log")) ## Sample posterior from emulator @@ -499,11 +523,14 @@ pda.emulator.ms <- function(multi.settings) { ptm.finish <- proc.time() - ptm.start logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) + current.step <- "HIERARCHICAL MCMC END" + save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + # transform samples from std normal to prior quantiles - mcmc.out2 <- back_transform_posteriors(prior.list, prior.fn.all, prior.ind.all, mcmc.out) + mcmc.out2 <- back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out) # Collect global params in their own list and postprocess - mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") # Collect site-level params in their own list and postprocess @@ -511,6 +538,10 @@ pda.emulator.ms <- function(multi.settings) { mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_site_samp", ns = ns, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = paste0("_hierarchical_SL",ns)) } + + current.step <- "HIERARCHICAL - END" + save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + } # hierarchical - if end } diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index 3364e19b873..b843f7a5704 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -260,7 +260,7 @@ pda.sort.params <- function(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, mcmc.samp.list <- list() - for (c in seq_len(settings$assim.batch$chain)) { + for (c in seq_along(mcmc.out)) { if(sub.sample == "mu_global_samp"){ m <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]]), ncol = length(prior.ind.all.ns)) From d3279b2566b53547573cf18a3475b3dcd3e0fc96 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 17 Apr 2018 11:59:50 -0400 Subject: [PATCH 0036/2289] expand eqn --- modules/assim.batch/R/pda.emulator.ms.R | 26 ++++++++++++++----------- 1 file changed, 15 insertions(+), 11 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index e04aaf4b593..b5dcee14e01 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -118,8 +118,8 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) ## history restart - hbpda.restart.file <- file.path(settings$outdir,paste0("history.hbpda", - settings$assim.batch$ensemble.id, ".Rdata")) + hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.hbc", + tmp.settings$assim.batch$ensemble.id, ".Rdata")) gp <- unlist(gp.stack, recursive = FALSE) @@ -161,7 +161,7 @@ pda.emulator.ms <- function(multi.settings) { 
logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) current.step <- "END OF JOINT MCMC" - save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) mcmc.samp.list <- sf.samp.list <- list() @@ -240,7 +240,7 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_joint") current.step <- "JOINT - END" - save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) } # joint - if end @@ -251,8 +251,8 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) ## history restart - hbpda.restart.file <- file.path(settings$outdir,paste0("history.hbpda", - settings$assim.batch$ensemble.id, ".Rdata")) + hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.hbc", + tmp.settings$assim.batch$ensemble.id, ".Rdata")) ## Transform values from non-normal distributions to standard Normal @@ -283,7 +283,7 @@ pda.emulator.ms <- function(multi.settings) { } current.step <- "HIERARCHICAL MCMC PREP" - save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) ########### hierarchical MCMC function with Gibbs ############## @@ -356,7 +356,7 @@ pda.emulator.ms <- function(multi.settings) { # initialize tau_global (nparam x nparam) tau_global <- rWishart(1, tau_df, tau_V)[,,1] - # df to use while updating tau later, setting here one out of the loop + # df to use while updating tau later, setting here out of the loop tau_df <- tau_df + nsites # initialize jcov.arr (jump variances per site) @@ -410,7 +410,7 @@ pda.emulator.ms <- function(multi.settings) { ######################################## # update tau_global | mu_global, mu_site # - # tau_global ~ W(tau_df, tau_sigma) + # W(tau_global | mu_global, mu_site) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) # # tau_global : error precision matrix @@ -434,6 +434,10 @@ pda.emulator.ms <- function(multi.settings) { # mu_global : global parameters # global_mu : precision weighted average between the data (mu_site) and prior mean (mu_f) # global_Sigma : sum of mu_site and mu_f precision + # + # MVN(mu_global | mu_site, tau_global) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) + + global_Sigma <- solve(P_f + (nsites * sigma_global)) @@ -524,7 +528,7 @@ pda.emulator.ms <- function(multi.settings) { logger.info(paste0("Emulator MCMC took ", paste0(round(ptm.finish[3])), " seconds for ", paste0(tmp.settings$assim.batch$iter), " iterations.")) current.step <- "HIERARCHICAL MCMC END" - save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) # transform samples from std normal to prior quantiles mcmc.out2 <- back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out) @@ -540,7 +544,7 @@ pda.emulator.ms <- function(multi.settings) { } current.step <- "HIERARCHICAL - END" - save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) + save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) } # hierarchical - if end 
From 023ccbb463f2d8b5056c9d1213e75c4ff93e0b8c Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 17 Apr 2018 12:00:15 -0400 Subject: [PATCH 0037/2289] revise vignette --- .../vignettes/MultiSitePDAVignette.Rmd | 362 +++++++++++------- 1 file changed, 221 insertions(+), 141 deletions(-) diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index 394be923868..aa26aecd02e 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -17,14 +17,36 @@ This vignette documents the steps and settings of Hierarchical Bayesian Paramete Add the `` tag in your `pecan.xml` for the groups of sites you are interested in. In this vignette, we will use HB-PDA site group in BETYdb which consists of the temperate decidious broadleaf forest (DBF) sites of Ameriflux network. + ``` 1000000022 ``` +In the meantime, your `` tag will look like this: +``` + + + + AmerifluxLBL + SIPNET + pecan + + + 2004/01/01 + 2004/12/31 + +``` + +It's OK if not all your sites have the same `start.date` and `end.date`, multi settings functions will use this as a template to expand. -In the meantime, your `` section should specify ``, `` and `` information accordingly. E.g.: +Now start going through the workflow script and stop after: +``` +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") +``` +When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded with the same information repeated. Now you specify ``, `` and `` information accordingly if you want them to be different. E.g.: ``` @@ -46,23 +68,6 @@ In the meantime, your `` section should specify ``, `` - - 758 - 2010/01/01 - 2015/12/31 - Harvard Forest EMS Tower/HFR1 (US-Ha1) - 42.5378 - -72.1715 - - 2010/01/01 - 2015/12/31 - - - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-758/AMF_US-Ha1_BASE_HR_10-1.2010-01-01.2015-12-31.clim - - - - 767 2001/01/01 @@ -78,8 +83,8 @@ In the meantime, your `` section should specify ``, `` /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim - - + + 768 2005/01/01 @@ -95,8 +100,8 @@ In the meantime, your `` section should specify ``, `` /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-768/AMF_US-MOz_BASE_HH_7-1.2005-01-01.2015-12-31.clim - - + + 776 2007/01/01 @@ -112,8 +117,8 @@ In the meantime, your `` section should specify ``, `` /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-776/AMF_US-UMB_BASE_HH_10-1.2007-01-01.2014-12-31.clim - - + + 676 1999/01/01 @@ -129,8 +134,8 @@ In the meantime, your `` section should specify ``, `` /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-676/AMF_US-WCr_BASE_HH_11-1.1999-01-01.2006-12-31.clim - - + + 1000000109 2013/01/01 @@ -146,58 +151,89 @@ In the meantime, your `` section should specify ``, `` /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-109/AMF_CA-TPD_BASE_HH_1-1.2013-01-01.2015-12-31.clim - - + + - 740 - 1997/01/01 + 1000000145 + 2008/01/01 2010/12/31 - BOREAS SSA Old Aspen (CA-Oas) - 53.6289 - -106.198 + Chestnut Ridge (US-ChR) + 35.9311 + -84.3324 - 1997/01/01 + 2008/01/01 2010/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-740/AMF_CA-Oas_BASE_HH_1-1.1997-01-01.2010-12-31.clim + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-145/AMF_US-ChR_BASE_HH_2-1.2008-01-01.2010-12-31.clim + + + + + + 1000000066 + 2005/01/01 + 2014/12/31 + Silas Little- New Jersey (US-Slt) + 
39.9138 + -74.596 + + 2005/01/01 + 2014/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-66/AMF_US-Slt_BASE_HH_5-1.2005-01-01.2014-12-31.clim + + + 1000000061 + 2004/01/01 + 2013/12/31 + Oak Openings (US-Oho) + 41.5545 + -83.8438 + + 2004/01/01 + 2013/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-61/AMF_US-Oho_BASE_HH_4-1.2005-01-01.2006-12-31.clim + + + ``` +Re-read the updated `pecan.CHECKED.xml`, make sure to re-assign random effects as logical, otherwise `meta.analysis` will throw an error: +``` +settings <- read.settings("pecan.CHECKED.xml") +settings$meta.analysis$random.effects <- as.logical(settings$meta.analysis$random.effects) +``` -## Settings for HB-PDA - -Similar to PDA settings, we will contain HB-PDA settings under the `` section. Likewise standard PDA, if you have chosen the parameters you want to target in the calibration, open the `pecan.CONFIGS.xml` file, insert the tag below, and save as `pecan.HBPDA.xml`: - +Then continue with the rest of the workflow until `` module: ``` - - emulator.ms - local - 115 - 100000 - 3 - - - som_respiration_rate - soil_respiration_Q10 - - - half_saturation_PAR - dVPDSlope - psnTOpt - AmaxFrac - leafGrowth - - - +# Run parameter data assimilation +if ('assim.batch' %in% names(settings)) { + if (PEcAn.utils::status.check("PDA") == 0) { + PEcAn.utils::status.start("PDA") + settings <- PEcAn.assim.batch::runModule.assim.batch(settings) + PEcAn.utils::status.end() + } +} ``` -Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be selected under the `` tag above: local, global and hierarchical. If left empty, the function will go through all three modes. But note that, for both global and hierarhical calibration, we need site-level runs. We will start with local mode, i.e. site-level calibration. +Now add the PDA tags, see next section. -Now we will extend the settings above to include information about data to assimilate and likelihoods, similar to the standard-PDA setup, except for multiple sites. First, add `` tag to the multisettings in the bottom of your `pecan.HBPDA.xml`: + +## Settings for HB-PDA + +Similar to PDA settings, we will contain HB-PDA settings under the `` section. Likewise standard PDA, if you have chosen the parameters you want to target in the calibration, we will open the `pecan.CONFIGS.xml` file, and insert the `` tag. + +Different from the PDA settings, this time the main function will be the `emulator.ms`. This function can be run in three modes, which will be selected under the `` tag above: individual, joint and hierarchical. If left empty, the function will go through all three modes. But note that, for both joint and hierarhical calibration, we need site-level runs. We will start with individual mode, i.e. site-level calibration. + +Now open the `pecan.CONFIGS.xml` file. First, add `` tag to the multisettings in the bottom of your xml among others: ``` @@ -208,27 +244,28 @@ Now we will extend the settings above to include information about data to assim ``` -Now we can insert multi-site settings in the `` section accordingly (note that now the chunk above where we defined `method`, `mode`, `param.names` etc. will go under each `setting.*`): +Now we can insert multi-site settings in the `` section accordingly (note that `method`, `mode`, `param.names` etc. 
will go under each `setting.*`): ``` - + emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + @@ -247,27 +284,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 2000000128 + 1000013317 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-758/AMF_US-Ha1_BASE_HR_10-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-767/AMF_US-MMS_BASE_HR_8-1.csv Laplace 1000000042 @@ -280,27 +318,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 1000013317 + 1000013316 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-767/AMF_US-MMS_BASE_HR_8-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-768/AMF_US-MOz_BASE_HH_7-1.csv Laplace 1000000042 @@ -313,27 +352,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 1000013316 + 1000013832 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-768/AMF_US-MOz_BASE_HH_7-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-776/AMF_US-UMB_BASE_HH_10-1.csv Laplace 1000000042 @@ -346,27 +386,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 1000013832 + 1000012965 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-776/AMF_US-UMB_BASE_HH_10-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-676/AMF_US-WCr_BASE_HH_11-1.csv Laplace 1000000042 @@ -379,27 +420,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 1000012965 + 1000013836 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-676/AMF_US-WCr_BASE_HH_11-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_1-109/AMF_CA-TPD_BASE_HH_1-1.csv Laplace 1000000042 @@ -412,27 +454,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 1000013836 + 1000013546 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_1-109/AMF_CA-TPD_BASE_HH_1-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_1-145/AMF_US-ChR_BASE_HH_2-1.csv Laplace 1000000042 @@ -445,27 +488,28 @@ Now we can insert multi-site settings in the `` section accordingly emulator.ms - local - 60 - 10000 + individual + 70 + 100000 3 - + som_respiration_rate soil_respiration_Q10 - soilWHC - - + + + psnTOpt half_saturation_PAR dVPDSlope leafGrowth - + AmaxFrac + - 1000013838 + 1000013572 - 
/fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_0-740/AMF_CA-Oas_BASE_HH_1-1.csv + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_1-66/AMF_US-Slt_BASE_HH_5-1.csv Laplace 1000000042 @@ -476,9 +520,45 @@ Now we can insert multi-site settings in the `` section accordingly + + emulator.ms + individual + 70 + 100000 + 3 + + + som_respiration_rate + soil_respiration_Q10 + + + psnTOpt + half_saturation_PAR + dVPDSlope + leafGrowth + AmaxFrac + + + + + 1000013482 + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_site_1-61/AMF_US-Oho_BASE_HH_4-1.csv + + Laplace + 1000000042 + + FC + UST + + + + ``` +Now save this as `pecan.HBPDA.xml` and read it. + Following section briefly explains what happens after: ``` multi.settings <- read.settings("pecan.HBPDA.xml") From 10c0e09cb6f8220addc838b5c1938914cd71808a Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 8 Jul 2018 13:29:58 -0400 Subject: [PATCH 0038/2289] case for no bias param --- modules/assim.batch/R/pda.emulator.R | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index dd9c29f4ee0..53fa4035636 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -467,7 +467,12 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # but then, there are other things that needs to change in the emulator workflow # such as the way proposed parameters are used in estimation in get_ss function # so punting this development until it is needed - rng <- t(apply(SS[[isbias]][,-ncol(SS[[isbias]])], 2, range)) + if(any(unlist(any.mgauss) == "multipGauss")){ + colsel <- isbias + }else{ # first is as good as any + colsel <- 1 + } + rng <- t(apply(SS[[colsel]][,-ncol(SS[[colsel]])], 2, range)) if (run.normal | run.round) { @@ -481,7 +486,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, function(x) 0.1 * diff(eval(x, list(p = c(0.05, 0.95)))))[prior.ind.all] jmp.list[[c]] <- sqrt(jmp.list[[c]]) - init.list[[c]] <- as.list(SS[[isbias]][indx[c], -ncol(SS[[isbias]])]) + init.list[[c]] <- as.list(SS[[colsel]][indx[c], -ncol(SS[[colsel]])]) resume.list[[c]] <- NA } } From 6a5dfe533fa0c38caca7c881c567b4cc829061b8 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 8 Jul 2018 13:38:44 -0400 Subject: [PATCH 0039/2289] fix indices --- modules/assim.batch/R/pda.define.llik.R | 8 +++++--- modules/emulator/R/minimize.GP.R | 3 ++- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/modules/assim.batch/R/pda.define.llik.R b/modules/assim.batch/R/pda.define.llik.R index a58e30a7d7b..695e957a96c 100644 --- a/modules/assim.batch/R/pda.define.llik.R +++ b/modules/assim.batch/R/pda.define.llik.R @@ -172,8 +172,9 @@ pda.calc.llik <- function(pda.errors, llik.fn, llik.par) { for (k in seq_len(n.var)) { - j <- (k-1) %% length(llik.fn) + 1 - + j <- k %% length(llik.fn) + if(j==0) j <- length(llik.fn) + LL.vec[k] <- llik.fn[[j]](pda.errors[k], llik.par[[k]]) } @@ -201,7 +202,8 @@ pda.calc.llik.par <-function(settings, n, error.stats, hyper.pars){ for(k in seq_along(error.stats)){ - j <- (k-1) %% length(settings$assim.batch$inputs) + 1 + j <- k %% length(settings$assim.batch$inputs) + if(j==0) j <- length(settings$assim.batch$inputs) llik.par[[k]] <- list() diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R index 31056b3e477..42bcfca509b 100644 --- a/modules/emulator/R/minimize.GP.R +++ b/modules/emulator/R/minimize.GP.R @@ -125,7 +125,8 @@ get_ss <- function(gp, 
xnew, pos.check) { for(igp in seq_along(gp)){ Y <- mlegp::predict.gp(gp[[igp]], newData = X[, 1:ncol(gp[[igp]]$X), drop=FALSE], se.fit = TRUE) - j <- (igp %% length(pos.check)) + 1 + j <- (igp %% length(pos.check)) + if(j == 0) j <- length(pos.check) if(pos.check[j]){ if(Y$fit < 0){ From 97e0a9d7606c3b8862a375f0aa288511e509f90d Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 10 Jul 2018 17:32:57 -0400 Subject: [PATCH 0040/2289] fcn args --- modules/assim.batch/R/pda.utils.R | 19 ++++++++++++++++++- .../man/back_transform_posteriors.Rd | 12 ++++++++++++ .../assim.batch/man/norm_transform_priors.Rd | 16 ++++++++++++++++ 3 files changed, 46 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index e56ec759ece..0e3b3372f29 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -810,6 +810,15 @@ load_pda_history <- function(workdir, ensemble.id, objects){ } ##' Helper function that transforms the values of each parameter into N(0,1) equivalent +##' +##' @param prior.list list of prior data frames, same length as number of pfts +##' @param prior.fn.all list of expressions of d/r/q/p functions of the priors given in the prior.list +##' @param prior.ind.all a vector of indices identifying which params are targeted, indices refer to the row numbers when prior.list sublists are rbinded +##' @param SS.stack list of design matrices for the emulator, length = nsites, each sublist will be of length nvars +##' @param init.list list of initial values for the targeted params, they will change when the range is normalized +##' @param jmp.list list of hump variances, they will change when the range is normalized +##' +##' @return a list of new objects that contain the normalized versions ##' @author Istem Fer ##' @export norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list){ @@ -822,7 +831,7 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st if(!norm.check){ - rng <- array(NA, dim = c(length(prior.ind.all),2,length(SS.stack))) + rng <- array(NA, dim = c(length(prior.ind.all), 2, length(SS.stack))) # need to modify init.list and jmp.list as well parnames <- names(init.list[[1]]) @@ -867,6 +876,14 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st ##' Helper function that transforms the samples back to their original prior distribution equivalents +##' +##' @param prior.all a dataframe of all priors for both pfts +##' @param prior.fn.all list of expressions of d/r/q/p functions of the priors with original parameter space +##' @param prior.ind.all a vector of indices identifying targeted params +##' @param mcmc.out hierarchical MCMC outputs in standard-normal space +##' +##' @return hierarchical MCMC outputs in original parameter space +##' ##' @author Istem Fer ##' @export back_transform_posteriors <- function(prior.all, prior.fn.all, prior.ind.all, mcmc.out){ diff --git a/modules/assim.batch/man/back_transform_posteriors.Rd b/modules/assim.batch/man/back_transform_posteriors.Rd index 4338996fd0b..066c8446d04 100644 --- a/modules/assim.batch/man/back_transform_posteriors.Rd +++ b/modules/assim.batch/man/back_transform_posteriors.Rd @@ -6,6 +6,18 @@ \usage{ back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out) } +\arguments{ +\item{prior.all}{a dataframe of all priors for both pfts} + +\item{prior.fn.all}{list of expressions of d/r/q/p functions of the priors with original 
parameter space} + +\item{prior.ind.all}{a vector of indices identifying targeted params} + +\item{mcmc.out}{hierarchical MCMC outputs in standard-normal space} +} +\value{ +hierarchical MCMC outputs in original parameter space +} \description{ Helper function that transforms the samples back to their original prior distribution equivalents } diff --git a/modules/assim.batch/man/norm_transform_priors.Rd b/modules/assim.batch/man/norm_transform_priors.Rd index 76a0cc14d52..3980f591a12 100644 --- a/modules/assim.batch/man/norm_transform_priors.Rd +++ b/modules/assim.batch/man/norm_transform_priors.Rd @@ -7,6 +7,22 @@ norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list) } +\arguments{ +\item{prior.list}{list of prior data frames, same length as number of pfts} + +\item{prior.fn.all}{list of expressions of d/r/q/p functions of the priors given in the prior.list} + +\item{prior.ind.all}{a vector of indices identifying which params are targeted, indices refer to the row numbers when prior.list sublists are rbinded} + +\item{SS.stack}{list of design matrices for the emulator, length = nsites, each sublist will be of length nvars} + +\item{init.list}{list of initial values for the targeted params, they will change when the range is normalized} + +\item{jmp.list}{list of hump variances, they will change when the range is normalized} +} +\value{ +a list of new objects that contain the normalized versions +} \description{ Helper function that transforms the values of each parameter into N(0,1) equivalent } From 05388c1b213c906e0334e1c5d6e290a95171255b Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 10 Jul 2018 17:36:36 -0400 Subject: [PATCH 0041/2289] use first settings --- modules/assim.batch/R/pda.emulator.ms.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index b5dcee14e01..4fb1e6610de 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -138,12 +138,12 @@ pda.emulator.ms <- function(multi.settings) { mcmc.out <- parallel::parLapply(cl, 1:multi.settings[[1]]$assim.batch$chain, function(chain) { mcmc.GP(gp = gp, ## Emulator(s) x0 = init.list[[chain]], ## Initial conditions - nmcmc = 100000, ## Number of reps + nmcmc = multi.settings[[1]]$assim.batch$iter, ## Number of iters rng = rng, ## range format = "lin", ## "lin"ear vs "log" of LogLikelihood mix = "joint", ## Jump "each" dimension independently or update them "joint"ly jmp0 = jmp.list[[chain]], ## Initial jump size - ar.target = 0.3, ## Target acceptance rate + ar.target = multi.settings[[1]]$assim.batch$jump$ar.target, ## Target acceptance rate priors = prior.fn.all$dprior[prior.ind.all], ## priors settings = tmp.settings, # this is just for checking llik functions downstream run.block = TRUE, From d673a0597c1b9f20290432aa6645681af70db7ce Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 23 Jul 2018 14:17:53 -0500 Subject: [PATCH 0042/2289] Add ED2 SAS script May still need some fine-tuning for finding soil & veg states, but that *should* be doable by modifying parameters in this function --- models/ed/NAMESPACE | 1 + models/ed/R/SAS.ED2.R | 502 +++++++++++++++++++++++++++++++++++++++ models/ed/man/SAS.ED2.Rd | 98 ++++++++ 3 files changed, 601 insertions(+) create mode 100644 models/ed/R/SAS.ED2.R create mode 100644 models/ed/man/SAS.ED2.Rd diff --git a/models/ed/NAMESPACE b/models/ed/NAMESPACE index 39e6fc9855b..87b4bd81d28 100644 --- 
a/models/ed/NAMESPACE
+++ b/models/ed/NAMESPACE
@@ -3,6 +3,7 @@
 S3method(print,ed2in)
 S3method(write_ed2in,default)
 S3method(write_ed2in,ed2in)
+export(SAS.ED2)
 export(check_css)
 export(check_ed2in)
 export(check_ed_metheader)
diff --git a/models/ed/R/SAS.ED2.R b/models/ed/R/SAS.ED2.R
new file mode 100644
index 00000000000..2b6b47e6919
--- /dev/null
+++ b/models/ed/R/SAS.ED2.R
@@ -0,0 +1,502 @@
+##' @name SAS.ED2
+##' @title Use semi-analytical solution to accelerate model spinup
+##' @author Christine Rollinson, modified from original by Jaclyn Hatala-Matthes (2/18/14)
+##'         2014 Feb: Original ED SAS solution Script at PalEON modeling HIPS sites (Matthes)
+##'         2015 Aug: Modifications for greater site flexibility & updated ED
+##'         2016 Jan: Adaptation for regional-scale runs (single-cells run independently, but executed in batches)
+##'         2018 Jul: Conversion to function, Christine Rollinson July 2018
+##' @description This function approximates landscape equilibrium steady state for vegetation and
+##'     soil pools using the successional trajectory of a single patch modeled with disturbance
+##'     off and the prescribed disturbance rates for runs (Xia et al. 2012 GMD 5:1259-1271).
+##' @param dir.analy Location of ED2 analysis files; expects monthly and yearly output
+##' @param dir.histo Location of ED2 history files (for vars not in analy); expects monthly
+##' @param outdir Location to write SAS .css & .pss files
+##' @param prefix ED2 output file name prefix used to construct the monthly file names
+##' @param block Number of years between patch ages
+##' @param lat site latitude; used for file naming
+##' @param lon site longitude; used for file naming
+##' @param yrs.met Number of years cycled in model spinup part 1
+##' @param treefall Value to be used for TREEFALL_DISTURBANCE_RATE in ED2IN for full runs (disturbance on)
+##' @param sm_fire Value to be used for SM_FIRE if INCLUDE_FIRE=2; defaults to 0 (fire off)
+##' @param fire_intensity Value to be used for FIRE_PARAMETER; defaults to 0 (fire off)
+##' @param slxsand Soil percent sand; used to calculate expected fire return interval
+##' @param slxclay Soil percent clay; used to calculate expected fire return interval
+##' @param sufx ED2 out file suffix; used in constructing file names (default "g01.h5")
+##' @param decomp_scheme Decomposition scheme specified in ED2IN
+##' @param kh_active_depth Depth (m, negative down) above which soil temperature and moisture are averaged (default -0.20)
+##' @param Lc Used to compute nitrogen immobilization factor; ED default is 0.049787 (soil_respiration.f90)
+##' @param c2n_slow Carbon to Nitrogen ratio, slow pool; ED default 10.0
+##' @param c2n_structural Carbon to Nitrogen ratio, structural pool.
ED default 150.0
+##' @param r_stsc Decomp param
+##' @param decay_rate_fsc Decay rate of the fast soil carbon pool; default 11
+##' @param decay_rate_stsc Decay rate of the structural soil carbon pool; default 4.5
+##' @param decay_rate_ssc Decay rate of the slow soil carbon pool; default 0.2
+##' @param rh_decay_low Param used for ED-1/CENTURY decomp schemes; ED default = 0.24
+##' @param rh_decay_high Param used for ED-1/CENTURY decomp schemes; ED default = 0.60
+##' @param rh_low_temp Param used for ED-1/CENTURY decomp schemes; ED default = 291.15
+##' @param rh_high_temp Param used for ED-1/CENTURY decomp schemes; ED default = 318.15
+##' @param rh_decay_dry Param used for ED-1/CENTURY decomp schemes; ED default = 12.0
+##' @param rh_decay_wet Param used for ED-1/CENTURY decomp schemes; ED default = 36.0
+##' @param rh_dry_smoist Param used for ED-1/CENTURY decomp schemes; ED default = 0.48
+##' @param rh_wet_smoist Param used for ED-1/CENTURY decomp schemes; ED default = 0.98
+##' @param resp_opt_water Param used for decomp schemes 0 & 3, ED default = 0.8938
+##' @param resp_water_below_opt Param used for decomp schemes 0 & 3, ED default = 5.0786
+##' @param resp_water_above_opt Param used for decomp schemes 0 & 3, ED default = 4.5139
+##' @param resp_temperature_increase Param used for decomp schemes 0 & 3, ED default = 0.0757
+##' @param rh_lloyd_1 Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 308.56
+##' @param rh_lloyd_2 Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 1/56.02
+##' @param rh_lloyd_3 Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 227.15
+##' @export
+##'
+SAS.ED2 <- function(dir.analy, dir.histo, outdir, prefix, lat, lon, block, yrs.met=30,
+                    treefall, sm_fire=0, fire_intensity=0, slxsand=0.33, slxclay=0.33,
+                    sufx="g01.h5",
+                    decomp_scheme=2,
+                    kh_active_depth = -0.20,
+                    decay_rate_fsc=11, decay_rate_stsc=4.5, decay_rate_ssc=0.2,
+                    Lc=0.049787, c2n_slow=10.0, c2n_structural=150.0, r_stsc=0.3, # Constants from ED
+                    rh_decay_low=0.24, rh_decay_high=0.60,
+                    rh_low_temp=18.0+273.15, rh_high_temp=45.0+273.15,
+                    rh_decay_dry=12.0, rh_decay_wet=36.0,
+                    rh_dry_smoist=0.48, rh_wet_smoist=0.98,
+                    resp_opt_water=0.8938, resp_water_below_opt=5.0786, resp_water_above_opt=4.5139,
+                    resp_temperature_increase=0.0757,
+                    rh_lloyd_1=308.56, rh_lloyd_2=1/56.02, rh_lloyd_3=227.15
+                    ) {
+  if(!decomp_scheme %in% 0:4) stop("Invalid decomp_scheme")
+  # "block" is the number of years between patch ages; keep the short alias used throughout below
+  blckyr <- block
+  # create a directory for the initialization files
+  dir.create(outdir, recursive=T, showWarnings=F)
+
+  #---------------------------------------
+  # Setting up some specifics that vary by site (like soil depth)
+  #---------------------------------------
+  #Set directories
+  # dat.dir <- dir.analy
+  ann.files <- dir(dir.analy, "-Y-") #yearly files only
+
+  #Get time window
+  # Note: Need to make this more flexible to get the thing after "Y"
+  yrind <- which(strsplit(ann.files,"-")[[1]] == "Y")
+  yeara <- as.numeric(strsplit(ann.files,"-")[[1]][yrind+1]) #first year
+  yearz <- as.numeric(strsplit(ann.files,"-")[[length(ann.files)]][yrind+1]) #last full year
+  yrs <- seq(yeara+1, yearz, by=blckyr) # The years we're going to use as time steps for the demography
+  nsteps <- length(yrs) # The number of blocks = the number steps we'll have
+
+  # Need to get the layers being used for calculating temp & moist
+  # Note: In ED there's a pain in the butt way of doing this with the energy, but we're going to approximate
+  # slz <- c(-5.50, -4.50, -2.17, -1.50, -1.10, -0.80, -0.60, -0.45, -0.30, -0.20, -0.12, -0.06)
+  # dslz <- c(1.00, 2.33, 0.67, 0.40, 0.30, 0.20, 0.15, 0.15, 0.10, 0.08, 0.06, 0.06)
+  nc.temp <- ncdf4::nc_open(file.path(dir.analy, ann.files[1]))
+  slz <- ncdf4::ncvar_get(nc.temp, "SLZ")
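+  # For reference, SLZ (read above) lists each soil layer's bottom interface depth in
+  # meters (negative down, deepest layer first), as in the commented example. The loop
+  # below recovers layer thicknesses (dslz) as differences between successive interfaces,
+  # with the shallowest layer extending from its interface up to the soil surface (0 m).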
+  ncdf4::nc_close(nc.temp)
+
+  dslz <- vector(length=length(slz))
+  dslz[length(dslz)] <- 0-slz[length(dslz)]
+
+  for(i in 1:(length(dslz)-1)){
+    dslz[i] <- slz[i+1] - slz[i]
+  }
+
+  nsoil=which(slz >= kh_active_depth-1e-3) # Maximum depth for avg. temperature and moisture; adding a fudge factor because it's being weird
+  # nsoil=length(slz)
+  #---------------------------------------
+
+  #---------------------------------------
+  # First loop over analy files (faster than histo) to aggregate initial
+  # .css and .pss files for each site
+  #---------------------------------------
+  #create an empty storage for the patch info
+  pss.big <- matrix(nrow=length(yrs),ncol=13) # save every X yrs according to chunks specified above
+  colnames(pss.big) <- c("time","patch","trk","age","area","water","fsc","stsc","stsl",
+                         "ssc","psc","msn","fsn")
+
+  #---------------------------------------
+  # Finding the mean soil temp & moisture
+  # NOTE: I've been playing around with finding the best temp & soil moisture to initialize things
+  #       with; if the means from the spin met cycle work best, insert them here
+  #       This will also be necessary for helping update disturbance parameter
+  #---------------------------------------
+  slmsts <- calc.slmsts(slxsand, slxclay)
+  slpots <- calc.slpots(slxsand, slxclay)
+  slbs   <- calc.slbs(slxsand, slxclay)
+  soilcp <- calc.soilcp(slmsts, slpots, slbs)
+
+  # Calculating Soil fire characteristics
+  soilfr=0
+  if(abs(sm_fire)>0){
+    if(sm_fire>0){
+      soilfr <- smfire.pos(slmsts, soilcp, smfire=sm_fire)
+    } else {
+      soilfr <- smfire.neg(slmsts, slpots, smfire=sm_fire, slbs)
+    }
+  }
+
+
+  month.begin = 1
+  month.end   = 12
+
+  tempk.air <- tempk.soil <- moist.soil <- moist.soil.mx <- moist.soil.mn <- nfire <- vector()
+  for(y in yrs){
+    air.temp.tmp <- soil.temp.tmp <- soil.moist.tmp <- soil.mmax.tmp <- soil.mmin.tmp <- vector()
+    ind <- which(yrs == y)
+    for(m in month.begin:month.end){
+      #Make the file name.
+      year.now  <- sprintf("%4.4i",y)
+      month.now <- sprintf("%2.2i",m)
+      day.now   <- sprintf("%2.2i",0)
+      hour.now  <- sprintf("%6.6i",0)
+
+      file.now  <- paste(prefix,"-E-",year.now,"-",month.now,"-",day.now,"-"
+                         ,hour.now,"-",sufx,sep="")
+
+      # cat(" - Reading file :",file.now,"...","\n")
+      now <- ncdf4::nc_open(file.path(dir.analy,file.now))
+
+      air.temp.tmp  [m] <- ncdf4::ncvar_get(now, "MMEAN_ATM_TEMP_PY")
+      soil.temp.tmp [m] <- sum(ncdf4::ncvar_get(now, "MMEAN_SOIL_TEMP_PY")[nsoil]*dslz[nsoil]/sum(dslz[nsoil]))
+      soil.moist.tmp[m] <- sum(ncdf4::ncvar_get(now, "MMEAN_SOIL_WATER_PY")[nsoil]*dslz[nsoil]/sum(dslz[nsoil]))
+      soil.mmax.tmp [m] <- max(ncdf4::ncvar_get(now, "MMEAN_SOIL_WATER_PY"))
+      soil.mmin.tmp [m] <- min(ncdf4::ncvar_get(now, "MMEAN_SOIL_WATER_PY"))
+
+      ncdf4::nc_close(now)
+    } # End month loop
+    # Finding yearly means
+    tempk.air    [ind] <- mean(air.temp.tmp)
+    tempk.soil   [ind] <- mean(soil.temp.tmp)
+    moist.soil   [ind] <- mean(soil.moist.tmp)
+    moist.soil.mx[ind] <- max(soil.mmax.tmp)
+    moist.soil.mn[ind] <- min(soil.mmin.tmp)
+    nfire        [ind] <- length(which(soil.moist.tmp < soilfr)) # number of fire-triggering (dry) months this year
+  } # End year loop
+
+  soil_tempk     <- mean(tempk.soil)
+  rel_soil_moist <- mean(moist.soil/slmsts)
+  pfire          <- length(which(nfire>0))/length(nfire) # annual probability of fire
+  fire_return    <- ifelse(sum(nfire)>0, length(nfire)/length(which(nfire>0)), 0)
+
+  cat(paste0("mean soil temp             : ", round(soil_tempk, 2), "\n"))
+  cat(paste0("mean soil moist            : ", round(rel_soil_moist, 3), "\n"))
+  cat(paste0("fire return interval (yrs) : ", fire_return), "\n")
+  #---------------------------------------
+
+  #---------------------------------------
+  # Calculate area distribution based on geometric decay, loosely following your disturbance rates
+  # Note: This one varies from Jackie's original in that it lets your oldest, undisturbed bin
+  #       start a bit larger (everything leftover) to let it get cycled in naturally
+  #---------------------------------------
+  # ------
+  # Calculate the Rate of fire & total disturbance
+  # ------
+  fire_rate <- pfire * fire_intensity
+
+  # Total disturbance rate = treefall + fire
+  # -- treefall = % area/yr
+  disturb <- treefall + fire_rate
+  # ------
+
+  stand.age <- seq(yrs[1]-yeara,nrow(pss.big)*blckyr,by=blckyr)
+  area.dist <- vector(length=nrow(pss.big))
+  area.dist[1] <- sum(dgeom(0:(stand.age[2]-1), disturb))
+  for(i in 2:(length(area.dist)-1)){
+    area.dist[i] <- sum(dgeom((stand.age[i]):(stand.age[i+1]-1),disturb))
+  }
+  area.dist[length(area.dist)] <- 1 - sum(area.dist[1:(length(area.dist)-1)])
+  pss.big[,"area"] <- area.dist
+  #---------------------------------------
+
+
+  #---------------------------------------
+  # Extraction Loop Part 1: Cohorts!!
+ # This loop does the following: + # -- Extract cohort info from each age slice from *annual* *analy* files (these are annual means) + # -- Write cohort info to the .css file as a new patch for each age slice + # -- Dummy extractions of patch-level variables; all of the important variables here are place holders + #--------------------------------------- + cat(" - Reading analy files ...","\n") + for (y in yrs){ + now <- ncdf4::nc_open(file.path(dir.analy,ann.files[y-yeara+1])) + ind <- which(yrs == y) + + #Grab variable to see how many cohorts there are + ipft <- ncdf4::ncvar_get(now,'PFT') + + #--------------------------------------- + # organize into .css variables (Cohorts) + # Note: all cohorts from a time slice are assigned to a single patch representing a stand of age X + #--------------------------------------- + css.tmp <- matrix(nrow=length(ipft),ncol=10) + colnames(css.tmp) <- c("time", "patch", "cohort", "dbh", "hite", "pft", "n", "bdead", "balive", "Avgrg") + + css.tmp[,"time" ] <- rep(yeara,length(ipft)) + css.tmp[,"patch" ] <- rep(floor((y-yeara)/blckyr)+1,length(ipft)) + css.tmp[,"cohort"] <- 1:length(ipft) + css.tmp[,"dbh" ] <- ncdf4::ncvar_get(now,'DBH') + css.tmp[,"hite" ] <- ncdf4::ncvar_get(now,'HITE') + css.tmp[,"pft" ] <- ipft + css.tmp[,"n" ] <- ncdf4::ncvar_get(now,'NPLANT') + css.tmp[,"bdead" ] <- ncdf4::ncvar_get(now,'BDEAD') + css.tmp[,"balive"] <- ncdf4::ncvar_get(now,'BALIVE') + css.tmp[,"Avgrg" ] <- rep(0,length(ipft)) + + #save big .css matrix + if(y==yrs[1]){ + css.big <- css.tmp + } else{ + css.big <- rbind(css.big,css.tmp) + } + #--------------------------------------- + + + #--------------------------------------- + # save .pss variables (Patches) + # NOTE: patch AREA needs to be adjusted to be equal to the probability of a stand of age x on the landscape + #--------------------------------------- + pss.big[ind,"time"] <- 1800 + pss.big[ind,"patch"] <- floor((y-yeara)/blckyr)+1 + pss.big[ind,"trk"] <- 1 + pss.big[ind,"age"] <- y-yeara + # Note: the following are just place holders that will be overwritten post-SAS + # pss.big[ind,6] <- ncdf4::ncvar_get(now,"AREA") + pss.big[ind,"water"] <- 0.5 + pss.big[ind,"fsc"] <- ncdf4::ncvar_get(now,"FAST_SOIL_C") + pss.big[ind,"stsc"] <- ncdf4::ncvar_get(now,"STRUCTURAL_SOIL_C") + pss.big[ind,"stsl"] <- ncdf4::ncvar_get(now,"STRUCTURAL_SOIL_L") + pss.big[ind,"ssc"] <- ncdf4::ncvar_get(now,"SLOW_SOIL_C") + pss.big[ind,"psc"] <- 0 + pss.big[ind,"msn"] <- ncdf4::ncvar_get(now,"MINERALIZED_SOIL_N") + pss.big[ind,"fsn"] <- ncdf4::ncvar_get(now,"FAST_SOIL_N") + + ncdf4::nc_close(now) + } + #--------------------------------------- + + #--------------------------------------- + # Extraction Loop Part 2: Patches! 
+ # This loop does the following: + # -- Extract age slice (new patch) soil carbon conditions from *monthly* *histo* files + # -- Note: this is done because most of the necessary inputs for SAS are instantaneous values that + # are not currently tracked in analy files, let alone annual analy files; this could + # theoretically change in the future + # -- Monthly data is then aggregated to a yearly value: sum for carbon inputs; mean for temp/moist + # (if not calculated above) + #--------------------------------------- + pss.big <- pss.big[complete.cases(pss.big),] + + # some empty vectors for storage etc + fsc_in_y <- ssc_in_y <- ssl_in_y <- fsn_in_y <- pln_up_y <- vector() + fsc_in_m <- ssc_in_m <- ssl_in_m <- fsn_in_m <- pln_up_m <- vector() + # # NOTE: The following line should get removed if we roll with 20-year mean temp & moist + # soil_tempk_y <- soil_tempk_m <- swc_max_m <- swc_max_y <- swc_m <- swc_y <- vector() + + # switch to the histo directory + # dat.dir <- file.path(in.base,sites[s],"/analy/") + mon.files <- dir(dir.histo, "-S-") # monthly files only + + #Get time window + yeara <- as.numeric(strsplit(mon.files,"-")[[1]][yrind+1]) #first year + yearz <- as.numeric(strsplit(mon.files,"-")[[length(mon.files)-1]][yrind+1]) #last year + + montha <- as.numeric(strsplit(mon.files,"-")[[1]][yrind+2]) #first month + monthz <- as.numeric(strsplit(mon.files,"-")[[length(mon.files)-1]][yrind+2]) #last month + + cat(" - Processing History Files \n") + for (y in yrs){ + dpm <- lubridate::days_in_month(1:12) + if(lubridate::leap_year(y)) dpm[2] <- dpm[2]+1 + #calculate month start/end based on year + if (y == yrs[1]){ + month.begin = montha + }else{ + month.begin = 1 + } + if (y == yrs[length(yrs)]){ + month.end = monthz + }else{ + month.end = 12 + } + + for(m in month.begin:month.end){ + #Make the file name. 
+
+      year.now  <- sprintf("%4.4i",y)
+      month.now <- sprintf("%2.2i",m)
+      day.now   <- sprintf("%2.2i",1)
+      hour.now  <- sprintf("%6.6i",0)
+
+      # dat.dir <- paste(in.base,sites[s],"/histo/",sep="")
+      file.now  <- paste(prefix,"-S-",year.now,"-",month.now,"-",day.now,"-"
+                         ,hour.now,"-",sufx,sep="")
+
+      # cat("  - Reading file :",file.now,"...","\n")
+      now <- ncdf4::nc_open(file.path(dir.histo,file.now))
+
+      # Note: multiply the daily value by the days in the month to get a monthly estimate
+      fsc_in_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"FSC_IN")*dpm[m] #kg/(m2*day) --> kg/(m2*month)
+      ssc_in_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"SSC_IN")*dpm[m]
+      ssl_in_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"SSL_IN")*dpm[m]
+      fsn_in_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"FSN_IN")*dpm[m]
+      pln_up_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"TOTAL_PLANT_NITROGEN_UPTAKE")*dpm[m]
+      # ssc_in_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"SSC_IN")*dpm[m]
+
+      # # NOTE: the following lines should get removed if using 20-year means
+      # soil_tempk_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"SOIL_TEMPK_PA")[nsoil] # Surface soil temp
+      # swc_max_m[m-month.begin+1] <- max(ncdf4::ncvar_get(now,"SOIL_WATER_PA")) # max soil moist to avoid digging through water capacity stuff
+      # swc_m[m-month.begin+1] <- ncdf4::ncvar_get(now,"SOIL_WATER_PA")[nsoil] #Surface soil moist
+
+      ncdf4::nc_close(now)
+    }
+    # Find which patch we're working in
+    ind <- (y-yeara)/blckyr + 1
+
+    # Sum monthly values to get a total estimated carbon input
+    fsc_in_y[ind] <- sum(fsc_in_m,na.rm=TRUE)
+    ssc_in_y[ind] <- sum(ssc_in_m,na.rm=TRUE)
+    ssl_in_y[ind] <- sum(ssl_in_m,na.rm=TRUE)
+    fsn_in_y[ind] <- sum(fsn_in_m,na.rm=TRUE)
+    pln_up_y[ind] <- sum(pln_up_m,na.rm=TRUE)
+
+    # # Soil temp & moisture here should get deleted if using the 20-year means
+    # soil_tempk_y[ind] <- mean(soil_tempk_m,na.rm=TRUE)
+    # swc_y[ind] <- mean(swc_m,na.rm=TRUE)/max(swc_max_m,na.rm=TRUE)
+  }
+  #---------------------------------------
+
+  #---------------------------------------
+  # Calculate steady-state soil pools!
+  #
+  # These are the equations from soil_respiration.f90 -- if this module has changed, update these too
+  # Note: We ignore the unit conversions here because we're now working with the yearly
+  #       sum so that we end up with straight kgC/m2
+  #        fast_C_loss  <- kgCday_2_umols * A_decomp * decay_rate_fsc * fast_soil_C
+  #        struc_C_loss <- kgCday_2_umols * A_decomp * Lc * decay_rate_stsc * struct_soil_C * f_decomp
+  #        slow_C_loss  <- kgCday_2_umols * A_decomp * decay_rate_ssc * slow_soil_C
+  #---------------------------------------
+
+  # -----------------------
+  # Calculate the annual carbon loss if things are stable
+  # -----------
+  fsc_loss <- decay_rate_fsc
+  ssc_loss <- decay_rate_ssc
+  ssl_loss <- decay_rate_stsc
+  # -----------
+
+
+  # *************************************
+  # Calculate A_decomp according to your DECOMP_SCHEME
+  # A_decomp <- temperature_limitation * water_limitation # aka het_resp_weight
+  # *************************************
+  # ========================
+  # Temperature Limitation
+  # ========================
+  # soil_tempk <- sum(soil_tempo_y*area.dist)
+  if(decomp_scheme %in% c(0, 3)){
+    temperature_limitation = min(1,exp(resp_temperature_increase * (soil_tempk-318.15)))
+  } else if(decomp_scheme %in% c(1,4)){
+    lnexplloyd = rh_lloyd_1 * ( rh_lloyd_2 - 1. / (soil_tempk - rh_lloyd_3))
+    lnexplloyd = max(-38.,min(38,lnexplloyd))
+    temperature_limitation = min( 1.0, resp_temperature_increase * exp(lnexplloyd) )
+  } else if(decomp_scheme==2) {
+    # Low Temp Limitation
+    lnexplow <- rh_decay_low * (rh_low_temp - soil_tempk)
+    lnexplow <- max(-38, min(38, lnexplow))
+    tlow_fun <- 1 + exp(lnexplow)
+
+    # High Temp Limitation
+    lnexphigh <- rh_decay_high*(soil_tempk - rh_high_temp)
+    lnexphigh <- max(-38, min(38, lnexphigh))
+    thigh_fun <- 1 + exp(lnexphigh)
+
+    temperature_limitation <- 1/(tlow_fun*thigh_fun)
+  }
+  # ========================
+
+  # ========================
+  # Moisture Limitation
+  # ========================
+  # rel_soil_moist <- sum(swc_y*area.dist)
+  if(decomp_scheme %in% c(0,1)){
+    if (rel_soil_moist <= resp_opt_water){
+      water_limitation = exp( (rel_soil_moist - resp_opt_water) * resp_water_below_opt)
+    } else {
+      water_limitation = exp( (resp_opt_water - rel_soil_moist) * resp_water_above_opt)
+    }
+  } else if(decomp_scheme==2){
+    # Dry soil Limitation
+    lnexpdry <- rh_decay_dry * (rh_dry_smoist - rel_soil_moist)
+    lnexpdry <- max(-38, min(38, lnexpdry))
+    smdry_fun <- 1+exp(lnexpdry)
+
+    # Wet Soil limitation
+    lnexpwet <- rh_decay_wet * (rel_soil_moist - rh_wet_smoist)
+    lnexpwet <- max(-38, min(38, lnexpwet))
+    smwet_fun <- 1+exp(lnexpwet)
+
+    water_limitation <- 1/(smdry_fun * smwet_fun)
+  } else {
+    water_limitation = rel_soil_moist * 4.0893 - rel_soil_moist**2 * 3.1681 - 0.3195897
+  }
+  # ========================
+
+  A_decomp <- temperature_limitation * water_limitation # aka het_resp_weight
+  # *************************************
+
+  # *************************************
+  # Calculate the steady-state pools
+  # NOTE: Current implementation weights carbon input by patch size rather than using the
+  #       carbon balance from the oldest state (as was the first implementation)
+  # *************************************
+  # -------------------
+  # Do the carbon and fast nitrogen pools
+  # -------------------
+  fsc_ss <- fsc_in_y[length(fsc_in_y)]/(fsc_loss * A_decomp)
+  ssl_ss <- ssl_in_y[length(ssl_in_y)]/(ssl_loss * A_decomp * Lc) # Structural soil C
+  ssc_ss <- ((ssl_loss * A_decomp * Lc * ssl_ss)*(1 - r_stsc))/(ssc_loss * A_decomp )
+  fsn_ss <- fsn_in_y[length(fsn_in_y)]/(fsc_loss * A_decomp)
+  # -------------------
+
+  # -------------------
+  # Do the mineralized nitrogen calculation
+  # -------------------
+  #ED2: csite%mineralized_N_loss = csite%total_plant_nitrogen_uptake(ipa)
+  #     + csite%today_Af_decomp(ipa) * Lc * K1 * csite%structural_soil_C(ipa)
+  #     * ( (1.0 - r_stsc) / c2n_slow - 1.0 / c2n_structural)
+  msn_loss <- pln_up_y[length(pln_up_y)] +
+              A_decomp*Lc*ssl_loss*ssl_in_y[length(ssl_in_y)]*
+              ((1.0-r_stsc)/c2n_slow - 1.0/c2n_structural)
+
+  #fast_N_loss + slow_C_loss/c2n_slow
+  msn_med <- fsc_loss*A_decomp*fsn_in_y[length(fsn_in_y)]+ (ssc_loss * A_decomp)/c2n_slow
+
+  msn_ss <- msn_med/msn_loss
+  # -------------------
+  # *************************************
+
+  # *************************************
+  # Replace dummy values in patch matrix with the steady state calculations
+  # *************************************
+  # Figure out what steady state value index we should use
+  # Note: In the current implementation this should be 1 because we did the weighted averaging up front,
+  #       but if something went wrong and dimensions are off, use this to pick the last (etc)
+  p.use <- 1
+
+  # write the values to file
+  pss.big[,"patch"] <- 1:nrow(pss.big)
+  pss.big[,"area"]  <- area.dist
+  pss.big[,"fsc"]   <- rep(fsc_ss[p.use],nrow(pss.big)) # fsc
+  pss.big[,"stsc"]  <- rep(ssl_ss[p.use],nrow(pss.big)) # stsc
+  pss.big[,"stsl"]  <- rep(ssl_ss[p.use],nrow(pss.big)) # stsl (not used)
+  pss.big[,"ssc"]   <- rep(ssc_ss[p.use],nrow(pss.big)) # ssc
+  pss.big[,"msn"]   <- rep(msn_ss[p.use],nrow(pss.big)) # msn
+  pss.big[,"fsn"]   <- rep(fsn_ss[p.use],nrow(pss.big)) # fsn
+  # *************************************
+  #---------------------------------------
+
+  #---------------------------------------
+  # Write everything to file!!
+  #---------------------------------------
+  file.prefix=paste0(prefix, "-lat", lat, "lon", lon)
+  write.table(css.big,file=file.path(outdir,paste0(file.prefix,".css")),row.names=FALSE,append=FALSE,
+              col.names=TRUE,quote=FALSE)
+
+  write.table(pss.big,file=file.path(outdir,paste0(file.prefix,".pss")),row.names=FALSE,append=FALSE,
+              col.names=TRUE,quote=FALSE)
+  #---------------------------------------
+
+}
+
\ No newline at end of file
diff --git a/models/ed/man/SAS.ED2.Rd b/models/ed/man/SAS.ED2.Rd
new file mode 100644
index 00000000000..faf7af4330c
--- /dev/null
+++ b/models/ed/man/SAS.ED2.Rd
@@ -0,0 +1,98 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/SAS.ED2.R
+\name{SAS.ED2}
+\alias{SAS.ED2}
+\title{Use semi-analytical solution to accelerate model spinup}
+\usage{
+SAS.ED2(dir.analy, dir.histo, outdir, prefix, lat, lon, block, yrs.met = 30,
+  treefall, sm_fire = 0, fire_intensity = 0, slxsand = 0.33,
+  slxclay = 0.33, sufx = "g01.h5", decomp_scheme = 2,
+  kh_active_depth = -0.2, decay_rate_fsc = 11, decay_rate_stsc = 4.5,
+  decay_rate_ssc = 0.2, Lc = 0.049787, c2n_slow = 10,
+  c2n_structural = 150, r_stsc = 0.3, rh_decay_low = 0.24,
+  rh_decay_high = 0.6, rh_low_temp = 18 + 273.15, rh_high_temp = 45 +
+  273.15, rh_decay_dry = 12, rh_decay_wet = 36, rh_dry_smoist = 0.48,
+  rh_wet_smoist = 0.98, resp_opt_water = 0.8938,
+  resp_water_below_opt = 5.0786, resp_water_above_opt = 4.5139,
+  resp_temperature_increase = 0.0757, rh_lloyd_1 = 308.56,
+  rh_lloyd_2 = 1/56.02, rh_lloyd_3 = 227.15)
+}
+\arguments{
+\item{dir.analy}{Location of ED2 analysis files; expects monthly and yearly output}
+
+\item{dir.histo}{Location of ED2 history files (for vars not in analy); expects monthly}
+
+\item{outdir}{Location to write SAS .css & .pss files}
+
+\item{lat}{site latitude; used for file naming}
+
+\item{lon}{site longitude; used for file naming}
+
+\item{block}{Number of years between patch ages}
+
+\item{yrs.met}{Number of years cycled in model spinup part 1}
+
+\item{treefall}{Value to be used for TREEFALL_DISTURBANCE_RATE in ED2IN for full runs (disturbance on)}
+
+\item{sm_fire}{Value to be used for SM_FIRE if INCLUDE_FIRE=2; defaults to 0 (fire off)}
+
+\item{fire_intensity}{Value to be used for FIRE_PARAMETER; defaults to 0 (fire off)}
+
+\item{slxsand}{Soil percent sand; used to calculate expected fire return interval}
+
+\item{slxclay}{Soil percent clay; used to calculate expected fire return interval}
+
+\item{sufx}{ED2 out file suffix; used in constructing file names (default "g01.h5")}
+
+\item{decomp_scheme}{Decomposition scheme specified in ED2IN}
+
+\item{Lc}{Used to compute nitrogen immobilization factor; ED default is 0.049787 (soil_respiration.f90)}
+
+\item{c2n_slow}{Carbon to Nitrogen ratio, slow pool; ED Default 10.0}
+
+\item{c2n_structural}{Carbon to Nitrogen ratio, structural pool. ED default 150.0}
+
+\item{r_stsc}{Fraction of structural-pool decomposition respired; the remainder (1 - r_stsc) enters the slow pool}
+
+\item{rh_decay_low}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.24}
+
+\item{rh_decay_high}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.60}
+
+\item{rh_low_temp}{Param used for ED-1/CENTURY decomp schemes; ED default = 291}
+
+\item{rh_high_temp}{Param used for ED-1/CENTURY decomp schemes; ED default = 318.15}
+
+\item{rh_decay_dry}{Param used for ED-1/CENTURY decomp schemes; ED default = 12.0}
+
+\item{rh_decay_wet}{Param used for ED-1/CENTURY decomp schemes; ED default = 36.0}
+
+\item{rh_dry_smoist}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.48}
+
+\item{rh_wet_smoist}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.98}
+
+\item{resp_opt_water}{Param used for decomp schemes 0 & 3, ED default = 0.8938}
+
+\item{resp_water_below_opt}{Param used for decomp schemes 0 & 3, ED default = 5.0786}
+
+\item{resp_water_above_opt}{Param used for decomp schemes 0 & 3, ED default = 4.5139}
+
+\item{resp_temperature_increase}{Param used for decomp schemes 0 & 3, ED default = 0.0757}
+
+\item{rh_lloyd_1}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 308.56}
+
+\item{rh_lloyd_2}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 1/56.02}
+
+\item{rh_lloyd_3}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 227.15}
+}
+\description{
+This function approximates the landscape equilibrium steady state for vegetation and
+soil pools using the successional trajectory of a single patch modeled with disturbance
+off and the prescribed disturbance rates for runs (Xia et al. 2012 GMD 5:1259-1271).
+}
+\author{
+Christine Rollinson, modified from original by Jaclyn Hatala-Matthes (2/18/14)
+2014 Feb: Original ED SAS solution Script at PalEON modeling HIPS sites (Matthes)
+2015 Aug: Modifications for greater site flexibility & updated ED
+2016 Jan: Adaptation for regional-scale runs (single-cells run independently, but executed in batches)
+2018 Jul: Conversion to function, Christine Rollinson July 2018
+}

From c5dc0636a6504084f16a589b461dd8107da49e04 Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Fri, 31 Aug 2018 11:23:57 -0400
Subject: [PATCH 0043/2289] initial base script

---
 base/workflow/inst/batch_runs.R | 100 ++++++++++++++++++++++++++++++++
 1 file changed, 100 insertions(+)
 create mode 100644 base/workflow/inst/batch_runs.R

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
new file mode 100644
index 00000000000..776108eb0b9
--- /dev/null
+++ b/base/workflow/inst/batch_runs.R
@@ -0,0 +1,100 @@
+## Function to Create and execute pecan xml
+create_exec_test_xml <- function(run_list){
+
+  model_id <- run_list[1]
+  met <- run_list[2]
+  site_id<- run_list[3]
+  start_date<- run_list[4]
+  end_date<- run_list[5]
+  pecan_path<- run_list[6]
+  user_id<- run_list[7]
+  pft_name<- run_list[8]
+
+  config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php"))
+  bety <- betyConnect(paste0(pecan_path,"/web/config.php"))
+  con <- bety$con
+  settings <- list()
+  # Info
+  settings$info$notes <- paste0("Test_Run")
+  settings$info$userid <- user_id
+  settings$info$username <- "None"
+  settings$info$dates <- Sys.Date()
+  #Outdir
+  model.new <- tbl(bety, "models") %>% filter(model_id == id) %>% collect()
+  outdir_base<-config.list$output_folder
+  outdir_pre <- paste(model.new$model_name,format(as.Date(start_date), "%Y-%m"),format(as.Date(end_date), "%Y-%m"),met,site_id,"_test_runs",sep="",collapse =NULL)
+  outdir <- paste0(outdir_base,outdir_pre)
+  dir.create(outdir)
+  settings$outdir <- outdir
+  #Database BETY
+  settings$database$bety$user <- config.list$db_bety_username
+  settings$database$bety$password <- config.list$db_bety_password
+  settings$database$bety$host <- config.list$db_bety_hostname
+  settings$database$bety$dbname <- config.list$db_bety_database
+  settings$database$bety$driver <- "PostgreSQL"
+  settings$database$bety$write <- FALSE
+  #Database dbfiles
+  settings$database$dbfiles <- config.list$dbfiles_folder
+  #PFT
+  if (is.na(pft_name)){
+    pft <- tbl(bety, "pfts") %>% filter(modeltype_id == model.new$modeltype_id) %>% collect()
+    pft_name <- pft$name[1]
+  }
+  settings$pfts$pft$name <- pft_name
+  settings$pfts$pft$constants$num <- 1
+  #Meta Analysis
+  settings$meta.analysis$iter <- 3000
+  settings$meta.analysis$random.effects <- FALSE
+  #Ensemble
+  settings$ensemble$size <- 1
+  settings$ensemble$variable <- "GPP"
+  settings$ensemble$samplingspace$met$method <- "sampling"
+  settings$ensemble$samplingspace$parameters$method <- "uniform"
+  #Model
+  settings$model$id <- model.new$id
+  #Workflow
+  settings$workflow$id <- paste0("Test_run_","_",model.new$model_name)
+  settings$run$site$id <- site_id
+  settings$run$site$met.start <- start_date
+  settings$run$site$met.end <- end_date
+  settings$run$inputs$met$source <- met
+  settings$run$inputs$met$output <- model.new$model_name
+  settings$run$inputs$met$username <- "pecan"
+  settings$run$start.date <- start_date
+  settings$run$end.date <- end_date
+  settings$host$name <-config.list$db_bety_hostname
+
+  #create file and Run
+  saveXML(listToXml(settings, "pecan"), file=paste0(outdir,"/","pecan.xml"))
+  file.copy(paste0(config.list$pecan_home,"web/","workflow.R"),to = outdir)
+  setwd(outdir)
+  source("workflow.R")
+
+}
+
+
+
+
+##Create Run Args
+pecan_path <- "~/pecan"
+config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php"))
+bety <- betyConnect(paste0(pecan_path,"/web/config.php"))
+con <- bety$con
+machineid <- 99000000001 #ID of the VM
+model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == machineid) %>%
+  filter(container_type == "Model")%>% select(container_id) %>% collect()
+
+
+models <- model_ids$container_id
+met_name <- c("CRUNCEP","AmerifluxLBL")
+site_id <- "772"
+startdate<-"2004/01/01"
+enddate<-"2004/12/31"
+
+#Create permutations of arg combinations
+run_table <- expand.grid(models,met_name,site_id, startdate,enddate,pecan_path,scientific =FALSE)
+
+#Apply function by row of run_table. Return NA if error occurs
+apply(run_table,MARGIN = 1,FUN = purrr::possibly(create_exec_test_xml,otherwise = NA))
+
+

From 317373317f78da9f653a7df721ba29d9c909c3cf Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Mon, 3 Sep 2018 13:43:32 -0400
Subject: [PATCH 0044/2289] add sink to file and machine id find

---
 base/workflow/inst/batch_runs.R | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 776108eb0b9..23e09136f15 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -68,20 +68,26 @@ create_exec_test_xml <- function(run_list){
   saveXML(listToXml(settings, "pecan"), file=paste0(outdir,"/","pecan.xml"))
   file.copy(paste0(config.list$pecan_home,"web/","workflow.R"),to = outdir)
   setwd(outdir)
+  ##Name log file
+  sink(file = "run_out.txt",type = c("output","message"), split =TRUE)
   source("workflow.R")
-
+  sink(file=NULL)
 }
 
 
 
 
 ##Create Run Args
-pecan_path <- "~/pecan"
+pecan_path <- "/fs/data3/tonygard/work/pecan"
 config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php"))
 bety <- betyConnect(paste0(pecan_path,"/web/config.php"))
 con <- bety$con
-machineid <- 99000000001 #ID of the VM
-model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == machineid) %>%
+
+## Find name of Machine R is running on
+mach_name <- Sys.info()[[4]]
+mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id)
+
+model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>%
   filter(container_type == "Model")%>% select(container_id) %>% collect()

From 6115d9674f554770df234bc3ad53eb54189447e7 Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Tue, 4 Sep 2018 17:50:59 -0400
Subject: [PATCH 0045/2289] better run execution: write outcomes to a status
 table and improve logs

---
 base/workflow/inst/batch_runs.R | 52 +++++++++++++++++++++------------
 1 file changed, 33 insertions(+), 19 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 23e09136f15..c5440853230 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -1,19 +1,24 @@
 ## Function to Create and execute pecan xml
 create_exec_test_xml <- function(run_list){
+  library(PEcAn.DB)
+  library(dplyr)
+  library(PEcAn.utils)
+  library(XML)
+  library(PEcAn.settings)
+  model_id <- run_list[[1]]
+  met <- run_list[[2]]
+  site_id<- run_list[[3]]
+  start_date<- run_list[[4]]
+  end_date<- run_list[[5]]
+  pecan_path<- run_list[[6]]
+  user_id<- NA
+  pft_name<- NA
 
-  model_id <- run_list[1]
-  met <- run_list[2]
-  site_id<- run_list[3]
-  start_date<- run_list[4]
-  end_date<- run_list[5]
-  pecan_path<- run_list[6]
-  user_id<- run_list[7]
-  pft_name<- run_list[8]
-
-  config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php"))
+  config.list <-read_web_config(paste0(pecan_path,"/web/config.php"))
   bety <- betyConnect(paste0(pecan_path,"/web/config.php"))
   con <- bety$con
   settings <- list()
+
   # Info
   settings$info$notes <- paste0("Test_Run")
   settings$info$userid <- user_id
@@ -22,7 +27,10 @@ create_exec_test_xml <- function(run_list){
   #Outdir
   model.new <- tbl(bety, "models") %>% filter(model_id == id) %>% collect()
   outdir_base<-config.list$output_folder
-  outdir_pre <- paste(model.new$model_name,format(as.Date(start_date), "%Y-%m"),format(as.Date(end_date), "%Y-%m"),met,site_id,"_test_runs",sep="",collapse =NULL)
+  outdir_pre <- paste(model.new$model_name,format(as.Date(start_date), "%Y-%m"),
+                      format(as.Date(end_date), "%Y-%m"),
+                      met,site_id,"_test_runs",
+                      sep="",collapse =NULL)
   outdir <- paste0(outdir_base,outdir_pre)
   dir.create(outdir)
   settings$outdir <- outdir
@@ -69,9 +77,11 @@ create_exec_test_xml <- function(run_list){
   file.copy(paste0(config.list$pecan_home,"web/","workflow.R"),to = outdir)
   setwd(outdir)
   ##Name log file
-  sink(file = "run_out.txt",type = c("output","message"), split =TRUE)
+  log <- file("workflow.Rout", open = "wt")
+  sink(log)
+  sink(log, type = "message")
   source("workflow.R")
-  sink(file=NULL)
+  sink()
 }
@@ -88,19 +98,23 @@
 ## Find name of Machine R is running on
 mach_name <- Sys.info()[[4]]
 mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id)
 
 model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>%
-  filter(container_type == "Model")%>% select(container_id) %>% collect()
+  filter(container_type == "Model") %>% pull(container_id)
 
 
-models <- model_ids$container_id
+models <- model_ids
 met_name <- c("CRUNCEP","AmerifluxLBL")
 site_id <- "772"
 startdate<-"2004/01/01"
 enddate<-"2004/12/31"
 
 #Create permutations of arg combinations
-run_table <- expand.grid(models,met_name,site_id, startdate,enddate,pecan_path,scientific =FALSE)
-
-#Apply function by row of run_table. Return NA if error occurs
-apply(run_table,MARGIN = 1,FUN = purrr::possibly(create_exec_test_xml,otherwise = NA))
+options(scipen = 999)
+run_table <- expand.grid(models,met_name,site_id, startdate,enddate,pecan_path,stringsAsFactors = FALSE)
+
+#Execute function to spit out a table with a column of NA or success
+tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr:::possibly(function(...){
+    create_exec_test_xml(list(...))
+  },otherwise =NA))
+  )

From 211b06284dade93f80d94054a086dec7c9765a0b Mon Sep 17 00:00:00 2001
From: Anthony Kenya Gardella
Date: Wed, 5 Sep 2018 10:27:27 -0400
Subject: [PATCH 0046/2289] extra colon

---
 base/workflow/inst/batch_runs.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index c5440853230..1c1e802d05f 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -113,7 +113,7 @@ options(scipen = 999)
 run_table <- expand.grid(models,met_name,site_id, startdate,enddate,pecan_path,stringsAsFactors = FALSE)
 
 #Execute function to spit out a table with a column of NA or success
-tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr:::possibly(function(...){
+tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){
     create_exec_test_xml(list(...))
   },otherwise =NA))
   )

From 3b8f6718df9014db5c60996966f88e80860a9c3a Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Mon, 17 Sep 2018 16:52:14 -0400
Subject: [PATCH 0047/2289] find site with no inputs

---
 base/workflow/inst/batch_runs.R | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 1c1e802d05f..f6192c978be 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -104,10 +104,39 @@ model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>%
 
 models <- model_ids
 met_name <- c("CRUNCEP","AmerifluxLBL")
-site_id <- "772"
 startdate<-"2004/01/01"
 enddate<-"2004/12/31"
 
+## Find Sites
+## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group
+site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>%
+  inner_join(tbl(bety, "sitegroups_sites")
+             %>% filter(sitegroup_id == 1),
+             by = c("id" = "site_id")) %>%
+  dplyr::select("id.x", "notes", "sitename") %>%
+  dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% collect() %>%
+  dplyr::mutate(
+    start_year = stringi::stri_extract_first_regex(notes, "[0-9]+"),
+    end_year = if_else(
+      stringi::stri_extract_last_regex(notes, "[0-9]+") == start_year,
+      as.character(lubridate::year(Sys.Date())),
+      stringi::stri_extract_last_regex(notes, "[0-9]+")
+    ),
+    contains_run = if_else(
+      between(lubridate::year(startdate), start_year, end_year),
+      "TRUE",
+      "FALSE"
+    ),
+    len = as.integer(end_year) - as.integer(start_year)
+  ) %>%
+  filter(contains_run == TRUE) %>%
+  filter(str_length(end_year) == 4) %>%
+  filter(len == max(len)) %>%
+  select("id.x")
+
+
+site_id <- "772"
+
 #Create permutations of arg combinations
 options(scipen = 999)
 run_table <- expand.grid(models,met_name,site_id, startdate,enddate,pecan_path,stringsAsFactors = FALSE)

From 2bf4d4520b97ae5272cbaa50092c667c973eb16b Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Thu, 27 Sep 2018 14:15:48 -0400
Subject: [PATCH 0048/2289] add sa and ens args

---
 base/workflow/inst/batch_runs.R | 46 +++++++++++++++++++++++++--------
 1 file changed, 35 insertions(+), 11 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index f6192c978be..db30e93bb3f 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -11,6 +11,10 @@ create_exec_test_xml <- function(run_list){
   start_date<- run_list[[4]]
   end_date<- run_list[[5]]
   pecan_path<- run_list[[6]]
+  out.var<- run_list[[7]]
+  ensemble<- run_list[[8]]
+  ens_size<- run_list[[9]]
+  sensitivity<- run_list[[10]]
   user_id<- NA
   pft_name<- NA
 
@@ -54,10 +58,26 @@ create_exec_test_xml <- function(run_list){
   settings$meta.analysis$iter <- 3000
   settings$meta.analysis$random.effects <- FALSE
   #Ensemble
-  settings$ensemble$size <- 1
-  settings$ensemble$variable <- "GPP"
-  settings$ensemble$samplingspace$met$method <- "sampling"
-  settings$ensemble$samplingspace$parameters$method <- "uniform"
+  if(ensemble){
+    settings$ensemble$size <- ens_size
+    settings$ensemble$variable <- out.var
+    settings$ensemble$samplingspace$met$method <- "sampling"
+    settings$ensemble$samplingspace$parameters$method <- "uniform"
+  }else{
+    settings$ensemble$size <- 1
+    settings$ensemble$variable <- out.var
+    settings$ensemble$samplingspace$met$method <- "sampling"
+    settings$ensemble$samplingspace$parameters$method <- "uniform"
+  }
+  #Sensitivity
+  if(sensitivity){
+    settings$sensitivity.analysis$quantiles <- list()
+    settings$sensitivity.analysis$quantiles$sigma1 <- -2
+    settings$sensitivity.analysis$quantiles$sigma2 <- -1
+    settings$sensitivity.analysis$quantiles$sigma3 <- 1
+    settings$sensitivity.analysis$quantiles$sigma4 <- 2
+    names(settings$sensitivity.analysis$quantiles) <- c("sigma","sigma","sigma","sigma")
+  }
   #Model
   settings$model$id <- model.new$id
   #Workflow
@@ -96,9 +116,9 @@ con <- bety$con
 ## Find name of Machine R is running on
 mach_name <- Sys.info()[[4]]
 mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id)
-
+
 model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>%
-  filter(container_type == "Model") %>% pull(container_id)
+  filter(container_type == "Model") %>% pull(container_id)
 
 
 models <- model_ids
 met_name <- c("CRUNCEP","AmerifluxLBL")
 startdate<-"2004/01/01"
 enddate<-"2004/12/31"
+out.var <- "NPP"
+ensemble <- TRUE
+ens_size <- 100
+sensitivity <- TRUE
 ## Find Sites
 ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group
 site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>%
@@ -139,11 +162,12 @@
 site_id <- "772"
 
 #Create permutations of arg combinations
 options(scipen = 999)
-run_table <- expand.grid(models,met_name,site_id, startdate,enddate,pecan_path,stringsAsFactors = FALSE)
+run_table <- expand.grid(models,met_name,site_id, startdate, enddate,
+                         pecan_path,out.var, ensemble, ens_size, sensitivity, stringsAsFactors = FALSE)
 
 #Execute function to spit out a table with a column of NA or success
-tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){
-    create_exec_test_xml(list(...))
-  },otherwise =NA))
-  )
+tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){
+  create_exec_test_xml(list(...))
+},otherwise =NA))
+)

From 225ffb94ba3a430b83c099b2edd97bbeca258996 Mon Sep 17 00:00:00 2001
From: istfer
Date: Thu, 25 Oct 2018 09:36:18 -0400
Subject: [PATCH 0049/2289] extend sites to 12 in the vignette

---
 base/remote/man/remote.execute.R.Rd        |  4 +-
 base/remote/man/start.model.runs.Rd        |  3 +-
 base/remote/man/start_qsub.Rd              |  4 +-
 modules/assim.batch/man/pda.emulator.Rd    |  6 +-
 modules/assim.batch/man/pda.plot.params.Rd |  4 +-
 .../vignettes/MultiSitePDAVignette.Rmd     | 63 +++++++++++++++++--
 6 files changed, 67 insertions(+), 17 deletions(-)

diff --git a/base/remote/man/remote.execute.R.Rd b/base/remote/man/remote.execute.R.Rd
index 9c00555ba5e..ae8f76b7aab 100644
--- a/base/remote/man/remote.execute.R.Rd
+++ b/base/remote/man/remote.execute.R.Rd
@@ -4,8 +4,8 @@
 \alias{remote.execute.R}
 \title{Execute command remotely}
 \usage{
-remote.execute.R(script, host = "localhost", user = NA,
-  verbose = FALSE, R = "R", scratchdir = tempdir())
+remote.execute.R(script, host = "localhost", user = NA, verbose = FALSE,
+  R = "R", scratchdir = tempdir())
 }
 \arguments{
 \item{script}{the script to be invoked, as a list of commands.}
diff --git a/base/remote/man/start.model.runs.Rd b/base/remote/man/start.model.runs.Rd
index 02985be0760..85d06d5fbd9 100644
--- a/base/remote/man/start.model.runs.Rd
+++ b/base/remote/man/start.model.runs.Rd
@@ -4,8 +4,7 @@
 \alias{start.model.runs}
 \title{Start selected ecosystem model runs within PEcAn workflow}
 \usage{
-\method{start}{model.runs}(settings, write = TRUE,
-  stop.on.error = TRUE)
+\method{start}{model.runs}(settings, write = TRUE, stop.on.error = TRUE)
 }
 \arguments{
 \item{settings}{pecan settings object}
diff --git a/base/remote/man/start_qsub.Rd b/base/remote/man/start_qsub.Rd
index 4c135b3d151..e5bb651e883 100644
--- a/base/remote/man/start_qsub.Rd
+++ b/base/remote/man/start_qsub.Rd
@@ -4,8 +4,8 @@
 \alias{start_qsub}
 \title{Start qsub runs}
 \usage{
-start_qsub(run, qsub_string, rundir, host, host_rundir, host_outdir,
-  stdout_log, stderr_log, job_script, qsub_extra = NULL)
+start_qsub(run, qsub_string, rundir, host, host_rundir, host_outdir, stdout_log,
+  stderr_log, job_script, qsub_extra = NULL)
 }
 \arguments{
 \item{run}{(numeric) run ID, as an integer}
diff --git a/modules/assim.batch/man/pda.emulator.Rd b/modules/assim.batch/man/pda.emulator.Rd
index b9ced428e04..ba7a3d3e1ea 100644
--- a/modules/assim.batch/man/pda.emulator.Rd
+++ b/modules/assim.batch/man/pda.emulator.Rd
@@ -5,9 +5,9 @@
 \title{Parameter Data Assimilation using emulator}
 \usage{
 pda.emulator(settings, external.data = NULL, external.priors = NULL,
-  params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL,
-  iter = NULL, adapt = NULL, adj.min = NULL, ar.target = NULL,
-  jvar = NULL, n.knot = NULL, individual = TRUE)
+  params.id = NULL, param.names = NULL, prior.id = NULL,
+  chain = 
NULL, iter = NULL, adapt = NULL, adj.min = NULL, + ar.target = NULL, jvar = NULL, n.knot = NULL, individual = TRUE) } \arguments{ \item{settings}{= a pecan settings list} diff --git a/modules/assim.batch/man/pda.plot.params.Rd b/modules/assim.batch/man/pda.plot.params.Rd index b86f51c8e70..c87341bb0e6 100644 --- a/modules/assim.batch/man/pda.plot.params.Rd +++ b/modules/assim.batch/man/pda.plot.params.Rd @@ -4,8 +4,8 @@ \alias{pda.plot.params} \title{Plot PDA Parameter Diagnostics} \usage{ -pda.plot.params(settings, mcmc.param.list, prior.ind, par.file.name = NULL, - sffx) +pda.plot.params(settings, mcmc.param.list, prior.ind, + par.file.name = NULL, sffx) } \arguments{ \item{all}{params are the identically named variables in pda.mcmc / pda.emulator} diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index aa26aecd02e..ea88a78e05c 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -153,6 +153,23 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded + + 755 + 2002/01/01 + 2008/12/31 + Duke Forest-hardwoods (US-Dk2) + 35.9736 + -79.1004 + + 2002/01/01 + 2008/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-755/AMF_US-Dk2_BASE_HH_4-4.2002-01-01.2008-12-31.clim + + + + 1000000145 2008/01/01 @@ -168,8 +185,8 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-145/AMF_US-ChR_BASE_HH_2-1.2008-01-01.2010-12-31.clim - - + + 1000000066 2005/01/01 @@ -185,8 +202,8 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-66/AMF_US-Slt_BASE_HH_5-1.2005-01-01.2014-12-31.clim - - + + 1000000061 2004/01/01 @@ -199,10 +216,44 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2013/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-61/AMF_US-Oho_BASE_HH_4-1.2005-01-01.2006-12-31.clim + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-61/AMF_US-Oho_BASE_HH_4-1.2004-01-01.2013-12-31.clim - + + + + 740 + 1997/01/01 + 2010/12/31 + BOREAS SSA Old Aspen (CA-Oas) + 53.6289 + -106.198 + + 1997/01/01 + 2010/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-740/AMF_CA-Oas_BASE_HH_1-1.1997-01-01.2010-12-31.clim + + + + + + 758 + 2010/01/01 + 2015/12/31 + Harvard Forest EMS Tower/HFR1 (US-Ha1) + 42.5378 + -72.1715 + + 2010/01/01 + 2015/12/31 + + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-758/AMF_US-Ha1_BASE_HR_10-1.2010-01-01.2015-12-31.clim + + + ``` From cd6baeab5dc4caa5f7bbdb23c0db261d41babb24 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 25 Oct 2018 11:07:56 -0400 Subject: [PATCH 0050/2289] add day on/off to the std vars --- base/utils/data/standard_vars.csv | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/base/utils/data/standard_vars.csv b/base/utils/data/standard_vars.csv index d6d72944b67..94c54ef2f4d 100755 --- a/base/utils/data/standard_vars.csv +++ b/base/utils/data/standard_vars.csv @@ -104,10 +104,12 @@ Root carbon content, optionally by size class; alternatively specify fine_ and c "PAR","surface_downwelling_photosynthetic_photon_flux_in_air","mol m-2 s-1","Photosynthetically Active Radiation","Driver","real","lon","lat","time",NA,"Photosynthetically Active Radiation" "precipf",NA,"kg m-2 s-1","Precipitation","Driver","real","lon","lat","time",NA,"The 
per unit area and time precipitation representing the sum of convective rainfall, stratiform rainfall, and snowfall" "BA",NA,"m2 ha-1","Basal area","Diversity","real","lon","lat","time","pft","Basal area by PFT" -"Dens",NA,"1 ha-1","Stem Density","Diversity","real","lon","lat","time","pft","Stem Density by PFT" +"Dens",NA,"1 ha-1","Stem Density","Diversity","real","lon","lat","time","pft","NA","Stem Density by PFT" "DBH",NA,"cm","Diameter at Breast Height","Diversity","real","lon","lat","time","pft","NA","DBH by PFT" "Fcomp",NA,"kgC kgC-1","Aboveground Biomass Fractional Composition","Diversity","real","lon","lat","time","pft","Aboveground biomass Fractional composition of each PFT within each grid cell" "Estab",NA,"1 ha-1","New Individuals","Diversity","real","lon","lat","time","pft","New Individuals" "Mort",NA,"1 ha-1","Mortality","Diversity","real","lon","lat","time","pft","Individuals lost through death" "SoilDepth",NA,"m","Soil Depth Layer","Deprecated","real","depth",NA,NA,NA,"Depth to the bottom of each model-defined soil layer" "assimilation_rate",NA,"kg C m-2 s-1","Leaf assimilation rate","Carbon Fluxes","real","lon","lat","time",NA,"Rate of leaf photosynthesis / carbon assimilation" +"date_of_budburst",NA,"day of year","Date of Budburst","Phenology","real","lon","lat","time",NA,"Date of Budburst" +"date_of_senescence",NA,"day of year","Date of Senescence","Phenology","real","lon","lat","time",NA,"Date of Senescence" \ No newline at end of file From c59be35baf1d7f1879dc72b932f858c6a5e0f972 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 25 Oct 2018 11:38:15 -0400 Subject: [PATCH 0051/2289] pass leafOnDay/leafOffDay via IC --- models/sipnet/R/write.configs.SIPNET.R | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R index 0045ced8c3d..19fa7b13c78 100644 --- a/models/sipnet/R/write.configs.SIPNET.R +++ b/models/sipnet/R/write.configs.SIPNET.R @@ -460,11 +460,21 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if (!is.na(snow) && is.numeric(snow)) { param[which(param[, 1] == "snowInit"), 2] <- udunits2::ud.convert(snow, "kg m-2", "g cm-2") # BETY: kg m-2 } - ## microbeInit mgC/g soil + ## leafOnDay + leafOnDay <- try(ncdf4::ncvar_get(IC.nc,"date_of_budburst"),silent = TRUE) + if (!is.na(leafOnDay) && is.numeric(leafOnDay)) { + param[which(param[, 1] == "leafOnDay"), 2] <- leafOnDay + } + ## leafOffDay + leafOffDay <- try(ncdf4::ncvar_get(IC.nc,"date_of_senescence"),silent = TRUE) + if (!is.na(leafOffDay) && is.numeric(leafOffDay)) { + param[which(param[, 1] == "leafOffDay"), 2] <- leafOffDay + } microbe <- try(ncdf4::ncvar_get(IC.nc,"Microbial Biomass C"),silent = TRUE) if (!is.na(microbe) && is.numeric(microbe)) { param[which(param[, 1] == "microbeInit"), 2] <- udunits2::ud.convert(microbe, "mg kg-1", "mg g-1") #BETY: mg microbial C kg-1 soil } + ncdf4::nc_close(IC.nc) }else{ PEcAn.logger::logger.error("Bad initial conditions filepath; keeping defaults") From b1b456640ddcc9296a1f279dc8811756d5763387 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Nov 2018 11:05:27 -0500 Subject: [PATCH 0052/2289] allow re-run if process data file is not there --- modules/assim.batch/R/pda.emulator.R | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 53fa4035636..5db05eac554 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ 
b/modules/assim.batch/R/pda.emulator.R @@ -56,9 +56,11 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # load inputs with neff if this is another round if(!run.normal){ external_data_path <- file.path(settings$outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata")) - load(external_data_path) - # and delete the file afterwards because it will be re-written with a new ensemble id in the end - file.remove(external_data_path) + if(exists(external_data_path)){ + load(external_data_path) + # and delete the file afterwards because it will be re-written with a new ensemble id in the end + file.remove(external_data_path) + } } ## will be used to check if multiplicative Gaussian is requested From 35790e0e9fadb5126fa12f5b6f1e2ba16da6a68e Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 5 Nov 2018 11:05:42 -0500 Subject: [PATCH 0053/2289] fcn doc --- modules/assim.batch/R/pda.utils.R | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 0e3b3372f29..7a3df70c235 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -800,6 +800,13 @@ return_hyperpars <- function(assim.settings, inputs){ ##' Helper function that loads history from previous PDA run, but returns only requested objects +##' +##' @param workdir path of working dir e.g. '/fs/data2/output/PEcAn_***' +##' @param ensemble.id ensemble id of a previous PDA run, from which the objects will be retrieved +##' @param objects object names that are common to all multi PDA runs, e.g. llik.fn, prior.list etc. +##' +##' @return a list of objects that will be used in joint and hierarchical PDA +##' ##' @author Istem Fer ##' @export load_pda_history <- function(workdir, ensemble.id, objects){ From 4c6e38475b1da539573bf88a467fd0240567cdf1 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 7 Nov 2018 15:50:43 -0500 Subject: [PATCH 0054/2289] use correct file check command --- modules/assim.batch/R/pda.emulator.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 5db05eac554..f6234d0c9da 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -56,7 +56,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # load inputs with neff if this is another round if(!run.normal){ external_data_path <- file.path(settings$outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata")) - if(exists(external_data_path)){ + if(file.exists(external_data_path)){ load(external_data_path) # and delete the file afterwards because it will be re-written with a new ensemble id in the end file.remove(external_data_path) From 603d91f3deefffa9891c96a7148d09771ee98f40 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 7 Nov 2018 15:59:46 -0500 Subject: [PATCH 0055/2289] note parallelization across chains --- modules/assim.batch/R/pda.emulator.ms.R | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 4fb1e6610de..58a85fe554b 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -44,6 +44,8 @@ pda.emulator.ms <- function(multi.settings) { # but will run LHC ensembles, calculate SS and return settings list with saved SS paths etc. 
# this requires some re-arrangement in pda.emulator, # for now we will always run site-level calibration + multi.settings[[i]] <- pda.emulator(multi.settings[[i]], individual = individual) + multi.settings <- papply(multi.settings, pda.emulator, individual = individual) } @@ -126,19 +128,19 @@ pda.emulator.ms <- function(multi.settings) { # start the clock ptm.start <- proc.time() - # prepare for parallelization + # prepare for parallelization (over chains) dcores <- parallel::detectCores() - 1 ncores <- min(max(dcores, 1), multi.settings[[1]]$assim.batch$chain) - # + logger.setOutputFile(file.path(multi.settings$outdir, "pda.log")) - # + cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(multi.settings$outdir, "pda.log")) ## Sample posterior from emulator mcmc.out <- parallel::parLapply(cl, 1:multi.settings[[1]]$assim.batch$chain, function(chain) { mcmc.GP(gp = gp, ## Emulator(s) x0 = init.list[[chain]], ## Initial conditions - nmcmc = multi.settings[[1]]$assim.batch$iter, ## Number of iters + nmcmc = as.numeric(multi.settings[[1]]$assim.batch$iter), ## Number of iters rng = rng, ## range format = "lin", ## "lin"ear vs "log" of LogLikelihood mix = "joint", ## Jump "each" dimension independently or update them "joint"ly From 99ae1a88bea48bf5ea6bcdb9103e8dc64e73a511 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 7 Nov 2018 16:03:51 -0500 Subject: [PATCH 0056/2289] reduce to one line --- modules/assim.batch/R/pda.emulator.ms.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 58a85fe554b..3674193d5ba 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -270,8 +270,7 @@ pda.emulator.ms <- function(multi.settings) { ## re-fit GP on new param space for(i in seq_along(SS.stack)){ - GPmodel <- lapply(SS.stack[[i]], function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) - gp.stack[[i]] <- GPmodel + gp.stack[[i]] <- lapply(SS.stack[[i]], function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) } ## re-define rng From a58e6260e9ab01fea878dc83d38b5ebea9c912ed Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 7 Nov 2018 16:19:25 -0500 Subject: [PATCH 0057/2289] write df for tau_global explicitly --- modules/assim.batch/R/pda.emulator.ms.R | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 3674193d5ba..498bc5089bc 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -357,8 +357,6 @@ pda.emulator.ms <- function(multi.settings) { # initialize tau_global (nparam x nparam) tau_global <- rWishart(1, tau_df, tau_V)[,,1] - # df to use while updating tau later, setting here out of the loop - tau_df <- tau_df + nsites # initialize jcov.arr (jump variances per site) jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) @@ -422,7 +420,7 @@ pda.emulator.ms <- function(multi.settings) { tau_sigma <- solve(V_inv + sum_term) # update tau - tau_global <- rWishart(1, df = tau_df, Sigma = tau_sigma)[,,1] # site precision + tau_global <- rWishart(1, df = tau_df + nsites, Sigma = tau_sigma)[,,1] # site precision sigma_global <- solve(tau_global) # site covariance, new prior sigma to be used below for prior prob. calc. 
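The diff above folds the degrees-of-freedom update into the `rWishart()` call (`df = tau_df + nsites`) instead of incrementing `tau_df` in place, so the prior degrees of freedom are never overwritten between Gibbs sweeps. A minimal standalone sketch of this conjugate precision update follows; the dimensions, site-level means, and prior settings here are illustrative assumptions, not PEcAn's actual objects:

```
# Sketch of the across-site precision (tau_global) Gibbs step, assuming
# 2 parameters, 5 sites, and made-up site-level parameter means.
set.seed(42)
nparam <- 2
nsites <- 5
mu_global <- c(0, 0)                                 # global-level parameter means
mu_site   <- matrix(rnorm(nsites * nparam), nsites)  # site-level parameter means

tau_df <- nparam + 1     # Wishart prior degrees of freedom (stays fixed)
tau_V  <- diag(nparam)   # Wishart prior scale
V_inv  <- solve(tau_V)

# Scatter of the site means around the global mean
sum_term <- matrix(0, nparam, nparam)
for (s in seq_len(nsites)) {
  dev      <- mu_site[s, ] - mu_global
  sum_term <- sum_term + tcrossprod(dev)
}

# Conjugate update: the prior df is incremented by the number of sites
# right at the call, mirroring `tau_df + nsites` in the patch above.
tau_sigma    <- solve(V_inv + sum_term)
tau_global   <- rWishart(1, df = tau_df + nsites, Sigma = tau_sigma)[, , 1]
sigma_global <- solve(tau_global)  # implied across-site covariance
```

Because `tau_df` is never mutated, repeating this step inside an MCMC loop always combines the same prior with the current site means, which is the correctness point of the one-line change.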
From aba53a3409d583868f7214d1ad2013b680147a Mon Sep 17 00:00:00 2001
From: istfer
Date: Tue, 13 Nov 2018 09:03:27 -0500
Subject: [PATCH 0058/2289] fcn arg to pass knots

---
 modules/assim.batch/R/pda.emulator.R        | 11 +++++++++--
 modules/assim.batch/man/load_pda_history.Rd | 10 ++++++++++
 modules/assim.batch/man/pda.emulator.Rd     |  9 ++++++---
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R
index f6234d0c9da..30808ef4c82 100644
--- a/modules/assim.batch/R/pda.emulator.R
+++ b/modules/assim.batch/R/pda.emulator.R
@@ -4,6 +4,7 @@
 ##' @param settings = a pecan settings list
 ##' @param external.data = list of inputs
 ##' @param external.priors = list of priors
+##' @param external.knots = list of knots
 ##'
 ##' @return nothing. Diagnostic plots, MCMC samples, and posterior distributions
 ##' are saved as files and db records.
@@ -11,7 +12,7 @@
 ##' @author Mike Dietze
 ##' @author Ryan Kelly, Istem Fer
 ##' @export
-pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
+pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, external.knots = NULL,
                          params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL,
                          iter = NULL, adapt = NULL, adj.min = NULL, ar.target = NULL,
                          jvar = NULL, n.knot = NULL, individual = TRUE) {
@@ -182,6 +183,12 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
                                                  pname[[x]]))
   names(knots.list) <- sapply(settings$pfts,"[[",'name')
 
+  # if knots were passed externally overwrite them
+  if(!is.null(external.knots)){
+    PEcAn.logger::logger.info("Overwriting the knots list.")
+    knots.list <- external.knots
+  }
+
   knots.params <- lapply(knots.list, `[[`, "params")
   # don't need anymore
   # knots.probs <- lapply(knots.list, `[[`, "probs")
diff --git a/modules/assim.batch/man/load_pda_history.Rd b/modules/assim.batch/man/load_pda_history.Rd
index fcdccf90fef..1c8d2464998 100644
--- a/modules/assim.batch/man/load_pda_history.Rd
+++ b/modules/assim.batch/man/load_pda_history.Rd
@@ -6,6 +6,16 @@
 \usage{
 load_pda_history(workdir, ensemble.id, objects)
 }
+\arguments{
+\item{workdir}{path of working dir e.g. '/fs/data2/output/PEcAn_***'}
+
+\item{ensemble.id}{ensemble id of a previous PDA run, from which the objects will be retrieved}
+
+\item{objects}{object names that are common to all multi PDA runs, e.g. llik.fn, prior.list etc.}
+}
+\value{
+a list of objects that will be used in joint and hierarchical PDA
+}
 \description{
 Helper function that loads history from previous PDA run, but returns only requested objects
 }
diff --git a/modules/assim.batch/man/pda.emulator.Rd b/modules/assim.batch/man/pda.emulator.Rd
index ba7a3d3e1ea..fa0fb15fe8e 100644
--- a/modules/assim.batch/man/pda.emulator.Rd
+++ b/modules/assim.batch/man/pda.emulator.Rd
@@ -5,9 +5,10 @@
 \title{Parameter Data Assimilation using emulator}
 \usage{
 pda.emulator(settings, external.data = NULL, external.priors = NULL,
-  params.id = NULL, param.names = NULL, prior.id = NULL,
-  chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL,
-  ar.target = NULL, jvar = NULL, n.knot = NULL, individual = TRUE)
+  external.knots = NULL, params.id = NULL, param.names = NULL,
+  prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL,
+  adj.min = NULL, ar.target = NULL, jvar = NULL, n.knot = NULL,
+  individual = TRUE)
 }
 \arguments{
 \item{settings}{= a pecan settings list}
@@ -15,6 +16,8 @@ pda.emulator(settings, external.data = NULL, external.priors = NULL,
 \item{external.data}{= list of inputs}
 
 \item{external.priors}{= list of priors}
+
+\item{external.knots}{= list of knots}

From 6c34c975611178af93415693cf6f11d95f650a Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Wed, 28 Nov 2018 12:55:22 -0500
Subject: [PATCH 0059/2289] updating WCr graphs

---
 modules/assim.sequential/inst/NEFI/graphs.R | 24 ++++++++++-------
 .../inst/NEFI/graphs_timeframe.R            | 27 +++++++++++--------
 2 files changed, 31 insertions(+), 20 deletions(-)

diff --git a/modules/assim.sequential/inst/NEFI/graphs.R b/modules/assim.sequential/inst/NEFI/graphs.R
index 573322d71dd..6d9695f105d 100644
--- a/modules/assim.sequential/inst/NEFI/graphs.R
+++ b/modules/assim.sequential/inst/NEFI/graphs.R
@@ -133,22 +133,28 @@ qledf$real_qle <- real_data$qle
 neeplot <- ggplot(needf) +
   # geom_ribbon(aes(x=time, ymin=neemins, ymax=neemaxes, fill="Spread of data (excluding outliers)"), alpha = 0.7) +
   geom_ribbon(aes(x = time, ymin=neelower95, ymax=neeupper95, fill="95% confidence interval"), alpha = 0.4) +
-  geom_line(aes(x=time, y=neemeans, color="predicted mean")) +
-  geom_line(aes(x=time, y=real_data$nee, color="actual data")) +
+  geom_line(aes(x=time, y=neemeans, color="predicted mean"), size = 1) +
+  geom_line(aes(x=time, y=real_data$nee, color="observed data"), size = 1) +
   ggtitle(paste0("Net Ecosystem Exchange for ", workflow$start_date, " to ", workflow$end_date, "")) +
   scale_x_continuous(name="Time (hours)") +
   scale_y_continuous(name="NEE (kg C m-2 s-1)") +
-  scale_colour_manual(name='Legend', values=c("predicted mean"="lightskyblue1", "actual data"="darkorange3")) +
-  scale_fill_manual(name='Legend', values=c("95% confidence interval" = "blue3", "mean"="lightskyblue1"))
+  scale_colour_manual(name='Legend', values=c("predicted mean"="lightskyblue1", "observed data"="firebrick4")) +
+  scale_fill_manual(name=element_blank(), values=c("95% confidence interval" = "blue3", "mean"="lightskyblue1")) +
+  theme_linedraw() +
+  theme(plot.title = element_text(hjust = 0.5, size = 16), legend.title = element_text(size = 14), legend.text = element_text(size = 12), axis.text.x = element_text(size = 14), axis.text.y = element_text(size = 14), axis.title.x = element_text(size = 14), axis.title.y = element_text(size = 14))
 
 qleplot <- ggplot(qledf) +
   # geom_ribbon(aes(x=time, ymin=qlemins, ymax=qlemax, fill="Spread of data (excluding outliers)"), alpha=0.7) +
   geom_ribbon(aes(x=time, ymin=qlelower95, ymax=qleupper95, fill="95% confidence interval"), alpha = 0.4) +
-  geom_line(aes(x=time, y=qlemeans, color="mean")) +
-  geom_line(aes(x=time, y=real_data$qle, color="actual data")) +
-  ggtitle(paste0("LE for ", workflow$start_date, " to ", workflow$end_date, ", \nSummary of All Ensembles")) +
+  geom_line(aes(x=time, y=qlemeans, color="mean"), size = 1) +
+  geom_line(aes(x=time, y=real_data$qle, color="observed data"), size = 1) +
+  ggtitle(paste0("LE for ", workflow$start_date, " to ", workflow$end_date, "\nSummary of All Ensembles")) +
+  theme(plot.title = element_text(hjust = 0.5)) +
   scale_x_continuous(name="Time (hours)") +
   scale_y_continuous(name="LE (W m-2 s-1)") +
-  scale_color_manual(name='Legend', values=c("mean"="lightskyblue1", "actual data"="darkorange3")) +
-  scale_fill_manual(name='Legend', values=c("95% confidence interval" = "blue3"))
+  scale_color_manual(name='Legend', values=c("mean"="lightskyblue1", "observed data"="firebrick4")) +
+  scale_fill_manual(name=element_blank(), values=c("95% confidence interval" = "blue3")) +
+  theme_linedraw() +
+  theme(plot.title = element_text(hjust = 0.5, size = 16), legend.title = element_text(size = 14), legend.text = element_text(size = 12), axis.text.x = element_text(size = 14), axis.text.y = element_text(size = 14), axis.title.x = element_text(size = 14), axis.title.y = element_text(size = 14))
 
 if (!dir.exists(graph_dir)) {
   dir.create(graph_dir, recursive = TRUE)
diff --git a/modules/assim.sequential/inst/NEFI/graphs_timeframe.R b/modules/assim.sequential/inst/NEFI/graphs_timeframe.R
index 255daee9df8..254eeb9931b 100644
--- a/modules/assim.sequential/inst/NEFI/graphs_timeframe.R
+++ b/modules/assim.sequential/inst/NEFI/graphs_timeframe.R
@@ -203,25 +203,30 @@ qledf <- data.frame(Time = Time, lower = qlelower95, means = qlemeans, upper = q
 neeplot <- ggplot(needf) +
   # geom_ribbon(aes(x=time, ymin=neemins, ymax=neemaxes, fill="Spread of data (excluding outliers)"), alpha = 0.7) +
   geom_ribbon(aes(x = Time, ymin=neelower95, ymax=neeupper95, fill="95% confidence interval"), alpha = 0.4) +
-  geom_line(aes(x=Time, y=neemeans, color="predicted mean")) +
-  geom_line(aes(x=Time, y=real_nee, color="observed data")) +
+  geom_line(aes(x=Time, y=neemeans, color="predicted mean"), size = 1) +
+  geom_line(aes(x=Time, y=real_nee, color="observed data"), size = 1) +
   ggtitle(paste0("Net Ecosystem Exchange for ", workflow$start_date, " to ", workflow$end_date, ", Willow Creek, Wisconsin")) +
   xlim(frame_start, frame_end) +
   theme(axis.text.x=element_text(angle=60, hjust=1)) +
-  scale_colour_manual(name='Legend', values=c("predicted mean"="lightskyblue1", "observed data"="orange1")) +
-  scale_fill_manual(name='Legend', values=c("Spread of data (excluding outliers)"="azure4", "95% confidence interval" = "blue3", "mean"="lightskyblue1")) +
-  scale_y_continuous(name="NEE (kg C m-2 s-1)", limits=c(nee_lower, nee_upper))
+  scale_colour_manual(name='Legend', values=c("predicted mean"="lightskyblue1", "observed data"="firebrick4")) +
+  scale_fill_manual(name=element_blank(), values=c("Spread of data (excluding outliers)"="azure4", "95% confidence interval" = "blue3", "mean"="lightskyblue1")) +
+  scale_y_continuous(name="NEE (kg C m-2 s-1)", limits=c(nee_lower, nee_upper)) +
+  theme_linedraw() +
+  theme(plot.title = element_text(hjust = 0.5, size = 16), legend.title = element_text(size = 14), legend.text = element_text(size = 12), axis.text.x = element_text(size = 14), axis.text.y = 
element_text(size = 14), axis.title.x = element_text(size = 14), axis.title.y = element_text(size = 14)) - qleplot <- ggplot(qledf) + +qleplot <- ggplot(qledf) + geom_ribbon(aes(x=Time, ymin=qlelower95, ymax=qleupper95, fill="95% confidence interval"), alpha = 0.4) + - geom_line(aes(x=Time, y=qlemeans, color="mean")) + - geom_point(aes(x=Time, y=real_qle, color="observed data")) + + geom_line(aes(x=Time, y=qlemeans, color="mean"), size = 1) + + geom_point(aes(x=Time, y=real_qle, color="observed data"), size = 1) + ggtitle(paste0("Latent Energy for ", workflow$start_date, " to ", workflow$end_date, ", Summary of All Ensembles")) + xlim(frame_start, frame_end) + theme(axis.text.x=element_text(angle=60, hjust=1)) + - scale_color_manual(name='Legend', values=c("mean"="lightskyblue1", "observed data"="orange2")) + - scale_fill_manual(name='Legend', values=c("95% confidence interval" = "blue3")) + - scale_y_discrete(name="LE (W m-2 s-1)", limits = c(qle_lower, qle_upper)) + scale_color_manual(name='Legend', values=c("mean"="lightskyblue1", "observed data"="firebrick4")) + + scale_fill_manual(name= element_blank(), values=c("95% confidence interval" = "blue3")) + + scale_y_discrete(name="LE (W m-2 s-1)", limits = c(qle_lower, qle_upper)) + + theme_linedraw() + + theme(plot.title = element_text(hjust = 0.5, size = 16), legend.title = element_text(size = 14), legend.text = element_text(size = 12), axis.text.x = element_text(size = 14), axis.text.y = element_text(size = 14), axis.title.x = element_text(size = 14), axis.title.y = element_text(size = 14)) + if (!dir.exists(outfolder)) { dir.create(outfolder, recursive = TRUE) From 600adc86ddd570a764f8530d3ab226de82f778f1 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 17 Dec 2018 08:00:20 -0500 Subject: [PATCH 0060/2289] commenting out test line --- modules/data.atmosphere/R/download.US_Syv.R | 2 +- workflow_id_log.txt | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) create mode 100644 workflow_id_log.txt diff --git a/modules/data.atmosphere/R/download.US_Syv.R b/modules/data.atmosphere/R/download.US_Syv.R index 65e2bd8327b..a0135c4c148 100644 --- a/modules/data.atmosphere/R/download.US_Syv.R +++ b/modules/data.atmosphere/R/download.US_Syv.R @@ -112,4 +112,4 @@ download.US_Syv <- function(start_date, end_date, timestep = 1) { } # download.US_Syv.R # This line is great for testing. 
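+# (A minimal alternative sketch, assuming the same arguments as the call
+# below: guard the test call so it only runs in interactive sessions,
+# instead of commenting it in and out by hand.)
+# if (interactive()) {
+#   download.US_Syv('2018-11-30 06:00', '2018-12-11 06:00', timestep = 12)
+# }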
-download.US_Syv('2018-07-23 06:00', '2018-08-08 06:00', timestep=12)
+#download.US_Syv('2018-11-30 06:00', '2018-12-11 06:00', timestep=12)

diff --git a/workflow_id_log.txt b/workflow_id_log.txt
new file mode 100644
index 00000000000..727fb5efd6c
--- /dev/null
+++ b/workflow_id_log.txt
@@ -0,0 +1 @@
+1000010061

From e2c46a2a9f4d0e2ceae3247b797933da11063e0e Mon Sep 17 00:00:00 2001
From: istfer
Date: Sun, 13 Jan 2019 11:10:30 -0500
Subject: [PATCH 0061/2289] add more soil parameters to write.config

---
 models/sipnet/R/write.configs.SIPNET.R | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R
index 19fa7b13c78..c94ffa3ab96 100644
--- a/models/sipnet/R/write.configs.SIPNET.R
+++ b/models/sipnet/R/write.configs.SIPNET.R
@@ -363,6 +363,20 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     param[which(param[, 1] == "baseSoilRespCold"), 2] <- param[which(param[, 1] == "baseSoilResp"), 2] * 0.25
   }

+  if ("immedEvapFrac" %in% pft.names) {
+    id <- which(param[, 1] == "immedEvapFrac")
+    param[id, 2] <- pft.traits[which(pft.names == "immedEvapFrac")]
+  }
+
+  if ("waterRemoveFrac" %in% pft.names) {
+    id <- which(param[, 1] == "waterRemoveFrac")
+    param[id, 2] <- pft.traits[which(pft.names == "waterRemoveFrac")]
+  }
+
+  if ("rdConst" %in% pft.names) {
+    id <- which(param[, 1] == "rdConst")
+    param[id, 2] <- pft.traits[which(pft.names == "rdConst")]
+  }
   ### ----- Phenology parameters GDD leaf on
   if ("GDD" %in% pft.names) {
     param[which(param[, 1] == "gddLeafOn"), 2] <- pft.traits[which(pft.names == "GDD")]

From c2adba619e8cac38a844d2dd4287631e67ae644e Mon Sep 17 00:00:00 2001
From: istfer
Date: Tue, 15 Jan 2019 08:08:18 -0500
Subject: [PATCH 0062/2289] wrong place

---
 modules/assim.batch/R/pda.emulator.R | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R
index 30808ef4c82..012568d503b 100644
--- a/modules/assim.batch/R/pda.emulator.R
+++ b/modules/assim.batch/R/pda.emulator.R
@@ -182,16 +182,17 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
                        pname[[x]]))
   names(knots.list) <- sapply(settings$pfts,"[[",'name')

-  # if knots were passed externally overwrite them
-  if(!is.null(external.knots)){
-    PEcAn.logger::logger.info("Overwriting the knots list.")
-    knots.list <- external.knots
-  }

   knots.params <- lapply(knots.list, `[[`, "params")
   # don't need anymore
   # knots.probs  <- lapply(knots.list, `[[`, "probs")

+  # if knots were passed externally overwrite them
+  if(!is.null(external.knots)){
+    PEcAn.logger::logger.info("Overwriting the knots list.")
+    knots.params <- external.knots
+  }
+
   current.step <- "GENERATE KNOTS"
   save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file)

From 5e5d41444ca4337488cca5a214e2215d503144cb Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Tue, 15 Jan 2019 08:34:53 -0500
Subject: [PATCH 0063/2289] willow creek xml update

---
 .../inst/WillowCreek/gefs.sipnet.template.xml | 52 +++++++------------
 1 file changed, 20 insertions(+), 32 deletions(-)

diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml
index f16a0a87f01..6cdf895d8fc 100644
--- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml
+++ 
b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -14,46 +14,35 @@ -9999 9999 - - Qle - MW/m2 - -9999 - 9999 - - - TotSoilCarb - kg/m^2 - 0 - 9999 - - - SoilMoistFrac - m/m + + TotSoilCarb + KgC/m^2 0 - 9999 + 9999 - - SWE - kg/m^2 + + SoilMoistFrac + 0 - 9999 + 9999 - - litter_carbon_content - kgC/m2 + + Litter + gC/m^2 0 - 9999 + 9999 year 2017-01-01 2018-11-05 + 1 -1 - 2018/09/07 14:52:35 +0000 + 2019/01/04 10:19:35 +0000 /fs/data3/kzarada/ouput/PEcAn_1000009774/ @@ -63,17 +52,16 @@ psql-pecan.bu.edu bety PostgreSQL - false + true /fs/data3/kzarada/pecan.data/dbfiles/ - soil.ALL + temperate.coniferous 1 - 1000013117 @@ -81,7 +69,7 @@ FALSE - 50 + 100 NEE 2018 2018 @@ -119,10 +107,10 @@ SIPNET - 2018-11-05 - 2018-11-20 + 2018-12-05 + 2018-12-20 localhost - + \ No newline at end of file From a9b2f2ff54104d8a7f6e9f788d6f9b26ee24e2d9 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Tue, 22 Jan 2019 15:13:51 -0500 Subject: [PATCH 0064/2289] some testing --- base/workflow/inst/batch_runs.R | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index db30e93bb3f..c2395dcb8a3 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -109,14 +109,21 @@ create_exec_test_xml <- function(run_list){ ##Create Run Args pecan_path <- "/fs/data3/tonygard/work/pecan" -config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php")) +config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.exphp")) bety <- betyConnect(paste0(pecan_path,"/web/config.php")) + +bety <- dplyr::src_postgres(dbname = 'bety', + host = 'psql-pecan.bu.edu', + user = 'bety', + password = 'bety') con <- bety$con ## Find name of Machine R is running on mach_name <- Sys.info()[[4]] mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id) +## Find Models +devtools::install_github("pecanproject/pecan", subdir = "api") model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>% filter(container_type == "Model") %>% pull(container_id) From d408dae8063a82708097108e640fe3cc1f2df2f3 Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 13 Feb 2019 10:50:16 -0500 Subject: [PATCH 0065/2289] adding a try and addressing issue 2244 --- base/workflow/R/run.write.configs.R | 4 +-- modules/uncertainty/R/ensemble.R | 40 ++++++++++++++++++----------- 2 files changed, 27 insertions(+), 17 deletions(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 7b02ba600d0..62a1c639648 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -26,8 +26,8 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo posterior.files = rep(NA, length(settings$pfts)), overwrite = TRUE) { - con <- PEcAn.DB::db.open(settings$database$bety) - on.exit(PEcAn.DB::db.close(con)) + con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + on.exit(try(PEcAn.DB::db.close(con), silent = TRUE)) ## Which posterior to use? 
for (i in seq_along(settings$pfts)) { diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index cab0b0bd18c..e0e646607ea 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -215,8 +215,6 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, } else { on.exit(PEcAn.DB::db.close(con)) } - } else { - con <- NULL } # Get the workflow id @@ -287,31 +285,43 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, # Let's find the PFT based on site location, if it was found I will subset the ensemble.samples otherwise we're not affecting anything if(!is.null(con)){ - Pft_Site_df <- tbl(con, 'sites_cultivars')%>% - dplyr::filter(site_id==settings$run$site$id) %>% + Pft_Site_df <- tbl(con, 'sites_cultivars') %>% + dplyr::filter(site_id == settings$run$site$id) %>% dplyr::inner_join(dplyr::tbl(con, "cultivars_pfts"), by = c('cultivar_id')) %>% - dplyr::inner_join(dplyr::tbl(con, "pfts"), by = c('pft_id'='id')) %>% - dplyr::collect() + dplyr::inner_join(dplyr::tbl(con, "pfts"), by = c('pft_id' = 'id')) %>% + dplyr::collect() - site_pfts_names <- Pft_Site_df$name %>% unlist() %>% as.character() + site_pfts_names <- + Pft_Site_df$name %>% unlist() %>% as.character() - PEcAn.logger::logger.info(paste("The most suitable pfts for your site are the followings:",site_pfts_names)) + PEcAn.logger::logger.info(paste( + "The most suitable pfts for your site are the followings:", + site_pfts_names + )) #-- if there is enough info to connect the site to pft #if ( nrow(Pft_Site_df) > 0 & all(site_pfts_names %in% names(ensemble.samples)) ) ensemble.samples <- ensemble.samples [Pft_Site$name %>% unlist() %>% as.character()] } # Reading the site.pft specific tags from xml site.pfts.vec <- settings$run$site$site.pft %>% unlist %>% as.character - if(!is.null(site.pfts.vec)){ + if (!is.null(site.pfts.vec)) { # find the name of pfts defined in the body of pecan.xml - defined.pfts <- settings$pfts %>% purrr::map('name') %>% unlist %>% as.character + defined.pfts <- + settings$pfts %>% purrr::map('name') %>% unlist %>% as.character # subset ensemble samples based on the pfts that are specified in the site and they are also sampled from. - if (length(which(site.pfts.vec %in% defined.pfts)) > 0 ) - ensemble.samples <- ensemble.samples [site.pfts.vec[ which(site.pfts.vec %in% defined.pfts) ]] + if (length(which(site.pfts.vec %in% defined.pfts)) > 0) + ensemble.samples <- + ensemble.samples [site.pfts.vec[which(site.pfts.vec %in% defined.pfts)]] # warn if there is a pft specified in the site but it's not defined in the pecan xml. 
-    if (length(which(!(site.pfts.vec %in% defined.pfts)))>0)
-      PEcAn.logger::logger.warn(paste0("The following pfts are specified for the siteid ", settings$run$site$id ," but they are not defined as a pft in pecan.xml:",
-                                       site.pfts.vec[which(!(site.pfts.vec %in% defined.pfts))]))
+    if (length(which(!(site.pfts.vec %in% defined.pfts))) > 0)
+      PEcAn.logger::logger.warn(
+        paste0(
+          "The following pfts are specified for the siteid ",
+          settings$run$site$id ,
+          " but they are not defined as a pft in pecan.xml:",
+          site.pfts.vec[which(!(site.pfts.vec %in% defined.pfts))]
+        )
+      )
   }

   # if no ensemble piece was in the xml I replicate n times the first element in params

From 6d34928b1b9d04d2470dca485c2824d02d7e12bd Mon Sep 17 00:00:00 2001
From: hamzed
Date: Wed, 13 Feb 2019 13:06:42 -0500
Subject: [PATCH 0066/2289] if there is no con then use the local pft

---
 modules/uncertainty/R/get.parameter.samples.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/uncertainty/R/get.parameter.samples.R b/modules/uncertainty/R/get.parameter.samples.R
index c4a85117140..66076c56367 100644
--- a/modules/uncertainty/R/get.parameter.samples.R
+++ b/modules/uncertainty/R/get.parameter.samples.R
@@ -16,8 +16,8 @@ get.parameter.samples <- function(settings,
   pft.names <- list()
   outdirs <- list()
   ## Open database connection
-  con <- PEcAn.DB::db.open(settings$database$bety)
-  on.exit(PEcAn.DB::db.close(con))
+  con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE)
+  on.exit(try(PEcAn.DB::db.close(con), silent = TRUE))

   for (i.pft in seq_along(pfts)) {
     pft.names[i.pft] <- settings$pfts[[i.pft]]$name
@@ -66,7 +66,7 @@
     }

     ### Load trait mcmc data (if exists, either from MA or PDA)
-    if (!is.null(settings$pfts[[i]]$posteriorid)) { # first check if there are any files associated with posterior ids
+    if (!is.null(settings$pfts[[i]]$posteriorid) & !inherits(con, "try-error")) { # first check if there are any files associated with posterior ids
       files <- PEcAn.DB::dbfile.check("Posterior",
                                       settings$pfts[[i]]$posteriorid, con,
                                       settings$host$name, return.all = TRUE)

From 83ff302274b486cdf1577c73015b8b2e9ce23b59 Mon Sep 17 00:00:00 2001
From: hamzed
Date: Thu, 14 Feb 2019 12:21:09 -0500
Subject: [PATCH 0067/2289] Changing the con setup in write .ens

---
 modules/uncertainty/R/ensemble.R | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R
index e0e646607ea..0123a2cc3f2 100644
--- a/modules/uncertainty/R/ensemble.R
+++ b/modules/uncertainty/R/ensemble.R
@@ -208,14 +208,13 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model,
   }

   # Open connection to database so we can store all run/ensemble information
-  if (write.to.db) {
-    con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE)
-    if (inherits(con, "try-error")) {
-      con <- NULL
-    } else {
-      on.exit(PEcAn.DB::db.close(con))
-    }
+  con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE)
+  on.exit(try(PEcAn.DB::db.close(con), silent = TRUE))
+
+  if (inherits(con, "try-error")) {
+    con <- NULL
   }
+
   # Get the workflow id
   if ("workflow" %in% names(settings)) {

From 1e407388deae143ced77e58c7cf66cab6cbc3253 Mon Sep 17 00:00:00 2001
From: hamzed
Date: Fri, 15 Feb 2019 09:48:31 -0500
Subject: [PATCH 0068/2289] revisiting the logical statement for writing to DB

---
 modules/uncertainty/R/ensemble.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index 0123a2cc3f2..b91329a9d96 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -225,7 +225,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, #------------------------------------------------- if this is a new fresh run------------------ if (is.null(restart)){ # create an ensemble id - if (!is.null(con)) { + if (!is.null(con) & as.logical(settings$database$bety$write)) { # write ensemble first ensemble.id <- PEcAn.DB::db.query(paste0( "INSERT INTO ensembles (runtype, workflow_id) ", From 07d931067128694dbb6d2df7717260a2033b162c Mon Sep 17 00:00:00 2001 From: hamzed Date: Tue, 19 Feb 2019 09:29:23 -0500 Subject: [PATCH 0069/2289] taking out the part that we never used --- modules/uncertainty/R/ensemble.R | 18 ------------------ 1 file changed, 18 deletions(-) diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index b91329a9d96..ad78fc60665 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -282,24 +282,6 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, if (is.null(samples[[r_tag]]) & r_tag!="parameters") samples[[r_tag]]$samples <<- rep(settings$run$inputs[[tolower(r_tag)]]$path[1], settings$ensemble$size) }) - # Let's find the PFT based on site location, if it was found I will subset the ensemble.samples otherwise we're not affecting anything - if(!is.null(con)){ - Pft_Site_df <- tbl(con, 'sites_cultivars') %>% - dplyr::filter(site_id == settings$run$site$id) %>% - dplyr::inner_join(dplyr::tbl(con, "cultivars_pfts"), by = c('cultivar_id')) %>% - dplyr::inner_join(dplyr::tbl(con, "pfts"), by = c('pft_id' = 'id')) %>% - dplyr::collect() - - site_pfts_names <- - Pft_Site_df$name %>% unlist() %>% as.character() - - PEcAn.logger::logger.info(paste( - "The most suitable pfts for your site are the followings:", - site_pfts_names - )) - #-- if there is enough info to connect the site to pft - #if ( nrow(Pft_Site_df) > 0 & all(site_pfts_names %in% names(ensemble.samples)) ) ensemble.samples <- ensemble.samples [Pft_Site$name %>% unlist() %>% as.character()] - } # Reading the site.pft specific tags from xml site.pfts.vec <- settings$run$site$site.pft %>% unlist %>% as.character From c76bca3211b3d4342d89b9fd0081cdc94b675967 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Mon, 11 Mar 2019 13:36:55 -0400 Subject: [PATCH 0070/2289] updates --- base/workflow/inst/batch_runs.R | 69 +++++++++++++++++++++------------ 1 file changed, 45 insertions(+), 24 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index c2395dcb8a3..c06b24ff9bf 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -104,13 +104,10 @@ create_exec_test_xml <- function(run_list){ sink() } - - - ##Create Run Args pecan_path <- "/fs/data3/tonygard/work/pecan" -config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.exphp")) -bety <- betyConnect(paste0(pecan_path,"/web/config.php")) +config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.example.php")) +bety <- betyConnect(paste0(pecan_path,"/web/config.example.php")) bety <- dplyr::src_postgres(dbname = 'bety', host = 'psql-pecan.bu.edu', @@ -123,7 +120,7 @@ mach_name <- Sys.info()[[4]] mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id) ## Find Models -devtools::install_github("pecanproject/pecan", subdir 
= "api") +#devtools::install_github("pecanproject/pecan", subdir = "api") model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>% filter(container_type == "Model") %>% pull(container_id) @@ -134,31 +131,51 @@ met_name <- c("CRUNCEP","AmerifluxLBL") startdate<-"2004/01/01" enddate<-"2004/12/31" out.var <- "NPP" -ensemble <- TRUE +ensemble <- FALSE ens_size <- 100 -sensitivity <- TRUE +sensitivity <- FALSE ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% - inner_join(tbl(bety, "sitegroups_sites") - %>% filter(sitegroup_id == 1), + +site_id_noinput<- tbl(bety, "sites")%>% + inner_join(tbl(bety, "sitegroups_sites") %>% + filter(sitegroup_id == 1), by = c("id" = "site_id")) %>% - dplyr::select("id.x", "notes", "sitename") %>% - dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% collect() %>% - dplyr::mutate( - start_year = stringi::stri_extract_first_regex(notes, "[0-9]+"), - end_year = if_else( - stringi::stri_extract_last_regex(notes, "[0-9]+") == start_year, - as.character(lubridate::year(Sys.Date())), - stringi::stri_extract_last_regex(notes, "[0-9]+") - ), - contains_run = if_else( - between(lubridate::year(startdate), start_year, end_year), + dplyr::select("id.x", "notes", "sitename") %>% + dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% + collect() + + %>% + + #test <- dplyr::mutate(site_id_noinput, + # start_year = substring(stringi::stri_extract_first_regex(notes, "[0-9]+"),1:4), + # end_year = if_else( + # substring(stringi::stri_extract_last_regex(notes, "[0-9]+"), 1:4)== start_year, + # as.character(lubridate::year(Sys.Date())), + # stringi::stri_extract_last_regex(notes, "[0-9]+") + # )) + #"IGBP = MF CLIMATE_KOEPPEN = Dfb TOWER_BEGAN = 1999 TOWER_END = 2004" + test <- dplyr::mutate(site_id_noinput, + start_year = substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4), + end_year = dplyr::if_else( + substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "", + as.character(lubridate::year(Sys.Date())), + substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) + ) + ) %>% filter(~between(lubridate::year(startdate) between( start_year:end_year) + + +filter(test,dplyr::between(lubridate::year(startdate), test$start_year, test$end_year)) + + contains_run = if_else( + between(lubridate::year(startdate), as.numeric(start_year), end_year), "TRUE", "FALSE" - ), + )), len = as.integer(end_year) - as.integer(start_year) - ) %>% + ) +%>% filter(contains_run == TRUE) %>% filter(str_length(end_year) == 4) %>% filter(len == max(len)) %>% @@ -171,10 +188,14 @@ site_id <- "772" options(scipen = 999) run_table <- expand.grid(models,met_name,site_id, startdate, enddate, pecan_path,out.var, ensemble, ens_size, sensitivity, stringsAsFactors = FALSE) -#Execute function to spit out a table with a clomn of NA or success +#Execute function to spit out a table with a column of NA or success tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){ create_exec_test_xml(list(...)) },otherwise =NA)) ) +## Turn into a html table +as_hux(tab) + + From ce2889b8742f714558e82bc5d01e6914d13bd8f8 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 12 Mar 2019 14:17:15 -0400 Subject: [PATCH 0071/2289] pushing custom global SA fcn for Hamze --- .../vignettes/MultiSitePDAVignette.Rmd | 190 ++++++++++++++++-- 1 file changed, 173 insertions(+), 17 
deletions(-)

diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd
index ea88a78e05c..d62a38c8eeb 100644
--- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd
+++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd
@@ -1,7 +1,7 @@
 ---
-title: "HB-PDA vignette"
+title: "Multi-site hierarchical calibration vignette"
 author: "Istem Fer"
-date: "3/2/2018"
+date: "3/2/2019"
 output: html_document
 ---

@@ -11,20 +11,41 @@ knitr::opts_chunk$set(echo = TRUE)

 ## Multi-Site Hierarchical PDA

-This vignette documents the steps and settings of Hierarchical Bayesian Parameter Data Assimilation (HB-PDA) analysis on multiple sites. If you haven't done the standard PDA check that vignette first modules/assim.batch/vignettes/AssimBatchVignette.Rmd
+This vignette documents the steps and settings of Hierarchical Bayesian Parameter Data Assimilation (HB-PDA) analysis on multiple sites, and accompanies the paper.

-## Start with multi-settings
-
-Add the `` tag in your `pecan.xml` for the groups of sites you are interested in. In this vignette, we will use HB-PDA site group in BETYdb which consists of the temperate decidious broadleaf forest (DBF) sites of Ameriflux network.
+## Initiate a new run from the web interface

+Follow the steps:
 ```
-
- 1000000022
-
+host : pecan
+model : SIPNET (r136) # note I'll change this, because I want to make sure SIPNET code is also version controlled (which exact settings/modules I turned on/off etc.)
+sitegroup : HB-PDA
+site : US-Bar (choose any, we will modify the xml in the next step)
+
+CLICK NEXT
+
+pft : soil.ALL, temperate.deciduous.ALL
+start date : leave as it is
+end date : leave as it is
+pool_initial_condition : skip
+Sipnet.climna: Use AmerifluxLBL
+
+NOW CHECK THE EDIT XML TICK BOX AND CLICK NEXT
+(Enter pecan for user name in the following page)
 ```

-In the meantime, your `` tag will look like this:
+## Start with multi-settings
+
+Now you should be seeing the `pecan.xml` associated with the choices you made above.
+
+Remove ``, `` and `` tags from the `` section and add the `` tag in your `pecan.xml` before the `` section for the group of sites you are interested in. In this vignette, we will use HB-PDA site group in BETYdb which consists of 12 temperate deciduous broadleaf forest (DBF) sites of the Ameriflux network.
+
+
+Overall, your `pecan.xml` for these sections will look like this:
 ```
+
+ 1000000022
+
@@ -33,20 +54,20 @@ In the meantime, your `` tag will look like this:
 pecan
-   2004/01/01
-   2004/12/31
 
 ```

-It's OK if not all your sites have the same `start.date` and `end.date`, multi settings functions will use this as a template to expand.
+Now hit the save button and open RStudio. Go to the working directory for the run you just created. E.g. `setwd("/fs/data2/output/PEcAn_1000010272")`.

-Now start going through the workflow script and stop after:
+Open the `workflow.R` script and start going through the lines. Stop after:
 ```
 # Write pecan.CHECKED.xml
 PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml")
 ```

-When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded with the same information repeated. Now you specify ``, `` and `` information accordingly if you want them to be different. E.g.:
+## Fill in site-specific input information
+
+When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded with the same tags repeated for each site. Now specify ``, `` and `` information. E.g.: 
E.g.: ``` @@ -65,6 +86,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-796/AMF_US-Bar_BASE_HH_4-1.2005-01-01.2011-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-796/IC_site_0-796.nc + @@ -82,6 +106,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-767/IC_site_0-767.nc + @@ -99,6 +126,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-768/AMF_US-MOz_BASE_HH_7-1.2005-01-01.2015-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-768/IC_site_0-768.nc + @@ -116,6 +146,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-776/AMF_US-UMB_BASE_HH_10-1.2007-01-01.2014-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-776/IC_site_0-776.nc + @@ -133,6 +166,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-676/AMF_US-WCr_BASE_HH_11-1.1999-01-01.2006-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-676/IC_site_0-676.nc + @@ -150,6 +186,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-109/AMF_CA-TPD_BASE_HH_1-1.2013-01-01.2015-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_1-109/IC_site_1-109.nc + @@ -167,6 +206,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-755/AMF_US-Dk2_BASE_HH_4-4.2002-01-01.2008-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-755/IC_site_0-755.nc + @@ -184,6 +226,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-145/AMF_US-ChR_BASE_HH_2-1.2008-01-01.2010-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_1-145/IC_site_1-145.nc + @@ -201,6 +246,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-66/AMF_US-Slt_BASE_HH_5-1.2005-01-01.2014-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_1-66/IC_site_1-66.nc + @@ -218,6 +266,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-61/AMF_US-Oho_BASE_HH_4-1.2004-01-01.2013-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_1-61/IC_site_1-61.nc + @@ -235,6 +286,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-740/AMF_CA-Oas_BASE_HH_1-1.1997-01-01.2010-12-31.clim + + /fs/data3/istfer/HPDA/data/param.files/site_0-740/IC_site_0-740.nc + @@ -257,12 +311,111 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded ``` -Re-read the updated `pecan.CHECKED.xml`, make sure to re-assign random effects as logical, otherwise `meta.analysis` will throw an error: + +## Modify ensemble analysis tags + +Now let's specify the ensemble `` and the ensemble `` within the `` tag. In the paper we give an ensemble of runs with `1000` members but you can try smaller ensembles. 
+
+
+Overall, your `pecan.xml` for the `` sections will look like this:
+```
+ 
+ 1000
+ NEE
+ 
+ 
+ uniform
+ 
+ 
+ sampling
+ 
+ 
+ 
+```
+
+
+Now save the file. Re-read the updated `pecan.CHECKED.xml`, make sure to re-assign random effects as logical, otherwise `meta.analysis` will throw an error:
 ```
 settings <- read.settings("pecan.CHECKED.xml")
 settings$meta.analysis$random.effects <- as.logical(settings$meta.analysis$random.effects)
 ```

+## Global SA
+
+The helper below regresses each ensemble output variable against the sampled parameters and records the variance explained by each parameter; we will use this to pick the parameters to target in PDA:
+
+```
+globalSA_explore <- function(workflow_dir, var, ens.settings, site){
+  load(paste0(workflow_dir, "/ensemble.samples.", ens.settings$ensemble$ensemble.id,".Rdata"))
+  samps <- do.call("cbind", ens.samples)
+
+  load(paste0(workflow_dir, "/ensemble.output.", ens.settings$ensemble$ensemble.id,
+              ".",var,".", ens.settings$ensemble$start.year, ".",
+              ens.settings$ensemble$end.year,".Rdata"))
+
+  allm <- cbind(unlist(ensemble.output), samps)
+  colnames(allm) <- c(var, colnames(samps))
+  form <- as.formula(paste(var, " ~ ."))
+  var.fit <- lm(form, allm)
+
+  alls <- summary(var.fit)$coefficients[,1]
+  alls[summary(var.fit)$coefficients[,4] >= 0.05] <- NA # don't consider non-significant fits
+
+  alls <- alls[names(alls) != "(Intercept)"]
+  preds <- names(alls)
+
+  n <- length(preds)
+  id <- unlist(lapply(1, function(i)combn(1:n,i,simplify=FALSE)),recursive=FALSE)
+
+  Formulas <- sapply(id, function(i)
+    paste(var,"~",paste(preds[i],collapse=":"))
+  )
+
+  fits <- lapply(Formulas,function(i)
+    lm(as.formula(i), data=allm))
+
+  for(tr in 1:n){
+
+    f <- summary(fits[[tr]])$fstatistic
+    pfval <- pf(f[1],f[2],f[3],lower.tail=F)
+    if(pfval < 0.001){
+      alls[names(alls) == preds[tr]] <- summary(fits[[tr]])$adj.r.squared
+    }else{
+      alls[names(alls) == preds[tr]] <- NA
+    }
+
+  }
+
+  alls <- data.frame(par=names(alls), rexp = alls, site = site)
+  alls <- alls[order( alls$rexp),]
+
+  return(alls)
+}
+
+
+workflow_dir <- "/fs/data2/output//PEcAn_1000010172"
+
+multi.settings <- read.settings("pecan.CONFIGS.xml")
+
+Qle_list <- NEE_list <- list()
+
+for(i in seq_along(multi.settings)){
+  Qle_list[[i]] <- globalSA_explore(workflow_dir, "Qle", multi.settings[[i]], site = i)
+  NEE_list[[i]] <- globalSA_explore(workflow_dir, "NEE", multi.settings[[i]], site = i)
+}
+
+Qle_df <- do.call("rbind", Qle_list)
+Qle_df$var <- "Qle"
+NEE_df <- do.call("rbind", NEE_list)
+NEE_df$var <- "NEE"
+allm <- rbind(Qle_df, NEE_df)
+allm <- allm[!is.na(allm$rexp),]
+
+p1 <- ggplot(allm, aes(par, rexp, fill=var)) + coord_flip() + ylab("R2") + xlab("SIPNET parameters")
+p1 + geom_boxplot() + theme_bw()
+
+
+```
+
 Then continue with the rest of the workflow until `` module:
 ```
 # Run parameter data assimilation
@@ -275,7 +428,10 @@ if ('assim.batch' %in% names(settings)) {
 }
 ```

-Now add the PDA tags, see next section.
+Before running this module, we will conduct some analyses on the ensemble run outputs to identify the parameters we would like to target in PDA.
+
+
+Then add the PDA tags; see the next section.
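+
+For reference, a minimal sketch of how the emulator PDA can then be invoked (a sketch only, assuming the `assim.batch` tags of the next section are in place; argument names follow the `pda.emulator()` usage shown in its help file, and the `external.*` arguments are optional placeholders):
+
+```
+# run the emulator-based PDA on the prepared settings
+pda.emulator(settings,
+             external.data   = NULL,  # list of inputs
+             external.priors = NULL,  # list of priors
+             external.knots  = NULL,  # list of pre-generated knots, if any
+             individual      = TRUE)
+```
+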
## Settings for HB-PDA From 3463070b8dbd8ee2da91306d3e5023a9846bba41 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 19 Mar 2019 15:21:34 -0400 Subject: [PATCH 0072/2289] updated date format and alternative date range issues --- modules/data.remote/R/call_MODIS.R | 45 ++++++++++++++++++++++++------ 1 file changed, 37 insertions(+), 8 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index f531c517f49..f4758575d10 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,6 +14,7 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -32,10 +33,21 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 + # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ + if (grepl("/", start_date) == T) + { + start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3), format = "%Y%j") + } + + if (grepl("/", end_date) == T) + { + end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3), format = "%Y%j") + } + # set start and end dates to correct format if (package_method == "MODISTools"){ - products = MODISTools::mt_products() + products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) @@ -44,17 +56,34 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 print("Check #1: Product exists!") } - + # checks if start and end dates are within all or partial range of data available from MODIS product date range dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date dates <- as.numeric(substr(dates, 2, nchar(dates))) - if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) + #list total range of dates available for product + print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) + # Best case scenario: the start_date and end_date parameters fall within available MODIS data dates + if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) { - print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) - stop("Please choose dates between the date range listed above.") - } else { - print("Check #2: Dates are available!") + print("Check #2: All dates are available!") } - + # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble + # MODIS data dates + if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) + { + start_date = dates[1] + print("WARNING: 
Dates are only partially available. Look at list of available dates for MODIS data product.")
+  }
+  if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)])
+  {
+    end_date = dates[length(dates)]
+    print("WARNING: Dates are only partially available. Look at list of available dates for MODIS data product.")
+  }
+  # Unacceptable scenario: start_date and end_date do not fall within the available MODIS data product date range. There is no data to extract in this scenario.
+  if ((as.numeric(start_date)<dates[1] & as.numeric(end_date)<dates[1]) | (as.numeric(start_date)>dates[length(dates)] & as.numeric(end_date)>dates[length(dates)]))
+  {
+    stop("No MODIS data available for the start_date and end_date parameterized.")
+  }
+
   bands <- MODISTools::mt_bands(product = product)
   if (!(band %in% bands$band))
   {

From b8fe444cbacc4312106232a8beb3d7db3e8fafcb Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Tue, 19 Mar 2019 15:31:22 -0400
Subject: [PATCH 0073/2289] fixed boo-boo

---
 modules/data.remote/R/call_MODIS.R | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index f4758575d10..bb959a2522f 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -31,19 +31,19 @@
 call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", package_method = "MODISTools") {

   # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support.
-    size <- 0
+size <- 0

-  # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ
+  # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ
   if (grepl("/", start_date) == T)
   {
-    start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3), format = "%Y%j")
+    start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j")
   }

   if (grepl("/", end_date) == T)
   {
-    end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3), format = "%Y%j")
+    end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j")
   }
-
+
   # set start and end dates to correct format
   if (package_method == "MODISTools"){

From df30d2a672fc403a2992b4cc9259085ad41f34cb Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Tue, 19 Mar 2019 15:40:51 -0400
Subject: [PATCH 0074/2289] added in lat/lon from siteID

---
 modules/data.remote/R/call_MODIS.R    | 18 ++++++++++++++----
 modules/data.remote/man/call_MODIS.Rd |  4 +++-
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index bb959a2522f..0867f042ec6 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -14,7 +14,7 @@
 ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional)
 ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)
 ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)
-##' @param siteID string value of a PEcAn site ID. 
Currently only used for output filename. +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename and extracting lat/lon for a PEcAn location. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -28,11 +28,23 @@ ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", package_method = "MODISTools") { +call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools") { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. -size <- 0 + size <- 0 + # For PEcAn users: finds lat lon based on a site id and also uses this to name the output file + if (!(is.null(siteID))) + { + siteID = as.character(siteID) + bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) + con <- PEcAn.DB::db.open(bety) + bety$con <- con + sites <- PEcAn.DB::query.site(siteID,con) + lat = sites$lat + lon = sties$lon + } # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ if (grepl("/", start_date) == T) { diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 834e4069151..9762e13611f 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -29,7 +29,9 @@ call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0, \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)} -\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) +\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} + +\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename. depends on a number of Python libraries. 
sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} From 9176900816495607ed7406dfb116607af8ad8cf4 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 19 Mar 2019 15:45:13 -0400 Subject: [PATCH 0075/2289] fixed boo-boo --- modules/data.remote/R/call_MODIS.R | 2 +- modules/data.remote/man/call_MODIS.Rd | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 0867f042ec6..05b9198066e 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -43,7 +43,7 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 bety$con <- con sites <- PEcAn.DB::query.site(siteID,con) lat = sites$lat - lon = sties$lon + lon = sites$lon } # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ if (grepl("/", start_date) == T) diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 9762e13611f..f96ce496b5e 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -5,7 +5,7 @@ \title{call_MODIS} \usage{ call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0, - product, band, band_qc = "", band_sd = "", + product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools") } \arguments{ @@ -29,12 +29,12 @@ call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0, \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)} -\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} - -\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename. +\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename and extracting lat/lon for a PEcAn location. depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} + +\item{package_method}{string value to inform function of which package method to use to download modis data. 
Either "MODISTools" or "reticulate" (optional)} } \description{ Get MODIS data by date and location From e524a94bad6d4aec94590907566606320b0004f9 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 19 Mar 2019 15:52:38 -0400 Subject: [PATCH 0076/2289] as.numeric to siteID --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 05b9198066e..52dbff004c7 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -36,7 +36,7 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 # For PEcAn users: finds lat lon based on a site id and also uses this to name the output file if (!(is.null(siteID))) { - siteID = as.character(siteID) + siteID = as.numeric(siteID) bety <- list(user='bety', password='bety', host='localhost', dbname='bety', driver='PostgreSQL',write=TRUE) con <- PEcAn.DB::db.open(bety) From 1d7f04432a006945d35728c147a236222fb3a302 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 19 Mar 2019 16:14:00 -0400 Subject: [PATCH 0077/2289] removed siteID lat lon function --- modules/data.remote/R/call_MODIS.R | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 52dbff004c7..9c470472573 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,7 +14,7 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) -##' @param siteID string value of a PEcAn site ID. Currently only used for output filename and extracting lat/lon for a PEcAn location. +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -33,18 +33,18 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. 
size <- 0

-  # For PEcAn users: finds lat lon based on a site id and also uses this to name the output file
-  if (!(is.null(siteID)))
-  {
-    siteID = as.numeric(siteID)
-    bety <- list(user='bety', password='bety', host='localhost',
-                 dbname='bety', driver='PostgreSQL',write=TRUE)
-    con <- PEcAn.DB::db.open(bety)
-    bety$con <- con
-    sites <- PEcAn.DB::query.site(siteID,con)
-    lat = sites$lat
-    lon = sites$lon
-  }
+# # For PEcAn users: finds lat lon based on a site id and also uses this to name the output file
+# if (!(is.null(siteID)))
+# {
+#   siteID = as.numeric(siteID)
+#   bety <- list(user='bety', password='bety', host='localhost',
+#                dbname='bety', driver='PostgreSQL',write=TRUE)
+#   con <- PEcAn.DB::db.open(bety)
+#   bety$con <- con
+#   sites <- PEcAn.DB::query.site(siteID,con)
+#   lat = sites$lat
+#   lon = sites$lon
+# }
   # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ
   if (grepl("/", start_date) == T)
   {

From 6a0de13ab42be10778cc806c6d49f6421917e796 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Tue, 19 Mar 2019 16:17:36 -0400
Subject: [PATCH 0078/2289] edited DESCRIPTION + roxygenized

---
 modules/data.remote/DESCRIPTION       | 3 ++-
 modules/data.remote/man/call_MODIS.Rd | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION
index 2fe8f52d022..89b8b468860 100644
--- a/modules/data.remote/DESCRIPTION
+++ b/modules/data.remote/DESCRIPTION
@@ -12,7 +12,8 @@ Imports:
     reticulate,
     PEcAn.logger,
     PEcAn.remote,
-    stringr (>= 1.1.0)
+    stringr (>= 1.1.0),
+    spatial.tools
 Suggests:
     testthat (>= 1.0.2)
 License: FreeBSD + file LICENSE

diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd
index f96ce496b5e..da236851564 100644
--- a/modules/data.remote/man/call_MODIS.Rd
+++ b/modules/data.remote/man/call_MODIS.Rd
@@ -29,7 +29,7 @@ call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0,

 \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)}

-\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename and extracting lat/lon for a PEcAn location.
+\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename. 
sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} From 42650cbbac2c2d1a11df73604e93d8a8106c0a33 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 19 Mar 2019 16:26:19 -0400 Subject: [PATCH 0079/2289] as.Date to list of modis dates --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 9c470472573..3ece81eb5c4 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -70,7 +70,7 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 # checks if start and end dates are within all or partial range of data available from MODIS product date range dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.numeric(substr(dates, 2, nchar(dates))) + dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") #list total range of dates available for product print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) # Best case scenario: the start_date and end_date parameters fall within available MODIS data dates From df30d2a672fc403a2992b4cc9259085ad41f34cb Mon Sep 17 00:00:00 2001 From: "Shawn P. Serbin" Date: Fri, 22 Mar 2019 13:08:20 -0400 Subject: [PATCH 0080/2289] Working on testing MODIS LAI SDA with Bailey --- base/settings/R/prepare.settings.R | 2 +- models/sipnet/R/read_restart.SIPNET.R | 5 +++++ models/sipnet/R/write_restart.SIPNET.R | 5 +++++ 3 files changed, 11 insertions(+), 1 deletion(-) diff --git a/base/settings/R/prepare.settings.R b/base/settings/R/prepare.settings.R index 51b31e59144..abfef4a28dc 100644 --- a/base/settings/R/prepare.settings.R +++ b/base/settings/R/prepare.settings.R @@ -15,7 +15,7 @@ prepare.settings <- function(settings, force=FALSE) { if(is.MultiSettings(settings)) { return(invisible(papply(settings, prepare.settings, force=force))) } - settings <- site.pft.link.settings (settings) # this will find the link between the site and pft and it will add extra tags for write.ensemble.config function. + settings <- site.pft.link.settings(settings) # this will find the link between the site and pft and it will add extra tags for write.ensemble.config function. 
settings <- fix.deprecated.settings(settings, force=force) settings <- addSecrets(settings, force=force) settings <- update.settings(settings, force=force) diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index f98c3ef73e7..381e622f769 100644 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -88,6 +88,11 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p names(forecast[[length(forecast)]]) <- c("TotLivBiom") } + if ("LAI" %in% var.names) { + forecast[[length(forecast) + 1]] <- ens$LAI[last] ## m2/m2 + names(forecast[[length(forecast)]]) <- c("LAI") + } + print(runid) X_tmp <- list(X = unlist(forecast), params = params) diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R index f993e1f95b4..37d7f933a12 100644 --- a/models/sipnet/R/write_restart.SIPNET.R +++ b/models/sipnet/R/write_restart.SIPNET.R @@ -105,6 +105,11 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, names(analysis.save[[length(analysis.save)]]) <- c("snow") } + if ("LAI" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$LAI + if (new.state$LAI < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("LAI") + } if (!is.null(analysis.save) & length(analysis.save)>0){ analysis.save.mat <- data.frame(matrix(unlist(analysis.save, use.names = TRUE), nrow = 1)) From 3e2879d5ed8c3b4cf2d72c6110403eaf92f67091 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Fri, 22 Mar 2019 13:34:40 -0400 Subject: [PATCH 0081/2289] updated sipnet read.restart with LAI --- models/sipnet/R/read_restart.SIPNET.R | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index f98c3ef73e7..d8b69d1929f 100644 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -87,6 +87,11 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p forecast[[length(forecast) + 1]] <- udunits2::ud.convert(ens$TotLivBiom[last], "kg/m^2", "Mg/ha") names(forecast[[length(forecast)]]) <- c("TotLivBiom") } + + if ("LAI" %in% var.names) { + forecast[[length(forecast) + 1]] <- ens$LAI[last] ## m2/m2 + names(forecast[[length(forecast)]]) <- c("LAI") + } print(runid) From 9a5e74ffb7b66d27f8cde79f0ffdeff08a3f12f3 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Mon, 25 Mar 2019 08:37:12 -0400 Subject: [PATCH 0082/2289] testing --- base/workflow/inst/batch_runs.R | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index c06b24ff9bf..b5fd00f8e08 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -146,8 +146,6 @@ site_id_noinput<- tbl(bety, "sites")%>% dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% collect() - %>% - #test <- dplyr::mutate(site_id_noinput, # start_year = substring(stringi::stri_extract_first_regex(notes, "[0-9]+"),1:4), # end_year = if_else( @@ -162,12 +160,30 @@ site_id_noinput<- tbl(bety, "sites")%>% substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "", as.character(lubridate::year(Sys.Date())), substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) - ) - ) %>% filter(~between(lubridate::year(startdate) between( start_year:end_year) + ), + in_interval = 
between(as.numeric(year(startdate)), as.numeric(test$start_year),as.numeric(test$end_year)) + ) + + apply(as.matrix(test), MARGIN = c(test$start_year,test$end_year),test_f<-function(){ + between(startdate,test$start_year,test$end_year) +}) + + + test_1<- for(i in 1:128){ + between(as.numeric(year(startdate)),as.numeric(test$start_year[i]),as.numeric(test$end_year[i])) + } + + + + between(as.numeric(year(startdate)),as.numeric(test$start_year[1]),as.numeric(test$end_year[1])) + + +dplyr::select(test,) filter(test,dplyr::between(lubridate::year(startdate), test$start_year, test$end_year)) + contains_run = if_else( between(lubridate::year(startdate), as.numeric(start_year), end_year), "TRUE", From f6e76bbdcf8cb9e9de61efa5fab430b792d42c92 Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 27 Mar 2019 10:03:26 -0400 Subject: [PATCH 0083/2289] pull upstream --- base/workflow/R/run.write.configs.R | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 62a1c639648..6c7275d3b60 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -26,8 +26,10 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo posterior.files = rep(NA, length(settings$pfts)), overwrite = TRUE) { - con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - on.exit(try(PEcAn.DB::db.close(con), silent = TRUE)) + con <- + try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + on.exit(try(PEcAn.DB::db.close(con), silent = TRUE) + ) ## Which posterior to use? for (i in seq_along(settings$pfts)) { From 665ead6f26fce4fd81312c405449c2ff720b3efb Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Wed, 27 Mar 2019 14:12:07 -0400 Subject: [PATCH 0084/2289] better site finding --- base/workflow/inst/batch_runs.R | 76 ++++++++------------------------- 1 file changed, 18 insertions(+), 58 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index b5fd00f8e08..005bfb92127 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -137,65 +137,25 @@ sensitivity <- FALSE ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% - -site_id_noinput<- tbl(bety, "sites")%>% - inner_join(tbl(bety, "sitegroups_sites") %>% - filter(sitegroup_id == 1), + inner_join(tbl(bety, "sitegroups_sites") %>% + filter(sitegroup_id == 1), by = c("id" = "site_id")) %>% - dplyr::select("id.x", "notes", "sitename") %>% - dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% - collect() - - #test <- dplyr::mutate(site_id_noinput, - # start_year = substring(stringi::stri_extract_first_regex(notes, "[0-9]+"),1:4), - # end_year = if_else( - # substring(stringi::stri_extract_last_regex(notes, "[0-9]+"), 1:4)== start_year, - # as.character(lubridate::year(Sys.Date())), - # stringi::stri_extract_last_regex(notes, "[0-9]+") - # )) - #"IGBP = MF CLIMATE_KOEPPEN = Dfb TOWER_BEGAN = 1999 TOWER_END = 2004" - test <- dplyr::mutate(site_id_noinput, - start_year = substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4), - end_year = dplyr::if_else( - substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "", - as.character(lubridate::year(Sys.Date())), - substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) - ), 
-    in_interval = between(as.numeric(year(startdate)), as.numeric(test$start_year),as.numeric(test$end_year))
-  )
-  
-  apply(as.matrix(test), MARGIN = c(test$start_year,test$end_year),test_f<-function(){
-  between(startdate,test$start_year,test$end_year)
-})
-  
-  
-  test_1<- for(i in 1:128){
-    between(as.numeric(year(startdate)),as.numeric(test$start_year[i]),as.numeric(test$end_year[i]))
-  }
-  
-  
-  
-  between(as.numeric(year(startdate)),as.numeric(test$start_year[1]),as.numeric(test$end_year[1]))
-  
-  
-dplyr::select(test,)
-filter(test,dplyr::between(lubridate::year(startdate), test$start_year, test$end_year))
-
-
-  contains_run = if_else(
-    between(lubridate::year(startdate), as.numeric(start_year), end_year),
-    "TRUE",
-    "FALSE"
-  )),
-  len = as.integer(end_year) - as.integer(start_year)
-  )
-%>%
-  filter(contains_run == TRUE) %>%
-  filter(str_length(end_year) == 4) %>%
-  filter(len == max(len)) %>%
-  select("id.x")
+  dplyr::select("id.x", "notes", "sitename") %>%
+  dplyr::filter(grepl("TOWER_BEGAN", notes)) %>%
+  collect() %>%
+  dplyr::mutate(
+    # Grab years from the string within the notes
+    start_year = substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4),
+    # An empty tower end in the notes means it runs until the present day, so if empty, enter the current year.
+    end_year = dplyr::if_else(
+      substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "",
+      as.character(lubridate::year(Sys.Date())),
+      substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4)
+    ),
+    # Check if the startdate year is within the interval that is given
+    in_date = between(as.numeric(year(startdate)),as.numeric(start_year),as.numeric(end_year))
+  ) %>%
+  dplyr::filter(in_date)



site_id <- "772"

From 317cd66a04bbfcc6846d0293d2d2faa6adbaa823 Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Wed, 27 Mar 2019 16:42:27 -0400
Subject: [PATCH 0085/2289] fix site

---
 base/workflow/inst/batch_runs.R | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 005bfb92127..235e1e39635 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -155,10 +155,12 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>%
     # Check if the startdate year is within the interval that is given
     in_date = between(as.numeric(year(startdate)),as.numeric(start_year),as.numeric(end_year))
   ) %>%
-  dplyr::filter(in_date)
+  dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1)
+
 
 
-site_id <- "772"
+site_id <- site_id_noinput$id.x
 
 #Create permutations of arg combinations
 options(scipen = 999)
@@ -171,7 +173,3 @@ tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...)
},otherwise =NA))
)
 
-## Turn into a html table
-as_hux(tab)
-
-

From e4fdd0972c2df2d07a086ab066fdc442cf87faac Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Thu, 28 Mar 2019 13:26:35 -0400
Subject: [PATCH 0086/2289] wrong name

---
 base/workflow/inst/batch_runs.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 235e1e39635..3af6e53afe7 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -145,12 +145,12 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>%
   collect() %>%
   dplyr::mutate(
     # Grab years from the string within the notes
-    start_year = substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4),
+    start_year = substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4),
     # An empty tower end in the notes means it runs until the present day, so if empty, enter the current year.
     end_year = dplyr::if_else(
-      substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "",
+      substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "",
       as.character(lubridate::year(Sys.Date())),
-      substring(stringr::str_extract(test$notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4)
+      substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4)
     ),
     # Check if the startdate year is within the interval that is given
     in_date = between(as.numeric(year(startdate)),as.numeric(start_year),as.numeric(end_year))

From 70d30cb3604382b12e00ecdc2c0145fb91cbfb58 Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Thu, 28 Mar 2019 17:17:56 -0400
Subject: [PATCH 0087/2289] add package

---
 base/workflow/inst/batch_runs.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 3af6e53afe7..e017d311900 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -107,14 +107,14 @@ create_exec_test_xml <- function(run_list){
 ##Create Run Args
 pecan_path <- "/fs/data3/tonygard/work/pecan"
 config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.example.php"))
-bety <- betyConnect(paste0(pecan_path,"/web/config.example.php"))
+bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.example.php"))
 bety <- dplyr::src_postgres(dbname = 'bety',
                      host = 'psql-pecan.bu.edu',
                      user = 'bety',
                      password = 'bety')
 con <- bety$con
-
+library(tidyverse)
 ## Find name of Machine R is running on
 mach_name <- Sys.info()[[4]]
 mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id)
@@ -153,7 +153,7 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>%
     substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4)
   ),
   # Check if the startdate year is within the interval that is given
-  in_date = between(as.numeric(year(startdate)),as.numeric(start_year),as.numeric(end_year))
+  in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year))
 ) %>%
   dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1)

From ccb00e4bb9bb323b69377ea37be9d9d00070cfd3 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Thu, 4 Apr 2019 16:18:22 -0400
Subject: [PATCH 0088/2289] something shawn told me to do

---
 models/sipnet/R/write.configs.SIPNET.R | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R index 0a5bb144665..49ad887f28e 100644 --- a/models/sipnet/R/write.configs.SIPNET.R +++ b/models/sipnet/R/write.configs.SIPNET.R @@ -391,6 +391,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs # reconstruct total wood C wood_total_C <- IC$AbvGrndWood / IC$abvGrndWoodFrac + if (is.na(wood_total_C) | is.infinite(wood_total_C) | is.nan(wood_total_C) | wood_total_C <0) wood_total_C <- 0 param[which(param[, 1] == "plantWoodInit"), 2] <- wood_total_C param[which(param[, 1] == "coarseRootFrac"), 2] <- IC$coarseRootFrac param[which(param[, 1] == "fineRootFrac"), 2] <- IC$fineRootFrac From dd616503e321250e1f8b43984b7bdcde61df373c Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Fri, 5 Apr 2019 13:41:48 -0400 Subject: [PATCH 0089/2289] updated outfolder command --- modules/data.remote/R/call_MODIS.R | 88 ++++++++++++++++++++++++++---- 1 file changed, 78 insertions(+), 10 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 3ece81eb5c4..7096aa0c0e6 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -73,6 +73,71 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") #list total range of dates available for product print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) + + #case where user only wants one date: + if (start_date == end_date & as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) + { + bands <- MODISTools::mt_bands(product = product) + if (!(band %in% bands$band)) + { + print(bands$band) + stop("Band selected is not avialable. 
Please select from the bands listed above that correspond with the data product.")
+    } else {
+      print("Check #3: Band Exists!")
+    }
+    
+    
+    print("Extracting data")
+    
+    start <- as.Date(start_date, "%Y%j")
+    end <- as.Date(end_date, "%Y%j")
+    
+    # extract main band data from api
+    dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band,
+                                 start=start, end=end, km_ab=size, km_lr=size)
+    # extract QC data
+    if(band_qc != ""){
+      qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
+                                  start=start, end=end, km_ab=size, km_lr=size)
+    }
+    
+    # extract stdev data
+    if(band_sd != ""){
+      sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd,
+                                  start=start, end=end, km_ab=size, km_lr=size)
+    }
+    
+    
+    if (band_qc == "")
+    {
+      QC <- rep("nan", nrow(dat))
+    } else {
+      QC <- as.numeric(qc$value)
+    }
+    
+    if (band_sd == "")
+    {
+      SD <- rep("nan", nrow(dat))
+    } else {
+      SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f')
+    }
+    
+    output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F)
+    names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd")
+    
+    output[,5:10] <- lapply(output[,5:10], as.numeric)
+    
+    # scale the data + stdev to proper units
+    output$data <- output$data * (as.numeric(dat$scale))
+    output$sd <- output$sd * (as.numeric(dat$scale))
+    output$lat <- round(output$lat, 4)
+    output$lon <- round(output$lon, 4)
+    
+    fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
+    fname <- paste0(outfolder, "/", fname)
+    write.csv(output, fname, row.names = F)
+    return(output)
+  } else {
   # Best case scenario: the start_date and end_date parameters fall within available MODIS data dates
   if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)])
   {
@@ -93,9 +158,9 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0
 
   # Unacceptable scenario: start_date and end_date do not fall within the available MODIS data product date range. There is no data to extract in this scenario. 
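  # Worked illustration of the guard that follows: with an available record of
  # 2000-02-18 .. 2018-12-19, a request for 1990-1992 lies entirely before the
  # record, so the first clause is TRUE and the function stops. (Values here
  # are illustrative, not from a real product query.)
  demo_dates <- as.Date(c("2000-02-18", "2018-12-19"))
  demo_start <- as.Date("1990-01-01")
  demo_end   <- as.Date("1992-12-31")
  as.numeric(demo_start) < as.numeric(demo_dates[1]) & as.numeric(demo_end) < as.numeric(demo_dates[1])  # TRUE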
  if ((as.numeric(start_date)<dates[1] & as.numeric(end_date)<dates[1]) | (as.numeric(start_date)>dates[length(dates)] & as.numeric(end_date)>dates[length(dates)]))
  {
-    stop("No MODIS data available start_date and end_date parameterized.")
+    stop("No MODIS data available start_date and end_date parameterized.")
  }
-  
+  
  bands <- MODISTools::mt_bands(product = product)
  if (!(band %in% bands$band))
  {
@@ -104,7 +169,7 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0
  } else {
    print("Check #3: Band Exists!")
  }
-  
+  
 
  print("Extracting data")
 
@@ -113,13 +178,13 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0
 
  # extract main band data from api
  dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band,
-                              start=start, end=end, km_ab=size, km_lr=size)
+                              start=start, end=end, km_ab=size, km_lr=size)
  # extract QC data
  if(band_qc != ""){
    qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
                                start=start, end=end, km_ab=size, km_lr=size)
  }
-  
+  
  # extract stdev data
  if(band_sd != ""){
    sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd,
@@ -154,8 +219,11 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0
 
  fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
  fname <- paste0(outfolder, "/", fname)
-  write.csv(output, fname)
-  return(output)}
+  write.csv(output, fname, row.names = F)
+  return(output)
+  }
+  
+  }
 
 
  if (package_method == "reticulate"){
@@ -170,8 +238,8 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0
  output$lat <- round(output$lat, 4)
  output$lon <- round(output$lon, 4)
 
-  fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-  fname <- paste0(outfolder, "/", fname)
-  write.csv(output, fname)
+  #fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
+  #fname <- paste0(outfolder, "/", fname)
+  #write.csv(output, fname)
  return(output)}
 }

From f068046b579986dba0a5aca49981b431254996c2 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Mon, 8 Apr 2019 11:37:45 -0400
Subject: [PATCH 0090/2289] added qc filter

---
 modules/data.remote/DESCRIPTION       |   3 +-
 modules/data.remote/R/call_MODIS.R    | 281 ++++++++++++--------------
 modules/data.remote/man/call_MODIS.Rd |   4 +-
 3 files changed, 128 insertions(+), 160 deletions(-)

diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION
index 89b8b468860..9d8247c16f0 100644
--- a/modules/data.remote/DESCRIPTION
+++ b/modules/data.remote/DESCRIPTION
@@ -13,7 +13,8 @@ Imports:
     PEcAn.logger,
     PEcAn.remote,
     stringr (>= 1.1.0),
-    spatial.tools
+    spatial.tools,
+    binaryLogic
 Suggests:
     testthat (>= 1.0.2)
 License: FreeBSD + file LICENSE
diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 7096aa0c0e6..b2b30ece258 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -28,183 +28,131 @@
 ##'
 ##' @author Bailey Morrison
 ##'
-call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools") {
+call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = F) {
 
   # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. 
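  # For reference, a minimal standalone call of the kind this wrapper issues
  # for a single pixel (the product, band, and coordinates are illustrative
  # examples, not defaults of this function, and the call needs network access
  # to the MODIS web service):
  ndvi_demo <- MODISTools::mt_subset(product = "MOD13Q1", band = "250m_16_days_NDVI",
                                     lat = 42.54, lon = -72.17,
                                     start = "2010-01-01", end = "2010-12-31",
                                     km_ab = 0, km_lr = 0)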
size <- 0 -# # For PEcAn users: finds lat lon based on a site id and also uses this to name the output file -# if (!(is.null(siteID))) -# { -# siteID = as.numeric(siteID) -# bety <- list(user='bety', password='bety', host='localhost', -# dbname='bety', driver='PostgreSQL',write=TRUE) -# con <- PEcAn.DB::db.open(bety) -# bety$con <- con -# sites <- PEcAn.DB::query.site(siteID,con) -# lat = sites$lat -# lon = sites$lon -# } - # reformat start and end date if they are in YYYYMMDD format instead of YYYYJJJ + + # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) - { - start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") - } + { + start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") + } if (grepl("/", end_date) == T) - { - end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") + { + end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") } + start_date = as.Date(start_date, format = "%Y%j") + end_date = as.Date(end_date, format = "%Y%j") - # set start and end dates to correct format - if (package_method == "MODISTools"){ - - products = MODISTools::mt_products() - if (!(product %in% products$product)) - { - print(products) - stop("Product not available for MODIS API. Please chose a product from the list above.") - } else { - print("Check #1: Product exists!") - } - - # checks if start and end dates are within all or partial range of data available from MODIS product date range - dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") - #list total range of dates available for product - print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) - - #case where user only wants one date: - if (start_date == end_date & as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) +#################### if package_method == MODISTools option #################### + if (package_method == "MODISTools") { + +#################### FUNCTION PARAMETER PRECHECKS #################### + #1. check that modis product is available + products = MODISTools::mt_products() + if (!(product %in% products$product)) + { + print(products) + stop("Product not available for MODIS API. Please chose a product from the list above.") + } + + #2. check that modis produdct band is available bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) + { + print(bands$band) + stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") + } + + #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. + dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") + + ########## Date case 1: user only wants one date ########## + if (start_date == end_date) { - print(bands$band) - stop("Band selected is not avialable. 
Please select from the bands listed above that correspond with the data product.")
+    }
+    
+    #3. check that the dates asked for in the function parameters fall within the dates available for the modis product/bands.
+    dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date
+    dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j")
+    
+    ########## Date case 1: user only wants one date ##########
+    if (start_date == end_date)
+    {
+      if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)])
+      {
+        print("Extracting data")
+        
+        start <- as.Date(start_date, "%Y%j")
+        end <- as.Date(end_date, "%Y%j")
+      }
+      ########## For Date case 1: if only one date is asked for, but the date is not within the modis data product date range ##########
+      if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)])
+      {
+        print(start)
+        stop("start or end date are not within the MODIS data product date range. Please choose another date.")
+      }
+    } else {
+      ########## Date case 2: user wants a range of dates ##########
+      # Best case scenario: the start and end dates asked for fall within the available date range of the modis data product.
+      if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)])
+      {
+        print("Check #2: All dates are available!")
+      }
+      
+      # Okay scenario: some MODIS data is available for the requested start_date and end_date range, but either the start_date or the end_date falls outside the range of available
+      # MODIS data dates
+      if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1])
+      {
+        start_date = dates[1]
+        print("WARNING: Dates are only partially available. Start date is before the modis data product is available.")
+      }
+      if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)])
+      {
+        end_date = dates[length(dates)]
+        print("WARNING: Dates are only partially available. End date is after the modis data product is available.")
+      }
+      
+      # Unacceptable scenario: start_date and end_date do not fall within the available MODIS data product date range. There is no data to extract in this scenario. 
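      # Spelled out, the guard that follows is intended to check
      #   (as.numeric(start_date) < dates[1] & as.numeric(end_date) < dates[1]) |
      #   (as.numeric(start_date) > dates[length(dates)] & as.numeric(end_date) > dates[length(dates)])
      # i.e. the requested window lies entirely before or entirely after the record.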
+ if ((as.numeric(start_date)dates[length(dates)] & as.numeric(end_date)>dates[length(dates)])) + { + stop("No MODIS data available start_date and end_date parameterized.") + } + + start <- as.Date(start_date, "%Y%j") + end <- as.Date(end_date, "%Y%j") } - print("Extracting data") - - start <- as.Date(start_date, "%Y%j") - end <- as.Date(end_date, "%Y%j") + cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " ")) # extract main band data from api dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start, end=end, km_ab=size, km_lr=size) + start=start_date, end=end_date, km_ab=size, km_lr=size) + # extract QC data - if(band_qc != ""){ - qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, + if(band_qc != "") + { + qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, start=start, end=end, km_ab=size, km_lr=size) - } - + } + # extract stdev data - if(band_sd != ""){ - sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, + if(band_sd != "") + { + sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, start=start, end=end, km_ab=size, km_lr=size) - } - - - if (band_qc == "") - { - QC <- rep("nan", nrow(dat)) - } else { - QC <- as.numeric(qc$value) - } - - if (band_sd == "") - { - SD <- rep("nan", nrow(dat)) - } else { - SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f') - } - - output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F) - names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd") - - output[,5:10] <- lapply(output[,5:10], as.numeric) - - # scale the data + stdev to proper units - output$data <- output$data * (as.numeric(dat$scale)) - output$sd <- output$sd * (as.numeric(dat$scale)) - output$lat <- round(output$lat, 4) - output$lon <- round(output$lon, 4) - - fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "") - fname <- paste0(outfolder, "/", fname) - write.csv(output, fname, row.names = F) - return(output) - } else { - # Best case scenario: the start_date and end_date parameters fall within available MODIS data dates - if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) - { - print("Check #2: All dates are available!") - } - # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble - # MODIS data dates - if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) - { - start_date = dates[1] - print("WARNING: Dates are only partially available. Look at list of available dates for MODIS data product.") - } - if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)]) - { - end_date = dates[length(dates)] - print("WARNING: Dates are only partially available. Look at list of available dates for MODIS data product.") - } - # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario. 
- if ((as.numeric(start_date)dates[length(dates)] & as.numeric(end_date)>dates[length(dates)])) - { - stop("No MODIS data available start_date and end_date parameterized.") - } - - bands <- MODISTools::mt_bands(product = product) - if (!(band %in% bands$band)) - { - print(bands$band) - stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") - } else { - print("Check #3: Band Exists!") - } - - - print("Extracting data") - - start <- as.Date(start_date, "%Y%j") - end <- as.Date(end_date, "%Y%j") - - # extract main band data from api - dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start, end=end, km_ab=size, km_lr=size) - # extract QC data - if(band_qc != ""){ - qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, - start=start, end=end, km_ab=size, km_lr=size) - } - - # extract stdev data - if(band_sd != ""){ - sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, - start=start, end=end, km_ab=size, km_lr=size) - } - + } - if (band_qc == "") - { - QC <- rep("nan", nrow(dat)) - } else { - QC <- as.numeric(qc$value) - } + if (band_qc == "") + { + QC <- rep("nan", nrow(dat)) + } else { + QC <- as.numeric(qc$value) + } - if (band_sd == "") - { - SD <- rep("nan", nrow(dat)) - } else { - SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f') - } + if (band_sd == "") + { + SD <- rep("nan", nrow(dat)) + } else { + SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f') + } output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F) names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd") @@ -217,12 +165,31 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0 output$lat <- round(output$lat, 4) output$lon <- round(output$lon, 4) - fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "") - fname <- paste0(outfolder, "/", fname) - write.csv(output, fname, row.names = F) - return(output) - } + if (QC_filter == T) + { + output$qc == as.character(output$qc) + for (i in 1:nrow(output)) + { + convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "") + output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert)) + } + good = which(output$qc == "000" | output$qc == "001") + if (length(good) > 0 | !(is.null(good))) + { + output = output[good,] + } else { + print("All QC values are bad. 
No data to output with QC filter == TRUE.") + } + } + if (!(is.null(outfolder))) + { + fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "") + fname <- paste0(outfolder, "/", fname) + write.csv(output, fname, row.names = F) + } + + return(output) } diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index da236851564..31cdcaf5969 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -4,9 +4,9 @@ \alias{call_MODIS} \title{call_MODIS} \usage{ -call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0, +call_MODIS(outfolder = NULL, start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, - package_method = "MODISTools") + package_method = "MODISTools", QC_filter = F) } \arguments{ \item{outfolder}{where the output file will be stored} From e331a0b972de7af23c7223bf566c6eaea1f94cad Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 13:27:33 -0400 Subject: [PATCH 0091/2289] added qc_filter --- modules/data.remote/R/call_MODIS.R | 202 +++++++++++++++++------------ 1 file changed, 122 insertions(+), 80 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index f531c517f49..9b0f39d6788 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,6 +14,7 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -27,78 +28,131 @@ ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", package_method = "MODISTools") { +call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = F) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 - # set start and end dates to correct format - if (package_method == "MODISTools"){ - - products = MODISTools::mt_products() - if (!(product %in% products$product)) - { - print(products) - stop("Product not available for MODIS API. 
Please chose a product from the list above.") - } else { - print("Check #1: Product exists!") - } - - dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.numeric(substr(dates, 2, nchar(dates))) - if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) + # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ + if (grepl("/", start_date) == T) { - print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) - stop("Please choose dates between the date range listed above.") - } else { - print("Check #2: Dates are available!") + start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") } - - bands <- MODISTools::mt_bands(product = product) - if (!(band %in% bands$band)) + + if (grepl("/", end_date) == T) { - print(bands$band) - stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") - } else { - print("Check #3: Band Exists!") - } + end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") + } + start_date = as.Date(start_date, format = "%Y%j") + end_date = as.Date(end_date, format = "%Y%j") - - print("Extracting data") - - start <- as.Date(start_date, "%Y%j") - end <- as.Date(end_date, "%Y%j") - - # extract main band data from api - dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start, end=end, km_ab=size, km_lr=size) - # extract QC data - if(band_qc != ""){ - qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, - start=start, end=end, km_ab=size, km_lr=size) - } - - # extract stdev data - if(band_sd != ""){ - sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, - start=start, end=end, km_ab=size, km_lr=size) - } - - - if (band_qc == "") +#################### if package_method == MODISTools option #################### + if (package_method == "MODISTools") { - QC <- rep("nan", nrow(dat)) - } else { - QC <- as.numeric(qc$value) - } + +#################### FUNCTION PARAMETER PRECHECKS #################### + #1. check that modis product is available + products = MODISTools::mt_products() + if (!(product %in% products$product)) + { + print(products) + stop("Product not available for MODIS API. Please chose a product from the list above.") + } + + #2. check that modis produdct band is available + bands <- MODISTools::mt_bands(product = product) + if (!(band %in% bands$band)) + { + print(bands$band) + stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") + } + + #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. 
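    # mt_dates() reports composite dates as strings of the form "A2000065"
    # ("A" + year + day of year), which is why the line below strips the leading
    # character before parsing with format "%Y%j". A standalone sketch (the
    # product and coordinates are illustrative; needs network access):
    d_demo <- MODISTools::mt_dates(product = "MOD13Q1", lat = 42.54, lon = -72.17)$modis_date
    as.Date(as.character(substr(d_demo, 2, nchar(d_demo))), format = "%Y%j")[1:3]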
+ dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") + + ########## Date case 1: user only wants one date ########## + if (start_date == end_date) + { + if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) + { + print("Extracting data") + + start <- as.Date(start_date, "%Y%j") + end <- as.Date(end_date, "%Y%j") + } + ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ########## + if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)]) + { + print(start) + stop("start or end date are not within MODIS data product date range. Please choose another date.") + } + } else { + ########## Date case 2: user want a range of dates ########## + # Best case scenario: Start and end date asked for fall with available date range of modis data product. + if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) + { + print("Check #2: All dates are available!") + } + + # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble + # MODIS data dates + if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) + { + start_date = dates[1] + print("WARNING: Dates are only partially available. Start date before modis data product is available.") + } + if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)]) + { + end_date = dates[length(dates)] + print("WARNING: Dates are only partially available. End date befire modis data product is available.") + } + + # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario. 
+      if ((as.numeric(start_date)<dates[1] & as.numeric(end_date)<dates[1]) | (as.numeric(start_date)>dates[length(dates)] & as.numeric(end_date)>dates[length(dates)]))
+      {
+        stop("No MODIS data available start_date and end_date parameterized.")
+      }
+      
+      start <- as.Date(start_date, "%Y%j")
+      end <- as.Date(end_date, "%Y%j")
+    }
+    
+    print("Extracting data")
+    cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " "))
+    
+    # extract main band data from api
+    dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band,
+                                 start=start_date, end=end_date, km_ab=size, km_lr=size)
+    
+    # extract QC data
+    if(band_qc != "")
+    {
+      qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
+                                  start=start, end=end, km_ab=size, km_lr=size)
+    }
+    
+    # extract stdev data
+    if(band_sd != "")
+    {
+      sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd,
+                                  start=start, end=end, km_ab=size, km_lr=size)
+    }
 
-    if (band_qc == "")
-    {
-      QC <- rep("nan", nrow(dat))
-    } else {
-      QC <- as.numeric(qc$value)
-    }
+    if (band_qc == "")
+    {
+      QC <- rep("nan", nrow(dat))
+    } else {
+      QC <- as.numeric(qc$value)
+    }
 
-    if (band_sd == "")
-    {
-      SD <- rep("nan", nrow(dat))
-    } else {
-      SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f')
-    }
+    if (band_sd == "")
+    {
+      SD <- rep("nan", nrow(dat))
+    } else {
+      SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f')
+    }
 
     output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F)
     names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd")
@@ -111,26 +165,14 @@ call_MODIS <- function(outfolder = ".", start_date, end_date, lat, lon, size = 0
     output$lat <- round(output$lat, 4)
     output$lon <- round(output$lon, 4)
 
-    fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-    fname <- paste0(outfolder, "/", fname)
-    write.csv(output, fname)
-    return(output)}
-  
-  
-  if (package_method == "reticulate"){
-    # load in python script
-    script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote"))
+    if (QC_filter == T)
+    {
+      output$qc == as.character(output$qc)
+      for (i in 1:nrow(output))
+      {
+        convert = paste(binaryLogic::as.binary(as.integer(ou script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote"))
    #script.path = file.path('/Users/bmorrison/pecan/modules/data.remote/inst/extract_modis_data.py')
    reticulate::source_python(script.path)
    
    # extract the data
-    output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, start_date = start
\ No newline at end of file

From e805b018b39495db8567470003d0547438942007 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Mon, 8 Apr 2019 13:40:57 -0400
Subject: [PATCH 0092/2289] added qc filter

---
 modules/data.remote/DESCRIPTION       |   3 +-
 modules/data.remote/R/call_MODIS.R    | 255 
+++++++++++++++----------- modules/data.remote/man/call_MODIS.Rd | 12 +- 3 files changed, 157 insertions(+), 113 deletions(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index f663a5941ab..227b2b93a88 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -12,7 +12,8 @@ Imports: reticulate, PEcAn.logger, PEcAn.remote, - stringr (>= 1.1.0) + stringr (>= 1.1.0), + binaryLogic Suggests: testthat (>= 1.0.2), ggplot2, diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 9b0f39d6788..1350333d62f 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -15,6 +15,7 @@ ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -33,126 +34,126 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 - + # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) - { - start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") - } + { + start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") + } if (grepl("/", end_date) == T) - { - end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") + { + end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") } start_date = as.Date(start_date, format = "%Y%j") end_date = as.Date(end_date, format = "%Y%j") -#################### if package_method == MODISTools option #################### + #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") - { - -#################### FUNCTION PARAMETER PRECHECKS #################### + { + + #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available - products = MODISTools::mt_products() - if (!(product %in% products$product)) - { - print(products) - stop("Product not available for MODIS API. Please chose a product from the list above.") - } - - #2. check that modis produdct band is available - bands <- MODISTools::mt_bands(product = product) - if (!(band %in% bands$band)) - { - print(bands$band) - stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") - } - - #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. 
- dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") - - ########## Date case 1: user only wants one date ########## - if (start_date == end_date) + products = MODISTools::mt_products() + if (!(product %in% products$product)) + { + print(products) + stop("Product not available for MODIS API. Please chose a product from the list above.") + } + + #2. check that modis produdct band is available + bands <- MODISTools::mt_bands(product = product) + if (!(band %in% bands$band)) + { + print(bands$band) + stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") + } + + #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. + dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") + + ########## Date case 1: user only wants one date ########## + if (start_date == end_date) + { + if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) { - if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) - { - print("Extracting data") - - start <- as.Date(start_date, "%Y%j") - end <- as.Date(end_date, "%Y%j") - } - ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ########## - if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)]) - { - print(start) - stop("start or end date are not within MODIS data product date range. Please choose another date.") - } - } else { - ########## Date case 2: user want a range of dates ########## - # Best case scenario: Start and end date asked for fall with available date range of modis data product. - if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) - { - print("Check #2: All dates are available!") - } - - # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble - # MODIS data dates - if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) - { - start_date = dates[1] - print("WARNING: Dates are only partially available. Start date before modis data product is available.") - } - if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)]) - { - end_date = dates[length(dates)] - print("WARNING: Dates are only partially available. End date befire modis data product is available.") - } - - # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario. - if ((as.numeric(start_date)dates[length(dates)] & as.numeric(end_date)>dates[length(dates)])) - { - stop("No MODIS data available start_date and end_date parameterized.") - } + print("Extracting data") start <- as.Date(start_date, "%Y%j") end <- as.Date(end_date, "%Y%j") } + ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ########## + if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)]) + { + print(start) + stop("start or end date are not within MODIS data product date range. 
Please choose another date.") + } + } else { + ########## Date case 2: user want a range of dates ########## + # Best case scenario: Start and end date asked for fall with available date range of modis data product. + if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) + { + print("Check #2: All dates are available!") + } - print("Extracting data") - cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " ")) + # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble + # MODIS data dates + if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) + { + start_date = dates[1] + print("WARNING: Dates are only partially available. Start date before modis data product is available.") + } + if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)]) + { + end_date = dates[length(dates)] + print("WARNING: Dates are only partially available. End date befire modis data product is available.") + } - # extract main band data from api - dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start_date, end=end_date, km_ab=size, km_lr=size) - - # extract QC data - if(band_qc != "") - { - qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, - start=start, end=end, km_ab=size, km_lr=size) - } - - # extract stdev data - if(band_sd != "") - { - sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, - start=start, end=end, km_ab=size, km_lr=size) - } - - if (band_qc == "") - { - QC <- rep("nan", nrow(dat)) - } else { - QC <- as.numeric(qc$value) - } - - if (band_sd == "") - { - SD <- rep("nan", nrow(dat)) - } else { - SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f') - } + # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario. 
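      # As above, the guard that follows is intended to check that the requested
      # window lies entirely before or entirely after the available dates
      # (both endpoints < dates[1], or both endpoints > dates[length(dates)]).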
+ if ((as.numeric(start_date)dates[length(dates)] & as.numeric(end_date)>dates[length(dates)])) + { + stop("No MODIS data available start_date and end_date parameterized.") + } + + start <- as.Date(start_date, "%Y%j") + end <- as.Date(end_date, "%Y%j") + } + + print("Extracting data") + cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " ")) + + # extract main band data from api + dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, + start=start_date, end=end_date, km_ab=size, km_lr=size) + + # extract QC data + if(band_qc != "") + { + qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, + start=start, end=end, km_ab=size, km_lr=size) + } + + # extract stdev data + if(band_sd != "") + { + sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, + start=start, end=end, km_ab=size, km_lr=size) + } + + if (band_qc == "") + { + QC <- rep("nan", nrow(dat)) + } else { + QC <- as.numeric(qc$value) + } + + if (band_sd == "") + { + SD <- rep("nan", nrow(dat)) + } else { + SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f') + } output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F) names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd") @@ -166,13 +167,51 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = output$lon <- round(output$lon, 4) if (QC_filter == T) + { + output$qc == as.character(output$qc) + for (i in 1:nrow(output)) + { + convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "") + output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert)) + } + good = which(output$qc == "000" | output$qc == "001") + if (length(good) > 0 | !(is.null(good))) { - output$qc == as.character(output$qc) - for (i in 1:nrow(output)) - { - convert = paste(binaryLogic::as.binary(as.integer(ou script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote")) + output = output[good,] + } else { + print("All QC values are bad. 
No data to output with QC filter == TRUE.") + } + } + + if (!(is.null(outfolder))) + { + fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") + fname <- paste0(outfolder, "/", fname) + write.csv(output, fname, row.names = F) + } + + return(output) + } + + + if (package_method == "reticulate"){ + # load in python script + script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote")) #script.path = file.path('/Users/bmorrison/pecan/modules/data.remote/inst/extract_modis_data.py') reticulate::source_python(script.path) # extract the data - output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, start_date = start \ No newline at end of file + output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, start_date = start_date, end_date = end_date, size = size, band_qc = band_qc, band_sd = band_sd) + output[,5:10] <- lapply(output[,5:10], as.numeric) + output$lat <- round(output$lat, 4) + output$lon <- round(output$lon, 4) + + if (!(is.null(outfolder))) + { + fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "") + fname <- paste0(outfolder, "/", fname) + write.csv(output, fname) + } + + return(output)} +} diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 834e4069151..2c5a6ea9133 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -4,9 +4,9 @@ \alias{call_MODIS} \title{call_MODIS} \usage{ -call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0, - product, band, band_qc = "", band_sd = "", - package_method = "MODISTools") +call_MODIS(outfolder = NULL, start_date, end_date, lat, lon, size = 0, + product, band, band_qc = "", band_sd = "", siteID = NULL, + package_method = "MODISTools", QC_filter = F) } \arguments{ \item{outfolder}{where the output file will be stored} @@ -29,7 +29,11 @@ call_MODIS(outfolder = ".", start_date, end_date, lat, lon, size = 0, \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)} -\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) +\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename.} + +\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} + +\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. depends on a number of Python libraries. 
sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} From f2269ddd221f953ef1ba4b7de1ae12ebf9a5b770 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 13:46:08 -0400 Subject: [PATCH 0093/2289] added qc filter + cleaned up --- modules/data.remote/R/call_MODIS.R | 244 +++++++++++++++-------------- 1 file changed, 124 insertions(+), 120 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index b2b30ece258..69d2533b36b 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -15,6 +15,7 @@ ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -33,126 +34,125 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 - + # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) - { - start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") - } + { + start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") + } if (grepl("/", end_date) == T) - { - end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") + { + end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") } start_date = as.Date(start_date, format = "%Y%j") end_date = as.Date(end_date, format = "%Y%j") -#################### if package_method == MODISTools option #################### + #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") - { - -#################### FUNCTION PARAMETER PRECHECKS #################### + { + #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available - products = MODISTools::mt_products() - if (!(product %in% products$product)) - { - print(products) - stop("Product not available for MODIS API. Please chose a product from the list above.") - } - - #2. check that modis produdct band is available - bands <- MODISTools::mt_bands(product = product) - if (!(band %in% bands$band)) - { - print(bands$band) - stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") - } - - #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. 
- dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") - - ########## Date case 1: user only wants one date ########## - if (start_date == end_date) + products = MODISTools::mt_products() + if (!(product %in% products$product)) + { + print(products) + stop("Product not available for MODIS API. Please chose a product from the list above.") + } + + #2. check that modis produdct band is available + bands <- MODISTools::mt_bands(product = product) + if (!(band %in% bands$band)) + { + print(bands$band) + stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") + } + + #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. + dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") + + ########## Date case 1: user only wants one date ########## + if (start_date == end_date) + { + if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) { - if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) - { - print("Extracting data") - - start <- as.Date(start_date, "%Y%j") - end <- as.Date(end_date, "%Y%j") - } - ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ########## - if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)]) - { - print(start) - stop("start or end date are not within MODIS data product date range. Please choose another date.") - } - } else { - ########## Date case 2: user want a range of dates ########## - # Best case scenario: Start and end date asked for fall with available date range of modis data product. - if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) - { - print("Check #2: All dates are available!") - } - - # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble - # MODIS data dates - if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) - { - start_date = dates[1] - print("WARNING: Dates are only partially available. Start date before modis data product is available.") - } - if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)]) - { - end_date = dates[length(dates)] - print("WARNING: Dates are only partially available. End date befire modis data product is available.") - } - - # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario. - if ((as.numeric(start_date)dates[length(dates)] & as.numeric(end_date)>dates[length(dates)])) - { - stop("No MODIS data available start_date and end_date parameterized.") - } + print("Extracting data") start <- as.Date(start_date, "%Y%j") end <- as.Date(end_date, "%Y%j") } + ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ########## + if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)]) + { + print(start) + stop("start or end date are not within MODIS data product date range. 
+      ########## For Date case 1: if only one date is asked for, but date is not within modis data product date range ##########
+      if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)])
+      {
+        print(start)
+        stop("start or end date are not within MODIS data product date range. Please choose another date.")
+      }
+    } else {
+      ########## Date case 2: user wants a range of dates ##########
+      # Best case scenario: Start and end date asked for fall within the available date range of modis data product.
+      if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)])
+      {
+        print("Check #2: All dates are available!")
+      }
+      
+      # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of available
+      # MODIS data dates
+      if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1])
+      {
+        start_date = dates[1]
+        print("WARNING: Dates are only partially available. Start date is before the modis data product is available.")
+      }
+      if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)])
+      {
+        end_date = dates[length(dates)]
+        print("WARNING: Dates are only partially available. End date is after the modis data product's available range.")
+      }
+      
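# The "unacceptable scenario" test that follows is a standard interval-overlap
# check: the request [start_date, end_date] misses the available window only
# when it lies entirely before it or entirely after it. A self-contained
# sketch of the same logic (the function name is hypothetical):
ranges_disjoint <- function(req_start, req_end, avail_start, avail_end) {
  (req_end < avail_start) | (req_start > avail_end)
}
ranges_disjoint(as.Date("1990-01-01"), as.Date("1990-12-31"),
                as.Date("2000-02-18"), as.Date("2017-12-31"))  # TRUE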
+      # Unacceptable scenario: start_date and end_date do not fall within the available MODIS data product date range. There is no data to extract in this scenario.
+      if ((as.numeric(start_date)<dates[1] & as.numeric(end_date)<dates[1]) | (as.numeric(start_date)>dates[length(dates)] & as.numeric(end_date)>dates[length(dates)]))
+      {
+        stop("No MODIS data available for the start_date and end_date requested.")
+      }
+      
+      start <- as.Date(start_date, "%Y%j")
+      end <- as.Date(end_date, "%Y%j")
+    }
+    
+    print("Extracting data")
+    cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " "))
+    
+    # extract main band data from api
+    dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band,
+                                 start=start_date, end=end_date, km_ab=size, km_lr=size)
+    
+    # extract QC data
+    if(band_qc != "")
+    {
+      qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
+                                  start=start, end=end, km_ab=size, km_lr=size)
+    }
+    
+    # extract stdev data
+    if(band_sd != "")
+    {
+      sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd,
+                                  start=start, end=end, km_ab=size, km_lr=size)
+    }
+    
+    if (band_qc == "")
+    {
+      QC <- rep("nan", nrow(dat))
+    } else {
+      QC <- as.numeric(qc$value)
+    }
+    
+    if (band_sd == "")
+    {
+      SD <- rep("nan", nrow(dat))
+    } else {
+      SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f')
+    }
 
   output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F)
   names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd")
@@ -166,13 +166,13 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   output$lon <- round(output$lon, 4)
 
   if (QC_filter == T)
+  {
+    output$qc = as.character(output$qc)
+    for (i in 1:nrow(output))
     {
-      output$qc == as.character(output$qc)
-      for (i in 1:nrow(output))
-      {
-        convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
-        output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert))
-      }
+      convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
+      output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert))
+    }
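# For reference, the loop above keeps observations whose three least-significant
# QC bits decode to "000" or "001" (good quality per the MODIS documentation).
# A vectorised sketch of the same test in base R, without binaryLogic
# (illustrative only, assuming integer QC values):
qc   <- c(0L, 1L, 2L, 66L)
keep <- bitwAnd(qc, 7L) <= 1L   # bitwAnd(qc, 7) extracts the three lowest bits
keep                            # TRUE TRUE FALSE FALSE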
     good = which(output$qc == "000" | output$qc == "001")
     if (length(good) > 0 | !(is.null(good))) {
@@ -180,18 +180,18 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
     } else {
       print("All QC values are bad. No data to output with QC filter == TRUE.")
     }
-    }
+  }
 
   if (!(is.null(outfolder)))
-  {
-    fname <- paste(product, "_", band, "output_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-    fname <- paste0(outfolder, "/", fname)
-    write.csv(output, fname, row.names = F)
-  }
+  {
+    fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "")
+    fname <- paste0(outfolder, "/", fname)
+    write.csv(output, fname, row.names = F)
+  }
 
   return(output)
-  }
-  
+  }
+  
   if (package_method == "reticulate"){
 
     # load in python script
@@ -205,8 +205,12 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   output$lat <- round(output$lat, 4)
   output$lon <- round(output$lon, 4)
 
-  #fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-  #fname <- paste0(outfolder, "/", fname)
-  #write.csv(output, fname)
+  if (!(is.null(outfolder)))
+  {
+    fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
+    fname <- paste0(outfolder, "/", fname)
+    write.csv(output, fname)
+  }
+  
   return(output)}
 }

From 40b62d7edfa6cddcfd83ec371a8d72d78234c6c4 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Mon, 8 Apr 2019 14:18:53 -0400
Subject: [PATCH 0094/2289] fix

---
 modules/data.remote/R/call_MODIS.R | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 69d2533b36b..cfcc636ca98 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -14,8 +14,11 @@
 ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional)
 ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)
 ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)
+<<<<<<< HEAD
 ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename.
 ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.
+=======
+>>>>>>> parent of 3463070... updated date format and alternative date range issues
 ##'
 ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json
 ##' depends on the MODISTools package version 1.1.0
@@ -34,6 +37,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support.
   size <- 0
+<<<<<<< HEAD
 
   # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ
   if (grepl("/", start_date) == T)
@@ -53,14 +57,37 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   {
     #################### FUNCTION PARAMETER PRECHECKS ####################
     #1. check that modis product is available
+=======
+  # set start and end dates to correct format
+  if (package_method == "MODISTools"){
+    
+>>>>>>> parent of 3463070... 
updated date format and alternative date range issues products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") +<<<<<<< HEAD } #2. check that modis produdct band is available +======= + } else { + print("Check #1: Product exists!") + } + + + dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.numeric(substr(dates, 2, nchar(dates))) + if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) + { + print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) + stop("Please choose dates between the date range listed above.") + } else { + print("Check #2: Dates are available!") + } + +>>>>>>> parent of 3463070... updated date format and alternative date range issues bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 73c7f04712ab4e9ef7e737def514a80d695cba45 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:20:10 -0400 Subject: [PATCH 0095/2289] fix1 --- modules/data.remote/R/call_MODIS.R | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index cfcc636ca98..73c17523cbb 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -15,10 +15,15 @@ ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) <<<<<<< HEAD +<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ======= >>>>>>> parent of 3463070... updated date format and alternative date range issues +======= +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. +>>>>>>> parent of ab371da... Delete call_MODIS.R ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -38,6 +43,9 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = size <- 0 <<<<<<< HEAD +<<<<<<< HEAD +======= +>>>>>>> parent of ab371da... Delete call_MODIS.R # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -57,16 +65,20 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = { #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available +<<<<<<< HEAD ======= # set start and end dates to correct format if (package_method == "MODISTools"){ >>>>>>> parent of 3463070... updated date format and alternative date range issues +======= +>>>>>>> parent of ab371da... 
Delete call_MODIS.R products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") +<<<<<<< HEAD <<<<<<< HEAD } @@ -88,6 +100,11 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = } >>>>>>> parent of 3463070... updated date format and alternative date range issues +======= + } + + #2. check that modis produdct band is available +>>>>>>> parent of ab371da... Delete call_MODIS.R bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 944cf7122c1e5ceb9dfc83385e415a669010e78a Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:36:16 -0400 Subject: [PATCH 0096/2289] trying to fix pr --- modules/data.remote/R/call_MODIS.R | 46 +----------------------------- 1 file changed, 1 insertion(+), 45 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 73c17523cbb..d3a23d94536 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,16 +14,8 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) -<<<<<<< HEAD -<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. -======= ->>>>>>> parent of 3463070... updated date format and alternative date range issues -======= -##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. -##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ->>>>>>> parent of ab371da... Delete call_MODIS.R ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -42,10 +34,6 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 -<<<<<<< HEAD -<<<<<<< HEAD -======= ->>>>>>> parent of ab371da... Delete call_MODIS.R # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -63,48 +51,16 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") { - #################### FUNCTION PARAMETER PRECHECKS #################### + #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available -<<<<<<< HEAD -======= - # set start and end dates to correct format - if (package_method == "MODISTools"){ - ->>>>>>> parent of 3463070... 
updated date format and alternative date range issues -======= ->>>>>>> parent of ab371da... Delete call_MODIS.R products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") -<<<<<<< HEAD -<<<<<<< HEAD - } - - #2. check that modis produdct band is available -======= - } else { - print("Check #1: Product exists!") - } - - - dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.numeric(substr(dates, 2, nchar(dates))) - if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) - { - print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) - stop("Please choose dates between the date range listed above.") - } else { - print("Check #2: Dates are available!") - } - ->>>>>>> parent of 3463070... updated date format and alternative date range issues -======= } #2. check that modis produdct band is available ->>>>>>> parent of ab371da... Delete call_MODIS.R bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 0a2176612d8c39312e4229a38319a76c661e2e61 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:48:55 -0400 Subject: [PATCH 0097/2289] trying to fix 2 --- modules/data.remote/R/call_MODIS.R | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index d3a23d94536..dee593a6e28 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,8 +14,11 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) +<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. +======= +>>>>>>> parent of 3463070... updated date format and alternative date range issues ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -34,6 +37,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 +<<<<<<< HEAD # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -53,14 +57,37 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = { #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available +======= + # set start and end dates to correct format + if (package_method == "MODISTools"){ + +>>>>>>> parent of 3463070... 
updated date format and alternative date range issues products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") +<<<<<<< HEAD } #2. check that modis produdct band is available +======= + } else { + print("Check #1: Product exists!") + } + + + dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.numeric(substr(dates, 2, nchar(dates))) + if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) + { + print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) + stop("Please choose dates between the date range listed above.") + } else { + print("Check #2: Dates are available!") + } + +>>>>>>> parent of 3463070... updated date format and alternative date range issues bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From b8377890fa962531eb034c4f2f00152ef7b04b91 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:50:05 -0400 Subject: [PATCH 0098/2289] Revert "trying to fix 2" This reverts commit 0a2176612d8c39312e4229a38319a76c661e2e61. --- modules/data.remote/R/call_MODIS.R | 27 --------------------------- 1 file changed, 27 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index dee593a6e28..d3a23d94536 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,11 +14,8 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) -<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. -======= ->>>>>>> parent of 3463070... updated date format and alternative date range issues ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -37,7 +34,6 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 -<<<<<<< HEAD # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -57,37 +53,14 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = { #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available -======= - # set start and end dates to correct format - if (package_method == "MODISTools"){ - ->>>>>>> parent of 3463070... updated date format and alternative date range issues products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") -<<<<<<< HEAD } #2. 
check that modis produdct band is available -======= - } else { - print("Check #1: Product exists!") - } - - - dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.numeric(substr(dates, 2, nchar(dates))) - if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) - { - print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) - stop("Please choose dates between the date range listed above.") - } else { - print("Check #2: Dates are available!") - } - ->>>>>>> parent of 3463070... updated date format and alternative date range issues bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 6e9dc59f010c5dbfd5e85747eeefb319c6288eb3 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:51:42 -0400 Subject: [PATCH 0099/2289] Revert "updated sipnet read.restart with LAI" This reverts commit 3e2879d5ed8c3b4cf2d72c6110403eaf92f67091. --- models/sipnet/R/read_restart.SIPNET.R | 5 ----- 1 file changed, 5 deletions(-) diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index d8b69d1929f..f98c3ef73e7 100644 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -87,11 +87,6 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p forecast[[length(forecast) + 1]] <- udunits2::ud.convert(ens$TotLivBiom[last], "kg/m^2", "Mg/ha") names(forecast[[length(forecast)]]) <- c("TotLivBiom") } - - if ("LAI" %in% var.names) { - forecast[[length(forecast) + 1]] <- ens$LAI[last] ## m2/m2 - names(forecast[[length(forecast)]]) <- c("LAI") - } print(runid) From dc8a98bdc192e6113972c332e0eec79ac407f9b0 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:53:05 -0400 Subject: [PATCH 0100/2289] Revert "trying to fix pr" This reverts commit 944cf7122c1e5ceb9dfc83385e415a669010e78a. --- modules/data.remote/R/call_MODIS.R | 46 +++++++++++++++++++++++++++++- 1 file changed, 45 insertions(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index d3a23d94536..73c17523cbb 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,8 +14,16 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) +<<<<<<< HEAD +<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. +======= +>>>>>>> parent of 3463070... updated date format and alternative date range issues +======= +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. 
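# For reference, a usage sketch of the QC_filter flag documented above; the
# product, bands, coordinates, and dates below are illustrative examples only:
lai <- call_MODIS(product = "MOD15A2H", band = "Lai_500m",
                  start_date = "2004300", end_date = "2004365",
                  lat = 38.0, lon = -120.0,
                  band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m",
                  package_method = "MODISTools", QC_filter = TRUE)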
+>>>>>>> parent of ab371da... Delete call_MODIS.R ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -34,6 +42,10 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 +<<<<<<< HEAD +<<<<<<< HEAD +======= +>>>>>>> parent of ab371da... Delete call_MODIS.R # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -51,16 +63,48 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") { - #################### FUNCTION PARAMETER PRECHECKS #################### + #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available +<<<<<<< HEAD +======= + # set start and end dates to correct format + if (package_method == "MODISTools"){ + +>>>>>>> parent of 3463070... updated date format and alternative date range issues +======= +>>>>>>> parent of ab371da... Delete call_MODIS.R products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") +<<<<<<< HEAD +<<<<<<< HEAD + } + + #2. check that modis produdct band is available +======= + } else { + print("Check #1: Product exists!") + } + + + dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date + dates <- as.numeric(substr(dates, 2, nchar(dates))) + if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) + { + print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) + stop("Please choose dates between the date range listed above.") + } else { + print("Check #2: Dates are available!") + } + +>>>>>>> parent of 3463070... updated date format and alternative date range issues +======= } #2. check that modis produdct band is available +>>>>>>> parent of ab371da... Delete call_MODIS.R bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 48b4e307c4f0dc4ed4acd1ab8d8a5272a3d391f8 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 14:55:25 -0400 Subject: [PATCH 0101/2289] checking fixes --- modules/data.remote/R/call_MODIS.R | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 73c17523cbb..b4d29217547 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -16,6 +16,7 @@ ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ======= @@ -24,6 +25,10 @@ ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. 
##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. >>>>>>> parent of ab371da... Delete call_MODIS.R +======= +##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. +>>>>>>> parent of 40b62d7... fix ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -42,10 +47,13 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 +<<<<<<< HEAD <<<<<<< HEAD <<<<<<< HEAD ======= >>>>>>> parent of ab371da... Delete call_MODIS.R +======= +>>>>>>> parent of 40b62d7... fix # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -66,6 +74,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available <<<<<<< HEAD +<<<<<<< HEAD ======= # set start and end dates to correct format if (package_method == "MODISTools"){ @@ -73,12 +82,15 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = >>>>>>> parent of 3463070... updated date format and alternative date range issues ======= >>>>>>> parent of ab371da... Delete call_MODIS.R +======= +>>>>>>> parent of 40b62d7... fix products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") <<<<<<< HEAD +<<<<<<< HEAD <<<<<<< HEAD } @@ -105,6 +117,11 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = #2. check that modis produdct band is available >>>>>>> parent of ab371da... Delete call_MODIS.R +======= + } + + #2. check that modis produdct band is available +>>>>>>> parent of 40b62d7... fix bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 78b73af921e9badafde57662fc58c8c79f1498d8 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 15:09:43 -0400 Subject: [PATCH 0102/2289] fixed weird github formatting --- modules/data.remote/R/call_MODIS.R | 70 ++---------------------------- 1 file changed, 4 insertions(+), 66 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index b4d29217547..bde30840128 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -14,21 +14,8 @@ ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) ##' @param package_method string value to inform function of which package method to use to download modis data. 
Either "MODISTools" or "reticulate" (optional) -<<<<<<< HEAD -<<<<<<< HEAD -<<<<<<< HEAD ##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. -======= ->>>>>>> parent of 3463070... updated date format and alternative date range issues -======= -##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. -##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ->>>>>>> parent of ab371da... Delete call_MODIS.R -======= -##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. -##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. ->>>>>>> parent of 40b62d7... fix ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -46,16 +33,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 - -<<<<<<< HEAD -<<<<<<< HEAD -<<<<<<< HEAD -======= ->>>>>>> parent of ab371da... Delete call_MODIS.R -======= ->>>>>>> parent of 40b62d7... fix - - # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ +# reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) { start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") @@ -67,61 +45,21 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = } start_date = as.Date(start_date, format = "%Y%j") end_date = as.Date(end_date, format = "%Y%j") - + + #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") { - #################### FUNCTION PARAMETER PRECHECKS #################### + #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available -<<<<<<< HEAD -<<<<<<< HEAD -======= - # set start and end dates to correct format - if (package_method == "MODISTools"){ - ->>>>>>> parent of 3463070... updated date format and alternative date range issues -======= ->>>>>>> parent of ab371da... Delete call_MODIS.R -======= ->>>>>>> parent of 40b62d7... fix products = MODISTools::mt_products() if (!(product %in% products$product)) { print(products) stop("Product not available for MODIS API. Please chose a product from the list above.") -<<<<<<< HEAD -<<<<<<< HEAD -<<<<<<< HEAD - } - - #2. 
check that modis produdct band is available -======= - } else { - print("Check #1: Product exists!") - } - - - dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.numeric(substr(dates, 2, nchar(dates))) - if (as.numeric(start_date) <= dates[1] | as.numeric(end_date) >= dates[length(dates)]) - { - print(paste("Range of dates for product are ", dates[1], " - ", dates[length(dates)], sep = "")) - stop("Please choose dates between the date range listed above.") - } else { - print("Check #2: Dates are available!") - } - ->>>>>>> parent of 3463070... updated date format and alternative date range issues -======= - } - - #2. check that modis produdct band is available ->>>>>>> parent of ab371da... Delete call_MODIS.R -======= } #2. check that modis produdct band is available ->>>>>>> parent of 40b62d7... fix bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { From 30e138a3720f543852e623f50b146331a1e46af4 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 8 Apr 2019 15:11:27 -0400 Subject: [PATCH 0103/2289] roxygenized data.remote --- modules/data.remote/man/call_MODIS.Rd | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 31cdcaf5969..2c5a6ea9133 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -29,12 +29,14 @@ call_MODIS(outfolder = NULL, start_date, end_date, lat, lon, size = 0, \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)} -\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename. +\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename.} + +\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} + +\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} - -\item{package_method}{string value to inform function of which package method to use to download modis data. 
Either "MODISTools" or "reticulate" (optional)} } \description{ Get MODIS data by date and location From 07e8d79c800c3f9156f01cb7d34989ff6faa6a63 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Wed, 10 Apr 2019 14:32:39 -0400 Subject: [PATCH 0104/2289] fixed outfolder problem --- modules/data.remote/R/call_MODIS.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 1350333d62f..77ae302d9c6 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -29,7 +29,7 @@ ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = F) { +call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 @@ -183,7 +183,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = } } - if (!(is.null(outfolder))) + if (!(outfolder == "")) { fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") fname <- paste0(outfolder, "/", fname) From ea8edef5b36eec73dabbc00df0cd273bd1d92955 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Wed, 10 Apr 2019 14:33:45 -0400 Subject: [PATCH 0105/2289] fixed outfolder problem --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index bde30840128..47eee77ffdc 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -181,7 +181,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = } } - if (!(is.null(outfolder))) + if (!(outfolder == "")) { fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") fname <- paste0(outfolder, "/", fname) From a0cf5a3482214463c2eb40b7868eef8f6eb6077b Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 14 Apr 2019 08:11:40 -0400 Subject: [PATCH 0106/2289] making pda functions remote friendly --- modules/assim.batch/R/pda.emulator.R | 53 ++++++++++++++------ modules/assim.batch/R/pda.get.model.output.R | 11 ++-- modules/assim.batch/R/pda.load.data.R | 9 +++- modules/assim.batch/R/pda.postprocess.R | 43 +++++++++------- 4 files changed, 78 insertions(+), 38 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 2e9c66030d8..fe58e6f1e7d 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -12,18 +12,21 @@ ##' @author Mike Dietze ##' @author Ryan Kelly, Istem Fer ##' @export -pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, external.knots = NULL, - params.id = NULL, param.names = NULL, prior.id = NULL, +pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, + external.knots = NULL, external.formats = NULL, + ensemble.id = NULL, params.id = NULL, param.names = NULL, prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, - ar.target = NULL, jvar = NULL, n.knot = NULL, individual = TRUE) { + ar.target = NULL, jvar = NULL, n.knot = 
NULL, + individual = TRUE, remote = FALSE) { ## this bit of code is useful for defining the variables passed to this function if you are ## debugging if (FALSE) { - external.data <- external.priors <- NULL - params.id <- param.names <- prior.id <- chain <- iter <- NULL + external.data <- external.priors <- external.knots <- external.formats <- NULL + ensemble.id <- params.id <- param.names <- prior.id <- chain <- iter <- NULL n.knot <- adapt <- adj.min <- ar.target <- jvar <- NULL individual <- TRUE + remote <- FALSE } # handle extention flags @@ -76,7 +79,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, pass2bias <- NULL ## Open database connection - if (settings$database$bety$write) { + if (as.logical(settings$database$bety$write) & !remote) { con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) if (inherits(con, "try-error")) { con <- NULL @@ -87,10 +90,16 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, con <- NULL } - bety <- dplyr::src_postgres(dbname = settings$database$bety$dbname, - host = settings$database$bety$host, - user = settings$database$bety$user, - password = settings$database$bety$password) + if(!remote){ + bety <- dplyr::src_postgres(dbname = settings$database$bety$dbname, + host = settings$database$bety$host, + user = settings$database$bety$user, + password = settings$database$bety$password) + }else{ + bety <- list() + bety$con <- NULL + } + ## Load priors if(is.null(external.priors)){ @@ -105,7 +114,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, if(is.null(external.data)){ - inputs <- load.pda.data(settings, bety) + inputs <- load.pda.data(settings, bety, external.formats) }else{ inputs <- external.data } @@ -146,7 +155,12 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, } ## Create an ensemble id - settings$assim.batch$ensemble.id <- pda.create.ensemble(settings, con, workflow.id) + if(is.null(ensemble.id)){ + settings$assim.batch$ensemble.id <- pda.create.ensemble(settings, con, workflow.id) + }else{ + settings$assim.batch$ensemble.id <- ensemble.id + } + ## history restart pda.restart.file <- file.path(settings$outdir,paste0("history.pda", @@ -254,7 +268,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) ## start model runs - PEcAn.remote::start.model.runs(settings, settings$database$bety$write) + PEcAn.remote::start.model.runs(settings, (as.logical(settings$database$bety$write) & !remote)) ## Retrieve model outputs and error statistics model.out <- list() @@ -263,7 +277,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ## read model outputs for (i in seq_len(settings$assim.batch$n.knot)) { - align.return <- pda.get.model.output(settings, run.ids[i], bety, inputs) + align.return <- pda.get.model.output(settings, run.ids[i], bety, inputs, external.formats) model.out[[i]] <- align.return$model.out if(all(!is.na(model.out[[i]]))){ inputs <- align.return$inputs @@ -711,6 +725,8 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, } } + settings$assim.batch$round_counter <- which_round + settings <- pda.postprocess(settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig) ## close database connection @@ -721,6 +737,13 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ## Output an updated 
settings list current.step <- "pda.finish" save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) - return(settings) + + if(!remote){ + return(settings) + }else{ + #sync back + return(settings) + } + } ## end pda.emulator diff --git a/modules/assim.batch/R/pda.get.model.output.R b/modules/assim.batch/R/pda.get.model.output.R index 0aac44f49b6..5bd214b3c81 100644 --- a/modules/assim.batch/R/pda.get.model.output.R +++ b/modules/assim.batch/R/pda.get.model.output.R @@ -8,7 +8,7 @@ ##' ##' @author Ryan Kelly, Istem Fer ##' @export -pda.get.model.output <- function(settings, run.id, bety, inputs) { +pda.get.model.output <- function(settings, run.id, bety, inputs, external.formats = NULL) { input.info <- settings$assim.batch$inputs @@ -30,10 +30,13 @@ pda.get.model.output <- function(settings, run.id, bety, inputs) { # if no derivation is requested expr will be the same as variable name expr <- lapply(variable.name, `[[`, "expression") - format <- PEcAn.DB::query.format.vars(bety = bety, - input.id = settings$assim.batch$inputs[[k]]$input.id) + if(is.null(bety$con)){ + format <- external.formats[[k]] + }else{ + format <- PEcAn.DB::query.format.vars(bety = bety, + input.id = settings$assim.batch$inputs[[k]]$input.id) + } - for(l in seq_along(model.var)){ if(length(model.var[[l]][model.var[[l]] %in% format$vars$bety_name]) != 0){ diff --git a/modules/assim.batch/R/pda.load.data.R b/modules/assim.batch/R/pda.load.data.R index fdd96cac17a..1df1533fca8 100644 --- a/modules/assim.batch/R/pda.load.data.R +++ b/modules/assim.batch/R/pda.load.data.R @@ -10,7 +10,7 @@ ##' ##' @author Ryan Kelly, Istem Fer ##' @export -load.pda.data <- function(settings, bety) { +load.pda.data <- function(settings, bety, external.formats = NULL) { # Outlining setup for multiple datasets @@ -37,7 +37,12 @@ load.pda.data <- function(settings, bety) { PEcAn.logger::logger.error("Must provide both ID and PATH for all data assimilation inputs.") } - format <- PEcAn.DB::query.format.vars(bety = bety, input.id = inputs[[i]]$input.id) + if(is.null(bety$con)){ + format <- external.formats[[i]] + }else{ + format <- PEcAn.DB::query.format.vars(bety = bety, input.id = inputs[[i]]$input.id) + } + vars.used.index <- which(format$vars$bety_name %in% data.var) diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index eaa47bc76d2..e472937991c 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -43,21 +43,22 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. 
params.pft <- params.subset[[i]] save(params.pft, file = filename.mcmc) - - ## create a new Posteriors DB entry - pft.id <- PEcAn.DB::db.query(paste0("SELECT pfts.id FROM pfts, modeltypes WHERE pfts.name='", - settings$pfts[[i]]$name, - "' and pfts.modeltype_id=modeltypes.id and modeltypes.name='", - settings$model$type, "'"), - con)[["id"]] - - - posteriorid <- PEcAn.DB::db.query(paste0("INSERT INTO posteriors (pft_id) VALUES (", - pft.id, ") RETURNING id"), con) - - - PEcAn.logger::logger.info(paste0("--- Posteriorid for ", settings$pfts[[i]]$name, " is ", posteriorid, " ---")) - settings$pfts[[i]]$posteriorid <- posteriorid + if(!is.null(con)){ + ## create a new Posteriors DB entry + pft.id <- PEcAn.DB::db.query(paste0("SELECT pfts.id FROM pfts, modeltypes WHERE pfts.name='", + settings$pfts[[i]]$name, + "' and pfts.modeltype_id=modeltypes.id and modeltypes.name='", + settings$model$type, "'"), + con)[["id"]] + + + posteriorid <- PEcAn.DB::db.query(paste0("INSERT INTO posteriors (pft_id) VALUES (", + pft.id, ") RETURNING id"), con) + + + PEcAn.logger::logger.info(paste0("--- Posteriorid for ", settings$pfts[[i]]$name, " is ", posteriorid, " ---")) + settings$pfts[[i]]$posteriorid <- posteriorid + } ## save named distributions ## *** TODO: Generalize for multiple PFTS @@ -70,7 +71,11 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. paste0("post.distns.pda.", settings$pfts[[i]]$name, "_", settings$assim.batch$ensemble.id, sffx, ".Rdata")) save(post.distns, file = filename) - PEcAn.DB::dbfile.insert(dirname(filename), basename(filename), "Posterior", posteriorid, con) + + if(!is.null(con)){ + PEcAn.DB::dbfile.insert(dirname(filename), basename(filename), "Posterior", posteriorid, con) + } + # Symlink to post.distns.Rdata (no ensemble.id identifier) if (file.exists(file.path(dirname(filename), "post.distns.Rdata"))) { @@ -101,7 +106,11 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. 
"_", settings$assim.batch$ensemble.id, sffx, ".Rdata")) save(trait.mcmc, file = filename) - PEcAn.DB::dbfile.insert(dirname(filename), basename(filename), "Posterior", posteriorid, con) + + if(!is.null(con)){ + PEcAn.DB::dbfile.insert(dirname(filename), basename(filename), "Posterior", posteriorid, con) + } + } #end of loop over PFTs ## save updated settings XML From 08617e5af554bbaa90619500b588af089f6dc8dc Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 14 Apr 2019 08:12:17 -0400 Subject: [PATCH 0107/2289] first pass on some remote helper functions --- modules/assim.batch/R/pda.utils.R | 129 ++++++++++++++++++++++++++++++ 1 file changed, 129 insertions(+) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index bc2f6f1321a..d7f3bceb29c 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1023,3 +1023,132 @@ sample_MCMC <- function(mcmc_path, n.param.orig, prior.ind.orig, n.post.knots, k } +# This is a helper function partly uses pda.emulator code +# It returns couple of objects that are to be passed externally to the pda.emulator for multi-site runs +# These objects either require DB connection or should be common across sites +# This function does require DB connection +##' @export +return_multi_site_objects <- function(multi.settings){ + + settings <- multi.settings[[1]] # first one is as good as any + + ## check if scaling factors are gonna be used + any.scaling <- sapply(settings$assim.batch$param.names, `[[`, "scaling") + sf <- unique(unlist(any.scaling)) + + bety <- dplyr::src_postgres(dbname = settings$database$bety$dbname, + host = settings$database$bety$host, + user = settings$database$bety$user, + password = settings$database$bety$password) + + # get prior.list + temp <- pda.load.priors(settings, bety$con, TRUE) + prior_list <- temp$prior + + # extract other indices to fenerate knots + pname <- lapply(prior_list, rownames) + n.param.all <- sapply(prior_list, nrow) + + ## Select parameters to constrain + all_pft_names <- sapply(settings$pfts, `[[`, "name") + prior.ind <- prior.ind.orig <- vector("list", length(settings$pfts)) + names(prior.ind) <- names(prior.ind.orig) <- all_pft_names + for(i in seq_along(settings$pfts)){ + pft.name <- settings$pfts[[i]]$name + if(pft.name %in% names(settings$assim.batch$param.names)){ + prior.ind[[i]] <- which(pname[[i]] %in% settings$assim.batch$param.names[[pft.name]]) + prior.ind.orig[[i]] <- which(pname[[i]] %in% settings$assim.batch$param.names[[pft.name]] | + pname[[i]] %in% any.scaling[[pft.name]]) + } + } + + n.param <- sapply(prior.ind, length) + n.param.orig <- sapply(prior.ind.orig, length) + + ## Set prior distribution functions (d___, q___, r___, and multivariate versions) + prior.fn <- lapply(prior_list, pda.define.prior.fn) + + # get format.list + input_ids <- sapply(settings$assim.batch$inputs, `[[`, "input.id") + format_list <- lapply(input_ids, PEcAn.DB::query.format.vars, bety = bety) + + # get knots + # if this is the initial round we will draw from priors + if(TRUE){ + + ## Propose parameter knots (X) for emulator design + knots.list <- lapply(seq_along(settings$pfts), + function(x) pda.generate.knots(as.numeric(settings$assim.batch$n.knot), NULL, NULL, + n.param.all[x], + prior.ind.orig[[x]], + prior.fn[[x]], + pname[[x]])) + names(knots.list) <- sapply(settings$pfts,"[[",'name') + external_knots <- lapply(knots.list, `[[`, "params") + + }else{ + # if not, we have to bring all the MCMC samples from all sites and draw from them. 
+ } + + ensembleid_list <- sapply(multi.settings, function(x) pda.create.ensemble(x, bety$con, x$workflow$id)) + + return(list(priorlist = prior_list, + formatlist = format_list, + externalknots = external_knots, + ensembleidlist = ensembleid_list)) +} + + +# helper function for submitting remote pda runs +##' @export +prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ + + multi_site_objects$settings <- settings + + #save + local_object_file <- paste0(settings$outdir, "/multi_site_objects_s",site,".Rdata") + remote_object_file <- paste0(settings$host$folder, "/" , settings$workflow$id, "/multi_site_objects_s",site,".Rdata") + save(multi_site_objects, file = local_object_file) + + ######## prepare the sub.sh + # this will need generalization over other machines, can parse some of these from settings$host$qsub + local_sub_file <- paste0(settings$outdir, "/sub" , site, ".sh") + cat("#!/bin/sh\n", file = local_sub_file) + remote_dir <- paste0(settings$host$folder, "/" , settings$workflow$id) + cat(paste0("#$ -wd ", remote_dir, "\n"), file = local_sub_file, append = TRUE) + cat("#$ -j y\n", file = local_sub_file, append = TRUE) + cat("#$ -S /bin/bash\n", file = local_sub_file, append = TRUE) + cat("#$ -V\n", file = local_sub_file, append = TRUE) + # parse 'geo*' from settings$host$qsub + # gsub( " .*$", "", sub(".*-q ", "", settings$host$qsub)) + cat("#$ -q 'geo*'\n", file = local_sub_file, append = TRUE) + cat(paste0("#$ -N emulator_s", site,"\n"), file = local_sub_file, append = TRUE) + #cat("#$ -pe omp 3\n", file = local_sub_file, append = TRUE) + cat(paste0("#cd ", remote_dir, "\n"), file = local_sub_file, append = TRUE) + cat(paste0("#", settings$host$prerun, "\n"), file = local_sub_file, append = TRUE) + cat(paste0("Rscript remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE) + cat(paste0("rm remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE) + remote_sub_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/sub" , site, ".sh") + + ######## create R script + pdaemulator <- readLines('~/pecan/modules/assim.batch/R/pda.emulator.R',-1) + local_script_file <- paste0(settings$outdir, "/remote_emulator_s",site,".R") + first_lines <- c("rm(list=ls(all=TRUE))\n", "library(PEcAn.all)\n", paste0("load(\"",remote_object_file,"\")\n"), + "settings <- multi_site_objects$settings\n", "external_priors <- multi_site_objects$priorlist\n", + "external_knots <- multi_site_objects$externalknots\n", "external_formats <- multi_site_objects$formatlist\n", + paste0("ensemble_id <- multi_site_objects$ensembleidlist[", site, "]\n")) + last_lines <- c("pda.emulator(settings, external.priors = external_priors, + external.knots = external_knots, external.formats = external_formats, + ensemble.id = ensemble_id, remote = TRUE)") + writeLines(c(first_lines, pdaemulator, last_lines), local_script_file) + remote_script_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/remote_emulator_s", site,".R") + + ######## copy to remote + remote.execute.cmd(settings$host, paste0("mkdir -p ", settings$host$folder, "/", settings$workflow$id)) + remote.copy.to(settings$host, local_sub_file, remote_sub_file) + remote.copy.to(settings$host, local_script_file, remote_script_file) + remote.copy.to(settings$host, local_object_file, remote_object_file) + + return(remote_sub_file) + +} From 09e781e4408dbe2fc631058ece37ea740f518297 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 14 Apr 2019 08:31:07 -0400 Subject: [PATCH 0108/2289] roxygenise --- 
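The regenerated NAMESPACE and .Rd files below are the kind of output produced
by rerunning roxygen on the package; a sketch of the usual command (the path
is illustrative):

    # rebuild NAMESPACE and man/*.Rd from the roxygen comments under R/
    roxygen2::roxygenise("modules/assim.batch")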
modules/assim.batch/NAMESPACE | 2 ++ modules/assim.batch/man/load.pda.data.Rd | 2 +- modules/assim.batch/man/pda.emulator.Rd | 9 +++++---- modules/assim.batch/man/pda.get.model.output.Rd | 3 ++- 4 files changed, 10 insertions(+), 6 deletions(-) diff --git a/modules/assim.batch/NAMESPACE b/modules/assim.batch/NAMESPACE index 56fea96688b..9aab9982693 100644 --- a/modules/assim.batch/NAMESPACE +++ b/modules/assim.batch/NAMESPACE @@ -40,8 +40,10 @@ export(pda.postprocess) export(pda.settings) export(pda.settings.bt) export(pda.sort.params) +export(prepare_pda_remote) export(return.bias) export(return_hyperpars) +export(return_multi_site_objects) export(runModule.assim.batch) export(sample_MCMC) export(write_sf_posterior) diff --git a/modules/assim.batch/man/load.pda.data.Rd b/modules/assim.batch/man/load.pda.data.Rd index 6fabdd500ed..08d67597740 100644 --- a/modules/assim.batch/man/load.pda.data.Rd +++ b/modules/assim.batch/man/load.pda.data.Rd @@ -7,7 +7,7 @@ \usage{ load.L2Ameriflux.cf(file.in) -load.pda.data(settings, bety) +load.pda.data(settings, bety, external.formats = NULL) } \arguments{ \item{file.in}{= the netcdf file of L2 data} diff --git a/modules/assim.batch/man/pda.emulator.Rd b/modules/assim.batch/man/pda.emulator.Rd index fa0fb15fe8e..13123972ba8 100644 --- a/modules/assim.batch/man/pda.emulator.Rd +++ b/modules/assim.batch/man/pda.emulator.Rd @@ -5,10 +5,11 @@ \title{Paramater Data Assimilation using emulator} \usage{ pda.emulator(settings, external.data = NULL, external.priors = NULL, - external.knots = NULL, params.id = NULL, param.names = NULL, - prior.id = NULL, chain = NULL, iter = NULL, adapt = NULL, - adj.min = NULL, ar.target = NULL, jvar = NULL, n.knot = NULL, - individual = TRUE) + external.knots = NULL, external.formats = NULL, ensemble.id = NULL, + params.id = NULL, param.names = NULL, prior.id = NULL, + chain = NULL, iter = NULL, adapt = NULL, adj.min = NULL, + ar.target = NULL, jvar = NULL, n.knot = NULL, individual = TRUE, + remote = FALSE) } \arguments{ \item{settings}{= a pecan settings list} diff --git a/modules/assim.batch/man/pda.get.model.output.Rd b/modules/assim.batch/man/pda.get.model.output.Rd index 5267ea99c68..c478d56f4c5 100644 --- a/modules/assim.batch/man/pda.get.model.output.Rd +++ b/modules/assim.batch/man/pda.get.model.output.Rd @@ -4,7 +4,8 @@ \alias{pda.get.model.output} \title{Get Model Output for PDA} \usage{ -pda.get.model.output(settings, run.id, bety, inputs) +pda.get.model.output(settings, run.id, bety, inputs, + external.formats = NULL) } \arguments{ \item{all}{params are the identically named variables in pda.mcmc / pda.emulator} From d7091d375a99ba91796b5108bb1fa0b825183dc7 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 15 Apr 2019 10:06:06 -0400 Subject: [PATCH 0109/2289] path corrections --- modules/assim.batch/R/pda.emulator.R | 10 ++++++++-- modules/assim.batch/R/pda.utils.R | 2 ++ 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index fe58e6f1e7d..fc7e83962cd 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -163,8 +163,14 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ## history restart - pda.restart.file <- file.path(settings$outdir,paste0("history.pda", - settings$assim.batch$ensemble.id, ".Rdata")) + if(!remote){ + pda.restart.file <- file.path(settings$outdir,paste0("history.pda", + settings$assim.batch$ensemble.id, ".Rdata")) + }else{ + 
pda.restart.file <- paste0(settings$host$folder, "/", settings$workflow$id, "/history.pda", + settings$assim.batch$ensemble.id, ".Rdata") + } + current.step <- "START" ## Set up likelihood functions diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index d7f3bceb29c..d3667f43ef6 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1103,6 +1103,8 @@ return_multi_site_objects <- function(multi.settings){ ##' @export prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ + settings$rundir <- settings$host$rundir + settings$modeloutdir <- settings$host$outdir multi_site_objects$settings <- settings #save From 3cc5815607b0553e30b6ec84dd98e1754e36b364 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 15 Apr 2019 15:58:53 -0400 Subject: [PATCH 0110/2289] few more remote modifications --- modules/assim.batch/R/pda.emulator.R | 34 +++++++++++++++---------- modules/assim.batch/R/pda.postprocess.R | 2 +- modules/assim.batch/R/pda.utils.R | 34 +++++++++++++++++-------- 3 files changed, 46 insertions(+), 24 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index fc7e83962cd..119a3b56c32 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -164,10 +164,14 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ## history restart if(!remote){ - pda.restart.file <- file.path(settings$outdir,paste0("history.pda", + settings_outdir <- settings$outdir + + pda.restart.file <- file.path(settings_outdir, paste0("history.pda", settings$assim.batch$ensemble.id, ".Rdata")) + }else{ - pda.restart.file <- paste0(settings$host$folder, "/", settings$workflow$id, "/history.pda", + settings_outdir <- paste0(settings$host$folder, "/", settings$workflow$id, "/") + pda.restart.file <- paste0(settings_outdir, "history.pda", settings$assim.batch$ensemble.id, ".Rdata") } @@ -545,9 +549,12 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, dcores <- parallel::detectCores() - 1 ncores <- min(max(dcores, 1), settings$assim.batch$chain) - PEcAn.logger::logger.setOutputFile(file.path(settings$outdir, "pda.log")) + + logfile_path <- file.path(settings_outdir, "pda.log") + + PEcAn.logger::logger.setOutputFile(logfile_path) - cl <- parallel::makeCluster(ncores, type="FORK", outfile = file.path(settings$outdir, "pda.log")) + cl <- parallel::makeCluster(ncores, type="FORK", outfile = logfile_path) ## Sample posterior from emulator mcmc.out <- parallel::parLapply(cl, 1:settings$assim.batch$chain, function(chain) { @@ -651,25 +658,25 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file) ## Save emulator, outputs files - settings$assim.batch$emulator.path <- file.path(settings$outdir, + settings$assim.batch$emulator.path <- file.path(settings_outdir, paste0("emulator.pda", settings$assim.batch$ensemble.id, ".Rdata")) save(gp, file = settings$assim.batch$emulator.path) - settings$assim.batch$ss.path <- file.path(settings$outdir, + settings$assim.batch$ss.path <- file.path(settings_outdir, paste0("ss.pda", settings$assim.batch$ensemble.id, ".Rdata")) save(SS, file = settings$assim.batch$ss.path) - settings$assim.batch$mcmc.path <- file.path(settings$outdir, + settings$assim.batch$mcmc.path <- file.path(settings_outdir, paste0("mcmc.list.pda", settings$assim.batch$ensemble.id, ".Rdata")) 
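# The settings_outdir switch introduced above, condensed into a sketch
# (pda_outdir is a hypothetical helper for illustration only, not part
# of this patch):
pda_outdir <- function(settings, remote = FALSE) {
  if (!remote) {
    settings$outdir
  } else {
    paste0(settings$host$folder, "/", settings$workflow$id, "/")
  }
}
# every pda artifact (history, emulator, ss, mcmc, resume) then hangs off
# one root, e.g. file.path(pda_outdir(settings, remote), "ss.pda123.Rdata")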
save(mcmc.samp.list, file = settings$assim.batch$mcmc.path) - settings$assim.batch$resume.path <- file.path(settings$outdir, + settings$assim.batch$resume.path <- file.path(settings_outdir, paste0("resume.pda", settings$assim.batch$ensemble.id, ".Rdata")) @@ -678,14 +685,14 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # save inputs list, this object has been processed for autocorrelation correction # this can take a long time depending on the data, re-load and skip in next iteration external.data <- inputs - save(external.data, file = file.path(settings$outdir, + save(external.data, file = file.path(settings_outdir, paste0("external.", settings$assim.batch$ensemble.id, ".Rdata"))) # save prior.list with bias term if(any(unlist(any.mgauss) == "multipGauss")){ - settings$assim.batch$bias.path <- file.path(settings$outdir, + settings$assim.batch$bias.path <- file.path(settings_outdir, paste0("bias.pda", settings$assim.batch$ensemble.id, ".Rdata")) @@ -694,9 +701,9 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, # save sf posterior if(!is.null(sf)){ - sf.post.filename <- file.path(settings$outdir, + sf.post.filename <- file.path(settings_outdir, paste0("post.distns.pda.sf", "_", settings$assim.batch$ensemble.id, ".Rdata")) - sf.samp.filename <- file.path(settings$outdir, + sf.samp.filename <- file.path(settings_outdir, paste0("samples.pda.sf", "_", settings$assim.batch$ensemble.id, ".Rdata")) sf.prior <- prior.list[[sf.ind]] sf.post.distns <- write_sf_posterior(sf.samp.list, sf.prior, sf.samp.filename) @@ -731,7 +738,8 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, } } - settings$assim.batch$round_counter <- which_round + # will I need a counter? + #settings$assim.batch$round_counter <- which_round settings <- pda.postprocess(settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig) diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index e472937991c..b06b918f574 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -117,7 +117,7 @@ pda.postprocess <- function(settings, con, mcmc.param.list, pname, prior, prior. 
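# A quick sketch of why dirname(settings$modeloutdir) below resolves the
# right place for pecan.pda*.xml in both run modes (illustrative paths,
# not from a real workflow):
dirname("/fs/data2/output/PEcAn_99000000001/out")  # local  -> run root
dirname("/projectnb/remote/99000000001/out")       # remote -> run root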
XML::saveXML( PEcAn.settings::listToXml(settings, "pecan"), file = file.path( - settings$outdir, + dirname(settings$modeloutdir), paste0("pecan.pda", settings$assim.batch$ensemble.id, ".xml"))) return(settings) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index d3667f43ef6..00cc5edf0dd 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1103,20 +1103,16 @@ return_multi_site_objects <- function(multi.settings){ ##' @export prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ - settings$rundir <- settings$host$rundir - settings$modeloutdir <- settings$host$outdir - multi_site_objects$settings <- settings + remote_dir <- paste0(settings$host$folder, "/" , settings$workflow$id) #save local_object_file <- paste0(settings$outdir, "/multi_site_objects_s",site,".Rdata") - remote_object_file <- paste0(settings$host$folder, "/" , settings$workflow$id, "/multi_site_objects_s",site,".Rdata") - save(multi_site_objects, file = local_object_file) + remote_object_file <- paste0(remote_dir, "/multi_site_objects_s",site,".Rdata") ######## prepare the sub.sh # this will need generalization over other machines, can parse some of these from settings$host$qsub local_sub_file <- paste0(settings$outdir, "/sub" , site, ".sh") cat("#!/bin/sh\n", file = local_sub_file) - remote_dir <- paste0(settings$host$folder, "/" , settings$workflow$id) cat(paste0("#$ -wd ", remote_dir, "\n"), file = local_sub_file, append = TRUE) cat("#$ -j y\n", file = local_sub_file, append = TRUE) cat("#$ -S /bin/bash\n", file = local_sub_file, append = TRUE) @@ -1145,11 +1141,29 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ writeLines(c(first_lines, pdaemulator, last_lines), local_script_file) remote_script_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/remote_emulator_s", site,".R") + + #cheating. 
needs to be done after extracting all paths + host_info <- settings$host + + remote.execute.cmd(host_info, paste0("mkdir -p ", remote_dir,"/pft")) + for(i in seq_along(settings$pfts)){ + settings$pfts[[i]]$outdir <- file.path(remote_dir, "pft", basename(settings$pfts[[i]]$outdir)) + remote.execute.cmd(host_info, paste0("mkdir -p ", settings$pfts[[i]]$outdir)) + } + + settings$host$name <- "localhost" + settings$rundir <- settings$host$rundir + settings$modeloutdir <- settings$host$outdir + + multi_site_objects$settings <- settings + save(multi_site_objects, file = local_object_file) + ######## copy to remote - remote.execute.cmd(settings$host, paste0("mkdir -p ", settings$host$folder, "/", settings$workflow$id)) - remote.copy.to(settings$host, local_sub_file, remote_sub_file) - remote.copy.to(settings$host, local_script_file, remote_script_file) - remote.copy.to(settings$host, local_object_file, remote_object_file) + remote.execute.cmd(host_info, paste0("mkdir -p ", settings$host$folder, "/", settings$workflow$id)) + remote.copy.to(host_info, local_sub_file, remote_sub_file) + remote.copy.to(host_info, local_script_file, remote_script_file) + remote.copy.to(host_info, local_object_file, remote_object_file) + return(remote_sub_file) From 4cf9f69ac9bb597e0422649f5fec9a68180c70f9 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 16 Apr 2019 14:34:13 -0400 Subject: [PATCH 0111/2289] minor call_modis fixes --- modules/data.remote/R/call_MODIS.R | 28 ++++++++++++++++----------- modules/data.remote/man/call_MODIS.Rd | 4 ++-- 2 files changed, 19 insertions(+), 13 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 77ae302d9c6..d6cd96750d8 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -22,18 +22,18 @@ ##' ##' @examples ##' \dontrun{ -##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools") +##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T) ##' plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) ##' test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") ##' } ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F) { +call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. 
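# A note on the new `iter` argument in the signature above: it is only
# used to tag output .csv file names so that batched extractions do not
# overwrite each other. The zero padding added for it in a later commit
# (spatial.tools::add_leading_zeroes) is equivalent to this base R sketch:
sprintf("%04d", 3)  # "0003" -- per-site file counter
sprintf("%02d", 3)  # "03"   -- per-coordinate file counter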
size <- 0 - + require(PEcAn.all) # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == T) @@ -124,21 +124,21 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " ")) # extract main band data from api - dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start_date, end=end_date, km_ab=size, km_lr=size) + dat <- PEcAn.utils::retry.func(MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, + start=start_date, end=end_date, km_ab=size, km_lr=size, progress = F), maxErrors = 10, sleep = 2) # extract QC data if(band_qc != "") { - qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, - start=start, end=end, km_ab=size, km_lr=size) + qc <- PEcAn.utils::retry.function(MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, + start=start, end=end, km_ab=size, km_lr=size, progress =F), maxErrors = 10, sleep = 2) } # extract stdev data if(band_sd != "") { - sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, - start=start, end=end, km_ab=size, km_lr=size) + sd <- PEcAn.utils::retry.function(MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, + start=start, end=end, km_ab=size, km_lr=size, progress = F), maxErrors = 10, sleep = 2) } if (band_qc == "") @@ -183,9 +183,15 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, } } - if (!(outfolder == "")) + if (!(outfolder == "") & !(is.null(siteID))) + { + fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", iter, ".csv", sep = "") + fname <- paste0(outfolder, "/", fname) + write.csv(output, fname, row.names = F) + } + if (!(outfolder == "") & is.null(siteID)) { - fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") + fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", iter, ".csv", sep = "") fname <- paste0(outfolder, "/", fname) write.csv(output, fname, row.names = F) } diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 2c5a6ea9133..cdf40a11eaf 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -4,8 +4,8 @@ \alias{call_MODIS} \title{call_MODIS} \usage{ -call_MODIS(outfolder = NULL, start_date, end_date, lat, lon, size = 0, - product, band, band_qc = "", band_sd = "", siteID = NULL, +call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, + product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F) } \arguments{ From 6ffa6c613f3bd768b04c0c27d0d88655cdc2f9e0 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 16 Apr 2019 16:45:49 -0400 Subject: [PATCH 0112/2289] removed retry.func from call_modis --- modules/data.remote/R/call_MODIS.R | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index d6cd96750d8..065a1428b3e 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -124,21 +124,21 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date 
Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " ")) # extract main band data from api - dat <- PEcAn.utils::retry.func(MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start_date, end=end_date, km_ab=size, km_lr=size, progress = F), maxErrors = 10, sleep = 2) + dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, + start=start_date, end=end_date, km_ab=size, km_lr=size, progress = F) # extract QC data if(band_qc != "") { - qc <- PEcAn.utils::retry.function(MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, - start=start, end=end, km_ab=size, km_lr=size, progress =F), maxErrors = 10, sleep = 2) + qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, + start=start, end=end, km_ab=size, km_lr=size, progress =F) } # extract stdev data if(band_sd != "") { - sd <- PEcAn.utils::retry.function(MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, - start=start, end=end, km_ab=size, km_lr=size, progress = F), maxErrors = 10, sleep = 2) + sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, + start=start, end=end, km_ab=size, km_lr=size, progress = F) } if (band_qc == "") @@ -185,13 +185,13 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, if (!(outfolder == "") & !(is.null(siteID))) { - fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", iter, ".csv", sep = "") + fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 4), ".csv", sep = "") fname <- paste0(outfolder, "/", fname) write.csv(output, fname, row.names = F) } if (!(outfolder == "") & is.null(siteID)) { - fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", iter, ".csv", sep = "") + fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 2), ".csv", sep = "") fname <- paste0(outfolder, "/", fname) write.csv(output, fname, row.names = F) } From 4d0246ac58c13a3d7ad1500a77ce7bd103034a04 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Fri, 19 Apr 2019 11:59:00 -0400 Subject: [PATCH 0113/2289] change error messaging --- base/workflow/inst/batch_runs.R | 37 +++++++++++++++++---------------- 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index e017d311900..a17773358ad 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -1,5 +1,5 @@ ## Function to Create and execute pecan xml -create_exec_test_xml <- function(run_list){ +create_execute_test_xml <- function(run_list){ library(PEcAn.DB) library(dplyr) library(PEcAn.utils) @@ -41,7 +41,7 @@ create_exec_test_xml <- function(run_list){ #Database BETY settings$database$bety$user <- config.list$db_bety_username settings$database$bety$password <- config.list$db_bety_password - settings$database$bety$host <- config.list$db_bety_hostname + settings$database$bety$host <- "localhost" settings$database$bety$dbname <- config.list$db_bety_database settings$database$bety$driver <- "PostgreSQL" settings$database$bety$write <- FALSE @@ -90,29 +90,28 @@ create_exec_test_xml <- function(run_list){ settings$run$inputs$met$username <- "pecan" settings$run$start.date <- start_date settings$run$end.date <- end_date - settings$host$name 
<-config.list$db_bety_hostname + settings$host$name <-"localhost" #create file and Run saveXML(listToXml(settings, "pecan"), file=paste0(outdir,"/","pecan.xml")) file.copy(paste0(config.list$pecan_home,"web/","workflow.R"),to = outdir) setwd(outdir) ##Name log file - log <- file("workflow.Rout", open = "wt") - sink(log) - sink(log, type = "message") - source("workflow.R") - sink() + #log <- file("workflow.Rout", open = "wt") + #sink(log) + #sink(log, type = "message") + + system("./workflow.R 2>&1 | tee workflow.Rout") + #source("workflow.R") + #sink() } ##Create Run Args -pecan_path <- "/fs/data3/tonygard/work/pecan" -config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.example.php")) -bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.example.php")) -bety <- dplyr::src_postgres(dbname = 'bety', - host = 'psql-pecan.bu.edu', - user = 'bety', - password = 'bety') +## Insert your path to base pecan +pecan_path <- "/fs/data3/tonygard/work/pecan" +config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php")) +bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.php")) con <- bety$con library(tidyverse) ## Find name of Machine R is running on @@ -121,8 +120,10 @@ mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(i ## Find Models #devtools::install_github("pecanproject/pecan", subdir = "api") -model_ids <- tbl(bety, "dbfiles") %>% filter(machine_id == mach_id) %>% - filter(container_type == "Model") %>% pull(container_id) +model_ids <- tbl(bety, "dbfiles") %>% + filter(machine_id == mach_id) %>% + filter(container_type == "Model") %>% + pull(container_id) @@ -169,7 +170,7 @@ run_table <- expand.grid(models,met_name,site_id, startdate, enddate, #Execute function to spit out a table with a column of NA or success tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){ - create_exec_test_xml(list(...)) + create_execute_test_xml(list(...)) },otherwise =NA)) ) From 452109866e2d55c734894a301145f1fe064c5820 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Fri, 19 Apr 2019 13:09:05 -0400 Subject: [PATCH 0114/2289] change in dependencies --- docker/depends/pecan.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index c46d79faf39..197f3787928 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -23,9 +23,9 @@ install2.r -e -s -n -1\ BioCro \ car \ coda \ - data.table \ dataone \ datapack \ + data.table \ DBI \ dbplyr \ doParallel \ From 1bc54094cc6c3a4627c470cd8c48e71482302762 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Mon, 29 Apr 2019 13:00:42 -0400 Subject: [PATCH 0115/2289] new additions --- base/workflow/inst/batch_runs.R | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index a17773358ad..d74464d9b67 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -33,8 +33,8 @@ create_execute_test_xml <- function(run_list){ outdir_base<-config.list$output_folder outdir_pre <- paste(model.new$model_name,format(as.Date(start_date), "%Y-%m"), format(as.Date(end_date), "%Y-%m"), - met,site_id,"_test_runs", - sep="",collapse =NULL) + met,site_id,"test_runs", + sep="_",collapse =NULL) outdir <- paste0(outdir_base,outdir_pre) dir.create(outdir) settings$outdir <- outdir @@ -157,7 +157,7 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% 
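# A toy illustration of the anti join this hunk revolves around: keep
# only the sites that have no matching row in inputs (made-up data, not
# BETYdb tables):
library(dplyr)
sites  <- tibble(id = c(1, 2, 3), sitename = c("a", "b", "c"))
inputs <- tibble(site_id = 2)
anti_join(sites, inputs, by = c("id" = "site_id"))  # rows for sites 1 and 3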
in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) ) %>% dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1) - + @@ -174,3 +174,7 @@ tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...) },otherwise =NA)) ) +## print to table +tux_tab <- huxtable::hux(tab) +html_table <- huxtable::print_html(tux_tab) +htmlTable::htmlTable(tab) From 478574decb4c41fb5501667f7fa9cd3254de4e53 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 7 May 2019 13:04:14 -0400 Subject: [PATCH 0116/2289] remove round --- models/sipnet/R/model2netcdf.SIPNET.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/sipnet/R/model2netcdf.SIPNET.R b/models/sipnet/R/model2netcdf.SIPNET.R index fefa9634602..416fa3dd138 100644 --- a/models/sipnet/R/model2netcdf.SIPNET.R +++ b/models/sipnet/R/model2netcdf.SIPNET.R @@ -182,7 +182,7 @@ model2netcdf.SIPNET <- function(outdir, sitelat, sitelon, start_date, end_date, t <- ncdf4::ncdim_def(name = "time", longname = "time", units = paste0("days since ", y, "-01-01 00:00:00"), - vals = round(sub_dates_cf,4), + vals = sub_dates_cf, calendar = "standard", unlim = TRUE) lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(sitelat), From f993f84d480afd4f6f63c11b2e44aab7736f4794 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 8 May 2019 12:25:25 -0400 Subject: [PATCH 0117/2289] pushing for test --- modules/assim.batch/R/pda.emulator.R | 1 + modules/assim.batch/R/pda.emulator.ms.R | 73 +++++++++++++++++++++++-- modules/assim.batch/R/pda.utils.R | 41 +++++++++++--- 3 files changed, 102 insertions(+), 13 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index 119a3b56c32..ac880f06bd2 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -743,6 +743,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, settings <- pda.postprocess(settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig) + ## close database connection if (!is.null(con)) { PEcAn.DB::db.close(con) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 498bc5089bc..d8e08cc55d5 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -39,14 +39,77 @@ pda.emulator.ms <- function(multi.settings) { ## -------------------------------------- Individual runs and calibration ------------------------------------------ if(individual){ - # NOTE: individual flag is not used currently, prepearation for future use + # NOTE: individual flag -mode functionality in general- is not currently in use, + # preparation for future use, the idea is to skip site-level fitting if we've already done it # if this flag is FALSE, pda.emulator will not fit GP and run MCMC, - # but will run LHC ensembles, calculate SS and return settings list with saved SS paths etc. 
- # this requires some re-arrangement in pda.emulator, + # but this requires some re-arrangement in pda.emulator function # for now we will always run site-level calibration - multi.settings[[i]] <- pda.emulator(multi.settings[[i]], individual = individual) + + + # number of sites will probably get big real quick, so some multi-site PDA runs should be run on the cluster + # unfortunately pda.emulator function was not fully designed for remote runs, so first we need to prepare a few things it needs + # (1) all sites should be running the same knots + # (2) all sites will use the same prior.list + # (3) the format information for the assimilated data (pda.emulator needs DB connection to query it if it's not externally provided) + # (4) ensemble ids (they provide unique trackers for emulator functions) + multi_site_objects <- return_multi_site_objects(multi.settings) # using the first site is enough + + # the default implementation is that we will iteratively run the rounds until improvement in site-level fits slows down. + #if(one_round){ + #multi.settings <- papply(multi.settings, pda.emulator, individual = individual) + #}else{ + + # Open the tunnel (might not need) + PEcAn.remote::open_tunnel(multi.settings[[1]]$host$name, + user = multi.settings[[1]]$host$user, + tunnel_dir = dirname(multi.settings[[1]]$host$tunnel)) + + emulator_jobs <- rep(NA, length(multi.settings)) + for(ms in seq_along(multi.settings)){ - multi.settings <- papply(multi.settings, pda.emulator, individual = individual) + # Sync to remote + subfile <- prepare_pda_remote(multi.settings[[ms]], site = ms, multi_site_objects) + + # Submit emulator scripts + tmp <- PEcAn.remote::remote.execute.cmd(multi.settings[[ms]]$host, paste0("qsub ", subfile)) + emulator_jobs[ms] <- as.numeric( sub("\\D*(\\d+).*", "\\1", tmp)) + } + + # listen + repeat{ + PEcAn.logger::logger.info("Multi-site calibration running. 
Please wait.") + Sys.sleep(1000) + check_all_sites <- sapply(emulator_jobs, qsub_run_finished, multi.settings[[1]]$host, multi.settings[[1]]$host$qstat) + if(all(check_all_sites)) break + } + + + # Sync regular files back + args <- c("-e", paste0("ssh -o ControlPath=\"", multi.settings[[1]]$host$tunnel, "\"", collapse = "")) + args <- c(args, paste0(multi.settings[[1]]$host$name, ":", dirname(multi.settings[[1]]$host$outdir)), + multi.settings[[1]]$outdir) + system2("rsync", shQuote(args), stdout = TRUE, stderr = FALSE) + + # update multi.settings + for(ms in seq_along(multi.settings)){ + tmp_settings <- read.settings(paste0(multi.settings[[ms]]$outdir,"/pecan.pda", + multi_site_objects$ensembleidlist[[ms]],".xml")) + multi.settings[[ms]]$assim.batch <- tmp_settings$assim.batch + multi.settings[[ms]]$pfts <- tmp_settings$pfts + } + #repeat( + + #emulator_r_check <- round_check(multi.settings) + #if(emulator_r_check) break + # external.knots <- sample_multi_site_MCMC + # add round tag + #) + # Close the tunnel + PEcAn.remote::kill.tunnel(settings) + #} + #multi.settings[[i]] <- pda.emulator(multi.settings[[i]], individual = individual) + + } # write multi.settings with individual-pda info diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 00cc5edf0dd..c58318bce07 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1074,7 +1074,7 @@ return_multi_site_objects <- function(multi.settings){ # get knots # if this is the initial round we will draw from priors - if(TRUE){ + ## Propose parameter knots (X) for emulator design knots.list <- lapply(seq_along(settings$pfts), @@ -1086,10 +1086,26 @@ return_multi_site_objects <- function(multi.settings){ names(knots.list) <- sapply(settings$pfts,"[[",'name') external_knots <- lapply(knots.list, `[[`, "params") - }else{ - # if not, we have to bring all the MCMC samples from all sites and draw from them. - } - + if(FALSE){ + collect_site_knots <- list() + for(i in seq_along(multi.settings)){ + settings <- multi.settings[[i]] + # if not, we have to bring all the MCMC samples from all sites and draw from them. 
+ sampled_knots <- sample_MCMC(file.path(settings$outdir, basename(settings$assim.batch$mcmc.path)), + n.param.orig, prior.ind.orig, + as.numeric(settings$assim.batch$n.knot), external_knots, + prior.list, prior.fn, sf, NULL) + collect_site_knots[[i]] <- sampled_knots$knots.params.temp + } + + # bring them all and sample from sites + for(p in seq_along(settings$pfts)){ + tmp <- lapply(collect_site_knots,`[[`, p) + foo <- do.call("rbind", tmp) + foo_ind <- sample(1:nrow(foo), as.numeric(settings$assim.batch$n.knot)) + external_knots[[p]] <- foo[foo_ind,] + } + } ensembleid_list <- sapply(multi.settings, function(x) pda.create.ensemble(x, bety$con, x$workflow$id)) return(list(priorlist = prior_list, @@ -1125,7 +1141,9 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ cat(paste0("#cd ", remote_dir, "\n"), file = local_sub_file, append = TRUE) cat(paste0("#", settings$host$prerun, "\n"), file = local_sub_file, append = TRUE) cat(paste0("Rscript remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE) - cat(paste0("rm remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE) + sendbackto <- fqdn() + cat(paste0("mv ", multi_site_objects$ensembleidlist[site],"/pecan.pda", + multi_site_objects$ensembleidlist[site], ".xml ", remote_dir), file = local_sub_file, append = TRUE) remote_sub_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/sub" , site, ".sh") ######## create R script @@ -1152,8 +1170,14 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ } settings$host$name <- "localhost" - settings$rundir <- settings$host$rundir - settings$modeloutdir <- settings$host$outdir + newrundir <- paste0(remote_dir, "/", multi_site_objects$ensembleidlist[site], "/run/") + newoutdir <- paste0(remote_dir, "/", multi_site_objects$ensembleidlist[site], "/out/") + settings$host$rundir <- settings$rundir <- newrundir + settings$host$outdir <- settings$modeloutdir <- newoutdir + + if(FALSE){ + settings$assim.batch$extension <- "round" + } multi_site_objects$settings <- settings save(multi_site_objects, file = local_object_file) @@ -1168,3 +1192,4 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ return(remote_sub_file) } + From 516b8e31dcf29b1829f88213723ba4606544f2c7 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 8 May 2019 12:45:28 -0400 Subject: [PATCH 0118/2289] add a counter --- modules/assim.batch/R/pda.emulator.R | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index ac880f06bd2..d1b0ec246cc 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -738,8 +738,13 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, } } - # will I need a counter? 
- #settings$assim.batch$round_counter <- which_round
+  # keep a counter of completed calibration rounds
+  if(is.null(settings$assim.batch$round_counter)){
+    settings$assim.batch$round_counter <- 1
+  }else{
+    settings$assim.batch$round_counter <- 1 + as.numeric(settings$assim.batch$round_counter)
+  }
+
 
   settings <- pda.postprocess(settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig)
   
@@ -753,12 +758,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
   current.step <- "pda.finish"
   save(list = ls(all.names = TRUE),envir=environment(),file=pda.restart.file)
   
-  if(!remote){
-    return(settings)
-  }else{
-    #sync back
-    return(settings)
-  }
-  
+  return(settings)
+  
 } ## end pda.emulator

From 295cfd883337983679d1c4f76a1cfe27f851d976 Mon Sep 17 00:00:00 2001
From: istfer
Date: Wed, 8 May 2019 14:03:24 -0400
Subject: [PATCH 0119/2289] dont need pecan.all

---
 modules/assim.batch/R/pda.utils.R | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index c58318bce07..88f4dfcc3da 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@ -1149,7 +1149,9 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   ######## create R script
   pdaemulator <- readLines('~/pecan/modules/assim.batch/R/pda.emulator.R',-1)
   local_script_file <- paste0(settings$outdir, "/remote_emulator_s",site,".R")
-  first_lines <- c("rm(list=ls(all=TRUE))\n", "library(PEcAn.all)\n", paste0("load(\"",remote_object_file,"\")\n"),
+  first_lines <- c("rm(list=ls(all=TRUE))\n", 
+                   "library(PEcAn.assim.batch)\n",
+                   paste0("load(\"",remote_object_file,"\")\n"),
                    "settings <- multi_site_objects$settings\n", "external_priors <- multi_site_objects$priorlist\n",
                    "external_knots <- multi_site_objects$externalknots\n", "external_formats <- multi_site_objects$formatlist\n",
                    paste0("ensemble_id <- multi_site_objects$ensembleidlist[", site, "]\n"))

From aac5352911933b86f856af833e95f11ea630c931 Mon Sep 17 00:00:00 2001
From: istfer
Date: Wed, 8 May 2019 14:04:14 -0400
Subject: [PATCH 0120/2289] dont write the function

---
 modules/assim.batch/R/pda.utils.R | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index 88f4dfcc3da..9b42ea866fd 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@ -1147,7 +1147,6 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   remote_sub_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/sub" , site, ".sh")
   
   ######## create R script
-  pdaemulator <- readLines('~/pecan/modules/assim.batch/R/pda.emulator.R',-1)
   local_script_file <- paste0(settings$outdir, "/remote_emulator_s",site,".R")
   first_lines <- c("rm(list=ls(all=TRUE))\n", 
                    "library(PEcAn.assim.batch)\n",
@@ -1158,7 +1157,7 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   last_lines <- c("pda.emulator(settings, external.priors = external_priors, 
                   external.knots = external_knots, external.formats = external_formats, 
                   ensemble.id = ensemble_id, remote = TRUE)")
-  writeLines(c(first_lines, pdaemulator, last_lines), local_script_file)
+  writeLines(c(first_lines, last_lines), local_script_file)
   remote_script_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/remote_emulator_s", site,".R")
   

From 9e9a4f7c9052ea517347e475778ebdd4e31694d5 Mon Sep 17 00:00:00 2001
From: istfer
Date: Wed, 8 May 2019 14:07:24 -0400
Subject: [PATCH 0121/2289] 
dont assume workflowid in remotedir

---
 modules/assim.batch/R/pda.utils.R | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index 9b42ea866fd..270423bbb80 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@ -1119,8 +1119,11 @@ return_multi_site_objects <- function(multi.settings){
 ##' @export
 prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   
-  remote_dir <- paste0(settings$host$folder, "/" , settings$workflow$id)
-  
+  # not everyone might be working with workflowid
+  # remote_dir <- paste0(settings$host$folder, "/" , settings$workflow$id)
+  # instead find this directory from remote rundir so that it's consistent
+  remote_dir <- dirname(settings$host$rundir)
+  
   #save
   local_object_file <- paste0(settings$outdir, "/multi_site_objects_s",site,".Rdata")
   remote_object_file <- paste0(remote_dir, "/multi_site_objects_s",site,".Rdata")

From 4762d231cb60b007e4033b59f05dae98f2b78548 Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Wed, 8 May 2019 14:14:43 -0400
Subject: [PATCH 0122/2289] outline documentation of script

---
 base/workflow/inst/batch_runs.R | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index d74464d9b67..f519f649445 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -1,10 +1,34 @@
-## Function to Create and execute pecan xml
+## This script contains the following parts:
+## Part 1 - Write Main Function
+##  A. Takes in a list defining specifications for a single run and assigns them to objects
+##  B. The function then writes out a pecan settings list object
+##  C. A directory is created from the output folder defined in a user's config.php file
+##  D. Settings object gets written into pecan.xml file and put in output directory.
+##  E. workflow.R is copied into this directory and executed.
+##  F. Console output is stored in a file called workflow.Rout
+## Part 2 - Write run settings
+##  A. Set BETY connection object. Needs user to define path to their pecan directory.
+##  B. Find Machine ID
+##  C. Find Model ids based on machine
+##  D. Manually Define Met, right now CRUNCEP and AmerifluxLBL
+##  E. Manually Define start and end date
+##  F. Manually Define Output var
+##  G. Manually Define Sensitivity and ensemble
+##  H. Find sites not associated with any inputs (meaning data needs to be downloaded), part of Ameriflux network, and have data for start-end year range.
+##     Available Ameriflux data is found by parsing the notes section of the sites table where Ameriflux sites have a year range.
+## Part 3 - Create Run table
+##  A. Create table that contains combinations of models, met_name, site_id, startdate, enddate, pecan_path, out.var, ensemble, ens_size, sensitivity args that the function above will use to do runs.
+## Part 4 - Run the Function across the table
+##  A. 
Use the pmap function to apply the function across each row of arguments and append a pass or fail outcome column to the original table of runs +## Part 5 - (In progress) Turn output table into a table create_execute_test_xml <- function(run_list){ library(PEcAn.DB) library(dplyr) library(PEcAn.utils) library(XML) library(PEcAn.settings) + + #Read in Table model_id <- run_list[[1]] met <- run_list[[2]] site_id<- run_list[[3]] @@ -114,6 +138,7 @@ config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php") bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.php")) con <- bety$con library(tidyverse) + ## Find name of Machine R is running on mach_name <- Sys.info()[[4]] mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id) @@ -156,12 +181,12 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% #Check if startdate year is within the inerval of that is given in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) ) %>% - dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1) - - + dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1) %>% + mutate(sitename= gsub(" ","_",.$sitename)) %>% rename_at(id.x = site_id) site_id <- site_id_noinput$id.x +site_name <- gsub(" ","_",site_id_noinput$sitename) #Create permutations of arg combinations options(scipen = 999) From 19476706b82e6deb589901ae169b4e16a3addd90 Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Thu, 9 May 2019 12:09:03 -0400 Subject: [PATCH 0123/2289] comments and documentation --- base/workflow/inst/batch_runs.R | 1 + 1 file changed, 1 insertion(+) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index f519f649445..606b2e54040 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -21,6 +21,7 @@ ## Part 4 - Run the Function across the table ## A. 
Use the pmap function to apply the function across each row of arguments and append a pass or fail outcome column to the original table of runs ## Part 5 - (In progress) Turn output table into a table + create_execute_test_xml <- function(run_list){ library(PEcAn.DB) library(dplyr) From e17a2805301498e4414e24f4a394916bc2e54f66 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 14:11:30 -0400 Subject: [PATCH 0124/2289] multi-var sda updates --- modules/assim.sequential/R/Multi_Site_Constructors.R | 2 ++ modules/assim.sequential/R/sda.enkf_MultiSite.R | 9 +++------ modules/data.remote/R/call_MODIS.R | 8 ++++---- modules/data.remote/man/call_MODIS.Rd | 4 ++-- 4 files changed, 11 insertions(+), 12 deletions(-) mode change 100644 => 100755 modules/assim.sequential/R/sda.enkf_MultiSite.R diff --git a/modules/assim.sequential/R/Multi_Site_Constructors.R b/modules/assim.sequential/R/Multi_Site_Constructors.R index 73c2c7171bd..17c38551cdd 100644 --- a/modules/assim.sequential/R/Multi_Site_Constructors.R +++ b/modules/assim.sequential/R/Multi_Site_Constructors.R @@ -189,6 +189,8 @@ Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){ if(is.null(obs.names)) next; + choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), + max = 1, USE.NAMES = F) %>% unlist choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = F) %>% unlist choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = F) %>% unlist diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R old mode 100644 new mode 100755 index ab0a296372a..a777e51f77f --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -22,11 +22,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, control=list(trace=T, FF=F, - interactivePlot=T, - TimeseriesPlot=T, - BiasPlot=F, - plot.title=NULL, - facet.plots=F, + plot=F, debug=FALSE, pause=F), ...) 
{ @@ -473,7 +469,8 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, out.configs, ensemble.samples, inputs, Viz.output, file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) #writing down the image - either you asked for it or nor :) - if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + if (control$plot == T & (t%%2==0 | t==nt)) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + # if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) } ### end loop over time } # sda.enkf diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 065a1428b3e..89d6367032d 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -29,7 +29,7 @@ ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F) { +call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F, progress = T) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 @@ -125,20 +125,20 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, # extract main band data from api dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band, - start=start_date, end=end_date, km_ab=size, km_lr=size, progress = F) + start=start_date, end=end_date, km_ab=size, km_lr=size, progress = progress) # extract QC data if(band_qc != "") { qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc, - start=start, end=end, km_ab=size, km_lr=size, progress =F) + start=start, end=end, km_ab=size, km_lr=size, progress =progress) } # extract stdev data if(band_sd != "") { sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd, - start=start, end=end, km_ab=size, km_lr=size, progress = F) + start=start, end=end, km_ab=size, km_lr=size, progress = progress) } if (band_qc == "") diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index cdf40a11eaf..5e21df57390 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -5,7 +5,7 @@ \title{call_MODIS} \usage{ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, - product, band, band_qc = "", band_sd = "", siteID = "", + iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F) } \arguments{ @@ -43,7 +43,7 @@ Get MODIS data by date and location } \examples{ \dontrun{ -test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools") +test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = 
-123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T) plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") } From ded9a6063191773f321092ed8e654c7c92c0385f Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 14:28:35 -0400 Subject: [PATCH 0125/2289] Updated DOCUMENTATION --- modules/assim.sequential/man/sda.enkf.multisite.Rd | 5 ++--- modules/data.remote/man/call_MODIS.Rd | 2 +- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/man/sda.enkf.multisite.Rd b/modules/assim.sequential/man/sda.enkf.multisite.Rd index e53746d0f06..3bedb2a63bb 100644 --- a/modules/assim.sequential/man/sda.enkf.multisite.Rd +++ b/modules/assim.sequential/man/sda.enkf.multisite.Rd @@ -5,9 +5,8 @@ \title{sda.enkf.multisite} \usage{ sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL, restart = F, - control = list(trace = T, FF = F, interactivePlot = T, TimeseriesPlot = - T, BiasPlot = F, plot.title = NULL, facet.plots = F, debug = FALSE, pause - = F), ...) + control = list(trace = T, FF = F, plot = F, debug = FALSE, pause = F), + ...) } \arguments{ \item{settings}{PEcAn settings object} diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 5e21df57390..f041fc2c67a 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -6,7 +6,7 @@ \usage{ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", - package_method = "MODISTools", QC_filter = F) + package_method = "MODISTools", QC_filter = F, progress = T) } \arguments{ \item{outfolder}{where the output file will be stored} From 528881c98f20e4d0becd2a41eae30b55386613d4 Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 15:37:29 -0400 Subject: [PATCH 0126/2289] Update models/sipnet/R/read_restart.SIPNET.R Co-Authored-By: Alexey Shiklomanov --- models/sipnet/R/read_restart.SIPNET.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index 381e622f769..0335653e1d0 100644 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -90,7 +90,7 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p if ("LAI" %in% var.names) { forecast[[length(forecast) + 1]] <- ens$LAI[last] ## m2/m2 - names(forecast[[length(forecast)]]) <- c("LAI") + names(forecast[[length(forecast)]]) <- "LAI" } print(runid) From 8be499e4467feab9ab263f913c58ec3a2927a01b Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 15:37:51 -0400 Subject: [PATCH 0127/2289] Update modules/assim.sequential/R/sda.enkf_MultiSite.R Co-Authored-By: Hamze Dokoohaki --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index a777e51f77f..5af8f6c973d 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ 
b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -468,7 +468,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, save(site.locs, t, FORECAST, ANALYSIS, enkf.params, new.state, new.params, out.configs, ensemble.samples, inputs, Viz.output, file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) - #writing down the image - either you asked for it or nor :) + #writing down the image - either you asked for it or not :) if (control$plot == T & (t%%2==0 | t==nt)) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) # if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) } ### end loop over time From d7210b235f3b4c28e4481e1c55ab1a7083e8b771 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 15:53:13 -0400 Subject: [PATCH 0128/2289] fixed F/T to FALSE/TRUE --- .../assim.sequential/R/sda.enkf_MultiSite.R | 29 ++++++++++--------- 1 file changed, 16 insertions(+), 13 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index a777e51f77f..afd5965b5e8 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -19,12 +19,16 @@ #' @import nimble #' @export #' -sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, - control=list(trace=T, - FF=F, - plot=F, +sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = FALSE, + control=list(trace = TRUE, + FF = FALSE, + interactivePlot = FALSE, + TimeseriesPlot = FALSE, + BiasPlot = FALSE, + plot.title = NULL, + facet.plots = FALSE, debug=FALSE, - pause=F), + pause=FALSE), ...) 
{ if (control$debug) browser() ###-------------------------------------------------------------------### @@ -65,7 +69,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, } #Finding the distance between the sites - distances <- sp::spDists(site.locs, longlat=T) + distances <- sp::spDists(site.locs, longlat=TRUE) #turn that into a blocked matrix format blocked.dis<-block_matrix(distances %>% as.numeric(), rep(length(var.names), length(site.ids))) @@ -74,14 +78,14 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, obs.mean <- obs.mean[sapply(year(names(obs.mean)), function(obs.year) obs.year %in% (assimyears))] obs.cov <- obs.cov[sapply(year(names(obs.cov)), function(obs.year) obs.year %in% (assimyears))] # dir address based on the end date - if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) + if(!dir.exists("SDA")) dir.create("SDA",showWarnings = FALSE) #--get model specific functions do.call("library", list(paste0("PEcAn.", model))) my.write_restart <- paste0("write_restart.", model) my.read_restart <- paste0("read_restart.", model) my.split_inputs <- paste0("split_inputs.", model) #- Double checking some of the inputs - if (is.null(adjustment)) adjustment<-T + if (is.null(adjustment)) adjustment<-TRUE # models that don't need split_inputs, check register file for that register.xml <- system.file(paste0("register.", model, ".xml"), package = paste0("PEcAn.", model)) register <- XML::xmlToList(XML::xmlParse(register.xml)) @@ -110,7 +114,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, start.time = lubridate::ymd_hms(settings$state.data.assimilation$start.date, truncated = 3), stop.time = lubridate::ymd_hms(settings$state.data.assimilation$end.date, truncated = 3), inputs = settings$run$inputs$met$path[[i]], - overwrite =F + overwrite =FALSE ) ) } @@ -204,7 +208,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, #- Check to see if this is the first run or not and what inputs needs to be sent to write.ensemble configs if (t>1){ #removing old simulations - unlink(list.files(outdir, "*.nc", recursive = T, full.names = T)) + unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file. 
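# An aside on why this commit spells out TRUE/FALSE: T and F are ordinary
# variables that can be masked, while the reserved words cannot (safe to
# run; rm() restores the base binding):
T <- 0
isTRUE(T)  # FALSE -- code written with bare T now silently misbehaves
rm(T)
isTRUE(T)  # TRUE again once base::T is visible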
inputs.split <- conf.settings %>% purrr::map2(inputs, function(settings, inputs) { @@ -468,9 +472,8 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart=F, save(site.locs, t, FORECAST, ANALYSIS, enkf.params, new.state, new.params, out.configs, ensemble.samples, inputs, Viz.output, file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) - #writing down the image - either you asked for it or nor :) - if (control$plot == T & (t%%2==0 | t==nt)) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) - # if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + #writing down the image - either you asked for it or not :) + if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) } ### end loop over time } # sda.enkf From e97770617dafad857caa56a17e0e7cc1a7624f4b Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 16:20:17 -0400 Subject: [PATCH 0129/2289] Fixed T/F in call_MODIS --- modules/data.remote/R/call_MODIS.R | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 89d6367032d..82aa2400cee 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -22,26 +22,26 @@ ##' ##' @examples ##' \dontrun{ -##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T) +##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = TRUE) ##' plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) ##' test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") ##' } ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = F, progress = T) { +call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. 
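# Sketch of the MODIS QC decoding that appears further down in this file:
# the integer QC value is expanded to 8 bits and the trailing three bits
# grade the retrieval, with "000" and "001" kept (same calls as the
# source, toy value 9):
bits <- paste(binaryLogic::as.binary(9L, n = 8), collapse = "")
substr(bits, nchar(bits) - 2, nchar(bits))  # "001" -> good pixel, keep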
size <- 0 require(PEcAn.all) # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ - if (grepl("/", start_date) == T) + if (grepl("/", start_date) == TRUE) { start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") } - if (grepl("/", end_date) == T) + if (grepl("/", end_date) == TRUE) { end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") } @@ -155,7 +155,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f') } - output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F) + output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = FALSE) names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd") output[,5:10] <- lapply(output[,5:10], as.numeric) @@ -187,13 +187,13 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, { fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 4), ".csv", sep = "") fname <- paste0(outfolder, "/", fname) - write.csv(output, fname, row.names = F) + write.csv(output, fname, row.names = FALSE) } if (!(outfolder == "") & is.null(siteID)) { fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 2), ".csv", sep = "") fname <- paste0(outfolder, "/", fname) - write.csv(output, fname, row.names = F) + write.csv(output, fname, row.names = FALSE) } return(output) From cd3c3ed4ac6df68a25e79eed0227ca2675567eee Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 16:22:51 -0400 Subject: [PATCH 0130/2289] Fixed T/F Multi_Site_Constructors.R --- modules/assim.sequential/R/Multi_Site_Constructors.R | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/modules/assim.sequential/R/Multi_Site_Constructors.R b/modules/assim.sequential/R/Multi_Site_Constructors.R index 17c38551cdd..ddcb71ee812 100644 --- a/modules/assim.sequential/R/Multi_Site_Constructors.R +++ b/modules/assim.sequential/R/Multi_Site_Constructors.R @@ -39,7 +39,7 @@ Contruct.Pf <- function(site.ids, var.names, X, localization.FUN=NULL, t=1, bloc #estimated between these two sites two.site.cov <- cov( X [, c(rows.in.matrix, cols.in.matrix)],use="complete.obs" )[(nvariable+1):(2*nvariable),1:nvariable] # I'm setting the off diag to zero - two.site.cov [which(lower.tri(two.site.cov, diag = FALSE),T) %>% rbind (which(upper.tri(two.site.cov,F),T))] <- 0 + two.site.cov [which(lower.tri(two.site.cov, diag = FALSE),TRUE) %>% rbind (which(upper.tri(two.site.cov,FALSE),TRUE))] <- 0 #putting it back to the main matrix pf.matrix [rows.in.matrix, cols.in.matrix] <- two.site.cov } @@ -88,7 +88,7 @@ Construct.R<-function(site.ids, var.names, obs.t.mean, obs.t.cov){ Y<-c() for (site in site.ids){ - choose <- sapply(var.names, agrep, x=names(obs.t.mean[[site]]), max=1, USE.NAMES = F) %>% unlist + choose <- sapply(var.names, agrep, x=names(obs.t.mean[[site]]), max=1, USE.NAMES = FALSE) %>% unlist # if there is no obs 
for this site if(length(choose)==0){ next; @@ -190,9 +190,9 @@ Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){ if(is.null(obs.names)) next; choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), - max = 1, USE.NAMES = F) %>% unlist - choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = F) %>% unlist - choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = F) %>% unlist + max = 1, USE.NAMES = FALSE) %>% unlist + choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist + choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = FALSE) %>% unlist # empty matrix for this site H.this.site <- matrix(0, length(choose), nvariable) From 36bc56b0d99ed0b3d8f4211041e69acb62eaae0b Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 16:34:35 -0400 Subject: [PATCH 0131/2289] Update modules/data.remote/R/call_MODIS.R Co-Authored-By: Alexey Shiklomanov --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 82aa2400cee..35fa42d8b0a 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -169,7 +169,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, if (QC_filter == T) { output$qc == as.character(output$qc) - for (i in 1:nrow(output)) + for (i in seq_len(nrow(output))) { convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "") output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert)) From 0dcf42d87f9380955f3d27a873927958a0e46089 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 16:51:20 -0400 Subject: [PATCH 0132/2289] added file.path and fixed = to <- --- modules/data.remote/R/call_MODIS.R | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 35fa42d8b0a..cbf1bb3f36a 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -38,15 +38,15 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == TRUE) { - start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") + start_date <- as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") } if (grepl("/", end_date) == TRUE) { - end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") + end_date <- as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") } - start_date = as.Date(start_date, format = "%Y%j") - end_date = as.Date(end_date, format = "%Y%j") + start_date <- as.Date(start_date, format = "%Y%j") + end_date <- as.Date(end_date, format = "%Y%j") #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") @@ -54,7 +54,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, #################### FUNCTION PARAMETER PRECHECKS #################### 
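  # Sketch of the precheck pattern used in this block (an overview comment,
  # assuming the MODISTools 1.x API): mt_products(), mt_bands() and mt_dates()
  # enumerate what the LP DAAC subset service can serve, so each user input is
  # validated against those tables before any mt_subset() request is issued.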
#1. check that modis product is available
-  products = MODISTools::mt_products()
+  products <- MODISTools::mt_products()
   if (!(product %in% products$product))
   {
     print(products)
@@ -101,12 +101,12 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
     # MODIS data dates
     if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1])
     {
-      start_date = dates[1]
+      start_date <- dates[1]
       print("WARNING: Dates are only partially available. Start date before modis data product is available.")
     }
     if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)])
     {
-      end_date = dates[length(dates)]
+      end_date <- dates[length(dates)]
       print("WARNING: Dates are only partially available. End date before modis data product is available.")
     }
@@ -168,16 +168,16 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
 
   if (QC_filter == T)
   {
-    output$qc == as.character(output$qc)
+    output$qc <- as.character(output$qc)
     for (i in seq_len(nrow(output)))
     {
-      convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
-      output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert))
+      convert <- paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
+      output$qc[i] <- substr(convert, nchar(convert)-2, nchar(convert))
     }
-    good = which(output$qc == "000" | output$qc == "001")
+    good <- which(output$qc == "000" | output$qc == "001")
     if (length(good) > 0 | !(is.null(good)))
     {
-      output = output[good,]
+      output <- output[good,]
     } else {
       print("All QC values are bad. No data to output with QC filter == TRUE.")
     }
@@ -192,7 +192,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   if (!(outfolder == "") & is.null(siteID))
   {
     fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 2), ".csv", sep = "")
-    fname <- paste0(outfolder, "/", fname)
+    fname <- file.path(outfolder, fname)
     write.csv(output, fname, row.names = FALSE)
   }
 
@@ -203,7 +203,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   if (package_method == "reticulate"){
     # load in python script
     script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote"))
-    #script.path = file.path('/Users/bmorrison/pecan/modules/data.remote/inst/extract_modis_data.py')
+    #script.path <- file.path('/Users/bmorrison/pecan/modules/data.remote/inst/extract_modis_data.py')
     reticulate::source_python(script.path)
 
     # extract the data
@@ -215,7 +215,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   if (!(is.null(outfolder)))
   {
     fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-    fname <- paste0(outfolder, "/", fname)
+    fname <- file.path(outfolder, fname)
     write.csv(output, fname)
   }
 
From 766bd0ba58f789d19edd6760e0ecb12230141fc9 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" 
Date: Thu, 9 May 2019 17:07:55 -0400
Subject: [PATCH 0133/2289] changed add_leading_zeroes to sprintf()

---
 models/sipnet/R/write.configs.SIPNET.R |  2 +-
 modules/data.remote/R/call_MODIS.R     |  9 ++++-----
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R
index 49ad887f28e..fb73f17cf4f 100644
--- a/models/sipnet/R/write.configs.SIPNET.R
+++ b/models/sipnet/R/write.configs.SIPNET.R
@@ -455,7 +455,7 @@ write.config.SIPNET <-
function(defaults, trait.values, settings, run.id, inputs } ## litterWFracInit fraction litterWFrac <- soilWFrac - +` ## snowInit cm water equivalent (cm = g / cm2 because 1 g water = 1 cm3 water) snow = try(ncdf4::ncvar_get(IC.nc,"SWE"),silent = TRUE) if (!is.na(snow) && is.numeric(snow)) { diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index cbf1bb3f36a..7f8943a41e2 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -38,12 +38,12 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == TRUE) { - start_date <- as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j") + start_date <- as.Date(paste0(lubridate::year(start_date), sprintf("%03d",lubridate::yday(start_date))), format = "%Y%j") } if (grepl("/", end_date) == TRUE) { - end_date <- as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j") + end_date <- as.Date(paste0(lubridate::year(end_date), sprintf("%03d",lubridate::yday(end_date))), format = "%Y%j") } start_date <- as.Date(start_date, format = "%Y%j") end_date <- as.Date(end_date, format = "%Y%j") @@ -184,14 +184,13 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, } if (!(outfolder == "") & !(is.null(siteID))) - { - fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 4), ".csv", sep = "") + {fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", sprintf("%04d",iter), ".csv", sep = "") fname <- paste0(outfolder, "/", fname) write.csv(output, fname, row.names = FALSE) } if (!(outfolder == "") & is.null(siteID)) { - fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", spatial.tools::add_leading_zeroes(iter, 2), ".csv", sep = "") + fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", sprintf("%04d", iter), ".csv", sep = "") fname <- file.path(outfolder, fname) write.csv(output, fname, row.names = FALSE) } From 8159c8812d46e9cfecdd45977660fb12c574342b Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 17:12:22 -0400 Subject: [PATCH 0134/2289] Update modules/assim.sequential/R/sda.enkf_MultiSite.R Co-Authored-By: Shawn P. Serbin --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index f66448ccc7b..0d54e381a89 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -27,7 +27,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = BiasPlot = FALSE, plot.title = NULL, facet.plots = FALSE, - debug=FALSE, + debug = FALSE, pause=FALSE), ...) { if (control$debug) browser() From bfa4f45d7840f962c486df246285ecce3084195a Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 17:13:22 -0400 Subject: [PATCH 0135/2289] Update modules/assim.sequential/R/sda.enkf_MultiSite.R Co-Authored-By: Shawn P. 
Serbin --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 0d54e381a89..dff6fa45a2d 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -69,7 +69,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = } #Finding the distance between the sites - distances <- sp::spDists(site.locs, longlat=TRUE) + distances <- sp::spDists(site.locs, longlat = TRUE) #turn that into a blocked matrix format blocked.dis<-block_matrix(distances %>% as.numeric(), rep(length(var.names), length(site.ids))) From dca14c98f1dc23f27b6b2a12c9fc4accab4b47f7 Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 17:14:04 -0400 Subject: [PATCH 0136/2289] Update modules/assim.sequential/R/sda.enkf_MultiSite.R Co-Authored-By: Shawn P. Serbin --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index dff6fa45a2d..eee7412dbd5 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -114,7 +114,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = start.time = lubridate::ymd_hms(settings$state.data.assimilation$start.date, truncated = 3), stop.time = lubridate::ymd_hms(settings$state.data.assimilation$end.date, truncated = 3), inputs = settings$run$inputs$met$path[[i]], - overwrite =FALSE + overwrite = FALSE ) ) } From f0fe7aae143db4a71eb97cfd0191bdcdc3e1f790 Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 17:14:49 -0400 Subject: [PATCH 0137/2289] Update modules/data.remote/R/call_MODIS.R Co-Authored-By: Alexey Shiklomanov --- modules/data.remote/R/call_MODIS.R | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 7f8943a41e2..67bd4da148e 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -33,7 +33,6 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 - require(PEcAn.all) # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ if (grepl("/", start_date) == TRUE) From 51a98a55e8cdf047eaa045dff80c70ec5ff8368f Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 9 May 2019 17:25:50 -0400 Subject: [PATCH 0138/2289] updated iter parameter and updated documentation+roxygen2 --- modules/data.remote/R/call_MODIS.R | 24 ++++++++++++++++++------ modules/data.remote/man/call_MODIS.Rd | 17 ++++++++++------- 2 files changed, 28 insertions(+), 13 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 67bd4da148e..e1ae16f2d1c 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -16,6 +16,7 @@ ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) ##' @param siteID string value of a PEcAn site ID. 
Currently only used for output filename. ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. +##' @param iter a value (e.g. i) used to help name files when call_MODIS is used parallelized or in a for loop. Default is NULL. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 @@ -29,7 +30,7 @@ ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = i, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) { +call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = NULL, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 @@ -183,16 +184,27 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, } if (!(outfolder == "") & !(is.null(siteID))) - {fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", sprintf("%04d",iter), ".csv", sep = "") + { + if (is.null(iter)) + { + fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") + } else { + fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", sprintf("%04d",iter), ".csv", sep = "") + } fname <- paste0(outfolder, "/", fname) write.csv(output, fname, row.names = FALSE) - } + } if (!(outfolder == "") & is.null(siteID)) - { - fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", sprintf("%04d", iter), ".csv", sep = "") + { + if (is.null(iter)) + { + fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, ".csv", sep = "") + } else { + fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", sprintf("%04d", iter), ".csv", sep = "") + } fname <- file.path(outfolder, fname) write.csv(output, fname, row.names = FALSE) - } + } return(output) } diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index f041fc2c67a..f45e5798429 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -5,8 +5,9 @@ \title{call_MODIS} \usage{ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, - iter = i, product, band, band_qc = "", band_sd = "", siteID = "", - package_method = "MODISTools", QC_filter = F, progress = T) + iter = NULL, product, band, band_qc = "", band_sd = "", + siteID = "", package_method = "MODISTools", QC_filter = FALSE, + progress = TRUE) } \arguments{ \item{outfolder}{where the output file will be stored} @@ -21,6 +22,11 @@ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, \item{size}{kmAboveBelow and kmLeftRight distance in km to be included} +\item{iter}{a value (e.g. i) used to help name files when call_MODIS is used parallelized or in a for loop. Default is NULL. + +depends on a number of Python libraries. 
sudo -H pip install numpy suds netCDF4 json +depends on the MODISTools package version 1.1.0} + \item{product}{string value for MODIS product number} \item{band}{string value for which measurement to extract} @@ -33,17 +39,14 @@ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, \item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} -\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. - -depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json -depends on the MODISTools package version 1.1.0} +\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.} } \description{ Get MODIS data by date and location } \examples{ \dontrun{ -test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T) +test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = TRUE) plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") } From b36ef2cc2ba9101e60ec1ade07c642c472eaec52 Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 17:28:04 -0400 Subject: [PATCH 0139/2289] Update modules/data.remote/R/call_MODIS.R Co-Authored-By: Shawn P. Serbin --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index e1ae16f2d1c..341cb0d136b 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -78,7 +78,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, { if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) { - print("Extracting data") + PEcAn.logger::logger.info("Extracting data") start <- as.Date(start_date, "%Y%j") end <- as.Date(end_date, "%Y%j") From dc0d610208a524895e68588695191d02f488c7c5 Mon Sep 17 00:00:00 2001 From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com> Date: Thu, 9 May 2019 17:28:47 -0400 Subject: [PATCH 0140/2289] Update modules/data.remote/R/call_MODIS.R Co-Authored-By: Shawn P. 
Serbin
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 341cb0d136b..812fae10323 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -131,7 +131,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   if(band_qc != "")
   {
     qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
-                                start=start, end=end, km_ab=size, km_lr=size, progress =progress)
+                                start = start, end = end, km_ab = size, km_lr = size, progress = progress)
   }
 
   # extract stdev data

From 6d38303484ff2e547f9fb67d7ee69224114fe146 Mon Sep 17 00:00:00 2001
From: "Shawn P. Serbin" 
Date: Fri, 10 May 2019 10:50:24 -0400
Subject: [PATCH 0141/2289] Updated docker/depends/pecan.depends by running Rscript scripts/generate_dependencies.R

---
 docker/depends/pecan.depends | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends
index c46d79faf39..07e6ec86f6f 100644
--- a/docker/depends/pecan.depends
+++ b/docker/depends/pecan.depends
@@ -20,6 +20,7 @@ installGithub.r \
 install2.r -e -s -n -1\
     abind \
     BayesianTools \
+    binaryLogic \
     BioCro \
     car \
     coda \

From 78ebefb9838963622ca949e
b9e7707f6a71c61c3 Mon Sep 17 00:00:00 2001
From: "Shawn P. Serbin" 
Date: Fri, 10 May 2019 10:54:53 -0400
Subject: [PATCH 0142/2289] Updated documentation: modules/assim.sequential/man/sda.enkf.multisite.Rd

---
 modules/assim.sequential/man/sda.enkf.multisite.Rd | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/modules/assim.sequential/man/sda.enkf.multisite.Rd b/modules/assim.sequential/man/sda.enkf.multisite.Rd
index 3bedb2a63bb..40ef809e33e 100644
--- a/modules/assim.sequential/man/sda.enkf.multisite.Rd
+++ b/modules/assim.sequential/man/sda.enkf.multisite.Rd
@@ -4,8 +4,10 @@
 \alias{sda.enkf.multisite}
 \title{sda.enkf.multisite}
 \usage{
-sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL, restart = F,
-  control = list(trace = T, FF = F, plot = F, debug = FALSE, pause = F),
+sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL,
+  restart = FALSE, control = list(trace = TRUE, FF = FALSE,
+  interactivePlot = FALSE, TimeseriesPlot = FALSE, BiasPlot = FALSE,
+  plot.title = NULL, facet.plots = FALSE, debug = FALSE, pause = FALSE),
   ...)
}
\arguments{

From fa5a789f668c4978fb9db911b48f6b756cbaa79b Mon Sep 17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 12:38:05 -0400
Subject: [PATCH 0143/2289] add missing parenthesis

---
 modules/assim.batch/R/pda.emulator.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R
index d1b0ec246cc..4380fe72db4 100644
--- a/modules/assim.batch/R/pda.emulator.R
+++ b/modules/assim.batch/R/pda.emulator.R
@@ -739,7 +739,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
   }
 
   # I can use a counter
-  if(is.null(settings$assim.batch$round_counter){
+  if(is.null(settings$assim.batch$round_counter)){
    settings$assim.batch$round_counter <- 1
   }else{
    settings$assim.batch$round_counter <- 1 + as.numeric(settings$assim.batch$round_counter)

From eda33fe654b1af2f00a5574f1c0e7510038c1c91 Mon Sep 17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 12:49:32 -0400
Subject: [PATCH 0144/2289] changes for emulator rounds

---
 modules/assim.batch/R/pda.emulator.R |  2 +-
 modules/assim.batch/R/pda.utils.R    | 17 ++++++++++++-----
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R
index 4380fe72db4..403c19f5b8e 100644
--- a/modules/assim.batch/R/pda.emulator.R
+++ b/modules/assim.batch/R/pda.emulator.R
@@ -738,7 +738,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
     }
   }
 
-  # I can use a counter
+  # I can use a counter to run pre-defined number of emulator rounds
  if(is.null(settings$assim.batch$round_counter)){
    settings$assim.batch$round_counter <- 1
  }else{
    settings$assim.batch$round_counter <- 1 + as.numeric(settings$assim.batch$round_counter)
diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index 270423bbb80..32d7724a79b 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@ -1086,7 +1086,7 @@ return_multi_site_objects <- function(multi.settings){
     names(knots.list) <- sapply(settings$pfts,"[[",'name')
     external_knots <- lapply(knots.list, `[[`, "params")
 
-    if(FALSE){
+    if(!is.null(settings$assim.batch$round_counter)){
       collect_site_knots <- list()
       for(i in seq_along(multi.settings)){
         settings <- multi.settings[[i]]
@@ -1157,6 +1157,17 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
                    "settings <- multi_site_objects$settings\n", "external_priors <- multi_site_objects$priorlist\n",
                    "external_knots <- multi_site_objects$externalknots\n", "external_formats <- multi_site_objects$formatlist\n",
                    paste0("ensemble_id <- multi_site_objects$ensembleidlist[", site, "]\n"))
+  
+  # if this is another round
+  if(!is.null(settings$assim.batch$round_counter)){
+    external_data_line <- paste0("load(\"",file.path(remote_dir,
+                                                     paste0("external.",
+                                                            settings$assim.batch$ensemble.id,
+                                                            ".Rdata")),"\")\n")
+    first_lines <- c(first_lines, external_data_line)
+    settings$assim.batch$extension <- "round"
+  }
+  
   last_lines <- c("pda.emulator(settings, external.priors = external_priors, 
                   external.knots = external_knots, external.formats = external_formats, 
                   ensemble.id = ensemble_id, remote = TRUE)")
@@ -1179,10 +1190,6 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   settings$host$rundir <- settings$rundir <- newrundir
   settings$host$outdir <- settings$modeloutdir <- newoutdir
 
-  if(FALSE){
-    settings$assim.batch$extension <- "round"
-  }
-  
   multi_site_objects$settings <- settings
   save(multi_site_objects, file = local_object_file)

From 04f64af846b3249712fb5c90d5b31110e489ca2c Mon Sep
17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 12:58:31 -0400
Subject: [PATCH 0145/2289] add external data to fcn call

---
 modules/assim.batch/R/pda.utils.R | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index 32d7724a79b..437f102521c 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@ -1166,11 +1166,16 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
                                                             ".Rdata")),"\")\n")
     first_lines <- c(first_lines, external_data_line)
     settings$assim.batch$extension <- "round"
-  }
-  
-  last_lines <- c("pda.emulator(settings, external.priors = external_priors, 
+    last_lines <- c("pda.emulator(settings, external.priors = external_priors, external.data = external.data,
                   external.knots = external_knots, external.formats = external_formats, 
                   ensemble.id = ensemble_id, remote = TRUE)")
+  }else{
+    
+    last_lines <- c("pda.emulator(settings, external.priors = external_priors, 
+                  external.knots = external_knots, external.formats = external_formats, 
+                  ensemble.id = ensemble_id, remote = TRUE)")
+  }
+  
   writeLines(c(first_lines, last_lines), local_script_file)
   remote_script_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/remote_emulator_s", site,".R")

From 82c006a8690d130dc5474b18bd3345abdfdc6ebb Mon Sep 17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 13:13:00 -0400
Subject: [PATCH 0146/2289] small generalization

---
 modules/assim.batch/R/pda.utils.R | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index 437f102521c..374954a1374 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@ -1136,15 +1136,13 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   cat("#$ -j y\n", file = local_sub_file, append = TRUE)
   cat("#$ -S /bin/bash\n", file = local_sub_file, append = TRUE)
   cat("#$ -V\n", file = local_sub_file, append = TRUE)
-  # parse 'geo*' from settings$host$qsub
-  # gsub( " .*$", "", sub(".*-q ", "", settings$host$qsub))
-  cat("#$ -q 'geo*'\n", file = local_sub_file, append = TRUE)
+  # parse queue from settings$host$qsub
+  cat(paste0("#$ -q '", gsub( " .*$", "", sub(".*-q ", "", settings$host$qsub)), "'\n"), file = local_sub_file, append = TRUE)
   cat(paste0("#$ -N emulator_s", site,"\n"), file = local_sub_file, append = TRUE)
   #cat("#$ -pe omp 3\n", file = local_sub_file, append = TRUE)
   cat(paste0("#cd ", remote_dir, "\n"), file = local_sub_file, append = TRUE)
   cat(paste0("#", settings$host$prerun, "\n"), file = local_sub_file, append = TRUE)
   cat(paste0("Rscript remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE)
-  sendbackto <- fqdn()
   cat(paste0("mv ", multi_site_objects$ensembleidlist[site],"/pecan.pda", multi_site_objects$ensembleidlist[site], ".xml ", remote_dir), file = local_sub_file, append = TRUE)
 
   remote_sub_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/sub" , site, ".sh")

From 1dac7d8d2dc6e4f65510b1120e12a0341c1f4b28 Mon Sep 17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 13:58:33 -0400
Subject: [PATCH 0147/2289] don't use workflowid in paths

---
 modules/assim.batch/R/pda.utils.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R
index 374954a1374..6fa2a0c6694 100644
--- a/modules/assim.batch/R/pda.utils.R
+++ b/modules/assim.batch/R/pda.utils.R
@@
-1145,7 +1145,7 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   cat(paste0("Rscript remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE)
   cat(paste0("mv ", multi_site_objects$ensembleidlist[site],"/pecan.pda", multi_site_objects$ensembleidlist[site], ".xml ", remote_dir), file = local_sub_file, append = TRUE)
 
-  remote_sub_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/sub" , site, ".sh")
+  remote_sub_file <- paste0(remote_dir, "/sub" , site, ".sh")
 
   ######## create R script
   local_script_file <- paste0(settings$outdir, "/remote_emulator_s",site,".R")
@@ -1175,7 +1175,7 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   }
 
   writeLines(c(first_lines, last_lines), local_script_file)
-  remote_script_file <- paste0(settings$host$folder, "/", settings$workflow$id, "/remote_emulator_s", site,".R")
+  remote_script_file <- paste0(remote_dir, "/remote_emulator_s", site,".R")
 
   #cheating. needs to be done after extracting all paths
@@ -1197,7 +1197,7 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){
   save(multi_site_objects, file = local_object_file)
 
   ######## copy to remote
-  remote.execute.cmd(host_info, paste0("mkdir -p ", settings$host$folder, "/", settings$workflow$id))
+  remote.execute.cmd(host_info, paste0("mkdir -p ", remote_dir))
   remote.copy.to(host_info, local_sub_file, remote_sub_file)
   remote.copy.to(host_info, local_script_file, remote_script_file)
   remote.copy.to(host_info, local_object_file, remote_object_file)

From 86958cdaf09ce7d531afecce0a810b48e9a01f05 Mon Sep 17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 14:18:25 -0400
Subject: [PATCH 0148/2289] not using workflowid in the settings_outdir as well

---
 modules/assim.batch/R/pda.emulator.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R
index 403c19f5b8e..e7a9dcdb889 100644
--- a/modules/assim.batch/R/pda.emulator.R
+++ b/modules/assim.batch/R/pda.emulator.R
@@ -170,7 +170,7 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL,
                                                settings$assim.batch$ensemble.id, ".Rdata"))
   }else{
-    settings_outdir <- paste0(settings$host$folder, "/", settings$workflow$id, "/")
+    settings_outdir <- dirname(settings$host$rundir)
     pda.restart.file  <- paste0(settings_outdir, "history.pda",
                                 settings$assim.batch$ensemble.id, ".Rdata")
   }

From a5cafc411f88c52bfcb888e84933de11e8716ef2 Mon Sep 17 00:00:00 2001
From: istfer 
Date: Tue, 14 May 2019 14:27:05 -0400
Subject: [PATCH 0149/2289] add missing packages

---
 modules/assim.batch/R/pda.postprocess.R | 2 +-
 modules/emulator/DESCRIPTION            | 1 +
 modules/emulator/R/minimize.GP.R        | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R
index b06b918f574..732973ea42f 100644
--- a/modules/assim.batch/R/pda.postprocess.R
+++ b/modules/assim.batch/R/pda.postprocess.R
@@ -141,7 +141,7 @@ pda.plot.params <- function(settings, mcmc.param.list, prior.ind, par.file.name
   enough.iter <- TRUE
   for (i in seq_along(prior.ind)) {
-    params.subset[[i]] <- coda::as.mcmc.list(lapply(mcmc.param.list[[i]], mcmc))
+    params.subset[[i]] <- coda::as.mcmc.list(lapply(mcmc.param.list[[i]], coda::mcmc))
 
     burnin <- getBurnin(params.subset[[i]], method = "gelman.plot")
 
diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION
index 7a99e08f30c..3717684dcba 100644
--- a/modules/emulator/DESCRIPTION
+++
b/modules/emulator/DESCRIPTION @@ -13,6 +13,7 @@ Depends: Imports: PEcAn.logger, coda (>= 0.18), + MASS, methods Description: Implementation of a Gaussian Process model (both likelihood and bayesian approaches) for kriging and model emulation. Includes functions diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R index 42bcfca509b..e39ddc5c8e8 100644 --- a/modules/emulator/R/minimize.GP.R +++ b/modules/emulator/R/minimize.GP.R @@ -259,7 +259,7 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn ## propose new parameters repeat { - xnew <- mvrnorm(1, unlist(xcurr), jcov) + xnew <- MASS::mvrnorm(1, unlist(xcurr), jcov) if (bounded(xnew, rng)) { break } From cdad0d3e54b4cd95e6c0e926d0e0c23e5d8e5eba Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 14 May 2019 15:02:32 -0400 Subject: [PATCH 0150/2289] make sure packages will be loaded --- modules/assim.batch/DESCRIPTION | 10 ++++++---- modules/emulator/DESCRIPTION | 4 ++-- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index b4da99cb0dd..974d8ae6469 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -9,17 +9,19 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific model parameterization, execution, and analysis. The goal of PECAn is to streamline the interaction between data and models, and to improve the efficacy of scientific investigation. -Imports: +Depends: + abind, BayesianTools, coda (>= 0.18), - dplyr, + MASS, + mlegp, + dplyr +Imports: ellipse, graphics, grDevices, IDPmisc, lubridate (>= 1.6.0), - MASS, - mlegp, ncdf4 (>= 1.15), parallel, PEcAn.benchmark, diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION index 3717684dcba..753048d0aad 100644 --- a/modules/emulator/DESCRIPTION +++ b/modules/emulator/DESCRIPTION @@ -9,11 +9,11 @@ Maintainer: Michael Dietze Depends: mvtnorm, mlegp, + coda (>= 0.18), + MASS, MCMCpack Imports: PEcAn.logger, - coda (>= 0.18), - MASS, methods Description: Implementation of a Gaussian Process model (both likelihood and bayesian approaches) for kriging and model emulation. 
Includes functions From d3dacfc0422fce5376e17b8c2ad06d187508bb37 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 14 May 2019 17:07:29 -0400 Subject: [PATCH 0151/2289] some more adjustments --- models/sipnet/R/read_restart.SIPNET.R | 12 +- models/sipnet/R/write.configs.SIPNET.R | 131 ++++++++++-------- models/sipnet/R/write_restart.SIPNET.R | 7 +- .../R/Multi_Site_Constructors.R | 63 +++++++-- .../assim.sequential/R/sda.enkf_MultiSite.R | 2 +- modules/assim.sequential/R/sda_plotting.R | 2 +- 6 files changed, 133 insertions(+), 84 deletions(-) mode change 100644 => 100755 models/sipnet/R/read_restart.SIPNET.R mode change 100644 => 100755 models/sipnet/R/write.configs.SIPNET.R mode change 100644 => 100755 models/sipnet/R/write_restart.SIPNET.R mode change 100644 => 100755 modules/assim.sequential/R/Multi_Site_Constructors.R diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R old mode 100644 new mode 100755 index 0335653e1d0..1034e45815c --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -35,7 +35,7 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p last <- length(ens[[1]]) forecast <- list() - + #### PEcAn Standard Outputs if ("GWBI" %in% var.names) { forecast[[length(forecast) + 1]] <- udunits2::ud.convert(mean(ens$GWBI), "kg/m^2/s", "Mg/ha/yr") @@ -45,17 +45,17 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p if ("AbvGrndWood" %in% var.names) { forecast[[length(forecast) + 1]] <- udunits2::ud.convert(ens$AbvGrndWood[last], "kg/m^2", "Mg/ha") names(forecast[[length(forecast)]]) <- c("AbvGrndWood") - + # calculate fractions, store in params, will use in write_restart wood_total_C <- ens$AbvGrndWood[last] + ens$fine_root_carbon_content[last] + ens$coarse_root_carbon_content[last] - + if (wood_total_C==0) wood_total_C <- 0.0001 # Making sure we are not making Nans in case there is no plant living there. 
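      # The three fractions computed below partition total wood C among
      # aboveground wood, coarse roots and fine roots; they are carried along
      # in params$restart so that write_restart.SIPNET / write.config.SIPNET
      # can later rebuild the separate pools from an updated AbvGrndWood state.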
abvGrndWoodFrac <- ens$AbvGrndWood[last] / wood_total_C
     coarseRootFrac  <- ens$coarse_root_carbon_content[last] / wood_total_C
     fineRootFrac    <- ens$fine_root_carbon_content[last] / wood_total_C
     params$restart <- c(abvGrndWoodFrac, coarseRootFrac, fineRootFrac)
-    
+
     if (length(params$restart)>0)
-      names(params$restart) <- c("abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac")
+      names(params$restart) <- c("abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac") 
   }
 
   if ("leaf_carbon_content" %in% var.names) {
@@ -98,4 +98,4 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p
 
   X_tmp <- list(X = unlist(forecast), params = params)
 
   return(X_tmp)
-} # read_restart.SIPNET
+} # read_restart.SIPNET
\ No newline at end of file
diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R
old mode 100644
new mode 100755
index fb73f17cf4f..d4d34ef36af
--- a/models/sipnet/R/write.configs.SIPNET.R
+++ b/models/sipnet/R/write.configs.SIPNET.R
@@ -20,10 +20,10 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   template.in <- system.file("sipnet.in", package = "PEcAn.SIPNET")
   config.text <- readLines(con = template.in, n = -1)
   writeLines(config.text, con = file.path(settings$rundir, run.id, "sipnet.in"))
-  
+
   ### WRITE *.clim
   template.clim <- settings$run$inputs$met$path  ## read from settings
-  
+
   if (!is.null(inputs)) {
     ## override if specified in inputs
     if ("met" %in% names(inputs)) {
@@ -32,7 +32,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     }
   }
   PEcAn.logger::logger.info(paste0("Writing SIPNET configs with input ", template.clim))
-  
+
   # find out where to write run/output
   rundir <- file.path(settings$host$rundir, as.character(run.id))
   outdir <- file.path(settings$host$outdir, as.character(run.id))
@@ -40,14 +40,14 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     rundir <- file.path(settings$rundir, as.character(run.id))
     outdir <- file.path(settings$modeloutdir, as.character(run.id))
   }
-  
+
   # create launch script (which will create symlink)
  if (!is.null(settings$model$jobtemplate) && file.exists(settings$model$jobtemplate)) {
    jobsh <- readLines(con = settings$model$jobtemplate, n = -1)
  } else {
    jobsh <- readLines(con = system.file("template.job", package = "PEcAn.SIPNET"), n = -1)
  }
-  
+
   # create host specific settings
   hostsetup <- ""
   if (!is.null(settings$model$prerun)) {
@@ -56,7 +56,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if (!is.null(settings$host$prerun)) {
     hostsetup <- paste(hostsetup, sep = "\n", paste(settings$host$prerun, collapse = "\n"))
   }
-  
+
   hostteardown <- ""
   if (!is.null(settings$model$postrun)) {
     hostteardown <- paste(hostteardown, sep = "\n", paste(settings$model$postrun, collapse = "\n"))
@@ -64,55 +64,55 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if (!is.null(settings$host$postrun)) {
     hostteardown <- paste(hostteardown, sep = "\n", paste(settings$host$postrun, collapse = "\n"))
   }
-  
+
   # create job.sh
   jobsh <- gsub("@HOST_SETUP@", hostsetup, jobsh)
   jobsh <- gsub("@HOST_TEARDOWN@", hostteardown, jobsh)
-  
+
   jobsh <- gsub("@SITE_LAT@", settings$run$site$lat, jobsh)
   jobsh <- gsub("@SITE_LON@", settings$run$site$lon, jobsh)
   jobsh <- gsub("@SITE_MET@", template.clim, jobsh)
-  
+
   jobsh <- gsub("@OUTDIR@", outdir, jobsh)
   jobsh <- gsub("@RUNDIR@", rundir, jobsh)
-  
+
   jobsh <- gsub("@START_DATE@", settings$run$start.date, jobsh)
   jobsh <- gsub("@END_DATE@",
settings$run$end.date, jobsh)
-  
+
   jobsh <- gsub("@BINARY@", settings$model$binary, jobsh)
   jobsh <- gsub("@REVISION@", settings$model$revision, jobsh)
-  
+
   if (is.null(settings$model$delete.raw)) {
     settings$model$delete.raw <- FALSE
   }
   jobsh <- gsub("@DELETE.RAW@", settings$model$delete.raw, jobsh)
-  
+
   writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh"))
   Sys.chmod(file.path(settings$rundir, run.id, "job.sh"))
-  
+
   ### WRITE *.param-spatial
   template.paramSpatial <- system.file("template.param-spatial", package = "PEcAn.SIPNET")
   file.copy(template.paramSpatial, file.path(settings$rundir, run.id, "sipnet.param-spatial"))
-  
+
   ### WRITE *.param
   template.param <- system.file("template.param", package = "PEcAn.SIPNET")
   if ("default.param" %in% names(settings$model)) {
     template.param <- settings$model$default.param
   }
-  
+
   param <- read.table(template.param)
-  
-  
-  
+
+
+
   #### write run-specific PFT parameters here ####
   #### Get parameters being handled by PEcAn
   for (pft in seq_along(trait.values)) {
 
     pft.traits <- unlist(trait.values[[pft]])
     pft.names <- names(pft.traits)
-    
+
     ## Append/replace params specified as constants
     constant.traits <- unlist(defaults[[1]]$constants)
     constant.names <- names(constant.traits)
-    
+
     # Replace matches
     for (i in seq_along(constant.traits)) {
       ind <- match(constant.names[i], pft.names)
@@ -125,13 +125,13 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
         pft.traits[ind] <- constant.traits[i]
       }
     }
-    
+
     # Remove NAs. Constants may be specified as NA to request template defaults. Note that it is 'NA'
     # (character) not actual NA due to being read in as XML
     pft.names <- pft.names[pft.traits != "NA" & !is.na(pft.traits)]
     pft.traits <- pft.traits[pft.traits != "NA" & !is.na(pft.traits)]
    pft.traits <- as.numeric(pft.traits)
-    
+
     # Leaf carbon concentration
     leafC <- 0.48  #0.5
     if ("leafC" %in% pft.names) {
@@ -139,7 +139,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
       id <- which(param[, 1] == "cFracLeaf")
       param[id, 2] <- leafC * 0.01  # convert to percentage from 0 to 1
     }
-    
+
     # Specific leaf area converted to SLW
     SLA <- NA
     id <- which(param[, 1] == "leafCSpWt")
@@ -149,7 +149,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     } else {
       SLA <- 1000 * leafC / param[id, 2]
     }
-    
+
     # Maximum photosynthesis
     Amax <- NA
     id <- which(param[, 1] == "aMax")
@@ -159,51 +159,51 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     } else {
       Amax <- param[id, 2] * SLA
     }
-    
+
     # Daily fraction of maximum photosynthesis
     if ("AmaxFrac" %in% pft.names) {
       param[which(param[, 1] == "aMaxFrac"), 2] <- pft.traits[which(pft.names == "AmaxFrac")]
     }
-    
+
     ### Canopy extinction coefficient (k)
     if ("extinction_coefficient" %in% pft.names) {
       param[which(param[, 1] == "attenuation"), 2] <- pft.traits[which(pft.names == "extinction_coefficient")]
     }
-    
+
     # Leaf respiration rate converted to baseFolRespFrac
     if ("leaf_respiration_rate_m2" %in% pft.names) {
       Rd <- pft.traits[which(pft.names == "leaf_respiration_rate_m2")]
       id <- which(param[, 1] == "baseFolRespFrac")
       param[id, 2] <- max(min(Rd/Amax, 1), 0)
     }
-    
+
     # Low temp threshold for photosynthesis
     if ("Vm_low_temp" %in% pft.names) {
       param[which(param[, 1] == "psnTMin"), 2] <- pft.traits[which(pft.names == "Vm_low_temp")]
     }
-    
+
     # Opt.
temp for photosynthesis
     if ("psnTOpt" %in% pft.names) {
       param[which(param[, 1] == "psnTOpt"), 2] <- pft.traits[which(pft.names == "psnTOpt")]
     }
-    
+
     # Growth respiration factor (fraction of GPP)
     if ("growth_resp_factor" %in% pft.names) {
       param[which(param[, 1] == "growthRespFrac"), 2] <- pft.traits[which(pft.names == "growth_resp_factor")]
     }
-    
+
     ### !!! NOT YET USED
     #Jmax = NA
     #if("Jmax" %in% pft.names){
    #  Jmax = pft.traits[which(pft.names == 'Jmax')]  ### Using Jmax scaled to 25 degC. Maybe not be the best approach
     #}
-    
+
     #alpha = NA
     #if("quantum_efficiency" %in% pft.names){
    #  alpha = pft.traits[which(pft.names == 'quantum_efficiency')]
     #}
-    
+
     # Half saturation of PAR.  PAR at which photosynthesis occurs at 1/2 theoretical maximum (Einsteins * m^-2 ground area * day^-1).
     #if(!is.na(Jmax) & !is.na(alpha)){
    #  param[which(param[,1] == "halfSatPar"),2] = Jmax/(2*alpha)
@@ -212,29 +212,29 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     ### Once halfSatPar is calculated, need to remove Jmax and quantum_efficiency from param list so they are not included in SA
     #}
     ### !!!
-    
+
     # Half saturation of PAR. PAR at which photosynthesis occurs at 1/2 theoretical maximum (Einsteins * m^-2 ground area * day^-1).
     # Temporary implementation until above is working.
     if ("half_saturation_PAR" %in% pft.names) {
       param[which(param[, 1] == "halfSatPar"), 2] <- pft.traits[which(pft.names == "half_saturation_PAR")]
     }
-    
+
     # Ball-berry stomatal slope parameter m
     if ("stomatal_slope.BB" %in% pft.names) {
       id <- which(param[, 1] == "m_ballBerry")
       param[id, 2] <- pft.traits[which(pft.names == "stomatal_slope.BB")]
     }
-    
+
     # Slope of VPD–photosynthesis relationship. dVpd = 1 - dVpdSlope * vpd^dVpdExp
     if ("dVPDSlope" %in% pft.names) {
       param[which(param[, 1] == "dVpdSlope"), 2] <- pft.traits[which(pft.names == "dVPDSlope")]
     }
-    
+
     # VPD–water use efficiency relationship. dVpd = 1 - dVpdSlope * vpd^dVpdExp
     if ("dVpdExp" %in% pft.names) {
       param[which(param[, 1] == "dVpdExp"), 2] <- pft.traits[which(pft.names == "dVpdExp")]
     }
-    
+
     # Leaf turnover rate average turnover rate of leaves, in fraction per day NOTE: read in as
     # per-year rate!
     if ("leaf_turnover_rate" %in% pft.names) {
@@ -244,12 +244,12 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
     if ("wueConst" %in% pft.names) {
       param[which(param[, 1] == "wueConst"), 2] <- pft.traits[which(pft.names == "wueConst")]
     }
-    
+
     # vegetation respiration Q10.
     if ("veg_respiration_Q10" %in% pft.names) {
       param[which(param[, 1] == "vegRespQ10"), 2] <- pft.traits[which(pft.names == "veg_respiration_Q10")]
     }
-    
+
     # Base vegetation respiration. vegetation maintenance respiration at 0 degrees C (g C respired * g^-1 plant C * day^-1)
     # NOTE: only counts plant wood C - leaves handled elsewhere (both above and below-ground: assumed for now to have same resp. rate)
     # NOTE: read in as per-year rate!
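# Illustrative sketch (`q10_to_0C` is a hypothetical helper, not part of this
# patch or the codebase): the respiration conversions that follow rescale
# rates reported at 25 degC to the 0 degC baseline SIPNET expects, using the
# Q10 relationship R(T) = R(0) * Q10^(T/10), hence R(0) = R(25) * Q10^(-25/10).
q10_to_0C <- function(resp_25C, Q10) {
  resp_25C * Q10^(-25 / 10)
}
# e.g. q10_to_0C(resp_25C = 0.04, Q10 = 2)  # ~0.00707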
@@ -263,18 +263,18 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs ## pft.traits[which(pft.names=='stem_respiration_rate')]*vegRespQ10^(-25/10) param[id, 2] <- stem_resp_g * vegRespQ10^(-25/10) } - + # turnover of fine roots (per year rate) if ("root_turnover_rate" %in% pft.names) { id <- which(param[, 1] == "fineRootTurnoverRate") param[id, 2] <- pft.traits[which(pft.names == "root_turnover_rate")] } - + # fine root respiration Q10 if ("fine_root_respiration_Q10" %in% pft.names) { param[which(param[, 1] == "fineRootQ10"), 2] <- pft.traits[which(pft.names == "fine_root_respiration_Q10")] } - + # base respiration rate of fine roots (per year rate) if ("root_respiration_rate" %in% pft.names) { fineRootQ10 <- param[which(param[, 1] == "fineRootQ10"), 2] @@ -286,7 +286,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs ## pft.traits[which(pft.names=='root_respiration_rate')]*fineRootQ10^(-25/10) param[id, 2] <- root_resp_rate_g * fineRootQ10 ^ (-25 / 10) } - + # coarse root respiration Q10 if ("coarse_root_respiration_Q10" %in% pft.names) { param[which(param[, 1] == "coarseRootQ10"), 2] <- pft.traits[which(pft.names == "coarse_root_respiration_Q10")] @@ -328,7 +328,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if ("wood_turnover_rate" %in% pft.names) { param[which(param[, 1] == "woodTurnoverRate"), 2] <- pft.traits[which(pft.names == "wood_turnover_rate")] } - + ### ----- Soil parameters soil respiration Q10. if ("soil_respiration_Q10" %in% pft.names) { param[which(param[, 1] == "soilRespQ10"), 2] <- pft.traits[which(pft.names == "soil_respiration_Q10")] @@ -351,7 +351,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if ("soilWHC" %in% pft.names) { param[which(param[, 1] == "soilWHC"), 2] <- pft.traits[which(pft.names == "soilWHC")] } - + # 10/31/2017 IF: these were the two assumptions used in the emulator paper in order to reduce dimensionality # These results in improved winter soil respiration values # they don't affect anything when the seasonal soil respiration functionality in SIPNET is turned-off @@ -367,31 +367,44 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if ("GDD" %in% pft.names) { param[which(param[, 1] == "gddLeafOn"), 2] <- pft.traits[which(pft.names == "GDD")] } - + # Fraction of leaf fall per year (should be 1 for decid) if ("fracLeafFall" %in% pft.names) { param[which(param[, 1] == "fracLeafFall"), 2] <- pft.traits[which(pft.names == "fracLeafFall")] } - + # Leaf growth. 
Amount of C added to the leaf during the greenup period
     if ("leafGrowth" %in% pft.names) {
       param[which(param[, 1] == "leafGrowth"), 2] <- pft.traits[which(pft.names == "leafGrowth")]
     }
   }  ## end loop over PFTS
   ####### end parameter update
-  
-  
+
+
   #### write INITIAL CONDITIONS here ####
   if (!is.null(IC)) {
     ic.names <- names(IC)
     ## plantWoodInit gC/m2
     plant_wood_vars <- c("AbvGrndWood", "abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac")
     if (all(plant_wood_vars %in% ic.names)) {
-      
-      # reconstruct total wood C
       wood_total_C <- IC$AbvGrndWood / IC$abvGrndWoodFrac
-      if (is.na(wood_total_C) | is.infinite(wood_total_C) | is.nan(wood_total_C) | wood_total_C <0) wood_total_C <- 0
+      #Sanity check
+      if (is.na(wood_total_C) | is.infinite(wood_total_C) | is.nan(wood_total_C) | wood_total_C <0) {
+        wood_total_C <- 0
+        if (round(IC$AbvGrndWood)>0 & round(IC$abvGrndWoodFrac, 3)==0)
+          PEcAn.logger::logger.warn(
+            paste0(
+              "There is a major problem with ",
+              run.id,
+              " in either the model's parameters or IC.",
+              "Because the AGB is estimated=",
+              IC$AbvGrndWood,
+              " while AGB Frac is estimated=",
+              IC$abvGrndWoodFrac
+            )
+          )
+      }
       param[which(param[, 1] == "plantWoodInit"), 2] <- wood_total_C
       param[which(param[, 1] == "coarseRootFrac"), 2] <- IC$coarseRootFrac
       param[which(param[, 1] == "fineRootFrac"), 2] <- IC$fineRootFrac
@@ -428,7 +441,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   else if (!is.null(settings$run$inputs$poolinitcond$path)) {
     IC.path <- settings$run$inputs$poolinitcond$path
     IC.pools <- PEcAn.data.land::prepare_pools(IC.path, constants = list(sla = SLA))
-    
+
     if(!is.null(IC.pools)){
       IC.nc <- ncdf4::nc_open(IC.path) #for additional variables specific to SIPNET
       ## plantWoodInit gC/m2
@@ -455,7 +468,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
       }
       ## litterWFracInit fraction
       litterWFrac <- soilWFrac
-`
+
       ## snowInit cm water equivalent (cm = g / cm2 because 1 g water = 1 cm3 water)
       snow = try(ncdf4::ncvar_get(IC.nc,"SWE"),silent = TRUE)
       if (!is.na(snow) && is.numeric(snow)) {
@@ -495,7 +508,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
 ##'
 ##' @author Shawn Serbin, David LeBauer
 remove.config.SIPNET <- function(main.outdir, settings) {
-  
+
   ### Remove files on localhost
   if (settings$host$name == "localhost") {
     files <- paste0(settings$outdir, list.files(path = settings$outdir, recursive = FALSE))  # Need to change this to the run folder when implemented
@@ -506,9 +519,9 @@ remove.config.SIPNET <- function(main.outdir, settings) {
       files <- files[-grep(pft.dir, files)]  # Keep pft folder
       # file.remove(files,recursive=TRUE)
       system(paste("rm -r ", files, sep = "", collapse = " "), ignore.stderr = TRUE)  # remove files/dirs
-      
+
       ### On remote host
     } else {
       print("*** WARNING: Removal of files on remote host not yet implemented ***")
     }
-} # remove.config.SIPNET
+} # remove.config.SIPNET
\ No newline at end of file
diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R
old mode 100644
new mode 100755
index c6d6599d75b..f740567d41b
--- a/models/sipnet/R/write_restart.SIPNET.R
+++ b/models/sipnet/R/write_restart.SIPNET.R
@@ -58,6 +58,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
   if ("AbvGrndWood" %in% variables) {
     AbvGrndWood <- udunits2::ud.convert(new.state$AbvGrndWood, "Mg/ha", "g/m^2")
     analysis.save[[length(analysis.save) + 1]] <- AbvGrndWood
+    if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]]
<- 0 names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood") analysis.save[[length(analysis.save) + 1]] <- IC_extra$abvGrndWoodFrac @@ -103,7 +104,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, if (new.state$SWE < 0) analysis.save[[length(analysis.save)]] <- 0 names(analysis.save[[length(analysis.save)]]) <- c("snow") } - + if ("LAI" %in% variables) { analysis.save[[length(analysis.save) + 1]] <- new.state$LAI if (new.state$LAI < 0) analysis.save[[length(analysis.save)]] <- 0 @@ -116,7 +117,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, }else{ analysis.save.mat<-NULL } - + do.call(write.config.SIPNET, args = list(defaults = NULL, trait.values = new.params, @@ -125,4 +126,4 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, inputs = inputs, IC = analysis.save.mat)) print(runid) -} # write_restart.SIPNET +} # write_restart.SIPNET \ No newline at end of file diff --git a/modules/assim.sequential/R/Multi_Site_Constructors.R b/modules/assim.sequential/R/Multi_Site_Constructors.R old mode 100644 new mode 100755 index ddcb71ee812..12f3f50b9e7 --- a/modules/assim.sequential/R/Multi_Site_Constructors.R +++ b/modules/assim.sequential/R/Multi_Site_Constructors.R @@ -177,29 +177,64 @@ Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){ #This is used inside the loop below for moving between the sites when populating the big H matrix nobs <- obs.t.mean %>% map_dbl(~length(.x)) %>% max # this gives me the max number of obs at sites nobstotal<-obs.t.mean %>% purrr::flatten() %>% length() # this gives me the total number of obs - #H <- matrix(0, (nobs * nsite.ids.with.data), (nvariable*nsite)) - #big empty H which needs to be filled in. 
+ #Having the total number of obs as the row number H <- matrix(0, nobstotal, (nvariable*nsite)) j<-1 - for(i in seq_along(site.ids)){ - + + for(i in seq_along(site.ids)) + { site <- site.ids[i] obs.names <- names(obs.t.mean[[site]]) if(is.null(obs.names)) next; - choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), - max = 1, USE.NAMES = FALSE) %>% unlist - choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist - choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = FALSE) %>% unlist - - # empty matrix for this site - H.this.site <- matrix(0, length(choose), nvariable) - # fill in the ones based on choose - H.this.site [choose.row, choose.col] <- 1 + if (length(obs.names) == 1) + { + + # choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), + # max = 1, USE.NAMES = FALSE) %>% unlist + choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist + choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = FALSE) %>% unlist + + # empty matrix for this site + H.this.site <- matrix(0, nrow(H), nvariable) + # fill in the ones based on choose + H.this.site [choose.row, choose.col] <- 1 + } - pos.row<- ((nobs*j)-(nobs-1)):(nobs*j) + if (length(obs.names) > 1) + { + # empty matrix for this site + H.this.site <- matrix(0, nobs, nvariable) + + for (n in seq_along(obs.names)) + { + choose.col <- sapply(obs.names[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist + H.this.site[n, choose.col] = 1 + + } + H.this.site = do.call(rbind, replicate(length(obs.names), H.this.site, simplify = FALSE)) + } + + # for (n in seq_along(obs.names)) + # { + # choose.col <- sapply(obs.names[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist + # H.this.obs[n, choose.col] = 1 + # + # } + # H.this.site = data.frame() + # for (x in seq_along(obs.names)) + # { + # test = do.call(rbind, replicate(length(obs.names), H.this.obs[x,], simplify = FALSE)) + # H.this.site = rbind(H.this.site, test) + # + # } + # H.this.site = as.matrix(H.this.site) + # } + # + pos.row = 1:nobstotal + #pos.row<- ((nobs*j)-(nobs-1)):(nobs*j) pos.col<- ((nvariable*i)-(nvariable-1)):(nvariable*i) H[pos.row,pos.col] <-H.this.site diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index eee7412dbd5..a6195513a2a 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -474,7 +474,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) #writing down the image - either you asked for it or not :) - if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) } ### end loop over time diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 9531e8f611f..780073fe3ca 100644 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -559,7 +559,7 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs mutate(Site=names(one.day.data$means)) %>% 
tidyr::gather(Variable,Means,-c(Site)) %>% right_join(one.day.data$covs %>% - map_dfr(~ t(sqrt(diag(.x))) %>% + map_dfr(~ t(sqrt(as.numeric(diag(.x)))) %>% data.frame %>% `colnames<-`(c(obs.var.names))) %>% mutate(Site=names(one.day.data$covs)) %>% tidyr::gather(Variable,Sd,-c(Site)), From fa110be2abf82733cac6c59fbcee101a34b1563c Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 07:50:26 -0400 Subject: [PATCH 0152/2289] one more path fix --- modules/assim.batch/R/pda.emulator.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.R b/modules/assim.batch/R/pda.emulator.R index e7a9dcdb889..7cfee33b23f 100644 --- a/modules/assim.batch/R/pda.emulator.R +++ b/modules/assim.batch/R/pda.emulator.R @@ -164,13 +164,13 @@ pda.emulator <- function(settings, external.data = NULL, external.priors = NULL, ## history restart if(!remote){ - settings_outdir <- settings$outdir - + settings_outdir <- settings$outdir pda.restart.file <- file.path(settings_outdir, paste0("history.pda", settings$assim.batch$ensemble.id, ".Rdata")) }else{ - settings_outdir <- dirname(settings$host$rundir) + settings_outdir <- dirname(settings$host$rundir) + settings_outdir <- gsub(settings$assim.batch$ensemble.id, "", settings_outdir) pda.restart.file <- paste0(settings_outdir, "history.pda", settings$assim.batch$ensemble.id, ".Rdata") } From e8e051e27d2c24e4a7abf73d21642b82561234e2 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 11:01:05 -0400 Subject: [PATCH 0153/2289] start sync function --- modules/assim.batch/R/pda.emulator.ms.R | 17 +++---------- modules/assim.batch/R/pda.utils.R | 33 +++++++++++++++++++++++++ 2 files changed, 36 insertions(+), 14 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index d8e08cc55d5..4fd389fcf65 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -83,20 +83,9 @@ pda.emulator.ms <- function(multi.settings) { if(all(check_all_sites)) break } - - # Sync regular files back - args <- c("-e", paste0("ssh -o ControlPath=\"", multi.settings[[1]]$host$tunnel, "\"", collapse = "")) - args <- c(args, paste0(multi.settings[[1]]$host$name, ":", dirname(multi.settings[[1]]$host$outdir)), - multi.settings[[1]]$outdir) - system2("rsync", shQuote(args), stdout = TRUE, stderr = FALSE) - - # update multi.settings - for(ms in seq_along(multi.settings)){ - tmp_settings <- read.settings(paste0(multi.settings[[ms]]$outdir,"/pecan.pda", - multi_site_objects$ensembleidlist[[ms]],".xml")) - multi.settings[[ms]]$assim.batch <- tmp_settings$assim.batch - multi.settings[[ms]]$pfts <- tmp_settings$pfts - } + # Sync fcn + multi.settings <- sync_pda_remote(multi.settings, multi_site_objects$ensembleidlist) + #repeat( #emulator_r_check <- round_check(multi.settings) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 6fa2a0c6694..2a5689c4cab 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1207,3 +1207,36 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ } +# helper function for syncing remote pda runs +# this function resembles remote.copy.from but we don't want to sync everything back +# if register==TRUE, the last files returned will be registered to the DB +##' @export +sync_pda_remote <- function(multi.settings, ensembleidlist, register = FALSE){ + + args <- c("-qa") + args <- c(args, "--include=\"pecan.pda*\"") + args <- c(args, 
"--include=\"history*\"") + args <- c(args, "--include 'pft/***'") + args <- c(args, "--include=\"mcmc.list*\"") + args <- c(args, "--exclude=\"*\"") #exclude everything else + args <- c(args, "-e", paste0("'ssh -o ControlPath=", multi.settings[[1]]$host$tunnel, "'")) + args <- c(args, paste0(multi.settings[[1]]$host$name, ":", dirname(multi.settings[[1]]$host$outdir),"/"), + multi.settings[[1]]$outdir) + res <- system2("rsync", paste(args), stdout = TRUE, stderr = FALSE) + if(attr(res,"status") == 255) PEcAn.logger::logger.error("Tunnel closed. Reopen and try again.") + + # update multi.settings + for(ms in seq_along(multi.settings)){ + tmp_settings <- read.settings(paste0(multi.settings[[ms]]$outdir,"/pecan.pda", + ensembleidlist[[ms]],".xml")) + multi.settings[[ms]]$assim.batch <- tmp_settings$assim.batch + multi.settings[[ms]]$pfts <- tmp_settings$pfts + } + + if(register){ + # fcn needs conenction to DB + } + + return(multi.settings) +} + From cc8ac1b4c50602267c59d0cd8983d7262da7b284 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 11:24:43 -0400 Subject: [PATCH 0154/2289] round-loop --- modules/assim.batch/R/pda.emulator.ms.R | 57 +++++++++++-------------- modules/assim.batch/R/pda.utils.R | 2 +- 2 files changed, 27 insertions(+), 32 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 4fd389fcf65..26cc0b51989 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -45,28 +45,27 @@ pda.emulator.ms <- function(multi.settings) { # but this requires some re-arrangement in pda.emulator function # for now we will always run site-level calibration - - # number of sites will probably get big real quick, so some multi-site PDA runs should be run on the cluster - # unfortunately pda.emulator function was not fully designed for remote runs, so first we need to prepare a few things it needs - # (1) all sites should be running the same knots - # (2) all sites will use the same prior.list - # (3) the format information for the assimilated data (pda.emulator needs DB connection to query it if it's not externally provided) - # (4) ensemble ids (they provide unique trackers for emulator functions) - multi_site_objects <- return_multi_site_objects(multi.settings) # using the first site is enough - - # the default implementation is that we will iteratively run the rounds until improvement in site-level fits slows down. - #if(one_round){ - #multi.settings <- papply(multi.settings, pda.emulator, individual = individual) - #}else{ - - # Open the tunnel (might not need) - PEcAn.remote::open_tunnel(multi.settings[[1]]$host$name, - user = multi.settings[[1]]$host$user, - tunnel_dir = dirname(multi.settings[[1]]$host$tunnel)) + # Open the tunnel (might not need) + PEcAn.remote::open_tunnel(multi.settings[[1]]$host$name, + user = multi.settings[[1]]$host$user, + tunnel_dir = dirname(multi.settings[[1]]$host$tunnel)) + + # Until a check function is implemented, run a predefined number of emulator rounds + n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 3, as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) + PEcAn.logger::logger.info(n_rounds, " individual PDA rounds will be run per site. 
Please wait.") + repeat{ + + # number of sites will probably get big real quick, so some multi-site PDA runs should be run on the cluster + # unfortunately pda.emulator function was not fully designed for remote runs, so first we need to prepare a few things it needs + # (1) all sites should be running the same knots + # (2) all sites will use the same prior.list + # (3) the format information for the assimilated data (pda.emulator needs DB connection to query it if it's not externally provided) + # (4) ensemble ids (they provide unique trackers for emulator functions) + multi_site_objects <- return_multi_site_objects(multi.settings) emulator_jobs <- rep(NA, length(multi.settings)) for(ms in seq_along(multi.settings)){ - + # Sync to remote subfile <- prepare_pda_remote(multi.settings[[ms]], site = ms, multi_site_objects) @@ -85,20 +84,16 @@ pda.emulator.ms <- function(multi.settings) { # Sync fcn multi.settings <- sync_pda_remote(multi.settings, multi_site_objects$ensembleidlist) + + # continue or stop + r_counter <- multi.settings[[1]]$assim.batch$round_counter + PEcAn.logger::logger.info("Round", r_counter, "finished.") + if(r_counter == n_rounds) break + } - #repeat( - - #emulator_r_check <- round_check(multi.settings) - #if(emulator_r_check) break - # external.knots <- sample_multi_site_MCMC - # add round tag - #) - # Close the tunnel - PEcAn.remote::kill.tunnel(settings) - #} - #multi.settings[[i]] <- pda.emulator(multi.settings[[i]], individual = individual) + # Close the tunnel + PEcAn.remote::kill.tunnel(settings) - } # write multi.settings with individual-pda info diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 2a5689c4cab..cd7c7cc2db8 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1234,7 +1234,7 @@ sync_pda_remote <- function(multi.settings, ensembleidlist, register = FALSE){ } if(register){ - # fcn needs conenction to DB + # fcn needs connection to DB } return(multi.settings) From 7f9139948b8776da08148191c8455851c1721047 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 11:26:41 -0400 Subject: [PATCH 0155/2289] roxygenise --- modules/assim.batch/NAMESPACE | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/assim.batch/NAMESPACE b/modules/assim.batch/NAMESPACE index 9aab9982693..f308e425146 100644 --- a/modules/assim.batch/NAMESPACE +++ b/modules/assim.batch/NAMESPACE @@ -46,4 +46,5 @@ export(return_hyperpars) export(return_multi_site_objects) export(runModule.assim.batch) export(sample_MCMC) +export(sync_pda_remote) export(write_sf_posterior) From 2af6270552e30615e9e22cfbf8b41664c91e21ce Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 12:46:42 -0400 Subject: [PATCH 0156/2289] default to five rounds --- modules/assim.batch/R/pda.emulator.ms.R | 4 ++-- modules/assim.batch/R/pda.utils.R | 2 ++ 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 26cc0b51989..1275aa75466 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -51,7 +51,7 @@ pda.emulator.ms <- function(multi.settings) { tunnel_dir = dirname(multi.settings[[1]]$host$tunnel)) # Until a check function is implemented, run a predefined number of emulator rounds - n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 3, as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) + n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 5, 
as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) PEcAn.logger::logger.info(n_rounds, " individual PDA rounds will be run per site. Please wait.") repeat{ @@ -86,7 +86,7 @@ pda.emulator.ms <- function(multi.settings) { multi.settings <- sync_pda_remote(multi.settings, multi_site_objects$ensembleidlist) # continue or stop - r_counter <- multi.settings[[1]]$assim.batch$round_counter + r_counter <- as.numeric(multi.settings[[1]]$assim.batch$round_counter) PEcAn.logger::logger.info("Round", r_counter, "finished.") if(r_counter == n_rounds) break } diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index cd7c7cc2db8..4016a51fa9d 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1218,6 +1218,8 @@ sync_pda_remote <- function(multi.settings, ensembleidlist, register = FALSE){ args <- c(args, "--include=\"history*\"") args <- c(args, "--include 'pft/***'") args <- c(args, "--include=\"mcmc.list*\"") + args <- c(args, "--include=\"ss.pda*\"") + args <- c(args, "--include=\"emulator.pda*\"") args <- c(args, "--exclude=\"*\"") #exclude everything else args <- c(args, "-e", paste0("'ssh -o ControlPath=", multi.settings[[1]]$host$tunnel, "'")) args <- c(args, paste0(multi.settings[[1]]$host$name, ":", dirname(multi.settings[[1]]$host$outdir),"/"), From 63f0ca9fdaa9cfe89bae078488321f582ca847d1 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 13:18:14 -0400 Subject: [PATCH 0157/2289] write intermediate multi settings --- modules/assim.batch/R/pda.emulator.ms.R | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 1275aa75466..13d01048c58 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -82,11 +82,14 @@ pda.emulator.ms <- function(multi.settings) { if(all(check_all_sites)) break } - # Sync fcn + # Sync from remote multi.settings <- sync_pda_remote(multi.settings, multi_site_objects$ensembleidlist) + # continue or stop r_counter <- as.numeric(multi.settings[[1]]$assim.batch$round_counter) + # write multi.settings with individual-pda info + PEcAn.settings::write.settings(multi.settings, outputfile = paste0('pecan.PDA_MS', r_counter, '.xml')) PEcAn.logger::logger.info("Round", r_counter, "finished.") if(r_counter == n_rounds) break } @@ -96,8 +99,7 @@ pda.emulator.ms <- function(multi.settings) { } - # write multi.settings with individual-pda info - PEcAn.settings::write.settings(multi.settings, outputfile='pecan.PDA_MS.xml') + ## -------------------------------- Prepare for Joint and Hierarchical ----------------------------------------- From 890f701ac4f53e63e26e932ea5775cf2d7025856 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 13:27:30 -0400 Subject: [PATCH 0158/2289] adding files travis requests --- base/remote/man/remote.execute.R.Rd | 4 ++-- base/remote/man/start.model.runs.Rd | 3 ++- base/remote/man/start_qsub.Rd | 4 ++-- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/base/remote/man/remote.execute.R.Rd b/base/remote/man/remote.execute.R.Rd index ae8f76b7aab..9c00555ba5e 100644 --- a/base/remote/man/remote.execute.R.Rd +++ b/base/remote/man/remote.execute.R.Rd @@ -4,8 +4,8 @@ \alias{remote.execute.R} \title{Execute command remotely} \usage{ -remote.execute.R(script, host = "localhost", user = NA, verbose = FALSE, - R = "R", scratchdir = tempdir()) +remote.execute.R(script, host = "localhost", user = NA, + verbose = FALSE, R = 
"R", scratchdir = tempdir()) } \arguments{ \item{script}{the script to be invoked, as a list of commands.} diff --git a/base/remote/man/start.model.runs.Rd b/base/remote/man/start.model.runs.Rd index 85d06d5fbd9..02985be0760 100644 --- a/base/remote/man/start.model.runs.Rd +++ b/base/remote/man/start.model.runs.Rd @@ -4,7 +4,8 @@ \alias{start.model.runs} \title{Start selected ecosystem model runs within PEcAn workflow} \usage{ -\method{start}{model.runs}(settings, write = TRUE, stop.on.error = TRUE) +\method{start}{model.runs}(settings, write = TRUE, + stop.on.error = TRUE) } \arguments{ \item{settings}{pecan settings object} diff --git a/base/remote/man/start_qsub.Rd b/base/remote/man/start_qsub.Rd index e5bb651e883..4c135b3d151 100644 --- a/base/remote/man/start_qsub.Rd +++ b/base/remote/man/start_qsub.Rd @@ -4,8 +4,8 @@ \alias{start_qsub} \title{Start qsub runs} \usage{ -start_qsub(run, qsub_string, rundir, host, host_rundir, host_outdir, stdout_log, - stderr_log, job_script, qsub_extra = NULL) +start_qsub(run, qsub_string, rundir, host, host_rundir, host_outdir, + stdout_log, stderr_log, job_script, qsub_extra = NULL) } \arguments{ \item{run}{(numeric) run ID, as an integer} From 25981b6a94c711b21a178e71fb2c90058e1ef347 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 15 May 2019 13:49:56 -0400 Subject: [PATCH 0159/2289] using an appropriate pe spec --- modules/assim.batch/R/pda.utils.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 4016a51fa9d..9002baa62b2 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1139,7 +1139,7 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ # parse queue from settings$host$qsub cat(paste0("#$ -q '", gsub( " .*$", "", sub(".*-q ", "", settings$host$qsub)), "'\n"), file = local_sub_file, append = TRUE) cat(paste0("#$ -N emulator_s", site,"\n"), file = local_sub_file, append = TRUE) - #cat("#$ -pe omp 3\n", file = local_sub_file, append = TRUE) + cat(paste0("#$ -pe omp ", length(settings$assim.batch$inputs), "\n"), file = local_sub_file, append = TRUE) cat(paste0("#cd ", remote_dir, "\n"), file = local_sub_file, append = TRUE) cat(paste0("#", settings$host$prerun, "\n"), file = local_sub_file, append = TRUE) cat(paste0("Rscript remote_emulator_s",site,".R\n"), file = local_sub_file, append = TRUE) @@ -1151,6 +1151,7 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ local_script_file <- paste0(settings$outdir, "/remote_emulator_s",site,".R") first_lines <- c("rm(list=ls(all=TRUE))\n", "library(PEcAn.assim.batch)\n", + "library(PEcAn.benchmark)\n", paste0("load(\"",remote_object_file,"\")\n"), "settings <- multi_site_objects$settings\n", "external_priors <- multi_site_objects$priorlist\n", "external_knots <- multi_site_objects$externalknots\n", "external_formats <- multi_site_objects$formatlist\n", From c4f9609a543da15c5798f090d9c2d6629a35dc44 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 16 May 2019 14:04:27 -0400 Subject: [PATCH 0160/2289] path fix for load --- modules/assim.batch/R/pda.emulator.ms.R | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 13d01048c58..c7a8156e5c2 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -134,8 +134,10 @@ pda.emulator.ms <- function(multi.settings) { # collect GPs and SSs 
for(s in seq_along(multi.settings)){ - load(multi.settings[[s]]$assim.batch$emulator.path) - load(multi.settings[[s]]$assim.batch$ss.path) + load(file.path(multi.settings[[s]]$outdir, + basename(multi.settings[[s]]$assim.batch$emulator.path))) + load(file.path(multi.settings[[s]]$outdir, + basename(multi.settings[[s]]$assim.batch$ss.path))) gp.stack[[s]] <- gp SS.stack[[s]] <- SS remove(gp, SS) @@ -169,7 +171,7 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) ## history restart - hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.hbc", + hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.joint", tmp.settings$assim.batch$ensemble.id, ".Rdata")) gp <- unlist(gp.stack, recursive = FALSE) From 373f4a6b6abc5ea39b2ffaf87c6a6d3e387607e1 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Fri, 17 May 2019 11:41:40 -0400 Subject: [PATCH 0161/2289] fix sda var order + H matrix stuff --- models/sipnet/R/write_restart.SIPNET.R | 2 +- .../R/Multi_Site_Constructors.R | 57 ++++--------------- .../assim.sequential/R/sda.enkf_MultiSite.R | 16 ++++-- 3 files changed, 23 insertions(+), 52 deletions(-) diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R index f740567d41b..e46df1b9daf 100755 --- a/models/sipnet/R/write_restart.SIPNET.R +++ b/models/sipnet/R/write_restart.SIPNET.R @@ -58,7 +58,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, if ("AbvGrndWood" %in% variables) { AbvGrndWood <- udunits2::ud.convert(new.state$AbvGrndWood, "Mg/ha", "g/m^2") analysis.save[[length(analysis.save) + 1]] <- AbvGrndWood - if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0 +# if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0 names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood") analysis.save[[length(analysis.save) + 1]] <- IC_extra$abvGrndWoodFrac diff --git a/modules/assim.sequential/R/Multi_Site_Constructors.R b/modules/assim.sequential/R/Multi_Site_Constructors.R index 12f3f50b9e7..b28736e8d8e 100755 --- a/modules/assim.sequential/R/Multi_Site_Constructors.R +++ b/modules/assim.sequential/R/Multi_Site_Constructors.R @@ -185,62 +185,27 @@ Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){ for(i in seq_along(site.ids)) { site <- site.ids[i] - obs.names <- names(obs.t.mean[[site]]) + choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), + max = 1, USE.NAMES = FALSE) %>% unlist - if(is.null(obs.names)) next; + if(is.null(choose)) next; - if (length(obs.names) == 1) + H.this.site <- matrix(0, length(choose), nvariable) + + for (n in seq_along(choose)) { + choose.col <- sapply(choose[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist + H.this.site[n, choose.col] = 1 - # choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), - # max = 1, USE.NAMES = FALSE) %>% unlist - choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist - choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = FALSE) %>% unlist - - # empty matrix for this site - H.this.site <- matrix(0, nrow(H), nvariable) - # fill in the ones based on choose - H.this.site [choose.row, choose.col] <- 1 } - if (length(obs.names) > 1) - { - # empty matrix for this site - H.this.site <- matrix(0, nobs, nvariable) - - for (n in seq_along(obs.names)) - { - choose.col <- sapply(obs.names[n], agrep, x = 
var.names, max = 1, USE.NAMES = FALSE) %>% unlist - H.this.site[n, choose.col] = 1 - - } - H.this.site = do.call(rbind, replicate(length(obs.names), H.this.site, simplify = FALSE)) - } - - # for (n in seq_along(obs.names)) - # { - # choose.col <- sapply(obs.names[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist - # H.this.obs[n, choose.col] = 1 - # - # } - # H.this.site = data.frame() - # for (x in seq_along(obs.names)) - # { - # test = do.call(rbind, replicate(length(obs.names), H.this.obs[x,], simplify = FALSE)) - # H.this.site = rbind(H.this.site, test) - # - # } - # H.this.site = as.matrix(H.this.site) - # } - # - pos.row = 1:nobstotal - #pos.row<- ((nobs*j)-(nobs-1)):(nobs*j) + #pos.row = 1:nobstotal + pos.row<- ((nobs*j)-(nobs-1)):(nobs*j) pos.col<- ((nvariable*i)-(nvariable-1)):(nvariable*i) H[pos.row,pos.col] <-H.this.site - j <- j +1 } - + return(H) } diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index a6195513a2a..1c249d0905e 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -22,11 +22,11 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = FALSE, control=list(trace = TRUE, FF = FALSE, - interactivePlot = FALSE, + interactivePlot = FALSE, TimeseriesPlot = FALSE, - BiasPlot = FALSE, - plot.title = NULL, - facet.plots = FALSE, + BiasPlot = FALSE, + plot.title = NULL, + facet.plots = FALSE, debug = FALSE, pause=FALSE), ...) { @@ -343,9 +343,15 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = X <- X %>% map_dfc(~.x) %>% as.matrix() %>% - `colnames<-`(c(rep(var.names, length(X)))) %>% + `colnames<-`(c(rep(colnames(X[[names(X)[1]]]), length(X)))) %>% `attr<-`('Site',c(rep(site.ids, each=length(var.names)))) + # X <- X %>% + # map_dfc(~.x) %>% + # as.matrix() %>% + # `colnames<-`(c(rep(var.names, length(X)))) %>% + # `attr<-`('Site',c(rep(site.ids, each=length(var.names)))) + FORECAST[[t]] <- X ###-------------------------------------------------------------------### ### preparing OBS ### From f98ea30595b142034d1db6a216e74d4324afe16c Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Fri, 17 May 2019 11:54:41 -0400 Subject: [PATCH 0162/2289] removed '_' from SDA plot legends --- modules/assim.sequential/R/sda_plotting.R | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) mode change 100644 => 100755 modules/assim.sequential/R/sda_plotting.R diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R old mode 100644 new mode 100755 index 780073fe3ca..d2366b2a727 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -539,8 +539,11 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs Lower=quantile(Value,0.025, na.rm=T), Upper=quantile(Value,0.975, na.rm=T)) - })%>%mutate(Type=paste0("SDA_",listFA), + # dropped the "_" from "SDA_" in plot legends + })%>%mutate(Type=paste0("SDA ",listFA), Date=rep(obs.times[t1:t], each=colnames((All.my.data[[listFA]])[[1]]) %>% length() / length(unique(site.ids)))%>% as.POSIXct() + # })%>%mutate(Type=paste0("SDA_",listFA), + # Date=rep(obs.times[t1:t], each=colnames((All.my.data[[listFA]])[[1]]) %>% length() / length(unique(site.ids)))%>% as.POSIXct() ) }) @@ -566,8 +569,11 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs by=c('Site','Variable')) %>% mutate(Upper=Means+(Sd*1.96), 
Lower=Means-(Sd*1.96))%>% - mutate(Type="SDA_Data", + # dropped the "_" from "SDA_Data" + mutate(Type="SDA Data", Date=one.day.data$Date %>% as.POSIXct()) + # mutate(Type="SDA_Data", + # Date=one.day.data$Date %>% as.POSIXct()) })%>% From 93c34597e135d202f97d02d000d38a3f68a11ec6 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Fri, 17 May 2019 16:31:31 -0400 Subject: [PATCH 0163/2289] fixed H matrix... again --- .../R/Multi_Site_Constructors.R | 17 ++++++++--------- modules/assim.sequential/R/sda.enkf_MultiSite.R | 9 +++++++-- modules/assim.sequential/R/sda_plotting.R | 9 +++++---- 3 files changed, 20 insertions(+), 15 deletions(-) diff --git a/modules/assim.sequential/R/Multi_Site_Constructors.R b/modules/assim.sequential/R/Multi_Site_Constructors.R index b28736e8d8e..e61271530f5 100755 --- a/modules/assim.sequential/R/Multi_Site_Constructors.R +++ b/modules/assim.sequential/R/Multi_Site_Constructors.R @@ -185,7 +185,7 @@ Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){ for(i in seq_along(site.ids)) { site <- site.ids[i] - choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]), + choose <- sapply(names(obs.t.mean[[site]]), agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist if(is.null(choose)) next; @@ -194,17 +194,16 @@ Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){ for (n in seq_along(choose)) { - choose.col <- sapply(choose[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist - H.this.site[n, choose.col] = 1 - + H.this.site[n, choose[n]] <- 1 + #sapply(choose[n], agrep, x = seq_along(var.names), max = 1, USE.NAMES = FALSE) %>% unlist + #H.this.site[n, choose.col] = 1 } - #pos.row = 1:nobstotal - pos.row<- ((nobs*j)-(nobs-1)):(nobs*j) - pos.col<- ((nvariable*i)-(nvariable-1)):(nvariable*i) - - H[pos.row,pos.col] <-H.this.site + pos.row <- ((nobs*j)-(nobs-1)):(nobs*j) + pos.col <- ((nvariable*i)-(nvariable-1)):(nvariable*i) + H[pos.row,pos.col] <- H.this.site + j <- j+1 } return(H) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 1c249d0905e..901583a7f30 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -206,9 +206,10 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = ### Taking care of Forecast. Splitting / Writting / running / reading back### ###-------------------------------------------------------------------------###----- #- Check to see if this is the first run or not and what inputs needs to be sent to write.ensemble configs + if (t>1){ #removing old simulations - unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) + unlink(list.files(outdir, "*.nc.var", recursive = TRUE, full.names = TRUE)) #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file. 
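# --- Editor's note: illustrative sketch only, not part of this patch. ---
# purrr::map2() below walks the per-site settings and per-site inputs in
# parallel, returning one result per site. Toy example with made-up values:
purrr::map2(list(s1 = 1, s2 = 2), list(s1 = 10, s2 = 20),
            function(settings, inputs) settings + inputs)
# -> list(s1 = 11, s2 = 22)
# -------------------------------------------------------------------------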
inputs.split <- conf.settings %>% purrr::map2(inputs, function(settings, inputs) { @@ -226,6 +227,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = ) } } else{ + #unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) inputs.split <- inputs } inputs.split @@ -367,6 +369,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = # making the mapping oprator H <- Construct.H.multisite(site.ids, var.names, obs.mean[[t]]) + ###-------------------------------------------------------------------### ### Analysis ### ###-------------------------------------------------------------------###---- @@ -481,7 +484,9 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = #writing down the image - either you asked for it or not :) if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) - + if (t == 1){ + unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) + } } ### end loop over time } # sda.enkf diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index d2366b2a727..4a4f1e20264 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -540,11 +540,12 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs Upper=quantile(Value,0.975, na.rm=T)) # dropped the "_" from "SDA_" in plot legends - })%>%mutate(Type=paste0("SDA ",listFA), + })%>%mutate(Type=listFA, Date=rep(obs.times[t1:t], each=colnames((All.my.data[[listFA]])[[1]]) %>% length() / length(unique(site.ids)))%>% as.POSIXct() - # })%>%mutate(Type=paste0("SDA_",listFA), + ) + # })%>%mutate(Type=paste0("SDA ",listFA), # Date=rep(obs.times[t1:t], each=colnames((All.my.data[[listFA]])[[1]]) %>% length() / length(unique(site.ids)))%>% as.POSIXct() - ) + # ) }) @@ -570,7 +571,7 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs mutate(Upper=Means+(Sd*1.96), Lower=Means-(Sd*1.96))%>% # dropped the "_" from "SDA_Data" - mutate(Type="SDA Data", + mutate(Type="Data", Date=one.day.data$Date %>% as.POSIXct()) # mutate(Type="SDA_Data", # Date=one.day.data$Date %>% as.POSIXct()) From c61ff81d7123d30b16342a1cf98b273ab98fb601 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 17 May 2019 16:48:29 -0400 Subject: [PATCH 0164/2289] Start cleaning up batch test file --- base/workflow/inst/batch_runs.R | 243 +++++++++++++++++++------------- 1 file changed, 143 insertions(+), 100 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index 606b2e54040..7098dc81238 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -13,124 +13,157 @@ ## D. Manually Define Met, right now CRUNCEP and AMerifluxlbl ## E. Manually Define start and end date ## F. Manually Define Output var -## G. Manually Define Sensitivity and enesmble +## G. Manually Define Sensitivity and enesmble ## H. Find sites not associated with any inputs(meaning data needs to be downloaded), part of ameriflux network, and have data for start-end year range. ## Available ameriflux data is found by parsing the notes section of the sites table where ameriflux sites have a year range. ## Part 3 - Create Run table ## A. 
Create table that contains combinations of models,met_name,site_id, startdate, enddate, pecan_path,out.var, ensemble, ens_size, sensitivity args that the function above will use to do runs. -## Part 4 - Run the Function across the table +## Part 4 - Run the Function across the table ## A. Use the pmap function to apply the function across each row of arguments and append a pass or fail outcome column to the original table of runs -## Part 5 - (In progress) Turn output table into a table - -create_execute_test_xml <- function(run_list){ - library(PEcAn.DB) - library(dplyr) - library(PEcAn.utils) - library(XML) - library(PEcAn.settings) - - #Read in Table - model_id <- run_list[[1]] - met <- run_list[[2]] - site_id<- run_list[[3]] - start_date<- run_list[[4]] - end_date<- run_list[[5]] - pecan_path<- run_list[[6]] - out.var<- run_list[[7]] - ensemble<- run_list[[8]] - ens_size<- run_list[[9]] - sensitivity<- run_list[[10]] - user_id<- NA - pft_name<- NA - - config.list <-read_web_config(paste0(pecan_path,"/web/config.php")) - bety <- betyConnect(paste0(pecan_path,"/web/config.php")) +## Part 5 - (In progress) Turn output table into a table + +#' .. title/description .. +#' +#' @param model_id Model ID (from `models` table) +#' @param met Meteorology input source (e.g. "CRUNCEP") +#' @param site_id Site ID (from `sites` table) +#' @param start_date Run start date +#' @param end_date Run end date +#' @param pecan_path Path to PEcAn source code. Default is current +#' working directory. +#' @param user_id +#' @param output_folder +#' @param dbfiles_folder +#' @param pft_name +#' @param ensemble_size +#' @param ensemble_variable +#' @param sensitivity +#' @param db_bety_username +#' @param db_bety_password +#' @param db_bety_hostname +#' @param db_bety_driver +#' @return +#' @author Alexey Shiklomanov +create_execute_test_xml <- function(model_id, + met, + site_id, + start_date, + end_date, + dbfiles_folder, + user_id, + output_folder = "batch_test_output", + pecan_path = getwd(), + pft_name = NULL, + ensemble_size = 1, + ensemble_variable = "NPP", + sensitivity = FALSE, + db_bety_username = "bety", + db_bety_password = "bety", + db_bety_hostname = "localhost", + db_bety_driver = "PostgreSQL") { + + php_file <- file.path(pecan_path, "web", "config.php") + config.list <- read_web_config(php_file) + bety <- betyConnect(php_file) con <- bety$con - settings <- list() - - # Info - settings$info$notes <- paste0("Test_Run") - settings$info$userid <- user_id - settings$info$username <- "None" - settings$info$dates <- Sys.Date() + + settings <- list( + info = list(notes = "Test_Run", + userid = user_id, + username = "None", + dates = Sys.Date()) + ) + #Outdir - model.new <- tbl(bety, "models") %>% filter(model_id == id) %>% collect() - outdir_base<-config.list$output_folder - outdir_pre <- paste(model.new$model_name,format(as.Date(start_date), "%Y-%m"), - format(as.Date(end_date), "%Y-%m"), - met,site_id,"test_runs", - sep="_",collapse =NULL) - outdir <- paste0(outdir_base,outdir_pre) - dir.create(outdir) - settings$outdir <- outdir + model.new <- tbl(bety, "models") %>% + filter(model_id == id) %>% + collect() + outdir_pre <- paste( + model.new[["model_name"]], + format(as.Date(start_date), "%Y-%m"), + format(as.Date(end_date), "%Y-%m"), + met, site_id, "test_runs", + sep = "_" + ) + outdir <- file.path(output_folder, outdir_pre) + dir.create(outdir, showWarnings = FALSE, recursive = TRUE) + settings[["outdir"]] <- outdir + #Database BETY - settings$database$bety$user <- config.list$db_bety_username - 
settings$database$bety$password <- config.list$db_bety_password - settings$database$bety$host <- "localhost" - settings$database$bety$dbname <- config.list$db_bety_database - settings$database$bety$driver <- "PostgreSQL" - settings$database$bety$write <- FALSE - #Database dbfiles - settings$database$dbfiles <- config.list$dbfiles_folder + settings[["database"]] <- list( + bety = list(user = db_bety_username, + password = db_bety_password, + host = db_bety_hostname, + dbname = "bety", + driver = db_bety_driver, + write = FALSE), + dbfiles = dbfiles_folder + ) + #PFT - if (is.na(pft_name)){ - pft <- tbl(bety, "pfts") %>% filter(modeltype_id == model.new$modeltype_id) %>% collect() - pft_name <- pft$name[1] + if (is.null(pft_name)){ + # Select the first PFT in the model list. + pft <- tbl(bety, "pfts") %>% + filter(modeltype_id == model.new$modeltype_id) %>% + collect() + pft_name <- pft$name[[1]] } - settings$pfts$pft$name <- pft_name - settings$pfts$pft$constants$num <- 1 + settings[["pfts"]] <- list( + pft = list(name = pft_name, + constants = list(num = 1)) + ) + #Meta Analysis - settings$meta.analysis$iter <- 3000 - settings$meta.analysis$random.effects <- FALSE + settings[["meta.analysis"]] <- list(iter = 3000, random.effects = FALSE) + #Ensemble - if(ensemble){ - settings$ensemble$size <- ens_size - settings$ensemble$variable <- out.var - settings$ensemble$samplingspace$met$method <- "sampling" - settings$ensemble$samplingspace$parameters$method <- "uniform" - }else{ - settings$ensemble$size <- 1 - settings$ensemble$variable <- out.var - settings$ensemble$samplingspace$met$method <- "sampling" - settings$ensemble$samplingspace$parameters$method <- "uniform" - } - #Sensitivity - if(sensitivity){ - settings$sensitivity.analysis$quantiles <- - settings$sensitivity.analysis$quantiles$sigma1 <--2 - settings$sensitivity.analysis$quantiles$sigma2 <--1 - settings$sensitivity.analysis$quantiles$sigma3 <- 1 - settings$sensitivity.analysis$quantiles$sigma4 <- 2 - names(settings$sensitivity.analysis$quantiles) <-c("sigma","sigma","sigma","sigma") + settings[["ensemble"]] <- list( + size = ensemble_size, + variable = ensemble_variable, + samplingspace = list(met = list(method = "sampling"), + parameters = list(method = "uniform")) + ) + + #Sensitivity + if (sensitivity) { + settings[["sensitivity.analysis"]] <- list( + quantiles = list(sigma1 = -2, sigma2 = -1, sigma3 = 1, sigma4 = 2) + ) } + #Model - settings$model$id <- model.new$id - #Worflow - settings$workflow$id <- paste0("Test_run_","_",model.new$model_name) - settings$run$site$id <- site_id - settings$run$site$met.start <- start_date - settings$run$site$met.end <- end_date - settings$run$inputs$met$source <- met - settings$run$inputs$met$output <- model.new$model_name - settings$run$inputs$met$username <- "pecan" - settings$run$start.date <- start_date - settings$run$end.date <- end_date - settings$host$name <-"localhost" - + settings[[c("model", "id")]] <- model.new[["id"]] + + #Workflow + settings[[c("workflow", "id")]] <- paste0("Test_run_","_",model.new$model_name) + settings[["run"]] <- list( + site = list(id = site_id, met.start = start_date, met.end = end_date), + inputs = list(met = list(source = met, output = model.new[["model_name"]], + username = "pecan")), + start.date = start_date, end.date = end_date + ) + settings[[c("host", "name")]] <- "localhost" + #create file and Run - saveXML(listToXml(settings, "pecan"), file=paste0(outdir,"/","pecan.xml")) - file.copy(paste0(config.list$pecan_home,"web/","workflow.R"),to = outdir) + 
saveXML(listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml")) + file.copy(file.path(pecan_path, "web", "workflow.R"), outdir) setwd(outdir) ##Name log file #log <- file("workflow.Rout", open = "wt") #sink(log) #sink(log, type = "message") - + system("./workflow.R 2>&1 | tee workflow.Rout") #source("workflow.R") #sink() } +library(tidyverse) +library(PEcAn.DB) +library(PEcAn.utils) +library(XML) +library(PEcAn.settings) + ##Create Run Args ## Insert your path to base pecan @@ -138,7 +171,6 @@ pecan_path <- "/fs/data3/tonygard/work/pecan" config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php")) bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.php")) con <- bety$con -library(tidyverse) ## Find name of Machine R is running on mach_name <- Sys.info()[[4]] @@ -146,8 +178,8 @@ mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(i ## Find Models #devtools::install_github("pecanproject/pecan", subdir = "api") -model_ids <- tbl(bety, "dbfiles") %>% - filter(machine_id == mach_id) %>% +model_ids <- tbl(bety, "dbfiles") %>% + filter(machine_id == mach_id) %>% filter(container_type == "Model") %>% pull(container_id) @@ -164,11 +196,11 @@ sensitivity <- FALSE ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% - inner_join(tbl(bety, "sitegroups_sites") %>% + inner_join(tbl(bety, "sitegroups_sites") %>% filter(sitegroup_id == 1), by = c("id" = "site_id")) %>% dplyr::select("id.x", "notes", "sitename") %>% - dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% + dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% collect() %>% dplyr::mutate( # Grab years from string within the notes @@ -191,8 +223,19 @@ site_name <- gsub(" ","_",site_id_noinput$sitename) #Create permutations of arg combinations options(scipen = 999) -run_table <- expand.grid(models,met_name,site_id, startdate, enddate, - pecan_path,out.var, ensemble, ens_size, sensitivity, stringsAsFactors = FALSE) +run_table <- expand.grid( + models, + met_name, + site_id, + startdate, + enddate, + pecan_path, + out.var, + ensemble, + ens_size, + sensitivity, + stringsAsFactors = FALSE +) #Execute function to spit out a table with a column of NA or success tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){ From 39f0dcb3b454dc007859aa0b693a4ebffc942516 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 19 May 2019 10:07:04 -0400 Subject: [PATCH 0165/2289] few temporary path fixes --- modules/assim.batch/R/pda.emulator.ms.R | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index c7a8156e5c2..7d0fd2e4603 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -163,6 +163,14 @@ pda.emulator.ms <- function(multi.settings) { workflow.id <- -1 } + ## remote hack for now + ## currently site-level PDA runs on remote but joint and hierarchical runs locally + ## this will change soon (before this PR is finalized) + ## but I'm still developing the code so for now let's change the paths back to local + for(i in seq_along(tmp.settings$pfts)){ + tmp.settings$pfts[[i]]$outdir <- file.path(tmp.settings$outdir, "pft", basename(tmp.settings$pfts[[i]]$outdir)) + } + tmp.settings$modeloutdir <- file.path(tmp.settings$outdir, basename(tmp.settings$modeloutdir)) ## -------------------------------------- Joint 
calibration -------------------------------------------------- if(joint){ # joint - if begin From 50f35fa05fabee217f593a39b1f92a43555a7252 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 19 May 2019 10:54:32 -0400 Subject: [PATCH 0166/2289] prep for hier --- modules/assim.batch/R/pda.emulator.ms.R | 19 ++++++------------- 1 file changed, 6 insertions(+), 13 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 7d0fd2e4603..ecbee979254 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -312,28 +312,21 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings$assim.batch$ensemble.id <- pda.create.ensemble(tmp.settings, con, workflow.id) ## history restart - hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.hbc", + hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.hier", tmp.settings$assim.batch$ensemble.id, ".Rdata")) ## Transform values from non-normal distributions to standard Normal ## it won't do anything if all priors are already normal + ## edit: actually hierarchical sampling may be assuming standard normal, test for this later norm_transform <- norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list) if(!norm_transform$normF){ # means SS values are transformed - prior.all <- norm_transform$prior.all - prior.fn.all <- norm_transform$prior.fn.all + ## Previously emulator was refitted on the standard normal domain + ## Instead I now use original emulators, but switch back and forth between domains - ## get new SS.stack with transformed values - SS.stack <- norm_transform$normSS - - ## re-fit GP on new param space - for(i in seq_along(SS.stack)){ - gp.stack[[i]] <- lapply(SS.stack[[i]], function(x) mlegp::mlegp(X = x[, -ncol(x), drop = FALSE], Z = x[, ncol(x), drop = FALSE], nugget = 0, nugget.known = 1, verbose = 0)) - } - - ## re-define rng - rng <- norm_transform$rng + ## range limits on standard normal domain + rng_stdnorm <- norm_transform$rng[,,1] #all same, maybe return just one from norm_transform_priors ## get new init.list and jmp.list init.list <- norm_transform$init From 7d0f20ad56b10af3c3b803f2ba0c3c698a01b240 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 20 May 2019 15:42:48 -0400 Subject: [PATCH 0167/2289] use new remote.copy.from functionality --- modules/assim.batch/R/pda.utils.R | 24 +++++++++++------------- 1 file changed, 11 insertions(+), 13 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 9002baa62b2..0c4beb5c35d 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1214,19 +1214,17 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ ##' @export sync_pda_remote <- function(multi.settings, ensembleidlist, register = FALSE){ - args <- c("-qa") - args <- c(args, "--include=\"pecan.pda*\"") - args <- c(args, "--include=\"history*\"") - args <- c(args, "--include 'pft/***'") - args <- c(args, "--include=\"mcmc.list*\"") - args <- c(args, "--include=\"ss.pda*\"") - args <- c(args, "--include=\"emulator.pda*\"") - args <- c(args, "--exclude=\"*\"") #exclude everything else - args <- c(args, "-e", paste0("'ssh -o ControlPath=", multi.settings[[1]]$host$tunnel, "'")) - args <- c(args, paste0(multi.settings[[1]]$host$name, ":", dirname(multi.settings[[1]]$host$outdir),"/"), - multi.settings[[1]]$outdir) - res <- system2("rsync", paste(args), stdout = TRUE, stderr = FALSE) - 
if(attr(res,"status") == 255) PEcAn.logger::logger.error("Tunnel closed. Reopen and try again.") + options <- "--include=pecan.pda*" + options <- c(options, "--include=history*") + options <- c(options, "--include=pft/***") + options <- c(options, "--include=mcmc.list*") + options <- c(options, "--include=ss.pda*") + options <- c(options, "--include=emulator.pda*") + options <- c(options, "--exclude=*") #exclude everything else + PEcAn.remote::remote.copy.from(host = multi.settings[[1]]$host, + src = paste0(dirname(multi.settings[[1]]$host$outdir),"/"), + dst = multi.settings[[1]]$outdir, + options = options) # update multi.settings for(ms in seq_along(multi.settings)){ From 0a2f0d66bc12727fc06de0c34235489a534c3d94 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 17 May 2019 20:39:47 -0400 Subject: [PATCH 0168/2289] TEMPORARY CHANGES --- base/utils/R/read_web_config.R | 17 +++++++++++++++-- .../tests/testthat/test.read_web_config.R | 10 ++++++++++ base/workflow/inst/batch_runs.R | 18 ++++++++---------- 3 files changed, 33 insertions(+), 12 deletions(-) create mode 100644 base/utils/tests/testthat/test.read_web_config.R diff --git a/base/utils/R/read_web_config.R b/base/utils/R/read_web_config.R index d5ecc399fea..0b1c0f5706c 100644 --- a/base/utils/R/read_web_config.R +++ b/base/utils/R/read_web_config.R @@ -10,20 +10,28 @@ read_web_config = function(php.config = "../../web/config.php") { ## Read PHP config file for webserver - config <- scan(php.config, what = "character", sep = "\n") + config <- readLines(php.config) config <- config[grep("^\\$", config)] ## find lines that begin with $ (variables) + rxp <- paste0("^\\$([[:graph:]]+)[[:space:]]*", + "=[[:space:]]*(.*?);?(?:[[:space:]]*//+.*)?$") + rxp_matches <- gregexpr(rxp, config, perl = TRUE) + results <- Map(extract_matches, config, rxp_matches) + list_names <- vapply(results, `[[`, character(1), 1, USE.NAMES = FALSE) + list_vals <- vapply(results, `[[`, character(1), 2, USE.NAMES = FALSE) + ## replacements config <- gsub("^\\$", "", config) ## remove leading $ config <- gsub(";.*$", "", config) ## remove ; and everything afterwards config <- sub("false", "FALSE", config, fixed = TRUE) ## Boolean capitalization config <- sub("true", "TRUE", config, fixed = TRUE) ## Boolean capitalization - config <- gsub(pattern = "DIRECTORY_SEPARATOR",replacement = "/",config) + config <- gsub(pattern = "DIRECTORY_SEPARATOR", replacement = "/", config) ## subsetting config <- config[!grepl("exec", config, fixed = TRUE)] ## lines 'exec' fail config <- config[!grepl("dirname", config, fixed = TRUE)] ## lines 'dirname' fail config <- config[!grepl("array", config, fixed = TRUE)] ## lines 'array' fail + ## config <- config[!grepl(":", config, fixed = TRUE)] ## lines with colons fail ##references ref <- grep("$", config, fixed = TRUE) @@ -43,3 +51,8 @@ read_web_config = function(php.config = "../../web/config.php") { return(config.list) } +extract_matches <- function(string, rxp) { + start <- attr(rxp, "capture.start") + len <- attr(rxp, "capture.length") + Map(function(s, l) substring(string, s, s + l - 1), start, len) +} diff --git a/base/utils/tests/testthat/test.read_web_config.R b/base/utils/tests/testthat/test.read_web_config.R new file mode 100644 index 00000000000..87231eb4dea --- /dev/null +++ b/base/utils/tests/testthat/test.read_web_config.R @@ -0,0 +1,10 @@ +context("Read web config") + +pecan_root <- file.path("..", "..") +php_config_example <- file.path(pecan_root, "web", "config.example.php") +php_config_docker <- 
file.path(pecan_root, "docker", "web", "config.docker.php") +stopifnot(file.exists(php_config_example), file.exists(php_config_docker)) + +cfg_example <- read_web_config(php_config_example) +cfg_docker <- read_web_config(php_config_docker) +php.config <- php_config_docker diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index 7098dc81238..8b6c486d325 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -147,15 +147,11 @@ create_execute_test_xml <- function(model_id, #create file and Run saveXML(listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml")) file.copy(file.path(pecan_path, "web", "workflow.R"), outdir) + cwd <- getwd() setwd(outdir) - ##Name log file - #log <- file("workflow.Rout", open = "wt") - #sink(log) - #sink(log, type = "message") - - system("./workflow.R 2>&1 | tee workflow.Rout") - #source("workflow.R") - #sink() + on.exit(setwd(cwd), add = TRUE) + + system("Rscript workflow.R 2>&1 | tee workflow.Rout") } library(tidyverse) @@ -167,8 +163,10 @@ library(PEcAn.settings) ##Create Run Args ## Insert your path to base pecan -pecan_path <- "/fs/data3/tonygard/work/pecan" -config.list <- PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php")) +## pecan_path <- "/fs/data3/tonygard/work/pecan" +pecan_path <- "../.." +php_file <- file.path(pecan_path, "web", "config.php") +config.list <- PEcAn.utils::read_web_config(php_file) bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.php")) con <- bety$con From 9e8b457cd227d7b3c29d6e3f1712c9790b8b3d40 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 22 May 2019 10:05:18 -0400 Subject: [PATCH 0169/2289] UTILS: Refactor read_web_config and add unit test Use more formal regular expressions and parsing, rather than relying on unsafe and unreliable `eval(parse(...))` structure. --- base/utils/R/read_web_config.R | 88 +++++++++---------- base/utils/man/read_web_config.Rd | 25 ++++-- .../tests/testthat/test.read_web_config.R | 23 +++-- 3 files changed, 76 insertions(+), 60 deletions(-) diff --git a/base/utils/R/read_web_config.R b/base/utils/R/read_web_config.R index 0b1c0f5706c..aeef4943413 100644 --- a/base/utils/R/read_web_config.R +++ b/base/utils/R/read_web_config.R @@ -1,58 +1,52 @@ -#' read_web_config +#' Read `config.php` file into an R list #' -#' @author Michael Dietze and Rob Kooper -#' @param php.config Path to `config.php` -#' -#' @return config.list +#' @author Alexey Shiklomanov, Michael Dietze, Rob Kooper +#' @param php.config Path to `config.php` file +#' @param parse Logical. If `TRUE` (default), try to parse numbers and +#' unquote strings. +#' @param expand Logical. If `TRUE` (default), try to perform some +#' variable substitutions. +#' @return Named list of variable-value pairs set in `config.php` #' @export -#' -#' -read_web_config = function(php.config = "../../web/config.php") { +#' @examples +#' # Read Docker configuration and extract the `dbfiles` and output folders. 
+#' docker_config <- read_web_config(file.path("..", "..", "docker", "web", "config.docker.php")) +#' docker_config[["dbfiles_folder"]] +#' docker_config[["output_folder"]] +read_web_config <- function(php.config = "../../web/config.php", + parse = TRUE, + expand = TRUE) { - ## Read PHP config file for webserver config <- readLines(php.config) config <- config[grep("^\\$", config)] ## find lines that begin with $ (variables) - rxp <- paste0("^\\$([[:graph:]]+)[[:space:]]*", + rxp <- paste0("^\\$([[:graph:]]+?)[[:space:]]*", "=[[:space:]]*(.*?);?(?:[[:space:]]*//+.*)?$") - rxp_matches <- gregexpr(rxp, config, perl = TRUE) - results <- Map(extract_matches, config, rxp_matches) - list_names <- vapply(results, `[[`, character(1), 1, USE.NAMES = FALSE) - list_vals <- vapply(results, `[[`, character(1), 2, USE.NAMES = FALSE) - - ## replacements - config <- gsub("^\\$", "", config) ## remove leading $ - config <- gsub(";.*$", "", config) ## remove ; and everything afterwards - config <- sub("false", "FALSE", config, fixed = TRUE) ## Boolean capitalization - config <- sub("true", "TRUE", config, fixed = TRUE) ## Boolean capitalization - config <- gsub(pattern = "DIRECTORY_SEPARATOR", replacement = "/", config) - - ## subsetting - config <- config[!grepl("exec", config, fixed = TRUE)] ## lines 'exec' fail - config <- config[!grepl("dirname", config, fixed = TRUE)] ## lines 'dirname' fail - config <- config[!grepl("array", config, fixed = TRUE)] ## lines 'array' fail - ## config <- config[!grepl(":", config, fixed = TRUE)] ## lines with colons fail - - ##references - ref <- grep("$", config, fixed = TRUE) - if(length(ref) > 0){ - refsplit = strsplit(config[ref],split = " . ",fixed=TRUE)[[1]] - refsplit = sub(pattern = '\"',replacement = "",x = refsplit) - refsplit = sub(pattern = '$',replacement = '\"',refsplit,fixed=TRUE) - config[ref] <- paste0(refsplit,collapse = "") ## lines with variable references fail + rxp_matches <- regexec(rxp, config, perl = TRUE) + results <- regmatches(config, rxp_matches) + list_names <- vapply(results, `[[`, character(1), 2, USE.NAMES = FALSE) + config_list <- lapply(results, `[[`, 3) + names(config_list) <- list_names + + # Convert to numeric if possible + if (parse) { + # Remove surrounding quotes + config_list <- lapply(config_list, gsub, + pattern = "\"(.*?)\"", replacement = "\\1") + + # Try to convert numbers to numeric + config_list <- lapply( + config_list, + function(x) tryCatch(as.numeric(x), warning = function(e) x) + ) } - ## convert to list - config.list <- eval(parse(text = paste("list(", paste0(config, collapse = ","), ")"))) - - ## replacements - config.list <- lapply(X = config.list,FUN = sub,pattern="output_folder",replacement=config.list$output_folder,fixed=TRUE) - - return(config.list) -} + if (expand) { + # Replace $output_folder with its value, and concatenate strings + config_list <- lapply(config_list, gsub, + pattern = "\\$output_folder *\\. 
*", + replacement = config_list[["output_folder"]]) + } -extract_matches <- function(string, rxp) { - start <- attr(rxp, "capture.start") - len <- attr(rxp, "capture.length") - Map(function(s, l) substring(string, s, s + l - 1), start, len) + config_list } diff --git a/base/utils/man/read_web_config.Rd b/base/utils/man/read_web_config.Rd index 67329f3b4c1..a6f206176d2 100644 --- a/base/utils/man/read_web_config.Rd +++ b/base/utils/man/read_web_config.Rd @@ -2,19 +2,32 @@ % Please edit documentation in R/read_web_config.R \name{read_web_config} \alias{read_web_config} -\title{read_web_config} +\title{Read \code{config.php} file into an R list} \usage{ -read_web_config(php.config = "../../web/config.php") +read_web_config(php.config = "../../web/config.php", parse = TRUE, + expand = TRUE) } \arguments{ -\item{php.config}{Path to \code{config.php}} +\item{php.config}{Path to \code{config.php} file} + +\item{parse}{Logical. If \code{TRUE} (default), try to parse numbers and +unquote strings.} + +\item{expand}{Logical. If \code{TRUE} (default), try to perform some +variable substitutions.} } \value{ -config.list +Named list of variable-value pairs set in \code{config.php} } \description{ -read_web_config +Read \code{config.php} file into an R list +} +\examples{ +# Read Docker configuration and extract the `dbfiles` and output folders. +docker_config <- read_web_config(file.path("..", "..", "docker", "web", "config.docker.php")) +docker_config[["dbfiles_folder"]] +docker_config[["output_folder"]] } \author{ -Michael Dietze and Rob Kooper +Alexey Shiklomanov, Michael Dietze, Rob Kooper } diff --git a/base/utils/tests/testthat/test.read_web_config.R b/base/utils/tests/testthat/test.read_web_config.R index 87231eb4dea..25d7df32bc9 100644 --- a/base/utils/tests/testthat/test.read_web_config.R +++ b/base/utils/tests/testthat/test.read_web_config.R @@ -1,10 +1,19 @@ context("Read web config") -pecan_root <- file.path("..", "..") -php_config_example <- file.path(pecan_root, "web", "config.example.php") -php_config_docker <- file.path(pecan_root, "docker", "web", "config.docker.php") -stopifnot(file.exists(php_config_example), file.exists(php_config_docker)) +# `here` package needed to correctly set path relative to package +skip_if_not_installed("here") +pecan_root <- normalizePath(here::here("..", "..")) -cfg_example <- read_web_config(php_config_example) -cfg_docker <- read_web_config(php_config_docker) -php.config <- php_config_docker +test_that("Read example config file", { + php_config_example <- file.path(pecan_root, "web", "config.example.php") + cfg_example <- read_web_config(php_config_example) + expect_equal(cfg_example[["output_folder"]], "/home/carya/output/") + expect_equal(cfg_example[["dbfiles_folder"]], "/home/carya/output//dbfiles") +}) + +test_that("Read docker config file", { + php_config_docker <- file.path(pecan_root, "docker", "web", "config.docker.php") + cfg_docker <- read_web_config(php_config_docker) + expect_equal(cfg_docker[["output_folder"]], "/data/workflows") + expect_equal(cfg_docker[["dbfiles_folder"]], "/data/dbfiles") +}) From 4dea267880788359bb1c338297e6105d77f0116d Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 22 May 2019 10:25:49 -0400 Subject: [PATCH 0170/2289] Minor format cleanup of batch runs script --- base/workflow/inst/batch_runs.R | 34 ++++++++++++++++++--------------- 1 file changed, 19 insertions(+), 15 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index 8b6c486d325..29e24605c72 100644 --- 
a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -164,15 +164,18 @@ library(PEcAn.settings) ## Insert your path to base pecan ## pecan_path <- "/fs/data3/tonygard/work/pecan" -pecan_path <- "../.." +pecan_path <- file.path("..", "..") php_file <- file.path(pecan_path, "web", "config.php") +stopifnot(file.exists(php_file)) config.list <- PEcAn.utils::read_web_config(php_file) -bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.php")) +bety <- PEcAn.DB::betyConnect(php_file) con <- bety$con ## Find name of Machine R is running on mach_name <- Sys.info()[[4]] -mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id) +mach_id <- tbl(bety, "machines") %>% + filter(grepl(mach_name,hostname)) %>% + pull(id) ## Find Models #devtools::install_github("pecanproject/pecan", subdir = "api") @@ -181,19 +184,18 @@ model_ids <- tbl(bety, "dbfiles") %>% filter(container_type == "Model") %>% pull(container_id) - - models <- model_ids -met_name <- c("CRUNCEP","AmerifluxLBL") -startdate<-"2004/01/01" -enddate<-"2004/12/31" +met_name <- c("CRUNCEP", "AmerifluxLBL") +startdate <- "2004-01-01" +enddate <- "2004-12-31" out.var <- "NPP" ensemble <- FALSE ens_size <- 100 sensitivity <- FALSE + ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group -site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% +site_id_noinput <- anti_join(tbl(bety, "sites"), tbl(bety, "inputs")) %>% inner_join(tbl(bety, "sitegroups_sites") %>% filter(sitegroup_id == 1), by = c("id" = "site_id")) %>% @@ -213,11 +215,12 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) ) %>% dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1) %>% - mutate(sitename= gsub(" ","_",.$sitename)) %>% rename_at(id.x = site_id) + mutate(sitename = gsub(" ", "_", sitename)) %>% + rename(id.x = site_id) site_id <- site_id_noinput$id.x -site_name <- gsub(" ","_",site_id_noinput$sitename) +site_name <- gsub(" ", "_", site_id_noinput$sitename) #Create permutations of arg combinations options(scipen = 999) @@ -236,10 +239,11 @@ run_table <- expand.grid( ) #Execute function to spit out a table with a column of NA or success -tab <-run_table %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){ - create_execute_test_xml(list(...)) -},otherwise =NA)) -) +tab <- run_table %>% + mutate(outcome = purrr::pmap(., purrr::possibly(function(...) 
{ + create_execute_test_xml(list(...)) + }, otherwise = NA)) + ) ## print to table tux_tab <- huxtable::hux(tab) From 5dd7588d60bfb4de6d9b174a8365894ea534b420 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 23 May 2019 11:10:52 -0400 Subject: [PATCH 0171/2289] starting points --- modules/assim.batch/R/pda.emulator.ms.R | 47 +++++++++++++++++-------- 1 file changed, 32 insertions(+), 15 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index ecbee979254..e7ee925e9ce 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -115,7 +115,7 @@ pda.emulator.ms <- function(multi.settings) { objects = obj_names) init.list <- need_obj$init.list - rng <- need_obj$rng + rng_ori <- need_obj$rng jmp.list <- need_obj$jmp.list prior.list <- need_obj$prior.list prior.fn.all <- need_obj$prior.fn.all @@ -326,7 +326,7 @@ pda.emulator.ms <- function(multi.settings) { ## Instead I now use original emulators, but switch back and forth between domains ## range limits on standard normal domain - rng_stdnorm <- norm_transform$rng[,,1] #all same, maybe return just one from norm_transform_priors + rng_stdn <- norm_transform$rng[,,1] #all same, maybe return just one from norm_transform_priors ## get new init.list and jmp.list init.list <- norm_transform$init @@ -335,6 +335,21 @@ pda.emulator.ms <- function(multi.settings) { } + ## proposing starting points from knots + mu_site_init <- list() + if(nrow(SS.stack[[1]][[1]]) > nsites*tmp.settings$assim.batch$chain){ + # sample without replacement + sampind <- sample(seq_len(nrow(SS.stack[[1]][[1]])), nsites*tmp.settings$assim.batch$chain) + }else{ + # this would hardly happen, usually we have a lot more knots than nsites*tmp.settings$assim.batch$chain + # but to make this less error-prone sample with replacement if we have fewer, combinations should still be different enough + sampind <- sample(seq_len(nrow(SS.stack[[1]][[1]])), nsites*tmp.settings$assim.batch$chain, replace = TRUE) + } + + for(i in seq_len(tmp.settings$assim.batch$chain)){ + mu_site_init[[i]] <- SS.stack[[1]][[1]][sampind[((i-1) * nsites + 1):((i-1) * nsites + nsites)], 1:nparam] + } + current.step <- "HIERARCHICAL MCMC PREP" save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) @@ -342,8 +357,8 @@ pda.emulator.ms <- function(multi.settings) { ########### hierarchical MCMC function with Gibbs ############## - hier.mcmc <- function(settings, gp.stack, nstack, nmcmc, rng, - mu0, jmp0, nparam, nsites){ + hier.mcmc <- function(settings, gp.stack, nstack, nmcmc, rng_stdn, rng_orig, + mu0, jmp0, mu_site_init, nparam, nsites, prior.fn.all, prior.ind.all){ pos.check <- sapply(settings$assim.batch$inputs, `[[`, "ss.positive") @@ -382,16 +397,12 @@ pda.emulator.ms <- function(multi.settings) { # mu_global ~ MVN (mu_f, P_f) # - mu_f <- unlist(mu0) + mu_f <- rep(0, nparam) P_f <- diag(1, nparam) P_f_inv <- solve(P_f) - # # initialize mu_global (nparam) - repeat{ - mu_global <- mvtnorm::rmvnorm(1, mu_f, P_f) - check.that <- (mu_global > apply(rng,1:2,max)[, 1] & mu_global < apply(rng,1:2,min)[, 2]) - if(all(check.that)) break - } + ## initialize mu_global (nparam) + mu_global <- mu0 ###### (hierarchical) global tau priors # @@ -403,7 +414,7 @@ pda.emulator.ms <- function(multi.settings) { # tau_df <- nparam + 1 - tau_V <- diag(1, nparam) + tau_V <- P_f_inv V_inv <- solve(tau_V) # will be used in gibbs updating # initialize tau_global (nparam x nparam) @@ -412,12 +423,18 @@ pda.emulator.ms <- 
function(multi.settings) { # initialize jcov.arr (jump variances per site) jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) - for(j in seq_len(nsites)) jcov.arr[,,j] <- diag(jmp0) - + for(j in seq_len(nsites)) jcov.arr[,,j] <- diag(jmp0) # initialize mu_site (nsite x nparam) mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) + + mu_global_temp <- sapply(seq_len(nsites), function(x){ + norm.quantiles <- pnorm(mu_global) + orig.vals <- eval(prior.fn.all$qprior[[prior.ind.all[x]]], list(p = norm.quantiles[,x])) + return(orig.vals) + }) + for(ns in 1:nsites){ repeat{ mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean @@ -568,7 +585,7 @@ pda.emulator.ms <- function(multi.settings) { mcmc.out <- parallel::parLapply(cl, seq_len(tmp.settings$assim.batch$chain), function(chain) { hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, nstack = NULL, nmcmc = tmp.settings$assim.batch$iter, rng = rng, - mu0 = init.list[[chain]], jmp0 = jmp.list[[chain]], + mu0 = unlist(init.list[[chain]]), jmp0 = jmp.list[[chain]], nparam = length(prior.ind.all), nsites = nsites) }) From b3d07e04ee101b8e9d2d93d2ce66b1f26f89d7f7 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 23 May 2019 11:48:05 -0400 Subject: [PATCH 0172/2289] retrieving previous jmp as initial guess --- modules/assim.batch/R/pda.emulator.ms.R | 65 +++++++++++++++---------- 1 file changed, 40 insertions(+), 25 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index e7ee925e9ce..4038c12be6e 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -108,14 +108,14 @@ pda.emulator.ms <- function(multi.settings) { # we need some objects that are common to all calibrations obj_names <- c("init.list", "rng", "jmp.list", "prior.fn.all", "prior.ind.all", "llik.fn", "settings", "prior.ind.all.ns", "sf", "prior.list", "n.param.orig", "pname", "prior.ind.orig", - "hyper.pars") + "hyper.pars", "resume.list") need_obj <- load_pda_history(workdir = multi.settings$outdir, ensemble.id = multi.settings[[1]]$assim.batch$ensemble.id, objects = obj_names) init.list <- need_obj$init.list - rng_ori <- need_obj$rng + rng_orig <- need_obj$rng jmp.list <- need_obj$jmp.list prior.list <- need_obj$prior.list prior.fn.all <- need_obj$prior.fn.all @@ -337,6 +337,7 @@ pda.emulator.ms <- function(multi.settings) { ## proposing starting points from knots mu_site_init <- list() + jump_init <- list() if(nrow(SS.stack[[1]][[1]]) > nsites*tmp.settings$assim.batch$chain){ # sample without replacement sampind <- sample(seq_len(nrow(SS.stack[[1]][[1]])), nsites*tmp.settings$assim.batch$chain) @@ -348,6 +349,7 @@ pda.emulator.ms <- function(multi.settings) { for(i in seq_len(tmp.settings$assim.batch$chain)){ mu_site_init[[i]] <- SS.stack[[1]][[1]][sampind[((i-1) * nsites + 1):((i-1) * nsites + nsites)], 1:nparam] + jump_init[[i]] <- need_obj$resume.list[[i]]$jump } current.step <- "HIERARCHICAL MCMC PREP" @@ -358,7 +360,7 @@ pda.emulator.ms <- function(multi.settings) { hier.mcmc <- function(settings, gp.stack, nstack, nmcmc, rng_stdn, rng_orig, - mu0, jmp0, mu_site_init, nparam, nsites, prior.fn.all, prior.ind.all){ + mu0, jmp0, mu_site_init, nparam, nsites, prior.fn.all, prior.ind.all){ pos.check <- sapply(settings$assim.batch$inputs, `[[`, "ss.positive") @@ -390,36 +392,40 @@ pda.emulator.ms <- function(multi.settings) { ###### (hierarchical) global mu priors # - # mu_f : prior 
mean vector - # P_f : prior covariance matrix - # P_f_inv : prior precision matrix + # mu_global_mean : prior mean vector + # mu_global_sigma : prior covariance matrix + # mu_global_tau : prior precision matrix # - # mu_global ~ MVN (mu_f, P_f) - # - - mu_f <- rep(0, nparam) - P_f <- diag(1, nparam) - P_f_inv <- solve(P_f) + # mu_global ~ MVN (mu_global_mean, mu_global_tau) + + ### these are all in the STANDARD NORMAL DOMAIN + # we want MVN for mu_global for conjugacy + # standard normal to avoid singularity (param units may differ by orders of magnitude) + + mu_global_mean <- as.matrix(rep(0, nparam)) + mu_global_sigma <- diag(1, nparam) + mu_global_tau <- solve(mu_global_sigma) + ## initialize mu_global (nparam) - mu_global <- mu0 + mu_global <- as.matrix(unlist(mu0)) ###### (hierarchical) global tau priors # - # tau_df : Wishart degrees of freedom - # tau_V : Wishart scale matrix + # tau_global_df : Wishart degrees of freedom + # tau_global_sigma : Wishart scale matrix # - # tau_global ~ W (tau_df, tau_scale) + # tau_global ~ W (tau_global_df, tau_global_sigma) # sigma_global <- solve(tau_global) # - tau_df <- nparam + 1 - tau_V <- P_f_inv - V_inv <- solve(tau_V) # will be used in gibbs updating + tau_global_df <- nparam + 1 + tau_global_sigma <- diag(1, nparam) + tau_global_tau <- solve(tau_global_sigma) # will be used in gibbs updating # initialize tau_global (nparam x nparam) - tau_global <- rWishart(1, tau_df, tau_V)[,,1] - + tau_global <- rWishart(1, tau_global_df, tau_global_sigma)[,,1] + sigma_global <- solve(tau_global) # initialize jcov.arr (jump variances per site) @@ -583,10 +589,19 @@ pda.emulator.ms <- function(multi.settings) { ## Sample posterior from emulator mcmc.out <- parallel::parLapply(cl, seq_len(tmp.settings$assim.batch$chain), function(chain) { - hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, nstack = NULL, - nmcmc = tmp.settings$assim.batch$iter, rng = rng, - mu0 = unlist(init.list[[chain]]), jmp0 = jmp.list[[chain]], - nparam = length(prior.ind.all), nsites = nsites) + hier.mcmc(settings = tmp.settings, + gp.stack = gp.stack, + nstack = NULL, + nmcmc = 150000, + rng_stdn = rng_stdn, + rng_orig = rng_orig, + mu0 = init.list[[chain]], + jmp0 = jump_init[[chain]], + mu_site_init = mu_site_init[[chain]], + nparam = length(prior.ind.all), + nsites = nsites, + prior.fn.all = prior.fn.all, + prior.ind.all = prior.ind.all) }) From 3994448c441f096c49fdc6f8d3a90ef91d116c63 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 27 May 2019 09:31:16 -0400 Subject: [PATCH 0173/2289] add order warning --- modules/assim.batch/R/pda.utils.R | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 0c4beb5c35d..fdf2a29a3ab 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -66,6 +66,14 @@ pda.settings <- function(settings, params.id = NULL, param.names = NULL, prior.i # An explicit argument overrides whatever is in settings, if anything. # If neither an argument nor a setting is provided, set a default value in settings.
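A quick sanity check on the Wishart hyperparameters defined in the pda.emulator.ms.R hunk above: for tau_global ~ W(df, V), E[tau_global] = df * V, so the `nparam + 1` degrees of freedom used there center the across-site precision near the identity (a later "less informative df" commit relaxes this to `nparam`). A standalone sketch with an invented dimension:

```r
nparam <- 4
draws <- stats::rWishart(5000, df = nparam + 1, Sigma = diag(1, nparam))
round(apply(draws, c(1, 2), mean), 1)  # approximately (nparam + 1) * diag(nparam)
```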
+ # When there is more than one PFT, make sure they are in the same order in PDA tags to avoid index problems + if(length(settings$assim.batch$param.names) > 1){ + # here I assume if a PFT is listed under the PFT tag, we want to constrain at least one of its parameters + non_match <- which(names(settings$assim.batch$param.names) != sapply(settings$pfts,`[[`, "name")) + if(length(non_match) > 0){ + PEcAn.logger::logger.severe("Please make sure the ORDER of the PFT name tags match under <pfts> and <assim.batch> sections in your pecan.xml and try again.") + } + } # Each assignment below includes an explicit type conversion to avoid problems later. # params.id: Either null or an ID used to query for a matrix of MCMC samples later From 097963a09e18732a681a6ed880debf7b65409cc8 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 27 May 2019 09:39:58 -0400 Subject: [PATCH 0174/2289] add nknot warning --- modules/assim.batch/R/pda.utils.R | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index fdf2a29a3ab..b008a43324d 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1127,6 +1127,13 @@ return_multi_site_objects <- function(multi.settings){ ##' @export prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ + # Check the dimensions of the proposed knots and the number of knots requested + # mistakes can happen when the user changes the settings$assim.batch$n.knot only for the first site in the xml + if(settings$assim.batch$n.knot != nrow(multi_site_objects$externalknots[[1]])){ + PEcAn.logger::logger.warn("The number of knots requested and proposed number of knots do not match. Changing settings$assim.batch$n.knot from ", settings$assim.batch$n.knot, "to", nrow(multi_site_objects$externalknots[[1]])) + settings$assim.batch$n.knot <- nrow(multi_site_objects$externalknots[[1]]) + } + # not everyone might be working with workflowid # remote_dir <- paste0(settings$host$folder, "/" , settings$workflow$id) # instead find this directory from remote rundir so that it's consistent From 08c72e645117e818e8c152677e10a58861b5d3da Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 31 May 2019 12:13:02 -0400 Subject: [PATCH 0175/2289] less informative df --- modules/assim.batch/R/pda.emulator.ms.R | 147 +++++++++++++----------- 1 file changed, 81 insertions(+), 66 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 4038c12be6e..182b2db210b 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -128,6 +128,7 @@ pda.emulator.ms <- function(multi.settings) { prior.ind.orig <- need_obj$prior.ind.orig pname <- need_obj$pname hyper.pars <- need_obj$hyper.pars + nparam <- length(prior.ind.all) resume.list <- vector("list", multi.settings[[1]]$assim.batch$chain) @@ -200,7 +201,7 @@ pda.emulator.ms <- function(multi.settings) { mcmc.GP(gp = gp, ## Emulator(s) x0 = init.list[[chain]], ## Initial conditions nmcmc = as.numeric(multi.settings[[1]]$assim.batch$iter), ## Number of iters - rng = rng, ## range + rng = rng_orig, ## range format = "lin", ## "lin"ear vs "log" of LogLikelihood mix = "joint", ## Jump "each" dimension independently or update them "joint"ly jmp0 = jmp.list[[chain]], ## Initial jump size @@ -419,7 +420,7 @@ pda.emulator.ms <- function(multi.settings) { # sigma_global <- solve(tau_global) # - tau_global_df <- nparam + 1 + tau_global_df <- nparam # the least informative choice tau_global_sigma
<- diag(1, nparam) tau_global_tau <- solve(tau_global_sigma) # will be used in gibbs updating @@ -428,26 +429,14 @@ pda.emulator.ms <- function(multi.settings) { sigma_global <- solve(tau_global) # initialize jcov.arr (jump variances per site) - jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) - for(j in seq_len(nsites)) jcov.arr[,,j] <- diag(jmp0) + #jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) + #for(j in seq_len(nsites)) jcov.arr[,,j] <- jmp0 - # initialize mu_site (nsite x nparam) - mu_site_curr <- matrix(NA_real_, nrow = nsites, ncol= nparam) + # prepare mu_site (nsite x nparam) mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) + mu_site_new_stdn <- matrix(NA_real_, nrow = nsites, ncol= nparam) - mu_global_temp <- sapply(seq_len(nsites), function(x){ - norm.quantiles <- pnorm(mu_global) - orig.vals <- eval(prior.fn.all$qprior[[prior.ind.all[x]]], list(p = norm.quantiles[,x])) - return(orig.vals) - }) - - for(ns in 1:nsites){ - repeat{ - mu_site_curr[ns,] <- mvtnorm::rmvnorm(1, mu_global, jcov.arr[,,ns]) # site mean - check.that <- (mu_site_curr[ns,] > rng[, 1, ns] & mu_site_curr[ns,] < rng[, 2, ns]) - if(all(check.that)) break - } - } + mu_site_curr <- mu_site_init # values for each site will be accepted/rejected in themselves currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) @@ -456,97 +445,122 @@ pda.emulator.ms <- function(multi.settings) { currllp <- lapply(seq_len(nsites), function(v) PEcAn.assim.batch::pda.calc.llik.par(settings, nstack[[v]], currSS[,v])) # storage - mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) - # tau_site_samp <- array(NA_real_, c(nmcmc, nsites, nsites)) + mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) + mu_site_samp_stdn <- array(NA_real_, c(nmcmc, nparam, nsites)) # for exploring mu_global_samp <- matrix(NA_real_, nrow = nmcmc, ncol= nparam) tau_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) musite.accept.count <- rep(0, nsites) + ########################## Start MCMC ######################## - for(g in seq_len(nmcmc)){ - - # jump adaptation step - if ((g > 2) && ((g - 1) %% settings$assim.batch$jump$adapt == 0)) { - - # update site level jvars - params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] - #colnames(params.recent) <- names(x0) - jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], params.recent[,,v])) - jcov.arr <- abind::abind(jcov.list, along=3) - musite.accept.count <- rep(0, nsites) # Reset counter - - } - + for(g in 10001:50000){ ######################################## - # update tau_global | mu_global, mu_site + # gibbs update tau_global | mu_global, mu_site # # W(tau_global | mu_global, mu_site) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) # - # tau_global : error precision matrix + # + # using MVN-Wishart conjugacy + # prior hyperparameters: tau_global_df, tau_global_sigma + # posterior hyperparameters: tau_global_df_gibbs, tau_global_sigma_gibbs + # + # update: + # tau_global ~ W(tau_global_df_gibbs, tau_global_sigma_gibbs) + + tau_global_df_gibbs <- tau_global_df + nsites + + # transform from original domain to standard normal + mu_site_curr_stdn <- sapply(seq_len(nparam), function(x){ + orig.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[x]]], list(q = mu_site_curr[,x])) + norm.vals <- qnorm(orig.quantiles) + return(norm.vals) + }) # sum of pairwise deviation products - pairwise_deviation 
<- apply(mu_site_curr, 1, function(r) r - mu_global) + pairwise_deviation <- apply(mu_site_curr_stdn, 1, function(r) r - t(mu_global)) sum_term <- pairwise_deviation %*% t(pairwise_deviation) - tau_sigma <- solve(V_inv + sum_term) + tau_global_sigma_gibbs <- solve(tau_global_tau + sum_term) # update tau - tau_global <- rWishart(1, df = tau_df + nsites, Sigma = tau_sigma)[,,1] # site precision - sigma_global <- solve(tau_global) # site covariance, new prior sigma to be used below for prior prob. calc. - - + tau_global <- rWishart(1, df = tau_global_df_gibbs, Sigma = tau_global_sigma_gibbs)[,,1] # across-site precision + sigma_global <- solve(tau_global) # across-site covariance, to be used below + ######################################## # update mu_global | mu_site, tau_global # + # MVN(mu_global | mu_site, tau_global) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) + # # mu_global ~ MVN(global_mu, global_Sigma) # # mu_global : global parameters # global_mu : precision weighted average between the data (mu_site) and prior mean (mu_f) - # global_Sigma : sum of mu_site and mu_f precision + # global_Sigma : sum of mu_site and mu_f precision # - # MVN(mu_global | mu_site, tau_global) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) + # Dietze, 2017, Eqn 13.6 + # mu_global ~ MVN(solve((nsites * sigma_global) + P_f_inv)) * ((nsites * sigma_global) + P_f_inv * mu_f), + # solve((nsites * sigma_global) + P_f_inv)) + # prior hyperparameters : mu_global_mean, mu_global_sigma + # posterior hyperparameters : mu_global_mean_gibbs, mu_global_sigma_gibbs + # + # update: + # mu_global ~ MVN(mu_global_mean_gibbs, mu_global_sigma_gibbs) + # calculate mu_global_sigma_gibbs from prior hyperparameters and tau_global + mu_global_sigma_gibbs <- solve(mu_global_tau + nsites * tau_global) - - global_Sigma <- solve(P_f + (nsites * sigma_global)) + # Jensen's inequality: take the mean of mu_site_curr, then transform + mu_site_bar <- apply(mu_site_curr, 2, mean) + mu_site_bar_std <- sapply(seq_len(nparam), function(x){ + prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[x]]], list(q = mu_site_bar[x])) + norm.vals <- qnorm(prior.quantiles) + return(norm.vals) + }) + mu_site_bar_std <- as.matrix(mu_site_bar_std) + + # calculate mu_global_mean_gibbs from prior hyperparameters, mu_site_means and tau_global + mu_global_mean_gibbs <- mu_global_sigma_gibbs %*% (mu_global_tau %*% mu_global_mean + (tau_global * nsites) %*% mu_site_bar_std) + + # update mu_global + mu_global <- mvtnorm::rmvnorm(1, mu_global_mean_gibbs, mu_global_sigma_gibbs) # new prior mu to be used below for prior prob. calc. - global_mu <- global_Sigma %*% ((sigma_global %*% colSums(mu_site_curr)) + (P_f %*% mu_f)) - - mu_global <- mvtnorm::rmvnorm(1, global_mu, global_Sigma) # new prior mu to be used below for prior prob. calc. 
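To make the two conjugate updates above testable in isolation, here is a self-contained sketch of one Gibbs cycle on the standard-normal scale. All names, dimensions, and data are invented; only the `mvtnorm` package already used in this function is assumed.

```r
library(mvtnorm)
set.seed(42)
nparam <- 3; nsites <- 8
mu_site <- matrix(rnorm(nsites * nparam), nsites, nparam)  # stand-in for mu_site_curr_stdn

# hyperparameters, mirroring the code above
mu_global_mean <- rep(0, nparam); mu_global_tau <- diag(nparam)
tau_global_df  <- nparam;         tau_global_tau <- diag(nparam)
mu_global <- rep(0, nparam)

# tau_global | mu_global, mu_site : Wishart draw from the conjugate posterior
dev <- sweep(mu_site, 2, mu_global)   # site deviations from the global mean
S   <- crossprod(dev)                 # sum of pairwise deviation products
tau_global <- stats::rWishart(1, tau_global_df + nsites,
                              solve(tau_global_tau + S))[, , 1]

# mu_global | mu_site, tau_global : precision-weighted MVN draw (Dietze 2017, Eqn 13.6)
V <- solve(mu_global_tau + nsites * tau_global)
m <- drop(V %*% (mu_global_tau %*% mu_global_mean +
                 nsites * (tau_global %*% colMeans(mu_site))))
mu_global <- drop(rmvnorm(1, m, V))
```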
- - # site level M-H ######################################## - # propose mu_site - + # propose new mu_site on standard normal domain for(ns in seq_len(nsites)){ repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) - check.that <- (mu_site_new[ns,] > rng[, 1, ns] & mu_site_new[ns,] < rng[, 2, ns]) + mu_site_new_stdn[ns,] <- mvtnorm::rmvnorm(1, mu_global, sigma_global) + check.that <- (mu_site_new_stdn[ns,] > rng_stdn[, 1] & mu_site_new_stdn[ns, ] < rng_stdn[, 2]) if(all(check.that)) break } } + # transform back to original domain + mu_site_new <- sapply(seq_len(nparam), function(x){ + norm.quantiles <- pnorm(mu_site_new_stdn[,x]) + orig.vals <- eval(prior.fn.all$qprior[[prior.ind.all[x]]], list(p = norm.quantiles)) + return(orig.vals) + }) # re-predict current SS - currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) + currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) # calculate posterior currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) # use new priors for calculating prior probability - currPrior <- mvtnorm::dmvnorm(mu_site_curr, mu_global, sigma_global, log = TRUE) + currPrior <- dmvnorm(mu_site_curr_stdn, mu_global, sigma_global, log = TRUE) + #currPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_curr_stdn[v,], mu_s, s_sigma[[v]], log = TRUE))) currPost <- currLL + currPrior - # predict new SS newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,], pos.check)) newSS <- matrix(newSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) @@ -555,20 +569,21 @@ pda.emulator.ms <- function(multi.settings) { newllp <- lapply(seq_len(nsites), function(v) pda.calc.llik.par(settings, nstack[[v]], newSS[,v])) newLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(newSS[,v], llik.fn, newllp[[v]])) # use new priors for calculating prior probability - newPrior <- dmvnorm(mu_site_new, mu_global, sigma_global, log = TRUE) + newPrior <- dmvnorm(mu_site_new_stdn, mu_global, sigma_global, log = TRUE) + #newPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_new_stdn[v,], mu_s, s_sigma[[v]], log = TRUE))) newPost <- newLL + newPrior ar <- is.accepted(currPost, newPost) mu_site_curr[ar, ] <- mu_site_new[ar, ] + mu_site_curr_stdn[ar, ] <- mu_site_new_stdn[ar, ] musite.accept.count <- musite.accept.count + ar - mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)] - # tau_site_samp <- - mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs - tau_global_samp[g,,] <- tau_global # 100% acceptance for gibbs + mu_site_samp_stdn[g, , seq_len(nsites)] <- t(mu_site_curr_stdn)[,seq_len(nsites)] + mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs + tau_global_samp[g, , ] <- tau_global # 100% acceptance for gibbs - if(g %% 500 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") + if(g %% 100 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") } return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp, @@ -596,7 +611,7 @@ pda.emulator.ms <- function(multi.settings) { rng_stdn = rng_stdn, rng_orig = rng_orig, mu0 = init.list[[chain]], - jmp0 = jump_init[[chain]], + #jmp0 = jump_init[[chain]], 
mu_site_init = mu_site_init[[chain]], nparam = length(prior.ind.all), nsites = nsites, From 0357c8a1cfdb9ef7cf2135f799a361aaff3dcc65 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 1 Jun 2019 11:19:30 -0400 Subject: [PATCH 0176/2289] propose at site level use jcov --- modules/assim.batch/R/pda.emulator.ms.R | 70 +++++++++++++++++-------- 1 file changed, 48 insertions(+), 22 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 182b2db210b..6b2016f37fd 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -429,8 +429,8 @@ pda.emulator.ms <- function(multi.settings) { sigma_global <- solve(tau_global) # initialize jcov.arr (jump variances per site) - #jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) - #for(j in seq_len(nsites)) jcov.arr[,,j] <- jmp0 + jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) + for(j in seq_len(nsites)) jcov.arr[,,j] <- jmp0 # prepare mu_site (nsite x nparam) mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) @@ -455,8 +455,20 @@ pda.emulator.ms <- function(multi.settings) { ########################## Start MCMC ######################## - for(g in 10001:50000){ + for(g in 1:nmcmc){ + # jump adaptation step + if ((g > 2) && ((g - 1) %% settings$assim.batch$jump$adapt == 0)) { + + # update site level jvars + params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] + #colnames(params.recent) <- names(x0) + jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], params.recent[,,v])) + jcov.arr <- abind::abind(jcov.list, along=3) + musite.accept.count <- rep(0, nsites) # Reset counter + + } + ######################################## # gibbs update tau_global | mu_global, mu_site @@ -534,21 +546,35 @@ pda.emulator.ms <- function(multi.settings) { # site level M-H ######################################## - # propose new mu_site on standard normal domain - for(ns in seq_len(nsites)){ - repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - mu_site_new_stdn[ns,] <- mvtnorm::rmvnorm(1, mu_global, sigma_global) - check.that <- (mu_site_new_stdn[ns,] > rng_stdn[, 1] & mu_site_new_stdn[ns, ] < rng_stdn[, 2]) - if(all(check.that)) break - } - } + # # propose new mu_site on standard normal domain + # for(ns in seq_len(nsites)){ + # repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation + # mu_site_new_stdn[ns,] <- mvtnorm::rmvnorm(1, mu_global, sigma_global) + # check.that <- (mu_site_new_stdn[ns,] > rng_stdn[, 1] & mu_site_new_stdn[ns, ] < rng_stdn[, 2]) + # if(all(check.that)) break + # } + # } - # transform back to original domain - mu_site_new <- sapply(seq_len(nparam), function(x){ - norm.quantiles <- pnorm(mu_site_new_stdn[,x]) - orig.vals <- eval(prior.fn.all$qprior[[prior.ind.all[x]]], list(p = norm.quantiles)) - return(orig.vals) - }) + # propose new site parameter vectors + for(ns in seq_len(nsites)){ + repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation + mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) + check.that <- (mu_site_new[ns,] > rng_orig[, 1] & mu_site_new[ns, ] < rng_orig[, 2]) + if(all(check.that)) break + } + } + # # transform back to original domain + # mu_site_new <- sapply(seq_len(nparam), function(x){ + # norm.quantiles <- pnorm(mu_site_new_stdn[,x]) + # orig.vals <- eval(prior.fn.all$qprior[[prior.ind.all[x]]], list(p = 
norm.quantiles)) + # return(orig.vals) + # }) + + mu_site_new_stdn <- sapply(seq_len(nparam), function(x){ + orig.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[x]]], list(q = mu_site_new[,x])) + norm.vals <- qnorm(orig.quantiles) + return(norm.vals) + }) # re-predict current SS currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) @@ -558,7 +584,7 @@ pda.emulator.ms <- function(multi.settings) { currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) # use new priors for calculating prior probability currPrior <- dmvnorm(mu_site_curr_stdn, mu_global, sigma_global, log = TRUE) - #currPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_curr_stdn[v,], mu_s, s_sigma[[v]], log = TRUE))) + #currPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_curr_stdn[v,], mu_global, sigma_global, log = TRUE))) currPost <- currLL + currPrior # predict new SS @@ -607,11 +633,11 @@ pda.emulator.ms <- function(multi.settings) { hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, nstack = NULL, - nmcmc = 150000, + nmcmc = tmp.settings$assim.batch$iter, rng_stdn = rng_stdn, rng_orig = rng_orig, mu0 = init.list[[chain]], - #jmp0 = jump_init[[chain]], + jmp0 = jump_init[[chain]], mu_site_init = mu_site_init[[chain]], nparam = length(prior.ind.all), nsites = nsites, @@ -632,12 +658,12 @@ pda.emulator.ms <- function(multi.settings) { mcmc.out2 <- back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out) # Collect global params in their own list and postprocess - mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") # Collect site-level params in their own list and postprocess for(ns in seq_len(nsites)){ - mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_site_samp", ns = ns, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_site_samp", ns = ns, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = paste0("_hierarchical_SL",ns)) } From ebbddfea9dad7bac22608c9d2318eb328a1e1ec3 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 3 Jun 2019 10:35:07 -0400 Subject: [PATCH 0177/2289] generate hierarchical samples in post-processing --- modules/assim.batch/R/pda.utils.R | 55 +++++++++++++++++-------------- 1 file changed, 31 insertions(+), 24 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index b008a43324d..a514101aa58 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -906,33 +906,40 @@ norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.st ##' @export back_transform_posteriors <- function(prior.all, prior.fn.all, prior.ind.all, mcmc.out){ - # check for non-normals - psel <- prior.all[prior.ind.all, 1] != "norm" - norm.check <- all(!psel) # if all are norm do nothing - - if(!norm.check){ - for(i in seq_along(mcmc.out)){ - mu_global_samp <- 
mcmc.out[[i]]$mu_global_samp - mu_site_samp <- mcmc.out[[i]]$mu_site_samp - - mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)), mu_site_samp, along = 3) - for(ms in seq_len(dim(mu_sample_tmp)[3])){ - mcmc.vals <- mu_sample_tmp[,,ms] - stdnorm.quantiles <- pnorm(mcmc.vals) - # counter, because all cols need transforming back - for(ps in seq_along(psel)){ - prior.quantiles <- eval(prior.fn.all$qprior[[prior.ind.all[ps]]], list(p = stdnorm.quantiles[,ps])) - mcmc.vals[,ps] <- prior.quantiles - } - mu_sample_tmp[,,ms] <- mcmc.vals + for(i in seq_along(mcmc.out)){ + mu_global_samp <- mcmc.out[[i]]$mu_global_samp + tau_global_samp <- mcmc.out[[i]]$tau_global_samp + + iter_size <- dim(tau_global_samp)[1] + + sigma_global_samp <- tau_global_samp + for(si in seq_len(iter_size)){ + sigma_global_samp[si,,] <- solve(tau_global_samp[si,,]) + } + + # first calculate hierarchical posteriors from mu_global_samp and tau_global_samp + hierarchical_samp <- mu_global_samp + for(si in seq_len(iter_size)){ + hierarchical_samp[si,] <- mvtnorm::rmvnorm(1, mean = mu_global_samp[si,], sigma = sigma_global_samp[si,,]) + } + + # back transform all parameter values from standard normal to the original domain + mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)), hierarchical_samp, along = 3) + for(ms in seq_len(dim(mu_sample_tmp)[3])){ + mcmc.vals <- mu_sample_tmp[,,ms] + stdnorm.quantiles <- pnorm(mcmc.vals) + # counter, because all cols need transforming back + for(ps in seq_along(prior.ind.all)){ + prior.quantiles <- eval(prior.fn.all$qprior[[prior.ind.all[ps]]], list(p = stdnorm.quantiles[,ps])) + mcmc.vals[,ps] <- prior.quantiles } - - - mcmc.out[[i]]$mu_global_samp <- mu_sample_tmp[,,1] - mcmc.out[[i]]$mu_site_samp <- mu_sample_tmp[,,-1] + mu_sample_tmp[,,ms] <- mcmc.vals } + + mcmc.out[[i]]$mu_global_samp <- mu_sample_tmp[,,1] + mcmc.out[[i]]$hierarchical_samp <- mu_sample_tmp[,,-1] } - + return(mcmc.out) } From 308dee85a74fd71845efa77dde43ef2d1969a9f2 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 3 Jun 2019 11:30:55 -0400 Subject: [PATCH 0178/2289] site-level params are not std normal anymore --- modules/assim.batch/R/pda.postprocess.R | 15 +++------------ modules/assim.batch/R/pda.utils.R | 3 ++- 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index 732973ea42f..ce79bff077c 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -276,11 +276,7 @@ pda.sort.params <- function(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, for (c in seq_along(mcmc.out)) { - if(sub.sample == "mu_global_samp"){ - m <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]]), ncol = length(prior.ind.all.ns)) - }else if(sub.sample == "mu_site_samp"){ - m <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]][,,ns]), ncol = length(prior.ind.all.ns)) - } + m <- matrix(NA, nrow = nrow(mcmc.out[[c]][[sub.sample]]), ncol = length(prior.ind.all.ns)) # TODO: make this sf compatible for multi site if(!is.null(sf)){ @@ -308,13 +304,8 @@ pda.sort.params <- function(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, }else{ - if(sub.sample == "mu_global_samp"){ - m[, i] <- mcmc.out[[c]][[sub.sample]][, idx] - }else if(sub.sample == "mu_site_samp"){ - m[, i] <- mcmc.out[[c]][[sub.sample]][, idx, ns] - } - - + m[, i] <- mcmc.out[[c]][[sub.sample]][, idx] + } } diff --git a/modules/assim.batch/R/pda.utils.R 
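The pnorm/qprior round trip that `back_transform_posteriors()` performs above is plain quantile mapping between the standard-normal scale and the prior scale. A toy version with an invented Gamma prior:

```r
z <- rnorm(1000)                              # draws on the standard-normal scale
x <- qgamma(pnorm(z), shape = 2, rate = 0.1)  # the same draws on the Gamma prior scale
all.equal(z, qnorm(pgamma(x, shape = 2, rate = 0.1)))  # TRUE: the mapping is invertible
```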
b/modules/assim.batch/R/pda.utils.R index a514101aa58..80dac59a52e 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -924,7 +924,8 @@ back_transform_posteriors <- function(prior.all, prior.fn.all, prior.ind.all, mc } # back transform all parameter values from standard normal to the original domain - mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)), hierarchical_samp, along = 3) + mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)), + array(hierarchical_samp, dim = c(dim(hierarchical_samp), 1)), along = 3) for(ms in seq_len(dim(mu_sample_tmp)[3])){ mcmc.vals <- mu_sample_tmp[,,ms] stdnorm.quantiles <- pnorm(mcmc.vals) From 2422132b1b251f35949bf76df8f35f408960f292 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 3 Jun 2019 11:47:02 -0400 Subject: [PATCH 0179/2289] postprocess hierarchical samples --- modules/assim.batch/R/pda.emulator.ms.R | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 6b2016f37fd..71dca7d0647 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -331,8 +331,6 @@ pda.emulator.ms <- function(multi.settings) { ## get new init.list and jmp.list init.list <- norm_transform$init - jmp.list <- norm_transform$jmp - } @@ -446,7 +444,6 @@ pda.emulator.ms <- function(multi.settings) { # storage mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) - mu_site_samp_stdn <- array(NA_real_, c(nmcmc, nparam, nsites)) # for exploring mu_global_samp <- matrix(NA_real_, nrow = nmcmc, ncol= nparam) tau_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) @@ -605,11 +602,10 @@ pda.emulator.ms <- function(multi.settings) { musite.accept.count <- musite.accept.count + ar mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)] - mu_site_samp_stdn[g, , seq_len(nsites)] <- t(mu_site_curr_stdn)[,seq_len(nsites)] mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs tau_global_samp[g, , ] <- tau_global # 100% acceptance for gibbs - if(g %% 100 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") + if(g %% 500 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") } return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp, @@ -659,6 +655,9 @@ pda.emulator.ms <- function(multi.settings) { # Collect global params in their own list and postprocess mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical_mean") + + mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "hierarchical_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") # Collect site-level params in their own list and postprocess From cd491ef7158d8a182bf955e7cb0d7a8057e20808 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 3 Jun 2019 14:10:21 -0400 Subject: [PATCH 0180/2289] dont insert global means in the DB --- modules/assim.batch/R/pda.emulator.ms.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R 
b/modules/assim.batch/R/pda.emulator.ms.R index 71dca7d0647..1ba90026f4c 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -655,7 +655,8 @@ pda.emulator.ms <- function(multi.settings) { # Collect global params in their own list and postprocess mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) - tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical_mean") + # processing these just for further analysis later, but con=NULL because these samples shouldn't be used for new runs later + tmp.settings <- pda.postprocess(tmp.settings, con = NULL, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical_mean") mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "hierarchical_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") From 605ca82b2349ecf9695a49225cd5876e6859519a Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 4 Jun 2019 16:00:45 -0400 Subject: [PATCH 0181/2289] lower boundary limit in unif check --- modules/meta.analysis/R/approx.posterior.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/approx.posterior.R b/modules/meta.analysis/R/approx.posterior.R index bbba0327b94..45af227fd24 100644 --- a/modules/meta.analysis/R/approx.posterior.R +++ b/modules/meta.analysis/R/approx.posterior.R @@ -75,7 +75,7 @@ approx.posterior <- function(trait.mcmc, priors, trait.data = NULL, outdir = NUL } posteriors[trait, "parama"] <- fit$estimate[1] posteriors[trait, "paramb"] <- fit$estimate[2] - } else if (pdist %in% zerobound | (pdist == "unif" & pparm[1] > 0)) { + } else if (pdist %in% zerobound | (pdist == "unif" & pparm[1] >= 0)) { dist.names <- c("exp", "lnorm", "weibull", "norm") fit <- list() fit[[1]] <- try(suppressWarnings(fitdistr(dat, "exponential")), silent = TRUE) From c38d154db672c532b7af79e99d095a5bda610f40 Mon Sep 17 00:00:00 2001 From: PEcAn Demo User Date: Fri, 7 Jun 2019 01:16:12 -0500 Subject: [PATCH 0182/2289] add options --- shiny/workflowPlot/server.R | 3 +++ 1 file changed, 3 insertions(+) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 75e4bdc002e..1b4e5e7d61a 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -33,6 +33,9 @@ lapply(c( "shiny", # Maximum size of file allowed to be uploaded: 100MB options(shiny.maxRequestSize=100*1024^2) +options(shiny.port = 6438) +options(shiny.launch.browser = 'FALSE') + # Define server logic server <- shinyServer(function(input, output, session) { bety <- betyConnect() From eba0b51ce9719cf78a3ff9d3ddb44ebd1712a588 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 7 Jun 2019 01:31:44 -0500 Subject: [PATCH 0183/2289] update --- shiny/workflowPlot/server.R | 3 ++- shiny/workflowPlot/ui.R | 3 +++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 1b4e5e7d61a..80d783953b6 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -20,7 +20,8 @@ lapply(c( "shiny", "ncdf4", "scales", "lubridate", - "shinythemes" + "shinythemes", + "shinytoastr" ),function(pkg){ if (!(pkg %in% installed.packages()[,1])){ install.packages(pkg) diff --git 
a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 509b66e93f4..bf1a0c055bf 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -3,6 +3,7 @@ library(plotly) library(shinythemes) library(knitr) library(shinyjs) +library(shinytoastr) source("ui_utils.R", local = TRUE) @@ -10,6 +11,8 @@ source("ui_utils.R", local = TRUE) ui <- fluidPage(theme = shinytheme("simplex"), # Initializing shinyJs useShinyjs(), + # Initializing shinytoastr + useToastr(), # Adding CSS to head tags$head( tags$link(rel = "stylesheet", type = "text/css", href = "style.css") From 3f39ac895e6572400715a86c0f25a124a25c4306 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 7 Jun 2019 08:32:14 -0500 Subject: [PATCH 0184/2289] add progress bar --- shiny/workflowPlot/server.R | 5 +- .../server_files/benchmarking_server.R | 245 +++++++++++------- .../server_files/model_data_plots_server.R | 116 ++++++--- .../server_files/model_plots_server.R | 80 ++++-- .../server_files/select_data_server.R | 79 +++--- 5 files changed, 334 insertions(+), 191 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 80d783953b6..4bceb0867c7 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -34,8 +34,9 @@ lapply(c( "shiny", # Maximum size of file allowed to be uploaded: 100MB options(shiny.maxRequestSize=100*1024^2) -options(shiny.port = 6438) -options(shiny.launch.browser = 'FALSE') +# Port forwarding +#options(shiny.port = 6438) +#options(shiny.launch.browser = 'FALSE') # Define server logic server <- shinyServer(function(input, output, session) { diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 6e8e3204c61..806aaf42f7e 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -50,10 +50,23 @@ observeEvent(input$load_model,{ # When button to register run is clicked, create.BRR is run and the button is removed. 
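Every server hunk below applies the same idiom: wrap the work in `tryCatch()`, report progress with `withProgress()`/`incProgress()`, and signal the outcome with shinytoastr. A condensed sketch of that pattern; the helper name `with_feedback` is invented, and it must be called from inside a Shiny server:

```r
library(shiny)
library(shinytoastr)

with_feedback <- function(expr, success_msg) {
  tryCatch({
    withProgress(message = "Calculation in progress",
                 detail = "This may take a while...", value = 0, {
      result <- force(expr)  # finer-grained incProgress() calls belong in expr
      incProgress(1)
      result
    })
    toastr_success(success_msg)
  },
  error = function(e) toastr_error(title = "Error", conditionMessage(e)))
}
```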
observeEvent(input$create_bm,{ - bm$BRR <- PEcAn.benchmark::create_BRR(bm$ens_wf, con = bety$con) - bm$brr_message <- sprintf("This run has been successfully registered as a reference run (id = %.0f)", bm$BRR$reference_run_id) - bm$button_BRR <- FALSE - bm$ready <- bm$ready + 1 + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + bm$BRR <- PEcAn.benchmark::create_BRR(bm$ens_wf, con = bety$con) + incProgress( 10/ 15) + bm$brr_message <- sprintf("This run has been successfully registered as a reference run (id = %.0f)", bm$BRR$reference_run_id) + bm$button_BRR <- FALSE + bm$ready <- bm$ready + 1 + incProgress(5/15) + }) + #Signaling the success of the operation + toastr_success("Registered reference run") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) observeEvent({ @@ -188,59 +201,78 @@ observeEvent({ }, ignoreNULL = FALSE) observeEvent(input$calc_bm,{ - req(input$all_input_id) - req(input$all_site_id) - bm$calc_bm_message <- sprintf("Setting up benchmarks") - output$reportvars <- renderText(paste(bm$bm_vars, seq_along(bm$bm_vars))) - output$reportmetrics <- renderText(paste(bm$bm_metrics)) - - inputs_df <- getInputs(bety,c(input$all_site_id)) %>% - dplyr::filter(input_selection_list == input$all_input_id) - output$inputs_df_table <- renderTable(inputs_df) - - config.list <- PEcAn.utils::read_web_config("../../web/config.php") - output$config_list_table <- renderTable(as.data.frame.list(config.list)) - - bm$bm_settings$info <- list(userid = 1000000003) # This is my user id. I have no idea how to get people to log in to their accounts through the web interface and right now the benchmarking code has sections dependent on user id - I will fix this. 
- bm$bm_settings$database <- list( - bety = list( - user = config.list$db_bety_username, - password = config.list$db_bety_password, - host = config.list$db_bety_hostname, - dbname = config.list$db_bety_database, - driver = config.list$db_bety_type, - write = TRUE - ), - dbfiles = config.list$dbfiles_folder - ) - bm$bm_settings$benchmarking <- list( - ensemble_id = bm$ens_wf$ensemble_id, - new_run = FALSE - ) - - for(i in seq_along(bm$bm_vars)){ - benchmark <- list( - input_id = inputs_df$input_id, - variable_id = bm$bm_vars[i], - site_id = inputs_df$site_id, - metrics = list() - ) - for(j in seq_along(bm$bm_metrics)){ - benchmark$metrics = append(benchmark$metrics, list(metric_id = bm$bm_metrics[j])) - } - bm$bm_settings$benchmarking <- append(bm$bm_settings$benchmarking,list(benchmark = benchmark)) - } - - # output$calc_bm_button <- renderUI({}) - output$print_bm_settings <- renderPrint(bm$bm_settings) - - basePath <- dplyr::tbl(bety, 'workflows') %>% dplyr::filter(id %in% bm$ens_wf$workflow_id) %>% dplyr::pull(folder) - - settings_path <- file.path(basePath, "pecan.BENCH.xml") - saveXML(PEcAn.settings::listToXml(bm$bm_settings,"pecan"), file = settings_path) - bm$settings_path <- settings_path + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + req(input$all_input_id) + req(input$all_site_id) + bm$calc_bm_message <- sprintf("Setting up benchmarks") + output$reportvars <- renderText(paste(bm$bm_vars, seq_along(bm$bm_vars))) + output$reportmetrics <- renderText(paste(bm$bm_metrics)) + incProgress(1 / 15) + + inputs_df <- getInputs(bety,c(input$all_site_id)) %>% + dplyr::filter(input_selection_list == input$all_input_id) + output$inputs_df_table <- renderTable(inputs_df) + incProgress(1 / 15) + + config.list <- PEcAn.utils::read_web_config("../../web/config.php") + output$config_list_table <- renderTable(as.data.frame.list(config.list)) + incProgress(1 / 15) + + bm$bm_settings$info <- list(userid = 1000000003) # This is my user id. I have no idea how to get people to log in to their accounts through the web interface and right now the benchmarking code has sections dependent on user id - I will fix this. 
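In the re-added block below, the assembled settings list is serialized with `PEcAn.settings::listToXml()` and written out via `saveXML()`. A minimal standalone sketch of that serialization step, with invented values and a temp path:

```r
s <- list(benchmarking = list(ensemble_id = 99, new_run = FALSE))
XML::saveXML(PEcAn.settings::listToXml(s, "pecan"),
             file = file.path(tempdir(), "pecan.BENCH.xml"))
```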
+ bm$bm_settings$database <- list( + bety = list( + user = config.list$db_bety_username, + password = config.list$db_bety_password, + host = config.list$db_bety_hostname, + dbname = config.list$db_bety_database, + driver = config.list$db_bety_type, + write = TRUE + ), + dbfiles = config.list$dbfiles_folder + ) + bm$bm_settings$benchmarking <- list( + ensemble_id = bm$ens_wf$ensemble_id, + new_run = FALSE + ) + incProgress(3 / 15) + + for(i in seq_along(bm$bm_vars)){ + benchmark <- list( + input_id = inputs_df$input_id, + variable_id = bm$bm_vars[i], + site_id = inputs_df$site_id, + metrics = list() + ) + for(j in seq_along(bm$bm_metrics)){ + benchmark$metrics = append(benchmark$metrics, list(metric_id = bm$bm_metrics[j])) + } + bm$bm_settings$benchmarking <- append(bm$bm_settings$benchmarking,list(benchmark = benchmark)) + } + incProgress(6 / 15) + + # output$calc_bm_button <- renderUI({}) + output$print_bm_settings <- renderPrint(bm$bm_settings) + incProgress(1 / 15) + + basePath <- dplyr::tbl(bety, 'workflows') %>% dplyr::filter(id %in% bm$ens_wf$workflow_id) %>% dplyr::pull(folder) + + settings_path <- file.path(basePath, "pecan.BENCH.xml") + saveXML(PEcAn.settings::listToXml(bm$bm_settings,"pecan"), file = settings_path) + bm$settings_path <- settings_path + + bm$calc_bm_message <- sprintf("Benchmarking settings have been saved here: %s", bm$settings_path) + incProgress(2 / 15) + }) + #Signaling the success of the operation + toastr_success("Setup benchmarks") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) - bm$calc_bm_message <- sprintf("Benchmarking settings have been saved here: %s", bm$settings_path) ############################################################################## # Run the benchmarking functions @@ -281,43 +313,76 @@ observeEvent(bm$results_message,{ }) observeEvent(bm$load_results,{ - if(bm$load_results > 0){ - load(file.path(bm$ens_wf$folder,"benchmarking",bm$input$input_id,"benchmarking.output.Rdata")) - bm$bench.results <- result.out$bench.results - bm$aligned.dat <- result.out$aligned.dat - output$results_table <- DT::renderDataTable(DT::datatable(bm$bench.results)) - plots_used <- grep("plot", result.out$bench.results$metric) - if(length(plots_used) > 0){ - plot_list <- apply( - result.out$bench.results[plots_used,c("variable", "metric")], - 1, paste, collapse = " ") - selection <- as.list(as.numeric(names(plot_list))) - names(selection) <- as.vector(plot_list) - output$bm_plots <- renderUI({ - selectInput("bench_plot", "Benchmark Plot", multiple = FALSE, - choices = selection) - }) - } - } + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + if(bm$load_results > 0){ + load(file.path(bm$ens_wf$folder,"benchmarking",bm$input$input_id,"benchmarking.output.Rdata")) + incProgress(1/3) + + bm$bench.results <- result.out$bench.results + bm$aligned.dat <- result.out$aligned.dat + output$results_table <- DT::renderDataTable(DT::datatable(bm$bench.results)) + plots_used <- grep("plot", result.out$bench.results$metric) + incProgress(1/3) + + if(length(plots_used) > 0){ + plot_list <- apply( + result.out$bench.results[plots_used,c("variable", "metric")], + 1, paste, collapse = " ") + selection <- as.list(as.numeric(names(plot_list))) + names(selection) <- as.vector(plot_list) + output$bm_plots <- renderUI({ + selectInput("bench_plot", "Benchmark Plot", multiple = FALSE, + choices = selection) + }) + } + incProgress(1/3) + } + incProgress(1) + }) + #Signaling the 
success of the operation + toastr_success("Calculated Scores") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) observeEvent(input$bench_plot,{ - var <- bm$bench.results[input$bench_plot,"variable"] - metric_dat = bm$aligned.dat[[var]] - names(metric_dat)[grep("[.]m", names(metric_dat))] <- "model" - names(metric_dat)[grep("[.]o", names(metric_dat))] <- "obvs" - names(metric_dat)[grep("posix", names(metric_dat))] <- "time" - fcn <- get(paste0("metric_",bm$bench.results[input$bench_plot,"metric"]), asNamespace("PEcAn.benchmark")) - # fcn <- paste0("metric_",bm$bench.results[input$bench_plot,"metric"]) - args <- list( - metric_dat = metric_dat, - var = var, - filename = NA, - draw.plot = TRUE - ) - p <- do.call(fcn, args) - output$bmPlot <- renderPlotly({ - plotly::ggplotly(p) + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + var <- bm$bench.results[input$bench_plot,"variable"] + metric_dat = bm$aligned.dat[[var]] + names(metric_dat)[grep("[.]m", names(metric_dat))] <- "model" + names(metric_dat)[grep("[.]o", names(metric_dat))] <- "obvs" + names(metric_dat)[grep("posix", names(metric_dat))] <- "time" + incProgress(2 / 15) + + fcn <- get(paste0("metric_",bm$bench.results[input$bench_plot,"metric"]), asNamespace("PEcAn.benchmark")) + # fcn <- paste0("metric_",bm$bench.results[input$bench_plot,"metric"]) + args <- list( + metric_dat = metric_dat, + var = var, + filename = NA, + draw.plot = TRUE + ) + p <- do.call(fcn, args) + incProgress(9 / 15) + + output$bmPlot <- renderPlotly({ + plotly::ggplotly(p) + }) + incProgress(4 / 15) + }) + #Signaling the success of the operation + toastr_success("Generated Plot") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) }) }) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index 79ea31355ed..48f6f30fa24 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -1,6 +1,6 @@ # Renders ggplotly -output$modelDataPlot <- renderPlotly({ +output$modelDataPlotStatic <- renderPlotly({ validate( need(length(input$all_workflow_id) == 1, "Select only ONE workflow ID"), need(length(input$all_run_id) == 1, "Select only ONE run ID"), @@ -12,6 +12,7 @@ output$modelDataPlot <- renderPlotly({ plt <- ggplot(data.frame(x = 0, y = 0), aes(x,y)) + annotate("text", x = 0, y = 0, label = "You are ready to plot!", size = 10, color = "grey") + ggplotly(plt) }) # Update units every time a variable is selected @@ -81,58 +82,87 @@ observeEvent(input$ex_plot_modeldata,{ plt <- plt + geom_smooth(n=input$smooth_n_modeldata) ply <- ggplotly(plt) - }) }) + output$modelDataPlotStatic <- renderPlotly({ input$ex_plot_modeldata isolate({ - - var = input$var_name_modeldata - - model_data <- dplyr::filter(load.model(), var_name == var) - - updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) - title <- unique(model_data$title) - xlab <- unique(model_data$xlab) - ylab <- unique(model_data$ylab) - - model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) - external_data <- load.model.data() - aligned_data = PEcAn.benchmark::align_data( - model.calc = model_data, obvs.calc = external_data, - var = var, align_method = "mean_over_larger_timestep") %>% - dplyr::select(everything(), - model = matches("[.]m"), - observations = matches("[.]o"), - Date = 
posix) - - print(head(aligned_data)) - # Melt dataframe to plot two types of columns together - aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) - - unit <- ylab - if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ - aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) - ylab <- input$units_modeldata - } - - - data_geom <- switch(input$plotType_modeldata, point = geom_point, line = geom_line) - - plt <- ggplot(aligned_data, aes(x=Date, y=value, color=variable)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_modeldata) - ply <- ggplotly(plt) - ply <- plotly::config(ply, collaborate = F, doubleClick = F, displayModeBar = F, staticPlot = T) + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', value = 0, { + + var = input$var_name_modeldata + + model_data <- dplyr::filter(load.model(), var_name == var) + + updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) + title <- unique(model_data$title) + xlab <- unique(model_data$xlab) + ylab <- unique(model_data$ylab) + incProgress(3/15) + + model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) + external_data <- load.model.data() + incProgress(4/15) + + aligned_data = PEcAn.benchmark::align_data( + model.calc = model_data, obvs.calc = external_data, + var = var, align_method = "mean_over_larger_timestep") %>% + dplyr::select(everything(), + model = matches("[.]m"), + observations = matches("[.]o"), + Date = posix) + + print(head(aligned_data)) + # Melt dataframe to plot two types of columns together + aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) + incProgress(4/15) + + unit <- ylab + if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ + aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) + ylab <- input$units_modeldata + } + + + data_geom <- switch(input$plotType_modeldata, point = geom_point, line = geom_line) + + plt <- ggplot(aligned_data, aes(x=Date, y=value, color=variable)) + plt <- plt + data_geom() + plt <- plt + labs(title=title, x=xlab, y=ylab) + plt <- plt + geom_smooth(n=input$smooth_n_modeldata) + ply <- ggplotly(plt) + ply <- plotly::config(ply, collaborate = F, doubleClick = F, displayModeBar = F, staticPlot = T) + incProgress(4/15) + }) + #Signaling the success of the operation + toastr_success("Genearated plots") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) + ply }) }) }) observeEvent(input$model_data_toggle_plot,{ - toggleElement("model_data_plot_interactive") - toggleElement("model_data_plot_static") + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + toggleElement("model_data_plot_static") + incProgress(7 / 15) + toggleElement("model_data_plot_interactive") + incProgress(8 / 15) + }) + #Signaling the success of the operation + toastr_success("Toggled plots") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index e98be8fc413..cb42866c687 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -1,6 +1,6 @@ # Renders ggplotly 
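Both plot servers touched in this patch share one recipe: filter the loaded output to a single variable, convert units when udunits2 confirms the pair is compatible, build the ggplot, and hand it to ggplotly. A self-contained sketch of that recipe follows; the data frame and both unit strings are made up for illustration and are not values from the app.

```
library(ggplot2)
library(plotly)

# Hypothetical stand-in for one variable of loaded model output
df <- data.frame(
  dates  = seq(as.Date("2005-01-01"), by = "month", length.out = 24),
  vals   = rnorm(24, mean = 10),
  run_id = factor(1)
)

ylab        <- "kg m-2" # unit stored with the variable (assumed)
target_unit <- "g m-2"  # unit picked in the UI (assumed)

# Convert only when udunits2 confirms the two units are compatible
if (target_unit != ylab && udunits2::ud.are.convertible(ylab, target_unit)) {
  df$vals <- udunits2::ud.convert(df$vals, ylab, target_unit)
  ylab    <- target_unit
}

plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) +
  geom_line() +
  labs(x = "Date", y = ylab)
ggplotly(plt)
```

The handlers below differ only in where the data comes from and in which inputs drive the geom, the smoother, and the unit choice.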
-output$modelPlot <- renderPlotly({ +output$modelPlotStatic <- renderPlotly({ validate( need(input$all_workflow_id, 'Select workflow id'), need(input$all_run_id, 'Select Run id'), @@ -10,6 +10,7 @@ output$modelPlot <- renderPlotly({ plt <- ggplot(data.frame(x = 0, y = 0), aes(x,y)) + annotate("text", x = 0, y = 0, label = "Ready to plot!", size = 10, color = "grey") + ggplotly(plt) }) # Update units every time a variable is selected @@ -69,35 +70,64 @@ observeEvent(input$ex_plot_model,{ output$modelPlotStatic <- renderPlotly({ input$ex_plot_model isolate({ - df <- dplyr::filter(load.model(), var_name == input$var_name_model) - - updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) - - title <- unique(df$title) - xlab <- unique(df$xlab) - ylab <- unique(df$ylab) - - unit <- ylab - if(input$units_model != unit & udunits2::ud.are.convertible(unit, input$units_model)){ - df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) - ylab <- input$units_model - } - - data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) - - plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_model) - ply <- ggplotly(plt) - ply <- plotly::config(ply, collaborate = F, doubleClick = F, displayModeBar = F, staticPlot = T) + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', value = 0, { + df <- dplyr::filter(load.model(), var_name == input$var_name_model) + incProgress(2/15) + + updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) + incProgress(2/15) + + title <- unique(df$title) + xlab <- unique(df$xlab) + ylab <- unique(df$ylab) + + unit <- ylab + if(input$units_model != unit & udunits2::ud.are.convertible(unit, input$units_model)){ + df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) + ylab <- input$units_model + } + incProgress(2/15) + + data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) + + plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) + plt <- plt + data_geom() + plt <- plt + labs(title=title, x=xlab, y=ylab) + plt <- plt + geom_smooth(n=input$smooth_n_model) + ply <- ggplotly(plt) + ply <- plotly::config(ply, collaborate = F, doubleClick = F, displayModeBar = F, staticPlot = T) + incProgress(9/15) + }) + #Signaling the success of the operation + toastr_success("Generated plots") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) + + ply }) }) }) observeEvent(input$model_toggle_plot,{ - toggleElement("model_plot_static") - toggleElement("model_plot_interactive") + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + toggleElement("model_plot_static") + incProgress(7 / 15) + toggleElement("model_plot_interactive") + incProgress(8 / 15) + }) + #Signaling the success of the operation + toastr_success("Toggled plots") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) # masterDF <- loadNewData() diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 9028ec85ac5..355979f81ff 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -1,36 +1,53 @@ observeEvent(input$load_model,{ - req(input$all_run_id) - - df <- load.model() - # 
output$results_table <- DT::renderDataTable(DT::datatable(head(masterDF))) - - ids_DF <- parse_ids_from_input_runID(input$all_run_id) - README.text <- c() - - for(i in seq(nrow(ids_DF))){ + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + + req(input$all_run_id) + incProgress(1 / 15) + + df <- load.model() + # output$results_table <- DT::renderDataTable(DT::datatable(head(masterDF))) + incProgress(10 / 15) + + ids_DF <- parse_ids_from_input_runID(input$all_run_id) + README.text <- c() + + + for(i in seq(nrow(ids_DF))){ + + dfsub <- df %>% filter(run_id == ids_DF$runID[i]) + + diff.m <- diff(dfsub$dates) + mode.m <- diff.m[which.max(tabulate(match(unique(diff.m), diff.m)))] + diff_units.m = units(mode.m) + + diff_message <- sprintf("timestep: %.2f %s", mode.m, diff_units.m) + wf.folder <- workflow(bety, ids_DF$wID[i]) %>% collect() %>% pull(folder) + + README.text <- c(README.text, + paste("SELECTION",i), + "============", + readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt")), + diff_message, + "" + ) + } + + + output$README <- renderUI({HTML(paste(README.text, collapse = '
<br/>'))})
+
+                    output$dim_message <- renderText({sprintf("This data has %.0f rows, think about skipping exploratory plots if this is a large number...", dim(df)[1])})
+                    incProgress(4 / 15)
+                  })
-  dfsub <- df %>% filter(run_id == ids_DF$runID[i])
-
-  diff.m <- diff(dfsub$dates)
-  mode.m <- diff.m[which.max(tabulate(match(unique(diff.m), diff.m)))]
-  diff_units.m = units(mode.m)
-
-  diff_message <- sprintf("timestep: %.2f %s", mode.m, diff_units.m)
-  wf.folder <- workflow(bety, ids_DF$wID[i]) %>% collect() %>% pull(folder)
-
-  README.text <- c(README.text,
-                   paste("SELECTION",i),
-                   "============",
-                   readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt")),
-                   diff_message,
-                   ""
-  )
-  }
-
-  output$README <- renderUI({HTML(paste(README.text, collapse = '<br/>
'))}) - - output$dim_message <- renderText({sprintf("This data has %.0f rows, think about skipping exploratory plots if this is a large number...", dim(df)[1])}) - + #Signaling the success of the operation + toastr_success("Loaded model outputs") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) From 6f3bf9d581299c775ac70935dc0e70c17f409c3c Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 7 Jun 2019 14:02:34 -0400 Subject: [PATCH 0185/2289] temporary change --- .../meta.analysis/R/meta.analysis.summary.R | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/modules/meta.analysis/R/meta.analysis.summary.R b/modules/meta.analysis/R/meta.analysis.summary.R index a9505e8cc69..68cd11271c0 100644 --- a/modules/meta.analysis/R/meta.analysis.summary.R +++ b/modules/meta.analysis/R/meta.analysis.summary.R @@ -28,7 +28,8 @@ ##' @author David LeBauer, Shawn Serbin pecan.ma.summary <- function(mcmc.object, pft, outdir, threshold = 1.2, gg = FALSE) { - fail <- FALSE + fail <- rep(FALSE, length(mcmc.object)) + names(fail) <- names(mcmc.object) sink(file = file.path(outdir, "meta-analysis.log"), append = TRUE, split = TRUE) for (trait in names(mcmc.object)) { @@ -52,11 +53,11 @@ pecan.ma.summary <- function(mcmc.object, pft, outdir, threshold = 1.2, gg = FAL ## plots for mcmc diagnosis pdf(file.path(outdir, paste0("ma.summaryplots.", trait, ".pdf"))) - + for (i in maparms) { - plot(mcmc.object[[trait]][, i], + plot(mcmc.object[[trait]][, i], trace = FALSE, - density = TRUE, + density = TRUE, main = paste("summary plots of", i, "for", pft, trait)) box(lwd = 2) plot(mcmc.object[[trait]][, i], density = FALSE) @@ -68,7 +69,7 @@ pecan.ma.summary <- function(mcmc.object, pft, outdir, threshold = 1.2, gg = FAL lattice::densityplot(mcmc.object[[trait]]) coda::acfplot(mcmc.object[[trait]]) dev.off() - + ## G-R diagnostics to ensure convergence gd <- coda::gelman.diag(mcmc.object[[trait]]) mpsrf <- round(gd$mpsrf, digits = 3) @@ -80,15 +81,18 @@ pecan.ma.summary <- function(mcmc.object, pft, outdir, threshold = 1.2, gg = FAL not.converged <- rbind(not.converged, data.frame(pft = pft, trait = trait, mpsrf = mpsrf)) PEcAn.logger::logger.info(paste("JAGS model did not converge for", pft, trait, "\nGD MPSRF = ", mpsrf, "\n")) - fail <- TRUE + fail[trait] <- TRUE } } - if (fail) { + if (any(fail)) { PEcAn.logger::logger.warn("JAGS model failed to converge for one or more pft.") for (i in seq_len(nrow(not.converged))) { with(not.converged[i, ], PEcAn.logger::logger.info(paste(pft, trait, "MPSRF = ", mpsrf))) } + mcmc.object[fail] <- NULL #discard samples + trait.mcmc <- mcmc.object + save(trait.mcmc, file = file.path(outdir, "trait.mcmc.Rdata")) } sink() } # pecan.ma.summary From 8f21b5deeb1a4bba33088667f1594b1b3294e30f Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 7 Jun 2019 14:28:26 -0400 Subject: [PATCH 0186/2289] extract prior.all before --- modules/assim.batch/R/pda.emulator.ms.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 1ba90026f4c..898c508df6f 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -129,6 +129,7 @@ pda.emulator.ms <- function(multi.settings) { pname <- need_obj$pname hyper.pars <- need_obj$hyper.pars nparam <- length(prior.ind.all) + prior.all <- do.call("rbind", prior.list) resume.list <- vector("list", multi.settings[[1]]$assim.batch$chain) From 
d582fbb1f20eb500f591eae162840c3fa03b6af9 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 7 Jun 2019 18:35:19 -0400 Subject: [PATCH 0187/2289] change order, temporary change --- modules/meta.analysis/R/run.meta.analysis.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/meta.analysis/R/run.meta.analysis.R b/modules/meta.analysis/R/run.meta.analysis.R index f3acd8911e4..8445a9cb455 100644 --- a/modules/meta.analysis/R/run.meta.analysis.R +++ b/modules/meta.analysis/R/run.meta.analysis.R @@ -111,12 +111,12 @@ run.meta.analysis.pft <- function(pft, iterations, random = TRUE, threshold = 1. } } - ### Generate summaries and diagnostics - pecan.ma.summary(trait.mcmc, pft$name, pft$outdir, threshold) - ### Save the meta.analysis output save(trait.mcmc, file = file.path(pft$outdir, "trait.mcmc.Rdata")) + ### Generate summaries and diagnostics + pecan.ma.summary(trait.mcmc, pft$name, pft$outdir, threshold) + post.distns <- approx.posterior(trait.mcmc, prior.distns, jagged.data, pft$outdir) dist_MA_path <- file.path(pft$outdir, "post.distns.MA.Rdata") save(post.distns, file = dist_MA_path) From 9f37a289e76d88107fc7d5182047efe2c2306a4b Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 12 Jun 2019 09:41:55 -0400 Subject: [PATCH 0188/2289] Solving the write ens issue --- docker/depends/pecan.depends | 101 +------------------------------ modules/uncertainty/R/ensemble.R | 9 ++- scripts/Makefile.depends | 1 + 3 files changed, 9 insertions(+), 102 deletions(-) create mode 100644 scripts/Makefile.depends diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index c46d79faf39..364b563ddaa 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -10,105 +10,8 @@ RGL_USE_NULL=TRUE # install remotes first in case packages are references in dependencies installGithub.r \ - araiho/linkages_package \ - ebimodeling/biocro \ - MikkoPeltoniemi/Rpreles \ - ropensci/geonames \ - ropensci/nneo + # install all packages (depends, imports, suggests) install2.r -e -s -n -1\ - abind \ - BayesianTools \ - BioCro \ - car \ - coda \ - data.table \ - dataone \ - datapack \ - DBI \ - dbplyr \ - doParallel \ - dplyr \ - ellipse \ - fields \ - foreach \ - geonames \ - getPass \ - ggmap \ - ggplot2 \ - glue \ - graphics \ - grDevices \ - grid \ - gridExtra \ - hdf5r \ - here \ - httr \ - IDPmisc \ - jsonlite \ - knitr \ - lattice \ - linkages \ - lubridate \ - Maeswrap \ - magic \ - magrittr \ - maps \ - maptools \ - MASS \ - mclust \ - MCMCpack \ - methods \ - mgcv \ - minpack.lm \ - mlegp \ - mockery \ - MODISTools \ - mvtnorm \ - ncdf4 \ - ncdf4.helpers \ - nimble \ - nneo \ - parallel \ - plotrix \ - plyr \ - png \ - progress \ - purrr \ - pwr \ - randtoolbox \ - raster \ - rcrossref \ - RCurl \ - REddyProc \ - redland \ - reshape \ - reshape2 \ - reticulate \ - rgdal \ - rjags \ - rjson \ - rlang \ - rnoaa \ - RPostgreSQL \ - Rpreles \ - RSQLite \ - sf \ - SimilarityMeasures \ - sirt \ - sp \ - stats \ - stringi \ - stringr \ - testthat \ - tibble \ - tidyr \ - tidyverse \ - tools \ - truncnorm \ - udunits2 \ - utils \ - XML \ - xtable \ - zoo + diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index 0b430a2fac0..478dff61270 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -207,17 +207,20 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, return(list(runs = NULL, ensemble.id = NULL)) } + # See if we need to write to DB + write.to.db <- 
as.logical(settings$database$bety$write) # Open connection to database so we can store all run/ensemble information con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) on.exit(try(PEcAn.DB::db.close(con), silent = TRUE)) + # If we fail to connect to DB then we set to NULL if (inherits(con, "try-error")) { con <- NULL } # Get the workflow id - if ("workflow" %in% names(settings)) { + if (!is.null(settings$workflow$id)) { workflow.id <- settings$workflow$id } else { workflow.id <- -1 @@ -225,7 +228,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, #------------------------------------------------- if this is a new fresh run------------------ if (is.null(restart)){ # create an ensemble id - if (!is.null(con) & as.logical(settings$database$bety$write)) { + if (!is.null(con) & write.to.db) { # write ensemble first ensemble.id <- PEcAn.DB::db.query(paste0( "INSERT INTO ensembles (runtype, workflow_id) ", @@ -317,7 +320,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, # write configuration for each run of the ensemble runs <- data.frame() for (i in seq_len(settings$ensemble$size)) { - if (!is.null(con)) { + if (!is.null(con) & write.to.db) { paramlist <- paste("ensemble=", i, sep = "") # inserting this into the table and getting an id back run.id <- PEcAn.DB::db.query(paste0( diff --git a/scripts/Makefile.depends b/scripts/Makefile.depends new file mode 100644 index 00000000000..adac6e7c09d --- /dev/null +++ b/scripts/Makefile.depends @@ -0,0 +1 @@ +# autogenerated From 308a227e8b5a4599a89f711ac80d1d41705092db Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 12 Jun 2019 10:31:06 -0400 Subject: [PATCH 0189/2289] pecan.depend --- docker/depends/pecan.depends | 102 ++++++++++++++++++++++++++++++++++- 1 file changed, 100 insertions(+), 2 deletions(-) diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index 364b563ddaa..bab9ad197db 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -10,8 +10,106 @@ RGL_USE_NULL=TRUE # install remotes first in case packages are references in dependencies installGithub.r \ - + araiho/linkages_package \ + ebimodeling/biocro \ + MikkoPeltoniemi/Rpreles \ + ropensci/geonames \ + ropensci/nneo # install all packages (depends, imports, suggests) install2.r -e -s -n -1\ - + abind \ + BayesianTools \ + BioCro \ + car \ + coda \ + data.table \ + dataone \ + datapack \ + DBI \ + dbplyr \ + doParallel \ + dplyr \ + ellipse \ + fields \ + foreach \ + geonames \ + getPass \ + ggmap \ + ggplot2 \ + glue \ + graphics \ + grDevices \ + grid \ + gridExtra \ + hdf5r \ + here \ + httr \ + IDPmisc \ + jsonlite \ + knitr \ + lattice \ + linkages \ + lubridate \ + Maeswrap \ + magic \ + magrittr \ + maps \ + maptools \ + MASS \ + mclust \ + MCMCpack \ + methods \ + mgcv \ + minpack.lm \ + mlegp \ + mockery \ + MODISTools \ + mvtnorm \ + ncdf4 \ + ncdf4.helpers \ + nimble \ + nneo \ + parallel \ + plotrix \ + plyr \ + png \ + progress \ + purrr \ + pwr \ + randtoolbox \ + raster \ + rcrossref \ + RCurl \ + REddyProc \ + redland \ + reshape \ + reshape2 \ + reticulate \ + rgdal \ + rjags \ + rjson \ + rlang \ + rnoaa \ + RPostgreSQL \ + Rpreles \ + RSQLite \ + sf \ + SimilarityMeasures \ + sirt \ + sp \ + stats \ + stringi \ + stringr \ + testthat \ + tibble \ + tidyr \ + tidyverse \ + tools \ + truncnorm \ + udunits2 \ + utils \ + XML \ + xtable \ + xts \ + zoo \ No newline at end of file From 5aa4f8a8287df20e4a82d64d5158d674ac514603 Mon Sep 17 00:00:00 
2001 From: Hamze Dokoohaki Date: Wed, 12 Jun 2019 11:23:58 -0400 Subject: [PATCH 0190/2289] Update pecan.depends --- docker/depends/pecan.depends | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index bab9ad197db..c46d79faf39 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -111,5 +111,4 @@ install2.r -e -s -n -1\ utils \ XML \ xtable \ - xts \ - zoo \ No newline at end of file + zoo From e1799597d3e9817a919fcd80b8dbdf71dbbbf47a Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 12 Jun 2019 11:25:16 -0400 Subject: [PATCH 0191/2289] reordering thing in pecan.depend --- scripts/Makefile.depends | 1 - 1 file changed, 1 deletion(-) delete mode 100644 scripts/Makefile.depends diff --git a/scripts/Makefile.depends b/scripts/Makefile.depends deleted file mode 100644 index adac6e7c09d..00000000000 --- a/scripts/Makefile.depends +++ /dev/null @@ -1 +0,0 @@ -# autogenerated From c7ff73ebe68e5c936eba6108b7c25a042f95e830 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 14 Jun 2019 07:48:28 -0500 Subject: [PATCH 0192/2289] add tryCatch for betyConnect --- shiny/workflowPlot/server.R | 54 ++++++++++++++++++++++++++++++++++--- 1 file changed, 50 insertions(+), 4 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 4bceb0867c7..dfbe4cdb199 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -35,13 +35,59 @@ lapply(c( "shiny", options(shiny.maxRequestSize=100*1024^2) # Port forwarding -#options(shiny.port = 6438) -#options(shiny.launch.browser = 'FALSE') +# options(shiny.port = 6438) +# options(shiny.launch.browser = 'FALSE') # Define server logic server <- shinyServer(function(input, output, session) { - bety <- betyConnect() - + + # Try `betyConnect` function. 
+ # If it breaks, ask user to enter user, password and host information + # then use the `db.open` function to connect to the database + tryCatch({ + bety <- betyConnect() + }, + error = function(e){ + + #---- shiny modal---- + showModal( + modalDialog( + title = "Connect to Database", + fluidRow(column(12,textInput('user', h4('User:'), width = "100%", value = "bety"))), + fluidRow(column(12,textInput('password', h4('Password:'), width = "100%", value = "bety"))), + fluidRow(column(12,textInput('host', h4('Host:'), width = "100%", value = "localhost"))), + fluidRow( + column(3), + column(6,br(),actionButton('submitInfo', h4('Submit'), width = "100%", class="btn-primary")), + column(3) + ), + footer = NULL, + size = 's' + ) + ) + + # --- connect to database --- + observeEvent(input$submitInfo,{ + tryCatch( + { + bety <- PEcAn.DB::db.open( + list( + user = input$user, + password = input$password, + host = input$host + ) + ) + removeModal() + toastr_success("Connect to Database") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + } + ) + }) + + }) + # Hiding the animation and showing the application content hide(id = "loading-content", anim = TRUE, animType = "fade") showElement("app") From 48acf7d6db529fd51796e2f647da386aeac0a3a4 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 14 Jun 2019 09:00:59 -0500 Subject: [PATCH 0193/2289] Add tryCatch to observeEvent --- .../server_files/benchmarking_server.R | 132 ++++++++++-------- .../server_files/model_data_plots_server.R | 98 ++++++------- .../server_files/model_plots_server.R | 65 +++++---- .../server_files/select_data_server.R | 2 +- .../server_files/sidebar_server.R | 122 ++++++++++------ 5 files changed, 239 insertions(+), 180 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 806aaf42f7e..efca71b2d18 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -8,44 +8,51 @@ bm <- reactiveValues() ## as a reference run. 
If not, create the record upon button click observeEvent(input$load_model,{ - req(input$all_run_id) - ids_DF <- parse_ids_from_input_runID(input$all_run_id) - button <- FALSE - if(nrow(ids_DF) == 1){ - - # Check to see if the run has been saved as a reference run - ens_id <- dplyr::tbl(bety, 'runs') %>% dplyr::filter(id == ids_DF$runID) %>% dplyr::pull(ensemble_id) - ens_wf <- dplyr::tbl(bety, 'ensembles') %>% dplyr::filter(id == ens_id) %>% - dplyr::rename(ensemble_id = id) %>% - dplyr::left_join(.,tbl(bety, "workflows") %>% dplyr::rename(workflow_id = id), by="workflow_id") %>% dplyr::collect() - bm$model_vars <- var_names_all(bety,ids_DF$wID,ids_DF$runID) - - clean <- PEcAn.benchmark::clean_settings_BRR(inputfile = file.path(ens_wf$folder,"pecan.CHECKED.xml")) - settings_xml <- toString(PEcAn.settings::listToXml(clean, "pecan")) - ref_run <- PEcAn.benchmark::check_BRR(settings_xml, bety$con) - - if(length(ref_run) == 0){ - # If not registered, button appears with option to run create.BRR - brr_message <- sprintf("Would you like to save this run (run id = %.0f, ensemble id = %0.f) as a reference run?", ids_DF$runID, ens_id) - button <- TRUE - }else if(dim(ref_run)[1] == 1){ - bm$BRR <- ref_run %>% rename(.,reference_run_id = id) - bm$BRR - brr_message <- sprintf("This run has been registered as a reference run (id = %.0f)", bm$BRR$reference_run_id) - }else if(dim(ref_run)[1] > 1){ # There shouldn't be more than one reference run per run - brr_message <- ("There is more than one reference run in the database for this run. Review for duplicates.") + tryCatch({ + req(input$all_run_id) + ids_DF <- parse_ids_from_input_runID(input$all_run_id) + button <- FALSE + if(nrow(ids_DF) == 1){ + + # Check to see if the run has been saved as a reference run + ens_id <- dplyr::tbl(bety, 'runs') %>% dplyr::filter(id == ids_DF$runID) %>% dplyr::pull(ensemble_id) + ens_wf <- dplyr::tbl(bety, 'ensembles') %>% dplyr::filter(id == ens_id) %>% + dplyr::rename(ensemble_id = id) %>% + dplyr::left_join(.,tbl(bety, "workflows") %>% dplyr::rename(workflow_id = id), by="workflow_id") %>% dplyr::collect() + bm$model_vars <- var_names_all(bety,ids_DF$wID,ids_DF$runID) + + clean <- PEcAn.benchmark::clean_settings_BRR(inputfile = file.path(ens_wf$folder,"pecan.CHECKED.xml")) + settings_xml <- toString(PEcAn.settings::listToXml(clean, "pecan")) + ref_run <- PEcAn.benchmark::check_BRR(settings_xml, bety$con) + + if(length(ref_run) == 0){ + # If not registered, button appears with option to run create.BRR + brr_message <- sprintf("Would you like to save this run (run id = %.0f, ensemble id = %0.f) as a reference run?", ids_DF$runID, ens_id) + button <- TRUE + }else if(dim(ref_run)[1] == 1){ + bm$BRR <- ref_run %>% rename(.,reference_run_id = id) + bm$BRR + brr_message <- sprintf("This run has been registered as a reference run (id = %.0f)", bm$BRR$reference_run_id) + }else if(dim(ref_run)[1] > 1){ # There shouldn't be more than one reference run per run + brr_message <- ("There is more than one reference run in the database for this run. Review for duplicates.") + } + }else if(nrow(ids_DF) > 1){ + brr_message <- "Benchmarking currently only works when one run is selected." + }else{ + brr_message <- "Cannot do benchmarking" } - }else if(nrow(ids_DF) > 1){ - brr_message <- "Benchmarking currently only works when one run is selected." 
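As an aside on the query pattern this handler uses: `dplyr::tbl()` builds lazy table references that dbplyr translates to SQL, so nothing is pulled into R until `collect()` or `pull()`. Below is a toy version of the ensemble-to-workflow lookup against an in-memory SQLite stand-in for BETYdb; the table contents, ids, and folder path are invented, and dbplyr must be installed for the translation to work.

```
library(dplyr)

con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "ensembles", data.frame(id = 1, workflow_id = 99))
DBI::dbWriteTable(con, "workflows", data.frame(id = 99, folder = "/fake/workflow/99"))

# Lazy references; the filter, rename, and join all run inside the database
ens_wf <- tbl(con, "ensembles") %>%
  filter(id == 1) %>%
  rename(ensemble_id = id) %>%
  left_join(tbl(con, "workflows") %>% rename(workflow_id = id),
            by = "workflow_id") %>%
  collect()

ens_wf$folder # "/fake/workflow/99"
DBI::dbDisconnect(con)
```

The real handler runs the same join against the live bety connection and then keeps only the fields it needs from the collected row.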
- }else{ - brr_message <- "Cannot do benchmarking" - } - - # This is redundant but better for debugging - bm$brr_message <- brr_message - bm$button_BRR <- button - bm$ens_wf <- ens_wf - bm$ready <- 0 + + # This is redundant but better for debugging + bm$brr_message <- brr_message + bm$button_BRR <- button + bm$ens_wf <- ens_wf + bm$ready <- 0 + #Signaling the success of the operation + toastr_success("Check for reference run") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) # When button to register run is clicked, create.BRR is run and the button is removed. @@ -84,25 +91,32 @@ observeEvent({ ## have already been run. In addition, setup and run new benchmarks. observeEvent(input$load_data,{ - req(input$all_input_id) - req(input$all_site_id) - - bm$metrics <- dplyr::tbl(bety,'metrics') %>% dplyr::select(one_of("id","name","description")) %>% collect() - - # Need to write warning message that can only use one input id - bm$input <- getInputs(bety,c(input$all_site_id)) %>% - dplyr::filter(input_selection_list == input$all_input_id) - format <- PEcAn.DB::query.format.vars(bety = bety, input.id = bm$input$input_id) - # Are there more human readable names? - bm$vars <- dplyr::inner_join( - data.frame(read_name = names(bm$model_vars), - pecan_name = bm$model_vars, stringsAsFactors = FALSE), - format$vars[-grep("%",format$vars$storage_type), - c("variable_id", "pecan_name")], - by = "pecan_name") - - #This will be a longer set of conditions - bm$ready <- bm$ready + 1 + tryCatch({ + req(input$all_input_id) + req(input$all_site_id) + + bm$metrics <- dplyr::tbl(bety,'metrics') %>% dplyr::select(one_of("id","name","description")) %>% collect() + + # Need to write warning message that can only use one input id + bm$input <- getInputs(bety,c(input$all_site_id)) %>% + dplyr::filter(input_selection_list == input$all_input_id) + format <- PEcAn.DB::query.format.vars(bety = bety, input.id = bm$input$input_id) + # Are there more human readable names? 
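A toy run of the variable-matching join assembled just below, pairing each model output name with the matching record from the benchmark input's format; every name and id in it is invented.

```
library(dplyr)

# Invented model variables: display name -> PEcAn standard name
model_vars <- c("Net Ecosystem Exchange" = "NEE", "Leaf Area Index" = "LAI")

# Invented stand-in for the format records of the chosen input
format_vars <- data.frame(
  variable_id = c(297, 298),
  pecan_name  = c("NEE", "GPP"),
  stringsAsFactors = FALSE
)

inner_join(
  data.frame(read_name  = names(model_vars),
             pecan_name = model_vars,
             stringsAsFactors = FALSE),
  format_vars,
  by = "pecan_name"
)
# Keeps only NEE, the one name present on both sides
```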
+ bm$vars <- dplyr::inner_join( + data.frame(read_name = names(bm$model_vars), + pecan_name = bm$model_vars, stringsAsFactors = FALSE), + format$vars[-grep("%",format$vars$storage_type), + c("variable_id", "pecan_name")], + by = "pecan_name") + + #This will be a longer set of conditions + bm$ready <- bm$ready + 1 + #Signaling the success of the operation + toastr_success("Check for benchmarks") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) observeEvent(bm$ready,{ @@ -267,7 +281,7 @@ observeEvent(input$calc_bm,{ incProgress(2 / 15) }) #Signaling the success of the operation - toastr_success("Setup benchmarks") + toastr_success("Calculate benchmarks") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) @@ -343,7 +357,7 @@ observeEvent(bm$load_results,{ incProgress(1) }) #Signaling the success of the operation - toastr_success("Calculated Scores") + toastr_success("Calculate Scores") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) @@ -379,7 +393,7 @@ observeEvent(input$bench_plot,{ incProgress(4 / 15) }) #Signaling the success of the operation - toastr_success("Generated Plot") + toastr_success("Generate Plots") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index 48f6f30fa24..c3195ee39d4 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -43,46 +43,52 @@ observeEvent(input$ex_plot_modeldata,{ output$modelDataPlot <- renderPlotly({ input$ex_plot_modeldata isolate({ - - var = input$var_name_modeldata - - model_data <- dplyr::filter(load.model(), var_name == var) - - updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) - title <- unique(model_data$title) - xlab <- unique(model_data$xlab) - ylab <- unique(model_data$ylab) - - model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) - external_data <- load.model.data() - aligned_data = PEcAn.benchmark::align_data( - model.calc = model_data, obvs.calc = external_data, - var = var, align_method = "mean_over_larger_timestep") %>% - dplyr::select(everything(), - model = matches("[.]m"), - observations = matches("[.]o"), - Date = posix) - - print(head(aligned_data)) - # Melt dataframe to plot two types of columns together - aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) - - unit <- ylab - if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ - aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) - ylab <- input$units_modeldata - } - - - data_geom <- switch(input$plotType_modeldata, point = geom_point, line = geom_line) - - plt <- ggplot(aligned_data, aes(x=Date, y=value, color=variable)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_modeldata) - ply <- ggplotly(plt) - + tryCatch({ + var = input$var_name_modeldata + + model_data <- dplyr::filter(load.model(), var_name == var) + + updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) + title <- unique(model_data$title) + xlab <- unique(model_data$xlab) + ylab <- unique(model_data$ylab) + + model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) + external_data <- load.model.data() + aligned_data = 
PEcAn.benchmark::align_data( + model.calc = model_data, obvs.calc = external_data, + var = var, align_method = "mean_over_larger_timestep") %>% + dplyr::select(everything(), + model = matches("[.]m"), + observations = matches("[.]o"), + Date = posix) + + print(head(aligned_data)) + # Melt dataframe to plot two types of columns together + aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) + + unit <- ylab + if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ + aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) + ylab <- input$units_modeldata + } + + + data_geom <- switch(input$plotType_modeldata, point = geom_point, line = geom_line) + + plt <- ggplot(aligned_data, aes(x=Date, y=value, color=variable)) + plt <- plt + data_geom() + plt <- plt + labs(title=title, x=xlab, y=ylab) + plt <- plt + geom_smooth(n=input$smooth_n_modeldata) + ply <- ggplotly(plt) + #Signaling the success of the operation + toastr_success("Generate interactive plots") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) + ply }) output$modelDataPlotStatic <- renderPlotly({ @@ -137,7 +143,7 @@ observeEvent(input$ex_plot_modeldata,{ incProgress(4/15) }) #Signaling the success of the operation - toastr_success("Genearated plots") + toastr_success("Genearate static plots") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) @@ -149,16 +155,10 @@ observeEvent(input$ex_plot_modeldata,{ observeEvent(input$model_data_toggle_plot,{ tryCatch({ - withProgress(message = 'Calculation in progress', - detail = 'This may take a while...', - value = 0,{ - toggleElement("model_data_plot_static") - incProgress(7 / 15) - toggleElement("model_data_plot_interactive") - incProgress(8 / 15) - }) + toggleElement("model_data_plot_static") + toggleElement("model_data_plot_interactive") #Signaling the success of the operation - toastr_success("Toggled plots") + toastr_success("Toggle plots") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index cb42866c687..7532dc4cc60 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -41,30 +41,39 @@ observeEvent(input$ex_plot_model,{ req(input$units_model) output$modelPlot <- renderPlotly({ + input$ex_plot_model isolate({ - df <- dplyr::filter(load.model(), var_name == input$var_name_model) - - updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) - - title <- unique(df$title) - xlab <- unique(df$xlab) - ylab <- unique(df$ylab) - - unit <- ylab - if(input$units_model != unit & udunits2::ud.are.convertible(unit, input$units_model)){ - df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) - ylab <- input$units_model - } - - data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) - - plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_model) - ply <- ggplotly(plt) + tryCatch({ + df <- dplyr::filter(load.model(), var_name == input$var_name_model) + + updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) + + title <- unique(df$title) + xlab <- unique(df$xlab) + ylab <- unique(df$ylab) + + unit <- ylab + if(input$units_model != unit & 
udunits2::ud.are.convertible(unit, input$units_model)){ + df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) + ylab <- input$units_model + } + + data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) + + plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) + plt <- plt + data_geom() + plt <- plt + labs(title=title, x=xlab, y=ylab) + plt <- plt + geom_smooth(n=input$smooth_n_model) + ply <- ggplotly(plt) + #Signaling the success of the operation + toastr_success("Generate interactive plots") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) + ply }) output$modelPlotStatic <- renderPlotly({ @@ -101,7 +110,7 @@ observeEvent(input$ex_plot_model,{ incProgress(9/15) }) #Signaling the success of the operation - toastr_success("Generated plots") + toastr_success("Generate static plots") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) @@ -114,16 +123,10 @@ observeEvent(input$ex_plot_model,{ observeEvent(input$model_toggle_plot,{ tryCatch({ - withProgress(message = 'Calculation in progress', - detail = 'This may take a while...', - value = 0,{ - toggleElement("model_plot_static") - incProgress(7 / 15) - toggleElement("model_plot_interactive") - incProgress(8 / 15) - }) + toggleElement("model_plot_static") + toggleElement("model_plot_interactive") #Signaling the success of the operation - toastr_success("Toggled plots") + toastr_success("Toggle plots") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 355979f81ff..3eb4bc33225 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -43,7 +43,7 @@ observeEvent(input$load_model,{ }) #Signaling the success of the operation - toastr_success("Loaded model outputs") + toastr_success("Load model outputs") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index 4fdba4c61a8..9fa340bfa38 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -4,15 +4,22 @@ # Update workflow ids observe({ - # get_workflow_ids function (line 137) in db/R/query.dplyr.R takes a flag to check - # if we want to load all workflow ids. - # get_workflow_id function from query.dplyr.R - all_ids <- get_workflow_ids(bety, query, all.ids=TRUE) - updateSelectizeInput(session, "all_workflow_id", choices = all_ids) - # Get URL prameters - query <- parseQueryString(session$clientData$url_search) - # Pre-select workflow_id from URL prams - updateSelectizeInput(session, "all_workflow_id", selected = query[["workflow_id"]]) + tryCatch({ + # get_workflow_ids function (line 137) in db/R/query.dplyr.R takes a flag to check + # if we want to load all workflow ids. 
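A quick illustration of `shiny::parseQueryString`, which the URL pre-selection below leans on; it splits the query portion of the URL into a named character list. The workflow and run ids here are placeholders, not real database ids.

```
library(shiny)

query <- parseQueryString("?workflow_id=1000009172&run_id=1002042871")
query[["workflow_id"]]
# [1] "1000009172"

# The same string format the run-id selector is pre-set to below
paste0("workflow ", query[["workflow_id"]], ", run ", query[["run_id"]])
# [1] "workflow 1000009172, run 1002042871"
```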
+ # get_workflow_id function from query.dplyr.R + all_ids <- get_workflow_ids(bety, query, all.ids=TRUE) + updateSelectizeInput(session, "all_workflow_id", choices = all_ids) + # Get URL prameters + query <- parseQueryString(session$clientData$url_search) + # Pre-select workflow_id from URL prams + updateSelectizeInput(session, "all_workflow_id", selected = query[["workflow_id"]]) + #Signaling the success of the operation + toastr_success("Update workflow IDs") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) # Update run ids @@ -36,14 +43,21 @@ all_run_ids <- reactive({ return(run_id_list) }) # Update all run_ids ('workflow ',w_id,', run ',r_id) -observe({ - updateSelectizeInput(session, "all_run_id", choices = all_run_ids()) - # Get URL parameters - query <- parseQueryString(session$clientData$url_search) - # Make the run_id string with workflow_id - url_run_id <- paste0('workflow ', query[["workflow_id"]],', run ', query[["run_id"]]) - # Pre-select run_id from URL params - updateSelectizeInput(session, "all_run_id", selected = url_run_id) +observeEvent(input$all_workflow_id,{ + tryCatch({ + updateSelectizeInput(session, "all_run_id", choices = all_run_ids()) + # Get URL parameters + query <- parseQueryString(session$clientData$url_search) + # Make the run_id string with workflow_id + url_run_id <- paste0('workflow ', query[["workflow_id"]],', run ', query[["run_id"]]) + # Pre-select run_id from URL params + updateSelectizeInput(session, "all_run_id", selected = url_run_id) + #Signaling the success of the operation + toastr_success("Update run IDs") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) @@ -65,27 +79,41 @@ load.model <- eventReactive(input$load_model,{ # Update all variable names observeEvent(input$load_model, { - req(input$all_run_id) - # All information about a model is contained in 'all_run_id' string - ids_DF <- parse_ids_from_input_runID(input$all_run_id) - var_name_list <- c() - for(row_num in 1:nrow(ids_DF)){ - var_name_list <- c(var_name_list, var_names_all(bety, ids_DF$wID[row_num], ids_DF$runID[row_num])) - } - updateSelectizeInput(session, "var_name_model", choices = var_name_list) + tryCatch({ + req(input$all_run_id) + # All information about a model is contained in 'all_run_id' string + ids_DF <- parse_ids_from_input_runID(input$all_run_id) + var_name_list <- c() + for(row_num in 1:nrow(ids_DF)){ + var_name_list <- c(var_name_list, var_names_all(bety, ids_DF$wID[row_num], ids_DF$runID[row_num])) + } + updateSelectizeInput(session, "var_name_model", choices = var_name_list) + #Signaling the success of the operation + toastr_success("Update variable names") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) observeEvent(input$load_model,{ - # Retrieves all site ids from multiple seleted run ids when load button is pressed - req(input$all_run_id) - ids_DF <- parse_ids_from_input_runID(input$all_run_id) - site_id_list <- c() - for(row_num in 1:nrow(ids_DF)){ - settings <- getSettingsFromWorkflowId(bety,ids_DF$wID[row_num]) - site.id <- c(settings$run$site$id) - site_id_list <- c(site_id_list,site.id) - } - updateSelectizeInput(session, "all_site_id", choices=site_id_list) + tryCatch({ + # Retrieves all site ids from multiple seleted run ids when load button is pressed + req(input$all_run_id) + ids_DF <- parse_ids_from_input_runID(input$all_run_id) + site_id_list <- c() + for(row_num in 1:nrow(ids_DF)){ + settings <- 
getSettingsFromWorkflowId(bety,ids_DF$wID[row_num]) + site.id <- c(settings$run$site$id) + site_id_list <- c(site_id_list,site.id) + } + updateSelectizeInput(session, "all_site_id", choices=site_id_list) + #Signaling the success of the operation + toastr_success("Retrieve site IDs") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) # Update input id list as (input id, name) observe({ @@ -119,7 +147,7 @@ load.model.data <- eventReactive(input$load_data, { File_path <- inputs_df$filePath # TODO There is an issue with the db where file names are not saved properly. # To make it work with the VM, uncomment the line below - # File_path <- paste0(inputs_df$filePath,'.csv') + File_path <- paste0(inputs_df$filePath,'.csv') site.id <- inputs_df$site_id site <- PEcAn.DB::query.site(site.id,bety$con) observations <- PEcAn.benchmark::load_data( @@ -133,8 +161,22 @@ load.model.data <- eventReactive(input$load_data, { # Update all variable names observeEvent(input$load_data, { - model.df <- load.model() - obvs.df <- load.model.data() - updateSelectizeInput(session, "var_name_modeldata", - choices = intersect(model.df$var_name, names(obvs.df))) + tryCatch({ + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...', + value = 0,{ + model.df <- load.model() + incProgress(7 / 15) + obvs.df <- load.model.data() + incProgress(7 / 15) + updateSelectizeInput(session, "var_name_modeldata", + choices = intersect(model.df$var_name, names(obvs.df))) + incProgress(1 / 15) + }) + #Signaling the success of the operation + toastr_success("Update variable names") + }, + error = function(e) { + toastr_error(title = "Error", conditionMessage(e)) + }) }) \ No newline at end of file From eadc463398ac098a557a8a8a924b2369bc664e7f Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 17 Jun 2019 16:00:57 -0400 Subject: [PATCH 0194/2289] mostly working sda --- models/sipnet/R/#write_restart.SIPNET.R# | 126 ++++++++++++++++++ models/sipnet/R/met2model.SIPNET.R | 2 +- models/sipnet/R/model2netcdf.SIPNET.R | 2 +- models/sipnet/R/read_restart.SIPNET.R | 39 +++--- models/sipnet/R/write.configs.SIPNET.R | 32 +---- models/sipnet/R/write_restart.SIPNET.R | 14 +- .../assim.sequential/R/sda.enkf_MultiSite.R | 29 ++-- modules/assim.sequential/R/sda_plotting.R | 32 ++++- 8 files changed, 201 insertions(+), 75 deletions(-) create mode 100755 models/sipnet/R/#write_restart.SIPNET.R# diff --git a/models/sipnet/R/#write_restart.SIPNET.R# b/models/sipnet/R/#write_restart.SIPNET.R# new file mode 100755 index 00000000000..4526b426e21 --- /dev/null +++ b/models/sipnet/R/#write_restart.SIPNET.R# @@ -0,0 +1,126 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##' @title write_restart.SIPNET +##' @name write_restart.SIPNET +##' @author Ann Raiho \email{araiho@@nd.edu} +##' +##' @param outdir output directory +##' @param runid run ID +##' @param start.time start date and time for each SDA ensemble +##' @param stop.time stop date and time for each SDA ensemble +##' @param settings PEcAn settings object +##' @param new.state analysis state vector +##' @param RENAME flag to either rename output file or not +##' @param new.params list of parameters to convert between different states +##' @param inputs list of model inputs to use in write.configs.SIPNET +##' +##' @description Write restart files for SIPNET +##' +##' @return NONE +##' @export +write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, new.state, + RENAME = TRUE, new.params = FALSE, inputs) { + + rundir <- settings$host$rundir + variables <- colnames(new.state) + # values that will be used for updating other states deterministically depending on the SDA states + IC_extra <- data.frame(t(new.params$restart)) + + if (RENAME) { + file.rename(file.path(outdir, runid, "sipnet.out"), + file.path(outdir, runid, paste0("sipnet.", as.Date(start.time), ".out"))) + system(paste("rm", file.path(rundir, runid, "sipnet.clim"))) + } else { + print(paste("Files not renamed -- Need to rerun year", start.time, "before next time step")) + } + + settings$run$start.date <- start.time + settings$run$end.date <- stop.time + + ## Converting to sipnet units + prior.sla <- new.params[[which(!names(new.params) %in% c("soil", "soil_SDA", "restart"))[1]]]$SLA + unit.conv <- 2 * (10000 / 1) * (1 / 1000) * (3.154 * 10^7) # kgC/m2/s -> Mg/ha/yr + + analysis.save <- list() + + if ("NPP" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- udunits2::ud.convert(new.state$NPP, "kg/m^2/s", "Mg/ha/yr") #*unit.conv -> Mg/ha/yr + names(analysis.save[[length(analysis.save)]]) <- c("NPP") + } + +if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0.0001 + names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood") + + analysis.save[[length(analysis.save) + 1]] <- IC_extra$abvGrndWoodFrac + names(analysis.save[[length(analysis.save)]]) <- c("abvGrndWoodFrac") + + analysis.save[[length(analysis.save) + 1]] <- IC_extra$coarseRootFrac + names(analysis.save[[length(analysis.save)]]) <- c("coarseRootFrac") + + analysis.save[[length(analysis.save) + 1]] <- IC_extra$fineRootFrac + names(analysis.save[[length(analysis.save)]]) <- c("fineRootFrac") + } + + if ("LeafC" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$LeafC * prior.sla * 2 ## kgC/m2*m2/kg*2kg/kgC -> m2/m2 + if (new.state$LeafC < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("lai") + } + + if ("Litter" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- udunits2::ud.convert(new.state$Litter, 'kg m-2', 'g m-2') # kgC/m2 -> gC/m2 + if (new.state$Litter < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("litter") + } + + if ("TotSoilCarb" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- 
udunits2::ud.convert(new.state$TotSoilCarb, 'kg m-2', 'g m-2') # kgC/m2 -> gC/m2 + if (new.state$TotSoilCarb < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("soil") + } + + if ("SoilMoistFrac" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$SoilMoistFrac ## unitless + if (new.state$SoilMoistFrac < 0 | new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5 + names(analysis.save[[length(analysis.save)]]) <- c("litterWFrac") + + analysis.save[[length(analysis.save) + 1]] <- new.state$SoilMoistFrac ## unitless + if (new.state$SoilMoistFrac < 0 | new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5 + names(analysis.save[[length(analysis.save)]]) <- c("soilWFrac") + } + + if ("SWE" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$SWE/10 + if (new.state$SWE < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("snow") + } + + if ("LAI" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$LAI + if (new.state$LAI < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("LAI") + } + + if (!is.null(analysis.save) & length(analysis.save)>0){ + analysis.save.mat <- data.frame(matrix(unlist(analysis.save, use.names = TRUE), nrow = 1)) + colnames(analysis.save.mat) <- names(unlist(analysis.save)) + }else{ + analysis.save.mat<-NULL + } + + + do.call(write.config.SIPNET, args = list(defaults = NULL, + trait.values = new.params, + settings = settings, + run.id = runid, + inputs = inputs, + IC = analysis.save.mat)) + print(runid) +} # write_restart.SIPNET \ No newline at end of file diff --git a/models/sipnet/R/met2model.SIPNET.R b/models/sipnet/R/met2model.SIPNET.R index ccba82a28ff..e1d0f74dbdf 100644 --- a/models/sipnet/R/met2model.SIPNET.R +++ b/models/sipnet/R/met2model.SIPNET.R @@ -301,4 +301,4 @@ met2model.SIPNET <- function(in.path, in.prefix, outfolder, start_date, end_date PEcAn.logger::logger.info("NO MET TO OUTPUT") return(invisible(NULL)) } -} # met2model.SIPNET +} # met2model.SIPNET \ No newline at end of file diff --git a/models/sipnet/R/model2netcdf.SIPNET.R b/models/sipnet/R/model2netcdf.SIPNET.R index fefa9634602..5d7f03b4cc3 100644 --- a/models/sipnet/R/model2netcdf.SIPNET.R +++ b/models/sipnet/R/model2netcdf.SIPNET.R @@ -253,4 +253,4 @@ model2netcdf.SIPNET <- function(outdir, sitelat, sitelon, start_date, end_date, } } # model2netcdf.SIPNET #--------------------------------------------------------------------------------------------------# -### EOF +### EOF \ No newline at end of file diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index 1034e45815c..6d23a193a83 100755 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -35,27 +35,28 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p last <- length(ens[[1]]) forecast <- list() - + #### PEcAn Standard Outputs - if ("GWBI" %in% var.names) { - forecast[[length(forecast) + 1]] <- udunits2::ud.convert(mean(ens$GWBI), "kg/m^2/s", "Mg/ha/yr") - names(forecast[[length(forecast)]]) <- c("GWBI") - } - if ("AbvGrndWood" %in% var.names) { forecast[[length(forecast) + 1]] <- udunits2::ud.convert(ens$AbvGrndWood[last], "kg/m^2", "Mg/ha") names(forecast[[length(forecast)]]) <- c("AbvGrndWood") - + # calculate fractions, store in params, will use in write_restart wood_total_C <- 
        ens$AbvGrndWood[last] + ens$fine_root_carbon_content[last] +
        ens$coarse_root_carbon_content[last]
-      if (wood_total_C==0) wood_total_C <- 0.0001 # Making sure we are not making Nans in case there is no plant living there.
+      if (wood_total_C<0)  wood_total_C <- 0.0001 # Making sure we are not making Nans in case there is no plant living there.
+
       abvGrndWoodFrac <- ens$AbvGrndWood[last] / wood_total_C
       coarseRootFrac  <- ens$coarse_root_carbon_content[last] / wood_total_C
       fineRootFrac    <- ens$fine_root_carbon_content[last] / wood_total_C
       params$restart <- c(abvGrndWoodFrac, coarseRootFrac, fineRootFrac)
-
+      if (length(params$restart)>0)
-      names(params$restart) <- c("abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac")
+        names(params$restart) <- c("abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac")
+  }
+
+  if ("GWBI" %in% var.names) {
+    forecast[[length(forecast) + 1]] <- udunits2::ud.convert(mean(ens$GWBI), "kg/m^2/s", "Mg/ha/yr")
+    names(forecast[[length(forecast)]]) <- c("GWBI")
   }
 
   if ("leaf_carbon_content" %in% var.names) {
@@ -63,16 +64,16 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p
     names(forecast[[length(forecast)]]) <- c("LeafC")
   }
 
+  if ("LAI" %in% var.names) {
+    forecast[[length(forecast) + 1]] <- ens$LAI[last] ## m2/m2
+    names(forecast[[length(forecast)]]) <- c("LAI")
+  }
+
   if ("litter_carbon_content" %in% var.names) {
     forecast[[length(forecast) + 1]] <- ens$litter_carbon_content[last] ##kgC/m2
     names(forecast[[length(forecast)]]) <- c("Litter")
   }
 
-  if ("TotSoilCarb" %in% var.names) {
-    forecast[[length(forecast) + 1]] <- ens$TotSoilCarb[last] ## kgC/m2
-    names(forecast[[length(forecast)]]) <- c("TotSoilCarb")
-  }
-
   if ("SoilMoistFrac" %in% var.names) {
     forecast[[length(forecast) + 1]] <- ens$SoilMoistFrac[last] ## unitless
     names(forecast[[length(forecast)]]) <- c("SoilMoistFrac")
@@ -88,11 +89,11 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p
     names(forecast[[length(forecast)]]) <- c("TotLivBiom")
   }
 
-  if ("LAI" %in% var.names) {
-    forecast[[length(forecast) + 1]] <- ens$LAI[last] ## m2/m2
-    names(forecast[[length(forecast)]]) <- "LAI"
+  if ("TotSoilCarb" %in% var.names) {
+    forecast[[length(forecast) + 1]] <- ens$TotSoilCarb[last] ## kgC/m2
+    names(forecast[[length(forecast)]]) <- c("TotSoilCarb")
   }
-
+
   print(runid)
 
   X_tmp <- list(X = unlist(forecast), params = params)
diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R
index d4d34ef36af..9453178bfb1 100755
--- a/models/sipnet/R/write.configs.SIPNET.R
+++ b/models/sipnet/R/write.configs.SIPNET.R
@@ -15,7 +15,6 @@
 ##' @author Michael Dietze
 write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs = NULL, IC = NULL,
                                 restart = NULL, spinup = NULL) {
-
   ### WRITE sipnet.in
   template.in <- system.file("sipnet.in", package = "PEcAn.SIPNET")
   config.text <- readLines(con = template.in, n = -1)
@@ -30,7 +29,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
       template.clim <- inputs$met$path
     }
   }
-
   PEcAn.logger::logger.info(paste0("Writing SIPNET configs with input ", template.clim))
 
   # find out where to write run/ouput
@@ -64,7 +62,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if (!is.null(settings$host$postrun)) {
     hostteardown <- paste(hostteardown, sep = "\n", paste(settings$host$postrun, collapse = "\n"))
   }
-
   # create job.sh
   jobsh <- gsub("@HOST_SETUP@", hostsetup, jobsh)
   jobsh <- gsub("@HOST_TEARDOWN@", hostteardown, jobsh)
@@ -102,8 +99,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   param <- read.table(template.param)
-
-
   #### write run-specific PFT parameters here #### Get parameters being handled by PEcAn
   for (pft in seq_along(trait.values)) {
     pft.traits <- unlist(trait.values[[pft]])
@@ -159,7 +154,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   } else {
     Amax <- param[id, 2] * SLA
   }
-
   # Daily fraction of maximum photosynthesis
   if ("AmaxFrac" %in% pft.names) {
     param[which(param[, 1] == "aMaxFrac"), 2] <- pft.traits[which(pft.names == "AmaxFrac")]
@@ -191,7 +185,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if ("growth_resp_factor" %in% pft.names) {
     param[which(param[, 1] == "growthRespFrac"), 2] <- pft.traits[which(pft.names == "growth_resp_factor")]
   }
-
   ### !!! NOT YET USED
   #Jmax = NA
   #if("Jmax" %in% pft.names){
@@ -244,7 +237,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if ("wueConst" %in% pft.names) {
     param[which(param[, 1] == "wueConst"), 2] <- pft.traits[which(pft.names == "wueConst")]
   }
-
   # vegetation respiration Q10.
   if ("veg_respiration_Q10" %in% pft.names) {
     param[which(param[, 1] == "vegRespQ10"), 2] <- pft.traits[which(pft.names == "veg_respiration_Q10")]
@@ -291,7 +283,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if ("coarse_root_respiration_Q10" %in% pft.names) {
     param[which(param[, 1] == "coarseRootQ10"), 2] <- pft.traits[which(pft.names == "coarse_root_respiration_Q10")]
   }
-
   # WARNING: fineRootAllocation + woodAllocation + leafAllocation isn't supposed to exceed 1
   # see sipnet.c code L2005 :
   # fluxes.coarseRootCreation=(1-params.leafAllocation-params.fineRootAllocation-params.woodAllocation)*npp;
@@ -351,7 +342,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   if ("soilWHC" %in% pft.names) {
     param[which(param[, 1] == "soilWHC"), 2] <- pft.traits[which(pft.names == "soilWHC")]
   }
-
   # 10/31/2017 IF: these were the two assumptions used in the emulator paper in order to reduce dimensionality
   # These results in improved winter soil respiration values
   # they don't affect anything when the seasonal soil respiration functionality in SIPNET is turned-off
@@ -380,8 +370,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   }  ## end loop over PFTS
   ####### end parameter update
-
-
   #### write INITIAL CONDITIONS here ####
   if (!is.null(IC)) {
     ic.names <- names(IC)
     ## plantWoodInit gC/m2
@@ -390,21 +378,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
       # reconstruct total wood C
       wood_total_C <- IC$AbvGrndWood / IC$abvGrndWoodFrac
       #Sanity check
-      if (is.na(wood_total_C) | is.infinite(wood_total_C) | is.nan(wood_total_C) | wood_total_C <0) {
-        wood_total_C <- 0
-        if (round(IC$AbvGrndWood)>0 & round(IC$abvGrndWoodFrac, 3)==0)
-          PEcAn.logger::logger.warn(
-            paste0(
-              "There is a major problem with ",
-              run.id,
-              " in either the model's parameters or IC.",
-              "Because the ABG is estimated=",
-              IC$AbvGrndWood,
-              " while AGB Frac is estimated=",
-              IC$abvGrndWoodFrac
-            )
-          )
-      }
+      if (is.na(wood_total_C) | is.infinite(wood_total_C) | is.nan(wood_total_C) | wood_total_C <0) wood_total_C <- 0
       param[which(param[, 1] == "plantWoodInit"), 2] <- wood_total_C
       param[which(param[, 1] == "coarseRootFrac"), 2] <- IC$coarseRootFrac
       param[which(param[, 1] == "fineRootFrac"), 2] <- IC$fineRootFrac
@@ -493,8 +467,6 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs
   write.table(param, file.path(settings$rundir, run.id, "sipnet.param"), row.names = FALSE,
               col.names = FALSE, quote = FALSE)
 } # write.config.SIPNET
-
-
 #--------------------------------------------------------------------------------------------------#
 ##'
 ##' Clear out previous SIPNET config and parameter files.
@@ -524,4 +496,4 @@ remove.config.SIPNET <- function(main.outdir, settings) {
   } else {
     print("*** WARNING: Removal of files on remote host not yet implemented ***")
   }
-} # remove.config.SIPNET
\ No newline at end of file
+} # remove.config.SIPNET
\ No newline at end of file
diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R
index e46df1b9daf..52d5b8aafb5 100755
--- a/models/sipnet/R/write_restart.SIPNET.R
+++ b/models/sipnet/R/write_restart.SIPNET.R
@@ -55,11 +55,11 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
     names(analysis.save[[length(analysis.save)]]) <- c("NPP")
   }
 
-  if ("AbvGrndWood" %in% variables) {
-    AbvGrndWood <- udunits2::ud.convert(new.state$AbvGrndWood, "Mg/ha", "g/m^2")
-    analysis.save[[length(analysis.save) + 1]] <- AbvGrndWood
-#    if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0
-    names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood")
+  if ("AbvGrndWood" %in% variables) {
+    AbvGrndWood <- udunits2::ud.convert(new.state$AbvGrndWood, "Mg/ha", "g/m^2")
+    analysis.save[[length(analysis.save) + 1]] <- AbvGrndWood
+    if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0.0001
+    names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood")
 
     analysis.save[[length(analysis.save) + 1]] <- IC_extra$abvGrndWoodFrac
     names(analysis.save[[length(analysis.save)]]) <- c("abvGrndWoodFrac")
@@ -104,7 +104,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
     if (new.state$SWE < 0) analysis.save[[length(analysis.save)]] <- 0
     names(analysis.save[[length(analysis.save)]]) <- c("snow")
   }
-
+
   if ("LAI" %in% variables) {
     analysis.save[[length(analysis.save) + 1]] <- new.state$LAI
     if (new.state$LAI < 0) analysis.save[[length(analysis.save)]] <- 0
@@ -117,7 +117,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
   }else{
     analysis.save.mat<-NULL
   }
-
+
   do.call(write.config.SIPNET, args = list(defaults = NULL,
                                            trait.values = new.params,
diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index 901583a7f30..13d0ff79f8d 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -48,7 +48,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
   scalef <- settings$state.data.assimilation$scalef %>% as.numeric() # scale factor for localization
   var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name")
   names(var.names) <- NULL
-  multi.site.flag <- PEcAn.settings::is.MultiSettings(settings)
+  multi.site.flag <- T#PEcAn.settings::is.MultiSettings(settings)
   readsFF<-NULL # this keeps the forward forecast
   nitr.GEF <- ifelse(is.null(settings$state.data.assimilation$nitrGEF), 1e6, settings$state.data.assimilation$nitrGEF %>%as.numeric)
   nthin <- ifelse(is.null(settings$state.data.assimilation$nthin), 100, settings$state.data.assimilation$nthin %>%as.numeric)
@@ -207,7 +207,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
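
As a standalone reference for the clamp-and-convert move the write_restart hunks above apply (negative analysis states floored to a small positive value after unit conversion), here is a minimal R sketch; it assumes only the `udunits2` package, and the input vector is made up:

```r
# Illustrative only, not part of any patch above.
library(udunits2)

agb_Mg_ha <- c(12.3, -0.5)                            # analysis state, Mg/ha
agb_g_m2  <- ud.convert(agb_Mg_ha, "Mg/ha", "g/m^2")  # SIPNET expects g/m^2
agb_g_m2[agb_g_m2 < 0] <- 0.0001                      # floor illegal negatives
agb_g_m2
```

The 0.0001 floor matches the hardcoded value in the patched `write_restart.SIPNET`; any positive epsilon that keeps SIPNET from dividing by zero would serve the same purpose.
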
 ###-------------------------------------------------------------------------###-----
   #- Check to see if this is the first run or not and what inputs needs to be sent to write.ensemble configs
-  if (t>1){
+  if (t > 1){
     #removing old simulations
     unlink(list.files(outdir, "*.nc.var", recursive = TRUE, full.names = TRUE))
     #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file.
@@ -268,7 +268,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
         )
       })
 
-    if(t==1) inputs <- out.configs %>% map(~.x[['samples']][['met']]) # for any time after t==1 the met is the splitted met
+    if (t == 1) inputs <- out.configs %>% map(~.x[['samples']][['met']]) # for any time after t==1 the met is the splitted met
     #-------------------------------------------- RUN
     PEcAn.remote::start.model.runs(settings, settings$database$bety$write)
 
@@ -277,7 +277,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
     if (control$debug) browser()
     #--- Reading just the first run when we have all years and for VIS
-    if (t==1 & control$FF){
+    if (t == 1 & control$FF){
       readsFF <- out.configs %>%
         purrr::map(function(configs) {
@@ -342,17 +342,12 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
   #[1,] 3.872521  37.2581 3.872521  37.2581
   # But there is an attribute called `Site` which tells you what column is for what site id - check out attr (X,"Site")
   if (multi.site.flag)
-    X <- X %>%
-      map_dfc(~.x) %>%
-      as.matrix() %>%
-      `colnames<-`(c(rep(colnames(X[[names(X)[1]]]), length(X)))) %>%
-      `attr<-`('Site',c(rep(site.ids, each=length(var.names))))
-
-  # X <- X %>%
-  #   map_dfc(~.x) %>%
-  #   as.matrix() %>%
-  #   `colnames<-`(c(rep(var.names, length(X)))) %>%
-  #   `attr<-`('Site',c(rep(site.ids, each=length(var.names))))
+    X <- X %>%
+      map_dfc(~.x) %>%
+      as.matrix() %>%
+#      `colnames<-`(c(rep(var.names, length(X)))) %>%
+      `colnames<-`(c(rep(colnames(X[[names(X)[1]]]), length(X)))) %>%
+      `attr<-`('Site',c(rep(site.ids, each=length(var.names))))
 
   FORECAST[[t]] <- X
   ###-------------------------------------------------------------------###
@@ -418,6 +413,8 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
     print(Y)
     PEcAn.logger::logger.warn ("\n --------------Obs Cov ----------- \n")
     print(R)
+    PEcAn.logger::logger.warn ("\n --------------Obs H ----------- \n")
+    print(H)
     PEcAn.logger::logger.warn ("\n --------------Forecast mean ----------- \n")
     print(enkf.params[[t]]$mu.f)
     PEcAn.logger::logger.warn ("\n --------------Forecast Cov ----------- \n")
@@ -483,7 +480,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
             file = file.path(settings$outdir,"SDA", "sda.output.Rdata"))
 
   #writing down the image - either you asked for it or not :)
-  if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF)
+#  if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF)
   if (t == 1){
     unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE))
   }
diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R
index 4a4f1e20264..5e4833dfc0f 100755
--- a/modules/assim.sequential/R/sda_plotting.R
+++ b/modules/assim.sequential/R/sda_plotting.R
@@ -498,7 +498,37 @@ post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.co
 
 ##' @rdname interactive.plotting.sda
 ##' @export
-post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=F, readsFF=NULL){
+post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=F, readsFF=NULL, observed_vars){
+
+  # fix obs.mean/obs.cov for multivariable plotting issues when there is NA data
+  for (name in names(obs.mean))
+  {
+    data_mean = obs.mean[name]
+    data_cov = obs.cov[name]
+    sites = names(data_mean[[1]])
+    for (site in sites)
+    {
+      d_mean = data_mean[[1]][[site]]
+      d_cov = data_cov[[1]][[site]]
+      colnames = names(d_mean)
+      if (length(colnames) < length(observed_vars))
+      {
+        missing = which(!(observed_vars %in% colnames))
+        missing_mean = as.data.frame(NA)
+        colnames(missing_mean) = observed_vars[missing]
+        d_mean = cbind(d_mean, missing_mean)
+
+        missing_cov = matrix(0, nrow = length(observed_vars), ncol = length(observed_vars))
+        diag(missing_cov) = c(diag(d_cov), NA)
+        d_cov = missing_cov
+      }
+      data_mean[[1]][[site]] = d_mean
+      data_cov[[1]][[site]] = d_cov
+    }
+    obs.mean[name] = data_mean
+    obs.cov[name] = data_cov
+  }
+
   if (!('ggrepel' %in% installed.packages()[,1])) devtools::install_github("slowkow/ggrepel")

From dc4b41bd878b785d230e638ac54eaff527770f93 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 14:10:20 -0400
Subject: [PATCH 0195/2289] added forceRun option to SDA

---
 .../assim.sequential/R/sda.enkf_MultiSite.R   | 24 ++++++++++++++++++-
 .../man/interactive.plotting.sda.Rd           |  3 ++-
 .../man/sda.enkf.multisite.Rd                 |  6 +++--
 3 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index 13d0ff79f8d..f95b224b53a 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -7,6 +7,7 @@
 #' @param obs.cov List of covariance matrices of state variables , named with observation datetime.
 #' @param Q Process covariance matrix given if there is no data to estimate it.
 #' @param restart Used for iterative updating previous forecasts. When the restart is TRUE it read the object in SDA folder written from previous SDA.
+#' @param forceRun Used to force job.sh files that were not run for ensembles in SDA (quick fix)
 #' @param control List of flags controlling the behaviour of the SDA. trace for reporting back the SDA outcomes, interactivePlot for plotting the outcomes after each step,
 #' TimeseriesPlot for post analysis examination, BiasPlot for plotting the correlation between state variables, plot.title is the title of post analysis plots and debug mode allows for pausing the code and examining the variables inside the function.
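
The NA-padding loop added to `post.analysis.multisite.ggplot` above can be exercised in isolation. A minimal sketch follows; the variable set and the site data frame are invented for illustration:

```r
# Illustrative only: pad one site's observation means so every site reports
# the same variable set, mirroring the loop added in the hunk above.
observed_vars <- c("AbvGrndWood", "LAI")     # assumed variable set
d_mean <- data.frame(AbvGrndWood = 110.2)    # a site that never reported LAI

missing <- which(!(observed_vars %in% names(d_mean)))
if (length(missing) > 0) {
  fill <- stats::setNames(as.data.frame(t(rep(NA_real_, length(missing)))),
                          observed_vars[missing])
  d_mean <- cbind(d_mean, fill)              # AbvGrndWood = 110.2, LAI = NA
}
d_mean
```

The covariance side of the fix works the same way: the existing diagonal is kept and an NA is appended for each missing variable.
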
 #'
@@ -19,7 +20,7 @@
 #' @import nimble
 #' @export
 #'
-sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = FALSE,
+sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = FALSE, forceRun = TRUE,
                                control=list(trace = TRUE,
                                             FF = FALSE,
                                             interactivePlot = FALSE,
@@ -272,6 +273,27 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
     #-------------------------------------------- RUN
     PEcAn.remote::start.model.runs(settings, settings$database$bety$write)
+
+    if (forceRun == TRUE)
+    {
+      # quick fix for job.sh files not getting run
+      folders = list.files(path = paste0(settings$outdir, "/SDA/out"), include.dirs = TRUE, full.names = TRUE)
+      for (i in seq_along(folders))
+      {
+        files = list.files(path = folders[i], pattern = ".nc")
+        remove = grep(files, pattern = '.nc.var')
+        if (length(remove) > 0)
+        {
+          files = files[-remove]
+        }
+        if (!(paste0(obs.year, '.nc') %in% files))
+        {
+          bad = print(paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc"))
+          file = paste0(gsub("out", "run", folders[i]), "/", "job.sh")
+          system(paste0("sh ", file))
+        }
+      }
+    }
 
     #------------------------------------------- Reading the output
     if (control$debug) browser()
diff --git a/modules/assim.sequential/man/interactive.plotting.sda.Rd b/modules/assim.sequential/man/interactive.plotting.sda.Rd
index a0780c797a9..3f60c03a02a 100644
--- a/modules/assim.sequential/man/interactive.plotting.sda.Rd
+++ b/modules/assim.sequential/man/interactive.plotting.sda.Rd
@@ -28,7 +28,8 @@ post.analysis.ggplot.violin(settings, t, obs.times, obs.mean, obs.cov, obs, X,
   FORECAST, ANALYSIS, plot.title = NULL)
 
 post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov,
-  FORECAST, ANALYSIS, plot.title = NULL, facetg = F, readsFF = NULL)
+  FORECAST, ANALYSIS, plot.title = NULL, facetg = F, readsFF = NULL,
+  observed_vars)
 }
 \arguments{
 \item{settings}{pecan standard settings list.}
diff --git a/modules/assim.sequential/man/sda.enkf.multisite.Rd b/modules/assim.sequential/man/sda.enkf.multisite.Rd
index 40ef809e33e..55460dae85d 100644
--- a/modules/assim.sequential/man/sda.enkf.multisite.Rd
+++ b/modules/assim.sequential/man/sda.enkf.multisite.Rd
@@ -5,8 +5,8 @@
 \title{sda.enkf.multisite}
 \usage{
 sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL,
-  restart = FALSE, control = list(trace = TRUE, FF = FALSE,
-  interactivePlot = FALSE, TimeseriesPlot = FALSE, BiasPlot = FALSE,
+  restart = FALSE, forceRun = TRUE, control = list(trace = TRUE, FF =
+  FALSE, interactivePlot = FALSE, TimeseriesPlot = FALSE, BiasPlot = FALSE,
   plot.title = NULL, facet.plots = FALSE, debug = FALSE, pause = FALSE),
   ...)
 }
@@ -21,6 +21,8 @@ sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL,
 \item{restart}{Used for iterative updating previous forecasts. When the restart is TRUE it read the object in SDA folder written from previous SDA.}
 
+\item{forceRun}{Used to force job.sh files that were not run for ensembles in SDA (quick fix)}
+
 \item{control}{List of flags controlling the behaviour of the SDA.
 trace for reporting back the SDA outcomes, interactivePlot for plotting the outcomes after each step,
 TimeseriesPlot for post analysis examination, BiasPlot for plotting the correlation between state variables, plot.title is the title of post analysis plots and debug mode allows for pausing the code and examining the variables inside the function.}
 }

From 065e373c4cc066f05c64c1bc5c587f47a441ab3f Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 15:10:22 -0400
Subject: [PATCH 0196/2289] Alexey fixes :)

---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index f95b224b53a..e4533c0b4a0 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -277,19 +277,19 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
     if (forceRun == TRUE)
     {
       # quick fix for job.sh files not getting run
-      folders = list.files(path = paste0(settings$outdir, "/SDA/out"), include.dirs = TRUE, full.names = TRUE)
+      folders <- list.files(path = paste0(settings$outdir, "/SDA/out"), include.dirs = TRUE, full.names = TRUE)
       for (i in seq_along(folders))
       {
-        files = list.files(path = folders[i], pattern = ".nc")
-        remove = grep(files, pattern = '.nc.var')
+        files <- list.files(path = folders[i], pattern = ".nc")
+        remove <- grep(files, pattern = '.nc.var')
         if (length(remove) > 0)
         {
-          files = files[-remove]
+          files <- files[-remove]
         }
         if (!(paste0(obs.year, '.nc') %in% files))
         {
-          bad = print(paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc"))
-          file = paste0(gsub("out", "run", folders[i]), "/", "job.sh")
+          bad <- print(paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc"))
+          file <- paste0(gsub("out", "run", folders[i]), "/", "job.sh")
           system(paste0("sh ", file))
         }
       }

From 47f63e62b9b1285269c9dc87ad1bec6094ae5835 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 15:53:49 -0400
Subject: [PATCH 0197/2289] keep all .nc and .nc.var files

---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index e4533c0b4a0..a86b528fcb8 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -210,7 +210,8 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
     if (t > 1){
       #removing old simulations
-      unlink(list.files(outdir, "*.nc.var", recursive = TRUE, full.names = TRUE))
+      ## keep all old simulations right now since there is debugging going on
+      #unlink(list.files(outdir, "*.nc.var", recursive = TRUE, full.names = TRUE))
       #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file.
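
The `forceRun` recovery pass introduced in PATCH 0195 is easiest to read as a standalone loop. A sketch, with `settings$outdir` and `obs.year` stubbed in as illustrative values:

```r
# Illustrative sketch of the forceRun check: for every ensemble output
# folder, rerun job.sh when the expected <obs.year>.nc output is missing.
outdir   <- "SDA/out"   # stub for file.path(settings$outdir, "SDA", "out")
obs.year <- 2004        # stub for the current assimilation year

folders <- list.files(outdir, include.dirs = TRUE, full.names = TRUE)
for (f in folders) {
  ncs <- list.files(f, pattern = "\\.nc$")  # anchored, so .nc.var is excluded
  if (!(paste0(obs.year, ".nc") %in% ncs)) {
    job <- file.path(gsub("out", "run", f), "job.sh")
    if (file.exists(job)) system(paste("sh", job))
  }
}
```

Note that `gsub("out", "run", f)` rewrites every occurrence of "out" in the path, not just the trailing directory; that is how the patch itself behaves, so paths containing "out" elsewhere would need care.
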
       inputs.split <- conf.settings %>%
         purrr::map2(inputs, function(settings, inputs) {

From bf8dfb7bc8b7b2b4d65984932545cf9503e17 Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:01:28 -0400
Subject: [PATCH 0198/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Alexey Shiklomanov
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 47eee77ffdc..41b81b51cb6 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -207,7 +207,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   if (!(is.null(outfolder)))
   {
     fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-    fname <- paste0(outfolder, "/", fname)
+    fname <- file.path(outfolder, fname)
     write.csv(output, fname)
   }

From 719e2ee73e571ae39dcf7dcb829e2e55555f73fa Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:01:58 -0400
Subject: [PATCH 0199/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Alexey Shiklomanov
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 41b81b51cb6..06eaec4afbe 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -167,7 +167,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   if (QC_filter == T)
   {
     output$qc == as.character(output$qc)
-    for (i in 1:nrow(output))
+    for (i in seq_len(nrow(output)))
     {
       convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
       output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert))
     }

From 197db0ce9701533b8594d5a5b20b2fef705030e2 Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:02:07 -0400
Subject: [PATCH 0200/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Alexey Shiklomanov
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 06eaec4afbe..9d68fe56206 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -164,7 +164,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   output$lat <- round(output$lat, 4)
   output$lon <- round(output$lon, 4)
 
-  if (QC_filter == T)
+  if (QC_filter)
   {
     output$qc == as.character(output$qc)
     for (i in seq_len(nrow(output)))

From 16a0ed374b1744c66174a9258deaa98df126f94 Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:02:23 -0400
Subject: [PATCH 0201/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Shawn P. Serbin
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 9d68fe56206..bbbb89829f1 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -41,7 +41,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   if (grepl("/", end_date) == T)
   {
-    end_date = as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j")
+    end_date <- as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j")
   }
   start_date = as.Date(start_date, format = "%Y%j")
   end_date = as.Date(end_date, format = "%Y%j")

From 1079636ebb298899204f3d89d02c500bbbe4 Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:02:33 -0400
Subject: [PATCH 0202/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Shawn P. Serbin
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index bbbb89829f1..da60498a4c3 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -43,7 +43,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   {
     end_date <- as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j")
   }
-  start_date = as.Date(start_date, format = "%Y%j")
+  start_date <- as.Date(start_date, format = "%Y%j")
   end_date = as.Date(end_date, format = "%Y%j")

From d96402224058ee11f8a6be9ff88566a12e55e1cf Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:02:41 -0400
Subject: [PATCH 0203/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Shawn P. Serbin
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index da60498a4c3..4be1dbf8ad5 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -44,7 +44,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   }
   start_date <- as.Date(start_date, format = "%Y%j")
-  end_date = as.Date(end_date, format = "%Y%j")
+  end_date <- as.Date(end_date, format = "%Y%j")
 
 
   #################### if package_method == MODISTools option ####################

From 507f015b4e3f342074034bb9fcf56305d51f71fe Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:02:51 -0400
Subject: [PATCH 0204/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Shawn P. Serbin
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 4be1dbf8ad5..d384c476bce 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -82,7 +82,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
     end <- as.Date(end_date, "%Y%j")
   }
   ########## For Date case 1: if only one date is asked for, but date is not within modis data product date range ##########
-  if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)])
+  if (as.numeric(start_date)< dates[1] || as.numeric(end_date)> dates[length(dates)])
   {
     print(start)
     stop("start or end date are not within MODIS data product date range. Please choose another date.")

From 21e5ab7247f6657e99a5dd3427b0aa0d63fd7e Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:06:33 -0400
Subject: [PATCH 0205/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Shawn P. Serbin
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index d384c476bce..d7f9cceb8fb 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -172,7 +172,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
       convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
       output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert))
     }
-    good = which(output$qc == "000" | output$qc == "001")
+    good <- which(output$qc == "000" | output$qc == "001")
     if (length(good) > 0 | !(is.null(good)))
     {
       output = output[good,]

From a0a06344c4f153e21b3fff68d377fbf6ea9d3dad Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:06:45 -0400
Subject: [PATCH 0206/2289] Update modules/data.remote/R/call_MODIS.R

Co-Authored-By: Alexey Shiklomanov
---
 modules/data.remote/R/call_MODIS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index d7f9cceb8fb..08de6c9b2e7 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -184,7 +184,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   if (!(outfolder == ""))
   {
     fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "")
-    fname <- paste0(outfolder, "/", fname)
+    fname <- file.path(outfolder, fname)
     write.csv(output, fname, row.names = F)
   }

From 9100a9bb14d0db5a58e5eff1a0e97960c4f3d864 Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:28:13 -0400
Subject: [PATCH 0207/2289] Update modules/assim.sequential/R/sda.enkf_MultiSite.R

Co-Authored-By: Shawn P. Serbin
---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index a86b528fcb8..8a7e5874e94 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -86,7 +86,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
   my.read_restart <- paste0("read_restart.", model)
   my.split_inputs <- paste0("split_inputs.", model)
   #- Double checking some of the inputs
-  if (is.null(adjustment)) adjustment<-TRUE
+  if (is.null(adjustment)) adjustment <- TRUE
   # models that don't need split_inputs, check register file for that
   register.xml <- system.file(paste0("register.", model, ".xml"), package = paste0("PEcAn.", model))
   register <- XML::xmlToList(XML::xmlParse(register.xml))

From 42ba94a1f4b24b94c2ede4ed3f85ac1fed126e25 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 16:54:28 -0400
Subject: [PATCH 0208/2289] formatting fixes and match to bailey_sda branch

---
 modules/data.remote/DESCRIPTION       |  1 -
 modules/data.remote/R/call_MODIS.R    | 79 +++++++++++++++------------
 modules/data.remote/man/call_MODIS.Rd | 10 ++--
 3 files changed, 49 insertions(+), 41 deletions(-)

diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION
index f6e3d747771..227b2b93a88 100644
--- a/modules/data.remote/DESCRIPTION
+++ b/modules/data.remote/DESCRIPTION
@@ -13,7 +13,6 @@ Imports:
     PEcAn.logger,
     PEcAn.remote,
     stringr (>= 1.1.0),
-    spatial.tools,
     binaryLogic
 Suggests:
     testthat (>= 1.0.2),
diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 08de6c9b2e7..88359427ec1 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -13,9 +13,10 @@
 ##' @param band string value for which measurement to extract
 ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional)
 ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)
+##' @param siteID numeric value of BETY site id value to use for output file name. Default is NULL
 ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)
-##' @param siteID string value of a PEcAn site ID. Currently only used for output filename.
+##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.
+##' @param progress TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE.
 ##'
 ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json
 ##' depends on the MODISTools package version 1.1.0
@@ -28,25 +29,26 @@
 ##' }
 ##'
 ##' @author Bailey Morrison
-##'
-call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = F) {
-
+##'
+call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) {
+
   # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support.
   size <- 0
-# reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ
-  if (grepl("/", start_date) == T)
+
+  # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ
+  if (grepl("/", start_date) == TRUE)
   {
-    start_date = as.Date(paste0(lubridate::year(start_date), spatial.tools::add_leading_zeroes(lubridate::yday(start_date), 3)), format = "%Y%j")
+    start_date <- as.Date(paste0(lubridate::year(start_date), sprintf("%03d",lubridate::yday(start_date))), format = "%Y%j")
   }
-  if (grepl("/", end_date) == T)
+  if (grepl("/", end_date) == TRUE)
   {
-    end_date <- as.Date(paste0(lubridate::year(end_date), spatial.tools::add_leading_zeroes(lubridate::yday(end_date), 3)), format = "%Y%j")
+    end_date <- as.Date(paste0(lubridate::year(end_date), sprintf("%03d",lubridate::yday(end_date))), format = "%Y%j")
   }
   start_date <- as.Date(start_date, format = "%Y%j")
   end_date <- as.Date(end_date, format = "%Y%j")
-
-
+
+
   #################### if package_method == MODISTools option ####################
   if (package_method == "MODISTools")
   {
@@ -74,7 +76,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   ########## Date case 1: user only wants one date ##########
   if (start_date == end_date)
   {
-    if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)])
+    if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)])
     {
       print("Extracting data")
 
@@ -82,7 +84,7 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
       end <- as.Date(end_date, "%Y%j")
     }
     ########## For Date case 1: if only one date is asked for, but date is not within modis data product date range ##########
-    if (as.numeric(start_date)< dates[1] || as.numeric(end_date)> dates[length(dates)])
+    if (as.numeric(start_date) < dates[1] || as.numeric(end_date) > dates[length(dates)])
     {
       print(start)
      stop("start or end date are not within MODIS data product date range. Please choose another date.")
@@ -90,26 +92,26 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   } else {
     ########## Date case 2: user wants a range of dates ##########
     # Best case scenario: Start and end date asked for fall within the available date range of the modis data product.
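
PATCH 0208 replaces `spatial.tools::add_leading_zeroes` with base `sprintf("%03d", ...)` when normalizing YYYY/MM/DD dates to day-of-year form. The equivalence is easy to verify in isolation; the date below is arbitrary, and only `lubridate` plus base R are assumed:

```r
# Illustrative: YYYY/MM/DD -> Date via a zero-padded day-of-year string,
# as in the patched call_MODIS().
library(lubridate)

d   <- "2005/01/31"
doy <- sprintf("%03d", yday(d))   # "031", replaces add_leading_zeroes(x, 3)
as.Date(paste0(year(d), doy), format = "%Y%j")
# [1] "2005-01-31"
```

Dropping `spatial.tools` for this one call is what allows the package to be removed from the DESCRIPTION Imports above.
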
-    if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)])
+    if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)])
     {
       print("Check #2: All dates are available!")
     }
 
     # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of available
     # MODIS data dates
-    if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1])
+    if (as.numeric(start_date) <= dates[1] && as.numeric(end_date) >= dates[1])
     {
       start_date = dates[1]
       print("WARNING: Dates are only partially available. Start date before modis data product is available.")
     }
-    if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)])
+    if (as.numeric(end_date) >= dates[length(dates)] && as.numeric(start_date) <= dates[length(dates)])
     {
-      end_date = dates[length(dates)]
+      end_date <- dates[length(dates)]
       print("WARNING: Dates are only partially available. End date before modis data product is available.")
     }
 
     # Unacceptable scenario: start_date and end_date do not fall within the available MODIS data product date range. There is no data to extract in this scenario.
-    if ((as.numeric(start_date)<dates[1] & as.numeric(end_date)<dates[1]) | (as.numeric(start_date)>dates[length(dates)] & as.numeric(end_date)>dates[length(dates)]))
+    if ((as.numeric(start_date) < dates[1] && as.numeric(end_date < dates[1])) || (as.numeric(start_date) > dates[length(dates)] && as.numeric(end_date) > dates[length(dates)]))
     {
       stop("No MODIS data available start_date and end_date parameterized.")
     }
@@ -122,21 +124,21 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " "))
 
   # extract main band data from api
-  dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band,
-                               start=start_date, end=end_date, km_ab=size, km_lr=size)
+  dat <- MODISTools::mt_subset(lat= lat, lon = lon, product = product, band = band,
+                               start = start_date, end = end_date, km_ab = size, km_lr = size, progress = progress)
 
   # extract QC data
   if(band_qc != "")
   {
-    qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
-                                start=start, end=end, km_ab=size, km_lr=size)
+    qc <- MODISTools::mt_subset(lat = lat, lon = lon, product = product, band = band_qc,
+                                start = start, end = end, km_ab = size, km_lr = size, progress = progress)
   }
 
   # extract stdev data
   if(band_sd != "")
   {
-    sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd,
-                                start=start, end=end, km_ab=size, km_lr=size)
+    sd <- MODISTools::mt_subset(lat = lat, lon = lon, product = product, band = band_sd,
+                                start = start, end = end, km_ab = size, km_lr = size, progress = progress)
   }
 
   if (band_qc == "")
@@ -150,13 +152,13 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   {
     SD <- rep("nan", nrow(dat))
   } else {
-    SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f')
+    SD <- as.numeric(sd$value) * as.numeric(sd$scale)
   }
 
-  output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = F)
+  output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = FALSE)
   names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd")
 
-  output[,5:10] <- lapply(output[,5:10], as.numeric)
+  output[ ,5:10] <- lapply(output[,5:10], as.numeric)
 
   # scale the data + stdev to proper units
   output$data <- output$data * (as.numeric(dat$scale))
@@ -169,23 +171,29 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
     output$qc == as.character(output$qc)
     for (i in seq_len(nrow(output)))
     {
-      convert = paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
-      output$qc[i] = substr(convert, nchar(convert)-2, nchar(convert))
+      convert <- paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
+      output$qc[i] <- substr(convert, nchar(convert)-2, nchar(convert))
     }
-    good <- which(output$qc == "000" | output$qc == "001")
-    if (length(good) > 0 | !(is.null(good)))
+    good <- which(output$qc %in% c("000", "001"))
+    if (length(good) > 0 || !(is.null(good)))
     {
-      output = output[good,]
+      output = output[good, ]
     } else {
       print("All QC values are bad. No data to output with QC filter == TRUE.")
    }
   }
+
   if (!(outfolder == ""))
   {
-    fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "")
+    if (!(is.null(siteID)))
+    {
+      fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "")
+    } else {
+      fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, ".csv", sep = "")
+    }
     fname <- file.path(outfolder, fname)
-    write.csv(output, fname, row.names = F)
+    write.csv(output, fname, row.names = FALSE)
   }
 
   return(output)
@@ -195,12 +203,11 @@ call_MODIS <- function(outfolder = NULL, start_date, end_date, lat, lon, size =
   if (package_method == "reticulate"){
     # load in python script
     script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote"))
-    #script.path = file.path('/Users/bmorrison/pecan/modules/data.remote/inst/extract_modis_data.py')
     reticulate::source_python(script.path)
 
     # extract the data
     output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, start_date = start_date, end_date = end_date, size = size, band_qc = band_qc, band_sd = band_sd)
-    output[,5:10] <- lapply(output[,5:10], as.numeric)
+    output[ ,5:10] <- lapply(output[ ,5:10], as.numeric)
 
     output$lat <- round(output$lat, 4)
     output$lon <- round(output$lon, 4)
diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd
index 2c5a6ea9133..cac10cfb77b 100644
--- a/modules/data.remote/man/call_MODIS.Rd
+++ b/modules/data.remote/man/call_MODIS.Rd
@@ -4,9 +4,9 @@
 \alias{call_MODIS}
 \title{call_MODIS}
 \usage{
-call_MODIS(outfolder = NULL, start_date, end_date, lat, lon, size = 0,
-  product, band, band_qc = "", band_sd = "", siteID = NULL,
-  package_method = "MODISTools", QC_filter = F)
+call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0,
+  product, band, band_qc = "", band_sd = "", siteID = NULL,
+  package_method = "MODISTools", QC_filter = FALSE, progress = TRUE)
 }
 \arguments{
 \item{outfolder}{where the output file will be stored}
@@ -29,11 +29,13 @@ call_MODIS(outfolder = NULL, start_date, end_date, lat, lon, size = 0,
 
 \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)}
 
-\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename.}
+\item{siteID}{numeric value of BETY site id value to use for output file name. Default is NULL}
 
 \item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)}
 
-\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.
+\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.}
+
+\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE.
 
 depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json
 depends on the MODISTools package version 1.1.0}

From d7905063127c3cb33612e470f49569b9261be04c Mon Sep 17 00:00:00 2001
From: "Bailey(BNL)" <34662202+bailsofhay@users.noreply.github.com>
Date: Tue, 18 Jun 2019 16:57:53 -0400
Subject: [PATCH 0209/2289] Update modules/assim.sequential/R/sda.enkf_MultiSite.R

Co-Authored-By: Shawn P. Serbin
---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index 8a7e5874e94..b0911985542 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -29,7 +29,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
                                             plot.title = NULL,
                                             facet.plots = FALSE,
                                             debug = FALSE,
-                                            pause=FALSE),
+                                            pause = FALSE),
                                ...) {
   if (control$debug) browser()
   ###-------------------------------------------------------------------###

From 79eb1bfe84027bc98ee1427dfc1ce155f2ba0ebd Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 17:02:07 -0400
Subject: [PATCH 0210/2289] remove multi.flag hard coded

---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index a86b528fcb8..52e89c61a93 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -49,7 +49,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
   scalef <- settings$state.data.assimilation$scalef %>% as.numeric() # scale factor for localization
   var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name")
   names(var.names) <- NULL
-  multi.site.flag <- T#PEcAn.settings::is.MultiSettings(settings)
+  multi.site.flag <- PEcAn.settings::is.MultiSettings(settings)
   readsFF<-NULL # this keeps the forward forecast
   nitr.GEF <- ifelse(is.null(settings$state.data.assimilation$nitrGEF), 1e6, settings$state.data.assimilation$nitrGEF %>%as.numeric)
   nthin <- ifelse(is.null(settings$state.data.assimilation$nthin), 100, settings$state.data.assimilation$nthin %>%as.numeric)

From 61ce1fd17056d4aa365fc354f00d3f50a62da388 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 17:10:55 -0400
Subject: [PATCH 0211/2289] added <0 value warning to write_restart.SIPNET

---
 models/sipnet/R/write_restart.SIPNET.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R
index 52d5b8aafb5..08550dfbbd2 100755
--- a/models/sipnet/R/write_restart.SIPNET.R
+++ b/models/sipnet/R/write_restart.SIPNET.R
@@ -21,8 +21,8 @@
 ##' @param new.params list of parameters to convert between different states
 ##' @param inputs list of model inputs to use in write.configs.SIPNET
 ##'
-##' @description Write restart files for SIPNET
-##'
+##' @description Write restart files for SIPNET. WARNING: Some variables produce illegal values < 0 and have been hardcoded to correct these values!!
+##'
 ##' @return NONE
 ##' @export
 write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, new.state,

From 92823a46738d72a656a6f775b4f2a8ed6405f4ab Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov" <bmorrison@bnl.gov>
Date: Tue, 18 Jun 2019 17:11:58 -0400
Subject: [PATCH 0212/2289] fixed | and & to || and &&

---
 models/sipnet/R/write_restart.SIPNET.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R
index 08550dfbbd2..bd7e92e67ff 100755
--- a/models/sipnet/R/write_restart.SIPNET.R
+++ b/models/sipnet/R/write_restart.SIPNET.R
@@ -91,11 +91,11 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
   if ("SoilMoistFrac" %in% variables) {
     analysis.save[[length(analysis.save) + 1]] <- new.state$SoilMoistFrac ## unitless
-    if (new.state$SoilMoistFrac < 0 | new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5
+    if (new.state$SoilMoistFrac < 0 || new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5
     names(analysis.save[[length(analysis.save)]]) <- c("litterWFrac")
 
     analysis.save[[length(analysis.save) + 1]] <- new.state$SoilMoistFrac ## unitless
-    if (new.state$SoilMoistFrac < 0 | new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5
+    if (new.state$SoilMoistFrac < 0 || new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5
     names(analysis.save[[length(analysis.save)]]) <- c("soilWFrac")
   }
 
@@ -111,7 +111,7 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
     names(analysis.save[[length(analysis.save)]]) <- c("LAI")
   }
 
-  if (!is.null(analysis.save) & length(analysis.save)>0){
+  if (!is.null(analysis.save) && length(analysis.save)>0){
     analysis.save.mat <- data.frame(matrix(unlist(analysis.save, use.names = TRUE), nrow = 1))
     colnames(analysis.save.mat) <- names(unlist(analysis.save))
   }else{

From cc14bb95eb2cd239244653aa42648f9ab30d867e Mon Sep 17 00:00:00 2001
From: sl4397
Date: Wed, 19 Jun 2019 03:46:01 -0500
Subject: [PATCH 0213/2289] update bety modal

---
 shiny/workflowPlot/server.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R
index dfbe4cdb199..585d476da4d 100644
--- a/shiny/workflowPlot/server.R
+++ b/shiny/workflowPlot/server.R
@@ -70,7 +70,7 @@ server <- shinyServer(function(input, output, session) {
   observeEvent(input$submitInfo,{
     tryCatch(
       {
-        bety <- PEcAn.DB::db.open(
+        bety$con <- PEcAn.DB::db.open(
           list(
             user     = input$user,
             password = input$password,

From dd7d1b10fcdef2e5381d76f453f9f9765ef4c356 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Wed, 19 Jun 2019 04:57:35 -0500
Subject: [PATCH 0214/2289] add tryCatch Statement

---
 .../server_files/select_data_server.R |  7 +-
 .../server_files/sidebar_server.R     | 72 ++++++++++---------
 2 files changed, 45 insertions(+), 34 deletions(-)

diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R
index 3eb4bc33225..e414731d556 100644
--- a/shiny/workflowPlot/server_files/select_data_server.R
+++ b/shiny/workflowPlot/server_files/select_data_server.R
@@ -29,7 +29,12 @@ observeEvent(input$load_model,{
         README.text <- c(README.text,
                          paste("SELECTION",i),
                          "============",
-                         readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt")),
+                         tryCatch({
+                           readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt"))
+                         },
+                         error = function(e){
+                           return(NULL)
+                         }),
                          diff_message,
                          ""
         )
diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R
index 9fa340bfa38..f09bf8251a9 100644
--- a/shiny/workflowPlot/server_files/sidebar_server.R
+++ b/shiny/workflowPlot/server_files/sidebar_server.R
@@ -69,7 +69,15 @@ load.model <- eventReactive(input$load_model,{
   req(input$all_run_id)
   # Get IDs DF from 'all_run_id' string
   ids_DF <- parse_ids_from_input_runID(input$all_run_id)
-  globalDF <- map2_df(ids_DF$wID, ids_DF$runID, ~load_data_single_run(bety, .x, .y))
+  globalDF <- map2_df(ids_DF$wID, ids_DF$runID,
+                      ~tryCatch({
+                        load_data_single_run(bety, .x, .y)
+                      },
+                      error = function(e){
+                        toastr_error(title = paste("Error in WorkflowID", .x),
+                                     conditionMessage(e))
+                        return()
+                      }))
   print("Yay the model data is loaded!")
   print(head(globalDF))
  globalDF$var_name <- as.character(globalDF$var_name)
@@ -79,41 +87,39 @@ load.model <- eventReactive(input$load_model,{
 
 # Update all variable names
 observeEvent(input$load_model, {
-  tryCatch({
-    req(input$all_run_id)
-    # All information about a model is contained in 'all_run_id' string
-    ids_DF <- parse_ids_from_input_runID(input$all_run_id)
-    var_name_list <- c()
-    for(row_num in 1:nrow(ids_DF)){
-      var_name_list <- c(var_name_list, var_names_all(bety, ids_DF$wID[row_num], ids_DF$runID[row_num]))
-    }
-    updateSelectizeInput(session, "var_name_model", choices = var_name_list)
-    #Signaling the success of the operation
-    toastr_success("Update variable names")
-  },
-  error = function(e) {
-    toastr_error(title = "Error", conditionMessage(e))
-  })
+  req(input$all_run_id)
+  # All information about a model is contained in 'all_run_id' string
+  ids_DF <- parse_ids_from_input_runID(input$all_run_id)
+  var_name_list <- c()
+  for(row_num in 1:nrow(ids_DF)){
+    var_name_list <- c(var_name_list,
+                       tryCatch({
+                         var_names_all(bety, ids_DF$wID[row_num], ids_DF$runID[row_num])
+                       },
+                       error = function(e){
+                         return(NULL)
+                       }))
+  }
+  updateSelectizeInput(session, "var_name_model", choices = var_name_list)
 })
 
 observeEvent(input$load_model,{
-  tryCatch({
-    # Retrieves all site ids from multiple selected run ids when load button is pressed
-    req(input$all_run_id)
-    ids_DF <- parse_ids_from_input_runID(input$all_run_id)
-    site_id_list <- c()
-    for(row_num in 1:nrow(ids_DF)){
-      settings <- getSettingsFromWorkflowId(bety,ids_DF$wID[row_num])
-      site.id <- c(settings$run$site$id)
-      site_id_list <- c(site_id_list,site.id)
-    }
-    updateSelectizeInput(session, "all_site_id", choices=site_id_list)
-    #Signaling the success of the operation
-    toastr_success("Retrieve site IDs")
-  },
-  error = function(e) {
-    toastr_error(title = "Error", conditionMessage(e))
-  })
+  # Retrieves all site ids from multiple selected run ids when load button is pressed
+  req(input$all_run_id)
+  ids_DF <- parse_ids_from_input_runID(input$all_run_id)
+  site_id_list <- c()
+  for(row_num in 1:nrow(ids_DF)){
+    settings <-
+      tryCatch({
getSettingsFromWorkflowId(bety,ids_DF$wID[row_num]) + }, + error = function(e){ + return(NULL) + }) + site.id <- c(settings$run$site$id) + site_id_list <- c(site_id_list,site.id) + } + updateSelectizeInput(session, "all_site_id", choices=site_id_list) }) # Update input id list as (input id, name) observe({ From d46cdf00d2cf2d2f226f50ac57c81af79a555fc3 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Wed, 19 Jun 2019 16:17:05 -0400 Subject: [PATCH 0215/2289] added force job error output --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 6b848ffb797..6c5066f79f7 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -289,7 +289,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = } if (!(paste0(obs.year, '.nc') %in% files)) { - bad <- print(paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc")) + write.csv(paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc"), file = paste0(getwd(), '/SDA/forced_job_output.csv'), append = TRUE) file <- paste0(gsub("out", "run", folders[i]), "/", "job.sh") system(paste0("sh ", file)) } From e65df5208eda21b3e07c41b7e8a6cf0081ff7b66 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Thu, 20 Jun 2019 12:53:13 -0400 Subject: [PATCH 0216/2289] bringing branch up to date --- .../inst/WillowCreek/gefs.sipnet.template.xml | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index 778e3f57497..bf88cdbc19e 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -14,11 +14,9 @@ -9999 9999
-<<<<<<< HEAD TotSoilCarb KgC/m^2 -======= Qle MW/m2 @@ -28,12 +26,10 @@ SoilMoistFrac m/m ->>>>>>> b2502138928d03585677f39c8d67d9d94b709f7f 0 9999 -<<<<<<< HEAD SoilMoistFrac 0 @@ -45,7 +41,6 @@ 0 9999 -======= SWE kg/m^2 0 @@ -57,7 +52,6 @@ 0 9999 ->>>>>>> b2502138928d03585677f39c8d67d9d94b709f7f year 2017-01-01 @@ -77,21 +71,15 @@ psql-pecan.bu.edu bety PostgreSQL -<<<<<<< HEAD true -======= TRUE ->>>>>>> b2502138928d03585677f39c8d67d9d94b709f7f /fs/data3/kzarada/pecan.data/dbfiles/ -<<<<<<< HEAD temperate.coniferous -======= temperate.deciduous ->>>>>>> b2502138928d03585677f39c8d67d9d94b709f7f 1 From 7a452c5664b911881ed0ebd9a436b76a568f55e4 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Thu, 20 Jun 2019 15:36:39 -0400 Subject: [PATCH 0217/2289] updating branch --- modules/assim.sequential/inst/WillowCreek/workflow.template.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index 531504c4f20..1df5cf8a9ab 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -28,7 +28,7 @@ if (is.na(args[2])){ if (is.na(args[3])){ xmlTempName <-"gefs.sipnet.template.xml" } else { - xmlTempName <- args[2] + xmlTempName <- args[3] } setwd(outputPath) #------------------------------------------------------------------------------------------------ @@ -212,7 +212,7 @@ if ('state.data.assimilation' %in% names(settings)) { interactivePlot =FALSE, TimeseriesPlot =TRUE, BiasPlot =FALSE, - debug = FALSE, + debug = TRUE, pause=FALSE ) ) From 2852ddf6bcfea0fcbb5eee4e675f2e8b04158775 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Fri, 21 Jun 2019 08:47:31 -0400 Subject: [PATCH 0218/2289] updating branch --- modules/assim.sequential/R/sda_plotting.R | 2 ++ .../inst/WillowCreek/gefs.sipnet.template.xml | 1 - modules/data.atmosphere/NAMESPACE | 1 - .../data.atmosphere/man/download.US_Syv.Rd | 32 ------------------- 4 files changed, 2 insertions(+), 34 deletions(-) delete mode 100644 modules/data.atmosphere/man/download.US_Syv.Rd diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index a5725a82494..0c05b302ae2 100644 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -767,3 +767,5 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs } + + diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index bf88cdbc19e..ecb42d48f08 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -72,7 +72,6 @@ bety PostgreSQL true - TRUE /fs/data3/kzarada/pecan.data/dbfiles/ diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 9ee06bbea5a..720e49dea79 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -36,7 +36,6 @@ export(download.NOAA_GEFS) export(download.PalEON) export(download.PalEON_ENS) export(download.US_Los) -export(download.US_Syv) export(download.US_WCr) export(download.US_Wlef) export(equation_of_time) diff --git a/modules/data.atmosphere/man/download.US_Syv.Rd b/modules/data.atmosphere/man/download.US_Syv.Rd deleted file mode 100644 index 9b80b15bbd2..00000000000 --- 
a/modules/data.atmosphere/man/download.US_Syv.Rd +++ /dev/null @@ -1,32 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/download.US_Syv.R -\name{download.US_Syv} -\alias{download.US_Syv} -\title{download.US-Syv} -\usage{ -download.US_Syv(start_date, end_date, timestep = 1) -} -\arguments{ -\item{start_date}{Start date/time data should be downloaded for} - -\item{end_date}{End date/time data should be downloaded for} - -\item{timestep}{How often to take data points from the file. Must be a multiple of 0.5} -} -\description{ -download.US-Syv -} -\section{General Description}{ - -Obtains data from Ankur Desai's Sylvannia flux tower, and selects certain variables (NEE and LE) to return -Data is retruned at the given timestep in the given range. - -This data includes information on a number of flux variables. - -The timestep parameter is measured in hours, but is then converted to half hours because the data's timestep -is every half hour. -} - -\author{ -Luke Dramko and K Zarada -} From b90ab46d538a04fa5b80e55824017d1db57a2d9a Mon Sep 17 00:00:00 2001 From: sl4397 Date: Mon, 24 Jun 2019 05:17:46 -0500 Subject: [PATCH 0219/2289] highcharter --- shiny/workflowPlot/server.R | 1 + .../server_files/model_plots_server.R | 43 ++++++++++++++----- shiny/workflowPlot/ui.R | 1 + shiny/workflowPlot/ui_files/model_plots_UI.R | 33 +++++++------- 4 files changed, 51 insertions(+), 27 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 585d476da4d..b3cd64eac72 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -13,6 +13,7 @@ lapply(c("PEcAn.visualization", lapply(c( "shiny", "ggplot2", "plotly", + "highcharter", "shinyjs", "dplyr", "reshape2", diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index 7532dc4cc60..eab00ebad9b 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -40,32 +40,53 @@ observeEvent(input$units_model,{ observeEvent(input$ex_plot_model,{ req(input$units_model) - output$modelPlot <- renderPlotly({ - + #output$modelPlot <- renderPlotly({ + output$modelPlot <- renderHighchart({ + input$ex_plot_model isolate({ tryCatch({ df <- dplyr::filter(load.model(), var_name == input$var_name_model) - + updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) - + title <- unique(df$title) xlab <- unique(df$xlab) ylab <- unique(df$ylab) - + unit <- ylab if(input$units_model != unit & udunits2::ud.are.convertible(unit, input$units_model)){ df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) ylab <- input$units_model } + + # data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) + # + # plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) + # plt <- plt + data_geom() + # plt <- plt + labs(title=title, x=xlab, y=ylab) + # plt <- plt + geom_smooth(n=input$smooth_n_model) + # ply <- ggplotly(plt) + + + df$run_id <- as.numeric(as.character(df$run_id)) + xts.df <- xts(df[,c("vals", "run_id")], order.by = df$dates) + + plot_type <- switch(input$plotType_model, point = "scatter", line = "line") + # not sure if this method to calcualte smoothing parameter is correct + smooth_param <- input$smooth_n_model / nrow(df) + + ply <- highchart(type = "stock") %>% + hc_add_series(xts.df$vals, type = plot_type, name = title, regression = TRUE, + regressionSettings = list(type = "loess", loessSmooth = smooth_param)) %>% + 
hc_add_dependency("plugins/highcharts-regression.js") %>% + hc_title(text = title) %>% + hc_xAxis(title = list(text = xlab), type = 'datetime') %>% + hc_yAxis(title = list(text = ylab)) %>% + hc_tooltip(pointFormat = " Date: {point.x:%Y-%m-%d %H:%M}
y: {point.y}") %>% + hc_exporting(enabled = TRUE) - data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) - plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_model) - ply <- ggplotly(plt) #Signaling the success of the operation toastr_success("Generate interactive plots") }, diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index bf1a0c055bf..3ac303452de 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -1,5 +1,6 @@ library(shiny) library(plotly) +library(highcharter) library(shinythemes) library(knitr) library(shinyjs) diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index 7ed8888d3bb..754e16beb95 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -4,22 +4,23 @@ tabPanel( 12, div( id = "plot-container", - div( - class = "plotlybars-wrapper", - div( - class = "plotlybars", - div(class = "plotlybars-bar b1"), - div(class = "plotlybars-bar b2"), - div(class = "plotlybars-bar b3"), - div(class = "plotlybars-bar b4"), - div(class = "plotlybars-bar b5"), - div(class = "plotlybars-bar b6"), - div(class = "plotlybars-bar b7") - ), - div(class = "plotlybars-text", - p("Updating the plot. Hold tight!")) - ), - plotlyOutput("modelPlot") + # div( + # class = "plotlybars-wrapper", + # div( + # class = "plotlybars", + # div(class = "plotlybars-bar b1"), + # div(class = "plotlybars-bar b2"), + # div(class = "plotlybars-bar b3"), + # div(class = "plotlybars-bar b4"), + # div(class = "plotlybars-bar b5"), + # div(class = "plotlybars-bar b6"), + # div(class = "plotlybars-bar b7") + # ), + # div(class = "plotlybars-text", + # p("Updating the plot. Hold tight!")) + # ), + #plotlyOutput("modelPlot") + highchartOutput("modelPlot") ) ))), div(id = "model_plot_static", column( From eefa35a7b6634635e2d25f39cac0ae5ad51a6479 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 10:11:38 -0400 Subject: [PATCH 0220/2289] don't remember what I changed --- models/sipnet/R/#write_restart.SIPNET.R# | 126 ------------------ models/sipnet/R/read_restart.SIPNET.R | 5 + models/sipnet/R/write.configs.SIPNET.R | 5 + models/sipnet/R/write_restart.SIPNET.R | 6 + .../assim.sequential/R/sda.enkf_MultiSite.R | 7 +- modules/data.remote/R/download.thredds.R | 103 ++++++++++++++ 6 files changed, 124 insertions(+), 128 deletions(-) delete mode 100755 models/sipnet/R/#write_restart.SIPNET.R# create mode 100755 modules/data.remote/R/download.thredds.R diff --git a/models/sipnet/R/#write_restart.SIPNET.R# b/models/sipnet/R/#write_restart.SIPNET.R# deleted file mode 100755 index 4526b426e21..00000000000 --- a/models/sipnet/R/#write_restart.SIPNET.R# +++ /dev/null @@ -1,126 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. 
This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##' @title write_restart.SIPNET -##' @name write_restart.SIPNET -##' @author Ann Raiho \email{araiho@@nd.edu} -##' -##' @param outdir output directory -##' @param runid run ID -##' @param start.time start date and time for each SDA ensemble -##' @param stop.time stop date and time for each SDA ensemble -##' @param settings PEcAn settings object -##' @param new.state analysis state vector -##' @param RENAME flag to either rename output file or not -##' @param new.params list of parameters to convert between different states -##' @param inputs list of model inputs to use in write.configs.SIPNET -##' -##' @description Write restart files for SIPNET -##' -##' @return NONE -##' @export -write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, new.state, - RENAME = TRUE, new.params = FALSE, inputs) { - - rundir <- settings$host$rundir - variables <- colnames(new.state) - # values that will be used for updating other states deterministically depending on the SDA states - IC_extra <- data.frame(t(new.params$restart)) - - if (RENAME) { - file.rename(file.path(outdir, runid, "sipnet.out"), - file.path(outdir, runid, paste0("sipnet.", as.Date(start.time), ".out"))) - system(paste("rm", file.path(rundir, runid, "sipnet.clim"))) - } else { - print(paste("Files not renamed -- Need to rerun year", start.time, "before next time step")) - } - - settings$run$start.date <- start.time - settings$run$end.date <- stop.time - - ## Converting to sipnet units - prior.sla <- new.params[[which(!names(new.params) %in% c("soil", "soil_SDA", "restart"))[1]]]$SLA - unit.conv <- 2 * (10000 / 1) * (1 / 1000) * (3.154 * 10^7) # kgC/m2/s -> Mg/ha/yr - - analysis.save <- list() - - if ("NPP" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- udunits2::ud.convert(new.state$NPP, "kg/m^2/s", "Mg/ha/yr") #*unit.conv -> Mg/ha/yr - names(analysis.save[[length(analysis.save)]]) <- c("NPP") - } - -if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0.0001 - names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood") - - analysis.save[[length(analysis.save) + 1]] <- IC_extra$abvGrndWoodFrac - names(analysis.save[[length(analysis.save)]]) <- c("abvGrndWoodFrac") - - analysis.save[[length(analysis.save) + 1]] <- IC_extra$coarseRootFrac - names(analysis.save[[length(analysis.save)]]) <- c("coarseRootFrac") - - analysis.save[[length(analysis.save) + 1]] <- IC_extra$fineRootFrac - names(analysis.save[[length(analysis.save)]]) <- c("fineRootFrac") - } - - if ("LeafC" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- new.state$LeafC * prior.sla * 2 ## kgC/m2*m2/kg*2kg/kgC -> m2/m2 - if (new.state$LeafC < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("lai") - } - - if ("Litter" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- udunits2::ud.convert(new.state$Litter, 'kg m-2', 'g m-2') # kgC/m2 -> gC/m2 - if (new.state$Litter < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("litter") - } - - if ("TotSoilCarb" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- 
udunits2::ud.convert(new.state$TotSoilCarb, 'kg m-2', 'g m-2') # kgC/m2 -> gC/m2 - if (new.state$TotSoilCarb < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("soil") - } - - if ("SoilMoistFrac" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- new.state$SoilMoistFrac ## unitless - if (new.state$SoilMoistFrac < 0 | new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5 - names(analysis.save[[length(analysis.save)]]) <- c("litterWFrac") - - analysis.save[[length(analysis.save) + 1]] <- new.state$SoilMoistFrac ## unitless - if (new.state$SoilMoistFrac < 0 | new.state$SoilMoistFrac > 1) analysis.save[[length(analysis.save)]] <- 0.5 - names(analysis.save[[length(analysis.save)]]) <- c("soilWFrac") - } - - if ("SWE" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- new.state$SWE/10 - if (new.state$SWE < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("snow") - } - - if ("LAI" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- new.state$LAI - if (new.state$LAI < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("LAI") - } - - if (!is.null(analysis.save) & length(analysis.save)>0){ - analysis.save.mat <- data.frame(matrix(unlist(analysis.save, use.names = TRUE), nrow = 1)) - colnames(analysis.save.mat) <- names(unlist(analysis.save)) - }else{ - analysis.save.mat<-NULL - } - - - do.call(write.config.SIPNET, args = list(defaults = NULL, - trait.values = new.params, - settings = settings, - run.id = runid, - inputs = inputs, - IC = analysis.save.mat)) - print(runid) -} # write_restart.SIPNET \ No newline at end of file diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index 6d23a193a83..c3dd50c5f43 100755 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -73,6 +73,11 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p forecast[[length(forecast) + 1]] <- ens$litter_carbon_content[last] ##kgC/m2 names(forecast[[length(forecast)]]) <- c("Litter") } + + if ("NEE" %in% var.names) { + forecast[[length(forecast) + 1]] <- ens$NEE[last] ## gC/m2 + names(forecast[[length(forecast)]]) <- c("NEE") + } if ("SoilMoistFrac" %in% var.names) { forecast[[length(forecast) + 1]] <- ens$SoilMoistFrac[last] ## unitless diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R index 9453178bfb1..55d6f45719f 100755 --- a/models/sipnet/R/write.configs.SIPNET.R +++ b/models/sipnet/R/write.configs.SIPNET.R @@ -427,6 +427,11 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if (!is.na(lai) && is.numeric(lai)) { param[which(param[, 1] == "laiInit"), 2] <- lai } + ## neeInit gC/m2 + nee <- try(ncdf4::ncvar_get(IC.nc,"nee"),silent = TRUE) + if (!is.na(nee) && is.numeric(nee)) { + param[which(param[, 1] == "neeInit"), 2] <- nee + } ## litterInit gC/m2 if ("litter" %in% names(IC.pools)) { param[which(param[, 1] == "litterInit"), 2] <- udunits2::ud.convert(IC.pools$litter, 'kg m-2', 'g m-2') # BETY: kgC m-2 diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R index bd7e92e67ff..2ed67a88d05 100755 --- a/models/sipnet/R/write_restart.SIPNET.R +++ b/models/sipnet/R/write_restart.SIPNET.R @@ -54,6 +54,12 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, 
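# A minimal hedged sketch (IC.nc and the "neeInit" parameter name are taken
# from the write.config.SIPNET hunk above): try() returns a "try-error"
# object when ncvar_get() fails, so a fully defensive version of that guard
# would test inherits() before is.na():
#   nee <- try(ncdf4::ncvar_get(IC.nc, "nee"), silent = TRUE)
#   if (!inherits(nee, "try-error") && is.numeric(nee) && !is.na(nee)) {
#     param[which(param[, 1] == "neeInit"), 2] <- nee
#   }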
analysis.save[[length(analysis.save) + 1]] <- udunits2::ud.convert(new.state$NPP, "kg/m^2/s", "Mg/ha/yr") #*unit.conv -> Mg/ha/yr names(analysis.save[[length(analysis.save)]]) <- c("NPP") } + + if ("NEE" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$NEE + if (new.state$NEE < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("NEE") + } if ("AbvGrndWood" %in% variables) { AbvGrndWood <- udunits2::ud.convert(new.state$AbvGrndWood, "Mg/ha", "g/m^2") diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 6c5066f79f7..1214e85c9cd 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -287,15 +287,18 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = { files <- files[-remove] } + missing = vector() if (!(paste0(obs.year, '.nc') %in% files)) { - write.csv(paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc"), file = paste0(getwd(), '/SDA/forced_job_output.csv'), append = TRUE) + bad <- (paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc")) file <- paste0(gsub("out", "run", folders[i]), "/", "job.sh") system(paste0("sh ", file)) } + missing = c(missing, bad) } } - + write.csv(missing, file = paste0(getwd(), '/SDA/forced_job_output.csv'), append = TRUE) + #------------------------------------------- Reading the output if (control$debug) browser() #--- Reading just the first run when we have all years and for VIS diff --git a/modules/data.remote/R/download.thredds.R b/modules/data.remote/R/download.thredds.R new file mode 100755 index 00000000000..4b3fadacd2f --- /dev/null +++ b/modules/data.remote/R/download.thredds.R @@ -0,0 +1,103 @@ +# +##' @title download.thredds.AGB +##' @name download.thredds.AGB +##' +##' @param outdir Where to place output +##' @param site_ids What locations to download data at? +##' @param run_parallel Logical. Download and extract files in parallel? +##' @param ncores Optional. If run_parallel=TRUE how many cores to use? 
If left as NULL will select max number -1
+##' 
+##' @return data.frame summarizing the results of the function call
+##' 
+##' @examples
+##' \dontrun{
+##' outdir <- "~/scratch/abg_data/"
+
+##' results <- PEcAn.data.remote::download.thredds.AGB(outdir=outdir,
+##' site_ids = c(676, 678, 679, 755, 767, 1000000030, 1000000145, 1000025731),
+##' run_parallel = TRUE, ncores = 8)
+##' }
+##' 
+##' @export
+##' @author Bailey Morrison
+##' 
+download.thredds.AGB <- function(outdir = NULL, site_ids, run_parallel = FALSE, 
+                                 ncores = NULL) {
+  
+  
+  bety <- list(user='bety', password='bety', host='localhost',
+               dbname='bety', driver='PostgreSQL',write=TRUE)
+  con <- PEcAn.DB::db.open(bety)
+  bety$con <- con
+  site_ID <- as.character(site_ids)
+  suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon,
+                                              ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})",
+                                              ids = site_ID, .con = con))
+  suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry))
+  suppressWarnings(qry_results <- DBI::dbFetch(qry_results))
+  site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat,
+                    lon=qry_results$lon, time_zone=qry_results$time_zone)
+  
+  mylat = site_info$lat
+  mylon = site_info$lon
+  
+  # site specific URL for dataset --> these will be made to work for all THREDDS datasets in the future, but for now, just testing with
+  # this one dataset. This specific dataset only has 1 year (2005), so no temporal looping for now.
+  obs_file = "https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1221/agb_5k.nc4"
+  obs_err = "https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1221/agb_SE_5k.nc4"
+  files = c(obs_file, obs_err)
+  
+  # function to extract ncdf data from lat and lon values for value + SE URLs
+  get_data = function(i)
+  {
+    data = ncdf4::nc_open(files[1])
+    agb_lats = ncdf4::ncvar_get(data, "latitude")
+    agb_lons = ncdf4::ncvar_get(data, "longitude")
+    
+    agb_x = which(abs(agb_lons- mylon[i]) == min(abs(agb_lons - mylon[i])))
+    agb_y = which(abs(agb_lats- mylat[i]) == min(abs(agb_lats - mylat[i])))
+    
+    start = c(agb_x, agb_y)
+    count = c(1,1)
+    d = ncdf4::ncvar_get(ncdf4::nc_open(files[1]), "abvgrndbiomass", start=start, count = count)
+    if (is.na(d)) d <- NA
+    sd = ncdf4::ncvar_get(ncdf4::nc_open(files[2]), "agbSE", start=start, count = count)
+    if (is.na(sd)) sd <- NA
+    date = "2005"
+    site = site_ID[i]
+    output = as.data.frame(cbind(d, sd, date, site))
+    names(output) = c("value", "sd", "date", "siteID")
+    
+    # option to save output dataset to directory for user. 
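# As a worked sketch of the naming used just below (values taken from this
# file's own example block, so the site id 676 is illustrative):
# basename(files[1]) gives "agb_5k.nc4", and
#   sub("^([^.]*).*", "\\1", "agb_5k.nc4")   # -> "agb_5k"
# keeps everything before the first dot, so site 676 would be written to
# <outdir>/THREDDS_agb_5k_site_676.csv.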
+ if (!(is.null(outdir))) + { + write.csv(output, file = paste0(outdir, "THREDDS_", sub("^([^.]*).*", "\\1",basename(files[1])), "_site_", site, ".csv"), row.names = FALSE) + } + + return(output) + } + + ## setup parallel + if (run_parallel) { + if (!is.null(ncores)) { + ncores <- ncores + } else { + ncores <- parallel::detectCores() -1 + } + require(doParallel) + PEcAn.logger::logger.info(paste0("Running in parallel with: ", ncores)) + cl = parallel::makeCluster(ncores) + doParallel::registerDoParallel(cl) + data = foreach(i = seq_along(mylat), .combine = rbind) %dopar% get_data(i) + stopCluster(cl) + + } else { + # setup sequential run + data = data.frame() + for (i in seq_along(mylat)) + { + data = rbind(data, get_data(i)) + } + } + + return(data) +} From 5439e38253caeee6c33b2c7213971f31284d11bd Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 10:28:38 -0400 Subject: [PATCH 0221/2289] fixed spaces, checked example --- modules/data.remote/R/call_MODIS.R | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 88359427ec1..6b2de99c269 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -13,18 +13,17 @@ ##' @param band string value for which measurement to extract ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) -##' @param siteID numeric value of BETY site id value to use for output file name. Default is NULL +##' @param siteID numeric value of BETY site id value to use for output file name. Default is NULL. Only MODISTools option. ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) -##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. -##' @param progress TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. Only MODISTools option. +##' @param progress TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. Only MODISTools option. ##' ##' depends on a number of Python libraries. 
sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 ##' ##' @examples ##' \dontrun{ -##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools") -##' plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) +##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", progress = TRUE, QC_filter = FALSE) ##' test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") ##' } ##' @@ -158,7 +157,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = FALSE) names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd") - output[ ,5:10] <- lapply(output[,5:10], as.numeric) + output[ ,5:10] <- lapply(output[ ,5:10], as.numeric) # scale the data + stdev to proper units output$data <- output$data * (as.numeric(dat$scale)) @@ -172,7 +171,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, for (i in seq_len(nrow(output))) { convert <- paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "") - output$qc[i] <- substr(convert, nchar(convert)-2, nchar(convert)) + output$qc[i] <- substr(convert, nchar(convert) - 2, nchar(convert)) } good <- which(output$qc %in% c("000", "001")) if (length(good) > 0 || !(is.null(good))) From 2f1d0c41bf0905b0361cf59efc64851e9b81721f Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 10:47:26 -0400 Subject: [PATCH 0222/2289] added option to keep .nc* files --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 1214e85c9cd..222b5d8e662 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -30,6 +30,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = facet.plots = FALSE, debug = FALSE, pause = FALSE), + keepNC = TRUE, ...) { if (control$debug) browser() ###-------------------------------------------------------------------### @@ -211,7 +212,10 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = if (t > 1){ #removing old simulations ## keep all old simulations right now since there is debugging going on - #unlink(list.files(outdir, "*.nc.var", recursive = TRUE, full.names = TRUE)) + if (!(keepNC)) + { + unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) + } #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file. 
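# A usage sketch for the keepNC switch introduced in this patch (the call
# below is illustrative, with argument names taken from the function
# signature above; it is not a run from the repository):
#   sda.enkf.multisite(settings, obs.mean, obs.cov, keepNC = FALSE)
# With keepNC = FALSE each cycle's *.nc output is unlinked before the next
# forecast is written, keeping disk use bounded over long assimilations.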
inputs.split <- conf.settings %>% purrr::map2(inputs, function(settings, inputs) { From cd3e032a36a73f61b6684ac915e8bd98526a6353 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 10:50:26 -0400 Subject: [PATCH 0223/2289] brought back plot line --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 222b5d8e662..0d3b3b5d1d4 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -510,7 +510,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) #writing down the image - either you asked for it or not :) -# if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) if (t == 1){ unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) } From 481f6edc8fe78528436f0b1ddb5ec9a0b11c5c0b Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 12:54:00 -0400 Subject: [PATCH 0224/2289] roxyginzed --- modules/data.remote/man/call_MODIS.Rd | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index cac10cfb77b..b686d520510 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -29,13 +29,13 @@ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0, \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)} -\item{siteID}{numeric value of BETY site id value to use for output file name. Default is NULL} +\item{siteID}{numeric value of BETY site id value to use for output file name. Default is NULL. Only MODISTools option.} \item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} -\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.} +\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. Only MODISTools option.} -\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. +\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. Only MODISTools option. depends on a number of Python libraries. 
sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} @@ -45,8 +45,7 @@ Get MODIS data by date and location } \examples{ \dontrun{ -test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools") -plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) +test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", progress = TRUE, QC_filter = FALSE) test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") } From 0c2a57353013dec75aa9c2404c84434e8ace2ca1 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 16:03:45 -0400 Subject: [PATCH 0225/2289] plotting option per hamze added --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 0d3b3b5d1d4..e0496329113 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -300,8 +300,8 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = } missing = c(missing, bad) } + write.csv(missing, file = paste0(getwd(), '/SDA/forced_job_output.csv'), append = TRUE) } - write.csv(missing, file = paste0(getwd(), '/SDA/forced_job_output.csv'), append = TRUE) #------------------------------------------- Reading the output if (control$debug) browser() @@ -510,7 +510,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) #writing down the image - either you asked for it or not :) - if (t%%2==0 | t==nt) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + if (t%%2==0 || t==nt && control$TimeseriesPlot) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) if (t == 1){ unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) } From 021758b02b9ea9b25561007ac4ae0aeb3ef9f063 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 24 Jun 2019 16:05:58 -0400 Subject: [PATCH 0226/2289] removed if statement --- modules/assim.sequential/R/sda.enkf_MultiSite.R | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index e0496329113..6a6fd9b4ec6 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -212,7 +212,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = if (t > 1){ #removing old simulations ## keep all old simulations right now since there is debugging going on - if (!(keepNC)) + if (!(keepNC) && t != 1) { unlink(list.files(outdir, "*.nc", 
recursive = TRUE, full.names = TRUE))
     }
@@ -511,9 +511,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart =
        file = file.path(settings$outdir,"SDA", "sda.output.Rdata"))
   
   #writing down the image - either you asked for it or not :)
   if (t%%2==0 || t==nt && control$TimeseriesPlot) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF)
-  if (t == 1){
-    unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE))
-  }
+  
 } ### end loop over time
} # sda.enkf

From f071d3cc54e54397 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Wed, 26 Jun 2019 03:58:11 -0500
Subject: [PATCH 0227/2289] add grouping to highcharter

---
 .../server_files/model_plots_server.R | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R
index eab00ebad9b..13d38a2791b 100644
--- a/shiny/workflowPlot/server_files/model_plots_server.R
+++ b/shiny/workflowPlot/server_files/model_plots_server.R
@@ -74,11 +74,18 @@ observeEvent(input$ex_plot_model,{
         plot_type <- switch(input$plotType_model, point = "scatter", line = "line")
         # not sure if this method to calculate smoothing parameter is correct
-        smooth_param <- input$smooth_n_model / nrow(df)
+        smooth_param <- input$smooth_n_model / nrow(df) *100
 
-        ply <- highchart(type = "stock") %>%
-          hc_add_series(xts.df$vals, type = plot_type, name = title, regression = TRUE,
-                        regressionSettings = list(type = "loess", loessSmooth = smooth_param)) %>%
+        ply <- highchart()
+        
+        for(i in unique(xts.df$run_id)){
+          ply <- ply %>%
+            hc_add_series(xts.df[xts.df$run_id == i, "vals"],
+                          type = plot_type, name = i, regression = TRUE,
+                          regressionSettings = list(type = "loess", loessSmooth = smooth_param))
+        }
+        
+        ply <- ply %>%
           hc_add_dependency("plugins/highcharts-regression.js") %>%
           hc_title(text = title) %>%

From cc1f0bd46501403a6d26845060b2b8dc7a5960db Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Wed, 26 Jun 2019 09:54:46 -0400
Subject: [PATCH 0228/2289] fixed print with pecan.logger

---
 modules/data.remote/R/call_MODIS.R | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index 88359427ec1..2fdc3538257 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -56,7 +56,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   products = MODISTools::mt_products()
   
   if (!(product %in% products$product))
   {
-    print(products)
+    PEcAn.logger::logger.warn(products)
     stop("Product not available for MODIS API. Please chose a product from the list above.")
   }
@@ -64,7 +64,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   bands <- MODISTools::mt_bands(product = product)
   
   if (!(band %in% bands$band))
   {
-    print(bands$band)
+    PEcAn.logger::logger.warn(bands$band)
     stop("Band selected is not avialable. 
Please selected from the bands listed above that correspond with the data product.")
   }
@@ -77,7 +77,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   {
     if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)])
     {
-      print("Extracting data")
+      PEcAn.logger::logger.warn("Extracting data")
       
       start <- as.Date(start_date, "%Y%j")
       end <- as.Date(end_date, "%Y%j")
@@ -85,7 +85,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
     ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ##########
     if (as.numeric(start_date) < dates[1] || as.numeric(end_date) > dates[length(dates)])
     {
-      print(start)
+      PEcAn.logger::logger.warn(start)
       stop("start or end date are not within MODIS data product date range. Please choose another date.")
     }
   } else {
@@ -93,7 +93,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
     # Best case scenario: Start and end date asked for fall with available date range of modis data product.
     if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)])
     {
-      print("Check #2: All dates are available!")
+      PEcAn.logger::logger.warn("Check #2: All dates are available!")
     }
     
    # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble
@@ -101,12 +101,12 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
    {
      start_date = dates[1]
-      print("WARNING: Dates are only partially available. Start date before modis data product is available.")
+      PEcAn.logger::logger.warn("WARNING: Dates are only partially available. Start date before modis data product is available.")
    }
    if (as.numeric(end_date) >= dates[length(dates)] && as.numeric(start_date) <= dates[length(dates)])
    {
      end_date <- dates[length(dates)]
-      print("WARNING: Dates are only partially available. End date befire modis data product is available.")
+      PEcAn.logger::logger.warn("WARNING: Dates are only partially available. End date before modis data product is available.")
    }
    
    # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario.
@@ -119,7 +119,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
     end <- as.Date(end_date, "%Y%j")
   }
   
-  print("Extracting data")
+  PEcAn.logger::logger.warn("Extracting data")
   cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " "))
   
   # extract main band data from api
@@ -178,7 +178,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   {
     output = output[good, ]
   } else {
-    print("All QC values are bad. No data to output with QC filter == TRUE.")
+    PEcAn.logger::logger.warn("All QC values are bad. 
No data to output with QC filter == TRUE.") } } From 0ee8bf68daa35f40b13ec077d521c2ad638bc767 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Wed, 26 Jun 2019 10:00:46 -0400 Subject: [PATCH 0229/2289] added logger.warn and logger.info --- modules/data.remote/R/call_MODIS.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 2fdc3538257..15454c0e9c7 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -77,7 +77,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, { if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)]) { - PEcAn.logger::logger.warn("Extracting data") + PEcAn.logger::logger.info("Extracting data") start <- as.Date(start_date, "%Y%j") end <- as.Date(end_date, "%Y%j") @@ -93,7 +93,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, # Best case scenario: Start and end date asked for fall with available date range of modis data product. if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)]) { - PEcAn.logger::logger.warn("Check #2: All dates are available!") + PEcAn.logger::logger.info("Check #2: All dates are available!") } # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble @@ -119,7 +119,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, end <- as.Date(end_date, "%Y%j") } - PEcAn.logger::logger.warn("Extracting data") + PEcAn.logger::logger.info("Extracting data") cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " ")) # extract main band data from api From c34d1149187b0f5a0fc73c15579ec0a941b0434a Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Wed, 26 Jun 2019 10:02:01 -0400 Subject: [PATCH 0230/2289] made call_modis the same as the modisTools PR --- modules/data.remote/R/call_MODIS.R | 102 +++++++++++++---------------- 1 file changed, 45 insertions(+), 57 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 812fae10323..93264dd887d 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -13,24 +13,23 @@ ##' @param band string value for which measurement to extract ##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) ##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) +##' @param siteID numeric value of BETY site id value to use for output file name. Default is NULL. Only MODISTools option. ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) -##' @param siteID string value of a PEcAn site ID. Currently only used for output filename. -##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. -##' @param iter a value (e.g. i) used to help name files when call_MODIS is used parallelized or in a for loop. 
Default is NULL. +##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. Only MODISTools option. +##' @param progress TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. Only MODISTools option. ##' ##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 ##' ##' @examples ##' \dontrun{ -##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = TRUE) -##' plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) +##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", progress = TRUE, QC_filter = FALSE) ##' test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") ##' } ##' ##' @author Bailey Morrison -##' -call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, iter = NULL, product, band, band_qc = "", band_sd = "", siteID = "", package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) { +##' +call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) { # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. size <- 0 @@ -48,16 +47,16 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, start_date <- as.Date(start_date, format = "%Y%j") end_date <- as.Date(end_date, format = "%Y%j") + #################### if package_method == MODISTools option #################### if (package_method == "MODISTools") { - #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available - products <- MODISTools::mt_products() + products = MODISTools::mt_products() if (!(product %in% products$product)) { - print(products) + PEcAn.logger::logger.warn(products) stop("Product not available for MODIS API. Please chose a product from the list above.") } @@ -65,7 +64,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, bands <- MODISTools::mt_bands(product = product) if (!(band %in% bands$band)) { - print(bands$band) + PEcAn.logger::logger.warn(bands$band) stop("Band selected is not avialable. 
Please selected from the bands listed above that correspond with the data product.") } @@ -76,7 +75,7 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, ########## Date case 1: user only wants one date ########## if (start_date == end_date) { - if (as.numeric(start_date) >= dates[1] & as.numeric(end_date) <= dates[length(dates)]) + if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)]) { PEcAn.logger::logger.info("Extracting data") @@ -84,34 +83,34 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, end <- as.Date(end_date, "%Y%j") } ########## For Date case 1: if only one date is asked for, but date is not within modis data prodate date range ########## - if (as.numeric(start_date)< dates[1] | as.numeric(end_date)> dates[length(dates)]) + if (as.numeric(start_date) < dates[1] || as.numeric(end_date) > dates[length(dates)]) { - print(start) + PEcAn.logger::logger.warn(start) stop("start or end date are not within MODIS data product date range. Please choose another date.") } } else { ########## Date case 2: user want a range of dates ########## # Best case scenario: Start and end date asked for fall with available date range of modis data product. - if (as.numeric(start_date)>=dates[1] & as.numeric(end_date)<=dates[length(dates)]) + if (as.numeric(start_date) >= dates[1] && as.numeric(end_date) <= dates[length(dates)]) { - print("Check #2: All dates are available!") + PEcAn.logger::logger.info("Check #2: All dates are available!") } # Okay scenario: Some MODIS data is available for parameter start_date and end_date range, but either the start_date or end_date falls outside the range of availble # MODIS data dates - if (as.numeric(start_date)<=dates[1] & as.numeric(end_date)>=dates[1]) + if (as.numeric(start_date) <= dates[1] && as.numeric(end_date) >= dates[1]) { - start_date <- dates[1] - print("WARNING: Dates are only partially available. Start date before modis data product is available.") + start_date = dates[1] + PEcAn.logger::logger.warn("WARNING: Dates are only partially available. Start date before modis data product is available.") } - if (as.numeric(end_date)>=dates[length(dates)] & as.numeric(start_date) <= dates[length(dates)]) + if (as.numeric(end_date) >= dates[length(dates)] && as.numeric(start_date) <= dates[length(dates)]) { end_date <- dates[length(dates)] - print("WARNING: Dates are only partially available. End date befire modis data product is available.") + PEcAn.logger::logger.warn("WARNING: Dates are only partially available. End date befire modis data product is available.") } # Unacceptable scenario: start_date and end_date does not fall within the availa MODIS data product date range. There is no data to extract in this scenario. 
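# A compact restatement of the three windows tested in this section,
# assuming `dates` is the numeric %Y%j vector built above (a sketch only,
# not repository code):
#   within  <- start_date >= dates[1] && end_date <= dates[length(dates)]  # use as is
#   outside <- end_date < dates[1] || start_date > dates[length(dates)]    # stop(), no data
#   partial <- !within && !outside                                         # clip to the record and warn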
-      if ((as.numeric(start_date)<dates[1] & as.numeric(end_date)<dates[1]) | (as.numeric(start_date)>dates[length(dates)] & as.numeric(end_date)>dates[length(dates)]))
+      if ((as.numeric(start_date) < dates[1] && as.numeric(end_date) < dates[1]) || (as.numeric(start_date) > dates[length(dates)] && as.numeric(end_date) > dates[length(dates)]))
      {
        stop("No MODIS data available start_date and end_date parameterized.")
      }
@@ -120,25 +119,25 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
     end <- as.Date(end_date, "%Y%j")
   }
   
   PEcAn.logger::logger.info("Extracting data")
   cat(paste("Product =", product, "\n", "Band =", band, "\n", "Date Range =", start, "-", end, "\n", "Latitude =", lat, "\n", "Longitude =", lon, sep = " "))
   
   # extract main band data from api
-  dat <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band,
-                               start=start_date, end=end_date, km_ab=size, km_lr=size, progress = progress)
+  dat <- MODISTools::mt_subset(lat= lat, lon = lon, product = product, band = band,
+                               start = start_date, end = end_date, km_ab = size, km_lr = size, progress = progress)
   
   # extract QC data
   if(band_qc != "")
   {
-    qc <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_qc,
+    qc <- MODISTools::mt_subset(lat = lat, lon = lon, product = product, band = band_qc,
                                 start = start, end = end, km_ab = size, km_lr = size, progress = progress)
   }
   
   # extract stdev data
   if(band_sd != "")
   {
-    sd <- MODISTools::mt_subset(lat=lat, lon=lon, product=product, band=band_sd,
-                                start=start, end=end, km_ab=size, km_lr=size, progress = progress)
+    sd <- MODISTools::mt_subset(lat = lat, lon = lon, product = product, band = band_sd,
+                                start = start, end = end, km_ab = size, km_lr = size, progress = progress)
   }
   
   if (band_qc == "")
@@ -152,13 +151,13 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   {
     SD <- rep("nan", nrow(dat))
   } else {
-    SD <- as.numeric(sd$value) * as.numeric(sd$scale) #formatC(sd$data$data*scale, digits = 2, format = 'f')
+    SD <- as.numeric(sd$value) * as.numeric(sd$scale)
   }
   
   output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = FALSE)
   names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd")
-  output[,5:10] <- lapply(output[,5:10], as.numeric)
+  output[ ,5:10] <- lapply(output[ ,5:10], as.numeric)
   
   # scale the data + stdev to proper units
   output$data <- output$data * (as.numeric(dat$scale))
@@ -166,45 +165,35 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0,
   output$lat <- round(output$lat, 4)
   output$lon <- round(output$lon, 4)
   
-  if (QC_filter == T)
+  if (QC_filter)
   {
-    output$qc <- as.character(output$qc)
+    output$qc <- as.character(output$qc)
     for (i in seq_len(nrow(output)))
     {
       convert <- paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
-      output$qc[i] <- substr(convert, nchar(convert)-2, nchar(convert))
+      output$qc[i] <- substr(convert, nchar(convert) - 2, nchar(convert))
     }
-    good <- which(output$qc == "000" | output$qc == "001")
-    if (length(good) > 0 | !(is.null(good)))
+    good <- which(output$qc %in% c("000", "001"))
+    if (length(good) > 0 || !(is.null(good)))
    {
-      output <- output[good,]
+      output = output[good, ]
    } else {
-      print("All QC values are bad. No data to output with QC filter == TRUE.")
+      PEcAn.logger::logger.warn("All QC values are bad. 
No data to output with QC filter == TRUE.") } } - if (!(outfolder == "") & !(is.null(siteID))) + + if (!(outfolder == "")) { - if (is.null(iter)) - { - fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") - } else { - fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, "_", sprintf("%04d",iter), ".csv", sep = "") - } - fname <- paste0(outfolder, "/", fname) - write.csv(output, fname, row.names = FALSE) - } - if (!(outfolder == "") & is.null(siteID)) + if (!(is.null(siteID))) { - if (is.null(iter)) - { + fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "") + } else { fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, ".csv", sep = "") - } else { - fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, "_", sprintf("%04d", iter), ".csv", sep = "") - } + } fname <- file.path(outfolder, fname) write.csv(output, fname, row.names = FALSE) - } + } return(output) } @@ -213,12 +202,11 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, if (package_method == "reticulate"){ # load in python script script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote")) - #script.path <- file.path('/Users/bmorrison/pecan/modules/data.remote/inst/extract_modis_data.py') reticulate::source_python(script.path) # extract the data output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, start_date = start_date, end_date = end_date, size = size, band_qc = band_qc, band_sd = band_sd) - output[,5:10] <- lapply(output[,5:10], as.numeric) + output[ ,5:10] <- lapply(output[ ,5:10], as.numeric) output$lat <- round(output$lat, 4) output$lon <- round(output$lon, 4) @@ -228,6 +216,6 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, fname <- file.path(outfolder, fname) write.csv(output, fname) } - + return(output)} } From 1bc3bf60a1bcc02dbe97a156636f34c23bfa18a3 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Thu, 27 Jun 2019 15:23:41 -0400 Subject: [PATCH 0231/2289] adjusted forceRun and plotting output --- .../assim.sequential/R/sda.enkf_MultiSite.R | 48 +++++++++++++------ modules/assim.sequential/R/sda_plotting.R | 17 ++++++- 2 files changed, 48 insertions(+), 17 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 6a6fd9b4ec6..01a1cfac422 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -199,6 +199,11 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = ###------------------------------------------------------------------------------------------------### ### loop over time ### ###------------------------------------------------------------------------------------------------###---- + if (forceRun) + { + bad_run = vector() + } + for(t in seq_len(nt)){ # do we have obs for this time - what year is it ? 
@@ -209,13 +214,9 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = ###-------------------------------------------------------------------------###----- #- Check to see if this is the first run or not and what inputs needs to be sent to write.ensemble configs + if (t > 1){ - #removing old simulations - ## keep all old simulations right now since there is debugging going on - if (!(keepNC) && t != 1) - { - unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) - } + #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file. inputs.split <- conf.settings %>% purrr::map2(inputs, function(settings, inputs) { @@ -238,7 +239,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = } inputs.split }) - + #---------------- setting up the restart argument for each site separatly and keeping them in a list restart.list <- purrr::pmap(list(out.configs, conf.settings, params.list, inputs.split), function(configs, settings, new.params, inputs){ @@ -255,7 +256,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = }) - }else{ + } else{ restart.list <- vector("list",length(conf.settings)) } #-------------------------- Writing the config/Running the model and reading the outputs for each ensemble @@ -279,7 +280,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = PEcAn.remote::start.model.runs(settings, settings$database$bety$write) - if (forceRun == TRUE) + if (forceRun) { # quick fix for job.sh files not getting run folders <- list.files(path = paste0(settings$outdir, "/SDA/out"), include.dirs = TRUE, full.names = TRUE) @@ -291,19 +292,19 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = { files <- files[-remove] } - missing = vector() if (!(paste0(obs.year, '.nc') %in% files)) { - bad <- (paste0("missing these .nc files: ", folders[i], "/", obs.year, ".nc")) + bad <- paste0("job.sh file not run for this .nc file ", folders[i], "/", obs.year, ".nc") + PEcAn.logger::logger.warn(paste0("WARNING: ", bad)) file <- paste0(gsub("out", "run", folders[i]), "/", "job.sh") system(paste0("sh ", file)) + bad_run = c(bad_run, bad) } - missing = c(missing, bad) } - write.csv(missing, file = paste0(getwd(), '/SDA/forced_job_output.csv'), append = TRUE) } - - #------------------------------------------- Reading the output + #------------------------------------------- Reading the output + + if (control$debug) browser() #--- Reading just the first run when we have all years and for VIS @@ -380,6 +381,7 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = `attr<-`('Site',c(rep(site.ids, each=length(var.names)))) FORECAST[[t]] <- X + ###-------------------------------------------------------------------### ### preparing OBS ### ###-------------------------------------------------------------------###---- @@ -512,6 +514,22 @@ sda.enkf.multisite <- function(settings, obs.mean, obs.cov, Q = NULL, restart = if (t%%2==0 || t==nt && control$TimeseriesPlot) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) + # remove files as SDA runs + if (!(keepNC)) + { + unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) + } + # useful for debugging to keep .nc files for assimilated years. 
T = 2, because this loops removes the files that were run when starting the next loop + if (keepNC && t == 1) + { + unlink(list.files(outdir, "*.nc", recursive = TRUE, full.names = TRUE)) + } + } ### end loop over time + #output list of job.sh files that were force run + if (forceRun) + { + write.csv(bad_run, file = paste0(getwd(), "/SDA/job_files_force_run.csv")) + } } # sda.enkf diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 5e4833dfc0f..ce4da2cd5de 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -498,9 +498,22 @@ post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.co ##' @rdname interactive.plotting.sda ##' @export -post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=F, readsFF=NULL, observed_vars){ +post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=FALSE, readsFF=NULL){ + + # fix obs.mean/obs.cov for multivariable plotting issues when there is NA data. When more than 1 data set is assimilated, but there are missing data + # for some sites/years/etc. the plotting will fail and crash the SDA because the numbers of columns are not consistent across all sublists within obs.mean + # or obs.cov. + observed_vars = vector() + for (date in names(obs.mean)) + { + for (site in names(obs.mean[[date]])) + { + vars = names(obs.mean[[date]][[site]]) + observed_vars = c(observed_vars, vars) + } + } + observed_vars = unique(observed_vars) - # fix obs.mean/obs.cov for multivariable plotting issues when there is NA data for (name in names(obs.mean)) { data_mean = obs.mean[name] From be36ae5ed4c32cf34ae7027a4a87013c62b3bb4f Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Fri, 28 Jun 2019 14:24:53 -0400 Subject: [PATCH 0232/2289] fixed missing code from merge conflict --- .../assim.sequential/R/sda.enkf_MultiSite.R | 37 +++++++++++++------ 1 file changed, 26 insertions(+), 11 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index c411c30bcce..3917c0c61c5 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -40,7 +40,7 @@ sda.enkf.multisite <- function(settings, Profiling = FALSE, OutlierDetection=FALSE), ...) 
{ - plan(multiprocess) + future::plan(multiprocess) if (control$debug) browser() tic("Prepration") ###-------------------------------------------------------------------### @@ -138,6 +138,29 @@ sda.enkf.multisite <- function(settings, } + ###-------------------------------------------------------------------### + ### tests before data assimilation ### + ###-------------------------------------------------------------------###---- + obs.times <- names(obs.mean) + obs.times.POSIX <- lubridate::ymd_hms(obs.times) + + for (i in seq_along(obs.times)) { + if (is.na(obs.times.POSIX[i])) { + if (is.na(lubridate::ymd(obs.times[i]))) { + PEcAn.logger::logger.warn("Error: no dates associated with observations") + } else { + ### Data does not have time associated with dates + ### Adding 12:59:59PM assuming next time step starts one second later + PEcAn.logger::logger.warn("Pumpkin Warning: adding one minute before midnight time assumption to dates associated with data") + obs.times.POSIX[i] <- lubridate::ymd_hms(paste(obs.times[i], "23:59:59")) + } + } + } + obs.times <- obs.times.POSIX + nt <- length(obs.times) + if (nt==0) PEcAn.logger::logger.severe('There has to be at least one Obs.') + bqq <- numeric(nt + 1) + ###-------------------------------------------------------------------### ### If this is a restart - Picking up were we left last time ### @@ -334,6 +357,7 @@ sda.enkf.multisite <- function(settings, if (control$debug) browser() PEcAn.remote::start.model.runs(settings, settings$database$bety$write) + # this is the option to force job.sh files that are randomly not being run by the SDA if (forceRun) { # quick fix for job.sh files not getting run @@ -622,16 +646,7 @@ sda.enkf.multisite <- function(settings, PEcAn.logger::logger.severe(paste0("Something just broke along the way. 
See if the message is helpful ", e)) }) - - - - save(site.locs, t, FORECAST, ANALYSIS, enkf.params, new.state, new.params, - out.configs, ensemble.samples, inputs, Viz.output, - file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) - #writing down the image - either you asked for it or not :) - - if (t%%2==0 || t==nt && control$TimeseriesPlot) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS ,plot.title=control$plot.title, facetg=control$facet.plots, readsFF=readsFF) - + # remove files as SDA runs if (!(keepNC)) { From 6cef790be05f2b838a5107ea3009c9286831bdef Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Tue, 2 Jul 2019 13:36:58 -0400 Subject: [PATCH 0233/2289] fix order in pecan.depends --- docker/depends/pecan.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index 0a11848c086..9b00cb36a82 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -23,9 +23,9 @@ install2.r -e -s -n -1\ BioCro \ car \ coda \ + data.table \ dataone \ datapack \ - data.table \ DBI \ dbplyr \ doParallel \ From b544c485536882e09bcaee3c6c40d43b794aa2bb Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 3 Jul 2019 10:17:36 -0500 Subject: [PATCH 0234/2289] remove ggplotly and toggle button and set highcharter to default --- shiny/workflowPlot/server.R | 1 + .../server_files/model_plots_server.R | 138 +++++------------- shiny/workflowPlot/ui_files/model_plots_UI.R | 51 +------ 3 files changed, 45 insertions(+), 145 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index b3cd64eac72..46ffd8bf282 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -16,6 +16,7 @@ lapply(c( "shiny", "highcharter", "shinyjs", "dplyr", + "xts", "reshape2", "purrr", "ncdf4", diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index 13d38a2791b..a71e673cf07 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -1,16 +1,18 @@ -# Renders ggplotly +# Renders highcharter -output$modelPlotStatic <- renderPlotly({ +output$modelPlot <- renderHighchart({ validate( need(input$all_workflow_id, 'Select workflow id'), need(input$all_run_id, 'Select Run id'), need(input$load_model > 0, 'Select Load Model Outputs') ) - plt <- ggplot(data.frame(x = 0, y = 0), aes(x,y)) + - annotate("text", x = 0, y = 0, label = "Ready to plot!", - size = 10, color = "grey") - ggplotly(plt) + highchart() %>% + hc_add_series(data = c(), showInLegend = F) %>% + hc_xAxis(title = list(text = "Time")) %>% + hc_yAxis(title = list(text = "y")) %>% + hc_title(text = "Ready to plot!") %>% + hc_add_theme(hc_theme_flat()) }) # Update units every time a variable is selected @@ -39,82 +41,18 @@ observeEvent(input$units_model,{ observeEvent(input$ex_plot_model,{ req(input$units_model) - - #output$modelPlot <- renderPlotly({ - output$modelPlot <- renderHighchart({ - - input$ex_plot_model - isolate({ - tryCatch({ - df <- dplyr::filter(load.model(), var_name == input$var_name_model) - - updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) - - title <- unique(df$title) - xlab <- unique(df$xlab) - ylab <- unique(df$ylab) - - unit <- ylab - if(input$units_model != unit & udunits2::ud.are.convertible(unit, input$units_model)){ - df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) - ylab <- input$units_model 
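#   [Editor's aside -- annotation, not part of the patch. A hedged sketch of the
#   udunits2 pattern used throughout these plot servers: convert only when the
#   target unit is actually reachable from the source unit. Illustrative values:]
#     udunits2::ud.are.convertible("kg", "g")    # TRUE
#     udunits2::ud.convert(1, "kg", "g")         # 1000
#     udunits2::ud.are.convertible("kg", "degC") # FALSE -> plot keeps original units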
- } - - # data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) - # - # plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) - # plt <- plt + data_geom() - # plt <- plt + labs(title=title, x=xlab, y=ylab) - # plt <- plt + geom_smooth(n=input$smooth_n_model) - # ply <- ggplotly(plt) - - - df$run_id <- as.numeric(as.character(df$run_id)) - xts.df <- xts(df[,c("vals", "run_id")], order.by = df$dates) - - plot_type <- switch(input$plotType_model, point = "scatter", line = "line") - # not sure if this method to calcualte smoothing parameter is correct - smooth_param <- input$smooth_n_model / nrow(df) *100 - - ply <- highchart() - - for(i in unique(xts.df$run_id)){ - ply <- ply %>% - hc_add_series(xts.df[xts.df$run_id == i, "vals"], - type = plot_type, name = i, regression = TRUE, - regressionSettings = list(type = "loess", loessSmooth = smooth_param)) - } - - ply <- ply %>% - hc_add_dependency("plugins/highcharts-regression.js") %>% - hc_title(text = title) %>% - hc_xAxis(title = list(text = xlab), type = 'datetime') %>% - hc_yAxis(title = list(text = ylab)) %>% - hc_tooltip(pointFormat = " Date: {point.x:%Y-%m-%d %H:%M}
y: {point.y}") %>% - hc_exporting(enabled = TRUE) - - - #Signaling the success of the operation - toastr_success("Generate interactive plots") - }, - error = function(e) { - toastr_error(title = "Error", conditionMessage(e)) - }) - }) - ply - }) - - output$modelPlotStatic <- renderPlotly({ + + output$modelPlot <- renderHighchart({ + input$ex_plot_model isolate({ tryCatch({ withProgress(message = 'Calculation in progress', - detail = 'This may take a while...', value = 0, { + detail = 'This may take a while...',{ + df <- dplyr::filter(load.model(), var_name == input$var_name_model) - incProgress(2/15) updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) - incProgress(2/15) title <- unique(df$title) xlab <- unique(df$xlab) @@ -125,41 +63,45 @@ observeEvent(input$ex_plot_model,{ df$vals <- udunits2::ud.convert(df$vals,unit,input$units_model) ylab <- input$units_model } - incProgress(2/15) - data_geom <- switch(input$plotType_model, point = geom_point, line = geom_line) + df$run_id <- as.numeric(as.character(df$run_id)) + xts.df <- xts(df[,c("vals", "run_id")], order.by = df$dates) + + plot_type <- switch(input$plotType_model, point = "scatter", line = "line") + # not sure if this method to calcualte smoothing parameter is correct + smooth_param <- input$smooth_n_model / nrow(df) *100 + + ply <- highchart() + + for(i in unique(xts.df$run_id)){ + ply <- ply %>% + hc_add_series(xts.df[xts.df$run_id == i, "vals"], + type = plot_type, name = i, regression = TRUE, + regressionSettings = list(type = "loess", loessSmooth = smooth_param)) + } + + ply <- ply %>% + hc_add_dependency("plugins/highcharts-regression.js") %>% + hc_title(text = title) %>% + hc_xAxis(title = list(text = xlab), type = 'datetime') %>% + hc_yAxis(title = list(text = ylab)) %>% + hc_tooltip(pointFormat = " Date: {point.x:%Y-%m-%d %H:%M}
y: {point.y}") %>% + hc_exporting(enabled = TRUE) %>% + hc_chart(zoomType = "x") + - plt <- ggplot(df, aes(x = dates, y = vals, color = run_id)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_model) - ply <- ggplotly(plt) - ply <- plotly::config(ply, collaborate = F, doubleClick = F, displayModeBar = F, staticPlot = T) - incProgress(9/15) }) #Signaling the success of the operation - toastr_success("Generate static plots") + toastr_success("Generate plot") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) }) - - ply }) + ply }) }) -observeEvent(input$model_toggle_plot,{ - tryCatch({ - toggleElement("model_plot_static") - toggleElement("model_plot_interactive") - #Signaling the success of the operation - toastr_success("Toggle plots") - }, - error = function(e) { - toastr_error(title = "Error", conditionMessage(e)) - }) -}) # masterDF <- loadNewData() # # Convert from factor to character. For subsetting diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index 754e16beb95..8d28f5075a9 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -1,54 +1,11 @@ tabPanel( "Model Plots", - hidden(div(id = "model_plot_interactive", column( - 12, - div( - id = "plot-container", - # div( - # class = "plotlybars-wrapper", - # div( - # class = "plotlybars", - # div(class = "plotlybars-bar b1"), - # div(class = "plotlybars-bar b2"), - # div(class = "plotlybars-bar b3"), - # div(class = "plotlybars-bar b4"), - # div(class = "plotlybars-bar b5"), - # div(class = "plotlybars-bar b6"), - # div(class = "plotlybars-bar b7") - # ), - # div(class = "plotlybars-text", - # p("Updating the plot. Hold tight!")) - # ), - #plotlyOutput("modelPlot") - highchartOutput("modelPlot") - ) - ))), - div(id = "model_plot_static", column( + column( 12, - div( - id = "plot-container", - div( - class = "plotlybars-wrapper", - div( - class = "plotlybars", - div(class = "plotlybars-bar b1"), - div(class = "plotlybars-bar b2"), - div(class = "plotlybars-bar b3"), - div(class = "plotlybars-bar b4"), - div(class = "plotlybars-bar b5"), - div(class = "plotlybars-bar b6"), - div(class = "plotlybars-bar b7") - ), - div(class = "plotlybars-text", - p("Updating the plot. 
Hold tight!")) - ), - plotlyOutput("modelPlotStatic") - ) - )), + highchartOutput("modelPlot") + ), column(12, wellPanel( - actionButton("ex_plot_model", "Generate Plot"), - div(actionButton("model_toggle_plot", "Toggle Plot"), - style = "float:right") + actionButton("ex_plot_model", "Generate Plot") )), column( 12, From d2c6c3bd988607d2a0bfcebc3d2cfc260cd5d28e Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 3 Jul 2019 23:23:53 -0500 Subject: [PATCH 0235/2289] switch from ggplotly to highcharter for model data plot --- .../server_files/model_data_plots_server.R | 186 +++++++----------- .../server_files/model_plots_server.R | 5 +- .../ui_files/model_data_plots_UI.R | 50 +---- 3 files changed, 76 insertions(+), 165 deletions(-) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index c3195ee39d4..7e62149d2b5 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -1,6 +1,6 @@ -# Renders ggplotly +# Renders highcharter -output$modelDataPlotStatic <- renderPlotly({ +output$modelDataPlot <- renderHighchart({ validate( need(length(input$all_workflow_id) == 1, "Select only ONE workflow ID"), need(length(input$all_run_id) == 1, "Select only ONE run ID"), @@ -9,10 +9,12 @@ output$modelDataPlotStatic <- renderPlotly({ need(length(input$all_input_id) == 1, 'Select only ONE Input ID'), need(input$load_data > 0, 'Select Load External Data') ) - plt <- ggplot(data.frame(x = 0, y = 0), aes(x,y)) + - annotate("text", x = 0, y = 0, label = "You are ready to plot!", - size = 10, color = "grey") - ggplotly(plt) + highchart() %>% + hc_add_series(data = c(), showInLegend = F) %>% + hc_xAxis(title = list(text = "Time")) %>% + hc_yAxis(title = list(text = "y")) %>% + hc_title(text = "You are ready to plot!") %>% + hc_add_theme(hc_theme_flat()) }) # Update units every time a variable is selected @@ -40,49 +42,72 @@ observeEvent(input$units_modeldata,{ observeEvent(input$ex_plot_modeldata,{ - output$modelDataPlot <- renderPlotly({ + output$modelDataPlot <- renderHighchart({ input$ex_plot_modeldata isolate({ tryCatch({ - var = input$var_name_modeldata - - model_data <- dplyr::filter(load.model(), var_name == var) - - updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) - title <- unique(model_data$title) - xlab <- unique(model_data$xlab) - ylab <- unique(model_data$ylab) - - model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) - external_data <- load.model.data() - aligned_data = PEcAn.benchmark::align_data( - model.calc = model_data, obvs.calc = external_data, - var = var, align_method = "mean_over_larger_timestep") %>% - dplyr::select(everything(), - model = matches("[.]m"), - observations = matches("[.]o"), - Date = posix) - - print(head(aligned_data)) - # Melt dataframe to plot two types of columns together - aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) - - unit <- ylab - if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ - aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) - ylab <- input$units_modeldata - } - - - data_geom <- switch(input$plotType_modeldata, point = geom_point, line = geom_line) + withProgress(message = 'Calculation in progress', + detail = 'This may take a while...',{ + + var = input$var_name_modeldata + + model_data <- dplyr::filter(load.model(), var_name == var) + + 
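#   [Editor's note -- annotation, not part of the patch. A hedged sketch of the
#   alignment step a few lines below: PEcAn.benchmark::align_data() puts model
#   output and observations on a common time axis; with
#   align_method = "mean_over_larger_timestep" the finer-resolution series is
#   averaged up to the coarser one. Roughly:]
#     aligned <- PEcAn.benchmark::align_data(model.calc = model_data,
#                                            obvs.calc  = external_data,
#                                            var        = var,
#                                            align_method = "mean_over_larger_timestep")
#   [It returns columns suffixed ".m" (model) and ".o" (observed) plus "posix",
#   which the dplyr::select() below renames to model / observations / Date.]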
updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) + title <- unique(model_data$title) + xlab <- unique(model_data$xlab) + ylab <- unique(model_data$ylab) + + model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) + external_data <- load.model.data() + aligned_data = PEcAn.benchmark::align_data( + model.calc = model_data, obvs.calc = external_data, + var = var, align_method = "mean_over_larger_timestep") %>% + dplyr::select(everything(), + model = matches("[.]m"), + observations = matches("[.]o"), + Date = posix) + + print(head(aligned_data)) + # Melt dataframe to plot two types of columns together + aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) + + model <- filter(aligned_data, variable == "model") + observasions <- filter(aligned_data, variable == "observations") + + model.xts <- xts(model$value, order.by = model$Date) + observasions.xts <- xts(observasions$value, order.by = observasions$Date) + + unit <- ylab + if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ + aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) + ylab <- input$units_modeldata + } + + + plot_type <- switch(input$plotType_model, point = "scatter", line = "line") + # not sure if this method to calcualte smoothing parameter is correct + smooth_param <- input$smooth_n_model / nrow(df) *100 + + ply <- highchart() %>% + hc_add_series(model.xts, name = "model", type = plot_type, + regression = TRUE, + regressionSettings = list(type = "loess", loessSmooth = smooth_param)) %>% + hc_add_series(observasions.xts, name = "observations", type = plot_type, + regression = TRUE, + regressionSettings = list(type = "loess", loessSmooth = smooth_param)) %>% + hc_add_dependency("plugins/highcharts-regression.js") %>% + hc_title(text = title) %>% + hc_xAxis(title = list(text = xlab), type = 'datetime') %>% + hc_yAxis(title = list(text = ylab)) %>% + hc_tooltip(pointFormat = " Date: {point.x:%Y-%m-%d %H:%M}
y: {point.y}") %>% + hc_exporting(enabled = TRUE) %>% + hc_chart(zoomType = "x") + + }) - plt <- ggplot(aligned_data, aes(x=Date, y=value, color=variable)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_modeldata) - ply <- ggplotly(plt) #Signaling the success of the operation - toastr_success("Generate interactive plots") + toastr_success("Generate plot") }, error = function(e) { toastr_error(title = "Error", conditionMessage(e)) @@ -90,80 +115,9 @@ observeEvent(input$ex_plot_modeldata,{ }) ply }) - - output$modelDataPlotStatic <- renderPlotly({ - input$ex_plot_modeldata - isolate({ - tryCatch({ - withProgress(message = 'Calculation in progress', - detail = 'This may take a while...', value = 0, { - - var = input$var_name_modeldata - - model_data <- dplyr::filter(load.model(), var_name == var) - - updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) - title <- unique(model_data$title) - xlab <- unique(model_data$xlab) - ylab <- unique(model_data$ylab) - incProgress(3/15) - - model_data <- model_data %>% dplyr::select(posix = dates, !!var := vals) - external_data <- load.model.data() - incProgress(4/15) - - aligned_data = PEcAn.benchmark::align_data( - model.calc = model_data, obvs.calc = external_data, - var = var, align_method = "mean_over_larger_timestep") %>% - dplyr::select(everything(), - model = matches("[.]m"), - observations = matches("[.]o"), - Date = posix) - - print(head(aligned_data)) - # Melt dataframe to plot two types of columns together - aligned_data <- tidyr::gather(aligned_data, variable, value, -Date) - incProgress(4/15) - - unit <- ylab - if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ - aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) - ylab <- input$units_modeldata - } - - - data_geom <- switch(input$plotType_modeldata, point = geom_point, line = geom_line) - - plt <- ggplot(aligned_data, aes(x=Date, y=value, color=variable)) - plt <- plt + data_geom() - plt <- plt + labs(title=title, x=xlab, y=ylab) - plt <- plt + geom_smooth(n=input$smooth_n_modeldata) - ply <- ggplotly(plt) - ply <- plotly::config(ply, collaborate = F, doubleClick = F, displayModeBar = F, staticPlot = T) - incProgress(4/15) - }) - #Signaling the success of the operation - toastr_success("Genearate static plots") - }, - error = function(e) { - toastr_error(title = "Error", conditionMessage(e)) - }) - ply - }) - }) }) -observeEvent(input$model_data_toggle_plot,{ - tryCatch({ - toggleElement("model_data_plot_static") - toggleElement("model_data_plot_interactive") - #Signaling the success of the operation - toastr_success("Toggle plots") - }, - error = function(e) { - toastr_error(title = "Error", conditionMessage(e)) - }) -}) + diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index a71e673cf07..4498672da68 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -11,7 +11,7 @@ output$modelPlot <- renderHighchart({ hc_add_series(data = c(), showInLegend = F) %>% hc_xAxis(title = list(text = "Time")) %>% hc_yAxis(title = list(text = "y")) %>% - hc_title(text = "Ready to plot!") %>% + hc_title(text = "You are ready to plot!") %>% hc_add_theme(hc_theme_flat()) }) @@ -88,8 +88,7 @@ observeEvent(input$ex_plot_model,{ hc_tooltip(pointFormat = " Date: {point.x:%Y-%m-%d %H:%M}
y: {point.y}") %>% hc_exporting(enabled = TRUE) %>% hc_chart(zoomType = "x") - - + }) #Signaling the success of the operation toastr_success("Generate plot") diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R index dd1c5e35aab..295a927647f 100644 --- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R @@ -1,53 +1,11 @@ tabPanel( "Model-Data Plots", - hidden(div(id = "model_data_plot_interactive", column( - 12, - div( - id = "plot-container", - div( - class = "plotlybars-wrapper", - div( - class = "plotlybars", - div(class = "plotlybars-bar b1"), - div(class = "plotlybars-bar b2"), - div(class = "plotlybars-bar b3"), - div(class = "plotlybars-bar b4"), - div(class = "plotlybars-bar b5"), - div(class = "plotlybars-bar b6"), - div(class = "plotlybars-bar b7") - ), - div(class = "plotlybars-text", - p("Updating the plot. Hold tight!")) - ), - plotlyOutput("modelDataPlot") - ) - ))), - div(id = "model_data_plot_static", column( + column( 12, - div( - id = "plot-container", - div( - class = "plotlybars-wrapper", - div( - class = "plotlybars", - div(class = "plotlybars-bar b1"), - div(class = "plotlybars-bar b2"), - div(class = "plotlybars-bar b3"), - div(class = "plotlybars-bar b4"), - div(class = "plotlybars-bar b5"), - div(class = "plotlybars-bar b6"), - div(class = "plotlybars-bar b7") - ), - div(class = "plotlybars-text", - p("Updating the plot. Hold tight!")) - ), - plotlyOutput("modelDataPlotStatic") - ) - )), + highchartOutput("modelDataPlot") + ), column(12, wellPanel( - actionButton("ex_plot_modeldata", "Generate Plot"), - div(actionButton("model_data_toggle_plot", "Toggle Plot"), - style = "float:right") + actionButton("ex_plot_modeldata", "Generate Plot") )), column( 12, From 2e9ac377314fd1fb2110e7233979caa5b4b62375 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 5 Jul 2019 02:41:57 -0500 Subject: [PATCH 0236/2289] set connection to database to be reactive value --- shiny/workflowPlot/server.R | 24 ++++++++++------ .../server_files/benchmarking_server.R | 28 +++++++++---------- .../server_files/select_data_server.R | 2 +- .../server_files/sidebar_server.R | 21 +++++++------- 4 files changed, 42 insertions(+), 33 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 46ffd8bf282..aa8f3e1c07c 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -43,11 +43,15 @@ options(shiny.maxRequestSize=100*1024^2) # Define server logic server <- shinyServer(function(input, output, session) { + dbConnect <- reactiveValues(bety = NULL) + # Try `betyConnect` function. 
# If it breaks, ask user to enter user, password and host information # then use the `db.open` function to connect to the database tryCatch({ - bety <- betyConnect() + #dbConnect$bety <- betyConnect() + #For betyConnect to break to test shiny modal + dbConnect$bety <- betyConnect(".") }, error = function(e){ @@ -72,13 +76,17 @@ server <- shinyServer(function(input, output, session) { observeEvent(input$submitInfo,{ tryCatch( { - bety$con <- PEcAn.DB::db.open( - list( - user = input$user, - password = input$password, - host = input$host - ) - ) + # dbConnect$bety$con <- PEcAn.DB::db.open( + # list( + # user = input$user, + # password = input$password, + # host = input$host + # ) + # ) + + # For testing reactivity of bety connection + dbConnect$bety <- betyConnect() + removeModal() toastr_success("Connect to Database") }, diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index efca71b2d18..febb30a88ce 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -15,15 +15,15 @@ observeEvent(input$load_model,{ if(nrow(ids_DF) == 1){ # Check to see if the run has been saved as a reference run - ens_id <- dplyr::tbl(bety, 'runs') %>% dplyr::filter(id == ids_DF$runID) %>% dplyr::pull(ensemble_id) - ens_wf <- dplyr::tbl(bety, 'ensembles') %>% dplyr::filter(id == ens_id) %>% + ens_id <- dplyr::tbl(dbConnect$bety, 'runs') %>% dplyr::filter(id == ids_DF$runID) %>% dplyr::pull(ensemble_id) + ens_wf <- dplyr::tbl(dbConnect$bety, 'ensembles') %>% dplyr::filter(id == ens_id) %>% dplyr::rename(ensemble_id = id) %>% - dplyr::left_join(.,tbl(bety, "workflows") %>% dplyr::rename(workflow_id = id), by="workflow_id") %>% dplyr::collect() - bm$model_vars <- var_names_all(bety,ids_DF$wID,ids_DF$runID) + dplyr::left_join(.,tbl(dbConnect$bety, "workflows") %>% dplyr::rename(workflow_id = id), by="workflow_id") %>% dplyr::collect() + bm$model_vars <- var_names_all(dbConnect$bety,ids_DF$wID,ids_DF$runID) clean <- PEcAn.benchmark::clean_settings_BRR(inputfile = file.path(ens_wf$folder,"pecan.CHECKED.xml")) settings_xml <- toString(PEcAn.settings::listToXml(clean, "pecan")) - ref_run <- PEcAn.benchmark::check_BRR(settings_xml, bety$con) + ref_run <- PEcAn.benchmark::check_BRR(settings_xml, dbConnect$bety$con) if(length(ref_run) == 0){ # If not registered, button appears with option to run create.BRR @@ -61,7 +61,7 @@ observeEvent(input$create_bm,{ withProgress(message = 'Calculation in progress', detail = 'This may take a while...', value = 0,{ - bm$BRR <- PEcAn.benchmark::create_BRR(bm$ens_wf, con = bety$con) + bm$BRR <- PEcAn.benchmark::create_BRR(bm$ens_wf, con = dbConnect$bety$con) incProgress( 10/ 15) bm$brr_message <- sprintf("This run has been successfully registered as a reference run (id = %.0f)", bm$BRR$reference_run_id) bm$button_BRR <- FALSE @@ -95,12 +95,12 @@ observeEvent(input$load_data,{ req(input$all_input_id) req(input$all_site_id) - bm$metrics <- dplyr::tbl(bety,'metrics') %>% dplyr::select(one_of("id","name","description")) %>% collect() + bm$metrics <- dplyr::tbl(dbConnect$bety,'metrics') %>% dplyr::select(one_of("id","name","description")) %>% collect() # Need to write warning message that can only use one input id - bm$input <- getInputs(bety,c(input$all_site_id)) %>% + bm$input <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) - format <- PEcAn.DB::query.format.vars(bety = bety, input.id = 
bm$input$input_id) + format <- PEcAn.DB::query.format.vars(bety = dbConnect$bety, input.id = bm$input$input_id) # Are there more human readable names? bm$vars <- dplyr::inner_join( data.frame(read_name = names(bm$model_vars), @@ -226,7 +226,7 @@ observeEvent(input$calc_bm,{ output$reportmetrics <- renderText(paste(bm$bm_metrics)) incProgress(1 / 15) - inputs_df <- getInputs(bety,c(input$all_site_id)) %>% + inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) output$inputs_df_table <- renderTable(inputs_df) incProgress(1 / 15) @@ -271,7 +271,7 @@ observeEvent(input$calc_bm,{ output$print_bm_settings <- renderPrint(bm$bm_settings) incProgress(1 / 15) - basePath <- dplyr::tbl(bety, 'workflows') %>% dplyr::filter(id %in% bm$ens_wf$workflow_id) %>% dplyr::pull(folder) + basePath <- dplyr::tbl(dbConnect$bety, 'workflows') %>% dplyr::filter(id %in% bm$ens_wf$workflow_id) %>% dplyr::pull(folder) settings_path <- file.path(basePath, "pecan.BENCH.xml") saveXML(PEcAn.settings::listToXml(bm$bm_settings,"pecan"), file = settings_path) @@ -294,8 +294,8 @@ observeEvent(input$calc_bm,{ # "the benchmarking workflow" in its entirety settings <- PEcAn.settings::read.settings(bm$settings_path) - bm.settings <- PEcAn.benchmark::define_benchmark(settings,bety) - settings <- PEcAn.benchmark::add_workflow_info(settings,bety) + bm.settings <- PEcAn.benchmark::define_benchmark(settings,dbConnect$bety) + settings <- PEcAn.benchmark::add_workflow_info(settings,dbConnect$bety) settings$benchmarking <- PEcAn.benchmark::bm_settings2pecan_settings(bm.settings) settings <- PEcAn.benchmark::read_settings_BRR(settings) @@ -313,7 +313,7 @@ observeEvent(input$calc_bm,{ settings$host$name <- "localhost" # This may not be the best place to set this, but it isn't set by any of the other functions. Another option is to have it set by the default_hostname function (if input is NULL, set to localhost) # results <- PEcAn.settings::papply(settings, function(x) calc_benchmark(x, bety, start_year = input$start_year, end_year = input$end_year)) results <- PEcAn.settings::papply(settings, function(x) - calc_benchmark(settings = x, bety = bety)) + calc_benchmark(settings = x, bety = dbConnect$bety)) bm$load_results <- bm$load_results + 1 }) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index e414731d556..bd4779915ad 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -24,7 +24,7 @@ observeEvent(input$load_model,{ diff_units.m = units(mode.m) diff_message <- sprintf("timestep: %.2f %s", mode.m, diff_units.m) - wf.folder <- workflow(bety, ids_DF$wID[i]) %>% collect() %>% pull(folder) + wf.folder <- workflow(dbConnect$bety, ids_DF$wID[i]) %>% collect() %>% pull(folder) README.text <- c(README.text, paste("SELECTION",i), diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index f09bf8251a9..67140ecf766 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -2,13 +2,14 @@ # Loading Model Output(s) -----------------------------------------------------# + # Update workflow ids observe({ tryCatch({ # get_workflow_ids function (line 137) in db/R/query.dplyr.R takes a flag to check # if we want to load all workflow ids. 
# get_workflow_id function from query.dplyr.R - all_ids <- get_workflow_ids(bety, query, all.ids=TRUE) + all_ids <- get_workflow_ids(dbConnect$bety, query, all.ids=TRUE) updateSelectizeInput(session, "all_workflow_id", choices = all_ids) # Get URL prameters query <- parseQueryString(session$clientData$url_search) @@ -32,7 +33,7 @@ all_run_ids <- reactive({ run_id_list <- c() for(w_id in w_ids){ # For all the workflow ids - r_ids <- get_run_ids(bety, w_id) + r_ids <- get_run_ids(dbConnect$bety, w_id) for(r_id in r_ids){ # Each workflow id can have more than one run ids # ',' as a separator between workflow id and run id @@ -71,7 +72,7 @@ load.model <- eventReactive(input$load_model,{ ids_DF <- parse_ids_from_input_runID(input$all_run_id) globalDF <- map2_df(ids_DF$wID, ids_DF$runID, ~tryCatch({ - load_data_single_run(bety, .x, .y) + load_data_single_run(dbConnect$bety, .x, .y) }, error = function(e){ toastr_error(title = paste("Error in WorkflowID", .x), @@ -94,7 +95,7 @@ observeEvent(input$load_model, { for(row_num in 1:nrow(ids_DF)){ var_name_list <- c(var_name_list, tryCatch({ - var_names_all(bety, ids_DF$wID[row_num], ids_DF$runID[row_num]) + var_names_all(dbConnect$bety, ids_DF$wID[row_num], ids_DF$runID[row_num]) }, error = function(e){ return(NULL) @@ -111,7 +112,7 @@ observeEvent(input$load_model,{ for(row_num in 1:nrow(ids_DF)){ settings <- tryCatch({ - getSettingsFromWorkflowId(bety,ids_DF$wID[row_num]) + getSettingsFromWorkflowId(dbConnect$bety,ids_DF$wID[row_num]) }, error = function(e){ return(NULL) @@ -124,8 +125,8 @@ observeEvent(input$load_model,{ # Update input id list as (input id, name) observe({ req(input$all_site_id) - inputs_df <- getInputs(bety, c(input$all_site_id)) - formats_1 <- dplyr::tbl(bety, 'formats_variables') %>% + inputs_df <- getInputs(dbConnect$bety, c(input$all_site_id)) + formats_1 <- dplyr::tbl(dbConnect$bety, 'formats_variables') %>% dplyr::filter(format_id %in% inputs_df$format_id) if (dplyr.count(formats_1) == 0) { logger.warn("No inputs found. 
Returning NULL.")
@@ -142,12 +143,12 @@ observe({
 load.model.data <- eventReactive(input$load_data, {
   req(input$all_input_id)
-  inputs_df <- getInputs(bety,c(input$all_site_id))
+  inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id))
   inputs_df <- inputs_df %>% dplyr::filter(input_selection_list == input$all_input_id)
   input_id <- inputs_df$input_id
   # File_format <- getFileFormat(bety,input_id)
-  File_format <- PEcAn.DB::query.format.vars(bety = bety, input.id = input_id)
+  File_format <- PEcAn.DB::query.format.vars(bety = dbConnect$bety, input.id = input_id)
   start.year <- as.numeric(lubridate::year(inputs_df$start_date))
   end.year <- as.numeric(lubridate::year(inputs_df$end_date))
   File_path <- inputs_df$filePath
@@ -155,7 +156,7 @@ load.model.data <- eventReactive(input$load_data, {
   # To make it work with the VM, uncomment the line below
   File_path <- paste0(inputs_df$filePath,'.csv')
   site.id <- inputs_df$site_id
-  site <- PEcAn.DB::query.site(site.id,bety$con)
+  site <- PEcAn.DB::query.site(site.id,dbConnect$bety$con)
   observations <- PEcAn.benchmark::load_data(
     data.path = File_path, format = File_format, time.row = File_format$time.row,
     site = site, start_year = start.year, end_year = end.year)
From fab740f44cd5b781b093c18be121f5938c7418f5 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Fri, 5 Jul 2019 07:46:38 -0500
Subject: [PATCH 0237/2289] change sidebar & documentation UI

---
 .../markdown/app_documentation.Rmd            | 16 +++++
 .../markdown/benchmarking_plots.Rmd           |  8 +++
 .../markdown/benchmarking_scores.Rmd          | 10 +++
 .../markdown/benchmarking_setting.Rmd         | 34 +++++++++
 .../markdown/exploratory_plot.Rmd             | 26 ++++++++
 shiny/workflowPlot/markdown/setup_page.Rmd    |  8 +++
 shiny/workflowPlot/ui.R                       | 61 ++++++++++++++---
 shiny/workflowPlot/ui_files/sidebar_UI.R      |  6 +-
 8 files changed, 156 insertions(+), 13 deletions(-)
 create mode 100644 shiny/workflowPlot/markdown/app_documentation.Rmd
 create mode 100644 shiny/workflowPlot/markdown/benchmarking_plots.Rmd
 create mode 100644 shiny/workflowPlot/markdown/benchmarking_scores.Rmd
 create mode 100644 shiny/workflowPlot/markdown/benchmarking_setting.Rmd
 create mode 100644 shiny/workflowPlot/markdown/exploratory_plot.Rmd
 create mode 100644 shiny/workflowPlot/markdown/setup_page.Rmd

diff --git a/shiny/workflowPlot/markdown/app_documentation.Rmd b/shiny/workflowPlot/markdown/app_documentation.Rmd
new file mode 100644
index 00000000000..f83bf101ba3
--- /dev/null
+++ b/shiny/workflowPlot/markdown/app_documentation.Rmd
@@ -0,0 +1,16 @@
+---
+title: "app documentation"
+output:
+  html_document:
+    theme: united
+---
+
+This is the shiny app for:
+
+- Visualizing model output data alongside external data
+- Registering Reference Runs
+- Calculating Benchmarks
+
+Do you have ideas for new features?
+
+[Add your comments here!](https://github.com/PecanProject/pecan/issues/1894)
diff --git a/shiny/workflowPlot/markdown/benchmarking_plots.Rmd b/shiny/workflowPlot/markdown/benchmarking_plots.Rmd
new file mode 100644
index 00000000000..316e5cebc63
--- /dev/null
+++ b/shiny/workflowPlot/markdown/benchmarking_plots.Rmd
@@ -0,0 +1,8 @@
+---
+title: "Documentation for the Visualization and Benchmarking Shiny App"
+output:
+  html_document:
+    theme: united
+---
+
diff --git a/shiny/workflowPlot/markdown/benchmarking_scores.Rmd b/shiny/workflowPlot/markdown/benchmarking_scores.Rmd new file mode 100644 index 00000000000..fc9eb818cf2 --- /dev/null +++ b/shiny/workflowPlot/markdown/benchmarking_scores.Rmd @@ -0,0 +1,10 @@ +--- +title: "Documentation for the Visualization and Benchmarking Shiny App" +output: + html_document: + theme: united +--- + +A table of all outputs of the selected metrics. +- For numerical metrics, the score is printed. +- For visual metrics, the path to the PDF is printed. Eventually it might be nice to have a download button. For now, you can navigate back to rstudio and download from there. diff --git a/shiny/workflowPlot/markdown/benchmarking_setting.Rmd b/shiny/workflowPlot/markdown/benchmarking_setting.Rmd new file mode 100644 index 00000000000..a36807ffd81 --- /dev/null +++ b/shiny/workflowPlot/markdown/benchmarking_setting.Rmd @@ -0,0 +1,34 @@ +--- +title: "Documentation for the Visualization and Benchmarking Shiny App" +output: + html_document: + theme: united +--- + +[See additional documentation on benchmarking](https://pecanproject.github.io/pecan-documentation/develop/settings-configured-analyses.html#Benchmarking) + + +All benchmarks that are calcualted are automatcally registered in to the database. + +#### Setup Reference Run + +You need to register a run before performing benchmarks. + +[See documentation on reference runs](https://pecanproject.github.io/pecan-documentation/develop/reference-runs.html) + +#### Setup Benchmarks + +**Variables** + +These are the variables that the model and data have in common. + +**Metrics** + +Warning: Don't use Frechet for large datasets, it is not efficient at all. I should probably remove it if data dimensions are too large. + +**Plots** + +These plots will be saved as PDFs in the model output directory. You can also load them interactively in Benchmarking > Plots. + + + diff --git a/shiny/workflowPlot/markdown/exploratory_plot.Rmd b/shiny/workflowPlot/markdown/exploratory_plot.Rmd new file mode 100644 index 00000000000..bc41125a732 --- /dev/null +++ b/shiny/workflowPlot/markdown/exploratory_plot.Rmd @@ -0,0 +1,26 @@ +--- +title: "Documentation for the Visualization and Benchmarking Shiny App" +output: + html_document: + theme: united +--- + +All our plots are made with [plotly](https://plot.ly/) and [Highcharter](http://jkunst.com/highcharter/). + +They are interactive and packed full of features. All plotly plots have a toolbar with the following features: + +![](plotly_bar.png) + +- Download plot as a png +- Zoom +- Pan +- Box select +- Lasso select +- Zoom in +- Zoom out +- Autoscale +- Reset axes +- Toggle spike lines +- Show closest data on hover +- Compare data on hover +- Collaborate diff --git a/shiny/workflowPlot/markdown/setup_page.Rmd b/shiny/workflowPlot/markdown/setup_page.Rmd new file mode 100644 index 00000000000..6ee4c24e12e --- /dev/null +++ b/shiny/workflowPlot/markdown/setup_page.Rmd @@ -0,0 +1,8 @@ +--- +title: "setup_page" +output: + html_document: + theme: united +--- + +For now this is just a place to see some information about the run you just loaded. 
diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 3ac303452de..07ea5c74648 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -5,6 +5,7 @@ library(shinythemes) library(knitr) library(shinyjs) library(shinytoastr) +library(bsplus) source("ui_utils.R", local = TRUE) @@ -39,13 +40,16 @@ ui <- fluidPage(theme = shinytheme("simplex"), hidden( div( id = "app", - sidebarLayout( - source_ui("sidebar_UI.R"), # Sidebar - mainPanel(navbarPage(title = NULL, + navbarPage(title = NULL, tabPanel(h4("Select Data"), - # tabsetPanel( - source_ui("select_data_UI.R") - # ) + tagList( + column(4, + source_ui("sidebar_UI.R") + ), + column(8, + source_ui("select_data_UI.R") + ) + ) ), tabPanel(h4("Exploratory Plots"), tabsetPanel( @@ -61,12 +65,49 @@ ui <- fluidPage(theme = shinytheme("simplex"), ) ), tabPanel(h4("Documentation"), - withMathJax(includeMarkdown("markdown/workflowPlot_doc.Rmd")) + #withMathJax(includeMarkdown("markdown/workflowPlot_doc.Rmd")) + bs_accordion_sidebar(id = "documentation") %>% + bs_append( + title_side = "App Documentation", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/app_documentation.Rmd")) + ) %>% + bs_append( + title_side = "Setup page", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/setup_page.Rmd")) + ) %>% + bs_append( + title_side = "Exploratory Plots", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/exploratory_plot.Rmd")) + ) %>% + bs_append( + title_side = "Benchmarking", + content_side = NULL, + content_main = + bs_accordion_sidebar(id = "benchmarking") %>% + bs_append( + title_side = "Settings", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/benchmarking_setting.Rmd")) + ) %>% + bs_append( + title_side = "Scores", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/benchmarking_scores.Rmd")) + ) %>% + bs_append( + title_side = "Plots", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/benchmarking_plots.Rmd")) + ) + ), + use_bs_accordion_sidebar() + + ) ) - - ) - ) ) ) ) \ No newline at end of file diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index 505afa80afa..c4a355f500f 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -1,5 +1,5 @@ -sidebarPanel( - h3("Load Model Output"), +tagList( + h4("Load Model Output"), wellPanel( p("Please select the workflow IDs to continue. 
You can select multiple IDs"), selectizeInput("all_workflow_id", "Mutliple Workflow IDs", c(), multiple=TRUE), @@ -7,7 +7,7 @@ sidebarPanel( selectizeInput("all_run_id", "Mutliple Run IDs", c(), multiple=TRUE), actionButton("load_model", "Load Model outputs") ), - h3("Load External Data"), + h4("Load External Data"), wellPanel( selectizeInput("all_site_id", "Select Site ID", c()), # If loading multiple sites in future From d3a2dc2b09f54c5533a3825f7d57c2a5887a35ce Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Fri, 5 Jul 2019 11:36:48 -0400 Subject: [PATCH 0238/2289] changing how to connect to DB in the catch part --- shiny/workflowPlot/server.R | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index aa8f3e1c07c..09444cf43b4 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -74,18 +74,14 @@ server <- shinyServer(function(input, output, session) { # --- connect to database --- observeEvent(input$submitInfo,{ - tryCatch( - { - # dbConnect$bety$con <- PEcAn.DB::db.open( - # list( - # user = input$user, - # password = input$password, - # host = input$host - # ) - # ) - + tryCatch({ + + dbConnect$bety <- dplyr::src_postgres(dbname ='bety' , + host =input$host, user = input$user, + password = input$password) + # For testing reactivity of bety connection - dbConnect$bety <- betyConnect() + #dbConnect$bety <- betyConnect() removeModal() toastr_success("Connect to Database") From 3f98958b9248887b0c7bdc4be57cf2afb87f354d Mon Sep 17 00:00:00 2001 From: sl4397 Date: Mon, 8 Jul 2019 10:31:47 -0500 Subject: [PATCH 0239/2289] change theme to yeti --- shiny/workflowPlot/ui.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 07ea5c74648..175aacd1688 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -10,7 +10,7 @@ library(bsplus) source("ui_utils.R", local = TRUE) # Define UI -ui <- fluidPage(theme = shinytheme("simplex"), +ui <- fluidPage(theme = shinytheme("yeti"), # Initializing shinyJs useShinyjs(), # Initializing shinytoastr From b8b37305d75ae27b946fbc6449dd133981c5df03 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Mon, 8 Jul 2019 23:41:13 -0500 Subject: [PATCH 0240/2289] change select data output to datatable --- .../server_files/select_data_server.R | 22 +++++++++++++------ shiny/workflowPlot/ui_files/select_data_UI.R | 8 +++++-- 2 files changed, 21 insertions(+), 9 deletions(-) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index bd4779915ad..736b368d351 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -27,23 +27,31 @@ observeEvent(input$load_model,{ wf.folder <- workflow(dbConnect$bety, ids_DF$wID[i]) %>% collect() %>% pull(folder) README.text <- c(README.text, - paste("SELECTION",i), - "============", tryCatch({ readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt")) }, error = function(e){ return(NULL) }), - diff_message, - "" + diff_message ) } - - output$README <- renderUI({HTML(paste(README.text, collapse = '
'))}) + select.data <- read.delim(textConnection(README.text), + header=FALSE,sep=":",strip.white=TRUE) %>% + unstack(V2 ~ V1) %>% + dplyr::rename(site.id = site..id) %>% + select(runtype, workflow.id, ensemble.id, pft.name, quantile, trait, run.id, + model, site.id, start.date, end.date, hostname, timestep, rundir, outdir) + + + output$datatable <- DT::renderDataTable( + DT::datatable(select.data,options = list(scrollX = TRUE)) + ) + + #output$README <- renderUI({HTML(paste(README.text, collapse = '
'))}) - output$dim_message <- renderText({sprintf("This data has %.0f rows, think about skipping exploratory plots if this is a large number...", dim(df)[1])}) + output$dim_message <- renderText({sprintf("This data has %.0f rows,\nthink about skipping exploratory plots if this is a large number...", dim(df)[1])}) incProgress(4 / 15) }) diff --git a/shiny/workflowPlot/ui_files/select_data_UI.R b/shiny/workflowPlot/ui_files/select_data_UI.R index 873ad1b06e5..dd7ac2fd569 100644 --- a/shiny/workflowPlot/ui_files/select_data_UI.R +++ b/shiny/workflowPlot/ui_files/select_data_UI.R @@ -1,6 +1,10 @@ # Select_Data tagList( - column(6, htmlOutput("README")), - column(6, verbatimTextOutput("dim_message")) + #column(6, htmlOutput("README")), + DT::dataTableOutput("datatable"), + br(), + br(), + br(), + verbatimTextOutput("dim_message") ) From b267fbf3decf4d4ed61e1877bfbefa3fcb4afc38 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 10 Jul 2019 03:07:58 -0500 Subject: [PATCH 0241/2289] update select data datatable --- shiny/workflowPlot/server.R | 1 + shiny/workflowPlot/server_files/benchmarking_server.R | 2 +- shiny/workflowPlot/server_files/select_data_server.R | 11 ++++++----- shiny/workflowPlot/server_files/sidebar_server.R | 2 +- shiny/workflowPlot/ui.R | 4 ++-- 5 files changed, 11 insertions(+), 9 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 09444cf43b4..98b163df6ea 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -16,6 +16,7 @@ lapply(c( "shiny", "highcharter", "shinyjs", "dplyr", + "plyr", "xts", "reshape2", "purrr", diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index febb30a88ce..ece85cd3fc0 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -30,7 +30,7 @@ observeEvent(input$load_model,{ brr_message <- sprintf("Would you like to save this run (run id = %.0f, ensemble id = %0.f) as a reference run?", ids_DF$runID, ens_id) button <- TRUE }else if(dim(ref_run)[1] == 1){ - bm$BRR <- ref_run %>% rename(.,reference_run_id = id) + bm$BRR <- ref_run %>% dplyr::rename(.,reference_run_id = id) bm$BRR brr_message <- sprintf("This run has been registered as a reference run (id = %.0f)", bm$BRR$reference_run_id) }else if(dim(ref_run)[1] > 1){ # There shouldn't be more than one reference run per run diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 736b368d351..e2f2c62b6fe 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -37,11 +37,12 @@ observeEvent(input$load_model,{ ) } - select.data <- read.delim(textConnection(README.text), - header=FALSE,sep=":",strip.white=TRUE) %>% - unstack(V2 ~ V1) %>% - dplyr::rename(site.id = site..id) %>% - select(runtype, workflow.id, ensemble.id, pft.name, quantile, trait, run.id, + select.data <- read.delim(textConnection(README.text), + header=FALSE,sep=":",strip.white=TRUE) %>% + dlply(.(V1), function(x) x[[2]]) %>% + as.data.frame() %>% + dplyr::rename(site.id = site..id) %>% + select(runtype, workflow.id, ensemble.id, pft.name, quantile, trait, run.id, model, site.id, start.date, end.date, hostname, timestep, rundir, outdir) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index 67140ecf766..95e0e0bb6a8 100644 --- 
a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -154,7 +154,7 @@ load.model.data <- eventReactive(input$load_data, { File_path <- inputs_df$filePath # TODO There is an issue with the db where file names are not saved properly. # To make it work with the VM, uncomment the line below - File_path <- paste0(inputs_df$filePath,'.csv') + #File_path <- paste0(inputs_df$filePath,'.csv') site.id <- inputs_df$site_id site <- PEcAn.DB::query.site(site.id,dbConnect$bety$con) observations <- PEcAn.benchmark::load_data( diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 175aacd1688..c19f55fec4e 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -43,10 +43,10 @@ ui <- fluidPage(theme = shinytheme("yeti"), navbarPage(title = NULL, tabPanel(h4("Select Data"), tagList( - column(4, + column(3, source_ui("sidebar_UI.R") ), - column(8, + column(9, source_ui("select_data_UI.R") ) ) From 04b0942c2196499d9ace3c3c4147980b5121ea23 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 10 Jul 2019 03:09:21 -0500 Subject: [PATCH 0242/2289] update plot page UI --- .../server_files/model_data_plots_server.R | 7 ++++--- .../server_files/model_plots_server.R | 7 ++++--- .../ui_files/model_data_plots_UI.R | 21 +++++++++---------- shiny/workflowPlot/ui_files/model_plots_UI.R | 21 +++++++++---------- 4 files changed, 28 insertions(+), 28 deletions(-) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index 7e62149d2b5..69ea1e97a11 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -53,7 +53,7 @@ observeEvent(input$ex_plot_modeldata,{ model_data <- dplyr::filter(load.model(), var_name == var) - updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) + #updateSliderInput(session,"smooth_n_modeldata", min = 0, max = nrow(model_data)) title <- unique(model_data$title) xlab <- unique(model_data$xlab) ylab <- unique(model_data$ylab) @@ -86,8 +86,9 @@ observeEvent(input$ex_plot_modeldata,{ plot_type <- switch(input$plotType_model, point = "scatter", line = "line") - # not sure if this method to calcualte smoothing parameter is correct - smooth_param <- input$smooth_n_model / nrow(df) *100 + + #smooth_param <- input$smooth_n_model / nrow(df) *100 + smooth_param <- input$smooth_n_model * 100 ply <- highchart() %>% hc_add_series(model.xts, name = "model", type = plot_type, diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index 4498672da68..6171859cc9d 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -52,7 +52,7 @@ observeEvent(input$ex_plot_model,{ df <- dplyr::filter(load.model(), var_name == input$var_name_model) - updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) + #updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) title <- unique(df$title) xlab <- unique(df$xlab) @@ -68,8 +68,9 @@ observeEvent(input$ex_plot_model,{ xts.df <- xts(df[,c("vals", "run_id")], order.by = df$dates) plot_type <- switch(input$plotType_model, point = "scatter", line = "line") - # not sure if this method to calcualte smoothing parameter is correct - smooth_param <- input$smooth_n_model / nrow(df) *100 + + #smooth_param <- input$smooth_n_model / nrow(df) *100 + smooth_param <- 
input$smooth_n_model * 100 ply <- highchart() diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R index 295a927647f..e36062d5d4a 100644 --- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R @@ -1,14 +1,7 @@ tabPanel( "Model-Data Plots", column( - 12, - highchartOutput("modelDataPlot") - ), - column(12, wellPanel( - actionButton("ex_plot_modeldata", "Generate Plot") - )), - column( - 12, + 3, wellPanel( selectInput("var_name_modeldata", "Variable Name", ""), textInput("units_modeldata", "Units", @@ -24,9 +17,15 @@ tabPanel( "smooth_n_modeldata", "Value for smoothing:", min = 0, - max = 100, - value = 80 - ) + max = 1, + value = 0.8 + ), + br(), + actionButton("ex_plot_modeldata", "Generate Plot", width = "100%", class="btn-primary") ) + ), + column( + 9, + highchartOutput("modelDataPlot", height = "500px") ) ) diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index 8d28f5075a9..df66db00bb2 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -1,14 +1,7 @@ tabPanel( "Model Plots", column( - 12, - highchartOutput("modelPlot") - ), - column(12, wellPanel( - actionButton("ex_plot_model", "Generate Plot") - )), - column( - 12, + 3, wellPanel( selectInput("var_name_model", "Variable Name", ""), textInput("units_model", "Units", @@ -24,9 +17,15 @@ tabPanel( "smooth_n_model", "Value for smoothing:", min = 0, - max = 100, - value = 80 - ) + max = 1, + value = 0.8 + ), + br(), + actionButton("ex_plot_model", "Generate Plot", width = "100%", class="btn-primary") ) + ), + column( + 9, + highchartOutput("modelPlot", height = "500px") ) ) From 92d87c69cc70663159311142a2ded0844533a08e Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 10 Jul 2019 08:44:14 -0500 Subject: [PATCH 0243/2289] add one line break to plot page --- shiny/workflowPlot/ui_files/model_data_plots_UI.R | 1 + shiny/workflowPlot/ui_files/model_plots_UI.R | 1 + 2 files changed, 2 insertions(+) diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R index e36062d5d4a..d23e7e3bd51 100644 --- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R @@ -1,5 +1,6 @@ tabPanel( "Model-Data Plots", + br(), column( 3, wellPanel( diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index df66db00bb2..43077fde0bc 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -1,5 +1,6 @@ tabPanel( "Model Plots", + br(), column( 3, wellPanel( From 6555cbfbf3d7e302dcb2a1856144c6e59876696d Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 10 Jul 2019 08:44:52 -0500 Subject: [PATCH 0244/2289] change menu width of documentation page --- shiny/workflowPlot/ui.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index c19f55fec4e..599437b9fc6 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -66,7 +66,8 @@ ui <- fluidPage(theme = shinytheme("yeti"), ), tabPanel(h4("Documentation"), #withMathJax(includeMarkdown("markdown/workflowPlot_doc.Rmd")) - bs_accordion_sidebar(id = "documentation") %>% + bs_accordion_sidebar(id = "documentation", + spec_side = c(width = 3, offset = 0)) %>% bs_append( title_side = "App Documentation", content_side 
= NULL, From 9c97c106abfe978be7b42114bd7e7ea309368f9c Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 10 Jul 2019 22:53:45 -0500 Subject: [PATCH 0245/2289] add date range to Explorator plot tab --- .../server_files/model_data_plots_server.R | 15 ++++++++++++++- .../server_files/model_plots_server.R | 15 ++++++++++++++- shiny/workflowPlot/ui_files/model_data_plots_UI.R | 1 + shiny/workflowPlot/ui_files/model_plots_UI.R | 1 + 4 files changed, 30 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index 69ea1e97a11..c7b3a9ea378 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -39,7 +39,16 @@ observeEvent(input$units_modeldata,{ } }) - +# update date range input limit +observe({ + df <- load.model() + updateDateRangeInput(session, "date_range2", + start = as.Date(min(df$dates)), + end = as.Date(max(df$dates)), + min = as.Date(min(df$dates)), + max = as.Date(max(df$dates)) + ) +}) observeEvent(input$ex_plot_modeldata,{ output$modelDataPlot <- renderHighchart({ @@ -78,6 +87,10 @@ observeEvent(input$ex_plot_modeldata,{ model.xts <- xts(model$value, order.by = model$Date) observasions.xts <- xts(observasions$value, order.by = observasions$Date) + date_range2 <- paste0(input$date_range2, collapse = "/") + model.xts <- model.xts[date_range2] + observasions.xts <- observasions.xts[date_range2] + unit <- ylab if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index 6171859cc9d..c1dd96cb430 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -38,6 +38,16 @@ observeEvent(input$units_model,{ } }) +# update date range input limit +observe({ + df <- load.model() + updateDateRangeInput(session, "date_range", + start = as.Date(min(df$dates)), + end = as.Date(max(df$dates)), + min = as.Date(min(df$dates)), + max = as.Date(max(df$dates)) + ) +}) observeEvent(input$ex_plot_model,{ req(input$units_model) @@ -53,7 +63,7 @@ observeEvent(input$ex_plot_model,{ df <- dplyr::filter(load.model(), var_name == input$var_name_model) #updateSliderInput(session,"smooth_n_model", min = 0, max = nrow(df)) - + title <- unique(df$title) xlab <- unique(df$xlab) ylab <- unique(df$ylab) @@ -66,6 +76,9 @@ observeEvent(input$ex_plot_model,{ df$run_id <- as.numeric(as.character(df$run_id)) xts.df <- xts(df[,c("vals", "run_id")], order.by = df$dates) + date_range <- paste0(input$date_range, collapse = "/") + xts.df <- xts.df[date_range] + plot_type <- switch(input$plotType_model, point = "scatter", line = "line") diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R index d23e7e3bd51..9fe1f6c2f10 100644 --- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R @@ -8,6 +8,7 @@ tabPanel( textInput("units_modeldata", "Units", placeholder = "Type units in udunits2 compatible format"), verbatimTextOutput("unit_text2"), + dateRangeInput("date_range2", "Date Range", separator = " - "), radioButtons( "plotType_modeldata", "Plot Type (for Model Outputs)", diff --git 
a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index 43077fde0bc..8fc5142b0e8 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -8,6 +8,7 @@ tabPanel( textInput("units_model", "Units", placeholder = "Type units in udunits2 compatible format"), verbatimTextOutput("unit_text"), + dateRangeInput("date_range", "Date Range", separator = " - "), radioButtons( "plotType_model", "Plot Type (for Model Outputs)", From 2fee50939ccd542925bf65b7f4060c5d829fa67a Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 12 Jul 2019 04:12:54 -0500 Subject: [PATCH 0246/2289] add data aggregation for exploratory plot tab --- .../server_files/model_data_plots_server.R | 20 +++++++++++ .../server_files/model_plots_server.R | 34 ++++++++++++++----- .../ui_files/model_data_plots_UI.R | 12 ++++++- shiny/workflowPlot/ui_files/model_plots_UI.R | 12 ++++++- 4 files changed, 67 insertions(+), 11 deletions(-) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index c7b3a9ea378..b22c66f3c68 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -84,13 +84,33 @@ observeEvent(input$ex_plot_modeldata,{ model <- filter(aligned_data, variable == "model") observasions <- filter(aligned_data, variable == "observations") + #convert dataframe to xts object model.xts <- xts(model$value, order.by = model$Date) observasions.xts <- xts(observasions$value, order.by = observasions$Date) + # subsetting of a date range date_range2 <- paste0(input$date_range2, collapse = "/") model.xts <- model.xts[date_range2] observasions.xts <- observasions.xts[date_range2] + # Aggregation function + aggr <- function(xts.df){ + if(input$agg2 == "daily"){ + xts.df <- apply.daily(xts.df, input$func2) + }else if(input$agg2 == "weekly"){ + xts.df <- apply.weekly(xts.df, input$func2) + }else if(input$agg2 == "monthly"){ + xts.df <- apply.monthly(xts.df, input$func2) + }else if(input$agg2 == "quarterly"){ + xts.df <- apply.quarterly(xts.df, input$func2) + }else{ + xts.df <- apply.yearly(xts.df, input$func2) + } + } + + model.xts <- aggr(model.xts) + observasions.xts <- aggr(observasions.xts) + unit <- ylab if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){ aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata) diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index c1dd96cb430..5d8579f11b1 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -74,23 +74,39 @@ observeEvent(input$ex_plot_model,{ ylab <- input$units_model } - df$run_id <- as.numeric(as.character(df$run_id)) - xts.df <- xts(df[,c("vals", "run_id")], order.by = df$dates) date_range <- paste0(input$date_range, collapse = "/") - xts.df <- xts.df[date_range] - plot_type <- switch(input$plotType_model, point = "scatter", line = "line") - - #smooth_param <- input$smooth_n_model / nrow(df) *100 + smooth_param <- input$smooth_n_model * 100 + # function that converts dataframe to xts object, + # selects subset of a date range and does data aggregtion + func <- function(df){ + xts.df <- xts(df$vals, order.by = df$dates) + xts.df <- xts.df[date_range] + if(input$agg == "daily"){ + xts.df <- apply.daily(xts.df, input$func) + 
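#   [Editor's aside -- annotation, not part of the patch. The xts period helpers
#   used in this if/else chain (apply.daily/weekly/monthly/quarterly/yearly)
#   collapse each calendar period to a single value. A minimal sketch, assuming
#   the xts package is attached:]
#     x <- xts::xts(1:4, order.by = as.Date("2019-01-01") + 0:3)
#     xts::apply.weekly(x, mean)   # one mean per calendar week
#   [input$func arrives as the string "mean" or "sum"; passing a function name as
#   a string appears to work because xts applies match.fun() to FUN internally --
#   an assumption worth verifying.]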
}else if(input$agg == "weekly"){ + xts.df <- apply.weekly(xts.df, input$func) + }else if(input$agg == "monthly"){ + xts.df <- apply.monthly(xts.df, input$func) + }else if(input$agg == "quarterly"){ + xts.df <- apply.quarterly(xts.df, input$func) + }else{ + xts.df <- apply.yearly(xts.df, input$func) + } + } + + list <- split(df, df$run_id) + xts.list <- lapply(list, func) + ply <- highchart() - for(i in unique(xts.df$run_id)){ + for(i in 1:length(xts.list)){ ply <- ply %>% - hc_add_series(xts.df[xts.df$run_id == i, "vals"], - type = plot_type, name = i, regression = TRUE, + hc_add_series(xts.list[[i]], type = plot_type, name = names(xts.list[i]), + regression = TRUE, regressionSettings = list(type = "loess", loessSmooth = smooth_param)) } diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R index 9fe1f6c2f10..f9fb70fd3c9 100644 --- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R @@ -9,6 +9,17 @@ tabPanel( placeholder = "Type units in udunits2 compatible format"), verbatimTextOutput("unit_text2"), dateRangeInput("date_range2", "Date Range", separator = " - "), + fluidRow( + column(6, + selectInput("agg2", "Aggregation", + choices = c("daily", "weekly", "monthly", "quarterly", "annually"), + selected = "daily")), + column(6, + selectInput("func2", "function", + choices = c("mean", "sum"), + selected = "mean") + ) + ), radioButtons( "plotType_modeldata", "Plot Type (for Model Outputs)", @@ -22,7 +33,6 @@ tabPanel( max = 1, value = 0.8 ), - br(), actionButton("ex_plot_modeldata", "Generate Plot", width = "100%", class="btn-primary") ) ), diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index 8fc5142b0e8..3e0f058d624 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -9,6 +9,17 @@ tabPanel( placeholder = "Type units in udunits2 compatible format"), verbatimTextOutput("unit_text"), dateRangeInput("date_range", "Date Range", separator = " - "), + fluidRow( + column(6, + selectInput("agg", "Aggregation", + choices = c("daily", "weekly", "monthly", "quarterly", "annually"), + selected = "daily")), + column(6, + selectInput("func", "function", + choices = c("mean", "sum"), + selected = "mean") + ) + ), radioButtons( "plotType_model", "Plot Type (for Model Outputs)", @@ -22,7 +33,6 @@ tabPanel( max = 1, value = 0.8 ), - br(), actionButton("ex_plot_model", "Generate Plot", width = "100%", class="btn-primary") ) ), From a7fbe77d98b303865bd63d1897f3a1c7192e7058 Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Sat, 13 Jul 2019 18:16:52 -0400 Subject: [PATCH 0247/2289] fixed Construct.R + commented out TryCatch --- models/sipnet/R/write_restart.SIPNET.R | 8 ++++-- .../R/Multi_Site_Constructors.R | 11 ++++++-- .../assim.sequential/R/sda.enkf_MultiSite.R | 28 +++++++++---------- 3 files changed, 27 insertions(+), 20 deletions(-) diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R index 6f4f701970c..9ccedb628fe 100755 --- a/models/sipnet/R/write_restart.SIPNET.R +++ b/models/sipnet/R/write_restart.SIPNET.R @@ -31,8 +31,11 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, rundir <- settings$host$rundir variables <- colnames(new.state) # values that will be used for updating other states deterministically depending on the SDA states - IC_extra <- ifelse(length(new.params$restart)>0, 
data.frame(t(new.params$restart)), data.frame()) - + if (length(new.params$restart) > 0) { + IC_extra <- data.frame(t(new.params$restart)) + } else{ + IC_extra <- data.frame() + } if (RENAME) { file.rename(file.path(outdir, runid, "sipnet.out"), @@ -65,7 +68,6 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, if ("AbvGrndWood" %in% variables) { AbvGrndWood <- udunits2::ud.convert(new.state$AbvGrndWood, "Mg/ha", "g/m^2") analysis.save[[length(analysis.save) + 1]] <- AbvGrndWood - if (new.state$AbvGrndWood < 0) analysis.save[[length(analysis.save)]] <- 0.0001 names(analysis.save[[length(analysis.save)]]) <- c("AbvGrndWood") analysis.save[[length(analysis.save) + 1]] <- IC_extra$abvGrndWoodFrac diff --git a/modules/assim.sequential/R/Multi_Site_Constructors.R b/modules/assim.sequential/R/Multi_Site_Constructors.R index e61271530f5..11ef8b4eeff 100755 --- a/modules/assim.sequential/R/Multi_Site_Constructors.R +++ b/modules/assim.sequential/R/Multi_Site_Constructors.R @@ -90,12 +90,17 @@ Construct.R<-function(site.ids, var.names, obs.t.mean, obs.t.cov){ for (site in site.ids){ choose <- sapply(var.names, agrep, x=names(obs.t.mean[[site]]), max=1, USE.NAMES = FALSE) %>% unlist # if there is no obs for this site - if(length(choose)==0){ + if(length(choose) == 0){ next; }else{ Y <- c(Y, unlist(obs.t.mean[[site]][choose])) - # collecting them - site.specific.Rs <- c(site.specific.Rs, list(as.matrix(obs.t.cov[[site]][choose,choose])) ) + #collecting them + if (ncol(obs.t.mean[[site]]) > 1) + { + site.specific.Rs <- c(site.specific.Rs, list(as.matrix(obs.t.cov[[site]][choose,choose]))) + } else { + site.specific.Rs <- c(site.specific.Rs, list(as.matrix(obs.t.cov[[site]][choose]))) + } } #make block matrix out of our collection R <- Matrix::bdiag(site.specific.Rs) %>% as.matrix() diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R index 3917c0c61c5..04a4fe3f8be 100755 --- a/modules/assim.sequential/R/sda.enkf_MultiSite.R +++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R @@ -265,7 +265,7 @@ sda.enkf.multisite <- function(settings, for(t in sim.time){ # if it beaks at least save the trace - tryCatch({ + #tryCatch({ tic(paste0("Writing configs for cycle = ", t)) # do we have obs for this time - what year is it ? @@ -633,19 +633,19 @@ sda.enkf.multisite <- function(settings, #Saving the profiling result if (control$Profiling) alltocs(file.path(settings$outdir,"SDA", "Profiling.csv")) - },error = function(e) { - # If it breaks at some steps then I lose all the info on the other variables that worked fine up to the step before the break - save(site.locs, - t, - FORECAST, - ANALYSIS, - enkf.params, - new.state, new.params, params.list, - out.configs, ensemble.samples, inputs, Viz.output, - file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) - - PEcAn.logger::logger.severe(paste0("Something just broke along the way. See if the message is helpful ", e)) - }) + # },error = function(e) { + # # If it breaks at some steps then I lose all the info on the other variables that worked fine up to the step before the break + # save(site.locs, + # t, + # FORECAST, + # ANALYSIS, + # enkf.params, + # new.state, new.params, params.list, + # out.configs, ensemble.samples, inputs, Viz.output, + # file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) + # + # PEcAn.logger::logger.severe(paste0("Something just broke along the way. 
See if the message is helpful ", e))
+  # })
 
   # remove files as SDA runs
   if (!(keepNC))

From 2ad137039293c5b5b9cf63c0451a560203ac84f3 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Sun, 14 Jul 2019 23:24:57 -0500
Subject: [PATCH 0248/2289] add None option to aggregation select box

---
 .../server_files/model_data_plots_server.R           | 11 +++++++++++
 shiny/workflowPlot/server_files/model_plots_server.R | 12 ++++++++++++
 shiny/workflowPlot/ui_files/model_data_plots_UI.R    |  2 +-
 shiny/workflowPlot/ui_files/model_plots_UI.R         |  2 +-
 4 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R
index b22c66f3c68..b9ca5106f5f 100644
--- a/shiny/workflowPlot/server_files/model_data_plots_server.R
+++ b/shiny/workflowPlot/server_files/model_data_plots_server.R
@@ -50,6 +50,15 @@ observe({
   )
 })
 
+# update "function" select box choice according to "aggregation" select box
+observe({
+  if(input$agg2 == "NONE"){
+    updateSelectInput(session, "func2", choices = "NONE")
+  }else{
+    updateSelectInput(session, "func2", choices = c("mean", "sum"))
+  }
+})
+
 observeEvent(input$ex_plot_modeldata,{
   output$modelDataPlot <- renderHighchart({
     input$ex_plot_modeldata
@@ -95,6 +104,8 @@ observeEvent(input$ex_plot_modeldata,{
 
     # Aggregation function
     aggr <- function(xts.df){
+      if(input$agg2=="NONE") return(xts.df)
+
       if(input$agg2 == "daily"){
         xts.df <- apply.daily(xts.df, input$func2)
       }else if(input$agg2 == "weekly"){

diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R
index 5d8579f11b1..94bb3180bb4 100644
--- a/shiny/workflowPlot/server_files/model_plots_server.R
+++ b/shiny/workflowPlot/server_files/model_plots_server.R
@@ -49,6 +49,15 @@ observe({
   )
 })
 
+# update "function" select box choice according to "aggregation" select box
+observe({
+  if(input$agg == "NONE"){
+    updateSelectInput(session, "func", choices = "NONE")
+  }else{
+    updateSelectInput(session, "func", choices = c("mean", "sum"))
+  }
+})
+
 observeEvent(input$ex_plot_model,{
   req(input$units_model)
 
@@ -85,6 +94,9 @@ observeEvent(input$ex_plot_model,{
   func <- function(df){
     xts.df <- xts(df$vals, order.by = df$dates)
     xts.df <- xts.df[date_range]
+
+    if(input$agg=="NONE") return(xts.df)
+
     if(input$agg == "daily"){
       xts.df <- apply.daily(xts.df, input$func)
     }else if(input$agg == "weekly"){

diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R
index f9fb70fd3c9..3722de3eba3 100644
--- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R
+++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R
@@ -12,7 +12,7 @@ tabPanel(
     fluidRow(
       column(6,
              selectInput("agg2", "Aggregation",
-                         choices = c("daily", "weekly", "monthly", "quarterly", "annually"),
+                         choices = c("NONE", "daily", "weekly", "monthly", "quarterly", "annually"),
                          selected = "daily")),
       column(6,
              selectInput("func2", "function",
                          choices = c("mean", "sum"),
                          selected = "mean")
       )
     ),

diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R
index 3e0f058d624..23e65b70a42 100644
--- a/shiny/workflowPlot/ui_files/model_plots_UI.R
+++ b/shiny/workflowPlot/ui_files/model_plots_UI.R
@@ -12,7 +12,7 @@ tabPanel(
     fluidRow(
       column(6,
             selectInput("agg", "Aggregation",
-                         choices = c("daily", "weekly", "monthly", "quarterly", "annually"),
+                         choices = c("NONE", "daily", "weekly", "monthly", "quarterly", "annually"),
                          selected = "daily")),
       column(6,
             selectInput("func", "function",
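An aside on the `aggr()`/`func()` helpers this patch extends: the five-way `if`/`else` ladder, plus the new "NONE" early return, can also be expressed as a lookup table, which would keep the two copies of the chain from drifting apart. A possible refactor sketch, not part of the patch series; `agg` and `func` mirror the app's select-box inputs:

```r
library(xts)

# xts period-apply functions keyed by the aggregation choice;
# "NONE" has no entry, so the lookup returns NULL and the series passes through
agg_funs <- list(daily     = xts::apply.daily,
                 weekly    = xts::apply.weekly,
                 monthly   = xts::apply.monthly,
                 quarterly = xts::apply.quarterly,
                 annually  = xts::apply.yearly)

aggr <- function(xts.df, agg = "NONE", func = "mean") {
  f <- agg_funs[[agg]]
  if (is.null(f)) return(xts.df)   # "NONE" (or anything unknown): no aggregation
  f(xts.df, match.fun(func))
}

# usage on a toy daily series
x <- xts(rnorm(100), order.by = as.Date("2019-01-01") + 0:99)
head(aggr(x, agg = "monthly", func = "mean"))
```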
From 7fff1b774816f73b8920c1cde6f7522e1ecad07f Mon Sep 17 00:00:00 2001
From: sl4397
Date: Mon, 15 Jul 2019 08:43:22 -0500
Subject: [PATCH 0249/2289] add scatter plot to model data plots

---
 .../server_files/model_data_plots_server.R | 29 ++++++++++++++++++-
 .../ui_files/model_data_plots_UI.R         |  5 +++-
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R
index b9ca5106f5f..16f38525063 100644
--- a/shiny/workflowPlot/server_files/model_data_plots_server.R
+++ b/shiny/workflowPlot/server_files/model_data_plots_server.R
@@ -90,6 +90,9 @@ observeEvent(input$ex_plot_modeldata,{
     # Melt dataframe to plot two types of columns together
     aligned_data <- tidyr::gather(aligned_data, variable, value, -Date)
 
+
+
+
     model <- filter(aligned_data, variable == "model")
     observasions <- filter(aligned_data, variable == "observations")
 
@@ -104,6 +107,7 @@ observeEvent(input$ex_plot_modeldata,{
 
     # Aggregation function
     aggr <- function(xts.df){
+
       if(input$agg2=="NONE") return(xts.df)
 
       if(input$agg2 == "daily"){
@@ -122,6 +126,28 @@ observeEvent(input$ex_plot_modeldata,{
     model.xts <- aggr(model.xts)
     observasions.xts <- aggr(observasions.xts)
 
+
+    output$modelDataPlotscatter <- renderHighchart({
+      scatter.df <- data.frame (
+        'y' = zoo::coredata(model.xts),
+        'x' = zoo::coredata(observasions.xts)
+      )
+
+      highchart() %>%
+        hc_chart(type = 'scatter') %>%
+        hc_add_series(scatter.df, name = "Model data comparison", showInLegend = FALSE) %>%
+        hc_legend(enabled = FALSE) %>%
+        hc_yAxis(title = list(text = "Simulated",fontSize=19))%>%
+        hc_exporting(enabled = TRUE, filename=paste0("Model_data_comparison")) %>%
+        hc_add_theme(hc_theme_elementary(yAxis = list(title = list(style = list(color = "#373b42",fontSize=15)),
+                                                      labels = list(style = list(color = "#373b42",fontSize=15))),
+                                         xAxis = list(title = list(style = list(color = "#373b42",fontSize=15)),
+                                                      labels = list(style = list(color = "#373b42",fontSize=15)))
+        ))%>%
+        hc_xAxis(title = list(text ="Observed" ,fontSize=19))
+
+    })
+
     unit <- ylab
     if(input$units_modeldata != unit & udunits2::ud.are.convertible(unit, input$units_modeldata)){
       aligned_data$value <- udunits2::ud.convert(aligned_data$value,unit,input$units_modeldata)
@@ -130,7 +156,7 @@ observeEvent(input$ex_plot_modeldata,{
     plot_type <- switch(input$plotType_model,
                         point = "scatter",
                         line = "line")
-
+    
     #smooth_param <- input$smooth_n_model / nrow(df) *100
     smooth_param <- input$smooth_n_model * 100
 
@@ -270,3 +296,4 @@ observeEvent(input$ex_plot_modeldata,{
 
 # })
 
+

diff --git a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R
index 3722de3eba3..bc43396965f 100644
--- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R
+++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R
@@ -38,6 +38,9 @@ tabPanel(
   ),
   column(
     9,
-    highchartOutput("modelDataPlot", height = "500px")
+    h3("Time series"),
+    highchartOutput("modelDataPlot", height = "500px"), br(),
+    h3("Scatter Plot"),
+    highchartOutput("modelDataPlotscatter", height = "500px")
   )
 )

From 37d7b06dab98bf0b0f83e46463149fd309185215 Mon Sep 17 00:00:00 2001
From: Hamze Dokoohaki
Date: Mon, 15 Jul 2019 10:54:49 -0400
Subject: [PATCH 0250/2289] Fixing the failed runs

---
 shiny/workflowPlot/server_files/select_data_server.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R
index e2f2c62b6fe..1a02329b60e
100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -8,6 +8,7 @@ observeEvent(input$load_model,{ incProgress(1 / 15) df <- load.model() + if (nrow(df)==0) return(NULL) # output$results_table <- DT::renderDataTable(DT::datatable(head(masterDF))) incProgress(10 / 15) From 6c27427574bcf77327b116f92877b61840340afe Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 15 Jul 2019 10:55:28 -0400 Subject: [PATCH 0251/2289] Fixing failed runs --- .../server_files/model_plots_server.R | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index 94bb3180bb4..fd86f8d1006 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -17,9 +17,19 @@ output$modelPlot <- renderHighchart({ # Update units every time a variable is selected observeEvent(input$var_name_model, { - model.df <- load.model() - default.unit <- model.df %>% filter(var_name == input$var_name_model) %>% pull(ylab) %>% unique() - updateTextInput(session, "units_model", value = default.unit) + req(input$var_name_model) + tryCatch({ + model.df <- load.model() + default.unit <- + model.df %>% filter(var_name == input$var_name_model) %>% pull(ylab) %>% unique() + updateTextInput(session, "units_model", value = default.unit) + + #Signaling the success of the operation + toastr_success("Load model outputs") + }, + error = function(e) { + toastr_error(title = "Error in reading the run files.", conditionMessage(e)) + }) }) # Check that new units are parsible and can be used for conversion From 17d15779ab2a490d2c1ee357b893e858f6a94267 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 15 Jul 2019 10:59:39 -0400 Subject: [PATCH 0252/2289] Update model_plots_server.R --- shiny/workflowPlot/server_files/model_plots_server.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server_files/model_plots_server.R b/shiny/workflowPlot/server_files/model_plots_server.R index fd86f8d1006..7a0d7dbd6fd 100644 --- a/shiny/workflowPlot/server_files/model_plots_server.R +++ b/shiny/workflowPlot/server_files/model_plots_server.R @@ -25,7 +25,7 @@ observeEvent(input$var_name_model, { updateTextInput(session, "units_model", value = default.unit) #Signaling the success of the operation - toastr_success("Load model outputs") + toastr_success("Variables were updated.") }, error = function(e) { toastr_error(title = "Error in reading the run files.", conditionMessage(e)) From 784cc65adc829a382598d5a26f24cafc10830bd9 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Tue, 16 Jul 2019 07:48:03 -0500 Subject: [PATCH 0253/2289] add XML package --- shiny/workflowPlot/server.R | 1 + 1 file changed, 1 insertion(+) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 98b163df6ea..46209900d95 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -17,6 +17,7 @@ lapply(c( "shiny", "shinyjs", "dplyr", "plyr", + "XML", "xts", "reshape2", "purrr", From 363dc58508e75d3c093af55126ed316e3cab9e61 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Tue, 16 Jul 2019 07:52:08 -0500 Subject: [PATCH 0254/2289] fix previous tryCatch Error --- .../server_files/benchmarking_server.R | 75 ++++++++++--------- 1 file changed, 38 insertions(+), 37 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R 
b/shiny/workflowPlot/server_files/benchmarking_server.R index ece85cd3fc0..3987e06e8aa 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -224,16 +224,16 @@ observeEvent(input$calc_bm,{ bm$calc_bm_message <- sprintf("Setting up benchmarks") output$reportvars <- renderText(paste(bm$bm_vars, seq_along(bm$bm_vars))) output$reportmetrics <- renderText(paste(bm$bm_metrics)) - incProgress(1 / 15) + inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) output$inputs_df_table <- renderTable(inputs_df) - incProgress(1 / 15) + config.list <- PEcAn.utils::read_web_config("../../web/config.php") output$config_list_table <- renderTable(as.data.frame.list(config.list)) - incProgress(1 / 15) + bm$bm_settings$info <- list(userid = 1000000003) # This is my user id. I have no idea how to get people to log in to their accounts through the web interface and right now the benchmarking code has sections dependent on user id - I will fix this. bm$bm_settings$database <- list( @@ -251,7 +251,7 @@ observeEvent(input$calc_bm,{ ensemble_id = bm$ens_wf$ensemble_id, new_run = FALSE ) - incProgress(3 / 15) + for(i in seq_along(bm$bm_vars)){ benchmark <- list( @@ -265,11 +265,11 @@ observeEvent(input$calc_bm,{ } bm$bm_settings$benchmarking <- append(bm$bm_settings$benchmarking,list(benchmark = benchmark)) } - incProgress(6 / 15) + # output$calc_bm_button <- renderUI({}) output$print_bm_settings <- renderPrint(bm$bm_settings) - incProgress(1 / 15) + basePath <- dplyr::tbl(dbConnect$bety, 'workflows') %>% dplyr::filter(id %in% bm$ens_wf$workflow_id) %>% dplyr::pull(folder) @@ -278,7 +278,37 @@ observeEvent(input$calc_bm,{ bm$settings_path <- settings_path bm$calc_bm_message <- sprintf("Benchmarking settings have been saved here: %s", bm$settings_path) - incProgress(2 / 15) + incProgress(1/2) + + ############################################################################## + # Run the benchmarking functions + # The following seven functions are essentially + # "the benchmarking workflow" in its entirety + + settings <- PEcAn.settings::read.settings(bm$settings_path) + bm.settings <- PEcAn.benchmark::define_benchmark(settings,dbConnect$bety) + settings <- PEcAn.benchmark::add_workflow_info(settings,dbConnect$bety) + + settings$benchmarking <- PEcAn.benchmark::bm_settings2pecan_settings(bm.settings) + settings <- PEcAn.benchmark::read_settings_BRR(settings) + + # This is a hack to get old runs that don't have the right pecan.CHECKED.xml data working + if(is.null(settings$settings.info)){ + settings$settings.info <- list( + deprecated.settings.fixed = TRUE, + settings.updated = TRUE, + checked = TRUE + ) + } + + settings <- PEcAn.settings::prepare.settings(settings) + settings$host$name <- "localhost" # This may not be the best place to set this, but it isn't set by any of the other functions. 
Another option is to have it set by the default_hostname function (if input is NULL, set to localhost) + # results <- PEcAn.settings::papply(settings, function(x) calc_benchmark(x, bety, start_year = input$start_year, end_year = input$end_year)) + results <- PEcAn.settings::papply(settings, function(x) + calc_benchmark(settings = x, bety = dbConnect$bety)) + bm$load_results <- bm$load_results + 1 + + incProgress(1/2) }) #Signaling the success of the operation toastr_success("Calculate benchmarks") @@ -286,36 +316,7 @@ observeEvent(input$calc_bm,{ error = function(e) { toastr_error(title = "Error", conditionMessage(e)) }) - - - ############################################################################## - # Run the benchmarking functions - # The following seven functions are essentially - # "the benchmarking workflow" in its entirety - - settings <- PEcAn.settings::read.settings(bm$settings_path) - bm.settings <- PEcAn.benchmark::define_benchmark(settings,dbConnect$bety) - settings <- PEcAn.benchmark::add_workflow_info(settings,dbConnect$bety) - - settings$benchmarking <- PEcAn.benchmark::bm_settings2pecan_settings(bm.settings) - settings <- PEcAn.benchmark::read_settings_BRR(settings) - - # This is a hack to get old runs that don't have the right pecan.CHECKED.xml data working - if(is.null(settings$settings.info)){ - settings$settings.info <- list( - deprecated.settings.fixed = TRUE, - settings.updated = TRUE, - checked = TRUE - ) - } - - settings <- PEcAn.settings::prepare.settings(settings) - settings$host$name <- "localhost" # This may not be the best place to set this, but it isn't set by any of the other functions. Another option is to have it set by the default_hostname function (if input is NULL, set to localhost) - # results <- PEcAn.settings::papply(settings, function(x) calc_benchmark(x, bety, start_year = input$start_year, end_year = input$end_year)) - results <- PEcAn.settings::papply(settings, function(x) - calc_benchmark(settings = x, bety = dbConnect$bety)) - bm$load_results <- bm$load_results + 1 - + }) observeEvent(bm$calc_bm_message,{ From 041b4ce8cbb748b80af5091ced7cbadb989a023c Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 16 Jul 2019 13:51:50 -0400 Subject: [PATCH 0255/2289] testing new proposal scheme --- modules/assim.batch/R/pda.emulator.ms.R | 2 +- modules/assim.batch/R/pda.utils.R | 56 ++++++++++++++++++++++--- 2 files changed, 51 insertions(+), 7 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 898c508df6f..bc36922a37f 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -51,7 +51,7 @@ pda.emulator.ms <- function(multi.settings) { tunnel_dir = dirname(multi.settings[[1]]$host$tunnel)) # Until a check function is implemented, run a predefined number of emulator rounds - n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 5, as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) + n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 3, as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) PEcAn.logger::logger.info(n_rounds, " individual PDA rounds will be run per site. 
Please wait.") repeat{ diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 80dac59a52e..b2cf3cd82bb 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1111,16 +1111,51 @@ return_multi_site_objects <- function(multi.settings){ n.param.orig, prior.ind.orig, as.numeric(settings$assim.batch$n.knot), external_knots, prior.list, prior.fn, sf, NULL) - collect_site_knots[[i]] <- sampled_knots$knots.params.temp + collect_site_knots[[i]] <- do.call("cbind", sampled_knots$knots.params.temp) } - # bring them all and sample from sites + # bring them all together + collect_site_knots <- do.call("rbind", collect_site_knots) + + # sample twice as much + collect_site_knots <- collect_site_knots[sample(1:nrow(collect_site_knots), + 2*as.numeric(settings$assim.batch$n.knot)), ] + + # bring the previous set in + need_obj <- load_pda_history(workdir = settings$outdir, + ensemble.id = settings$assim.batch$ensemble.id, + objects = c("SS", "prior.ind.all")) + previous_knots <- need_obj$SS[[1]][, -ncol(need_obj$SS[[1]])] + new_site_knots <- rbind(previous_knots, collect_site_knots[, need_obj$prior.ind.all]) + + # some knots might end up very close to each other + # systematically choose the most distant ones + PEcAn.logger::logger.info("Choosing distant points. Please wait.") + repeat{ + n <- dim(new_site_knots)[1] + if(n == as.numeric(settings$assim.batch$n.knot) + nrow(previous_knots)) break + foo <- combn(seq_len(n), 2) + dr <- dist(new_site_knots) + if(all(foo[, which.min(dr)] %in% 1:nrow(previous_knots))){ + new_site_knots <- new_site_knots[-foo[, which.min(dr)],] + previous_knots <- previous_knots[-foo[, which.min(dr)],] + }else if(any(foo[, which.min(dr)] %in% 1:nrow(previous_knots))){ + new_site_knots <- new_site_knots[-foo[, which.min(dr)][!(foo[, which.min(dr)] %in% 1:nrow(previous_knots))],] + }else{ + new_site_knots <- new_site_knots[-sample(foo[, which.min(dr)], 1),] + } + + } + new_site_knots <- new_site_knots[-(1:nrow(previous_knots)),] + these_knots <- apply(new_site_knots, 1, function(x) row.match(x, collect_site_knots[, need_obj$prior.ind.all]) ) + collect_site_knots <- collect_site_knots[these_knots,] + + ind <- 0 for(p in seq_along(settings$pfts)){ - tmp <- lapply(collect_site_knots,`[[`, p) - foo <- do.call("rbind", tmp) - foo_ind <- sample(1:nrow(foo), as.numeric(settings$assim.batch$n.knot)) - external_knots[[p]] <- foo[foo_ind,] + external_knots[[p]] <- collect_site_knots[, (ind + 1):(ind + ncol(external_knots[[p]]))] + ind <- ind + ncol(external_knots[[p]]) } + } ensembleid_list <- sapply(multi.settings, function(x) pda.create.ensemble(x, bety$con, x$workflow$id)) @@ -1191,6 +1226,15 @@ prepare_pda_remote <- function(settings, site = 1, multi_site_objects){ last_lines <- c("pda.emulator(settings, external.priors = external_priors, external.data = external.data, external.knots = external_knots, external.formats = external_formats, ensemble.id = ensemble_id, remote = TRUE)") + }else if(!is.null(settings$assim.batch$data.path)){ + + external_data_line <- paste0("load(\"", settings$assim.batch$data.path ,"\")\n") + first_lines <- c(first_lines, external_data_line) + + last_lines <- c("pda.emulator(settings, external.priors = external_priors, external.data = external.data, + external.knots = external_knots, external.formats = external_formats, + ensemble.id = ensemble_id, remote = TRUE)") + }else{ last_lines <- c("pda.emulator(settings, external.priors = external_priors, From b23a437c4f2e5c7684e07e06b3bf75700728916f 
Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 16 Jul 2019 13:52:07 -0400 Subject: [PATCH 0256/2289] small bugfix --- modules/assim.batch/R/pda.postprocess.R | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.postprocess.R b/modules/assim.batch/R/pda.postprocess.R index ce79bff077c..585f3e7382c 100644 --- a/modules/assim.batch/R/pda.postprocess.R +++ b/modules/assim.batch/R/pda.postprocess.R @@ -304,7 +304,12 @@ pda.sort.params <- function(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, }else{ - m[, i] <- mcmc.out[[c]][[sub.sample]][, idx] + if(is.null(ns)){ + m[, i] <- mcmc.out[[c]][[sub.sample]][, idx] + }else{ + m[, i] <- mcmc.out[[c]][[sub.sample]][, idx, ns] + } + } } From eb73b9c2ed20f9e4917ee97d695911c705e0126c Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 16 Jul 2019 13:54:29 -0400 Subject: [PATCH 0257/2289] add step --- modules/assim.batch/R/pda.emulator.ms.R | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index bc36922a37f..46307a90f21 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -77,7 +77,7 @@ pda.emulator.ms <- function(multi.settings) { # listen repeat{ PEcAn.logger::logger.info("Multi-site calibration running. Please wait.") - Sys.sleep(1000) + Sys.sleep(180) check_all_sites <- sapply(emulator_jobs, qsub_run_finished, multi.settings[[1]]$host, multi.settings[[1]]$host$qstat) if(all(check_all_sites)) break } @@ -184,6 +184,9 @@ pda.emulator.ms <- function(multi.settings) { hbc.restart.file <- file.path(tmp.settings$outdir,paste0("history.joint", tmp.settings$assim.batch$ensemble.id, ".Rdata")) + current.step <- "BEG OF JOINT MCMC" + save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) + gp <- unlist(gp.stack, recursive = FALSE) # start the clock From 5da8aaaccbfe5339acb922a24c2bd2b3de24dcf0 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 17 Jul 2019 05:09:32 -0500 Subject: [PATCH 0258/2289] fix select data datatable errors --- .../server_files/select_data_server.R | 44 +++++++++++++------ 1 file changed, 30 insertions(+), 14 deletions(-) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 1a02329b60e..41f1eaf7d77 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -13,11 +13,11 @@ observeEvent(input$load_model,{ incProgress(10 / 15) ids_DF <- parse_ids_from_input_runID(input$all_run_id) - README.text <- c() - + select.df <- data.frame() + for(i in seq(nrow(ids_DF))){ - + dfsub <- df %>% filter(run_id == ids_DF$runID[i]) diff.m <- diff(dfsub$dates) @@ -27,19 +27,35 @@ observeEvent(input$load_model,{ diff_message <- sprintf("timestep: %.2f %s", mode.m, diff_units.m) wf.folder <- workflow(dbConnect$bety, ids_DF$wID[i]) %>% collect() %>% pull(folder) - README.text <- c(README.text, - tryCatch({ - readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt")) - }, - error = function(e){ - return(NULL) - }), - diff_message - ) + README.text <- tryCatch({ + c(readLines(file.path(wf.folder, 'run', ids_DF$runID[i], "README.txt")), + diff_message) + }, + error = function(e){ + return(NULL) + }) + + README.df <- data.frame() + + if(!is.null(README.text)){ + README.df <- read.delim(textConnection(README.text), + header=FALSE,sep=":",strip.white=TRUE) + + if("pft names" %in% levels(README.df$V1)){ 
+        levels(README.df$V1)[levels(README.df$V1)=="pft names"] <- "pft name"
+      }
+      if(!"trait" %in% levels(README.df$V1)){
+        README.df <- rbind(README.df, data.frame(V1 = "trait", V2 = "-"))
+      }
+      if(!"quantile" %in% levels(README.df$V1)){
+        README.df <- rbind(README.df, data.frame(V1 = "quantile", V2 = "-"))
+      }
+    }
+
+    select.df <- rbind(select.df, README.df)
   }
 
-  select.data <- read.delim(textConnection(README.text),
-                            header=FALSE,sep=":",strip.white=TRUE) %>%
+  select.data <- select.df %>%
     dlply(.(V1), function(x) x[[2]]) %>%
     as.data.frame() %>%
     dplyr::rename(site.id = site..id) %>%

From e283f233093330a2aaa429b971a6aada83f0bb1c Mon Sep 17 00:00:00 2001
From: Tony Gardella
Date: Wed, 17 Jul 2019 10:55:15 -0400
Subject: [PATCH 0259/2289] add demos test

---
 base/workflow/inst/batch_runs.R | 43 ++++++++++++++++++++++++++++++---
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R
index 606b2e54040..c5a7b0ee5a8 100644
--- a/base/workflow/inst/batch_runs.R
+++ b/base/workflow/inst/batch_runs.R
@@ -74,7 +74,7 @@ create_execute_test_xml <- function(run_list){
   settings$database$dbfiles <- config.list$dbfiles_folder
   #PFT
   if (is.na(pft_name)){
-    pft <- tbl(bety, "pfts") %>% filter(modeltype_id == model.new$modeltype_id) %>% collect()
+    pft <- tbl(bety, "pfts") %>% collect() %>% filter(modeltype_id == model.new$modeltype_id)
     pft_name <- pft$name[1]
   }
   settings$pfts$pft$name <- pft_name
@@ -144,6 +144,41 @@ library(tidyverse)
 mach_name <- Sys.info()[[4]]
 mach_id <- tbl(bety, "machines")%>% filter(grepl(mach_name,hostname)) %>% pull(id)
 
+## Test the Demos
+# Site Niwot Ridge
+# Temperate Coniferous
+# Year 2003/01/01-2006/12/31
+# MET AmerifluxLBL
+# Model - Sipnet
+# Basic Run, then sensitivity and ensemble run.
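The demo blocks below push each row of a settings data frame through `create_execute_test_xml()` with `purrr::pmap()`, wrapping the call in `purrr::possibly()` so a single failed configuration yields `NA` instead of aborting the whole batch. A minimal illustration of that error-tolerant mapping; the `risky()` function here is made up for the example:

```r
library(purrr)

risky <- function(x) if (x < 0) stop("negative input") else sqrt(x)

# possibly() returns a wrapped function that gives `otherwise` on error
safe_risky <- possibly(risky, otherwise = NA_real_)

map_dbl(c(4, -1, 9), safe_risky)
#> [1]  2 NA  3
```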
+## +models <- 1000000014 +met_name <- "AmerifluxLBL" +site_id <- 772 +startdate<-"2003/01/01" +enddate<-"2006/12/31" +out.var <- "NPP" +ensemble <- FALSE +ens_size <- 100 +sensitivity <- FALSE +demo_one_run_settings <- data.frame(models,met_name, site_id, startdate,enddate,pecan_path, out.var, ensemble,ens_size, sensitivity,stringsAsFactors=FALSE) + +demo_one_result <-demo_one_run_settings %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){ + create_execute_test_xml(list(...)) +},otherwise =NA)) +) + +ensemble <- TRUE +ens_size <- 100 +sensitivity <- TRUE +demo_two_run_settings <- data.frame( models,met_name, site_id, startdate,enddate,out.var,ensemble,ens_size, sensitivity) +demo_two_result <- demo_two_run_settings %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){ + create_execute_test_xml(list(...)) +},otherwise =NA)) +) + +-------------------------------------------------------------------------- + ## Find Models #devtools::install_github("pecanproject/pecan", subdir = "api") model_ids <- tbl(bety, "dbfiles") %>% @@ -183,12 +218,14 @@ site_id_noinput<- anti_join(tbl(bety, "sites"),tbl(bety, "inputs")) %>% in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) ) %>% dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1) %>% - mutate(sitename= gsub(" ","_",.$sitename)) %>% rename_at(id.x = site_id) + mutate(sitename= gsub(" ","_",.$sitename)) %>% rename(site_id = id.x) -site_id <- site_id_noinput$id.x +site_id <- site_id_noinput$site_id site_name <- gsub(" ","_",site_id_noinput$sitename) + + #Create permutations of arg combinations options(scipen = 999) run_table <- expand.grid(models,met_name,site_id, startdate, enddate, From 543721b057d557a0657139e01460edd27fdee7d1 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 17 Jul 2019 22:19:01 -0500 Subject: [PATCH 0260/2289] replace database info --- .../server_files/benchmarking_server.R | 38 +++++++++++++------ 1 file changed, 27 insertions(+), 11 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 3987e06e8aa..226b5c9b617 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -216,11 +216,15 @@ observeEvent({ observeEvent(input$calc_bm,{ tryCatch({ + req(input$all_input_id) + req(input$all_site_id) + req(input$host) + req(input$user) + req(input$password) + withProgress(message = 'Calculation in progress', detail = 'This may take a while...', value = 0,{ - req(input$all_input_id) - req(input$all_site_id) bm$calc_bm_message <- sprintf("Setting up benchmarks") output$reportvars <- renderText(paste(bm$bm_vars, seq_along(bm$bm_vars))) output$reportmetrics <- renderText(paste(bm$bm_metrics)) @@ -231,27 +235,39 @@ observeEvent(input$calc_bm,{ output$inputs_df_table <- renderTable(inputs_df) - config.list <- PEcAn.utils::read_web_config("../../web/config.php") - output$config_list_table <- renderTable(as.data.frame.list(config.list)) + # config.list <- PEcAn.utils::read_web_config("../../web/config.php") + # output$config_list_table <- renderTable(as.data.frame.list(config.list)) bm$bm_settings$info <- list(userid = 1000000003) # This is my user id. I have no idea how to get people to log in to their accounts through the web interface and right now the benchmarking code has sections dependent on user id - I will fix this. 
+ # bm$bm_settings$database <- list( + # bety = list( + # user = config.list$db_bety_username, + # password = config.list$db_bety_password, + # host = config.list$db_bety_hostname, + # dbname = config.list$db_bety_database, + # driver = config.list$db_bety_type, + # write = TRUE + # ), + # dbfiles = config.list$dbfiles_folder + # ) + bm$bm_settings$database <- list( bety = list( - user = config.list$db_bety_username, - password = config.list$db_bety_password, - host = config.list$db_bety_hostname, - dbname = config.list$db_bety_database, - driver = config.list$db_bety_type, + user = input$user, + password = input$password, + host = input$host, + dbname = "bety", + driver = "pgsql", write = TRUE ), - dbfiles = config.list$dbfiles_folder + dbfiles = "/home/carya/output//dbfiles" ) bm$bm_settings$benchmarking <- list( ensemble_id = bm$ens_wf$ensemble_id, new_run = FALSE ) - + for(i in seq_along(bm$bm_vars)){ benchmark <- list( From 1af5e00d4ba91b47aafd07bde946d17b3a0f121a Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 17 Jul 2019 23:13:19 -0500 Subject: [PATCH 0261/2289] change benchmarking settings UI --- .../server_files/benchmarking_server.R | 15 ++++++++++----- .../ui_files/benchmarking_settings_UI.R | 10 +++++++--- 2 files changed, 17 insertions(+), 8 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 226b5c9b617..76d518dca4d 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -226,13 +226,17 @@ observeEvent(input$calc_bm,{ detail = 'This may take a while...', value = 0,{ bm$calc_bm_message <- sprintf("Setting up benchmarks") - output$reportvars <- renderText(paste(bm$bm_vars, seq_along(bm$bm_vars))) - output$reportmetrics <- renderText(paste(bm$bm_metrics)) + output$reportvars <- renderText(paste("Variable Id: ", bm$bm_vars, seq_along(bm$bm_vars))) + output$reportmetrics <- renderText(paste("Metrics Id: ", bm$bm_metrics)) inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) - output$inputs_df_table <- renderTable(inputs_df) + output$inputs_df_table <- DT::renderDataTable( + DT::datatable(inputs_df, + options = list(scrollX = TRUE), + caption = "Benchmarking Input Data Table") + ) # config.list <- PEcAn.utils::read_web_config("../../web/config.php") @@ -283,8 +287,9 @@ observeEvent(input$calc_bm,{ } - # output$calc_bm_button <- renderUI({}) - output$print_bm_settings <- renderPrint(bm$bm_settings) + output$calc_bm_button <- renderUI({}) + output$settings_title <- renderText("Benchmarking Settings:") + output$print_bm_settings <- renderPrint(print(bm$bm_settings)) basePath <- dplyr::tbl(dbConnect$bety, 'workflows') %>% dplyr::filter(id %in% bm$ens_wf$workflow_id) %>% dplyr::pull(folder) diff --git a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R index e4bf0083c04..4302f156c00 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R @@ -15,10 +15,14 @@ tabPanel("Settings", verbatimTextOutput("calc_bm_message"), # verbatimTextOutput("report"), uiOutput("calc_bm_button"), - uiOutput("inputs_df_table"), - uiOutput("config_list_table"), + br(), + DT::dataTableOutput("inputs_df_table"), + br(), + #uiOutput("config_list_table"), uiOutput("reportvars"), uiOutput("reportmetrics"), - uiOutput("print_bm_settings") + br(), + 
uiOutput("settings_title"), + verbatimTextOutput("print_bm_settings") ) ) \ No newline at end of file From 8c7d1161e6746ea30baab36e12a9edc2bd8245bd Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 19 Jul 2019 03:14:39 -0500 Subject: [PATCH 0262/2289] add pdf viewer tab --- shiny/workflowPlot/server.R | 1 + .../server_files/pdf_viewer_server.R | 57 +++++++++++++++++++ shiny/workflowPlot/ui.R | 3 +- shiny/workflowPlot/ui_files/pdf_viewer_UI.R | 12 ++++ 4 files changed, 72 insertions(+), 1 deletion(-) create mode 100644 shiny/workflowPlot/server_files/pdf_viewer_server.R create mode 100644 shiny/workflowPlot/ui_files/pdf_viewer_UI.R diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 46209900d95..a546256b484 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -112,6 +112,7 @@ server <- shinyServer(function(input, output, session) { # Page 2: Exploratory Plots source("server_files/model_plots_server.R", local = TRUE) source("server_files/model_data_plots_server.R", local = TRUE) + source("server_files/pdf_viewer_server.R", local = TRUE) # Page 3: Benchmarking observeEvent(input$load_model,{ diff --git a/shiny/workflowPlot/server_files/pdf_viewer_server.R b/shiny/workflowPlot/server_files/pdf_viewer_server.R new file mode 100644 index 00000000000..41ff519391d --- /dev/null +++ b/shiny/workflowPlot/server_files/pdf_viewer_server.R @@ -0,0 +1,57 @@ +# data table that lists file names +observe({ + req(input$all_workflow_id) + w_ids <- input$all_workflow_id + folder.path <- c() + for(w_id in w_ids){ + folder.path <- c(folder.path, workflow(dbConnect$bety, w_id) %>% collect() %>% pull(folder)) + } + + output$files <- DT::renderDT( + DT::datatable(list.files(folder.path,"*.pdf") %>% + as.data.frame()%>% + `colnames<-`(c("File name")), + escape = F, + selection="single", + style='bootstrap', + rownames = FALSE, + options = list( + dom = 'ft', + pageLength = 10, + scrollX = TRUE, + scrollCollapse = TRUE, + initComplete = DT::JS( + "function(settings, json) {", + "$(this.api().table().header()).css({'background-color': '#000', 'color': '#fff'});", + "}") + ) + ) + + ) +}) + + +# displays pdf views +observeEvent(input$files_cell_clicked, { + req(input$all_workflow_id) + w_ids <- input$all_workflow_id + folder.path <- c() + for(w_id in w_ids){ + folder.path <- c(folder.path, workflow(dbConnect$bety, w_id) %>% collect() %>% pull(folder)) + } + + if (length(input$files_cell_clicked) > 0) { + # File needs to be copied to the www folder + for(i in length(folder.path)){ + file.copy(file.path(folder.path[i], input$files_cell_clicked$value), + "www", + overwrite = T) + } + + output$pdfview <- renderUI({ + tags$iframe(style = "height:600px; width:100%; border: 1px grey solid;", + src = input$files_cell_clicked$value) + }) + } + +}) \ No newline at end of file diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 599437b9fc6..7890939b679 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -54,7 +54,8 @@ ui <- fluidPage(theme = shinytheme("yeti"), tabPanel(h4("Exploratory Plots"), tabsetPanel( source_ui("model_plots_UI.R"), - source_ui("model_data_plots_UI.R") + source_ui("model_data_plots_UI.R"), + source_ui("pdf_viewer_UI.R") ) ), tabPanel(h4("Benchmarking"), diff --git a/shiny/workflowPlot/ui_files/pdf_viewer_UI.R b/shiny/workflowPlot/ui_files/pdf_viewer_UI.R new file mode 100644 index 00000000000..45ed6747abe --- /dev/null +++ b/shiny/workflowPlot/ui_files/pdf_viewer_UI.R @@ -0,0 +1,12 @@ +tabPanel( + "PDF Viewer", + br(), + 
column( + 4, + DT::DTOutput("files") + ), + column( + 8, + uiOutput("pdfview") + ) +) \ No newline at end of file From 20af97817098b92ce9cd3461efb6994af077da89 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 19 Jul 2019 03:53:57 -0500 Subject: [PATCH 0263/2289] clean up packages --- shiny/workflowPlot/server.R | 4 ---- 1 file changed, 4 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index a546256b484..f34217a71bb 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -11,7 +11,6 @@ lapply(c("PEcAn.visualization", # Shiny and plotting packages lapply(c( "shiny", - "ggplot2", "plotly", "highcharter", "shinyjs", @@ -19,10 +18,7 @@ lapply(c( "shiny", "plyr", "XML", "xts", - "reshape2", "purrr", - "ncdf4", - "scales", "lubridate", "shinythemes", "shinytoastr" From cd29e9e3265d32dd460ad155343ba79ab33a3aa2 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 19 Jul 2019 04:55:39 -0500 Subject: [PATCH 0264/2289] create empty modal for register external data --- shiny/workflowPlot/server_files/sidebar_server.R | 15 +++++++++++++++ shiny/workflowPlot/ui.R | 4 ++-- shiny/workflowPlot/ui_files/sidebar_UI.R | 12 ++++++++++-- 3 files changed, 27 insertions(+), 4 deletions(-) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index 95e0e0bb6a8..fe330db3b9c 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -186,4 +186,19 @@ observeEvent(input$load_data, { error = function(e) { toastr_error(title = "Error", conditionMessage(e)) }) +}) + + +# Register external data +observeEvent(input$register_data,{ + showModal( + modalDialog( + title = "Register External Data", + footer = tagList( + actionButton("register_data", "Register"), + modalButton("Cancel") + ), + size = 'm' + ) + ) }) \ No newline at end of file diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 7890939b679..0bcf92d2f2b 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -43,10 +43,10 @@ ui <- fluidPage(theme = shinytheme("yeti"), navbarPage(title = NULL, tabPanel(h4("Select Data"), tagList( - column(3, + column(4, source_ui("sidebar_UI.R") ), - column(9, + column(8, source_ui("select_data_UI.R") ) ) diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index c4a355f500f..9bbc765d5a4 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -5,7 +5,7 @@ tagList( selectizeInput("all_workflow_id", "Mutliple Workflow IDs", c(), multiple=TRUE), p("Please select the run IDs. 
You can select multiple IDs"), selectizeInput("all_run_id", "Mutliple Run IDs", c(), multiple=TRUE), - actionButton("load_model", "Load Model outputs") + actionButton("load_model", h5("Load Model outputs")) ), h4("Load External Data"), wellPanel( @@ -13,6 +13,14 @@ tagList( # If loading multiple sites in future # selectizeInput("all_site_id", "Select Site ID", c(), multiple=TRUE), selectizeInput("all_input_id", "Select Input ID", c()), - actionButton("load_data", "Load External Data") + fluidRow( + column(5, + actionButton("load_data", h6("Load External Data"), width = "130px") + ), + column(7, + actionButton("register_data", h6("Register External Data"), + width = "155px", class="btn-primary") + ) + ) ) ) \ No newline at end of file From 738030271c4499d153c9ed34c102cda61cd07224 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 19 Jul 2019 06:23:21 -0500 Subject: [PATCH 0265/2289] update datatable UI --- shiny/workflowPlot/server_files/benchmarking_server.R | 10 ++++++++-- shiny/workflowPlot/server_files/select_data_server.R | 11 ++++++++++- 2 files changed, 18 insertions(+), 3 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 76d518dca4d..d8194ff5030 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -234,8 +234,14 @@ observeEvent(input$calc_bm,{ dplyr::filter(input_selection_list == input$all_input_id) output$inputs_df_table <- DT::renderDataTable( DT::datatable(inputs_df, - options = list(scrollX = TRUE), - caption = "Benchmarking Input Data Table") + options = list( + dom = 'ft', + scrollX = TRUE, + initComplete = DT::JS( + "function(settings, json) {", + "$(this.api().table().header()).css({'background-color': '#404040', 'color': '#fff'});", + "}")), + caption = "Table: Benchmarking Input Data") ) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 41f1eaf7d77..712a3e4a97c 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -64,7 +64,16 @@ observeEvent(input$load_model,{ output$datatable <- DT::renderDataTable( - DT::datatable(select.data,options = list(scrollX = TRUE)) + DT::datatable(select.data, + options = list( + dom = 'ft', + scrollX = TRUE, + initComplete = DT::JS( + "function(settings, json) {", + "$(this.api().table().header()).css({'background-color': '#404040', 'color': '#fff'});", + "}") + ) + ) ) #output$README <- renderUI({HTML(paste(README.text, collapse = '
'))}) From 7130a15e408fb6c69448a97bcb9a658207b93680 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Sun, 21 Jul 2019 21:13:47 -0500 Subject: [PATCH 0266/2289] update register external data modal UI --- .../server_files/sidebar_server.R | 49 +++++++++++++++---- 1 file changed, 39 insertions(+), 10 deletions(-) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index fe330db3b9c..c4b10c3caaa 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -73,12 +73,12 @@ load.model <- eventReactive(input$load_model,{ globalDF <- map2_df(ids_DF$wID, ids_DF$runID, ~tryCatch({ load_data_single_run(dbConnect$bety, .x, .y) - }, - error = function(e){ - toastr_error(title = paste("Error in WorkflowID", .x), - conditionMessage(e)) - return() - })) + }, + error = function(e){ + toastr_error(title = paste("Error in WorkflowID", .x), + conditionMessage(e)) + return() + })) print("Yay the model data is loaded!") print(head(globalDF)) globalDF$var_name <- as.character(globalDF$var_name) @@ -142,10 +142,10 @@ observe({ load.model.data <- eventReactive(input$load_data, { req(input$all_input_id) - + inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id)) inputs_df <- inputs_df %>% dplyr::filter(input_selection_list == input$all_input_id) - + input_id <- inputs_df$input_id # File_format <- getFileFormat(bety,input_id) File_format <- PEcAn.DB::query.format.vars(bety = dbConnect$bety, input.id = input_id) @@ -191,14 +191,43 @@ observeEvent(input$load_data, { # Register external data observeEvent(input$register_data,{ + browser() + mt <-tbl(dbConnect$bety,"mimetypes") %>% + distinct(type_string, .keep_all = FALSE) %>% + collect() + showModal( modalDialog( title = "Register External Data", + fluidRow( + column(12, + fileInput("Datafile", "Choose CSV/NC File", + width = "100%", + accept = c( + "text/csv", + "text/comma-separated-values,text/plain", + ".csv", + ".nc") + )), + tags$hr() + ), + fluidRow( + column(6, dateInput("date4", "Start Date:", value = Sys.Date()-10)), + column(6, dateInput("date4", "End Date:", value = Sys.Date()-10) ) + ), + fluidRow( + column(6, shinyTime::timeInput("time2", "Start Time:", value = Sys.time())), + column(6, shinyTime::timeInput("time2", "End Time:", value = Sys.time())) + ), + fluidRow( + column(6, selectizeInput("mimet_sel", "Mime type", mt) ), + column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ) + ), footer = tagList( - actionButton("register_data", "Register"), + actionButton("register", "Register"), modalButton("Cancel") ), - size = 'm' + size = 'l' ) ) }) \ No newline at end of file From 7863e540d66765b80a425e36db44df27a282b161 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Tue, 23 Jul 2019 03:27:42 -0500 Subject: [PATCH 0267/2289] update register external data server --- .../server_files/sidebar_server.R | 68 +++++++++++++++++-- 1 file changed, 64 insertions(+), 4 deletions(-) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index c4b10c3caaa..6035aa6dd0a 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -212,7 +212,7 @@ observeEvent(input$register_data,{ tags$hr() ), fluidRow( - column(6, dateInput("date4", "Start Date:", value = Sys.Date()-10)), + column(6, dateInput("date3", "Start Date:", value = Sys.Date()-10)), column(6, dateInput("date4", "End Date:", 
value = Sys.Date()-10) ) ), fluidRow( @@ -220,14 +220,74 @@ observeEvent(input$register_data,{ column(6, shinyTime::timeInput("time2", "End Time:", value = Sys.time())) ), fluidRow( - column(6, selectizeInput("mimet_sel", "Mime type", mt) ), - column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ) + column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ), + column(6, selectizeInput("mimet_sel", "Mime type", mt) ) ), footer = tagList( - actionButton("register", "Register"), + actionButton("register_button", "Register"), modalButton("Cancel") ), size = 'l' ) ) +}) + + +# update Mimetype select box choices according to selected format +observeEvent(input$format_sel,{ + req(input$format_sel) + mt_choices <- tbl(dbConnect$bety,"formats") %>% + left_join(tbl(dbConnect$bety,"mimetypes"), by = c("mimetype_id" = "id")) %>% + filter(name == input$format_sel) %>% + pull(type_string) %>% + unique() + + updateSelectInput(session, "mimet_sel", choices = mt_choices) +}) + + +# register input file in database +observeEvent(input$register_button,{ + tryCatch({ + inFile <- input$Datafile + file.copy(inFile$datapath, + file.path("/home/carya/output/dbfiles", inFile$name), + overwrite = T) + + PEcAn.DB::dbfile.input.insert(in.path = file.path("/home/carya/output/dbfiles", inFile$name), + in.prefix = inFile$name, + siteid = input$all_site_id, # select box + startdate = input$date3, + enddate = input$date4, + mimetype = input$mimet_sel, + formatname = input$format_sel, + #parentid = input$parentID, + con = dbConnect$bety$con + #hostname = localhost #?, #default to localhost for now + #allow.conflicting.dates#? #default to FALSE for now + ) + removeModal() + toastr_success("Register External Data") + }, + error = function(e){ + toastr_error(title = "Error", conditionMessage(e)) + }) +}) + +# update input id list when register button is clicked +observeEvent(input$register_button,{ + req(input$all_site_id) + inputs_df <- getInputs(dbConnect$bety, c(input$all_site_id)) + formats_1 <- dplyr::tbl(dbConnect$bety, 'formats_variables') %>% + dplyr::filter(format_id %in% inputs_df$format_id) + if (dplyr.count(formats_1) == 0) { + logger.warn("No inputs found. 
Returning NULL.") + return(NULL) + } else { + formats_sub <- formats_1 %>% + dplyr::pull(format_id) %>% + unique() + inputs_df <- inputs_df %>% dplyr::filter(format_id %in% formats_sub) # Only data sets with formats with associated variables will show up + updateSelectizeInput(session, "all_input_id", choices=inputs_df$input_selection_list) + } }) \ No newline at end of file From c98d007e86a45a6c1bf3e582274d31f3686c9daa Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 24 Jul 2019 04:26:52 -0500 Subject: [PATCH 0268/2289] delete mimetype select box --- .../server_files/sidebar_server.R | 33 +++++++------------ 1 file changed, 11 insertions(+), 22 deletions(-) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index 6035aa6dd0a..12c2bca4d60 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -192,9 +192,6 @@ observeEvent(input$load_data, { # Register external data observeEvent(input$register_data,{ browser() - mt <-tbl(dbConnect$bety,"mimetypes") %>% - distinct(type_string, .keep_all = FALSE) %>% - collect() showModal( modalDialog( @@ -220,8 +217,7 @@ observeEvent(input$register_data,{ column(6, shinyTime::timeInput("time2", "End Time:", value = Sys.time())) ), fluidRow( - column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ), - column(6, selectizeInput("mimet_sel", "Mime type", mt) ) + column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ) ), footer = tagList( actionButton("register_button", "Register"), @@ -233,33 +229,26 @@ observeEvent(input$register_data,{ }) -# update Mimetype select box choices according to selected format -observeEvent(input$format_sel,{ - req(input$format_sel) - mt_choices <- tbl(dbConnect$bety,"formats") %>% - left_join(tbl(dbConnect$bety,"mimetypes"), by = c("mimetype_id" = "id")) %>% - filter(name == input$format_sel) %>% - pull(type_string) %>% - unique() - - updateSelectInput(session, "mimet_sel", choices = mt_choices) -}) - # register input file in database observeEvent(input$register_button,{ tryCatch({ inFile <- input$Datafile - file.copy(inFile$datapath, - file.path("/home/carya/output/dbfiles", inFile$name), + file.copy(inFile$datapath, + file.path("/home/carya/output/dbfiles", inFile$name), overwrite = T) - + + mt <- tbl(dbConnect$bety,"formats") %>% + left_join(tbl(dbConnect$bety,"mimetypes"), by = c("mimetype_id" = "id")) %>% + filter(name == input$format_sel) %>% + pull(type_string) + PEcAn.DB::dbfile.input.insert(in.path = file.path("/home/carya/output/dbfiles", inFile$name), in.prefix = inFile$name, siteid = input$all_site_id, # select box startdate = input$date3, enddate = input$date4, - mimetype = input$mimet_sel, + mimetype = mt, formatname = input$format_sel, #parentid = input$parentID, con = dbConnect$bety$con @@ -268,7 +257,7 @@ observeEvent(input$register_button,{ ) removeModal() toastr_success("Register External Data") - }, + }, error = function(e){ toastr_error(title = "Error", conditionMessage(e)) }) From af6d3f0bf02e6c158543fdd1876e428db75bcbf1 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Wed, 24 Jul 2019 09:25:02 -0400 Subject: [PATCH 0269/2289] add to the height of pdf viewer --- shiny/workflowPlot/server_files/pdf_viewer_server.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/server_files/pdf_viewer_server.R 
b/shiny/workflowPlot/server_files/pdf_viewer_server.R
index 41ff519391d..76d5d01ba90 100644
--- a/shiny/workflowPlot/server_files/pdf_viewer_server.R
+++ b/shiny/workflowPlot/server_files/pdf_viewer_server.R
@@ -49,9 +49,9 @@ observeEvent(input$files_cell_clicked, {
   }
 
   output$pdfview <- renderUI({
-    tags$iframe(style = "height:600px; width:100%; border: 1px grey solid;",
+    tags$iframe(style = "height:800px; width:100%; border: 1px grey solid;",
                 src = input$files_cell_clicked$value)
   })
   }
-}) \ No newline at end of file
+})

From cda564d999af978b1af229aa2af1e7dc2ab85827 Mon Sep 17 00:00:00 2001
From: Hamze Dokoohaki
Date: Wed, 24 Jul 2019 09:26:47 -0400
Subject: [PATCH 0270/2289] Update pdf_viewer_UI.R

---
 shiny/workflowPlot/ui_files/pdf_viewer_UI.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/shiny/workflowPlot/ui_files/pdf_viewer_UI.R b/shiny/workflowPlot/ui_files/pdf_viewer_UI.R
index 45ed6747abe..bf9063b0dd7 100644
--- a/shiny/workflowPlot/ui_files/pdf_viewer_UI.R
+++ b/shiny/workflowPlot/ui_files/pdf_viewer_UI.R
@@ -2,11 +2,11 @@ tabPanel(
   "PDF Viewer",
   br(),
   column(
-    4,
+    3,
     DT::DTOutput("files")
   ),
   column(
-    8,
+    9,
     uiOutput("pdfview")
   )
-) \ No newline at end of file
+)

From a6cedd54099361f7ba35f857109b46472d14c470 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Wed, 24 Jul 2019 21:43:06 -0500
Subject: [PATCH 0271/2289] require site id for register external data modal

---
 shiny/workflowPlot/server_files/sidebar_server.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R
index 12c2bca4d60..b2ca1d781ae 100644
--- a/shiny/workflowPlot/server_files/sidebar_server.R
+++ b/shiny/workflowPlot/server_files/sidebar_server.R
@@ -192,6 +192,7 @@ observeEvent(input$load_data, {
 # Register external data
 observeEvent(input$register_data,{
   browser()
+  req(input$all_site_id)
 
   showModal(
     modalDialog(

From 2a42725856b99602854bcbe048ff1efddd46cbf9 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Wed, 24 Jul 2019 22:49:38 -0500
Subject: [PATCH 0272/2289] check write permission of www folder

---
 shiny/workflowPlot/server_files/pdf_viewer_server.R | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/shiny/workflowPlot/server_files/pdf_viewer_server.R b/shiny/workflowPlot/server_files/pdf_viewer_server.R
index 76d5d01ba90..66022d2e9f0 100644
--- a/shiny/workflowPlot/server_files/pdf_viewer_server.R
+++ b/shiny/workflowPlot/server_files/pdf_viewer_server.R
@@ -42,10 +42,14 @@ observeEvent(input$files_cell_clicked, {
 
   if (length(input$files_cell_clicked) > 0) {
     # File needs to be copied to the www folder
-    for(i in length(folder.path)){
-      file.copy(file.path(folder.path[i], input$files_cell_clicked$value),
-                "www",
-                overwrite = T)
+    if(file.access("www", 2) == 0){ #check write permission
+      for(i in seq_along(folder.path)){
+        file.copy(file.path(folder.path[i], input$files_cell_clicked$value),
+                  "www",
+                  overwrite = T)
+      }
+    }else{
+      print("PDF files cannot be copied to the www folder. 
Do not have write permission.") } output$pdfview <- renderUI({ From 68d499eb81a93c3b9c82dab5be8202e7ac357436 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 26 Jul 2019 05:39:58 -0500 Subject: [PATCH 0273/2289] add history runs page --- shiny/workflowPlot/server.R | 10 ++- .../server_files/history_server.R | 83 +++++++++++++++++++ shiny/workflowPlot/ui.R | 9 ++ shiny/workflowPlot/www/scripts.js | 5 ++ 4 files changed, 104 insertions(+), 3 deletions(-) create mode 100644 shiny/workflowPlot/server_files/history_server.R create mode 100644 shiny/workflowPlot/www/scripts.js diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index f34217a71bb..a6ece72aa56 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -20,6 +20,7 @@ lapply(c( "shiny", "xts", "purrr", "lubridate", + "listviewer", "shinythemes", "shinytoastr" ),function(pkg){ @@ -89,9 +90,9 @@ server <- shinyServer(function(input, output, session) { } ) }) - }) + # Hiding the animation and showing the application content hide(id = "loading-content", anim = TRUE, animType = "fade") showElement("app") @@ -104,13 +105,16 @@ server <- shinyServer(function(input, output, session) { # Page 1: Select Data source("server_files/select_data_server.R", local = TRUE) + + # Page 2: History Runs + source("server_files/history_server.R", local = TRUE) - # Page 2: Exploratory Plots + # Page 3: Exploratory Plots source("server_files/model_plots_server.R", local = TRUE) source("server_files/model_data_plots_server.R", local = TRUE) source("server_files/pdf_viewer_server.R", local = TRUE) - # Page 3: Benchmarking + # Page 4: Benchmarking observeEvent(input$load_model,{ req(input$all_run_id) ids_DF <- parse_ids_from_input_runID(input$all_run_id) diff --git a/shiny/workflowPlot/server_files/history_server.R b/shiny/workflowPlot/server_files/history_server.R new file mode 100644 index 00000000000..1f701d7baa3 --- /dev/null +++ b/shiny/workflowPlot/server_files/history_server.R @@ -0,0 +1,83 @@ +# db.query query statement +cmd <- paste0("SELECT workflows.id, workflows.folder, workflows.start_date, workflows.end_date, workflows.started_at, workflows.finished_at, attributes.value," , + "CONCAT(coalesce(sites.id, -99), ' / ', coalesce(sites.sitename, ''), ', ', ', ') AS sitename, " , + "CONCAT(coalesce(models.model_name, ''), ' ', coalesce(models.revision, '')) AS modelname, modeltypes.name " , + "FROM workflows " , + "LEFT OUTER JOIN sites on workflows.site_id=sites.id " , + "LEFT OUTER JOIN models on workflows.model_id=models.id " , + "LEFT OUTER JOIN modeltypes on models.modeltype_id=modeltypes.id " , + "LEFT OUTER JOIN attributes ON workflows.id=attributes.container_id AND attributes.container_type='workflows' ") + + +observeEvent(input$workflowclassrand, { + #browser() + history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) + workflow_id <- strsplit(input$workflowselected, "_")[[1]] + workflow_id <- trimws(workflow_id[2]) + val.jason <- history$value[history$id == workflow_id] + fld <- history$folder[history$id == workflow_id] + + if (!is.na(val.jason)) { + # server and ui for the listviewer + output$jsed <- renderJsonedit({ + jsonedit(jsonlite::fromJSON(val.jason)) + + }) + + showModal(modalDialog( + title = "Details", + tabsetPanel( + tabPanel("Info", br(), + jsoneditOutput("jsed", height = "500px") + )), + easyClose = TRUE, + footer = NULL, + size = 'l' + )) + } + +}) + + +observeEvent(input$submitInfo, { + history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) + output$historyfiles <- DT::renderDT( + 
DT::datatable(history %>% + dplyr::select(-value, -modelname) %>% + mutate(id = id %>% as.character()) %>% + mutate(id=paste0(""), + Action= paste0('
+ + +
') + + )%>% + dplyr::rename(model=name), + escape = F, + filter = 'top', + selection="none", + style='bootstrap', + rownames = FALSE, + options = list( + autowidth = TRUE, + columnDefs = list(list(width = '90px', targets = -1)), #set column width for action button + dom = 'ftp', + pageLength = 10, + scrollX = TRUE, + scrollCollapse = FALSE, + initComplete = DT::JS( + "function(settings, json) {", + "$(this.api().table().header()).css({'background-color': '#000', 'color': '#fff'});", + "}") + ) + ) + + ) +}) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 0bcf92d2f2b..327b1f5574b 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -19,6 +19,12 @@ ui <- fluidPage(theme = shinytheme("yeti"), tags$head( tags$link(rel = "stylesheet", type = "text/css", href = "style.css") ), + tags$head(tags$script(src="scripts.js")), + tags$head( tags$style(HTML(" + .modal-lg { + width: 95%; + } + "))), # Showing the animation div( id = "loading-content", div(class = "plotlybars-wrapper", @@ -51,6 +57,9 @@ ui <- fluidPage(theme = shinytheme("yeti"), ) ) ), + tabPanel(h4("History Runs"), + DT::DTOutput("historyfiles") + ), tabPanel(h4("Exploratory Plots"), tabsetPanel( source_ui("model_plots_UI.R"), diff --git a/shiny/workflowPlot/www/scripts.js b/shiny/workflowPlot/www/scripts.js new file mode 100644 index 00000000000..e3700daaa2d --- /dev/null +++ b/shiny/workflowPlot/www/scripts.js @@ -0,0 +1,5 @@ +$(document).on('click', '.workflowclass', function () { +Shiny.onInputChange('workflowselected',this.id); +// to report changes on the same selectInput +Shiny.onInputChange('workflowclassrand', Math.random()); +}); \ No newline at end of file From 5af702d2a743a2a9111c1558132be583000ac75c Mon Sep 17 00:00:00 2001 From: Tony Gardella Date: Fri, 26 Jul 2019 15:12:12 -0400 Subject: [PATCH 0274/2289] put xml into a exported function --- base/workflow/DESCRIPTION | 1 + base/workflow/NAMESPACE | 1 + base/workflow/R/create_xml.R | 113 ++++++++++++++++++++++++++++++++ base/workflow/man/create_xml.Rd | 20 ++++++ 4 files changed, 135 insertions(+) create mode 100644 base/workflow/R/create_xml.R create mode 100644 base/workflow/man/create_xml.Rd diff --git a/base/workflow/DESCRIPTION b/base/workflow/DESCRIPTION index 63a098d445e..8c1059a41fa 100644 --- a/base/workflow/DESCRIPTION +++ b/base/workflow/DESCRIPTION @@ -24,6 +24,7 @@ Description: The Predictive Ecosystem Carbon Analyzer that can be used to run the major steps of a PEcAn analysis. 
License: FreeBSD + file LICENSE
 Imports:
+    dplyr,
     PEcAn.data.atmosphere,
     PEcAn.data.land,
     PEcAn.DB,
diff --git a/base/workflow/NAMESPACE b/base/workflow/NAMESPACE
index 90f6c8fcbb0..7144c5c0973 100644
--- a/base/workflow/NAMESPACE
+++ b/base/workflow/NAMESPACE
@@ -1,5 +1,6 @@
 # Generated by roxygen2: do not edit by hand
 
+export(create_execute_test_xml)
 export(do_conversions)
 export(run.write.configs)
 export(runModule.get.trait.data)
diff --git a/base/workflow/R/create_xml.R b/base/workflow/R/create_xml.R
new file mode 100644
index 00000000000..75f73bc305b
--- /dev/null
+++ b/base/workflow/R/create_xml.R
@@ -0,0 +1,113 @@
+##' @export
+##' @aliases create_xml
+##' @name create_xml
+##' @title create_xml
+##' @description Function to create a viable PEcAn xml from a table containing the specifications of a run
+##' @param run_list PEcAn table containing specifications of runs
+##'
+##' @author Tony Gardella
+
+create_execute_test_xml <- function(run_list){
+
+  #Read in Table
+  model_id <- run_list[[1]]
+  met <- run_list[[2]]
+  site_id <- run_list[[3]]
+  start_date <- run_list[[4]]
+  end_date <- run_list[[5]]
+  pecan_path <- run_list[[6]]
+  out.var <- run_list[[7]]
+  ensemble <- run_list[[8]]
+  ens_size <- run_list[[9]]
+  sensitivity <- run_list[[10]]
+  user_id <- NA
+  pft_name <- NA
+
+  config.list <- PEcAn.utils::read_web_config(paste0(pecan_path, "/web/config.php"))
+  bety <- PEcAn.DB::betyConnect(paste0(pecan_path, "/web/config.php"))
+  con <- bety$con
+  settings <- list()
+
+  # Info
+  settings$info$notes <- paste0("Test_Run")
+  settings$info$userid <- user_id
+  settings$info$username <- "None"
+  settings$info$dates <- Sys.Date()
+  #Outdir
+  model.new <- dplyr::tbl(bety, "models") %>% dplyr::filter(id == model_id) %>% dplyr::collect()
+  outdir_base <- config.list$output_folder
+  outdir_pre <- paste(model.new$model_name, format(as.Date(start_date), "%Y-%m"),
+                      format(as.Date(end_date), "%Y-%m"),
+                      met, site_id, "test_runs",
+                      sep = "_", collapse = NULL)
+  outdir <- paste0(outdir_base, outdir_pre)
+  dir.create(outdir)
+  settings$outdir <- outdir
+  #Database BETY
+  settings$database$bety$user <- config.list$db_bety_username
+  settings$database$bety$password <- config.list$db_bety_password
+  settings$database$bety$host <- "localhost"
+  settings$database$bety$dbname <- config.list$db_bety_database
+  settings$database$bety$driver <- "PostgreSQL"
+  settings$database$bety$write <- FALSE
+  #Database dbfiles
+  settings$database$dbfiles <- config.list$dbfiles_folder
+  #PFT
+  if (is.na(pft_name)){
+    pft <- dplyr::tbl(bety, "pfts") %>% dplyr::collect() %>% dplyr::filter(modeltype_id == model.new$modeltype_id)
+    pft_name <- pft$name[1]
+  }
+  settings$pfts$pft$name <- pft_name
+  settings$pfts$pft$constants$num <- 1
+  #Meta Analysis
+  settings$meta.analysis$iter <- 3000
+  settings$meta.analysis$random.effects <- FALSE
+  #Ensemble
+  if(ensemble){
+    settings$ensemble$size <- ens_size
+    settings$ensemble$variable <- out.var
+    settings$ensemble$samplingspace$met$method <- "sampling"
+    settings$ensemble$samplingspace$parameters$method <- "uniform"
+  }else{
+    settings$ensemble$size <- 1
+    settings$ensemble$variable <- out.var
+    settings$ensemble$samplingspace$met$method <- "sampling"
+    settings$ensemble$samplingspace$parameters$method <- "uniform"
+  }
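+  # Note: the sigma values set below are standard-deviation multiples, not raw
+  # quantiles; PEcAn's sensitivity analysis converts these z-scores to
+  # quantiles internally (e.g. sigma = -2 maps to roughly the 2.3rd percentile).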
+  #Sensitivity
+  if(sensitivity){
+    settings$sensitivity.analysis$quantiles$sigma1 <- -2
+    settings$sensitivity.analysis$quantiles$sigma2 <- -1
+    settings$sensitivity.analysis$quantiles$sigma3 <- 1
+    settings$sensitivity.analysis$quantiles$sigma4 <- 2
+    names(settings$sensitivity.analysis$quantiles) <- c("sigma", "sigma", "sigma", "sigma")
+  }
+  #Model
+  settings$model$id <- model.new$id
+  #Workflow
+  settings$workflow$id <- paste0("Test_run_", model.new$model_name)
+  settings$run$site$id <- site_id
+  settings$run$site$met.start <- start_date
+  settings$run$site$met.end <- end_date
+  settings$run$inputs$met$source <- met
+  settings$run$inputs$met$output <- model.new$model_name
+  settings$run$inputs$met$username <- "pecan"
+  settings$run$start.date <- start_date
+  settings$run$end.date <- end_date
+  settings$host$name <- "localhost"
+
+  #create file and Run
+  XML::saveXML(PEcAn.settings::listToXml(settings, "pecan"), file = paste0(outdir, "/", "pecan.xml"))
+  file.copy(paste0(config.list$pecan_home, "web/", "workflow.R"), to = outdir)
+  setwd(outdir)
+  ##Name log file
+  #log <- file("workflow.Rout", open = "wt")
+  #sink(log)
+  #sink(log, type = "message")
+
+  system("./workflow.R 2>&1 | tee workflow.Rout")
+  #source("workflow.R")
+  #sink()
+}
diff --git a/base/workflow/man/create_xml.Rd b/base/workflow/man/create_xml.Rd
new file mode 100644
index 00000000000..3939ffb6db8
--- /dev/null
+++ b/base/workflow/man/create_xml.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/create_xml.R
+\name{create_xml}
+\alias{create_xml}
+\alias{create_execute_test_xml}
+\title{create_xml}
+\usage{
+create_execute_test_xml(run_list)
+}
+\arguments{
+\item{run_list}{PEcAn table containing specifications of runs}
+}
+\description{
+Function to create a viable PEcAn xml from a table containing the specifications of a run
+}
+\author{
+Tony Gardella
+}

From a0baba85475453b86c136a218d091166bb8367b9 Mon Sep 17 00:00:00 2001
From: Hamze Dokoohaki
Date: Fri, 26 Jul 2019 15:30:46 -0400
Subject: [PATCH 0275/2289] Update ui.R

---
 shiny/workflowPlot/ui.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R
index 327b1f5574b..d7cde1648ee 100644
--- a/shiny/workflowPlot/ui.R
+++ b/shiny/workflowPlot/ui.R
@@ -49,10 +49,10 @@ ui <- fluidPage(theme = shinytheme("yeti"),
            navbarPage(title = NULL,
                       tabPanel(h4("Select Data"),
                                tagList(
-                                 column(4,
+                                 column(3,
                                         source_ui("sidebar_UI.R")
                                  ),
-                                 column(8,
+                                 column(9,
                                         source_ui("select_data_UI.R")
                                  )
                                )
@@ -121,4 +121,4 @@ ui <- fluidPage(theme = shinytheme("yeti"),
       )
     )
   )
-  ) \ No newline at end of file
+  )

From b1ef2adf48494d99930f5cc155fc24d34c9af064 Mon Sep 17 00:00:00 2001
From: Hamze Dokoohaki
Date: Fri, 26 Jul 2019 15:33:19 -0400
Subject: [PATCH 0276/2289] Update sidebar_UI.R

---
 shiny/workflowPlot/ui_files/sidebar_UI.R | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R
index 9bbc765d5a4..b45e9e70fcc 100644
--- a/shiny/workflowPlot/ui_files/sidebar_UI.R
+++ b/shiny/workflowPlot/ui_files/sidebar_UI.R
@@ -5,7 +5,15 @@ tagList(
   selectizeInput("all_workflow_id", "Multiple Workflow IDs", c(), multiple=TRUE),
   p("Please select the run IDs. 
You can select multiple IDs"), selectizeInput("all_run_id", "Mutliple Run IDs", c(), multiple=TRUE), - actionButton("load_model", h5("Load Model outputs")) + + fluidRow( + column(6, + actionButton("load_model", h5("Load Model outputs")) + ), + column(6, + actionButton("NewRun", h6("New Run !"), width = "70%", class="btn-primary") + ) + ) ), h4("Load External Data"), wellPanel( @@ -14,13 +22,13 @@ tagList( # selectizeInput("all_site_id", "Select Site ID", c(), multiple=TRUE), selectizeInput("all_input_id", "Select Input ID", c()), fluidRow( - column(5, - actionButton("load_data", h6("Load External Data"), width = "130px") - ), - column(7, + column(6, actionButton("register_data", h6("Register External Data"), - width = "155px", class="btn-primary") + width = "70%", class="btn-primary") + ), + column(6, + actionButton("load_data", h6("Load External Data"), width = "70%") ) ) ) -) \ No newline at end of file +) From 079e6e4656188ae9a3070072d011121aa2b2bdf0 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Fri, 26 Jul 2019 15:36:49 -0400 Subject: [PATCH 0277/2289] Update sidebar_UI.R --- shiny/workflowPlot/ui_files/sidebar_UI.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index b45e9e70fcc..88e0ef7ccc7 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -24,10 +24,10 @@ tagList( fluidRow( column(6, actionButton("register_data", h6("Register External Data"), - width = "70%", class="btn-primary") + width = "90%", class="btn-primary") ), column(6, - actionButton("load_data", h6("Load External Data"), width = "70%") + actionButton("load_data", h6("Load External Data"), width = "90%") ) ) ) From 408731b19d53fbb11bdedffb1a39e38768be59e7 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Fri, 26 Jul 2019 16:01:38 -0400 Subject: [PATCH 0278/2289] Update ui.R --- shiny/workflowPlot/ui.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index d7cde1648ee..39e43d57374 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -22,7 +22,7 @@ ui <- fluidPage(theme = shinytheme("yeti"), tags$head(tags$script(src="scripts.js")), tags$head( tags$style(HTML(" .modal-lg { - width: 95%; + width: 85%; } "))), # Showing the animation From d1f6d5334a2a3dcc44a93ff16f22d3b7da3d05fe Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Fri, 26 Jul 2019 16:02:08 -0400 Subject: [PATCH 0279/2289] Update sidebar_UI.R --- shiny/workflowPlot/ui_files/sidebar_UI.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index 88e0ef7ccc7..110206b9c85 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -8,10 +8,11 @@ tagList( fluidRow( column(6, - actionButton("load_model", h5("Load Model outputs")) + actionButton("NewRun", h6("New Run !"), width = "70%", class="btn-primary") + ), column(6, - actionButton("NewRun", h6("New Run !"), width = "70%", class="btn-primary") + actionButton("load_model", h5("Load Model outputs")) ) ) ), From 8074b87588edb5fa581bef1edb9910459088d97e Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Sat, 27 Jul 2019 10:55:44 -0400 Subject: [PATCH 0280/2289] Update sidebar_UI.R --- shiny/workflowPlot/ui_files/sidebar_UI.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git 
a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index 110206b9c85..aed79c307fd 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -8,11 +8,11 @@ tagList( fluidRow( column(6, - actionButton("NewRun", h6("New Run !"), width = "70%", class="btn-primary") + actionButton("NewRun", h6("New Run !"), width = "100%", class="btn-primary") ), column(6, - actionButton("load_model", h5("Load Model outputs")) + actionButton("load_model", h5("Load Model outputs"), width = "100%") ) ) ), @@ -25,10 +25,10 @@ tagList( fluidRow( column(6, actionButton("register_data", h6("Register External Data"), - width = "90%", class="btn-primary") + width = "100%", class="btn-primary") ), column(6, - actionButton("load_data", h6("Load External Data"), width = "90%") + actionButton("load_data", h6("Load External Data"), width = "100%") ) ) ) From 83b39bebacf3626da73918d8f2f38e1606b21a4e Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Sat, 27 Jul 2019 10:57:51 -0400 Subject: [PATCH 0281/2289] Update sidebar_UI.R --- shiny/workflowPlot/ui_files/sidebar_UI.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index aed79c307fd..9cd2988c0f5 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -12,7 +12,7 @@ tagList( ), column(6, - actionButton("load_model", h5("Load Model outputs"), width = "100%") + actionButton("load_model", h5("Load"), width = "100%") ) ) ), @@ -24,11 +24,11 @@ tagList( selectizeInput("all_input_id", "Select Input ID", c()), fluidRow( column(6, - actionButton("register_data", h6("Register External Data"), + actionButton("register_data", h6("Register"), width = "100%", class="btn-primary") ), column(6, - actionButton("load_data", h6("Load External Data"), width = "100%") + actionButton("load_data", h6("Load"), width = "100%") ) ) ) From 11820b0578d0e32c88630a68f371bc5b25508728 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Sat, 27 Jul 2019 11:06:22 -0400 Subject: [PATCH 0282/2289] Update server.R --- shiny/workflowPlot/server.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index a6ece72aa56..e70da4ef07c 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -60,7 +60,7 @@ server <- shinyServer(function(input, output, session) { title = "Connect to Database", fluidRow(column(12,textInput('user', h4('User:'), width = "100%", value = "bety"))), fluidRow(column(12,textInput('password', h4('Password:'), width = "100%", value = "bety"))), - fluidRow(column(12,textInput('host', h4('Host:'), width = "100%", value = "localhost"))), + fluidRow(column(12,textInput('host', h4('Host:'), width = "100%", value = "psql-pecan.bu.edu"))), fluidRow( column(3), column(6,br(),actionButton('submitInfo', h4('Submit'), width = "100%", class="btn-primary")), From f3f2f2c4452fd885669d36d1e54080496c751adb Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Tue, 30 Jul 2019 00:57:56 -0400 Subject: [PATCH 0283/2289] roxygen2 docs --- models/sipnet/man/write_restart.SIPNET.Rd | 2 +- .../man/interactive.plotting.sda.Rd | 4 +-- .../man/sda.enkf.multisite.Rd | 16 +++++------ modules/data.remote/NAMESPACE | 1 + modules/data.remote/man/call_MODIS.Rd | 22 +++++++-------- .../data.remote/man/download.thredds.AGB.Rd | 27 +++++++++++++++++++ 6 files changed, 47 insertions(+), 25 
deletions(-) create mode 100644 modules/data.remote/man/download.thredds.AGB.Rd diff --git a/models/sipnet/man/write_restart.SIPNET.Rd b/models/sipnet/man/write_restart.SIPNET.Rd index 69dff5af6f6..fd20b60dd7f 100644 --- a/models/sipnet/man/write_restart.SIPNET.Rd +++ b/models/sipnet/man/write_restart.SIPNET.Rd @@ -30,7 +30,7 @@ write_restart.SIPNET(outdir, runid, start.time, stop.time, settings, NONE } \description{ -Write restart files for SIPNET +Write restart files for SIPNET. WARNING: Some variables produce illegal values < 0 and have been hardcoded to correct these values!! } \author{ Ann Raiho \email{araiho@nd.edu} diff --git a/modules/assim.sequential/man/interactive.plotting.sda.Rd b/modules/assim.sequential/man/interactive.plotting.sda.Rd index 3f60c03a02a..6c9a6320b0e 100644 --- a/modules/assim.sequential/man/interactive.plotting.sda.Rd +++ b/modules/assim.sequential/man/interactive.plotting.sda.Rd @@ -28,8 +28,8 @@ post.analysis.ggplot.violin(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS, plot.title = NULL) post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, - FORECAST, ANALYSIS, plot.title = NULL, facetg = F, readsFF = NULL, - observed_vars) + FORECAST, ANALYSIS, plot.title = NULL, facetg = FALSE, + readsFF = NULL) } \arguments{ \item{settings}{pecan standard settings list.} diff --git a/modules/assim.sequential/man/sda.enkf.multisite.Rd b/modules/assim.sequential/man/sda.enkf.multisite.Rd index b1d2d3e6589..5d741cc4c84 100644 --- a/modules/assim.sequential/man/sda.enkf.multisite.Rd +++ b/modules/assim.sequential/man/sda.enkf.multisite.Rd @@ -5,17 +5,11 @@ \title{sda.enkf.multisite} \usage{ sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL, -<<<<<<< HEAD - restart = FALSE, forceRun = TRUE, control = list(trace = TRUE, FF = - FALSE, interactivePlot = FALSE, TimeseriesPlot = FALSE, BiasPlot = FALSE, - plot.title = NULL, facet.plots = FALSE, debug = FALSE, pause = FALSE), - ...) -======= - restart = FALSE, control = list(trace = T, FF = F, interactivePlot = - FALSE, TimeseriesPlot = FALSE, BiasPlot = FALSE, plot.title = NULL, - facet.plots = FALSE, debug = FALSE, pause = FALSE, Profiling = FALSE, + restart = FALSE, forceRun = TRUE, keepNC = TRUE, + control = list(trace = TRUE, FF = FALSE, interactivePlot = FALSE, + TimeseriesPlot = FALSE, BiasPlot = FALSE, plot.title = NULL, facet.plots + = FALSE, debug = FALSE, pause = FALSE, Profiling = FALSE, OutlierDetection = FALSE), ...) ->>>>>>> 990aae68cfb51501ffeeea7cca9a4195cba6cece } \arguments{ \item{settings}{PEcAn settings object} @@ -30,6 +24,8 @@ sda.enkf.multisite(settings, obs.mean, obs.cov, Q = NULL, \item{forceRun}{Used to force job.sh files that were not run for ensembles in SDA (quick fix)} +\item{keepNC}{Used for debugging issues. .nc files are usually removed after each year in the out folder. This flag will keep the .nc + .nc.var files for futher investigations.} + \item{control}{List of flags controlling the behaviour of the SDA. 
trace for reporting back the SDA outcomes, interactivePlot for plotting the outcomes after each step, TimeseriesPlot for post analysis examination, BiasPlot for plotting the correlation between state variables, plot.title is the title of post analysis plots and debug mode allows for pausing the code and examining the variables inside the function.}
 }
diff --git a/modules/data.remote/NAMESPACE b/modules/data.remote/NAMESPACE
index 01e5532887b..d84c728b44c 100644
--- a/modules/data.remote/NAMESPACE
+++ b/modules/data.remote/NAMESPACE
@@ -3,5 +3,6 @@
 export(call_MODIS)
 export(download.LandTrendr.AGB)
 export(download.NLCD)
+export(download.thredds.AGB)
 export(extract.LandTrendr.AGB)
 export(extract_NLCD)
diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd
index f45e5798429..b686d520510 100644
--- a/modules/data.remote/man/call_MODIS.Rd
+++ b/modules/data.remote/man/call_MODIS.Rd
@@ -5,9 +5,8 @@
 \title{call_MODIS}
 \usage{
 call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0,
-  iter = NULL, product, band, band_qc = "", band_sd = "",
-  siteID = "", package_method = "MODISTools", QC_filter = FALSE,
-  progress = TRUE)
+  product, band, band_qc = "", band_sd = "", siteID = NULL,
+  package_method = "MODISTools", QC_filter = FALSE, progress = TRUE)
 }
 \arguments{
 \item{outfolder}{where the output file will be stored}
@@ -22,11 +21,6 @@ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0,
 
 \item{size}{kmAboveBelow and kmLeftRight distance in km to be included}
 
-\item{iter}{a value (e.g. i) used to help name files when call_MODIS is used parallelized or in a for loop. Default is NULL.
-
-depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json
-depends on the MODISTools package version 1.1.0}
-
 \item{product}{string value for MODIS product number}
 
 \item{band}{string value for which measurement to extract}
@@ -35,19 +29,23 @@ call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0,
 
 \item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)}
 
-\item{siteID}{string value of a PEcAn site ID. Currently only used for output filename.}
+\item{siteID}{numeric value of the BETY site id used for the output file name. Default is NULL. Only MODISTools option.}
 
 \item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)}
 
-\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False.}
+\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. Only MODISTools option.}
+
+\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. Only MODISTools option.
+
+depends on a number of Python libraries. 
sudo -H pip install numpy suds netCDF4 json +depends on the MODISTools package version 1.1.0} } \description{ Get MODIS data by date and location } \examples{ \dontrun{ -test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, iter = 1, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = TRUE) -plot(lubridate::yday(test_modistools$calendar_date), test_modistools$data, type = 'l', xlab = "day of year", ylab = test_modistools$band[1]) +test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", progress = TRUE, QC_filter = FALSE) test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") } diff --git a/modules/data.remote/man/download.thredds.AGB.Rd b/modules/data.remote/man/download.thredds.AGB.Rd new file mode 100644 index 00000000000..35dfd405cd5 --- /dev/null +++ b/modules/data.remote/man/download.thredds.AGB.Rd @@ -0,0 +1,27 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/download.thredds.R +\name{download.thredds.AGB} +\alias{download.thredds.AGB} +\title{download.thredds.AGB} +\usage{ +download.thredds.AGB(outdir = NULL, site_ids, run_parallel = FALSE, + ncores = NULL) +} +\arguments{ +\item{outdir}{Where to place output} + +\item{site_ids}{What locations to download data at?} + +\item{run_parallel}{Logical. Download and extract files in parallel?} + +\item{ncores}{Optional. If run_parallel=TRUE how many cores to use? 
If left as NULL will select max number -1} +} +\value{ +data.frame summarize the results of the function call +} +\description{ +download.thredds.AGB +} +\author{ +Bailey Morrison +} From fa9fc9ac3dd1c26e673b3c0ae9a4bfd0f2ef5b34 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Tue, 30 Jul 2019 03:41:02 -0500 Subject: [PATCH 0284/2289] add tryCatch to history runs page --- .../server_files/history_server.R | 140 ++++++++++-------- 1 file changed, 76 insertions(+), 64 deletions(-) diff --git a/shiny/workflowPlot/server_files/history_server.R b/shiny/workflowPlot/server_files/history_server.R index 1f701d7baa3..7fd3cee07d5 100644 --- a/shiny/workflowPlot/server_files/history_server.R +++ b/shiny/workflowPlot/server_files/history_server.R @@ -10,74 +10,86 @@ cmd <- paste0("SELECT workflows.id, workflows.folder, workflows.start_date, wor observeEvent(input$workflowclassrand, { - #browser() - history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) - workflow_id <- strsplit(input$workflowselected, "_")[[1]] - workflow_id <- trimws(workflow_id[2]) - val.jason <- history$value[history$id == workflow_id] - fld <- history$folder[history$id == workflow_id] - - if (!is.na(val.jason)) { - # server and ui for the listviewer - output$jsed <- renderJsonedit({ - jsonedit(jsonlite::fromJSON(val.jason)) - - }) - - showModal(modalDialog( - title = "Details", - tabsetPanel( - tabPanel("Info", br(), - jsoneditOutput("jsed", height = "500px") - )), - easyClose = TRUE, - footer = NULL, - size = 'l' - )) - } - + tryCatch({ + history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) + workflow_id <- strsplit(input$workflowselected, "_")[[1]] + workflow_id <- trimws(workflow_id[2]) + val.jason <- history$value[history$id == workflow_id] + fld <- history$folder[history$id == workflow_id] + + if (!is.na(val.jason)) { + # server and ui for the listviewer + output$jsed <- renderJsonedit({ + jsonedit(jsonlite::fromJSON(val.jason)) + + }) + + showModal(modalDialog( + title = "Details", + tabsetPanel( + tabPanel("Info", br(), + jsoneditOutput("jsed", height = "500px") + )), + easyClose = TRUE, + footer = NULL, + size = 'l' + )) + } + }, + error = function(e){ + toastr_error(title = "Error", conditionMessage(e)) + }) }) observeEvent(input$submitInfo, { - history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) - output$historyfiles <- DT::renderDT( - DT::datatable(history %>% - dplyr::select(-value, -modelname) %>% - mutate(id = id %>% as.character()) %>% - mutate(id=paste0(""), - Action= paste0('
- - -
') - - )%>% - dplyr::rename(model=name), - escape = F, - filter = 'top', - selection="none", - style='bootstrap', - rownames = FALSE, - options = list( - autowidth = TRUE, - columnDefs = list(list(width = '90px', targets = -1)), #set column width for action button - dom = 'ftp', - pageLength = 10, - scrollX = TRUE, - scrollCollapse = FALSE, - initComplete = DT::JS( - "function(settings, json) {", - "$(this.api().table().header()).css({'background-color': '#000', 'color': '#fff'});", - "}") - ) + tryCatch({ + history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) + output$historyfiles <- DT::renderDT( + DT::datatable(history %>% + dplyr::select(-value, -modelname) %>% + mutate(id = id %>% as.character()) %>% + mutate(id=paste0(""), + Action= paste0('
+ + +
')
+
+      )%>%
+        dplyr::rename(model=name),
+      escape = F,
+      filter = 'top',
+      selection="none",
+      style='bootstrap',
+      rownames = FALSE,
+      options = list(
+        autowidth = TRUE,
+        columnDefs = list(list(width = '90px', targets = -1)), #set column width for action button
+        dom = 'ftp',
+        pageLength = 10,
+        scrollX = TRUE,
+        scrollCollapse = FALSE,
+        initComplete = DT::JS(
+          "function(settings, json) {",
+          "$(this.api().table().header()).css({'background-color': '#000', 'color': '#fff'});",
+          "}")
+      )
+    )
+
+    )
+    toastr_success("History runs loaded")
+  },
+  error = function(e) {
+    toastr_error(title = "Error in History Runs Page", message = ""
+                 #, conditionMessage(e)
+    )
+  })
 })

From 7e6564ce59f6f41da66145c1a3b2fba3e8818063 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Thu, 1 Aug 2019 11:33:54 +0000
Subject: [PATCH 0285/2289] add stringr package

---
 shiny/workflowPlot/server.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R
index e70da4ef07c..78de5d47f66 100644
--- a/shiny/workflowPlot/server.R
+++ b/shiny/workflowPlot/server.R
@@ -16,6 +16,7 @@ lapply(c( "shiny",
           "shinyjs",
           "dplyr",
           "plyr",
+          "stringr",
           "XML",
           "xts",
           "purrr",

From 5fdd5738306b75d98c2804a222f617ba26be5e8c Mon Sep 17 00:00:00 2001
From: sl4397
Date: Thu, 1 Aug 2019 11:35:36 +0000
Subject: [PATCH 0286/2289] update register external data

---
 shiny/workflowPlot/server_files/sidebar_server.R | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R
index b2ca1d781ae..7874b19a934 100644
--- a/shiny/workflowPlot/server_files/sidebar_server.R
+++ b/shiny/workflowPlot/server_files/sidebar_server.R
@@ -160,6 +160,11 @@ load.model.data <- eventReactive(input$load_data, {
   observations <- PEcAn.benchmark::load_data(
     data.path = File_path, format = File_format, time.row = File_format$time.row,
     site = site, start_year = start.year, end_year = end.year)
+  # Manually select variables to deal with the error
+  # observations <- PEcAn.benchmark::load_data(
+  #   data.path = File_path, format = File_format,
+  #   site = site, start_year = start.year, end_year = end.year,
+  #   vars.used.index = c(1,2,3,5,6,7,9,10,12,13,14,15,16,19))
   print("Yay the observational data is loaded!")
   print(head(observations))
   return(observations)
@@ -191,7 +196,7 @@ observeEvent(input$load_data, {
 
 # Register external data
 observeEvent(input$register_data,{
-  browser()
+  #browser()
   req(input$all_site_id)
 
   showModal(
@@ -235,8 +240,11 @@ observeEvent(input$register_data,{
 observeEvent(input$register_button,{
   tryCatch({
     inFile <- input$Datafile
+
+    dir.name <- tools::file_path_sans_ext(inFile$name) # folder named after the file, extension stripped
+    dir.create(file.path("/home/carya/output/dbfiles", dir.name))
     file.copy(inFile$datapath,
-              file.path("/home/carya/output/dbfiles", inFile$name),
+              file.path("/home/carya/output/dbfiles", dir.name, inFile$name),
               overwrite = T)
 
     mt <- tbl(dbConnect$bety,"formats") %>%
       left_join(tbl(dbConnect$bety,"mimetypes"), by = c("mimetype_id" = "id")) %>%
      filter(name == input$format_sel) %>%
       pull(type_string)
 
-    PEcAn.DB::dbfile.input.insert(in.path = file.path("/home/carya/output/dbfiles", inFile$name),
+    PEcAn.DB::dbfile.input.insert(in.path = file.path("/home/carya/output/dbfiles", dir.name),
                                   in.prefix = inFile$name,
                                   siteid = input$all_site_id, # select box

From e970f575850138b1eb2619533c6993ff41b6c597 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Thu, 1 Aug 2019 11:32:54 -0400
Subject: [PATCH 0287/2289] mike's fixes

---
 models/sipnet/R/read_restart.SIPNET.R  | 11 ++++++-----
 models/sipnet/R/write_restart.SIPNET.R |  1 -
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R
index 86db20a8411..3650f737f29 100755
--- a/models/sipnet/R/read_restart.SIPNET.R
+++ b/models/sipnet/R/read_restart.SIPNET.R
@@ -55,6 +55,11 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p
     names(params$restart) <- c("abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac")
   }
 
+  if ("GWBI" %in% var.names) {
+    forecast[[length(forecast) + 1]] <- udunits2::ud.convert(mean(ens$GWBI), "kg/m^2/s", "Mg/ha/yr")
+    names(forecast[[length(forecast)]]) <- c("GWBI")
+  }
+
   # Reading in NET Ecosystem Exchange for SDA - unit is kg C m-2 s-1 and the average is estimated
   if ("NEE" %in% var.names) {
     forecast[[length(forecast) + 1]] <- mean(ens$NEE)  ##
@@ -83,11 +88,7 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p
     names(forecast[[length(forecast)]]) <- c("Litter")
   }
 
-  if ("NEE" %in% var.names) {
-    forecast[[length(forecast) + 1]] <- ens$NEE[last]  ## gC/m2
-    names(forecast[[length(forecast)]]) <- c("NEE")
-  }
-
+
   if ("SoilMoistFrac" %in% var.names) {
     forecast[[length(forecast) + 1]] <- ens$SoilMoistFrac[last]  ## unitless
     names(forecast[[length(forecast)]]) <- c("SoilMoistFrac")
diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R
index 9ccedb628fe..c4b8897cbca 100755
--- a/models/sipnet/R/write_restart.SIPNET.R
+++ b/models/sipnet/R/write_restart.SIPNET.R
@@ -61,7 +61,6 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings,
 
   if ("NEE" %in% variables) {
     analysis.save[[length(analysis.save) + 1]] <- new.state$NEE
-    if (new.state$NEE < 0) analysis.save[[length(analysis.save)]] <- 0
     names(analysis.save[[length(analysis.save)]]) <- c("NEE")
   }

From cfc88eeb68eeaeef24142161ef3ae43f68b6198d Mon Sep 17 00:00:00 2001
From: sl4397
Date: Fri, 2 Aug 2019 07:58:30 +0000
Subject: [PATCH 0288/2289] update model data plot UI

---
 .../server_files/model_data_plots_server.R | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R
index 16f38525063..27e64d4f751 100644
--- a/shiny/workflowPlot/server_files/model_data_plots_server.R
+++ b/shiny/workflowPlot/server_files/model_data_plots_server.R
@@ -17,6 +17,23 @@ output$modelDataPlot <- renderHighchart({
     hc_add_theme(hc_theme_flat())
 })
 
+output$modelDataPlotscatter <- renderHighchart({
+  validate(
+    need(length(input$all_workflow_id) == 1, "Select only ONE workflow ID"),
+    need(length(input$all_run_id) == 1, "Select only ONE run ID"),
+    need(input$load_model > 0, 'Select Load Data'),
+    need(length(input$all_site_id) == 1, 'Select only ONE Site ID'),
+    need(length(input$all_input_id) == 1, 'Select only ONE Input ID'),
+    need(input$load_data > 0, 'Select Load External Data')
+  )
+  highchart() %>%
+    hc_add_series(data = c(), showInLegend = F) %>%
+    hc_xAxis(title = list(text = "Time")) %>%
+    hc_yAxis(title = list(text = "y")) %>%
+    hc_title(text = "You are ready to plot!") %>%
+    hc_add_theme(hc_theme_flat())
+})
+
 # Update units every time a variable is selected
 observeEvent(input$var_name_modeldata, {
   model.df <- load.model()

From 4aabd8864b76cddc5efdd292aca90596ca368634 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Fri, 2 Aug 2019 10:04:30 -0500
Subject: [PATCH 0289/2289] 
mike's fixes --- models/sipnet/R/read_restart.SIPNET.R | 11 ++++++----- models/sipnet/R/write_restart.SIPNET.R | 1 - 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index 86db20a8411..3650f737f29 100755 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -55,6 +55,11 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p names(params$restart) <- c("abvGrndWoodFrac", "coarseRootFrac", "fineRootFrac") } + if ("GWBI" %in% var.names) { + forecast[[length(forecast) + 1]] <- udunits2::ud.convert(mean(ens$GWBI), "kg/m^2/s", "Mg/ha/yr") + names(forecast[[length(forecast)]]) <- c("GWBI") + } + # Reading in NET Ecosystem Exchange for SDA - unit is kg C m-2 s-1 and the average is estimated if ("NEE" %in% var.names) { forecast[[length(forecast) + 1]] <- mean(ens$NEE) ## @@ -83,11 +88,7 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p names(forecast[[length(forecast)]]) <- c("Litter") } - if ("NEE" %in% var.names) { - forecast[[length(forecast) + 1]] <- ens$NEE[last] ## gC/m2 - names(forecast[[length(forecast)]]) <- c("NEE") - } - + if ("SoilMoistFrac" %in% var.names) { forecast[[length(forecast) + 1]] <- ens$SoilMoistFrac[last] ## unitless names(forecast[[length(forecast)]]) <- c("SoilMoistFrac") diff --git a/models/sipnet/R/write_restart.SIPNET.R b/models/sipnet/R/write_restart.SIPNET.R index 9ccedb628fe..c4b8897cbca 100755 --- a/models/sipnet/R/write_restart.SIPNET.R +++ b/models/sipnet/R/write_restart.SIPNET.R @@ -61,7 +61,6 @@ write_restart.SIPNET <- function(outdir, runid, start.time, stop.time, settings, if ("NEE" %in% variables) { analysis.save[[length(analysis.save) + 1]] <- new.state$NEE - if (new.state$NEE < 0) analysis.save[[length(analysis.save)]] <- 0 names(analysis.save[[length(analysis.save)]]) <- c("NEE") } From cfc88eeb68eeaeef24142161ef3ae43f68b6198d Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 2 Aug 2019 07:58:30 +0000 Subject: [PATCH 0288/2289] update model data plot UI --- .../server_files/model_data_plots_server.R | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index 16f38525063..27e64d4f751 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -17,6 +17,23 @@ output$modelDataPlot <- renderHighchart({ hc_add_theme(hc_theme_flat()) }) +output$modelDataPlotscatter <- renderHighchart({ + validate( + need(length(input$all_workflow_id) == 1, "Select only ONE workflow ID"), + need(length(input$all_run_id) == 1, "Select only ONE run ID"), + need(input$load_model > 0, 'Select Load Data'), + need(length(input$all_site_id) == 1, 'Select only ONE Site ID'), + need(length(input$all_input_id) == 1, 'Select only ONE Input ID'), + need(input$load_data > 0, 'Select Load External Data') + ) + highchart() %>% + hc_add_series(data = c(), showInLegend = F) %>% + hc_xAxis(title = list(text = "Time")) %>% + hc_yAxis(title = list(text = "y")) %>% + hc_title(text = "You are ready to plot!") %>% + hc_add_theme(hc_theme_flat()) +}) + # Update units every time a variable is selected observeEvent(input$var_name_modeldata, { model.df <- load.model() From 4aabd8864b76cddc5efdd292aca90596ca368634 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 2 Aug 2019 10:04:30 -0500 Subject: [PATCH 0289/2289] 
temp benchmarking changes for AmeriFlux.level2.h.nc format --- modules/benchmark/R/calc_benchmark.R | 7 ++++--- shiny/workflowPlot/server_files/benchmarking_server.R | 7 ++++--- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/modules/benchmark/R/calc_benchmark.R b/modules/benchmark/R/calc_benchmark.R index 7e29eba2051..dce38763776 100644 --- a/modules/benchmark/R/calc_benchmark.R +++ b/modules/benchmark/R/calc_benchmark.R @@ -28,7 +28,7 @@ calc_benchmark <- function(settings, bety, start_year = NA, end_year = NA) { # Retrieve/create benchmark ensemble database record bm.ensemble <- tbl(bety,'benchmarks_ensembles') %>% filter(reference_run_id == settings$benchmarking$reference_run_id, - ensemble_id == ensemble$id, + ensemble_id %in% ensemble$id, # ensemble$id has more than one element model_id == settings$model$id) %>% collect() @@ -98,10 +98,11 @@ calc_benchmark <- function(settings, bety, start_year = NA, end_year = NA) { obvs <- load_data(data.path, format, start_year = start_year, end_year = end_year, site, vars.used.index, time.row) dat_vars <- format$vars$pecan_name # IF : is this line redundant? obvs_full <- obvs - + # ---- LOAD MODEL DATA ---- # - model_vars <- format$vars$pecan_name[-time.row] # IF : what will happen when time.row is NULL? + #model_vars <- format$vars$pecan_name[-time.row] # IF : what will happen when time.row is NULL? + model_vars <- format$vars$pecan_name # time.row is NULL # For example 'AmeriFlux.level2.h.nc' format (38) has time vars year-day-hour listed, # but storage type column is empty and it should be because in load_netcdf we extract # the time from netcdf files using the time dimension we can remove time variables from diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index d8194ff5030..14e27f0049e 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -105,8 +105,9 @@ observeEvent(input$load_data,{ bm$vars <- dplyr::inner_join( data.frame(read_name = names(bm$model_vars), pecan_name = bm$model_vars, stringsAsFactors = FALSE), - format$vars[-grep("%",format$vars$storage_type), - c("variable_id", "pecan_name")], + # format$vars[-grep("%",format$vars$storage_type), + # c("variable_id", "pecan_name")], + format$vars[c("variable_id", "pecan_name")], #for AmeriFlux.level2.h.nc, format$vars$storage_type is NA by = "pecan_name") #This will be a longer set of conditions @@ -271,7 +272,7 @@ observeEvent(input$calc_bm,{ driver = "pgsql", write = TRUE ), - dbfiles = "/home/carya/output//dbfiles" + dbfiles = "/home/carya/output/dbfiles" ) bm$bm_settings$benchmarking <- list( ensemble_id = bm$ens_wf$ensemble_id, From eecb18208d4bc1c8b11a80b5ce112a90b3ccde1f Mon Sep 17 00:00:00 2001 From: "bmorrison@bnl.gov" Date: Mon, 5 Aug 2019 16:16:26 -0400 Subject: [PATCH 0290/2289] parallel/multi-site updates --- modules/data.remote/R/call_MODIS.R | 338 +++++++++++++++++------------ 1 file changed, 197 insertions(+), 141 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 15454c0e9c7..ecf45ec0768 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -3,17 +3,15 @@ ##' @name call_MODIS ##' @title call_MODIS ##' @export -##' @param outfolder where the output file will be stored -##' @param start_date string value for beginning of date range for download in unambiguous date format (YYYYJJJ) -##' @param end_date string value for end 
of date range for download in unambiguous date format (YYYYJJJ) -##' @param lat Latitude of the pixel -##' @param lon Longitude of the pixel +##' @param outdir where the output file will be stored +##' @param var the simple name of the modis dataset variable (e.g. lai) +##' @param site_info list of site info for parsing MODIS data: list(site_id, site_name, lat, lon, time_zone) +##' @param product_dates a character vector of the start and end date of the data in YYYYJJJ +##' @param run_parallel optional method to download data paralleize. Only works if more than 1 site is needed and there are >1 CPUs available. +##' @param ncores number of cpus to use if run_parallel is set to TRUE. If you do not know the number of CPU's available, enter NULL. ##' @param size kmAboveBelow and kmLeftRight distance in km to be included ##' @param product string value for MODIS product number ##' @param band string value for which measurement to extract -##' @param band_qc string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional) -##' @param band_sd string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional) -##' @param siteID numeric value of BETY site id value to use for output file name. Default is NULL. Only MODISTools option. ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional) ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. Only MODISTools option. ##' @param progress TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. Only MODISTools option. @@ -23,32 +21,45 @@ ##' ##' @examples ##' \dontrun{ -##' test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", progress = TRUE, QC_filter = FALSE) -##' test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate") +##' test_modistools <- call_MODIS(outdir = "/data/Model_Data/modis_lai", var = "lai", site_info, product_dates = c("2001150", "2001365"), run_parallel = TRUE, size = 0, product = "MOD15A2H", band = "Lai_500m", package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) ##' } ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, product, band, band_qc = "", band_sd = "", siteID = NULL, package_method = "MODISTools", QC_filter = FALSE, progress = TRUE) { - - # makes the query search for 1 pixel and not for rasters for now. Will be changed when we provide raster output support. +call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run_parallel = FALSE, ncores = NULL, size = 0, product, band, package_method = "MODISTools", QC_filter = FALSE, progress = FALSE) { + + # makes the query search for 1 pixel and not for rasters chunks for now. Will be changed when we provide raster output support. 
size <- 0 - # reformat start and end date if they are in YYYY/MM/DD format instead of YYYYJJJ - if (grepl("/", start_date) == TRUE) - { - start_date <- as.Date(paste0(lubridate::year(start_date), sprintf("%03d",lubridate::yday(start_date))), format = "%Y%j") - } + site_coords <- data.frame(site_info$lon, site_info$lat) + names(site_coords) <- c("lon","lat") - if (grepl("/", end_date) == TRUE) + require(doParallel) + # set up CPUS for parallel runs. + if (is.null(ncores)) { + total_cores = parallel::detectCores(all.tests = FALSE, logical = TRUE) + ncores = total_cores/2 + } + if (ncores > 10) # MODIS API has a 10 download limit / computer { - end_date <- as.Date(paste0(lubridate::year(end_date), sprintf("%03d",lubridate::yday(end_date))), format = "%Y%j") + ncores = 10 } - start_date <- as.Date(start_date, format = "%Y%j") - end_date <- as.Date(end_date, format = "%Y%j") + # register CPUS if run_parallel = TRUE + if (run_parallel){ + if (progress){ + cl <- parallel::makeCluster(ncores, outfile = "") + doParallel::registerDoParallel(cl) + } else { + cl <- parallel::makeCluster(ncores) + doParallel::registerDoParallel(cl) + } + + } + #################### if package_method == MODISTools option #################### + if (package_method == "MODISTools") { #################### FUNCTION PARAMETER PRECHECKS #################### @@ -68,154 +79,199 @@ call_MODIS <- function(outfolder = "", start_date, end_date, lat, lon, size = 0, stop("Band selected is not avialable. Please selected from the bands listed above that correspond with the data product.") } + #3. check that dates asked for in function parameters are fall within dates available for modis product/bands. - dates <- MODISTools::mt_dates(product = product, lat = lat, lon = lon)$modis_date - dates <- as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") + if (run_parallel) + { + modis_dates <- as.numeric(substr(sort(unique(foreach::foreach(i = seq_along(nrow(site_coords)), .combine = c) %dopar% + MODISTools::mt_dates(product = product, lat = site_coords$lat[i], lon = site_coords$lon[i])$modis_date)), 2, 8)) + } else { + modis_dates <- as.numeric(substr(sort(unique(foreach::foreach(i = seq_along(nrow(site_coords)), .combine = c) %do% + MODISTools::mt_dates(product = product, lat = site_coords$lat[i], lon = site_coords$lon[i])$modis_date)), 2, 8)) + } + + + # check if user asked for dates for data, if not, download all dates + if (is.null(product_dates)) { + dates <- sort(unique(foreach(i = seq_along(nrow(site_coords)), .combine = c) %do% + MODISTools::mt_dates(product = product, lat = site_coords$lat[i], lon = site_coords$lon[i])$modis_date)) + #dates = as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") + } else { + # if user asked for specific dates, first make sure data is available, then inform user of any missing dates in time period asked for. 
+    start_date = as.numeric(product_dates[1])
+    end_date = as.numeric(product_dates[2])
+
+    # if all dates are available with user defined time period:
+    if (start_date >= modis_dates[1] & end_date <= modis_dates[length(modis_dates)])
+    {
+      PEcAn.logger::logger.info("Check #2: All dates are available!")
+
+      start_date = modis_dates[which(modis_dates >= start_date)[1]]
+
+      include = which(modis_dates <= end_date)
+      end_date = modis_dates[include[length(include)]]
+    }
+
+    # if start and end dates fall completely outside of available modis_dates:
+    if ((start_date < modis_dates[1] & end_date < modis_dates[1]) | (start_date > modis_dates[length(modis_dates)] & end_date > modis_dates[length(modis_dates)]))
+    {
+      stop("Start and end date are not within MODIS data product date range. Please choose another date.")
+    }
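+    # Note: both range checks here rely on modis_dates being sorted ascending,
+    # which the sort(unique(...)) call above guarantees.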
Please choose another date.") + } + + # if start and end dates are larger than the available range, but part or full range: + if ((start_date < modis_dates[1] & end_date > modis_dates[1]) | start_date < modis_dates[length(modis_dates)] & end_date > modis_dates[length(modis_dates)]) + { + PEcAn.logger::logger.warn("WARNING: Dates are partially available. Start and/or end date extend beyond modis data product availability.") + start_date = modis_dates[which(modis_dates >= start_date)[1]] + + include = which(modis_dates <= end_date) + end_date = modis_dates[include[length(include)]] + } - # extract main band data from api - dat <- MODISTools::mt_subset(lat= lat, lon = lon, product = product, band = band, - start = start_date, end = end_date, km_ab = size, km_lr = size, progress = progress) + dates = modis_dates[which(modis_dates >= start_date & modis_dates <= end_date)] + + } + + modis_dates = as.Date(as.character(modis_dates), format = "%Y%j") + dates = as.Date(as.character(dates), format = "%Y%j") + + #### Start extracting the data + PEcAn.logger::logger.info("Extracting data") + + if (run_parallel) + { + dat = foreach(i=seq_along(site_info$site_id), .combine = rbind) %dopar% + MODISTools::mt_subset(lat = site_coords$lat[i],lon = site_coords$lon[i], + product = product, + band = band, + start = dates[1], + end = dates[length(dates)], + km_ab = size, km_lr = size, + progress = progress, site_name = as.character(site_info$site_id[i])) + } else { + dat = data.frame() - # extract QC data - if(band_qc != "") + for (i in seq_along(site_info$site_id)) { - qc <- MODISTools::mt_subset(lat = lat, lon = lon, product = product, band = band_qc, - start = start, end = end, km_ab = size, km_lr = size, progress = progress) + d = MODISTools::mt_subset(lat = site_coords$lat[i], + lon = site_coords$lon[i], + product = product, + band = band, + start = dates[1], + end = dates[length(dates)], + km_ab = size, km_lr = size, + progress = progress) + dat = rbind(dat, d) } + } + + # clean up data outputs so there isn't extra data, format classes. 
+  output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$site, dat$latitude, dat$longitude, dat$pixel, dat$value), stringsAsFactors = FALSE)
+  names(output) <- c("modis_date", "calendar_date", "band", "tile", "site_id", "lat", "lon", "pixels", "data")
+
+  output[ ,5:9] <- lapply(output[ ,5:9], as.numeric)
+
+  # scale the data to proper units
+  output$data <- output$data * (as.numeric(dat$scale))
+  output$lat <- round(output$lat, 4)
+  output$lon <- round(output$lon, 4)
+
+  # remove bad values if QC filter is on
+  if (QC_filter)
+  {
+    qc_band = bands$band[which(grepl(var, bands$band, ignore.case = TRUE) & grepl("QC", bands$band, ignore.case = TRUE))]
+
+    if (run_parallel)
+    {
+      qc = foreach(i = seq_along(site_info$site_id), .combine = rbind) %dopar%
+        MODISTools::mt_subset(lat = site_coords$lat[i], lon = site_coords$lon[i],
+                              product = product,
+                              band = qc_band,
+                              start = dates[1],
+                              end = dates[length(dates)],
+                              km_ab = size, km_lr = size,
+                              progress = progress)
+    } else {
+      qc <- data.frame()
+      for (i in seq_along(site_info$site_id))
+      {
+        qc <- rbind(qc,
+                    MODISTools::mt_subset(lat = site_coords$lat[i], lon = site_coords$lon[i],
+                                          product = product,
+                                          band = qc_band,
+                                          start = dates[1],
+                                          end = dates[length(dates)],
+                                          km_ab = size, km_lr = size,
+                                          progress = progress))
+      }
+    }
+
+    output$qc = as.character(qc$value)
+
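+    # The 8-bit QC integer is converted to a binary string and only its last
+    # three bits are kept; values of "000" and "001" are treated here as
+    # good/acceptable retrievals.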
No data to output with QC filter == TRUE.")
     }
-  
-  output <- as.data.frame(cbind(dat$modis_date, dat$calendar_date, dat$band, dat$tile, dat$latitude, dat$longitude, dat$pixel, dat$value, QC, SD), stringsAsFactors = FALSE)
-  names(output) <- c("modis_date", "calendar_date", "band", "tile", "lat", "lon", "pixels", "data", "qc", "sd")
-  
-  output[ ,5:10] <- lapply(output[ ,5:10], as.numeric)
-  
-  # scale the data + stdev to proper units
-  output$data <- output$data * (as.numeric(dat$scale))
-  output$sd <- output$sd * (as.numeric(dat$scale))
-  output$lat <- round(output$lat, 4)
-  output$lon <- round(output$lon, 4)
-  
-  if (QC_filter)
+  }
+  
+  # unregister cores since parallel process is done
+  if (run_parallel)
+  {
+    stopCluster(cl)
+  }
+  
+  # break the data output up by site and save out chunks
+  if (!(is.null(outdir)))
+  {
+    for (i in seq_along(site_info$site_id))
     {
-    output$qc == as.character(output$qc)
-    for (i in seq_len(nrow(output)))
+      if (!(dir.exists(file.path(outdir, site_info$site_id[i]))))
       {
-      convert <- paste(binaryLogic::as.binary(as.integer(output$qc[i]), n = 8), collapse = "")
-      output$qc[i] <- substr(convert, nchar(convert) - 2, nchar(convert))
+        dir.create(file.path(outdir, site_info$site_id[i]))
       }
-    good <- which(output$qc %in% c("000", "001"))
-    if (length(good) > 0 || !(is.null(good)))
+      
+      site = output[which(output$site_id == site_info$site_id[i]), ]
+      site$modis_date = substr(site$modis_date, 2, nchar(site$modis_date))
+      
+      if (QC_filter)
       {
-      output = output[good, ]
+        fname = paste(site_info$site_id[i], "/", product, "_", band, "_", start_date, "-", end_date, "_filtered.csv", sep = "")
       } else {
-      PEcAn.logger::logger.warn("All QC values are bad. No data to output with QC filter == TRUE.")
+        fname = paste(site_info$site_id[i], "/", product, "_", band, "_", start_date, "-", end_date, "_unfiltered.csv", sep = "")
       }
+      fname = file.path(outdir, fname)
+      write.csv(site, fname, row.names = FALSE)
     }
-  
-  if (!(outfolder == ""))
-  {
-    if (!(is.null(siteID)))
-    {
-      fname <- paste(product, "_", band, "_", siteID, "_output_", start_date, "_", end_date, ".csv", sep = "")
-    } else {
-      fname <- paste(product, "_", band, "_output_", lat, "_", lon, "_", start_date, "_", end_date, ".csv", sep = "")
-    }
-    fname <- file.path(outfolder, fname)
-    write.csv(output, fname, row.names = FALSE)
-  }
+  }
 
   return(output)
   }
-  
-  if (package_method == "reticulate"){
-    # load in python script
-    script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote"))
-    reticulate::source_python(script.path)
-    
-    # extract the data
-    output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, start_date = start_date, end_date = end_date, size = size, band_qc = band_qc, band_sd = band_sd)
-    output[ ,5:10] <- lapply(output[ ,5:10], as.numeric)
-    output$lat <- round(output$lat, 4)
-    output$lon <- round(output$lon, 4)
-    
-    if (!(is.null(outfolder)))
-    {
-      fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
-      fname <- file.path(outfolder, fname)
-      write.csv(output, fname)
-    }
-    
-    return(output)}
+  ########### temporarily removed for now as python2 is being discontinued and modules are not working correctly
+  # if (package_method == "reticulate"){
+  #   # load in python script
+  #   script.path <- file.path(system.file("extract_modis_data.py", package = "PEcAn.data.remote"))
+  #   reticulate::source_python(script.path)
+  # 
+  #   # extract the data
+  #   output <- extract_modis_data(product = product, band = band, lat = lat, lon = lon, 
start_date = start_date, end_date = end_date, size = size, band_qc = band_qc, band_sd = band_sd)
+  # output[ ,5:10] <- lapply(output[ ,5:10], as.numeric)
+  # output$lat <- round(output$lat, 4)
+  # output$lon <- round(output$lon, 4)
+  # 
+  # if (!(is.null(outdir)))
+  # {
+  #   fname <- paste(product, "_", band, "_", start_date, "_", end_date, "_", lat, "_", lon, ".csv", sep = "")
+  #   fname <- file.path(outdir, fname)
+  #   write.csv(output, fname)
+  # }
+  # 
+  # return(output)}
 }

From 5eda79fa3d2aa2068742fa24f9040285b6d4ec88 Mon Sep 17 00:00:00 2001
From: "bmorrison@bnl.gov"
Date: Mon, 5 Aug 2019 16:40:59 -0400
Subject: [PATCH 0291/2289] roxygenized

---
 modules/data.remote/DESCRIPTION       |  3 +-
 modules/data.remote/R/call_MODIS.R    | 63 +++++++++++++--------------
 modules/data.remote/man/call_MODIS.Rd | 29 +++++-------
 3 files changed, 44 insertions(+), 51 deletions(-)

diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION
index 227b2b93a88..f5dfe94a1bf 100644
--- a/modules/data.remote/DESCRIPTION
+++ b/modules/data.remote/DESCRIPTION
@@ -13,7 +13,8 @@ Imports:
     PEcAn.logger,
     PEcAn.remote,
     stringr (>= 1.1.0),
-    binaryLogic
+    binaryLogic,
+    doParallel
 Suggests:
     testthat (>= 1.0.2),
     ggplot2,
diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R
index ecf45ec0768..e6fae8cb250 100755
--- a/modules/data.remote/R/call_MODIS.R
+++ b/modules/data.remote/R/call_MODIS.R
@@ -9,24 +9,23 @@
 ##' @param product_dates a character vector of the start and end date of the data in YYYYJJJ
 ##' @param run_parallel optional method to download data in parallel. Only works if more than 1 site is needed and there are >1 CPUs available.
 ##' @param ncores number of CPUs to use if run_parallel is set to TRUE. If you do not know the number of CPUs available, enter NULL.
-##' @param size kmAboveBelow and kmLeftRight distance in km to be included
 ##' @param product string value for MODIS product number
 ##' @param band string value for which measurement to extract
 ##' @param package_method string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)
 ##' @param QC_filter Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is FALSE. Only MODISTools option.
 ##' @param progress TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is FALSE. Only MODISTools option.
 ##'
-##' depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json
+##' Requires Python3 for the reticulate method option, plus several Python libraries: 
sudo -H pip install numpy suds netCDF4 json ##' depends on the MODISTools package version 1.1.0 ##' ##' @examples ##' \dontrun{ -##' test_modistools <- call_MODIS(outdir = "/data/Model_Data/modis_lai", var = "lai", site_info, product_dates = c("2001150", "2001365"), run_parallel = TRUE, size = 0, product = "MOD15A2H", band = "Lai_500m", package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) +##' test_modistools <- call_MODIS(var = "lai", site_info = site_info, product_dates = c("2001150", "2001365"), run_parallel = TRUE, product = "MOD15A2H", band = "Lai_500m", package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) ##' } ##' ##' @author Bailey Morrison ##' -call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run_parallel = FALSE, ncores = NULL, size = 0, product, band, package_method = "MODISTools", QC_filter = FALSE, progress = FALSE) { +call_MODIS <- function(outdir = NULL, var, site_info, product_dates, run_parallel = FALSE, ncores = NULL, product, band, package_method = "MODISTools", QC_filter = FALSE, progress = FALSE) { # makes the query search for 1 pixel and not for rasters chunks for now. Will be changed when we provide raster output support. size <- 0 @@ -37,12 +36,12 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run require(doParallel) # set up CPUS for parallel runs. if (is.null(ncores)) { - total_cores = parallel::detectCores(all.tests = FALSE, logical = TRUE) - ncores = total_cores/2 + total_cores <- parallel::detectCores(all.tests = FALSE, logical = TRUE) + ncores <- total_cores-2 } if (ncores > 10) # MODIS API has a 10 download limit / computer { - ncores = 10 + ncores <- 10 } # register CPUS if run_parallel = TRUE @@ -64,7 +63,7 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run { #################### FUNCTION PARAMETER PRECHECKS #################### #1. check that modis product is available - products = MODISTools::mt_products() + products <- MODISTools::mt_products() if (!(product %in% products$product)) { PEcAn.logger::logger.warn(products) @@ -98,18 +97,18 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run #dates = as.Date(as.character(substr(dates, 2, nchar(dates))), format = "%Y%j") } else { # if user asked for specific dates, first make sure data is available, then inform user of any missing dates in time period asked for. - start_date = as.numeric(product_dates[1]) - end_date = as.numeric(product_dates[2]) + start_date <- as.numeric(product_dates[1]) + end_date <- as.numeric(product_dates[2]) # if all dates are available with user defined time period: if (start_date >= modis_dates[1] & end_date <= modis_dates[length(modis_dates)]) { PEcAn.logger::logger.info("Check #2: All dates are available!") - start_date = modis_dates[which(modis_dates >= start_date)[1]] + start_date <- modis_dates[which(modis_dates >= start_date)[1]] - include = which(modis_dates <= end_date) - end_date = modis_dates[include[length(include)]] + include <- which(modis_dates <= end_date) + end_date <- modis_dates[include[length(include)]] } # if start and end dates fall completely outside of available modis_dates: @@ -123,25 +122,25 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run if ((start_date < modis_dates[1] & end_date > modis_dates[1]) | start_date < modis_dates[length(modis_dates)] & end_date > modis_dates[length(modis_dates)]) { PEcAn.logger::logger.warn("WARNING: Dates are partially available. 
Start and/or end date extend beyond modis data product availability.") - start_date = modis_dates[which(modis_dates >= start_date)[1]] + start_date <- modis_dates[which(modis_dates >= start_date)[1]] - include = which(modis_dates <= end_date) - end_date = modis_dates[include[length(include)]] + include <- which(modis_dates <= end_date) + end_date <- modis_dates[include[length(include)]] } - dates = modis_dates[which(modis_dates >= start_date & modis_dates <= end_date)] + dates <- modis_dates[which(modis_dates >= start_date & modis_dates <= end_date)] } - modis_dates = as.Date(as.character(modis_dates), format = "%Y%j") - dates = as.Date(as.character(dates), format = "%Y%j") + modis_dates <- as.Date(as.character(modis_dates), format = "%Y%j") + dates <- as.Date(as.character(dates), format = "%Y%j") #### Start extracting the data PEcAn.logger::logger.info("Extracting data") if (run_parallel) { - dat = foreach(i=seq_along(site_info$site_id), .combine = rbind) %dopar% + dat <- foreach(i=seq_along(site_info$site_id), .combine = rbind) %dopar% MODISTools::mt_subset(lat = site_coords$lat[i],lon = site_coords$lon[i], product = product, band = band, @@ -150,11 +149,11 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run km_ab = size, km_lr = size, progress = progress, site_name = as.character(site_info$site_id[i])) } else { - dat = data.frame() + dat <- data.frame() for (i in seq_along(site_info$site_id)) { - d = MODISTools::mt_subset(lat = site_coords$lat[i], + d <- MODISTools::mt_subset(lat = site_coords$lat[i], lon = site_coords$lon[i], product = product, band = band, @@ -162,7 +161,7 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run end = dates[length(dates)], km_ab = size, km_lr = size, progress = progress) - dat = rbind(dat, d) + dat <- rbind(dat, d) } } @@ -180,11 +179,11 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run # remove bad values if QC filter is on if (QC_filter) { - qc_band = bands$band[which(grepl(var, bands$band, ignore.case = TRUE) & grepl("QC", bands$band, ignore.case = TRUE))] + qc_band <- bands$band[which(grepl(var, bands$band, ignore.case = TRUE) & grepl("QC", bands$band, ignore.case = TRUE))] if (run_parallel) { - qc = foreach(i=seq_along(site_info$site_id), .combine = rbind) %dopar% + qc <- foreach(i=seq_along(site_info$site_id), .combine = rbind) %dopar% MODISTools::mt_subset(lat = site_coords$lat[i],lon = site_coords$lon[i], product = product, band = qc_band, @@ -203,7 +202,7 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run } - output$qc = as.character(qc$value) + output$qc <- as.character(qc$value) #convert QC values and keep only okay values for (i in seq_len(nrow(output))) @@ -214,7 +213,7 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run good <- which(output$qc %in% c("000", "001")) if (length(good) > 0 || !(is.null(good))) { - output = output[good, ] + output <- output[good, ] } else { PEcAn.logger::logger.warn("All QC values are bad. 
No data to output with QC filter == TRUE.")
     }
 
@@ -236,16 +235,16 @@ call_MODIS <- function(outdir = NULL, var, site_info, product_dates = NULL, run
       dir.create(file.path(outdir, site_info$site_id[i]))
     }
 
-    site = output[which(output$site_id == site_info$site_id[i]), ]
-    site$modis_date = substr(site$modis_date, 2, nchar(site$modis_date))
+    site <- output[which(output$site_id == site_info$site_id[i]), ]
+    site$modis_date <- substr(site$modis_date, 2, nchar(site$modis_date))
 
     if (QC_filter)
     {
-      fname = paste(site_info$site_id[i], "/", product, "_", band, "_", start_date, "-", end_date, "_filtered.csv", sep = "")
+      fname <- paste(site_info$site_id[i], "/", product, "_", band, "_", start_date, "-", end_date, "_filtered.csv", sep = "")
     } else {
-      fname = paste(site_info$site_id[i], "/", product, "_", band, "_", start_date, "-", end_date, "_unfiltered.csv", sep = "")
+      fname <- paste(site_info$site_id[i], "/", product, "_", band, "_", start_date, "-", end_date, "_unfiltered.csv", sep = "")
     }
-    fname = file.path(outdir, fname)
+    fname <- file.path(outdir, fname)
     write.csv(site, fname, row.names = FALSE)
   }
 
diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd
index b686d520510..42bafca3a06 100644
--- a/modules/data.remote/man/call_MODIS.Rd
+++ b/modules/data.remote/man/call_MODIS.Rd
@@ -4,40 +4,34 @@
 \alias{call_MODIS}
 \title{call_MODIS}
 \usage{
-call_MODIS(outfolder = "", start_date, end_date, lat, lon, size = 0,
-  product, band, band_qc = "", band_sd = "", siteID = NULL,
-  package_method = "MODISTools", QC_filter = FALSE, progress = TRUE)
+call_MODIS(outdir = NULL, var, site_info, product_dates,
+  run_parallel = FALSE, ncores = NULL, product, band,
+  package_method = "MODISTools", QC_filter = FALSE, progress = FALSE)
 }
 \arguments{
-\item{outfolder}{where the output file will be stored}
+\item{outdir}{where the output file will be stored}
 
-\item{start_date}{string value for beginning of date range for download in unambiguous date format (YYYYJJJ)}
+\item{var}{the simple name of the modis dataset variable (e.g. lai)}
 
-\item{end_date}{string value for end of date range for download in unambiguous date format (YYYYJJJ)}
+\item{site_info}{list of site info for parsing MODIS data: list(site_id, site_name, lat, lon, time_zone)}
 
-\item{lat}{Latitude of the pixel}
+\item{product_dates}{a character vector of the start and end date of the data in YYYYJJJ}
 
-\item{lon}{Longitude of the pixel}
+\item{run_parallel}{optional method to download data in parallel. Only works if more than 1 site is needed and there are >1 CPUs available.}
 
-\item{size}{kmAboveBelow and kmLeftRight distance in km to be included}
+\item{ncores}{number of CPUs to use if run_parallel is set to TRUE. If you do not know the number of CPUs available, enter NULL.}
 
 \item{product}{string value for MODIS product number}
 
 \item{band}{string value for which measurement to extract}
 
-\item{band_qc}{string value for which quality control band, or use "NA" if you do not know or do not need QC information (optional)}
-
-\item{band_sd}{string value for which standard deviation band, or use "NA" if you do not know or do not need StdDev information (optional)}
-
-\item{siteID}{numeric value of BETY site id value to use for output file name. Default is NULL. Only MODISTools option.}
-
 \item{package_method}{string value to inform function of which package method to use to download modis data. 
Either "MODISTools" or "reticulate" (optional)}

\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good (as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is FALSE. Only MODISTools option.}

\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is FALSE. Only MODISTools option.

-depends on a number of Python libraries. sudo -H pip install numpy suds netCDF4 json
+Requires Python3 for the reticulate method option, plus several Python libraries: sudo -H pip install numpy suds netCDF4 json
 depends on the MODISTools package version 1.1.0}
 }
 \description{
 Get MODIS data by date and location
 }
 \examples{
 \dontrun{
-test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", progress = TRUE, QC_filter = FALSE)
-test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -123, size = 0, band_qc = "",band_sd = "", package_method = "reticulate")
+test_modistools <- call_MODIS(var = "lai", site_info = site_info, product_dates = c("2001150", "2001365"), run_parallel = TRUE, product = "MOD15A2H", band = "Lai_500m", package_method = "MODISTools", QC_filter = TRUE, progress = FALSE)
 }
 }

From 56384bd25ff8b60f065669b63e83566d89874150 Mon Sep 17 00:00:00 2001
From: Hamze Dokoohaki
Date: Tue, 6 Aug 2019 17:12:15 -0400
Subject: [PATCH 0292/2289] Format preview and dir choose

---
 .../server_files/sidebar_server.R             | 57 ++++++++++++++-----
 1 file changed, 44 insertions(+), 13 deletions(-)

diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R
index 7874b19a934..c4769a22d9d 100644
--- a/shiny/workflowPlot/server_files/sidebar_server.R
+++ b/shiny/workflowPlot/server_files/sidebar_server.R
@@ -13,8 +13,9 @@ observe({
   updateSelectizeInput(session, "all_workflow_id", choices = all_ids)
   # Get URL prameters
   query <- parseQueryString(session$clientData$url_search)
+
   # Pre-select workflow_id from URL params
-  updateSelectizeInput(session, "all_workflow_id", selected = query[["workflow_id"]])
+  if(length(query)>0) updateSelectizeInput(session, "all_workflow_id", selected = query[["workflow_id"]])
   #Signaling the success of the operation
   toastr_success("Update workflow IDs")
 },
@@ -141,6 +142,7 @@ observe({
 })
 
 load.model.data <- eventReactive(input$load_data, {
+
   req(input$all_input_id)
 
   inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id))
@@ -152,11 +154,13 @@ load.model.data <- eventReactive(input$load_data, {
   start.year <- as.numeric(lubridate::year(inputs_df$start_date))
   end.year <- as.numeric(lubridate::year(inputs_df$end_date))
   File_path <- inputs_df$filePath
+
   # TODO There is an issue with the db where file names are not saved properly. 
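  # (For reference, the load below reduces to this sketch; the format query is
  #  assumed to mirror the benchmarking tab's PEcAn.DB::query.format.vars()
  #  call, and the years come from inputs_df:
  #    fmt  <- PEcAn.DB::query.format.vars(bety = dbConnect$bety, input.id = inputs_df$input_id)
  #    site <- PEcAn.DB::query.site(inputs_df$site_id, dbConnect$bety$con)
  #    obs  <- PEcAn.benchmark::load_data(data.path = File_path, format = fmt,
  #                                       time.row = fmt$time.row, site = site,
  #                                       start_year = start.year, end_year = end.year)  )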
# To make it work with the VM, uncomment the line below #File_path <- paste0(inputs_df$filePath,'.csv') site.id <- inputs_df$site_id site <- PEcAn.DB::query.site(site.id,dbConnect$bety$con) +browser() observations <- PEcAn.benchmark::load_data( data.path = File_path, format = File_format, time.row = File_format$time.row, site = site, start_year = start.year, end_year = end.year) @@ -194,6 +198,32 @@ observeEvent(input$load_data, { }) +volumes <- c(Home = fs::path_home(), "R Installation" = R.home(), getVolumes()()) + +shinyDirChoose(input, "regdirectory", roots = volumes, session = session, restrictions = system.file(package = "base")) + + +output$formatPreview <- DT::renderDT({ + req(input$format_sel) + tryCatch({ + Fids <- + PEcAn.DB::get.id("formats", "name", input$format_sel, dbConnect$bety$con) %>% + as.character() + + if (length(Fids)>1) toastr_warning(title="Format Preview", + message = "More than one id was found for this format. The first one will be used.") + + tbl(dbConnect$bety$con, "formats_variables") %>% + dplyr::filter(format_id == Fids[1]) %>% + dplyr::select(-id,-format_id, -variable_id, -created_at, -updated_at) %>% + dplyr::filter(name!="")%>% + collect() + }, + error = function(e) { + toastr_error(title = "Error in format preview", message = conditionMessage(e)) + }) +}) + # Register external data observeEvent(input$register_data,{ #browser() @@ -203,7 +233,7 @@ observeEvent(input$register_data,{ modalDialog( title = "Register External Data", fluidRow( - column(12, + column(6, fileInput("Datafile", "Choose CSV/NC File", width = "100%", accept = c( @@ -212,19 +242,23 @@ observeEvent(input$register_data,{ ".csv", ".nc") )), + column(6,br(), + shinyFiles::shinyDirButton("regdirectory", "Choose your target dir", "Please select a folder") + ), tags$hr() ), fluidRow( column(6, dateInput("date3", "Start Date:", value = Sys.Date()-10)), column(6, dateInput("date4", "End Date:", value = Sys.Date()-10) ) - ), + ),tags$hr(), fluidRow( column(6, shinyTime::timeInput("time2", "Start Time:", value = Sys.time())), column(6, shinyTime::timeInput("time2", "End Time:", value = Sys.time())) - ), + ),tags$hr(), fluidRow( - column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ) - ), + column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>% pull(name) %>% unique()) ), + column(6, DT::dataTableOutput("formatPreview") ) + ),tags$hr(), footer = tagList( actionButton("register_button", "Register"), modalButton("Cancel") @@ -234,17 +268,14 @@ observeEvent(input$register_data,{ ) }) - - # register input file in database observeEvent(input$register_button,{ tryCatch({ inFile <- input$Datafile - dir.name <- gsub(".[a-z]+", "", inFile$name) - dir.create(file.path("/home/carya/output/dbfiles", dir.name)) + dir.create(file.path(parseDirPath(volumes, input$regdirectory), dir.name)) file.copy(inFile$datapath, - file.path("/home/carya/output/dbfiles", dir.name, inFile$name), + file.path(parseDirPath(volumes, input$regdirectory), dir.name, inFile$name), overwrite = T) mt <- tbl(dbConnect$bety,"formats") %>% @@ -252,7 +283,7 @@ observeEvent(input$register_button,{ filter(name == input$format_sel) %>% pull(type_string) - PEcAn.DB::dbfile.input.insert(in.path = file.path("/home/carya/output/dbfiles", dir.name), + PEcAn.DB::dbfile.input.insert(in.path = file.path(parseDirPath(volumes, input$regdirectory), dir.name), in.prefix = inFile$name, siteid = input$all_site_id, # select box startdate = input$date3, @@ -288,4 
+319,4 @@ observeEvent(input$register_button,{ inputs_df <- inputs_df %>% dplyr::filter(format_id %in% formats_sub) # Only data sets with formats with associated variables will show up updateSelectizeInput(session, "all_input_id", choices=inputs_df$input_selection_list) } -}) \ No newline at end of file +}) From 2b8eaeae7a4e7d0fb1fa6390890285d2393e8080 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Tue, 6 Aug 2019 17:12:41 -0400 Subject: [PATCH 0293/2289] Update server.R --- shiny/workflowPlot/server.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 78de5d47f66..d8b888f7709 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -23,7 +23,8 @@ lapply(c( "shiny", "lubridate", "listviewer", "shinythemes", - "shinytoastr" + "shinytoastr", + "shinyFiles" ),function(pkg){ if (!(pkg %in% installed.packages()[,1])){ install.packages(pkg) From 234ec41fd207b6ef6e4b081b7b4a397ebf87c80d Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 7 Aug 2019 03:53:35 +0000 Subject: [PATCH 0294/2289] change setup benchmarks to selectbox --- .../server_files/benchmarking_server.R | 60 +++++++++---------- shiny/workflowPlot/ui.R | 1 + .../ui_files/benchmarking_settings_UI.R | 1 + 3 files changed, 30 insertions(+), 32 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 14e27f0049e..15084addb41 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -155,40 +155,36 @@ observeEvent({ plot_ind <- grep("_plot",bm$metrics$name) + variable_choices <- bm$vars$variable_id + names(variable_choices) <- bm$vars$read_name + metrics_choices <- bm$metrics$id[-plot_ind] + names(metrics_choices) <- bm$metrics$description[-plot_ind] + plot_choices <- bm$metrics$id[plot_ind] + names(plot_choices) <- bm$metrics$description[plot_ind] + output$bm_inputs <- renderUI({ if(bm$ready > 0){ - list( - column(4, wellPanel( - checkboxGroupInput("vars", label = "Variables", - choiceNames = bm$vars$read_name, - choiceValues = bm$vars$variable_id), - # actionButton("selectall.var","Select /Deselect all variables"), - label=h3("Label") - )), - column(4, wellPanel( - checkboxGroupInput("metrics", label = "Numerical Metrics", - choiceNames = bm$metrics$description[-plot_ind], - choiceValues = bm$metrics$id[-plot_ind]), - # actionButton("selectall.num","Select/Deselect all numerical metrics") , - label=h3("Label") - )), - column(4, wellPanel( - checkboxGroupInput("plots", label = "Plot Metrics", - choiceNames = bm$metrics$description[plot_ind], - choiceValues = bm$metrics$id[plot_ind]), - # actionButton("selectall.plot","Select/Deselect all plot metrics"), - label=h3("Label") - )) - # column(4, wellPanel( - # textInput("start_year", label = "Benchmarking Start Year", - # value = "don't use this"), - # label=h3("Label") - # )), - # column(4, wellPanel( - # textInput("end_year", label = "Benchmarking End Year", - # value = "don't use this"), - # label=h3("Label") - # )) + wellPanel( + fluidRow( + column(4, + pickerInput("vars", "Variables", + choices = variable_choices, + multiple = TRUE, + options = list(`actions-box` = TRUE, `dropup-auto` = FALSE)) + ), + column(4, + pickerInput("metrics", "Numerical Metrics", + choices = metrics_choices, + multiple = TRUE, + options = list(`actions-box` = TRUE, `dropup-auto` = FALSE)) + ), + column(4, + pickerInput("plots", "Plot Metrics", + 
choices = plot_choices, + multiple = TRUE, + options = list(`actions-box` = TRUE, `dropup-auto` = FALSE)) + ) + ) ) } }) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 39e43d57374..05f629133d1 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -5,6 +5,7 @@ library(shinythemes) library(knitr) library(shinyjs) library(shinytoastr) +library(shinyWidgets) library(bsplus) source("ui_utils.R", local = TRUE) diff --git a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R index 4302f156c00..36d0d77f797 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R @@ -8,6 +8,7 @@ tabPanel("Settings", h3("Setup Benchmarks")), column(12, uiOutput("results_message"), + br(), uiOutput("bm_inputs") ), column(12, h3("Calculate Benchmarks")), From 96684ae6c5b3f7deed939ee301abb1e640cb8c7c Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 7 Aug 2019 04:11:39 +0000 Subject: [PATCH 0295/2289] delete plot rows in scores datatable --- shiny/workflowPlot/server_files/benchmarking_server.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 15084addb41..b48eb14be97 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -362,8 +362,8 @@ observeEvent(bm$load_results,{ bm$bench.results <- result.out$bench.results bm$aligned.dat <- result.out$aligned.dat - output$results_table <- DT::renderDataTable(DT::datatable(bm$bench.results)) - plots_used <- grep("plot", result.out$bench.results$metric) + plots_used <- grep("plot", result.out$bench.results$metric) + output$results_table <- DT::renderDataTable(DT::datatable(bm$bench.results[-plots_used,])) incProgress(1/3) if(length(plots_used) > 0){ From 6a8ad9cd6ce5347d7d3000612cc836e8220b45b6 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 7 Aug 2019 04:39:56 +0000 Subject: [PATCH 0296/2289] move benchmarking settings to a new tab --- shiny/workflowPlot/ui.R | 5 ++- .../ui_files/benchmarking_ScoresPlots_UI.R | 23 +++++++++++++ .../ui_files/benchmarking_settings_UI.R | 32 ++----------------- 3 files changed, 28 insertions(+), 32 deletions(-) create mode 100644 shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 05f629133d1..cb6401f9844 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -70,9 +70,8 @@ ui <- fluidPage(theme = shinytheme("yeti"), ), tabPanel(h4("Benchmarking"), tabsetPanel( - source_ui("benchmarking_settings_UI.R"), - source_ui("benchmarking_scores_UI.R"), - source_ui("benchmarking_plots_UI.R") + source_ui("benchmarking_ScoresPlots_UI.R"), + source_ui("benchmarking_settings_UI.R") ) ), tabPanel(h4("Documentation"), diff --git a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R new file mode 100644 index 00000000000..e2c8a5487ba --- /dev/null +++ b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R @@ -0,0 +1,23 @@ +tabPanel("Scores/Plots", + column(12, h3("Setup Reference Run")), + column(12, + verbatimTextOutput("brr_message"), + uiOutput("button_BRR") + ), + column(12, + h3("Setup Benchmarks")), + column(12, + uiOutput("results_message"), + br(), + uiOutput("bm_inputs") + ), + column(12, h3("Calculate Benchmarks")), + 
column(12, + verbatimTextOutput("calc_bm_message"), + # verbatimTextOutput("report"), + uiOutput("calc_bm_button"), + br(), + DT::dataTableOutput("inputs_df_table"), + br() + ) +) \ No newline at end of file diff --git a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R index 36d0d77f797..f3b2b969730 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R @@ -1,29 +1,3 @@ -tabPanel("Settings", - column(12, h3("Setup Reference Run")), - column(12, - verbatimTextOutput("brr_message"), - uiOutput("button_BRR") - ), - column(12, - h3("Setup Benchmarks")), - column(12, - uiOutput("results_message"), - br(), - uiOutput("bm_inputs") - ), - column(12, h3("Calculate Benchmarks")), - column(12, - verbatimTextOutput("calc_bm_message"), - # verbatimTextOutput("report"), - uiOutput("calc_bm_button"), - br(), - DT::dataTableOutput("inputs_df_table"), - br(), - #uiOutput("config_list_table"), - uiOutput("reportvars"), - uiOutput("reportmetrics"), - br(), - uiOutput("settings_title"), - verbatimTextOutput("print_bm_settings") - ) -) \ No newline at end of file +tabPanel("Settings", + br(), + verbatimTextOutput("print_bm_settings")) \ No newline at end of file From 78780802903f44d3ef748aaffa7f200d25fca154 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Wed, 7 Aug 2019 16:54:31 -0400 Subject: [PATCH 0297/2289] Format Preview --- .../server_files/sidebar_server.R | 133 +++++++++++++----- 1 file changed, 96 insertions(+), 37 deletions(-) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index c4769a22d9d..6a45d87122b 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -197,27 +197,65 @@ observeEvent(input$load_data, { }) }) - +# These are required for shinyFiles which allows to select target folder on server machine volumes <- c(Home = fs::path_home(), "R Installation" = R.home(), getVolumes()()) - shinyDirChoose(input, "regdirectory", roots = volumes, session = session, restrictions = system.file(package = "base")) output$formatPreview <- DT::renderDT({ - req(input$format_sel) + req(input$format_sel_pre) tryCatch({ Fids <- - PEcAn.DB::get.id("formats", "name", input$format_sel, dbConnect$bety$con) %>% + PEcAn.DB::get.id("formats", + "name", + input$format_sel_pre, + dbConnect$bety$con) %>% as.character() - if (length(Fids)>1) toastr_warning(title="Format Preview", - message = "More than one id was found for this format. The first one will be used.") + if (length(Fids) > 1) + toastr_warning(title = "Format Preview", + message = "More than one id was found for this format. 
The first one will be used.")
+    
+    mimt <- tbl(dbConnect$bety$con,"formats") %>%
+      left_join(tbl(dbConnect$bety$con,"mimetypes"), by=c('mimetype_id'='id'))%>%
+      dplyr::filter(id==Fids[1]) %>%
+      dplyr::pull(type_string)
+    
+    output$mimt_pre <- renderText({
+      mimt
+    })
+    
+    DT::datatable(
+      tbl(dbConnect$bety$con, "formats_variables") %>%
+        dplyr::filter(format_id == Fids[1]) %>%
+        dplyr::select(-id, -format_id,-variable_id,-created_at,-updated_at) %>%
+        dplyr::filter(name != "") %>%
+        collect(),
+      escape = FALSE,
+      filter = 'none',
+      selection = "none",
+      style = 'bootstrap',
+      rownames = FALSE,
+      options = list(
+        autowidth = TRUE,
+        columnDefs = list(list(
+          width = '90px', targets = -1
+        )),
+        #set column width for action button
+        dom = 'tp',
+        pageLength = 10,
+        scrollX = TRUE,
+        scrollCollapse = FALSE,
+        initComplete = DT::JS(
+          "function(settings, json) {",
+          "$(this.api().table().header()).css({'background-color': '#000', 'color': '#fff'});",
+          "}"
+        )
+      )
+    )
+    
+    
 
-    tbl(dbConnect$bety$con, "formats_variables") %>%
-      dplyr::filter(format_id == Fids[1]) %>%
-      dplyr::select(-id,-format_id, -variable_id, -created_at, -updated_at) %>%
-      dplyr::filter(name!="")%>%
-      collect()
   },
   error = function(e) {
     toastr_error(title = "Error in format preview", message = conditionMessage(e))
   })
 })
 
 # Register external data
 observeEvent(input$register_data,{
   #browser()
   showModal(
     modalDialog(
       title = "Register External Data",
-      fluidRow(
-        column(6,
-               fileInput("Datafile", "Choose CSV/NC File",
-                         width = "100%",
-                         accept = c(
-                           "text/csv",
-                           "text/comma-separated-values,text/plain",
-                           ".csv",
-                           ".nc")
-               )),
-        column(6,br(),
-               shinyFiles::shinyDirButton("regdirectory", "Choose your target dir", "Please select a folder")
-        ),
-        tags$hr()
-      ),
+      tabsetPanel(
+        tabPanel("Register",
+                 br(),
+                 fluidRow(
+                   column(6,
+                          fileInput("Datafile", "Choose CSV/NC File",
+                                    width = "100%",
+                                    accept = c(
+                                      "text/csv",
+                                      "text/comma-separated-values,text/plain",
+                                      ".csv",
+                                      ".nc")
+                          )),
+                   column(6,br(),
+                          shinyFiles::shinyDirButton("regdirectory", "Choose your target dir", "Please select a folder")
+                   ),
+                   tags$hr()
+                 ),
+                 fluidRow(
+                   column(6, dateInput("date3", "Start Date:", value = Sys.Date()-10)),
+                   column(6, dateInput("date4", "End Date:", value = Sys.Date()-10) )
+                 ),tags$hr(),
+                 fluidRow(
+                   column(6, shinyTime::timeInput("time2", "Start Time:", value = Sys.time())),
+                   column(6, shinyTime::timeInput("time2", "End Time:", value = Sys.time()))
+                 ),tags$hr(),
+                 fluidRow(
+                   column(6, selectizeInput("format_sel", "Format Name", tbl(dbConnect$bety,"formats") %>%
+                                              pull(name) %>%
+                                              unique()
+                   ) ),
+                   column(6)
+                 )
+        ),
+        tabPanel("Format Preview", br(),
+                 fluidRow(
+                   column(6,selectizeInput("format_sel_pre", "Format Name", tbl(dbConnect$bety,"formats") %>%
+                                             pull(name) %>% unique())),
+                   column(6, h5(shiny::tags$b("Mimetypes")), textOutput("mimt_pre"))
+                 ),
+                 fluidRow(
+                   column(12,
+                          DT::dataTableOutput("formatPreview")
+                   )
+                   
+                 )
+        )
+      ),
       footer = tagList(
         actionButton("register_button", "Register"),
         modalButton("Cancel")
 ), size = 'l'

From 
a23dd2b1be2cb06560e4e4734a97f401ee1c4f84 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Thu, 8 Aug 2019 04:42:17 +0000 Subject: [PATCH 0298/2289] change benchmark scores/plots UI --- .../server_files/benchmarking_server.R | 41 ++++++++++++++++--- .../ui_files/benchmarking_ScoresPlots_UI.R | 19 +++++++++ 2 files changed, 55 insertions(+), 5 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index b48eb14be97..a7b6b8ec7a1 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -229,16 +229,17 @@ observeEvent(input$calc_bm,{ inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) + output$inputs_df_title <- renderText("Benchmark Input Data") output$inputs_df_table <- DT::renderDataTable( DT::datatable(inputs_df, + rownames = FALSE, options = list( - dom = 'ft', + dom = 't', scrollX = TRUE, initComplete = DT::JS( "function(settings, json) {", "$(this.api().table().header()).css({'background-color': '#404040', 'color': '#fff'});", - "}")), - caption = "Table: Benchmarking Input Data") + "}"))) ) @@ -363,7 +364,16 @@ observeEvent(bm$load_results,{ bm$bench.results <- result.out$bench.results bm$aligned.dat <- result.out$aligned.dat plots_used <- grep("plot", result.out$bench.results$metric) - output$results_table <- DT::renderDataTable(DT::datatable(bm$bench.results[-plots_used,])) + output$results_df_title <- renderText("Benchmark Scores") + output$results_table <- DT::renderDataTable( + DT::datatable(bm$bench.results[-plots_used,], + rownames = FALSE, + options = list(dom = 'ft', + initComplete = JS( + "function(settings, json) {", + "$(this.api().table().header()).css({'background-color': '#404040', 'color': '#fff'});", + "}") + ))) incProgress(1/3) if(length(plots_used) > 0){ @@ -372,10 +382,31 @@ observeEvent(bm$load_results,{ 1, paste, collapse = " ") selection <- as.list(as.numeric(names(plot_list))) names(selection) <- as.vector(plot_list) + output$plots_tilte <- renderText("Benchmark Plots") output$bm_plots <- renderUI({ - selectInput("bench_plot", "Benchmark Plot", multiple = FALSE, + selectInput("bench_plot", label = NULL, multiple = FALSE, choices = selection) }) + output$plotlybars <- renderUI({ + div( + id = "plot-container", + div( + class = "plotlybars-wrapper", + div( + class = "plotlybars", + div(class = "plotlybars-bar b1"), + div(class = "plotlybars-bar b2"), + div(class = "plotlybars-bar b3"), + div(class = "plotlybars-bar b4"), + div(class = "plotlybars-bar b5"), + div(class = "plotlybars-bar b6"), + div(class = "plotlybars-bar b7") + ), + div(class = "plotlybars-text", + p("Updating the plot. 
Hold tight!")) + ) + ) + }) } incProgress(1/3) } diff --git a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R index e2c8a5487ba..540d0da45ff 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R @@ -17,7 +17,26 @@ tabPanel("Scores/Plots", # verbatimTextOutput("report"), uiOutput("calc_bm_button"), br(), + textOutput("inputs_df_title"), + br(), DT::dataTableOutput("inputs_df_table"), + br(), br() + ), + fluidRow( + column(8, + fluidRow( + column(3,offset = 1, textOutput("plots_tilte")), + column(8, uiOutput("bm_plots")) + ), + uiOutput("plotlybars"), + plotlyOutput("bmPlot"), + br() + ), + column(4, + textOutput("results_df_title"), + br(), + DT::dataTableOutput("results_table") + ) ) ) \ No newline at end of file From 1366708bab168394baf8f5ad738aaf14d911c93e Mon Sep 17 00:00:00 2001 From: sl4397 Date: Thu, 8 Aug 2019 08:29:21 +0000 Subject: [PATCH 0299/2289] change calculate benchmark button and message UI --- .../server_files/benchmarking_server.R | 40 +++++++++++-------- .../ui_files/benchmarking_ScoresPlots_UI.R | 10 ++--- .../ui_files/benchmarking_settings_UI.R | 1 + 3 files changed, 29 insertions(+), 22 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index a7b6b8ec7a1..77b105143b7 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -188,7 +188,20 @@ observeEvent({ ) } }) - if(bm$ready > 0){bm$calc_bm_message <- sprintf("Please select at least one variable and one metric")} + + output$calc_bm <- renderUI({ + if(bm$ready > 0){ + fluidRow( + column(5), + column(2, + shinyjs::disabled( + actionButton('calc_bm_button', "Calculate", width = "100%", class="btn-primary") + ) + ), + column(5) + ) + } + }) }) observeEvent({ @@ -200,9 +213,9 @@ observeEvent({ n <- ifelse(is.null(input$metrics),0,length(input$metrics)) p <- ifelse(is.null(input$plots),0,length(input$plots)) m <- n + p - output$report <- renderText(sprintf("Number of vars: %0.f, Number of metrics: %0.f", v,m)) + #output$report <- renderText(sprintf("Number of vars: %0.f, Number of metrics: %0.f", v,m)) if(v > 0 & m > 0){ - output$calc_bm_button <- renderUI({actionButton("calc_bm", "Calculate Benchmarks")}) + shinyjs::enable("calc_bm_button") bm$bm_vars <- input$vars bm$bm_metrics <- c() if(n > 0) bm$bm_metrics <- c(bm$bm_metrics, input$metrics) @@ -211,7 +224,7 @@ observeEvent({ }, ignoreNULL = FALSE) -observeEvent(input$calc_bm,{ +observeEvent(input$calc_bm_button,{ tryCatch({ req(input$all_input_id) req(input$all_site_id) @@ -222,11 +235,7 @@ observeEvent(input$calc_bm,{ withProgress(message = 'Calculation in progress', detail = 'This may take a while...', value = 0,{ - bm$calc_bm_message <- sprintf("Setting up benchmarks") - output$reportvars <- renderText(paste("Variable Id: ", bm$bm_vars, seq_along(bm$bm_vars))) - output$reportmetrics <- renderText(paste("Metrics Id: ", bm$bm_metrics)) - - + inputs_df <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) output$inputs_df_title <- renderText("Benchmark Input Data") @@ -290,9 +299,8 @@ observeEvent(input$calc_bm,{ bm$bm_settings$benchmarking <- append(bm$bm_settings$benchmarking,list(benchmark = benchmark)) } - - output$calc_bm_button <- renderUI({}) - output$settings_title <- renderText("Benchmarking 
Settings:") + + disable("calc_bm_button") output$print_bm_settings <- renderPrint(print(bm$bm_settings)) @@ -302,7 +310,9 @@ observeEvent(input$calc_bm,{ saveXML(PEcAn.settings::listToXml(bm$bm_settings,"pecan"), file = settings_path) bm$settings_path <- settings_path - bm$calc_bm_message <- sprintf("Benchmarking settings have been saved here: %s", bm$settings_path) + output$settings_path <- renderText({ + sprintf("Benchmarking settings have been saved here: %s", bm$settings_path) + }) incProgress(1/2) ############################################################################## @@ -344,10 +354,6 @@ observeEvent(input$calc_bm,{ }) -observeEvent(bm$calc_bm_message,{ - output$calc_bm_message <- renderText({bm$calc_bm_message}) -}) - observeEvent(bm$results_message,{ output$results_message <- renderText({bm$results_message}) }) diff --git a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R index 540d0da45ff..5618f61084b 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R @@ -9,18 +9,18 @@ tabPanel("Scores/Plots", column(12, uiOutput("results_message"), br(), - uiOutput("bm_inputs") + uiOutput("bm_inputs"), + uiOutput("calc_bm"), + tags$hr(), + br() ), - column(12, h3("Calculate Benchmarks")), column(12, - verbatimTextOutput("calc_bm_message"), # verbatimTextOutput("report"), - uiOutput("calc_bm_button"), - br(), textOutput("inputs_df_title"), br(), DT::dataTableOutput("inputs_df_table"), br(), + br(), br() ), fluidRow( diff --git a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R index f3b2b969730..7ae3750512b 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R @@ -1,3 +1,4 @@ tabPanel("Settings", br(), + verbatimTextOutput("settings_path"), verbatimTextOutput("print_bm_settings")) \ No newline at end of file From 72689d2b7b513d331efdfffff37d74f62e258ba4 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Thu, 8 Aug 2019 08:30:50 +0000 Subject: [PATCH 0300/2289] delete benchmarking plots and scores tab UI --- .../ui_files/benchmarking_plots_UI.R | 22 ------------------- .../ui_files/benchmarking_scores_UI.R | 3 --- 2 files changed, 25 deletions(-) delete mode 100644 shiny/workflowPlot/ui_files/benchmarking_plots_UI.R delete mode 100644 shiny/workflowPlot/ui_files/benchmarking_scores_UI.R diff --git a/shiny/workflowPlot/ui_files/benchmarking_plots_UI.R b/shiny/workflowPlot/ui_files/benchmarking_plots_UI.R deleted file mode 100644 index 9b365456731..00000000000 --- a/shiny/workflowPlot/ui_files/benchmarking_plots_UI.R +++ /dev/null @@ -1,22 +0,0 @@ -tabPanel("Plots", - uiOutput("bm_plots"), - div( - id = "plot-container", - div( - class = "plotlybars-wrapper", - div( - class = "plotlybars", - div(class = "plotlybars-bar b1"), - div(class = "plotlybars-bar b2"), - div(class = "plotlybars-bar b3"), - div(class = "plotlybars-bar b4"), - div(class = "plotlybars-bar b5"), - div(class = "plotlybars-bar b6"), - div(class = "plotlybars-bar b7") - ), - div(class = "plotlybars-text", - p("Updating the plot. 
Hold tight!")) - ), - plotlyOutput("bmPlot") - ) -) diff --git a/shiny/workflowPlot/ui_files/benchmarking_scores_UI.R b/shiny/workflowPlot/ui_files/benchmarking_scores_UI.R deleted file mode 100644 index 31c779fba95..00000000000 --- a/shiny/workflowPlot/ui_files/benchmarking_scores_UI.R +++ /dev/null @@ -1,3 +0,0 @@ -tabPanel("Scores", - DT::dataTableOutput("results_table") -) \ No newline at end of file From 412fa592c4a84cd691082b97abdd1a73048c80da Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 8 Aug 2019 16:24:37 -0400 Subject: [PATCH 0301/2289] explorebtn --- shiny/workflowPlot/www/scripts.js | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/www/scripts.js b/shiny/workflowPlot/www/scripts.js index e3700daaa2d..257578b637f 100644 --- a/shiny/workflowPlot/www/scripts.js +++ b/shiny/workflowPlot/www/scripts.js @@ -2,4 +2,10 @@ $(document).on('click', '.workflowclass', function () { Shiny.onInputChange('workflowselected',this.id); // to report changes on the same selectInput Shiny.onInputChange('workflowclassrand', Math.random()); -}); \ No newline at end of file +}); + +$(document).on('click', '.expanclass', function () { +Shiny.onInputChange('workflows_explor_selected',this.id); +// to report changes on the same selectInput +Shiny.onInputChange('workflow_explor_classrand', Math.random()); +}); From 5ffefbfb5d51592575b56344a59c87af50ed8cec Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 8 Aug 2019 16:25:54 -0400 Subject: [PATCH 0302/2289] Update sidebar_server.R --- shiny/workflowPlot/server_files/sidebar_server.R | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index 6a45d87122b..02abece90c6 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -10,7 +10,12 @@ observe({ # if we want to load all workflow ids. 
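   # (Reference sketch of what this hunk changes: with thousands of workflow
   #  ids, calling
   #    updateSelectizeInput(session, "all_workflow_id",
   #                         choices = all_ids, server = TRUE)
   #  keeps the choice list server-side and streams matches to the client on
   #  demand instead of embedding every id in the page.)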
# get_workflow_id function from query.dplyr.R all_ids <- get_workflow_ids(dbConnect$bety, query, all.ids=TRUE) - updateSelectizeInput(session, "all_workflow_id", choices = all_ids) + selectList <- as.data.table(all_ids) + + updateSelectizeInput(session, + "all_workflow_id", + choices = all_ids, + server = TRUE) # Get URL prameters query <- parseQueryString(session$clientData$url_search) @@ -160,7 +165,7 @@ load.model.data <- eventReactive(input$load_data, { #File_path <- paste0(inputs_df$filePath,'.csv') site.id <- inputs_df$site_id site <- PEcAn.DB::query.site(site.id,dbConnect$bety$con) -browser() + observations <- PEcAn.benchmark::load_data( data.path = File_path, format = File_format, time.row = File_format$time.row, site = site, start_year = start.year, end_year = end.year) From c322e0ba25e8d1986f105dd75757ce0dec5de073 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 8 Aug 2019 16:26:41 -0400 Subject: [PATCH 0303/2289] new packages added --- shiny/workflowPlot/server.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index d8b888f7709..58b54d7e8ec 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -24,7 +24,9 @@ lapply(c( "shiny", "listviewer", "shinythemes", "shinytoastr", - "shinyFiles" + "shinyFiles", + "data.table", + "shinyWidgets" ),function(pkg){ if (!(pkg %in% installed.packages()[,1])){ install.packages(pkg) From 1ef4f2a9f8e22960fff0c1639b6b501cf33f6647 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 8 Aug 2019 16:27:12 -0400 Subject: [PATCH 0304/2289] explorebtn --- .../server_files/history_server.R | 20 ++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server_files/history_server.R b/shiny/workflowPlot/server_files/history_server.R index 7fd3cee07d5..25767dea028 100644 --- a/shiny/workflowPlot/server_files/history_server.R +++ b/shiny/workflowPlot/server_files/history_server.R @@ -41,6 +41,24 @@ observeEvent(input$workflowclassrand, { }) }) +observeEvent(input$workflow_explor_classrand, { + tryCatch({ + #history <- PEcAn.DB::db.query(cmd, dbConnect$bety$con) + workflow_id <- strsplit(input$workflows_explor_selected, "_")[[1]] + + workflow_id <- trimws(workflow_id[1]) + + updateSelectizeInput(session, + "all_workflow_id", + choices = c(input$all_workflow_id, workflow_id), + selected = c(input$all_workflow_id, workflow_id)) + + }, + error = function(e){ + toastr_error(title = "Error", conditionMessage(e)) + }) +}) + observeEvent(input$submitInfo, { tryCatch({ @@ -74,7 +92,7 @@ observeEvent(input$submitInfo, { columnDefs = list(list(width = '90px', targets = -1)), #set column width for action button dom = 'ftp', pageLength = 10, - scrollX = TRUE, + scrollX = FALSE, scrollCollapse = FALSE, initComplete = DT::JS( "function(settings, json) {", From e7e41e8b892ebaceb7971020f37ddf2ab67c3638 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 8 Aug 2019 16:28:05 -0400 Subject: [PATCH 0305/2289] Update benchmarking_settings_UI.R --- shiny/workflowPlot/ui_files/benchmarking_settings_UI.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R index 7ae3750512b..8f1db5ab5c2 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_settings_UI.R @@ -1,4 +1,5 @@ -tabPanel("Settings", +tabPanel("Benchmark Settings", br(), 
verbatimTextOutput("settings_path"), - verbatimTextOutput("print_bm_settings")) \ No newline at end of file + verbatimTextOutput("print_bm_settings") + ) From 156beb34aef760f57826b70541b796d8a8a4617f Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 8 Aug 2019 16:28:55 -0400 Subject: [PATCH 0306/2289] Update model_data_plots_server.R --- .../workflowPlot/server_files/model_data_plots_server.R | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/shiny/workflowPlot/server_files/model_data_plots_server.R b/shiny/workflowPlot/server_files/model_data_plots_server.R index 27e64d4f751..03f7f0885b7 100644 --- a/shiny/workflowPlot/server_files/model_data_plots_server.R +++ b/shiny/workflowPlot/server_files/model_data_plots_server.R @@ -143,25 +143,28 @@ observeEvent(input$ex_plot_modeldata,{ model.xts <- aggr(model.xts) observasions.xts <- aggr(observasions.xts) - + #Scatter plot output$modelDataPlotscatter <- renderHighchart({ scatter.df <- data.frame ( 'y' = zoo::coredata(model.xts), 'x' = zoo::coredata(observasions.xts) ) + hlim <- max(max(scatter.df$y, scatter.df$x)) + llim <- min(min(scatter.df$y, scatter.df$x)) + highchart() %>% hc_chart(type = 'scatter') %>% hc_add_series(scatter.df, name = "Model data comparison", showInLegend = FALSE) %>% hc_legend(enabled = FALSE) %>% - hc_yAxis(title = list(text = "Simulated",fontSize=19))%>% + hc_yAxis(title = list(text = "Simulated",fontSize=19), min=llim, max=hlim)%>% hc_exporting(enabled = TRUE, filename=paste0("Model_data_comparison")) %>% hc_add_theme(hc_theme_elementary(yAxis = list(title = list(style = list(color = "#373b42",fontSize=15)), labels = list(style = list(color = "#373b42",fontSize=15))), xAxis = list(title = list(style = list(color = "#373b42",fontSize=15)), labels = list(style = list(color = "#373b42",fontSize=15))) ))%>% - hc_xAxis(title = list(text ="Observed" ,fontSize=19)) + hc_xAxis(title = list(text ="Observed" ,fontSize=19), min=llim, max=hlim) }) From fb106952fdf8b4adf79b77ae1619c4481f0378e5 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Fri, 9 Aug 2019 04:17:07 +0000 Subject: [PATCH 0307/2289] skip loading file format with Null time.row and NAs var$column_number --- .../server_files/benchmarking_server.R | 31 ++++++++++++------- 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 77b105143b7..e510c6bf7f4 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -101,19 +101,26 @@ observeEvent(input$load_data,{ bm$input <- getInputs(dbConnect$bety,c(input$all_site_id)) %>% dplyr::filter(input_selection_list == input$all_input_id) format <- PEcAn.DB::query.format.vars(bety = dbConnect$bety, input.id = bm$input$input_id) - # Are there more human readable names? 
-                       bm$vars <- dplyr::inner_join(
-                         data.frame(read_name = names(bm$model_vars),
-                                    pecan_name = bm$model_vars, stringsAsFactors = FALSE),
-                         # format$vars[-grep("%",format$vars$storage_type),
-                         #             c("variable_id", "pecan_name")],
-                         format$vars[c("variable_id", "pecan_name")], #for AmeriFlux.level2.h.nc, format$vars$storage_type is NA
-                         by = "pecan_name")
-  
-                       #This will be a longer set of conditions
-                       bm$ready <- bm$ready + 1
-                       #Signaling the success of the operation
-                       toastr_success("Check for benchmarks")
+                       # If the format has a NULL time.row and only NA column_number entries, skip loading
+                       if(is.null(format$time.row) && all(is.na(format$vars$column_number))){
+                         print("File format has a NULL time.row and NA column_number entries; skipping load")
+                         toastr_warning("This file format cannot be used for benchmarking")
+                       }else{
+                         # Are there more human readable names?
+                         bm$vars <- dplyr::inner_join(
+                           data.frame(read_name = names(bm$model_vars),
+                                      pecan_name = bm$model_vars, stringsAsFactors = FALSE),
+                           format$vars[-grep("%",format$vars$storage_type),
+                                       c("variable_id", "pecan_name")],
+                           #format$vars[c("variable_id", "pecan_name")], #for AmeriFlux.level2.h.nc, format$vars$storage_type is NA
+                           by = "pecan_name")
+  
+                         #This will be a longer set of conditions
+                         bm$ready <- bm$ready + 1
+                         #Signaling the success of the operation
+                         toastr_success("Check for benchmarks")
+                       }
   },
   error = function(e) {
     toastr_error(title = "Error", conditionMessage(e))

From 8321929ce421353f17e5b934f362fa30bfdee092 Mon Sep 17 00:00:00 2001
From: sl4397
Date: Fri, 9 Aug 2019 00:18:07 -0500
Subject: [PATCH 0308/2289] add icons to tabPanel titles

---
 shiny/workflowPlot/ui.R | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R
index cb6401f9844..352f5b3c9fd 100644
--- a/shiny/workflowPlot/ui.R
+++ b/shiny/workflowPlot/ui.R
@@ -49,6 +49,7 @@ ui <- fluidPage(theme = shinytheme("yeti"),
   div( id = "app",
        navbarPage(title = NULL,
                   tabPanel(h4("Select Data"),
+                           icon = icon("hand-pointer"),
                            tagList(
                              column(3,
                                     source_ui("sidebar_UI.R")
@@ -59,9 +60,11 @@ ui <- fluidPage(theme = shinytheme("yeti"),
                            )
                   ),
                   tabPanel(h4("History Runs"),
+                           icon = icon("history"),
                            DT::DTOutput("historyfiles")
                   ),
                   tabPanel(h4("Exploratory Plots"),
+                           icon = icon("chart-bar"),
                            tabsetPanel(
                              source_ui("model_plots_UI.R"),
                              source_ui("model_data_plots_UI.R"),
@@ -69,12 +72,14 @@ ui <- fluidPage(theme = shinytheme("yeti"),
                            )
                   ),
                   tabPanel(h4("Benchmarking"),
+                           icon = icon("pencil-ruler"),
                            tabsetPanel(
                              source_ui("benchmarking_ScoresPlots_UI.R"),
                              source_ui("benchmarking_settings_UI.R")
                            )
                   ),
                   tabPanel(h4("Documentation"),
+                           icon = icon("book"),
                            #withMathJax(includeMarkdown("markdown/workflowPlot_doc.Rmd"))
                            bs_accordion_sidebar(id = "documentation",
                                                 spec_side = c(width = 3, offset = 0)) %>%

From 953c55bd845615efadfe1f9f7a4105d0129cf984 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov
Date: Fri, 9 Aug 2019 10:06:25 -0400
Subject: [PATCH 0309/2289] Remove library and testthat calls from rtm tests

Including them is bad form. 
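A minimal sketch of the preferred style, reusing values from the existing
tests (the test runner attaches testthat itself, and package code is reached
through its namespace rather than library() calls):

    context("PROSPECT models")
    p4 <- c("N" = 1.4, "Cab" = 30, "Cw" = 0.004, "Cm" = 0.003)
    test_that("default PROSPECT 4 parameters are named", {
      expect_named(p4, c("N", "Cab", "Cw", "Cm"))
    })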
--- modules/rtm/tests/testthat/test.2s.R | 1 - modules/rtm/tests/testthat/test.gpm.R | 2 -- modules/rtm/tests/testthat/test.invert_bayestools.R | 2 -- modules/rtm/tests/testthat/test.prospect.R | 2 -- modules/rtm/tests/testthat/test.resample.R | 2 -- modules/rtm/tests/testthat/test.sail.R | 1 - modules/rtm/tests/testthat/test.spectra.R | 2 -- 7 files changed, 12 deletions(-) diff --git a/modules/rtm/tests/testthat/test.2s.R b/modules/rtm/tests/testthat/test.2s.R index e88c7188d24..d6adfe58da0 100644 --- a/modules/rtm/tests/testthat/test.2s.R +++ b/modules/rtm/tests/testthat/test.2s.R @@ -1,4 +1,3 @@ -library(PEcAnRTM) context("Two stream model") p4.pars <- defparam("prospect_4") diff --git a/modules/rtm/tests/testthat/test.gpm.R b/modules/rtm/tests/testthat/test.gpm.R index 3e846453235..bee5f098bbf 100644 --- a/modules/rtm/tests/testthat/test.gpm.R +++ b/modules/rtm/tests/testthat/test.gpm.R @@ -1,5 +1,3 @@ -library(PEcAnRTM) -library(testthat) context("Generalized plate model") data(dataSpec_prospectd) diff --git a/modules/rtm/tests/testthat/test.invert_bayestools.R b/modules/rtm/tests/testthat/test.invert_bayestools.R index 63570ba97b6..f49c5103866 100644 --- a/modules/rtm/tests/testthat/test.invert_bayestools.R +++ b/modules/rtm/tests/testthat/test.invert_bayestools.R @@ -1,6 +1,4 @@ # devtools::load_all('.') -library(PEcAnRTM) -library(testthat) context('Inversion using BayesianTools') skip_on_travis() diff --git a/modules/rtm/tests/testthat/test.prospect.R b/modules/rtm/tests/testthat/test.prospect.R index 3399fa53a91..cfc2295e96a 100644 --- a/modules/rtm/tests/testthat/test.prospect.R +++ b/modules/rtm/tests/testthat/test.prospect.R @@ -1,6 +1,4 @@ #' Tests of radiative transfer models -library(PEcAnRTM) -library(testthat) context("PROSPECT models") p4 <- c("N"=1.4, "Cab"=30, "Cw"=0.004, "Cm"=0.003) diff --git a/modules/rtm/tests/testthat/test.resample.R b/modules/rtm/tests/testthat/test.resample.R index 8fdf755cd33..231a0fa97e9 100644 --- a/modules/rtm/tests/testthat/test.resample.R +++ b/modules/rtm/tests/testthat/test.resample.R @@ -1,5 +1,3 @@ -library(testthat) -library(PEcAnRTM) context("Resampling functions") test_that( diff --git a/modules/rtm/tests/testthat/test.sail.R b/modules/rtm/tests/testthat/test.sail.R index 6079faab72c..1000c3522cd 100644 --- a/modules/rtm/tests/testthat/test.sail.R +++ b/modules/rtm/tests/testthat/test.sail.R @@ -1,5 +1,4 @@ #' Tests of radiative transfer models -library(PEcAnRTM) context("SAIL models") p <- defparam("pro4sail") diff --git a/modules/rtm/tests/testthat/test.spectra.R b/modules/rtm/tests/testthat/test.spectra.R index 37c85213651..a3e52606970 100644 --- a/modules/rtm/tests/testthat/test.spectra.R +++ b/modules/rtm/tests/testthat/test.spectra.R @@ -1,5 +1,3 @@ -library(PEcAnRTM) -library(testthat) context("Spectra S3 class") data(testspec, package = "PEcAnRTM") From b23d8617122025bba3b9b3f20df29761985f594f Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 14 Aug 2019 02:24:49 -0500 Subject: [PATCH 0310/2289] update main panel titles --- shiny/workflowPlot/ui.R | 29 ++++++++++++++++++----------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 352f5b3c9fd..52f3a238851 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -20,12 +20,19 @@ ui <- fluidPage(theme = shinytheme("yeti"), tags$head( tags$link(rel = "stylesheet", type = "text/css", href = "style.css") ), - tags$head(tags$script(src="scripts.js")), - tags$head( tags$style(HTML(" - .modal-lg { - 
width: 85%; - } - "))), + tags$head( + tags$script(src="scripts.js") + ), + tags$head( + tags$style(HTML(" + .modal-lg {width: 85%;} + .navbar-default .navbar-nav{font-size: 16px; + padding-top: 10px; + padding-bottom: 10px; + } + ") + ) + ), # Showing the animation div( id = "loading-content", div(class = "plotlybars-wrapper", @@ -48,7 +55,7 @@ ui <- fluidPage(theme = shinytheme("yeti"), div( id = "app", navbarPage(title = NULL, - tabPanel(h4("Select Data"), + tabPanel("Select Data", icon = icon("hand-pointer"), tagList( column(3, @@ -59,11 +66,11 @@ ui <- fluidPage(theme = shinytheme("yeti"), ) ) ), - tabPanel(h4("History Runs"), + tabPanel("History Runs", icon = icon("history"), DT::DTOutput("historyfiles") ), - tabPanel(h4("Exploratory Plots"), + tabPanel("Exploratory Plots", icon = icon("chart-bar"), tabsetPanel( source_ui("model_plots_UI.R"), @@ -71,14 +78,14 @@ ui <- fluidPage(theme = shinytheme("yeti"), source_ui("pdf_viewer_UI.R") ) ), - tabPanel(h4("Benchmarking"), + tabPanel("Benchmarking", icon = icon("pencil-ruler"), tabsetPanel( source_ui("benchmarking_ScoresPlots_UI.R"), source_ui("benchmarking_settings_UI.R") ) ), - tabPanel(h4("Documentation"), + tabPanel("Documentation", icon = icon("book"), #withMathJax(includeMarkdown("markdown/workflowPlot_doc.Rmd")) bs_accordion_sidebar(id = "documentation", From f6237f28d383edbc12d14dd82570fef03ea64b7b Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 14 Aug 2019 08:33:40 +0000 Subject: [PATCH 0311/2289] add icons for buttons --- shiny/workflowPlot/server.R | 2 +- .../server_files/benchmarking_server.R | 3 ++- shiny/workflowPlot/server_files/sidebar_server.R | 2 +- .../workflowPlot/ui_files/model_data_plots_UI.R | 4 +++- shiny/workflowPlot/ui_files/model_plots_UI.R | 4 +++- shiny/workflowPlot/ui_files/sidebar_UI.R | 16 ++++++++-------- 6 files changed, 18 insertions(+), 13 deletions(-) diff --git a/shiny/workflowPlot/server.R b/shiny/workflowPlot/server.R index 58b54d7e8ec..1254dce87bf 100644 --- a/shiny/workflowPlot/server.R +++ b/shiny/workflowPlot/server.R @@ -67,7 +67,7 @@ server <- shinyServer(function(input, output, session) { fluidRow(column(12,textInput('host', h4('Host:'), width = "100%", value = "psql-pecan.bu.edu"))), fluidRow( column(3), - column(6,br(),actionButton('submitInfo', h4('Submit'), width = "100%", class="btn-primary")), + column(6,br(),actionButton('submitInfo', 'Submit', width = "100%", class="btn-primary")), column(3) ), footer = NULL, diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index e510c6bf7f4..45a7bb02081 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -202,7 +202,8 @@ observeEvent({ column(5), column(2, shinyjs::disabled( - actionButton('calc_bm_button', "Calculate", width = "100%", class="btn-primary") + actionButton('calc_bm_button', "Calculate", icon = icon("calculator"), + width = "100%", class="btn-primary") ) ), column(5) diff --git a/shiny/workflowPlot/server_files/sidebar_server.R b/shiny/workflowPlot/server_files/sidebar_server.R index 02abece90c6..6aac39ad572 100644 --- a/shiny/workflowPlot/server_files/sidebar_server.R +++ b/shiny/workflowPlot/server_files/sidebar_server.R @@ -324,7 +324,7 @@ observeEvent(input$register_data,{ ) ), footer = tagList( - actionButton("register_button", "Register"), + actionButton("register_button", "Register", class="btn-primary"), modalButton("Cancel") ), size = 'l' diff --git 
a/shiny/workflowPlot/ui_files/model_data_plots_UI.R b/shiny/workflowPlot/ui_files/model_data_plots_UI.R index bc43396965f..473776fd091 100644 --- a/shiny/workflowPlot/ui_files/model_data_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_data_plots_UI.R @@ -33,7 +33,9 @@ tabPanel( max = 1, value = 0.8 ), - actionButton("ex_plot_modeldata", "Generate Plot", width = "100%", class="btn-primary") + tags$hr(), + actionButton("ex_plot_modeldata", "Generate Plot", icon = icon("pencil-alt"), + width = "100%", class="btn-primary") ) ), column( diff --git a/shiny/workflowPlot/ui_files/model_plots_UI.R b/shiny/workflowPlot/ui_files/model_plots_UI.R index 23e65b70a42..975f76eba85 100644 --- a/shiny/workflowPlot/ui_files/model_plots_UI.R +++ b/shiny/workflowPlot/ui_files/model_plots_UI.R @@ -33,7 +33,9 @@ tabPanel( max = 1, value = 0.8 ), - actionButton("ex_plot_model", "Generate Plot", width = "100%", class="btn-primary") + tags$hr(), + actionButton("ex_plot_model", "Generate Plot", icon = icon("pencil-alt"), + width = "100%", class="btn-primary") ) ), column( diff --git a/shiny/workflowPlot/ui_files/sidebar_UI.R b/shiny/workflowPlot/ui_files/sidebar_UI.R index 9cd2988c0f5..ccb91b0d275 100644 --- a/shiny/workflowPlot/ui_files/sidebar_UI.R +++ b/shiny/workflowPlot/ui_files/sidebar_UI.R @@ -5,14 +5,14 @@ tagList( selectizeInput("all_workflow_id", "Mutliple Workflow IDs", c(), multiple=TRUE), p("Please select the run IDs. You can select multiple IDs"), selectizeInput("all_run_id", "Mutliple Run IDs", c(), multiple=TRUE), - - fluidRow( - column(6, - actionButton("NewRun", h6("New Run !"), width = "100%", class="btn-primary") + fluidRow( + column(6, + actionButton("NewRun", "New Run", icon = icon("plus"), + width = "120%", class="btn-primary") ), column(6, - actionButton("load_model", h5("Load"), width = "100%") + actionButton("load_model", "Load", icon = icon("download"), width = "100%") ) ) ), @@ -24,11 +24,11 @@ tagList( selectizeInput("all_input_id", "Select Input ID", c()), fluidRow( column(6, - actionButton("register_data", h6("Register"), - width = "100%", class="btn-primary") + actionButton("register_data", "Register", icon = icon("upload"), + width = "120%", class="btn-primary") ), column(6, - actionButton("load_data", h6("Load"), width = "100%") + actionButton("load_data", "Load", icon = icon("download"), width = "100%") ) ) ) From c49b7120cff931afa2d0b9148cc1eb2be218fd28 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 14 Aug 2019 08:47:26 +0000 Subject: [PATCH 0312/2289] move documentation ui to a seperate file --- shiny/workflowPlot/ui.R | 42 +------------------ .../workflowPlot/ui_files/documentation_UI.R | 38 +++++++++++++++++ 2 files changed, 40 insertions(+), 40 deletions(-) create mode 100644 shiny/workflowPlot/ui_files/documentation_UI.R diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 52f3a238851..299279c02d4 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -88,47 +88,9 @@ ui <- fluidPage(theme = shinytheme("yeti"), tabPanel("Documentation", icon = icon("book"), #withMathJax(includeMarkdown("markdown/workflowPlot_doc.Rmd")) - bs_accordion_sidebar(id = "documentation", - spec_side = c(width = 3, offset = 0)) %>% - bs_append( - title_side = "App Documentation", - content_side = NULL, - content_main = withMathJax(includeMarkdown("markdown/app_documentation.Rmd")) - ) %>% - bs_append( - title_side = "Setup page", - content_side = NULL, - content_main = withMathJax(includeMarkdown("markdown/setup_page.Rmd")) - ) %>% - bs_append( - title_side = 
"Exploratory Plots", - content_side = NULL, - content_main = withMathJax(includeMarkdown("markdown/exploratory_plot.Rmd")) - ) %>% - bs_append( - title_side = "Benchmarking", - content_side = NULL, - content_main = - bs_accordion_sidebar(id = "benchmarking") %>% - bs_append( - title_side = "Settings", - content_side = NULL, - content_main = withMathJax(includeMarkdown("markdown/benchmarking_setting.Rmd")) - ) %>% - bs_append( - title_side = "Scores", - content_side = NULL, - content_main = withMathJax(includeMarkdown("markdown/benchmarking_scores.Rmd")) - ) %>% - bs_append( - title_side = "Plots", - content_side = NULL, - content_main = withMathJax(includeMarkdown("markdown/benchmarking_plots.Rmd")) - ) - ), + source_ui("documentation_UI.R"), use_bs_accordion_sidebar() - - + ) ) ) diff --git a/shiny/workflowPlot/ui_files/documentation_UI.R b/shiny/workflowPlot/ui_files/documentation_UI.R new file mode 100644 index 00000000000..517e73bbe5c --- /dev/null +++ b/shiny/workflowPlot/ui_files/documentation_UI.R @@ -0,0 +1,38 @@ +bs_accordion_sidebar(id = "documentation", + spec_side = c(width = 3, offset = 0)) %>% + bs_append( + title_side = "App Documentation", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/app_documentation.Rmd")) + ) %>% + bs_append( + title_side = "Setup Page", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/setup_page.Rmd")) + ) %>% + bs_append( + title_side = "Exploratory Plots", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/exploratory_plot.Rmd")) + ) %>% + bs_append( + title_side = "Benchmarking", + content_side = NULL, + content_main = + bs_accordion_sidebar(id = "benchmarking") %>% + bs_append( + title_side = "Settings", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/benchmarking_setting.Rmd")) + ) %>% + bs_append( + title_side = "Scores", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/benchmarking_scores.Rmd")) + ) %>% + bs_append( + title_side = "Plots", + content_side = NULL, + content_main = withMathJax(includeMarkdown("markdown/benchmarking_plots.Rmd")) + ) + ) \ No newline at end of file From d8675f35ecbb1f43b1bda33d687d9cd358f858dd Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 14 Aug 2019 08:48:09 +0000 Subject: [PATCH 0313/2289] add history runs to documentation page --- shiny/workflowPlot/ui_files/documentation_UI.R | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/shiny/workflowPlot/ui_files/documentation_UI.R b/shiny/workflowPlot/ui_files/documentation_UI.R index 517e73bbe5c..bb93b6f09db 100644 --- a/shiny/workflowPlot/ui_files/documentation_UI.R +++ b/shiny/workflowPlot/ui_files/documentation_UI.R @@ -10,6 +10,11 @@ bs_accordion_sidebar(id = "documentation", content_side = NULL, content_main = withMathJax(includeMarkdown("markdown/setup_page.Rmd")) ) %>% + bs_append( + title_side = "History Runs", + content_side = NULL, + content_main = "This is a page for searching history runs." 
+ ) %>% bs_append( title_side = "Exploratory Plots", content_side = NULL, From 8100930ba572a7ef09a3cd8c59d074db45c991b2 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 14 Aug 2019 08:56:19 +0000 Subject: [PATCH 0314/2289] update benchmark documentation link --- shiny/workflowPlot/markdown/benchmarking_setting.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shiny/workflowPlot/markdown/benchmarking_setting.Rmd b/shiny/workflowPlot/markdown/benchmarking_setting.Rmd index a36807ffd81..2ece16a2007 100644 --- a/shiny/workflowPlot/markdown/benchmarking_setting.Rmd +++ b/shiny/workflowPlot/markdown/benchmarking_setting.Rmd @@ -5,7 +5,7 @@ output: theme: united --- -[See additional documentation on benchmarking](https://pecanproject.github.io/pecan-documentation/develop/settings-configured-analyses.html#Benchmarking) +[See additional documentation on benchmarking](https://pecanproject.github.io/pecan-documentation/develop/intermediate-user.html#benchmarking) All benchmarks that are calcualted are automatcally registered in to the database. From f0f14d88363574e9c6bf3a22e3a2f6f56e92f104 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 15 Aug 2019 15:49:57 -0400 Subject: [PATCH 0315/2289] Intro --- shiny/workflowPlot/ui.R | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 299279c02d4..23837eb68f3 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -11,11 +11,13 @@ library(bsplus) source("ui_utils.R", local = TRUE) # Define UI -ui <- fluidPage(theme = shinytheme("yeti"), +ui <- fluidPage(theme = shinytheme("paper"), + tags$head(HTML("PEcAn WorkFlow App")), # Initializing shinyJs useShinyjs(), # Initializing shinytoastr useToastr(), + shinyWidgets::useShinydashboard(), # Adding CSS to head tags$head( tags$link(rel = "stylesheet", type = "text/css", href = "style.css") @@ -62,6 +64,19 @@ ui <- fluidPage(theme = shinytheme("yeti"), source_ui("sidebar_UI.R") ), column(9, + HTML(' +
+                             <div class="alert alert-info" id="intromsg">
+                             <h5><strong>Hello user :)</strong></h5>
+                             <h6>- This app is designed to help you better explore your runs.</h6>
+                             <h6>- First things first: choose your workflow ID. You don\'t know it? It\'s alright. Use the History Runs tab to explore all the runs at all sites.</h6>
+                             <h6>- You can choose previously registered input files to assess your model\'s performance. You haven\'t registered your file? It\'s alright. Use the Register button to do so.</h6>
+                             <h5>If you are interested in learning more about the PEcAn project, or maybe becoming a member of our community, use the following links:</h5>
+                             <a href="https://pecanproject.github.io/">Learn more PEcAn</a>
+                             </div>
+ '), source_ui("select_data_UI.R") ) ) From e567cf4aea0a3a67b7d9b6e38c6b5c7d2bb10e20 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 15 Aug 2019 15:50:37 -0400 Subject: [PATCH 0316/2289] cards --- .../server_files/select_data_server.R | 70 +++++++++++++++---- 1 file changed, 55 insertions(+), 15 deletions(-) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 712a3e4a97c..674f81e0058 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -55,27 +55,67 @@ observeEvent(input$load_model,{ select.df <- rbind(select.df, README.df) } + #hide the into msg + shinyjs::hide("intromsg") + select.data <- select.df %>% dlply(.(V1), function(x) x[[2]]) %>% as.data.frame() %>% dplyr::rename(site.id = site..id) %>% - select(runtype, workflow.id, ensemble.id, pft.name, quantile, trait, run.id, + dplyr::select(runtype, workflow.id, ensemble.id, pft.name, quantile, trait, run.id, model, site.id, start.date, end.date, hostname, timestep, rundir, outdir) - - output$datatable <- DT::renderDataTable( - DT::datatable(select.data, - options = list( - dom = 'ft', - scrollX = TRUE, - initComplete = DT::JS( - "function(settings, json) {", - "$(this.api().table().header()).css({'background-color': '#404040', 'color': '#fff'});", - "}") - ) - ) - ) - + output$runsui<-renderUI({ + seq_len(nrow(select.data)) %>% + map( + function(rown){ + + HTML(paste0(' +
+                        <div>
+                        <span class="badge">',select.data$workflow.id[rown],'</span>
+                        <table>
+                        <tr><td>Runtype:</td><td>',select.data$runtype[rown],'</td></tr>
+                        <tr><td>Ensemble.id:</td><td>',select.data$ensemble.id[rown],'</td></tr>
+                        <tr><td>Pft.name</td><td>',select.data$pft.name[rown],'</td></tr>
+                        <tr><td>Run.id</td><td>',select.data$run.id[rown],'</td></tr>
+                        <tr><td>Model</td><td>',select.data$model[rown],'</td></tr>
+                        <tr><td>Site.id</td><td>',select.data$site.id[rown],'</td></tr>
+                        <tr><td>Start.date</td><td>',select.data$start.date[rown],'</td></tr>
+                        <tr><td>End.date</td><td>',select.data$end.date[rown],'</td></tr>
+                        <tr><td>Hostname</td><td>',select.data$hostname[rown],'</td></tr>
+                        <tr><td>Outdir</td><td>',select.data$outdir[rown],'</td></tr>
+                        </table>
+                        </div>
+
+                        '))
+          }
+        )
+  })
+  #output$README <- renderUI({HTML(paste(README.text, collapse = '<br/>
'))}) output$dim_message <- renderText({sprintf("This data has %.0f rows,\nthink about skipping exploratory plots if this is a large number...", dim(df)[1])}) From ef943c4ae34d9865d1855bd0f84f73d6cdff113e Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 15 Aug 2019 15:51:43 -0400 Subject: [PATCH 0317/2289] Update select_data_UI.R --- shiny/workflowPlot/ui_files/select_data_UI.R | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/shiny/workflowPlot/ui_files/select_data_UI.R b/shiny/workflowPlot/ui_files/select_data_UI.R index dd7ac2fd569..537fe3aa753 100644 --- a/shiny/workflowPlot/ui_files/select_data_UI.R +++ b/shiny/workflowPlot/ui_files/select_data_UI.R @@ -2,9 +2,8 @@ tagList( #column(6, htmlOutput("README")), - DT::dataTableOutput("datatable"), - br(), - br(), +# DT::dataTableOutput("datatable"), br(), + uiOutput("runsui"), verbatimTextOutput("dim_message") ) From 981ab5f28ed17d75971b3453bd7a258552bd1f9d Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 15 Aug 2019 15:52:47 -0400 Subject: [PATCH 0318/2289] Update style.css --- shiny/workflowPlot/www/style.css | 3 +++ 1 file changed, 3 insertions(+) diff --git a/shiny/workflowPlot/www/style.css b/shiny/workflowPlot/www/style.css index e4d0d017911..d7a7efc6e17 100644 --- a/shiny/workflowPlot/www/style.css +++ b/shiny/workflowPlot/www/style.css @@ -1,3 +1,6 @@ +.alert h5{margin-top:0;color:inherit} +.alert h6{margin-top:0;color:inherit} + #loading-content { position: absolute; background: #ffffff; From d515b5eca2c5c77169aecded6a7bd5fa05c2d322 Mon Sep 17 00:00:00 2001 From: Marissa Kivi Date: Sat, 17 Aug 2019 15:17:18 -0400 Subject: [PATCH 0319/2289] fixed 3 met issues with linkages: CRUNCEP conversion, enabled 1-year run, and overwrite of pre-existing met input files when years in file are not adequate --- models/linkages/R/met2model.LINKAGES.R | 29 +++++++++++++++++++------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/models/linkages/R/met2model.LINKAGES.R b/models/linkages/R/met2model.LINKAGES.R index d7a94722377..d45f5d807a8 100644 --- a/models/linkages/R/met2model.LINKAGES.R +++ b/models/linkages/R/met2model.LINKAGES.R @@ -29,6 +29,11 @@ met2model.LINKAGES <- function(in.path, in.prefix, outfolder, start_date, end_da out.file <- file.path(outfolder, "climate.Rdata") # out.file <- file.path(outfolder, paste(in.prefix, strptime(start_date, '%Y-%m-%d'), # strptime(end_date, '%Y-%m-%d'), 'dat', sep='.')) + + # get start/end year since inputs are specified on year basis + # use years to check if met data contains all of the necessary years + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) results <- data.frame(file = c(out.file), host = c(PEcAn.remote::fqdn()), @@ -42,8 +47,17 @@ met2model.LINKAGES <- function(in.path, in.prefix, outfolder, start_date, end_da print(results) if (file.exists(out.file) && !overwrite) { - PEcAn.logger::logger.debug("File '", out.file, "' already exists, skipping to next file.") - return(invisible(results)) + + # get year span for current data file + load(out.file) + data_start = min(rownames(temp.mat)) + data_end = max(rownames(temp.mat)) + + # check to see if needed years fall into the current data year span; if not, rewrite + if ((data_start <= start_year) & (data_end >= end_year)){ + PEcAn.logger::logger.debug("File '", out.file, "' already exists, skipping to next file.") + return(invisible(results)) + } } library(PEcAn.data.atmosphere) @@ -55,18 +69,13 @@ met2model.LINKAGES <- function(in.path, in.prefix, outfolder, 
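+  # [illustration, not from the original patch: with half-hourly records
+  #  dt = 1800 s, so tstep = 86400 / 1800 = 48 records per day and the
+  #  1 February boundary becomes 32 * 48 = 1536 instead of the old
+  #  hard-coded 32 * 24; hourly files (tstep = 24) keep the old behaviour]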
start_date, end_da out <- NULL - # get start/end year since inputs are specified on year basis - start_year <- lubridate::year(start_date) - end_year <- lubridate::year(end_date) - year <- sprintf("%04d", seq(start_year, end_year, 1)) month <- sprintf("%02d", seq(1, 12, 1)) nyear <- length(year) # number of years to simulate month_matrix_precip <- matrix(NA, nyear, 12) - DOY_vec_hr <- c(1, c(32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 365) * 24) - + if(nchar(in.prefix)>0 & substr(in.prefix,nchar(in.prefix),nchar(in.prefix)) != ".") in.prefix = paste0(in.prefix,".") for (i in seq_len(nyear)) { @@ -79,6 +88,10 @@ met2model.LINKAGES <- function(in.path, in.prefix, outfolder, start_date, end_da sec <- udunits2::ud.convert(sec, unlist(strsplit(ncin$dim$time$units, " "))[1], "seconds") dt <- PEcAn.utils::seconds_in_year(as.numeric(year[i])) / length(sec) tstep <- 86400 / dt + + # adjust vector depending on the time step of data + # assumes evenly-spaced measurements + DOY_vec_hr <- c(1, c(32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 365) * as.integer(tstep)) ncprecipf <- ncdf4::ncvar_get(ncin, "precipitation_flux") # units are kg m-2 s-1 for (m in 1:12) { From 073a1fdadab369b710b62b1a82551f229c03c5df Mon Sep 17 00:00:00 2001 From: Marissa Kivi Date: Sat, 17 Aug 2019 15:30:20 -0400 Subject: [PATCH 0320/2289] forgot to add changes for enabling one year run --- models/linkages/R/write.config.LINKAGES.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/models/linkages/R/write.config.LINKAGES.R b/models/linkages/R/write.config.LINKAGES.R index f1e5a8ba331..193cf1f4e78 100644 --- a/models/linkages/R/write.config.LINKAGES.R +++ b/models/linkages/R/write.config.LINKAGES.R @@ -104,8 +104,8 @@ write.config.LINKAGES <- function(defaults = NULL, trait.values, settings, run.i load(climate_file) } - temp.mat <- temp.mat[which(rownames(temp.mat)%in%start.year:end.year),] - precip.mat <- precip.mat[which(rownames(precip.mat)%in%start.year:end.year),] + temp.mat <- matrix(temp.mat[which(rownames(temp.mat)%in%start.year:end.year),]) + precip.mat <- matrix(precip.mat[which(rownames(precip.mat)%in%start.year:end.year),]) basesc <- 74 basesn <- 1.64 From 2b7f6807c2e431240943d8097e55715de119823e Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 19 Aug 2019 15:12:17 -0400 Subject: [PATCH 0321/2289] Tweaks to batch_runs script --- base/workflow/inst/batch_runs.R | 44 +++++++++++++++++++-------------- 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index 29e24605c72..06dbed77880 100644 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -173,8 +173,9 @@ con <- bety$con ## Find name of Machine R is running on mach_name <- Sys.info()[[4]] +## mach_name <- "docker" mach_id <- tbl(bety, "machines") %>% - filter(grepl(mach_name,hostname)) %>% + filter(grepl(mach_name, hostname)) %>% pull(id) ## Find Models @@ -195,7 +196,8 @@ sensitivity <- FALSE ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group -site_id_noinput <- anti_join(tbl(bety, "sites"), tbl(bety, "inputs")) %>% +site_id_noinput <- tbl(bety, "sites") %>% + anti_join(tbl(bety, "inputs")) %>% inner_join(tbl(bety, "sitegroups_sites") %>% filter(sitegroup_id == 1), by = c("id" = "site_id")) %>% @@ -214,35 +216,41 @@ site_id_noinput <- anti_join(tbl(bety, "sites"), tbl(bety, "inputs")) %>% #Check if startdate year is within the inerval of that is given 
in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) ) %>% - dplyr::filter(in_date & as.numeric(end_year) - as.numeric(start_year) > 1) %>% + dplyr::filter( + in_date, + as.numeric(end_year) - as.numeric(start_year) > 1 + ) %>% mutate(sitename = gsub(" ", "_", sitename)) %>% - rename(id.x = site_id) + rename(site_id = id.x) -site_id <- site_id_noinput$id.x +site_id <- site_id_noinput$site_id site_name <- gsub(" ", "_", site_id_noinput$sitename) #Create permutations of arg combinations options(scipen = 999) run_table <- expand.grid( - models, - met_name, - site_id, - startdate, - enddate, - pecan_path, - out.var, - ensemble, - ens_size, - sensitivity, + model_id = models, + met = met_name, + site_id = site_id, + start_date = startdate, + end_date = enddate, + pecan_path = pecan_path, + output_variable = out.var, + ensemble = ensemble, + ens_size = ens_size, + sensitivity = sensitivity, stringsAsFactors = FALSE ) #Execute function to spit out a table with a column of NA or success tab <- run_table %>% - mutate(outcome = purrr::pmap(., purrr::possibly(function(...) { - create_execute_test_xml(list(...)) - }, otherwise = NA)) + mutate( + outcome = purrr::pmap( + ., + purrr::possibly(function(...) create_execute_test_xml(list(...)), + otherwise = NA) + ) ) ## print to table From d91bdc9aeeb9d4e3ebab2e627b3347dc1c34135d Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 19 Aug 2019 15:36:09 -0500 Subject: [PATCH 0322/2289] Sort-of working prototype of batch_runs script --- base/workflow/inst/batch_runs.R | 102 ++++++++++++++++++++++---------- 1 file changed, 70 insertions(+), 32 deletions(-) mode change 100644 => 100755 base/workflow/inst/batch_runs.R diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R old mode 100644 new mode 100755 index 06dbed77880..e38d189faa0 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -1,3 +1,15 @@ +#!/usr/bin/env Rscript + +# if (!requireNamespace("huxtable")) { +# message("Installing missing package 'huxtable'") +# install.packages("huxtable") +# } +# +# if (!requireNamespace("htmlTable")) { +# message("Installing missing package 'htmlTable'") +# install.packages("htmlTable") +# } + ## This script contains the Following parts: ## Part 1 - Write Main Function ## A. takes in a list defining specifications for a single run and assigns them to objects @@ -76,7 +88,7 @@ create_execute_test_xml <- function(model_id, #Outdir model.new <- tbl(bety, "models") %>% - filter(model_id == id) %>% + filter(id == !!model_id) %>% collect() outdir_pre <- paste( model.new[["model_name"]], @@ -87,10 +99,10 @@ create_execute_test_xml <- function(model_id, ) outdir <- file.path(output_folder, outdir_pre) dir.create(outdir, showWarnings = FALSE, recursive = TRUE) - settings[["outdir"]] <- outdir + settings$outdir <- outdir #Database BETY - settings[["database"]] <- list( + settings$database <- list( bety = list(user = db_bety_username, password = db_bety_password, host = db_bety_hostname, @@ -104,20 +116,20 @@ create_execute_test_xml <- function(model_id, if (is.null(pft_name)){ # Select the first PFT in the model list. 
pft <- tbl(bety, "pfts") %>% - filter(modeltype_id == model.new$modeltype_id) %>% + filter(modeltype_id == !!model.new$modeltype_id) %>% collect() pft_name <- pft$name[[1]] } - settings[["pfts"]] <- list( + settings$pfts <- list( pft = list(name = pft_name, constants = list(num = 1)) ) #Meta Analysis - settings[["meta.analysis"]] <- list(iter = 3000, random.effects = FALSE) + settings$meta.analysis <- list(iter = 3000, random.effects = FALSE) #Ensemble - settings[["ensemble"]] <- list( + settings$ensemble <- list( size = ensemble_size, variable = ensemble_variable, samplingspace = list(met = list(method = "sampling"), @@ -126,23 +138,24 @@ create_execute_test_xml <- function(model_id, #Sensitivity if (sensitivity) { - settings[["sensitivity.analysis"]] <- list( + settings$sensitivity.analysis <- list( quantiles = list(sigma1 = -2, sigma2 = -1, sigma3 = 1, sigma4 = 2) ) } #Model - settings[[c("model", "id")]] <- model.new[["id"]] + settings$model$id <- model.new[["id"]] #Workflow - settings[[c("workflow", "id")]] <- paste0("Test_run_","_",model.new$model_name) - settings[["run"]] <- list( + settings$workflow$id + settings$workflow$id <- paste0("Test_run_","_",model.new$model_name) + settings$run <- list( site = list(id = site_id, met.start = start_date, met.end = end_date), inputs = list(met = list(source = met, output = model.new[["model_name"]], username = "pecan")), start.date = start_date, end.date = end_date ) - settings[[c("host", "name")]] <- "localhost" + settings$host$name <- "localhost" #create file and Run saveXML(listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml")) @@ -160,6 +173,8 @@ library(PEcAn.utils) library(XML) library(PEcAn.settings) +argv <- commandArgs(trailingOnly = TRUE) + ##Create Run Args ## Insert your path to base pecan @@ -172,16 +187,24 @@ bety <- PEcAn.DB::betyConnect(php_file) con <- bety$con ## Find name of Machine R is running on -mach_name <- Sys.info()[[4]] -## mach_name <- "docker" -mach_id <- tbl(bety, "machines") %>% - filter(grepl(mach_name, hostname)) %>% - pull(id) +machid_rxp <- "^--machine_id=" +if (any(grepl(machid_rxp, argv))) { + machid_raw <- grep(machid_rxp, argv, value = TRUE) + mach_id <- as.numeric(gsub(machid_rxp, "", machid_raw)) + message("Using specified machine ID: ", mach_id) +} else { + mach_name <- Sys.info()[[4]] + message("Auto-detected machine name: ", mach_name) + ## mach_name <- "docker" + mach_id <- tbl(bety, "machines") %>% + filter(hostname == !!mach_name) %>% + pull(id) +} ## Find Models #devtools::install_github("pecanproject/pecan", subdir = "api") model_ids <- tbl(bety, "dbfiles") %>% - filter(machine_id == mach_id) %>% + filter(machine_id == !!mach_id) %>% filter(container_type == "Model") %>% pull(container_id) @@ -193,6 +216,8 @@ out.var <- "NPP" ensemble <- FALSE ens_size <- 100 sensitivity <- FALSE +user_id <- 99000000002 # TODO: Un-hard-code this +dbfiles_folder <- normalizePath("~/output/dbfiles") # TODO: Un-hard-code this ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group @@ -236,24 +261,37 @@ run_table <- expand.grid( start_date = startdate, end_date = enddate, pecan_path = pecan_path, - output_variable = out.var, - ensemble = ensemble, - ens_size = ens_size, + ensemble_variable = out.var, + # ensemble = ensemble, + ensemble_size = ens_size, sensitivity = sensitivity, + user_id = user_id, + dbfiles_folder = dbfiles_folder, stringsAsFactors = FALSE ) #Execute function to spit out a table with a column of NA or success -tab <- 
run_table %>% - mutate( - outcome = purrr::pmap( - ., - purrr::possibly(function(...) create_execute_test_xml(list(...)), - otherwise = NA) - ) - ) +for (i in seq_len(NROW(run_table))) { + message("\n\n############################################") + message("Testing the following configuration:") + glimpse(run_table[i, ]) + do.call(create_execute_test_xml, run_table[i, ]) + message("Done!") + message("############################################\n\n") +} + +# tab <- run_table %>% +# mutate( +# outcome = purrr::pmap( +# ., +# purrr::possibly(function(...) create_execute_test_xml(list(...)), +# otherwise = NA) +# ) +# ) +# +# print(tab, n = Inf) ## print to table -tux_tab <- huxtable::hux(tab) -html_table <- huxtable::print_html(tux_tab) -htmlTable::htmlTable(tab) +# tux_tab <- huxtable::hux(tab) +# html_table <- huxtable::print_html(tux_tab) +# htmlTable::htmlTable(tab) From 929bda9bc7d1097472a592ff3f3321196e9ef40b Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 19 Aug 2019 17:29:07 -0400 Subject: [PATCH 0323/2289] Update ui.R --- shiny/workflowPlot/ui.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/ui.R b/shiny/workflowPlot/ui.R index 23837eb68f3..9dd8c3d0d76 100644 --- a/shiny/workflowPlot/ui.R +++ b/shiny/workflowPlot/ui.R @@ -66,14 +66,15 @@ ui <- fluidPage(theme = shinytheme("paper"), column(9, HTML('
                              <div class="alert alert-info" id="intromsg">
-                             <h5><strong>Hello user :)</strong></h5>
+                             <h5><strong>Hello PEcAn user,</strong></h5>
                              <h6>- This app is designed to help you better explore your runs.</h6>
                              <h6>- First things first: choose your workflow ID. You don\'t know it? It\'s alright. Use the History Runs tab to explore all the runs at all sites.</h6>
                              <h6>- You can choose previously registered input files to assess your model\'s performance. You haven\'t registered your file? It\'s alright. Use the Register button to do so.</h6>
                              <h5>If you are interested in learning more about the PEcAn project, or maybe becoming a member of our community, use the following links:</h5>
-                             <a href="https://pecanproject.github.io/">Learn more PEcAn</a>
+                             <a href="https://pecanproject.github.io/">Learn more about PEcAn</a>
+                             <a href="https://pecanproject.slack.com/">Slack Channel</a>
                              </div>
'), From 29d8f09c2748554302d0884e32f2bf12be2dff2f Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 19 Aug 2019 17:29:44 -0400 Subject: [PATCH 0324/2289] Update select_data_server.R --- .../server_files/select_data_server.R | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/shiny/workflowPlot/server_files/select_data_server.R b/shiny/workflowPlot/server_files/select_data_server.R index 674f81e0058..3d828642e81 100644 --- a/shiny/workflowPlot/server_files/select_data_server.R +++ b/shiny/workflowPlot/server_files/select_data_server.R @@ -71,45 +71,45 @@ observeEvent(input$load_model,{ function(rown){ HTML(paste0(' -
-                        <div>
+                        <div class="panel panel-default">
                         <span class="badge">',select.data$workflow.id[rown],'</span>
-                        <table>
+                        <table class="table">
                         <tr><td>Runtype:</td><td>',select.data$runtype[rown],'</td></tr>
                         <tr><td>Ensemble.id:</td><td>',select.data$ensemble.id[rown],'</td></tr>
                         <tr><td>Pft.name</td><td>',select.data$pft.name[rown],'</td></tr>
                         <tr><td>Run.id</td><td>',select.data$run.id[rown],'</td></tr>
                         <tr><td>Model</td><td>',select.data$model[rown],'</td></tr>
                         <tr><td>Site.id</td><td>',select.data$site.id[rown],'</td></tr>
                         <tr><td>Start.date</td><td>',select.data$start.date[rown],'</td></tr>
                         <tr><td>End.date</td><td>',select.data$end.date[rown],'</td></tr>
                         <tr><td>Hostname</td><td>',select.data$hostname[rown],'</td></tr>
                         <tr><td>Outdir</td><td>',select.data$outdir[rown],'</td></tr>
                         </table>
                         </div>
')) } From cdbda0f9b466e747f57a72553e79885b568cea13 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 19 Aug 2019 17:30:25 -0400 Subject: [PATCH 0325/2289] Update style.css --- shiny/workflowPlot/www/style.css | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/shiny/workflowPlot/www/style.css b/shiny/workflowPlot/www/style.css index d7a7efc6e17..4d57dd232b1 100644 --- a/shiny/workflowPlot/www/style.css +++ b/shiny/workflowPlot/www/style.css @@ -1,4 +1,16 @@ +.table { + width: 100%; + max-width: 100%; + margin-bottom: 0px; +} + +.badge { + background-color: #4c4343; + font-size: 14px; +} + .alert h5{margin-top:0;color:inherit} + .alert h6{margin-top:0;color:inherit} #loading-content { From 6ccedc35b28b11fca2f1f3e11434c7394f45fff2 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 19 Aug 2019 17:31:20 -0400 Subject: [PATCH 0326/2289] Update benchmarking_ScoresPlots_UI.R --- shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R index 5618f61084b..d2ccc1b61b0 100644 --- a/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R +++ b/shiny/workflowPlot/ui_files/benchmarking_ScoresPlots_UI.R @@ -19,8 +19,6 @@ tabPanel("Scores/Plots", textOutput("inputs_df_title"), br(), DT::dataTableOutput("inputs_df_table"), - br(), - br(), br() ), fluidRow( @@ -39,4 +37,4 @@ tabPanel("Scores/Plots", DT::dataTableOutput("results_table") ) ) -) \ No newline at end of file +) From 0743470995fc638a915bd3f74e94decc0a9fd865 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 19 Aug 2019 17:32:14 -0400 Subject: [PATCH 0327/2289] Update benchmarking_server.R --- shiny/workflowPlot/server_files/benchmarking_server.R | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/server_files/benchmarking_server.R b/shiny/workflowPlot/server_files/benchmarking_server.R index 45a7bb02081..403ea35626d 100644 --- a/shiny/workflowPlot/server_files/benchmarking_server.R +++ b/shiny/workflowPlot/server_files/benchmarking_server.R @@ -346,6 +346,8 @@ observeEvent(input$calc_bm_button,{ settings <- PEcAn.settings::prepare.settings(settings) settings$host$name <- "localhost" # This may not be the best place to set this, but it isn't set by any of the other functions. 
Another option is to have it set by the default_hostname function (if input is NULL, set to localhost) + # browser() + #results <-calc_benchmark(settings, bety = dbConnect$bety) # results <- PEcAn.settings::papply(settings, function(x) calc_benchmark(x, bety, start_year = input$start_year, end_year = input$end_year)) results <- PEcAn.settings::papply(settings, function(x) calc_benchmark(settings = x, bety = dbConnect$bety)) @@ -458,7 +460,8 @@ observeEvent(input$bench_plot,{ incProgress(9 / 15) output$bmPlot <- renderPlotly({ - plotly::ggplotly(p) + plotly::ggplotly(p)%>% + layout(height = "100%", width = "100%") }) incProgress(4 / 15) }) From d8a907bbff1686c841487e5ad1956c992cfc8b1c Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Mon, 19 Aug 2019 17:32:46 -0400 Subject: [PATCH 0328/2289] Update documentation_UI.R --- shiny/workflowPlot/ui_files/documentation_UI.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/shiny/workflowPlot/ui_files/documentation_UI.R b/shiny/workflowPlot/ui_files/documentation_UI.R index bb93b6f09db..d1f82c6f68b 100644 --- a/shiny/workflowPlot/ui_files/documentation_UI.R +++ b/shiny/workflowPlot/ui_files/documentation_UI.R @@ -1,5 +1,6 @@ bs_accordion_sidebar(id = "documentation", - spec_side = c(width = 3, offset = 0)) %>% + spec_side = c(width = 3, offset = 0), + spec_main = c(width = 9, offset = 0)) %>% bs_append( title_side = "App Documentation", content_side = NULL, @@ -40,4 +41,4 @@ bs_accordion_sidebar(id = "documentation", content_side = NULL, content_main = withMathJax(includeMarkdown("markdown/benchmarking_plots.Rmd")) ) - ) \ No newline at end of file + ) From 5d2c2904c1d79941ba6ab7a07e2949a13b7090d5 Mon Sep 17 00:00:00 2001 From: sl4397 Date: Wed, 21 Aug 2019 09:14:09 -0500 Subject: [PATCH 0329/2289] add history runs page documentation --- shiny/workflowPlot/ui_files/documentation_UI.R | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/shiny/workflowPlot/ui_files/documentation_UI.R b/shiny/workflowPlot/ui_files/documentation_UI.R index d1f82c6f68b..96d5d7eb295 100644 --- a/shiny/workflowPlot/ui_files/documentation_UI.R +++ b/shiny/workflowPlot/ui_files/documentation_UI.R @@ -14,7 +14,11 @@ bs_accordion_sidebar(id = "documentation", bs_append( title_side = "History Runs", content_side = NULL, - content_main = "This is a page for searching history runs." + content_main = HTML(" +

+                         <h6>This page is for searching history runs.</h6>
+                         <h6>If you don\'t know the workflow ID to select in the first panel, use this page to explore all the runs at all sites.
+                         <br>Select the one you wish to explore using the Explore button.</h6>
+ ") ) %>% bs_append( title_side = "Exploratory Plots", From 9edac4b4d9a45f2183e5c8ff9b0976d3eb5c703f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Aug 2019 11:09:18 +0200 Subject: [PATCH 0330/2289] documenting standards for adding package dependencies (migrated from a stale branch because this content has since been moved to a different file) --- .../03_coding_practices/01_Coding_style.Rmd | 120 ++++++++++++------ 1 file changed, 82 insertions(+), 38 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index a8ad5ec4557..81c15ac33bc 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -58,48 +58,92 @@ The option to omit curly braces is another shortcut that makes code easier to wr #### Package Dependencies -In the source code for PEcAn functions, all functions that are not from base R or the current package must be called with explicit namespacing; i.e. `package::function` (e.g. `ncdf4::nc_open(...)`, `dplyr::select()`, `PEcAn.logger::logger.warn()`). -This is intended to maximize clarity for current and future developers (including yourself), and to make it easier to quickly identify (and possibly remove) external dependencies. +To make one PEcAn package use functionality from another R package (including other PEcAn packages), you must do at least one and up to four things in your own package. -In addition, it may be a good idea to call some base R functions with known, common namespace conflicts this way as well. -For instance, if you want to use base R's `filter` function, it's a good idea to write it as `stats::filter` to avoid unintentional conflicts with `dplyr::filter`. +* Always, *declare* which packages your package depends on, so that R can install them as needed when someone installs your package and so that human readers can understand what additional functionality it uses. Declare dependencies by manually adding them to your package's DESCRIPTION file. +* Sometimes, *import* functions from the dependency package into your package's namespace, so that your functions know where to find them. This is only sometimes necessary, because you can usually use `::` to call functions without importing them. Import functions by writing Roxygen `@importFrom` statements and do not edit the NAMESPACE file by hand. +* Rarely, *load* dependency code into the R environment, so that the person using your package can use it without loading it separately. This is usually a bad idea, has caused many subtle bugs, and in PEcAn it should only be used when unavoidable. When unavoidable, use `requireNamespace(... quietly = TRUE)` over `Depends:` or `require()` or `library()`. +* Only if your dependency relies on non-R tools, *install* any components that R won't know how to find for itself. These components are often but not always identifiable from a `SystemRequirements` field in the dependency's DESCRIPTION file. The exact installation procedure will vary case by case and from one operating system to another, and for PEcAn the key point is that you should skip this step until it proves necessary. 
When it does prove necessary, edit the documentation for your package to include advice on installing the dependency components, then edit the PEcAn build and testing scripts as needed so that they follow your advice. -The one exception to this rule is infix operators (e.g. `magrittr::"%>%"`) which cannot be conveniently namespaced. -These functions should be imported using the Roxygen `@importFrom` tag. -For example: +The advice below about each step is written specifically for PEcAn, although much of it holds for R packages in general. For more details about working with dependencies, start with Hadley Wickham's [R packages](http://r-pkgs.had.co.nz/description.html#dependencies) and treat [Writing R Extensions](https://cran.r-project.org/doc/manuals/R-exts.html) as the final authority. -```r -#' My function + +##### Declaring Dependencies: Depends, Suggests, Imports + +List all dependencies in the DESCRIPTION file. Every package that is used by your package's code must appear in exactly one of the sections `Depends`, `Imports`, or `Suggests`. + +Please list packages in alphabetical order within each section. R doesn't care about the order, but you will later when you're trying to check whether this package uses a particular dependency. + +* `Imports` is the correct place to declare most PEcAn dependencies. This ensures that they get installed, but *does not* automatically import any of their functions -- Since PEcAn style prefers to mostly use `::` instead of importing, this is what we want. + +* `Depends` is, despite the name, usually the wrong place to declare PEcAn dependencies. The only difference between `Depends` and `Imports` is that when the user attaches your packages to their own R workspace (e.g. using `library("PEcAn.yourpkg")`), the packages in `Depends` are attached as well. Notice that a call like `PEcAn.yourpkg::yourfun()` *will not* attach your package *or* its dependencies, so your code still needs to import or `::`-qualify all functions from packages listed in `Depends`. In short, `Depends` is not a shortcut, is not for developer convenience, and makes it easy to create subtle bugs that appear to work during interactive test sessions but fail when run from scripts. As the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies) puts it (emphasis added): + + > Field ‘Depends’ should nowadays be used rarely, only for packages which are intended to be put on the search path to make their facilities **available to the end user (and not to the package itself)**." + +* The `Suggests` field can be used to declare dependencies on packages that make your package more useful but are not completely essential. By default R will not install these when your package is installed (users can change this using the `dependencies` argument of `install.packages`), and you are not allowed to import functions from them into your package's namespace. Again from the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies): + + > The `Suggests` field uses the same syntax as `Depends` and lists packages that are not necessarily needed. This includes packages used only in examples, tests or vignettes (see [Writing package vignettes](https://cran.r-project.org/doc/manuals/R-exts.html#Writing-package-vignettes)), and packages loaded in the body of functions. E.g., suppose an example from package foo uses a dataset from package bar. 
Then it is not necessary to have bar use foo unless one wants to execute all the examples/tests/vignettes: it is useful to have bar, but not necessary. + + Some of the PEcAn model interface packages push this definition of "not necessarily needed" by declaring their coupled model package in `Suggests` rather than `Imports`. For example, the `PEcAn.BIOCRO` package cannot do anything useful when the BioCro model is not installed, but it lists BioCro in Suggests because *PEcAn as a whole* can work without it. This is a compromise to simplify installation of PEcAn for users who only plan to use a few models, so that they can avoid the bother of installing BioCro if they only plan to run, say, SIPNET. + + All package code that uses a suggested package must behave reasonably when the package is missing. Depending on the situation this could mean checking whether the package is available and throwing an error as needed (PEcAn.BIOCRO uses its `.onLoad` function to check at load time whether BioCro is installed and will refuse to load if it is not), or providing an alternative behavior (`PEcAn.data.atmosphere::get_NARR_thredds` checks at call time for either `parallel` or `doParallel` and uses whichever one it finds first), or something else, but your code should never just assume that the suggested package is available. + + It is often tempting to move a dependency from Imports to Suggests because it is a hassle to install (large, hard to compile, no longer available from CRAN, currently broken on GitHub, etc), in the hopes that this will isolate the rest of PEcAn from the troublesome dependency. This helps for some cases, but fails for two very common ones: It does not reduce install time for CI builds, because all suggested packages need to be present when running full package checks (`R CMD check` or `devtools::check` or `make check`). It also does not prevent breakage when updating PEcAn via `make install`, because `devtools::install_deps` does not install suggested packages that are missing but does try to *upgrade* any that are already installed to the newest available version -- even if the installed version took ages to compile and would have worked just fine! + +##### Importing Functions: Use Roxygen + +PEcAn style is to import very few functions by using fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file. + +If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package (this probably lives in a file called either `zzz.R` or `-package.R`): + +```{r} +#' What your package does #' -#' @param a First param -#' @param b Second param -#' @returns Something +#' Longer description of the package goes here. +#' Probably with links to other resources about it, citations, etc. +#' +#' @docType package +#' @name PEcAn.yourpkg #' @importFrom magrittr %>% -#' @export -f <- myfunction(a, b) { - something(a) %>% something_else(b) -} +NULL ``` -**Never use `library` or `require` inside package functions**. 
- -Any package dependencies added in this way should be added to the `Imports:` list in the package `DESCRIPTION` file. -**Do not use `Depends:` unless you have a _very_ good reason.** -The `Imports` list should be sorted alphabetically, with each package on its own line. -It is also a good idea to include version requirements in the `Imports` list (e.g. `dplyr (>=0.7)`). - -External packages that do not provide essential functionality can be relegated to `Suggests` instead of `Imports`. -In particular, consider this for packages that are large, difficult to install, and/or bring in a large number of their own dependencies. -Functions using these kinds of dependencies should check for their availability with `requireNamespace` and fail informatively in their absence. -For example: - -```r -g <- myfunction() { - if (!requireNamespace("BayesianTools", quietly = TRUE) { - PEcAn.logger::logger.severe( - "`BayesianTools` package required but not found.", - "Please make sure it is installed before using `g`.") - }) - BayesianTools::do_stuff(...) -} -``` +Roxygen will make sure there's only one NAMESPACE entry per imported function no matter how many `importFrom` statements there are, but please pick a scheme (either import on every usage or once for the whole package), stick with it, and do not make function `x()` rely on an importFrom in the comments above function `y()`. + +Please do *not* import entire package namespaces (`#' @import pkg`); it increases the chance of function name collisions and makes it much harder to understand which package a given function was called from. + +A special note about importing functions from the [tidyverse](https://tidyverse.org): Be sure to import from the package(s) that actually contain the functions you want to use, e.g. `Imports: dplyr, magrittr, purrr` / `@importFrom magrittr %>%` / `purrr::map(...)`, not `Imports: tidyverse` / `@importFrom tidyverse %>%` / `tidyverse::map(...)`. The package named `tidyverse` is just a interactive shortcut that loads the whole collection of constituent packages; it doesn't export any functions in its own namespace and therefore importing it into your package doesn't make them available. + +##### Loading Code: Don't... But Use `requireNamespace` When You Do + +One of the main differences between an interactive R session and programming for an R package is that interactive sessions usually focus on manipulating data in a global environment, whereas when writing packages you want to think about functions operating in a namespace. When functions are called they have access to the user's environment, but do not forget that the user can change their environment in ways you have no control over, and that they will be justifiably angry if you change their environment in ways they were not expecting. + +* Attaching vs loading... be careful with terminology, but try to avoid making this section into a lesson about it +* reasons it's a bad idea: + - pollutes namespace -> collisions, load order dependent behavior + - doesn't work the way you think it does + - in particular, Depends: doesn't attach when your package not attached + - Causes R package check messages + +* The most common reason to need to load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. 
An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.meta.analysis contains calls that look like `as.matrix(some_mcmc.list)`. These can be correctly dispatched by `base::as.matrix` to the method `coda:::as.matrix.mcmc.list` when the `coda` namespace is loaded, but will fail when it is not. Unfortunately coda does not export `as.matrix.mcmc.list` so we cannot call it directly or import it into the PEcAn.meta.analysis namespace. + +* If your package uses a dependency that ignores all good practice and wrongly assumes all of its own dependencies are attached (if its DESCRIPTION uses only `Depends` instead of `Imports`, this is often a warning sign), accept first that you are relying on a package that is broken, and you should either convince its maintainer to fix it or find a way to remove the dependency from PEcAn. But as a short-term workaround, it is sometimes possible for your code to attach the direct dependency so that it will behave right with regard to its secondary dependencies. + +* Possible ways: + * Depends: bad, affects user's search path and doesn't help when pkg not attached + * library(): OK for scripts in inst/. Stops execution if dep not available, affects user's search path. + * require(): just like library() but returns FALSE instead of erroring if package load fails. OK for scripts in inst/ that can do *and do* do something useful when a dep is missing. If used as `if(!require(pkg)) stop(...)`, replace it with `library(pkg)`. + * requireNamespace(): + Find out: Does it affect user search path? + With quietly=FALSE, avoids (most) annoying load messages +* If you think you need to attach code, be prepared for questions about it during code review. + +##### Installing dependencies: Let the machines do it + +In most cases declaring all dependencies in DESCRIPTION is all you need to do, and the installation will be handled automatically by R and devtools during the build process. There are two exceptions: + +First, some dependencies rely on non-R software (e.g. rjags relies on JAGS, which might be installed in a different place on every machine) that needs to be installed by the user. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case. + +Second, some dependencies *will* install automatically, but they take a long time to compile or conflict with dependencies needed by other packages or are otherwise annoying to deal with. To save time during CI builds, PEcAn's Travis configuration file includes a manually curated list of the most-annoying dependencies and installs them from pre-compiled binaries before starting the normal installation process. + +Even if you suspect you're adding a dependency that will be "annoying", please *do not* add it to the Travis binary-install list right away; instead, focus on getting your package to work right using the default automatic installation and then, if needed to keep build times reasonable, submit a separate pull request to change the Travis configuration. This makes it much easier to understand which merges changed package code and which ones changed the testing configuration without changing package functionality. 
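A minimal sketch of the `Suggests` guard pattern described above, added here for reference: the wrapper function name and error message are hypothetical, while `requireNamespace()` and `PEcAn.logger::logger.severe()` are used as the surrounding text recommends.

```r
# Guard a call into a suggested package: check availability at call time and
# fail with an informative message instead of assuming the package is present.
run_bt_inversion <- function(...) {
  if (!requireNamespace("BayesianTools", quietly = TRUE)) {
    PEcAn.logger::logger.severe(
      "Package `BayesianTools` is required by `run_bt_inversion`",
      "but is not installed.")
  }
  BayesianTools::runMCMC(...)
}
```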
From 5dd532d96acf1292ea8f02419ddb987fdc6b738c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Aug 2019 11:09:36 +0200 Subject: [PATCH 0331/2289] whitespace --- .../03_coding_practices/01_Coding_style.Rmd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index 81c15ac33bc..a05aa8ba647 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -25,7 +25,7 @@ have your name listed after the author tag in the function documentation. See [Unit_Testing](#developer-testing) for instructions, and [Advanced R: Tests](http://r-pkgs.had.co.nz/tests.html). -* tests provide support for documentation - they define what a function is (and is not) expected to do +* tests provide support for documentation - they define what a function is (and is not) expected to do * all functions need tests to ensure basic functionality is maintained during development. * all bugs should have a test that reproduces the bug, and the test should pass before bug is closed @@ -49,7 +49,7 @@ Because most R code uses <- (except where = is required), we will use <- #### Use Spaces -* around all binary operators (=, +, -, <-, etc.). +* around all binary operators (=, +, -, <-, etc.). * after but not before a comma #### Use curly braces From eb321a44223aadbf76b66a9fe95cf0e797d35412 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Aug 2019 19:58:12 +0200 Subject: [PATCH 0332/2289] formatting --- .../03_coding_practices/01_Coding_style.Rmd | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index a05aa8ba647..7dbccb980d1 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -44,18 +44,19 @@ File names should end in `.R`, `.Rdata`, or `.rds` (as appropriate) and should b #### Use "<-" as an assignment operator -Because most R code uses <- (except where = is required), we will use <- +Because most R code uses <- (except where = is required), we will use <-. `=` is reserved for function arguments #### Use Spaces -* around all binary operators (=, +, -, <-, etc.). +* around all binary operators (`=`, `+`, `-`, `<-`, etc.). * after but not before a comma #### Use curly braces The option to omit curly braces is another shortcut that makes code easier to write but harder to read and more prone to error. + #### Package Dependencies To make one PEcAn package use functionality from another R package (including other PEcAn packages), you must do at least one and up to four things in your own package. 
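In practice, the style rules this patch touches (`<-` for assignment with `=` reserved for function arguments, spaces around binary operators and after commas, braces even where they are optional) look like the following. This is a hypothetical snippet for illustration, not PEcAn code:

```r
# `<-` for assignment; `=` only for function arguments
threshold <- 0.5

# spaces around binary operators and after commas; braces on every block
clamp <- function(x, lo = 0, hi = 1) {
  if (x < lo) {
    return(lo)
  }
  min(x, hi)
}

y <- clamp(0.72, hi = threshold)
```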
From 48dc63ac3c9c4e00cdf76803bda7a551fa3d3771 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Aug 2019 20:01:37 +0200 Subject: [PATCH 0333/2289] add TLDR for dependencies, plus wording tweaks --- .../03_coding_practices/01_Coding_style.Rmd | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index 7dbccb980d1..15f994ec6a9 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -59,6 +59,12 @@ The option to omit curly braces is another shortcut that makes code easier to wr #### Package Dependencies +##### Executive Summary: What to usually do + +When you're editing one PEcAn package and want to use a function from any other R package (including other PEcAn packages), the standard method is to add the other package to the `Imports:` field of your DESCRIPTION file, spell the function in fully namespaced form (`pkg::function()`) everywhere you call it, and be done. There are a few cases where this isn't enough, but they're rarer than you think. The rest of this section mostly deals with the exceptions to this rule and why not to use them when they can be avoided. + +##### Big Picture: What's possible to do + To make one PEcAn package use functionality from another R package (including other PEcAn packages), you must do at least one and up to four things in your own package. * Always, *declare* which packages your package depends on, so that R can install them as needed when someone installs your package and so that human readers can understand what additional functionality it uses. Declare dependencies by manually adding them to your package's DESCRIPTION file. @@ -77,17 +83,19 @@ Please list packages in alphabetical order within each section. R doesn't care a * `Imports` is the correct place to declare most PEcAn dependencies. This ensures that they get installed, but *does not* automatically import any of their functions -- Since PEcAn style prefers to mostly use `::` instead of importing, this is what we want. -* `Depends` is, despite the name, usually the wrong place to declare PEcAn dependencies. The only difference between `Depends` and `Imports` is that when the user attaches your packages to their own R workspace (e.g. using `library("PEcAn.yourpkg")`), the packages in `Depends` are attached as well. Notice that a call like `PEcAn.yourpkg::yourfun()` *will not* attach your package *or* its dependencies, so your code still needs to import or `::`-qualify all functions from packages listed in `Depends`. In short, `Depends` is not a shortcut, is not for developer convenience, and makes it easy to create subtle bugs that appear to work during interactive test sessions but fail when run from scripts. As the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies) puts it (emphasis added): +* `Depends` is, despite the name, usually the wrong place to declare PEcAn dependencies. The only difference between `Depends` and `Imports` is that when the user attaches your packages to their own R workspace (e.g. using `library("PEcAn.yourpkg")`), the packages in `Depends` are attached as well. 
Notice that a call like `PEcAn.yourpkg::yourfun()` *will not* attach your package *or* its dependencies, so your code still needs to import or `::`-qualify all functions from packages listed in `Depends`. In short, `Depends` is not a shortcut, is for user convenience not developer convenience, and makes it easy to create subtle bugs that appear to work during interactive test sessions but fail when run from scripts. As the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies) puts it (emphasis added): - > Field ‘Depends’ should nowadays be used rarely, only for packages which are intended to be put on the search path to make their facilities **available to the end user (and not to the package itself)**." + > This [Imports and Depends] scheme was developed before all packages had namespaces (R 2.14.0 in October 2011), and good practice changed once that was in place. Field ‘Depends’ should nowadays be used rarely, only for packages which are intended to be put on the search path to make their facilities **available to the end user (and not to the package itself)**." -* The `Suggests` field can be used to declare dependencies on packages that make your package more useful but are not completely essential. By default R will not install these when your package is installed (users can change this using the `dependencies` argument of `install.packages`), and you are not allowed to import functions from them into your package's namespace. Again from the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies): +* The `Suggests` field can be used to declare dependencies on packages that make your package more useful but are not completely essential. Again from the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies): > The `Suggests` field uses the same syntax as `Depends` and lists packages that are not necessarily needed. This includes packages used only in examples, tests or vignettes (see [Writing package vignettes](https://cran.r-project.org/doc/manuals/R-exts.html#Writing-package-vignettes)), and packages loaded in the body of functions. E.g., suppose an example from package foo uses a dataset from package bar. Then it is not necessary to have bar use foo unless one wants to execute all the examples/tests/vignettes: it is useful to have bar, but not necessary. Some of the PEcAn model interface packages push this definition of "not necessarily needed" by declaring their coupled model package in `Suggests` rather than `Imports`. For example, the `PEcAn.BIOCRO` package cannot do anything useful when the BioCro model is not installed, but it lists BioCro in Suggests because *PEcAn as a whole* can work without it. This is a compromise to simplify installation of PEcAn for users who only plan to use a few models, so that they can avoid the bother of installing BioCro if they only plan to run, say, SIPNET. - All package code that uses a suggested package must behave reasonably when the package is missing. 
Depending on the situation this could mean checking whether the package is available and throwing an error as needed (PEcAn.BIOCRO uses its `.onLoad` function to check at load time whether BioCro is installed and will refuse to load if it is not), or providing an alternative behavior (`PEcAn.data.atmosphere::get_NARR_thredds` checks at call time for either `parallel` or `doParallel` and uses whichever one it finds first), or something else, but your code should never just assume that the suggested package is available. + Since the point of Suggests is that they are allowed to be missing, all code that uses a suggested package must behave reasonably when the package is not found. Depending on the situation, "reasonably" could mean checking whether the package is available and throwing an error as needed (PEcAn.BIOCRO uses its `.onLoad` function to check at load time whether BioCro is installed and will refuse to load if it is not), or providing an alternative behavior (`PEcAn.data.atmosphere::get_NARR_thredds` checks at call time for either `parallel` or `doParallel` and uses whichever one it finds first), or something else, but your code should never just assume that the suggested package is available. + + You are not allowed to import functions from `Suggests` into your package's namespace, so always call them in `::`-qualified form. By default R will not install suggested packages when your package is installed, but users can change this using the `dependencies` argument of `install.packages`. Note that for testing on Travis CI, PEcAn *does* install all `Suggests` (because they are required for full package checks), so any of your code that runs when a suggested package is not available will never be exercised by Travis checks. It is often tempting to move a dependency from Imports to Suggests because it is a hassle to install (large, hard to compile, no longer available from CRAN, currently broken on GitHub, etc), in the hopes that this will isolate the rest of PEcAn from the troublesome dependency. This helps for some cases, but fails for two very common ones: It does not reduce install time for CI builds, because all suggested packages need to be present when running full package checks (`R CMD check` or `devtools::check` or `make check`). It also does not prevent breakage when updating PEcAn via `make install`, because `devtools::install_deps` does not install suggested packages that are missing but does try to *upgrade* any that are already installed to the newest available version -- even if the installed version took ages to compile and would have worked just fine! @@ -95,7 +103,7 @@ Please list packages in alphabetical order within each section. R doesn't care a PEcAn style is to import very few functions by using fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file. -If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. 
If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package (this probably lives in a file called either `zzz.R` or `-package.R`): +If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package, which probably lives in a file called either `zzz.R` or `-package.R`: ```{r} #' What your package does From a4044ebeca28d1842b55e4d81e5cfaf7b7062674 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Aug 2019 20:02:29 +0200 Subject: [PATCH 0334/2289] installation wording --- .../03_coding_practices/01_Coding_style.Rmd | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index 15f994ec6a9..8c306e0aa56 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -149,10 +149,8 @@ One of the main differences between an interactive R session and programming for ##### Installing dependencies: Let the machines do it -In most cases declaring all dependencies in DESCRIPTION is all you need to do, and the installation will be handled automatically by R and devtools during the build process. There are two exceptions: +In most cases you won't need to think about how dependencies get installed -- just declare them in your package's DESCRIPTION and the installation will be handled automatically by R and devtools during the build process. In PEcAn packages, the rare cases where this isn't enough will probably fall into one of two categories. -First, some dependencies rely on non-R software (e.g. rjags relies on JAGS, which might be installed in a different place on every machine) that needs to be installed by the user. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case. +First, some dependencies rely on non-R software that R does not know how to install automatically. For example, rjags relies on JAGS, which might be installed in a different place on every machine. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case. -Second, some dependencies *will* install automatically, but they take a long time to compile or conflict with dependencies needed by other packages or are otherwise annoying to deal with. To save time during CI builds, PEcAn's Travis configuration file includes a manually curated list of the most-annoying dependencies and installs them from pre-compiled binaries before starting the normal installation process. - -Even if you suspect you're adding a dependency that will be "annoying", please *do not* add it to the Travis binary-install list right away; instead, focus on getting your package to work right using the default automatic installation and then, if needed to keep build times reasonable, submit a separate pull request to change the Travis configuration. 
This makes it much easier to understand which merges changed package code and which ones changed the testing configuration without changing package functionality. +Second, some dependencies *will* install automatically, but they take a long time to compile or conflict with dependencies needed by other packages or are otherwise annoying to deal with. To save time during CI builds, PEcAn's Travis configuration file includes a manually curated list of the most-annoying dependencies and installs them from pre-compiled binaries before starting the normal installation process. If you suspect you're adding a dependency that will be "annoying", please *do not* add it to the Travis binary-install list right away; instead, focus on getting your package to work right using the default automatic installation. Then, if needed to keep build times reasonable, submit a separate pull request to change the Travis configuration. This makes it much easier to understand which merges changed package code and which ones changed the testing configuration without changing package functionality. From fe417e2b804b37f965146638f982d28c09af324d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 26 Aug 2019 20:38:22 +0200 Subject: [PATCH 0335/2289] minor wording changes --- .../03_coding_practices/01_Coding_style.Rmd | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index 8c306e0aa56..9b463525ec5 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -69,10 +69,10 @@ To make one PEcAn package use functionality from another R package (including ot * Always, *declare* which packages your package depends on, so that R can install them as needed when someone installs your package and so that human readers can understand what additional functionality it uses. Declare dependencies by manually adding them to your package's DESCRIPTION file. * Sometimes, *import* functions from the dependency package into your package's namespace, so that your functions know where to find them. This is only sometimes necessary, because you can usually use `::` to call functions without importing them. Import functions by writing Roxygen `@importFrom` statements and do not edit the NAMESPACE file by hand. -* Rarely, *load* dependency code into the R environment, so that the person using your package can use it without loading it separately. This is usually a bad idea, has caused many subtle bugs, and in PEcAn it should only be used when unavoidable. When unavoidable, use `requireNamespace(... quietly = TRUE)` over `Depends:` or `require()` or `library()`. +* Rarely, *load* dependency code into the R environment, so that the person using your package can use it without loading it separately. This is usually a bad idea, has caused many subtle bugs, and in PEcAn it should only be used when unavoidable. When unavoidable, prefer `requireNamespace(... quietly = TRUE)` over `Depends:` or `require()` or `library()`. * Only if your dependency relies on non-R tools, *install* any components that R won't know how to find for itself. 
These components are often but not always identifiable from a `SystemRequirements` field in the dependency's DESCRIPTION file. The exact installation procedure will vary case by case and from one operating system to another, and for PEcAn the key point is that you should skip this step until it proves necessary. When it does prove necessary, edit the documentation for your package to include advice on installing the dependency components, then edit the PEcAn build and testing scripts as needed so that they follow your advice. -The advice below about each step is written specifically for PEcAn, although much of it holds for R packages in general. For more details about working with dependencies, start with Hadley Wickham's [R packages](http://r-pkgs.had.co.nz/description.html#dependencies) and treat [Writing R Extensions](https://cran.r-project.org/doc/manuals/R-exts.html) as the final authority. +The advice below about each step is written specifically for PEcAn, although much of it holds for R packages in general. For more details about working with dependencies, start with Hadley Wickham's [R packages](http://r-pkgs.had.co.nz/description.html#dependencies) and treat the CRAN team's [Writing R Extensions](https://cran.r-project.org/doc/manuals/R-exts.html) as the final authority. ##### Declaring Dependencies: Depends, Suggests, Imports @@ -101,7 +101,7 @@ Please list packages in alphabetical order within each section. R doesn't care a ##### Importing Functions: Use Roxygen -PEcAn style is to import very few functions by using fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file. +PEcAn style is to import very few functions and instead use fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file. If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package, which probably lives in a file called either `zzz.R` or `-package.R`: @@ -153,4 +153,4 @@ In most cases you won't need to think about how dependencies get installed -- ju First, some dependencies rely on non-R software that R does not know how to install automatically. For example, rjags relies on JAGS, which might be installed in a different place on every machine. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case. -Second, some dependencies *will* install automatically, but they take a long time to compile or conflict with dependencies needed by other packages or are otherwise annoying to deal with. 
To save time during CI builds, PEcAn's Travis configuration file includes a manually curated list of the most-annoying dependencies and installs them from pre-compiled binaries before starting the normal installation process. If you suspect you're adding a dependency that will be "annoying", please *do not* add it to the Travis binary-install list right away; instead, focus on getting your package to work right using the default automatic installation. Then, if needed to keep build times reasonable, submit a separate pull request to change the Travis configuration. This makes it much easier to understand which merges changed package code and which ones changed the testing configuration without changing package functionality. +Second, some dependencies *will* install automatically, but they take a long time to compile or conflict with dependencies needed by other packages or are otherwise annoying to deal with. To save time during CI builds, PEcAn's Travis configuration file includes a manually curated list of the most-annoying dependencies and installs them from pre-compiled binaries before starting the normal installation process. If you suspect you're adding a dependency that will be "annoying", please *do not* add it to the Travis binary-install list right away; instead, focus on getting your package to work right using the default automatic installation. Then, if needed to keep build times reasonable, submit a separate pull request to change the Travis configuration. This two-step procedure makes it much easier to understand which merges changed package code and which ones changed the testing configuration without changing package functionality, and also lets you focus on what the code is supposed to do instead of on installation details. From 5d86384767812f74b82302f49b1a4120df1cee94 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 26 Aug 2019 20:39:19 +0200 Subject: [PATCH 0336/2289] lots more words about loading and attaching --- .../03_coding_practices/01_Coding_style.Rmd | 50 +++++++++++-------- 1 file changed, 29 insertions(+), 21 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index 9b463525ec5..af05d26511a 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -125,27 +125,35 @@ A special note about importing functions from the [tidyverse](https://tidyverse. ##### Loading Code: Don't... But Use `requireNamespace` When You Do -One of the main differences between an interactive R session and programming for an R package is that interactive sessions usually focus on manipulating data in a global environment, whereas when writing packages you want to think about functions operating in a namespace. When functions are called they have access to the user's environment, but do not forget that the user can change their environment in ways you have no control over, and that they will be justifiably angry if you change their environment in ways they were not expecting. - -* Attaching vs loading... 
be careful with terminology, but try to avoid making this section into a lesson about it
-* reasons it's a bad idea:
-  - pollutes namespace -> collisions, load order dependent behavior
-  - doesn't work the way you think it does
-    - in particular, Depends: doesn't attach when your package not attached
-  - Causes R package check messages
-
-* The most common reason to need to load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.meta.analysis contains calls that look like `as.matrix(some_mcmc.list)`. These can be correctly dispatched by `base::as.matrix` to the method `coda:::as.matrix.mcmc.list` when the `coda` namespace is loaded, but will fail when it is not. Unfortunately coda does not export `as.matrix.mcmc.list` so we cannot call it directly or import it into the PEcAn.meta.analysis namespace.
-
-* If your package uses a dependency that ignores all good practice and wrongly assumes all of its own dependencies are attached (if its DESCRIPTION uses only `Depends` instead of `Imports`, this is often a warning sign), accept first that you are relying on a package that is broken, and you should either convince its maintainer to fix it or find a way to remove the dependency from PEcAn. But as a short-term workaround, it is sometimes possible for your code to attach the direct dependency so that it will behave right with regard to its secondary dependencies.
-
-* Possible ways:
-  * Depends: bad, affects user's search path and doesn't help when pkg not attached
-  * library(): OK for scripts in inst/. Stops execution if dep not available, affects user's search path.
-  * require(): just like library() but returns FALSE instead of erroring if package load fails. OK for scripts in inst/ that can *and do* do something useful when a dep is missing. If used as `if(!require(pkg)) stop(...)`, replace it with `library(pkg)`.
-  * requireNamespace():
-    Find out: Does it affect user search path?
-    With quietly=TRUE, avoids (most) annoying load messages
-* If you think you need to attach code, be prepared for questions about it during code review.
+The very short version of this section: We want to maintain clear separation between the [package's namespace](http://r-pkgs.had.co.nz/namespace.html) (which we control and want to keep predictable) and the global namespace (which the user controls, might change in ways we have no control over, and whose owner will be justifiably angry if we change it in ways they were not expecting). Therefore, avoid attaching packages to the search path (so no `Depends` and no `library()` or `require()` inside functions), and do not explicitly load other namespaces if you can help it.
+
+The longer version requires that we make a distinction often glossed over: *Loading* a package makes it possible for *R* to find things in the package namespace and does any actions needed to make it ready for use (e.g. running its .onLoad method, loading DLLs if the package contains compiled code, etc.). *Attaching* a package (usually by calling `library("somePackage")`) loads it if it wasn't already loaded, and then adds it to the search path so that the *user* can find things in its namespace.
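+The difference is easy to see at the console. A quick sketch, assuming `coda` is installed but not yet loaded or attached in the session:
+
+```r
+requireNamespace("coda", quietly = TRUE) # loads the namespace
+"coda" %in% loadedNamespaces()           # TRUE: R can now find coda's internals
+"package:coda" %in% search()             # FALSE: the user's search path is untouched
+library("coda")                          # attaches (and loads first if needed)
+"package:coda" %in% search()             # TRUE: now on the search path
+```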
As discussed in the "Declaring Dependencies" section above, dependencies listed in `Depends` will be attached when your package is attached, but they will be *neither attached nor loaded* when your package is loaded without being attached.
+
+Loading a dependency into the package namespace is undesirable because it makes it hard to understand our own code -- if we need to use something from elsewhere, we'd prefer to call it from its own namespace using `::` (which implicitly loads the dependency!) or explicitly import it with a Rogygen `@import` directive. But in a few cases this isn't enough. The most common reason to need to explicitly load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.MA needs to call `as.matrix` on objects of class `mcmc.list`. When the `coda` namespace is loaded, `as.matrix(some_mcmc.list)` can be correctly dispatched by `base::as.matrix` to the unexported method `coda:::as.matrix.mcmc.list`, but when `coda` is not loaded this dispatch will fail. Unfortunately coda does not export `as.matrix.mcmc.list` so we cannot call it directly or import it into the PEcAn.MA namespace, so instead we [load the `coda` namespace](https://github.com/PecanProject/pecan/pull/1966/files#diff-e0b625a54a8654cc9b22d9c076e7a838R13) whenever PEcAn.MA is loaded.
+
+Attaching packages to the user's search path is even more problematic because it makes it hard for the user to understand *how your package will affect their own code*. Packages attached by a function stay attached after the function exits, so they can cause name collisions far downstream of the function call, potentially in code that has nothing to do with your package. And worse, since names are looked up in reverse order of package loading, it can cause behavior that differs in strange ways depending on the order of lines that look independent of each other:
+
+```{r, eval = FALSE}
+> library(Hmisc)
+> x = ...
+> y = 3
+> summarize(x) # calls Hmisc::summarize
+> y2 <- some_package_that_attaches_dplyr::innocent.looking.function(y)
+Loading required package: dplyr
+> summarize(x) # Looks identical to previous summarize, but calls dplyr::summarize!
+```
+
+This is not to say that users will *never* want your package to attach another one for them, just that it's rare and that attaching dependencies is much more likely to cause bugs than to fix them and additionally doesn't usually save the package author any work.
+
+One possible exception to the do-not-attach-packages rule is a case where your dependency ignores all good practice and wrongly assumes, without checking, that all of its own dependencies are attached; if its DESCRIPTION uses only `Depends` instead of `Imports`, this is often a warning sign. For example, a small-but-surprising number of packages depend on the `methods` package without proper checks (this is probably because most *but not all* R interpreters attach `methods` by default and therefore it's easy for an author to forget it might ever be otherwise unless they happen to test with one of the interpreters that does not).
+
+If you find yourself with a dependency that does this, accept first that you are relying on a package that is broken, and you should either convince its maintainer to fix it or find a way to remove the dependency from PEcAn.
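+Returning to the `coda` example above, the loading side of this is straightforward. Here is a minimal sketch of the pattern used in the linked pull request (the actual PEcAn.MA code may differ in its details):
+
+```r
+.onLoad <- function(libname, pkgname) {
+  # Load (but do not attach) coda so that its registered S3 methods,
+  # such as as.matrix.mcmc.list, can be found during dispatch
+  if (!requireNamespace("coda", quietly = TRUE)) {
+    PEcAn.logger::logger.severe("The `coda` package is required but not installed.")
+  }
+}
+```
+
+With that pattern covered, back to the badly behaved dependency from the previous paragraph.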
But as a short-term workaround, it is sometimes possible for your code to attach the direct dependency so that it will behave right with regard to its secondary dependencies. If so, make sure the attachment happens every time your package is loaded (e.g. by calling `library(depname)` inside your package's `.onLoad` method) and not just when your package is attached (e.g. by putting it in Depends).
+
+When you do need to load or attach a dependency, it is probably better to do it inside your package's `.onLoad` method rather than in individual functions, but this isn't ironclad. To only load, use `requireNamespace(pkgname, quietly=TRUE)` -- this will make it available inside your package's namespace while avoiding (most) annoying loadtime messages and not disturbing the user's search path. To attach when you really can't avoid it, declare the dependency in `Depends` and *also* attach it using `library(pkgname)` in your `.onLoad` method.
+
+Note that scripts in `inst/` are considered to be sample code rather than part of the package namespace, so it is acceptable for them to explicitly attach packages using `library()`. You may also see code that uses `require(pkgname)`; this is just like `library`, but returns FALSE instead of erroring if package load fails. It is OK for scripts in `inst/` that can *and do* do something useful when a dependency is missing, but if it is used as `if(!require(pkg)){ stop(...)}` then replace it with `library(pkg)`.
+
+If you think your package needs to load or attach code for any reason, please note why in your pull request description and be prepared for questions about it during code review. If your reviewers can think of an alternate approach that avoids loading or attaching, they will likely ask you to use it even if it creates extra work for you.

 ##### Installing dependencies: Let the machines do it

From e9a0313ec6d25d928d4fb62d7e5368be7761011b Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Tue, 27 Aug 2019 15:43:19 +0200
Subject: [PATCH 0337/2289] Update
 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd

Co-Authored-By: Alexey Shiklomanov
---
 .../03_coding_practices/01_Coding_style.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd
index af05d26511a..5ad9b4ba980 100644
--- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd
+++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd
@@ -129,7 +129,7 @@
As discussed in the "Declaring Dependencies" section above, dependencies listed in `Depends` will be attached when your package is attached, but they will be *neither attached nor loaded* when your package is loaded without being attached.
 
-Loading a dependency into the package namespace is undesirable because it makes it hard to understand our own code -- if we need to use something from elsewhere, we'd prefer to call it from its own namespace using `::` (which implicitly loads the dependency!) or explicitly import it with a Rogygen `@import` directive. But in a few cases this isn't enough. The most common reason to need to explicitly load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.MA needs to call `as.matrix` on objects of class `mcmc.list`. When the `coda` namespace is loaded, `as.matrix(some_mcmc.list)` can be correctly dispatched by `base::as.matrix` to the unexported method `coda:::as.matrix.mcmc.list`, but when `coda` is not loaded this dispatch will fail. Unfortunately coda does not export `as.matrix.mcmc.list` so we cannot call it directly or import it into the PEcAn.MA namespace, so instead we [load the `coda` namespace](https://github.com/PecanProject/pecan/pull/1966/files#diff-e0b625a54a8654cc9b22d9c076e7a838R13) whenever PEcAn.MA is loaded.
+Loading a dependency into the package namespace is undesirable because it makes it hard to understand our own code -- if we need to use something from elsewhere, we'd prefer to call it from its own namespace using `::` (which implicitly loads the dependency!) or explicitly import it with a Roxygen `@import` directive. But in a few cases this isn't enough. The most common reason to need to explicitly load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.MA needs to call `as.matrix` on objects of class `mcmc.list`. When the `coda` namespace is loaded, `as.matrix(some_mcmc.list)` can be correctly dispatched by `base::as.matrix` to the unexported method `coda:::as.matrix.mcmc.list`, but when `coda` is not loaded this dispatch will fail. Unfortunately coda does not export `as.matrix.mcmc.list` so we cannot call it directly or import it into the PEcAn.MA namespace, so instead we [load the `coda` namespace](https://github.com/PecanProject/pecan/pull/1966/files#diff-e0b625a54a8654cc9b22d9c076e7a838R13) whenever PEcAn.MA is loaded.
 
 Attaching packages to the user's search path is even more problematic because it makes it hard for the user to understand *how your package will affect their own code*. Packages attached by a function stay attached after the function exits, so they can cause name collisions far downstream of the function call, potentially in code that has nothing to do with your package.
And worse, since names are looked up in reverse order of package loading, it can cause behavior that differs in strange ways depending on the order of lines that look independent of each other: From 13bc0608479d91ed77f941ae5886768b58ba0292 Mon Sep 17 00:00:00 2001 From: PEcAn Demo User Date: Wed, 28 Aug 2019 13:30:29 -0500 Subject: [PATCH 0338/2289] Update model testing script --- base/workflow/inst/batch_runs.R | 124 +++++++++++++++++++++++++++----- 1 file changed, 105 insertions(+), 19 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index e38d189faa0..3ca7820cd9f 100755 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -75,7 +75,7 @@ create_execute_test_xml <- function(model_id, db_bety_driver = "PostgreSQL") { php_file <- file.path(pecan_path, "web", "config.php") - config.list <- read_web_config(php_file) + config.list <- PEcAn.utils::read_web_config(php_file) bety <- betyConnect(php_file) con <- bety$con @@ -97,8 +97,11 @@ create_execute_test_xml <- function(model_id, met, site_id, "test_runs", sep = "_" ) - outdir <- file.path(output_folder, outdir_pre) + outdir <- file.path(output_folder, outdir_pre) dir.create(outdir, showWarnings = FALSE, recursive = TRUE) + # Convert to absolute path so I don't end up with unnecessary nested + # directories + outdir <- normalizePath(outdir) settings$outdir <- outdir #Database BETY @@ -164,7 +167,12 @@ create_execute_test_xml <- function(model_id, setwd(outdir) on.exit(setwd(cwd), add = TRUE) - system("Rscript workflow.R 2>&1 | tee workflow.Rout") + sys_out <- system("Rscript workflow.R 2>&1 | tee workflow.Rout") + + list( + sys = sys_out, + outdir = outdir + ) } library(tidyverse) @@ -203,21 +211,28 @@ if (any(grepl(machid_rxp, argv))) { ## Find Models #devtools::install_github("pecanproject/pecan", subdir = "api") -model_ids <- tbl(bety, "dbfiles") %>% +model_df <- tbl(bety, "dbfiles") %>% filter(machine_id == !!mach_id) %>% filter(container_type == "Model") %>% - pull(container_id) + left_join(tbl(bety, "models"), c("container_id" = "id")) %>% + select(model_id = container_id, model_name, revision, + file_name, file_path, dbfile_id = id, ) %>% + collect() %>% + mutate(exists = file.exists(file.path(file_path, file_name))) -models <- model_ids -met_name <- c("CRUNCEP", "AmerifluxLBL") -startdate <- "2004-01-01" -enddate <- "2004-12-31" -out.var <- "NPP" -ensemble <- FALSE -ens_size <- 100 -sensitivity <- FALSE -user_id <- 99000000002 # TODO: Un-hard-code this -dbfiles_folder <- normalizePath("~/output/dbfiles") # TODO: Un-hard-code this +message("Found the following models on the machine:") +print(model_df) + +if (!all(model_df$exists)) { + message("WARNING: The following models are registered on the machine ", + "but their files do not exist:") + model_df %>% + filter(!exists) %>% + print() + + model_df <- model_df %>% + filter(exists) +} ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group @@ -248,6 +263,8 @@ site_id_noinput <- tbl(bety, "sites") %>% mutate(sitename = gsub(" ", "_", sitename)) %>% rename(site_id = id.x) +message("Running tests at ", nrow(site_id_noinput), " sites:") +print(site_id_noinput) site_id <- site_id_noinput$site_id site_name <- gsub(" ", "_", site_id_noinput$sitename) @@ -271,15 +288,84 @@ run_table <- expand.grid( ) #Execute function to spit out a table with a column of NA or success -for (i in seq_len(NROW(run_table))) { +models <- model_df[["model_id"]] +met_name <- c("CRUNCEP", 
"AmerifluxLBL") +startdate <- "2004-01-01" +enddate <- "2004-12-31" +out.var <- "NPP" +ensemble <- FALSE +ens_size <- 1 # Run ensemble analysis for some models? +sensitivity <- FALSE +user_id <- 99000000002 # TODO: Un-hard-code this + +dbfiles_folder <- normalizePath("~/output/dbfiles") # TODO: Un-hard-code this + +result_table <- as_tibble(run_table) %>% + mutate( + outdir = NA_character_, + workflow_complete = NA, + has_jobsh = NA, + model_output_raw = NA, + model_output_processed = NA + ) + +for (i in seq_len(nrow(run_table))) { + result_table %>% + filter(!is.na(outdir)) %>% + write_csv("result_table.csv") message("\n\n############################################") message("Testing the following configuration:") glimpse(run_table[i, ]) - do.call(create_execute_test_xml, run_table[i, ]) - message("Done!") - message("############################################\n\n") + raw_result <- do.call(create_execute_test_xml, run_table[i, ]) + outdir <- raw_result$outdir + result_table$outdir[[i]] <- outdir + ################################################## + # Did the workflow finish? + ################################################## + raw_output <- readLines(file.path(outdir, "workflow.Rout")) + result_table$workflow_complete[[i]] <- any(grepl("PEcAn Workflow Complete", raw_output)) + ################################################## + # Did we write a job.sh file? + ################################################## + out <- file.path(outdir, "out") + run <- file.path(outdir, "run") + jobsh <- list.files(run, "job\\.sh", recursive = TRUE) + ## pft <- file.path(outdir, "pft") + if (length(jobsh) > 0) { + result_table$has_jobsh[[i]] <- TRUE + } else { + next + } + ################################################## + # Did the model produce any output? + ################################################## + raw_out <- list.files(out, recursive = TRUE) + if (length(raw_out) > 0) { + result_table$model_output_raw[[i]] <- TRUE + } else { + next + } + ################################################## + # Did PEcAn post-process the output? + ################################################## + # Files should have name `YYYY.nc` + proc_out <- list.files(out, "[[:digit:]]{4}\\.nc", recursive = TRUE) + if (length(proc_out) > 0) { + result_table$model_output_processed[[i]] <- TRUE + } else { + next + } } +## out_table <- result_table %>% +## left_join(select(model_df, model_id, model_name, revision), +## "model_id") %>% +## select(model = model_name, revision, site_id, +## workflow_complete, has_jobsh, model_output_raw, +## model_output_processed) + +## write_csv(result_table, "result_table.csv") + # tab <- run_table %>% # mutate( # outcome = purrr::pmap( From 6f784e05d5890d05a9ab0699641fd6063766540f Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Aug 2019 14:04:18 -0500 Subject: [PATCH 0339/2289] Bugfix batch_run script --- base/workflow/inst/batch_runs.R | 30 ++++++++++++++++++------------ 1 file changed, 18 insertions(+), 12 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index 65b89dcfaa5..bc732920c56 100755 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -184,6 +184,15 @@ library(PEcAn.settings) argv <- commandArgs(trailingOnly = TRUE) ##Create Run Args +met_name <- c("CRUNCEP", "AmerifluxLBL") +startdate <- "2004-01-01" +enddate <- "2004-12-31" +out.var <- "NPP" +ensemble <- FALSE +ens_size <- 1 # Run ensemble analysis for some models? 
+sensitivity <- FALSE +user_id <- 99000000002 # TODO: Un-hard-code this +dbfiles_folder <- normalizePath("~/output/dbfiles") # TODO: Un-hard-code this ## Insert your path to base pecan ## pecan_path <- "/fs/data3/tonygard/work/pecan" @@ -234,6 +243,8 @@ if (!all(model_df$exists)) { filter(exists) } +models <- model_df[["model_id"]] + ## Find Sites ## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group site_id_noinput <- tbl(bety, "sites") %>% @@ -290,18 +301,6 @@ run_table <- expand.grid( ) #Execute function to spit out a table with a column of NA or success -models <- model_df[["model_id"]] -met_name <- c("CRUNCEP", "AmerifluxLBL") -startdate <- "2004-01-01" -enddate <- "2004-12-31" -out.var <- "NPP" -ensemble <- FALSE -ens_size <- 1 # Run ensemble analysis for some models? -sensitivity <- FALSE -user_id <- 99000000002 # TODO: Un-hard-code this - -dbfiles_folder <- normalizePath("~/output/dbfiles") # TODO: Un-hard-code this - result_table <- as_tibble(run_table) %>% mutate( outdir = NA_character_, @@ -359,6 +358,13 @@ for (i in seq_len(nrow(run_table))) { } } +result_table %>% + filter(!is.na(outdir)) %>% + # Add model information, to make this easier to read + left_join(select(model_df, model_id, model_name, revision), + "model_id") %>% + write_csv("result_table.csv") + ## out_table <- result_table %>% ## left_join(select(model_df, model_id, model_name, revision), ## "model_id") %>% From e58094b5a0c4c91100e67199d1422aaaaf2b9a09 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Aug 2019 14:11:07 -0500 Subject: [PATCH 0340/2289] Close PSQL connections in batch_run exit --- base/workflow/inst/batch_runs.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index bc732920c56..c412adea911 100755 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -76,8 +76,9 @@ create_execute_test_xml <- function(model_id, php_file <- file.path(pecan_path, "web", "config.php") config.list <- PEcAn.utils::read_web_config(php_file) - bety <- betyConnect(php_file) + bety <- PEcAn.DB::betyConnect(php_file) con <- bety$con + on.exit(DBI::dbDisconnect(con), add = TRUE) settings <- list( info = list(notes = "Test_Run", From 491f06a2dfb6d98c6e3e8908f4787a7b35377514 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Aug 2019 14:18:05 -0500 Subject: [PATCH 0341/2289] Batch: Clean up demo 1 and 2 tests --- base/workflow/inst/batch_runs.R | 47 ++++++++++++++++++--------------- 1 file changed, 26 insertions(+), 21 deletions(-) diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index c412adea911..9fbd0e5a9a2 100755 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -400,27 +400,32 @@ result_table %>% # Model - Sipnet # Basic Run, then sensitivity and ensemble run. 
##
-models <- 1000000014
-met_name <- "AmerifluxLBL"
-site_id <- 772
-startdate<-"2003/01/01"
-enddate<-"2006/12/31"
-out.var <- "NPP"
-ensemble <- FALSE
-ens_size <- 100
-sensitivity <- FALSE
-demo_one_run_settings <- data.frame(models,met_name, site_id, startdate,enddate,pecan_path, out.var, ensemble,ens_size, sensitivity,stringsAsFactors=FALSE)
-
-demo_one_result <-demo_one_run_settings %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){
-  create_execute_test_xml(list(...))
-},otherwise =NA))
+demo_model <- 1000000014
+demo_met <- "AmerifluxLBL"
+demo_start <- "2003/01/01"
+demo_end <- "2006/12/31"
+demo_site <- 772
+
+demo_one_result <- create_execute_test_xml(
+  model_id = demo_model,
+  met = demo_met,
+  site_id = demo_site,
+  start_date = demo_start,
+  end_date = demo_end,
+  dbfiles_folder = dbfiles_folder,
+  user_id = user_id,
+  output_folder = "batch_test_demo1_output"
 )
-ensemble <- TRUE
-ens_size <- 100
-sensitivity <- TRUE
-demo_two_run_settings <- data.frame( models,met_name, site_id, startdate,enddate,out.var,ensemble,ens_size, sensitivity)
-demo_two_result <- demo_two_run_settings %>% mutate(outcome = purrr::pmap(.,purrr::possibly(function(...){
-  create_execute_test_xml(list(...))
-},otherwise =NA))
+demo_two_result <- create_execute_test_xml(
+  model_id = demo_model,
+  met = demo_met,
+  site_id = demo_site,
+  start_date = demo_start,
+  end_date = demo_end,
+  dbfiles_folder = dbfiles_folder,
+  user_id = user_id,
+  output_folder = "batch_test_demo2_output",
+  ensemble_size = 100,
+  sensitivity = TRUE
 )

From 1b1064ef0e177f9148b4ab3d4c6af6b063751fe3 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov
Date: Wed, 28 Aug 2019 15:06:32 -0500
Subject: [PATCH 0342/2289] Don't run utils examples

---
 base/utils/R/read_web_config.R    | 2 ++
 base/utils/man/read_web_config.Rd | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/base/utils/R/read_web_config.R b/base/utils/R/read_web_config.R
index bcf6c5eaae7..fba2e570eb0 100644
--- a/base/utils/R/read_web_config.R
+++ b/base/utils/R/read_web_config.R
@@ -9,10 +9,12 @@
 #' @return Named list of variable-value pairs set in `config.php`
 #' @export
 #' @examples
+#' \dontrun{
 #' # Read Docker configuration and extract the `dbfiles` and output folders.
 #' docker_config <- read_web_config(file.path("..", "..", "docker", "web", "config.docker.php"))
 #' docker_config[["dbfiles_folder"]]
 #' docker_config[["output_folder"]]
+#' }
 read_web_config <- function(php.config = "../../web/config.php",
                             parse = TRUE,
                             expand = TRUE) {

diff --git a/base/utils/man/read_web_config.Rd b/base/utils/man/read_web_config.Rd
index a6f206176d2..29038181c5a 100644
--- a/base/utils/man/read_web_config.Rd
+++ b/base/utils/man/read_web_config.Rd
@@ -23,11 +23,13 @@ Named list of variable-value pairs set in \code{config.php}
Read \code{config.php} file into an R list
}
\examples{
+\dontrun{
# Read Docker configuration and extract the `dbfiles` and output folders.
docker_config <- read_web_config(file.path("..", "..", "docker", "web", "config.docker.php")) docker_config[["dbfiles_folder"]] docker_config[["output_folder"]] } +} \author{ Alexey Shiklomanov, Michael Dietze, Rob Kooper } From 6802e162cb7adf44a2430adb8e4e8a0e9324325f Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 30 Aug 2019 14:58:48 -0400 Subject: [PATCH 0343/2289] Move create_execute_test_xml into pkg and document --- base/workflow/R/create_execute_test_xml.R | 154 +++++++++++++++++++ base/workflow/R/create_xml.R | 113 -------------- base/workflow/inst/batch_runs.R | 142 ----------------- base/workflow/man/create_execute_test_xml.Rd | 59 +++++++ base/workflow/man/create_xml.Rd | 20 --- 5 files changed, 213 insertions(+), 275 deletions(-) create mode 100644 base/workflow/R/create_execute_test_xml.R delete mode 100644 base/workflow/R/create_xml.R create mode 100644 base/workflow/man/create_execute_test_xml.Rd delete mode 100644 base/workflow/man/create_xml.Rd diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R new file mode 100644 index 00000000000..07b4050f087 --- /dev/null +++ b/base/workflow/R/create_execute_test_xml.R @@ -0,0 +1,154 @@ +#' Create a PEcAn XML file and use it to run a PEcAn workflow +#' +#' @param model_id (numeric) Model ID (from `models` table) +#' @param met (character) Name of meteorology input source (e.g. `"CRUNCEP"`) +#' @param site_id (numeric) Site ID (from `sites` table) +#' @param start_date (character or date) Run start date +#' @param end_date (character or date) Run end date +#' @param dbfiles_folder (character) Path to `dbfiles` directory +#' @param user_id (numeric) User ID to associate with the workflow +#' @param output_folder (character) Path to root directory for storing outputs. +#' Default = `"batch_test_output"` +#' @param pecan_path (character) Path to PEcAn source code. Default is current +#' working directory. +#' @param pft (character) Name of PFT to run. If `NULL` (default), use the first +#' PFT in BETY associated with the model. +#' @param ensemble_size (numeric) Number of ensembles to run. Default = 1. +#' @param sensitivity_variable (character) Variable for performing sensitivity +#' analysis. 
Default = `"NPP"`
+#' @param sensitivity (logical) Whether or not to perform a sensitivity analysis
+#' (default = `FALSE`)
+#' @param db_bety_username (character) BETY username for workflow (default = `"bety"`)
+#' @param db_bety_password (character) BETY password for workflow (default = `"bety"`)
+#' @param db_bety_hostname (character) BETY hostname for workflow (default = `"localhost"`)
+#' @param db_bety_driver (character) BETY DBI driver for workflow (default = `"Postgres"`)
+#' @return A list with two elements: `sys`, the exit status from running the
+#'   workflow script, and `outdir`, the path to the workflow output directory
+#' @author Alexey Shiklomanov, Tony Gardella
+#' @export
+create_execute_test_xml <- function(model_id,
+                                    met,
+                                    site_id,
+                                    start_date,
+                                    end_date,
+                                    dbfiles_folder,
+                                    user_id,
+                                    output_folder = "batch_test_output",
+                                    pecan_path = getwd(),
+                                    pft = NULL,
+                                    ensemble_size = 1,
+                                    sensitivity_variable = "NPP",
+                                    sensitivity = FALSE,
+                                    db_bety_username = "bety",
+                                    db_bety_password = "bety",
+                                    db_bety_hostname = "localhost",
+                                    db_bety_driver = "Postgres") {
+
+  php_file <- file.path(pecan_path, "web", "config.php")
+  config.list <- PEcAn.utils::read_web_config(php_file)
+  bety <- PEcAn.DB::betyConnect(php_file)
+  con <- bety$con
+  on.exit(DBI::dbDisconnect(con), add = TRUE)
+
+  settings <- list(
+    info = list(notes = "Test_Run",
+                userid = user_id,
+                username = "None",
+                dates = Sys.Date())
+  )
+
+  #Outdir
+  # Fully namespaced dplyr calls, so this package code works without
+  # attaching or importing dplyr
+  model.new <- dplyr::collect(
+    dplyr::filter(dplyr::tbl(bety, "models"), id == !!model_id)
+  )
+  outdir_pre <- paste(
+    model.new[["model_name"]],
+    format(as.Date(start_date), "%Y-%m"),
+    format(as.Date(end_date), "%Y-%m"),
+    met, site_id, "test_runs",
+    sep = "_"
+  )
+  outdir <- file.path(output_folder, outdir_pre)
+  dir.create(outdir, showWarnings = FALSE, recursive = TRUE)
+  # Convert to absolute path so I don't end up with unnecessary nested
+  # directories
+  outdir <- normalizePath(outdir)
+  settings$outdir <- outdir
+
+  #Database BETY
+  settings$database <- list(
+    bety = list(user = db_bety_username,
+                password = db_bety_password,
+                host = db_bety_hostname,
+                dbname = "bety",
+                driver = db_bety_driver,
+                write = FALSE),
+    dbfiles = dbfiles_folder
+  )
+
+  #PFT
+  if (is.null(pft)){
+    # Select the first PFT in the model list.
+    pft <- dplyr::collect(
+      dplyr::filter(dplyr::tbl(bety, "pfts"),
+                    modeltype_id == !!model.new$modeltype_id)
+    )
+    pft <- pft$name[[1]]
+    message("PFT is `NULL`. Defaulting to the following PFT: ",
+            pft)
+  }
+  if (length(pft) > 1) {
+    stop(
+      "Currently, only a single PFT is supported. ",
+      "Multiple PFTs will be implemented in a future version."
+ ) + } + settings$pfts <- list( + pft = list(name = pft, + constants = list(num = 1)) + ) + + #Meta Analysis + settings$meta.analysis <- list(iter = 3000, random.effects = FALSE) + + #Ensemble + settings$ensemble <- list( + size = ensemble_size, + variable = sensitivity_variable, + samplingspace = list(met = list(method = "sampling"), + parameters = list(method = "uniform")) + ) + + #Sensitivity + if (sensitivity) { + settings$sensitivity.analysis <- list( + quantiles = list(sigma1 = -2, sigma2 = -1, sigma3 = 1, sigma4 = 2) + ) + } + + #Model + settings$model$id <- model.new[["id"]] + + #Workflow + settings$workflow$id + settings$workflow$id <- paste0("Test_run_","_",model.new$model_name) + settings$run <- list( + site = list(id = site_id, met.start = start_date, met.end = end_date), + inputs = list(met = list(source = met, output = model.new[["model_name"]], + username = "pecan")), + start.date = start_date, end.date = end_date + ) + settings$host$name <- "localhost" + + #create file and Run + saveXML(listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml")) + file.copy(file.path(pecan_path, "web", "workflow.R"), outdir) + cwd <- getwd() + setwd(outdir) + on.exit(setwd(cwd), add = TRUE) + + sys_out <- system("Rscript workflow.R 2>&1 | tee workflow.Rout") + + list( + sys = sys_out, + outdir = outdir + ) +} diff --git a/base/workflow/R/create_xml.R b/base/workflow/R/create_xml.R deleted file mode 100644 index 75f73bc305b..00000000000 --- a/base/workflow/R/create_xml.R +++ /dev/null @@ -1,113 +0,0 @@ -##' @export -##' @aliases create_xml -##' @name create_xml -##' @title create_xml -##' @description Function to create a viable PEcAn xml from a table containing specifications of a run -##' @param run_list PEcAn table containing specifications of runs -##' @param overwrite.met,overwrite.fia,overwrite.ic logical -##' -##' @author Tony Gardella - -create_execute_test_xml <- function(run_list){ - - #Read in Table - model_id <- run_list[[1]] - met <- run_list[[2]] - site_id<- run_list[[3]] - start_date<- run_list[[4]] - end_date<- run_list[[5]] - pecan_path<- run_list[[6]] - out.var<- run_list[[7]] - ensemble<- run_list[[8]] - ens_size<- run_list[[9]] - sensitivity<- run_list[[10]] - user_id<- NA - pft_name<- NA - - config.list <-PEcAn.utils::read_web_config(paste0(pecan_path,"/web/config.php")) - bety <- PEcAn.DB::betyConnect(paste0(pecan_path,"/web/config.php")) - con <- bety$con - settings <- list() - - # Info - settings$info$notes <- paste0("Test_Run") - settings$info$userid <- user_id - settings$info$username <- "None" - settings$info$dates <- Sys.Date() - #Outdir - model.new <- dplr::tbl(bety, "models") %>% dplyr::filter(model_id == id) %>% dplyr::collect() - outdir_base<-config.list$output_folder - outdir_pre <- paste(model.new$model_name,format(as.Date(start_date), "%Y-%m"), - format(as.Date(end_date), "%Y-%m"), - met,site_id,"test_runs", - sep="_",collapse =NULL) - outdir <- paste0(outdir_base,outdir_pre) - dir.create(outdir) - settings$outdir <- outdir - #Database BETY - settings$database$bety$user <- config.list$db_bety_username - settings$database$bety$password <- config.list$db_bety_password - settings$database$bety$host <- "localhost" - settings$database$bety$dbname <- config.list$db_bety_database - settings$database$bety$driver <- "PostgreSQL" - settings$database$bety$write <- FALSE - #Database dbfiles - settings$database$dbfiles <- config.list$dbfiles_folder - #PFT - if (is.na(pft_name)){ - pft <- dplyr::tbl(bety, "pfts") %>% dplyr::collect() %>% 
dplyr::filter(modeltype_id == model.new$modeltype_id) - pft_name <- pft$name[1] - } - settings$pfts$pft$name <- pft_name - settings$pfts$pft$constants$num <- 1 - #Meta Analysis - settings$meta.analysis$iter <- 3000 - settings$meta.analysis$random.effects <- FALSE - #Ensemble - if(ensemble){ - settings$ensemble$size <- ens_size - settings$ensemble$variable <- out.var - settings$ensemble$samplingspace$met$method <- "sampling" - settings$ensemble$samplingspace$parameters$method <- "uniform" - }else{ - settings$ensemble$size <- 1 - settings$ensemble$variable <- out.var - settings$ensemble$samplingspace$met$method <- "sampling" - settings$ensemble$samplingspace$parameters$method <- "uniform" - } - #Sensitivity - if(sensitivity){ - settings$sensitivity.analysis$quantiles <- - settings$sensitivity.analysis$quantiles$sigma1 <--2 - settings$sensitivity.analysis$quantiles$sigma2 <--1 - settings$sensitivity.analysis$quantiles$sigma3 <- 1 - settings$sensitivity.analysis$quantiles$sigma4 <- 2 - names(settings$sensitivity.analysis$quantiles) <-c("sigma","sigma","sigma","sigma") - } - #Model - settings$model$id <- model.new$id - #Workflow - settings$workflow$id <- paste0("Test_run_","_",model.new$model_name) - settings$run$site$id <- site_id - settings$run$site$met.start <- start_date - settings$run$site$met.end <- end_date - settings$run$inputs$met$source <- met - settings$run$inputs$met$output <- model.new$model_name - settings$run$inputs$met$username <- "pecan" - settings$run$start.date <- start_date - settings$run$end.date <- end_date - settings$host$name <-"localhost" - - #create file and Run - XML::saveXML(PEcAn.settings::listToXml(settings, "pecan"), file=paste0(outdir,"/","pecan.xml")) - file.copy(paste0(config.list$pecan_home,"web/","workflow.R"),to = outdir) - setwd(outdir) - ##Name log file - #log <- file("workflow.Rout", open = "wt") - #sink(log) - #sink(log, type = "message") - - system("./workflow.R 2>&1 | tee workflow.Rout") - #source("workflow.R") - #sink() -} diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_runs.R index 9fbd0e5a9a2..b5154119a92 100755 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_runs.R @@ -34,148 +34,6 @@ ## A. Use the pmap function to apply the function across each row of arguments and append a pass or fail outcome column to the original table of runs ## Part 5 - (In progress) Turn output table into a table -#' .. title/description .. -#' -#' @param model_id Model ID (from `models` table) -#' @param met Meteorology input source (e.g. "CRUNCEP") -#' @param site_id Site ID (from `sites` table) -#' @param start_date Run start date -#' @param end_date Run end date -#' @param pecan_path Path to PEcAn source code. Default is current -#' working directory. 
-#' @param user_id -#' @param output_folder -#' @param dbfiles_folder -#' @param pft_name -#' @param ensemble_size -#' @param ensemble_variable -#' @param sensitivity -#' @param db_bety_username -#' @param db_bety_password -#' @param db_bety_hostname -#' @param db_bety_driver -#' @return -#' @author Alexey Shiklomanov -create_execute_test_xml <- function(model_id, - met, - site_id, - start_date, - end_date, - dbfiles_folder, - user_id, - output_folder = "batch_test_output", - pecan_path = getwd(), - pft_name = NULL, - ensemble_size = 1, - ensemble_variable = "NPP", - sensitivity = FALSE, - db_bety_username = "bety", - db_bety_password = "bety", - db_bety_hostname = "localhost", - db_bety_driver = "PostgreSQL") { - - php_file <- file.path(pecan_path, "web", "config.php") - config.list <- PEcAn.utils::read_web_config(php_file) - bety <- PEcAn.DB::betyConnect(php_file) - con <- bety$con - on.exit(DBI::dbDisconnect(con), add = TRUE) - - settings <- list( - info = list(notes = "Test_Run", - userid = user_id, - username = "None", - dates = Sys.Date()) - ) - - #Outdir - model.new <- tbl(bety, "models") %>% - filter(id == !!model_id) %>% - collect() - outdir_pre <- paste( - model.new[["model_name"]], - format(as.Date(start_date), "%Y-%m"), - format(as.Date(end_date), "%Y-%m"), - met, site_id, "test_runs", - sep = "_" - ) - outdir <- file.path(output_folder, outdir_pre) - dir.create(outdir, showWarnings = FALSE, recursive = TRUE) - # Convert to absolute path so I don't end up with unnecessary nested - # directories - outdir <- normalizePath(outdir) - settings$outdir <- outdir - - #Database BETY - settings$database <- list( - bety = list(user = db_bety_username, - password = db_bety_password, - host = db_bety_hostname, - dbname = "bety", - driver = db_bety_driver, - write = FALSE), - dbfiles = dbfiles_folder - ) - - #PFT - if (is.null(pft_name)){ - # Select the first PFT in the model list. 
- pft <- tbl(bety, "pfts") %>% - filter(modeltype_id == !!model.new$modeltype_id) %>% - collect() - pft_name <- pft$name[[1]] - } - settings$pfts <- list( - pft = list(name = pft_name, - constants = list(num = 1)) - ) - - #Meta Analysis - settings$meta.analysis <- list(iter = 3000, random.effects = FALSE) - - #Ensemble - settings$ensemble <- list( - size = ensemble_size, - variable = ensemble_variable, - samplingspace = list(met = list(method = "sampling"), - parameters = list(method = "uniform")) - ) - - #Sensitivity - if (sensitivity) { - settings$sensitivity.analysis <- list( - quantiles = list(sigma1 = -2, sigma2 = -1, sigma3 = 1, sigma4 = 2) - ) - } - - #Model - settings$model$id <- model.new[["id"]] - - #Workflow - settings$workflow$id - settings$workflow$id <- paste0("Test_run_","_",model.new$model_name) - settings$run <- list( - site = list(id = site_id, met.start = start_date, met.end = end_date), - inputs = list(met = list(source = met, output = model.new[["model_name"]], - username = "pecan")), - start.date = start_date, end.date = end_date - ) - settings$host$name <- "localhost" - - #create file and Run - saveXML(listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml")) - file.copy(file.path(pecan_path, "web", "workflow.R"), outdir) - cwd <- getwd() - setwd(outdir) - on.exit(setwd(cwd), add = TRUE) - - sys_out <- system("Rscript workflow.R 2>&1 | tee workflow.Rout") - - list( - sys = sys_out, - outdir = outdir - ) -} - library(tidyverse) library(PEcAn.DB) library(PEcAn.utils) diff --git a/base/workflow/man/create_execute_test_xml.Rd b/base/workflow/man/create_execute_test_xml.Rd new file mode 100644 index 00000000000..ed6e91e5897 --- /dev/null +++ b/base/workflow/man/create_execute_test_xml.Rd @@ -0,0 +1,59 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/create_execute_test_xml.R +\name{create_execute_test_xml} +\alias{create_execute_test_xml} +\title{Create a PEcAn XML file and use it to run a PEcAn workflow} +\usage{ +create_execute_test_xml(model_id, met, site_id, start_date, end_date, + dbfiles_folder, user_id, output_folder = "batch_test_output", + pecan_path = getwd(), pft = NULL, ensemble_size = 1, + sensitivity_variable = "NPP", sensitivity = FALSE, + db_bety_username = "bety", db_bety_password = "bety", + db_bety_hostname = "localhost", db_bety_driver = "Postgres") +} +\arguments{ +\item{model_id}{(numeric) Model ID (from `models` table)} + +\item{met}{(character) Name of meteorology input source (e.g. `"CRUNCEP"`)} + +\item{site_id}{(numeric) Site ID (from `sites` table)} + +\item{start_date}{(character or date) Run start date} + +\item{end_date}{(character or date) Run end date} + +\item{dbfiles_folder}{(character) Path to `dbfiles` directory} + +\item{user_id}{(numeric) User ID to associate with the workflow} + +\item{output_folder}{(character) Path to root directory for storing outputs. +Default = `"batch_test_output"`} + +\item{pecan_path}{(character) Path to PEcAn source code. Default is current +working directory.} + +\item{pft}{(character) Name of PFT to run. If `NULL` (default), use the first +PFT in BETY associated with the model.} + +\item{ensemble_size}{(numeric) Number of ensembles to run. Default = 1.} + +\item{sensitivity_variable}{(character) Variable for performing sensitivity +analysis. 
Default = `"NPP"`} + +\item{sensitivity}{(logical) Whether or not to perform a sensitivity analysis +(default = `FALSE`)} + +\item{db_bety_username}{(character) BETY username for workflow (default = `"bety"`)} + +\item{db_bety_password}{(character) BETY password for workflow (default = `"bety"`)} + +\item{db_bety_hostname}{(character) BETY hostname for workflow (default = `"localhost"`)} + +\item{db_bety_driver}{(character) BETY DBI driver for workflow (default = `"Postgres"`)} +} +\description{ +Create a PEcAn XML file and use it to run a PEcAn workflow +} +\author{ +Alexey Shiklomanov, Tony Gardella +} diff --git a/base/workflow/man/create_xml.Rd b/base/workflow/man/create_xml.Rd deleted file mode 100644 index 3939ffb6db8..00000000000 --- a/base/workflow/man/create_xml.Rd +++ /dev/null @@ -1,20 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/create_xml.R -\name{create_xml} -\alias{create_xml} -\alias{create_execute_test_xml} -\title{create_xml} -\usage{ -create_execute_test_xml(run_list) -} -\arguments{ -\item{run_list}{PEcAn table containing specifications of runs} - -\item{overwrite.met, overwrite.fia, overwrite.ic}{logical} -} -\description{ -Function to create a viable PEcAn xml from a table containing specifications of a run -} -\author{ -Tony Gardella -} From c5c6c01e0a9ea8bd6018fd31458c39000965a743 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 30 Aug 2019 16:01:31 -0400 Subject: [PATCH 0344/2289] Draft of batch_run script based on input table --- base/workflow/inst/batch_run.R | 141 ++++++++++++++++++ .../inst/{batch_runs.R => batch_run_all.R} | 9 +- base/workflow/inst/default_tests.csv | 4 + 3 files changed, 150 insertions(+), 4 deletions(-) create mode 100644 base/workflow/inst/batch_run.R rename base/workflow/inst/{batch_runs.R => batch_run_all.R} (99%) create mode 100644 base/workflow/inst/default_tests.csv diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R new file mode 100644 index 00000000000..66216d0a452 --- /dev/null +++ b/base/workflow/inst/batch_run.R @@ -0,0 +1,141 @@ +library(dplyr) + +################################################## +# Parse arguments +argv <- commandArgs(trailingOnly = TRUE) +if ("--help" %in% argv) { + message( + "This script supports the following options:\n", + "--help Print this help message.\n", + "--dbfiles= Path to dbfiles folder", + "--table= Path to table listing tests to run", + "--userid= User ID for registering workflow.", + "--outdir= Path to output directory.", + "--pecandir= Path to PEcAn root directory.", + "--outfile= Path to output table" + ) + quit(save = "no", status = 0) +} + +get_arg <- function(argv, pattern, default_value) { + if (any(grepl(pattern, argv))) { + result <- argv[grep(pattern, argv)] %>% + gsub(pattern = paste0(pattern, "="), replacement = "") + } else { + result <- default_value + } + return(result) +} + +dbfiles_folder <- normalizePath(get_arg(argv, "--dbfiles", "~/output/dbfiles")) +input_table_file <- get_arg( + argv, + "--table", + system.file("default_tests.csv", package = "PEcAn.workflow") +) +user_id <- as.numeric(get_arg(argv, "--userid", 99000000002)) +pecan_path <- get_arg(argv, "--pecandir", getwd()) +output_folder <- get_arg(argv, "--outdir", "batch_test_output") +outfile <- get_arg(argv, "--outfile", "test_result_table.csv") +################################################## + +# Create outfile directory if it doesn't exist +dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE) + +input_table <- 
read_csv(input_table_file) +result_table <- input_table %>% + mutate( + outdir = NA_character_, + workflow_complete = NA, + has_jobsh = NA, + model_output_raw = NA, + model_output_processed = NA + ) + +for (i in seq_len(nrow(input_table))) { + table_row <- input_table[i, ] + + # Get model ID + model <- table_row$model + revision <- table_row$revision + model_id <- tbl(con, "models") %>% + filter(model_name == !!model, + revision == !!revision) %>% + pull(id) + if (!length(model_id) == 1) { + message("Invalid number of models returned: ", + length(model_id), "\n", + "Moving on to next row.") + next + } + + pft <- table_row$pft + if (is.na(pft)) pft <- NULL + + # Run test + raw_result <- create_execute_test_xml( + model_id = model_id, + met = table_row$met, + site_id = table_row$site_id, + pft = pft, + start_date = table_row$start_date, + end_date = table_row$end_date, + dbfiles_folder = dbfiles_folder, + user_id = user_id, + ensemble_size = table_row$ensemble_size, + sensitivity = table_row$sensitivity + ) + + result_table$outdir <- raw_result$outdir + + ################################################## + # Did the workflow finish? + ################################################## + raw_output <- readLines(file.path(outdir, "workflow.Rout")) + result_table$workflow_complete[[i]] <- any(grepl( + "PEcAn Workflow Complete", + raw_output + )) + continue <- FALSE + ################################################## + # Did we write a job.sh file? + ################################################## + out <- file.path(outdir, "out") + run <- file.path(outdir, "run") + jobsh <- list.files(run, "job\\.sh", recursive = TRUE) + ## pft <- file.path(outdir, "pft") + if (length(jobsh) > 0) { + result_table$has_jobsh[[i]] <- TRUE + continue <- TRUE + } + + ################################################## + # Did the model produce any output? + ################################################## + if (continue) { + continue <- FALSE + raw_out <- list.files(out, recursive = TRUE) + if (length(raw_out) > 0) { + result_table$model_output_raw[[i]] <- TRUE + continue <- TRUE + } + } + + ################################################## + # Did PEcAn post-process the output? + ################################################## + # Files should have name `YYYY.nc` + if (continue) { + continue <- FALSE + proc_out <- list.files(out, "[[:digit:]]{4}\\.nc", recursive = TRUE) + if (length(proc_out) > 0) { + result_table$model_output_processed[[i]] <- TRUE + continue <- TRUE + } + + # This will continuously update the output table with the current results + result_table %>% + filter(!is.na(outdir)) %>% + write_csv(outfile) + } +} diff --git a/base/workflow/inst/batch_runs.R b/base/workflow/inst/batch_run_all.R similarity index 99% rename from base/workflow/inst/batch_runs.R rename to base/workflow/inst/batch_run_all.R index b5154119a92..8b9f0df1c8f 100755 --- a/base/workflow/inst/batch_runs.R +++ b/base/workflow/inst/batch_run_all.R @@ -34,11 +34,8 @@ ## A. 
Use the pmap function to apply the function across each row of arguments and append a pass or fail outcome column to the original table of runs ## Part 5 - (In progress) Turn output table into a table +library(PEcAn.workflow) library(tidyverse) -library(PEcAn.DB) -library(PEcAn.utils) -library(XML) -library(PEcAn.settings) argv <- commandArgs(trailingOnly = TRUE) @@ -133,6 +130,10 @@ site_id_noinput <- tbl(bety, "sites") %>% mutate(sitename = gsub(" ", "_", sitename)) %>% rename(site_id = id.x) +site_id_noinput %>% + select(site_id, sitename) %>% + print(n = Inf) + message("Running tests at ", nrow(site_id_noinput), " sites:") print(site_id_noinput) diff --git a/base/workflow/inst/default_tests.csv b/base/workflow/inst/default_tests.csv new file mode 100644 index 00000000000..45ed569eb0d --- /dev/null +++ b/base/workflow/inst/default_tests.csv @@ -0,0 +1,4 @@ +model, revision, met, site_id, pft, start_date, end_date, sensitivity, ensemble_size, comment +"SIPNET", "unk", "CRUNCEP", 772, "temperate.coniferous", "2001-01-01", "2001-12-31", FALSE, 1, "Sipnet unk - Niwot Ridge" +"SIPNET", "136", "CRUNCEP", 772, "temperate.coniferous", "2001-01-01", "2001-12-31", FALSE, 1, "Sipnet 136 - Niwot Ridge" +"ED2", "rgit", "CRUNCEP", 772, "temperate.Early_Hardwood", "2001-06-01", "2001-08-31", FALSE, 1, "ED2 git Harvard Forest summer" From e4ea724e70d7ee0773fdaec839fabb93d61116f7 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Aug 2019 15:12:54 -0500 Subject: [PATCH 0345/2289] Fix bad R chunk in bookdown --- .../03_coding_practices/01_Coding_style.Rmd | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd index 5ad9b4ba980..8af3d2b8c9f 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd @@ -133,14 +133,14 @@ Loading a dependency into the package namespace is undesirable because it makes Attaching packages to the user's search path is even more problematic because it makes it hard for the user to understand *how your package will affect their own code*. Packages attached by a function stay attached after the function exits, so they can cause name collisions far downstream of the function call, potentially in code that has nothing to do with your package. And worse, since names are looked up in reverse order of package loading, it can cause behavior that differs in strange ways depending on the order of lines that look independent of each other: -```{r} -> library(Hmisc) -> x = ... -> y = 3 -> summarize(x) # calls Hmisc::summarize -> y2 <- some_package_that_attaches_dplyr::innocent.looking.function(y) -Loading required package: dplyr -> summarize(x) # Looks identical to previous summarize, but calls dplyr::summarize! +```{r eval = FALSE} +library(Hmisc) +x = ... +y = 3 +summarize(x) # calls Hmisc::summarize +y2 <- some_package_that_attaches_dplyr::innocent.looking.function(y) +# Loading required package: dplyr +summarize(x) # Looks identical to previous summarize, but calls dplyr::summarize! 
```

This is not to say that users will *never* want your package to attach another one for them, just that it's rare and that attaching dependencies is much more likely to cause bugs than to fix them and additionally doesn't usually save the package author any work.

From 66cfecc61f33123e87efd6c17ad1c3003d94b566 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov
Date: Fri, 30 Aug 2019 15:43:18 -0500
Subject: [PATCH 0346/2289] Multiple bugfixes and enhancements to batch_run

---
 base/workflow/R/create_execute_test_xml.R |  3 +-
 base/workflow/inst/batch_run.R            | 43 ++++++++++++++++++-----
 base/workflow/inst/default_tests.csv      |  8 ++---
 3 files changed, 40 insertions(+), 14 deletions(-)
 mode change 100644 => 100755 base/workflow/inst/batch_run.R

diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R
index 07b4050f087..6cd4897b4da 100644
--- a/base/workflow/R/create_execute_test_xml.R
+++ b/base/workflow/R/create_execute_test_xml.R
@@ -139,7 +139,8 @@ create_execute_test_xml <- function(model_id,
   settings$host$name <- "localhost"
 
   #create file and Run
-  saveXML(listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml"))
+  XML::saveXML(PEcAn.settings::listToXml(settings, "pecan"),
+               file = file.path(outdir, "pecan.xml"))
   file.copy(file.path(pecan_path, "web", "workflow.R"), outdir)
   cwd <- getwd()
   setwd(outdir)
diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R
old mode 100644
new mode 100755
index 66216d0a452..9a5c561fe21
--- a/base/workflow/inst/batch_run.R
+++ b/base/workflow/inst/batch_run.R
@@ -1,4 +1,11 @@
+#!/usr/bin/env Rscript
+
 library(dplyr)
+library(PEcAn.workflow)
+stopifnot(
+  requireNamespace("PEcAn.DB", quietly = TRUE),
+  requireNamespace("PEcAn.utils", quietly = TRUE)
+)
 
 ##################################################
 # Parse arguments
@@ -38,11 +45,17 @@ pecan_path <- get_arg(argv, "--pecandir", getwd())
 output_folder <- get_arg(argv, "--outdir", "batch_test_output")
 outfile <- get_arg(argv, "--outfile", "test_result_table.csv")
 ##################################################
+# Establish database connection based on config.php
+php_file <- file.path(pecan_path, "web", "config.php")
+stopifnot(file.exists(php_file))
+config.list <- PEcAn.utils::read_web_config(php_file)
+bety <- PEcAn.DB::betyConnect(php_file)
+con <- bety$con
 
 # Create outfile directory if it doesn't exist
 dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE)
 
-input_table <- read_csv(input_table_file)
+input_table <- read.csv(input_table_file, stringsAsFactors = FALSE)
 result_table <- input_table %>%
   mutate(
     outdir = NA_character_,
@@ -58,15 +71,25 @@ for (i in seq_len(nrow(input_table))) {
   # Get model ID
   model <- table_row$model
   revision <- table_row$revision
-  model_id <- tbl(con, "models") %>%
+  message("Model: ", shQuote(model))
+  message("Revision: ", shQuote(revision))
+  model_df <- tbl(con, "models") %>%
     filter(model_name == !!model,
            revision == !!revision) %>%
-    pull(id)
-  if (!length(model_id) == 1) {
-    message("Invalid number of models returned: ",
-            length(model_id), "\n",
-            "Moving on to next row.")
+    collect()
+  if (nrow(model_df) == 0) {
+    message("No models found with name ", model,
+            " and revision ", revision, ".\n",
+            "Moving on to next row.")
     next
+  } else if (nrow(model_df) > 1) {
+    print(model_df)
+    message("Multiple models found with name ", model,
+            " and revision ", revision, ".\n",
+            "Moving on to next row.")
+    next
+  } else {
+    model_id <- model_df$id
   }
 
   pft <- table_row$pft
@@ -81,12 +104,14 @@ 
for (i in seq_len(nrow(input_table))) { start_date = table_row$start_date, end_date = table_row$end_date, dbfiles_folder = dbfiles_folder, + pecan_path = pecan_path, user_id = user_id, ensemble_size = table_row$ensemble_size, sensitivity = table_row$sensitivity ) - result_table$outdir <- raw_result$outdir + outdir <- raw_result$outdir + result_table$outdir <- outdir ################################################## # Did the workflow finish? @@ -136,6 +161,6 @@ for (i in seq_len(nrow(input_table))) { # This will continuously update the output table with the current results result_table %>% filter(!is.na(outdir)) %>% - write_csv(outfile) + write.csv(outfile, row.names = FALSE) } } diff --git a/base/workflow/inst/default_tests.csv b/base/workflow/inst/default_tests.csv index 45ed569eb0d..4e483731111 100644 --- a/base/workflow/inst/default_tests.csv +++ b/base/workflow/inst/default_tests.csv @@ -1,4 +1,4 @@ -model, revision, met, site_id, pft, start_date, end_date, sensitivity, ensemble_size, comment -"SIPNET", "unk", "CRUNCEP", 772, "temperate.coniferous", "2001-01-01", "2001-12-31", FALSE, 1, "Sipnet unk - Niwot Ridge" -"SIPNET", "136", "CRUNCEP", 772, "temperate.coniferous", "2001-01-01", "2001-12-31", FALSE, 1, "Sipnet 136 - Niwot Ridge" -"ED2", "rgit", "CRUNCEP", 772, "temperate.Early_Hardwood", "2001-06-01", "2001-08-31", FALSE, 1, "ED2 git Harvard Forest summer" +model,revision,met,site_id,pft,start_date,end_date,sensitivity,ensemble_size,comment +"SIPNET","unk","AmerifluxLBL",772,"temperate.coniferous","2003-01-01","2006-12-31",FALSE,1,"Demo 1" +"SIPNET","unk","AmerifluxLBL",772,"temperate.coniferous","2003-01-01","2006-12-31",TRUE,100,"Demo 2" +"ED2","git","CRUNCEP",758,"temperate.Early_Hardwood","2001-06-01","2001-08-31",FALSE,1, "ED2 git Harvard Forest summer" From b7045432ab48b351f6e379bb90797ba728e72670 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 30 Aug 2019 17:17:06 -0400 Subject: [PATCH 0347/2289] Add batch_run documentation --- .../03_coding_practices/05_Testing.Rmd | 48 +++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd index a1d5d0f572b..e5b4000da5e 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd @@ -359,3 +359,51 @@ So, any changes you make to the code in `server.R` and `ui.R` or scripts loaded If for whatever reason this doesn't work with RStudio, you can always run R from the command line. Also, note that the ability to forward ports (`ssh -L`) may depend on the `ssh` configuration of your remote machine. These instructions have been tested on the PEcAn VM (v.1.5.2+). + +#### Testing PEcAn in bulk + +The [`base/workflow/inst/batch_run.R`][batch_run] script can be used to quickly run a series of user-specified integration tests without having to create a bunch of XML files. +This script is powered by the [`PEcAn.workflow::create_execute_test_xml()`][xml_fun] function, +which takes as input information about the model, meteorology driver, site ID, run dates, and others, +uses these to construct a PEcAn XML file, +and then uses the `system()` command to run a workflow with that XML. 
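+
+For orientation, a direct call to this function looks roughly like the sketch below.
+The values mirror the SIPNET "Demo 1" row of the default test file; the model ID (1000000014) and user ID (99000000002) are the defaults on the PEcAn VM, so look up the corresponding IDs in BETY when running elsewhere:
+
+```r
+# Assumes the working directory is the PEcAn source root
+# (`pecan_path` defaults to `getwd()`).
+result <- PEcAn.workflow::create_execute_test_xml(
+  model_id = 1000000014,        # SIPNET on the PEcAn VM
+  met = "AmerifluxLBL",
+  site_id = 772,                # Niwot Ridge
+  start_date = "2003-01-01",
+  end_date = "2006-12-31",
+  dbfiles_folder = "~/output/dbfiles",
+  user_id = 99000000002
+)
+result$outdir  # path to the directory where the workflow ran
+```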
+
+If run without arguments, `batch_run.R` will try to run the model configurations specified in the [`base/workflow/inst/default_tests.csv`][default_tests] file.
+This file contains a CSV table with the following columns:
+
+- `model` -- The name of the model (`models.model_name` column in BETY)
+- `revision` -- The version of the model (`models.revision` column in BETY)
+- `met` -- The name of the meteorology driver source
+- `site_id` -- The numeric site ID for the model run (`sites.site_id`)
+- `pft` -- The name of the plant functional type to run. If `NA`, the script will use the first PFT associated with the model.
+- `start_date`, `end_date` -- The start and end dates for the model run, respectively. These should be formatted according to the ISO standard (`YYYY-MM-DD`, e.g. `2010-03-16`)
+- `sensitivity` -- Whether or not to run the sensitivity analysis. `TRUE` means run it, `FALSE` means do not.
+- `ensemble_size` -- The number of ensemble members to run. Set this to 1 to do a single run at the trait median.
+- `comment` -- A string providing some user-friendly information about the run.
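+
+For example, at the time of writing, the first two rows of the default test file describe the two SIPNET demo configurations (a single run at the trait median, and a 100-member ensemble with a sensitivity analysis):
+
+```
+model,revision,met,site_id,pft,start_date,end_date,sensitivity,ensemble_size,comment
+"SIPNET","unk","AmerifluxLBL",772,"temperate.coniferous","2003-01-01","2006-12-31",FALSE,1,"Demo 1"
+"SIPNET","unk","AmerifluxLBL",772,"temperate.coniferous","2003-01-01","2006-12-31",TRUE,100,"Demo 2"
+```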
+
+The `batch_run.R` script will run a workflow for every row in the input table, sequentially (for now; eventually, it will try to run them in parallel),
+and at the end of each workflow, will perform some basic checks, including whether or not the workflow finished and whether the model produced any output.
+These results are summarized in a CSV table (by default, a file called `test_result_table.csv`), with all of the columns of the input test CSV plus the following:
+
+- `outdir` -- Absolute path to the workflow directory.
+- `workflow_complete` -- Whether or not the PEcAn workflow completed. Note that this is a relatively low bar -- PEcAn workflows can complete without having run the model or finished some other steps.
+- `has_jobsh` -- Whether or not PEcAn was able to write the model's `job.sh` script. This is a good indication of whether or not the model's `write.configs` step was successful, and may be useful for separating model configuration errors from model execution errors.
+- `model_output_raw` -- Whether or not the model produced any output files at all. This is just a check to see whether the `<workflow>/out` directory is empty or not. Note that some models may produce logfiles or similar artifacts as soon as they are executed, whether or not they ran even a single timestep, so this is not an indication of model success.
+- `model_output_processed` -- Whether or not PEcAn was able to post-process any model output. This test just sees if there are any files of the form `YYYY.nc` (e.g. `1992.nc`) in the `<workflow>/out` directory.
+
+Right now, these checks are not particularly robust or comprehensive, but they should be sufficient for catching common errors.
+Development of more, better tests is ongoing.
+
+The `batch_run.R` script can take the following command-line arguments:
+
+- `--help` -- Prints a help message about the script's arguments
+- `--dbfiles=<path>` -- The path to the PEcAn `dbfiles` folder. The default value is `~/output/dbfiles`, based on the file structure of the PEcAn VM. Note that for this and all other paths, if a relative path is given, it is assumed to be relative to the current working directory, i.e. the directory from which the script was called.
+- `--table=<path>` -- Path to an alternate test table. The default is the `base/workflow/inst/default_tests.csv` file. See preceding paragraph for a description of the format.
+- `--userid=<id>` -- The numeric user ID for registering the workflow. The default value is 99000000002, corresponding to the guest user on the PEcAn VM.
+- `--outdir=<path>` -- Path to a directory (which will be created if it doesn't exist) for storing the PEcAn workflow outputs. Default is `batch_test_output` (in the current working directory).
+- `--pecandir=<path>` -- Path to the PEcAn source code root directory. Default is the current working directory.
+- `--outfile=<path>` -- Full path (including file name) of the CSV file summarizing the results of the runs. Default is `test_result_table.csv`. The format of the output table is described above.
+
+[batch_run]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/batch_run.R
+[default_tests]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/default_tests.csv
+[xml_fun]:

From 76abdeabaa7f510a5476365e7d6a0bda1af43c5c Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov
Date: Fri, 30 Aug 2019 17:34:14 -0400
Subject: [PATCH 0348/2289] Convert old batch_run_all to permutation_tests

This script generates a bunch of possible permutations of models,
sites, and met products for a given machine. Its main purpose is to
create a CSV file that will be used as an input to `batch_run.R`

---
 base/workflow/inst/batch_run_all.R     | 290 -------------------------
 base/workflow/inst/permutation_tests.R | 142 ++++++++++++
 2 files changed, 142 insertions(+), 290 deletions(-)
 delete mode 100755 base/workflow/inst/batch_run_all.R
 create mode 100755 base/workflow/inst/permutation_tests.R

diff --git a/base/workflow/inst/batch_run_all.R b/base/workflow/inst/batch_run_all.R
deleted file mode 100755
index 8b9f0df1c8f..00000000000
--- a/base/workflow/inst/batch_run_all.R
+++ /dev/null
@@ -1,290 +0,0 @@
-#!/usr/bin/env Rscript
-
-# if (!requireNamespace("huxtable")) {
-#   message("Installing missing package 'huxtable'")
-#   install.packages("huxtable")
-# }
-#
-# if (!requireNamespace("htmlTable")) {
-#   message("Installing missing package 'htmlTable'")
-#   install.packages("htmlTable")
-# }
-
-## This script contains the Following parts:
-## Part 1 - Write Main Function
-## A. takes in a list defining specifications for a single run and assigns them to objects
-## B. The function then writes out a pecan settings list object
-## C. A directory is created from the ouput folder defined in a users config.php file
-## D. Settings object gets written into pecan.xml file and put in output directory.
-## E. workflow.R is copied into this directory and executed.
-## F. console output is stored in a file called workflow.Rout
-## Part 2 - Write run settings
-## A. Set BETY connection object. Needs user to define path to their pecan directory.
-## B. Find Machine ID
-## C. Find Models ids based on machine
-## D. Manually Define Met, right now CRUNCEP and AMerifluxlbl
-## E. Manually Define start and end date
-## F. Manually Define Output var
-## G. Manually Define Sensitivity and enesmble
-## H. Find sites not associated with any inputs(meaning data needs to be downloaded), part of ameriflux network, and have data for start-end year range.
-##    Available ameriflux data is found by parsing the notes section of the sites table where ameriflux sites have a year range.
-## Part 3 - Create Run table
-## A. Create table that contains combinations of models,met_name,site_id, startdate, enddate, pecan_path,out.var, ensemble, ens_size, sensitivity args that the function above will use to do runs.
-## Part 4 - Run the Function across the table
-## A. 
Use the pmap function to apply the function across each row of arguments and append a pass or fail outcome column to the original table of runs -## Part 5 - (In progress) Turn output table into a table - -library(PEcAn.workflow) -library(tidyverse) - -argv <- commandArgs(trailingOnly = TRUE) - -##Create Run Args -met_name <- c("CRUNCEP", "AmerifluxLBL") -startdate <- "2004-01-01" -enddate <- "2004-12-31" -out.var <- "NPP" -ensemble <- FALSE -ens_size <- 1 # Run ensemble analysis for some models? -sensitivity <- FALSE -user_id <- 99000000002 # TODO: Un-hard-code this -dbfiles_folder <- normalizePath("~/output/dbfiles") # TODO: Un-hard-code this - -## Insert your path to base pecan -## pecan_path <- "/fs/data3/tonygard/work/pecan" -pecan_path <- file.path("..", "..") -php_file <- file.path(pecan_path, "web", "config.php") -stopifnot(file.exists(php_file)) -config.list <- PEcAn.utils::read_web_config(php_file) -bety <- PEcAn.DB::betyConnect(php_file) -con <- bety$con - -## Find name of Machine R is running on -machid_rxp <- "^--machine_id=" -if (any(grepl(machid_rxp, argv))) { - machid_raw <- grep(machid_rxp, argv, value = TRUE) - mach_id <- as.numeric(gsub(machid_rxp, "", machid_raw)) - message("Using specified machine ID: ", mach_id) -} else { - mach_name <- Sys.info()[[4]] - message("Auto-detected machine name: ", mach_name) - ## mach_name <- "docker" - mach_id <- tbl(bety, "machines") %>% - filter(hostname == !!mach_name) %>% - pull(id) -} - -## Find Models -#devtools::install_github("pecanproject/pecan", subdir = "api") -model_df <- tbl(bety, "dbfiles") %>% - filter(machine_id == !!mach_id) %>% - filter(container_type == "Model") %>% - left_join(tbl(bety, "models"), c("container_id" = "id")) %>% - select(model_id = container_id, model_name, revision, - file_name, file_path, dbfile_id = id, ) %>% - collect() %>% - mutate(exists = file.exists(file.path(file_path, file_name))) - -message("Found the following models on the machine:") -print(model_df) - -if (!all(model_df$exists)) { - message("WARNING: The following models are registered on the machine ", - "but their files do not exist:") - model_df %>% - filter(!exists) %>% - print() - - model_df <- model_df %>% - filter(exists) -} - -models <- model_df[["model_id"]] - -## Find Sites -## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group -site_id_noinput <- tbl(bety, "sites") %>% - anti_join(tbl(bety, "inputs")) %>% - inner_join(tbl(bety, "sitegroups_sites") %>% - filter(sitegroup_id == 1), - by = c("id" = "site_id")) %>% - dplyr::select("id.x", "notes", "sitename") %>% - dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% - collect() %>% - dplyr::mutate( - # Grab years from string within the notes - start_year = substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4), - #Empty tower end in the notes means that it goes until present day so if empty enter curent year. 
- end_year = dplyr::if_else( - substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "", - as.character(lubridate::year(Sys.Date())), - substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) - ), - #Check if startdate year is within the inerval of that is given - in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) - ) %>% - dplyr::filter( - in_date, - as.numeric(end_year) - as.numeric(start_year) > 1 - ) %>% - mutate(sitename = gsub(" ", "_", sitename)) %>% - rename(site_id = id.x) - -site_id_noinput %>% - select(site_id, sitename) %>% - print(n = Inf) - -message("Running tests at ", nrow(site_id_noinput), " sites:") -print(site_id_noinput) - -site_id <- site_id_noinput$site_id -site_name <- gsub(" ", "_", site_id_noinput$sitename) - - - -#Create permutations of arg combinations -options(scipen = 999) -run_table <- expand.grid( - model_id = models, - met = met_name, - site_id = site_id, - start_date = startdate, - end_date = enddate, - pecan_path = pecan_path, - ensemble_variable = out.var, - # ensemble = ensemble, - ensemble_size = ens_size, - sensitivity = sensitivity, - user_id = user_id, - dbfiles_folder = dbfiles_folder, - stringsAsFactors = FALSE -) -#Execute function to spit out a table with a column of NA or success - -result_table <- as_tibble(run_table) %>% - mutate( - outdir = NA_character_, - workflow_complete = NA, - has_jobsh = NA, - model_output_raw = NA, - model_output_processed = NA - ) - -for (i in seq_len(nrow(run_table))) { - result_table %>% - filter(!is.na(outdir)) %>% - write_csv("result_table.csv") - message("\n\n############################################") - message("Testing the following configuration:") - glimpse(run_table[i, ]) - raw_result <- do.call(create_execute_test_xml, run_table[i, ]) - outdir <- raw_result$outdir - result_table$outdir[[i]] <- outdir - ################################################## - # Did the workflow finish? - ################################################## - raw_output <- readLines(file.path(outdir, "workflow.Rout")) - result_table$workflow_complete[[i]] <- any(grepl("PEcAn Workflow Complete", raw_output)) - ################################################## - # Did we write a job.sh file? - ################################################## - out <- file.path(outdir, "out") - run <- file.path(outdir, "run") - jobsh <- list.files(run, "job\\.sh", recursive = TRUE) - ## pft <- file.path(outdir, "pft") - if (length(jobsh) > 0) { - result_table$has_jobsh[[i]] <- TRUE - } else { - next - } - ################################################## - # Did the model produce any output? - ################################################## - raw_out <- list.files(out, recursive = TRUE) - if (length(raw_out) > 0) { - result_table$model_output_raw[[i]] <- TRUE - } else { - next - } - ################################################## - # Did PEcAn post-process the output? 
- ################################################## - # Files should have name `YYYY.nc` - proc_out <- list.files(out, "[[:digit:]]{4}\\.nc", recursive = TRUE) - if (length(proc_out) > 0) { - result_table$model_output_processed[[i]] <- TRUE - } else { - next - } -} - -result_table %>% - filter(!is.na(outdir)) %>% - # Add model information, to make this easier to read - left_join(select(model_df, model_id, model_name, revision), - "model_id") %>% - write_csv("result_table.csv") - -## out_table <- result_table %>% -## left_join(select(model_df, model_id, model_name, revision), -## "model_id") %>% -## select(model = model_name, revision, site_id, -## workflow_complete, has_jobsh, model_output_raw, -## model_output_processed) - -## write_csv(result_table, "result_table.csv") - -# tab <- run_table %>% -# mutate( -# outcome = purrr::pmap( -# ., -# purrr::possibly(function(...) create_execute_test_xml(list(...)), -# otherwise = NA) -# ) -# ) -# -# print(tab, n = Inf) - -## print to table -# tux_tab <- huxtable::hux(tab) -# html_table <- huxtable::print_html(tux_tab) -# htmlTable::htmlTable(tab) - - -## Test the Demos -# Site Niwot ridge -# Temperate COniferous -# Year 2003/01/01-2006/12/31 -# MET AmerifluxLBL -# Model - Sipnet -# Basic Run, then sensitivity and ensemble run. -## -demo_model <- 1000000014 -demo_met <- "AmerifluxLBL" -demo_start <- "2003/01/01" -demo_end <- "2006/12/31" -demo_site <- 772 - -demo_one_result <- create_execute_test_xml( - model_id = demo_model, - met = demo_met, - site_id = demo_site, - start_date = demo_start, - end_date = demo_end, - dbfiles_folder = dbfiles_folder, - user_id = user_id, - output_folder = "batch_test_demo1_output", -) - -demo_two_result <- create_execute_test_xml( - model_id = demo_model, - met = demo_met, - site_id = demo_site, - start_date = demo_start, - end_date = demo_end, - dbfiles_folder = dbfiles_folder, - user_id = user_id, - output_folder = "batch_test_demo2_output", - ensemble_size = 100, - sensitivity = TRUE -) diff --git a/base/workflow/inst/permutation_tests.R b/base/workflow/inst/permutation_tests.R new file mode 100755 index 00000000000..69af5f96598 --- /dev/null +++ b/base/workflow/inst/permutation_tests.R @@ -0,0 +1,142 @@ +#!/usr/bin/env Rscript + +# This script can be used to quickly generate a permuted list of models, +# met-products, and sites for comprehensive PEcAn integration testing. It +# produces as output a CSV file that can be used by `batch_run.R`. + +# It can take the following command line arguments: +# +# - --machine-id= -- Machine ID. If not provided, try to determine +# the machine ID from the hostname. 
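+#
+# Example invocation (the machine ID here is hypothetical; note that the
+# option is parsed with an underscore, i.e. `--machine_id=`):
+#
+#   Rscript permutation_tests.R --machine_id=99000000001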
+# (TODO: More command line arguments; for now, just modify the variables at the +# top) + +library(PEcAn.workflow) +library(tidyverse) + +argv <- commandArgs(trailingOnly = TRUE) + +met_name <- c("CRUNCEP", "AmerifluxLBL") +startdate <- "2004-01-01" +enddate <- "2004-12-31" +out.var <- "NPP" +ens_size <- 1 +sensitivity <- FALSE +outfile <- "permuted_table.csv" + +## Insert your path to base pecan +## pecan_path <- "/fs/data3/tonygard/work/pecan" +pecan_path <- file.path("..", "..") +php_file <- file.path(pecan_path, "web", "config.php") +stopifnot(file.exists(php_file)) +config.list <- PEcAn.utils::read_web_config(php_file) +bety <- PEcAn.DB::betyConnect(php_file) +con <- bety$con + +# Create path for outfile +dir.create(dirname(outfile), showWarnings = FALSE, recursive = TRUE) + +## Find name of Machine R is running on +machid_rxp <- "^--machine_id=" +if (any(grepl(machid_rxp, argv))) { + machid_raw <- grep(machid_rxp, argv, value = TRUE) + mach_id <- as.numeric(gsub(machid_rxp, "", machid_raw)) + message("Using specified machine ID: ", mach_id) +} else { + mach_name <- Sys.info()[[4]] + message("Auto-detected machine name: ", mach_name) + ## mach_name <- "docker" + mach_id <- tbl(bety, "machines") %>% + filter(hostname == !!mach_name) %>% + pull(id) +} + +## Find all models available on the current machine +model_df <- tbl(bety, "dbfiles") %>% + filter(machine_id == !!mach_id) %>% + filter(container_type == "Model") %>% + left_join(tbl(bety, "models"), c("container_id" = "id")) %>% + select(model_id = container_id, model_name, revision, + file_name, file_path, dbfile_id = id, ) %>% + collect() %>% + mutate(exists = file.exists(file.path(file_path, file_name))) + +message("Found the following models on the machine:") +print(model_df) + +if (!all(model_df$exists)) { + message("WARNING: The following models are registered on the machine ", + "but their files do not exist:") + model_df %>% + filter(!exists) %>% + print() + + model_df <- model_df %>% + filter(exists) +} + +## Find Sites +## Site with no inputs from any machines that is part of Ameriflux site group and Fluxnet Site group +site_id_noinput <- tbl(bety, "sites") %>% + anti_join(tbl(bety, "inputs")) %>% + inner_join(tbl(bety, "sitegroups_sites") %>% + filter(sitegroup_id == 1), + by = c("id" = "site_id")) %>% + dplyr::select("id.x", "notes", "sitename") %>% + dplyr::filter(grepl("TOWER_BEGAN", notes)) %>% + collect() %>% + dplyr::mutate( + # Grab years from string within the notes + start_year = substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_BEGAN = ).*(?= TOWER_END)")),1,4), + #Empty tower end in the notes means that it goes until present day so if empty enter curent year. 
+ end_year = dplyr::if_else( + substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) == "", + as.character(lubridate::year(Sys.Date())), + substring(stringr::str_extract(notes,pattern = ("(?<=TOWER_END = ).*(?=)")),1,4) + ), + #Check if startdate year is within the inerval of that is given + in_date = data.table::between(as.numeric(lubridate::year(startdate)),as.numeric(start_year),as.numeric(end_year)) + ) %>% + dplyr::filter( + in_date, + as.numeric(end_year) - as.numeric(start_year) > 1 + ) %>% + mutate(sitename = gsub(" ", "_", sitename)) %>% + rename(site_id = id.x) + +site_id_noinput %>% + select(site_id, sitename) %>% + print(n = Inf) + +message("Running tests at ", nrow(site_id_noinput), " sites:") +print(site_id_noinput) + +site_id <- site_id_noinput$site_id +site_name <- gsub(" ", "_", site_id_noinput$sitename) + +# Create permutations of all arguments +options(scipen = 999) +run_table <- tidyr::crossing( + # Keep model name and revision together -- don't cross them + tidyr::nesting( + model = model_df[["model_name"]], + revision = model_df[["revision"]] + # Eventually, PFT will go here... + ), + # Permute everything else + met = met_name, + site_id = site_id, + pft = NA, + start_date = startdate, + end_date = enddate, + # Possibly, nest these as well, e.g. + ## tidyr::nesting( + ## sensitivity = c(FALSE, TRUE), + ## ensemble_size = c(1, 100) + ## ), + sensitivity = sensitivity, + ensemble_size = ensemble_size, + comment = "" +) + +write.csv(run_table, outfile, row.names = FALSE) From 7f70a51fde518a091b969038a586867125833fd5 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 30 Aug 2019 20:26:14 +0200 Subject: [PATCH 0349/2289] shows diffs when files have changed 9/10 times the fix will be obvious from filenames alone, but this should help for the rare case where Travis results differ from local builds. These are very hard to debug otherwise --- scripts/travis/script.sh | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/scripts/travis/script.sh b/scripts/travis/script.sh index 58376ec020c..0940b586b0a 100755 --- a/scripts/travis/script.sh +++ b/scripts/travis/script.sh @@ -54,9 +54,12 @@ set -e ) # CHECK FOR CHANGES TO DOC/DEPENDENCIES -if [[ `git status -s` ]]; then - echo "These files were changed by the build process:"; +if [[ `git status -s` ]]; then + echo -e "\nThese files were changed by the build process:"; git status -s; echo "Have you run devtools::check and commited any updated Roxygen outputs?"; - exit 1; + echo -e "travis_fold:start:gitdiff\nFull diff:\n"; + git diff; + echo -e "travis_fold:end:gitdiff\n\n"; + exit 1; fi From d0f0b11ccf0ef7066be4a0c46c533e1e8895ac5f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 2 Sep 2019 10:32:24 +0200 Subject: [PATCH 0350/2289] fix 'number of columns of matrices must match' on R 3.5 Issue seems to be that C2D4U3.5 now includes binaries build on R 3.6, and R 3.6 changed internal namespace format by adding a column to the S3 method list. This patch works around by checking for packages built with later R than is currently running, and rebuilding them from source. --- scripts/travis/install.sh | 9 +++++++ scripts/travis/rebuild_pkg_binaries.R | 35 +++++++++++++++++++++++++++ 2 files changed, 44 insertions(+) create mode 100644 scripts/travis/rebuild_pkg_binaries.R diff --git a/scripts/travis/install.sh b/scripts/travis/install.sh index 9bb80041f21..b56a3d4d300 100755 --- a/scripts/travis/install.sh +++ b/scripts/travis/install.sh @@ -3,6 +3,15 @@ set -e . 
$( dirname $0 )/func.sh +# FIXING R BINARIES +( + travis_time_start "pkg_version_check" "Checking R package binaries" + + Rscript scripts/travis/rebuild_pkg_binaries.R + + travis_time_end +) + # INSTALLING SIPNET ( travis_time_start "install_sipnet" "Installing SIPNET for testing" diff --git a/scripts/travis/rebuild_pkg_binaries.R b/scripts/travis/rebuild_pkg_binaries.R new file mode 100644 index 00000000000..6c9ffa03cba --- /dev/null +++ b/scripts/travis/rebuild_pkg_binaries.R @@ -0,0 +1,35 @@ +#!/usr/bin/env Rscript + +# ¡ugly hack! +# +# Travis setup uses many prebuilt R packages from c2d4u3.5, which despite its +# name now contains some packages that were built for R 3.6. When loaded in +# R 3.5.x, these throw error `rbind(info, getNamespaceInfo(env, "S3methods")): +# number of columns of matrices must match`. +# +# We resolve this the slow brute-force way: By running this script before +# attempting to load any R packages, thereby reinstalling from source any +# package whose binary was built with R >= 3.6. +# +# In principle it's *always* unsafe to use a package built for a different R +# version, so it might be safest to rebuild any binary that fails the +# wrong_build test. But rebuilding is slooow and *probably* only needed for +# this special case, so we only do this check if run with R < 3.6. +# +# TODO: Remove this when c2d4u situation improves or we drop support for R 3.5. + +if (getRversion() < "3.6") { + wrong_build <- function(pkgname) { + # Typical packageDescription(pkgname)$Built result: + # "R 3.4.4; x86_64-apple-darwin15.6.0; 2019-03-18 04:41:51 UTC; unix" + build_ver <- substr(packageDescription(pkgname)$Built, start = 3, stop = 7) + build_ver > getRversion() + } + all_pkgs <- installed.packages()[,1] + needs_rebuild <- vapply(all_pkgs, wrong_build, logical(1)) + + if (any(needs_rebuild)) { + print("Found R packages that were built for a later R version. Reinstalling these from source.") + install.packages(all_pkgs[needs_rebuild], repos = "cloud.r-project.org", dependencies = FALSE) + } +} From d868410e4a90a38775c21d64ccb540586c392a87 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 2 Sep 2019 10:19:37 +0200 Subject: [PATCH 0351/2289] compare check results against previously cached version, error if any new messages --- scripts/check_with_errors.R | 51 +++++++++++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index f1cd1e39e3a..b127c956315 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -71,3 +71,54 @@ message(n_notes, ' notes found in ', pkg, '.') if (log_notes && n_notes > 0) { cat(notes, '\n') } + + + +# PEcAn has a lot of legacy code that issues check warnings, +# such that it's not yet practical to break the build on every warning. +# Cleaning this up is a long-term goal, but will take time. +# Meanwhile, we compare against a cached historic check output to enforce that +# no *new* warnings are added. As historic warnings are removed, we will update +# the cached results to ensure they stay gone. + +# Rcmdcheck identifies unique top-level warnings (e.g. "checking Rd cross-references ... WARNING"), +# but does not compare entire contents of warning (e.g. 
bad cross-references in one file counted same as in three different files) +# We want to get fussier and only allow existing *instances* of warnings, so let's parse a little more finely +msg_lines <- function(msg){ + msg <- strsplit( + gsub("\n ", " ", msg, fixed = TRUE), #leading double-space indicates line wrap + split = "\n", + fixed = TRUE) + msg <- purrr::map(msg, ~.[. != ""]) + purrr::flatten_chr(purrr::map(msg, ~paste(.[[1]], .[-1], sep=": "))) +} + +old_file <- file.path(pkg, "tests", "Rcheck_reference.log") +if (! file.exists(old_file)) { + # no reference output available, nothing else to do + quit("no") +} + +old <- rcmdcheck::parse_check(old_file) +cmp <- rcmdcheck::compare_checks(old, chk) + + +if (cmp$status != "+") { + print(cmp) + stop("R check of ", pkg, " reports new problems. Please fix them and resubmit.") +} else { + # No new messages, but need to check details of pre-existing ones + warn_cmp <- dplyr::filter(cmp$cmp, type == "warning") # stopped earlier for errors, notes let slide for now + reseen_msgs <- msg_lines(dplyr::filter(warn_cmp, which=="new")$output) + prev_msgs <- msg_lines(dplyr::filter(warn_cmp, which=="old")$output) + + lines_changed <- setdiff(reseen_msgs, prev_msgs) + if (length(lines_changed) > 0) { + print("Package check returned new warnings:") + print(lines_changed) + print("Please fix these and resubmit.") + } +} + +# If want to update saved result to use current check for future comparison: +# cat(chk$stdout, file = old_file) From 5598fca49c9dd020ac87f98a24235494f29a4b29 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 3 Sep 2019 13:00:51 +0200 Subject: [PATCH 0352/2289] cleanup --- scripts/check_with_errors.R | 30 +++++++++++++++++------------- 1 file changed, 17 insertions(+), 13 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index b127c956315..e6f37cee0b2 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -15,13 +15,13 @@ Sys.unsetenv( log_level <- Sys.getenv('LOGLEVEL', unset = NA) die_level <- Sys.getenv('DIELEVEL', unset = NA) redocument <- as.logical(Sys.getenv('REBUILD_DOCS', unset = NA)) -runtests <- as.logical(Sys.getenv('RUN_TESTS', unset = FALSE)) +runtests <- as.logical(Sys.getenv('RUN_TESTS', unset = TRUE)) # message('log_level = ', log_level) # message('die_level = ', die_level) # should test se run -if (as.logical(Sys.getenv('RUN_TESTS', unset = FALSE))) { +if (!runtests) { args <- c('--no-tests', '--timings') } else { args <- c('--timings') @@ -80,10 +80,13 @@ if (log_notes && n_notes > 0) { # Meanwhile, we compare against a cached historic check output to enforce that # no *new* warnings are added. As historic warnings are removed, we will update # the cached results to ensure they stay gone. +# +# To compare checks, we take a two-level approach: +# First by comparing results with rcmdcheck::compare_checks to find any new +# top-level warnings (e.g. "checking Rd cross-references ... WARNING"), +# then if those are OK we get fussier and check for new *instances* of existing +# warnings (e.g. new check increases from 2 bad Rd cross-references to 3). -# Rcmdcheck identifies unique top-level warnings (e.g. "checking Rd cross-references ... WARNING"), -# but does not compare entire contents of warning (e.g. 
bad cross-references in one file counted same as in three different files) -# We want to get fussier and only allow existing *instances* of warnings, so let's parse a little more finely msg_lines <- function(msg){ msg <- strsplit( gsub("\n ", " ", msg, fixed = TRUE), #leading double-space indicates line wrap @@ -95,7 +98,8 @@ msg_lines <- function(msg){ old_file <- file.path(pkg, "tests", "Rcheck_reference.log") if (! file.exists(old_file)) { - # no reference output available, nothing else to do + cat("No reference check file found. Saving current results as the new standard\n") + cat(chk$stdout, file = old_file) quit("no") } @@ -107,18 +111,18 @@ if (cmp$status != "+") { print(cmp) stop("R check of ", pkg, " reports new problems. Please fix them and resubmit.") } else { - # No new messages, but need to check details of pre-existing ones + # No new messages, but need to check details of pre-existing ones line by line warn_cmp <- dplyr::filter(cmp$cmp, type == "warning") # stopped earlier for errors, notes let slide for now reseen_msgs <- msg_lines(dplyr::filter(warn_cmp, which=="new")$output) prev_msgs <- msg_lines(dplyr::filter(warn_cmp, which=="old")$output) + # avoids false positives from tempdir changes + reseen_msgs <- stringr::str_replace_all(reseen_msgs, chk$checkdir, "...") + prev_msgs <- stringr::str_replace_all(prev_msgs, old$checkdir, "...") lines_changed <- setdiff(reseen_msgs, prev_msgs) if (length(lines_changed) > 0) { - print("Package check returned new warnings:") - print(lines_changed) - print("Please fix these and resubmit.") + cat("Package check returned new warnings:\n") + cat(lines_changed, "\n") + stop("Please fix these warnings and resubmit.") } } - -# If want to update saved result to use current check for future comparison: -# cat(chk$stdout, file = old_file) From d7b0e71f1a45a039a010ea41992c44884f11e892 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 3 Sep 2019 14:01:55 +0200 Subject: [PATCH 0353/2289] first versions of all reference check logs --- base/all/tests/Rcheck_reference.log | 58 + base/db/tests/Rcheck_reference.log | 199 +++ base/logger/tests/Rcheck_reference.log | 89 ++ base/qaqc/tests/Rcheck_reference.log | 124 ++ base/remote/tests/Rcheck_reference.log | 126 ++ base/settings/tests/Rcheck_reference.log | 204 +++ base/utils/tests/Rcheck_reference.log | 243 +++ base/visualization/tests/Rcheck_reference.log | 232 +++ base/workflow/tests/Rcheck_reference.log | 66 + models/biocro/tests/Rcheck_reference.log | 70 + models/clm45/tests/Rcheck_reference.log | 75 + models/dalec/tests/Rcheck_reference.log | 83 + models/dvmdostem/tests/Rcheck_reference.log | 80 + models/ed/tests/Rcheck_reference.log | 219 +++ models/fates/tests/Rcheck_reference.log | 81 + models/gday/tests/Rcheck_reference.log | 80 + models/jules/tests/Rcheck_reference.log | 92 ++ models/linkages/tests/Rcheck_reference.log | 163 ++ models/lpjguess/tests/Rcheck_reference.log | 114 ++ models/maat/tests/Rcheck_reference.log | 89 ++ models/maespa/tests/Rcheck_reference.log | 80 + models/preles/tests/Rcheck_reference.log | 87 + models/sipnet/tests/Rcheck_reference.log | 120 ++ models/template/tests/Rcheck_reference.log | 77 + modules/allometry/tests/Rcheck_reference.log | 137 ++ .../assim.batch/tests/Rcheck_reference.log | 232 +++ .../tests/Rcheck_reference.log | 953 +++++++++++ modules/benchmark/tests/Rcheck_reference.log | 373 +++++ .../tests/Rcheck_reference.log | 488 ++++++ .../data.hydrology/tests/Rcheck_reference.log | 42 + modules/data.land/tests/Rcheck_reference.log | 1424 +++++++++++++++++ 
.../data.remote/tests/Rcheck_reference.log | 116 ++ modules/emulator/tests/Rcheck_reference.log | 319 ++++ .../meta.analysis/tests/Rcheck_reference.log | 148 ++ .../photosynthesis/tests/Rcheck_reference.log | 113 ++ modules/priors/tests/Rcheck_reference.log | 192 +++ modules/rtm/tests/Rcheck_reference.log | 262 +++ .../uncertainty/tests/Rcheck_reference.log | 337 ++++ 38 files changed, 7987 insertions(+) create mode 100644 base/all/tests/Rcheck_reference.log create mode 100644 base/db/tests/Rcheck_reference.log create mode 100644 base/logger/tests/Rcheck_reference.log create mode 100644 base/qaqc/tests/Rcheck_reference.log create mode 100644 base/remote/tests/Rcheck_reference.log create mode 100644 base/settings/tests/Rcheck_reference.log create mode 100644 base/utils/tests/Rcheck_reference.log create mode 100644 base/visualization/tests/Rcheck_reference.log create mode 100644 base/workflow/tests/Rcheck_reference.log create mode 100644 models/biocro/tests/Rcheck_reference.log create mode 100644 models/clm45/tests/Rcheck_reference.log create mode 100644 models/dalec/tests/Rcheck_reference.log create mode 100644 models/dvmdostem/tests/Rcheck_reference.log create mode 100644 models/ed/tests/Rcheck_reference.log create mode 100644 models/fates/tests/Rcheck_reference.log create mode 100644 models/gday/tests/Rcheck_reference.log create mode 100644 models/jules/tests/Rcheck_reference.log create mode 100644 models/linkages/tests/Rcheck_reference.log create mode 100644 models/lpjguess/tests/Rcheck_reference.log create mode 100644 models/maat/tests/Rcheck_reference.log create mode 100644 models/maespa/tests/Rcheck_reference.log create mode 100644 models/preles/tests/Rcheck_reference.log create mode 100644 models/sipnet/tests/Rcheck_reference.log create mode 100644 models/template/tests/Rcheck_reference.log create mode 100644 modules/allometry/tests/Rcheck_reference.log create mode 100644 modules/assim.batch/tests/Rcheck_reference.log create mode 100644 modules/assim.sequential/tests/Rcheck_reference.log create mode 100644 modules/benchmark/tests/Rcheck_reference.log create mode 100644 modules/data.atmosphere/tests/Rcheck_reference.log create mode 100644 modules/data.hydrology/tests/Rcheck_reference.log create mode 100644 modules/data.land/tests/Rcheck_reference.log create mode 100644 modules/data.remote/tests/Rcheck_reference.log create mode 100644 modules/emulator/tests/Rcheck_reference.log create mode 100644 modules/meta.analysis/tests/Rcheck_reference.log create mode 100644 modules/photosynthesis/tests/Rcheck_reference.log create mode 100644 modules/priors/tests/Rcheck_reference.log create mode 100644 modules/rtm/tests/Rcheck_reference.log create mode 100644 modules/uncertainty/tests/Rcheck_reference.log diff --git a/base/all/tests/Rcheck_reference.log b/base/all/tests/Rcheck_reference.log new file mode 100644 index 00000000000..bba521a13ea --- /dev/null +++ b/base/all/tests/Rcheck_reference.log @@ -0,0 +1,58 @@ +* using log directory ‘/tmp/Rtmp3UHCCJ/PEcAn.all.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.all/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.all’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... 
NOTE +Depends: includes the non-default packages: + ‘PEcAn.DB’ ‘PEcAn.settings’ ‘PEcAn.MA’ ‘PEcAn.logger’ ‘PEcAn.utils’ + ‘PEcAn.uncertainty’ ‘PEcAn.data.atmosphere’ ‘PEcAn.data.land’ + ‘PEcAn.data.remote’ ‘PEcAn.assim.batch’ ‘PEcAn.emulator’ + ‘PEcAn.priors’ ‘PEcAn.benchmark’ ‘PEcAn.remote’ ‘PEcAn.workflow’ +Adding so many packages to the search path is excessive and importing +selectively is preferable. +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.all’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking for missing documentation entries ... WARNING +Undocumented data sets: + ‘pecan.packages’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... OK +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 1 WARNING, 2 NOTEs diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log new file mode 100644 index 00000000000..4f72f2fcdf1 --- /dev/null +++ b/base/db/tests/Rcheck_reference.log @@ -0,0 +1,199 @@ +* using log directory ‘/tmp/RtmpzhxcCR/PEcAn.DB.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.DB/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.DB’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.DB’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... 
NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' import not declared from: ‘bit64’ +'loadNamespace' or 'requireNamespace' call not declared from: ‘bit64’ +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +clone_pft: no visible binding for global variable ‘name’ +clone_pft: no visible binding for global variable ‘id’ +clone_pft: no visible binding for global variable ‘created_at’ +clone_pft: no visible binding for global variable ‘updated_at’ +clone_pft: no visible binding for global variable ‘pft_id’ +db.exists: no visible binding for '<<-' assignment to ‘user.permission’ +db.exists: no visible binding for global variable ‘user.permission’ +dbfile.check: no visible binding for global variable ‘container_type’ +dbfile.check: no visible binding for global variable ‘container_id’ +dbfile.check: no visible binding for global variable ‘machine_id’ +dbfile.check: no visible binding for global variable ‘updated_at’ +dbHostInfo: no visible binding for global variable ‘sync_host_id’ +dbHostInfo: no visible binding for global variable ‘sync_start’ +dbHostInfo: no visible binding for global variable ‘sync_end’ +get_run_ids: no visible binding for global variable ‘run_id’ +get_users: no visible binding for global variable ‘id’ +get_workflow_ids: no visible binding for global variable ‘workflow_id’ +get.trait.data.pft: no visible binding for global variable ‘pft_id’ +get.trait.data.pft: no visible binding for global variable ‘created_at’ +get.trait.data.pft: no visible global function definition for ‘head’ +get.trait.data.pft: no visible binding for global variable ‘stat’ +get.trait.data.pft: no visible binding for global variable ‘trait’ +insert.format.vars: no visible binding for global variable ‘id’ +insert.format.vars: no visible binding for global variable ‘name’ +insert.format.vars: no visible global function definition for ‘collect’ +load_data_single_run: no visible global function definition for + ‘read.output’ +load_data_single_run: no visible binding for global variable ‘var_name’ +load_data_single_run: no visible binding for global variable ‘vals’ +load_data_single_run: no visible binding for global variable ‘posix’ +match_dbcols: no visible global function definition for ‘head’ +match_dbcols: no visible binding for global variable ‘.’ +match_dbcols: no visible binding for global variable ‘as’ +query_pfts: no visible binding for global variable ‘name’ +query.data: no visible binding for global variable ‘settings’ +query.format.vars: no visible binding for global variable ‘variable_id’ +query.format.vars: no visible binding for global variable + ‘storage_type’ +query.pft_cultivars: no visible binding for global variable 
‘name’ +query.pft_cultivars: no visible binding for global variable ‘pft_type’ +query.pft_cultivars: no visible binding for global variable ‘name.mt’ +query.pft_cultivars: no visible binding for global variable + ‘cultivar_id’ +query.pft_cultivars: no visible binding for global variable ‘specie_id’ +query.pft_cultivars: no visible binding for global variable ‘genus’ +query.pft_cultivars: no visible binding for global variable ‘species’ +query.pft_cultivars: no visible binding for global variable + ‘scientificname’ +query.pft_cultivars: no visible binding for global variable ‘name.cv’ +query.priors: no visible binding for global variable ‘settings’ +query.traits: no visible binding for global variable ‘settings’ +query.traits: no visible binding for global variable ‘name’ +rename_jags_columns: no visible binding for global variable ‘stat’ +rename_jags_columns: no visible binding for global variable ‘n’ +rename_jags_columns: no visible binding for global variable ‘trt_id’ +rename_jags_columns: no visible binding for global variable ‘site_id’ +rename_jags_columns: no visible binding for global variable + ‘citation_id’ +rename_jags_columns: no visible binding for global variable + ‘greenhouse’ +runs: no visible binding for global variable ‘folder’ +runs: no visible binding for global variable ‘id’ +runs: no visible binding for global variable ‘ensemble_id’ +search_reference_single: no visible binding for global variable ‘score’ +search_reference_single: no visible binding for global variable + ‘author’ +search_reference_single: no visible binding for global variable + ‘author_family’ +search_reference_single: no visible binding for global variable + ‘author_given’ +search_reference_single: no visible binding for global variable + ‘issued’ +symmetric_setdiff: no visible global function definition for ‘:=’ +workflows: no visible binding for global variable ‘workflow_id’ +Undefined global functions or variables: + := . as author author_family author_given citation_id collect + container_id container_type created_at cultivar_id ensemble_id folder + genus greenhouse head id issued machine_id n name name.cv name.mt + pft_id pft_type posix read.output run_id scientificname score + settings site_id specie_id species stat storage_type sync_end + sync_host_id sync_start trait trt_id updated_at user.permission vals + var_name variable_id workflow_id +Consider adding + importFrom("methods", "as") + importFrom("utils", "head") +to your NAMESPACE file (and ensure that your DESCRIPTION Imports field +contains 'methods'). +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'dbfile.input.insert.Rd': + \examples lines wider than 100 characters: + dbfile.input.insert('trait.data.Rdata', siteid, startdate, enddate, 'application/x-RData', 'traits', dbcon) + +Rd file 'insert.format.vars.Rd': + \examples lines wider than 100 characters: + insert.format.vars(con = bety$con, format_name = "LTER-HFR-103", mimetype_id = 1090, notes = "NPP from Harvard Forest.", header = FAL ... [TRUNCATED] + +Rd file 'query.data.Rd': + \usage lines wider than 90 characters: + extra.columns = "ST_X(ST_CENTROID(sites.geometry)) AS lon, ST_Y(ST_CENTROID(sites.geometry)) AS lat, ", + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'get_workflow_ids' + ‘all.ids’ + +Undocumented arguments in documentation object 'ncdays2date' + ‘time’ ‘unit’ + +Undocumented arguments in documentation object 'query.data' + ‘store.unconverted’ + +Undocumented arguments in documentation object 'query.file.path' + ‘input.id’ +Documented arguments not in \usage in documentation object 'query.file.path': + ‘input_id’ + +Undocumented arguments in documentation object 'query.format.vars' + ‘input.id’ +Documented arguments not in \usage in documentation object 'query.format.vars': + ‘input_id’ + +Undocumented arguments in documentation object 'query.site' + ‘site.id’ +Documented arguments not in \usage in documentation object 'query.site': + ‘site_id’ + +Undocumented arguments in documentation object 'query.trait.data' + ‘con’ ‘update.check.only’ ‘...’ + +Undocumented arguments in documentation object 'query_pfts' + ‘strict’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'query_pfts': + ‘modeltype’ + +* checking for unstated dependencies in examples ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘betydb_access.Rmd’, ‘create_sites.geometry.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 4 WARNINGs, 3 NOTEs diff --git a/base/logger/tests/Rcheck_reference.log b/base/logger/tests/Rcheck_reference.log new file mode 100644 index 00000000000..0c596edc658 --- /dev/null +++ b/base/logger/tests/Rcheck_reference.log @@ -0,0 +1,89 @@ +* using log directory ‘/tmp/Rtmp1Q6pph/PEcAn.logger.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-manual --as-cran’ +* checking for file ‘PEcAn.logger/DESCRIPTION’ ... OK +* this is package ‘PEcAn.logger’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.logger’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... 
OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... OK +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +logger.message: no visible global function definition for ‘dump.frames’ +logger.message: no visible binding for global variable ‘dump.log’ +logger.message: no visible global function definition for ‘tail’ +print2string: no visible global function definition for + ‘capture.output’ +Undefined global functions or variables: + capture.output dump.frames dump.log tail +Consider adding + importFrom("utils", "capture.output", "dump.frames", "tail") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'severeifnot.Rd': + \examples lines wider than 100 characters: + severeifnot("I absolutely cannot deal with the fact that something is not a list.", is.list(a), is.list(b)) + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... WARNING +Missing link or links in documentation object 'logger.severe.Rd': + ‘logger.setQuitOnSevere(FALSE)’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'check_conditions' + ‘...’ + +Undocumented arguments in documentation object 'is_definitely_true' + ‘x’ + +Undocumented arguments in documentation object 'logger.getLevelNumber' + ‘level’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* DONE +Status: 2 WARNINGs, 3 NOTEs diff --git a/base/qaqc/tests/Rcheck_reference.log b/base/qaqc/tests/Rcheck_reference.log new file mode 100644 index 00000000000..d4b17e050de --- /dev/null +++ b/base/qaqc/tests/Rcheck_reference.log @@ -0,0 +1,124 @@ +* using log directory ‘/tmp/Rtmp0KMN9Z/PEcAn.qaqc.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.qaqc/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.qaqc’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.qaqc’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... 
NOTE +Malformed Description field: should contain one or more complete sentences. +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... NOTE +Non-standard file/directory found at top level: + ‘README.Rmd’ +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘dplyr’ ‘PEcAn.DB’ +Package in Depends field not imported from: ‘plotrix’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +cull_database_entries: no visible global function definition for + ‘read.table’ +cull_database_entries: no visible global function definition for + ‘write.table’ +find_formats_without_inputs: no visible binding for global variable + ‘user_id’ +find_formats_without_inputs: no visible binding for global variable + ‘created_at’ +find_formats_without_inputs: no visible binding for global variable + ‘updated_at’ +find_inputs_without_formats: no visible binding for global variable + ‘user_id_code’ +find_inputs_without_formats: no visible binding for global variable + ‘created_at’ +find_inputs_without_formats: no visible binding for global variable + ‘updated_at’ +new.taylor: no visible global function definition for ‘taylor.diagram’ +new.taylor: no visible binding for global variable ‘obs’ +new.taylor: no visible binding for global variable ‘site’ +new.taylor: no visible global function definition for ‘cor’ +new.taylor: no visible global function definition for ‘sd’ +new.taylor: no visible global function definition for ‘text’ +write_out_table: no visible global function definition for + ‘write.table’ +Undefined global functions or variables: + cor created_at obs read.table sd site taylor.diagram text updated_at + user_id user_id_code write.table +Consider adding + importFrom("graphics", "text") + importFrom("stats", "cor", "sd") + importFrom("utils", "read.table", "write.table") +to your NAMESPACE file. + +Found the following calls to attach(): +File ‘PEcAn.qaqc/R/taylor.plot.R’: + attach(dataset) +See section ‘Good practice’ in ‘?attach’. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘cull_database_entries’ ‘find_formats_without_inputs’ + ‘find_inputs_without_formats’ ‘get_table_column_names’ + ‘write_out_table’ +All user-level objects in a package should have documentation entries. 
+See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'new.taylor' + ‘dataset’ ‘siteid’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘compare_ED2.Rmd’, ‘function_relationships.Rmd’, + ‘lebauer2013ffb.Rmd’, ‘module_output.Rmd’, + ‘Pre-release-database-cleanup.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 4 WARNINGs, 3 NOTEs diff --git a/base/remote/tests/Rcheck_reference.log b/base/remote/tests/Rcheck_reference.log new file mode 100644 index 00000000000..88026a3093d --- /dev/null +++ b/base/remote/tests/Rcheck_reference.log @@ -0,0 +1,126 @@ +* using log directory ‘/tmp/RtmpUHZsfY/PEcAn.remote.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-manual --as-cran’ +* checking for file ‘PEcAn.remote/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.remote’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.remote’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' import not declared from: ‘PEcAn.DB’ +* checking S3 generic/method consistency ... WARNING +start: + function(x, ...) +start.model.runs: + function(settings, write, stop.on.error) + +See section ‘Generic functions and methods’ in the ‘Writing R +Extensions’ manual. 
+ +Found the following apparent S3 methods exported but not registered: + start.model.runs +See section ‘Registering S3 methods’ in the ‘Writing R Extensions’ +manual. +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +remote.copy.update: no visible global function definition for + ‘db.query’ +remote.copy.update: no visible binding for global variable ‘putveg.id’ +remote.copy.update: no visible binding for global variable ‘i’ +runModule.start.model.runs: no visible global function definition for + ‘is.MultiSettings’ +runModule.start.model.runs: no visible global function definition for + ‘is.Settings’ +stamp_finished: no visible global function definition for ‘db.query’ +stamp_started: no visible global function definition for ‘db.query’ +start.model.runs: no visible global function definition for + ‘txtProgressBar’ +start.model.runs: no visible global function definition for ‘db.open’ +start.model.runs: no visible global function definition for ‘db.close’ +start.model.runs: no visible global function definition for + ‘setTxtProgressBar’ +Undefined global functions or variables: + db.close db.open db.query i is.MultiSettings is.Settings putveg.id + setTxtProgressBar txtProgressBar +Consider adding + importFrom("utils", "setTxtProgressBar", "txtProgressBar") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘runModule.start.model.runs’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'kill.tunnel' + ‘settings’ ‘exe’ ‘data’ + +Documented arguments not in \usage in documentation object 'remote.copy.update': + ‘stderr’ + +Undocumented arguments in documentation object 'remote.execute.R' + ‘R’ ‘scratchdir’ +Documented arguments not in \usage in documentation object 'remote.execute.R': + ‘args’ + +Undocumented arguments in documentation object 'remote.execute.cmd' + ‘cmd’ +Documented arguments not in \usage in documentation object 'remote.execute.cmd': + ‘command’ + +Undocumented arguments in documentation object 'start_rabbitmq' + ‘folder’ ‘rabbitmq_uri’ ‘rabbitmq_queue’ + +Undocumented arguments in documentation object 'test_remote' + ‘...’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... 
+ OK +* DONE +Status: 4 WARNINGs, 2 NOTEs diff --git a/base/settings/tests/Rcheck_reference.log b/base/settings/tests/Rcheck_reference.log new file mode 100644 index 00000000000..43dff58f983 --- /dev/null +++ b/base/settings/tests/Rcheck_reference.log @@ -0,0 +1,204 @@ +* using log directory ‘/tmp/RtmpNKOZJQ/PEcAn.settings.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.settings/DESCRIPTION’ ... OK +* this is package ‘PEcAn.settings’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.settings’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +Malformed Description field: should contain one or more complete sentences. +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... NOTE +Non-standard file/directory found at top level: + ‘examples’ +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' import not declared from: ‘purrr’ +Packages in Depends field not imported from: + ‘methods’ ‘PEcAn.DB’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... WARNING +listToXml: + function(x, ...) +listToXml.MultiSettings: + function(item, tag, collapse) + +listToXml: + function(x, ...) +listToXml.default: + function(item, tag) + +printAll: + function(x) +printAll.MultiSettings: + function(multiSettings) + +update: + function(object, ...) +update.settings: + function(settings, force) + +See section ‘Generic functions and methods’ in the ‘Writing R +Extensions’ manual. + +Found the following apparent S3 methods exported but not registered: + update.settings +See section ‘Registering S3 methods’ in the ‘Writing R Extensions’ +manual. +* checking replacement functions ... WARNING + ‘$<-.MultiSettings’ ‘[<-.MultiSettings’ ‘[[<-.MultiSettings’ +The argument of a replacement function which corresponds to the right +hand side must be named ‘value’. +* checking foreign function calls ... OK +* checking R code for possible problems ... 
NOTE +check.bety.version: no visible global function definition for ‘tail’ +check.ensemble.settings: no visible binding for global variable + ‘startdate’ +check.ensemble.settings: no visible binding for global variable + ‘enddate’ +check.settings: no visible global function definition for + ‘remote.execute.cmd’ +loadPath.sitePFT: no visible global function definition for + ‘read.table’ +loadPath.sitePFT: no visible global function definition for ‘%>%’ +site.pft.link.settings: no visible global function definition for ‘%>%’ +site.pft.link.settings: no visible global function definition for + ‘setNames’ +site.pft.link.settings: no visible binding for global variable ‘.’ +site.pft.linkage: no visible global function definition for ‘%>%’ +site.pft.linkage : : no visible global function definition + for ‘%>%’ +site.pft.linkage : : no visible global function definition + for ‘setNames’ +update.settings: no visible binding for global variable ‘null’ +Undefined global functions or variables: + . %>% enddate null read.table remote.execute.cmd setNames startdate + tail +Consider adding + importFrom("stats", "setNames") + importFrom("utils", "read.table", "tail") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'createMultiSiteSettings.Rd': + \examples lines wider than 100 characters: + prerun = "module load udunits R/R-3.0.0_gnu-4.4.6", dbfiles = "/projectnb/dietzelab/pecan.data/input", + modellauncher = structure(list(binary = "/usr2/postdoc/rykelly/pecan/utils/modellauncher/modellauncher", + "output")), lu = structure(list(id = "294", path = "/projectnb/dietzelab/EDI/ed_inputs/glu/"), .Names = c("id", + "path")), soil = structure(list(id = "297", path = "/projectnb/dietzelab/EDI/faoOLD/FAO_"), .Names = c("id", + "path")), thsum = structure(list(id = "295", path = "/projectnb/dietzelab/EDI/ed_inputs/"), .Names = c("id", + "path")), veg = structure(list(id = "296", path = "/projectnb/dietzelab/EDI/oge2OLD/OGE2_"), .Names = c("id", + +Rd file 'createSitegroupMultiSettings.Rd': + \examples lines wider than 100 characters: + prerun = "module load udunits R/R-3.0.0_gnu-4.4.6", dbfiles = "/projectnb/dietzelab/pecan.data/input", + modellauncher = structure(list(binary = "/usr2/postdoc/rykelly/pecan/utils/modellauncher/modellauncher", + "output")), lu = structure(list(id = "294", path = "/projectnb/dietzelab/EDI/ed_inputs/glu/"), .Names = c("id", + "path")), soil = structure(list(id = "297", path = "/projectnb/dietzelab/EDI/faoOLD/FAO_"), .Names = c("id", + "path")), thsum = structure(list(id = "295", path = "/projectnb/dietzelab/EDI/ed_inputs/"), .Names = c("id", + "path")), veg = structure(list(id = "296", path = "/projectnb/dietzelab/EDI/oge2OLD/OGE2_"), .Names = c("id", + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... OK +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘as.MultiSettings’ ‘as.Settings’ ‘expandMultiSettings’ + ‘is.MultiSettings’ ‘is.Settings’ ‘listToXml’ ‘printAll’ + ‘settingNames’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'MultiSettings' + ‘...’ + +Duplicated \argument entries in documentation object 'SafeList': + ‘x’ + +Undocumented arguments in documentation object 'addSecrets' + ‘force’ + +Undocumented arguments in documentation object 'check.bety.version' + ‘dbcon’ +Documented arguments not in \usage in documentation object 'check.bety.version': + ‘settings’ + +Undocumented arguments in documentation object 'check.database' + ‘database’ +Documented arguments not in \usage in documentation object 'check.database': + ‘settings’ + +Undocumented arguments in documentation object 'check.model.settings' + ‘dbcon’ + +Undocumented arguments in documentation object 'check.run.settings' + ‘dbcon’ + +Undocumented arguments in documentation object 'check.settings' + ‘force’ + +Undocumented arguments in documentation object 'check.workflow.settings' + ‘dbcon’ + +Undocumented arguments in documentation object 'clean.settings' + ‘write’ + +Undocumented arguments in documentation object 'fix.deprecated.settings' + ‘force’ + +Undocumented arguments in documentation object 'papply' + ‘...’ +Documented arguments not in \usage in documentation object 'papply': + ‘\code{...}’ + +Documented arguments not in \usage in documentation object 'read.settings': + ‘outputfile’ + +Undocumented arguments in documentation object 'update.settings' + ‘force’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'getRunSettings': + ‘templateSettings’ ‘siteId’ + +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 6 WARNINGs, 4 NOTEs diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log new file mode 100644 index 00000000000..f40bafe588f --- /dev/null +++ b/base/utils/tests/Rcheck_reference.log @@ -0,0 +1,243 @@ +* using log directory ‘/tmp/RtmpWosl3N/PEcAn.utils.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.utils/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.utils’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.utils’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... 
NOTE +Non-standard file/directory found at top level: + ‘scripts’ +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' import not declared from: ‘PEcAn.uncertainty’ +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +convert.input: no visible binding for global variable ‘settings’ +convert.input: no visible binding for global variable ‘id’ +convert.input : log_format_df: no visible binding for global variable + ‘.’ +get.results: no visible binding for global variable ‘trait.samples’ +get.results: no visible binding for global variable ‘runs.samples’ +get.results: no visible binding for global variable ‘sa.samples’ +grid2netcdf: no visible binding for global variable ‘years’ +grid2netcdf: no visible binding for global variable ‘yieldarray’ +mcmc.list2init: no visible binding for global variable ‘nr’ +plot_data: no visible binding for global variable ‘x’ +plot_data: no visible binding for global variable ‘y’ +plot_data: no visible binding for global variable ‘control’ +plot_data: no visible binding for global variable ‘se’ +read.ensemble.output: no visible binding for global variable + ‘runs.samples’ +read.output: no visible binding for global variable ‘median’ +read.sa.output: no visible binding for global variable ‘runs.samples’ +run.write.configs: no visible global function definition for + ‘get.parameter.samples’ +run.write.configs: no visible binding for global variable + ‘trait.samples’ +run.write.configs: no visible binding for global variable ‘sa.samples’ +run.write.configs: no visible binding for global variable + ‘ensemble.samples’ +status.check: no visible binding for global variable ‘settings’ +status.end: no visible binding for global variable ‘settings’ +status.skip: no visible binding for global variable ‘settings’ +status.start: no visible binding for global variable ‘settings’ +summarize.result: no visible binding for global variable ‘n’ +summarize.result: no visible binding for global variable ‘citation_id’ +summarize.result: no visible binding for global variable ‘site_id’ +summarize.result: no visible binding for global variable ‘trt_id’ +summarize.result: no visible binding for global variable ‘control’ +summarize.result: no visible binding for global variable ‘greenhouse’ +summarize.result: no visible binding for global variable ‘time’ +summarize.result: no visible binding for global variable ‘cultivar_id’ +summarize.result: no visible binding for global variable ‘specie_id’ +summarize.result: no visible binding for global variable ‘statname’ +transformstats: no visible binding for global variable ‘trait’ +Undefined global functions or variables: + . 
citation_id control cultivar_id ensemble.samples + get.parameter.samples greenhouse id median n nr runs.samples + sa.samples se settings site_id specie_id statname time trait + trait.samples trt_id x y years yieldarray +Consider adding + importFrom("stats", "median", "time") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'retry.func.Rd': + \examples lines wider than 100 characters: + ncdf4::nc_open('https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'), + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... WARNING +Missing link or links in documentation object 'create.base.plot.Rd': + ‘plot.prior.density’ ‘plot.posterior.density’ + +Missing link or links in documentation object 'download.file.Rd': + ‘method’ + +Missing link or links in documentation object 'get.results.Rd': + ‘read.settings’ + +Missing link or links in documentation object 'read.output.Rd': + ‘[PEcAn.benchmark:align.data]{PEcAn.benchmark::align.data()}’ + +Missing link or links in documentation object 'standard_vars.Rd': + ‘[udunits2]{udunits}’ + +Missing link or links in documentation object 'write.ensemble.configs.Rd': + ‘write.config.ED’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘logger.error’ ‘logger.getLevel’ ‘logger.info’ ‘logger.setLevel’ + ‘logger.setOutputFile’ ‘logger.setQuitOnSevere’ ‘logger.setWidth’ + ‘logger.severe’ ‘logger.warn’ ‘mstmip_local’ ‘mstmip_vars’ + ‘runModule.get.results’ ‘runModule.run.write.configs’ + ‘trait.dictionary’ +Undocumented data sets: + ‘mstmip_local’ ‘mstmip_vars’ ‘trait.dictionary’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'convert.input' + ‘dbparms’ + +Undocumented arguments in documentation object 'ensemble.filename' + ‘settings’ ‘prefix’ ‘suffix’ ‘all.var.yr’ ‘ensemble.id’ ‘variable’ + ‘start.year’ ‘end.year’ + +Undocumented arguments in documentation object 'full.path' + ‘folder’ + +Undocumented arguments in documentation object 'get.ensemble.samples' + ‘...’ + +Undocumented arguments in documentation object 'get.model.output' + ‘settings’ + +Undocumented arguments in documentation object 'get.results' + ‘sa.ensemble.id’ ‘ens.ensemble.id’ ‘variable’ ‘start.year’ ‘end.year’ + +Undocumented arguments in documentation object 'grid2netcdf' + ‘gdata’ ‘date’ ‘outfile’ +Documented arguments not in \usage in documentation object 'grid2netcdf': + ‘grid.data’ + +Undocumented arguments in documentation object 'logger.debug' + ‘...’ + +Undocumented arguments in documentation object 'mstmipvar' + ‘silent’ + +Undocumented arguments in documentation object 'plot_data' + ‘color’ + +Undocumented arguments in documentation object 'r2bugs.distributions' + ‘direction’ + +Undocumented arguments in documentation object 'read.ensemble.output' + ‘variable’ ‘ens.run.ids’ +Documented arguments not in \usage in documentation object 'read.ensemble.output': + ‘variables’ + +Undocumented arguments in documentation object 'read.sa.output' + ‘variable’ ‘sa.run.ids’ +Documented arguments not in \usage in documentation object 'read.sa.output': + ‘variables’ + +Undocumented arguments in documentation object 'retry.func' + ‘isError’ + +Undocumented arguments in documentation object 'run.write.configs' + ‘settings’ ‘overwrite’ +Documented arguments not in \usage in documentation object 'run.write.configs': + ‘model’ + +Undocumented arguments in documentation object 'seconds_in_year' + ‘...’ + +Undocumented arguments in documentation object 'sensitivity.filename' + ‘settings’ ‘prefix’ ‘suffix’ ‘all.var.yr’ ‘pft’ ‘ensemble.id’ + ‘variable’ ‘start.year’ ‘end.year’ + +Undocumented arguments in documentation object 'status.check' + ‘name’ + +Undocumented arguments in documentation object 'status.end' + ‘status’ + +Undocumented arguments in documentation object 'status.skip' + ‘name’ + +Undocumented arguments in documentation object 'status.start' + ‘name’ + +Undocumented arguments in documentation object 'write.ensemble.configs' + ‘model’ ‘write.to.db’ +Documented arguments not in \usage in documentation object 'write.ensemble.configs': + ‘write.config’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'bugs.rdist': + ‘n’ + +Argument items with no description in Rd object 'get.sa.sample.list': + ‘env’ + +Argument items with no description in Rd object 'grid2netcdf': + ‘grid.data’ + +Argument items with no description in Rd object 'theme_border': + ‘type’ ‘colour’ ‘size’ ‘linetype’ + +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... 
SKIPPED +* DONE +Status: 5 WARNINGs, 4 NOTEs diff --git a/base/visualization/tests/Rcheck_reference.log b/base/visualization/tests/Rcheck_reference.log new file mode 100644 index 00000000000..0a8a3950dcf --- /dev/null +++ b/base/visualization/tests/Rcheck_reference.log @@ -0,0 +1,232 @@ +* using log directory ‘/tmp/RtmpP8kDnH/PEcAn.visualization.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.visualization/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.visualization’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.visualization’ can be installed ... WARNING +Found the following significant warnings: + Note: possible error in 'pdf(filename = filename, ': unused argument (filename = filename) +See ‘/tmp/RtmpP8kDnH/PEcAn.visualization.Rcheck/00install.out’ for details. +Information on the location(s) of code generating the ‘Note’s can be +obtained by re-running with environment variable R_KEEP_PKG_SOURCE set +to ‘yes’. +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... NOTE +'library' or 'require' calls in package code: + ‘grid’ ‘png’ + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +Packages in Depends field not imported from: + ‘data.table’ ‘ggplot2’ ‘raster’ ‘sp’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... WARNING +plot: + function(x, ...) +plot.netcdf: + function(datafile, yvar, xvar, width, height, filename, year) + +See section ‘Generic functions and methods’ in the ‘Writing R +Extensions’ manual. + +Found the following apparent S3 methods exported but not registered: + plot.netcdf +See section ‘Registering S3 methods’ in the ‘Writing R Extensions’ +manual. +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... 
NOTE +Found an obsolete/platform-specific call in the following function: + ‘plot.netcdf’ +Found the platform-specific device: + ‘x11’ +dev.new() is the preferred way to open a new device, in the unlikely +event one is needed. +add_icon: no visible global function definition for ‘readPNG’ +add_icon: no visible global function definition for ‘rasterGrob’ +add_icon: no visible global function definition for ‘unit’ +add_icon: no visible global function definition for ‘grid.draw’ +add_icon: no visible global function definition for ‘textGrob’ +ciEnvelope: no visible global function definition for ‘polygon’ +create_status_page: no visible global function definition for + ‘download.file’ +create_status_page: no visible binding for global variable ‘nodeinfo’ +create_status_page: no visible global function definition for ‘png’ +create_status_page: no visible global function definition for + ‘extendrange’ +create_status_page : <anonymous> : <anonymous>: no visible global + function definition for ‘segments’ +create_status_page : <anonymous>: no visible global function definition + for ‘points’ +create_status_page : <anonymous>: no visible global function definition + for ‘text’ +create_status_page: no visible global function definition for ‘text’ +create_status_page: no visible global function definition for ‘dev.off’ +data.fetch: no visible global function definition for ‘aggregate’ +map.output: no visible global function definition for ‘data.table’ +map.output: no visible global function definition for ‘map_data’ +map.output: no visible global function definition for ‘ggplot’ +map.output: no visible global function definition for ‘geom_polygon’ +map.output: no visible global function definition for ‘aes’ +map.output: no visible binding for global variable ‘long’ +map.output: no visible binding for global variable ‘lat’ +map.output: no visible binding for global variable ‘group’ +map.output: no visible global function definition for ‘geom_point’ +map.output: no visible binding for global variable ‘lon’ +map.output: no visible global function definition for + ‘scale_color_gradientn’ +map.output: no visible global function definition for ‘theme_bw’ +map.output: no visible global function definition for ‘xlim’ +map.output: no visible global function definition for ‘ylim’ +plot.netcdf: no visible global function definition for ‘x11’ +plot.netcdf: no visible global function definition for ‘png’ +plot.netcdf: no visible global function definition for ‘pdf’ +plot.netcdf: no visible global function definition for ‘jpg’ +plot.netcdf: no visible global function definition for ‘tiff’ +plot.netcdf: no visible global function definition for ‘plot.new’ +plot.netcdf: no visible global function definition for ‘title’ +plot.netcdf: no visible global function definition for ‘plot.window’ +plot.netcdf: no visible global function definition for ‘polygon’ +plot.netcdf: no visible global function definition for ‘lines’ +plot.netcdf: no visible global function definition for ‘points’ +plot.netcdf: no visible global function definition for ‘legend’ +plot.netcdf: no visible global function definition for ‘axis’ +plot.netcdf: no visible global function definition for ‘box’ +plot.netcdf: no visible global function definition for ‘dev.off’ +vwReg: no visible binding for global variable ‘loess’ +vwReg: no visible global function definition for ‘colorRampPalette’ +vwReg: no visible global function definition for ‘na.omit’ +vwReg: no visible global function definition for ‘loess.control’ +vwReg: no visible global function definition for ‘predict’ +vwReg : <anonymous>: no visible global function definition for + ‘quantile’ +vwReg : <anonymous>: no visible global function definition for ‘pnorm’ +vwReg: no visible global function definition for ‘ggplot’ +vwReg: no visible global function definition for ‘aes_string’ +vwReg: no visible global function definition for ‘theme_bw’ +vwReg: no visible global function definition for ‘flush.console’ +vwReg : <anonymous>: no visible global function definition for + ‘density’ +vwReg: no visible global function definition for ‘geom_tile’ +vwReg: no visible global function definition for ‘aes’ +vwReg: no visible binding for global variable ‘x’ +vwReg: no visible binding for global variable ‘y’ +vwReg: no visible binding for global variable ‘dens.scaled’ +vwReg: no visible binding for global variable ‘alpha.factor’ +vwReg: no visible global function definition for ‘scale_fill_gradientn’ +vwReg: no visible global function definition for + ‘scale_alpha_continuous’ +vwReg: no visible global function definition for ‘geom_polygon’ +vwReg: no visible binding for global variable ‘value’ +vwReg: no visible binding for global variable ‘group’ +vwReg: no visible global function definition for ‘geom_path’ +vwReg: no visible binding for global variable ‘M’ +vwReg: no visible binding for global variable ‘w3’ +vwReg: no visible binding for global variable ‘UL’ +vwReg: no visible binding for global variable ‘LL’ +vwReg: no visible global function definition for ‘geom_smooth’ +vwReg: no visible global function definition for ‘geom_point’ +vwReg: no visible global function definition for ‘theme’ +Undefined global functions or variables: + aes aes_string aggregate alpha.factor axis box colorRampPalette + data.table dens.scaled density dev.off download.file extendrange + flush.console geom_path geom_point geom_polygon geom_smooth geom_tile + ggplot grid.draw group jpg lat legend lines LL loess loess.control + lon long M map_data na.omit nodeinfo pdf plot.new plot.window png + pnorm points polygon predict quantile rasterGrob readPNG + scale_alpha_continuous scale_color_gradientn scale_fill_gradientn + segments text textGrob theme theme_bw tiff title UL unit value w3 x + x11 xlim y ylim +Consider adding + importFrom("graphics", "axis", "box", "legend", "lines", "plot.new", + "plot.window", "points", "polygon", "segments", "text", + "title") + importFrom("grDevices", "colorRampPalette", "dev.off", "extendrange", + "pdf", "png", "tiff", "x11") + importFrom("stats", "aggregate", "density", "loess", "loess.control", + "na.omit", "pnorm", "predict", "quantile") + importFrom("utils", "download.file", "flush.console") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... WARNING +Undocumented data sets: + ‘counties’ ‘yielddf’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'ciEnvelope' + ‘...’ + +Undocumented arguments in documentation object 'plot.hdf5' + ‘width’ ‘height’ +Duplicated \argument entries in documentation object 'plot.hdf5': + ‘the’ +Documented arguments not in \usage in documentation object 'plot.hdf5': + ‘the’ + +Undocumented arguments in documentation object 'vwReg' + ‘formula’ ‘data’ ‘title’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +S3 methods shown with full name in documentation object 'plot.hdf5': + ‘plot.netcdf’ + +The \usage entries for S3 methods should use the \method markup and not +their full name. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘usmap.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... OK +Examples with CPU or elapsed time > 5s + user system elapsed +vwReg 11.31 0.1 11.486 +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 5 WARNINGs, 3 NOTEs diff --git a/base/workflow/tests/Rcheck_reference.log b/base/workflow/tests/Rcheck_reference.log new file mode 100644 index 00000000000..a2c5a9fcb24 --- /dev/null +++ b/base/workflow/tests/Rcheck_reference.log @@ -0,0 +1,66 @@ +* using log directory ‘/tmp/RtmpiqxPPg/PEcAn.workflow.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-manual --as-cran’ +* checking for file ‘PEcAn.workflow/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.workflow’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.workflow’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... 
OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... OK +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +run.write.configs: no visible binding for global variable + ‘trait.samples’ +run.write.configs: no visible binding for global variable ‘sa.samples’ +run.write.configs: no visible binding for global variable + ‘ensemble.samples’ +Undefined global functions or variables: + ensemble.samples sa.samples trait.samples +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... OK +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... + OK +* DONE +Status: 2 NOTEs diff --git a/models/biocro/tests/Rcheck_reference.log b/models/biocro/tests/Rcheck_reference.log new file mode 100644 index 00000000000..f9f226a309e --- /dev/null +++ b/models/biocro/tests/Rcheck_reference.log @@ -0,0 +1,70 @@ +* using log directory ‘/tmp/RtmpNWKth3/PEcAn.BIOCRO.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.BIOCRO/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.BIOCRO’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.BIOCRO’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... NOTE +Non-standard files/directories found at top level: + ‘Dockerfile’ ‘model_info.json’ +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... 
NOTE +Missing or unexported object: ‘BioCro::Gro’ +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... OK +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... WARNING +Missing link or links in documentation object 'get_biocro_defaults.Rd': + ‘[BioCro:Gro]{Gro}’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... OK +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘C4grass_sa_vd.Rmd’, ‘sa.output.Rdata’, ‘workflow.R’, ‘workflow.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 2 WARNINGs, 3 NOTEs diff --git a/models/clm45/tests/Rcheck_reference.log b/models/clm45/tests/Rcheck_reference.log new file mode 100644 index 00000000000..970a0573373 --- /dev/null +++ b/models/clm45/tests/Rcheck_reference.log @@ -0,0 +1,75 @@ +* using log directory ‘/tmp/RtmpxFrzoj/PEcAn.CLM45.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.CLM45/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.CLM45’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.CLM45’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... NOTE +Packages in Depends field not imported from: + ‘PEcAn.logger’ ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. 
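This "Packages in Depends field not imported from" NOTE recurs across nearly every log below; the standard remedy is to declare each used function as a namespace import instead of relying on Depends attaching the package. A minimal sketch of the fix, assuming a hypothetical function that calls `PEcAn.logger::logger.info()` (roxygen2 regenerates the matching `importFrom()` directive in NAMESPACE):

```r
# Hypothetical illustration of resolving the NOTE: declare the import so
# the symbol resolves even when PEcAn.logger is loaded but not attached.
# roxygen2 turns the tag below into
#   importFrom(PEcAn.logger, logger.info)
# in the NAMESPACE file.

#' Report the run being processed (illustrative only)
#' @importFrom PEcAn.logger logger.info
report_run <- function(run_id) {
  logger.info("Processing run", run_id)  # found via the declared import
  invisible(run_id)
}
```
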
+* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... OK +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'met2model.CLM45' + ‘lat’ ‘lon’ ‘...’ + +Undocumented arguments in documentation object 'write.config.CLM45' + ‘trait.values’ +Documented arguments not in \usage in documentation object 'write.config.CLM45': + ‘trait.samples’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 1 WARNING, 2 NOTEs diff --git a/models/dalec/tests/Rcheck_reference.log b/models/dalec/tests/Rcheck_reference.log new file mode 100644 index 00000000000..5aa2927ee4f --- /dev/null +++ b/models/dalec/tests/Rcheck_reference.log @@ -0,0 +1,83 @@ +* using log directory ‘/tmp/RtmpRjF4uv/PEcAn.DALEC.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.DALEC/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.DALEC’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.DALEC’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘PEcAn.data.atmosphere’ ‘PEcAn.data.land’ +* checking S3 generic/method consistency ... 
OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +met2model.DALEC: no visible global function definition for + ‘write.table’ +model2netcdf.DALEC: no visible global function definition for + ‘read.table’ +write.config.DALEC: no visible global function definition for + ‘read.table’ +Undefined global functions or variables: + read.table write.table +Consider adding + importFrom("utils", "read.table", "write.table") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... WARNING +Functions or methods with usage in documentation object 'get.model.output.generic' but not in code: + get.model.output.dalec + +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'met2model.DALEC' + ‘spin_nyear’ ‘spin_nsample’ ‘spin_resample’ ‘...’ + +Undocumented arguments in documentation object 'write.config.DALEC' + ‘defaults’ ‘trait.values’ ‘settings’ ‘run.id’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* DONE +Status: 3 WARNINGs, 2 NOTEs diff --git a/models/dvmdostem/tests/Rcheck_reference.log b/models/dvmdostem/tests/Rcheck_reference.log new file mode 100644 index 00000000000..9176838951e --- /dev/null +++ b/models/dvmdostem/tests/Rcheck_reference.log @@ -0,0 +1,80 @@ +* using log directory ‘/tmp/RtmpbqCbB4/PEcAn.dvmdostem.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.dvmdostem/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.dvmdostem’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.dvmdostem’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... 
OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘lubridate’ ‘PEcAn.logger’ ‘udunits2’ +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +model2netcdf.dvmdostem: no visible global function definition for + ‘write.table’ +setup.outputs.dvmdostem: no visible global function definition for + ‘read.csv’ +Undefined global functions or variables: + read.csv write.table +Consider adding + importFrom("utils", "read.csv", "write.table") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'convert.samples.dvmdostem' + ‘trait_values’ +Documented arguments not in \usage in documentation object 'convert.samples.dvmdostem': + ‘trait_samples’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... WARNING +'::' or ':::' import not declared from: ‘PEcAn.logger’ +* checking tests ... SKIPPED +* DONE +Status: 3 WARNINGs, 2 NOTEs diff --git a/models/ed/tests/Rcheck_reference.log b/models/ed/tests/Rcheck_reference.log new file mode 100644 index 00000000000..1b7267faef5 --- /dev/null +++ b/models/ed/tests/Rcheck_reference.log @@ -0,0 +1,219 @@ +* using log directory ‘/tmp/Rtmp3dAPgp/PEcAn.ED2.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.ED2/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.ED2’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.ED2’ can be installed ... WARNING +Found the following significant warnings: + Note: next used in wrong context: no loop is visible +See ‘/tmp/Rtmp3dAPgp/PEcAn.ED2.Rcheck/00install.out’ for details. +Information on the location(s) of code generating the ‘Note’s can be +obtained by re-running with environment variable R_KEEP_PKG_SOURCE set +to ‘yes’. +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... 
NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... NOTE +Non-standard files/directories found at top level: + ‘Dockerfile’ ‘model_info.json’ ‘scripts’ +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... NOTE +Package in Depends field not imported from: ‘coda’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +Warning: <anonymous>: ... may be used in an incorrect context: ‘check_pss(pss, ...)’ + +download_edi: no visible global function definition for ‘unzip’ +modify_df: no visible global function definition for ‘modifyList’ +modify_ed2in: no visible global function definition for ‘modifyList’ +put_E_values: no visible global function definition for ‘data’ +put_E_values: no visible binding for global variable ‘pftmapping’ +put_E_values : <anonymous>: no visible binding for global variable + ‘pftmapping’ +put_T_values: no visible global function definition for ‘head’ +read_css: no visible global function definition for ‘read.table’ +read_E_files: no visible global function definition for ‘data’ +read_E_files: no visible binding for global variable ‘pftmapping’ +read_E_files : <anonymous>: no visible binding for global variable + ‘pftmapping’ +read_ed_metheader: no visible global function definition for + ‘read.table’ +read_ed_metheader: no visible binding for global variable ‘value_type’ +read_pss: no visible global function definition for ‘read.table’ +read_pss: ... may be used in an incorrect context: ‘check_pss(pss, + ...)’ +read_S_files: no visible global function definition for ‘data’ +read_S_files: no visible binding for global variable ‘pftmapping’ +read_S_files : <anonymous>: no visible binding for global variable + ‘pftmapping’ +read_site: no visible global function definition for ‘read.table’ +veg2model.ED2: no visible global function definition for ‘data’ +veg2model.ED2: no visible binding for global variable ‘pftmapping’ +veg2model.ED2: no visible global function definition for ‘median’ +veg2model.ED2: no visible global function definition for ‘write.table’ +write_css: no visible global function definition for ‘write.table’ +write_pss: no visible global function definition for ‘write.table’ +write_restart.ED2: no visible binding for global variable ‘var.names’ +write_site: no visible global function definition for ‘write.table’ +write.config.ED2: no visible global function definition for ‘saveXML’ +write.config.ED2: no visible global function definition for ‘db.query’ +write.config.ED2: no visible global function definition for + ‘modifyList’ +write.config.ED2: no visible binding for global variable ‘x’ +write.config.ED2 : <anonymous>: no visible binding for global variable + ‘x’ +write.config.xml.ED2: no visible global function definition for ‘data’ +write.config.xml.ED2: no visible binding for global variable + ‘pftmapping’ +write.config.xml.ED2: no visible binding for global variable ‘soil’ +write.config.xml.ED2: no visible global function definition for + ‘modifyList’ +Undefined global functions or variables: + data db.query head median modifyList pftmapping read.table saveXML + soil unzip value_type var.names write.table x +Consider adding + importFrom("stats", "median") + importFrom("utils", "data", "head", "modifyList", "read.table", + "unzip", "write.table") +to your NAMESPACE file. + +Found the following assignments to the global environment: +File ‘PEcAn.ED2/R/model2netcdf.ED2.R’: + assign(x, ed.dat[[x]], envir = .GlobalEnv) + +Found the following calls to data() loading into the global environment: +File ‘PEcAn.ED2/R/model2netcdf.ED2.R’: + data(pftmapping, package = "PEcAn.ED2") + data(pftmapping, package = "PEcAn.ED2") + data(pftmapping, package = "PEcAn.ED2") +File ‘PEcAn.ED2/R/veg2model.ED2.R’: + data(pftmapping, package = "PEcAn.ED2") +File ‘PEcAn.ED2/R/write.configs.ed.R’: + data(list = histfile, package = "PEcAn.ED2") + data(list = histfile, package = "PEcAn.ED2") + data(pftmapping, package = "PEcAn.ED2") + data(soil, package = "PEcAn.ED2") +See section ‘Good practice’ in ‘?data’. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... WARNING +Missing link or links in documentation object 'read_ed_metheader.Rd': + ‘https://github.com/EDmodel/ED2/wiki/Drivers’ + +Missing link or links in documentation object 'read.output.ED2.Rd': + ‘DMYT’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... WARNING +Undocumented data sets: + ‘history’ ‘history.r46’ ‘history.r81’ ‘history.r82’ ‘history.r85’ + ‘history.rgit’ ‘pftmapping’ ‘soil’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'ed.var' + ‘varname’ + +Undocumented arguments in documentation object 'get.model.output.ED2' + ‘settings’ + +Undocumented arguments in documentation object 'list.files.nodir' + ‘...’ + +Undocumented arguments in documentation object 'met2model.ED2' + ‘lat’ ‘lon’ ‘...’ + +Undocumented arguments in documentation object 'modify_ed2in' + ‘ed2in’ + +Undocumented arguments in documentation object 'put_E_values' + ‘yr’ ‘nc_var’ ‘out’ ‘lat’ ‘lon’ ‘begins’ ‘ends’ ‘pft_names’ ‘...’ + +Undocumented arguments in documentation object 'put_T_values' + ‘yr’ ‘nc_var’ ‘out’ ‘lat’ ‘lon’ ‘begins’ ‘ends’ ‘...’ + +Undocumented arguments in documentation object 'read.output.ED2' + ‘end.year’ ‘variables’ + +Undocumented arguments in documentation object 'read_E_files' + ‘efiles’ ‘outdir’ ‘start_date’ ‘end_date’ ‘pft_names’ ‘...’ + +Undocumented arguments in documentation object 'read_T_files' + ‘tfiles’ ‘outdir’ ‘start_date’ ‘end_date’ ‘...’ + +Undocumented arguments in documentation object 'remove.config.ED2' + ‘main.outdir’ ‘settings’ + +Undocumented arguments in documentation object 'run_ed_singularity' + ‘...’ +Documented arguments not in \usage in documentation object 'run_ed_singularity': + ‘Additional’ + +Undocumented arguments in documentation object 'translate_vars_ed' + ‘varnames’ + +Undocumented arguments in documentation object 'veg2model.ED2' + ‘veg_info’ ‘start_date’ ‘new_site’ ‘source’ + +Undocumented arguments in documentation object 'write.config.ED2' + ‘...’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'read.output.ED2': + ‘start.year’ + +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘running_ed_from_R.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... WARNING +'::' or ':::' import not declared from: ‘devtools’ +* checking tests ... SKIPPED +* DONE +Status: 7 WARNINGs, 4 NOTEs diff --git a/models/fates/tests/Rcheck_reference.log b/models/fates/tests/Rcheck_reference.log new file mode 100644 index 00000000000..224c3512ea3 --- /dev/null +++ b/models/fates/tests/Rcheck_reference.log @@ -0,0 +1,81 @@ +* using log directory ‘/tmp/RtmpKgejnm/PEcAn.FATES.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.FATES/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.FATES’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... 
OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.FATES’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘stringr’ ‘udunits2’ +'library' or 'require' call to ‘PEcAn.utils’ which was already attached by Depends. + Please remove these calls from your code. +Package in Depends field not imported from: ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +model2netcdf.FATES: no visible global function definition for ‘head’ +model2netcdf.FATES: no visible global function definition for + ‘write.table’ +Undefined global functions or variables: + head write.table +Consider adding + importFrom("utils", "head", "write.table") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'met2model.FATES' + ‘lat’ ‘lon’ ‘...’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 2 WARNINGs, 2 NOTEs diff --git a/models/gday/tests/Rcheck_reference.log b/models/gday/tests/Rcheck_reference.log new file mode 100644 index 00000000000..3156671a84b --- /dev/null +++ b/models/gday/tests/Rcheck_reference.log @@ -0,0 +1,80 @@ +* using log directory ‘/tmp/Rtmpy5gmHK/PEcAn.GDAY.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.GDAY/DESCRIPTION’ ... OK +* checking extension type ... 
Package +* this is package ‘PEcAn.GDAY’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.GDAY’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... NOTE +Package in Depends field not imported from: ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +model2netcdf.GDAY: no visible global function definition for ‘read.csv’ +Undefined global functions or variables: + read.csv +Consider adding + importFrom("utils", "read.csv") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'met2model.GDAY' + ‘...’ + +Undocumented arguments in documentation object 'write.config.GDAY' + ‘trait.values’ +Documented arguments not in \usage in documentation object 'write.config.GDAY': + ‘trait.samples’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... 
SKIPPED +* DONE +Status: 1 WARNING, 3 NOTEs diff --git a/models/jules/tests/Rcheck_reference.log b/models/jules/tests/Rcheck_reference.log new file mode 100644 index 00000000000..e0144498691 --- /dev/null +++ b/models/jules/tests/Rcheck_reference.log @@ -0,0 +1,92 @@ +* using log directory ‘/tmp/RtmpDtuObw/PEcAn.JULES.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.JULES/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.JULES’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.JULES’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘PEcAn.data.atmosphere’ ‘udunits2’ +Package in Depends field not imported from: ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +model2netcdf.JULES: no visible global function definition for + ‘write.table’ +write.config.JULES: no visible global function definition for + ‘read.table’ +write.config.JULES: no visible global function definition for + ‘write.table’ +write.config.JULES: no visible global function definition for + ‘read.csv’ +Undefined global functions or variables: + read.csv read.table write.table +Consider adding + importFrom("utils", "read.csv", "read.table", "write.table") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'detect.timestep' + ‘met.dir’ ‘met.regexp’ + +Undocumented arguments in documentation object 'write.config.JULES' + ‘trait.values’ +Documented arguments not in \usage in documentation object 'write.config.JULES': + ‘trait.samples’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'detect.timestep': + ‘start_date’ + +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 3 WARNINGs, 2 NOTEs diff --git a/models/linkages/tests/Rcheck_reference.log b/models/linkages/tests/Rcheck_reference.log new file mode 100644 index 00000000000..6aeb971eaaf --- /dev/null +++ b/models/linkages/tests/Rcheck_reference.log @@ -0,0 +1,163 @@ +* using log directory ‘/tmp/RtmpsZI91X/PEcAn.LINKAGES.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.LINKAGES/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.LINKAGES’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.LINKAGES’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' import not declared from: ‘PEcAn.data.land’ +'library' or 'require' call not declared from: ‘PEcAn.data.atmosphere’ +'library' or 'require' call to ‘PEcAn.utils’ which was already attached by Depends. + Please remove these calls from your code. +'library' or 'require' calls in package code: + ‘linkages’ ‘PEcAn.data.atmosphere’ + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. 
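The companion fix for the `library()`/`require()` calls flagged here is to guard the dependency with `requireNamespace()` and qualify each call with `::`. A hedged sketch of that pattern; `prepare_met()` and `some_met_function()` are placeholders, not functions from the actual LINKAGES code:

```r
# Illustrative only. Instead of attaching a dependency inside package code:
#   library(PEcAn.data.atmosphere)   # <- what R CMD check flags
# test for the namespace and call through it explicitly:
prepare_met <- function(met_file) {
  if (!requireNamespace("PEcAn.data.atmosphere", quietly = TRUE)) {
    stop("package 'PEcAn.data.atmosphere' is required but not installed")
  }
  # `some_met_function` stands in for whatever the code actually calls;
  # the point is the explicit `::` qualification.
  PEcAn.data.atmosphere::some_met_function(met_file)
}
```
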
+Package in Depends field not imported from: ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +met2model.LINKAGES: no visible global function definition for + ‘flush.console’ +model2netcdf.LINKAGES: no visible binding for global variable + ‘ag.biomass’ +model2netcdf.LINKAGES: no visible binding for global variable + ‘total.soil.carbon’ +model2netcdf.LINKAGES: no visible binding for global variable + ‘leaf.litter’ +model2netcdf.LINKAGES: no visible binding for global variable ‘area’ +model2netcdf.LINKAGES: no visible binding for global variable ‘ag.npp’ +model2netcdf.LINKAGES: no visible binding for global variable + ‘hetero.resp’ +model2netcdf.LINKAGES: no visible binding for global variable ‘nee’ +model2netcdf.LINKAGES: no visible binding for global variable ‘et’ +model2netcdf.LINKAGES: no visible binding for global variable ‘agb.pft’ +model2netcdf.LINKAGES: no visible binding for global variable ‘f.comp’ +model2netcdf.LINKAGES: no visible binding for global variable ‘water’ +model2netcdf.LINKAGES: no visible binding for global variable + ‘abvgroundwood.biomass’ +read_restart.LINKAGES: no visible global function definition for + ‘read.output’ +sample.IC.LINKAGES: no visible global function definition for ‘runif’ +write_restart.LINKAGES: no visible global function definition for + ‘read.csv’ +write_restart.LINKAGES: no visible global function definition for + ‘logger.severe’ +write_restart.LINKAGES: no visible binding for global variable + ‘ntrees.kill’ +write_restart.LINKAGES: no visible binding for global variable + ‘nogro.save’ +write_restart.LINKAGES: no visible binding for global variable + ‘iage.save’ +write_restart.LINKAGES: no visible binding for global variable + ‘dbh.save’ +write_restart.LINKAGES: no visible global function definition for + ‘aggregate’ +write_restart.LINKAGES: no visible global function definition for + ‘runif’ +write_restart.LINKAGES: no visible global function definition for + ‘optimize’ +write_restart.LINKAGES: no visible global function definition for + ‘rnorm’ +write_restart.LINKAGES: no visible global function definition for + ‘year’ +write.config.LINKAGES: no visible global function definition for + ‘read.csv’ +write.config.LINKAGES: no visible global function definition for + ‘db.open’ +write.config.LINKAGES: no visible global function definition for + ‘db.close’ +write.config.LINKAGES: no visible global function definition for + ‘db.query’ +Undefined global functions or variables: + abvgroundwood.biomass ag.biomass ag.npp agb.pft aggregate area + db.close db.open db.query dbh.save et f.comp flush.console + hetero.resp iage.save leaf.litter logger.severe nee nogro.save + ntrees.kill optimize read.csv read.output rnorm runif + total.soil.carbon water year +Consider adding + importFrom("stats", "aggregate", "optimize", "rnorm", "runif") + importFrom("utils", "flush.console", "read.csv") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING
+Undocumented arguments in documentation object 'met2model.LINKAGES'
+  ‘start_date’ ‘end_date’ ‘overwrite’ ‘verbose’ ‘...’
+
+Undocumented arguments in documentation object 'model2netcdf.LINKAGES'
+  ‘pft_names’
+
+Undocumented arguments in documentation object 'read_restart.LINKAGES'
+  ‘settings’ ‘params’
+Documented arguments not in \usage in documentation object 'read_restart.LINKAGES':
+  ‘multi.settings’
+
+Undocumented arguments in documentation object 'split_inputs.LINKAGES'
+  ‘settings’ ‘start.time’ ‘stop.time’ ‘inputs’
+
+Undocumented arguments in documentation object 'write.config.LINKAGES'
+  ‘trait.values’ ‘restart’ ‘spinup’ ‘inputs’ ‘IC’
+Documented arguments not in \usage in documentation object 'write.config.LINKAGES':
+  ‘trait.samples’
+
+Undocumented arguments in documentation object 'write_restart.LINKAGES'
+  ‘start.time’ ‘stop.time’ ‘new.params’ ‘inputs’
+Documented arguments not in \usage in documentation object 'write_restart.LINKAGES':
+  ‘trait.values’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... WARNING
+Argument items with no description in Rd object 'write_restart.LINKAGES':
+  ‘trait.values’
+
+* checking for unstated dependencies in examples ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 3 WARNINGs, 2 NOTEs
diff --git a/models/lpjguess/tests/Rcheck_reference.log b/models/lpjguess/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..3b2e723a9a0
--- /dev/null
+++ b/models/lpjguess/tests/Rcheck_reference.log
@@ -0,0 +1,114 @@
+* using log directory ‘/tmp/RtmpHi8eq6/PEcAn.LPJGUESS.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.LPJGUESS/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.LPJGUESS’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.LPJGUESS’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... WARNING
+'::' or ':::' imports not declared from:
+  ‘tibble’ ‘udunits2’
+'library' or 'require' call to ‘PEcAn.utils’ which was already attached by Depends.
+  Please remove these calls from your code.
+Package in Depends field not imported from: ‘PEcAn.utils’
+  These packages need to be imported from (in the NAMESPACE file)
+  for when this namespace is loaded but not attached.
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+getClass_Fluxes: no visible binding for global variable ‘zz’
+getClass_Individual: no visible binding for global variable ‘zz’
+getClass_Patchpft: no visible binding for global variable ‘npft’
+getClass_Patchpft: no visible binding for global variable ‘zz’
+getClass_Soil: no visible binding for global variable ‘zz’
+getClass_Sompool: no visible binding for global variable ‘zz’
+getClass_SompoolCent: no visible binding for global variable ‘zz’
+getClass_Vegetation: no visible binding for global variable ‘zz’
+model2netcdf.LPJGUESS: no visible binding for global variable
+  ‘read.table’
+write.insfile.LPJGUESS: no visible binding for global variable
+  ‘lpjguess_param_list’
+write.insfile.LPJGUESS: no visible global function definition for
+  ‘data’
+write.insfile.LPJGUESS: no visible binding for global variable
+  ‘co2.1850.2011’
+write.insfile.LPJGUESS: no visible global function definition for
+  ‘write.table’
+Undefined global functions or variables:
+  co2.1850.2011 data lpjguess_param_list npft read.table write.table zz
+Consider adding
+  importFrom("utils", "data", "read.table", "write.table")
+to your NAMESPACE file.
+
+Found the following calls to data() loading into the global environment:
+File ‘PEcAn.LPJGUESS/R/write.config.LPJGUESS.R’:
+  data(co2.1850.2011, package = "PEcAn.LPJGUESS")
+See section ‘Good practice’ in ‘?data’.
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... WARNING
+Undocumented data sets:
+  ‘co2.1850.2011’
+All user-level objects in a package should have documentation entries.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'met2model.LPJGUESS'
+  ‘...’
+
+Undocumented arguments in documentation object 'write.config.LPJGUESS'
+  ‘trait.values’
+Documented arguments not in \usage in documentation object 'write.config.LPJGUESS':
+  ‘trait.samples’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking contents of ‘data’ directory ... OK
+* checking data for non-ASCII characters ... OK
+* checking data for ASCII and uncompressed saves ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 3 WARNINGs, 2 NOTEs
diff --git a/models/maat/tests/Rcheck_reference.log b/models/maat/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..f482f70704c
--- /dev/null
+++ b/models/maat/tests/Rcheck_reference.log
@@ -0,0 +1,89 @@
+* using log directory ‘/tmp/RtmpeZMF3K/PEcAn.MAAT.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.MAAT/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.MAAT’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.MAAT’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... OK
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+convert.samples.MAAT: no visible binding for global variable ‘Ha.vcmax’
+convert.samples.MAAT: no visible binding for global variable ‘Hd.vcmax’
+convert.samples.MAAT: no visible binding for global variable ‘Ha.jmax’
+convert.samples.MAAT: no visible binding for global variable ‘Hd.jmax’
+convert.samples.MAAT: no visible binding for global variable
+  ‘leaf_width’
+convert.samples.MAAT: no visible binding for global variable ‘g0’
+model2netcdf.MAAT: no visible binding for global variable ‘time’
+model2netcdf.MAAT: no visible global function definition for ‘head’
+Undefined global functions or variables:
+  g0 Ha.jmax Ha.vcmax Hd.jmax Hd.vcmax head leaf_width time
+Consider adding
+  importFrom("stats", "time")
+  importFrom("utils", "head")
+to your NAMESPACE file.
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... OK
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'met2model.MAAT'
+  ‘...’
+
+Undocumented arguments in documentation object 'write.config.MAAT'
+  ‘trait.values’
+Documented arguments not in \usage in documentation object 'write.config.MAAT':
+  ‘trait.samples’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking files in ‘vignettes’ ... WARNING
+Files in the 'vignettes' directory but no files in 'inst/doc':
+  ‘create_amerifluxLBL_drivers_for_maat.Rmd’,
+  ‘graphics/example_assimilation_rate.png’,
+  ‘running_maat_in_pecan.Rmd’
+Package has no Sweave vignette sources and no VignetteBuilder field.
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 2 WARNINGs, 2 NOTEs
diff --git a/models/maespa/tests/Rcheck_reference.log b/models/maespa/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..3da9a40b845
--- /dev/null
+++ b/models/maespa/tests/Rcheck_reference.log
@@ -0,0 +1,80 @@
+* using log directory ‘/tmp/Rtmpl9sLKQ/PEcAn.MAESPA.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.MAESPA/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.MAESPA’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.MAESPA’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+* checking top-level files ... NOTE
+Non-standard files/directories found at top level:
+  ‘Dockerfile’ ‘model_info.json’
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... NOTE
+'library' or 'require' call to ‘Maeswrap’ in package code.
+  Please use :: or requireNamespace() instead.
+  See section 'Suggested packages' in the 'Writing R Extensions' manual.
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+write.config.MAESPA: no visible global function definition for
+  ‘logger.severe’
+Undefined global functions or variables:
+  logger.severe
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... OK
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'met2model.MAESPA'
+  ‘...’
+
+Undocumented arguments in documentation object 'write.config.MAESPA'
+  ‘trait.values’
+Documented arguments not in \usage in documentation object 'write.config.MAESPA':
+  ‘trait.samples’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 1 WARNING, 4 NOTEs
diff --git a/models/preles/tests/Rcheck_reference.log b/models/preles/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..aedbcecb385
--- /dev/null
+++ b/models/preles/tests/Rcheck_reference.log
@@ -0,0 +1,87 @@
+* using log directory ‘/tmp/RtmpBQPnSA/PEcAn.PRELES.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.PRELES/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.PRELES’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.PRELES’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+Package listed in more than one of Depends, Imports, Suggests, Enhances:
+  ‘PEcAn.utils’
+A package should be listed in only one of these fields.
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... NOTE
+'library' or 'require' call to ‘Rpreles’ in package code.
+  Please use :: or requireNamespace() instead.
+  See section 'Suggested packages' in the 'Writing R Extensions' manual.
+Package in Depends field not imported from: ‘PEcAn.utils’
+  These packages need to be imported from (in the NAMESPACE file)
+  for when this namespace is loaded but not attached.
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+runPRELES.jobsh: no visible global function definition for
+  ‘logger.severe’
+runPRELES.jobsh: no visible binding for global variable ‘trait.values’
+Undefined global functions or variables:
+  logger.severe trait.values
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... OK
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'runPRELES.jobsh'
+  ‘met.file’ ‘parameters’ ‘sitelat’ ‘sitelon’ ‘start.date’ ‘end.date’
+Documented arguments not in \usage in documentation object 'runPRELES.jobsh':
+  ‘in.path’ ‘in.prefix’ ‘start_date’ ‘end_date’
+
+Undocumented arguments in documentation object 'write.config.PRELES'
+  ‘trait.values’
+Documented arguments not in \usage in documentation object 'write.config.PRELES':
+  ‘trait.samples’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 1 WARNING, 3 NOTEs
diff --git a/models/sipnet/tests/Rcheck_reference.log b/models/sipnet/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..978152fd680
--- /dev/null
+++ b/models/sipnet/tests/Rcheck_reference.log
@@ -0,0 +1,120 @@
+* using log directory ‘/tmp/RtmpvQfkqE/PEcAn.SIPNET.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.SIPNET/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.SIPNET’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.SIPNET’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+* checking top-level files ... NOTE
+Non-standard files/directories found at top level:
+  ‘Dockerfile’ ‘model_info.json’
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... WARNING
+'::' or ':::' imports not declared from:
+  ‘dplyr’ ‘PEcAn.data.land’
+Package in Depends field not imported from: ‘PEcAn.data.atmosphere’
+  These packages need to be imported from (in the NAMESPACE file)
+  for when this namespace is loaded but not attached.
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+met2model.SIPNET: no visible binding for global variable ‘i’
+met2model.SIPNET: no visible global function definition for ‘convolve’
+met2model.SIPNET: no visible global function definition for
+  ‘write.table’
+model2netcdf.SIPNET: no visible global function definition for
+  ‘read.table’
+model2netcdf.SIPNET: no visible binding for global variable ‘year’
+model2netcdf.SIPNET: no visible global function definition for ‘head’
+sample.IC.SIPNET: no visible global function definition for ‘runif’
+split_inputs.SIPNET: no visible global function definition for ‘%>%’
+split_inputs.SIPNET: no visible binding for global variable ‘.’
+split_inputs.SIPNET: no visible global function definition for
+  ‘read.table’
+split_inputs.SIPNET: no visible global function definition for ‘mutate’
+split_inputs.SIPNET: no visible binding for global variable ‘V2’
+split_inputs.SIPNET: no visible binding for global variable ‘V3’
+split_inputs.SIPNET: no visible binding for global variable ‘Date’
+split_inputs.SIPNET: no visible binding for global variable ‘V4’
+split_inputs.SIPNET: no visible global function definition for ‘filter’
+split_inputs.SIPNET: no visible global function definition for
+  ‘write.table’
+write_restart.SIPNET: no visible global function definition for ‘%>%’
+write.config.SIPNET: no visible global function definition for
+  ‘read.table’
+write.config.SIPNET: no visible global function definition for ‘%>%’
+write.config.SIPNET: no visible global function definition for
+  ‘write.table’
+Undefined global functions or variables:
+  . %>% convolve Date filter head i mutate read.table runif V2 V3 V4
+  write.table year
+Consider adding
+  importFrom("stats", "convolve", "filter", "runif")
+  importFrom("utils", "head", "read.table", "write.table")
+to your NAMESPACE file.
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... OK
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'get.model.output.SIPNET'
+  ‘settings’
+
+Undocumented arguments in documentation object 'met2model.SIPNET'
+  ‘...’
+
+Undocumented arguments in documentation object 'model2netcdf.SIPNET'
+  ‘delete.raw’
+
+Undocumented arguments in documentation object 'sample.IC.SIPNET'
+  ‘year’
+
+Undocumented arguments in documentation object 'write.config.SIPNET'
+  ‘defaults’ ‘trait.values’ ‘settings’ ‘run.id’ ‘inputs’ ‘IC’ ‘restart’
+  ‘spinup’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 2 WARNINGs, 3 NOTEs
diff --git a/models/template/tests/Rcheck_reference.log b/models/template/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..68fa51a5ae8
--- /dev/null
+++ b/models/template/tests/Rcheck_reference.log
@@ -0,0 +1,77 @@
+* using log directory ‘/tmp/RtmpVlFdqS/PEcAn.ModelName.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.ModelName/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.ModelName’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.ModelName’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... OK
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+write.config.MODEL: no visible global function definition for
+  ‘db.query’
+write.config.MODEL: no visible binding for global variable ‘startdate’
+write.config.MODEL: no visible binding for global variable ‘enddate’
+Undefined global functions or variables:
+  db.query enddate startdate
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... OK
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'met2model.MODEL'
+  ‘overwrite’
+
+Undocumented arguments in documentation object 'write.config.MODEL'
+  ‘trait.values’
+Documented arguments not in \usage in documentation object 'write.config.MODEL':
+  ‘trait.samples’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 1 WARNING, 2 NOTEs
diff --git a/modules/allometry/tests/Rcheck_reference.log b/modules/allometry/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..8b22f84c5aa
--- /dev/null
+++ b/modules/allometry/tests/Rcheck_reference.log
@@ -0,0 +1,137 @@
+* using log directory ‘/tmp/Rtmpe7nWQ8/PEcAn.allometry.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-manual --as-cran’
+* checking for file ‘PEcAn.allometry/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.allometry’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.allometry’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+Malformed Description field: should contain one or more complete sentences.
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... NOTE
+'library' or 'require' calls to packages already attached by Depends:
+  ‘MCMCpack’ ‘mvtnorm’ ‘tools’
+  Please remove these calls from your code.
+'library' or 'require' call to ‘PEcAn.DB’ in package code.
+  Please use :: or requireNamespace() instead.
+  See section 'Suggested packages' in the 'Writing R Extensions' manual.
+Packages in Depends field not imported from:
+  ‘MCMCpack’ ‘mvtnorm’ ‘tools’
+  These packages need to be imported from (in the NAMESPACE file)
+  for when this namespace is loaded but not attached.
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+allom.BayesFit: no visible global function definition for
+  ‘txtProgressBar’
+allom.BayesFit: no visible global function definition for ‘runif’
+allom.BayesFit: no visible global function definition for ‘rnorm’
+allom.BayesFit: no visible global function definition for ‘rmvnorm’
+allom.BayesFit: no visible global function definition for ‘dnorm’
+allom.BayesFit: no visible global function definition for ‘rgamma’
+allom.BayesFit: no visible global function definition for ‘riwish’
+allom.BayesFit: no visible global function definition for ‘vech’
+allom.BayesFit: no visible global function definition for
+  ‘setTxtProgressBar’
+allom.BayesFit: no visible global function definition for ‘as.mcmc’
+allom.predict: no visible global function definition for ‘is.mcmc.list’
+allom.predict: no visible global function definition for ‘as.mcmc’
+allom.predict: no visible global function definition for ‘rmvnorm’
+allom.predict: no visible global function definition for ‘is’
+allom.predict: no visible global function definition for ‘rnorm’
+AllomAve: no visible global function definition for ‘as.mcmc’
+AllomAve: no visible global function definition for ‘as.mcmc.list’
+AllomAve: no visible global function definition for ‘cov’
+AllomAve: no visible global function definition for ‘pdf’
+AllomAve: no visible global function definition for ‘plot’
+AllomAve: no visible global function definition for ‘points’
+AllomAve: no visible global function definition for ‘lines’
+AllomAve: no visible global function definition for ‘legend’
+AllomAve: no visible global function definition for ‘dev.off’
+load.allom: no visible global function definition for ‘file_ext’
+load.allom: no visible binding for global variable ‘mc’
+load.allom: no visible global function definition for ‘is.mcmc.list’
+load.allom: no visible global function definition for ‘as.mcmc’
+query.allom.data: no visible global function definition for ‘db.query’
+read.allom.data: no visible global function definition for ‘read.csv’
+read.allom.data: no visible global function definition for ‘runif’
+read.allom.data: no visible global function definition for ‘var’
+Undefined global functions or variables:
+  as.mcmc as.mcmc.list cov db.query dev.off dnorm file_ext is
+  is.mcmc.list legend lines mc pdf plot points read.csv rgamma riwish
+  rmvnorm rnorm runif setTxtProgressBar txtProgressBar var vech
+Consider adding
+  importFrom("graphics", "legend", "lines", "plot", "points")
+  importFrom("grDevices", "dev.off", "pdf")
+  importFrom("methods", "is")
+  importFrom("stats", "cov", "dnorm", "rgamma", "rnorm", "runif", "var")
+  importFrom("utils", "read.csv", "setTxtProgressBar", "txtProgressBar")
+to your NAMESPACE file (and ensure that your DESCRIPTION Imports field
+contains 'methods').
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... WARNING
+Undocumented data sets:
+  ‘allom.components’ ‘Jenkins2004_Table9’ ‘Table3_GTR-NE-319.v2’
+All user-level objects in a package should have documentation entries.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'allom.predict'
+  ‘single.tree’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking contents of ‘data’ directory ... OK
+* checking data for non-ASCII characters ... OK
+* checking data for ASCII and uncompressed saves ... OK
+* checking files in ‘vignettes’ ... WARNING
+Files in the 'vignettes' directory but no files in 'inst/doc':
+  ‘AllomVignette.Rmd’
+Package has no Sweave vignette sources and no VignetteBuilder field.
+* checking examples ... OK
+* checking for unstated dependencies in ‘tests’ ... WARNING
+'library' or 'require' call not declared from: ‘PEcAn.utils’
+* checking tests ...
+  OK
+* DONE
+Status: 4 WARNINGs, 3 NOTEs
diff --git a/modules/assim.batch/tests/Rcheck_reference.log b/modules/assim.batch/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..be30471dd77
--- /dev/null
+++ b/modules/assim.batch/tests/Rcheck_reference.log
@@ -0,0 +1,232 @@
+* using log directory ‘/tmp/RtmpNECEcU/PEcAn.assim.batch.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-manual --as-cran’
+* checking for file ‘PEcAn.assim.batch/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.assim.batch’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.assim.batch’ can be installed ... WARNING
+Found the following significant warnings:
+  Note: possible error in 'pda.postprocess(settings, ': unused argument (burnin)
+See ‘/tmp/RtmpNECEcU/PEcAn.assim.batch.Rcheck/00install.out’ for details.
+Information on the location(s) of code generating the ‘Note’s can be
+obtained by re-running with environment variable R_KEEP_PKG_SOURCE set
+to ‘yes’.
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... OK
+* checking S3 generic/method consistency ... OK
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ... NOTE
+get.da.data: no visible binding for global variable ‘ensemble.samples’
+get.da.data : <anonymous>: no visible binding for global variable
+  ‘sa.samples’
+get.da.data: no visible binding for global variable ‘sa.samples’
+get.da.data : <anonymous>: no visible global function definition for
+  ‘read.output.type’
+get.da.data.growth: no visible binding for global variable
+  ‘ensemble.samples’
+get.da.data.growth: no visible binding for global variable ‘sa.samples’
+get.da.data.growth : <anonymous>: no visible global function definition
+  for ‘read.output.type’
+pda.init.params: no visible binding for global variable ‘mcmc.list’
+pda.init.params: no visible binding for global variable ‘llpar.list’
+pda.load.priors: no visible binding for global variable ‘post.distns’
+pda.load.priors: no visible binding for global variable ‘prior.distns’
+pda.mcmc.recover: possible error in pda.postprocess(settings, con,
+  params, pname, prior, prior.ind, burnin): unused argument (burnin)
+pda.plot.params: no visible binding for global variable ‘mcmc’
+plot.da: no visible binding for global variable ‘y’
+plot.da: no visible binding for global variable ‘ensemble.samples’
+plot.da : <anonymous>: no visible binding for global variable
+  ‘ensemble.samples’
+plot.da : <anonymous>: no visible binding for global variable
+  ‘sa.samples’
+plot.da : <anonymous>: no visible binding for global variable ‘m’
+return.bias: no visible binding for global variable ‘prior.list’
+sample_MCMC: no visible binding for global variable ‘mcmc.samp.list’
+sample_MCMC: no visible binding for global variable ‘mcmc’
+write_sf_posterior: no visible binding for global variable ‘mcmc’
+Undefined global functions or variables:
+  ensemble.samples llpar.list m mcmc mcmc.list mcmc.samp.list
+  post.distns prior.distns prior.list read.output.type sa.samples y
+
+Found the following assignments to the global environment:
+File ‘PEcAn.assim.batch/R/pda.get.model.output.R’:
+  assign(x, model.raw[x], envir = .GlobalEnv)
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... OK
+* checking Rd cross-references ... OK
+* checking for missing documentation entries ... WARNING
+Undocumented code objects:
+  ‘runModule.assim.batch’
+All user-level objects in a package should have documentation entries.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'correlationPlot'
+  ‘whichParameters’
+
+Undocumented arguments in documentation object 'gelman_diag_gelmanPlot'
+  ‘x’ ‘...’
+
+Undocumented arguments in documentation object 'gelman_diag_mw'
+  ‘...’
+
+Undocumented arguments in documentation object 'getBurnin'
+  ‘plotfile’
+
+Undocumented arguments in documentation object 'load.pda.data'
+  ‘bety’
+
+Undocumented arguments in documentation object 'pda.adjust.jumps'
+  ‘settings’ ‘jmp.list’ ‘accept.rate’ ‘pnames’
+Documented arguments not in \usage in documentation object 'pda.adjust.jumps':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.adjust.jumps.bs'
+  ‘settings’ ‘jcov’ ‘accept.count’ ‘params.recent’
+Documented arguments not in \usage in documentation object 'pda.adjust.jumps.bs':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.bayesian.tools'
+  ‘params.id’ ‘param.names’ ‘prior.id’ ‘chain’ ‘iter’ ‘adapt’ ‘adj.min’
+  ‘ar.target’ ‘jvar’ ‘n.knot’
+
+Undocumented arguments in documentation object 'pda.calc.error'
+  ‘con’ ‘run.id’
+
+Undocumented arguments in documentation object 'pda.calc.llik'
+  ‘pda.errors’ ‘llik.fn’ ‘llik.par’
+Documented arguments not in \usage in documentation object 'pda.calc.llik':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.create.ensemble'
+  ‘settings’ ‘con’ ‘workflow.id’
+Documented arguments not in \usage in documentation object 'pda.create.ensemble':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.define.llik.fn'
+  ‘settings’
+Documented arguments not in \usage in documentation object 'pda.define.llik.fn':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.define.prior.fn'
+  ‘prior’
+Documented arguments not in \usage in documentation object 'pda.define.prior.fn':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.emulator'
+  ‘params.id’ ‘param.names’ ‘prior.id’ ‘chain’ ‘iter’ ‘adapt’ ‘adj.min’
+  ‘ar.target’ ‘jvar’ ‘n.knot’
+
+Undocumented arguments in documentation object 'pda.generate.knots'
+  ‘n.knot’ ‘sf’ ‘probs.sf’ ‘n.param.all’ ‘prior.ind’ ‘prior.fn’ ‘pname’
+
+Undocumented arguments in documentation object 'pda.generate.sf'
+  ‘n.knot’ ‘sf’ ‘prior.list’
+
+Undocumented arguments in documentation object 'pda.get.model.output'
+  ‘settings’ ‘run.id’ ‘bety’ ‘inputs’
+Documented arguments not in \usage in documentation object 'pda.get.model.output':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.init.params'
+  ‘settings’ ‘chain’ ‘pname’ ‘n.param.all’
+
+Undocumented arguments in documentation object 'pda.init.run'
+  ‘settings’ ‘con’ ‘my.write.config’ ‘workflow.id’ ‘params’ ‘n’
+  ‘run.names’
+Documented arguments not in \usage in documentation object 'pda.init.run':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.load.priors'
+  ‘settings’ ‘con’ ‘extension.check’
+Documented arguments not in \usage in documentation object 'pda.load.priors':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.mcmc'
+  ‘params.id’ ‘param.names’ ‘prior.id’ ‘chain’ ‘iter’ ‘adapt’ ‘adj.min’
+  ‘ar.target’ ‘jvar’ ‘n.knot’
+
+Undocumented arguments in documentation object 'pda.mcmc.bs'
+  ‘params.id’ ‘param.names’ ‘prior.id’ ‘chain’ ‘iter’ ‘adapt’ ‘adj.min’
+  ‘ar.target’ ‘jvar’ ‘n.knot’
+
+Undocumented arguments in documentation object 'pda.mcmc.recover'
+  ‘settings’ ‘params.id’ ‘param.names’ ‘prior.id’ ‘chain’ ‘iter’
+  ‘adapt’ ‘adj.min’ ‘ar.target’ ‘jvar’ ‘n.knot’ ‘burnin’
+Documented arguments not in \usage in documentation object 'pda.mcmc.recover':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.neff.calc'
+  ‘recalculate’
+
+Undocumented arguments in documentation object 'pda.plot.params'
+  ‘settings’ ‘mcmc.param.list’ ‘prior.ind’ ‘par.file.name’
+Documented arguments not in \usage in documentation object 'pda.plot.params':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.postprocess'
+  ‘settings’ ‘con’ ‘mcmc.param.list’ ‘pname’ ‘prior’ ‘prior.ind’
+Documented arguments not in \usage in documentation object 'pda.postprocess':
+  ‘all’
+
+Undocumented arguments in documentation object 'pda.settings'
+  ‘settings’ ‘params.id’ ‘param.names’ ‘prior.id’ ‘chain’ ‘iter’
+  ‘adapt’ ‘adj.min’ ‘ar.target’ ‘jvar’ ‘n.knot’ ‘run.round’
+Documented arguments not in \usage in documentation object 'pda.settings':
+  ‘all’
+
+Undocumented arguments in documentation object 'return_hyperpars'
+  ‘assim.settings’ ‘inputs’
+
+Undocumented arguments in documentation object 'write_sf_posterior'
+  ‘sf.samp.list’ ‘sf.prior’ ‘sf.samp.filename’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking Rd contents ... OK
+* checking for unstated dependencies in examples ... OK
+* checking files in ‘vignettes’ ... WARNING
+Files in the 'vignettes' directory but no files in 'inst/doc':
+  ‘AssimBatchVignette.html’, ‘AssimBatchVignette.Rmd’
+Package has no Sweave vignette sources and no VignetteBuilder field.
+* checking examples ... OK
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ...
+  OK
+* DONE
+Status: 4 WARNINGs, 2 NOTEs
diff --git a/modules/assim.sequential/tests/Rcheck_reference.log b/modules/assim.sequential/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..47f24150b7d
--- /dev/null
+++ b/modules/assim.sequential/tests/Rcheck_reference.log
@@ -0,0 +1,953 @@
+* using log directory ‘/tmp/Rtmpz0ebwx/PEcAn.assim.sequential.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-manual --as-cran’
+* checking for file ‘PEcAn.assim.sequential/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.assim.sequential’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.assim.sequential’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... WARNING
+Found the following file with non-ASCII characters:
+  load_data_paleon_sda.R
+Portable packages must use only ASCII characters in their R code,
+except perhaps in comments.
+Use \uxxxx escapes for other characters.
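The non-ASCII warning above is typically cleared by rewriting any literal non-ASCII characters in the flagged R source as `\uxxxx` escapes, as the checker suggests. A minimal sketch, assuming a hypothetical degree sign in a label string (the name `deg_label` and the example character are illustrative, not taken from `load_data_paleon_sda.R`):

```r
# Hypothetical sketch: keep the source file ASCII-only by writing the
# degree sign as a \uxxxx escape instead of a literal "°" character.
# deg_label is an assumed example name, not from the PEcAn code.
deg_label <- "Temperature (\u00b0C)"  # "\u00b0" is the Unicode degree sign
cat(deg_label, "\n")                  # prints: Temperature (°C)
```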
+* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘devtools’ ‘dplyr’ ‘future’ ‘ggrepel’ ‘Matrix’ ‘ncdf4’ + ‘PEcAn.benchmark’ ‘PEcAn.DB’ ‘PEcAn.settings’ ‘sf’ ‘sp’ ‘tidyr’ ‘XML’ +'library' or 'require' calls not declared from: + ‘corrplot’ ‘gridExtra’ ‘nimble’ ‘plotrix’ ‘plyr’ +'library' or 'require' calls in package code: + ‘corrplot’ ‘gridExtra’ ‘nimble’ ‘plotrix’ ‘plyr’ + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +There are ::: calls to the package's namespace in its code. A package + almost never needs to use ::: for its own objects: + ‘load_nimble’ ‘post.analysis.ggplot.violin’ + ‘postana.bias.plotting.sda’ +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +adj.ens: no visible global function definition for ‘logger.warn’ +adj.ens: no visible global function definition for ‘cov’ +alltocs: no visible global function definition for ‘%>%’ +alltocs: no visible global function definition for ‘map_dfr’ +alltocs : : no visible global function definition for ‘%>%’ +alltocs: no visible global function definition for ‘mutate’ +alltocs: no visible binding for global variable ‘TimeElapsed’ +alltocs: no visible global function definition for ‘write.table’ +assessParams: no visible global function definition for ‘cov’ +assessParams: no visible binding for global variable ‘quantile’ +assessParams: no visible global function definition for ‘par’ +assessParams: no visible global function definition for ‘plot’ +assessParams: no visible global function definition for ‘abline’ +assessParams: no visible global function definition for ‘lines’ +assessParams: no visible global function definition for ‘points’ +assessParams: no visible global function definition for ‘plot.new’ +assessParams: no visible global function definition for ‘legend’ +assessParams: no visible global function definition for ‘boxplot’ +assessParams: no visible global function definition for ‘cov2cor’ +Construct.H.multisite: no visible global function definition for ‘%>%’ +Construct.H.multisite: no visible global function definition for + ‘map_dbl’ +Construct.R: no visible global function definition for ‘%>%’ +Contruct.Pf: no visible global function definition for ‘cov’ +Contruct.Pf: no visible global function definition for ‘%>%’ +Contruct.Pf: no visible global function definition for ‘filter’ +Contruct.Pf: no visible binding for global variable ‘Var1’ +Contruct.Pf: no visible binding for global variable ‘Var2’ +EnKF: no visible global function definition for ‘cov’ +EnKF.MultiSite: no visible binding for global variable ‘settings’ +EnKF.MultiSite: no visible global function definition for ‘%>%’ +EnKF.MultiSite: no visible binding for global variable ‘site.ids’ +EnKF.MultiSite: no visible binding for global variable ‘blocked.dis’ +GEF: no visible global function definition for ‘cov’ +GEF : wish.df: no visible global function definition for ‘var’ +GEF: 
no visible binding for '<<-' assignment to ‘constants.tobit2space’ +GEF: no visible binding for '<<-' assignment to ‘data.tobit2space’ +GEF: no visible binding for '<<-' assignment to ‘tobit2space_pred’ +GEF: no visible binding for global variable ‘tobit2space.model’ +GEF: no visible binding for global variable ‘data.tobit2space’ +GEF: no visible binding for global variable ‘constants.tobit2space’ +GEF: no visible binding for '<<-' assignment to ‘conf_tobit2space’ +GEF: no visible binding for global variable ‘tobit2space_pred’ +GEF: no visible binding for global variable ‘conf_tobit2space’ +GEF: no visible binding for '<<-' assignment to + ‘samplerNumberOffset_tobit2space’ +GEF: no visible binding for '<<-' assignment to ‘Rmcmc_tobit2space’ +GEF: no visible binding for '<<-' assignment to ‘Cmcmc_tobit2space’ +GEF: no visible binding for global variable ‘Rmcmc_tobit2space’ +GEF: no visible binding for global variable ‘Cmcmc_tobit2space’ +GEF: no visible binding for global variable + ‘samplerNumberOffset_tobit2space’ +GEF: no visible global function definition for ‘pdf’ +GEF: no visible global function definition for ‘dev.off’ +GEF: no visible binding for global variable ‘obs.mean’ +GEF: no visible global function definition for ‘logger.warn’ +GEF: no visible binding for global variable ‘constants.tobit’ +GEF: no visible binding for '<<-' assignment to ‘constants.tobit’ +GEF: no visible binding for '<<-' assignment to ‘dimensions.tobit’ +GEF: no visible binding for '<<-' assignment to ‘data.tobit’ +GEF: no visible global function definition for ‘rnorm’ +GEF: no visible binding for global variable ‘tobit.model’ +GEF: no visible binding for global variable ‘data.tobit’ +GEF: no visible binding for global variable ‘dimensions.tobit’ +GEF: no visible binding for '<<-' assignment to ‘samplerNumberOffset’ +GEF: no visible binding for '<<-' assignment to ‘Rmcmc’ +GEF: no visible binding for '<<-' assignment to ‘Cmcmc’ +GEF: no visible binding for global variable ‘Rmcmc’ +GEF: no visible binding for global variable ‘Cmcmc’ +GEF: no visible binding for global variable ‘samplerNumberOffset’ +GEF: no visible global function definition for ‘par’ +GEF: no visible global function definition for ‘plot’ +GEF: no visible global function definition for ‘abline’ +GEF.MultiSite: no visible binding for global variable ‘settings’ +GEF.MultiSite: no visible global function definition for ‘cov’ +GEF.MultiSite : wish.df: no visible global function definition for + ‘var’ +GEF.MultiSite: no visible binding for '<<-' assignment to + ‘tobit2space_pred’ +GEF.MultiSite: no visible binding for global variable + ‘tobit2space.model’ +GEF.MultiSite: no visible binding for '<<-' assignment to + ‘conf_tobit2space’ +GEF.MultiSite: no visible binding for global variable + ‘tobit2space_pred’ +GEF.MultiSite: no visible binding for global variable + ‘conf_tobit2space’ +GEF.MultiSite: no visible binding for '<<-' assignment to + ‘samplerNumberOffset_tobit2space’ +GEF.MultiSite: no visible binding for '<<-' assignment to + ‘Rmcmc_tobit2space’ +GEF.MultiSite: no visible binding for '<<-' assignment to + ‘Cmcmc_tobit2space’ +GEF.MultiSite: no visible binding for global variable + ‘Rmcmc_tobit2space’ +GEF.MultiSite: no visible binding for global variable + ‘Cmcmc_tobit2space’ +GEF.MultiSite: no visible binding for global variable + ‘samplerNumberOffset_tobit2space’ +GEF.MultiSite: no visible binding for global variable ‘blocked.dis’ +GEF.MultiSite: no visible global function definition for ‘%>%’ +GEF.MultiSite: no visible binding for 
global variable ‘nt’ +GEF.MultiSite: no visible global function definition for ‘map’ +GEF.MultiSite: no visible global function definition for ‘modify’ +GEF.MultiSite: no visible global function definition for ‘modify_if’ +GEF.MultiSite: no visible binding for global variable ‘distances’ +GEF.MultiSite: no visible binding for global variable ‘obs.mean’ +GEF.MultiSite: no visible binding for global variable + ‘GEF.MultiSite.Nimble’ +GEF.MultiSite: no visible binding for '<<-' assignment to + ‘samplerNumberOffset’ +GEF.MultiSite: no visible binding for '<<-' assignment to ‘Rmcmc’ +GEF.MultiSite: no visible binding for '<<-' assignment to ‘Cmcmc’ +GEF.MultiSite: no visible binding for global variable ‘Rmcmc’ +GEF.MultiSite: no visible binding for global variable ‘Cmcmc’ +GEF.MultiSite: no visible binding for global variable + ‘samplerNumberOffset’ +GEF.MultiSite: no visible global function definition for ‘var’ +generate_colors_sda: no visible binding for '<<-' assignment to ‘pink’ +generate_colors_sda: no visible global function definition for + ‘col2rgb’ +generate_colors_sda: no visible binding for '<<-' assignment to + ‘alphapink’ +generate_colors_sda: no visible global function definition for ‘rgb’ +generate_colors_sda: no visible binding for global variable ‘pink’ +generate_colors_sda: no visible binding for '<<-' assignment to ‘green’ +generate_colors_sda: no visible binding for '<<-' assignment to + ‘alphagreen’ +generate_colors_sda: no visible binding for global variable ‘green’ +generate_colors_sda: no visible binding for '<<-' assignment to ‘blue’ +generate_colors_sda: no visible binding for '<<-' assignment to + ‘alphablue’ +generate_colors_sda: no visible binding for global variable ‘blue’ +generate_colors_sda: no visible binding for '<<-' assignment to + ‘purple’ +generate_colors_sda: no visible binding for '<<-' assignment to + ‘alphapurple’ +generate_colors_sda: no visible binding for global variable ‘purple’ +generate_colors_sda: no visible binding for '<<-' assignment to ‘brown’ +generate_colors_sda: no visible binding for '<<-' assignment to + ‘alphabrown’ +generate_colors_sda: no visible binding for global variable ‘brown’ +get_ensemble_weights: no visible global function definition for + ‘read.csv’ +hop_test: no visible global function definition for ‘run.write.configs’ +hop_test: no visible global function definition for ‘read.table’ +hop_test: no visible global function definition for ‘read.output’ +hop_test: no visible global function definition for ‘pdf’ +hop_test: no visible global function definition for ‘par’ +hop_test: no visible global function definition for ‘plot’ +hop_test: no visible global function definition for ‘points’ +hop_test: no visible global function definition for ‘abline’ +hop_test: no visible global function definition for ‘legend’ +hop_test: no visible global function definition for ‘title’ +hop_test: no visible global function definition for ‘cor’ +hop_test: no visible global function definition for ‘dev.off’ +interactive.plotting.sda: no visible global function definition for + ‘na.omit’ +interactive.plotting.sda: no visible global function definition for + ‘par’ +interactive.plotting.sda : : no visible global function + definition for ‘quantile’ +interactive.plotting.sda: no visible global function definition for + ‘plot’ +interactive.plotting.sda: no visible global function definition for + ‘ciEnvelope’ +interactive.plotting.sda: no visible binding for global variable + ‘alphagreen’ +interactive.plotting.sda: no visible global function 
definition for + ‘lines’ +interactive.plotting.sda: no visible binding for global variable + ‘alphablue’ +interactive.plotting.sda: no visible binding for global variable + ‘alphapink’ +load_data_paleon_sda: no visible global function definition for + ‘src_postgres’ +load_data_paleon_sda: no visible global function definition for + ‘db.query’ +load_data_paleon_sda: no visible global function definition for ‘.’ +load_data_paleon_sda: no visible binding for global variable + ‘MCMC_iteration’ +load_data_paleon_sda: no visible binding for global variable ‘site_id’ +load_data_paleon_sda: no visible global function definition for + ‘match_species_id’ +load_data_paleon_sda: no visible global function definition for + ‘match_pft’ +load_data_paleon_sda: no visible binding for global variable ‘pft.cat’ +load_data_paleon_sda : : no visible global function + definition for ‘cov’ +load_data_paleon_sda: no visible global function definition for ‘CRS’ +load_data_paleon_sda: no visible global function definition for + ‘ncvar_get’ +load_data_paleon_sda : ESS_calc: no visible binding for global variable + ‘var’ +load_data_paleon_sda: no visible global function definition for ‘cov’ +load_data_paleon_sda: no visible global function definition for + ‘na.omit’ +load_nimble: no visible binding for '<<-' assignment to ‘y_star_create’ +load_nimble : : no visible global function definition for + ‘returnType’ +load_nimble: no visible binding for '<<-' assignment to ‘alr’ +load_nimble: no visible binding for '<<-' assignment to ‘inv.alr’ +load_nimble: no visible binding for '<<-' assignment to ‘rwtmnorm’ +load_nimble: no visible binding for '<<-' assignment to ‘dwtmnorm’ +load_nimble: no visible binding for '<<-' assignment to + ‘tobit2space.model’ +load_nimble: no visible binding for global variable ‘N’ +load_nimble: no visible binding for global variable ‘J’ +load_nimble: no visible binding for global variable ‘lambda_0’ +load_nimble: no visible binding for global variable ‘nu_0’ +load_nimble: no visible binding for '<<-' assignment to ‘tobit.model’ +load_nimble: no visible binding for global variable ‘direct_TRUE’ +load_nimble: no visible binding for global variable ‘X_direct_start’ +load_nimble: no visible binding for global variable ‘X_direct_end’ +load_nimble: no visible global function definition for ‘y_star_create’ +load_nimble: no visible binding for global variable ‘X’ +load_nimble: no visible binding for global variable ‘fcomp_TRUE’ +load_nimble: no visible binding for global variable ‘X_fcomp_start’ +load_nimble: no visible binding for global variable ‘X_fcomp_end’ +load_nimble: no visible global function definition for ‘alr’ +load_nimble: no visible binding for global variable + ‘X_fcomp_model_start’ +load_nimble: no visible binding for global variable ‘X_fcomp_model_end’ +load_nimble: no visible binding for global variable ‘pft2total_TRUE’ +load_nimble: no visible binding for global variable ‘X_pft2total_start’ +load_nimble: no visible global function definition for + ‘y_star_create_pft2total’ +load_nimble: no visible binding for global variable + ‘X_pft2total_model_start’ +load_nimble: no visible binding for global variable + ‘X_pft2total_model_end’ +load_nimble: no visible binding for global variable ‘YN’ +load_nimble: no visible binding for '<<-' assignment to + ‘GEF.MultiSite.Nimble’ +load_nimble: no visible binding for global variable ‘q.type’ +load_nimble: no visible binding for global variable ‘qq’ +load_nimble: no visible binding for global variable ‘nH’ +load_nimble: no visible binding for 
global variable ‘X.mod’ +load_nimble: no visible binding for global variable ‘H’ +load_nimble: no visible binding for global variable ‘nNotH’ +load_nimble: no visible binding for global variable ‘NotH’ +load_nimble: no visible binding for '<<-' assignment to + ‘sampler_toggle’ +load_nimble : : no visible global function definition for + ‘nimbleFunctionList’ +load_nimble : : no visible binding for global variable + ‘toggle’ +load_nimble : : no visible binding for global variable + ‘nested_sampler_list’ +Obs.data.prepare.MultiSite: no visible global function definition for + ‘%>%’ +Obs.data.prepare.MultiSite: no visible global function definition for + ‘filter’ +Obs.data.prepare.MultiSite: no visible binding for global variable + ‘Site_ID’ +Obs.data.prepare.MultiSite: no visible global function definition for + ‘na.omit’ +Obs.data.prepare.MultiSite: no visible global function definition for + ‘map_chr’ +Obs.data.prepare.MultiSite: no visible binding for global variable ‘.’ +Obs.data.prepare.MultiSite: no visible global function definition for + ‘map’ +Obs.data.prepare.MultiSite : : no visible global function + definition for ‘%>%’ +Obs.data.prepare.MultiSite : : no visible global function + definition for ‘map’ +Obs.data.prepare.MultiSite : : no visible global function + definition for ‘setNames’ +Obs.data.prepare.MultiSite : : no visible binding for global + variable ‘.’ +Obs.data.prepare.MultiSite: no visible global function definition for + ‘setNames’ +outlier.detector.boxplot: no visible global function definition for + ‘%>%’ +outlier.detector.boxplot: no visible global function definition for + ‘map’ +outlier.detector.boxplot : : no visible global function + definition for ‘%>%’ +outlier.detector.boxplot : : no visible global function + definition for ‘map_dfc’ +outlier.detector.boxplot : : : no visible global + function definition for ‘boxplot’ +outlier.detector.boxplot : : : no visible global + function definition for ‘median’ +piecew.poly.local: no visible binding for global variable ‘rloc’ +post.analysis.ggplot: no visible global function definition for ‘%>%’ +post.analysis.ggplot : : no visible global function + definition for ‘%>%’ +post.analysis.ggplot : : : no visible binding + for global variable ‘quantile’ +post.analysis.ggplot : : : no visible global + function definition for ‘%>%’ +post.analysis.ggplot : : : no visible global + function definition for ‘mutate’ +post.analysis.ggplot : : no visible global function + definition for ‘mutate’ +post.analysis.ggplot: no visible global function definition for + ‘setNames’ +post.analysis.ggplot: no visible global function definition for + ‘bind_rows’ +post.analysis.ggplot: no visible global function definition for ‘walk’ +post.analysis.ggplot: no visible global function definition for + ‘ggplot’ +post.analysis.ggplot: no visible global function definition for ‘aes’ +post.analysis.ggplot: no visible binding for global variable ‘Date’ +post.analysis.ggplot: no visible global function definition for + ‘geom_ribbon’ +post.analysis.ggplot: no visible binding for global variable ‘2.5%’ +post.analysis.ggplot: no visible binding for global variable ‘97.5%’ +post.analysis.ggplot: no visible binding for global variable ‘Type’ +post.analysis.ggplot: no visible global function definition for + ‘geom_line’ +post.analysis.ggplot: no visible binding for global variable ‘means’ +post.analysis.ggplot: no visible global function definition for + ‘geom_point’ +post.analysis.ggplot: no visible global function definition for + ‘scale_fill_manual’ 
+post.analysis.ggplot: no visible binding for global variable + ‘alphapink’ +post.analysis.ggplot: no visible binding for global variable + ‘alphagreen’ +post.analysis.ggplot: no visible binding for global variable + ‘alphablue’ +post.analysis.ggplot: no visible global function definition for + ‘scale_color_manual’ +post.analysis.ggplot: no visible global function definition for + ‘theme_bw’ +post.analysis.ggplot: no visible global function definition for + ‘facet_wrap’ +post.analysis.ggplot: no visible global function definition for ‘theme’ +post.analysis.ggplot: no visible global function definition for + ‘element_blank’ +post.analysis.ggplot: no visible global function definition for ‘labs’ +post.analysis.ggplot: no visible global function definition for ‘pdf’ +post.analysis.ggplot: no visible global function definition for + ‘dev.off’ +post.analysis.ggplot.violin: no visible global function definition for + ‘%>%’ +post.analysis.ggplot.violin : <anonymous>: no visible global function + definition for ‘%>%’ +post.analysis.ggplot.violin : <anonymous> : <anonymous>: no visible + global function definition for ‘%>%’ +post.analysis.ggplot.violin : <anonymous>: no visible global function + definition for ‘mutate’ +post.analysis.ggplot.violin : <anonymous>: no visible binding for + global variable ‘t1’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘Variables’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘Value’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘Type’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘Date’ +post.analysis.ggplot.violin: no visible global function definition for + ‘setNames’ +post.analysis.ggplot.violin: no visible global function definition for + ‘walk’ +post.analysis.ggplot.violin: no visible global function definition for + ‘ggplot’ +post.analysis.ggplot.violin: no visible global function definition for + ‘aes’ +post.analysis.ggplot.violin: no visible global function definition for + ‘geom_ribbon’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘means’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘2.5%’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘97.5%’ +post.analysis.ggplot.violin: no visible global function definition for + ‘geom_line’ +post.analysis.ggplot.violin: no visible global function definition for + ‘geom_violin’ +post.analysis.ggplot.violin: no visible global function definition for + ‘position_dodge’ +post.analysis.ggplot.violin: no visible global function definition for + ‘geom_jitter’ +post.analysis.ggplot.violin: no visible global function definition for + ‘position_jitterdodge’ +post.analysis.ggplot.violin: no visible global function definition for + ‘scale_fill_manual’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘alphapink’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘alphagreen’ +post.analysis.ggplot.violin: no visible binding for global variable + ‘alphablue’ +post.analysis.ggplot.violin: no visible global function definition for + ‘scale_color_manual’ +post.analysis.ggplot.violin: no visible global function definition for + ‘facet_wrap’ +post.analysis.ggplot.violin: no visible global function definition for + ‘theme_bw’ +post.analysis.ggplot.violin: no visible global function definition for + ‘theme’ +post.analysis.ggplot.violin: no visible global function definition for + ‘element_blank’ +post.analysis.ggplot.violin: no visible global function definition for + ‘labs’ 
+post.analysis.ggplot.violin: no visible global function definition for + ‘pdf’ +post.analysis.ggplot.violin: no visible global function definition for + ‘dev.off’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘installed.packages’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘%>%’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘map’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘%>%’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘%>%’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘map_df’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous> : + <anonymous>: no visible global function definition for ‘%>%’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous> : + <anonymous>: no visible global function definition for ‘mutate’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Variable’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Value’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Site’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘group_by’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘summarise’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘quantile’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘mutate’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘setNames’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘map_dfr’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Variable’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Means’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Site’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘right_join’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Sd’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘Sd’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘bind_rows’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘map_df’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘map_df’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Date’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘mutate’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘walk’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘filter’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘ggplot’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘aes’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Date’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘geom_ribbon’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Lower’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Upper’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘Type’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘geom_line’ +post.analysis.multisite.ggplot : <anonymous>: no 
visible global + function definition for ‘geom_point’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘scale_fill_manual’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘alphabrown’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘alphapink’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘alphagreen’ +post.analysis.multisite.ggplot : <anonymous>: no visible binding for + global variable ‘alphablue’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘scale_color_manual’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘theme_bw’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘labs’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘theme’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘element_blank’ +post.analysis.multisite.ggplot : <anonymous>: no visible global + function definition for ‘facet_wrap’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘filter’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘ggplot’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘aes’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘geom_ribbon’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Lower’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Upper’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Type’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘geom_line’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘Means’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘geom_point’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘scale_fill_manual’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘alphabrown’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘alphapink’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘alphagreen’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + binding for global variable ‘alphablue’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘scale_color_manual’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘theme_bw’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘labs’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘theme’ +post.analysis.multisite.ggplot : <anonymous> : <anonymous>: no visible + global function definition for ‘element_blank’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘map_dfr’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘mutate’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘coordinates<-’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘proj4string<-’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘CRS’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘spTransform’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘Site’ 
+post.analysis.multisite.ggplot: no visible global function definition + for ‘ggplot’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘geom_sf’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘aes’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘NA_L1CODE’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘geom_point’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘Lon’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘Lat’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘Name’ +post.analysis.multisite.ggplot: no visible binding for global variable + ‘Data’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘scale_fill_manual’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘scale_color_manual’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘theme_minimal’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘theme’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘element_blank’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘pdf’ +post.analysis.multisite.ggplot: no visible global function definition + for ‘dev.off’ +postana.bias.plotting.sda: no visible global function definition for + ‘pdf’ +postana.bias.plotting.sda : <anonymous>: no visible global function + definition for ‘quantile’ +postana.bias.plotting.sda: no visible global function definition for + ‘lm’ +postana.bias.plotting.sda: no visible global function definition for + ‘plot’ +postana.bias.plotting.sda: no visible global function definition for + ‘ciEnvelope’ +postana.bias.plotting.sda: no visible binding for global variable + ‘alphabrown’ +postana.bias.plotting.sda: no visible global function definition for + ‘abline’ +postana.bias.plotting.sda: no visible global function definition for + ‘mtext’ +postana.bias.plotting.sda: no visible binding for global variable + ‘alphapurple’ +postana.bias.plotting.sda: no visible global function definition for + ‘dev.off’ +postana.bias.plotting.sda.corr: no visible global function definition + for ‘pdf’ +postana.bias.plotting.sda.corr: no visible global function definition + for ‘cov2cor’ +postana.bias.plotting.sda.corr: no visible global function definition + for ‘par’ +postana.bias.plotting.sda.corr: no visible global function definition + for ‘corrplot’ +postana.bias.plotting.sda.corr: no visible global function definition + for ‘plot’ +postana.bias.plotting.sda.corr: no visible global function definition + for ‘dev.off’ +postana.timeser.plotting.sda: no visible global function definition for + ‘pdf’ +postana.timeser.plotting.sda: no visible global function definition for + ‘%>%’ +postana.timeser.plotting.sda : <anonymous>: no visible global function + definition for ‘quantile’ +postana.timeser.plotting.sda: no visible global function definition for + ‘plot’ +postana.timeser.plotting.sda: no visible global function definition for + ‘ciEnvelope’ +postana.timeser.plotting.sda: no visible binding for global variable + ‘alphagreen’ +postana.timeser.plotting.sda: no visible global function definition for + ‘lines’ +postana.timeser.plotting.sda: no visible binding for global variable + ‘alphablue’ +postana.timeser.plotting.sda: no visible binding for global variable + ‘alphapink’ +postana.timeser.plotting.sda: no visible global function definition for + 
‘legend’ +postana.timeser.plotting.sda: no visible global function definition for + ‘dev.off’ +Remote.Sync.launcher: no visible global function definition for + ‘read.settings’ +sample_met: no visible binding for global variable ‘host’ +sample.parameters: no visible global function definition for ‘db.query’ +sample.parameters: no visible binding for global variable ‘post.distns’ +SDA_remote_launcher: no visible global function definition for + ‘read.settings’ +SDA_remote_launcher: no visible global function definition for + ‘test_remote’ +SDA_remote_launcher: no visible global function definition for + ‘remote.execute.R’ +SDA_remote_launcher: no visible global function definition for + ‘remote.copy.to’ +SDA_remote_launcher: no visible global function definition for + ‘is.MultiSettings’ +SDA_remote_launcher: no visible global function definition for ‘%>%’ +SDA_remote_launcher: no visible global function definition for ‘map’ +SDA_remote_launcher: no visible global function definition for + ‘map_lgl’ +SDA_remote_launcher : <anonymous>: no visible global function + definition for ‘remote.execute.R’ +SDA_remote_launcher: no visible global function definition for ‘walk’ +SDA_remote_launcher : <anonymous>: no visible global function + definition for ‘remote.copy.to’ +SDA_remote_launcher: no visible global function definition for + ‘qsub_get_jobid’ +SDA_remote_launcher: no visible binding for global variable + ‘stop.on.error’ +sda.enkf: no visible global function definition for ‘%>%’ +sda.enkf: no visible binding for global variable ‘ensemble.samples’ +sda.enkf: no visible global function definition for + ‘write.ensemble.configs’ +sda.enkf: no visible global function definition for ‘cov’ +sda.enkf: no visible global function definition for ‘logger.severe’ +sda.enkf: no visible binding for global variable ‘H’ +sda.enkf: no visible global function definition for ‘rmvnorm’ +sda.enkf.multisite: no visible binding for global variable + ‘multiprocess’ +sda.enkf.multisite: no visible global function definition for ‘%>%’ +sda.enkf.multisite: no visible global function definition for ‘map’ +sda.enkf.multisite: no visible global function definition for ‘map_dfr’ +sda.enkf.multisite: no visible global function definition for ‘tail’ +sda.enkf.multisite: no visible binding for global variable + ‘ensemble.samples’ +sda.enkf.multisite : <anonymous>: no visible global function definition + for ‘input.ens.gen’ +sda.enkf.multisite : <anonymous>: no visible global function definition + for ‘write.ensemble.configs’ +sda.enkf.multisite : <anonymous>: no visible binding for global + variable ‘ensemble.samples’ +sda.enkf.multisite: no visible global function definition for + ‘setNames’ +sda.enkf.multisite : <anonymous>: no visible global function definition + for ‘%>%’ +sda.enkf.multisite : <anonymous> : <anonymous>: no visible global + function definition for ‘%>%’ +sda.enkf.multisite : <anonymous> : <anonymous>: no visible global + function definition for ‘map_df’ +sda.enkf.multisite : <anonymous>: no visible global function definition + for ‘setNames’ +sda.enkf.multisite: no visible global function definition for ‘map_dfc’ +sda.enkf.multisite: no visible global function definition for ‘rmvnorm’ +sda.enkf.multisite: no visible global function definition for + ‘write.csv’ +sda.enkf.original: no visible global function definition for ‘%>%’ +sda.enkf.original : <anonymous>: no visible global function definition + for ‘%>%’ +sda.enkf.original : <anonymous> : <anonymous>: no visible global + function definition for ‘%>%’ +sda.enkf.original: no visible global function definition for ‘db.open’ +sda.enkf.original: no visible global function definition for ‘is’ +sda.enkf.original: 
no visible global function definition for ‘db.close’ +sda.enkf.original: no visible global function definition for + ‘check.workflow.settings’ +sda.enkf.original: no visible global function definition for ‘db.query’ +sda.enkf.original: no visible global function definition for + ‘get.parameter.samples’ +sda.enkf.original: no visible global function definition for ‘tail’ +sda.enkf.original : wish.df: no visible global function definition for + ‘var’ +sda.enkf.original : <anonymous>: no visible global function definition + for ‘nimbleFunctionList’ +sda.enkf.original : <anonymous>: no visible binding for global variable + ‘toggle’ +sda.enkf.original : <anonymous>: no visible binding for global variable + ‘nested_sampler_list’ +sda.enkf.original : <anonymous>: no visible global function definition + for ‘returnType’ +sda.enkf.original: no visible binding for global variable ‘N’ +sda.enkf.original: no visible binding for global variable ‘YN’ +sda.enkf.original: no visible binding for global variable ‘J’ +sda.enkf.original: no visible binding for global variable ‘lambda_0’ +sda.enkf.original: no visible binding for global variable ‘nu_0’ +sda.enkf.original: no visible global function definition for ‘col2rgb’ +sda.enkf.original: no visible global function definition for ‘rgb’ +sda.enkf.original: no visible global function definition for ‘cov’ +sda.enkf.original: no visible global function definition for ‘na.omit’ +sda.enkf.original: no visible global function definition for ‘pdf’ +sda.enkf.original: no visible global function definition for ‘dev.off’ +sda.enkf.original: no visible global function definition for + ‘logger.warn’ +sda.enkf.original: no visible global function definition for ‘rnorm’ +sda.enkf.original: no visible global function definition for ‘quantile’ +sda.enkf.original: no visible global function definition for ‘rmvnorm’ +sda.enkf.original: no visible global function definition for ‘par’ +sda.enkf.original : <anonymous>: no visible global function definition + for ‘quantile’ +sda.enkf.original: no visible global function definition for ‘plot’ +sda.enkf.original: no visible global function definition for + ‘ciEnvelope’ +sda.enkf.original: no visible global function definition for ‘lines’ +sda.enkf.original: no visible global function definition for ‘legend’ +sda.enkf.original: no visible global function definition for ‘lm’ +sda.enkf.original: no visible global function definition for ‘abline’ +sda.enkf.original: no visible global function definition for ‘mtext’ +sda.enkf.original: no visible global function definition for + ‘tableGrob’ +sda.enkf.original: no visible global function definition for + ‘grid.arrange’ +sda.enkf.original: no visible global function definition for ‘cov2cor’ +sda.enkf.original: no visible global function definition for ‘corrplot’ +sda.particle: no visible global function definition for ‘read.output’ +sda.particle: no visible binding for global variable ‘settings’ +sda.particle: no visible global function definition for + ‘read.ensemble.ts’ +sda.particle: no visible global function definition for ‘rnorm’ +sda.particle: no visible binding for global variable ‘yrvec’ +sda.particle: no visible binding for global variable ‘mNPP’ +sda.particle: no visible binding for global variable ‘sNPP’ +sda.particle: no visible global function definition for ‘dnorm’ +sda.particle: no visible global function definition for ‘weighted.mean’ +sda.particle: no visible binding for global variable ‘quantile’ +sda.particle: no visible global function definition for ‘pdf’ +sda.particle: no visible binding for global variable ‘outfolder’ 
+sda.particle: no visible global function definition for ‘par’ +sda.particle: no visible global function definition for ‘plot’ +sda.particle: no visible global function definition for ‘lines’ +sda.particle: no visible global function definition for ‘legend’ +sda.particle: no visible global function definition for ‘abline’ +sda.particle: no visible global function definition for ‘hist’ +sda.particle: no visible global function definition for ‘weighted.hist’ +sda.particle: no visible global function definition for ‘dev.off’ +simple.local: no visible binding for global variable ‘rloc’ +Undefined global functions or variables: + . %>% 2.5% 97.5% abline aes alphablue alphabrown alphagreen alphapink + alphapurple alr bind_rows blocked.dis blue boxplot brown + check.workflow.settings ciEnvelope Cmcmc Cmcmc_tobit2space col2rgb + conf_tobit2space constants.tobit constants.tobit2space coordinates<- + cor corrplot cov cov2cor CRS Data data.tobit data.tobit2space Date + db.close db.open db.query dev.off dimensions.tobit direct_TRUE + distances dnorm element_blank ensemble.samples facet_wrap fcomp_TRUE + filter GEF.MultiSite.Nimble geom_jitter geom_line geom_point + geom_ribbon geom_sf geom_violin get.parameter.samples ggplot green + grid.arrange group_by H hist host input.ens.gen installed.packages is + is.MultiSettings J labs lambda_0 Lat legend lines lm logger.severe + logger.warn Lon Lower map map_chr map_dbl map_df map_dfc map_dfr + map_lgl match_pft match_species_id MCMC_iteration means Means median + mNPP modify modify_if mtext multiprocess mutate N NA_L1CODE na.omit + Name ncvar_get nested_sampler_list nH nimbleFunctionList nNotH NotH + nt nu_0 obs.mean outfolder par pdf pft.cat pft2total_TRUE pink plot + plot.new points position_dodge position_jitterdodge post.distns + proj4string<- purple q.type qq qsub_get_jobid quantile read.csv + read.ensemble.ts read.output read.settings read.table remote.copy.to + remote.execute.R returnType rgb right_join rloc Rmcmc + Rmcmc_tobit2space rmvnorm rnorm run.write.configs samplerNumberOffset + samplerNumberOffset_tobit2space scale_color_manual scale_fill_manual + Sd setNames settings Site site_id Site_ID site.ids sNPP spTransform + src_postgres stop.on.error summarise t1 tableGrob tail test_remote + theme theme_bw theme_minimal TimeElapsed title tobit.model + tobit2space_pred tobit2space.model toggle Type Upper Value var Var1 + Var2 Variable Variables walk weighted.hist weighted.mean write.csv + write.ensemble.configs write.table X X_direct_end X_direct_start + X_fcomp_end X_fcomp_model_end X_fcomp_model_start X_fcomp_start + X_pft2total_model_end X_pft2total_model_start X_pft2total_start X.mod + y_star_create y_star_create_pft2total YN yrvec +Consider adding + importFrom("graphics", "abline", "boxplot", "hist", "legend", "lines", + "mtext", "par", "plot", "plot.new", "points", "title") + importFrom("grDevices", "col2rgb", "dev.off", "pdf", "rgb") + importFrom("methods", "is") + importFrom("stats", "cor", "cov", "cov2cor", "dnorm", "filter", "lm", + "median", "na.omit", "quantile", "rnorm", "setNames", "var", + "weighted.mean") + importFrom("utils", "installed.packages", "read.csv", "read.table", + "tail", "write.csv", "write.table") +to your NAMESPACE file (and ensure that your DESCRIPTION Imports field +contains 'methods'). 
+ +Found the following assignments to the global environment: +File ‘PEcAn.assim.sequential/R/Analysis_sda_multiSite.R’: + assign(name, dots[[name]], pos = 1) +File ‘PEcAn.assim.sequential/R/Analysis_sda.R’: + assign(name, dots[[name]], pos = 1) +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘sample_met’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'Contruct.Pf' + ‘t’ ‘blocked.dis’ ‘...’ + +Undocumented arguments in documentation object 'EnKF.MultiSite' + ‘setting’ +Documented arguments not in \usage in documentation object 'EnKF.MultiSite': + ‘settings’ + +Undocumented arguments in documentation object 'EnKF' + ‘setting’ +Documented arguments not in \usage in documentation object 'EnKF': + ‘settings’ + +Undocumented arguments in documentation object 'GEF' + ‘H’ ‘setting’ + +Undocumented arguments in documentation object 'SDA_remote_launcher' + ‘run.bash.args’ + +Undocumented arguments in documentation object 'hop_test' + ‘ens.runid’ + +Undocumented arguments in documentation object 'interactive.plotting.sda' + ‘obs.times’ ‘aqq’ ‘bqq’ ‘facetg’ ‘readsFF’ +Documented arguments not in \usage in documentation object 'interactive.plotting.sda': + ‘obs.time’ + +Undocumented arguments in documentation object 'sda.enkf' + ‘...’ +Duplicated \argument entries in documentation object 'sda.enkf': + ‘settings’ ‘obs.mean’ ‘obs.cov’ ‘Q’ ‘restart’ + +Undocumented arguments in documentation object 'sda.enkf.multisite' + ‘...’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* DONE +Status: 4 WARNINGs, 2 NOTEs diff --git a/modules/benchmark/tests/Rcheck_reference.log b/modules/benchmark/tests/Rcheck_reference.log new file mode 100644 index 00000000000..9d2837f11c2 --- /dev/null +++ b/modules/benchmark/tests/Rcheck_reference.log @@ -0,0 +1,373 @@ +* using log directory ‘/tmp/RtmppGOd8Z/PEcAn.benchmark.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-manual --as-cran’ +* checking for file ‘PEcAn.benchmark/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.benchmark’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.benchmark’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... 
NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'library' or 'require' calls not declared from: + ‘bd’ ‘dplyr’ ‘lubridate’ ‘PEcAn.utils’ ‘udunits2’ +'library' or 'require' calls in package code: + ‘bd’ ‘dplyr’ ‘lubridate’ ‘PEcAn.utils’ ‘udunits2’ + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +Warning: <anonymous>: ... may be used in an incorrect context: ‘Set_Bench(...)’ + +Warning: <anonymous>: ... may be used in an incorrect context: ‘Compare_Bench(...)’ + +add_workflow_info: no visible global function definition for + ‘is.MultiSettings’ +add_workflow_info: no visible global function definition for ‘papply’ +add_workflow_info: no visible binding for global variable + ‘add_workflow_id’ +add_workflow_info: no visible global function definition for ‘%>%’ +add_workflow_info: no visible binding for global variable ‘id’ +add_workflow_info: no visible binding for global variable ‘workflow_id’ +add_workflow_info: no visible binding for global variable ‘.’ +align_data: possible error in round(obvs.calc$posix, units = + coarse.unit): unused argument (units = coarse.unit) +align_data: possible error in round(model.calc$posix, units = + coarse.unit): unused argument (units = coarse.unit) +align_data: no visible global function definition for ‘%>%’ +align_data: no visible binding for global variable ‘.’ +align_data: no visible binding for global variable ‘round.posix’ +align_data: no visible global function definition for ‘one_of’ +align_data_to_data_pft: no visible global function definition for + ‘logger.severe’ +bm_settings2pecan_settings: no visible global function definition for + ‘is.MultiSettings’ +bm_settings2pecan_settings: no visible global function definition for + ‘papply’ +calc_benchmark: no visible global function definition for ‘%>%’ +calc_benchmark: no visible binding for global variable ‘workflow_id’ +calc_benchmark: no visible binding for global variable + ‘reference_run_id’ +calc_benchmark: no visible binding for global variable ‘ensemble_id’ +calc_benchmark: no visible binding for global variable ‘model_id’ +calc_benchmark: no visible global function definition for ‘db.query’ +calc_benchmark: no visible binding for global variable ‘id’ +calc_benchmark: no visible global function definition for ‘left_join’ +calc_benchmark: no visible global function definition for ‘one_of’ +calc_benchmark: no visible binding for global variable ‘benchmark_id’ +calc_benchmark: no visible global 
function definition for ‘read.output’ +calc_benchmark: no visible binding for global variable ‘variable_id’ +calc_benchmark: no visible binding for global variable ‘.’ +calc_benchmark: no visible binding for global variable ‘metric’ +calc_benchmark: no visible binding for global variable + ‘benchmarks_ensemble_id’ +calc_benchmark: no visible binding for global variable ‘metric_id’ +calc_benchmark: no visible global function definition for ‘pdf’ +calc_benchmark: no visible global function definition for ‘dev.off’ +calc_metrics: no visible global function definition for ‘tail’ +check_BRR: no visible global function definition for ‘%>%’ +check_BRR: no visible binding for global variable ‘settings’ +clean_settings_BRR: no visible global function definition for + ‘is.MultiSettings’ +clean_settings_BRR: no visible global function definition for + ‘logger.error’ +create_BRR: no visible global function definition for ‘%>%’ +create_BRR: no visible binding for global variable ‘.’ +create_BRR: no visible binding for global variable ‘id’ +define_benchmark: no visible global function definition for + ‘is.MultiSettings’ +define_benchmark: no visible global function definition for ‘papply’ +define_benchmark: no visible global function definition for ‘%>%’ +define_benchmark: no visible binding for global variable ‘id’ +define_benchmark: no visible binding for global variable ‘ensemble_id’ +define_benchmark: no visible global function definition for ‘left_join’ +define_benchmark: no visible binding for global variable ‘.’ +define_benchmark: no visible global function definition for + ‘logger.error’ +define_benchmark: no visible global function definition for + ‘logger.debug’ +define_benchmark: no visible global function definition for ‘pull’ +define_benchmark: no visible binding for global variable ‘input_id’ +define_benchmark: no visible binding for global variable ‘variable_id’ +define_benchmark: no visible global function definition for ‘db.query’ +define_benchmark: no visible binding for global variable ‘benchmark_id’ +define_benchmark: no visible binding for global variable + ‘reference_run_id’ +define_benchmark: no visible binding for global variable ‘metric_id’ +get_species_list_standard: no visible binding for global variable + ‘custom_table’ +get_species_list_standard: no visible global function definition for + ‘logger.warn’ +load_csv: no visible global function definition for ‘read.csv’ +load_csv: no visible binding for global variable ‘header’ +load_csv: no visible global function definition for ‘one_of’ +load_data: no visible global function definition for ‘get_key’ +load_data: no visible binding for global variable ‘username’ +load_data: no visible binding for global variable ‘password’ +load_data: no visible global function definition for ‘get_token’ +load_data: no visible global function definition for ‘convert_file’ +load_data: no visible binding for global variable ‘output_path’ +load_data: no visible global function definition for + ‘misc.are.convertible’ +load_data: no visible global function definition for ‘misc.convert’ +load_data: no visible global function definition for ‘one_of’ +load_data: no visible global function definition for ‘%>%’ +load_data: no visible binding for global variable ‘.’ +load_data: no visible binding for global variable ‘year’ +load_rds: no visible global function definition for ‘one_of’ +load_tab_separated_values: no visible global function definition for + ‘read.table’ +load_tab_separated_values: no visible binding for global variable + ‘header’ 
+load_tab_separated_values: no visible global function definition for + ‘one_of’ +load_x_netcdf: no visible global function definition for ‘str_detect’ +load_x_netcdf: no visible global function definition for + ‘str_split_fixed’ +load_x_netcdf: no visible global function definition for ‘%>%’ +match_timestep: no visible global function definition for ‘head’ +metric_cor: no visible global function definition for ‘cor’ +metric_Frechet: no visible global function definition for ‘logger.info’ +metric_Frechet: no visible global function definition for ‘na.omit’ +metric_lmDiag_plot: no visible global function definition for ‘lm’ +metric_lmDiag_plot: no visible global function definition for ‘aes’ +metric_lmDiag_plot: no visible binding for global variable ‘.fitted’ +metric_lmDiag_plot: no visible binding for global variable ‘.resid’ +metric_lmDiag_plot: no visible global function definition for ‘qqnorm’ +metric_lmDiag_plot: no visible binding for global variable ‘.stdresid’ +metric_lmDiag_plot: no visible global function definition for ‘qqline’ +metric_lmDiag_plot: no visible binding for global variable ‘.cooksd’ +metric_lmDiag_plot: no visible binding for global variable ‘.hat’ +metric_lmDiag_plot: no visible global function definition for ‘pdf’ +metric_lmDiag_plot: no visible global function definition for ‘plot’ +metric_lmDiag_plot: no visible global function definition for ‘dev.off’ +metric_PPMC: no visible global function definition for ‘cor’ +metric_R2: no visible global function definition for ‘lm’ +metric_RAE: no visible global function definition for ‘na.omit’ +metric_residual_plot: no visible binding for global variable ‘time’ +metric_residual_plot: no visible global function definition for ‘aes’ +metric_residual_plot: no visible binding for global variable ‘zeros’ +metric_residual_plot: no visible global function definition for ‘pdf’ +metric_residual_plot: no visible global function definition for ‘plot’ +metric_residual_plot: no visible global function definition for + ‘dev.off’ +metric_scatter_plot: no visible global function definition for ‘aes’ +metric_scatter_plot: no visible binding for global variable ‘model’ +metric_scatter_plot: no visible binding for global variable ‘obvs’ +metric_scatter_plot: no visible global function definition for ‘pdf’ +metric_scatter_plot: no visible global function definition for ‘plot’ +metric_scatter_plot: no visible global function definition for + ‘dev.off’ +metric_timeseries_plot: no visible global function definition for ‘aes’ +metric_timeseries_plot: no visible binding for global variable ‘time’ +metric_timeseries_plot: no visible binding for global variable ‘model’ +metric_timeseries_plot: no visible binding for global variable ‘obvs’ +metric_timeseries_plot: no visible global function definition for ‘pdf’ +metric_timeseries_plot: no visible global function definition for + ‘plot’ +metric_timeseries_plot: no visible global function definition for + ‘dev.off’ +pecan_bench: no visible global function definition for + ‘validation_check’ +pecan_bench: no visible global function definition for ‘read.table’ +pecan_bench: no visible global function definition for ‘size’ +pecan_bench: no visible binding for global variable ‘comp_value’ +pecan_bench: no visible binding for global variable + ‘comp_dif_uncertainty’ +pecan_bench: no visible binding for global variable + ‘ratio_dif_uncertainty’ +pecan_bench: no visible binding for global variable ‘ratio_dif’ +pecan_bench: no visible global function definition for ‘Set_Bench’ +pecan_bench: ... 
may be used in an incorrect context: ‘Set_Bench(...)’ +pecan_bench: no visible global function definition for ‘Compare_Bench’ +pecan_bench: ... may be used in an incorrect context: + ‘Compare_Bench(...)’ +read_settings_BRR: no visible global function definition for ‘%>%’ +read_settings_BRR: no visible binding for global variable ‘id’ +read_settings_BRR: no visible global function definition for ‘pull’ +read_settings_BRR: no visible global function definition for + ‘xmlToList’ +read_settings_BRR: no visible binding for global variable ‘.’ +Undefined global functions or variables: + . .cooksd .fitted .hat .resid .stdresid %>% add_workflow_id aes + benchmark_id benchmarks_ensemble_id comp_dif_uncertainty comp_value + Compare_Bench convert_file cor custom_table db.query dev.off + ensemble_id get_key get_token head header id input_id + is.MultiSettings left_join lm logger.debug logger.error logger.info + logger.severe logger.warn metric metric_id misc.are.convertible + misc.convert model model_id na.omit obvs one_of output_path papply + password pdf plot pull qqline qqnorm ratio_dif ratio_dif_uncertainty + read.csv read.output read.table reference_run_id round.posix + Set_Bench settings size str_detect str_split_fixed tail time username + validation_check variable_id workflow_id xmlToList year zeros +Consider adding + importFrom("graphics", "plot") + importFrom("grDevices", "dev.off", "pdf") + importFrom("stats", "cor", "lm", "na.omit", "qqline", "qqnorm", "time") + importFrom("utils", "head", "read.csv", "read.table", "tail") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'align_by_first_observation.Rd': + \examples lines wider than 100 characters: + aligned<-align_by_first_observation(observation_one = observation_one, observation_two = observation_two, + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'align_data' + ‘align_method’ + +Undocumented arguments in documentation object 'align_pft' + ‘comparison_type’ ‘...’ + +Undocumented arguments in documentation object 'calc_benchmark' + ‘settings’ ‘start_year’ ‘end_year’ +Documented arguments not in \usage in documentation object 'calc_benchmark': + ‘bm.ensemble’ + +Undocumented arguments in documentation object 'check_if_legal_table' + ‘table’ +Documented arguments not in \usage in documentation object 'check_if_legal_table': + ‘custom_table’ + +Undocumented arguments in documentation object 'check_if_list_of_pfts' + ‘vars’ +Documented arguments not in \usage in documentation object 'check_if_list_of_pfts': + ‘observation_one’ ‘observation_two’ ‘custom_table’ + +Undocumented arguments in documentation object 'check_if_species_list' + ‘vars’ +Documented arguments not in \usage in documentation object 'check_if_species_list': + ‘observation_one’ ‘observation_two’ + +Undocumented arguments in documentation object 'create_BRR' + ‘user_id’ + +Undocumented arguments in documentation object 'define_benchmark' + ‘settings’ ‘bety’ +Documented arguments not in \usage in documentation object 'define_benchmark': + ‘bm.settings’ + +Undocumented arguments in documentation object 'format_wide2long' + ‘time.row’ + +Undocumented arguments in documentation object 'get_species_list_standard' + ‘vars’ +Documented arguments not in \usage in documentation object 'get_species_list_standard': + ‘observation_one’ ‘observation_two’ ‘custom_table’ + +Undocumented arguments in documentation object 'load_csv' + ‘vars’ +Documented arguments not in \usage in documentation object 'load_csv': + ‘start_year’ ‘end_year’ + +Undocumented arguments in documentation object 'load_data' + ‘vars.used.index’ ‘...’ + +Undocumented arguments in documentation object 'load_tab_separated_values' + ‘vars’ +Documented arguments not in \usage in documentation object 'load_tab_separated_values': + ‘start_year’ ‘end_year’ + +Documented arguments not in \usage in documentation object 'load_x_netcdf': + ‘start_year’ ‘end_year’ + +Undocumented arguments in documentation object 'metric_AME' + ‘...’ + +Undocumented arguments in documentation object 'metric_Frechet' + ‘...’ + +Undocumented arguments in documentation object 'metric_MAE' + ‘...’ + +Undocumented arguments in documentation object 'metric_MSE' + ‘...’ + +Undocumented arguments in documentation object 'metric_PPMC' + ‘...’ + +Undocumented arguments in documentation object 'metric_R2' + ‘...’ + +Undocumented arguments in documentation object 'metric_RAE' + ‘...’ + +Undocumented arguments in documentation object 'metric_RMSE' + ‘...’ + +Undocumented arguments in documentation object 'metric_cor' + ‘...’ + +Undocumented arguments in documentation object 'metric_lmDiag_plot' + ‘var’ ‘filename’ ‘draw.plot’ + +Undocumented arguments in documentation object 'metric_residual_plot' + ‘metric_dat’ ‘var’ ‘filename’ + +Undocumented arguments in documentation object 'metric_scatter_plot' + ‘metric_dat’ ‘var’ ‘filename’ + +Undocumented arguments in documentation object 'metric_timeseries_plot' + ‘metric_dat’ ‘var’ ‘filename’ ‘draw.plot’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... 
WARNING +Argument items with no description in Rd object 'format_wide2long': + ‘vars_used’ + +Argument items with no description in Rd object 'load_rds': + ‘vars’ + +Argument items with no description in Rd object 'metric_residual_plot': + ‘draw.plot’ + +Argument items with no description in Rd object 'metric_scatter_plot': + ‘draw.plot’ + +* checking for unstated dependencies in examples ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... + OK +* DONE +Status: 3 WARNINGs, 3 NOTEs diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log new file mode 100644 index 00000000000..336d032323e --- /dev/null +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -0,0 +1,488 @@ +* using log directory ‘/tmp/Rtmp02RC5y/PEcAn.data.atmosphere.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.data.atmosphere/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.data.atmosphere’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.data.atmosphere’ can be installed ... WARNING +Found the following significant warnings: + Warning: replacing previous import ‘dplyr::last’ by ‘xts::last’ when loading ‘PEcAn.data.atmosphere’ + Warning: replacing previous import ‘dplyr::first’ by ‘xts::first’ when loading ‘PEcAn.data.atmosphere’ +See ‘/tmp/Rtmp02RC5y/PEcAn.data.atmosphere.Rcheck/00install.out’ for details. +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Package listed in more than one of Depends, Imports, Suggests, Enhances: + ‘xts’ +A package should be listed in only one of these fields. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... WARNING +Found the following file with non-ASCII characters: + download.PalEON.R +Portable packages must use only ASCII characters in their R code, +except perhaps in comments. +Use \uxxxx escapes for other characters. +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... 
WARNING +'::' or ':::' import not declared from: ‘raster’ +'library' or 'require' calls not declared from: + ‘car’ ‘MASS’ ‘mgcv’ +'library' or 'require' calls in package code: + ‘car’ ‘MASS’ ‘mgcv’ + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +Package in Depends field not imported from: ‘methods’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +cfmet.downscale.daily: no visible binding for global variable ‘doy’ +cfmet.downscale.daily: no visible binding for global variable ‘I.dir’ +cfmet.downscale.daily: no visible binding for global variable ‘I.diff’ +cfmet.downscale.daily: no visible binding for global variable ‘Itot’ +cfmet.downscale.daily: no visible binding for global variable ‘year’ +cfmet.downscale.daily: no visible binding for global variable + ‘surface_downwelling_shortwave_flux_in_air’ +cfmet.downscale.daily: no visible binding for global variable ‘tmin’ +cfmet.downscale.daily: no visible binding for global variable ‘tmax’ +cfmet.downscale.daily: no visible binding for global variable + ‘relative_humidity’ +cfmet.downscale.daily: no visible binding for global variable + ‘air_pressure’ +cfmet.downscale.daily: no visible binding for global variable + ‘air_temperature’ +cfmet.downscale.daily: no visible binding for global variable ‘qmin’ +cfmet.downscale.daily: no visible binding for global variable ‘qmax’ +cfmet.downscale.daily: no visible binding for global variable + ‘pressure’ +cfmet.downscale.daily: no visible binding for global variable ‘rhmin’ +cfmet.downscale.daily: no visible binding for global variable ‘rhmax’ +cfmet.downscale.subdaily: no visible binding for global variable ‘year’ +cfmet.downscale.subdaily: no visible binding for global variable + ‘month’ +cfmet.downscale.subdaily: no visible binding for global variable ‘day’ +cfmet.downscale.subdaily: no visible binding for global variable ‘hour’ +check_met_input_file: no visible binding for global variable + ‘is_required’ +check_met_input_file: no visible binding for global variable + ‘cf_standard_name’ +check_met_input_file: no visible binding for global variable + ‘test_passed’ +check_met_input_file: no visible binding for global variable ‘test_raw’ +check_unit: no visible binding for global variable ‘cf_standard_name’ +col2ncvar: no visible binding for global variable ‘CF_name’ +debias.met.regression: no visible global function definition for ‘sd’ +debias.met.regression: no visible global function definition for + ‘aggregate’ +debias.met.regression: no visible global function definition for + ‘predict’ +debias.met.regression: no visible global function definition for + ‘resid’ +debias.met.regression: no visible global function definition for ‘lm’ +debias.met.regression: no visible global function definition for ‘coef’ +debias.met.regression: no visible global function definition for ‘vcov’ +debias.met.regression: no visible global function definition for + ‘terms’ +debias.met.regression: no visible global function definition for + ‘model.frame’ +debias.met.regression: no visible global function definition for + ‘model.matrix’ +debias.met.regression : <anonymous>: no visible global function + definition for ‘sd’ +debias.met.regression: no visible global function definition for + ‘rnorm’ 
+debias.met.regression: no visible global function definition for + ‘quantile’ +debias.met.regression: no visible binding for global variable + ‘quantile’ +debias.met.regression: no visible binding for global variable ‘Date’ +debias.met.regression: no visible binding for global variable ‘lwr’ +debias.met.regression: no visible binding for global variable ‘upr’ +debias.met.regression: no visible binding for global variable ‘obs’ +debias.met.regression: no visible binding for global variable ‘values’ +debias.met.regression: no visible binding for global variable ‘Year’ +download.NARR_site: no visible binding for global variable ‘year’ +download.NARR_site: no visible binding for global variable ‘data’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘timestamp’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘NOAA.member’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘surface_downwelling_longwave_flux_in_air’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘surface_downwelling_shortwave_flux_in_air’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘specific_humidity’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘air_temperature’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘precipitation_flux’ +download.NOAA_GEFS_downscale: no visible binding for global variable + ‘wind_speed’ +download.US_Syv: no visible global function definition for ‘read.csv’ +download.US_Wlef: no visible global function definition for + ‘read.table’ +downscale_repeat_6hr_to_hrly: no visible binding for global variable + ‘timestamp’ +downscale_ShortWave_to_hrly : downscale_solar_geom: no visible global + function definition for ‘median’ +downscale_ShortWave_to_hrly: no visible binding for global variable + ‘timestamp’ +downscale_ShortWave_to_hrly: no visible binding for global variable + ‘hour’ +downscale_ShortWave_to_hrly: no visible binding for global variable + ‘doy’ +downscale_ShortWave_to_hrly: no visible binding for global variable + ‘rpot’ +downscale_ShortWave_to_hrly: no visible binding for global variable + ‘avg.rpot’ +downscale_ShortWave_to_hrly: no visible binding for global variable + ‘NOAA.member’ +downscale_spline_to_hourly : interpolate: no visible global function + definition for ‘splinefun’ +downscale_spline_to_hourly: no visible binding for global variable ‘.’ +downscale_spline_to_hourly: no visible binding for global variable + ‘NOAA.member’ +downscale_spline_to_hourly: no visible binding for global variable + ‘dscale.member’ +downscale_spline_to_hourly: no visible global function definition for + ‘:=’ +downscale_spline_to_hourly: no visible binding for global variable + ‘days’ +extract.local.CMIP5: no visible binding for global variable ‘GCM’ +extract.nc.ERA5 : <anonymous>: no visible global function definition + for ‘setNames’ +extract.nc.ERA5: no visible global function definition for ‘setNames’ +extract.nc.ERA5 : <anonymous>: no visible binding for global variable + ‘.’ +generate_narr_url: no visible binding for global variable ‘year’ +generate_narr_url: no visible binding for global variable ‘month’ +generate_narr_url: no visible binding for global variable ‘startdate’ +get_cf_variables_table: no visible binding for global variable ‘.attrs’ +get_cf_variables_table: no visible binding for global variable + ‘canonical_units’ +get_cf_variables_table: no visible binding for global variable + ‘description’ +get_NARR_thredds: no visible binding for global 
variable ‘latitude’ +get_NARR_thredds: no visible binding for global variable ‘longitude’ +get_NARR_thredds: no visible binding for global variable ‘flx’ +get_narr_url: no visible binding for global variable ‘NARR_name’ +get.cruncep: no visible binding for global variable ‘Lat’ +get.cruncep: no visible binding for global variable ‘lati’ +get.cruncep: no visible global function definition for + ‘cruncep_dt2weather’ +get.rh: no visible binding for global variable ‘L’ +get.rh: no visible binding for global variable ‘Rw’ +get.weather: no visible global function definition for + ‘cruncep_dt2weather’ +is.land: no visible binding for global variable ‘met.nc’ +lm_ensemble_sims: no visible global function definition for ‘quantile’ +lm_ensemble_sims: no visible binding for global variable ‘mod.save’ +lm_ensemble_sims: no visible global function definition for ‘sd’ +load.cfmet: no visible binding for global variable ‘index’ +load.cfmet: no visible binding for global variable ‘mstmip_vars’ +met_temporal_downscale.Gaussian_ensemble: no visible global function + definition for ‘sd’ +met_temporal_downscale.Gaussian_ensemble: no visible global function + definition for ‘rnorm’ +met_temporal_downscale.Gaussian_ensemble: no visible binding for global + variable ‘temp_max’ +met_temporal_downscale.Gaussian_ensemble: no visible binding for global + variable ‘temp_min’ +met.process: no visible global function definition for ‘get.id’ +met.process: no visible binding for global variable ‘site_id’ +met.process: no visible binding for global variable ‘format_id’ +met.process: no visible binding for global variable ‘name’ +met.process: no visible binding for global variable ‘machine_id’ +met2CF.FACE: no visible binding for global variable ‘x’ +metgapfill: possible error in + round(as.POSIXlt(udunits2::ud.convert(time, tunit$value, + paste("seconds since", origin)), origin = origin, tz = "UTC"), units + = "mins"): unused argument (units = "mins") +metgapfill.NOAA_GEFS: no visible global function definition for + ‘na.omit’ +metgapfill.NOAA_GEFS: no visible global function definition for ‘lm’ +metgapfill.NOAA_GEFS: no visible global function definition for + ‘predict’ +model.train: no visible global function definition for ‘lm’ +model.train: no visible global function definition for ‘coef’ +model.train: no visible global function definition for ‘vcov’ +model.train: no visible global function definition for ‘resid’ +post_process: no visible binding for global variable ‘data’ +post_process: no visible binding for global variable ‘startdate’ +post_process: no visible binding for global variable ‘dhours’ +subdaily_pred: no visible global function definition for ‘model.matrix’ +Undefined global functions or variables: + := . 
.attrs aggregate air_pressure air_temperature avg.rpot
+  canonical_units CF_name cf_standard_name coef cruncep_dt2weather data
+  Date day days description dhours doy dscale.member flx format_id GCM
+  get.id hour I.diff I.dir index is_required Itot L Lat lati latitude
+  lm longitude lwr machine_id median met.nc mod.save model.frame
+  model.matrix month mstmip_vars na.omit name NARR_name NOAA.member obs
+  precipitation_flux predict pressure qmax qmin quantile read.csv
+  read.table relative_humidity resid rhmax rhmin rnorm rpot Rw sd
+  setNames site_id specific_humidity splinefun startdate
+  surface_downwelling_longwave_flux_in_air
+  surface_downwelling_shortwave_flux_in_air temp_max temp_min terms
+  test_passed test_raw timestamp tmax tmin upr values vcov wind_speed x
+  year Year
+Consider adding
+  importFrom("datasets", "pressure")
+  importFrom("stats", "aggregate", "coef", "lm", "median", "model.frame",
+             "model.matrix", "na.omit", "predict", "quantile", "resid",
+             "rnorm", "sd", "setNames", "splinefun", "terms", "vcov")
+  importFrom("utils", "data", "read.csv", "read.table", "timestamp")
+to your NAMESPACE file.
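The base-R imports flagged above are the low-hanging fruit in this log. PEcAn packages generate their NAMESPACE files with roxygen2, so rather than editing NAMESPACE by hand, the equivalent fix is to add `@importFrom` tags (or explicit `pkg::fun()` calls) to the functions that make these calls. A minimal sketch, assuming the tags are attached to `debias.met.regression` (the signature shown is illustrative, not the real one):

```
## Sketch only: clearing "no visible global function definition" NOTEs.
## Regenerate NAMESPACE afterwards with devtools::document().

#' @importFrom stats quantile
#' @importFrom utils read.csv
debias.met.regression <- function(train.data, ...) {
  # quantile() now resolves via the declared import; an explicit
  # stats::quantile(...) call would also work without the roxygen tag.
  stats::quantile(train.data, probs = c(0.025, 0.975), na.rm = TRUE)
}
```

Names such as `lwr`, `upr`, and `NOAA.member` are not functions but columns referenced through non-standard evaluation; those NOTEs are instead silenced with `utils::globalVariables(c("lwr", "upr", "NOAA.member"))` in a package-level source file.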
+* checking Rd files ... OK
+* checking Rd metadata ... OK
+* checking Rd line widths ... NOTE
+Rd file 'build_cf_variables_table_url.Rd':
+  \usage lines wider than 90 characters:
+     url_format_string = "http://cfconventions.org/Data/cf-standard-names/%d/src/src-cf-standard-name-table.xml")
+
+Rd file 'download.NOAA_GEFS.Rd':
+  \examples lines wider than 100 characters:
+     download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, sitename="US-WCr")
+
+Rd file 'download.NOAA_GEFS_downscale.Rd':
+  \examples lines wider than 100 characters:
+     download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, sitename="US-WCr")
+
+These lines will be truncated in the PDF manual.
+* checking Rd cross-references ... WARNING
+Missing link or links in documentation object 'extract.nc.ERA5.Rd':
+  ‘https://confluence.ecmwf.int/display/CKB/ERA5+data+documentation#ERA5datadocumentation-Spatialgrid’
+
+See section 'Cross-references' in the 'Writing R Extensions' manual.
+
+* checking for missing documentation entries ... WARNING
+Undocumented data sets:
+  ‘cruncep_landmask’ ‘FLUXNET.sitemap’ ‘cruncep’ ‘ebifarm’ ‘narr’
+  ‘narr3h’ ‘landmask’ ‘Lat’ ‘Lon’
+All user-level objects in a package should have documentation entries.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
+* checking for code/documentation mismatches ... OK
+* checking Rd \usage sections ... WARNING
+Undocumented arguments in documentation object 'cfmet.downscale.time'
+  ‘lat’ ‘...’
+
+Undocumented arguments in documentation object 'closest_xy'
+  ‘slat’ ‘slon’ ‘infolder’ ‘infile’
+
+Undocumented arguments in documentation object 'daygroup'
+  ‘date’ ‘flx’
+
+Undocumented arguments in documentation object 'db.site.lat.lon'
+  ‘site.id’ ‘con’
+
+Undocumented arguments in documentation object 'debias_met'
+  ‘outfolder’ ‘site_id’ ‘...’
+
+Undocumented arguments in documentation object 'download.Ameriflux'
+  ‘...’
+
+Undocumented arguments in documentation object 'download.AmerifluxLBL'
+  ‘...’
+
+Undocumented arguments in documentation object 'download.FACE'
+  ‘sitename’ ‘outfolder’ ‘start_date’ ‘end_date’ ‘overwrite’ ‘...’
+
+Undocumented arguments in documentation object 'download.Fluxnet2015'
+  ‘username’ ‘...’
+
+Undocumented arguments in documentation object 'download.FluxnetLaThuile'
+  ‘...’
+
+Undocumented arguments in documentation object 'download.GFDL'
+  ‘...’
+
+Undocumented arguments in documentation object 'download.GLDAS'
+  ‘outfolder’ ‘start_date’ ‘end_date’ ‘site_id’ ‘lat.in’ ‘overwrite’
+  ‘verbose’ ‘...’
+
+Undocumented arguments in documentation object 'download.MACA'
+  ‘outfolder’ ‘site_id’ ‘lat.in’ ‘lon.in’ ‘overwrite’ ‘verbose’ ‘...’
+
+Undocumented arguments in documentation object 'download.MsTMIP_NARR'
+  ‘outfolder’ ‘site_id’ ‘lat.in’ ‘lon.in’ ‘overwrite’ ‘verbose’ ‘...’
+
+Undocumented arguments in documentation object 'download.NARR'
+  ‘outfolder’ ‘start_date’ ‘end_date’ ‘...’
+
+Undocumented arguments in documentation object 'download.NARR_site'
+  ‘progress’ ‘...’
+
+Undocumented arguments in documentation object 'download.NEONmet'
+  ‘...’
+
+Undocumented arguments in documentation object 'download.NLDAS'
+  ‘outfolder’ ‘start_date’ ‘end_date’ ‘site_id’ ‘lat.in’ ‘lon.in’
+  ‘overwrite’ ‘verbose’ ‘...’
+
+Undocumented arguments in documentation object 'download.NOAA_GEFS'
+  ‘lat.in’ ‘lon.in’ ‘end_date’
+
+Undocumented arguments in documentation object 'download.NOAA_GEFS_downscale'
+  ‘lat.in’ ‘lon.in’ ‘end_date’
+
+Undocumented arguments in documentation object 'download.PalEON'
+  ‘sitename’ ‘outfolder’ ‘start_date’ ‘overwrite’ ‘...’
+
+Undocumented arguments in documentation object 'download.PalEON_ENS'
+  ‘sitename’ ‘outfolder’ ‘start_date’ ‘overwrite’ ‘...’
+
+Undocumented arguments in documentation object 'extract.nc.ERA5'
+  ‘...’
+
+Undocumented arguments in documentation object 'extract.nc'
+  ‘...’
+
+Undocumented arguments in documentation object 'get.ncvector'
+  ‘var’ ‘lati’ ‘loni’ ‘run.dates’
+
+Undocumented arguments in documentation object 'lm_ensemble_sims'
+  ‘lags.list’
+
+Undocumented arguments in documentation object 'met.process'
+  ‘browndog’
+
+Undocumented arguments in documentation object 'met.process.stage'
+  ‘input.id’ ‘con’
+
+Undocumented arguments in documentation object 'met2CF.ALMA'
+  ‘verbose’
+
+Undocumented arguments in documentation object 'met2CF.Ameriflux'
+  ‘...’
+
+Undocumented arguments in documentation object 'met2CF.AmerifluxLBL'
+  ‘...’
+
+Undocumented arguments in documentation object 'met2CF.FACE'
+  ‘in.path’ ‘in.prefix’ ‘outfolder’ ‘start_date’ ‘end_date’ ‘input.id’
+  ‘site’ ‘format’ ‘...’
+
+Undocumented arguments in documentation object 'met2CF.NARR'
+  ‘in.path’ ‘in.prefix’ ‘outfolder’ ‘...’
+
+Undocumented arguments in documentation object 'met2CF.PalEON'
+  ‘lat’ ‘lon’ ‘verbose’ ‘...’
+
+Undocumented arguments in documentation object 'met2CF.PalEONregional'
+  ‘verbose’ ‘...’
+Duplicated \argument entries in documentation object 'met2CF.PalEONregional':
+  ‘in.path’ ‘in.prefix’ ‘outfolder’ ‘start_date’ ‘end_date’ ‘overwrite’
+
+Undocumented arguments in documentation object 'met2CF.csv'
+  ‘in.path’ ‘in.prefix’ ‘outfolder’ ‘start_date’ ‘end_date’ ‘lat’ ‘lon’
+  ‘overwrite’ ‘...’
+
+Undocumented arguments in documentation object 'met_temporal_downscale.Gaussian_ensemble'
+  ‘outfolder’ ‘site_id’ ‘...’
+
+Undocumented arguments in documentation object 'metgapfill.NOAA_GEFS'
+  ‘...’
+
+Undocumented arguments in documentation object 'metgapfill'
+  ‘...’
+
+Undocumented arguments in documentation object 'model.train'
+  ‘v’ ‘...’
+
+Undocumented arguments in documentation object 'nc.merge'
+  ‘...’
+
+Undocumented arguments in documentation object 'permute.nc'
+  ‘...’
+
+Undocumented arguments in documentation object 'predict_subdaily_met'
+  ‘...’
+
+Undocumented arguments in documentation object 'site.lst'
+  ‘site.id’ ‘con’
+
+Undocumented arguments in documentation object 'site_from_tag'
+  ‘sitename’ ‘tag’
+
+Undocumented arguments in documentation object 'temporal.downscale.functions'
+  ‘...’
+
+Functions with \usage entries need to have the appropriate \alias
+entries, and all their arguments documented.
+The \usage entries must correspond to syntactically valid R code.
+See chapter ‘Writing R documentation files’ in the ‘Writing R
+Extensions’ manual.
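Each entry in this list is cleared by giving the function's roxygen block a `@param` tag for every argument in its signature. A hedged sketch for one of the flagged objects, `site_from_tag` (the descriptions below are placeholders, not the package's real documentation):

```
#' Extract a tag from a site name
#'
#' @param sitename site name string to parse (placeholder description)
#' @param tag      tag to look up within the name (placeholder description)
#' @export
site_from_tag <- function(sitename, tag) {
  # body unchanged; once every formal argument has a matching @param,
  # the "Undocumented arguments" warning for this object goes away
}
```

The Rd files are regenerated from these blocks, so the fix lives in the R source, not in `man/`.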
+* checking Rd contents ... WARNING
+Argument items with no description in Rd object 'download.GLDAS':
+  ‘lon.in’
+
+Argument items with no description in Rd object 'download.PalEON':
+  ‘end_date’
+
+Argument items with no description in Rd object 'download.PalEON_ENS':
+  ‘end_date’
+
+Argument items with no description in Rd object 'gen.subdaily.models':
+  ‘in.prefix’
+
+Argument items with no description in Rd object 'merge_met_variable':
+  ‘start_date’ ‘end_date’ ‘...’
+
+Argument items with no description in Rd object 'met.process.stage':
+  ‘raw.id’
+
+Argument items with no description in Rd object 'met_temporal_downscale.Gaussian_ensemble':
+  ‘in.path’ ‘in.prefix’
+
+Argument items with no description in Rd object 'prepare_narr_year':
+  ‘verbose’
+
+Argument items with no description in Rd object 'split_wind':
+  ‘start_date’ ‘end_date’
+
+* checking for unstated dependencies in examples ... OK
+* checking contents of ‘data’ directory ... OK
+* checking data for non-ASCII characters ... OK
+* checking data for ASCII and uncompressed saves ... WARNING
+
+  Note: significantly better compression could be obtained
+        by using R CMD build --resave-data
+                             old_size new_size compress
+  cruncep_landmask.RData        39Kb      9Kb      xz
+  narr_cruncep_ebifarm.RData   790Kb    595Kb      xz
+* checking files in ‘vignettes’ ... WARNING
+Files in the 'vignettes' directory but no files in 'inst/doc':
+  ‘ameriflux_demo.Rmd’, ‘cfmet_downscaling.Rmd’,
+  ‘compare_narr_cruncep_met.Rmd’, ‘tdm_downscaling.Rmd’
+Package has no Sweave vignette sources and no VignetteBuilder field.
+* checking examples ... OK
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 9 WARNINGs, 3 NOTEs
diff --git a/modules/data.hydrology/tests/Rcheck_reference.log b/modules/data.hydrology/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..02db244eb08
--- /dev/null
+++ b/modules/data.hydrology/tests/Rcheck_reference.log
@@ -0,0 +1,42 @@
+* using log directory ‘/tmp/RtmpirGbcf/PEcAn.data.hydrology.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.data.hydrology/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.data.hydrology’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... OK
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.data.hydrology’ can be installed ... OK
+* checking installed package size ... OK
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
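This DESCRIPTION NOTE recurs across the PEcAn packages (it appears again in the data.land log below) and is resolved by filling in `Authors@R` with at least one `person()` entry that carries both the author (`aut`) and maintainer (`cre`) roles plus a valid email; a sketch with placeholder name and address:

```
Authors@R: c(person("Jane", "Doe",
                    email = "jane.doe@example.edu",
                    role = c("aut", "cre")))
```

The license NOTE is related: `FreeBSD` does not permit the `+ file LICENSE` extension, while the canonical spelling `BSD_2_clause + file LICENSE` does, so switching to the canonical license name is the usual fix.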
+* checking top-level files ... OK
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking examples ... NONE
+* checking for unstated dependencies in ‘tests’ ... OK
+* checking tests ... SKIPPED
+* DONE
+Status: 1 NOTE
diff --git a/modules/data.land/tests/Rcheck_reference.log b/modules/data.land/tests/Rcheck_reference.log
new file mode 100644
index 00000000000..2e561ac0886
--- /dev/null
+++ b/modules/data.land/tests/Rcheck_reference.log
@@ -0,0 +1,1424 @@
+* using log directory ‘/tmp/RtmpVmBk6A/PEcAn.data.land.Rcheck’
+* using R version 3.5.2 (2018-12-20)
+* using platform: x86_64-pc-linux-gnu (64-bit)
+* using session charset: UTF-8
+* using options ‘--no-tests --no-manual --as-cran’
+* checking for file ‘PEcAn.data.land/DESCRIPTION’ ... OK
+* checking extension type ... Package
+* this is package ‘PEcAn.data.land’ version ‘1.7.0’
+* package encoding: UTF-8
+* checking package namespace information ... OK
+* checking package dependencies ... NOTE
+Depends: includes the non-default packages:
+  ‘datapack’ ‘dataone’ ‘PEcAn.DB’ ‘PEcAn.utils’ ‘redland’ ‘sirt’ ‘sf’
+Adding so many packages to the search path is excessive and importing
+selectively is preferable.
+* checking if this is a source package ... OK
+* checking if there is a namespace ... OK
+* checking for executable files ... OK
+* checking for hidden files and directories ... OK
+* checking for portable file names ... OK
+* checking for sufficient/correct file permissions ... OK
+* checking serialization versions ... OK
+* checking whether package ‘PEcAn.data.land’ can be installed ... WARNING
+Found the following significant warnings:
+  Note: possible error in 'write_veg(outfolder, ': unused argument (source)
+See ‘/tmp/RtmpVmBk6A/PEcAn.data.land.Rcheck/00install.out’ for details.
+Information on the location(s) of code generating the ‘Note’s can be
+obtained by re-running with environment variable R_KEEP_PKG_SOURCE set
+to ‘yes’.
+* checking installed package size ... NOTE
+  installed size is 13.4Mb
+  sub-directories of 1Mb or more:
+    data            9.4Mb
+    FIA_allometry   2.7Mb
+* checking package directory ... OK
+* checking DESCRIPTION meta-information ... NOTE
+License components with restrictions not permitted:
+  FreeBSD + file LICENSE
+Authors@R field gives no person with name and roles.
+Authors@R field gives no person with maintainer role, valid email
+address and non-empty name.
+Package listed in more than one of Depends, Imports, Suggests, Enhances:
+  ‘sf’
+A package should be listed in only one of these fields.
+* checking top-level files ... NOTE
+Non-standard file/directory found at top level:
+  ‘contrib’
+* checking for left-over files ... OK
+* checking index information ... OK
+* checking package subdirectories ... OK
+* checking R files for non-ASCII characters ... OK
+* checking R files for syntax errors ... OK
+* checking whether the package can be loaded ... OK
+* checking whether the package can be loaded with stated dependencies ... OK
+* checking whether the package can be unloaded cleanly ... OK
+* checking whether the namespace can be loaded with stated dependencies ... OK
+* checking whether the namespace can be unloaded cleanly ... OK
+* checking loading without being on the library search path ... OK
+* checking dependencies in R code ... WARNING
+'::' or ':::' imports not declared from:
+  ‘coda’ ‘lubridate’ ‘PEcAn.benchmark’ ‘PEcAn.data.atmosphere’
+  ‘PEcAn.visualization’ ‘raster’ ‘RCurl’ ‘traits’ ‘udunits2’
+'library' or 'require' calls not declared from:
+  ‘dplR’ ‘maptools’ ‘mvtnorm’ ‘RCurl’ ‘rjags’
+'library' or 'require' calls in package code:
+  ‘dplR’ ‘fields’ ‘maptools’ ‘mvtnorm’ ‘RCurl’ ‘rgdal’ ‘rjags’
+  Please use :: or requireNamespace() instead.
+  See section 'Suggested packages' in the 'Writing R Extensions' manual.
+Packages in Depends field not imported from:
+  ‘dataone’ ‘datapack’ ‘PEcAn.DB’ ‘PEcAn.utils’ ‘redland’ ‘sf’ ‘sirt’
+  These packages need to be imported from (in the NAMESPACE file)
+  for when this namespace is loaded but not attached.
+* checking S3 generic/method consistency ... WARNING
+subset:
+  function(x, ...)
+subset.layer:
+  function(file, coords, sub.layer, clip, out.dir, out.name)
+
+See section ‘Generic functions and methods’ in the ‘Writing R
+Extensions’ manual.
+
+Found the following apparent S3 methods exported but not registered:
+  subset.layer
+See section ‘Registering S3 methods’ in the ‘Writing R Extensions’
+manual.
+* checking replacement functions ... OK
+* checking foreign function calls ... OK
+* checking R code for possible problems ...
NOTE +BADM_IC_process: no visible global function definition for ‘setNames’ +dataone_download: no visible binding for '<<-' assignment to + ‘newdir_D1’ +dataone_download: no visible binding for global variable ‘newdir_D1’ +diametergrow : plotend: no visible global function definition for + ‘dev.off’ +diametergrow : plotstart: no visible global function definition for + ‘pdf’ +diametergrow : tnorm: no visible global function definition for ‘runif’ +diametergrow : tnorm: no visible global function definition for ‘pnorm’ +diametergrow : tnorm: no visible global function definition for ‘qnorm’ +diametergrow : diamint: no visible global function definition for ‘cov’ +diametergrow : diamint: no visible global function definition for ‘var’ +diametergrow : f.update: no visible global function definition for + ‘rmvnorm’ +diametergrow : f.update: no visible global function definition for + ‘rnorm’ +diametergrow : di.update_new: no visible global function definition for + ‘rgamma’ +diametergrow : di.update_new: no visible binding for global variable + ‘aa’ +diametergrow : di.update: no visible global function definition for + ‘dnorm’ +diametergrow : di.update: no visible global function definition for + ‘runif’ +diametergrow : di.update: no visible global function definition for + ‘rgamma’ +diametergrow : sd.update: no visible global function definition for + ‘rgamma’ +diametergrow : sp.update: no visible global function definition for + ‘rgamma’ +diametergrow : se.update: no visible global function definition for + ‘rgamma’ +diametergrow: no visible binding for global variable ‘settings’ +diametergrow: no visible global function definition for ‘rnorm’ +diametergrow: no visible global function definition for ‘rgamma’ +diametergrow: no visible binding for global variable ‘quantile’ +diametergrow: no visible global function definition for ‘write.table’ +diametergrow: no visible global function definition for ‘lm’ +diametergrow: no visible global function definition for ‘predict.lm’ +diametergrow: no visible global function definition for ‘par’ +diametergrow: no visible global function definition for ‘plot’ +diametergrow: no visible global function definition for ‘lines’ +diametergrow: no visible global function definition for ‘density’ +diametergrow: no visible global function definition for ‘dgamma’ +diametergrow: no visible global function definition for ‘title’ +diametergrow: no visible global function definition for ‘dnorm’ +diametergrow: no visible global function definition for ‘text’ +diametergrow: no visible global function definition for ‘abline’ +diametergrow: no visible global function definition for ‘points’ +download_package_rm: no visible binding for '<<-' assignment to ‘mnId’ +download_package_rm: no visible binding for '<<-' assignment to ‘mn’ +download_package_rm: no visible binding for global variable ‘mnId’ +download_package_rm: no visible binding for '<<-' assignment to + ‘bagitFile’ +download_package_rm: no visible binding for global variable ‘mn’ +download_package_rm: no visible binding for '<<-' assignment to + ‘zip_contents’ +download_package_rm: no visible binding for global variable ‘bagitFile’ +download_package_rm: no visible binding for global variable + ‘zip_contents’ +ens_veg_module: no visible global function definition for ‘db.close’ +ens_veg_module: no visible global function definition for ‘db.query’ +ens_veg_module: no visible global function definition for + ‘convert.input’ +EPA_ecoregion_finder: no visible global function definition for + ‘as_Spatial’ +extract_FIA: 
no visible global function definition for ‘db.close’ +extract_FIA: no visible global function definition for ‘db.query’ +extract_soil_gssurgo: no visible global function definition for ‘as’ +extract_soil_gssurgo: no visible binding for global variable ‘hzdept_r’ +extract_soil_gssurgo: no visible binding for global variable + ‘comppct_r’ +extract_soil_gssurgo: no visible global function definition for + ‘complete.cases’ +extract_soil_gssurgo: no visible global function definition for + ‘setNames’ +extract_soil_gssurgo: no visible global function definition for + ‘mutate’ +extract_soil_gssurgo : : no visible binding for global + variable ‘.’ +extract_soil_gssurgo : : no visible global function + definition for ‘mutate’ +extract_soil_gssurgo: no visible global function definition for + ‘filter’ +extract_soil_gssurgo: no visible binding for global variable ‘Area’ +extract_soil_gssurgo: no visible binding for global variable ‘.’ +extract_soil_gssurgo : : : no visible binding + for global variable ‘.’ +extract_soil_gssurgo : : no visible global function + definition for ‘setNames’ +extract_soil_nc: no visible global function definition for ‘median’ +extract_veg: possible error in write_veg(outfolder, start_date, + end_date, veg_info = veg_info, source): unused argument (source) +fia.to.psscss: no visible global function definition for ‘db.open’ +fia.to.psscss: no visible global function definition for ‘db.close’ +fia.to.psscss: no visible global function definition for + ‘dbfile.input.check’ +fia.to.psscss: no visible global function definition for ‘db.query’ +fia.to.psscss: no visible global function definition for ‘data’ +fia.to.psscss: no visible binding for global variable ‘pftmapping’ +fia.to.psscss: no visible global function definition for ‘na.omit’ +fia.to.psscss: no visible global function definition for ‘median’ +fia.to.psscss: no visible global function definition for ‘write.table’ +fia.to.psscss: no visible global function definition for + ‘dbfile.input.insert’ +find.land: no visible global function definition for ‘data’ +find.land: no visible binding for global variable ‘wrld_simpl’ +find.land: no visible global function definition for ‘SpatialPoints’ +find.land: no visible global function definition for ‘CRS’ +find.land: no visible global function definition for ‘proj4string’ +find.land: no visible global function definition for ‘over’ +find.land: no visible binding for global variable ‘land’ +format_identifier: no visible binding for '<<-' assignment to ‘doi1’ +format_identifier: no visible binding for global variable ‘doi1’ +get_resource_map: no visible binding for '<<-' assignment to ‘mnId’ +get_resource_map: no visible binding for '<<-' assignment to ‘mn’ +get_resource_map: no visible binding for global variable ‘mnId’ +get_resource_map: no visible binding for global variable ‘doi1’ +get_resource_map: no visible binding for '<<-' assignment to + ‘resource_map’ +get_resource_map: no visible binding for global variable ‘resource_map’ +get_veg_module: no visible global function definition for + ‘convert.input’ +get_veg_module: no visible binding for global variable ‘input’ +get_veg_module: no visible binding for global variable ‘new.site’ +get.elevation: no visible global function definition for ‘getURL’ +get.elevation: no visible global function definition for ‘xmlTreeParse’ +get.elevation: no visible global function definition for ‘xpathApply’ +get.elevation: no visible global function definition for ‘xmlValue’ +gSSURGO.Query: no visible global function definition for 
‘xmlTreeParse’ +gSSURGO.Query: no visible global function definition for ‘xmlRoot’ +gSSURGO.Query: no visible global function definition for ‘getNodeSet’ +gSSURGO.Query : : no visible global function definition for + ‘xmlToList’ +gSSURGO.Query: no visible binding for global variable ‘comppct_r’ +gSSURGO.Query: no visible binding for global variable ‘mukey’ +gSSURGO.Query: no visible binding for global variable ‘aws050wta’ +ic_process: no visible global function definition for ‘db.close’ +ic_process: no visible global function definition for ‘db.query’ +id_resolveable: no visible binding for global variable ‘doi1’ +InventoryGrowthFusion: no visible global function definition for + ‘is.mcmc.list’ +InventoryGrowthFusion: no visible global function definition for + ‘mcmc.list2init’ +InventoryGrowthFusion: no visible global function definition for + ‘runif’ +InventoryGrowthFusion: no visible global function definition for ‘var’ +InventoryGrowthFusion: no visible global function definition for + ‘jags.model’ +InventoryGrowthFusion: no visible global function definition for + ‘coda.samples’ +InventoryGrowthFusion: no visible global function definition for ‘plot’ +InventoryGrowthFusion: no visible global function definition for + ‘load.module’ +InventoryGrowthFusion: no visible global function definition for + ‘as.mcmc.list’ +InventoryGrowthFusion : : no visible global function + definition for ‘coef’ +InventoryGrowthFusion : : no visible global function + definition for ‘lm’ +InventoryGrowthFusionDiagnostics: no visible binding for global + variable ‘quantile’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘layout’ +InventoryGrowthFusionDiagnostics: no visible binding for global + variable ‘data’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘plot’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘points’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘par’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘hist’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘abline’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘pairs’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘as.mcmc.list’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘gelman.diag’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘cor’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘lines’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘boxplot’ +InventoryGrowthFusionDiagnostics: no visible global function definition + for ‘legend’ +is.land: no visible binding for global variable ‘met.nc’ +match_species_id: no visible binding for global variable ‘id’ +match_species_id: no visible binding for global variable ‘genus’ +match_species_id: no visible binding for global variable ‘species’ +match_species_id: no visible binding for global variable ‘input_code’ +matchInventoryRings: no visible global function definition for + ‘combine.rwl’ +matchInventoryRings: no visible global function definition for + ‘write.table’ +plot2AGB: no visible global function definition for ‘txtProgressBar’ +plot2AGB: no visible global function definition for ‘rmvnorm’ +plot2AGB: no visible global function definition for ‘setTxtProgressBar’ +plot2AGB: no visible binding for global variable ‘sd’ 
+plot2AGB: no visible global function definition for ‘pdf’ +plot2AGB: no visible global function definition for ‘par’ +plot2AGB: no visible global function definition for ‘plot’ +plot2AGB: no visible global function definition for ‘lines’ +plot2AGB: no visible global function definition for ‘dev.off’ +put_veg_module: no visible global function definition for ‘db.close’ +put_veg_module: no visible global function definition for ‘db.query’ +put_veg_module: no visible global function definition for + ‘convert.input’ +Read_Tucson: no visible global function definition for ‘read.tucson’ +Read.IC.info.BADM: no visible global function definition for ‘read.csv’ +Read.IC.info.BADM: no visible binding for global variable ‘NA_L2CODE’ +Read.IC.info.BADM: no visible binding for global variable ‘VARIABLE’ +Read.IC.info.BADM: no visible binding for global variable ‘.’ +Read.IC.info.BADM: no visible binding for global variable ‘SITE_ID’ +Read.IC.info.BADM: no visible binding for global variable ‘GROUP_ID’ +Read.IC.info.BADM: no visible binding for global variable + ‘VARIABLE_GROUP’ +Read.IC.info.BADM: no visible binding for global variable ‘DATAVALUE’ +Read.IC.info.BADM: no visible binding for global variable ‘NA_L1CODE’ +Read.IC.info.BADM : : no visible binding for global variable + ‘VARIABLE’ +Read.IC.info.BADM : : no visible binding for global variable + ‘.’ +Read.IC.info.BADM : : no visible binding for global variable + ‘DATAVALUE’ +read.plot: no visible global function definition for ‘read.csv’ +read.velmex: no visible global function definition for ‘read.fwf’ +sample_ic: no visible global function definition for ‘complete.cases’ +shp2kml: no visible global function definition for ‘ogrListLayers’ +shp2kml: no visible global function definition for ‘ogrInfo’ +soil_params: no visible binding for global variable ‘soil.name’ +soil_params: no visible binding for global variable ‘xsand.def’ +soil_params: no visible binding for global variable ‘xclay.def’ +soil_params: no visible binding for global variable ‘fieldcp.K’ +soil_params: no visible binding for global variable ‘soilcp.MPa’ +soil_params: no visible binding for global variable ‘grav’ +soil_params: no visible binding for global variable ‘soilwp.MPa’ +soil_params: no visible binding for global variable ‘sand.hcap’ +soil_params: no visible binding for global variable ‘silt.hcap’ +soil_params: no visible binding for global variable ‘clay.hcap’ +soil_params: no visible binding for global variable ‘air.hcap’ +soil_params: no visible binding for global variable ‘ksand’ +soil_params: no visible binding for global variable ‘sand.cond’ +soil_params: no visible binding for global variable ‘ksilt’ +soil_params: no visible binding for global variable ‘silt.cond’ +soil_params: no visible binding for global variable ‘kclay’ +soil_params: no visible binding for global variable ‘clay.cond’ +soil_params: no visible binding for global variable ‘kair’ +soil_params: no visible binding for global variable ‘air.cond’ +soil_params: no visible binding for global variable ‘h2o.cond’ +soil_params: no visible binding for global variable ‘texture’ +soil_params: no visible global function definition for ‘median’ +soil_process: no visible global function definition for ‘db.query’ +write_ic: no visible binding for global variable ‘input_veg’ +Undefined global functions or variables: + . 
aa abline air.cond air.hcap Area as as_Spatial as.mcmc.list + aws050wta bagitFile boxplot clay.cond clay.hcap coda.samples coef + combine.rwl complete.cases comppct_r convert.input cor cov CRS data + DATAVALUE db.close db.open db.query dbfile.input.check + dbfile.input.insert density dev.off dgamma dnorm doi1 fieldcp.K + filter gelman.diag genus getNodeSet getURL grav GROUP_ID h2o.cond + hist hzdept_r id input input_code input_veg is.mcmc.list jags.model + kair kclay ksand ksilt land layout legend lines lm load.module + mcmc.list2init median met.nc mn mnId mukey mutate NA_L1CODE NA_L2CODE + na.omit new.site newdir_D1 ogrInfo ogrListLayers over pairs par pdf + pftmapping plot pnorm points predict.lm proj4string qnorm quantile + read.csv read.fwf read.tucson resource_map rgamma rmvnorm rnorm runif + sand.cond sand.hcap sd setNames settings setTxtProgressBar silt.cond + silt.hcap SITE_ID soil.name soilcp.MPa soilwp.MPa SpatialPoints + species text texture title txtProgressBar var VARIABLE VARIABLE_GROUP + write.table wrld_simpl xclay.def xmlRoot xmlToList xmlTreeParse + xmlValue xpathApply xsand.def zip_contents +Consider adding + importFrom("graphics", "abline", "boxplot", "hist", "layout", "legend", + "lines", "pairs", "par", "plot", "points", "text", "title") + importFrom("grDevices", "dev.off", "pdf") + importFrom("methods", "as") + importFrom("stats", "coef", "complete.cases", "cor", "cov", "density", + "dgamma", "dnorm", "filter", "lm", "median", "na.omit", + "pnorm", "predict.lm", "qnorm", "quantile", "rgamma", + "rnorm", "runif", "sd", "setNames", "var") + importFrom("utils", "data", "read.csv", "read.fwf", + "setTxtProgressBar", "txtProgressBar", "write.table") +to your NAMESPACE file (and ensure that your DESCRIPTION Imports field +contains 'methods'). + +Found the following calls to data() loading into the global environment: +File ‘PEcAn.data.land/R/fia2ED.R’: + data(pftmapping) +File ‘PEcAn.data.land/R/find.land.R’: + data(wrld_simpl) +See section ‘Good practice’ in ‘?data’. +* checking Rd files ... WARNING +checkRd: (7) gSSURGO.Query.Rd:29-35: Tag \dontrun not recognized +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'dataone_download.Rd': + \examples lines wider than 100 characters: + dataone_download(id = "doi:10.6073/pasta/63ad7159306bc031520f09b2faefcf87", filepath = "/fs/data1/pecan.data/dbfiles") + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... WARNING +Missing link or links in documentation object 'EPA_ecoregion_finder.Rd': + ‘https://www.epa.gov/eco-research/ecoregions’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... WARNING +Undocumented data sets: + ‘BADM’ ‘.extract.nc.module’ ‘.getRunSettings’ ‘.met2model.module’ + ‘.Random.seed’ ‘air.cond’ ‘air.hcap’ ‘clay.cond’ ‘clay.hcap’ + ‘clay.now’ ‘fieldcp.K’ ‘grav’ ‘h2o.cond’ ‘kair’ ‘kclay’ ‘ksand’ + ‘ksilt’ ‘n’ ‘nstext.lines’ ‘nstext.polygon’ ‘sand.cond’ ‘sand.hcap’ + ‘sand.now’ ‘silt.cond’ ‘silt.hcap’ ‘soil.key’ ‘soil.name’ + ‘soilcp.MPa’ ‘soilld.MPa’ ‘soilwp.MPa’ ‘stext.lines’ ‘stext.polygon’ + ‘texture’ ‘theta.crit’ ‘xclay.def’ ‘xsand.def’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'Clean_Tucson' + ‘file’ + +Undocumented arguments in documentation object 'InventoryGrowthFusion' + ‘cov.data’ ‘time_data’ ‘n.iter’ ‘fixed’ ‘time_varying’ ‘burnin_plot’ + ‘save.jags’ ‘z0’ + +Undocumented arguments in documentation object 'Read_Tucson' + ‘folder’ + +Undocumented arguments in documentation object 'extract.stringCode' + ‘x’ ‘extractor’ + +Undocumented arguments in documentation object 'extract_FIA' + ‘lon’ ‘lat’ ‘start_date’ ‘end_date’ ‘gridres’ ‘dbparms’ + +Undocumented arguments in documentation object 'extract_veg' + ‘new_site’ ‘start_date’ ‘end_date’ ‘source’ ‘gridres’ ‘format_name’ + ‘machine_host’ ‘dbparms’ ‘outfolder’ ‘overwrite’ ‘...’ + +Undocumented arguments in documentation object 'fia.to.psscss' + ‘settings’ ‘lat’ ‘lon’ ‘year’ ‘gridres’ ‘min.year’ ‘max.year’ + ‘overwrite’ +Documented arguments not in \usage in documentation object 'fia.to.psscss': + ‘create’ + +Undocumented arguments in documentation object 'find.land' + ‘plot’ + +Undocumented arguments in documentation object 'from.Tag' + ‘x’ + +Undocumented arguments in documentation object 'from.TreeCode' + ‘x’ + +Undocumented arguments in documentation object 'ic_process' + ‘input’ ‘dir’ +Documented arguments not in \usage in documentation object 'ic_process': + ‘dbfiles’ + +Undocumented arguments in documentation object 'load_veg' + ‘new_site’ ‘start_date’ ‘end_date’ ‘source_id’ ‘source’ ‘icmeta’ + ‘format_name’ ‘machine_host’ ‘dbparms’ ‘outfolder’ ‘overwrite’ ‘...’ + +Undocumented arguments in documentation object 'matchInventoryRings' + ‘trees’ ‘rings’ ‘extractor’ ‘nyears’ ‘coredOnly’ + +Undocumented arguments in documentation object 'match_pft' + ‘query’ + +Undocumented arguments in documentation object 'match_species_id' + ‘...’ + +Undocumented arguments in documentation object 'mpot2smoist' + ‘soil_water_potential_at_saturation’ ‘soil_hydraulic_b’ + ‘volume_fraction_of_water_in_soil_at_saturation’ +Documented arguments not in \usage in documentation object 'mpot2smoist': + ‘mysoil’ + +Undocumented arguments in documentation object 'netcdf.writer.BADM' + ‘ens’ + +Undocumented arguments in documentation object 'plot2AGB' + ‘allom.stats’ + +Undocumented arguments in documentation object 'pool_ic_list2netcdf' + ‘ens’ + +Undocumented arguments in documentation object 'sample_ic' + ‘...’ + +Undocumented arguments in documentation object 'soil_process' + ‘run.local’ + +Undocumented arguments in documentation object 'to.Tag' + ‘SITE’ ‘PLOT’ ‘SUBPLOT’ ‘TAG’ + +Undocumented arguments in documentation object 'to.TreeCode' + ‘SITE’ ‘PLOT’ ‘SUBPLOT’ ‘TAG’ + +Undocumented arguments in documentation object 'write_ic' + ‘in.path’ ‘in.name’ ‘start_date’ ‘end_date’ ‘outfolder’ ‘model’ + ‘new_site’ ‘pfts’ ‘source’ ‘overwrite’ ‘...’ + +Undocumented arguments in documentation object 'write_veg' + ‘outfolder’ ‘start_date’ ‘veg_info’ ‘source’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... 
WARNING +Argument items with no description in Rd object 'dataone_download': + ‘CNode’ + +Argument items with no description in Rd object 'extract_soil_nc': + ‘in.file’ ‘outdir’ ‘lat’ ‘lon’ + +Argument items with no description in Rd object 'sclass': + ‘sandfrac’ ‘clayfrac’ + +Argument items with no description in Rd object 'soil.units': + ‘varname’ + +Argument items with no description in Rd object 'soil2netcdf': + ‘new.file’ + +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... WARNING +Files not of a type allowed in a ‘data’ directory: + ‘eco-region.json’ ‘eco-regionl2.json’ ‘lake_states_wgs84.dbf’ + ‘lake_states_wgs84.kml’ ‘lake_states_wgs84.prj’ + ‘lake_states_wgs84.qpj’ ‘lake_states_wgs84.shp’ + ‘lake_states_wgs84.shx’ +Please use e.g. ‘inst/extdata’ for non-R data files +* checking data for non-ASCII characters ... WARNING + Warning: found non-ASCII strings + 'US-Bar,272,44.0646,-71.2881,NA,4297,GRP_LOCATION,LOCATION_COMMENT,... Bartlett Experimental Forest. ...The correct location is 44 deg 3' 52.702794\ N [NOT 43 deg as I had earlier specified] 71 deg 17 17.0766744\" W","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24353,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_CRS","1100","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24353,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_FINE","900","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24353,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_TOT","2000","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24353,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24353,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_COMMENT","See: Ruth D. Yanai, Byung B. Park, and Steven P. Hamburg. 2006. The vertical and horizontal distribution of roots in northern hardwood stands of varying age. Can. J. For. Res. 36: 450-459. and Byung B. Park, Ruth D. Yanai, Matthew A. Vadeboncoeur, and Steven P. Hamburg. 2007. Estimating root biomass in rocky soils using pits, cores, and allometric equations. 
Soil Science Society of America Journal.","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23726,"GRP_SOIL_CHEM","SOIL_CHEM_C_ORG","5","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23726,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","10","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23726,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","30","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23726,"GRP_SOIL_CHEM","SOIL_CHEM_DATE","2004-07-29","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23726,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","mineral","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24095,"GRP_SOIL_CHEM","SOIL_CHEM_C_ORG","18","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24095,"GRP_SOIL_CHEM","SOIL_CHEM_DATE","2004-07-29","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24095,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Forest floor","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24755,"GRP_SOIL_CHEM","SOIL_CHEM_C_ORG","11","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24755,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24755,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","10","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24755,"GRP_SOIL_CHEM","SOIL_CHEM_DATE","2004-07-29","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24755,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","mineral soil","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,29625,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Spodosol","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,29625,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23780,"GRP_SOIL_DEPTH","SOIL_DEPTH","100","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23780,"GRP_SOIL_DEPTH","SOIL_DEPTH_COMMENT","<1m","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24546,"GRP_SOIL_DEPTH","SOIL_DEPTH","7","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23725,"GRP_SOIL_TEX","SOIL_TEX_SAND","74","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23725,"GRP_SOIL_TEX","SOIL_TEX_SILT","23","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23725,"GRP_SOIL_TEX","SOIL_TEX_CLAY","4","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,23725,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","data from Bartlett Experimental Forest, but not within tower footprint. 
Texture is for shallow-C layer material.","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24634,"GRP_WD_BIOMASS","WD_BIOMASS_CRS","900","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24634,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24634,"GRP_WD_BIOMASS","WD_BIOMASS_DATE","2004-07-29","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-Bar","272",44.0646,-71.2881,NA,24634,"GRP_WD_BIOMASS","WD_BIOMASS_COMMENT","Line intersect sampling","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS" + "US-BdA",NA,35.8089,-90.0327,NA,13313,"GRP_LOCATION","LOCATION_LAT","35.8089","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-BdA",NA,35.8089,-90.0327,NA,13313,"GRP_LOCATION","LOCATION_LONG","-90.0327","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-BdC",NA,35.8089,-90.0284,NA,13331,"GRP_LOCATION","LOCATION_LAT","35.8089","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-BdC",NA,35.8089,-90.0284,NA,13331,"GRP_LOCATION","LOCATION_LONG","-90.0284","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Bi1","-3.92",38.1022,-121.5042,2016-08-11,30366,"GRP_LOCATION","LOCATION_LAT","38.1022","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi1","-3.92",38.1022,-121.5042,2016-08-11,30366,"GRP_LOCATION","LOCATION_LONG","-121.5042","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi1","-3.92",38.1022,-121.5042,2016-08-11,30366,"GRP_LOCATION","LOCATION_ELEV","-3.92","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi1","-3.92",38.1022,-121.5042,2016-08-11,30366,"GRP_LOCATION","LOCATION_DATE_START","2016-08-11","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi2","-4.98",38.109,-121.535,2017-04-26,31269,"GRP_LOCATION","LOCATION_LAT","38.1090","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi2","-4.98",38.109,-121.535,2017-04-26,31269,"GRP_LOCATION","LOCATION_LONG","-121.5350","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi2","-4.98",38.109,-121.535,2017-04-26,31269,"GRP_LOCATION","LOCATION_ELEV","-4.98","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bi2","-4.98",38.109,-121.535,2017-04-26,31269,"GRP_LOCATION","LOCATION_DATE_START","2017-04-26","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Bkg","510",44.3453,-96.8362,NA,6677,"GRP_LOCATION","LOCATION_LAT","44.3453","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Bkg","510",44.3453,-96.8362,NA,6677,"GRP_LOCATION","LOCATION_LONG","-96.8362","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Bkg","510",44.3453,-96.8362,NA,6677,"GRP_LOCATION","LOCATION_ELEV","510","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Bkg","510",44.3453,-96.8362,NA,6677,"GRP_LOCATION","LOCATION_COMMENT","http://public.ornl.gov/ameriflux/Site_Info/siteInfo.cfm?KEYID=us.brookings.01","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Bkg","510",44.3453,-96.8362,NA,27729,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Deep, well drained clay loams","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Bkg","510",44.3453,-96.8362,NA,27729,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","9","GREAT PLAINS","9.2","TEMPERATE 
PRAIRIES" + "US-Blk","1718",44.158,-103.65,NA,4244,"GRP_LOCATION","LOCATION_LAT","44.1580","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blk","1718",44.158,-103.65,NA,4244,"GRP_LOCATION","LOCATION_LONG","-103.6500","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blk","1718",44.158,-103.65,NA,4244,"GRP_LOCATION","LOCATION_ELEV","1718","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,23665,"GRP_BIOMASS_CHEM","BIOMASS_N","0.14","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,23665,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,23665,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,23665,"GRP_BIOMASS_CHEM","BIOMASS_SPP","(All)","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,23665,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","see extra data file US-Blo_Ncontent_leaves.xls","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,14394,"GRP_LOCATION","LOCATION_LAT","38.8953","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,14394,"GRP_LOCATION","LOCATION_LONG","-120.6328","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,14394,"GRP_LOCATION","LOCATION_ELEV","1315","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24434,"GRP_SOIL_CHEM","SOIL_CHEM_BD","0.76","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24434,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24434,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","10","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24434,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,28430,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Fine-loamy, mixed, mesic, ultic haploxeralf in the Cohasset series whose parent material was andesitic lahar. Relatively uniform, comprised predominantly of loam or clay-loam. 
The soil is comprised of 60% sand, 29% loam and 11% clay with a pH of 5.5.","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,28430,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,23802,"GRP_SOIL_DEPTH","SOIL_DEPTH","10","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24300,"GRP_SOIL_DEPTH","SOIL_DEPTH","200","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24300,"GRP_SOIL_DEPTH","SOIL_DEPTH_COMMENT","at least 2 m","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24189,"GRP_SOIL_TEX","SOIL_TEX_SAND","60","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24189,"GRP_SOIL_TEX","SOIL_TEX_SILT","29","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24189,"GRP_SOIL_TEX","SOIL_TEX_CLAY","11","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24189,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MIN","0","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24189,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MAX","10","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Blo","1315",38.8953,-120.6328,NA,24189,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","A","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Bo1","219",40.0062,-88.2904,NA,4521,"GRP_LOCATION","LOCATION_LAT","40.0062","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,4521,"GRP_LOCATION","LOCATION_LONG","-88.2904","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,4521,"GRP_LOCATION","LOCATION_ELEV","219","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,24811,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.4","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,24811,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,24811,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Meyers et al 2004: Measurements of soil bulk density average <89><88>1.4Mgm<88><92>3.","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,24812,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.4","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,24812,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,24812,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Meyers et al 2004: Measurements of soil bulk density average <89><88>1.4Mgm<88><92>3.","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,27186,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Slit loam","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo1","219",40.0062,-88.2904,NA,27186,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,2009,"GRP_LOCATION","LOCATION_LAT","40.0090","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL 
USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,2009,"GRP_LOCATION","LOCATION_LONG","-88.2900","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,2009,"GRP_LOCATION","LOCATION_ELEV","219","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,2009,"GRP_LOCATION","LOCATION_COMMENT","N; ftp://cdiac.ornl.gov/pub/ameriflux/data/Level1/Sites_ByName/Bondville_Companion_Site/BP08_DAT.LABELS","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,24678,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.4","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,24678,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,24678,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Meyers et al 2004: Measurements of soil bulk density average <89><88>1.4Mgm<88><92>3.","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,25064,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.4","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,25064,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,25064,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Meyers et al 2004: Measurements of soil bulk density average <89><88>1.4Mgm<88><92>3.","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,24023,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Argiudolls ,Haplaquolls","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,24023,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Bo2","219",40.009,-88.29,NA,24023,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_COMMENT","Meyers et al 2004: The field contains three soil series: Dana (Fine-silty, mixed, mesic, Typic Argiudolls), Flanagan (Fine, montmorillonitic, mesic, Aquic Argiudolls), and Drummer (Fine-silty, mixed, mesic, Typic Haplaquolls)","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-Br1","313",41.9749,-93.6906,NA,30048,"GRP_LOCATION","LOCATION_LAT","41.9749","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br1","313",41.9749,-93.6906,NA,30048,"GRP_LOCATION","LOCATION_LONG","-93.6906","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br1","313",41.9749,-93.6906,NA,30048,"GRP_LOCATION","LOCATION_ELEV","313","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br1","313",41.9749,-93.6906,NA,28814,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Soil morphology is based on local topographic orientation. Soils within depressions are characterized by poorly drained clay material while the upslope soils are better drained. 
All soils are dominantly Clarion-Nicollet-Webster, fine textured with moderate to high organic matter content.","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br1","313",41.9749,-93.6906,NA,28814,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br2","314",41.9757,-93.6925,NA,8375,"GRP_LOCATION","LOCATION_LAT","41.9757","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br2","314",41.9757,-93.6925,NA,8375,"GRP_LOCATION","LOCATION_LONG","-93.6925","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br2","314",41.9757,-93.6925,NA,8375,"GRP_LOCATION","LOCATION_ELEV","314","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br2","314",41.9757,-93.6925,NA,27490,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Soil morphology is based on local topographic orientation. Soils within depressions are characterized by poorly drained clay material while the upslope soils are better drained. All soils are dominantly Clarion-Nicollet-Webster, fine textured with moderate to high organic matter content.","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br2","314",41.9757,-93.6925,NA,27490,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br3","313",41.9747,-93.6936,NA,7512,"GRP_LOCATION","LOCATION_LAT","41.9747","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br3","313",41.9747,-93.6936,NA,7512,"GRP_LOCATION","LOCATION_LONG","-93.6936","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br3","313",41.9747,-93.6936,NA,7512,"GRP_LOCATION","LOCATION_ELEV","313","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br3","313",41.9747,-93.6936,NA,28815,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Soil morphology is based on local topographic orientation. Soils within depressions are characterized by poorly drained clay material while the upslope soils are better drained. 
All soils are dominantly Clarion-Nicollet-Webster, fine textured with moderate to high organic matter content.","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-Br3","313",41.9747,-93.6936,NA,28815,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","9","GREAT PLAINS","9.2","TEMPERATE PRAIRIES" + "US-BRG","180",39.2167,-86.5406,2015-11-25,30568,"GRP_LOCATION","LOCATION_LAT","39.2167","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-BRG","180",39.2167,-86.5406,2015-11-25,30568,"GRP_LOCATION","LOCATION_LONG","-86.5406","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-BRG","180",39.2167,-86.5406,2015-11-25,30568,"GRP_LOCATION","LOCATION_ELEV","180","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-BRG","180",39.2167,-86.5406,2015-11-25,30568,"GRP_LOCATION","LOCATION_DATE_START","2015-11-25","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Bsg","1398",43.4712,-119.6909,NA,16087,"GRP_LOCATION","LOCATION_LAT","43.4712","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Bsg","1398",43.4712,-119.6909,NA,16087,"GRP_LOCATION","LOCATION_LONG","-119.6909","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Bsg","1398",43.4712,-119.6909,NA,16087,"GRP_LOCATION","LOCATION_ELEV","1398","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-BSM","0.5",41.7297,-70.3644,NA,31119,"GRP_LOCATION","LOCATION_LAT","41.7297","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-BSM","0.5",41.7297,-70.3644,NA,31119,"GRP_LOCATION","LOCATION_LONG","-70.3644","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-BSM","0.5",41.7297,-70.3644,NA,31119,"GRP_LOCATION","LOCATION_ELEV","0.5","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-CaV","994",39.0633,-79.4208,NA,6665,"GRP_LOCATION","LOCATION_LAT","39.0633","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-CaV","994",39.0633,-79.4208,NA,6665,"GRP_LOCATION","LOCATION_LONG","-79.4208","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-CaV","994",39.0633,-79.4208,NA,6665,"GRP_LOCATION","LOCATION_ELEV","994","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-Ced","58",39.8379,-74.3791,NA,12662,"GRP_LOCATION","LOCATION_LAT","39.8379","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Ced","58",39.8379,-74.3791,NA,12662,"GRP_LOCATION","LOCATION_LONG","-74.3791","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Ced","58",39.8379,-74.3791,NA,12662,"GRP_LOCATION","LOCATION_ELEV","58","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Ced","58",39.8379,-74.3791,NA,12662,"GRP_LOCATION","LOCATION_COMMENT","Annual tree census data, annual clip plots for understory vegetation, approx. monthly litterfall collection, forest floor sampling in 2003, 2006, 2008, 2012, 2013. 
Extensive soil sampling in 2012.","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Ced","58",39.8379,-74.3791,NA,29626,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Podzol, underlain by late Miocene fluvial sediments of the Kirkwood formation, and overlain with Cohansey sandy soil with low nutrient and cation exchange capacity","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Ced","58",39.8379,-74.3791,NA,29626,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-CF1","794",46.7815,-117.0821,2017-05-11,98317,"GRP_LOCATION","LOCATION_LAT","46.7815","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF1","794",46.7815,-117.0821,2017-05-11,98317,"GRP_LOCATION","LOCATION_LONG","-117.0821","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF1","794",46.7815,-117.0821,2017-05-11,98317,"GRP_LOCATION","LOCATION_ELEV","794","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF1","794",46.7815,-117.0821,2017-05-11,98317,"GRP_LOCATION","LOCATION_DATE_START","2017-05-11","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF2","807",46.784,-117.0908,2017-05-12,98337,"GRP_LOCATION","LOCATION_LAT","46.7840","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF2","807",46.784,-117.0908,2017-05-12,98337,"GRP_LOCATION","LOCATION_LONG","-117.0908","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF2","807",46.784,-117.0908,2017-05-12,98337,"GRP_LOCATION","LOCATION_ELEV","807","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF2","807",46.784,-117.0908,2017-05-12,98337,"GRP_LOCATION","LOCATION_DATE_START","2017-05-12","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF3","795",46.7551,-117.1261,2017-06-02,98357,"GRP_LOCATION","LOCATION_LAT","46.7551","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF3","795",46.7551,-117.1261,2017-06-02,98357,"GRP_LOCATION","LOCATION_LONG","-117.1261","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF3","795",46.7551,-117.1261,2017-06-02,98357,"GRP_LOCATION","LOCATION_ELEV","795","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF3","795",46.7551,-117.1261,2017-06-02,98357,"GRP_LOCATION","LOCATION_DATE_START","2017-06-02","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF4","795",46.7518,-117.1285,2017-06-02,98377,"GRP_LOCATION","LOCATION_LAT","46.7518","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF4","795",46.7518,-117.1285,2017-06-02,98377,"GRP_LOCATION","LOCATION_LONG","-117.1285","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF4","795",46.7518,-117.1285,2017-06-02,98377,"GRP_LOCATION","LOCATION_ELEV","795","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CF4","795",46.7518,-117.1285,2017-06-02,98377,"GRP_LOCATION","LOCATION_DATE_START","2017-06-02","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-ChR","286",35.9311,-84.3324,NA,1563,"GRP_LOCATION","LOCATION_LAT","35.9311","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-ChR","286",35.9311,-84.3324,NA,1563,"GRP_LOCATION","LOCATION_LONG","-84.3324","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-ChR","286",35.9311,-84.3324,NA,1563,"GRP_LOCATION","LOCATION_ELEV","286","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + 
"US-Cop","1520",38.09,-109.39,NA,5746,"GRP_LOCATION","LOCATION_LAT","38.0900","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Cop","1520",38.09,-109.39,NA,5746,"GRP_LOCATION","LOCATION_LONG","-109.3900","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Cop","1520",38.09,-109.39,NA,5746,"GRP_LOCATION","LOCATION_ELEV","1520","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Cop","1520",38.09,-109.39,NA,5746,"GRP_LOCATION","LOCATION_COMMENT","From CDIAC Tom Boden database dump","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Cop","1520",38.09,-109.39,NA,28095,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Young, alkaline, well-drained fine sandy loams with weak or little horizonation","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-Cop","1520",38.09,-109.39,NA,28095,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","10","NORTH AMERICAN DESERTS","10.1","COLD DESERTS" + "US-CPk","2750",41.068,-106.1187,2009-01-01,10215,"GRP_LOCATION","LOCATION_LAT","41.0680","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CPk","2750",41.068,-106.1187,2009-01-01,10215,"GRP_LOCATION","LOCATION_LONG","-106.1187","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CPk","2750",41.068,-106.1187,2009-01-01,10215,"GRP_LOCATION","LOCATION_ELEV","2750","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CPk","2750",41.068,-106.1187,2009-01-01,10215,"GRP_LOCATION","LOCATION_DATE_START","2009-01-01","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CRT","180",41.6285,-83.3471,2010-09-20,12770,"GRP_LOCATION","LOCATION_LAT","41.6285","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-CRT","180",41.6285,-83.3471,2010-09-20,12770,"GRP_LOCATION","LOCATION_LONG","-83.3471","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-CRT","180",41.6285,-83.3471,2010-09-20,12770,"GRP_LOCATION","LOCATION_ELEV","180","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-CRT","180",41.6285,-83.3471,2010-09-20,12770,"GRP_LOCATION","LOCATION_DATE_START","2010-09-20","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-CRT","180",41.6285,-83.3471,2010-09-20,12770,"GRP_LOCATION","LOCATION_COMMENT","The tower was constructed in August and continuous measurement began in September, 2010.","8","EASTERN TEMPERATE FORESTS","8.2","CENTRAL USA PLAINS" + "US-CS1","328",44.1031,-89.5379,2018-06-29,96893,"GRP_LOCATION","LOCATION_LAT","44.1031","8","EASTERN TEMPERATE FORESTS","8.1","MIXED WOOD PLAINS" + "US-CS1","328",44.1031,-89.5379,2018-06-29,96893,"GRP_LOCATION","LOCATION_LONG","-89.5379","8","EASTERN TEMPERATE FORESTS","8.1","MIXED WOOD PLAINS" + "US-CS1","328",44.1031,-89.5379,2018-06-29,96893,"GRP_LOCATION","LOCATION_ELEV","328","8","EASTERN TEMPERATE FORESTS","8.1","MIXED WOOD PLAINS" + "US-CS1","328",44.1031,-89.5379,2018-06-29,96893,"GRP_LOCATION","LOCATION_DATE_START","2018-06-29","8","EASTERN TEMPERATE FORESTS","8.1","MIXED WOOD PLAINS" + "US-CS1","328",44.1031,-89.5379,2018-06-29,96893,"GRP_LOCATION","LOCATION_COMMENT","Approximation based on Google maps. 
GPS coords forthcoming","8","EASTERN TEMPERATE FORESTS","8.1","MIXED WOOD PLAINS" + "US-Cst","50",33.0442,-91.9204,NA,13214,"GRP_LOCATION","LOCATION_LAT","33.0442","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Cst","50",33.0442,-91.9204,NA,13214,"GRP_LOCATION","LOCATION_LONG","-91.9204","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Cst","50",33.0442,-91.9204,NA,13214,"GRP_LOCATION","LOCATION_ELEV","50","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Ctn","744",43.95,-101.8466,NA,637,"GRP_LOCATION","LOCATION_LAT","43.9500","9","GREAT PLAINS","9.3","WEST-CENTRAL SEMIARID PRAIRIES" + "US-Ctn","744",43.95,-101.8466,NA,637,"GRP_LOCATION","LOCATION_LONG","-101.8466","9","GREAT PLAINS","9.3","WEST-CENTRAL SEMIARID PRAIRIES" + "US-Ctn","744",43.95,-101.8466,NA,637,"GRP_LOCATION","LOCATION_ELEV","744","9","GREAT PLAINS","9.3","WEST-CENTRAL SEMIARID PRAIRIES" + "US-Ctn","744",43.95,-101.8466,NA,637,"GRP_LOCATION","LOCATION_COMMENT","From CDIAC Tom Boden database dump","9","GREAT PLAINS","9.3","WEST-CENTRAL SEMIARID PRAIRIES" + "US-Cwt","690",35.0592,-83.4275,2011-01-01,98507,"GRP_LOCATION","LOCATION_LAT","35.0592","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-Cwt","690",35.0592,-83.4275,2011-01-01,98507,"GRP_LOCATION","LOCATION_LONG","-83.4275","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-Cwt","690",35.0592,-83.4275,2011-01-01,98507,"GRP_LOCATION","LOCATION_ELEV","690","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-Cwt","690",35.0592,-83.4275,2011-01-01,98507,"GRP_LOCATION","LOCATION_DATE_START","2011-01-01","8","EASTERN TEMPERATE FORESTS","8.4","OZARK/OUACHITA-APPALACHIAN FORESTS" + "US-CZ1","400",37.1088,-119.7313,NA,30895,"GRP_LOCATION","LOCATION_LAT","37.1088","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-CZ1","400",37.1088,-119.7313,NA,30895,"GRP_LOCATION","LOCATION_LONG","-119.7313","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-CZ1","400",37.1088,-119.7313,NA,30895,"GRP_LOCATION","LOCATION_ELEV","400","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-CZ2","1160",37.0311,-119.2566,NA,30896,"GRP_LOCATION","LOCATION_LAT","37.0311","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ2","1160",37.0311,-119.2566,NA,30896,"GRP_LOCATION","LOCATION_LONG","-119.2566","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ2","1160",37.0311,-119.2566,NA,30896,"GRP_LOCATION","LOCATION_ELEV","1160","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ3","2015",37.0674,-119.1951,NA,30897,"GRP_LOCATION","LOCATION_LAT","37.0674","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ3","2015",37.0674,-119.1951,NA,30897,"GRP_LOCATION","LOCATION_LONG","-119.1951","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ3","2015",37.0674,-119.1951,NA,30897,"GRP_LOCATION","LOCATION_ELEV","2015","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ4","2710",37.0675,-118.9867,NA,30898,"GRP_LOCATION","LOCATION_LAT","37.0675","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ4","2710",37.0675,-118.9867,NA,30898,"GRP_LOCATION","LOCATION_LONG","-118.9867","6","NORTHWESTERN FORESTED MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-CZ4","2710",37.0675,-118.9867,NA,30898,"GRP_LOCATION","LOCATION_ELEV","2710","6","NORTHWESTERN FORESTED 
MOUNTAINS","6.2","WESTERN CORDILLERA" + "US-Dea","-53",32.8136,-115.4423,NA,11411,"GRP_LOCATION","LOCATION_LAT","32.8136","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Dea","-53",32.8136,-115.4423,NA,11411,"GRP_LOCATION","LOCATION_LONG","-115.4423","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Dea","-53",32.8136,-115.4423,NA,11411,"GRP_LOCATION","LOCATION_ELEV","-53","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Dea","-53",32.8136,-115.4423,NA,11411,"GRP_LOCATION","LOCATION_COMMENT","The alfalfa field was existing. The measurements began at the above date.","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Deu","-53",32.8056,-115.4456,NA,11431,"GRP_LOCATION","LOCATION_LAT","32.8056","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Deu","-53",32.8056,-115.4456,NA,11431,"GRP_LOCATION","LOCATION_LONG","-115.4456","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Deu","-53",32.8056,-115.4456,NA,11431,"GRP_LOCATION","LOCATION_ELEV","-53","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Deu","-53",32.8056,-115.4456,NA,11431,"GRP_LOCATION","LOCATION_COMMENT","The tower is situated on scaffolding to measure fluxes from buildings, roads, farm equipment and crops. The measurements began at the above date.","10","NORTH AMERICAN DESERTS","10.2","WARM DESERTS" + "US-Dia","323",37.6773,-121.5296,NA,11505,"GRP_LOCATION","LOCATION_LAT","37.6773","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Dia","323",37.6773,-121.5296,NA,11505,"GRP_LOCATION","LOCATION_LONG","-121.5296","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Dia","323",37.6773,-121.5296,NA,11505,"GRP_LOCATION","LOCATION_ELEV","323","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Dia","323",37.6773,-121.5296,NA,11505,"GRP_LOCATION","LOCATION_COMMENT","From CDIAC Tom Boden database dump","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-Dix","48",39.9712,-74.4346,NA,7507,"GRP_LOCATION","LOCATION_LAT","39.9712","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Dix","48",39.9712,-74.4346,NA,7507,"GRP_LOCATION","LOCATION_LONG","-74.4346","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Dix","48",39.9712,-74.4346,NA,7507,"GRP_LOCATION","LOCATION_ELEV","48","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Dix","48",39.9712,-74.4346,NA,28940,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Podzol, underlain by late Miocene fluvial sediments of the Kirkwood formation, and overlain with Cohansey sandy soil with low nutrient and cation exchange capacity","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Dix","48",39.9712,-74.4346,NA,28940,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.5","MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27970,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","100","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27970,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27970,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + 
"US-Dk1","168",35.9712,-79.0934,NA,27970,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27970,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2001-06-28","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27970,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","There are no trees and an insignificant number of shrubs in this field. It is mowed annually for hay; biomass was estimated by counting bales and multiplying by measured mass/bale","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27971,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","100","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27971,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27971,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27971,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27971,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2002-06-02","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27971,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","There are no trees and an insignificant number of shrubs in this field. It is mowed annually for hay; biomass was estimated by counting bales and multiplying by measured mass/bale","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28321,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","200","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28321,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28321,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28321,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28321,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2004-05-18","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28321,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","There are no trees and an insignificant number of shrubs in this field. 
It is mowed annually for hay; biomass was estimated by counting bales and multiplying by measured mass/bale","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29517,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","400","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29517,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29517,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29517,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29517,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2003-07-25","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29517,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","There are no trees and an insignificant number of shrubs in this field. It is mowed annually for hay; biomass was estimated by counting bales and multiplying by measured mass/bale","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29518,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","100","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29518,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29518,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29518,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29518,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2005-09-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29518,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","There are no trees and an insignificant number of shrubs in this field. It is mowed annually for hay; biomass was estimated by counting bales and multiplying by measured mass/bale","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,6039,"GRP_LOCATION","LOCATION_LAT","35.9712","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,6039,"GRP_LOCATION","LOCATION_LONG","-79.0934","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,6039,"GRP_LOCATION","LOCATION_ELEV","168","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28093,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Enon Series, low-fertility, acidic Hapludalf. 
An impervious clay pan is located beneath all soils at a depth of 0.30 m.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28093,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27060,"GRP_SOIL_DEPTH","SOIL_DEPTH","3000","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27060,"GRP_SOIL_DEPTH","SOIL_DEPTH_COMMENT","A clay-pan underlies the entire Blackwood division of the Duke forest at 30 cm.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26757,"GRP_SOIL_TEX","SOIL_TEX_SAND","35.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26757,"GRP_SOIL_TEX","SOIL_TEX_SILT","40.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26757,"GRP_SOIL_TEX","SOIL_TEX_CLAY","24.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26757,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26757,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26758,"GRP_SOIL_TEX","SOIL_TEX_SAND","44","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26758,"GRP_SOIL_TEX","SOIL_TEX_SILT","37.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26758,"GRP_SOIL_TEX","SOIL_TEX_CLAY","18.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26758,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","BC","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26758,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26759,"GRP_SOIL_TEX","SOIL_TEX_SILT","26.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26759,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26759,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26760,"GRP_SOIL_TEX","SOIL_TEX_SILT","27.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26760,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26760,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter.
Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26761,"GRP_SOIL_TEX","SOIL_TEX_CLAY","9.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26761,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26761,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26907,"GRP_SOIL_TEX","SOIL_TEX_SAND","55.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26907,"GRP_SOIL_TEX","SOIL_TEX_SILT","32.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26907,"GRP_SOIL_TEX","SOIL_TEX_CLAY","12.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26907,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","CB","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26907,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26908,"GRP_SOIL_TEX","SOIL_TEX_SILT","29.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26908,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26908,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26909,"GRP_SOIL_TEX","SOIL_TEX_SILT","27.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26909,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26909,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26910,"GRP_SOIL_TEX","SOIL_TEX_CLAY","6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26910,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,26910,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27059,"GRP_SOIL_TEX","SOIL_TEX_SAND","25.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27059,"GRP_SOIL_TEX","SOIL_TEX_SILT","40.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27059,"GRP_SOIL_TEX","SOIL_TEX_CLAY","34.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27059,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27059,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27061,"GRP_SOIL_TEX","SOIL_TEX_SAND","63.8","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27061,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27061,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27353,"GRP_SOIL_TEX","SOIL_TEX_SAND","50.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27353,"GRP_SOIL_TEX","SOIL_TEX_SILT","40.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27353,"GRP_SOIL_TEX","SOIL_TEX_CLAY","9.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27353,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","AE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27353,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27600,"GRP_SOIL_TEX","SOIL_TEX_SAND","67.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27600,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,27600,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28322,"GRP_SOIL_TEX","SOIL_TEX_WATER_HOLD_CAP","0.52","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28322,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28322,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28584,"GRP_SOIL_TEX","SOIL_TEX_SAND","48.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28584,"GRP_SOIL_TEX","SOIL_TEX_SILT","43.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28584,"GRP_SOIL_TEX","SOIL_TEX_CLAY","8.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28584,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28584,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28585,"GRP_SOIL_TEX","SOIL_TEX_SAND","47.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28585,"GRP_SOIL_TEX","SOIL_TEX_SILT","38.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28585,"GRP_SOIL_TEX","SOIL_TEX_CLAY","14.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28585,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","BE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28585,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28820,"GRP_SOIL_TEX","SOIL_TEX_SAND","61","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28820,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28820,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28821,"GRP_SOIL_TEX","SOIL_TEX_CLAY","8.7","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28821,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,28821,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29169,"GRP_SOIL_TEX","SOIL_TEX_SAND","68.9","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29169,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29169,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29523,"GRP_SOIL_TEX","SOIL_TEX_CLAY","4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29523,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk1","168",35.9712,-79.0934,NA,29523,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24271,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24271,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Fruits","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24271,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24271,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24271,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24271,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25546,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25546,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25546,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25546,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25546,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25546,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 
2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,26051,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,26051,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,26051,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,26051,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_CROP_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,26051,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,26051,"GRP_AG_BIOMASS_CROP","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23875,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23875,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23875,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23875,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23875,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23875,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23876,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23876,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23876,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23876,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23876,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23876,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 
2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25026,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25026,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25026,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25026,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_SHRUB_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25026,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25026,"GRP_AG_BIOMASS_SHRUB","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24115,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24115,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24115,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24115,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24115,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24115,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25157,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","18000","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25157,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25157,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25157,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25157,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2002-10-07","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25157,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","Estimated based on allometric equations for DBH (Clark et al. 
2006)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24119,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24119,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24119,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Measured with litter baskets, annual sum of dry weight (not C)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25027,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25027,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25027,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Measured with litter baskets, annual sum of dry weight (not C)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25028,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25028,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25028,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Measured with litter baskets, annual sum of dry weight (not C)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25547,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25547,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25547,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Measured with litter baskets, annual sum of dry weight (not C)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,6680,"GRP_LOCATION","LOCATION_LAT","35.9736","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,6680,"GRP_LOCATION","LOCATION_LONG","-79.1004","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,6680,"GRP_LOCATION","LOCATION_ELEV","168","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24909,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_CRS","2900","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24909,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24909,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24909,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MAX","40","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24909,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23661,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.46","8","EASTERN TEMPERATE 
FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23661,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23661,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","92","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23661,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23661,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","/Bulk density BE/1.43 Bt/1.38 Bt/1.54 B/1.49 D.D. Richter / Geoderma 126 (2005) 5<80><93>25 description ENON Series; BE/0.48 Bt/0.8 Bt/1.15 B/1.4 D.D. Richter / Geoderma 126 (2005) 5<80><93>25 description ENON Series","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23919,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.98","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23919,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23919,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","20.32","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23919,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,23919,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","A 4.85; E 5.10 N.-H. Oh, D.D. Richter / Geoderma 126 (2005) 5<80><93>25 description ENON Series; A--0 to 3 inches; dark grayish brown (10YR 4/2) fine sandy loam; ...strongly acid; clear smooth boundary. (2 to 9 inches thick) E--3 to 8 inches; yellowish brown (10YR 5/4) fine sandy loam; ...moderately acid; clear wavy boundary. (0 to 7 inches thick) source: description ENON Series","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24420,"GRP_SOIL_CHEM","SOIL_CHEM_PH_H2O","6.04","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24420,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24420,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","20.32","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24420,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24420,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","A 5.85; E 6.23 N.-H. Oh, D.D. Richter / Geoderma 126 (2005) 5<80><93>25 description ENON Series; A--0 to 3 inches; dark grayish brown (10YR 4/2) fine sandy loam; ...strongly acid; clear smooth boundary. (2 to 9 inches thick) E--3 to 8 inches; yellowish brown (10YR 5/4) fine sandy loam; ...moderately acid; clear wavy boundary. 
(0 to 7 inches thick) source: description ENON Series","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24436,"GRP_SOIL_CHEM","SOIL_CHEM_C_ORG","17.9","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24436,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24436,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","20.32","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24436,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24436,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","A 2.48; E 1.10; (%C) N.-H. Oh, D.D. Richter / Geoderma 126 (2005) 5–25 description ENON Series; A--0 to 3 inches; dark grayish brown (10YR 4/2) fine sandy loam; ...strongly acid; clear smooth boundary. (2 to 9 inches thick) E--3 to 8 inches; yellowish brown (10YR 5/4) fine sandy loam; ...moderately acid; clear wavy boundary. (0 to 7 inches thick) source: description ENON Series","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24555,"GRP_SOIL_CHEM","SOIL_CHEM_PH_H2O","5.9","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24555,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24555,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","92","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24555,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24555,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","/pHw BE/5.75 Bt/5.83 Bt/6.26 B/5.82 D.D. Richter / Geoderma 126 (2005) 5–25 description ENON Series; BE/0.48 Bt/0.8 Bt/1.15 B/1.4 D.D. Richter / Geoderma 126 (2005) 5–25 description ENON Series","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24942,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.22","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24942,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24942,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","20.32","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24942,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24942,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","A 1.15; E 1.29 N.-H. Oh, D.D. Richter / Geoderma 126 (2005) 5–25 description ENON Series; A--0 to 3 inches; dark grayish brown (10YR 4/2) fine sandy loam; ...strongly acid; clear smooth boundary. (2 to 9 inches thick) E--3 to 8 inches; yellowish brown (10YR 5/4) fine sandy loam; ...moderately acid; clear wavy boundary.
(0 to 7 inches thick) source: description ENON Series","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,29424,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Enon silt loam transitioning to Iredell gravelly loam to the southwest, an impervious clay pan is located beneath all soils at a depth of 0.30 m","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,29424,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24376,"GRP_SOIL_DEPTH","SOIL_DEPTH","3000","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24376,"GRP_SOIL_DEPTH","SOIL_DEPTH_COMMENT","A clay-pan underlies the entire Blackwood division of the Duke forest at 30 cm.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24377,"GRP_SOIL_TEX","SOIL_TEX_WATER_HOLD_CAP","0.52","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24377,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24377,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24522,"GRP_WD_BIOMASS","WD_BIOMASS_CRS","500","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24522,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24522,"GRP_WD_BIOMASS","WD_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,24522,"GRP_WD_BIOMASS","WD_BIOMASS_COMMENT","Total annual dry weight, collected in litter baskets (not C only)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25029,"GRP_WD_BIOMASS","WD_BIOMASS_CRS","500","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25029,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25029,"GRP_WD_BIOMASS","WD_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25029,"GRP_WD_BIOMASS","WD_BIOMASS_COMMENT","Total annual dry weight, collected in litter baskets (not C only)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25030,"GRP_WD_BIOMASS","WD_BIOMASS_CRS","500","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25030,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25030,"GRP_WD_BIOMASS","WD_BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25030,"GRP_WD_BIOMASS","WD_BIOMASS_COMMENT","Total annual dry weight, collected in litter baskets (not C only)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25548,"GRP_WD_BIOMASS","WD_BIOMASS_CRS","500","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA
PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25548,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25548,"GRP_WD_BIOMASS","WD_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25548,"GRP_WD_BIOMASS","WD_BIOMASS_COMMENT","Total annual dry weight, collected in litter baskets (not C only)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25549,"GRP_WD_BIOMASS","WD_BIOMASS_FINE","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk2","168",35.9736,-79.1004,NA,25549,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25411,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","7345","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25411,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25411,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25411,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25411,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25411,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25540,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","7664","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25540,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25540,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25540,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25540,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25540,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 
2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25541,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","8464","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25541,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25541,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25541,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25541,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25541,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. 
Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25916,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","8042","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25916,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25916,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25916,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25916,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25916,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27502,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","7821","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27502,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27502,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27502,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27502,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27502,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. 
(2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27503,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","378","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27503,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27503,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27503,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27503,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27503,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27504,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","9123","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27504,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Wood","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27504,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27504,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27504,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27504,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. 
Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28109,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","8171","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28109,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28109,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28109,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28109,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28109,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28110,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","8859","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28110,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28110,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28110,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28110,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28110,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. 
Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28222,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","9498","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28222,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28222,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28222,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28222,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28222,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28223,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","7754","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28223,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28223,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28223,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28223,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28223,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. 
(2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28448,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","398","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28448,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28448,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28448,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28448,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28448,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28452,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","408","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28452,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28452,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28452,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28452,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28452,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. 
Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28454,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","375","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28454,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28454,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28454,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28454,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28454,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28949,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE","350","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28949,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_ORGAN","Foliage","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28949,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28949,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_TREE_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28949,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28949,"GRP_AG_BIOMASS_TREE","AG_BIOMASS_COMMENT","References: (1) McCarthy, H.R. 
2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University. (2) McCarthy HR, Oren R, Kim H-K et al. (2006) Interaction of ice storms and management practices on current carbon sequestration in forests with potential mitigation under future CO2 atmosphere. Journal of Geophysical Research - Atmospheres, 111, D15103; (3) Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. Canadian Journal of Forest Research, 28, 1116-1124; (4) Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. Nature, 411, 469-472.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25913,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25913,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25913,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27851,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27851,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27851,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28947,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28947,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28947,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28950,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28950,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28950,"GRP_AG_LIT_BIOMASS","AG_LIT_BIOMASS_COMMENT","Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. 
Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26669,"GRP_BIOMASS_CHEM","BIOMASS_N","0.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26669,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26669,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26669,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26669,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26669,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27256,"GRP_BIOMASS_CHEM","BIOMASS_N","0.01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27256,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27256,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27256,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27256,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27256,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27257,"GRP_BIOMASS_CHEM","BIOMASS_N","0.01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27257,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27257,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27257,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27257,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27257,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27536,"GRP_BIOMASS_CHEM","BIOMASS_N","0.10","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27536,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27536,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + 
"US-Dk3","163",35.9782,-79.0942,NA,27536,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27536,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27536,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27537,"GRP_BIOMASS_CHEM","BIOMASS_N","0.01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27537,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27537,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27537,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27537,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27537,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27892,"GRP_BIOMASS_CHEM","BIOMASS_C","4.8","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27892,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27892,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27892,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27892,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Oren R, Ellsworth DS, Johnsen KH et al. (2001) Soil fertility limits carbon sequestration by forest ecosystems in a CO2 - enriched atmosphere. 
Nature, 411, 469-472","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28144,"GRP_BIOMASS_CHEM","BIOMASS_N","0.02","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28144,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28144,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28144,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28144,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28144,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28257,"GRP_BIOMASS_CHEM","BIOMASS_N","0.10","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28257,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28257,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28257,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28257,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28257,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28486,"GRP_BIOMASS_CHEM","BIOMASS_N","0.10","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28486,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28486,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28486,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28486,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28486,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28777,"GRP_BIOMASS_CHEM","BIOMASS_N","0.11","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28777,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28777,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28777,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE 
FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28777,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28777,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28987,"GRP_BIOMASS_CHEM","BIOMASS_N","0.01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28987,"GRP_BIOMASS_CHEM","BIOMASS_ORGAN","Total","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28987,"GRP_BIOMASS_CHEM","BIOMASS_PHEN","Mixed/unknown","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28987,"GRP_BIOMASS_CHEM","BIOMASS_SPP","PITA (NRCS plant code)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28987,"GRP_BIOMASS_CHEM","BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28987,"GRP_BIOMASS_CHEM","BIOMASS_COMMENT","Data from http://face.env.duke.edu/face.cfm","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,9262,"GRP_LOCATION","LOCATION_LAT","35.9782","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,9262,"GRP_LOCATION","LOCATION_LONG","-79.0942","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,9262,"GRP_LOCATION","LOCATION_ELEV","163","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,9262,"GRP_LOCATION","LOCATION_COMMENT","Central towere located in Ring 1 of FACE experiment","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25280,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_FINE","159","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25280,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25280,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25280,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25280,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_DATE","2004-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25280,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_COMMENT","35 cm (rooting depth ca. 30 cm, Oren et al., 1998; (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. 
Nicholas School of the Environment, Duke University.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26033,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_FINE","156","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26033,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26033,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26033,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26033,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_DATE","2003-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26033,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_COMMENT","35 cm (rooting depth ca. 30 cm, Oren et al., 1998; (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27219,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_FINE","144","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27219,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27219,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27219,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27219,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27219,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_COMMENT","35 cm (rooting depth ca. 30 cm, Oren et al., 1998; (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28221,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_FINE","135","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28221,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28221,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28221,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28221,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_DATE","2001-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28221,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_COMMENT","35 cm (rooting depth ca. 30 cm, Oren et al., 1998; (1) McCarthy, H.R. 
2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28451,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_FINE","172","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28451,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28451,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28451,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28451,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_DATE","2005-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28451,"GRP_ROOT_BIOMASS","ROOT_BIOMASS_COMMENT","35 cm (rooting depth ca. 30 cm, Oren et al., 1998; (1) McCarthy, H.R. 2007 ( Long-term effects of elevated CO2, soil nutrition, water availability and disturbance on carbon relations in a loblolly pine forest. PhD Dissertation. Nicholas School of the Environment, Duke University.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26694,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.9","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26694,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","140","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26694,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","75","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26694,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","BC","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26694,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26832,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","5.04","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26832,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","225","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26832,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","275","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26832,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26832,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26971,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","5.24","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26971,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","80","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26971,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","115","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26971,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26971,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26972,"GRP_SOIL_CHEM","SOIL_CHEM_N_TOT","1.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26972,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Schlesinger and Lichter (2001) Nature. Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847. Schlesinger and Lichter (2001) Nature. Also: Unpublished data, K. Johnsen et al.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27259,"GRP_SOIL_CHEM","SOIL_CHEM_C_ORG","61","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27259,"GRP_SOIL_CHEM","SOIL_CHEM_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27259,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Forest Service %C data from soil pit with bulk density from http://face.env.duke.edu/pdf/Full_Soil_Profile.pdf. Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847. Schlesinger and Lichter (2001) Limited carbon storage in soil and litter of experimental forest plots under elevated atmospheric CO2. Nature 411:466-469, See also: http://face.env.duke.edu/pdf/Full_Soil_Profile.pdf. Also: Unpublished data, K. Johnsen et al.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27260,"GRP_SOIL_CHEM","SOIL_CHEM_N_TOT","1.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27260,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Forest Service %N data from soil pit with bulk density from http://face.env.duke.edu/pdf/Full_Soil_Profile.pdf. Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847. Schlesinger and Lichter (2001) Nature. Also: Unpublished data, K. 
Johnsen et al.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27261,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","5.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27261,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","12","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27261,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","29","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27261,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","AE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27261,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27539,"GRP_SOIL_CHEM","SOIL_CHEM_C_ORG","24","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27539,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27539,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","15","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27539,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A, AE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27539,"GRP_SOIL_CHEM","SOIL_CHEM_DATE","2002-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27539,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Spatially variable. Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847. Schlesinger and Lichter (2001) Limited carbon storage in soil and litter of experimental forest plots under elevated atmospheric CO2. Nature 411:466-469, See also: http://face.env.duke.edu/pdf/Full_Soil_Profile.pdf. Also: Unpublished data, K. Johnsen et al.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28148,"GRP_SOIL_CHEM","SOIL_CHEM_N_TOT","1.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28148,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28148,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","15","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28148,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Spatially variable. Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847. Schlesinger and Lichter (2001) Nature. Also: Unpublished data, K. 
Johnsen et al.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28149,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.85","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28149,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28149,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","12","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28149,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28149,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28258,"GRP_SOIL_CHEM","SOIL_CHEM_BD","1.27","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28258,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28258,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28258,"GRP_SOIL_CHEM","SOIL_CHEM_DATE","2000-01-01","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28258,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Reference: Lichter J, Barron S, Finzi A, Irving K, Roberts M, Stemmler E and W Schlesinger. 2005. Soil carbon sequestration and turnover in a pine forest after six years of atmospheric CO2 enrichment. Ecology 86(7):1835-1847","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28259,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.87","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28259,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","29","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28259,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","48","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28259,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","BE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28259,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28487,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","5.02","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28487,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","75","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28487,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","225","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28487,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","CB","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28487,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28488,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","5.11","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28488,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","275","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28488,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","325","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28488,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,28488,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29091,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.95","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29091,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","115","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29091,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","140","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29091,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29091,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29092,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.98","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29092,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","325","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29092,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","700","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29092,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29092,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. 
Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29450,"GRP_SOIL_CHEM","SOIL_CHEM_PH_SALT","4.91","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29450,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MIN","48","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29450,"GRP_SOIL_CHEM","SOIL_CHEM_PROFILE_MAX","80","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29450,"GRP_SOIL_CHEM","SOIL_CHEM_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,29450,"GRP_SOIL_CHEM","SOIL_CHEM_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27189,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION","Enon Series, low-fertility, acidic Hapludalf. An imprevious clay pan is located beneath all soils at a depth of 0.30 m.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,27189,"GRP_SOIL_CLASSIFICATION","SOIL_CLASSIFICATION_TAXONOMY","Other","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24910,"GRP_SOIL_DEPTH","SOIL_DEPTH","3000","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24910,"GRP_SOIL_DEPTH","SOIL_DEPTH_COMMENT","A clay-pan underlies the entire Blackwood division of the Duke forest at 30 cm.","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23737,"GRP_SOIL_TEX","SOIL_TEX_SAND","55.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23737,"GRP_SOIL_TEX","SOIL_TEX_SILT","32.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23737,"GRP_SOIL_TEX","SOIL_TEX_CLAY","12.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23737,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","BC","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23737,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23739,"GRP_SOIL_TEX","SOIL_TEX_SILT","27.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23739,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23739,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23879,"GRP_SOIL_TEX","SOIL_TEX_SILT","27.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23879,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23879,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23880,"GRP_SOIL_TEX","SOIL_TEX_SAND","63.8","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23880,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23880,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23881,"GRP_SOIL_TEX","SOIL_TEX_SAND","68.9","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23881,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,23881,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24006,"GRP_SOIL_TEX","SOIL_TEX_SAND","50.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24006,"GRP_SOIL_TEX","SOIL_TEX_SILT","40.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24006,"GRP_SOIL_TEX","SOIL_TEX_CLAY","9.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24006,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","AE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24006,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24007,"GRP_SOIL_TEX","SOIL_TEX_SAND","35.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24007,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24007,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24103,"GRP_SOIL_TEX","SOIL_TEX_CLAY","24.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24103,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24103,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24121,"GRP_SOIL_TEX","SOIL_TEX_WATER_HOLD_CAP","0.52","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24121,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MIN","0","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24121,"GRP_SOIL_TEX","SOIL_TEX_PROFILE_MAX","30","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24269,"GRP_SOIL_TEX","SOIL_TEX_SILT","40.1","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24269,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24269,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24363,"GRP_SOIL_TEX","SOIL_TEX_SAND","44","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24363,"GRP_SOIL_TEX","SOIL_TEX_SILT","37.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24363,"GRP_SOIL_TEX","SOIL_TEX_CLAY","18.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24363,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","B","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24363,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24523,"GRP_SOIL_TEX","SOIL_TEX_SAND","25.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24523,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24523,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24524,"GRP_SOIL_TEX","SOIL_TEX_SILT","40.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24524,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24524,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24777,"GRP_SOIL_TEX","SOIL_TEX_SAND","61","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24777,"GRP_SOIL_TEX","SOIL_TEX_SILT","29.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24777,"GRP_SOIL_TEX","SOIL_TEX_CLAY","9.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24777,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","CB","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24777,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24778,"GRP_SOIL_TEX","SOIL_TEX_SILT","26.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24778,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24778,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24779,"GRP_SOIL_TEX","SOIL_TEX_SAND","67.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24779,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,24779,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25033,"GRP_SOIL_TEX","SOIL_TEX_SAND","48.4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25033,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25142,"GRP_SOIL_TEX","SOIL_TEX_CLAY","34.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25142,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","Bt","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25142,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. 
Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25162,"GRP_SOIL_TEX","SOIL_TEX_SILT","43.3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25162,"GRP_SOIL_TEX","SOIL_TEX_CLAY","8.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25162,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","A","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25162,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25163,"GRP_SOIL_TEX","SOIL_TEX_SAND","47.6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25163,"GRP_SOIL_TEX","SOIL_TEX_SILT","38.5","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25163,"GRP_SOIL_TEX","SOIL_TEX_CLAY","14.2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25163,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","BE","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25163,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25552,"GRP_SOIL_TEX","SOIL_TEX_CLAY","6","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25552,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25552,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25553,"GRP_SOIL_TEX","SOIL_TEX_CLAY","8.7","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25553,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25553,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26055,"GRP_SOIL_TEX","SOIL_TEX_CLAY","4","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26055,"GRP_SOIL_TEX","SOIL_TEX_HORIZON","C","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,26055,"GRP_SOIL_TEX","SOIL_TEX_COMMENT","Oh, N.H. and D.D. Richter. Elemental translocation and loss from three highly weathered soil-bedrock profiles in the southeastern United States. 
Geoderma 126 (5-25)","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25279,"GRP_WD_BIOMASS","WD_BIOMASS_CRS","255","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25279,"GRP_WD_BIOMASS","WD_BIOMASS_UNIT","gC m-2","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25279,"GRP_WD_BIOMASS","WD_BIOMASS_DATE","2003-01-15","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-Dk3","163",35.9782,-79.0942,NA,25279,"GRP_WD_BIOMASS","WD_BIOMASS_COMMENT","Coarse woody debris following a severe ice storm. See McCarthy et al. (2006) JGR Table 1 and Figure 3","8","EASTERN TEMPERATE FORESTS","8.3","SOUTHEASTERN USA PLAINS" + "US-EDN",NA,37.6156,-122.114,2018-02-16,98227,"GRP_LOCATION","LOCATION_LAT","37.6156","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-EDN",NA,37.6156,-122.114,2018-02-16,98227,"GRP_LOCATION","LOCATION_LONG","-122.1140","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-EDN",NA,37.6156,-122.114,2018-02-16,98227,"GRP_LOCATION","LOCATION_DATE_START","2018-02-16","11","MEDITERRANEAN CALIFORNIA","11.1","MEDITERRANEAN CALIFORNIA" + "US-EDN",NA,37.6156,-122.114,2018-02-16,98227,"GRP_LOCATION","LOCATION_COMMENT","Location was disturbed from late 1800s up to 1972 for salt harvesting. Location elevation between 36 to 48 inches (0.914m to 1.22m),11,MEDITERRANEAN CALIFORNIA,11.1,MEDITERRANEAN CALIFORNIA + US-Elm,0.77,25.5519,-80.7826,NA,6242,GRP_LOCATION,LOCATION_LAT,25.5519,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Elm,0.77,25.5519,-80.7826,NA,6242,GRP_LOCATION,LOCATION_LONG,-80.7826,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Elm,0.77,25.5519,-80.7826,NA,6242,GRP_LOCATION,LOCATION_ELEV,0.77,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Elm,0.77,25.5519,-80.7826,NA,29422,GRP_SOIL_CLASSIFICATION,SOIL_CLASSIFICATION,Peat,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Elm,0.77,25.5519,-80.7826,NA,29422,GRP_SOIL_CLASSIFICATION,SOIL_CLASSIFICATION_TAXONOMY,Other,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Esm,1.07,25.4379,-80.5946,NA,7077,GRP_LOCATION,LOCATION_LAT,25.4379,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Esm,1.07,25.4379,-80.5946,NA,7077,GRP_LOCATION,LOCATION_LONG,-80.5946,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Esm,1.07,25.4379,-80.5946,NA,7077,GRP_LOCATION,LOCATION_ELEV,1.07,15,TROPICAL WET FORESTS,15.4,EVERGLADES + US-Fmf,2160,35.1426,-111.7273,NA,29191,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER,6,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29191,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER_ORGAN,Total,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29191,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER_PHEN,Mixed/unknown,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29191,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER_UNIT,gC m-2,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29191,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_APPROACH,4 , 0.5 m2 plot once per year. 
projected leaf area was measured in the laboratory with an image analyzer (Agvision, Monochrome System, Decagon Devices, Inc., Pullman, WA, USA).,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29191,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_DATE,2006-09-07,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29206,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER,12,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29206,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER_ORGAN,Total,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29206,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER_PHEN,Mixed/unknown,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29206,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_OTHER_UNIT,gC m-2,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29206,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_APPROACH,4 , 0.5 m2 plot once per year. projected leaf area was measured in the laboratory with an image analyzer (Agvision, Monochrome System, Decagon Devices, Inc., Pullman, WA, USA).,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,29206,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_DATE,2007-09-07,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28840,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE,3910,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28840,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE_ORGAN,Total,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28840,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE_PHEN,Mixed/unknown,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28840,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE_UNIT,gC m-2,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28840,GRP_AG_BIOMASS_TREE,AG_BIOMASS_APPROACH,allometry on 5 25 m radious circular plots. Following equations in Kaye, J. P., Hart, S. C., Fule, P. Z., Covington, W. W., Moore, M. M., Kaye, M. W. 2005. Initial carbon, nitrogen, and phosphorus fluxes following ponderosa pine restoration treatments. Ecological Applications, 15(5) 1581-1593.,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28840,GRP_AG_BIOMASS_TREE,AG_BIOMASS_DATE,2006-09-07,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28841,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE,2660,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28841,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE_ORGAN,Total,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28841,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE_PHEN,Mixed/unknown,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28841,GRP_AG_BIOMASS_TREE,AG_BIOMASS_TREE_UNIT,gC m-2,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28841,GRP_AG_BIOMASS_TREE,AG_BIOMASS_APPROACH,allometry on 5 25 m radious circular plots. Following equations in Kaye, J. P., Hart, S. C., Fule, P. Z., Covington, W. W., Moore, M. M., Kaye, M. W. 2005. Initial carbon, nitrogen, and phosphorus fluxes following ponderosa pine restoration treatments. 
Ecological Applications, 15(5) 1581-1593.,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,28841,GRP_AG_BIOMASS_TREE,AG_BIOMASS_DATE,2007-09-07,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_N,0.12,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_ORGAN,Total,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_PHEN,Mixed/unknown,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_SPP,PIPO (NRCS plant code),13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS + US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_COMMENT,pinus ponderosa' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,23740,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(5-6); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,23741,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(6-7); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,23742,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(4-5); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,24008,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,24122,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(4-5); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,24378,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,24379,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,24781,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(4-5); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. 
Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,26057,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-Shd,346,36.9333,-96.6833,NA,26058,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM' + 'US-SP1,50,29.7381,-82.2188,NA,29203,GRP_AG_LIT_BIOMASS,AG_LIT_BIOMASS_COMMENT,842 ± 277,8,EASTERN TEMPERATE FORESTS,8.5,MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS' in object 'BADM' + 'US-SRM,1120,31.8214,-110.8661,NA,27355,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,*aboveground tree biomass = (EXP(1.6*LN(35)-0.58)/100)/1000*.47 estimated using equation given in Browning et al. 2008EcolApp,18,933 (be careful<80>numbers not verified), biomass of herbage production given in Follet, SRER100 issue,12,SOUTHERN SEMIARID HIGHLANDS,12.1,WESTERN SIERRA MADRE PIEDMONT' in object 'BADM' + 'US-SRM,1120,31.8214,-110.8661,NA,27975,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,*aboveground tree biomass = (EXP(1.6*LN(35)-0.58)/100)/1000*.47 estimated using equation given in Browning et al. 2008EcolApp,18,933 (be careful<80>numbers not verified), biomass of herbage production given in Follet, SRER100 issue,12,SOUTHERN SEMIARID HIGHLANDS,12.1,WESTERN SIERRA MADRE PIEDMONT' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18391,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18392,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18423,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower at plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18424,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower at plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18437,GRP_AG_LIT_BIOMASS,AG_LIT_BIOMASS_COMMENT,Litter does NOT include identifiable leaves, but includes leaf fragments, seeds, flowers, lichens and fine woody debris collected from litter traps. 
Includes 16.2 ± 50.3 of Quercus rubra acorns.,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18549,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Litter traps are .264 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMB,234,45.5598,-84.7138,NA,18550,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,Litter traps are .264 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMd,239,45.5625,-84.6975,NA,18645,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Litter traps are .264 m^2 with one - 3 traps in each of 21 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMd,239,45.5625,-84.6975,NA,18646,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,Litter traps are .264 m^2 with one - 3 traps in each of 21 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMd,239,45.5625,-84.6975,NA,18656,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Many of the Populus and all of the Betula trees girdled in 2008 had died by the 2010 census. Litter traps are .264 m^2 with one trap in each of 21 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20101020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-UMd,239,45.5625,-84.6975,NA,18657,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,Many of the Populus and all of the Betula trees girdled in 2008 had died by the 2010 census. Litter traps are .264 m^2 with one trap in each of 21 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20101020 ± 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM' + 'US-WBW,283,35.9588,-84.2874,NA,23648,GRP_SOIL_CHEM,SOIL_CHEM_COMMENT,78.9 Mg C ha- 'http://public.ornl.gov/ameriflux/Site_Info/siteInfo.cfm?KEYID=us.walker_branch.01,8,EASTERN TEMPERATE FORESTS,8.4,OZARK/OUACHITA-APPALACHIAN FORESTS' in object 'BADM' +* checking data for ASCII and uncompressed saves ... WARNING + Warning: large data file saved inefficiently: + size ASCII compress + soil_class.RData 117Kb FALSE none + + Note: significantly better compression could be obtained + by using R CMD build --resave-data + old_size new_size compress + soil_class.RData 117Kb 18Kb xz +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 11 WARNINGs, 6 NOTEs diff --git a/modules/data.remote/tests/Rcheck_reference.log b/modules/data.remote/tests/Rcheck_reference.log new file mode 100644 index 00000000000..764fad76fc9 --- /dev/null +++ b/modules/data.remote/tests/Rcheck_reference.log @@ -0,0 +1,116 @@ +* using log directory ‘/tmp/RtmpYVm4R4/PEcAn.data.remote.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.data.remote/DESCRIPTION’ ... OK +* checking extension type ... 
Package +* this is package ‘PEcAn.data.remote’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.data.remote’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘DBI’ ‘doParallel’ ‘foreach’ ‘glue’ ‘lubridate’ ‘ncdf4’ ‘PEcAn.DB’ + ‘PEcAn.utils’ ‘purrr’ ‘raster’ ‘RCurl’ ‘sp’ +'library' or 'require' calls not declared from: + ‘doParallel’ ‘PEcAn.DB’ ‘raster’ +'library' or 'require' calls in package code: + ‘doParallel’ ‘PEcAn.DB’ ‘raster’ ‘rgdal’ + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... 
NOTE +call_MODIS: no visible global function definition for ‘write.csv’ +call_MODIS: no visible global function definition for + ‘extract_modis_data’ +download.LandTrendr.AGB: no visible binding for global variable ‘k’ +download.NLCD: no visible global function definition for ‘dbfile.check’ +download.NLCD: no visible global function definition for ‘db.query’ +download.NLCD: no visible global function definition for + ‘download.file’ +download.NLCD: no visible global function definition for + ‘dbfile.insert’ +download.thredds.AGB : get_data: no visible global function definition + for ‘write.csv’ +download.thredds.AGB: no visible global function definition for + ‘%dopar%’ +download.thredds.AGB: no visible global function definition for + ‘foreach’ +download.thredds.AGB: no visible global function definition for + ‘stopCluster’ +extract_NLCD: no visible global function definition for ‘dbfile.check’ +extract_NLCD: no visible global function definition for ‘db.query’ +extract_NLCD: no visible global function definition for ‘raster’ +extract_NLCD: no visible global function definition for ‘SpatialPoints’ +extract_NLCD: no visible global function definition for ‘CRS’ +extract_NLCD: no visible global function definition for ‘spTransform’ +extract_NLCD: no visible global function definition for ‘crs’ +extract_NLCD: no visible global function definition for ‘extract’ +Undefined global functions or variables: + %dopar% crs CRS db.query dbfile.check dbfile.insert download.file + extract extract_modis_data foreach k raster SpatialPoints spTransform + stopCluster write.csv +Consider adding + importFrom("utils", "download.file", "write.csv") +to your NAMESPACE file. +* checking Rd files ... WARNING +checkRd: (7) extract.LandTrendr.AGB.Rd:33-55: Tag \dontrun not recognized +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'call_MODIS.Rd': + \examples lines wider than 100 characters: + test_modistools <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -1 ... [TRUNCATED] + test_reticulate <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2004300", end_date = "2004365", lat = 38, lon = -1 ... [TRUNCATED] + +These lines will be truncated in the PDF manual. +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'extract.LandTrendr.AGB' + ‘...’ + +Undocumented arguments in documentation object 'extract_NLCD' + ‘year’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... 
OK +* DONE +Status: 3 WARNINGs, 3 NOTEs diff --git a/modules/emulator/tests/Rcheck_reference.log b/modules/emulator/tests/Rcheck_reference.log new file mode 100644 index 00000000000..c4f4d9fb1a4 --- /dev/null +++ b/modules/emulator/tests/Rcheck_reference.log @@ -0,0 +1,319 @@ +* using log directory ‘/tmp/RtmpNwieb2/PEcAn.emulator.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.emulator/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.emulator’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.emulator’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'library' or 'require' call not declared from: ‘time’ +'library' or 'require' calls to packages already attached by Depends: + ‘MCMCpack’ ‘mvtnorm’ + Please remove these calls from your code. +'library' or 'require' call to ‘time’ in package code. + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +Packages in Depends field not imported from: + ‘MCMCpack’ ‘mlegp’ ‘mvtnorm’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... WARNING +calcSpatialCov: + function(x, ...) +calcSpatialCov.list: + function(d, psi, tau) + +calcSpatialCov: + function(x, ...) +calcSpatialCov.matrix: + function(d, psi, tau) + +p: + function(x, ...) +p.jump: + function(jmp) + +p: + function(x, ...) +p.mvjump: + function(jmp) + +plot: + function(x, ...) +plot.jump: + function(jmp) + +plot: + function(x, ...) +plot.mvjump: + function(jmp) + +predict: + function(object, ...) +predict.GP: + function(gp, xpred, cI, pI, splinefcns) + +predict: + function(object, ...) +predict.density: + function(den, xnew) + +update: + function(object, ...) +update.jump: + function(jmp, chain) + +update: + function(object, ...) 
+update.mvjump: + function(jmp, chain) + +See section ‘Generic functions and methods’ in the ‘Writing R +Extensions’ manual. + +Found the following apparent S3 methods exported but not registered: + plot.jump +See section ‘Registering S3 methods’ in the ‘Writing R Extensions’ +manual. +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +GaussProcess: no visible global function definition for ‘var’ +GaussProcess: no visible global function definition for ‘nlm’ +GaussProcess: no visible global function definition for ‘progressBar’ +GaussProcess: no visible global function definition for ‘rmvnorm’ +GaussProcess: no visible global function definition for ‘rnorm’ +GaussProcess: no visible global function definition for ‘dmvnorm’ +GaussProcess: no visible global function definition for ‘runif’ +GaussProcess: no visible global function definition for ‘update’ +GaussProcess: no visible global function definition for ‘rinvgamma’ +get_ss: no visible global function definition for ‘rnorm’ +get_y: no visible global function definition for ‘pda.calc.llik’ +gp_mle: no visible global function definition for ‘dmvnorm’ +gp_mle2: no visible global function definition for ‘dmvnorm’ +gpeval: no visible binding for global variable ‘splinefuns’ +gpeval : <anonymous>: no visible binding for global variable + ‘splinefuns’ +gpeval: no visible binding for global variable ‘ytrend’ +is.accepted: no visible global function definition for ‘runif’ +jump: no visible global function definition for ‘new’ +lhc: no visible global function definition for ‘runif’ +mcmc.GP: no visible global function definition for ‘pda.calc.llik.par’ +mcmc.GP: no visible global function definition for + ‘pda.adjust.jumps.bs’ +mcmc.GP: no visible global function definition for ‘mvrnorm’ +mcmc.GP: no visible global function definition for ‘rnorm’ +mcmc.GP: no visible binding for global variable ‘jmp’ +minimize.GP: no visible global function definition for ‘median’ +minimize.GP: no visible binding for global variable ‘median’ +minimize.GP: no visible global function definition for ‘nlm’ +minimize.GP: no visible binding for global variable ‘splinefcns’ +mvjump: no visible global function definition for ‘new’ +plot.jump: no visible global function definition for ‘par’ +plot.jump: no visible global function definition for ‘plot’ +plot.jump: no visible global function definition for ‘abline’ +plot.mvjump: no visible global function definition for ‘par’ +plot.mvjump: no visible global function definition for ‘plot’ +plot.mvjump: no visible global function definition for ‘abline’ +plot.mvjump: no visible global function definition for ‘text’ +predict.GP: no visible global function definition for ‘median’ +predict.GP: no visible binding for global variable ‘median’ +predict.GP: no visible binding for global variable ‘splinefuns’ +predict.GP: no visible global function definition for ‘progressBar’ +predict.GP: no visible global function definition for ‘rmvnorm’ +predict.GP: no visible global function definition for ‘rnorm’ +predict.GP: no visible binding for global variable ‘quantile’ +summarize.GP: no visible global function definition for ‘par’ +summarize.GP: no visible global function definition for ‘pdf’ +summarize.GP: no visible global function definition for ‘plot’ +summarize.GP: no visible global function definition for ‘title’ +summarize.GP: no visible global function definition for ‘lines’ +summarize.GP: no visible global function definition for ‘dev.off’ +Undefined global 
functions or variables: + abline dev.off dmvnorm jmp lines median mvrnorm new nlm par + pda.adjust.jumps.bs pda.calc.llik pda.calc.llik.par pdf plot + progressBar quantile rinvgamma rmvnorm rnorm runif splinefcns + splinefuns text title update var ytrend +Consider adding + importFrom("graphics", "abline", "lines", "par", "plot", "text", + "title") + importFrom("grDevices", "dev.off", "pdf") + importFrom("methods", "new") + importFrom("stats", "median", "nlm", "quantile", "rnorm", "runif", + "update", "var") +to your NAMESPACE file (and ensure that your DESCRIPTION Imports field +contains 'methods'). +* checking Rd files ... NOTE +prepare_Rd: emulator-package.Rd:27-29: Dropping empty section \references +prepare_Rd: emulator-package.Rd:31-32: Dropping empty section \seealso +prepare_Rd: emulator-package.Rd:33-34: Dropping empty section \examples +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'GaussProcess' + ‘x’ ‘y’ ‘isotropic’ ‘method’ ‘ngibbs’ ‘burnin’ ‘thin’ ‘jump.ic’ ‘psi’ + ‘zeroMean’ ‘...’ + +Undocumented arguments in documentation object 'arate' + ‘x’ + +Undocumented arguments in documentation object 'bounded' + ‘xnew’ ‘rng’ + +Undocumented arguments in documentation object 'calcSpatialCov' + ‘x’ ‘...’ + +Undocumented arguments in documentation object 'calculate.prior' + ‘samples’ ‘priors’ + +Undocumented arguments in documentation object 'ddist' + ‘x’ ‘prior’ + +Undocumented arguments in documentation object 'distance' + ‘x’ + +Undocumented arguments in documentation object 'distance.martix' + ‘x’ ‘power’ + +Undocumented arguments in documentation object 'distance12.martix' + ‘x’ ‘n1’ + +Undocumented arguments in documentation object 'get_ss' + ‘gp’ ‘xnew’ ‘pos.check’ + +Undocumented arguments in documentation object 'get_y' + ‘SSnew’ ‘xnew’ ‘llik.fn’ ‘priors’ ‘llik.par’ + +Undocumented arguments in documentation object 'gp_mle' + ‘theta’ ‘d’ ‘nugget’ ‘myY’ + +Undocumented arguments in documentation object 'gp_mle2' + ‘theta’ ‘d’ ‘nugget’ ‘myY’ ‘maxval’ + +Undocumented arguments in documentation object 'gpeval' + ‘xnew’ ‘k’ ‘mu’ ‘tau’ ‘psi’ ‘x’ + +Undocumented arguments in documentation object 'groupid' + ‘x’ + +Undocumented arguments in documentation object 'is.accepted' + ‘ycurr’ ‘ynew’ ‘format’ + +Undocumented arguments in documentation object 'jump' + ‘ic’ ‘...’ + +Undocumented arguments in documentation object 'ldinvgamma' + ‘x’ ‘shape’ + +Undocumented arguments in documentation object 'mcmc.GP' + ‘gp’ ‘nmcmc’ ‘rng’ ‘splinefcns’ ‘jmp0’ ‘ar.target’ ‘settings’ + ‘run.block’ ‘n.of.obs’ ‘llik.fn’ ‘hyper.pars’ ‘resume.list’ + +Undocumented arguments in documentation object 'minimize.GP' + ‘gp’ ‘rng’ + +Undocumented arguments in documentation object 'mvjump' + ‘ic’ ‘rate’ ‘nc’ ‘...’ + +Undocumented arguments in documentation object 'nderiv' + ‘x’ ‘y’ + +Undocumented arguments in documentation object 'p' + ‘x’ ‘...’ + +Undocumented arguments in documentation object 'predict.GP' + ‘gp’ ‘xpred’ ‘splinefcns’ + +Undocumented arguments in documentation object 'predict.density' + ‘den’ + +Undocumented arguments in documentation object 'summarize.GP' + ‘gp’ ‘pdf_file’ + +Undocumented arguments in documentation object 'update.mvjump' + ‘jmp’ ‘chain’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their 
arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'GaussProcess': + ‘exclude’ + +Argument items with no description in Rd object 'distance': + ‘power’ + +Argument items with no description in Rd object 'distance.martix': + ‘dim’ + +Argument items with no description in Rd object 'distance12.martix': + ‘power’ + +Argument items with no description in Rd object 'gp_mle': + ‘maxval’ + +Argument items with no description in Rd object 'gpeval': + ‘splinefcns’ + +Argument items with no description in Rd object 'jump': + ‘rate’ + +Argument items with no description in Rd object 'ldinvgamma': + ‘scale’ + +Argument items with no description in Rd object 'mcmc.GP': + ‘x0’ ‘priors’ + +Argument items with no description in Rd object 'minimize.GP': + ‘x0’ ‘splinefuns’ + +Argument items with no description in Rd object 'plot.mvjump': + ‘jmp’ + +Argument items with no description in Rd object 'predict.density': + ‘xnew’ + +Argument items with no description in Rd object 'summarize.GP': + ‘txt_file’ + +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* DONE +Status: 4 WARNINGs, 3 NOTEs diff --git a/modules/meta.analysis/tests/Rcheck_reference.log b/modules/meta.analysis/tests/Rcheck_reference.log new file mode 100644 index 00000000000..037fd01b46c --- /dev/null +++ b/modules/meta.analysis/tests/Rcheck_reference.log @@ -0,0 +1,148 @@ +* using log directory ‘/tmp/RtmpgQ8jGx/PEcAn.MA.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.MA/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.MA’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.MA’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... 
WARNING +'library' or 'require' call not declared from: ‘ggmcmc’ +'library' or 'require' call to ‘ggmcmc’ in package code. + Please use :: or requireNamespace() instead. + See section 'Suggested packages' in the 'Writing R Extensions' manual. +Packages in Depends field not imported from: + ‘lattice’ ‘MASS’ ‘PEcAn.DB’ ‘PEcAn.utils’ ‘XML’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +.dens_plot: no visible global function definition for ‘plot’ +.dens_plot: no visible global function definition for ‘density’ +.dens_plot: no visible global function definition for ‘rug’ +.dens_plot: no visible global function definition for ‘lines’ +.dens_plot: no visible global function definition for ‘legend’ +approx.posterior: no visible global function definition for ‘pdf’ +approx.posterior: no visible global function definition for ‘var’ +approx.posterior: no visible global function definition for ‘plot’ +approx.posterior: no visible global function definition for ‘density’ +approx.posterior: no visible global function definition for ‘rug’ +approx.posterior: no visible global function definition for ‘lines’ +approx.posterior: no visible global function definition for ‘dbeta’ +approx.posterior: no visible global function definition for ‘legend’ +approx.posterior: no visible binding for global variable ‘AIC’ +approx.posterior: no visible global function definition for ‘sd’ +approx.posterior: no visible global function definition for ‘dev.off’ +jagify: no visible binding for global variable ‘stat’ +jagify: no visible binding for global variable ‘n’ +jagify: no visible binding for global variable ‘site_id’ +jagify: no visible binding for global variable ‘greenhouse’ +jagify: no visible binding for global variable ‘citation_id’ +pecan.ma: no visible global function definition for ‘stem’ +pecan.ma: no visible global function definition for ‘window’ +pecan.ma.summary: no visible global function definition for + ‘is.mcmc.list’ +pecan.ma.summary: no visible global function definition for ‘theme_set’ +pecan.ma.summary: no visible global function definition for ‘theme_bw’ +pecan.ma.summary: no visible global function definition for ‘ggmcmc’ +pecan.ma.summary: no visible global function definition for ‘ggs’ +pecan.ma.summary: no visible global function definition for ‘pdf’ +pecan.ma.summary: no visible global function definition for ‘plot’ +pecan.ma.summary: no visible global function definition for ‘box’ +pecan.ma.summary: no visible global function definition for ‘dev.off’ +run.meta.analysis: no visible global function definition for ‘db.open’ +run.meta.analysis: no visible global function definition for ‘db.close’ +run.meta.analysis.pft: no visible binding for global variable + ‘settings’ +run.meta.analysis.pft: no visible binding for global variable + ‘trait.data’ +run.meta.analysis.pft: no visible global function definition for + ‘median’ +run.meta.analysis.pft: no visible binding for global variable + ‘prior.distns’ +run.meta.analysis.pft: no visible global function definition for + ‘dbfile.insert’ +Undefined global functions or variables: + AIC box citation_id db.close db.open dbeta dbfile.insert density + dev.off ggmcmc ggs greenhouse is.mcmc.list legend lines median n pdf + plot prior.distns rug sd settings site_id stat stem theme_bw + theme_set trait.data 
var window +Consider adding + importFrom("graphics", "box", "legend", "lines", "plot", "rug", "stem") + importFrom("grDevices", "dev.off", "pdf") + importFrom("stats", "AIC", "dbeta", "density", "median", "sd", "var", + "window") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... NOTE +Package unavailable to check Rd xrefs: ‘R2WinBUGS’ +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘runModule.run.meta.analysis’ +Undocumented data sets: + ‘ma.testdata’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'single.MA' + ‘data’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'p.point.in.prior': + ‘point’ + +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘single.MA_demo.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 5 WARNINGs, 3 NOTEs diff --git a/modules/photosynthesis/tests/Rcheck_reference.log b/modules/photosynthesis/tests/Rcheck_reference.log new file mode 100644 index 00000000000..c134ecd296a --- /dev/null +++ b/modules/photosynthesis/tests/Rcheck_reference.log @@ -0,0 +1,113 @@ +* using log directory ‘/tmp/RtmpPffpxB/PEcAn.photosynthesis.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.photosynthesis/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.photosynthesis’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... WARNING +Found the following files with non-portable file names: + code/Sunny.preliminary.code/C3 photomodel.r + code/Sunny.preliminary.code/C3 Species.csv +These are not fully portable file names. +See section ‘Package structure’ in the ‘Writing R Extensions’ manual. +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.photosynthesis’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... 
NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... NOTE +Non-standard file/directory found at top level: + ‘code’ +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... NOTE +'library' or 'require' call to ‘rjags’ which was already attached by Depends. + Please remove these calls from your code. +Package in Depends field not imported from: ‘rjags’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +ciEnvelope: no visible global function definition for ‘polygon’ +estimate_mode: no visible global function definition for ‘density’ +fitA: no visible global function definition for ‘jags.model’ +fitA: no visible global function definition for ‘coda.samples’ +Licor_QC: no visible global function definition for ‘plot’ +Licor_QC: no visible global function definition for ‘points’ +Licor_QC: no visible global function definition for ‘text’ +Licor_QC: no visible global function definition for ‘legend’ +Licor_QC: no visible global function definition for ‘identify’ +mat2mcmc.list: no visible global function definition for ‘as.mcmc’ +mat2mcmc.list: no visible global function definition for ‘as.mcmc.list’ +plot_photo: no visible binding for global variable ‘quantile’ +plot_photo: no visible global function definition for ‘plot’ +plot_photo: no visible global function definition for ‘lines’ +plot_photo: no visible global function definition for ‘points’ +plot_photo: no visible global function definition for ‘legend’ +read_Licor: no visible global function definition for ‘tail’ +read_Licor: no visible global function definition for ‘read.table’ +Undefined global functions or variables: + as.mcmc as.mcmc.list coda.samples density identify jags.model legend + lines plot points polygon quantile read.table tail text +Consider adding + importFrom("graphics", "identify", "legend", "lines", "plot", "points", + "polygon", "text") + importFrom("stats", "density", "quantile") + importFrom("utils", "read.table", "tail") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING +Undocumented arguments in documentation object 'ciEnvelope' + ‘x’ ‘ylo’ ‘yhi’ ‘col’ ‘...’ + +Undocumented arguments in documentation object 'estimate_mode' + ‘x’ ‘adjust’ + +Undocumented arguments in documentation object 'plot_photo' + ‘data’ ‘out’ ‘curve’ ‘tol’ ‘byLeaf’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘ResponseCurves.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... NONE +* DONE +Status: 3 WARNINGs, 4 NOTEs diff --git a/modules/priors/tests/Rcheck_reference.log b/modules/priors/tests/Rcheck_reference.log new file mode 100644 index 00000000000..c920f056ce4 --- /dev/null +++ b/modules/priors/tests/Rcheck_reference.log @@ -0,0 +1,192 @@ +* using log directory ‘/tmp/RtmprUKFes/PEcAn.priors.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.priors/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.priors’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.priors’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... NOTE +Package in Depends field not imported from: ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... WARNING +plot: + function(x, ...) +plot.posterior.density: + function(posterior.density, base.plot) + +plot: + function(x, ...) +plot.prior.density: + function(prior.density, base.plot, prior.color) + +plot: + function(x, ...) 
+plot.trait: + function(trait, prior, posterior.sample, trait.df, fontsize, x.lim, + y.lim, logx) + +See section ‘Generic functions and methods’ in the ‘Writing R +Extensions’ manual. +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +create.density.df: no visible global function definition for + ‘zero.bounded.density’ +create.density.df: no visible global function definition for ‘density’ +fit.dist : : no visible global function definition for + ‘fitdistr’ +fit.dist: no visible global function definition for ‘fitdistr’ +fit.dist: no visible binding for global variable ‘AIC’ +fit.dist : : no visible global function definition for + ‘tabnum’ +fit.dist: no visible global function definition for ‘tabnum’ +plot.densities: no visible global function definition for ‘pdf’ +plot.densities: no visible global function definition for + ‘plot.density’ +plot.densities: no visible binding for global variable + ‘sensitivity.plot’ +plot.densities: no visible global function definition for ‘dev.off’ +plot.posterior.density: no visible global function definition for + ‘create.base.plot’ +plot.posterior.density: no visible binding for global variable ‘x’ +plot.posterior.density: no visible binding for global variable ‘y’ +plot.prior.density: no visible global function definition for + ‘create.base.plot’ +plot.prior.density: no visible binding for global variable ‘x’ +plot.prior.density: no visible binding for global variable ‘y’ +plot.trait: no visible global function definition for ‘trait.lookup’ +plot.trait: no visible global function definition for ‘jagify’ +plot.trait: no visible binding for '<<-' assignment to ‘x.ticks’ +plot.trait: no visible global function definition for + ‘create.base.plot’ +plot.trait: no visible global function definition for ‘plot_data’ +plot.trait: no visible global function definition for ‘geom_segment’ +plot.trait: no visible binding for global variable ‘x.ticks’ +plot.trait: no visible global function definition for ‘last’ +plot.trait: no visible global function definition for ‘theme’ +prior.fn: no visible global function definition for ‘qnorm’ +prior.fn: no visible global function definition for ‘qlnorm’ +prior.fn: no visible global function definition for ‘qgamma’ +prior.fn: no visible global function definition for ‘qweibull’ +prior.fn: no visible global function definition for ‘qbeta’ +priorfig: no visible global function definition for ‘trait.lookup’ +priorfig: no visible global function definition for ‘theme’ +priorfig: no visible binding for global variable ‘x’ +priorfig: no visible global function definition for ‘runif’ +priorfig: no visible binding for global variable ‘y’ +Undefined global functions or variables: + AIC create.base.plot density dev.off fitdistr geom_segment jagify + last pdf plot_data plot.density qbeta qgamma qlnorm qnorm qweibull + runif sensitivity.plot tabnum theme trait.lookup x x.ticks y + zero.bounded.density +Consider adding + importFrom("grDevices", "dev.off", "pdf") + importFrom("stats", "AIC", "density", "qbeta", "qgamma", "qlnorm", + "qnorm", "qweibull", "runif") +to your NAMESPACE file. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... 
WARNING +Missing link or links in documentation object 'create.density.df.Rd': + ‘stats::density’ + +Missing link or links in documentation object 'prior.fn.Rd': + ‘DEoptim’ + +Missing link or links in documentation object 'priorfig.Rd': + ‘prior.density’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'create.density.df' + ‘n’ ‘...’ + +Undocumented arguments in documentation object 'fit.dist' + ‘trait’ ‘n’ + +Undocumented arguments in documentation object 'get.quantiles.from.density' + ‘density.df’ +Documented arguments not in \usage in documentation object 'get.quantiles.from.density': + ‘priordensity’ + +Undocumented arguments in documentation object 'plot.densities' + ‘density.plot.inputs’ ‘...’ +Documented arguments not in \usage in documentation object 'plot.densities': + ‘sensitivity.results’ + +Undocumented arguments in documentation object 'plot.trait' + ‘trait.df’ ‘x.lim’ ‘y.lim’ ‘logx’ +Documented arguments not in \usage in documentation object 'plot.trait': + ‘trait.data’ + +Undocumented arguments in documentation object 'priorfig' + ‘fontsize’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'create.density.df': + ‘zero.bounded’ + +Argument items with no description in Rd object 'plot.posterior.density': + ‘posterior.density’ + +Argument items with no description in Rd object 'plot.prior.density': + ‘prior.density’ + +Argument items with no description in Rd object 'plot.trait': + ‘posterior.sample’ ‘fontsize’ ‘trait.data’ + +Argument items with no description in Rd object 'pr.samp': + ‘distn’ ‘parama’ ‘paramb’ + +* checking for unstated dependencies in examples ... OK +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘priors_demo.Rmd’ +Package has no Sweave vignette sources and no VignetteBuilder field. +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 5 WARNINGs, 3 NOTEs diff --git a/modules/rtm/tests/Rcheck_reference.log b/modules/rtm/tests/Rcheck_reference.log new file mode 100644 index 00000000000..f4be10667a2 --- /dev/null +++ b/modules/rtm/tests/Rcheck_reference.log @@ -0,0 +1,262 @@ +* using log directory ‘/tmp/RtmpPkD9Bk/PEcAnRTM.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAnRTM/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAnRTM’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... 
OK +* checking whether package ‘PEcAnRTM’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +Unexported object imported by a ':::' call: ‘stats:::C_acf’ + See the note in ?`:::` about the use of this operator. + Including base/recommended package(s): + ‘stats’ +* checking S3 generic/method consistency ... WARNING +neff: + function(x, ...) +neff.default: + function(x, lag.max, min_rho) + +print: + function(x, ...) +print.spectra: + function(spectra, n, ...) + +plot: + function(x, ...) +plot.spectra: + function(spectra, type, ...) + +str: + function(object, ...) +str.spectra: + function(spectra, ...) + +See section ‘Generic functions and methods’ in the ‘Writing R +Extensions’ manual. +* checking replacement functions ... WARNING + ‘[[<-.spectra’ +The argument of a replacement function which corresponds to the right +hand side must be named ‘value’. +* checking foreign function calls ... WARNING +Foreign function call to a base package: + .Call(stats:::C_acf, ...) +Packages should not make .C/.Call/.External/.Fortran calls to a base +package. They are not part of the API, for use only by R itself and +subject to change without notice. +* checking R code for possible problems ... 
NOTE +defparam: no visible global function definition for ‘data’ +dtnorm: no visible global function definition for ‘dnorm’ +dtnorm: no visible global function definition for ‘pnorm’ +EDR: no visible global function definition for ‘data’ +EDR: no visible binding for global variable ‘pftmapping’ +generate.noise: no visible global function definition for ‘dnorm’ +generate.noise: no visible global function definition for ‘median’ +generate.noise: no visible global function definition for ‘rnorm’ +generate.rsr.all: no visible global function definition for ‘data’ +generate.rsr.all: no visible binding for global variable + ‘raw.sensor.data’ +generate.rsr.all: no visible binding for global variable + ‘fwhm.aviris.ng’ +generate.rsr.all: no visible binding for global variable + ‘fwhm.aviris.classic’ +generate.rsr.all: no visible binding for global variable + ‘fwhm.hyperion’ +generate.rsr.all: no visible binding for global variable + ‘bandwidth.chrisproba’ +get.EDR.output: no visible global function definition for ‘read.table’ +interpolate.rsr: no visible global function definition for ‘splinefun’ +invert_bt: no visible global function definition for ‘modifyList’ +invert_bt: no visible binding for global variable ‘sampler’ +invert.custom: no visible global function definition for + ‘txtProgressBar’ +invert.custom: no visible global function definition for + ‘setTxtProgressBar’ +invert.custom: no visible binding for global variable ‘sd’ +invert.custom: no visible global function definition for ‘cor’ +invert.custom: no visible global function definition for ‘runif’ +invert.custom: no visible global function definition for ‘rgamma’ +plot.spectra: no visible global function definition for ‘plot’ +print.spectra: no visible global function definition for ‘head’ +print.spectra: no visible global function definition for ‘tail’ +priorfunc.prospect : prior: no visible global function definition for + ‘dlnorm’ +process_output: no visible global function definition for ‘window’ +process.licor.rsr: no visible global function definition for ‘read.csv’ +process.licor.rsr: no visible global function definition for + ‘complete.cases’ +prospect_bt_prior: no visible global function definition for + ‘modifyList’ +read.rsr.folder: no visible global function definition for ‘read.csv’ +resample.default: no visible global function definition for ‘approxfun’ +resample.default: no visible global function definition for ‘splinefun’ +rsr.from.fwhm: no visible global function definition for ‘qnorm’ +rsr.from.fwhm: no visible binding for global variable ‘dnorm’ +rtnorm: no visible global function definition for ‘rnorm’ +rtnorm: no visible global function definition for ‘qnorm’ +rtnorm: no visible global function definition for ‘runif’ +rtnorm: no visible global function definition for ‘pnorm’ +spectral.response: no visible binding for global variable ‘sensor.rsr’ +summary_simple: no visible binding for global variable ‘sd’ +summary_simple: no visible binding for global variable ‘quantile’ +summary_simple: no visible binding for global variable ‘median’ +Undefined global functions or variables: + approxfun bandwidth.chrisproba complete.cases cor data dlnorm dnorm + fwhm.aviris.classic fwhm.aviris.ng fwhm.hyperion head median + modifyList pftmapping plot pnorm qnorm quantile raw.sensor.data + read.csv read.table rgamma rnorm runif sampler sd sensor.rsr + setTxtProgressBar splinefun tail txtProgressBar window +Consider adding + importFrom("graphics", "plot") + importFrom("stats", "approxfun", "complete.cases", "cor", "dlnorm", + 
"dnorm", "median", "pnorm", "qnorm", "quantile", "rgamma", + "rnorm", "runif", "sd", "splinefun", "window") + importFrom("utils", "data", "head", "modifyList", "read.csv", + "read.table", "setTxtProgressBar", "tail", "txtProgressBar") +to your NAMESPACE file. + +Found the following calls to data() loading into the global environment: +File ‘PEcAnRTM/R/defparam.R’: + data(model.list) +File ‘PEcAnRTM/R/edr.wrapper.R’: + data(pftmapping, package = "PEcAn.ED2") +File ‘PEcAnRTM/R/generate-rsr.R’: + data(raw.sensor.data) +See section ‘Good practice’ in ‘?data’. +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... WARNING +Undocumented data sets: + ‘dataSpec_prospectd’ ‘model.list’ ‘bandwidth.chrisproba’ + ‘fwhm.aviris.classic’ ‘fwhm.aviris.ng’ ‘fwhm.hyperion’ ‘rsr.avhrr’ + ‘rsr.landsat5’ ‘rsr.landsat7’ ‘rsr.landsat8’ ‘rsr.modis’ ‘rsr.viirs’ + ‘sensor.rsr’ ‘testspec_ACRU’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... WARNING +Codoc mismatches from documentation object '[[<-.spectra': +[[<-.spectra + Code: function(spectra, wavelength, j, values) + Docs: function(spectra, wavelength, j, values, value) + Argument names in docs not in code: + value + +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'EDR.preprocess.history' + ‘output.path’ + +Undocumented arguments in documentation object 'bt_check_convergence' + ‘samples’ ‘threshold’ ‘use_CI’ ‘use_mpsrf’ + +Undocumented arguments in documentation object 'corr_max_lag' + ‘nx’ ‘r’ ‘sig.level’ ‘power’ ‘...’ + +Undocumented arguments in documentation object 'generate.rsr.all' + ‘path.to.licor’ + +Undocumented arguments in documentation object 'invert.lsq' + ‘upper’ +Documented arguments not in \usage in documentation object 'invert.lsq': + ‘uppper’ + +Undocumented arguments in documentation object 'matplot' + ‘...’ + +Undocumented arguments in documentation object 'matplot.default' + ‘...’ + +Undocumented arguments in documentation object 'neff' + ‘...’ + +Undocumented arguments in documentation object 'plot.spectra' + ‘type’ + +Undocumented arguments in documentation object 'rtm_loglike' + ‘nparams’ ‘model’ ‘observed’ ‘lag.max’ ‘verbose’ ‘...’ + +Undocumented arguments in documentation object 'setup_edr' + ‘...’ + +Undocumented arguments in documentation object 'str.spectra' + ‘...’ + +Undocumented arguments in documentation object '[[<-.spectra' + ‘value’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... WARNING + + Note: significantly better compression could be obtained + by using R CMD build --resave-data + old_size new_size compress + dataSpec_prospectd.RData 36Kb 22Kb xz + raw.sensor.data.RData 109Kb 82Kb xz + sensor.rsr.RData 2.3Mb 1.9Mb xz + testspec.RData 1.2Mb 956Kb xz +* checking line endings in C/C++/Fortran sources/headers ... 
OK +* checking line endings in Makefiles ... OK +* checking compilation flags in Makevars ... OK +* checking for GNU extensions in Makefiles ... OK +* checking for portable use of $(BLAS_LIBS) and $(LAPACK_LIBS) ... OK +* checking pragmas in C/C++ headers and code ... OK +* checking compilation flags used ... OK +* checking compiled code ... NOTE +File ‘PEcAnRTM/libs/PEcAnRTM.so’: + Found no calls to: ‘R_registerRoutines’, ‘R_useDynamicSymbols’ + +It is good practice to register native routines and to disable symbol +search. + +See ‘Writing portable packages’ in the ‘Writing R Extensions’ manual. +* checking files in ‘vignettes’ ... WARNING +Files in the 'vignettes' directory but no files in 'inst/doc': + ‘edr.sensitivity.R’, ‘invert.edr.R’, ‘pecanrtm.vignette.Rmd’, + ‘test.edr.R’ +Files named as vignettes but with no recognized vignette engine: + ‘vignettes/pecanrtm.vignette.Rmd’ +(Is a VignetteBuilder field missing?) +* checking examples ... OK +* DONE +Status: 9 WARNINGs, 3 NOTEs diff --git a/modules/uncertainty/tests/Rcheck_reference.log b/modules/uncertainty/tests/Rcheck_reference.log new file mode 100644 index 00000000000..bb57f9b3a87 --- /dev/null +++ b/modules/uncertainty/tests/Rcheck_reference.log @@ -0,0 +1,337 @@ +* using log directory ‘/tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.uncertainty/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.uncertainty’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.uncertainty’ can be installed ... WARNING +Found the following significant warnings: + Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\sigma' + Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\left' + Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\frac' + Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\kappa' + Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\right' + Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/spline.ensemble.Rd:18: unknown macro '\phi' +See ‘/tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00install.out’ for details. +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... 
OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... WARNING +'::' or ':::' imports not declared from: + ‘PEcAn.assim.sequential’ ‘PEcAn.data.atmosphere’ +Packages in Depends field not imported from: + ‘ggmap’ ‘ggplot2’ ‘gridExtra’ ‘PEcAn.priors’ ‘PEcAn.utils’ + These packages need to be imported from (in the NAMESPACE file) + for when this namespace is loaded but not attached. +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +ensemble.ts: no visible binding for global variable ‘quantile’ +ensemble.ts: no visible global function definition for ‘plot’ +ensemble.ts: no visible global function definition for ‘lines’ +ensemble.ts: no visible global function definition for ‘points’ +ensemble.ts: no visible global function definition for ‘legend’ +ensemble.ts: no visible global function definition for ‘box’ +flux.uncertainty: no visible global function definition for ‘sd’ +flux.uncertainty: no visible global function definition for ‘lm’ +format.plot.input: no visible global function definition for + ‘trait.lookup’ +get.coef.var: no visible global function definition for ‘var’ +get.coef.var: no visible global function definition for ‘median’ +get.elasticity: no visible global function definition for ‘median’ +get.gi.phii: no visible global function definition for ‘laply’ +get.parameter.samples: no visible global function definition for + ‘dbfile.check’ +get.parameter.samples: no visible binding for global variable + ‘post.distns’ +get.parameter.samples: no visible binding for global variable + ‘trait.mcmc’ +get.parameter.samples: no visible global function definition for + ‘get.quantiles’ +get.parameter.samples: no visible global function definition for + ‘vecpaste’ +get.parameter.samples: no visible global function definition for + ‘get.sa.sample.list’ +get.sensitivity: no visible global function definition for ‘median’ +kurtosis: no visible global function definition for ‘sd’ +plot_flux_uncertainty: no visible global function definition for ‘plot’ +plot_flux_uncertainty: no visible global function definition for + ‘lines’ +plot_flux_uncertainty: no visible global function definition for + ‘legend’ +plot_sensitivity: no visible global function definition for + ‘trait.lookup’ +plot_sensitivity: no visible global function definition for ‘ggplot’ +plot_sensitivity: no visible global function definition for ‘geom_line’ +plot_sensitivity: no visible global function definition for ‘aes’ +plot_sensitivity: no visible binding for global variable ‘x’ +plot_sensitivity: no visible binding for global variable ‘y’ +plot_sensitivity: no visible global function definition for + ‘geom_point’ +plot_sensitivity: no visible global function definition for + ‘scale_y_continuous’ +plot_sensitivity: no visible global function definition for ‘theme_bw’ +plot_sensitivity: no visible global function definition for ‘ggtitle’ +plot_sensitivity: no visible global function definition for ‘theme’ 
+plot_sensitivity: no visible global function definition for + ‘element_text’ +plot_sensitivity: no visible global function definition for + ‘element_blank’ +plot_sensitivity: no visible global function definition for + ‘scale_x_continuous’ +plot_variance_decomposition: no visible global function definition for + ‘theme_set’ +plot_variance_decomposition: no visible global function definition for + ‘theme_classic’ +plot_variance_decomposition: no visible global function definition for + ‘theme’ +plot_variance_decomposition: no visible global function definition for + ‘element_text’ +plot_variance_decomposition: no visible global function definition for + ‘element_blank’ +plot_variance_decomposition: no visible global function definition for + ‘trait.lookup’ +plot_variance_decomposition: no visible global function definition for + ‘ggplot’ +plot_variance_decomposition: no visible global function definition for + ‘coord_flip’ +plot_variance_decomposition: no visible global function definition for + ‘ggtitle’ +plot_variance_decomposition: no visible global function definition for + ‘geom_text’ +plot_variance_decomposition: no visible global function definition for + ‘aes’ +plot_variance_decomposition: no visible binding for global variable + ‘points’ +plot_variance_decomposition: no visible global function definition for + ‘scale_y_continuous’ +plot_variance_decomposition: no visible global function definition for + ‘geom_pointrange’ +plot_variance_decomposition: no visible binding for global variable + ‘coef.vars’ +plot_variance_decomposition: no visible binding for global variable + ‘elasticities’ +plot_variance_decomposition: no visible binding for global variable + ‘variances’ +plot.oechel.flux: no visible global function definition for ‘par’ +prep.data.assim: no visible global function definition for ‘rexp’ +prep.data.assim: no visible global function definition for + ‘complete.cases’ +prep.data.assim: no visible global function definition for ‘cov’ +read.ameriflux.L2: no visible global function definition for + ‘read.table’ +read.ensemble.output: no visible binding for global variable + ‘runs.samples’ +read.ensemble.ts: no visible binding for global variable ‘runs.samples’ +run.ensemble.analysis: no visible global function definition for + ‘convert.expr’ +run.ensemble.analysis: no visible global function definition for + ‘mstmipvar’ +run.ensemble.analysis: no visible global function definition for + ‘ensemble.filename’ +run.ensemble.analysis: no visible binding for global variable + ‘ensemble.output’ +run.ensemble.analysis: no visible global function definition for ‘pdf’ +run.ensemble.analysis: no visible global function definition for ‘par’ +run.ensemble.analysis: no visible global function definition for ‘hist’ +run.ensemble.analysis: no visible global function definition for ‘box’ +run.ensemble.analysis: no visible global function definition for + ‘boxplot’ +run.ensemble.analysis: no visible global function definition for + ‘dev.off’ +run.sensitivity.analysis: no visible global function definition for + ‘sensitivity.filename’ +run.sensitivity.analysis: no visible binding for global variable + ‘runs.samples’ +run.sensitivity.analysis: no visible global function definition for + ‘convert.expr’ +run.sensitivity.analysis: no visible binding for global variable + ‘sa.samples’ +run.sensitivity.analysis: no visible global function definition for + ‘trait.lookup’ +run.sensitivity.analysis: no visible binding for global variable + ‘sensitivity.output’ +run.sensitivity.analysis: no visible 
global function definition for + ‘vecpaste’ +run.sensitivity.analysis: no visible global function definition for + ‘pdf’ +run.sensitivity.analysis: no visible global function definition for + ‘dev.off’ +run.sensitivity.analysis: no visible binding for global variable + ‘grid.arrange’ +runModule.run.ensemble.analysis: no visible global function definition + for ‘is.MultiSettings’ +runModule.run.ensemble.analysis: no visible global function definition + for ‘papply’ +runModule.run.ensemble.analysis: no visible global function definition + for ‘is.Settings’ +runModule.run.sensitivity.analysis: no visible global function + definition for ‘is.MultiSettings’ +runModule.run.sensitivity.analysis: no visible global function + definition for ‘papply’ +runModule.run.sensitivity.analysis: no visible global function + definition for ‘is.Settings’ +sa.splinefun: no visible global function definition for ‘splinefun’ +sd.var: no visible global function definition for ‘var’ +sensitivity.analysis : : no visible global function + definition for ‘var’ +spline.truncate: no visible global function definition for ‘pnorm’ +spline.truncate: no visible global function definition for ‘quantile’ +spline.truncate: no visible global function definition for + ‘zero.truncate’ +tundra.flux.uncertainty : : no visible global function + definition for ‘read.table’ +variance.stats: no visible global function definition for ‘var’ +vd.variance: no visible binding for global variable ‘var’ +write.ensemble.configs: no visible binding for global variable ‘id’ +write.ensemble.configs: no visible binding for global variable + ‘required’ +write.ensemble.configs: no visible binding for global variable ‘tag’ +write.ensemble.configs: no visible binding for global variable + ‘site_id’ +write.ensemble.configs: no visible global function definition for ‘map’ +Undefined global functions or variables: + aes box boxplot coef.vars complete.cases convert.expr coord_flip cov + dbfile.check dev.off elasticities element_blank element_text + ensemble.filename ensemble.output geom_line geom_point + geom_pointrange geom_text get.quantiles get.sa.sample.list ggplot + ggtitle grid.arrange hist id is.MultiSettings is.Settings laply + legend lines lm map median mstmipvar papply par pdf plot pnorm points + post.distns quantile read.table required rexp runs.samples sa.samples + scale_x_continuous scale_y_continuous sd sensitivity.filename + sensitivity.output site_id splinefun tag theme theme_bw theme_classic + theme_set trait.lookup trait.mcmc var variances vecpaste x y + zero.truncate +Consider adding + importFrom("graphics", "box", "boxplot", "hist", "legend", "lines", + "par", "plot", "points") + importFrom("grDevices", "dev.off", "pdf") + importFrom("stats", "complete.cases", "cov", "lm", "median", "pnorm", + "quantile", "rexp", "sd", "splinefun", "var") + importFrom("utils", "read.table") +to your NAMESPACE file. +* checking Rd files ... WARNING +prepare_Rd: sd.var.Rd:19: unknown macro '\sigma' +prepare_Rd: sd.var.Rd:19: unknown macro '\left' +prepare_Rd: sd.var.Rd:19: unknown macro '\frac' +prepare_Rd: sd.var.Rd:19: unknown macro '\kappa' +prepare_Rd: sd.var.Rd:19: unknown macro '\right' +prepare_Rd: spline.ensemble.Rd:18: unknown macro '\phi' +* checking Rd metadata ... OK +* checking Rd line widths ... NOTE +Rd file 'sensitivity.analysis.Rd': + \examples lines wider than 100 characters: + sensitivity.analysis(trait.samples[[pft$name]], sa.samples[[pft$name]], sa.agb[[pft$name]], pft$outdir) + +These lines will be truncated in the PDF manual. 
+* checking Rd cross-references ... WARNING +Missing link or links in documentation object 'get.gi.phii.Rd': + ‘variance.decomposition’ + +See section 'Cross-references' in the 'Writing R Extensions' manual. + +* checking for missing documentation entries ... WARNING +Undocumented code objects: + ‘runModule.run.ensemble.analysis’ + ‘runModule.run.sensitivity.analysis’ +Undocumented data sets: + ‘ensemble.output’ ‘sensitivity.output’ ‘ensemble.samples’ + ‘sa.samples’ ‘settings’ ‘trait.samples’ +All user-level objects in a package should have documentation entries. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'ensemble.ts' + ‘ensemble.ts’ ‘observations’ ‘window’ ‘...’ + +Undocumented arguments in documentation object 'flux.uncertainty' + ‘...’ + +Undocumented arguments in documentation object 'get.change' + ‘measurement’ + +Undocumented arguments in documentation object 'get.parameter.samples' + ‘settings’ ‘posterior.files’ ‘ens.sample.method’ + +Undocumented arguments in documentation object 'input.ens.gen' + ‘input’ + +Documented arguments not in \usage in documentation object 'plot_sensitivities': + ‘sensitivity.results’ + +Undocumented arguments in documentation object 'plot_sensitivity' + ‘linesize’ ‘dotsize’ + +Undocumented arguments in documentation object 'plot_variance_decomposition' + ‘plot.inputs’ +Documented arguments not in \usage in documentation object 'plot_variance_decomposition': + ‘all.plot.inputs’ ‘exclude’ ‘convert.var’ ‘var.label’ + ‘order.plot.input’ ‘ticks.plot.input’ ‘col’ ‘pch’ ‘main’ + +Undocumented arguments in documentation object 'read.ameriflux.L2' + ‘file.name’ ‘year’ + +Undocumented arguments in documentation object 'read.ensemble.ts' + ‘settings’ ‘ensemble.id’ ‘variable’ ‘start.year’ ‘end.year’ + +Undocumented arguments in documentation object 'run.ensemble.analysis' + ‘settings’ ‘ensemble.id’ ‘variable’ ‘start.year’ ‘end.year’ ‘...’ + +Undocumented arguments in documentation object 'run.sensitivity.analysis' + ‘settings’ ‘ensemble.id’ ‘variable’ ‘start.year’ ‘end.year’ ‘...’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... WARNING +Argument items with no description in Rd object 'get.sensitivity': + ‘trait.samples’ ‘sa.splinefun’ + +Argument items with no description in Rd object 'plot_sensitivity': + ‘y.range’ + +Argument items with no description in Rd object 'sa.splinefun': + ‘quantiles.input’ ‘quantiles.output’ + +* checking for unstated dependencies in examples ... OK +* checking contents of ‘data’ directory ... OK +* checking data for non-ASCII characters ... OK +* checking data for ASCII and uncompressed saves ... OK +* checking examples ... OK +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... 
SKIPPED +* DONE +Status: 7 WARNINGs, 3 NOTEs From 447a83ec266f2c895444799573059a18f419c9a5 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 3 Sep 2019 14:23:36 +0200 Subject: [PATCH 0354/2289] remove file write, cleanup --- scripts/check_with_errors.R | 29 ++++++++++++++++++----------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index e6f37cee0b2..4c8ff5d2a3a 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -97,29 +97,36 @@ msg_lines <- function(msg){ } old_file <- file.path(pkg, "tests", "Rcheck_reference.log") -if (! file.exists(old_file)) { - cat("No reference check file found. Saving current results as the new standard\n") - cat(chk$stdout, file = old_file) - quit("no") -} + +# To update reference files after fixing an old warning: +# * Run check_with_errors.R to be sure the check is currently passing +# * Delete the file you want to update +# * Uncomment this section, run check_with_errors.R, recomment +# * Commit updated file +# if (! file.exists(old_file)) { +# cat("No reference check file found. Saving current results as the new standard\n") +# cat(chk$stdout, file = old_file) +# quit("no") +# } old <- rcmdcheck::parse_check(old_file) cmp <- rcmdcheck::compare_checks(old, chk) if (cmp$status != "+") { + # rcmdcheck found new messages, so check has failed print(cmp) stop("R check of ", pkg, " reports new problems. Please fix them and resubmit.") } else { - # No new messages, but need to check details of pre-existing ones line by line - warn_cmp <- dplyr::filter(cmp$cmp, type == "warning") # stopped earlier for errors, notes let slide for now - reseen_msgs <- msg_lines(dplyr::filter(warn_cmp, which=="new")$output) - prev_msgs <- msg_lines(dplyr::filter(warn_cmp, which=="old")$output) + # No new messages, but need to check details of pre-existing ones + # We stopped earlier for errors, so all entries here are WARNING or NOTE + cur_msgs <- msg_lines(dplyr::filter(cmp$cmp, which=="new")$output) + prev_msgs <- msg_lines(dplyr::filter(cmp$cmp, which=="old")$output) # avoids false positives from tempdir changes - reseen_msgs <- stringr::str_replace_all(reseen_msgs, chk$checkdir, "...") + cur_msgs <- stringr::str_replace_all(cur_msgs, chk$checkdir, "...") prev_msgs <- stringr::str_replace_all(prev_msgs, old$checkdir, "...") - lines_changed <- setdiff(reseen_msgs, prev_msgs) + lines_changed <- setdiff(cur_msgs, prev_msgs) if (length(lines_changed) > 0) { cat("Package check returned new warnings:\n") cat(lines_changed, "\n") From b34048a4d75e8d60abd7e6deea8332571a744eed Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 3 Sep 2019 19:40:01 +0200 Subject: [PATCH 0355/2289] show new errors rather than make user guess --- scripts/check_with_errors.R | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 4c8ff5d2a3a..d09e921a80c 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -116,7 +116,10 @@ cmp <- rcmdcheck::compare_checks(old, chk) if (cmp$status != "+") { # rcmdcheck found new messages, so check has failed print(cmp) - stop("R check of ", pkg, " reports new problems. 
Please fix them and resubmit.")
+  cat("R check of", pkg, "reports the following new problems.",
+      "Please fix these and resubmit:\n")
+  cat(cmp$cmp$output[cmp$cmp$change == 1], sep="\n")
+  stop("Please fix these and resubmit.")
 } else {
   # No new messages, but need to check details of pre-existing ones
   # We stopped earlier for errors, so all entries here are WARNING or NOTE

From 6bdb9b55f48d99db6c18d4785533bd6c38b22f2b Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Tue, 3 Sep 2019 19:49:42 +0200
Subject: [PATCH 0356/2289] use base funs

---
 scripts/check_with_errors.R | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index d09e921a80c..3dfbdca9604 100644
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -92,8 +92,8 @@ msg_lines <- function(msg){
     gsub("\n ", " ", msg, fixed = TRUE), #leading double-space indicates line wrap
     split = "\n",
     fixed = TRUE)
-  msg <- purrr::map(msg, ~.[. != ""])
-  purrr::flatten_chr(purrr::map(msg, ~paste(.[[1]], .[-1], sep=": ")))
+  msg <- lapply(msg, function(x)x[x != ""])
+  unlist(lapply(msg, function(x)paste(x[[1]], x[-1], sep=": ")))
 }
 
 old_file <- file.path(pkg, "tests", "Rcheck_reference.log")
@@ -123,11 +123,11 @@ if (cmp$status != "+") {
 } else {
   # No new messages, but need to check details of pre-existing ones
   # We stopped earlier for errors, so all entries here are WARNING or NOTE
-  cur_msgs <- msg_lines(dplyr::filter(cmp$cmp, which=="new")$output)
-  prev_msgs <- msg_lines(dplyr::filter(cmp$cmp, which=="old")$output)
+  cur_msgs <- msg_lines(cmp$cmp$output[cmp$cmp$which == "new"])
+  prev_msgs <- msg_lines(cmp$cmp$output[cmp$cmp$which == "old"])
   # avoids false positives from tempdir changes
-  cur_msgs <- stringr::str_replace_all(cur_msgs, chk$checkdir, "...")
-  prev_msgs <- stringr::str_replace_all(prev_msgs, old$checkdir, "...")
+  cur_msgs <- gsub(chk$checkdir, "...", cur_msgs)
+  prev_msgs <- gsub(old$checkdir, "...", prev_msgs)
 
   lines_changed <- setdiff(cur_msgs, prev_msgs)
   if (length(lines_changed) > 0) {

From 03a9960e449d0dbf0cd3e059aa6ea4def96b421e Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Tue, 3 Sep 2019 19:54:45 +0200
Subject: [PATCH 0357/2289] wording

---
 scripts/check_with_errors.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index 3dfbdca9604..60ecb891955 100644
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -131,8 +131,8 @@ if (cmp$status != "+") {
 
   lines_changed <- setdiff(cur_msgs, prev_msgs)
   if (length(lines_changed) > 0) {
-    cat("Package check returned new warnings:\n")
+    cat("Package check returned new problems:\n")
     cat(lines_changed, "\n")
-    stop("Please fix these warnings and resubmit.")
+    stop("Please fix these and resubmit.")
   }
 }

From c5522631b1ee4265101eb9ed1064ca4108338775 Mon Sep 17 00:00:00 2001
From: hamzed
Date: Wed, 4 Sep 2019 09:23:56 -0400
Subject: [PATCH 0358/2289] resolving the conflict that xts makes in
 data.atmosphere

---
 modules/data.atmosphere/DESCRIPTION | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION
index 3eab0f6cde6..0c297d8c465 100644
--- a/modules/data.atmosphere/DESCRIPTION
+++ b/modules/data.atmosphere/DESCRIPTION
@@ -16,8 +16,7 @@ Description: The Predictive Ecosystem Carbon Analyzer
     (PEcAn) is a scientific
     integrated into PEcAn.
As a standalone package, it provides an interface
     to access diverse climate data sets.
 Depends:
-    methods,
-    xts
+    methods
 Imports:
     abind (>= 1.4.5),
     car,

From 3390bdb30f73a06bc7c977c868d0612949206fc6 Mon Sep 17 00:00:00 2001
From: hamzed
Date: Wed, 4 Sep 2019 09:33:08 -0400
Subject: [PATCH 0359/2289] checking for the ensemble weights only if the tag
 exists

---
 .../assim.sequential/R/sda.enkf_refactored.R | 22 ++++++++++++-------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R
index 32f606e3769..e34ca931164 100644
--- a/modules/assim.sequential/R/sda.enkf_refactored.R
+++ b/modules/assim.sequential/R/sda.enkf_refactored.R
@@ -160,16 +160,22 @@ sda.enkf <- function(settings,
                                 as.numeric(lapply(settings$state.data.assimilation$state.variables, '[[', 'max_value')))
   rownames(state.interval) <- var.names
 
-  #Generate parameter needs to be run before this to generate the samples. This is hopefully done in the main workflow.
-  if(!file.exists(file.path(settings$outdir, "ensemble_weights.Rdata"))){
-    PEcAn.logger::logger.warn("ensemble_weights.Rdata cannot be found. Make sure you generate samples by running the get.ensemble.weights function before running SDA if you want the ensembles to be weighted.")
-    #create null list
-    for(tt in 1:length(obs.times)){
-      weight_list[[tt]] <- rep(1,nens) #no weights
+
+  # This reads ensemble weights generated by `get_ensemble_weights` function from assim.sequential package
+  if(!is.null(settings$run$inputs$ensembleweights$path)){
+    if(!file.exists(file.path(settings$outdir, "ensemble_weights.Rdata"))){
+      PEcAn.logger::logger.warn("ensemble_weights.Rdata cannot be found. Make sure you generate samples by running the get.ensemble.weights function before running SDA if you want the ensembles to be weighted.")
+      #create null list
+      for(tt in 1:length(obs.times)){
+        weight_list[[tt]] <- rep(1,nens) #no weights
+      }
+    } else{
+      load(file.path(settings$outdir, "ensemble_weights.Rdata")) ## loads ensemble.samples
     }
-  } else{
-    load(file.path(settings$outdir, "ensemble_weights.Rdata")) ## loads ensemble.samples
+
   }
 
+
   #Generate parameter needs to be run before this to generate the samples. This is hopefully done in the main workflow.
   if(!file.exists(file.path(settings$outdir, "samples.Rdata"))) PEcAn.logger::logger.severe("samples.Rdata cannot be found. 
Make sure you generate samples by running the get.parameter.samples function before running SDA.") load(file.path(settings$outdir, "samples.Rdata")) ## loads ensemble.samples From 324815595bd73abb5af169a459505b1760fa2550 Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 4 Sep 2019 14:16:10 -0400 Subject: [PATCH 0360/2289] Everything except restart --- .../assim.sequential/R/sda.enkf_refactored.R | 53 ++++++++++++------- .../inst/WillowCreek/workflow.template.R | 19 ++++++- 2 files changed, 50 insertions(+), 22 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index e34ca931164..436931a59a8 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -20,7 +20,6 @@ #' @export #' - sda.enkf <- function(settings, obs.mean, obs.cov, @@ -40,6 +39,7 @@ sda.enkf <- function(settings, ###-------------------------------------------------------------------### ### read settings ### ###-------------------------------------------------------------------### + weight_list <- list() adjustment <- settings$state.data.assimilation$adjustment model <- settings$model$type write <- settings$database$bety$write @@ -116,11 +116,11 @@ sda.enkf <- function(settings, overwrite=F)) } } - + if (control$debug) browser() ###-------------------------------------------------------------------### ### tests before data assimilation ### ###-------------------------------------------------------------------###---- - obs.times <- as.Date(names(obs.mean),format = "%Y/%m/%d") + obs.times <- names(obs.mean) obs.times.POSIX <- lubridate::ymd_hms(obs.times) ### TO DO: Need to find a way to deal with years before 1000 for paleon ### need a leading zero @@ -162,19 +162,17 @@ sda.enkf <- function(settings, # This reads ensemble weights generated by `get_ensemble_weights` function from assim.sequential package - if(!is.null(settings$run$inputs$ensembleweights$path)){ - if(!file.exists(file.path(settings$outdir, "ensemble_weights.Rdata"))){ - PEcAn.logger::logger.warn("ensemble_weights.Rdata cannot be found. Make sure you generate samples by running the get.ensemble.weights function before running SDA if you want the ensembles to be weighted.") - #create null list - for(tt in 1:length(obs.times)){ - weight_list[[tt]] <- rep(1,nens) #no weights - } - } else{ - load(file.path(settings$outdir, "ensemble_weights.Rdata")) ## loads ensemble.samples + if(!file.exists(file.path(settings$outdir, "ensemble_weights.Rdata"))){ + PEcAn.logger::logger.warn("ensemble_weights.Rdata cannot be found. Make sure you generate samples by running the get.ensemble.weights function before running SDA if you want the ensembles to be weighted.") + #create null list + for(tt in 1:length(obs.times)){ + weight_list[[tt]] <- rep(1,nens) #no weights } - + } else{ + load(file.path(settings$outdir, "ensemble_weights.Rdata")) ## loads ensemble.samples } - + + #Generate parameter needs to be run before this to generate the samples. This is hopefully done in the main workflow. if(!file.exists(file.path(settings$outdir, "samples.Rdata"))) PEcAn.logger::logger.severe("samples.Rdata cannot be found. 
Make sure you generate samples by running the get.parameter.samples function before running SDA.")
   load(file.path(settings$outdir, "samples.Rdata"))  ## loads ensemble.samples
@@ -249,10 +247,10 @@
       sum(unlist(sapply(
         X = run.id,
         FUN = function(x){
-          pattern = paste0(x, '/', obs.year, '.nc')[1]
+          pattern = paste0(x, '/*.nc$')[1]
           grep(
             pattern = pattern,
-            x = list.files(file.path(outdir,x), "*.nc", recursive = F, full.names = T)
+            x = list.files(file.path(outdir,x), "*.nc$", recursive = F, full.names = T)
           )
         },
         simplify = T
@@ -264,7 +262,6 @@
   if (sum_files == 0){
     #removing:t > 1
-    #removing old simulations #why? don't we need them to restart?
     #unlink(list.files(outdir,"*.nc",recursive = T,full.names = T))
     #-Splitting the input for the models that they don't care about the start and end time of simulations and they run as long as their met file.
@@ -295,8 +292,8 @@
     if(exists('new.state')){ #Has the analysis been run? Yes, then restart from analysis.
       restart.arg<-list(runid = run.id,
-                        start.time = strptime(obs.times[t-1],format="%Y-%m-%d %H:%M:%S"),
-                        stop.time = strptime(obs.times[t],format="%Y-%m-%d %H:%M:%S"),
+                        start.time = lubridate::ymd_hms(obs.times[t - 1], truncated = 3),
+                        stop.time = lubridate::ymd_hms(obs.times[t], truncated = 3),
                         settings = settings,
                         new.state = new.state,
                         new.params = new.params,
@@ -325,6 +322,8 @@
     #-------------------------------------------- RUN
     PEcAn.remote::start.model.runs(settings, settings$database$bety$write)
+
+
   }
   #------------------------------------------- Reading the output
   X_tmp <- vector("list", 2)
@@ -350,6 +349,20 @@
   }
 
+  #----changing the extension of nc files to a more specific date-related name
+  list.files(
+    path = file.path(settings$outdir, "out"),
+    "*.nc$",
+    recursive = TRUE,
+    full.names = TRUE
+  ) %>%
+    walk( function(.x){
+
+      file.rename(.x , file.path(dirname(.x),
+                                 paste0(gsub(" ","",names(obs.mean)[t] %>% as.character()),".nc"))
+      )
+    })
   #--- Reformating X
   X <- do.call(rbind, X)
   FORECAST[[t]] <- X
@@ -399,7 +412,7 @@
   # making the mapping matrix
   #TO DO: doesn't work unless it's one to one
-  #H <- Construct_H(choose, Y, X)
+  if(length(operators)==0) H <- Construct_H(choose, Y, X)
   ###-------------------------------------------------------------------###
   ###  Analysis                                                          ###
   ###-------------------------------------------------------------------###----
diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R
index ca27c78d7bc..d30520bfb98 100644
--- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R
+++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R
@@ -183,6 +183,21 @@ prep.data <- prep.data %>%
     day.data
   })
 
+# if there is an infinite value then take it out
+prep.data<-prep.data %>%
+  map(function(day.data){
+    #checking the mean
+    nan.mean <- which(is.infinite(day.data$means) | is.nan(day.data$means) | is.na(day.data$means))
+    if ( length(nan.mean)>0 ) {
+
+      day.data$means <- day.data$means[-nan.mean]
+      day.data$covs <- day.data$covs[-nan.mean, -nan.mean] %>%
+        as.matrix() %>%
+        `colnames<-`(c(colnames(day.data$covs)[-nan.mean]))
+    }
+    day.data
+  })
+
 obs.mean <- prep.data %>% map('means') %>% setNames(names(prep.data))
 obs.cov <- prep.data %>% map('covs') %>% setNames(names(prep.data))
 
@@ -203,7 +218,7 @@ if ('state.data.assimilation' %in% names(settings)) {
   PEcAn.utils::status.start("SDA")
   PEcAn.assim.sequential::sda.enkf(
settings, - restart=restart.path, + restart=FALSE, Q=0, obs.mean = obs.mean, obs.cov = obs.cov, @@ -212,7 +227,7 @@ if ('state.data.assimilation' %in% names(settings)) { interactivePlot =FALSE, TimeseriesPlot =TRUE, BiasPlot =FALSE, - debug = FALSE, + debug =TRUE, pause=FALSE ) ) From 5ebe7458251631e62e989e448bd019469a2c911e Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 9 Sep 2019 09:57:34 -0400 Subject: [PATCH 0361/2289] removed the imported xts --- modules/data.atmosphere/NAMESPACE | 1 - modules/data.atmosphere/R/extract_ERA5.R | 1 - 2 files changed, 2 deletions(-) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 92d94cd86ea..ce899b5ac4e 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -99,6 +99,5 @@ export(temporal.downscale.functions) export(upscale_met) export(wide2long) import(dplyr) -import(xts) importFrom(magrittr,"%>%") importFrom(rgdal,checkCRSArgs) diff --git a/modules/data.atmosphere/R/extract_ERA5.R b/modules/data.atmosphere/R/extract_ERA5.R index d9a7eb684eb..c4854fb49bb 100644 --- a/modules/data.atmosphere/R/extract_ERA5.R +++ b/modules/data.atmosphere/R/extract_ERA5.R @@ -13,7 +13,6 @@ #' @details For the list of variables check out the documentation at \link{https://confluence.ecmwf.int/display/CKB/ERA5+data+documentation#ERA5datadocumentation-Spatialgrid} #' #' @return a list of xts objects with all the variables for the requested years -#' @import xts #' @export #' @examples #' \dontrun{ From 900a0c0c53d711da2bc1548861a419deef6fc4ae Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 9 Sep 2019 16:17:47 +0000 Subject: [PATCH 0362/2289] Add error message returned if no MA posterior exists --- base/db/R/get.trait.data.R | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/base/db/R/get.trait.data.R b/base/db/R/get.trait.data.R index cea64e49792..772814cb5b6 100644 --- a/base/db/R/get.trait.data.R +++ b/base/db/R/get.trait.data.R @@ -111,11 +111,18 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, # check to see if we need to update if (!forceupdate) { if (is.null(pft$posteriorid)) { - pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid) %>% - dplyr::arrange(dplyr::desc(created_at)) %>% - head(1) %>% - dplyr::pull(id) + recent_posterior <- dplyr::tbl(dbcon, "posteriors") %>% + dplyr::filter(pft_id == !!pftid) %>% + dplyr::collect() + if (length(recent_posterior) > 0) { + pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% + dplyr::filter(pft_id == !!pftid) %>% + dplyr::arrange(dplyr::desc(created_at)) %>% + head(1) %>% + dplyr::pull(id) + } else { + PEcAn.logger::logger.info("No previous posterior found. 
Forcing update") + } } if (!is.null(pft$posteriorid)) { files <- dbfile.check(type = "Posterior", container.id = pft$posteriorid, con = dbcon, From 320ee44253b89b8873de7cac3447537850e811b4 Mon Sep 17 00:00:00 2001 From: araiho Date: Tue, 10 Sep 2019 16:09:02 -0400 Subject: [PATCH 0363/2289] adding if statement to make sure it doesn't throw an error if the folder doesn't exisit --- modules/assim.sequential/inst/sda.rewind.R | 34 ++++++++++++---------- 1 file changed, 18 insertions(+), 16 deletions(-) diff --git a/modules/assim.sequential/inst/sda.rewind.R b/modules/assim.sequential/inst/sda.rewind.R index 3a747be7f7f..b8ca12241af 100644 --- a/modules/assim.sequential/inst/sda.rewind.R +++ b/modules/assim.sequential/inst/sda.rewind.R @@ -64,22 +64,24 @@ sda_rewind <- function(settings,run.id,time_to_rewind){ files.last.sda <- list.files.nodir(file.path(settings$outdir,"SDA")) #copying - file.copy(file.path(file.path(settings$outdir,"SDA"),files.last.sda), - file.path(file.path(settings$outdir,"SDA"),paste0(as.numeric(time_to_rewind)-1,"/",files.last.sda))) - - load(file.path(settings$outdir,"SDA",'sda.output.Rdata')) - - X <- FORECAST[[t]] - FORECAST[t] <- NULL - ANALYSIS[t] <- NULL - enkf.params[t] <- NULL - - for(i in 1:length(new.state)) new.state[[i]] <- ANALYSIS[[t]][,i] - - t = t-1 - - save(site.locs, t, X, FORECAST, ANALYSIS, enkf.params, new.state, new.params, run.id, - ensemble.id, ensemble.samples, inputs, Viz.output, file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) + if(file.exists(file.path(settings$outdir,"SDA"))){ + file.copy(file.path(file.path(settings$outdir,"SDA"),files.last.sda), + file.path(file.path(settings$outdir,"SDA"),paste0(as.numeric(time_to_rewind)-1,"/",files.last.sda))) + + load(file.path(settings$outdir,"SDA",'sda.output.Rdata')) + + X <- FORECAST[[t]] + FORECAST[t] <- NULL + ANALYSIS[t] <- NULL + enkf.params[t] <- NULL + + for(i in 1:length(new.state)) new.state[[i]] <- ANALYSIS[[t]][,i] + + t = t-1 + + save(site.locs, t, X, FORECAST, ANALYSIS, enkf.params, new.state, new.params, run.id, + ensemble.id, ensemble.samples, inputs, Viz.output, file = file.path(settings$outdir,"SDA", "sda.output.Rdata")) + } ### Paleon specific with leading zero dates if(nchar(time_to_rewind) == 3){ From 1ec2b11acedaa3dec711c14b938b9efb53574e3f Mon Sep 17 00:00:00 2001 From: araiho Date: Tue, 10 Sep 2019 16:23:24 -0400 Subject: [PATCH 0364/2289] Adding the workflow I used for variance partitioning with linkages to the pecan develop branch for use by others and so I can cite it --- .../inst/workflow.variance.partitioning.R | 278 ++++++++++++++++++ 1 file changed, 278 insertions(+) create mode 100644 modules/assim.sequential/inst/workflow.variance.partitioning.R diff --git a/modules/assim.sequential/inst/workflow.variance.partitioning.R b/modules/assim.sequential/inst/workflow.variance.partitioning.R new file mode 100644 index 00000000000..e88cea1cd96 --- /dev/null +++ b/modules/assim.sequential/inst/workflow.variance.partitioning.R @@ -0,0 +1,278 @@ + +##### +##### Workflow code by Ann Raiho (ann.raiho@gmail.com) +##### This is the workflow template for doing a variance partitioning run +##### It probably will take you two days to rerun. The longest runs are the full SDA and process variance runs. 
+##### Basically I'm altering the pecan.SDA.xml to run the runs with data constrained initial conditions
+##### For the spin up runs I'm altering the pecan.CONFIGS.xml to just use start.model.runs()
+#####
+
+
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.ED2)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+
+
+#####
+##### SDA FULL RUN
+#####
+
+settings <- read.settings("pecan.SDA.xml")
+
+load('sda.obs.Rdata')
+
+obs.mean <- obs.list$obs.mean
+obs.cov <- obs.list$obs.cov
+
+sda.enkf(settings, obs.mean, obs.cov, Q = NULL, restart=F,
+         control=list(trace=T,
+                      interactivePlot=T,
+                      TimeseriesPlot=T,
+                      BiasPlot=T,
+                      plot.title=NULL,
+                      debug=F,
+                      pause = F))
+
+####
+#### DEFAULT
+####
+
+nens <- settings$ensemble
+
+#changed input to be only one met ensemble member
+#basically the same as pecan.CONFIGS.xml
+settings <- read.settings('pecan.DEFAULT.xml')
+settings <- PEcAn.workflow::runModule.run.write.configs(settings)
+
+# Taking average of samples to have fixed params across nens
+load('samples.Rdata')
+ensemble.samples.means <- ensemble.samples
+for(i in 1:length(ensemble.samples.means)) ensemble.samples.means[[i]] <- matrix(colMeans(ensemble.samples[[i]]),nens,ncol(ensemble.samples[[i]]),byrow = T)
+ensemble.samples <- ensemble.samples.means
+save(ensemble.samples,file='average_samples.Rdata')
+save(ensemble.samples,file='samples.Rdata')
+
+outconfig <- write.ensemble.configs(defaults = settings$pfts,
+                                    ensemble.samples = ensemble.samples,
+                                    settings = settings,
+                                    model = settings$model$type,
+                                    write.to.db = settings$database$bety$write,
+                                    restart = NULL)
+PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE)
+
+file.rename('out','out_default')
+file.rename('run','run_default')
+
+####
+#### DEFAULT -- DATA IC ####
+####
+
+#similar to pecan.SDA.xml but without sampling params or met or doing process. Using SDA to constrain time step 1.
+settings <- read.settings('pecan.DEFAULT.DATAIC.xml')
+load('sda.obs.Rdata')
+
+#Because we only want to inform the initial conditions for this model experiment we only use the first data point.
+#The last data point is included so that the model runs until this point.
+obs.cov <- obs.mean <- list()
+for(i in c(1,length(obs.list$obs.mean))){
+  obs.mean[[i]] <- obs.list$obs.mean[[i]]
+  obs.cov[[i]] <- obs.list$obs.cov[[i]]
+}
+
+#write dates as names for data objects
+names(obs.cov) <- names(obs.mean) <- names(obs.list$obs.cov)
+
+obs.mean[2:(length(obs.list$obs.mean)-1)] <- NULL
+obs.cov[2:(length(obs.list$obs.mean)-1)] <- NULL
+
+obs.mean[[length(obs.mean)]] <- rep(NA,length(ensemble.samples.means))
+
+sda.enkf(settings, obs.mean, obs.cov, Q = NULL, restart=F,
+         control=list(trace=T,
+                      interactivePlot=T,
+                      TimeseriesPlot=T,
+                      BiasPlot=T,
+                      plot.title=NULL,
+                      debug=F,
+                      pause = F))
+
+file.rename('out','out_default_ic')
+file.rename('run','run_default_ic')
+file.rename('SDA','SDA_default_ic')
+
+####
+#### PARAM ####
+####
+
+#running with sampled params
+settings <- read.settings('pecan.DEFAULT.xml')
+settings <- PEcAn.workflow::runModule.run.write.configs(settings)
+PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE)
+
+file.rename('out','out_param')
+file.rename('run','run_param')
+
+####
+#### PARAM DATA IC ####
+####
+
+settings <- read.settings('pecan.DEFAULT.DATAIC.xml')
+load('sda.obs.Rdata')#load('sda.data_AGB.Rdata')
+
+#Because we only want to inform the initial conditions for this model experiment we only use the first data point.
+#The last data point is included so that the model runs until this point.
+obs.cov <- obs.mean <- list()
+for(i in c(1,length(obs.list$obs.mean))){
+  obs.mean[[i]] <- obs.list$obs.mean[[i]]
+  obs.cov[[i]] <- obs.list$obs.cov[[i]]
+}
+
+#write dates as names for data objects
+names(obs.cov) <- names(obs.mean) <- names(obs.list$obs.cov)
+
+obs.mean[2:(length(obs.list$obs.mean)-1)] <- NULL
+obs.cov[2:(length(obs.list$obs.mean)-1)] <- NULL
+
+obs.mean[[length(obs.mean)]] <- rep(NA,length(ensemble.samples.means))
+
+sda.enkf(settings, obs.mean, obs.cov, Q = NULL, restart=F,
+         control=list(trace=T,
+                      interactivePlot=T,
+                      TimeseriesPlot=T,
+                      BiasPlot=T,
+                      plot.title=NULL,
+                      debug=F,
+                      pause = F))
+
+file.rename('out','out_param_ic')
+file.rename('run','run_param_ic')
+
+####
+#### MET ####
+####
+
+#running with sampled params
+settings <- read.settings('pecan.SAMP.MET.xml')
+settings <- PEcAn.workflow::runModule.run.write.configs(settings)
+PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE)
+
+file.rename('out','out_met')
+file.rename('run','run_met')
+
+####
+#### MET DATA IC ####
+####
+
+file.rename('ensemble_weights_SDA.Rdata','ensemble_weights.Rdata')
+
+settings <- read.settings('pecan.SAMP.MET.DATA.IC.xml')
+load('sda.obs.Rdata')#load('sda.data_AGB.Rdata')
+
+#Because we only want to inform the initial conditions for this model experiment we only use the first data point.
+#The last data point is included so that the model runs until this point.
+obs.cov <- obs.mean <- list() +for(i in c(1,length(obs.list$obs.mean))){ + obs.mean[[i]] <- obs.list$obs.mean[[i]] + obs.cov[[i]] <- obs.list$obs.cov[[i]] +} + +#write dates as names for data objects +names(obs.cov) <- names(obs.mean) <- names(obs.list$obs.cov) + +obs.mean[2:(length(obs.list$obs.mean)-1)] <- NULL +obs.cov[2:(length(obs.list$obs.mean)-1)] <- NULL + +obs.mean[[length(obs.mean)]] <- rep(NA,length(ensemble.samples.means)) + +sda.enkf(settings, obs.mean, obs.cov, Q = NULL, restart=F, + control=list(trace=T, + interactivePlot=T, + TimeseriesPlot=T, + BiasPlot=T, + plot.title=NULL, + debug=F, + pause = F)) + +file.rename('out','out_met_ic') +file.rename('run','run_met_ic') + +#### +#### PROCESS #### +#### + +settings <- read.settings('pecan.PROCESS.xml') + +#running with sampled params +load('sda.obs.Rdata')#load('sda.data_AGB.Rdata') + +obs.mean <- obs.list$obs.mean +obs.cov <- obs.list$obs.cov + +#write dates as names for data objects +names(obs.cov) <- names(obs.mean) <- names(obs.list$obs.cov) + +for(i in 1:length(obs.list$obs.mean)) obs.mean[[i]] <- rep(NA,length(ensemble.samples.means)) + +load('SDA_SDA/sda.output.Rdata') + +Q <- solve(enkf.params[[t-1]]$q.bar) + +rm(new.state) + +sda.enkf(settings, obs.mean, obs.cov, Q = Q, restart=F, + control=list(trace=T, + interactivePlot=T, + TimeseriesPlot=T, + BiasPlot=T, + plot.title=NULL, + debug=F, + pause = F)) + +file.rename('out','out_process') +file.rename('run','run_process') +file.rename('SDA','SDA_process') + +#### +#### PROCESS DATA IC #### +#### + +settings <- read.settings('pecan.PROCESS.xml') + +#running with sampled params +load('sda.obs.Rdata') + +obs.mean <- obs.list$obs.mean +obs.cov <- obs.list$obs.cov + +#write dates as names for data objects +names(obs.cov) <- names(obs.mean) <- names(obs.list$obs.cov) + +for(i in 2:length(obs.list$obs.mean)) obs.mean[[i]] <- rep(NA,length(ensemble.samples.means)) + +load('SDA_SDA/sda.output.Rdata') + +Q <- solve(enkf.params[[t-1]]$q.bar) + +rm(new.state) + +sda.enkf(settings, obs.mean, obs.cov, Q = Q, restart=T, + control=list(trace=T, + interactivePlot=T, + TimeseriesPlot=T, + BiasPlot=T, + plot.title=NULL, + debug=F, + pause = F)) + +file.rename('out','out_process_ic') +file.rename('run','run_process_ic') +file.rename('SDA','SDA_process_ic') From ae3be8c976bf214d458795630e523c855a6151ac Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Sep 2019 10:31:42 -0400 Subject: [PATCH 0365/2289] Clean up "Developer guide" documentation (#2411) * BUILD: Add `book` target for building docs * DOCS: Clean up code style documentation * Clean up logger documentation * Clean up Roxygen2 documentation * DOCS: Clean up file names in developer section * DOCS: Fix formatting of tutorials section * DOCS: Major cleanup of testing documentation * DOCS: Fix typos in tutorials section * DOCS: Fix whitespace and typos * DOCS: Add @dlebauer suggestion Co-Authored-By: David LeBauer * DOCS: Address @dlebauer suggestions https://github.com/PecanProject/pecan/pull/2411#discussion_r322567504 https://github.com/PecanProject/pecan/pull/2411#discussion_r322565055 https://github.com/PecanProject/pecan/pull/2411#discussion_r322564818 * Fix Typo Thanks @dlebauer! 
Co-Authored-By: David LeBauer --- Makefile | 4 + .../02_user_demos/01_introductions_user.Rmd | 10 +- .../03_web_workflow.Rmd | 2 +- .../03_coding_practices/01-coding-style.Rmd | 36 ++ .../03_coding_practices/02-logging.Rmd | 16 + .../03_coding_practices/02_Logging.Rmd | 80 ---- ...3_Package-data.Rmd => 03-package-data.Rmd} | 0 .../03_coding_practices/04-roxygen.Rmd | 72 +++ .../03_coding_practices/04_Roxygen2.Rmd | 113 ----- .../03_coding_practices/05_Testing.Rmd | 409 ------------------ .../05_developer_workflows/04-testing.Rmd | 121 ++++++ ...compile_PEcAn.Rmd => 05-compile-pecan.Rmd} | 0 ...ructure.Rmd => 06-directory-structure.Rmd} | 0 .../03_topical_pages/10_shiny/shiny.Rmd | 44 +- .../04-package-dependencies.Rmd} | 73 +--- book_source/04_appendix/05-testthat.Rmd | 134 ++++++ .../06_devtools.Rmd | 4 +- 17 files changed, 435 insertions(+), 683 deletions(-) create mode 100644 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01-coding-style.Rmd create mode 100755 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02-logging.Rmd delete mode 100755 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02_Logging.Rmd rename book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/{03_Package-data.Rmd => 03-package-data.Rmd} (100%) create mode 100644 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04-roxygen.Rmd delete mode 100644 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04_Roxygen2.Rmd delete mode 100755 book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd create mode 100755 book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd rename book_source/02_demos_tutorials_workflows/05_developer_workflows/{04_compile_PEcAn.Rmd => 05-compile-pecan.Rmd} (100%) rename book_source/02_demos_tutorials_workflows/05_developer_workflows/{05_directory_structure.Rmd => 06-directory-structure.Rmd} (100%) rename book_source/{02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd => 04_appendix/04-package-dependencies.Rmd} (86%) create mode 100644 book_source/04_appendix/05-testthat.Rmd rename book_source/{02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices => 04_appendix}/06_devtools.Rmd (92%) diff --git a/Makefile b/Makefile index dffd51f08af..5d0dd6cf04e 100644 --- a/Makefile +++ b/Makefile @@ -54,6 +54,10 @@ check: $(ALL_PKGS_C) .check/base/all test: $(ALL_PKGS_T) .test/base/all shiny: $(SHINY_I) +# Render the PEcAn bookdown documentation +book: + cd ./book_source && make build + depends = .doc/$(1) .install/$(1) .check/$(1) .test/$(1) # Make the timestamp directories if they don't exist yet diff --git a/book_source/02_demos_tutorials_workflows/02_user_demos/01_introductions_user.Rmd b/book_source/02_demos_tutorials_workflows/02_user_demos/01_introductions_user.Rmd index 1490be56a5c..498a51bd323 100644 --- a/book_source/02_demos_tutorials_workflows/02_user_demos/01_introductions_user.Rmd +++ b/book_source/02_demos_tutorials_workflows/02_user_demos/01_introductions_user.Rmd @@ -1,11 +1,6 @@ -# User Tutorial Section {#user-section} +# Tutorials {#user-section} -The user Section contains the following sections: -[Basic Web Workflow Usage](#basic-web-wrokflow) -[PEcAn Web Interface](#intermediate User Guide) -[PEcAn from the Command Line](#advanced-user) - -## How PEcAn Works in a 
nutshell +## How PEcAn Works in a nutshell {#pecan-in-a-nutshell} PEcAn provides an interface to a variety of ecosystem models and attempts to standardize and automate the processes of model parameterization, execution, and analysis. First, you choose an ecosystem model, then the time and location of interest (a site), the plant community (or crop) that you are interested in simulating, and a source of atmospheric data from the BETY database (LeBauer et al, 2010). These are set in a "settings" file, commonly named `pecan.xml` which can be edited manually if desired. From here, PEcAn will take over and set up and execute the selected model using your settings. The key is that PEcAn uses models as-is, and all of the translation steps are done within PEcAn so no modifications are required of the model itself. Once the model is finished it will allow you to create graphs with the results of the simulation as well as download the results. It is also possible to see all past experiments and simulations. @@ -31,4 +26,3 @@ The following Tutorials assume you have installed PEcAn. If you have not, please |Vignette|Photosynthetic Response Curves|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd)| |Vignette|Priors|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/priors/vignettes/priors_demo.Rmd)| |Vignette|Leaf Spectra:PROSPECT inversion|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/rtm/vignettes/pecanrtm.vignette.Rmd)| - diff --git a/book_source/02_demos_tutorials_workflows/03_web_workflow.Rmd b/book_source/02_demos_tutorials_workflows/03_web_workflow.Rmd index afb84079529..997c4d5faa0 100644 --- a/book_source/02_demos_tutorials_workflows/03_web_workflow.Rmd +++ b/book_source/02_demos_tutorials_workflows/03_web_workflow.Rmd @@ -1,4 +1,4 @@ -# Basic Web workflow {#basic-web-wrokflow} +# Basic Web workflow {#basic-web-workflow} This chapter describes the major steps of the PEcAn web-based workflow, which are as follows: diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01-coding-style.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01-coding-style.Rmd new file mode 100644 index 00000000000..c6a2534db27 --- /dev/null +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01-coding-style.Rmd @@ -0,0 +1,36 @@ +### Coding Style {#developer-codestyle} + +Consistent coding style improves readability and reduces errors in +shared code. + +Unless otherwise noted, PEcAn follows the [Tidyverse style guide](https://style.tidyverse.org/), so please familiarize yourself with it before contributing. +In addition, note the following: + +- **Document all functions using `roxygen2`**. +See [Roxygen2](#developer-roxygen) for more details. +- **Put your name on things**. +Any function that you create or make a meaningful contribution to should have your name listed after the author tag in the function documentation. +It is also often a good idea to add your name to extended comments describing particularly complex or strange code. +- **Write unit tests with `testthat`**. +Tests are a complement to documentation - they define what a function is (and is not) expected to do. +Not all functions necessarily need unit tests, but the more tests we have, the more confident we can be that changes don't break existing code. 
+Whenever you discover and fix a bug, it is a good idea to write a unit test that makes sure the same bug won't happen again.
+See [Unit_Testing](#developer-testing) for instructions, and [Advanced R: Tests](http://r-pkgs.had.co.nz/tests.html).
+- **Do not use abbreviations**.
+Always write out `TRUE` and `FALSE` (i.e. _do not_ use `T` or `F`).
+Do not rely on partial argument matching -- write out all arguments in full.
+- **Avoid dots in function names**.
+R's S3 methods system uses dots to denote object methods (e.g. `print.matrix` is the `print` method for objects of class `matrix`), which can cause confusion.
+Use underscores instead (e.g. `do_analysis` instead of `do.analysis`).
+(NOTE that many old PEcAn functions violate this convention. The plan is to deprecate those in PEcAn 2.0. See GitHub issue [#392](https://github.com/PecanProject/pecan/issues/392)).
+- **Use informative file names with consistent extensions**.
+Standard file extensions are `.R` for R scripts, `.rds` for individual objects (via `saveRDS` function), and `.RData` (note: capital D!) for multiple objects (via the `save` function).
+For function source code, prefer multiple files with fewer functions in each to large files with lots of files (though it may be a good idea to group closely related functions in a single file).
+File names should match, or at least closely reflect, their files (e.g. function `do_analysis` should be defined in a file called `do_analysis.R`).
+_Do not use spaces in file names_ -- use dashes (`-`) or underscores (`_`).
+- **For using external packages, add the package to `Imports:` and call the corresponding function with `package::function`**.
+_Do not_ use `@importFrom package function` or, worse yet, `@import package`.
+(The exception is infix operators like `magrittr::%>%` or `ggplot2::%+%`, which can be imported via roxygen2 documentation like `@importFrom magrittr %>%`).
+_Do not_ add packages to `Depends`.
+In general, try to avoid adding new dependencies (especially ones that depend on system libraries) unless they are necessary or already widely used in PEcAn (e.g. GDAL, NetCDF, XML, JAGS, `dplyr`).
+For a more thorough and nuanced discussion, see the [package dependencies appendix](#package-dependencies).
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02-logging.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02-logging.Rmd
new file mode 100755
index 00000000000..2a73a6bb1c6
--- /dev/null
+++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02-logging.Rmd
@@ -0,0 +1,16 @@
+### Logging {#developer-logging}
+
+During development we often add many print statements to check to see how the code is doing, what is happening, what intermediate results there are, etc. When done with the development it would be nice to turn this additional code off, but have the ability to quickly turn it back on if we discover a problem. This is where logging comes into play. Logging allows us to use "rules" to say what information should be shown. For example, when I am working on the code to create graphs, I do not need to see any debugging information about the SQL commands being sent; however, when trying to figure out what goes wrong during a SQL statement, it would be nice to show the SQL statements without adding any additional code.
+
+PEcAn provides a set of `logger.*` functions that should be used in place of base R's `stop`, `warn`, `print`, and similar functions.
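+For example, here is a minimal sketch of typical usage (the message text and values are hypothetical; it assumes the `PEcAn.logger` package is installed):
+
+```r
+library(PEcAn.logger)
+
+logger.setLevel("DEBUG")    # show all messages, including debug output
+logger.debug("Expanded file path: ", "/tmp/example.nc")  # hypothetical diagnostic
+logger.info("Processing 42 records")                     # hypothetical progress update
+
+logger.setLevel("WARN")     # now only WARN and above are printed
+logger.info("This message is suppressed")
+logger.warn("This message still prints")
+```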
+The `logger` functions make it easier to print to a system log file, and to control the level of output produced by PEcAn.
+
+* The file [test.logger.R](https://github.com/PecanProject/pecan/blob/develop/base/logger/tests/testthat/test.logger.R) provides descriptive examples
+* This query provides a current overview of [functions that use logging](https://github.com/PecanProject/pecan/search?q=logger&ref=cmdform)
+* Logger functions and their corresponding levels (in order of increasing level):
+  * `logger.debug` (`"DEBUG"`) -- Low-level diagnostic messages that are hidden by default. Good examples of this are expanded file paths and raw results from database queries or other analyses.
+  * `logger.info` (`"INFO"`) -- Informational messages that regular users may want to see, but which do not indicate anything unexpected. Good examples of this are progress updates for long-running processes, or brief summaries of queries or analyses.
+  * `logger.warn` (`"WARN"`) -- Warning messages about issues that may lead to unexpected but valid results. Good examples of this are interactions between arguments that lead to some arguments being ignored or removal of missing or extreme values.
+  * `logger.error` (`"ERROR"`) -- Error messages from which PEcAn has some capacity to recover. Unless you have a very good reason, we recommend avoiding this in favor of either `logger.severe` to actually stop execution or `logger.warn` to more explicitly indicate that the problem is not fatal.
+  * `logger.severe` -- Catastrophic errors that warrant immediate termination of the workflow. This is the only function that actually stops R's execution (via `stop`).
+* The `logger.setLevel` function sets the level at which a message will be printed. For instance, `logger.setLevel("WARN")` will suppress `logger.info` and `logger.debug` messages, but will print `logger.warn` and `logger.error` messages. `logger.setLevel("OFF")` suppresses all logger messages.
+* To print all messages to console, use `logger.setUseConsole(TRUE)`
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02_Logging.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02_Logging.Rmd
deleted file mode 100755
index bb8f2799c77..00000000000
--- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/02_Logging.Rmd
+++ /dev/null
@@ -1,80 +0,0 @@
-### Logging {#developer-logging}
-
-During development we often add many print statements to check to see how the code is doing, what is happening, what intermediate results there are etc. When done with the development it would be nice to turn this additional code off, but have the ability to quickly turn it back on if we discover a problem. This is where logging comes into play. Logging allows us to use "rules" to say what information should be shown. For example when I am working on the code to create graphs, I do not have to see any debugging information about the SQL command being sent, however trying to figure out what goes wrong during a SQL statement it would be nice to show the SQL statements without adding any additional code.
-
-#### PEcAn logging functions
-
-These `logger` family of functions are more sophisticated, and can be used in place of `stop`, `warn`, `print`, and similar functions. The `logger` functions make it easier to print to a system log file.
- -* The file [test.logger.R](../blob/master/utils/inst/tests/test.logger.R) provides descriptive examples -* This query provides an current overview of [functions that use logging](https://github.com/PecanProject/pecan/search?q=logger&ref=cmdform) -* logger functions (in order of increasing level): - * `logger.debug` - * `logger.info` - * `logger.warn` - * `logger.error` -* the `logger.setLevel` function sets the level at which a message will be printed - * `logger.setLevel("DEBUG")` will print messages from all logger functions - * `logger.setLevel("ERROR")` will only print messages from `logger.error` - * `logger.setLevel("INFO")` and `logger.setLevel("WARN")` shows messages from `logger.` and higher functions, e.g. `logger.setLevel("WARN")` shows messages from `logger.warn` and `logger.error` - * `logger.setLevel("OFF")` suppresses all logger messages -* To print all messages to console, use `logger.setUseConsole(TRUE)` - -#### Other R logging packages - -* **This section is for reference - these functions should not be used in PEcAn, as they are redundant with the `logger.*` functions described above** - -R does provide a basic logging capability using stop, warning and message. These allow to print message (and stop execution in case of stop). However there is not an easy method to redirect the logging information to a file, or turn the logging information on and off. This is where one of the following packages comes into play. The packages themselves are very similar since they try to emulate log4j. - -Both of the following packages use a hierarchic loggers, meaning that if you change the level of displayed level of logging at one level all levels below it will update their logging. - -##### `logging` - -The logging development is done at http://logging.r-forge.r-project.org/ and more information is located at http://cran.r-project.org/web/packages/logging/index.html . To install use the following command: - -```r -install.packages("logging", repos="http://R-Forge.R-project.org") -``` - -This has my preference pure based on documentation. - -#### `futile.logger` - -The second logging package is http://cran.r-project.org/web/packages/futile.logger/ and is eerily similar to logging (as a matter of fact logging is based on futile). - -##### Example Usage - -To be able to use the loggers there needs to be some initialization done. Neither package allows to read it from a configuration file, so we might want to use the pecan.xml file to set it up. 
The setup will always be somewhat the same:
-
-```{r loggingexample1, echo=TRUE, eval = FALSE}
-# load library
-library(logging)
-logReset()
-
-# add handlers, responsible for actually printing/saving the messages
-addHandler(writeToConsole)
-addHandler(writeToFile, file="file.log")
-
-# setup root logger with INFO
-setLevel('INFO')
-
-# make all of PEcAn print debug messages
-setLevel('DEBUG', getLogger('PEcAn'))
-
-# only print info and above for the SQL part of PEcAn
-setLevel('INFO', getLogger('PEcAn.SQL'))
-```
-
-To now use logging in the code you can use the following code:
-```{r loggingexample2, echo=TRUE,eval = FALSE}
-pl <- getLogger('PEcAn.MetaAnalysis.function1')
-pl$info("This is an INFO message.")
-pl$debug("The value for x=%d", x)
-pl$error("Something bad happened and I am scared now.")
-```
-or
-```{r loggingexample3, echo=TRUE, eval = FALSE}
-loginfo("This is an INFO message.", logger="PEcAn.MetaAnalysis.function1")
-logdebug("The value for x=%d", x, logger="PEcAn.MetaAnalysis.function1")
-logerror("Something bad happened and I am scared now.", logger="PEcAn.MetaAnalysis.function1")
-```
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/03_Package-data.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/03-package-data.Rmd
similarity index 100%
rename from book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/03_Package-data.Rmd
rename to book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/03-package-data.Rmd
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04-roxygen.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04-roxygen.Rmd
new file mode 100644
index 00000000000..2638bd8c0c7
--- /dev/null
+++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04-roxygen.Rmd
@@ -0,0 +1,72 @@
+### Documenting functions using `roxygen2` {#developer-roxygen}
+
+This is the standard method for documenting R functions in PEcAn.
+For detailed instructions, see one of the following resources:
+
+* `roxygen2` [package documentation](https://roxygen2.r-lib.org/)
+  - [Formatting overview](https://roxygen2.r-lib.org/articles/rd.html)
+  - [Markdown formatting](https://blog.rstudio.com/2017/02/01/roxygen2-6-0-0/)
+  - [Namespaces](https://roxygen2.r-lib.org/articles/namespace.html) (e.g. when to use `@export`)
+* From "R packages" by Hadley Wickham:
+  - [Object Documentation](http://r-pkgs.had.co.nz/man.html)
+  - [Package Metadata](http://r-pkgs.had.co.nz/description.html)
+
+Below is a complete template for a Roxygen documentation block.
+Note that roxygen lines start with `#'`:
+
+```r
+#' Function title, in a few words
+#'
+#' Function description, in 2-3 sentences.
+#'
+#' (Optional) Package details.
+#'
+#' @param argument_1 A description of the argument
+#' @param argument_2 Another argument to the function
+#' @return A description of what the function returns.
+#'
+#' @author Your name
+#' @examples
+#' \dontrun{
+#' # This example will NOT be run by R CMD check.
+#' # Useful for long-running functions, or functions that
+#' # depend on files or values that may not be accessible to R CMD check.
+#' my_function("~/user/my_file") +#'} +# # This example WILL be run by R CMD check +#' my_function(1:10, argument_2 = 5) +## ^^ A few examples of the function's usage +#' @export +# ^^ Whether or not the function will be "exported" (made available) to the user. +# If omitted, the function can only be used inside the package. +my_function <- function(argument_1, argument_2) {...} +``` + +Here is a complete example from the `PEcAn.utils::days_in_year()` function: + +```r +#' Number of days in a year +#' +#' Calculate number of days in a year based on whether it is a leap year or not. +#' +#' @param year Numeric year (can be a vector) +#' @param leap_year Default = TRUE. If set to FALSE will always return 365 +#' +#' @author Alexey Shiklomanov +#' @return integer vector, all either 365 or 366 +#' @export +#' @examples +#' days_in_year(2010) # Not a leap year -- returns 365 +#' days_in_year(2012) # Leap year -- returns 366 +#' days_in_year(2000:2008) # Function is vectorized over years +days_in_year <- function(year, leap_year = TRUE) {...} +``` + +To update documentation throughout PEcAn, run `make document` in the PEcAn root directory. +_Make sure you do this before opening a pull request_ -- +PEcAn's automated testing (Travis) will check if any documentation is out of date and will throw an error like the following if it is: + +``` +These files were changed by the build process: +{...} +``` diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04_Roxygen2.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04_Roxygen2.Rmd deleted file mode 100644 index a0a2eb4cfe8..00000000000 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/04_Roxygen2.Rmd +++ /dev/null @@ -1,113 +0,0 @@ -### Roxygen2 {#developer-roxygen} - -This is the standard method of documentation used in PEcAn development, it provides inline documentation similar to doxygen. - -#### Canonical references: - -* Must Read: R package development by Hadley Wickham: - * [**Object Documentation**](http://r-pkgs.had.co.nz/man.html) - * [Package Metadata](http://r-pkgs.had.co.nz/description.html) -* Roxygen2 Documentation - * [Roxygen2 Package Documentation](http://cran.r-project.org/web/packages/roxygen2/roxygen2.pdf) - * [GitHub](https://github.com/klutometis/roxygen) - -#### Basic Roxygen2 instructions: - -Section headers link to "Writing R extensions" which provides in-depth documentation. This is provided as an overview and quick reference. - -#### [Tags](http://cran.r-project.org/doc/manuals/R-exts.html#Documenting-functions) - -* tags are preceeded by `##'` -* tags required by R: -** `title` tag is required, along with actual title -** `param` one for each parameter, should be defined -** `return` must state what function returns (or nothing, if something occurs as a side effect -* tags strongly suggested for most functions: -** `author` -** `examples` can be similar to test cases. -* optional tags: -** `export` required if function is used by another package -** `import` can import a required function from another package (if package is not loaded or other function is not exported) -** `seealso` suggests related functions. 
These can be linked using `\code{link{}}` - -#### Text markup - -##### [Formatting](http://cran.r-project.org/doc/manuals/R-exts.html#Marking-text) - -* `\bold{}` -* `\emph{}` italics - - -##### [Links](http://cran.r-project.org/doc/manuals/R-exts.html#Marking-text) - -* `\url{www.url.com}` or `\href{url}{text}` for links -* `\code{\link{thisfn}}` links to function "thisfn" in the same package -* `\code{\link{foo::thatfn}}` links to function "thatfn" in package "foo" -* `\pkg{package_name}` - -##### [Math](http://cran.r-project.org/doc/manuals/R-exts.html#Mathematics) - -* `\eqn{a+b=c}` uses LaTex to format an inline equation -* `\deqn{a+b=c}` uses LaTex to format displayed equation -* `\deqn{latex}{ascii}` and `\eqn{latex}{ascii}` can be used to provide different versions in latex and ascii. - -##### [Lists](http://cran.r-project.org/doc/manuals/R-exts.html#Lists-and-tables) - -``` -\enumerate{ -\item A database consists of one or more records, each with one or -more named fields. -\item Regular lines start with a non-whitespace character. -\item Records are separated by one or more empty lines. -} -\itemize and \enumerate commands may be nested. -``` - -##### "Tables":http://cran.r-project.org/doc/manuals/R-exts.html#Lists-and-tables - -``` -\tabular{rlll}{ -[,1] \tab Ozone \tab numeric \tab Ozone (ppb)\cr -[,2] \tab Solar.R \tab numeric \tab Solar R (lang)\cr -[,3] \tab Wind \tab numeric \tab Wind (mph)\cr -[,4] \tab Temp \tab numeric \tab Temperature (degrees F)\cr -[,5] \tab Month \tab numeric \tab Month (1--12)\cr -[,6] \tab Day \tab numeric \tab Day of month (1--31) -} -``` - -#### Example - -Here is an example documented function, myfun - -``` -##' My function adds three numbers -##' -##' A great function for demonstrating Roxygen documentation -##' @param a numeric -##' @param b numeric -##' @param c numeric -##' @return d, numeric sum of a + b + c -##' @export -##' @author David LeBauer -##' @examples -##' myfun(1,2,3) -##' \dontrun{myfun(NULL)} -myfun <- function(a, b, c){ - d <- a + b + c - return(d) -} -``` - -In emacs, with the cursor inside the function, the keybinding C-x O will generate an outline or update the Roxygen2 documentation. - - -#### Updating documentation - -* After adding documentation run the following command (replacing common with the name of the folder you want to update): -** In R using devtools to call roxygenize: - -```r -require(devtools) -document("common") -``` diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd deleted file mode 100755 index e5b4000da5e..00000000000 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/05_Testing.Rmd +++ /dev/null @@ -1,409 +0,0 @@ -### Testing {#developer-testing} - -PEcAn uses the testthat package developed by Hadley Wickham. Hadley has -written instructions for using this package in his -[Testing](http://adv-r.had.co.nz/Testing.html) chapter. 
- -#### Rationale - -* makes development easier -* provides working documentation of expected functionality -* saves time by allowing computer to take over error checking once a - test has been made -* improves code quality -* Further reading: [Aruliah et al 2012 Best Practices for Scientific Computing](http://arxiv.org/pdf/1210.0530v3.pdf) - -#### Tests makes development easier and less error prone - -Testing makes it easier to develop by organizing everything you are -already doing anyway - but integrating it into the testing and -documentation. With a codebase like PEcAn, it is often difficult to get -started. You have to figure out - -* what was I doing yesterday? -* what do I want to do today? -* what existing functions do I need to edit? -* what are the arguments to these functions (and what are examples of - valid arguments) -* what packages are affected -* where is a logical place to put files used in testing - -#### Quick Start: - -* decide what you want to do today -* identify the issue in github (if none exists, create one) -* to work on issue 99, create a new branch called “github99” or some descriptive name… Today we will enable an -existing function, `make.cheas` to make `goat.cheddar`. We will know -that we are done by the color and taste. - ``` - git branch goat-cheddar - git checkout goat-cheddar - ``` -* open existing (or create new) file in `inst/tests/`. If working on code in "myfunction" or a set of functions in "R/myfile.R", the file should be named accordingly, e.g. "inst/tests/test.myfile.R" -* if you are lucky, the function has already been tested and has some examples. -* if not, you may need to create a minimal example, often requiring a settings file. The default settings file can be obtained in this way: - ```r - settings <- read.settings(system.file("extdata/test.settings.xml", package = "PEcAn.utils")) - ``` -* write what you want to do - ``` - test_that("make.cheas can make cheese",{ - goat.cheddar <- make.cheas(source = 'goat', style = 'cheddar') - expect_equal(color(goat.cheddar), "orange") - expect_is(object = goat.cheddar, class = "cheese") - expect_true(all(c("sharp", "creamy") %in% taste(goat.cheddar))) - } - ``` -* now edit the goat.cheddar function until it makes savory, creamy, orange cheese. -* commit often -* update documentation and test - ```{r, eval = FALSE} - library(devtools) - document("mypkg") - test("mypkg") - ``` -* commit again -* when complete, merge, and push - ```{bash, eval = FALSE} - git commit -m "make.cheas makes goat.cheddar now" - git checkout master - git merge goat-cheddar - git push - ``` - -#### Test files - - -Many of PEcAn’s functions require inputs that are provided as data. -These can be in the `/data` or the `/inst/extdata` folders of a package. -Data that are not package specific should be placed in the PEcAn.all or -PEcAn.utils files. - -Some useful conventions: - -#### Settings - -* A generic settings can be found in the PEcAn.all package -```r -settings.xml <- system.file("pecan.biocro.xml", package = "PEcAn.BIOCRO") -settings <- read.settings(settings.xml) -``` - -* database settings can be specified, and tests run only if a connection is available - -We currently use the following database to run tests against; tests that require access to a database should check `db.exists()` and be skipped if it returns FALSE to avoid failed tests on systems that do not have the database installed. 
- -```r -settings$database <- list(userid = "bety", - passwd = "bety", - name = "bety", # database name - host = "localhost" # server name) -test_that(..., { - skip_if_not(db.exists(settings$database)) - ## write tests here -}) -``` - -* instructions for installing this are available on the [VM creation - wiki](VM-Creation.md) -* examples can be found in the PEcAn.DB package (`base/db/tests/testthat/`). - -* Model specific settings can go in the model-specific module, for -example: - -```r -settings.xml <- system.file("extdata/pecan.biocro.xml", package = "PEcAn.BIOCRO") -settings <- read.settings(settings.xml) -``` -* test-specific settings: - - settings text can be specified inline: - ``` - settings.text <- " - - nope ## allows bypass of checks in the read.settings functions - - - ebifarm.pavi - test/ - - - test/ - - bety - bety - localhost - bety - - " - settings <- read.settings(settings.text) - ``` - - values in settings can be updated: - ```r - settings <- read.settings(settings.text) - settings$outdir <- "/tmp" ## or any other settings - ``` - -#### Helper functions created to make testing easier - -* **tryl** returns FALSE if function gives error -* **temp.settings** creates temporary settings file -* **test.remote** returns TRUE if remote connection is available -* **db.exists** returns TRUE if connection to database is available - -#### When should I test? - -A test *should* be written for each of the following situations: - -1. Each bug should get a regression test. - * The first step in handling a bug is to write code that reproduces the error - * This code becomes the test - * most important when error could re-appear - * essential when error silently produces invalid results - -2. Every time a (non-trivial) function is created or edited - * Write tests that indicate how the function should perform - * example: `expect_equal(sum(1,1), 2)` indicates that the sum - function should take the sum of its arguments - - * Write tests for cases under which the function should throw an - error - * example: `expect_error(sum("foo"))` - * better : `expect_error(sum("foo"), "invalid 'type' (character)")` - -#### What types of testing are important to understand? - - -#### Unit Testing / Test Driven Development - -Tests are only as good as the test - -1. write test -2. write code - -#### Regression Testing - -When a bug is found, - -1. write a test that finds the bug (the minimum test required to make - the test fail) -2. fix the bug -3. bug is fixed when test passes - -#### How should I test in R? The testthat package. - - -tests are found in `~/pecan//inst/tests`, for example -`utils/inst/tests/` - -See attached file and -[http://r-pkgs.had.co.nz/tests.html](http://r-pkgs.had.co.nz/tests.html) -for details on how to use the testthat package. 
- -##### List of Expectations - -|Full |Abbreviation| -|---|----| -|expect_that(x, is_true()) |expect_true(x)| -|expect_that(x, is_false()) |expect_false(x)| -|expect_that(x, is_a(y)) |expect_is(x, y)| -|expect_that(x, equals(y)) |expect_equal(x, y)| -|expect_that(x, is_equivalent_to(y)) |expect_equivalent(x, y)| -|expect_that(x, is_identical_to(y)) |expect_identical(x, y)| -|expect_that(x, matches(y)) |expect_matches(x, y)| -|expect_that(x, prints_text(y)) |expect_output(x, y)| -|expect_that(x, shows_message(y)) |expect_message(x, y)| -|expect_that(x, gives_warning(y)) |expect_warning(x, y)| -|expect_that(x, throws_error(y)) |expect_error(x, y)| - -##### How to run tests - -add the following to “pecan/tests/testthat.R” - -```r -library(testthat) -library(mypackage) - -test_check("mypackage") -``` - -#### basic use of the testthat package - -Here is an example of tests (these should be placed in -`/tests/testthat/test-.R`: - -```r -test_that("mathematical operators plus and minus work as expected",{ - expect_equal(sum(1,1), 2) - expect_equal(sum(-1,-1), -2) - expect_equal(sum(1,NA), NA) - expect_error(sum("cat")) - set.seed(0) - expect_equal(sum(matrix(1:100)), sum(data.frame(1:100))) -}) - -test_that("different testing functions work, giving excuse to demonstrate",{ - expect_identical(1, 1) - expect_identical(numeric(1), integer(1)) - expect_equivalent(numeric(1), integer(1)) - expect_warning(mean('1')) - expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA")) - expect_warning(mean('1'), "argument is not numeric or logical: returning NA") - expect_message(message("a"), "a") -}) -``` - - -##### Script testing - -It is useful to add tests to a script during development. This allows -you to test that the code is doing what you expect it to do. - -```r -* here is a fake script using the iris data set - -test_that("the iris data set has the same basic features as before",{ - expect_equal(dim(iris), c(150,5)) - expect_that(iris$Sepal.Length, is_a("numeric")) - expect_is(iris$Sepal.Length, "numeric")#equivalent to prev. line - expect_is(iris$Species, "factor") -}) - -iris.color <- data.frame(Species = c("setosa", "versicolor", "virginica"), - color = c("pink", "blue", "orange")) - -newiris <- merge(iris, iris.color) -iris.model <- lm(Petal.Length ~ color, data = newiris) - -test_that("changes to Iris code occurred as expected",{ - expect_that(dim(newiris), equals(c(150, 6))) - expect_that(unique(newiris$color), - is_identical_to(unique(iris.color$color))) - expect_equivalent(iris.model$coefficients["(Intercept)"], 4.26) -}) -``` - - -##### Function testing - -Testing of a new function, `as.sequence`. The function and documentation -are in source:R/utils.R and the tests are in source:tests/test.utils.R. - -Recently, I made the function `as.sequence` to turn any vector into a -sequence, with custom handling of NA’s: - - -```r -function(x, na.rm = TRUE){ - x2 <- as.integer(factor(x, unique(x))) - if(all(is.na(x2))){ - x2 <- rep(1, length(x2)) - } - if(na.rm == TRUE){ - x2[is.na(x2)] <- max(x2, na.rm = TRUE) + 1 - } - return(x2) -} - -``` - - -The next step was to add documentation and test. Many people find it -more efficient to write tests before writing the function. This is true, -but it also requires more discipline. I wrote these tests to handle the -variety of cases that I had observed. - -As currently used, the function is exposed to a fairly restricted set of -options - results of downloads from the database and transformations. 
- -```r -test_that(“as.sequence works”;{ - expect_identical(as.sequence(c(“a”, “b”)), 1:2) - expect_identical(as.sequence(c(“a”, NA)), 1:2) - expect_equal(as.sequence(c(“a”, NA), na.rm = FALSE), c(1,NA)) - expect_equal(as.sequence(c(NA,NA)), c(1,1)) -}) -``` - -#### Testing the Shiny Server - -Shiny can be difficult to debug because, when run as a web service, the R output is hidden in system log files that are hard to find and read. -One useful approach to debugging is to use port forwarding, as follows. - -First, on the remote machine (including the VM), make sure R's working directory is set to the directory of the Shiny app (e.g., `setwd(/path/to/pecan/shiny/WorkflowPlots)`, or just open the app as an RStudio project). -Then, in the R console, run the app as: - -``` -shiny::runApp(port = XXXX) -# E.g. shiny::runApp(port = 5638) -``` - -Then, on your local machine, open a terminal and run the following command, matching `XXXX` to the port above and `YYYY` to any unused port on your local machine (any 4-digit number should work). - -``` -ssh -L YYYY:localhost:XXXX -# E.g., for the PEcAn VM, given the above port: -# ssh -L 5639:localhost:5638 carya@localhost -p 6422 -``` - -Now, in a web browser on your local machine, browse to `localhost:YYYY` (e.g., `localhost:5639`) to run whatever app you started with `shiny::runApp` in the previous step. -All of the output should display in the R console where the `shiny::runApp` command was executed. -Note that this includes any `print`, `message`, `logger.*`, etc. statements in your Shiny app. - -If the Shiny app hits an R error, the backtrace should include a line like `Hit error at of server.R#LXX` -- that `XX` being a line number that you can use to track down the error. -To return from the error to a normal R prompt, hit `-C` (alternatively, the "Stop" button in RStudio). -To restart the app, run `shiny::runApp(port = XXXX)` again (keeping the same port). - -Note that Shiny runs any code in the `pecan/shiny/` directory at the moment the app is launched. -So, any changes you make to the code in `server.R` and `ui.R` or scripts loaded therein will take effect the next time the app is started. - -If for whatever reason this doesn't work with RStudio, you can always run R from the command line. -Also, note that the ability to forward ports (`ssh -L`) may depend on the `ssh` configuration of your remote machine. -These instructions have been tested on the PEcAn VM (v.1.5.2+). - -#### Testing PEcAn in bulk - -The [`base/workflow/inst/batch_run.R`][batch_run] script can be used to quickly run a series of user-specified integration tests without having to create a bunch of XML files. -This script is powered by the [`PEcAn.workflow::create_execute_test_xml()`][xml_fun] function, -which takes as input information about the model, meteorology driver, site ID, run dates, and others, -uses these to construct a PEcAn XML file, -and then uses the `system()` command to run a workflow with that XML. - -If run without arguments, `batch_run.R` will try to run the model configurations specified in the [`base/workflow/inst/default_tests.csv`][default_tests] file. -This file contains a CSV table with the following columns: - -- `model` -- The name of the model (`models.model_name` column in BETY) -- `revision` -- The version of the model (`models.revision` column in BETY) -- `met` -- The name of the meteorology driver source -- `site_id` -- The numeric site ID for the model run (`sites.site_id`) -- `pft` -- The name of the plant functional type to run. 
If `NA`, the script will use the first PFT associated with the model. -- `start_date`, `end_date` -- The start and end dates for the model run, respectively. These should be formatted according to ISO standard (`YYYY-MM-DD`, e.g. `2010-03-16`) -- `sensitivity` -- Whether or not to run the sensitivity analysis. `TRUE` means run it, `FALSE` means do not. -- `ensemble_size` -- The number of ensemble members to run. Set this to 1 to do a single run at the trait median. -- `comment` -- An string providing some user-friendly information about the run. - -The `batch_run.R` script will run a workflow for every row in the input table, sequentially (for now; eventually, it will try to run them in parallel), -and at the end of each workflow, will perform some basic checks, including whether or not the workflow finished and if the model produced any output. -These results are summarized in a CSV table (by default, a file called `test_result_table.csv`), with all of the columns as the input test CSV plus the following: - -- `outdir` -- Absolute path to the workflow directory. -- `workflow_complete` -- Whether or not the PEcAn workflow completed. Note that this is a relatively low bar -- PEcAn workflows can complete without having run the model or finished some other steps. -- `has_jobsh` -- Whether or not PEcAn was able to write the model's `job.sh` script. This is a good indication of whether or not the model's `write.configs` step was successful, and may be useful for separating model configuration errors from model execution errors. -- `model_output_raw` -- Whether or not the model produced any output files at all. This is just a check to see of the `/out` directory is empty or not. Note that some models may produce logfiles or similar artifacts as soon as they are executed, whether or not they ran even a single timestep, so this is not an indication of model success. -- `model_output_processed` -- Whether or not PEcAn was able to post-process any model output. This test just sees if there are any files of the form `YYYY.nc` (e.g. `1992.nc`) in the `/out` directory. - -Right now, these checks are not particularly robust or comprehensive, but they should be sufficient for catching common errors. -Development of more, better tests is ongoing. - -The `batch_run.R` script can take the following command-line arguments: - -- `--help` -- Prints a help message about the script's arguments -- `--dbfiles=` -- The path to the PEcAn `dbfiles` folder. The default value is `~/output/dbfiles`, based on the file structure of the PEcAn VM. Note that for this and all other paths, if a relative path is given, it is assumed to be relative to the current working directory, i.e. the directory from which the script was called. -- `--table=` -- Path to an alternate test table. The default is the `base/workflow/inst/default_tests.csv` file. See preceding paragraph for a description of the format. -- `--userid=` -- The numeric user ID for registering the workflow. The default value is 99000000002, corresponding to the guest user on the PEcAn VM. -- `--outdir=` -- Path to a directory (which will be created if it doesn't exist) for storing the PEcAn workflow outputs. Default is `batch_test_output` (in the current working directory). -- `--pecandir=` -- Path to the PEcAn source code root directory. Default is the current working directory. -- `--outfile=` -- Full path (including file name) of the CSV file summarizing the results of the runs. Default is `test_result_table.csv`. 
The format of the output
-
-[batch_run]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/batch_run.R
-[default_tests]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/default_tests.csv
-[xml_fun]:
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd
new file mode 100755
index 00000000000..ea298e7c179
--- /dev/null
+++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd
@@ -0,0 +1,121 @@
+## Testing {#developer-testing}
+
+PEcAn uses two different kinds of testing -- [unit tests](#developer-testing-unit) and [integration tests](#developer-testing-integration).
+
+### Unit testing {#developer-testing-unit}
+
+Unit tests are short (<1 minute runtime) tests of functionality of specific functions.
+Ideally, every function should have at least one unit test associated with it.
+
+A unit test *should* be written for each of the following situations:
+
+1. Each bug should get a regression test.
+    * The first step in handling a bug is to write code that reproduces the error
+    * This code becomes the test
+    * most important when error could re-appear
+    * essential when error silently produces invalid results
+
+2. Every time a (non-trivial) function is created or edited
+    * Write tests that indicate how the function should perform
+        * example: `expect_equal(sum(1,1), 2)` indicates that the sum
+   function should take the sum of its arguments
+
+    * Write tests for cases under which the function should throw an
+   error
+        * example: `expect_error(sum("foo"))`
+        * better : `expect_error(sum("foo"), "invalid 'type' (character)")`
+3. Any functionality that you would like to protect over the long term. Functionality that is not tested is more likely to be lost.
+
+PEcAn uses the `testthat` package for unit testing.
+A general overview is provided in the ["Testing"](http://adv-r.had.co.nz/Testing.html) chapter of Hadley Wickham's book "R packages".
+Another useful resource is the `testthat` [package documentation website](https://testthat.r-lib.org/).
+See also our [`testthat` appendix](#appendix-testthat).
+Below is a lightning introduction to unit testing with `testthat`.
+
+Each package's unit tests live in `.R` scripts in the folder `/tests/testthat`.
+In addition, a `testthat`-enabled package has a file called `/tests/testthat.R` with the following contents:
+
+```r
+library(testthat)
+library()
+
+test_check("")
+```
+
+Tests should be placed in `/tests/testthat/test-.R`, and look like the following:
+
+```r
+context("Mathematical operators")
+
+test_that("mathematical operators plus and minus work as expected",{
+  sum1 <- sum(1, 1)
+  expect_equal(sum1, 2)
+  sum2 <- sum(-1, -1)
+  expect_equal(sum2, -2)
+  expect_equal(sum(1,NA), NA)
+  expect_error(sum("cat"))
+  set.seed(0)
+  expect_equal(sum(matrix(1:100)), sum(data.frame(1:100)))
+})
+
+test_that("different testing functions work, giving excuse to demonstrate",{
+  expect_identical(1, 1)
+  expect_identical(numeric(1), integer(1))
+  expect_equivalent(numeric(1), integer(1))
+  expect_warning(mean('1'))
+  expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA"))
+  expect_warning(mean('1'), "argument is not numeric or logical: returning NA")
+  expect_message(message("a"), "a")
+})
+```
+
+### Integration testing {#developer-testing-integration}
+
+Integration tests consist of running the PEcAn workflow in full.
+
+One way to do integration tests is to manually run workflows for a given version of PEcAn, either through the web interface or by manually creating a [`pecan.xml` file](#pecanXML).
+Such manual tests are an important part of checking PEcAn functionality.
+
+Alternatively, the [`base/workflow/inst/batch_run.R`][batch_run] script can be used to quickly run a series of user-specified integration tests without having to create a bunch of XML files.
+This script is powered by the [`PEcAn.workflow::create_execute_test_xml()`][xml_fun] function,
+which takes as input information about the model, meteorology driver, site ID, run dates, and others,
+uses these to construct a PEcAn XML file,
+and then uses the `system()` command to run a workflow with that XML.
+
+If run without arguments, `batch_run.R` will try to run the model configurations specified in the [`base/workflow/inst/default_tests.csv`][default_tests] file.
+This file contains a CSV table with the following columns:
+
+- `model` -- The name of the model (`models.model_name` column in BETY)
+- `revision` -- The version of the model (`models.revision` column in BETY)
+- `met` -- The name of the meteorology driver source
+- `site_id` -- The numeric site ID for the model run (`sites.site_id`)
+- `pft` -- The name of the plant functional type to run. If `NA`, the script will use the first PFT associated with the model.
+- `start_date`, `end_date` -- The start and end dates for the model run, respectively. These should be formatted according to ISO standard (`YYYY-MM-DD`, e.g. `2010-03-16`)
+- `sensitivity` -- Whether or not to run the sensitivity analysis. `TRUE` means run it, `FALSE` means do not.
+- `ensemble_size` -- The number of ensemble members to run. Set this to 1 to do a single run at the trait median.
+- `comment` -- A string providing some user-friendly information about the run.
+
+The `batch_run.R` script will run a workflow for every row in the input table, sequentially (for now; eventually, it will try to run them in parallel),
+and at the end of each workflow, will perform some basic checks, including whether or not the workflow finished and if the model produced any output.
+These results are summarized in a CSV table (by default, a file called `test_result_table.csv`), with all of the columns as the input test CSV plus the following:
+
+- `outdir` -- Absolute path to the workflow directory.
+- `workflow_complete` -- Whether or not the PEcAn workflow completed. Note that this is a relatively low bar -- PEcAn workflows can complete without having run the model or finished some other steps.
+- `has_jobsh` -- Whether or not PEcAn was able to write the model's `job.sh` script. This is a good indication of whether or not the model's `write.configs` step was successful, and may be useful for separating model configuration errors from model execution errors.
+- `model_output_raw` -- Whether or not the model produced any output files at all. This is just a check to see if the `/out` directory is empty or not. Note that some models may produce logfiles or similar artifacts as soon as they are executed, whether or not they ran even a single timestep, so this is not an indication of model success.
+- `model_output_processed` -- Whether or not PEcAn was able to post-process any model output. This test just sees if there are any files of the form `YYYY.nc` (e.g. `1992.nc`) in the `/out` directory.
+
+Right now, these checks are not particularly robust or comprehensive, but they should be sufficient for catching common errors.
+Development of more and better tests is ongoing.
+
+The `batch_run.R` script can take the following command-line arguments:
+
+- `--help` -- Prints a help message about the script's arguments
+- `--dbfiles=<path>` -- The path to the PEcAn `dbfiles` folder. The default value is `~/output/dbfiles`, based on the file structure of the PEcAn VM. Note that for this and all other paths, if a relative path is given, it is assumed to be relative to the current working directory, i.e. the directory from which the script was called.
+- `--table=<path>` -- Path to an alternate test table. The default is the `base/workflow/inst/default_tests.csv` file. See preceding paragraph for a description of the format.
+- `--userid=<id>` -- The numeric user ID for registering the workflow. The default value is 99000000002, corresponding to the guest user on the PEcAn VM.
+- `--outdir=<path>` -- Path to a directory (which will be created if it doesn't exist) for storing the PEcAn workflow outputs. Default is `batch_test_output` (in the current working directory).
+- `--pecandir=<path>` -- Path to the PEcAn source code root directory. Default is the current working directory.
+- `--outfile=<path>` -- Full path (including file name) of the CSV file summarizing the results of the runs. Default is `test_result_table.csv`.
+
+[batch_run]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/batch_run.R
+[default_tests]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/default_tests.csv
+[xml_fun]:
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04_compile_PEcAn.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/05-compile-pecan.Rmd
similarity index 100%
rename from book_source/02_demos_tutorials_workflows/05_developer_workflows/04_compile_PEcAn.Rmd
rename to book_source/02_demos_tutorials_workflows/05_developer_workflows/05-compile-pecan.Rmd
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/05_directory_structure.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/06-directory-structure.Rmd
similarity index 100%
rename from book_source/02_demos_tutorials_workflows/05_developer_workflows/05_directory_structure.Rmd
rename to book_source/02_demos_tutorials_workflows/05_developer_workflows/06-directory-structure.Rmd
diff --git a/book_source/03_topical_pages/10_shiny/shiny.Rmd b/book_source/03_topical_pages/10_shiny/shiny.Rmd
index abdcc91df66..f805f11ae1b 100644
--- a/book_source/03_topical_pages/10_shiny/shiny.Rmd
+++ b/book_source/03_topical_pages/10_shiny/shiny.Rmd
@@ -1,6 +1,42 @@
-# SHINY
+# Shiny
 
-### Debugging Shiny Apps
+## Testing the Shiny Server
+
+Shiny can be difficult to debug because, when run as a web service, the R output is hidden in system log files that are hard to find and read.
+One useful approach to debugging is to use port forwarding, as follows.
+
+First, on the remote machine (including the VM), make sure R's working directory is set to the directory of the Shiny app (e.g., `setwd("/path/to/pecan/shiny/WorkflowPlots")`, or just open the app as an RStudio project).
+Then, in the R console, run the app as:
+
+```
+shiny::runApp(port = XXXX)
+# E.g. shiny::runApp(port = 5638)
+```
+
+Then, on your local machine, open a terminal and run the following command, matching `XXXX` to the port above and `YYYY` to any unused port on your local machine (any 4-digit number should work).
+
+```
+ssh -L YYYY:localhost:XXXX <user>@<remote>
+# E.g., for the PEcAn VM, given the above port:
+# ssh -L 5639:localhost:5638 carya@localhost -p 6422
+```
+
+Now, in a web browser on your local machine, browse to `localhost:YYYY` (e.g., `localhost:5639`) to run whatever app you started with `shiny::runApp` in the previous step.
+All of the output should display in the R console where the `shiny::runApp` command was executed.
+Note that this includes any `print`, `message`, `logger.*`, etc. statements in your Shiny app.
+
+If the Shiny app hits an R error, the backtrace should include a line like `Hit error at <call> of server.R#LXX`, where `XX` is a line number that you can use to track down the error.
+To return from the error to a normal R prompt, hit `Ctrl-C` (alternatively, the "Stop" button in RStudio).
+To restart the app, run `shiny::runApp(port = XXXX)` again (keeping the same port).
+
+Note that Shiny runs any code in the `pecan/shiny/<app>` directory at the moment the app is launched.
+So, any changes you make to the code in `server.R` and `ui.R` or scripts loaded therein will take effect the next time the app is started.
+
+If for whatever reason this doesn't work with RStudio, you can always run R from the command line.
+Also, note that the ability to forward ports (`ssh -L`) may depend on the `ssh` configuration of your remote machine.
+These instructions have been tested on the PEcAn VM (v.1.5.2+).
+
+## Debugging Shiny Apps
 
 When developing shiny apps you can run the application from rstudio and place breakpoints in the code. To do this you will need to do the following steps first (already done on the VM) before starting rstudio:
 - echo "options(shiny.port = 6438)" >> ${HOME}/.Rprofile
@@ -10,11 +46,11 @@ Next you will need to create a tunnel for port 6438 to the VM, which will be use
 
 Now you can from rstudio run your application using `shiny::runApp()` and it will show the output from the application in your console. You can now place breakpoints and evaluate the output.
 
-#### Checking Log Files
+## Checking Log Files
 To create Log files on the VM, execute the following:
 ```
 sudo -s
 echo "preserve_logs true;" >> /etc/shiny-server/shiny-server.conf
 service shiny-server restart
 ```
-Then within the directory `/var/log/shiny-server` you will see log files for your specific shiny apps.
\ No newline at end of file
+Then within the directory `/var/log/shiny-server` you will see log files for your specific shiny apps.
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd b/book_source/04_appendix/04-package-dependencies.Rmd
similarity index 86%
rename from book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd
rename to book_source/04_appendix/04-package-dependencies.Rmd
index 8af3d2b8c9f..879d5465791 100644
--- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/01_Coding_style.Rmd
+++ b/book_source/04_appendix/04-package-dependencies.Rmd
@@ -1,69 +1,10 @@
-### Coding Style {#developer-codestyle}
+# Package Dependencies {#package-dependencies}
 
-Consistent coding style improves readability and reduces errors in
-shared code.
-
-R does not have an official style guide, but Hadley Wickham provides one that is well
-thought out and widely adopted. [Advanced R: Coding Style](http://r-pkgs.had.co.nz/style.html).
-
-Both the Wickham text and this page are derived from [Google's R Style Guide](https://google.github.io/styleguide/Rguide.xml).
- -#### Use Roxygen2 documentation - -This is the standard method of documentation used in PEcAn development, -it provides inline documentation similar to doxygen. Even trivial -functions should be documented. - -See [Roxygen2](#developer-roxygen). - -#### Write your name at the top - -Any function that you create or make a meaningful contribution to should -have your name listed after the author tag in the function documentation. - -#### Use testthat testing package - -See [Unit_Testing](#developer-testing) for instructions, and [Advanced R: Tests](http://r-pkgs.had.co.nz/tests.html). - -* tests provide support for documentation - they define what a function is (and is not) expected to do -* all functions need tests to ensure basic functionality is maintained during development. -* all bugs should have a test that reproduces the bug, and the test should pass before bug is closed - - -#### Don't use shortcuts - -R provides many shortcuts that are useful when coding interactively, or for writing scripts. However, these can make code more difficult to read and can cause problems when written into packages. - -#### Function Names (`verb.noun`) - -Following convention established in PEcAn 0.1, we use the all lowercase with periods to separate words. They should generally have a `verb.noun` format, such as `query.traits`, `get.samples`, etc. - -#### File Names - -File names should end in `.R`, `.Rdata`, or `.rds` (as appropriate) and should be meaningful, e.g. named after the primary functions that they contain. There should be a separate file for each major high-level function to aid in identifying the contents of files in a directory. - -#### Use "<-" as an assignment operator - -Because most R code uses <- (except where = is required), we will use <-. -`=` is reserved for function arguments - -#### Use Spaces - -* around all binary operators (`=`, `+`, `-`, `<-`, etc.). -* after but not before a comma - -#### Use curly braces - -The option to omit curly braces is another shortcut that makes code easier to write but harder to read and more prone to error. - - -#### Package Dependencies - -##### Executive Summary: What to usually do +## Executive Summary: What to usually do When you're editing one PEcAn package and want to use a function from any other R package (including other PEcAn packages), the standard method is to add the other package to the `Imports:` field of your DESCRIPTION file, spell the function in fully namespaced form (`pkg::function()`) everywhere you call it, and be done. There are a few cases where this isn't enough, but they're rarer than you think. The rest of this section mostly deals with the exceptions to this rule and why not to use them when they can be avoided. -##### Big Picture: What's possible to do +## Big Picture: What's possible to do To make one PEcAn package use functionality from another R package (including other PEcAn packages), you must do at least one and up to four things in your own package. @@ -75,7 +16,7 @@ To make one PEcAn package use functionality from another R package (including ot The advice below about each step is written specifically for PEcAn, although much of it holds for R packages in general. For more details about working with dependencies, start with Hadley Wickham's [R packages](http://r-pkgs.had.co.nz/description.html#dependencies) and treat the CRAN team's [Writing R Extensions](https://cran.r-project.org/doc/manuals/R-exts.html) as the final authority. 
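+
+As a concrete sketch of this standard pattern (the function and column names here are hypothetical), a package that lists `dplyr` and `magrittr` under `Imports:` in its DESCRIPTION might contain:
+
+```r
+#' Summarize trait values (illustrative example only)
+#' @importFrom magrittr %>%
+summarize_traits <- function(dat) {
+  # fully namespaced call; no library() or require() needed
+  dat %>% dplyr::summarise(mean_value = mean(value, na.rm = TRUE))
+}
+```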
-##### Declaring Dependencies: Depends, Suggests, Imports
+## Declaring Dependencies: Depends, Suggests, Imports
 
 List all dependencies in the DESCRIPTION file. Every package that is used by your package's code must appear in exactly one of the sections `Depends`, `Imports`, or `Suggests`.
 
@@ -99,7 +40,7 @@ Please list packages in alphabetical order within each section. R doesn't care a
 
 It is often tempting to move a dependency from Imports to Suggests because it is a hassle to install (large, hard to compile, no longer available from CRAN, currently broken on GitHub, etc), in the hopes that this will isolate the rest of PEcAn from the troublesome dependency. This helps for some cases, but fails for two very common ones: It does not reduce install time for CI builds, because all suggested packages need to be present when running full package checks (`R CMD check` or `devtools::check` or `make check`). It also does not prevent breakage when updating PEcAn via `make install`, because `devtools::install_deps` does not install suggested packages that are missing but does try to *upgrade* any that are already installed to the newest available version -- even if the installed version took ages to compile and would have worked just fine!
 
-##### Importing Functions: Use Roxygen
+## Importing Functions: Use Roxygen
 
 PEcAn style is to import very few functions and instead use fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file.
 
@@ -123,7 +64,7 @@ Please do *not* import entire package namespaces (`#' @import pkg`); it increase
 
 A special note about importing functions from the [tidyverse](https://tidyverse.org): Be sure to import from the package(s) that actually contain the functions you want to use, e.g. `Imports: dplyr, magrittr, purrr` / `@importFrom magrittr %>%` / `purrr::map(...)`, not `Imports: tidyverse` / `@importFrom tidyverse %>%` / `tidyverse::map(...)`. The package named `tidyverse` is just an interactive shortcut that loads the whole collection of constituent packages; it doesn't export any functions in its own namespace and therefore importing it into your package doesn't make them available.
 
-##### Loading Code: Don't... But Use `requireNamespace` When You Do
+## Loading Code: Don't... But Use `requireNamespace` When You Do
 
 The very short version of this section: We want to maintain clear separation between the [package's namespace](http://r-pkgs.had.co.nz/namespace.html) (which we control and want to keep predictable) and the global namespace (which the user controls, might change in ways we have no control over, and will be justifiably angry if we change it in ways they were not expecting). Therefore, avoid attaching packages to the search path (so no `Depends` and no `library()` or `require()` inside functions), and do not explicitly load other namespaces if you can help it.
 
@@ -155,7 +96,7 @@ Note that scripts in `inst/` are considered to be sample code rather than part o
 
 If you think your package needs to load or attach code for any reason, please note why in your pull request description and be prepared for questions about it during code review.
 If your reviewers can think of an alternate approach that avoids loading or attaching, they will likely ask you to use it even if it creates extra work for you.
 
-##### Installing dependencies: Let the machines do it
+## Installing dependencies: Let the machines do it
 
 In most cases you won't need to think about how dependencies get installed -- just declare them in your package's DESCRIPTION and the installation will be handled automatically by R and devtools during the build process. In PEcAn packages, the rare cases where this isn't enough will probably fall into one of two categories.
 
diff --git a/book_source/04_appendix/05-testthat.Rmd b/book_source/04_appendix/05-testthat.Rmd
new file mode 100644
index 00000000000..fae9a3f1b8e
--- /dev/null
+++ b/book_source/04_appendix/05-testthat.Rmd
@@ -0,0 +1,134 @@
+# Testing with the `testthat` package {#appendix-testthat}
+
+Tests are found in `<package>/tests/testthat/` (for example, `base/utils/inst/tests/`)
+
+See
+[http://r-pkgs.had.co.nz/tests.html](http://r-pkgs.had.co.nz/tests.html)
+for details on how to use the testthat package.
+
+## List of Expectations
+
+|Full |Abbreviation|
+|---|----|
+|expect_that(x, is_true()) |expect_true(x)|
+|expect_that(x, is_false()) |expect_false(x)|
+|expect_that(x, is_a(y)) |expect_is(x, y)|
+|expect_that(x, equals(y)) |expect_equal(x, y)|
+|expect_that(x, is_equivalent_to(y)) |expect_equivalent(x, y)|
+|expect_that(x, is_identical_to(y)) |expect_identical(x, y)|
+|expect_that(x, matches(y)) |expect_matches(x, y)|
+|expect_that(x, prints_text(y)) |expect_output(x, y)|
+|expect_that(x, shows_message(y)) |expect_message(x, y)|
+|expect_that(x, gives_warning(y)) |expect_warning(x, y)|
+|expect_that(x, throws_error(y)) |expect_error(x, y)|
+
+## Basic use of the `testthat` package
+
+Create a file called `<package>/tests/testthat.R` with the following contents:
+
+```r
+library(testthat)
+library(mypackage)
+
+test_check("mypackage")
+```
+
+Tests should be placed in `<package>/tests/testthat/test-<sourcefilename>.R`, and look like the following:
+
+```r
+test_that("mathematical operators plus and minus work as expected",{
+  expect_equal(sum(1,1), 2)
+  expect_equal(sum(-1,-1), -2)
+  expect_equal(sum(1,NA), NA)
+  expect_error(sum("cat"))
+  set.seed(0)
+  expect_equal(sum(matrix(1:100)), sum(data.frame(1:100)))
+})
+
+test_that("different testing functions work, giving excuse to demonstrate",{
+  expect_identical(1, 1)
+  expect_identical(numeric(1), integer(1))
+  expect_equivalent(numeric(1), integer(1))
+  expect_warning(mean('1'))
+  expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA"))
+  expect_warning(mean('1'), "argument is not numeric or logical: returning NA")
+  expect_message(message("a"), "a")
+})
+```
+
+## Data for tests
+
+Many of PEcAn’s functions require inputs that are provided as data.
+These can be in the `data` or the `inst/extdata` folders of a package.
+Data that are not package specific should be placed in the `PEcAn.all` (`base/all`) or
+`PEcAn.utils` (`base/utils`) packages.
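+
+For example (a sketch -- the object and file names here are hypothetical), a test can load a dataset shipped in a package's `data` folder with `data()`, and locate a raw file shipped in `inst/extdata` with `system.file()`:
+
+```r
+# lazy-loaded R object from the package's data/ folder (hypothetical name)
+data("example_traits", package = "PEcAn.utils")
+
+# full path to a raw file installed from inst/extdata (hypothetical name)
+path <- system.file("extdata", "example.csv", package = "PEcAn.utils")
+```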
+
+Some useful conventions:
+
+## Settings
+
+* A generic settings file can be found in the `PEcAn.all` package
+```r
+settings.xml <- system.file("pecan.biocro.xml", package = "PEcAn.BIOCRO")
+settings <- read.settings(settings.xml)
+```
+
+* database settings can be specified, and tests run only if a connection is available
+
+We currently use the following database to run tests against; tests that require access to a database should check `db.exists()` and be skipped if it returns FALSE to avoid failed tests on systems that do not have the database installed.
+
+```r
+settings$database <- list(userid = "bety",
+                          passwd = "bety",
+                          name = "bety",      # database name
+                          host = "localhost") # server name
+test_that(..., {
+  skip_if_not(db.exists(settings$database))
+  ## write tests here
+})
+```
+
+* instructions for installing this are available on the [VM creation
+  wiki](VM-Creation.md)
+* examples can be found in the PEcAn.DB package (`base/db/tests/testthat/`).
+
+* Model specific settings can go in the model-specific module, for
+example:
+
+```r
+settings.xml <- system.file("extdata/pecan.biocro.xml", package = "PEcAn.BIOCRO")
+settings <- read.settings(settings.xml)
+```
+* test-specific settings:
+    - settings text can be specified inline:
+      ```
+      settings.text <- "
+      <pecan>
+        <nocheck>nope</nocheck> ## allows bypass of checks in the read.settings functions
+        <pfts>
+          <pft>
+            <name>ebifarm.pavi</name>
+            <outdir>test/</outdir>
+          </pft>
+        </pfts>
+        <outdir>test/</outdir>
+        <database>
+          <userid>bety</userid>
+          <passwd>bety</passwd>
+          <location>localhost</location>
+          <name>bety</name>
+        </database>
+      </pecan>"
+      settings <- read.settings(settings.text)
+      ```
+    - values in settings can be updated:
+      ```r
+      settings <- read.settings(settings.text)
+      settings$outdir <- "/tmp" ## or any other settings
+      ```
+
+## Helper functions for unit tests
+
+* `PEcAn.utils::tryl` returns `FALSE` if function gives error
+* `PEcAn.utils::temp.settings` creates temporary settings file
+* `PEcAn.DB::db.exists` returns `TRUE` if connection to database is available
diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/06_devtools.Rmd b/book_source/04_appendix/06_devtools.Rmd
similarity index 92%
rename from book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/06_devtools.Rmd
rename to book_source/04_appendix/06_devtools.Rmd
index 0826dcb59a3..c2fc23ca246 100644
--- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/03_coding_practices/06_devtools.Rmd
+++ b/book_source/04_appendix/06_devtools.Rmd
@@ -1,9 +1,9 @@
-### `devtools` package {#developer-devtools}
+# `devtools` package {#developer-devtools}
 
 Provides functions to simplify development
 
 Documentation:
-[The R devtools packate](https://github.com/hadley/devtools)
+[The R devtools package](https://devtools.r-lib.org/)
 
 ```r
 load_all("pkg")
From 5ce39e9388b9e80b62b6f8cae37217797f270472 Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Wed, 11 Sep 2019 15:28:58 -0400
Subject: [PATCH 0366/2289] working on updating WCr SDA

---
 .../inst/WillowCreek/workflow.template.R      | 41 +++++++++++++++----
 modules/data.atmosphere/NAMESPACE             |  2 -
 2 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R
index 6351790beae..d3ff39b6bfb 100644
--- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R
+++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R
@@ -45,9 +45,7 @@ c(
   package = "PEcAn.assim.sequential")
 ))
 #reading xml
-settings <- read.settings(system.file("WillowCreek",
-                                      xmlTempName,
-                                      package ="PEcAn.assim.sequential" ))
+settings <-
 read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml")
 
 #connecting to DB
 con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE)
@@ -207,14 +205,41 @@ if (nodata) {
   obs.cov <- obs.cov %>% map(function(x) return(NA))
 }
 
-#---------------------- Copy the last SDA simulation outputs using restart.path
+# --------------------------------------------------------------------------------------------------
+#--------------------------------- Restart -------------------------------------
+# --------------------------------------------------------------------------------------------------
+
+#@Hamze - should we add an if statement here for the times that we don't want to copy the path?
+
+ if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F)
+
+ file.copy(from= file.path(restart.path, "SDA", "sda.output.Rdata"),
+           to = file.path(settings$outdir, "SDA", "sda.output.Rdata"))
+
+ file.copy(from= file.path(restart.path, "SDA", "outconfig.Rdata"),
+           to = file.path(settings$outdir, "SDA", "outconfig.Rdata"))
+
+#Update the SDA Output
+ load(file.path(restart.path, "SDA", "sda.output.Rdata"))
+
+ ANALYSIS1 = list()
+ FORECAST1 = list()
+ enkf.params1 = list()
+ ANALYSIS1[[1]] = ANALYSIS[[length(ANALYSIS)]]
+ FORECAST1[[1]] = FORECAST[[length(FORECAST)]]
+ enkf.params1[[1]] = enkf.params[[length(enkf.params)]]
+ t = 1
+ ANALYSIS = ANALYSIS1
+ FORECAST = FORECAST1
+ enkf.params = enkf.params1
+ save(list = c("ANALYSIS", "FORECAST", "enkf.params", "t", "ensemble.samples", "inputs", "new.params", "new.state", "site.locs", "Viz.output", "X", "ensemble.id", "run.id"), file = file.path(restart.path, "SDA", "sda.output.Rdata"))
 # --------------------------------------------------------------------------------------------------
 #--------------------------------- Run state data assimilation -------------------------------------
 # --------------------------------------------------------------------------------------------------
 
-unlink(c('run','out','SDA'), recursive = T)
+unlink(c('run','out', "SDA"), recursive = T)
 
 if ('state.data.assimilation' %in% names(settings)) {
   if (PEcAn.utils::status.check("SDA") == 0) {
@@ -228,12 +253,14 @@ if ('state.data.assimilation' %in% names(settings)) {
     control = list(
       trace = TRUE,
       interactivePlot =FALSE,
-      TimeseriesPlot =TRUE,
+      TimeseriesPlot =FALSE,
       BiasPlot =FALSE,
-      debug =TRUE,
+      debug =FALSE,
       pause=FALSE
     )
   )
   PEcAn.utils::status.end()
   }
 }
+
+ 
\ No newline at end of file
diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE
index 82763b58825..aac1d9f80cb 100644
--- a/modules/data.atmosphere/NAMESPACE
+++ b/modules/data.atmosphere/NAMESPACE
@@ -35,8 +35,6 @@ export(download.NOAA_GEFS)
 export(download.NOAA_GEFS_downscale)
 export(download.PalEON)
 export(download.PalEON_ENS)
-export(download.US_Los)
-export(download.US_Syv)
 export(download.US_WCr)
 export(download.US_Wlef)
 export(downscale_ShortWave_to_hrly)
From de47d86c49761538b519078e2822a771cd7e4e08 Mon Sep 17 00:00:00 2001
From: Betsy Cowdery
Date: Fri, 13 Sep 2019 17:17:41 -0400
Subject: [PATCH 0367/2289] Dividing the R script files get.trait.data.R and
 query.trait.data.R into separate files for each function

---
 base/db/R/assign.treatments.R   |  48 +++
 base/db/R/check.lists.R         |  35 ++
 base/db/R/covariate.functions.R | 124 ++++++
 base/db/R/derive.trait.R        |  46 +++
 base/db/R/derive.traits.R       |  59 +++
 base/db/R/fetch.stats2se.R      |  25 ++
 base/db/R/get.trait.data.R      | 440 +--------------------
 base/db/R/get.trait.data.pft.R  | 350 +++++++++++++++++
base/db/R/query.data.R | 52 +++ base/db/R/query.trait.data.R | 495 ++---------------------- base/db/R/query.traits.R | 36 +- base/db/R/query.yields.R | 45 +++ base/db/R/rename_jags_columns.R | 36 ++ base/db/R/symmetric_setdiff.R | 65 ++++ base/db/R/take.samples.R | 35 ++ base/db/man/append.covariate.Rd | 2 +- base/db/man/arrhenius.scaling.traits.Rd | 2 +- base/db/man/assign.treatments.Rd | 2 +- base/db/man/check.lists.Rd | 2 +- base/db/man/derive.trait.Rd | 2 +- base/db/man/derive.traits.Rd | 2 +- base/db/man/fetch.stats2se.Rd | 2 +- base/db/man/filter_sunleaf_traits.Rd | 2 +- base/db/man/get.trait.data.pft.Rd | 2 +- base/db/man/query.covariates.Rd | 2 +- base/db/man/query.data.Rd | 2 +- base/db/man/query.yields.Rd | 2 +- base/db/man/rename_jags_columns.Rd | 2 +- base/db/man/symmetric_setdiff.Rd | 2 +- base/db/man/take.samples.Rd | 2 +- 30 files changed, 994 insertions(+), 927 deletions(-) create mode 100644 base/db/R/assign.treatments.R create mode 100644 base/db/R/check.lists.R create mode 100644 base/db/R/covariate.functions.R create mode 100644 base/db/R/derive.trait.R create mode 100644 base/db/R/derive.traits.R create mode 100644 base/db/R/fetch.stats2se.R create mode 100644 base/db/R/get.trait.data.pft.R create mode 100644 base/db/R/query.data.R create mode 100644 base/db/R/query.yields.R create mode 100644 base/db/R/rename_jags_columns.R create mode 100644 base/db/R/symmetric_setdiff.R create mode 100644 base/db/R/take.samples.R diff --git a/base/db/R/assign.treatments.R b/base/db/R/assign.treatments.R new file mode 100644 index 00000000000..1da9a0079f7 --- /dev/null +++ b/base/db/R/assign.treatments.R @@ -0,0 +1,48 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##-----------------------------------------------------------------------------# +##' Change treatments to sequential integers +##' +##' Assigns all control treatments the same value, then assigns unique treatments +##' within each site. Each site is required to have a control treatment. +##' The algorithm (incorrectly) assumes that each site has a unique set of experimental +##' treatments. +##' @name assign.treatments +##' @title assign.treatments +##' @param data input data +##' @return dataframe with sequential treatments +##' @export +##' @author David LeBauer, Carl Davidson, Alexey Shiklomanov +assign.treatments <- function(data){ + data$trt_id[which(data$control == 1)] <- "control" + sites <- unique(data$site_id) + # Site IDs may be returned as `integer64`, which the `for` loop + # type-coerces to regular integer, which turns it into gibberish. + # Looping over the index instead prevents this type coercion. 
+  for (si in seq_along(sites)) {
+    ss <- sites[[si]]
+    site.i <- data$site_id == ss
+    #if only one treatment, it's control
+    if (length(unique(data$trt_id[site.i])) == 1) data$trt_id[site.i] <- "control"
+    if (!"control" %in% data$trt_id[site.i]){
+      PEcAn.logger::logger.severe(paste0(
+        "No control treatment set for site_id ", unique(data$site_id[site.i]),
+        " and citation id ", unique(data$citation_id[site.i]), ".\n",
+        "Please set control treatment for this site / citation in database.\n"
+      ))
+    }
+  }
+  return(data)
+}
+
+drop.columns <- function(data, columns){
+  return(data[, which(!colnames(data) %in% columns)])
+}
+##=============================================================================#
\ No newline at end of file
diff --git a/base/db/R/check.lists.R b/base/db/R/check.lists.R
new file mode 100644
index 00000000000..3b821f5754f
--- /dev/null
+++ b/base/db/R/check.lists.R
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##--------------------------------------------------------------------------------------------------#
+##' Check two lists. Identical does not work since one can be loaded
+##' from the database and the other from a CSV file.
+##'
+##' @name check.lists
+##' @title Compares two lists
+##' @param x first list
+##' @param y second list
+##' @param filename one of "species.csv" or "cultivars.csv"
+##' @return TRUE if the two lists are the same
+##' @author Rob Kooper
+##'
+check.lists <- function(x, y, filename = "species.csv") {
+  if (nrow(x) != nrow(y)) {
+    return(FALSE)
+  }
+  if(filename == "species.csv"){
+    cols <- c('id', 'genus', 'species', 'scientificname')
+  } else if (filename == "cultivars.csv") {
+    cols <- c('id', 'specie_id', 'species_name', 'cultivar_name')
+  } else {
+    return(FALSE)
+  }
+  xy_match <- vapply(cols, function(i) identical(as.character(x[[i]]), as.character(y[[i]])), logical(1))
+  return(all(unlist(xy_match)))
+}
\ No newline at end of file
diff --git a/base/db/R/covariate.functions.R b/base/db/R/covariate.functions.R
new file mode 100644
index 00000000000..31c93009ee9
--- /dev/null
+++ b/base/db/R/covariate.functions.R
@@ -0,0 +1,124 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+######################## COVARIATE FUNCTIONS #################################
+
+##--------------------------------------------------------------------------------------------------#
+##' Append covariate data as a column within a table
+##'
+##' \code{append.covariate} appends a data frame of covariates as a new column in a data frame
+##' of trait data.
+##' In the event a trait has several covariates available, the first one found
+##' (i.e.
 lowest row number) will take precedence
+##'
+##' @param data trait dataframe that will be appended to.
+##' @param column.name name of the covariate as it will appear in the appended column
+##' @param covariates.data one or more tables of covariate data, ordered by the precedence
+##'   they will assume in the event a trait has covariates across multiple tables.
+##'   All tables must contain an 'id' and 'level' column, at minimum.
+##'
+##' @author Carl Davidson, Ryan Kelly
+##' @export
+##--------------------------------------------------------------------------------------------------#
+append.covariate <- function(data, column.name, covariates.data){
+  # Keep only the highest-priority covariate for each trait
+  covariates.data <- covariates.data[!duplicated(covariates.data$trait_id), ]
+
+  # Select columns to keep, and rename the covariate column
+  covariates.data <- covariates.data[, c('trait_id', 'level')]
+  names(covariates.data) <- c('id', column.name)
+
+  # Merge on trait ID
+  merged <- merge(covariates.data, data, all = TRUE, by = "id")
+  return(merged)
+}
+##==================================================================================================#
+
+
+##--------------------------------------------------------------------------------------------------#
+##' Queries covariates from database for a given vector of trait id's
+##'
+##' @param trait.ids list of trait ids
+##' @param con database connection
+##' @param ... extra arguments
+##'
+##' @author David LeBauer
query.covariates <- function(trait.ids, con = NULL, ...){
+  covariate.query <- paste("select covariates.trait_id, covariates.level,variables.name",
+                           "from covariates left join variables on variables.id = covariates.variable_id",
+                           "where trait_id in (", PEcAn.utils::vecpaste(trait.ids), ")")
+  covariates <- db.query(query = covariate.query, con = con)
+  return(covariates)
+}
+##==================================================================================================#
+
+
+##--------------------------------------------------------------------------------------------------#
+##' Apply Arrhenius scaling to 25 degC for temperature-dependent traits
+##'
+##' @param data data frame of data to scale, as returned by query.data()
+##' @param covariates data frame of covariates, as returned by query.covariates().
+##'   Note that data with no matching covariates will be unchanged.
+##' @param temp.covariates names of covariates used to adjust for temperature;
+##'   if length > 1, order matters (first will be used preferentially)
+##' @param new.temp the reference temperature for the scaled traits. Currently 25 degC
+##' @param missing.temp the temperature assumed for traits with no covariate found.
 Currently 25 degC
+##' @author Carl Davidson, David LeBauer, Ryan Kelly
+arrhenius.scaling.traits <- function(data, covariates, temp.covariates, new.temp = 25, missing.temp = 25){
+  # Select covariates that match temp.covariates
+  covariates <- covariates[covariates$name %in% temp.covariates,]
+
+  if(nrow(covariates)>0) {
+    # Sort covariates in order of priority
+    covariates <- do.call(rbind,
+      lapply(temp.covariates, function(temp.covariate) covariates[covariates$name == temp.covariate, ])
+    )
+
+    data <- append.covariate(data, 'temp', covariates)
+
+    # Assign default value for traits with no covariates
+    data$temp[is.na(data$temp)] <- missing.temp
+
+    # Scale traits
+    data$mean <- PEcAn.utils::arrhenius.scaling(observed.value = data$mean, old.temp = data$temp, new.temp=new.temp)
+    data$stat <- PEcAn.utils::arrhenius.scaling(observed.value = data$stat, old.temp = data$temp, new.temp=new.temp)
+
+    #remove temporary covariate column.
+    data<-data[,colnames(data)!='temp']
+  } else {
+    data <- NULL
+  }
+  return(data)
+}
+##==================================================================================================#
+
+
+##--------------------------------------------------------------------------------------------------#
+##' Function to filter out upper canopy leaves
+##'
+##' @name filter_sunleaf_traits
+##' @aliases filter.sunleaf.traits
+##' @param data input data
+##' @param covariates covariate data
+##'
+##' @author David LeBauer
+filter_sunleaf_traits <- function(data, covariates){
+  if(length(covariates)>0) {
+    data <- append.covariate(data = data, column.name = 'canopy_layer',
+                             covariates.data = covariates[covariates$name == 'canopy_layer',])
+    data <- data[data$canopy_layer >= 0.66 | is.na(data$canopy_layer),]
+
+    # remove temporary covariate column
+    data <- data[,colnames(data)!='canopy_layer']
+  } else {
+    data <- NULL
+  }
+  return(data)
+}
+##==================================================================================================#
\ No newline at end of file
diff --git a/base/db/R/derive.trait.R b/base/db/R/derive.trait.R
new file mode 100644
index 00000000000..7abc5b9cc64
--- /dev/null
+++ b/base/db/R/derive.trait.R
@@ -0,0 +1,46 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##--------------------------------------------------------------------------------------------------#
+##'
+##' Performs an arithmetic function, FUN, over a series of traits and returns
+##' the result as a derived trait.
+##' Traits must be specified as either lists or single row data frames,
+##' and must be either single data points or normally distributed.
+##' In the event one or more input traits are normally distributed,
+##' the resulting distribution is approximated by numerical simulation.
+##' The output trait is effectively a copy of the first input trait with
+##' modified mean, stat, and n.
+##'
+##' @name derive.trait
+##' @title Performs an arithmetic function, FUN, over a series of traits and returns the result as a derived trait.
+##' @param FUN arithmetic function
+##' @param ...
 traits that will be supplied to FUN as input
+##' @param input list of trait inputs. See examples
+##' @param var.name name to use in output
+##' @param sample.size number of random samples generated by rnorm for normally distributed trait input
+##' @return a copy of the first input trait with mean, stat, and n reflecting the derived trait
+##' @export
+##' @examples
+##' input <- list(x = data.frame(mean = 1, stat = 1, n = 1))
+##' derive.trait(FUN = identity, input = input, var.name = 'x')
+derive.trait <- function(FUN, ..., input = list(...), var.name = NA, sample.size = 10^6){
+  if(any(lapply(input, nrow) > 1)){
+    return(NULL)
+  }
+  input.samples <- lapply(input, take.samples, sample.size=sample.size)
+  output.samples <- do.call(FUN, input.samples)
+  output <- input[[1]]
+  output$mean <- mean(output.samples)
+  output$stat <- ifelse(length(output.samples) > 1, stats::sd(output.samples), NA)
+  output$n <- min(sapply(input, function(trait){trait$n}))
+  output$vname <- ifelse(is.na(var.name), output$vname, var.name)
+  return(output)
+}
+##==================================================================================================#
\ No newline at end of file
diff --git a/base/db/R/derive.traits.R b/base/db/R/derive.traits.R
new file mode 100644
index 00000000000..c5b8be9e7ec
--- /dev/null
+++ b/base/db/R/derive.traits.R
@@ -0,0 +1,59 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##--------------------------------------------------------------------------------------------------#
+##' Equivalent to derive.trait(), but operates over a series of trait datasets,
+##' as opposed to individual trait rows. See \code{\link{derive.trait}} for more information.
+##'
+##' @name derive.traits
+##' @title Performs an arithmetic function, FUN, over a series of traits and returns the result as a derived trait.
+##' @export
+##' @param FUN arithmetic function
+##' @param ... trait datasets that will be supplied to FUN as input
+##' @param input list of trait inputs.
 See examples in \code{\link{derive.trait}}
+##' @param var.name name to use in output
+##' @param sample.size number of random samples generated by rnorm where traits are normally distributed
+##' @param match.columns in the event more than one trait dataset is supplied,
+##'   this specifies the columns that identify a unique data point
+##' @return a copy of the first input trait with modified mean, stat, and n
+derive.traits <- function(FUN, ..., input = list(...),
+                          match.columns = c('citation_id', 'site_id', 'specie_id'),
+                          var.name = NA, sample.size = 10^6){
+  if(length(input) == 1){
+    input <- input[[1]]
+    #KLUDGE: modified to handle empty datasets
+    for(i in (0:nrow(input))[-1]){
+      input[i,] <- derive.trait(FUN, input[i,], sample.size=sample.size)
+    }
+    return(input)
+  }
+  else if(length(match.columns) > 0){
+    #function works recursively to reduce the number of match columns
+    match.column <- match.columns[[1]]
+    #find unique values within the column that intersect among all input datasets
+    columns <- lapply(input, function(data){data[[match.column]]})
+    intersection <- Reduce(intersect, columns)
+
+    #run derive.traits() on subsets of input that contain those unique values
+    derived.traits<-lapply(intersection,
+                           function(id){
+                             filtered.input <- lapply(input,
+                                                      function(data){data[data[[match.column]] == id,]})
+                             derive.traits(FUN, input=filtered.input,
+                                           match.columns=match.columns[-1],
+                                           var.name=var.name,
+                                           sample.size=sample.size)
+                           })
+    derived.traits <- derived.traits[!sapply(derived.traits, is.null)]
+    derived.traits <- do.call(rbind, derived.traits)
+    return(derived.traits)
+  } else {
+    return(derive.trait(FUN, input = input, var.name = var.name, sample.size = sample.size))
+  }
+}
diff --git a/base/db/R/fetch.stats2se.R b/base/db/R/fetch.stats2se.R
new file mode 100644
index 00000000000..7425399f3c3
--- /dev/null
+++ b/base/db/R/fetch.stats2se.R
@@ -0,0 +1,25 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved.
 This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##--------------------------------------------------------------------------------------------------#
+##' Queries data from the trait database and transforms statistics to SE
+##'
+##' Performs query and then uses \code{transformstats} to convert miscellaneous statistical summaries
+##' to SE
+##' @name fetch.stats2se
+##' @title Fetch data and transform stats to SE
+##' @param connection connection to trait database
+##' @param query query to send to database
+##' @return dataframe with trait data
+##' @seealso used in \code{\link{query.trait.data}}; \code{\link{transformstats}} performs transformation calculations
+##' @author
+fetch.stats2se <- function(connection, query){
+  transformed <- PEcAn.utils::transformstats(db.query(query = query, con = connection))
+  return(transformed)
+}
\ No newline at end of file
diff --git a/base/db/R/get.trait.data.R b/base/db/R/get.trait.data.R
index 772814cb5b6..dfcddc63b98 100644
--- a/base/db/R/get.trait.data.R
+++ b/base/db/R/get.trait.data.R
@@ -7,375 +7,6 @@
 # http://opensource.ncsa.illinois.edu/license.html
 #-------------------------------------------------------------------------------
 
-##--------------------------------------------------------------------------------------------------#
-##' Check two lists. Identical does not work since one can be loaded
-##' from the database and the other from a CSV file.
-##'
-##' @name check.lists
-##' @title Compares two lists
-##' @param x first list
-##' @param y second list
-##' @param filename one of "species.csv" or "cultivars.csv"
-##' @return true if two list are the same
-##' @author Rob Kooper
-##'
-check.lists <- function(x, y, filename = "species.csv") {
-  if (nrow(x) != nrow(y)) {
-    return(FALSE)
-  }
-  if(filename == "species.csv"){
-    cols <- c('id', 'genus', 'species', 'scientificname')
-  } else if (filename == "cultivars.csv") {
-    cols <- c('id', 'specie_id', 'species_name', 'cultivar_name')
-  } else {
-    return(FALSE)
-  }
-  xy_match <- vapply(cols, function(i) identical(as.character(x[[i]]), as.character(y[[i]])), logical(1))
-  return(all(unlist(xy_match)))
-}
-
-##--------------------------------------------------------------------------------------------------#
-##' Get trait data from the database for a single PFT
-##'
-##' @details `pft` should be a list containing at least `name` and `outdir`, and optionally `posteriorid` and `constants`. BEWARE: All existing files in `outir` will be deleted!
-##' @param pft list of settings for the pft whose traits to retrieve. See details
-##' @param modeltype type of model that is used, this is used to distinguish between different pfts with the same name.
-##' @param dbfiles location where previous results are found
-##' @param dbcon database connection
-##' @param forceupdate set this to true to force an update, auto will check to see if an update is needed.
-##' @param trait.names list of trait names to retrieve -##' @return updated pft with posteriorid -##' @author David LeBauer, Shawn Serbin, Rob Kooper -##' @export -get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, - forceupdate = FALSE) { - - # Create directory if necessary - if (!file.exists(pft$outdir) && !dir.create(pft$outdir, recursive = TRUE)) { - PEcAn.logger::logger.error(paste0("Couldn't create PFT output directory: ", pft$outdir)) - } - - ## Remove old files. Clean up. - old.files <- list.files(path = pft$outdir, full.names = TRUE, include.dirs = FALSE) - file.remove(old.files) - - # find appropriate pft - pftres <- query_pfts(dbcon, pft[["name"]], modeltype) - pfttype <- pftres[["pft_type"]] - pftid <- pftres[["id"]] - - if (nrow(pftres) > 1) { - PEcAn.logger::logger.severe( - "Multiple PFTs named", pft[["name"]], "found,", - "with ids", PEcAn.utils::vecpaste(pftres[["id"]]), ".", - "Specify modeltype to fix this.") - } - - if (nrow(pftres) == 0) { - PEcAn.logger::logger.severe("Could not find pft", pft[["name"]]) - return(NA) - } - - # get the member species/cultivars, we need to check if anything changed - if (pfttype == "plant") { - pft_member_filename = "species.csv" - pft_members <- PEcAn.DB::query.pft_species(pft$name, modeltype, dbcon) - } else if (pfttype == "cultivar") { - pft_member_filename = "cultivars.csv" - pft_members <- PEcAn.DB::query.pft_cultivars(pft$name, modeltype, dbcon) - } else { - PEcAn.logger::logger.severe("Unknown pft type! Expected 'plant' or 'cultivar', got", pfttype) - } - - # ANS: Need to do this conversion for the check against existing - # membership later on. Otherwise, `NA` from the CSV is interpreted - # as different from `""` returned here, even though they are really - # the same thing. - pft_members <- pft_members %>% - dplyr::mutate_if(is.character, ~dplyr::na_if(., "")) - - # get the priors - prior.distns <- PEcAn.DB::query.priors(pft = pftid, trstr = PEcAn.utils::vecpaste(trait.names), con = dbcon) - prior.distns <- prior.distns[which(!rownames(prior.distns) %in% names(pft$constants)),] - traits <- rownames(prior.distns) - - # get the trait data (don't bother sampling derived traits until after update check) - trait.data.check <- PEcAn.DB::query.traits(ids = pft_members$id, priors = traits, con = dbcon, update.check.only = TRUE, ids_are_cultivars = (pfttype=="cultivar")) - traits <- names(trait.data.check) - - # Set forceupdate FALSE if it's a string (backwards compatible with 'AUTO' flag used in the past) - if (!is.logical(forceupdate)) { - forceupdate <- FALSE - } - - # check to see if we need to update - if (!forceupdate) { - if (is.null(pft$posteriorid)) { - recent_posterior <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid) %>% - dplyr::collect() - if (length(recent_posterior) > 0) { - pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid) %>% - dplyr::arrange(dplyr::desc(created_at)) %>% - head(1) %>% - dplyr::pull(id) - } else { - PEcAn.logger::logger.info("No previous posterior found. 
Forcing update") - } - } - if (!is.null(pft$posteriorid)) { - files <- dbfile.check(type = "Posterior", container.id = pft$posteriorid, con = dbcon, - return.all = TRUE) - need_files <- c( - trait_data = "trait.data.Rdata", - priors = "prior.distns.Rdata", - pft_membership = pft_member_filename - ) - ids <- match(need_files, files$file_name) - names(ids) <- names(need_files) - if (any(is.na(ids))) { - missing_files <- need_files[is.na(ids)] - PEcAn.logger::logger.info(paste0( - "Forcing meta-analysis update because ", - "the following files are missing from the posterior: ", - paste0(shQuote(missing_files), collapse = ", ") - )) - PEcAn.logger::logger.debug( - "\n `dbfile.check` returned the following output:\n", - PEcAn.logger::print2string(files), - wrap = FALSE - ) - } else { - PEcAn.logger::logger.debug( - "All posterior files are present. Performing additional checks ", - "to determine if meta-analysis needs to be updated." - ) - # check if all files exist - need_paths <- file.path(files$file_path[ids], need_files) - names(need_paths) <- names(need_files) - files_exist <- file.exists(need_paths) - foundallfiles <- all(files_exist) - if (!foundallfiles) { - PEcAn.logger::logger.warn( - "The following files are in database but not found on disk: ", - paste(shQuote(need_files[!files_exist]), collapse = ", "), ". ", - "Re-running meta-analysis." - ) - } else { - # Check if PFT membership has changed - PEcAn.logger::logger.debug("Checking if PFT membership has changed.") - existing_membership <- utils::read.csv( - need_paths[["pft_membership"]], - # Columns are: id, genus, species, scientificname - # Need this so NA values are - colClasses = c("double", "character", "character", "character"), - stringsAsFactors = FALSE, - na.strings = "" - ) - diff_membership <- symmetric_setdiff( - existing_membership, - pft_members, - xname = "existing", - yname = "current" - ) - if (nrow(diff_membership) > 0) { - PEcAn.logger::logger.error( - "\n PFT membership has changed. \n", - "Difference is:\n", - PEcAn.logger::print2string(diff_membership), - wrap = FALSE - ) - foundallfiles <- FALSE - } - - # Check if priors have changed - PEcAn.logger::logger.debug("Checking if priors have changed") - existing_prior <- PEcAn.utils::load_local(need_paths[["priors"]])[["prior.distns"]] - diff_prior <- symmetric_setdiff( - dplyr::as_tibble(prior.distns, rownames = "trait"), - dplyr::as_tibble(existing_prior, rownames = "trait") - ) - if (nrow(diff_prior) > 0) { - PEcAn.logger::logger.error( - "\n Prior has changed. \n", - "Difference is:\n", - PEcAn.logger::print2string(diff_prior), - wrap = FALSE - ) - foundallfiles <- FALSE - } - - # Check if trait data have changed - PEcAn.logger::logger.debug("Checking if trait data have changed") - existing_trait_data <- PEcAn.utils::load_local( - need_paths[["trait_data"]] - )[["trait.data"]] - if (length(trait.data.check) != length(existing_trait_data)) { - PEcAn.logger::logger.warn( - "Lengths of new and existing `trait.data` differ. ", - "Re-running meta-analysis." - ) - foundallfiles <- FALSE - } else if (length(trait.data.check) == 0) { - PEcAn.logger::logger.warn("New and existing trait data are both empty. 
Skipping this check.") - } else { - current_traits <- dplyr::bind_rows(trait.data.check, .id = "trait") %>% - dplyr::select(-mean, -stat) - existing_traits <- dplyr::bind_rows(existing_trait_data, .id = "trait") %>% - dplyr::select(-mean, -stat) - diff_traits <- symmetric_setdiff(current_traits, existing_traits) - if (nrow(diff_traits) > 0) { - diff_summary <- diff_traits %>% - dplyr::count(source, trait) - PEcAn.logger::logger.error( - "\n Prior has changed. \n", - "Here are the number of differing trait records by trait:\n", - PEcAn.logger::print2string(diff_summary), - wrap = FALSE - ) - foundallfiles <- FALSE - } - } - } - - - if (foundallfiles) { - PEcAn.logger::logger.info( - "Reusing existing files from posterior", pft$posteriorid, - "for PFT", shQuote(pft$name) - ) - for (id in seq_len(nrow(files))) { - file.copy(from = file.path(files[[id, "file_path"]], files[[id, "file_name"]]), - to = file.path(pft$outdir, files[[id, "file_name"]])) - } - - done <- TRUE - - # May need to symlink the generic post.distns.Rdata to a specific post.distns.*.Rdata file. - if (length(list.files(pft$outdir, "post.distns.Rdata")) == 0) { - all.files <- list.files(pft$outdir) - post.distn.file <- all.files[grep("post\\.distns\\..*\\.Rdata", all.files)] - if (length(post.distn.file) > 1) - PEcAn.logger::logger.severe( - "get.trait.data.pft() doesn't know how to ", - "handle multiple `post.distns.*.Rdata` files.", - "Found the following files: ", - paste(shQuote(post.distn.file), collapse = ", ") - ) - else if (length(post.distn.file) == 1) { - # Found exactly one post.distns.*.Rdata file. Use it. - link_input <- file.path(pft[["outdir"]], post.distn.file) - link_target <- file.path(pft[["outdir"]], "post.distns.Rdata") - PEcAn.logger::logger.debug( - "Found exactly one posterior distribution file: ", - shQuote(link_input), - ". Symlinking it to PFT output directory: ", - shQuote(link_target) - ) - file.symlink(from = link_input, to = link_target) - } else { - PEcAn.logger::logger.error( - "No previous posterior distribution file found. ", - "Most likely, trait data were retrieved, but meta-analysis ", - "was not run. Meta-analysis will be run." 
- ) - done <- FALSE - } - } - if (done) return(pft) - } - } - } - } - - # get the trait data (including sampling of derived traits, if any) - trait.data <- query.traits(pft_members$id, traits, con = dbcon, - update.check.only = FALSE, - ids_are_cultivars = (pfttype == "cultivar")) - traits <- names(trait.data) - - if (length(trait.data) > 0) { - trait_counts <- trait.data %>% - dplyr::bind_rows(.id = "trait") %>% - dplyr::count(trait) - - PEcAn.logger::logger.info( - "\n Number of observations per trait for PFT ", shQuote(pft[["name"]]), ":\n", - PEcAn.logger::print2string(trait_counts, n = Inf), - wrap = FALSE - ) - } else { - PEcAn.logger::logger.warn( - "None of the requested traits were found for PFT ", - format(pft_members[["id"]], scientific = FALSE) - ) - } - - # get list of existing files so they get ignored saving - old.files <- list.files(path = pft$outdir) - - # create a new posterior - now <- format(x = Sys.time(), format = "%Y-%m-%d %H:%M:%S") - db.query(paste0("INSERT INTO posteriors (pft_id, created_at, updated_at) ", - "VALUES (", pftid, ", '", now, "', '", now, "')"), - con = dbcon) - pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid, created_at == !!now) %>% - dplyr::pull(id) - - # create path where to store files - pathname <- file.path(dbfiles, "posterior", pft$posteriorid) - dir.create(pathname, showWarnings = FALSE, recursive = TRUE) - - ## 1. get species/cultivar list based on pft - utils::write.csv(pft_members, file.path(pft$outdir, pft_member_filename), - row.names = FALSE) - - ## save priors - save(prior.distns, file = file.path(pft$outdir, "prior.distns.Rdata")) - utils::write.csv(prior.distns, file.path(pft$outdir, "prior.distns.csv"), - row.names = TRUE) - - ## 3. display info to the console - PEcAn.logger::logger.info( - "\n Summary of prior distributions for PFT ", shQuote(pft$name), ":\n", - PEcAn.logger::print2string(prior.distns), - wrap = FALSE - ) - - ## traits = variables with prior distributions for this pft - trait.data.file <- file.path(pft$outdir, "trait.data.Rdata") - save(trait.data, file = trait.data.file) - utils::write.csv( - dplyr::bind_rows(trait.data), - file.path(pft$outdir, "trait.data.csv"), - row.names = FALSE - ) - - ### save and store in database all results except those that were there already - store_files_all <- list.files(path = pft[["outdir"]]) - store_files <- setdiff(store_files_all, old.files) - PEcAn.logger::logger.debug( - "The following posterior files found in PFT outdir ", - "(", shQuote(pft[["outdir"]]), ") will be registered in BETY ", - "under posterior ID ", format(pft[["posteriorid"]], scientific = FALSE), ": ", - paste(shQuote(store_files), collapse = ", "), ". ", - "The following files (if any) will not be registered because they already existed: ", - paste(shQuote(intersect(store_files, old.files)), collapse = ", "), - wrap = FALSE - ) - for (file in store_files) { - filename <- file.path(pathname, file) - file.copy(file.path(pft$outdir, file), filename) - dbfile.insert(in.path = pathname, in.prefix = file, - type = "Posterior", id = pft[["posteriorid"]], - con = dbcon) - } - - return(pft) -} - ##--------------------------------------------------------------------------------------------------# ##' Get trait data from the database. 
##' @@ -409,10 +40,10 @@ get.trait.data <- function(pfts, modeltype, dbfiles, database, forceupdate, if (any(sapply(pft_outdirs, is.null))) { PEcAn.logger::logger.severe('At least one pft in settings is missing its "outdir"') } - + dbcon <- db.open(database) on.exit(db.close(dbcon)) - + if (is.null(trait.names)) { PEcAn.logger::logger.debug(paste0( "`trait.names` is NULL, so retrieving all traits ", @@ -431,7 +62,7 @@ get.trait.data <- function(pfts, modeltype, dbfiles, database, forceupdate, # all_priors <- query_priors(pfts, params = database) # trait.names <- unique(all_priors[["name"]]) } - + # process all pfts result <- lapply(pfts, get.trait.data.pft, modeltype = modeltype, @@ -439,67 +70,6 @@ get.trait.data <- function(pfts, modeltype, dbfiles, database, forceupdate, dbcon = dbcon, forceupdate = forceupdate, trait.names = trait.names) - + invisible(result) -} - -#' Symmetric set difference of two data frames -#' -#' @param x,y `data.frame`s to compare -#' @param xname Label for data in x but not y. Default = "x" -#' @param yname Label for data in y but not x. Default = "y" -#' @param namecol Name of label column. Default = "source". -#' @param simplify_types (Logical) If `TRUE`, coerce anything that -#' isn't numeric to character, to facilitate comparison. -#' @return `data.frame` of data not common to x and y, with additional -#' column (`namecol`) indicating whether data are only in x -#' (`xname`) or y (`yname`) -#' @export -#' @examples -#' xdf <- data.frame(a = c("a", "b", "c"), -#' b = c(1, 2, 3), -#' stringsAsFactors = FALSE) -#' ydf <- data.frame(a = c("a", "b", "d"), -#' b = c(1, 2.5, 3), -#' stringsAsFactors = FALSE) -#' symmetric_setdiff(xdf, ydf) -symmetric_setdiff <- function(x, y, xname = "x", yname = "y", - namecol = "source", simplify_types = TRUE) { - stopifnot(is.data.frame(x), is.data.frame(y), - is.character(xname), is.character(yname), - length(xname) == 1, length(yname) == 1) - is_i64 <- c( - vapply(x, inherits, logical(1), what = "integer64"), - vapply(y, inherits, logical(1), what = "integer64") - ) - if (any(is_i64)) { - PEcAn.logger::logger.debug( - "Detected at least one `integer64` column. ", - "Converting to `numeric` for comparison." - ) - if (requireNamespace("bit64", quietly = TRUE)) { - x <- dplyr::mutate_if(x, bit64::is.integer64, as.numeric) - y <- dplyr::mutate_if(y, bit64::is.integer64, as.numeric) - } else { - PEcAn.logger::logger.warn( - '"bit64" package required for `integer64` conversion, but not installed. ', - "Skipping conversion, which may produce weird results!" - ) - } - } - if (simplify_types) { - x <- dplyr::mutate_if(x, ~!is.numeric(.), as.character) - y <- dplyr::mutate_if(x, ~!is.numeric(.), as.character) - } - namecol <- dplyr::sym(namecol) - xy <- dplyr::setdiff(x, y) %>% - dplyr::mutate(!!namecol := xname) - yx <- dplyr::setdiff(y, x) %>% - dplyr::mutate(!!namecol := yname) - dplyr::bind_rows(xy, yx) %>% - dplyr::select(!!namecol, dplyr::everything()) -} - -#################################################################################################### -### EOF. End of R script file. 
-####################################################################################################
+}
\ No newline at end of file
diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R
new file mode 100644
index 00000000000..606d36b79c8
--- /dev/null
+++ b/base/db/R/get.trait.data.pft.R
@@ -0,0 +1,350 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##--------------------------------------------------------------------------------------------------#
+##' Get trait data from the database for a single PFT
+##'
+##' @details `pft` should be a list containing at least `name` and `outdir`, and optionally `posteriorid` and `constants`. BEWARE: All existing files in `outdir` will be deleted!
+##' @param pft list of settings for the pft whose traits to retrieve. See details
+##' @param modeltype type of model being run; used to distinguish between different PFTs with the same name.
+##' @param dbfiles location where previous results are found
+##' @param dbcon database connection
+##' @param forceupdate set this to TRUE to force an update; otherwise the need for an update is checked automatically.
+##' @param trait.names list of trait names to retrieve
+##' @return updated pft with posteriorid
+##' @author David LeBauer, Shawn Serbin, Rob Kooper
+##' @export
+get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names,
+                               forceupdate = FALSE) {
+
+  # Create directory if necessary
+  if (!file.exists(pft$outdir) && !dir.create(pft$outdir, recursive = TRUE)) {
+    PEcAn.logger::logger.error(paste0("Couldn't create PFT output directory: ", pft$outdir))
+  }
+
+  ## Remove old files. Clean up.
+  old.files <- list.files(path = pft$outdir, full.names = TRUE, include.dirs = FALSE)
+  file.remove(old.files)
+
+  # find appropriate pft
+  pftres <- query_pfts(dbcon, pft[["name"]], modeltype)
+  pfttype <- pftres[["pft_type"]]
+  pftid <- pftres[["id"]]
+
+  if (nrow(pftres) > 1) {
+    PEcAn.logger::logger.severe(
+      "Multiple PFTs named", pft[["name"]], "found,",
+      "with ids", PEcAn.utils::vecpaste(pftres[["id"]]), ".",
+      "Specify modeltype to fix this.")
+  }
+
+  if (nrow(pftres) == 0) {
+    PEcAn.logger::logger.severe("Could not find pft", pft[["name"]])
+    return(NA)
+  }
+
+  # get the member species/cultivars; we need to check whether anything changed
+  if (pfttype == "plant") {
+    pft_member_filename = "species.csv"
+    pft_members <- PEcAn.DB::query.pft_species(pft$name, modeltype, dbcon)
+  } else if (pfttype == "cultivar") {
+    pft_member_filename = "cultivars.csv"
+    pft_members <- PEcAn.DB::query.pft_cultivars(pft$name, modeltype, dbcon)
+  } else {
+    PEcAn.logger::logger.severe("Unknown pft type! Expected 'plant' or 'cultivar', got", pfttype)
+  }
+
+  # ANS: Need to do this conversion for the check against existing
+  # membership later on. Otherwise, `NA` from the CSV is interpreted
+  # as different from `""` returned here, even though they are really
+  # the same thing.
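A minimal sketch, with made-up species names, of the `""`-vs-`NA` mismatch that the `na_if()` normalization below guards against: `read.csv()` yields `NA` where the database query yields `""`, so an unnormalized comparison would report a spurious membership change.

```
# Hypothetical two-row membership tables: one from a CSV, one from the DB
library(dplyr)
from_csv <- data.frame(species = c("Acer rubrum", NA), stringsAsFactors = FALSE)
from_db  <- data.frame(species = c("Acer rubrum", ""), stringsAsFactors = FALSE)
identical(from_csv, from_db)                                # FALSE: NA != ""
from_db <- mutate_if(from_db, is.character, ~na_if(., ""))
identical(from_csv, from_db)                                # TRUE once "" is normalized to NA
```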
+  pft_members <- pft_members %>%
+    dplyr::mutate_if(is.character, ~dplyr::na_if(., ""))
+
+  # get the priors
+  prior.distns <- PEcAn.DB::query.priors(pft = pftid, trstr = PEcAn.utils::vecpaste(trait.names), con = dbcon)
+  prior.distns <- prior.distns[which(!rownames(prior.distns) %in% names(pft$constants)),]
+  traits <- rownames(prior.distns)
+
+  # get the trait data (don't bother sampling derived traits until after update check)
+  trait.data.check <- PEcAn.DB::query.traits(ids = pft_members$id, priors = traits, con = dbcon, update.check.only = TRUE, ids_are_cultivars = (pfttype=="cultivar"))
+  traits <- names(trait.data.check)
+
+  # Set forceupdate FALSE if it's a string (backwards compatible with 'AUTO' flag used in the past)
+  if (!is.logical(forceupdate)) {
+    forceupdate <- FALSE
+  }
+
+  # check to see if we need to update
+  if (!forceupdate) {
+    if (is.null(pft$posteriorid)) {
+      recent_posterior <- dplyr::tbl(dbcon, "posteriors") %>%
+        dplyr::filter(pft_id == !!pftid) %>%
+        dplyr::collect()
+      if (nrow(recent_posterior) > 0) {
+        pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>%
+          dplyr::filter(pft_id == !!pftid) %>%
+          dplyr::arrange(dplyr::desc(created_at)) %>%
+          head(1) %>%
+          dplyr::pull(id)
+      } else {
+        PEcAn.logger::logger.info("No previous posterior found. Forcing update")
+      }
+    }
+    if (!is.null(pft$posteriorid)) {
+      files <- dbfile.check(type = "Posterior", container.id = pft$posteriorid, con = dbcon,
+                            return.all = TRUE)
+      need_files <- c(
+        trait_data = "trait.data.Rdata",
+        priors = "prior.distns.Rdata",
+        pft_membership = pft_member_filename
+      )
+      ids <- match(need_files, files$file_name)
+      names(ids) <- names(need_files)
+      if (any(is.na(ids))) {
+        missing_files <- need_files[is.na(ids)]
+        PEcAn.logger::logger.info(paste0(
+          "Forcing meta-analysis update because ",
+          "the following files are missing from the posterior: ",
+          paste0(shQuote(missing_files), collapse = ", ")
+        ))
+        PEcAn.logger::logger.debug(
+          "\n `dbfile.check` returned the following output:\n",
+          PEcAn.logger::print2string(files),
+          wrap = FALSE
+        )
+      } else {
+        PEcAn.logger::logger.debug(
+          "All posterior files are present. Performing additional checks ",
+          "to determine if meta-analysis needs to be updated."
+        )
+        # check if all files exist
+        need_paths <- file.path(files$file_path[ids], need_files)
+        names(need_paths) <- names(need_files)
+        files_exist <- file.exists(need_paths)
+        foundallfiles <- all(files_exist)
+        if (!foundallfiles) {
+          PEcAn.logger::logger.warn(
+            "The following files are in database but not found on disk: ",
+            paste(shQuote(need_files[!files_exist]), collapse = ", "), ". ",
+            "Re-running meta-analysis."
+          )
+        } else {
+          # Check if PFT membership has changed
+          PEcAn.logger::logger.debug("Checking if PFT membership has changed.")
+          existing_membership <- utils::read.csv(
+            need_paths[["pft_membership"]],
+            # Columns are: id, genus, species, scientificname
+            # Need this so NA values are formatted consistently
+            colClasses = c("double", "character", "character", "character"),
+            stringsAsFactors = FALSE,
+            na.strings = ""
+          )
+          diff_membership <- symmetric_setdiff(
+            existing_membership,
+            pft_members,
+            xname = "existing",
+            yname = "current"
+          )
+          if (nrow(diff_membership) > 0) {
+            PEcAn.logger::logger.error(
+              "\n PFT membership has changed. \n",
+              "Difference is:\n",
+              PEcAn.logger::print2string(diff_membership),
+              wrap = FALSE
+            )
+            foundallfiles <- FALSE
+          }
+
+          # Check if priors have changed
+          PEcAn.logger::logger.debug("Checking if priors have changed")
+          existing_prior <- PEcAn.utils::load_local(need_paths[["priors"]])[["prior.distns"]]
+          diff_prior <- symmetric_setdiff(
+            dplyr::as_tibble(prior.distns, rownames = "trait"),
+            dplyr::as_tibble(existing_prior, rownames = "trait")
+          )
+          if (nrow(diff_prior) > 0) {
+            PEcAn.logger::logger.error(
+              "\n Prior has changed. \n",
+              "Difference is:\n",
+              PEcAn.logger::print2string(diff_prior),
+              wrap = FALSE
+            )
+            foundallfiles <- FALSE
+          }
+
+          # Check if trait data have changed
+          PEcAn.logger::logger.debug("Checking if trait data have changed")
+          existing_trait_data <- PEcAn.utils::load_local(
+            need_paths[["trait_data"]]
+          )[["trait.data"]]
+          if (length(trait.data.check) != length(existing_trait_data)) {
+            PEcAn.logger::logger.warn(
+              "Lengths of new and existing `trait.data` differ. ",
+              "Re-running meta-analysis."
+            )
+            foundallfiles <- FALSE
+          } else if (length(trait.data.check) == 0) {
+            PEcAn.logger::logger.warn("New and existing trait data are both empty. Skipping this check.")
+          } else {
+            current_traits <- dplyr::bind_rows(trait.data.check, .id = "trait") %>%
+              dplyr::select(-mean, -stat)
+            existing_traits <- dplyr::bind_rows(existing_trait_data, .id = "trait") %>%
+              dplyr::select(-mean, -stat)
+            diff_traits <- symmetric_setdiff(current_traits, existing_traits)
+            if (nrow(diff_traits) > 0) {
+              diff_summary <- diff_traits %>%
+                dplyr::count(source, trait)
+              PEcAn.logger::logger.error(
+                "\n Trait data have changed. \n",
+                "Here are the number of differing trait records by trait:\n",
+                PEcAn.logger::print2string(diff_summary),
+                wrap = FALSE
+              )
+              foundallfiles <- FALSE
+            }
+          }
+        }
+
+
+        if (foundallfiles) {
+          PEcAn.logger::logger.info(
+            "Reusing existing files from posterior", pft$posteriorid,
+            "for PFT", shQuote(pft$name)
+          )
+          for (id in seq_len(nrow(files))) {
+            file.copy(from = file.path(files[[id, "file_path"]], files[[id, "file_name"]]),
+                      to = file.path(pft$outdir, files[[id, "file_name"]]))
+          }
+
+          done <- TRUE
+
+          # May need to symlink the generic post.distns.Rdata to a specific post.distns.*.Rdata file.
+          if (length(list.files(pft$outdir, "post.distns.Rdata")) == 0) {
+            all.files <- list.files(pft$outdir)
+            post.distn.file <- all.files[grep("post\\.distns\\..*\\.Rdata", all.files)]
+            if (length(post.distn.file) > 1)
+              PEcAn.logger::logger.severe(
+                "get.trait.data.pft() doesn't know how to ",
+                "handle multiple `post.distns.*.Rdata` files.",
+                "Found the following files: ",
+                paste(shQuote(post.distn.file), collapse = ", ")
+              )
+            else if (length(post.distn.file) == 1) {
+              # Found exactly one post.distns.*.Rdata file. Use it.
+              link_input <- file.path(pft[["outdir"]], post.distn.file)
+              link_target <- file.path(pft[["outdir"]], "post.distns.Rdata")
+              PEcAn.logger::logger.debug(
+                "Found exactly one posterior distribution file: ",
+                shQuote(link_input),
+                ". Symlinking it to PFT output directory: ",
+                shQuote(link_target)
+              )
+              file.symlink(from = link_input, to = link_target)
+            } else {
+              PEcAn.logger::logger.error(
+                "No previous posterior distribution file found. ",
+                "Most likely, trait data were retrieved, but meta-analysis ",
+                "was not run. Meta-analysis will be run."
+              )
+              done <- FALSE
+            }
+          }
+          if (done) return(pft)
+        }
+      }
+    }
+  }
+
+  # get the trait data (including sampling of derived traits, if any)
+  trait.data <- query.traits(pft_members$id, traits, con = dbcon,
+                             update.check.only = FALSE,
+                             ids_are_cultivars = (pfttype == "cultivar"))
+  traits <- names(trait.data)
+
+  if (length(trait.data) > 0) {
+    trait_counts <- trait.data %>%
+      dplyr::bind_rows(.id = "trait") %>%
+      dplyr::count(trait)
+
+    PEcAn.logger::logger.info(
+      "\n Number of observations per trait for PFT ", shQuote(pft[["name"]]), ":\n",
+      PEcAn.logger::print2string(trait_counts, n = Inf),
+      wrap = FALSE
+    )
+  } else {
+    PEcAn.logger::logger.warn(
+      "None of the requested traits were found for PFT ",
+      format(pft_members[["id"]], scientific = FALSE)
+    )
+  }
+
+  # get list of existing files so they get ignored when saving
+  old.files <- list.files(path = pft$outdir)
+
+  # create a new posterior
+  now <- format(x = Sys.time(), format = "%Y-%m-%d %H:%M:%S")
+  db.query(paste0("INSERT INTO posteriors (pft_id, created_at, updated_at) ",
+                  "VALUES (", pftid, ", '", now, "', '", now, "')"),
+           con = dbcon)
+  pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>%
+    dplyr::filter(pft_id == !!pftid, created_at == !!now) %>%
+    dplyr::pull(id)
+
+  # create path where to store files
+  pathname <- file.path(dbfiles, "posterior", pft$posteriorid)
+  dir.create(pathname, showWarnings = FALSE, recursive = TRUE)
+
+  ## 1. save species/cultivar list for this pft
+  utils::write.csv(pft_members, file.path(pft$outdir, pft_member_filename),
+                   row.names = FALSE)
+
+  ## 2. save priors
+  save(prior.distns, file = file.path(pft$outdir, "prior.distns.Rdata"))
+  utils::write.csv(prior.distns, file.path(pft$outdir, "prior.distns.csv"),
+                   row.names = TRUE)
+
+  ## 3. display info to the console
+  PEcAn.logger::logger.info(
+    "\n Summary of prior distributions for PFT ", shQuote(pft$name), ":\n",
+    PEcAn.logger::print2string(prior.distns),
+    wrap = FALSE
+  )
+
+  ## 4. save trait data (traits = variables with prior distributions for this pft)
+  trait.data.file <- file.path(pft$outdir, "trait.data.Rdata")
+  save(trait.data, file = trait.data.file)
+  utils::write.csv(
+    dplyr::bind_rows(trait.data),
+    file.path(pft$outdir, "trait.data.csv"),
+    row.names = FALSE
+  )
+
+  ### save and store in database all results except those that were there already
+  store_files_all <- list.files(path = pft[["outdir"]])
+  store_files <- setdiff(store_files_all, old.files)
+  PEcAn.logger::logger.debug(
+    "The following posterior files found in PFT outdir ",
+    "(", shQuote(pft[["outdir"]]), ") will be registered in BETY ",
+    "under posterior ID ", format(pft[["posteriorid"]], scientific = FALSE), ": ",
+    paste(shQuote(store_files), collapse = ", "), ". ",
+    "The following files (if any) will not be registered because they already existed: ",
+    paste(shQuote(intersect(store_files_all, old.files)), collapse = ", "),
+    wrap = FALSE
+  )
+  for (file in store_files) {
+    filename <- file.path(pathname, file)
+    file.copy(file.path(pft$outdir, file), filename)
+    dbfile.insert(in.path = pathname, in.prefix = file,
+                  type = "Posterior", id = pft[["posteriorid"]],
+                  con = dbcon)
+  }
+
+  return(pft)
+}
\ No newline at end of file
diff --git a/base/db/R/query.data.R b/base/db/R/query.data.R
new file mode 100644
index 00000000000..6fb2b85181c
--- /dev/null
+++ b/base/db/R/query.data.R
@@ -0,0 +1,53 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##--------------------------------------------------------------------------------------------------#
+##' Function to query data from database for specific species and convert stat to SE
+##'
+##' @name query.data
+##' @title Query data and transform stats to SE by calling \code{\link{fetch.stats2se}};
+##' @param trait trait to query from the database
+##' @param spstr IDs of species to query from, as a single comma-separated string
+##' @param extra.columns other query terms to pass in. If unspecified, retrieves latitude and longitude
+##' @param con database connection
+##' @param store.unconverted if TRUE, also return the pre-conversion values in `mean_unconverted` and `stat_unconverted` columns
+##' @param ids_are_cultivars if TRUE, ids is a vector of cultivar IDs, otherwise they are species IDs
+##' @param ... extra arguments
+##' @seealso used in \code{\link{query.trait.data}}; \code{\link{fetch.stats2se}}; \code{\link{transformstats}} performs transformation calculations
+##' @author David LeBauer, Carl Davidson
+query.data <- function(trait, spstr, extra.columns = 'ST_X(ST_CENTROID(sites.geometry)) AS lon, ST_Y(ST_CENTROID(sites.geometry)) AS lat, ', con=NULL, store.unconverted=FALSE, ids_are_cultivars=FALSE, ...) {
+  if (is.null(con)) {
+    PEcAn.logger::logger.error("No open database connection passed in.")
+    con <- db.open(settings$database$bety)
+    on.exit(db.close(con))
+  }
+  id_type = if (ids_are_cultivars) {"cultivar_id"} else {"specie_id"}
+
+  query <- paste("select
+           traits.id, traits.citation_id, traits.site_id, traits.treatment_id,
+           treatments.name, traits.date, traits.time, traits.cultivar_id, traits.specie_id,
+           traits.mean, traits.statname, traits.stat, traits.n, variables.name as vname,
+           extract(month from traits.date) as month,",
+                 extra.columns,
+                 "treatments.control, sites.greenhouse
+           from traits
+           left join treatments on (traits.treatment_id = treatments.id)
+           left join sites on (traits.site_id = sites.id)
+           left join variables on (traits.variable_id = variables.id)
+           where ", id_type, " in (", spstr,")
+           and variables.name in ('", trait,"');", sep = "")
+  result <- fetch.stats2se(connection = con, query = query)
+
+  if(store.unconverted) {
+    result$mean_unconverted <- result$mean
+    result$stat_unconverted <- result$stat
+  }
+
+  return(result)
+}
\ No newline at end of file
diff --git a/base/db/R/query.trait.data.R b/base/db/R/query.trait.data.R
index b2054425ecb..19eb2c88d83 100644
--- a/base/db/R/query.trait.data.R
+++ b/base/db/R/query.trait.data.R
@@ -6,417 +6,6 @@
 # which accompanies this distribution, and is available at
 # http://opensource.ncsa.illinois.edu/license.html
 #-------------------------------------------------------------------------------
-##--------------------------------------------------------------------------------------------------#
-##' Queries data from the trait database and transforms statistics to SE
-##'
-##' Performs query and then uses \code{transformstats} to convert miscellaneous statistical summaries
-##' to SE
-##' @name fetch.stats2se
-##' @title Fetch data and transform stats to SE
-##' @param connection connection to trait database
-##' @param query to send to databse
-##' @return dataframe with trait data
-##' @seealso used in \code{\link{query.trait.data}}; \code{\link{transformstats}} performs transformation
calculations -##' @author -fetch.stats2se <- function(connection, query){ - transformed <- PEcAn.utils::transformstats(db.query(query = query, con = connection)) - return(transformed) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' Function to query data from database for specific species and convert stat to SE -##' -##' @name query.data -##' @title Query data and transform stats to SE by calling \code{\link{fetch.stats2se}}; -##' @param trait trait to query from the database -##' @param spstr IDs of species to query from, as a single comma-separated string -##' @param extra.columns other query terms to pass in. If unspecified, retrieves latitude and longitude -##' @param con database connection -##' @param ids_are_cultivars if TRUE, ids is a vector of cultivar IDs, otherwise they are species IDs -##' @param ... extra arguments -##' @seealso used in \code{\link{query.trait.data}}; \code{\link{fetch.stats2se}}; \code{\link{transformstats}} performs transformation calculations -##' @author David LeBauer, Carl Davidson -query.data <- function(trait, spstr, extra.columns = 'ST_X(ST_CENTROID(sites.geometry)) AS lon, ST_Y(ST_CENTROID(sites.geometry)) AS lat, ', con=NULL, store.unconverted=FALSE, ids_are_cultivars=FALSE, ...) { - if (is.null(con)) { - PEcAn.logger::logger.error("No open database connection passed in.") - con <- db.open(settings$database$bety) - on.exit(db.close(con)) - } - id_type = if (ids_are_cultivars) {"cultivar_id"} else {"specie_id"} - - query <- paste("select - traits.id, traits.citation_id, traits.site_id, traits.treatment_id, - treatments.name, traits.date, traits.time, traits.cultivar_id, traits.specie_id, - traits.mean, traits.statname, traits.stat, traits.n, variables.name as vname, - extract(month from traits.date) as month,", - extra.columns, - "treatments.control, sites.greenhouse - from traits - left join treatments on (traits.treatment_id = treatments.id) - left join sites on (traits.site_id = sites.id) - left join variables on (traits.variable_id = variables.id) - where ", id_type, " in (", spstr,") - and variables.name in ('", trait,"');", sep = "") - result <- fetch.stats2se(connection = con, query = query) - - if(store.unconverted) { - result$mean_unconverted <- result$mean - result$stat_unconverted <- result$stat - } - - return(result) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' Function to query yields data from database for specific species and convert stat to SE -##' -##' @name query.yields -##' @title Query yield data and transform stats to SE by calling \code{\link{fetch.stats2se}}; -##' @param trait yield trait to query -##' @param spstr species to query for yield data -##' @param extra.columns other query terms to pass in. Optional -##' @param con database connection -##' @param ids_are_cultivars if TRUE, spstr contains cultivar IDs, otherwise they are species IDs -##' @param ... 
extra arguments -##' @seealso used in \code{\link{query.trait.data}}; \code{\link{fetch.stats2se}}; \code{\link{transformstats}} performs transformation calculations -##' @author -query.yields <- function(trait = 'yield', spstr, extra.columns = '', con = NULL, - ids_are_cultivars = FALSE, ...){ - - member_column <- if (ids_are_cultivars) {"cultivar_id"} else {"specie_id"} - query <- paste("select - yields.id, yields.citation_id, yields.site_id, treatments.name, - yields.date, yields.time, yields.cultivar_id, yields.specie_id, - yields.mean, yields.statname, yields.stat, yields.n, - variables.name as vname, - month(yields.date) as month,", - extra.columns, - "treatments.control, sites.greenhouse - from yields - left join treatments on (yields.treatment_id = treatments.id) - left join sites on (yields.site_id = sites.id) - left join variables on (yields.variable_id = variables.id) - where ", member_column, " in (", spstr,");", sep = "") - if(!trait == 'yield'){ - query <- gsub(");", paste(" and variables.name in ('", trait,"');", sep = ""), query) - } - - return(fetch.stats2se(connection = con, query = query)) -} -##==================================================================================================# - - -######################## COVARIATE FUNCTIONS ################################# - -##--------------------------------------------------------------------------------------------------# -##' Append covariate data as a column within a table -##' -##' \code{append.covariate} appends a data frame of covariates as a new column in a data frame -##' of trait data. -##' In the event a trait has several covariates available, the first one found -##' (i.e. lowest row number) will take precedence -##' -##' @param data trait dataframe that will be appended to. -##' @param column.name name of the covariate as it will appear in the appended column -##' @param covariates.data one or more tables of covariate data, ordered by the precedence -##' they will assume in the event a trait has covariates across multiple tables. -##' All tables must contain an 'id' and 'level' column, at minimum. -##' -##' @author Carl Davidson, Ryan Kelly -##' @export -##--------------------------------------------------------------------------------------------------# -append.covariate <- function(data, column.name, covariates.data){ - # Keep only the highest-priority covariate for each trait - covariates.data <- covariates.data[!duplicated(covariates.data$trait_id), ] - - # Select columns to keep, and rename the covariate column - covariates.data <- covariates.data[, c('trait_id', 'level')] - names(covariates.data) <- c('id', column.name) - - # Merge on trait ID - merged <- merge(covariates.data, data, all = TRUE, by = "id") - return(merged) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' Queries covariates from database for a given vector of trait id's -##' -##' @param trait.ids list of trait ids -##' @param con database connection -##' @param ... 
extra arguments -##' -##' @author David LeBauer -query.covariates <- function(trait.ids, con = NULL, ...){ - covariate.query <- paste("select covariates.trait_id, covariates.level,variables.name", - "from covariates left join variables on variables.id = covariates.variable_id", - "where trait_id in (", PEcAn.utils::vecpaste(trait.ids), ")") - covariates <- db.query(query = covariate.query, con = con) - return(covariates) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' Apply Arrhenius scaling to 25 degC for temperature-dependent traits -##' -##' @param data data frame of data to scale, as returned by query.data() -##' @param covariates data frame of covariates, as returned by query.covariates(). -##' Note that data with no matching covariates will be unchanged. -##' @param temp.covariates names of covariates used to adjust for temperature; -##' if length > 1, order matters (first will be used preferentially) -##' @param new.temp the reference temperature for the scaled traits. Curerntly 25 degC -##' @param missing.temp the temperature assumed for traits with no covariate found. Curerntly 25 degC -##' @author Carl Davidson, David LeBauer, Ryan Kelly -arrhenius.scaling.traits <- function(data, covariates, temp.covariates, new.temp = 25, missing.temp = 25){ - # Select covariates that match temp.covariates - covariates <- covariates[covariates$name %in% temp.covariates,] - - if(nrow(covariates)>0) { - # Sort covariates in order of priority - covariates <- do.call(rbind, - lapply(temp.covariates, function(temp.covariate) covariates[covariates$name == temp.covariate, ]) - ) - - data <- append.covariate(data, 'temp', covariates) - - # Assign default value for traits with no covariates - data$temp[is.na(data$temp)] <- missing.temp - - # Scale traits - data$mean <- PEcAn.utils::arrhenius.scaling(observed.value = data$mean, old.temp = data$temp, new.temp=new.temp) - data$stat <- PEcAn.utils::arrhenius.scaling(observed.value = data$stat, old.temp = data$temp, new.temp=new.temp) - - #remove temporary covariate column. 
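A hedged sketch of the `PEcAn.utils::arrhenius.scaling()` call pattern used in the scaling steps above; the trait value and temperatures here are invented for illustration. (The line that follows simply drops the temporary `temp` column.)

```
# Rescale a hypothetical Vcmax of 45, measured at 18 degC, to the
# 25 degC reference, mirroring the calls in arrhenius.scaling.traits()
scaled <- PEcAn.utils::arrhenius.scaling(observed.value = 45,
                                         old.temp = 18,
                                         new.temp = 25)
```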
- data<-data[,colnames(data)!='temp'] - } else { - data <- NULL - } - return(data) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' Function to filter out upper canopy leaves -##' -##' @name filter_sunleaf_traits -##' @aliases filter.sunleaf.traits -##' @param data input data -##' @param covariates covariate data -##' -##' @author David LeBauer -filter_sunleaf_traits <- function(data, covariates){ - if(length(covariates)>0) { - data <- append.covariate(data = data, column.name = 'canopy_layer', - covariates.data = covariates[covariates$name == 'canopy_layer',]) - data <- data[data$canopy_layer >= 0.66 | is.na(data$canopy_layer),] - - # remove temporary covariate column - data <- data[,colnames(data)!='canopy_layer'] - } else { - data <- NULL - } - return(data) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' renames the variables within output data frame trait.data -##' -##' @param data data frame to with variables to rename -##' -##' @seealso used with \code{\link[PEcAn.MA]{jagify}}; -##' @export -##' @author David LeBauer -rename_jags_columns <- function(data) { - - # Change variable names and calculate obs.prec within data frame - transformed <- transform(data, - Y = mean, - se = stat, - obs.prec = 1 / (sqrt(n) * stat) ^2, - trt = trt_id, - site = site_id, - cite = citation_id, - ghs = greenhouse) - - # Subset data frame - selected <- subset(transformed, select = c('Y', 'n', 'site', 'trt', 'ghs', 'obs.prec', - 'se', 'cite')) - # Return subset data frame - return(selected) -} -##==================================================================================================# - - - -##--------------------------------------------------------------------------------------------------# -##' Change treatments to sequential integers -##' -##' Assigns all control treatments the same value, then assigns unique treatments -##' within each site. Each site is required to have a control treatment. -##' The algorithm (incorrectly) assumes that each site has a unique set of experimental -##' treatments. -##' @name assign.treatments -##' @title assign.treatments -##' @param data input data -##' @return dataframe with sequential treatments -##' @export -##' @author David LeBauer, Carl Davidson, Alexey Shiklomanov -assign.treatments <- function(data){ - data$trt_id[which(data$control == 1)] <- "control" - sites <- unique(data$site_id) - # Site IDs may be returned as `integer64`, which the `for` loop - # type-coerces to regular integer, which turns it into gibberish. - # Looping over the index instead prevents this type coercion. 
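A minimal sketch of the `integer64` coercion gotcha described in the comment above (requires the `bit64` package; the IDs are arbitrary). The loop that follows iterates over the index for exactly this reason.

```
library(bit64)
ids <- as.integer64(c(1000000022, 1000000109))
for (x in ids) print(x)                    # for() strips the class: gibberish values
for (i in seq_along(ids)) print(ids[[i]])  # indexing preserves integer64
```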
- for (si in seq_along(sites)) { - ss <- sites[[si]] - site.i <- data$site_id == ss - #if only one treatment, it's control - if (length(unique(data$trt_id[site.i])) == 1) data$trt_id[site.i] <- "control" - if (!"control" %in% data$trt_id[site.i]){ - PEcAn.logger::logger.severe(paste0( - "No control treatment set for site_id ", unique(data$site_id[site.i]), - " and citation id ", unique(data$citation_id[site.i]), ".\n", - "Please set control treatment for this site / citation in database.\n" - )) - } - } - return(data) -} - -drop.columns <- function(data, columns){ - return(data[, which(!colnames(data) %in% columns)]) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' sample from normal distribution, given summary stats -##' -##' @name take.samples -##' @title Sample from normal distribution, given summary stats -##' @param summary data.frame with values of mean and sd -##' @param sample.size number of samples to take -##' @return sample of length sample.size -##' @author David LeBauer, Carl Davidson -##' @export -##' @examples -##' ## return the mean when stat = NA -##' take.samples(summary = data.frame(mean = 10, stat = NA)) -##' ## return vector of length \code{sample.size} from N(mean,stat) -##' take.samples(summary = data.frame(mean = 10, stat = 10), sample.size = 10) -##' -take.samples <- function(summary, sample.size = 10^6){ - if(is.na(summary$stat)){ - ans <- summary$mean - } else { - set.seed(0) - ans <- stats::rnorm(n = sample.size, mean = summary$mean, sd = summary$stat) - } - return(ans) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' -##' Performs an arithmetic function, FUN, over a series of traits and returns -##' the result as a derived trait. -##' Traits must be specified as either lists or single row data frames, -##' and must be either single data points or normally distributed. -##' In the event one or more input traits are normally distributed, -##' the resulting distribution is approximated by numerical simulation. -##' The output trait is effectively a copy of the first input trait with -##' modified mean, stat, and n. -##' -##' @name derive.trait -##' @title Performs an arithmetic function, FUN, over a series of traits and returns the result as a derived trait. -##' @param FUN arithmetic function -##' @param ... traits that will be supplied to FUN as input -##' @param input list of trait inputs. 
See examples -##' @param var.name name to use in output -##' @param sample.size number of random samples generated by rnorm for normally distributed trait input -##' @return a copy of the first input trait with mean, stat, and n reflecting the derived trait -##' @export -##' @examples -##' input <- list(x = data.frame(mean = 1, stat = 1, n = 1)) -##' derive.trait(FUN = identity, input = input, var.name = 'x') -derive.trait <- function(FUN, ..., input = list(...), var.name = NA, sample.size = 10^6){ - if(any(lapply(input, nrow) > 1)){ - return(NULL) - } - input.samples <- lapply(input, take.samples, sample.size=sample.size) - output.samples <- do.call(FUN, input.samples) - output <- input[[1]] - output$mean <- mean(output.samples) - output$stat <- ifelse(length(output.samples) > 1, stats::sd(output.samples), NA) - output$n <- min(sapply(input, function(trait){trait$n})) - output$vname <- ifelse(is.na(var.name), output$vname, var.name) - return(output) -} -##==================================================================================================# - - -##--------------------------------------------------------------------------------------------------# -##' Equivalent to derive.trait(), but operates over a series of trait datasets, -##' as opposed to individual trait rows. See \code{\link{derive.trait}}; for more information. -##' -##' @name derive.traits -##' @title Performs an arithmetic function, FUN, over a series of traits and returns the result as a derived trait. -##' @export -##' @param FUN arithmetic function -##' @param ... trait datasets that will be supplied to FUN as input -##' @param input list of trait inputs. See examples in \code{\link{derive.trait}} -##' @param var.name name to use in output -##' @param sample.size where traits are normally distributed with a given -##' @param match.columns in the event more than one trait dataset is supplied, -##' this specifies the columns that identify a unique data point -##' @return a copy of the first input trait with modified mean, stat, and n -derive.traits <- function(FUN, ..., input = list(...), - match.columns = c('citation_id', 'site_id', 'specie_id'), - var.name = NA, sample.size = 10^6){ - if(length(input) == 1){ - input <- input[[1]] - #KLUDGE: modified to handle empty datasets - for(i in (0:nrow(input))[-1]){ - input[i,] <- derive.trait(FUN, input[i,], sample.size=sample.size) - } - return(input) - } - else if(length(match.columns) > 0){ - #function works recursively to reduce the number of match columns - match.column <- match.columns[[1]] - #find unique values within the column that intersect among all input datasets - columns <- lapply(input, function(data){data[[match.column]]}) - intersection <- Reduce(intersect, columns) - - #run derive.traits() on subsets of input that contain those unique values - derived.traits<-lapply(intersection, - function(id){ - filtered.input <- lapply(input, - function(data){data[data[[match.column]] == id,]}) - derive.traits(FUN, input=filtered.input, - match.columns=match.columns[-1], - var.name=var.name, - sample.size=sample.size) - }) - derived.traits <- derived.traits[!is.null(derived.traits)] - derived.traits <- do.call(rbind, derived.traits) - return(derived.traits) - } else { - return(derive.trait(FUN, input = input, var.name = var.name, sample.size = sample.size)) - } -} -##==================================================================================================# - ##--------------------------------------------------------------------------------------------------# 
##' Extract trait data from database @@ -440,114 +29,114 @@ derive.traits <- function(FUN, ..., input = list(...), ##' } ##' @author David LeBauer, Carl Davidson, Shawn Serbin query.trait.data <- function(trait, spstr, con = NULL, update.check.only = FALSE, ids_are_cultivars = FALSE, ...){ - + if(is.list(con)){ PEcAn.logger::logger.warn("WEB QUERY OF DATABASE NOT IMPLEMENTED") return(NULL) } - + # print trait info if(!update.check.only) { PEcAn.logger::logger.info("---------------------------------------------------------") PEcAn.logger::logger.info(trait) } - -### Query the data from the database for trait X. + + ### Query the data from the database for trait X. data <- query.data(trait = trait, spstr = spstr, con = con, store.unconverted = TRUE, ids_are_cultivars = ids_are_cultivars) - -### Query associated covariates from database for trait X. + + ### Query associated covariates from database for trait X. covariates <- query.covariates(trait.ids = data$id, con = con) canopy.layer.covs <- covariates[covariates$name == 'canopy_layer', ] - -### Set small sample size for derived traits if update-checking only. Otherwise use default. + + ### Set small sample size for derived traits if update-checking only. Otherwise use default. if(update.check.only) { sample.size <- 10 } else { sample.size <- 10^6 ## Same default as derive.trait(), derive.traits(), and take.samples() } - + if(trait == 'Vcmax') { -######################### VCMAX ############################ -### Apply Arrhenius scaling to convert Vcmax at measurement temp to that at 25 degC (ref temp). + ######################### VCMAX ############################ + ### Apply Arrhenius scaling to convert Vcmax at measurement temp to that at 25 degC (ref temp). data <- arrhenius.scaling.traits(data = data, covariates = covariates, temp.covariates = c('leafT', 'airT','T')) - -### Keep only top of canopy/sunlit leaf samples based on covariate. + + ### Keep only top of canopy/sunlit leaf samples based on covariate. if(nrow(canopy.layer.covs) > 0) data <- filter_sunleaf_traits(data = data, covariates = canopy.layer.covs) - + ## select only summer data for Panicum virgatum ##TODO fix following hack to select only summer data if (spstr == "'938'"){ data <- subset(data, subset = data$month %in% c(0,5,6,7)) } - + } else if (trait == 'SLA') { -######################### SLA ############################ + ######################### SLA ############################ ## convert LMA to SLA data <- rbind(data, derive.traits(function(lma){1/lma}, query.data('LMA', spstr, con=con, store.unconverted=TRUE, - ids_are_cultivars=ids_are_cultivars), + ids_are_cultivars=ids_are_cultivars), sample.size=sample.size)) - + ### Keep only top of canopy/sunlit leaf samples based on covariate. 
if(nrow(canopy.layer.covs) > 0) data <- filter_sunleaf_traits(data = data, covariates = canopy.layer.covs) - + ## select only summer data for Panicum virgatum ##TODO fix following hack to select only summer data if (spstr == "'938'"){ data <- subset(data, subset = data$month %in% c(0,5,6,7,8,NA)) } - + } else if (trait == 'leaf_turnover_rate'){ -######################### LEAF TURNOVER ############################ + ######################### LEAF TURNOVER ############################ ## convert Longevity to Turnover data <- rbind(data, derive.traits(function(leaf.longevity){ 1 / leaf.longevity }, query.data('Leaf Longevity', spstr, con = con, store.unconverted = TRUE, - ids_are_cultivars = ids_are_cultivars), + ids_are_cultivars = ids_are_cultivars), sample.size = sample.size)) - + } else if (trait == 'root_respiration_rate') { -######################### ROOT RESPIRATION ############################ + ######################### ROOT RESPIRATION ############################ ## Apply Arrhenius scaling to convert root respiration at measurement temp ## to that at 25 degC (ref temp). data <- arrhenius.scaling.traits(data = data, covariates = covariates, temp.covariates = c('rootT', 'airT','soilT')) - + } else if (trait == 'leaf_respiration_rate_m2') { -######################### LEAF RESPIRATION ############################ + ######################### LEAF RESPIRATION ############################ ## Apply Arrhenius scaling to convert leaf respiration at measurement temp ## to that at 25 degC (ref temp). - data <- arrhenius.scaling.traits(data = data, covariates = covariates, temp.covariates = c('leafT', 'airT','T')) - + data <- arrhenius.scaling.traits(data = data, covariates = covariates, temp.covariates = c('leafT', 'airT','T')) + } else if (trait == 'stem_respiration_rate') { -######################### STEM RESPIRATION ############################ + ######################### STEM RESPIRATION ############################ ## Apply Arrhenius scaling to convert stem respiration at measurement temp ## to that at 25 degC (ref temp). data <- arrhenius.scaling.traits(data = data, covariates = covariates, temp.covariates = c('stemT', 'airT','T')) - + } else if (trait == 'c2n_leaf') { -######################### LEAF C:N ############################ - + ######################### LEAF C:N ############################ + data <- rbind(data, derive.traits(function(leafN){ 48 / leafN }, query.data('leafN', spstr, con = con, store.unconverted = TRUE, - ids_are_cultivars = ids_are_cultivars), + ids_are_cultivars = ids_are_cultivars), sample.size = sample.size)) - + } else if (trait == 'fineroot2leaf') { -######################### FINE ROOT ALLOCATION ############################ + ######################### FINE ROOT ALLOCATION ############################ ## FRC_LC is the ratio of fine root carbon to leaf carbon data <- rbind(data, query.data(trait = 'FRC_LC', spstr = spstr, con = con, store.unconverted = TRUE, ids_are_cultivars = ids_are_cultivars)) } result <- data - + ## if result is empty, stop run - + if (nrow(result) == 0) { return(NA) warning(paste("there is no data for", trait)) } else { - + ## Do we really want to print each trait table?? Seems like a lot of ## info to send to console. Maybe just print summary stats? 
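A self-contained sketch of the `derive.trait()` pattern used above for SLA, built from one invented LMA record; it assumes `derive.trait()` and `take.samples()` from this package are loaded.

```
# Derive SLA = 1/LMA from a single hypothetical LMA record
lma <- data.frame(mean = 80, stat = 5, n = 10, vname = "LMA")
sla <- derive.trait(function(lma) 1 / lma, lma, var.name = "SLA")
sla$vname  # "SLA"; mean, stat, and n now describe the derived distribution
```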
## print(result) @@ -558,10 +147,4 @@ query.trait.data <- function(trait, spstr, con = NULL, update.check.only = FALSE # print list of traits queried and number by outdoor/glasshouse return(result) } -} -##==================================================================================================# - - -#################################################################################################### -### EOF. End of R script file. -#################################################################################################### +} \ No newline at end of file diff --git a/base/db/R/query.traits.R b/base/db/R/query.traits.R index c156550ecc7..df83a2e4ff5 100644 --- a/base/db/R/query.traits.R +++ b/base/db/R/query.traits.R @@ -6,7 +6,7 @@ # which accompanies this distribution, and is available at # http://opensource.ncsa.illinois.edu/license.html #------------------------------------------------------------------------------- -#--------------------------------------------------------------------------------------------------# +#------------------------------------------------------------------------------# ##' Query available trait data associated with a given pft and a list of traits ##' ##' @name query.traits @@ -30,7 +30,7 @@ query.traits <- function(ids, priors, con = NULL, update.check.only=FALSE, ids_are_cultivars=FALSE){ - + if(is.null(con)){ con <- db.open(settings$database$bety) on.exit(db.close(con)) @@ -40,25 +40,25 @@ query.traits <- function(ids, priors, con = NULL, print("WEB QUERY OF DATABASE NOT IMPLEMENTED") return(NULL) } - + if (length(ids) == 0 || length(priors) == 0) { return(list()) } - + id_type = rlang::sym(if (ids_are_cultivars) {"cultivar_id"} else {"specie_id"}) - + traits <- (dplyr::tbl(con, "traits") - %>% dplyr::inner_join(dplyr::tbl(con, "variables"), by = c("variable_id" = "id")) - %>% dplyr::filter( - (!!id_type %in% ids), - (name %in% !!priors)) # TODO: use .data$name when filter supports it - %>% dplyr::distinct(name) # TODO: use .data$name when distinct supports it - %>% dplyr::collect()) - + %>% dplyr::inner_join(dplyr::tbl(con, "variables"), by = c("variable_id" = "id")) + %>% dplyr::filter( + (!!id_type %in% ids), + (name %in% !!priors)) # TODO: use .data$name when filter supports it + %>% dplyr::distinct(name) # TODO: use .data$name when distinct supports it + %>% dplyr::collect()) + if (nrow(traits) == 0) { return(list()) } - + ### Grab trait data trait.data <- lapply(traits$name, function(trait){ query.trait.data( @@ -69,12 +69,6 @@ query.traits <- function(ids, priors, con = NULL, ids_are_cultivars = ids_are_cultivars) }) names(trait.data) <- traits$name - + return(trait.data) -} -#==================================================================================================# - - -#################################################################################################### -### EOF. End of R script file. -#################################################################################################### +} \ No newline at end of file diff --git a/base/db/R/query.yields.R b/base/db/R/query.yields.R new file mode 100644 index 00000000000..d28c8bf10dd --- /dev/null +++ b/base/db/R/query.yields.R @@ -0,0 +1,45 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##--------------------------------------------------------------------------------------------------# +##' Function to query yields data from database for specific species and convert stat to SE +##' +##' @name query.yields +##' @title Query yield data and transform stats to SE by calling \code{\link{fetch.stats2se}}; +##' @param trait yield trait to query +##' @param spstr species to query for yield data +##' @param extra.columns other query terms to pass in. Optional +##' @param con database connection +##' @param ids_are_cultivars if TRUE, spstr contains cultivar IDs, otherwise they are species IDs +##' @param ... extra arguments +##' @seealso used in \code{\link{query.trait.data}}; \code{\link{fetch.stats2se}}; \code{\link{transformstats}} performs transformation calculations +##' @author +query.yields <- function(trait = 'yield', spstr, extra.columns = '', con = NULL, + ids_are_cultivars = FALSE, ...){ + + member_column <- if (ids_are_cultivars) {"cultivar_id"} else {"specie_id"} + query <- paste("select + yields.id, yields.citation_id, yields.site_id, treatments.name, + yields.date, yields.time, yields.cultivar_id, yields.specie_id, + yields.mean, yields.statname, yields.stat, yields.n, + variables.name as vname, + month(yields.date) as month,", + extra.columns, + "treatments.control, sites.greenhouse + from yields + left join treatments on (yields.treatment_id = treatments.id) + left join sites on (yields.site_id = sites.id) + left join variables on (yields.variable_id = variables.id) + where ", member_column, " in (", spstr,");", sep = "") + if(!trait == 'yield'){ + query <- gsub(");", paste(" and variables.name in ('", trait,"');", sep = ""), query) + } + + return(fetch.stats2se(connection = con, query = query)) +} \ No newline at end of file diff --git a/base/db/R/rename_jags_columns.R b/base/db/R/rename_jags_columns.R new file mode 100644 index 00000000000..3e82ecfa445 --- /dev/null +++ b/base/db/R/rename_jags_columns.R @@ -0,0 +1,36 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##-----------------------------------------------------------------------------# +##' renames the variables within output data frame trait.data +##' +##' @param data data frame to with variables to rename +##' +##' @seealso used with \code{\link[PEcAn.MA]{jagify}}; +##' @export +##' @author David LeBauer +rename_jags_columns <- function(data) { + + # Change variable names and calculate obs.prec within data frame + transformed <- transform(data, + Y = mean, + se = stat, + obs.prec = 1 / (sqrt(n) * stat) ^2, + trt = trt_id, + site = site_id, + cite = citation_id, + ghs = greenhouse) + + # Subset data frame + selected <- subset(transformed, select = c('Y', 'n', 'site', 'trt', 'ghs', 'obs.prec', + 'se', 'cite')) + # Return subset data frame + return(selected) +} +##=============================================================================# diff --git a/base/db/R/symmetric_setdiff.R b/base/db/R/symmetric_setdiff.R new file mode 100644 index 00000000000..1841bde284e --- /dev/null +++ b/base/db/R/symmetric_setdiff.R @@ -0,0 +1,65 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +#' Symmetric set difference of two data frames +#' +#' @param x,y `data.frame`s to compare +#' @param xname Label for data in x but not y. Default = "x" +#' @param yname Label for data in y but not x. Default = "y" +#' @param namecol Name of label column. Default = "source". +#' @param simplify_types (Logical) If `TRUE`, coerce anything that +#' isn't numeric to character, to facilitate comparison. +#' @return `data.frame` of data not common to x and y, with additional +#' column (`namecol`) indicating whether data are only in x +#' (`xname`) or y (`yname`) +#' @export +#' @examples +#' xdf <- data.frame(a = c("a", "b", "c"), +#' b = c(1, 2, 3), +#' stringsAsFactors = FALSE) +#' ydf <- data.frame(a = c("a", "b", "d"), +#' b = c(1, 2.5, 3), +#' stringsAsFactors = FALSE) +#' symmetric_setdiff(xdf, ydf) +symmetric_setdiff <- function(x, y, xname = "x", yname = "y", + namecol = "source", simplify_types = TRUE) { + stopifnot(is.data.frame(x), is.data.frame(y), + is.character(xname), is.character(yname), + length(xname) == 1, length(yname) == 1) + is_i64 <- c( + vapply(x, inherits, logical(1), what = "integer64"), + vapply(y, inherits, logical(1), what = "integer64") + ) + if (any(is_i64)) { + PEcAn.logger::logger.debug( + "Detected at least one `integer64` column. ", + "Converting to `numeric` for comparison." + ) + if (requireNamespace("bit64", quietly = TRUE)) { + x <- dplyr::mutate_if(x, bit64::is.integer64, as.numeric) + y <- dplyr::mutate_if(y, bit64::is.integer64, as.numeric) + } else { + PEcAn.logger::logger.warn( + '"bit64" package required for `integer64` conversion, but not installed. ', + "Skipping conversion, which may produce weird results!" 
+      )
+    }
+  }
+  if (simplify_types) {
+    x <- dplyr::mutate_if(x, ~!is.numeric(.), as.character)
+    y <- dplyr::mutate_if(y, ~!is.numeric(.), as.character)
+  }
+  namecol <- dplyr::sym(namecol)
+  xy <- dplyr::setdiff(x, y) %>%
+    dplyr::mutate(!!namecol := xname)
+  yx <- dplyr::setdiff(y, x) %>%
+    dplyr::mutate(!!namecol := yname)
+  dplyr::bind_rows(xy, yx) %>%
+    dplyr::select(!!namecol, dplyr::everything())
+}
\ No newline at end of file
diff --git a/base/db/R/take.samples.R b/base/db/R/take.samples.R
new file mode 100644
index 00000000000..aa755a7e45e
--- /dev/null
+++ b/base/db/R/take.samples.R
@@ -0,0 +1,35 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##-----------------------------------------------------------------------------#
+##' sample from normal distribution, given summary stats
+##'
+##' @name take.samples
+##' @title Sample from normal distribution, given summary stats
+##' @param summary data.frame with values of mean and sd
+##' @param sample.size number of samples to take
+##' @return sample of length sample.size
+##' @author David LeBauer, Carl Davidson
+##' @export
+##' @examples
+##' ## return the mean when stat = NA
+##' take.samples(summary = data.frame(mean = 10, stat = NA))
+##' ## return vector of length \code{sample.size} from N(mean,stat)
+##' take.samples(summary = data.frame(mean = 10, stat = 10), sample.size = 10)
+##'
+take.samples <- function(summary, sample.size = 10^6){
+  if(is.na(summary$stat)){
+    ans <- summary$mean
+  } else {
+    set.seed(0)
+    ans <- stats::rnorm(n = sample.size, mean = summary$mean, sd = summary$stat)
+  }
+  return(ans)
+}
+##=============================================================================#
diff --git a/base/db/man/append.covariate.Rd b/base/db/man/append.covariate.Rd
index ccc4f1d33c5..f964667171e 100644
--- a/base/db/man/append.covariate.Rd
+++ b/base/db/man/append.covariate.Rd
@@ -1,5 +1,5 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/query.trait.data.R
+% Please edit documentation in R/covariate.functions.R
 \name{append.covariate}
 \alias{append.covariate}
 \title{Append covariate data as a column within a table}
diff --git a/base/db/man/arrhenius.scaling.traits.Rd b/base/db/man/arrhenius.scaling.traits.Rd
index 2f9db05abd3..5402b3a2b39 100644
--- a/base/db/man/arrhenius.scaling.traits.Rd
+++ b/base/db/man/arrhenius.scaling.traits.Rd
@@ -1,5 +1,5 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/query.trait.data.R
+% Please edit documentation in R/covariate.functions.R
 \name{arrhenius.scaling.traits}
 \alias{arrhenius.scaling.traits}
 \title{Apply Arrhenius scaling to 25 degC for temperature-dependent traits}
diff --git a/base/db/man/assign.treatments.Rd b/base/db/man/assign.treatments.Rd
index 9f20a925b01..22ae629d18e 100644
--- a/base/db/man/assign.treatments.Rd
+++ b/base/db/man/assign.treatments.Rd
@@ -1,5 +1,5 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/query.trait.data.R
+% Please edit documentation in R/assign.treatments.R
 \name{assign.treatments}
 \alias{assign.treatments}
\title{assign.treatments} diff --git a/base/db/man/check.lists.Rd b/base/db/man/check.lists.Rd index 70fd997c9be..8063dc975c0 100644 --- a/base/db/man/check.lists.Rd +++ b/base/db/man/check.lists.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.trait.data.R +% Please edit documentation in R/check.lists.R \name{check.lists} \alias{check.lists} \title{Compares two lists} diff --git a/base/db/man/derive.trait.Rd b/base/db/man/derive.trait.Rd index f9eff2630e7..f47d1a9dca3 100644 --- a/base/db/man/derive.trait.Rd +++ b/base/db/man/derive.trait.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/derive.trait.R \name{derive.trait} \alias{derive.trait} \title{Performs an arithmetic function, FUN, over a series of traits and returns the result as a derived trait.} diff --git a/base/db/man/derive.traits.Rd b/base/db/man/derive.traits.Rd index 578018a5a8d..0c9222442df 100644 --- a/base/db/man/derive.traits.Rd +++ b/base/db/man/derive.traits.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/derive.traits.R \name{derive.traits} \alias{derive.traits} \title{Performs an arithmetic function, FUN, over a series of traits and returns the result as a derived trait.} diff --git a/base/db/man/fetch.stats2se.Rd b/base/db/man/fetch.stats2se.Rd index 5878ac6a46f..481bfdd1bee 100644 --- a/base/db/man/fetch.stats2se.Rd +++ b/base/db/man/fetch.stats2se.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/fetch.stats2se.R \name{fetch.stats2se} \alias{fetch.stats2se} \title{Fetch data and transform stats to SE} diff --git a/base/db/man/filter_sunleaf_traits.Rd b/base/db/man/filter_sunleaf_traits.Rd index 30a35d494e6..0e47497f451 100644 --- a/base/db/man/filter_sunleaf_traits.Rd +++ b/base/db/man/filter_sunleaf_traits.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/covariate.functions.R \name{filter_sunleaf_traits} \alias{filter_sunleaf_traits} \alias{filter.sunleaf.traits} diff --git a/base/db/man/get.trait.data.pft.Rd b/base/db/man/get.trait.data.pft.Rd index 5ad8cce27ee..2aedeaf6bbf 100644 --- a/base/db/man/get.trait.data.pft.Rd +++ b/base/db/man/get.trait.data.pft.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.trait.data.R +% Please edit documentation in R/get.trait.data.pft.R \name{get.trait.data.pft} \alias{get.trait.data.pft} \title{Get trait data from the database for a single PFT} diff --git a/base/db/man/query.covariates.Rd b/base/db/man/query.covariates.Rd index fb7167c526b..ac2081a91ae 100644 --- a/base/db/man/query.covariates.Rd +++ b/base/db/man/query.covariates.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/covariate.functions.R \name{query.covariates} \alias{query.covariates} \title{Queries covariates from database for a given vector of trait id's} diff --git a/base/db/man/query.data.Rd b/base/db/man/query.data.Rd index e8bd1dfe0c8..751a85af70c 100644 --- a/base/db/man/query.data.Rd +++ b/base/db/man/query.data.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit 
documentation in R/query.trait.data.R +% Please edit documentation in R/query.data.R \name{query.data} \alias{query.data} \title{Query data and transform stats to SE by calling \code{\link{fetch.stats2se}};} diff --git a/base/db/man/query.yields.Rd b/base/db/man/query.yields.Rd index 66e565205f7..f4e5e09cd7e 100644 --- a/base/db/man/query.yields.Rd +++ b/base/db/man/query.yields.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/query.yields.R \name{query.yields} \alias{query.yields} \title{Query yield data and transform stats to SE by calling \code{\link{fetch.stats2se}};} diff --git a/base/db/man/rename_jags_columns.Rd b/base/db/man/rename_jags_columns.Rd index da89de92e8d..5a53963c293 100644 --- a/base/db/man/rename_jags_columns.Rd +++ b/base/db/man/rename_jags_columns.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/rename_jags_columns.R \name{rename_jags_columns} \alias{rename_jags_columns} \title{renames the variables within output data frame trait.data} diff --git a/base/db/man/symmetric_setdiff.Rd b/base/db/man/symmetric_setdiff.Rd index 920a8007d8c..8fb3009ae48 100644 --- a/base/db/man/symmetric_setdiff.Rd +++ b/base/db/man/symmetric_setdiff.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.trait.data.R +% Please edit documentation in R/symmetric_setdiff.R \name{symmetric_setdiff} \alias{symmetric_setdiff} \title{Symmetric set difference of two data frames} diff --git a/base/db/man/take.samples.Rd b/base/db/man/take.samples.Rd index 94427ab36fb..f834d60df80 100644 --- a/base/db/man/take.samples.Rd +++ b/base/db/man/take.samples.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/query.trait.data.R +% Please edit documentation in R/take.samples.R \name{take.samples} \alias{take.samples} \title{Sample from normal distribution, given summary stats} From 30315c3061f8b8a4b4943d9345467a5a1f1d361b Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 16 Sep 2019 11:26:34 -0400 Subject: [PATCH 0368/2289] updating downscale to include different forecasts lengths --- .../inst/WillowCreek/workflow.template.R | 21 ++++++++++++++----- .../R/download.NOAA_GEFS_downscale.R | 6 +++--- 2 files changed, 19 insertions(+), 8 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index d3ff39b6bfb..973cc721b42 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -7,6 +7,7 @@ library(RCurl) library(REddyProc) library(tidyverse) library(furrr) +library(R.utils) plan(multiprocess) # ---------------------------------------------------------------------------------------------- #------------------------------------------ That's all we need xml path and the out folder ----- @@ -220,13 +221,13 @@ if (nodata) { file.copy(from= file.path(restart.path, "SDA", "outconfig.Rdata"), to = file.path(settings$outdir, "SDA", "outconfig.Rdata")) -#Update the SDA Output +#Update the SDA Output to just have last time step load(file.path(restart.path, "SDA", "sda.output.Rdata")) ANALYSIS1 = list() FORECAST1 = list() enkf.params1 = list() - ANALYSIS[[1]]= ANALYSIS[[length(ANALYSIS)]] + ANALYSIS1[[1]]= ANALYSIS[[length(ANALYSIS)]] 
FORECAST1[[1]] = FORECAST[[length(FORECAST)]] enkf.params1[[1]] = enkf.params[[length(enkf.params)]] t = 1 @@ -234,19 +235,29 @@ if (nodata) { FORECAST = FORECAST1 enfk.params = enkf.params1 - save(list = c(ANALYSIS, FORECAST, enkf.params, t, ensemble.samples, inputs, new.params, new.state, site.locs, Viz.output, X, ensemble.id, run.id), file = file.path(restart.path, "SDA", "sda.output.Rdata")) + save(list = c("ANALYSIS", "FORECAST", "enkf.params", "t", "ensemble.samples", "inputs", "new.params", "new.state", "site.locs", "Viz.output", "X", "ensemble.id", "run.id"), file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) + +#copy over run and out folders + + if(!dir.exists("run")) dir.create("run",showWarnings = F) + copyDirectory(from = file.path(restart.path, "run/"), + to = file.path(settings$outdir, "run/")) + if(!dir.exists("out")) dir.create("out",showWarnings = F) + copyDirectory(from = file.path(restart.path, "out/"), + to = file.path(settings$outdir, "out/")) + # -------------------------------------------------------------------------------------------------- #--------------------------------- Run state data assimilation ------------------------------------- # -------------------------------------------------------------------------------------------------- -unlink(c('run','out', "SDA"), recursive = T) +#unlink(c('run','out', "SDA"), recursive = T) if ('state.data.assimilation' %in% names(settings)) { if (PEcAn.utils::status.check("SDA") == 0) { PEcAn.utils::status.start("SDA") PEcAn.assim.sequential::sda.enkf( settings, - restart=FALSE, + restart=TRUE, Q=0, obs.mean = obs.mean, obs.cov = obs.cov, diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R index e23404f4835..c090a95a326 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R @@ -207,17 +207,17 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st forecasts = matrix(ncol = length(noaa_data)+ 2, nrow = 0) colnames(forecasts) <- c(cf_var_names, "timestamp", "NOAA.member") - index = matrix(ncol = length(noaa_data), nrow = 64) + index = matrix(ncol = length(noaa_data), nrow = length(time)) for(i in 1:21){ rm(index) - index = matrix(ncol = length(noaa_data), nrow = 64) + index = matrix(ncol = length(noaa_data), nrow = length(time)) for(j in 1:length(noaa_data)){ index[,j] <- noaa_data[[j]][i,] colnames(index) <- c(cf_var_names) index <- as.data.frame(index) } index$timestamp <- as.POSIXct(time) - index$NOAA.member <- rep(i, times = 64) + index$NOAA.member <- rep(i, times = length(time)) forecasts <- rbind(forecasts, index) } From c98485c9aee29087dcbd68bf1efbb1e812c4c8d1 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 16 Sep 2019 16:21:19 -0400 Subject: [PATCH 0369/2289] Finer control over BETY con in integration tests --- base/workflow/R/create_execute_test_xml.R | 35 ++++++++++++++------ base/workflow/man/create_execute_test_xml.Rd | 17 +++++++--- 2 files changed, 37 insertions(+), 15 deletions(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index 6cd4897b4da..1345094b864 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -18,9 +18,14 @@ #' analysis. 
Default = `"NPP"` #' @param sensitivity (logical) Whether or not to perform a sensitivity analysis #' (default = `FALSE`) -#' @param db_bety_username (character) BETY username for workflow (default = `"bety"`) -#' @param db_bety_password (character) BETY password for workflow (default = `"bety"`) -#' @param db_bety_hostname (character) BETY hostname for workflow (default = `"localhost"`) +#' @param db_bety_username (character) BETY username for workflow. Default is +#' the value from the `config.php`. +#' @param db_bety_password (character) BETY password for workflow. Default is +#' the value from `config.php`. +#' @param db_bety_hostname (character) BETY hostname for workflow. Default is +#' the value from `config.php` +#' @param db_bety_hostname (character) BETY connection port for workflow. +#' Default is the value from `config.php` #' @param db_bety_driver (character) BETY DBI driver for workflow (default = `"Postgres"`) #' @return #' @author Alexey Shiklomanov, Tony Gardella @@ -38,15 +43,25 @@ create_execute_test_xml <- function(model_id, ensemble_size = 1, sensitivity_variable = "NPP", sensitivity = FALSE, - db_bety_username = "bety", - db_bety_password = "bety", - db_bety_hostname = "localhost", + db_bety_username = NULL, + db_bety_password = NULL, + db_bety_hostname = NULL, + db_bety_port = NULL, db_bety_driver = "Postgres") { php_file <- file.path(pecan_path, "web", "config.php") config.list <- PEcAn.utils::read_web_config(php_file) - bety <- PEcAn.DB::betyConnect(php_file) - con <- bety$con + if (is.null(db_bety_username)) db_bety_username <- config.list$db_bety_username + if (is.null(db_bety_password)) db_bety_password <- config.list$db_bety_password + if (is.null(db_bety_hostname)) db_bety_hostname <- config.list$db_bety_hostname + if (is.null(db_bety_port)) db_bety_port <- config.list$db_bety_port + con <- PEcAn.DB::db.open(list( + user = db_bety_username, + password = db_bety_password, + host = db_bety_hostname, + port = db_bety_port, + driver = db_bety_driver + )) on.exit(DBI::dbDisconnect(con), add = TRUE) settings <- list( @@ -57,7 +72,7 @@ create_execute_test_xml <- function(model_id, ) #Outdir - model.new <- tbl(bety, "models") %>% + model.new <- tbl(con, "models") %>% filter(id == !!model_id) %>% collect() outdir_pre <- paste( @@ -88,7 +103,7 @@ create_execute_test_xml <- function(model_id, #PFT if (is.null(pft)){ # Select the first PFT in the model list. - pft <- tbl(bety, "pfts") %>% + pft <- tbl(con, "pfts") %>% filter(modeltype_id == !!model.new$modeltype_id) %>% collect() pft <- pft$name[[1]] diff --git a/base/workflow/man/create_execute_test_xml.Rd b/base/workflow/man/create_execute_test_xml.Rd index ed6e91e5897..b85a60ce6db 100644 --- a/base/workflow/man/create_execute_test_xml.Rd +++ b/base/workflow/man/create_execute_test_xml.Rd @@ -8,8 +8,9 @@ create_execute_test_xml(model_id, met, site_id, start_date, end_date, dbfiles_folder, user_id, output_folder = "batch_test_output", pecan_path = getwd(), pft = NULL, ensemble_size = 1, sensitivity_variable = "NPP", sensitivity = FALSE, - db_bety_username = "bety", db_bety_password = "bety", - db_bety_hostname = "localhost", db_bety_driver = "Postgres") + db_bety_username = NULL, db_bety_password = NULL, + db_bety_hostname = NULL, db_bety_port = NULL, + db_bety_driver = "Postgres") } \arguments{ \item{model_id}{(numeric) Model ID (from `models` table)} @@ -43,13 +44,19 @@ analysis. 
Default = `"NPP"`} \item{sensitivity}{(logical) Whether or not to perform a sensitivity analysis (default = `FALSE`)} -\item{db_bety_username}{(character) BETY username for workflow (default = `"bety"`)} +\item{db_bety_username}{(character) BETY username for workflow. Default is +the value from the `config.php`.} -\item{db_bety_password}{(character) BETY password for workflow (default = `"bety"`)} +\item{db_bety_password}{(character) BETY password for workflow. Default is +the value from `config.php`.} -\item{db_bety_hostname}{(character) BETY hostname for workflow (default = `"localhost"`)} +\item{db_bety_hostname}{(character) BETY hostname for workflow. Default is +the value from `config.php`} \item{db_bety_driver}{(character) BETY DBI driver for workflow (default = `"Postgres"`)} + +\item{db_bety_hostname}{(character) BETY connection port for workflow. +Default is the value from `config.php`} } \description{ Create a PEcAn XML file and use it to run a PEcAn workflow From b0e4d6938c8e6bd8c4b291b3e661b334036af604 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 17 Sep 2019 08:36:47 -0400 Subject: [PATCH 0370/2289] Typo in base/workflow/R/create_execute_test_xml.R Co-Authored-By: Chris Black --- base/workflow/R/create_execute_test_xml.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index 1345094b864..0ae1cf1970e 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -24,7 +24,7 @@ #' the value from `config.php`. #' @param db_bety_hostname (character) BETY hostname for workflow. Default is #' the value from `config.php` -#' @param db_bety_hostname (character) BETY connection port for workflow. +#' @param db_bety_port (character) BETY connection port for workflow. #' Default is the value from `config.php` #' @param db_bety_driver (character) BETY DBI driver for workflow (default = `"Postgres"`) #' @return From 9c68101eb726eb2bf3975f53626706e8ead5146b Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 17 Sep 2019 08:53:33 -0400 Subject: [PATCH 0371/2289] Revise BETY param docs for create_execute_test_xml Per @infotroph suggestions. --- base/workflow/R/create_execute_test_xml.R | 13 ++++--------- base/workflow/man/create_execute_test_xml.Rd | 15 +++------------ 2 files changed, 7 insertions(+), 21 deletions(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index 0ae1cf1970e..782590eed86 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -18,15 +18,10 @@ #' analysis. Default = `"NPP"` #' @param sensitivity (logical) Whether or not to perform a sensitivity analysis #' (default = `FALSE`) -#' @param db_bety_username (character) BETY username for workflow. Default is -#' the value from the `config.php`. -#' @param db_bety_password (character) BETY password for workflow. Default is -#' the value from `config.php`. -#' @param db_bety_hostname (character) BETY hostname for workflow. Default is -#' the value from `config.php` -#' @param db_bety_port (character) BETY connection port for workflow. -#' Default is the value from `config.php` -#' @param db_bety_driver (character) BETY DBI driver for workflow (default = `"Postgres"`) +#' @param db_bety_username,db_bety_password,db_bety_hostname,db_bety_port +#' (character) BETY database connection options. Default values for all of +#' these are pulled from `/web/config.php`. 
+#' @param db_bety_driver (character) BETY database connection driver (default = `"Postgres"`) #' @return #' @author Alexey Shiklomanov, Tony Gardella #' @export diff --git a/base/workflow/man/create_execute_test_xml.Rd b/base/workflow/man/create_execute_test_xml.Rd index b85a60ce6db..3379b8b5a48 100644 --- a/base/workflow/man/create_execute_test_xml.Rd +++ b/base/workflow/man/create_execute_test_xml.Rd @@ -44,19 +44,10 @@ analysis. Default = `"NPP"`} \item{sensitivity}{(logical) Whether or not to perform a sensitivity analysis (default = `FALSE`)} -\item{db_bety_username}{(character) BETY username for workflow. Default is -the value from the `config.php`.} +\item{db_bety_username, db_bety_password, db_bety_hostname, db_bety_port}{(character) BETY database connection options. Default values for all of +these are pulled from `/web/config.php`.} -\item{db_bety_password}{(character) BETY password for workflow. Default is -the value from `config.php`.} - -\item{db_bety_hostname}{(character) BETY hostname for workflow. Default is -the value from `config.php`} - -\item{db_bety_driver}{(character) BETY DBI driver for workflow (default = `"Postgres"`)} - -\item{db_bety_hostname}{(character) BETY connection port for workflow. -Default is the value from `config.php`} +\item{db_bety_driver}{(character) BETY database connection driver (default = `"Postgres"`)} } \description{ Create a PEcAn XML file and use it to run a PEcAn workflow From e1015d619ba9784c1ba35edd69343aecf9cb03c1 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 17 Sep 2019 09:11:04 -0400 Subject: [PATCH 0372/2289] BUILD: Use `options` instead of arguments This fixes what seems to be a bug in `devtools::install_deps` that doesn't understand that `Ncpus` is a valid argument. Also, use chained `Rscript -e "..." -e "..."` instead of single strings with complex escaping. This feels cleaner. 
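For illustration, a minimal sketch in plain R of the pattern this commit moves to (the package path and CPU count here are arbitrary examples, not values from the Makefile). Because `install.packages()` reads `Ncpus` from the global options, setting it up front sidesteps passing it as an argument that `devtools::install_deps()` does not appear to accept:

    # Sketch only: set Ncpus (and the repo) as global options first,
    # so install.packages() picks them up without devtools having to
    # forward an Ncpus argument.
    options(Ncpus = 4, repos = "http://cran.rstudio.com")
    devtools::install_deps("base/utils", dependencies = TRUE, upgrade = FALSE)

This mirrors the chained `Rscript -e ${SETROPTIONS} -e "..."` calls below: each `-e` expression is evaluated in order within a single R session, so the `options()` call takes effect before the install runs.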
--- Makefile | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/Makefile b/Makefile index 5d0dd6cf04e..ecb3e630c91 100644 --- a/Makefile +++ b/Makefile @@ -81,32 +81,34 @@ $(subst .doc/models/template,,$(MODELS_D)): .install/models/template include Makefile.depends +SETROPTIONS = "options(Ncpus = ${NCPUS}, repos = 'http://cran.rstudio.com')" + clean: rm -rf .install .check .test .doc find modules/rtm/src \( -name \*.mod -o -name \*.o -o -name \*.so \) -delete .install/devtools: | .install - + ./scripts/time.sh "${1}" Rscript -e "if(!requireNamespace('devtools', quietly = TRUE)) install.packages('devtools', repos = 'http://cran.rstudio.com', Ncpus = ${NCPUS})" + + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('devtools', quietly = TRUE)) install.packages('devtools')" echo `date` > $@ .install/roxygen2: | .install - + ./scripts/time.sh "${1}" Rscript -e "if(!requireNamespace('roxygen2', quietly = TRUE)) install.packages('roxygen2', repos = 'http://cran.rstudio.com', Ncpus = ${NCPUS})" + + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('roxygen2', quietly = TRUE)) install.packages('roxygen2')" echo `date` > $@ .install/testthat: | .install - + ./scripts/time.sh "${1}" Rscript -e "if(!requireNamespace('testthat', quietly = TRUE)) install.packages('testthat', repos = 'http://cran.rstudio.com', Ncpus = ${NCPUS})" + + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('testthat', quietly = TRUE)) install.packages('testthat')" echo `date` > $@ .install/mockery: | .install - + ./scripts/time.sh "${1}" Rscript -e "if(!requireNamespace('mockery', quietly = TRUE)) install.packages('mockery', repos = 'http://cran.rstudio.com', Ncpus = ${NCPUS})" + + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('mockery', quietly = TRUE)) install.packages('mockery')" echo `date` > $@ # HACK: assigning to `deps` is an ugly workaround for circular dependencies in utils pkg. # When these are fixed, can go back to simple `dependencies = TRUE` -depends_R_pkg = ./scripts/time.sh "${1}" Rscript -e " \ - deps <- if (grepl('base/utils', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }; \ - devtools::install_deps('$(strip $(1))', Ncpus = ${NCPUS}, dependencies = deps, upgrade=FALSE);" -install_R_pkg = ./scripts/time.sh "${1}" Rscript -e "devtools::install('$(strip $(1))', Ncpus = ${NCPUS}, upgrade=FALSE);" +depends_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} \ + -e "deps <- if (grepl('base/utils', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ + -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" +install_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" check_R_pkg = ./scripts/time.sh "${1}" Rscript scripts/check_with_errors.R $(strip $(1)) test_R_pkg = ./scripts/time.sh "${1}" Rscript -e "devtools::test('"$(strip $(1))"', stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can doc_R_pkg = ./scripts/time.sh "${1}" Rscript -e "devtools::document('"$(strip $(1))"')" From 3a78880e0ff765f5828cf691c1259e04a64f3211 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 17 Sep 2019 17:46:28 +0200 Subject: [PATCH 0373/2289] use local file for read_web_config test. 
Fixes #2417 --- .../tests/testthat/data/config.example.php | 125 ++++++++++++++++++ .../tests/testthat/test.read_web_config.R | 27 ++-- 2 files changed, 143 insertions(+), 9 deletions(-) create mode 100644 base/utils/tests/testthat/data/config.example.php diff --git a/base/utils/tests/testthat/data/config.example.php b/base/utils/tests/testthat/data/config.example.php new file mode 100644 index 00000000000..653cd64fcfb --- /dev/null +++ b/base/utils/tests/testthat/data/config.example.php @@ -0,0 +1,125 @@ + array(), + "geo.bu.edu" => + array("displayname" => "geo", + "qsub" => "qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash", + "jobid" => "Your job ([0-9]+) .*", + "qstat" => "qstat -j @JOBID@ || echo DONE", + "prerun" => "module load udunits R/R-3.0.0_gnu-4.4.6", + "postrun" => "sleep 60", + "models" => + array("ED2" => + array("prerun" => "module load hdf5"), + "ED2 (r82)" => + array("prerun" => "module load hdf5") + ) + ) + ); + +# Folder where PEcAn is installed +$R_library_path="/home/carya/R/library"; + +# Location where PEcAn is installed, not really needed anymore +$pecan_home="/home/carya/pecan/"; + +# Folder where the runs are stored +$output_folder="/home/carya/output/"; + +# Folder where the generated files are stored +$dbfiles_folder=$output_folder . "/dbfiles"; + +# location of BETY DB set to empty to not create links, can be both +# relative or absolute paths or full URL's. Should point to the base +# of BETYDB +$betydb="/bety"; + +# ---------------------------------------------------------------------- +# SIMPLE EDITING OF BETY DATABSE +# ---------------------------------------------------------------------- +# Number of items to show on a page +$pagesize = 30; + +# Location where logs should be written +$logfile = "/home/carya/output/betydb.log"; + +# uncomment the following variable to enable the simple interface +#$simpleBETY = TRUE; + +# syncing details + +$server_url="192.168.0.5"; // local test server +$client_sceret=""; +$server_auth_token=""; + +?> diff --git a/base/utils/tests/testthat/test.read_web_config.R b/base/utils/tests/testthat/test.read_web_config.R index 25d7df32bc9..5021b0c16ef 100644 --- a/base/utils/tests/testthat/test.read_web_config.R +++ b/base/utils/tests/testthat/test.read_web_config.R @@ -1,19 +1,28 @@ context("Read web config") -# `here` package needed to correctly set path relative to package -skip_if_not_installed("here") -pecan_root <- normalizePath(here::here("..", "..")) +php_config_example <- file.path("data", "config.example.php") test_that("Read example config file", { - php_config_example <- file.path(pecan_root, "web", "config.example.php") cfg_example <- read_web_config(php_config_example) expect_equal(cfg_example[["output_folder"]], "/home/carya/output/") expect_equal(cfg_example[["dbfiles_folder"]], "/home/carya/output//dbfiles") }) -test_that("Read docker config file", { - php_config_docker <- file.path(pecan_root, "docker", "web", "config.docker.php") - cfg_docker <- read_web_config(php_config_docker) - expect_equal(cfg_docker[["output_folder"]], "/data/workflows") - expect_equal(cfg_docker[["dbfiles_folder"]], "/data/dbfiles") +test_that("parse converts types", { + cfg_example <- read_web_config(php_config_example, parse = FALSE) + expect_type(cfg_example[["pagesize"]], "character") + expect_equal(cfg_example[["pagesize"]], "30") + + # parse not currently working; uncomment when fixed + # cfg_example <- read_web_config(php_config_example, parse = TRUE) + # expect_type(cfg_example[["pagesize"]], "double") + # 
expect_equal(cfg_example[["pagesize"]], 30) +}) + +test_that("expand replaces output_folder", { + cfg_example <- read_web_config(php_config_example, expand = FALSE) + expect_equal(cfg_example[["dbfiles_folder"]], "$output_folder . /dbfiles") + + cfg_example <- read_web_config(php_config_example, expand = TRUE) + expect_equal(cfg_example[["dbfiles_folder"]], "/home/carya/output//dbfiles") }) From af58e25c53cbe5f3166cfe88c2ee455bcc7b8cf1 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 17 Sep 2019 18:41:42 +0200 Subject: [PATCH 0374/2289] avoid unparsing. Fixes #2421 --- CHANGELOG.md | 1 + base/utils/R/read_web_config.R | 4 ++-- base/utils/tests/testthat/test.read_web_config.R | 7 +++---- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index c95d8667eb8..e1bb4dc9dad 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -15,6 +15,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Replace deprecated `rlang::UQ` syntax with the recommended `!!` - Explicitly use `PEcAn.uncertainty::read.ensemble.output` in `PEcAn.utils::get.results`. Otherwise, it would sometimes use the deprecated `PEcAn.utils::read.ensemble.output` version. - History page would not pass the hostname parameter when showing a running workflow, this would result in the running page showing an error. +- The `parse` option to `PEcAn.utils::read_web_config` had no effect when `expand` was TRUE (#2421). ### Changed - Updated modules/rtm PROSPECT docs diff --git a/base/utils/R/read_web_config.R b/base/utils/R/read_web_config.R index fba2e570eb0..4b5c5a8f4a3 100644 --- a/base/utils/R/read_web_config.R +++ b/base/utils/R/read_web_config.R @@ -45,10 +45,10 @@ read_web_config <- function(php.config = "../../web/config.php", if (expand) { # Replace $output_folder with its value, and concatenate strings - config_list <- lapply(config_list, gsub, + chr <- vapply(config_list, is.character, logical(1)) + config_list[chr] <- lapply(config_list[chr], gsub, pattern = "\\$output_folder *\\. *", replacement = config_list[["output_folder"]]) } - config_list } diff --git a/base/utils/tests/testthat/test.read_web_config.R b/base/utils/tests/testthat/test.read_web_config.R index 5021b0c16ef..a45173d92d8 100644 --- a/base/utils/tests/testthat/test.read_web_config.R +++ b/base/utils/tests/testthat/test.read_web_config.R @@ -13,10 +13,9 @@ test_that("parse converts types", { expect_type(cfg_example[["pagesize"]], "character") expect_equal(cfg_example[["pagesize"]], "30") - # parse not currently working; uncomment when fixed - # cfg_example <- read_web_config(php_config_example, parse = TRUE) - # expect_type(cfg_example[["pagesize"]], "double") - # expect_equal(cfg_example[["pagesize"]], 30) + cfg_example <- read_web_config(php_config_example, parse = TRUE) + expect_type(cfg_example[["pagesize"]], "double") + expect_equal(cfg_example[["pagesize"]], 30) }) test_that("expand replaces output_folder", { From 9dd9085353ebd1c0ffb78d299335590719dadf30 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 17 Sep 2019 22:07:53 +0200 Subject: [PATCH 0375/2289] remove dead code Apparently R check now does some static analysis and complains about "unstated dependencies in tests" because of these lines even though they never run. Easier to remove them than to fight it. 
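For context, a minimal sketch (hypothetical test file) of the kind of block being removed. The guard means the body never executes, but R CMD check appears to scan test sources statically for `library()`/`require()` calls, so it still flags an unstated dependency unless the package is declared in DESCRIPTION:

    # Dead developer scaffolding: never run, but still visible to the
    # static dependency scan performed by R CMD check.
    if (FALSE) {
      library(devtools)
      rm(list = ls())
      load_all()
    }

Deleting the block is simpler than declaring a test dependency the package does not really have.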
--- base/settings/tests/testthat/test.MultiSettings.class.R | 5 ----- base/settings/tests/testthat/test.papply.R | 5 ----- 2 files changed, 10 deletions(-) diff --git a/base/settings/tests/testthat/test.MultiSettings.class.R b/base/settings/tests/testthat/test.MultiSettings.class.R index dff5278d5ad..72b1b5aea0a 100644 --- a/base/settings/tests/testthat/test.MultiSettings.class.R +++ b/base/settings/tests/testthat/test.MultiSettings.class.R @@ -8,11 +8,6 @@ ## #------------------------------------------------------------------------------- context("test MultiSettings class") -if(FALSE) { - library(devtools) - rm(list=ls()) - load_all() -} # SETUP l <- list(aa=1, bb=2, cc=list(dd=3, ee=4)) diff --git a/base/settings/tests/testthat/test.papply.R b/base/settings/tests/testthat/test.papply.R index 3b60d292609..41bda3646ec 100644 --- a/base/settings/tests/testthat/test.papply.R +++ b/base/settings/tests/testthat/test.papply.R @@ -8,11 +8,6 @@ ## #------------------------------------------------------------------------------- context("test papply") -if(FALSE) { - library(devtools) - rm(list=ls()) - load_all() -} # SETUP l <- list(aa=1, bb=2, cc=list(dd=3, ee=4)) From af822265cf7fc0f36651b60da1eeb0cb3d095943 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 17 Sep 2019 23:02:24 +0200 Subject: [PATCH 0376/2289] avoid direct DBI dependency in workflow --- base/workflow/R/create_execute_test_xml.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index 782590eed86..f6928ef6d72 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -57,7 +57,7 @@ create_execute_test_xml <- function(model_id, port = db_bety_port, driver = db_bety_driver )) - on.exit(DBI::dbDisconnect(con), add = TRUE) + on.exit(PEcAn.DB::db.close(con), add = TRUE) settings <- list( info = list(notes = "Test_Run", From 74c6388735c51d382e543d2ab9b1a574b0e87ba7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 17 Sep 2019 23:03:04 +0200 Subject: [PATCH 0377/2289] remove outdated workaround --- scripts/check_with_errors.R | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 60ecb891955..ccf0e7fb4fc 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -2,16 +2,6 @@ arg <- commandArgs(trailingOnly = TRUE) pkg <- arg[1] -# Workaround for devtools/#1914: -# check() sets its own values for `_R_CHECK_*` environment variables, without -# checking whether any are already set. It winds up string-concatenating new -# onto old (e.g. "FALSE TRUE") instead of either respecting or overriding them. 
-# (Fixed in devtools 2.0.1.9000; remove these lines after next CRAN release)
-Sys.unsetenv(
-  c('_R_CHECK_CRAN_INCOMING_',
-    '_R_CHECK_CRAN_INCOMING_REMOTE_',
-    '_R_CHECK_FORCE_SUGGESTS_'))
-
 log_level <- Sys.getenv('LOGLEVEL', unset = NA)
 die_level <- Sys.getenv('DIELEVEL', unset = NA)
 redocument <- as.logical(Sys.getenv('REBUILD_DOCS', unset = NA))

From 168787b222fffa64e96ba8a6ccde9647a21ff4c3 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Wed, 18 Sep 2019 10:17:03 +0200
Subject: [PATCH 0378/2289] fix warnings added since I saved the reference file

---
 base/workflow/NAMESPACE                      |  2 ++
 base/workflow/R/create_execute_test_xml.R    | 17 ++++++++++-------
 base/workflow/man/create_execute_test_xml.Rd |  5 +++++
 3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/base/workflow/NAMESPACE b/base/workflow/NAMESPACE
index 7144c5c0973..232b3667fe4 100644
--- a/base/workflow/NAMESPACE
+++ b/base/workflow/NAMESPACE
@@ -5,3 +5,5 @@ export(do_conversions)
 export(run.write.configs)
 export(runModule.get.trait.data)
 export(runModule.run.write.configs)
+importFrom(dplyr,"%>%")
+importFrom(dplyr,.data)
diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R
index f6928ef6d72..bab739c44f2 100644
--- a/base/workflow/R/create_execute_test_xml.R
+++ b/base/workflow/R/create_execute_test_xml.R
@@ -22,8 +22,11 @@
 #'   (character) BETY database connection options. Default values for all of
 #'   these are pulled from `/web/config.php`.
 #' @param db_bety_driver (character) BETY database connection driver (default = `"Postgres"`)
-#' @return
+#' @return A list with two entries:
+#'   * `sys`: Exit value returned by the workflow (0 for success).
+#'   * `outdir`: Path where the workflow results are saved
 #' @author Alexey Shiklomanov, Tony Gardella
+#' @importFrom dplyr %>% .data
 #' @export
 create_execute_test_xml <- function(model_id,
                                     met,
@@ -67,9 +70,9 @@ create_execute_test_xml <- function(model_id,
   )
 
   #Outdir
-  model.new <- tbl(con, "models") %>%
-    filter(id == !!model_id) %>%
-    collect()
+  model.new <- dplyr::tbl(con, "models") %>%
+    dplyr::filter(.data$id == !!model_id) %>%
+    dplyr::collect()
   outdir_pre <- paste(
     model.new[["model_name"]],
     format(as.Date(start_date), "%Y-%m"),
@@ -98,9 +101,9 @@ create_execute_test_xml <- function(model_id,
   #PFT
   if (is.null(pft)){
     # Select the first PFT in the model list.
-    pft <- tbl(con, "pfts") %>%
-      filter(modeltype_id == !!model.new$modeltype_id) %>%
-      collect()
+    pft <- dplyr::tbl(con, "pfts") %>%
+      dplyr::filter(.data$modeltype_id == !!model.new$modeltype_id) %>%
+      dplyr::collect()
     pft <- pft$name[[1]]
 
     message("PFT is `NULL`. Defaulting to the following PFT: ", pft)
diff --git a/base/workflow/man/create_execute_test_xml.Rd b/base/workflow/man/create_execute_test_xml.Rd
index 3379b8b5a48..ef844a69827 100644
--- a/base/workflow/man/create_execute_test_xml.Rd
+++ b/base/workflow/man/create_execute_test_xml.Rd
@@ -49,6 +49,11 @@ these are pulled from `/web/config.php`.}
 
 \item{db_bety_driver}{(character) BETY database connection driver (default = `"Postgres"`)}
 }
+\value{
+A list with two entries:
+ * `sys`: Exit value returned by the workflow (0 for success).
+ * `outdir`: Path where the workflow results are saved +} \description{ Create a PEcAn XML file and use it to run a PEcAn workflow } From a7f4121b3a4fc5dd6c4995d2a324bc0a1157bc14 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 11:32:26 +0200 Subject: [PATCH 0379/2289] typo --- scripts/check_with_errors.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index ccf0e7fb4fc..2b1e2a21f21 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -108,7 +108,7 @@ if (cmp$status != "+") { print(cmp) cat("R check of", pkg, "reports the following new problems.", "Please fix these and resubmit:\n") - cat(cmp$cmp$output[cmp$cmp$change == 1], sep="\n") + cat(cmp$cmp$output[cmp$cmp$change == 1], sep = "\n") stop("Please fix these and resubmit.") } else { # No new messages, but need to check details of pre-existing ones @@ -122,7 +122,7 @@ if (cmp$status != "+") { lines_changed <- setdiff(cur_msgs, prev_msgs) if (length(lines_changed) > 0) { cat("Package check returned new problems:\n") - cat(lines_changed, "\n") + cat(lines_changed, sep = "\n") stop("Please fix these and resubmit.") } } From 2efcb700b5d62b63fe093b4468bbfcc7d9edaff4 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 12:00:06 +0200 Subject: [PATCH 0380/2289] always report package name --- scripts/check_with_errors.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 2b1e2a21f21..e135045b118 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -121,7 +121,7 @@ if (cmp$status != "+") { lines_changed <- setdiff(cur_msgs, prev_msgs) if (length(lines_changed) > 0) { - cat("Package check returned new problems:\n") + cat("R check of", pkg, "returned new problems:\n") cat(lines_changed, sep = "\n") stop("Please fix these and resubmit.") } From 61b3c4bcc6015ba104d5d688b93a0b9ffa845d63 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 12:32:29 +0200 Subject: [PATCH 0381/2289] functionalize 2-item pipes for easier reading and to avoid undeclared import --- .../assim.sequential/R/sda.enkf_refactored.R | 35 ++++++++++--------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 436931a59a8..7ca60647b97 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -49,7 +49,7 @@ sda.enkf <- function(settings, host <- settings$host forecast.time.step <- settings$state.data.assimilation$forecast.time.step #idea for later generalizing nens <- as.numeric(settings$ensemble$size) - processvar <- settings$state.data.assimilation$process.variance %>% as.logical() + processvar <- as.logical(settings$state.data.assimilation$process.variance) var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") names(var.names) <- NULL @@ -59,10 +59,10 @@ sda.enkf <- function(settings, # Site location first col is the long second is the lat and row names are the site ids site.ids <- settings$run$site$id - site.locs <- data.frame(Lon=settings$run$site$lon %>% as.numeric, - Lat=settings$run$site$lat %>% as.numeric) %>% - `colnames<-`(c("Lon","Lat")) %>% - `rownames<-`(site.ids) + site.locs <- data.frame(Lon = as.numeric(settings$run$site$lon), + Lat = as.numeric(settings$run$site$lat)) + colnames(site.locs) <- 
c("Lon","Lat") + rownames(site.locs) <- site.ids # start cut determines what is the best year to start spliting the met based on if we start with a restart or not. if (!is.null(restart)) { start.cut <-lubridate::ymd_hms(settings$state.data.assimilation$start.date, truncated = 3)-1 @@ -350,16 +350,17 @@ sda.enkf <- function(settings, } #----chaning the extension of nc files to a more specific date related name - list.files( - path = file.path(settings$outdir, "out"), - "*.nc$", - recursive = TRUE, - full.names = TRUE - ) %>% - walk( function(.x){ - - file.rename(.x , file.path(dirname(.x), - paste0(gsub(" ","",names(obs.mean)[t] %>% as.character()),".nc")) + purrr::walk( + list.files( + path = file.path(settings$outdir, "out"), + "*.nc$", + recursive = TRUE, + full.names = TRUE), + function(.x){ + file.rename(.x , + file.path(dirname(.x), + paste0(gsub(" ", "", as.character(names(obs.mean)[t])), + ".nc")) ) }) #--- Reformating X @@ -387,8 +388,8 @@ sda.enkf <- function(settings, names(input.order.cov) <- operators ### this is for pfts not sure if it's always nessecary? - choose <- sapply(colnames(X), agrep, x=names(obs.mean[[t]]), max=1, USE.NAMES = F) %>% unlist - choose.cov <- sapply(colnames(X), agrep, x=colnames(obs.cov[[t]]), max=1, USE.NAMES = F) %>% unlist + choose <- unlist(sapply(colnames(X), agrep, x=names(obs.mean[[t]]), max=1, USE.NAMES = F)) + choose.cov <- unlist(sapply(colnames(X), agrep, x=colnames(obs.cov[[t]]), max=1, USE.NAMES = F)) if(!any(choose)){ choose <- unlist(input.order) From dfea8a8a73c0b8e69d2787b044aa927ea016eb0b Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 14:09:50 +0200 Subject: [PATCH 0382/2289] enquote stray apostrophe No idea why this complains on Travis but not locally, but whatever --- modules/allometry/data/Jenkins2004_Table9.csv | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/allometry/data/Jenkins2004_Table9.csv b/modules/allometry/data/Jenkins2004_Table9.csv index d965876b774..8a553138731 100644 --- a/modules/allometry/data/Jenkins2004_Table9.csv +++ b/modules/allometry/data/Jenkins2004_Table9.csv @@ -1 +1 @@ -Table 9. -- Sources and general locations for all equations referenced (see Appendix A),, ,, Reference no.,Reference,Origin 1,Acker and Easter 1994,Pacific Northwest 2,Adhikari et al. 1995,Himalayas 3,Anurag et al. 1989,India 4,Bajrang et al. 1996,North Indian plains 5,Baldwin 1989,Louisiana 6,Barclay et al. 1986,"Vancouver, BC" 7,Barney et al. 1978,Alaska 8,Bartelink 1996,Nertherlands 9,Baskerville 1965,New Brunswick 10,Baskerville 1966,New Brunswick 11,Bergez et al. 1988,central France 12,Bickelhaupt et al. 1973,New York 13,Binkley 1983,"British Columbia, Washington State" 14,Binkley et al. 1984,Pacific Northwest 15,Bockheim and Lee 1984,Wisconsin 16,Boerner and Kost 1986,Ohio 17,Bormann 1990,Southeastern Alaska 18,Brenneman et al. 1978,West Virginia 19,Bridge 1979,Rhode Island 20,Briggs et al. 1989,New York 21,Brown 1978,Rocky Mountains 22,Bunyavejchewin and Kiratiprayoon 1989,"Ratchaburi Province, Thailand" 23,Busing et al. 1993,Tennessee 24,Campbell et al. 1985,Alberta 25,Carlyle and Malcolm 1986,Great Britain 26,Carpenter 1983,Minnesota 27,Carter and White 1971,Alabama 29,Chapman and Gower 1991,Wisconsin 30,Chaturvedi and Singh 1982,Lesser Himalayas 31,Chojnacky 1984,Nevada 32,Chojnacky and Moisen 1993,Nevada 33,Clark et al. 1985,Gulf and Atlantic Coastal Plains 34,Clark et al. 1986a,Piedmont (Southeastern U.S.) 35,Clark et al. 
1986b,Upland South 36,Clark and Schroeder 1986,"North Carolina, Georgia" 37,Clary and Tiedemann 1987,Utah 38,Clebsch 1971,Tennessee 39,Cochran et al. 1984,Pacific Northwest 40,Crow 1971,Maine 41,Crow 1976,North-central U.S. 42,Crow 1983,"Wisconsin, Michigan" 43,Darling 1967,Arizona 44,Dudley and Fownes 1992,Hawaii 45,Dunlap and Shipman 1967,Pennsylvania 47,Espinosa-Bancalari and Perry 1987,Oregon 48,Fassnacht 1996,Wisconsin 49,Felker et al. 1982,California 50,Feller 1992,British Columbia 51,Freedman 1984,Nova Scotia 52,Freedman et al. 1982,Nova Scotia 53,Gary 1976,"Wyoming, Colorado" 54,Gholz 1980,Oregon 55,Gholz et al. 1979,Pacific Northwest 56,Gholz et al. 1991,Florida 57,Goldsmith and Hocker 1978,New Hampshire 58,Gower et al. 1987,Washington 59,Gower et al. 1993a,"Wisconsin, Montana" 60,Gower et al. 1993b,Southwestern Wisconsin 61,Gower et al. 1992,New Mexico 62,Green and Grigal 1978,Minnesota 63,Grier et al. 1992,Arizona 64,Grier et al. 1984,Washington 65,Grier and Logan 1977,Oregon 66,Grigal and Kernik 1978,Minnesota 67,Harding and Grigal 1985,Minnesota 68,Harmon 1994,Pacific Northwest 69,Harrington et al. 1984,Oregon 70,Harris et al. 1973,Tennessee 71,Hegyi 1972,Ontario 72,Helgerson et al. 1988,Oregon 73,Heth and Donald 1978,"Cape Province, South Africa" 74,Hocker and Early 1983,New Hampshire 75,Honer 1971,Ontario 76,Ivask et al. 1988, 77,Jackson and Chittenden 1981,New Zealand 78,Johnston and Bartos 1977,"Utah, Wyoming" 79,Jokela et al. 1981,Minnesota 80,Jokela et al. 1986,New York 81,Ker 1980a,New Brunswick 82,Ker 1980b,Nova Scotia 83,Ker 1984, 84,Ker and van Raalte 1981,New Brunswick 85,Kimmins 1973,British Columbia 86,Kinerson and Bartholomew 1977,New Hampshire 87,King and Schnell 1972,"North Carolina, Kentucky, Tennessee" 88,Klopsch 1994,Pacific Northwest 89,Koerper 1994,Pacific Northwest 90,Koerper and Richardson 1980,Michigan 91,Krumlik 1974,British Columbia 92,Krumlik and Kimmins 1973,British Columbia 95,Landis and Mogren 1975,Colorado 96,Lieffers and Campbell 1984,Alberta 97,Lodhiyal et al. 1995,Central Himalayas 98,Loomis et al. 1966,Missouri Ozarks 99,Lovenstein and Berliner 1993,Israel 100,Maclean and Wein 1976,New Brunswick 101,Marshall and Wang 1995,British Columbia 102,Martin et al. 1998,North Carolina 103,McCain 1994,Pacific Northwest 104,Means et al. 1994,Pacific Northwest 105,Miller et al. 1981,"Nevada, eastern California" 106,Monk et al. 1970,Georgia 107,Monteith 1979,New York 108,Moore and Verspoor 1973,Quebec 109,Morrison 1990,Northern Ontario 110,Naidu et al. 1998,North Carolina 111,Nelson and Switzer 1975,Mississippi 112,Ouellet 1983,Quebec 113,Parker and Schneider 1975,Michigan 114,Pastor et al. 1984,Eastern U.S. 115,Pastor and Bockheim 1981,Wisconsin 116,Pearson et al. 1984,Wyoming 117,Perala and Alban 1994,North Central States 118,Peterson et al. 1970,Alberta 119,Phillips 1981,Southeast U.S. 120,Pollard 1972,Ontario 121,Rajeev et al. 1998,"Haryana, India" 122,Ralston 1973,North Carolina 123,Ralston and Prince 1965,North Carolina 124,Ramseur and Kelly 1981,Tennessee 125,Rawat and Singh 1993,Central Himalayas 126,Reid et al. 1974, 127,Reiners 1972,Minnesota 128,Rencz and Auclair 1980,Quebec 129,Reynolds et al. 1978,New Jersey 130,Ribe 1973,Maine 131,Rogerson 1964,Mississippi 132,Rolfe et al. 1978,Southern Illinois 133,Ruark and Bockheim 1988,Northern Wisconsin 134,Ruark et al. 1987,Wisconsin 135,Sachs 1984,Pacific Northwest 136,Santantonio et al. 1977, 137,Schmitt and Grigal 1981, 138,Schnell 1976,Tennessee 139,Schnell 1978,Tennessee 140,Schroeder et al. 
1997, 141,Schubert et al. 1988,Hawaii 142,Siccama et al. 1994,New Hampshire 143,Singh 1984,Northwest Territories 144,Singh and Misra 1979,"Uttar Pradesh, India" 146,Snell and Little 1983,Pacific Northwest 147,Snell and Max 1985,Washington 148,Sollins and Anderson 1971,Southeastern U.S. 149,Sollins et al. 1973,Tennessee 150,St. Clair 1993,Oregon 151,Standish et al. 1985,British Columbia 152,Stanek and State 1978,British Columbia 153,Swank and Schreuder 1974,North Carolina 154,Tandon et al. 1988,"Haryana, India" 155,Telfer 1969, 156,Teller 1988,Belgium 157,Ter-Mikaelian and Korzukhin 1997,North America 158,Thies and Cunningham 1996,Oregon 159,Tritton and Hornbeck 1982,Northeastern U.S. 160,Tuskan and Rensema 1992,North Dakota 161,van Laar 1982,South Africa 162,Van Lear et al. 1984,South Carolina 163,Vertanen et al. 1994,Kenya 164,Wade 1969,Georgia 165,Wang et al. 1995,British Columbia 166,Wang et al. 1996,British Columbia 167,Waring et al. 1978,Oregon 168,Wartluft 1977,West Virginia 169,Watson and O'Loughlin 1990,New Zealand 170,Weetman and Harland 1964,Quebec 171,Westman 1987,"Sierra Nevada, California" 172,Whittaker et al. 1974,New Hampshire 173,Whittaker and Niering 1975,Arizona 174,Whittaker and Woodwell 1968,New York 175,Wiant et al. 1977,West Virginia 176,Williams and McClenahan 1984,Ohio 177,Young et al. 1980,Maine \ No newline at end of file +Table 9. -- Sources and general locations for all equations referenced (see Appendix A),, ,, Reference no.,Reference,Origin 1,Acker and Easter 1994,Pacific Northwest 2,Adhikari et al. 1995,Himalayas 3,Anurag et al. 1989,India 4,Bajrang et al. 1996,North Indian plains 5,Baldwin 1989,Louisiana 6,Barclay et al. 1986,"Vancouver, BC" 7,Barney et al. 1978,Alaska 8,Bartelink 1996,Nertherlands 9,Baskerville 1965,New Brunswick 10,Baskerville 1966,New Brunswick 11,Bergez et al. 1988,central France 12,Bickelhaupt et al. 1973,New York 13,Binkley 1983,"British Columbia, Washington State" 14,Binkley et al. 1984,Pacific Northwest 15,Bockheim and Lee 1984,Wisconsin 16,Boerner and Kost 1986,Ohio 17,Bormann 1990,Southeastern Alaska 18,Brenneman et al. 1978,West Virginia 19,Bridge 1979,Rhode Island 20,Briggs et al. 1989,New York 21,Brown 1978,Rocky Mountains 22,Bunyavejchewin and Kiratiprayoon 1989,"Ratchaburi Province, Thailand" 23,Busing et al. 1993,Tennessee 24,Campbell et al. 1985,Alberta 25,Carlyle and Malcolm 1986,Great Britain 26,Carpenter 1983,Minnesota 27,Carter and White 1971,Alabama 29,Chapman and Gower 1991,Wisconsin 30,Chaturvedi and Singh 1982,Lesser Himalayas 31,Chojnacky 1984,Nevada 32,Chojnacky and Moisen 1993,Nevada 33,Clark et al. 1985,Gulf and Atlantic Coastal Plains 34,Clark et al. 1986a,Piedmont (Southeastern U.S.) 35,Clark et al. 1986b,Upland South 36,Clark and Schroeder 1986,"North Carolina, Georgia" 37,Clary and Tiedemann 1987,Utah 38,Clebsch 1971,Tennessee 39,Cochran et al. 1984,Pacific Northwest 40,Crow 1971,Maine 41,Crow 1976,North-central U.S. 42,Crow 1983,"Wisconsin, Michigan" 43,Darling 1967,Arizona 44,Dudley and Fownes 1992,Hawaii 45,Dunlap and Shipman 1967,Pennsylvania 47,Espinosa-Bancalari and Perry 1987,Oregon 48,Fassnacht 1996,Wisconsin 49,Felker et al. 1982,California 50,Feller 1992,British Columbia 51,Freedman 1984,Nova Scotia 52,Freedman et al. 1982,Nova Scotia 53,Gary 1976,"Wyoming, Colorado" 54,Gholz 1980,Oregon 55,Gholz et al. 1979,Pacific Northwest 56,Gholz et al. 1991,Florida 57,Goldsmith and Hocker 1978,New Hampshire 58,Gower et al. 1987,Washington 59,Gower et al. 1993a,"Wisconsin, Montana" 60,Gower et al. 
1993b,Southwestern Wisconsin 61,Gower et al. 1992,New Mexico 62,Green and Grigal 1978,Minnesota 63,Grier et al. 1992,Arizona 64,Grier et al. 1984,Washington 65,Grier and Logan 1977,Oregon 66,Grigal and Kernik 1978,Minnesota 67,Harding and Grigal 1985,Minnesota 68,Harmon 1994,Pacific Northwest 69,Harrington et al. 1984,Oregon 70,Harris et al. 1973,Tennessee 71,Hegyi 1972,Ontario 72,Helgerson et al. 1988,Oregon 73,Heth and Donald 1978,"Cape Province, South Africa" 74,Hocker and Early 1983,New Hampshire 75,Honer 1971,Ontario 76,Ivask et al. 1988, 77,Jackson and Chittenden 1981,New Zealand 78,Johnston and Bartos 1977,"Utah, Wyoming" 79,Jokela et al. 1981,Minnesota 80,Jokela et al. 1986,New York 81,Ker 1980a,New Brunswick 82,Ker 1980b,Nova Scotia 83,Ker 1984, 84,Ker and van Raalte 1981,New Brunswick 85,Kimmins 1973,British Columbia 86,Kinerson and Bartholomew 1977,New Hampshire 87,King and Schnell 1972,"North Carolina, Kentucky, Tennessee" 88,Klopsch 1994,Pacific Northwest 89,Koerper 1994,Pacific Northwest 90,Koerper and Richardson 1980,Michigan 91,Krumlik 1974,British Columbia 92,Krumlik and Kimmins 1973,British Columbia 95,Landis and Mogren 1975,Colorado 96,Lieffers and Campbell 1984,Alberta 97,Lodhiyal et al. 1995,Central Himalayas 98,Loomis et al. 1966,Missouri Ozarks 99,Lovenstein and Berliner 1993,Israel 100,Maclean and Wein 1976,New Brunswick 101,Marshall and Wang 1995,British Columbia 102,Martin et al. 1998,North Carolina 103,McCain 1994,Pacific Northwest 104,Means et al. 1994,Pacific Northwest 105,Miller et al. 1981,"Nevada, eastern California" 106,Monk et al. 1970,Georgia 107,Monteith 1979,New York 108,Moore and Verspoor 1973,Quebec 109,Morrison 1990,Northern Ontario 110,Naidu et al. 1998,North Carolina 111,Nelson and Switzer 1975,Mississippi 112,Ouellet 1983,Quebec 113,Parker and Schneider 1975,Michigan 114,Pastor et al. 1984,Eastern U.S. 115,Pastor and Bockheim 1981,Wisconsin 116,Pearson et al. 1984,Wyoming 117,Perala and Alban 1994,North Central States 118,Peterson et al. 1970,Alberta 119,Phillips 1981,Southeast U.S. 120,Pollard 1972,Ontario 121,Rajeev et al. 1998,"Haryana, India" 122,Ralston 1973,North Carolina 123,Ralston and Prince 1965,North Carolina 124,Ramseur and Kelly 1981,Tennessee 125,Rawat and Singh 1993,Central Himalayas 126,Reid et al. 1974, 127,Reiners 1972,Minnesota 128,Rencz and Auclair 1980,Quebec 129,Reynolds et al. 1978,New Jersey 130,Ribe 1973,Maine 131,Rogerson 1964,Mississippi 132,Rolfe et al. 1978,Southern Illinois 133,Ruark and Bockheim 1988,Northern Wisconsin 134,Ruark et al. 1987,Wisconsin 135,Sachs 1984,Pacific Northwest 136,Santantonio et al. 1977, 137,Schmitt and Grigal 1981, 138,Schnell 1976,Tennessee 139,Schnell 1978,Tennessee 140,Schroeder et al. 1997, 141,Schubert et al. 1988,Hawaii 142,Siccama et al. 1994,New Hampshire 143,Singh 1984,Northwest Territories 144,Singh and Misra 1979,"Uttar Pradesh, India" 146,Snell and Little 1983,Pacific Northwest 147,Snell and Max 1985,Washington 148,Sollins and Anderson 1971,Southeastern U.S. 149,Sollins et al. 1973,Tennessee 150,St. Clair 1993,Oregon 151,Standish et al. 1985,British Columbia 152,Stanek and State 1978,British Columbia 153,Swank and Schreuder 1974,North Carolina 154,Tandon et al. 1988,"Haryana, India" 155,Telfer 1969, 156,Teller 1988,Belgium 157,Ter-Mikaelian and Korzukhin 1997,North America 158,Thies and Cunningham 1996,Oregon 159,Tritton and Hornbeck 1982,Northeastern U.S. 160,Tuskan and Rensema 1992,North Dakota 161,van Laar 1982,South Africa 162,Van Lear et al. 
1984,South Carolina 163,Vertanen et al. 1994,Kenya 164,Wade 1969,Georgia 165,Wang et al. 1995,British Columbia 166,Wang et al. 1996,British Columbia 167,Waring et al. 1978,Oregon 168,Wartluft 1977,West Virginia 169,"Watson and O'Loughlin 1990",New Zealand 170,Weetman and Harland 1964,Quebec 171,Westman 1987,"Sierra Nevada, California" 172,Whittaker et al. 1974,New Hampshire 173,Whittaker and Niering 1975,Arizona 174,Whittaker and Woodwell 1968,New York 175,Wiant et al. 1977,West Virginia 176,Williams and McClenahan 1984,Ohio 177,Young et al. 1980,Maine \ No newline at end of file From 5a8d76b083f0909d6240137d3bdce1da63fe2e31 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 15:55:10 +0200 Subject: [PATCH 0383/2289] ignore small differences in potential recompression savings --- scripts/check_with_errors.R | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index e135045b118..f91a1021063 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -115,10 +115,23 @@ if (cmp$status != "+") { # We stopped earlier for errors, so all entries here are WARNING or NOTE cur_msgs <- msg_lines(cmp$cmp$output[cmp$cmp$which == "new"]) prev_msgs <- msg_lines(cmp$cmp$output[cmp$cmp$which == "old"]) + # avoids false positives from tempdir changes cur_msgs <- gsub(chk$checkdir, "...", cur_msgs) prev_msgs <- gsub(old$checkdir, "...", prev_msgs) + # Compression warnings report slightly different sizes on different R versions + # If the only difference is in the numbers, don't complain + cmprs_msg <- grepl("significantly better compression", cur_msgs) + if(any(cmprs_msg)){ + prev_cmprs_msg <- grepl("significantly better compression", prev_msgs) + cur_cmprs_nodigit <- gsub("[0-9]", "", cur_msgs[cmprs_msg]) + prev_cmprs_nodigit <- gsub("[0-9]", "", prev_msgs[prev_cmprs_msg]) + if(all(cur_cmprs_nodigit %in% prev_cmprs_nodigit)){ + cur_msgs <- cur_msgs[!cmprs_msg] + } + } + lines_changed <- setdiff(cur_msgs, prev_msgs) if (length(lines_changed) > 0) { cat("R check of", pkg, "returned new problems:\n") From 5accc9272d0095a18be3170d1b8e8f5ddeab6c39 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 16:30:49 +0200 Subject: [PATCH 0384/2289] ignore warning from existing dataset --- modules/data.land/tests/Rcheck_reference.log | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/modules/data.land/tests/Rcheck_reference.log b/modules/data.land/tests/Rcheck_reference.log index 2e561ac0886..e327d4fe7e3 100644 --- a/modules/data.land/tests/Rcheck_reference.log +++ b/modules/data.land/tests/Rcheck_reference.log @@ -528,6 +528,10 @@ Files not of a type allowed in a ‘data’ directory: ‘lake_states_wgs84.qpj’ ‘lake_states_wgs84.shp’ ‘lake_states_wgs84.shx’ Please use e.g. ‘inst/extdata’ for non-R data files + +Object named ‘.Random.seed’ found in dataset: ‘soil_class’ +Please remove it. + * checking data for non-ASCII characters ... WARNING Warning: found non-ASCII strings 'US-Bar,272,44.0646,-71.2881,NA,4297,GRP_LOCATION,LOCATION_COMMENT,... Bartlett Experimental Forest. 
...The correct location is 44 deg 3' 52.702794\ N [NOT 43 deg as I had earlier specified] 71 deg 17 17.0766744\" W","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS"

From a36d80aed07140cc4dba7415a8f7d61ea5ea4130 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Wed, 18 Sep 2019 16:53:30 +0200
Subject: [PATCH 0385/2289] defancify quotes

---
 scripts/check_with_errors.R | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index f91a1021063..d18c8427d6a 100644
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -120,6 +120,10 @@ if (cmp$status != "+") {
   cur_msgs <- gsub(chk$checkdir, "...", cur_msgs)
   prev_msgs <- gsub(old$checkdir, "...", prev_msgs)
 
+  # Different R versions seem to differ on straight vs fancy quotes
+  cur_msgs <- gsub("[‘’]", "'", cur_msgs)
+  prev_msgs <- gsub("[‘’]", "'", prev_msgs)
+
   # Compression warnings report slightly different sizes on different R versions
   # If the only difference is in the numbers, don't complain
   cmprs_msg <- grepl("significantly better compression", cur_msgs)

From 4cde04d9b0f5c8f9a92078fae0ceae54c3f50eae Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Wed, 18 Sep 2019 19:49:05 +0200
Subject: [PATCH 0386/2289] ok, defancify AND strip commas

---
 scripts/check_with_errors.R | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index d18c8427d6a..67dfec062c5 100644
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -120,9 +120,13 @@ if (cmp$status != "+") {
   cur_msgs <- gsub(chk$checkdir, "...", cur_msgs)
   prev_msgs <- gsub(old$checkdir, "...", prev_msgs)
 
-  # Different R versions seem to differ on straight vs fancy quotes
+  # R 3.6.0 switched style for lists of packages
+  # from space-separated fancy quotes to comma-separated straight quotes
+  # We'll meet halfway, with space-separated straight quotes
   cur_msgs <- gsub("[‘’]", "'", cur_msgs)
+  cur_msgs <- gsub("', '", "' '", cur_msgs)
   prev_msgs <- gsub("[‘’]", "'", prev_msgs)
+  prev_msgs <- gsub("', '", "' '", prev_msgs)
 
   # Compression warnings report slightly different sizes on different R versions
   # If the only difference is in the numbers, don't complain
   cmprs_msg <- grepl("significantly better compression", cur_msgs)

From 6913ad4824e8b70c535773aa20010dcadf0a5985 Mon Sep 17 00:00:00 2001
From: Betsy Cowdery
Date: Wed, 18 Sep 2019 14:07:24 -0400
Subject: [PATCH 0387/2289] Update base/db/R/assign.treatments.R

Co-Authored-By: David LeBauer
---
 base/db/R/assign.treatments.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/base/db/R/assign.treatments.R b/base/db/R/assign.treatments.R
index 1da9a0079f7..5e22285e837 100644
--- a/base/db/R/assign.treatments.R
+++ b/base/db/R/assign.treatments.R
@@ -13,7 +13,7 @@
 ##' Assigns all control treatments the same value, then assigns unique treatments
 ##' within each site. Each site is required to have a control treatment.
 ##' The algorithm (incorrectly) assumes that each site has a unique set of experimental
-##' treatments.
+##' treatments. This assumption is required by the data in BETYdb, which does not always consistently name or quantify treatments in the managements table. It also avoids the need to estimate treatment-by-site interactions in the meta-analysis model. This model uses data in the control treatment to estimate model parameters, so the impact of the assumption is minimal.
##' @name assign.treatments ##' @title assign.treatments ##' @param data input data @@ -45,4 +45,4 @@ assign.treatments <- function(data){ drop.columns <- function(data, columns){ return(data[, which(!colnames(data) %in% columns)]) } -##=============================================================================# \ No newline at end of file +##=============================================================================# From 365e3ceb50268b9459f84b0ad2720d6edd837b4b Mon Sep 17 00:00:00 2001 From: Betsy Cowdery Date: Wed, 18 Sep 2019 14:08:19 -0400 Subject: [PATCH 0388/2289] Update base/db/R/check.lists.R Co-Authored-By: David LeBauer --- base/db/R/check.lists.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/db/R/check.lists.R b/base/db/R/check.lists.R index 3b821f5754f..18e7c8b3840 100644 --- a/base/db/R/check.lists.R +++ b/base/db/R/check.lists.R @@ -16,7 +16,7 @@ ##' @param x first list ##' @param y second list ##' @param filename one of "species.csv" or "cultivars.csv" -##' @return true if two list are the same +##' @return true if two lists are the same ##' @author Rob Kooper ##' check.lists <- function(x, y, filename = "species.csv") { @@ -32,4 +32,4 @@ check.lists <- function(x, y, filename = "species.csv") { } xy_match <- vapply(cols, function(i) identical(as.character(x[[i]]), as.character(y[[i]])), logical(1)) return(all(unlist(xy_match))) -} \ No newline at end of file +} From ac580986ec40f26edc3d91386f357b0e3be46e43 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 18 Sep 2019 23:30:43 +0200 Subject: [PATCH 0389/2289] fix markup error rather than fight path differences R 3.5 reports the bare Rd filename, R >= 3.6 reports it as `man/.Rd`. Decided it was easier to fix the error than to make the script handle it gracefully --- modules/uncertainty/R/sensitivity.analysis.R | 2 +- modules/uncertainty/R/variance.R | 4 ++-- modules/uncertainty/man/sd.var.Rd | 2 +- modules/uncertainty/man/spline.ensemble.Rd | 4 ++-- modules/uncertainty/tests/Rcheck_reference.log | 18 ++---------------- 5 files changed, 8 insertions(+), 22 deletions(-) diff --git a/modules/uncertainty/R/sensitivity.analysis.R b/modules/uncertainty/R/sensitivity.analysis.R index 4bb8b229d37..fac5135b7eb 100644 --- a/modules/uncertainty/R/sensitivity.analysis.R +++ b/modules/uncertainty/R/sensitivity.analysis.R @@ -24,7 +24,7 @@ sa.splinefun <- function(quantiles.input, quantiles.output) { #--------------------------------------------------------------------------------------------------# ##' Calculates the standard deviation of the variance estimate ##' -##' Uses the equation \sigma^4\left(\frac{2}{n-1}+\frac{\kappa}{n}\right) +##' Uses the equation \eqn{\sigma^4\left(\frac{2}{n-1}+\frac{\kappa}{n}\right)}{\sigma^4 (2/(n-1) + \kappa/n)} ##' @name sd.var ##' @title Standard deviation of sample variance ##' @param x sample diff --git a/modules/uncertainty/R/variance.R b/modules/uncertainty/R/variance.R index fe3d5511802..19c383905e2 100644 --- a/modules/uncertainty/R/variance.R +++ b/modules/uncertainty/R/variance.R @@ -55,8 +55,8 @@ get.gi.phii <- function(splinefuns, trait.samples, maxn = NULL){ ##' Estimate model output based on univariate splines ##' -##' Accepts output from get.gi.phii (the matrix $g(\phi_i)$) and produces -##' spline estimate of $f(phi)$ for use in estimating closure term associated with +##' Accepts output from get.gi.phii (the matrix \eqn{g(\phi_i)}) and produces +##' spline estimate of \eqn{f(\phi)} for use in estimating closure term associated with ##' spline 
approximation ##' @title Spline Ensemble ##' @author David LeBauer diff --git a/modules/uncertainty/man/sd.var.Rd b/modules/uncertainty/man/sd.var.Rd index 7bc7b241892..fe6c570e436 100644 --- a/modules/uncertainty/man/sd.var.Rd +++ b/modules/uncertainty/man/sd.var.Rd @@ -16,7 +16,7 @@ estimate of standard deviation of the sample variance Calculates the standard deviation of the variance estimate } \details{ -Uses the equation \sigma^4\left(\frac{2}{n-1}+\frac{\kappa}{n}\right) +Uses the equation \eqn{\sigma^4\left(\frac{2}{n-1}+\frac{\kappa}{n}\right)}{\sigma^4 (2/(n-1) + \kappa/n)} } \references{ Mood, Graybill, Boes 1974 'Introduction to the Theory of Statistics' 3rd ed. p 229; Casella and Berger 'Statistical Inference' p 364 ex. 7.45; 'Reference for Var(s^2)' CrossValidated \url{http://stats.stackexchange.com/q/29905/1381}, 'Calculating required sample size, precision of variance estimate' CrossValidated \url{http://stats.stackexchange.com/q/7004/1381}, 'Variance of Sample Variance?' Mathematics - Stack Exchange \url{http://math.stackexchange.com/q/72975/3733} diff --git a/modules/uncertainty/man/spline.ensemble.Rd b/modules/uncertainty/man/spline.ensemble.Rd index 5dbf10b467a..0827106a184 100644 --- a/modules/uncertainty/man/spline.ensemble.Rd +++ b/modules/uncertainty/man/spline.ensemble.Rd @@ -15,8 +15,8 @@ spline.ensemble(gi.phii, median) Estimate model output based on univariate splines } \details{ -Accepts output from get.gi.phii (the matrix $g(\phi_i)$) and produces -spline estimate of $f(phi)$ for use in estimating closure term associated with +Accepts output from get.gi.phii (the matrix \eqn{g(\phi_i)}) and produces +spline estimate of \eqn{f(\phi)} for use in estimating closure term associated with spline approximation } \author{ diff --git a/modules/uncertainty/tests/Rcheck_reference.log b/modules/uncertainty/tests/Rcheck_reference.log index bb57f9b3a87..bc97509d7a1 100644 --- a/modules/uncertainty/tests/Rcheck_reference.log +++ b/modules/uncertainty/tests/Rcheck_reference.log @@ -16,15 +16,7 @@ * checking for portable file names ... OK * checking for sufficient/correct file permissions ... OK * checking serialization versions ... OK -* checking whether package ‘PEcAn.uncertainty’ can be installed ... WARNING -Found the following significant warnings: - Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\sigma' - Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\left' - Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\frac' - Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\kappa' - Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/sd.var.Rd:19: unknown macro '\right' - Warning: /tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00_pkg_src/PEcAn.uncertainty/man/spline.ensemble.Rd:18: unknown macro '\phi' -See ‘/tmp/Rtmpy5mZ7I/PEcAn.uncertainty.Rcheck/00install.out’ for details. +* checking whether package ‘PEcAn.uncertainty’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE @@ -240,13 +232,7 @@ Consider adding "quantile", "rexp", "sd", "splinefun", "var") importFrom("utils", "read.table") to your NAMESPACE file. -* checking Rd files ... 
WARNING -prepare_Rd: sd.var.Rd:19: unknown macro '\sigma' -prepare_Rd: sd.var.Rd:19: unknown macro '\left' -prepare_Rd: sd.var.Rd:19: unknown macro '\frac' -prepare_Rd: sd.var.Rd:19: unknown macro '\kappa' -prepare_Rd: sd.var.Rd:19: unknown macro '\right' -prepare_Rd: spline.ensemble.Rd:18: unknown macro '\phi' +* checking Rd files ... OK * checking Rd metadata ... OK * checking Rd line widths ... NOTE Rd file 'sensitivity.analysis.Rd': From 005173f880614830c855392744559d1bd7878332 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 00:58:47 +0200 Subject: [PATCH 0390/2289] replace hand-written Rd stub with Roxygen --- models/dalec/NAMESPACE | 1 + models/dalec/R/get.model.output.dalec.R | 6 ++++++ models/dalec/man/get.model.output.dalec.Rd | 11 +++++++++++ models/dalec/man/get.model.output.generic.Rd | 11 ----------- models/dalec/tests/Rcheck_reference.log | 5 +---- 5 files changed, 19 insertions(+), 15 deletions(-) create mode 100644 models/dalec/R/get.model.output.dalec.R create mode 100644 models/dalec/man/get.model.output.dalec.Rd delete mode 100644 models/dalec/man/get.model.output.generic.Rd diff --git a/models/dalec/NAMESPACE b/models/dalec/NAMESPACE index 2912413b08a..988cf70da2e 100644 --- a/models/dalec/NAMESPACE +++ b/models/dalec/NAMESPACE @@ -1,5 +1,6 @@ # Generated by roxygen2: do not edit by hand +export(get.model.output.dalec) export(met2model.DALEC) export(model2netcdf.DALEC) export(write.config.DALEC) diff --git a/models/dalec/R/get.model.output.dalec.R b/models/dalec/R/get.model.output.dalec.R new file mode 100644 index 00000000000..f91e4835d60 --- /dev/null +++ b/models/dalec/R/get.model.output.dalec.R @@ -0,0 +1,6 @@ +#' Retrieve model output from local or remote server +#' +#' @export +get.model.output.dalec <- function(){ + # not yet implemented, just put here to go along with its Rd skeleton +} \ No newline at end of file diff --git a/models/dalec/man/get.model.output.dalec.Rd b/models/dalec/man/get.model.output.dalec.Rd new file mode 100644 index 00000000000..d693e1fbafb --- /dev/null +++ b/models/dalec/man/get.model.output.dalec.Rd @@ -0,0 +1,11 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/get.model.output.dalec.R +\name{get.model.output.dalec} +\alias{get.model.output.dalec} +\title{Retrieve model output from local or remote server} +\usage{ +get.model.output.dalec() +} +\description{ +Retrieve model output from local or remote server +} diff --git a/models/dalec/man/get.model.output.generic.Rd b/models/dalec/man/get.model.output.generic.Rd deleted file mode 100644 index 74b92afecee..00000000000 --- a/models/dalec/man/get.model.output.generic.Rd +++ /dev/null @@ -1,11 +0,0 @@ -\name{get.model.output.generic} -\alias{get.model.output.generic} -\title{Retrieve model output from local or remote server} -\usage{ - get.model.output.dalec() -} -\description{ - Function to retrieve model output from local or remote - server -} - diff --git a/models/dalec/tests/Rcheck_reference.log b/models/dalec/tests/Rcheck_reference.log index 5aa2927ee4f..1851b6f9fbb 100644 --- a/models/dalec/tests/Rcheck_reference.log +++ b/models/dalec/tests/Rcheck_reference.log @@ -60,10 +60,7 @@ to your NAMESPACE file. * checking Rd line widths ... OK * checking Rd cross-references ... OK * checking for missing documentation entries ... OK -* checking for code/documentation mismatches ... 
WARNING -Functions or methods with usage in documentation object 'get.model.output.generic' but not in code: - get.model.output.dalec - +* checking for code/documentation mismatches ... OK * checking Rd \usage sections ... WARNING Undocumented arguments in documentation object 'met2model.DALEC' ‘spin_nyear’ ‘spin_nsample’ ‘spin_resample’ ‘...’ From 5beba86b84d74d3c0d9b42b3a7a9f5e83b9b26d0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 01:48:06 +0200 Subject: [PATCH 0391/2289] remove stray trailing lines --- base/all/data/pecan.packages.csv | 2 -- 1 file changed, 2 deletions(-) diff --git a/base/all/data/pecan.packages.csv b/base/all/data/pecan.packages.csv index 0faa264c50e..e009c4e36aa 100644 --- a/base/all/data/pecan.packages.csv +++ b/base/all/data/pecan.packages.csv @@ -1,4 +1,2 @@ 1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16;17;18 "utils";"common";"db";"modules/meta.analysis";"modules/uncertainty";"modules/emulator";"modules/assim.batch";"modules/assim.sequential";"modules/data.land";"modules/photosythesis";"modules/priors";"modules/rtm";"modules/benchmark";models/c4photo";"models/ed";"models/sipnet";"models/maat";"all" - - From dd5875a09bbf95d3c6fa84ff6faee7ca546ff867 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 02:37:06 +0200 Subject: [PATCH 0392/2289] missing quote --- base/all/data/pecan.packages.csv | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/all/data/pecan.packages.csv b/base/all/data/pecan.packages.csv index e009c4e36aa..4fc9fa6db82 100644 --- a/base/all/data/pecan.packages.csv +++ b/base/all/data/pecan.packages.csv @@ -1,2 +1,2 @@ 1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16;17;18 -"utils";"common";"db";"modules/meta.analysis";"modules/uncertainty";"modules/emulator";"modules/assim.batch";"modules/assim.sequential";"modules/data.land";"modules/photosythesis";"modules/priors";"modules/rtm";"modules/benchmark";models/c4photo";"models/ed";"models/sipnet";"models/maat";"all" +"utils";"common";"db";"modules/meta.analysis";"modules/uncertainty";"modules/emulator";"modules/assim.batch";"modules/assim.sequential";"modules/data.land";"modules/photosythesis";"modules/priors";"modules/rtm";"modules/benchmark";"models/c4photo";"models/ed";"models/sipnet";"models/maat";"all" From 6cc07ba7214a5e202b41da86985bd84f9a014bbc Mon Sep 17 00:00:00 2001 From: Betsy Cowdery Date: Thu, 19 Sep 2019 00:23:06 -0400 Subject: [PATCH 0393/2289] Update met.process.R `stage$local` is defined as `TRUE` given a condition, but is never defined as `FALSE` otherwise, which causes errors if neither `stage$download.raw` nor `stage$local` is `TRUE`, i.e. if it's a remote file.
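A minimal illustration of the failure mode (hypothetical values, not part of the actual fix below):

```r
stage <- list(download.raw = FALSE)  # stage$local never assigned
stage$local                          # NULL
# if (stage$local) { ... }           # fails: "argument is of length zero"
isTRUE(stage$local)                  # FALSE: a safe test for a possibly-NULL flag
```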
--- modules/data.atmosphere/R/met.process.R | 2 ++ 1 file changed, 2 insertions(+) diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index f61749c67ef..8db4f153788 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -147,6 +147,8 @@ met.process <- function(site, input_met, start_date, end_date, model, stage$download.raw <- FALSE stage$local <- TRUE } + }else{ + stage$local <- FALSE } PEcAn.logger::logger.debug(stage) From 6482e09a77a08b3efcf64b0151f351bc9c0484f8 Mon Sep 17 00:00:00 2001 From: Betsy Cowdery Date: Thu, 19 Sep 2019 00:36:12 -0400 Subject: [PATCH 0394/2289] Updating documentation --- base/db/man/assign.treatments.Rd | 2 +- base/db/man/check.lists.Rd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/base/db/man/assign.treatments.Rd b/base/db/man/assign.treatments.Rd index 22ae629d18e..30365dc8468 100644 --- a/base/db/man/assign.treatments.Rd +++ b/base/db/man/assign.treatments.Rd @@ -19,7 +19,7 @@ Change treatments to sequential integers Assigns all control treatments the same value, then assigns unique treatments within each site. Each site is required to have a control treatment. The algorithm (incorrectly) assumes that each site has a unique set of experimental -treatments. +treatments. This assumption is required by the data in BETYdb, which does not always consistently name treatments or quantify them in the managements table. It also avoids the need to estimate treatment-by-site interactions in the meta-analysis model. This model uses data in the control treatment to estimate model parameters, so the impact of the assumption is minimal. } \author{ David LeBauer, Carl Davidson, Alexey Shiklomanov diff --git a/base/db/man/check.lists.Rd b/base/db/man/check.lists.Rd index 8063dc975c0..87c48ce198c 100644 --- a/base/db/man/check.lists.Rd +++ b/base/db/man/check.lists.Rd @@ -14,7 +14,7 @@ check.lists(x, y, filename = "species.csv") \item{filename}{one of "species.csv" or "cultivars.csv"} } \value{ -true if two list are the same +true if two lists are the same } \description{ Check two lists. Identical does not work since one can be loaded From cb16f9cca0c0a885c274c686fc16a35d20ee60b3 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 13:50:44 +0200 Subject: [PATCH 0395/2289] document Travis checks --- CHANGELOG.md | 1 + .../05_developer_workflows/04-testing.Rmd | 54 ++++++++++++++++++- 2 files changed, 54 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index e1bb4dc9dad..d8b9658e8eb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,6 +18,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - The `parse` option to `PEcAn.utils::read_web_config` had no effect when `expand` was TRUE (#2421). ### Changed +- Stricter package checking (#2404): `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings. - Updated modules/rtm PROSPECT docs - Updated models/sipnet/R/model2netcdf.SIPNET.R to address issues in PR #2254 - Improved testing (#2281). Automatic Travis CI builds of PEcAn on are now run using three versions of R in parallel.
This should mean fewer issues with new releases and better backwards compatibility, but note that we still only guarantee full compatibility with the current release version of R. The tested versions are: diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd index ea298e7c179..28ebf7c38e5 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd @@ -118,4 +118,56 @@ The `batch_run.R` script can take the following command-line arguments: [batch_run]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/batch_run.R [default_tests]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/default_tests.csv -[xml_fun]: +[xml_fun]: + +### Continuous Integration + +Every time anyone commits a change to the PEcAn code, the act of pushing to GitHub triggers an automated build and test of the full PEcAn codebase using [Travis CI](https://travis-ci.org/pecanProject/pecan), and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code. + +At this writing (September 2019), all our Travis builds run on the same version of Linux (currently Ubuntu 16.04) using three different versions of R in parallel: previous release, current release, and nightly builds of the R development branch. In most cases the build should pass on all three versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on R:oldrel as developer time and forward compatibility allow. + +Each build starts by launching three clean virtual machines (one for each R version) and performs roughly the following actions on all of them: + +* Installs binary system dependencies needed by PEcAn (NetCDF and HDF5 libraries, JAGS, udunits, etc). +* Installs prebuilt binaries of a few R packages from [cran2deb4ubuntu](https://launchpad.net/~marutter/+archive/ubuntu/c2d4u3.5). + - This is only a small subset of the PEcAn dependency list, mostly the ones that take a very long time to compile but have binaries small enough to download quickly. + - The rest are installed as needed during the installation of PEcAn packages. + - Because these packages are compiled for a specific version of R and we test on three different versions, in some cases the installed binary is incompatible with the installed R version and is overwritten by a locally-compiled one in a later step. At this writing this is only done on R 3.5 (the current R:oldrel), but this may change. +* Clones the PEcAn repository from GitHub, and checks out the branch to be tested. +* Retrieves any cached files available from previous Travis builds. + - The main thing in the cache is previously-installed dependency R packages, to avoid recompiling them every time. + - If the cache becomes stale or is preventing a package update needed by the build (e.g. to get a new version that contains a needed bug fix), delete the cache through the Travis web interface and it will be reconstructed on the next build. 
+ - Because PEcAn has so many dependencies, builds with no cache will spend most of their time recompiling packages and will often run out of time before the tests complete. You can fix this by [triggering a custom build](https://blog.travis-ci.com/2017-08-24-trigger-custom-build) using a custom config that installs but does not test PEcAn (e.g. `script: make install`), waiting for this to finish and upload its new cache, then restarting the standard full build. +* Initializes a skeleton version of the PEcAn database (BeTY) containing a few public records to be used by the test runs. +* Installs all the PEcAn packages, recompiling documentation and installing dependencies as needed. +* Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`). + - As discussed in [Unit testing](#developer-testing-unit), these tests should run quickly and test individual components in relative isolation. + - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code! +* Runs R package checks (the same ones you run locally with `make check` or `devtools::check(pkgname)`), skipping tests and documentation rebuild because we just did those in the previous steps. + - Any ERROR in the check output will stop the build immediately. + - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. + - If all messages from the current build were also present in the reference result, the check passes. If any messages are newly added, a build failure is reported (see the sketch just after this list). + - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it! + - The idea here is to enforce good coding practice and catch likely errors in all new code while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once. + - As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear. + - If a check fails when you think it ought to be grandfathered, please fix it anyway. They all need to be cleaned up eventually, and it's likely easier to fix the error than to figure out how to re-exclude it. +* Runs a simple PEcAn workflow from beginning to end (three years of SIPNET at Niwot Ridge using stored weather data, meta-analysis on fixed effects only, sensitivity analysis on NPP), and verifies that the models complete successfully. +* Checks whether any version-controlled files have been changed by the build and testing process, and marks a build failure if they have. + - If your build fails at this step, the most common cause is that you forgot to Roxygenize before committing. + - Note that this only finds changes to files that have already been committed. If you created a *new* file and forgot to add it to the commit, the CI build will not be able to detect the problem. +
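+As a minimal sketch of that comparison (the real logic, including message normalization, lives in `scripts/check_with_errors.R`; the message strings below are made up):
+
+```r
+prev_msgs <- c("checking R code for possible problems ... NOTE",
+               "foo: no visible binding for global variable 'bar'")
+cur_msgs  <- c("checking R code for possible problems ... NOTE",
+               "foo: no visible binding for global variable 'bar'",
+               "Undocumented arguments in documentation object 'foo': 'x'")
+lines_changed <- setdiff(cur_msgs, prev_msgs)  # strict string comparison
+if (length(lines_changed) > 0) {
+  cat("R check returned new problems:", lines_changed, sep = "\n")
+}
+```
+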
+If any of these steps reports an error, the build is marked as "broken" and stops before the next step. If they all pass, the Travis CI bot marks the build as successful and tells the GitHub bot that it's OK to allow your changes to be merged... but the final decision belongs to the human reviewing your code and they might still ask you for other changes! + +After a successful build, Travis performs two post-build steps: + +* Compiles the PEcAn documentation book (`book_source`) and the tutorials (`documentation/tutorials`) and uploads them to the [PEcAn website](https://pecanproject.github.io/pecan-documentation). + - This is only done for commits to the `master` or `develop` branch, so changes to in-progress pull requests never change the live documentation until after they are merged. +* Packs up selected build artifacts into a cache file and uploads it to the Travis servers for use on the next build. + +The post-build steps are allowed to fail without breaking the build. If you made documentation changes but don't see them deployed, or if your build seems to be reinstalling packages that ought to be cached, inspect the Travis logs of the previous supposedly-successful build to see if their uploads succeeded. + +All of the above descriptions apply to the build Travis generates when you push to the main `PecanProject/pecan` repository, either by directly pushing to a branch or by opening a pull request. If you like, you can also [enable Travis builds](https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci) from your own PEcAn fork. This can be useful for several reasons: + +* It lets you test whether your changes worked before you open a pull request. +* It often lets you get faster test results: The PEcAn project uses Travis CI's free tier, which only allows a few simultaneous build jobs per repository. If many people are pushing code at the same time, your build might wait in line for a long time at PecanProject/pecan but start immediately at yourname/pecan. +* If you will be editing the documentation a lot and want to see rendered previews of your in-progress work (instead of waiting until it is merged into develop), you can clone the [pecan-documentation](https://github.com/PecanProject/pecan-documentation) repository to your own GitHub account and let Travis update it for you. From edb46d5dce8db5783f2d944d1574040f19e910a0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 14:58:50 +0200 Subject: [PATCH 0396/2289] Update book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd Co-Authored-By: Alexey Shiklomanov --- .../05_developer_workflows/04-testing.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd index 28ebf7c38e5..fb8e4bce9b8 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd @@ -142,7 +142,7 @@ Each build starts by launching three clean virtual machines (one for each R vers * Installs all the PEcAn packages, recompiling documentation and installing dependencies as needed. * Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`). - As discussed in [Unit testing](#developer-testing-unit), these tests should run quickly and test individual components in relative isolation. - Any test that calls the `skip_on_ci` function will be skipped.
This is useful for tests that need to run for a very long time or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code! + - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time (e.g. large data product downloads) or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code! * Runs R package checks (the same ones you run locally with `make check` or `devtools::check(pkgname)`), skipping tests and documentation rebuild because we just did those in the previous steps. - Any ERROR in the check output will stop the build immediately. - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. From 96b8965ed6bcab63d1bc4dafafb9781b8b873a38 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 14:59:43 +0200 Subject: [PATCH 0397/2289] Update book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd Co-Authored-By: Alexey Shiklomanov --- .../05_developer_workflows/04-testing.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd index fb8e4bce9b8..7e6b1a1298c 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd @@ -124,7 +124,7 @@ The `batch_run.R` script can take the following command-line arguments: Every time anyone commits a change to the PEcAn code, the act of pushing to GitHub triggers an automated build and test of the full PEcAn codebase using [Travis CI](https://travis-ci.org/pecanProject/pecan), and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code. -At this writing (September 2019), all our Travis builds run on the same version of Linux (currently Ubuntu 16.04) using three different versions of R in parallel: previous release, current release, and nightly builds of the R development branch. In most cases the build should pass on all three versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on R:oldrel as developer time and forward compatibility allow. +At this writing (September 2019), all our Travis builds run on the same version of Linux (currently Ubuntu 16.04, `xenial`) using three different versions of R in parallel: previous release, current release, and nightly builds of the R development branch. In most cases the build should pass on all three versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on R:oldrel as developer time and forward compatibility allow. 
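+As an example of the `skip_on_ci` pattern mentioned earlier, a long-running test can opt out of the CI run like this (hypothetical test body; assumes the testthat package):
+
+```r
+library(testthat)
+
+test_that("a large data product downloads correctly", {
+  skip_on_ci()  # too slow for Travis; run it locally before pushing
+  # ... long-running download and assertions would go here ...
+})
+```
+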
Each build starts by launching three clean virtual machines (one for each R version) and performs roughly the following actions on all of them: From 2eac81f9dc842ed366c7b9d5ea7b3ac946cd3450 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 19 Sep 2019 15:07:48 +0200 Subject: [PATCH 0398/2289] remove dummy functions --- models/dalec/NAMESPACE | 1 - models/dalec/R/get.model.output.dalec.R | 6 ------ models/dalec/man/get.model.output.dalec.Rd | 11 ----------- 3 files changed, 18 deletions(-) delete mode 100644 models/dalec/R/get.model.output.dalec.R delete mode 100644 models/dalec/man/get.model.output.dalec.Rd diff --git a/models/dalec/NAMESPACE b/models/dalec/NAMESPACE index 988cf70da2e..2912413b08a 100644 --- a/models/dalec/NAMESPACE +++ b/models/dalec/NAMESPACE @@ -1,6 +1,5 @@ # Generated by roxygen2: do not edit by hand -export(get.model.output.dalec) export(met2model.DALEC) export(model2netcdf.DALEC) export(write.config.DALEC) diff --git a/models/dalec/R/get.model.output.dalec.R b/models/dalec/R/get.model.output.dalec.R deleted file mode 100644 index f91e4835d60..00000000000 --- a/models/dalec/R/get.model.output.dalec.R +++ /dev/null @@ -1,6 +0,0 @@ -#' Retrieve model output from local or remote server -#' -#' @export -get.model.output.dalec <- function(){ - # not yet implemented, just put here to go along with its Rd skeleton -} \ No newline at end of file diff --git a/models/dalec/man/get.model.output.dalec.Rd b/models/dalec/man/get.model.output.dalec.Rd deleted file mode 100644 index d693e1fbafb..00000000000 --- a/models/dalec/man/get.model.output.dalec.Rd +++ /dev/null @@ -1,11 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.dalec.R -\name{get.model.output.dalec} -\alias{get.model.output.dalec} -\title{Retrieve model output from local or remote server} -\usage{ -get.model.output.dalec() -} -\description{ -Retrieve model output from local or remote server -} From 4723b4b0062de9f0039292c9348df152b35d92aa Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 19 Sep 2019 12:35:46 -0400 Subject: [PATCH 0399/2289] Fall back to the ncss method --- .../R/download.CRUNCEP_Global.R | 6 +- .../data.atmosphere/man/download.CRUNCEP.Rd | 2 +- web/config.example.php | 125 ------------------ 3 files changed, 4 insertions(+), 129 deletions(-) delete mode 100644 web/config.example.php diff --git a/modules/data.atmosphere/R/download.CRUNCEP_Global.R b/modules/data.atmosphere/R/download.CRUNCEP_Global.R index a8ce003a3dd..a7c71088557 100644 --- a/modules/data.atmosphere/R/download.CRUNCEP_Global.R +++ b/modules/data.atmosphere/R/download.CRUNCEP_Global.R @@ -24,9 +24,9 @@ ##' @author James Simkins, Mike Dietze, Alexey Shiklomanov download.CRUNCEP <- function(outfolder, start_date, end_date, site_id, lat.in, lon.in, overwrite = FALSE, verbose = FALSE, maxErrors = 10, sleep = 2, - method = "opendap", ...) { + method = "ncss", ...) { - if (is.null(method)) method <- "opendap" + if (is.null(method)) method <- "ncss" if (!method %in% c("opendap", "ncss")) { PEcAn.logger::logger.severe(glue::glue( "Bad method '{method}'. Currently, only 'opendap' or 'ncss' are supported." 
@@ -220,7 +220,7 @@ download.CRUNCEP <- function(outfolder, start_date, end_date, site_id, lat.in, l "but got", min(dap_time), "..", max(dap_time)) } - + dat.list[[j]] <- PEcAn.utils::retry.func( ncdf4::ncvar_get( dap, diff --git a/modules/data.atmosphere/man/download.CRUNCEP.Rd b/modules/data.atmosphere/man/download.CRUNCEP.Rd index 1bdcc6cca3d..416602ed2a4 100644 --- a/modules/data.atmosphere/man/download.CRUNCEP.Rd +++ b/modules/data.atmosphere/man/download.CRUNCEP.Rd @@ -6,7 +6,7 @@ \usage{ download.CRUNCEP(outfolder, start_date, end_date, site_id, lat.in, lon.in, overwrite = FALSE, verbose = FALSE, maxErrors = 10, sleep = 2, - method = "opendap", ...) + method = "ncss", ...) } \arguments{ \item{outfolder}{Directory where results should be written} diff --git a/web/config.example.php b/web/config.example.php deleted file mode 100644 index 653cd64fcfb..00000000000 --- a/web/config.example.php +++ /dev/null @@ -1,125 +0,0 @@ - array(), - "geo.bu.edu" => - array("displayname" => "geo", - "qsub" => "qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash", - "jobid" => "Your job ([0-9]+) .*", - "qstat" => "qstat -j @JOBID@ || echo DONE", - "prerun" => "module load udunits R/R-3.0.0_gnu-4.4.6", - "postrun" => "sleep 60", - "models" => - array("ED2" => - array("prerun" => "module load hdf5"), - "ED2 (r82)" => - array("prerun" => "module load hdf5") - ) - ) - ); - -# Folder where PEcAn is installed -$R_library_path="/home/carya/R/library"; - -# Location where PEcAn is installed, not really needed anymore -$pecan_home="/home/carya/pecan/"; - -# Folder where the runs are stored -$output_folder="/home/carya/output/"; - -# Folder where the generated files are stored -$dbfiles_folder=$output_folder . "/dbfiles"; - -# location of BETY DB set to empty to not create links, can be both -# relative or absolute paths or full URL's. 
Should point to the base -# of BETYDB -$betydb="/bety"; - -# ---------------------------------------------------------------------- -# SIMPLE EDITING OF BETY DATABSE -# ---------------------------------------------------------------------- -# Number of items to show on a page -$pagesize = 30; - -# Location where logs should be written -$logfile = "/home/carya/output/betydb.log"; - -# uncomment the following variable to enable the simple interface -#$simpleBETY = TRUE; - -# syncing details - -$server_url="192.168.0.5"; // local test server -$client_sceret=""; -$server_auth_token=""; - -?> From 8edc28d91f61f9a33209afb34030e62e990f926f Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 19 Sep 2019 12:40:29 -0400 Subject: [PATCH 0400/2289] config --- web/config.example.php | 125 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 web/config.example.php diff --git a/web/config.example.php b/web/config.example.php new file mode 100644 index 00000000000..7eafcfb4072 --- /dev/null +++ b/web/config.example.php @@ -0,0 +1,125 @@ + array(), + "geo.bu.edu" => + array("displayname" => "geo", + "qsub" => "qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash", + "jobid" => "Your job ([0-9]+) .*", + "qstat" => "qstat -j @JOBID@ || echo DONE", + "prerun" => "module load udunits R/R-3.0.0_gnu-4.4.6", + "postrun" => "sleep 60", + "models" => + array("ED2" => + array("prerun" => "module load hdf5"), + "ED2 (r82)" => + array("prerun" => "module load hdf5") + ) + ) + ); + +# Folder where PEcAn is installed +$R_library_path="~/R/library"; + +# Location where PEcAn is installed, not really needed anymore +$pecan_home="/fs/data3/hamzed/pecan"; + +# Folder where the runs are stored +$output_folder="/home/carya/output/"; + +# Folder where the generated files are stored +$dbfiles_folder=$output_folder . "/dbfiles"; + +# location of BETY DB set to empty to not create links, can be both +# relative or absolute paths or full URL's. 
Should point to the base +# of BETYDB +$betydb="/bety"; + +# ---------------------------------------------------------------------- +# SIMPLE EDITING OF BETY DATABSE +# ---------------------------------------------------------------------- +# Number of items to show on a page +$pagesize = 30; + +# Location where logs should be written +$logfile = "/home/carya/output/betydb.log"; + +# uncomment the following variable to enable the simple interface +#$simpleBETY = TRUE; + +# syncing details + +$server_url="192.168.0.5"; // local test server +$client_sceret=""; +$server_auth_token=""; + +?> From 7684385d7ad09b99dd862d3451682a90c1f2dee9 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 19 Sep 2019 12:41:52 -0400 Subject: [PATCH 0401/2289] config php --- web/config.example.php | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/web/config.example.php b/web/config.example.php index 7eafcfb4072..0e840e7c64b 100644 --- a/web/config.example.php +++ b/web/config.example.php @@ -2,7 +2,7 @@ # Information to connect to the BETY database $db_bety_type="pgsql"; -$db_bety_hostname="128.197.168.114"; +$db_bety_hostname="localhost"; $db_bety_port=5432; $db_bety_username="bety"; $db_bety_password="bety"; @@ -78,7 +78,7 @@ "qstat" => "qstat -j @JOBID@ || echo DONE", "prerun" => "module load udunits R/R-3.0.0_gnu-4.4.6", "postrun" => "sleep 60", - "models" => + "models" => array("ED2" => array("prerun" => "module load hdf5"), "ED2 (r82)" => @@ -88,10 +88,10 @@ ); # Folder where PEcAn is installed -$R_library_path="~/R/library"; +$R_library_path="/home/carya/R/library"; # Location where PEcAn is installed, not really needed anymore -$pecan_home="/fs/data3/hamzed/pecan"; +$pecan_home="/home/carya/pecan/"; # Folder where the runs are stored $output_folder="/home/carya/output/"; @@ -122,4 +122,4 @@ $client_sceret=""; $server_auth_token=""; -?> +?> \ No newline at end of file From 4a978b556b7b0f7f6eb89268b40c50429b15fcd2 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 19 Sep 2019 12:45:42 -0400 Subject: [PATCH 0402/2289] fix config --- web/config.example.php | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/config.example.php b/web/config.example.php index 0e840e7c64b..653cd64fcfb 100644 --- a/web/config.example.php +++ b/web/config.example.php @@ -122,4 +122,4 @@ $client_sceret=""; $server_auth_token=""; -?> \ No newline at end of file +?> From f31b94fb1223a7d027b1c24c5b94f2eae599e3fa Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 19 Sep 2019 14:28:47 -0400 Subject: [PATCH 0403/2289] Update modules/data.atmosphere/R/download.CRUNCEP_Global.R Co-Authored-By: Alexey Shiklomanov --- modules/data.atmosphere/R/download.CRUNCEP_Global.R | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.CRUNCEP_Global.R b/modules/data.atmosphere/R/download.CRUNCEP_Global.R index a7c71088557..004337e82c2 100644 --- a/modules/data.atmosphere/R/download.CRUNCEP_Global.R +++ b/modules/data.atmosphere/R/download.CRUNCEP_Global.R @@ -220,7 +220,6 @@ download.CRUNCEP <- function(outfolder, start_date, end_date, site_id, lat.in, l "but got", min(dap_time), "..", max(dap_time)) } - dat.list[[j]] <- PEcAn.utils::retry.func( ncdf4::ncvar_get( dap, From 095db698bdf459734be27cc2526b02b665fea715 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Thu, 19 Sep 2019 19:13:25 -0700 Subject: [PATCH 0404/2289] Update CHANGELOG.md --- CHANGELOG.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index d8b9658e8eb..d3e0f768217 100644 --- 
a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,6 +8,8 @@ For more information about this file see also [Keep a Changelog](http://keepacha ## [Unreleased] ### Fixes + +- Fix #2424 issue with cruncep download: use netcdf subset (ncss) method instead of opendap - Fixed issue that prevented modellauncher from working properly #2262 - Use explicit namespacing (`package::function`) throughout `PEcAn.meta.analysis`. Otherwise, many of these functions would fail when trying to run a meta-analysis outside of the PEcAn workflow (i.e. without having loaded the packages first) (#2351). - Standardize how `PEcAn.DB` tests create database connections, and make sure tests work with both the newer `Postgres` and older `PostgreSQL` drivers (#2351). From 53d1576930d38872957a0e413dcff017a9300fb3 Mon Sep 17 00:00:00 2001 From: Betsy Cowdery Date: Thu, 19 Sep 2019 22:52:58 -0400 Subject: [PATCH 0405/2289] Update symmetric_setdiff.R (#2428) Typo found. Now this function will actually find the differences between data frames, because it won't set them equal to each other! --- base/db/R/symmetric_setdiff.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/db/R/symmetric_setdiff.R b/base/db/R/symmetric_setdiff.R index 1841bde284e..1056ec0b5fd 100644 --- a/base/db/R/symmetric_setdiff.R +++ b/base/db/R/symmetric_setdiff.R @@ -53,7 +53,7 @@ symmetric_setdiff <- function(x, y, xname = "x", yname = "y", } if (simplify_types) { x <- dplyr::mutate_if(x, ~!is.numeric(.), as.character) - y <- dplyr::mutate_if(x, ~!is.numeric(.), as.character) + y <- dplyr::mutate_if(y, ~!is.numeric(.), as.character) } namecol <- dplyr::sym(namecol) xy <- dplyr::setdiff(x, y) %>%
:= NULL #' @export diff --git a/base/db/tests/testthat/test.symmetric-setdiff.R b/base/db/tests/testthat/test.symmetric-setdiff.R index eaf1bf961e0..ada630cfb35 100644 --- a/base/db/tests/testthat/test.symmetric-setdiff.R +++ b/base/db/tests/testthat/test.symmetric-setdiff.R @@ -14,3 +14,26 @@ test_that("Symmetric setdiff works", { expect_match(msg, "Detected at least one `integer64` column") expect_equal(nrow(xydiff), 0) }) + +test_that("Unequal dfs compare unequal", { + expect_error( + symmetric_setdiff(data.frame(a = 1L), data.frame(b = 1L)), + "Cols in x but not y") + d <- symmetric_setdiff(data.frame(a = 1:3L), data.frame(a = 1:4L)) + expect_length(d$a, 1L) + expect_equal(d$a, 4L) + expect_equal(d$source, "y") + +}) + +test_that("symmetric inputs give same output", { + x <- data.frame(a=1:3L, b=LETTERS[1:3L]) + y <- data.frame(a=2:5L, b=LETTERS[2:5L]) + xy <- symmetric_setdiff(x, y) + yx <- symmetric_setdiff(y, x) + purrr::walk2(xy, yx, expect_setequal) + expect_equal( + # left input aways labeled x -> xy$source is inverse of yx$source + dplyr::select(xy, -source) %>% dplyr::arrange(a), + dplyr::select(yx, -source) %>% dplyr::arrange(a)) +}) From 7b41f182d5f76c136015369d7ee04e503d38df1b Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 20 Sep 2019 11:25:51 +0200 Subject: [PATCH 0407/2289] remove from saved checks --- base/db/tests/Rcheck_reference.log | 7 ++----- docker/depends/pecan.depends | 1 + 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log index 4f72f2fcdf1..54b0fcb2874 100644 --- a/base/db/tests/Rcheck_reference.log +++ b/base/db/tests/Rcheck_reference.log @@ -34,9 +34,7 @@ License components with restrictions not permitted: * checking whether the namespace can be loaded with stated dependencies ... OK * checking whether the namespace can be unloaded cleanly ... OK * checking loading without being on the library search path ... OK -* checking dependencies in R code ... WARNING -'::' or ':::' import not declared from: ‘bit64’ -'loadNamespace' or 'requireNamespace' call not declared from: ‘bit64’ +* checking dependencies in R code ... OK * checking S3 generic/method consistency ... OK * checking replacement functions ... OK * checking foreign function calls ... OK @@ -113,10 +111,9 @@ search_reference_single: no visible binding for global variable ‘author_given’ search_reference_single: no visible binding for global variable ‘issued’ -symmetric_setdiff: no visible global function definition for ‘:=’ workflows: no visible binding for global variable ‘workflow_id’ Undefined global functions or variables: - := . as author author_family author_given citation_id collect + . 
as author author_family author_given citation_id collect container_id container_type created_at cultivar_id ensemble_id folder genus greenhouse head id issued machine_id n name name.cv name.mt pft_id pft_type posix read.output run_id scientificname score diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index 4062b1c0f9a..343a3e3b8ce 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -22,6 +22,7 @@ install2.r -e -s -n -1\ BayesianTools \ binaryLogic \ BioCro \ + bit64 \ car \ coda \ data.table \ From dd37f5736525c5bba8510adeaf8fe372062eb89f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 20 Sep 2019 12:01:10 +0200 Subject: [PATCH 0408/2289] changelog --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index d8b9658e8eb..36e42a51710 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -16,6 +16,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Explicitly use `PEcAn.uncertainty::read.ensemble.output` in `PEcAn.utils::get.results`. Otherwise, it would sometimes use the deprecated `PEcAn.utils::read.ensemble.output` version. - History page would not pass the hostname parameter when showing a running workflow, this would result in the running page showing an error. - The `parse` option to `PEcAn.utils::read_web_config` had no effect when `expand` was TRUE (#2421). +- Fixed a typo that made `PEcAn.DB::symmetric_setdiff` falsely report no differences (#2428). ### Changed - Stricter package checking (#2404): `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings. From db85935b5dcb6792c99c5105a9bf60143cd16b97 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 21 Sep 2019 09:45:24 +0200 Subject: [PATCH 0409/2289] pull post-1.7.1 changes into unreleased section --- CHANGELOG.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index f0538811744..a207ba4da6b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,7 +8,12 @@ For more information about this file see also [Keep a Changelog](http://keepacha ## [Unreleased] ### Fixed -- Fix #2424 issue with cruncep download: use netcdf subset (ncss) method instead of opendap +- Fix issue with cruncep download: use netcdf subset (ncss) method instead of opendap (#2424). +- The `parse` option to `PEcAn.utils::read_web_config` had no effect when `expand` was TRUE (#2421). +- Fixed a typo that made `PEcAn.DB::symmetric_setdiff` falsely report no differences (#2428). + +### Changed +- Stricter package checking: `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings (#2404). ## [1.7.1] - 2018-09-12 @@ -23,11 +28,8 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Replace deprecated `rlang::UQ` syntax with the recommended `!!` - Explicitly use `PEcAn.uncertainty::read.ensemble.output` in `PEcAn.utils::get.results`. 
Otherwise, it would sometimes use the deprecated `PEcAn.utils::read.ensemble.output` version. - History page would not pass the hostname parameter when showing a running workflow, this would result in the running page showing an error. -- The `parse` option to `PEcAn.utils::read_web_config` had no effect when `expand` was TRUE (#2421). -- Fixed a typo that made `PEcAn.DB::symmetric_setdiff` falsely report no differences (#2428). ### Changed -- Stricter package checking (#2404): `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings. - Updated modules/rtm PROSPECT docs - Updated models/sipnet/R/model2netcdf.SIPNET.R to address issues in PR #2254 - Improved testing (#2281). Automatic Travis CI builds of PEcAn on are now run using three versions of R in parallel. This should mean fewer issues with new releases and better backwards compatibility, but note that we still only guarantee full compatibility with the current release version of R. The tested versions are: From 8dbd707b9fe1a4244e93cd6debdc99e597933470 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 23 Sep 2019 03:14:17 -0400 Subject: [PATCH 0410/2289] move hier fnc out --- modules/assim.batch/R/hier.mcmc.R | 244 ++++++++++++++++++++++++++++++ 1 file changed, 244 insertions(+) create mode 100644 modules/assim.batch/R/hier.mcmc.R diff --git a/modules/assim.batch/R/hier.mcmc.R b/modules/assim.batch/R/hier.mcmc.R new file mode 100644 index 00000000000..dc05036b838 --- /dev/null +++ b/modules/assim.batch/R/hier.mcmc.R @@ -0,0 +1,244 @@ +########### hierarchical MCMC function with Gibbs ############## + + +hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, + jmp0, mu_site_init, nparam, nsites, prior.fn.all, prior.ind.all){ + + pos.check <- sapply(settings$assim.batch$inputs, `[[`, "ss.positive") + + if(length(unlist(pos.check)) == 0){ + # if not passed from settings assume none + pos.check <- rep(FALSE, length(settings$assim.batch$inputs)) + }else if(length(unlist(pos.check)) != length(settings$assim.batch$inputs)){ + # maybe one provided, but others are forgotten + # check which ones are provided in settings + from.settings <- sapply(seq_along(pos.check), function(x) !is.null(pos.check[[x]])) + tmp.check <- rep(FALSE, length(settings$assim.batch$inputs)) + # replace those with the values provided in the settings + tmp.check[from.settings] <- as.logical(unlist(pos.check)) + pos.check <- tmp.check + }else{ + pos.check <- as.logical(pos.check) + } + + ################################################################ + # + # mu_site : site level parameters (nsite x nparam) + # tau_site : site level precision (nsite x nsite) + # mu_global : global parameters (nparam) + # tau_global : global precision matrix (nparam x nparam) + # + ################################################################ + + + + ###### (hierarchical) global mu priors + # + # mu_global_mean : prior mean vector + # mu_global_sigma : prior covariance matrix + # mu_global_tau : prior precision matrix + # + # mu_global ~ MVN (mu_global_mean, mu_global_tau) + + #approximate a normal dist + mu_init_samp <- matrix(NA, ncol = nparam, nrow = 1000) + for(ps in seq_along(prior.ind.all)){ + prior.quantiles <- eval(prior.fn.all$rprior[[prior.ind.all[ps]]], list(n = 1000)) + 
mu_init_samp[,ps] <- prior.quantiles + } + + # mean hyperprior + mu_global_mean <- apply(mu_init_samp, 2, mean) + + # sigma/tau hyperprior + distto <- rbind(abs(mu_global_mean - rng_orig[,1]), abs(mu_global_mean - rng_orig[,2])) + mu_global_sigma <- diag((apply(distto, 2, min)/4)^2) + mu_global_tau <- solve(mu_global_sigma) + + + ## initialize mu_global (nparam) + mu_global <- rmvnorm(1, mu_global_mean, mu_global_sigma) + + + ###### (hierarchical) global tau priors + # + # tau_global_df : Wishart degrees of freedom + # tau_global_sigma : Wishart scale matrix + # + # tau_global ~ W (tau_global_df, tau_global_sigma) + # sigma_global <- solve(tau_global) + # + + sigma_global_df <- nparam + 1 + sigma_global_scale <- (cov(mu_init_samp)/sigma_global_df) + + # initialize sigma_global (nparam x nparam) + sigma_global <- riwish(sigma_global_df, sigma_global_scale) + + # initialize jcov.arr (jump variances per site) + jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) + for(j in seq_len(nsites)) jcov.arr[,,j] <- jmp0 + + # prepare mu_site (nsite x nparam) + mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) + + mu_site_curr <- mu_site_init + + # values for each site will be accepted/rejected in themselves + currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) + # force it to be nvar x nsites matrix + currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) + currllp <- lapply(seq_len(nsites), function(v) PEcAn.assim.batch::pda.calc.llik.par(settings, nstack[[v]], currSS[,v])) + + # storage + mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) + mu_global_samp <- matrix(NA_real_, nrow = nmcmc, ncol= nparam) + sigma_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) + + musite.accept.count <- rep(0, nsites) + + adapt_orig <- settings$assim.batch$jump$adapt + settings$assim.batch$jump$adapt <- adapt_orig * nsites + + ########################## Start MCMC ######################## + + for(g in 1:nmcmc){ + + # jump adaptation step + if ((g > 2) && ((g - 1) %% settings$assim.batch$jump$adapt == 0)) { + + # update site level jvars + params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] + #colnames(params.recent) <- names(x0) + settings$assim.batch$jump$adapt <- adapt_orig + jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], params.recent[seq(v,adapt_orig * nsites, by=12),,v])) + jcov.arr <- abind::abind(jcov.list, along=3) + musite.accept.count <- rep(0, nsites) # Reset counter + settings$assim.batch$jump$adapt <- adapt_orig * nsites + } + + + ######################################## + # gibbs update tau_global | mu_global, mu_site + # + # W(tau_global | mu_global, mu_site) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) + # + # + # using MVN-Wishart conjugacy + # prior hyperparameters: tau_global_df, tau_global_sigma + # posterior hyperparameters: tau_global_df_gibbs, tau_global_sigma_gibbs + # + # update: + # tau_global ~ W(tau_global_df_gibbs, tau_global_sigma_gibbs) + + + sigma_global_df_gibbs <- sigma_global_df + nsites + + + pairwise_deviation <- apply(mu_site_curr, 1, function(r) r - t(mu_global)) + sum_term <- pairwise_deviation %*% t(pairwise_deviation) + + sigma_global_scale_gibbs <- sigma_global_scale + sum_term + + # update sigma + sigma_global <- riwish(sigma_global_df_gibbs, sigma_global_scale_gibbs) # across-site covariance + + + + 
######################################## + # update mu_global | mu_site, tau_global + # + # MVN(mu_global | mu_site, tau_global) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) + # + # mu_global ~ MVN(global_mu, global_Sigma) + # + # mu_global : global parameters + # global_mu : precision weighted average between the data (mu_site) and prior mean (mu_f) + # global_Sigma : sum of mu_site and mu_f precision + # + # Dietze, 2017, Eqn 13.6 + # mu_global ~ MVN(solve((nsites * sigma_global) + P_f_inv)) * ((nsites * sigma_global) + P_f_inv * mu_f), + # solve((nsites * sigma_global) + P_f_inv)) + + # prior hyperparameters : mu_global_mean, mu_global_sigma + # posterior hyperparameters : mu_global_mean_gibbs, mu_global_sigma_gibbs + # + # update: + # mu_global ~ MVN(mu_global_mean_gibbs, mu_global_sigma_gibbs) + + # calculate mu_global_sigma_gibbs from prior hyperparameters and tau_global + mu_global_sigma_gibbs <- solve(mu_global_tau + nsites * solve(sigma_global)) + + + mu_site_bar <- apply(mu_site_curr, 2, mean) + + # calculate mu_global_mean_gibbs from prior hyperparameters, mu_site_means and tau_global + mu_global_mean_gibbs <- mu_global_sigma_gibbs %*% + (mu_global_tau %*% mu_global_mean + ((nsites*solve(sigma_global)) %*% mu_site_bar)) + + # update mu_global + mu_global <- mvtnorm::rmvnorm(1, mu_global_mean_gibbs, mu_global_sigma_gibbs) # new prior mu to be used below for prior prob. calc. + + + # site level M-H + ######################################## + + # # propose new mu_site on standard normal domain + # for(ns in seq_len(nsites)){ + # repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation + # mu_site_new_stdn[ns,] <- mvtnorm::rmvnorm(1, mu_global, sigma_global) + # check.that <- (mu_site_new_stdn[ns,] > rng_stdn[, 1] & mu_site_new_stdn[ns, ] < rng_stdn[, 2]) + # if(all(check.that)) break + # } + # } + + # propose new site parameter vectors + repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation + thissite <- g %% nsites + if(thissite == 0) thissite <- nsites + proposed <- mvtnorm::rmvnorm(1, mu_site_curr[thissite,], jcov.arr[,,thissite]) + check.that <- (proposed > rng_orig[, 1] & proposed < rng_orig[, 2]) + if(all(check.that)) break + } + + mu_site_new <- matrix(rep(proposed, nsites),ncol=nparam, byrow = TRUE) + + # re-predict current SS + currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) + currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) + + # calculate posterior + currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) + # use new priors for calculating prior probability + currPrior <- dmvnorm(mu_site_curr, mu_global, sigma_global, log = TRUE) + #currPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_curr_stdn[v,], mu_global, sigma_global, log = TRUE))) + currPost <- currLL + currPrior + + # predict new SS + newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,], pos.check)) + newSS <- matrix(newSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) + + # calculate posterior + newllp <- lapply(seq_len(nsites), function(v) pda.calc.llik.par(settings, nstack[[v]], newSS[,v])) + newLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(newSS[,v], llik.fn, newllp[[v]])) + # use new priors for calculating prior probability + newPrior <- dmvnorm(mu_site_new, mu_global, sigma_global, log = TRUE) + #newPrior <- 
unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_new_stdn[v,], mu_s, s_sigma[[v]], log = TRUE))) + newPost <- newLL + newPrior + + ar <- is.accepted(currPost, newPost) + mu_site_curr[ar, ] <- mu_site_new[ar, ] + #mu_site_curr_stdn[ar, ] <- mu_site_new_stdn[ar, ] + musite.accept.count[thissite] <- musite.accept.count[thissite] + ar[thissite] + + + mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)] + mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs + sigma_global_samp[g, , ] <- sigma_global # 100% acceptance for gibbs + + if(g %% 500 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") + } + + return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, sigma_global_samp = sigma_global_samp, + musite.accept.count = musite.accept.count)) +} # hier.mcmc \ No newline at end of file From 4cbcbf1e25239615d81cad34883908b021d4c329 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 23 Sep 2019 03:16:22 -0400 Subject: [PATCH 0411/2289] hier.mcmc moved out --- modules/assim.batch/R/pda.emulator.ms.R | 282 +----------------------- 1 file changed, 4 insertions(+), 278 deletions(-) diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 46307a90f21..de989eef1e3 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -77,7 +77,7 @@ pda.emulator.ms <- function(multi.settings) { # listen repeat{ PEcAn.logger::logger.info("Multi-site calibration running. Please wait.") - Sys.sleep(180) + Sys.sleep(300) check_all_sites <- sapply(emulator_jobs, qsub_run_finished, multi.settings[[1]]$host, multi.settings[[1]]$host$qstat) if(all(check_all_sites)) break } @@ -167,7 +167,7 @@ pda.emulator.ms <- function(multi.settings) { ## remote hack for now ## currently site-level PDA runs on remote but joint and hierarchical runs locally - ## this will change soon (before this PR is finalized) + ## this will change soon (?!) 
## but I'm still developing the code so for now let's change the paths back to local for(i in seq_along(tmp.settings$pfts)){ tmp.settings$pfts[[i]]$outdir <- file.path(tmp.settings$outdir, "pft", basename(tmp.settings$pfts[[i]]$outdir)) @@ -321,22 +321,6 @@ pda.emulator.ms <- function(multi.settings) { tmp.settings$assim.batch$ensemble.id, ".Rdata")) - ## Transform values from non-normal distributions to standard Normal - ## it won't do anything if all priors are already normal - ## edit: actually hierarchical sampling may be assuming standard normal, test for this later - norm_transform <- norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list) - if(!norm_transform$normF){ # means SS values are transformed - - ## Previously emulator was refitted on the standard normal domain - ## Instead I now use original emulators, but switch back and forth between domains - - ## range limits on standard normal domain - rng_stdn <- norm_transform$rng[,,1] #all same, maybe return just one from norm_transform_priors - - ## get new init.list and jmp.list - init.list <- norm_transform$init - - } ## proposing starting points from knots mu_site_init <- list() @@ -359,262 +343,7 @@ pda.emulator.ms <- function(multi.settings) { save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) - ########### hierarchical MCMC function with Gibbs ############## - - - hier.mcmc <- function(settings, gp.stack, nstack, nmcmc, rng_stdn, rng_orig, - mu0, jmp0, mu_site_init, nparam, nsites, prior.fn.all, prior.ind.all){ - - pos.check <- sapply(settings$assim.batch$inputs, `[[`, "ss.positive") - - if(length(unlist(pos.check)) == 0){ - # if not passed from settings assume none - pos.check <- rep(FALSE, length(settings$assim.batch$inputs)) - }else if(length(unlist(pos.check)) != length(settings$assim.batch$inputs)){ - # maybe one provided, but others are forgotten - # check which ones are provided in settings - from.settings <- sapply(seq_along(pos.check), function(x) !is.null(pos.check[[x]])) - tmp.check <- rep(FALSE, length(settings$assim.batch$inputs)) - # replace those with the values provided in the settings - tmp.check[from.settings] <- as.logical(unlist(pos.check)) - pos.check <- tmp.check - }else{ - pos.check <- as.logical(pos.check) - } - - ################################################################ - # - # mu_site : site level parameters (nsite x nparam) - # tau_site : site level precision (nsite x nsite) - # mu_global : global parameters (nparam) - # tau_global : global precision matrix (nparam x nparam) - # - ################################################################ - - - - ###### (hierarchical) global mu priors - # - # mu_global_mean : prior mean vector - # mu_global_sigma : prior covariance matrix - # mu_global_tau : prior precision matrix - # - # mu_global ~ MVN (mu_global_mean, mu_global_tau) - - ### these are all in the STANDARD NORMAL DOMAIN - # we want MVN for mu_global for conjugacy - # stdandard normal to avoid singularity (param units may differ on orders of magnitudes) - - mu_global_mean <- as.matrix(rep(0, nparam)) - mu_global_sigma <- diag(1, nparam) - mu_global_tau <- solve(mu_global_sigma) - - - ## initialize mu_global (nparam) - mu_global <- as.matrix(unlist(mu0)) - - ###### (hierarchical) global tau priors - # - # tau_global_df : Wishart degrees of freedom - # tau_global_sigma : Wishart scale matrix - # - # tau_global ~ W (tau_global_df, tau_global_sigma) - # sigma_global <- solve(tau_global) - # - - tau_global_df <- nparam # the least 
informative choice - tau_global_sigma <- diag(1, nparam) - tau_global_tau <- solve(tau_global_sigma) # will be used in gibbs updating - - # initialize tau_global (nparam x nparam) - tau_global <- rWishart(1, tau_global_df, tau_global_sigma)[,,1] - sigma_global <- solve(tau_global) - - # initialize jcov.arr (jump variances per site) - jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) - for(j in seq_len(nsites)) jcov.arr[,,j] <- jmp0 - - # prepare mu_site (nsite x nparam) - mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) - mu_site_new_stdn <- matrix(NA_real_, nrow = nsites, ncol= nparam) - - mu_site_curr <- mu_site_init - - # values for each site will be accepted/rejected in themselves - currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) - # force it to be nvar x nsites matrix - currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) - currllp <- lapply(seq_len(nsites), function(v) PEcAn.assim.batch::pda.calc.llik.par(settings, nstack[[v]], currSS[,v])) - - # storage - mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) - mu_global_samp <- matrix(NA_real_, nrow = nmcmc, ncol= nparam) - tau_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) - - musite.accept.count <- rep(0, nsites) - - - ########################## Start MCMC ######################## - - for(g in 1:nmcmc){ - - # jump adaptation step - if ((g > 2) && ((g - 1) %% settings$assim.batch$jump$adapt == 0)) { - - # update site level jvars - params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] - #colnames(params.recent) <- names(x0) - jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], params.recent[,,v])) - jcov.arr <- abind::abind(jcov.list, along=3) - musite.accept.count <- rep(0, nsites) # Reset counter - - } - - - ######################################## - # gibbs update tau_global | mu_global, mu_site - # - # W(tau_global | mu_global, mu_site) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) - # - # - # using MVN-Wishart conjugacy - # prior hyperparameters: tau_global_df, tau_global_sigma - # posterior hyperparameters: tau_global_df_gibbs, tau_global_sigma_gibbs - # - # update: - # tau_global ~ W(tau_global_df_gibbs, tau_global_sigma_gibbs) - - tau_global_df_gibbs <- tau_global_df + nsites - - # transform from original domain to standard normal - mu_site_curr_stdn <- sapply(seq_len(nparam), function(x){ - orig.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[x]]], list(q = mu_site_curr[,x])) - norm.vals <- qnorm(orig.quantiles) - return(norm.vals) - }) - - # sum of pairwise deviation products - pairwise_deviation <- apply(mu_site_curr_stdn, 1, function(r) r - t(mu_global)) - sum_term <- pairwise_deviation %*% t(pairwise_deviation) - - tau_global_sigma_gibbs <- solve(tau_global_tau + sum_term) - - # update tau - tau_global <- rWishart(1, df = tau_global_df_gibbs, Sigma = tau_global_sigma_gibbs)[,,1] # across-site precision - sigma_global <- solve(tau_global) # across-site covariance, to be used below - - - ######################################## - # update mu_global | mu_site, tau_global - # - # MVN(mu_global | mu_site, tau_global) ~ MVN( mu_site | mu_global, tau_global) * W(tau_global | tau_df, tau_V) - # - # mu_global ~ MVN(global_mu, global_Sigma) - # - # mu_global : global parameters - # global_mu : precision weighted average between the data (mu_site) and prior mean (mu_f) - # 
global_Sigma : sum of mu_site and mu_f precision - # - # Dietze, 2017, Eqn 13.6 - # mu_global ~ MVN(solve((nsites * sigma_global) + P_f_inv)) * ((nsites * sigma_global) + P_f_inv * mu_f), - # solve((nsites * sigma_global) + P_f_inv)) - - # prior hyperparameters : mu_global_mean, mu_global_sigma - # posterior hyperparameters : mu_global_mean_gibbs, mu_global_sigma_gibbs - # - # update: - # mu_global ~ MVN(mu_global_mean_gibbs, mu_global_sigma_gibbs) - - # calculate mu_global_sigma_gibbs from prior hyperparameters and tau_global - mu_global_sigma_gibbs <- solve(mu_global_tau + nsites * tau_global) - - # Jensen's inequality: take the mean of mu_site_curr, then transform - mu_site_bar <- apply(mu_site_curr, 2, mean) - mu_site_bar_std <- sapply(seq_len(nparam), function(x){ - prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[x]]], list(q = mu_site_bar[x])) - norm.vals <- qnorm(prior.quantiles) - return(norm.vals) - }) - mu_site_bar_std <- as.matrix(mu_site_bar_std) - - # calculate mu_global_mean_gibbs from prior hyperparameters, mu_site_means and tau_global - mu_global_mean_gibbs <- mu_global_sigma_gibbs %*% (mu_global_tau %*% mu_global_mean + (tau_global * nsites) %*% mu_site_bar_std) - - # update mu_global - mu_global <- mvtnorm::rmvnorm(1, mu_global_mean_gibbs, mu_global_sigma_gibbs) # new prior mu to be used below for prior prob. calc. - - - # site level M-H - ######################################## - - # # propose new mu_site on standard normal domain - # for(ns in seq_len(nsites)){ - # repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - # mu_site_new_stdn[ns,] <- mvtnorm::rmvnorm(1, mu_global, sigma_global) - # check.that <- (mu_site_new_stdn[ns,] > rng_stdn[, 1] & mu_site_new_stdn[ns, ] < rng_stdn[, 2]) - # if(all(check.that)) break - # } - # } - - # propose new site parameter vectors - for(ns in seq_len(nsites)){ - repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - mu_site_new[ns,] <- mvtnorm::rmvnorm(1, mu_site_curr[ns,], jcov.arr[,,ns]) - check.that <- (mu_site_new[ns,] > rng_orig[, 1] & mu_site_new[ns, ] < rng_orig[, 2]) - if(all(check.that)) break - } - } - # # transform back to original domain - # mu_site_new <- sapply(seq_len(nparam), function(x){ - # norm.quantiles <- pnorm(mu_site_new_stdn[,x]) - # orig.vals <- eval(prior.fn.all$qprior[[prior.ind.all[x]]], list(p = norm.quantiles)) - # return(orig.vals) - # }) - - mu_site_new_stdn <- sapply(seq_len(nparam), function(x){ - orig.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[x]]], list(q = mu_site_new[,x])) - norm.vals <- qnorm(orig.quantiles) - return(norm.vals) - }) - - # re-predict current SS - currSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) - currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) - - # calculate posterior - currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) - # use new priors for calculating prior probability - currPrior <- dmvnorm(mu_site_curr_stdn, mu_global, sigma_global, log = TRUE) - #currPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_curr_stdn[v,], mu_global, sigma_global, log = TRUE))) - currPost <- currLL + currPrior - - # predict new SS - newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,], pos.check)) - newSS <- matrix(newSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) - - # calculate posterior - newllp <- 
lapply(seq_len(nsites), function(v) pda.calc.llik.par(settings, nstack[[v]], newSS[,v]))
-      newLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(newSS[,v], llik.fn, newllp[[v]]))
-      # use new priors for calculating prior probability
-      newPrior <- dmvnorm(mu_site_new_stdn, mu_global, sigma_global, log = TRUE)
-      #newPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_new_stdn[v,], mu_s, s_sigma[[v]], log = TRUE)))
-      newPost <- newLL + newPrior
-      
-      ar <- is.accepted(currPost, newPost)
-      mu_site_curr[ar, ] <- mu_site_new[ar, ]
-      mu_site_curr_stdn[ar, ] <- mu_site_new_stdn[ar, ]
-      musite.accept.count <- musite.accept.count + ar
-      
-      mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)]
-      mu_global_samp[g,] <- mu_global  # 100% acceptance for gibbs
-      tau_global_samp[g, , ] <- tau_global # 100% acceptance for gibbs
-      
-      if(g %% 500 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations")
-    }
-    
-    return(list(mu_site_samp = mu_site_samp, mu_global_samp = mu_global_samp, tau_global_samp = tau_global_samp,
-                musite.accept.count = musite.accept.count))
-  } # hier.mcmc
+  
 
   # start the clock
   ptm.start <- proc.time()
@@ -632,11 +361,8 @@ pda.emulator.ms <- function(multi.settings) {
     mcmc.out <- parallel::parLapply(cl, seq_len(tmp.settings$assim.batch$chain), function(chain) {
       hier.mcmc(settings = tmp.settings,
                 gp.stack = gp.stack,
-                nstack = NULL,
-                nmcmc = tmp.settings$assim.batch$iter,
-                rng_stdn = rng_stdn,
+                nmcmc = 600000,
                 rng_orig = rng_orig,
-                mu0 = init.list[[chain]],
                 jmp0 = jump_init[[chain]],
                 mu_site_init = mu_site_init[[chain]],
                 nparam = length(prior.ind.all),

From 27d0859c5af8151f3f31c2f35609271bbf516324 Mon Sep 17 00:00:00 2001
From: istfer
Date: Mon, 23 Sep 2019 03:33:04 -0400
Subject: [PATCH 0412/2289] removed copula approach

---
 modules/assim.batch/R/pda.emulator.ms.R |  10 +--
 modules/assim.batch/R/pda.utils.R       | 109 +++-----------------
 2 files changed, 18 insertions(+), 101 deletions(-)

diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R
index de989eef1e3..8a72fda2b72 100644
--- a/modules/assim.batch/R/pda.emulator.ms.R
+++ b/modules/assim.batch/R/pda.emulator.ms.R
@@ -361,7 +361,7 @@ pda.emulator.ms <- function(multi.settings) {
     mcmc.out <- parallel::parLapply(cl, seq_len(tmp.settings$assim.batch$chain), function(chain) {
       hier.mcmc(settings = tmp.settings,
                 gp.stack = gp.stack,
-                nmcmc = 600000,
+                nmcmc = tmp.settings$assim.batch$iter * 3, # need to run chains longer than the individual-site ones
                 rng_orig = rng_orig,
                 jmp0 = jump_init[[chain]],
                 mu_site_init = mu_site_init[[chain]],
@@ -380,15 +380,15 @@ pda.emulator.ms <- function(multi.settings) {
   current.step <- "HIERARCHICAL MCMC END"
   save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file)
 
-  # transform samples from std normal to prior quantiles
-  mcmc.out2 <- back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out)
+  # generate hierarchical posteriors
+  mcmc.out <- generate_hierpost(mcmc.out, rng_orig)
 
   # Collect global params in their own list and postprocess
-  mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all)
+  mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all)
 
   # processing these just for further analysis later, but con=NULL because these samples shouldn't be used for new runs later
   tmp.settings <- 
pda.postprocess(tmp.settings, con = NULL, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical_mean") - mcmc.param.list <- pda.sort.params(mcmc.out2, sub.sample = "hierarchical_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) + mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "hierarchical_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) tmp.settings <- pda.postprocess(tmp.settings, con, mcmc.param.list, pname, prior.list, prior.ind.orig, sffx = "_hierarchical") # Collect site-level params in their own list and postprocess diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index b2cf3cd82bb..1d11fc987e1 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -827,118 +827,35 @@ load_pda_history <- function(workdir, ensemble.id, objects){ return(alist) } -##' Helper function that transforms the values of each parameter into N(0,1) equivalent -##' -##' @param prior.list list of prior data frames, same length as number of pfts -##' @param prior.fn.all list of expressions of d/r/q/p functions of the priors given in the prior.list -##' @param prior.ind.all a vector of indices identifying which params are targeted, indices refer to the row numbers when prior.list sublists are rbinded -##' @param SS.stack list of design matrices for the emulator, length = nsites, each sublist will be of length nvars -##' @param init.list list of initial values for the targeted params, they will change when the range is normalized -##' @param jmp.list list of hump variances, they will change when the range is normalized -##' -##' @return a list of new objects that contain the normalized versions -##' @author Istem Fer -##' @export -norm_transform_priors <- function(prior.list, prior.fn.all, prior.ind.all, SS.stack, init.list, jmp.list){ - - # check for non-normals - prior.all <- do.call("rbind", prior.list) - psel <- prior.all[prior.ind.all, 1] != "norm" - norm.check <- all(!psel) # if all are norm do nothing - - if(!norm.check){ - - rng <- array(NA, dim = c(length(prior.ind.all), 2, length(SS.stack))) - - # need to modify init.list and jmp.list as well - parnames <- names(init.list[[1]]) - - prior.fn.all <- pda.define.prior.fn(prior.all) # setup prior functions again - - for(i in seq_along(SS.stack)){ - # NOTE: there might be differences in dimensions, - # some SS-matrices might have likelihood params such as bias - # check for that later - SS.tmp <- lapply(SS.stack[[i]], function(ss){ - for(p in seq_along(psel)){ - #we transform all - prior.quantiles <- eval(prior.fn.all$pprior[[prior.ind.all[p]]], list(q = ss[,p])) - stdnorm.vals <- qnorm(prior.quantiles) - ss[,p] <- stdnorm.vals - } - return(ss) - }) - SS.stack[[i]] <- SS.tmp - rng.tmp <- apply(SS.tmp[[1]],2,range) - rng[,,i] <- t(rng.tmp[,-ncol(rng.tmp)]) - } - - all_knots <- do.call("rbind", lapply(SS.stack,`[[`,1)) - rand_ind <- sample(seq_len(nrow(all_knots)), length(init.list)) - - for(c in seq_along(init.list)){ - init.list[[c]] <- lapply(seq_along(init.list[[c]]), function(x){ - init.list[[c]][[x]] <- all_knots[rand_ind[c],x] - }) - names(init.list[[c]]) <- parnames - jmp.list[[c]][psel] <- 0.1 * diff(qnorm(c(0.05, 0.95))) - } - - } - - return(list(normSS = SS.stack, normF = norm.check, init = init.list, jmp = jmp.list, - rng = rng, prior.all = prior.all, prior.fn.all = prior.fn.all)) - -} # norm_transform_priors - - -##' Helper function that transforms the samples back to 
their original prior distribution equivalents
 ##'
-##' @param prior.all a dataframe of all priors for both pfts
-##' @param prior.fn.all list of expressions of d/r/q/p functions of the priors with original parameter space
-##' @param prior.ind.all a vector of indices identifying targeted params
-##' @param mcmc.out hierarchical MCMC outputs in standard-normal space
+##' @param mcmc.out hierarchical MCMC outputs
+##' @param rng_orig nparam x 2 matrix, 1st and 2nd columns are the lower and upper prior limits respectively
 ##'
 ##' @return hierarchical MCMC outputs in original parameter space
 ##'
 ##' @author Istem Fer
 ##' @export
-back_transform_posteriors <- function(prior.all, prior.fn.all, prior.ind.all, mcmc.out){
+generate_hierpost <- function(mcmc.out, rng_orig){
 
   for(i in seq_along(mcmc.out)){
 
     mu_global_samp <- mcmc.out[[i]]$mu_global_samp
-    tau_global_samp <- mcmc.out[[i]]$tau_global_samp
+    sigma_global_samp <- mcmc.out[[i]]$sigma_global_samp
 
-    iter_size <- dim(tau_global_samp)[1]
+    iter_size <- dim(sigma_global_samp)[1]
 
-    sigma_global_samp <- tau_global_samp
-    for(si in seq_len(iter_size)){
-      sigma_global_samp[si,,] <- solve(tau_global_samp[si,,])
-    }
-
-    # first calculate hierarchical posteriors from mu_global_samp and tau_global_samp
+    # calculate hierarchical posteriors from mu_global_samp and sigma_global_samp
     hierarchical_samp <- mu_global_samp
     for(si in seq_len(iter_size)){
-      hierarchical_samp[si,] <- mvtnorm::rmvnorm(1, mean = mu_global_samp[si,], sigma = sigma_global_samp[si,,])
-    }
-
-    # back transform all parameter values from standard normal to the original domain
-    mu_sample_tmp <- abind::abind(array(mu_global_samp, dim = c(dim(mu_global_samp), 1)),
-                                  array(hierarchical_samp, dim = c(dim(hierarchical_samp), 1)), along = 3)
-    for(ms in seq_len(dim(mu_sample_tmp)[3])){
-      mcmc.vals <- mu_sample_tmp[,,ms]
-      stdnorm.quantiles <- pnorm(mcmc.vals)
-      # counter, because all cols need transforming back
-      for(ps in seq_along(prior.ind.all)){
-        prior.quantiles <- eval(prior.fn.all$qprior[[prior.ind.all[ps]]], list(p = stdnorm.quantiles[,ps]))
-        mcmc.vals[,ps] <- prior.quantiles
+      repeat{
+        proposed <- mvtnorm::rmvnorm(1, mean = mu_global_samp[si,], sigma = sigma_global_samp[si,,])
+        if(all(proposed >= rng_orig[,1] & proposed <= rng_orig[,2])) break
       }
-      mu_sample_tmp[,,ms] <- mcmc.vals
+      hierarchical_samp[si,] <- proposed
     }
-
-    mcmc.out[[i]]$mu_global_samp <- mu_sample_tmp[,,1]
-    mcmc.out[[i]]$hierarchical_samp <- mu_sample_tmp[,,-1]
+
+    mcmc.out[[i]]$hierarchical_samp <- hierarchical_samp
   }
 
   return(mcmc.out)

From 9801c54235b1de374528045c6edf37a9d00320ed Mon Sep 17 00:00:00 2001
From: istfer
Date: Mon, 23 Sep 2019 03:38:20 -0400
Subject: [PATCH 0413/2289] update functions

---
 modules/assim.batch/DESCRIPTION               |   1 +
 modules/assim.batch/NAMESPACE                 |   3 +-
 modules/assim.batch/R/pda.assessment.R        |  57 +++++++++++++++++++
 modules/assim.batch/R/pda.utils.R             |   2 +-
 .../man/back_transform_posteriors.Rd          |  26 ---------
 modules/assim.batch/man/generate_hierpost.Rd  |  22 +++++++
 .../assim.batch/man/norm_transform_priors.Rd  |  31 ----------
 7 files changed, 82 insertions(+), 60 deletions(-)
 create mode 100644 modules/assim.batch/R/pda.assessment.R
 delete mode 100644 modules/assim.batch/man/back_transform_posteriors.Rd
 create mode 100644 modules/assim.batch/man/generate_hierpost.Rd
 delete mode 100644 modules/assim.batch/man/norm_transform_priors.Rd

diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION
index 
974d8ae6469..0265a7be70d 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -35,6 +35,7 @@ Imports: PEcAn.utils, rjags, stats, + prodlim, udunits2 (>= 0.11), utils, XML diff --git a/modules/assim.batch/NAMESPACE b/modules/assim.batch/NAMESPACE index f308e425146..16020c83324 100644 --- a/modules/assim.batch/NAMESPACE +++ b/modules/assim.batch/NAMESPACE @@ -2,16 +2,15 @@ export(assim.batch) export(autoburnin) -export(back_transform_posteriors) export(correlationPlot) export(gelman_diag_gelmanPlot) export(gelman_diag_mw) +export(generate_hierpost) export(getBurnin) export(load.L2Ameriflux.cf) export(load.pda.data) export(load_pda_history) export(makeMCMCList) -export(norm_transform_priors) export(pda.adjust.jumps) export(pda.adjust.jumps.bs) export(pda.autocorr.calc) diff --git a/modules/assim.batch/R/pda.assessment.R b/modules/assim.batch/R/pda.assessment.R new file mode 100644 index 00000000000..0e03b579b6b --- /dev/null +++ b/modules/assim.batch/R/pda.assessment.R @@ -0,0 +1,57 @@ +# This is a diagnostic function that checks for post-pda model-data comparison +# the value it returns can be used in determining whether to stop pda rounds or continue +# it samples from MCMC for parameter vectors, runs a small ensemble (100) unless requested otherwise, compares model with data, returns metric +postpda_assessment <- function(settings, n.param.orig, prior.ind.orig, + n.post.knots, knots.params.temp, + prior.list, prior.fn, sf, sf.samp){ + + # sample from MCMC + sampled_knots <- sample_MCMC(settings$assim.batch$mcmc.path, n.param.orig, prior.ind.orig, + n.post.knots, knots.params.temp, + prior.list, prior.fn, sf, sf.samp) + + knots.params.temp <- sampled_knots$knots.params.temp + probs.round.sf <- sampled_knots$sf_knots + pass2bias <- sampled_knots$pass2bias + + ## Set up runs and write run configs for all proposed knots + run.ids <- pda.init.run(settings, con, my.write.config, workflow.id, knots.params, + n = settings$assim.batch$n.knot, + run.names = paste0(settings$assim.batch$ensemble.id, ".knot.", + 1:settings$assim.batch$n.knot)) + + ## start model runs + PEcAn.remote::start.model.runs(settings, (as.logical(settings$database$bety$write) & !remote)) + + ## Retrieve model outputs and error statistics + model.out <- list() + + ## read model outputs + for (i in seq_len(settings$assim.batch$n.knot)) { + align.return <- pda.get.model.output(settings, run.ids[i], bety, inputs, external.formats) + model.out[[i]] <- align.return$model.out + if(all(!is.na(model.out[[i]]))){ + inputs <- align.return$inputs + } + } + + ee <- sapply(model.out, function(x) x[[1]][[1]]) + ii <- inputs[[1]]$obs + ee <- ee[!is.na(ii),] + ii <- ii[!is.na(ii)] + rmv <- sapply(seq_len(nrow(ee)), function(x){ + if(length(unique(ee[x,])) == 1){ + return(1) + }else{ + return(0) + } + }) + ee <- ee[rmv==0,] + ii <- ii[rmv ==0] + sim = createDHARMa(simulatedResponse = ee, observedResponse = ii) + # calculate metric + +} + + + diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 1d11fc987e1..b4cb1b37c2a 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -1064,7 +1064,7 @@ return_multi_site_objects <- function(multi.settings){ } new_site_knots <- new_site_knots[-(1:nrow(previous_knots)),] - these_knots <- apply(new_site_knots, 1, function(x) row.match(x, collect_site_knots[, need_obj$prior.ind.all]) ) + these_knots <- apply(new_site_knots, 1, function(x) prodlim::row.match(x, collect_site_knots[, need_obj$prior.ind.all]) ) 
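For context on the `prodlim::row.match()` call just above (the reason `prodlim` is added to the DESCRIPTION in this patch): it returns, for each row of its first argument, the index of the matching row in its second argument, or `NA` when there is no match. A toy illustration with made-up data frames, assuming only that the `prodlim` package is installed:

```r
library(prodlim)

# hypothetical knot tables, standing in for new_site_knots / collect_site_knots
new_knots <- data.frame(p1 = c(0.1, 0.5), p2 = c(2, 7))
all_knots <- data.frame(p1 = c(0.9, 0.5, 0.1), p2 = c(7, 7, 2))

# index of each new_knots row within all_knots (NA if absent)
row.match(new_knots, all_knots)
#> [1] 3 2
```

Namespacing the call as `prodlim::row.match()` instead of relying on an attached package is what makes the new DESCRIPTION entry necessary.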
collect_site_knots <- collect_site_knots[these_knots,] ind <- 0 diff --git a/modules/assim.batch/man/back_transform_posteriors.Rd b/modules/assim.batch/man/back_transform_posteriors.Rd deleted file mode 100644 index 066c8446d04..00000000000 --- a/modules/assim.batch/man/back_transform_posteriors.Rd +++ /dev/null @@ -1,26 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/pda.utils.R -\name{back_transform_posteriors} -\alias{back_transform_posteriors} -\title{Helper function that transforms the samples back to their original prior distribution equivalents} -\usage{ -back_transform_posteriors(prior.all, prior.fn.all, prior.ind.all, mcmc.out) -} -\arguments{ -\item{prior.all}{a dataframe of all priors for both pfts} - -\item{prior.fn.all}{list of expressions of d/r/q/p functions of the priors with original parameter space} - -\item{prior.ind.all}{a vector of indices identifying targeted params} - -\item{mcmc.out}{hierarchical MCMC outputs in standard-normal space} -} -\value{ -hierarchical MCMC outputs in original parameter space -} -\description{ -Helper function that transforms the samples back to their original prior distribution equivalents -} -\author{ -Istem Fer -} diff --git a/modules/assim.batch/man/generate_hierpost.Rd b/modules/assim.batch/man/generate_hierpost.Rd new file mode 100644 index 00000000000..f7386b2a441 --- /dev/null +++ b/modules/assim.batch/man/generate_hierpost.Rd @@ -0,0 +1,22 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/pda.utils.R +\name{generate_hierpost} +\alias{generate_hierpost} +\title{Helper function that generates the hierarchical posteriors} +\usage{ +generate_hierpost(mcmc.out, rng_orig) +} +\arguments{ +\item{mcmc.out}{hierarchical MCMC outputs} + +\item{rng_orig}{nparam x 2 matrix, 1st and 2nd columns are the lower and upper prior limits respectively} +} +\value{ +hierarchical MCMC outputs in original parameter space +} +\description{ +Helper function that generates the hierarchical posteriors +} +\author{ +Istem Fer +} diff --git a/modules/assim.batch/man/norm_transform_priors.Rd b/modules/assim.batch/man/norm_transform_priors.Rd deleted file mode 100644 index 3980f591a12..00000000000 --- a/modules/assim.batch/man/norm_transform_priors.Rd +++ /dev/null @@ -1,31 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/pda.utils.R -\name{norm_transform_priors} -\alias{norm_transform_priors} -\title{Helper function that transforms the values of each parameter into N(0,1) equivalent} -\usage{ -norm_transform_priors(prior.list, prior.fn.all, prior.ind.all, SS.stack, - init.list, jmp.list) -} -\arguments{ -\item{prior.list}{list of prior data frames, same length as number of pfts} - -\item{prior.fn.all}{list of expressions of d/r/q/p functions of the priors given in the prior.list} - -\item{prior.ind.all}{a vector of indices identifying which params are targeted, indices refer to the row numbers when prior.list sublists are rbinded} - -\item{SS.stack}{list of design matrices for the emulator, length = nsites, each sublist will be of length nvars} - -\item{init.list}{list of initial values for the targeted params, they will change when the range is normalized} - -\item{jmp.list}{list of hump variances, they will change when the range is normalized} -} -\value{ -a list of new objects that contain the normalized versions -} -\description{ -Helper function that transforms the values of each parameter into N(0,1) equivalent -} -\author{ -Istem Fer -} From 
f22175b765a7d886b1d09d85bbc50487f150196f Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 23 Sep 2019 03:41:05 -0400 Subject: [PATCH 0414/2289] add leafPoolDepth --- models/sipnet/R/write.configs.SIPNET.R | 16 +++++++++++++++- models/sipnet/inst/template.param | 1 + 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R index e80ce100cf2..c3b63bf9c37 100644 --- a/models/sipnet/R/write.configs.SIPNET.R +++ b/models/sipnet/R/write.configs.SIPNET.R @@ -347,6 +347,10 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if ("frozenSoilEff" %in% pft.names) { param[which(param[, 1] == "frozenSoilEff"), 2] <- pft.traits[which(pft.names == "frozenSoilEff")] } + # frozenSoilFolREff + if ("frozenSoilFolREff" %in% pft.names) { + param[which(param[, 1] == "frozenSoilFolREff"), 2] <- pft.traits[which(pft.names == "frozenSoilFolREff")] + } # soilWHC if ("soilWHC" %in% pft.names) { param[which(param[, 1] == "soilWHC"), 2] <- pft.traits[which(pft.names == "soilWHC")] @@ -355,7 +359,7 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs # 10/31/2017 IF: these were the two assumptions used in the emulator paper in order to reduce dimensionality # These results in improved winter soil respiration values # they don't affect anything when the seasonal soil respiration functionality in SIPNET is turned-off - if(FALSE){ + if(TRUE){ # assume soil resp Q10 cold == soil resp Q10 param[which(param[, 1] == "soilRespQ10Cold"), 2] <- param[which(param[, 1] == "soilRespQ10"), 2] # default SIPNET prior of baseSoilRespCold was 1/4th of baseSoilResp @@ -368,11 +372,21 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs param[which(param[, 1] == "immedEvapFrac"), 2] <- pft.traits[which(pft.names == "immedEvapFrac")] } + if ("leafPoolDepth" %in% pft.names) { + id <- which(param[, 1] == "leafPoolDepth") + param[which(param[, 1] == "leafPoolDepth"), 2] <- pft.traits[which(pft.names == "leafPoolDepth")] + } + if ("waterRemoveFrac" %in% pft.names) { id <- which(param[, 1] == "waterRemoveFrac") param[which(param[, 1] == "waterRemoveFrac"), 2] <- pft.traits[which(pft.names == "waterRemoveFrac")] } + if ("fastFlowFrac" %in% pft.names) { + id <- which(param[, 1] == "fastFlowFrac") + param[which(param[, 1] == "fastFlowFrac"), 2] <- pft.traits[which(pft.names == "fastFlowFrac")] + } + if ("rdConst" %in% pft.names) { id <- which(param[, 1] == "rdConst") param[which(param[, 1] == "rdConst"), 2] <- pft.traits[which(pft.names == "rdConst")] diff --git a/models/sipnet/inst/template.param b/models/sipnet/inst/template.param index 9eaab57fa85..506ea695643 100644 --- a/models/sipnet/inst/template.param +++ b/models/sipnet/inst/template.param @@ -57,6 +57,7 @@ wueConst 10.9 1 0.01 109 0.5 litterWHC 1.000000 0 0.010000 4.000000 0.250000 soilWHC 12.0 1 0.1 36.000000 1.000000 immedEvapFrac 0.100000 0 0.000000 0.200000 0.025000 +leafPoolDepth 0.100000 0 0.000000 0.200000 0.025000 fastFlowFrac 0.100000 0 0.000000 0.200000 0.025000 snowMelt 0.150000 0 0.050000 0.250000 0.020000 litWaterDrainRate 0.100000 0 0.010000 1.000000 0.100000 From 4893904ece47a6a20a1ff5cb9d3428cfd4a680dc Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 23 Sep 2019 10:37:26 -0400 Subject: [PATCH 0415/2289] updating SDA workflow --- .../inst/WillowCreek/gefs.sipnet.template.xml | 2 +- .../inst/WillowCreek/workflow.template.R | 179 ++++++++++++------ 2 files changed, 126 insertions(+), 55 
deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index ecb42d48f08..c36bbd06fe5 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -89,7 +89,7 @@ FALSE - 100 + 10 NEE 2018 2018 diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index 973cc721b42..ec45bdb89fb 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -9,6 +9,7 @@ library(tidyverse) library(furrr) library(R.utils) plan(multiprocess) + # ---------------------------------------------------------------------------------------------- #------------------------------------------ That's all we need xml path and the out folder ----- # ---------------------------------------------------------------------------------------------- @@ -31,6 +32,12 @@ if (is.na(args[3])){ } else { xmlTempName <- args[3] } + +if (is.na(args[4])){ + restart <-FALSE +} else { + restart <- args[4] +} setwd(outputPath) #------------------------------------------------------------------------------------------------ #------------------------------------------ sourcing the required tools ------------------------- @@ -46,7 +53,7 @@ c( package = "PEcAn.assim.sequential") )) #reading xml -settings <- read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml") +settings <- read.settings("/fs/data3/hamzed/pecan/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml") #connecting to DB con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) @@ -92,7 +99,7 @@ if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { sda.start <- Sys.Date() - 14 } -sda.end <- Sys.Date() +sda.end <- Sys.Date() #----------------------------------------------------------------------------------------------- #------------------------------------------ Download met and flux ------------------------------ @@ -108,13 +115,99 @@ if(!exists('prep.data')) ) obs.raw <-prep.data$rawobs prep.data<-prep.data$obs + + +# if there is infinte value then take it out - here we want to remove any that just have one NA in the observed data +prep.data<-prep.data %>% + map(function(day.data){ + #cheking the mean + nan.mean <- which(is.infinite(day.data$means) | is.nan(day.data$means) | is.na(day.data$means)) + if ( length(nan.mean)>0 ) { + + day.data$means <- day.data$means[-nan.mean] + day.data$covs <- day.data$covs[-nan.mean, -nan.mean] %>% + as.matrix() %>% + `colnames<-`(c(colnames(day.data$covs)[-nan.mean])) + } + day.data + }) + + +# Changing LE to Qle which is what sipnet expects +prep.data <- prep.data %>% + map(function(day.data) { + names(day.data$means)[names(day.data$means) == "LE"] <- "Qle" + dimnames(day.data$covs) <- dimnames(day.data$covs) %>% + map(function(name) { + name[name == "LE"] <- "Qle" + name + }) + + day.data + }) + + + +# Finding the right end and start date +met.start <- lubridate::floor_date(Sys.Date() - lubridate::hours(2), unit = "6 hour") +met.end <- met.start + lubridate::days(16) + +#pad Observed Data to match met data + +date <- + seq( + from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC"), + to = lubridate::with_tz(as.POSIXct(met.end, format = "%Y-%m-%d"), tz = "UTC"), + by 
= "6 hour" + ) +pad.prep <- obs.raw %>% + complete(Date = seq( + from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC"), + to = lubridate::with_tz(as.POSIXct(met.end, format = "%Y-%m-%d"), tz = "UTC"), + by = "6 hour" + )) %>% + mutate(means = NA, covs = NA) %>% + dplyr::select(Date, means, covs) %>% + tibble_as_list() + +names(pad.prep) <-date + +#create the data type to match the other data +pad.cov <- matrix(data = c(rep(NA, 4)), nrow = 2, ncol = 2, dimnames = list(c("NEE", "Qle"), c("NEE", "Qle"))) +pad.means = c(NA, NA) +names(pad.means) <- c("NEE", "Qle") + +#cycle through and populate the list + +pad <- pad.prep %>% + map(function(day.data){ + day.data$means <- pad.means + day.data$covs <- pad.cov + day.data + }) + + +#add onto end of prep.data list + +prep.data = c(prep.data, pad) + + # This line is what makes the SDA to run daily ***** IMPORTANT CODE OVER HERE prep.data<-prep.data %>% discard(~lubridate::hour(.x$Date)!=0) -# Finding the right end and start date -met.start <- obs.raw$Date%>% head(1) %>% lubridate::floor_date(unit = "day") -met.end <- obs.raw$Date %>% tail(1) %>% lubridate::ceiling_date(unit = "day") + +obs.mean <- prep.data %>% map('means') %>% setNames(names(prep.data)) +obs.cov <- prep.data %>% map('covs') %>% setNames(names(prep.data)) + + + + + + + + + #----------------------------------------------------------------------------------------------- #------------------------------------------ Fixing the settings -------------------------------- #----------------------------------------------------------------------------------------------- @@ -169,36 +262,6 @@ get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingsp # Setting dates in assimilation tags - This will help with preprocess split in SDA code settings$state.data.assimilation$start.date <-as.character(met.start) settings$state.data.assimilation$end.date <-as.character(met.end - lubridate::hms("06:00:00")) -# Changing LE to Qle which is what sipnet expects -prep.data <- prep.data %>% - map(function(day.data) { - names(day.data$means)[names(day.data$means) == "LE"] <- "Qle" - dimnames(day.data$covs) <- dimnames(day.data$covs) %>% - map(function(name) { - name[name == "LE"] <- "Qle" - name - }) - - day.data - }) - -# if there is infinte value then take it out -prep.data<-prep.data %>% - map(function(day.data){ - #cheking the mean - nan.mean <- which(is.infinite(day.data$means) | is.nan(day.data$means) | is.na(day.data$means)) - if ( length(nan.mean)>0 ) { - - day.data$means <- day.data$means[-nan.mean] - day.data$covs <- day.data$covs[-nan.mean, -nan.mean] %>% - as.matrix() %>% - `colnames<-`(c(colnames(day.data$covs)[-nan.mean])) - } - day.data - }) - -obs.mean <- prep.data %>% map('means') %>% setNames(names(prep.data)) -obs.cov <- prep.data %>% map('covs') %>% setNames(names(prep.data)) if (nodata) { obs.mean <- obs.mean %>% map(function(x) @@ -212,7 +275,8 @@ if (nodata) { # -------------------------------------------------------------------------------------------------- #@Hamze - should we add a if statement here for the times that we don't want to copy the path? 
- +# @Hamze: Yes if restart == TRUE +if(restart == TRUE){ if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) file.copy(from= file.path(restart.path, "SDA", "sda.output.Rdata"), @@ -222,20 +286,28 @@ if (nodata) { to = file.path(settings$outdir, "SDA", "outconfig.Rdata")) #Update the SDA Output to just have last time step - load(file.path(restart.path, "SDA", "sda.output.Rdata")) + temp <- new.env() + load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) + temp <- as.list(temp) + + + for(i in rev(2:length(temp$ANALYSIS))){ + temp$ANALYSIS[[i]] <- NULL + } + + for(i in rev(2:length(temp$FORECAST))){ + temp$FORECAST[[i]] <- NULL + } + - ANALYSIS1 = list() - FORECAST1 = list() - enkf.params1 = list() - ANALYSIS1[[1]]= ANALYSIS[[length(ANALYSIS)]] - FORECAST1[[1]] = FORECAST[[length(FORECAST)]] - enkf.params1[[1]] = enkf.params[[length(enkf.params)]] - t = 1 - ANALYSIS = ANALYSIS1 - FORECAST = FORECAST1 - enfk.params = enkf.params1 - - save(list = c("ANALYSIS", "FORECAST", "enkf.params", "t", "ensemble.samples", "inputs", "new.params", "new.state", "site.locs", "Viz.output", "X", "ensemble.id", "run.id"), file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) + for(i in rev(2:length(temp$enkf.params))){ + temp$enkf.params[[i]] <- NULL + } + + temp$t = 1 + + save(list = "temp", + file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) #copy over run and out folders @@ -245,26 +317,26 @@ if (nodata) { if(!dir.exists("out")) dir.create("out",showWarnings = F) copyDirectory(from = file.path(restart.path, "out/"), to = file.path(settings$outdir, "out/")) - +} #restart == TRUE # -------------------------------------------------------------------------------------------------- #--------------------------------- Run state data assimilation ------------------------------------- # -------------------------------------------------------------------------------------------------- -#unlink(c('run','out', "SDA"), recursive = T) +unlink(c('run','out','SDA'), recursive = T) if ('state.data.assimilation' %in% names(settings)) { if (PEcAn.utils::status.check("SDA") == 0) { PEcAn.utils::status.start("SDA") PEcAn.assim.sequential::sda.enkf( settings, - restart=TRUE, + restart=FALSE, Q=0, obs.mean = obs.mean, obs.cov = obs.cov, control = list( trace = TRUE, interactivePlot =FALSE, - TimeseriesPlot =FALSE, + TimeseriesPlot =TRUE, BiasPlot =FALSE, debug =FALSE, pause=FALSE @@ -273,5 +345,4 @@ if ('state.data.assimilation' %in% names(settings)) { PEcAn.utils::status.end() } } - \ No newline at end of file From 69780eb36ab08264dcbe307b1a0ab94df21d45b2 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 24 Sep 2019 03:56:11 -0400 Subject: [PATCH 0416/2289] remove wrong file --- modules/assim.batch/R/pda.assessment.R | 57 -------------------------- 1 file changed, 57 deletions(-) delete mode 100644 modules/assim.batch/R/pda.assessment.R diff --git a/modules/assim.batch/R/pda.assessment.R b/modules/assim.batch/R/pda.assessment.R deleted file mode 100644 index 0e03b579b6b..00000000000 --- a/modules/assim.batch/R/pda.assessment.R +++ /dev/null @@ -1,57 +0,0 @@ -# This is a diagnostic function that checks for post-pda model-data comparison -# the value it returns can be used in determining whether to stop pda rounds or continue -# it samples from MCMC for parameter vectors, runs a small ensemble (100) unless requested otherwise, compares model with data, returns metric -postpda_assessment <- function(settings, n.param.orig, prior.ind.orig, - n.post.knots, knots.params.temp, - prior.list, 
prior.fn, sf, sf.samp){ - - # sample from MCMC - sampled_knots <- sample_MCMC(settings$assim.batch$mcmc.path, n.param.orig, prior.ind.orig, - n.post.knots, knots.params.temp, - prior.list, prior.fn, sf, sf.samp) - - knots.params.temp <- sampled_knots$knots.params.temp - probs.round.sf <- sampled_knots$sf_knots - pass2bias <- sampled_knots$pass2bias - - ## Set up runs and write run configs for all proposed knots - run.ids <- pda.init.run(settings, con, my.write.config, workflow.id, knots.params, - n = settings$assim.batch$n.knot, - run.names = paste0(settings$assim.batch$ensemble.id, ".knot.", - 1:settings$assim.batch$n.knot)) - - ## start model runs - PEcAn.remote::start.model.runs(settings, (as.logical(settings$database$bety$write) & !remote)) - - ## Retrieve model outputs and error statistics - model.out <- list() - - ## read model outputs - for (i in seq_len(settings$assim.batch$n.knot)) { - align.return <- pda.get.model.output(settings, run.ids[i], bety, inputs, external.formats) - model.out[[i]] <- align.return$model.out - if(all(!is.na(model.out[[i]]))){ - inputs <- align.return$inputs - } - } - - ee <- sapply(model.out, function(x) x[[1]][[1]]) - ii <- inputs[[1]]$obs - ee <- ee[!is.na(ii),] - ii <- ii[!is.na(ii)] - rmv <- sapply(seq_len(nrow(ee)), function(x){ - if(length(unique(ee[x,])) == 1){ - return(1) - }else{ - return(0) - } - }) - ee <- ee[rmv==0,] - ii <- ii[rmv ==0] - sim = createDHARMa(simulatedResponse = ee, observedResponse = ii) - # calculate metric - -} - - - From 13c3753c472acf829c68e6c97f8ec87cfcd217a5 Mon Sep 17 00:00:00 2001 From: hamzed Date: Tue, 24 Sep 2019 11:05:22 -0400 Subject: [PATCH 0417/2289] Parallel and multiple PFTs --- base/workflow/R/create_execute_test_xml.R | 20 +- base/workflow/inst/batch_run.R | 223 +++++++++++----------- 2 files changed, 120 insertions(+), 123 deletions(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index bab739c44f2..43d724b2ed0 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -73,6 +73,7 @@ create_execute_test_xml <- function(model_id, model.new <- dplyr::tbl(con, "models") %>% dplyr::filter(.data$id == !!model_id) %>% dplyr::collect() + outdir_pre <- paste( model.new[["model_name"]], format(as.Date(start_date), "%Y-%m"), @@ -104,20 +105,19 @@ create_execute_test_xml <- function(model_id, pft <- dplyr::tbl(con, "pfts") %>% dplyr::filter(.data$modeltype_id == !!model.new$modeltype_id) %>% dplyr::collect() + pft <- pft$name[[1]] message("PFT is `NULL`. Defaulting to the following PFT: ", pft) } - if (length(pft) > 1) { - stop( - "Currently, only a single PFT is supported. ", - "Multiple PFTs will be implemented in a future version." 
- ) - } - settings$pfts <- list( - pft = list(name = pft, - constants = list(num = 1)) - ) + + ## Putting multiple PFTs separated by semicolon + settings$pfts <- strsplit(pft, ";")[[1]] %>% + purrr::map( ~ list(name = .x, + constants = list(num = 1) + ) + ) %>% + setNames(rep("pft", length(.))) #Meta Analysis settings$meta.analysis <- list(iter = 3000, random.effects = FALSE) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index 9a5c561fe21..df4ef0992f3 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -1,12 +1,12 @@ #!/usr/bin/env Rscript - -library(dplyr) +library(tidyverse) library(PEcAn.workflow) stopifnot( requireNamespace("PEcAn.DB", quietly = TRUE), requireNamespace("PEcAn.utils", quietly = TRUE) ) - +library(furrr) +plan(multiprocess) ################################################## # Parse arguments argv <- commandArgs(trailingOnly = TRUE) @@ -45,122 +45,119 @@ pecan_path <- get_arg(argv, "--pecandir", getwd()) output_folder <- get_arg(argv, "--outdir", "batch_test_output") outfile <- get_arg(argv, "--outfile", "test_result_table.csv") ################################################## -# Establish database connection based on config.php -php_file <- file.path(pecan_path, "web", "config.php") -stopifnot(file.exists(php_file)) -config.list <- PEcAn.utils::read_web_config(php_file) -bety <- PEcAn.DB::betyConnect(php_file) -con <- bety$con - # Create outfile directory if it doesn't exist dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE) - input_table <- read.csv(input_table_file, stringsAsFactors = FALSE) -result_table <- input_table %>% - mutate( - outdir = NA_character_, - workflow_complete = NA, - has_jobsh = NA, - model_output_raw = NA, - model_output_processed = NA - ) - -for (i in seq_len(nrow(input_table))) { - table_row <- input_table[i, ] - - # Get model ID - model <- table_row$model - revision <- table_row$revision - message("Model: ", shQuote(model)) - message("Revision: ", shQuote(revision)) - model_df <- tbl(con, "models") %>% - filter(model_name == !!model, - revision == !!revision) %>% - collect() - if (nrow(model_df) == 0) { - message("No models found with name ", model, - " and revision ", revision, ".\n", - "Moving on to next row.") - next - } else if (nrow(model_df) > 1) { - print(model_df) - message("Multiple models found with name ", model, - " and revision ", revision, ".\n", - "Moving on to next row.") - next - } else { - model_id <- model_df$id - } - - pft <- table_row$pft - if (is.na(pft)) pft <- NULL - - # Run test - raw_result <- create_execute_test_xml( - model_id = model_id, - met = table_row$met, - site_id = table_row$site_id, - pft = pft, - start_date = table_row$start_date, - end_date = table_row$end_date, - dbfiles_folder = dbfiles_folder, - pecan_path = pecan_path, - user_id = user_id, - ensemble_size = table_row$ensemble_size, - sensitivity = table_row$sensitivity - ) +#----------------------- Parallel Distribution of jobs +seq_len(nrow(input_table)) %>% + furrr::future_map(function(i){ + # Each job needs to have its own connection + # Establish database connection based on config.php + php_file <- file.path(pecan_path, "web", "config.php") + stopifnot(file.exists(php_file)) + config.list <- PEcAn.utils::read_web_config(php_file) + bety <- PEcAn.DB::betyConnect(php_file) + con <- bety$con + + # Get model ID + table_row <- input_table[i, ] + model <- table_row$model + revision <- table_row$revision + message("Model: ", shQuote(model)) + message("Revision: ", 
shQuote(revision)) + + + + model_df <- tbl(con, "models") %>% + filter(model_name == !!model, + revision == !!revision) %>% + collect() + + if (nrow(model_df) == 0) { + message("No models found with name ", model, + " and revision ", revision, ".\n", + "Moving on to next row.") + next + } else if (nrow(model_df) > 1) { + print(model_df) + message("Multiple models found with name ", model, + " and revision ", revision, ".\n", + "Moving on to next row.") + next + } else { + model_id <- model_df$id + } + + pft <- table_row$pft + if (is.na(pft)) pft <- NULL + + # Run test + raw_result <- create_execute_test_xml( + model_id = model_id, + met = table_row$met, + site_id = table_row$site_id, + pft = pft, + start_date = table_row$start_date, + end_date = table_row$end_date, + dbfiles_folder = dbfiles_folder, + pecan_path = pecan_path, + user_id = user_id, + ensemble_size = table_row$ensemble_size, + sensitivity = table_row$sensitivity, + output_folder=output_folder + ) + + }) - outdir <- raw_result$outdir - result_table$outdir <- outdir - ################################################## - # Did the workflow finish? - ################################################## - raw_output <- readLines(file.path(outdir, "workflow.Rout")) - result_table$workflow_complete[[i]] <- any(grepl( - "PEcAn Workflow Complete", - raw_output - )) - continue <- FALSE - ################################################## - # Did we write a job.sh file? - ################################################## - out <- file.path(outdir, "out") - run <- file.path(outdir, "run") - jobsh <- list.files(run, "job\\.sh", recursive = TRUE) - ## pft <- file.path(outdir, "pft") - if (length(jobsh) > 0) { - result_table$has_jobsh[[i]] <- TRUE - continue <- TRUE - } - ################################################## - # Did the model produce any output? - ################################################## - if (continue) { - continue <- FALSE - raw_out <- list.files(out, recursive = TRUE) - if (length(raw_out) > 0) { - result_table$model_output_raw[[i]] <- TRUE - continue <- TRUE +#----------- Checking the results of the runs +checks_df<-list.dirs(output_folder, full.names = TRUE, recursive = FALSE) %>% + purrr::map_dfr(function(outdir){ + + result_table <-NULL + ################################################## + # Did the workflow finish? + ################################################## + if (file.exists(file.path(outdir, "workflow.Rout"))) { + raw_output <- readLines(file.path(outdir, "workflow.Rout")) + result_table$workflow_complete <- any(grepl( + "PEcAn Workflow Complete", + raw_output + )) + }else{ + result_table$workflow_complete <- FALSE } - } - - ################################################## - # Did PEcAn post-process the output? - ################################################## - # Files should have name `YYYY.nc` - if (continue) { - continue <- FALSE + ################################################## + # Did we write a job.sh file? + ################################################## + out <- file.path(outdir, "out") + run <- file.path(outdir, "run") + jobsh <- list.files(run, "job\\.sh", recursive = TRUE) + ## pft <- file.path(outdir, "pft") + result_table$has_jobsh <-ifelse(length(jobsh) > 0, TRUE, FALSE) + ################################################## + # Did the model produce any output? 
+ ################################################## + raw_out <- list.files(out, recursive = TRUE) + result_table$model_output_raw <-ifelse(length(raw_out) > 0, TRUE, FALSE) + ################################################## + # Did PEcAn post-process the output? + ################################################## + # Files should have name `YYYY.nc` proc_out <- list.files(out, "[[:digit:]]{4}\\.nc", recursive = TRUE) - if (length(proc_out) > 0) { - result_table$model_output_processed[[i]] <- TRUE - continue <- TRUE - } + result_table$model_output_processed <-ifelse(length(proc_out) > 0, TRUE, FALSE) + + return(result_table %>% + as.data.frame() %>% + mutate(site_id=strsplit(outdir,"_")[[1]][5]) + ) + + }) - # This will continuously update the output table with the current results - result_table %>% - filter(!is.na(outdir)) %>% - write.csv(outfile, row.names = FALSE) - } -} +#-- Writing down the results +input_table %>% + mutate(site_id= as.character(site_id)) %>% + left_join(checks_df, + by="site_id") %>% + write.csv(outfile, row.names = FALSE) From d514e431dfb9f8990799e07f1c80515e75e4dd23 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Tue, 24 Sep 2019 11:50:32 -0400 Subject: [PATCH 0418/2289] Update base/workflow/inst/batch_run.R Co-Authored-By: Alexey Shiklomanov --- base/workflow/inst/batch_run.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index df4ef0992f3..94b4e51b172 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -112,7 +112,7 @@ seq_len(nrow(input_table)) %>% #----------- Checking the results of the runs -checks_df<-list.dirs(output_folder, full.names = TRUE, recursive = FALSE) %>% +checks_df <- list.dirs(output_folder, full.names = TRUE, recursive = FALSE) %>% purrr::map_dfr(function(outdir){ result_table <-NULL From ee8a00ff8480ed447f0662f4c55bcbe2f67ef1f2 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Tue, 24 Sep 2019 11:50:45 -0400 Subject: [PATCH 0419/2289] Update base/workflow/inst/batch_run.R Co-Authored-By: Alexey Shiklomanov --- base/workflow/inst/batch_run.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index 94b4e51b172..886b4c6c7f2 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -125,7 +125,7 @@ checks_df <- list.dirs(output_folder, full.names = TRUE, recursive = FALSE) %>% "PEcAn Workflow Complete", raw_output )) - }else{ + } else { result_table$workflow_complete <- FALSE } ################################################## From 3963ae8efab5ecfb8fb5e5baab8776ac09459cc7 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 24 Sep 2019 16:33:52 -0400 Subject: [PATCH 0420/2289] updating SDA workflow to forecast --- .../inst/WillowCreek/workflow.template.R | 21 +++++++++++-------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index ec45bdb89fb..d26ff2761ed 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -8,6 +8,7 @@ library(REddyProc) library(tidyverse) library(furrr) library(R.utils) +library(dynutils) plan(multiprocess) # ---------------------------------------------------------------------------------------------- @@ -38,6 +39,10 @@ if (is.na(args[4])){ } else { restart <- 
args[4] } +outputPath <- "/fs/data3/kzarada/ouput" +xmlTempName <-"gefs.sipnet.template.xml" +restart <-FALSE +nodata <- FALSE setwd(outputPath) #------------------------------------------------------------------------------------------------ #------------------------------------------ sourcing the required tools ------------------------- @@ -53,7 +58,7 @@ c( package = "PEcAn.assim.sequential") )) #reading xml -settings <- read.settings("/fs/data3/hamzed/pecan/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml") +settings <- read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml") #connecting to DB con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) @@ -63,8 +68,9 @@ con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) #------------------------------------------------------------------------------------------------ #--------------------------- Finding old sims all.previous.sims <- list.dirs(outputPath, recursive = F) - -if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { +sda.start <- Sys.Date() - 14 +sda.end <- Sys.Date() +if (length(all.previous.sims) > 0 & !inherits(con, "try-error") & restart) { tryCatch({ # Looking through all the old simulations and find the most recent @@ -98,9 +104,6 @@ if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { if (is.na(sda.start)) sda.start <- Sys.Date() - 14 } - -sda.end <- Sys.Date() - #----------------------------------------------------------------------------------------------- #------------------------------------------ Download met and flux ------------------------------ #----------------------------------------------------------------------------------------------- @@ -161,14 +164,14 @@ date <- by = "6 hour" ) pad.prep <- obs.raw %>% - complete(Date = seq( + tidyr::complete(Date = seq( from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC"), to = lubridate::with_tz(as.POSIXct(met.end, format = "%Y-%m-%d"), tz = "UTC"), by = "6 hour" )) %>% mutate(means = NA, covs = NA) %>% dplyr::select(Date, means, covs) %>% - tibble_as_list() + dynutils::tibble_as_list() names(pad.prep) <-date @@ -338,7 +341,7 @@ if ('state.data.assimilation' %in% names(settings)) { interactivePlot =FALSE, TimeseriesPlot =TRUE, BiasPlot =FALSE, - debug =FALSE, + debug =TRUE, pause=FALSE ) ) From 77683bb3c0d061d3d730aab8efc8d186a23b8a7a Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 25 Sep 2019 04:03:38 -0400 Subject: [PATCH 0421/2289] create model package --- models/basgra/DESCRIPTION | 22 ++++ models/basgra/LICENSE | 34 +++++ models/basgra/NAMESPACE | 7 + models/basgra/R/met2model.BASGRA.R | 36 +++++ models/basgra/R/model2netcdf.BASGRA.R | 37 ++++++ models/basgra/R/write.config.BASGRA.R | 124 ++++++++++++++++++ models/basgra/README.md | 64 +++++++++ models/basgra/inst/template.job | 41 ++++++ models/basgra/man/met2model.MODEL.Rd | 25 ++++ models/basgra/man/model2netcdf.MODEL.Rd | 25 ++++ models/basgra/man/read_restart.ModelName.Rd | 31 +++++ models/basgra/man/write.config.MODEL.Rd | 30 +++++ models/basgra/man/write_restart.ModelName.Rd | 34 +++++ models/basgra/tests/Rcheck_reference.log | 77 +++++++++++ models/basgra/tests/testthat.R | 13 ++ models/basgra/tests/testthat/README.txt | 3 + models/basgra/tests/testthat/test.met2model.R | 18 +++ 17 files changed, 621 insertions(+) create mode 100644 models/basgra/DESCRIPTION create mode 100644 models/basgra/LICENSE create mode 100644 models/basgra/NAMESPACE create mode 
100644 models/basgra/R/met2model.BASGRA.R create mode 100644 models/basgra/R/model2netcdf.BASGRA.R create mode 100644 models/basgra/R/write.config.BASGRA.R create mode 100644 models/basgra/README.md create mode 100644 models/basgra/inst/template.job create mode 100644 models/basgra/man/met2model.MODEL.Rd create mode 100644 models/basgra/man/model2netcdf.MODEL.Rd create mode 100644 models/basgra/man/read_restart.ModelName.Rd create mode 100644 models/basgra/man/write.config.MODEL.Rd create mode 100644 models/basgra/man/write_restart.ModelName.Rd create mode 100644 models/basgra/tests/Rcheck_reference.log create mode 100644 models/basgra/tests/testthat.R create mode 100644 models/basgra/tests/testthat/README.txt create mode 100644 models/basgra/tests/testthat/test.met2model.R diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION new file mode 100644 index 00000000000..a1ffe89e6c3 --- /dev/null +++ b/models/basgra/DESCRIPTION @@ -0,0 +1,22 @@ +Package: PEcAn.BASGRA +Type: Package +Title: PEcAn package for integration of the ModelName model +Version: 1.7.1 +Date: 2019-09-05 +Authors@R: c(person("Istem", "Fer)) +Author: Istem Fer +Maintainer: Istem Fer +Description: This module provides functions to link the BASGRA to PEcAn. +Imports: + PEcAn.logger, + PEcAn.utils (>= 1.4.8) +Suggests: + testthat (>= 1.0.2) +SystemRequirements: ModelName +OS_type: unix +License: FreeBSD + file LICENSE +Copyright: Authors +LazyLoad: yes +LazyData: FALSE +Encoding: UTF-8 +RoxygenNote: 6.1.1 diff --git a/models/basgra/LICENSE b/models/basgra/LICENSE new file mode 100644 index 00000000000..5a9e44128f1 --- /dev/null +++ b/models/basgra/LICENSE @@ -0,0 +1,34 @@ +## This is the master copy of the PEcAn License + +University of Illinois/NCSA Open Source License + +Copyright (c) 2012, University of Illinois, NCSA. All rights reserved. + +PEcAn project +www.pecanproject.org + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal with the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +- Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimers. +- Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimers in the + documentation and/or other materials provided with the distribution. +- Neither the names of University of Illinois, NCSA, nor the names + of its contributors may be used to endorse or promote products + derived from this Software without specific prior written permission. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR +ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF +CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE. 
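For context on where these new package files plug in: PEcAn dispatches to model packages by building function names from the modeltype string registered in BETY, so the exported names must match that string exactly. A minimal sketch of the dispatch pattern follows (illustrative only — the variable names and surrounding objects are assumptions, not code from this patch; the real call sites live in PEcAn.workflow and related packages):

```r
model <- "BASGRA"  # modeltype string as registered in BETY
my.write.config <- paste("write.config", model, sep = ".")  # "write.config.BASGRA"
if (!exists(my.write.config)) {
  PEcAn.logger::logger.severe(
    "Function", my.write.config, "not found.",
    "Check that the model package is installed and exports it.")
}
# defaults, trait.values, settings and run.id are assumed to exist in scope
do.call(my.write.config,
        args = list(defaults = defaults,
                    trait.values = trait.values,
                    settings = settings,
                    run.id = run.id))
```

This is why the `MODEL` placeholders in the template files below get renamed to `BASGRA` in later commits of this series.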
+ diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE new file mode 100644 index 00000000000..e815441f0ca --- /dev/null +++ b/models/basgra/NAMESPACE @@ -0,0 +1,7 @@ +# Generated by roxygen2: do not edit by hand + +export(met2model.MODEL) +export(model2netcdf.MODEL) +export(read_restart.ModelName) +export(write.config.MODEL) +export(write_restart.ModelName) diff --git a/models/basgra/R/met2model.BASGRA.R b/models/basgra/R/met2model.BASGRA.R new file mode 100644 index 00000000000..eadfaed7d33 --- /dev/null +++ b/models/basgra/R/met2model.BASGRA.R @@ -0,0 +1,36 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##-------------------------------------------------------------------------------------------------# +##' Converts a met CF file to a model specific met file. The input +##' files are calld /.YYYY.cf +##' +##' @name met2model.MODEL +##' @title Write MODEL met files +##' @param in.path path on disk where CF file lives +##' @param in.prefix prefix for each file +##' @param outfolder location where model specific output is written. +##' @return OK if everything was succesful. +##' @export +##' @author Rob Kooper +##-------------------------------------------------------------------------------------------------# +met2model.MODEL <- function(in.path, in.prefix, outfolder, overwrite = FALSE) { + PEcAn.logger::logger.severe("NOT IMPLEMENTED") + + # Please follow the PEcAn style guide: + # https://pecanproject.github.io/pecan-documentation/master/coding-style.html + + # Note that `library()` calls should _never_ appear here; instead, put + # packages dependencies in the DESCRIPTION file, under "Imports:". + # Calls to dependent packages should use a double colon, e.g. + # `packageName::functionName()`. + # Also, `require()` should be used only when a package dependency is truly + # optional. In this case, put the package name under "Suggests:" in DESCRIPTION. + +} # met2model.MODEL diff --git a/models/basgra/R/model2netcdf.BASGRA.R b/models/basgra/R/model2netcdf.BASGRA.R new file mode 100644 index 00000000000..230deb181da --- /dev/null +++ b/models/basgra/R/model2netcdf.BASGRA.R @@ -0,0 +1,37 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##-------------------------------------------------------------------------------------------------# +##' Convert MODEL output into the NACP Intercomparison format (ALMA using netCDF) +##' +##' @name model2netcdf.MODEL +##' @title Code to convert MODELS's output into netCDF format +##' +##' @param outdir Location of model output +##' @param sitelat Latitude of the site +##' @param sitelon Longitude of the site +##' @param start_date Start time of the simulation +##' @param end_date End time of the simulation +##' @export +##' +##' @author Rob Kooper +model2netcdf.MODEL <- function(outdir, sitelat, sitelon, start_date, end_date) { + PEcAn.logger::logger.severe("NOT IMPLEMENTED") + + # Please follow the PEcAn style guide: + # https://pecanproject.github.io/pecan-documentation/develop/coding-style.html + + # Note that `library()` calls should _never_ appear here; instead, put + # packages dependencies in the DESCRIPTION file, under "Imports:". + # Calls to dependent packages should use a double colon, e.g. + # `packageName::functionName()`. + # Also, `require()` should be used only when a package dependency is truly + # optional. In this case, put the package name under "Suggests:" in DESCRIPTION. + +} # model2netcdf.MODEL diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R new file mode 100644 index 00000000000..631edf9f904 --- /dev/null +++ b/models/basgra/R/write.config.BASGRA.R @@ -0,0 +1,124 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##-------------------------------------------------------------------------------------------------# +##' Writes a MODEL config file. +##' +##' Requires a pft xml object, a list of trait values for a single model run, +##' and the name of the file to create +##' +##' @name write.config.MODEL +##' @title Write MODEL configuration files +##' @param defaults list of defaults to process +##' @param trait.samples vector of samples for a given trait +##' @param settings list of settings from pecan settings file +##' @param run.id id of run +##' @return configuration file for MODEL for given run +##' @export +##' @author Rob Kooper +##-------------------------------------------------------------------------------------------------# +write.config.MODEL <- function(defaults, trait.values, settings, run.id) { + PEcAn.logger::logger.severe("NOT IMPLEMENTED") + # Please follow the PEcAn style guide: + # https://pecanproject.github.io/pecan-documentation/develop/coding-style.html + # Note that `library()` calls should _never_ appear here; instead, put + # packages dependencies in the DESCRIPTION file, under "Imports:". + # Calls to dependent packages should use a double colon, e.g. + # `packageName::functionName()`. 
+ # Also, `require()` should be used only when a package dependency is truly + # optional. In this case, put the package name under "Suggests:" in DESCRIPTION. + + # find out where to write run/ouput + rundir <- file.path(settings$host$rundir, run.id) + outdir <- file.path(settings$host$outdir, run.id) + + #----------------------------------------------------------------------- + # create launch script (which will create symlink) + if (!is.null(settings$model$jobtemplate) && file.exists(settings$model$jobtemplate)) { + jobsh <- readLines(con = settings$model$jobtemplate, n = -1) + } else { + jobsh <- readLines(con = system.file("template.job", package = "PEcAn.MODEL"), n = -1) + } + + # create host specific setttings + hostsetup <- "" + if (!is.null(settings$model$prerun)) { + hostsetup <- paste(hostsetup, sep = "\n", paste(settings$model$prerun, collapse = "\n")) + } + if (!is.null(settings$host$prerun)) { + hostsetup <- paste(hostsetup, sep = "\n", paste(settings$host$prerun, collapse = "\n")) + } + + hostteardown <- "" + if (!is.null(settings$model$postrun)) { + hostteardown <- paste(hostteardown, sep = "\n", paste(settings$model$postrun, collapse = "\n")) + } + if (!is.null(settings$host$postrun)) { + hostteardown <- paste(hostteardown, sep = "\n", paste(settings$host$postrun, collapse = "\n")) + } + + # create job.sh + jobsh <- gsub("@HOST_SETUP@", hostsetup, jobsh) + jobsh <- gsub("@HOST_TEARDOWN@", hostteardown, jobsh) + + jobsh <- gsub("@SITE_LAT@", settings$run$site$lat, jobsh) + jobsh <- gsub("@SITE_LON@", settings$run$site$lon, jobsh) + jobsh <- gsub("@SITE_MET@", settings$run$site$met, jobsh) + + jobsh <- gsub("@START_DATE@", settings$run$start.date, jobsh) + jobsh <- gsub("@END_DATE@", settings$run$end.date, jobsh) + + jobsh <- gsub("@OUTDIR@", outdir, jobsh) + jobsh <- gsub("@RUNDIR@", rundir, jobsh) + + jobsh <- gsub("@BINARY@", settings$model$binary, jobsh) + + writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh")) + Sys.chmod(file.path(settings$rundir, run.id, "job.sh")) + + #----------------------------------------------------------------------- + ### Edit a templated config file for runs + if (!is.null(settings$model$config) && file.exists(settings$model$config)) { + config.text <- readLines(con = settings$model$config, n = -1) + } else { + filename <- system.file(settings$model$config, package = "PEcAn.MODEL") + if (filename == "") { + if (!is.null(settings$model$revision)) { + filename <- system.file(paste0("config.", settings$model$revision), package = "PEcAn.MODEL") + } else { + model <- db.query(paste("SELECT * FROM models WHERE id =", settings$model$id), params = settings$database$bety) + filename <- system.file(paste0("config.r", model$revision), package = "PEcAn.MODEL") + } + } + if (filename == "") { + PEcAn.logger::logger.severe("Could not find config template") + } + PEcAn.logger::logger.info("Using", filename, "as template") + config.text <- readLines(con = filename, n = -1) + } + + config.text <- gsub("@SITE_LAT@", settings$run$site$lat, config.text) + config.text <- gsub("@SITE_LON@", settings$run$site$lon, config.text) + config.text <- gsub("@SITE_MET@", settings$run$inputs$met$path, config.text) + config.text <- gsub("@MET_START@", settings$run$site$met.start, config.text) + config.text <- gsub("@MET_END@", settings$run$site$met.end, config.text) + config.text <- gsub("@START_MONTH@", format(startdate, "%m"), config.text) + config.text <- gsub("@START_DAY@", format(startdate, "%d"), config.text) + config.text <- gsub("@START_YEAR@", 
format(startdate, "%Y"), config.text)
+  config.text <- gsub("@END_MONTH@", format(enddate, "%m"), config.text)
+  config.text <- gsub("@END_DAY@", format(enddate, "%d"), config.text)
+  config.text <- gsub("@END_YEAR@", format(enddate, "%Y"), config.text)
+  config.text <- gsub("@OUTDIR@", settings$host$outdir, config.text)
+  config.text <- gsub("@ENSNAME@", run.id, config.text)
+  config.text <- gsub("@OUTFILE@", paste0("out", run.id), config.text)
+
+  #-----------------------------------------------------------------------
+  config.file.name <- paste0("CONFIG.", run.id, ".txt")
+  writeLines(config.text, con = paste(outdir, config.file.name, sep = ""))
+} # write.config.MODEL
diff --git a/models/basgra/README.md b/models/basgra/README.md
new file mode 100644
index 00000000000..ac17ce2dae4
--- /dev/null
+++ b/models/basgra/README.md
@@ -0,0 +1,64 @@
+A generic template for adding a new model to PEcAn
+==========================================================================
+
+Adding a new model to PEcAn in a few easy steps:
+
+1. Add the modeltype to BETY
+2. Add a model and PFT to BETY for use with the modeltype
+3. Implement the 3 functions described below
+4. Add tests to `tests/testthat`
+5. Update README and documentation
+6. Execute PEcAn with the new model
+
+
+### Three Functions

+
+There are 3 functions that will need to be implemented. In each of them,
+MODEL must be replaced with the actual modeltype as it is defined in the
+BETY database.
+
+* `write.config.MODEL.R`
+
+  This will write the configuration file as well as the job launcher used by
+  PEcAn. There is an example of the job execution script in the template
+  folder. The configuration file can also be a template that is found based
+  on the revision number of the model. This should use the computed results
+  specified in defaults and trait.values to write a configuration file
+  based on the PFT and traits found.
+
+* `met2model.MODEL.R`
+
+  This will convert the standard Met CF file to the model specific file
+  format. This will allow PEcAn to create meteorological files for the
+  specific site and model. This will only be called if no meteorological
+  data is found for that specific site and model combination.
+
+* `model2netcdf.MODEL.R`
+
+  This will convert the model specific output to NACP Intercomparison
+  format. After this function is finished PEcAn will use the generated
+  output and not use the model specific outputs. The outputs should be
+  named `YYYY.nc`.
+
+### Additional Changes
+
+* `README.md`
+
+This file should contain basic background information about the model.
+At a minimum, this should include the scientific motivation and scope,
+name(s) of maintainer(s), links to the project homepage, and a list of a
+few key publications.
+
+* `/tests/testthat/`
+
+Each package should have tests that cover the key functions of the package,
+at a minimum, the three functions above.
+
+* documentation
+
+Update the `NAMESPACE`, `DESCRIPTION` and `man/*.Rd` files by running
+
+```r
+devtools::document("models/<model_name>/")
+```
diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job
new file mode 100644
index 00000000000..7ccb49823de
--- /dev/null
+++ b/models/basgra/inst/template.job
@@ -0,0 +1,41 @@
+#!/bin/bash
+
+# redirect output
+exec 3>&1
+exec &> "@OUTDIR@/logfile.txt"
+
+# host specific setup
+@HOST_SETUP@
+
+# create output folder
+mkdir -p "@OUTDIR@"
+
+# see if application needs running
+if [ ! -e "@OUTDIR@/results.csv" ]; then
+  cd "@RUNDIR@"
+
+  "@BINARY@"
+  STATUS=$?
+ + # check the status + if [ $STATUS -ne 0 ]; then + echo -e "ERROR IN MODEL RUN\nLogfile is located at '@OUTDIR@/logfile.txt'" >&3 + exit $STATUS + fi + + # convert to MsTMIP + echo "require (PEcAn.MODEL) +model2netcdf.MODEL('@OUTDIR@', @SITE_LAT@, @SITE_LON@, '@START_DATE@', '@END_DATE@') +" | R --vanilla +fi + +# copy readme with specs to output +cp "@RUNDIR@/README.txt" "@OUTDIR@/README.txt" + +# run getdata to extract right variables + +# host specific teardown +@HOST_TEARDOWN@ + +# all done +echo -e "MODEL FINISHED\nLogfile is located at '@OUTDIR@/logfile.txt'" >&3 diff --git a/models/basgra/man/met2model.MODEL.Rd b/models/basgra/man/met2model.MODEL.Rd new file mode 100644 index 00000000000..a3e66def0e3 --- /dev/null +++ b/models/basgra/man/met2model.MODEL.Rd @@ -0,0 +1,25 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/met2model.MODEL.R +\name{met2model.MODEL} +\alias{met2model.MODEL} +\title{Write MODEL met files} +\usage{ +met2model.MODEL(in.path, in.prefix, outfolder, overwrite = FALSE) +} +\arguments{ +\item{in.path}{path on disk where CF file lives} + +\item{in.prefix}{prefix for each file} + +\item{outfolder}{location where model specific output is written.} +} +\value{ +OK if everything was succesful. +} +\description{ +Converts a met CF file to a model specific met file. The input +files are calld /.YYYY.cf +} +\author{ +Rob Kooper +} diff --git a/models/basgra/man/model2netcdf.MODEL.Rd b/models/basgra/man/model2netcdf.MODEL.Rd new file mode 100644 index 00000000000..0eb69eec773 --- /dev/null +++ b/models/basgra/man/model2netcdf.MODEL.Rd @@ -0,0 +1,25 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/model2netcdf.MODEL.R +\name{model2netcdf.MODEL} +\alias{model2netcdf.MODEL} +\title{Code to convert MODELS's output into netCDF format} +\usage{ +model2netcdf.MODEL(outdir, sitelat, sitelon, start_date, end_date) +} +\arguments{ +\item{outdir}{Location of model output} + +\item{sitelat}{Latitude of the site} + +\item{sitelon}{Longitude of the site} + +\item{start_date}{Start time of the simulation} + +\item{end_date}{End time of the simulation} +} +\description{ +Convert MODEL output into the NACP Intercomparison format (ALMA using netCDF) +} +\author{ +Rob Kooper +} diff --git a/models/basgra/man/read_restart.ModelName.Rd b/models/basgra/man/read_restart.ModelName.Rd new file mode 100644 index 00000000000..9a3792ff612 --- /dev/null +++ b/models/basgra/man/read_restart.ModelName.Rd @@ -0,0 +1,31 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/read_restart.ModelName.R +\name{read_restart.ModelName} +\alias{read_restart.ModelName} +\title{Read restart template for SDA} +\usage{ +read_restart.ModelName(outdir, runid, stop.time, settings, var.names, + params) +} +\arguments{ +\item{outdir}{Output directory} + +\item{runid}{Run ID} + +\item{stop.time}{Year that is being read} + +\item{settings}{PEcAn settings object} + +\item{var.names}{Variable names to be extracted} + +\item{params}{Any parameters required for state calculations} +} +\value{ +Forecast numeric matrix +} +\description{ +Read restart files from model. 
+} +\author{ +Alexey Shiklomanov +} diff --git a/models/basgra/man/write.config.MODEL.Rd b/models/basgra/man/write.config.MODEL.Rd new file mode 100644 index 00000000000..553c8082f2d --- /dev/null +++ b/models/basgra/man/write.config.MODEL.Rd @@ -0,0 +1,30 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/write.config.MODEL.R +\name{write.config.MODEL} +\alias{write.config.MODEL} +\title{Write MODEL configuration files} +\usage{ +write.config.MODEL(defaults, trait.values, settings, run.id) +} +\arguments{ +\item{defaults}{list of defaults to process} + +\item{settings}{list of settings from pecan settings file} + +\item{run.id}{id of run} + +\item{trait.samples}{vector of samples for a given trait} +} +\value{ +configuration file for MODEL for given run +} +\description{ +Writes a MODEL config file. +} +\details{ +Requires a pft xml object, a list of trait values for a single model run, +and the name of the file to create +} +\author{ +Rob Kooper +} diff --git a/models/basgra/man/write_restart.ModelName.Rd b/models/basgra/man/write_restart.ModelName.Rd new file mode 100644 index 00000000000..78d0a4ecd62 --- /dev/null +++ b/models/basgra/man/write_restart.ModelName.Rd @@ -0,0 +1,34 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/write_restart.ModelName.R +\name{write_restart.ModelName} +\alias{write_restart.ModelName} +\title{Write restart template for SDA} +\usage{ +write_restart.ModelName(outdir, runid, start.time, stop.time, settings, + new.state, RENAME, new.params, inputs) +} +\arguments{ +\item{outdir}{outout directory} + +\item{runid}{run id} + +\item{start.time}{Time of current assimilation step} + +\item{stop.time}{Time of next assimilation step} + +\item{settings}{pecan settings list} + +\item{new.state}{Analysis state matrix returned by \code{sda.enkf}} + +\item{RENAME}{flag to either rename output file or not} + +\item{new.params}{optional, additionals params to pass write.configs that are deterministically related to the parameters updated by the analysis} + +\item{inputs}{new input paths updated by the SDA workflow, will be passed to write.configs} +} +\description{ +Write restart files for model +} +\author{ +Alexey Shiklomanov +} diff --git a/models/basgra/tests/Rcheck_reference.log b/models/basgra/tests/Rcheck_reference.log new file mode 100644 index 00000000000..68fa51a5ae8 --- /dev/null +++ b/models/basgra/tests/Rcheck_reference.log @@ -0,0 +1,77 @@ +* using log directory ‘/tmp/RtmpVlFdqS/PEcAn.ModelName.Rcheck’ +* using R version 3.5.2 (2018-12-20) +* using platform: x86_64-pc-linux-gnu (64-bit) +* using session charset: UTF-8 +* using options ‘--no-tests --no-manual --as-cran’ +* checking for file ‘PEcAn.ModelName/DESCRIPTION’ ... OK +* checking extension type ... Package +* this is package ‘PEcAn.ModelName’ version ‘1.7.0’ +* package encoding: UTF-8 +* checking package namespace information ... OK +* checking package dependencies ... OK +* checking if this is a source package ... OK +* checking if there is a namespace ... OK +* checking for executable files ... OK +* checking for hidden files and directories ... OK +* checking for portable file names ... OK +* checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK +* checking whether package ‘PEcAn.ModelName’ can be installed ... OK +* checking installed package size ... OK +* checking package directory ... OK +* checking DESCRIPTION meta-information ... 
NOTE +License components with restrictions not permitted: + FreeBSD + file LICENSE +Authors@R field gives no person with name and roles. +Authors@R field gives no person with maintainer role, valid email +address and non-empty name. +* checking top-level files ... OK +* checking for left-over files ... OK +* checking index information ... OK +* checking package subdirectories ... OK +* checking R files for non-ASCII characters ... OK +* checking R files for syntax errors ... OK +* checking whether the package can be loaded ... OK +* checking whether the package can be loaded with stated dependencies ... OK +* checking whether the package can be unloaded cleanly ... OK +* checking whether the namespace can be loaded with stated dependencies ... OK +* checking whether the namespace can be unloaded cleanly ... OK +* checking loading without being on the library search path ... OK +* checking dependencies in R code ... OK +* checking S3 generic/method consistency ... OK +* checking replacement functions ... OK +* checking foreign function calls ... OK +* checking R code for possible problems ... NOTE +write.config.MODEL: no visible global function definition for + ‘db.query’ +write.config.MODEL: no visible binding for global variable ‘startdate’ +write.config.MODEL: no visible binding for global variable ‘enddate’ +Undefined global functions or variables: + db.query enddate startdate +* checking Rd files ... OK +* checking Rd metadata ... OK +* checking Rd line widths ... OK +* checking Rd cross-references ... OK +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... WARNING +Undocumented arguments in documentation object 'met2model.MODEL' + ‘overwrite’ + +Undocumented arguments in documentation object 'write.config.MODEL' + ‘trait.values’ +Documented arguments not in \usage in documentation object 'write.config.MODEL': + ‘trait.samples’ + +Functions with \usage entries need to have the appropriate \alias +entries, and all their arguments documented. +The \usage entries must correspond to syntactically valid R code. +See chapter ‘Writing R documentation files’ in the ‘Writing R +Extensions’ manual. +* checking Rd contents ... OK +* checking for unstated dependencies in examples ... OK +* checking examples ... NONE +* checking for unstated dependencies in ‘tests’ ... OK +* checking tests ... SKIPPED +* DONE +Status: 1 WARNING, 2 NOTEs diff --git a/models/basgra/tests/testthat.R b/models/basgra/tests/testthat.R new file mode 100644 index 00000000000..d93798b4ffe --- /dev/null +++ b/models/basgra/tests/testthat.R @@ -0,0 +1,13 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- +library(testthat) +library(PEcAn.utils) + +PEcAn.logger::logger.setQuitOnSevere(FALSE) +#test_check("PEcAn.ModelName") diff --git a/models/basgra/tests/testthat/README.txt b/models/basgra/tests/testthat/README.txt new file mode 100644 index 00000000000..b11fefae099 --- /dev/null +++ b/models/basgra/tests/testthat/README.txt @@ -0,0 +1,3 @@ +Place your tests here. 
They will be executed in this folder, so you +can place any data you need in this folder as well (or in a subfolder +called data). diff --git a/models/basgra/tests/testthat/test.met2model.R b/models/basgra/tests/testthat/test.met2model.R new file mode 100644 index 00000000000..566b76d2cb4 --- /dev/null +++ b/models/basgra/tests/testthat/test.met2model.R @@ -0,0 +1,18 @@ +context("met2model") + +outfolder <- tempfile() +setup(dir.create(outfolder, showWarnings = FALSE)) +teardown(unlink(outfolder, recursive = TRUE)) + +test_that("Met conversion runs without error", { + skip("This is a template test that will not run. To run it, remove this `skip` call.") + nc_path <- system.file("test-data", "CRUNCEP.2000.nc", + package = "PEcAn.utils") + in.path <- dirname(nc_path) + in.prefix <- "CRUNCEP" + start_date <- "2000-01-01" + end_date <- "2000-12-31" + result <- met2model.MODEL(in.path, in.prefix, outfolder, start_date, end_date) + expect_s3_class(result, "data.frame") + expect_true(file.exists(result[["file"]][[1]])) +}) From 6f143283122477db3fc4865e27298c7b15f33257 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 25 Sep 2019 06:14:02 -0400 Subject: [PATCH 0422/2289] create model register xml --- models/basgra/inst/register.BASGRA.xml | 5 +++++ 1 file changed, 5 insertions(+) create mode 100644 models/basgra/inst/register.BASGRA.xml diff --git a/models/basgra/inst/register.BASGRA.xml b/models/basgra/inst/register.BASGRA.xml new file mode 100644 index 00000000000..6608bff29fe --- /dev/null +++ b/models/basgra/inst/register.BASGRA.xml @@ -0,0 +1,5 @@ + + + BASGRA + FALSE + From 315d8987e7b11191e6ee42265aa6629eba716d0c Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 25 Sep 2019 06:14:27 -0400 Subject: [PATCH 0423/2289] run check --- models/basgra/DESCRIPTION | 2 +- models/basgra/NAMESPACE | 2 -- models/basgra/man/met2model.MODEL.Rd | 2 +- models/basgra/man/model2netcdf.MODEL.Rd | 2 +- models/basgra/man/read_restart.ModelName.Rd | 31 ------------------ models/basgra/man/write.config.MODEL.Rd | 2 +- models/basgra/man/write_restart.ModelName.Rd | 34 -------------------- 7 files changed, 4 insertions(+), 71 deletions(-) delete mode 100644 models/basgra/man/read_restart.ModelName.Rd delete mode 100644 models/basgra/man/write_restart.ModelName.Rd diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION index a1ffe89e6c3..1308677096e 100644 --- a/models/basgra/DESCRIPTION +++ b/models/basgra/DESCRIPTION @@ -3,7 +3,7 @@ Type: Package Title: PEcAn package for integration of the ModelName model Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Istem", "Fer)) +Authors@R: c(person("Istem", "Fer")) Author: Istem Fer Maintainer: Istem Fer Description: This module provides functions to link the BASGRA to PEcAn. 
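The quote fix above repairs the `Authors@R` syntax, but the R CMD check reference log kept with this package still NOTEs that the field gives no person with roles and no maintainer with a valid email. A DESCRIPTION entry along the following lines would clear both NOTEs (the email address is a placeholder assumption, not taken from this patch):

```
Authors@R: person("Istem", "Fer",
                  email = "maintainer@example.org",
                  role = c("aut", "cre"))
```

Here `role = c("aut", "cre")` marks the same person as author and maintainer ("cre" is the role R CMD check reads as the package maintainer).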
diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index e815441f0ca..c402d35dcd6 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -2,6 +2,4 @@ export(met2model.MODEL) export(model2netcdf.MODEL) -export(read_restart.ModelName) export(write.config.MODEL) -export(write_restart.ModelName) diff --git a/models/basgra/man/met2model.MODEL.Rd b/models/basgra/man/met2model.MODEL.Rd index a3e66def0e3..3d823706b25 100644 --- a/models/basgra/man/met2model.MODEL.Rd +++ b/models/basgra/man/met2model.MODEL.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/met2model.MODEL.R +% Please edit documentation in R/met2model.BASGRA.R \name{met2model.MODEL} \alias{met2model.MODEL} \title{Write MODEL met files} diff --git a/models/basgra/man/model2netcdf.MODEL.Rd b/models/basgra/man/model2netcdf.MODEL.Rd index 0eb69eec773..6ee72ccd937 100644 --- a/models/basgra/man/model2netcdf.MODEL.Rd +++ b/models/basgra/man/model2netcdf.MODEL.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/model2netcdf.MODEL.R +% Please edit documentation in R/model2netcdf.BASGRA.R \name{model2netcdf.MODEL} \alias{model2netcdf.MODEL} \title{Code to convert MODELS's output into netCDF format} diff --git a/models/basgra/man/read_restart.ModelName.Rd b/models/basgra/man/read_restart.ModelName.Rd deleted file mode 100644 index 9a3792ff612..00000000000 --- a/models/basgra/man/read_restart.ModelName.Rd +++ /dev/null @@ -1,31 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/read_restart.ModelName.R -\name{read_restart.ModelName} -\alias{read_restart.ModelName} -\title{Read restart template for SDA} -\usage{ -read_restart.ModelName(outdir, runid, stop.time, settings, var.names, - params) -} -\arguments{ -\item{outdir}{Output directory} - -\item{runid}{Run ID} - -\item{stop.time}{Year that is being read} - -\item{settings}{PEcAn settings object} - -\item{var.names}{Variable names to be extracted} - -\item{params}{Any parameters required for state calculations} -} -\value{ -Forecast numeric matrix -} -\description{ -Read restart files from model. 
-} -\author{ -Alexey Shiklomanov -} diff --git a/models/basgra/man/write.config.MODEL.Rd b/models/basgra/man/write.config.MODEL.Rd index 553c8082f2d..7ee6f2d1dd1 100644 --- a/models/basgra/man/write.config.MODEL.Rd +++ b/models/basgra/man/write.config.MODEL.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/write.config.MODEL.R +% Please edit documentation in R/write.config.BASGRA.R \name{write.config.MODEL} \alias{write.config.MODEL} \title{Write MODEL configuration files} diff --git a/models/basgra/man/write_restart.ModelName.Rd b/models/basgra/man/write_restart.ModelName.Rd deleted file mode 100644 index 78d0a4ecd62..00000000000 --- a/models/basgra/man/write_restart.ModelName.Rd +++ /dev/null @@ -1,34 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/write_restart.ModelName.R -\name{write_restart.ModelName} -\alias{write_restart.ModelName} -\title{Write restart template for SDA} -\usage{ -write_restart.ModelName(outdir, runid, start.time, stop.time, settings, - new.state, RENAME, new.params, inputs) -} -\arguments{ -\item{outdir}{outout directory} - -\item{runid}{run id} - -\item{start.time}{Time of current assimilation step} - -\item{stop.time}{Time of next assimilation step} - -\item{settings}{pecan settings list} - -\item{new.state}{Analysis state matrix returned by \code{sda.enkf}} - -\item{RENAME}{flag to either rename output file or not} - -\item{new.params}{optional, additionals params to pass write.configs that are deterministically related to the parameters updated by the analysis} - -\item{inputs}{new input paths updated by the SDA workflow, will be passed to write.configs} -} -\description{ -Write restart files for model -} -\author{ -Alexey Shiklomanov -} From 9c76ce39f32c99ed182906238b629058c4c82d30 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 25 Sep 2019 06:30:02 -0400 Subject: [PATCH 0424/2289] start met2model --- models/basgra/R/met2model.BASGRA.R | 23 ++++++++--------------- 1 file changed, 8 insertions(+), 15 deletions(-) diff --git a/models/basgra/R/met2model.BASGRA.R b/models/basgra/R/met2model.BASGRA.R index eadfaed7d33..75be087ecdb 100644 --- a/models/basgra/R/met2model.BASGRA.R +++ b/models/basgra/R/met2model.BASGRA.R @@ -11,26 +11,19 @@ ##' Converts a met CF file to a model specific met file. The input ##' files are calld /.YYYY.cf ##' -##' @name met2model.MODEL -##' @title Write MODEL met files +##' @name met2model.BASGRA +##' @title Write BASGRA met files ##' @param in.path path on disk where CF file lives ##' @param in.prefix prefix for each file ##' @param outfolder location where model specific output is written. +##' @param start_date beginning of the weather data +##' @param end_date end of the weather data ##' @return OK if everything was succesful. ##' @export -##' @author Rob Kooper +##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -met2model.MODEL <- function(in.path, in.prefix, outfolder, overwrite = FALSE) { - PEcAn.logger::logger.severe("NOT IMPLEMENTED") +met2model.BASGRA <- function(in.path, in.prefix, outfolder, overwrite = FALSE, + start_date, end_date, ...) { - # Please follow the PEcAn style guide: - # https://pecanproject.github.io/pecan-documentation/master/coding-style.html - # Note that `library()` calls should _never_ appear here; instead, put - # packages dependencies in the DESCRIPTION file, under "Imports:". - # Calls to dependent packages should use a double colon, e.g. 
- # `packageName::functionName()`. - # Also, `require()` should be used only when a package dependency is truly - # optional. In this case, put the package name under "Suggests:" in DESCRIPTION. - -} # met2model.MODEL +} # met2model.BASGRA From f8bc4b3f8297d5f2613392a278bca03421389ae3 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 25 Sep 2019 07:58:53 -0400 Subject: [PATCH 0425/2289] first pass at met2model --- models/basgra/R/met2model.BASGRA.R | 116 +++++++++++++++++++++++++++++ 1 file changed, 116 insertions(+) diff --git a/models/basgra/R/met2model.BASGRA.R b/models/basgra/R/met2model.BASGRA.R index 75be087ecdb..6724d1b12f0 100644 --- a/models/basgra/R/met2model.BASGRA.R +++ b/models/basgra/R/met2model.BASGRA.R @@ -25,5 +25,121 @@ met2model.BASGRA <- function(in.path, in.prefix, outfolder, overwrite = FALSE, start_date, end_date, ...) { + PEcAn.logger::logger.info("START met2model.BASGRA") + + ## check to see if the outfolder is defined, if not create directory for output + if (!file.exists(outfolder)) { + dir.create(outfolder) + } + + start_date <- as.POSIXlt(start_date, tz = "UTC") + end_date <- as.POSIXlt(end_date, tz = "UTC") + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) + + out.file <- paste(in.prefix, strptime(start_date, "%Y-%m-%d"), + strptime(end_date, "%Y-%m-%d"), + "txt", + sep = ".") + + out.file.full <- file.path(outfolder, out.file) + + results <- data.frame(file = out.file.full, + host = PEcAn.remote::fqdn(), + mimetype = "text/csv", + formatname = "Sipnet.climna", + startdate = start_date, + enddate = end_date, + dbfile.name = out.file, + stringsAsFactors = FALSE) + PEcAn.logger::logger.info("internal results") + PEcAn.logger::logger.info(results) + + if (file.exists(out.file.full) && !overwrite) { + PEcAn.logger::logger.debug("File '", out.file.full, "' already exists, skipping to next file.") + return(invisible(results)) + } + + out.list <- list() + + ctr <- 1 + for(year in start_year:end_year) { + + PEcAn.logger::logger.info(year) + + # prepare data frame for BASGRA format, daily inputs, but doesn't have to be full year + diy <- PEcAn.utils::days_in_year(year) + out <- data.frame(ST = rep(42, diy), # station number, not important + YR = rep(year, diy), # year + doy = seq_len(diy)) # day of year, simple implementation for now + + + + old.file <- file.path(in.path, paste(in.prefix, year, "nc", sep = ".")) + + if (file.exists(old.file)) { + + ## open netcdf + nc <- ncdf4::nc_open(old.file) + + ## convert time to seconds + sec <- nc$dim$time$vals + sec <- udunits2::ud.convert(sec, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") + + dt <- PEcAn.utils::seconds_in_year(year) / length(sec) + tstep <- round(86400 / dt) + dt <- 86400 / tstep + + ## extract variables + Tair <-ncdf4::ncvar_get(nc, "air_temperature") ## in Kelvin + Tair_C <- udunits2::ud.convert(Tair, "K", "degC") + + # compute daily mean, min and max + ind <- rep(seq_len(diy), each = tstep) + t_dmean <- tapply(Tair_C, ind, mean, na.rm = TRUE) + t_dmax <- tapply(Tair_C, ind, max, na.rm = TRUE) + t_dmin <- tapply(Tair_C, ind, min, na.rm = TRUE) + + out$T <- t_dmean # mean temperature (degrees Celsius) + out$TMMXI <- t_dmax # max temperature (degrees Celsius) + out$TMMNI <- t_dmin # min temperature (degrees Celsius) + + RH <-ncdf4::ncvar_get(nc, "relative_humidity") # % + RH <- tapply(RH, ind, mean, na.rm = TRUE) + + out$RH <- RH # relative humidity (%) + + Rain <- ncdf4::ncvar_get(nc, "precipitation_flux") # kg m-2 s-1 + raini <- tapply(Rain*86400, ind, mean, na.rm = 
TRUE) + + out$RAINI <- raini # precipitation (mm d-1) + + + U <- ncdf4::ncvar_get(nc, "eastward_wind") + V <- ncdf4::ncvar_get(nc, "northward_wind") + ws <- sqrt(U ^ 2 + V ^ 2) + + out$WNI <- tapply(ws, ind, mean, na.rm = TRUE) # mean wind speed (m s-1) + + rad <- ncdf4::ncvar_get(nc, "surface_downwelling_shortwave_flux_in_air") + gr <- rad * 0.0864 # W m-2 to MJ m-2 d-1 + + out$GR <- tapply(gr, ind, mean, na.rm = TRUE) # irradiation (MJ m-2 d-1) + + ncdf4::nc_close(nc) + } else { + PEcAn.logger::logger.info("File for year", year, "not found. Skipping to next year") + next + } + + out.list[[ctr]] <- out + ctr <- ctr + 1 + } # end for-loop around years + + clim <- do.call("rbind", out.list) + + ## write output + write.table(clim, out.file.full, quote = FALSE, sep = "\t", row.names = FALSE, col.names = TRUE) + return(invisible(results)) } # met2model.BASGRA From 588828f2729768128c54cb49588a8ce29b3bf760 Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 25 Sep 2019 09:08:57 -0400 Subject: [PATCH 0426/2289] calling libraries and changed the loop --- base/workflow/inst/batch_run.R | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index df4ef0992f3..109266c3afb 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -1,5 +1,6 @@ #!/usr/bin/env Rscript -library(tidyverse) +library(dplyr) +library(purrr) library(PEcAn.workflow) stopifnot( requireNamespace("PEcAn.DB", quietly = TRUE), @@ -47,7 +48,13 @@ outfile <- get_arg(argv, "--outfile", "test_result_table.csv") ################################################## # Create outfile directory if it doesn't exist dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE) -input_table <- read.csv(input_table_file, stringsAsFactors = FALSE) +input_table <- read.csv(input_table_file, stringsAsFactors = FALSE) %>% + mutate( + folder= paste(model, + format(as.Date(start_date), "%Y-%m"), + format(as.Date(end_date), "%Y-%m"), + met, site_id, "test_runs", sep = "_") + ) #----------------------- Parallel Distribution of jobs seq_len(nrow(input_table)) %>% furrr::future_map(function(i){ @@ -112,7 +119,7 @@ seq_len(nrow(input_table)) %>% #----------- Checking the results of the runs -checks_df<-list.dirs(output_folder, full.names = TRUE, recursive = FALSE) %>% +checks_df<-file.path(output_folder, input_table$folder)%>% purrr::map_dfr(function(outdir){ result_table <-NULL From 40ecca7f8da84b07a6e5e27321eed63cc05eef5c Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 25 Sep 2019 09:16:40 -0400 Subject: [PATCH 0427/2289] folder --- base/workflow/inst/batch_run.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index 109266c3afb..ac6ec1ee272 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -2,11 +2,12 @@ library(dplyr) library(purrr) library(PEcAn.workflow) +library(furrr) stopifnot( requireNamespace("PEcAn.DB", quietly = TRUE), requireNamespace("PEcAn.utils", quietly = TRUE) ) -library(furrr) + plan(multiprocess) ################################################## # Parse arguments @@ -157,14 +158,13 @@ checks_df<-file.path(output_folder, input_table$folder)%>% return(result_table %>% as.data.frame() %>% - mutate(site_id=strsplit(outdir,"_")[[1]][5]) + mutate(folder=basename(outdir)) ) }) #-- Writing down the results input_table %>% - mutate(site_id= as.character(site_id)) %>% left_join(checks_df, - by="site_id") 
%>% + by="folder") %>% write.csv(outfile, row.names = FALSE) From 4f5e752dae89d0bf31eba18a7f7a8db607c71c2d Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Wed, 25 Sep 2019 13:07:02 -0400 Subject: [PATCH 0428/2289] updating plotting to fix bug --- modules/assim.sequential/R/sda_plotting.R | 8 ++++---- .../assim.sequential/inst/WillowCreek/workflow.template.R | 4 ++-- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 03986cf33cd..234e5501190 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -319,8 +319,8 @@ post.analysis.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, obs, #Defining some colors ready.OBS<-NULL generate_colors_sda() - ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, - function(x) { x })[2, ], use.names = FALSE) + #ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, + # function(x) { x })[2, ], use.names = FALSE) var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") #---- #Analysis & Forcast cleaning and STAT @@ -428,8 +428,8 @@ post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.co #Defining some colors generate_colors_sda() - ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, - function(x) { x })[2, ], use.names = FALSE) + #ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, + # function(x) { x })[2, ], use.names = FALSE) var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") #rearranging the forcast and analysis data diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index d26ff2761ed..a8c3ca09313 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -152,8 +152,8 @@ prep.data <- prep.data %>% # Finding the right end and start date -met.start <- lubridate::floor_date(Sys.Date() - lubridate::hours(2), unit = "6 hour") -met.end <- met.start + lubridate::days(16) +met.start <- obs.raw$Date%>% head(1) %>% lubridate::floor_date(unit = "day") +met.end <- lubridate::floor_date(Sys.Date() - lubridate::hours(2), unit = "6 hour") + lubridate::days(16) #pad Observed Data to match met data From 146197d20f724acbb9d6ce93db45f01d725a1f46 Mon Sep 17 00:00:00 2001 From: hamzed Date: Wed, 25 Sep 2019 15:03:58 -0400 Subject: [PATCH 0429/2289] few minor changes --- base/workflow/inst/batch_run.R | 13 +++---------- models/linkages/R/met2model.LINKAGES.R | 2 +- models/linkages/R/write.config.LINKAGES.R | 5 +++-- 3 files changed, 7 insertions(+), 13 deletions(-) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index ac6ec1ee272..45571243dcf 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -3,11 +3,8 @@ library(dplyr) library(purrr) library(PEcAn.workflow) library(furrr) -stopifnot( - requireNamespace("PEcAn.DB", quietly = TRUE), - requireNamespace("PEcAn.utils", quietly = TRUE) -) - +library(PEcAn.DB) +library(PEcAn.utils) plan(multiprocess) ################################################## # Parse arguments @@ -37,11 +34,7 @@ get_arg <- function(argv, pattern, default_value) { } dbfiles_folder <- normalizePath(get_arg(argv, "--dbfiles", "~/output/dbfiles")) -input_table_file <- 
get_arg( - argv, - "--table", - system.file("default_tests.csv", package = "PEcAn.workflow") -) +input_table_file <- get_arg(argv, "--table",system.file("default_tests.csv", package = "PEcAn.workflow")) user_id <- as.numeric(get_arg(argv, "--userid", 99000000002)) pecan_path <- get_arg(argv, "--pecandir", getwd()) output_folder <- get_arg(argv, "--outdir", "batch_test_output") diff --git a/models/linkages/R/met2model.LINKAGES.R b/models/linkages/R/met2model.LINKAGES.R index d45f5d807a8..a8e119c0f79 100644 --- a/models/linkages/R/met2model.LINKAGES.R +++ b/models/linkages/R/met2model.LINKAGES.R @@ -104,7 +104,7 @@ met2model.LINKAGES <- function(in.path, in.prefix, outfolder, start_date, end_da month_matrix_temp_mean <- matrix(NA, nyear, 12) for (i in seq_len(nyear)) { - + year_txt <- formatC(year[i], width = 4, format = "d", flag = "0") infile <- file.path(in.path, paste0(in.prefix, year_txt, ".nc")) diff --git a/models/linkages/R/write.config.LINKAGES.R b/models/linkages/R/write.config.LINKAGES.R index 193cf1f4e78..6b112717ea0 100644 --- a/models/linkages/R/write.config.LINKAGES.R +++ b/models/linkages/R/write.config.LINKAGES.R @@ -103,9 +103,10 @@ write.config.LINKAGES <- function(defaults = NULL, trait.values, settings, run.i climate_file <- settings$run$inputs$met$path load(climate_file) } + - temp.mat <- matrix(temp.mat[which(rownames(temp.mat)%in%start.year:end.year),]) - precip.mat <- matrix(precip.mat[which(rownames(precip.mat)%in%start.year:end.year),]) + temp.mat <- matrix(temp.mat[which(rownames(temp.mat)%in%start.year:end.year),], ncol = 12) + precip.mat <- matrix(precip.mat[which(rownames(precip.mat)%in%start.year:end.year),] , ncol = 12) basesc <- 74 basesn <- 1.64 From 0ffb89bb11de28946c08c1a9edf62192036eb1eb Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 26 Sep 2019 01:12:21 -0400 Subject: [PATCH 0430/2289] sipnet params --- models/sipnet/R/write.configs.SIPNET.R | 11 ++++------- models/sipnet/inst/template.param | 2 +- 2 files changed, 5 insertions(+), 8 deletions(-) diff --git a/models/sipnet/R/write.configs.SIPNET.R b/models/sipnet/R/write.configs.SIPNET.R index 4fb3a010ac5..70f441aba34 100755 --- a/models/sipnet/R/write.configs.SIPNET.R +++ b/models/sipnet/R/write.configs.SIPNET.R @@ -338,10 +338,12 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs if ("frozenSoilEff" %in% pft.names) { param[which(param[, 1] == "frozenSoilEff"), 2] <- pft.traits[which(pft.names == "frozenSoilEff")] } + # frozenSoilFolREff if ("frozenSoilFolREff" %in% pft.names) { param[which(param[, 1] == "frozenSoilFolREff"), 2] <- pft.traits[which(pft.names == "frozenSoilFolREff")] } + # soilWHC if ("soilWHC" %in% pft.names) { param[which(param[, 1] == "soilWHC"), 2] <- pft.traits[which(pft.names == "soilWHC")] @@ -358,27 +360,22 @@ write.config.SIPNET <- function(defaults, trait.values, settings, run.id, inputs } if ("immedEvapFrac" %in% pft.names) { - id <- which(param[, 1] == "immedEvapFrac") param[which(param[, 1] == "immedEvapFrac"), 2] <- pft.traits[which(pft.names == "immedEvapFrac")] } - if ("leafPoolDepth" %in% pft.names) { - id <- which(param[, 1] == "leafPoolDepth") - param[which(param[, 1] == "leafPoolDepth"), 2] <- pft.traits[which(pft.names == "leafPoolDepth")] + if ("leafWHC" %in% pft.names) { + param[which(param[, 1] == "leafPoolDepth"), 2] <- pft.traits[which(pft.names == "leafWHC")] } if ("waterRemoveFrac" %in% pft.names) { - id <- which(param[, 1] == "waterRemoveFrac") param[which(param[, 1] == "waterRemoveFrac"), 2] <- pft.traits[which(pft.names 
== "waterRemoveFrac")] } if ("fastFlowFrac" %in% pft.names) { - id <- which(param[, 1] == "fastFlowFrac") param[which(param[, 1] == "fastFlowFrac"), 2] <- pft.traits[which(pft.names == "fastFlowFrac")] } if ("rdConst" %in% pft.names) { - id <- which(param[, 1] == "rdConst") param[which(param[, 1] == "rdConst"), 2] <- pft.traits[which(pft.names == "rdConst")] } ### ----- Phenology parameters GDD leaf on diff --git a/models/sipnet/inst/template.param b/models/sipnet/inst/template.param index 506ea695643..6bffe3e7efb 100644 --- a/models/sipnet/inst/template.param +++ b/models/sipnet/inst/template.param @@ -58,7 +58,7 @@ litterWHC 1.000000 0 0.010000 4.000000 0.250000 soilWHC 12.0 1 0.1 36.000000 1.000000 immedEvapFrac 0.100000 0 0.000000 0.200000 0.025000 leafPoolDepth 0.100000 0 0.000000 0.200000 0.025000 -fastFlowFrac 0.100000 0 0.000000 0.200000 0.025000 +fastFlowFrac 0 0 0.000000 0.200000 0.025000 snowMelt 0.150000 0 0.050000 0.250000 0.020000 litWaterDrainRate 0.100000 0 0.010000 1.000000 0.100000 rdConst 300.000000 0 1.000000 1500.000000 75.000000 From 915e0d5ad90dab4745c6c725b9d661dcaa91f693 Mon Sep 17 00:00:00 2001 From: Betsy Cowdery Date: Thu, 26 Sep 2019 02:09:35 -0400 Subject: [PATCH 0431/2289] Update write.configs.ed.R Fixing reading in of custom tags under the flag in `pecan.xml` --- models/ed/R/write.configs.ed.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/ed/R/write.configs.ed.R b/models/ed/R/write.configs.ed.R index dad86d3af97..e22c8ded07f 100644 --- a/models/ed/R/write.configs.ed.R +++ b/models/ed/R/write.configs.ed.R @@ -363,7 +363,7 @@ write.config.ED2 <- function(trait.values, settings, run.id, defaults = settings if (!is.null(custom_tags)) { # Convert numeric tags to numeric # Anything that isn't converted to NA via `as.numeric` is numeric - custom_tags <- lapply(custom_tags, + custom_tags <- lapply(custom_tags, function(x) tryCatch(as.numeric(x), warning = function(e) x)) # Figure out what is a numeric vector # Look for a list of numbers like: "1,2,5" From d3519efcd79c69b48aba6f8c7ce02268c0147b03 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 26 Sep 2019 07:49:45 -0400 Subject: [PATCH 0432/2289] prepare for config --- models/basgra/NAMESPACE | 2 +- models/basgra/R/met2model.BASGRA.R | 2 +- models/basgra/R/write.config.BASGRA.R | 25 ++++++++----------------- models/basgra/inst/template.job | 7 +++---- models/basgra/man/met2model.MODEL.Rd | 25 ------------------------- 5 files changed, 13 insertions(+), 48 deletions(-) delete mode 100644 models/basgra/man/met2model.MODEL.Rd diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index c402d35dcd6..25fa0362e58 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -1,5 +1,5 @@ # Generated by roxygen2: do not edit by hand -export(met2model.MODEL) +export(met2model.BASGRA) export(model2netcdf.MODEL) export(write.config.MODEL) diff --git a/models/basgra/R/met2model.BASGRA.R b/models/basgra/R/met2model.BASGRA.R index 6724d1b12f0..35d16ea3ec1 100644 --- a/models/basgra/R/met2model.BASGRA.R +++ b/models/basgra/R/met2model.BASGRA.R @@ -96,7 +96,7 @@ met2model.BASGRA <- function(in.path, in.prefix, outfolder, overwrite = FALSE, # compute daily mean, min and max ind <- rep(seq_len(diy), each = tstep) - t_dmean <- tapply(Tair_C, ind, mean, na.rm = TRUE) + t_dmean <- tapply(Tair_C, ind, mean, na.rm = TRUE) # maybe round these numbers t_dmax <- tapply(Tair_C, ind, max, na.rm = TRUE) t_dmin <- tapply(Tair_C, ind, min, na.rm = TRUE) diff --git a/models/basgra/R/write.config.BASGRA.R 
b/models/basgra/R/write.config.BASGRA.R index 631edf9f904..7240a80008c 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -8,32 +8,23 @@ #------------------------------------------------------------------------------- ##-------------------------------------------------------------------------------------------------# -##' Writes a MODEL config file. +##' Writes a BASGRA config file. ##' ##' Requires a pft xml object, a list of trait values for a single model run, ##' and the name of the file to create ##' -##' @name write.config.MODEL -##' @title Write MODEL configuration files +##' @name write.config.BASGRA +##' @title Write BASGRA configuration files ##' @param defaults list of defaults to process ##' @param trait.samples vector of samples for a given trait ##' @param settings list of settings from pecan settings file ##' @param run.id id of run -##' @return configuration file for MODEL for given run +##' @return configuration file for BASGRA for given run ##' @export -##' @author Rob Kooper +##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -write.config.MODEL <- function(defaults, trait.values, settings, run.id) { - PEcAn.logger::logger.severe("NOT IMPLEMENTED") - # Please follow the PEcAn style guide: - # https://pecanproject.github.io/pecan-documentation/develop/coding-style.html - # Note that `library()` calls should _never_ appear here; instead, put - # packages dependencies in the DESCRIPTION file, under "Imports:". - # Calls to dependent packages should use a double colon, e.g. - # `packageName::functionName()`. - # Also, `require()` should be used only when a package dependency is truly - # optional. In this case, put the package name under "Suggests:" in DESCRIPTION. - +write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { + # find out where to write run/output rundir <- file.path(settings$host$rundir, run.id) outdir <- file.path(settings$host$outdir, run.id) @@ -43,7 +34,7 @@ write.config.MODEL <- function(defaults, trait.values, settings, run.id) { if (!is.null(settings$model$jobtemplate) && file.exists(settings$model$jobtemplate)) { jobsh <- readLines(con = settings$model$jobtemplate, n = -1) } else { - jobsh <- readLines(con = system.file("template.job", package = "PEcAn.MODEL"), n = -1) + jobsh <- readLines(con = system.file("template.job", package = "PEcAn.BASGRA"), n = -1) } # create host specific settings diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job index 7ccb49823de..f0451aa6251 100644 --- a/models/basgra/inst/template.job +++ b/models/basgra/inst/template.job @@ -13,7 +13,8 @@ mkdir -p "@OUTDIR@" # see if application needs running if [ ! -e "@OUTDIR@/results.csv" ]; then cd "@RUNDIR@" - + ln -s "@SITE_MET@" weather.txt + "@BINARY@" STATUS=$? @@ -24,7 +25,7 @@ if [ ! 
-e "@OUTDIR@/results.csv" ]; then fi # convert to MsTMIP - echo "require (PEcAn.MODEL) + echo "require (PEcAn.BASGRA) model2netcdf.MODEL('@OUTDIR@', @SITE_LAT@, @SITE_LON@, '@START_DATE@', '@END_DATE@') " | R --vanilla fi @@ -32,8 +33,6 @@ fi # copy readme with specs to output cp "@RUNDIR@/README.txt" "@OUTDIR@/README.txt" -# run getdata to extract right variables - # host specific teardown @HOST_TEARDOWN@ diff --git a/models/basgra/man/met2model.MODEL.Rd b/models/basgra/man/met2model.MODEL.Rd deleted file mode 100644 index 3d823706b25..00000000000 --- a/models/basgra/man/met2model.MODEL.Rd +++ /dev/null @@ -1,25 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/met2model.BASGRA.R -\name{met2model.MODEL} -\alias{met2model.MODEL} -\title{Write MODEL met files} -\usage{ -met2model.MODEL(in.path, in.prefix, outfolder, overwrite = FALSE) -} -\arguments{ -\item{in.path}{path on disk where CF file lives} - -\item{in.prefix}{prefix for each file} - -\item{outfolder}{location where model specific output is written.} -} -\value{ -OK if everything was succesful. -} -\description{ -Converts a met CF file to a model specific met file. The input -files are calld /.YYYY.cf -} -\author{ -Rob Kooper -} From 6d997e863b71b30a1253eace5d695d0e188da025 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 26 Sep 2019 07:51:33 -0400 Subject: [PATCH 0433/2289] start wrapper --- models/basgra/R/run_BASGRA.R | 212 ++++++++++++++++++++++++++ models/basgra/man/met2model.BASGRA.Rd | 30 ++++ 2 files changed, 242 insertions(+) create mode 100644 models/basgra/R/run_BASGRA.R create mode 100644 models/basgra/man/met2model.BASGRA.Rd diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R new file mode 100644 index 00000000000..c49f247df10 --- /dev/null +++ b/models/basgra/R/run_BASGRA.R @@ -0,0 +1,212 @@ + +##-------------------------------------------------------------------------------------------------# +##' BASGRA wrapper function. +##' +##' BASGRA is written in fortran is run through R by wrapper functions written by Marcel Van Oijen. +##' This function makes use of those wrappers but gives control of datastream in and out of the model to PEcAn. +##' With this function we skip model2netcdf, we can also skip met2model but keeping it for now. 
+##' write.config.BASGRA modifies args of this function through template.job +##' then job.sh calls this function to run the model +##' +##' @name run_BASGRA +##' @title run BASGRA model +##' @param defaults list of defaults to process +##' @param trait.samples vector of samples for a given trait +##' @param settings list of settings from pecan settings file +##' @param run.id id of run +##' @return OK +##' @export +##' @author Istem Fer +##-------------------------------------------------------------------------------------------------# + +# template.job +run_BASGRA('@BINARY@', '@SITE_MET@', '@RUN_PARAMS@', '@START_DATE@', '@END_DATE@', + @SITE_LAT@, @SITE_LON@, '@OUTDIR@') + +run_BASGRA <- function(binary_path, file_weather, file_params, start_date, end_date, + outdir, sitelat, sitelon){ + + + file_weather <- "/fs/data1/pecan.data/dbfiles/Fluxnet2015_BASGRA_site_1-523/FLX_DK-ZaH_FLUXNET2015_SUBSET_HH_2000-2014_2-3.2000-01-01.2007-12-31.txt" + file_params <- "/fs/data3/istfer/BASGRA_N/parameters/parameters.txt" + start_date <- '2000/01/01' + end_date <- '2007/12/31' + + ############################# GENERAL INITIALISATION ######################## + # this part corresponds to initialise_BASGRA_general.R function + + # load DLL + dyn.load(binary_path) + + ################################################################################ + calendar_fert <- matrix( 0, nrow=100, ncol=3 ) + calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) + calendar_Ndep[1,] <- c(1900, 1,0) + calendar_Ndep[2,] <- c(2100,366,0) + days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) + + ################################################################################ + ### 1. MODEL LIBRARY FILE & FUNCTION FOR RUNNING THE MODEL + run_model <- function(p = params, + w = matrix_weather, + calf = calendar_fert, + calN = calendar_Ndep, + h = days_harvest, + n = NDAYS) { + .Fortran('BASGRA', p,w,calf,calN,h,n, NOUT,matrix(0,n,NOUT))[[8]] + } + + ################################################################################ + ### 2. FUNCTIONS FOR READING WEATHER DATA + read_weather_Bioforsk <- function(y = year_start, + d = doy_start, + n = NDAYS, + f = file_weather) { + df_weather <- read.table( f, header=TRUE ) + row_start <- 1 + while( df_weather[row_start,]$YR < y ) { row_start <- row_start+1 } + while( df_weather[row_start,]$doy < d ) { row_start <- row_start+1 } + df_weather_sim <- df_weather[row_start:(row_start+n-1),] + NMAXDAYS <- as.integer(10000) + NWEATHER <- as.integer(8) + matrix_weather <- matrix( 0., nrow=NMAXDAYS, ncol=NWEATHER ) + matrix_weather[1:n,1] <- df_weather_sim$YR + matrix_weather[1:n,2] <- df_weather_sim$doy + matrix_weather[1:n,3] <- df_weather_sim$GR + matrix_weather[1:n,4] <- df_weather_sim$T + matrix_weather[1:n,5] <- df_weather_sim$T + matrix_weather[1:n,6] <- exp(17.27*df_weather_sim$T/(df_weather_sim$T+239)) * + 0.6108 * df_weather_sim$RH / 100 + matrix_weather[1:n,7] <- df_weather_sim$RAINI + matrix_weather[1:n,8] <- df_weather_sim$WNI + return(matrix_weather) + } + + + + ################################################################################ + ### 3. 
OUTPUT VARIABLES + outputNames <- c( + "Time" , "year" , "doy" , "DAVTMP" , "CLV" , "CLVD" , + "YIELD" , "CRES" , "CRT" , "CST" , "CSTUB" , "DRYSTOR" , + "Fdepth" , "LAI" , "LT50" , "O2" , "PHEN" , "ROOTD" , + "Sdepth" , "TANAER" , "TILG" , "TILV" , "WAL" , "WAPL" , + "WAPS" , "WAS" , "WETSTOR" , "DM" , "RES" , "LERG" , + "NELLVG" , "RLEAF" , "SLA" , "TILTOT" , "FRTILG" , "FRTILG1" , + "FRTILG2" , "RDRT" , "VERN" , + "CLITT" , "CSOMF", "CSOMS" , "NLITT" , "NSOMF", + "NSOMS" , "NMIN" , "Rsoil" , "NemissionN2O", + "NemissionNO", "Nfert", "Ndep" , "RWA" , + "NSH" , "GNSH" , "DNSH" , "HARVNSH" , "NCSH" , + "NCGSH" , "NCDSH", "NCHARVSH", + "fNgrowth","RGRTV","FSPOT","RESNOR","TV2TIL","NSHNOR","KNMAX","KN", # 61:68 + "DMLV" , "DMST" , "NSH_DMSH" , # 69:71 + "Nfert_TOT" , "YIELD_TOT" , "DM_MAX" , # 72:74 + "F_PROTEIN" , "F_ASH" , # 75:76 + "F_WALL_DM" , "F_WALL_DMSH" , "F_WALL_LV" , "F_WALL_ST", # 77:80 + "F_DIGEST_DM", "F_DIGEST_DMSH" , # 81:82 + "F_DIGEST_LV", "F_DIGEST_ST" , "F_DIGEST_WALL", # 83:85 + "RDRS" , "Precipitation" , "Nleaching" , "NSHmob", # 86:89 + "NSHmobsoil" , "Nfixation" , "Nupt" , "Nmineralisation", # 90:93 + "NSOURCE" , "NSINK" , # 94:95 + "NRT" , "NCRT" , # 96:97 + "rNLITT" , "rNSOMF" , # 98:99 + "DAYL" # 100 + ) + + outputUnits <- c( + "(y)" , "(y)" , "(d)" , "(degC)" , "(g C m-2)", "(g C m-2)", # 1: 6 + "(g DM m-2)", "(g C m-2)", "(g C m-2)", "(g C m-2)" , "(g C m-2)", "(mm)" , # 7:12 + "(m)" , "(m2 m-2)" , "(degC)" , "(mol m-2)" , "(-)" , "(m)" , # 13:18 + "(m)" , "(d)" , "(m-2)" , "(m-2)" , "(mm)" , "(mm)" , # 19:24 + "(mm)" , "(mm)" , "(mm)" , "(g DM m-2)", "(g g-1)" , "(m d-1)" , # 25:30 + "(tiller-1)", "(d-1)" , "(m2 g-1)" , "(m-2)" , "(-)" , "(-)" , # 31:36 + "(-)" , "(d-1)" , "(-)" , # 37:39 + "(g C m-2)" , "(g C m-2)" , "(g C m-2)" , "(g N m-2)" , "(g N m-2)", # 40:44 + "(g N m-2)" , "(g N m-2)" , "(g C m-2 d-1)", "(g N m-2 d-1)", # 45:48 + "(g N m-2 d-1)", "(g N m-2 d-1)", "(g N m-2 d-1)", "(-)" , # 49:52 + "(g N m-2)" , "(g N m-2 d-1)", "(g N m-2 d-1)", "(g N m-2 d-1)", "(-)" , # 53:57 + "(-)" , "(-)" , "(-)" , # 58:60 + "(-)", "(d-1)", "(-)", "(-)", "(d-1)", "(-)", "(m2 m-2)", "(m2 m-2)", # 61:68 + "(g DM m-2)" , "(g DM m-2)" , "(g N g-1 DM)" , # 69:71 + "(g N m-2)" , "(g DM m-2)" , "(g DM m-2)" , # 72:74 + "(g g-1 DM)" , "(g g-1 DM)" , # 75:76 + "(g g-1 DM)" , "(g g-1 DM)" , "(g g-1 DM)" , "(g g-1 DM)" , # 77:80 + "(-)" , "(-)" , # 81:82 + "(-)" , "(-)" , "(-)" , # 83:85 + "(d-1)" , "(mm d-1)" , "(g N m-2 d-1)" , "(g N m-2 d-1)", # 86:89 + "(g N m-2 d-1)", "(g N m-2 d-1)", "(g N m-2 d-1)" , "(g N m-2 d-1)", # 90:93 + "(g N m-2 d-1)", "(g N m-2 d-1)", # 94:95 + "(g N m-2)" , "(g N g-1 C)" , # 96:97 + "(g N m-2)" , "(g N g-1 C)" , # 98:99 + "(d d-1)" # 100 + ) + + NOUT <- as.integer( length(outputNames) ) + + + ############################# SITE CONDITIONS ######################## + # this part corresponds to initialise_BASGRA_***.R functions + + start_date <- as.POSIXlt(start_date, tz = "UTC") + end_date <- as.POSIXlt(end_date, tz = "UTC") + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) + + year_start <- as.integer(start_year) + doy_start <- as.integer(lubridate::yday(start_date)) + NDAYS <- as.integer(sum(PEcAn.utils::days_in_year(start_year:end_year))) # could be partial years, change later + parcol <- 13 + + matrix_weather <- read_weather_Bioforsk(year_start,doy_start,NDAYS,file_weather) + + # hardcoding these for now, should be able to modify later on + calendar_fert[1,] <- c( 2000, 115, 140*1000/ 10000 ) # 140 kg N ha-1 
applied on day 115 + calendar_fert[2,] <- c( 2000, 150, 80*1000/ 10000 ) # 80 kg N ha-1 applied on day 150 + # calendar_fert[3,] <- c( 2001, 123, 0*1000/ 10000 ) # 0 kg N ha-1 applied on day 123 + calendar_Ndep[1,] <- c( 1900, 1, 2*1000/(10000*365) ) # 2 kg N ha-1 y-1 N-deposition in 1900 + calendar_Ndep[2,] <- c( 1980, 366, 20*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 + calendar_Ndep[3,] <- c( 2100, 366, 20*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 + days_harvest [1,] <- c( 2000, 150 ) + days_harvest [2,] <- c( 2000, 216 ) + + days_harvest <- as.integer(days_harvest) + + # create vector params + df_params <- read.table(file_params,header=T,sep="\t",row.names=1) + parcol <- 13 + params <- df_params[,parcol] + + + # run model + output <- .Fortran('BASGRA', + params, + matrix_weather, + calendar_fert, + calendar_Ndep, + days_harvest, + NDAYS, + NOUT, + matrix(0,NDAYS,NOUT))[[8]] + + head(output) + + ############################# WRITE OUTPUTS ########################### + # writing model outputs already in standard format + + # only LAI and CropYield for now + + lai <- output[,which(outputNames == "LAI")] + + CropYield <- output[,which(outputNames == "YIELD")] + + +} # run_BASGRA + +################################################################################ + + + + + + diff --git a/models/basgra/man/met2model.BASGRA.Rd b/models/basgra/man/met2model.BASGRA.Rd new file mode 100644 index 00000000000..e4eaf2abdd4 --- /dev/null +++ b/models/basgra/man/met2model.BASGRA.Rd @@ -0,0 +1,30 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/met2model.BASGRA.R +\name{met2model.BASGRA} +\alias{met2model.BASGRA} +\title{Write BASGRA met files} +\usage{ +met2model.BASGRA(in.path, in.prefix, outfolder, overwrite = FALSE, + start_date, end_date, ...) +} +\arguments{ +\item{in.path}{path on disk where CF file lives} + +\item{in.prefix}{prefix for each file} + +\item{outfolder}{location where model specific output is written.} + +\item{start_date}{beginning of the weather data} + +\item{end_date}{end of the weather data} +} +\value{ +OK if everything was successful. +} +\description{ +Converts a met CF file to a model specific met file. The input +files are called /.YYYY.cf +} +\author{ +Istem Fer +} From 5f3ab211a0fc5c24d15a17295ed9b272ea1cb378 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 26 Sep 2019 08:16:15 -0400 Subject: [PATCH 0434/2289] first pass at wrapper function --- models/basgra/NAMESPACE | 3 +- models/basgra/R/run_BASGRA.R | 78 ++++++++++++------- models/basgra/man/run_BASGRA.Rd | 39 ++++++++++ ...config.MODEL.Rd => write.config.BASGRA.Rd} | 14 ++-- 4 files changed, 100 insertions(+), 34 deletions(-) create mode 100644 models/basgra/man/run_BASGRA.Rd rename models/basgra/man/{write.config.MODEL.Rd => write.config.BASGRA.Rd} (66%) diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index 25fa0362e58..8298ace1b90 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -2,4 +2,5 @@ export(met2model.BASGRA) export(model2netcdf.MODEL) +export(run_BASGRA) +export(write.config.BASGRA) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index c49f247df10..09a96ed0c4e 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -1,6 +1,6 @@ ##-------------------------------------------------------------------------------------------------# -##' BASGRA wrapper function. +##' BASGRA wrapper function. 
Runs and writes model outputs in PEcAn standard. ##' ##' BASGRA is written in Fortran and is run through R by wrapper functions written by Marcel Van Oijen. ##' This function makes use of those wrappers but gives control of datastream in and out of the model to PEcAn. ##' With this function we skip model2netcdf, we can also skip met2model but keeping it for now. @@ -10,28 +10,22 @@ ##' ##' @name run_BASGRA ##' @title run BASGRA model -##' @param defaults list of defaults to process -##' @param trait.samples vector of samples for a given trait -##' @param settings list of settings from pecan settings file -##' @param run.id id of run -##' @return OK +##' @param binary_path path to model binary +##' @param file_weather path to climate file, should change when I get rid of met2model? +##' @param file_params path to params file +##' @param start_date start time of the simulation +##' @param end_date end time of the simulation +##' @param outdir where to write BASGRA output +##' @param sitelat latitude of the site +##' @param sitelon longitude of the site +##' ##' @export ##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -# template.job -run_BASGRA('@BINARY@', '@SITE_MET@', '@RUN_PARAMS@', '@START_DATE@', '@END_DATE@', - @SITE_LAT@, @SITE_LON@, '@OUTDIR@') - run_BASGRA <- function(binary_path, file_weather, file_params, start_date, end_date, outdir, sitelat, sitelon){ - - file_weather <- "/fs/data1/pecan.data/dbfiles/Fluxnet2015_BASGRA_site_1-523/FLX_DK-ZaH_FLUXNET2015_SUBSET_HH_2000-2014_2-3.2000-01-01.2007-12-31.txt" - file_params <- "/fs/data3/istfer/BASGRA_N/parameters/parameters.txt" - start_date <- '2000/01/01' - end_date <- '2007/12/31' - ############################# GENERAL INITIALISATION ######################## # this part corresponds to initialise_BASGRA_general.R function @@ -189,24 +183,56 @@ run_BASGRA <- function(binary_path, file_weather, file_params, start_date, end_d NOUT, matrix(0,NDAYS,NOUT))[[8]] - head(output) ############################# WRITE OUTPUTS ########################### # writing model outputs already in standard format # only LAI and CropYield for now - lai <- output[,which(outputNames == "LAI")] - CropYield <- output[,which(outputNames == "YIELD")] + years <- seq(start_year, end_year) + for (y in years) { + thisyear <- output[, which(outputNames == "year")] == y + outlist <- list() + outlist[[1]] <- output[thisyear, which(outputNames == "LAI")] # LAI in (m2 m-2) + + CropYield <- output[thisyear, which(outputNames == "YIELD")] # (g DM m-2) + outlist[[2]] <- udunits2::ud.convert(CropYield, "g m-2", "kg m-2") + + # ******************** Declare netCDF dimensions and variables ********************# + t <- ncdf4::ncdim_def(name = "time", + units = paste0("days since ", y, "-01-01 00:00:00"), + seq_len(PEcAn.utils::days_in_year(y)), + calendar = "standard", + unlim = TRUE) + + + lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(sitelat), longname = "station_latitude") + lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(sitelon), longname = "station_longitude") + + dims <- list(lon = lon, lat = lat, time = t) + + var <- list() + var[[1]] <- PEcAn.utils::to_ncvar("LAI", dims) + var[[2]] <- PEcAn.utils::to_ncvar("CropYield", dims) + # ******************** Declare netCDF variables ********************# + + ### Output netCDF data + nc <- ncdf4::nc_create(file.path(outdir, paste(y, "nc", sep = ".")), var) + varfile <- file(file.path(outdir, paste(y, "nc", 
"var", sep = ".")), "w") + for (i in seq_along(var)) { + # print(i) + ncdf4::ncvar_put(nc, var[[i]], outlist[[i]]) + cat(paste(var[[i]]$name, var[[i]]$longname), file = varfile, sep = "\n") + } + close(varfile) + ncdf4::nc_close(nc) + } # end year-loop over outputs + +} # run_BASGRA +#--------------------------------------------------------------------------------------------------# +### EOF diff --git a/models/basgra/man/run_BASGRA.Rd b/models/basgra/man/run_BASGRA.Rd new file mode 100644 index 00000000000..486467fe1fe --- /dev/null +++ b/models/basgra/man/run_BASGRA.Rd @@ -0,0 +1,39 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/run_BASGRA.R +\name{run_BASGRA} +\alias{run_BASGRA} +\title{run BASGRA model} +\usage{ +run_BASGRA(binary_path, file_weather, file_params, start_date, end_date, + outdir, sitelat, sitelon) +} +\arguments{ +\item{binary_path}{path to model binary} + +\item{file_weather}{path to climate file, should change when I get rid of met2model?} + +\item{file_params}{path to params file} + +\item{start_date}{start time of the simulation} + +\item{end_date}{end time of the simulation} + +\item{outdir}{where to write BASGRA output} + +\item{sitelat}{latitude of the site} + +\item{sitelon}{longitude of the site} +} +\description{ +BASGRA wrapper function. Runs and writes model outputs in PEcAn standard. +} +\details{ +BASGRA is written in fortran is run through R by wrapper functions written by Marcel Van Oijen. +This function makes use of those wrappers but gives control of datastream in and out of the model to PEcAn. +With this function we skip model2netcdf, we can also skip met2model but keeping it for now. +write.config.BASGRA modifies args of this function through template.job +then job.sh runs calls this function to run the model +} +\author{ +Istem Fer +} diff --git a/models/basgra/man/write.config.MODEL.Rd b/models/basgra/man/write.config.BASGRA.Rd similarity index 66% rename from models/basgra/man/write.config.MODEL.Rd rename to models/basgra/man/write.config.BASGRA.Rd index 7ee6f2d1dd1..39125f34b10 100644 --- a/models/basgra/man/write.config.MODEL.Rd +++ b/models/basgra/man/write.config.BASGRA.Rd @@ -1,10 +1,10 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/write.config.BASGRA.R -\name{write.config.MODEL} -\alias{write.config.MODEL} -\title{Write MODEL configuration files} +\name{write.config.BASGRA} +\alias{write.config.BASGRA} +\title{Write BASGRA configuration files} \usage{ -write.config.MODEL(defaults, trait.values, settings, run.id) +write.config.BASGRA(defaults, trait.values, settings, run.id) } \arguments{ \item{defaults}{list of defaults to process} @@ -16,15 +16,15 @@ write.config.MODEL(defaults, trait.values, settings, run.id) \item{trait.samples}{vector of samples for a given trait} } \value{ -configuration file for MODEL for given run +configuration file for BASGRA for given run } \description{ -Writes a MODEL config file. +Writes a BASGRA config file. 
} \details{ Requires a pft xml object, a list of trait values for a single model run, and the name of the file to create } \author{ -Rob Kooper +Istem Fer } From 43ca6baaf118747c48ac4e45ea657f3d93a5c585 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 26 Sep 2019 08:48:51 -0400 Subject: [PATCH 0435/2289] first pass at write.config --- models/basgra/R/run_BASGRA.R | 16 ++----- models/basgra/R/write.config.BASGRA.R | 63 +++++++++---------------- models/basgra/inst/BASGRA_params.Rdata | Bin 0 -> 2675 bytes models/basgra/inst/template.job | 11 ++--- models/basgra/man/run_BASGRA.Rd | 4 +- 5 files changed, 35 insertions(+), 59 deletions(-) create mode 100644 models/basgra/inst/BASGRA_params.Rdata diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 09a96ed0c4e..00d770346a2 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -12,7 +12,7 @@ ##' @title run BASGRA model ##' @param binary_path path to model binary ##' @param file_weather path to climate file, should change when I get rid of met2model? -##' @param file_params path to params file +##' @param run_params parameter vector ##' @param start_date start time of the simulation ##' @param end_date end time of the simulation ##' @param outdir where to write BASGRA output @@ -23,7 +23,7 @@ ##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -run_BASGRA <- function(binary_path, file_weather, file_params, start_date, end_date, +run_BASGRA <- function(binary_path, file_weather, run_params, start_date, end_date, outdir, sitelat, sitelon){ ############################# GENERAL INITIALISATION ######################## @@ -36,7 +36,7 @@ run_BASGRA <- function(binary_path, file_weather, file_params, start_date, end_d calendar_fert <- matrix( 0, nrow=100, ncol=3 ) calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) calendar_Ndep[1,] <- c(1900, 1,0) - calendar_Ndep[2,] <- c(2100,366,0) + calendar_Ndep[2,] <- c(2100, 366, 0) days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) ################################################################################ @@ -166,22 +166,16 @@ run_BASGRA <- function(binary_path, file_weather, file_params, start_date, end_d days_harvest <- as.integer(days_harvest) - # create vector params - df_params <- read.table(file_params,header=T,sep="\t",row.names=1) - parcol <- 13 - params <- df_params[,parcol] - - # run model output <- .Fortran('BASGRA', - params, + run_params, matrix_weather, calendar_fert, calendar_Ndep, days_harvest, NDAYS, NOUT, - matrix(0,NDAYS,NOUT))[[8]] + matrix(0, NDAYS, NOUT))[[8]] ############################# WRITE OUTPUTS ########################### diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 7240a80008c..d3874f5569c 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -29,7 +29,26 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { rundir <- file.path(settings$host$rundir, run.id) outdir <- file.path(settings$host$outdir, run.id) + # load default(!) 
BASGRA params + load(system.file("BASGRA_params.Rdata",package = "PEcAn.BASGRA")) + run_params <- default_params + + run_params[which(names(default_params) == "LAT")] <- as.numeric(settings$run$site$lat) + + #### write run-specific PFT parameters here #### Get parameters being handled by PEcAn + for (pft in seq_along(trait.values)) { + + pft.traits <- unlist(trait.values[[pft]]) + pft.names <- names(pft.traits) + + if ("c2n_fineroot" %in% pft.names) { + run_params[which(names(default_params) == "NCR")] <- 1/pft.traits[which(pft.names == "c2n_fineroot")] + } + } + + #----------------------------------------------------------------------- + # write job.sh # create launch script (which will create symlink) if (!is.null(settings$model$jobtemplate) && file.exists(settings$model$jobtemplate)) { jobsh <- readLines(con = settings$model$jobtemplate, n = -1) @@ -60,7 +79,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { jobsh <- gsub("@SITE_LAT@", settings$run$site$lat, jobsh) jobsh <- gsub("@SITE_LON@", settings$run$site$lon, jobsh) - jobsh <- gsub("@SITE_MET@", settings$run$site$met, jobsh) + jobsh <- gsub("@SITE_MET@", settings$run$inputs$met$path, jobsh) jobsh <- gsub("@START_DATE@", settings$run$start.date, jobsh) jobsh <- gsub("@END_DATE@", settings$run$end.date, jobsh) @@ -70,46 +89,10 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { jobsh <- gsub("@BINARY@", settings$model$binary, jobsh) + jobsh <- gsub("@RUN_PARAMS@", paste0("c(",listToArgString(run_params),")"), jobsh) + writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh")) Sys.chmod(file.path(settings$rundir, run.id, "job.sh")) - #----------------------------------------------------------------------- - ### Edit a templated config file for runs - if (!is.null(settings$model$config) && file.exists(settings$model$config)) { - config.text <- readLines(con = settings$model$config, n = -1) - } else { - filename <- system.file(settings$model$config, package = "PEcAn.MODEL") - if (filename == "") { - if (!is.null(settings$model$revision)) { - filename <- system.file(paste0("config.", settings$model$revision), package = "PEcAn.MODEL") - } else { - model <- db.query(paste("SELECT * FROM models WHERE id =", settings$model$id), params = settings$database$bety) - filename <- system.file(paste0("config.r", model$revision), package = "PEcAn.MODEL") - } - } - if (filename == "") { - PEcAn.logger::logger.severe("Could not find config template") - } - PEcAn.logger::logger.info("Using", filename, "as template") - config.text <- readLines(con = filename, n = -1) - } - - config.text <- gsub("@SITE_LAT@", settings$run$site$lat, config.text) - config.text <- gsub("@SITE_LON@", settings$run$site$lon, config.text) - config.text <- gsub("@SITE_MET@", settings$run$inputs$met$path, config.text) - config.text <- gsub("@MET_START@", settings$run$site$met.start, config.text) - config.text <- gsub("@MET_END@", settings$run$site$met.end, config.text) - config.text <- gsub("@START_MONTH@", format(startdate, "%m"), config.text) - config.text <- gsub("@START_DAY@", format(startdate, "%d"), config.text) - config.text <- gsub("@START_YEAR@", format(startdate, "%Y"), config.text) - config.text <- gsub("@END_MONTH@", format(enddate, "%m"), config.text) - config.text <- gsub("@END_DAY@", format(enddate, "%d"), config.text) - config.text <- gsub("@END_YEAR@", format(enddate, "%Y"), config.text) - config.text <- gsub("@OUTDIR@", settings$host$outdir, config.text) - config.text <- gsub("@ENSNAME@", 
run.id, config.text) - config.text <- gsub("@OUTFILE@", paste0("out", run.id), config.text) - - #----------------------------------------------------------------------- - config.file.name <- paste0("CONFIG.", run.id, ".txt") - writeLines(config.text, con = paste(outdir, config.file.name, sep = "")) + } # write.config.MODEL diff --git a/models/basgra/inst/BASGRA_params.Rdata b/models/basgra/inst/BASGRA_params.Rdata new file mode 100644 index 0000000000000000000000000000000000000000..cff6ed6aa23f16ba14fb5b563f58df142b20ee03 GIT binary patch literal 2675 zcmZWq4Nw$S96z|*`yfFhrm=D~r^e9989?zXowv(=-Fdrv=k6W{MV@CKV)EsoqG;t9 zjGC2+MyN9mhBek$;vk0Os24d|YLgg~-zdnKh?+`T8d`5}-`hKDXO7+9|Ks<5@3;TA z$V^^VQeKkDWJ)xdEf!Ov89rJP;hzMP#pHnZlsTco;G*J%f%0HDSW*ECQ($xf=s$V; zi@$#>0jC}ntZ$d5f@81NtlDs)9_FIgp^l<+Yj-Y42Gvzv{MTvA$WM1f$h)bf^`I-cT1=R-0bJQ%f24aQ1%B6H zZ6oO0xXS}m;L6yPT+7f6;M(rPSx5alz=f`+H4QJx3>+H&>$YIepWtTSjR*9j z*FlXxZQrpX7g+vPT0_Ghr^$p_)S8LzBjBQzE~q8BwKcW>gNuLIGdYh;oxpB`l0lz&*EhX`?}Nof zZNW+Z41i|c7t;Ft;VqwCR!L&mPJ8C4UNU=T>x{md5#%!m${SB5O#|0$fwGe$`blw0 z)F<%Exab8|vF&p^S3UsE?JrKB{X#eBIW~Ic*5@4HmhLY&pMJM(UBPJ5g>Ycd_-}XY zEC6RU`r)nM$L15ut6MIRGbScYgDZg*eP4Elmeqis%FXk=O?N=ugOyj`dU-jxd1Q#j z3x)Ca8hHj-ntU6*gyRYU%oL$}z@T_aM6`jdL-*S!S7-t3x3T`nXqgFILH_NIFzMpJWl&c&_@lc0 zo4`hW{=w1axefOn#h4#-+*n@}ZW>7ruPy!L1J6bBl}*~U+2FZ;T)TcUFxK6R>aObR z{^zt3rNNRAJfx-}@V5XRVTa)1Gc#x**M$N}dO&6s96=?riUSr}F`o6LK)G;|@H$xu z1Dom+RF6??;~aZ+BqN1Nh; zb(zL?G;a($2sMXNa3!H?=3`x|$EzBaT2+~n49$u8*Z{LYVPdnNHr^@crzr-DfxwSn z6afK;Wm>fii~B8p__9%t-Ctf-RxAdWU@=VeKMg__j;F9pg7ah<4iR&MB_%=Ju=SBw`E7&z0^ zV+c>GRU3B^Pos|TIND@NWf@}wgTf(aFg!;N&0&H|@nW*mQyQvNO3Nxy#0Def!8hh3 zNGSerXdwa?2nunXs%_68e?lz@6))t%p-@!_RXEgea9(K~wL`5~6fR`u6=Dv%Tkx^k z;l&;LQRXx8yS8hGlF#I`P=Q_MWi<}lw26*8jA||`8(z?YOI0&aCqXM75d#}7#pfV| zNiHr6p<+NzYc;YCPHWX7hG-3F9VlRvC4r@A{IacXi4|RV_y`Hg#ne_csuA@;6i2Hg z%t@&5Sm70Is^a2B3PVEXq%3^m9e&QMWM{+zHi=f8u`_9oeKj=mWO(Wg+3?Gu`tV>m zB#wpN`UISJa1zZ!D&x~+MnPmV?ZUkd5tm~knLYNaOABklJu_C Date: Thu, 26 Sep 2019 09:46:13 -0400 Subject: [PATCH 0436/2289] rename format in met2model --- models/basgra/R/met2model.BASGRA.R | 2 +- models/basgra/R/run_BASGRA.R | 8 +++----- 2 files changed, 4 insertions(+), 6 deletions(-) diff --git a/models/basgra/R/met2model.BASGRA.R b/models/basgra/R/met2model.BASGRA.R index 35d16ea3ec1..8f176f810ef 100644 --- a/models/basgra/R/met2model.BASGRA.R +++ b/models/basgra/R/met2model.BASGRA.R @@ -47,7 +47,7 @@ met2model.BASGRA <- function(in.path, in.prefix, outfolder, overwrite = FALSE, results <- data.frame(file = out.file.full, host = PEcAn.remote::fqdn(), mimetype = "text/csv", - formatname = "Sipnet.climna", + formatname = "Weather-Bioforsk", startdate = start_date, enddate = end_date, dbfile.name = out.file, diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 00d770346a2..61de604f226 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -167,13 +167,13 @@ run_BASGRA <- function(binary_path, file_weather, run_params, start_date, end_da days_harvest <- as.integer(days_harvest) # run model - output <- .Fortran('BASGRA', + output <- .Fortran('BASGRA', run_params, - matrix_weather, + matrix_weather, calendar_fert, calendar_Ndep, days_harvest, - NDAYS, + NDAYS, NOUT, matrix(0, NDAYS, NOUT))[[8]] @@ -228,5 +228,3 @@ run_BASGRA <- function(binary_path, file_weather, run_params, start_date, end_da } # run_BASGRA -#--------------------------------------------------------------------------------------------------# -### EOF From 60c3793eb07ad4d208d364ab1ebb200cf192d1cd Mon Sep 17 00:00:00 2001 
From: istfer Date: Thu, 26 Sep 2019 09:47:53 -0400 Subject: [PATCH 0437/2289] remove model2netcdf --- models/basgra/NAMESPACE | 1 - models/basgra/R/model2netcdf.BASGRA.R | 37 ------------------------- models/basgra/man/model2netcdf.MODEL.Rd | 25 ----------------- 3 files changed, 63 deletions(-) delete mode 100644 models/basgra/R/model2netcdf.BASGRA.R delete mode 100644 models/basgra/man/model2netcdf.MODEL.Rd diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index 8298ace1b90..6e2049dfa2f 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -1,6 +1,5 @@ # Generated by roxygen2: do not edit by hand export(met2model.BASGRA) -export(model2netcdf.MODEL) export(run_BASGRA) export(write.config.BASGRA) diff --git a/models/basgra/R/model2netcdf.BASGRA.R b/models/basgra/R/model2netcdf.BASGRA.R deleted file mode 100644 index 230deb181da..00000000000 --- a/models/basgra/R/model2netcdf.BASGRA.R +++ /dev/null @@ -1,37 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##-------------------------------------------------------------------------------------------------# -##' Convert MODEL output into the NACP Intercomparison format (ALMA using netCDF) -##' -##' @name model2netcdf.MODEL -##' @title Code to convert MODELS's output into netCDF format -##' -##' @param outdir Location of model output -##' @param sitelat Latitude of the site -##' @param sitelon Longitude of the site -##' @param start_date Start time of the simulation -##' @param end_date End time of the simulation -##' @export -##' -##' @author Rob Kooper -model2netcdf.MODEL <- function(outdir, sitelat, sitelon, start_date, end_date) { - PEcAn.logger::logger.severe("NOT IMPLEMENTED") - - # Please follow the PEcAn style guide: - # https://pecanproject.github.io/pecan-documentation/develop/coding-style.html - - # Note that `library()` calls should _never_ appear here; instead, put - # packages dependencies in the DESCRIPTION file, under "Imports:". - # Calls to dependent packages should use a double colon, e.g. - # `packageName::functionName()`. - # Also, `require()` should be used only when a package dependency is truly - # optional. In this case, put the package name under "Suggests:" in DESCRIPTION. 
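(The converter stub being deleted in this patch is redundant because run_BASGRA already writes PEcAn-standard netCDF per simulated year; a condensed sketch of that pattern, assuming `y`, `sitelat`, `sitelon`, `outdir` and the annual `outlist` values are in scope as in the wrapper above:)

```r
t   <- ncdf4::ncdim_def(name = "time",
                        units = paste0("days since ", y, "-01-01 00:00:00"),
                        seq_len(PEcAn.utils::days_in_year(y)),
                        calendar = "standard", unlim = TRUE)
lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(sitelat))
lon <- ncdf4::ncdim_def("lon", "degrees_east",  vals = as.numeric(sitelon))
dims <- list(lon = lon, lat = lat, time = t)
var  <- list(PEcAn.utils::to_ncvar("LAI", dims))
nc   <- ncdf4::nc_create(file.path(outdir, paste(y, "nc", sep = ".")), var)
ncdf4::ncvar_put(nc, var[[1]], outlist[[1]])  # one variable shown; the wrapper loops over all
ncdf4::nc_close(nc)
```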
- -} # model2netcdf.MODEL diff --git a/models/basgra/man/model2netcdf.MODEL.Rd b/models/basgra/man/model2netcdf.MODEL.Rd deleted file mode 100644 index 6ee72ccd937..00000000000 --- a/models/basgra/man/model2netcdf.MODEL.Rd +++ /dev/null @@ -1,25 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/model2netcdf.BASGRA.R -\name{model2netcdf.MODEL} -\alias{model2netcdf.MODEL} -\title{Code to convert MODELS's output into netCDF format} -\usage{ -model2netcdf.MODEL(outdir, sitelat, sitelon, start_date, end_date) -} -\arguments{ -\item{outdir}{Location of model output} - -\item{sitelat}{Latitude of the site} - -\item{sitelon}{Longitude of the site} - -\item{start_date}{Start time of the simulation} - -\item{end_date}{End time of the simulation} -} -\description{ -Convert MODEL output into the NACP Intercomparison format (ALMA using netCDF) -} -\author{ -Rob Kooper -} From 1ffde964bce7d7b382a3a5cd53a62dae5d5186d4 Mon Sep 17 00:00:00 2001 From: Betsy Cowdery Date: Thu, 26 Sep 2019 10:45:13 -0400 Subject: [PATCH 0438/2289] Update Rcheck_reference.log Updating Rcheck_reference.log --- models/ed/tests/Rcheck_reference.log | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/models/ed/tests/Rcheck_reference.log b/models/ed/tests/Rcheck_reference.log index 1b7267faef5..3dd1c712c13 100644 --- a/models/ed/tests/Rcheck_reference.log +++ b/models/ed/tests/Rcheck_reference.log @@ -91,7 +91,6 @@ write.config.ED2: no visible global function definition for ‘saveXML’ write.config.ED2: no visible global function definition for ‘db.query’ write.config.ED2: no visible global function definition for ‘modifyList’ -write.config.ED2: no visible binding for global variable ‘x’ write.config.ED2 : : no visible binding for global variable ‘x’ write.config.xml.ED2: no visible global function definition for ‘data’ @@ -102,7 +101,7 @@ write.config.xml.ED2: no visible global function definition for ‘modifyList’ Undefined global functions or variables: data db.query head median modifyList pftmapping read.table saveXML - soil unzip value_type var.names write.table x + soil unzip value_type var.names write.table Consider adding importFrom("stats", "median") importFrom("utils", "data", "head", "modifyList", "read.table", From ab494cd0ccfd3d1c71a6a0e72433842b19746e28 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 26 Sep 2019 11:39:24 -0400 Subject: [PATCH 0439/2289] addressing comments --- base/workflow/R/run.write.configs.R | 13 +++++++++---- modules/uncertainty/R/ensemble.R | 4 ++-- 2 files changed, 11 insertions(+), 6 deletions(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 6c7275d3b60..d4d822e1eca 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -26,10 +26,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo posterior.files = rep(NA, length(settings$pfts)), overwrite = TRUE) { - con <- - try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - on.exit(try(PEcAn.DB::db.close(con), silent = TRUE) - ) + ## Which posterior to use? for (i in seq_along(settings$pfts)) { @@ -37,6 +34,14 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo if (is.na(posterior.files[i])) { ## otherwise, check to see if posteriorid exists if (!is.null(settings$pfts[[i]]$posteriorid)) { + + tryCatch({ + con <- PEcAn.DB::db.open(...) 
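+        # make sure the connection is closed again when the function exits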
+ on.exit(PEcAn.DB::db.close(con)) + }, error = function(e) { + PEcAn.logger::logger.severe("Connection requested, but failed to open with the following error: ", conditionMessage(e)) + }) + files <- PEcAn.DB::dbfile.check("Posterior", settings$pfts[[i]]$posteriorid, con, settings$host$name, return.all = TRUE) diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index 043d40854be..d5760b3285e 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -228,7 +228,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, #------------------------------------------------- if this is a new fresh run------------------ if (is.null(restart)){ # create an ensemble id - if (!is.null(con) & write.to.db) { + if (!is.null(con) && write.to.db) { # write ensemble first ensemble.id <- PEcAn.DB::db.query(paste0( "INSERT INTO ensembles (runtype, workflow_id) ", @@ -336,7 +336,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, # write configuration for each run of the ensemble runs <- data.frame() for (i in seq_len(settings$ensemble$size)) { - if (!is.null(con) & write.to.db) { + if (!is.null(con) && write.to.db) { paramlist <- paste("ensemble=", i, sep = "") # inserting this into the table and getting an id back run.id <- PEcAn.DB::db.query(paste0( From 05fc265804da42c2bf3b76adcc958a33753c3c9c Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 26 Sep 2019 12:23:06 -0400 Subject: [PATCH 0440/2289] another comment --- modules/uncertainty/R/ensemble.R | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index d5760b3285e..7c49893e347 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -200,6 +200,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, clean = FALSE, write.to.db = TRUE,restart=NULL) { + con <- NULL my.write.config <- paste("write.config.", model, sep = "") my.write_restart <- paste0("write_restart.", model) @@ -209,15 +210,22 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, # See if we need to write to DB write.to.db <- as.logical(settings$database$bety$write) - # Open connection to database so we can store all run/ensemble information - con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - on.exit(try(PEcAn.DB::db.close(con), silent = TRUE)) - # If we fail to connect to DB then we set to NULL - if (inherits(con, "try-error")) { - con <- NULL + if (write.to.db) { + # Open connection to database so we can store all run/ensemble information + con <- + try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + on.exit(try(PEcAn.DB::db.close(con), silent = TRUE) + ) + + # If we fail to connect to DB then we set to NULL + if (inherits(con, "try-error")) + { + con <- NULL + } } + # Get the workflow id if (!is.null(settings$workflow$id)) { From c322e0c764692a81e0610d2aec139095e419c0e6 Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Thu, 26 Sep 2019 16:57:08 -0400 Subject: [PATCH 0441/2289] Update modules/uncertainty/R/get.parameter.samples.R Co-Authored-By: Alexey Shiklomanov --- modules/uncertainty/R/get.parameter.samples.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/uncertainty/R/get.parameter.samples.R b/modules/uncertainty/R/get.parameter.samples.R index 
66076c56367..9951b47de83 100644 --- a/modules/uncertainty/R/get.parameter.samples.R +++ b/modules/uncertainty/R/get.parameter.samples.R @@ -66,7 +66,7 @@ get.parameter.samples <- function(settings, } ### Load trait mcmc data (if exists, either from MA or PDA) - if (!is.null(settings$pfts[[i]]$posteriorid) & !inherits(con, "try-error")) {# first check if there are any files associated with posterior ids + if (!is.null(settings$pfts[[i]]$posteriorid) && !inherits(con, "try-error")) {# first check if there are any files associated with posterior ids files <- PEcAn.DB::dbfile.check("Posterior", settings$pfts[[i]]$posteriorid, con, settings$host$name, return.all = TRUE) From 1905ea906dc7145bf0062e65bacdca7edf2c12ab Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 06:02:59 -0400 Subject: [PATCH 0442/2289] compile model in package --- models/basgra/R/run_BASGRA.R | 5 +- models/basgra/src/Makevars | 37 ++ models/basgra/src/model/BASGRA.f90 | 374 ++++++++++++++++++ models/basgra/src/model/environment.f90 | 273 +++++++++++++ models/basgra/src/model/parameters_plant.f90 | 37 ++ models/basgra/src/model/parameters_site.f90 | 61 +++ models/basgra/src/model/plant.f90 | 383 +++++++++++++++++++ models/basgra/src/model/resources.f90 | 73 ++++ models/basgra/src/model/set_params.f90 | 149 ++++++++ models/basgra/src/model/soil.f90 | 187 +++++++++ 10 files changed, 1575 insertions(+), 4 deletions(-) create mode 100644 models/basgra/src/Makevars create mode 100644 models/basgra/src/model/BASGRA.f90 create mode 100644 models/basgra/src/model/environment.f90 create mode 100644 models/basgra/src/model/parameters_plant.f90 create mode 100644 models/basgra/src/model/parameters_site.f90 create mode 100644 models/basgra/src/model/plant.f90 create mode 100644 models/basgra/src/model/resources.f90 create mode 100644 models/basgra/src/model/set_params.f90 create mode 100644 models/basgra/src/model/soil.f90 diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 61de604f226..91f3cab968d 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -20,6 +20,7 @@ ##' @param sitelon longitude of the site ##' ##' @export +##' @useDynLib PEcAn.BASGRA ##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# @@ -29,10 +30,6 @@ run_BASGRA <- function(binary_path, file_weather, run_params, start_date, end_da ############################# GENERAL INITIALISATION ######################## # this part corresponds to initialise_BASGRA_general.R function - # load DLL - dyn.load(binary_path) - - ################################################################################ calendar_fert <- matrix( 0, nrow=100, ncol=3 ) calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) calendar_Ndep[1,] <- c(1900, 1,0) diff --git a/models/basgra/src/Makevars b/models/basgra/src/Makevars new file mode 100644 index 00000000000..3fc60542282 --- /dev/null +++ b/models/basgra/src/Makevars @@ -0,0 +1,37 @@ +# PEcAn BASGRA -- Makevars +# Author: Istem Fer, Alexey Shiklomanov + +PKG_LIBS = $(FLIBS) + +SOURCES = \ + model/parameters_site.f90 \ + model/parameters_plant.f90 \ + model/environment.f90 \ + model/resources.f90 \ + model/soil.f90 \ + model/plant.f90 \ + model/set_params.f90 \ + model/BASGRA.f90 + + +OBJECTS = \ + model/parameters_site.o \ + model/parameters_plant.o \ + model/environment.o \ + model/resources.o \ + model/soil.o \ + model/plant.o \ + model/set_params.o \ + model/BASGRA.o + + +.PHONY: all clean + +all : $(SHLIB) + +$(SHLIB) : 
$(OBJECTS) + +$(OBJECTS) : $(SOURCES) + +clean : + rm -f $(OBJECTS) *.mod model/*.mod *.so *.o symbols.rds \ No newline at end of file diff --git a/models/basgra/src/model/BASGRA.f90 b/models/basgra/src/model/BASGRA.f90 new file mode 100644 index 00000000000..264239eefa5 --- /dev/null +++ b/models/basgra/src/model/BASGRA.f90 @@ -0,0 +1,374 @@ +subroutine BASGRA( PARAMS, MATRIX_WEATHER, & + CALENDAR_FERT, CALENDAR_NDEP, DAYS_HARVEST, & + NDAYS, NOUT, & + y) +!------------------------------------------------------------------------------- +! This is the BASic GRAss model originally written in MATLAB/Simulink by Marcel +! van Oijen, Mats Hoglind, Stig Morten Thorsen and Ad Schapendonk. +! 2011-07-13: Translation to FORTRAN by David Cameron and Marcel van Oijen. +! 2014-03-17: Extra category of tillers added +! 2014-04-03: Vernalization added +! 2014-04-03: Lower limit of temperature-driven leaf senescence no longer zero +! 2015-03-26: Introducing N, following example of BASFOR for the soil part. +! Plant N is restricted to LV, ST, RT, not present in STUB and RES. +! 2015-09-24: More N-processes added +! 2016-12-09: Digestibility and fibre content added +!------------------------------------------------------------------------------- + +use parameters_site +use parameters_plant +use environment +use resources +use soil +use plant +implicit none + +integer, dimension(100,2) :: DAYS_HARVEST +real :: PARAMS(120) +#ifdef weathergen + integer, parameter :: NWEATHER = 7 +#else + integer, parameter :: NWEATHER = 8 +#endif +real :: MATRIX_WEATHER(NMAXDAYS,NWEATHER) +real , dimension(100,3) :: CALENDAR_FERT, CALENDAR_NDEP +integer, dimension(100,2) :: DAYS_FERT , DAYS_NDEP +real , dimension(100) :: NFERTV , NDEPV + +integer :: day, doy, i, NDAYS, NOUT, year +real :: y(NDAYS,NOUT) + +! State variables plants +real :: CLV, CLVD, CRES, CRT, CST, CSTUB, LAI, LT50, PHEN +real :: ROOTD, TILG1, TILG2, TILV +integer :: VERN +real :: YIELD, YIELD_LAST, YIELD_TOT +real :: NRT, NSH + +! Output variables constructed from plant state variables +real :: DM, DMLV, DMRES, DMSH, DMST, DMSTUB, DM_MAX, TILTOT +real :: NSH_DMSH +real :: ENERGY_DM, F_ASH, F_PROTEIN, PROTEIN + +! State variables soil +real :: CLITT, CSOMF, CSOMS, DRYSTOR, Fdepth +real :: NLITT, NSOMF, NSOMS, NMIN, O2, Sdepth +real :: TANAER, WAL, WAPL, WAPS, WAS, WETSTOR +real :: Nfert_TOT + +! Intermediate and rate variables +real :: DeHardRate, DLAI, DLV, DPHEN, DRT, DSTUB, dTANAER, DTILV, EVAP, EXPLOR +real :: Frate, FREEZEL, FREEZEPL, GLAI, GLV, GPHEN, GRES, GRT, GST, GSTUB, GTILV, HardRate +real :: HARVLA, HARVLV, HARVPH, HARVRE, HARVST, HARVTILG2, INFIL, IRRIG, O2IN +real :: O2OUT, PackMelt, poolDrain, poolInfil, Psnow, reFreeze, RESMOB +real :: RGRTVG1, RROOTD, SnowMelt, THAWPS, THAWS, TILVG1, TILG1G2, TRAN, Wremain +real :: NCSHI, NCGSH, NCDSH, NCHARVSH, GNSH, DNSH, HARVNSH, GNRT, DNRT +real :: NSHmob, NSHmobsoil, Nupt + +real :: Ndep, Nfert + +real :: F_DIGEST_DM, F_DIGEST_DMSH, F_DIGEST_LV, F_DIGEST_ST, F_DIGEST_WALL +real :: F_WALL_DM , F_WALL_DMSH , F_WALL_LV , F_WALL_ST + +! Parameters +call set_params(PARAMS) + +! Calendar & weather +YEARI = MATRIX_WEATHER(:,1) +DOYI = MATRIX_WEATHER(:,2) +GRI = MATRIX_WEATHER(:,3) +TMMNI = MATRIX_WEATHER(:,4) +TMMXI = MATRIX_WEATHER(:,5) +#ifdef weathergen + RAINI = MATRIX_WEATHER(:,6) + PETI = MATRIX_WEATHER(:,7) +#else + VPI = MATRIX_WEATHER(:,6) + RAINI = MATRIX_WEATHER(:,7) + WNI = MATRIX_WEATHER(:,8) +#endif + +! 
Calendars +DAYS_FERT = CALENDAR_FERT (:,1:2) +DAYS_NDEP = CALENDAR_NDEP (:,1:2) +NFERTV = CALENDAR_FERT (:,3) * NFERTMULT +NDEPV = CALENDAR_NDEP (:,3) + +! Initial constants for plant state variables +CLV = CLVI +CLVD = CLVDI +CRES = CRESI +CRT = CRTI +CST = CSTI +CSTUB = CSTUBI +LAI = LAII +LT50 = LT50I +NRT = NCR * CRTI + NCSHI = NCSHMAX * (1-EXP(-K*LAII)) / (K*LAII) +NSH = NCSHI * (CLVI+CSTI) +PHEN = PHENI +ROOTD = ROOTDM +TILG1 = TILTOTI * FRTILGI * FRTILGG1I +TILG2 = TILTOTI * FRTILGI * (1-FRTILGG1I) +TILV = TILTOTI * (1. - FRTILGI) +VERN = 1 +YIELD = YIELDI +YIELD_LAST = YIELDI +YIELD_TOT = YIELDI + +Nfert_TOT = 0 +DM_MAX = 0 + +! Initial constants for soil state variables +CLITT = CLITT0 +CSOMF = CSOM0 * FCSOMF0 +CSOMS = CSOM0 * (1-FCSOMF0) +DRYSTOR = DRYSTORI +Fdepth = FdepthI +NLITT = CLITT0 / CNLITT0 +NSOMF = (CSOM0 * FCSOMF0) / CNSOMF0 +NSOMS = (CSOM0 * (1-FCSOMF0)) / CNSOMS0 +NMIN = NMIN0 +O2 = FGAS * ROOTDM * FO2MX * 1000./22.4 +Sdepth = SDEPTHI +TANAER = TANAERI +WAL = 1000. * ROOTDM * WCI +WAPL = WAPLI +WAPS = WAPSI +WAS = WASI +WETSTOR = WETSTORI + +do day = 1, NDAYS + + ! Environment + call DDAYL (doy) + call set_weather_day(day,DRYSTOR, year,doy) + call SoilWaterContent(Fdepth,ROOTD,WAL) + call Physics (DAVTMP,Fdepth,ROOTD,Sdepth,WAS, Frate) + call MicroClimate (doy,DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR, & + FREEZEPL,INFIL,PackMelt,poolDrain,poolInfil, & + pSnow,reFreeze,SnowMelt,THAWPS,wRemain) +#ifdef weathergen + call PEVAPINPUT (LAI) +#else + call PENMAN (LAI) +#endif + ! Resources + call Light (DAYL,DTR,LAI,PAR) + call EVAPTRTRF (Fdepth,PEVAP,PTRAN,ROOTD,WAL, EVAP,TRAN) + call ROOTDG (Fdepth,ROOTD,WAL, EXPLOR,RROOTD) + ! Soil + call FRDRUNIR (EVAP,Fdepth,Frate,INFIL,poolDRAIN,ROOTD,TRAN,WAL,WAS, & + FREEZEL,IRRIG,THAWS) + call O2status (O2,ROOTD) + ! Plant + call Harvest (CLV,CRES,CST,year,doy,DAYS_HARVEST,LAI,PHEN,TILG1,TILG2,TILV, & + GSTUB,HARVLA,HARVLV,HARVPH,HARVRE,HARVST,HARVTILG2) + call Biomass (CLV,CRES,CST) + call Phenology (DAYL,PHEN, DPHEN,GPHEN) + call Foliage1 + call LUECO2TM (PARAV) + call HardeningSink (CLV,DAYL,doy,LT50,Tsurf) + call Growth (LAI,NSH,NMIN,CLV,CRES,CST,PARINT,TILG1,TILG2,TILV,TRANRF, & + GLV,GRES,GRT,GST,RESMOB,NSHmob) + call PlantRespiration(FO2,RESPHARD) + call Senescence (CLV,CRT,CSTUB,LAI,LT50,PERMgas,TANAER,TILV,Tsurf, & + DeHardRate,DLAI,DLV,DRT,DSTUB,dTANAER,DTILV,HardRate) + call Foliage2 (DAYL,GLV,LAI,TILV,TILG1,TRANRF,Tsurf,VERN, & + GLAI,GTILV,TILVG1,TILG1G2) + ! Soil 2 + call O2fluxes (O2,PERMgas,ROOTD,RplantAer, O2IN,O2OUT) + call N_fert (year,doy,DAYS_FERT,NFERTV, Nfert) + call N_dep (year,doy,DAYS_NDEP,NDEPV, Ndep) + call CNsoil (ROOTD,RWA,WFPS,WAL,GRT,CLITT,CSOMF,NLITT,NSOMF,NSOMS,NMIN,CSOMS) + call Nplant (NSHmob,CLV,CRT,CST,DLAI,DLV,DRT,GLAI,GLV,GRT,GST, & + HARVLA,HARVLV,HARVST,LAI,NRT,NSH, & + DNRT,DNSH,GNRT,GNSH,HARVNSH, & + NCDSH,NCGSH,NCHARVSH,NSHmobsoil,Nupt) + +! Extra variables + DMLV = CLV / 0.45 ! Leaf dry matter; g m-2 + DMST = CST / 0.45 ! Stem dry matter; g m-2 + DMSTUB = CSTUB / 0.45 + DMRES = CRES / 0.40 + DMSH = DMLV + DMST + DMRES + DM = DMSH + DMSTUB + TILTOT = TILG1 + TILG2 + TILV + + NSH_DMSH = NSH / DMSH ! N content in shoot DM; g N g-1 DM + + PROTEIN = NSH * 6.25 ! Crude protein; g m-2 + F_PROTEIN = PROTEIN / DMSH ! 
Crude protein in shoot dry matter; g g-1 + F_ASH = 0.069 + 0.14*F_PROTEIN + + call Digestibility (DM,DMLV,DMRES,DMSH,DMST,DMSTUB,PHEN, & + F_WALL_DM,F_WALL_DMSH,F_WALL_LV,F_WALL_ST, & + F_DIGEST_DM,F_DIGEST_DMSH,F_DIGEST_LV,F_DIGEST_ST,F_DIGEST_WALL) + + !================ + ! Outputs + !================ + y(day, 1) = year + (doy-0.5)/366 ! "Time" = Decimal year (approximation) + y(day, 2) = year + y(day, 3) = doy + y(day, 4) = DAVTMP + + y(day, 5) = CLV + y(day, 6) = CLVD + y(day, 7) = YIELD_LAST + y(day, 8) = CRES + y(day, 9) = CRT + y(day,10) = CST + y(day,11) = CSTUB + y(day,12) = DRYSTOR + y(day,13) = Fdepth + y(day,14) = LAI + y(day,15) = LT50 + y(day,16) = O2 + y(day,17) = PHEN + y(day,18) = ROOTD + y(day,19) = Sdepth + y(day,20) = TANAER + y(day,21) = TILG1 + TILG2 ! "TILG" + y(day,22) = TILV + y(day,23) = WAL + y(day,24) = WAPL + y(day,25) = WAPS + y(day,26) = WAS + y(day,27) = WETSTOR + + y(day,28) = DM ! "DM" = Aboveground dry matter in g m-2 + y(day,29) = DMRES / DM ! "RES" = Reserves in g g-1 aboveground dry matter + y(day,30) = LERG ! + y(day,31) = NELLVG ! + y(day,32) = RLEAF ! + y(day,33) = LAI / DMLV ! "SLA" = m2 leaf area g-1 dry matter leaves + y(day,34) = TILTOT ! "TILTOT" = Total tiller number in # m-2 + y(day,35) = (TILG1+TILG2) / TILTOT ! "FRTILG" = Fraction of tillers that is generative + y(day,36) = TILG1 / TILTOT ! "FRTILG1" = Fraction of tillers that is in TILG1 + y(day,37) = TILG2 / TILTOT ! "FRTILG2" = Fraction of tillers that is in TILG2 + y(day,38) = RDRT + y(day,39) = VERN + + y(day,40) = CLITT ! g C m-2 + y(day,41) = CSOMF ! g C m-2 + y(day,42) = CSOMS ! g C m-2 + y(day,43) = NLITT ! g N m-2 + y(day,44) = NSOMF ! g N m-2 + y(day,45) = NSOMS ! g N m-2 + y(day,46) = NMIN ! g N m-2 + y(day,47) = Rsoil ! g C m-2 d-1 + y(day,48) = NemissionN2O ! g N m-2 d-1 + y(day,49) = NemissionNO ! g N m-2 d-1 + y(day,50) = Nfert ! g N m-2 d-1 + y(day,51) = Ndep ! g N m-2 d-1 + y(day,52) = RWA ! - + y(day,53) = NSH ! g N m-2 + y(day,54) = GNSH + y(day,55) = DNSH + y(day,56) = HARVNSH + y(day,57) = NSH / (CLV+CST) ! - "NCSH" + y(day,58) = NCGSH + y(day,59) = NCDSH + y(day,60) = NCHARVSH + y(day,61) = fNgrowth + y(day,62) = RGRTV + y(day,63) = FSPOT + y(day,64) = RESNOR + y(day,65) = TV2TIL + y(day,66) = NSHNOR + y(day,67) = KNMAX + y(day,68) = KN + + y(day,69) = DMLV + y(day,70) = DMST + y(day,71) = NSH_DMSH + y(day,72) = Nfert_TOT + y(day,73) = YIELD_TOT + y(day,74) = DM_MAX + + y(day,75) = F_PROTEIN + y(day,76) = F_ASH + + y(day,77) = F_WALL_DM + y(day,78) = F_WALL_DMSH + y(day,79) = F_WALL_LV + y(day,80) = F_WALL_ST + y(day,81) = F_DIGEST_DM + y(day,82) = F_DIGEST_DMSH + y(day,83) = F_DIGEST_LV + y(day,84) = F_DIGEST_ST + y(day,85) = F_DIGEST_WALL + + y(day,86) = RDRS + y(day,87) = RAIN + y(day,88) = Nleaching + + y(day,89) = NSHmob + y(day,90) = NSHmobsoil + y(day,91) = Nfixation + y(day,92) = Nupt + y(day,93) = Nmineralisation + + y(day,94) = NSOURCE + y(day,95) = NSINK + + y(day,96) = NRT + y(day,97) = NRT / CRT + + y(day,98) = rNLITT + y(day,99) = rNSOMF + + y(day,100) = DAYL + +! 
State equations plants + CLV = CLV + GLV - DLV - HARVLV + CLVD = CLVD + DLV + CRES = CRES + GRES - RESMOB - HARVRE + CRT = CRT + GRT - DRT + CST = CST + GST - HARVST + CSTUB = CSTUB + GSTUB - DSTUB + LAI = LAI + GLAI - DLAI - HARVLA + LT50 = LT50 + DeHardRate - HardRate + PHEN = min(1., PHEN + GPHEN - DPHEN - HARVPH) + ROOTD = ROOTD + RROOTD + TILG1 = TILG1 + TILVG1 - TILG1G2 + TILG2 = TILG2 + TILG1G2 - HARVTILG2 + TILV = TILV + GTILV - TILVG1 - DTILV + if((LAT>0).AND.(doy==305)) VERN = 0 + if((LAT<0).AND.(doy==122)) VERN = 0 + if(DAVTMP0) YIELD_LAST = YIELD + YIELD_TOT = YIELD_TOT + YIELD + + NRT = NRT + GNRT - DNRT + NSH = NSH + GNSH - DNSH - HARVNSH - NSHmob + + Nfert_TOT = Nfert_TOT + Nfert + DM_MAX = max( DM, DM_MAX ) + +! State equations soil + CLITT = CLITT + DLV + DSTUB - rCLITT - dCLITT + CSOMF = CSOMF + DRT + dCLITTsomf - rCSOMF - dCSOMF + CSOMS = CSOMS + dCSOMFsoms - dCSOMS + DRYSTOR = DRYSTOR + reFreeze + Psnow - SnowMelt + Fdepth = Fdepth + Frate + NLITT = NLITT + DNSH - rNLITT - dNLITT + NSOMF = NSOMF + DNRT + NLITTsomf - rNSOMF - dNSOMF + NSOMS = NSOMS + NSOMFsoms - dNSOMS + NMIN = NMIN + Ndep + Nfert + Nmineralisation + Nfixation + NSHmobsoil & + - Nupt - Nleaching - Nemission + NMIN = max(0.,NMIN) + O2 = O2 + O2IN - O2OUT + Sdepth = Sdepth + Psnow/RHOnewSnow - PackMelt + TANAER = TANAER + dTANAER + WAL = WAL + THAWS - FREEZEL + poolDrain + INFIL +EXPLOR+IRRIG-DRAIN-RUNOFF-EVAP-TRAN + WAPL = WAPL + THAWPS - FREEZEPL + poolInfil - poolDrain + WAPS = WAPS - THAWPS + FREEZEPL + WAS = WAS - THAWS + FREEZEL + WETSTOR = WETSTOR + Wremain - WETSTOR + +enddo + +end \ No newline at end of file diff --git a/models/basgra/src/model/environment.f90 b/models/basgra/src/model/environment.f90 new file mode 100644 index 00000000000..d9ca5616a81 --- /dev/null +++ b/models/basgra/src/model/environment.f90 @@ -0,0 +1,273 @@ +module environment + +use parameters_site +use parameters_plant +implicit none +integer, parameter :: NMAXDAYS = 10000 +real :: GR, TMMN, TMMX, VP, WN +real :: YEARI(NMAXDAYS), DOYI(NMAXDAYS) , RAINI(NMAXDAYS), GRI(NMAXDAYS) +real :: TMMNI(NMAXDAYS), TMMXI(NMAXDAYS), VPI(NMAXDAYS) , WNI(NMAXDAYS) + +#ifdef weathergen +real :: PETI(NMAXDAYS) +#endif + +real :: DAVTMP,DAYL,DTR,PAR,PERMgas,PEVAP,poolRUNOFF,PTRAN,pWater,RAIN,RNINTC +real :: runOn,StayWet,WmaxStore,Wsupply + +#ifdef weathergen +real :: PET +#endif + +Contains + +#ifdef weathergen + Subroutine set_weather_day(day,DRYSTOR, year,doy) + integer :: day, doy, year + real :: DRYSTOR + year = YEARI(day) ! day of the year (d) + doy = DOYI(day) ! day of the year (d) + RAIN = RAINI(day) ! precipitation (mm d-1) + GR = GRI(day) ! irradiation (MJ m-2 d-1) + TMMN = TMMNI(day) ! minimum temperature (degrees Celsius) + TMMX = TMMXI(day) ! maximum temperature (degrees Celsius) + DAVTMP = (TMMN + TMMX)/2.0 + DTR = GR * exp(-KSNOW*DRYSTOR) + PAR = 0.5*4.56*DTR + PET = PETI(day) + end Subroutine set_weather_day +#else + Subroutine set_weather_day(day,DRYSTOR, year,doy) + integer :: day, doy, year + real :: DRYSTOR + year = YEARI(day) ! day of the year (d) + doy = DOYI(day) ! day of the year (d) + RAIN = RAINI(day) ! precipitation (mm d-1) + GR = GRI(day) ! irradiation (MJ m-2 d-1) + TMMN = TMMNI(day) ! minimum (or average) temperature (degrees Celsius) + TMMX = TMMXI(day) ! maximum (or average) temperature (degrees Celsius) + VP = VPI(day) ! vapour pressure (kPa) + WN = WNI(day) ! 
mean wind speed (m s-1) + DAVTMP = (TMMN + TMMX)/2.0 + DTR = GR * exp(-KSNOW*DRYSTOR) + PAR = 0.5*4.56*DTR + end Subroutine set_weather_day +#endif + +Subroutine MicroClimate(doy,DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR, & + FREEZEPL,INFIL,PackMelt,poolDrain,poolInfil,pSnow,reFreeze,SnowMelt,THAWPS,wRemain) + integer :: doy + real :: DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR + real :: FREEZEPL,INFIL,PackMelt,poolDrain,poolInfil,pSnow,reFreeze,SnowMelt,THAWPS,wRemain + call RainSnowSurfacePool(doy,DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR, & + FREEZEPL,INFIL,PackMelt,poolDrain,poolInfil,pSnow,reFreeze,SnowMelt,THAWPS,Wremain) + if (WAPS == 0.) then + PERMgas = 1. + else + PERMgas = 0. + end if +end Subroutine MicroClimate + + Subroutine RainSnowSurfacePool(doy,DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR, & + FREEZEPL,INFIL,PackMelt,poolDrain,poolInfil,pSnow,reFreeze,SnowMelt,THAWPS,Wremain) + integer :: doy + real :: DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR + real :: FREEZEPL,INFIL,PackMelt,poolDrain,poolInfil,pSnow,reFreeze,SnowMelt,THAWPS,Wremain + real :: PINFIL + call precForm(Psnow) + call WaterSnow(doy,DRYSTOR,Psnow,Sdepth,WETSTOR, PackMelt,reFreeze,SnowMelt,Wremain) + RNINTC = min( Wsupply, 0.25*LAI ) + PINFIL = Wsupply - RNINTC + call INFILrunOn(Fdepth,PINFIL, INFIL) + call SurfacePool(Fdepth,Frate,Tsurf,WAPL,WAPS, & + FREEZEPL,poolDrain,poolInfil,THAWPS) + end Subroutine RainSnowSurfacePool + + Subroutine precForm(Psnow) + real :: Psnow + if (DAVTMP > TrainSnow) then + Pwater = RAIN + Psnow = 0. + else + Pwater = 0. + Psnow = RAIN + end if + end Subroutine precForm + + Subroutine WaterSnow(doy,DRYSTOR,Psnow,Sdepth,WETSTOR, & + PackMelt,reFreeze,SnowMelt,Wremain) + integer :: doy + real :: DRYSTOR,Psnow,Sdepth,WETSTOR + real :: PackMelt,reFreeze,SnowMelt,Wremain + real :: DENSITY + call SnowMeltWmaxStore (doy,DRYSTOR, SnowMelt) + call WETSTORdynamics (WETSTOR, reFreeze) + call LiquidWaterDistribution(SnowMelt, Wremain) + call SnowDensity (DRYSTOR,Sdepth,WETSTOR, DENSITY) + call SnowDepthDecrease (DENSITY,Sdepth,SnowMelt, PackMelt) + end Subroutine WaterSnow + + Subroutine SnowMeltWmaxStore(doy,DRYSTOR, SnowMelt) + integer :: doy + real :: DRYSTOR + real :: SnowMelt + real :: Melt +! Melt = Bias + Ampl * sin( Freq * (doy-(174.-91.)) ) + Melt = Bias + Ampl * DAYL + if (DAVTMP > TmeltFreeze) then + SnowMelt = max( 0., min( DRYSTOR/DELT, Melt*(DAVTMP-TmeltFreeze) )) + else + SnowMelt = 0. + end if + WmaxStore = DRYSTOR * SWret + end Subroutine SnowMeltWmaxStore + + Subroutine WETSTORdynamics(WETSTOR, reFreeze) + real :: WETSTOR + real :: reFreeze + real :: reFreezeMax + reFreezeMax = SWrf * (TmeltFreeze-DAVTMP) + if ((WETSTOR>0).and.(DAVTMP 0.) then + DENSITY = min(480., SWE/Sdepth) + else + DENSITY = 0. + end if + end Subroutine SnowDensity + + Subroutine SnowDepthDecrease(DENSITY,Sdepth,SnowMelt, PackMelt) + real :: DENSITY,Sdepth,SnowMelt + real :: PackMelt + if (Sdepth > 0.) then + PackMelt = max(0.,min( Sdepth/DELT, Sdepth*RHOpack - SnowMelt/DENSITY )) + else + PackMelt = 0. + end if + end Subroutine SnowDepthDecrease + + Subroutine INFILrunOn(Fdepth,PINFIL, INFIL) + real :: Fdepth,PINFIL + real :: INFIL + if (Fdepth <= poolInfilLimit) then + INFIL = PINFIL + else + INFIL = 0. 
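+      ! Illustrative note (assumption, not in upstream BASGRA): with the
+      ! default poolInfilLimit = 0.2 m from parameters_site.f90, a frost
+      ! depth of e.g. Fdepth = 0.3 m blocks infiltration entirely here, so
+      ! all of PINFIL is diverted to runOn below and feeds the surface pool.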
+ end if + runOn = PINFIL - INFIL + end Subroutine INFILrunOn + + Subroutine SurfacePool(Fdepth,Frate,Tsurf,WAPL,WAPS, & + FREEZEPL,poolDrain,poolInfil,THAWPS) + real :: Fdepth,Frate,Tsurf,WAPL,WAPS + real :: FREEZEPL,poolDrain,poolInfil,THAWPS + real :: eta,PIrate,poolVolRemain,poolWavail + poolVolRemain = max(0., WpoolMax - WAPL - WAPS) + poolInfil = min(runOn,poolVolRemain) + poolRUNOFF = runOn - poolInfil + poolWavail = poolInfil + WAPL/DELT + if (poolWavail == 0.) then + poolDrain = 0. + else if (Fdepth <= poolInfilLimit) then + poolDrain = poolWavail + else + poolDrain = max(0.,min( -Frate*1000., poolWavail )) + end if + if ((Tsurf>0.).and.(WAPL==0).and.(WAPS==0.)) then + PIrate = 0. + else + eta = LAMBDAice / ( RHOwater * LatentHeat ) ! [m2 C-1 day-1] + PIrate = (sqrt( max(0.,(0.001*WAPS)**2 - 2.*eta*Tsurf*DELT)))/DELT - (0.001*WAPS)/DELT ! [m day-1] + end if + if (PIrate < 0.) then + FREEZEPL = 0. + THAWPS = min( WAPS/DELT , -PIrate*1000. ) + else + FREEZEPL = max( 0.,min( poolInfil + WAPL/DELT - poolDrain*DELT, PIrate*1000. )) + THAWPS = 0. + end if + end Subroutine SurfacePool + +Subroutine DDAYL(doy) +!============================================================================= +! Calculate day length (d d-1) from Julian day and latitude (LAT, degN) +! Author - Marcel van Oijen (CEH-Edinburgh) & Simon Woodward (DairyNZ) +!============================================================================= + integer :: doy ! (d) + real :: DEC, DECLIM, DECC, RAD + RAD = pi / 180. ! (radians deg-1) + DEC = -asin (sin (23.45*RAD)*cos (2.*pi*(doy+10.)/365.)) ! (radians) + if (LAT==0) then + DECLIM = pi/2. + else + DECLIM = abs( atan(1./tan(RAD*LAT)) ) ! (radians) + end if + DECC = max(-DECLIM,min(DECLIM, DEC )) ! (radians) + DAYL = 0.5 * ( 1. + 2. * asin(tan(RAD*LAT)*tan(DECC)) / pi ) ! (d d-1) +end Subroutine DDAYL + +#ifdef weathergen + Subroutine PEVAPINPUT(LAI) + real :: LAI + PEVAP = exp(-0.5*LAI) * PET ! (mm d-1) + PTRAN = (1.-exp(-0.5*LAI)) * PET ! (mm d-1) + PTRAN = max( 0., PTRAN-0.5*RNINTC ) ! (mm d-1) + end Subroutine PEVAPINPUT +#else + Subroutine PENMAN(LAI) + !============================================================================= + ! Calculate potential rates of evaporation and transpiration (mm d-1) + ! Inputs: LAI (m2 m-2), DTR (MJ GR m-2 d-1), RNINTC (mm d-1) + ! Inputs not in header: VP (kPa), WN (m s-1) + ! Outputs: PEVAP & PTRAN (mm d-1) + ! Author - Marcel van Oijen (CEH-Edinburgh) + !============================================================================= + real :: LAI + real :: BBRAD, BOLTZM, DTRJM2, LHVAP, NRADC, NRADS + real :: PENMD, PENMRC, PENMRS, PSYCH, RLWN, SLOPE, SVP, WDF + DTRJM2 = DTR * 1.E6 ! (J GR m-2 d-1) + BOLTZM = 5.668E-8 ! (J m-2 s-1 K-4) + LHVAP = 2.4E6 ! (J kg-1) + PSYCH = 0.067 ! (kPA degC-1)) + BBRAD = BOLTZM * (DAVTMP+273.)**4 * 86400. ! (J m-2 d-1) + SVP = 0.611 * exp(17.4 * DAVTMP / (DAVTMP + 239.)) ! (kPa) + SLOPE = 4158.6 * SVP / (DAVTMP + 239.)**2 ! (kPA degC-1) + RLWN = BBRAD * max(0.,0.55*(1.-VP/SVP)) ! (J m-2 d-1) + NRADS = DTRJM2 * (1.-0.15) - RLWN ! (J m-2 d-1) + NRADC = DTRJM2 * (1.-0.25) - RLWN ! (J m-2 d-1) + PENMRS = NRADS * SLOPE/(SLOPE+PSYCH) ! (J m-2 d-1) + PENMRC = NRADC * SLOPE/(SLOPE+PSYCH) ! (J m-2 d-1) + WDF = 2.63 * (1.0 + 0.54 * WN) ! (kg m-2 d-1 kPa-1) + PENMD = LHVAP * WDF * (SVP-VP) * PSYCH/(SLOPE+PSYCH) ! (J m-2 d-1) + PEVAP = exp(-0.5*LAI) * (PENMRS + PENMD) / LHVAP ! (mm d-1) + PTRAN = (1.-exp(-0.5*LAI)) * (PENMRC + PENMD) / LHVAP ! (mm d-1) + PTRAN = max( 0., PTRAN-0.5*RNINTC ) ! 
(mm d-1) + end Subroutine PENMAN +#endif + +end module environment + + + + + diff --git a/models/basgra/src/model/parameters_plant.f90 b/models/basgra/src/model/parameters_plant.f90 new file mode 100644 index 00000000000..ed57cfcb496 --- /dev/null +++ b/models/basgra/src/model/parameters_plant.f90 @@ -0,0 +1,37 @@ +module parameters_plant + +implicit none + +! Initial constants + real :: LOG10CLVI, LOG10CRESI, LOG10CRTI, CSTI, LOG10LAII + real :: CLVI, CRESI, CRTI, LAII + real :: PHENI, TILTOTI, FRTILGI, FRTILGG1I + +! Initial constants, continued + real, parameter :: CLVDI = 0. + real, parameter :: YIELDI = 0. + real, parameter :: CSTUBI = 0. + real :: LT50I + +! Process parameters + real :: CLAIV , COCRESMX, CSTAVM, DAYLB , DAYLG1G2, DAYLP , DLMXGE, FSLAMIN + real :: FSMAX , HAGERE , K , KLUETILG, LAICR , LAIEFT , LAITIL, LFWIDG + real :: LFWIDV , NELLVM , PHENCR, PHY , RDRSCO , RDRSMX , RDRTEM, RGENMX + real :: RGRTG1G2, ROOTDM , RRDMAX, RUBISC , SHAPE , SIMAX1T, SLAMAX, SLAMIN + real :: TBASE , TCRES , TOPTGE, TRANCO , YG + real :: RDRTMIN , TVERN + real :: NCSHMAX , NCR + real :: RDRROOT , RDRSTUB + real :: FNCGSHMIN, TCNSHMOB, TCNUPT + + real :: F_WALL_LV_FMIN, F_WALL_LV_MAX, F_WALL_ST_FMIN, F_WALL_ST_MAX + real :: F_DIGEST_WALL_FMIN, F_DIGEST_WALL_MAX + +! Process parameters, continued + real :: Dparam, Hparam, KRDRANAER, KRESPHARD, KRSR3H + real :: LDT50A, LDT50B, LT50MN, LT50MX, RATEDMX + real :: reHardRedDay + real, parameter :: reHardRedEnd = 91. ! If LAT<0, this is changed to 91+183 in plant.f90 + real :: THARDMX, TsurfDiff + +end module parameters_plant diff --git a/models/basgra/src/model/parameters_site.f90 b/models/basgra/src/model/parameters_site.f90 new file mode 100644 index 00000000000..6f84eb55798 --- /dev/null +++ b/models/basgra/src/model/parameters_site.f90 @@ -0,0 +1,61 @@ +module parameters_site + +! Simulation period and time step + real, parameter :: DELT = 1.0 + +! Geography + real :: LAT + +! Atmospheric conditions + real, parameter :: CO2A = 350 + +! Soil + real, parameter :: DRATE = 50 + real :: WCI + real :: FWCAD, FWCWP, FWCFC, FWCWET, WCST + real :: WCAD, WCWP, WCFC, WCWET + real, parameter :: KNFIX = 0, RRUNBULK = 0.05 + +! Soil - WINTER PARAMETERS + real :: FGAS, FO2MX, gamma, KRTOTAER, KSNOW + real, parameter :: LAMBDAice = 1.9354e+005 + real :: LAMBDAsoil + real, parameter :: LatentHeat = 335000. + real, parameter :: poolInfilLimit = 0.2 + real :: RHOnewSnow, RHOpack + real, parameter :: RHOwater = 1000. + real :: SWret, SWrf, TmeltFreeze, TrainSnow + real :: WpoolMax + +! Soil initial values (parameters) +real :: CLITT0, CSOM0, CNLITT0, CNSOMF0, CNSOMS0, FCSOMF0, NMIN0 +real :: FLITTSOMF, FSOMFSOMS, RNLEACH, KNEMIT +real :: TCLITT, TCSOMF, TCSOMS, TMAXF, TSIGMAF, RFN2O +real :: WFPS50N2O + +! Soil initial constants + real, parameter :: DRYSTORI = 0. + real, parameter :: FdepthI = 0. + real, parameter :: SDEPTHI = 0. + real, parameter :: TANAERI = 0. + real, parameter :: WAPLI = 0. + real, parameter :: WAPSI = 0. + real, parameter :: WASI = 0. + real, parameter :: WETSTORI = 0. + +! Management: harvest dates and irrigation + integer, dimension(3) :: doyHA + real, parameter :: IRRIGF = 0. + +! Mathematical constants + real, parameter :: pi = 3.141592653589793 +! real, parameter :: Freq = 2.*pi / 365. + real, parameter :: Kmin = 4. + real, parameter :: Ampl = 0.625 + real, parameter :: Bias = Kmin + Ampl + +! 
SA parameters + real :: NFERTMULT + +end module parameters_site + diff --git a/models/basgra/src/model/plant.f90 b/models/basgra/src/model/plant.f90 new file mode 100644 index 00000000000..48fc23cd65a --- /dev/null +++ b/models/basgra/src/model/plant.f90 @@ -0,0 +1,383 @@ +module plant + +use parameters_site +use parameters_plant +use environment +implicit none + +integer :: NOHARV +real :: CRESMX,DAYLGE,FRACTV,GLVSI,GSTSI,LERG,LERV,LUEMXQ,NELLVG,PHENRF,PHOT +real :: RDRFROST,RDRS,RDRT,RDRTOX,RESPGRT,RESPGSH,RESPHARD,RESPHARDSI,RESNOR,RLEAF,RplantAer,SLANEW +real :: RATEH,reHardPeriod,TV2TIL +real :: fNCgrowth,fNgrowth,FSPOT,KN,KNMAX,NSHNOR,RGRTV +real :: NSHK +real :: GLAISI, SINK1T +real :: NSOURCE, NSINK + +Contains + +Subroutine Harvest(CLV,CRES,CST,year,doy,DAYS_HARVEST,LAI,PHEN,TILG1,TILG2,TILV, & + GSTUB,HARVLA,HARVLV,HARVPH,HARVRE,HARVST,HARVTILG2) + integer :: doy,year + integer,dimension(100,2) :: DAYS_HARVEST + real :: CLV, CRES, CST, LAI, PHEN, TILG1, TILG2, TILV + real :: GSTUB, HARVLV, HARVLA, HARVRE, HARVTILG2, HARVST, HARVPH + real :: CLAI, HARVFR, TV1 + integer :: HARV,i + + HARV = 0 + NOHARV = 1 + do i=1,100 + if ( (year==DAYS_HARVEST(i,1)) .and. (doy==DAYS_HARVEST(i,2)) ) then + HARV = 1 + NOHARV = 0 + end if + end do + FRACTV = (TILV+TILG1) / (TILV+TILG1+TILG2) + CLAI = FRACTV * CLAIV + if (LAI <= CLAI) then + HARVFR = 0.0 + else + HARVFR = 1.0 - CLAI/LAI + end if + HARVLA = (HARV * LAI * HARVFR) / DELT + HARVLV = (HARV * CLV * HARVFR) / DELT + HARVPH = (HARV * PHEN ) / DELT + TV1 = (HARVFR * FRACTV) + (1-FRACTV)*HAGERE + HARVRE = (HARV * TV1 * CRES ) / DELT + HARVST = (HARV * CST ) / DELT + GSTUB = HARVST * (1-HAGERE) + HARVTILG2 = (HARV * TILG2 ) / DELT +end Subroutine Harvest + +Subroutine Biomass(CLV,CRES,CST) + real :: CLV, CRES, CST + CRESMX = COCRESMX*(CLV + CRES + CST) + RESNOR = max(0.,min(1., CRES/CRESMX )) +end Subroutine Biomass + +Subroutine Phenology(DAYL,PHEN, DPHEN,GPHEN) + real :: DAYL,PHEN + real :: DPHEN,GPHEN + GPHEN = max(0., (DAVTMP-0.01)*0.000144*24. * (min(DAYLP,DAYL)-0.24) ) + DPHEN = 0. + if (DAYL < DAYLB) DPHEN = PHEN / DELT + PHENRF = (1 - PHEN)/(1 - PHENCR) + if (PHENRF > 1.0) PHENRF = 1.0 + if (PHENRF < 0.0) PHENRF = 0.0 + DAYLGE = max(0.,min(1., (DAYL - DAYLB)/(DLMXGE - DAYLB) )) +end Subroutine Phenology + +Subroutine Foliage1 + real :: EFFTMP, SLAMIN + EFFTMP = max(TBASE, DAVTMP) + LERV = max(0., (-0.76 + 0.52*EFFTMP)/1000. ) + LERG = DAYLGE * max(0., (-5.46 + 2.80*EFFTMP)/1000. ) + SLAMIN = SLAMAX * FSLAMIN + SLANEW = SLAMAX - RESNOR*(SLAMAX-SLAMIN) +end Subroutine Foliage1 + +Subroutine LUECO2TM(PARAV) +!============================================================================= +! Calculate LUEMXQ (mol CO2 mol-1 PAR quanta) +! Inputs : PARAV (micromol PAR quanta m-2 s-) +!============================================================================= + real :: PARAV + real :: CO2I, EA, EAKMC, EAKMO, EAVCMX, EFF, GAMMAX, KC25, KMC, KMC25 + real :: KMO, KMO25, KOKC, O2, PMAX, R, RUBISCN, T, TMPFAC, VCMAX + T = DAVTMP !(degC) + RUBISCN = RUBISC * (1.E6/550000.) 
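+  ! Unit sketch (our reading, not an upstream comment): RUBISC is presumably
+  ! g Rubisco m-2 leaf; multiplying by 1.E6 and dividing by ~550000 g mol-1
+  ! (approximate Rubisco molar mass) gives RUBISCN in micromol m-2, which
+  ! with KC25 (mol CO2 mol-1 Rubisco s-1) makes VCMAX micromol CO2 m-2 s-1.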
+ EAVCMX = 68000 !(J mol-1) + EAKMC = 65800 !(J mol-1) + EAKMO = 1400 !(J mol-1) + KC25 = 20 !(mol CO2 mol-1 Rubisco s-1) + KMC25 = 460 !(ppm CO2) + KMO25 = 33 !(% O2) + KOKC = 0.21 !(-) + O2 = 21 !(% O2) + R = 8.314 !(J K-1 mol-1) + CO2I = 0.7 * CO2A !(ppm CO2) + VCMAX = RUBISCN * KC25 * exp((1/298.-1/(T+273))*EAVCMX/R) !(micromol CO2 m-2 s-1) + KMC = KMC25 * exp((1/298.-1/(T+273))*EAKMC /R) !(ppm CO2) + KMO = KMO25 * exp((1/298.-1/(T+273))*EAKMO /R) !(% O2) + GAMMAX = 0.5 * KOKC * KMC * O2 / KMO !(ppm CO2) + PMAX = VCMAX * (CO2I-GAMMAX) / (CO2I + KMC * (1+O2/KMO)) !(micromol CO2 m-2 s-1) + TMPFAC = max( 0., min( 1., (T+4.)/5. ) ) !(-) + EFF = TMPFAC * (1/2.1) * (CO2I-GAMMAX) / (4.5*CO2I+10.5*GAMMAX) !(mol CO2 mol-1 PAR quanta) + LUEMXQ = EFF*PMAX*(1+KLUETILG*(1-FRACTV)) / (EFF*K*PARAV + PMAX) !(mol CO2 mol-1 PAR quanta) +end Subroutine LUECO2TM + +Subroutine HardeningSink(CLV,DAYL,doy,LT50,Tsurf) + integer :: doy + real :: CLV,DAYL,LT50,Tsurf + real :: doySinceStart, reHardRedStart + if( LAT > 0 ) then + reHardRedStart = modulo( reHardRedEnd - reHardRedDay, 365. ) + else + reHardRedStart = modulo( reHardRedEnd + 183 - reHardRedDay, 365. ) + end if + doySinceStart = modulo( doy-reHardRedStart , 365. ) + if ( doySinceStart < (reHardRedDay+0.5*(365.-reHardRedDay)) ) then + reHardPeriod = max( 0., 1.-doySinceStart/reHardRedDay ) + else + reHardPeriod = 1. + end if + if ( (Tsurf>THARDMX) .or. (LT50= 0.1) then + ! Situation 1: Growth has priority over storage (spring and growth period) + ! Calculate amount of assimilates allocated to shoot + ALLOSH = min( ALLOTOT, GSHSI ) + ! Calculate amount of assimilates allocated to reserves + GRES = min( ALLOTOT - ALLOSH, GRESSI) + else + ! Situation 2: Storage has priority over shoot (autumn) + ! Calculate amount of assimilates allocated to reserves + GRES = min( ALLOTOT, GRESSI ) + ! Calculate amount of assimilates allocated to shoot + ALLOSH = min( ALLOTOT - GRES, GSHSI ) + end if + ! All surplus carbohydrate goes to roots + ALLORT = ALLOTOT - ALLOSH - GRES + if (GSHSI == 0.) GSHSI = 1 + ALLOLV = GLVSI * (ALLOSH / GSHSI) + ALLOST = GSTSI * (ALLOSH / GSHSI) + GLV = ALLOLV * YG + GST = ALLOST * YG + GRT = ALLORT * YG + RESPGSH = (ALLOLV + ALLOST) * (1-YG) + RESPGRT = ALLORT * (1-YG) + end Subroutine Allocation + +Subroutine PlantRespiration(FO2,RESPHARD) + real :: FO2,RESPHARD + real :: fAer + fAer = max(0.,min(1., FO2/FO2MX )) + RplantAer = fAer * ( RESPGRT + RESPGSH + RESPHARD ) +end Subroutine PlantRespiration + +Subroutine Senescence(CLV,CRT,CSTUB,LAI,LT50,PERMgas,TANAER,TILV,Tsurf, & + DeHardRate,DLAI,DLV,DRT,DSTUB,dTANAER,DTILV,HardRate) + integer :: doy + real :: CLV,CRT,CSTUB,DAYL,LAI,LT50,PERMgas,TANAER,TILV,Tsurf + real :: DeHardRate,DLAI,DLV,DRT,DSTUB,dTANAER,DTILV,HardRate + real :: TV1, TV2 + call AnaerobicDamage(LT50,PERMgas,TANAER, dTANAER) + call Hardening(CLV,LT50,Tsurf, DeHardRate,HardRate) + if (LAI < LAICR) then + TV1 = 0.0 + else + TV1 = RDRSCO*(LAI-LAICR)/LAICR + end if + RDRS = min(TV1, RDRSMX) + RDRT = max(RDRTMIN, RDRTEM * Tsurf) + TV2 = NOHARV * max(RDRS,RDRT,RDRFROST,RDRTOX) + TV2TIL = NOHARV * max(RDRS, RDRFROST,RDRTOX) + DLAI = LAI * TV2 + DLV = CLV * TV2 + DSTUB = CSTUB * RDRSTUB + DTILV = TILV * TV2TIL + DRT = CRT * RDRROOT + +end Subroutine Senescence + + Subroutine AnaerobicDamage(LT50,PERMgas,TANAER, dTANAER) + real :: LT50,PERMgas,TANAER + real :: dTANAER,LD50 + if (PERMgas==0.) then + dTANAER = 1. + else + dTANAER = -TANAER / DELT + end if + LD50 = LDT50A + LDT50B * LT50 + if (TANAER > 0.) 
then + RDRTOX = KRDRANAER / (1.+exp(-KRDRANAER*(TANAER-LD50))) + else + RDRTOX = 0. + end if + end Subroutine AnaerobicDamage + + Subroutine Hardening(CLV,LT50,Tsurf, DeHardRate,HardRate) + real :: CLV,LT50,Tsurf + real :: DeHardRate,HardRate + real :: RATED,RSR3H,RSRDAY + RSR3H = 1. / (1.+exp(-KRSR3H*(Tsurf-LT50))) + ! RDRFROST should be less than 1 to avoid numerical problems + ! (loss of all biomass but keeping positive reserves). We cap it at 0.5. + RSRDAY = RSR3H ! In previous versions we had RSRDAY = RSR3H^8 which understimated survival + RDRFROST = min( 0.5, 1. - RSRDAY ) + RATED = min( Dparam*(LT50MX-LT50)*(Tsurf+TsurfDiff), (LT50MX-LT50)/DELT ) + DeHardRate = max(0.,min( RATEDMX, RATED )) +! HardRate = RESPHARD / (CLV * KRESPHARD) + if (CLV > 0.) then + HardRate = RESPHARD / (CLV * KRESPHARD) + else + HardRate = 0. + end if + end Subroutine Hardening + +Subroutine Foliage2(DAYL,GLV,LAI,TILV,TILG1,TRANRF,Tsurf,VERN, GLAI,GTILV,TILVG1,TILG1G2) + real :: DAYL,GLV,LAI,TILV,TILG1,TRANRF,Tsurf + integer :: VERN + real :: GLAI,GTILV,TILVG1,TILG1G2 + real :: RGRTVG1,TGE,TV1 + GLAI = SLANEW * GLV + if (Tsurf < TBASE) then + TV1 = 0. + else + TV1 = Tsurf/PHY + end if + RLEAF = TV1 * NOHARV * TRANRF * DAYLGE * ( FRACTV + PHENRF*(1-FRACTV) ) * fNgrowth + FSPOT = LAITIL - LAIEFT*LAI + if (FSPOT > FSMAX) FSPOT = FSMAX + if (FSPOT < 0.) FSPOT = 0. + RGRTV = max( 0., FSPOT * RESNOR * RLEAF ) + GTILV = TILV * RGRTV + TGE = max( 0., 1 - (abs(DAVTMP - TOPTGE))/(TOPTGE-TBASE)) + RGRTVG1 = min( 1.-TV2TIL, NOHARV * DAYLGE * TGE * RGENMX ) * VERN + TILVG1 = TILV * RGRTVG1 + if (DAYL > DAYLG1G2) then + TILG1G2 = TILG1 * RGRTG1G2 + else + TILG1G2 = 0. + end if +end Subroutine Foliage2 + +Subroutine Nplant(NSHmob,CLV,CRT,CST,DLAI,DLV,DRT,GLAI,GLV,GRT,GST,HARVLA,HARVLV,HARVST, & + LAI,NRT,NSH, & + DNRT,DNSH,GNRT,GNSH,HARVNSH,NCDSH,NCGSH,NCHARVSH,NSHmobsoil,Nupt) + real :: NSHmob,CLV,CRT,CST,DLAI,DLV,DRT,GLAI,GLV,GRT,GST,HARVLA,HARVLV,HARVST + real :: LAI,NRT,NSH + real :: DNRT,DNSH,GNRT,GNSH,HARVNSH,NCDSH,NCGSH,NCHARVSH,NSHmobsoil,Nupt + real :: GNmob + NSHNOR = NSH / ((CLV+CST)*NCSHMAX) + if(NSHNOR*LAI > 0) then + KNMAX = 1./(NSHNOR*LAI) + else + KNMAX = 1.E10 + end if + if (NSHNOR < (5./8.)) then + KN = KNMAX + else + KN = ( 1.5 - 3. * sqrt(0.25 - (1.-NSHNOR)*2./3.) ) / LAI + end if + KN = max( 0., min(KNMAX,KN) ) + fNCgrowth = FNCGSHMIN + (1-FNCGSHMIN) * fNgrowth + if (GLAI > 0) then + if (KN > 0) then + GNSH = fNCgrowth * NCSHMAX * (GLV+GST) * (1-exp(-KN*GLAI)) / (KN*GLAI) + else + GNSH = fNCgrowth * NCSHMAX * (GLV+GST) + end if + NCGSH = GNSH / (GLV+GST) + else + GNSH = 0 + NCGSH = 0 + end if + if (DLAI > 0) then + if (KN > 0) then + DNSH = NSH * (CLV/(CLV+CST)) * (1 - (1-exp(-KN*(LAI-DLAI))) / (1-exp(-KN*LAI)) ) + else + DNSH = NSH * (CLV/(CLV+CST)) * DLAI/LAI + end if + DNSH = max( DNSH, NSH-NCSHMAX*((CLV+CST)-DLV) ) + NCDSH = DNSH / DLV + else + DNSH = 0 + NCDSH = 0 + end if + if (HARVLA > 0) then + if (KN > 0) then + HARVNSH = NSH * (1-exp(-KN*HARVLA)) / (1-exp(-KN*LAI)) + else + HARVNSH = NSH * HARVLA/LAI + end if + HARVNSH = max( HARVNSH, NSH-NCSHMAX*((CLV+CST)-(HARVLV+HARVST)) ) + NCHARVSH = HARVNSH / (HARVLV+HARVST) + else + HARVNSH = 0 + NCHARVSH = 0 + end if +! GNRT = NCR * GRT + GNRT = NCR * GRT * fNgrowth +! GNRT = NCR * GRT * NSHNOR +! 
DNRT = NCR * DRT
+  DNRT       = DRT * NRT/CRT
+  GNmob      = min( NSHmob, GNSH+GNRT )
+  NSHmobsoil = NSHmob - GNmob
+  Nupt       = GNSH + GNRT - GNmob
+end Subroutine Nplant
+
+Subroutine Digestibility(DM,DMLV,DMRES,DMSH,DMST,DMSTUB,PHEN, &
+                         F_WALL_DM,F_WALL_DMSH,F_WALL_LV,F_WALL_ST, &
+                         F_DIGEST_DM,F_DIGEST_DMSH,F_DIGEST_LV,F_DIGEST_ST,F_DIGEST_WALL)
+  real :: PHEN,DMLV,DMST,DMSTUB,DM,DMSH,DMRES
+  real :: F_WALL_DM,F_WALL_DMSH,F_WALL_LV,F_WALL_ST
+  real :: F_DIGEST_DM,F_DIGEST_DMSH,F_DIGEST_LV,F_DIGEST_ST,F_DIGEST_WALL
+  real :: F_DIGEST_WALL_MIN,F_WALL_LV_MIN,F_WALL_ST_MIN
+  F_WALL_LV_MIN = F_WALL_LV_FMIN * F_WALL_LV_MAX
+  F_WALL_ST_MIN = F_WALL_ST_FMIN * F_WALL_ST_MAX
+  F_WALL_LV   = F_WALL_LV_MIN + PHEN * ( F_WALL_LV_MAX - F_WALL_LV_MIN )
+  F_WALL_ST   = F_WALL_ST_MIN + PHEN * ( F_WALL_ST_MAX - F_WALL_ST_MIN )
+  F_WALL_DM   = ( F_WALL_LV*DMLV + F_WALL_ST*DMST + DMSTUB ) / DM
+  F_WALL_DMSH = ( F_WALL_LV*DMLV + F_WALL_ST*DMST ) / DMSH
+  F_DIGEST_WALL_MIN = F_DIGEST_WALL_FMIN * F_DIGEST_WALL_MAX
+  F_DIGEST_WALL = F_DIGEST_WALL_MAX - PHEN * ( F_DIGEST_WALL_MAX - F_DIGEST_WALL_MIN )
+  F_DIGEST_LV   = 1 - F_WALL_LV + F_DIGEST_WALL * F_WALL_LV
+  F_DIGEST_ST   = 1 - F_WALL_ST + F_DIGEST_WALL * F_WALL_ST
+  F_DIGEST_DMSH = ( F_DIGEST_LV * DMLV + F_DIGEST_ST * DMST + DMRES ) / DMSH
+  F_DIGEST_DM   = ( F_DIGEST_LV * DMLV + F_DIGEST_ST * DMST + DMRES + F_DIGEST_WALL * DMSTUB ) / DM
+end Subroutine Digestibility
+
+end module plant
diff --git a/models/basgra/src/model/resources.f90 b/models/basgra/src/model/resources.f90
new file mode 100644
index 00000000000..0ccaf297df1
--- /dev/null
+++ b/models/basgra/src/model/resources.f90
@@ -0,0 +1,73 @@
+module resources
+
+use parameters_site
+use parameters_plant
+implicit none
+
+real :: DTRINT,PARAV,PARINT,TRANRF
+real :: RWA, WFPS
+
+Contains
+
+Subroutine Light(DAYL,DTR,LAI,PAR)
+  real :: DAYL,DTR,LAI,PAR
+  if (DAYL > 0) then
+    PARAV = PAR * (1E6/(24*3600)) / DAYL
+  else
+    PARAV = 0.
+  end if
+  PARINT = PAR * (1 - exp(-1.0*K*LAI))
+  DTRINT = DTR * (1 - exp(-0.75*K*LAI))
+end Subroutine Light
+
+Subroutine EVAPTRTRF(Fdepth,PEVAP,PTRAN,ROOTD,WAL, EVAP,TRAN)
+  real :: Fdepth, PEVAP, PTRAN, ROOTD, WAL, EVAP, TRAN
+  real :: AVAILF, FR, WAAD, WCL, WCCR
+  if (Fdepth < ROOTD) then
+    WCL = WAL*0.001 / (ROOTD-Fdepth)
+  else
+    WCL = 0
+  end if                                                          ! (m3 m-3)
+  RWA  = max(0., min(1., (WCL - WCAD) / (WCFC - WCAD) ) )         ! % (-)
+  WFPS = max(0., min(1., (WCL - WCAD) / (WCST - WCAD) ) )         ! % (-)
+  WAAD = 1000. * WCAD * (ROOTD-Fdepth)                            ! (mm)
+  EVAP = PEVAP * RWA                                              ! (mm d-1)
+  WCCR = WCWP + max( 0.01, PTRAN/(PTRAN+TRANCO) * (WCFC-WCWP) )   ! (m3 m-3)
+  if (WCL > WCCR) then
+    FR = max(0., min(1., (WCST-WCL)/(WCST-WCWET) ))
+  else
+    FR = max(0., min(1., (WCL-WCWP)/(WCCR-WCWP) ))
+  end if                                                          ! (mm mm-1)
+  TRAN = PTRAN * FR                                               ! (mm d-1)
+  if (EVAP+TRAN > 0.) then
+    AVAILF = min( 1., ((WAL-WAAD)/DELT) / (EVAP+TRAN) )
+  else
+    AVAILF = 0
+  end if                                                          ! (mm mm-1)
+  EVAP = EVAP * AVAILF                                            ! (mm d-1)
+  TRAN = TRAN * AVAILF                                            ! (mm d-1)
+  if (PTRAN > 0.) then
+    TRANRF = TRAN / PTRAN                                         ! (-)
+  else
+    TRANRF = 1                                                    ! (-)
+  end if
+end Subroutine EVAPTRTRF
+
+Subroutine ROOTDG(Fdepth,ROOTD,WAL, EXPLOR,RROOTD)
+  real :: Fdepth,ROOTD,WAL
+  real :: EXPLOR,RROOTD
+  real :: WCL
+  if (Fdepth < ROOTD) then
+    WCL = WAL*0.001 / (ROOTD-Fdepth)
+  else
+    WCL = 0
+  end if                                                          ! (m3 m-3)
+  if ( (ROOTD<ROOTDM) .and. (WCL>WCWP) ) then
+    RROOTD = min( RRDMAX, (ROOTDM-ROOTD)/DELT )
+  else
+    RROOTD = 0.
+  end if
+  EXPLOR = 1000. 
* RROOTD * WCFC +end Subroutine ROOTDG + +end module resources diff --git a/models/basgra/src/model/set_params.f90 b/models/basgra/src/model/set_params.f90 new file mode 100644 index 00000000000..777db229d75 --- /dev/null +++ b/models/basgra/src/model/set_params.f90 @@ -0,0 +1,149 @@ +Subroutine set_params(pa) + +use parameters_site +use parameters_plant +implicit none +real :: pa(120) ! The length of pa() should be at least as high as the number of parameters + +! Initial constants +LOG10CLVI = pa(1) +LOG10CRESI = pa(2) +LOG10CRTI = pa(3) +CSTI = pa(4) +LOG10LAII = pa(5) +PHENI = pa(6) +TILTOTI = pa(7) +FRTILGI = pa(8) +LT50I = pa(9) + +! Process parameters +CLAIV = pa(10) +COCRESMX = pa(11) +CSTAVM = pa(12) +DAYLB = pa(13) +DAYLP = pa(14) +DLMXGE = pa(15) +FSLAMIN = pa(16) +FSMAX = pa(17) +HAGERE = pa(18) +K = pa(19) +LAICR = pa(20) +LAIEFT = pa(21) +LAITIL = pa(22) +LFWIDG = pa(23) +LFWIDV = pa(24) +NELLVM = pa(25) +PHENCR = pa(26) +PHY = pa(27) +RDRSCO = pa(28) +RDRSMX = pa(29) +RDRTEM = pa(30) +RGENMX = pa(31) +ROOTDM = pa(32) +RRDMAX = pa(33) +RUBISC = pa(34) +SHAPE = pa(35) +SIMAX1T = pa(36) +SLAMAX = pa(37) +TBASE = pa(38) +TCRES = pa(39) +TOPTGE = pa(40) +TRANCO = pa(41) +YG = pa(42) + +LAT = pa(43) +WCI = pa(44) +FWCAD = pa(45) +FWCWP = pa(46) +FWCFC = pa(47) +FWCWET = pa(48) +WCST = pa(49) +WpoolMax = pa(50) + +Dparam = pa(51) +FGAS = pa(52) +FO2MX = pa(53) +gamma = pa(54) +Hparam = pa(55) +KRDRANAER = pa(56) +KRESPHARD = pa(57) +KRSR3H = pa(58) +KRTOTAER = pa(59) +KSNOW = pa(60) +LAMBDAsoil = pa(61) +LDT50A = pa(62) +LDT50B = pa(63) +LT50MN = pa(64) +LT50MX = pa(65) +RATEDMX = pa(66) +reHardRedDay = pa(67) +RHOnewSnow = pa(68) +RHOpack = pa(69) +SWret = pa(70) +SWrf = pa(71) +THARDMX = pa(72) +TmeltFreeze = pa(73) +TrainSnow = pa(74) +TsurfDiff = pa(75) +KLUETILG = pa(76) +FRTILGG1I = pa(77) +DAYLG1G2 = pa(78) +RGRTG1G2 = pa(79) +RDRTMIN = pa(80) +TVERN = pa(81) + +CLITT0 = pa( 82) ! (g C m-2) Initial C in litter +CSOM0 = pa( 83) ! (g C m-2) Initial C in OM +CNLITT0 = pa( 84) ! (g C g-1 N) Initial C/N ratio of litter +CNSOMF0 = pa( 85) ! (g C g-1 N) Initial C/N ratio of fast-decomposing OM +CNSOMS0 = pa( 86) ! (g C g-1 N) Initial C/N ratio of slowly decomposing OM +FCSOMF0 = pa( 87) ! (-) Initial C in fast-decomposing OM as a fraction of total OM +FLITTSOMF = pa( 88) ! (-) Fraction of decomposing litter that becomes OM +FSOMFSOMS = pa( 89) ! (-) Fraction of decomposing 'fast' OM that becomes slowly decomposing OM +RNLEACH = pa( 90) ! (-) Mineral N concentration of drainage water as a ratio of that in soil water +KNEMIT = pa( 91) ! (d-1) Max. relative emission rate of soil mineral N +NMIN0 = pa( 92) ! (g N m-2) Initial mineral N +TCLITT = pa( 93) ! (d) Residence time of litter +TCSOMF = pa( 94) ! (d) Residence time of fast-decomposing OM +TCSOMS = pa( 95) ! (d) Residence time of slowly decomposing OM +TMAXF = pa( 96) ! (degC) Temperature at which soil decomposition (fTsoil) is max. +TSIGMAF = pa( 97) ! (degC) Tolerance of soil decomposition for suboptimal temperature +RFN2O = pa( 98) ! (-) Sensitivity of the N2O/NO emission ratio to extreme values of water-filled pore space +WFPS50N2O = pa( 99) ! (-) Water-filled pore space at which the N2O and NO emission rates are equal + +! Parameters for N-processes +NCSHMAX = pa(100) ! (g N g-1 C) +NCR = pa(101) ! (g N g-1 C) + +! Senescence of roots and stubble +RDRROOT = pa(102) +RDRSTUB = pa(103) + +! Parameter for sensitivity analysis of fertilisation +NFERTMULT = pa(104) ! 
Multiplication factor for changing fertlisation (default = 1) + +! Additional parameters for N-processes +FNCGSHMIN = pa(105) +TCNSHMOB = pa(106) +TCNUPT = pa(107) + +F_DIGEST_WALL_FMIN = pa(108) +F_DIGEST_WALL_MAX = pa(109) +F_WALL_LV_FMIN = pa(110) +F_WALL_LV_MAX = pa(111) +F_WALL_ST_FMIN = pa(112) +F_WALL_ST_MAX = pa(113) + +! Parameter transformations +CLVI = 10**LOG10CLVI +CRESI = 10**LOG10CRESI +CRTI = 10**LOG10CRTI +LAII = 10**LOG10LAII + +WCAD = FWCAD * WCST +WCWP = FWCWP * WCST +WCFC = FWCFC * WCST +WCWET = FWCWET * WCST + +return +end diff --git a/models/basgra/src/model/soil.f90 b/models/basgra/src/model/soil.f90 new file mode 100644 index 00000000000..f6f70aa98e1 --- /dev/null +++ b/models/basgra/src/model/soil.f90 @@ -0,0 +1,187 @@ +module soil + +use parameters_site +use parameters_plant +implicit none + +real :: FO2, fPerm, Tsurf, WCL + +real :: DRAIN, RUNOFF +real :: dCLITT, rCLITT, rCSOMF, Rsoil +real :: dCLITTrsoil, dCLITTsomf, dCSOMF, dCSOMFrsoil, dCSOMFsoms, dCSOMS +real :: Nemission, NemissionN2O, NemissionNO, Nfixation, Nleaching +real :: NLITTnmin, NLITTsomf, Nmineralisation +real :: dNLITT, dNSOMF, dNSOMS, NSOMFnmin, NSOMFsoms, rNLITT, rNSOMF +real :: fTsoil + +Contains + +Subroutine SoilWaterContent(Fdepth,ROOTD,WAL) + real :: Fdepth,ROOTD,WAL + if (Fdepth < ROOTD) then + WCL = WAL*0.001 / (ROOTD-Fdepth) + else + WCL = 0 + end if +end Subroutine SoilWaterContent + +Subroutine Physics(DAVTMP,Fdepth,ROOTD,Sdepth,WAS, Frate) + real :: DAVTMP,Fdepth,ROOTD,Sdepth,WAS + real :: Frate + if (Fdepth > 0.) then + Tsurf = DAVTMP / (1. + 10. * (Sdepth / Fdepth) ) + fPerm = 0. + else + Tsurf = DAVTMP * exp(-gamma*Sdepth) + fPerm = 1. + end if + call Frozensoil(Fdepth,ROOTD,WAS, Frate) +end Subroutine Physics + + Subroutine FrozenSoil(Fdepth,ROOTD,WAS, Frate) + real :: Fdepth,ROOTD,WAS + real :: Frate + real :: alpha, PFrate, WCeff + ! Determining the amount of solid water that contributes in transportation of heat to surface 'WCeff' + if (Fdepth > ROOTD) then + WCeff = WCFC + else if (Fdepth > 0.) then + WCeff = (0.001*WAS) / Fdepth + else + WCeff = WCL + end if + ! Calculating potential frost rate 'PFrate' + if (((Fdepth == 0.).and.(Tsurf>0.)).or.(WCeff == 0.)) then ! No soil frost present AND no frost starting + PFrate = 0. + else + alpha = LAMBDAsoil / ( RHOwater * WCeff * LatentHeat ) + PFrate = Sqrt( max(0.,Fdepth**2 - 2.*alpha*Tsurf) ) - Fdepth + end if + if ((PFrate >= 0.).and.(Fdepth > 0.).and.(Fdepth < ROOTD)) then + Frate = PFrate * (0.001*WAS/Fdepth) / WCFC ! Soil frost increasing + else if ((PFrate+Fdepth/DELT) < 0.) then + Frate = -Fdepth / DELT ! Remaining soil frost thaws away + else + Frate = PFrate + end if + end Subroutine FrozenSoil + +Subroutine FRDRUNIR(EVAP,Fdepth,Frate,INFIL,poolDRAIN,ROOTD,TRAN,WAL,WAS, & + FREEZEL,IRRIG,THAWS) + real :: EVAP,Fdepth,Frate,INFIL,poolDRAIN,ROOTD,TRAN,WAL,WAS + real :: FREEZEL,IRRIG,THAWS + real :: INFILTOT,WAFC,WAST + WAFC = 1000. * WCFC * max(0.,(ROOTD-Fdepth)) ! (mm) + WAST = 1000. * WCST * max(0.,(ROOTD-Fdepth)) ! (mm) + INFILTOT = INFIL + poolDrain + if (Fdepth < ROOTD) then + FREEZEL = max(0., min( WAL/DELT + (INFILTOT - EVAP - TRAN), & + (Frate/(ROOTD-Fdepth))*WAL)) ! (mm d-1) + else + FREEZEL = 0. ! (mm d-1) + end if + if ((Fdepth > 0.) .and. (Fdepth <= ROOTD)) then + THAWS = max(0.,min( WAS/DELT, -Frate*WAS/Fdepth )) ! (mm d-1) + else + THAWS = 0. ! (mm d-1) + end if + DRAIN = max(0.,min( DRATE, (WAL-WAFC)/DELT + & + (INFILTOT - EVAP - TRAN - FREEZEL + THAWS) )) ! 
(mm d-1)
+  RUNOFF   = max(0., (WAL-WAST)/DELT + &
+                     (INFILTOT - EVAP - TRAN - FREEZEL + THAWS - DRAIN) )          ! (mm d-1)
+  IRRIG    = IRRIGF * ( (WAFC-WAL)/DELT - &
+                     (INFILTOT - EVAP - TRAN - FREEZEL + THAWS - DRAIN - RUNOFF))  ! (mm d-1)
+end Subroutine FRDRUNIR
+
+Subroutine O2status(O2,ROOTD)
+  real :: O2,ROOTD
+  FO2 = O2 / (ROOTD * FGAS * 1000./22.4)
+end Subroutine O2status
+
+Subroutine O2fluxes(O2,PERMgas,ROOTD,RplantAer, O2IN,O2OUT)
+  real :: O2,PERMgas,ROOTD,RplantAer
+  real :: O2IN,O2OUT
+  real :: O2MX
+  O2OUT = RplantAer * KRTOTAER * 1./12. * 1.
+  O2MX  = FO2MX * ROOTD * FGAS * 1000./22.4
+  O2IN  = PERMgas * ( (O2MX-O2) + O2OUT*DELT )
+end Subroutine O2fluxes
+
+Subroutine N_fert(year,doy,DAYS_FERT,NFERTV, Nfert)
+  integer :: year,doy,i
+  integer,dimension(100,2) :: DAYS_FERT
+  real   ,dimension(100  ) :: NFERTV
+  real                     :: Nfert
+  Nfert = 0
+  do i=1,100
+    if ( (year==DAYS_FERT (i,1)) .and. (doy==DAYS_FERT (i,2)) ) then
+      Nfert = NFERTV (i)
+    end if
+  end do
+end Subroutine N_fert
+
+Subroutine N_dep(year,doy,DAYS_NDEP,NDEPV, Ndep)
+  integer :: year,doy,j
+  integer,dimension(100,2) :: DAYS_NDEP
+  real   ,dimension(100  ) :: NDEPV
+  integer :: idep
+  real    :: NDEPV_interval,t
+  real   ,dimension(100) :: tNdep
+  real    :: Ndep
+  t     = year           + (doy           -0.5)/366
+  tNdep = DAYS_NDEP(:,1) + (DAYS_NDEP(:,2)-0.5)/366
+  do j = 2,100
+    if ( (tNdep(j-1)<t) .and. (tNdep(j)>=t) ) idep = j-1
+  end do
+  NDEPV_interval = NDEPV(idep+1) - NDEPV(idep)
+  Ndep  = NDEPV(idep) + NDEPV_interval * (t            -tNdep(idep)) / &
+                                         (tNdep(idep+1)-tNdep(idep))
+end Subroutine N_dep
+
+Subroutine CNsoil(ROOTD,RWA,WFPS,WAL,GCR,CLITT,CSOMF,NLITT,NSOMF,NSOMS,NMIN,CSOMS)
+  real :: CLITT, CSOMF, CSOMS, fN2O, GCR, NLITT, NMIN, NSOMF, NSOMS
+  real :: ROOTD, RWA, WAL, WFPS
+  ! Soil temperature effect
+  fTsoil = exp((Tsurf-10.)*(2.*TMAXF-Tsurf-10.)/(2.*TSIGMAF**2.))
+  ! C Litter
+  rCLITT      = ((CLITT*RUNOFF) / ROOTD) * RRUNBULK * 0.001
+  dCLITT      =  (CLITT*fTsoil) / TCLITT
+  dCLITTsomf  = FLITTSOMF * dCLITT
+  dCLITTrsoil = dCLITT - dCLITTsomf
+  ! C SOM fast
+  rCSOMF      = ((CSOMF*RUNOFF) / ROOTD) * RRUNBULK * 0.001
+  dCSOMF      =  (CSOMF*fTsoil) / TCSOMF
+  dCSOMFsoms  = FSOMFSOMS * dCSOMF
+  dCSOMFrsoil = dCSOMF - dCSOMFSOMS
+  ! C SOM slow
+  dCSOMS      = (CSOMS*fTsoil) / TCSOMS
+  ! Respiration
+  Rsoil       = dCLITTrsoil + dCSOMFrsoil + dCSOMS
+  ! N Litter
+  rNLITT      = ((NLITT*RUNOFF) / ROOTD) * RRUNBULK * 0.001
+  dNLITT      =  (NLITT*dCLITT) / CLITT
+  NLITTsomf   = dNLITT * FLITTSOMF
+  NLITTnmin   = dNLITT - NLITTsomf
+  ! N SOM fast
+  rNSOMF      = ((NSOMF*RUNOFF) / ROOTD) * RRUNBULK * 0.001
+  dNSOMF      =  (NSOMF*dCSOMF) / CSOMF
+  NSOMFsoms   = dNSOMF * FSOMFSOMS
+  NSOMFnmin   = dNSOMF - NSOMFsoms
+  ! N SOM slow
+  dNSOMS      = (NSOMS*dCSOMS) / CSOMS
+  ! N mineralisation, fixation, leaching, emission
+  Nmineralisation = NLITTnmin + NSOMFnmin + dNSOMS
+  Nfixation       = gCR * KNFIX
+  ! Nleaching       = (NMIN*RNLEACH*DRAIN) / WAL
+  if ((WAL > 0.) .and. (NMIN > 0.)) then
+    Nleaching       = (NMIN*RNLEACH*DRAIN) / WAL
+  else
+    Nleaching       = 0.
+  end if
+  Nemission       = NMIN * KNEMIT * RWA
+  fN2O            = 1. / (1. 
+ exp(-RFN2O*(WFPS-WFPS50N2O))) + NemissionN2O = Nemission * fN2O + NemissionNO = Nemission * (1.-fN2O) +end Subroutine CNsoil + +end module soil From 6095ba7bb3edcd0947c6d413de6f3791d6e76088 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 06:25:44 -0400 Subject: [PATCH 0443/2289] pass flags --- models/basgra/NAMESPACE | 1 + models/basgra/src/Makevars | 25 ++++++++++++++----------- 2 files changed, 15 insertions(+), 11 deletions(-) diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index 6e2049dfa2f..9b5cbc5771f 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -3,3 +3,4 @@ export(met2model.BASGRA) export(run_BASGRA) export(write.config.BASGRA) +useDynLib(PEcAn.BASGRA) diff --git a/models/basgra/src/Makevars b/models/basgra/src/Makevars index 3fc60542282..5802f4c8783 100644 --- a/models/basgra/src/Makevars +++ b/models/basgra/src/Makevars @@ -1,7 +1,8 @@ # PEcAn BASGRA -- Makevars # Author: Istem Fer, Alexey Shiklomanov -PKG_LIBS = $(FLIBS) +PKG_LIBS = $(FLIBS) + SOURCES = \ model/parameters_site.f90 \ @@ -15,23 +16,25 @@ SOURCES = \ OBJECTS = \ - model/parameters_site.o \ - model/parameters_plant.o \ - model/environment.o \ - model/resources.o \ - model/soil.o \ - model/plant.o \ - model/set_params.o \ - model/BASGRA.o + parameters_site.o \ + parameters_plant.o \ + environment.o \ + resources.o \ + soil.o \ + plant.o \ + set_params.o \ + BASGRA.o .PHONY: all clean -all : $(SHLIB) +all : $(SHLIB) -$(SHLIB) : $(OBJECTS) +$(SHLIB) : $(OBJECTS) $(OBJECTS) : $(SOURCES) + $(F77) -x f95-cpp-input -fPIC -O3 -c -fdefault-real-8 $(SOURCES) + clean : rm -f $(OBJECTS) *.mod model/*.mod *.so *.o symbols.rds \ No newline at end of file From 17d26980c88bd1b16d62bdd11b9bed6a54616736 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 08:08:32 -0400 Subject: [PATCH 0444/2289] get rid of met2model --- models/basgra/NAMESPACE | 1 - models/basgra/R/met2model.BASGRA.R | 145 ------------------------- models/basgra/R/run_BASGRA.R | 147 ++++++++++++++++---------- models/basgra/inst/template.job | 2 +- models/basgra/man/met2model.BASGRA.Rd | 30 ------ models/basgra/man/run_BASGRA.Rd | 8 +- 6 files changed, 98 insertions(+), 235 deletions(-) delete mode 100644 models/basgra/R/met2model.BASGRA.R delete mode 100644 models/basgra/man/met2model.BASGRA.Rd diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index 9b5cbc5771f..8cbd290d2d2 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -1,6 +1,5 @@ # Generated by roxygen2: do not edit by hand -export(met2model.BASGRA) export(run_BASGRA) export(write.config.BASGRA) useDynLib(PEcAn.BASGRA) diff --git a/models/basgra/R/met2model.BASGRA.R b/models/basgra/R/met2model.BASGRA.R deleted file mode 100644 index 8f176f810ef..00000000000 --- a/models/basgra/R/met2model.BASGRA.R +++ /dev/null @@ -1,145 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##-------------------------------------------------------------------------------------------------# -##' Converts a met CF file to a model specific met file. 
The input -##' files are calld /.YYYY.cf -##' -##' @name met2model.BASGRA -##' @title Write BASGRA met files -##' @param in.path path on disk where CF file lives -##' @param in.prefix prefix for each file -##' @param outfolder location where model specific output is written. -##' @param start_date beginning of the weather data -##' @param end_date end of the weather data -##' @return OK if everything was succesful. -##' @export -##' @author Istem Fer -##-------------------------------------------------------------------------------------------------# -met2model.BASGRA <- function(in.path, in.prefix, outfolder, overwrite = FALSE, - start_date, end_date, ...) { - - PEcAn.logger::logger.info("START met2model.BASGRA") - - ## check to see if the outfolder is defined, if not create directory for output - if (!file.exists(outfolder)) { - dir.create(outfolder) - } - - start_date <- as.POSIXlt(start_date, tz = "UTC") - end_date <- as.POSIXlt(end_date, tz = "UTC") - start_year <- lubridate::year(start_date) - end_year <- lubridate::year(end_date) - - out.file <- paste(in.prefix, strptime(start_date, "%Y-%m-%d"), - strptime(end_date, "%Y-%m-%d"), - "txt", - sep = ".") - - out.file.full <- file.path(outfolder, out.file) - - results <- data.frame(file = out.file.full, - host = PEcAn.remote::fqdn(), - mimetype = "text/csv", - formatname = "Weather-Bioforsk", - startdate = start_date, - enddate = end_date, - dbfile.name = out.file, - stringsAsFactors = FALSE) - PEcAn.logger::logger.info("internal results") - PEcAn.logger::logger.info(results) - - if (file.exists(out.file.full) && !overwrite) { - PEcAn.logger::logger.debug("File '", out.file.full, "' already exists, skipping to next file.") - return(invisible(results)) - } - - out.list <- list() - - ctr <- 1 - for(year in start_year:end_year) { - - PEcAn.logger::logger.info(year) - - # prepare data frame for BASGRA format, daily inputs, but doesn't have to be full year - diy <- PEcAn.utils::days_in_year(year) - out <- data.frame(ST = rep(42, diy), # station number, not important - YR = rep(year, diy), # year - doy = seq_len(diy)) # day of year, simple implementation for now - - - - old.file <- file.path(in.path, paste(in.prefix, year, "nc", sep = ".")) - - if (file.exists(old.file)) { - - ## open netcdf - nc <- ncdf4::nc_open(old.file) - - ## convert time to seconds - sec <- nc$dim$time$vals - sec <- udunits2::ud.convert(sec, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") - - dt <- PEcAn.utils::seconds_in_year(year) / length(sec) - tstep <- round(86400 / dt) - dt <- 86400 / tstep - - ## extract variables - Tair <-ncdf4::ncvar_get(nc, "air_temperature") ## in Kelvin - Tair_C <- udunits2::ud.convert(Tair, "K", "degC") - - # compute daily mean, min and max - ind <- rep(seq_len(diy), each = tstep) - t_dmean <- tapply(Tair_C, ind, mean, na.rm = TRUE) # maybe round these numbers - t_dmax <- tapply(Tair_C, ind, max, na.rm = TRUE) - t_dmin <- tapply(Tair_C, ind, min, na.rm = TRUE) - - out$T <- t_dmean # mean temperature (degrees Celsius) - out$TMMXI <- t_dmax # max temperature (degrees Celsius) - out$TMMNI <- t_dmin # min temperature (degrees Celsius) - - RH <-ncdf4::ncvar_get(nc, "relative_humidity") # % - RH <- tapply(RH, ind, mean, na.rm = TRUE) - - out$RH <- RH # relative humidity (%) - - Rain <- ncdf4::ncvar_get(nc, "precipitation_flux") # kg m-2 s-1 - raini <- tapply(Rain*86400, ind, mean, na.rm = TRUE) - - out$RAINI <- raini # precipitation (mm d-1) - - - U <- ncdf4::ncvar_get(nc, "eastward_wind") - V <- ncdf4::ncvar_get(nc, "northward_wind") - ws 
<- sqrt(U ^ 2 + V ^ 2) - - out$WNI <- tapply(ws, ind, mean, na.rm = TRUE) # mean wind speed (m s-1) - - rad <- ncdf4::ncvar_get(nc, "surface_downwelling_shortwave_flux_in_air") - gr <- rad * 0.0864 # W m-2 to MJ m-2 d-1 - - out$GR <- tapply(gr, ind, mean, na.rm = TRUE) # irradiation (MJ m-2 d-1) - - ncdf4::nc_close(nc) - } else { - PEcAn.logger::logger.info("File for year", year, "not found. Skipping to next year") - next - } - - out.list[[ctr]] <- out - ctr <- ctr + 1 - } # end for-loop around years - - clim <- do.call("rbind", out.list) - - ## write output - write.table(clim, out.file.full, quote = FALSE, sep = "\t", row.names = FALSE, col.names = TRUE) - return(invisible(results)) - -} # met2model.BASGRA diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 91f3cab968d..ce8d9c73679 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -10,8 +10,7 @@ ##' ##' @name run_BASGRA ##' @title run BASGRA model -##' @param binary_path path to model binary -##' @param file_weather path to climate file, should change when I get rid of met2model? +##' @param run_met path to climate file, should change when I get rid of met2model? ##' @param run_params parameter vector ##' @param start_date start time of the simulation ##' @param end_date end time of the simulation @@ -24,59 +23,101 @@ ##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -run_BASGRA <- function(binary_path, file_weather, run_params, start_date, end_date, - outdir, sitelat, sitelon){ +run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitelat, sitelon){ - ############################# GENERAL INITIALISATION ######################## - # this part corresponds to initialise_BASGRA_general.R function - - calendar_fert <- matrix( 0, nrow=100, ncol=3 ) - calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) - calendar_Ndep[1,] <- c(1900, 1,0) - calendar_Ndep[2,] <- c(2100, 366, 0) - days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) - - ################################################################################ - ### 1. MODEL LIBRARY FILE & FUNCTION FOR RUNNING THE MODEL - run_model <- function(p = params, - w = matrix_weather, - calf = calendar_fert, - calN = calendar_Ndep, - h = days_harvest, - n = NDAYS) { - .Fortran('BASGRA', p,w,calf,calN,h,n, NOUT,matrix(0,n,NOUT))[[8]] - } + start_date <- as.POSIXlt(start_date, tz = "UTC") + end_date <- as.POSIXlt(end_date, tz = "UTC") + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) + + ################################################################################ - ### 2. 
FUNCTIONS FOR READING WEATHER DATA - read_weather_Bioforsk <- function(y = year_start, - d = doy_start, - n = NDAYS, - f = file_weather) { - df_weather <- read.table( f, header=TRUE ) - row_start <- 1 - while( df_weather[row_start,]$YR < y ) { row_start <- row_start+1 } - while( df_weather[row_start,]$doy < d ) { row_start <- row_start+1 } - df_weather_sim <- df_weather[row_start:(row_start+n-1),] - NMAXDAYS <- as.integer(10000) - NWEATHER <- as.integer(8) - matrix_weather <- matrix( 0., nrow=NMAXDAYS, ncol=NWEATHER ) - matrix_weather[1:n,1] <- df_weather_sim$YR - matrix_weather[1:n,2] <- df_weather_sim$doy - matrix_weather[1:n,3] <- df_weather_sim$GR - matrix_weather[1:n,4] <- df_weather_sim$T - matrix_weather[1:n,5] <- df_weather_sim$T - matrix_weather[1:n,6] <- exp(17.27*df_weather_sim$T/(df_weather_sim$T+239)) * - 0.6108 * df_weather_sim$RH / 100 - matrix_weather[1:n,7] <- df_weather_sim$RAINI - matrix_weather[1:n,8] <- df_weather_sim$WNI + ### FUNCTIONS FOR READING WEATHER DATA + mini_met2model_BASGRA <- function(file_path, + start_date, start_year, + end_date, end_year) { + + out.list <- list() + + ctr <- 1 + for(year in start_year:end_year) { + + diy <- PEcAn.utils::days_in_year(year) + + NWEATHER <- as.integer(8) + matrix_weather <- matrix( 0., nrow=diy, ncol=NWEATHER ) + + + # prepare data frame for BASGRA format, daily inputs, but doesn't have to be full year + + + matrix_weather[ ,1] <- rep(year, diy) # year + matrix_weather[ ,2] <- seq_len(diy) # day of year, simple implementation for now + + old.file <- file.path(dirname(file_path), paste(basename(file_path), year, "nc", sep = ".")) + + if (file.exists(old.file)) { + + ## open netcdf + nc <- ncdf4::nc_open(old.file) + + ## convert time to seconds + sec <- nc$dim$time$vals + sec <- udunits2::ud.convert(sec, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") + + dt <- PEcAn.utils::seconds_in_year(year) / length(sec) + tstep <- round(86400 / dt) + dt <- 86400 / tstep + + ind <- rep(seq_len(diy), each = tstep) + + rad <- ncdf4::ncvar_get(nc, "surface_downwelling_shortwave_flux_in_air") + gr <- rad * 0.0864 # W m-2 to MJ m-2 d-1 + + matrix_weather[ ,3] <- tapply(gr, ind, mean, na.rm = TRUE) # irradiation (MJ m-2 d-1) + + Tair <-ncdf4::ncvar_get(nc, "air_temperature") ## in Kelvin + Tair_C <- udunits2::ud.convert(Tair, "K", "degC") + + t_dmean <- tapply(Tair_C, ind, mean, na.rm = TRUE) # maybe round these numbers + matrix_weather[ ,4] <- t_dmean # mean temperature (degrees Celsius) + matrix_weather[ ,5] <- t_dmean # that's what they had in read_weather_Bioforsk + + RH <-ncdf4::ncvar_get(nc, "relative_humidity") # % + RH <- tapply(RH, ind, mean, na.rm = TRUE) + + matrix_weather[ ,6] <- exp(17.27*t_dmean/(t_dmean+239)) * 0.6108 * RH / 100 + + + Rain <- ncdf4::ncvar_get(nc, "precipitation_flux") # kg m-2 s-1 + raini <- tapply(Rain*86400, ind, mean, na.rm = TRUE) + matrix_weather[ ,7] <- raini # precipitation (mm d-1) + + U <- ncdf4::ncvar_get(nc, "eastward_wind") + V <- ncdf4::ncvar_get(nc, "northward_wind") + ws <- sqrt(U ^ 2 + V ^ 2) + + matrix_weather[ ,8] <- tapply(ws, ind, mean, na.rm = TRUE) # mean wind speed (m s-1) + + ncdf4::nc_close(nc) + } else { + PEcAn.logger::logger.info("File for year", year, "not found. Skipping to next year") + next + } + + out.list[[ctr]] <- matrix_weather + ctr <- ctr + 1 + } # end for-loop around years + + matrix_weather <- do.call("rbind", out.list) return(matrix_weather) } ################################################################################ - ### 3. 
OUTPUT VARIABLES + ### OUTPUT VARIABLES outputNames <- c( "Time" , "year" , "doy" , "DAVTMP" , "CLV" , "CLVD" , "YIELD" , "CRES" , "CRT" , "CST" , "CSTUB" , "DRYSTOR" , @@ -139,17 +180,17 @@ run_BASGRA <- function(binary_path, file_weather, run_params, start_date, end_da ############################# SITE CONDITIONS ######################## # this part corresponds to initialise_BASGRA_***.R functions - start_date <- as.POSIXlt(start_date, tz = "UTC") - end_date <- as.POSIXlt(end_date, tz = "UTC") - start_year <- lubridate::year(start_date) - end_year <- lubridate::year(end_date) - year_start <- as.integer(start_year) doy_start <- as.integer(lubridate::yday(start_date)) NDAYS <- as.integer(sum(PEcAn.utils::days_in_year(start_year:end_year))) # could be partial years, change later - parcol <- 13 + + matrix_weather <- mini_met2model_BASGRA(run_met, start_date, start_year, end_date, end_year) - matrix_weather <- read_weather_Bioforsk(year_start,doy_start,NDAYS,file_weather) + calendar_fert <- matrix( 0, nrow=100, ncol=3 ) + calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) + calendar_Ndep[1,] <- c(1900, 1,0) + calendar_Ndep[2,] <- c(2100, 366, 0) + days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) # hardcoding these for now, should be able to modify later on calendar_fert[1,] <- c( 2000, 115, 140*1000/ 10000 ) # 140 kg N ha-1 applied on day 115 diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job index 32a267804ed..290762bb251 100644 --- a/models/basgra/inst/template.job +++ b/models/basgra/inst/template.job @@ -16,7 +16,7 @@ if [ ! -e "@OUTDIR@/results.csv" ]; then "@BINARY@" # convert to MsTMIP echo "require (PEcAn.BASGRA) -run_BASGRA('@BINARY@', '@SITE_MET@', @RUN_PARAMS@, '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) +run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) " | R --vanilla STATUS=$? diff --git a/models/basgra/man/met2model.BASGRA.Rd b/models/basgra/man/met2model.BASGRA.Rd deleted file mode 100644 index e4eaf2abdd4..00000000000 --- a/models/basgra/man/met2model.BASGRA.Rd +++ /dev/null @@ -1,30 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/met2model.BASGRA.R -\name{met2model.BASGRA} -\alias{met2model.BASGRA} -\title{Write BASGRA met files} -\usage{ -met2model.BASGRA(in.path, in.prefix, outfolder, overwrite = FALSE, - start_date, end_date, ...) -} -\arguments{ -\item{in.path}{path on disk where CF file lives} - -\item{in.prefix}{prefix for each file} - -\item{outfolder}{location where model specific output is written.} - -\item{start_date}{beginning of the weather data} - -\item{end_date}{end of the weather data} -} -\value{ -OK if everything was succesful. -} -\description{ -Converts a met CF file to a model specific met file. 
The input -files are calld /.YYYY.cf -} -\author{ -Istem Fer -} diff --git a/models/basgra/man/run_BASGRA.Rd b/models/basgra/man/run_BASGRA.Rd index d98ed8067c6..eb07ee94afe 100644 --- a/models/basgra/man/run_BASGRA.Rd +++ b/models/basgra/man/run_BASGRA.Rd @@ -4,13 +4,11 @@ \alias{run_BASGRA} \title{run BASGRA model} \usage{ -run_BASGRA(binary_path, file_weather, run_params, start_date, end_date, - outdir, sitelat, sitelon) +run_BASGRA(run_met, run_params, start_date, end_date, outdir, sitelat, + sitelon) } \arguments{ -\item{binary_path}{path to model binary} - -\item{file_weather}{path to climate file, should change when I get rid of met2model?} +\item{run_met}{path to climate file, should change when I get rid of met2model?} \item{run_params}{parameter vector} From dd24100ba469f6e402bafc8bb80a2a159c59d01e Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 08:41:37 -0400 Subject: [PATCH 0445/2289] one more par --- models/basgra/R/run_BASGRA.R | 9 +++++---- models/basgra/R/write.config.BASGRA.R | 5 +++++ 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index ce8d9c73679..a39276594eb 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -10,7 +10,7 @@ ##' ##' @name run_BASGRA ##' @title run BASGRA model -##' @param run_met path to climate file, should change when I get rid of met2model? +##' @param run_met path to CF met ##' @param run_params parameter vector ##' @param start_date start time of the simulation ##' @param end_date end time of the simulation @@ -31,13 +31,14 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela end_year <- lubridate::year(end_date) - ################################################################################ ### FUNCTIONS FOR READING WEATHER DATA mini_met2model_BASGRA <- function(file_path, start_date, start_year, end_date, end_year) { + # TODO: read partial years + out.list <- list() ctr <- 1 @@ -89,7 +90,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela matrix_weather[ ,6] <- exp(17.27*t_dmean/(t_dmean+239)) * 0.6108 * RH / 100 - + # TODO: check these Rain <- ncdf4::ncvar_get(nc, "precipitation_flux") # kg m-2 s-1 raini <- tapply(Rain*86400, ind, mean, na.rm = TRUE) matrix_weather[ ,7] <- raini # precipitation (mm d-1) @@ -117,7 +118,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela ################################################################################ - ### OUTPUT VARIABLES + ### OUTPUT VARIABLES (from BASGRA scripts) outputNames <- c( "Time" , "year" , "doy" , "DAVTMP" , "CLV" , "CLVD" , "YIELD" , "CRES" , "CRT" , "CST" , "CSTUB" , "DRYSTOR" , diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index d3874f5569c..caebda40c6a 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -44,6 +44,11 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { if ("c2n_fineroot" %in% pft.names) { run_params[which(names(default_params) == "NCR")] <- 1/pft.traits[which(pft.names == "c2n_fineroot")] } + + if ("extinction_coefficient" %in% pft.names) { + run_params[which(names(default_params) == "K")] <- pft.traits[which(pft.names == "extinction_coefficient")] + } + } From 2e28b786287a70b7f5b77098fdcbb29f9866e47f Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 08:43:40 -0400 Subject: [PATCH 0446/2289] edit changelog --- CHANGELOG.md | 
3 +++ 1 file changed, 3 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index a207ba4da6b..7308059407f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -15,6 +15,9 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Changed - Stricter package checking: `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings (#2404). +### Added +- BASGRA_N model basic coupling. + ## [1.7.1] - 2018-09-12 ### Fixed From 41e391dcce874f0aefbe16c36b9d31685882da1a Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 15:58:47 +0300 Subject: [PATCH 0447/2289] Apply suggestions from code review Co-Authored-By: Alexey Shiklomanov --- models/basgra/DESCRIPTION | 7 +++---- models/basgra/R/run_BASGRA.R | 2 +- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION index 1308677096e..c6837f7b478 100644 --- a/models/basgra/DESCRIPTION +++ b/models/basgra/DESCRIPTION @@ -1,18 +1,17 @@ Package: PEcAn.BASGRA Type: Package -Title: PEcAn package for integration of the ModelName model +Title: PEcAn package for integration of the BASGRA model Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Istem", "Fer")) +Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut", "cre")) Author: Istem Fer Maintainer: Istem Fer -Description: This module provides functions to link the BASGRA to PEcAn. +Description: This module provides functions to link the BASGRA model to PEcAn. Imports: PEcAn.logger, PEcAn.utils (>= 1.4.8) Suggests: testthat (>= 1.0.2) -SystemRequirements: ModelName OS_type: unix License: FreeBSD + file LICENSE Copyright: Authors diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index a39276594eb..60c3e5908bf 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -42,7 +42,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela out.list <- list() ctr <- 1 - for(year in start_year:end_year) { + for(year in seq(start_year, end_year)) { diy <- PEcAn.utils::days_in_year(year) From a6246c48110ed63a96ef14fcba2750a8cd4bb6b4 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 16:10:21 +0300 Subject: [PATCH 0448/2289] Update models/basgra/R/run_BASGRA.R Co-Authored-By: Alexey Shiklomanov --- models/basgra/R/run_BASGRA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 60c3e5908bf..683c8d89222 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -183,7 +183,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela year_start <- as.integer(start_year) doy_start <- as.integer(lubridate::yday(start_date)) - NDAYS <- as.integer(sum(PEcAn.utils::days_in_year(start_year:end_year))) # could be partial years, change later + NDAYS <- as.integer(sum(PEcAn.utils::days_in_year(seq(start_year, end_year)))) # could be partial years, change later matrix_weather <- mini_met2model_BASGRA(run_met, start_date, start_year, end_date, end_year) From d2ed54ccd0c489c70b5a630881c12737d0813090 Mon Sep 17 00:00:00 2001 From: hamzed Date: Fri, 27 Sep 2019 09:12:46 -0400 Subject: [PATCH 0449/2289] addressing comments --- 
modules/uncertainty/R/ensemble.R | 11 ++++++----- modules/uncertainty/R/get.parameter.samples.R | 10 ++++++++-- 2 files changed, 14 insertions(+), 7 deletions(-) diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index 7c49893e347..88c09333b81 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -214,14 +214,14 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, if (write.to.db) { # Open connection to database so we can store all run/ensemble information con <- - try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - on.exit(try(PEcAn.DB::db.close(con), silent = TRUE) + try(PEcAn.DB::db.open(settings$database$bety)) + on.exit(try(PEcAn.DB::db.close(con), silent = TRUE, add = TRUE) ) # If we fail to connect to DB then we set to NULL - if (inherits(con, "try-error")) - { + if (inherits(con, "try-error")) { con <- NULL + PEcAn.logger::logger.warn("We were not able to successfully establish a connection with Bety ") } } @@ -327,7 +327,8 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, "The following pfts are specified for the siteid ", settings$run$site$id , " but they are not defined as a pft in pecan.xml:", - site.pfts.vec[which(!(site.pfts.vec %in% defined.pfts))] + site.pfts.vec[which(!(site.pfts.vec %in% defined.pfts))], + collapse = "," ) ) } diff --git a/modules/uncertainty/R/get.parameter.samples.R b/modules/uncertainty/R/get.parameter.samples.R index 66076c56367..357f87445e1 100644 --- a/modules/uncertainty/R/get.parameter.samples.R +++ b/modules/uncertainty/R/get.parameter.samples.R @@ -16,8 +16,14 @@ get.parameter.samples <- function(settings, pft.names <- list() outdirs <- list() ## Open database connection - con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - on.exit(try(PEcAn.DB::db.close(con), silent = TRUE)) + con <- try(PEcAn.DB::db.open(settings$database$bety)) + on.exit(try(PEcAn.DB::db.close(con), silent = TRUE), add = TRUE) + + # If we fail to connect to DB then we set to NULL + if (inherits(con, "try-error")) { + con <- NULL + PEcAn.logger::logger.warn("We were not able to successfully establish a connection with Bety ") + } for (i.pft in seq_along(pfts)) { pft.names[i.pft] <- settings$pfts[[i.pft]]$name From 2d0f951f7bcc24602d37be98d427d33372db807a Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:19:05 -0400 Subject: [PATCH 0450/2289] use load_local --- models/basgra/R/write.config.BASGRA.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index caebda40c6a..07256322202 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -30,8 +30,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { outdir <- file.path(settings$host$outdir, run.id) # load default(!) 
BASGRA params - load(system.file("BASGRA_params.Rdata",package = "PEcAn.BASGRA")) - run_params <- default_params + run_params <- PEcAn.utils::load_local(system.file("BASGRA_params.Rdata",package = "PEcAn.BASGRA"))$default_params run_params[which(names(default_params) == "LAT")] <- as.numeric(settings$run$site$lat) From 631f910926f79635045cfa43a189f1eba8cc758f Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:19:49 -0400 Subject: [PATCH 0451/2289] minor comments --- models/basgra/R/run_BASGRA.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 683c8d89222..9523af216d7 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -88,6 +88,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela RH <-ncdf4::ncvar_get(nc, "relative_humidity") # % RH <- tapply(RH, ind, mean, na.rm = TRUE) + # This is vapor pressure according to BASGRA.f90#L86 and environment.f90#L49 matrix_weather[ ,6] <- exp(17.27*t_dmean/(t_dmean+239)) * 0.6108 * RH / 100 # TODO: check these @@ -226,7 +227,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela years <- seq(start_year, end_year) for (y in years) { - thisyear <- output[, which(outputNames == "year")] == y + thisyear <- output[ , which(outputNames == "year"] == y outlist <- list() outlist[[1]] <- output[thisyear, which(outputNames == "LAI")] # LAI in (m2 m-2) From 77f55dff7ea8287b105c14c8c764104a6435c425 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:20:28 -0400 Subject: [PATCH 0452/2289] delete readme for now --- models/basgra/README.md | 64 ----------------------------------------- 1 file changed, 64 deletions(-) delete mode 100644 models/basgra/README.md diff --git a/models/basgra/README.md b/models/basgra/README.md deleted file mode 100644 index ac17ce2dae4..00000000000 --- a/models/basgra/README.md +++ /dev/null @@ -1,64 +0,0 @@ -A generic template for adding a new model to PEcAn -========================================================================== - -Adding a new model to PEcAn in a few easy steps: - -1. add modeltype to BETY -2. add a model and PFT to BETY for use with modeltype -3. implement 3 functions as described below -4. Add tests to `tests/testthat` -5. Update README, documentation -6. execute pecan with new model - - -### Three Functions - -There are 3 functions that will need to be implemented, each of these -functions will need to have MODEL be replaced with the actual modeltype as -it is defined in the BETY database. - -* `write.config.MODEL.R` - - This will write the configuratin file as well as the job launcher used by - PEcAn. There is an example of the job execution script in the template - folder. The configuration file can also be a template that is found based - on the revision number of the model. This should use the computed results - specified in defaults and trait.values to write a configuration file - based on the PFT and traits found. - -* `met2model.MODEL.R` - - This will convert the standard Met CF file to the model specific file - format. This will allow PEcAn to create metereological files for the - specific site and model. This will only be called if no meterological - data is found for that specific site and model combination. - -* `model2netcdf.MODEL.R` - - This will convert the model specific output to NACP Intercomparison - format. 
After this function is finished PEcAn will use the generated - output and not use the model specific outputs. The outputs should be - named YYYY.nc - -### Additional Changes - -* `README.md` - -This file should contain basic background information about the model. -At a minimum, this should include the scientific motivation and scope, -name(s) of maintainer(s), links to project homepage, and a list of a few -key publications. -relevant publications. - -* `/tests/testthat/` - -Each package should have tests that cover the key functions of the package, -at a minimum, the three functions above. - -* documentation - -Update the `NAMESPACE`, `DESCRIPTION` and `man/*.Rd` files by running - -```r -devtools("models//") -``` From ae3cbb645341110b811dc2373367eda6e5ca45cb Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:22:24 -0400 Subject: [PATCH 0453/2289] typo --- models/basgra/R/run_BASGRA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 9523af216d7..3b0b7135c05 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -227,7 +227,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela years <- seq(start_year, end_year) for (y in years) { - thisyear <- output[ , which(outputNames == "year"] == y + thisyear <- output[ , outputNames == "year"] == y outlist <- list() outlist[[1]] <- output[thisyear, which(outputNames == "LAI")] # LAI in (m2 m-2) From 9bcb22eda145aea2bb20a735c42f3e8786834384 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:23:10 -0400 Subject: [PATCH 0454/2289] delete testthat readme --- models/basgra/tests/testthat/README.txt | 3 --- 1 file changed, 3 deletions(-) delete mode 100644 models/basgra/tests/testthat/README.txt diff --git a/models/basgra/tests/testthat/README.txt b/models/basgra/tests/testthat/README.txt deleted file mode 100644 index b11fefae099..00000000000 --- a/models/basgra/tests/testthat/README.txt +++ /dev/null @@ -1,3 +0,0 @@ -Place your tests here. They will be executed in this folder, so you -can place any data you need in this folder as well (or in a subfolder -called data). 
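Commit 0450 above swaps a bare `load()` for `PEcAn.utils::load_local()` so the objects inside the `.Rdata` file no longer land silently in the calling frame. A minimal sketch of that pattern, assuming only base R; the function name here is illustrative and this is not the PEcAn.utils source, just the idea:

```r
# Load an .Rdata file into a scratch environment instead of the caller's
# frame, then hand its contents back as a named list. This is why the
# BASGRA write.config code can pick out $default_params explicitly rather
# than relying on load() to create a variable of that name as a side effect.
load_rdata_as_list <- function(path) {
  scratch <- new.env(parent = emptyenv())
  load(path, envir = scratch)   # nothing leaks into the caller's workspace
  as.list(scratch)
}

# usage sketch:
# params <- load_rdata_as_list("BASGRA_params.Rdata")$default_params
```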
From 82a4484ad5bcdf9609aadfaf897cfe1e199446aa Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 16:24:25 +0300 Subject: [PATCH 0455/2289] close nc Co-Authored-By: Alexey Shiklomanov --- models/basgra/R/run_BASGRA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 3b0b7135c05..e2580d6f5d6 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -62,7 +62,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela ## open netcdf nc <- ncdf4::nc_open(old.file) - +on.exit(ncdf4::nc_close(nc), add = TRUE) ## convert time to seconds sec <- nc$dim$time$vals sec <- udunits2::ud.convert(sec, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") From f48e0f47e399ba66fb3963982df8841e8260f239 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:26:04 -0400 Subject: [PATCH 0456/2289] drop lines --- models/basgra/DESCRIPTION | 2 -- 1 file changed, 2 deletions(-) diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION index c6837f7b478..a6a138ed7de 100644 --- a/models/basgra/DESCRIPTION +++ b/models/basgra/DESCRIPTION @@ -4,8 +4,6 @@ Title: PEcAn package for integration of the BASGRA model Version: 1.7.1 Date: 2019-09-05 Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut", "cre")) -Author: Istem Fer -Maintainer: Istem Fer Description: This module provides functions to link the BASGRA model to PEcAn. Imports: PEcAn.logger, From 5c42963e8ebe782dd556d3cf4fae3297e984c55c Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 27 Sep 2019 09:31:43 -0400 Subject: [PATCH 0457/2289] run check --- models/basgra/DESCRIPTION | 2 +- models/basgra/man/run_BASGRA.Rd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION index a6a138ed7de..0bf255198fe 100644 --- a/models/basgra/DESCRIPTION +++ b/models/basgra/DESCRIPTION @@ -3,7 +3,7 @@ Type: Package Title: PEcAn package for integration of the BASGRA model Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut", "cre")) +Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut", "cre"))) Description: This module provides functions to link the BASGRA model to PEcAn. Imports: PEcAn.logger, diff --git a/models/basgra/man/run_BASGRA.Rd b/models/basgra/man/run_BASGRA.Rd index eb07ee94afe..84b55fbb3ec 100644 --- a/models/basgra/man/run_BASGRA.Rd +++ b/models/basgra/man/run_BASGRA.Rd @@ -8,7 +8,7 @@ run_BASGRA(run_met, run_params, start_date, end_date, outdir, sitelat, sitelon) } \arguments{ -\item{run_met}{path to climate file, should change when I get rid of met2model?} +\item{run_met}{path to CF met} \item{run_params}{parameter vector} From ba392be75841903b5a087a4029f1178002e663fb Mon Sep 17 00:00:00 2001 From: Hamze Dokoohaki Date: Fri, 27 Sep 2019 10:00:23 -0400 Subject: [PATCH 0458/2289] Update base/workflow/R/run.write.configs.R Co-Authored-By: Alexey Shiklomanov --- base/workflow/R/run.write.configs.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index d4d822e1eca..baedbe8c567 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -37,7 +37,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo tryCatch({ con <- PEcAn.DB::db.open(...) 
- on.exit(PEcAn.DB::db.close(con) + on.exit(PEcAn.DB::db.close(con), add = TRUE) }, error = function(e) { PEcAn.logger::logger.severe("Connection requested, but failed to open with the following error: ", conditionMessage(e) }) From 6bec32eed90eb7a8bc8acab5d09f795073244fbc Mon Sep 17 00:00:00 2001 From: hamzed Date: Fri, 27 Sep 2019 10:12:33 -0400 Subject: [PATCH 0459/2289] fix the travis --- base/workflow/R/run.write.configs.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index baedbe8c567..30749b83068 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -39,7 +39,9 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo con <- PEcAn.DB::db.open(...) on.exit(PEcAn.DB::db.close(con), add = TRUE) }, error = function(e) { - PEcAn.logger::logger.severe("Connection requested, but failed to open with the following error: ", conditionMessage(e) + PEcAn.logger::logger.severe( + "Connection requested, but failed to open with the following error: ", + conditionMessage(e)) }) files <- PEcAn.DB::dbfile.check("Posterior", From 23003c2497c38915e5636213774bacf88ed8b61d Mon Sep 17 00:00:00 2001 From: hamzed Date: Fri, 27 Sep 2019 10:58:58 -0400 Subject: [PATCH 0460/2289] trying to solve Travis --- base/workflow/R/run.write.configs.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 30749b83068..8d11c8858c1 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -36,7 +36,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo if (!is.null(settings$pfts[[i]]$posteriorid)) { tryCatch({ - con <- PEcAn.DB::db.open(...) 
+ con <- PEcAn.DB::db.open(settings$database$bety) on.exit(PEcAn.DB::db.close(con), add = TRUE) }, error = function(e) { PEcAn.logger::logger.severe( From 981ff89f83aef4986b05afe0277c18215c3fb9f0 Mon Sep 17 00:00:00 2001 From: hamzed Date: Fri, 27 Sep 2019 11:40:45 -0400 Subject: [PATCH 0461/2289] quick fix --- modules/uncertainty/R/ensemble.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index 88c09333b81..9041629c7b1 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -215,8 +215,7 @@ write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, # Open connection to database so we can store all run/ensemble information con <- try(PEcAn.DB::db.open(settings$database$bety)) - on.exit(try(PEcAn.DB::db.close(con), silent = TRUE, add = TRUE) - ) + on.exit(try(PEcAn.DB::db.close(con), silent = TRUE), add = TRUE) # If we fail to connect to DB then we set to NULL if (inherits(con, "try-error")) { From f28a8930309477f1b4e8e3556fc0a0a5afdd66c4 Mon Sep 17 00:00:00 2001 From: hamzed Date: Fri, 27 Sep 2019 14:25:29 -0400 Subject: [PATCH 0462/2289] posterior.files tag --- base/workflow/R/run.write.configs.R | 5 ++--- base/workflow/R/runModule.run.write.configs.R | 12 +++++++++++- 2 files changed, 13 insertions(+), 4 deletions(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 8d11c8858c1..4da4faa581e 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -26,8 +26,6 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo posterior.files = rep(NA, length(settings$pfts)), overwrite = TRUE) { - - ## Which posterior to use? 
for (i in seq_along(settings$pfts)) {
     ## if posterior.files is specified use that
@@ -43,7 +41,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo
         "Connection requested, but failed to open with the following error: ",
         conditionMessage(e))
     })
-    
+
     files <- PEcAn.DB::dbfile.check("Posterior",
                                     settings$pfts[[i]]$posteriorid, con,
                                     settings$host$name, return.all = TRUE)
@@ -63,6 +61,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo
   model <- settings$model$type
   scipen <- getOption("scipen")
   options(scipen = 12)
+  
   PEcAn.uncertainty::get.parameter.samples(settings, posterior.files, ens.sample.method)
   load(file.path(settings$outdir, "samples.Rdata"))
diff --git a/base/workflow/R/runModule.run.write.configs.R b/base/workflow/R/runModule.run.write.configs.R
index 1e06def102c..fd168d6082b 100644
--- a/base/workflow/R/runModule.run.write.configs.R
+++ b/base/workflow/R/runModule.run.write.configs.R
@@ -17,7 +17,17 @@ runModule.run.write.configs <- function(settings, overwrite = TRUE) {
     # double check making sure we have method for parameter sampling
     if (is.null(settings$ensemble$samplingspace$parameters$method)) settings$ensemble$samplingspace$parameters$method <- "uniform"
     ens.sample.method <- settings$ensemble$samplingspace$parameters$method
-    return(PEcAn.workflow::run.write.configs(settings, write, ens.sample.method, overwrite = overwrite))
+    
+    
+    #check to see if there are posterior.files tags under pft
+    posterior.files.vec<-settings$pfts %>%
+      purrr::map(purrr::possibly('posterior.files', NA_character_)) %>%
+      purrr::modify_depth(1, function(x) {
+        ifelse(is.null(x), NA_character_, x)
+      }) %>%
+      unlist()
+    
+    return(PEcAn.workflow::run.write.configs(settings, write, ens.sample.method, posterior.files = posterior.files.vec, overwrite = overwrite))
   } else {
     stop("runModule.run.write.configs only works with Settings or MultiSettings")
   }
From ef6724d7155a0fcefb1cd0f444f4cbfafbc0a587 Mon Sep 17 00:00:00 2001
From: hamzed
Date: Fri, 27 Sep 2019 15:21:36 -0400
Subject: [PATCH 0463/2289] adding purrr

---
 base/workflow/DESCRIPTION | 1 +
 1 file changed, 1 insertion(+)

diff --git a/base/workflow/DESCRIPTION b/base/workflow/DESCRIPTION
index def34c01ef3..a596475435a 100644
--- a/base/workflow/DESCRIPTION
+++ b/base/workflow/DESCRIPTION
@@ -33,6 +33,7 @@ Imports:
     PEcAn.settings,
     PEcAn.uncertainty,
     PEcAn.utils,
+    purrr (>= 0.2.3),
     XML
 Suggests:
     testthat,
From e771686106fac4369407db68401645eb8c778b7e Mon Sep 17 00:00:00 2001
From: hamzed
Date: Fri, 27 Sep 2019 15:29:11 -0400
Subject: [PATCH 0464/2289] Doc

---
 book_source/03_topical_pages/03_pecan_xml.Rmd | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd
index ea6449863ba..d8fa6562678 100644
--- a/book_source/03_topical_pages/03_pecan_xml.Rmd
+++ b/book_source/03_topical_pages/03_pecan_xml.Rmd
@@ -249,6 +249,7 @@ The PEcAn system requires at least 1 plant functional type (PFT) to be specified
    1
   
+  Path to a post.distns.*.Rdata or prior.distns.Rdata
```
```
@@ -256,6 +257,9 @@ The PEcAn system requires at least 1 plant functional type (PFT) to be specified
 
 * `name` : (required) the name of the PFT, which must *exactly* match the name in the PEcAn database.
 * `outdir`: (optional) Directory path in which PFT-specific output will be stored during meta-analysis and sensitivity analysis. If not specified (recommended), it will be written into `<outdir>/<pftname>`.
 * `constants`: (optional) this section contains information that will be written directly into the model specific configuration files. For example, some models like ED2 use PFT numbers instead of names for PFTs, and those numbers can be specified here. See documentation for model-specific code for details.
+* `posterior.files` (Optional) this tag signals write.config functions to use specific posterior/prior files (such as HPDA or MA analysis output) for generating samples without needing access to the BETY database.
+
+`<posterior.files>`
 
 This information is currently used by the following PEcAn workflow function:
From 9dc97f5a963c7d4ff627dd1a012c2b4a42bf4f2d Mon Sep 17 00:00:00 2001
From: istfer
Date: Sat, 28 Sep 2019 08:38:09 +0300
Subject: [PATCH 0465/2289] enable test

Co-Authored-By: Alexey Shiklomanov
---
 models/basgra/tests/testthat.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/models/basgra/tests/testthat.R b/models/basgra/tests/testthat.R
index d93798b4ffe..fd4d8aec47b 100644
--- a/models/basgra/tests/testthat.R
+++ b/models/basgra/tests/testthat.R
@@ -10,4 +10,4 @@ library(testthat)
 library(PEcAn.utils)
 
 PEcAn.logger::logger.setQuitOnSevere(FALSE)
-#test_check("PEcAn.ModelName")
+test_check("PEcAn.BASGRA")
From a781ac969b0127a80e76d4e7e86b4e2830d62330 Mon Sep 17 00:00:00 2001
From: istfer
Date: Sat, 28 Sep 2019 02:13:06 -0400
Subject: [PATCH 0466/2289] works this way

---
 models/basgra/src/Makevars | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/models/basgra/src/Makevars b/models/basgra/src/Makevars
index 5802f4c8783..2e255b4fa56 100644
--- a/models/basgra/src/Makevars
+++ b/models/basgra/src/Makevars
@@ -30,10 +30,12 @@ OBJECTS = \
 
 all : $(SHLIB)
 
-$(SHLIB) : $(OBJECTS)
+$(SHLIB) : $(SOURCES)
 
-$(OBJECTS) : $(SOURCES)
-	$(F77) -x f95-cpp-input -fPIC -O3 -c -fdefault-real-8 $(SOURCES)
+PKG_FFLAGS = -x f95-cpp-input -fPIC -O3 -c -fdefault-real-8 $(SOURCES)
+
+$(OBJECTS) : $(SOURCES)
+	$(F77) $(PKG_FFLAGS)
 
 
 clean :
From 5f0b3bbc09757806e3286807b0b863f9f5104ceb Mon Sep 17 00:00:00 2001
From: istfer
Date: Sat, 28 Sep 2019 03:05:29 -0400
Subject: [PATCH 0467/2289] vary a few more parameters

---
 models/basgra/R/write.config.BASGRA.R | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R
index 07256322202..3aaeecb88d5 100644
--- a/models/basgra/R/write.config.BASGRA.R
+++ b/models/basgra/R/write.config.BASGRA.R
@@ -40,14 +40,26 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) {
     pft.traits <- unlist(trait.values[[pft]])
     pft.names <- names(pft.traits)
 
+    # N-C ratio of roots (g N g-1 C)
     if ("c2n_fineroot" %in% pft.names) {
       run_params[which(names(default_params) == "NCR")] <- 1/pft.traits[which(pft.names == "c2n_fineroot")]
     }
 
+    # PAR extinction coefficient (m2 m-2)
     if ("extinction_coefficient" %in% pft.names) {
       run_params[which(names(default_params) == "K")] <- pft.traits[which(pft.names == "extinction_coefficient")]
     }
 
+    # Transpiration coefficient (mm d-1)
+    if ("transpiration_coefficient" %in% pft.names) {
+      run_params[which(names(default_params) == 
"TRANCO")] <- pft.traits[which(pft.names == "transpiration_coefficient")] + } + + # Temperature that kills half the plants in a day (degrees Celcius) + if ("plant_min_temp" %in% pft.names) { + run_params[which(names(default_params) == "LT50")] <- pft.traits[which(pft.names == "plant_min_temp")] + } + } From 0ed96660f82f1407f3ec784ab5d9606a65014eda Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 29 Sep 2019 08:06:13 -0400 Subject: [PATCH 0468/2289] one more param --- models/basgra/R/write.config.BASGRA.R | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 3aaeecb88d5..4b6516e2e91 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -60,6 +60,10 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { run_params[which(names(default_params) == "LT50")] <- pft.traits[which(pft.names == "plant_min_temp")] } + if ("phyllochron" %in% pft.names) { + run_params[which(names(default_params) == "PHY")] <- pft.traits[which(pft.names == "phyllochron")] + } + } From d822fb08f86d24d4bb6399d55c8a3d63afd186ad Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 29 Sep 2019 09:04:32 -0400 Subject: [PATCH 0469/2289] rename var --- models/basgra/R/write.config.BASGRA.R | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 4b6516e2e91..787f9598888 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -30,9 +30,9 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { outdir <- file.path(settings$host$outdir, run.id) # load default(!) BASGRA params - run_params <- PEcAn.utils::load_local(system.file("BASGRA_params.Rdata",package = "PEcAn.BASGRA"))$default_params + run_params <- PEcAn.utils::load_local(system.file("BASGRA_params.Rdata", package = "PEcAn.BASGRA"))$default_params - run_params[which(names(default_params) == "LAT")] <- as.numeric(settings$run$site$lat) + run_params[which(names(run_params) == "LAT")] <- as.numeric(settings$run$site$lat) #### write run-specific PFT parameters here #### Get parameters being handled by PEcAn for (pft in seq_along(trait.values)) { @@ -42,26 +42,26 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { # N-C ratio of roots (g N g-1 C) if ("c2n_fineroot" %in% pft.names) { - run_params[which(names(default_params) == "NCR")] <- 1/pft.traits[which(pft.names == "c2n_fineroot")] + run_params[which(names(run_params) == "NCR")] <- 1/pft.traits[which(pft.names == "c2n_fineroot")] } # PAR extinction coefficient (m2 m-2) if ("extinction_coefficient" %in% pft.names) { - run_params[which(names(default_params) == "K")] <- pft.traits[which(pft.names == "extinction_coefficient")] + run_params[which(names(run_params) == "K")] <- pft.traits[which(pft.names == "extinction_coefficient")] } # Transpiration coefficient (mm d-1) if ("transpiration_coefficient" %in% pft.names) { - run_params[which(names(default_params) == "TRANCO")] <- pft.traits[which(pft.names == "transpiration_coefficient")] + run_params[which(names(run_params) == "TRANCO")] <- pft.traits[which(pft.names == "transpiration_coefficient")] } # Temperature that kills half the plants in a day (degrees Celcius) if ("plant_min_temp" %in% pft.names) { - run_params[which(names(default_params) == "LT50")] <- pft.traits[which(pft.names == "plant_min_temp")] + 
run_params[which(names(run_params) == "LT50")] <- pft.traits[which(pft.names == "plant_min_temp")] } if ("phyllochron" %in% pft.names) { - run_params[which(names(default_params) == "PHY")] <- pft.traits[which(pft.names == "phyllochron")] + run_params[which(names(run_params) == "PHY")] <- pft.traits[which(pft.names == "phyllochron")] } } From 292545009b42d7abeb86df56bbae3ccf18da0feb Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 30 Sep 2019 04:07:06 -0400 Subject: [PATCH 0470/2289] allow partial year --- models/basgra/R/run_BASGRA.R | 66 ++++++++++++++++++++++-------------- 1 file changed, 41 insertions(+), 25 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index e2580d6f5d6..3126825ae9d 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -24,19 +24,19 @@ ##-------------------------------------------------------------------------------------------------# run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitelat, sitelon){ - + start_date <- as.POSIXlt(start_date, tz = "UTC") end_date <- as.POSIXlt(end_date, tz = "UTC") start_year <- lubridate::year(start_date) end_year <- lubridate::year(end_date) - + ################################################################################ ### FUNCTIONS FOR READING WEATHER DATA mini_met2model_BASGRA <- function(file_path, start_date, start_year, end_date, end_year) { - + # TODO: read partial years out.list <- list() @@ -44,17 +44,18 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela ctr <- 1 for(year in seq(start_year, end_year)) { - diy <- PEcAn.utils::days_in_year(year) + simdays <- seq(lubridate::yday(start_date), lubridate::yday(end_date)) - NWEATHER <- as.integer(8) - matrix_weather <- matrix( 0., nrow=diy, ncol=NWEATHER ) + NDAYS <- length(simdays) + NWEATHER <- as.integer(8) + matrix_weather <- matrix( 0., nrow = NDAYS, ncol = NWEATHER ) # prepare data frame for BASGRA format, daily inputs, but doesn't have to be full year - matrix_weather[ ,1] <- rep(year, diy) # year - matrix_weather[ ,2] <- seq_len(diy) # day of year, simple implementation for now + matrix_weather[ ,1] <- rep(year, NDAYS) # year + matrix_weather[ ,2] <- simdays old.file <- file.path(dirname(file_path), paste(basename(file_path), year, "nc", sep = ".")) @@ -62,7 +63,8 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela ## open netcdf nc <- ncdf4::nc_open(old.file) -on.exit(ncdf4::nc_close(nc), add = TRUE) + on.exit(ncdf4::nc_close(nc), add = TRUE) + ## convert time to seconds sec <- nc$dim$time$vals sec <- udunits2::ud.convert(sec, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") @@ -71,7 +73,7 @@ on.exit(ncdf4::nc_close(nc), add = TRUE) tstep <- round(86400 / dt) dt <- 86400 / tstep - ind <- rep(seq_len(diy), each = tstep) + ind <- rep(simdays, each = tstep) rad <- ncdf4::ncvar_get(nc, "surface_downwelling_shortwave_flux_in_air") gr <- rad * 0.0864 # W m-2 to MJ m-2 d-1 @@ -96,9 +98,19 @@ on.exit(ncdf4::nc_close(nc), add = TRUE) raini <- tapply(Rain*86400, ind, mean, na.rm = TRUE) matrix_weather[ ,7] <- raini # precipitation (mm d-1) - U <- ncdf4::ncvar_get(nc, "eastward_wind") - V <- ncdf4::ncvar_get(nc, "northward_wind") - ws <- sqrt(U ^ 2 + V ^ 2) + U <- try(ncdf4::ncvar_get(nc, "eastward_wind")) + V <- try(ncdf4::ncvar_get(nc, "northward_wind")) + if(is.numeric(U) & is.numeric(V)){ + ws <- sqrt(U ^ 2 + V ^ 2) + }else{ + ws <- try(ncdf4::ncvar_get(nc, "wind_speed")) + if (is.numeric(ws)) { + 
PEcAn.logger::logger.info("eastward_wind and northward_wind absent; using wind_speed") + }else{ + PEcAn.logger::logger.severe("No variable found to calculate wind_speed") + } + } + matrix_weather[ ,8] <- tapply(ws, ind, mean, na.rm = TRUE) # mean wind speed (m s-1) @@ -184,25 +196,29 @@ on.exit(ncdf4::nc_close(nc), add = TRUE) year_start <- as.integer(start_year) doy_start <- as.integer(lubridate::yday(start_date)) - NDAYS <- as.integer(sum(PEcAn.utils::days_in_year(seq(start_year, end_year)))) # could be partial years, change later - + matrix_weather <- mini_met2model_BASGRA(run_met, start_date, start_year, end_date, end_year) + NDAYS <- as.integer(nrow(matrix_weather)) + calendar_fert <- matrix( 0, nrow=100, ncol=3 ) calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) - calendar_Ndep[1,] <- c(1900, 1,0) - calendar_Ndep[2,] <- c(2100, 366, 0) + #calendar_Ndep[1,] <- c(1900, 1,0) + #calendar_Ndep[2,] <- c(2100, 366, 0) days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) # hardcoding these for now, should be able to modify later on - calendar_fert[1,] <- c( 2000, 115, 140*1000/ 10000 ) # 140 kg N ha-1 applied on day 115 - calendar_fert[2,] <- c( 2000, 150, 80*1000/ 10000 ) # 80 kg N ha-1 applied on day 150 + calendar_fert[1,] <- c( 2018, 125, 140*1000/ 10000 ) # 140 kg N ha-1 applied on day 115 + calendar_fert[2,] <- c( 2018, 250, 80*1000/ 10000 ) # 80 kg N ha-1 applied on day 150 # calendar_fert[3,] <- c( 2001, 123, 0*1000/ 10000 ) # 0 kg N ha-1 applied on day 123 calendar_Ndep[1,] <- c( 1900, 1, 2*1000/(10000*365) ) # 2 kg N ha-1 y-1 N-deposition in 1900 calendar_Ndep[2,] <- c( 1980, 366, 20*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 calendar_Ndep[3,] <- c( 2100, 366, 20*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 - days_harvest [1,] <- c( 2000, 150 ) - days_harvest [2,] <- c( 2000, 216 ) + + # Qvidja 2018 Harvest dates + days_harvest [1,] <- c( 2018, 163 ) + days_harvest [2,] <- c( 2018, 233 ) + days_harvest [3,] <- c( 2018, 266 ) days_harvest <- as.integer(days_harvest) @@ -216,14 +232,14 @@ on.exit(ncdf4::nc_close(nc), add = TRUE) NDAYS, NOUT, matrix(0, NDAYS, NOUT))[[8]] - + ############################# WRITE OUTPUTS ########################### # writing model outputs already in standard format # only LAI and CropYield for now - + years <- seq(start_year, end_year) for (y in years) { @@ -238,7 +254,7 @@ on.exit(ncdf4::nc_close(nc), add = TRUE) # ******************** Declare netCDF dimensions and variables ********************# t <- ncdf4::ncdim_def(name = "time", units = paste0("days since ", y, "-01-01 00:00:00"), - seq_len(PEcAn.utils::days_in_year(y)), + matrix_weather[matrix_weather[,1] == y, 2], # allow partial years, this info is already in matrix_weather calendar = "standard", unlim = TRUE) @@ -251,7 +267,7 @@ on.exit(ncdf4::nc_close(nc), add = TRUE) var <- list() var[[1]] <- PEcAn.utils::to_ncvar("LAI", dims) var[[2]] <- PEcAn.utils::to_ncvar("CropYield", dims) - + # ******************** Declare netCDF variables ********************# ### Output netCDF data From aeb0ef67f22f42f9d5ffe6cde7bac6198d55c810 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 30 Sep 2019 06:10:32 -0400 Subject: [PATCH 0471/2289] matrix weather needs to have 10K rows --- models/basgra/R/run_BASGRA.R | 25 ++++++++++++++++++------- 1 file changed, 18 insertions(+), 7 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 3126825ae9d..acc1cbad0c2 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -78,25 
+78,25 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela rad <- ncdf4::ncvar_get(nc, "surface_downwelling_shortwave_flux_in_air") gr <- rad * 0.0864 # W m-2 to MJ m-2 d-1 - matrix_weather[ ,3] <- tapply(gr, ind, mean, na.rm = TRUE) # irradiation (MJ m-2 d-1) + matrix_weather[ ,3] <- round(tapply(gr, ind, mean, na.rm = TRUE), digits = 2) # irradiation (MJ m-2 d-1) Tair <-ncdf4::ncvar_get(nc, "air_temperature") ## in Kelvin Tair_C <- udunits2::ud.convert(Tair, "K", "degC") - t_dmean <- tapply(Tair_C, ind, mean, na.rm = TRUE) # maybe round these numbers + t_dmean <- round(tapply(Tair_C, ind, mean, na.rm = TRUE), digits = 2) # maybe round these numbers matrix_weather[ ,4] <- t_dmean # mean temperature (degrees Celsius) matrix_weather[ ,5] <- t_dmean # that's what they had in read_weather_Bioforsk RH <-ncdf4::ncvar_get(nc, "relative_humidity") # % - RH <- tapply(RH, ind, mean, na.rm = TRUE) + RH <- round(tapply(RH, ind, mean, na.rm = TRUE), digits = 2) # This is vapor pressure according to BASGRA.f90#L86 and environment.f90#L49 - matrix_weather[ ,6] <- exp(17.27*t_dmean/(t_dmean+239)) * 0.6108 * RH / 100 + matrix_weather[ ,6] <- round(exp(17.27*t_dmean/(t_dmean+239)) * 0.6108 * RH / 100, digits = 2) # TODO: check these Rain <- ncdf4::ncvar_get(nc, "precipitation_flux") # kg m-2 s-1 raini <- tapply(Rain*86400, ind, mean, na.rm = TRUE) - matrix_weather[ ,7] <- raini # precipitation (mm d-1) + matrix_weather[ ,7] <- round(raini, digits = 2) # precipitation (mm d-1) U <- try(ncdf4::ncvar_get(nc, "eastward_wind")) V <- try(ncdf4::ncvar_get(nc, "northward_wind")) @@ -112,7 +112,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela } - matrix_weather[ ,8] <- tapply(ws, ind, mean, na.rm = TRUE) # mean wind speed (m s-1) + matrix_weather[ ,8] <- round(tapply(ws, ind, mean, na.rm = TRUE), digits = 2) # mean wind speed (m s-1) ncdf4::nc_close(nc) } else { @@ -125,6 +125,17 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela } # end for-loop around years matrix_weather <- do.call("rbind", out.list) + + #BASGRA wants the matrix_weather to be of 10000 x 8 matrix + NMAXDAYS <- as.integer(10000) + nmw <- nrow(matrix_weather) + if(nmw > NMAXDAYS){ + + }else{ + matrix_weather <- rbind(matrix_weather, matrix( 0., nrow = (NMAXDAYS - nmw), ncol = 8 )) + + } + return(matrix_weather) } @@ -199,7 +210,7 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela matrix_weather <- mini_met2model_BASGRA(run_met, start_date, start_year, end_date, end_year) - NDAYS <- as.integer(nrow(matrix_weather)) + NDAYS <- as.integer(sum(matrix_weather[,1] != 0)) calendar_fert <- matrix( 0, nrow=100, ncol=3 ) calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) From 56f49ef389ca7f130369457fd46412a06669aaf0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Sep 2019 11:20:37 +0100 Subject: [PATCH 0472/2289] avoid complaint when fixing one of several existing undefined globals --- scripts/check_with_errors.R | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 67dfec062c5..ac66479e775 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -140,6 +140,10 @@ if (cmp$status != "+") { } } + # Each note got its own line above, so this line is redundant + # and also causes spurious error when an existing note is fixed + cur_msgs <- cur_msgs[!grepl("NOTE: Undefined global functions or variables:", cur_msgs)] + lines_changed <- 
setdiff(cur_msgs, prev_msgs)
   if (length(lines_changed) > 0) {
     cat("R check of", pkg, "returned new problems:\n")
From 3efd3db457499aff80383ccb81065e3b326d2b3c Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Mon, 30 Sep 2019 12:01:32 +0100
Subject: [PATCH 0473/2289] wording

---
 scripts/check_with_errors.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index ac66479e775..bbe18fdda3e 100644
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -140,8 +140,8 @@ if (cmp$status != "+") {
     }
   }
 
-  # Each note got its own line above, so this line is redundant
-  # and also causes spurious error when an existing note is fixed
+  # This line is redundant (summarizes issues also reported individually)
+  # and creates a false positive when an existing issue is fixed
   cur_msgs <- cur_msgs[!grepl("NOTE: Undefined global functions or variables:", cur_msgs)]
 
   lines_changed <- setdiff(cur_msgs, prev_msgs)
   if (length(lines_changed) > 0) {
     cat("R check of", pkg, "returned new problems:\n")
From f191174979476751a3e03e34c99591f82c7a8e3e Mon Sep 17 00:00:00 2001
From: istfer
Date: Mon, 30 Sep 2019 08:21:32 -0400
Subject: [PATCH 0474/2289] fill case

---
 models/basgra/R/run_BASGRA.R | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R
index acc1cbad0c2..041ca792e8e 100644
--- a/models/basgra/R/run_BASGRA.R
+++ b/models/basgra/R/run_BASGRA.R
@@ -128,12 +128,14 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela
 
   #BASGRA wants the matrix_weather to be of 10000 x 8 matrix
   NMAXDAYS <- as.integer(10000)
-  nmw <- nrow(matrix_weather)
+  nmw <- nrow(matrix_weather)
   if(nmw > NMAXDAYS){
-    
+    matrix_weather <- matrix_weather[seq_len(NMAXDAYS), ]
+    PEcAn.logger::logger.info("BASGRA currently runs only", NMAXDAYS,
+      "simulation days. Limiting the run to the first ", NMAXDAYS, "days of the requested period.")
   }else{
+    # append zeros at the end
     matrix_weather <- rbind(matrix_weather, matrix( 0., nrow = (NMAXDAYS - nmw), ncol = 8 ))
-    
   }
 
   return(matrix_weather)
@@ -219,8 +221,8 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela
   days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 )
 
   # hardcoding these for now, should be able to modify later on
-  calendar_fert[1,] <- c( 2018, 125, 140*1000/ 10000 ) # 140 kg N ha-1 applied on day 115
-  calendar_fert[2,] <- c( 2018, 250, 80*1000/ 10000 ) # 80 kg N ha-1 applied on day 150
+  calendar_fert[1,] <- c( 2018, 125, 0*1000/ 10000 ) # 0 kg N ha-1 applied on day 125 (fertilization switched off)
+  calendar_fert[2,] <- c( 2018, 250, 0*1000/ 10000 ) # 0 kg N ha-1 applied on day 250 (fertilization switched off)
From df0acae73d786ae30108ea5d26703668c25309c0 Mon Sep 17 00:00:00 2001
From: istfer
Date: Mon, 30 Sep 2019 10:11:49 -0400
Subject: [PATCH 0475/2289] add imports

---
 models/basgra/DESCRIPTION | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION
index 0bf255198fe..a9cc15a331a 100644
--- a/models/basgra/DESCRIPTION
+++ b/models/basgra/DESCRIPTION
@@ -7,7 +7,10 @@ Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut",
 Description: This module provides functions to link the BASGRA model to PEcAn.
Imports: PEcAn.logger, - PEcAn.utils (>= 1.4.8) + PEcAn.utils (>= 1.4.8), + lubridate, + ncdf4, + udunits2 Suggests: testthat (>= 1.0.2) OS_type: unix From e06f9e9712e1562b584db36c9468ba8667de542f Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 30 Sep 2019 10:12:43 -0400 Subject: [PATCH 0476/2289] remove binary tag --- models/basgra/R/write.config.BASGRA.R | 2 -- models/basgra/inst/template.job | 1 - 2 files changed, 3 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 787f9598888..4bbb5b8f5f6 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -107,8 +107,6 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { jobsh <- gsub("@OUTDIR@", outdir, jobsh) jobsh <- gsub("@RUNDIR@", rundir, jobsh) - jobsh <- gsub("@BINARY@", settings$model$binary, jobsh) - jobsh <- gsub("@RUN_PARAMS@", paste0("c(",listToArgString(run_params),")"), jobsh) writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh")) diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job index 290762bb251..d767c3ff9bc 100644 --- a/models/basgra/inst/template.job +++ b/models/basgra/inst/template.job @@ -13,7 +13,6 @@ mkdir -p "@OUTDIR@" # see if application needs running if [ ! -e "@OUTDIR@/results.csv" ]; then - "@BINARY@" # convert to MsTMIP echo "require (PEcAn.BASGRA) run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) From 6dd0d7da8d44c70b89585b8274629967d7032299 Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 30 Sep 2019 10:13:39 -0400 Subject: [PATCH 0477/2289] remove require --- models/basgra/inst/template.job | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job index d767c3ff9bc..1e2c4dc9ab6 100644 --- a/models/basgra/inst/template.job +++ b/models/basgra/inst/template.job @@ -14,7 +14,7 @@ mkdir -p "@OUTDIR@" if [ ! 
-e "@OUTDIR@/results.csv" ]; then # convert to MsTMIP - echo "require (PEcAn.BASGRA) + echo "library (PEcAn.BASGRA) run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) " | R --vanilla From 8a8e5fad18f442d365e07cb013ae5f3ecbe91c7a Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 30 Sep 2019 12:00:51 -0400 Subject: [PATCH 0478/2289] models with no revision --- base/workflow/inst/batch_run.R | 1 + 1 file changed, 1 insertion(+) diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index 45571243dcf..c0666d864a0 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -43,6 +43,7 @@ outfile <- get_arg(argv, "--outfile", "test_result_table.csv") # Create outfile directory if it doesn't exist dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE) input_table <- read.csv(input_table_file, stringsAsFactors = FALSE) %>% + tidyr::replace_na(list(revision = "")) %>% mutate( folder= paste(model, format(as.Date(start_date), "%Y-%m"), From 688645e1888d3e4b2fed42a5cf1f647e3063fc42 Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 30 Sep 2019 15:01:31 -0400 Subject: [PATCH 0479/2289] Solving the matrix issue --- modules/uncertainty/R/get.parameter.samples.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/uncertainty/R/get.parameter.samples.R b/modules/uncertainty/R/get.parameter.samples.R index 7dcbf2d2a2e..87ff35eaa72 100644 --- a/modules/uncertainty/R/get.parameter.samples.R +++ b/modules/uncertainty/R/get.parameter.samples.R @@ -131,7 +131,7 @@ get.parameter.samples <- function(settings, PEcAn.logger::logger.info("using ", samples.num, "samples per trait") for (prior in priors) { if (prior %in% param.names[[i]]) { - samples <- as.matrix(trait.mcmc[[prior]][, "beta.o"]) + samples <- trait.mcmc[[prior]] %>% purrr::map(~ .x[,'beta.o']) %>% unlist() %>% as.matrix() } else { samples <- PEcAn.priors::get.sample(prior.distns[prior, ], samples.num) } From 1e137be33fee99e709b298ec4d4be9faf5fbc620 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 1 Oct 2019 15:55:52 +0200 Subject: [PATCH 0480/2289] [Build] switch from devtools::test to testthat::test_dir Workaround for devtools#2129 --- Makefile | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/Makefile b/Makefile index ecb3e630c91..ed2c82dbdc9 100644 --- a/Makefile +++ b/Makefile @@ -110,7 +110,15 @@ depends_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} \ -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" install_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" check_R_pkg = ./scripts/time.sh "${1}" Rscript scripts/check_with_errors.R $(strip $(1)) -test_R_pkg = ./scripts/time.sh "${1}" Rscript -e "devtools::test('"$(strip $(1))"', stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can + +# Would use devtools::test(), but in devtools 2.2.1 that doesn't accept stop_on_failure=TRUE +# To work around, we reimplement about half of test() here +test_R_pkg = ./scripts/time.sh "${1}" Rscript \ + -e "pkg <- devtools::as.package('$(strip $(1))')" \ + -e "env <- devtools::load_all(pkg[['path']], quiet = TRUE)[['env']]" \ + -e "testthat::test_dir(paste0(pkg[['path']], '/tests/testthat'), env = env," \ + -e "stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can + doc_R_pkg = ./scripts/time.sh 
"${1}" Rscript -e "devtools::document('"$(strip $(1))"')" $(ALL_PKGS_I) $(ALL_PKGS_C) $(ALL_PKGS_T) $(ALL_PKGS_D): | .install/devtools .install/roxygen2 .install/testthat From a18aad7ecda0825ce008cae8a0b2545b1da896b3 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 1 Oct 2019 16:48:14 +0200 Subject: [PATCH 0481/2289] avoid error when no test files ...someday this *should* be an error, but not today --- Makefile | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/Makefile b/Makefile index ed2c82dbdc9..91e2b2a9b35 100644 --- a/Makefile +++ b/Makefile @@ -111,12 +111,13 @@ depends_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} \ install_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" check_R_pkg = ./scripts/time.sh "${1}" Rscript scripts/check_with_errors.R $(strip $(1)) -# Would use devtools::test(), but in devtools 2.2.1 that doesn't accept stop_on_failure=TRUE -# To work around, we reimplement about half of test() here +# Would use devtools::test(), but devtools 2.2.1 hardcodes stop_on_failure=FALSE +# To work around this, we reimplement about half of test() here :( test_R_pkg = ./scripts/time.sh "${1}" Rscript \ - -e "pkg <- devtools::as.package('$(strip $(1))')" \ - -e "env <- devtools::load_all(pkg[['path']], quiet = TRUE)[['env']]" \ - -e "testthat::test_dir(paste0(pkg[['path']], '/tests/testthat'), env = env," \ + -e "if (length(list.files('$(strip $(1))/tests/testthat', 'test.*.[rR]')) == 0) {" \ + -e "print('No tests found'); quit('no') }" \ + -e "env <- devtools::load_all('$(strip $(1))', quiet = TRUE)[['env']]" \ + -e "testthat::test_dir('$(strip $(1))/tests/testthat', env = env," \ -e "stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can doc_R_pkg = ./scripts/time.sh "${1}" Rscript -e "devtools::document('"$(strip $(1))"')" From 73488f18bdf92f836d363c269c2c91ef26426895 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 1 Oct 2019 16:18:47 -0400 Subject: [PATCH 0482/2289] The adjustment function was returning NA values for the last time step because the Z was getting INF values for Litter. Added line in to change INF values in Z to 0. 
This function was causing the SDA to crash and this change prevented that --- modules/assim.sequential/R/Adjustment.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/R/Adjustment.R b/modules/assim.sequential/R/Adjustment.R index c10b2dad2fd..306f653b2f2 100644 --- a/modules/assim.sequential/R/Adjustment.R +++ b/modules/assim.sequential/R/Adjustment.R @@ -30,6 +30,7 @@ adj.ens<-function(Pf, X, mu.f, mu.a, Pa){ } Z[is.na(Z)]<-0 + Z[is.infinite(Z)] <- 0 ## analysis S_a <- svd(Pa) @@ -45,8 +46,8 @@ adj.ens<-function(Pf, X, mu.f, mu.a, Pa){ } - if(sum(mu.a - colMeans(X_a)) > 1 | sum(mu.a - colMeans(X_a)) < -1) logger.warn('Problem with ensemble adjustment (1)') - if(sum(diag(Pa) - diag(cov(X_a))) > 5 | sum(diag(Pa) - diag(cov(X_a))) < -5) logger.warn('Problem with ensemble adjustment (2)') + #if(sum(mu.a - colMeans(X_a)) > 1 | sum(mu.a - colMeans(X_a)) < -1) logger.warn('Problem with ensemble adjustment (1)') + #if(sum(diag(Pa) - diag(cov(X_a))) > 5 | sum(diag(Pa) - diag(cov(X_a))) < -5) logger.warn('Problem with ensemble adjustment (2)') analysis <- as.data.frame(X_a) From 15969e0cea90aac3fb2dd6c4be9fcefc305f51bd Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 1 Oct 2019 16:20:44 -0400 Subject: [PATCH 0483/2289] Added in Qle to get forecast, updated ensemble number --- .../inst/WillowCreek/gefs.sipnet.template.xml | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index c36bbd06fe5..34aaba8e067 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -14,6 +14,12 @@ -9999 9999
+  
+   Qle
+   MgC/ha/yr
+   -9999
+   9999
+  
   
    TotSoilCarb
    KgC/m^2
@@ -76,12 +82,12 @@
    /fs/data3/kzarada/pecan.data/dbfiles/
 
-  
-   temperate.coniferous
-   temperate.deciduous
+  
+   temperate.deciduous.ALL
    
     1
    
+   1000016041
 
@@ -89,7 +95,7 @@
    FALSE
-   10
+   50
    NEE
    2018
    2018
From 486e9ae83846f42ab1dac6206c63b6ab6369c4fb Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Mon, 30 Sep 2019 15:18:09 +0100
Subject: [PATCH 0484/2289] don't evaluate as a chunk, just highlight syntax

---
 book_source/04_appendix/04-package-dependencies.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/book_source/04_appendix/04-package-dependencies.Rmd b/book_source/04_appendix/04-package-dependencies.Rmd
index 879d5465791..bc6e68a93f0 100644
--- a/book_source/04_appendix/04-package-dependencies.Rmd
+++ b/book_source/04_appendix/04-package-dependencies.Rmd
@@ -46,7 +46,7 @@ PEcAn style is to import very few functions and instead use fully namespaced fun
 If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package, which probably lives in a file called either `zzz.R` or `<packagename>-package.R`:
 
-```{r}
+```r
 #' What your package does
 #'
 #' Longer description of the package goes here.
@@ -74,7 +74,7 @@ Loading a dependency into the package namespace is undesirable because it makes
 Attaching packages to the user's search path is even more problematic because it makes it hard for the user to understand *how your package will affect their own code*. Packages attached by a function stay attached after the function exits, so they can cause name collisions far downstream of the function call, potentially in code that has nothing to do with your package. And worse, since names are looked up in reverse order of package loading, it can cause behavior that differs in strange ways depending on the order of lines that look independent of each other:
 
-```{r eval = FALSE}
+```r
 library(Hmisc)
 x = ...
 y = 3
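The search-path hazard that the doc change above describes is easy to reproduce with a well-known masking pair. This is a standalone illustration, not part of any patch in this series:

```r
# stats and dplyr both export a function named filter(); whichever package
# is attached last wins the bare name.
x <- stats::ts(rnorm(12))
stats::filter(x, rep(1/3, 3))    # 3-point moving average, as intended

library(dplyr)                   # attaching dplyr masks stats::filter
# filter(x, rep(1/3, 3))         # would now resolve to dplyr::filter and fail,
                                 # since a ts is not a data frame

stats::filter(x, rep(1/3, 3))    # fully qualified call is immune to load order
```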
y = 3 From 46fea917b43fb02e2ad7551e1585209ff4ca6720 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Wed, 2 Oct 2019 14:24:26 -0400 Subject: [PATCH 0485/2289] I was getting an error in the plotting function that t1 was not found- updating plotting script to make sure all functions have t1 defined --- modules/assim.sequential/R/sda_plotting.R | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 234e5501190..99b1f7688f5 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -34,7 +34,7 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob #Defining some colors generate_colors_sda() - t1 <- 1 + t1 <- 1 var.names <- var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") names.y <- unique(unlist(lapply(obs.mean[t1:t], function(x) { names(x) }))) @@ -124,7 +124,7 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov #Defining some colors generate_colors_sda() - t1 <- 1 + t1 <- 1 ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, function(x) { x })[2, ], use.names = FALSE) var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") @@ -221,7 +221,7 @@ postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, o #Defining some colors generate_colors_sda() - t1 <- 1 + t1 <- 1 ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, function(x) { x })[2, ], use.names = FALSE) names.y <- unique(unlist(lapply(obs.mean[t1:t], function(x) { names(x) }))) @@ -315,7 +315,7 @@ postana.bias.plotting.sda.corr<-function(t, obs.times, X, aqq, bqq){ post.analysis.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS, plot.title=NULL){ - t1 <- 1 + t1<- 1 #Defining some colors ready.OBS<-NULL generate_colors_sda() @@ -425,7 +425,7 @@ post.analysis.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, obs, ##' @export post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS, plot.title=NULL){ - + t1 <- 1 #Defining some colors generate_colors_sda() #ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, @@ -561,7 +561,7 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs if (!('ggrepel' %in% installed.packages()[,1])) devtools::install_github("slowkow/ggrepel") #Defining some colors - t1 <- 1 + t1 <- 1 generate_colors_sda() varnames <- settings$state.data.assimilation$state.variable #just a check From da3251736c75e973e58e525e14226ab965813cbb Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Wed, 2 Oct 2019 15:31:54 -0400 Subject: [PATCH 0486/2289] Removed input section for restart that was wrong. 
Updated last.sim to be the latest simulation- it was picking the first simulation of the day instead of the most recent run --- .../inst/WillowCreek/workflow.template.R | 81 ++++++++++++------- 1 file changed, 50 insertions(+), 31 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index a8c3ca09313..247135dd3b1 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -43,6 +43,7 @@ outputPath <- "/fs/data3/kzarada/ouput" xmlTempName <-"gefs.sipnet.template.xml" restart <-FALSE nodata <- FALSE +days.obs <- 3 #how many of observed data to include -- not including today setwd(outputPath) #------------------------------------------------------------------------------------------------ #------------------------------------------ sourcing the required tools ------------------------- @@ -68,9 +69,7 @@ con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) #------------------------------------------------------------------------------------------------ #--------------------------- Finding old sims all.previous.sims <- list.dirs(outputPath, recursive = F) -sda.start <- Sys.Date() - 14 -sda.end <- Sys.Date() -if (length(all.previous.sims) > 0 & !inherits(con, "try-error") & restart) { +if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { tryCatch({ # Looking through all the old simulations and find the most recent @@ -88,7 +87,7 @@ if (length(all.previous.sims) > 0 & !inherits(con, "try-error") & restart) { ) %>% mutate(ID=.x)) %>% mutate(started_at = as.Date(started_at)) %>% - arrange(desc(started_at)) %>% + arrange(desc(started_at), desc(ID)) %>% head(1) # pulling the date and the path to the last SDA restart.path <-grep(last.sim$ID, names(all.previous.sims), value = T) @@ -96,14 +95,16 @@ if (length(all.previous.sims) > 0 & !inherits(con, "try-error") & restart) { }, error = function(e) { restart.path <- NULL - sda.start <- Sys.Date() - 14 + sda.start <- Sys.Date() - 12 PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) }) # if there was no older sims if (is.na(sda.start)) - sda.start <- Sys.Date() - 14 + sda.start <- Sys.Date() - 12 } + +sda.end <- Sys.Date() #----------------------------------------------------------------------------------------------- #------------------------------------------ Download met and flux ------------------------------ #----------------------------------------------------------------------------------------------- @@ -114,7 +115,7 @@ if(!exists('prep.data')) sda.end, numvals = 100, vars = c("NEE", "LE"), - data.len = 168 # This is 7 days + data.len = days.obs * 24 ) obs.raw <-prep.data$rawobs prep.data<-prep.data$obs @@ -153,20 +154,20 @@ prep.data <- prep.data %>% # Finding the right end and start date met.start <- obs.raw$Date%>% head(1) %>% lubridate::floor_date(unit = "day") -met.end <- lubridate::floor_date(Sys.Date() - lubridate::hours(2), unit = "6 hour") + lubridate::days(16) +met.end <- met.start + lubridate::days(16) #pad Observed Data to match met data date <- seq( - from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC"), - to = lubridate::with_tz(as.POSIXct(met.end, format = "%Y-%m-%d"), tz = "UTC"), + from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC") + lubridate::days(1), + to = 
lubridate::with_tz(as.POSIXct(met.end - lubridate::days(1), format = "%Y-%m-%d"), tz = "UTC"), by = "6 hour" ) pad.prep <- obs.raw %>% tidyr::complete(Date = seq( - from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC"), - to = lubridate::with_tz(as.POSIXct(met.end, format = "%Y-%m-%d"), tz = "UTC"), + from = lubridate::with_tz(as.POSIXct(first(sda.end), format = "%Y-%m-%d"), tz = "UTC") + lubridate::days(1), + to = lubridate::with_tz(as.POSIXct(met.end - lubridate::days(1), format = "%Y-%m-%d"), tz = "UTC"), by = "6 hour" )) %>% mutate(means = NA, covs = NA) %>% @@ -216,7 +217,9 @@ obs.cov <- prep.data %>% map('covs') %>% setNames(names(prep.data)) #----------------------------------------------------------------------------------------------- #Using the found dates to run - this will help to download mets settings$run$start.date <- as.character(met.start) -settings$run$end.date <- as.character(met.end) +settings$run$end.date <- as.character(last(date)) +settings$run$site$met.start <- as.character(met.start) +settings$run$site$met.end <- as.character(met.end) #info settings$info$date <- paste0(format(Sys.time(), "%Y/%m/%d %H:%M:%S"), " +0000") # -------------------------------------------------------------------------------------------------- @@ -281,36 +284,51 @@ if (nodata) { # @Hamze: Yes if restart == TRUE if(restart == TRUE){ if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) - - file.copy(from= file.path(restart.path, "SDA", "sda.output.Rdata"), - to = file.path(settings$outdir, "SDA", "sda.output.Rdata")) - - file.copy(from= file.path(restart.path, "SDA", "outconfig.Rdata"), - to = file.path(settings$outdir, "SDA", "outconfig.Rdata")) - -#Update the SDA Output to just have last time step - temp <- new.env() + + #Update the SDA Output to just have last time step + temp<- new.env() load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) temp <- as.list(temp) - for(i in rev(2:length(temp$ANALYSIS))){ + if(length(temp$ANALYSIS) > 1){ + for(i in rev(1:(length(temp$ANALYSIS)-1))){ temp$ANALYSIS[[i]] <- NULL } - for(i in rev(2:length(temp$FORECAST))){ + for(i in rev(1:(length(temp$FORECAST)-1))){ temp$FORECAST[[i]] <- NULL } - for(i in rev(2:length(temp$enkf.params))){ + for(i in rev(1:(length(temp$enkf.params)-1))){ temp$enkf.params[[i]] <- NULL } + } temp$t = 1 + + + temp1<- new.env() + list2env(temp, envir = temp1) + save(list = c("ANALYSIS", 'FORECAST', "enkf.params", "ensemble.id", "ensemble.samples", 'inputs', 'new.params', 'new.state', 'run.id', 'site.locs', 't', 'Viz.output', 'X'), + envir = temp1, + file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) + + + + temp.out <- new.env() + load(file.path(restart.path, "SDA", 'outconfig.Rdata'), envir = temp.out) + temp.out <- as.list(temp.out) + temp.out$outconfig$samples <- NULL + + temp.out1 <- new.env() + list2env(temp.out, envir = temp.out1) + save(list = c('outconfig'), + envir = temp.out1, + file = file.path(settings$outdir, "SDA", "outconfig.Rdata")) + - save(list = "temp", - file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) #copy over run and out folders @@ -325,14 +343,14 @@ if(restart == TRUE){ #--------------------------------- Run state data assimilation ------------------------------------- # -------------------------------------------------------------------------------------------------- -unlink(c('run','out','SDA'), recursive = T) +if(restart == FALSE) unlink(c('run','out','SDA'), recursive = T) if ('state.data.assimilation' %in% names(settings)) 
{ if (PEcAn.utils::status.check("SDA") == 0) { PEcAn.utils::status.start("SDA") PEcAn.assim.sequential::sda.enkf( - settings, - restart=FALSE, + settings, + restart=restart, Q=0, obs.mean = obs.mean, obs.cov = obs.cov, @@ -341,11 +359,12 @@ if ('state.data.assimilation' %in% names(settings)) { interactivePlot =FALSE, TimeseriesPlot =TRUE, BiasPlot =FALSE, - debug =TRUE, + debug =FALSE, pause=FALSE ) ) PEcAn.utils::status.end() } } + \ No newline at end of file From bb1071557706142b4349098e80dfa503f1504694 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 3 Oct 2019 02:00:44 -0400 Subject: [PATCH 0487/2289] add basgra to makefile --- Makefile | 2 +- Makefile.depends | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 91e2b2a9b35..f4ae544d68b 100644 --- a/Makefile +++ b/Makefile @@ -2,7 +2,7 @@ NCPUS ?= 1 BASE := logger utils db settings visualization qaqc remote workflow -MODELS := biocro clm45 dalec dvmdostem ed fates gday jules linkages \ +MODELS := basgra biocro clm45 dalec dvmdostem ed fates gday jules linkages \ lpjguess maat maespa preles sipnet template MODULES := allometry assim.batch assim.sequential benchmark \ diff --git a/Makefile.depends b/Makefile.depends index e574d22a767..5ae493a56c0 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -22,6 +22,7 @@ $(call depends,modules/photosynthesis): | .install/base/logger $(call depends,modules/priors): | .install/base/utils .install/base/logger $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger +$(call depends,models/basgra): | .install/base/logger .install/base/utils $(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/base/db $(call depends,models/cable): | .install/base/logger .install/base/utils $(call depends,models/clm45): | .install/base/logger .install/base/utils From e6c85de1f21c30f58a67bb02a228ffdeb307b06a Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 3 Oct 2019 02:43:54 -0400 Subject: [PATCH 0488/2289] add model --- scripts/add.models.sh | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/scripts/add.models.sh b/scripts/add.models.sh index 490d731fdcc..b52c9c247ca 100755 --- a/scripts/add.models.sh +++ b/scripts/add.models.sh @@ -9,17 +9,18 @@ # 3 : model revision number # 4 : name of executable, without the path # 5 : optionally path to executable -addLocalModel "ED2.2" "ED2" "46" "ed2.r46" -addLocalModel "ED2.2" "ED2" "82" "ed2.r82" -addLocalModel "ED2.2" "ED2" "git" "ed2.git" -addLocalModel "SIPNET" "SIPNET" "unk" "sipnet.runk" -addLocalModel "SIPNET" "SIPNET" "136" "sipnet.r136" -addLocalModel "SIPNET" "SIPNET" "git" "sipnet.git" -addLocalModel "DALEC" "DALEC" "" "dalec_seqMH" -addLocalModel "Linkages" "LINKAGES" "git" "linkages.git" -addLocalModel "MAESPA" "MAESPA" "git" "maespa.git" -addLocalModel "LPJ-GUESS" "LPJGUESS" "3.1" "guess.3.1" -addLocalModel "GDAY(Day)" "GDAY" "" "gday" +addLocalModel "ED2.2" "ED2" "46" "ed2.r46" +addLocalModel "ED2.2" "ED2" "82" "ed2.r82" +addLocalModel "ED2.2" "ED2" "git" "ed2.git" +addLocalModel "SIPNET" "SIPNET" "unk" "sipnet.runk" +addLocalModel "SIPNET" "SIPNET" "136" "sipnet.r136" +addLocalModel "SIPNET" "SIPNET" "git" "sipnet.git" +addLocalModel "DALEC" 
"DALEC" "" "dalec_seqMH" +addLocalModel "Linkages" "LINKAGES" "git" "linkages.git" +addLocalModel "MAESPA" "MAESPA" "git" "maespa.git" +addLocalModel "LPJ-GUESS" "LPJGUESS" "3.1" "guess.3.1" +addLocalModel "GDAY(Day)" "GDAY" "" "gday" +addLocalModel "BASGRA" "BASGRA_N" "v1.0" "basgra" # special case for PRELES addModelFile "${FQDN}" "Preles" "PRELES" "" "true" "/bin" From 48d9efd5b1aa631ce33a65f24dc8159977958317 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 3 Oct 2019 04:52:53 -0400 Subject: [PATCH 0489/2289] add new pars --- models/basgra/R/write.config.BASGRA.R | 32 +++++++++++++++++++++++++ models/basgra/inst/BASGRA_params.Rdata | Bin 2675 -> 2675 bytes 2 files changed, 32 insertions(+) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 4bbb5b8f5f6..999f0388d29 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -40,6 +40,11 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { pft.traits <- unlist(trait.values[[pft]]) pft.names <- names(pft.traits) + # Initial value of leaf area index m2 m-2 - logged) + if ("ilai" %in% pft.names) { + run_params[which(names(run_params) == "LOG10LAII")] <- log(pft.traits[which(pft.names == "ilai")]) + } + # N-C ratio of roots (g N g-1 C) if ("c2n_fineroot" %in% pft.names) { run_params[which(names(run_params) == "NCR")] <- 1/pft.traits[which(pft.names == "c2n_fineroot")] @@ -64,6 +69,33 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { run_params[which(names(run_params) == "PHY")] <- pft.traits[which(pft.names == "phyllochron")] } + if ("leaf_width" %in% pft.names) { + # Leaf width on elongating tillers (m) + run_params[which(names(run_params) == "LFWIDG")] <- udunits2::ud.convert(pft.traits[which(pft.names == "leaf_width")], "mm", "m") + # Leaf width on non-elongating tillers (m) + run_params[which(names(run_params) == "LFWIDV")] <- run_params[which(names(run_params) == "LFWIDG")] * 0.6 # simplifying assumption + } + + # Initial and maximum value rooting depth (m) + if ("rooting_depth" %in% pft.names) { + run_params[which(names(run_params) == "ROOTDM")] <- pft.traits[which(pft.names == "rooting_depth")] + } + + # Maximum root depth growth rate (m day-1) + if ("root_growth_rate" %in% pft.names) { + run_params[which(names(run_params) == "RRDMAX")] <- pft.traits[which(pft.names == "root_growth_rate")] + } + + # Rubisco content of upper leaves (g m-2 leaf) + if ("rubisco_content" %in% pft.names) { + run_params[which(names(run_params) == "RUBISC")] <- pft.traits[which(pft.names == "rubisco_content")] + } + + # Area of a leaf relative to a rectangle of same length and width (-) + if ("shape" %in% pft.names) { + run_params[which(names(run_params) == "SHAPE")] <- pft.traits[which(pft.names == "shape")] + } + } diff --git a/models/basgra/inst/BASGRA_params.Rdata b/models/basgra/inst/BASGRA_params.Rdata index cff6ed6aa23f16ba14fb5b563f58df142b20ee03..9f9c9778121ca19e821e653af54ea364c72eae5f 100644 GIT binary patch delta 293 zcmew?@>yhpRs9DB6kw0e-#HTmX4$t?J#=3uQti-*qMYG?utr3s?ag}o2M}@lLxQp2 zLSusM&n9hKR~wdV|Ip=a{;lSG`zJPr_seAV+Q0q1;pwwVH~Xi*tKuEkeXqBFrpp(` zA~D(iCDfw34>xo!P!Dq8i=DLnAIEz8lTh(T3TZYazeDXCi+%oW3w~-}qPp&x;g5w5 z47^E7T(ABzKX1{Jn;A^0WeZV#o^vg&?0@Y(Uc)#!O{|IlvdlSz9FHU}~)aRLDOtan}j delta 293 zcmew?@>yhpRsH8nA6NcT_Rn6l3lZr!bNI|AR>@2ED_YO&z7-(d6U{M0Ki?fo{+es?dw$A020$9UHVTkSu+thn;` lv*b4W{(WU9wpd0u2${Jq1UlOOHS_xIFZq9N4rEl~1OSfyj+6iZ From 8c27c368ddfe0d45cb18c7226021c16ec0c026b3 Mon Sep 17 00:00:00 2001 From: 
istfer Date: Thu, 3 Oct 2019 04:59:07 -0400 Subject: [PATCH 0490/2289] changing Makevars back --- models/basgra/src/Makevars | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/models/basgra/src/Makevars b/models/basgra/src/Makevars index 2e255b4fa56..0e97b504a04 100644 --- a/models/basgra/src/Makevars +++ b/models/basgra/src/Makevars @@ -30,13 +30,10 @@ OBJECTS = \ all : $(SHLIB) -$(SHLIB) : $(SOURCES) +$(SHLIB) : $(OBJECTS) -PKG_FFLAGS = -x f95-cpp-input -fPIC -O3 -c -fdefault-real-8 $(SOURCES) - -$(OBJECTS) : $(SOURCES) - $(F77) $(PKG_FFLAGS) - +$(OBJECTS) : $(SOURCES) + $(F77) -x f95-cpp-input -fPIC -O3 -c -fdefault-real-8 $(SOURCES) clean : rm -f $(OBJECTS) *.mod model/*.mod *.so *.o symbols.rds \ No newline at end of file From d9b8ab890c0b659660560af85318481bf3909378 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 3 Oct 2019 07:28:28 -0400 Subject: [PATCH 0491/2289] small update and fix --- modules/assim.batch/R/hier.mcmc.R | 7 ++++--- modules/assim.batch/R/pda.emulator.ms.R | 21 +++++++-------------- 2 files changed, 11 insertions(+), 17 deletions(-) diff --git a/modules/assim.batch/R/hier.mcmc.R b/modules/assim.batch/R/hier.mcmc.R index dc05036b838..536de1a567a 100644 --- a/modules/assim.batch/R/hier.mcmc.R +++ b/modules/assim.batch/R/hier.mcmc.R @@ -73,7 +73,7 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, sigma_global_scale <- (cov(mu_init_samp)/sigma_global_df) # initialize sigma_global (nparam x nparam) - sigma_global <- riwish(sigma_global_df, sigma_global_scale) + sigma_global <- MCMCpack::riwish(sigma_global_df, sigma_global_scale) # initialize jcov.arr (jump variances per site) jcov.arr <- array(NA_real_, c(nparam, nparam, nsites)) @@ -82,7 +82,8 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # prepare mu_site (nsite x nparam) mu_site_new <- matrix(NA_real_, nrow = nsites, ncol= nparam) - mu_site_curr <- mu_site_init + # start + mu_site_curr <- matrix(rep(mu_site_init, nsites), ncol=nparam, byrow = TRUE) # values for each site will be accepted/rejected in themselves currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) @@ -141,7 +142,7 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, sigma_global_scale_gibbs <- sigma_global_scale + sum_term # update sigma - sigma_global <- riwish(sigma_global_df_gibbs, sigma_global_scale_gibbs) # across-site covariance + sigma_global <- MCMCpack::riwish(sigma_global_df_gibbs, sigma_global_scale_gibbs) # across-site covariance diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 8a72fda2b72..3e37e210d97 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -51,7 +51,7 @@ pda.emulator.ms <- function(multi.settings) { tunnel_dir = dirname(multi.settings[[1]]$host$tunnel)) # Until a check function is implemented, run a predefined number of emulator rounds - n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 3, as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) + n_rounds <- ifelse(is.null(multi.settings[[1]]$assim.batch$n_rounds), 5, as.numeric(multi.settings[[1]]$assim.batch$n_rounds)) PEcAn.logger::logger.info(n_rounds, " individual PDA rounds will be run per site. 
Please wait.") repeat{ @@ -325,25 +325,18 @@ pda.emulator.ms <- function(multi.settings) { ## proposing starting points from knots mu_site_init <- list() jump_init <- list() - if(nrow(SS.stack[[1]][[1]]) > nsites*tmp.settings$assim.batch$chain){ - # sample without replacement - sampind <- sample(seq_len(nrow(SS.stack[[1]][[1]])), nsites*tmp.settings$assim.batch$chain) - }else{ - # this would hardly happen, usually we have a lot more knots than nsites*tmp.settings$assim.batch$chain - # but to make this less error-prone sample with replacement if we have fewer, combinations should still be different enough - sampind <- sample(seq_len(nrow(SS.stack[[1]][[1]])), nsites*tmp.settings$assim.batch$chain, replace = TRUE) - } - + + # sample without replacement + sampind <- sample(seq_len(nrow(SS.stack[[1]][[1]])), tmp.settings$assim.batch$chain) + for(i in seq_len(tmp.settings$assim.batch$chain)){ - mu_site_init[[i]] <- SS.stack[[1]][[1]][sampind[((i-1) * nsites + 1):((i-1) * nsites + nsites)], 1:nparam] + mu_site_init[[i]] <- SS.stack[[1]][[1]][sampind[i], 1:nparam] jump_init[[i]] <- need_obj$resume.list[[i]]$jump } current.step <- "HIERARCHICAL MCMC PREP" save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) - - # start the clock ptm.start <- proc.time() @@ -361,7 +354,7 @@ pda.emulator.ms <- function(multi.settings) { mcmc.out <- parallel::parLapply(cl, seq_len(tmp.settings$assim.batch$chain), function(chain) { hier.mcmc(settings = tmp.settings, gp.stack = gp.stack, - nmcmc = tmp.settings$assim.batch$iter * 3, # need to run chains longerthan indv + nmcmc = tmp.settings$assim.batch$iter * 3, # need to run chains longer than indv rng_orig = rng_orig, jmp0 = jump_init[[chain]], mu_site_init = mu_site_init[[chain]], From dcd4549674b36f9ed0582e39750863f750ae5483 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 3 Oct 2019 12:19:50 -0400 Subject: [PATCH 0492/2289] Slowly adding the model specific tags --- base/workflow/NAMESPACE | 1 + base/workflow/R/create_execute_test_xml.R | 28 +++++++++++++++++++++++ base/workflow/man/model_specific_tags.Rd | 16 +++++++++++++ 3 files changed, 45 insertions(+) create mode 100644 base/workflow/man/model_specific_tags.Rd diff --git a/base/workflow/NAMESPACE b/base/workflow/NAMESPACE index 232b3667fe4..fd5e0cc56a7 100644 --- a/base/workflow/NAMESPACE +++ b/base/workflow/NAMESPACE @@ -2,6 +2,7 @@ export(create_execute_test_xml) export(do_conversions) +export(model_specific_tags) export(run.write.configs) export(runModule.get.trait.data) export(runModule.run.write.configs) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index 43d724b2ed0..fa8a48c3216 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -53,6 +53,8 @@ create_execute_test_xml <- function(model_id, if (is.null(db_bety_password)) db_bety_password <- config.list$db_bety_password if (is.null(db_bety_hostname)) db_bety_hostname <- config.list$db_bety_hostname if (is.null(db_bety_port)) db_bety_port <- config.list$db_bety_port + + #opening a connection to bety con <- PEcAn.DB::db.open(list( user = db_bety_username, password = db_bety_password, @@ -151,6 +153,9 @@ create_execute_test_xml <- function(model_id, ) settings$host$name <- "localhost" + + # Add model specific options + settings<-model_specific_tags(settings, model.new) #create file and Run XML::saveXML(PEcAn.settings::listToXml(settings, "pecan"), file = file.path(outdir, "pecan.xml")) @@ -166,3 +171,26 @@ create_execute_test_xml 
<- function(model_id,
     outdir = outdir
   )
 }
+
+
+
+
+#' Add model-specific settings tags
+#'
+#' @param settings pecan xml settings
+#' @param model.info model info extracted from bety
+#'
+#' @return settings list with any model-specific input tags added
+#' @export
+#'
+model_specific_tags <- function(settings, model.info){
+
+  #some extra settings for LPJ-GUESS
+  if(model.info$model_name=="LPJ-GUESS"){
+    settings$run$inputs <- c(settings$run$inputs ,
+                             list(soil=list(id=1000000903))
+    )
+  }
+
+  return(settings)
+}
diff --git a/base/workflow/man/model_specific_tags.Rd b/base/workflow/man/model_specific_tags.Rd
new file mode 100644
index 00000000000..4e4685cbae5
--- /dev/null
+++ b/base/workflow/man/model_specific_tags.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/create_execute_test_xml.R
+\name{model_specific_tags}
+\alias{model_specific_tags}
+\title{Add model-specific settings tags}
+\usage{
+model_specific_tags(settings, model.info)
+}
+\arguments{
+\item{settings}{pecan xml settings}
+
+\item{model.info}{model info extracted from bety}
+}
+\description{
+Add model-specific settings tags
+}

From 4f37970891285535f7fee8d6e15c8bf5de089399 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 3 Oct 2019 19:50:39 +0200
Subject: [PATCH 0493/2289] allow running when no Rcheck_reference.log file is
 present

When file present: DIELEVEL defaults to error, LOGLEVEL to warn
When file not present: DIELEVEL defaults to note, LOGLEVEL to all

Output above DIELEVEL now comes directly from devtools::check, so e.g.
dying on an error will also show all notes and warnings even if
LOGLEVEL=error
---
 scripts/check_with_errors.R | 53 +++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 28 deletions(-)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index bbe18fdda3e..fb9cb6f5e43 100644
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -7,8 +7,22 @@
 die_level <- Sys.getenv('DIELEVEL', unset = NA)
 redocument <- as.logical(Sys.getenv('REBUILD_DOCS', unset = NA))
 runtests <- as.logical(Sys.getenv('RUN_TESTS', unset = TRUE))
-# message('log_level = ', log_level)
-# message('die_level = ', die_level)
+old_file <- file.path(pkg, "tests", "Rcheck_reference.log")
+if (file.exists(old_file)) {
+  # Package has old unfixed warnings that we should ignore by default
+  # (but if log/die level are explicitly set, respect them)
+  if (is.na(log_level)) log_level <- "warning"
+  if (is.na(die_level)) die_level <- "error"
+} else {
+  if (is.na(log_level)) log_level <- "all"
+  if (is.na(die_level)) die_level <- "note"
+}
+
+log_level <- match.arg(log_level, c("error", "warning", "note", "all"))
+die_level <- match.arg(die_level, c("never", "error", "warning", "note"))
+
+log_warn <- log_level %in% c("warning", "note", "all")
+log_notes <- log_level %in% c("note", "all")
 
 # should test se run
 if (!runtests) {
Use either "warn" for warnings or leave blank for error.') -} - -log_warn <- !is.na(log_level) && log_level %in% c('warn', 'all') -die_warn <- !is.na(die_level) && die_level == 'warn' - -log_notes <- !is.na(log_level) && log_level == 'all' - -chk <- devtools::check(pkg, args = args, quiet = TRUE, error_on = "never", document = redocument) +chk <- devtools::check(pkg, args = args, quiet = TRUE, + error_on = die_level, document = redocument) errors <- chk[['errors']] n_errors <- length(errors) - if (n_errors > 0) { cat(errors, '\n') - stop(n_errors, ' errors found in ', pkg, '. See above for details') + stop(n_errors, ' errors found in ', pkg, '.') } warns <- chk[['warnings']] n_warns <- length(warns) message(n_warns, ' warnings found in ', pkg, '.') - -if ((log_warn|die_warn) && n_warns > 0) { +if ((log_warn) && n_warns > 0) { cat(warns, '\n') - if (die_warn && n_warns > 0) { - stop('Killing process because ', n_warns, ' warnings found in ', pkg, '.') - } } notes <- chk[['notes']] n_notes <- length(notes) message(n_notes, ' notes found in ', pkg, '.') - if (log_notes && n_notes > 0) { cat(notes, '\n') } @@ -86,7 +79,6 @@ msg_lines <- function(msg){ unlist(lapply(msg, function(x)paste(x[[1]], x[-1], sep=": "))) } -old_file <- file.path(pkg, "tests", "Rcheck_reference.log") # To update reference files after fixing an old warning: # * Run check_with_errors.R to be sure the check is currently passing @@ -99,6 +91,11 @@ old_file <- file.path(pkg, "tests", "Rcheck_reference.log") # quit("no") # } +# everything beyond this point is comparing to old version +if (!file.exists(old_file)) { + quit("no") +} + old <- rcmdcheck::parse_check(old_file) cmp <- rcmdcheck::compare_checks(old, chk) From 46c7698fa4ac012408ecc358bcc68c1a17d81c3f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 3 Oct 2019 20:58:44 +0200 Subject: [PATCH 0494/2289] line ordering, whitespace, linter --- scripts/check_with_errors.R | 65 ++++++++++++++++++++----------------- 1 file changed, 35 insertions(+), 30 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index fb9cb6f5e43..45d45eac5d9 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -2,10 +2,10 @@ arg <- commandArgs(trailingOnly = TRUE) pkg <- arg[1] -log_level <- Sys.getenv('LOGLEVEL', unset = NA) -die_level <- Sys.getenv('DIELEVEL', unset = NA) -redocument <- as.logical(Sys.getenv('REBUILD_DOCS', unset = NA)) -runtests <- as.logical(Sys.getenv('RUN_TESTS', unset = TRUE)) +log_level <- Sys.getenv("LOGLEVEL", unset = NA) +die_level <- Sys.getenv("DIELEVEL", unset = NA) +redocument <- as.logical(Sys.getenv("REBUILD_DOCS", unset = NA)) +runtests <- as.logical(Sys.getenv("RUN_TESTS", unset = TRUE)) old_file <- file.path(pkg, "tests", "Rcheck_reference.log") if (file.exists(old_file)) { @@ -26,36 +26,37 @@ log_notes <- log_level %in% c("note", "all") # should test se run if (!runtests) { - args <- c('--no-tests', '--timings') + args <- c("--no-tests", "--timings") } else { - args <- c('--timings') + args <- c("--timings") } chk <- devtools::check(pkg, args = args, quiet = TRUE, error_on = die_level, document = redocument) -errors <- chk[['errors']] +errors <- chk[["errors"]] n_errors <- length(errors) if (n_errors > 0) { - cat(errors, '\n') - stop(n_errors, ' errors found in ', pkg, '.') + cat(errors, "\n") + stop(n_errors, " errors found in ", pkg, ".") } -warns <- chk[['warnings']] +warns <- chk[["warnings"]] n_warns <- length(warns) -message(n_warns, ' warnings found in ', pkg, '.') +message(n_warns, " warnings 
found in ", pkg, ".") if ((log_warn) && n_warns > 0) { - cat(warns, '\n') + cat(warns, "\n") } -notes <- chk[['notes']] +notes <- chk[["notes"]] n_notes <- length(notes) -message(n_notes, ' notes found in ', pkg, '.') +message(n_notes, " notes found in ", pkg, ".") if (log_notes && n_notes > 0) { - cat(notes, '\n') + cat(notes, "\n") } +###### # PEcAn has a lot of legacy code that issues check warnings, # such that it's not yet practical to break the build on every warning. @@ -70,16 +71,7 @@ if (log_notes && n_notes > 0) { # then if those are OK we get fussier and check for new *instances* of existing # warnings (e.g. new check increases from 2 bad Rd cross-references to 3). -msg_lines <- function(msg){ - msg <- strsplit( - gsub("\n ", " ", msg, fixed = TRUE), #leading double-space indicates line wrap - split = "\n", - fixed = TRUE) - msg <- lapply(msg, function(x)x[x != ""]) - unlist(lapply(msg, function(x)paste(x[[1]], x[-1], sep=": "))) -} - - +### # To update reference files after fixing an old warning: # * Run check_with_errors.R to be sure the check is currently passing # * Delete the file you want to update @@ -90,6 +82,7 @@ msg_lines <- function(msg){ # cat(chk$stdout, file = old_file) # quit("no") # } +### # everything beyond this point is comparing to old version if (!file.exists(old_file)) { @@ -99,6 +92,17 @@ if (!file.exists(old_file)) { old <- rcmdcheck::parse_check(old_file) cmp <- rcmdcheck::compare_checks(old, chk) +msg_lines <- function(msg) { + # leading double-space indicates wrapped line -> rejoin + msg <- gsub("\n ", " ", msg, fixed = TRUE) + + #split lines, delete empty ones + msg <- strsplit(msg, split = "\n", fixed = TRUE) + msg <- lapply(msg, function(x)x[x != ""]) + + # prepend message title (e.g. "checking Rd files ... NOTE") to each line + unlist(lapply(msg, function(x)paste(x[[1]], x[-1], sep = ": "))) +} if (cmp$status != "+") { # rcmdcheck found new messages, so check has failed @@ -125,21 +129,22 @@ if (cmp$status != "+") { prev_msgs <- gsub("[‘’]", "'", prev_msgs) prev_msgs <- gsub("', '", "' '", prev_msgs) - # Compression warnings report slightly different sizes on different R versions - # If the only difference is in the numbers, don't complain + # Compression warnings report slightly different sizes on different R + # versions. 
If the only difference is in the numbers, don't complain cmprs_msg <- grepl("significantly better compression", cur_msgs) - if(any(cmprs_msg)){ + if (any(cmprs_msg)) { prev_cmprs_msg <- grepl("significantly better compression", prev_msgs) cur_cmprs_nodigit <- gsub("[0-9]", "", cur_msgs[cmprs_msg]) prev_cmprs_nodigit <- gsub("[0-9]", "", prev_msgs[prev_cmprs_msg]) - if(all(cur_cmprs_nodigit %in% prev_cmprs_nodigit)){ + if (all(cur_cmprs_nodigit %in% prev_cmprs_nodigit)) { cur_msgs <- cur_msgs[!cmprs_msg] } } # This line is redundant (summarizes issues also reported individually) # and creates a false positive when an existing issue is fixed - cur_msgs <- cur_msgs[!grepl("NOTE: Undefined global functions or variables:", cur_msgs)] + cur_msgs <- cur_msgs[!grepl( + "NOTE: Undefined global functions or variables:", cur_msgs)] lines_changed <- setdiff(cur_msgs, prev_msgs) if (length(lines_changed) > 0) { From 1776a89cc2eec8e44caf175313a9cb8394ad92fc Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 3 Oct 2019 21:37:57 +0200 Subject: [PATCH 0495/2289] fix complaints about names --- models/basgra/R/write.config.BASGRA.R | 7 +++++-- models/basgra/man/write.config.BASGRA.Rd | 4 ++-- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 999f0388d29..9702c66c547 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -16,7 +16,7 @@ ##' @name write.config.BASGRA ##' @title Write BASGRA configuration files ##' @param defaults list of defaults to process -##' @param trait.samples vector of samples for a given trait +##' @param trait.values vector of samples for a given trait ##' @param settings list of settings from pecan settings file ##' @param run.id id of run ##' @return configuration file for BASGRA for given run @@ -139,7 +139,10 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { jobsh <- gsub("@OUTDIR@", outdir, jobsh) jobsh <- gsub("@RUNDIR@", rundir, jobsh) - jobsh <- gsub("@RUN_PARAMS@", paste0("c(",listToArgString(run_params),")"), jobsh) + jobsh <- gsub( + "@RUN_PARAMS@", + paste0("c(", PEcAn.utils::listToArgString(run_params), ")"), + jobsh) writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh")) Sys.chmod(file.path(settings$rundir, run.id, "job.sh")) diff --git a/models/basgra/man/write.config.BASGRA.Rd b/models/basgra/man/write.config.BASGRA.Rd index 39125f34b10..7e20e3efc49 100644 --- a/models/basgra/man/write.config.BASGRA.Rd +++ b/models/basgra/man/write.config.BASGRA.Rd @@ -9,11 +9,11 @@ write.config.BASGRA(defaults, trait.values, settings, run.id) \arguments{ \item{defaults}{list of defaults to process} +\item{trait.values}{vector of samples for a given trait} + \item{settings}{list of settings from pecan settings file} \item{run.id}{id of run} - -\item{trait.samples}{vector of samples for a given trait} } \value{ configuration file for BASGRA for given run From 29fa25f9395765de4285f3f9ca94f34c9427c2c4 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 3 Oct 2019 22:00:49 +0200 Subject: [PATCH 0496/2289] regenerate Rcheck_reference.log --- models/basgra/tests/Rcheck_reference.log | 66 ++++++++++++------------ 1 file changed, 32 insertions(+), 34 deletions(-) diff --git a/models/basgra/tests/Rcheck_reference.log b/models/basgra/tests/Rcheck_reference.log index 68fa51a5ae8..70c9129b3fd 100644 --- a/models/basgra/tests/Rcheck_reference.log +++ b/models/basgra/tests/Rcheck_reference.log @@ -1,11 +1,11 @@ -* 
using log directory ‘/tmp/RtmpVlFdqS/PEcAn.ModelName.Rcheck’ -* using R version 3.5.2 (2018-12-20) -* using platform: x86_64-pc-linux-gnu (64-bit) +* using log directory ‘/private/var/folders/qr/mbw8xxpd45jdv_46b27914280000gn/T/Rtmpcw7AZN/PEcAn.BASGRA.Rcheck’ +* using R version 3.4.3 (2017-11-30) +* using platform: x86_64-apple-darwin15.6.0 (64-bit) * using session charset: UTF-8 -* using options ‘--no-tests --no-manual --as-cran’ -* checking for file ‘PEcAn.ModelName/DESCRIPTION’ ... OK +* using options ‘--no-manual --as-cran’ +* checking for file ‘PEcAn.BASGRA/DESCRIPTION’ ... OK * checking extension type ... Package -* this is package ‘PEcAn.ModelName’ version ‘1.7.0’ +* this is package ‘PEcAn.BASGRA’ version ‘1.7.1’ * package encoding: UTF-8 * checking package namespace information ... OK * checking package dependencies ... OK @@ -15,16 +15,12 @@ * checking for hidden files and directories ... OK * checking for portable file names ... OK * checking for sufficient/correct file permissions ... OK -* checking serialization versions ... OK -* checking whether package ‘PEcAn.ModelName’ can be installed ... OK +* checking whether package ‘PEcAn.BASGRA’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE License components with restrictions not permitted: FreeBSD + file LICENSE -Authors@R field gives no person with name and roles. -Authors@R field gives no person with maintainer role, valid email -address and non-empty name. * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -41,37 +37,39 @@ address and non-empty name. * checking S3 generic/method consistency ... OK * checking replacement functions ... OK * checking foreign function calls ... OK -* checking R code for possible problems ... NOTE -write.config.MODEL: no visible global function definition for - ‘db.query’ -write.config.MODEL: no visible binding for global variable ‘startdate’ -write.config.MODEL: no visible binding for global variable ‘enddate’ -Undefined global functions or variables: - db.query enddate startdate +* checking R code for possible problems ... OK * checking Rd files ... OK * checking Rd metadata ... OK * checking Rd line widths ... OK * checking Rd cross-references ... OK * checking for missing documentation entries ... OK * checking for code/documentation mismatches ... OK -* checking Rd \usage sections ... WARNING -Undocumented arguments in documentation object 'met2model.MODEL' - ‘overwrite’ - -Undocumented arguments in documentation object 'write.config.MODEL' - ‘trait.values’ -Documented arguments not in \usage in documentation object 'write.config.MODEL': - ‘trait.samples’ - -Functions with \usage entries need to have the appropriate \alias -entries, and all their arguments documented. -The \usage entries must correspond to syntactically valid R code. -See chapter ‘Writing R documentation files’ in the ‘Writing R -Extensions’ manual. +* checking Rd \usage sections ... OK * checking Rd contents ... OK * checking for unstated dependencies in examples ... OK +* checking line endings in C/C++/Fortran sources/headers ... OK +* checking line endings in Makefiles ... OK +* checking compilation flags in Makevars ... OK +* checking for GNU extensions in Makefiles ... OK +* checking for portable use of $(BLAS_LIBS) and $(LAPACK_LIBS) ... OK +* checking compiled code ... 
NOTE +File ‘PEcAn.BASGRA/libs/PEcAn.BASGRA.so’: + Found no calls to: ‘R_registerRoutines’, ‘R_useDynamicSymbols’ + +It is good practice to register native routines and to disable symbol +search. + +See ‘Writing portable packages’ in the ‘Writing R Extensions’ manual. * checking examples ... NONE * checking for unstated dependencies in ‘tests’ ... OK -* checking tests ... SKIPPED +* checking tests ... + Running ‘testthat.R’ + OK * DONE -Status: 1 WARNING, 2 NOTEs + +Status: 2 NOTEs +See + ‘/private/var/folders/qr/mbw8xxpd45jdv_46b27914280000gn/T/Rtmpcw7AZN/PEcAn.BASGRA.Rcheck/00check.log’ +for details. + + From 0c59281d6ceaa15ffa8dd95c61693e73f9a59ce9 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 3 Oct 2019 22:09:14 +0200 Subject: [PATCH 0497/2289] back to old logging default --- scripts/check_with_errors.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 45d45eac5d9..d2bf5135009 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -11,7 +11,7 @@ old_file <- file.path(pkg, "tests", "Rcheck_reference.log") if (file.exists(old_file)) { # Package has old unfixed warnings that we should ignore by default # (but if log/die level are explicitly set, respect them) - if (is.na(log_level)) log_level <- "warning" + if (is.na(log_level)) log_level <- "error" if (is.na(die_level)) die_level <- "error" } else { if (is.na(log_level)) log_level <- "all" From eb3056c4a2ac89cc4d2ab8b0e77c3765886ba591 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 3 Oct 2019 22:10:09 +0200 Subject: [PATCH 0498/2289] linting + more informative ref file regeneration instructions --- scripts/check_with_errors.R | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index d2bf5135009..5612344d8b7 100644 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -75,12 +75,14 @@ if (log_notes && n_notes > 0) { # To update reference files after fixing an old warning: # * Run check_with_errors.R to be sure the check is currently passing # * Delete the file you want to update -# * Uncomment this section, run check_with_errors.R, recomment +# * Uncomment this section +# * run `DIELEVEL=never Rscript scripts/check_with_errors.R path/to/package` +# * recomment this section # * Commit updated file -# if (! file.exists(old_file)) { -# cat("No reference check file found. Saving current results as the new standard\n") -# cat(chk$stdout, file = old_file) -# quit("no") +# if (!file.exists(old_file)) { +# cat("No reference check file found. 
Saving current results as the new standard\n") +# cat(chk$stdout, file = old_file) +# quit("no") # } ### From 307c10133b813e90e6ce219fe23b38cae9f14f22 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 5 Oct 2019 01:20:14 -0400 Subject: [PATCH 0499/2289] gitignore --- models/basgra/.gitignore | 4 ++++ models/basgra/R/run_BASGRA.R | 14 +++++++------- 2 files changed, 11 insertions(+), 7 deletions(-) create mode 100644 models/basgra/.gitignore diff --git a/models/basgra/.gitignore b/models/basgra/.gitignore new file mode 100644 index 00000000000..f22b541bfae --- /dev/null +++ b/models/basgra/.gitignore @@ -0,0 +1,4 @@ +*.so +*.o +*.mod + diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 041ca792e8e..bd0abbad85b 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -221,17 +221,17 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) # hardcoding these for now, should be able to modify later on - calendar_fert[1,] <- c( 2018, 125, 0*1000/ 10000 ) # 140 kg N ha-1 applied on day 115 - calendar_fert[2,] <- c( 2018, 250, 0*1000/ 10000 ) # 80 kg N ha-1 applied on day 150 + calendar_fert[1,] <- c( 2018, 125, 10*1000/ 10000 ) # 140 kg N ha-1 applied on day 115 + calendar_fert[2,] <- c( 2018, 250, 0*1000/ 10000 ) # 80 kg N ha-1 applied on day 150 # calendar_fert[3,] <- c( 2001, 123, 0*1000/ 10000 ) # 0 kg N ha-1 applied on day 123 - calendar_Ndep[1,] <- c( 1900, 1, 2*1000/(10000*365) ) # 2 kg N ha-1 y-1 N-deposition in 1900 - calendar_Ndep[2,] <- c( 1980, 366, 20*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 - calendar_Ndep[3,] <- c( 2100, 366, 20*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 + calendar_Ndep[1,] <- c( 1900, 1, 0*1000/(10000*365) ) # 2 kg N ha-1 y-1 N-deposition in 1900 + calendar_Ndep[2,] <- c( 1980, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 + calendar_Ndep[3,] <- c( 2100, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 # Qvidja 2018 Harvest dates days_harvest [1,] <- c( 2018, 163 ) - days_harvest [2,] <- c( 2018, 233 ) - days_harvest [3,] <- c( 2018, 266 ) + #days_harvest [2,] <- c( 2018, 233 ) + days_harvest [2,] <- c( 2018, 266 ) days_harvest <- as.integer(days_harvest) From d98ae3670a76f8fa3fa8b4ee9b8422f5fa53593f Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 5 Oct 2019 03:14:05 -0400 Subject: [PATCH 0500/2289] functionality to pass harvest dates --- models/basgra/R/run_BASGRA.R | 13 ++++++------- models/basgra/R/write.config.BASGRA.R | 5 ++++- models/basgra/inst/template.job | 2 +- models/basgra/man/run_BASGRA.Rd | 6 ++++-- 4 files changed, 15 insertions(+), 11 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index bd0abbad85b..77309e3e45c 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -12,6 +12,7 @@ ##' @title run BASGRA model ##' @param run_met path to CF met ##' @param run_params parameter vector +##' @param site_harvest path to harvest file ##' @param start_date start time of the simulation ##' @param end_date end time of the simulation ##' @param outdir where to write BASGRA output @@ -23,7 +24,7 @@ ##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitelat, sitelon){ +run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, 
outdir, sitelat, sitelon){ start_date <- as.POSIXlt(start_date, tz = "UTC") end_date <- as.POSIXlt(end_date, tz = "UTC") @@ -228,12 +229,10 @@ run_BASGRA <- function(run_met, run_params, start_date, end_date, outdir, sitela calendar_Ndep[2,] <- c( 1980, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 calendar_Ndep[3,] <- c( 2100, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 - # Qvidja 2018 Harvest dates - days_harvest [1,] <- c( 2018, 163 ) - #days_harvest [2,] <- c( 2018, 233 ) - days_harvest [2,] <- c( 2018, 266 ) - - days_harvest <- as.integer(days_harvest) + # read in harvest days + h_days <- as.matrix(read.table(site_harvest, header = TRUE, sep = ",")) + days_harvest[1:nrow(h_days),] <- h_days + days_harvest <- as.integer(days_harvest) # run model output <- .Fortran('BASGRA', diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 9702c66c547..87285277895 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -125,13 +125,16 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id) { hostteardown <- paste(hostteardown, sep = "\n", paste(settings$host$postrun, collapse = "\n")) } + # create job.sh jobsh <- gsub("@HOST_SETUP@", hostsetup, jobsh) jobsh <- gsub("@HOST_TEARDOWN@", hostteardown, jobsh) jobsh <- gsub("@SITE_LAT@", settings$run$site$lat, jobsh) jobsh <- gsub("@SITE_LON@", settings$run$site$lon, jobsh) - jobsh <- gsub("@SITE_MET@", settings$run$inputs$met$path, jobsh) + + jobsh <- gsub("@SITE_MET@", settings$run$inputs$met$path, jobsh) + jobsh <- gsub("@SITE_HARVEST@", settings$run$inputs$harvest$path, jobsh) jobsh <- gsub("@START_DATE@", settings$run$start.date, jobsh) jobsh <- gsub("@END_DATE@", settings$run$end.date, jobsh) diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job index 1e2c4dc9ab6..1cbf436fcfe 100644 --- a/models/basgra/inst/template.job +++ b/models/basgra/inst/template.job @@ -15,7 +15,7 @@ if [ ! -e "@OUTDIR@/results.csv" ]; then # convert to MsTMIP echo "library (PEcAn.BASGRA) -run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) +run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@SITE_HARVEST@', '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) " | R --vanilla STATUS=$? 
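A hedged aside on the change above: run_BASGRA() now reads the harvest schedule from a small headered CSV pointed to by settings$run$inputs$harvest$path. The sketch below is illustrative only; the column names are hypothetical, the code relies solely on the (year, day-of-year) column order, and the dates are the formerly hardcoded Qvidja 2018 ones.

# hypothetical example file; header names are arbitrary, order is (year, doy)
h_example <- data.frame(year = c(2018, 2018), doy = c(163, 266))
write.table(h_example, "harvest_example.csv",
            sep = ",", row.names = FALSE, quote = FALSE)

# same ingestion steps as in run_BASGRA(): unused rows stay at -1
days_harvest <- matrix(as.integer(-1), nrow = 100, ncol = 2)
h_days <- as.matrix(utils::read.table("harvest_example.csv",
                                      header = TRUE, sep = ","))
days_harvest[1:nrow(h_days), ] <- h_days
days_harvest <- as.integer(days_harvest)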
diff --git a/models/basgra/man/run_BASGRA.Rd b/models/basgra/man/run_BASGRA.Rd index 84b55fbb3ec..dcd2ae95586 100644 --- a/models/basgra/man/run_BASGRA.Rd +++ b/models/basgra/man/run_BASGRA.Rd @@ -4,14 +4,16 @@ \alias{run_BASGRA} \title{run BASGRA model} \usage{ -run_BASGRA(run_met, run_params, start_date, end_date, outdir, sitelat, - sitelon) +run_BASGRA(run_met, run_params, site_harvest, start_date, end_date, outdir, + sitelat, sitelon) } \arguments{ \item{run_met}{path to CF met} \item{run_params}{parameter vector} +\item{site_harvest}{path to harvest file} + \item{start_date}{start time of the simulation} \item{end_date}{end time of the simulation} From 121b3347290e5f7a45846cabaffdbd304fc18179 Mon Sep 17 00:00:00 2001 From: istfer Date: Sat, 5 Oct 2019 03:43:15 -0400 Subject: [PATCH 0501/2289] utils --- models/basgra/R/run_BASGRA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 77309e3e45c..ffcd004de62 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -230,7 +230,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, calendar_Ndep[3,] <- c( 2100, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 # read in harvest days - h_days <- as.matrix(read.table(site_harvest, header = TRUE, sep = ",")) + h_days <- as.matrix(utils::read.table(site_harvest, header = TRUE, sep = ",")) days_harvest[1:nrow(h_days),] <- h_days days_harvest <- as.integer(days_harvest) From b11f06bc4259f6efbf5242eaaf774c79a5ae0034 Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 7 Oct 2019 09:23:34 -0400 Subject: [PATCH 0502/2289] Fixing bugs --- models/ed/R/write_restart.ED2.R | 3 +++ modules/assim.sequential/R/sda.enkf_refactored.R | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/models/ed/R/write_restart.ED2.R b/models/ed/R/write_restart.ED2.R index bc57d917046..6713e63a786 100644 --- a/models/ed/R/write_restart.ED2.R +++ b/models/ed/R/write_restart.ED2.R @@ -7,6 +7,9 @@ write_restart.ED2 <- function(outdir, runid, start.time, stop.time, settings, new.state, RENAME = TRUE, new.params, inputs) { + var.names <- settings$state.data.assimilation$state.variables %>% + purrr::map('variable.name') + restart <- new.params$restart # IMPORTANT NOTE: in the future, things that are passed via "restart" list need to be confined to old states that will be used diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 7ca60647b97..9b0006baa8d 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -149,8 +149,8 @@ sda.enkf <- function(settings, FORECAST <- ANALYSIS <- list() enkf.params <- list() #The aqq and bqq are shape parameters estimated over time for the proccess covariance. 
#see GEF help - aqq <- list() - bqq <- list() + aqq <- NULL + bqq <- numeric(nt + 1) ##### Creating matrices that describe the bounds of the state variables ##### interval is remade everytime depending on the data at time t ##### state.interval stays constant and converts new.analysis to be within the correct bounds From 91e01abae4cfb537539e0cccf432d4b2d5038884 Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 7 Oct 2019 13:27:53 -0400 Subject: [PATCH 0503/2289] rescaling function --- modules/assim.sequential/NAMESPACE | 1 + modules/assim.sequential/R/Helper.functions.R | 47 +++++++++++++++++++ .../man/rescaling_stateVars.Rd | 16 +++++++ 3 files changed, 64 insertions(+) create mode 100644 modules/assim.sequential/man/rescaling_stateVars.Rd diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE index 654a3ab8541..0b964a207b5 100644 --- a/modules/assim.sequential/NAMESPACE +++ b/modules/assim.sequential/NAMESPACE @@ -31,6 +31,7 @@ export(post.analysis.multisite.ggplot) export(postana.bias.plotting.sda) export(postana.bias.plotting.sda.corr) export(postana.timeser.plotting.sda) +export(rescaling_stateVars) export(sample_met) export(sda.enkf) export(sda.enkf.multisite) diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index 866aefeaff2..b1fe5383afa 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -75,3 +75,50 @@ SDA_control <- + +#' rescaling_stateVars +#' +#' @param settings pecan xml settings where state variables have the scaling_factor tag +#' @param X Any Matrix with column names as variable names +#' @description This function uses a set of scaling factors defined in the pecan XML to scale a given matrix (R or X in the case of SDA) +#' @return +#' @export +#' +rescaling_stateVars <- function(settings, X) { + + # Finding the scaling factors + scaling.factors <- + settings$state.data.assimilation$state.variables %>% + map('scaling_factor') %>% + setNames(settings$state.data.assimilation$state.variables %>% + map('variable.name')) + + + Y <- seq_len(ncol(X)) %>% + map_dfc(function(.x) { + + if(colnames(X)[.x] %in% names(scaling.factors)) { + X[, .x] * scaling.factors[[colnames(X)[.x]]] %>% as.numeric() + }else{ + X[, .x] + } + }) %>% + as.matrix() %>% + `colnames<-`(colnames(X)) + + try({ + # I'm trying to give the new transform variable the attributes of the old one + # X for example has `site` attribute + + attr.X <- names(attributes(X)) %>% + discard( ~ .x %in% c("dim", "dimnames")) + + if (length(attr.X) > 0) { + attr(Y, attr.X) <- attr(X, attr.X) + } + + }, silent = TRUE) + + return(Y) +} + diff --git a/modules/assim.sequential/man/rescaling_stateVars.Rd b/modules/assim.sequential/man/rescaling_stateVars.Rd new file mode 100644 index 00000000000..fba3f6b3db2 --- /dev/null +++ b/modules/assim.sequential/man/rescaling_stateVars.Rd @@ -0,0 +1,16 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/Helper.functions.R +\name{rescaling_stateVars} +\alias{rescaling_stateVars} +\title{rescaling_stateVars} +\usage{ +rescaling_stateVars(settings, X) +} +\arguments{ +\item{settings}{pecan xml settings where state variables have the scaling_factor tag} + +\item{X}{Any Matrix with column names as variable names} +} +\description{ +This function uses a set of scaling factors defined in the pecan XML to scale a given matrix (R or X in the case of SDA) +} From 3115d734b2a61fb30f4241638e1795bec7701209 Mon Sep 17 00:00:00 
2001 From: hamzed Date: Mon, 7 Oct 2019 13:52:08 -0400 Subject: [PATCH 0504/2289] multiply and divided --- modules/assim.sequential/R/Helper.functions.R | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index b1fe5383afa..66c721ea294 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -84,7 +84,13 @@ SDA_control <- #' @return #' @export #' -rescaling_stateVars <- function(settings, X) { +rescaling_stateVars <- function(settings, X, multiply=TRUE) { + + if(multiply){ + FUN <- .Primitive('*') + }else{ + FUN <- .Primitive('/') + } # Finding the scaling factors scaling.factors <- @@ -98,7 +104,8 @@ rescaling_stateVars <- function(settings, X) { map_dfc(function(.x) { if(colnames(X)[.x] %in% names(scaling.factors)) { - X[, .x] * scaling.factors[[colnames(X)[.x]]] %>% as.numeric() + # This function either multiplies or divides + FUN( X[, .x], scaling.factors[[colnames(X)[.x]]] %>% as.numeric()) }else{ X[, .x] } From c45b64883e7931b3feb28ad1c85c13318c81c76e Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 7 Oct 2019 14:06:10 -0400 Subject: [PATCH 0505/2289] Add backward compatibility --- modules/assim.sequential/R/Helper.functions.R | 14 +++++++------- .../assim.sequential/man/rescaling_stateVars.Rd | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index 66c721ea294..6c40633a890 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -86,18 +86,18 @@ SDA_control <- #' rescaling_stateVars <- function(settings, X, multiply=TRUE) { - if(multiply){ - FUN <- .Primitive('*') - }else{ - FUN <- .Primitive('/') - } + FUN <- ifelse(multiply, .Primitive('*'), .Primitive('/')) + # Finding the scaling factors scaling.factors <- settings$state.data.assimilation$state.variables %>% - map('scaling_factor') %>% + purrr::map('scaling_factor') %>% setNames(settings$state.data.assimilation$state.variables %>% - map('variable.name')) + purrr::map('variable.name')) %>% + purrr::discard(is.null) + + if(length(scaling.factors)==0) return(X) Y <- seq_len(ncol(X)) %>% diff --git a/modules/assim.sequential/man/rescaling_stateVars.Rd b/modules/assim.sequential/man/rescaling_stateVars.Rd index fba3f6b3db2..1491536a2b8 100644 --- a/modules/assim.sequential/man/rescaling_stateVars.Rd +++ b/modules/assim.sequential/man/rescaling_stateVars.Rd @@ -4,7 +4,7 @@ \alias{rescaling_stateVars} \title{rescaling_stateVars} \usage{ -rescaling_stateVars(settings, X) +rescaling_stateVars(settings, X, multiply = TRUE) } \arguments{ \item{settings}{pecan xml settings where state variables have the scaling_factor tag} From 913f96da6feba39c1096cb5b3967b6eab99d9cc7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 7 Oct 2019 20:30:30 +0200 Subject: [PATCH 0506/2289] ignore more summary lines, and also make executable --- scripts/check_with_errors.R | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) mode change 100644 => 100755 scripts/check_with_errors.R diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R old mode 100644 new mode 100755 index 5612344d8b7..75895c32f1c --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -1,3 +1,4 @@ +#!/usr/bin/env Rscript arg <- commandArgs(trailingOnly = TRUE) pkg <- arg[1] @@ -143,10 +144,11 @@ if (cmp$status != 
"+") { } } - # This line is redundant (summarizes issues also reported individually) - # and creates a false positive when an existing issue is fixed + # These lines are redundant summaries of issues also reported individually + # and create false positives when an existing issue is fixed cur_msgs <- cur_msgs[!grepl( "NOTE: Undefined global functions or variables:", cur_msgs)] + cur_msgs <- cur_msgs[!grepl("NOTE: Consider adding importFrom", cur_msgs)] lines_changed <- setdiff(cur_msgs, prev_msgs) if (length(lines_changed) > 0) { From 6540e4738afd29e688e1d5ce1dde946a28e525da Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 7 Oct 2019 14:30:41 -0400 Subject: [PATCH 0507/2289] Each obs needs its own file tag --- modules/assim.sequential/R/Analysis_sda.R | 156 +++++++++--------- modules/assim.sequential/R/Helper.functions.R | 2 +- 2 files changed, 80 insertions(+), 78 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 2d297ff52fb..c04eadd4d41 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -177,14 +177,14 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 #where there are zero values so that #mu.f is in 'tobit space' in the full model constants.tobit2space <<- list(N = nrow(X), - J = length(mu.f)) + J = length(mu.f)) data.tobit2space <<- list(y.ind = x.ind, - y.censored = x.censored, - mu_0 = rep(0,length(mu.f)), - lambda_0 = diag(length(mu.f),length(mu.f)+1), - nu_0 = 3, - wts = wts)#some measure of prior obs + y.censored = x.censored, + mu_0 = rep(0,length(mu.f)), + lambda_0 = diag(length(mu.f),length(mu.f)+1), + nu_0 = 3, + wts = wts)#some measure of prior obs inits.tobit2space <<- list(pf = cov(X), muf = colMeans(X)) #set.seed(0) @@ -263,7 +263,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 Pf <- matrix(colMeans(dat.tobit2space[, iPf]),ncol(X),ncol(X)) #--- This is where the localization needs to happen - After imputing Pf - + iycens <- grep("y.censored",colnames(dat.tobit2space)) X.new <- matrix(colMeans(dat.tobit2space[,iycens]),nrow(X),ncol(X)) @@ -286,16 +286,18 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 bqq <- length(mu.f) aqq <- diag(length(mu.f)) * bqq #Q } - + ### create matrix the describes the support for each observed state variable at time t interval <- matrix(NA, length(obs.mean[[t]]), 2) + # Each observed variable needs to have its own file tag under inputs + interval <-settings$state.data.assimilation$inputs %>% + purrr::map_dfr( ~ data.frame( + .x$'min_value' %>% as.numeric(),.x$'max_value' %>% as.numeric() + )) %>% + as.matrix() + rownames(interval) <- names(obs.mean[[t]]) - for(i in 1:length(input.vars)){ - interval[grep(x=rownames(interval), - pattern=input.vars[i]), ] <- matrix(c(as.numeric(settings$state.data.assimilation$inputs[[i]]$min_value), #needs to be inputs because sometimes the observation is on a different scale than the state variable - as.numeric(settings$state.data.assimilation$inputs[[i]]$max_value)), - length(grep(x=rownames(interval),pattern=input.vars[i])),2,byrow = TRUE) - } + #### These vectors are used to categorize data based on censoring #### from the interval matrix y.ind <- as.numeric(Y > interval[,1]) @@ -330,7 +332,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 X_fcomp_start <- which_fcomp[1] X_fcomp_end <- which_fcomp[length(which_fcomp)] X_fcomp_model <- 
grep(colnames(X),pattern = 'AGB.pft') - + fcomp_TRUE = TRUE }else{ X_fcomp_start <- 0 @@ -379,70 +381,70 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 ) dimensions.tobit <<- list(X = length(mu.f), X.mod = ncol(X), - Q = c(length(mu.f),length(mu.f)), - y_star = (length(y.censored))) + Q = c(length(mu.f),length(mu.f)), + y_star = (length(y.censored))) data.tobit <<- list(muf = as.vector(mu.f), - pf = solve(Pf), - aq = aqq, bq = bqq, - y.ind = y.ind, - y.censored = y.censored, - r = R) #precision + pf = solve(Pf), + aq = aqq, bq = bqq, + y.ind = y.ind, + y.censored = y.censored, + r = R) #precision inits.pred <<- list(q = diag(length(mu.f))*(length(mu.f)+1), - X.mod = rnorm(length(mu.f),mu.f,1), - X = rnorm(length(mu.f),mu.f,1), - y_star = rnorm(length(y.censored),0,1)) - -model_pred <- nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit, - constants = constants.tobit, inits = inits.pred, - name = 'base') - -model_pred$initializeInfo() -## Adding X.mod,q,r as data for building model. -conf <- configureMCMC(model_pred, print=TRUE) -conf$addMonitors(c("X","X.mod","q","Q", "y_star","y.censored")) -## [1] conjugate_dmnorm_dmnorm sampler: X[1:5] - - -if(FALSE){ ### Need this for when the state variables are on different scales like NPP and AGB - x.char <- paste0('X[1:',ncol(X),']') - conf$removeSampler(x.char) - propCov.means <- c(rep(1,9),1000)#signif(diag(Pf),1)#mean(unlist(lapply(obs.cov,FUN = function(x){diag(x)})))[choose]#c(rep(max(diag(Pf)),ncol(X)))# - if(length(propCov.means)!=ncol(X)) propCov.means <- c(propCov.means,rep(1,ncol(X)-length(Y))) - conf$addSampler(target =c(x.char), - control <- list(propCov = diag(ncol(X))*propCov.means), - type='RW_block') -} - -## important! -## this is needed for correct indexing later -samplerNumberOffset <<- length(conf$getSamplers()) - -for(i in 1:length(y.ind)) { - node <- paste0('y.censored[',i,']') - conf$addSampler(node, 'toggle', control=list(type='RW')) - ## could instead use slice samplers, or any combination thereof, e.g.: - ##conf$addSampler(node, 'toggle', control=list(type='slice')) -} - -conf$printSamplers() - -## can monitor y.censored, if you wish, to verify correct behaviour -#conf$addMonitors('y.censored') - -Rmcmc <<- buildMCMC(conf) - -Cmodel <<- compileNimble(model_pred) -Cmcmc <<- compileNimble(Rmcmc, project = model_pred) - -for(i in 1:length(y.ind)) { - ## ironically, here we have to "toggle" the value of y.ind[i] - ## this specifies that when y.ind[i] = 1, - ## indicator variable is set to 0, which specifies *not* to sample - valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i]) -} - + X.mod = rnorm(length(mu.f),mu.f,1), + X = rnorm(length(mu.f),mu.f,1), + y_star = rnorm(length(y.censored),0,1)) + + model_pred <- nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit, + constants = constants.tobit, inits = inits.pred, + name = 'base') + + model_pred$initializeInfo() + ## Adding X.mod,q,r as data for building model. 
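## Hedged sketch (hypothetical values, annotation only): the interval matrix
## built earlier in this patch assumes one <inputs> entry per observed
## variable, each carrying its own min_value/max_value tags, e.g.:
example_inputs <- list(
  list(min_value = "0", max_value = "9999"),  # e.g. a biomass pool
  list(min_value = "0", max_value = "1"))     # e.g. a fractional variable
interval_example <- as.matrix(purrr::map_dfr(
  example_inputs,
  ~ data.frame(lower = as.numeric(.x$min_value),
               upper = as.numeric(.x$max_value))))
## rows keep the order of the <inputs> entries; rownames are then set from
## names(obs.mean[[t]]) so each observation carries its own support bounds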
+ conf <- configureMCMC(model_pred, print=TRUE) + conf$addMonitors(c("X","X.mod","q","Q", "y_star","y.censored")) + ## [1] conjugate_dmnorm_dmnorm sampler: X[1:5] + + + if(FALSE){ ### Need this for when the state variables are on different scales like NPP and AGB + x.char <- paste0('X[1:',ncol(X),']') + conf$removeSampler(x.char) + propCov.means <- c(rep(1,9),1000)#signif(diag(Pf),1)#mean(unlist(lapply(obs.cov,FUN = function(x){diag(x)})))[choose]#c(rep(max(diag(Pf)),ncol(X)))# + if(length(propCov.means)!=ncol(X)) propCov.means <- c(propCov.means,rep(1,ncol(X)-length(Y))) + conf$addSampler(target =c(x.char), + control <- list(propCov = diag(ncol(X))*propCov.means), + type='RW_block') + } + + ## important! + ## this is needed for correct indexing later + samplerNumberOffset <<- length(conf$getSamplers()) + + for(i in 1:length(y.ind)) { + node <- paste0('y.censored[',i,']') + conf$addSampler(node, 'toggle', control=list(type='RW')) + ## could instead use slice samplers, or any combination thereof, e.g.: + ##conf$addSampler(node, 'toggle', control=list(type='slice')) + } + + conf$printSamplers() + + ## can monitor y.censored, if you wish, to verify correct behaviour + #conf$addMonitors('y.censored') + + Rmcmc <<- buildMCMC(conf) + + Cmodel <<- compileNimble(model_pred) + Cmcmc <<- compileNimble(Rmcmc, project = model_pred) + + for(i in 1:length(y.ind)) { + ## ironically, here we have to "toggle" the value of y.ind[i] + ## this specifies that when y.ind[i] = 1, + ## indicator variable is set to 0, which specifies *not* to sample + valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i]) + } + }else{ Cmodel$y.ind <- y.ind Cmodel$y.censored <- y.censored @@ -485,7 +487,7 @@ for(i in 1:length(y.ind)) { mu.a[1:9] <- colMeans(dat[, iX[1:9]] / adjusts) if(sum(mu.a[1:9])<=0) browser() } - + ystar.a <- colMeans(dat[, iystar]) Pa <- cov(dat[, iX]) Pa[is.na(Pa)] <- 0 @@ -505,7 +507,7 @@ for(i in 1:length(y.ind)) { if (n < length(mu.f)) { n <- length(mu.f) } - + V <- solve(q.bar) * n aqq <- V diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index 6c40633a890..9ce71e02433 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -97,7 +97,7 @@ rescaling_stateVars <- function(settings, X, multiply=TRUE) { purrr::map('variable.name')) %>% purrr::discard(is.null) - if(length(scaling.factors)==0) return(X) + if (length(scaling.factors) == 0) return(X) Y <- seq_len(ncol(X)) %>% From 8ea4eebd14029dbc74a0325dc1897c133a81b425 Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 7 Oct 2019 14:36:19 -0400 Subject: [PATCH 0508/2289] reformatting analysis file --- modules/assim.sequential/R/Analysis_sda.R | 146 +++++++++++----------- 1 file changed, 73 insertions(+), 73 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index c04eadd4d41..2ddf128f21e 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -177,14 +177,14 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 #where there are zero values so that #mu.f is in 'tobit space' in the full model constants.tobit2space <<- list(N = nrow(X), - J = length(mu.f)) + J = length(mu.f)) data.tobit2space <<- list(y.ind = x.ind, - y.censored = x.censored, - mu_0 = rep(0,length(mu.f)), - lambda_0 = diag(length(mu.f),length(mu.f)+1), - nu_0 = 3, - wts = wts)#some measure of prior 
obs
+                               y.censored = x.censored,
+                               mu_0 = rep(0,length(mu.f)),
+                               lambda_0 = diag(length(mu.f),length(mu.f)+1),
+                               nu_0 = 3,
+                               wts = wts)#some measure of prior obs
   inits.tobit2space <<- list(pf = cov(X),
                              muf = colMeans(X)) #set.seed(0)
@@ -263,7 +263,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100
   Pf <- matrix(colMeans(dat.tobit2space[, iPf]),ncol(X),ncol(X))
   
   #--- This is where the localization needs to happen - After imputing Pf
-  
+
   iycens <- grep("y.censored",colnames(dat.tobit2space))
   X.new <- matrix(colMeans(dat.tobit2space[,iycens]),nrow(X),ncol(X))
@@ -286,12 +286,12 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100
     bqq <- length(mu.f)
     aqq <- diag(length(mu.f)) * bqq #Q
   }
-  
+
   ### create matrix that describes the support for each observed state variable at time t
   interval <- matrix(NA, length(obs.mean[[t]]), 2)
-  # Each observed variable needs to have its own file tag under inputs
+  # Each observed variable needs to have its own file tag under inputs
   interval <-settings$state.data.assimilation$inputs %>%
-    purrr::map_dfr( ~ data.frame(
+    map_dfr( ~ data.frame(
       .x$'min_value' %>% as.numeric(),.x$'max_value' %>% as.numeric()
     )) %>% as.matrix()
@@ -332,7 +332,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100
       X_fcomp_start <- which_fcomp[1]
       X_fcomp_end <- which_fcomp[length(which_fcomp)]
      X_fcomp_model <- grep(colnames(X),pattern = 'AGB.pft')
-      
+
      fcomp_TRUE = TRUE
    }else{
      X_fcomp_start <- 0
@@ -381,70 +381,70 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100
   )
   dimensions.tobit <<- list(X = length(mu.f), X.mod = ncol(X),
-                            Q = c(length(mu.f),length(mu.f)),
-                            y_star = (length(y.censored)))
+                           Q = c(length(mu.f),length(mu.f)),
+                           y_star = (length(y.censored)))
   data.tobit <<- list(muf = as.vector(mu.f),
-                      pf = solve(Pf),
-                      aq = aqq, bq = bqq,
-                      y.ind = y.ind,
-                      y.censored = y.censored,
-                      r = R) #precision
+                     pf = solve(Pf),
+                     aq = aqq, bq = bqq,
+                     y.ind = y.ind,
+                     y.censored = y.censored,
+                     r = R) #precision
   inits.pred <<- list(q = diag(length(mu.f))*(length(mu.f)+1),
-                      X.mod = rnorm(length(mu.f),mu.f,1),
-                      X = rnorm(length(mu.f),mu.f,1),
-                      y_star = rnorm(length(y.censored),0,1))
-  
-  model_pred <- nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit,
-                            constants = constants.tobit, inits = inits.pred,
-                            name = 'base')
-  
-  model_pred$initializeInfo()
-  ## Adding X.mod,q,r as data for building model.
-  conf <- configureMCMC(model_pred, print=TRUE)
-  conf$addMonitors(c("X","X.mod","q","Q", "y_star","y.censored"))
-  ## [1] conjugate_dmnorm_dmnorm sampler: X[1:5]
-  
-  
-  if(FALSE){ ### Need this for when the state variables are on different scales like NPP and AGB
-    x.char <- paste0('X[1:',ncol(X),']')
-    conf$removeSampler(x.char)
-    propCov.means <- c(rep(1,9),1000)#signif(diag(Pf),1)#mean(unlist(lapply(obs.cov,FUN = function(x){diag(x)})))[choose]#c(rep(max(diag(Pf)),ncol(X)))#
-    if(length(propCov.means)!=ncol(X)) propCov.means <- c(propCov.means,rep(1,ncol(X)-length(Y)))
-    conf$addSampler(target =c(x.char),
-                    control <- list(propCov = diag(ncol(X))*propCov.means),
-                    type='RW_block')
-  }
-  
-  ## important!
- ## this is needed for correct indexing later - samplerNumberOffset <<- length(conf$getSamplers()) - - for(i in 1:length(y.ind)) { - node <- paste0('y.censored[',i,']') - conf$addSampler(node, 'toggle', control=list(type='RW')) - ## could instead use slice samplers, or any combination thereof, e.g.: - ##conf$addSampler(node, 'toggle', control=list(type='slice')) - } - - conf$printSamplers() - - ## can monitor y.censored, if you wish, to verify correct behaviour - #conf$addMonitors('y.censored') - - Rmcmc <<- buildMCMC(conf) - - Cmodel <<- compileNimble(model_pred) - Cmcmc <<- compileNimble(Rmcmc, project = model_pred) - - for(i in 1:length(y.ind)) { - ## ironically, here we have to "toggle" the value of y.ind[i] - ## this specifies that when y.ind[i] = 1, - ## indicator variable is set to 0, which specifies *not* to sample - valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i]) - } - + X.mod = rnorm(length(mu.f),mu.f,1), + X = rnorm(length(mu.f),mu.f,1), + y_star = rnorm(length(y.censored),0,1)) + +model_pred <- nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit, + constants = constants.tobit, inits = inits.pred, + name = 'base') + +model_pred$initializeInfo() +## Adding X.mod,q,r as data for building model. +conf <- configureMCMC(model_pred, print=TRUE) +conf$addMonitors(c("X","X.mod","q","Q", "y_star","y.censored")) +## [1] conjugate_dmnorm_dmnorm sampler: X[1:5] + + +if(FALSE){ ### Need this for when the state variables are on different scales like NPP and AGB + x.char <- paste0('X[1:',ncol(X),']') + conf$removeSampler(x.char) + propCov.means <- c(rep(1,9),1000)#signif(diag(Pf),1)#mean(unlist(lapply(obs.cov,FUN = function(x){diag(x)})))[choose]#c(rep(max(diag(Pf)),ncol(X)))# + if(length(propCov.means)!=ncol(X)) propCov.means <- c(propCov.means,rep(1,ncol(X)-length(Y))) + conf$addSampler(target =c(x.char), + control <- list(propCov = diag(ncol(X))*propCov.means), + type='RW_block') +} + +## important! 
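+## samplerNumberOffset is recorded before the toggle samplers are appended,
+## so that samplerFunctions[[samplerNumberOffset+i]] addresses the toggle
+## sampler attached to y.censored[i] in the loop further down: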
+## this is needed for correct indexing later +samplerNumberOffset <<- length(conf$getSamplers()) + +for(i in 1:length(y.ind)) { + node <- paste0('y.censored[',i,']') + conf$addSampler(node, 'toggle', control=list(type='RW')) + ## could instead use slice samplers, or any combination thereof, e.g.: + ##conf$addSampler(node, 'toggle', control=list(type='slice')) +} + +conf$printSamplers() + +## can monitor y.censored, if you wish, to verify correct behaviour +#conf$addMonitors('y.censored') + +Rmcmc <<- buildMCMC(conf) + +Cmodel <<- compileNimble(model_pred) +Cmcmc <<- compileNimble(Rmcmc, project = model_pred) + +for(i in 1:length(y.ind)) { + ## ironically, here we have to "toggle" the value of y.ind[i] + ## this specifies that when y.ind[i] = 1, + ## indicator variable is set to 0, which specifies *not* to sample + valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i]) +} + }else{ Cmodel$y.ind <- y.ind Cmodel$y.censored <- y.censored @@ -487,7 +487,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 mu.a[1:9] <- colMeans(dat[, iX[1:9]] / adjusts) if(sum(mu.a[1:9])<=0) browser() } - + ystar.a <- colMeans(dat[, iystar]) Pa <- cov(dat[, iX]) Pa[is.na(Pa)] <- 0 @@ -507,7 +507,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 if (n < length(mu.f)) { n <- length(mu.f) } - + V <- solve(q.bar) * n aqq <- V From a84b73cb13fce76bde58a5298e6b79bed711c7f3 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 7 Oct 2019 21:10:30 +0200 Subject: [PATCH 0509/2289] remove message fixed in #2403 --- modules/data.atmosphere/tests/Rcheck_reference.log | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index 336d032323e..276dfa16a76 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -16,18 +16,12 @@ * checking for portable file names ... OK * checking for sufficient/correct file permissions ... OK * checking serialization versions ... OK -* checking whether package ‘PEcAn.data.atmosphere’ can be installed ... WARNING -Found the following significant warnings: - Warning: replacing previous import ‘dplyr::last’ by ‘xts::last’ when loading ‘PEcAn.data.atmosphere’ - Warning: replacing previous import ‘dplyr::first’ by ‘xts::first’ when loading ‘PEcAn.data.atmosphere’ -See ‘/tmp/Rtmp02RC5y/PEcAn.data.atmosphere.Rcheck/00install.out’ for details. +* checking whether package ‘PEcAn.data.atmosphere’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE License components with restrictions not permitted: FreeBSD + file LICENSE -Package listed in more than one of Depends, Imports, Suggests, Enhances: - ‘xts’ A package should be listed in only one of these fields. * checking top-level files ... OK * checking for left-over files ... 
OK From ae0bc8be4215684f08905d1f3db19db52d5cfe63 Mon Sep 17 00:00:00 2001 From: hamzed Date: Mon, 7 Oct 2019 15:25:48 -0400 Subject: [PATCH 0510/2289] Left over before checkout --- modules/assim.sequential/R/Helper.functions.R | 2 +- modules/assim.sequential/R/sda.enkf_refactored.R | 5 ++--- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index 9ce71e02433..522380120a6 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -80,7 +80,7 @@ SDA_control <- #' #' @param settings pecan xml settings where state variables have the scaling_factor tag #' @param X Any Matrix with column names as variable names -#' @description This function uses a set of scaling factors defined in the pecan XML to scale a given matrix (R or X in the case of SDA) +#' @description This function uses a set of scaling factors defined in the pecan XML to scale a given matrix #' @return #' @export #' diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 9b0006baa8d..7f711a0284c 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -366,9 +366,7 @@ sda.enkf <- function(settings, #--- Reformating X X <- do.call(rbind, X) - FORECAST[[t]] <- X - mu.f <- colMeans(X) - Pf <- cov(X) + if(sum(X,na.rm=T) == 0){ logger.severe(paste('NO FORECAST for',obs.times[t],'Check outdir logfiles or read restart. Do you have the right variable names?')) @@ -448,6 +446,7 @@ sda.enkf <- function(settings, obs.cov=obs.cov) #Reading back mu.f/Pf and mu.a/Pa + FORECAST[[t]] <- X #Forecast mu.f <- enkf.params[[t]]$mu.f Pf <- enkf.params[[t]]$Pf From ec8bdca636312915afbc1f15f5a13e5c432c2217 Mon Sep 17 00:00:00 2001 From: hamzed Date: Tue, 8 Oct 2019 10:45:09 -0400 Subject: [PATCH 0511/2289] Travis fixes/ ED restart Rsync --- models/ed/R/write_restart.ED2.R | 2 +- modules/assim.sequential/DESCRIPTION | 1 + modules/assim.sequential/R/Analysis_sda.R | 2 +- modules/assim.sequential/R/Helper.functions.R | 4 ++-- modules/assim.sequential/man/rescaling_stateVars.Rd | 2 +- 5 files changed, 6 insertions(+), 5 deletions(-) diff --git a/models/ed/R/write_restart.ED2.R b/models/ed/R/write_restart.ED2.R index 6713e63a786..011c5fc5899 100644 --- a/models/ed/R/write_restart.ED2.R +++ b/models/ed/R/write_restart.ED2.R @@ -217,7 +217,7 @@ write_restart.ED2 <- function(outdir, runid, start.time, stop.time, # copy the history file with new states and new timestamp to remote # it's OK, because we backed up the original above - PEcAn.remote::remote.copy.to(settings$host, histfile, remote_histfile) + #PEcAn.remote::remote.copy.to(settings$host, histfile, remote_histfile) ##### Modify ED2IN ed2in_path <- file.path(rundir, runid, "ED2IN") diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index 82237536582..b267b50a12d 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -11,6 +11,7 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific streamline the interaction between data and models, and to improve the efficacy of scientific investigation. 
Imports: + dplyr, PEcAn.logger, PEcAn.remote, plyr (>= 1.8.4), diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 2ddf128f21e..a928f67357f 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -291,7 +291,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 interval <- matrix(NA, length(obs.mean[[t]]), 2) # Each observe variable needs to have its own file tag under inputs interval <-settings$state.data.assimilation$inputs %>% - map_dfr( ~ data.frame( + purrr::map_dfr( ~ data.frame( .x$'min_value' %>% as.numeric(),.x$'max_value' %>% as.numeric() )) %>% as.matrix() diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index 522380120a6..eaa97ae036b 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -101,7 +101,7 @@ rescaling_stateVars <- function(settings, X, multiply=TRUE) { Y <- seq_len(ncol(X)) %>% - map_dfc(function(.x) { + purrr::map_dfc(function(.x) { if(colnames(X)[.x] %in% names(scaling.factors)) { # This function either multiplies or divides @@ -118,7 +118,7 @@ rescaling_stateVars <- function(settings, X, multiply=TRUE) { # X for example has `site` attribute attr.X <- names(attributes(X)) %>% - discard( ~ .x %in% c("dim", "dimnames")) + purrr::discard( ~ .x %in% c("dim", "dimnames")) if (length(attr.X) > 0) { attr(Y, attr.X) <- attr(X, attr.X) diff --git a/modules/assim.sequential/man/rescaling_stateVars.Rd b/modules/assim.sequential/man/rescaling_stateVars.Rd index 1491536a2b8..8447e644a80 100644 --- a/modules/assim.sequential/man/rescaling_stateVars.Rd +++ b/modules/assim.sequential/man/rescaling_stateVars.Rd @@ -12,5 +12,5 @@ rescaling_stateVars(settings, X, multiply = TRUE) \item{X}{Any Matrix with column names as variable names} } \description{ -This function uses a set of scaling factors defined in the pecan XML to scale a given matrix (R or X in the case of SDA) +This function uses a set of scaling factors defined in the pecan XML to scale a given matrix } From b4ea7bedced167b73cca41cb09e1e37c797e5b81 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 8 Oct 2019 11:25:34 -0400 Subject: [PATCH 0512/2289] Updated workflow so that inputs for met files come from today's run- allows restart to work using yesterday's SDA run --- .../assim.sequential/R/sda.enkf_refactored.R | 7 +++---- .../inst/WillowCreek/gefs.sipnet.template.xml | 14 +------------- .../inst/WillowCreek/workflow.template.R | 18 +++++++++++++----- 3 files changed, 17 insertions(+), 22 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 7ca60647b97..78cd2dcc8a0 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -113,7 +113,7 @@ sda.enkf <- function(settings, start.time = start.cut, stop.time = lubridate::ymd_hms(settings$state.data.assimilation$end.date, truncated = 3, tz="UTC"), inputs = settings$run$inputs$met$path[[i]], - overwrite=F)) + overwrite=T)) } } if (control$debug) browser() @@ -211,7 +211,7 @@ sda.enkf <- function(settings, file.path(file.path(settings$outdir,"SDA"),paste0(assimyears[t],"/",files.last.sda))) } - if(length(FORECAST) == length(ANALYSIS) && length(FORECAST) > 0) t = t + 1 #if you made it through the forecast and the analysis in t and failed on the analysis in t+1 so you didn't save t + 
if(length(FORECAST) == length(ANALYSIS) && length(FORECAST) > 0) t = t + length(FORECAST) #if you made it through the forecast and the analysis in t and failed on the analysis in t+1 so you didn't save t }else{ t = 1 @@ -408,8 +408,7 @@ sda.enkf <- function(settings, R <- as.matrix(obs.cov[[t]][choose.cov,choose.cov]) R[is.na(R)]<-0.1 - if (control$debug) - browser() + if (control$debug) browser() # making the mapping matrix #TO DO: doesn't work unless it's one to one diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index 34aaba8e067..821fa6feafa 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -16,31 +16,19 @@ Qle - MgC/ha/yr + MW/m2 -9999 9999 TotSoilCarb KgC/m^2 - - Qle - MW/m2 - -9999 - 9999 - SoilMoistFrac m/m 0 9999 - - SoilMoistFrac - - 0 - 9999 - Litter gC/m^2 diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index 247135dd3b1..02056a33553 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -41,7 +41,7 @@ if (is.na(args[4])){ } outputPath <- "/fs/data3/kzarada/ouput" xmlTempName <-"gefs.sipnet.template.xml" -restart <-FALSE +restart <-TRUE nodata <- FALSE days.obs <- 3 #how many of observed data to include -- not including today setwd(outputPath) @@ -290,24 +290,32 @@ if(restart == TRUE){ load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) temp <- as.list(temp) - + #we want ANALYSIS, FORECAST, and enkf.parms to match up with how many days obs data we have + # +2 for days.obs since today is not included in the number. 
So we want to keep today and any other obs data if(length(temp$ANALYSIS) > 1){ - for(i in rev(1:(length(temp$ANALYSIS)-1))){ + for(i in rev((days.obs + 2):length(temp$ANALYSIS))){ temp$ANALYSIS[[i]] <- NULL } - for(i in rev(1:(length(temp$FORECAST)-1))){ + for(i in rev((days.obs + 2):length(temp$FORECAST))){ temp$FORECAST[[i]] <- NULL } - for(i in rev(1:(length(temp$enkf.params)-1))){ + for(i in rev((days.obs + 2):length(temp$enkf.params))){ temp$enkf.params[[i]] <- NULL } } temp$t = 1 + #change inputs path to match sampling met paths + + for(i in 1: length(temp$inputs$ids)){ + + temp$inputs$samples[i] <- settings$run$inputs$met$path[temp$inputs$ids[i]] + + } temp1<- new.env() list2env(temp, envir = temp1) From bd1141e3926ddfd115ebf2c5b44bfff5030f1af7 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 8 Oct 2019 14:21:12 -0400 Subject: [PATCH 0513/2289] updated xml- there was an error in the variables --- .../inst/WillowCreek/gefs.sipnet.template.xml | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index 821fa6feafa..c95b6d04073 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -23,18 +23,22 @@ TotSoilCarb KgC/m^2 + 0 + 9999 + SoilMoistFrac m/m 0 - 9999 - - + 1 + + Litter gC/m^2 0 9999 - + + SWE kg/m^2 0 From 7ef6577231f01f436e3943a735e6211b8b2da21c Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 8 Oct 2019 14:49:05 -0400 Subject: [PATCH 0514/2289] addressing PR questions --- models/biocro/DESCRIPTION | 2 +- modules/assim.sequential/R/sda_plotting.R | 6 +---- .../inst/NEFI/graphs_timeframe.R | 23 +++++++++++++++---- .../inst/WillowCreek/gefs.sipnet.template.xml | 2 +- .../inst/WillowCreek/workflow.template.R | 4 ++-- workflow_id_log.txt | 1 - 6 files changed, 24 insertions(+), 14 deletions(-) delete mode 100644 workflow_id_log.txt diff --git a/models/biocro/DESCRIPTION b/models/biocro/DESCRIPTION index dcea4dddd3a..3c8426270c3 100644 --- a/models/biocro/DESCRIPTION +++ b/models/biocro/DESCRIPTION @@ -35,4 +35,4 @@ Copyright: Energy Biosciences Institute, Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 6.1.1 \ No newline at end of file +RoxygenNote: 6.1.1 diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 99b1f7688f5..7155a89ad33 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -315,12 +315,10 @@ postana.bias.plotting.sda.corr<-function(t, obs.times, X, aqq, bqq){ post.analysis.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS, plot.title=NULL){ - t1<- 1 + t1 <- 1 #Defining some colors ready.OBS<-NULL generate_colors_sda() - #ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, - # function(x) { x })[2, ], use.names = FALSE) var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") #---- #Analysis & Forcast cleaning and STAT @@ -428,8 +426,6 @@ post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.co t1 <- 1 #Defining some colors generate_colors_sda() - #ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, - # function(x) { x })[2, ], use.names = FALSE) var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', 
"variable.name") #rearranging the forcast and analysis data diff --git a/modules/assim.sequential/inst/NEFI/graphs_timeframe.R b/modules/assim.sequential/inst/NEFI/graphs_timeframe.R index 254eeb9931b..570fab316c6 100644 --- a/modules/assim.sequential/inst/NEFI/graphs_timeframe.R +++ b/modules/assim.sequential/inst/NEFI/graphs_timeframe.R @@ -64,7 +64,8 @@ con <- bety$con #Connection to the database. dplyr returns a list. # Identify the workflow with the proper information if (!is.null(start_date)) { - workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE start_date='", format(start_date, "%Y-%m-%d %H:%M:%S"), + workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE start_date='", + format(start_date, "%Y-%m-%d %H:%M:%S"), "' ORDER BY id"), con) } else { workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE id='", in_wid, "'"), con) @@ -209,10 +210,18 @@ neeplot <- ggplot(needf) + xlim(frame_start, frame_end) + theme(axis.text.x=element_text(angle=60, hjust=1)) + scale_colour_manual(name='Legend', values=c("predicted mean"="lightskyblue1", "observed data"="firebrick4")) + - scale_fill_manual(name=element_blank(), values=c("Spread of data (excluding outliers)"="azure4", "95% confidence interval" = "blue3", "mean"="lightskyblue1")) + + scale_fill_manual(name=element_blank(), + values=c("Spread of data (excluding outliers)"="azure4", + "95% confidence interval" = "blue3", "mean"="lightskyblue1")) + scale_y_continuous(name="NEE (kg C m-2 s-1)", limits=c(nee_lower, nee_upper)) + theme_linedraw() + - theme(plot.title = element_text(hjust = 0.5, size = 16), legend.title = element_text(size = 14), legend.text = element_text(size = 12), axis.text.x = element_text(size = 14), axis.text.y = element_text(size = 14), axis.title.x = element_text(size = 14), axis.title.y = element_text(size = 14)) + theme(plot.title = element_text(hjust = 0.5, size = 16), + legend.title = element_text(size = 14), + legend.text = element_text(size = 12), + axis.text.x = element_text(size = 14), + axis.text.y = element_text(size = 14), + axis.title.x = element_text(size = 14), + axis.title.y = element_text(size = 14)) qleplot <- ggplot(qledf) + geom_ribbon(aes(x=Time, ymin=qlelower95, ymax=qleupper95, fill="95% confidence interval"), alpha = 0.4) + @@ -225,7 +234,13 @@ qleplot <- ggplot(qledf) + scale_fill_manual(name= element_blank(), values=c("95% confidence interval" = "blue3")) + scale_y_discrete(name="LE (W m-2 s-1)", limits = c(qle_lower, qle_upper)) + theme_linedraw() + - theme(plot.title = element_text(hjust = 0.5, size = 16), legend.title = element_text(size = 14), legend.text = element_text(size = 12), axis.text.x = element_text(size = 14), axis.text.y = element_text(size = 14), axis.title.x = element_text(size = 14), axis.title.y = element_text(size = 14)) + theme(plot.title = element_text(hjust = 0.5, size = 16), + legend.title = element_text(size = 14), + legend.text = element_text(size = 12), + axis.text.x = element_text(size = 14), + axis.text.y = element_text(size = 14), + axis.title.x = element_text(size = 14), + axis.title.y = element_text(size = 14)) if (!dir.exists(outfolder)) { diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml index c95b6d04073..8170135886e 100644 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml @@ -69,7 +69,7 @@ psql-pecan.bu.edu bety PostgreSQL 
- true + TRUE /fs/data3/kzarada/pecan.data/dbfiles/ diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R index 02056a33553..cff0b118852 100644 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -122,7 +122,7 @@ prep.data<-prep.data$obs # if there is infinte value then take it out - here we want to remove any that just have one NA in the observed data -prep.data<-prep.data %>% +prep.data <- prep.data %>% map(function(day.data){ #cheking the mean nan.mean <- which(is.infinite(day.data$means) | is.nan(day.data$means) | is.na(day.data$means)) @@ -131,7 +131,7 @@ prep.data<-prep.data %>% day.data$means <- day.data$means[-nan.mean] day.data$covs <- day.data$covs[-nan.mean, -nan.mean] %>% as.matrix() %>% - `colnames<-`(c(colnames(day.data$covs)[-nan.mean])) + `colnames <-`(c(colnames(day.data$covs)[-nan.mean])) } day.data }) diff --git a/workflow_id_log.txt b/workflow_id_log.txt deleted file mode 100644 index 727fb5efd6c..00000000000 --- a/workflow_id_log.txt +++ /dev/null @@ -1 +0,0 @@ -1000010061 From 5b458b44d1b765a2b65bc1111896eaf32f2d28ad Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 8 Oct 2019 16:22:58 -0400 Subject: [PATCH 0515/2289] re-adding check in --- modules/assim.sequential/R/Adjustment.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/R/Adjustment.R b/modules/assim.sequential/R/Adjustment.R index 306f653b2f2..f2ddd478660 100644 --- a/modules/assim.sequential/R/Adjustment.R +++ b/modules/assim.sequential/R/Adjustment.R @@ -46,8 +46,8 @@ adj.ens<-function(Pf, X, mu.f, mu.a, Pa){ } - #if(sum(mu.a - colMeans(X_a)) > 1 | sum(mu.a - colMeans(X_a)) < -1) logger.warn('Problem with ensemble adjustment (1)') - #if(sum(diag(Pa) - diag(cov(X_a))) > 5 | sum(diag(Pa) - diag(cov(X_a))) < -5) logger.warn('Problem with ensemble adjustment (2)') + if(sum(mu.a - colMeans(X_a)) > 1 | sum(mu.a - colMeans(X_a)) < -1) logger.warn('Problem with ensemble adjustment (1)') + if(sum(diag(Pa) - diag(cov(X_a))) > 5 | sum(diag(Pa) - diag(cov(X_a))) < -5) logger.warn('Problem with ensemble adjustment (2)') analysis <- as.data.frame(X_a) From 504763d544b15d9692614a26b06b961773973de4 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 10 Oct 2019 12:27:29 +0200 Subject: [PATCH 0516/2289] avoid looping to create result df --- base/utils/R/convert.input.R | 37 +++++++++++------------------------- 1 file changed, 11 insertions(+), 26 deletions(-) diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R index ff036039d21..e43e33e4b94 100644 --- a/base/utils/R/convert.input.R +++ b/base/utils/R/convert.input.R @@ -638,32 +638,17 @@ convert.input <- fname <- list.files(outfolder) } } - - # settings$run$inputs$path <- outputfile - # what if there is more than 1 output file? 
- rows <- length(fname) - result <- data.frame(file = character(rows), - host = character(rows), - mimetype = character(rows), - formatname = character(rows), - startdate = character(rows), - enddate = character(rows), - stringsAsFactors = FALSE) - - - - for (i in seq_len(rows)) { - old.file <- file.path(dbfile$file_path, files[i]) - new.file <- file.path(outfolder, fname[i]) - - # create array with results - result$file[i] <- new.file - result$host[i] <- PEcAn.remote::fqdn() - result$startdate[i] <- paste(input$start_date, "00:00:00") - result$enddate[i] <- paste(input$end_date, "23:59:59") - result$mimetype[i] <- mimetype - result$formatname[i] <- formatname - } + + result <- data.frame( + # contains one row for each file in fname + file = file.path(outfolder, fname), + host = PEcAn.remote::fqdn(), + mimetype = mimetype, + formatname = formatname, + startdate = paste(input$start_date, "00:00:00"), + enddate = paste(input$end_date, "23:59:59"), + stringsAsFactors = FALSE) + } else if (conversion == "local.remote") { # perform conversion on local or remote host From 30f696abed6a25b958ce782df052c42e3f5c86e4 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Thu, 10 Oct 2019 09:47:29 -0400 Subject: [PATCH 0517/2289] Created shell file to run SDA automatically, updated workflow to work with automatic runs --- .../inst/WillowCreek/forecast.sh | 1 + .../inst/WillowCreek/gefs.sipnet.template.xml | 0 .../inst/WillowCreek/workflow.template.R | 34 ++++++++++--------- 3 files changed, 19 insertions(+), 16 deletions(-) create mode 100755 modules/assim.sequential/inst/WillowCreek/forecast.sh mode change 100644 => 100755 modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml mode change 100644 => 100755 modules/assim.sequential/inst/WillowCreek/workflow.template.R diff --git a/modules/assim.sequential/inst/WillowCreek/forecast.sh b/modules/assim.sequential/inst/WillowCreek/forecast.sh new file mode 100755 index 00000000000..bff7dad77d2 --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/forecast.sh @@ -0,0 +1 @@ +Rscript "/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/workflow.template.R" \ No newline at end of file diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml old mode 100644 new mode 100755 diff --git a/modules/assim.sequential/inst/WillowCreek/workflow.template.R b/modules/assim.sequential/inst/WillowCreek/workflow.template.R old mode 100644 new mode 100755 index cff0b118852..b4adcacce38 --- a/modules/assim.sequential/inst/WillowCreek/workflow.template.R +++ b/modules/assim.sequential/inst/WillowCreek/workflow.template.R @@ -1,20 +1,20 @@ # ---------------------------------------------------------------------- #------------------------------------------ Load required libraries----- # ---------------------------------------------------------------------- -library(PEcAn.all) -library(PEcAn.utils) -library(RCurl) -library(REddyProc) -library(tidyverse) -library(furrr) -library(R.utils) -library(dynutils) +library("PEcAn.all") +library("PEcAn.utils") +library("RCurl") +library("REddyProc") +library("tidyverse") +library("furrr") +library("R.utils") +library("dynutils") plan(multiprocess) # ---------------------------------------------------------------------------------------------- #------------------------------------------ That's all we need xml path and the out folder ----- # 
---------------------------------------------------------------------------------------------- -args <- commandArgs(trailingOnly = TRUE) +args = c("/fs/data3/kzarada/ouput", FALSE, "gefs.sipnet.template.xml", TRUE, 3) if (is.na(args[1])){ outputPath <- "/fs/data3/kzarada/ouput" @@ -35,15 +35,17 @@ if (is.na(args[3])){ } if (is.na(args[4])){ - restart <-FALSE + restart <-TRUE } else { restart <- args[4] } -outputPath <- "/fs/data3/kzarada/ouput" -xmlTempName <-"gefs.sipnet.template.xml" -restart <-TRUE -nodata <- FALSE -days.obs <- 3 #how many of observed data to include -- not including today + +if (is.na(args[5])){ + days.obs <- 3 #how many of observed data to include -- not including today +} else { + days.obs <- as.numeric(args[5]) +} + setwd(outputPath) #------------------------------------------------------------------------------------------------ #------------------------------------------ sourcing the required tools ------------------------- @@ -347,7 +349,7 @@ if(restart == TRUE){ copyDirectory(from = file.path(restart.path, "out/"), to = file.path(settings$outdir, "out/")) } #restart == TRUE -# -------------------------------------------------------------------------------------------------- + # -------------------------------------------------------------------------------------------------- #--------------------------------- Run state data assimilation ------------------------------------- # -------------------------------------------------------------------------------------------------- From 7d84264228bc440bb3ea8ccec8e6893735b6b1a5 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 10 Oct 2019 16:57:26 +0200 Subject: [PATCH 0518/2289] describe license correctly (freeBSD is 2-clause, we use 3-clause) --- api/DESCRIPTION | 2 +- base/all/DESCRIPTION | 2 +- base/db/DESCRIPTION | 2 +- base/logger/DESCRIPTION | 2 +- base/qaqc/DESCRIPTION | 2 +- base/remote/DESCRIPTION | 2 +- base/settings/DESCRIPTION | 2 +- base/utils/DESCRIPTION | 2 +- base/visualization/DESCRIPTION | 2 +- base/workflow/DESCRIPTION | 2 +- models/basgra/DESCRIPTION | 2 +- models/biocro/DESCRIPTION | 2 +- models/cable/DESCRIPTION | 2 +- models/clm45/DESCRIPTION | 2 +- models/dalec/DESCRIPTION | 2 +- models/dvmdostem/DESCRIPTION | 2 +- models/ed/DESCRIPTION | 2 +- models/fates/DESCRIPTION | 2 +- models/gday/DESCRIPTION | 2 +- models/jules/DESCRIPTION | 2 +- models/linkages/DESCRIPTION | 2 +- models/lpjguess/DESCRIPTION | 2 +- models/maat/DESCRIPTION | 2 +- models/maespa/DESCRIPTION | 2 +- models/preles/DESCRIPTION | 2 +- models/sipnet/DESCRIPTION | 2 +- models/template/DESCRIPTION | 2 +- modules/allometry/DESCRIPTION | 2 +- modules/assim.batch/DESCRIPTION | 2 +- modules/assim.sequential/DESCRIPTION | 2 +- modules/benchmark/DESCRIPTION | 2 +- modules/data.atmosphere/DESCRIPTION | 2 +- modules/data.hydrology/DESCRIPTION | 2 +- modules/data.land/DESCRIPTION | 2 +- modules/data.mining/DESCRIPTION | 2 +- modules/data.remote/DESCRIPTION | 2 +- modules/emulator/DESCRIPTION | 2 +- modules/meta.analysis/DESCRIPTION | 2 +- modules/photosynthesis/DESCRIPTION | 2 +- modules/priors/DESCRIPTION | 2 +- modules/rtm/DESCRIPTION | 2 +- modules/uncertainty/DESCRIPTION | 2 +- shiny/BenchmarkReport/DESCRIPTION | 2 +- shiny/BrownDog/DESCRIPTION | 2 +- shiny/Data-Ingest/DESCRIPTION | 2 +- shiny/Elicitation/DESCRIPTION | 2 +- shiny/Pecan.depend/DESCRIPTION | 2 +- shiny/ViewMet/DESCRIPTION | 2 +- shiny/global-sensitivity/DESCRIPTION | 2 +- shiny/workflowPlot/DESCRIPTION | 2 +- 50 files changed, 50 insertions(+), 50 deletions(-) diff --git 
a/api/DESCRIPTION b/api/DESCRIPTION index b9fdc9019cf..4502924336c 100644 --- a/api/DESCRIPTION +++ b/api/DESCRIPTION @@ -15,7 +15,7 @@ Imports: Suggests: magrittr, ncdf4 -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true RoxygenNote: 6.1.0 diff --git a/base/all/DESCRIPTION b/base/all/DESCRIPTION index 0df2902e104..2da181d9343 100644 --- a/base/all/DESCRIPTION +++ b/base/all/DESCRIPTION @@ -56,7 +56,7 @@ Suggests: PEcAn.allometry, PEcAn.photosynthesis, testthat -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/base/db/DESCRIPTION b/base/db/DESCRIPTION index 50b32ab7e6d..06e0e8733eb 100644 --- a/base/db/DESCRIPTION +++ b/base/db/DESCRIPTION @@ -69,7 +69,7 @@ Suggests: rcrossref, testthat (>= 2.0.0), tidyverse -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 62ee18c7062..b40f9d4d6aa 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -8,7 +8,7 @@ Author: Rob Kooper, Alexey Shiklomanov Maintainer: Alexey Shiklomanov Description: Special logger functions for tracking execution status and the environment. Suggests: testthat -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true RoxygenNote: 6.1.1 diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION index 94f3b78f402..1f0e79269cc 100644 --- a/base/qaqc/DESCRIPTION +++ b/base/qaqc/DESCRIPTION @@ -14,7 +14,7 @@ Imports: PEcAn.logger Suggests: testthat -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/base/remote/DESCRIPTION b/base/remote/DESCRIPTION index 23b195b4466..7f910564275 100644 --- a/base/remote/DESCRIPTION +++ b/base/remote/DESCRIPTION @@ -17,7 +17,7 @@ Suggests: testthat, tools, getPass -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true Roxygen: list(markdown = TRUE) diff --git a/base/settings/DESCRIPTION b/base/settings/DESCRIPTION index 9c13adf3139..ced586e316e 100644 --- a/base/settings/DESCRIPTION +++ b/base/settings/DESCRIPTION @@ -6,7 +6,7 @@ Maintainer: David LeBauer Author: David LeBauer, Rob Kooper Version: 1.7.1 Date: 2019-09-05 -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/base/utils/DESCRIPTION b/base/utils/DESCRIPTION index 14795f1c5f7..894a3382d48 100644 --- a/base/utils/DESCRIPTION +++ b/base/utils/DESCRIPTION @@ -50,7 +50,7 @@ Suggests: sp, testthat (>= 2.0.0), xtable -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyData: true Encoding: UTF-8 diff --git a/base/visualization/DESCRIPTION b/base/visualization/DESCRIPTION index b4491217f98..7d8ea742b5e 100644 --- a/base/visualization/DESCRIPTION +++ b/base/visualization/DESCRIPTION @@ -41,7 +41,7 @@ Suggests: grid, png, testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/base/workflow/DESCRIPTION b/base/workflow/DESCRIPTION index a596475435a..e2879c0929c 100644 --- a/base/workflow/DESCRIPTION +++ b/base/workflow/DESCRIPTION @@ -22,7 +22,7 @@ Description: The Predictive Ecosystem Carbon Analyzer models, and to improve the efficacy of scientific investigation. 
This package provides workhorse functions that can be used to run the major steps of a PEcAn analysis. -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Imports: dplyr, PEcAn.data.atmosphere, diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION index a9cc15a331a..0e72ff85295 100644 --- a/models/basgra/DESCRIPTION +++ b/models/basgra/DESCRIPTION @@ -14,7 +14,7 @@ Imports: Suggests: testthat (>= 1.0.2) OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/biocro/DESCRIPTION b/models/biocro/DESCRIPTION index 3c8426270c3..9d7d6b69dea 100644 --- a/models/biocro/DESCRIPTION +++ b/models/biocro/DESCRIPTION @@ -30,7 +30,7 @@ Suggests: RPostgreSQL Remotes: github::ebimodeling/biocro -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Energy Biosciences Institute, Authors LazyLoad: yes LazyData: FALSE diff --git a/models/cable/DESCRIPTION b/models/cable/DESCRIPTION index 6195031b822..bd15089bf54 100644 --- a/models/cable/DESCRIPTION +++ b/models/cable/DESCRIPTION @@ -15,7 +15,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: CABLE OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/clm45/DESCRIPTION b/models/clm45/DESCRIPTION index 4b8d2b8675e..199c2ec7bd3 100644 --- a/models/clm45/DESCRIPTION +++ b/models/clm45/DESCRIPTION @@ -20,7 +20,7 @@ Imports: ncdf4 (>= 1.15) Suggests: testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/dalec/DESCRIPTION b/models/dalec/DESCRIPTION index b77ddb43921..5aa012e7897 100644 --- a/models/dalec/DESCRIPTION +++ b/models/dalec/DESCRIPTION @@ -19,7 +19,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: dalec OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/dvmdostem/DESCRIPTION b/models/dvmdostem/DESCRIPTION index edb48bd5838..ab5b5957465 100644 --- a/models/dvmdostem/DESCRIPTION +++ b/models/dvmdostem/DESCRIPTION @@ -17,7 +17,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: dvmdostem OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/ed/DESCRIPTION b/models/ed/DESCRIPTION index 01cd211d658..d97fbe38903 100644 --- a/models/ed/DESCRIPTION +++ b/models/ed/DESCRIPTION @@ -43,7 +43,7 @@ Imports: XML (>= 3.98-1.4) Suggests: testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/fates/DESCRIPTION b/models/fates/DESCRIPTION index 48c15220499..1199c7399e5 100644 --- a/models/fates/DESCRIPTION +++ b/models/fates/DESCRIPTION @@ -22,7 +22,7 @@ Imports: ncdf4 (>= 1.15) Suggests: testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/gday/DESCRIPTION b/models/gday/DESCRIPTION index a824f6a0249..2273f78e65b 100644 --- a/models/gday/DESCRIPTION +++ b/models/gday/DESCRIPTION @@ -19,7 +19,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: GDAY OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: TRUE diff --git 
a/models/jules/DESCRIPTION b/models/jules/DESCRIPTION index 8dff5c157bb..17c2e0ab160 100644 --- a/models/jules/DESCRIPTION +++ b/models/jules/DESCRIPTION @@ -19,7 +19,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: JULES OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/linkages/DESCRIPTION b/models/linkages/DESCRIPTION index afa46203f79..6baa0f33c92 100644 --- a/models/linkages/DESCRIPTION +++ b/models/linkages/DESCRIPTION @@ -24,7 +24,7 @@ Remotes: github::araiho/linkages_package SystemRequirements: LINKAGES OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/lpjguess/DESCRIPTION b/models/lpjguess/DESCRIPTION index 1d7a3dcc905..8c28d4202f0 100644 --- a/models/lpjguess/DESCRIPTION +++ b/models/lpjguess/DESCRIPTION @@ -19,7 +19,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: LPJ-GUESS model OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/maat/DESCRIPTION b/models/maat/DESCRIPTION index e7c97ec08b1..091cbb3137b 100644 --- a/models/maat/DESCRIPTION +++ b/models/maat/DESCRIPTION @@ -23,7 +23,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: MAAT OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/maespa/DESCRIPTION b/models/maespa/DESCRIPTION index 29c37a54645..17a11b3bad7 100644 --- a/models/maespa/DESCRIPTION +++ b/models/maespa/DESCRIPTION @@ -26,7 +26,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: MAESPA ecosystem model OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/preles/DESCRIPTION b/models/preles/DESCRIPTION index 7b178bf53b9..6636f7a793d 100644 --- a/models/preles/DESCRIPTION +++ b/models/preles/DESCRIPTION @@ -29,7 +29,7 @@ Suggests: Remotes: github::MikkoPeltoniemi/Rpreles OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/sipnet/DESCRIPTION b/models/sipnet/DESCRIPTION index 442532eb0bc..4883a212b62 100644 --- a/models/sipnet/DESCRIPTION +++ b/models/sipnet/DESCRIPTION @@ -25,7 +25,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: SIPNET ecosystem model OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/models/template/DESCRIPTION b/models/template/DESCRIPTION index 0426698d3bb..d4d26c54292 100644 --- a/models/template/DESCRIPTION +++ b/models/template/DESCRIPTION @@ -15,7 +15,7 @@ Suggests: testthat (>= 1.0.2) SystemRequirements: ModelName OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/allometry/DESCRIPTION b/modules/allometry/DESCRIPTION index cbcfeffe78e..e5549e6204e 100644 --- a/modules/allometry/DESCRIPTION +++ b/modules/allometry/DESCRIPTION @@ -18,7 +18,7 @@ Imports: Suggests: testthat (>= 1.0.2), PEcAn.DB -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index 74b9636afa7..a9417d929a9 
100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -38,7 +38,7 @@ Imports: XML Suggests: testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index 82237536582..4e69f86666f 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -24,7 +24,7 @@ Imports: coda Suggests: testthat -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 1d74411002c..c4d2a27ecf1 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -34,7 +34,7 @@ Imports: zoo Suggests: testthat (>= 2.0.0) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 222deafb733..d17608fe1c9 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -64,7 +64,7 @@ Suggests: Remotes: github::ropensci/geonames, github::ropensci/nneo -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/data.hydrology/DESCRIPTION b/modules/data.hydrology/DESCRIPTION index 17541bec204..9618068bee8 100644 --- a/modules/data.hydrology/DESCRIPTION +++ b/modules/data.hydrology/DESCRIPTION @@ -22,7 +22,7 @@ Imports: PEcAn.utils Suggests: testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index 0cc9d7a9bb4..aeb316a4e77 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -43,7 +43,7 @@ Suggests: rgdal, RPostgreSQL, testthat (>= 1.0.2), -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/data.mining/DESCRIPTION b/modules/data.mining/DESCRIPTION index b17bceee7db..4d812539850 100644 --- a/modules/data.mining/DESCRIPTION +++ b/modules/data.mining/DESCRIPTION @@ -13,7 +13,7 @@ Imports: Suggests: PEcAn.utils, testthat (>= 1.0.2) -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index ce0ca1422d1..ccbe7f598da 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -19,7 +19,7 @@ Suggests: ggplot2, rgdal, reshape -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION index 2faca7a3ef4..a5296c46a0a 100644 --- a/modules/emulator/DESCRIPTION +++ b/modules/emulator/DESCRIPTION @@ -17,6 +17,6 @@ Imports: Description: Implementation of a Gaussian Process model (both likelihood and bayesian approaches) for kriging and model emulation. Includes functions for sampling design and prediction. 
-License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Encoding: UTF-8 RoxygenNote: 6.1.1 diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index dc9189ed363..3ad139a7e5b 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -34,7 +34,7 @@ Suggests: testthat (>= 1.0.2), PEcAn.priors SystemRequirements: JAGS -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyData: FALSE Encoding: UTF-8 diff --git a/modules/photosynthesis/DESCRIPTION b/modules/photosynthesis/DESCRIPTION index 0308730a6d3..48d79a6e323 100644 --- a/modules/photosynthesis/DESCRIPTION +++ b/modules/photosynthesis/DESCRIPTION @@ -21,7 +21,7 @@ Imports: PEcAn.logger, coda (>= 0.18) SystemRequirements: JAGS2.2.0 -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/priors/DESCRIPTION b/modules/priors/DESCRIPTION index 4bb1c436701..879902ad20e 100644 --- a/modules/priors/DESCRIPTION +++ b/modules/priors/DESCRIPTION @@ -7,7 +7,7 @@ Authors@R: c(person("David","LeBauer")) Author: David LeBauer Maintainer: David LeBauer Description: Functions to estimate priors from data. -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/rtm/DESCRIPTION b/modules/rtm/DESCRIPTION index 26854e480e0..a460514122a 100644 --- a/modules/rtm/DESCRIPTION +++ b/modules/rtm/DESCRIPTION @@ -29,7 +29,7 @@ Suggests: knitr, pwr OS_type: unix -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/modules/uncertainty/DESCRIPTION b/modules/uncertainty/DESCRIPTION index 23f98c93351..b1165454803 100644 --- a/modules/uncertainty/DESCRIPTION +++ b/modules/uncertainty/DESCRIPTION @@ -39,7 +39,7 @@ Imports: udunits2 Suggests: testthat (>= 1.0.2), -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE diff --git a/shiny/BenchmarkReport/DESCRIPTION b/shiny/BenchmarkReport/DESCRIPTION index 7f3a86d2400..190cdf8cc4f 100644 --- a/shiny/BenchmarkReport/DESCRIPTION +++ b/shiny/BenchmarkReport/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Benchmarking Report -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Betsy Cowdery ecowdery@bu.edu Tags: PEcAn Imports: shiny diff --git a/shiny/BrownDog/DESCRIPTION b/shiny/BrownDog/DESCRIPTION index 7a6c46fa230..544a2270373 100644 --- a/shiny/BrownDog/DESCRIPTION +++ b/shiny/BrownDog/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Workflow Output -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Yan Zhao AuthorUrl: https://github.com/yan130 Tags: PEcAn diff --git a/shiny/Data-Ingest/DESCRIPTION b/shiny/Data-Ingest/DESCRIPTION index c95023a7cd4..f245180193c 100644 --- a/shiny/Data-Ingest/DESCRIPTION +++ b/shiny/Data-Ingest/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Data-Ingest -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Date: 2019-09-05 Author: Liam Burke liam.burke24@gmail.com Tags: PEcAn diff --git a/shiny/Elicitation/DESCRIPTION b/shiny/Elicitation/DESCRIPTION index b1bf86101a5..332f258af91 100644 --- a/shiny/Elicitation/DESCRIPTION +++ b/shiny/Elicitation/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Expert Prior Elicitation -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Mike Dietze 
, Rob Kooper diff --git a/shiny/Pecan.depend/DESCRIPTION b/shiny/Pecan.depend/DESCRIPTION index 9c40d09a33f..fe6b80cfbf1 100644 --- a/shiny/Pecan.depend/DESCRIPTION +++ b/shiny/Pecan.depend/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Visualize dependencies between PEcAn packages -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Hamze Dokoohaki Tags: PEcAn #DisplayMode: Showcase diff --git a/shiny/ViewMet/DESCRIPTION b/shiny/ViewMet/DESCRIPTION index 74df27a9964..513ef86631c 100644 --- a/shiny/ViewMet/DESCRIPTION +++ b/shiny/ViewMet/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Plot data from CF-formatted meteorology files -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Betsy Cowdery Tags: PEcAn Imports: diff --git a/shiny/global-sensitivity/DESCRIPTION b/shiny/global-sensitivity/DESCRIPTION index c5c3c1bfbbf..614542e2d03 100644 --- a/shiny/global-sensitivity/DESCRIPTION +++ b/shiny/global-sensitivity/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Global sensitivity analysis -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Alexey Shiklomanov AuthorUrl: http://ashiklom.github.io Tags: PEcAn diff --git a/shiny/workflowPlot/DESCRIPTION b/shiny/workflowPlot/DESCRIPTION index 0a8328ec6fb..1706a85b67f 100644 --- a/shiny/workflowPlot/DESCRIPTION +++ b/shiny/workflowPlot/DESCRIPTION @@ -1,6 +1,6 @@ Type: Shiny Title: Workflow Output -License: FreeBSD + file LICENSE +License: BSD_3_clause + file LICENSE Author: Rob Kooper AuthorUrl: http://www.ncsa.illinois.edu/assets/php/directory/contact.php?contact=kooper Tags: PEcAn From 3772f9ac19804276a67f7f58b441a1abaf2b8973 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 10 Oct 2019 16:58:24 +0200 Subject: [PATCH 0519/2289] update cached check output to reflect fixed license description Merge branch 'develop' of github.com:infotroph/pecan into develop --- base/all/tests/Rcheck_reference.log | 2 -- base/db/tests/Rcheck_reference.log | 6 ++---- base/logger/tests/Rcheck_reference.log | 2 -- base/qaqc/tests/Rcheck_reference.log | 2 -- base/remote/tests/Rcheck_reference.log | 2 -- base/settings/tests/Rcheck_reference.log | 2 -- base/utils/tests/Rcheck_reference.log | 2 -- base/visualization/tests/Rcheck_reference.log | 6 ++---- base/workflow/tests/Rcheck_reference.log | 2 -- models/basgra/tests/Rcheck_reference.log | 6 ++---- models/biocro/tests/Rcheck_reference.log | 2 -- models/clm45/tests/Rcheck_reference.log | 2 -- models/dalec/tests/Rcheck_reference.log | 2 -- models/dvmdostem/tests/Rcheck_reference.log | 2 -- models/ed/tests/Rcheck_reference.log | 2 -- models/fates/tests/Rcheck_reference.log | 2 -- models/gday/tests/Rcheck_reference.log | 2 -- models/jules/tests/Rcheck_reference.log | 2 -- models/linkages/tests/Rcheck_reference.log | 2 -- models/lpjguess/tests/Rcheck_reference.log | 2 -- models/maat/tests/Rcheck_reference.log | 6 ++---- models/maespa/tests/Rcheck_reference.log | 2 -- models/preles/tests/Rcheck_reference.log | 2 -- models/sipnet/tests/Rcheck_reference.log | 2 -- models/template/tests/Rcheck_reference.log | 2 -- modules/allometry/tests/Rcheck_reference.log | 2 -- modules/assim.batch/tests/Rcheck_reference.log | 6 ++---- modules/assim.sequential/tests/Rcheck_reference.log | 6 ++---- modules/benchmark/tests/Rcheck_reference.log | 2 -- modules/data.atmosphere/tests/Rcheck_reference.log | 7 ++----- modules/data.hydrology/tests/Rcheck_reference.log | 2 -- modules/data.land/tests/Rcheck_reference.log | 2 -- 
modules/data.remote/tests/Rcheck_reference.log | 2 -- modules/emulator/tests/Rcheck_reference.log | 2 -- modules/meta.analysis/tests/Rcheck_reference.log | 2 -- modules/photosynthesis/tests/Rcheck_reference.log | 2 -- modules/priors/tests/Rcheck_reference.log | 2 -- modules/rtm/tests/Rcheck_reference.log | 2 -- modules/uncertainty/tests/Rcheck_reference.log | 2 -- 39 files changed, 14 insertions(+), 93 deletions(-) diff --git a/base/all/tests/Rcheck_reference.log b/base/all/tests/Rcheck_reference.log index bba521a13ea..716da54d3b8 100644 --- a/base/all/tests/Rcheck_reference.log +++ b/base/all/tests/Rcheck_reference.log @@ -27,8 +27,6 @@ selectively is preferable. * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log index 54b0fcb2874..5487b1c7a39 100644 --- a/base/db/tests/Rcheck_reference.log +++ b/base/db/tests/Rcheck_reference.log @@ -19,9 +19,7 @@ * checking whether package ‘PEcAn.DB’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE +* checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -193,4 +191,4 @@ Package has no Sweave vignette sources and no VignetteBuilder field. * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... SKIPPED * DONE -Status: 4 WARNINGs, 3 NOTEs +Status: 4 WARNINGs, 2 NOTEs diff --git a/base/logger/tests/Rcheck_reference.log b/base/logger/tests/Rcheck_reference.log index 0c596edc658..a439c196f29 100644 --- a/base/logger/tests/Rcheck_reference.log +++ b/base/logger/tests/Rcheck_reference.log @@ -19,8 +19,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/base/qaqc/tests/Rcheck_reference.log b/base/qaqc/tests/Rcheck_reference.log index d4b17e050de..3e0eeb79c09 100644 --- a/base/qaqc/tests/Rcheck_reference.log +++ b/base/qaqc/tests/Rcheck_reference.log @@ -21,8 +21,6 @@ * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE Malformed Description field: should contain one or more complete sentences. -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/base/remote/tests/Rcheck_reference.log b/base/remote/tests/Rcheck_reference.log index 88026a3093d..705e0f9fde4 100644 --- a/base/remote/tests/Rcheck_reference.log +++ b/base/remote/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... 
NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/base/settings/tests/Rcheck_reference.log b/base/settings/tests/Rcheck_reference.log index 43dff58f983..4ac4c7fe383 100644 --- a/base/settings/tests/Rcheck_reference.log +++ b/base/settings/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE Malformed Description field: should contain one or more complete sentences. -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index f40bafe588f..c81db000b03 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/base/visualization/tests/Rcheck_reference.log b/base/visualization/tests/Rcheck_reference.log index 0a8a3950dcf..07ebfe75f9a 100644 --- a/base/visualization/tests/Rcheck_reference.log +++ b/base/visualization/tests/Rcheck_reference.log @@ -25,9 +25,7 @@ obtained by re-running with environment variable R_KEEP_PKG_SOURCE set to ‘yes’. * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE +* checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -229,4 +227,4 @@ vwReg 11.31 0.1 11.486 * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... SKIPPED * DONE -Status: 5 WARNINGs, 3 NOTEs +Status: 5 WARNINGs, 2 NOTEs diff --git a/base/workflow/tests/Rcheck_reference.log b/base/workflow/tests/Rcheck_reference.log index a2c5a9fcb24..31d9ed9a27c 100644 --- a/base/workflow/tests/Rcheck_reference.log +++ b/base/workflow/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/basgra/tests/Rcheck_reference.log b/models/basgra/tests/Rcheck_reference.log index 70c9129b3fd..bb9dc711165 100644 --- a/models/basgra/tests/Rcheck_reference.log +++ b/models/basgra/tests/Rcheck_reference.log @@ -18,9 +18,7 @@ * checking whether package ‘PEcAn.BASGRA’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE +* checking DESCRIPTION meta-information ... OK * checking top-level files ... 
OK * checking for left-over files ... OK * checking index information ... OK @@ -67,7 +65,7 @@ See ‘Writing portable packages’ in the ‘Writing R Extensions’ manual. OK * DONE -Status: 2 NOTEs +Status: 1 NOTE See ‘/private/var/folders/qr/mbw8xxpd45jdv_46b27914280000gn/T/Rtmpcw7AZN/PEcAn.BASGRA.Rcheck/00check.log’ for details. diff --git a/models/biocro/tests/Rcheck_reference.log b/models/biocro/tests/Rcheck_reference.log index f9f226a309e..f92644004e0 100644 --- a/models/biocro/tests/Rcheck_reference.log +++ b/models/biocro/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/clm45/tests/Rcheck_reference.log b/models/clm45/tests/Rcheck_reference.log index 970a0573373..b9ab3a577ed 100644 --- a/models/clm45/tests/Rcheck_reference.log +++ b/models/clm45/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/dalec/tests/Rcheck_reference.log b/models/dalec/tests/Rcheck_reference.log index 1851b6f9fbb..07dbe19d35f 100644 --- a/models/dalec/tests/Rcheck_reference.log +++ b/models/dalec/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/dvmdostem/tests/Rcheck_reference.log b/models/dvmdostem/tests/Rcheck_reference.log index 9176838951e..28cd21c8555 100644 --- a/models/dvmdostem/tests/Rcheck_reference.log +++ b/models/dvmdostem/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/ed/tests/Rcheck_reference.log b/models/ed/tests/Rcheck_reference.log index 3dd1c712c13..414616bb289 100644 --- a/models/ed/tests/Rcheck_reference.log +++ b/models/ed/tests/Rcheck_reference.log @@ -26,8 +26,6 @@ to ‘yes’. * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. 
diff --git a/models/fates/tests/Rcheck_reference.log b/models/fates/tests/Rcheck_reference.log index 224c3512ea3..c2c9605b82c 100644 --- a/models/fates/tests/Rcheck_reference.log +++ b/models/fates/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/gday/tests/Rcheck_reference.log b/models/gday/tests/Rcheck_reference.log index 3156671a84b..61391f605c2 100644 --- a/models/gday/tests/Rcheck_reference.log +++ b/models/gday/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/jules/tests/Rcheck_reference.log b/models/jules/tests/Rcheck_reference.log index e0144498691..c66d80ebb41 100644 --- a/models/jules/tests/Rcheck_reference.log +++ b/models/jules/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/linkages/tests/Rcheck_reference.log b/models/linkages/tests/Rcheck_reference.log index 6aeb971eaaf..fe0c244587c 100644 --- a/models/linkages/tests/Rcheck_reference.log +++ b/models/linkages/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/lpjguess/tests/Rcheck_reference.log b/models/lpjguess/tests/Rcheck_reference.log index 3b2e723a9a0..69b14aedf85 100644 --- a/models/lpjguess/tests/Rcheck_reference.log +++ b/models/lpjguess/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/maat/tests/Rcheck_reference.log b/models/maat/tests/Rcheck_reference.log index f482f70704c..9562cdc92ee 100644 --- a/models/maat/tests/Rcheck_reference.log +++ b/models/maat/tests/Rcheck_reference.log @@ -19,9 +19,7 @@ * checking whether package ‘PEcAn.MAAT’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE +* checking DESCRIPTION meta-information ... 
OK * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -86,4 +84,4 @@ Package has no Sweave vignette sources and no VignetteBuilder field. * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... SKIPPED * DONE -Status: 2 WARNINGs, 2 NOTEs +Status: 2 WARNINGs, 1 NOTE diff --git a/models/maespa/tests/Rcheck_reference.log b/models/maespa/tests/Rcheck_reference.log index 3da9a40b845..913ed365346 100644 --- a/models/maespa/tests/Rcheck_reference.log +++ b/models/maespa/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/preles/tests/Rcheck_reference.log b/models/preles/tests/Rcheck_reference.log index aedbcecb385..38c04a3fd56 100644 --- a/models/preles/tests/Rcheck_reference.log +++ b/models/preles/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/sipnet/tests/Rcheck_reference.log b/models/sipnet/tests/Rcheck_reference.log index 978152fd680..9d8eadb9653 100644 --- a/models/sipnet/tests/Rcheck_reference.log +++ b/models/sipnet/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/models/template/tests/Rcheck_reference.log b/models/template/tests/Rcheck_reference.log index 68fa51a5ae8..c82a50085d1 100644 --- a/models/template/tests/Rcheck_reference.log +++ b/models/template/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/allometry/tests/Rcheck_reference.log b/modules/allometry/tests/Rcheck_reference.log index 8b22f84c5aa..16c7873cdf7 100644 --- a/modules/allometry/tests/Rcheck_reference.log +++ b/modules/allometry/tests/Rcheck_reference.log @@ -21,8 +21,6 @@ * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE Malformed Description field: should contain one or more complete sentences. -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. 
diff --git a/modules/assim.batch/tests/Rcheck_reference.log b/modules/assim.batch/tests/Rcheck_reference.log index be30471dd77..a37230e8fc0 100644 --- a/modules/assim.batch/tests/Rcheck_reference.log +++ b/modules/assim.batch/tests/Rcheck_reference.log @@ -25,9 +25,7 @@ obtained by re-running with environment variable R_KEEP_PKG_SOURCE set to ‘yes’. * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE +* checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -229,4 +227,4 @@ Package has no Sweave vignette sources and no VignetteBuilder field. * checking tests ... OK * DONE -Status: 4 WARNINGs, 2 NOTEs +Status: 4 WARNINGs, 1 NOTE diff --git a/modules/assim.sequential/tests/Rcheck_reference.log b/modules/assim.sequential/tests/Rcheck_reference.log index 47f24150b7d..c5162842b79 100644 --- a/modules/assim.sequential/tests/Rcheck_reference.log +++ b/modules/assim.sequential/tests/Rcheck_reference.log @@ -19,9 +19,7 @@ * checking whether package ‘PEcAn.assim.sequential’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE +* checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -950,4 +948,4 @@ Extensions’ manual. * checking for unstated dependencies in examples ... OK * checking examples ... NONE * DONE -Status: 4 WARNINGs, 2 NOTEs +Status: 4 WARNINGs, 1 NOTE diff --git a/modules/benchmark/tests/Rcheck_reference.log b/modules/benchmark/tests/Rcheck_reference.log index 9d2837f11c2..85302e96c4c 100644 --- a/modules/benchmark/tests/Rcheck_reference.log +++ b/modules/benchmark/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index 276dfa16a76..08f25970148 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -19,10 +19,7 @@ * checking whether package ‘PEcAn.data.atmosphere’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE -A package should be listed in only one of these fields. +* checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK @@ -479,4 +476,4 @@ Package has no Sweave vignette sources and no VignetteBuilder field. * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... 
SKIPPED * DONE -Status: 9 WARNINGs, 3 NOTEs +Status: 9 WARNINGs, 2 NOTEs diff --git a/modules/data.hydrology/tests/Rcheck_reference.log b/modules/data.hydrology/tests/Rcheck_reference.log index 02db244eb08..42ca33e067f 100644 --- a/modules/data.hydrology/tests/Rcheck_reference.log +++ b/modules/data.hydrology/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/data.land/tests/Rcheck_reference.log b/modules/data.land/tests/Rcheck_reference.log index e327d4fe7e3..a68defe393c 100644 --- a/modules/data.land/tests/Rcheck_reference.log +++ b/modules/data.land/tests/Rcheck_reference.log @@ -34,8 +34,6 @@ to ‘yes’. FIA_allometry 2.7Mb * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/data.remote/tests/Rcheck_reference.log b/modules/data.remote/tests/Rcheck_reference.log index 764fad76fc9..b5a9bc04066 100644 --- a/modules/data.remote/tests/Rcheck_reference.log +++ b/modules/data.remote/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/emulator/tests/Rcheck_reference.log b/modules/emulator/tests/Rcheck_reference.log index c4f4d9fb1a4..6f1b743940a 100644 --- a/modules/emulator/tests/Rcheck_reference.log +++ b/modules/emulator/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/meta.analysis/tests/Rcheck_reference.log b/modules/meta.analysis/tests/Rcheck_reference.log index 037fd01b46c..00f77a3a167 100644 --- a/modules/meta.analysis/tests/Rcheck_reference.log +++ b/modules/meta.analysis/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/photosynthesis/tests/Rcheck_reference.log b/modules/photosynthesis/tests/Rcheck_reference.log index c134ecd296a..a352a37ad11 100644 --- a/modules/photosynthesis/tests/Rcheck_reference.log +++ b/modules/photosynthesis/tests/Rcheck_reference.log @@ -25,8 +25,6 @@ See section ‘Package structure’ in the ‘Writing R Extensions’ manual. 
* checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/priors/tests/Rcheck_reference.log b/modules/priors/tests/Rcheck_reference.log index c920f056ce4..a65616374de 100644 --- a/modules/priors/tests/Rcheck_reference.log +++ b/modules/priors/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/rtm/tests/Rcheck_reference.log b/modules/rtm/tests/Rcheck_reference.log index f4be10667a2..a1efe382219 100644 --- a/modules/rtm/tests/Rcheck_reference.log +++ b/modules/rtm/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. diff --git a/modules/uncertainty/tests/Rcheck_reference.log b/modules/uncertainty/tests/Rcheck_reference.log index bc97509d7a1..f62a8f5e7c8 100644 --- a/modules/uncertainty/tests/Rcheck_reference.log +++ b/modules/uncertainty/tests/Rcheck_reference.log @@ -20,8 +20,6 @@ * checking installed package size ... OK * checking package directory ... OK * checking DESCRIPTION meta-information ... NOTE -License components with restrictions not permitted: - FreeBSD + file LICENSE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. From db84a1788d7b7ffa4ad83b2ed03ca3d10038639a Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Thu, 10 Oct 2019 18:12:23 +0000 Subject: [PATCH 0520/2289] Enable multiple variable results for get.results for ensemble and sensitivity --- base/utils/R/get.results.R | 158 +++++++++++++++++++------------------ 1 file changed, 80 insertions(+), 78 deletions(-) diff --git a/base/utils/R/get.results.R b/base/utils/R/get.results.R index 3f9a1d279d4..956d9d0e7bf 100644 --- a/base/utils/R/get.results.R +++ b/base/utils/R/get.results.R @@ -76,59 +76,59 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, end.year.sa <- NA } - variable.sa <- variable - if (is.null(variable.sa)) { + variables.sa <- variable + if (is.null(variables.sa)) { if ("variable" %in% names(settings$sensitivity.analysis)) { - variable.sa <- settings$sensitivity.analysis[names(settings$sensitivity.analysis) == "variable"] + variables.sa <- settings$sensitivity.analysis[names(settings$sensitivity.analysis) == "variable"] } else { PEcAn.logger::logger.severe("no variable defined for sensitivity analysis") } } # Only handling one variable at a time for now - if (length(variable.sa) > 1) { - variable.sa <- variable.sa[1] - PEcAn.logger::logger.warn(paste0("Currently performs sensitivity analysis on only one variable at a time. 
Using first (", - variable.sa, ")")) - } - - # if an expression is provided, convert.expr returns names of the variables accordingly - # if a derivation is not requested it returns the variable name as is - variables <- convert.expr(unlist(variable.sa)) - variable.sa <- variables$variable.eqn - variable.fn <- variables$variable.drv - - - for(pft.name in pft.names){ - quantiles <- rownames(sa.samples[[pft.name]]) - traits <- trait.names[[pft.name]] - - # when there is variable-per pft in the outputs, check for the tag for deciding SA per pft - per.pft <- ifelse(!is.null(settings$sensitivity.analysis$perpft), - as.logical(settings$sensitivity.analysis$perpft), FALSE) - sensitivity.output[[pft.name]] <- read.sa.output(traits = traits, - quantiles = quantiles, - pecandir = outdir, - outdir = settings$modeloutdir, - pft.name = pft.name, - start.year = start.year.sa, - end.year = end.year.sa, - variable = variable.sa, - sa.run.ids = sa.run.ids, - per.pft = per.pft) + if (length(variables.sa) >= 1) { + #variable.sa <- variable.sa[1] + for(variable.sa in variables.sa){ + PEcAn.logger::logger.warn(paste0("Currently performs sensitivity analysis on only one variable at a time. Using first (", + variable.sa, ")")) + + # if an expression is provided, convert.expr returns names of the variables accordingly + # if a derivation is not requested it returns the variable name as is + variables <- convert.expr(unlist(variable.sa)) + variable.sa <- variables$variable.eqn + variable.fn <- variables$variable.drv + + for(pft.name in pft.names){ + quantiles <- rownames(sa.samples[[pft.name]]) + traits <- trait.names[[pft.name]] + + # when there is variable-per pft in the outputs, check for the tag for deciding SA per pft + per.pft <- ifelse(!is.null(settings$sensitivity.analysis$perpft), + as.logical(settings$sensitivity.analysis$perpft), FALSE) + sensitivity.output[[pft.name]] <- read.sa.output(traits = traits, + quantiles = quantiles, + pecandir = outdir, + outdir = settings$modeloutdir, + pft.name = pft.name, + start.year = start.year.sa, + end.year = end.year.sa, + variable = variable.sa, + sa.run.ids = sa.run.ids, + per.pft = per.pft) + } + + # Save sensitivity output + + fname <- sensitivity.filename(settings, "sensitivity.output", "Rdata", + all.var.yr = FALSE, + pft = NULL, + ensemble.id = sa.ensemble.id, + variable = variable.fn, + start.year = start.year.sa, + end.year = end.year.sa) + save(sensitivity.output, file = fname) + } } - - # Save sensitivity output - - fname <- sensitivity.filename(settings, "sensitivity.output", "Rdata", - all.var.yr = FALSE, - pft = NULL, - ensemble.id = sa.ensemble.id, - variable = variable.fn, - start.year = start.year.sa, - end.year = end.year.sa) - save(sensitivity.output, file = fname) - } ensemble.output <- list() @@ -182,50 +182,52 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, end.year.ens <- NA } - variable.ens <- variable - if (is.null(variable.ens)) { + variables.ens <- variable + if (is.null(variables.ens)) { if ("variable" %in% names(settings$ensemble)) { var <- which(names(settings$ensemble) == "variable") for (i in seq_along(var)) { - variable.ens[i] <- settings$ensemble[[var[i]]] + variables.ens[i] <- settings$ensemble[[var[i]]] } } } - if (is.null(variable.ens)) + if (is.null(variables.ens)) PEcAn.logger::logger.severe("No variables for ensemble analysis!") # Only handling one variable at a time for now - if (length(variable.ens) > 1) { - variable.ens <- variable.ens[1] - PEcAn.logger::logger.warn(paste0("Currently 
performs ensemble analysis on only one variable at a time. Using first (", - variable.ens, ")")) + if (length(variables.ens) >= 1) { + #variable.ens <- variable.ens[1] + for(variable.ens in variables.ens){ + PEcAn.logger::logger.warn(paste0("Currently performs ensemble analysis on only one variable at a time. Using first (", + variable.ens, ")")) + + # if an expression is provided, convert.expr returns names of the variables accordingly + # if a derivation is not requested it returns the variable name as is + variables <- convert.expr(variable.ens) + variable.ens <- variables$variable.eqn + variable.fn <- variables$variable.drv + + ensemble.output <- PEcAn.uncertainty::read.ensemble.output( + settings$ensemble$size, + pecandir = outdir, + outdir = settings$modeloutdir, + start.year = start.year.ens, + end.year = end.year.ens, + variable = variable.ens, + ens.run.ids = ens.run.ids + ) + + # Save ensemble output + fname <- ensemble.filename(settings, "ensemble.output", "Rdata", + all.var.yr = FALSE, + ensemble.id = ens.ensemble.id, + variable = variable.fn, + start.year = start.year.ens, + end.year = end.year.ens) + save(ensemble.output, file = fname) + } } - - # if an expression is provided, convert.expr returns names of the variables accordingly - # if a derivation is not requested it returns the variable name as is - variables <- convert.expr(variable.ens) - variable.ens <- variables$variable.eqn - variable.fn <- variables$variable.drv - - ensemble.output <- PEcAn.uncertainty::read.ensemble.output( - settings$ensemble$size, - pecandir = outdir, - outdir = settings$modeloutdir, - start.year = start.year.ens, - end.year = end.year.ens, - variable = variable.ens, - ens.run.ids = ens.run.ids - ) - - # Save ensemble output - fname <- ensemble.filename(settings, "ensemble.output", "Rdata", - all.var.yr = FALSE, - ensemble.id = ens.ensemble.id, - variable = variable.fn, - start.year = start.year.ens, - end.year = end.year.ens) - save(ensemble.output, file = fname) } } # get.results From 9903422f53a111cc80018f28b1bb879270edb626 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 10 Oct 2019 22:12:10 +0200 Subject: [PATCH 0521/2289] make models/template pass package checks, remove cached check result. Fixes #2449 --- models/template/DESCRIPTION | 11 ++-- models/template/R/met2model.MODEL.R | 1 + models/template/R/write.config.MODEL.R | 16 ++--- models/template/man/met2model.MODEL.Rd | 2 + models/template/man/write.config.MODEL.Rd | 4 +- models/template/tests/Rcheck_reference.log | 75 ---------------------- 6 files changed, 20 insertions(+), 89 deletions(-) delete mode 100644 models/template/tests/Rcheck_reference.log diff --git a/models/template/DESCRIPTION b/models/template/DESCRIPTION index d4d26c54292..f488d6b9bdc 100644 --- a/models/template/DESCRIPTION +++ b/models/template/DESCRIPTION @@ -3,12 +3,15 @@ Type: Package Title: PEcAn package for integration of the ModelName model Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Jane", "Doe"), - person("John", "Doe")) -Author: John Doe, Jane Doe -Maintainer: John Doe +Authors@R: c( + person("Jane", "Doe", + email = "jdoe@illinois.edu", + role = c("aut", "cre")), + person("John", "Doe", + role = "aut")) Description: This module provides functions to link the (ModelName) to PEcAn. 
Imports: + PEcAn.DB, PEcAn.logger, PEcAn.utils (>= 1.4.8) Suggests: diff --git a/models/template/R/met2model.MODEL.R b/models/template/R/met2model.MODEL.R index eadfaed7d33..7c15426e085 100644 --- a/models/template/R/met2model.MODEL.R +++ b/models/template/R/met2model.MODEL.R @@ -16,6 +16,7 @@ ##' @param in.path path on disk where CF file lives ##' @param in.prefix prefix for each file ##' @param outfolder location where model specific output is written. +##' @param overwrite logical: replace output files if they already exist? ##' @return OK if everything was successful. ##' @export ##' @author Rob Kooper diff --git a/models/template/R/write.config.MODEL.R b/models/template/R/write.config.MODEL.R index 631edf9f904..674eda0b710 100644 --- a/models/template/R/write.config.MODEL.R +++ b/models/template/R/write.config.MODEL.R @@ -16,7 +16,7 @@ ##' @name write.config.MODEL ##' @title Write MODEL configuration files ##' @param defaults list of defaults to process -##' @param trait.samples vector of samples for a given trait +##' @param trait.values vector of samples for a given trait ##' @param settings list of settings from pecan settings file ##' @param run.id id of run ##' @return configuration file for MODEL for given run @@ -92,7 +92,7 @@ write.config.MODEL <- function(defaults, trait.values, settings, run.id) { if (!is.null(settings$model$revision)) { filename <- system.file(paste0("config.", settings$model$revision), package = "PEcAn.MODEL") } else { - model <- db.query(paste("SELECT * FROM models WHERE id =", settings$model$id), params = settings$database$bety) + model <- PEcAn.DB::db.query(paste("SELECT * FROM models WHERE id =", settings$model$id), params = settings$database$bety) filename <- system.file(paste0("config.r", model$revision), package = "PEcAn.MODEL") } } @@ -108,12 +108,12 @@ write.config.MODEL <- function(defaults, trait.values, settings, run.id) { config.text <- gsub("@SITE_MET@", settings$run$inputs$met$path, config.text) config.text <- gsub("@MET_START@", settings$run$site$met.start, config.text) config.text <- gsub("@MET_END@", settings$run$site$met.end, config.text) - config.text <- gsub("@START_MONTH@", format(startdate, "%m"), config.text) - config.text <- gsub("@START_DAY@", format(startdate, "%d"), config.text) - config.text <- gsub("@START_YEAR@", format(startdate, "%Y"), config.text) - config.text <- gsub("@END_MONTH@", format(enddate, "%m"), config.text) - config.text <- gsub("@END_DAY@", format(enddate, "%d"), config.text) - config.text <- gsub("@END_YEAR@", format(enddate, "%Y"), config.text) + config.text <- gsub("@START_MONTH@", format(settings$run$start.date, "%m"), config.text) + config.text <- gsub("@START_DAY@", format(settings$run$start.date, "%d"), config.text) + config.text <- gsub("@START_YEAR@", format(settings$run$start.date, "%Y"), config.text) + config.text <- gsub("@END_MONTH@", format(settings$run$end.date, "%m"), config.text) + config.text <- gsub("@END_DAY@", format(settings$run$end.date, "%d"), config.text) + config.text <- gsub("@END_YEAR@", format(settings$run$end.date, "%Y"), config.text) config.text <- gsub("@OUTDIR@", settings$host$outdir, config.text) config.text <- gsub("@ENSNAME@", run.id, config.text) config.text <- gsub("@OUTFILE@", paste0("out", run.id), config.text) diff --git a/models/template/man/met2model.MODEL.Rd b/models/template/man/met2model.MODEL.Rd index a3e66def0e3..f1f599bc26a 100644 --- a/models/template/man/met2model.MODEL.Rd +++ b/models/template/man/met2model.MODEL.Rd @@ -12,6 +12,8 @@ met2model.MODEL(in.path,
in.prefix, outfolder, overwrite = FALSE) \item{in.prefix}{prefix for each file} \item{outfolder}{location where model specific output is written.} + +\item{overwrite}{logical: replace output files if they already exist?} } \value{ OK if everything was successful. diff --git a/models/template/man/write.config.MODEL.Rd b/models/template/man/write.config.MODEL.Rd index 553c8082f2d..6924767260a 100644 --- a/models/template/man/write.config.MODEL.Rd +++ b/models/template/man/write.config.MODEL.Rd @@ -9,11 +9,11 @@ write.config.MODEL(defaults, trait.values, settings, run.id) \arguments{ \item{defaults}{list of defaults to process} +\item{trait.values}{vector of samples for a given trait} + \item{settings}{list of settings from pecan settings file} \item{run.id}{id of run} - -\item{trait.samples}{vector of samples for a given trait} } \value{ configuration file for MODEL for given run diff --git a/models/template/tests/Rcheck_reference.log b/models/template/tests/Rcheck_reference.log deleted file mode 100644 index c82a50085d1..00000000000 --- a/models/template/tests/Rcheck_reference.log +++ /dev/null @@ -1,75 +0,0 @@ -* using log directory ‘/tmp/RtmpVlFdqS/PEcAn.ModelName.Rcheck’ -* using R version 3.5.2 (2018-12-20) -* using platform: x86_64-pc-linux-gnu (64-bit) -* using session charset: UTF-8 -* using options ‘--no-tests --no-manual --as-cran’ -* checking for file ‘PEcAn.ModelName/DESCRIPTION’ ... OK -* checking extension type ... Package -* this is package ‘PEcAn.ModelName’ version ‘1.7.0’ -* package encoding: UTF-8 -* checking package namespace information ... OK -* checking package dependencies ... OK -* checking if this is a source package ... OK -* checking if there is a namespace ... OK -* checking for executable files ... OK -* checking for hidden files and directories ... OK -* checking for portable file names ... OK -* checking for sufficient/correct file permissions ... OK -* checking serialization versions ... OK -* checking whether package ‘PEcAn.ModelName’ can be installed ... OK -* checking installed package size ... OK -* checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -Authors@R field gives no person with name and roles. -Authors@R field gives no person with maintainer role, valid email -address and non-empty name. -* checking top-level files ... OK -* checking for left-over files ... OK -* checking index information ... OK -* checking package subdirectories ... OK -* checking R files for non-ASCII characters ... OK -* checking R files for syntax errors ... OK -* checking whether the package can be loaded ... OK -* checking whether the package can be loaded with stated dependencies ... OK -* checking whether the package can be unloaded cleanly ... OK -* checking whether the namespace can be loaded with stated dependencies ... OK -* checking whether the namespace can be unloaded cleanly ... OK -* checking loading without being on the library search path ... OK -* checking dependencies in R code ... OK -* checking S3 generic/method consistency ... OK -* checking replacement functions ... OK -* checking foreign function calls ... OK -* checking R code for possible problems ... NOTE -write.config.MODEL: no visible global function definition for - ‘db.query’ -write.config.MODEL: no visible binding for global variable ‘startdate’ -write.config.MODEL: no visible binding for global variable ‘enddate’ -Undefined global functions or variables: - db.query enddate startdate -* checking Rd files ... OK -* checking Rd metadata ... 
OK -* checking Rd line widths ... OK -* checking Rd cross-references ... OK -* checking for missing documentation entries ... OK -* checking for code/documentation mismatches ... OK -* checking Rd \usage sections ... WARNING -Undocumented arguments in documentation object 'met2model.MODEL' - ‘overwrite’ - -Undocumented arguments in documentation object 'write.config.MODEL' - ‘trait.values’ -Documented arguments not in \usage in documentation object 'write.config.MODEL': - ‘trait.samples’ - -Functions with \usage entries need to have the appropriate \alias -entries, and all their arguments documented. -The \usage entries must correspond to syntactically valid R code. -See chapter ‘Writing R documentation files’ in the ‘Writing R -Extensions’ manual. -* checking Rd contents ... OK -* checking for unstated dependencies in examples ... OK -* checking examples ... NONE -* checking for unstated dependencies in ‘tests’ ... OK -* checking tests ... SKIPPED -* DONE -Status: 1 WARNING, 2 NOTEs From 976ad57f3bf5ce791564d86d8b266bef2895d686 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 10 Oct 2019 23:08:36 +0200 Subject: [PATCH 0522/2289] document that not all packages have an Rcheck_reference.log ...plus wording tweaks while the file is open --- .../05_developer_workflows/04-testing.Rmd | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd index 7e6b1a1298c..75189ca226b 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd @@ -145,16 +145,16 @@ Each build starts by launching three clean virtual machines (one for each R vers - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time (e.g. large data product downloads) or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code! * Runs R package checks (the same ones you run locally with `make check` or `devtools::check(pkgname)`), skipping tests and documentation rebuild because we just did those in the previous steps. - Any ERROR in the check output will stop the build immediately. - - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. + - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. If the package has no stored reference result, all WARNINGs and NOTEs are considered newly added and reported as build failures. - If all messages from the current built were also present in the reference result, the check passes. If any messages are newly added, a build failure is reported. - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it! - The idea here is to enforce good coding practice and catch likely errors in all new code while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once. 
- As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear. - - If a check fails when you think it ought to be grandfathered, please fix it anyway. They all need to be cleaned up eventually, and it's likely easier to fix the error than to figure out how to re-exclude it. + - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it anyway. The failures all need to be cleaned up eventually, and it's likely easier to fix the error than to figure out how to re-ignore it. * Runs a simple PEcAn workflow from beginning to end (three years of SIPNET at Niwot Ridge using stored weather data, meta-analysis on fixed effects only, sensitivity analysis on NPP), and verifies that the models complete successfully. * Checks whether any version-controlled files have been changed by the build and testing process, and marks a build failure if they have. - If your build fails at this step, the most common cause is that you forgot to Roxygenize before committing. - - Note that this only finds changes to files that have already been committed. If you created a *new* file and forgot to add it to the commit, the CI build will not be able to detect the problem. + - This step will also detect newly added files, e.g. tests improperly writing to the current working directory rather than `tempdir()` and then failing to clean up after themselves. If any of these steps reports an error, the build is marked as "broken" and stops before the next step. If they all pass, the Travis CI bot marks the build as successful and tells the GitHub bot that it's OK to allow your changes to be merged... but the final decision belongs to the human reviewing your code and they might still ask you for other changes! @@ -169,5 +169,5 @@ The post-build steps are allowed to fail without breaking the build. If you made All of the above descriptions apply to the build Travis generates when you push to the main `PecanProject/pecan` repository, either by directly pushing to a branch or by opening a pull request. If you like, you can also [enable Travis builds](https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci) from your own PEcAn fork. This can be useful for several reasons: * It lets you test whether your changes worked before you open a pull request. -* It often lets you get faster test results: The PEcAn project uses Travis CI's free tier, which only allows a few simultaneous build jobs per repository. If many people are pushing code at the same time, your build might wait in line for a long time at PecanProject/pecan but start immediately at yourname/pecan. +* It often lets you get faster test results: The PEcAn project uses Travis CI's free tier, which only allows a few simultaneous build jobs per repository. If many people are pushing code at the same time, your build might wait in line for a long time at `PecanProject/pecan` but start immediately at `yourname/pecan`. * If you will be editing the documentation a lot and want to see rendered previews of your in-progress work (instead of waiting until it is merged into develop), you can clone the [pecan-documentation](https://github.com/PecanProject/pecan-documentation) repository to your own GitHub account and let Travis update it for you.
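The reference-log comparison described in the documentation change above can be approximated locally before pushing. The sketch below is illustrative only: the use of the `rcmdcheck` package and the `check_against_reference()` helper are assumptions for demonstration, not the project's actual CI script.

```r
# Sketch: fail if R CMD check emits any WARNING/NOTE line that is not
# already present in the package's stored tests/Rcheck_reference.log.
library(rcmdcheck)

check_against_reference <- function(pkg_dir) {
  res <- rcmdcheck::rcmdcheck(pkg_dir, args = "--no-manual",
                              error_on = "error")  # any ERROR fails outright
  current <- unlist(strsplit(c(res$warnings, res$notes), "\n"))
  ref_file <- file.path(pkg_dir, "tests", "Rcheck_reference.log")
  # A package without a stored reference log gets no grandfathering:
  # every WARNING/NOTE counts as newly added.
  reference <- if (file.exists(ref_file)) readLines(ref_file) else character(0)
  new_msgs <- setdiff(trimws(current), trimws(reference))
  new_msgs <- new_msgs[nzchar(new_msgs)]
  if (length(new_msgs) > 0) {
    stop("Check messages not found in the reference log:\n",
         paste(new_msgs, collapse = "\n"))
  }
  invisible(TRUE)
}
```

Because the comparison is line-by-line and exact, even a one-character change in a message counts as newly added, matching the behaviour the documentation describes.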
From 5a4a8977ccda49eef87d958cf97edb1d05d9da5a Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 13 Oct 2019 08:39:01 -0400 Subject: [PATCH 0523/2289] roxy --- modules/assim.batch/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index 9a301e9d977..8d0f61665b6 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -36,6 +36,7 @@ Imports: rjags, stats, prodlim, + MCMCpack, udunits2 (>= 0.11), utils, XML From f80f0d9fc20430e44d96942091cc4bff7e183338 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 17 Oct 2019 04:17:36 -0400 Subject: [PATCH 0524/2289] use cov --- modules/assim.batch/R/hier.mcmc.R | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/modules/assim.batch/R/hier.mcmc.R b/modules/assim.batch/R/hier.mcmc.R index 536de1a567a..16d107e15a2 100644 --- a/modules/assim.batch/R/hier.mcmc.R +++ b/modules/assim.batch/R/hier.mcmc.R @@ -40,7 +40,7 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # # mu_global ~ MVN (mu_global_mean, mu_global_tau) - #approximate a normal dist + # approximate a normal dist mu_init_samp <- matrix(NA, ncol = nparam, nrow = 1000) for(ps in seq_along(prior.ind.all)){ prior.quantiles <- eval(prior.fn.all$rprior[[prior.ind.all[ps]]], list(n = 1000)) @@ -48,13 +48,10 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, } # mean hyperprior - mu_global_mean <- apply(mu_init_samp, 2, mean) - + mu_global_mean <- apply(mu_init_samp, 2, mean) # sigma/tau hyperprior - distto <- rbind(abs(mu_global_mean - rng_orig[,1]), abs(mu_global_mean - rng_orig[,2])) - mu_global_sigma <- diag((apply(distto, 2, min)/4)^2) - mu_global_tau <- solve(mu_global_sigma) - + mu_global_sigma <- cov(mu_init_samp) + mu_global_tau <- solve(mu_global_sigma) ## initialize mu_global (nparam) mu_global <- rmvnorm(1, mu_global_mean, mu_global_sigma) @@ -69,8 +66,9 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # sigma_global <- solve(tau_global) # - sigma_global_df <- nparam + 1 - sigma_global_scale <- (cov(mu_init_samp)/sigma_global_df) + # sigma_global hyperpriors + sigma_global_df <- nparam + 1 + sigma_global_scale <- mu_global_sigma/sigma_global_df # initialize sigma_global (nparam x nparam) sigma_global <- MCMCpack::riwish(sigma_global_df, sigma_global_scale) @@ -86,14 +84,14 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, mu_site_curr <- matrix(rep(mu_site_init, nsites), ncol=nparam, byrow = TRUE) # values for each site will be accepted/rejected in themselves - currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) + currSS <- sapply(seq_len(nsites), function(v) PEcAn.emulator::get_ss(gp.stack[[v]], mu_site_curr[v,], pos.check)) # force it to be nvar x nsites matrix - currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) - currllp <- lapply(seq_len(nsites), function(v) PEcAn.assim.batch::pda.calc.llik.par(settings, nstack[[v]], currSS[,v])) + currSS <- matrix(currSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) + currllp <- lapply(seq_len(nsites), function(v) PEcAn.assim.batch::pda.calc.llik.par(settings, nstack[[v]], currSS[,v])) # storage - mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) - mu_global_samp <- matrix(NA_real_, nrow = nmcmc, ncol= nparam) + mu_site_samp <- array(NA_real_, c(nmcmc, nparam, nsites)) + mu_global_samp 
<- matrix(NA_real_, nrow = nmcmc, ncol= nparam) sigma_global_samp <- array(NA_real_, c(nmcmc, nparam, nparam)) musite.accept.count <- rep(0, nsites) @@ -112,7 +110,8 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, params.recent <- mu_site_samp[(g - settings$assim.batch$jump$adapt):(g - 1), , ] #colnames(params.recent) <- names(x0) settings$assim.batch$jump$adapt <- adapt_orig - jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], params.recent[seq(v,adapt_orig * nsites, by=12),,v])) + jcov.list <- lapply(seq_len(nsites), function(v) pda.adjust.jumps.bs(settings, jcov.arr[,,v], musite.accept.count[v], + params.recent[seq(v, adapt_orig * nsites, by=12), , v])) jcov.arr <- abind::abind(jcov.list, along=3) musite.accept.count <- rep(0, nsites) # Reset counter settings$assim.batch$jump$adapt <- adapt_orig * nsites From 6bdb999b9d852e501e7462bb6ee4a8f81f784387 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 17 Oct 2019 04:32:29 -0400 Subject: [PATCH 0525/2289] include jump probabilities --- modules/assim.batch/R/hier.mcmc.R | 47 ++++++++++++++++--------------- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/modules/assim.batch/R/hier.mcmc.R b/modules/assim.batch/R/hier.mcmc.R index 16d107e15a2..4065137936f 100644 --- a/modules/assim.batch/R/hier.mcmc.R +++ b/modules/assim.batch/R/hier.mcmc.R @@ -67,7 +67,7 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # # sigma_global hyperpriors - sigma_global_df <- nparam + 1 + sigma_global_df <- nparam + 1 # test results with nparam since it is the least informative sigma_global_scale <- mu_global_sigma/sigma_global_df # initialize sigma_global (nparam x nparam) @@ -183,23 +183,14 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # site level M-H ######################################## - # # propose new mu_site on standard normal domain - # for(ns in seq_len(nsites)){ - # repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - # mu_site_new_stdn[ns,] <- mvtnorm::rmvnorm(1, mu_global, sigma_global) - # check.that <- (mu_site_new_stdn[ns,] > rng_stdn[, 1] & mu_site_new_stdn[ns, ] < rng_stdn[, 2]) - # if(all(check.that)) break - # } - # } - # propose new site parameter vectors - repeat{ # make sure to stay in emulator boundaries, otherwise it confuses adaptation - thissite <- g %% nsites - if(thissite == 0) thissite <- nsites - proposed <- mvtnorm::rmvnorm(1, mu_site_curr[thissite,], jcov.arr[,,thissite]) - check.that <- (proposed > rng_orig[, 1] & proposed < rng_orig[, 2]) - if(all(check.that)) break - } + thissite <- g %% nsites + if(thissite == 0) thissite <- nsites + proposed <- tmvtnorm::rtmvnorm(1, + mean = mu_site_curr[thissite,], + sigma = jcov.arr[,,thissite], + lower = rng_orig[,1], + upper = rng_orig[,2]) mu_site_new <- matrix(rep(proposed, nsites),ncol=nparam, byrow = TRUE) @@ -211,9 +202,15 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, currLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(currSS[,v], llik.fn, currllp[[v]])) # use new priors for calculating prior probability currPrior <- dmvnorm(mu_site_curr, mu_global, sigma_global, log = TRUE) - #currPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_curr_stdn[v,], mu_global, sigma_global, log = TRUE))) currPost <- currLL + currPrior + # calculate jump probabilities + currHR <- sapply(seq_len(nsites), function(v) { + 
tmvtnorm::dtmvnorm(mu_site_curr[v,], mu_site_new[v,], jcov.arr[,,v], + lower = rng_orig[,1], + upper = rng_orig[,2], log = TRUE) + }) + # predict new SS newSS <- sapply(seq_len(nsites), function(v) get_ss(gp.stack[[v]], mu_site_new[v,], pos.check)) newSS <- matrix(newSS, nrow = length(settings$assim.batch$inputs), ncol = nsites) @@ -223,18 +220,24 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, newLL <- sapply(seq_len(nsites), function(v) pda.calc.llik(newSS[,v], llik.fn, newllp[[v]])) # use new priors for calculating prior probability newPrior <- dmvnorm(mu_site_new, mu_global, sigma_global, log = TRUE) - #newPrior <- unlist(lapply(seq_len(nsites), function(v) mvtnorm::dmvnorm(mu_site_new_stdn[v,], mu_s, s_sigma[[v]], log = TRUE))) newPost <- newLL + newPrior - ar <- is.accepted(currPost, newPost) + # calculate jump probabilities + newHR <- sapply(seq_len(nsites), function(v) { + tmvtnorm::dtmvnorm(mu_site_new[v,], mu_site_curr[v,], jcov.arr[,,v], + lower = rng_orig[,1], + upper = rng_orig[,2], log = TRUE) + }) + + # Accept/reject with MH rule + ar <- is.accepted(currPost + currHR, newPost + newHR) mu_site_curr[ar, ] <- mu_site_new[ar, ] - #mu_site_curr_stdn[ar, ] <- mu_site_new_stdn[ar, ] musite.accept.count[thissite] <- musite.accept.count[thissite] + ar[thissite] mu_site_samp[g, , seq_len(nsites)] <- t(mu_site_curr)[,seq_len(nsites)] mu_global_samp[g,] <- mu_global # 100% acceptance for gibbs - sigma_global_samp[g, , ] <- sigma_global # 100% acceptance for gibbs + sigma_global_samp[g, , ] <- sigma_global # 100% acceptance for gibbs if(g %% 500 == 0) PEcAn.logger::logger.info(g, "of", nmcmc, "iterations") } From 1c1f625413134756180a6a659f73871b2d52b1ad Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 17 Oct 2019 04:56:24 -0400 Subject: [PATCH 0526/2289] use truncated normal --- modules/assim.batch/DESCRIPTION | 1 + modules/assim.batch/R/pda.emulator.ms.R | 2 +- modules/assim.batch/R/pda.utils.R | 15 ++++++++------- 3 files changed, 10 insertions(+), 8 deletions(-) diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index 186863e86f0..9dac80cd7bd 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -37,6 +37,7 @@ Imports: stats, prodlim, MCMCpack, + tmvtnorm, udunits2 (>= 0.11), utils, XML diff --git a/modules/assim.batch/R/pda.emulator.ms.R b/modules/assim.batch/R/pda.emulator.ms.R index 3e37e210d97..e4c681a9f68 100644 --- a/modules/assim.batch/R/pda.emulator.ms.R +++ b/modules/assim.batch/R/pda.emulator.ms.R @@ -374,7 +374,7 @@ pda.emulator.ms <- function(multi.settings) { save(list = ls(all.names = TRUE),envir=environment(),file=hbc.restart.file) # generate hierarhical posteriors - mcmc.out <- generate_hierpost(mcmc.out, rng_orig) + mcmc.out <- generate_hierpost(mcmc.out, prior.fn.all, prior.ind.all) # Collect global params in their own list and postprocess mcmc.param.list <- pda.sort.params(mcmc.out, sub.sample = "mu_global_samp", ns = NULL, prior.all, prior.ind.all.ns, sf, n.param.orig, prior.list, prior.fn.all) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index b4cb1b37c2a..a91d968b87c 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -831,13 +831,14 @@ load_pda_history <- function(workdir, ensemble.id, objects){ ##' Helper function that generates the hierarchical posteriors ##' ##' @param mcmc.out hierarchical MCMC outputs -##' @param rng_orig nparam x 2 matrix, 1st and 2nd columns are the lower and upper prior 
limits respectively +##' @param prior.fn.all list of all prior functions +##' @param prior.ind.all indices of the targeted params ##' ##' @return hierarchical MCMC outputs in original parameter space ##' ##' @author Istem Fer ##' @export -generate_hierpost <- function(mcmc.out, rng_orig){ +generate_hierpost <- function(mcmc.out, prior.fn.all, prior.ind.all){ for(i in seq_along(mcmc.out)){ mu_global_samp <- mcmc.out[[i]]$mu_global_samp @@ -848,11 +849,11 @@ generate_hierpost <- function(mcmc.out, rng_orig){ # calculate hierarchical posteriors from mu_global_samp and tau_global_samp hierarchical_samp <- mu_global_samp for(si in seq_len(iter_size)){ - repeat{ - proposed <- mvtnorm::rmvnorm(1, mean = mu_global_samp[si,], sigma = sigma_global_samp[si,,]) - if(all(proposed >= rng_orig[,1] & proposed <= rng_orig[,2])) break - } - hierarchical_samp[si,] <- proposed + hierarchical_samp[si,] <- tmvtnorm::rtmvnorm(1, + mean = mu_global_samp[si,], + sigma = sigma_global_samp[si,,], + lower = sapply(seq_along(prior.ind.all), function(z) eval(prior.fn.all$qprior[[prior.ind.all[z]]], list(p=0.000001))), + upper = sapply(seq_along(prior.ind.all), function(z) eval(prior.fn.all$qprior[[prior.ind.all[z]]], list(p=0.999999)))) } mcmc.out[[i]]$hierarchical_samp <- hierarchical_samp From bf7cf4be2e35814783dbd6ed83d145ef4ba1a5ab Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 17 Oct 2019 06:22:30 -0400 Subject: [PATCH 0527/2289] use mh rule in single emulator mcmc --- modules/assim.batch/R/pda.utils.R | 7 ++- modules/assim.batch/man/generate_hierpost.Rd | 6 +- .../vignettes/MultiSitePDAVignette.Rmd | 63 +++++++++++++------ modules/emulator/DESCRIPTION | 1 + modules/emulator/NAMESPACE | 6 +- modules/emulator/R/minimize.GP.R | 16 +++-- modules/emulator/man/plot.jump.Rd | 2 +- modules/emulator/man/plot.mvjump.Rd | 2 +- modules/emulator/man/update.jump.Rd | 2 +- modules/emulator/man/update.mvjump.Rd | 2 +- 10 files changed, 69 insertions(+), 38 deletions(-) diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index a91d968b87c..500460db1f1 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -840,6 +840,9 @@ load_pda_history <- function(workdir, ensemble.id, objects){ ##' @export generate_hierpost <- function(mcmc.out, prior.fn.all, prior.ind.all){ + lower_lim <- sapply(seq_along(prior.ind.all), function(z) eval(prior.fn.all$qprior[[prior.ind.all[z]]], list(p=0.000001))) + upper_lim <- sapply(seq_along(prior.ind.all), function(z) eval(prior.fn.all$qprior[[prior.ind.all[z]]], list(p=0.999999))) + for(i in seq_along(mcmc.out)){ mu_global_samp <- mcmc.out[[i]]$mu_global_samp sigma_global_samp <- mcmc.out[[i]]$sigma_global_samp @@ -852,8 +855,8 @@ generate_hierpost <- function(mcmc.out, prior.fn.all, prior.ind.all){ hierarchical_samp[si,] <- tmvtnorm::rtmvnorm(1, mean = mu_global_samp[si,], sigma = sigma_global_samp[si,,], - lower = sapply(seq_along(prior.ind.all), function(z) eval(prior.fn.all$qprior[[prior.ind.all[z]]], list(p=0.000001))), - upper = sapply(seq_along(prior.ind.all), function(z) eval(prior.fn.all$qprior[[prior.ind.all[z]]], list(p=0.999999)))) + lower = lower_lim, + upper = upper_lim) } mcmc.out[[i]]$hierarchical_samp <- hierarchical_samp diff --git a/modules/assim.batch/man/generate_hierpost.Rd b/modules/assim.batch/man/generate_hierpost.Rd index f7386b2a441..7a9eda9fc77 100644 --- a/modules/assim.batch/man/generate_hierpost.Rd +++ b/modules/assim.batch/man/generate_hierpost.Rd @@ -4,12 +4,14 @@ \alias{generate_hierpost} \title{Helper 
function that generates the hierarchical posteriors} \usage{ -generate_hierpost(mcmc.out, rng_orig) +generate_hierpost(mcmc.out, prior.fn.all, prior.ind.all) } \arguments{ \item{mcmc.out}{hierarchical MCMC outputs} -\item{rng_orig}{nparam x 2 matrix, 1st and 2nd columns are the lower and upper prior limits respectively} +\item{prior.fn.all}{list of all prior functions} + +\item{prior.ind.all}{indices of the targeted params} } \value{ hierarchical MCMC outputs in original parameter space diff --git a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd index d62a38c8eeb..f20eb28b8e5 100644 --- a/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd +++ b/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd @@ -1,7 +1,7 @@ --- title: "Multi-site hierarchical calibration vignette" author: "Istem Fer" -date: "3/2/2019" +date: "09/23/2019" output: html_document --- @@ -84,7 +84,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2011/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-796/AMF_US-Bar_BASE_HH_4-1.2005-01-01.2011-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-796/AMF_US-Bar_BASE_HH_4-1.2005-01-01.2011-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_0-796/IC_site_0-796.nc @@ -104,7 +106,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2014/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_0-767/IC_site_0-767.nc @@ -124,7 +128,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2015/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-768/AMF_US-MOz_BASE_HH_7-1.2005-01-01.2015-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-768/AMF_US-MOz_BASE_HH_7-1.2005-01-01.2015-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_0-768/IC_site_0-768.nc @@ -144,9 +150,11 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2014/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-776/AMF_US-UMB_BASE_HH_10-1.2007-01-01.2014-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-776/AMF_US-UMB_BASE_HH_10-1.2007-01-01.2014-12-31.clim + - + /fs/data3/istfer/HPDA/data/param.files/site_0-776/IC_site_0-776.nc @@ -164,7 +172,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2006/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-676/AMF_US-WCr_BASE_HH_11-1.1999-01-01.2006-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-676/AMF_US-WCr_BASE_HH_11-1.1999-01-01.2006-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_0-676/IC_site_0-676.nc @@ -184,7 +194,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2015/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-109/AMF_CA-TPD_BASE_HH_1-1.2013-01-01.2015-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-109/AMF_CA-TPD_BASE_HH_1-1.2013-01-01.2015-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_1-109/IC_site_1-109.nc @@ -204,7 +216,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2008/12/31 - 
/fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-755/AMF_US-Dk2_BASE_HH_4-4.2002-01-01.2008-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-755/AMF_US-Dk2_BASE_HH_4-4.2002-01-01.2008-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_0-755/IC_site_0-755.nc @@ -214,17 +228,19 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 1000000145 - 2008/01/01 + 2010/01/01 2010/12/31 Chestnut Ridge (US-ChR) 35.9311 -84.3324 - 2008/01/01 + 2010/01/01 2010/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-145/AMF_US-ChR_BASE_HH_2-1.2008-01-01.2010-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-145//AMF_US-ChR_BASE_HH_2-1.2010-01-01.2010-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_1-145/IC_site_1-145.nc @@ -244,7 +260,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2014/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-66/AMF_US-Slt_BASE_HH_5-1.2005-01-01.2014-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-66/AMF_US-Slt_BASE_HH_5-1.2005-01-01.2014-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_1-66/IC_site_1-66.nc @@ -264,7 +282,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2013/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-61/AMF_US-Oho_BASE_HH_4-1.2004-01-01.2013-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_1-61/AMF_US-Oho_BASE_HH_4-1.2004-01-01.2013-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_1-61/IC_site_1-61.nc @@ -284,7 +304,9 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2010/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-740/AMF_CA-Oas_BASE_HH_1-1.1997-01-01.2010-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-740/AMF_CA-Oas_BASE_HH_1-1.1997-01-01.2010-12-31.clim + /fs/data3/istfer/HPDA/data/param.files/site_0-740/IC_site_0-740.nc @@ -304,8 +326,13 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded 2015/12/31 - /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-758/AMF_US-Ha1_BASE_HR_10-1.2010-01-01.2015-12-31.clim + + /fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-758/AMF_US-Ha1_BASE_HR_10-1.2010-01-01.2015-12-31.clim + + + /fs/data3/istfer/HPDA/data/param.files/site_0-758/IC_site_0-758.nc +
@@ -314,13 +341,13 @@ When you open `pecan.CHECKED.xml`, you'll see that the `` tag has expanded
 
 ## Modify ensemble analysis tags
 
-Now let's specify the ensemble `` and the ensemble `` within the `` tag. In the paper we give an ensemble of runs with `1000` members but you can try smaller ensembles.
+Now let's specify the ensemble `` and the ensemble `` within the `` tag. In the paper we use an ensemble of `250` members, but you can try smaller ensembles.
 
 Overall, your `pecan.xml` for the `` sections will look like this:
 
 ```
 
-    1000
+    250
     NEE

diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION
index 43183a63fbd..03e15509acc 100644
--- a/modules/emulator/DESCRIPTION
+++ b/modules/emulator/DESCRIPTION
@@ -11,6 +11,7 @@ Depends:
     mlegp,
     coda (>= 0.18),
     MASS,
+    tmvtnorm,
    MCMCpack
 Imports:
     PEcAn.logger,

diff --git a/modules/emulator/NAMESPACE b/modules/emulator/NAMESPACE
index 421d54566ae..1823df6b3f8 100644
--- a/modules/emulator/NAMESPACE
+++ b/modules/emulator/NAMESPACE
@@ -4,11 +4,8 @@ S3method(calcSpatialCov,list)
 S3method(calcSpatialCov,matrix)
 S3method(p,jump)
 S3method(p,mvjump)
-S3method(plot,mvjump)
 S3method(predict,GP)
 S3method(predict,density)
-S3method(update,jump)
-S3method(update,mvjump)
 export(GaussProcess)
 export(arate)
 export(bounded)
@@ -34,4 +31,7 @@ export(mvjump)
 export(nderiv)
 export(p)
 export(plot.jump)
+export(plot.mvjump)
 export(summarize.GP)
+export(update.jump)
+export(update.mvjump)
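Both `generate_hierpost` above and the `mcmc.GP` hunk that follows make the same move: instead of repeatedly drawing from an unconstrained Gaussian and discarding out-of-range draws, they sample the box-truncated distribution directly. A minimal self-contained sketch of the two approaches (toy mean, jump covariance, and bounds; none of these values come from the PEcAn code):

```r
# Toy 2-parameter setup; mu, jcov and rng are illustrative stand-ins for the
# current chain state, jump covariance, and prior support used in the patch.
mu   <- c(0.5, 1.0)
jcov <- matrix(c(0.04, 0.01,
                 0.01, 0.09), nrow = 2)
rng  <- cbind(c(0, 0), c(1, 2))  # column 1 = lower bounds, column 2 = upper bounds

# Rejection approach (old code): cheap per draw, but the expected number of
# tries grows without bound as mu approaches a corner of the box.
repeat {
  xnew <- MASS::mvrnorm(1, mu, jcov)
  if (all(xnew >= rng[, 1] & xnew <= rng[, 2])) break
}

# Truncated-normal approach (new code): one call, always in bounds.
xnew <- tmvtnorm::rtmvnorm(1, mean = mu, sigma = jcov,
                           lower = rng[, 1], upper = rng[, 2])
```

The catch, addressed in the `minimize.GP.R` diff below, is that the truncated kernel is no longer symmetric, so the plain Metropolis rule needs a Hastings correction.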
diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R
index e39ddc5c8e8..6ee3caeee8c 100644
--- a/modules/emulator/R/minimize.GP.R
+++ b/modules/emulator/R/minimize.GP.R
@@ -219,7 +219,7 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn
   currllp <- pda.calc.llik.par(settings, n.of.obs, currSS, hyper.pars)
   pcurr   <- unlist(sapply(currllp, `[[` , "par"))
 
-  xcurr <- x0
+  xcurr <- unlist(x0)
   dim   <- length(x0)
   samp  <- matrix(NA, nmcmc, dim)
   par   <- matrix(NA, nmcmc, length(pcurr), dimnames = list(NULL, names(pcurr)))  # note: length(pcurr) can be 0
@@ -258,12 +258,7 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn
     }
 
     ## propose new parameters
-    repeat {
-      xnew <- MASS::mvrnorm(1, unlist(xcurr), jcov)
-      if (bounded(xnew, rng)) {
-        break
-      }
-    }
+    xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2])
 
     # if(bounded(xnew,rng)){
     # re-predict SS
@@ -274,15 +269,18 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn
     # don't update the currllp ( = llik.par, e.g. tau) yet
     # calculate posterior with xcurr | currllp
     ycurr <- get_y(currSS, xcurr, llik.fn, priors, currllp)
-
+    HRcurr <- tmvtnorm::dtmvnorm(c(xnew), c(xcurr), jcov,
+                                 lower = rng[,1], upper = rng[,2], log = TRUE)
 
     newSS <- get_ss(gp, xnew, pos.check)
 
     if(all(newSS != -Inf)){
 
       newllp <- pda.calc.llik.par(settings, n.of.obs, newSS, hyper.pars)
       ynew   <- get_y(newSS, xnew, llik.fn, priors, newllp)
+      HRnew <- tmvtnorm::dtmvnorm(c(xcurr), c(xnew), jcov,
+                                  lower = rng[,1], upper = rng[,2], log = TRUE)
 
-      if (is.accepted(ycurr, ynew)) {
+      if (is.accepted(ycurr+HRcurr, ynew+HRnew)) {
         xcurr  <- xnew
         currSS <- newSS
         accept.count <- accept.count + 1

diff --git a/modules/emulator/man/plot.jump.Rd b/modules/emulator/man/plot.jump.Rd
index 648561b8e4c..376c4f9a018 100644
--- a/modules/emulator/man/plot.jump.Rd
+++ b/modules/emulator/man/plot.jump.Rd
@@ -4,7 +4,7 @@
 \alias{plot.jump}
 \title{plot.jump}
 \usage{
-\method{plot}{jump}(jmp)
+plot.jump(jmp)
 }
 \arguments{
 \item{jmp}{jump parameter}

diff --git a/modules/emulator/man/plot.mvjump.Rd b/modules/emulator/man/plot.mvjump.Rd
index 5d71a07499d..399cb9f42f7 100644
--- a/modules/emulator/man/plot.mvjump.Rd
+++ b/modules/emulator/man/plot.mvjump.Rd
@@ -4,7 +4,7 @@
 \alias{plot.mvjump}
 \title{plot.mvjump}
 \usage{
-\method{plot}{mvjump}(jmp)
+plot.mvjump(jmp)
 }
 \arguments{
 \item{jmp}{}

diff --git a/modules/emulator/man/update.jump.Rd b/modules/emulator/man/update.jump.Rd
index 56dcb077ffa..45b37f16676 100644
--- a/modules/emulator/man/update.jump.Rd
+++ b/modules/emulator/man/update.jump.Rd
@@ -4,7 +4,7 @@
 \alias{update.jump}
 \title{update.jump}
 \usage{
-\method{update}{jump}(jmp, chain)
+update.jump(jmp, chain)
 }
 \arguments{
 \item{jmp}{jump parameter}

diff --git a/modules/emulator/man/update.mvjump.Rd b/modules/emulator/man/update.mvjump.Rd
index e99e1b81ec0..872b6816af6 100644
--- a/modules/emulator/man/update.mvjump.Rd
+++ b/modules/emulator/man/update.mvjump.Rd
@@ -4,7 +4,7 @@
 \alias{update.mvjump}
 \title{update.mvjump}
 \usage{
-\method{update}{mvjump}(jmp, chain)
+update.mvjump(jmp, chain)
 }
 \description{
 update.mvjump

From 87f790fe57e3b27642a1d4466fae720325a5c04c Mon Sep 17 00:00:00 2001
From: istfer
Date: Thu, 17 Oct 2019 15:56:33 -0400
Subject: [PATCH 0528/2289] reverse

---
 modules/emulator/R/minimize.GP.R | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R
index 6ee3caeee8c..240cedcd4a9 100644
--- a/modules/emulator/R/minimize.GP.R
+++ b/modules/emulator/R/minimize.GP.R
@@ -258,7 +258,13 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn
     }
 
     ## propose new parameters
-    xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2])
+    repeat {
+      xnew <- mvrnorm(1, c(xcurr), jcov)
+      if (bounded(xnew, rng)) {
+        break
+      }
+    }
+    #xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2])
 
     # if(bounded(xnew,rng)){
     # re-predict SS
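PATCH 0527 adds the correction mentioned above: `HRcurr` and `HRnew` are the log proposal densities in the two directions, entering the acceptance comparison as `is.accepted(ycurr + HRcurr, ynew + HRnew)`. A compact sketch of the rule, with `logpost()` standing in for the emulator log-posterior (all values illustrative, not from the PEcAn codebase):

```r
logpost <- function(x) sum(dnorm(x, mean = 0.5, sd = 0.2, log = TRUE))  # stand-in

rng   <- cbind(c(0, 0), c(1, 1))  # box constraints
jcov  <- diag(0.05, 2)            # jump covariance
xcurr <- c(0.9, 0.9)

xnew <- c(tmvtnorm::rtmvnorm(1, mean = xcurr, sigma = jcov,
                             lower = rng[, 1], upper = rng[, 2]))

# q(.|x) is a Gaussian renormalized over the box, so its height depends on x:
# the kernel is asymmetric and the proposal densities do not cancel.
log_q_fwd  <- tmvtnorm::dtmvnorm(xnew, mean = xcurr, sigma = jcov,
                                 lower = rng[, 1], upper = rng[, 2], log = TRUE)
log_q_back <- tmvtnorm::dtmvnorm(xcurr, mean = xnew, sigma = jcov,
                                 lower = rng[, 1], upper = rng[, 2], log = TRUE)

# Accept with probability min(1, exp(log_alpha)); adding the backward density
# to the new side and the forward density to the current side mirrors
# is.accepted(ycurr + HRcurr, ynew + HRnew) in the patch.
log_alpha <- (logpost(xnew) + log_q_back) - (logpost(xcurr) + log_q_fwd)
accepted  <- log(runif(1)) < log_alpha
```

Note that the bounded rejection loop restored in PATCH 0528 draws from the same truncated kernel, so the asymmetry is a property of the bounds, not of how the draw is generated.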
From 55b329ee5a1819fbc4ecac6a1ba1387178253857 Mon Sep 17 00:00:00 2001
From: istfer
Date: Fri, 18 Oct 2019 04:42:37 -0400
Subject: [PATCH 0529/2289] having precision issues

---
 modules/emulator/DESCRIPTION     |  1 +
 modules/emulator/R/minimize.GP.R | 15 ++++++++-------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION
index 03e15509acc..edd82c3cee4 100644
--- a/modules/emulator/DESCRIPTION
+++ b/modules/emulator/DESCRIPTION
@@ -12,6 +12,7 @@ Depends:
     coda (>= 0.18),
     MASS,
     tmvtnorm,
+    lqmm,
     MCMCpack
 Imports:
     PEcAn.logger,

diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R
index 240cedcd4a9..e7118bbc59b 100644
--- a/modules/emulator/R/minimize.GP.R
+++ b/modules/emulator/R/minimize.GP.R
@@ -243,6 +243,8 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn
     # jmp <- mvjump(ic=diag(jmp0),rate=ar.target, nc=dim)
   }
 
+  # make sure it is positive definite, see note below
+  jcov <- lqmm::make.positive.definite(jcov, tol=1e-10)
 
   for (g in start:nmcmc) {
 
@@ -255,16 +257,15 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn
       # accept.count <- round(jmp@arate[(g-1)/settings$assim.batch$jump$adapt]*100)
       jcov <- pda.adjust.jumps.bs(settings, jcov, accept.count, params.recent)
       accept.count <- 0  # Reset counter
+
+      # make sure precision is not going to be an issue
+      # NOTE: for very small values this is going to be an issue
+      # maybe include a scaling somewhere while building the emulator
+      jcov <- lqmm::make.positive.definite(jcov, tol=1e-10)
     }
 
     ## propose new parameters
-    repeat {
-      xnew <- mvrnorm(1, c(xcurr), jcov)
-      if (bounded(xnew, rng)) {
-        break
-      }
-    }
-    #xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2])
+    xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2])
 
     # if(bounded(xnew,rng)){
     # re-predict SS

From 0a324f46c6ba6e9ec778019555dd13d323206e07 Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Tue, 22 Oct 2019 12:46:42 -0400
Subject: [PATCH 0530/2289] Updating .nc file creation - it was overwriting the
 nc file at each time step instead of renaming the files

---
 .../assim.sequential/R/sda.enkf_refactored.R | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R
index 78cd2dcc8a0..785f582f415 100644
--- a/modules/assim.sequential/R/sda.enkf_refactored.R
+++ b/modules/assim.sequential/R/sda.enkf_refactored.R
@@ -350,19 +350,24 @@ sda.enkf <- function(settings,
     }
 
     #----changing the extension of nc files to a more specific date-related name
-    purrr::walk(
-      list.files(
-        path = file.path(settings$outdir, "out"),
-        "*.nc$",
-        recursive = TRUE,
-        full.names = TRUE),
+    files <- list.files(
+      path = file.path(settings$outdir, "out"),
+      "*.nc$",
+      recursive = TRUE,
+      full.names = TRUE)
+    files <- files[grep(pattern = "SDA*", files, invert = TRUE)]
+
+
+    purrr::walk(files,
       function(.x){
        file.rename(.x ,
                    file.path(dirname(.x),
-                             paste0(gsub(" ", "", as.character(names(obs.mean)[t])),
+                             paste0("SDA_",gsub(" ", "", as.character(names(obs.mean)[t])),
                                     ".nc"))
+
        )
      })
+
     #--- Reformatting X
     X <- do.call(rbind, X)

From 8f353498f39dad07903c7e668cc1120af5efe6b6 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 24 Oct 2019 12:04:22 +0200
Subject: [PATCH 0531/2289] remove stray file

---
 scripts/Makefile.depends | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 scripts/Makefile.depends

diff --git a/scripts/Makefile.depends b/scripts/Makefile.depends
deleted file mode 100644
index adac6e7c09d..00000000000
--- a/scripts/Makefile.depends
+++ /dev/null
@@ -1 +0,0 @@
-# autogenerated
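The two renaming patches (0530 above and 0532 below) converge on the logic sketched here. Paths, file names, and the timestamp are invented for the example; in the real code `names(obs.mean)[t]` supplies the analysis date for the current assimilation step.

```r
# Minimal sketch of the post-analysis renaming in sda.enkf (illustrative only).
outdir <- file.path(tempdir(), "run")
dir.create(file.path(outdir, "out"), recursive = TRUE, showWarnings = FALSE)
file.create(file.path(outdir, "out",
                      c("2018.nc", "SDA_2018.nc_2018-06-01.nc")))

files <- list.files(path = file.path(outdir, "out"), "*.nc$",
                    recursive = TRUE, full.names = TRUE)
# Drop files renamed at an earlier time step, so they are not renamed again.
files <- files[grep(pattern = "SDA*", files, invert = TRUE)]

timestamp <- gsub(" ", "", "2018-07-01")  # stands in for names(obs.mean)[t]

# Keeping basename(files) in the target makes the mapping one-to-one; renaming
# every file to the same "SDA_<date>.nc" would overwrite all but the last one.
file.rename(files,
            file.path(dirname(files),
                      paste0("SDA_", basename(files), "_", timestamp, ".nc")))
```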
From ff329d2b9d0c8b8c31c1dde460bb67abd1a4e07e Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Thu, 24 Oct 2019 09:42:36 -0400
Subject: [PATCH 0532/2289] addressing PR comments

---
 modules/assim.sequential/R/sda.enkf_refactored.R | 12 +++---------
 .../inst/WillowCreek/gefs.sipnet.template.xml    |  2 +-
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R
index 785f582f415..cc45692193b 100644
--- a/modules/assim.sequential/R/sda.enkf_refactored.R
+++ b/modules/assim.sequential/R/sda.enkf_refactored.R
@@ -358,15 +358,9 @@ sda.enkf <- function(settings,
     files <- files[grep(pattern = "SDA*", files, invert = TRUE)]
 
-    purrr::walk(files,
-      function(.x){
-       file.rename(.x ,
-                   file.path(dirname(.x),
-                             paste0("SDA_",gsub(" ", "", as.character(names(obs.mean)[t])),
-                                    ".nc"))
-
-       )
-     })
+    file.rename(files,
+                file.path(dirname(files),
+                          paste0("SDA_", basename(files), "_", gsub(" ", "", names(obs.mean)[t]), ".nc") ) )
 
     #--- Reformatting X
     X <- do.call(rbind, X)

diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml
index 8170135886e..c9dac66dc28 100755
--- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml
+++ b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml
@@ -87,7 +87,7 @@
     FALSE
-    50
+    200
     NEE
     2018
     2018

From 6aca18fb3043809439f40e26e8550e045db6e397 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Fri, 25 Oct 2019 16:03:39 -0500
Subject: [PATCH 0533/2289] this fixes the extraction of only the year
 requested #2187

---
 models/sipnet/R/model2netcdf.SIPNET.R | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/models/sipnet/R/model2netcdf.SIPNET.R b/models/sipnet/R/model2netcdf.SIPNET.R
index c2869941f15..2049857078c 100644
--- a/models/sipnet/R/model2netcdf.SIPNET.R
+++ b/models/sipnet/R/model2netcdf.SIPNET.R
@@ -74,15 +74,11 @@ model2netcdf.SIPNET <- function(outdir, sitelat, sitelon, start_date, end_date,
   num_years        <- length(unique(sipnet_output$year))
   simulation_years <- unique(sipnet_output$year)
 
-  # quick consistency check
+  # get all years that we want data from
   year_seq <- seq(lubridate::year(start_date), lubridate::year(end_date))
-  if (length(year_seq) != num_years) {
-    PEcAn.logger::logger.severe("Date range specified in model2netcdf.SIPNET() function call and
-                                total number of years in SIPNET output are not equal")
-  }
 
   # check that specified years and output years match
-  if (!all(simulation_years %in% year_seq)) {
+  if (!all(year_seq %in% simulation_years)) {
     PEcAn.logger::logger.severe("Years selected for model run and SIPNET output years do not match ")
   }
 
@@ -99,7 +95,7 @@ model2netcdf.SIPNET <- function(outdir, sitelat, sitelon, start_date, end_date,
   timestep.s <- 86400 / out_day
 
   ### Loop over years in SIPNET output to create separate netCDF outputs
-  for (y in simulation_years) {
+  for (y in year_seq) {
     if (file.exists(file.path(outdir, paste(y, "nc", sep = "."))) & overwrite == FALSE) {
       next
     }

From 282282e8754077c7112ab2f28cde10056ccb3e54 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Fri, 25 Oct 2019 16:05:23 -0500
Subject: [PATCH 0534/2289] update CHANGELOG

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7308059407f..2e8c5a9e047 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,6 +11,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - Fix issue with cruncep download: use netcdf subset (ncss) method instead of opendap (#2424).
 - The `parse` option to `PEcAn.utils::read_web_config` had no effect when `expand` was TRUE (#2421).
 - Fixed a typo that made `PEcAn.DB::symmetric_setdiff` falsely report no differences (#2428).
+- `model2netcdf.SIPNET` will now only extract the data for the years requested (#2187)
 
 ### Changed
 - Stricter package checking: `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings (#2404).
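The logic behind PATCH 0533 in isolation: the SIPNET output may legitimately cover more years than the caller asked for, so the containment check has to run from the requested years into the simulated years, and the write loop should iterate over the request. A toy illustration (years invented):

```r
simulation_years <- 2001:2014  # years present in the SIPNET output
year_seq         <- 2005:2010  # years requested via start_date/end_date

# Old check: fails whenever the run covers more years than requested,
# even though every requested year is available.
all(simulation_years %in% year_seq)  # FALSE -> spurious logger.severe()

# New check: only fails if a requested year is absent from the output.
all(year_seq %in% simulation_years)  # TRUE

# Looping over year_seq (not simulation_years) then writes one <year>.nc
# per requested year and skips the extra years in the output.
for (y in year_seq) {
  message("would write ", y, ".nc")
}
```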
From 205c2e46dfc11b686dfa30bbdfe4e3ad6cce3ee5 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Fri, 25 Oct 2019 16:04:24 -0700
Subject: [PATCH 0535/2289] Update common.php

---
 web/common.php | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/web/common.php b/web/common.php
index 23a0dcc14ab..e18c64efdf0 100644
--- a/web/common.php
+++ b/web/common.php
@@ -11,7 +11,9 @@ function get_footer() {
   return "The PEcAn project is supported by the National Science Foundation
           (ABI #1062547, ABI #1458021, DIBBS #1261582, ARC #1023477, EF #1318164,
           EF #1241894, EF #1241891), NASA
-          Terrestrial Ecosystems, Department of Energy (ARPA-E #DE-AR0000594 and #DE-AR0000598), the Energy Biosciences Institute, and an Amazon AWS in Education Grant.
+          Terrestrial Ecosystems, Department of Energy (ARPA-E #DE-AR0000594 and #DE-AR0000598),
+          Department of Defense, the Arizona Experiment Station, the Energy Biosciences Institute,
+          and an Amazon AWS in Education Grant.
           PEcAn Version 1.7.1";
 }

From 2094da40888a8d945cb2aa07235bb6b3039fa809 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 28 Oct 2019 09:49:57 -0500
Subject: [PATCH 0536/2289] model monitor webpage

---
 docker/monitor/Dockerfile               |   7 +-
 docker/monitor/bootstrap-table.min.css  |  10 ++
 docker/monitor/bootstrap-table.min.js   |  10 ++
 docker/monitor/bootstrap.min.css        |   7 ++
 docker/monitor/bootstrap.min.js         |   7 ++
 docker/monitor/fa-solid-900.woff2       | Bin 0 -> 75728 bytes
 docker/monitor/favicon.jpg              | Bin 0 -> 768 bytes
 docker/monitor/fontawesome.min.css      |   5 +
 docker/monitor/index.html               | 129 ++++++++++++++++++++++++
 docker/monitor/jquery-3.3.1.min.js      |   2 +
 docker/monitor/monitor.py               |  39 +++++--
 docker/monitor/popper.min.js            |   5 +
 docker/monitor/solid.min.css            |   5 +
 docker/monitor/sticky-footer-navbar.css |  36 +++++++
 14 files changed, 249 insertions(+), 13 deletions(-)
 create mode 100644 docker/monitor/bootstrap-table.min.css
 create mode 100644 docker/monitor/bootstrap-table.min.js
 create mode 100644 docker/monitor/bootstrap.min.css
 create mode 100644 docker/monitor/bootstrap.min.js
 create mode 100644 docker/monitor/fa-solid-900.woff2
 create mode 100644 docker/monitor/favicon.jpg
 create mode 100644 docker/monitor/fontawesome.min.css
 create mode 100644 docker/monitor/index.html
 create mode 100644 docker/monitor/jquery-3.3.1.min.js
 create mode 100644 docker/monitor/popper.min.js
 create mode 100644 docker/monitor/solid.min.css
 create mode 100644 docker/monitor/sticky-footer-navbar.css

diff --git a/docker/monitor/Dockerfile b/docker/monitor/Dockerfile
index 76dfc97ff21..ea776f08796 100644
--- a/docker/monitor/Dockerfile
+++ b/docker/monitor/Dockerfile
@@ -3,7 +3,10 @@ FROM python:3.5
 ENV RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" \
     RABBITMQ_MGMT_PORT="15672" \
     RABBITMQ_MGMT_PATH="/rabbitmq/" \
-    POSTGRES_PARAM="host=postgres dbname=bety user=bety password=bety connect_timeout=10" \
+    PGHOST="postgres" \
+    BETYUSER="bety" \
+    BETYPASSWORD="bety" \
+    BETYDATABASE="bety" \
     FQDN="pecan"
 
 EXPOSE 9999
 
 WORKDIR /src
 
 COPY requirements.txt /src/
 RUN pip3 install -r
/src/requirements.txt -COPY monitor.py /src/ +COPY . /src/ CMD python3 monitor.py diff --git a/docker/monitor/bootstrap-table.min.css b/docker/monitor/bootstrap-table.min.css new file mode 100644 index 00000000000..14afe05cb3e --- /dev/null +++ b/docker/monitor/bootstrap-table.min.css @@ -0,0 +1,10 @@ +/** + * bootstrap-table - An extended table to integration with some of the most widely used CSS frameworks. (Supports Bootstrap, Semantic UI, Bulma, Material Design, Foundation) + * + * @version v1.15.5 + * @homepage https://bootstrap-table.com + * @author wenzhixin (http://wenzhixin.net.cn/) + * @license MIT + */ + +@charset "UTF-8";.bootstrap-table .fixed-table-toolbar::after{content:"";display:block;clear:both}.bootstrap-table .fixed-table-toolbar .bs-bars,.bootstrap-table .fixed-table-toolbar .search,.bootstrap-table .fixed-table-toolbar .columns{position:relative;margin-top:10px;margin-bottom:10px}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group{display:inline-block;margin-left:-1px!important}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group>.btn{border-radius:0}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group:first-child>.btn{border-top-left-radius:4px;border-bottom-left-radius:4px}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group:last-child>.btn{border-top-right-radius:4px;border-bottom-right-radius:4px}.bootstrap-table .fixed-table-toolbar .columns .dropdown-menu{text-align:left;max-height:300px;overflow:auto;-ms-overflow-style:scrollbar;z-index:1001}.bootstrap-table .fixed-table-toolbar .columns label{display:block;padding:3px 20px;clear:both;font-weight:normal;line-height:1.428571429}.bootstrap-table .fixed-table-toolbar .columns-left{margin-right:5px}.bootstrap-table .fixed-table-toolbar .columns-right{margin-left:5px}.bootstrap-table .fixed-table-toolbar .pull-right .dropdown-menu{right:0;left:auto}.bootstrap-table .fixed-table-container{position:relative;clear:both}.bootstrap-table .fixed-table-container .table{width:100%;margin-bottom:0!important}.bootstrap-table .fixed-table-container .table th,.bootstrap-table .fixed-table-container .table td{vertical-align:middle;box-sizing:border-box}.bootstrap-table .fixed-table-container .table thead th{vertical-align:bottom;padding:0;margin:0}.bootstrap-table .fixed-table-container .table thead th:focus{outline:0 solid transparent}.bootstrap-table .fixed-table-container .table thead th.detail{width:30px}.bootstrap-table .fixed-table-container .table thead th .th-inner{padding:.75rem;vertical-align:bottom;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.bootstrap-table .fixed-table-container .table thead th .sortable{cursor:pointer;background-position:right;background-repeat:no-repeat;padding-right:30px}.bootstrap-table .fixed-table-container .table thead th .both{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAQAAADYWf5HAAAAkElEQVQoz7X QMQ5AQBCF4dWQSJxC5wwax1Cq1e7BAdxD5SL+Tq/QCM1oNiJidwox0355mXnG/DrEtIQ6azioNZQxI0ykPhTQIwhCR+BmBYtlK7kLJYwWCcJA9M4qdrZrd8pPjZWPtOqdRQy320YSV17OatFC4euts6z39GYMKRPCTKY9UnPQ6P+GtMRfGtPnBCiqhAeJPmkqAAAAAElFTkSuQmCC")}.bootstrap-table .fixed-table-container .table thead th .asc{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAYAAAByUDbMAAAAZ0lEQVQ4y2NgGLKgquEuFxBPAGI2ahhWCsS/gDibUoO0gPgxEP8H4ttArEyuQYxAPBdqEAxPBImTY5gjEL9DM+wTENuQahAvEO9DMwiGdwAxOymGJQLxTyD+jgWDxCMZRsEoGAVoAADeemwtPcZI2wAAAABJRU5ErkJggg==")}.bootstrap-table .fixed-table-container .table thead th 
.desc{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAYAAAByUDbMAAAAZUlEQVQ4y2NgGAWjYBSggaqGu5FA/BOIv2PBIPFEUgxjB+IdQPwfC94HxLykus4GiD+hGfQOiB3J8SojEE9EM2wuSJzcsFMG4ttQgx4DsRalkZENxL+AuJQaMcsGxBOAmGvopk8AVz1sLZgg0bsAAAAASUVORK5CYII= ")}.bootstrap-table .fixed-table-container .table tbody tr.selected td{background-color:rgba(0,0,0,0.075)}.bootstrap-table .fixed-table-container .table tbody tr.no-records-found{text-align:center}.bootstrap-table .fixed-table-container .table tbody tr .card-view{display:flex}.bootstrap-table .fixed-table-container .table tbody tr .card-view .card-view-title{font-weight:bold;display:inline-block;min-width:30%;text-align:left!important}.bootstrap-table .fixed-table-container .table .bs-checkbox{text-align:center}.bootstrap-table .fixed-table-container .table .bs-checkbox label{margin-bottom:0}.bootstrap-table .fixed-table-container .table input[type=radio],.bootstrap-table .fixed-table-container .table input[type=checkbox]{margin:0 auto!important}.bootstrap-table .fixed-table-container .table.table-sm .th-inner{padding:.3rem}.bootstrap-table .fixed-table-container.fixed-height:not(.has-footer){border-bottom:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height.has-card-view{border-top:1px solid #dee2e6;border-bottom:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height .fixed-table-border{border-left:1px solid #dee2e6;border-right:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height .table thead th{border-bottom:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height .table-dark thead th{border-bottom:1px solid #32383e}.bootstrap-table .fixed-table-container .fixed-table-header{overflow:hidden}.bootstrap-table .fixed-table-container .fixed-table-body{overflow-x:auto;overflow-y:auto;height:100%}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading{align-items:center;background:#fff;display:none;justify-content:center;position:absolute;bottom:0;width:100%;z-index:1000}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap{align-items:baseline;display:flex;justify-content:center}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .loading-text{font-size:2rem;margin-right:6px}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap{align-items:center;display:flex;justify-content:center}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-dot,.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap::after,.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap::before{content:"";animation-duration:1.5s;animation-iteration-count:infinite;animation-name:LOADING;background:#212529;border-radius:50%;display:block;height:5px;margin:0 4px;opacity:0;width:5px}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-dot{animation-delay:.3s}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap::after{animation-delay:.6s}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark{background:#212529}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark .animation-dot,.bootstrap-table .fixed-table-container 
.fixed-table-body .fixed-table-loading.table-dark .animation-wrap::after,.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark .animation-wrap::before{background:#fff}.bootstrap-table .fixed-table-container .fixed-table-footer{overflow:hidden}.bootstrap-table .fixed-table-pagination::after{content:"";display:block;clear:both}.bootstrap-table .fixed-table-pagination>.pagination-detail,.bootstrap-table .fixed-table-pagination>.pagination{margin-top:10px;margin-bottom:10px}.bootstrap-table .fixed-table-pagination>.pagination-detail .pagination-info{line-height:34px;margin-right:5px}.bootstrap-table .fixed-table-pagination>.pagination-detail .page-list{display:inline-block}.bootstrap-table .fixed-table-pagination>.pagination-detail .page-list .btn-group{position:relative;display:inline-block;vertical-align:middle}.bootstrap-table .fixed-table-pagination>.pagination-detail .page-list .btn-group .dropdown-menu{margin-bottom:0}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination{margin:0}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination a{padding:6px 12px;line-height:1.428571429}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.page-intermediate a{color:#c8c8c8}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.page-intermediate a::before{content:"⬅"}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.page-intermediate a::after{content:"➡"}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.disabled a{pointer-events:none;cursor:default}.bootstrap-table.fullscreen{position:fixed;top:0;left:0;z-index:1050;width:100%!important;background:#fff;height:calc(100vh);overflow-y:scroll}div.fixed-table-scroll-inner{width:100%;height:200px}div.fixed-table-scroll-outer{top:0;left:0;visibility:hidden;width:200px;height:150px;overflow:hidden}@keyframes LOADING{0%{opacity:0}50%{opacity:1}to{opacity:0}} \ No newline at end of file diff --git a/docker/monitor/bootstrap-table.min.js b/docker/monitor/bootstrap-table.min.js new file mode 100644 index 00000000000..d37bccfb153 --- /dev/null +++ b/docker/monitor/bootstrap-table.min.js @@ -0,0 +1,10 @@ +/** + * bootstrap-table - An extended table to integration with some of the most widely used CSS frameworks. 
(Supports Bootstrap, Semantic UI, Bulma, Material Design, Foundation) + * + * @version v1.15.5 + * @homepage https://bootstrap-table.com + * @author wenzhixin (http://wenzhixin.net.cn/) + * @license MIT + */ + +!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e(require("jquery")):"function"==typeof define&&define.amd?define(["jquery"],e):(t=t||self).BootstrapTable=e(t.jQuery)}(this,(function(t){"use strict";t=t&&t.hasOwnProperty("default")?t.default:t;var e="undefined"!=typeof globalThis?globalThis:"undefined"!=typeof window?window:"undefined"!=typeof global?global:"undefined"!=typeof self?self:{};function i(t,e){return t(e={exports:{}},e.exports),e.exports}var n,o,r,a="object",s=function(t){return t&&t.Math==Math&&t},l=s(typeof globalThis==a&&globalThis)||s(typeof window==a&&window)||s(typeof self==a&&self)||s(typeof e==a&&e)||Function("return this")(),c=function(t){try{return!!t()}catch(t){return!0}},h=!c((function(){return 7!=Object.defineProperty({},"a",{get:function(){return 7}}).a})),u={}.propertyIsEnumerable,d=Object.getOwnPropertyDescriptor,f={f:d&&!u.call({1:2},1)?function(t){var e=d(this,t);return!!e&&e.enumerable}:u},p=function(t,e){return{enumerable:!(1&t),configurable:!(2&t),writable:!(4&t),value:e}},g={}.toString,v=function(t){return g.call(t).slice(8,-1)},b="".split,y=c((function(){return!Object("z").propertyIsEnumerable(0)}))?function(t){return"String"==v(t)?b.call(t,""):Object(t)}:Object,m=function(t){if(null==t)throw TypeError("Can't call method on "+t);return t},w=function(t){return y(m(t))},S=function(t){return"object"==typeof t?null!==t:"function"==typeof t},x=function(t,e){if(!S(t))return t;var i,n;if(e&&"function"==typeof(i=t.toString)&&!S(n=i.call(t)))return n;if("function"==typeof(i=t.valueOf)&&!S(n=i.call(t)))return n;if(!e&&"function"==typeof(i=t.toString)&&!S(n=i.call(t)))return n;throw TypeError("Can't convert object to primitive value")},k={}.hasOwnProperty,O=function(t,e){return k.call(t,e)},T=l.document,C=S(T)&&S(T.createElement),P=function(t){return C?T.createElement(t):{}},$=!h&&!c((function(){return 7!=Object.defineProperty(P("div"),"a",{get:function(){return 7}}).a})),I=Object.getOwnPropertyDescriptor,A={f:h?I:function(t,e){if(t=w(t),e=x(e,!0),$)try{return I(t,e)}catch(t){}if(O(t,e))return p(!f.f.call(t,e),t[e])}},E=function(t){if(!S(t))throw TypeError(String(t)+" is not an object");return t},R=Object.defineProperty,N={f:h?R:function(t,e,i){if(E(t),e=x(e,!0),E(i),$)try{return R(t,e,i)}catch(t){}if("get"in i||"set"in i)throw TypeError("Accessors not supported");return"value"in i&&(t[e]=i.value),t}},j=h?function(t,e,i){return N.f(t,e,p(1,i))}:function(t,e,i){return t[e]=i,t},F=function(t,e){try{j(l,t,e)}catch(i){l[t]=e}return e},_=i((function(t){var e=l["__core-js_shared__"]||F("__core-js_shared__",{});(t.exports=function(t,i){return e[t]||(e[t]=void 0!==i?i:{})})("versions",[]).push({version:"3.1.3",mode:"global",copyright:"© 2019 Denis Pushkarev (zloirock.ru)"})})),V=_("native-function-to-string",Function.toString),B=l.WeakMap,L="function"==typeof B&&/native code/.test(V.call(B)),D=0,H=Math.random(),M=function(t){return"Symbol("+String(void 0===t?"":t)+")_"+(++D+H).toString(36)},U=_("keys"),q=function(t){return U[t]||(U[t]=M(t))},z={},W=l.WeakMap;if(L){var G=new W,K=G.get,J=G.has,Y=G.set;n=function(t,e){return Y.call(G,t,e),e},o=function(t){return K.call(G,t)||{}},r=function(t){return J.call(G,t)}}else{var X=q("state");z[X]=!0,n=function(t,e){return j(t,X,e),e},o=function(t){return O(t,X)?t[X]:{}},r=function(t){return 
O(t,X)}}var Q={set:n,get:o,has:r,enforce:function(t){return r(t)?o(t):n(t,{})},getterFor:function(t){return function(e){var i;if(!S(e)||(i=o(e)).type!==t)throw TypeError("Incompatible receiver, "+t+" required");return i}}},Z=i((function(t){var e=Q.get,i=Q.enforce,n=String(V).split("toString");_("inspectSource",(function(t){return V.call(t)})),(t.exports=function(t,e,o,r){var a=!!r&&!!r.unsafe,s=!!r&&!!r.enumerable,c=!!r&&!!r.noTargetGet;"function"==typeof o&&("string"!=typeof e||O(o,"name")||j(o,"name",e),i(o).source=n.join("string"==typeof e?e:"")),t!==l?(a?!c&&t[e]&&(s=!0):delete t[e],s?t[e]=o:j(t,e,o)):s?t[e]=o:F(e,o)})(Function.prototype,"toString",(function(){return"function"==typeof this&&e(this).source||V.call(this)}))})),tt=l,et=function(t){return"function"==typeof t?t:void 0},it=function(t,e){return arguments.length<2?et(tt[t])||et(l[t]):tt[t]&&tt[t][e]||l[t]&&l[t][e]},nt=Math.ceil,ot=Math.floor,rt=function(t){return isNaN(t=+t)?0:(t>0?ot:nt)(t)},at=Math.min,st=function(t){return t>0?at(rt(t),9007199254740991):0},lt=Math.max,ct=Math.min,ht=function(t,e){var i=rt(t);return i<0?lt(i+e,0):ct(i,e)},ut=function(t){return function(e,i,n){var o,r=w(e),a=st(r.length),s=ht(n,a);if(t&&i!=i){for(;a>s;)if((o=r[s++])!=o)return!0}else for(;a>s;s++)if((t||s in r)&&r[s]===i)return t||s||0;return!t&&-1}},dt={includes:ut(!0),indexOf:ut(!1)},ft=dt.indexOf,pt=function(t,e){var i,n=w(t),o=0,r=[];for(i in n)!O(z,i)&&O(n,i)&&r.push(i);for(;e.length>o;)O(n,i=e[o++])&&(~ft(r,i)||r.push(i));return r},gt=["constructor","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","toLocaleString","toString","valueOf"],vt=gt.concat("length","prototype"),bt={f:Object.getOwnPropertyNames||function(t){return pt(t,vt)}},yt={f:Object.getOwnPropertySymbols},mt=it("Reflect","ownKeys")||function(t){var e=bt.f(E(t)),i=yt.f;return i?e.concat(i(t)):e},wt=function(t,e){for(var i=mt(e),n=N.f,o=A.f,r=0;rr;)N.f(t,i=n[r++],e[i]);return t},Ft=it("document","documentElement"),_t=q("IE_PROTO"),Vt=function(){},Bt=function(){var t,e=P("iframe"),i=gt.length;for(e.style.display="none",Ft.appendChild(e),e.src=String("javascript:"),(t=e.contentWindow.document).open(),t.write(" + + + + + + + diff --git a/docker/monitor/jquery-3.3.1.min.js b/docker/monitor/jquery-3.3.1.min.js new file mode 100644 index 00000000000..4d9b3a25875 --- /dev/null +++ b/docker/monitor/jquery-3.3.1.min.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.3.1 | (c) JS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(e,t){"use strict";var n=[],r=e.document,i=Object.getPrototypeOf,o=n.slice,a=n.concat,s=n.push,u=n.indexOf,l={},c=l.toString,f=l.hasOwnProperty,p=f.toString,d=p.call(Object),h={},g=function e(t){return"function"==typeof t&&"number"!=typeof t.nodeType},y=function e(t){return null!=t&&t===t.window},v={type:!0,src:!0,noModule:!0};function m(e,t,n){var i,o=(t=t||r).createElement("script");if(o.text=e,n)for(i in v)n[i]&&(o[i]=n[i]);t.head.appendChild(o).parentNode.removeChild(o)}function x(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?l[c.call(e)]||"object":typeof e}var b="3.3.1",w=function(e,t){return new w.fn.init(e,t)},T=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g;w.fn=w.prototype={jquery:"3.3.1",constructor:w,length:0,toArray:function(){return o.call(this)},get:function(e){return null==e?o.call(this):e<0?this[e+this.length]:this[e]},pushStack:function(e){var t=w.merge(this.constructor(),e);return t.prevObject=this,t},each:function(e){return w.each(this,e)},map:function(e){return this.pushStack(w.map(this,function(t,n){return e.call(t,n,t)}))},slice:function(){return this.pushStack(o.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(e){var t=this.length,n=+e+(e<0?t:0);return this.pushStack(n>=0&&n0&&t-1 in e)}var E=function(e){var t,n,r,i,o,a,s,u,l,c,f,p,d,h,g,y,v,m,x,b="sizzle"+1*new Date,w=e.document,T=0,C=0,E=ae(),k=ae(),S=ae(),D=function(e,t){return e===t&&(f=!0),0},N={}.hasOwnProperty,A=[],j=A.pop,q=A.push,L=A.push,H=A.slice,O=function(e,t){for(var n=0,r=e.length;n+~]|"+M+")"+M+"*"),z=new RegExp("="+M+"*([^\\]'\"]*?)"+M+"*\\]","g"),X=new RegExp(W),U=new RegExp("^"+R+"$"),V={ID:new RegExp("^#("+R+")"),CLASS:new RegExp("^\\.("+R+")"),TAG:new RegExp("^("+R+"|[*])"),ATTR:new RegExp("^"+I),PSEUDO:new RegExp("^"+W),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+P+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},G=/^(?:input|select|textarea|button)$/i,Y=/^h\d$/i,Q=/^[^{]+\{\s*\[native \w/,J=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,K=/[+~]/,Z=new RegExp("\\\\([\\da-f]{1,6}"+M+"?|("+M+")|.)","ig"),ee=function(e,t,n){var r="0x"+t-65536;return r!==r||n?t:r<0?String.fromCharCode(r+65536):String.fromCharCode(r>>10|55296,1023&r|56320)},te=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ne=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},re=function(){p()},ie=me(function(e){return!0===e.disabled&&("form"in e||"label"in e)},{dir:"parentNode",next:"legend"});try{L.apply(A=H.call(w.childNodes),w.childNodes),A[w.childNodes.length].nodeType}catch(e){L={apply:A.length?function(e,t){q.apply(e,H.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function oe(e,t,r,i){var o,s,l,c,f,h,v,m=t&&t.ownerDocument,T=t?t.nodeType:9;if(r=r||[],"string"!=typeof e||!e||1!==T&&9!==T&&11!==T)return 
r;if(!i&&((t?t.ownerDocument||t:w)!==d&&p(t),t=t||d,g)){if(11!==T&&(f=J.exec(e)))if(o=f[1]){if(9===T){if(!(l=t.getElementById(o)))return r;if(l.id===o)return r.push(l),r}else if(m&&(l=m.getElementById(o))&&x(t,l)&&l.id===o)return r.push(l),r}else{if(f[2])return L.apply(r,t.getElementsByTagName(e)),r;if((o=f[3])&&n.getElementsByClassName&&t.getElementsByClassName)return L.apply(r,t.getElementsByClassName(o)),r}if(n.qsa&&!S[e+" "]&&(!y||!y.test(e))){if(1!==T)m=t,v=e;else if("object"!==t.nodeName.toLowerCase()){(c=t.getAttribute("id"))?c=c.replace(te,ne):t.setAttribute("id",c=b),s=(h=a(e)).length;while(s--)h[s]="#"+c+" "+ve(h[s]);v=h.join(","),m=K.test(e)&&ge(t.parentNode)||t}if(v)try{return L.apply(r,m.querySelectorAll(v)),r}catch(e){}finally{c===b&&t.removeAttribute("id")}}}return u(e.replace(B,"$1"),t,r,i)}function ae(){var e=[];function t(n,i){return e.push(n+" ")>r.cacheLength&&delete t[e.shift()],t[n+" "]=i}return t}function se(e){return e[b]=!0,e}function ue(e){var t=d.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function le(e,t){var n=e.split("|"),i=n.length;while(i--)r.attrHandle[n[i]]=t}function ce(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function fe(e){return function(t){return"input"===t.nodeName.toLowerCase()&&t.type===e}}function pe(e){return function(t){var n=t.nodeName.toLowerCase();return("input"===n||"button"===n)&&t.type===e}}function de(e){return function(t){return"form"in t?t.parentNode&&!1===t.disabled?"label"in t?"label"in t.parentNode?t.parentNode.disabled===e:t.disabled===e:t.isDisabled===e||t.isDisabled!==!e&&ie(t)===e:t.disabled===e:"label"in t&&t.disabled===e}}function he(e){return se(function(t){return t=+t,se(function(n,r){var i,o=e([],n.length,t),a=o.length;while(a--)n[i=o[a]]&&(n[i]=!(r[i]=n[i]))})})}function ge(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}n=oe.support={},o=oe.isXML=function(e){var t=e&&(e.ownerDocument||e).documentElement;return!!t&&"HTML"!==t.nodeName},p=oe.setDocument=function(e){var t,i,a=e?e.ownerDocument||e:w;return a!==d&&9===a.nodeType&&a.documentElement?(d=a,h=d.documentElement,g=!o(d),w!==d&&(i=d.defaultView)&&i.top!==i&&(i.addEventListener?i.addEventListener("unload",re,!1):i.attachEvent&&i.attachEvent("onunload",re)),n.attributes=ue(function(e){return e.className="i",!e.getAttribute("className")}),n.getElementsByTagName=ue(function(e){return e.appendChild(d.createComment("")),!e.getElementsByTagName("*").length}),n.getElementsByClassName=Q.test(d.getElementsByClassName),n.getById=ue(function(e){return h.appendChild(e).id=b,!d.getElementsByName||!d.getElementsByName(b).length}),n.getById?(r.filter.ID=function(e){var t=e.replace(Z,ee);return function(e){return e.getAttribute("id")===t}},r.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&g){var n=t.getElementById(e);return n?[n]:[]}}):(r.filter.ID=function(e){var t=e.replace(Z,ee);return function(e){var n="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return n&&n.value===t}},r.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&g){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),r.find.TAG=n.getElementsByTagName?function(e,t){return"undefined"!=typeof 
t.getElementsByTagName?t.getElementsByTagName(e):n.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},r.find.CLASS=n.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&g)return t.getElementsByClassName(e)},v=[],y=[],(n.qsa=Q.test(d.querySelectorAll))&&(ue(function(e){h.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&y.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||y.push("\\["+M+"*(?:value|"+P+")"),e.querySelectorAll("[id~="+b+"-]").length||y.push("~="),e.querySelectorAll(":checked").length||y.push(":checked"),e.querySelectorAll("a#"+b+"+*").length||y.push(".#.+[+~]")}),ue(function(e){e.innerHTML="";var t=d.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&y.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&y.push(":enabled",":disabled"),h.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&y.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),y.push(",.*:")})),(n.matchesSelector=Q.test(m=h.matches||h.webkitMatchesSelector||h.mozMatchesSelector||h.oMatchesSelector||h.msMatchesSelector))&&ue(function(e){n.disconnectedMatch=m.call(e,"*"),m.call(e,"[s!='']:x"),v.push("!=",W)}),y=y.length&&new RegExp(y.join("|")),v=v.length&&new RegExp(v.join("|")),t=Q.test(h.compareDocumentPosition),x=t||Q.test(h.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},D=t?function(e,t){if(e===t)return f=!0,0;var r=!e.compareDocumentPosition-!t.compareDocumentPosition;return r||(1&(r=(e.ownerDocument||e)===(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!n.sortDetached&&t.compareDocumentPosition(e)===r?e===d||e.ownerDocument===w&&x(w,e)?-1:t===d||t.ownerDocument===w&&x(w,t)?1:c?O(c,e)-O(c,t):0:4&r?-1:1)}:function(e,t){if(e===t)return f=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e===d?-1:t===d?1:i?-1:o?1:c?O(c,e)-O(c,t):0;if(i===o)return ce(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?ce(a[r],s[r]):a[r]===w?-1:s[r]===w?1:0},d):d},oe.matches=function(e,t){return oe(e,null,null,t)},oe.matchesSelector=function(e,t){if((e.ownerDocument||e)!==d&&p(e),t=t.replace(z,"='$1']"),n.matchesSelector&&g&&!S[t+" "]&&(!v||!v.test(t))&&(!y||!y.test(t)))try{var r=m.call(e,t);if(r||n.disconnectedMatch||e.document&&11!==e.document.nodeType)return r}catch(e){}return oe(t,d,null,[e]).length>0},oe.contains=function(e,t){return(e.ownerDocument||e)!==d&&p(e),x(e,t)},oe.attr=function(e,t){(e.ownerDocument||e)!==d&&p(e);var i=r.attrHandle[t.toLowerCase()],o=i&&N.call(r.attrHandle,t.toLowerCase())?i(e,t,!g):void 0;return void 0!==o?o:n.attributes||!g?e.getAttribute(t):(o=e.getAttributeNode(t))&&o.specified?o.value:null},oe.escape=function(e){return(e+"").replace(te,ne)},oe.error=function(e){throw new Error("Syntax error, unrecognized expression: "+e)},oe.uniqueSort=function(e){var t,r=[],i=0,o=0;if(f=!n.detectDuplicates,c=!n.sortStable&&e.slice(0),e.sort(D),f){while(t=e[o++])t===e[o]&&(i=r.push(o));while(i--)e.splice(r[i],1)}return c=null,e},i=oe.getText=function(e){var 
t,n="",r=0,o=e.nodeType;if(o){if(1===o||9===o||11===o){if("string"==typeof e.textContent)return e.textContent;for(e=e.firstChild;e;e=e.nextSibling)n+=i(e)}else if(3===o||4===o)return e.nodeValue}else while(t=e[r++])n+=i(t);return n},(r=oe.selectors={cacheLength:50,createPseudo:se,match:V,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(Z,ee),e[3]=(e[3]||e[4]||e[5]||"").replace(Z,ee),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||oe.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&oe.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return V.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=a(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(Z,ee).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=E[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&E(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(e,t,n){return function(r){var i=oe.attr(r,e);return null==i?"!="===t:!t||(i+="","="===t?i===n:"!="===t?i!==n:"^="===t?n&&0===i.indexOf(n):"*="===t?n&&i.indexOf(n)>-1:"$="===t?n&&i.slice(-n.length)===n:"~="===t?(" "+i.replace($," ")+" ").indexOf(n)>-1:"|="===t&&(i===n||i.slice(0,n.length+1)===n+"-"))}},CHILD:function(e,t,n,r,i){var o="nth"!==e.slice(0,3),a="last"!==e.slice(-4),s="of-type"===t;return 1===r&&0===i?function(e){return!!e.parentNode}:function(t,n,u){var l,c,f,p,d,h,g=o!==a?"nextSibling":"previousSibling",y=t.parentNode,v=s&&t.nodeName.toLowerCase(),m=!u&&!s,x=!1;if(y){if(o){while(g){p=t;while(p=p[g])if(s?p.nodeName.toLowerCase()===v:1===p.nodeType)return!1;h=g="only"===e&&!h&&"nextSibling"}return!0}if(h=[a?y.firstChild:y.lastChild],a&&m){x=(d=(l=(c=(f=(p=y)[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]||[])[0]===T&&l[1])&&l[2],p=d&&y.childNodes[d];while(p=++d&&p&&p[g]||(x=d=0)||h.pop())if(1===p.nodeType&&++x&&p===t){c[e]=[T,d,x];break}}else if(m&&(x=d=(l=(c=(f=(p=t)[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]||[])[0]===T&&l[1]),!1===x)while(p=++d&&p&&p[g]||(x=d=0)||h.pop())if((s?p.nodeName.toLowerCase()===v:1===p.nodeType)&&++x&&(m&&((c=(f=p[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]=[T,x]),p===t))break;return(x-=i)===r||x%r==0&&x/r>=0}}},PSEUDO:function(e,t){var n,i=r.pseudos[e]||r.setFilters[e.toLowerCase()]||oe.error("unsupported pseudo: "+e);return i[b]?i(t):i.length>1?(n=[e,e,"",t],r.setFilters.hasOwnProperty(e.toLowerCase())?se(function(e,n){var r,o=i(e,t),a=o.length;while(a--)e[r=O(e,o[a])]=!(n[r]=o[a])}):function(e){return i(e,0,n)}):i}},pseudos:{not:se(function(e){var t=[],n=[],r=s(e.replace(B,"$1"));return r[b]?se(function(e,t,n,i){var o,a=r(e,null,i,[]),s=e.length;while(s--)(o=a[s])&&(e[s]=!(t[s]=o))}):function(e,i,o){return t[0]=e,r(t,null,o,n),t[0]=null,!n.pop()}}),has:se(function(e){return function(t){return oe(e,t).length>0}}),contains:se(function(e){return e=e.replace(Z,ee),function(t){return(t.textContent||t.innerText||i(t)).indexOf(e)>-1}}),lang:se(function(e){return U.test(e||"")||oe.error("unsupported lang: 
"+e),e=e.replace(Z,ee).toLowerCase(),function(t){var n;do{if(n=g?t.lang:t.getAttribute("xml:lang")||t.getAttribute("lang"))return(n=n.toLowerCase())===e||0===n.indexOf(e+"-")}while((t=t.parentNode)&&1===t.nodeType);return!1}}),target:function(t){var n=e.location&&e.location.hash;return n&&n.slice(1)===t.id},root:function(e){return e===h},focus:function(e){return e===d.activeElement&&(!d.hasFocus||d.hasFocus())&&!!(e.type||e.href||~e.tabIndex)},enabled:de(!1),disabled:de(!0),checked:function(e){var t=e.nodeName.toLowerCase();return"input"===t&&!!e.checked||"option"===t&&!!e.selected},selected:function(e){return e.parentNode&&e.parentNode.selectedIndex,!0===e.selected},empty:function(e){for(e=e.firstChild;e;e=e.nextSibling)if(e.nodeType<6)return!1;return!0},parent:function(e){return!r.pseudos.empty(e)},header:function(e){return Y.test(e.nodeName)},input:function(e){return G.test(e.nodeName)},button:function(e){var t=e.nodeName.toLowerCase();return"input"===t&&"button"===e.type||"button"===t},text:function(e){var t;return"input"===e.nodeName.toLowerCase()&&"text"===e.type&&(null==(t=e.getAttribute("type"))||"text"===t.toLowerCase())},first:he(function(){return[0]}),last:he(function(e,t){return[t-1]}),eq:he(function(e,t,n){return[n<0?n+t:n]}),even:he(function(e,t){for(var n=0;n=0;)e.push(r);return e}),gt:he(function(e,t,n){for(var r=n<0?n+t:n;++r1?function(t,n,r){var i=e.length;while(i--)if(!e[i](t,n,r))return!1;return!0}:e[0]}function be(e,t,n){for(var r=0,i=t.length;r-1&&(o[l]=!(a[l]=f))}}else v=we(v===a?v.splice(h,v.length):v),i?i(null,a,v,u):L.apply(a,v)})}function Ce(e){for(var t,n,i,o=e.length,a=r.relative[e[0].type],s=a||r.relative[" "],u=a?1:0,c=me(function(e){return e===t},s,!0),f=me(function(e){return O(t,e)>-1},s,!0),p=[function(e,n,r){var i=!a&&(r||n!==l)||((t=n).nodeType?c(e,n,r):f(e,n,r));return t=null,i}];u1&&xe(p),u>1&&ve(e.slice(0,u-1).concat({value:" "===e[u-2].type?"*":""})).replace(B,"$1"),n,u0,i=e.length>0,o=function(o,a,s,u,c){var f,h,y,v=0,m="0",x=o&&[],b=[],w=l,C=o||i&&r.find.TAG("*",c),E=T+=null==w?1:Math.random()||.1,k=C.length;for(c&&(l=a===d||a||c);m!==k&&null!=(f=C[m]);m++){if(i&&f){h=0,a||f.ownerDocument===d||(p(f),s=!g);while(y=e[h++])if(y(f,a||d,s)){u.push(f);break}c&&(T=E)}n&&((f=!y&&f)&&v--,o&&x.push(f))}if(v+=m,n&&m!==v){h=0;while(y=t[h++])y(x,b,a,s);if(o){if(v>0)while(m--)x[m]||b[m]||(b[m]=j.call(u));b=we(b)}L.apply(u,b),c&&!o&&b.length>0&&v+t.length>1&&oe.uniqueSort(u)}return c&&(T=E,l=w),x};return n?se(o):o}return s=oe.compile=function(e,t){var n,r=[],i=[],o=S[e+" "];if(!o){t||(t=a(e)),n=t.length;while(n--)(o=Ce(t[n]))[b]?r.push(o):i.push(o);(o=S(e,Ee(i,r))).selector=e}return o},u=oe.select=function(e,t,n,i){var o,u,l,c,f,p="function"==typeof e&&e,d=!i&&a(e=p.selector||e);if(n=n||[],1===d.length){if((u=d[0]=d[0].slice(0)).length>2&&"ID"===(l=u[0]).type&&9===t.nodeType&&g&&r.relative[u[1].type]){if(!(t=(r.find.ID(l.matches[0].replace(Z,ee),t)||[])[0]))return n;p&&(t=t.parentNode),e=e.slice(u.shift().value.length)}o=V.needsContext.test(e)?0:u.length;while(o--){if(l=u[o],r.relative[c=l.type])break;if((f=r.find[c])&&(i=f(l.matches[0].replace(Z,ee),K.test(u[0].type)&&ge(t.parentNode)||t))){if(u.splice(o,1),!(e=i.length&&ve(u)))return L.apply(n,i),n;break}}}return(p||s(e,d))(i,t,!g,n,!t||K.test(e)&&ge(t.parentNode)||t),n},n.sortStable=b.split("").sort(D).join("")===b,n.detectDuplicates=!!f,p(),n.sortDetached=ue(function(e){return 1&e.compareDocumentPosition(d.createElement("fieldset"))}),ue(function(e){return 
e.innerHTML="","#"===e.firstChild.getAttribute("href")})||le("type|href|height|width",function(e,t,n){if(!n)return e.getAttribute(t,"type"===t.toLowerCase()?1:2)}),n.attributes&&ue(function(e){return e.innerHTML="",e.firstChild.setAttribute("value",""),""===e.firstChild.getAttribute("value")})||le("value",function(e,t,n){if(!n&&"input"===e.nodeName.toLowerCase())return e.defaultValue}),ue(function(e){return null==e.getAttribute("disabled")})||le(P,function(e,t,n){var r;if(!n)return!0===e[t]?t.toLowerCase():(r=e.getAttributeNode(t))&&r.specified?r.value:null}),oe}(e);w.find=E,w.expr=E.selectors,w.expr[":"]=w.expr.pseudos,w.uniqueSort=w.unique=E.uniqueSort,w.text=E.getText,w.isXMLDoc=E.isXML,w.contains=E.contains,w.escapeSelector=E.escape;var k=function(e,t,n){var r=[],i=void 0!==n;while((e=e[t])&&9!==e.nodeType)if(1===e.nodeType){if(i&&w(e).is(n))break;r.push(e)}return r},S=function(e,t){for(var n=[];e;e=e.nextSibling)1===e.nodeType&&e!==t&&n.push(e);return n},D=w.expr.match.needsContext;function N(e,t){return e.nodeName&&e.nodeName.toLowerCase()===t.toLowerCase()}var A=/^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,t,n){return g(t)?w.grep(e,function(e,r){return!!t.call(e,r,e)!==n}):t.nodeType?w.grep(e,function(e){return e===t!==n}):"string"!=typeof t?w.grep(e,function(e){return u.call(t,e)>-1!==n}):w.filter(t,e,n)}w.filter=function(e,t,n){var r=t[0];return n&&(e=":not("+e+")"),1===t.length&&1===r.nodeType?w.find.matchesSelector(r,e)?[r]:[]:w.find.matches(e,w.grep(t,function(e){return 1===e.nodeType}))},w.fn.extend({find:function(e){var t,n,r=this.length,i=this;if("string"!=typeof e)return this.pushStack(w(e).filter(function(){for(t=0;t1?w.uniqueSort(n):n},filter:function(e){return this.pushStack(j(this,e||[],!1))},not:function(e){return this.pushStack(j(this,e||[],!0))},is:function(e){return!!j(this,"string"==typeof e&&D.test(e)?w(e):e||[],!1).length}});var q,L=/^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/;(w.fn.init=function(e,t,n){var i,o;if(!e)return this;if(n=n||q,"string"==typeof e){if(!(i="<"===e[0]&&">"===e[e.length-1]&&e.length>=3?[null,e,null]:L.exec(e))||!i[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(i[1]){if(t=t instanceof w?t[0]:t,w.merge(this,w.parseHTML(i[1],t&&t.nodeType?t.ownerDocument||t:r,!0)),A.test(i[1])&&w.isPlainObject(t))for(i in t)g(this[i])?this[i](t[i]):this.attr(i,t[i]);return this}return(o=r.getElementById(i[2]))&&(this[0]=o,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):g(e)?void 0!==n.ready?n.ready(e):e(w):w.makeArray(e,this)}).prototype=w.fn,q=w(r);var H=/^(?:parents|prev(?:Until|All))/,O={children:!0,contents:!0,next:!0,prev:!0};w.fn.extend({has:function(e){var t=w(e,this),n=t.length;return this.filter(function(){for(var e=0;e-1:1===n.nodeType&&w.find.matchesSelector(n,e))){o.push(n);break}return this.pushStack(o.length>1?w.uniqueSort(o):o)},index:function(e){return e?"string"==typeof e?u.call(w(e),this[0]):u.call(this,e.jquery?e[0]:e):this[0]&&this[0].parentNode?this.first().prevAll().length:-1},add:function(e,t){return this.pushStack(w.uniqueSort(w.merge(this.get(),w(e,t))))},addBack:function(e){return this.add(null==e?this.prevObject:this.prevObject.filter(e))}});function P(e,t){while((e=e[t])&&1!==e.nodeType);return e}w.each({parent:function(e){var t=e.parentNode;return t&&11!==t.nodeType?t:null},parents:function(e){return k(e,"parentNode")},parentsUntil:function(e,t,n){return k(e,"parentNode",n)},next:function(e){return P(e,"nextSibling")},prev:function(e){return 
P(e,"previousSibling")},nextAll:function(e){return k(e,"nextSibling")},prevAll:function(e){return k(e,"previousSibling")},nextUntil:function(e,t,n){return k(e,"nextSibling",n)},prevUntil:function(e,t,n){return k(e,"previousSibling",n)},siblings:function(e){return S((e.parentNode||{}).firstChild,e)},children:function(e){return S(e.firstChild)},contents:function(e){return N(e,"iframe")?e.contentDocument:(N(e,"template")&&(e=e.content||e),w.merge([],e.childNodes))}},function(e,t){w.fn[e]=function(n,r){var i=w.map(this,t,n);return"Until"!==e.slice(-5)&&(r=n),r&&"string"==typeof r&&(i=w.filter(r,i)),this.length>1&&(O[e]||w.uniqueSort(i),H.test(e)&&i.reverse()),this.pushStack(i)}});var M=/[^\x20\t\r\n\f]+/g;function R(e){var t={};return w.each(e.match(M)||[],function(e,n){t[n]=!0}),t}w.Callbacks=function(e){e="string"==typeof e?R(e):w.extend({},e);var t,n,r,i,o=[],a=[],s=-1,u=function(){for(i=i||e.once,r=t=!0;a.length;s=-1){n=a.shift();while(++s-1)o.splice(n,1),n<=s&&s--}),this},has:function(e){return e?w.inArray(e,o)>-1:o.length>0},empty:function(){return o&&(o=[]),this},disable:function(){return i=a=[],o=n="",this},disabled:function(){return!o},lock:function(){return i=a=[],n||t||(o=n=""),this},locked:function(){return!!i},fireWith:function(e,n){return i||(n=[e,(n=n||[]).slice?n.slice():n],a.push(n),t||u()),this},fire:function(){return l.fireWith(this,arguments),this},fired:function(){return!!r}};return l};function I(e){return e}function W(e){throw e}function $(e,t,n,r){var i;try{e&&g(i=e.promise)?i.call(e).done(t).fail(n):e&&g(i=e.then)?i.call(e,t,n):t.apply(void 0,[e].slice(r))}catch(e){n.apply(void 0,[e])}}w.extend({Deferred:function(t){var n=[["notify","progress",w.Callbacks("memory"),w.Callbacks("memory"),2],["resolve","done",w.Callbacks("once memory"),w.Callbacks("once memory"),0,"resolved"],["reject","fail",w.Callbacks("once memory"),w.Callbacks("once memory"),1,"rejected"]],r="pending",i={state:function(){return r},always:function(){return o.done(arguments).fail(arguments),this},"catch":function(e){return i.then(null,e)},pipe:function(){var e=arguments;return w.Deferred(function(t){w.each(n,function(n,r){var i=g(e[r[4]])&&e[r[4]];o[r[1]](function(){var e=i&&i.apply(this,arguments);e&&g(e.promise)?e.promise().progress(t.notify).done(t.resolve).fail(t.reject):t[r[0]+"With"](this,i?[e]:arguments)})}),e=null}).promise()},then:function(t,r,i){var o=0;function a(t,n,r,i){return function(){var s=this,u=arguments,l=function(){var e,l;if(!(t=o&&(r!==W&&(s=void 0,u=[e]),n.rejectWith(s,u))}};t?c():(w.Deferred.getStackHook&&(c.stackTrace=w.Deferred.getStackHook()),e.setTimeout(c))}}return w.Deferred(function(e){n[0][3].add(a(0,e,g(i)?i:I,e.notifyWith)),n[1][3].add(a(0,e,g(t)?t:I)),n[2][3].add(a(0,e,g(r)?r:W))}).promise()},promise:function(e){return null!=e?w.extend(e,i):i}},o={};return w.each(n,function(e,t){var a=t[2],s=t[5];i[t[1]]=a.add,s&&a.add(function(){r=s},n[3-e][2].disable,n[3-e][3].disable,n[0][2].lock,n[0][3].lock),a.add(t[3].fire),o[t[0]]=function(){return o[t[0]+"With"](this===o?void 0:this,arguments),this},o[t[0]+"With"]=a.fireWith}),i.promise(o),t&&t.call(o,o),o},when:function(e){var t=arguments.length,n=t,r=Array(n),i=o.call(arguments),a=w.Deferred(),s=function(e){return function(n){r[e]=this,i[e]=arguments.length>1?o.call(arguments):n,--t||a.resolveWith(r,i)}};if(t<=1&&($(e,a.done(s(n)).resolve,a.reject,!t),"pending"===a.state()||g(i[n]&&i[n].then)))return a.then();while(n--)$(i[n],s(n),a.reject);return a.promise()}});var 
B=/^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/;w.Deferred.exceptionHook=function(t,n){e.console&&e.console.warn&&t&&B.test(t.name)&&e.console.warn("jQuery.Deferred exception: "+t.message,t.stack,n)},w.readyException=function(t){e.setTimeout(function(){throw t})};var F=w.Deferred();w.fn.ready=function(e){return F.then(e)["catch"](function(e){w.readyException(e)}),this},w.extend({isReady:!1,readyWait:1,ready:function(e){(!0===e?--w.readyWait:w.isReady)||(w.isReady=!0,!0!==e&&--w.readyWait>0||F.resolveWith(r,[w]))}}),w.ready.then=F.then;function _(){r.removeEventListener("DOMContentLoaded",_),e.removeEventListener("load",_),w.ready()}"complete"===r.readyState||"loading"!==r.readyState&&!r.documentElement.doScroll?e.setTimeout(w.ready):(r.addEventListener("DOMContentLoaded",_),e.addEventListener("load",_));var z=function(e,t,n,r,i,o,a){var s=0,u=e.length,l=null==n;if("object"===x(n)){i=!0;for(s in n)z(e,t,s,n[s],!0,o,a)}else if(void 0!==r&&(i=!0,g(r)||(a=!0),l&&(a?(t.call(e,r),t=null):(l=t,t=function(e,t,n){return l.call(w(e),n)})),t))for(;s1,null,!0)},removeData:function(e){return this.each(function(){K.remove(this,e)})}}),w.extend({queue:function(e,t,n){var r;if(e)return t=(t||"fx")+"queue",r=J.get(e,t),n&&(!r||Array.isArray(n)?r=J.access(e,t,w.makeArray(n)):r.push(n)),r||[]},dequeue:function(e,t){t=t||"fx";var n=w.queue(e,t),r=n.length,i=n.shift(),o=w._queueHooks(e,t),a=function(){w.dequeue(e,t)};"inprogress"===i&&(i=n.shift(),r--),i&&("fx"===t&&n.unshift("inprogress"),delete o.stop,i.call(e,a,o)),!r&&o&&o.empty.fire()},_queueHooks:function(e,t){var n=t+"queueHooks";return J.get(e,n)||J.access(e,n,{empty:w.Callbacks("once memory").add(function(){J.remove(e,[t+"queue",n])})})}}),w.fn.extend({queue:function(e,t){var n=2;return"string"!=typeof e&&(t=e,e="fx",n--),arguments.length\x20\t\r\n\f]+)/i,he=/^$|^module$|\/(?:java|ecma)script/i,ge={option:[1,""],thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};ge.optgroup=ge.option,ge.tbody=ge.tfoot=ge.colgroup=ge.caption=ge.thead,ge.th=ge.td;function ye(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&N(e,t)?w.merge([e],n):n}function ve(e,t){for(var n=0,r=e.length;n-1)i&&i.push(o);else if(l=w.contains(o.ownerDocument,o),a=ye(f.appendChild(o),"script"),l&&ve(a),n){c=0;while(o=a[c++])he.test(o.type||"")&&n.push(o)}return f}!function(){var e=r.createDocumentFragment().appendChild(r.createElement("div")),t=r.createElement("input");t.setAttribute("type","radio"),t.setAttribute("checked","checked"),t.setAttribute("name","t"),e.appendChild(t),h.checkClone=e.cloneNode(!0).cloneNode(!0).lastChild.checked,e.innerHTML="",h.noCloneChecked=!!e.cloneNode(!0).lastChild.defaultValue}();var be=r.documentElement,we=/^key/,Te=/^(?:mouse|pointer|contextmenu|drag|drop)|click/,Ce=/^([^.]*)(?:\.(.+)|)/;function Ee(){return!0}function ke(){return!1}function Se(){try{return r.activeElement}catch(e){}}function De(e,t,n,r,i,o){var a,s;if("object"==typeof t){"string"!=typeof n&&(r=r||n,n=void 0);for(s in t)De(e,s,n,r,t[s],o);return e}if(null==r&&null==i?(i=n,r=n=void 0):null==i&&("string"==typeof n?(i=r,r=void 0):(i=r,r=n,n=void 0)),!1===i)i=ke;else if(!i)return e;return 1===o&&(a=i,(i=function(e){return w().off(e),a.apply(this,arguments)}).guid=a.guid||(a.guid=w.guid++)),e.each(function(){w.event.add(this,t,i,r,n)})}w.event={global:{},add:function(e,t,n,r,i){var o,a,s,u,l,c,f,p,d,h,g,y=J.get(e);if(y){n.handler&&(n=(o=n).handler,i=o.selector),i&&w.find.matchesSelector(be,i),n.guid||(n.guid=w.guid++),(u=y.events)||(u=y.events={}),(a=y.handle)||(a=y.handle=function(t){return"undefined"!=typeof w&&w.event.triggered!==t.type?w.event.dispatch.apply(e,arguments):void 0}),l=(t=(t||"").match(M)||[""]).length;while(l--)d=g=(s=Ce.exec(t[l])||[])[1],h=(s[2]||"").split(".").sort(),d&&(f=w.event.special[d]||{},d=(i?f.delegateType:f.bindType)||d,f=w.event.special[d]||{},c=w.extend({type:d,origType:g,data:r,handler:n,guid:n.guid,selector:i,needsContext:i&&w.expr.match.needsContext.test(i),namespace:h.join(".")},o),(p=u[d])||((p=u[d]=[]).delegateCount=0,f.setup&&!1!==f.setup.call(e,r,h,a)||e.addEventListener&&e.addEventListener(d,a)),f.add&&(f.add.call(e,c),c.handler.guid||(c.handler.guid=n.guid)),i?p.splice(p.delegateCount++,0,c):p.push(c),w.event.global[d]=!0)}},remove:function(e,t,n,r,i){var o,a,s,u,l,c,f,p,d,h,g,y=J.hasData(e)&&J.get(e);if(y&&(u=y.events)){l=(t=(t||"").match(M)||[""]).length;while(l--)if(s=Ce.exec(t[l])||[],d=g=s[1],h=(s[2]||"").split(".").sort(),d){f=w.event.special[d]||{},p=u[d=(r?f.delegateType:f.bindType)||d]||[],s=s[2]&&new RegExp("(^|\\.)"+h.join("\\.(?:.*\\.|)")+"(\\.|$)"),a=o=p.length;while(o--)c=p[o],!i&&g!==c.origType||n&&n.guid!==c.guid||s&&!s.test(c.namespace)||r&&r!==c.selector&&("**"!==r||!c.selector)||(p.splice(o,1),c.selector&&p.delegateCount--,f.remove&&f.remove.call(e,c));a&&!p.length&&(f.teardown&&!1!==f.teardown.call(e,h,y.handle)||w.removeEvent(e,d,y.handle),delete u[d])}else for(d in u)w.event.remove(e,d+t[l],n,r,!0);w.isEmptyObject(u)&&J.remove(e,"handle events")}},dispatch:function(e){var t=w.event.fix(e),n,r,i,o,a,s,u=new 
Array(arguments.length),l=(J.get(this,"events")||{})[t.type]||[],c=w.event.special[t.type]||{};for(u[0]=t,n=1;n=1))for(;l!==this;l=l.parentNode||this)if(1===l.nodeType&&("click"!==e.type||!0!==l.disabled)){for(o=[],a={},n=0;n-1:w.find(i,this,null,[l]).length),a[i]&&o.push(r);o.length&&s.push({elem:l,handlers:o})}return l=this,u\x20\t\r\n\f]*)[^>]*)\/>/gi,Ae=/\s*$/g;function Le(e,t){return N(e,"table")&&N(11!==t.nodeType?t:t.firstChild,"tr")?w(e).children("tbody")[0]||e:e}function He(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function Oe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Pe(e,t){var n,r,i,o,a,s,u,l;if(1===t.nodeType){if(J.hasData(e)&&(o=J.access(e),a=J.set(t,o),l=o.events)){delete a.handle,a.events={};for(i in l)for(n=0,r=l[i].length;n1&&"string"==typeof y&&!h.checkClone&&je.test(y))return e.each(function(i){var o=e.eq(i);v&&(t[0]=y.call(this,i,o.html())),Re(o,t,n,r)});if(p&&(i=xe(t,e[0].ownerDocument,!1,e,r),o=i.firstChild,1===i.childNodes.length&&(i=o),o||r)){for(u=(s=w.map(ye(i,"script"),He)).length;f")},clone:function(e,t,n){var r,i,o,a,s=e.cloneNode(!0),u=w.contains(e.ownerDocument,e);if(!(h.noCloneChecked||1!==e.nodeType&&11!==e.nodeType||w.isXMLDoc(e)))for(a=ye(s),r=0,i=(o=ye(e)).length;r0&&ve(a,!u&&ye(e,"script")),s},cleanData:function(e){for(var t,n,r,i=w.event.special,o=0;void 0!==(n=e[o]);o++)if(Y(n)){if(t=n[J.expando]){if(t.events)for(r in t.events)i[r]?w.event.remove(n,r):w.removeEvent(n,r,t.handle);n[J.expando]=void 0}n[K.expando]&&(n[K.expando]=void 0)}}}),w.fn.extend({detach:function(e){return Ie(this,e,!0)},remove:function(e){return Ie(this,e)},text:function(e){return z(this,function(e){return void 0===e?w.text(this):this.empty().each(function(){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||(this.textContent=e)})},null,e,arguments.length)},append:function(){return Re(this,arguments,function(e){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||Le(this,e).appendChild(e)})},prepend:function(){return Re(this,arguments,function(e){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var t=Le(this,e);t.insertBefore(e,t.firstChild)}})},before:function(){return Re(this,arguments,function(e){this.parentNode&&this.parentNode.insertBefore(e,this)})},after:function(){return Re(this,arguments,function(e){this.parentNode&&this.parentNode.insertBefore(e,this.nextSibling)})},empty:function(){for(var e,t=0;null!=(e=this[t]);t++)1===e.nodeType&&(w.cleanData(ye(e,!1)),e.textContent="");return this},clone:function(e,t){return e=null!=e&&e,t=null==t?e:t,this.map(function(){return w.clone(this,e,t)})},html:function(e){return z(this,function(e){var t=this[0]||{},n=0,r=this.length;if(void 0===e&&1===t.nodeType)return t.innerHTML;if("string"==typeof e&&!Ae.test(e)&&!ge[(de.exec(e)||["",""])[1].toLowerCase()]){e=w.htmlPrefilter(e);try{for(;n=0&&(u+=Math.max(0,Math.ceil(e["offset"+t[0].toUpperCase()+t.slice(1)]-o-u-s-.5))),u}function et(e,t,n){var r=$e(e),i=Fe(e,t,r),o="border-box"===w.css(e,"boxSizing",!1,r),a=o;if(We.test(i)){if(!n)return i;i="auto"}return a=a&&(h.boxSizingReliable()||i===e.style[t]),("auto"===i||!parseFloat(i)&&"inline"===w.css(e,"display",!1,r))&&(i=e["offset"+t[0].toUpperCase()+t.slice(1)],a=!0),(i=parseFloat(i)||0)+Ze(e,t,n||(o?"border":"content"),a,r,i)+"px"}w.extend({cssHooks:{opacity:{get:function(e,t){if(t){var 
n=Fe(e,"opacity");return""===n?"1":n}}}},cssNumber:{animationIterationCount:!0,columnCount:!0,fillOpacity:!0,flexGrow:!0,flexShrink:!0,fontWeight:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{},style:function(e,t,n,r){if(e&&3!==e.nodeType&&8!==e.nodeType&&e.style){var i,o,a,s=G(t),u=Xe.test(t),l=e.style;if(u||(t=Je(s)),a=w.cssHooks[t]||w.cssHooks[s],void 0===n)return a&&"get"in a&&void 0!==(i=a.get(e,!1,r))?i:l[t];"string"==(o=typeof n)&&(i=ie.exec(n))&&i[1]&&(n=ue(e,t,i),o="number"),null!=n&&n===n&&("number"===o&&(n+=i&&i[3]||(w.cssNumber[s]?"":"px")),h.clearCloneStyle||""!==n||0!==t.indexOf("background")||(l[t]="inherit"),a&&"set"in a&&void 0===(n=a.set(e,n,r))||(u?l.setProperty(t,n):l[t]=n))}},css:function(e,t,n,r){var i,o,a,s=G(t);return Xe.test(t)||(t=Je(s)),(a=w.cssHooks[t]||w.cssHooks[s])&&"get"in a&&(i=a.get(e,!0,n)),void 0===i&&(i=Fe(e,t,r)),"normal"===i&&t in Ve&&(i=Ve[t]),""===n||n?(o=parseFloat(i),!0===n||isFinite(o)?o||0:i):i}}),w.each(["height","width"],function(e,t){w.cssHooks[t]={get:function(e,n,r){if(n)return!ze.test(w.css(e,"display"))||e.getClientRects().length&&e.getBoundingClientRect().width?et(e,t,r):se(e,Ue,function(){return et(e,t,r)})},set:function(e,n,r){var i,o=$e(e),a="border-box"===w.css(e,"boxSizing",!1,o),s=r&&Ze(e,t,r,a,o);return a&&h.scrollboxSize()===o.position&&(s-=Math.ceil(e["offset"+t[0].toUpperCase()+t.slice(1)]-parseFloat(o[t])-Ze(e,t,"border",!1,o)-.5)),s&&(i=ie.exec(n))&&"px"!==(i[3]||"px")&&(e.style[t]=n,n=w.css(e,t)),Ke(e,n,s)}}}),w.cssHooks.marginLeft=_e(h.reliableMarginLeft,function(e,t){if(t)return(parseFloat(Fe(e,"marginLeft"))||e.getBoundingClientRect().left-se(e,{marginLeft:0},function(){return e.getBoundingClientRect().left}))+"px"}),w.each({margin:"",padding:"",border:"Width"},function(e,t){w.cssHooks[e+t]={expand:function(n){for(var r=0,i={},o="string"==typeof n?n.split(" "):[n];r<4;r++)i[e+oe[r]+t]=o[r]||o[r-2]||o[0];return i}},"margin"!==e&&(w.cssHooks[e+t].set=Ke)}),w.fn.extend({css:function(e,t){return z(this,function(e,t,n){var r,i,o={},a=0;if(Array.isArray(t)){for(r=$e(e),i=t.length;a1)}});function tt(e,t,n,r,i){return new tt.prototype.init(e,t,n,r,i)}w.Tween=tt,tt.prototype={constructor:tt,init:function(e,t,n,r,i,o){this.elem=e,this.prop=n,this.easing=i||w.easing._default,this.options=t,this.start=this.now=this.cur(),this.end=r,this.unit=o||(w.cssNumber[n]?"":"px")},cur:function(){var e=tt.propHooks[this.prop];return e&&e.get?e.get(this):tt.propHooks._default.get(this)},run:function(e){var t,n=tt.propHooks[this.prop];return this.options.duration?this.pos=t=w.easing[this.easing](e,this.options.duration*e,0,1,this.options.duration):this.pos=t=e,this.now=(this.end-this.start)*t+this.start,this.options.step&&this.options.step.call(this.elem,this.now,this),n&&n.set?n.set(this):tt.propHooks._default.set(this),this}},tt.prototype.init.prototype=tt.prototype,tt.propHooks={_default:{get:function(e){var t;return 1!==e.elem.nodeType||null!=e.elem[e.prop]&&null==e.elem.style[e.prop]?e.elem[e.prop]:(t=w.css(e.elem,e.prop,""))&&"auto"!==t?t:0},set:function(e){w.fx.step[e.prop]?w.fx.step[e.prop](e):1!==e.elem.nodeType||null==e.elem.style[w.cssProps[e.prop]]&&!w.cssHooks[e.prop]?e.elem[e.prop]=e.now:w.style(e.elem,e.prop,e.now+e.unit)}}},tt.propHooks.scrollTop=tt.propHooks.scrollLeft={set:function(e){e.elem.nodeType&&e.elem.parentNode&&(e.elem[e.prop]=e.now)}},w.easing={linear:function(e){return e},swing:function(e){return.5-Math.cos(e*Math.PI)/2},_default:"swing"},w.fx=tt.prototype.init,w.fx.step={};var 
nt,rt,it=/^(?:toggle|show|hide)$/,ot=/queueHooks$/;function at(){rt&&(!1===r.hidden&&e.requestAnimationFrame?e.requestAnimationFrame(at):e.setTimeout(at,w.fx.interval),w.fx.tick())}function st(){return e.setTimeout(function(){nt=void 0}),nt=Date.now()}function ut(e,t){var n,r=0,i={height:e};for(t=t?1:0;r<4;r+=2-t)i["margin"+(n=oe[r])]=i["padding"+n]=e;return t&&(i.opacity=i.width=e),i}function lt(e,t,n){for(var r,i=(pt.tweeners[t]||[]).concat(pt.tweeners["*"]),o=0,a=i.length;o1)},removeAttr:function(e){return this.each(function(){w.removeAttr(this,e)})}}),w.extend({attr:function(e,t,n){var r,i,o=e.nodeType;if(3!==o&&8!==o&&2!==o)return"undefined"==typeof e.getAttribute?w.prop(e,t,n):(1===o&&w.isXMLDoc(e)||(i=w.attrHooks[t.toLowerCase()]||(w.expr.match.bool.test(t)?dt:void 0)),void 0!==n?null===n?void w.removeAttr(e,t):i&&"set"in i&&void 0!==(r=i.set(e,n,t))?r:(e.setAttribute(t,n+""),n):i&&"get"in i&&null!==(r=i.get(e,t))?r:null==(r=w.find.attr(e,t))?void 0:r)},attrHooks:{type:{set:function(e,t){if(!h.radioValue&&"radio"===t&&N(e,"input")){var n=e.value;return e.setAttribute("type",t),n&&(e.value=n),t}}}},removeAttr:function(e,t){var n,r=0,i=t&&t.match(M);if(i&&1===e.nodeType)while(n=i[r++])e.removeAttribute(n)}}),dt={set:function(e,t,n){return!1===t?w.removeAttr(e,n):e.setAttribute(n,n),n}},w.each(w.expr.match.bool.source.match(/\w+/g),function(e,t){var n=ht[t]||w.find.attr;ht[t]=function(e,t,r){var i,o,a=t.toLowerCase();return r||(o=ht[a],ht[a]=i,i=null!=n(e,t,r)?a:null,ht[a]=o),i}});var gt=/^(?:input|select|textarea|button)$/i,yt=/^(?:a|area)$/i;w.fn.extend({prop:function(e,t){return z(this,w.prop,e,t,arguments.length>1)},removeProp:function(e){return this.each(function(){delete this[w.propFix[e]||e]})}}),w.extend({prop:function(e,t,n){var r,i,o=e.nodeType;if(3!==o&&8!==o&&2!==o)return 1===o&&w.isXMLDoc(e)||(t=w.propFix[t]||t,i=w.propHooks[t]),void 0!==n?i&&"set"in i&&void 0!==(r=i.set(e,n,t))?r:e[t]=n:i&&"get"in i&&null!==(r=i.get(e,t))?r:e[t]},propHooks:{tabIndex:{get:function(e){var t=w.find.attr(e,"tabindex");return t?parseInt(t,10):gt.test(e.nodeName)||yt.test(e.nodeName)&&e.href?0:-1}}},propFix:{"for":"htmlFor","class":"className"}}),h.optSelected||(w.propHooks.selected={get:function(e){var t=e.parentNode;return t&&t.parentNode&&t.parentNode.selectedIndex,null},set:function(e){var t=e.parentNode;t&&(t.selectedIndex,t.parentNode&&t.parentNode.selectedIndex)}}),w.each(["tabIndex","readOnly","maxLength","cellSpacing","cellPadding","rowSpan","colSpan","useMap","frameBorder","contentEditable"],function(){w.propFix[this.toLowerCase()]=this});function vt(e){return(e.match(M)||[]).join(" ")}function mt(e){return e.getAttribute&&e.getAttribute("class")||""}function xt(e){return Array.isArray(e)?e:"string"==typeof e?e.match(M)||[]:[]}w.fn.extend({addClass:function(e){var t,n,r,i,o,a,s,u=0;if(g(e))return this.each(function(t){w(this).addClass(e.call(this,t,mt(this)))});if((t=xt(e)).length)while(n=this[u++])if(i=mt(n),r=1===n.nodeType&&" "+vt(i)+" "){a=0;while(o=t[a++])r.indexOf(" "+o+" ")<0&&(r+=o+" ");i!==(s=vt(r))&&n.setAttribute("class",s)}return this},removeClass:function(e){var t,n,r,i,o,a,s,u=0;if(g(e))return this.each(function(t){w(this).removeClass(e.call(this,t,mt(this)))});if(!arguments.length)return this.attr("class","");if((t=xt(e)).length)while(n=this[u++])if(i=mt(n),r=1===n.nodeType&&" "+vt(i)+" "){a=0;while(o=t[a++])while(r.indexOf(" "+o+" ")>-1)r=r.replace(" "+o+" "," ");i!==(s=vt(r))&&n.setAttribute("class",s)}return this},toggleClass:function(e,t){var n=typeof 
e,r="string"===n||Array.isArray(e);return"boolean"==typeof t&&r?t?this.addClass(e):this.removeClass(e):g(e)?this.each(function(n){w(this).toggleClass(e.call(this,n,mt(this),t),t)}):this.each(function(){var t,i,o,a;if(r){i=0,o=w(this),a=xt(e);while(t=a[i++])o.hasClass(t)?o.removeClass(t):o.addClass(t)}else void 0!==e&&"boolean"!==n||((t=mt(this))&&J.set(this,"__className__",t),this.setAttribute&&this.setAttribute("class",t||!1===e?"":J.get(this,"__className__")||""))})},hasClass:function(e){var t,n,r=0;t=" "+e+" ";while(n=this[r++])if(1===n.nodeType&&(" "+vt(mt(n))+" ").indexOf(t)>-1)return!0;return!1}});var bt=/\r/g;w.fn.extend({val:function(e){var t,n,r,i=this[0];{if(arguments.length)return r=g(e),this.each(function(n){var i;1===this.nodeType&&(null==(i=r?e.call(this,n,w(this).val()):e)?i="":"number"==typeof i?i+="":Array.isArray(i)&&(i=w.map(i,function(e){return null==e?"":e+""})),(t=w.valHooks[this.type]||w.valHooks[this.nodeName.toLowerCase()])&&"set"in t&&void 0!==t.set(this,i,"value")||(this.value=i))});if(i)return(t=w.valHooks[i.type]||w.valHooks[i.nodeName.toLowerCase()])&&"get"in t&&void 0!==(n=t.get(i,"value"))?n:"string"==typeof(n=i.value)?n.replace(bt,""):null==n?"":n}}}),w.extend({valHooks:{option:{get:function(e){var t=w.find.attr(e,"value");return null!=t?t:vt(w.text(e))}},select:{get:function(e){var t,n,r,i=e.options,o=e.selectedIndex,a="select-one"===e.type,s=a?null:[],u=a?o+1:i.length;for(r=o<0?u:a?o:0;r-1)&&(n=!0);return n||(e.selectedIndex=-1),o}}}}),w.each(["radio","checkbox"],function(){w.valHooks[this]={set:function(e,t){if(Array.isArray(t))return e.checked=w.inArray(w(e).val(),t)>-1}},h.checkOn||(w.valHooks[this].get=function(e){return null===e.getAttribute("value")?"on":e.value})}),h.focusin="onfocusin"in e;var wt=/^(?:focusinfocus|focusoutblur)$/,Tt=function(e){e.stopPropagation()};w.extend(w.event,{trigger:function(t,n,i,o){var a,s,u,l,c,p,d,h,v=[i||r],m=f.call(t,"type")?t.type:t,x=f.call(t,"namespace")?t.namespace.split("."):[];if(s=h=u=i=i||r,3!==i.nodeType&&8!==i.nodeType&&!wt.test(m+w.event.triggered)&&(m.indexOf(".")>-1&&(m=(x=m.split(".")).shift(),x.sort()),c=m.indexOf(":")<0&&"on"+m,t=t[w.expando]?t:new w.Event(m,"object"==typeof t&&t),t.isTrigger=o?2:3,t.namespace=x.join("."),t.rnamespace=t.namespace?new RegExp("(^|\\.)"+x.join("\\.(?:.*\\.|)")+"(\\.|$)"):null,t.result=void 0,t.target||(t.target=i),n=null==n?[t]:w.makeArray(n,[t]),d=w.event.special[m]||{},o||!d.trigger||!1!==d.trigger.apply(i,n))){if(!o&&!d.noBubble&&!y(i)){for(l=d.delegateType||m,wt.test(l+m)||(s=s.parentNode);s;s=s.parentNode)v.push(s),u=s;u===(i.ownerDocument||r)&&v.push(u.defaultView||u.parentWindow||e)}a=0;while((s=v[a++])&&!t.isPropagationStopped())h=s,t.type=a>1?l:d.bindType||m,(p=(J.get(s,"events")||{})[t.type]&&J.get(s,"handle"))&&p.apply(s,n),(p=c&&s[c])&&p.apply&&Y(s)&&(t.result=p.apply(s,n),!1===t.result&&t.preventDefault());return t.type=m,o||t.isDefaultPrevented()||d._default&&!1!==d._default.apply(v.pop(),n)||!Y(i)||c&&g(i[m])&&!y(i)&&((u=i[c])&&(i[c]=null),w.event.triggered=m,t.isPropagationStopped()&&h.addEventListener(m,Tt),i[m](),t.isPropagationStopped()&&h.removeEventListener(m,Tt),w.event.triggered=void 0,u&&(i[c]=u)),t.result}},simulate:function(e,t,n){var r=w.extend(new w.Event,n,{type:e,isSimulated:!0});w.event.trigger(r,null,t)}}),w.fn.extend({trigger:function(e,t){return this.each(function(){w.event.trigger(e,t,this)})},triggerHandler:function(e,t){var n=this[0];if(n)return 
w.event.trigger(e,t,n,!0)}}),h.focusin||w.each({focus:"focusin",blur:"focusout"},function(e,t){var n=function(e){w.event.simulate(t,e.target,w.event.fix(e))};w.event.special[t]={setup:function(){var r=this.ownerDocument||this,i=J.access(r,t);i||r.addEventListener(e,n,!0),J.access(r,t,(i||0)+1)},teardown:function(){var r=this.ownerDocument||this,i=J.access(r,t)-1;i?J.access(r,t,i):(r.removeEventListener(e,n,!0),J.remove(r,t))}}});var Ct=e.location,Et=Date.now(),kt=/\?/;w.parseXML=function(t){var n;if(!t||"string"!=typeof t)return null;try{n=(new e.DOMParser).parseFromString(t,"text/xml")}catch(e){n=void 0}return n&&!n.getElementsByTagName("parsererror").length||w.error("Invalid XML: "+t),n};var St=/\[\]$/,Dt=/\r?\n/g,Nt=/^(?:submit|button|image|reset|file)$/i,At=/^(?:input|select|textarea|keygen)/i;function jt(e,t,n,r){var i;if(Array.isArray(t))w.each(t,function(t,i){n||St.test(e)?r(e,i):jt(e+"["+("object"==typeof i&&null!=i?t:"")+"]",i,n,r)});else if(n||"object"!==x(t))r(e,t);else for(i in t)jt(e+"["+i+"]",t[i],n,r)}w.param=function(e,t){var n,r=[],i=function(e,t){var n=g(t)?t():t;r[r.length]=encodeURIComponent(e)+"="+encodeURIComponent(null==n?"":n)};if(Array.isArray(e)||e.jquery&&!w.isPlainObject(e))w.each(e,function(){i(this.name,this.value)});else for(n in e)jt(n,e[n],t,i);return r.join("&")},w.fn.extend({serialize:function(){return w.param(this.serializeArray())},serializeArray:function(){return this.map(function(){var e=w.prop(this,"elements");return e?w.makeArray(e):this}).filter(function(){var e=this.type;return this.name&&!w(this).is(":disabled")&&At.test(this.nodeName)&&!Nt.test(e)&&(this.checked||!pe.test(e))}).map(function(e,t){var n=w(this).val();return null==n?null:Array.isArray(n)?w.map(n,function(e){return{name:t.name,value:e.replace(Dt,"\r\n")}}):{name:t.name,value:n.replace(Dt,"\r\n")}}).get()}});var qt=/%20/g,Lt=/#.*$/,Ht=/([?&])_=[^&]*/,Ot=/^(.*?):[ \t]*([^\r\n]*)$/gm,Pt=/^(?:about|app|app-storage|.+-extension|file|res|widget):$/,Mt=/^(?:GET|HEAD)$/,Rt=/^\/\//,It={},Wt={},$t="*/".concat("*"),Bt=r.createElement("a");Bt.href=Ct.href;function Ft(e){return function(t,n){"string"!=typeof t&&(n=t,t="*");var r,i=0,o=t.toLowerCase().match(M)||[];if(g(n))while(r=o[i++])"+"===r[0]?(r=r.slice(1)||"*",(e[r]=e[r]||[]).unshift(n)):(e[r]=e[r]||[]).push(n)}}function _t(e,t,n,r){var i={},o=e===Wt;function a(s){var u;return i[s]=!0,w.each(e[s]||[],function(e,s){var l=s(t,n,r);return"string"!=typeof l||o||i[l]?o?!(u=l):void 0:(t.dataTypes.unshift(l),a(l),!1)}),u}return a(t.dataTypes[0])||!i["*"]&&a("*")}function zt(e,t){var n,r,i=w.ajaxSettings.flatOptions||{};for(n in t)void 0!==t[n]&&((i[n]?e:r||(r={}))[n]=t[n]);return r&&w.extend(!0,e,r),e}function Xt(e,t,n){var r,i,o,a,s=e.contents,u=e.dataTypes;while("*"===u[0])u.shift(),void 0===r&&(r=e.mimeType||t.getResponseHeader("Content-Type"));if(r)for(i in s)if(s[i]&&s[i].test(r)){u.unshift(i);break}if(u[0]in n)o=u[0];else{for(i in n){if(!u[0]||e.converters[i+" "+u[0]]){o=i;break}a||(a=i)}o=o||a}if(o)return o!==u[0]&&u.unshift(o),n[o]}function Ut(e,t,n,r){var i,o,a,s,u,l={},c=e.dataTypes.slice();if(c[1])for(a in e.converters)l[a.toLowerCase()]=e.converters[a];o=c.shift();while(o)if(e.responseFields[o]&&(n[e.responseFields[o]]=t),!u&&r&&e.dataFilter&&(t=e.dataFilter(t,e.dataType)),u=o,o=c.shift())if("*"===o)o=u;else if("*"!==u&&u!==o){if(!(a=l[u+" "+o]||l["* "+o]))for(i in l)if((s=i.split(" "))[1]===o&&(a=l[u+" "+s[0]]||l["* "+s[0]])){!0===a?a=l[i]:!0!==l[i]&&(o=s[0],c.unshift(s[1]));break}if(!0!==a)if(a&&e["throws"])t=a(t);else 
try{t=a(t)}catch(e){return{state:"parsererror",error:a?e:"No conversion from "+u+" to "+o}}}return{state:"success",data:t}}w.extend({active:0,lastModified:{},etag:{},ajaxSettings:{url:Ct.href,type:"GET",isLocal:Pt.test(Ct.protocol),global:!0,processData:!0,async:!0,contentType:"application/x-www-form-urlencoded; charset=UTF-8",accepts:{"*":$t,text:"text/plain",html:"text/html",xml:"application/xml, text/xml",json:"application/json, text/javascript"},contents:{xml:/\bxml\b/,html:/\bhtml/,json:/\bjson\b/},responseFields:{xml:"responseXML",text:"responseText",json:"responseJSON"},converters:{"* text":String,"text html":!0,"text json":JSON.parse,"text xml":w.parseXML},flatOptions:{url:!0,context:!0}},ajaxSetup:function(e,t){return t?zt(zt(e,w.ajaxSettings),t):zt(w.ajaxSettings,e)},ajaxPrefilter:Ft(It),ajaxTransport:Ft(Wt),ajax:function(t,n){"object"==typeof t&&(n=t,t=void 0),n=n||{};var i,o,a,s,u,l,c,f,p,d,h=w.ajaxSetup({},n),g=h.context||h,y=h.context&&(g.nodeType||g.jquery)?w(g):w.event,v=w.Deferred(),m=w.Callbacks("once memory"),x=h.statusCode||{},b={},T={},C="canceled",E={readyState:0,getResponseHeader:function(e){var t;if(c){if(!s){s={};while(t=Ot.exec(a))s[t[1].toLowerCase()]=t[2]}t=s[e.toLowerCase()]}return null==t?null:t},getAllResponseHeaders:function(){return c?a:null},setRequestHeader:function(e,t){return null==c&&(e=T[e.toLowerCase()]=T[e.toLowerCase()]||e,b[e]=t),this},overrideMimeType:function(e){return null==c&&(h.mimeType=e),this},statusCode:function(e){var t;if(e)if(c)E.always(e[E.status]);else for(t in e)x[t]=[x[t],e[t]];return this},abort:function(e){var t=e||C;return i&&i.abort(t),k(0,t),this}};if(v.promise(E),h.url=((t||h.url||Ct.href)+"").replace(Rt,Ct.protocol+"//"),h.type=n.method||n.type||h.method||h.type,h.dataTypes=(h.dataType||"*").toLowerCase().match(M)||[""],null==h.crossDomain){l=r.createElement("a");try{l.href=h.url,l.href=l.href,h.crossDomain=Bt.protocol+"//"+Bt.host!=l.protocol+"//"+l.host}catch(e){h.crossDomain=!0}}if(h.data&&h.processData&&"string"!=typeof h.data&&(h.data=w.param(h.data,h.traditional)),_t(It,h,n,E),c)return E;(f=w.event&&h.global)&&0==w.active++&&w.event.trigger("ajaxStart"),h.type=h.type.toUpperCase(),h.hasContent=!Mt.test(h.type),o=h.url.replace(Lt,""),h.hasContent?h.data&&h.processData&&0===(h.contentType||"").indexOf("application/x-www-form-urlencoded")&&(h.data=h.data.replace(qt,"+")):(d=h.url.slice(o.length),h.data&&(h.processData||"string"==typeof h.data)&&(o+=(kt.test(o)?"&":"?")+h.data,delete h.data),!1===h.cache&&(o=o.replace(Ht,"$1"),d=(kt.test(o)?"&":"?")+"_="+Et+++d),h.url=o+d),h.ifModified&&(w.lastModified[o]&&E.setRequestHeader("If-Modified-Since",w.lastModified[o]),w.etag[o]&&E.setRequestHeader("If-None-Match",w.etag[o])),(h.data&&h.hasContent&&!1!==h.contentType||n.contentType)&&E.setRequestHeader("Content-Type",h.contentType),E.setRequestHeader("Accept",h.dataTypes[0]&&h.accepts[h.dataTypes[0]]?h.accepts[h.dataTypes[0]]+("*"!==h.dataTypes[0]?", "+$t+"; q=0.01":""):h.accepts["*"]);for(p in h.headers)E.setRequestHeader(p,h.headers[p]);if(h.beforeSend&&(!1===h.beforeSend.call(g,E,h)||c))return E.abort();if(C="abort",m.add(h.complete),E.done(h.success),E.fail(h.error),i=_t(Wt,h,n,E)){if(E.readyState=1,f&&y.trigger("ajaxSend",[E,h]),c)return E;h.async&&h.timeout>0&&(u=e.setTimeout(function(){E.abort("timeout")},h.timeout));try{c=!1,i.send(b,k)}catch(e){if(c)throw e;k(-1,e)}}else k(-1,"No Transport");function k(t,n,r,s){var l,p,d,b,T,C=n;c||(c=!0,u&&e.clearTimeout(u),i=void 
0,a=s||"",E.readyState=t>0?4:0,l=t>=200&&t<300||304===t,r&&(b=Xt(h,E,r)),b=Ut(h,b,E,l),l?(h.ifModified&&((T=E.getResponseHeader("Last-Modified"))&&(w.lastModified[o]=T),(T=E.getResponseHeader("etag"))&&(w.etag[o]=T)),204===t||"HEAD"===h.type?C="nocontent":304===t?C="notmodified":(C=b.state,p=b.data,l=!(d=b.error))):(d=C,!t&&C||(C="error",t<0&&(t=0))),E.status=t,E.statusText=(n||C)+"",l?v.resolveWith(g,[p,C,E]):v.rejectWith(g,[E,C,d]),E.statusCode(x),x=void 0,f&&y.trigger(l?"ajaxSuccess":"ajaxError",[E,h,l?p:d]),m.fireWith(g,[E,C]),f&&(y.trigger("ajaxComplete",[E,h]),--w.active||w.event.trigger("ajaxStop")))}return E},getJSON:function(e,t,n){return w.get(e,t,n,"json")},getScript:function(e,t){return w.get(e,void 0,t,"script")}}),w.each(["get","post"],function(e,t){w[t]=function(e,n,r,i){return g(n)&&(i=i||r,r=n,n=void 0),w.ajax(w.extend({url:e,type:t,dataType:i,data:n,success:r},w.isPlainObject(e)&&e))}}),w._evalUrl=function(e){return w.ajax({url:e,type:"GET",dataType:"script",cache:!0,async:!1,global:!1,"throws":!0})},w.fn.extend({wrapAll:function(e){var t;return this[0]&&(g(e)&&(e=e.call(this[0])),t=w(e,this[0].ownerDocument).eq(0).clone(!0),this[0].parentNode&&t.insertBefore(this[0]),t.map(function(){var e=this;while(e.firstElementChild)e=e.firstElementChild;return e}).append(this)),this},wrapInner:function(e){return g(e)?this.each(function(t){w(this).wrapInner(e.call(this,t))}):this.each(function(){var t=w(this),n=t.contents();n.length?n.wrapAll(e):t.append(e)})},wrap:function(e){var t=g(e);return this.each(function(n){w(this).wrapAll(t?e.call(this,n):e)})},unwrap:function(e){return this.parent(e).not("body").each(function(){w(this).replaceWith(this.childNodes)}),this}}),w.expr.pseudos.hidden=function(e){return!w.expr.pseudos.visible(e)},w.expr.pseudos.visible=function(e){return!!(e.offsetWidth||e.offsetHeight||e.getClientRects().length)},w.ajaxSettings.xhr=function(){try{return new e.XMLHttpRequest}catch(e){}};var Vt={0:200,1223:204},Gt=w.ajaxSettings.xhr();h.cors=!!Gt&&"withCredentials"in Gt,h.ajax=Gt=!!Gt,w.ajaxTransport(function(t){var n,r;if(h.cors||Gt&&!t.crossDomain)return{send:function(i,o){var a,s=t.xhr();if(s.open(t.type,t.url,t.async,t.username,t.password),t.xhrFields)for(a in t.xhrFields)s[a]=t.xhrFields[a];t.mimeType&&s.overrideMimeType&&s.overrideMimeType(t.mimeType),t.crossDomain||i["X-Requested-With"]||(i["X-Requested-With"]="XMLHttpRequest");for(a in i)s.setRequestHeader(a,i[a]);n=function(e){return function(){n&&(n=r=s.onload=s.onerror=s.onabort=s.ontimeout=s.onreadystatechange=null,"abort"===e?s.abort():"error"===e?"number"!=typeof s.status?o(0,"error"):o(s.status,s.statusText):o(Vt[s.status]||s.status,s.statusText,"text"!==(s.responseType||"text")||"string"!=typeof s.responseText?{binary:s.response}:{text:s.responseText},s.getAllResponseHeaders()))}},s.onload=n(),r=s.onerror=s.ontimeout=n("error"),void 0!==s.onabort?s.onabort=r:s.onreadystatechange=function(){4===s.readyState&&e.setTimeout(function(){n&&r()})},n=n("abort");try{s.send(t.hasContent&&t.data||null)}catch(e){if(n)throw e}},abort:function(){n&&n()}}}),w.ajaxPrefilter(function(e){e.crossDomain&&(e.contents.script=!1)}),w.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/\b(?:java|ecma)script\b/},converters:{"text script":function(e){return w.globalEval(e),e}}}),w.ajaxPrefilter("script",function(e){void 0===e.cache&&(e.cache=!1),e.crossDomain&&(e.type="GET")}),w.ajaxTransport("script",function(e){if(e.crossDomain){var 
t,n;return{send:function(i,o){t=w(" + ' + ))), + + # Application title + titlePanel("PEcAn DB Sync"), + + # Map with sites + leaflet::leafletOutput("map"), + + # data table + DT::dataTableOutput("table"), + + # Refresh button + actionButton("refresh_servers", "Update Servers"), + actionButton("refresh_sync", "Update Sync"), + + # footer + hr(), + div(HTML('This site or product includes IP2Location LITE data available from https://lite.ip2location.com.')) +) + +# Define server logic required to draw map +server <- function(input, output, session) { + + # servers is what is changed, start with just data from database + values <- reactiveValues(servers=get_servers()) + + # update server list (quick) + observeEvent(input$refresh_servers, { + session$sendCustomMessage("disableUI", "") + values$servers <- get_servers() + session$sendCustomMessage("enableUI", "") + }) + + # update sync list (slow) + observeEvent(input$refresh_sync, { + session$sendCustomMessage("disableUI", "") + progress <- Progress$new(session, min=0, max=2*nrow(values$servers)) + values$servers <- check_servers(values$servers, progress) + progress$close() + session$sendCustomMessage("enableUI", "") + }) + + # create a map of all servers that have a sync_host_id and sync_url + output$map <- renderLeaflet({ + leaflet(values$servers) %>% + addProviderTiles(providers$Stamen.TonerLite, + options = providerTileOptions(noWrap = TRUE) + ) %>% + addMarkers(~long, ~lat, + label = ~htmltools::htmlEscape(hostname), + clusterOptions = markerClusterOptions(maxClusterRadius = 1)) %>% + addPolylines(lng = ~check_sync(values$servers, "long"), + lat = ~check_sync(values$servers, "lat"), + color = ~check_sync(values$servers, "color")) + }) + + # create a table of all servers that have a sync_host_id and sync_url + output$table <- DT::renderDataTable({ + DT::datatable(values$servers %>% dplyr::select("sync_host_id", "hostname", "city", "lastdump", "version")) + }) +} + +# Run the application +shinyApp(ui = ui, server = server) From 6cebe207d9a58e46b8695c99ec0d248d5d7823a3 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 13 Apr 2020 17:43:52 -0500 Subject: [PATCH 0885/2289] build docker container and launch --- .gitignore | 6 ++++++ docker-compose.yml | 17 +++++++++++++++++ docker.sh | 8 ++++++++ 3 files changed, 31 insertions(+) diff --git a/.gitignore b/.gitignore index 2942f835440..badc2505dbf 100644 --- a/.gitignore +++ b/.gitignore @@ -96,3 +96,9 @@ docker-compose.override.yml # dont check in modellauncher binaries contrib/modellauncher/modellauncher + +# don't checkin renv +/renv/ + +# ignore IP mapping to lat/lon (is about 65MB) +shiny/dbsync/IP2LOCATION-LITE-DB5.BIN diff --git a/docker-compose.yml b/docker-compose.yml index bdece361fcd..af3f77f9689 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -297,6 +297,23 @@ services: volumes: - pecan:/data +# ---------------------------------------------------------------------- +# Shiny Apps +# ---------------------------------------------------------------------- + # PEcAn DB Sync visualization + maespa: + image: pecan/shiny-dbsync:${PECAN_VERSION:-latest} + restart: unless-stopped + networks: + - pecan + depends_on: + - postgres + labels: + - "traefik.enable=true" + - "traefik.backend=dbsync" + - "traefik.port=3838" + - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip:/dbsync/" + # ---------------------------------------------------------------------- # Name of network to be used by all containers # 
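[Editor's note on the patch above: the new docker-compose service is keyed `maespa`, even though its image and every traefik label say `dbsync` — this reads like a copy-and-paste slip from the MAESPA model service, and renaming the key to `dbsync` would make the stanza self-consistent. A minimal launch sketch, assuming a checkout at the repository root and the default `latest` image tag:

    # Build the images; docker.sh now also produces pecan/shiny-dbsync.
    ./docker.sh

    # Start the database and the sync app. The service name here follows the
    # patch as written ("maespa"), although the app it runs is the DB Sync viewer.
    docker-compose up -d postgres
    docker-compose up -d maespa

    # The traefik labels route the app under /dbsync/ on container port 3838.
]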
From 0e9daf7bb7ca6300c2550c1a7f6960a9341c0b0e Mon Sep 17 00:00:00 2001
From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com>
Date: Tue, 14 Apr 2020 13:59:22 +0530
Subject: [PATCH 0886/2289] Create styler-actions.yml

---
 .github/workflows/styler-actions.yml | 41 ++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)
 create mode 100644 .github/workflows/styler-actions.yml

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
new file mode 100644
index 00000000000..50a3adb2983
--- /dev/null
+++ b/.github/workflows/styler-actions.yml
@@ -0,0 +1,41 @@
+on:
+  pull_request:
+    branches: master
+name: Commands
+jobs:
+  style:
+    name: style
+    runs-on: macOS-latest
+    steps:
+      - id: file_changes
+        uses: trilom/file-changes-action@v1.2.3
+      - name: testing
+        run: echo '${{ steps.file_changes.outputs.files_modified}}'
+      - uses: actions/checkout@v2
+      - uses: r-lib/actions/pr-fetch@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: r-lib/actions/setup-r@master
+      - name: Install dependencies
+        run: Rscript -e 'install.packages("styler")'
+      - name: string operations
+        run: |
+          echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt
+          cat names.txt | tr -d '[]' > new.txt
+          text=$(cat new.txt)
+          IFS=',' read -ra ids <<< "$text"
+          for i in "${ids[@]}"; do if [[ "$i" == *.R || "$i" == *.Rmd ]]; then echo "$i" >> new2.txt; fi; done
+      - name: Style
+        run: for i in $(cat new2.txt); do Rscript -e "styler::style_file("$i")"; done
+      - name: commit
+        run: |
+          git add \*.R
+          git commit -m 'Style'
+      - uses: r-lib/actions/pr-push@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+  # A mock job just to ensure we have a successful build status
+  finish:
+    runs-on: ubuntu-latest
+    steps:
+      - run: true

From b700de87bd0c5fcbfcb778d263111154e4f07847 Mon Sep 17 00:00:00 2001
From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com>
Date: Tue, 14 Apr 2020 14:01:18 +0530
Subject: [PATCH 0887/2289] Update styler-actions.yml

---
 .github/workflows/styler-actions.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index 50a3adb2983..4f4949d076b 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -1,6 +1,6 @@
 on:
   pull_request:
-    branches: master
+    branches: styler-workflow
 name: Commands
 jobs:
   style:

From 5b149af5335698a7b440bd016bd18c0a87236b66 Mon Sep 17 00:00:00 2001
From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com>
Date: Tue, 14 Apr 2020 14:13:44 +0530
Subject: [PATCH 0888/2289] Update styler-actions.yml

---
 .github/workflows/styler-actions.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index 4f4949d076b..83c8ced257b 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -24,7 +24,7 @@ jobs:
           cat names.txt | tr -d '[]' > new.txt
           text=$(cat new.txt)
           IFS=',' read -ra ids <<< "$text"
-          for i in "${ids[@]}"; do if [[ "$i" == *.R || "$i" == *.Rmd ]]; then echo "$i" >> new2.txt; fi; done
+          for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> new2.txt; fi; done
       - name: Style
         run: for i in $(cat new2.txt); do Rscript -e "styler::style_file("$i")"; done
       - name: commit
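[Editor's note on the quoting these patches are wrestling with: `files_modified` is a JSON array, and `tr -d '[]'` leaves each entry wrapped in its literal double quotes. That is why PATCH 0888 matches `*.R\"` rather than `*.R`, and why the nested quotes in `Rscript -e "styler::style_file("$i")"` happen to parse — the shell concatenates the fragments and the surviving JSON quotes end up delimiting the R string. A sturdier variant (a hypothetical sketch, with `FILES` standing in for the action output) strips the quotes once and quotes once on the R side:

    FILES='["modules/foo.R","docs/bar.Rmd"]'   # stand-in for steps.file_changes.outputs.files_modified
    echo "${FILES}" | tr -d '[]"' | tr ',' '\n' | grep -E '\.(R|Rmd)$' > new2.txt
    while read -r f; do
      Rscript -e "styler::style_file('${f}')"  # single quotes in R, double quotes for the shell
    done < new2.txt
]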
From f4a04ee62b3e4145d2c3048ddf50a371eb538f32 Mon Sep 17 00:00:00 2001
From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com>
Date: Tue, 14 Apr 2020 14:21:52 +0530
Subject: [PATCH 0889/2289] Update styler-actions.yml

---
 .github/workflows/styler-actions.yml | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index 83c8ced257b..e414b092939 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -1,9 +1,10 @@
-on:
-  pull_request:
-    branches: styler-workflow
+on:
+  issue_comment:
+    types: [created]
 name: Commands
 jobs:
   style:
+    if: startsWith(github.event.comment.body, '/style')
     name: style
     runs-on: macOS-latest
     steps:
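[Editor's note ahead of the next patch: in bookdown, each chapter file opens at `#`, so its top-level sections belong at `##`; the `###`/`####` headings corrected below had been nesting those sections one level too deep in the rendered book. A blunt way to spot and fix such files (a sketch assuming GNU sed; the output needs review before committing, since some deeper headings are intentional):

    # List chapter files whose sections start at level 3:
    grep -rln '^### ' book_source/03_topical_pages --include='*.Rmd'

    # Raise every heading of level 3 or deeper by one level in a single pass,
    # so "### X" becomes "## X" and "#### Y" becomes "### Y" (one "#" dropped):
    sed -E -i 's/^#(##+ )/\1/' book_source/03_topical_pages/05_models/biocro.Rmd
]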
this: In the following sections, we step through each of these sections in detail. -### Core configuration {#xml-core-config} +## Core configuration {#xml-core-config} -#### Top-level structure {#xml-structure} +### Top-level structure {#xml-structure} The first line of the XML file should contain version and encoding information. @@ -123,7 +123,7 @@ The rest of the XML file should be surrounded by `...` tags. ``` -#### `info`: Run metadata {#xml-info} +### `info`: Run metadata {#xml-info} This section contains run metadata. This information is not essential to a successful model run, but is useful for tracking run provenance. @@ -132,7 +132,7 @@ This information is not essential to a successful model run, but is useful for t Example run -1 - guestuser + guestuser 2018/09/18 19:12:28 +0000 ``` @@ -507,7 +507,7 @@ The `modellauncher` has 2 arguements: * `binary` : [required] The full path to the binary modellauncher. Source code for this file can be found in `pecan/contrib/modellauncher`](https://github.com/PecanProject/pecan/tree/develop/contrib/modellauncher). * `qsub.extra` : [optional] Additional flags to pass to qsub besides those specified in the `qsub` tag in host. This option can be used to specify that the MPI environment needs to be used and the number of nodes that should be used. -### Advanced features {#xml-advanced} +## Advanced features {#xml-advanced} ### `ensemble`: Ensemble Runs {#xml-ensemble} diff --git a/book_source/03_topical_pages/04_R_workflow.Rmd b/book_source/03_topical_pages/04_R_workflow.Rmd index b29c575fe10..8b2f898226a 100644 --- a/book_source/03_topical_pages/04_R_workflow.Rmd +++ b/book_source/03_topical_pages/04_R_workflow.Rmd @@ -8,23 +8,23 @@
-### Read Settings {#workflow-readsettings} +## Read Settings {#workflow-readsettings} (TODO: Under construction...) -### Input Conversions {#workflow-input} +## Input Conversions {#workflow-input} -### Input Data {#workflow-input-data} +## Input Data {#workflow-input-data} Models require input data as drivers, parameters, and boundary conditions. In order to make a variety of data sources that have unique formats compatible with models, conversion scripts are written to convert them into a PEcAn standard format. That format is a netcdf file with variables names and specified to our standard variable table. Within the PEcAn repository, code pertaining to input conversion is in the MODULES directory under the data.atmosphere and data.land directories. -### Initial Conditions {#workflow-input-initial} +## Initial Conditions {#workflow-input-initial} (TODO: Under construction) -### Meteorological Data {#workflow-met} +## Meteorological Data {#workflow-met} To convert meterological data into the PEcAn Standard and then into model formats we follow four main steps: @@ -57,7 +57,7 @@ The main script that handles Met Processing, is [`met.process`](https://github.c - Example Code to [convert Standard into Sipnet format](https://github.com/PecanProject/pecan/blob/develop/models/sipnet/R/met2model.SIPNET.R) -#### Downloading Raw data (Description of Process) {#workflow-met-download} +### Downloading Raw data (Description of Process) {#workflow-met-download} Given the information passed from the pecan.xml met.process will call the `download.raw.met.module` to facilitate the execution of the necessary functions to download raw data. @@ -75,26 +75,26 @@ The main script that handles Met Processing, is [`met.process`](https://github.c ### Converting from PEcAn standard to model-specific format {#workflow-met-model} -### Traits {#workflow-traits} +## Traits {#workflow-traits} (TODO: Under construction) -### Meta Analysis {#workflow-metaanalysis} +## Meta Analysis {#workflow-metaanalysis} (TODO: Under construction) -### Model Configuration {#workflow-modelconfig} +## Model Configuration {#workflow-modelconfig} (TODO: Under construction) -### Run Execution {#workflow-modelrun} +## Run Execution {#workflow-modelrun} (TODO: Under construction) -### Post Run Analysis {#workflow-postrun} +## Post Run Analysis {#workflow-postrun} (TODO: Under construction) -### Advanced Analysis {#workflow-advanced} +## Advanced Analysis {#workflow-advanced} (TODO: Under construction) diff --git a/book_source/03_topical_pages/05_models/biocro.Rmd b/book_source/03_topical_pages/05_models/biocro.Rmd index 77524efe75e..cfb8f11e5a7 100644 --- a/book_source/03_topical_pages/05_models/biocro.Rmd +++ b/book_source/03_topical_pages/05_models/biocro.Rmd @@ -1,4 +1,4 @@ -### BioCro {#models-biocro} +## BioCro {#models-biocro} | Model Information | | | -- | -- | diff --git a/book_source/03_topical_pages/05_models/clm.Rmd b/book_source/03_topical_pages/05_models/clm.Rmd index a11cec95911..0a077a61a03 100644 --- a/book_source/03_topical_pages/05_models/clm.Rmd +++ b/book_source/03_topical_pages/05_models/clm.Rmd @@ -1,4 +1,4 @@ -### CLM {#models-clm} +## CLM {#models-clm} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/dalec.Rmd b/book_source/03_topical_pages/05_models/dalec.Rmd index d870a6cdaf8..754acc0a1cf 100644 --- a/book_source/03_topical_pages/05_models/dalec.Rmd +++ b/book_source/03_topical_pages/05_models/dalec.Rmd @@ -1,4 +1,4 @@ -### DALEC {#models-dalec} +## DALEC {#models-dalec} | Model 
Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/ed.Rmd b/book_source/03_topical_pages/05_models/ed.Rmd index f5c2e8563c4..eaa9c46ca21 100644 --- a/book_source/03_topical_pages/05_models/ed.Rmd +++ b/book_source/03_topical_pages/05_models/ed.Rmd @@ -1,4 +1,4 @@ -### ED2 {#models-ed} +## ED2 {#models-ed} | Model Information | | | -- | -- | @@ -8,11 +8,11 @@ | Authors | Paul Moorcroft, ... | | PEcAn Integration | Michael Dietze, Rob Kooper | -#### Introduction +### Introduction Introduction about ED model -#### PEcAn configuration file additions +### PEcAn configuration file additions The following sections of the PEcAn XML are relevant to the ED model: @@ -70,7 +70,7 @@ The following sections of the PEcAn XML are relevant to the ED model: - `veg`: [required] location of vegetation data - `soil`: [required] location of soil data -#### PFT configuration in ED2 {#models-ed-pft-configuration} +### PFT configuration in ED2 {#models-ed-pft-configuration} ED2 has more detailed PFTs than many models, and a more complex system for configuring these PFTs. ED2 has 17 PFTs, based roughly on growth form (e.g. tree vs. grass), biome (tropical vs. temperate), leaf morphology (broad vs. needleleaf), leaf phenology (evergreen vs. deciduous), and successional status (e.g. early, mid, or late). @@ -160,11 +160,11 @@ However, if you would like ED2 to run with all 17 PFTs (NOTE: using ED2's intern [convert-samples-ed]: https://pecanproject.github.io/models/ed/docs/reference/convert.samples.ED.html -#### Model specific input files +### Model specific input files List of inputs required by model, such as met, etc. -#### Model configuration files +### Model configuration files ED2 is configured using 2 files which are placed in the run folder. @@ -209,7 +209,7 @@ The ED2IN template can contain the following variables. These will be replaced w * **@OUTDIR@** : location where output files are written (**without the runid**), from \\\, should not be used. * **@SCRATCH@** : local scratch space for outputs, generated /scratch/\/run\$scratch, should not be used right now since it only works on ebi-cluster -#### Installation notes +### Installation notes This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well. 
diff --git a/book_source/03_topical_pages/05_models/gday.Rmd b/book_source/03_topical_pages/05_models/gday.Rmd index bc24611cb60..90595573206 100644 --- a/book_source/03_topical_pages/05_models/gday.Rmd +++ b/book_source/03_topical_pages/05_models/gday.Rmd @@ -1,4 +1,4 @@ -### GDAY {#models-gday} +## GDAY {#models-gday} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/linkages.Rmd b/book_source/03_topical_pages/05_models/linkages.Rmd index 8f30cc763a6..3754a6d47af 100644 --- a/book_source/03_topical_pages/05_models/linkages.Rmd +++ b/book_source/03_topical_pages/05_models/linkages.Rmd @@ -1,4 +1,4 @@ -### LINKAGES {#models-linkages} +## LINKAGES {#models-linkages} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/lpj-guess.Rmd b/book_source/03_topical_pages/05_models/lpj-guess.Rmd index 5fe185369f5..a5c46d414c7 100644 --- a/book_source/03_topical_pages/05_models/lpj-guess.Rmd +++ b/book_source/03_topical_pages/05_models/lpj-guess.Rmd @@ -1,4 +1,4 @@ -### LPJ-GUESS {#models-lpjguess} +## LPJ-GUESS {#models-lpjguess} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/maespa.Rmd b/book_source/03_topical_pages/05_models/maespa.Rmd index 7841c39165e..7ebca7f67eb 100644 --- a/book_source/03_topical_pages/05_models/maespa.Rmd +++ b/book_source/03_topical_pages/05_models/maespa.Rmd @@ -1,4 +1,4 @@ -### MAESPA {#models-maespa} +## MAESPA {#models-maespa} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/preles.Rmd b/book_source/03_topical_pages/05_models/preles.Rmd index 8799b030aba..cba82a13da4 100644 --- a/book_source/03_topical_pages/05_models/preles.Rmd +++ b/book_source/03_topical_pages/05_models/preles.Rmd @@ -1,4 +1,4 @@ -### PRELES {#models-preles} +## PRELES {#models-preles} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/sipnet.Rmd b/book_source/03_topical_pages/05_models/sipnet.Rmd index 02df0037dea..bbcef49aeaf 100644 --- a/book_source/03_topical_pages/05_models/sipnet.Rmd +++ b/book_source/03_topical_pages/05_models/sipnet.Rmd @@ -1,4 +1,4 @@ -### SiPNET {#models-sipnet} +## SiPNET {#models-sipnet} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/05_models/stics.Rmd b/book_source/03_topical_pages/05_models/stics.Rmd index 24ee25dcb7a..54eb5702fdc 100644 --- a/book_source/03_topical_pages/05_models/stics.Rmd +++ b/book_source/03_topical_pages/05_models/stics.Rmd @@ -1,4 +1,4 @@ -### STICS {#models-stics} +## STICS {#models-stics} | Model Information || | -- | -- | diff --git a/book_source/03_topical_pages/06_data/01_meteorology.Rmd b/book_source/03_topical_pages/06_data/01_meteorology.Rmd index 59abde31092..ee5d5a5e10e 100644 --- a/book_source/03_topical_pages/06_data/01_meteorology.Rmd +++ b/book_source/03_topical_pages/06_data/01_meteorology.Rmd @@ -1,6 +1,6 @@ # Available Meteorological Drivers -### Ameriflux +## Ameriflux Scale: site @@ -10,7 +10,7 @@ Availability: varies by site http:\/\/ameriflux.lbl.gov\/data\/data-availability Notes: Old ORNL server, use is deprecated -### AmerifluxLBL +## AmerifluxLBL Scale: site @@ -20,7 +20,7 @@ Availability: varies by site http:\/\/ameriflux.lbl.gov\/data\/data-availability Notes: new Lawrence Berkeley Lab server -### Fluxnet2015 +## Fluxnet2015 Scale: site @@ -30,7 +30,7 @@ Availability: varies by site [http:\/\/fluxnet.fluxdata.org\/sites\/site-list-an Notes: Fluxnet 2015 synthesis product. 
Does not cover all FLUXNET sites -### NARR +## NARR Scale: North America @@ -38,7 +38,7 @@ Resolution: 3 hr, approx. 32km \(Lambert conical projection\) Availability: 1979-present -### CRUNCEP +## CRUNCEP Scale: global @@ -46,7 +46,7 @@ Resolution: 6hr, 0.5 degree Availability: 1901-2010 -### CMIP5 +## CMIP5 Scale: varies by model @@ -56,7 +56,7 @@ Availability: 2006-2100 Currently only GFDL available. Different scenerios and ensemble members can be set via Advanced Edit. -### NLDAS +## NLDAS Scale: Lower 48 + buffer, @@ -64,7 +64,7 @@ Resolution: 1 hour, .125 degree Availability: 1980-present -### GLDAS +## GLDAS Scale: Global @@ -72,7 +72,7 @@ Resolution: 3hr, 1 degree Availability: 1948-2010 -### PalEON +## PalEON Scale: -100 to -60 W Lon, 35 to 50 N Latitude \(US northern hardwoods + buffer\) @@ -80,7 +80,7 @@ Resolution: 6hr, 0.5 degree Availability: 850-2010 -### FluxnetLaThuile +## FluxnetLaThuile Scale: site @@ -90,7 +90,7 @@ Availability: varies by site http:\/\/www.fluxdata.org\/DataInfo\/Dataset%20Doc Notes: 2007 synthesis. Fluxnet2015 supercedes this for sites that have been updated -### Geostreams +## Geostreams Scale: site @@ -100,7 +100,7 @@ Availability: varies by site Notes: This is a protocol, not a single archive. The PEcAn functions currently default to querying [https://terraref.ncsa.illinois.edu/clowder/api/geostreams], which requires login and contains data from only two sites (Urbana IL and Maricopa AZ). However the interface can be used with any server that supports the [Geostreams API](https://opensource.ncsa.illinois.edu/confluence/display/CATS/Geostreams+API). -### ERA5 +## ERA5 Scale: Global diff --git a/book_source/03_topical_pages/06_data/02_GFDL.Rmd b/book_source/03_topical_pages/06_data/02_GFDL.Rmd index 4301c695704..bc9350adb18 100644 --- a/book_source/03_topical_pages/06_data/02_GFDL.Rmd +++ b/book_source/03_topical_pages/06_data/02_GFDL.Rmd @@ -1,10 +1,10 @@ -### Download GFDL +## Download GFDL The Downlad.GFDL function assimilates 3 hour frequency CMIP5 outputs generated by multiple GFDL models. GFDL developed several distinct modeling streams on the timescale of CMIP5 and AR5. These models include CM3, ESM2M and ESM2G with a spatial resolution of 2 degrees latitude by 2.5 degrees longitude. Each model has future outputs for the AR5 Representative Concentration Pathways ranging from 2006-2100. -### CM3 +## CM3 GFDL’s CMIP5 experiments with CM3 included many of the integrations found in the long-term CMIP5 experimental design. The focus of this physical climate model is on the role of aerosols, aerosol-cloud interactions, and atmospheric chemistry in climate variability and climate change. -### ESM2M & ESM2G +## ESM2M & ESM2G Two new models representing ocean physics with alternative numerical frameworks to explore the implications of some of the fundamental assumptions embedded in these models. Both ESM2M and ESM2G utilize a more advanced land model, LM3, than was available in ESM2.1 including a variety of enhancements (Milly et al., in prep). GFDL’s CMIP5 experiments with Earth System Models included many of the integrations found in the long-term CMIP5 experimental design. The ESMs, by design, close the carbon cycle and are used to study the impact of climate change on ecosystems, ecosystem changes on climate and human activities on ecosystems. 
diff --git a/book_source/03_topical_pages/08_Database-Synchronization.Rmd b/book_source/03_topical_pages/08_Database-Synchronization.Rmd
index bb0490d6d37..d587339e45a 100644
--- a/book_source/03_topical_pages/08_Database-Synchronization.Rmd
+++ b/book_source/03_topical_pages/08_Database-Synchronization.Rmd
@@ -4,13 +4,13 @@ The database synchronization consists of 2 parts:
- Getting the data from the remote servers to your server
- Sharing your data with everybody else

-### How does it work?
+## How does it work?

Each server that runs the BETY database will have a unique machine_id and a sequence of ID's associated. Whenever the user creates a new row in BETY it will receive an ID in the sequence. This allows us to uniquely identify where a row came from. This information is crucial for the code that works with the synchronization, since we can now copy those rows that have an ID in the sequence specified. If you have not asked for a unique ID, your ID will be 99.

The synchronization code itself is split into two parts, loading data with the `load.bety.sh` script and exporting data using `dump.bety.sh`. If you do not plan to share data, you only need to use `load.bety.sh` to update your database.

-### Set up
+## Set up

Requests for new machine ID's are currently handled manually. To request a machine ID contact Rob Kooper . In the examples below this ID is referred to as 'my siteid'.

@@ -21,7 +21,7 @@ sudo -u postgres {$PECAN}/scripts/load.bety.sh -c -u -m 
```
WARNING: At the moment running CREATE deletes all current records in the database. If you are running from the VM this includes both all runs you have done and all information that the database is prepopulated with (e.g. input and model records). Remote records can be fetched (see below), but local records will be lost (we're working on improving this!)

-### Fetch latest data
+## Fetch latest data

When logged into the machine you can fetch the latest data using the load.bety.sh script. The script will check what site you want to get the data for and will remove all data in the database associated with that id. It will then reinsert all the data from the remote database.

@@ -62,7 +62,7 @@ dump.bety.sh -h
-u should unchecked data be dumped, default is NO
```

-### Sharing data
+## Sharing data

Sharing your data requires a few steps. First, before entering any data, you will need to request an ID from the PEcAn developers. Simply open an issue at github and we will generate an ID for you. If possible, add the URL of your data host.

@@ -92,7 +92,7 @@ NOTE: If you want your dumps to be accessible to other PEcAn servers you need to

Plans to simplify this process are in the works

-### Automation
+## Automation

Below is an example of a script to synchronize PEcAn database instances across the network.

@@ -120,11 +120,11 @@ MAILTO=user@yourUniversity.edu
12 * * * * /home/dietze/db.sync.sh
```

-### Database maintentance
+## Database maintenance

All databases need maintenance performed on them. Depending upon the database type this can happen automatically, or it needs to be run through a scheduler or manually. The BETYdb database is PostgreSQL and it needs to be reindexed and vacuumed on a regular basis. Reindexing introduces efficiencies back into the database by reorganizing the indexes. Vacuuming the database frees up resources in the database by rearranging and compacting the database. Both of these operations are necessary and safe. As always, if there's a concern, a backup of the database should be made ahead of time. 
While running the reindexing and vacuuming commands, users will notice a slowdown at times. Therefore it's better to run these maintenance tasks during off hours.

-#### Reindexing the database
+### Reindexing the database

As mentioned above, reindexing allows the database to become more efficient. Over time as data gets updated and deleted, the indexes become less efficient. This has a negative impact on executed statements. Reindexing makes the indexes efficient again (at least for a while), allowing faster statement execution and reducing the overall load on the database.

@@ -167,7 +167,7 @@ Splitting up the indexing commands over time allows the database to operate effi

Please refer to the Automation section above for information on using cron to schedule reindexing commands.

-#### Vacuuming the database
+### Vacuuming the database

Vacuuming the BETYdb PostgreSQL database reduces the amount of resources it uses and introduces its own efficiencies.

@@ -223,17 +223,17 @@ psql -U bety -c "VACUUM yields; VACUUM FULL yields; VACUUM ANALYZE yields;"

Given its impact, it's typically not desirable to perform a VACUUM FULL after every normal vacuum; it should be done on an "as needed" basis or infrequently.

-### Troubleshooting
+## Troubleshooting

There are several possibilities if a scheduled cron job appears to be running but isn't producing the expected results. The following are suggestions on what to try to resolve the issue.

-##### Username and password
+### Username and password

The user that scheduled a cron job may not have access permissions to the database. This can be easily confirmed by running the command line from the cron job while logged in as the user that scheduled the job. An error message will be shown if the user doesn't have permissions.

To resolve this, be sure to include a valid database user (not a BETYdb user) with their credentials on the command in crontab.

-##### db_hba.conf file
+### db_hba.conf file

It's possible that the machine hosting the docker image of the database doesn't have permissions to access the database. This may be due to the cron job running on a machine that is not the docker instance of the database.

@@ -249,7 +249,7 @@ This command should return a series of text lines. For each row except those beg

Ensure that the host machine is listed under the fourth column (machine address range, or 'all'), is also included in the IP mask if one was specified, and finally that any authentication options are not set to 'reject'. If the host machine is not included, the db_hba.conf file will need to be updated to allow access.

-### Network Status Map
+## Network Status Map

https://pecan2.bu.edu/pecan/status.php

@@ -258,7 +258,7 @@ Nodes: red = down, yellow = out-of-date schema, green = good

Edges: red = fail, yellow = out-of-date sync, green = good

-### Tasks
+## Tasks

Following is a list of tasks we plan on working on to improve these scripts:

- [pecanproject/bety#368](https://github.com/PecanProject/bety/issues/368) allow site-specific customization of information and UI elements including title, contacts, logo, color scheme.
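As a concrete starting point for the reindexing, vacuuming, and scheduling advice above, the crontab sketch below runs both maintenance tasks during off hours. It is purely illustrative: the table names, database user, and times are assumptions and should be adapted to your own installation.

```bash
# Illustrative crontab entries; adapt tables, user, and times to your setup.
# Reindex a few heavily updated tables at 01:00, then vacuum them at 02:00.
0 1 * * * psql -U bety -c "REINDEX TABLE traits; REINDEX TABLE yields;"
0 2 * * * psql -U bety -c "VACUUM ANALYZE traits; VACUUM ANALYZE yields;"
```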
diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 4fc1812df07..bf49560ad3b 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -5,7 +5,7 @@
- Allometry ([`modules/allometry`](https://pecanproject.github.io/modules/allometry/docs/index.html)); [vignette](https://pecanproject.github.io/modules/allometry/docs/articles/AllomVignette.html)
- Load data ([`modules/benchmark`](https://pecanproject.github.io/modules/benchmark/docs/index.html) -- `PEcAn.benchmark::load_data`)

-### Loading Data in PEcAn {#LoadData}
+## Loading Data in PEcAn {#LoadData}

If you are loading data into PEcAn for benchmarking, using the Benchmarking shiny app [provide link?] is recommended.

@@ -13,9 +13,7 @@ Data can be loaded manually using the `load_data` function which in turn require

Below is a description of the `load_data` function and a simple example of loading data manually.

-### Function `load_data`
-
-#### Inputs
+### Inputs

Required

@@ -28,11 +26,12 @@ Optional

- `start_year = NA`:
- `end_year = NA`:
- `site = NA`
- `vars.used.index=NULL`
-### Output
+
+### Output

- R data frame containing the requested variables converted into PEcAn standard names and units, with time steps in `POSIX` format.

-#### Example
+### Example

The data for this example has already been entered into the database. To add new data go to [new data documentation](#NewInput).

diff --git a/book_source/03_topical_pages/12_troubleshooting-pecan.Rmd b/book_source/03_topical_pages/12_troubleshooting-pecan.Rmd
index e04bda7b223..5d75b628b43 100755
--- a/book_source/03_topical_pages/12_troubleshooting-pecan.Rmd
+++ b/book_source/03_topical_pages/12_troubleshooting-pecan.Rmd
@@ -1,14 +1,14 @@
# Troubleshooting and Debugging PEcAn

-### Cookies and pecan web pages
+## Cookies and pecan web pages

You may need to disable cookies specifically for the pecan webserver in your browser. This shouldn't be a problem running from the virtual machine, but your installation of php can include a 'PHPSESSID' that is quite long, and this can overflow the params field of the workflows table, depending on how long your hostname, model name, site name, etc. are.

-### `Warning: mkdir() [function.mkdir]: No such file or directory`
+## `Warning: mkdir() [function.mkdir]: No such file or directory`

If you are seeing: `Warning: mkdir() [function.mkdir]: No such file or directory in /path/to/pecan/web/runpecan.php at line 169` it is because you have used a relative path for \$output_folder in system.php.

-### After creating a new PFT the tag for PFT not passed to config.xml in ED
+## After creating a new PFT the tag for PFT not passed to config.xml in ED

This is a result of the rather clunky way we currently have of adding PFTs to PEcAn. This is happening because you need to edit the ./pecan/models/ed/data/pftmapping.csv file to include your new PFTs.

@@ -40,7 +40,7 @@ See [tests README](https://github.com/PecanProject/pecan/blob/master/tests/READM

-### Useful scripts
+## Useful scripts

The following scripts (in `qaqc/vignettes`) identify, respectively:

diff --git a/book_source/03_topical_pages/14_backup.Rmd b/book_source/03_topical_pages/14_backup.Rmd
index 8c08890df31..82887b9ec61 100644
--- a/book_source/03_topical_pages/14_backup.Rmd
+++ b/book_source/03_topical_pages/14_backup.Rmd
@@ -2,13 +2,13 @@ This section provides additional details about the BETY database used by PEcAn. 
It will discuss best practices for setting up the BETY database, how to back up the database, and how to restore the database.

-### Best practices {#database-setup}
+## Best practices {#database-setup}

When using the BETY database in non-testing mode, it is best not to use the default users. This is accomplished when initializing the database. When the database is initially created, it will be created with some default users (the best known is the carya user) as well as the guestuser that can be used in the BETY web application. To disable these users you will either need to disable the users from the web interface, or you can reinitialize the database and remove the `-u` flag from the command line (the `-u` flag will create the default users). To disable the guestuser as well you can remove the `-g` flag from the command line, or disable the account from BETY.

The default installation of BETY and PEcAn will assume there is a database called bety with a default username and password. The default installation will set up the database account to not have any superuser abilities. It is also best to limit access to the postgres database to trusted hosts, either by using firewalls, or by configuring postgresql to only accept connections from a limited set of hosts.

-### Backup of BETY database
+## Backup of BETY database

It is good practice to make sure you back up the BETY database. Just creating a copy of the files on disk is not enough to ensure you have a valid backup. Most likely if you do this you will end up with a corrupted backup of the database.

@@ -23,7 +23,7 @@ Using this scheme, we can restore the database using any of the files generated.

It is recommended to run this script using a cronjob at midnight such that you have a daily backup of the database and do not have to remember to create these backups. When running this script (either from cron or by hand) make sure to place the backups on a different machine than the machine that holds the database, in case of a larger system failure.

-### Restore of BETY database
+## Restore of BETY database

Hopefully this section will never need to be used. Following are 5 steps that have been used to restore the database. Before you start, it is worth reading up online a bit on restoring the database, as well as joining the slack channel and asking any of the people there for help.

diff --git a/book_source/03_topical_pages/92_workflow_modules.Rmd b/book_source/03_topical_pages/92_workflow_modules.Rmd
index 6c664ccd25c..2248287649b 100644
--- a/book_source/03_topical_pages/92_workflow_modules.Rmd
+++ b/book_source/03_topical_pages/92_workflow_modules.Rmd
@@ -3,26 +3,26 @@

NOTE: As of PEcAn 1.2.6 -- needs to be updated significantly

-### Overview
+## Overview

Workflow inputs and outputs (click to open in new page, then zoom). Code used to generate this image is provided in [qaqc/vignettes/module_output.Rmd](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/module_output.Rmd)

[![PEcAn Workflow](http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg)](http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg)

-### Load Settings:
-#### `read.settings("/home/pecan/pecan.xml")`
+## Load Settings
+### `read.settings("/home/pecan/pecan.xml")`

* loads settings
* creates directories
* generates new xml, put in output folder

-### Query Database:
-#### `get.trait.data()`
+## Query Database
+### `get.trait.data()`

Queries the database for both the trait data and prior distributions associated with the PFTs specified in the settings file. 
The list of variables that are queried is determined by what variables have priors associated with them in the definition of the pft. Likewise, the list of species that are associated with a PFT determines what subset of data is extracted out of all data matching a given variable name.

-### Meta Analysis:
-#### `run.meta.analysis()`
+## Meta Analysis
+### `run.meta.analysis()`

The meta-analysis code begins by distilling the trait.data to just the values needed for the meta-analysis statistical model, with this being stored in `madata.Rdata`. This reduced form includes the conversion of all error statistics into precision (1/variance), and the indexing of sites, treatments, and greenhouse. In reality, the core meta-analysis code can be run independent of the trait database as long as input data is correctly formatted into the form shown in `madata`.

@@ -31,30 +31,30 @@ The evaluation of the meta-analysis is done using a Bayesian statistical softwar

Meta-analyses are run, and summary plots are produced.

-### Write Configuration Files
-#### `write.configs(model)`
+## Write Configuration Files
+### `write.configs(model)`

* writes out a configuration file for each model run
** writes 500 configuration files for a 500 member ensemble
** for _n_ traits, writes `6 * n + 1` files for running default Sensitivity Analysis (number can be changed in the pecan settings file)

-### Start Runs:
-#### `start.runs(model)`
+## Start Runs
+### `start.runs(model)`

This code starts the model runs using a model-specific run function named start.runs.[model]. If the ecosystem model is running on a remote server, this module also takes care of all of the communication with the remote server and its run queue. Each of your subdirectories should now have a [run.id].out file in it. One instance of the model is run for each configuration file generated by the previous write configs module.

-### Get Model Output
-#### `get.model.output(model)`
+## Get Model Output
+### `get.model.output(model)`

This code first uses a model-specific model2netcdf.[model] function to convert the model output into a standard output format ([MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml)). Then it extracts the data for requested variables specified in the settings file as `settings$ensemble$variable`, averages over the time-period specified as `start.date` and `end.date`, and stores the output in a file `output.Rdata`. The `output.Rdata` file contains two objects, `sensitivity.output` and `ensemble.output`, which are the model predictions for the parameter sets specified in `sa.samples` and `ensemble.samples`. In order to save bandwidth, if the model output is stored on a remote system PEcAn will perform these operations on the remote host and only return the `output.Rdata` object.

-### Ensemble Analysis
-#### `run.ensemble.analysis()`
+## Ensemble Analysis
+### `run.ensemble.analysis()`

This module makes some simple graphs of the ensemble output. Open ensemble.analysis.pdf to view the ensemble prediction as both a histogram and a boxplot. ensemble.ts.pdf provides a timeseries plot of the ensemble mean, median, and 95% CI.

-### Sensitivity Analysis, Variance Decomposition
-#### `run.sensitivity.analysis()`
+## Sensitivity Analysis, Variance Decomposition
+### `run.sensitivity.analysis()`

This function processes the output of the previous module into sensitivity analysis plots, `sensitivityanalysis.pdf`, and a variance decomposition plot, `variancedecomposition.pdf`.
In the sensitivity plots you will see the parameter values on the x-axis and the model output on the y-axis, with the dots being the model evaluations and the line being the spline fit.

@@ -63,7 +63,7 @@ The variance decomposition plot is discussed more below. For your reference, the

The variance decomposition plot contains three columns: the coefficient of variation (normalized posterior variance), the elasticity (normalized sensitivity), and the partial standard deviation of each model parameter. This graph is sorted by the variable explaining the largest amount of variability in the model output (right-hand column). From this graph, identify the top-tier parameters that you would target for future constraint.

-### Glossary
+## Glossary

* Inputs: data sets that are used, and file paths leading to them
* Parameters: e.g. info set in settings file

diff --git a/book_source/03_topical_pages/93_installation/00_installation_index.Rmd b/book_source/03_topical_pages/93_installation/00_installation_index.Rmd
new file mode 100644
index 00000000000..be47aee4fa3
--- /dev/null
+++ b/book_source/03_topical_pages/93_installation/00_installation_index.Rmd
@@ -0,0 +1,3 @@
+# Installation details
+
+This chapter contains details about installing and maintaining the uncontainerized version of PEcAn on a virtual machine or a server. If you are running PEcAn inside of Docker, many of the particulars will be different and you should refer to the [docker](#docker-index) chapter instead of this one.

diff --git a/book_source/03_topical_pages/93_installation/01_pecan_vm.Rmd b/book_source/03_topical_pages/93_installation/01_pecan_vm.Rmd
index 63499b357dc..9fcdf05abb6 100644
--- a/book_source/03_topical_pages/93_installation/01_pecan_vm.Rmd
+++ b/book_source/03_topical_pages/93_installation/01_pecan_vm.Rmd
@@ -1,6 +1,6 @@
## PEcAn Virtual Machine {#pecanvm}

-This section includes the following VM related documentation:
+See also other VM related documentation sections:

* [Maintaining your PEcAn VM](#maintain-vm)
* [Connecting to the VM via SSH](#ssh-vm)

diff --git a/book_source/03_topical_pages/93_installation/01_setup/PEcAn-in-the-Cloud.Rmd b/book_source/03_topical_pages/93_installation/01_setup/PEcAn-in-the-Cloud.Rmd
index 6e4a270ecfe..8712ad8b77a 100755
--- a/book_source/03_topical_pages/93_installation/01_setup/PEcAn-in-the-Cloud.Rmd
+++ b/book_source/03_topical_pages/93_installation/01_setup/PEcAn-in-the-Cloud.Rmd
@@ -1,15 +1,13 @@
-### AWS Setup
+## AWS Setup

***********Mirror of earlier section in installation section?*********************

-### Porting VM to AWS
-
The following are Mike's rough notes from a first attempt to port the PEcAn VM to the AWS. 
This was done on a Mac. These notes are based on following the instructions [here](http://www.rittmanmead.com/2014/09/obiee-sampleapp-in-the-cloud-importing-virtualbox-machines-to-aws-ec2/)

-#### Convert PEcAn VM
+### Convert PEcAn VM

AWS allows upload of files as VMDK but the default PEcAn VM is in OVA format

@@ -21,7 +19,7 @@ AWS allows upload of files as VMDK but the default PEcAn VM is in OVA format
tar xf 
```

-#### Set up an account on [AWS](http://aws.amazon.com/)
+### Set up an account on [AWS](http://aws.amazon.com/)

After you have an account you need to set up a user and save your [access key and secret key](http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html)

In my case I created a user named 'carya'

Note: the key that ended up working had to be made at [https://console.aws.amazon.com/iam/home#security_credential](https://console.aws.amazon.com/iam/home#security_credential), not the link above.

-#### Install [EC2 command line tools](http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html)
+### Install [EC2 command line tools](http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html)

```
wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip

@@ -58,14 +56,14 @@ Then set your user credentials as environment variables:

Note: you may want to add all the variables set in the above EXPORT commands above into your .bashrc or equivalent.

-#### Create an AWS S3 'bucket' to upload VM to
+### Create an AWS S3 'bucket' to upload VM to

Go to [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3) and click "Create Bucket"

In my case I named the bucket 'pecan'

-#### Upload
+### Upload

In the code below, make sure to change the PEcAn version, the name of the bucket, and the name of the region. Make sure that the PEcAn version matches the one you downloaded.

@@ -81,7 +79,7 @@ Make sure to note the ID of the image since you'll need it to check the VM statu

ec2-describe-conversion-tasks
```

-#### Configuring the VM
+### Configuring the VM

On the EC2 management webpage, [https://console.aws.amazon.com/ec2](https://console.aws.amazon.com/ec2), if you select **Instances** on the left hand side (LHS) you should be able to see your new PEcAn image as an option under Launch Instance.

@@ -101,7 +99,7 @@ Select "Load Balancers" on the LHS, click on "Create Load Balancer", follow Wizard

To be able to launch multiple VMs: Under "Instances" convert VM to an Image. When done, select Launch, enable multiple instances, and associate with the previous security group. Once running, go back to "Load Balancers" and add the instances to the load balancer. Each instance can be accessed individually by its own public IP, but external users should access the system more generally via the Load Balancer's DNS.

-#### Booting the VM
+### Booting the VM

Return to "Instances" using the menu on the LHS.

diff --git a/book_source/03_topical_pages/93_installation/01_setup/thredds.Rmd b/book_source/03_topical_pages/93_installation/01_setup/thredds.Rmd
index 2cfa7b513d8..04549749c9b 100755
--- a/book_source/03_topical_pages/93_installation/01_setup/thredds.Rmd
+++ b/book_source/03_topical_pages/93_installation/01_setup/thredds.Rmd
@@ -1,4 +1,4 @@
-### Thredds Setup
+## Thredds Setup

Installing and configuring Thredds for PEcAn

authors - Rob Kooper

@@ -10,8 +10,6 @@ The Tomcat 8 server can be installed from the default Ubuntu repositories. 
The thredds webapp will be downloaded and installed from unidata.

-#### Ubuntu
-
First step is to install Tomcat 8 and configure it. The flag `-Dtds.content.root.path` should point to the location of where the thredds folder is located. This needs to be writeable by the tomcat user. `-Djava.security.egd` is a special flag to use a different random number generator for tomcat. The default would take too long to generate a random number.

```

diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/00_install_OS.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/00_install_OS.Rmd
index 7c3a98ac95e..b88c61ce652 100644
--- a/book_source/03_topical_pages/93_installation/03_install_OS/00_install_OS.Rmd
+++ b/book_source/03_topical_pages/93_installation/03_install_OS/00_install_OS.Rmd
@@ -1,7 +1,7 @@
-### OS Specific Installations {#osinstall}
+## OS Specific Installations {#osinstall}

- [Ubuntu](#ubuntu)
-- [CentOS](#centos/redhat)
+- [CentOS](#centosredhat)
- [OSX](#macosx)
- [Install BETY](#install-bety) THIS PAGE IS DEPRECATED
- [Install Models](#install-models)

diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd
index adec37e14a5..472417c9ccf 100755
--- a/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd
+++ b/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd
@@ -1,10 +1,10 @@
-#### Ubuntu {#ubuntu}
+### Ubuntu {#ubuntu}

These are specific notes for installing PEcAn on Ubuntu (14.04) and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.

This document also contains information on how to install the Rstudio server edition as well as any other packages that can be helpful.

-##### Install build environment
+#### Install build environment

```bash
sudo -s

@@ -38,7 +38,7 @@ echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vani
exit
```

-##### Install Postgres
+#### Install Postgres

Documentation: http://trac.osgeo.org/postgis/wiki/UsersWikiPostGIS21UbuntuPGSQL93Apt

@@ -71,7 +71,8 @@ exit
```
To install the BETYdb database ..

-##### Apache Configuration PEcAn
+
+#### Apache Configuration PEcAn

```bash
# become root

@@ -97,7 +98,7 @@ a2enconf pecan
exit
```

-##### Apache Configuration BETY
+#### Apache Configuration BETY

```bash
sudo -s

@@ -122,7 +123,7 @@ a2enconf bety
/etc/init.d/apache2 restart
```

-##### Rstudio-server
+#### Rstudio-server

*NOTE This will allow anybody to login to the machine through the rstudio interface and run any arbitrary code. 
The login used however is the same as the system login/password.* @@ -158,7 +159,7 @@ a2enconf rstudio exit ``` -##### Additional packages +#### Additional packages HDF5 Tools, netcdf, GDB and emacs ```bash diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd index 38ad8f9da4b..e3b1aa94064 100755 --- a/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd +++ b/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd @@ -1,10 +1,10 @@ -#### CentOS/RedHat {#centos/redhat} +### CentOS/RedHat {#centosredhat} These are specific notes for installing PEcAn on CentOS (7) and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed. This document also contains information on how to install the Rstudio server edition as well as any other packages that can be helpful. -##### Install build environment +#### Install build environment ```bash sudo -s @@ -46,7 +46,7 @@ echo "module load mpi" >> ~/.bashrc module load mpi ``` -###### Install and configure PostgreSQL, udunits2, NetCDF +#### Install and configure PostgreSQL, udunits2, NetCDF ```bash sudo -s @@ -77,7 +77,7 @@ systemctl start postgresql-9.4 exit ``` -##### Apache Configuration PEcAn +#### Apache Configuration PEcAn Install and Start Apache ```bash @@ -113,7 +113,7 @@ a2enconf pecan exit ``` -##### Apache Configuration BETY +#### Apache Configuration BETY ```bash sudo -s @@ -144,7 +144,7 @@ EOF systemctl restart httpd ``` -##### Rstudio-server +#### Rstudio-server NEED FIXING @@ -200,7 +200,7 @@ Then, proceed with the following: * restart the Apache server: `sudo httpd restart` * now you should be able to access `http:///rstudio` -###### Install ruby-netcdf gem +#### Install ruby-netcdf gem ```bash cd $RUBY_APPLICATION_HOME @@ -223,7 +223,7 @@ bundle install --without development ``` -##### Additional packages +#### Additional packages NEED FIXING diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/04_Installing-PEcAn-OSX.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/04_Installing-PEcAn-OSX.Rmd index b7047896559..d057ea41246 100755 --- a/book_source/03_topical_pages/93_installation/03_install_OS/04_Installing-PEcAn-OSX.Rmd +++ b/book_source/03_topical_pages/93_installation/03_install_OS/04_Installing-PEcAn-OSX.Rmd @@ -1,11 +1,11 @@ -#### Mac OSX {#macosx} +### Mac OSX {#macosx} These are specific notes for installing PEcAn on Mac OSX and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed. This document also contains information on how to install the Rstudio server edition as well as any other packages that can be helpful. -##### Install build environment +#### Install build environment ```bash # install R @@ -42,7 +42,7 @@ sudo make install cd .. 
```

-##### Install Postgres
+#### Install Postgres

For those on a Mac, I use the following app for postgresql, which has postgis already installed (http://postgresapp.com/)

@@ -62,14 +62,14 @@ CREATE EXTENSION postgis_tiger_geocoder;

To check your postgis run the following command again in psql: `SELECT PostGIS_full_version();`

-##### Additional installs
+#### Additional installs

-###### Install JAGS
+##### Install JAGS

Download JAGS from http://sourceforge.net/projects/mcmc-jags/files/JAGS/3.x/Mac%20OS%20X/JAGS-Mavericks-3.4.0.dmg/download

-###### Install udunits
+##### Install udunits

Installing udunits-2 on MacOSX is done from source.

@@ -86,7 +86,7 @@ make
sudo make install
```

-##### Apache Configuration
+#### Apache Configuration

Mac does not support pdo/postgresql by default. The easiest way to install is to use: http://php-osx.liip.ch/

@@ -102,10 +102,10 @@ Alias /pecan ${PWD}/pecan/web
EOF
```

-##### Ruby
+#### Ruby

The default version of ruby should work. Or use [JewelryBox](https://jewelrybox.unfiniti.com/).

-##### Rstudio Server
+#### Rstudio Server

For the Mac you can download [Rstudio Desktop](http://www.rstudio.com/).

diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/05_install_BETY.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/05_install_BETY.Rmd
index fddd44b2064..24739ba9154 100644
--- a/book_source/03_topical_pages/93_installation/03_install_OS/05_install_BETY.Rmd
+++ b/book_source/03_topical_pages/93_installation/03_install_OS/05_install_BETY.Rmd
@@ -1,4 +1,4 @@
-#### Installing BETY {#install-bety}
+### Installing BETY {#install-bety}

**************THIS PAGE IS DEPRECATED*************

diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/06_install_models/00_install_models.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/06_install_models/00_install_models.Rmd
index 3d4e45d7e19..e3b8cf88aea 100644
--- a/book_source/03_topical_pages/93_installation/03_install_OS/06_install_models/00_install_models.Rmd
+++ b/book_source/03_topical_pages/93_installation/03_install_OS/06_install_models/00_install_models.Rmd
@@ -1,4 +1,4 @@
-#### Install Models
+### Install Models

This page contains instructions on how to download and install ecosystem models that have been or are being coupled to PEcAn. These instructions have been tested on the PEcAn Ubuntu VM. Commands may vary on other operating systems.

diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/07_Installing-PEcAn-Data.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/07_Installing-PEcAn-Data.Rmd
index 9345a6f542e..cce8c87e438 100644
--- a/book_source/03_topical_pages/93_installation/03_install_OS/07_Installing-PEcAn-Data.Rmd
+++ b/book_source/03_topical_pages/93_installation/03_install_OS/07_Installing-PEcAn-Data.Rmd
@@ -1,8 +1,8 @@
-#### Installing data for PEcAn {#install-data}
+### Installing data for PEcAn {#install-data}

PEcAn assumes some of the data to be installed on the machine. This page will describe how to install this data.

-##### Site Information
+#### Site Information

These are large-ish files that contain data used with ED2 and SIPNET

@@ -19,7 +19,7 @@ tar zxf inputs.tgz
rm inputs.tgz

-##### FIA database
+#### FIA database

The FIA database is large and will add an extra 10GB to the installation.

@@ -33,7 +33,7 @@ psql -U bety -d fia5data < fia5data.psql
rm fia5data.psql

-##### Flux Camp
+#### Flux Camp

The following will install the data for flux camp (as well as the demo script for PEcAn). 
@@ -44,7 +44,7 @@ tar zxf plot.tgz
rm plot.tgz

-##### Harvard for ED tutorial
+#### Harvard for ED tutorial

Add datasets and runs

diff --git a/book_source/04_appendix/03_courses_taught.Rmd b/book_source/04_appendix/03_courses_taught.Rmd
index 9e85f011cb9..17d1bb83bac 100755
--- a/book_source/04_appendix/03_courses_taught.Rmd
+++ b/book_source/04_appendix/03_courses_taught.Rmd
@@ -1,27 +1,29 @@
# PEcAn Project Used in Courses

-### University classes
+## University classes

-#### GE 375 - Environmental Modeling - Spring 2013, 2014 (Mike Dietze, Boston University)
+### GE 375 - Environmental Modeling - Spring 2013, 2014 (Mike Dietze, Boston University)

The final "Case Study: Terrestrial Ecosystem Models" is a PEcAn-based hands-on activity. Each class has had 25 students.

GE 585 - Ecological forecasting Fall 2013 (Mike Dietze, Boston University)

-### Summer Courses / Workshops
-#### Annual summer course in flux measurement and advanced modeling (Mike Dietze, Ankur Desai) Niwot Ridge, CO
+
+## Summer Courses / Workshops
+
+### Annual summer course in flux measurement and advanced modeling (Mike Dietze, Ankur Desai) Niwot Ridge, CO

About 1/3 lecture, 2/3 hands-on (the syllabus is actually wrong, as it lists them the other way around). Each class has 24 students.

[2013 Syllabus](http://www.fluxcourse.org/files/SyllabusFluxcourse_2013.pdf) see Tuesday Week 2 Data Assimilation lectures and PEcAn demo and the Class projects and presentations on Thursday and Friday. (Most students use PEcAn for their group projects.) 2014 will be the third year that PEcAn has been used for this course.

-#### Assimilating Long-Term Data into Ecosystem Models: Paleo-Ecological Observatory Network (PalEON) Project
+### Assimilating Long-Term Data into Ecosystem Models: Paleo-Ecological Observatory Network (PalEON) Project

Here is a link to the course: https://www3.nd.edu/~paleolab/paleonproject/summer-course/

This course uses the same demo as above, including collecting data in the field and assimilating it (part 3)

-#### Integrating Evidence on Forest Response to Climate Change: Physiology to Regional Abundance
+### Integrating Evidence on Forest Response to Climate Change: Physiology to Regional Abundance

http://blue.for.msu.edu/macrosystems/workshop

May 13-14, 2013

Session 4: Integrating Forest Data Into Ecosystem Models

-#### Ecological Society of America meetings
+### Ecological Society of America meetings

[Workshop: Combining Field Measurements and Ecosystem Models](http://eco.confex.com/eco/2013/webprogram/Session9007.html)

-### Selected Publications
+## Selected Publications

1. Dietze, M.C., D.S LeBauer, R. Kooper (2013) [On improving the communication between models and data](https://github.com/PecanProject/pecan/blob/master/documentation/dietze2013oic.pdf?raw=true). Plant, Cell, & Environment doi:10.1111/pce.12043
2. LeBauer, D.S., D. Wang, K. Richter, C. Davidson, & M.C. Dietze. (2013). [Facilitating feedbacks between field measurements and ecosystem models](https://github.com/PecanProject/pecan/blob/master/documentation/lebauer2013ffb.pdf?raw=true). Ecological Monographs. 
doi:10.1890/12-0137.1 \ No newline at end of file From e1efac1183799ca86b27c969de3d9140e963d1d8 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 14 Apr 2020 22:12:48 +0200 Subject: [PATCH 0891/2289] fix image paths --- .../03_topical_pages/09_standalone_tools.Rmd | 2 +- .../03_topical_pages/11_adding_to_pecan.Rmd | 43 ++++++++++--------- 2 files changed, 23 insertions(+), 22 deletions(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index bf49560ad3b..a4d6690af56 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -48,7 +48,7 @@ bety = PEcAn.DB::betyConnect(php.config = "pecan/web/config.php") 2. Look up the inputs record for the data in BETY. ```{r, echo=FALSE, out.height = "50%", out.width = "50%", fig.align = 'center'} -knitr::include_graphics("04_advanced_user_guide/images/Input_ID_name.png") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/Input_ID_name.png") ``` To find the input ID, either look at diff --git a/book_source/03_topical_pages/11_adding_to_pecan.Rmd b/book_source/03_topical_pages/11_adding_to_pecan.Rmd index eaed348c2f7..c4829297eb5 100644 --- a/book_source/03_topical_pages/11_adding_to_pecan.Rmd +++ b/book_source/03_topical_pages/11_adding_to_pecan.Rmd @@ -37,7 +37,7 @@ To run a model within PEcAn requires that the PEcAn database has sufficient info The instructions in this section assume that you will be specifying this information using the BETYdb web-based interface. This can be done either on your local VM (localhost:3280/bety or localhost:6480/bety) or on a server installation of BETYdb. However you interact with BETYdb, we encourage you to set up your PEcAn instance to support [database syncs](#database-sync) so that these changes can be shared and backed-up across the PEcAn network. -![](11_images/bety_main_page.png) +![](03_topical_pages/11_images/bety_main_page.png) The figure below summarizes the relevant database tables that need to be updated to add a new model and the primary variables that define each table. @@ -49,8 +49,8 @@ The first step to adding a model is to create a new MODEL_TYPE, which defines th The MODEL_TYPE is created by selecting Runs > Model Type and then clicking on _New Model Type_. The MODEL_TYPE name should be identical to the MODEL package name (see Interface Module below) and is case sensitive. -![](11_images/bety_modeltype_1.png) -![](11_images/bety_modeltype_2.png) +![](03_topical_pages/11_images/bety_modeltype_1.png) +![](03_topical_pages/11_images/bety_modeltype_2.png) ### MACHINE @@ -84,12 +84,12 @@ Since many of the PEcAn tools are designed to keep track of parameter uncertaint Create a new PFT entry by selecting Data > PFTs and then clicking on _New PFT_. -![](11_images/bety_pft_1.png) -![](11_images/bety_pft_2.png) +![](03_topical_pages/11_images/bety_pft_1.png) +![](03_topical_pages/11_images/bety_pft_2.png) Give the PFT a descriptive name (e.g., temperate deciduous). PFTs are MODEL_TYPE specific, so choose your MODEL_TYPE from the pull down menu. -![](11_images/bety_pft_3.png) +![](03_topical_pages/11_images/bety_pft_3.png) #### Species @@ -109,15 +109,15 @@ There are a wide variety of priors already defined in the PEcAn database that of These pre-existing prior distributions can be added to a PFT. Navigate to the PFT from Data > PFTs and selecting the edit button in the Actions column for the chosen PFT. 
-![](11_images/bety_priors_1.png) +![](03_topical_pages/11_images/bety_priors_1.png) Click on "View Related Priors" button and search through the list for desired prior distributions. The list can be filtered by adding terms into the search box. Add a prior to the PFT by clicking on the far left button for the desired prior, changing it to an X. -![](11_images/bety_priors_2.png) +![](03_topical_pages/11_images/bety_priors_2.png) Save this by scrolling to the bottom of the PFT page and hitting the Update button. -![](11_images/bety_priors_3.png) +![](03_topical_pages/11_images/bety_priors_3.png) #### Creating new prior distributions @@ -131,7 +131,7 @@ A new prior distribution can be created for a pre-existing variable, if a more c * Specify the prior sample size in _N_ if the prior is based on observed data (independent of data in the PEcAn database) * When this is done, scroll down and hit the Create button -![](11_images/bety_priors_4.png) +![](03_topical_pages/11_images/bety_priors_4.png) The new prior distribution can then be added a PFT as described in the "Adding Priors for Each Variable" section. @@ -141,7 +141,7 @@ It is important to note that the priors are defined for the variable name and un To add a new variable, select Data > Variables and click the New Variable button. Fill in the _Name_ field with the desired name for the variable and the units in the _Units_ field. There are additional fields, such as _Standard Units_, _Notes_, and _Description_, that can be filled out if desired. When done, hit the Create button. -![](11_images/bety_priors_5.png) +![](03_topical_pages/11_images/bety_priors_5.png) The new variable can be used to create a prior distribution for it as in the "Creating new prior distributions" section. @@ -592,7 +592,7 @@ The Data-Ingest application is capable of loading data from the DataONE data fed
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/D1Ingest-1.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/D1Ingest-1.gif") ```
The DataONE download feature allows the user to download data at a given doi or DataONE specific package id. To do so, enter the doi or identifier in the `Import From DataONE` field and select `download`. The download process may take a couple of minutes to run depending on the number of files in the dataONE package. This may be a convenient option if the user does not wish to download files directly to their local machine. Once the files have been successfully downloaded from DataONE, they are displayed in a table. Before proceeding to the next step, the user can select a file to ingest by clicking on the corresponding row in the data table. @@ -607,11 +607,11 @@ After this step, the workflow is identical for both methods. However, please not
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/Local_loader_sm.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/Local_loader_sm.gif") ```
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/local_browse.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/local_browse.gif") ``` ### 2. Creating an Input Record Creating an input record requires some basic metadata about the file that is being ingested. Each entry field is briefly explained below. @@ -620,7 +620,7 @@ Creating an input record requires some basic metadata about the file that is bei - Site: To link the selected file with a site, the user can scroll or type to search all the sites in PEcAn. See Example:
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/Selectize_Input_sm.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/Selectize_Input_sm.gif") ```
- Parent: To link the selected file with another dataset, type to search existing datasets in the `Parent` field. @@ -635,7 +635,7 @@ knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studi
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/DateTime.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/DateTime.gif") ``` - Notes: Describe the data that is being uploaded. Please include any citations or references. @@ -656,7 +656,7 @@ If it is necessary to add a new format to PEcAn, the user should fill out the fo Example:
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/new_format_record.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/new_format_record.gif") ``` ### 4. Formats_Variables Record The final step in the ingest process is to register a formats-variables record. This record links pecan variables with variables from the selected data. @@ -672,7 +672,7 @@ The final step in the ingest process is to register a formats-variables record. - Column Number: Vector of integers that list the column numbers associated with variables in a dataset. Required for text files that lack headers.
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("04_advanced_user_guide/02_adding_to_pecan/01_case_studies/images/data-ingest/D1Ingest-9_sm.gif") +knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/D1Ingest-9_sm.gif") ``` Finally, the path to the ingest data is displayed in the `Select Files` box. @@ -707,7 +707,7 @@ If the Format you are looking for is not available, you will need to create a ne Here is an example using a fake dataset: -![example_data](04_advanced_user_guide/images/example_data.png) +![example_data](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/example_data.png) @@ -725,7 +725,8 @@ You will need to fill out the following fields: Here is the Formats record for the example data: -![format_record_1](04_advanced_user_guide/images/format_record_1.png) +![format_record_1](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/format_record_1.png) + When you have finished this section, hit Create. The final record will be displayed on the screen. #### Formats -> Variables @@ -736,7 +737,7 @@ To enter this data, select Edit Record and on the edit screen select View Relate Here is the record for the example data after adding related variables: -![format_record_2](04_advanced_user_guide/images/format_record_2.png) +![format_record_2](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/format_record_2.png) ##### Name and Unit From c323d4e2a8593a86a661d75c36b0207b1e897ccd Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 16 Apr 2020 17:56:58 -0500 Subject: [PATCH 0892/2289] bad service name --- docker-compose.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker-compose.yml b/docker-compose.yml index af3f77f9689..2fadf245477 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -301,7 +301,7 @@ services: # Shiny Apps # ---------------------------------------------------------------------- # PEcAn DB Sync visualization - maespa: + dbsync: image: pecan/shiny-dbsync:${PECAN_VERSION:-latest} restart: unless-stopped networks: From 9ed21c954f64d24055d91cd8682cb7b2ddf97863 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 20 Apr 2020 08:57:55 -0500 Subject: [PATCH 0893/2289] updates bsaed on reviews (#2573) * need sorting by ip * make sure to preserve env vars * updates bsaed on reviews - new mapping ip -> geo - add legend - some variables to change map --- shiny/dbsync/Dockerfile | 8 +- shiny/dbsync/app.R | 170 +++++++++++++++++++++------------ shiny/dbsync/geoip.json | 1 + shiny/dbsync/save-env-shiny.sh | 8 ++ 4 files changed, 124 insertions(+), 63 deletions(-) create mode 100644 shiny/dbsync/geoip.json create mode 100755 shiny/dbsync/save-env-shiny.sh diff --git a/shiny/dbsync/Dockerfile b/shiny/dbsync/Dockerfile index d7912e4d1b0..2506d2236c5 100644 --- a/shiny/dbsync/Dockerfile +++ b/shiny/dbsync/Dockerfile @@ -3,9 +3,13 @@ FROM rocker/shiny ENV PGHOST=postgres \ PGDATABASE=bety \ PGUSER=bety \ - PGPASSWORD=bety + PGPASSWORD=bety \ + GEOCACHE=/srv/shiny-server/geoip.json RUN apt-get -y install libpq-dev libssl-dev \ - && install2.r -e -s -n -1 dbplyr DT leaflet rgeolocate RPostgreSQL \ + && install2.r -e -s -n -1 curl dbplyr DT leaflet RPostgreSQL \ && rm -rf /srv/shiny-server/* ADD . 
/srv/shiny-server/
+
+# special script to start shiny server and preserve env variable
+CMD /srv/shiny-server/save-env-shiny.sh
diff --git a/shiny/dbsync/app.R b/shiny/dbsync/app.R
index 9ae6602c8e9..98fdda1bc21 100644
--- a/shiny/dbsync/app.R
+++ b/shiny/dbsync/app.R
@@ -1,12 +1,57 @@
 library(shiny)
 library(leaflet)
 library(dbplyr)
-library(rgeolocate)
 library(RPostgreSQL)

-# this file can be downloaded from
-# https://lite.ip2location.com/database/ip-country-region-city-latitude-longitude
-file <- "IP2LOCATION-LITE-DB5.BIN"
+# cached geo information to prevent frequent lookups
+geoip <- list()
+geocache <- Sys.getenv("GEOCACHE", "geoip.json")
+
+# maximum number of lines of the sync log to consider; if this is
+# large, the sync update can take a long time
+maxlines <- 5000
+
+# maximum time in hours before sync is red
+maxtime <- 24
+
+# number of bins to use when rendering lines
+maxbins <- 5
+
+# show hosts with missing sync_url
+allow_no_url <- FALSE
+
+# mapping to fix hostnames
+host_mapping <- list(
+  "wisconsin"="tree.aos.wisc.edu",
+  "terra-mepp.igb.illinois.edu"="terra-mepp.illinois.edu",
+  "ecn.purdue.edu"="engineering.purdue.edu",
+  "paleon-pecan.virtual.crc.nd.edu"="crc.nd.edu"
+)
+
+# given an IP address, look up geospatial info;
+# uses a cache to prevent too many requests (1000 per day)
+get_geoip <- function(ip) {
+  if (length(geoip) == 0 && file.exists(geocache)) {
+    geoip <<- jsonlite::read_json(geocache, simplifyVector = TRUE)
+  }
+  if (! ip %in% geoip$ip) {
+    print(paste("CACHE MISS", ip))
+    res <- curl::curl_fetch_memory(paste0("http://free.ipwhois.io/json/", ip))
+    if (res$status == 200) {
+      geoloc <- jsonlite::parse_json(rawToChar(res$content))
+      geoloc[lengths(geoloc) == 0] <- NA
+      geoloc <- type.convert(geoloc, as.is = TRUE)
+    } else {
+      geoloc <- list(ip=ip, latitude=0, longitude=0, city="?", country="?")
+    }
+    if (length(geoip) == 0) {
+      geoip <<- as.data.frame(geoloc)
+    } else {
+      geoip <<- rbind(geoip, as.data.frame(geoloc))
+    }
+    jsonlite::write_json(geoip, geocache)
+  }
+}

 # get a list of all servers in BETY and their geospatial location
 get_servers <- function() {
@@ -14,31 +14,38 @@ get_servers <- function() {
   bety <- DBI::dbConnect(
     DBI::dbDriver("PostgreSQL"),
     dbname = Sys.getenv("PGDATABASE", "bety"),
-    host = Sys.getenv("PGHOST", "postgres"),
+    host = Sys.getenv("PGHOST", "localhost"),
     user = Sys.getenv("PGUSER", "bety"),
     password = Sys.getenv("PGPASSWORD", "bety")
   )

   servers <- dplyr::tbl(bety, "machines") %>%
     dplyr::filter(!is.na(sync_host_id)) %>%
-    dplyr::filter(sync_url != "") %>%
+    dplyr::filter(sync_url != "" || allow_no_url) %>%
     dplyr::arrange(sync_host_id) %>%
     dplyr::select(hostname, sync_host_id, sync_url, sync_start, sync_end) %>%
     dplyr::collect() %>%
-    dplyr::mutate(hostname = replace(hostname, hostname=="wisconsin", "tree.aos.wisc.edu")) %>%
-    dplyr::mutate(hostname = replace(hostname, hostname=="terra-mepp.igb.illinois.edu", "terra-mepp.illinois.edu")) %>%
     dplyr::mutate(ip = unlist(lapply(hostname, function(x) {
+      if (x %in% names(host_mapping)) {
+        ip <- nsl(host_mapping[[x]])
+      } else {
+        ip <- nsl(x)
+      }
       ifelse(is.null(ip), NA, ip)
     }))) %>%
-    dplyr::mutate(version = NA, lastdump = NA) #%>%
-    #dplyr::filter(!is.na(ip))
+    dplyr::mutate(version = NA, lastdump = NA, migrations = NA) %>%
+    dplyr::filter(!is.na(ip)) %>%
+    dplyr::arrange(ip)

   # close connection
   DBI::dbDisconnect(bety)

   # convert ip address to geo 
location + lapply(servers$ip, get_geoip) + locations <- geoip %>% + dplyr::filter(ip %in% servers$ip) %>% + dplyr::arrange(ip) %>% + dplyr::select("city", "country", "latitude", "longitude") # combine tables servers <- cbind(servers, locations) @@ -47,7 +99,7 @@ get_servers <- function() { servers[, paste0("server_", servers$sync_host_id)] <- NA # return servers - servers + servers %>% dplyr::arrange(sync_host_id) } # fetch information from the actual servers @@ -66,9 +118,11 @@ check_servers <- function(servers, progress) { if (!is.na(as.numeric(version[1]))) { servers[servers$sync_url == url,'version'] <<- version[2] servers[servers$sync_url == url,'lastdump'] <<- version[4] + servers[servers$sync_url == url,'migrations'] <<- version[1] } else { servers[servers$sync_url == url,'version'] <<- NA servers[servers$sync_url == url,'lastdump'] <<- NA + servers[servers$sync_url == url,'migrations'] <<- NA } } progress$inc(amount = 1) @@ -82,12 +136,14 @@ check_servers <- function(servers, progress) { if (res$status == 200) { url <- sub("sync.log", "bety.tar.gz", res$url) lines <- strsplit(rawToChar(res$content), '\n', fixed = TRUE)[[1]] - data <- list() - for (line in lines) { + now <- as.POSIXlt(Sys.time(), tz="UTC") + for (line in tail(lines, maxlines)) { pieces <- strsplit(trimws(line), ' ', fixed=TRUE)[[1]] if (length(pieces) == 8) { if (pieces[8] == 0) { - servers[servers$sync_url == url, paste0('server_', pieces[7])] <<- paste(pieces[1:6], collapse = " ") + when <- strptime(paste(pieces[1:6], collapse = " "), format="%a %b %d %T UTC %Y", tz="UTC") + tdiff <- min(maxtime, difftime(now, when, units = "hours")) + servers[servers$sync_url == url, paste0('server_', pieces[7])] <<- tdiff } } else { print(line) @@ -106,7 +162,7 @@ check_servers <- function(servers, progress) { } # return vector to use in polylines -check_sync <- function(servers, what) { +check_sync <- function(servers) { ids <- servers$sync_host_id # helper function to see if two servers are connected @@ -116,44 +172,34 @@ check_sync <- function(servers, what) { } # build up list of all connections - result <- c() + lat <- c() + lon <- c() + tdiff <- c() + for (src in ids) { + src_x <- servers[servers$sync_host_id==src, 'longitude'] + src_y <- servers[servers$sync_host_id==src, 'latitude'] for (dst in ids) { if (connected(src, dst)) { - if (what == "color") { - when <- strptime(servers[servers$sync_host_id==src, paste0("server_", dst)], - format="%a %b %d %T UTC %Y", - tz="UTC") - now <- as.POSIXlt(Sys.time(), tz="UTC") - tdiff <- difftime(now, when, units = "hours") - if (tdiff < 0) { - result <- c(result, "purple") - } else if (tdiff < 6) { - result <- c(result, "green") - } else if (tdiff < 24) { - result <- c(result, "yellow") - } else { - result <- c(result, "red") - } - mycolors <<- result - } else { - src_x <- servers[servers$sync_host_id==src, what] - dst_x <- servers[servers$sync_host_id==dst, what] - result <- c(result, c(src_x, src_x + (dst_x - src_x) / 2, NA)) - } + tdiff <- c(tdiff, servers[servers$sync_host_id==src, paste0("server_", dst)]) + + dst_x <- servers[servers$sync_host_id==dst, 'longitude'] + lon <- c(lon, c(src_x, (src_x + (dst_x - src_x) / 2), NA)) + + dst_y <- servers[servers$sync_host_id==dst, 'latitude'] + lat <- c(lat, c(src_y, (src_y + (dst_y - src_y) / 2), NA)) } } } # need to have at least one polyline, will draw a line from server 1 to server 1 - if (length(result) == 0) { - if (what == "color") { - result <- c("black") - } else { - result <- c(servers[1, what], servers[1, what], NA) - } + if 
(length(tdiff) == 0) { + src_x <- servers[1, 'longitude'] + src_y <- servers[1, 'latitude'] + return(list(latitude=c(src_y, src_y, NA), longitude=c(src_x, src_x, NA), value=c(0))) + } else { + return(list(latitude=lat, longitude=lon, value=tdiff)) } - return(result) } # Define UI for application that draws a histogram @@ -189,19 +235,19 @@ ui <- fluidPage( # Refresh button actionButton("refresh_servers", "Update Servers"), - actionButton("refresh_sync", "Update Sync"), - - # footer - hr(), - div(HTML('This site or product includes IP2Location LITE data available from https://lite.ip2location.com.')) + actionButton("refresh_sync", "Update Sync") ) # Define server logic required to draw map server <- function(input, output, session) { - - # servers is what is changed, start with just data from database - values <- reactiveValues(servers=get_servers()) + # red -> green color spectrum + colors <- leaflet::colorBin("RdYlGn", domain = c(0, maxtime), bins = maxbins, na.color = "purple", reverse = TRUE) + # servers is what is changed, start with just data from database + servers <- get_servers() + values <- reactiveValues(servers=servers, + sync=check_sync(servers)) + # update server list (quick) observeEvent(input$refresh_servers, { session$sendCustomMessage("disableUI", "") @@ -214,6 +260,7 @@ server <- function(input, output, session) { session$sendCustomMessage("disableUI", "") progress <- Progress$new(session, min=0, max=2*nrow(values$servers)) values$servers <- check_servers(values$servers, progress) + values$sync <- check_sync(values$servers) progress$close() session$sendCustomMessage("enableUI", "") }) @@ -224,17 +271,18 @@ server <- function(input, output, session) { addProviderTiles(providers$Stamen.TonerLite, options = providerTileOptions(noWrap = TRUE) ) %>% - addMarkers(~long, ~lat, + addMarkers(~longitude, ~latitude, label = ~htmltools::htmlEscape(hostname), clusterOptions = markerClusterOptions(maxClusterRadius = 1)) %>% - addPolylines(lng = ~check_sync(values$servers, "long"), - lat = ~check_sync(values$servers, "lat"), - color = ~check_sync(values$servers, "color")) + addPolylines(~longitude, ~latitude, + color = colors(values$sync$value), data=values$sync) %>% + addLegend("bottomright", colors, values$sync$value, + title = "since last sync", labFormat = labelFormat(suffix =" hours")) }) # create a table of all servers that have a sync_host_id and sync_url output$table <- DT::renderDataTable({ - DT::datatable(values$servers %>% dplyr::select("sync_host_id", "hostname", "city", "lastdump", "version")) + DT::datatable(values$servers %>% dplyr::select("sync_host_id", "hostname", "city", "country", "lastdump", "migrations")) }) } diff --git a/shiny/dbsync/geoip.json b/shiny/dbsync/geoip.json new file mode 100644 index 00000000000..3632da7f652 --- /dev/null +++ b/shiny/dbsync/geoip.json @@ -0,0 +1 @@ +[{"ip":"128.174.124.54","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Illinois","city":"Urbana","latitude":40.1106,"longitude":-88.2073,"asn":"AS38","org":"University of Illinois","isp":"University of Illinois","timezone":"America/Chicago","timezone_name":"Central Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-21600,"timezone_gmt":"GMT -6:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US 
dollars","completed_requests":92},{"ip":"128.197.168.114","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Massachusetts","city":"Boston","latitude":42.3601,"longitude":-71.0589,"asn":"AS111","org":"Boston University","isp":"Boston University","timezone":"America/New_York","timezone_name":"Eastern Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-18000,"timezone_gmt":"GMT -5:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":92},{"ip":"130.199.3.21","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"New York","city":"Bellport","latitude":40.757,"longitude":-72.9393,"asn":"AS43","org":"Brookhaven National Laboratory","isp":"Brookhaven National Laboratory","timezone":"America/New_York","timezone_name":"Eastern Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-18000,"timezone_gmt":"GMT -5:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":92},{"ip":"144.92.131.21","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Wisconsin","city":"Madison","latitude":43.0731,"longitude":-89.4012,"asn":"AS59","org":"University of Wisconsin Madison","isp":"University of Wisconsin Madison","timezone":"America/Chicago","timezone_name":"Central Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-21600,"timezone_gmt":"GMT -6:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":92},{"ip":"141.142.227.158","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Illinois","city":"Urbana","latitude":40.1106,"longitude":-88.2073,"asn":"AS1224","org":"National Center Supercomputing Applications","isp":"National Center for Supercomputing Applications","timezone":"America/Chicago","timezone_name":"Central Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-21600,"timezone_gmt":"GMT -6:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":92},{"ip":"128.174.124.40","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Illinois","city":"Urbana","latitude":40.1106,"longitude":-88.2073,"asn":"AS38","org":"University of Illinois","isp":"University of Illinois","timezone":"America/Chicago","timezone_name":"Central Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-21600,"timezone_gmt":"GMT -6:00","currency":"US 
Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":92},{"ip":"128.196.65.37","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Arizona","city":"Tucson","latitude":32.2217,"longitude":-110.9265,"asn":"AS1706","org":"University of Arizona","isp":"University of Arizona","timezone":"America/Phoenix","timezone_name":"Mountain Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-25200,"timezone_gmt":"GMT -7:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":92},{"ip":"193.166.223.38","success":true,"type":"IPv4","continent":"Europe","country":"Finland","country_code":"FI","country_flag":"https://cdn.ipwhois.io/flags/fi.svg","country_capital":"Helsinki","country_phone":358,"country_neighbours":"NO,RU,SE","region":"Uusimaa","city":"Helsinki","latitude":60.1699,"longitude":24.9384,"asn":"AS1741","org":"FUNET","isp":"CSC - Tieteen tietotekniikan keskus Oy","timezone":"Europe/Helsinki","timezone_name":"Eastern European Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":7200,"timezone_gmt":"GMT +2:00","currency":"Euro","currency_code":"EUR","currency_symbol":"€","currency_rates":0.9195,"currency_plural":"euros","completed_requests":92,"continent_code":"EU"},{"ip":"128.210.26.15","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Indiana","city":"West Lafayette","latitude":40.4259,"longitude":-86.9081,"asn":"AS17","org":"Purdue University","isp":"Purdue University","timezone":"America/New_York","timezone_name":"Eastern Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-18000,"timezone_gmt":"GMT -5:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":100},{"ip":"141.142.227.159","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Illinois","city":"Urbana","latitude":40.1106,"longitude":-88.2073,"asn":"AS1224","org":"National Center Supercomputing Applications","isp":"National Center for Supercomputing Applications","timezone":"America/Chicago","timezone_name":"Central Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-21600,"timezone_gmt":"GMT -6:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":100},{"ip":"130.127.204.30","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"South Carolina","city":"Clemson","latitude":34.6834,"longitude":-82.8374,"asn":"AS12148","org":"Clemson University","isp":"Clemson University","timezone":"America/New_York","timezone_name":"Eastern Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-18000,"timezone_gmt":"GMT 
-5:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":103},{"ip":"131.243.130.42","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"California","city":"Berkeley","latitude":37.8716,"longitude":-122.2727,"asn":"AS16","org":"Lawrence Berkeley National Laboratory","isp":"Lawrence Berkeley National Laboratory","timezone":"America/Los_Angeles","timezone_name":"Pacific Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-28800,"timezone_gmt":"GMT -8:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":103},{"ip":"128.46.104.5","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Indiana","city":"West Lafayette","latitude":40.4259,"longitude":-86.9081,"asn":"AS17","org":"Purdue University","isp":"Purdue University","timezone":"America/New_York","timezone_name":"Eastern Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-18000,"timezone_gmt":"GMT -5:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":105},{"ip":"54.85.105.29","success":true,"type":"IPv4","continent":"North America","country":"United States","country_code":"US","country_flag":"https://cdn.ipwhois.io/flags/us.svg","country_capital":"Washington","country_phone":1,"country_neighbours":"CA,MX,CU","region":"Virginia","city":"Ashburn","latitude":39.0438,"longitude":-77.4874,"asn":"AS14618","org":"Amazon.com, Inc.","isp":"Amazon.com, Inc.","timezone":"America/New_York","timezone_name":"Eastern Standard Time","timezone_dstOffset":0,"timezone_gmtOffset":-18000,"timezone_gmt":"GMT -5:00","currency":"US Dollar","currency_code":"USD","currency_symbol":"$","currency_rates":1,"currency_plural":"US dollars","completed_requests":105}] diff --git a/shiny/dbsync/save-env-shiny.sh b/shiny/dbsync/save-env-shiny.sh new file mode 100755 index 00000000000..d681b66d457 --- /dev/null +++ b/shiny/dbsync/save-env-shiny.sh @@ -0,0 +1,8 @@ +#!/bin/sh + +# save env +env > /home/shiny/.Renviron +chown shiny.shiny /home/shiny/.Renviron + +# start shiny server +/usr/bin/shiny-server.sh From 93b8e6632f6e67d21ecfc2cfb0d545608b7fe3a1 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 20 Apr 2020 16:47:45 +0200 Subject: [PATCH 0894/2289] install glkp system library --- .travis.yml | 1 + docker/depends/Dockerfile | 1 + 2 files changed, 2 insertions(+) diff --git a/.travis.yml b/.travis.yml index 8f0c94b4c68..f7bf002c73c 100644 --- a/.travis.yml +++ b/.travis.yml @@ -22,6 +22,7 @@ _apt: &apt-base - libgdal-dev - libgl1-mesa-dev - libglu1-mesa-dev + - libglpk-dev # indirectly needed by BayesianTools - libgmp-dev - libhdf5-dev - liblapack-dev diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index 6481f4c5dc9..ca37ccd0586 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -14,6 +14,7 @@ RUN apt-get update \ jags \ time \ libgdal-dev \ + libglpk-dev \ librdf0-dev \ libnetcdf-dev \ libudunits2-dev \ From 1e3940a56ccd87f8bc8c2000c17a47ce99e069fb Mon Sep 17 00:00:00 2001 
From: istfer Date: Mon, 20 Apr 2020 11:34:01 -0500 Subject: [PATCH 0895/2289] fix monitor (fixes #2571) --- docker/monitor/monitor.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docker/monitor/monitor.py b/docker/monitor/monitor.py index a089df5f89a..88276535f1a 100644 --- a/docker/monitor/monitor.py +++ b/docker/monitor/monitor.py @@ -24,7 +24,7 @@ # parameters to connect to BETY database postgres_host = os.getenv('PGHOST', 'postgres') -postgres_port = os.getenv('PGPORT', 'postgres') +postgres_port = os.getenv('PGPORT', '5432') postgres_user = os.getenv('BETYUSER', 'bety') postgres_password = os.getenv('BETYPASSWORD', 'bety') postgres_database = os.getenv('BETYDATABASE', 'bety') @@ -148,6 +148,7 @@ def insert_model(model_info): postgres_host, postgres_port, postgres_database, postgres_user, postgres_password ) + conn = None try: # connect to the PostgreSQL database conn = psycopg2.connect(postgres_uri) From 8c4a947396a272c8e0ffd4c9d69ea9327bb3a099 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Thu, 23 Apr 2020 19:10:29 +0530 Subject: [PATCH 0896/2289] Update styler-actions.yml --- .github/workflows/styler-actions.yml | 60 +++++++++++++++++++++++++++- 1 file changed, 59 insertions(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index e414b092939..ae82bfade9d 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -18,7 +18,10 @@ jobs: repo-token: ${{ secrets.GITHUB_TOKEN }} - uses: r-lib/actions/setup-r@master - name: Install dependencies - run: Rscript -e 'install.packages("styler")' + run: | + Rscript -e 'install.packages("styler")' + Rscript -e 'install.packages("devtools")' + Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")' - name: string operations run: | echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt @@ -26,6 +29,11 @@ jobs: text=$(cat new.txt) IFS=',' read -ra ids <<< "$text" for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> new2.txt; fi; done + - name: Upload artifacts + uses: actions/upload-artifact@v1 + with: + name: artifacts + path: new2.txt - name: Style run: for i in $(cat new2.txt); do Rscript -e "styler::style_file("$i")"; done - name: commit @@ -35,8 +43,58 @@ jobs: - uses: r-lib/actions/pr-push@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} + + + check: + needs: [style] + runs-on: ubuntu-latest + container: pecan/depends:develop + steps: + - name: check git version + id: gitversion + run: | + v=$(git --version | grep -oE '[0-9\.]+') + v='cat(numeric_version("'${v}'") < "2.18")' + echo "##[set-output name=isold;]$(Rscript -e "${v}")" + - name: upgrade git if needed + # Hack: actions/checkout wants git >= 2.18, rocker 3.5 images have 2.11 + # Assuming debian stretch because newer images have git >= 2.20 already + if: steps.gitversion.outputs.isold == 'TRUE' + run: | + echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list + apt-get update + apt-get -t stretch-backports upgrade -y git + - uses: actions/checkout@v2 + - uses: r-lib/actions/pr-fetch@master + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + - uses: r-lib/actions/setup-r@master + - name : download artifacts + uses: actions/download-artifact@v1 + with: + name: artifacts + - name : make + shell: bash + run: | + cut -d / -f 1-2 artifacts/new2.txt | tr -d '"' > new.txt + cat new.txt + sort 
new.txt | uniq > uniq.txt
+          cat uniq.txt
+          for i in $(cat uniq.txt); do make .doc/${i}; done
+      - name: commit
+        run: |
+          git config --global user.email "rahulagrawal799110@gmail.com"
+          git config --global user.name "Rahul Agrawal"
+          git add \*.Rd
+          git commit -m 'make'
+      - uses: r-lib/actions/pr-push@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+
+
+  # A mock job just to ensure we have a successful build status
   finish:
+    needs: [check]
     runs-on: ubuntu-latest
     steps:
       - run: true

From 86533c206a0fa1858747ee3340b44606a7f343a6 Mon Sep 17 00:00:00 2001
From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com>
Date: Fri, 24 Apr 2020 13:46:28 +0530
Subject: [PATCH 0897/2289] Update styler-actions.yml

---
 .github/workflows/styler-actions.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index ae82bfade9d..c5819ae2297 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -39,6 +39,7 @@ jobs:
     - name: commit
       run: |
         git add \*.R
+        git add \*.Rmd
         git commit -m 'Style'

From a445a27d776c3111a07d4c8bd31911599975ed4b Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Fri, 24 Apr 2020 11:05:12 -0500
Subject: [PATCH 0898/2289] set sorting to en_US.UTF-8 (fixes #2515) (#2579)

set sorting to en_US.UTF-8 (fixes #2515)
---
 scripts/generate_dependencies.R | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/scripts/generate_dependencies.R b/scripts/generate_dependencies.R
index 975862fd013..5ea9ad772a1 100755
--- a/scripts/generate_dependencies.R
+++ b/scripts/generate_dependencies.R
@@ -1,5 +1,12 @@
 #!/usr/bin/env RScript
 
+# force sorting
+if(capabilities("ICU")) {
+  icuSetCollate(locale = "en_US.UTF-8")
+} else {
+  print("Cannot force sorting; this could result in unpredictable output.")
+}
+
 # following modules will be ignored
 ignore <- c("modules/data.mining")
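The locale pin above matters because sort() in R follows the active collation, so the generated dependency files could come out in a different order on different machines. A small illustration of the difference, assuming an ICU-enabled R build (which is what capabilities("ICU") checks):

    x <- c("base/all", "Makefile")
    sort(x)  # in the C locale this gives "Makefile" "base/all" (uppercase sorts first)
    icuSetCollate(locale = "en_US.UTF-8")
    sort(x)  # with en_US collation: "base/all" "Makefile"

From cdb94d4a584f78483ce569d043d8bd4f7be0ffa9 Mon Sep 17 00:00:00 2001
From: istfer
Date: Fri, 24 Apr 2020 20:54:20 +0300
Subject: [PATCH 0899/2289] Dockerize basgra (#2566)

add basgra

- update docker.sh to build container
- update docker-compose to run model
---
 CHANGELOG.md                  |  1 +
 docker-compose.yml            | 13 +++++++++++++
 docker.sh                     |  9 +++++++++
 docker/monitor/monitor.py     |  1 +
 models/basgra/.Rbuildignore   |  2 ++
 models/basgra/Dockerfile      | 20 +++++++++++++++++++
 models/basgra/model_info.json | 36 +++++++++++++++++++++++++++++++++++
 7 files changed, 82 insertions(+)
 create mode 100644 models/basgra/.Rbuildignore
 create mode 100644 models/basgra/Dockerfile
 create mode 100644 models/basgra/model_info.json

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1667251fcba..5eef0b23cb0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -34,6 +34,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511).
 
 ### Added
+- Dockerize BASGRA_N model.
 - Basic coupling for models BASGRA_N and STICS.
 - PEcAn.priors now exports functions `priorfig` and `plot_densities` (#2439).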
- Models monitoring container for Docker now shows a webpage with models it has seen diff --git a/docker-compose.yml b/docker-compose.yml index 2fadf245477..e0f5661b0fb 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -258,6 +258,19 @@ services: # PEcAn models, list each model you want to run below # ---------------------------------------------------------------------- + # PEcAn basgra model runner + basgra: + image: pecan/model-basgra-basgra_n_v1.0:${PECAN_VERSION:-latest} + restart: unless-stopped + networks: + - pecan + environment: + - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} + depends_on: + - rabbitmq + volumes: + - pecan:/data + # PEcAn sipnet model runner sipnet: image: pecan/model-sipnet-r136:${PECAN_VERSION:-latest} diff --git a/docker.sh b/docker.sh index 3b888f51256..3f05e57c524 100755 --- a/docker.sh +++ b/docker.sh @@ -169,6 +169,15 @@ done # MODEL BUILD SECTION # -------------------------------------------------------------------------------- +# build basgra +for version in BASGRA_N_v1.0; do + ${DEBUG} docker build \ + --tag pecan/model-basgra-$(echo $version | tr '[A-Z]' '[a-z]'):${IMAGE_VERSION} \ + --build-arg MODEL_VERSION="${version}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + models/basgra +done + # build biocro for version in 0.95; do ${DEBUG} docker build \ diff --git a/docker/monitor/monitor.py b/docker/monitor/monitor.py index 88276535f1a..4c2ab72aab9 100644 --- a/docker/monitor/monitor.py +++ b/docker/monitor/monitor.py @@ -149,6 +149,7 @@ def insert_model(model_info): ) conn = None + try: # connect to the PostgreSQL database conn = psycopg2.connect(postgres_uri) diff --git a/models/basgra/.Rbuildignore b/models/basgra/.Rbuildignore new file mode 100644 index 00000000000..2d28facae40 --- /dev/null +++ b/models/basgra/.Rbuildignore @@ -0,0 +1,2 @@ +Dockerfile +model_info.json diff --git a/models/basgra/Dockerfile b/models/basgra/Dockerfile new file mode 100644 index 00000000000..d9c382fa5f3 --- /dev/null +++ b/models/basgra/Dockerfile @@ -0,0 +1,20 @@ +# this needs to be at the top, what version are we building +ARG IMAGE_VERSION="latest" + +# ---------------------------------------------------------------------- +# BUILD PECAN FOR BASGRA +# ---------------------------------------------------------------------- +FROM pecan/models:${IMAGE_VERSION} + +# ---------------------------------------------------------------------- +# SETUP FOR SPECIFIC BASGRA VERSION +# ---------------------------------------------------------------------- + +# Some variables that can be used to set control the docker build +ARG MODEL_VERSION=BASGRA_N_v1.0 + +# Setup model_info file +COPY model_info.json /work/model.json +RUN sed -i -e "s/@VERSION@/${MODEL_VERSION}/g" \ + -e "s#@BINARY@#/usr/local/lib/R/site-library/PEcAn.BASGRA/libs/PEcAn.BASGRA.so#g" /work/model.json + diff --git a/models/basgra/model_info.json b/models/basgra/model_info.json new file mode 100644 index 00000000000..256fb873b1e --- /dev/null +++ b/models/basgra/model_info.json @@ -0,0 +1,36 @@ +{ + "name": "BASGRA_N", + "type": "BASGRA", + "version": "@VERSION@", + "binary": "@BINARY@", + "description": "A version of the BASic GRAssland model (BASGRA) that includes simulation of the N-cycle and digestibility.", + "creator": "Marcel van Oijen", + "contributors": ["Marcel van Oijen", "Mats Hoglind", "David Cameron"], + "links": [ + { + "type": "git", + "description": "Source code to BASGRA_N", + "url": "https://github.com/MarcelVanOijen/BASGRA_N.git" + }, + { + "type": "issues", + 
"description": "Issues related to the BASGRA_N model", + "url": "https://github.com/MarcelVanOijen/BASGRA_N/issues" + }, + { + "type": "documentation", + "description": "Documentation of the BASGRA_N model", + "url": "https://github.com/MarcelVanOijen/BASGRA_N/blob/master/documentation/BASGRA_N_UserGuide.docx" + }, + { + "type": "docker", + "description": "Docker image of the BASGRA_N model", + "url": "pecan/models-basgra" + } + ], + "inputs": {}, + "bibtex": [ + "Hoglind M, van Oijen M, Cameron D, Persson T (2016) Process-based simulation of growth and overwintering of grassland using the BASGRA model. Ecological Modelling 335:1-15.", + "Hoglind M, Cameron D, Persson T, Huang X, van Oijen M (2020) BASGRA_N: A model for grassland productivity, quality and greenhouse gas balance. Ecological Modelling, 417:108925." + ] +} From 0c13d5e5d068ca80fe6303120fc84406a4f769fa Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sat, 25 Apr 2020 11:52:33 -0500 Subject: [PATCH 0900/2289] remove leftover mysql --- base/db/inst/create.db.subset.sh | 19 ------ base/db/inst/dump.db.sh | 49 -------------- base/db/inst/dump.db.subset.sh | 69 ------------------- base/db/inst/mysql2psql_validation.Rmd | 92 -------------------------- web/04-runpecan.php | 6 +- 5 files changed, 1 insertion(+), 234 deletions(-) delete mode 100755 base/db/inst/create.db.subset.sh delete mode 100755 base/db/inst/dump.db.sh delete mode 100755 base/db/inst/dump.db.subset.sh delete mode 100644 base/db/inst/mysql2psql_validation.Rmd diff --git a/base/db/inst/create.db.subset.sh b/base/db/inst/create.db.subset.sh deleted file mode 100755 index 134d98d5480..00000000000 --- a/base/db/inst/create.db.subset.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -# to be run after dump.db.subset.sh - -NEWDB=$1 -mysqladmin create $NEWDB - -for table in citation cultivar covariate pft pfts_prior pfts_specie prior site specie trait treatment variable yield -do - mysql $NEWDB < ${table}s.sql -done diff --git a/base/db/inst/dump.db.sh b/base/db/inst/dump.db.sh deleted file mode 100755 index dcc4644a7a5..00000000000 --- a/base/db/inst/dump.db.sh +++ /dev/null @@ -1,49 +0,0 @@ -#!/bin/bash - -# exports from betydb - -cd $(dirname $0)/.. 
-set -x - -# copy database and load locally -ssh ebi-forecast.igb.illinois.edu "mysqldump --lock-tables=false ZZZZ -u YYYY -pXXXX" > betydump.sql -mysql -u bety -pbety -e 'DROP DATABASE IF EXISTS betydump; CREATE DATABASE betydump' -grep -v "DEFINER" betydump.sql | mysql -f -u bety -pbety betydump - -# anonymize all accounts, set default password to illinois -mysql -u bety -pbety betydump -e 'update users set login=CONCAT("user", id), name=CONCAT("user ", id), email=CONCAT("betydb+", id, "@gmail.com"), city="Urbana, IL", country="USA", field=NULL, created_at=NOW(), updated_at=NOW(), crypted_password="df8428063fb28d75841d719e3447c3f416860bb7", salt="carya", remember_token=NULL, remember_token_expires_at=NULL, access_level=3, page_access_level=3, apikey=NULL, state_prov=NULL, postal_code=NULL;' -mysql -u bety -pbety betydump -e 'update users set login="carya", access_level=1, page_access_level=1 where id=1;' - -# remove all non checked data -mysql -u bety -pbety betydump -e 'delete from traits where checked = -1;' -mysql -u bety -pbety betydump -e 'delete from yields where checked = -1;' - -# remove all secret data -mysql -u bety -pbety betydump -e 'delete from traits where access_level < 3;' -mysql -u bety -pbety betydump -e 'delete from yields where access_level < 3;' - -# update bety -# this assumes there is an environment called dump in the database.yml file -if [ -e ../bety ]; then - (cd ../bety && rake db:migrate RAILS_ENV="dump") -elif [ -e /usr/local/bety ]; then - (cd /usr/local/bety && rake db:migrate RAILS_ENV="dump") -fi - -# dump database and copy to isda -mysqldump -u bety -pbety betydump | gzip > betydump.mysql.gz -cp betydump.mysql.gz /mnt/isda/kooper/public_html/EBI/ - -# create postgres version -mysql -u bety -pbety betydump -e 'drop view mgmtview, yieldsview;' - -echo "DROP DATABASE betydump; CREATE DATABASE betydump" | sudo -u postgres psql -taps server mysql://bety:bety@localhost/betydump?encoding=latin1 bety bety & -SERVER=$! - -sleep 10 -taps pull postgres://bety:bety@localhost/betydump http://bety:bety@localhost:5000 -kill -9 $SERVER - -sudo -u postgres pg_dump betydump | gzip > betydump.psql.gz -cp betydump.psql.gz /mnt/isda/kooper/public_html/EBI diff --git a/base/db/inst/dump.db.subset.sh b/base/db/inst/dump.db.subset.sh deleted file mode 100755 index 45d26042fef..00000000000 --- a/base/db/inst/dump.db.subset.sh +++ /dev/null @@ -1,69 +0,0 @@ -#!/bin/bash -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. 
This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -#Note: this script requires read -DB="ebi_analysis" -PFT=$1 -SPID=`mysql --raw --skip-column-names -e "select specie_id from pfts_species join pfts on pfts_species.pft_id = pfts.id where pfts.name = 'ebifarm.pavi'" ebi_analysis ` - -## if traits given in command, use, otherwise, use given list -TRAITS="('mort2', 'growth_resp_factor', 'leaf_turnover_rate', 'leaf_width', 'nonlocal_dispersal', 'fineroot2leaf', 'root_turnover_rate', 'seedling_mortality', 'stomatal_slope', 'quantum_efficiency', 'r_fract', 'root_respiration_rate', 'Vm_low_temp', 'SLA', 'Vcmax')" -COVS="('leafT', 'airT', 'canopy_layer', 'rootT')" - -IGNORE="--ignore-table=$DB.counties --ignore-table=$DB.county_boundaries --ignore-table=$DB.county_paths --ignore-table=$DB.drop_me --ignore-table=$DB.error_logs --ignore-table=$DB.formats --ignore-table=$DB.inputs --ignore-table=$DB.inputs_runs --ignore-table=$DB.inputs_variables --ignore-table=$DB.likelihoods --ignore-table=$DB.location_yields --ignore-table=$DB.managements --ignore-table=$DB.managements_treatments --ignore-table=$DB.mimetypes --ignore-table=$DB.models --ignore-table=$DB.plants --ignore-table=$DB.posteriors --ignore-table=$DB.posteriors_runs --ignore-table=$DB.runs --ignore-table=$DB.schema_migrations --ignore-table=$DB.users --ignore-table=$DB.visitors" - - -table="trait" - CONDITION="specie_id in ($SPID) and variable_id in (select id from variables where name in $TRAITS)" - mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - - -table="yield" - CONDITION="specie_id in ($SPID)" - mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -# tables linked directly to traits -for table in site specie citation cultivar treatment -do - CONDITION="id in (select ${table}_id from traits where specie_id in ($SPID) and variable_id in (select id from variables where name in $TRAITS))" - mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql -done - -table="variable" -CONDITION="name in $TRAITS or name in $COVS" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -# lookup and auxillary tables -table="covariate" -CONDITION="variable_id in (select id from variables where name in $COVS) and trait_id in (select id from traits where specie_id in ($SPID) and variable_id in (select id from variables where name in $TRAITS))" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -table="pfts_specie" -CONDITION="specie_id in ($SPID)" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -table="pfts_prior" -CONDITION="pft_id in (select pft_id from pfts_species where specie_id in ($SPID));" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -table="prior" -CONDITION="id in (select prior_id from pfts_species, pfts_priors where specie_id in ($SPID) and pfts_priors.pft_id = pfts_species.pft_id);" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -table="pft" -CONDITION="name in ('$PFT');" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - 
-table="yield" -CONDITION="specie_id in ($SPID)" -mysqldump --where="$CONDITION" --lock-all-tables $IGNORE $DB ${table}s > ${table}s.sql - -# Acknowledgements: -# Rolando from LogicWorks: http://dba.stackexchange.com/q/4654/1580 \ No newline at end of file diff --git a/base/db/inst/mysql2psql_validation.Rmd b/base/db/inst/mysql2psql_validation.Rmd deleted file mode 100644 index bd9bd61f5db..00000000000 --- a/base/db/inst/mysql2psql_validation.Rmd +++ /dev/null @@ -1,92 +0,0 @@ -Check MySQL --> PSQL Migration -======================================================== - - -This script will compare tables before and after migration from MySQL to PSQL ([Redmine issue 1860](https://ebi-forecast.igb.illinois.edu/redmine/issues/1860)) - -Note: For more information about accessing the database, see the [PEcAn settings wiki](https://pecanproject.github.io/pecan-documentation/master/pecan-xml-configuration.html#database-access). - - -```{r} -library(PEcAn.DB) -library(RMySQL) -library(RPostgreSQL) -library(testthat) -``` - -### Create connections to MySQL and PSQL versions - -```{r} -params = list(dbname = "ebi_production_copy", - user = "bety", - password = "bety", - host = "pecandev.igb.illinois.edu") - -mysqlparams <- c(params, driver = "MySQL") -mysqlparams$dbname <- "ebi_production_copy_clean" -psqlparams <- c(params, driver = "PostgreSQL") -psqlparams$dbname <- "ebi_production_copy" - -mcon <- db.open(mysqlparams) -pcon <- db.open(psqlparams) -``` - -### Check that they have the same tables - - -```{r} -mtables <- db.query("show tables", con = mcon) -ptables <- db.query("SELECT tablename FROM pg_catalog.pg_tables where tableowner = 'bety'", con = pcon) - -expect_equivalent(mtables, ptables) - -for (t in mtables[,1]) { - if(!grepl("_", t) ){ - print(paste("testing", t)) - mtest <- db.query(paste("select * from", t, "order by id"), con=mcon) - ptest <- db.query(paste("select * from", t, "order by id"), con=pcon) - expect_equal(colnames(mtest), colnames(ptest)) - - ## test numeric cols only - num.cols <- sapply(mtest, class) %in% c("integer", "numeric") - - mnums <- mtest[,num.cols] - pnums <- ptest[,num.cols] - mnums[is.na(mnums)] <- -9999 - pnums[is.na(pnums)] <- -9999 - - expect_equivalent(mnums, pnums) - - ## test char cols - - char.cols <- sapply(ptest, class) == "character" - mchar <- mtest[, char.cols] - pchar <- ptest[, char.cols] - mchar[is.na(mchar)] <- -9999 - pchar[is.na(pchar)] <- -9999 - if(ncol(mchar) > 0 && any(!mchar == pchar)){ - - ptmp <- pchar[!mchar == pchar] - sink(tempfile()) - asciitest <- !ptmp == showNonASCII(pchar[!mchar == pchar]) - sink() - diffs <- data.frame(mysql = mchar[!mchar == pchar], - psql = pchar[!mchar == pchar]) - asciidiffs <- diffs[asciitest,] - - print(paste("this table has", sum(!mchar==pchar), - " char mismatches")) - ## we only want to print out examples where diffs are not related to - ## mysql being latin1 encoding - print("printing elements where databases differences (probably) not related to character encoding ") - if(sum(asciitest) > 0) print(t(asciidiffs)) - - } - } - - } -} -``` - - - diff --git a/web/04-runpecan.php b/web/04-runpecan.php index 9fc82a2dc09..0e46dcfaa9b 100644 --- a/web/04-runpecan.php +++ b/web/04-runpecan.php @@ -260,11 +260,7 @@ fwrite($fh, " ${db_bety_port}" . PHP_EOL); } fwrite($fh, " ${db_bety_database}" . PHP_EOL); -if ($db_bety_type == "mysql") { - fwrite($fh, " MySQL" . PHP_EOL); -} else if ($db_bety_type = "pgsql") { - fwrite($fh, " PostgreSQL" . PHP_EOL); -} +fwrite($fh, " PostgreSQL" . 
PHP_EOL); fwrite($fh, " true" . PHP_EOL); fwrite($fh, " " . PHP_EOL); From 042326526c7531c2478e3bf691da4cd2c4ad98a8 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 23 Apr 2020 10:49:39 +0200 Subject: [PATCH 0901/2289] typo in test file --- models/stics/tests/testthat.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/stics/tests/testthat.R b/models/stics/tests/testthat.R index f5599b0ba0c..b7618603b85 100644 --- a/models/stics/tests/testthat.R +++ b/models/stics/tests/testthat.R @@ -10,4 +10,4 @@ library(testthat) library(PEcAn.utils) PEcAn.logger::logger.setQuitOnSevere(FALSE) -test_check("PEcAn.ModelName") +test_check("PEcAn.STICS") From c6fa15648ce1c6a998987231b5d4f0bf8b627f80 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 26 Apr 2020 03:38:33 +0200 Subject: [PATCH 0902/2289] let database handle created_at and updated_at timestamps. Closes #1083 --- base/db/R/dbfiles.R | 34 +++++++------------ base/db/R/get.trait.data.pft.R | 13 +++---- base/db/R/utils_db.R | 8 ++--- base/db/inst/import-try/93.create.try.sites.R | 2 +- base/db/inst/import-try/README.md | 2 -- base/db/man/get.id.Rd | 6 ++-- base/db/tests/Rcheck_reference.log | 1 - base/settings/R/check.all.settings.R | 20 ++++------- base/utils/R/sensitivity.R | 16 ++++----- docker/data/add.util.sh | 20 +++++------ docker/monitor/monitor.py | 16 ++++----- modules/assim.sequential/R/sda.enkf.R | 29 ++++++++++------ modules/data.land/inst/LoadFLUXNETsites.R | 8 ++--- scripts/add.util.sh | 20 +++++------ web/04-runpecan.php | 4 +-- web/setups/serversyncscript.php | 6 ++-- 16 files changed, 95 insertions(+), 110 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index ab12d053f50..f52823e922a 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -111,12 +111,13 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, # allow.conflicting.dates==TRUE. So, insert new input record. 
if (parent == "") { cmd <- paste0("INSERT INTO inputs ", - "(site_id, format_id, created_at, updated_at, start_date, end_date, name) VALUES (", - siteid, ", ", formatid, ", NOW(), NOW(), '", startdate, "', '", enddate, "','", name, "') RETURNING id") + "(site_id, format_id, start_date, end_date, name) VALUES (", + siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, + "') RETURNING id") } else { cmd <- paste0("INSERT INTO inputs ", - "(site_id, format_id, created_at, updated_at, start_date, end_date, name, parent_id) VALUES (", - siteid, ", ", formatid, ", NOW(), NOW(), '", startdate, "', '", enddate, "','", name, "',", parentid, ") RETURNING id") + "(site_id, format_id, start_date, end_date, name, parent_id) VALUES (", + siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, "',", parentid, ") RETURNING id") } # This is the id that we just registered inserted.id <-db.query(query = cmd, con = con) @@ -372,8 +373,8 @@ dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, ho # insert input db.query( query = paste0( - "INSERT INTO posteriors (pft_id, format_id, created_at, updated_at) VALUES (", - pftid, ", ", formatid, ", NOW(), NOW())" + "INSERT INTO posteriors (pft_id, format_id)", + " VALUES (", pftid, ", ", formatid, ")" ), con = con ) @@ -480,26 +481,17 @@ dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostn con = con)) if (nrow(dbfile) == 0) { - # If no exsting record, insert one - now <- format(Sys.time(), "%Y-%m-%d %H:%M:%S") - - db.query( + # If no existing record, insert one + + insert_result <- db.query( query = paste0("INSERT INTO dbfiles ", - "(container_type, container_id, file_name, file_path, machine_id, created_at, updated_at) VALUES (", + "(container_type, container_id, file_name, file_path, machine_id) VALUES (", "'", type, "', ", id, ", '", basename(in.prefix), "', '", in.path, "', ", hostid, - ", '", now, "', '", now, "')"), + ") RETURNING id"), con = con ) - file.id <- invisible(db.query( - query = paste0( - "SELECT * FROM dbfiles WHERE container_type='", type, - "' AND container_id=", id, - " AND created_at='", now, - "' ORDER BY id DESC LIMIT 1" - ), - con = con - )[['id']]) + file.id <- insert_result[['id']] } else if (!reuse) { # If there is an existing record but reuse==FALSE, return NA. file.id <- NA diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index 606d36b79c8..41cb094f0fa 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -91,7 +91,7 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% dplyr::filter(pft_id == !!pftid) %>% dplyr::arrange(dplyr::desc(created_at)) %>% - head(1) %>% + utils::head(1) %>% dplyr::pull(id) } else { PEcAn.logger::logger.info("No previous posterior found. 
Forcing update") @@ -289,13 +289,10 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, old.files <- list.files(path = pft$outdir) # create a new posterior - now <- format(x = Sys.time(), format = "%Y-%m-%d %H:%M:%S") - db.query(paste0("INSERT INTO posteriors (pft_id, created_at, updated_at) ", - "VALUES (", pftid, ", '", now, "', '", now, "')"), - con = dbcon) - pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid, created_at == !!now) %>% - dplyr::pull(id) + insert_result <- db.query( + paste0("INSERT INTO posteriors (pft_id) VALUES (", pftid, ") RETURNING id"), + con = dbcon) + pft$posteriorid <- insert_result[["id"]] # create path where to store files pathname <- file.path(dbfiles, "posterior", pft$posteriorid) diff --git a/base/db/R/utils_db.R b/base/db/R/utils_db.R index eaab4cdf48f..dad6a6184b3 100644 --- a/base/db/R/utils_db.R +++ b/base/db/R/utils_db.R @@ -446,7 +446,9 @@ db.getShowQueries <- function() { ##' @param values values to be queried in fields corresponding to colnames ##' @param con database connection object, ##' @param create logical: make a record if none found? -##' @param dates logical: update created_at and updated_at timestamps? Used only if `create` is TRUE +##' @param dates Ignored. +##' Formerly indicated whether to set created_at and updated_at timestamps +##' when `create` was TRUE, but the database now always sets them automatically ##' @return will numeric ##' @export ##' @author David LeBauer @@ -455,16 +457,14 @@ db.getShowQueries <- function() { ##' pftid <- get.id("pfts", "name", "salix", con) ##' pftid <- get.id("pfts", c("name", "modeltype_id"), c("ebifarm.salix", 1), con) ##' } -get.id <- function(table, colnames, values, con, create=FALSE, dates=FALSE){ +get.id <- function(table, colnames, values, con, create=FALSE, dates=TRUE){ values <- lapply(values, function(x) ifelse(is.character(x), shQuote(x), x)) where_clause <- paste(colnames, values , sep = " = ", collapse = " and ") query <- paste("select id from", table, "where", where_clause, ";") id <- db.query(query = query, con = con)[["id"]] if (is.null(id) && create) { colinsert <- paste0(colnames, collapse=", ") - if (dates) colinsert <- paste0(colinsert, ", created_at, updated_at") valinsert <- paste0(values, collapse=", ") - if (dates) valinsert <- paste0(valinsert, ", NOW(), NOW()") PEcAn.logger::logger.info("INSERT INTO ", table, " (", colinsert, ") VALUES (", valinsert, ")") db.query(query = paste0("INSERT INTO ", table, " (", colinsert, ") VALUES (", valinsert, ")"), con = con) id <- db.query(query, con)[["id"]] diff --git a/base/db/inst/import-try/93.create.try.sites.R b/base/db/inst/import-try/93.create.try.sites.R index 6fd27cdcc16..026550f1edf 100644 --- a/base/db/inst/import-try/93.create.try.sites.R +++ b/base/db/inst/import-try/93.create.try.sites.R @@ -49,7 +49,7 @@ bety.site.index <- which(names(try.sites) == "bety.site.id") # f. Loop over rows... 
radius.query.string <- 'SELECT id, sitename, ST_Y(ST_Centroid(geometry)) AS lat, ST_X(ST_Centroid(geometry)) AS lon, ST_Distance(ST_Centroid(geometry), ST_SetSRID(ST_MakePoint(%2$f, %1$f), 4326)) as distance FROM sites WHERE ST_Distance(ST_Centroid(geometry), ST_SetSRID(ST_MakePoint(%2$f, %1$f), 4326)) <= %3$f' -insert.query.string <- "INSERT INTO sites(sitename,notes,geometry,user_id,created_at,updated_at) VALUES('%s','%s',ST_Force3D(ST_SetSRID(ST_MakePoint(%f, %f), 4326)),'%s', NOW(), NOW() ) RETURNING id;" +insert.query.string <- "INSERT INTO sites(sitename,notes,geometry,user_id) VALUES('%s','%s',ST_Force3D(ST_SetSRID(ST_MakePoint(%f, %f), 4326)),'%s' ) RETURNING id;" message("Looping over sites and adding to BETY") pb <- txtProgressBar(0, nrow(try.sites), style=3) diff --git a/base/db/inst/import-try/README.md b/base/db/inst/import-try/README.md index 7ad5b00de76..50a5be718ba 100644 --- a/base/db/inst/import-try/README.md +++ b/base/db/inst/import-try/README.md @@ -79,7 +79,5 @@ With a recent-enough R version (> 3.2), we can bring in the following: date_day --> ^^ time_hour --> from measurement time time_minute --> ^^ - created_at --> NOW() - updated_at --> NOW() 2. Store ID at every time step -- match with ObsDataID of TRY? Not perfect because miss time, etc., but may help later. diff --git a/base/db/man/get.id.Rd b/base/db/man/get.id.Rd index 9adb4a32a6f..2bdff9a0144 100644 --- a/base/db/man/get.id.Rd +++ b/base/db/man/get.id.Rd @@ -4,7 +4,7 @@ \alias{get.id} \title{get.id} \usage{ -get.id(table, colnames, values, con, create = FALSE, dates = FALSE) +get.id(table, colnames, values, con, create = FALSE, dates = TRUE) } \arguments{ \item{table}{name of table} @@ -17,7 +17,9 @@ get.id(table, colnames, values, con, create = FALSE, dates = FALSE) \item{create}{logical: make a record if none found?} -\item{dates}{logical: update created_at and updated_at timestamps? Used only if `create` is TRUE} +\item{dates}{Ignored. 
+Formerly indicated whether to set created_at and updated_at timestamps
+when `create` was TRUE, but the database now always sets them automatically}
 }
 \value{
 will numeric
diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log
index 5d2511ac67f..d9cd7590ff1 100644
--- a/base/db/tests/Rcheck_reference.log
+++ b/base/db/tests/Rcheck_reference.log
@@ -56,7 +56,6 @@ get_users: no visible binding for global variable ‘id’
 get_workflow_ids: no visible binding for global variable ‘workflow_id’
 get.trait.data.pft: no visible binding for global variable ‘pft_id’
 get.trait.data.pft: no visible binding for global variable ‘created_at’
-get.trait.data.pft: no visible global function definition for ‘head’
 get.trait.data.pft: no visible binding for global variable ‘stat’
 get.trait.data.pft: no visible binding for global variable ‘trait’
 insert.format.vars: no visible binding for global variable ‘id’
diff --git a/base/settings/R/check.all.settings.R b/base/settings/R/check.all.settings.R
index 81976cf9354..c8e12950de0 100644
--- a/base/settings/R/check.all.settings.R
+++ b/base/settings/R/check.all.settings.R
@@ -940,23 +940,22 @@ check.workflow.settings <- function(settings, dbcon = NULL) {
   if (!"workflow" %in% names(settings)) {
     now <- format(Sys.time(), "%Y-%m-%d %H:%M:%S")
     if (is.MultiSettings(settings)) {
-      PEcAn.DB::db.query(
+      insert_result <- PEcAn.DB::db.query(
         paste0(
           "INSERT INTO workflows (",
-          "folder, model_id, hostname, started_at, created_at) ",
+          "folder, model_id, hostname, started_at) ",
           "values ('",
           settings$outdir, "','",
           settings$model$id, "', '",
           settings$host$name, "', '",
-          now, "', '",
-          now, "')"),
+          now, "') RETURNING id"),
         con = dbcon)
     } else {
-      PEcAn.DB::db.query(
+      insert_result <- PEcAn.DB::db.query(
         paste0(
           "INSERT INTO workflows (",
           "folder, site_id, model_id, hostname, start_date, end_date, ",
-          "started_at, created_at) ",
+          "started_at) ",
          "values ('",
          settings$outdir, "','",
          settings$run$site$id, "','",
          settings$model$id, "', '",
          settings$host$name, "', '",
          settings$run$start.date, "', '",
          settings$run$end.date, "', '",
-          now, "', '",
-          now, "')"),
+          now, "') RETURNING id"),
        con = dbcon)
    }
-    settings$workflow$id <- PEcAn.DB::db.query(
-      paste0(
-        "SELECT id FROM workflows WHERE created_at='", now,
-        "' ORDER BY id DESC LIMIT 1;"),
-      con = dbcon)[["id"]]
+    settings$workflow$id <- insert_result[["id"]]
     fixoutdir <- TRUE
   }
 } else {
diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
index 4d3a8f41c53..72f441fb3e8 100644
--- a/base/utils/R/sensitivity.R
+++ b/base/utils/R/sensitivity.R
@@ -255,34 +255,29 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model,
       trait.samples[[i]][trait] <- quantile.samples[[i]][quantile.str, trait, drop=FALSE]
 
       if (!is.null(con)) {
-        now <- format(Sys.time(), "%Y-%m-%d %H:%M:%S")
         paramlist <- paste0("quantile=", quantile.str, ",trait=", trait, ",pft=", pftname)
-        PEcAn.DB::db.query(paste0("INSERT INTO runs (model_id, site_id, start_time, finish_time, outdir, created_at, ensemble_id, parameter_list) values ('",
+        insert_result <- PEcAn.DB::db.query(paste0("INSERT INTO runs (model_id, site_id, start_time, finish_time, outdir, ensemble_id, parameter_list) values ('",
                        settings$model$id, "', '",
                        settings$run$site$id, "', '",
                        settings$run$start.date, "', '",
                        settings$run$end.date, "', '",
-                       settings$run$outdir, "', '",
-                       now, "', ", ensemble.id, ", '",
-                       paramlist, "')"), con = con)
-        run.id <- PEcAn.DB::db.query(paste0("SELECT id FROM runs WHERE created_at='",
-                                            now, "' AND parameter_list='", paramlist, "'"), con = con)[["id"]]
+                       settings$run$outdir, "', ",
+                       ensemble.id, ", '",
+                       paramlist, "') RETURNING id"), con = con)
+        run.id <- insert_result[["id"]]
 
         # associate posteriors with ensembles
         for (pft in defaults) {
-          PEcAn.DB::db.query(paste0("INSERT INTO posteriors_ensembles (posterior_id, ensemble_id, created_at, updated_at) values (",
+          PEcAn.DB::db.query(paste0("INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) values (",
                                     pft$posteriorid, ", ",
-                                    ensemble.id, ", '",
-                                    now, "', '",
-                                    now, "');"), con = con)
+                                    ensemble.id, ");"), con = con)
         }
 
         # associate inputs with runs
         if (!is.null(inputs)) {
          for (x in inputs) {
-            PEcAn.DB::db.query(paste0("INSERT INTO inputs_runs (input_id, run_id, created_at) ",
-                                      "values (", settings$run$inputs[[x]], ", ", run.id, ", NOW());"),
+            PEcAn.DB::db.query(paste0("INSERT INTO inputs_runs (input_id, run_id) ",
+                                      "values (", settings$run$inputs[[x]], ", ", run.id, ");"),
                                con = con)
          }
        }
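The string-pasted SQL in these hunks is easy to get wrong when a quoted value is removed: the quote fragments on the neighboring lines have to be rebalanced, as in the ensemble.id lines above (a numeric id is interpolated unquoted). A more robust long-term pattern would be to let the driver do the quoting and return the generated id in the same round trip; a minimal sketch, assuming con is a DBI connection to PostgreSQL and that the bind values are plain R variables:

    res <- DBI::dbGetQuery(
      con,
      paste0("INSERT INTO runs (model_id, site_id, start_time, finish_time,",
             " outdir, ensemble_id, parameter_list)",
             " VALUES ($1, $2, $3, $4, $5, $6, $7) RETURNING id"),
      params = list(model_id, site_id, start_time, finish_time,
                    outdir, ensemble_id, paramlist))
    run_id <- res$id

diff --git a/docker/data/add.util.sh b/docker/data/add.util.sh
index 36e3b44637b..0c07020c1e4 100644
--- a/docker/data/add.util.sh
+++ b/docker/data/add.util.sh
@@ -33,7 +33,7 @@ addFormat() {
     fi
     FORMAT_ID=$( ${PSQL} "SELECT id FROM formats WHERE mimetype_id=${MIME_ID} AND name='$2' LIMIT 1;" )
     if [ "$FORMAT_ID" == "" ]; then
-        ${PSQL} "INSERT INTO formats (mimetype_id, name, created_at, updated_at) VALUES (${MIME_ID}, '$2', NOW(), NOW());"
+        ${PSQL} "INSERT INTO formats (mimetype_id, name) VALUES (${MIME_ID}, '$2');"
         FORMAT_ID=$( ${PSQL} "SELECT id FROM formats WHERE mimetype_id=${MIME_ID} AND name='$2' LIMIT 1;" )
         echo "Added new format with ID=${FORMAT_ID} for mimetype_id=${MIME_ID}, name=$2"
     fi
@@ -62,7 +62,7 @@ addInput() {
     fi
     INPUT_ID=$( ${PSQL} "SELECT id FROM inputs WHERE site_id=$1 AND format_id=$2 AND start_date${START_Q} AND end_date${END_Q} LIMIT 1;" )
     if [ "$INPUT_ID" == "" ]; then
-        ${PSQL} "INSERT INTO inputs (site_id, format_id, name, start_date, end_date, created_at, updated_at) VALUES ($1, $2, '', ${START_I}, ${END_I}, NOW(), NOW());"
+        ${PSQL} "INSERT INTO inputs (site_id, format_id, name, start_date, end_date) VALUES ($1, $2, '', ${START_I}, ${END_I});"
         INPUT_ID=$( ${PSQL} "SELECT id FROM inputs WHERE site_id=$1 AND format_id=$2 AND start_date${START_Q} AND end_date${END_Q} LIMIT 1;" )
         echo "Added new input with ID=${INPUT_ID} for site=$1, format_id=$2, start=$3, end=$4"
     else
@@ -93,8 +93,8 @@ addInputFile() {
     fi
 
     # Make sure host exists
-    ${PSQL} "INSERT INTO machines (hostname, created_at, updated_at)
-        SELECT *, now(), now() FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};"
+    ${PSQL} "INSERT INTO machines (hostname)
+        SELECT * FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};"
 
     # Add file
     ${PSQL} "INSERT INTO dbfiles (container_type, container_id, file_name, file_path, machine_id) VALUES
@@ -116,16 +116,16 @@ addModelFile() {
     MODELID="(SELECT models.id FROM models, modeltypes WHERE model_name='${2}' AND modeltypes.name='${3}' AND modeltypes.id=models.modeltype_id AND revision='${4}')"
 
     # Make sure host exists
-    ${PSQL} "INSERT INTO machines (hostname, created_at, updated_at)
-        SELECT *, now(), now() FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};"
+    ${PSQL} "INSERT INTO machines (hostname)
+        SELECT * FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};"
 
     # Make sure modeltype exists
-    ${PSQL} "INSERT INTO modeltypes (name, created_at, updated_at)
-        SELECT *, now(), now() FROM (SELECT '${3}') AS tmp WHERE NOT EXISTS ${MODELTYPEID};"
+    ${PSQL} "INSERT INTO modeltypes (name)
+        SELECT * FROM (SELECT '${3}') AS tmp WHERE NOT EXISTS ${MODELTYPEID};"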
 
     # Make sure model exists
-    ${PSQL} "INSERT INTO models (model_name, modeltype_id, revision, created_at, updated_at)
-        SELECT *, now(), now() FROM (SELECT '${2}', ${MODELTYPEID}, '${4}') AS tmp WHERE NOT EXISTS ${MODELID};"
+    ${PSQL} "INSERT INTO models (model_name, modeltype_id, revision)
+        SELECT * FROM (SELECT '${2}', ${MODELTYPEID}, '${4}') AS tmp WHERE NOT EXISTS ${MODELID};"
 
     # check if binary already added
     COUNT=$( ${PSQL} "SELECT COUNT(id) FROM dbfiles WHERE container_type='Model' AND container_id=${MODELID} AND file_name='${5}' AND file_path='${6}' and machine_id=${HOSTID};" )
diff --git a/docker/monitor/monitor.py b/docker/monitor/monitor.py
index 4c2ab72aab9..6cc29d8f3e1 100644
--- a/docker/monitor/monitor.py
+++ b/docker/monitor/monitor.py
@@ -164,8 +164,8 @@ def insert_model(model_info):
         else:
             logging.debug("Adding host")
             cur = conn.cursor()
-            cur.execute('INSERT INTO machines (hostname, created_at, updated_at) '
-                        'VALUES (%s, now(), now()) RETURNING id', (pecan_fqdn,))
+            cur.execute('INSERT INTO machines (hostname) '
+                        'VALUES (%s) RETURNING id', (pecan_fqdn,))
             result = cur.fetchone()
             cur.close()
             if not result:
@@ -184,8 +184,8 @@ def insert_model(model_info):
         else:
             logging.debug("Adding modeltype")
             cur = conn.cursor()
-            cur.execute('INSERT INTO modeltypes (name, created_at, updated_at) '
-                        'VALUES (%s, now(), now()) RETURNING id', (model_info['type'],))
+            cur.execute('INSERT INTO modeltypes (name) '
+                        'VALUES (%s) RETURNING id', (model_info['type'],))
             result = cur.fetchone()
             cur.close()
             if not result:
@@ -205,8 +205,8 @@ def insert_model(model_info):
         else:
             logging.debug("Adding model")
             cur = conn.cursor()
-            cur.execute('INSERT INTO models (model_name, modeltype_id, revision, created_at, updated_at) '
-                        'VALUES (%s, %s, %s, now(), now()) RETURNING id',
+            cur.execute('INSERT INTO models (model_name, modeltype_id, revision) '
+                        'VALUES (%s, %s, %s) RETURNING id',
                         (model_info['name'], model_type_id, model_info['version']))
             result = cur.fetchone()
             cur.close()
@@ -231,8 +231,8 @@ def insert_model(model_info):
             logging.debug("Adding model binary")
             cur = conn.cursor()
             cur.execute("INSERT INTO dbfiles (container_type, container_id, file_name, file_path,"
-                        " machine_id, created_at, updated_at)"
-                        " VALUES ('Model', %s, %s, %s, %s, now(), now()) RETURNING id",
+                        " machine_id)"
+                        " VALUES ('Model', %s, %s, %s, %s) RETURNING id",
                         (model_id, os.path.basename(model_info['binary']),
                          os.path.dirname(model_info['binary']), host_id))
             result = cur.fetchone()
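One psycopg2 detail in the monitor.py hunks above: cursor.execute() expects its second argument to be a sequence, so a single bind value keeps the trailing comma, as in (pecan_fqdn,). Without the comma, (pecan_fqdn) is just a parenthesized string, and psycopg2 would try to treat each character as a separate parameter. A minimal reminder, with a hypothetical hostname value:

    # correct: a one-element tuple
    cur.execute('INSERT INTO machines (hostname) VALUES (%s) RETURNING id',
                ('pecan.example.org',))
    # incorrect: ('pecan.example.org') is a plain string, not a tuple

diff --git a/modules/assim.sequential/R/sda.enkf.R b/modules/assim.sequential/R/sda.enkf.R
index 7fb8b3d46ee..09f124511f7 100644
--- a/modules/assim.sequential/R/sda.enkf.R
+++ b/modules/assim.sequential/R/sda.enkf.R
@@ -157,11 +157,13 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
   ###-------------------------------------------------------------------###
   if (!is.null(con)) {
     # write ensemble first
-    now <- format(Sys.time(), "%Y-%m-%d %H:%M:%S")
-    db.query(paste("INSERT INTO ensembles (created_at, runtype, workflow_id) values ('", now,
-                   "', 'EnKF', ", workflow.id, ")", sep = ""), con)
-    ensemble.id <- db.query(paste("SELECT id FROM ensembles WHERE created_at='", now, "'", sep = ""),
-                            con)[["id"]]
+    result <- db.query(
+      paste(
+        "INSERT INTO ensembles (runtype, workflow_id) ",
+        "values ('EnKF', ", workflow.id, ") returning id",
+        sep = ""),
+      con)
+    ensemble.id <- result[['id']]
   } else {
     ensemble.id <- -1
   }
@@ -246,12 +248,19 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,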
 
     ## set RUN.ID
     if (!is.null(con)) {
-      now <- format(Sys.time(), "%Y-%m-%d %H:%M:%S")
       paramlist <- paste("EnKF:", i)
-      run.id[[i]] <- db.query(paste0("INSERT INTO runs (model_id, site_id, start_time, finish_time, outdir, created_at, ensemble_id,",
-                                     " parameter_list) values ('", settings$model$id, "', '", settings$run$site$id, "', '",
-                                     settings$run$start.date, "', '", settings$run$end.date, "', '", settings$outdir, "', '",
-                                     now, "', ", ensemble.id, ", '", paramlist, "') RETURNING id"), con)
+      run.id[[i]] <- db.query(
+        paste0(
+          "INSERT INTO runs (",
+          "model_id, site_id, ",
+          "start_time, finish_time, ",
+          "outdir, ensemble_id, parameter_list) ",
+          "VALUES ('",
+          settings$model$id, "', '", settings$run$site$id, "', '",
+          settings$run$start.date, "', '", settings$run$end.date, "', '",
+          settings$outdir, "', ", ensemble.id, ", '", paramlist, "') ",
+          "RETURNING id"),
+        con)
     } else {
       run.id[[i]] <- paste("EnKF", i, sep = ".")
     }
diff --git a/modules/data.land/inst/LoadFLUXNETsites.R b/modules/data.land/inst/LoadFLUXNETsites.R
index 7c7fe603ad2..dc5eb7490f2 100644
--- a/modules/data.land/inst/LoadFLUXNETsites.R
+++ b/modules/data.land/inst/LoadFLUXNETsites.R
@@ -96,7 +96,7 @@ for(s in 1:nsite){
                  " TOWER_BEGAN =",as.character(AMERIFLUX_table$TOWER_BEGAN[s]),
                  " TOWER_END =",as.character(AMERIFLUX_table$TOWER_END[s])
   )
-  InsertString = paste0("INSERT INTO sites(sitename,country,mat,map,notes,geometry,user_id,created_at,updated_at) VALUES(",
+  InsertString = paste0("INSERT INTO sites(sitename,country,mat,map,notes,geometry,user_id) VALUES(",
                         "'",sitename,"', ",
                         "'",country,"', ",
                         mat,", ",
                         map,", ",
                         "'",notes,"', ",
                         "ST_GeomFromText('POINT(",lon," ",lat," ",elev,")', 4326), ",
                         user.id,
-                        ", NOW(), NOW() );")
+                        ");")
   db.query(InsertString,con)
 }
@@ -185,13 +185,13 @@ for(s in 1:nsite){
       notes = paste0("PI: ",PI,"; ",site.char,"; FLUXNET DESCRIPTION: ",description)
       notes = gsub("'","",notes) # drop single quotes from notes
-      InsertString = paste0("INSERT INTO sites(sitename,country,notes,geometry,user_id,created_at,updated_at) VALUES(",
+      InsertString = paste0("INSERT INTO sites(sitename,country,notes,geometry,user_id) VALUES(",
                             "'",sitename,"', ",
                             "'",country,"', ",
                             "'",notes,"', ",
                             "ST_GeomFromText('POINT(",lon," ",lat," ",elev,")', 4326), ",
                             user.id,
-                            ", NOW(), NOW() );")
+                            ");")
       db.query(InsertString,con)
     }  ## end IF new site
diff --git a/scripts/add.util.sh b/scripts/add.util.sh
index fd9f3b8e39b..ee3ba0bd547 100644
--- a/scripts/add.util.sh
+++ b/scripts/add.util.sh
@@ -38,7 +38,7 @@ addFormat() {
     fi
     FORMAT_ID=$( ${PSQL} "SELECT id FROM formats WHERE mimetype_id=${MIME_ID} AND name='$2' LIMIT 1;" )
     if [ "$FORMAT_ID" == "" ]; then
-        ${PSQL} "INSERT INTO formats (mimetype_id, name, created_at, updated_at) VALUES (${MIME_ID}, '$2', NOW(), NOW());"
+        ${PSQL} "INSERT INTO formats (mimetype_id, name) VALUES (${MIME_ID}, '$2');"
         FORMAT_ID=$( ${PSQL} "SELECT id FROM formats WHERE mimetype_id=${MIME_ID} AND name='$2' LIMIT 1;" )
         echo "Added new format with ID=${FORMAT_ID} for mimetype_id=${MIME_ID}, name=$2"
     fi
@@ -67,7 +67,7 @@ addInput() {
     fi
     INPUT_ID=$( ${PSQL} "SELECT id FROM inputs WHERE site_id=$1 AND format_id=$2 AND start_date${START_Q} AND end_date${END_Q} LIMIT 1;" )
     if [ "$INPUT_ID" == "" ]; then
-        ${PSQL} "INSERT INTO inputs (site_id, format_id, name, start_date, end_date, created_at, updated_at) VALUES ($1, $2, '', ${START_I}, ${END_I}, NOW(), NOW());"
+        ${PSQL} "INSERT INTO inputs (site_id, format_id, 
name, start_date, end_date) VALUES ($1, $2, '', ${START_I}, ${END_I});" INPUT_ID=$( ${PSQL} "SELECT id FROM inputs WHERE site_id=$1 AND format_id=$2 AND start_date${START_Q} AND end_date${END_Q} LIMIT 1;" ) echo "Added new input with ID=${INPUT_ID} for site=$1, format_id=$2, start=$3, end=$4" else @@ -98,8 +98,8 @@ addInputFile() { fi # Make sure host exists - ${PSQL} "INSERT INTO machines (hostname, created_at, updated_at) - SELECT *, now(), now() FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};" + ${PSQL} "INSERT INTO machines (hostname) + SELECT * FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};" # Add file ${PSQL} "INSERT INTO dbfiles (container_type, container_id, file_name, file_path, machine_id) VALUES @@ -121,16 +121,16 @@ addModelFile() { MODELID="(SELECT models.id FROM models, modeltypes WHERE model_name='${2}' AND modeltypes.name='${3}' AND modeltypes.id=models.modeltype_id AND revision='${4}')" # Make sure host exists - ${PSQL} "INSERT INTO machines (hostname, created_at, updated_at) - SELECT *, now(), now() FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};" + ${PSQL} "INSERT INTO machines (hostname) + SELECT * FROM (SELECT '${1}') AS tmp WHERE NOT EXISTS ${HOSTID};" # Make sure modeltype exists - ${PSQL} "INSERT INTO modeltypes (name, created_at, updated_at) - SELECT *, now(), now() FROM (SELECT '${3}') AS tmp WHERE NOT EXISTS ${MODELTYPEID};" + ${PSQL} "INSERT INTO modeltypes (name) + SELECT * FROM (SELECT '${3}') AS tmp WHERE NOT EXISTS ${MODELTYPEID};" # Make sure model exists - ${PSQL} "INSERT INTO models (model_name, modeltype_id, revision, created_at, updated_at) - SELECT *, now(), now() FROM (SELECT '${2}', ${MODELTYPEID}, '${4}') AS tmp WHERE NOT EXISTS ${MODELID};" + ${PSQL} "INSERT INTO models (model_name, modeltype_id, revision) + SELECT * FROM (SELECT '${2}', ${MODELTYPEID}, '${4}') AS tmp WHERE NOT EXISTS ${MODELID};" # check if binary already added COUNT=$( ${PSQL} "SELECT COUNT(id) FROM dbfiles WHERE container_type='Model' AND container_id=${MODELID} AND file_name='${5}' AND file_path='${6}' and machine_id=${HOSTID};" ) diff --git a/web/04-runpecan.php b/web/04-runpecan.php index 0e46dcfaa9b..16d2f44626c 100644 --- a/web/04-runpecan.php +++ b/web/04-runpecan.php @@ -163,9 +163,9 @@ // create the workflow execution $userid=get_userid(); if ($userid != -1) { - $q=$pdo->prepare("INSERT INTO workflows (site_id, model_id, notes, folder, hostname, start_date, end_date, advanced_edit, started_at, created_at, user_id) values (:siteid, :modelid, :notes, '', :hostname, :startdate, :enddate, :advanced_edit, NOW(), NOW(), :userid)"); + $q=$pdo->prepare("INSERT INTO workflows (site_id, model_id, notes, folder, hostname, start_date, end_date, advanced_edit, started_at, user_id) values (:siteid, :modelid, :notes, '', :hostname, :startdate, :enddate, :advanced_edit, NOW(), :userid)"); } else { - $q=$pdo->prepare("INSERT INTO workflows (site_id, model_id, notes, folder, hostname, start_date, end_date, advanced_edit, started_at, created_at) values (:siteid, :modelid, :notes, '', :hostname, :startdate, :enddate, :advanced_edit, NOW(), NOW())"); + $q=$pdo->prepare("INSERT INTO workflows (site_id, model_id, notes, folder, hostname, start_date, end_date, advanced_edit, started_at) values (:siteid, :modelid, :notes, '', :hostname, :startdate, :enddate, :advanced_edit, NOW())"); } $q->bindParam(':siteid', $siteid, PDO::PARAM_INT); $q->bindParam(':modelid', $modelid, PDO::PARAM_INT); diff --git a/web/setups/serversyncscript.php b/web/setups/serversyncscript.php index 
e12df6a0719..eaee96afa37 100644 --- a/web/setups/serversyncscript.php +++ b/web/setups/serversyncscript.php @@ -64,12 +64,10 @@ $date = date("Y-m-d H:i:s"); $stmt = $pdo->prepare("INSERT - INTO machines (id, hostname, created_at, updated_at , sync_host_id, sync_url, sync_contact, sync_start, sync_end) - VALUES (:id, :hostname, :created_at, :updated_at , :sync_host_id, :sync_url, :sync_contact, :sync_start, :sync_end );"); + INTO machines (id, hostname, sync_host_id, sync_url, sync_contact, sync_start, sync_end) + VALUES (:id, :hostname, :sync_host_id, :sync_url, :sync_contact, :sync_start, :sync_end );"); $stmt->bindValue(':id', $id, PDO::PARAM_INT); $stmt->bindValue(':hostname', $fqdn, PDO::PARAM_STR); - $stmt->bindValue(':created_at', $date, PDO::PARAM_STR); - $stmt->bindValue(':updated_at', $date, PDO::PARAM_STR); $stmt->bindValue(':sync_host_id', $host_id, PDO::PARAM_INT); $stmt->bindValue(':sync_url', ' ', PDO::PARAM_STR); $stmt->bindValue(':sync_contact', ' ', PDO::PARAM_STR); From d64cfd8c88cccbe3e4ed5a75f9b1a67ccefe0f7e Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 26 Apr 2020 03:42:28 +0200 Subject: [PATCH 0903/2289] set locale when comparing check output Should cut down on failures caused by sort order within lines. For example, the reference check result contains `imports not declared from: 'dplyr' 'PEcAn.DB'`, but the new result in the C locale becomes `imports not declared from: 'PEcAn.DB' 'dplyr'`. Yes, we have to call both `Sys.setlocale` and `Sys.setenv`, because we need to set the locale both inside the current session and for rcmdcheck::rcmdcheck (which runs in its own process). --- scripts/check_with_errors.R | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 75895c32f1c..891bd5ffba0 100755 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -1,5 +1,9 @@ #!/usr/bin/env Rscript +# Avoid some spurious failures from platform-dependent sorting +invisible(Sys.setlocale("LC_ALL", "en_US.UTF-8")) # sets sorting in this script +Sys.setenv(LC_ALL = "en_US.UTF-8") # sets sorting in rcmdcheck processes + arg <- commandArgs(trailingOnly = TRUE) pkg <- arg[1] From ea13fae526e0dc1c5c84f7abcd1af47ff8644f6d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 26 Apr 2020 11:42:24 +0200 Subject: [PATCH 0904/2289] changelog --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 5eef0b23cb0..f92d78d348c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -23,6 +23,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Update ED docker build, will now build version 2.2.0 and git ### Changed +- Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083). - `PEcAn.DB::insert_table` now uses `DBI::dbAppendTable` internally instead of manually constructed SQL (#2552). - Rebuilt documentation using Roxygen 7. Readers get nicer formatting of usage sections, writers get more flexible behavior when inheriting parameters and less hassle when maintaining namespaces (#2524).
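To make the `DBI::dbAppendTable` entry above concrete, here is a minimal R sketch. It is illustrative only: the in-memory SQLite connection and the `runs` table are hypothetical stand-ins, not PEcAn's actual schema or code.

```r
library(DBI)

# Hypothetical connection and table, for illustration only
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbCreateTable(con, "runs", data.frame(model_id = integer(), notes = character()))

# dbAppendTable builds a parameterized INSERT itself, so there are no
# hand-assembled SQL strings and no manual quoting of values
dbAppendTable(con, "runs", data.frame(model_id = 1L, notes = "it's quoted safely"))

dbReadTable(con, "runs")
dbDisconnect(con)
```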
- Renamed functions that looked like S3 methods but were not: From 079d4772ceaa78bb77a939fcce7ec6d737455e47 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 27 Apr 2020 16:31:05 -0500 Subject: [PATCH 0905/2289] fixes not starting after editing files (fixes #2587) --- web/04-runpecan.php | 6 +++++- web/07-continue.php | 29 +++++++++++++++++++---------- 2 files changed, 24 insertions(+), 11 deletions(-) diff --git a/web/04-runpecan.php b/web/04-runpecan.php index 16d2f44626c..fd9f6af374e 100644 --- a/web/04-runpecan.php +++ b/web/04-runpecan.php @@ -532,7 +532,11 @@ } # create the message - $message = '{"folder": "' . $folder . '", "workflowid": "' . $workflowid . '"}'; + $message = '{"folder": "' . $folder . '", "workflowid": "' . $workflowid . '"'; + if ($model_edit) { + $message .= ', "modeledit": true'; + } + $message .= '}' send_rabbitmq_message($message, $rabbitmq_uri, $rabbitmq_queue); #done diff --git a/web/07-continue.php b/web/07-continue.php index 9b319de9799..6e5142c5a44 100644 --- a/web/07-continue.php +++ b/web/07-continue.php @@ -53,7 +53,6 @@ $stmt->closeCursor(); close_database(); -$exec = "R_LIBS_USER=\"$R_library_path\" $Rbinary CMD BATCH"; $path = "05-running.php?workflowid=$workflowid&hostname=${hostname}"; if ($pecan_edit) { $path .= "&pecan_edit=pecan_edit"; @@ -74,13 +73,6 @@ $fh = fopen($folder . DIRECTORY_SEPARATOR . "STATUS", 'a') or die("can't open file"); fwrite($fh, "\t" . date("Y-m-d H:i:s") . "\tDONE\t\n"); fclose($fh); - - $exec .= " --continue workflow.R workflow2.Rout"; -} else { - if ($model_edit) { - $exec .= " --advanced"; - } - $exec .= " workflow.R"; } # start the workflow again @@ -91,11 +83,28 @@ } else { $rabbitmq_queue = "pecan"; } - $msg_exec = str_replace("\"", "'", $exec); - $message = '{"folder": "' . $folder . '", "custom_application": "' . $msg_exec . '"}'; + + $message = '{"folder": "' . $folder . '", "workflowid": "' . $workflowid . '"'; + if (file_exists($folder . DIRECTORY_SEPARATOR . "STATUS")) { + $message .= ', "continue": true'; + } else if ($model_edit) { + $message .= ', "modeledit": true'; + } + $message .= '}'; send_rabbitmq_message($message, $rabbitmq_uri, $rabbitmq_queue); } else { chdir($folder); + + $exec = "R_LIBS_USER=\"$R_library_path\" $Rbinary CMD BATCH"; + if (file_exists($folder . DIRECTORY_SEPARATOR . 
"STATUS")) { + $exec .= " --continue workflow.R workflow2.Rout"; + } else { + if ($model_edit) { + $exec .= " --advanced"; + } + $exec .= " workflow.R"; + } + pclose(popen("$exec &", 'r')); } From fd856a6baca7feb8c4ce7efc9a28a61ad47b3041 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 27 Apr 2020 18:42:32 -0500 Subject: [PATCH 0906/2289] run workflow and stop in case of advanced, and continue --- docker/executor/Dockerfile | 2 +- docker/executor/executor.py | 8 ++++++++ web/04-runpecan.php | 2 +- 3 files changed, 10 insertions(+), 2 deletions(-) diff --git a/docker/executor/Dockerfile b/docker/executor/Dockerfile index 4b68c7ba897..0cf5622e487 100644 --- a/docker/executor/Dockerfile +++ b/docker/executor/Dockerfile @@ -20,7 +20,7 @@ WORKDIR /work # variables to store in docker image ENV RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" \ RABBITMQ_QUEUE="pecan" \ - APPLICATION="R CMD BATCH workflow.R" + APPLICATION="workflow" # actual application that will be executed COPY executor.py sender.py /work/ diff --git a/docker/executor/executor.py b/docker/executor/executor.py index b79e089e277..a432df87ecb 100644 --- a/docker/executor/executor.py +++ b/docker/executor/executor.py @@ -55,6 +55,14 @@ def runfunc(self): application = "R CMD BATCH workflow.R" elif custom_application is not None: application = custom_application + elif default_application == "workflow": + application = "R CMD BATCH" + if jbody.get("continue") == True: + application = application + " --continue workflow.R workflow2.Rout"; + else: + if jbody.get("modeledit") == True: + application = application + " --advanced" + application = application + " workflow.R workflow.Rout"; else: logging.info("Running default command: %s" % default_application) application = default_application diff --git a/web/04-runpecan.php b/web/04-runpecan.php index fd9f6af374e..2bbf990d70e 100644 --- a/web/04-runpecan.php +++ b/web/04-runpecan.php @@ -536,7 +536,7 @@ if ($model_edit) { $message .= ', "modeledit": true'; } - $message .= '}' + $message .= '}'; send_rabbitmq_message($message, $rabbitmq_uri, $rabbitmq_queue); #done From bb6bde598da5952ed5a802e7b810b4e47b4459da Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 29 Apr 2020 00:23:25 +0200 Subject: [PATCH 0907/2289] Add CI builds on Github Actions (#2544) * experimental GitHub Actions CI workflows * put R library outside git dir The "${HOME}${R_LIBS#'~'}" is to manually expand tilde in R_LIBS: * GitHub sets env vars as unprocessed strings, so it won't be expanded there. * POSIX shell expansion rules say ~ is always replaced *before* expanding $, so `FOO='~'; mkdir -p $FOO` creates a dir literally named '~'. 
* update git if needed Co-authored-by: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Co-authored-by: Rob Kooper --- .github/workflows/ci.yml | 173 +++++++++++++++++++++++++++ tests/ghaction.sipnet_PostgreSQL.xml | 62 ++++++++++ tests/ghaction.sipnet_Postgres.xml | 62 ++++++++++ tests/integration.sh | 4 +- 4 files changed, 300 insertions(+), 1 deletion(-) create mode 100644 .github/workflows/ci.yml create mode 100644 tests/ghaction.sipnet_PostgreSQL.xml create mode 100644 tests/ghaction.sipnet_Postgres.xml diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 00000000000..9b1e35c6938 --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,173 @@ +name: CI + +on: push + +env: + # Would be more usual to set R_LIBS_USER, but R uses R_LIBS first if present + # ...and it's always present here, because the rocker/tidyverse base image + # checks at R startup time for R_LIBS and R_LIBS_USER, sets both if not found + R_LIBS: ~/R/library + +jobs: + + build: + runs-on: ubuntu-latest + container: pecan/depends:develop + steps: + - name: check git version + id: gitversion + run: | + v=$(git --version | grep -oE '[0-9\.]+') + v='cat(numeric_version("'${v}'") < "2.18")' + echo "##[set-output name=isold;]$(Rscript -e "${v}")" + - name: upgrade git if needed + # Hack: actions/checkout wants git >= 2.18, rocker 3.5 images have 2.11 + # Assuming debian stretch because newer images have git >= 2.20 already + if: steps.gitversion.outputs.isold == 'TRUE' + run: | + echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list + apt-get update + apt-get -t stretch-backports upgrade -y git + - uses: actions/checkout@v2 + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + shell: bash + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: cache .doc + uses: actions/cache@v1 + with: + key: doc-${{ github.sha }} + path: .doc + - name: cache .install + uses: actions/cache@v1 + with: + key: install-${{ github.sha }} + path: .install + - name: build + run: make -j1 + env: + NCPUS: 2 + CI: true + - name: check for out-of-date Rd files + uses: infotroph/tree-is-clean@v1 + + test: + needs: build + runs-on: ubuntu-latest + container: pecan/depends:develop + services: + postgres: + image: mdillon/postgis:9.5 + options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 + env: + NCPUS: 2 + PGHOST: postgres + CI: true + steps: + - uses: actions/checkout@v2 + - name: install utils + run: apt-get update && apt-get install -y openssh-client postgresql-client curl + - name: db setup + uses: docker://pecan/db:ci + - name: add models to db + run: ./scripts/add.models.sh + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: cache .doc + uses: actions/cache@v1 + with: + key: doc-${{ github.sha }} + path: .doc + - name: cache .install + uses: actions/cache@v1 + with: + key: install-${{ github.sha }} + path: .install + - name: test + run: make test + + check: + needs: build + runs-on: ubuntu-latest + container: pecan/depends:develop + env: + NCPUS: 2 + CI: true + _R_CHECK_LENGTH_1_CONDITION_: true + _R_CHECK_LENGTH_1_LOGIC2_: true + # Avoid compilation check warnings that come from the system Makevars + # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html + 
_R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + steps: + - uses: actions/checkout@v2 + - name: install ssh + run: apt-get update && apt-get install -y openssh-client qpdf + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: cache .doc + uses: actions/cache@v1 + with: + key: doc-${{ github.sha }} + path: .doc + - name: cache .install + uses: actions/cache@v1 + with: + key: install-${{ github.sha }} + path: .install + - name: check + run: make check + env: + REBUILD_DOCS: "FALSE" + RUN_TESTS: "FALSE" + + sipnet: + needs: build + runs-on: ubuntu-latest + container: pecan/depends:develop + services: + postgres: + image: mdillon/postgis:9.5 + options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 + env: + PGHOST: postgres + steps: + - uses: actions/checkout@v2 + - run: apt-get update && apt-get install -y curl postgresql-client + - name: install sipnet + run: | + cd ${HOME} + curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz + tar zxf sipnet_unk.tar.gz + cd sipnet_unk + make + - name: db setup + uses: docker://pecan/db:ci + - name: add models to db + run: ./scripts/add.models.sh + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: integration test + run: ./tests/integration.sh ghaction diff --git a/tests/ghaction.sipnet_PostgreSQL.xml b/tests/ghaction.sipnet_PostgreSQL.xml new file mode 100644 index 00000000000..8301b03a59a --- /dev/null +++ b/tests/ghaction.sipnet_PostgreSQL.xml @@ -0,0 +1,62 @@ + + + pecan + + + + PostgreSQL + bety + bety + postgres + bety + FALSE + + + + + + temperate.coniferous + + + + + 3000 + FALSE + 1.2 + AUTO + + + + NPP + + + + + -1 + 1 + + NPP + + + + ${HOME}/sipnet_unk/sipnet + SIPNET + + + + + 772 + + + + ${HOME}/sipnet_unk/niwot_tutorial.clim + + + 2002-01-01 00:00:00 + 2005-12-31 00:00:00 + + localhost + + pecan/dbfiles + + diff --git a/tests/ghaction.sipnet_Postgres.xml b/tests/ghaction.sipnet_Postgres.xml new file mode 100644 index 00000000000..f1ffd6935f1 --- /dev/null +++ b/tests/ghaction.sipnet_Postgres.xml @@ -0,0 +1,62 @@ + + + pecan + + + + Postgres + bety + bety + postgres + bety + FALSE + + + + + + temperate.coniferous + + + + + 3000 + FALSE + 1.2 + AUTO + + + + NPP + + + + + -1 + 1 + + NPP + + + + ${HOME}/sipnet_unk/sipnet + SIPNET + + + + + 772 + + + + ${HOME}/sipnet_unk/niwot_tutorial.clim + + + 2002-01-01 00:00:00 + 2005-12-31 00:00:00 + + localhost + + pecan/dbfiles + + diff --git a/tests/integration.sh b/tests/integration.sh index 8ed5c484d72..49d28c1d08d 100755 --- a/tests/integration.sh +++ b/tests/integration.sh @@ -2,9 +2,11 @@ NAME=${1:-$HOSTNAME} +set -o pipefail + cd $( dirname $0 ) for f in ${NAME}.*.xml; do - echo -en 'travis_fold:start:TEST $f\r' + echo -en "travis_fold:start:TEST $f\r" rm -rf pecan output.log Rscript --vanilla ../web/workflow.R --settings $f 2>&1 | tee output.log if [ $? 
-ne 0 ]; then From 613547ca5496e0cc19645f33a4f1c8c4a653e6c7 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 29 Apr 2020 10:23:48 -0500 Subject: [PATCH 0908/2289] ftp has status code 226 for ok --- shiny/dbsync/app.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/shiny/dbsync/app.R b/shiny/dbsync/app.R index 98fdda1bc21..e77041f0c17 100644 --- a/shiny/dbsync/app.R +++ b/shiny/dbsync/app.R @@ -112,7 +112,7 @@ check_servers <- function(servers, progress) { # version information server_version <- function(res) { progress$inc(amount = 0, message = paste("Processing", progress$getValue(), "of", progress$getMax())) - if (res$status == 200) { + if (res$status == 200 || res$status == 226) { url <- sub("version.txt", "bety.tar.gz", res$url) version <- strsplit(rawToChar(res$content), '\t', fixed = TRUE)[[1]] if (!is.na(as.numeric(version[1]))) { @@ -128,12 +128,13 @@ check_servers <- function(servers, progress) { progress$inc(amount = 1) } urls <- sapply(servers[,'sync_url'], function(x) { sub("bety.tar.gz", "version.txt", x) }) + print(urls) lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_version, fail = failure, handle = curl::new_handle(connecttimeout=1)) }) # log information server_log <- function(res) { progress$inc(amount = 0, message = paste("Processing", progress$getValue(), "of", progress$getMax())) - if (res$status == 200) { + if (res$status == 200 || res$status == 226) { url <- sub("sync.log", "bety.tar.gz", res$url) lines <- strsplit(rawToChar(res$content), '\n', fixed = TRUE)[[1]] now <- as.POSIXlt(Sys.time(), tz="UTC") From 5560e0b4dcb45bf89ef255c8e77c72a3d1c520c7 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 29 Apr 2020 12:37:49 -0500 Subject: [PATCH 0909/2289] remove timeout will try servers once, if no versions.txt add to ignored list (and use gray color). This is reset on update servers. --- shiny/dbsync/app.R | 43 +++++++++++++++++++++++++++++++------------ 1 file changed, 31 insertions(+), 12 deletions(-) diff --git a/shiny/dbsync/app.R b/shiny/dbsync/app.R index e77041f0c17..f59ff57f961 100644 --- a/shiny/dbsync/app.R +++ b/shiny/dbsync/app.R @@ -28,6 +28,9 @@ host_mapping <- list( "paleon-pecan.virtual.crc.nd.edu"="crc.nd.edu" ) +# ignored servers, is reset on refresh +ignored_servers <- c() + # given a IP address lookup geo spatital info # uses a cache to prevent to many requests (1000 per day) get_geoip <- function(ip) { @@ -55,6 +58,8 @@ get_geoip <- function(ip) { # get a list of all servers in BETY and their geospatial location get_servers <- function() { + ignored_servers <<- c() + # connect to BETYdb bety <- DBI::dbConnect( DBI::dbDriver("PostgreSQL"), @@ -104,16 +109,22 @@ get_servers <- function() { # fetch information from the actual servers check_servers <- function(servers, progress) { + check_servers <- servers$sync_url[! 
servers$sync_host_id %in% ignored_servers] + print(check_servers) + # generic failure message to increment progress failure <- function(res) { + print(res) progress$inc(amount = 1) } # version information server_version <- function(res) { + url <- sub("version.txt", "bety.tar.gz", res$url) progress$inc(amount = 0, message = paste("Processing", progress$getValue(), "of", progress$getMax())) + print(paste(res$status, url)) if (res$status == 200 || res$status == 226) { - url <- sub("version.txt", "bety.tar.gz", res$url) + check_servers <<- check_servers[check_servers != url] version <- strsplit(rawToChar(res$content), '\t', fixed = TRUE)[[1]] if (!is.na(as.numeric(version[1]))) { servers[servers$sync_url == url,'version'] <<- version[2] @@ -127,15 +138,15 @@ check_servers <- function(servers, progress) { } progress$inc(amount = 1) } - urls <- sapply(servers[,'sync_url'], function(x) { sub("bety.tar.gz", "version.txt", x) }) - print(urls) - lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_version, fail = failure, handle = curl::new_handle(connecttimeout=1)) }) + urls <- sapply(check_servers, function(x) { sub("bety.tar.gz", "version.txt", x) }) + lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_version, fail = failure) } ) # log information server_log <- function(res) { + url <- sub("sync.log", "bety.tar.gz", res$url) progress$inc(amount = 0, message = paste("Processing", progress$getValue(), "of", progress$getMax())) + print(paste(res$status, url)) if (res$status == 200 || res$status == 226) { - url <- sub("sync.log", "bety.tar.gz", res$url) lines <- strsplit(rawToChar(res$content), '\n', fixed = TRUE)[[1]] now <- as.POSIXlt(Sys.time(), tz="UTC") for (line in tail(lines, maxlines)) { @@ -153,12 +164,13 @@ check_servers <- function(servers, progress) { } progress$inc(amount = 1) } - urls <- sapply(servers[,'sync_url'], function(x) { sub("bety.tar.gz", "sync.log", x) }) - lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_log, fail = failure, handle = curl::new_handle(connecttimeout=1)) }) + urls <- sapply(check_servers, function(x) { sub("bety.tar.gz", "sync.log", x) }) + lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_log, fail = failure) } ) # run queries in parallel curl::multi_run() - myservers <<- servers + ignored_servers <<- c(ignored_servers, servers[servers$sync_url %in% check_servers, "sync_host_id"]) + return(servers) } @@ -258,12 +270,15 @@ server <- function(input, output, session) { # update sync list (slow) observeEvent(input$refresh_sync, { + servers <- values$servers session$sendCustomMessage("disableUI", "") - progress <- Progress$new(session, min=0, max=2*nrow(values$servers)) - values$servers <- check_servers(values$servers, progress) - values$sync <- check_sync(values$servers) + progress <- Progress$new(session, min=0, max=2*(nrow(servers)-length(ignored_servers))) + servers <- check_servers(servers, progress) + sync <- check_sync(servers) progress$close() session$sendCustomMessage("enableUI", "") + values$servers <- servers + values$sync <- sync }) # create a map of all servers that have a sync_host_id and sync_url @@ -283,7 +298,11 @@ server <- function(input, output, session) { # create a table of all servers that have a sync_host_id and sync_url output$table <- DT::renderDataTable({ - DT::datatable(values$servers %>% dplyr::select("sync_host_id", "hostname", "city", "country", "lastdump", "migrations")) + ignored <- rep("gray", length(ignored_servers) + 1) + DT::datatable(values$servers %>% + 
dplyr::select("sync_host_id", "hostname", "city", "country", "lastdump", "migrations"), + rownames = FALSE) %>% + formatStyle('sync_host_id', target = "row", color = styleEqual(c(ignored_servers, "-1"), ignored)) }) } From 0aaa2714690c9e2b1d6348e66c102d82627392a8 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 29 Apr 2020 13:29:00 -0500 Subject: [PATCH 0910/2289] fix missing namespace --- shiny/dbsync/app.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shiny/dbsync/app.R b/shiny/dbsync/app.R index f59ff57f961..c9a61b46fa2 100644 --- a/shiny/dbsync/app.R +++ b/shiny/dbsync/app.R @@ -302,7 +302,7 @@ server <- function(input, output, session) { DT::datatable(values$servers %>% dplyr::select("sync_host_id", "hostname", "city", "country", "lastdump", "migrations"), rownames = FALSE) %>% - formatStyle('sync_host_id', target = "row", color = styleEqual(c(ignored_servers, "-1"), ignored)) + DT::formatStyle('sync_host_id', target = "row", color = DT::styleEqual(c(ignored_servers, "-1"), ignored)) }) } From c6c311a0ee59e606d9fe52738595c3ace7b280e3 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 29 Apr 2020 13:48:22 -0500 Subject: [PATCH 0911/2289] remove debug statement --- shiny/dbsync/app.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/shiny/dbsync/app.R b/shiny/dbsync/app.R index c9a61b46fa2..da4f3065247 100644 --- a/shiny/dbsync/app.R +++ b/shiny/dbsync/app.R @@ -110,8 +110,7 @@ get_servers <- function() { # fetch information from the actual servers check_servers <- function(servers, progress) { check_servers <- servers$sync_url[! servers$sync_host_id %in% ignored_servers] - print(check_servers) - + # generic failure message to increment progress failure <- function(res) { print(res) From 10a066bf1ed152d34e846ad4e706ce8266d8250f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 28 Apr 2020 14:16:29 +0200 Subject: [PATCH 0912/2289] typo in cache priming script Travis interprets it correctly for now, but may not in future --- scripts/travis/prime_travis_cache.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/travis/prime_travis_cache.sh b/scripts/travis/prime_travis_cache.sh index 695ed205158..bf9537ae7ee 100755 --- a/scripts/travis/prime_travis_cache.sh +++ b/scripts/travis/prime_travis_cache.sh @@ -23,7 +23,7 @@ BODY='{ "branch":"'${BRANCH}'", "config": { "install":"echo skipping", - "before-script":"echo skipping", + "before_script":"echo skipping", "script":"scripts/travis/cache_buildup.sh 30m"} }}' From 6e5731e2714f220f503368bad0492adbe588eda3 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 28 Apr 2020 16:15:22 +0200 Subject: [PATCH 0913/2289] was only passing first word of CMDS to timeout --- scripts/travis/cache_buildup.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/travis/cache_buildup.sh b/scripts/travis/cache_buildup.sh index 7cdbde55745..b7e3e37b884 100755 --- a/scripts/travis/cache_buildup.sh +++ b/scripts/travis/cache_buildup.sh @@ -34,7 +34,7 @@ MAX_TIME=${1:-30m} CMDS=${2:-'scripts/travis/install.sh && make install'} # Spends up to $MAX_TIME installing packages, then sends HUP -timeout ${MAX_TIME} bash -c ${CMDS} +timeout ${MAX_TIME} bash -c "${CMDS}" if [[ $? 
-ne 0 ]]; then # Clean up any lock files left from killing install.packages From 450e7b2d98018f61fa456a6db4d693cdd408ae1b Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 28 Apr 2020 22:01:11 +0200 Subject: [PATCH 0914/2289] no c2d4u --- .travis.yml | 97 +++++++++++++++-------------------------------------- 1 file changed, 27 insertions(+), 70 deletions(-) diff --git a/.travis.yml b/.travis.yml index f7bf002c73c..787d3c0660e 100644 --- a/.travis.yml +++ b/.travis.yml @@ -14,86 +14,43 @@ env: - _R_CHECK_LENGTH_1_CONDITION_=true - _R_CHECK_LENGTH_1_LOGIC2_=true -_apt: &apt-base - - bc - - curl - - gdal-bin - - jags - - libgdal-dev - - libgl1-mesa-dev - - libglu1-mesa-dev - - libglpk-dev # indirectly needed by BayesianTools - - libgmp-dev - - libhdf5-dev - - liblapack-dev - - libnetcdf-dev - - libproj-dev - - librdf0-dev - - libudunits2-dev - - netcdf-bin - - pandoc - - python-dev - - qpdf - - tcl - - tcl-dev - - udunits-bin - -_c2d4u: &apt-r-binaries - - r-bioc-biocinstaller - - r-cran-ape - - r-cran-curl - - r-cran-data.table - - r-cran-devtools - - r-cran-dplyr - - r-cran-gap - - r-cran-ggplot2 - - r-cran-httr - - r-cran-igraph - - r-cran-lme4 - - r-cran-matrixstats - - r-cran-mcmcpack - - r-cran-raster - - r-cran-rcpp - - r-cran-rcurl - - r-cran-redland - - r-cran-rjags - - r-cran-rncl - - r-cran-roxygen2 - - r-cran-rsqlite - # - r-cran-sf - - r-cran-shiny - - r-cran-sirt - - r-cran-testthat - - r-cran-tidyverse - - r-cran-xml - - r-cran-xml2 - - r-cran-xts +addons: + apt: + sources: + - sourceline: 'ppa:ubuntugis/ppa' # for GDAL 2 binaries + packages: + - bc + - curl + - gdal-bin + - jags + - libgdal-dev + - libgl1-mesa-dev + - libglu1-mesa-dev + - libglpk-dev # indirectly needed by BayesianTools + - libgmp-dev + - libhdf5-dev + - liblapack-dev + - libnetcdf-dev + - libproj-dev + - librdf0-dev + - libudunits2-dev + - netcdf-bin + - pandoc + - python-dev + - qpdf + - tcl + - tcl-dev + - udunits-bin jobs: fast_finish: true include: - r: release - addons: &addons-c2d4u - apt: - sources: - - sourceline: 'ppa:ubuntugis/ppa' # for GDAL 2 binaries - packages: - - *apt-base - - *apt-r-binaries - r: devel - addons: &addons-base - apt: - sources: - - sourceline: 'ppa:ubuntugis/ppa' # for GDAL 2 binaries - packages: - - *apt-base - r: oldrel - addons: *addons-base allow_failures: - r: devel - addons: *addons-base - r: oldrel - addons: *addons-base cache: - directories: From 16411a5be48bed1d47feed2fe2939277aac56fc2 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 28 Apr 2020 23:17:10 +0200 Subject: [PATCH 0915/2289] use pecan.depends to install R packages before starting make --- docker/depends/pecan.depends | 3 ++- scripts/travis/install.sh | 12 ++++++++++-- 2 files changed, 12 insertions(+), 3 deletions(-) diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index 5494ad53436..cd041decf6e 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -7,6 +7,7 @@ set -e # Don't use X11 for rgl RGL_USE_NULL=TRUE +RLIB=${R_LIBS_USER:-/usr/local/lib/R/site-library} # install remotes first in case packages are references in dependencies installGithub.r \ @@ -17,7 +18,7 @@ installGithub.r \ ropensci/nneo # install all packages (depends, imports, suggests) -install2.r -e -s -n -1\ +install2.r -e -s -l "${RLIB}" -n -1\ abind \ BayesianTools \ binaryLogic \ diff --git a/scripts/travis/install.sh b/scripts/travis/install.sh index ff99c0670a5..c70dc427bec 100755 --- a/scripts/travis/install.sh +++ b/scripts/travis/install.sh @@ -3,13 +3,21 @@ set -e . 
$( dirname $0 )/func.sh -# FIXING R BINARIES +# install R package dependencies ( travis_time_start "pkg_version_check" "Checking R package binaries" - Rscript scripts/travis/rebuild_pkg_binaries.R + travis_time_end + travis_time_start "r_pkgs" "installing R packages" + # Seems like a lot of fiddling to set up littler and only use it once + # inside pecan.depends, but still easier than duplicating the script + Rscript -e 'if (!requireNamespace("littler", quietly = TRUE)) { install.packages(c("littler", "remotes", "docopt"), repos = "https://cloud.r-project.org") }' + LRPATHS=`Rscript -e 'cat(system.file(c("examples", "bin"), package = "littler"), sep = ":")'` + echo 'options(repos="https://cloud.r-project.org")' > ~/.littler.r + PATH=$LRPATHS:$PATH bash docker/depends/pecan.depends travis_time_end + ) # ROLL A FEW R PACKAGES BACK TO SPECIFIED VERSIONS From e24ff9b7f83511956acf34ea369368c40925c93c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 29 Apr 2020 11:21:32 +0200 Subject: [PATCH 0916/2289] set installation library when installing dependencies (plus spelling+whitespace fix) --- scripts/generate_dependencies.R | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/scripts/generate_dependencies.R b/scripts/generate_dependencies.R index 5ea9ad772a1..ef824e8d4b7 100755 --- a/scripts/generate_dependencies.R +++ b/scripts/generate_dependencies.R @@ -1,7 +1,7 @@ -#!/usr/bin/env RScript +#!/usr/bin/env Rscript # force sorting -if(capabilities("ICU")) { +if (capabilities("ICU")) { icuSetCollate(locale = "en_US.UTF-8") } else { print("Can not force sorting, this could result in unpredicted results.") @@ -97,6 +97,7 @@ cat("#!/bin/bash", "", "# Don\'t use X11 for rgl", "RGL_USE_NULL=TRUE", + "RLIB=${R_LIBS_USER:-/usr/local/lib/R/site-library}", "", "# install remotes first in case packages are references in dependencies", paste0( @@ -105,6 +106,6 @@ cat("#!/bin/bash", "", "# install all packages (depends, imports, suggests)", paste0( - "install2.r -e -s -n -1\\\n ", + "install2.r -e -s -l \"${RLIB}\" -n -1\\\n ", paste(sort(docker), sep = "", collapse = " \\\n ")), file = "docker/depends/pecan.depends", sep = "\n", append = FALSE) From 5a7eee5e7b5165c8c1f6412cf7d75aaa3eea67e0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 29 Apr 2020 13:04:35 +0200 Subject: [PATCH 0917/2289] build on R 3.5 too --- .travis.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.travis.yml b/.travis.yml index 787d3c0660e..20fc82ada9f 100644 --- a/.travis.yml +++ b/.travis.yml @@ -48,9 +48,11 @@ jobs: - r: release - r: devel - r: oldrel + - r: 3.5 allow_failures: - r: devel - r: oldrel + - r: 3.5 cache: - directories: From 482e51493d86eb6c2a8ec0c527640c221e10aab7 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 30 Apr 2020 14:15:17 -0500 Subject: [PATCH 0918/2289] Kubernetes fixes (#2589) * check_postgresql will now check bety db as well * need a , at the end to make it a tuple. --- docker/check/check_postgresql | 23 +++++++++++++++++++++-- docker/monitor/monitor.py | 2 +- 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/docker/check/check_postgresql b/docker/check/check_postgresql index 53ac99d200b..29036ebb632 100755 --- a/docker/check/check_postgresql +++ b/docker/check/check_postgresql @@ -1,7 +1,26 @@ #!/bin/bash while ! 
pg_isready -U ${PGUSER} -h ${PGHOST} -p ${PGPORT}; do - echo "Waiting for database" - sleep 2 + echo "Waiting for database system" + sleep 2 done + +# if given use BETY credentials, check for BETY database +if [ -n "$BETYUSER" ]; then + # set PGPASSWORD so we are not prompted for password + PGPASSWORD="${BETYPASSWORD}" + + # wait for bety user / database to be active + while ! pg_isready -U ${BETYUSER} -h ${PGHOST} -p ${PGPORT} -d ${BETYDATABASE}; do + echo "Waiting for bety database" + sleep 2 + done + + # wait for list of users to be active + while ! psql -U ${BETYUSER} -h ${PGHOST} -p ${PGPORT} -d ${BETYDATABASE} -tAc "SELECT count(id) FROM users;"; do + echo "Waiting for user table to be populated" + sleep 2 + done +fi + echo "Database is ready" diff --git a/docker/monitor/monitor.py b/docker/monitor/monitor.py index 6cc29d8f3e1..159d4fe3f5c 100644 --- a/docker/monitor/monitor.py +++ b/docker/monitor/monitor.py @@ -165,7 +165,7 @@ def insert_model(model_info): logging.debug("Adding host") cur = conn.cursor() cur.execute('INSERT INTO machines (hostname) ' - 'VALUES (%s) RETURNING id', (pecan_fqdn)) + 'VALUES (%s) RETURNING id', (pecan_fqdn, )) result = cur.fetchone() cur.close() if not result: From 447e24559846df3e685a1b6d89b00a02c9424db3 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 30 Apr 2020 15:14:23 -0500 Subject: [PATCH 0919/2289] add dockerfile, model_info.json to template (fixes #2567) --- CHANGELOG.md | 1 + models/template/Dockerfile | 69 +++++++++++++++++++++++++++++++++ models/template/model_info.json | 36 +++++++++++++++++ 3 files changed, 106 insertions(+) create mode 100644 models/template/Dockerfile create mode 100644 models/template/model_info.json diff --git a/CHANGELOG.md b/CHANGELOG.md index f92d78d348c..8c27857eba3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -35,6 +35,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511). ### Added +- model_info.json and Dockerfile to template (#2567) - Dockerize BASGRA_N model. - Basic coupling for models BASGRA_N and STICS. - PEcAn.priors now exports functions `priorfig` and `plot_densities` (#2439). diff --git a/models/template/Dockerfile b/models/template/Dockerfile new file mode 100644 index 00000000000..045917d183b --- /dev/null +++ b/models/template/Dockerfile @@ -0,0 +1,69 @@ +######################################################################## +# First we build the actual model, this has everything that is needed +# for the model to compile. Next we create the final image that has the +# PEcAn code as well as only the model binary. 
+######################################################################## + +# this needs to be at the top, what version are we building +ARG IMAGE_VERSION="latest" + +# ---------------------------------------------------------------------- +# BUILD MODEL BINARY +# ---------------------------------------------------------------------- +FROM debian:stretch as model-binary + +# Some variables that can be used to control the docker build +ARG MODEL_VERSION=git + +# install dependencies +RUN apt-get update \ + && apt-get install -y --no-install-recommends \ + build-essential \ + curl \ + gfortran \ + git \ + libhdf5-dev \ + libopenmpi-dev \ + && rm -rf /var/lib/apt/lists/* + +# download and build the model +WORKDIR /src +RUN git clone https://github.com/model/repo.git \ + && cd repo \ + && if [ "${MODEL_VERSION}" != "git" ]; then git checkout "v.${MODEL_VERSION}"; fi \ + && ./configure \ + && make + +######################################################################## + +# ---------------------------------------------------------------------- +# BUILD PECAN FOR MODEL +# ---------------------------------------------------------------------- +FROM pecan/models:${IMAGE_VERSION} + +# ---------------------------------------------------------------------- +# INSTALL MODEL SPECIFIC PIECES +# ---------------------------------------------------------------------- + +RUN apt-get update \ + && apt-get install -y --no-install-recommends \ + libgfortran3 \ + libopenmpi2 \ + && rm -rf /var/lib/apt/lists/* + +# ---------------------------------------------------------------------- +# SETUP FOR SPECIFIC MODEL +# ---------------------------------------------------------------------- + +# Some variables that can be used to control the docker build +ARG MODEL_VERSION=git + +# Setup model_info file +# @VERSION@ is replaced with model version in the model_info.json file +# @BINARY@ is replaced with model binary in the model_info.json file +COPY model_info.json /work/model.json +RUN sed -i -e "s/@VERSION@/${MODEL_VERSION}/g" \ + -e "s#@BINARY@#/usr/local/bin/model#g" /work/model.json + +# COPY model binary +COPY --from=model-binary /src/repo/src/model /usr/local/bin/model diff --git a/models/template/model_info.json b/models/template/model_info.json new file mode 100644 index 00000000000..452aad50e91 --- /dev/null +++ b/models/template/model_info.json @@ -0,0 +1,36 @@ +{ + "name": "Name of model, displayed in UI", + "type": "Type of model", + "version": "@VERSION@", + "binary": "@BINARY@", + "description": "Longer description of the model, right now only displayed in the monitor", + "creator": "Main creator of model", + "contributors": ["List of contributors", "Both for model", "And PEcAn code for model"], + "links": [ + { + "type": "git", + "description": "Link to source code, if not exist, just remove", + "url": "https://github.com/...."
+ }, + { + "type": "issues", + "description": "Link to issues, if not exist, just remove", + "url": "https://github.com/..../issues" + }, + { + "type": "documentation", + "description": "Link to documentation, if not exist, just remove", + "url": "https://github.com/..../wiki" + }, + { + "type": "docker", + "description": "Link to docker image, if not exist, just remove", + "url": "organization/image" + } + ], + "inputs": {}, + "bibtex": [ + "How to cite the model, paper 1", + "How to cite the model, paper 2" + ] +} From 40c09275e8bae390d44946ae9b6e86ba931ae0f6 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 30 Apr 2020 15:45:20 -0500 Subject: [PATCH 0920/2289] ignore model_info and Dockerfile in top directory --- models/template/.Rbuildignore | 2 ++ 1 file changed, 2 insertions(+) create mode 100644 models/template/.Rbuildignore diff --git a/models/template/.Rbuildignore b/models/template/.Rbuildignore new file mode 100644 index 00000000000..2d28facae40 --- /dev/null +++ b/models/template/.Rbuildignore @@ -0,0 +1,2 @@ +Dockerfile +model_info.json From 567e1141d96a6013c739f7aa35a595bf6f0b3ac9 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 30 Apr 2020 23:58:36 +0200 Subject: [PATCH 0921/2289] make subshell more obvious --- scripts/travis/install.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/travis/install.sh b/scripts/travis/install.sh index c70dc427bec..82d8d122805 100755 --- a/scripts/travis/install.sh +++ b/scripts/travis/install.sh @@ -13,7 +13,7 @@ set -e # Seems like a lot of fiddling to set up littler and only use it once # inside pecan.depends, but still easier than duplicating the script Rscript -e 'if (!requireNamespace("littler", quietly = TRUE)) { install.packages(c("littler", "remotes", "docopt"), repos = "https://cloud.r-project.org") }' - LRPATHS=`Rscript -e 'cat(system.file(c("examples", "bin"), package = "littler"), sep = ":")'` + LRPATHS=$(Rscript -e 'cat(system.file(c("examples", "bin"), package = "littler"), sep = ":")') echo 'options(repos="https://cloud.r-project.org")' > ~/.littler.r PATH=$LRPATHS:$PATH bash docker/depends/pecan.depends travis_time_end From c149be437f89b08cb2765999f6b53dab37b0f586 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 1 May 2020 10:21:43 +0200 Subject: [PATCH 0922/2289] be more lenient about import counts --- .travis.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.travis.yml b/.travis.yml index 20fc82ada9f..f4b6a1c0a69 100644 --- a/.travis.yml +++ b/.travis.yml @@ -13,6 +13,7 @@ env: - RGL_USE_NULL=TRUE # Keeps RGL from complaining it can't find X11 - _R_CHECK_LENGTH_1_CONDITION_=true - _R_CHECK_LENGTH_1_LOGIC2_=true + - _R_CHECK_EXCESSIVE_IMPORTS_=100 # TODO consider reducing (CRAN uses 20) addons: apt: From 6efa077503c7ad3c552c2bed3185f67a59dcaf0d Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 1 May 2020 08:45:19 -0500 Subject: [PATCH 0923/2289] run on pull requests --- .github/workflows/ci.yml | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 9b1e35c6938..518c4aad5de 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -1,6 +1,15 @@ name: CI -on: push +on: + push: + branches: + - master + - develop + + tags: + - '*' + + pull_request: env: # Would be more usual to set R_LIBS_USER, but R uses R_LIBS first if present From ae3e947692abe8aac075bf7a73c0dd03b4fab8d7 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 1 May 2020 11:03:28 -0500 Subject: [PATCH 0924/2289] updated 
documentation --- models/template/README.md | 38 +++++++++++++++++++++++++++++++++- 1 file changed, 37 insertions(+), 1 deletion(-) diff --git a/models/template/README.md b/models/template/README.md index ac17ce2dae4..79ecc18223e 100644 --- a/models/template/README.md +++ b/models/template/README.md @@ -8,7 +8,8 @@ Adding a new model to PEcAn in a few easy steps: 3. implement 3 functions as described below 4. Add tests to `tests/testthat` 5. Update README, documentation -6. execute pecan with new model +6. Update Dockerfile and model_info.json +7. execute pecan with new model ### Three Functions @@ -39,6 +40,41 @@ it is defined in the BETY database. format. After this function is finished PEcAn will use the generated output and not use the model specific outputs. The outputs should be named YYYY.nc + +### Dockerization + +The PEcAn system leverages Docker to encapsulate most of the code. +This will make it easier to share a new model with others, without them +having to compile the model. The goal is for people to be able to +launch the model in Docker, where it will register with PEcAn and be +almost immediately available to be used. To accomplish this you will need to modify two files. + +* `Dockerfile` + + The [Dockerfile](https://docs.docker.com/engine/reference/builder/) is + like the Makefile for Docker. This file is split in two pieces: the + part at the top actually builds the binary. This is where you + specify all the libraries that are needed, as well as all the build + tools to compile your model. The second part, starting at the second + `FROM` line, is where you will install only the libraries needed to + run the binary and copy the binary from the build stage, using the + `COPY --from` line. + +* `model_info.json` + + The model_info.json describes the model and is used to register the + model with PEcAn. In the model_info.json the only fields that are + really required are those at the top: `name`, `type`, `version` and + `binary`. All other fields are optional but are worth filling + out. You can leave `version` and `binary` with the special values, + which will be updated by the Dockerfile. + +Once the image can be built it can be pushed so others can make use +of the model. For PEcAn we have been using the following naming scheme +for the docker images: `pecan/model-<model>-<model_version>:<pecan_version>` +where the `model` and `model_version` are the same as those used to +build the model, and `pecan_version` is the version of PEcAn this +model is compiled for. ### Additional Changes From 77024f7a5087cbce09149183245069c016399f71 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 1 May 2020 21:14:08 +0200 Subject: [PATCH 0925/2289] pinned versions first, to avoid reinstalling --- scripts/travis/install.sh | 35 +++++++++++++++++------------------ 1 file changed, 17 insertions(+), 18 deletions(-) diff --git a/scripts/travis/install.sh b/scripts/travis/install.sh index 82d8d122805..eb062a3f249 100755 --- a/scripts/travis/install.sh +++ b/scripts/travis/install.sh @@ -3,24 +3,7 @@ set -e .
$( dirname $0 )/func.sh -# install R package dependencies -( - travis_time_start "pkg_version_check" "Checking R package binaries" - Rscript scripts/travis/rebuild_pkg_binaries.R - travis_time_end - - travis_time_start "r_pkgs" "installing R packages" - # Seems like a lot of fiddling to set up littler and only use it once - # inside pecan.depends, but still easier than duplicating the script - Rscript -e 'if (!requireNamespace("littler", quietly = TRUE)) { install.packages(c("littler", "remotes", "docopt"), repos = "https://cloud.r-project.org") }' - LRPATHS=$(Rscript -e 'cat(system.file(c("examples", "bin"), package = "littler"), sep = ":")') - echo 'options(repos="https://cloud.r-project.org")' > ~/.littler.r - PATH=$LRPATHS:$PATH bash docker/depends/pecan.depends - travis_time_end - -) - -# ROLL A FEW R PACKAGES BACK TO SPECIFIED VERSIONS +# Install R packages that need specified versions ( travis_time_start "pecan_install_roxygen" "Installing Roxygen 7.0.2 to match comitted documentation version" # We keep Roxygen pinned to a known version, merely to avoid hassle / @@ -42,6 +25,22 @@ set -e fi ) +# Install R package dependencies +# N.B. we run this *after* installing packages that need pinned versions, +# relying on fact that pecan.depends calls littler with -s, +# so it will skip reinstalling packages that already exist. +# This way each package is only installed once. +( + travis_time_start "r_pkgs" "installing R packages" + # Seems like a lot of fiddling to set up littler and only use it once + # inside pecan.depends, but still easier than duplicating the script + Rscript -e 'if (!requireNamespace("littler", quietly = TRUE)) { install.packages(c("littler", "remotes", "docopt"), repos = "https://cloud.r-project.org") }' + LRPATHS=$(Rscript -e 'cat(system.file(c("examples", "bin"), package = "littler"), sep = ":")') + echo 'options(repos="https://cloud.r-project.org")' > ~/.littler.r + PATH=$LRPATHS:$PATH bash docker/depends/pecan.depends + travis_time_end +) + # INSTALLING SIPNET ( travis_time_start "install_sipnet" "Installing SIPNET for testing" From b50e859b74e630a6458a7d2a8e447cdad4570421 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 1 May 2020 21:15:47 +0200 Subject: [PATCH 0926/2289] delete rebuild_pkg_binaries No longer needed when not trying to juggle C2D4U --- scripts/travis/rebuild_pkg_binaries.R | 57 --------------------------- 1 file changed, 57 deletions(-) delete mode 100644 scripts/travis/rebuild_pkg_binaries.R diff --git a/scripts/travis/rebuild_pkg_binaries.R b/scripts/travis/rebuild_pkg_binaries.R deleted file mode 100644 index e206ff8938f..00000000000 --- a/scripts/travis/rebuild_pkg_binaries.R +++ /dev/null @@ -1,57 +0,0 @@ -#!/usr/bin/env Rscript - -# ¡ugly hack! -# -# Travis setup uses many prebuilt R packages from c2d4u3.5, which despite its -# name now contains a mix of packages built for R 3.5 and R 3.6. When loaded in -# R 3.5.x, 3.6-built packages throw error -# `rbind(info, getNamespaceInfo(env, "S3methods")): -# number of columns of matrices must match`, -# and as of 2019-10-30 at least one package (data.table) refuses to load if its -# build version does not match the R version it is running on. -# -# We resolve this the slow brute-force way: By running this script before -# attempting to load any R packages, thereby reinstalling from source any -# package whose binary was built with a different R version. -# -# TODO: Remove this when c2d4u situation improves. 
- -is_wrong_build <- function(pkgname) { - - # lockfile implies incomplete previous installation => delete and rebuild - lock_path <- file.path(.libPaths(), paste0("00LOCK-", pkgname)) - if (any(file.exists(lock_path))) { - unlink(lock_path, recursive = TRUE) - return(TRUE) - } - - built_str <- tryCatch( - packageDescription(pkgname)$Built, - error = function(e)e) - if (inherits(built_str, "error")) { - # In the rare case we can't read the description, - # assume package is broken and needs rebuilding - return(TRUE) - } - - # Typical packageDescription(pkgname)$Built result: we only need chars 3-7 - # "R 3.4.4; x86_64-apple-darwin15.6.0; 2019-03-18 04:41:51 UTC; unix" - built_ver <- R_system_version(substr(built_str, start = 3, stop = 7)) - - # NB strict comparison: even patch level must agree - built_ver != getRversion() -} - -all_pkgs <- installed.packages()[, 1] -needs_rebuild <- vapply(all_pkgs, is_wrong_build, logical(1)) - -if (any(needs_rebuild)) { - print(paste( - "Found R packages that were built for a different R version.", - "Reinstalling these from source.")) - install.packages( - all_pkgs[needs_rebuild], - repos = "cloud.r-project.org", - dependencies = FALSE, - Ncpus = 2) -} From 2055009b89750336fc46e723d8c992525d792634 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 2 May 2020 04:22:05 +0200 Subject: [PATCH 0927/2289] import threshold ignored when checking as cran --- .travis.yml | 1 - modules/assim.batch/tests/Rcheck_reference.log | 6 +++++- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/.travis.yml b/.travis.yml index f4b6a1c0a69..20fc82ada9f 100644 --- a/.travis.yml +++ b/.travis.yml @@ -13,7 +13,6 @@ env: - RGL_USE_NULL=TRUE # Keeps RGL from complaining it can't find X11 - _R_CHECK_LENGTH_1_CONDITION_=true - _R_CHECK_LENGTH_1_LOGIC2_=true - - _R_CHECK_EXCESSIVE_IMPORTS_=100 # TODO consider reducing (CRAN uses 20) addons: apt: diff --git a/modules/assim.batch/tests/Rcheck_reference.log b/modules/assim.batch/tests/Rcheck_reference.log index 40865f2a793..6072eb26a19 100644 --- a/modules/assim.batch/tests/Rcheck_reference.log +++ b/modules/assim.batch/tests/Rcheck_reference.log @@ -8,7 +8,11 @@ * this is package ‘PEcAn.assim.batch’ version ‘1.7.0’ * package encoding: UTF-8 * checking package namespace information ... OK -* checking package dependencies ... OK +* checking package dependencies ... NOTE +Imports includes 28 non-default packages. +Importing from so many packages makes the package vulnerable to any of +them becoming unavailable. Move as many as possible to Suggests and +use conditionally. * checking if this is a source package ... OK * checking if there is a namespace ... OK * checking for executable files ... 
OK From efcf71ffc7b5db4e31db413e59ee90ec51e326ad Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 2 May 2020 14:47:18 +0200 Subject: [PATCH 0928/2289] remove references to C2D4U from CI documentation (plus wording tweaks) --- .../05_developer_workflows/04-testing.Rmd | 17 ++++++++--------- .../04_appendix/04-package-dependencies.Rmd | 6 +----- 2 files changed, 9 insertions(+), 14 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd index 2f9bddb3291..2694dc56c57 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd @@ -122,22 +122,21 @@ The `batch_run.R` script can take the following command-line arguments: ### Continuous Integration -Every time anyone commits a change to the PEcAn code, the act of pushing to GitHub triggers an automated build and test of the full PEcAn codebase using [Travis CI](https://travis-ci.org/pecanProject/pecan), and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code. +Every time anyone commits a change to the PEcAn code, the act of pushing to GitHub triggers an automated build and test of the full PEcAn codebase, and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code. -At this writing (September 2019), all our Travis builds run on the same version of Linux (currently Ubuntu 16.04, `xenial`) using three different versions of R in parallel: previous release, current release, and nightly builds of the R development branch. In most cases the build should pass on all three versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on R:oldrel as developer time and forward compatibility allow. +At this writing PEcAn's CI builds primarily use [Travis CI](https://travis-ci.org/pecanProject/pecan) and the rest of this section assumes a Travis build, but as of May 2020 we also have an experimental GitHub Actions build, and if we switch completely to GitHub Actions then this guide will need to be rewritten. -Each build starts by launching three clean virtual machines (one for each R version) and performs roughly the following actions on all of them: +All our Travis builds run on the same version of Linux (currently Ubuntu 16.04, `xenial`) using four different versions of R in parallel: the two most recent previous releases, current release, and nightly builds of the R development branch. In most cases the build should pass on all four versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on older releases as developer time and forward compatibility allow. 
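As a trimmed sketch of how that version matrix is declared (mirroring the `.travis.yml` changes earlier in this patch series; illustrative, with most of the real file omitted):

```yaml
language: r
jobs:
  fast_finish: true
  include:          # all four builds run on every push
    - r: release    # the only build that must pass before merging
    - r: devel
    - r: oldrel
    - r: 3.5
  allow_failures:   # these may fail without blocking the merge
    - r: devel
    - r: oldrel
    - r: 3.5
```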
+ +Each build starts by launching a separate clean virtual machine for each R version and performs roughly the following actions on all of them: * Installs binary system dependencies needed by PEcAn (NetCDF and HDF5 libraries, JAGS, udunits, etc). -* Installs prebuilt binaries of a few R packages from [cran2deb4ubuntu](https://launchpad.net/~marutter/+archive/ubuntu/c2d4u3.5). - - This is only a small subset of the PEcAn dependency list, mostly the ones that take a very long time to compile but have binaries small enough to download quickly. - - The rest are installed as needed during the installation of PEcAn packages. - - Because these packages are compiled for a specific version of R and we test on three different versions, in some cases the installed binary is incompatible with the installed R version and is overwritten by a locally-compiled one in a later step. At this writing this is only done on R 3.5 (the current R:oldrel), but this may change. +* Installs all the R packages that are declared as dependencies in any PEcAn package, as computed by `scripts/generate_dependencies.R`. * Clones the PEcAn repository from GitHub, and checks out the branch to be tested. * Retrieves any cached files available from previous Travis builds. - The main thing in the cache is previously-installed dependency R packages, to avoid recompiling them every time. - If the cache becomes stale or is preventing a package update needed by the build (e.g. to get a new version that contains a needed bug fix), delete the cache through the Travis web interface and it will be reconstructed on the next build. - - Because PEcAn has so many dependencies, builds with no cache will spend most of their time recompiling packages and will often run out of time before the tests complete. You can fix this by using `scripts/travis/cache_buildup.sh` and `scripts/travis/prime_travis_cache.sh` to build up the cache incrementally through one or more [custom builds](https://blog.travis-ci.com/2017-08-24-trigger-custom-build), each of which installs some dependencies and then uploads a freshened cache *without* running any tests. Once all dependencies have been cached, restart the standard full build. + - Because PEcAn has so many dependencies, builds with no cache will spend most of their time recompiling packages and will probably run out of time before the tests complete. You can fix this by using `scripts/travis/cache_buildup.sh` and `scripts/travis/prime_travis_cache.sh` to build up the cache incrementally through one or more [custom builds](https://blog.travis-ci.com/2017-08-24-trigger-custom-build), each of which installs some dependencies and then uploads a freshened cache *without* running any tests. Once all dependencies have been cached, restart the standard full build. * Initializes a skeleton version of the PEcAn database (BeTY) containing a few public records to be used by the test runs. * Installs all the PEcAn packages, recompiling documentation and installing dependencies as needed. * Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`). @@ -150,7 +149,7 @@ Each build starts by launching three clean virtual machines (one for each R vers - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it! 
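A rough sketch of that line-by-line comparison, illustrative only; the real logic lives in `scripts/check_with_errors.R` and handles many more cases:

```r
# Simplified stand-in for the reference comparison. The reference file name
# follows the repository convention, but "fresh_check.log" is hypothetical.
ref <- readLines("tests/Rcheck_reference.log")
new <- readLines("fresh_check.log")

new_msgs <- setdiff(new, ref)  # exact matching: any line not in the reference is new
if (length(new_msgs) > 0) {
  cat("New check messages:", new_msgs, sep = "\n")
  stop("R CMD check produced messages not present in the reference log")
}
```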
- The idea here is to enforce good coding practice and catch likely errors in all new code while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once. - As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear. - - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it anyway. The failures all need to be cleaned up eventually, and it's likely easier to fix the error than to figure out how to re-ignore it. + - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it as part of your PR anyway. It's frustrating to see tests complain about code you didn't touch, but the failures all need to be cleaned up eventually and it's likely easier to fix the error than to figure out how to re-ignore it. * Runs a simple PEcAn workflow from beginning to end (three years of SIPNET at Niwot Ridge using stored weather data, meta-analysis on fixed effects only, sensitivity analysis on NPP), and verifies that the models complete successfully. * Checks whether any version-controlled files have been changed by the build and testing process, and marks a build failure if they have. - If your build fails at this step, the most common cause is that you forgot to Roxygenize before committing. diff --git a/book_source/04_appendix/04-package-dependencies.Rmd b/book_source/04_appendix/04-package-dependencies.Rmd index bc6e68a93f0..94d80ddf96e 100644 --- a/book_source/04_appendix/04-package-dependencies.Rmd +++ b/book_source/04_appendix/04-package-dependencies.Rmd @@ -98,8 +98,4 @@ If you think your package needs to load or attach code for any reason, please no ## Installing dependencies: Let the machines do it -In most cases you won't need to think about how dependencies get installed -- just declare them in your package's DESCRIPTION and the installation will be handled automatically by R and devtools during the build process. In PEcAn packages, the rare cases where this isn't enough will probably fall into one of two categories. - -First, some dependencies rely on non-R software that R does not know how to install automatically. For example, rjags relies on JAGS, which might be installed in a different place on every machine. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case. - -Second, some dependencies *will* install automatically, but they take a long time to compile or conflict with dependencies needed by other packages or are otherwise annoying to deal with. To save time during CI builds, PEcAn's Travis configuration file includes a manually curated list of the most-annoying dependencies and installs them from pre-compiled binaries before starting the normal installation process. If you suspect you're adding a dependency that will be "annoying", please *do not* add it to the Travis binary-install list right away; instead, focus on getting your package to work right using the default automatic installation. Then, if needed to keep build times reasonable, submit a separate pull request to change the Travis configuration. 
This two-step procedure makes it much easier to understand which merges changed package code and which ones changed the testing configuration without changing package functionality, and also lets you focus on what the code is supposed to do instead of on installation details.
+In most cases you won't need to think about how dependencies get installed -- just declare them in your package's DESCRIPTION and the installation will be handled automatically by R and devtools during the build process. The main exception is when a dependency relies on non-R software that R does not know how to install automatically. For example, rjags relies on JAGS, which might be installed in a different place on every machine. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case.

From 25c31212da24eb59256e020e8836afd81ade4deb Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sun, 3 May 2020 15:39:47 +0200
Subject: [PATCH 0929/2289] ignore data.atmosphere excessive imports note

---
 modules/data.atmosphere/tests/Rcheck_reference.log | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log
index 39e696f479a..bf1308c9e6f 100644
--- a/modules/data.atmosphere/tests/Rcheck_reference.log
+++ b/modules/data.atmosphere/tests/Rcheck_reference.log
@@ -8,7 +8,11 @@
 * this is package ‘PEcAn.data.atmosphere’ version ‘1.7.0’
 * package encoding: UTF-8
 * checking package namespace information ... OK
-* checking package dependencies ... OK
+* checking package dependencies ... NOTE
+Imports includes 36 non-default packages.
+Importing from so many packages makes the package vulnerable to any of
+them becoming unavailable. Move as many as possible to Suggests and
+use conditionally.
 * checking if this is a source package ... OK
 * checking if there is a namespace ... OK
 * checking for executable files ... OK

From 5a04e2b254aead96dce0c9d0d2f8756739d4d9af Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sun, 3 May 2020 15:55:58 +0200
Subject: [PATCH 0930/2289] remove doc for arguments removed in 2012

---
 modules/uncertainty/R/plots.R | 18 +++----------
 .../man/plot_variance_decomposition.Rd | 27 ++-----------------
 .../uncertainty/tests/Rcheck_reference.log | 6 ------
 3 files changed, 5 insertions(+), 46 deletions(-)

diff --git a/modules/uncertainty/R/plots.R b/modules/uncertainty/R/plots.R
index 7c773e29ec9..029d9328b1a 100644
--- a/modules/uncertainty/R/plots.R
+++ b/modules/uncertainty/R/plots.R
@@ -8,26 +8,14 @@
 #-------------------------------------------------------------------------------
 
 ##--------------------------------------------------------------------------------------------------#
-##' Plot results of variance decomposition
+##' Variance Decomposition Plots
 ##'
 ##' Plots variance decomposition triptych
 ##' @name plot_variance_decomposition
-##' @title Variance Decomposition Plots
-##' @export plot_variance_decomposition
+##' @export
 ##' @author David LeBauer, Carl Davidson
-##' @param ... Output from any number of sensitivity analyses. Output must be of the form
+##' @param plot.inputs Output from a sensitivity analysis. 
Output must be of the form ##' given by sensitivity.results$variance.decomposition.output in model output -##' @param all.plot.inputs Optional argument allowing output from sensitivity analyses to be specified in a list -##' @param exclude vector of strings specifying parameters to omit from the variance decomposition graph -##' @param convert.var function transforming variances to the value displayed in the graph -##' @param var.label label to displayed over variance column -##' @param order.plot.input Output from a sensitivity analysis that is to be used to order parameters. -##' Parameters are ordered by variance. Defaults to the first sensitivity analysis output given -##' @param ticks.plot.input Output from a sensitivity analysis that is to be used. -##' Defaults to the first sensitivity analysis output given -##' @param col Color of each sensitivity analysis. Equivalent to col parameter of the plot function. -##' @param pch Shape of each sensitivity analysis. Equivalent to pch parameter of the plot function. -##' @param main Plot title. Useful for multi-pft variance decompositions. ##' @param fontsize list specifying the font size of the titles and axes of the graph ##' @examples ##' x <- list(trait.labels = c('a', 'b', 'c'), diff --git a/modules/uncertainty/man/plot_variance_decomposition.Rd b/modules/uncertainty/man/plot_variance_decomposition.Rd index 8a33da6acf0..9a8317c41e0 100644 --- a/modules/uncertainty/man/plot_variance_decomposition.Rd +++ b/modules/uncertainty/man/plot_variance_decomposition.Rd @@ -10,35 +10,12 @@ plot_variance_decomposition( ) } \arguments{ -\item{fontsize}{list specifying the font size of the titles and axes of the graph} - -\item{...}{Output from any number of sensitivity analyses. Output must be of the form +\item{plot.inputs}{Output from a sensitivity analysis. Output must be of the form given by sensitivity.results$variance.decomposition.output in model output} -\item{all.plot.inputs}{Optional argument allowing output from sensitivity analyses to be specified in a list} - -\item{exclude}{vector of strings specifying parameters to omit from the variance decomposition graph} - -\item{convert.var}{function transforming variances to the value displayed in the graph} - -\item{var.label}{label to displayed over variance column} - -\item{order.plot.input}{Output from a sensitivity analysis that is to be used to order parameters. -Parameters are ordered by variance. Defaults to the first sensitivity analysis output given} - -\item{ticks.plot.input}{Output from a sensitivity analysis that is to be used. -Defaults to the first sensitivity analysis output given} - -\item{col}{Color of each sensitivity analysis. Equivalent to col parameter of the plot function.} - -\item{pch}{Shape of each sensitivity analysis. Equivalent to pch parameter of the plot function.} - -\item{main}{Plot title. 
Useful for multi-pft variance decompositions.}
+\item{fontsize}{list specifying the font size of the titles and axes of the graph}
 }
 \description{
-Plot results of variance decomposition
-}
-\details{
 Plots variance decomposition triptych
 }
 \examples{
diff --git a/modules/uncertainty/tests/Rcheck_reference.log b/modules/uncertainty/tests/Rcheck_reference.log
index f62a8f5e7c8..a97db2893ca 100644
--- a/modules/uncertainty/tests/Rcheck_reference.log
+++ b/modules/uncertainty/tests/Rcheck_reference.log
@@ -277,12 +277,6 @@ Documented arguments not in \usage in documentation object 'plot_sensitivities':
 Undocumented arguments in documentation object 'plot_sensitivity'
   ‘linesize’ ‘dotsize’
 
-Undocumented arguments in documentation object 'plot_variance_decomposition'
-  ‘plot.inputs’
-Documented arguments not in \usage in documentation object 'plot_variance_decomposition':
-  ‘all.plot.inputs’ ‘exclude’ ‘convert.var’ ‘var.label’
-  ‘order.plot.input’ ‘ticks.plot.input’ ‘col’ ‘pch’ ‘main’
-
 Undocumented arguments in documentation object 'read.ameriflux.L2'
   ‘file.name’ ‘year’
 
From 3fdb87bf2c3f406ae1c463c22727640f94091c30 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sun, 3 May 2020 22:42:35 +0200
Subject: [PATCH 0931/2289] data.atm: drafts of dataset documentation

Was thinking this would fix "undocumented datasets" check note, but will
need to turn on lazy-loading of package data and not yet ready to commit
to that. Saving drafts for future use.
---
 modules/data.atmosphere/R/data.R | 150 ++++++++++++++++++
 .../data.atmosphere/data/FLUXNET.sitemap.R | 5 +
 2 files changed, 155 insertions(+)
 create mode 100644 modules/data.atmosphere/R/data.R
 create mode 100644 modules/data.atmosphere/data/FLUXNET.sitemap.R

diff --git a/modules/data.atmosphere/R/data.R b/modules/data.atmosphere/R/data.R
new file mode 100644
index 00000000000..256b8573748
--- /dev/null
+++ b/modules/data.atmosphere/R/data.R
@@ -0,0 +1,150 @@
+
+## Drafts of documentation for package datasets
+##
+## Written by CKB 2020-05-03, then commented out when I realized that as
+## written we need to enable lazy-loading of package data to use these.
+## TODO in this order:
+## * Inspect all datasets, determine whether lazy-loading them into package
+## namespace will cause any issues
+## * make any changes needed to resolve issues identified above
+## * Change DESCRIPTION line to read `LazyData: true`
+## * Uncomment this file, delete this header block
+## * run Roxygen, commit resulting Rd files

+# #' 2010 CRUNCEP weather data for Urbana, IL
+# #'
+# #' Hourly 2010 meteorology for the 0.5-degree grid cell containing the
+# #' EBI Energy Farm (Urbana, IL), as obtained from the CRUNCEP
+# #' 6-hourly product.
+# #' Please see the `compare_narr_cruncep_met` vignette for details of a
+# #' comparison between this and the `narr`, `narr3h`, and `ebifarm` datasets.
+# #' +# #' @format A data frame with 8736 rows and 10 columns: +# #' \describe{ +# #' \item{date}{POSIXct timestamp} +# #' \item{year, doy, hour}{integer, extracted from `date`} +# #' \item{solarR}{solar radiation, in umol/h/m2} +# #' \item{DailyTemp.C}{air temperature, in degrees C} +# #' \item{RH}{relative humidity, in percent} +# #' \item{WindSpeed}{wind speed, in m/s} +# #' \item{precip}{precipitation rate, in mm/h} +# #' \item{source}{dataset identifier, in this case always "cruncep"}} +# #' @seealso \code{\link{narr}} \code{\link{narr3h}} \code{\link{ebifarm}} +# "cruncep" +# +# +# #' Global 0.5 degree land/water mask for the CRUNCEP dataset +# #' +# #' For details, please see the CRUNCEP scripts included with this package: +# #' `system.file("scripts/cruncep", package = "PEcAn.data.atmosphere")` +# #' +# #' @format a data frame with 259200 rows and 3 columns: +# #' \describe{ +# #' \item{lat}{latitude, in decimal degrees} +# #' \item{lon}{longitude, in decimal degrees} +# #' \item{land}{logical. TRUE = land, FALSE = water}} +# "cruncep_landmask" +# +# +# #' 2010 weather station data from near Urbana, IL +# #' +# #' Hourly 2010 weather data collected at the EBI Energy Farm (Urbana, IL). +# #' Please see the `compare_narr_cruncep_met` vignette for details of a +# #' comparison between this and the `narr`, `narr3h`, and `cruncep` datasets. +# #' +# #' @format A data frame with 8390 rows and 10 columns: +# #' \describe{ +# #' \item{date}{POSIXct timestamp} +# #' \item{year, doy, hour}{integer, extracted from `date`} +# #' \item{Temp}{air temperature, in degrees C} +# #' \item{RH}{relative humidity, in percent} +# #' \item{precip}{precipitation rate, in mm/h} +# #' \item{wind}{wind speed, in m/s} +# #' \item{solar}{solar radiation, in umol/h/m2} +# #' \item{source}{dataset identifier, in this case always "ebifarm"}} +# #' @seealso \code{\link{cruncep}} \code{\link{narr}} \code{\link{narr3h}} +# "ebifarm" +# +# +# #' Codes and BeTY IDs for sites in the FLUXNET network +# #' +# #' @format a data frame with 698 rows and 2 columns: +# #' \describe{ +# #' \item{FLUX.id}{character identifier used by FLUXNET, +# #' e.g. Niwot Ridge USA is `US-NR1`} +# #' \item{site.id}{identifier used in the `sites` table of the PEcAn +# #' database. Integer, but stored as character}} +# "FLUXNET.sitemap" +# +# +# #' Global land/water mask for the NCEP dataset +# #' +# #' For details, please see the NCEP scripts included with this package: +# #' `system.file("scripts/ncep", package = "PEcAn.data.atmosphere")` +# #' +# #' @format a data frame with 18048 rows and 3 columns: +# #' \describe{ +# #' \item{lat}{latitude, in decimal degrees} +# #' \item{lon}{longitude, in decimal degrees} +# #' \item{land}{logical. TRUE = land, FALSE = water}} +# "landmask" +# +# +# #' Latitudes of 94 sites from the NCEP dataset +# #' +# #' For details, please see the NCEP scripts included with this package: +# #' `system.file("scripts/ncep", package = "PEcAn.data.atmosphere")` +# #' +# #' @format a vector of 94 decimal values +# "Lat" +# +# +# #' Longitudes of 192 sites from the NCEP dataset +# #' +# #' For details, please see the NCEP scripts included with this package: +# #' `system.file("scripts/ncep", package = "PEcAn.data.atmosphere")` +# #' +# #' @format a vector of 192 decimal values +# "Lon" +# +# +# #' 2010 NARR weather data for Urbana, IL +# #' +# #' Hourly 2010 meteorology for the 0.3-degree grid cell containing the +# #' EBI Energy Farm (Urbana, IL), as obtained from the NARR daily product. 
+# #' Please see the `compare_narr_cruncep_met` vignette for details of a +# #' comparison between this and the `cruncep`, `narr3h`, and `ebifarm` datasets. +# #' +# #' @format A data frame with 8760 rows and 10 columns: +# #' \describe{ +# #' \item{date}{POSIXct timestamp} +# #' \item{year, doy, hour}{integer, extracted from `date`} +# #' \item{SolarR}{solar radiation, in umol/h/m2} +# #' \item{Temp}{air temperature, in degrees C} +# #' \item{RH}{relative humidity, in percent} +# #' \item{WS}{wind speed, in m/s} +# #' \item{precip}{precipitation rate, in mm/h} +# #' \item{source}{dataset identifier, in this case always "narr"}} +# #' @seealso \code{\link{cruncep}} \code{\link{ebifarm}} \code{\link{narr3h}} +# "narr" +# +# +# #' 2010 NARR 3-hourly weather data for Urbana, IL +# #' +# #' Hourly 2010 meteorology for the 0.25-degree grid cell containing the +# #' EBI Energy Farm (Urbana, IL), as obtained from the NARR 3-hourly product. +# #' Please see the `compare_narr_cruncep_met` vignette for details of a +# #' comparison between this and the `cruncep`, `narr`, and `ebifarm` datasets. +# #' +# #' @format A data frame with 8736 rows and 10 columns: +# #' \describe{ +# #' \item{date}{POSIXct timestamp} +# #' \item{year, doy, hour}{integer, extracted from `date`} +# #' \item{solarR}{solar radiation, in umol/h/m2} +# #' \item{DailyTemp.C}{air temperature, in degrees C} +# #' \item{RH}{relative humidity, in percent} +# #' \item{WindSpeed}{wind speed, in m/s} +# #' \item{precip}{precipitation rate, in mm/h} +# #' \item{source}{dataset identifier, in this case always "narr3h"}} +# #' @seealso \code{\link{cruncep}} \code{\link{ebifarm}} \code{\link{narr}} +# "narr3h" diff --git a/modules/data.atmosphere/data/FLUXNET.sitemap.R b/modules/data.atmosphere/data/FLUXNET.sitemap.R new file mode 100644 index 00000000000..823665ae51a --- /dev/null +++ b/modules/data.atmosphere/data/FLUXNET.sitemap.R @@ -0,0 +1,5 @@ + +FLUXNET.sitemap <- utils::read.csv( + file = "FLUXNET.sitemap.csv", + colClasses = "character", + row.names = 1) From feb0a0a206b43e7d9e7d18e326ede92f3b9e444b Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 4 May 2020 15:59:45 +0200 Subject: [PATCH 0932/2289] avoid clobbering locale in check_with_errors --- scripts/check_with_errors.R | 41 ++++++++++++++++++++++++++++++++----- 1 file changed, 36 insertions(+), 5 deletions(-) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 891bd5ffba0..534d542ffe3 100755 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -1,9 +1,5 @@ #!/usr/bin/env Rscript -# Avoid some spurious failures from platform-dependent sorting -invisible(Sys.setlocale("LC_ALL", "en_US.UTF-8")) # sets sorting in this script -Sys.setenv(LC_ALL = "en_US.UTF-8") # sets sorting in rcmdcheck processes - arg <- commandArgs(trailingOnly = TRUE) pkg <- arg[1] @@ -108,7 +104,15 @@ msg_lines <- function(msg) { msg <- lapply(msg, function(x)x[x != ""]) # prepend message title (e.g. "checking Rd files ... NOTE") to each line - unlist(lapply(msg, function(x)paste(x[[1]], x[-1], sep = ": "))) + unlist(lapply( + msg, + function(x) { + if (length(x) > 1) { + paste(x[[1]], x[-1], sep = ": ") + } else { + x + } + })) } if (cmp$status != "+") { @@ -155,6 +159,33 @@ if (cmp$status != "+") { cur_msgs <- cur_msgs[!grepl("NOTE: Consider adding importFrom", cur_msgs)] lines_changed <- setdiff(cur_msgs, prev_msgs) + + # Crude hack: + # Some messages are locale-dependent in complex ways, + # e.g. 
the note about undocumented datasets concatenates CSV names + # (ordered in the current locale) and objects in RData files (always + # ordered in C locale), and so on. + # As a last effort, we look for pre-existing lines that contain the same + # words in a different order + if (length(lines_changed) > 0) { + prev_words <- strsplit(prev_msgs, " ") + changed_words <- strsplit(lines_changed, " ") + is_reordered <- function(v1, v2) { + length(v1[v1 != ""]) == length(v2[v2 != ""]) && setequal(v1, v2) + } + is_in_prev <- function(line) { + any(vapply( + X = prev_words, + FUN = is_reordered, + FUN.VALUE = logical(1), + line)) + } + in_prev <- vapply( + X = changed_words, + FUN = is_in_prev, + FUN.VALUE = logical(1)) + lines_changed <- lines_changed[!in_prev] + } if (length(lines_changed) > 0) { cat("R check of", pkg, "returned new problems:\n") cat(lines_changed, sep = "\n") From 76b53888ead47d4b45aed19e232cbbb96f262039 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 4 May 2020 16:43:27 +0200 Subject: [PATCH 0933/2289] RTM: use tempdir to avoid complaint about examples creating files in userspace --- modules/rtm/R/fortran.datamodule.R | 8 +++++--- modules/rtm/man/fortran_data_module.Rd | 8 +++++--- 2 files changed, 10 insertions(+), 6 deletions(-) diff --git a/modules/rtm/R/fortran.datamodule.R b/modules/rtm/R/fortran.datamodule.R index 1941641830c..2e6eb1bd6ee 100644 --- a/modules/rtm/R/fortran.datamodule.R +++ b/modules/rtm/R/fortran.datamodule.R @@ -22,14 +22,16 @@ #' z <- seq(exp(1), pi, length.out=42) #' l <- list(x=x, y=y, z=z) ## NOTE that names must be explicitly declared #' l.types <- c('real','integer', 'real*4', 'real*8') -#' fortran_data_module(l, l.types, 'testmod') -#' +#' fortran_data_module(l, l.types, 'testmod', +#' file.path(tempdir(), "testmod.f90")) +#' #' x <- runif(10) #' y <- rnorm(10) #' z <- rgamma(10, 3) #' d <- data.frame(x,y,z) ## NOTE that data.frames are just named lists #' d.types <- rep('real*8', ncol(d)) -#' fortran_data_module(d, d.types, 'random') +#' fortran_data_module(d, d.types, 'random', +#' file.path(tempdir(), "random.f90")) #' @export fortran_data_module <- function(dat, types, modname, fname = paste0(modname, ".f90")) { if (!is.list(dat)) { diff --git a/modules/rtm/man/fortran_data_module.Rd b/modules/rtm/man/fortran_data_module.Rd index d0441cc3553..31b9abc88d7 100644 --- a/modules/rtm/man/fortran_data_module.Rd +++ b/modules/rtm/man/fortran_data_module.Rd @@ -36,14 +36,16 @@ Currently, only numeric data are supported (i.e. no characters). 
z <- seq(exp(1), pi, length.out=42) l <- list(x=x, y=y, z=z) ## NOTE that names must be explicitly declared l.types <- c('real','integer', 'real*4', 'real*8') - fortran_data_module(l, l.types, 'testmod') - + fortran_data_module(l, l.types, 'testmod', + file.path(tempdir(), "testmod.f90")) + x <- runif(10) y <- rnorm(10) z <- rgamma(10, 3) d <- data.frame(x,y,z) ## NOTE that data.frames are just named lists d.types <- rep('real*8', ncol(d)) - fortran_data_module(d, d.types, 'random') + fortran_data_module(d, d.types, 'random', + file.path(tempdir(), "random.f90")) } \author{ Alexey Shiklomanov From d80db13539fd97694866a680f18edf37113e9413 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Tue, 5 May 2020 18:47:02 +0530 Subject: [PATCH 0934/2289] removed unwanted step --- .github/workflows/styler-actions.yml | 8 -------- 1 file changed, 8 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index c5819ae2297..ff5b80dab3c 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -91,11 +91,3 @@ jobs: - uses: r-lib/actions/pr-push@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} - - - # A mock job just to ensure we have a successful build status - finish: - needs: [check] - runs-on: ubuntu-latest - steps: - - run: true From 31e85a60a87513b07a67917df3e82a0ae564a56b Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Tue, 5 May 2020 20:33:44 +0530 Subject: [PATCH 0935/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index ff5b80dab3c..28761a23ff8 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -84,8 +84,8 @@ jobs: for i in $(cat uniq.txt); do make .doc/${i}; done - name: commit run: | - git config --global user.email "rahulagrawal799110@gmail.com" - git config --global user.name "Rahul Agrawal" + git config --global user.email "pecan_bot@example.com" + git config --global user.name "PEcAn stylebot" git add \*.Rd git commit -m 'make' - uses: r-lib/actions/pr-push@master From c719dc59ed7b1d02de0efcffaeee4c69c699f8cf Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 5 May 2020 10:18:29 -0500 Subject: [PATCH 0936/2289] Build develop image (#2595) * autobuild develop image * install git (fixes #2579) --- .github/workflows/depends.yml | 63 +++++++++++++++++++++++++++++++++++ .github/workflows/stale.yml | 1 + docker/depends/Dockerfile | 10 ++++++ 3 files changed, 74 insertions(+) create mode 100644 .github/workflows/depends.yml diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml new file mode 100644 index 00000000000..f7534c97006 --- /dev/null +++ b/.github/workflows/depends.yml @@ -0,0 +1,63 @@ +name: Docker Depends Image + +on: + push: + branches: + - develop + + # this runs on the develop branch + schedule: + - cron: '0 0 * * *' + +jobs: + depends: + #if: github.repository == 'PecanProject/pecan' + + runs-on: ubuntu-latest + + strategy: + fail-fast: false + matrix: + R: + # 3.4 does not work + - 3.5 + # 3.6 != latest in 3.6 series + - 3.6.3 + # 4.0 not released yet + # unstable version + - devel + + steps: + - uses: actions/checkout@v2 + + - name: Build for R version ${{ matrix.R }} + run: | + docker build --pull \ + --build-arg 
R_VERSION=${{ matrix.R }} \ + --tag image \ + docker/depends + + - name: Login into registry + run: | + echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin + if [ -n "${{ secrets.DOCKERHUB_USERNAME }}" -a -n "${{ secrets.DOCKERHUB_PASSWORD }}" ]; then + echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u ${{ secrets.DOCKERHUB_USERNAME }} --password-stdin + fi + + - name: Push docker image ${{ matrix.R }} + run: | + IMAGE_ID=docker.pkg.github.com/${{ github.repository }}/depends + IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]') + + TAG=${{ matrix.R }} + if [ "$TAG" == "3.6.3" ]; then + TAG=3.6 + fi + + docker tag image $IMAGE_ID:${TAG}-develop + docker push $IMAGE_ID:${TAG}-develop + + if [ -n "${{ secrets.DOCKERHUB_USERNAME }}" -a -n "${{ secrets.DOCKERHUB_PASSWORD }}" ]; then + docker tag image pecan/depends:${TAG}-develop + docker push pecan/depends:${TAG}-develop + fi diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml index f2724c16ebe..67c3898ff50 100644 --- a/.github/workflows/stale.yml +++ b/.github/workflows/stale.yml @@ -6,6 +6,7 @@ on: jobs: stale: + if: github.repository == 'PecanProject/pecan' runs-on: ubuntu-latest diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index ca37ccd0586..2e419aab5ff 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -6,6 +6,16 @@ ARG R_VERSION="3.5" FROM rocker/tidyverse:${R_VERSION} MAINTAINER Rob Kooper +# ---------------------------------------------------------------------- +# UPDATE GIT +# This is needed for stretch and github actions +# ---------------------------------------------------------------------- +RUN if [ "$(lsb_release -s -c)" == "stretch" ]; then \ + echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list \ + && apt-get update \ + && apt-get -t stretch-backports upgrade -y git \ + ; fi + # ---------------------------------------------------------------------- # INSTALL BINARY/LIBRARY DEPENDENCIES # ---------------------------------------------------------------------- From 4a7cc603b9e18a4a0b9ce90766e339772f3b5c4e Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Tue, 5 May 2020 22:53:16 +0530 Subject: [PATCH 0937/2289] removing-git-install --- .github/workflows/ci.yml | 14 -------------- 1 file changed, 14 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 9b1e35c6938..16ed42592c7 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -14,20 +14,6 @@ jobs: runs-on: ubuntu-latest container: pecan/depends:develop steps: - - name: check git version - id: gitversion - run: | - v=$(git --version | grep -oE '[0-9\.]+') - v='cat(numeric_version("'${v}'") < "2.18")' - echo "##[set-output name=isold;]$(Rscript -e "${v}")" - - name: upgrade git if needed - # Hack: actions/checkout wants git >= 2.18, rocker 3.5 images have 2.11 - # Assuming debian stretch because newer images have git >= 2.20 already - if: steps.gitversion.outputs.isold == 'TRUE' - run: | - echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list - apt-get update - apt-get -t stretch-backports upgrade -y git - uses: actions/checkout@v2 - run: mkdir -p "${HOME}${R_LIBS#'~'}" shell: bash From 22ecc137d90c59ca9bbd50f8a2e7e4fc28a7ee4d Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 5 May 2020 15:30:12 -0500 Subject: [PATCH 0938/2289] autobuild pecan/depends:develop image --- 
.github/workflows/depends.yml | 39 +++++++++++++++++++++++------------ 1 file changed, 26 insertions(+), 13 deletions(-) diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml index f7534c97006..909d5153915 100644 --- a/.github/workflows/depends.yml +++ b/.github/workflows/depends.yml @@ -19,6 +19,9 @@ jobs: fail-fast: false matrix: R: + # develop is special since it will build the default depends for develop + # this is not a real R version. + - develop # 3.4 does not work - 3.5 # 3.6 != latest in 3.6 series @@ -26,16 +29,31 @@ jobs: # 4.0 not released yet # unstable version - devel + include: + - R: develop + TAG: develop + - R: 3.5 + TAG: 3.5-develop + - R: 3.6.3 + TAG: 3.6-develop + - R: devel + TAG: unstable-develop steps: - uses: actions/checkout@v2 - name: Build for R version ${{ matrix.R }} run: | - docker build --pull \ - --build-arg R_VERSION=${{ matrix.R }} \ - --tag image \ - docker/depends + if [ "${{ matrix.R }}" == "develop" ]; then + docker build --pull \ + --tag image \ + docker/depends + else + docker build --pull \ + --build-arg R_VERSION=${{ matrix.R }} \ + --tag image \ + docker/depends + fi - name: Login into registry run: | @@ -49,15 +67,10 @@ jobs: IMAGE_ID=docker.pkg.github.com/${{ github.repository }}/depends IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]') - TAG=${{ matrix.R }} - if [ "$TAG" == "3.6.3" ]; then - TAG=3.6 - fi - - docker tag image $IMAGE_ID:${TAG}-develop - docker push $IMAGE_ID:${TAG}-develop + docker tag image $IMAGE_ID:${{ matrix.TAG }} + docker push $IMAGE_ID:${{ matrix.TAG }} if [ -n "${{ secrets.DOCKERHUB_USERNAME }}" -a -n "${{ secrets.DOCKERHUB_PASSWORD }}" ]; then - docker tag image pecan/depends:${TAG}-develop - docker push pecan/depends:${TAG}-develop + docker tag image pecan/depends:${{ matrix.TAG }} + docker push pecan/depends:${{ matrix.TAG }} fi From dc1438f5784e2073c75adc2ca6b8ada73e0c232b Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Tue, 5 May 2020 14:24:37 -0700 Subject: [PATCH 0939/2289] Create ED2INv2.2.0 based on ED2IN.git still need to finish finding and replacing variables --- models/ed/inst/ED2INv2.2.0 | 1997 ++++++++++++++++++++++++++++++++++++ 1 file changed, 1997 insertions(+) create mode 100644 models/ed/inst/ED2INv2.2.0 diff --git a/models/ed/inst/ED2INv2.2.0 b/models/ed/inst/ED2INv2.2.0 new file mode 100644 index 00000000000..815b85b2ac7 --- /dev/null +++ b/models/ed/inst/ED2INv2.2.0 @@ -0,0 +1,1997 @@ +!==========================================================================================! +!==========================================================================================! +! ED2IN . ! +! ! +! This is the file that contains the variables that define how ED is to be run. There ! +! is some brief information about the variables here. Some of the variables allow ! +! switching between algorithms; in this case we highlight the status of each ! +! implementation using the following labels: ! +! ! +! ED-2.2 default. ! +! These are the options described in the ED-2.2 technical note (L19) and that have been ! +! thoroughly tested. When unsure, we recommend to use this option. ! +! ! +! ED-2.2 alternative. ! +! These are the options described either in the ED-2.2 or other publications (mostly ! +! X16), and should be fully functional. Depending on the application, this may be the ! +! most appropriate option. ! +! ! +! Legacy. ! +! Older implementations that have been implemented in ED-1.0, ED-2.0, or ED-2.1. These ! +! 
options are still fully functional and may be the most appropriate option depending !
+! on the question. !
+! !
+! Beta. !
+! Well-developed alternative implementations to the ED-2.2 default. These !
+! implementations are nearly complete, but they have not been thoroughly tested. Feel !
+! free to try if you think it is useful, but bear in mind that they may still need some !
+! adjustments. !
+! !
+! Under development. !
+! Alternative implementations to the ED-2.2 default, but not yet fully implemented. Do !
+! not use these options unless you are willing to contribute to the development. !
+! !
+! Deprecated. !
+! Older implementations that have shown important limitations. They are included for !
+! back-compatibility but we strongly discourage their use in most cases. !
+! !
+! Non-functional. !
+! Older implementations that have been discontinued or methods not yet implemented. !
+! Do not use these options. !
+! !
+! References: !
+! !
+! Longo M, Knox RG, Medvigy DM, Levine NM, Dietze MC, Kim Y, Swann ALS, Zhang K, !
+! Rollinson CR, Bras RL, Wofsy SC, Moorcroft PR. 2019. The biophysics, ecology, and !
+! biogeochemistry of functionally diverse, vertically and horizontally heterogeneous !
+! ecosystems: the Ecosystem Demography model, version 2.2 - part 1: Model description. !
+! Geosci. Model Dev. 12: 4309-4346, doi:10.5194/gmd-12-4309-2019 (L19). !
+! !
+! Xu X, Medvigy D, Powers JS, Becknell JM, Guan K. 2016. Diversity in plant hydraulic !
+! traits explains seasonal and inter-annual variations of vegetation dynamics in !
+! seasonally dry tropical forests. New Phytol. 212: 80-95, doi:10.1111/nph.14009 (X16). !
+!------------------------------------------------------------------------------------------!
+$ED_NL
+
+ !----- Simulation title (64 characters). -----------------------------------------------!
+ NL%EXPNME = 'ED version 2.2 PEcAn @ENSNAME@'
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! Type of run: !
+ ! !
+ ! INITIAL -- ED-2 will start a new run. Initial conditions will be set by !
+ ! IED_INIT_MODE. The initial conditions can be based on a previous run !
+ ! (restart/history), but then it will use only the biomass information as a !
+ ! simple initial condition. Energy, water, and CO2 will use standard !
+ ! initial conditions. !
+ ! HISTORY -- ED-2 will resume a simulation from the last history, and every variable !
+ ! (including forest structure and thermodynamics) will be assigned based on !
+ ! the history file. !
+ ! IMPORTANT: this option is intended for continuing interrupted simulations !
+ ! (e.g. power outage). We discourage users from selecting this option !
+ ! with restart files generated by different commits. !
+ !---------------------------------------------------------------------------------------!
+ NL%RUNTYPE = 'INITIAL'
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! Start of simulation. Information must be given in UTC time. !
+ !---------------------------------------------------------------------------------------!
+ NL%IMONTHA = @START_MONTH@ ! Month
+ NL%IDATEA = @START_DAY@ ! Day
+ NL%IYEARA = @START_YEAR@ ! Year
+ NL%ITIMEA = 0000 ! UTC
+ !---------------------------------------------------------------------------------------!
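+
+ !---------------------------------------------------------------------------------------!
+ ! Worked example (hypothetical values): the @...@ tokens above are PEcAn template !
+ ! tags, filled in when PEcAn writes the final ED2IN for each run. For a run that !
+ ! starts on 1 January 2004 at midnight UTC, the block above would be rendered as !
+ ! IMONTHA = 01, IDATEA = 01, IYEARA = 2004, and ITIMEA = 0000. !
+ !---------------------------------------------------------------------------------------!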
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! End of simulation. Information must be given in UTC time. !
+ !---------------------------------------------------------------------------------------!
+ NL%IMONTHZ = @END_MONTH@ ! Month
+ NL%IDATEZ = @END_DAY@ ! Day
+ NL%IYEARZ = @END_YEAR@ ! Year
+ NL%ITIMEZ = 0000 ! UTC
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! DTLSM -- Basic time step [seconds] for photosynthesis, and maximum step for thermo- !
+ ! dynamics. Recommended values range from 240 to 900 seconds when using the !
+ ! 4th-order Runge Kutta integrator (INTEGRATION_SCHEME=1). We discourage !
+ ! using the forward Euler scheme (INTEGRATION_SCHEME=0), but in case you !
+ ! really want to use it, set the time step to 60 seconds or shorter. !
+ ! RADFRQ -- Time step for the canopy radiative transfer model [seconds]. This value !
+ ! must be an integer multiple of DTLSM, and we recommend it to be exactly !
+ ! the same as DTLSM. !
+ !---------------------------------------------------------------------------------------!
+ NL%DTLSM = 600
+ NL%RADFRQ = 600
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! MONTH_YRSTEP -- Month in which the yearly time step (patch dynamics) should occur. !
+ !---------------------------------------------------------------------------------------!
+ NL%MONTH_YRSTEP = 7
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! The following variables are used in case the user wants to run a regional run. !
+ ! !
+ ! N_ED_REGION -- number of regions for which you want to run ED. This can be set to !
+ ! zero provided that N_POI is not... !
+ ! GRID_TYPE -- which kind of grid to run: !
+ ! 0. Longitude/latitude grid !
+ ! 1. Polar-stereographic !
+ !---------------------------------------------------------------------------------------!
+ NL%N_ED_REGION = 0
+ NL%GRID_TYPE = 0
+
+ !------------------------------------------------------------------------------------!
+ ! The following variables are used only when GRID_TYPE is set to 0. You must !
+ ! provide one value for each grid, except otherwise noted. !
+ ! !
+ ! GRID_RES -- Grid resolution, in degrees (first grid only, the other grids !
+ ! resolution will be defined by NSTRATX/NSTRATY). !
+ ! ED_REG_LATMIN -- Southernmost point of each region. !
+ ! ED_REG_LATMAX -- Northernmost point of each region. !
+ ! ED_REG_LONMIN -- Westernmost point of each region. !
+ ! ED_REG_LONMAX -- Easternmost point of each region. !
+ !------------------------------------------------------------------------------------!
+ NL%GRID_RES = 1.0
+ NL%ED_REG_LATMIN = -12.0, -7.5, 10.0, -6.0
+ NL%ED_REG_LATMAX = 1.0, -3.5, 15.0, -1.0
+ NL%ED_REG_LONMIN = -66.0,-58.5, 70.0, -63.0
+ NL%ED_REG_LONMAX = -49.0,-54.5, 35.0, -53.0
+ !------------------------------------------------------------------------------------!
+
+
+
+ !------------------------------------------------------------------------------------!
+ ! The following variables are used only when GRID_TYPE is set to 1. !
+ ! !
+ ! NNXP -- number of points in the X direction. 
One value for each grid. !
+ ! NNYP -- number of points in the Y direction. One value for each grid. !
+ ! DELTAX -- grid resolution in the X direction, near the grid pole. Units: [ m]. !
+ ! this value is used to define the first grid only, other grids are !
+ ! defined using NNSTRATX. !
+ ! DELTAY -- grid resolution in the Y direction, near the grid pole. Units: [ m]. !
+ ! this value is used to define the first grid only, other grids are !
+ ! defined using NNSTRATX. Unless you are running some specific tests, !
+ ! both DELTAX and DELTAY should be the same. !
+ ! POLELAT -- Latitude of the pole point. Set this close to CENTLAT for a more !
+ ! traditional "square" domain. One value for all grids. !
+ ! POLELON -- Longitude of the pole point. Set this close to CENTLON for a more !
+ ! traditional "square" domain. One value for all grids. !
+ ! CENTLAT -- Latitude of the central point. One value for each grid. !
+ ! CENTLON -- Longitude of the central point. One value for each grid. !
+ !------------------------------------------------------------------------------------!
+ NL%NNXP = 110
+ NL%NNYP = 70
+ NL%DELTAX = 60000
+ NL%DELTAY = 60000
+ NL%POLELAT = -2.857
+ NL%POLELON = -54.959
+ NL%CENTLAT = -2.857
+ NL%CENTLON = -54.959
+ !------------------------------------------------------------------------------------!
+
+
+
+ !------------------------------------------------------------------------------------!
+ ! Nest ratios. These values are used by both GRID_TYPE=0 and GRID_TYPE=1. !
+ ! NSTRATX -- this is will divide the values given by DELTAX or GRID_RES for the !
+ ! nested grids. The first value should be always one. !
+ ! NSTRATY -- this is will divide the values given by DELTAY or GRID_RES for the !
+ ! nested grids. The first value should be always one, and this must !
+ ! be always the same as NSTRATX when GRID_TYPE = 0, and this is also !
+ ! strongly recommended for when GRID_TYPE = 1. !
+ !------------------------------------------------------------------------------------!
+ NL%NSTRATX = 1,4
+ NL%NSTRATY = 1,4
+ !------------------------------------------------------------------------------------!
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! The following variables are used to define single polygon of interest runs, and !
+ ! they are ignored when N_ED_REGION = 0. !
+ ! !
+ ! N_POI -- number of polygons of interest (POIs). This can be zero as long as !
+ ! N_ED_REGION is not. !
+ ! POI_LAT -- list of latitudes of each POI. !
+ ! POI_LON -- list of longitudes of each POI. !
+ ! POI_RES -- grid resolution of each POI (degrees). This is used only to define the !
+ ! soil types !
+ !---------------------------------------------------------------------------------------!
+ NL%N_POI = 1
+ NL%POI_LAT = @SITE_LAT@
+ NL%POI_LON = @SITE_LON@
+ NL%POI_RES = 1.00
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! LOADMETH -- Load balancing method. This is used only in regional runs run in !
+ ! parallel. !
+ ! 0. Let ED decide the best way of splitting the polygons. Commonest !
+ ! option and default. !
+ ! 1. One of the methods to split polygons based on their previous !
+ ! work load. Developers only. !
+ ! 2. Try to load an equal number of SITES per node. Useful for when !
+ ! total number of polygons is the same as the total number of cores. !
+ ! 3. Another method to split polygons based on their previous work load. !
+ ! Developers only. !
+ !---------------------------------------------------------------------------------------!
+ NL%LOADMETH = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ED2 File output. For all the variables 0 means no output and 3 means HDF5 output. !
+ ! !
+ ! IFOUTPUT -- Fast analysis. These are mostly polygon-level averages, and the time !
+ ! interval between files is determined by FRQANL !
+ ! IDOUTPUT -- Daily means (one file per day) !
+ ! IMOUTPUT -- Monthly means (one file per month) !
+ ! IQOUTPUT -- Monthly means of the diurnal cycle (one file per month). The number !
+ ! of points for the diurnal cycle is 86400 / FRQANL !
+ ! IYOUTPUT -- Annual output. !
+ ! ITOUTPUT -- Instantaneous fluxes, mostly polygon-level variables, one file per year. !
+ ! IOOUTPUT -- Observation time output. Equivalent to IFOUTPUT, except only at the !
+ ! times specified in OBSTIME_DB. !
+ ! ISOUTPUT -- restart file, for HISTORY runs. The time interval between files is !
+ ! determined by FRQHIS !
+ !---------------------------------------------------------------------------------------!
+ NL%IFOUTPUT = 0
+ NL%IDOUTPUT = 0
+ NL%IMOUTPUT = 3
+ NL%IQOUTPUT = 0
+ NL%IYOUTPUT = 0
+ NL%ITOUTPUT = 3
+ NL%IOOUTPUT = 0
+ NL%ISOUTPUT = 3
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! The following variables control whether site-, patch-, and cohort-level time !
+ ! means and mean sum of squares should be included in the output files or not. !
+ ! If these options are on, then they provide much more detailed output, but they may !
+ ! add a lot of disk space (especially if you want the fast output to have the detailed !
+ ! output). !
+ ! !
+ ! IADD_SITE_MEANS -- Add site-level averages to the output !
+ ! IADD_PATCH_MEANS -- Add patch-level averages to the output !
+ ! IADD_COHORT_MEANS -- Add cohort-level averages to the output !
+ ! !
+ ! The options are additive, and the indices below represent the different types of !
+ ! output: !
+ ! !
+ ! 0 -- No detailed output. !
+ ! 1 -- Include the level in monthly output (IMOUTPUT and IQOUTPUT) !
+ ! 2 -- Include the level in daily output (IDOUTPUT). !
+ ! 4 -- Include the level in sub-daily output (IFOUTPUT and IOOUTPUT). !
+ ! !
+ ! For example, in case you don't want any cohort output, set IADD_COHORT_MEANS to zero. !
+ ! In case you want to include cohort means in both daily and monthly outputs, !
+ ! but not the sub-daily means, set IADD_COHORT_MEANS to 3 (1 + 2). Any combination of !
+ ! the above outputs is acceptable (i.e., any number from 0 to 7). !
+ !---------------------------------------------------------------------------------------!
+ NL%IADD_SITE_MEANS = 1
+ NL%IADD_PATCH_MEANS = 1
+ NL%IADD_COHORT_MEANS = 1
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ATTACH_METADATA -- Flag for attaching metadata to HDF datasets. Attaching metadata !
+ ! will aid new users in quickly identifying dataset descriptions but will compromise I/O performance significantly. !
+ ! 0 = no metadata, 1 = attach metadata !
+ !---------------------------------------------------------------------------------------!
+ NL%ATTACH_METADATA = 1
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! UNITFAST -- The following variables control the units for FRQFAST/OUTFAST, and !
+ ! UNITSTATE FRQSTATE/OUTSTATE, respectively. Possible values are: !
+ ! 0. Seconds; !
+ ! 1. Days; !
+ ! 2. Calendar months (variable) !
+ ! 3. Calendar years (variable) !
+ ! !
+ ! N.B.: 1. In case OUTFAST/OUTSTATE are set to special flags (-1 or -2) !
+ ! UNITFAST/UNITSTATE will be ignored for them. !
+ ! 2. In case IQOUTPUT is set to 3, then UNITFAST has to be 0. !
+ ! !
+ !---------------------------------------------------------------------------------------!
+ NL%UNITFAST = 0
+ NL%UNITSTATE = 3
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! OUTFAST/OUTSTATE -- these control the number of times per file. !
+ ! 0. Each time gets its own file !
+ ! -1. One file per day !
+ ! -2. One file per month !
+ ! > 0. Multiple timepoints can be recorded to a single file reducing !
+ ! the number of files and i/o time in post-processing. !
+ ! Multiple timepoints should not be used in the history files !
+ ! if you intend to use these for HISTORY runs. !
+ !---------------------------------------------------------------------------------------!
+ NL%OUTFAST = 0
+ NL%OUTSTATE = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ICLOBBER -- What to do in case the model finds a file that it was supposed to have !
+ ! written? 0 = stop the run, 1 = overwrite without warning. !
+ ! FRQFAST -- time interval between analysis files, units defined by UNITFAST. !
+ ! FRQSTATE -- time interval between history files, units defined by UNITSTATE. !
+ !---------------------------------------------------------------------------------------!
+ NL%ICLOBBER = 1
+ NL%FRQFAST = 3600.
+ NL%FRQSTATE = 1.
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! FFILOUT -- Path and prefix for analysis files (all but history/restart). !
+ ! SFILOUT -- Path and prefix for history files. !
+ !---------------------------------------------------------------------------------------!
+ NL%FFILOUT = '/mypath/generic-prefix'
+ NL%SFILOUT = '/mypath/generic-prefix'
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! IED_INIT_MODE -- This controls how the plant community and soil carbon pools are !
+ ! initialised. !
+ ! !
+ ! -1. Start from a true bare ground run, or an absolute desert run. This will !
+ ! never grow any plant. !
+ ! 0. Start from near-bare ground (only a few seedlings from each PFT to be included !
+ ! in this run). !
+ ! 1. (Deprecated) This will use history files written by ED-1.0. It will read the 
ecosystem state (like biomass, LAI, plant density, etc.), but it will start ! + ! the thermodynamic state as a new simulation. ! + ! 2. (Deprecated) Same as 1, but it uses history files from ED-2.0 without multiple ! + ! sites, and with the old PFT numbers. ! + ! 3. Same as 1, but using history files from ED-2.0 with multiple sites and ! + ! TOPMODEL hydrology. ! + ! 4. Same as 1, but using ED2.1 H5 history/state files that take the form: ! + ! 'dir/prefix-gxx.h5' ! + ! Initialization files MUST end with -gxx.h5 where xx is a two digit integer ! + ! grid number. Each grid has its own initialization file. As an example, if a ! + ! user has two files to initialize their grids with: ! + ! example_file_init-g01.h5 and example_file_init-g02.h5 ! + ! SFILIN = 'example_file_init' ! + ! ! + ! 5. This is similar to option 4, except that you may provide several files ! + ! (including a mix of regional and POI runs, each file ending at a different ! + ! date). This will not check date nor grid structure, it will simply read all ! + ! polygons and match the nearest neighbour to each polygon of your run. SFILIN ! + ! must have the directory common to all history files that are sought to be used,! + ! up to the last character the files have in common. For example if your files ! + ! are ! + ! /mypath/P0001-S-2000-01-01-000000-g01.h5, ! + ! /mypath/P0002-S-1966-01-01-000000-g02.h5, ! + ! ... ! + ! /mypath/P1000-S-1687-01-01-000000-g01.h5: ! + ! SFILIN = '/mypath/P' ! + ! ! + ! 6 - Initialize with ED-2 style files without multiple sites, exactly like option ! + ! 2, except that the PFT types are preserved. ! + ! ! + ! 7. Initialize from a list of both POI and gridded ED2.1 state files, organized ! + ! in the same manner as 5. This method overrides the soil database info and ! + ! takes the soil texture and soil moisture information from the initializing ! + ! ED2.1 state file. It allows for different layering, and assigns via nearest ! + ! neighbor. ! + !---------------------------------------------------------------------------------------! + NL%IED_INIT_MODE = 6 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! EDRES -- Expected input resolution for ED2.0 files. This is not used unless ! + ! IED_INIT_MODE = 3. ! + !---------------------------------------------------------------------------------------! + NL%EDRES = 1.0 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! SFILIN -- The meaning and the size of this variable depends on the type of run, set ! + ! at variable RUNTYPE. ! + ! ! + ! 1. INITIAL. Then this is the path+prefix of the previous ecosystem state. This has ! + ! dimension of the number of grids so you can initialize each grid with a ! + ! different dataset. In case only one path+prefix is given, the same will ! + ! be used for every grid. Only some ecosystem variables will be set up ! + ! here, and the initial condition will be in thermodynamic equilibrium. ! + ! ! + ! 2. HISTORY. This is the path+prefix of the history file that will be used. Only the ! + ! path+prefix will be used, as the history for every grid must have come ! + ! from the same simulation. ! + !---------------------------------------------------------------------------------------! + NL%SFILIN = '/mypath/myprefix.' 
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! History file information. These variables are used to continue a simulation from !
+ ! a point other than the beginning. Time must be in UTC. !
+ ! !
+ ! IMONTHH -- the time of the history file. This is the only place you need to change !
+ ! IDATEH dates for a HISTORY run. You may change IMONTHZ and related in case you !
+ ! IYEARH want to extend the run, but you should NOT change IMONTHA and related. !
+ ! ITIMEH !
+ !---------------------------------------------------------------------------------------!
+ NL%IYEARH = 2000 ! Year
+ NL%IMONTHH = 08 ! Month
+ NL%IDATEH = 01 ! Day
+ NL%ITIMEH = 0000 ! UTC
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! NZG - Number of soil layers. One value for all regions and polygons of interest. !
+ ! NZS - Maximum number of snow/water ponding layers. One value for all regions !
+ ! and polygons of interest. This is used only when snow is accumulating. !
+ ! If only liquid water is standing, the water will always be collapsed !
+ ! into a single layer. !
+ !---------------------------------------------------------------------------------------!
+ NL%NZG = 16
+ NL%NZS = 4
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ISOILFLG -- This controls how to initialise soil texture. This must be a list with !
+ ! N_ED_REGION+N_POI elements. The first N_ED_REGION elements correspond to !
+ ! each gridded domain (from first to last). Elements between N_ED_REGION+1 !
+ ! and N_ED_REGION+N_POI correspond to the polygons of interest (from 1 to !
+ ! N_POI). Options are: !
+ ! 1 -- Read in soil textural class from the files set in SOIL_DATABASE. !
+ ! 2 -- Assign either the value set by NSLCON (see below) or define soil !
+ ! texture from SLXSAND and SLXCLAY. !
+ !---------------------------------------------------------------------------------------!
+ NL%ISOILFLG = 2
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! NSLCON -- ED-2 Soil classes that the model will use when ISOILFLG is set to 2. !
+ ! Possible values are: !
+ !---------------------------------------------------------------------------------------!
+ ! 1 -- sand | 7 -- silty clay loam | 13 -- bedrock !
+ ! 2 -- loamy sand | 8 -- clayey loam | 14 -- silt !
+ ! 3 -- sandy loam | 9 -- sandy clay | 15 -- heavy clay !
+ ! 4 -- silt loam | 10 -- silty clay | 16 -- clayey sand !
+ ! 5 -- loam | 11 -- clay | 17 -- clayey silt !
+ ! 6 -- sandy clay loam | 12 -- peat !
+ !---------------------------------------------------------------------------------------!
+ NL%NSLCON = 6
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ISOILCOL -- LEAF-3 and ED-2 soil colour classes that the model will use when ISOILFLG !
+ ! is set to 2. Soil classes are from 1 to 20 (1 = lightest; 20 = darkest). !
+ ! The values are the same as CLM-4.0. 
+   !             The table is the albedo for visible and near infra-red.                    !
+   !---------------------------------------------------------------------------------------!
+   !                                                                                        !
+   !        |-----------------------------------------------------------------------|      !
+   !        |       |  Dry soil   |  Saturated  |       |  Dry soil   |  Saturated  |      !
+   !        | Class |------+------+------+------| Class |------+------+------+------|      !
+   !        |       |  VIS |  NIR |  VIS |  NIR |       |  VIS |  NIR |  VIS |  NIR |      !
+   !        |-------+------+------+------+------+-------+------+------+------+------|      !
+   !        |     1 | 0.36 | 0.61 | 0.25 | 0.50 |    11 | 0.24 | 0.37 | 0.13 | 0.26 |      !
+   !        |     2 | 0.34 | 0.57 | 0.23 | 0.46 |    12 | 0.23 | 0.35 | 0.12 | 0.24 |      !
+   !        |     3 | 0.32 | 0.53 | 0.21 | 0.42 |    13 | 0.22 | 0.33 | 0.11 | 0.22 |      !
+   !        |     4 | 0.31 | 0.51 | 0.20 | 0.40 |    14 | 0.20 | 0.31 | 0.10 | 0.20 |      !
+   !        |     5 | 0.30 | 0.49 | 0.19 | 0.38 |    15 | 0.18 | 0.29 | 0.09 | 0.18 |      !
+   !        |     6 | 0.29 | 0.48 | 0.18 | 0.36 |    16 | 0.16 | 0.27 | 0.08 | 0.16 |      !
+   !        |     7 | 0.28 | 0.45 | 0.17 | 0.34 |    17 | 0.14 | 0.25 | 0.07 | 0.14 |      !
+   !        |     8 | 0.27 | 0.43 | 0.16 | 0.32 |    18 | 0.12 | 0.23 | 0.06 | 0.12 |      !
+   !        |     9 | 0.26 | 0.41 | 0.15 | 0.30 |    19 | 0.10 | 0.21 | 0.05 | 0.10 |      !
+   !        |    10 | 0.25 | 0.39 | 0.14 | 0.28 |    20 | 0.08 | 0.16 | 0.04 | 0.08 |      !
+   !        |-----------------------------------------------------------------------|      !
+   !                                                                                        !
+   !    Soil type 21 is a special case in which we use the albedo method that used to be    !
+   ! the default in ED-2.1.                                                                 !
+   !---------------------------------------------------------------------------------------!
+   NL%ISOILCOL = 14
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    These variables are used to define the soil properties when you don't want to use   !
+   ! the standard soil classes.                                                             !
+   !                                                                                        !
+   ! SLXCLAY -- Prescribed fraction of clay  [0-1].                                         !
+   ! SLXSAND -- Prescribed fraction of sand  [0-1].                                         !
+   !                                                                                        !
+   !    They are used only when ISOILFLG is 2, both values are between 0. and 1., and       !
+   ! their sum doesn't exceed 1.  In case ISOILFLG is 2 but the fractions do not meet the   !
+   ! criteria, ED-2 uses NSLCON instead.                                                    !
+   !---------------------------------------------------------------------------------------!
+   NL%SLXCLAY = 0.345
+   NL%SLXSAND = 0.562
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    Soil grid and initial conditions in case no file is provided.  Provide NZG values   !
+   ! for the following variables (always from deepest to shallowest layer).                 !
+   !                                                                                        !
+   ! SLZ    - depth of the bottom of each soil layer [m].  Values must be negative.         !
+   ! SLMSTR - the initial soil moisture, given as a soil moisture index:                    !
+   !             -1 = dry air soil moisture                                                 !
+   !              0 = wilting point                                                         !
+   !              1 = field capacity                                                        !
+   !              2 = porosity (saturation)                                                 !
+   !          Values can be fractional, in which case they will be linearly interpolated    !
+   !          between the special points (e.g. 0.5 will put soil moisture half way          !
+   !          between the wilting point and field capacity).                                !
+   ! STGOFF - initial temperature offset (soil temperature = air temperature + offset).     !
+   !---------------------------------------------------------------------------------------!
+   NL%SLZ    = -8.000,-7.072,-6.198,-5.380,-4.617,-3.910,-3.259,-2.664,-2.127,-1.648,
+               -1.228,-0.866,-0.566,-0.326,-0.150,-0.040
+   NL%SLMSTR =  1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000,
+                1.000, 1.000, 1.000, 1.000, 1.000, 1.000
+   NL%STGOFF =  0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000,
+                0.000, 0.000, 0.000, 0.000, 0.000, 0.000
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! Input databases                                                                        !
+   ! VEG_DATABASE     -- vegetation database, used only to determine the land/water mask.   !
+   !                     Fill with the path and the prefix.                                 !
+   ! SOIL_DATABASE    -- soil database, used to determine the soil type.  Fill with the     !
+   !                     path and the prefix.                                               !
+   ! LU_DATABASE      -- land-use change disturbance rates database, used only when         !
+   !                     IANTH_DISTURB is set to 1.  Fill with the path and the prefix.     !
+   ! PLANTATION_FILE  -- Character string for the path to the forest plantation fraction    !
+   !                     file.  This is used only when IANTH_DISTURB is set to 1 and the    !
+   !                     user wants to simulate forest plantations.  Otherwise, leave it    !
+   !                     empty (PLANTATION_FILE='').                                        !
+   ! THSUMS_DATABASE  -- input directory with the dataset to initialise chilling-degree     !
+   !                     and growing-degree days, which is used to drive the cold-          !
+   !                     deciduous phenology (you must always provide this, even when       !
+   !                     your PFTs are not cold deciduous).                                 !
+   ! ED_MET_DRIVER_DB -- File containing information for meteorological driver              !
+   !                     instructions (the "header" file).                                  !
+   ! OBSTIME_DB       -- File containing times of desired IOOUTPUT.                         !
+   !                     Reference file: /ED/run/obstime_template.time                      !
+   ! SOILSTATE_DB     -- If ISOILSTATEINIT=1, this variable specifies the full path of the  !
+   !                     file that contains soil moisture and temperature information.      !
+   ! SOILDEPTH_DB     -- If ISOILDEPTHFLG=1, this variable specifies the full path of the   !
+   !                     file that contains soil depth information.                         !
+   !---------------------------------------------------------------------------------------!
+   NL%VEG_DATABASE     = '/mypath/veget_data/oge2/OGE2_'
+   NL%SOIL_DATABASE    = '/mypath/soil_data/texture/Hengl/Hengl_'
+   NL%LU_DATABASE      = '/mypath/glu-3.3.1+sa1.bau/glu-3.3.1+sa1.bau-'
+   NL%PLANTATION_FILE  = ''
+   NL%THSUMS_DATABASE  = '/mypath/thermal_sums/'
+   NL%ED_MET_DRIVER_DB = '/mypath/ED_MET_DRIVER_HEADER'
+   NL%OBSTIME_DB       = ''   ! Reference file: /ED/run/obstime_template.time
+   NL%SOILSTATE_DB     = '/mypath/soil_data/temp+moist/STW1996OCT.dat'
+   NL%SOILDEPTH_DB     = '/mypath/soil_data/depth/H250mBRD/H250mBRD_'
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ISOILSTATEINIT -- Variable controlling how to initialise the soil temperature and      !
+   !                   moisture:                                                            !
+   !                   0. Use SLMSTR and STGOFF.                                            !
+   !                   1. Read from SOILSTATE_DB.                                           !
+   ! ISOILDEPTHFLG  -- Variable controlling how to initialise soil depth:                   !
+   !                   0. Constant, always defined by the first (deepest) SLZ layer.        !
+   !                   1. Read from SOILDEPTH_DB (ED-1.0 style, ascii file).                !
+   !                   2. Read from SOILDEPTH_DB (ED-2.2 style, hdf5 files + header).       !
+   !---------------------------------------------------------------------------------------!
+   NL%ISOILSTATEINIT = 0
+   NL%ISOILDEPTHFLG  = 0
+   !---------------------------------------------------------------------------------------!
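+
+   !---------------------------------------------------------------------------------------!
+   !    Worked example (for illustration only).  With ISOILSTATEINIT = 0, the SLMSTR        !
+   ! values above are used, and fractional values interpolate linearly between the          !
+   ! special points: SLMSTR = 0.5 places the initial soil moisture half way between the     !
+   ! wilting point and field capacity, while SLMSTR = 1.5 places it half way between        !
+   ! field capacity and porosity.  A hypothetical drier initialisation would repeat 0.500   !
+   ! for all NZG layers:                                                                    !
+   !    NL%SLMSTR = 0.500, 0.500, 0.500, 0.500, 0.500, 0.500, 0.500, 0.500,                 !
+   !                0.500, 0.500, 0.500, 0.500, 0.500, 0.500, 0.500, 0.500                  !
+   !---------------------------------------------------------------------------------------!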
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ISOILBC -- This controls the soil moisture boundary condition at the bottom.  Choose   !
+   !            the option according to the site characteristics.                          !
+   !            0. Flat bedrock.  Flux from the bottom of the bottommost layer is zero.     !
+   !            1. Gravitational flow (free drainage).  The flux from the bottom of the     !
+   !               bottommost layer is due to the gradient of height only.                  !
+   !            2. Lateral drainage.  Similar to free drainage, but the gradient is         !
+   !               reduced by the slope not being completely vertical.  The reduction is    !
+   !               controlled by variable SLDRAIN.  In the future, options 0, 1, and 2      !
+   !               may be combined into a single option.                                    !
+   !            3. Aquifer.  Soil moisture of the fictitious layer beneath the bottom is    !
+   !               always at saturation.                                                    !
+   !---------------------------------------------------------------------------------------!
+   NL%ISOILBC = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! SLDRAIN -- This is used only when ISOILBC is set to 2.  In this case SLDRAIN is the    !
+   !            equivalent slope that will slow down drainage.  If this is set to zero,     !
+   !            lateral drainage reduces to flat bedrock, and if this is set to 90,         !
+   !            lateral drainage becomes free drainage.  SLDRAIN must be between 0 and 90.  !
+   !---------------------------------------------------------------------------------------!
+   NL%SLDRAIN = 90.
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IVEGT_DYNAMICS -- The vegetation dynamics scheme:                                      !
+   !                   0. No vegetation dynamics, the initial state will be preserved,      !
+   !                      even though the model will compute the potential values.  This    !
+   !                      option is useful for theoretical simulations only.                !
+   !                   1. Normal ED vegetation dynamics (Moorcroft et al. 2001).            !
+   !                      The normal option for almost any simulation.                      !
+   !---------------------------------------------------------------------------------------!
+   NL%IVEGT_DYNAMICS = 1
+   !---------------------------------------------------------------------------------------!
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IBIGLEAF -- Do you want to run ED as a 'big leaf' model?                               !
+   !             0. No, use the standard size- and age-structure (Moorcroft et al. 2001,    !
+   !                Ecol. Monogr.).  This is the recommended method for most applications.  !
+   !             1. 'Big leaf' ED (Levine et al. 2016, PNAS): this will have no horizontal  !
+   !                or vertical heterogeneities; 1 patch per PFT and 1 cohort per patch;    !
+   !                no vertical growth, recruits will 'appear' instantaneously at maximum   !
+   !                height.                                                                 !
+   !                                                                                        !
+   ! N.B. if you set IBIGLEAF to 1, you MUST turn off the crown model (CROWN_MOD = 0).      !
+   !---------------------------------------------------------------------------------------!
+   NL%IBIGLEAF = 0
+   !---------------------------------------------------------------------------------------!
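+
+   !---------------------------------------------------------------------------------------!
+   !    Example (for illustration only): following the N.B. above, a hypothetical 'big     !
+   ! leaf' run would set the two flags together, since the crown model must be turned off   !
+   ! in that case:                                                                          !
+   !    NL%IBIGLEAF  = 1                                                                    !
+   !    NL%CROWN_MOD = 0                                                                    !
+   !---------------------------------------------------------------------------------------!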
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! INTEGRATION_SCHEME -- The biophysics integration scheme.                               !
+   !                       0. (Deprecated) Euler step.  The fastest, but it has only a      !
+   !                          very crude estimate of time-step errors.                      !
+   !                       1. (ED-2.2 default) Fourth-order Runge-Kutta method.             !
+   !                       2. (Deprecated) Second-order Runge-Kutta method (Heun's).        !
+   !                          This is not faster than option 1, and it will eventually      !
+   !                          be removed.                                                   !
+   !                       3. (Under development) Hybrid stepping (backward Euler BDF2      !
+   !                          implicit step for the canopy air and leaf temperature,        !
+   !                          forward Euler for everything else).  This has not been        !
+   !                          thoroughly tested for energy, water, and CO2 conservation.    !
+   !---------------------------------------------------------------------------------------!
+   NL%INTEGRATION_SCHEME = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! NSUB_EULER -- The number of sub-steps in case we are running forward Euler.  The       !
+   !               maximum time step will then be DTLSM / NSUB_EULER.  This is needed to    !
+   !               make sure we don't take overly long steps with Euler, as we cannot       !
+   !               estimate errors using first-order schemes.  This number is ignored       !
+   !               except when INTEGRATION_SCHEME is 0.                                     !
+   !---------------------------------------------------------------------------------------!
+   NL%NSUB_EULER = 40
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! RK4_TOLERANCE -- This is the relative tolerance for Runge-Kutta or Heun's              !
+   !                  integration.  Currently the valid range is between 1.e-7 and 1.e-1,   !
+   !                  but recommended values are between 1.e-4 and 1.e-2.                   !
+   !---------------------------------------------------------------------------------------!
+   NL%RK4_TOLERANCE = 0.01
+   !---------------------------------------------------------------------------------------!
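+
+   !---------------------------------------------------------------------------------------!
+   !    Worked example (for illustration only): if INTEGRATION_SCHEME were 0, the maximum   !
+   ! Euler sub-step would be DTLSM / NSUB_EULER; e.g. a hypothetical DTLSM of 600 s with    !
+   ! NSUB_EULER = 40 yields sub-steps of at most 15 s.                                      !
+   !---------------------------------------------------------------------------------------!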
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IBRANCH_THERMO -- This determines whether branches should be included in the           !
+   !                   vegetation thermodynamics and radiation or not.                      !
+   !                   0. (Legacy) No branches in energy/radiation.                         !
+   !                   1. (ED-2.2 default) Branches are accounted for in the energy and     !
+   !                      radiation.  Branchwood and leaf are treated separately in the     !
+   !                      canopy radiation scheme, but solved as a single pool in the       !
+   !                      biophysics integration.                                           !
+   !                   2. (Beta) Similar to 1, but branches are treated as separate pools   !
+   !                      in the biophysics (thus doubling the number of prognostic         !
+   !                      variables).                                                       !
+   !---------------------------------------------------------------------------------------!
+   NL%IBRANCH_THERMO = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IPHYSIOL -- This variable will determine the functional form that will control how     !
+   !             the various parameters will vary with temperature, and how the CO2         !
+   !             compensation point for gross photosynthesis (Gamma*) will be found.        !
+   !             Options are:                                                               !
+   !                                                                                        !
+   ! 0 -- (Legacy) Original ED-2.1, we use the "Arrhenius" function as in Foley et al.      !
+   !      (1996, Global Biogeochem. Cycles) and Moorcroft et al. (2001, Ecol. Monogr.).     !
+   !      Gamma* is found using the parameters for tau as in Foley et al. (1996).  This     !
+   !      option causes the optimal temperature to be quite low, even in the tropics        !
+   !      (Rogers et al. 2017, New Phytol.).                                                !
+   ! 1 -- (Beta) Similar to case 0, but we use Jmax to determine the RuBP-regeneration      !
+   !      (aka light) limitation case, account for the triose phosphate utilisation         !
+   !      limitation case (C3), and use the Michaelis-Menten coefficients along with other  !
+   !      parameters from von Caemmerer (2000, Biochemical models of leaf photosynthesis).  !
+   ! 2 -- (ED-2.2 default) Collatz et al. (1991, Agric. For. Meteorol.).  We use the power  !
+   !      (Q10) equations, with Collatz et al. (1991) parameters for the compensation       !
+   !      point, and the Michaelis-Menten coefficients.  The corrections for high and low   !
+   !      temperatures are the same as in Moorcroft et al. (2001).                          !
+   ! 3 -- (Beta) Similar to case 2, but we use Jmax to determine the RuBP-regeneration      !
+   !      (aka light) limitation case, account for the triose phosphate utilisation         !
+   !      limitation case (C3), and use the Michaelis-Menten coefficients along with other  !
+   !      parameters from von Caemmerer (2000).                                             !
+   ! 4 -- (Beta) Use the "Arrhenius" function as in Harley et al. (1991).  This must be     !
+   !      run with ISTOMATA_SCHEME = 1.                                                     !
+   !---------------------------------------------------------------------------------------!
+   NL%IPHYSIOL = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IALLOM -- Which allometry to use (this mostly affects tropical PFTs).  Temperate PFTs  !
+   !           will use the new root allometry and the maximum crown area unless IALLOM is  !
+   !           set to 0.                                                                    !
+   !           0. (Legacy) Original ED-1.0, included for backward compatibility.            !
+   !           1. (Legacy)                                                                  !
+   !              a. The coefficients for structural biomass are set so the total AGB is    !
+   !                 similar to Baker et al. (2004, Glob. Change Biol.), equation 2.        !
+   !              b. Experimental root depth that makes canopy trees have root depths of    !
+   !                 5 m and grasses/seedlings at 0.5 m height have root depths of 0.5 m.   !
+   !              c. Crown area defined as in Poorter et al. (2006, Ecology), imposing a    !
+   !                 maximum crown area.                                                    !
+   !           2. (ED-2.2 default) Similar to 1, but with a few extra changes.              !
+   !              a. Height -> DBH allometry as in Poorter et al. (2006).                   !
+   !              b. Balive is retuned, using a few leaf biomass allometric equations for   !
+   !                 a few genera in Costa Rica.  References: Cole and Ewel (2006, Forest   !
+   !                 Ecol. Manag.) and Calvo-Alvarado et al. (2008, Tree Physiol.).         !
+   !           3. (Beta) Updated allometry for tropical PFTs based on data from             !
+   !              Sustainable Landscapes Brazil (height and crown area), Chave et al.       !
+   !              (2014, Glob. Change Biol.) (biomass) and the BAAD database, Falster et    !
+   !              al. (2015, Ecology) (leaf area).  Both leaf and structural biomass take   !
+   !              DBH and height as dependent variables, and DBH-height takes a simpler     !
+   !              log-linear form fitted using SMA so it can be inverted (useful for        !
+   !              airborne lidar initialisation).                                           !
+   !---------------------------------------------------------------------------------------!
+   NL%IALLOM = 3
+   !---------------------------------------------------------------------------------------!
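+
+   !---------------------------------------------------------------------------------------!
+   !    Example (for illustration only): the IALLOM = 3 DBH-height relation is log-linear,  !
+   ! i.e. of the form ln(H) = a + b * ln(DBH) for fitted coefficients a and b (generic      !
+   ! symbols, not namelist variables), so it can be inverted directly as                    !
+   !    DBH = exp( (ln(H) - a) / b ),                                                       !
+   ! which is what makes it convenient for airborne lidar initialisation.                   !
+   !---------------------------------------------------------------------------------------!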
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ECONOMICS_SCHEME -- Temporary variable for testing the relationship amongst traits in  !
+   !                     the tropics.                                                       !
+   !                     0. (ED-2.2 default) ED-2.1 standard, based on Reich et al. (1997,  !
+   !                        PNAS), Moorcroft et al. (2001, Ecol. Monogr.) and some updates  !
+   !                        following Kim et al. (2012, Glob. Change Biol.).                !
+   !                     1. When available, trait relationships were derived from more      !
+   !                        up-to-date data sets, including the TRY database (Kattge et     !
+   !                        al. 2011, Glob. Change Biol.), NGEE-Tropics (Norby et al.       !
+   !                        2017, New Phytol.), RAINFOR (Bahar et al. 2017, New Phytol.),   !
+   !                        and GLOPNET (Wright et al. 2004, Nature).  Check ed_params.f90  !
+   !                        for details.                                                    !
+   !---------------------------------------------------------------------------------------!
+   NL%ECONOMICS_SCHEME = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IGRASS -- This controls the dynamics and growth calculation for grasses.               !
+   !                                                                                        !
+   ! 0. (Legacy) Original ED-1/ED-2.0 method: grasses are miniature trees, grasses have     !
+   !    heartwood biomass (albeit small), and growth happens monthly.                       !
+   ! 1. (ED-2.2 default) Heartwood biomass is always 0, height is a function of leaf        !
+   !    biomass, and growth happens daily.  With this option, grasses are evergreen         !
+   !    regardless of IPHEN_SCHEME.                                                         !
+   !---------------------------------------------------------------------------------------!
+   NL%IGRASS = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IPHEN_SCHEME -- This controls the phenology scheme.  Even within each scheme, the      !
+   !                 actual phenology will be different depending on the PFT.               !
+   !                                                                                        !
+   ! -1: (ED-2.2 default for evergreen tropical)                                            !
+   !     grasses   - evergreen;                                                             !
+   !     tropical  - evergreen;                                                             !
+   !     conifers  - evergreen;                                                             !
+   !     hardwoods - cold-deciduous (Botta et al.);                                         !
+   !                                                                                        !
+   !  0: (Deprecated)                                                                       !
+   !     grasses   - drought-deciduous (old scheme);                                        !
+   !     tropical  - drought-deciduous (old scheme);                                        !
+   !     conifers  - evergreen;                                                             !
+   !     hardwoods - cold-deciduous;                                                        !
+   !                                                                                        !
+   !  1: (ED-2.2 default for prescribed phenology; deprecated for tropical PFTs)            !
+   !     phenology is prescribed for cold-deciduous broadleaf trees.                        !
+   !                                                                                        !
+   !  2: (ED-2.2 default)                                                                   !
+   !     grasses   - drought-deciduous (new scheme);                                        !
+   !     tropical  - drought-deciduous (new scheme);                                        !
+   !     conifers  - evergreen;                                                             !
+   !     hardwoods - cold-deciduous;                                                        !
+   !                                                                                        !
+   !  3: (Beta)                                                                             !
+   !     grasses   - drought-deciduous (new scheme);                                        !
+   !     tropical  - drought-deciduous (light phenology);                                   !
+   !     conifers  - evergreen;                                                             !
+   !     hardwoods - cold-deciduous;                                                        !
+   !                                                                                        !
+   ! Old scheme: plants shed their leaves once the instantaneous amount of available water  !
+   !             becomes less than a critical value.                                        !
+   ! New scheme: plants shed their leaves once a 10-day running average of available water  !
+   !             becomes less than a critical value.                                        !
+   !---------------------------------------------------------------------------------------!
+   NL%IPHEN_SCHEME = 2
+   !---------------------------------------------------------------------------------------!
+
+
+   !---------------------------------------------------------------------------------------!
+   !    Parameters that control the phenology response to radiation, used only when         !
+   ! IPHEN_SCHEME = 3.                                                                      !
+   !                                                                                        !
+   ! RADINT -- Intercept                                                                    !
+   ! RADSLP -- Slope                                                                        !
+   !---------------------------------------------------------------------------------------!
+   NL%RADINT = -11.3868
+   NL%RADSLP = 0.0824
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! REPRO_SCHEME -- This controls plant reproduction and dispersal.                        !
+   !                 0. Reproduction off.  Useful for very short runs only.                 !
+   !                 1. (Legacy) Original reproduction scheme.  Seeds are exchanged         !
+   !                    between patches belonging to the same site, but they can't go       !
+   !                    outside their original site.                                        !
+   !                 2. (ED-2.2 default) Similar to 1, but seeds are exchanged between      !
+   !                    patches belonging to the same polygon, even if they are in          !
+   !                    different sites.  They can't go outside their original polygon,     !
+   !                    though.  This is the same as option 1 if there is only one site     !
+   !                    per polygon.                                                        !
+   !                 3. (Beta) Similar to 2, but allocation to reproduction may be set as   !
+   !                    a function of height using an asymptotic curve.                     !
+   !---------------------------------------------------------------------------------------!
+   NL%REPRO_SCHEME = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! LAPSE_SCHEME -- This specifies the met lapse rate scheme:                              !
+   !                 0. (ED-2.2 default) No lapse rates                                     !
+   !                 1. (Beta) Phenomenological, global                                     !
+   !                 2. (Non-functional) Phenomenological, local                            !
+   !                 3. (Non-functional) Mechanistic                                        !
+   !---------------------------------------------------------------------------------------!
+   NL%LAPSE_SCHEME = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! CROWN_MOD -- Specifies how tree crowns are represented in the canopy radiation model,  !
+   !              and in the turbulence scheme depending on ICANTURB.                       !
+   !              0. (ED-2.2 default) Flat-top, infinitesimally thin crowns.                !
+   !              1. (Under development) Finite radius mixing model (Dietze).  This is      !
+   !                 only implemented for direct radiation with ICANRAD=0.                  !
+   !---------------------------------------------------------------------------------------!
+   NL%CROWN_MOD = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    The following variables control the canopy radiation solver.                        !
+   !                                                                                        !
+   ! ICANRAD -- Specifies how canopy radiation is solved.  This variable sets both          !
+   !            shortwave and longwave.                                                     !
+   !            0. (Deprecated) Original two-stream model from Medvigy (2006), with the     !
+   !               possibility to apply finite crown area to direct shortwave radiation.    !
+   !               This option is no longer supported and may be removed in future          !
+   !               releases.                                                                !
+   !            1. (Deprecated) Multiple-scattering model from Zhao and Qualls (2005,       !
+   !               2006, Water Resour. Res.).  This option is no longer supported and may   !
+   !               be removed in future releases.                                           !
+   !            2. (ED-2.2 default) Updated two-stream model from Liou (2002, An            !
+   !               introduction to atmospheric radiation).                                  !
+   ! IHRZRAD -- Specifies how horizontal canopy radiation is solved.                        !
+   !            0. (ED-2.2 default) No horizontal patch shading.  All patches receive the   !
+   !               same amount of light at the top.                                         !
+   !            1. (Beta) A realized map of the plant community is built by randomly        !
+   !               assigning gaps to patches (the number of gaps proportional to the        !
+   !               patch area) and populating them with individuals, respecting the         !
+   !               cohort distribution in each patch.  The crown closure index is           !
+   !               calculated for the entire landscape and used to change the amount of     !
+   !               direct light reaching the top of the canopy.  Patches are then split     !
+   !               into 1-3 patches based on the light condition, so expect simulations     !
+   !               to be slower.  (Morton et al., in review.)                               !
+   !            2. (Beta) Similar to option 1, except that heights for trees with           !
+   !               DBH > DBH_crit are rescaled to calculate CCI.                            !
+   !            3. (Beta) This creates patches following IHRZRAD = 1, but then assumes      !
+   !               that the light scaling factor is 1 for all patches.  This is only        !
+   !               useful to isolate the effect of heterogeneous illumination from the      !
+   !               patch count.                                                             !
+   !            4. (Beta) Similar to option 3, but it applies the same method as            !
+   !               IHRZRAD=2.                                                               !
+   !---------------------------------------------------------------------------------------!
+   NL%ICANRAD = 2
+   NL%IHRZRAD = 0
+   !---------------------------------------------------------------------------------------!
+
+
+   !---------------------------------------------------------------------------------------!
+   !    The variables below will eventually be removed from ED2IN; use the XML              !
+   ! initialisation file to set these parameters instead.                                   !
+   ! LTRANS_VIS   -- Leaf transmittance for tropical plants - Visible/PAR                   !
+   ! LTRANS_NIR   -- Leaf transmittance for tropical plants - Near Infrared                 !
+   ! LREFLECT_VIS -- Leaf reflectance for tropical plants - Visible/PAR                     !
+   ! LREFLECT_NIR -- Leaf reflectance for tropical plants - Near Infrared                   !
+   ! ORIENT_TREE  -- Leaf orientation factor for tropical trees.  Extremes are:             !
+   !                 -1. All leaves are oriented in the vertical                            !
+   !                  0. Leaf orientation is perfectly random                               !
+   !                  1. All leaves are oriented in the horizontal                          !
+   !                 In practice, acceptable values range from -0.4 to 0.6                  !
+   !                 (Goudriaan, 1977).                                                     !
+   ! ORIENT_GRASS -- Leaf orientation factor for tropical grasses.  Extremes are:           !
+   !                 -1. All leaves are oriented in the vertical                            !
+   !                  0. Leaf orientation is perfectly random                               !
+   !                  1. All leaves are oriented in the horizontal                          !
+   !                 In practice, acceptable values range from -0.4 to 0.6                  !
+   !                 (Goudriaan, 1977).                                                     !
+   ! CLUMP_TREE   -- Clumping factor for tropical trees.  Extremes are:                     !
+   !                 lim -> 0. Black hole (0 itself is unacceptable)                        !
+   !                        1. Homogeneously spread over the layer (i.e., no clumping)      !
+   ! CLUMP_GRASS  -- Clumping factor for tropical grasses.  Extremes are:                   !
+   !                 lim -> 0. Black hole (0 itself is unacceptable)                        !
+   !                        1. Homogeneously spread over the layer (i.e., no clumping)      !
+   !---------------------------------------------------------------------------------------!
+   NL%LTRANS_VIS   = 0.05
+   NL%LTRANS_NIR   = 0.2
+   NL%LREFLECT_VIS = 0.1
+   NL%LREFLECT_NIR = 0.4
+   NL%ORIENT_TREE  = 0.1
+   NL%ORIENT_GRASS = 0.0
+   NL%CLUMP_TREE   = 0.8
+   NL%CLUMP_GRASS  = 1.0
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IGOUTPUT -- In case IHRZRAD is not zero, should the model write the patch table and    !
+   !             gap realisation files?  (0 -- no; 1 -- yes.)  Note that these are text     !
+   !             files, so they may take considerable disk space.                           !
+   ! GFILOUT  -- Prefix for the output patch table/gap files.                               !
+   !---------------------------------------------------------------------------------------!
+   NL%IGOUTPUT = 0
+   NL%GFILOUT  = '/mypath/generic-prefix'
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! DECOMP_SCHEME -- This specifies the soil carbon (decomposition) model.                 !
+   !                                                                                        !
+   ! 0 - (Deprecated) ED-2.0 default.  Exponential with low-temperature limitation only.    !
+   !     This option is known to cause excessive accumulation of soil carbon in the         !
+   !     tropics.                                                                           !
+   ! 1 - (Beta) Lloyd and Taylor (1994, Funct. Ecol.) model.  Additional parameters must    !
+   !     be set in an XML file.                                                             !
+   ! 2 - (ED-2.2 default) Similar to ED-1.0 and the CENTURY model: heterotrophic            !
+   !     respiration reaches a maximum at around 38C (using the default parameters), then   !
+   !     quickly falls to zero at around 50C.  It applies a similar function for soil       !
+   !     moisture, which allows higher decomposition rates when it is close to the          !
+   !     optimum, plummeting when it is almost saturated.                                   !
+   ! 3 - (Beta) Similar to option 0, but it uses an empirical moisture limit equation       !
+   !     from Moyano et al. (2012, Biogeosciences).                                         !
+   ! 4 - (Beta) Similar to option 1, but it uses an empirical moisture limit equation       !
+   !     from Moyano et al. (2012, Biogeosciences).                                         !
+   ! 5 - (Beta) Based on the Bolker et al. (1998, Ecol. Appl.) CENTURY model.  Five         !
+   !     necromass pools (litter aka fast, structural, microbial, humified aka slow, and    !
+   !     passive).  Temperature and moisture functions are the same as 2.                   !
+   !---------------------------------------------------------------------------------------!
+   NL%DECOMP_SCHEME = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! H2O_PLANT_LIM -- this determines whether plant photosynthesis can be limited by soil   !
+   !                  moisture, through the factor FSW, defined as                          !
+   !                  FSW = Supply / (Demand + Supply).                                     !
+   !                                                                                        !
+   ! Demand is the transpiration rate in case soil moisture is not limiting (the psi_0      !
+   ! term times LAI).  The supply is determined by                                          !
+   !                                                                                        !
+   !    Supply = Kw * nplant * Broot * Available_Water,                                     !
+   !                                                                                        !
+   ! and the definition of available water changes depending on H2O_PLANT_LIM:              !
+   ! 0. Force FSW = 1 (effectively available water is infinite).                            !
+   ! 1. (Legacy) Available water is the total soil water above wilting point, integrated    !
+   !    across all layers within the rooting zone.                                          !
+   ! 2. (ED-2.2 default) Available water is the soil water at field capacity minus the      !
+   !    wilting point, scaled by the so-called wilting factor:                              !
+   !                                                                                        !
+   !    (psi(k) - (H - z(k)) - psi_wp) / (psi_fc - psi_wp)                                  !
+   !                                                                                        !
+   !    where psi is the matric potential at layer k, z is the layer depth, H is the        !
+   !    crown height, and psi_fc and psi_wp are the matric potentials at field capacity     !
+   !    and wilting point.                                                                  !
+   ! 3. (Beta) Use leaf water potential to modify FSW following Powell et al. (2017).       !
+   !    This setting requires PLANT_HYDRO_SCHEME to be non-zero.                            !
+   ! 4. (Beta) Use leaf water potential to modify the optimization-based stomatal model     !
+   !    following Xu et al. (2016).  This setting requires PLANT_HYDRO_SCHEME to be         !
+   !    non-zero and ISTOMATA_SCHEME to be 1.                                               !
+   ! 5. (Beta) Similar to 2, but the water supply directly affects gsw, as opposed to       !
+   !    FSW.  This is done by making D0 a function of soil moisture.  Note that this        !
+   !    still uses Kw, but Kw must be significantly lower, at least for tropical trees      !
+   !    (1/15 - 1/10 of the original).  This works only with PLANT_HYDRO_SCHEME set to 0.   !
+   !---------------------------------------------------------------------------------------!
+   NL%H2O_PLANT_LIM = 2
+   !---------------------------------------------------------------------------------------!
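+
+   !---------------------------------------------------------------------------------------!
+   !    Worked example (for illustration only): from FSW = Supply / (Demand + Supply), a    !
+   ! cohort whose water supply exactly matches its unstressed demand has FSW = 0.5, while   !
+   ! an abundant supply (Supply >> Demand) drives FSW towards 1, i.e. no soil moisture      !
+   ! limitation.                                                                            !
+   !---------------------------------------------------------------------------------------!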
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! PLANT_HYDRO_SCHEME -- Flag to set dynamic plant hydraulics.                            !
+   !                                                                                        !
+   ! 0 - (ED-2.2 default) No dynamic hydraulics (leaf and wood are always saturated).       !
+   ! 1 - (ED-2.2 alternative) Track plant hydrodynamics.  Model framework from X16, using   !
+   !     parameters from C16.                                                               !
+   ! 2 - (Deprecated) Track plant hydrodynamics.  Model framework from X16, using           !
+   !     parameters from X16.                                                               !
+   !                                                                                        !
+   ! References:                                                                            !
+   !                                                                                        !
+   ! Christoffersen BO, Gloor M, Fauset S, Fyllas NM, Galbraith DR, Baker TR, Kruijt B,     !
+   !    Rowland L, Fisher RA, Binks OJ et al. 2016. Linking hydraulic traits to tropical    !
+   !    forest function in a size-structured and trait-driven model (TFS v.1-Hydro).        !
+   !    Geosci. Model Dev., 9: 4227-4255. doi:10.5194/gmd-9-4227-2016 (C16).                !
+   !                                                                                        !
+   ! Xu X, Medvigy D, Powers JS, Becknell JM, Guan K. 2016. Diversity in plant hydraulic    !
+   !    traits explains seasonal and inter-annual variations of vegetation dynamics in      !
+   !    seasonally dry tropical forests. New Phytol., 212: 80-95. doi:10.1111/nph.14009     !
+   !    (X16).                                                                              !
+   !---------------------------------------------------------------------------------------!
+   NL%PLANT_HYDRO_SCHEME = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ISTRUCT_GROWTH_SCHEME -- Different methods to perform structural growth.               !
+   ! 0. (ED-2.2 default) Use all bstorage allocated to growth to increase heartwood.        !
+   !    This option will eventually be deprecated, as it creates problems for drought-      !
+   !    deciduous plants and for allometric settings that properly calculate sapwood        !
+   !    (IALLOM = 3).                                                                       !
+   ! 1. (ED-2.2 alternative) Correct the fraction of storage allocated to heartwood, so     !
+   !    storage has sufficient carbon to increment all living tissues in the upcoming       !
+   !    month.  This option will eventually become the default.                             !
+   !---------------------------------------------------------------------------------------!
+   NL%ISTRUCT_GROWTH_SCHEME = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ISTOMATA_SCHEME -- Which stomatal conductance model to use.                            !
+   !                    0. (ED-2.2 default) Leuning (L95) model.                            !
+   !                    1. (Beta) Katul's optimization-based model (see X16).               !
+   !                                                                                        !
+   ! References:                                                                            !
+   !                                                                                        !
+   ! Leuning R. 1995. A critical appraisal of a combined stomatal-photosynthesis model      !
+   !    for C3 plants. Plant Cell Environ., 18: 339-355.                                    !
+   !    doi:10.1111/j.1365-3040.1995.tb00370.x (L95).                                       !
+   !                                                                                        !
+   ! Xu X, Medvigy D, Powers JS, Becknell JM, Guan K. 2016. Diversity in plant hydraulic    !
+   !    traits explains seasonal and inter-annual variations of vegetation dynamics in      !
+   !    seasonally dry tropical forests. New Phytol., 212: 80-95. doi:10.1111/nph.14009     !
+   !    (X16).                                                                              !
+   !---------------------------------------------------------------------------------------!
+   NL%ISTOMATA_SCHEME = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! TRAIT_PLASTICITY_SCHEME -- Whether/how plant traits vary with the local environment.   !
+   !                                                                                        !
+   !  0 - (ED-2.2 default) No trait plasticity.  Trait parameters for each PFT are fixed.   !
+   !  1 - (Beta) Vm0, SLA and leaf turnover rate change annually with the cohort light      !
+   !      environment.  The parametrisation is based on Lloyd et al. (2010,                 !
+   !      Biogeosciences), with additional data from Keenan and Niinemets (2016, Nat.       !
+   !      Plants) and Russo and Kitajima (2016, Tropical Tree Physiology book).  For each   !
+   !      cohort, Vm0 and leaf turnover rates decrease, and SLA increases with shading.     !
+   !      The magnitude of the changes is calculated using the overtopping LAI and the      !
+   !      corresponding extinction factors for each trait.  This is not applicable to       !
+   !      grass PFTs.  (Xu et al., in prep.)                                                !
+   !  2 - (Beta) Similar to 1, but traits are updated monthly.                              !
+   ! -1 - (Beta) Similar to 1, but uses height to adjust SLA.                               !
+   ! -2 - (Beta) Similar to 2, but uses height to adjust SLA.                               !
+   !---------------------------------------------------------------------------------------!
+   NL%TRAIT_PLASTICITY_SCHEME = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IDDMORT_SCHEME -- This flag determines whether storage should be accounted for in the  !
+   !                   carbon balance.                                                      !
+   !                   0 -- (Legacy) Carbon balance is done in terms of fluxes only.        !
+   !                   1 -- (ED-2.2 default) Carbon balance is offset by the storage pool.  !
+   !                        Plants will be in negative carbon balance only when they run    !
+   !                        out of storage and are still losing more carbon than gaining.   !
+   !                                                                                        !
+   ! CBR_SCHEME     -- This flag determines which carbon stress scheme is used:             !
+   !                   0 -- (ED-2.2 default) Single stress.  CBR = cb/cb_mlmax, where       !
+   !                        cb_mlmax is the carbon balance in full sun and with no          !
+   !                        moisture limitation.                                            !
+   !                   1 -- (Legacy) Co-limitation from light and moisture (Longo et al.    !
+   !                        2018, New Phytol.).  CBR_LIGHT = cb/cb_lightmax and             !
+   !                        CBR_MOIST = cb/cb_moistmax.  CBR_LIGHT and CBR_MOIST are then   !
+   !                        weighted according to DDMORT_CONST (below).                     !
+   !                   2 -- (Beta) Liebig style, i.e. limitation from either light or       !
+   !                        moisture, whichever is lower at a given point in time:          !
+   !                        CBR = cb/max(cb_lightmax, cb_moistmax).                         !
+   !                                                                                        !
+   ! DDMORT_CONST   -- Used when CBR_SCHEME = 1 only.                                       !
+   !                   This constant (k) determines the relative contribution of light      !
+   !                   and soil moisture to the density-dependent mortality rate.  Values   !
+   !                   range from 0 (soil moisture only) to 1 (light only, which is the     !
+   !                   ED-1.0 and ED-2.0 default).                                          !
+   !                                                                                        !
+   !                                    mort1                                               !
+   !                   mu_DD = -------------------------                                    !
+   !                           1 + exp [ mort2 * CBR ]                                      !
+   !                                                                                        !
+   !                        1              DDMORT_CONST        1 - DDMORT_CONST             !
+   !                   ------------  =  ------------------  +  ------------------           !
+   !                   CBR - CBR_SS     CBR_LIGHT - CBR_SS     CBR_MOIST - CBR_SS           !
+   !                                                                                        !
+   !---------------------------------------------------------------------------------------!
+   NL%IDDMORT_SCHEME = 1
+   NL%CBR_SCHEME     = 0
+   NL%DDMORT_CONST   = 0.8
+   !---------------------------------------------------------------------------------------!
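+
+   !---------------------------------------------------------------------------------------!
+   !    Worked example (for illustration only): from the expression above, a cohort with    !
+   ! CBR = 0 has mu_DD = mort1 / (1 + exp(0)) = mort1 / 2, and (for positive mort2) the     !
+   ! mortality rate falls off as the relative carbon balance CBR approaches 1.              !
+   !---------------------------------------------------------------------------------------!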
+
+
+   !---------------------------------------------------------------------------------------!
+   !    These variables will eventually be removed from ED2IN; use the XML initialisation   !
+   ! file to set these parameters instead.  The following variables are factors that        !
+   ! control photosynthesis and respiration.  Note that some of them are relative values,   !
+   ! whereas others are absolute.                                                           !
+   !                                                                                        !
+   ! VMFACT_C3     -- Factor multiplying the default Vm0 for C3 plants (1.0 = default).     !
+   ! VMFACT_C4     -- Factor multiplying the default Vm0 for C4 plants (1.0 = default).     !
+   ! MPHOTO_TRC3   -- Stomatal slope (M) for tropical C3 plants.                            !
+   ! MPHOTO_TEC3   -- Stomatal slope (M) for conifers and temperate C3 plants.              !
+   ! MPHOTO_C4     -- Stomatal slope (M) for C4 plants.                                     !
+   ! BPHOTO_BLC3   -- Cuticular conductance for broadleaf C3 plants  [umol/m2/s].           !
+   ! BPHOTO_NLC3   -- Cuticular conductance for needleleaf C3 plants [umol/m2/s].           !
+   ! BPHOTO_C4     -- Cuticular conductance for C4 plants            [umol/m2/s].           !
+   ! KW_GRASS      -- Water conductance for grasses, in m2/yr/kgC_root.  This is used       !
+   !                  only when H2O_PLANT_LIM is not 0.                                     !
+   ! KW_TREE       -- Water conductance for trees, in m2/yr/kgC_root.  This is used only    !
+   !                  when H2O_PLANT_LIM is not 0.                                          !
+   ! GAMMA_C3      -- The dark respiration factor (gamma) for C3 plants.  In case this      !
+   !                  number is set to 0, find the factor based on Atkin et al. (2015).     !
+   ! GAMMA_C4      -- The dark respiration factor (gamma) for C4 plants.  In case this      !
+   !                  number is set to 0, find the factor based on Atkin et al. (2015)      !
+   !                  (assumed to be twice as large as for C3 grasses, as Atkin et al.      !
+   !                  2015 did not estimate Rd0 for C4 grasses).                            !
+   ! D0_GRASS      -- The transpiration control in gsw (D0) for ALL grasses.                !
+   ! D0_TREE       -- The transpiration control in gsw (D0) for ALL trees.                  !
+   ! ALPHA_C3      -- Quantum yield of ALL C3 plants.  This is only applied when            !
+   !                  QUANTUM_EFFICIENCY_T = 0.                                             !
+   ! ALPHA_C4      -- Quantum yield of C4 plants.  This is always applied.                  !
+   ! KLOWCO2IN     -- The coefficient that controls the PEP carboxylase limited rate of     !
+   !                  carboxylation for C4 plants.                                          !
+   ! RRFFACT       -- Factor multiplying the root respiration factor for ALL PFTs           !
+   !                  (1.0 = default).                                                      !
+   ! GROWTHRESP    -- The actual growth respiration factor (C3/C4 tropical PFTs only)       !
+   !                  (1.0 = default).                                                      !
+   ! LWIDTH_GRASS  -- Leaf width for grasses, in metres.  This controls the leaf boundary   !
+   !                  layer conductance (gbh and gbw).                                      !
+   ! LWIDTH_BLTREE -- Leaf width for trees, in metres.  This controls the leaf boundary     !
+   !                  layer conductance (gbh and gbw).  This is applied to broadleaf        !
+   !                  trees only.                                                           !
+   ! LWIDTH_NLTREE -- Leaf width for trees, in metres.  This controls the leaf boundary     !
+   !                  layer conductance (gbh and gbw).  This is applied to conifer trees    !
+   !                  only.                                                                 !
+   ! Q10_C3        -- Q10 factor for C3 plants (used only if IPHYSIOL is set to 2 or 3).    !
+   ! Q10_C4        -- Q10 factor for C4 plants (used only if IPHYSIOL is set to 2 or 3).    !
+   !---------------------------------------------------------------------------------------!
+   NL%VMFACT_C3     = 1.00
+   NL%VMFACT_C4     = 1.00
+   NL%MPHOTO_TRC3   = 9.0
+   NL%MPHOTO_TEC3   = 7.2
+   NL%MPHOTO_C4     = 5.2
+   NL%BPHOTO_BLC3   = 10000.
+   NL%BPHOTO_NLC3   = 1000.
+   NL%BPHOTO_C4     = 10000.
+   NL%KW_GRASS      = 900.
+   NL%KW_TREE       = 600.
+   NL%GAMMA_C3      = 0.0145
+   NL%GAMMA_C4      = 0.035
+   NL%D0_GRASS      = 0.016
+   NL%D0_TREE       = 0.016
+   NL%ALPHA_C3      = 0.08
+   NL%ALPHA_C4      = 0.055
+   NL%KLOWCO2IN     = 17949.
+   NL%RRFFACT       = 1.000
+   NL%GROWTHRESP    = 0.333
+   NL%LWIDTH_GRASS  = 0.05
+   NL%LWIDTH_BLTREE = 0.10
+   NL%LWIDTH_NLTREE = 0.05
+   NL%Q10_C3        = 2.4
+   NL%Q10_C4        = 2.4
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! THETACRIT -- Leaf drought phenology threshold.  The sign matters here:                 !
+   !              >= 0. -- This is the relative soil moisture above the wilting point       !
+   !                       below which the drought-deciduous plants will start shedding     !
+   !                       their leaves.                                                    !
+   !              <  0. -- This is the soil potential in MPa below which the drought-       !
+   !                       deciduous plants will start shedding their leaves.  The          !
+   !                       wilting point is by definition -1.5 MPa, so make sure that the   !
+   !                       value is above -1.5.                                             !
+   !---------------------------------------------------------------------------------------!
+   NL%THETACRIT = -1.20
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! QUANTUM_EFFICIENCY_T -- Which quantum yield model should be used for C3 plants.        !
+   !                         0. (ED-2.2 default) Quantum efficiency is constant.            !
+   !                         1. (Beta) Quantum efficiency varies with temperature,          !
+   !                            following the Ehleringer (1978, Oecologia) polynomial fit.  !
+   !---------------------------------------------------------------------------------------!
+   NL%QUANTUM_EFFICIENCY_T = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! N_PLANT_LIM -- This controls whether plant photosynthesis can be limited by nitrogen.  !
+   !                0. No limitation                                                        !
+   !                1. Activate the nitrogen limitation model.  As of ED-2.2, this option   !
+   !                   has not been thoroughly tested in the tropics.                       !
+   !---------------------------------------------------------------------------------------!
+   NL%N_PLANT_LIM = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! N_DECOMP_LIM -- This controls whether decomposition can be limited by nitrogen.        !
+   !                 0. No limitation                                                       !
+   !                 1. Activate the nitrogen limitation model.  As of ED-2.2, this option  !
+   !                    has not been thoroughly tested in the tropics.                      !
+   !---------------------------------------------------------------------------------------!
+   NL%N_DECOMP_LIM = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    The following parameters adjust the fire disturbance in the model.                  !
+   ! INCLUDE_FIRE   -- Which threshold to use for fires.                                    !
+   !                   0. No fires;                                                         !
+   !                   1. (Deprecated) Fire will be triggered with enough fuel (assumed     !
+   !                      to be total above-ground biomass) and integrated ground water     !
+   !                      depth less than a threshold.  Based on ED-1, the threshold        !
+   !                      assumes that the soil is 1 m deep, so deeper soils will need to   !
+   !                      be much drier to allow fires to happen.                           !
+   !                   2. (ED-2.2 default) Fire will be triggered with enough biomass and   !
+   !                      when the total soil water in the top 50 cm falls below a          !
+   !                      threshold.                                                        !
+   !                   3. (Under development) This will eventually become SPITFIRE and/or   !
+   !                      HESFIRE.  Currently this is similar to 2, except that fuel is     !
+   !                      defined as above-ground litter and coarse woody debris,           !
+   !                      grasses, and trees shorter than 2 m.  Ignitions are currently     !
+   !                      restricted to areas with human presence (i.e. any non-natural     !
+   !                      patch).                                                           !
+   ! FIRE_PARAMETER -- If fire happens, this will control the intensity of the disturbance  !
+   !                   given the amount of fuel.                                            !
+   ! SM_FIRE        -- This is used only when INCLUDE_FIRE = 2 or 3, and it has different   !
+   !                   meanings.  The sign here matters.                                    !
+   !                   When INCLUDE_FIRE = 2:                                               !
+   !                      >= 0. - Minimum relative soil moisture above dry air of the top   !
+   !                              soil layers that will prevent fires from happening.       !
+   !                      <  0. - Minimum mean soil moisture potential in MPa of the top    !
+   !                              soil layers that will prevent fires from happening.       !
+   !                              Although this variable can be as negative as -3.1 MPa     !
+   !                              (residual soil water), it is recommended that             !
+   !                              SM_FIRE > -1.5 MPa (wilting point), otherwise fires may   !
+   !                              never occur.                                              !
+   !---------------------------------------------------------------------------------------!
+   NL%INCLUDE_FIRE   = 0
+   NL%FIRE_PARAMETER = 0.5
+   NL%SM_FIRE        = -1.4
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IANTH_DISTURB -- This flag controls whether to include anthropogenic disturbances      !
+   !                  such as land clearing, abandonment, and logging.                      !
+   !                  0. No anthropogenic disturbance.                                      !
+   !                  1. Use the anthropogenic disturbance dataset (ED-2.2 default when     !
+   !                     anthropogenic disturbance is sought).                              !
+   !                  2. (Beta) Site-specific forest plantation or selective logging        !
+   !                     cycle (Longo et al., in prep.).                                    !
+   !                                                                                        !
+   ! The following variables are used only when IANTH_DISTURB is 2.                         !
+   !                                                                                        !
+   ! SL_SCALE           -- This flag sets whether the simulation scale is local or          !
+   !                       landscape.  This controls the recurrence of logging.             !
+   !                       0. Local.  The simulation represents one logging unit.  Apply    !
+   !                          logging only once every SL_NYRS.                              !
+   !                       1. Landscape.  The simulation represents a landscape.  Logging   !
+   !                          occurs every year but is restricted to patches with age       !
+   !                          greater than or equal to SL_NYRS.                             !
+   ! SL_YR_FIRST        -- The first year to apply logging.  In case IANTH_DISTURB is 2,    !
+   !                       it must be a simulation year (i.e. between IYEARA and IYEARZ).   !
+   ! SL_NYRS            -- This variable defines the logging cycle, in years (see           !
+   !                       variable SL_SCALE above).                                        !
+   ! SL_PFT             -- PFTs that can be harvested.                                      !
+   ! SL_PROB_HARVEST    -- Logging intensity (one value for each PFT provided in SL_PFT).   !
+   !                       Values should be between 0.0 and 1.0, with 0 meaning no          !
+   !                       removal, and 1 removal of all trees needed to meet demands.      !
+   ! SL_MINDBH_HARVEST  -- Minimum DBH for logging (one value for each PFT provided in      !
+   !                       SL_PFT).                                                         !
+   ! SL_BIOMASS_HARVEST -- Target biomass to be harvested in each cycle, in kgC/m2.  If     !
+   !                       zero, then all trees that meet the minimum DBH and minimum       !
+   !                       patch age will be logged.  In case you don't want logging to     !
+   !                       occur, don't set this value to zero!  Instead, set               !
+   !                       IANTH_DISTURB to zero.                                           !
+   !                                                                                        !
+   ! The following variables are used when IANTH_DISTURB is 1 or 2.                         !
+   !                                                                                        !
+   ! SL_SKID_REL_AREA    -- area damaged by skid trails (relative to the felled area).      !
+   ! SL_SKID_S_GTHARV    -- survivorship of trees with DBH > MINDBH in skid trails.         !
+   ! SL_SKID_S_LTHARV    -- survivorship of trees with DBH < MINDBH in skid trails.         !
+   ! SL_FELLING_S_LTHARV -- survivorship of trees with DBH < MINDBH in felling gaps.        !
+   !                                                                                        !
+   ! Cropland variables, used when IANTH_DISTURB is 1 or 2.                                 !
+   !                                                                                        !
+   ! CL_FSEEDS_HARVEST   -- fraction of seeds that is harvested.                            !
+   ! CL_FSTORAGE_HARVEST -- fraction of non-structural carbon that is harvested.            !
+   ! CL_FLEAF_HARVEST    -- fraction of leaves that is harvested in croplands.              !
+   !---------------------------------------------------------------------------------------!
+   NL%IANTH_DISTURB       = 0
+   NL%SL_SCALE            = 0
+   NL%SL_YR_FIRST         = 1992
+   NL%SL_NYRS             = 50
+   NL%SL_PFT              = 2,3,4
+   NL%SL_PROB_HARVEST     = 1.0,1.0,1.0
+   NL%SL_MINDBH_HARVEST   = 50.,50.,50.
+   NL%SL_BIOMASS_HARVEST  = 0
+   NL%SL_SKID_REL_AREA    = 1
+   NL%SL_SKID_S_GTHARV    = 1
+   NL%SL_SKID_S_LTHARV    = 0.6
+   NL%SL_FELLING_S_LTHARV = 0.35
+   NL%CL_FSEEDS_HARVEST   = 0.75
+   NL%CL_FSTORAGE_HARVEST = 0.00
+   NL%CL_FLEAF_HARVEST    = 0.00
+   !---------------------------------------------------------------------------------------!
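+
+   !---------------------------------------------------------------------------------------!
+   !    Example (for illustration only): the SL_* harvest lists are parallel to SL_PFT,     !
+   ! one entry per PFT, so the hypothetical settings above would log PFTs 2, 3 and 4 at     !
+   ! full intensity (1.0) above a minimum DBH of 50.  None of this is applied here, since   !
+   ! IANTH_DISTURB = 0.                                                                     !
+   !---------------------------------------------------------------------------------------!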
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ICANTURB -- This flag controls the canopy roughness.                                   !
+   !                                                                                        !
+   ! 0. (Legacy) Based on Leuning et al. (Oct 1995, Plant Cell Environ.) and LEAF-3         !
+   !    (Walko et al. 2000, J. Appl. Meteorol.).  Roughness and displacement height are     !
+   !    found using simple relations with vegetation height; wind is computed using         !
+   !    similarity theory for the top cohort, then it is assumed that wind extinguishes     !
+   !    following an exponential decay with "perceived" cumulative LAI (local LAI with      !
+   !    finite crown area).                                                                 !
+   ! 1. (Legacy) Similar to option 0, but the wind profile is not based on LAI; instead,    !
+   !    it uses the cohort height.                                                          !
+   ! 2. (ED-2.2 default) This uses the method of Massman (1997, Boundary-Layer              !
+   !    Meteorol.), assuming constant drag and no sheltering factor.                        !
+   ! 3. (ED-2.2 alternative) This is based on Massman and Weil (1999, Boundary-Layer        !
+   !    Meteorol.).  Similar to 2, but with the option of varying the drag and sheltering   !
+   !    within the canopy.                                                                  !
+   ! 4. Similar to 0, but it finds the ground conductance following the CLM-4.5 technical   !
+   !    note (Oleson et al. 2013, NCAR/TN-503+STR) (equations 5.98-5.100).                  !
+   !---------------------------------------------------------------------------------------!
+   NL%ICANTURB = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ISFCLYRM -- Similarity theory model.  The model that computes u*, T*, etc...           !
+   ! 1. (Legacy) BRAMS default, based on Louis (1979, Boundary-Layer Meteorol.).  It uses   !
+   !    empirical relations to estimate the flux based on the bulk Richardson number.       !
+   !                                                                                        !
+   !    All models below use an iterative method to find z/L, and the only change is the    !
+   !    functional form of the psi functions.                                               !
+   !                                                                                        !
+   ! 2. (Legacy) Oncley and Dudhia (1995) model, based on MM5.                              !
+   ! 3. (ED-2.2 default) Beljaars and Holtslag (1991) model.  Similar to 2, but it uses     !
+   !    an alternative method for the stable case that mixes more than OD95.                !
+   ! 4. (Beta) CLM-based (Oleson et al. 2013, NCAR/TN-503+STR).  Similar to options 2 and   !
+   !    3, but it uses special functions to deal with the very stable and very unstable     !
+   !    cases.  It also accounts for different roughness lengths between momentum and       !
+   !    heat.                                                                               !
+   !---------------------------------------------------------------------------------------!
+   NL%ISFCLYRM = 3
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IED_GRNDVAP -- Methods to find the ground -> canopy conductance.                       !
+   ! 0. (ED-2.2 default) Modified Lee and Pielke (1992, J. Appl. Meteorol.), adding field   !
+   !    capacity, but using the beta factor without the square, like in Noilhan and         !
+   !    Planton (1989, Mon. Wea. Rev.).                                                     !
+   ! 1. (Legacy) Test # 1 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.).                !
+   ! 2. (Legacy) Test # 2 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.).                !
+   ! 3. (Legacy) Test # 3 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.).                !
+   ! 4. (Legacy) Test # 4 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.).                !
+   ! 5. (Legacy) Combination of test #1 (alpha) and test #2 (soil resistance).              !
+   ! In all cases the beta term is modified so it approaches zero as soil moisture goes     !
+   ! to dry air soil.                                                                       !
+   !---------------------------------------------------------------------------------------!
+   NL%IED_GRNDVAP = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    These variables will eventually be removed from ED2IN; use the XML initialisation   !
+   ! file to set these parameters instead.  These variables are used to control the         !
+   ! similarity theory model.  For the meaning of these parameters, check Beljaars and      !
+   ! Holtslag (1991, J. Appl. Meteorol.).                                                   !
+   !                                                                                        !
+   ! GAMM        -- gamma coefficient for momentum, unstable case (dimensionless).          !
+   !                Ignored when ISTAR = 1.                                                 !
+   ! GAMH        -- gamma coefficient for heat, unstable case (dimensionless).              !
+   !                Ignored when ISTAR = 1.                                                 !
+   ! TPRANDTL    -- Turbulent Prandtl number.                                               !
+   !                Ignored when ISTAR = 1.                                                 !
+   ! RIBMAX      -- maximum bulk Richardson number.                                         !
+   ! LEAF_MAXWHC -- Maximum water that can be intercepted by leaves, in kg/m2leaf.          !
+   !---------------------------------------------------------------------------------------!
+   NL%GAMM        = 13.0
+   NL%GAMH        = 13.0
+   NL%TPRANDTL    = 0.74
+   NL%RIBMAX      = 0.50
+   NL%LEAF_MAXWHC = 0.11
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IPERCOL -- This controls percolation and infiltration.                                 !
+   !            0. (ED-2.2 default) Based on LEAF-3 (Walko et al. 2000, J. Appl.            !
+   !               Meteorol.).  This assumes a constant soil conductivity, and for the      !
+   !               temporary surface water it sheds liquid in excess of a 1:9 liquid-       !
+   !               to-ice ratio through percolation.  Temporary surface water exists        !
+   !               only if the top soil layer is at saturation.                             !
+   !            1. (Beta) Constant soil conductivity, and it uses the percolation model     !
+   !               as in Anderson (1976, NOAA technical report NWS 19).  Temporary          !
+   !               surface water may exist after a heavy rain event, even if the soil       !
+   !               doesn't saturate.                                                        !
+   !            2. (Beta) Similar to 1, but soil conductivity decreases with depth even     !
+   !               for constant soil moisture.                                              !
+   !---------------------------------------------------------------------------------------!
+   NL%IPERCOL = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    The following variables control the plant functional types (PFTs) that will be      !
+   ! used in this simulation.                                                               !
+   !                                                                                        !
+   ! INCLUDE_THESE_PFT -- which PFTs to be considered for the simulation.                   !
+   ! PASTURE_STOCK     -- which PFT should be used for pastures                             !
+   !                      (used only when IANTH_DISTURB = 1 or 2).                          !
+   ! AGRI_STOCK        -- which PFT should be used for agriculture                          !
+   !                      (used only when IANTH_DISTURB = 1 or 2).                          !
+   ! PLANTATION_STOCK  -- which PFT should be used for plantations                          !
+   !                      (used only when IANTH_DISTURB = 1 or 2).                          !
+   !                                                                                        !
+   ! PFT table                                                                              !
+   !---------------------------------------------------------------------------------------!
+   !  1 - C4 grass                                                                          !
+   !  2 - Tropical broadleaf, early successional                                            !
+   !  3 - Tropical broadleaf, mid-successional                                              !
+   !  4 - Tropical broadleaf, late successional                                             !
+   !  5 - Temperate C3 grass                                                                !
+   !  6 - Northern North American pines                                                     !
+   !  7 - Southern North American pines                                                     !
+   !  8 - Late-successional North American conifers                                         !
+   !  9 - Temperate broadleaf, early successional                                           !
+   ! 10 - Temperate broadleaf, mid-successional                                             !
+   ! 11 - Temperate broadleaf, late successional                                            !
+   ! 12 - (Beta) Tropical broadleaf, early successional (thick bark)                        !
+   ! 13 - (Beta) Tropical broadleaf, mid-successional (thick bark)                          !
+   ! 14 - (Beta) Tropical broadleaf, late successional (thick bark)                         !
+   ! 15 - Araucaria                                                                         !
+   ! 16 - Tropical/subtropical C3 grass                                                     !
+   ! 17 - (Beta) Lianas                                                                     !
+   !---------------------------------------------------------------------------------------!
+   NL%INCLUDE_THESE_PFT = 1,2,3,4,16
+   NL%PASTURE_STOCK     = 1
+   NL%AGRI_STOCK        = 1
+   NL%PLANTATION_STOCK  = 3
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed    !
+   !                  in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0):           !
+   !                  0. Stop the run.                                                      !
+   !                  1. Add the PFT to the INCLUDE_THESE_PFT list.                         !
+   !                  2. Ignore the cohort.                                                 !
+   !---------------------------------------------------------------------------------------!
+   NL%PFT_1ST_CHECK = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !    The following variables control the size of sub-polygon structures in ED-2.         !
+   ! IFUSION  -- Control on the patch/cohort fusion scheme.                                 !
+   !             0. (ED-2.2 default) This is the original ED-2 scheme.  This will           !
+   !                eventually be superseded by IFUSION=1.                                  !
+   !             1. (Beta) New scheme, developed to address a few issues that become more   !
+   !                evident when initialising ED with a large (>1000) number of patches.    !
+   !                It uses the absolute difference in light levels to avoid fusing         !
+   !                patches with very different canopies, and also makes sure that the      !
+   !                remaining patches have area above MIN_PATCH_AREA and that a high        !
+   !                percentage of the original landscape is retained.                       !
+   !                                                                                        !
+   ! MAXSITE  -- This is the strict maximum number of sites that each polygon can           !
+   !             contain.  Currently this is used only when the user wants to run the       !
+   !             same polygon with multiple soil types.  If there aren't that many          !
+   !             different soil types with a minimum area (check MIN_SITE_AREA below),      !
+   !             then the model will allocate just the amount needed.                       !
+   ! MAXPATCH -- A variable controlling the sought number of patches per site.              !
+   !             Possible values are:                                                       !
+   !                 0. Disable any patch fusion.  This may lead to a large number of       !
+   !                    patches in century-long simulations.                                !
+   !                 1. The model will force fusion until the total number of patches is    !
+   !                    1 for each land use type.                                           !
+   !                -1. Similar to 1, but fusion will only happen during initialisation.    !
+   !              >= 2. The model will seek fusion of patches every year, aiming to keep    !
+   !                    the number of patches below NL%MAXPATCH.                            !
+   !             <= -2. Similar to >= 2, but fusion will only happen during                 !
+   !                    initialisation.  The target number of patches will be the           !
+   !                    absolute value of NL%MAXPATCH.                                      !
+   !                                                                                        !
+   !             IMPORTANT: A given site may contain more patches than MAXPATCH in case     !
+   !                        the patches are so different that they cannot be fused even     !
+   !                        when the fusion threshold is relaxed.                           !
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed   !
+   !                  in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0)           !
+   !                  0. Stop the run                                                      !
+   !                  1. Add the PFT to the INCLUDE_THESE_PFT list                         !
+   !                  2. Ignore the cohort                                                 !
+   !---------------------------------------------------------------------------------------!
+   NL%PFT_1ST_CHECK = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     The following variables control the size of sub-polygon structures in ED-2.       !
+   ! IFUSION        -- Control on patch/cohort fusion scheme                               !
+   !                   0. (ED-2.2 default).  This is the original ED-2 scheme.  This will  !
+   !                      eventually be superseded by IFUSION=1.                           !
+   !                   1. (Beta) New scheme, developed to address a few issues that        !
+   !                      become more evident when initialising ED with a large (>1000)    !
+   !                      number of patches.  It uses absolute difference in light levels  !
+   !                      to avoid fusing patches with very different canopies, and also   !
+   !                      makes sure that remaining patches have area above                !
+   !                      MIN_PATCH_AREA and that a high percentage of the original        !
+   !                      landscape is retained.                                           !
+   !                                                                                       !
+   ! MAXSITE        -- This is the strict maximum number of sites that each polygon can    !
+   !                   contain.  Currently this is used only when the user wants to run    !
+   !                   the same polygon with multiple soil types.  If there aren't that    !
+   !                   many different soil types with a minimum area (check MIN_SITE_AREA  !
+   !                   below), then the model will allocate just the amount needed.        !
+   ! MAXPATCH       -- A variable controlling the sought number of patches per site.       !
+   !                   Possible values are:                                                !
+   !                    0.    Disable any patch fusion.  This may lead to a large number   !
+   !                          of patches in century-long simulations.                      !
+   !                    1.    The model will force fusion until the total number of        !
+   !                          patches is 1 for each land use type.                         !
+   !                   -1.    Similar to 1, but fusion will only happen during             !
+   !                          initialisation.                                              !
+   !                   >= 2.  The model will seek fusion of patches every year, aiming to  !
+   !                          keep the number of patches below NL%MAXPATCH.                !
+   !                   <= -2. Similar to >= 2, but fusion will only happen during          !
+   !                          initialisation.  The target number of patches will be the    !
+   !                          absolute value of NL%MAXPATCH.                               !
+   !                                                                                       !
+   !                   IMPORTANT: A given site may contain more patches than MAXPATCH in   !
+   !                              case the patches are so different that they cannot be    !
+   !                              fused even when the fusion threshold is relaxed.         !
+   !                                                                                       !
+   ! MAXCOHORT      -- A variable controlling the sought number of cohorts per patch.      !
+   !                   Possible values are:                                                !
+   !                    0.    Disable cohort fusion.  This may lead to a large number of   !
+   !                          cohorts in century-long simulations.                         !
+   !                   >= 1.  The model will seek fusion of cohorts every month, aiming to !
+   !                          keep the number of cohorts per patch below MAXCOHORT.        !
+   !                   <= -1. Similar to >= 1, but fusion will only happen during          !
+   !                          initialisation.  The target number of cohorts will be the    !
+   !                          absolute value of MAXCOHORT.                                 !
+   !                                                                                       !
+   !                   IMPORTANT: A given patch may contain more cohorts than MAXCOHORT in !
+   !                              case the cohorts are so different that they cannot be    !
+   !                              fused even when the fusion threshold is relaxed.         !
+   !                                                                                       !
+   ! MIN_SITE_AREA  -- This is the minimum fraction area of a given soil type that allows  !
+   !                   a site to be created.                                               !
+   !                                                                                       !
+   ! MIN_PATCH_AREA -- This is the minimum fraction area required for a patch to be        !
+   !                   retained (ignored if IED_INIT_MODE is set to 3).                    !
+   !                   IMPORTANT: This is not enforced by the model, but we recommend that !
+   !                              MIN_PATCH_AREA >= 1/MAXPATCH, otherwise the model may    !
+   !                              never reach MAXPATCH.                                    !
+   !---------------------------------------------------------------------------------------!
+   NL%IFUSION        = 0
+   NL%MAXSITE        = 1
+   NL%MAXPATCH       = 30
+   NL%MAXCOHORT      = 80
+   NL%MIN_SITE_AREA  = 0.001
+   NL%MIN_PATCH_AREA = 0.001
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ZROUGH -- Roughness length [metres] of non-vegetated soil.  This variable will be     !
+   !           eventually removed from ED2IN, use XML initialisation file to set this      !
+   !           parameter instead.                                                          !
+   !---------------------------------------------------------------------------------------!
+   NL%ZROUGH = 0.1
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     Treefall disturbance parameters.                                                  !
+   ! TREEFALL_DISTURBANCE_RATE -- Sign-dependent treefall disturbance rate:                !
+   !                              > 0. usual disturbance rate, in 1/years;                 !
+   !                              = 0. No treefall disturbance;                            !
+   ! TIME2CANOPY               -- Minimum patch age for treefall disturbance to happen.    !
+   !                              If TREEFALL_DISTURBANCE_RATE = 0., this value will be    !
+   !                              ignored.  If this value is different from zero, then     !
+   !                              TREEFALL_DISTURBANCE_RATE is internally adjusted so the  !
+   !                              average patch age is still 1/TREEFALL_DISTURBANCE_RATE.  !
+   !---------------------------------------------------------------------------------------!
+   NL%TREEFALL_DISTURBANCE_RATE = 0.0125
+   NL%TIME2CANOPY               = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! RUNOFF_TIME -- In case temporary surface water (TSW) is created, this is the          !
+   !                "e-folding lifetime" of the TSW, in seconds, due to runoff.  If you    !
+   !                don't want runoff to happen, set this to 0.                            !
+   !---------------------------------------------------------------------------------------!
+   NL%RUNOFF_TIME = 3600.
+   !---------------------------------------------------------------------------------------!
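+   !---------------------------------------------------------------------------------------!
+   !     Illustration (not a namelist variable): with RUNOFF_TIME = 3600. any ponded       !
+   ! water decays as TSW(t) = TSW(0) * exp(-t/3600), i.e. roughly 63% of the standing      !
+   ! water is shed through runoff within the first hour after it forms.                    !
+   !---------------------------------------------------------------------------------------!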
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !      These variables will be eventually removed from ED2IN, use XML initialisation    !
+   ! file to set these parameters instead.                                                 !
+   !                                                                                       !
+   !      The following variables control the minimum values of various velocities in the  !
+   ! canopy.  This is needed to prevent the air from becoming extremely still, and to      !
+   ! avoid singularities.  When defining the values, keep in mind that                     !
+   ! UBMIN >= UGBMIN >= USTMIN.                                                            !
+   !                                                                                       !
+   ! UBMIN  -- minimum wind speed at the top of the canopy air space         [       m/s]  !
+   ! UGBMIN -- minimum wind speed at the leaf level                          [       m/s]  !
+   ! USTMIN -- minimum friction velocity (u*)                                [       m/s]  !
+   !---------------------------------------------------------------------------------------!
+   NL%UBMIN  = 1.00
+   NL%UGBMIN = 0.25
+   NL%USTMIN = 0.10
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     Control parameters for printing to standard output.  Any variable can be printed  !
+   ! to standard output as long as it is one dimensional.  Polygon variables have been     !
+   ! tested; no guarantees for other hierarchical levels.  Choose any variables that are   !
+   ! defined in the variable table fill routine in ed_state_vars.f90.  Choose the start    !
+   ! and end index of the polygon, site, patch, or cohort.  It should work in parallel.    !
+   ! The indices are global indices of the entire domain.  They are printed out in rows    !
+   ! of 10 columns each.                                                                   !
+   !                                                                                       !
+   ! IPRINTPOLYS -- 0. Do not print information to screen                                  !
+   !                1. Print polygon arrays to screen, use variables described below to    !
+   !                   determine which ones and how                                        !
+   ! NPVARS      -- Number of variables to be printed                                      !
+   ! PRINTVARS   -- List of variables to be printed                                        !
+   ! PFMTSTR     -- The standard fortran format for the prints.  One format per variable   !
+   ! IPMIN       -- First polygon (absolute index) to be printed                           !
+   ! IPMAX       -- Last polygon (absolute index) to be printed                            !
+   !---------------------------------------------------------------------------------------!
+   NL%IPRINTPOLYS = 0
+   NL%NPVARS      = 1
+   NL%PRINTVARS   = 'AVG_PCPG','AVG_CAN_TEMP','AVG_VAPOR_AC','AVG_CAN_SHV'
+   NL%PFMTSTR     = 'f10.8','f5.1','f7.2','f9.5'
+   NL%IPMIN       = 1
+   NL%IPMAX       = 60
+   !---------------------------------------------------------------------------------------!
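+   !---------------------------------------------------------------------------------------!
+   !     Note on the settings above (descriptive only): since NPVARS = 1, presumably only  !
+   ! the first PRINTVARS/PFMTSTR pair (AVG_PCPG with format f10.8) would be used if        !
+   ! IPRINTPOLYS were switched on; the remaining entries act as placeholders.              !
+   !---------------------------------------------------------------------------------------!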
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     Variables that control the meteorological forcing.                                !
+   !                                                                                       !
+   ! IMETTYPE    -- Format of the meteorological dataset                                   !
+   !                0. (Non-functional) ASCII                                              !
+   !                1. (ED-2.2 default) HDF5                                               !
+   ! ISHUFFLE    -- How to choose a year outside the meteorological data range (see        !
+   !                METCYC1 and METCYCF).                                                  !
+   !                0. (ED-2.2 default) Sequentially cycle over years                      !
+   !                1. (Under development) Randomly pick a year.  The sequence of randomly !
+   !                   picked years will be the same every time the simulation is re-run,  !
+   !                   provided that the initial year and met driver time span remain the  !
+   !                   same.  There have been reports that this option behaves like        !
+   !                   option 2 (completely random).                                       !
+   !                2. (Beta) Randomly pick the years, choosing a different sequence each  !
+   !                   time the model is run.                                              !
+   !                                                                                       !
+   !                IMPORTANT: Regardless of the ISHUFFLE option, the model always uses    !
+   !                           the correct year for the period in which meteorological     !
+   !                           drivers exist.                                              !
+   !                                                                                       !
+   ! METCYC1     -- First year for which meteorological driver files exist.                !
+   ! METCYCF     -- Last year for which meteorological driver files exist.  In addition,   !
+   !                the model assumes that files exist for all years between METCYC1 and   !
+   !                METCYCF.                                                               !
+   ! IMETAVG     -- How the input radiation was originally averaged.  You must tell this   !
+   !                because ED-2.1 can make an interpolation accounting for the cosine of  !
+   !                the zenith angle.                                                      !
+   !                -1. (Deprecated) I don't know, use linear interpolation.               !
+   !                 0. No average, the values are instantaneous                           !
+   !                 1. Averages ending at the reference time                              !
+   !                 2. Averages beginning at the reference time                           !
+   !                 3. Averages centred at the reference time                             !
+   !                                                                                       !
+   !                IMPORTANT: The user must obtain the correct information for each       !
+   !                           meteorological driver before running the model, and set     !
+   !                           this variable consistently.  Inconsistent settings are      !
+   !                           known to cause numerical instabilities, particularly        !
+   !                           around sunrise and sunset.                                  !
+   !                                                                                       !
+   ! IMETRAD     -- What should the model do with the input short wave radiation?          !
+   !                0. (ED-2.2 default, when radiation components were measured)           !
+   !                   Nothing, use it as is.                                              !
+   !                1. (Legacy) Add radiation components together, then use the SiB        !
+   !                   method (Sellers et al. 1986, J. Atmos. Sci) to split radiation      !
+   !                   into the four components (PAR direct, PAR diffuse, NIR direct,      !
+   !                   NIR diffuse).                                                       !
+   !                2. (ED-2.2 default when radiation components were not measured)        !
+   !                   Add the components together, then use the method by Weiss and      !
+   !                   Norman (1985, Agric. For. Meteorol.) to split radiation down to     !
+   !                   the four components.                                                !
+   !                3. All radiation goes to diffuse.  Useful for theoretical studies      !
+   !                   only.                                                               !
+   !                4. All radiation goes to direct, except at night.  Useful for          !
+   !                   theoretical studies only.                                           !
+   !                5. (Beta) Add radiation components back together, then split           !
+   !                   radiation to the four components based on clearness index (Bendix   !
+   !                   et al. 2010, Int. J. Biometeorol.).                                 !
+   ! INITIAL_CO2 -- Initial value for CO2 in case no CO2 is provided in the meteorological !
+   !                driver dataset [Units: umol/mol]                                       !
+   !---------------------------------------------------------------------------------------!
+   NL%IMETTYPE    = 1
+   NL%ISHUFFLE    = 0
+   NL%METCYC1     = 2004
+   NL%METCYCF     = 2014
+   NL%IMETAVG     = 1
+   NL%IMETRAD     = 5
+   NL%INITIAL_CO2 = 410.
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     The following variables control the phenology prescribed from observations:       !
+   !                                                                                       !
+   ! IPHENYS1 -- First year for spring phenology                                           !
+   ! IPHENYSF -- Final year for spring phenology                                           !
+   ! IPHENYF1 -- First year for fall/autumn phenology                                      !
+   ! IPHENYFF -- Final year for fall/autumn phenology                                      !
+   ! PHENPATH -- path and prefix of the prescribed phenology data.                         !
+   !                                                                                       !
+   !     If the years don't cover the entire simulation period, they will be recycled.     !
+   !---------------------------------------------------------------------------------------!
+   NL%IPHENYS1 = 1992
+   NL%IPHENYSF = 2003
+   NL%IPHENYF1 = 1992
+   NL%IPHENYFF = 2003
+   NL%PHENPATH = '/mypath/phenology'
+   !---------------------------------------------------------------------------------------!
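+   !---------------------------------------------------------------------------------------!
+   !     Illustration of the recycling above (descriptive only): the prescribed phenology  !
+   ! spans 1992-2003, so a simulation year outside that window (say 2005) re-uses one of   !
+   ! the 1992-2003 phenology years instead of stopping the run.                            !
+   !---------------------------------------------------------------------------------------!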
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     These are some additional configuration files.                                    !
+   ! IEDCNFGF   -- XML file containing additional parameter settings.  If you don't have   !
+   !               one, leave it empty.                                                    !
+   ! EVENT_FILE -- file containing specific events that must be incorporated into the      !
+   !               simulation.                                                             !
+   !---------------------------------------------------------------------------------------!
+   NL%IEDCNFGF   = '/mypath/config.xml'
+   NL%EVENT_FILE = '/mypath/event.xml'
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     Census variables.  This assigns unique census statuses to cohorts, to allow a     !
+   ! more direct comparison between the model and census observations.  In case you don't  !
+   ! intend to compare the model with census data, set DT_CENSUS to 1.; otherwise you may  !
+   ! reduce cohort fusion.                                                                 !
+   ! DT_CENSUS       -- Time between censuses, in months.  Currently the maximum is 60     !
+   !                    months, to avoid excessive memory allocation.  Every time the      !
+   !                    simulation reaches the census time step, all census tags will be   !
+   !                    reset.                                                             !
+   ! YR1ST_CENSUS    -- In which year was the first census conducted?                      !
+   ! MON1ST_CENSUS   -- In which month was the first census conducted?                     !
+   ! MIN_RECRUIT_DBH -- Minimum DBH that is measured in the census, in cm.                 !
+   !---------------------------------------------------------------------------------------!
+   NL%DT_CENSUS       = 24
+   NL%YR1ST_CENSUS    = 2004
+   NL%MON1ST_CENSUS   = 3
+   NL%MIN_RECRUIT_DBH = 10
+   !---------------------------------------------------------------------------------------!
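+   !---------------------------------------------------------------------------------------!
+   !     Worked reading of the settings above (descriptive only): the first census falls   !
+   ! in March 2004 and, with DT_CENSUS = 24, census tags are reset every 24 months         !
+   ! thereafter (March 2006, March 2008, ...); only cohorts with DBH >= 10 cm count as     !
+   ! measured in the census comparison.                                                    !
+   !---------------------------------------------------------------------------------------!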
+
+
+
+   !---------------------------------------------------------------------------------------!
+   !     The following variables are used to control the detailed output for debugging     !
+   ! purposes.                                                                             !
+   !                                                                                       !
+   ! IDETAILED  -- This flag controls the possible detailed outputs, mostly used for       !
+   !               debugging purposes.  Notice that this doesn't replace the normal        !
+   !               debugger options, the idea is to provide detailed output to check bad   !
+   !               assumptions.  The options are additive, and the indices below represent !
+   !               the different types of output:                                          !
+   !                                                                                       !
+   !                0 -- (ED-2.2 default) No detailed output.                              !
+   !                1 -- Detailed budget (every DTLSM)                                     !
+   !                2 -- Detailed photosynthesis (every DTLSM)                             !
+   !                4 -- Detailed output from the integrator (every HDID)                  !
+   !                8 -- Thermodynamic bounds for sanity check (every DTLSM)               !
+   !               16 -- Daily error stats (which variable caused the time step to shrink) !
+   !               32 -- Allometry parameters, photosynthesis parameters, and minimum and  !
+   !                     maximum sizes (three files, only at the beginning)                !
+   !               64 -- Detailed disturbance rate output.  Two types of detailed          !
+   !                     transitions will be written (single polygon runs only).           !
+   !                     a. A text file that looks like the .lu files.  This is written    !
+   !                        only once, at the beginning of the simulation.                 !
+   !                     b. Detailed information about the transition matrix.  This is     !
+   !                        written to the standard output (e.g. screen), every time the   !
+   !                        patch dynamics is called.                                      !
+   !                                                                                       !
+   !               In case you don't want any detailed output (likely for most runs), set  !
+   !               IDETAILED to zero.  In case you want to generate multiple outputs, add  !
+   !               the numbers of the sought options: for example, if you want detailed    !
+   !               photosynthesis and detailed output from the integrator, set IDETAILED   !
+   !               to 6 (2 + 4).  Any combination of the above outputs is acceptable,      !
+   !               although all but the last produce a huge number of text files, in       !
+   !               which case you may want to look at variable PATCH_KEEP.                 !
+   !                                                                                       !
+   !               IMPORTANT: The first five options will only work for single site        !
+   !                          simulations, and it is strongly recommended to set           !
+   !                          IVEGT_DYNAMICS to 0.  These options generate tons of         !
+   !                          output, so don't try these options with long simulations.    !
+   !                                                                                       !
+   !                                                                                       !
+   ! PATCH_KEEP -- This option will eliminate all patches except one from the initial-     !
+   !               isation.  This is only used when one of the first five types of         !
+   !               detailed output is active, otherwise it will be ignored.  Options are:  !
+   !               -2.  Keep only the patch with the lowest potential LAI                  !
+   !               -1.  Keep only the patch with the highest potential LAI                 !
+   !                0.  Keep all patches.                                                  !
+   !               > 0. Keep the patch with the provided index.  In case the index is      !
+   !                    not valid, the model will crash.                                   !
+   !---------------------------------------------------------------------------------------!
+   NL%IDETAILED  = 0
+   NL%PATCH_KEEP = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! GROWTH_RESP_SCHEME -- This flag indicates how growth respiration fluxes are treated.  !
+   !                                                                                       !
+   !   0 - (Legacy) Growth respiration is treated as a tax on GPP, at a PFT-specific       !
+   !       rate given by growth_resp_factor.  All growth respiration is treated as an      !
+   !       aboveground wood -> canopy-airspace flux.                                       !
+   !   1 - (ED-2.2 default) Growth respiration is calculated as in 0, but split into       !
+   !       fluxes entering the CAS from Leaf, Fine Root, Sapwood (above- and below-        !
+   !       -ground), and Bark (above- and below-ground, only when IALLOM=3),               !
+   !       proportionally to the biomass of each tissue.  This does not affect the carbon  !
+   !       budget at all; it provides greater within-ecosystem flux resolution.            !
+   !---------------------------------------------------------------------------------------!
+   NL%GROWTH_RESP_SCHEME = 1
+   !---------------------------------------------------------------------------------------!
+
+
+   !---------------------------------------------------------------------------------------!
+   ! STORAGE_RESP_SCHEME -- This flag controls how storage respiration fluxes are treated. !
+   !                                                                                       !
+   !   0 - (Legacy) Storage resp. is an aboveground wood -> canopy-airspace flux.          !
+   !   1 - (ED-2.2 default) Storage respiration is calculated as in 0, but split into      !
+   !       fluxes entering the CAS from Leaf, Fine Root, Sapwood (above- and below-        !
+   !       -ground), and Bark (above- and below-ground, only when IALLOM=3),               !
+   !       proportionally to the biomass of each tissue.  This does not affect the carbon  !
+   !       budget at all; it provides greater within-ecosystem flux resolution.            !
+   !---------------------------------------------------------------------------------------!
+   NL%STORAGE_RESP_SCHEME = 1
+   !---------------------------------------------------------------------------------------!
+$END
!==========================================================================================!
!==========================================================================================!

From 1c4bcd9bf004d387f34b9ee7e8bbc099459ca662 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Tue, 5 May 2020 17:19:42 -0700
Subject: [PATCH 0940/2289] Update CHANGELOG.md

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index f92d78d348c..2c97b2fb78f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 
 - When building sipnet model would not set correct model version
 - Update pecan/depends docker image to have latest Roxygen and devtools.
- Update ED docker build, will now build version 2.2.0 and git
+- Remove ED2IN.git and add new versioned ED2IN template: ED2INv2.2.0 (#2143)
 
 ### Changed
 - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083).

From 2c8eca2d2e6c6066f4f33dc92d124fb49518bda0 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Tue, 5 May 2020 18:06:00 -0700
Subject: [PATCH 0941/2289] add a true docker quickstart

previously the docker-quickstart link pointed to this file, with the
first section

```
## The PEcAn docker install process in detail {#docker-quickstart}
```

---
 .../94_docker/02_quickstart.Rmd               | 37 ++++++++++++++++++-
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/book_source/03_topical_pages/94_docker/02_quickstart.Rmd b/book_source/03_topical_pages/94_docker/02_quickstart.Rmd
index d23ed2bf550..16989fe91ad 100644
--- a/book_source/03_topical_pages/94_docker/02_quickstart.Rmd
+++ b/book_source/03_topical_pages/94_docker/02_quickstart.Rmd
@@ -1,4 +1,37 @@
-## The PEcAn docker install process in detail {#docker-quickstart}
+## Quick-start docker install {#docker-quickstart}
+
+```bash
+git clone git@github.com:PecanProject/pecan
+cd pecan
+
+# start database
+docker-compose -p pecan up -d postgres
+
+# add example data (first time only)
+docker-compose run --rm bety initialize
+docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+
+# start PEcAn
+docker-compose -p pecan up -d
+
+# run a model
+curl -v -X POST \
+    -F 'hostname=docker' \
+    -F 'modelid=5000000002' \
+    -F 'sitegroupid=1' \
+    -F 'siteid=772' \
+    -F 'sitename=Niwot Ridge Forest/LTER NWT1 (US-NR1)' \
+    -F 'pft[]=temperate.coniferous' \
+    -F 'start=2004/01/01' \
+    -F 'end=2004/12/31' \
+    -F 'input_met=5000000005' \
+    -F 'email=' \
+    -F 'notes=' \
+    'http://localhost:8000/pecan/04-runpecan.php'
+```
+
+
+## The PEcAn docker install process in detail
 
 ### Configure docker-compose {#pecan-setup-compose-configure}
 
@@ -65,7 +98,7 @@ As a side effect, the above command will also create blank data ["volumes"](htt
 
 Because our project is called `pecan` and `docker-compose.yml` describes a network called `pecan`, the resulting network is called `pecan_pecan`. This is relevant to the following commands, which will actually initialize and populate the BETY database.
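+If you want to double-check at this point, `docker network ls` and `docker volume ls` should now list the newly created network and volumes (the `pecan_pecan` names below assume the project name `pecan` used above):
+
+```bash
+# optional sanity check: both commands should list pecan-prefixed entries
+docker network ls | grep pecan_pecan
+docker volume ls | grep pecan
+```
+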
-Assuming the above ran successfully, next run the following: +Assuming the above has run successfully, next run the following: ```bash docker-compose run --rm bety initialize From 69e6fecd4c960ae39868bed2cd25a4d6579cbb32 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 6 May 2020 08:24:57 -0500 Subject: [PATCH 0942/2289] only run on pecanproject/pecan --- .github/workflows/depends.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml index 909d5153915..814547f9dc7 100644 --- a/.github/workflows/depends.yml +++ b/.github/workflows/depends.yml @@ -11,7 +11,7 @@ on: jobs: depends: - #if: github.repository == 'PecanProject/pecan' + if: github.repository == 'PecanProject/pecan' runs-on: ubuntu-latest From 444ec08c49805e64aa1f80afc1be718dc227b6ad Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 6 May 2020 16:03:59 +0200 Subject: [PATCH 0943/2289] forgot changelog in #2592 --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 8c27857eba3..19e373161d5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -23,6 +23,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Update ED docker build, will now build version 2.2.0 and git ### Changed +- Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083). - `PEcAn.DB::insert_table` now uses `DBI::dbAppendTable` internally instead of manually constructed SQL (#2552). - Rebuilt documentation using Roxygen 7. Readers get nicer formatting of usage sections, writers get more flexible behavior when inheriting parameters and less hassle when maintaining namespaces (#2524). From eeb22ff6738dbb12a63d73fd546ea17f5730a1b3 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 08:54:18 -0700 Subject: [PATCH 0944/2289] Delete ED2IN.git (#2599) this no longer works with the master branch of the https://github.com/EDmodel/ED2 keeping this unversioned ED2IN around will only cause trouble. --- models/ed/inst/ED2IN.git | 1259 -------------------------------------- 1 file changed, 1259 deletions(-) delete mode 100644 models/ed/inst/ED2IN.git diff --git a/models/ed/inst/ED2IN.git b/models/ed/inst/ED2IN.git deleted file mode 100644 index 58a154572b2..00000000000 --- a/models/ed/inst/ED2IN.git +++ /dev/null @@ -1,1259 +0,0 @@ -!==========================================================================================! -!==========================================================================================! -! ED2IN . ! -! ! -! This is the file that contains the variables that define how ED is to be run. There ! -! is some brief information about the variables here. ! -!------------------------------------------------------------------------------------------! -$ED_NL - - !----- Simulation title (64 characters). -----------------------------------------------! - NL%EXPNME = 'ED2 vGITHUB PEcAn @ENSNAME@' - !---------------------------------------------------------------------------------------! - - !---------------------------------------------------------------------------------------! - ! Type of run: ! - ! 
INITIAL -- Starts a new run, that can be based on a previous run (restart/history), ! - ! but then it will use only the biomass and soil carbon information. ! - ! HISTORY -- Resumes a simulation from the last history. This is different from ! - ! initial in the sense that exactly the same information written in the ! - ! history will be used here. ! - !---------------------------------------------------------------------------------------! - NL%RUNTYPE = 'INITIAL' - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! Start of simulation. Information must be given in UTC time. ! - !---------------------------------------------------------------------------------------! - NL%IMONTHA = @START_MONTH@ - NL%IDATEA = @START_DAY@ - NL%IYEARA = @START_YEAR@ - NL%ITIMEA = 0000 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! End of simulation. Information must be given in UTC time. ! - !---------------------------------------------------------------------------------------! - NL%IMONTHZ = @END_MONTH@ - NL%IDATEZ = @END_DAY@ - NL%IYEARZ = @END_YEAR@ - NL%ITIMEZ = 0000 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! DTLSM -- Time step to integrate photosynthesis, and the maximum time step for ! - ! integration of energy and water budgets (units: seconds). Notice that the ! - ! model will take steps shorter than this if this is too coarse and could ! - ! lead to loss of accuracy or unrealistic results in the biophysics. ! - ! Recommended values are < 60 seconds if INTEGRATION_SCHEME is 0, and 240-900 ! - ! seconds otherwise. ! - ! RADFRQ -- Time step to integrate radiation, in seconds. This must be an integer ! - ! multiple of DTLSM, and we recommend it to be exactly the same as DTLSM. ! - !---------------------------------------------------------------------------------------! - NL%DTLSM = 600. - NL%RADFRQ = 600. - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used in case the user wants to run a regional run. ! - ! ! - ! N_ED_REGION -- number of regions for which you want to run ED. This can be set to ! - ! zero provided that N_POI is not... ! - ! GRID_TYPE -- which kind of grid to run: ! - ! 0. Longitude/latitude grid ! - ! 1. Polar-stereographic ! - !---------------------------------------------------------------------------------------! - NL%N_ED_REGION = 0 - NL%GRID_TYPE = 0 - - !------------------------------------------------------------------------------------! - ! The following variables are used only when GRID_TYPE is set to 0. You must ! - ! provide one value for each grid, except otherwise noted. ! - ! ! - ! GRID_RES -- Grid resolution, in degrees (first grid only, the other grids ! - ! resolution will be defined by NSTRATX/NSTRATY). ! - ! ED_REG_LATMIN -- Southernmost point of each region. ! - ! ED_REG_LATMAX -- Northernmost point of each region. ! - ! ED_REG_LONMIN -- Westernmost point of each region. ! - ! ED_REG_LONMAX -- Easternmost point of each region. ! 
-   !------------------------------------------------------------------------------------!
-   NL%GRID_RES      = 1.0    ! This is the grid resolution scale in degrees.
-   NL%ED_REG_LATMIN = -90    ! List of minimum latitudes;
-   NL%ED_REG_LATMAX =  90    ! List of maximum latitudes;
-   NL%ED_REG_LONMIN = -180   ! List of minimum longitudes;
-   NL%ED_REG_LONMAX =  180   ! List of maximum longitudes;
-   !------------------------------------------------------------------------------------!
-
-
-
-   !------------------------------------------------------------------------------------!
-   !     The following variables are used only when GRID_TYPE is set to 1.              !
-   !                                                                                    !
-   ! NNXP    -- number of points in the X direction.  One value for each grid.          !
-   ! NNYP    -- number of points in the Y direction.  One value for each grid.          !
-   ! DELTAX  -- grid resolution in the X direction, near the grid pole.  Units: [ m].   !
-   !            this value is used to define the first grid only, other grids are       !
-   !            defined using NNSTRATX.                                                 !
-   ! DELTAY  -- grid resolution in the Y direction, near the grid pole.  Units: [ m].   !
-   !            this value is used to define the first grid only, other grids are       !
-   !            defined using NNSTRATX.  Unless you are running some specific tests,    !
-   !            both DELTAX and DELTAY should be the same.                              !
-   ! POLELAT -- Latitude of the pole point.  Set this close to CENTLAT for a more       !
-   !            traditional "square" domain.  One value for all grids.                  !
-   ! POLELON -- Longitude of the pole point.  Set this close to CENTLON for a more      !
-   !            traditional "square" domain.  One value for all grids.                  !
-   ! CENTLAT -- Latitude of the central point.  One value for each grid.                !
-   ! CENTLON -- Longitude of the central point.  One value for each grid.               !
-   !------------------------------------------------------------------------------------!
-   NL%NNXP    = 110
-   NL%NNYP    = 70
-   NL%DELTAX  = 60000.
-   NL%DELTAY  = 60000.
-   NL%POLELAT = -2.609075
-   NL%POLELON = -60.2093
-   NL%CENTLAT = -2.609075
-   NL%CENTLON = -60.2093
-   !------------------------------------------------------------------------------------!
-
-
-
-   !------------------------------------------------------------------------------------!
-   !     Nest ratios.  These values are used by both GRID_TYPE=0 and GRID_TYPE=1.       !
-   ! NSTRATX -- this will divide the values given by DELTAX or GRID_RES for the         !
-   !            nested grids.  The first value should always be one.                    !
-   ! NSTRATY -- this will divide the values given by DELTAY or GRID_RES for the         !
-   !            nested grids.  The first value should always be one, and this must      !
-   !            always be the same as NSTRATX when GRID_TYPE = 0, and this is also      !
-   !            strongly recommended when GRID_TYPE = 1.                                !
-   !------------------------------------------------------------------------------------!
-   NL%NSTRATX = 1,4
-   NL%NSTRATY = 1,4
-   !------------------------------------------------------------------------------------!
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   !     The following variables are used to define single polygon of interest runs, and    !
-   ! they are ignored when N_ED_REGION = 0.                                                !
-   !                                                                                       !
-   ! N_POI   -- number of polygons of interest (POIs).  This can be zero as long as        !
-   !            N_ED_REGION is not.                                                        !
-   ! POI_LAT -- list of latitudes of each POI.                                             !
-   ! POI_LON -- list of longitudes of each POI.                                            !
-   ! POI_RES -- grid resolution of each POI (degrees).  This is used only to define the    !
-   !            soil types                                                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%N_POI   = 1            ! number of polygons of interest (POIs).  This could be zero.
-   NL%POI_LAT = @SITE_LAT@   ! list of the latitudes of the POIs (degrees north)
-   NL%POI_LON = @SITE_LON@   ! list of the longitudes of the POIs (degrees east)
-   NL%POI_RES = 1.00
-   !---------------------------------------------------------------------------------------!
-   !---------------------------------------------------------------------------------------!
-   ! LOADMETH -- Load balancing method.  This is used only in regional runs run in         !
-   !             parallel.                                                                 !
-   !             0. Let ED decide the best way of splitting the polygons.  Commonest       !
-   !                option and default.                                                    !
-   !             1. One of the methods to split polygons based on their previous           !
-   !                work load.  Developers only.                                           !
-   !             2. Try to load an equal number of SITES per node.  Useful for when the    !
-   !                total number of polygons is the same as the total number of cores.     !
-   !             3. Another method to split polygons based on their previous work load.    !
-   !                Developers only.                                                       !
-   !---------------------------------------------------------------------------------------!
-   NL%LOADMETH = 0
-   !---------------------------------------------------------------------------------------!
-
-
-
-
-   !---------------------------------------------------------------------------------------!
-   !     ED2 File output.  For all the variables 0 means no output and 3 means HDF5        !
-   ! output.                                                                               !
-   !                                                                                       !
-   ! IFOUTPUT -- Fast analysis.  These are mostly polygon-level averages, and the time     !
-   !             interval between files is determined by FRQANL                            !
-   ! IDOUTPUT -- Daily means (one file per day)                                            !
-   ! IMOUTPUT -- Monthly means (one file per month)                                        !
-   ! IQOUTPUT -- Monthly means of the diurnal cycle (one file per month).  The number      !
-   !             of points for the diurnal cycle is 86400 / FRQANL                         !
-   ! IYOUTPUT -- Annual output.                                                            !
-   ! ITOUTPUT -- Instantaneous fluxes, mostly polygon-level variables, one file per year.  !
-   ! ISOUTPUT -- restart file, for HISTORY runs.  The time interval between files is       !
-   !             determined by FRQHIS                                                      !
-   !---------------------------------------------------------------------------------------!
-   NL%IFOUTPUT = 0   ! Instantaneous analysis (site average)
-   NL%IDOUTPUT = 0   ! Daily means (site average)
-   NL%IMOUTPUT = 0   ! Monthly means (site average)
-   NL%IQOUTPUT = 0   ! Monthly means (diurnal cycle)
-   NL%IYOUTPUT = 3   ! Annual output
-   NL%ITOUTPUT = 3   ! Instantaneous fluxes (site average) --> "Tower" Files
-   NL%ISOUTPUT = 0   ! History files
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   !     The following variables control whether site-, patch-, and cohort-level time      !
-   ! means and mean sum of squares should be included in the output files or not.          !
-   !                                                                                       !
-   ! Reasons to add them:                                                                  !
-   !    a. Sub-polygon variables are more comprehensive.                                   !
-   !    b. Explore heterogeneity within a polygon and make interesting analyses.           !
-   !    c. More chances to create cool 3-D plots.                                          !
-   !                                                                                       !
-   ! Reasons to NOT add them:                                                              !
-   !    a. Output files will become much larger!                                           !
-   !    b. In regional/coupled runs, the output files will be ridiculously large.          !
-   !    c. You may fill up the disk.                                                       !
-   !    d. Other people's jobs may crash due to insufficient disk space.                   !
-   !    e. You will gain a bad reputation amongst your colleagues.                         !
-   !    f. And it will be entirely your fault.                                             !
-   !                                                                                       !
-   !      Either way, polygon-level averages are always included, and so are the instan-   !
-   ! taneous site-, patch-, and cohort-level variables needed for resuming the run.        !
-   !                                                                                       !
-   ! IADD_SITE_MEANS   -- Add site-level averages to the output (0 = no; 1 = yes)          !
-   ! IADD_PATCH_MEANS  -- Add patch-level averages to the output (0 = no; 1 = yes)         !
-   ! IADD_COHORT_MEANS -- Add cohort-level averages to the output (0 = no; 1 = yes)        !
-   !                                                                                       !
-   !---------------------------------------------------------------------------------------!
-   NL%IADD_SITE_MEANS   = 0
-   NL%IADD_PATCH_MEANS  = 0
-   NL%IADD_COHORT_MEANS = 0
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! ATTACH_METADATA -- Flag for attaching metadata to HDF datasets.  Attaching metadata   !
-   !                    will aid new users in quickly identifying dataset descriptions but !
-   !                    will compromise I/O performance significantly.                     !
-   !                    0 = no metadata, 1 = attach metadata                               !
-   !---------------------------------------------------------------------------------------!
-   NL%ATTACH_METADATA = 0
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! UNITFAST  -- The following variables control the units for FRQFAST/OUTFAST, and       !
-   ! UNITSTATE    FRQSTATE/OUTSTATE, respectively.  Possible values are:                   !
-   !                 0.  Seconds;                                                          !
-   !                 1.  Days;                                                             !
-   !                 2.  Calendar months (variable)                                        !
-   !                 3.  Calendar years (variable)                                         !
-   !                                                                                       !
-   ! N.B.: 1. In case OUTFAST/OUTSTATE are set to special flags (-1 or -2)                 !
-   !          UNITFAST/UNITSTATE will be ignored for them.                                 !
-   !       2. In case IQOUTPUT is set to 3, then UNITFAST has to be 0.                     !
-   !                                                                                       !
-   !---------------------------------------------------------------------------------------!
-   NL%UNITFAST  = 0
-   NL%UNITSTATE = 3
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! OUTFAST/OUTSTATE -- these control the number of times per file.                       !
-   !                      0. Each time gets its own file                                   !
-   !                     -1. One file per day                                              !
-   !                     -2. One file per month                                            !
-   !                    > 0. Multiple timepoints can be recorded to a single file reducing !
-   !                         the number of files and i/o time in post-processing.          !
-   !                         Multiple timepoints should not be used in the history files   !
-   !                         if you intend to use these for HISTORY runs.                  !
-   !---------------------------------------------------------------------------------------!
-   NL%OUTFAST  = -1   ! orig. 3600.
-   NL%OUTSTATE =  0   ! orig. 1.
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! ICLOBBER -- What to do in case the model finds a file that it was supposed to have    !
-   !             written?  0 = stop the run, 1 = overwrite without warning.                !
-   ! FRQFAST  -- time interval between analysis files, units defined by UNITFAST.          !
-   ! FRQSTATE -- time interval between history files, units defined by UNITSTATE.          !
-   !---------------------------------------------------------------------------------------!
-   NL%ICLOBBER = 1       ! 0 = stop if files exist, 1 = overwrite files
-   NL%FRQFAST  = 1800.   ! Time interval between analysis/history files
-   NL%FRQSTATE = 86400.  ! 
Time interval between history files - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! FFILOUT -- Path and prefix for analysis files (all but history/restart). ! - ! SFILOUT -- Path and prefix for history files. ! - !---------------------------------------------------------------------------------------! - NL%FFILOUT = '@FFILOUT@' ! Analysis output prefix; - NL%SFILOUT = '@SFILOUT@' ! History output prefix - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! IED_INIT_MODE -- This controls how the plant community and soil carbon pools are ! - ! initialised. ! - ! ! - ! -1. Start from a true bare ground run, or an absolute desert run. This will ! - ! never grow any plant. ! - ! 0. Start from near-bare ground (only a few seedlings from each PFT to be included ! - ! in this run). ! - ! 1. This will use history files written by ED-1.0. It will grab the ecosystem ! - ! state (like biomass, LAI, plant density, etc.), but it will start the ! - ! thermodynamic state as a new simulation. ! - ! 2. Same as 1, but it uses history files from ED-2.0 without multiple sites, and ! - ! with the old PFT numbers. ! - ! 3. Same as 1, but using history files from ED-2.0 with multiple sites and ! - ! TOPMODEL hydrology. ! - ! 4. Same as 1, but using ED2.1 H5 history/state files that take the form: ! - ! 'dir/prefix-gxx.h5' ! - ! Initialization files MUST end with -gxx.h5 where xx is a two digit integer ! - ! grid number. Each grid has its own initialization file. As an example, if a ! - ! user has two files to initialize their grids with: ! - ! example_file_init-g01.h5 and example_file_init-g02.h5 ! - ! NL%SFILIN = 'example_file_init' ! - ! ! - ! 5. This is similar to option 4, except that you may provide several files ! - ! (including a mix of regional and POI runs, each file ending at a different ! - ! date). This will not check date nor grid structure, it will simply read all ! - ! polygons and match the nearest neighbour to each polygon of your run. SFILIN ! - ! must have the directory common to all history files that are sought to be used,! - ! up to the last character the files have in common. For example if your files ! - ! are ! - ! /mypath/P0001-S-2000-01-01-000000-g01.h5, ! - ! /mypath/P0002-S-1966-01-01-000000-g02.h5, ! - ! ... ! - ! /mypath/P1000-S-1687-01-01-000000-g01.h5: ! - ! NL%SFILIN = '/mypath/P' ! - ! ! - ! 6 - Initialize with ED-2 style files without multiple sites, exactly like option ! - ! 2, except that the PFT types are preserved. ! - !---------------------------------------------------------------------------------------! - NL%IED_INIT_MODE = @INIT_MODEL@ - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! EDRES -- Expected input resolution for ED2.0 files. This is not used unless ! - ! IED_INIT_MODE = 3. ! - !---------------------------------------------------------------------------------------! - NL%EDRES = 1.0 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! 
SFILIN -- The meaning and the size of this variable depend on the type of run, set     !
-   !           at variable NL%RUNTYPE.                                                     !
-   !                                                                                       !
-   ! 1. INITIAL.  Then this is the path+prefix of the previous ecosystem state.  This has  !
-   !              dimension of the number of grids so you can initialize each grid with a  !
-   !              different dataset.  In case only one path+prefix is given, the same will !
-   !              be used for every grid.  Only some ecosystem variables will be set up    !
-   !              here, and the initial condition will be in thermodynamic equilibrium.    !
-   !                                                                                       !
-   ! 2. HISTORY.  This is the path+prefix of the history file that will be used.  Only the !
-   !              path+prefix will be used, as the history for every grid must have come   !
-   !              from the same simulation.                                                !
-   !---------------------------------------------------------------------------------------!
-   NL%SFILIN = '@SITE_PSSCSS@'
-   !---------------------------------------------------------------------------------------!
-   !     History file information.  These variables are used to continue a simulation from !
-   ! a point other than the beginning.  Time must be in UTC.                               !
-   !                                                                                       !
-   ! IMONTHH -- the time of the history file.  This is the only place you need to change   !
-   ! IDATEH     dates for a HISTORY run.  You may change IMONTHZ and related in case you   !
-   ! IYEARH     want to extend the run, but you should NOT change IMONTHA and related.     !
-   ! ITIMEH                                                                                !
-   !---------------------------------------------------------------------------------------!
-   NL%ITIMEH  = 0000
-   NL%IDATEH  = 01
-   NL%IMONTHH = 01
-   NL%IYEARH  = 1500
-   !---------------------------------------------------------------------------------------!
-   ! NZG - number of soil layers.  One value for all grids.                                !
-   ! NZS - maximum number of snow/water ponding layers.  This is used only for             !
-   !       snow; if only liquid water is standing, the water will be all collapsed         !
-   !       into a single layer, so if you are running for places where it doesn't snow     !
-   !       a lot, leave this set to 1.  One value for all grids.                           !
-   !---------------------------------------------------------------------------------------!
-   NL%NZG = 9
-   NL%NZS = 1
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! ISOILFLG -- this controls which soil type input you want to use.                      !
-   !             1. Read in from a dataset I will provide in the SOIL_DATABASE variable a  !
-   !                few lines below.                                                       !
-   !             2. No data available, I will use constant values I will provide in        !
-   !                NSLCON or by prescribing the fraction of sand and clay (see SLXSAND    !
-   !                and SLXCLAY).                                                          !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILFLG = 2
-   !---------------------------------------------------------------------------------------!
-   ! NSLCON -- ED-2 Soil classes that the model will use when ISOILFLG is set to 2.        !
-   !           Possible values are:                                                        !
-   !---------------------------------------------------------------------------------------!
-   !   1 -- sand                |   7 -- silty clay loam     |  13 -- bedrock              !
-   !   2 -- loamy sand          |   8 -- clayey loam         |  14 -- silt                 !
-   !   3 -- sandy loam          |   9 -- sandy clay          |  15 -- heavy clay           !
-   !   4 -- silt loam           |  10 -- silty clay          |  16 -- clayey sand          !
-   !   5 -- loam                |  11 -- clay                |  17 -- clayey silt          !
-   !   6 -- sandy clay loam     |  12 -- peat                                              !
-   !---------------------------------------------------------------------------------------!
-   NL%NSLCON = 3   !3 US-WCr, 2 US-Syv, 10 US-Los
-   !---------------------------------------------------------------------------------------!
-   ! ISOILCOL -- LEAF-3 and ED-2 soil colour classes that the model will use when ISOILFLG !
-   !             is set to 2.  Soil classes are from 1 to 20 (1 = lightest; 20 = darkest). !
-   !             The values are the same as CLM-4.0.  The table is the albedo for visible  !
-   !             and near infra-red.                                                       !
-   !---------------------------------------------------------------------------------------!
-   !                                                                                       !
-   !       |-----------------------------------------------------------------------|      !
-   !       |       |   Dry soil  |  Saturated  |       |   Dry soil  |  Saturated  |      !
-   !       | Class |-------------+-------------| Class +-------------+-------------|      !
-   !       |       |  VIS |  NIR |  VIS |  NIR |       |  VIS |  NIR |  VIS |  NIR |      !
-   !       |-------+------+------+------+------+-------+------+------+------+------|      !
-   !       |     1 | 0.36 | 0.61 | 0.25 | 0.50 |    11 | 0.24 | 0.37 | 0.13 | 0.26 |      !
-   !       |     2 | 0.34 | 0.57 | 0.23 | 0.46 |    12 | 0.23 | 0.35 | 0.12 | 0.24 |      !
-   !       |     3 | 0.32 | 0.53 | 0.21 | 0.42 |    13 | 0.22 | 0.33 | 0.11 | 0.22 |      !
-   !       |     4 | 0.31 | 0.51 | 0.20 | 0.40 |    14 | 0.20 | 0.31 | 0.10 | 0.20 |      !
-   !       |     5 | 0.30 | 0.49 | 0.19 | 0.38 |    15 | 0.18 | 0.29 | 0.09 | 0.18 |      !
-   !       |     6 | 0.29 | 0.48 | 0.18 | 0.36 |    16 | 0.16 | 0.27 | 0.08 | 0.16 |      !
-   !       |     7 | 0.28 | 0.45 | 0.17 | 0.34 |    17 | 0.14 | 0.25 | 0.07 | 0.14 |      !
-   !       |     8 | 0.27 | 0.43 | 0.16 | 0.32 |    18 | 0.12 | 0.23 | 0.06 | 0.12 |      !
-   !       |     9 | 0.26 | 0.41 | 0.15 | 0.30 |    19 | 0.10 | 0.21 | 0.05 | 0.10 |      !
-   !       |    10 | 0.25 | 0.39 | 0.14 | 0.28 |    20 | 0.08 | 0.16 | 0.04 | 0.08 |      !
-   !       |-----------------------------------------------------------------------|      !
-   !                                                                                       !
-   !     Soil type 21 is a special case in which we use the albedo method that used to be  !
-   ! the default in ED-2.1.                                                                !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILCOL = 10   !21 12 for US-Los
-   !---------------------------------------------------------------------------------------!
-   !     These variables are used to define the soil properties when you don't want to use !
-   ! the standard soil classes.                                                            !
-   !                                                                                       !
-   ! SLXCLAY -- Prescribed fraction of clay  [0-1]                                         !
-   ! SLXSAND -- Prescribed fraction of sand  [0-1].                                        !
-   !                                                                                       !
-   !     They are used only when ISOILFLG is 2, both values are between 0. and 1., and     !
-   ! their sum doesn't exceed 1.  Otherwise standard ED values will be used instead.       !
-   !---------------------------------------------------------------------------------------!
-   NL%SLXCLAY = 0.13   ! 0.13 US-WCr, 0.06 US-Syv, 0.0663 US-PFa, 0.68 default
-   NL%SLXSAND = 0.54   ! 0.54 US-WCr, 0.57 US-Syv, 0.5931 US-PFa, 0.20 default
-   !---------------------------------------------------------------------------------------!
-   !     Soil grid and initial conditions if no file is provided:                          !
-   !                                                                                       !
-   ! SLZ    - soil depth in m.  Values must be negative and go from the deepest layer to   !
-   !          the top.                                                                     !
-   ! SLMSTR - this is the initial soil moisture, now given as the soil moisture index.     !
-   !          Values can be fractions, in which case they will be linearly interpolated    !
-   !          between the special points (e.g. 0.5 will put soil moisture half way         !
-   !          between the wilting point and field capacity).                               !
-   !              -1 = dry air soil moisture                                               !
-   !               0 = wilting point                                                       !
-   !               1 = field capacity                                                      !
-   !               2 = porosity (saturation)                                               !
-   ! STGOFF - initial temperature offset (soil temperature = air temperature + offset)     !
-   !---------------------------------------------------------------------------------------!
-   NL%SLZ    = -2.0,-1.5, -1.0, -0.80, -0.60, -0.40, -0.2, -0.10, -0.05
-   NL%SLMSTR = 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65
-   NL%STGOFF = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
-   !---------------------------------------------------------------------------------------!
-   !     Input databases                                                                   !
-   ! VEG_DATABASE     -- vegetation database, used only to determine the land/water mask.  !
-   !                     Fill with the path and the prefix.                                 !
-   ! SOIL_DATABASE    -- soil database, used to determine the soil type.  Fill with the    !
-   !                     path and the prefix.                                              !
-   ! LU_DATABASE      -- land-use change disturbance rates database, used only when        !
-   !                     IANTH_DISTURB is set to 1.  Fill with the path and the prefix.    !
-   ! PLANTATION_FILE  -- plantation fraction file.  In case you don't have such a file or  !
-   !                     you do not want to use it, you must leave this variable empty:    !
-   !                     (NL%PLANTATION_FILE = '')                                         !
-   ! THSUMS_DATABASE  -- input directory with dataset to initialise chilling degrees and   !
-   !                     growing degree days, which is used to drive the cold-deciduous    !
-   !                     phenology (you must always provide this, even when your PFTs are  !
-   !                     not cold deciduous).                                              !
-   ! ED_MET_DRIVER_DB -- File containing information for meteorological driver             !
-   !                     instructions (the "header" file).                                 !
-   ! SOILSTATE_DB     -- Dataset in case you want to provide the initial conditions of     !
-   !                     soil temperature and moisture.                                    !
-   ! SOILDEPTH_DB     -- Dataset in case you want to read in soil depth information.       !
-   !---------------------------------------------------------------------------------------!
-   NL%VEG_DATABASE     = '@ED_VEG@'
-   NL%SOIL_DATABASE    = '@ED_SOIL@'
-   NL%LU_DATABASE      = '@ED_LU@'
-   NL%PLANTATION_FILE  = ''
-   NL%THSUMS_DATABASE  = '@ED_THSUM@'
-   NL%ED_MET_DRIVER_DB = '@SITE_MET@'
-   NL%SOILSTATE_DB     = ''
-   NL%SOILDEPTH_DB     = ''
-   !---------------------------------------------------------------------------------------!
-   ! ISOILSTATEINIT -- Variable controlling how to initialise the soil temperature and     !
-   !                   moisture                                                            !
-   !                   0.  Use SLMSTR and STGOFF.                                          !
-   !                   1.  Read from SOILSTATE_DB.                                         !
-   ! ISOILDEPTHFLG  -- Variable controlling how to initialise soil depth                   !
-   !                   0.  Constant, always defined by the first SLZ layer.                !
-   !                   1.  Read from SOILDEPTH_DB.                                         !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILSTATEINIT = 0
-   NL%ISOILDEPTHFLG  = 0
-   !---------------------------------------------------------------------------------------!
-   ! ISOILBC -- This controls the soil moisture boundary condition at the bottom.  If      !
-   !            unsure, use 0 for short-term simulations (couple of days), and 1 for long- !
-   !            -term simulations (months to years).                                       !
-   !            0.  Bedrock.  Flux from the bottom of the bottommost layer is set to 0.    !
-   !            1.  Gravitational flow.  The flux from the bottom of the bottommost layer  !
-   !                is due to gradient of height only.                                     !
-   !            2.  Super drainage.  Soil moisture of the fictitious layer beneath the     !
-   !                bottom is always at dry air soil moisture.                             !
-   !            3.  Half-way.  Assume that the fictitious layer beneath the bottom is      !
-   !                always at field capacity.                                              !
-   !            4.  Aquifer.  Soil moisture of the fictitious layer beneath the bottom is  !
-   !                always at saturation.                                                  !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILBC = 1
-   !---------------------------------------------------------------------------------------!
-   ! SLDRAIN -- This is used only when ISOILBC is set to 2.  In this case SLDRAIN is the   !
-   !            equivalent slope that will slow down drainage.  If this is set to zero,    !
-   !            then lateral drainage reduces to flat bedrock, and if this is set to 90,   !
-   !            then lateral drainage becomes free drainage.  SLDRAIN must be between 0    !
-   !            and 90.                                                                    !
-   !---------------------------------------------------------------------------------------!
-   NL%SLDRAIN = 10.
-   !---------------------------------------------------------------------------------------!
-   ! IVEGT_DYNAMICS -- The vegetation dynamics scheme.                                     !
-   !                   0.  No vegetation dynamics, the initial state will be preserved,    !
-   !                       even though the model will compute the potential values.  This  !
-   !                       option is useful for theoretical simulations only.              !
-   !                   1.  Normal ED vegetation dynamics (Moorcroft et al 2001).           !
-   !                       The normal option for almost any simulation.                    !
-   !---------------------------------------------------------------------------------------!
-   NL%IVEGT_DYNAMICS = 1
-   !---------------------------------------------------------------------------------------!
-   ! IBIGLEAF -- Do you want to run ED as a 'big leaf' model?                              !
-   !             0.  No, use the standard size- and age-structure (Moorcroft et al. 2001)  !
-   !                 This is the recommended method for most applications.                 !
-   !             1.  'big leaf' ED: this will have no horizontal or vertical hetero-       !
-   !                 geneities; 1 patch per PFT and 1 cohort per patch; no vertical        !
-   !                 growth, recruits will 'appear' instantaneously at maximum height.     !
-   !                                                                                       !
-   ! N.B. if you set IBIGLEAF to 1, you MUST turn off the crown model (CROWN_MOD = 0)      !
-   !---------------------------------------------------------------------------------------!
-   NL%IBIGLEAF = 0
-   !---------------------------------------------------------------------------------------!
-   ! INTEGRATION_SCHEME -- The biophysics integration scheme.                              !
-   !                       0.  Euler step.  The fastest, but it doesn't estimate           !
-   !                           errors.                                                     !
-   !                       1.  Fourth-order Runge-Kutta method.  ED-2.1 default method     !
-   !                       2.  Heun's method (a second-order Runge-Kutta).                 !
-   !                       3.  Hybrid Stepping (BDF2 implicit step for the canopy air and  !
-   !                           leaf temperature, forward Euler for everything else;        !
-   !                           under development).                                         !
-   !---------------------------------------------------------------------------------------!
-   NL%INTEGRATION_SCHEME = 1
-   !---------------------------------------------------------------------------------------!
-   ! RK4_TOLERANCE -- This is the relative tolerance for Runge-Kutta or Heun's             !
-   !                  integration.  Larger numbers will make runs go faster, at the        !
-   !                  expense of being less accurate.  Currently the valid range is        !
-   !                  between 1.e-7 and 1.e-1, but recommended values are between 1.e-4    !
-   !                  and 1.e-2.                                                           !
-   !---------------------------------------------------------------------------------------!
-   NL%RK4_TOLERANCE = 0.01
-   !---------------------------------------------------------------------------------------!
-   ! IBRANCH_THERMO -- This determines whether branches should be included in the          !
-   !                   vegetation thermodynamics and radiation or not.                     !
-   !                   0.  No branches in energy/radiation (ED-2.1 default);               !
-   !                   1.  Branches are accounted in the energy and radiation.  Branchwood !
-   !                       and leaf are treated separately in the canopy radiation scheme, !
-   !                       but solved as a single pool in the biophysics integration.      !
-   !                   2.  Similar to 1, but branches are treated as separate pools in the !
-   !                       biophysics (thus doubling the number of prognostic variables).  !
-   !---------------------------------------------------------------------------------------!
-   NL%IBRANCH_THERMO = 0
-   !---------------------------------------------------------------------------------------!
-   ! IPHYSIOL -- This variable will determine the functional form that will control how    !
-   !             the various parameters will vary with temperature, and how the CO2        !
-   !             compensation point for gross photosynthesis (Gamma*) will be found.       !
-   !             Options are:                                                              !
-   !                                                                                       !
-   ! 0 -- Original ED-2.1, we use the "Arrhenius" function as in Foley et al. (1996) and   !
-   !      Moorcroft et al. (2001).  Gamma* is found using the parameters for tau as in     !
-   !      Foley et al. (1996).                                                             !
-   ! 1 -- Modified ED-2.1.  In this case Gamma* is found using the Michaelis-Menten        !
-   !      coefficients for CO2 and O2, as in Farquhar et al. (1980) and in CLM.            !
-   ! 2 -- Collatz et al. (1991).  We use the power (Q10) equations, with Collatz et al.    !
-   !      parameters for compensation point, and the Michaelis-Menten coefficients.  The   !
-   !      corrections for high and low temperatures are the same as in Moorcroft et al.    !
-   !      (2001).                                                                          !
-   ! 3 -- Same as 2, except that we find Gamma* as in Farquhar et al. (1980) and in CLM.   !
-   !---------------------------------------------------------------------------------------!
-   NL%IPHYSIOL = 2
-   !---------------------------------------------------------------------------------------!
-   ! IALLOM -- Which allometry to use (this mostly affects tropical PFTs.  Temperate PFTs  !
-   !           will use the new root allometry and the maximum crown area if IALLOM is set !
-   !           to 1 or 2).                                                                 !
-   !           0.  Original ED-2.1                                                         !
-   !           1.  a.  The coefficients for structural biomass are set so the total AGB    !
-   !                   is similar to Baker et al. (2004), equation 2.  Balive is the       !
-   !                   default ED-2.1;                                                     !
-   !               b.  Experimental root depth that makes canopy trees have root depths    !
-   !                   of 5 m and grasses/seedlings at 0.5 have root depth of 0.5 m.       !
-   !               c.  Crown area defined as in Poorter et al. (2006), imposing maximum    !
-   !                   crown area                                                          !
-   !           2.  Similar to 1, but with a few extra changes.                             !
-   !               a.  Height -> DBH allometry as in Poorter et al. (2006)                 !
-   !               b.  Balive is retuned, using a few leaf biomass allometric equations    !
-   !                   for a few genera in Costa Rica.  References:                        !
-   !                   Cole and Ewel (2006), and Calvo Alvarado et al. (2008).             !
-   !---------------------------------------------------------------------------------------!
-   NL%IALLOM = 2
-   !---------------------------------------------------------------------------------------!
-   ! IGRASS -- This controls the dynamics and growth calculation for grasses.  A new       !
-   !           grass scheme is now available where bdead = 0, height is a function of      !
-   !           bleaf, and growth happens daily.  ALS (3/3/12)                              !
-   !           0:  grasses behave like trees as in ED2.1 (old scheme)                      !
-   !                                                                                       !
-   !           1:  new grass scheme as described above                                     !
-   !---------------------------------------------------------------------------------------!
-   NL%IGRASS = 0
-   !---------------------------------------------------------------------------------------!
-   ! IPHEN_SCHEME -- It controls the phenology scheme.  Even within each scheme, the       !
-   !                 actual phenology will be different depending on the PFT.              !
-   !                                                                                       !
-   ! -1:  grasses   - evergreen;                                                           !
-   !      tropical  - evergreen;                                                           !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous (Botta et al.);                                       !
-   !                                                                                       !
-   !  0:  grasses   - drought-deciduous (old scheme);                                      !
-   !      tropical  - drought-deciduous (old scheme);                                      !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous;                                                      !
-   !                                                                                       !
-   !  1:  prescribed phenology                                                             !
-   !                                                                                       !
-   !  2:  grasses   - drought-deciduous (new scheme);                                      !
-   !      tropical  - drought-deciduous (new scheme);                                      !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous;                                                      !
-   !                                                                                       !
-   !  3:  grasses   - drought-deciduous (new scheme);                                      !
tropical - drought-deciduous (light phenology); ! - ! conifers - evergreen; ! - ! hardwoods - cold-deciduous; ! - ! ! - ! Old scheme: plants shed their leaves once the instantaneous amount of available water ! - ! becomes less than a critical value. ! - ! New scheme: plants shed their leaves once a 10-day running average of available ! - ! water becomes less than a critical value. ! - !---------------------------------------------------------------------------------------! - NL%IPHEN_SCHEME = @PHENOL_SCHEME@ - !---------------------------------------------------------------------------------------! - ! Parameters that control the phenology response to radiation, used only when ! - ! IPHEN_SCHEME = 3. ! - ! ! - ! RADINT -- Intercept ! - ! RADSLP -- Slope. ! - !---------------------------------------------------------------------------------------! - NL%RADINT = -11.3868 - NL%RADSLP = 0.0824 - !---------------------------------------------------------------------------------------! - ! REPRO_SCHEME -- This controls plant reproduction and dispersal. ! - ! 0. Reproduction off. Useful for very short runs only. ! - ! 1. Original reproduction scheme. Seeds are exchanged between ! - ! patches belonging to the same site, but they can't go outside ! - ! their original site. ! - ! 2. Similar to 1, but seeds are exchanged between patches belonging ! - ! to the same polygon, even if they are in different sites. They ! - ! can't go outside their original polygon, though. This is the ! - ! same as option 1 if there is only one site per polygon. ! - ! 3. Similar to 2, but recruits will only be formed if their phenology ! - ! status would be "leaves fully flushed". This only matters for ! - ! drought deciduous plants. This option is for testing purposes ! - ! only, think 50 times before using it... ! - !---------------------------------------------------------------------------------------! - NL%REPRO_SCHEME = 0 - !---------------------------------------------------------------------------------------! - ! LAPSE_SCHEME -- This specifies the met lapse rate scheme: ! - ! 0. No lapse rates ! - ! 1. phenomenological, global ! - ! 2. phenomenological, local (not yet implemented) ! - ! 3. mechanistic (not yet implemented) ! - !---------------------------------------------------------------------------------------! - NL%LAPSE_SCHEME = 0 - !---------------------------------------------------------------------------------------! - ! CROWN_MOD -- Specifies how tree crowns are represented in the canopy radiation model, ! - ! and in the turbulence scheme depending on ICANTURB. ! - ! 0. ED1 default, crowns are evenly spread throughout the patch area, and ! - ! cohorts are stacked on the top of each other. ! - ! 1. Dietze (2008) model. Cohorts have a finite radius, and cohorts are ! - ! stacked on the top of each other. ! - !---------------------------------------------------------------------------------------! - NL%CROWN_MOD = 1 - !---------------------------------------------------------------------------------------! - ! The following variables control the canopy radiation solver. ! - ! ! - ! ICANRAD -- Specifies how canopy radiation is solved. This variable sets both ! - ! shortwave and longwave. ! - ! 0. Two-stream model (Medvigy 2006), with the possibility to apply ! - ! finite crown area to direct shortwave radiation. ! - ! 1. Multiple-scattering model (Zhao and Qualls 2005,2006), with the ! - ! possibility to apply finite crown area to all radiation fluxes. ! - !
LTRANS_VIS -- Leaf transmittance for tropical plants - Visible/PAR ! - ! LTRANS_NIR -- Leaf transmittance for tropical plants - Near Infrared ! - ! LREFLECT_VIS -- Leaf reflectance for tropical plants - Visible/PAR ! - ! LREFLECT_NIR -- Leaf reflectance for tropical plants - Near Infrared ! - ! ORIENT_TREE -- Leaf orientation factor for tropical trees. Extremes are: ! - ! -1. All leaves are oriented in the vertical ! - ! 0. Leaf orientation is perfectly random ! - ! 1. All leaves are oriented in the horizontal ! - ! In practice, acceptable values range from -0.4 to 0.6 ! - ! ORIENT_GRASS -- Leaf orientation factor for tropical grasses. Extremes are: ! - ! -1. All leaves are oriented in the vertical ! - ! 0. Leaf orientation is perfectly random ! - ! 1. All leaves are oriented in the horizontal ! - ! In practice, acceptable values range from -0.4 to 0.6 ! - ! CLUMP_TREE -- Clumping factor for tropical trees. Extremes are: ! - ! lim -> 0. Black hole (0 itself is unacceptable) ! - ! 1. Homogeneously spread over the layer (i.e., no clumping) ! - ! CLUMP_GRASS -- Clumping factor for tropical grasses. Extremes are: ! - ! lim -> 0. Black hole (0 itself is unacceptable) ! - ! 1. Homogeneously spread over the layer (i.e., no clumping) ! - !---------------------------------------------------------------------------------------! - NL%ICANRAD = 0 - NL%LTRANS_VIS = 0.050 - NL%LTRANS_NIR = 0.270 - NL%LREFLECT_VIS = 0.150 - NL%LREFLECT_NIR = 0.540 - NL%ORIENT_TREE = 0.100 - NL%ORIENT_GRASS = -0.100 - NL%CLUMP_TREE = 0.800 - NL%CLUMP_GRASS = 1.000 - !---------------------------------------------------------------------------------------! - ! DECOMP_SCHEME -- This specifies the dependence of soil decomposition on temperature. ! - ! 0. ED-2.0 default, the original exponential ! - ! 1. Lloyd and Taylor (1994) model ! - ! [[option 1 requires parameters to be set in xml]] ! - !---------------------------------------------------------------------------------------! - NL%DECOMP_SCHEME = 0 - !---------------------------------------------------------------------------------------! - ! H2O_PLANT_LIM -- this determines whether plant photosynthesis can be limited by ! - ! soil moisture, the FSW, defined as FSW = Supply / (Demand + Supply). ! - ! ! - ! Demand is always the transpiration rate in case soil moisture is ! - ! not limiting (the psi_0 term times LAI). The supply is determined ! - ! by Kw * nplant * Broot * Available_Water, and the definition of ! - ! available water changes depending on H2O_PLANT_LIM: ! - ! 0. Force FSW = 1 (effectively available water is infinity). ! - ! 1. Available water is the total soil water above wilting point, ! - ! integrated across all layers within the rooting zone. ! - ! 2. Available water is the soil water at field capacity minus ! - ! wilting point, scaled by the so-called wilting factor: ! - ! (psi(k) - (H - z(k)) - psi_wp) / (psi_fc - psi_wp) ! - ! where psi is the matric potential at layer k, z is the layer ! - ! depth, H is the crown height, and psi_fc and psi_wp are the ! - ! matric potentials at field capacity and wilting point. ! - !---------------------------------------------------------------------------------------! - NL%H2O_PLANT_LIM = 1 - !---------------------------------------------------------------------------------------! - ! IDDMORT_SCHEME -- This flag determines whether storage should be accounted for in the ! - ! carbon balance. ! - ! 0 -- Carbon balance is done in terms of fluxes only. This is the ! - ! default in ED-2.1 ! - !
1 -- Carbon balance is offset by the storage pool. Plants will be ! - ! in negative carbon balance only when they run out of storage ! - ! and are still losing more carbon than gaining. ! - ! ! - ! DDMORT_CONST -- This constant (k) determines the relative contribution of light ! - ! and soil moisture to the density-dependent mortality rate. Values ! - ! range from 0 (soil moisture only) to 1 (light only). ! - ! ! - ! mort1 ! - ! mu_DD = ------------------------- ! - ! 1 + exp [ mort2 * cr ] ! - ! ! - ! CB CB ! - ! cr = k ------------- + (1 - k) ------------- ! - ! CB_lightmax CB_watermax ! - !---------------------------------------------------------------------------------------! - NL%IDDMORT_SCHEME = 0 - NL%DDMORT_CONST = 0.8 - !---------------------------------------------------------------------------------------! - ! The following variables are factors that control photosynthesis and respiration. ! - ! Notice that some of them are relative values whereas others are absolute. ! - ! ! - ! VMFACT_C3 -- Factor multiplying the default Vm0 for C3 plants (1.0 = default). ! - ! VMFACT_C4 -- Factor multiplying the default Vm0 for C4 plants (1.0 = default). ! - ! MPHOTO_TRC3 -- Stomatal slope (M) for tropical C3 plants ! - ! MPHOTO_TEC3 -- Stomatal slope (M) for conifers and temperate C3 plants ! - ! MPHOTO_C4 -- Stomatal slope (M) for C4 plants. ! - ! BPHOTO_BLC3 -- cuticular conductance for broadleaf C3 plants [umol/m2/s] ! - ! BPHOTO_NLC3 -- cuticular conductance for needleleaf C3 plants [umol/m2/s] ! - ! BPHOTO_C4 -- cuticular conductance for C4 plants [umol/m2/s] ! - ! KW_GRASS -- Water conductance for grasses, in m2/yr/kgC_root. This is used only ! - ! when H2O_PLANT_LIM is not 0. ! - ! KW_TREE -- Water conductance for trees, in m2/yr/kgC_root. This is used only ! - ! when H2O_PLANT_LIM is not 0. ! - ! GAMMA_C3 -- The dark respiration factor (gamma) for C3 plants. Subtropical ! - ! conifers will be scaled by GAMMA_C3 * 0.028 / 0.02 ! - ! GAMMA_C4 -- The dark respiration factor (gamma) for C4 plants. ! - ! D0_GRASS -- The transpiration control in gsw (D0) for ALL grasses. ! - ! D0_TREE -- The transpiration control in gsw (D0) for ALL trees. ! - ! ALPHA_C3 -- Quantum yield of ALL C3 plants. This is only applied when ! - ! QUANTUM_EFFICIENCY_T = 0. ! - ! ALPHA_C4 -- Quantum yield of C4 plants. This is always applied. ! - ! KLOWCO2IN -- The coefficient that controls the PEP carboxylase limited rate of ! - ! carboxylation for C4 plants. ! - ! RRFFACT -- Factor multiplying the root respiration factor for ALL PFTs. ! - ! (1.0 = default). ! - ! GROWTHRESP -- The actual growth respiration factor (C3/C4 tropical PFTs only). ! - ! (1.0 = default). ! - ! LWIDTH_GRASS -- Leaf width for grasses, in metres. This controls the leaf boundary ! - ! layer conductance (gbh and gbw). ! - ! LWIDTH_BLTREE -- Leaf width for trees, in metres. This controls the leaf boundary ! - ! layer conductance (gbh and gbw). This is applied to broadleaf trees ! - ! only. ! - ! LWIDTH_NLTREE -- Leaf width for trees, in metres. This controls the leaf boundary ! - ! layer conductance (gbh and gbw). This is applied to conifer trees ! - ! only. ! - ! Q10_C3 -- Q10 factor for C3 plants (used only if IPHYSIOL is set to 2 or 3). ! - ! Q10_C4 -- Q10 factor for C4 plants (used only if IPHYSIOL is set to 2 or 3). ! - !---------------------------------------------------------------------------------------!
- NL%VMFACT_C3 = 1.00 - NL%VMFACT_C4 = 1.00 - NL%MPHOTO_TRC3 = 9.0 - NL%MPHOTO_TEC3 = 7.2 - NL%MPHOTO_C4 = 5.2 - NL%BPHOTO_BLC3 = 10000. - NL%BPHOTO_NLC3 = 1000. - NL%BPHOTO_C4 = 10000. - NL%KW_GRASS = 900. - NL%KW_TREE = 600. - NL%GAMMA_C3 = 0.015 - NL%GAMMA_C4 = 0.040 - NL%D0_GRASS = 0.016 - NL%D0_TREE = 0.016 - NL%ALPHA_C3 = 0.080 - NL%ALPHA_C4 = 0.055 - NL%KLOWCO2IN = 4000. - NL%RRFFACT = 1.000 - NL%GROWTHRESP = 0.333 - NL%LWIDTH_GRASS = 0.05 - NL%LWIDTH_BLTREE = 0.10 - NL%LWIDTH_NLTREE = 0.05 - NL%Q10_C3 = 2.4 - NL%Q10_C4 = 2.4 - !---------------------------------------------------------------------------------------! - ! THETACRIT -- Leaf drought phenology threshold. The sign matters here: ! - ! >= 0. -- This is the relative soil moisture above the wilting point ! - ! below which the drought-deciduous plants will start shedding ! - ! their leaves ! - ! < 0. -- This is the soil potential in MPa below which the drought- ! - ! -deciduous plants will start shedding their leaves. The wilt- ! - ! ing point is by definition -1.5 MPa, so make sure that the value ! - ! is above -1.5. ! - !---------------------------------------------------------------------------------------! - NL%THETACRIT = -1.15 - !---------------------------------------------------------------------------------------! - ! QUANTUM_EFFICIENCY_T -- Which quantum yield model should be used for C3 plants ! - ! 0. Original ED-2.1, quantum efficiency is constant. ! - ! 1. Quantum efficiency varies with temperature following ! - ! Ehleringer (1978) polynomial fit. ! - !---------------------------------------------------------------------------------------! - NL%QUANTUM_EFFICIENCY_T = 0 - !---------------------------------------------------------------------------------------! - ! N_PLANT_LIM -- This controls whether plant photosynthesis can be limited by nitrogen. ! - ! 0. No limitation ! - ! 1. ED-2.1 nitrogen limitation model. ! - !---------------------------------------------------------------------------------------! - NL%N_PLANT_LIM = 0 - !---------------------------------------------------------------------------------------! - ! N_DECOMP_LIM -- This controls whether decomposition can be limited by nitrogen. ! - ! 0. No limitation ! - ! 1. ED-2.1 nitrogen limitation model. ! - !---------------------------------------------------------------------------------------! - NL%N_DECOMP_LIM = 0 - !---------------------------------------------------------------------------------------! - ! The following parameters adjust the fire disturbance in the model. ! - ! INCLUDE_FIRE -- Which threshold to use for fires. ! - ! 0. No fires; ! - ! 1. (deprecated) Fire will be triggered with enough biomass and ! - ! integrated ground water depth less than a threshold. Based on ! - ! ED-1, the threshold assumes that the soil is 1 m, so deeper ! - ! soils will need to be much drier to allow fires to happen and ! - ! often will never allow fires. ! - ! 2. Fire will be triggered with enough biomass and when the total soil ! - ! water at the top 75 cm falls below a threshold. ! - ! FIRE_PARAMETER -- If fire happens, this will control the intensity of the disturbance ! - ! given the amount of fuel (currently the total above-ground ! - ! biomass). ! - ! SM_FIRE -- This is used only when INCLUDE_FIRE = 2. The sign here matters. ! - ! >= 0. - Minimum relative soil moisture above dry air of the top 1m ! - ! that will prevent fires from happening. ! - ! < 0. - Minimum mean soil moisture potential in MPa of the top 1m ! - ! that will prevent fires from happening.
The dry air soil ! - ! potential is defined as -3.1 MPa, so make sure SM_FIRE is ! - ! greater than this value. ! - !---------------------------------------------------------------------------------------! - NL%INCLUDE_FIRE = 0 ! default is 2 - NL%FIRE_PARAMETER = 0.2 - NL%SM_FIRE = -1.45 - !---------------------------------------------------------------------------------------! - ! IANTH_DISTURB -- This flag controls whether to include anthropogenic disturbances ! - ! such as land clearing, abandonment, and logging. ! - ! 0. no anthropogenic disturbance. ! - ! 1. use anthropogenic disturbance dataset. ! - !---------------------------------------------------------------------------------------! - NL%IANTH_DISTURB = 0 - !---------------------------------------------------------------------------------------! - ! ICANTURB -- This flag controls the canopy roughness. ! - ! 0. Based on Leuning et al. (1995), wind is computed using the similarity ! - ! theory for the top cohort, and it is extinguished with cumulative ! - ! LAI. If using CROWN_MOD 1 or 2, this will use local LAI and average ! - ! by crown area. ! - ! 1. The default ED-2.1 scheme, except that it uses the zero-plane ! - ! displacement height. ! - ! 2. This uses the method of Massman (1997) using constant drag and no ! - ! sheltering factor. ! - ! 3. This is also based on Massman (1997), but with the option of varying ! - ! the drag and sheltering within the canopy. ! - ! 4. Same as 0, but it finds the ground conductance following the CLM ! - ! technical note (equations 5.98-5.100). ! - !---------------------------------------------------------------------------------------! - NL%ICANTURB = 1 - !---------------------------------------------------------------------------------------! - ! ISFCLYRM -- Similarity theory model. The model that computes u*, T*, etc... ! - ! 1. BRAMS default, based on Louis (1979). It uses empirical relations to ! - ! estimate the flux based on the bulk Richardson number ! - ! ! - ! All models below use an iterative method to find z/L, and the only change ! - ! is the functional form of the psi functions. ! - ! ! - ! 2. Oncley and Dudhia (1995) model, based on MM5. ! - ! 3. Beljaars and Holtslag (1991) model. Similar to 2, but it uses an alternative ! - ! method for the stable case that mixes more than the OD95. ! - ! 4. CLM (2004). Similar to 2 and 3, but they have special functions to deal with ! - ! very stable and very unstable cases. ! - !---------------------------------------------------------------------------------------! - NL%ISFCLYRM = 4 ! 3 set by default - !---------------------------------------------------------------------------------------! - ! IED_GRNDVAP -- Methods to find the ground -> canopy conductance. ! - ! 0. Modified Lee Pielke (1992), adding field capacity, but using beta factor ! - ! without the square, like in Noilhan and Planton (1989). This is the closest ! - ! to the original ED-2.0 and LEAF-3, and it is also the recommended one. ! - ! 1. Test # 1 of Mahfouf and Noilhan (1991) ! - ! 2. Test # 2 of Mahfouf and Noilhan (1991) ! - ! 3. Test # 3 of Mahfouf and Noilhan (1991) ! - ! 4. Test # 4 of Mahfouf and Noilhan (1991) ! - ! 5. Combination of test #1 (alpha) and test #2 (soil resistance). ! - ! In all cases the beta term is modified so it approaches zero as soil moisture goes ! - ! to dry air soil. ! - !---------------------------------------------------------------------------------------!
- NL%IED_GRNDVAP = 0 - !---------------------------------------------------------------------------------------! - ! The following variables are used to control the similarity theory model. For the ! - ! meaning of these parameters, check Beljaars and Holtslag (1991). ! - ! GAMM -- gamma coefficient for momentum, unstable case (dimensionless) ! - ! Ignored when ISTAR = 1 ! - ! GAMH -- gamma coefficient for heat, unstable case (dimensionless) ! - ! Ignored when ISTAR = 1 ! - ! TPRANDTL -- Turbulent Prandtl number ! - ! Ignored when ISTAR = 1 ! - ! RIBMAX -- maximum bulk Richardson number. ! - ! LEAF_MAXWHC -- Maximum water that can be intercepted by leaves, in kg/m2leaf. ! - !---------------------------------------------------------------------------------------! - NL%GAMM = 13.0 - NL%GAMH = 13.0 - NL%TPRANDTL = 0.74 - NL%RIBMAX = 0.50 - NL%LEAF_MAXWHC = 0.11 - !---------------------------------------------------------------------------------------! - ! IPERCOL -- This controls percolation and infiltration. ! - ! 0. Default method. Assumes soil conductivity constant and for the ! - ! temporary surface water, it sheds liquid in excess of a 1:9 liquid- ! - ! -to-ice ratio through percolation. Temporary surface water exists ! - ! only if the top soil layer is at saturation. ! - ! 1. Constant soil conductivity, and it uses the percolation model as in ! - ! Anderson (1976) NOAA technical report NWS 19. Temporary surface ! - ! water may exist after a heavy rain event, even if the soil doesn't ! - ! saturate. Recommended value. ! - ! 2. Soil conductivity decreases with depth even for constant soil moisture ! - ! , otherwise it is the same as 1. ! - !---------------------------------------------------------------------------------------! - NL%IPERCOL = 1 - !---------------------------------------------------------------------------------------! - ! The following variables control the plant functional types (PFTs) that will be ! - ! used in this simulation. ! - ! ! - ! INCLUDE_THESE_PFT -- a list containing all the PFTs you want to include in this run ! - ! AGRI_STOCK -- which PFT should be used for agriculture ! - ! (used only when IANTH_DISTURB = 1) ! - ! PLANTATION_STOCK -- which PFT should be used for plantation ! - ! (used only when IANTH_DISTURB = 1) ! - ! ! - ! PFT table ! - !---------------------------------------------------------------------------------------! - ! 1 - C4 grass | 9 - early temperate deciduous ! - ! 2 - early tropical | 10 - mid temperate deciduous ! - ! 3 - mid tropical | 11 - late temperate deciduous ! - ! 4 - late tropical | 12:15 - agricultural PFTs ! - ! 5 - temperate C3 grass | 16 - Subtropical C3 grass ! - ! 6 - northern pines | (C4 grass with C3 photo). ! - ! 7 - southern pines | 17 - "Araucaria" (non-optimised ! - ! 8 - late conifers | Southern Pines). ! - !---------------------------------------------------------------------------------------! - NL%INCLUDE_THESE_PFT = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 ! List of PFTs to be included - NL%AGRI_STOCK = 5 ! Agriculture PFT (used only if ianth_disturb=1) - NL%PLANTATION_STOCK = 6 ! Plantation PFT (used only if ianth_disturb=1) - !---------------------------------------------------------------------------------------! - ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed ! - ! in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0) ! - ! 0. Stop the run ! - ! 1. Add the PFT in the INCLUDE_THESE_PFT list ! - ! 2. Ignore the cohort ! 
- !---------------------------------------------------------------------------------------! - NL%PFT_1ST_CHECK = 0 - !---------------------------------------------------------------------------------------! - ! The following variables control the size of sub-polygon structures in ED-2. ! - ! MAXSITE -- This is the strict maximum number of sites that each polygon can ! - ! contain. Currently this is used only when the user wants to run ! - ! the same polygon with multiple soil types. If there aren't that ! - ! many different soil types with a minimum area (check MIN_SITE_AREA ! - ! below), then the model will allocate just the amount needed. ! - ! MAXPATCH -- If the number of patches in a given site exceeds MAXPATCH, force patch ! - ! fusion. If MAXPATCH is 0, then fusion will never happen. If ! - ! MAXPATCH is negative, then the absolute value is used only during ! - ! the initialization, and fusion will never happen again. Notice ! - ! that if the patches are too different, then the actual number of ! - ! patches in a site may exceed MAXPATCH. ! - ! MAXCOHORT -- If the number of cohorts in a given patch exceeds MAXCOHORT, force ! - ! cohort fusion. If MAXCOHORT is 0, then fusion will never happen. ! - ! If MAXCOHORT is negative, then the absolute value is used only ! - ! during the initialization, and fusion will never happen again. ! - ! Notice that if the cohorts are too different, then the actual ! - ! number of cohorts in a patch may exceed MAXCOHORT. ! - ! MIN_SITE_AREA -- This is the minimum fraction area of a given soil type that allows ! - ! a site to be created (ignored if IED_INIT_MODE is set to 3). ! - ! MIN_PATCH_AREA -- This is the minimum fraction area of a given soil type that allows ! - ! a patch to be created (ignored if IED_INIT_MODE is set to 3). ! - !---------------------------------------------------------------------------------------! - NL%MAXSITE = 6 - NL%MAXPATCH = 30 - NL%MAXCOHORT = 60 - NL%MIN_SITE_AREA = 0.005 - NL%MIN_PATCH_AREA = 0.005 - !---------------------------------------------------------------------------------------! - ! ZROUGH -- constant roughness, in metres, applied to the whole domain ! - !---------------------------------------------------------------------------------------! - NL%ZROUGH = 0.1 - !---------------------------------------------------------------------------------------! - ! Treefall disturbance parameters. ! - ! TREEFALL_DISTURBANCE_RATE -- Sign-dependent treefall disturbance rate: ! - ! > 0. usual disturbance rate, in 1/years; ! - ! = 0. No treefall disturbance; ! - ! < 0. Treefall will be added as a mortality rate (it ! - ! will kill plants, but it won't create a new patch). ! - ! TIME2CANOPY -- Minimum patch age for treefall disturbance to happen. ! - ! If TREEFALL_DISTURBANCE_RATE = 0., this value will be ! - ! ignored. If this value is different from zero, then ! - ! TREEFALL_DISTURBANCE_RATE is internally adjusted so the ! - ! average patch age is still 1/TREEFALL_DISTURBANCE_RATE ! - !---------------------------------------------------------------------------------------! - NL%TREEFALL_DISTURBANCE_RATE = 0.0 !0.014 - NL%TIME2CANOPY = 0.0 - !---------------------------------------------------------------------------------------! - ! RUNOFF_TIME -- In case a temporary surface water (TSW) is created, this is the "e- ! - ! -folding lifetime" of the TSW in seconds due to runoff. If you don't ! - ! want runoff to happen, set this to 0. ! - !---------------------------------------------------------------------------------------!
- NL%RUNOFF_TIME = 86400.0 - !---------------------------------------------------------------------------------------! - ! The following variables control the minimum values of various velocities in the ! - ! canopy. This is needed to avoid the air being extremely still, or to avoid singular- ! - ! ities. When defining the values, keep in mind that UBMIN >= UGBMIN >= USTMIN. ! - ! ! - ! UBMIN -- minimum wind speed at the top of the canopy air space [ m/s] ! - ! UGBMIN -- minimum wind speed at the leaf level [ m/s] ! - ! USTMIN -- minimum friction velocity, u* [ m/s] ! - !---------------------------------------------------------------------------------------! - NL%UBMIN = 0.65 - NL%UGBMIN = 0.25 - NL%USTMIN = 0.05 - !---------------------------------------------------------------------------------------! - ! Control parameters for printing to standard output. Any variable can be printed ! - ! to standard output as long as it is one dimensional. Polygon variables have been ! - ! tested, no guarantees for other hierarchical levels. Choose any variables that are ! - ! defined in the variable table fill routine in ed_state_vars.f90. Choose the start ! - ! and end index of the polygon, site, patch, or cohort. It should work in parallel. The ! - ! indices are global indices of the entire domain. They are printed out in rows of 10 ! - ! columns each. ! - ! ! - ! IPRINTPOLYS -- 0. Do not print information to screen ! - ! 1. Print polygon arrays to screen, use variables described below to ! - ! determine which ones and how ! - ! NPVARS -- Number of variables to be printed ! - ! PRINTVARS -- List of variables to be printed ! - ! PFMTSTR -- The standard fortran format for the prints. One format per variable ! - ! IPMIN -- First polygon (absolute index) to be printed ! - ! IPMAX -- Last polygon (absolute index) to print ! - !---------------------------------------------------------------------------------------! - NL%IPRINTPOLYS = 0 - NL%NPVARS = 1 - NL%PRINTVARS = 'AVG_PCPG','AVG_CAN_TEMP','AVG_VAPOR_AC','AVG_CAN_SHV' - NL%PFMTSTR = 'f10.8','f5.1','f7.2','f9.5' - NL%IPMIN = 1 - NL%IPMAX = 60 - !---------------------------------------------------------------------------------------! - ! Variables that control the meteorological forcing. ! - ! ! - ! IMETTYPE -- Format of the meteorological dataset ! - ! 0. ASCII (deprecated) ! - ! 1. HDF5 ! - ! ISHUFFLE -- How to choose a year outside the meteorological data range (see ! - ! METCYC1 and METCYCF). ! - ! 0. Sequentially cycle over years ! - ! 1. Randomly pick the years, using the same sequence. This has worked ! - ! with gfortran running on Mac OS X, but it acts like option 2 ! - ! when running ifort. ! - ! 2. Randomly pick the years, choosing a different sequence each time ! - ! the model is run. ! - ! METCYC1 -- First year with meteorological information ! - ! METCYCF -- Last year with meteorological information ! - ! IMETAVG -- How the input radiation was originally averaged. You must tell this ! - ! because ED-2.1 can make an interpolation accounting for the cosine of ! - ! zenith angle. ! - ! -1. I don't know, use linear interpolation. ! - ! 0. No average, the values are instantaneous ! - ! 1. Averages ending at the reference time ! - ! 2. Averages beginning at the reference time ! - ! 3. Averages centred at the reference time ! - ! IMETRAD -- What should the model do with the input short wave radiation? ! - ! 0. Nothing, use it as is. ! - ! 1. Add them together, then use the SiB method to break radiation down ! - !
into the four components (PAR direct, PAR diffuse, NIR direct, ! - ! NIR diffuse). ! - ! 2. Add them together, then use the method by Weiss and Norman (1985) ! - ! to break radiation down to the four components. ! - ! 3. Gloomy -- All radiation goes to diffuse. ! - ! 4. Sesame street -- all radiation goes to direct, except at night. ! - ! INITIAL_CO2 -- Initial value for CO2 in case no CO2 is provided in the meteorological ! - ! driver dataset [Units: µmol/mol] ! - !---------------------------------------------------------------------------------------! - NL%IMETTYPE = 1 ! 0 = ASCII, 1 = HDF5 - NL%ISHUFFLE = 2 ! 2. Randomly pick recycled years - NL%METCYC1 = @MET_START@ ! First year of met data - NL%METCYCF = @MET_END@ ! Last year of met data - NL%IMETAVG = @MET_SOURCE@ - NL%IMETRAD = 0 - NL%INITIAL_CO2 = 370.0 ! Initial value for CO2 in case no CO2 is provided in the - ! meteorological driver dataset - !---------------------------------------------------------------------------------------! - ! The following variables control the phenology prescribed from observations: ! - ! ! - ! IPHENYS1 -- First year for spring phenology ! - ! IPHENYSF -- Final year for spring phenology ! - ! IPHENYF1 -- First year for fall/autumn phenology ! - ! IPHENYFF -- Final year for fall/autumn phenology ! - ! PHENPATH -- path and prefix of the prescribed phenology data. ! - ! ! - ! If the years don't cover the entire simulation period, they will be recycled. ! - !---------------------------------------------------------------------------------------! - NL%IPHENYS1 = @PHENOL_START@ - NL%IPHENYSF = @PHENOL_END@ - NL%IPHENYF1 = @PHENOL_START@ - NL%IPHENYFF = @PHENOL_END@ - NL%PHENPATH = '@PHENOL@' - !---------------------------------------------------------------------------------------! - ! These are some additional configuration files. ! - ! IEDCNFGF -- XML file containing additional parameter settings. If you don't have ! - ! one, leave it empty ! - ! EVENT_FILE -- file containing specific events that must be incorporated into the ! - ! simulation. ! - ! PHENPATH -- path and prefix of the prescribed phenology data. ! - !---------------------------------------------------------------------------------------! - NL%IEDCNFGF = '@CONFIGFILE@' - NL%EVENT_FILE = 'myevents.xml' - !---------------------------------------------------------------------------------------! - ! Census variables. This is going to create unique census statuses for cohorts, to ! - ! better compare the model with census observations. In case you don't intend to ! - ! compare the model with census data, set DT_CENSUS to 1., otherwise you may reduce ! - ! cohort fusion. ! - ! DT_CENSUS -- Time between censuses, in months. Currently the maximum is 60 ! - ! months, to avoid excessive memory allocation. Every time the ! - ! simulation reaches the census time step, all census tags will be ! - ! reset. ! - ! YR1ST_CENSUS -- In which year was the first census conducted? ! - ! MON1ST_CENSUS -- In which month was the first census conducted? ! - ! MIN_RECRUIT_DBH -- Minimum DBH that is measured in the census, in cm. ! - !---------------------------------------------------------------------------------------! - NL%DT_CENSUS = 1 - NL%YR1ST_CENSUS = 1901 - NL%MON1ST_CENSUS = 7 - NL%MIN_RECRUIT_DBH = 10 - !---------------------------------------------------------------------------------------! - ! The following variables are used to control the detailed output for debugging ! - ! purposes. ! - ! ! - !
IDETAILED -- This flag controls the possible detailed outputs, mostly used for ! - ! debugging purposes. Notice that this doesn't replace the normal debug- ! - ! ger options, the idea is to provide detailed output to check bad ! - ! assumptions. The options are additive, and the indices below represent ! - ! the different types of output: ! - ! ! - ! 1 -- Detailed budget (every DTLSM) ! - ! 2 -- Detailed photosynthesis (every DTLSM) ! - ! 4 -- Detailed output from the integrator (every HDID) ! - ! 8 -- Thermodynamic bounds for sanity check (every DTLSM) ! - ! 16 -- Daily error stats (which variable caused the time step to shrink) ! - ! 32 -- Allometry parameters, and minimum and maximum sizes ! - ! (two files, only at the beginning) ! - ! ! - ! In case you don't want any detailed output (likely for most runs), set ! - ! IDETAILED to zero. In case you want to generate multiple outputs, add ! - ! the number of the sought options: for example, if you want detailed ! - ! photosynthesis and detailed output from the integrator, set IDETAILED ! - ! to 6 (2 + 4). Any combination of the above outputs is acceptable, al- ! - ! though all but the last produce a sheer amount of txt files, in which ! - ! case you may want to look at variable PATCH_KEEP. It is also a good ! - ! idea to set IVEGT_DYNAMICS to 0 when using the first five outputs. ! - ! ! - ! ! - ! PATCH_KEEP -- This option will eliminate all patches except one from the initial- ! - ! isation. This is only used when one of the first five types of ! - ! detailed output is active, otherwise it will be ignored. Options are: ! - ! -2. Keep only the patch with the lowest potential LAI ! - ! -1. Keep only the patch with the highest potential LAI ! - ! 0. Keep all patches. ! - ! > 0. Keep the patch with the provided index. In case the index is ! - ! not valid, the model will crash. ! - !---------------------------------------------------------------------------------------! - NL%IDETAILED = 0 - NL%PATCH_KEEP = 0 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! IOPTINPT -- Optimization configuration. (Currently not used) ! - !---------------------------------------------------------------------------------------! - NL%IOPTINPT = '' - !---------------------------------------------------------------------------------------! -$END -!==========================================================================================! -!==========================================================================================! From b9ad6beaea69f5cf0be761230f7f055cf4f39162 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 08:57:39 -0700 Subject: [PATCH 0945/2289] Update CHANGELOG.md add changes re: ED2IN to changelog --- CHANGELOG.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index e65efb13960..f5ebb8b6ae4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -21,7 +21,6 @@ For more information about this file see also [Keep a Changelog](http://keepacha - When building sipnet model would not set correct model version - Update pecan/depends docker image to have latest Roxygen and devtools. 
- Update ED docker build, will now build version 2.2.0 and git -- Remove ED2IN.git and add new versioned ED2IN template: ED2INv2.2.0 (#2143) ### Changed - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). @@ -37,6 +36,8 @@ For more information about this file see also [Keep a Changelog](http://keepacha - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511). ### Added + +- New versioned ED2IN template: ED2INv2.2.0 (#2143) - model_info.json and Dockerfile to template (#2567) - Dockerize BASGRA_N model. - Basic coupling for models BASGRA_N and STICS. @@ -51,6 +52,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563). - Scripts `dump.pgsql.sh` and `dump.mysql.sh` have been deleted. See the ["BeTY database administration"](https://pecanproject.github.io/pecan-documentation/develop/database.html) chapter of the PEcAn documentation for current recommendations (#2563). - Old dependency management scripts `check.dependencies.sh`, `update.dependencies.sh`, and `install_deps.R` have been deleted. Use `generate_dependencies.R` and the automatic dependency handling built into `make install` instead (#2563). +- Removed ED2IN.git (#2599) 'definitely going to break things for people' - but they can still use PEcAn <=1.7.1 ## [1.7.1] - 2018-09-12 From 088bfd11d50e0218d9a2684ff7e7c9d556a04001 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 08:58:45 -0700 Subject: [PATCH 0946/2289] Rename ED2INv2.2.0 to ED2IN.2.2.0 per convention, as requested in https://github.com/PecanProject/pecan/pull/2598#pullrequestreview-406275757 --- models/ed/inst/{ED2INv2.2.0 => ED2IN.2.2.0} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename models/ed/inst/{ED2INv2.2.0 => ED2IN.2.2.0} (100%) diff --git a/models/ed/inst/ED2INv2.2.0 b/models/ed/inst/ED2IN.2.2.0 similarity index 100% rename from models/ed/inst/ED2INv2.2.0 rename to models/ed/inst/ED2IN.2.2.0 From 07138ae5552f72a81e43f5752579425bc1d69780 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 08:59:45 -0700 Subject: [PATCH 0947/2289] Update CHANGELOG.md --- CHANGELOG.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index f5ebb8b6ae4..886852981db 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -23,6 +23,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Update ED docker build, will now build version 2.2.0 and git ### Changed + - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083). - `PEcAn.DB::insert_table` now uses `DBI::dbAppendTable` internally instead of manually constructed SQL (#2552). 
@@ -37,7 +38,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Added -- New versioned ED2IN template: ED2INv2.2.0 (#2143) +- New versioned ED2IN template: ED2IN.2.2.0 (#2143) (replaces ED2IN.git) - model_info.json and Dockerfile to template (#2567) - Dockerize BASGRA_N model. - Basic coupling for models BASGRA_N and STICS. @@ -49,10 +50,12 @@ For more information about this file see also [Keep a Changelog](http://keepacha - New shiny application to show database synchronization status (shiny/dbsync) ### Removed + +- Removed ED2IN.git (#2599) 'definitely going to break things for people' - but they can still use PEcAn <=1.7.1 - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563). - Scripts `dump.pgsql.sh` and `dump.mysql.sh` have been deleted. See the ["BeTY database administration"](https://pecanproject.github.io/pecan-documentation/develop/database.html) chapter of the PEcAn documentation for current recommendations (#2563). - Old dependency management scripts `check.dependencies.sh`, `update.dependencies.sh`, and `install_deps.R` have been deleted. Use `generate_dependencies.R` and the automatic dependency handling built into `make install` instead (#2563). + ## [1.7.1] - 2018-09-12 From d52018ef8e6256f7a828f9c0aeba3a248b1fbb38 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 09:01:10 -0700 Subject: [PATCH 0948/2289] Remove Xubuntu desktop installation instructions (#2600) * Remove Xubuntu desktop installation instructions just not necessary! also simplified port forwarding examples * change gitter-->slack --- .../03_topical_pages/01_advanced_vm.Rmd | 50 ++++--------------- .../tutorials/01_Demo_Basic_Run/Demo01.Rmd | 2 +- 2 files changed, 10 insertions(+), 42 deletions(-) diff --git a/book_source/03_topical_pages/01_advanced_vm.Rmd b/book_source/03_topical_pages/01_advanced_vm.Rmd index 76ea34e405b..d1890bfcf37 100644 --- a/book_source/03_topical_pages/01_advanced_vm.Rmd +++ b/book_source/03_topical_pages/01_advanced_vm.Rmd @@ -28,22 +28,22 @@ Host pecan-vm This will allow you to SSH into the VM with the simplified command, `ssh pecan-vm`. -## Connecting to bety on the VM via SSh {#ssh-vm-bety} +## Connecting to BETYdb on the VM via SSH {#ssh-vm-bety} -Sometimes, you may want to develop code locally but connect to an instance of Bety on the VM. -To do this, first open a new terminal and connect to the VM while enabling port forwarding (with the `-L` flag) and setting XXXX to any available port (more or less any 4 digit number -- a reasonable choice is 3333). +Sometimes, you may want to develop code locally but connect to an instance of BETYdb on the VM. +To do this, first open a new terminal and connect to the VM while enabling port forwarding (with the `-L` flag) and setting the port number. Using 5433 avoids the postgres default port of 5432, so the forwarded port will not conflict with a postgres database server running locally. ``` -ssh -L XXXX:localhost:5432 carya@localhost:6422 +
+This means that connecting to `localhost:5433` will give you access to BETYdb on the VM. To test this on the command line, try the following command, which, if successful, will drop you into the `psql` console. ``` -psql -d bety -U bety -h localhost -p XXXX +psql -d bety -U bety -h localhost -p 5433 ``` To test this in R, open a Postgres using the analogous parameters: @@ -56,12 +56,12 @@ con <- dbConnect( password = "bety", dbname = "bety", host = "localhost", - port = XXXX + port = 5433 ) dbListTables(con) # This should return a vector of bety tables ``` -Note that the same general approach will work on any Bety server where port forwarding is enabled. +Note that the same general approach will work on any BETYdb server where port forwarding is enabled, but it requires ssh access. ### Using Amazon Web Services for a VM (AWS) {#awsvm} @@ -231,35 +231,3 @@ Once all done, stop the virtual machine ```bash history -c && ${HOME}/cleanvm.sh ``` - -### VM Desktop Conversion {#vm-dektop-conversion} - -```bash -sudo apt-get update -sudo apt-get install xfce4 xorg -``` - -For a more refined desktop environment, try - -```bash -sudo apt-get install --no-install-recommends xubuntu-desktop -``` -* replace `xubuntu-` with `ubuntu-`, `lubuntu-`, or other preferred desktop enviornment -* the `--no-install-recommends` eliminates additional applications, removing it will add a word processor, a browser, and lots of other applications included in the default operating system. - -Reinstall Virtual Box additions for better integration adding X/mouse support - -```bash -sudo mount /dev/cdrom /mnt -sudo /mnt/VBoxLinuxAdditions.run -sudo umount /mnt -``` - -### Install RStudio Desktop {#install-rstudio} - -```bash -wget http://download1.rstudio.org/rstudio-0.97.551-amd64.deb -apt-get install libjpeg621 -dpkg -i rstudio-* -rm rstudio-* -``` diff --git a/documentation/tutorials/01_Demo_Basic_Run/Demo01.Rmd b/documentation/tutorials/01_Demo_Basic_Run/Demo01.Rmd index 93aa11f00fd..07f33287306 100644 --- a/documentation/tutorials/01_Demo_Basic_Run/Demo01.Rmd +++ b/documentation/tutorials/01_Demo_Basic_Run/Demo01.Rmd @@ -147,7 +147,7 @@ If enabled post-process output for these analyses If at any point a Stage Name has the **Status “ERROR”** please notify the PEcAn team member that is administering the demo or feel free to do any of the following: * Refer to the PEcAn Documentation for documentation -* Post the end of your workflow log on our Gitter chat +* Post the end of your workflow log on our Slack Channel chat * Post an issue on Github. The entire PEcAn team welcomes any questions you may have! 
From cd8d750bfa227fb365c97d67ae835cf862e76948 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 6 May 2020 11:03:14 -0500 Subject: [PATCH 0949/2289] remove git build of ED --- docker.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker.sh b/docker.sh index 3f05e57c524..8e0e5699c85 100755 --- a/docker.sh +++ b/docker.sh @@ -188,7 +188,7 @@ for version in 0.95; do done # build ed2 -for version in git 2.2.0; do +for version in 2.2.0; do ${DEBUG} docker build \ --tag pecan/model-ed2-${version}:${IMAGE_VERSION} \ --build-arg MODEL_VERSION="${version}" \ From d52d2056568e6ebc411ea28368b0c48cf6531434 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 6 May 2020 21:20:10 +0200 Subject: [PATCH 0950/2289] == is a bashism --- docker/depends/Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index 2e419aab5ff..fc309ad12d7 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -10,7 +10,7 @@ MAINTAINER Rob Kooper # UPDATE GIT # This is needed for stretch and github actions # ---------------------------------------------------------------------- -RUN if [ "$(lsb_release -s -c)" == "stretch" ]; then \ +RUN if [ "$(lsb_release -s -c)" = "stretch" ]; then \ echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list \ && apt-get update \ && apt-get -t stretch-backports upgrade -y git \ From 732ab71a6eab45cbbfe5751b0b2f129945d21ed2 Mon Sep 17 00:00:00 2001 From: Ken Youens-Clark Date: Wed, 6 May 2020 13:02:43 -0700 Subject: [PATCH 0951/2289] version which might one day be compatible with ED 2.2.0 --- models/ed/inst/ED2IN.2.2.0 | 52 +++++++++++++++++++------------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/models/ed/inst/ED2IN.2.2.0 b/models/ed/inst/ED2IN.2.2.0 index 815b85b2ac7..14e29ff6d56 100644 --- a/models/ed/inst/ED2IN.2.2.0 +++ b/models/ed/inst/ED2IN.2.2.0 @@ -54,7 +54,7 @@ $ED_NL !----- Simulation title (64 characters). -----------------------------------------------! - NL%EXPNME = 'ED version 2.2 PEcAn @ENSNAME@' + NL%EXPNME = 'ED2 vGITHUB PEcAn @ENSNAME@' !---------------------------------------------------------------------------------------! @@ -84,8 +84,8 @@ $ED_NL ! Start of simulation. Information must be given in UTC time. ! !---------------------------------------------------------------------------------------! NL%IMONTHA = @START_MONTH@ ! Month - NL%IDATEA = @START_DAY@ ! Day - NL%IYEARA = @START_YEAR@ ! Year + NL%IDATEA = @START_DAY@ ! Day + NL%IYEARA = @START_YEAR@ ! Year NL%ITIMEA = 0000 ! UTC !---------------------------------------------------------------------------------------! @@ -95,8 +95,8 @@ $ED_NL ! End of simulation. Information must be given in UTC time. ! !---------------------------------------------------------------------------------------! NL%IMONTHZ = @END_MONTH@ ! Month - NL%IDATEZ = @END_DAY@ ! Day - NL%IYEARZ = @END_YEAR@ ! Year + NL%IDATEZ = @END_DAY@ ! Day + NL%IYEARZ = @END_YEAR@ ! Year NL%ITIMEZ = 0000 ! UTC !---------------------------------------------------------------------------------------! @@ -219,8 +219,8 @@ $ED_NL ! soil types ! !---------------------------------------------------------------------------------------! NL%N_POI = 1 - NL%POI_LAT = @SITE_LAT@ - NL%POI_LON = @SITE_LON@ + NL%POI_LAT = @SITE_LAT@ + NL%POI_LON = @SITE_LON@ NL%POI_RES = 1.00 !---------------------------------------------------------------------------------------! @@ -366,8 +366,8 @@ $ED_NL ! 
FFILOUT -- Path and prefix for analysis files (all but history/restart). ! ! SFILOUT -- Path and prefix for history files. ! !---------------------------------------------------------------------------------------! - NL%FFILOUT = '/mypath/generic-prefix' - NL%SFILOUT = '/mypath/generic-prefix' + NL%FFILOUT = '@FFILOUT@' + NL%SFILOUT = '@SFILOUT@' !---------------------------------------------------------------------------------------! @@ -417,7 +417,7 @@ $ED_NL ! ED2.1 state file. It allows for different layering, and assigns via nearest ! ! neighbor. ! !---------------------------------------------------------------------------------------! - NL%IED_INIT_MODE = 6 + NL%IED_INIT_MODE = @INIT_MODEL@ !---------------------------------------------------------------------------------------! @@ -444,7 +444,7 @@ $ED_NL ! path+prefix will be used, as the history for every grid must have come ! ! from the same simulation. ! !---------------------------------------------------------------------------------------! - NL%SFILIN = '/mypath/myprefix.' + NL%SFILIN = '@SITE_PSSCSS@' !---------------------------------------------------------------------------------------! @@ -614,12 +614,12 @@ $ED_NL ! SOILDEPTH_DB -- If ISOILDEPTHFLG=1, this variable specifies the full path of the ! ! file that contains soil moisture and temperature information. ! !---------------------------------------------------------------------------------------! - NL%VEG_DATABASE = '/mypath/veget_data/oge2/OGE2_' - NL%SOIL_DATABASE = '/mypath/soil_data/texture/Hengl/Hengl_' - NL%LU_DATABASE = '/mypath/glu-3.3.1+sa1.bau/glu-3.3.1+sa1.bau-' + NL%VEG_DATABASE = '@ED_VEG@' + NL%SOIL_DATABASE = '@ED_SOIL@' + NL%LU_DATABASE = '@ED_LU@' NL%PLANTATION_FILE = '' - NL%THSUMS_DATABASE = '/mypath/thermal_sums/' - NL%ED_MET_DRIVER_DB = '/mypath/ED_MET_DRIVER_HEADER' + NL%THSUMS_DATABASE = '@ED_THSUM@' + NL%ED_MET_DRIVER_DB = '@SITE_MET@' NL%OBSTIME_DB = '' !Reference file: /ED/run/obstime_template.time NL%SOILSTATE_DB = '/mypath/soil_data/temp+moist/STW1996OCT.dat' NL%SOILDEPTH_DB = '/mypath/soil_data/depth/H250mBRD/H250mBRD_' @@ -899,7 +899,7 @@ $ED_NL ! New scheme: plants shed their leaves once a 10-day running average of available ! ! water becomes less than a critical value. ! !---------------------------------------------------------------------------------------! - NL%IPHEN_SCHEME = 2 + NL%IPHEN_SCHEME = @PHENOL_SCHEME@ !---------------------------------------------------------------------------------------! @@ -1845,9 +1845,9 @@ $ED_NL !---------------------------------------------------------------------------------------! NL%IMETTYPE = 1 NL%ISHUFFLE = 0 - NL%METCYC1 = 2004 - NL%METCYCF = 2014 - NL%IMETAVG = 1 + NL%METCYC1 = @MET_START@ + NL%METCYCF = @MET_END@ + NL%IMETAVG = @MET_SOURCE@ NL%IMETRAD = 5 NL%INITIAL_CO2 = 410. !---------------------------------------------------------------------------------------! @@ -1865,11 +1865,11 @@ $ED_NL ! ! ! If the years don't cover the entire simulation period, they will be recycled. ! !---------------------------------------------------------------------------------------! - NL%IPHENYS1 = 1992 - NL%IPHENYSF = 2003 - NL%IPHENYF1 = 1992 - NL%IPHENYFF = 2003 - NL%PHENPATH = '/mypath/phenology' + NL%IPHENYS1 = @PHENOL_START@ + NL%IPHENYSF = @PHENOL_END@ + NL%IPHENYF1 = @PHENOL_START@ + NL%IPHENYFF = @PHENOL_END@ + NL%PHENPATH = '@PHENOL@' !---------------------------------------------------------------------------------------! @@ -1882,7 +1882,7 @@ $ED_NL ! simulation. ! ! 
PHENPATH -- path and prefix of the prescribed phenology data. ! !---------------------------------------------------------------------------------------! - NL%IEDCNFGF = '/mypath/config.xml' + NL%IEDCNFGF = '@CONFIGFILE@' NL%EVENT_FILE = '/mypath/event.xml' !---------------------------------------------------------------------------------------! From 5e40edc790bed71365252d5d259daf2ae7a1293c Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 17:07:50 -0700 Subject: [PATCH 0952/2289] [WIP] Remove a lot of text from Git workflow I was motivated to just remove the "Basic Workflow" section that starts with "This workflow is for educational purposes only." Then I continued to remove content that I thought was not necessary. A lot of this content dates from previous decades, before the Carpentries were widespread and before git literacy was common among young scientists. I tried to begin removing a lot of content that can easily be found on Google. This section needs a lot of work to be useful - we should make it clear (and consistent) if forking is expected of everyone, or just preferred, etc. Currently there is a focus on branching with a few allusions to forking. And many references to cloning from pecanproject/pecan and branching imply forking isn't the first priority (or when it is). NB: I have a problem with forks because they make it harder to edit text files in the GitHub web interface. --- .../02_git/01_using-git.Rmd | 77 +++++-------------- 1 file changed, 18 insertions(+), 59 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd index d1df68a482a..b40c8d8c1fa 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd @@ -31,6 +31,10 @@ it. The git protocol is read-only. This document describes the steps required to download PEcAn, make changes to code, and submit your changes. +If during the above process you want to work on something else, commit all +your code, create a new branch, and work on the new branch. + + #### PEcAn Project and Github * Organization Repository: https://github.com/organizations/PecanProject * PEcAn source code: https://github.com/PecanProject/pecan.git @@ -69,10 +73,19 @@ The **easiest** approach is to use GitHub's browser based workflow. This is usef
#### Before any work is done @@ -106,41 +119,9 @@ cd pecan git remote add upstream git@github.com:PecanProject/pecan.git ``` -#### During development: - -* commit often; -* each commit can address 0 or 1 issue; many commits can reference an issue -* ensure that all tests are passing before anything is pushed into develop. - -#### Basic Workflow - -This workflow is for educational purposes only. Please use the Recommended Workflow if you plan on contributing to PEcAn. This workflow does not include creating branches, a feature we would like you to use. -1. Get the latest code from the main repository - -`git pull upstream develop` - -2. Do some coding - -3. Commit after each chunk of code (multiple times a day) - -`git commit -m ""` - -4. Push to YOUR Github (when a feature is working, a set of bugs are fixed, or you need to share progress with others) - -`git push origin develop` - -5. Before submitting code back to the main repository, make sure that code compiles from the main directory. - -`make` - - -6. submit pull request with a reference to related issue; -* also see [github documentation](https://help.github.com/articles/using-pull-requests) - - #### Recommended Workflow: A new branch for each change -1. Make sure you start in develop +1. Make sure you start in the develop branch `git checkout develop` @@ -152,7 +133,7 @@ This workflow is for educational purposes only. Please use the Recommended Workf `make` -4. Create a branch and switch to it +4. Create a new branch and switch to it `git checkout -b ` @@ -172,7 +153,7 @@ This workflow is for educational purposes only. Please use the Recommended Workf `git push origin ` 8. submit pull request with [[link commits to issues|Using-Git#link-commits-to-issuess]]; -* also see [explanation in this PecanProject/bety issue](https://github.com/PecanProject/bety/issues/57) and [github documentation](https://help.github.com/articles/using-pull-requests) +* also see [github documentation](https://help.github.com/articles/using-pull-requests) #### After pull request is merged @@ -232,16 +213,6 @@ There are two ways to do this. One easy way is to include the following text in #### Other Useful Git Commands: -* GIT encourages branching "early and often" -* First pull from develop -* Branch before working on feature -* One branch per feature -* You can switch easily between branches -* Merge feature into main line when branch done - -If during above process you want to work on something else, commit all -your code, create a new branch, and work on new branch. - * Delete a branch: `git branch -d ` * To push a branch git: `push -u origin `` @@ -294,19 +265,7 @@ The Rstudio documentation includes useful overviews of [version control](http:// Once you have git installed on your computer (see the [Rstudio version control](http://www.rstudio.com/ide/docs/version_control/overview) documentation for instructions), you can use the following steps to install the PEcAn source code in Rstudio. -#### Creating a Read-only version: - -This is a fast way to clone the repository that does not support contributing new changes (this can be done with further modification). - -1. install Rstudio (www.rstudio.com) -2. click (upper right) project -* create project -* version control -* Git - clone a project from a Git Repository -* paste https://www.github.com/PecanProject/pecan -* choose working dir. for repo - -#### For development: +##### Changes without requiring a username and password: 1. create account on github 2. 
create a fork of the PEcAn repository to your own account https://www.github.com/pecanproject/pecan From 2109a8c88d92e1f64e1b41c5d50e0e21ebbc8182 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 6 May 2020 17:56:36 -0700 Subject: [PATCH 0953/2289] fix typos in docker.sh --- docker.sh | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docker.sh b/docker.sh index 3f05e57c524..0ffe39aa527 100755 --- a/docker.sh +++ b/docker.sh @@ -61,7 +61,7 @@ IMAGE_VERSION or the option -i. To run the script in debug mode without actually building any images you can use the environment variable DEBUG or option -d. -By default the docker.sh process will try and use a prebuild dependency +By default the docker.sh process will try and use a prebuilt dependency image since this image takes a long time to build. To force this image to be build use the DEPEND="build" environment flag, or use option -f. @@ -103,8 +103,8 @@ echo "# test this build you can use:" echo "# PECAN_VERSION='${IMAGE_VERSION}' docker-compose up" echo "#" echo "# The docker image for dependencies takes a long time to build. You" -echo "# can use a prebuild version (default) or force a new versin to be" -echo "# build locally using: DEPEND=build $0" +echo "# can use a prebuilt version (default) or force a new version to be" +echo "# built locally using: DEPEND=build $0" echo "# ----------------------------------------------------------------------" # not building dependencies image, following command will build this From b2e2bd535b4bd00e4cc647d3340c9609f2071272 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Fri, 8 May 2020 06:58:19 +0530 Subject: [PATCH 0954/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 8 -------- 1 file changed, 8 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 28761a23ff8..ee745a01b40 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -57,14 +57,6 @@ jobs: v=$(git --version | grep -oE '[0-9\.]+') v='cat(numeric_version("'${v}'") < "2.18")' echo "##[set-output name=isold;]$(Rscript -e "${v}")" - - name: upgrade git if needed - # Hack: actions/checkout wants git >= 2.18, rocker 3.5 images have 2.11 - # Assuming debian stretch because newer images have git >= 2.20 already - if: steps.gitversion.outputs.isold == 'TRUE' - run: | - echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list - apt-get update - apt-get -t stretch-backports upgrade -y git - uses: actions/checkout@v2 - uses: r-lib/actions/pr-fetch@master with: From 3801fa22ddef3903daa204671b510cc36714c36f Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Fri, 8 May 2020 06:59:26 +0530 Subject: [PATCH 0955/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index ee745a01b40..b81b8e732ba 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -19,8 +19,7 @@ jobs: - uses: r-lib/actions/setup-r@master - name: Install dependencies run: | - Rscript -e 'install.packages("styler")' - Rscript -e 'install.packages("devtools")' + Rscript -e 'install.packages(c("styler", "devtools"), repos = 
"cloud.r-project.org")' Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")' - name: string operations run: | From 06340b4af02aac590ee3910a0bedf5bc90458782 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Sat, 9 May 2020 00:01:34 +0530 Subject: [PATCH 0956/2289] file name updated --- .github/workflows/styler-actions.yml | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index b81b8e732ba..e2c5a991da6 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -24,17 +24,17 @@ jobs: - name: string operations run: | echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt - cat names.txt | tr -d '[]' > new.txt - text=$(cat new.txt) + cat names.txt | tr -d '[]' > changed_files.txt + text=$(cat changed_files.txt) IFS=',' read -ra ids <<< "$text" - for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> new2.txt; fi; done + for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done - name: Upload artifacts uses: actions/upload-artifact@v1 with: name: artifacts - path: new2.txt + path: files_to_style.txt - name: Style - run: for i in $(cat new2.txt); do Rscript -e "styler::style_file("$i")"; done + run: for i in $(cat files_to_style.txt); do Rscript -e "styler::style_file("$i")"; done - name: commit run: | git add \*.R From 87066d0b3c6674776a9791a46ebc99d2179bb766 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Sat, 9 May 2020 00:03:54 +0530 Subject: [PATCH 0957/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index e2c5a991da6..139ceda11f1 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -68,11 +68,11 @@ jobs: - name : make shell: bash run: | - cut -d / -f 1-2 artifacts/new2.txt | tr -d '"' > new.txt - cat new.txt - sort new.txt | uniq > uniq.txt - cat uniq.txt - for i in $(cat uniq.txt); do make .doc/${i}; done + cut -d / -f 1-2 artifacts/new2.txt | tr -d '"' > changed_dirs.txt + cat changed_dirs.txt + sort changed_dirs.txt | uniq > needs_documenting.txt + cat needs_documenting.txt + for i in $(cat needs_documenting.txt); do make .doc/${i}; done - name: commit run: | git config --global user.email "pecan_bot@example.com" From 30b96bcdd06508edca4345b0ec7d286195b9d362 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Sat, 9 May 2020 00:04:25 +0530 Subject: [PATCH 0958/2289] Update styler-actions.yml --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 139ceda11f1..e5da6675005 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -68,7 +68,7 @@ jobs: - name : make shell: bash run: | - cut -d / -f 1-2 artifacts/new2.txt | tr -d '"' > changed_dirs.txt + cut -d / -f 1-2 artifacts/files_to_style.txt | tr -d '"' > changed_dirs.txt cat changed_dirs.txt sort changed_dirs.txt | uniq > needs_documenting.txt cat needs_documenting.txt From 
da17fad705eee338b2732999982637129b6a11eb Mon Sep 17 00:00:00 2001 From: Ken Youens-Clark Date: Fri, 8 May 2020 16:19:20 -0700 Subject: [PATCH 0959/2289] setting the default values to match docker.sh so this will build --- models/ed/Dockerfile | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/models/ed/Dockerfile b/models/ed/Dockerfile index c1d64ba2e3c..34cad0989a1 100644 --- a/models/ed/Dockerfile +++ b/models/ed/Dockerfile @@ -7,8 +7,8 @@ ARG IMAGE_VERSION="latest" FROM debian:stretch as model-binary # Some variables that can be used to set control the docker build -ARG MODEL_VERSION=git -ARG BINARY_VERSION=2.1 +ARG MODEL_VERSION="2.2.0" +ARG BINARY_VERSION="2.2" # specify fortran compiler ENV FC_TYPE=GNU @@ -59,7 +59,7 @@ RUN apt-get update \ # ---------------------------------------------------------------------- # Some variables that can be used to set control the docker build -ARG MODEL_VERSION=git +ARG MODEL_VERSION="2.2.0" # Setup model_info file COPY model_info.json /work/model.json From d7eeb8b516c333911d33dc66cf1384ce8b1e3f22 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 11 May 2020 15:18:23 +0530 Subject: [PATCH 0960/2289] updated CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 19e373161d5..685c6885295 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -34,6 +34,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - No longer writing an arbitrary num for each PFT, this was breaking ED runs potentially. - The pecan/data container has no longer hardcoded path for postgres - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511). +- data.remote: Arguments to the function `call_MODIS()` have been changed (#2338). ### Added - model_info.json and Dockerfile to template (#2567) From 40b0e519f774f6b131654ae0d47d5b70eaadfd1a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 11 May 2020 16:04:36 +0530 Subject: [PATCH 0961/2289] changed order of arguments, updated docs --- modules/data.remote/R/call_MODIS.R | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 71ecd18e0a6..9d47780b060 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -3,7 +3,7 @@ ##' @name call_MODIS ##' @title call_MODIS ##' @export -##' @param outdir where the output file will be stored. Default is NULL +##' @param outdir where the output file will be stored. Default is NULL and in this case only values are returned. When path is provided values are returned and written to disk. ##' @param var the simple name of the modis dataset variable (e.g. 
lai) ##' @param site_info Bety list of site info for parsing MODIS data: list(site_id, site_name, lat, ##' lon, time_zone) @@ -35,14 +35,14 @@ ##' lon = 90, ##' time_zone = "UTC") ##' test_modistools <- call_MODIS( -##' outdir = NULL, ##' var = "lai", +##' product = "MOD15A2H", +##' band = "Lai_500m", ##' site_info = site_info, ##' product_dates = c("2001150", "2001365"), +##' outdir = NULL, ##' run_parallel = TRUE, ##' ncores = NULL, -##' product = "MOD15A2H", -##' band = "Lai_500m", ##' package_method = "MODISTools", ##' QC_filter = TRUE, ##' progress = FALSE) @@ -50,12 +50,12 @@ ##' @importFrom foreach %do% %dopar% ##' @author Bailey Morrison ##' -call_MODIS <- function(outdir = NULL, - var, site_info, - product_dates, +call_MODIS <- function(var, product, + band, site_info, + product_dates, + outdir = NULL, run_parallel = FALSE, - ncores = NULL, - product, band, + ncores = NULL, package_method = "MODISTools", QC_filter = FALSE, progress = FALSE) { From a6e7c9614bbb224a12cfe7e52413ff9f0ca5e819 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 11 May 2020 16:13:45 +0530 Subject: [PATCH 0962/2289] Update CHANGELOG.md --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 685c6885295..a0a20b84d50 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -34,7 +34,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - No longer writing an arbitrary num for each PFT, this was breaking ED runs potentially. - The pecan/data container has no longer hardcoded path for postgres - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511). -- data.remote: Arguments to the function `call_MODIS()` have been changed (#2338). +- data.remote: Arguments to the function `call_MODIS()` have been changed (issue #2519). ### Added - model_info.json and Dockerfile to template (#2567) From 78b901c9bcffa3892a796783c05ed442a4c474de Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 11 May 2020 18:39:53 +0530 Subject: [PATCH 0963/2289] generated documentation for call_MODIS --- modules/data.remote/man/call_MODIS.Rd | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index cf54ceaf091..52fc0422110 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -5,39 +5,39 @@ \title{call_MODIS} \usage{ call_MODIS( - outdir = NULL, var, + product, + band, site_info, product_dates, + outdir = NULL, run_parallel = FALSE, ncores = NULL, - product, - band, package_method = "MODISTools", QC_filter = FALSE, progress = FALSE ) } \arguments{ -\item{outdir}{where the output file will be stored. Default is NULL} - \item{var}{the simple name of the modis dataset variable (e.g. lai)} +\item{product}{string value for MODIS product number} + +\item{band}{string value for which measurement to extract} + \item{site_info}{Bety list of site info for parsing MODIS data: list(site_id, site_name, lat, lon, time_zone)} \item{product_dates}{a character vector of the start and end date of the data in YYYYJJJ} +\item{outdir}{where the output file will be stored. Default is NULL and in this case only values are returned. When path is provided values are returned and written to disk.} + \item{run_parallel}{optional method to download data paralleize. 
Only works if more than 1 site is needed and there are >1 CPUs available.} \item{ncores}{number of cpus to use if run_parallel is set to TRUE. If you do not know the number of CPU's available, enter NULL.} -\item{product}{string value for MODIS product number} - -\item{band}{string value for which measurement to extract} - \item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} @@ -64,14 +64,14 @@ site_info <- list( lon = 90, time_zone = "UTC") test_modistools <- call_MODIS( - outdir = NULL, var = "lai", + product = "MOD15A2H", + band = "Lai_500m", site_info = site_info, product_dates = c("2001150", "2001365"), + outdir = NULL, run_parallel = TRUE, ncores = NULL, - product = "MOD15A2H", - band = "Lai_500m", package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) From 573dfd57d6852db50b7dd6d206cbc2904eaffe8e Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 13 May 2020 16:31:36 -0500 Subject: [PATCH 0964/2289] do apt-get before install --- shiny/dbsync/Dockerfile | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/shiny/dbsync/Dockerfile b/shiny/dbsync/Dockerfile index 2506d2236c5..2a77bba24ed 100644 --- a/shiny/dbsync/Dockerfile +++ b/shiny/dbsync/Dockerfile @@ -6,9 +6,11 @@ ENV PGHOST=postgres \ PGPASSWORD=bety \ GEOCACHE=/srv/shiny-server/geoip.json -RUN apt-get -y install libpq-dev libssl-dev \ +RUN apt-get update \ + && apt-get -y install libpq-dev libssl-dev \ && install2.r -e -s -n -1 curl dbplyr DT leaflet RPostgreSQL \ - && rm -rf /srv/shiny-server/* + && rm -rf /srv/shiny-server/* \ + && rm -rf /var/lib/apt/lists/* ADD . /srv/shiny-server/ # special script to start shiny server and preserve env variable From 3e3df92c3ca78234fd58fb0e36df86f77a5fe7b5 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 13 May 2020 23:28:22 -0500 Subject: [PATCH 0965/2289] how to develop using docker --- .gitignore | 5 +- DEV-INTRO.md | 101 +++++++++++++++++++++++------------------ docker-compose.dev.yml | 80 ++++++++++++++++++++++++++++++++ docker-compose.yml | 18 -------- scripts/compile.sh | 3 ++ 5 files changed, 144 insertions(+), 63 deletions(-) create mode 100644 docker-compose.dev.yml create mode 100755 scripts/compile.sh diff --git a/.gitignore b/.gitignore index badc2505dbf..cbded97662d 100644 --- a/.gitignore +++ b/.gitignore @@ -100,5 +100,6 @@ contrib/modellauncher/modellauncher # don't checkin renv /renv/ -# ignore IP mapping to lat/lon (is about 65MB) -shiny/dbsync/IP2LOCATION-LITE-DB5.BIN +# folder with the data folders for the docker stack +/volumes + diff --git a/DEV-INTRO.md b/DEV-INTRO.md index b1881ba44f4..322ea905fff 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -1,59 +1,74 @@ -PEcAn Development -================= +# PEcAn Development -Directory Structure -------------------- +This is a minimal guide to getting started with PEcAn development. -### pecan/ +## Git Repository and Workflow -* modules/ Contains the modules that make up PEcAn -* web/ The PEcAn web app to start a run and view outputs. -* models/ Code to create the model specific configurations. -* documentation/ Documentation about what PEcAn is and how to use it. +The recommended workflow is gitflow which is described in the PEcAn documentation. For the repository we recommend using a [fork of the PEcAn repsitory](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). 
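Synchronizing a fork with the official repository can also be done by hand; a minimal sketch (assuming the conventional `origin`/`upstream` remote names, with the `upstream` URL matching the one used elsewhere in this document) looks like this, and the script mentioned in the next sentence exists to help with exactly this chore:

```
# one-time setup: point 'upstream' at the official repository
git remote add upstream https://github.com/PecanProject/pecan.git

# refresh your fork's develop branch from the official repository
git fetch upstream
git checkout develop
git merge upstream/develop
git push origin develop
```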
In the scripts folder there is a script called [syngit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository. -### Modules (R packages) +## Developing in Docker -* General -** all -** utils -** db -* Analysis -** modules/meta.analysis -** modules/uncertainty" -** modules/data.land -** modules/data.atmosphere -** modules/assim.batch -** modules/assim.sequential -** modules/priors -* Model Interfaces -** models/ed -** models/sipnet -** models/biocro +To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. +### First time setup -#### List of modules +You can copy the [`env.example`](docker/env.example) file as .env in your pecan folder. The variables we want to modify are: -Installing PEcAn ----------------- +* `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers +* `PECAN_VERSION` set this to develop, the docker image we start with -### Virtual Machine +Next we will create the folders that will hold all the data for the docker containers using: `mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The `volumes` folder will be ignored by git. -* Fastest way to get started -* see PEcAn demo ... +First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. -### Installing from source +During the time it takes for the database to start we will copy the R packages from a container to our `volumes/lib` folder. This folder contain the compiled R packages. Later we will put our newly compiled code here as well. You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine. This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). -#### From GitHub +At this point postgresql should have started and is ready to load the database. This is done using `docker run -ti --rm --network pecan_pecan pecan/db`. This only needs to be done once (unless the volumes folder is removed). -``` -library(devtools) -install_github("pecan", "PEcAnProject") -``` +Once the database is loaded we can bring up the rest of the docker stack using `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d`. At this point you have PEcAn running in docker. -#### "Makefile" +### PEcAn Development -``` -./scripts/build.sh -install # installs all R packages -./scripts/build.sh -h # list other options +The current folder is mounted in some containers as `/pecan`. You can see which containers exactly in `docker-compose.dev.yaml`. You can now modify the code on your local machine, or you can use [rstudio](http://localhost:8000) in the docker stack. 
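As a quick sanity check (a sketch, not from the original text): with the dev stack up, the mounted source tree should be visible from inside a container, for example the `executor` service:

```
# list the mounted PEcAn checkout from inside the executor container
docker-compose -f docker-compose.yml -f docker-compose.dev.yml exec executor ls /pecan
```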
Once you made changes to the code you can compile the code either in the terminal of rstudio (`cd pecan && make`) or using `./scripts/compile.sh` from your machine (latter is nothing more than a shell script that runs `docker-compose exec executor sh -c 'cd /pecan && make'`. -``` \ No newline at end of file +If you submit new workflows through the webinterface it will use this new code in the executor, and any other containers that have `volumes/lib` mounted inside the containers (done by `docker-compose.dev.yaml`). + +### Workflow Submission + +A better way of doing this is developed as part of GSOC. You can leverage of the API folder (specifically submit_workflow.R). + +# Directory Structure + +Following are the main folders inside the pecan repository. + +### base (R packages) + +These are the core packages of PEcAn. Most other packages will depend on the packages in this folder. + +### models (R packages) + +Each subfolder contains the required pieces to run the model in PEcAn + +### modules (R packages) + +Contains packages that eitehr do analysis, or download and convert different data products. + +### web (PHP + javascript) + +The Pecan web application + +### shiny (R + shiny) + +Each subfolder is its own shiny application. + +### book_source (RMarkdown) + +The PEcAn documentation that is compiled and uploaded to the PEcAn webpage. + +### docker + +Some of the docker build files. The Dockerfiles for each model are placed in the models folder. + +### scripts + +Small scripts that are used as part of the development and installation of PEcAn. diff --git a/docker-compose.dev.yml b/docker-compose.dev.yml new file mode 100644 index 00000000000..c87f3d21fba --- /dev/null +++ b/docker-compose.dev.yml @@ -0,0 +1,80 @@ +# mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik} +# +# docker-compose -f docker-compose.yml -f docker-compose.dev.yml + +version: '3' + +services: + web: + volumes: + - '${PWD}/web:/var/www/html/pecan' + - '${PWD}/docker/web/config.docker.php:/var/www/html/pecan/config.php' + + # executor can compile the code + executor: + volumes: + - '${PWD}:/pecan/' + - '${PWD}/volumes/lib:/usr/local/lib/R/site-library/' + + # use same for R development in rstudio + rstudio: + volumes: + - '${PWD}:/pecan/' + - '${PWD}:/home/carya/pecan/' + - '${PWD}/volumes/lib:/usr/local/lib/R/site-library/' + + # use following as template for other models + # this can be used if you are changng the code for a model in PEcAN + sipnet: + volumes: + - '${PWD}/volumes/lib:/usr/local/lib/R/site-library/' + + # this will open postgres to the hostcompute + #postgres: + # ports: + # - '5432:5432' + + # Allow to see all docker containers running, restart and see log files. 
+ #portainer: + # image: portainer/portainer:latest + # command: + # - --admin-password=${PORTAINER_PASSWORD:-} + # - --host=unix:///var/run/docker.sock + # restart: unless-stopped + # networks: + # - pecan + # labels: + # - "traefik.enable=true" + # - "traefik.backend=portainer" + # - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip: /portainer" + # - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}" + # volumes: + # - /var/run/docker.sock:/var/run/docker.sock + # - portainer:/data + +volumes: + traefik: + driver_opts: + type: none + device: '${PWD}/volumes/traefik' + o: bind + postgres: + driver_opts: + type: none + device: '${PWD}/volumes/postgres' + o: bind + rabbitmq: + driver_opts: + type: none + device: '${PWD}/volumes/rabbitmq' + o: bind + pecan: + driver_opts: + type: none + device: '${PWD}/volumes/pecan' + o: bind + portainer: + driver_opts: + type: none + device: '${PWD}/volumes/portainer' + o: bind diff --git a/docker-compose.yml b/docker-compose.yml index e0f5661b0fb..6c5e5de4e7b 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -43,24 +43,6 @@ services: - /var/run/docker.sock:/var/run/docker.sock:ro - traefik:/config - # Allow to see all docker containers running, restart and see log files. - portainer: - image: portainer/portainer:latest - command: - - --admin-password=${PORTAINER_PASSWORD:-} - - --host=unix:///var/run/docker.sock - restart: unless-stopped - networks: - - pecan - labels: - - "traefik.enable=true" - - "traefik.backend=portainer" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip: /portainer" - - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}" - volumes: - - /var/run/docker.sock:/var/run/docker.sock - - portainer:/data - # ---------------------------------------------------------------------- # Access to the files generated and used by PEcAn, both through a # web interface (minio) as well using the thredds server. diff --git a/scripts/compile.sh b/scripts/compile.sh new file mode 100755 index 00000000000..462c2ac55aa --- /dev/null +++ b/scripts/compile.sh @@ -0,0 +1,3 @@ +#!/bin/bash + +docker-compose exec executor sh -c 'cd /pecan && make' From 63d87af67b3e1250121e88a9120f16d9c6e4c094 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 13 May 2020 23:33:02 -0500 Subject: [PATCH 0966/2289] update CHANGELOG --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index b8ecb9bf59b..4f531d63f29 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -39,6 +39,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Added +- Documentation in [DEV-INTRO.md](DEV-INTRO.md) on development in a docker environment (#2553) - New versioned ED2IN template: ED2IN.2.2.0 (#2143) (replaces ED2IN.git) - model_info.json and Dockerfile to template (#2567) - Dockerize BASGRA_N model. From 01d0596546ca4c056a761f9305d7ab249eb182ac Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 13 May 2020 23:49:16 -0500 Subject: [PATCH 0967/2289] add note about docker-compose.override.yml --- DEV-INTRO.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 322ea905fff..d00db2bd4d9 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -10,6 +10,8 @@ The recommended workflow is gitflow which is described in the PEcAn documentatio To get started with development in docker we need to bring up the docker stack first. 
In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. +You can copy the `docker-compose.dev.yaml` to `docker-compose.override.yml`. Once that is done the `docker-compose` program will automatically use the `docker-compose.yml` and `docker-compose.override.yml` files, and you don't have to explicitly load them in the commands below. + ### First time setup You can copy the [`env.example`](docker/env.example) file as .env in your pecan folder. The variables we want to modify are: From c02c2f009b1732ef03ca8208742353300d4aeef6 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 14 May 2020 15:03:46 +0530 Subject: [PATCH 0968/2289] removed !(is.null(good)) --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 9d47780b060..2cb138472a1 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -244,7 +244,7 @@ call_MODIS <- function(var, product, output$qc[i] <- substr(convert, nchar(convert) - 2, nchar(convert)) } good <- which(output$qc %in% c("000", "001")) - if (length(good) > 0 || !(is.null(good))) + if (length(good) > 0) { output <- output[good, ] } else { From a55a4f45980245eb8560ba256a91529c2dc4cf19 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 14 May 2020 16:53:02 -0500 Subject: [PATCH 0969/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index d00db2bd4d9..669f3bd34ce 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -53,7 +53,7 @@ Each subfolder contains the required pieces to run the model in PEcAn ### modules (R packages) -Contains packages that eitehr do analysis, or download and convert different data products. +Contains packages that either do analysis, or download and convert different data products. ### web (PHP + javascript) From 62b900c76100dca60f2642c88f2c170be680291f Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 14 May 2020 16:53:12 -0500 Subject: [PATCH 0970/2289] Update DEV-INTRO.md Co-authored-by: David LeBauer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 669f3bd34ce..03c1b747762 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -4,7 +4,7 @@ This is a minimal guide to getting started with PEcAn development. ## Git Repository and Workflow -The recommended workflow is gitflow which is described in the PEcAn documentation. For the repository we recommend using a [fork of the PEcAn repsitory](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). In the scripts folder there is a script called [syngit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository. +The recommended workflow is gitflow which is described in the PEcAn documentation. For the repository we recommend using a [fork of the PEcAn repsitory](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). In the `/scripts` folder there is a script called [syncgit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository. 
## Developing in Docker From c30e5f6dd0e71165ca1371c5e66cfcc9a54cb8ff Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 14 May 2020 23:12:22 -0500 Subject: [PATCH 0971/2289] fixes based on demo today --- DEV-INTRO.md | 57 +++++++++++++--- .../94_docker/02_quickstart.Rmd | 45 ++++++------- docker-compose.yml | 3 + tests/docker.sipnet.xml | 65 +++++++++++++++++++ 4 files changed, 138 insertions(+), 32 deletions(-) create mode 100644 tests/docker.sipnet.xml diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 03c1b747762..2251eaff58d 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -1,6 +1,6 @@ # PEcAn Development -This is a minimal guide to getting started with PEcAn development. +This is a minimal guide to getting started with PEcAn development under Docker. You can find more information about docker in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html). ## Git Repository and Workflow @@ -10,34 +10,71 @@ The recommended workflow is gitflow which is described in the PEcAn documentatio To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. -You can copy the `docker-compose.dev.yaml` to `docker-compose.override.yml`. Once that is done the `docker-compose` program will automatically use the `docker-compose.yml` and `docker-compose.override.yml` files, and you don't have to explicitly load them in the commands below. - ### First time setup +The steps in this section only need to be done the fist time you start working with the stack in docker. After this is done you can skip these steps. You can find more detail about the docker commands in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html). + +#### .env file You can copy the [`env.example`](docker/env.example) file as .env in your pecan folder. The variables we want to modify are: * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers * `PECAN_VERSION` set this to develop, the docker image we start with -Next we will create the folders that will hold all the data for the docker containers using: `mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The `volumes` folder will be ignored by git. +#### folders + +Next we will create the folders that will hold all the data for the docker containers using: `mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The `volumes` folder will be ignored by git. You can create these at any location, however you will need to update the `docker-compose.dev.yml` file. The subfolders are used for the following: + +- **lib** holds all the R packages for the specific version of PEcAn and R. More information is below +- **pecan** this holds all the data, such as workflows and any downloaded data. +- **portainer** if you enabled the portainer service this folder is used to hold persistent data for this service +- **postgres** holds the actual database data. If you want to backup the database, you can stop the postgres container, zip up the folder. +- **rabbitmq** holds persistent information of the message broker (rabbitmq). +- **traefik** holds persisent data for the web proxy, that directs incoming traffic to the correct container. 
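As an illustration of the backup mentioned in the **postgres** bullet above (a sketch; the archive name is arbitrary and this assumes the `volumes` folder layout created earlier):

```
# stop the database, archive its data folder, then bring it back up
docker-compose stop postgres
tar -czf postgres-backup.tar.gz -C volumes postgres
docker-compose start postgres
```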
+ +#### postgresql database First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. -During the time it takes for the database to start we will copy the R packages from a container to our `volumes/lib` folder. This folder contain the compiled R packages. Later we will put our newly compiled code here as well. You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine. This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). +Once the database has finished starting up we will initialize the database using: `docker run --rm --network pecan_pecan pecan/db`. Once that is done we create two users for BETY: -At this point postgresql should have started and is ready to load the database. This is done using `docker run -ti --rm --network pecan_pecan pecan/db`. This only needs to be done once (unless the volumes folder is removed). +``` +# guest user +docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4 -Once the database is loaded we can bring up the rest of the docker stack using `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d`. At this point you have PEcAn running in docker. +# example user +docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1 +``` + +#### copy R packages + +During the time it takes for the database to start we will copy the R packages from a container to our `volumes/lib` folder. This folder contain the compiled R packages. Later we will put our newly compiled code here as well. You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine. This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). ### PEcAn Development -The current folder is mounted in some containers as `/pecan`. You can see which containers exactly in `docker-compose.dev.yaml`. You can now modify the code on your local machine, or you can use [rstudio](http://localhost:8000) in the docker stack. Once you made changes to the code you can compile the code either in the terminal of rstudio (`cd pecan && make`) or using `./scripts/compile.sh` from your machine (latter is nothing more than a shell script that runs `docker-compose exec executor sh -c 'cd /pecan && make'`. +To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. You don't need to stop any running containers, you can use the following command to start all containers: `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d`. At this point you have PEcAn running in docker. + +The current folder (most likely your clone of the git repository) is mounted in some containers as `/pecan`, and in the case of rstudio also in your home folder as `pecan`. You can see which containers exactly in `docker-compose.dev.yaml`. 
+ +You can now modify the code on your local machine, or you can use [rstudio](http://localhost:8000) in the docker stack. Once you made changes to the code you can compile the code either in the terminal of rstudio (`cd pecan && make`) or using `./scripts/compile.sh` from your machine (latter is nothing more than a shell script that runs `docker-compose exec executor sh -c 'cd /pecan && make'`. + +The compiled code is written to `/usr/local/lib/R/site-library` which is mapped to `volumes/lib` on your machine. This same folder is mounted in many other containers, allowing you to share the same PEcAn modules in all containers. Now if you change a module, and compile all other containers will see and use this new version of your module. -If you submit new workflows through the webinterface it will use this new code in the executor, and any other containers that have `volumes/lib` mounted inside the containers (done by `docker-compose.dev.yaml`). +To compile the PEcAn code you can use the make command in either the rstudio container, or in the executor container. The script [`compile.sh`](sripts/compile.sh) will run make inside the executor container. + +To make it easier to start the containers and not having to remember to use `docker-compose -f docker-compose.yml -f docker-compose.dev.yml` when you want to use the docker-compose comannd, you can rename `docker-compose.dev.yml` to `docker-compose.override.yml`. The docker-compose command will automatically use the `docker-compose.yml`, `docker-compose.override.yml` and the `.env` files to start the right containers with the correct parameters. ### Workflow Submission -A better way of doing this is developed as part of GSOC. You can leverage of the API folder (specifically submit_workflow.R). +You can submit your workflow either in the executor container or in rstudio container. For example to run the `docker.sipnet.xml` workflow located in the tests folder you can use: + +``` +docker-compose exec executor baseh +# inside the container +cd /pecan/tests +R CMD ../web/workflow.R docker.sipnet.xml +``` + +A better way of doing this is developed as part of GSOC, in which case you can leverage of the restful interface defined, or using the new R PEcAn API package. # Directory Structure diff --git a/book_source/03_topical_pages/94_docker/02_quickstart.Rmd b/book_source/03_topical_pages/94_docker/02_quickstart.Rmd index d23ed2bf550..08247054ab7 100644 --- a/book_source/03_topical_pages/94_docker/02_quickstart.Rmd +++ b/book_source/03_topical_pages/94_docker/02_quickstart.Rmd @@ -68,45 +68,46 @@ This is relevant to the following commands, which will actually initialize and p Assuming the above ran successfully, next run the following: ```bash -docker-compose run --rm bety initialize +docker run --rm --network pecan_pecan pecan/db ``` The breakdown of this command is as follows: {#docker-run-init} -- `docker-compose run` -- This says we will be running a specific command inside the target service (bety in this case). +- `docker run` -- This says we will be running a container. - `--rm` -- This automatically removes the resulting container once the specified command exits, as well as any volumes associated with the container. This is useful as a general "clean-up" flag for one-off commands (like this one) to make sure you don't leave any "zombie" containers or volumes around at the end. -- `bety` -- This is the name of the service in which we want to run the specified command. 
-- Everything after the service name (here, `bety`) is interpreted as an argument to the image's specified [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint). For the `bety` service, the entrypoint is the script [`docker/entrypoint.sh`](https://github.com/PecanProject/bety/blob/master/docker/entrypoint.sh) located in the BETY repository. Here, the `initialize` argument is parsed to mean "Create a new database", which first runs `psql` commands to create the `bety` role and database and then runs the `load.bety.sh` script.
-  - NOTE: The entrypoint script that is used is the one copied into the Docker container at the time it was built, which, depending on the indicated image version and how often images are built on Docker Hub relative to updates to the source, may be older than whatever is in the source code.
+- `--network pecan_pecan` -- This will start the container in the same network space as the postgres container, allowing it to push data into the database.
+- `pecan/db` -- This is the name of the image; it holds a copy of the database used to initialize the postgresql database.
 
 Note that this command may throw a bunch of errors related to functions and/or operators already existing. This is normal -- it just means that the PostGIS extension to PostgreSQL is already installed. The important thing is that you see output near the end like:
 
 ```
-CREATED SCHEMA
-Loading  schema_migrations         : ADDED 61
-Started psql (pid=507)
-Updated  formats                   :      35 (+35)
-Fixed    formats                   :      46
-Updated  machines                  :      23 (+23)
-Fixed    machines                  :      24
-Updated  mimetypes                 :     419 (+419)
-Fixed    mimetypes                 :    1095
-...
-...
-...
-Added carya41 with access_level=4 and page_access_level=1 with id=323
-Added carya42 with access_level=4 and page_access_level=2 with id=325
-Added carya43 with access_level=4 and page_access_level=3 with id=327
-Added carya44 with access_level=4 and page_access_level=4 with id=329
-Added guestuser with access_level=4 and page_access_level=4 with id=331
+----------------------------------------------------------------------
+Safety checks
+
+----------------------------------------------------------------------
+
+----------------------------------------------------------------------
+Making sure user 'bety' exists.
 ```
 
 If you do not see this output, you can look at the [troubleshooting](#docker-quickstart-troubleshooting) section at the end of this section for some troubleshooting tips, as well as some solutions to common problems.
 
 Once the command has finished successfully, proceed with the next step which will load some initial data into the database and place the data in the docker volumes.
 
+#### Add first user to PEcAn database
+
+You can add an initial user to the BETY database. For example, the following commands will add the guestuser account as well as the demo `carya` account:
+
+```
+# guest user
+docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4
+
+# example user
+docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1
+```
+
 #### Add example data (first time only) {#pecan-docker-quickstart-init-data}
 
 The following command will add some initial data to the PEcAn stack and register the data with the database.
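The quoted hunk ends before the command itself appears. As a heavily hedged sketch (the image name `pecan/data` is taken from the CHANGELOG entry earlier in this series, but the flags here are assumptions, not the documented command), the data-loading step looks roughly like:

```
# hypothetical sketch of the example-data step; flags are assumptions
docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data pecan/data:develop
```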
diff --git a/docker-compose.yml b/docker-compose.yml index 6c5e5de4e7b..b2176512d02 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -158,6 +158,9 @@ services: networks: - pecan environment: + - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} + - FQDN=${PECAN_FQDN:-docker} + - NAME=${PECAN_NAME:-docker} - USER=${PECAN_RSTUDIO_USER:-carya} - PASSWORD=${PECAN_RSTUDIO_PASS:-illinois} entrypoint: /init diff --git a/tests/docker.sipnet.xml b/tests/docker.sipnet.xml new file mode 100644 index 00000000000..0c9f298654d --- /dev/null +++ b/tests/docker.sipnet.xml @@ -0,0 +1,65 @@ + + + pecan + + + + PostgreSQL + bety + bety + postgres + bety + FALSE + + + + + + temperate.coniferous + + + + + 3000 + FALSE + 1.2 + AUTO + + + + NPP + + + + + -1 + 1 + + NPP + + + + SIPNET + r136 + + + + + 772 + + + /home/carya/sites/niwot/niwot.clim + + 2002-01-01 00:00:00 + 2005-12-31 00:00:00 + pecan/dbfiles + + + + localhost + + amqp://guest:guest@rabbitmq/%2F + SIPNET_136 + + + From 40a15e117ff662d1b8b2c34dab1798b8d5085b8e Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 14 May 2020 23:18:35 -0500 Subject: [PATCH 0972/2289] few more tweaks --- DEV-INTRO.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 2251eaff58d..9379fc376c6 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -24,7 +24,7 @@ You can copy the [`env.example`](docker/env.example) file as .env in your pecan Next we will create the folders that will hold all the data for the docker containers using: `mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The `volumes` folder will be ignored by git. You can create these at any location, however you will need to update the `docker-compose.dev.yml` file. The subfolders are used for the following: -- **lib** holds all the R packages for the specific version of PEcAn and R. More information is below +- **lib** holds all the R packages for the specific version of PEcAn and R. This folder will be shared amongst all other containers, and will contain the compiled PEcAn code. - **pecan** this holds all the data, such as workflows and any downloaded data. - **portainer** if you enabled the portainer service this folder is used to hold persistent data for this service - **postgres** holds the actual database data. If you want to backup the database, you can stop the postgres container, zip up the folder. @@ -47,7 +47,11 @@ docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example #### copy R packages -During the time it takes for the database to start we will copy the R packages from a container to our `volumes/lib` folder. This folder contain the compiled R packages. Later we will put our newly compiled code here as well. You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine. This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). +The final step is to copy the R packages from a container to your local machine as the `volumes/lib` folder. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. + +You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine. 
+ +This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). You can also always delete all files in the `volumes/lib` folder, and recompile PEcAn from scratch. ### PEcAn Development From 6591e19f61dd1693bb47f2ac83dcc326f4b51a6d Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 15 May 2020 09:59:59 -0500 Subject: [PATCH 0973/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 9379fc376c6..bf63decc1dc 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -72,7 +72,7 @@ To make it easier to start the containers and not having to remember to use `doc You can submit your workflow either in the executor container or in rstudio container. For example to run the `docker.sipnet.xml` workflow located in the tests folder you can use: ``` -docker-compose exec executor baseh +docker-compose exec executor bash # inside the container cd /pecan/tests R CMD ../web/workflow.R docker.sipnet.xml From 236241366445f44e14b889f92cec1f588cb042cd Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sat, 16 May 2020 10:41:24 -0500 Subject: [PATCH 0974/2289] added some text to clarify things based on feedback from @istfer --- DEV-INTRO.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 9379fc376c6..f43d5902950 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -8,6 +8,8 @@ The recommended workflow is gitflow which is described in the PEcAn documentatio ## Developing in Docker +If running on a linux system it is recommended to add your user to the docker group. This will prevent you from having to use `sudo` to star the docker containers, and makes sure that any file that is written to a mounted volume is owned by you. This can be done using `sudo adduser ${USER} docker`. + To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. ### First time setup @@ -31,6 +33,8 @@ Next we will create the folders that will hold all the data for the docker conta - **rabbitmq** holds persistent information of the message broker (rabbitmq). - **traefik** holds persisent data for the web proxy, that directs incoming traffic to the correct container. +These folders will hold all the persistent data for each of the respective containers and can grow. For example the postgres database is multiple GB. The pecan folder will hold all data produced by the workflows, including any downloaded data, and can grow to many giga bytes. + #### postgresql database First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. 
From 04236fc07023b3cd22a4f18579a5dd5f6d36d83f Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 18 May 2020 03:37:37 -0400 Subject: [PATCH 0975/2289] add .sh --- shiny/dbsync/Dockerfile | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/shiny/dbsync/Dockerfile b/shiny/dbsync/Dockerfile index 2a77bba24ed..d5a89a6ca0f 100644 --- a/shiny/dbsync/Dockerfile +++ b/shiny/dbsync/Dockerfile @@ -13,5 +13,9 @@ RUN apt-get update \ && rm -rf /var/lib/apt/lists/* ADD . /srv/shiny-server/ +ADD https://raw.githubusercontent.com/rocker-org/shiny/master/shiny-server.sh /usr/bin/ + +RUN chmod +x /usr/bin/shiny-server.sh + # special script to start shiny server and preserve env variable CMD /srv/shiny-server/save-env-shiny.sh From 9989ab40236390ce018feb8542f7955cb312c7be Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 18 May 2020 11:32:48 -0500 Subject: [PATCH 0976/2289] more fixes based on discussion in #2572 --- DEV-INTRO.md | 51 +++++++++++++++++++++++++++++++++++++++--- docker-compose.dev.yml | 39 +++++++++++++++++++++++--------- docker-compose.yml | 2 +- 3 files changed, 77 insertions(+), 15 deletions(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index fa89076415d..a0e336dfa2c 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -49,9 +49,13 @@ docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@exa docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1 ``` +#### copy web config file + +The `docker-compose.dev.yaml` file has a section that will eanble editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using `cp docker/web/config.docker.php web/config.php`. + #### copy R packages -The final step is to copy the R packages from a container to your local machine as the `volumes/lib` folder. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. +Next copy the R packages from a container to your local machine as the `volumes/lib` folder. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine. @@ -69,8 +73,6 @@ The compiled code is written to `/usr/local/lib/R/site-library` which is mapped To compile the PEcAn code you can use the make command in either the rstudio container, or in the executor container. The script [`compile.sh`](sripts/compile.sh) will run make inside the executor container. -To make it easier to start the containers and not having to remember to use `docker-compose -f docker-compose.yml -f docker-compose.dev.yml` when you want to use the docker-compose comannd, you can rename `docker-compose.dev.yml` to `docker-compose.override.yml`. The docker-compose command will automatically use the `docker-compose.yml`, `docker-compose.override.yml` and the `.env` files to start the right containers with the correct parameters. - ### Workflow Submission You can submit your workflow either in the executor container or in rstudio container. For example to run the `docker.sipnet.xml` workflow located in the tests folder you can use: @@ -119,3 +121,46 @@ Some of the docker build files. 
The Dockerfiles for each model are placed in the ### scripts Small scripts that are used as part of the development and installation of PEcAn. + +# Advanced Development Options + +## docker-compose + +To make it easier to start the containers and not having to remember to use `docker-compose -f docker-compose.yml -f docker-compose.dev.yml` when you want to use the docker-compose comannd, you can rename `docker-compose.dev.yml` to `docker-compose.override.yml`. The docker-compose command will automatically use the `docker-compose.yml`, `docker-compose.override.yml` and the `.env` files to start the right containers with the correct parameters. + +## Linux and User permissions + +(On Mac OSX and Windows files should automatically be owned by the user running the docker-compose commands) + +This will leverage of NFS to mount the file system in your local docker image, changing the files to owned by the user specified in the export file. Try to limit this to only your PEcAn folder since this will allow anybody on this system to get access to the exported folder as you! + +First install nfs server: + +``` +apt-get install nfs-kernel-server +``` + +Next export your home directory: + +``` +echo -e "$PWD\t127.0.0.1(rw,no_subtree_check,all_squash,anonuid=$(id -u),anongid=$(id -g))" | sudo tee -a /etc/exports +``` + +And export the filesystem. + +``` +sudo exportfs -va +``` + +At this point you have exported your home directory, only to your local machine. All files written to that exported filesystem will be owned by you (`id -u`) and your primary group (`id -g`). + +Finally we can modify the docker-compose.dev.yaml file to allow for writing files to your PEcAn folder as you: + +``` +volumes: + pecan_home: + driver_opts: + type: "nfs" + device: ":${PWD}" + o: "addr=127.0.0.1" +``` diff --git a/docker-compose.dev.yml b/docker-compose.dev.yml index c87f3d21fba..b11ee4ed02c 100644 --- a/docker-compose.dev.yml +++ b/docker-compose.dev.yml @@ -2,32 +2,34 @@ # # docker-compose -f docker-compose.yml -f docker-compose.dev.yml -version: '3' +version: '3.2' services: - web: - volumes: - - '${PWD}/web:/var/www/html/pecan' - - '${PWD}/docker/web/config.docker.php:/var/www/html/pecan/config.php' + + # web application. 
This expects the config.php to be copied from docker/web + # cp docker/web/config.docker.php web/config.php + #web: + # volumes: + # - 'pecan_web:/var/www/html/pecan' # executor can compile the code executor: volumes: - - '${PWD}:/pecan/' - - '${PWD}/volumes/lib:/usr/local/lib/R/site-library/' + - 'pecan_home:/pecan/' + - 'pecan_lib:/usr/local/lib/R/site-library/' # use same for R development in rstudio rstudio: volumes: - - '${PWD}:/pecan/' - - '${PWD}:/home/carya/pecan/' - - '${PWD}/volumes/lib:/usr/local/lib/R/site-library/' + - 'pecan_home:/pecan/' + - 'pecan_home:/home/carya/pecan/' + - 'pecan_lib:/usr/local/lib/R/site-library/' # use following as template for other models # this can be used if you are changng the code for a model in PEcAN sipnet: volumes: - - '${PWD}/volumes/lib:/usr/local/lib/R/site-library/' + - 'pecan_lib:/usr/local/lib/R/site-library/' # this will open postgres to the hostcompute #postgres: @@ -53,6 +55,21 @@ services: # - portainer:/data volumes: + pecan_home: + driver_opts: + type: none + device: '${PWD}' + o: bind + pecan_lib: + driver_opts: + type: none + device: '${PWD}/volumes/lib' + o: bind + pecan_web: + driver_opts: + type: none + device: '${PWD}/web/' + o: bind traefik: driver_opts: type: none diff --git a/docker-compose.yml b/docker-compose.yml index b2176512d02..e4b4552d396 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,4 +1,4 @@ -version: "3" +version: "3.2" services: From e627e3fc32bb0a42d4b93fcddcc90ce797d4452d Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 18 May 2020 15:13:28 -0500 Subject: [PATCH 0977/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index a0e336dfa2c..3c88f087adb 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -8,7 +8,7 @@ The recommended workflow is gitflow which is described in the PEcAn documentatio ## Developing in Docker -If running on a linux system it is recommended to add your user to the docker group. This will prevent you from having to use `sudo` to star the docker containers, and makes sure that any file that is written to a mounted volume is owned by you. This can be done using `sudo adduser ${USER} docker`. +If running on a linux system it is recommended to add your user to the docker group. This will prevent you from having to use `sudo` to start the docker containers, and makes sure that any file that is written to a mounted volume is owned by you. This can be done using `sudo adduser ${USER} docker`. To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. From 1910db876a7f0689735757cb635d41f0d4812291 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 18 May 2020 17:40:51 -0500 Subject: [PATCH 0978/2289] fix cp command --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 3c88f087adb..b6467232dcf 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -57,7 +57,7 @@ The `docker-compose.dev.yaml` file has a section that will eanble editing the we Next copy the R packages from a container to your local machine as the `volumes/lib` folder. 
From 1910db876a7f0689735757cb635d41f0d4812291 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 18 May 2020 17:40:51 -0500
Subject: [PATCH 0978/2289] fix cp command

---
 DEV-INTRO.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/DEV-INTRO.md b/DEV-INTRO.md
index 3c88f087adb..b6467232dcf 100644
--- a/DEV-INTRO.md
+++ b/DEV-INTRO.md
@@ -57,7 +57,7 @@ The `docker-compose.dev.yaml` file has a section that will enable editing the web
 
 Next copy the R packages from a container to your local machine as the `volumes/lib` folder. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well.
 
-You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -r /usr/local/lib/R/site-library/* /rlib/`. This will copy all compiled packages to your local machine.
+You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/`. This will copy all compiled packages to your local machine.
 
 This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). You can also always delete all files in the `volumes/lib` folder, and recompile PEcAn from scratch.
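The switch from `cp -r /usr/local/lib/R/site-library/* /rlib/` to `cp -a /usr/local/lib/R/site-library/. /rlib/` in the patch above matters: `-a` preserves permissions and timestamps, and copying `dir/.` instead of `dir/*` also includes hidden files and works even when the directory is empty. A sketch of the full sequence on a fresh checkout:

```
mkdir -p volumes/lib
docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop \
  cp -a /usr/local/lib/R/site-library/. /rlib/
ls volumes/lib | head   # should list the pre-compiled R packages
```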
From f0771aaf6ca9cfc928ce65e7b25033d25a26d7b1 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 19 May 2020 07:58:54 -0500
Subject: [PATCH 0979/2289] Update DEV-INTRO.md

Co-authored-by: Chris Black
---
 DEV-INTRO.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/DEV-INTRO.md b/DEV-INTRO.md
index b6467232dcf..4d8ec6dcc71 100644
--- a/DEV-INTRO.md
+++ b/DEV-INTRO.md
@@ -4,7 +4,7 @@ This is a minimal guide to getting started with PEcAn development under Docker.
 
 ## Git Repository and Workflow
 
-The recommended workflow is gitflow which is described in the PEcAn documentation. For the repository we recommend using a [fork of the PEcAn repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). In the `/scripts` folder there is a script called [syncgit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository.
+We recommend following the [gitflow](https://nvie.com/posts/a-successful-git-branching-model/) workflow and working in your own [fork of the PEcAn repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). See the [PEcAn developer guide](book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd) for further details. In the `/scripts` folder there is a script called [syncgit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository.
 
 ## Developing in Docker

From 2bb5b1a2c16584dacc03d10f3035752a6097c1aa Mon Sep 17 00:00:00 2001
From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com>
Date: Sun, 24 May 2020 10:19:14 +0530
Subject: [PATCH 0980/2289] Update styler-actions.yml

---
 .github/workflows/styler-actions.yml | 37 ++++++++++++----------------
 1 file changed, 16 insertions(+), 21 deletions(-)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index e5da6675005..b199f960abd 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -19,43 +19,38 @@ jobs:
       - uses: r-lib/actions/setup-r@master
       - name: Install dependencies
        run: |
-          Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")'
+          Rscript -e 'install.packages("styler")'
+          Rscript -e 'install.packages("devtools")'
           Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")'
       - name: string operations
         run: |
           echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt
-          cat names.txt | tr -d '[]' > changed_files.txt
-          text=$(cat changed_files.txt)
+          cat names.txt | tr -d '[]' > new.txt
+          text=$(cat new.txt)
           IFS=',' read -ra ids <<< "$text"
-          for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done
+          for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> new2.txt; fi; done
       - name: Upload artifacts
         uses: actions/upload-artifact@v1
         with:
           name: artifacts
-          path: files_to_style.txt
+          path: new2.txt
       - name: Style
-        run: for i in $(cat files_to_style.txt); do Rscript -e "styler::style_file("$i")"; done
+        run: for i in $(cat new2.txt); do Rscript -e "styler::style_file("$i")"; done
       - name: commit
         run: |
           git add \*.R
           git add \*.Rmd
-          git commit -m 'Style'
+          if [[ $(git diff --name-only --cached) != "" ]]; then git commit -m 'automated style update' ; fi
       - uses: r-lib/actions/pr-push@master
         with:
           repo-token: ${{ secrets.GITHUB_TOKEN }}
-
-
+
+
   check:
     needs: [style]
     runs-on: ubuntu-latest
    container: pecan/depends:develop
     steps:
-      - name: check git version
-        id: gitversion
-        run: |
-          v=$(git --version | grep -oE '[0-9\.]+')
-          v='cat(numeric_version("'${v}'") < "2.18")'
-          echo "##[set-output name=isold;]$(Rscript -e "${v}")"
       - uses: actions/checkout@v2
       - uses: r-lib/actions/pr-fetch@master
         with:
@@ -68,17 +63,17 @@
       - name : make
         shell: bash
         run: |
-          cut -d / -f 1-2 artifacts/files_to_style.txt | tr -d '"' > changed_dirs.txt
-          cat changed_dirs.txt
-          sort changed_dirs.txt | uniq > needs_documenting.txt
-          cat needs_documenting.txt
-          for i in $(cat needs_documenting.txt); do make .doc/${i}; done
+          cut -d / -f 1-2 artifacts/new2.txt | tr -d '"' > new.txt
+          cat new.txt
+          sort new.txt | uniq > uniq.txt
+          cat uniq.txt
+          for i in $(cat uniq.txt); do make .doc/${i}; done
       - name: commit
         run: |
           git config --global user.email "pecan_bot@example.com"
           git config --global user.name "PEcAn stylebot"
           git add \*.Rd
-          git commit -m 'make'
+          if [[ $(git diff --name-only --cached) != "" ]]; then git commit -m 'automated documentation update' ; fi
       - uses: r-lib/actions/pr-push@master
         with:
           repo-token: ${{ secrets.GITHUB_TOKEN }}
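The styling and documentation steps that this workflow runs on a pull request can also be reproduced locally before pushing. A sketch, assuming the styler package and the repository Makefile are available (the file path is just an example):

```
# style a single file the same way the action does
Rscript -e 'styler::style_file("base/utils/R/sensitivity.R")'

# regenerate the Rd documentation for the package that changed,
# mirroring the `make .doc/${i}` step in the check job
make .doc/base/utils
```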
From 656d8e6f3b2878c2ff79d8e563cad33730fa471a Mon Sep 17 00:00:00 2001
From: istfer
Date: Sun, 24 May 2020 09:22:47 -0400
Subject: [PATCH 0981/2289] fix typo

---
 base/utils/R/sensitivity.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
index 72f441fb3e8..ffeae53e734 100644
--- a/base/utils/R/sensitivity.R
+++ b/base/utils/R/sensitivity.R
@@ -261,7 +261,7 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model,
                                          settings$run$site$id, "', '",
                                          settings$run$start.date, "', '",
                                          settings$run$end.date, "', '",
-                                         settings$run$outdir, "', '",
+                                         settings$run$outdir, "', ",
                                          ensemble.id, ", '",
                                          paramlist, "') RETURNING id"), con = con)
       run.id <- insert_result[["id"]]

From ca8cb7ba7f35f58601dc0f52879f73dc91f0d39c Mon Sep 17 00:00:00 2001
From: istfer
Date: Sun, 24 May 2020 10:37:16 -0400
Subject: [PATCH 0982/2289] remove extra

---
 base/utils/R/sensitivity.R | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
index ffeae53e734..da87d48a069 100644
--- a/base/utils/R/sensitivity.R
+++ b/base/utils/R/sensitivity.R
@@ -270,8 +270,7 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model,
     for (pft in defaults) {
       PEcAn.DB::db.query(paste0("INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) values (",
                                 pft$posteriorid, ", ",
-                                ensemble.id, ", '",
-                                "');"), con = con)
+                                ensemble.id, ");"), con = con)
     }
 
     # associate inputs with runs
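Both commits above repair SQL strings assembled with `paste0()`, where a stray quote produced malformed `INSERT` statements. A way to spot-check the affected table after a run, sketched under the assumption that you use the standard `postgres` compose service and the default `bety` database user from the PEcAn stack (adjust the credentials to your setup; the table and column names come from the diff above):

```
docker-compose exec postgres \
  psql -U bety -d bety -c "SELECT posterior_id, ensemble_id FROM posteriors_ensembles LIMIT 5;"
```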
From d9da9d1a7fac3e524cdc539735bdbf1db3744a4a Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Sun, 24 May 2020 11:25:24 -0500
Subject: [PATCH 0983/2289] update sipnet docker test

fixes #2615
---
 tests/docker.sipnet.xml | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/tests/docker.sipnet.xml b/tests/docker.sipnet.xml
index 0c9f298654d..3da74156bc5 100644
--- a/tests/docker.sipnet.xml
+++ b/tests/docker.sipnet.xml
@@ -1,6 +1,6 @@
 <pecan>
-  <outdir>pecan</outdir>
+  <outdir>/data/tests/sipnet</outdir>
@@ -40,7 +40,7 @@
   <model>
     <type>SIPNET</type>
-    <binary>r136</binary>
+    <revision>r136</revision>
   </model>
@@ -48,7 +48,9 @@
       <id>772</id>
       <met>
-        /home/carya/sites/niwot/niwot.clim
+        <id>
+          5000000005
+        </id>
       </met>
       <start.date>2002-01-01 00:00:00</start.date>
       <end.date>2005-12-31 00:00:00</end.date>
@@ -59,7 +61,7 @@
     <name>localhost</name>
     <rabbitmq>
       <uri>amqp://guest:guest@rabbitmq/%2F</uri>
-      <queue>SIPNET_136</queue>
+      <queue>SIPNET_r136</queue>
     </rabbitmq>

From 14eafc2153d4a2988c798d4d6c069053e9e18377 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Sun, 24 May 2020 11:26:02 -0500
Subject: [PATCH 0984/2289] fix error message

---
 base/settings/R/check.all.settings.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/base/settings/R/check.all.settings.R b/base/settings/R/check.all.settings.R
index c8e12950de0..44c5dd7465c 100644
--- a/base/settings/R/check.all.settings.R
+++ b/base/settings/R/check.all.settings.R
@@ -835,7 +835,7 @@ check.model.settings <- function(settings, dbcon = NULL) {
       con = dbcon)
     if (nrow(model) > 1) {
       PEcAn.logger::logger.warn(
-        "multiple records for", settings$model$name,
+        "multiple records for", settings$model$type,
         "returned; using the latest")
       row <- which.max(model$updated_at)
       if (length(row) == 0) row <- nrow(model)
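The updated `docker.sipnet.xml` above submits model runs through RabbitMQ on the `SIPNET_r136` queue, so the queue name has to match what the SIPNET model container listens on. To check which queues exist in a running stack, a sketch assuming the stack's default `rabbitmq` service:

```
docker-compose exec rabbitmq rabbitmqctl list_queues name messages
```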
"Initialize" the data for the PEcAn database. ```bash - docker-compose run --rm bety initialize + docker run --rm --network pecan_pecan pecan/db ``` This should produce a lot of output describing the database operations happening under the hood. - Some of these will look like errors (including starting with `ERROR`), but _this is normal_. - This command succeeded if the output ends with the following: + This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB: ``` - Added carya41 with access_level=4 and page_access_level=1 with id=323 - Added carya42 with access_level=4 and page_access_level=2 with id=325 - Added carya43 with access_level=4 and page_access_level=3 with id=327 - Added carya44 with access_level=4 and page_access_level=4 with id=329 - Added guestuser with access_level=4 and page_access_level=4 with id=331 + docker-compose run bety user 'login' 'password' 'full name' 'email' 1 1 ``` - - c. Download and configure the core PEcAn database files. + c. Add a user to BetyDB using the example syntax provided as the last line of the output of the previous step: + ```bash + # guest user + docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4 + + # example user + docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1 + ``` + d. Download and configure the core PEcAn database files. ```bash docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop @@ -128,7 +130,13 @@ This will not go into much detail about about how to use Docker -- for more deta Done! ###################################################################### ``` - + e. Download the [`pecan/docker/env.example`](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml) & save it as `.env` file. + Now, open the `.env` file & uncomment the lines mentioned below: + ```{r, echo=FALSE, fig.align='center'} + knitr::include_graphics(rep("figures/env-file.PNG")) + ``` + + 5. **Start the PEcAn stack**. Assuming all of the steps above completed successfully, start the full stack by running the following shell command: ```bash diff --git a/book_source/figures/env-file.PNG b/book_source/figures/env-file.PNG new file mode 100644 index 0000000000000000000000000000000000000000..c1b9d1048606277e1b49fbee6ab36041f41a8d67 GIT binary patch literal 175488 zcmb@MRZ|>Hw624H1P=sv4Q>hU1lNQBfx!|6cXuZc+}+)62GRz&Rg=gZ{EDYQBwT;_01cki#KnOz5YZ0x6&<)sr@e? 
zxqMZSc~d(8A`Yip; z)9CowqMlJVbw47er7J1?3b02u982VGBy9oDuJJfl_ER zFO}h(7R>dg$05DK7_&P{!k1>3KDU&D|M@xR;?c1A*^X}SVsc{NL-DG^@=n8!KjFk< zZh`2vaU;7`D;+=NeZc82WwffNp+z*2ytCo|P{4^2{FJXt_&7@upEU!fjV=jrraZ#$ zlY5FTE*+>Ex%FaZqBjw`(z=8FdH!%8=jStuen26{OoN`+lSYuJ$Jai#$L?XK>H7<( zZJigkqYO_q$o$+BVScl@klu++?Ni0&vp~YN7vJpN#lw@u&dk*NaY?;%v)}t80xRj4 z4+1wDcETdQ-K_mt_fvN*Lti5U3eR2Ip14ccV5;b86D=1w#bRva;p{wBRWa2COB{Kh zs&+fNM>vjL(jD`Cbbx(DWMzS;1``SN=0~zwlHoK=FWkTp!r$8|;{i;rxR&;B?Y>w5 z)jpZ`F*M3C;G7gRHT@E(TC;Whu+8jUD^XtpX0?g7b)5wxH}>$3lWj1I;xxI23Y0N{ zkSyCbWG<^P?J!+MGwIoh#3(qw3R39lT3D{0B+NaMQOKjC9r|qTFPp#t!Px4=M*HJ2 z)dYKC@}oaBJiqn$LE3C|ETsvA8D<>%dnP93%JuXxvXk;#OPe_c>0A|d1FSNJoXki^ zVbR;!^F!A018>@Of;X2$z+PZhS){PRk*5GHf9)P};pE~@Ow0d3sSen7M!G(t#=J?=1FUi$euPSsQw*sIrK+&;nyv^wotsg0*F`51`9I_vNQZ0VqJ`^ zd}sVx4-D5p%uha?iZyZ%eV*?YsPB1pvSbhh&%3aj7F~RHQl{9>jU16Ci$tXU@khuE z&wotQ==8@x$7B?c)CP&;VKlqcO0S39MbBwkEM!9cFTRpept#A)Ie5DN@@#56g+1Y~Jq2 z$`Hs7lr6U9W7hL{l-yCz-b^^AT1TYP>M;Jetu`==`h%YC1}w&ZDK%F6SF^9#@Jm)W zgQ)L37zw*#iBs+N2RsvP`sGB`Ugf0US7-W50>zbxswr8TXHT4qqrkcWd0WKZ+J0Pl*iHs8ennMhVrTYzgY()2FTB6#GAQqot$lHNW) zaZhLqk(7|l|HC$jU4e@*^1-wnp`;gChp}E+=?`nr5RlunoPEZOtAO9K{QWD7MwuChMhGdG7EMOz$lrpczo}SueK$}Ym9VX|_L|ejl-~&<96d1@;Tvxr;I_x`& z3HlVnt>2KJwpv7`VYuH^JM24ci3oh$%UHl)0+G|K@w-#82qY1bobbw>^Tw%5JnbhD zyia+CTxunAm(dDG)fJA$j_RWI*HM1$9scg84hhAM_a69;tMo|ee5V81xLUfYrjn$A z87}FzadEQ$P8>TN`b_iZzg)@PWoa-piNR zQQ>UCwT15_J7dKGyUnKUi7`Yqpl)DMo74sNF@T_*;knzeEs&G093SUGTrbW|4$4Gx z;e~rh_)#Wy-+FA`o(pEX<9it9z~I`N#JD$A0$-P2w+D#5mgJZxvWil2#Hf zs(d=n>GEIKVYNQzxj*TjQfpVZb(6H~YKOmfi`K)?gsqqbPzpKv9!g2c8HJswmZef3 zhcSd7kwaWu7OM+#$2>lL;lu4ph)CS;|8lrHyhhT*^8wp>omRdN$GYtc6+j|QX}0hS z5WmEC20NEXDXG&27vgUZ@$ZR~3i(evb84Pk-{w6oatW7-<<6qnZ{#JcsnEG9NRRb{Ycjc091(nExvl_5LTFv zDM(NiB~L)P`BY#&6Q*H|z!-HK@ELkD>v$@beIj#MM-LZu=dfsr_i*vCR1t}1vb&xM zV=jfO!->r#OdCCaSY~0q$#e?sBhs09utt1aCS5`Z1F2BaVu19{=F@024`VY-c8J-? 
zYMXnX=u4}x(SzTRGD7P8-pOP09dxx}WxiZl(W`hf4C&vsvv5#1SIPo!E+^brFb~_5 zrTwiyY7K|p>T0zgk!U^J{nrj~X8z=ZUxRNkNV}~v;_S}rPjB-!1O3KH!(CClVD z(KiMN?X-d6UJZ^HR;bY4ri+~530#>Noxj+e&`xj);_I0`E;e`4fb5zT|B%VVH5RLig5QBC|v8VRbj7o3pWH~+zoaS(nS3+A8I6)x?4yO`QGR(DIVj$Yri z5ACYuvfNHk_|hzf0CO}Owm)${8ccdBKQT*Wz` zNr+5#ggp+#AQ6`Wkm|7J5^!Oa7d@4&#VO8D_L90JFAZMtC+(vrV+jSINL*0HVE&=M zhdEINAF1GQYeEBIDPehXW}SX?Z>e@c4$DX~9e3fyIQBdP9xCy-U$;=cV5I0%R{FiL zwxH+n+(hwCu6ofH@uQd<sBPIt$lsiq>*dk)=ir-m!F{vs&0D z9r@tUD+$oCOeoLm$1)8U%zLS0;JbKh6WpH0){tQ7(92xV-kUUZDA&6Y7?p0@(Sp|O zR0}((>Yn4|fE7i__&Ac`dSN+##||g0M?$~nUGfcevf_SlpEl$&Pn>*fi|LkIdaB<| z@`FQCBpxiiwEnC)mAi<)1VMv-juoFKgQz-X9+5v!;m>bzP_G*`Vi2drL^FHw6Ms;s zX|TRGB5Q9V1jKvbpdY`(^_$QBlPf3LOh9S>XUJVZD#VF$*3FLfCvpsxaSmxci~D=K zL2|@L8WH#hz?} zZ-YKP_T}=Qs<-yClkqY3Oxpa^7WVmCLnbFEOg_XgTLVMTpyNb|#3a}4$i}vEDt^<6qb5+t+BG#+Cl;r_<{Jp+E~j* zHg08)rQ4K$GvU8#Q$jqblQe0eEM?fqj+!hqb4vFd#f!f6MOmJoCR>`pZnX zV>PtWYi$HdE=hk4ec`Yac-(%e_7YmOxFuRO@Rzz(xNO^m8SbpdyoiTT+YgvGZge?3<)**#vq!X7A5PlD2rM<^}Z@o^ZCn13EKh?kLRDR zCxsf1=bxn0%?yX${h7>A=^?GN8!8NWb57L063yU;`|K0Xs|S;lDc-;l2eeo8Khd5W z#06Ml+p=6e6SqH|GsppOUqz>S#!-F1>3O=AEPYS@2JL}3D^j#BR9$TzhpW-a0nA4KG{E{Q1zik4+XZ`Idgx2|&+2Y{jzPRT}h ziwi$iYK=#h?9g4~iHyHIu8X~2+%)xiJeHbQ|Dti{4-8}ug3qiaD?4D?-@oZhvEdJH#U<$tk@h+Bn z*+KApAL=kvI;TR(IF2xY6k}1*-1S$qcThIA{7X;_Q-_~~O}G^}jRQ!6zT+26W{BjN zW-n~4cf=f=N|C+KRTf1iw}MK?el*-~;kd>3-TBrgzX1#br%8^$j`kC{(eE*`WIIOs z^Y!jIoBYq5@5`2aDS&4x@i2jeiGY<`Sf?}u1o zqLZ^+T3Ori#n)^1HLiH)W3y@Mz`7(-5*r*_UTm69sb*Qn+XYGPrBX9g4H`Zj3()*; z7x9QBtM6YM2g&^&(?Jx%l(3&*mFxf<*`g8J_(H6?!*qMACE3~Pj1kJJdf@D ziJf_QcWp}sDGM31iH2YtOsbj5H98f9JK6MY;6A!v^C2N4jjzA70`vTXDV@}_gwD*C zIy8S9`zgtw@6nN)I)whCgc2hMPvY%W zhYPss1$b0h^t|v`Sn!MlfJvUsIBM}fhz0v1aC=|%)|>HZb3~LhV^&tgV$2wFrc;vb zfp>TRPX18S=~5WCX;5b{xfF@R1V|ZPVt;gYM}y7c+ty%oL%&O*t!5G^woT2m=E_1l($D-u|;Dp5Ejit%YS*Ewd(? zG09DrS2fnSnuCg|D{g`JjtoB9he^EgYM$pnjr4gkLF|g%Mj>7fN95rg0+sN_fo+8P zw}#lJ^S0RAGrz&Rc&SsA8Kl+Y~;HMiaYaQ?-U!=?llBtD(kFlPuiW9e0B za?k|R@SAw#tZo#NMy zCVebb*;x}cxP=VK zK3^<`4;}r3K{JW_ zf{P>}!>V~8e2}h;I)O7!rcD9}T5msH_GfWM81LF0iWUGOHG*-_e)ZW^4&RJ6N;?H^SOO}X%QB6 zREXwf&a*ffdqfo+h2 zn(&;!JC>n9+<=}5q$(4xeWMcA`Zd?yV6!CrSU+tJLRxjy=z;~qjaO1M?-TSBjVkDt zr*S(fZf5JdkDs%n5BzS_Xy5=8NB!tQW^+@K7zoN2i4O97LT&TwO*blhb*|JFnvQYd zbtmx^jolQI4P2Ug|6M)p#r{j&q!n@}yU@jN&<{RvT-@HV3t!(%8R+(c4x6C8(kH&9 zf(3))!IA^b(2Z1{*2X-gx<<5q)B-$&|B^82e7*b7h30b^gp6cvsf{Wi3!R!{U(Idv zN^_$904?!I6!PfNGxJWA@%~D9Boh}b?Yj9+-k~+pmQq0lWNA8dwaUMX<{mw8=LfDK&pYQes7M)WP7X|h37!8CQq>VWA zZW{{gL9B_WrV9Iarn_i;8m!vhA+4*=OuP?{MBH8${&|PH^}+OF7>6PM6GDDjXhRhB zU*nSA$JL|P4BF1b{7llLB?Tk(9f@r%Qw@ynfN0*0)Wy3QF?e>ZIz$VCvT0HOtW(3n zPnU1`1jxb2V&z>%*V><>;A=|2fQW|Pfc9pa$|2vh4N5_xlb{s2QRiC3DKWp!w4-m5 zp1{Tv8ThKvcDr0DcNP0i~q3}xaS1DZ^-AeoX0?rtNUFlMGo-lM;kyd=x*f=f z9`6;b*4J@vq2$t>52>zYBhX3=#f}`>I$@9qQie!$5yQL++e*%xoX{|-k#<@4mQ%pF za|!imLyvuax#Md*=9Bz;lZ4r|^g~vEb!zlO)r@o+IjbV&sm)2q(s*O8T-)P+zU`D? 
zw&Wy)3j`-!bji)ngeSr3`$(pei(jJy9 z!};VLD)Y+j0z%|^R38CwjrY&ieL-_3jr`w@I4QFR*5`&A(c3w%mB>6?OohpHycDud zwfh+OvP>FjDfw6*3t%e*u&0S3VnPCsPkfk;p87cck<~pvluV9><|P@h+J8-*3=G5B z1|@bLnn8DWSGOYS0N?gs+(1XUXbU`u*?(;Zpq@#_3pW~U6vmL6YNhS@(np)~jPOYT z;E_`&9&>nt1C}p0@F#a)#ZhjieauRGCRjsaa@nTP>rYDuT-{wn_U-IKAg^r@kY!%VWqa35Vp7|bwQ!r=A=B92E{H&N@@55c0ogaq&Au9g;{*nT|;^;zgz=+h(9iE z8e=Vb!?R8UCS`N&atiqG^*=KhvK1J#Co-bXmS%(Jg@lZyv-JBb{>bWvERij!FHQMY z$WYfb;yq?>zh0N`s49xzt7FSZ)7V5Y$|K%Ch50fKzpk3 z0vFOmKVmrBpc4x7`^)a4|=)HtwH7TcH*fbuy__rVLZfY97{^89+wSC_#-J4D*Z!+e zWv^YE-jFoEZo%qE*J0;)Jz&T{n$^hnozGIai{jJ7?(wQd{qWh&iCwU?Vai-<@xAqZ z)0}pXL9>u^oGu5P&*vnm+VBTHa0R$iIz)h0ZYon_OaRlP%sS0VBb{ZA-V4(w}-mtxEL#nA^vPctFhInu;NhR`JQ=VaBwwMCmi-+Y+RpkF>@liE4r_>a4d#*QQmle>QqL;J64#~ zBzPtwrD`aCh}mHy{Ubq4gisW=H|O*GzpG@zz;*c3U|Gh;-eRh`F4`6eG|%v(tTw)h zx5)L(?&vRc{+-pR)3IcFQ(RbioJ{EU95^>Elx9u=bAfUD2TbW`_?`(4y}(#-P`ccWQpJiIdw@?y9P>RZqm&(BMUG=g#sw zhYz=$DvzHs6c@R1{uJE-O9~!FfNHHp@H>3W#D!^PYt<1 zL+*aRsAM*hh7dd2@feJE=vwei<8)YiPugc5?iAc!2*S*&kc){~Otq2C1?+20=99!9 zk?rMCYpYV`=Q>Xn>id0kX!`!O#rC7F|Gj6%DTtZRf$yBc^!ldYkIsYz?kC_e-s9CR zUHgjSyBSd&hrGX)YF)=ggErL1vN2FDPm34XC zPI8kzC}IqhhLp;Xf~(3dMI;@^^^qqZ<+E@5M<%q1#GvL+1e;$~8?st|Edjy0v8II;5cx9cTtG;TpbTwy0O^)0~Dddax;1e2F*+7y_y z-W$%wK-Oz@;GC+nX2Tmt)TsRY>(o-Ug@Z2JIM z7+1?-#*blrU7AWV?wSu>AuiKJIZ4Aq@FJS}{Wa_|(D;@m^jw(ie1e^MtS`3(6>s&r9heIJD%05dmbta|=q8Sw+ zf~o9k<$_Jk#LuZ|J1}g;i?s*SkN4n4lyiL13kI1w)pob1)wa5C$(xqPo>say{w3~l zW7Udly7_7xx6c|yY$~FrfeydO5ZDpo*=kein>IEyKW4&D$i5Ysi%E=|XS3XG zhAf*g&Q8&Typhvix4D&X53{sUOT9yDO^P0!Z#%eok|{l&s^Yc?j2dZGu$me()BDl7 zo4iRfzYV?n{-JL+^X)-6NL$#lMkXkV#?lqm2OO>K0Bo$taO9GIyW|RJ$uB3$>K+gy+<9zz+j!|5`ca0S!vL zDufGpEw>o&XBlGJ`3nE&ZWjPB+v$_c3|K+``6~zwvOZu#TSeN3q+FN^s;cVf#+p7Q zAN|%q8>MvCC8~ORj$%yr(M%SwxgBuTg7)}4s_h<3E0FcdUWX2T>7KdxKP2s6H@TDChs<1o*tAr42rRm4y>h%l+kD$@AK;@?#a$QPGqjssW4aF> zUnMses3HTfhFkJ453tL53QkBX0kBxhib?zYKLmh5St4Ik6DWm58J~@@$6)@t$ee2}HW6z(lzx|xL^3#aTEFP)`ESHUtoxdoq zjen{+yKuiLI2rN&hd?Np^(4hI?z43I8Hy%0CdI4W&kbI@d*x(YXUkK14gCyJDypSw zs;Dc*c`kan{M6>xlml;=pgMDS@|YfMwKV84q>5#CL?~-69OU^!)|J+OwN^MS)B0bx zh*G{HewapIjXSsIUu(@IQC0p8^TYsey zN?{O#s@R=`L?+1mME7gzqIg2mj4@Kk(D=;lIEy|3oLQONg%ZHRXdo`J?(x7Pw&s98 z+CK7C#l2PnUKv+Q|KGNY$vem(=Gd%-Zn!Bw<%Jduu-D?fxTNe5bfVtm>?_fFKNOtQf)=_S1)|e_= zP5f-JE1@snhZ#Sj@!r%u=xWqFJ^pIdY!|&pM7s1G#n^R?oH35Jd0*e|fa!34005#d zcDOc@mbOgqA&C(6K!<7tH2&1RYeQq1Te$KVBt86CQ$y6B(Bu1N zMh*?s;HR6!LxqjG(szl3A;#2#*&PaD-=i2dD8-3OT7uhDjB4i>!fS6wD&QB?;x8!* z3NJ*8==$G_RU*8%)9>^D^G}HKSFz|{7c@KP#vTSZe4WW!zV83KSiV1R4ma{HL{Sny zP@pp1I<1K7zXDFDx3cR_r@j?`kdS^R#nbtMdr`sr*uJfUyW_g9XQ!mWax3|du{5Uf zmq>dp0TeS0##~j6kI?mp0oyZ}`a%ivLTh2 zWi3@K6i(;jfALOsF)Wzve|r^RuEowb5-pz_@NQJB_TSF(bd zY)0yIajXIqw-e^JIlLR?9JohYe}(boug@xpWC|H_=1zk(`41so_<34K#I6Mt*2gFP^e0CX!z5z^FqWBTstaqh-Hb zFV&HB0_lt1t9!El5VVV;f*bgakH;wc%qJT|^LMg6JrxCCpLMclOV#9- zJDYcm4q$IzN$gYC7r$Z7)f5ZjPH_gb(I8Iver4Pj7^JD&9LX+&nd)Cea{-)ns@1>p z%^zzE?S_0#Q%YjP)9w9=U>X7Bi#pir2kf~%n}hdPskNJYr%WhKgbxXv`nrt)7lHHk zrl>xnarK`#|2r{$607H|qCXPfUmgH_4CVj+j{&2~LRsdYJieOA1+YqHl=2W7-ZcGA zv$A9F`P+;t-r}8s&;nM1?)QP^xzu`qo^BvmxqB7LUp37;lFG#mTX%>%{*!T86cW6x z@az`g7vVE(q3rjfc&K|>0b)$bMviTZV!g=w+hy7AEwpDw5qXQX6XF2wrCzT&o|_>~ z(Q7olkd%K`$WylK&zZ)2cn8ILb)$?5XnUUC2PlL z1!H1G2|0+sgjTAk4kH!BS!pp|LRgfI+lma$9uYOVe`;5CK;oRaKC){(&R-4v4Ca(> z+)SK}4V`36(7#RBRExGf8MX5?Xa(}=inUP+ktfH|Yz?aNb+eqM*!3$Os@!5;U*(>x zU|oB4vig_jWO%xM+!hHwtbl8j=4=6e882tP?rwdO^T}TWSIRK~G|y}DIP~M#T5X$u z=701=QZELa1sK6$DrdX4G_jk9RC6p8j72o3w-*EZk>f*Okfemvd5@ml<_%@0ZRP z8VJk$VxpFzMRT@f-XC38TfqMs@?LXBcxR2gVYzMN+W27t8dFg*ZRoz={0@N$s>#BvI`yz^uEFtdyf 
z*zRU-rxn+H99eP$#K@lRimalhVacq=ZuH+!dmfnZe*5#T&BLdqqUXGV-Gjc;V*2W1 z>777pj3b*XFmgQKpuM6>BJmq;Wk zftRHB$Yq=A{D@{3n1`dCJqAK=o0tp9$W$yw5&E@t5A%x8|InP+#JI(MIO^|i&+0g{ z(n0dMvmH)d4D;*=TLEx`6s`_Ll z!G|*8uihWOV|25g5Ut}pead<=ej)8^Z{(+!{cjal)SFj1m7wDnio4z$lwI_^i1~Tq zB|hejN@B2|5Eknjmp?Mtp#m6QzNVPl$qga9IJ^;~6j2_>R z?W%)j27o-O!R+FMg7g^#({vKxC6q^4Gt}tyA6hAtU&DI?5jL!x7laYSV97S>#Pqr$Sb7KGjSL?@giTsXG>(tTR#{`(EmZzFgk0A?M!Uh~q=gi^7W5zIJxpD%^}mV} ziv6HVt;Uj9qTqO<+HN&GLs`Jmf{LFImBwi20y}dTt4YkDy4TgFp8|>~CLQqXiUjg3 zXPB{rR)zAKe6>s@7Y$`JWB{-q?lBuGSw=&=j9UaN>EC`ezj<)@E^Q(wRie9aXdP_i zb|{b`<>p4S>#G(N_xwPCd zD!+2^oQdKC!EHkQ;b4Y2aD`c3Yjg{#`FRZ_ARfwTRx|#-3z&HNW=9#q6VPmga_M#Fcdg+LGV! z5$*leXi0sn8aJ|M$a@c8&;3xQ!tq>40g=?cy!mh_Koh04+#dzoI?EBB=%YhV=HdbA_)K?LEC{>sJ~1DH_1u zgxU9L-`UNo|5PIRd8Y?@34&%JAFD*hxeS=T6#U(zl6!op{WDKjhU7Ul+DydqE%r_o z>O**a+!oAyviaIFmQph|SW4N+kQMI!p0s0#I_$KR$=sufm&o1l%zu5=-o7u-I=I|e zOqQW5*9WbpbZwy^XR9M*`3;jS(Czlynhd4qF8T*FwrD*3x20+L96x^6ET9vBqa$LS zAeq)Im03VvUCl9nHv24qe^q9)FA zH}`UNTR0`jSp>~#f~crCr=R+0XwfIU$tj%pzb|AefEucQ-cX+k_Fw~)N3laCF%-I| zhg~HVaAu}7c=)HanCb#OWkdrwZLl&7sGqh8bG`_xr26Qm?an!nzN0zq3t+g`27o3k zE?>53E^mpE`sIj>E&;&|dVpoU0 zrDu`_jei0dFP4{eJIzQHE+BT^GY73Rv_S?0@mBpJt5ItWX_&>5w7-6sj~2UZgn97V z?A!l@yS?ii&8W8Pi=g@EM}M3#w@y){%5&`=lg5_Wvxa>LowRvRq0lczw6VW}w%(Op z*Z@1OS*SXWc?KGN7|aZR+w99|!~KupzmmZHNrj6ym~Z#9XA1oTDzoUq=UaOa(n#1P z`iBl)K5-TocWUYm8V=a;gHVHwmY`HZQe|JS|wJyC{sRFRV*-S&JnZi76^FV7n2 z;e^W-hI#3POXgZg#(2t&!GUHq*gJEb&*}~>RopO*)WaG;F3z7$fj+&t4VVIO%OhOO zW)LR+K{k z-Q!)iyQ`q8S^Dp1>PMh0)dV=t3xOb*tTazK4rpO%RLjhp?SX}dl=Xzy0c)cE>{UFF z`Rpcj;(SDJCf=)sE=br$8;O-P36H3!v>l--aCd9gTO)(k12Wyto_d08GofHAyfo6J z?5>fgZWU3DA62t!ra5GyVbmOd3f^ZqZFnnAPe#*v@qNCp#I_8$&-qI%BM9m1_h4o7Z_YXqF?T9`V zhcZu@KFcy2hjbIC>n_x*c)P%Ckw3r2Iv_beR}G>pzsjwlsaq5$WpCLJte}twNVhv{^Px^TZo1n!NK!2epi`gJ|rhRfaU)WQMt8w3`x)K!A zdiwiRE`Qwfr3hN?qPE@d5P5b0G9Sk&p$@s`8PqJ!8jR-n#jSFa6#wLCgE4UPx z@ya`#UD>jroAEx^^}#92a=9Ll5-m%lXEhA{TGi(3lz96YwIH=IV&@^rX2(&c!X1to z9NlS{m;r~r+&4}6YIWkSC+F{{)}sdfM8_GuvOjokZ$Y~v)NWW@v)VJ+P0v|PeTXdn z;ftPe8`5=^v(Ljv9#%98tb?2nO#4S2PtB|nPX+tIx=DH*jPHy=?HdB_mh|ARwQ!_j zBPIjmjDEq>4=8(r=oX}9wYnPepWzg#3hwe*Mzq^DWniw3>5_T!3D@h9@zyZLDl$Q- zf>S8Y`h>|w-4sm=lCm4bOqN2Rcl&KNMUc_tz=5FD)LE}IcoKI5d#Y@|u@^ZbOMrH& zDAv2>3h$Q4_N=t9hHg29+7hR);{WVGr& z|9;#-C=3g326ov#AA>7_mNwgy~F&*M^!EY`hfP5Ap{N`(rr z-)QF#8NSS=rc4reinrEU=Ah1#L7oq-f-4%J)N-%xiAdZb;mn3twUm<28%3ihHd$k_ z1Uz}2rEWYzJjo+ip(Y(g}r%wwI3O+>t1fC(V!P%{&yQP>|72 z@I5Fp$y}R4yL)_~Eq$nhAhOXRh%RB{s+7IO9EA6JFVaC6`Ivm~iVo7_jp@8rTP3}; z!LaSFpVh)sk1?Mn_(8w6ilC+Fl5^CFE-gUimH6x3Kn~jDk+|vYfBPK2&cxh{Edotm zvE3tWeep70o>DL0zWTrv&2BSYK$u@yPpZq7vAuVXiCwKZgI<|zwWS^6ieG7x6tfdg z$bUn%?e0TQ=Pbgw!+QZPzybe0setIlr>FBb-a@HkRa{U0-y4BN}<(JnqpuP zcB79t+n_6}?wzmMMzQZwbaAfIkpMlNjj4UB$^oH(6rJ__b2;+cbq`S+3|w7U70upB ziU-JD;S1LZoVwz+ZXUugV#~}eMqt-f=9?_Fg&t-W%S=q*#^Sp4-{;1kfV4`HFV1%N z&|rcEbrXF>CNuX==V}`rpx)8})!&StZ^C*1qi|BC@t4^P-ECRAi;?<#a(mr*M5%a`EFRz9kqsG_gl%|4eQ72Zj9sx;p=H~@y}UaJ_#cEZ1eEGa;p`culUbYrIXfq0! zX&isib%O+Xjnls63L{w-;RWd!-w3XgBd@AY9T|2kS;o0Heu>iAo;>?)i!0xudseED z+Q(DZoB!*_;3;YGdY{Gictf*|qgh_ZtC+vO6u5Qx ztNQscPWbUmlGcS9_heAxz1>Cd5cN`|dOkZ1&Din}?DlJY?|&HiOO=GRdY#r>tlAPb zoCg*wVC1!n1Iir*T2Q9hGGk-0sF4v~+oOulcl>>H;sY-?ZYlz)h7OoTK7=fVa>3`C zGg-V>fjJtE3iTgL71KZf8rYC~hahg3iT}fgxFgZnwM>8k$rg?#dcl+ijcA5N6ejLU>Bq4E@xN>s zUeUIXSkZ%P<4k~FG3`G>ZzVlyE$m(sDlgIH9L#F)gq4Yx#^lpNn@TM4_)>$&=vDmh zayp-vjkRF`;V%E)Ezb15#Nni=(S&^N(vX68o$mOdyGGi5cy6=oeD~Vvlu|n!*&V?_ zHA{#*&E8@kDB1HX2uJ1mQAx*}Mbh3xQ%Mh~`Sc#8E0^l$%j-*5=QCn1`-VXC zD5bm}btLYuGJ0lI*Zpq~b@St(-JqH|yIGfAR11+z)|aRAs;Wx=Hdo%MXV%F*7K(h! 
zAGrJmSRiI0ULvWh_WwuQe?~RgMQz)tQbiC{q)7(>6$uhLD7}avhzLjtMXExmp*I2P z0i;PM6zNR}HFQ*Z?+LxvP)#7AeYx-Zd7tr({pTHfjJ^M7tSe)!Tq|?V^O)znQ`peT zZ2|kFO&ytEU9vOc?kI@L+x`2>!0QAmt+Z6@O6=pFzR#Y_y5~CtwnV|WI~`I~e-9Q0 zSL@x%S`fGUMFU~8~sHCz~n&w z*8v}GdRqnp-lW@HADT~*Rp+36<^JA!`gWg=u~e9`Mr-J~kkDOSN}}1xq$+(ZZa+Sk zWk$ohq^v1!&tLl95m)t9j!~=5jytc$yd19DoRaDNY>^|wIoB+808Ql`@V$FZyU&1n zxyv_sPoK@-|Da!uRGPg?u{mIg5!4x-f+RdLO{!W~CuVNhZt_X%!y9gVQeXhLi1`#x z^VP6S8K1v*v$}A~;t@rK9+yS=bVH;ALOeI=+T&YJ(iNeR$JVH40rM&oYP2ZO zg?*6fOowQc`l$j1zYP48IN0+yy@Ww1M&f4h!kg%gL=J!;Puy3o&`VZtS=Gm(>Mjlm=&S z3-VZ(@xnPTiRRb0@+DC6dZ$ z%O9sbpnu<*-ZPr=C@*BR??acYmMG{fV-p{f&bBGv&7>)vW}o+3TEFq8qo$--8COs7 zE?;4(qUiSz99NZII5hCvj!|Mp2n($+ZFBe$@g{Io`91s|wV zw$o%i(sHP@hMNIRpz5%8U~0vmhzMMBTY|ilZZ_8|C-A8vb zxKm%4rse#OzvM=*>zkWt3n8txAhYWohAk$SaXK~u=rg&WJ!=2|CD|PE8#aIsS3nC z+jcnVIw*13<@6t-))WQ5I5WqY-AXKfdXm+AGZMh_i96Pk`@Yo!;@h_*^iOMQ;&qZo z94u;T^`w8p5L{9>%; zr33XK2I%kQIIG2KXgwHg^SX1n&NSH`4`lavJSwI1C=|ei!6=km>jzY^k z#JynN`BB$p){!Y|ijWTh5SV6a=hr)LB?_XoJ4vw;8LO)l1r8hgG^YlQ38QrMV1EZhnis+l*e45!65~4~lqNvXwW2*0*<* zjMAlxWN*J`0T1aG%qwg>VNHD6SSl@7>O%_}xf=C$%eP~fnO)LZf>dkt;j~qm&-Q`W z&-rk6<)J}9i`Gv{VT`0E+HY=`#(2u*E$JRno`2fdDj%d(X&1#%>WR8LoG|r?E`EMYYuWafnmjWn+U#!a zxagG!^Mg-JN7BE?$m{hr1jxorNXOTcUL{bp-Rs0%s5c4+wO?U?dp0N&TEqM~;r z_7io!#R3K?d#h4y^*$b_H24?Y^h=KF10Sja7w_X;APUtc3U>$Bd^&w^JRLWmrj~sw z*gJF!^;S5^FL@~Dgy~l;Ye0cw{t#tF*3OBjGQ-)UYKC|VDZZRkXzUdPquV2^29kHX zpXyiM8bsku7Jroff>)Dl#N&eSkiU<0-X5>ONu}}fL)grdxmCS1Sc!y}Mh4aI0Odv$ zD>YoT!5~a6zv|>sF;55)A%ucA5#L?m~X3b!BCS&H~#=H<#?xXIq5&PLb9fhQi~lfw^>OXgiL&& z5l8^qD+5VqkH*I;`LL|w7X&H%x|ws9;JPDtR;LlJ|DRh@+{$^F z#v2f5VvG*eR|1_crYz6tu1^pLP6=DDzl4ml`u=OXMA#3WX5$uv)9Uzgbc{W&B)qI}V@{iY*!LmsbprgTPaUoO zM`&K3o!l+{q`)|>&^U~DE+BSS%ZEv}LN*Y;h_RaY0#bCC4-*aPca_0&jRv78qbvLOz}rYUmCv8C6d-H$%mRJYzf&6RF6 z4p@lVPsFR?AejHY#rW~9g1qz16xjIKq-=?)nojtvvn>B?uErnX!QltAaoZ;EnZdJq zH5Y>!0&^c~{0O-901kR5CN-L?MhSWURk)j*gm?Zn3p?Y`hYl}qt{`K}HL?PhHt+PU~A-ar;J=??~5 zinf+IToRKoUL5+NzF23u+`-Qj$=4e%j^<~6Ilk(El71v-)2*jQFb%N>UYmPlYlwGy z@!+}BRz?dQ=zOOval}+3nt{ar#*2miH`5eHo|R>4FpXy$gR>YKvksBSF2Fdo?K=Evl^p6PTwOK8r&ox;>+NPN^9}zc$ zH#ZgRDg?$^^U$350-;I94L5=pR4_+%tNHNgWXeRh1@NnVS2io~m&*jWs64w=DDXor zW!hK2cp-E|3M{jPGIw_7?G))K%UA0_tZ>gWm<>y5A0#YCy5v}ip@{v+<0g(O=7{`S zYm~ItM8pF5iXjr(FPBS60qQ#^#}zy$CsbfJ%-&)6wut<6y>cm6K-F<98T-LX3{Sh2 z5F<3P4}+h%?JntH=2PUvq1`{!5OSA7wlQ+F&JAtMafNx((STdrx635EL|l$j)fkk< z6=fW*irUOW03$^Z^KTaSFiJYrqa5_2vY%8T{BGubDX+*9wMr>j#!?a=F>e@IU?yglbuMR?~u*CC;^5WD5fGt&m5EBfR0h=MsTokXM zG#wfy0;^AVO_aBK}Atk2_savUxQj3>3Ph%L4Ti&k`504<_-V8Hb({1WLUvH z*%0B~DebZ}`wh=!KB>Nwq0LZqu)){NS65Gun9S{evWmDno;biepmpUCzAo%g`sDkY zZ<$lRZyO=btPLzVXx^;`jMTJ%IAL*GW=Y6>eKxmKL{GX!3#`SvSV5+ z%z8F+T+l-Vog{a`C%-GQ1Y4#lGjhC_JIzC+5CtO+W%(uqCqW)|S7TQx(o8b~d)N>`xnJsqR4*{e>OL9n%I) z6in+$^kT_OrIKo=slD7A`78tk;$B{BbkK7ZYG+LFxAo8diR;mGc#2BkS3Jq_+SQmE zGCkT`EoBV;K4LOsRISNtGc2&^f2lX+9d{A~BAuO!&N2Ilojclt$5vyJlTf#3kC7)d zWK%sFO)daCLBBPlFV(WAkd6&ScxVL;cfCHLnuz3t^Xx`q=kOyg(3Mn0$#e9*;p*rd zGyz#4WzEqQKEm&F!NgtM(ub~4#m)wxyE;=cyZy7>&PG8OY~RgcU~SIjVR1Ep3hRXxTlJ1P z5&%|-JMho^ic$=UBsd)~EWA9;2h<4h<`cYPKp29m)Osh6(ubWNm4>YbG#?o*#4N}& zd{63D>avGY1TaZnd$SYC%Lu-R0&u|M0e zPEMQ*7);De+=Q*!(5B`u7S5jA)sIA#Z*^ir;;bHD?*(W0eF4;r6>ioimn8{JTr~SL zG38O_ZFMGSJ+N2ix46IC86g8FEq#rh*%?9nWXPM4y(T@O1wF@~ac>h5u1UZJo<_21 z$m^s^J}Rdwt11b=a>j_uri_liBQ@KVGS}Ss!i#~qGrJ~-TYAX`UDgxbn_jtz?g7Ia zw>nSKi8Fk<02u?Q-M^T~4I5$f3!OU3u@L={5cPlbm*pcmD1}I+w&}?UoO!0{HK?H@ zr8%=!rl7mxCF$%$NCKx^qfdGxbO38AxZ8QM6$Gc;>QslF(dV?uhHnT%};`h3M9zY4dN;BYyPJdkCkEr2pZWY1Oo)}q=*VMq5^(m)bM-hjcfw93@~<;e*J 
zB(MtI&D(r=(?u@e_1360#C%i1UKH_D3*p3RF*u_gtwwoL8eZO9g!8%q;`rtjLop8Cl){|BF>ra>Hk_r@Sq!zO*sT;pXNX}D_q)<&@um{pb@xLeh05OFsyi1 zP_N|`df;*MbtDkTEUv2@fPwT@2si410m$=r(7e`pF7J1t9Ok;WHB!x`skKfA=;r4; zJE$yDi26#OkGZCS%&dNC45TZi-@E6+(D!&V_xK(ehBrZN)P>bJ~l^I1zF2ht>kHI$8E?99JfkkuFsAN+O$*Hwg1iZj_qwggY2)h0^o0{oKjnZ zxV_1A3CgWaI(cDEqZHqr?<_N*8U;v`Qu4`z#;5|WQx>$CXk15l1-}0)>yI2`x!SZec4Bj|8s|$i;2D1qonuMeK@8Um3eJM7lEN=X~D!tnt{r z>zKaFIF_SExa0G4fQ;XJ>(wonR~ETy+%gAT=`x zia~@I;{xp|1591cZY|xod`mALCKR`qPdS}kj4g>XFycqZ`H{9iOc2WWy7ijDte?z# zoBu>r-=O+%G~`9$7LiH|-7mhh%R?ZQ7UK1W{#K`L8D*Ly>wwlatbzpk?vyOlM+)?I zX{n1OEe#gmk`99ke5~&J1nKBD8&>g1!fd367hYSI?VtCB1#Nc43u>{t8A?ubuuuTLppGM{?G3=W~ zDG5I+Vuis(L^1(It#AqwsgJo*Sun!+$e++6KoiM;9^N^HE^di&s`WG&e1tTAiV!^v zYazde<6pDI<0LC64^4|HX*c+eo$J!ty3s`I(;tmM?7J}EzzrS2lbkT2Zx=eg5d$T< zlcS5*KWw|hJw0TMYm&q7p2Cj7H_Ve5fA(n@e*;2=KH%NS-?~UX?CJ+~QCLv;421f= zsJi_)rZkqU=q0QV@k}q9l}TJa`H`+^un|`uWHJztXZ1wTls9ts#3(^Mc?%+`CBna} zuWdTA_WCUGD6aa6MWT?}bIoL+YBHW;kOdvJpaF4e3Gop^y0fNrpG_7+0Zr0;FE4JHhY< zBz9SS37mgo!IhR44vhfV6t`6vP3SvulfRS~)A-yQ`m3iksN(ELD_)|4GSTa=s*97w zc@pC76ssh{XIlZt8+4Xo!qB5KpYp!v#+*9g2TTjGAw%@0yb}9t)d(26QqTFxoHyucvO8a*5!Uog|ysdC4ohNnJ_%XLBVNw>0p2cBkky$4L*o^!q!fu zGaKn7OrGIKaD$Q4CYsbdQ=*Zdck_Tc=6suqyaC^StYR%;m2j1%R{3oIGNCI8dyl|+Z0D{6!=JW~oe4gq4_4gghWqM@@;?8USbnlE ziml;LK4nRR2s&gpOQ>}wyes(6Ecr<*tY>0+QnlBLSy|C#?o&;|f6mFi*UQ_8kbSGI zymR(q;{Q{Wps^$vFz2oLe?7l?m^@lIcQ8V8rR}v|ye|{}wCIIoyQ2qnhk})K(2hnc z4d43(2DKrzKFFdH-_~%Sw`qtoDwlyAK-*`l9#Aq+TP}6s?gn{Qk-&j_xtzuL7NeAv zgqF_2Jwg{pO6v!dG+(%~B-O9>*>ybN1RS{2HP*-E_D__s!~C~Ud2v-Z0` zYUcF^SM~&fS@hni$dt6Rh_`u=9>s15hxde{8>UAd)C|<8mJ@6lx&2!s=i;!{@nc>B)h<6YbQA6dc-F9qh# zp2p^RBUZAL8Ma6nyp>c0HSW4huB*1pQHq`!T||nPzY`p}iKfrf?Xb9@hk0v<>-FrE zKlhdoWmD_#j5RS_nWA&j;Ge{Fl9WFlje0m2CHp>fKuPh!PPv++ch^h%(m)YZUOnl` zOnQ1(`L;D5^*Xv!ci6$jX>}^MeE$}vAt##J23>z%U1zB~x>I?+A95!T z960DGWOU$##U=IB*$XtL_;&MD68XYQYM{EywJ{-Pk7_NUuXqeYa^Yfjnm*kJcF75riFy{Rm zaB47AL9JY8=G>cQqAc0bHIrgPPGgN;;u%J5Dq%-9g53Jl7XG4FM2A}&tm+)x%v6_` zkG~n5j>VV9JTA;s6(`?+qmK--sXwsX3(m_JFa5$>owOTO=cC!>f9XEt9l^nd)T`zh zBw_LVr1XhXzH5YaaX}`o3bU&&Rv))?gTZ*K7j=EXxzH+BP1jGHKFemnpn0{ETIL7o zgO|j#*8w8;%C)M+ZfDNh5lee=sJA&?WE9w#7SjGxo_zP?tiQbyJb3LgnwAUx)E>V2 z<@*EQ-GlB=@t3;Rb$w>*6v-7&kpVajOPJrc`e9AH-gxA%5}NX_n+qo&%=i|1XQjh@ zrCqY@X;lyRgn4GAhwT@=7RjU{>wL{^huy696e^Jw;G7nW13?;JneFZOq+ppF_fdC0 zmrH7?U9A5i7Jz-9z05p4*S6Ias0LuC3r1HM1|ofWfE8i$8%cYbW-w z%92L}hiE5TK1W<#^<4S;p)rynym-VBXveQzJveBmU{6mC5)&GG}JCh1m7Ivpev> zdvL;~a*EPzc_Ejaf^i3)zPk%@*~sX$wX`{V_8a%@1M2b6>vb10{1#Zv#oyamPSR*- zhLJ5#PQ`FwrD(S=xpmiWQ12Nv`gd^|W@4%A%*f!UEkcePrxCn%TVZ;xU2Mv#%UQGp4fZn#VO`er;X_|D0CZ3#HZGUzwzqOsQ%U? 
zX%qrHC2qFww};&6NxOB+@YaVbr{@-SJ!iY6!c(Q{-nYQ?I73^tB%BjZOv% zq*{H?|40^wnb;>-wG`i3Rn)wU$7v6=P1;OztUx!kN<$l zBYwvbl^e(r_dOOk?8<&Xtl|ms9?M|JE>r(`oPfje-qxFHci=u9I72HjheHwE$WMyb zAePLg(o{Pzol^f9R$*1<_-ZrW{Qe71iJo_|NEu{J5Hw5LdeTh-h%bF3RAGEl?%-f+ zP^~-PpPV<*ay9SMwdc5Wx!Kh79Ru*{W2K2_TEx3lhV0!!Zx{f5WNR3(5BB()ZYAoG z>8S^}vHN9!{Tr0+4ku9t)rIv?BrWA2ifDLYMm026c%Q3bhv!Cv8>bCOov5@Vg5oHt zN8X~~{77x8Oprmus<)u_yZHcIUE=iQ*ZTP)vBc%Gz__5ySr0$qjAl;3k6#46{+qL~ zDD9a;|0YiR6ua8K^6dCwjNZXFP5M}h2-gt>5i3uZtlB%4dzMbjy@qxtasL>|@PsY1 zP_SoCdob{gzM2aEHR|I3ZpsuP&`qohJ+=neIUCO64%<~C-pV&}I{dv0m67lQN)ga7JG zN;qB|SIa1@J7&XBY9$D=uq_kGTf>&)>dvkGI9u}!RAe#Y!1)T7C&i3>P4q<)W6jXp zlXU1Y7%Fo-4frG~u$~zbZ|Tqb{^uImD+>lWiMOVZZ!+HnZzfhgngb|JEfIhX!GGUy zU)klsRMwd;TwpnfaVFBhH^F?k!|OA~H9z=XO(ZCt4YJfyVYgWk+$yjSoP`@2aXnQZ zoUuFDK(__(I9^|Y7Wb)>^z$|&nVLi%8uDw}`1>HWi_w9c!~U0=3s>`w$5?*gU|>~` zt=KeCm!b^ejj{?2cMxKZ(eKgFvP8=_=WP;S$ok7|5tGqiD+h#};m8t~-FTgjo4 zP%}B{s$Th~D`$r-MiyhK9ua_&9-LKDqm{M~#Fdknc!Ii1N&-^yaIk=h?xA>ixFMM>qsX zTy5`GQCh2SerG_x;qU+N9d7wj=ram_W6hms8DaS&&%*ob?L2nJbZlB0M)wxmu^@OqFl$ymuNZSJ)s_BikY|2 zDv4En*tj}N-ZqR}zHA*dHLa$LA~I^@s#%r5@>^nTv=oZqmI8NH-a(uPERo>BI8pJQ zBTa}*-+M*8+itTBcZN=j#LxXkiK(t0!ycFNZa5M0dWGQdG}IwwcSt}#+>NwX#*vNe@lHDC=Kn?p|I4Bxw) zUiYgi8Vr^)?d-p}v6VE@y4x@?=$-QAbHI_|xyepViDlV=H8e_It|9 zLp<1P?El9*kW59+{ z!ri^>Q@1_wm9wlgvL9-~Bs~1py#_l*`wf~xXaiDO9jsVvxa+Mf&)Qyo6Rnh~amcOU zPJYo}mt?P&jI=qgcBjRoKELaT`pfY@*UYz1wnec!yIjdu;_-tEuEYTE zqyyV#5kvCd?5-r!H^s)grJ<2nru9ERwMYyk=f}Z(_=j)a{M;{8rm`2W}f zLZZ>ir2{fzsD*^q<@OQsLd)nDLSC3!wa__n=k5O}`wte`J$VEz8Vv#mONVF5ymAJ| zPX>?jG5;=2KqF2WOmyrl)i;&veoEKy_|hL4%zU2GwOd!;{cp_4$^R zD6UgO?bD|t_l8?wMuNDJRaAmtvHpUf7Mu2c-e#FZsa`Lg{Ysq*MOymNKbXQ62KImJ zG^Mh*Ncp*#a|5KFGBa)8DxM(HV?ru_FxSVKDO@)Ay7+Zu{yK=BrE`PPHnxqk9fZRS?7Q2GlWvhoJtUdZ`Zo; z)TW0r=F>LsKMScz>zrxy*#ExQXyhlaAL43%oJsMI;t~u=?NcLIJ@w;7wgM z;@x?7OPvG}3H_MOmMuc5C{arvcAz$C^Fvx7)=UZ!5QCEx0A1$clSD!qSWn^A0)6ja zj9+L7-Ak3E+|4Dgr+elhY(ptVXMa7b>%QUJMMi0zcS}IKj^zGF5Xe&3aC+z{(7a_r zi9Gz4OPuIrVsdkA^!M2j_B-AbRNWZr+93DAhrJD(*(bVjqHFzVD%$7JcSYsO%)0|E zQo2chxl;XiN-t5Uo|Y4%)p=RDFC8?Cg?DT>o=j)QMn8n~V^~3`QQ`H=V5U=mE6u;c zzxNQRDe+Ddu@%DG)HC8Lyufp!)V$WmDB1|88t|#`f-A|z@8vY;%f?+13=#e0w2lud zTPbo;I@`grPcX3ff3^Qf`r>Ge(0oJX4ovLzC;;rOX;k?$$+xShGI>wr4`%}T4B8967Au zIPa+zBoOay(#+|w{nG8PH)GL9jy^k zu$%h-g#pl)-u$GI_JeYQJ$U1m2*%&%a=s|}FnnJG!2x~zbAxfbr`LuTboWh*R{J13 zM!al<^kEW!%03hlRBG(BCVzW z>+8@Lgh;IGt@k!wU2*5&*5-ERZ3+D;H&3H? 
zJCje~ayz-d=Qg& zH$0JFA0r)2%KCA3KQ%hNG>@;2L7*WGu51^D-42sHUkg9L=D8=3-~0y$!06Teg9EsL zTOz9POPS5HS{DvtHOi&x*x)-igX5~;s;=1d;4GY5&dY1d(G8(j1`oD&M2AfRWbMQw znP&f*b+`+5{t&6cqQ&fDG9GVk#!j@G~j=bECjZQ_ZrrF_2d>oTiFS$4ceb|+(0kAfg{zocE7jp zj4r98s5ZXYB>MoT#_l6eh9B5-x;*@&)^gm`3}T6}UoXgg*$5P2PpcoYL|@Pi%d|!c z*;1fGT6lIXU9fY<=xIXx0fXehjPB3Dd2hbvIu*?S9R|>8`n^5qQP~_{4_CBSpQ1xI zVsh3Q{|JYfwKKW}%3!~&mcfRdq#9*jtO`+YSZ$#s;wL-E^K>p5G18TiNC>PUlW=kDxfm3%=T~NH zhw4WhcG-DJdy|7c=&{i<3AbPlfG$@U8ImAh6Hj`OoD6sHU(Bgjlg3wsyX&YsfMQ;H z9n+<--I>0n0Ft%{0zjVgMgYvN?Jw(c1=3hiL@1h^^C5Ou_a&7s1bxz82sm(3jX~1} ztq;TR4|>k`>{W%0lCFtI-n$xfJaPB`ZT}}U_hkzKUgx7(+sjPInr9J952l=$<74l(Rkt$pDN#gUL?nIi#(fB=HnCt1e{Sx#D9L$g>k_*#r+ z$un3oYXN0VU+x&Nv7PQeM*42p&`~$*yDz_i+-y1=843cYlM|B|vA0B(x5f+6Aa~ri zxCe%LGjO_t!!&;UIh4b49#0T&m9TBf*P1}tJ2KxZEGx=r`VbsO(U-@&sW7cK7M#0OTEVh&|7SAS?C-IvYi6%DJW7YC3wrncnOkA$zrA}BxcvZ>BFUp|SG!HjEU zcr#)|XjNqni}9a$O};y)MI?7i4gTie;0q{BJ%hyzTzJ)v4x2tk%2bEhuu<EYwc(i^Z&;1jjXSl(A^#s zWWMPvy-yIY=)haTsRVd(72l~p7Crmwd5-_wkR`GKJ0ExqTYa^o_4wIzYDJq`y3P(C zT0@-ISF2?IX%jwydTXxNsuKcH7QPqEyZr!xCq!$$#-;%jpp^DJg5IVeULi}MA^n*V z&rM*9bdw+}{kG~mGEZGsfyHg((GiEb5SyKYuR=FtHr5b?Q0ZT|GJT>@1J>)SnZrEI zz&o;j@}}!wvJ&AUBRraETp@d){14%ox!v`Z5M&Wn3>XXi{-b31@D($ieC*w5(fgN}-^fHuzWo2Bq$lruWwjS>kQSnw#o}TBi<;s7 z{~HCXd%DaZVoT2-;5AwQ&n(dZj+S5F{u9dP{{J{)^#7~jrDrA*N84h_*M(GWr%ij+ zYpFdVxJyv~SyRCgZ1&1y?(&?t_%0#z=Of?zZN;L<)cf<|a`S86oq0K*wCjjhR2Q&R zj4Z`Yp^WB&GqRrOp{onk3YK3czhO_dn}Wc8YuHL%;;6UY?}W=NKH$9AJ^?|lTjjZa zao{%v!U%A61gDOv{%}UjvtI$6Q)4HlXQGzmq7&$&Pc$#+S!E$o-wY~p#4XUUjAMI@oT=q8iZfLXPwp8re)QvlSG~-u{=PUp0qsBEcPj?TH9z6So>6I}!>{<)V}>PxXR zI1vMsWh!VfA~q5euSu_e$;|dkBI2MylZO`uGs}P-!>FI87q>nfem}LT+PMnN);!su73%K_m z{Lh9y9WA?xG`oq@7_a|Sb)9DHLPfPn|0XC9zt?hIF89xp+6zx18(<2>3*xr~}xYz2o-_ttd(8r~#x zDs*bj^~Q;lXI${l=GgPn)4m_ej9^dFVw_$3@UAVlg1|-3PYQWJkB<~D2v@Sf136yS$IwbtwntJ*}3@Y0O%@$h7Qon*f(|EW5C^{&@|wlt2}bp>mgY z@}Ze)!Vhdd+`+_&PNYtLllSg3bd#JAWi@c+?sleE2C4@qub*(8H*LPI`{-|32*Bz6 za0s**ir7kI1lq3@ka_DGz*)LL=*0Kp8-?1CCnsvtL_0R{INSurFM}?%U`?A+OW8A? 
z*6|ax>^uHmjP^yupWol~xYo>Cr|eI&f&^IpqS3#y&}~^&+8zpxnnc}&35*j(#QvYGWzeEg;59Y4kswCt6T&G`^8hTFQ{+m~5IlRsS;yG$`0dduZO z#^i`Qt3|HRz^G8~o15>y3)w4U-4gZJx%h=zBWR0%ptm};r#suT46L!Sj+^N5@hr^U z8{Z`e^G(INO6VJtzQv-xj~t7<>u#7X&XDghXC2ulgc+%-nwkl+~`7ZOv zjI3Djep;1_>LYaZUAXF}_T_68p{kL;~3ku>>fk!?Qin3@D{o0Wo&x>FI zk(D;MSd6lP=#U$C0$(;fts!z#3GG*L8`h*!@-fqrVVNnAjLZC0tQW^AbN*_SZT7(o zHs6}kaM0n@y#0CI@WB%lcMaUd#J0Zj%-!O-1uLA`CkARhywNTuNK?*i0ZH1kBZ;cF zSMJRU_+d#?!NJYFi!D00*r_mj@PRc|MOf4APf(8o>~xPBnJwINDEO4MBa8$yh^RLMCk^D@2aCa_$n zOFKDhR$9?$Jgq9$X@ywgNUCcs2Yh%XM}PsAmQVwLu|gth-{~PD;8WyOG|z0^^J-~v zKM$tXFSKvJ@StJj+|-$E^l4DV4xrvZ7?nEPuOpd&3SA?EmVNgaKQo)iZk| zu~5!9s%Z1#e5DMKb6y7l^{NQ2moadkuemBSOC`a7`I0@?D0O~p05u?SyqprUaaT+_#BZa45CQzW<4>{6q$6;AJWM6Eh*7gXrc= zgdzuLu*!`%&LYOWUc=!pHvHm=W!K+r;9?KJfS3U7dmklmB__jCLO;YEaM&$c9_eG{ zAqR{P_AG&iIl-cJ8=2i>u1?$7yBxx}f=%L?1+63P4yU8I2giPtcv||sxwBqM*_ReM zVX83?14r*=dJueYm@ zycr5L|2W`()(Kd!vxmt-0|bVCg~Hv>4W}H;mYQzG9JjQF2=JUsLaxrd797?U(q+HH zPbF<`@RgR{TSc^QH~QM{POr*2P4QcYeIIQ(8>!FYieU{qk7|j#{JwWFQ44)}Zr=-_ z9YN0JUPir)Yq_hlom==jUqm1;_19aFf_xYu&*KQ>Zn6jh2m|4OZxGYtTR+#l)jCSCNi{`D{1^+kiIlpRE125Z)s` zK?e>H>}sB`rd3%iUY+gn4E#;2pRbG^%!cSBO~I#(+d>tvrFXlNhhP7TXxEIXyzJZy zwJuRYY;eljjNNFbPcR?ilulQ-o*aGMm8blgk%UR-iaiV{G~X=~tJvzMqPS<|$ZFknI_ezR;I)*u=|85bm=~D)n<*w&yYJjKco2F#AX+VxQQN;`{yy1+ zyr;q;{$Bq*%_{Y$Zv&k0!4lWw9zE1ykNa2N;vlQxS@U{7Gwq_pDmfn^My)igzFXV$ z8hZCU7c@KDUmXW6;M24pua3Vn*M(bo1{ta$i>v<3;w;;o)B~mEE8`~_>^Tq9 zqT4gQb{g=v{f62Y(nzy9MJ@GG)%V3}c)VZ`G)kTxY>f$DH?@CcJh!}coMU+Bm z;B!vslOY2jndInIi;VLm8<5 z5*eIxW{h2<>kZBwWe+p+GH)pAXRfT$7kIpKm$%m6iXk0W$bZ0Je**>?O=kC~-13rH}gkY+W9)TNw+99RcU`EzZ>7PNk^|R`)x09ZpCzeTMqsgk_?w;U+7DFl?-2X${dxpc+Ht@br zf(W7{qPHZXM~~i#h%OOC#vnn|(R&#|5WV+KLiA`ultBcejb3K-UdHGK!?5Rhp7*`> z+55WoIp@PUpL|&}YnhpKueI*`|NH%a$%w9{i*)v+itzg%)wf3FX1BE}CCGnT*{w|( zJjnnS068uL0|OUydKn@IQZG`e2cMMO?b@cjJUn~kfngh(9dN>z$FysSd(H1YfD5iL z6Un6LiC29tvZ6)s1_inOI4FSlk?@-OiaFhEeCNG1TNt$h$noeE8|#KG%-f2S8PoN3 zsBvfAi|MB7gmK6AhX1rVrPx{EL3i29DJhA?N>KtUxolKl#*I!R!He*K_f~B29&BxJ zpB*|8gO1O`4deKFH0|s^LKaWtxd!yzU*?y4+*98FOX;CI8Q~|6+qqi&@Nbh9WGtl? 
zijJqNiW3z6JX90X!Kk)eU=b86{eeT~R5=??A7U^~Z|tS^RfWAP@4F&T$TK;6y9gVF zS_Cs22ibIRgD~^>vwlEdZdSpfvUx`LMdY-;nww`W{rzwE-4k(YA9{S9E<8_O!57&7 z8kM{b$2aMC23nrWk!^)!``p4oRpN!it|<2h5Ke(cRyP|)1{kNnq(*|RL!n@vM~w(V zV>ICdevfE2(Ah}!^8F1G>fLByOuIXuCH%+Yo;VTu6~(Q@RC&))!{NUI zLcwh)zS?ik2+UOK=i_#(6FP57B$jRp5sR&Q?nhf{v~)U~=$yDC%np-}-?o^1-nsbK z1CABhR>C^i}Ge*&|)}nG54l@ zM(#j73I^$$qc+mpG;xcG4RhUFRqrC$WwRSl^ny}UQmIPWixycL;*Fy2XDZeoG{89U zlt8@Lz^%!6IoY3&f3{e=dfsaeF~H|DRQr=H2b+b>o4FP=KTWrB|D>_R9m^2dcLGd% zZY5H^7c+kKmD=ppOT;`Ya(*E9bor@e1M}rbLXBM(4Kh_#lK9?{Ot0)QtdO-sYFB0x z;K7u^$YAFtY7A8AfddaJAu`VdvCTUG8e^kHj>Y$zVV+ln2p92swWaT@;iF1yRUh*` z;$A=J%V|ntd)XHDJYZli5y+L*tTZA@D^qn39rruVa5An#^m^!1k~-Ry<{eF}&80MU z)&Ua8#(?Aq&iFP&cvMwibh(fvz3eej7dQM!4cYH@9RdSi?sU*D-_?}tMorh@BQ7a* zDp}+A-$8|c>LBUN2)q+=%2m=D)UVO>m-dZ)$}F$d(3*clK&r(k=|M~m#^Elp{u9Z& z5oLCpN$VoCWE+I3=u2HN*Ms9VvtzQr{7lz zCy|HMH=?*k>td!Sa(TX;*TDC4oE*Vn>q%%9?Bj+pE%Ebjxfw7_$-)FDR zu)e~wB08=QQ>aZcin15FbYl@+85OuVwvXRndT#@hca_sVFu`8F6b2K)(?-1zpJ;pM z=|A2TrhkVMjy9!a@RjR%7Ou3PX@yzkBE__rZK6e^TWs9TKZ`_^6jrv>m(u)t-(VWm z`3CWqm5SxD$j|o-CPX+pezlE2tHz=ol~6am)S}C#_oL7D-LeK>8?n51?r7`ET`%if zyMSrhJ$}!Y5v$u`_vBliajfmKK9sZmLoGc!%3go!JNQAx981R%-{t!LpDixu3{IsU z%sf7k%4g9YC}FR*4tunD_m7_3yuxC0n|WZvpSr-~OeeYb&(wU{k(({G;M+th2<8lH zu|K8bDnbQZf5|GipUpNf=H1Mf9kPlve08Y!l^UG{aX)2$1RLRxIh7&B6bTGpPwSF6 zZ(Y_&h6JALT4r1B{9N8Kxt3Kfo#ArMs99;rrs#FT?8|g3Yl`X>#>;u4*&|-blZAgPld)LWwb|N5i!S3xa8I>zACXfb4%|mp616KSz?gDzB?7+md4xC zEd#XIaHgTK{2M{5v30dKK`ZWf;8#5lM*oQu9i z#Y`0%DnGduc^k*(eoSt#?iLhe;}jMT zfIaA1^&Qm1hN5<>dO`-e`Nj3}T|Fs*@NWY%KuanGpZYDsa?0ERYe}GWX7GsaShDnxOLQ&M&NK#DUL!-J|DB?Bgmc^v!9c z6P*ilv(@t6MN1v@1wPOL(eZX$70A3oJDUAR=QdTSLMKoi?sO@UV0> zF>k3yy_cKL!q>!gy}tu|SQO0SC+gHL^EG7M`#abOGkM6^xQ?8bP{@WK=S;+vH`OwC zj>PsxLZY1a6k3-Np&$Cii19P(E@oXx!mqMJqI=H&UUZ3ku@9FT&!hu@`%5n_1`5mhK{`#x_CIvhTP$*lU2c# z6ZyuPYPl7INl2DK0bQQGa^(^FzQ^Tsa4wia6uR>;Tt_dn5am^|iH$1)= zJKEHSCtrw36zr5YpMsiP^luSQ*CDKbCQq}qw0RQ#Z-^&eWaw4avrwO9oBBZ?BX}XjYpWG5A z6KV&lqewtetUT{6SkND|dcs8aMoTnSR<+Cb){fMzNFEf4&DUhapUmR~0_OpD%AS`( z4?JE)#+r&08JX>jZ3!(1r@I#-Px^Y8>YA&yQ#mO~p*ol2r;8~ZfyaKW)ii_U70<$9 z@0s4}$#XE!S(P8!ju*ZOOG!^ z^zX%$%Ll7ECTI$Dlta(`Y4P?rUZ7y(J$+xhyL-381dgw>JE)l4^@Jn*h;#D~;3TF7amL?sexcmzt<#_Vfg%Q&xYeh2VQ zOk9fvcFRdVrB-K2ApCaPYmy1C&2AA50TM2dwCUT2)4DcWO)c(B)6|>?vLxy)!N|9G zHC9`%6c1D`Iu0PRwsM5Ko#Lspc2KvNf!ZN)sa#>&c+ z&!i2dQtnrJ>ma$3oX<=zDkY3~in#N{?ys1++kb5K`fDwUEtOWSqx0?#6H* z)UO^JoibeNo=CDUwX@}sVHcklO?LD&;LRo0sG1;Uci=VMYEMm*bXaCInY6T7bzche zpAsakg74SL;I)Uz>kr?%8T9yakX=KKaQ{i3DHRu23E1In^Kb6q<{7i?@|>*qEh{LA zM`!2}&6BEt(NFjq7$=q!5EWPRM}tlXJoeev>3or`(H)UgdlK|0-WPBvw;1L^Edxo} z-KojGaMOT%&@q?#74%`zasF^OX*_qHA;O`vFcJMe!tM7h@RjP6fKnriDWpmOQBe%* zxg#f|a+zm#@ND|w3?=I`V+yS6-n8H+^!zH+4IaTxV=@)B`W|2GomEiYi|-n+M5Bvx z$yd`MRb&U3yZN$h64S6K{lj{$=V!Gno1$FTB1V zhpRY!K;$q-^{(rYt2C<9&GML*5MwU0$==dk9to-c-m2Uzxr^XqTdQ!YXRgrROy^LL zE3Y`db_DiZ&n{ZwWw##CY`1y)LMqdZ*Q2)%n{1Bj6^e3BO)LdOBrGjZaJg9=@srk&2b6+M*(MhqLE_ZKk zQWWM}>0llz=sKkm{}UIH7{k(!XyKBT37D+Q@*Rx0=leLQFYvx4+~$l2mRcGIQ(Xn6GicG_nGt zU9T>zBO&0|HY(P&Q^Qij^1yn2U_#F4SXN_Sck59M{$XW8d(pG=xG(NVz*#zQ-YpLP zYhXVqlrwx+*3}X{^a)5P$P_=`>$@l&Fr9KsZpFxK=vQ;e@b-nBG6m3ZS+;qpN}hXU zl-K2hNW{M<8u~Y&`Pol45V^znMxk6o?mcIfQA1Bc2tn^5YzKCIZaWpbayKrwpB^&j zSsk}K6I7AY(ZnXV1Q@^SFqD1!vv>oh6Od37N~WCEwfUzndc806^q!+2#qP6=D?Y9v z43H8u)Y_3+HF-G}Ambuz!3AF$DN{lHCUNiEPRJNgqC8PtZL)3*bCK=8a+XJmfgTde z4EB1MGpZE~jJiI;S+-bLUtsz|qboDCA(8C;E0$MxH`NV&!v}u_MYrRj1L{RKWqSq0 z?q%?AIjP!snQNFG>vEGe zT57`FJhFoSGVkt5#FcN7VJxr(fWOMjrkRl|92}RzLGVCaQDK|n=TU9vQ&<3V}A_PJboY3Wt-`)QeH zZ(EiL#oIF@9(EOlQGM|12-8*8QkEKY&o6a{y%Q=PU4i!Q7+F)suO_`ok!@cUvoj0k~Va{P*`7K^IFv9rk 
z1^jS_Lu@tuPkE+?wvnQaY{xA#Xllb>P*j3G`EBP}gR$$m^`KLeM|bfB9*(yQdS2>$ zbkI?BI}1ycJ5`KW>TP^*>Tw~{r9fHDR=4TWG%QCY+1qBPJ({~<@RP0ck}`gsGXA)$ z{5GJi!bSl=jbIxIx;cS5$-xR6=65%53T=iJ*k;gC#H|z9-4n;pcPo{Y{PK+uBx?^6!3bdeJ(_GoaDKZkRg=9_Xx{&45^! z!uy9Mp=L<>)tl>^f!RRWS9cbuW`@m}>nhz~dy-RL({+?*ZcqUOvu-8VJ@8usj z8B$>OTb#!t*ki`6<=eV*2lDIMxmBF}>O|ig0ukdltktK9eG$Qa&Lqr)7t6SueHy>p z)l5P~+o8+Ug0yn`ibThDt(y*mL$}Y~oNd0ezG6Xk`vL-opa~8pN&o?QG86m|Z~|0A z6Xj;CD1}{vE&0j$B^CrKOQ^B8M3=xYWRgzv)zh;x@(Rgy(+Wcfwz!palGgg7na)+? z%af>z0QlJ^enxgWtwu^ynQtDm+wW1=;+snEX-99=CUVl*5ClTL&^8_$#v^N{e@3TGdP1sUDdg z;yp|l>?^ zc_!maa`?p@s17wPI@KmrOH|*UPoSn~=^RqgP8$Av#KLpkVL=DK%0)ES`DAN)Dc+qO zqUg6LA`aJxRi^L|nIjYY;pJ=wzg`_Mz_hpKHAJdq5jOxVgZTo0fWC8J3`W}`8^m%+mglvTLYJ!=d zcYX9DB~Y2%wk!NJ`OX<0Fu8K)Q{g)}C7nx|3yAv*uKc zLo5~ZHn}^g<;3v6=fwQ)>T*1l+=U&9%^gJUSS8^0aM`o&N`4adW@~J6Z)zz-PAA>^ z;8nK5hL*#7{D{O~1t4RZU(fbCR2O%eRR%Ee%Q8s21`bVo;%3bc|zmx$DPkt_w8O!*JS;Xz~AbjDIax;_1%BYPm3d}I1GtgXNmiiqz z!{u$8#+CIdD!q}n2;m;K0#dblA2)M@##+pBsR+=&V(7Op&PHuZoc#h#EY(PjpYr?S zx%3L|%j9B%B4`Az6&ntl;+|QTynK=+Y82J40r~WLN!-atpYMc353+nEKH3;7g=a>o z5jJs~;#>6H1qvECDd5R1-=S?uZO6g=)Vf7h(bo4h}bdqt$)h7wx%M2(` zG)2h93u*X9mH1&k_^4UkXQD+yPG;Ce4*o5M_hiR_|72{I`-DCNqIK`!ZvWTQy}er^ zeRY<2lun#|@RMB-vAkQT2fmKoN);R! z5JnZnNIQ;iG7(ZVlRw(5^h4>NmzYv^cF>Hn4b2!}16M3FHo-k>f27vp=6vHDJiTO8 z6dLfEbjI*$=R`_aTimcb=0V#iDXt$4Zs? zacmBK$B|{ksb%Xe9jW|W&v;itsnj`pM?2EW0!HhKfrb@zIo}-3HNV5U@pBoIlg$u67T+3Xs%|vCh>kq)Pyxy z@2#8=v})q5{vM#~&(7Y~@iJ{E%Wl4agnI|gtK_p;dQa9!U)RsIT+kq4bD=F(!Q%|q zk{M5oYa_uO1-H27Pu#8i5S-4~W5_o=U*5d5dFNIza@uW$y5mFNPi>h`O4WC9RKEQX z&1^oY`iTcm4`e=?IG!bXum@5yY-L=3K=W=EQeWpyxc@6laEvc4FOSPU>^TgDyID&~FZXPN#HC11dJf-+QW$v-5g0vV^{=Ldnhe_7RzDz6JqYDfPH*)g~#U*R=lys=$E-qJ_L3hV#M zwxwb419fXS8>Ix^9-F)N4n+0gGPH(sj1B?kWo{3iuK%8vROgr(F4i09zTWoR;=f9c z0?zpQQ%l^yJBC)D^8UGmztIZse|i87VP1;PYl5FK?9p~vLNC5i{0DSnnVbHv(e`{Z zdg8w-+%ujxT)F?g{{PP?t+S2^Q-qC7-(e;9e`LG(V4}CJd4gVMQZGh#m5g`PWfI!Ch53lY^ z*WJn6m90wt_y75Je*L)aG@)$E+7G{aliX7BxE?zT4Y*dHqR2kp9Y--ReIHwMF!3|x zO7}+uT&1_jn!U6N_;AJC-%A+~gT?2Z^#a2@p*(q_4GJJ1Ri@w5rDlg>AY8x5a{2ki z`%@mxNb53%g4RMPK0(cAOKPU-r5u2N?U0_?Id3!d&hQ94?|q;$PzWSL1G;Ober7+B zS4zHIzC(uJvImQ=CB-tHmaM9#|GgVt6xnJo^d+wPS+$MIx%4R?w7AmDnA4wT-l_ch z%TXj&dA(+P$7t*!G;;6D->=Oq_y*d2Rkx5wi;rz`Q#7MkD91k$ zs*^B#5B>CcXtj#b>7-zau3BR)l%jkOPwxqMxnE%4H$Mti$JrupULGe^+xEUl>HK2x zR>vli?Nqqa-JLq~_rqrY;)AOGLlWv(gAo=rEwksG1J+1r&)>&Y<|6c6K4$QFn0^o8 z;EOY+$vE4xKz(^vz?6k|rhs%t{SOq!^owm-2+6(Up7wIL<;rgj6tW1K2~wM^3~>xQR$B_3Vc{Mhaxwjf0z4@N z5ao#(U;jl=ufOz}ZThU#>~97zinjzT=U&d8Hir3s^#2+eA|G7gXSXNL@@F58^FLVx zBi|IAMn_*^;Xzn z`g3)<9}0Jd?%O<}IP+pZdZ;kI9V`^g`pPT)rH;HPonC$JHXr#*x9F*RMPp%a zC>mVBb`>l@#{Qn>N52`9mMy`B5{}?E5}(uLN7v_gD*=)JAIN_nW!6A-((@+l{}1xt z$;_QCd@%-D^_oLpSYg1q>4Ln>#%#{xFN1fV-p_f|IV}Ryl{5aiuAM+cUrvcv8!^;k zS`Z_&l>!J#al(5qd*G}?2qy~PjpEJ?x&g7)$Mr=OqSPw%YXM&n z>+sM@Ht-3)&X8i9d!9^}4s@%>&wHdUEikX~U)&EMi!a^FxljGnLQQ@M0VokQR3K}P zOLm~$od2~ktGRE9e&23gnNtAbf*I@L?lwEMXEcUL41E$Ts0Q3FR|^r$=||d!%eKrV z)ug7vas!La$r8r7b#_t~*U>AlEcqm8(0wTd7bC46oc5d5-k3v!XeCV>!Cii1uB5un zRHt?@pM4w5r?)B?wAZ62B<#26jX|gxU+&uVNYf_oUJw)5TNVSQ(i+m_kN6c4d4A32 z#|6eJ)8PrTeAR2{XBJJCz|mZM>^SzdtQ)s=yWEN~M7X}KMFW!0UT?=~x(Hl5ehGIJ zRf2#7XH>}Oy+4g|?QiZDMMcYtn9Q&sb3KB@R!^$TKJO7jV}yC~QUxAeK1n(>!1`X% z_N3GR`kSuIEsW{l0SR?R6yB+&j>b4j*8rl{|1Jd?0rd#mlcT(HWCmZOucw!;m^QApHJFty1 z=usYBt}0#p@T&^4G&Od>f8DRJ~)jwJ5!B?rj3TSL`S%P7%(}B6PAKfW&km}&rfG9fHul>RfS~jSbAWGtp**f zM4=4qw8z5|^BxX=MK=X5C9{qbQlL!pSSH#qkg(Voy{73|jiIdbBx{?A6mT%3Hx4@* zmWLsTx0Lw;e8GNw?-^8vhEZt=!p%aF0ZAD_0e z`~r#70lmR)pWTLMLX(XBa-gQ8R}RHF*8QGJ%?G5-&LqHhF#@vCVBw$*u^oFb*ptA| 
zsFX~CKB=*8s`q53p~O6^*;!-@IX^qNPFV}N@inH)Rin!WD$qe~XL{#9ql%>)kLab>DILtwN@3A977~d_2M><#(C9nW#y`&C?Ha@cY4w zMU`lJuG>sa$?MfNF2E7J;r=bEC*LZ*{U32Zo{Mi8-o`z6=8qNioVpr6ifW1;DzIb_ zk=YgH$(z;V@PS5Kk)^;hJ`wvu89C9t|F^hb(WPYJg%$fF$6j28LwOG4wuTets>v%8fe^Gho|w#tU{Wn=`m=KP;Z1w3-}Kkb`B{Be(L>yBpfEWkRN zX>qG{|80m+WiiOI1sA7()slOxr%C;qZq$H!ba7sOST*%QZnDTf&*$G>9pYXTS5?Ws zN#@{Y#7k>PMs@#rs>DZ>{73}*n$}zOiU0dpqJTGKPOJ5Iq(s{%gtR!hQmV<%2Sgrb zaip|guvXJH<~@uyDybB_S>kG+Z%a6&w%Br-?nu*<^P-GgOu70oRJ{`tKNm7;^hRPE zfWP1SbK0|SaQK?VD~`LMVDYdmlO*7_w4AL%!8R$6eoZ;+jSk$S{X+7ohXK7stEq{2 zMzMIv4<+xm=9}D;#@g5-{cF*G=NX3MORM>0To~*sUENEioH2fhr-sG`0`<(q zb9O4xZMqo%sds}-2F;T`JBMd_tvxg}wn{e0sMBKml9ma#%D^XX|L^AF@muyjKK zO49Sbsl% zC(iVq^|$Zam|_+|J7uz~w`w@p4)a#TcqT848x4q_RntF;t+FCT>uH>F5Qc}&Ny>85 zu$y)OMRA^|kpPe~dHF?lpw)i8{=9Ca?sMe(41rI7+LKAEF3MZPX7Sd^Qx+@H|~?wKUG` zH_ghsw4vYykKs&fVu+86c?==O9(D&S%8jAOyVv0udvT65|_2&@1CRHy}HaP;IEW8 z%ibzLF_f!dj&0AlEqLPE?@1d~d5bG{e3MkMFn$dGcQ%sAJ`sTKV3{^sKY?w;4&9PduJ}R>HV|T5KRL(<0C(dW2p?2R_WR3Q)aAE$r{=cs zkK1s3wXunHfX{VIb@@@(m)R==@QVp|m>08stTGIuytX(xgVplv@4inhCqJjb2L%Q5 zuUFtALmQrJ6FpAB304#1ZiT+Zjim$(uS-fJ%q>mVauK`fOvvA=lGBFB;IN$`0M$&G zRehYp3qvCsyCpvB%pOHF2mM!X>x+UO z$2ClJiGXLn)=^v7Ggws@fyJp8%6dNUPUwhO!r~uyrI9(()zNU5g81~fnN(&+V7_S17mG0-2(Fn<-a> ziQ(QZt}aNdz`r`Lo2-4un4)VL3QX{T%nuW~ap`gj}D#7$^zkMz9`9HrN8J}EEeQ`PbZNXumL$Ny)=jzEU z7(f-0&w8=Dh_}EVP4zN13(;e$N!6Ku?X-B3Y_fP$UM)gba{Z_ktD{#U@2(RPm~KSf^l$c1ZQwn&FlbvJeo zBC^F)P*kU5G1g!kraQ`XRvdYLlKUO667Ec|-eOQ@`1@;G9v}!Lh`dCdC4=^_6XhEC z+}kM~A{{7M3$lgI9E|UeZVB)0Eh%;^x9D;&w64#FqM$%F4d`xCmlbOsxx<4{z%+GM z-0k?ZR{LMpPWjQO)YZy;@_D+LlR*QrkuXhYOst={IOo3E@Y>L)rrfL((5Fa0>g+T* zK5d`9<09<0E&hkAk6cbn6QV@YY{7yzMsfXnzm)|Uji1_*rV5na9 z+w~uEMxITqcgRcbtyZS)KI-Yr;(EP8=kJG#MwljN7LSGZ1)9&)oqF$_jyMl@Sj{`j z%P-Har4!$9OEHi$*L@-H%3E^_pt_C{N$iK@esS_=oFVUbX65jsAX1Ry+kYSIe8U2) zRM(@$wk|)@xRLGAQ0Q;6X2^XW+0ejVyZ6ytknPyvEEUE`Du489=Y^S8Cclp(7T0h7 zq;c70F|1fFi=JcQr!jjNETF9>;Kjww^t7AjTC|fBbLiB@h#UbMCz`R?OFQ1`%1gc$Kgm`6L1+?E7z=0>&4R*p&0=w-I%R3RO4wlP5kS- zxC_xf_({R{VLoi-L)#}m!O4e%6$mP;YI};DX_?Rm3SLsyV%!ic*77iuHlC!#Gi|r# zvLAB*AJNV!x%w`vq2qHeBQ_=Bbd^VU=(TC|w`v8G`f$(GdAGn6rEaQ|yDH&h;*c+% z`%{GhOh}ZlFZd{RUp}K5tZ!39&uDZw=I3QjCDsFfHEK;c-a7B2JjU=r+H0AfM9JRa zqSlG7G^>uf=5la#>$CN*9@u&SL6-xqN(VxQE)*o&EP2ZS*U~!!!0CYY)II`30pPhvnKg6IF&Aazs3Gy&S&t)D;obRGO%_ zK8;vi9lPjQP}rDFAj=F|Vtt{Hko?|PEnwgopy|zMvGFi>FRZOV?pzO%D&eE<1JiWL z@#nKW1@38DaHFk0&H5f-Uqi$^a?!Iiofej%@4dglr&LtP{sLkj(y+ey!D_(Vo7Asy zajN}SUq_0u6_-ifz3&W*6lQUtF5b#y-LX&jFK~DG3w7Z!ffbLT=DF&lx%Bf2iSMYt>wU>^LW0U6B6a!{=UhR$SMF;Yt3|6-yo@# zyX?F*yOr)_q2FicI-0|b0d0LUmvAKXw0h*+4~#VEQBopXW=tf1K+25V-ZEsYriNzY8r z{$b0Q-Dl&mkT=iINv470tb2q_Saw_xu#I-c>hRrB_JiT|#BP_1_rs*$Qj}a)Yn?#$ z>?pLd@lJ{;2{|iAqiyU|@l0w%y2pL9;VE_!+FvH9;aM}xr6)5DZej{H<@p7|AQfwF zVq=G64jaDaFKBSBdMagEqO$hg&r0w4qjJV%h`9a;g=LtFP|W1W?ETqK{m6Nx44!>% z)+V*!_iM5-b1jX@l_B+sz~Wxfe}>$MM``63;Og9jmHVh)sJO-c*-$V#J*8DN*EjW8 zk;j&}6qscpqniofHakhve?jc^!t)&05f&yl3l^#^#jEl-Ppzxh(c{N>C+CBAU+$PR z%GHw5-3CN`v{$pOcvnzKD#!H2a2hXM){EV|BgdrTqsgP5RDbpb)3H~JexfaB_dVLH z3K-@IibBuF;UApU7w>Rc_-*u!a2|M8|5ho3%>~uwWgnmod6eV2A-d&1XW0SM_(XP= zR8|=!TK(a4?cRvzb1WvxLW-}EikQPpCbw>!dps68;>#rGcjIoJL(eUGWjf*b!e)ft zv>^&`R#rRTC{vpg;5`7dVMtjhxX7IJ+-sf*A@q_3+w&7^sJMQ z$Kwc4z0yBGUw4aVhVO zuGoMwV9^S5OBSOq=*Vm8>K^mx-;sl1NlaYg51PLwMpUnHS&1T+lkG{uN;$YHTbZ>g z?tzVySR+?GA08UBDV?Jj~-MR#jC3Tf5tt1GvSkv$jVA@ztvAMLs*-L zdql}1tUE*Zx1O(9!Lj1P)`m}^t3iv5_re{M%Zh`_H+bSx6ZkBvzp-}Rp$hu;AjpBu6ZJ+JQ z8roKb(owkzPnZQ%`?Y|rxORlShUxwvFELJkJ_rA2KVP?rQrrv_2(r4)xsxr1ef{E} zTwVOfx&%*kqy0f2W$!&|+@{K0L#VP0#i!+3+gE$_y_?PQTPY795{!3QJ!dbF5T-SE<8Z^9-X 
zCoqkf^C69)dQzC?;dj|$+i^cqT^}9Wmim;R-LHJ;q`b_wBdXmeKiIw>eV81cyNN=k zAXKO7n_J}AbQQ?Xm)_SWS4R~dYnU(yK8P0pGV8oiHO`6KGaA=zciH#WS-r)@PBE{! z{mXfUXYs9$t#s(&;;8$DYhrGc+3x4HnutBCe@91_jrN>e3M>|<{qybIj!?-{-X{Il z9m(JR^zBiJ&RsaO&(FbPxa5r$S-K+W{pWit^BOAmgD! zq2(U%(BZZI-`7P;*unaj*}tq#Q%CjybhA}$RxQKQlk+!EX7DYl z*Ak~54eQ7wk=op_8B*27VAx!*ku%H*P|#}L?KUZQ=KGFx0iG+_yRzK1rGxI zY_Bw~h%Q{4&L@3HX{7h+GYx5Ch?Y~%8`l95f>%g?zUObAeKjL7V>WZPm8_1G&)fe9 zcJ~259}S&B&TY@%_#d@!V_A+pd=_YZw*6DAIV625`H5Uo*XkY==iShPKHQ2^=C}r3 z7_JCXOt0hr+I)N0Va;(Bdni>M3AKc7Z-Xv^li?u$;$<(`na(;;kE>(-m!!bENO+7C9S^w%(43S=6Z{yNm^doKIm}#;QPw)KSu&gRQvfB z8c2t8>+?TVJ1jqb$m>3b7lr?R=wEW3@!@gno&ot`fr@JRuaw3~zbMbONk$pJzPgA) zM0V8j%8-h(gW~8kOk3>-^?2^AoaW(4w+Lvb0N_@atKH1^heKz5(=de1(Cf)$Y zDVRjW{!wTwEjmWmcdyNwjg_V`$p17GbMH#Gg-IQX-HL>m804d1r9mkefqJU*7MF^$=W_*+(4nqfOp?>O)(w|5z05xA1NFx3YDnzcq~jLsAJcSlnaulUNcqR{qAxo-DEL zat-vm2&o+5oAGK^Ia6?X72D4mt)y&`%nfDzTEX3ZG>4$Z%UUMsf5H3H;}WJ zCH7>tH*YVqj>u_&EM>svBjs1?IgfLorSTg8jZ3cOTP~=bR93nP&oSwY{k(6wy+q&H zvYg2BPKTe=Pfnev!Z>DcZz7pXb6Zt)ac&Rf=bQ9$1POlZyjX1bv6ok7pC0=X}KoZ-<~ONdt#R&sJ<~9d{j|CO@vGG)~2}_ zjGH91*BX(myv}!SAX+c?Ok5WA&Vo$!=*qpT-Fm+uj?d`%t{%F5gG`rQKfq{c}}#nQ~L*>Z+7wdfYG zeR)=JVGvQ7XzjPpk5uu#!9Lwt_Bz6Bs1CpP{d%K^khc9Jd~z9a6C&X%t`g#N)RzcC zKQbG;Ua=;Uk#~UD=_7LI?jw$-WFr&h#u0?e!~2l8ll8FTE` zQxhuz_Yn(@0hNPwx{JfJeAaUICNH6J$smU9Fh@d6$h51c#att154P6@L#SS*`;y2SZru;JXifjtHqj2-5v79}(p`jbouo(%tBQ;d`4Sb#y49txH4J;(ewQ5HKij=Itcze5*ThkfKn?;T7Ja}^@PQ6ozG=_c z$P5AXbg~&bskf(d#e{g>CbQiYlm*mJt{JP1HULqSf$x4E>wiLdrS66T6V^++-SXDZ zy?NMF``Fz>;(@7Y8e{$xS@CFpa7Lq&G_3P7((^}Lm71XM{!nY1TLw}s%---NW(z(N z5Qy90I11qVDJ8vLXGaEuzOcUH>-Tm2+?momtJv1s#AQoH7rTiLSMxvKVZZDVRR@er zUz>hsG06vO-sAM6phYixU?X#_*+ywi+x2Y84iG^O{S(V6C@Aj8*?hb{v^S3%fq{SH zzdG6^k`aFam`R_AxKk%3ePytAMaTApYVG)R)0PaTW>%sPXp&}{r2w_G$k*!JNwr1B zB7YCmhlc6we1VLpLTv%&S9L&QO-L$aJpA61w!FD%& zWJ(}ESL-0o*ZSDa(Qo|zIpBsw>EnXH80`7_tQ1o|a83o#BcSLB`c}`n-VRv zmTCv|L;-els2r9K5z+*BV@nG`jwL$0pyi2l@>=2a%HQ2bUhakGu?&2S zvLO?^DKyWc$UGgRg+dfw{PJ@Q#F^<$+3)04j`#>L&`r>SUjdqw0W&WF-man^%DEC> ztFJFQ`FmP+AkIJe++9S`Js>ogvzBrncB~t0#(yDcQ~o^cZZsEo-+QF?lk~`0)G({`-_-`e-6~ zqBu%X6?_Z1C{w+OZnNjMx6Hx}(^F#5$T=n{U}pJ6qbkb?r`?O0R3040yHmp8n^ zhNgx$vdVytcpp+}zyvMqY%|c{Cw0-M@{~<^%aGR+;f1OG*GKF zyR?ck>NPx|6VYw`=wh_8_-UJ3;rLHPZ+@=N3XE8uBX5uz4{H0Yr~|EYy$x=jnr=kg z3@uk0>$3&^58B=`stE`F9u|?&pfc$Y5E; zyGAo&#DD(2|KEAuJTIT;<#x86vvYPnyPtdS=iY4xCKNgT14T(u;N7;@w7w_hA56m_ z$_|TyH@^#NA~&etjPx?cvM#6Vc6%259&1aIS6g2u)hp;Si{pyW9d6uaKa-UTixn%) z(~*1nI9O{F8Xzb9jTngV6u8}sKjg{OJ^&e56U21Tk@Ccs5G zj}x**_4)4i9h%pyBiaOezw0)OC+rOqQ8(AlW2B_VAnmo~Bxk1kwT~tN&miw5B(`0-a3?B%jKEJzyU0j4 z22W-Rzb6c@A@@2&4ik?4eVl1aUANjUyAJa$XVf7><5jd1dq1P1o$fNKaVJxGYjE|d zWLT?U)Fpqa=1Y#P=UyfSX@`To-50SM47_PID&5&O#qJwvFZ+c55b{4Z;~P4+eqSRg zHo@});(y_>kFYx;Pi%boFKYOI@G`|A-j(LOVH*9BQR~m=Bmeg~B4IVK&ce^f|NEF$ znCX9!@gGzi%gKK*e3er!uj~H;??&MN^KMNmCGdYR`j2w_|FqBMBw!iSv-7D8gYR!{MJ<{CFYt#j;^2iH332JbTd)~{(r z2D;)WdatuV(=`F%dxlrsRk~F#>WGmUmE&utk7c*;-wrb+r@ox=aNj_aEk3$saMx1> zxlmUp0l5Hc65b0mr5P-TkLoG}rPFDAHefj~VV4V?9$oPBb#uHB_)rZJ5kkC6^TRl5 z#>h3HG9n^tAy|E!f!K0KUzqyGn@|#zd>K=Z0ps7!g4b#UQUs)XWzDo^(@5=)LvhR3 z@2OW)L~w+i%R05T>SS*KJ4rk^zq?Mm^jZYCHG+E>#Oofh+RMl|s*Ij0mh*=7y*#(r z--|rR^|e58wH8#)KyL*3({VnqFtK1AZd>kuR_o8!W*{(qmhNs2n%az4VPW|m^GeG* zUV+3_t3IlW(6#dVR2X=(p^6(X8B?*{cg6%I)0HHiFow>ABI@)&lay?gjow@m^4I=%ka<&t5-QcYw?wHtouS$Yw36J z-hWFQx|_CJC?T;fX8CQatsb{YrZ5yq((@?D_>VdZ@YOR z`T+HJ_Y=oU5v$KS#rQvP0~=hDCVD>(V{Pwq%`oHcw(?VWiN2fTOu>!41ibYN}(_3|_ha83$%6PvWQi zmdm3LIwFI^F5>S_6DfVktlg~__|odb@E$R>cxc%Ym+_!h8@*F)(V6HTBZB-_f?9O* 
zm!>#~-ty^Lk%Ys#R-LNona2)$-SDNYIjV>=m>cl}JsmZfZIO&OtWX8-tKta(WLIzcB8XSH&V_ga+PfT20uvdE8i9 z@jru;T*`W!KCenqX*zia{YR8^>JbMFN0%0whdWh~HtCJ@HulI&X0sgF3x}z$yh!LK z1pY16s5T6qS9c1t&K4l__1JI7?(%OGrGXF2pP!Qd{FH=fUu4>sdYi1ivPn_Rk^p~+ zm!~bC4Sf~0C{8Dj+v-1Ftu}0_cDG(D1Bk;Fs z%Lshro-eDjdAVO)-Hh6mpiya?mU)P6gP3gn4F|c9Z}fDY&26ZFVE1$#Zx#0C$Q&c~ z0POKd!?Vsp9;xrH5 zD2vn$Kgt}bkw~i3W86}aHKIkF18M*UvaRDM7IwnRaD2HsjRC$P9^qIQ>Fa28U3%zc zH97nGsD$TST{Py~0Q|n4$4q6v{>EN=#CXwL8%5Ap7Eu`Nneu4NPLOjd#A17F$eQz| zEGd{;f>y?Xw-#|}Nvpi7EpC&zib;$Uc4Y7GAb*>*cu1d|pCfDq82oz4<1>O_~-yQsMHX=?uFycybeAt_{-^i5~ zQ!pLcFn~ULnS!-5RqqCDI*p4Mtn{($*L~rS>ZcMwR;&2|SzNfE*ncr6MH`WsRPqxu~r1AJh&6TK*D6)^bh;Wp}M|C*PJ%%F^? zDrNpv>*L#{w>G2o8AA(86capPA2>N?&?>LVT|v zK#=r4hQso$#DtQK?@KQ#jW(E4&eR!V!dA!HOk05P3G683`O-I<;5j<9TpXq`;+Npa zo~tOK)R3SFRPlfEI!U3Gy$a%QZKNwhzS$c4iH~*tR@fd+Kn?KLJad37~;bLKt7cb@^xboji!@ zd}0WE8dhI0F(U-O1o4JGe@7TXUbG?v9 z9htvD{G82;fD810Kiit_)2)%?Eca8qj+@DSb}F~U0XhxXAjVZNf)az9 z?w2xfWZLRj)?yQlRY#edXe3tIe>6Hlz6Tg1Iv>Z)W{-AuwZ}cB_ZxKDFq@-Lw z_jj@*XDJo)`MBVD#N?)kp@(Rm`X(Ol!KXHh zlFppF%>J9}1LmfY`8U$7!k>r=e<{W*&31%_5ss19LBHxZbp8M>Hr<;cUi^CsBW+Y1a zor#y6y%648!t77qR+nW1%sl>71=-jC(PJ+z+lH&V?pEl>Dvw?7GYskdmfdoe*`HZ; zo+m|%5f%JRJ>jyEjjCxGqv-?D6^HVM?7L7xre8% zqbaWZ;8$vIuJ>q#L~yTk4DJo1zFd8?e3>B^p0(F>__`pyMxa}eHI>-NJaWAgm` z>@T~KEky;L0PC!(TG%)sy3!~=TaBpr87j#1?0kWWZeC3as#=45;~ayA5UJ_BsjN)+ z!Zd&K2_sV+{>m#{423COnPJ)&2hhWiq}3-pr$5#yOaWBfvTon!41=9KOmOjCe3^e)BA1(p_lU0Gd)=yFZsn%L9C4F z5*1YrA|a!no%w7Zk~Xu&EBR_JMrSKTtESS)F+gwpRX;r(j>5`uA&Ucj4SQg!G=8Jj zdPiTOP}omDp=Ygii;h+u)mxlbwik|G^Ci{HHJA<-LLcAaIy`#UOHG08Gv$avr_J;A z2UpynSAyyeNAep(FPM*}IUJJDzoT6UuFJPs*?@=mI%q}@NFdd+sWwXR_ z+(ebp8-LVAfXRkwPHA?Hi(f`Irg+2xZq8lC^486#H?Gs1CrYeHX!Vx3PR|cyeyVVE zdif~~IbzBR&)@c=QlQskHhPyMn(1S8y*3jKvwvNo90CG$yfC{g@_w!^Y$I6iNvCyZq4kOMKeeV}lx!AWCGt_3A)s{q9kRBUv1j6M)kOFj@ANrFJq*sUfnA5GHUrpyw7jjEg_p3j|-1gT62 zQ8Vg*@y>3wPj&;YwR%1u&YHj^Z&Y`dNGWoc_f!aS3)!t*Mvt}=l?$B2ro35=l^P=i z@6canjP$YIhh%I8L+7b;ErW*!FrU>Gil`1_z$f7}1d&LnPEPaMDn94YCWX+sEOM5$ zx(>>`EPXU*SVe|HH-k^rEnfcm?4m=ZVLL;izek>Szmm!%R>28V&c~!O-!SQZbn=yU z#DCyIbg3JJ_Yo&ocjEcnWdgYI7+>kX-Pm6+gs6$Laf$Q03nX-0{q<5uT1cC{bEi)l7bpx`?8E^1)m-GcnuRvdJL-P(w-`{GIqNBvb_t(57! 
zx_r5~D;owc;$v0eUuqY|S1Ri9KfPFV1h3BK+pTj$3A5^O}jQwriUxnMyW#%o9 z4!_ca1oO!~kf$C)dSQJ_51{RN1KFjS2uwDCh-nF$*?xx_$%I-I=i_S%|5IwbTf@MP zoU3Vq*rv-5UXwjiK6mQXszn{2NHQo0oE;x4;ti4tHr>h0jFFb7QQ%J140lNEzP$s=0YdGeQu=T;V=Nj~O4S`C$*7+~JEPFaaKuUXf>5bA&8m z0E#aqhTsC=QrwtF?tb15JeYUQgB1Mg=8_`8o>CEJU-g>uK5s9Q8#+_Zn)#E@asaB~ zd4&|Xzd^{gJ7P1b-S05MoE7s38=-v$c;eYm+5=crs@;nRk-lD@ zzB8+KD)f8+f0@Yh5r9@>`6kG73ImX{*N;u79F>T@3d z($_b=SC{&H?EM>K6hN3ol$6dF)N)YK>5`78zQ3$c6g4ya-pFX^<1>mry1)wC5%uW{ zjy^KXx7EU5CX&iy5Rnm4NYI6@Sjd|Q$LY2+YNZ8-uZW+_gC1Xyno#7pjBIabTQ z+;aprPbF&I4^8(f%3u5>xo^Ag6SUw*XFtDAr{dar&Xm1)Bx*;Zc96$jQ~Z@m#+OW{krNqJTXR_6M8ZaU0;6YEr~V8 zjbSnwg>8Cm5>@?0(e}0k_gNUdY{(5G8yXdp(D>w@^NE zWJ!;M!u!ADl?|19(DyolA758I6Fn6KD*8z{Gdukd|1)=hgQ?n5?FhA`2A+8N@Bowe z^b^iAgr+aE#R+E5N392sDR)Nqx+q6eG)ONmd4*SSTLQklA@8n;!nmUcoQTFmXSUMK ziU1`o;RyK#fD+&jCa6Ib9P0zV7XXbWV)RJ)ThZeoPeqoA*6aUfiTm6PkY;sp`NsR- zz4aEOrN!MG0}3)m+)qu)gPQuv=$JXe-R@uu$j(zF9flVB`(uvMRbNP6{|rLZwbV+0JOj zHRlg%D$8!7kr6~`SdgvO1{!C2SjXk2C-Agw6eeCzL>QZTiPltS0uR{z%H_ve0(mM3 z(ec^bhY}6c`uI`r$p)IKN(F*=&M(N6lX4JEfhoN0gCIQfsgyn?&<={0#Rwmg_h39M z(M9sLaR6B=vAiUfUbiohEnYocJ$USTJoZhze&8Ty-Mf)@6F%)LJ*Wl4ud&vD2gCpm zG#b50y~-hDNM#yv$bU=~c9f6o@_3v(Usr1tSh0L?L5%M~eWcx^#E00)^w)yAfKi)p z0pz_4!O!8Qrv+umdErgfj%>DMHckxq6pLO5aGo!CId6Z^UfBFoesWg_;gfvZ zshIr=a-5c5&1f%70?y_0YC=a)FA}<^rS7c z(7aOj;fN`kpoNcy%Lti>%hmU6YH+`NyZa^9OC&^d+-?6bqYt@aT|51nf5RI`l+5h# z4SnlrOEK{xS!P4NpM%CmN}kS*d{gIUZANydHk%)!xQirR99EXgz&If6t>(6};)kisWATd+_87t)o)(5ZS^s<%ayy;WrxNv!FI&3_% z?ZYD<*&2*uV@XB_w3gH!wo`mt&_aC+-bdgreyS(H+N^#w8GJPUGIEi_iS< zI!;eg(X9FNSiIl!A5ymcpCMVP~l%<-a1-PZe z=HH9oMDCgLJ+@!_H4_>0La<=_xMonSUvpyBWYh>AMexalY>su_4F9_9jVWf#GjyR` zdWTo)$7+Vk^?K4y(;nqF{@U9crQKyrz-yKH4vsxTh@R^pdVw527E|v@J&v2<{#cq# zVn3!wHJ}-#7Ugy#{QHrAvfooKX9F5co7|b1@n=HZAe2|B%HCf@_Ecj#0b>xx=@ZP$ z_uvdmvhU^1c2c16Y_;`lyP48z5Qlh(UH~Qe!YXgc`Fo$UR~Zd3I+ONC*%)qxZ{*^n zrzSUE09Q0l#fM(LlGu1_TVWfjM@Wm|=WvV`ZJ1q=54VBqgla4C;unP8rY0REa~$jT%}3egQu(agK2QnNAg3 zicO2H(J*5_fb47kg^=wR>;svZNl|h$z$%&tO_0R26sn zTL!Xs$CwXmve~y?4?|XoOQloW z3N`Xa1#P@Hj(@l^cI-T#9)&Kn0M__^AGmE4!k8%>teZRVu_(%enlC(-mfFePM9qZ$ zaZ0alUw{zl4i;eb1jZy!t|dJM%nTYOnHz&BnRLB7#q_u3y+G!ZCnKZ>k3+6ZftTxS zCmogNxe1S(Nx0ykS0Q{+JvL=2}PKLtij8UczSXE8elz^mg zYD~#(9~kS1qG=qUrP#(5$*QEr=6<>I9-uyIUPr#B)mv(G9I4=HJ?pti{$9$?ELNdG z<95wJV;a4b-gzPCqug=iTsO4aRkLC|+4=V4=(YN#c%Kq}{dih+nPp5XsSfKErk&G+ zK}JoM;@(9s1rIX;QG^3UGxlB4z;C2?H9~wJr9mHFPDzP<6tQ_YWEMcaZ=F+K2F(Y6 zCn(4JK7;yJ3KRfEQ0cY8zO&)Mkcgzy=loBVSf<*NNg3Q3stppdNEwF+0LWDZEEJby zapl8IYf&YYSH6ls21h4}VR-f^9^&;d{^)W=~biJ^zV%5Vul^!~o z74O3HmgeC$84I-?XlnVer{JCcbTFntWFtAm6d%laRAY51;x+X*Vw!EIGK4hfYpgjz zN2W*EnZ|}YMz}dsJ4$2} z2eo2cvdKe(l0hs~@7-CsTW#o`Tq#}qBU8Z|5^V#LJo?w3|o^$T(Ui8Ot-rKHT5x z1(h!8)JS?$ubTq6PAD~Y*mSu+m=mB~@&<$C`8I06gH5zYhHhrQ)my;1Q=b?EEU5l^ zv!1%#&p^2YYD%o*UGq{MG95zKudG9FO1Yz~a-4hm@4{pl=W**6icP)4=Wt3c|8|wi zyAHBlW2EwHrk(*ly*7ukIk_dLPA-pi<@rr>1UfrvJon*ZD3k&X@#L7oAisi6_R|uYFrI5?4CVq0rl4g}Jj9Zvj(Ke1U zU7lqP*L{VeT)MW|g}0)D&3ns(K-oVnsDrTYh2risZy#cwa;~&mL`*WAKIpxRTJg5- z>~3w6%VLsaJ>x(0&9+*W0Kzqz3`&K~ z>I$uV8#MTBN0ydG50}l6r>P=hk)9^YWb+>24Pjy$z)5-mAu9%3A=dZ#UZb?FUvs7(+ zc!`8)*yjT;n@3-BI=Ij&SsPgS`V2|@x<8+m^A&Rs!?>8&*z+Gf=tmWh0IwRH641^H zYQUz)yfU4?EI<*zXXK(-q)~QWU|+@|E8UZm%ZO`WPepu$Rv~DNb{Z|PW!`eSwyeUZ zFs=%}I$lnrFL8i#^Rcr%>VHTuwFRpj@*xNL=?BJiMqj{R{j5n2x=EI*Szxp~-5`}r znc_j;nPd~Qv3)2e$V!~j)|A#@#@M`<`_*0SXakyhND7GYSW;S$7^wUfFdO4%*3LI# z`4Ta66H1uE^Ohi4RCH2UzD;+akpPRW^F2AOWc#e(!2O126tb5q5rPl7o5fwOWk&da zd=R%Li2QO-W+1!?2cg?cQ$}}uV9#2RT7mZXffpEfXx%R-p*V-+zA)lGI*2A8V()&t z=N^C8Daks|cYe}o<1d7DM6iSArxpO^kzE9$St0MsoB0>PLeU_x;e&De)mT}RVy 
zqZrGj6&`auYkS_1)H79Mr0(nc!~kua_w(Ph@Ko9SX0j%oVtCctzptIHXT32}Y7P&w zp(ZvCvC4k7`FF*QCigWL>?g*b5L611dWU!cZ;lkuudZ$(1->HiGx2)% zJ%oT4gn#YGOIq|5+q=`JWf8-y`dHxc5SVv|NwQSeW29Ryv)xCVD)HTVq>QoyzOnFj zihL6Wv?_B$m-FcTXwE`fgU*7_@LQJ!v5j=BYJ3m?y3CxBS&L0E7kJ9zBEQ1e)hAU} zRQ}qk6%UKaQZ}_Gt>*<%pE;$rZa?fce%F%t z^afyERv0)^UoWQx%TqOtaSD9#Ts=|p9{#okQ2Ikc%WwHA2KsbaoDB+zMUfn5^^!ZS zFh^Mkw5bu(T1i@u`d-#f4!Hu$1Z&6mPynmS=KLu%5811WoNK;9WJFwHp z$|!e8%1LT491AY7(=TF#+)fZXHz!Pa&y{i4yh*T>MEs5yJxfWNNitx-sf;1kMFI*4J-H*l?gX$zuUZ$6J3a8u7LQ5zs<-7J^> zhl0N>nLc$`elM<}IdqLfL$?;{6tgL4mFN1Eh`Gl$dOm1Cb`}UQohP;E39bcsgg;FU z9WI&P`X~wVn*PdjHX7mvqszGqyyp^WY!|A_Hs_NnMo;9IfC*$edyl5{(aWyw;IuTViw69(rzc{IqV;hnCzqgG87{Gx!_^< z`?ody6T20L-=AKn)tFE)v2~35nPk`*;JE!Z7QMu#{jL`?e#QFT%XKD^{#~F+`bP#@ zd@n;NSilCZaE2sP0E_Yc{j3zMT^~i~{Y#Wt`;`P6@hG4BqD$b5GH%sZnDm80o_klL z@v4PlbDY7p`4IzbN5((Ou>;c5SfrHHcx1K4XWR|Ba0!zSVBqe+aX}z+WWuzIs6Uvv zgGt%O%SDaX1ME7uIYQr)Afdv>lrr!-vM*aUSWK-Wae7L`Md!8LGpMSQ)a7%-%uTlX zz8{Rj{W;Rp&lsi0?^z*zi^^+_gr8onRWXATa|ZHyRT>ymg4swE$K6as%AG>i`WLsP zy^9Ex@;#?@!1oQP0V750^?7aSp2YsK)%nDUv6<#qDi#N?-%ml_oXDAg#SEp0Iy3IF zAx32LKJypQ0K*E;ll#PX*Z)9)$L$TFy0J|$p4q4LPx58o2eEdwe0ColFgy<1)n{!@ zFcmPyE(_J`ohddWydD7vU~@_v&$)PiGhSjT)v_=iAtsVlyngq>T!7mDG5U6`C)g{k zaZu?O{Pam#Q`8e+c}nk|@6@jGpJ3J*r(CY3X@1)H4D80+;+?;PblMX?9+(_4AhG<- z0TX}h^i%R&>-oQYXVu9Tg(yi9Dz9!}j{8ns{|4jiij!%}6^q_m{5~NYG~LzIHfy|6 zYP+zQ`%Uq8g5huf4+kuriBFeTF5uVnxakA+>#e`2-CJH7+X;_*e3#D0GmDBn&+QmT zv3%lScM6Ffe6=&VX_OdHxiJ2U#pHbk-92bwQA?i*{5ADncjD~?aF1WCi|dVy*J&AX z{%JpYJBEBzYH|&jd0$!A`iKB0&`dOt6M$@!~Y-6zyI&H;xMhXd6hm0SpE8rA>PTvFuj*7GQq>=Zw~80H0Q)3 z0AYOL$q~&jNn<8lWLxZ?d&O|3WQ~N&8itRA9OgGR;vO3Avd+Y+#qJ5)>Cayl#V`Hi z3G^^cEoCgJ!#`MyHo|+<;TT(^hqs)24}lQdN4U|W9=`M>E4`#oK6uA+SygiYazu6% zTzIorQ1Mfff3%XFSteuqLbom0Hd?d5M}jXuGgN)_6**fTk<-O#x;FXMVr%Q`h+s2e zFEst;UpXOqE&b^32hxz;VZ?h(ZO?e1(D;-+8Tv3SX7;u}GEfZxFu2owDWX9%rA(VK z#Z~m8a^*%%B;F>2S5v!}tUW;s<=QJK6hV#S3HKtM#I|62p2r`&Q-ml!lv9JeQD0cr zrTQnHFVi#u68^|zxd&YMLW49yLXYbZ8#1KZm z^b7UKV=Ni(4fTK_j!F&(>3)Vw{E&Oicv3IDiXw!7nqC9G z2m(yOP&@W(3encIxLi@6iv-6De?rl-I^jt2&!RJ-NE9>~(Ty!bwR;Kq^rIax&oX{a zDV&Hr10-Q(;KjRv_L{K=kX_kT;>|);*r>ltBD?U}5(8yt+d#z@Gha{*8<}6Dpz%+_ zg5T*89iN6Ov)H4{oG~x8cyCOe*6n5NLCF}+P{I2y+MHBY7T-YLjN{#5U9%EU#;T^+U^N)0)r}{0%u=RW!s}Wcm;DMH zp=E-;mkG|_$%&#KyqiXrG0$8JhU;9H6n*`7Bfp6&EKbLuJn$GxOT}v*hK`aBb_(0% znI+iZnxD4Lm`YNbvUs0>X1-D}u>Ebjq!#4FB|U3+r1NQ^e2w)T*)AdX2)=jd!oeXf zA`@%`V-vz`6xm zA9`%p$p`V^OU#`duq4CkleZ%uG&X1l{MVDJAnjTEbf>{IZU&TqxpFZ zS>Nj8%p5V0%$NY`N)f5ikMcJve$!t{W06ssOh(Q8C&)QO+bsJPInuRiR%H4= zo$cmf`S|(ZvDdilJC4(|kQTPbnEql_g&}|NLnyi@7@ziH?^3P^u`{0(?<;L%xB|ofJ^#JnT+k-40(9dOi)Bdzys99)I~Y zK5dE}e~6CYU{+vcFAKHNKDs2C*l8K20IZzPvdZJfvMTq(H%`cAq(CKPg&`+@3(ASJ zn%x^N7xkI?H8{GAH1^Z2!)otb*YT@O`!Jv$4UWc<$ZOG#KujW( zVzy9%-N-SDn0E>GvBHI56A$B~=6u`0D|4$(GB@2aTtwj2hkj&TE~oa3Ghi2`@22 zfjSU>E9eol#8;(1V}ZiZpQWOh5u^o>BF3u7-|IJaB62R3QM*eEDt@=NsxE zGe`lNlEJdk95Gqa% zA{)-?iulvRhUk)64i-AQ%^vkAjdgs7iC&Q+bbl;LjSX7H=U>id^yQ&{mu>5j6miW( zV>)=|6sD6xv>bEIqXR=&f}hD=$uLAbU#Mwuxi98ccd{RJ;z;zh^1W|=%ZZdN@c}MmsA@LPdDTv1N%ho!wf);- zop_B$O}pHNNW^DS$&9B^j7^h?kjwDPi)RB6+XcrcAIMQoi;?(tJEFN9t)EM$%2LE? 
zAXUT$k)>!OHi?{|lKASlLVP^l=)=UB=untp7OA~x$3+$ z8<{>wfXri5+q9EZBr%r9V>K}JTMHe9s`?vZjqnh!@r!4lY|UzGzA7AY&1?C}z!whZ z+A7o<-x!p$z}+q*kbM&KPpqRAa3Y4?`Yzk@`WkG>*!1MLCIDqxt-Ls-ef8s{+RV#d zp|A?38b6+o6DV=20oVt^Pb@Pbs6m*GieugJpL{FFGU*;@667*(R2QwtcN|Z?8-plO zuNNi@&POCeYIcpZK1A3`(!Qf^rJZ$yh9mV_*=JX^$T4bCpI%dg^ie%m*9=mdn-GU=zi0?XxgF$d$3P2FDRVZr9+9^2tCjBH#vP998+*amzi@8r# zJ4*(w2}eKgVsnyyyFA#&!4yD}%ZFQ8R@T=76N0VlVeU&6f}xkk8L4>XUf8kXQZ3H{ z?F|{=KV&GNPjpRtUM@bLeR~?OS!5SmN`_UbqGX&Q9QCH4mt?M$twtGiQk+BZh2ZAl zJd9WVvrXW&Pdlet7qSr=fj-&rwk@^U#*Um&F&)bG_t$f@$>L&^88Omg;w09n1~_QA zV%~|QT5$WVo>ZBPn@3vg%LF~-lq(>4&n3YW`m{$}ZQMS)`_^p6eemMU4aZPkm8|;L zESt>)_r>VGYA4|el?bJ2kw}I&e7CF7+5Ub_=FZKT)oKM$Hp>>QzLV8_C72VdjWhkj z<;Yu+4x`*yO~uN-6~mTFN3QHA!Ro&ej~+dozmREbuA|zJu zYCc~iwP%M)Zm#yde(bu7at|@sw2F{QrqNrZVm}AeL|I)r+Lzr0&FR9gS?nzwd=5Ey zVI>%Ja>ZUH?mk!t!B&DfL9UXUd=NhuEEmcn2 z2kxkrGe@iPW}S-f$zb>_6+Ghn`#w2obmE`9ndEYUQWwFB0`aOhz`HW{%EZ3QF6(L( z4|l+g)9?*XcqoJQH;wZh#%xIUXM1^hEZ{wk{1-K|KS%AO;jtTtE2$Xz7jOlyW0IP? zZh-keltz2yPOy@mg4dM_>BYwt{+N=M&&b6_QwQcuIiVziL?XJFy~SKQs@)X)_YF;9 z7hYd4yuqx%2b7*)X9!zX7%OoUtlQIzj(8Q~&DHdHY;ZMKyr$JE;`pXvmFcCbxbSwR?|bRLE=tQuyCI&C8-2_FD1TFx-lJCPWO^>`ZnGG~l7M))DW*i# zNy=<#Ri+MSiawt$t@9W295}cWIz?h8=nFNCsB`HJTXv-@sFHiudbQn?hmR2w&=2rU zsQqNUXw>)Sl6Oz%7m$TzR#Crs?;oRay6Q4AoxMS9VO3kVzPfJL68%>-Ck{%_%%SSg zbLIWXdF&8wS|h+=?I44(h51f&?6=>Idp@;(4(X~(iz`Eg8mEtk+ZT`U_jADqM+RS| zdp=%mB-kiGy zCtI37=T;Hpj=^@cn~^key;mgK=AnL|dl0peY_;_x9?1V|i zgnh8eQV<9p|2Jt5Lt9D1Dm?wInh7k5ljLAT<-ekdmK7DqNI^J+382rEz$l=PGCfQr zVJI}CI=r%*dOb!dCgf~XX%4Wq?_Hmla#3P1QWK#u>&!iRV8cDP@VPErGd)f(U!xFj zIIgX2Ht2N8U>jac2Rum32}LemH;5juWJVvXf5$>?t~QyZ#6Xh3 z-O!8>q4x3wUGdW~xcFMT!p|KIS&sLFl%H2+y%GYV`X3T&G4=6hO18lESp?(y$ zWLaga7h44~UCwt^%njK~%(uh)IrFNZ=YDA9)MH8-mCyA8qEieUL10z_OEOf5V_k{q zhc(se_?iCJpK_>WQTK_-zInmZQWlFk)RAf)n2hpV7=GsBObHd<67mnH`_;U75c7o{ zW5)CkGsXTCOKG0d=cJ6FXku{^XW3~(D0g_(WN>MfRfL~?sx4igLijs{#yQW-&X=O8 zb!PtRmUI?hj59PvzS~@x;D1YWCQq0WfG`(0md@<)xcDlYeV_>U*rWCUrdHd632aO1_@Ss9a*dO}l4xsIv_i8zs6uGO2YJjGpN>xVuuS$(Whe#A z*^5kan>*zncsI=t(~j5v@b`O6jHI+q5p2Aank#*_6P~HdKRoolht)%Xbs}d^&Fb2VGT(km3WzPhpppX@Eg{6yTP0)Br5z6rxWx z-Px=%-BzfHv# zyT%X$%(vT^SXrzaiX5NFzDs^csLLU6GJI4wssGFGOq0Din^X-x%fIvz(9HpK6T+(} z++@alqm^Mz5JCl-FzA@};Kq^B%9z@)HBXLE$2n${8^dT_+iWoP4bdh0gm;|VEz?{a z*UPaSE;3WC?j3OUe~3HFrZ(GXZMUUBi?%>Kym{W4{bm1w{R=Xg`_AOL);iDQ2q=@%9urIsnnTIJ5aV5WlN^;`K0Vb8Q@^SJxqiu zIFikhXI@(ljXy0``Ep*2&LLo@1~RrS5uI;m4Kn^FoBEPq2weCh3_3x&-~g7^_}cmk zw`8-l);aFPVfcdypI*29AF+(`z9#&;Lxv6|z+m@xEF6r|HoC>HpxFc=QjL%!&z5+P zH~=K~kqIMZiieQ64Z@{}C!Ud%74bKRF;Rf=Z(VN+`G|8NY3K20(0 z`O_h!?O$b0L8_SincP>8V2wd!gxziY22K?dohd7da0-T>u+AG^q)V| zhLYM%Vy;4j$Bg3T|cfFt6AID=nHD%$XVC=mb}}xwSQJiLD%c0w5rjJvlMI!ah zK>Y{TGZF%}umGFb_VdgteAgI(PYP93h3<#3qu}`reCyBxBZn3)Hnga5+ev3B<7mOT zVkSeU!_M5#0vR{qOkd05k~&G5k}ukrpg4Ma9xrWQ5l-u3FjkdB5+!p7wXz9K1B9dv zW@Xx!7Trp6uhQJ?FT5T<-#wASW*y|Ph3KuvcCovdNLj^~~ZVqr;J7JXm;iyg*s(J#AW zj7>e?LjA~KMOFdi;(@ zpw&KmE<0~Bzm*pPXU<>~BK_oq)R?*Ql-XI6h1|FJZtzCw1@h3^!Yi z>Bgi()`LLA*t=KvE)VOD{`F454Yp>cG#{Mqy^Np(_>zuf?nK18(z-TfaeaE729l-V zC|bi%TS;vTL1Y7{d8XGqeteNt<@4W!NXW;IiVT-pAt#JOtlCaM?i(vv%p3=ko2Gvv zE;Oy2Ut`qic`enAWp8wmbEGlXh`HeRmn$(B*R1C@0ps=UaoBp8M-N81r%B=_hgJs# z{Y#}sn8}wxou8nq8Yv9Qwtls3C^{J{53Z~F`~_VW$}C`bSMa=&&u@S^dwlAs$Dk*+ zRvyn<3Y|gydrsWpvI|a-U^J0Fb@IG&*nr@ zcxeBB&jZ5sQK}NM#fDERVkz7t!E2ncK4SIUF|Gy^zc{C;0k(5D$fEgr%OGV&3z@Jl z%A<`y+(h`De$BFcNRrt~C>~-*J;(cIzt-01-K5b^Dc7RUzU9sB8amJxe=D{V(&O%w3OvrSlzU=|ar1*%$Vp zke;7~gCYKV`{Hic4A$-PO92k|RBLe8<1m}_b8c2#cvhQAL;XljXQRyxBrPiX^_*~$ zO-4yf`e;`&Pj}OeibY1viAx}c)xYcIh|#%$x$N##tV!Qv{lj&RIu5A;`cT6Nb1wFS 
zW@uuj?^j50N|zl?02vlod@!$Y?8qNEKW~o`#m7BwWh1b4-h;6T*|#N0-KSY-`sZOBdFv2$A)xWHR+;ps9(EjOJo`3C2mui9@x zmqfTLRrHbPcw%`$oIO1in9w;}QG(K~#gwgGqt=!slRMAmq&4|a+zf%PT&xt@2A(RN zD27s^An8W#8@>?&Rypav<$JPNvHE>)J z(pF+EK0zl!#*^L=T-0r(FOg%7drkt*1Lt2OD^+VtXjGj&;!}dncBw*)5dq~s?gl0S zr=BEiFKO3CozJC94h>ks=hJ!vdJchJU(7{uDTPpC&~jC$EE2*aJhaKosAIfR9SrCm z--MszWiD^ORQ%VA8PHz1lGxbBpy)fc&Z7^$32`krvB=74qd)z9`BvGx1wKX0nO(lS zcF45i^_Eu~5QVcgi?hpOqbQ`N*1J|(Vy1|Fs3*;W-q&ckbvqZ=rC2tMQ1uwPPMGEw z{EaU7YTTQcPFiSe@xyS=cx7q2`Pm;Wb!dHQf~3mAHvgPa7{wDi=wx#rqgI)e4B&ZI zQVWdiX?99#AsyvMu{i4|QNu`$Iz8?8ID5d&SzGDVg(5EWb;Wd{foCY=YOjC$TmfH^ z#Ca13>(2~@q%t-I?KVeQGb(RDPg^}frn?NiWSygn3|HTjNeu27K=kPNlp46xgHQBG z5MxH3JL0ze_i_`s4hQ!0Wq^E}X6<8r?6UE6N4w8g_SBmPg@8h^Ap5<`8lCXGnac*i z7VS`^iX&RJ2-GAX`%@?B_jz*Um1r_1z2o)$cd7wOT&k7X`%k#GuOW`Uli}I`GD`ZF zaFbA>TcWDnBU-?sNiy(8vJd6nC=TLK{}87MMEfg>HFH644F|(+6zgaL-Jb2*-^Ooo z;EtKu1L|oTaAUteTa4{(5Sl=Q+fV~=HYIA|f9E@jdh2la5;!~LFHd-dacs!xDq`+K zQElJNAm8`dNKA7(RJ)rl+53!pn-3BHk*-;~kl2>H%Un>WTai(+O11Bq)QV5#iSmsn zPxQtVVu4g)o9^9aJSn{+79Ls<4?o$buY0BxsVwgkpuPbdyDsiXE?ZOpw{|- z+m4Rc!foFggtm&_Xxe`^KwMRSV)cAy>a3ef$P48SpE{77RLP{u6mM3|EF#7YW3kUgdqq^X8~j z?EmCkwr@Tg5WX9JYfX(>jyp{bJYM_|`F6H38z7LB^4Q&ojH~$13wgEn+bNVhx{tZ> ziM20_b!+Cbg-P_^e~SVQRNnK*17&)2JXJEq*C%hUh&Lx$S62km>q_DaPZLORZP7L( zDF|*)8Khpw`6!L+mA;I*H-(>9&a;laPkw4Z4RHSOz|G36D;g6JO3U9YtVeM0@vt<^ zHr&aReZ@K?Zr^#lWxwkFMU0`zv3zjeu7e$h;h}W25#aE>qr|~ z%1Hu)k@~l2?*IYqWFNmcea!wuc0W8P^M_#mOSpsmOr0)m<|2+GX#tatT3}uv?mMC1 zyTgCH%wxM0t9T@j5i2_1K9hF7opQTp>27S;c!1?OUT@6}`dn+fw{k>k|7-tF5LnpB zZ8YSkQZrq<8jK-)+;||fc}TAxT13@1t9^FQSl01_b=;sf4(qaI842s1Wo#2REs(TS ze;5uWmF~cI>qsS;=;BvgvWgkTjwi=<(~J=gep$-`QK#Bk>ZfQa;Kptx39UC-Vq2Yb z$v6HiOE0irbvrj8tw#}R+w=OL6C~S0ehYEFh>D1DyWTyd<&`;9{XM)rJXS>j>B!gK z$*(5Ov-PwAdHo(xNSOvm@cdJs$U~XkDrKSbv`<`EM6OIRQ5i3!R=*pDiE?|gf>2O0 zqkb$mL*&$Dg>ko)I|SKDp9W46;eC%a?H5WbQpL*J>EQ9ceL~Tj2nLl$rVL^n3lNzL z*ajb?5$Q0&&|PJK8VkZv*0BD`=_Kbf#kiY{pCyBJW319S^6Y)WC-!tm7V^+XTrgsM z90Xh&X?vP`I-5jWmslTP(pjHDkw!UiQB1R)g`W$YF!g5>jL z0~SGB@SkteshQ52-+}JSkHr6G31rq~d9N*!5jEjbcxM+Cyr;EMw*h&V*3%DWUc!70 z_e*msRIjeubq1FA}`NP`+okwae#Q5+q+(z7R ztL?L5QFDn^J@xMuVDRt+6nQ{*T< zSlRp6#hQ+LG~Z_`uR($V(t(@oapKve_g3FqMd643 zPc4y4toqU}(6gVxkYUpMe6qHU#SqY4gM;qE9L)&fR;MHWRV^x;ll1~c>#37YToez& z!mmMV6sjgP8)sTq%3E7y+fQqCeN>^&&}PiJ+=)s0yl74<_~c(Wq*}e9($-VQL;k%% zp*C0;75~_@-?=NhoKg2fPGHXjPjYHlN>>wuEGy|CKg8Uf*T5>T&`|X!m)lU^OTZcq zU=EYtJaq3@U!v%7oU~tC(tF_{{3=hpRH&DA@CoktuKr?i?`Es0n||KHigQgnLe_{k zJ^_R&xI=n@e8ix^^xT_e@<~;TgY%vo6Y`n;_1Z-31*M z>SW)PRL=CLGN|J^7hhL2l2xu;mj7U_&bzD&E1ZBQ3X8VR(Oa7le>voYc3$Mq(gz?o z{`BuB7qPpXGsiLaZ)qNg$5_LXCFgjyrtP>#Vu{63x8_Ck>1`$8S+$NP(JndIzB5Wc z*Tiqs`2Wi=H->awSaqz5?>?@5!vRekg&>b9quAqxTj5+PPEOJd?R|AO6a{&;(d!>$nn-X7IO1wLunW_tUtsYWArip@gH_^mH@2dH8}W(5u7*)y%IwMvmu zc~I}P*(HJJdS6$vVGHz?HM`+&BD5Byj%`Pasn|KT;G*uX`ri(g@6*2af98dqwKbiHq$z0%ir3fh&}H`f(FbK_PV+z zf4%B8PQNAN$(!JnXfSiO;U#gCXF|!_qm4I?{=N8fiEMewGk?6SN|EZA14VDtm8(x{hb{w!otuTHJ#5AUTb7P`r!%pY-FPIHDPan&#RbY0l0zXga3n##z#MR|vcO95 zZ*ekQP5XIC2C>*IR7*ju?Lm0l;MLbfDp82I4xG~EofbofHa`hfUemO zPoN^m&}olxBQnR0{J3tg>d^ZcmIq0<$kG9+VLEjru$~JS*^A}Kfa<< zrK^@%KDp{FQPpul5R>6~v+N32By2akqU{o^MUVedL4Dfi%8mLT*6A(>gA|wur3S6S z2N0pG5(+E%=iQm+#XVaBOhcSegK|~ACSg({J+Ngz+tSJ0cu?}Q{(Qn2Ru5>^YLnMW z4OVJk4PqO5(~|eM9V5$eY1;D)Uk&Z=wy+2C!jq+SXdV7|z_MB?dVz+LL-^icL!0sh z%WOJ_(Mi$7>N*#NmYVh6s z8J}#527TUNDr6=2(9a{K&^H}=gmaFi(nj%bFw#RPLXU2<=EuQnm^g8o&bt!!ic=x+ zr&6fp{qFYRd}u0bkx|_#y9}lJ*89Xblt+mOy4kV9;_1UnSz8W4H*Rx!jtQY=2~t?W zmf@#=?vZVdgWu0Ig=rmYBq3GIyZ@=irY_UCuO+UdJjFdX+uL*}4ov%UYVUJm{KI~P z)-a~X2x(h#@y=Eb7ClQDQDqj`oMtOVA?w_yivilJ^Y3`)IA2UPAF6n$NnG~kPg||c 
zIB`;_1|(CTd_w?RXBQ5r92&b^6&LgGkf=8j#h$VNnrA zv8wiW6YFap7AI5LaKXOAV3Vls_N?aw?8leoB-JfQ?>CfiTN27UayfSN3{H$qw18Sb z#@O5hlH1W$DrrLMRg|4m%Zpdpt%Z4cIjY@iCHoQ@TVn8>^0T@#{C&Qqf%s=TIjZlj zGbvH_RV@G6464iu2c#* zf2HDPNSi8)@ttzGwAb9+dhG=pl73bgn6sP#F0Z27lR5)vH)}$F3OHdEljeWR!FcfT-@PMdQWxcwO*If7cXHtk;qz02Mj3p=x#68!_)2`ndJ#ka&Ud_a|d-+Zfv)y4Ssc$UlP1UZ=iGGNrC^%EH;xbX=4@|r)o zeL4^4#x8EnrRig9a?l#1U`zD0Y{J~k%(b{pXk@A4YEBsx#Virp1d@DGPaS$3znwLv zE`ZvyyjIg|Ps<3@7{sGbbd)P1jV@G+PadO7`)bIM=+pki8$QNTx?z8w9P5|B@prjO zhB0=ql3Xt|buW~wAnr*26Gh^=RUgFj^GNgg#QJclsP1K`;5f5y-|vls%uU%Fa7Rx< zZgZ`;;ki8;YVbg&ZH{M!;z>tYd`6@M9-q*|(A z)W382?&WCjP7h%?XHF&TeVQKKiDXSa(y`}ie_Y%hor^pl_6&5iF)Hn7F@{yqW?M&? zMZVM4VY6gWH45L{kXf!gWIAQ!$qWq+PsJUBg0^H!~j z$IlajTSdbRoevHUzb<&)%Qe~YNvfgtP3Ax%F4XmsD=|p3<6oNo@N)m-3{xlFbMcU0 z3H^z2NzN8!m|^arA4S*8{9|8azw6OSM1K0s?A@CdZAI&nTY)KdBJE#O7lK{bJ(@E6 zO`@Igl>&De8%1{q^cC{@Ul^EWqA+L*G%1n_W%spcqlBx8@(u6sB`wAz`X_OmT5^Ib}PA^JhK= zAOi|^|EmP|KTeJRlM>vXNSb1PXBKAPbTA%ygsaJY?wZ^B4YcMQdF&b#mFAv0FEpEj z-&KE-b1+x{Y})l3hCJVK9u@}{*pr3XKfrj@g1{>q7mM~Go|elGpJfhDPF^Vq`D(kz zjU!DxvV^W{s*s&S_9*SQ2*r_nx7kZ!wu*?>zwKYgJ6Y5Wkds72s5>32g|9WVgn7U? ze;p>ymleTMR9zbVAlWskCiJ1jmuw1FQCh;-P?zsxg{}t)=N)q?fVFo0wgMRDT+Emm} zV2zTJ8kZGKYW!_~F+5;9qTg3E(u$|W-_BLMM0##jlH=`gmv(-`>wk+8t4ema?Xw8a zRkJtS+&WA1b1M~u!!wRf{;4(>c+BwR<y=#3f8^0X>OrE8Rb=w#KnhLI zCwUu?mU3Z}G*%7=Oap^wf8@{Tw1~C3VxG0q9@=uGWqdsT();RcCZu%pa0Z(oLO`3; zoMWbjo83l-?7L!$%qd6F*jPzCpTGy5`(I8Ht3jXa*U5()3xKt&2&u`;d6I>5opB}o z8(ZVl?hgLiW#nRwN{Ad*s2LZv^v0&iV$KybUEQV>OG==QM!wJQ9_8ZH50?W*ZJ&ozAU{>x}`5L$Z73}tPtwdpXr$&Pk z2Nyfr3AXLY{os9zF|%5gy5#srtdzNiK^IYTIL5)}OPfHCta0DBp$6D!Q?xOOM^8V; z@8gy?#|dmRb{COp)(^7f-Vlnchms;Cm&<-0tgHeDF+2*b|`k8IW)Vn{`Bz**`*oe1Qh%;TMV<#dFdOjL^VmW zBN&U*R?LEsyPoS_#2aV9w%JSN{!_^Uv%g zp?Jm=?Q!bLV8?~tQ0iwCUN>p1F*5-l1-6?Ue38cY<4P3YEVpQQMO$m40qa!^1t{&I z{x1C6?zPO-&kz4iRJIphU&>*9A%h~1snQ=kuaZ$c@9d4V2V`Gl=n&jRVRq;@cN=BD z{6Gi68~5@>uHLe)ZR5L3QrbosYVtK2#n;{4wAxN(mu#x25QdTy!@I7_2z~}X{7Ecjw;BdKWJ64&*Ic8pL@EEtF$pZn2VcYWl z*trbJOaTNLAp;!=qE8E&KZWn9t6#zfabPLD*Dm z#7eqM^5jAuP&*CWm6if=5V`h3DL9slCtuO>?F>88Hca-}Fbg zdnle~YDy#{-q-&=tAfGg-`A4;1`%loP0z%5#sr3tTx&}n-Qf78*^Pye-Fyfk)2~S- z7*P1ZT4OQ9mELo>z-1+Tviw>AeVjGZCqB@wKbP>NL1O9E8IK*?0@Xd=N!OY$htrtY;tz}!@v3%4g z&44d8T|d%+J*91GsyXd?6n7{98+tyIjI1=ve1vLWxLCeMaYJ#x^Ut7ExMk|eQYM>M zKpw9S2D|`k8Fow@H2Y32G-Gqcm09)jwCypRWI*`|A6xC^o2txMJt8B zs%qbo$WlpUaw-+>pD82n`B&DEy8Up#@+uZd$Nd`~_-W_!D}(){%0aE%r52Uf{eIR? z@$Ur!(?4RfAINY3E$^(_rm4+O^xk}d22FuZ%TxT}gq5=ap0wuJyDAzno2Xb^gH^p4 zhMk|Q>6DjJ~ZN!c7e3y~KBCVRy^7~`Xn%c%POO0)J0wCO2x3%6hWbUsEz9g3> z$>}twg(GX=w7E8v@05G>j=#l6@wDPxb|9^4L*+vdq{7eA>P9?AGzurOv zUbVZ5o2JDsi=RS%x!d656Kl^5IDiN0ZW52W3MUSFyX?9sKB9C$w=wIv?lxMqwYU7C6>C zF))GH_688>>cx%B3=3ORU8v|gKsMdIPnsuIp3PQPI$!^S&F&!O-t6pEy?Lt>W;`5n zd?=3|;2C^mo30t71#1QfuTdQP0}Lm>kvCwd?{LnkIM@;zriXrj9oG}TT^x4tbBGZ$ z2cODD)-XbrFqMM_Ki0-5qx1acR1+ntX5i10(H6OA_KZ7=o|s#!iKaD~MRc^A=?0Bv zW&f+pZ`Uj6*bVKByk87l%IZ>?k5V8jLh(FQFyhLWHH#%>Zzmg3$}DXSfCK@(Awm9K zpUq3?VW&{auKDaXu9^MQCeR+I?i|s=b>;jqzea=p67#>?hG3qe21t|#F1rp-xnE5G zyj{*0^-UUzbS9su(Gyh~i(sL;+%g%kJCOAn z%|RA_Zm9&UJ3Q(cLHBhJ5-<-4XQn#}yc`*}Hcxip6&IP48s$U_Ha1t~b|hv6cK!Wc z|H(Pbk8>U;v&h&9thT&2&y=G*8eCn6ju*%Nh*Md^u_}^ki(^*B(-y;Qv_($dJpv_! 
z9#7|I2w|H@50uZn4^c3byFUewF1eh2_kj!@=My`8E$8osWo!DQva25AGb|3|X7;ASY8ecyU9XnIfZJevgZvK1ifPk0atAD}u49rVevc}KHP%$#$ ztUBuiBstvx%^Gb%nU&*R-r)30X{U)eHv{Aoz>m`}c*s(U!vvbXpNCCU(jc;d5MfE4D_SrL$do;lUB(!(XKJ(wwkDGrF>DhB=GX`4vkwlGk!~-g z=qdLkt9O#I3&Gfnm3+Yie3Ie)M#T{NtNcB7s^Vu`D!82Xz2alM~#;J zmkhs|UnktKe=SZ11l%4!nzi~CEG*KW&jsX6s^%-H)FzY6{2H^x@1Me59b1@;Al3(i zlx$!2ZqB0M&)VzIG05caXnK<9Q3clTVZjrg&y*rAb>mQdUd+v z=GyIgr^tUfdyU!Arw`vy=064wdJIAqG4rzyH+pwey4m*Q)9do~QyU8^-&EH$>F+k#~af8WS>{DJ>?qkt=BVjPYtf&v_K5%1*^!Ac4CP%YX;17x6MEhE%7-f z;1?3q(J-}}m3QLp#Roc>q(n3A`{7qEt}0wlSBo37^Ch>Oj3ZOom|ZEys4;J5cgG8{ z&K$`C>DDD~i2qLJZGY*sV!%&2-;69vc#d4tulD)%S$9g_Ztu@$E>SG<54XDT+7_`^ zLt}Q}^V~7#h>Ms7F)V_!8$PLS&!RYvHnv!K!n9M9JQhRpnN9F% z;L=+1U)@vP;%>o%F8Lu(7wnj|%NHiRys%W-5sof&@(QeYR7LN+d8>WfWYxU@GXgh# z3wV@Wq%#Fu5L!1L%Cs551u+U@joORdBCC&i&MB=>EqNVPA$n?k|L;MgTpnW&`3v$ceOJ-$ctzgyIF&3fShp)F! z2C{X6)NJz}4o_#k;!_ux{mZ%f;m57EQu%W$B)2^{#4A&ra{a4WJiwfIhBbFdnRdJn zV?Y!N{*mg{ui9hF5_fg|mnZ6_%4Z)Ev%Da2YhtGw^KvR$+06DrAkhobV}33<`s^q- z2)()dp{~o0WPBo!+HvZ)B(f3q4EtN;$h^&+#8*!2^Xhv3hEDHp~! z0>Z?wlrfr1*!$3Kl?d-7t8&+scznXt%7y?zqEzEbe==s8O4 zb(&YLv&Qn`4Ss7LOH>1{#Br^_YT5<(4d>WDCaK0JpL!Swo-Euk`#-iCZXL}LQ#1%c zLavG61@z(9Z!!)1FHPLieDCDCpZ@9cBX0A4KZZ~d9c`;;QVi5RKz+4Lo{y1E>Y(IP zkS|iHwXy5wh$Zl{uvMTlbD%WRSWZrFI>96GbiRMkz#Tm%xclb{iZU+t&ZlG3iIpic z7h=#Fp;B}3R?H%j(@^0aBfyBC*60sv zj^T!lK0V2}p;mA7@0T2Vvpu`m=N>n?=2Xj^t$~f#Cx+e3($^lxWxXMlbGUAqN%XA4 z+P;IWN!{e_yI3J_(e|)YgF|i@+m>5q=@wx`uu0d`J=KrA@NQC&f9Ql#?M)U;Z^1X9#8w9pY;8DRB$Hv1BiT3 zY0U`eRNk#Zw1?F1hTf$27?q31t^Bljp{;)pXIP{~UN8k?l8$OJh&8w9E50h-`6#5p z2SS_q10B@J_PAR;7|jRHzV2qTbxgTb6JsR(qLy^pYEQ=&I}r$$6?`mrcekj6Y&Ofn z{33yov6M_cT>T0P8JRw`)rY($f>JHd9#}END01U_S3mvI%a>F$|ggYU|VF zdBVEY5A*)0HpR=^GG_wrgr>xol_Yoa2)K0%>t8UPGuR%?xMQJz=~S~ewH7EAngqv| zih$}uJ(vhNn2Ptl(-o#$X&+pepFV-6c7jU`ftB`u*L5GtC^fP;N#kAqL{eCO4-Dnu zc6lpwIw#yQ#D7i|5GHUxtayJINlAu8DDET@C+2HP76ST--o(8YyfaG**4wV`S%vOoy~M3Ch@zCkzoNf1DO$;f=r}2#{6xWj9eOi~MbD+}NIe9OFpWpFP5 zsI4_*0H=}u1Qe`DSnB&U0b=2_=&7CoI%VYLp+*Xcf%-iYZ1USaRBErauuJJ$)fdvL z9#nRHX!@`wadgPM=!)|v1E6d2d|gz3J&evz_8#;%=tCsSUXOXa3_3tmH-fOf$;!P5 zHt=eyQB=w(u0iKr7P*z<$CM&3mh)SVz=DX+ae0wAiO$j2%vH|DfkMHNvp%y=S4^7v zl02xl?_xe2FNF=Sh|zmF5#5l9E!NeWlBo`+=@@&SXT}3BMHU1w<7^ELYt3D=-|$1O z^Ix{~2KehYMf;0&eMWwlDY+pKO^Tm^(s*)9@HM@;{EX zVen<4bP4z=O`h|bd$2CE#ktNrUvo8SRO}7!kH9b=3Bx7_fcf*rfP3pG^?A{1nxi#( zwfd5=DBVn{XN=PUL1H;_BRJUji||G0XRVMggn)>Xx-klE(>c&;xv-M%`^; zZML8{HoZ1u0_fIXi0Or#^eJ-<`S1Cz>tE0DYF1~et&qy|>x$3A?z?aZ9kkkXs<%%Y zcm@}DU|q-7BMY}h^;*1fb?s%wtyQs&In&$?fZ;`h+XVyG`B9d0yyuDiD~ASboz_DK ztrj&C=ALxhz!2w&#aP^<%%t>x=~Xq9J3%FDyyW%9#Bn+c88tEO{j=| z9TnYmKh#(t1GDn@yZYCe3Zm8f+-8MQnWY*|2n&$mv|-|);03!%s&;n1RyK0u?WrPBgp}#TjR+|@ z*|e9@BCdf9vXC>})k(Qq$#EBdf=tS|qh;cS3GB85>RMYt4uhH{p{@x1I@VgU-4Zse zJfypYtRJ&2@~qL;+LQulfejh(Mz9M=d3z!vJB@y%WlIY$d*?WtEM4awg)@|T&JA2& zw<2jTy^S5qIqTrX^AGPiv5w%=LI(CW8STVsj&3K!QFUs;6JQ!@6Nl%@2@( zqB6Xw`1Rf5(4oWIDNaE2nwhtY!hMI)L|xowGr;Yf*ueQ;jc(U>33c?D)`50Uq-7>q zFRd#l+oy+&-;+onNRq2UrA~QBfXO^|f78}ag4El*+ZLsfbg^}%es*S3%M^|5w?3&&bH)WE zV`==ZW}19XpU5u5n9U`Rq6a?j{fm{lIexueRy${T?blR|iTgGt)m6Dxos*(5vmG0kd#sO*&TZW!PZ7;-lYP8f- z1iI7BZGc0iK1IuJRcd&K$bvQDCTV}K14|R>M4hf9=3!y*P>7h52>`;aMn&M|hKGU* zOGd|++%RQ@#nZ}Jtg5?3jL_;v(Ohc}0LXglgbY>3M%Z7^Iep?%*S34j6Do$|Si{|z zrK1@VRsCjEAEB?~=%GQS5&q@4A{K?BkO|^%&%q`OH zTFLlOW-_!+>U3A0DC$v8jS|591Z+g@mLEzfvLq*?Wy10xjpc8zw))UO4pH6Lz8*9VTsqGq z%2%ajAgJ9v&zH?0k8h~@B;4XC#F`UxH?JW)Ch(Stjw=g`s*RFPWnJn!IGj^FxQd}| z-C)%8KIuYhkNTKoWB2OCl`BPfo*_-tiHFbmVeTH1$=P4k(SZUeP(CT@w1ge_OF}I( z!;72(o4uSX9C28@>}>%I$y>QuBrWZ0FPP7lv~Tr5~J_f;SVeFou^WO_4D*r 
z=oNmUYj#4hY;K@yq}bjb#aZ^}1{B;6@JU$u;|SEN)xXCP@V>*_cgSGx9Ij3E+652S zlWE)ctqe&h1@QdcIwp&?dA+SA)4b*V}yx&>5stT_o&SlXNxYRzx zBGA7(%xqnoa6AYf++77ioo+HLB8aHATOB~}Y5S0-*|DcPHk8&Nv%*rPk>XyGmk))( zLBmJ4RZe#Fx*r*Y>-QBG<*$W>!flGY5n0cG>M~-PXicJaycjNe&qw1o+U&{$MBF`NOd>=K3uJe@orH zg&#iyxQzo?#W+Q@J^j^Q$ug`-l--c=k@N~G5s3bV!5BEOc1Dr$T*kcN_JUX@vJg25 zMl5qahKe!fo3in%G=`*cpRydBg7MfL266Tf4%PVDLIq2`sUk1!-q^7ypjj}os^)eI$>O*S0myKf5gQH^|0 z8*ULbd|3y#mIn8V==%q^nkYA~NZv%8umv4_-3O%>ou*CPf?g~BTh&!Je})G0==NC% z5d>XNPo=AGh%aS8_`wYU~{cbDK4EAH+E zid%sIE$&*}-7OTCP~1}7g8R$8_x&Uv$w|(cJu`b|_L^BPTWGs?*T5iumrm_Nx;r$c zw8j8ikpP?t)+&HbR$QhjAlUZh;CH{KGYt$Cr8Clr+cSC5HFo3$&%J_vfLYCVA&2;# zW>dw7l;XZ-VTesxjMRg9&*1uqvcLHIBXyZOqbhxl|K}>aa3P{%-0Fd?hH^={luJu4 z7iw~<_H5a|ALGTuD$2qtF&+pi1_;z5ebm=&^~ay0zK{ZLD>zJFuDZ7YmW8dCZeOeK za`qS@1F2Lt)uv`o+Ls6AvP>F;1CVSGKLn2j*4kfHkv1)-1-{zW$E zItc%MXd6&D-+Gf2VrctO^jg^sn0q;x+N(*Mk8rEpJkgMsj*cHBEWSiw8(5%tQ}rzt z0P%y_xcG`nt4vZ zknoWoyw@IeRCB{}iysp}1!iE~D|Vj>jiI=I z8Y__2ga?eyscRdwT8y^^r8R1*CXAr*dkg9Xh=86|Cn15!-f-C6KDv}uSz`SPc#2wJ z!B2DI0?&`5-t-rP)>2u(8Wd8^WcKe^JK|_Ui-^4yoyOhg2V&@)V@kq7UCC)&5B9A= zaP17PuM(30oYF8iQHdvm?GrlXEulPm8I~M|p4ayuM+YwCp+g`r3G0K;811y)@Pyyt zrp)9P0K9DqS%CVT1qE$=%r1wu6aF*zLPg$+Zu)bgH-p-A7haKfjJoCN4yb`4aY8eh zT+@%i$9H{QF+mVNLRhyS@_hK5Pn^y~xh0O{4-FWjp9N%{yU2(w3IP~`G9~q;xO298 zwY<HnUD zZCbaj?-}3*!n& zh;=GFP3=de@kQo8k51B^ED!X7f|a%YrmMd>wQ{KDK?-DT-v3~xN$B})>qOC4jtWOk zYfn=&)mzM5B^4X*Ro1PQsH>RRIVtFC?Gd@4T(oQhL3>MUaFXfvM@3-3Ig9&LZ<-h9 zgDp^5p>!&%GQZrG(UOGl;Tv6yGMXxgs~s(HU)BjEFedatOr@*iz#Hm$%i~#-?tKku zMmNgcS-+M>_4DaaVv=CgQ}K^Ig&oCRp4SKBxAu%R>>Kwd^gWW%tw)&;FS*FRsgI66 zi~GhyPH9COe6e-sS|35xr4|pb5jg7OHjLUxgAxCZo^lK>38!hT;e24Ue_TQD8xBEl z>j_+3XxpB#zb(#h6oS1ltq)f&hcG?PAZCakhul2e+4NA;~rspy$r@c{qLAf5)H zxNr<_W9Y&(xKX5W%|Sx3W3XxV(5L|k)}UnFTX6|l6+wX!OE+Y+*U$^fMrxgcU|vsS zY{mC}0_}Y_1`33JNC4<_&D&p-Gj6|3E(isMIWpYyjXj`DgB?Ou!u|9be(cbbVGC&7G(94p0@s7Q4E~5p*h7P z$L=k~$U;wjN{=x#+fWIt*@Y73Tp{i1e~@gC5spV+b@xuy|9Fu9IGx5%t1_!lP=V? z)UTuq&AGG2?bnu`>XX&^6`Y5vG-1%U0+YzmD6+yPJHQr``2{PQOx@{6rIGr)ik^vb%9S zixQn#Uw?{$Aqea|W+3)*wO5sm1*Q&vwKT?zENi2Ky!;p=$k za6*cN;A+|lGe1o%Q@Je*?Qb=@L1N;Q z20tQ`wk=mxrK?z5Cq?2|s(|Ujh$8jkCQ1FnJM=v-ssCvy>D{pfyQ}3!=4nf;yN{1% zj0!`2D#O_kLxKPhe$Pp_1B?ASR9F@GCT7hvo+6LYxaz^_Lr%$p6=|M2ZI|==_V-I-_1r`1{f?n@w# zkocnNjG#-X|7nhty@Ua0bguNK0rTIa=uDAm8gA$fIn`=M6nWH9Rf=*{V7UF=Jc3|V zgm>&)M%w)7ko>~_<}kINX%ZbmZ5|L0^sPa$EkB6LcNTUj@LJioK-9c#%d`LXYNE(+ zQwOrPe5Av;$1(B4^_0K!)o1(g|51*rR~nwXtrz>JDKoiKonDTZ)AgSLXB zDxTUS8z>dvRxNxzc`qeBQ~6~MmO`knUhY;Rc|(_om=(68o6+}HNjMZDjOZY40SAM| zrMJvrMv+j$UV6F=pA@rvrB3fuZiN^9)=U!hHsD{SHw27Y+pwT)^n55EgeMoBxYihp zQLoDRmeg~OQ)lnMxoKN*Kug<{*~G8Q$3gdQyU{kQqE$6tk-FIA3);}k<(dBJ5sk_Y z&2oIhwsS~aMBN&6H z(v^ZWLfNI@2ZCAq?1Irk94yv(GMd*uQ2x_(NV*VXCNNDsuzKHiT`-;e+HgXOR4l&b zPX3J9F+((ba;{Kt1ApP*+E9h>pWT*gHT6z8+Wy>|$~rnl#?&Bcs3|A%D06cmmujJ{ z13>V>ii1i_C5*oc-p6w#vqI+2L4$SsCuyPq)C2FD%5I+d;Eq#=IDrC_Z$aHxt_*+WLEr@Vhw zlC#PUpm&{5#i*s?jcyxI4ESLa>!fH(^#g}WP;G$Ob(LtN5l0Ee^5ZI?rbU*spHg9< zQr+q4%*o`c)9fpFo-^xerS-!Eb*>Fx;I{E_V`%TkYe@je8q{G~%Fk%3l#hG5ThXqt za{!1Wd{=T!-!0vuNrh#r|L@SK0plwrr4jN^cA0)_J1_lpfbK^WE2MP5t~+0sHE+zY}`Jb zOK!$a;Adm`dT~OH%Nk;rU^VLEfc~w3?skXH)x(=g{G4?KaLkpy?~YOFJ4W9Ykh8$8 zAk!Apk?;QZNbteOjzk6DBYPK%y2s@V-TKE=Oq11)wbeOfZq51}reWKFpr|9=0sHF! 
z&vk^ERr7Yj(gL|&b@Me+J+}H;4 zmbK`*w3;8{4!8%w%zzApNEBm~<)L40RciyQk$}MztgeQt#ByP{SCK(HF=D2^OMGSK znKq(;cn^F2E79AW-EoHmrBu8?u=f18qrJh1h}?z7mylx?TT z$*#-~VbQ&3TNp(5P1Cyw>k8x6ZLM!rbn*BdG%?7Ee;m0ArqWElMhJU2b+9+vHhga& zf2hc~=~zUj{MLs3;~k}`Zos65AT;2Tp?&jx`w@iUkLjbTNkON-ih}(z#n7J5kMM3E z^&n`@7X3V6w^2eqJW^hm;rLUAJ=Vjv78t0MSu2lolnL>hT0MtKbqxgmkP3)N8tT*z zT|4mf5gof=G`V()3wLDL(YGOw{q-B?sO!S1pVOy^e{J|9pL|r(d#lsxtv@GzB~xvO zWCx}4k-xXe<l5Dr6-pMC6r~}+OoLSY@T&KB|x=5om~q+XzzGo;&mdzH z+jBo5eML=Jyx?nvh#%f|*4pvpp;EhjG=1s7$ld95Qd?3}bU=0(!IuT@-m@2|0op1s z7spQf-ZIY`YAu38s zqtOm$gMq&9<}O z?lygSy9JnbUeuM9migBAAi1e0LmECUT!#V{z@Q#3IIu=^!VozQpMTE;{dTY@=)bVj zI7B<1u1uFWRcN_!5Vmf68_TjDL_3`&xf;h#5&q0u3uJjVg2q~l8D7I-9$HX8@B>DS z+Fbu~uz#`-NUpL7()2f~(v(8xv?rWv2AN?hPdj)GnPC}0dv0p^GbDBy4g#3YOv3#u z3y&1-hAAfTvq!eA#iF*Ta-K-)Neg(#k#L!d_sYJ70%4LV4|jLU)Di)hPUPFV`*_aX z?Tub8jEXeebWlXUSrA63vb@#ZjK9^z!eC+zMfbuZ&_`b3r((46|oUsU3N=ixU$r2Ob)EZ;?cViAXs8d(5fm1_qeQqx|61)G)7xSsNVUBy{ zoQM)7W-knc%)t6?cAJ~kMX7WS{1I*1H}BzZ+saC&l)tr<;f7v1f*UCJ&ZHearqd4N zE3~J=;cEesP39IgO_ELV6RIFsyP;%+=lPcu7-2WdO||s976bY3o=VnzX>cm0x&vX5 zoAjFGI*Pze-_eKEjfQ;-V9p{2gUYo=NG8tvE=+f2chMA|OHMb`n};#c&4|W70FOo=wVEF)@8u@6 z_b&#C&CYB1x9K(Slk$1p>)=mqFLF_GTTf1XI}^}wby2@Nq^fb>|9ZbJ!=@t_vxQe3 zuZ8ajyu3;kV14T$ixRa9x#

F?kuR&jh2$0JHX#t zc++Gj5-&%(_s70+rVv^y6Pel0{?0mR2Nx2qzUQp7%XHt$pXyFD^*rZE|C?Su=&uH< z*A}d2ug%!_tWm$uF%+F-{cm$Bke%Ny5J#*2T7p?BNrJV5kR_)pJf;tQK>;fF!j ziRh)h#^2}?gx+3yvhNzr{T001>o*#tiw^79&D|bO{iH$9?oH=vBuwF_a%_>1$e*B1 z;j^k*S8}nHGM|v3gsygZrK!Cg%}i8-`xnGyTjXji{1PQ=@Bs-_F*Bk83OWdAc8*ZuK(#UOtsP3%(`qu=cA5`D08+!0f*CHmuBkx#` zOg$`|vMOB2$F$FBv=k)pCzaCDuVp6~X#jVUm1bDlzNB>sHCT9k;RxDvEDX6>*YWgy zH2HFPoA{f>vZvb42ZJN7Mo^HOmbUvWmYqfvkL0IX0KQdBd;6IW=!__D&W%ZYPvEtB zA{?H`#h_#-NT+(Ac2)-~5(S%5*vDt%>-@{p`{EojBosk*O%f2qL%UI*4nlB0k@w{G zhYO41vTWAaqHOKdK9EPX#u_r>YZ}XRNTe>xF6G|#*FK~=6%=6SY4(1vP~IScP;6JI zRI)WAKhB!ET`=t^ZM@6GW>`b~XTs#TBXqMb34(}{51K(cnpVUSQOt^!`)%cM zxPk&7inESZcIZSh+3?%A4Qw1KvY~9ySm+RVu(c)^XTg6RQWGbTZ3-mF35|eeX%u}G z=lA@^SM|pJPm8=kOb%ishZ2@W^qIiZwnXrD{^-=b8-!XAAa?>9(0$$-j%TFlks(G;jHr766 z*flPvX$|9IU+WWo$A&ykINl;M_^J7d+*1HkLl#b9FxigTV-)o_AaG)+4UsX%DR?=% zngZ%~?eL>BHpbOzVhlj=V^-DWc{+L}+vEu@#>+zsaPQY+6d^xVFsE=h7DXWTbsK}{ ze)+ps#*r0}`4uk4XESF-rYE$By)D{1@T%k<_VX1Vfjs9na$T@vJ}(f;13{yTk(Qj$ zNt2dou9wP%G96;eC57l3(e9N6dn3TjrCuYfsn8gjwA%O>pc0{!3s_*gb0M&5Exv_Z zXJkK81D_(zuJl*cb4Lh_FGPn$*cx8l@_rzPc(H94Ioo%R6bj@L(wzXhL z*Dud6rWhTu!fol=A#mKsJHk}^{G3iyxk!#F1>S6bp|M4AKl6h&y& zd$z4dq;_9wYeDR5fhqYTF(0QNRxW6vw@&D{=n{<-{b+nHIm+|7d>$1;NJInr13dl= z3rzZ9MzQQ87FbzA7P0WrqAZjGBl2xj;#&qadIHncH<9ldXbYi=59;yj_B})|fIUbn zRbZUkvg@QslVNUrsw>8Ae@b`v2g*F|Gwi&5&9m_qx>F1O)D?!BskwE}>xEXwwmS$;My%=pLkLO71kJ?MK}cB;Edi#N$>ZgZ`Ixs&yjx_n&ROm<5z-TkkZUPs zz%v}knv=Xm{d6U|QtS9vH*zDV(wWe>IMaa0GUFXYYBnT&CA| zi}nV0`Bm%_<q3PFj&Ty$exGQMR%EfS24YT6#N>nEZ-xFTEf^bnQa3z1Ih8 zqa#sfO&*|CC-ZCTAs(pFLGx4~6F#t3xE>M+6ulAHmmij1c0%oxe=w&^PzXTC8Xc>` zjbgWH#om-3c4`v;2A4E1beDOp8njrw>x#P$*x#n% za>rquF@Z{HULZ0^8`XJ5sTFKoBQiYWOV4z)6Q{axEA&LiZvzI&87dZM16~Fg9^&0l z9{MxVTFHP58s(s8#M@+KF$j5E&laVBEr&yc6(y(P=nWj>|FRhtQDy}Rrm=iLaAiSD?VRQHC`WQBEl$GN5(#kq8!exXLX%G_BiGO_H) z+Sea{6-;ha`#g!tbgB6jO5^hRsp(D4qn}ANX5`phqe3Fq%~xwwKFi+BqyL;t3k6zJ zHhjD9q~2j!S>nEshyqRj9GmsYgHr%)?J$@VBM}072n|=*3-$a5*5|wEpvcZUcwl1) zwMWm1ire@QlNt$3j%hy_seTo*$eNv^sK4mnATn&BjIJmK%KYNOLJCI6wr?{UpD>77 zyfidMx0IH5h6%*7m!Rsjw|7l(8N0@?)~qs0G?#YP0VX#d`I<80C|7OsSW$6pKj!E% z5#KofO{Hr@jxcc02Zm8qH3-t@$4-f%#?Jf}W_n8p`YAt;TQgN;91eQKX z6hR-JDsos;tkSK1^DoTyU=-RgGagR#iF8c4+UrP6Wv_Wpg`--99ej*}JI8{u6Q1fwzXc$*B(~#KRfg?#@V!$!b4$QH#oS-lfjOgzr`t zChr!j8>uEfNZd7!8-e*LPg@pvu-%-?y(x3n{qT~ie6(pTl# z_c8($R#%X>!eJNMbnFAwsi-fjObAxh$r7lmf!@g%*A_#yVhvQE5FfC_* zUw2T}zD&S2V2He+l~d3k-HB!P!-hbqB>}~0thQrqR}awDUc@j}yxLj}fg+IL8KZmP zI}drn8&!%_XZdJ=Rt_hT1VNu|ZS?~xRJIY#fW8cq6UIlpa&eW+?0?>>^^Hj8Ju(@7 zogA;%z*S`npRD@&z4Wrcnf$Enw~?vseNI^H?hv}?YExySIPTYxlN_RMgzq6M)HS1B z`-U?>2al?vM=Rlf;=j!;ipUcACxj&aq%Sy_3Q6g}$JGm+xm;r+NDLQJnNbq>*J`Yv z6$1!pb2vaVDSy+Iu_tb|s>yNdsWG2qHskAB1s(Dw$r$G`O_#WJQ}?K`UUa(e{2e=K z^)$?wZ~cl?y`7_}?=+$Qw}D{;L*_X&7wRdYq(0AXL*ydzV8w7qsJ_Xt!?aJ(i4O+D z>)a-_{u>emP8ThX8;~*_%}FZok%iLK@TaBZm4PuXQ=Qqy+X{?t2WZmei7qXYIDV)e zq#xn?pfTj8=eHSMdXk{M$QSI14r~6EQpnX@xD4*Pta{rWrzU4?iVbRvg-(=zy|3P@ z+=$z|mLy3R2T!!da7}>!lZ7#2`33-<>@+91dt^3L_S;SXWMhVong=$ErDI-uz0^KA zr|45#?0;WY>0Nbupa&kAl6KpCoE>Dh_4APvkLc*cJ2W8y z3w94r+TszNQ6PelN8Wa2j`+1cw+{&(eJiM{{cV!YHeB)&B=m+KCG|KESW3RV5@ z?qLRUOMj=M{7=#hdzL^n*meJSWLQS4H5%sw>e2M$d$01hG=vXFX7LR57(1Z`Yigis&I3jcYk9B}A)RF6!i00ws>lw`rX# zG8NXJ(L@=|OJ>?S%NwFUsLJXG`y<9SC#P|bbI+?=Y*V+r6dIAjXP}IVCMe!^l%wG&|l7oW>UJj)&JIsvj)F@FM zeVASYPjPeZ@0i3DJ7c(OT>~95)86$ym>P{;03>PhUMJSd$$n|cw z2NyjvZgqWz*@J9 zwFA{E-in=%s5e_Z>YIR_z=oUq6Mi~;ZN90HZz(ZA`)y%n&PQT<0<~i1C=0^LYjQMT- zVGaw$XND;1e{FuxB)*NdvVbDsFqQRg@7mheI&z~hvaWRn*z=8ZKCLwOIOVQOYL|fjYS=Y@1~xKSRe`kaH6{Ae2iqs*G0jSp&Pbh;Fo?|Me#Gv1np$-ewq|i 
z`lK0m!yd>{cP6rEZu;Uhkt0glTgv2;+ui2zp>YRDWIV#;xy86(K{|lNG-)|3_t#2@aXD;&{V_=UgPMSVTWR}u&H*^Ng#JH0NYn*Y6jxQc+CW@E zfb4gzBL}4_q!9-~Uo?bgW#%Qpt%ia+DN4Gv;8oCFQD6h95D3hwy8I30TmK)p;14az z!=RcJA}(HD?vfx1DG|<43w}zyRM#@E@`>!Ctv(ot9@7!`X&-Rsr`S!fEr=}rs04bWqYyN3a7_-lu>%#uP^F{oFL~P4A_CfrM-<;F^~^2 zPD`+Ev6ZwEUZtNTaf(WY<}K#(g;ORXB8zegNK?c zX&ZCNKP*@EM9j*Ld9!e>G<2A?I6E)RF7_gGkkd9+F8^GrhOuFE2sNt&ehHHf^u(C! zK8c38o8x9O(q{X%Kjs)Co8R82&Bdhd!vkl4G;Wc52w_4HRNpncn+67s6_$EJ-wAspP5im!* zwi5N^96qvOT<$sk_LhE;ONneVc!-p|J=1(8G@mMWbeXR54-2*c9nS;wFWDB#i?`$N zWg$^=1B!cAW7!TYj%r=A)_fu4%gm?Vt@XZB(3VAL(NLM*N?6^HwV*5 z4dXDkQ)yp|rJ#Fs%*xz^Frwo|_}pl_y$lDs8;RQYXez5>PO5c6C}uT<6#+?$ zepPhXn zy1m0y&;^dLYKYMe{RGLUHMFqV0@LE|ph6_DHk8~2gOQsj9_It7`}TAxVp7)YCgzXZ z@F~%m$_ZU-ZV+R={_Fr$WS&x^^n4#Hq)HOoIc>5Y(nV4W66ZHXN_%;2;zmhU@$`=XOrl>gq@6S>p9o z-iPRE4KtUo|I%a(St=pBIk!`2mp?;Cj*13!qYNW-FvcUV8K_~I*TZ*!kuE+0ufh$V6 zu~u-_>R6aO7&Aml6l|IIw=Ct;YJk*Z-$vFyN2WZ>3~mD5G7Wq8hk--O1lfiEqO! zTTCK}A(vj@6gP>19Q>~E4h`r2f;!GK*&sXW9^v+jMyO|F5EA(~Q6?*m?@x`2*9bXa{X!(WI$ZV`jL)9No7br@G& zef$n($iFcI-$U2CxtiXF2a!f@A<-;Zb71USuha{;PTk7P(I2MeUQVI#VC zS8?R;M>sS$oaf{t{5|qO-?;r{5%v+z3JV}95U&Tm{-75=qqi12#Q#^4$O?sI3H1}p zkPCoQIoV!h5~J`XrGYkTO9-B*W$n^~HSoyK;mwYhZstqOivb)B&8|rncHEIZ&6DkS zaKZdC=V|g5-SGAMCpGF)yHN7=d4fWoY-8dUN)DgIC`V6^QxeHQE3 zErRULpes4Prk8rcX|BH91XFiwj1Rn)vOfgq8B#xN_6) zME6?!!WM7g7aftUcXzF5d>$NdjVJ(*nc3%$)p};k zUoqNWF9FQFQS}6rGb~bXPbdqv-_bwaaq~Q;>>;c)WZ1m3m|LTND$f=sSf&c1t+({u z=5-r;TV_J=fD?dfZMKxz_Y<2Om?|QjO`Lbye7r!<+JtTKEhng8DCo74^+qUIeEzj- zv|8EhS1ne5=4TCA7Y<}jT#JU!G(ECp%*pz!qY39k+wwZS6cN#})Apm%!l3=8zQXe{ zTbujw{k~9 zVsEC>4Uegw=Q1b?Hy}@CJv_YgIZ?<5?0pBVF7{*P3WWYTm%$gl&j*L zTx>IKzLT<*kekt$ry0V>VeTY251@<;jU8ffWDdp>n!v}wVd&-`}*>TkS>ttoF*3f&T z`I}AfX3QV0QR}r+5<{%qwJv!?scCWEZWbx=%UXbsf)@HNH^v{j? z`Tkn-a~0xWJaqW#f_9s{0tZN&mLAgiexVPzLH~?0HM*~ywx6sl)QYSeHQTcyzs;Cy ztO`%DB0JI@@oC#9jy!5hZ%krqS`a#F4u>i&;|T?`HC|LNErzyWuM=n-O@zGpFdIP5 zV9QeE=D{&)A^n;-$A{7+ydHj>GR=lNBsOxLMsVIr*WA6>x#m}*uGjymwibb{05YSL z6yIp4W$3`nuz_iks(8UaAF%fsvhan3)}RfwBKq2c=Qv`5tC{O+`ExY)SK4}Ci-FVb zavN!2EA_oF5NiCvwop8>i>x`m>Ja{ zzT#-=_4!{d5qUF1Y9DBMjD~jNV1sB)OD2S;F9d2%f7gO7vzSt9(rDxS&cG8`<`3hj zC>WM0N)v>T?b_QuxmnH;(LyE3$YUH3j8Zn|yUT2Z>$TW*Pau1hUI(~eVjcleMIERc zDSa^%;J@@}H`kfhVYd(flh--NFlwQCcGNHU(~zy{W%u?b#{IZR_a}mKY|)M(?4-qv zp)c7scj%E7UKiSyGuk#)($+CoL;xwJ8tvor2A#^pg2sRFLysY~M^Ii$5xA zdhZVCrdO2`MUJ+1(RMkUWyNOT{WF(DHru3e@H<5|DbcPU_p~2r?RDbhP3aJcsdmYu zrmz~=&J%LW^@Pd)$dGf34D`6Q+o53NG~rUS$JF@|Rq%Rtn=iQ^{i0D@&cyiU72a*@ z818-NTuQ~6k_1b|%(ZDd)YTenEKWRzdy!*0=kF+1?nUB-8Rk(u zi=6Eu;gMDWpBOg-ko6O}SS7{d7abRBEV3dM*}*EE6ugh+k+jP{P*2`qjhkgH*csZL z7?&Fk2LsDqe!s_JX;jOeqEx^*;DqNQ8^c7Xc8)CG9jmKpVS(DjHA@(F%tpq3Sy_!Fn)B?MDelb+2S{@#{#-C2nP6r&M z6z=b5uaV;9-OVL;CR+qjPIs)Z0trYNugS9u%jtC)ePr}KFUH%J@%zGj2!iCa?iAW{ zu;($l)T+wbB?1X1pxnz9fW7yB-md1y#l`_v&;v)#DqHEK6s)Q#KW=B{tzCRDLA6|K zv(#Eh7P-|ic|@B3GM0~K(q(#*Hap~zpoAiVpg|Gn@uB1wFRkM6q2Yxlds8Jw#Y!lL zP^%C;b*kw%D&B&9Q}7e&l&$2+tB3h?!v3EQ7z?YX8*4~)({9yEQhwSK9uWe%mToDW z$Oc%m1$3D0Mljt*tV8e-#gl9E)LrVih!esqI$N?G_(a-2HH;uir~0!{tvl~`>E!r- zbpwC+Q07WLA!cd_B<}u5LtE3K&23=TGS82pzwM#jRFnwlN(6~@gVaxWqLU7ZNF;fS zybF?ObI|C?)0oD=e@eGC!A1+p7$>|B*PBpU|{?D9kkgsh^~4iT^1z^2fLI0pQ3DJ3js_*yv2q4>%klX=3G^rr| zoNL+t>c-T)QO>F(9Ez_~IU0{)2@1Dt&MqG&XrQSfNDo1u_QrfYZr^`Tb;d>&!l+`eHThbUa^O9{}$!Et$r_GOx6U& zb5YYt`?lWt#yvv%z1P)~7o)de!_lr@gki)iguMhUf=xgsh&h10l-N-@L2Lcu_qG0H zIn|XrK5D9%)JyAI7h{vDaPTKZRwJ1lo4<}KYTqr@0WR^v$+6RNsuFzncGxYUo<>T# zd#$oqjP;mfDZdfUKZNZn@bcUN$$OLR0C|VW^NTJ>)|(yyF$w#sQuaAKd-!r6riZzF z6sJNfC)*x+Id@_#_YmaX3uOxdH182&9z#F4wd;5b?IU)SF5A796@nKdh+UC^e6Z>~ 
z42v`Qb^S|yEw0{)D8@*|bSN)=6$Z<20rbx_6PYA>%QxL;4;u}&FK_x!c7sDb*l!6n zOd@hEq<^}Q@C!1Qnr7L2bY_P*E$jitQZ#7AexYHPW0O*Cz4#q|{#-~u9a4g!Q;YBS zNK+XU=O5d)NF|$>l@+fq{fsTEdrvUCSI#^TUl+SLSHk|)LI8YDQ3YAA0^ZwiK6yf2 z)e9U$bb?de$-_Y>n}1|TmZ?SQ@w&czx&@R}HTMH#_!m?`LKf@Tl#t+Vb7VvWz($n_^efvuvwc6Zp zbL|iVnKD9bw(%!Otg4jdLk%t$6I>4Y-g zgMal+xU=DV`IV8`3UfI<9Z`tgSQ?OA6$a6M)0%GHwQKfc;JWN$uIzw^oKw2I`;=Ol zI1CuY=bLkd%iqYW)?F~G=QEmU`gi?pOb~)#4Aw}wJCOGJ3iQ1&wyq+8G|ZC-k~4Wa zW@BVUBz}^HzT65{r9o4TWDKX~5g_B!AnHu`i$1MQr`F~XUcoiq1O|Zo~r(;;T)JIG5vinYfYfj~~Z2>)XrYY@mp~PulEtwVtwVmyN@?ET& z<*_benx^c%qGY)aTF7L0N>DO}aYtUJYEo3ImSy(juEO5vhb3jxwf-dePB&&~CrXAJ zKV5E=zZ=cz`;(6~NC<9?y3}}8=A#$Ab?e<@lngUk zu%coYKBZbJTM1viB#R0OOo$XX0lmJ#L;?u!(-x=4#9@GiF1WJtrbVGb1KuPrJ0WlOf2ppWlz??2F^&S+e#IH*f@ z7Ru7T%xD8qFmoPi7M3MdC#mMd9pkW#qcL8BilU+8MT(F28Ok3?6kzA$%UZB%kl!2p z#S*-PB2>YRzCB>b;796X!^kOEIs0DyozdcB4k)kQucR}&Uix7Suc#2!LlfO`8kt&jbWsJ9G?qwCtX0|W>j+}#}poiMn&1qdG83GVLh65KsN zkinf`LvSD5-Cgr>U(Z|h{qOGDRo%V!l4G5xU^T*b4pK6~=OK}@^x@X zgGyls59F@>X{>`BSr$(`3Gv^zIEO=xsWO7!;$^xe3_%~{fWjNN{*=D>n4Rsk&YK}h zzhle=e|;k9qrUK`10PUc-Dc~v+fLAv|JTF2s1Y%e<4V}H1dxde%XwoUY;r(yhuOc$ zsx`_0Srx+xRxZ*wbW7i5!GZ|fEIuLYMIqxZCF0OREZNFmI737+n#B(e@kLj;xh~p@d%7V51$2M8w?`?JN8yS&_a$Jq*AqES* zQ^q(+HBTTTFwcI;;N?C@yn1@VkKnc%YY6fyd}rmo5Q{(y3ct#Rk4m!nyK{4Q_XI%WkU?X zn`vB_)`w(rfFR6{wpfQo-K&|T7PreGdj9cA^HZ-OkuTvQE(x)2{p}0WE@+nPK}F0$ ziH;2KQBYRNPBznsrt8k+uqfsaJ!kr5Vve3jG7HpzbI`-CfP*@^!ev?@c0zW*Xo_c0 z-GxQP*qfX2r6V%mIsX4^_A<#RS_CMib>*r<+7i^H1G5ZF#43)#W#1#Uw)@qbPB)maN}DJU zcf+0i*k)OgT&kYcieUEpgFixOc0@Dr5$_-TZg*&M52s&}H^e{Nt9|#Qhy~(9dg3fp zyKu^JI^n(AswL2RyQ-+YQIdh_`r;mvfT^gaG(#Ks5TJa|qHU(tPd?SeTWtnXyH-o; zyrb)H#q#qD_}c{&zn-Wb0wW}36ljQUY@xX1RG|pxKVtNoZPDWz?M@~GUkAfEzfa>>7!v! zOYQ{?ARsT6wHQvuUQnK{@fi7)@PEYlLxspqbK5i7)Q>=6FS9{|j&JT#+ESXrgGTAV z+xMv*V0 zE2Z!VeUR#unD7HP`s8*v9$8iENi9eg(!|vC=LwhLz3Y_j??suWYLS=`fUynsy}suA z93VhJ?(jtEy|qt$@_&xj?uC$mofhmF`a4|@s!Lpr;$jsE_uOVT<83>fN?pIIE|?Qj zsgxxr&DxaAQ(TGfL)vAM;0xZNmq-i#g~~`xl7z@*PyfiG+-~*xPO8DYyUz!X!3VSx ziRkM1ermVPnyihYZ0E}ozD4)r1K-f=vhe})(zwqu)nl*UAgMG>!ap-ywp>mx|0@ap zAa058jKNlWU5!+Ic5g_2f~ojh{-NCTHJ$O{@`DH(X($#|>Lq`k+L!~BO-Z!2%-avyDNDDO`Mp6kAK^RXRv23|7 zo#Wkf?Wj=1anpWM)_zn4V3Spw?!*8eTX137;F;)$jyoP*p`??<+YD~>7NOJ=@QM9@hFg+?c?GIRy1K_c(uMR`-{o(B`c3Sg@ zdsh4$)tIV;3*R2#qXgx>awSq0H_F+&OT5skzjpO(3MmyjZ{m@fA*bY}yZW?%Xy1}C zOMm5Z9Wz^AQnKZ-Owsaeh%=)zafL5$B08B=`#aYsZn9k=XR>uO$TMBY)IcZ8yhHVn zU*yK@)L`|V_vOvJm~?L3a`;aeh06&+r?8>SG^W9N@!h>%#|93a(-KJx6fSvz>9*^YK3=8c@iQDHs{{^*@x7O*4cN88sK=c$*8MLj*;RZ z)}{~osN`D>s*3r~b%tKmBBD(F#Q5q~;Izk7(|&8*i1BR5dHo-_ zf*KS!mlXko%aX&&8g~7q1twcV#nFYWBsi9&IW3w|PJHY5e>p&$2!{eq&2YOorbl_h zacJHP7v3a1YnVbWJ-qP2C?Mo);Hx0SGBcws(mw zM%@6frJJ;@|M$o{u?vhemivO{@Mcoenq2||I;wxGhe*Vf+pP8(|HuC*er7p8xR6@u z7k~vUc_|4EMEaThzqq>dvx^7y*=Ej1^1%PWJpBR)q84R-__O~VjPFT{U-117^X7lV z&wJ9kfbjpLK#cGA{TF9}I+S~f9she>|3?LNYfskL+*30=-nJvfx%uS=*X*n1-fJ2a zG_j`qv+wf`4Cg~Rmt3nINA7tz9mDxE_ zby-+_bhf~S9}p5WAmOy|xOR6u_o)4Gq~OCn80wCR40U#y^H)44=+MZ$Sw4fk$1Ya9 zm^r()f&OC)^ee9B@6v`YBp!ha0tf_uqK8$^~h?a2ax8c>`@JEWuIqP`Kgf?LEM3}KLZ#=PL z4;5~*^get?0n)2bcv{n+r!RlMRGBw8I>ci7mWd>{l?Z}qbIflMTn<|po0CqdN)HFaYh$312q zsv46P6v;V+%%ke=h$y~by0#0b@!8{7QoS+ZLPQ&auUu>NwJ%2ss-hj2>@~-B?==S> zk6|uzZars|F}i~(ypXnw;eq$Al=n-&s?V`JG0wRn3aw8Q=i~mQfgMFv*7_APjFRRu z{%R@S`H0niOZ+Cl<(U+S3#MO69A`s5iN)9JZf<%noK$xsE74;vB6TG_OB=v)$G@m|>=jBj!tC=zdOX zd%apkTW(^iliz3J)#F=s^j>{K-o39?J_W{#LMfrUoB~4v`&-gC&)z1_sTh1Gsp^e6 zj>{zabQjzxZ@80BvqMKsWju}Pg#KK8D7mzw#?O*h=Zl!S?j+FGf*z@|U4k+8s!&s) z;hSD=zzbCl= zZYP$e{n)KBS&(#3#vQ})ykl$_&yp=NKPd`p{jc7pGiYeu4ur^=BXza3zc&*nr5f_^ 
zx~(@{+2l^K$$ue$^1OxR3H-=T%QU%Rbm?psiXREq}TQ9E_1bw2YlX5P@qs_@<$Xa z;Y)%dEmd27S4!r$ueg#H*5dHMLk0%RCN%@h7$`G{GSMxdf!m3)v50^Z+zxY!LXCT@ zv;I5b!16$1AWlC>c{4*9eOC3xps2d~h*kUdGS>gGrEPJG=Y7}AV= z+Nf%9ntaXlQ`wxApZ#_D$Zh6qQK*yl_x7-R6tIvtLW99$C!!KmBG~3?Z9oI4}SX7;=+%Qfpd@Gl?)F{UqZ!l+E&( ztn_Iv7Zjmg2dJB%2$mF2M57{(<>52mFJM$5uSBzFnQeG1@FW$Hr*hQL2m>8ry#s$v z%$7s~FX~xeWLQ5Un85wLZYa<_o%w-TUUQQVP2Jhnm3>*b;w(?)4{YBZL~P*}w!+SC z@m~5z=7&e2(+u?5dCR(N#NTE#Nx|n@G^YdXQuP-H?$aUBgPUgWP9m%QN{z!22rLyK z%4fzR0D|sYvF~6uy2H0c-daga&K|bU6^vd>!OjlX^M9Jy1}>Vh2R1a6(hJ>UdC^+V z-u~FHCqc~$WjyaS4yr#cq>(kKUf8Wg_5WsVsvcF}w=9l1T2`qnnk1wHH$nZ2;p zIU9>sL}NmJ8&g1-PeCEB6!s9s53|sueAu) zxk`{Rdynz{h_K_VC!uQl)>&cXE_NN}TETiw=Ot#YrgPlfLp_zEDIO;mCv-Su)1g$X}^_Ic2(VF4y#3 z+dbutZ3uptNT?-8?WxLDX%4wzjPI24FqEl6igtMD_8#_sGJdC`E?i<23>-7QX|7eW zQ}C6{W6|z=XbI%4j_-}<6!`$huTvc}BRD$NWcy=r+%iN4M^W=!LP z3l06&L}@&5hnC)j-mcqI%a%{%2di&Mp3bSW6Ur}lSz~uG_hq7#a!bfXIY0&(1Pi1T zyl{HfY?#3&piPzL6>oGU`f?gG;v3VO#cN~6@|$@h1Gf+hYosUEEagFQ^2BiPr*y8b z$Ityu+JVSr`F~bjtBtvCobEdzV0N3u>fw7$$Y4y78NmSzszkMLlRz`=e4yThbG2vL zI-R}Incbhw7Mq486LzB>sRRs{1r9b0ir|vJ#3RQ$X4O!mUk)H%4qhJfW)5YkgMLQn zd^dL!#)-L)ZJeFpwY@-ZUvpk>1m74w5KY5nfQ_Wl?&cipKsnw_3SW%KncEjhuMW+-WfY{&vOkT)P zDOitX#_PgXkB4x0J)wE}$Q7tBR`(a8?n0_0D^(q{fsXKaViogXmV%CaSYWqO8GdKd z@j!fDvwU?KCFn*F8f26rwBna*&ALj>?Qi(yh6iBFUb7zLQll& z0jQVv=wenm$a_-hpO#N7ktezDuESv$ySX@(Ta>@lc*kW8zPEnxa5&yWHuyavaed~kP$+-}hdsxc)0?-s)Xax?vV zi5-MWwevEWQ>SrMacUUoD8*?XvWM#t`E0D)1Z%Q}zsT%aJR7R|kK0H1&1ol%mbM!& zWt=TVnqKuUe9VKYc0cYqaeA@iZ%e<|J6OSG1`?Rt0mCwkPp60Sr`P1Qy01P1x%@;+W~hS- zoOe!Z(VT?|CajQ`!0HN8Ac;0^wl`F~JxHwMydZ2$+MYsgM-#qWh55IaN zaSN*U>9C@aMZyAK@;2eRccY}#+IMSeAk^?hYyyK2lAC{VV!9QM$j|PA5)JEtXyIGdi2 z??!YTe=huKDrW}IMFYwlaijzs4E!P;+wD4+;vK$AbHv#+@t#EU5saHyt`a*^{k%>m z$RdCFzN$mWw;w0%7Pax6d2k1jKVVF1WeoQri(u&c)cq)@RP#;P9jsl|){4*DTi9yE z*|+duieRBAy_)X|7N6tNcK1zD}(RV;EmoI^ZEW~jHXgnv31{BKBc7@|a&KM_aH-fe-|20z(RLUpFGbR@o#8HfBR zB8_^QXYPZG$6-{=O_nS@X}E^fBYZm+cO(~KT=Zg;5LRs<-E6i+!ibSL^rQMCD319lM39jyRJ$6O);h4aJ^`;lK z-j`&5vyNQjbeW)$$ZoBhvMI}bhJJWMifX}(S?xFwLzGg@!m@>6RtS!26@_5`5DACs z)q#lq_q?KuQcm!`m^UH)&LmAD@Ay7oFE#g)_dYhv0)PdLcj!UinvHd(9lwOG#EMt6 zigCB6w@yzGtjJkwg?c_|jqbo8JP1r^obW64e6&0Hv$zW8t4q(VfQsVW^K~N#vxi9B zyDsdT#AG2{KXJ!Jr;US&=6}XSd=`%`*wz^*$sGXV#rglssmuXt)%pLkoO?SMS~5v> z%sxip-`J8J9jq+gJtvOPX^GsV2=R><57?FmD^?UDKSTt7G~J0;eSH_xd=Cl-_t?lR)X7yTbI zw3j3NUNOh1-eBTfz6y_(utHn28Y#2<$j5 z)n4uts>2{FUseJ=&@xcQW`~F-InCyEv=~RWM!{GQL?{`5A1cKnrmlQ}YU!N#_cGLU zI?lh%Oz_xB{;js4tz5~3p0t6hCH1BlC^2iD$jWPbr8XJzp zJ@xsOH|`}D6r(ry!BROtq$^nMB}dq*Zv82oL7FO>i(&TPM0d$*>Z0FmG}oOG&KfW3 z1-I{I`~TqHNFU*g$(UCf3`!yJSZO!4vi@KpuWT=<+@kBdH287 z%-3yyt%h#2RdPcy(U~yJl$OF_)xtyzm2&l)?OiV1XG+l9VG+-Bng61-|8w5#$?oPK zMtSKfnU~1)Ao8MaTEp0(uYf|I!fQER-m*>BG>5NTL%sfIIw*B$g^idH%a!5p=&EiZ zK6d=iPum*L`hy$m7P&9zR|M*U&=ZnDtOPR{k48X|MO7T@LwRpJ2@g;@6W^)vc+$Z{ z)RUhFx?0;w9Y=n|FD;qx44^{YXBjjO@SFNMKYfDxl&&tN*!1W4z~)S}F-SU*A^@NB z#qiNQ`gO*C)%stzaMj|6)9JU5kz$SB1bnMGF~|q4~&?gR+IL^!E|xmX0SbdfBZEo0aXUL$ilE#_m)%#9XljOk;lCE&LCv1$InyVazFCM zJTd@X2pd}3IY|p`v-)Ap4l)E=QAL?^KD3WjWUADb8I_3fC&yu4+<%OlmWGBD^Z zd)KA5MnaMf!=equzRC)T&Y%~V_T z@UHL!#tEg<>X;z+G}cbNqlT?+1Woa;1vaS~h7do~Ha{!dXiQrSt6O9$)WUneofVdC zGDb@r=~S;AiK1wHKHFz3Ds@wUV}~*Bm0NA>sYx^E=H;3=%Z74PEjh#+{`4l{(7qvmc)u0u%W%=2blnCghY~D?(X=yQM{YN;g+}K}%^|l3yS?~1` zXKiDBiBk69e{`nK)SkLiE#c!f8l$%GzBFEa1x*q6Y=l%#Zrk;Xcb{K}hC5<8^@ajS zgb1Ohy0;+VEPAL+h2yj|3J=+L8+6k~uwy*c63;7#0MTN69@~rd zgD#In*YqwfW#SU|S(|NHubF3*FkIm*K`5IhN-&P>8*=P&Gw+E~;Q;ddFyUuc9i$x! 
z35oiu^6pU1t~8|s)0$r!Iupmc&cg?-Ud5$H&J_HA!=_`Ausw$(<#re$})IR0j1 zfpqTslABf=?BnH>%k+Nel<+3)aHoRvkXPz%DkDWbUH$x zNXCzH=jvD`<6Sroi1>Ss-=}ArYI@!_CUj1_9~&}r%*j?7Lhn?i0Vn4ZcV&yMlS-Sv z+*yo-sDp81;OxFGHqjkT89t2HaU-*S=`0IiJt9`|@_QyQ(*xtL}nw&MLOIoXV?$toUIReM? z;g~Wi^U9=Xb{}@9#(HMWE670OPcD`gk!|C#D!HN^;gO9IFt|n`<{VM`Dw-A<1h8mt zfMJt!am;fXF`Er;kYvh%!D+ymCY`wOAb>FbBvL$4_e{U<`+8$iubd#^ts~A{qB>>r zx%C@(#n4XKZYb>!lVusTSMD#BTrq2{Kb@}%)AO1jf4Z=WJpF$GJiXgp{!8k1L?utEq2#ZsViToWBxay2&DeNA*EvdSPq+o;EhG#h*qiu4A_ z5SXBYi=QRhYJF7ge}YDQ6vvK9wlIvcoth4_(~bs+ zy+x7amgR*-Dm+p976aC8txQVt={6ZUM zqWUJeSG-}>E4n0a*s0*#Y2Iw&Y)qf9 zk8|&Cm*qP8$cnZj?|Toz8B%ff!ZmR~RLV_5&-IAAj44=*CgF|)BV%uS7`8@`lo#Fi z-q$FfXzx9w7%-Nw^HHpi4eNQsneHEPxJSMF#}Mdr7Z?$H)2#H2Z{fTbPEMt}1PHdv zVTivgC^?>+?R6W_GLL?{ZjdBqa||qNN3_5<+{cZWtyTwZ|8qb9YP8@UKh~HRIoI+$SZfJB&l=FXWaDBc$ z*H{68qW+E-a;0y6v>j^I3sLX49MtA&525zO$tl>_`)BDbrI`x=gi!MG>O4YLddLzshgcAC5Cxcvl$7QzvjcgfjfAjFLABGT6U}#miJCkeY4Y9Y2;bUv@4nXUZHZM z#@0n7pFJO=nq1a9iKKn!=1OaVN*@xg7Hq_is5>FWBUR0aE6i3a3xMV|m^8Jd$wm_T zL0nI(iq5}rI}F;5`4kfBQd&MTeqQ?WzEx&72vH*3wEmYR5!&)ajP-erzUiCuZy)X} z71-p#idQKyH*z^gk>`Xl!4Gpb)85v%-#KADy+<#3aicn$XY0P&QBWrF!A6ENThB+&0IQ zi)n!6?d)tu(>BWzvj*pUW-~7uZ zsT0o40kq-hX5SV(t#oz`d4}?qD}IUZDG+2F{FwqX8HM;S+lRaP=I^&{2ON57d2mjn zWUY9G{+71Vuf24(%8NaN%^5U(E+J@n5k~hxn-u97%0ha|sX3e6{LIcstHTY)Po2^Q zV!W%Y0fmO-V$iKR@hfGx_O{x#ypPhx$w4X)U(K8wF0Swb3>ZB4Z8A5g&BG{0K5Yvq zW)DaZ<78a9s4?>yk&CG9tNtQ__*(VIA8IgT-sX8&#Ug7`Kqn|heDnJ0HdbgcY2EGL z@nv4uz=eAk!Lt8-JFDACOly4*(`$jyRO(v2s;?kIf>09b(Xa@gzSU>CEpi}u+Tc(- zR^HBI4$-8I^Ma}9kv9_cD^JzIrUiUZ#+Gq7rAcCL()J4JpAPPh6Ej-Rfa z*9mJkqfX6{;^76=K}A;yagSL##6Ib) zuLj7rWYaZ@Q006#ORAAc%Rbq?FAMFX;}lMYaUoEqGLii_<)a9DQj#;z{D2O;1GUxa z+)wT>n72Bue1?Wm=~j^Fv?sZ9C3&|(^>??%h_l zXJwIN;irX5SxoTgrIKK3PU*V?NWU~^>0W3y{u|KqSOSG-tD&d1j^eX^EVi}{VZ1R5 za05CL$QxNxzdtx9)(8^|Cz;nUs{yn_#l`U2RWa4&w$E3`lP2Uwu-H&FDNB%_(sWlg zQHg~M+`*<;mj85_7-{tUx`af+8^g*+N`8^=_n9>(x~6b}5LJp2E|ztSFRU;Q+VzTkOD4upU73T`LsryomH+v??_Y(eCbRK!fOV9_3ZYx@9nQ7bvQUXY}5XD zH&8;KjRW|NNuG{O@?)~M(XQ=ft($$WO)}$r(_2r6<`-?}`1rbu0n)LH#?r=AZZaxc zt{UwqpNN7p>EqbUiY;d8Vp-uSt|-Cug)mL75+~i4y?@?D$Y)Rf(n9vGHSTGp-_Y{z_*;VM6C_ ziDDY~i1$gE^E+f09@0%liq-~vDRD9bIY%j*MPC(;7>TY(YNw8lrWpyng*K}&NqXzO zf$XMF3;*(n7~vzCmepiuQuyf&Y?Z?enPw&jMoIx2h9hB3anNBsF`>6Rn|;@YY-Jfq*7ZDk8J zK`*=7W2^sCi$$MW>pWR<%%0<{TH}vq>m54)M2#pfwFrIx#PYdD__eeAWx zt<5*AF%;=7BifF;?$L0$?sC1F=-u^v$O!&UKz zFwJ{69KItNfO)nj^vHM<7zTpL^R1gVZ`ROVk#>C>jl_I*O2)juctar62l_jG#0UI+ zN4D@jnh;#>A2r-%nZUUb&E;FJFh=NQ;I3s|U%Axu4z?B&Yt;-hL(&n7+G|0iqmOO9 z;8CVc*T=_Bpo;J6j3=-=pfC<{sIn*RgZCovQEy*bONrtMXy86Pp+D;&`1wpN0~p`T zkK7G$V3^5lccWjL8UgIjP2noT^f+txI{;606;7_WQ4`+2z%`F7lte;A8i%Mf^hzx(VakAX`KJ zxXsL0Hlb;Ly6}W>`U@c?_f;=t%h;54p!hh0D#3sbZGXOmr_lj^K}*IYB~_rpf@8E_ z;ZW}RQ2Q_AxBx6lx)U3i+KllfT?27tv?Z+%2YNZ1)C)SVcM}aR%{fk8S?FZW2rlq< z{y@|WwBvLUXhBeRlKNLc@+d!YPGvZ;a}%c}KaMg;0fd*nAgReefj{fD$afFS-qZ9< z>Kg13KGn=As`|w-#F{CoJE10W`oTYdQzndY24!6HEN20a^h!%rdNA^Cl%Ul;=Dj|0 zo;mz&A_XX1rK00** z$Woaf*pIk(OJWKVF7aN)zK`S;?#)d=gP+Zcw zR3VYn@?AlLDg)?l;#Qm^=ekD~pXl2HFMYW*bm6N0vE796YgzgTGC*}V*z;KsNnKeP zu0HNMi=3w(r#5yc6JK(kdH)ZhduPj@s>i#e*<9)GhVrtSkZ@vz-ty4%wK%3z+4Ci+ zWBxMX3>k1o;xJ3`uVCx!nlK1UauN)=gj_|#RlUFtmMMAK<%__S7R@dzzrjF^_{9lb zd@*g_TMf;z-sDuuaY|ggEQOOKQJ!p5ueVUn%0BBpwGd&|w+hl!`c}xZ65hn%0 zrK$?;-dxqXtyqi=AAhhTIj?Gho|Ri)ZFgzpHHkasFlEgCZ!A)D!@y$}U>pS_o2yuT zX=^<4DI+R#3OFKKSeULVBA^P;Iij&fz3sda->EJ_ohn7a)*vg%Br5<#vHOvca7`<0 z^jHpTSb-$Betz$1RLCIF3>kYwVE~NpQphYZA*Uvy{y_hA$@;LFi?gE97QKb;HJlcN zVBv+eHlL61EbCtQVkN&FKIxY&5klN3zR$lKH^m+LWfTB^7C=`l&3Laj=neRFKp3>k zphwX*Jun%Q7-+-%LXW*ukzMG-XhB#)=`g(LkSjUg=t{DdPrOBdwAG&yXz(RvgkoN; 
z-ts+|#7iX3CymhY{P|o&k;3}t@soRk13{sHOtWaP-?aYJv}*`U@GcXYpmq$dcFuv@ zbnL~noYhr78b1+4AfV!)Lg6_6BnlQB`zWC10#90w~kYwZ&`)K9Q4BheviSYIsCq3yY z;d>N%f5nGxNpf>`P@?O#godz$2n>7+msn8ZEz}KWiqmO9BkGf0&JphstyeCSd5VLH z+dqZ@Yn?hi&=Lc=lQ^$X9F0lyk~mLO#ZMkOQnxjeb}E4(!L&=B|D$G!16jC5+$}Ts z5=Lte)WgJO9|>S2cSmMuSecs&T+)nvlpF++U~#Nz%-9te1O-i8>WeZYnjBOX`7Z1}pRz|8gEuREu-fyg=NbgfQsW$yCQ-)) z3w5xCQ+|`w9ubK{dfk6Wp-gjbCDO~=@Zs|D3whR|+c#)L-#o-b>{DXm97{R&Pe?x~d)g`MnV%db_-&NxtdSXS`Q+tJxpSiebsIj8W4f3&X6F7llWQrhi5 z3LX~Q!Cw8;4LBj9K05E1_)jMBR>b@dyqaDc6EjCnt)wg_55tvaNMVGU;kk1nv)oq* zcUx$*8*-HpviczPY%>#X4oWzVfTt4z?3KD?g&D)lSDHA?R6xqKYvL&=lXXgogK;!R zPeEl1$ISF#zL3F@d*!(yR1_xyAF}*uAZ6!!zFxBI5rbM7BK~Hiv>@L?WzaCPG?%A- zv;`FF``_7lm5vt0Z^bA()bvBbx45|BS^RU}RXV$iQe<)l@+X8wN1t3+#`j@-nO85v zIZ-tChtQcssDTVaQ8I4?aed4L!Uv%Q%?@y8cZrQL@dfI$_@^-V;}TbBkjfX;q%J_% zcD~@pO}}xC9~ChqFO`bJ{GL%xUOt6RTk0-`1(}+kC9s28GQC&|>puUBD4xrywGbq* zFL{aP7m)5L8X|1xuyt28kcadM$Ru@WzmP27> z7-oRCU=Rm`<>69eabPbgA)DBS>MDF-c7y$p|Es7%11DA=Y$fzzl}FYNzai(V3F@Ho zh6Guuet6G2KK?qHoDHtO1>ENrv$k^{y&H=u6Cw5zenv~=)*LViT>c-_Za^>NY6g*? zxPjygn3gCPVyi*apW+C?Kw^Jp}i^IWKYGSQ#H;Uxt*WnburtrlSQ5GPB~dRIc8{!UCbO z%KcFSkPQs}0~{e$QjCF!Y32*zOHY+bYbs14L)r|uBi_g%%vxwSSAae;Z6=ShQ^C`> zKxtxCcI2^#s5|~@@Ml$5Hmoo5*Q}Dltl^ofiZtiv;fZB-xYG+f5kW%61$H?$1vrMH zA;hOo5lP$1VT$7sF;!{*WXEAt+dldW_a?jOIc$ZhPSEUPS+c1)C|epn9dxHnE< zApaNP)W9#%R;2;+q^6X8#J>W&YqGLBKgRA+6txD0C_q|ZaZ@P?ct*?_b!n1mLHT&=7C(reX7|0 zTe4O9u$K&zo$oU)|Jz{$6mDfa8y$6csQ@@p034*eU6~zxrYCAK8y~*k55x=-sNX6{ zS0#My2{oodP%}_z<>oi3*(di}DnM4-#1LiTu|E58+*j*JE%ZQ~rj+K~f`6P3h9R-x z4DFgyk#a$LQ?2z7VHkE{DN+BtFU32bb{%=A(}8PIS83Y?yYJIRr(e^>aw!>MD@|HB zN}mpu({Uu3AnH56CX&o51&fxlBieGXMYYp6mRX17Mus-D{=|wVI%C_TvihP6b*B@~ zN<704evZ%0+mnAt5|_~i!W`jB{_$ejBon2`Q|T^-8Rk^+w_IAW8=hv?diP?aaW1q2 zYK}M_d>2s%OFE^AzPlMYI*^n>SaW-*0u+p(HFMi_lA^=mEl_VpF-ZsNhsnig`u!YgVtBD zkIOI(I1Mz!%WZ**CSoX?47phQ9wiuVe#_y%w5?Y^0`jE>hrl^%nQ zoybg$2Rrx(h9K)6@y>Tl!eSxB1GDyF^S^w;4(CccMB#7oYn>I_{EJsHZp8o@oh@bXB5-P zw`BDUu+;PhG{q~6X+rX;izt>65f8hY8Lx2DqA;ae}Mo&u(vsuYQjJ` z1Hzpo5|{pJbPDHH_jhYsVFwqi%YrYGA6Y$_kxWSfZ?JZ6zT{`anQjecQh8k#&{~a) zOCCXWDg5TTt@+I6%Ir_9DsqMS#v82^Tb!2!EL0kI9V*yOZ{8)fA~j>$%FDEn;JNz+ zy;M7WZl&xrx*-$-7GGTlq6cZN`+<@~${Vh=z$kXs_4wV6$gn2^(5Jkl|HjRfBK)e_ z&l96%jr~C&qap?$i{TO(aK8@10ZEwBdw`=)T&yAyX>ggSlR`DO$0{i<5jrALdZ8yl zV^UqNI+dSmrnb;|ME#*VdPBv_%kR(<3%7kgmjQUcq7j7#e~20u_w)dEkPBpXvf9*< z`ZgZ=fHe_hb#?Qii6Dlc5Jd=x%oT^IsuI(WFre0>#;v(8cpvmJoBb+sKyI{U+81|; zE;iG6^$5Xfqhe&V+}TELLM_N0lk@AVb=y!mc=WDZhia}7fQ+;C8f#KfP$O#w%28FD zHsfswg$ZE@Q3Kc5*|X7EmG|XmH3G@&%5cWS(X6M}^7o10A@*l4!}RG!UO zFv?DTBz7@XR2uh#AJD_SYkplx%Rt>#2t-Xw$6UubH~cD}0p{;z)a4|{>2d!Z7P#6H zzx{_{M-h7+#>AQ3Txn+IsR~WnCMILrye3O{s`-ZJBE3Aw=nUN{Nb81W#fw;Uq74iK znEIyi%uScG-TP>EHsWF3q2O4f@i~@oQ15rr&LBq%j}$Eh>23;zvYn5*F3w>z+gGmO zT1OXDB8Z7Xy9Oy(^)=?y_bBLy7sp<)x@KK< zdR6=Lee>P?@;T->+Adc6c`g!W#$8RUS?cGiOYO)`6m73VtL0dAY*G=5(tcVid|TP> zhTafEpBm-;_^+8AHi4?LdO%zyyGf|nd_`1?VdcA*b6MAN;21cjCB-sssQT0QZ`p-o zqwV@D?_ZZ^WD}KNjul~^&An1h6#j1LdXCfI^!foA6(rW#hC2hc(L}5b-ak2ixtu_3 za=8E4m?z32Y|5AuMn$wX-0kKBc^VUf&yIcVp!pb(7K=*l)R->siT00u>lr#X9M+^o zgu?0k7JpK)4V3hsY2Y-9eJS{H7{A z1|SmTA#73{Z=5ci-<*{)sSBR6_w3v#NYI7%TTS-XMi!FZc*l1adu?8;vitpSem?|Q z`JNUNpK+v(IIpxPdP}TFx#-_S8`=7hlI_@1%>aSd^pUMaHuU?2+Kuv_K`|N&_f8D@ zu9xNU1hRM#b^QftrlG{k5@C0=mbWAahOAY%hlyR3bcc)mr(l+$vSfUGV3KiML|m>Z zXiMmB+P-M#F~0LXyEGDd_jrm&JDOUXlhXLIR?lp=5Ns<}edu{MI53Yv)9CPL7y>J3l=PDHenf{5fZdU1S z%oPWQqZ3fLgdfN*9$yWsC>mEU^E=E{au0x+A_hMU2>(B--ZChTwhOyWf;$AA;1Jw` zySoH;4Giw??k>UIEx5bG0E2sQcXuaep7;As)j2<>tERiFYo_jf@7`-&xF%!Ifn)}X zYfPy$Efw@(2X9{_Kf^Ez``-P}OH+ee&9Y;g+-W_y2}R 
zpM74^@-Q`*913>0>!w<`PY0(^>pIl`%wUy$Z2LZl@vfO(>`MQ0PC@=yMuP^irx&HD zaEd4z4F5Z*2M0KkpEa;KZwy;gNXU4%`KM9@xnLSxARV3qgNZ(A`^==qf7XIKVS|b} zr)yeRFX+T~H%iH~xX%B?@uHZmQfryE5EpL+`rr6yu7oGzN%{Y=nqd_rPaE}3`os{f zlL+Q~JrgwcabAw_8U|xd#z(8?-289TlkZPr zxlmg9pUY+=q_>~C9w)j@t~Iw7a(C1uM@L6jJdjq_BhMb}|JYJ6y7SAv^Z$#5BJ87y z{ig)~uLR$a+=}^M4glmt0xD0k<|D*;m`q zWA*6Z+)+CBneGsK$?o$8ZSGQq!PPo4(teN9v|V2(q5B4XtLxsOcy6Z(A9mvg^2yTJ zl+OOE=RM)S+cm!mHh%Xf3xC#~jI)r1MnXA5?SdACWsN2c>x!Fm*K@uJqm3t#kj%JiWP zcJ<~Ll<_mt)4(RkqJ1ym;WxXEC=RI^Hh0vR_dCYlrBs@!bB)gu#ZH5>qG$LO_bn%51 zat4T-+Sc%jwXZ$h;$GsH3v45$@NIC^Xq>k4YAbD13R`VDpg}9z0}1vm7LDyp#f{@^ z(IkN1jjVRmQ@srkMj4s;Ybba|sgVok5plX{Y;5XKX#aNZIWMg!R{pp%tqtZ+o41e) z^nRzQl=<#pHsoJ*mi@QrbvZuf+!xIeht36MO>CvZv@d&uf? z{aS5XF2k#QI2tj70$dp4&N&4(YR%ai=b)w)XR<(9d`P;t{uj~`3yXt`^zt^!-ZBES zPwfULn0?;v-;%UuF;{D5sF*4gKn(&kLam<1iHvxer1!jFHI8vvsK?w%(W|&;i*KCX?5z*P`P|%!ACAM!Y>teK zgMa}HT8V!RKHS&afd|*dzXpEOn2ep%?zTrt=E)WEQQaAYH#8er7-6GRj$guhT$rs% ziyP<-m0lz=b=9?UHXFE?8A{5Balk4JanT5wd?koSy_-JS@E%VA^`EBae%pP2)^rc4 z`@ymKraS(YWkz3l?B{k^3W;@q3nqp27$Azi+!z$FcUu=|PehmxxUnND5;8?*M%2N7 z&=AyV*=*%)#@3UAhV}8GY$O&`Z>kX>)^<~{WG7?)+`ft8PVrOZ=QpiW$R3TMP8~U> z8WSSNF?+o6%+iRA7%-qqtcoe3PfkaxM<@X=x1F^4733p;J*?Or+=Ew>aS6XM`0B1E zAAC-(${$&*lC#$h@9kBZo`JrUcwT$`wJy*hShr8TM2jeuF!RD*_l{!R>;R)=O?S5d zJ9D4_mSv%KA5~slj9omV<}Z0$LZZbdeT%tzUJ}fmv4PVN+NCg~k-QfVCa`4Vny8`3 zs(1h_@)TkgY^xvV=B;q?ZX$-i}Qn4kx?VC$;w2UGwe zA4X`_YAKp~K1M^UhG#5xGibGPd&BHW9NqZCI+6Fv6&81m8)vA_M_OV$#h>-Fg5U&o zTNEy&tpPsD!fBs7s|<)&H(zEe-!}yWt?*M0oeBTFRNYl75M18Qj0<~%@v;RmRvvRu ztf32!-HWO+8#dsKAhY9HF(7W zeuk^y!`XvJ@=G6n&f~*bdn>A5sp>vI2GV<%+2*!ypVZX`)Ln53sJQLsdHCW0x7%-7 zryjSq*F()GLc!#Z1mQM1Of%yve?8N$F)<^nq3}iNSbf)jG3KE_`JGmwATdHWyX)^`L0D1E-P&8U-d=B+AZ@_8SDF-LEC_f7q#^MjzCgNy`;YzHg)IvxteGhuV8Ah zLJV#aywtJ9@eOM9UC7KlM?QV~%#P>?y)nz>m*Mjc{N{SX-7rV zW?ys~XzPaf#r(v6T5$|EXTksaN9lfsYHcr-AY@6${L*qRWes<(84H}o3-!|)J==64 zRiP|rfExbUv2A8$JvVp9glK&wV~$-g>k{>a>XBg;zZu4aTl_n}@V_eEu;_#d32_R{ z{E2D1F1@P=%-Vo!X4ht;;U9lB;!UvA*FVQGn74es0{crJTX?kiBrI1GqI3i>&xY|a z_4W`gVF9K2EIU<|X013m&N%j*c30*lN|hewyF`*U!J2Ir8UxKW#Zz1PNIZoj%lER< z@XGaicoDo06t1rG)~rBNI$#k>oKdu*1MWG-hY7=caU8WJgBXxA(T#U92QfN786zu7*IMQy7gMd zt^K7uWb*045mkSSMn|)|dg7(xr>30?*=8liWi8>pe6_62 zKk0z0;+e!=%|MY=l5O-f(})@SyZdd*AO~rb66k@= zn^9u2<%fi8pdaC_{h?#SC7+4|9q~7+5M<@b51y0F+A70Wnhrvsc63O7N!@hRK4I`d z;4LdL(jf>BX?#jN_~yi5^p~3eP6{iIQ4GvU;{zq?g2tSiqZ60-oefGv3v!`9-MepM)%d_vC9tV{*CxEwVClG!ZEWGNbvhGU%bM1CKZtb(7eHI__3cy}|N41Oj-2WzY z17?psq(4cdN2)(%)e-s8;Qbyz=De~-wCqSwsDN2q4O;%YjQ^MB<-5ePU@JaU6I-)^ zw|mBjIY^+9u0+uEOhbUfssj&-rhe|`oa4q$Q+gipLMzD!8_smKLc4)9bIg9+o20R*` zXH1O38vbg5t8vcb0DPx@QoT5liZ0qT)g1LXV9z-n62RyE$NWA_0Bf~|{}V>^%|y$* z4ILY{R-~yDF16j4LR?Ga&Cr0Emc*yjMsi-PcQad=A0qkLgwTX~E67+)cxYTE`c=zn zEt}uJ=}P8S_Fef9kk@rHHNPH(1qiCe_hJw&!+QK zBUXnf(a$kl7~?eevN~2q#GN&^J<)PJX>n-YnhDiff~X*U@Cb&16V&8<*BygIM;P_} zXYi##bZ7kPY%z(jV3uUc?@j9ZL??V|GLerM&S{DkD!#BIGsOMD} z9|Di$Y|}E%$80Db>#tpf-MIe}r(2WaLXzrgA5jk^1aOF2bDzrv@}sKLbN%s=e3ms8 zR|S3T*7?C1CAsTwMLqK+NIYSaT*vhXrLohU@l#WX&cxo-%?4)l(9t1K(IGZ?cfK#% z!0{<)eR+NOa{|XSWvtuKgZ~O1v|v^hEoc{sNPl79z+D*FD=rCfr$X+RG8VcQvI4}D zEc{`TAHtk0hHLfYk7KtqWM9vzB6Y=ryP~Lt#(ZtCbS3n|`{o-M`Tw2ldg4noB#&wU z11rVK8w>7m^A$O_;l~U38uvPYdX{<&8Rl0qIhvVv8|DBvjL=%RHSP9Op*mM9iNltvt`1WU$FNK9X);oTi z;%Yqb@~b0PTp!Y(kzZ<-iNJ(x(el_9bgf6aZy{751C><80A`rW0*@SXcWBY>Xx|y- zOLQAm!MF+qMsm!g))zA&mfVToFZ+DXcR=!Y*{@cmb*V}>2tYGJpl}?-{U~6G_7->W zl#1bE8kbkJw)#Ky_F-myE%9hPb&Bb#P7a2A4NVe<8?B;Up;zmVyhGV7%1zxwDX(G36(>_bR^|mju3(i8lZ$xiT8hl%_2ox7~;?>_kC`L&j z|8o>|4;BwT$M4eV`=kS%huh5!rR_7b`=IdF|EE<_uQWuO{I7a$U^4Ewsf;C| 
z7;G0_g*e_EM)bQn7TA{z<~-QNBl$mB-l{%EUl)PG6x3@a~nKMnxki-nRWU| zNDj1gb+NIHQF_Sb7&Sr_OXW(u_*?c>S2%(%xzgCqecd_PO9Hcwus#)RWv!Nz(5pzJ z2jTve8N{ZK{1i$#8fFU3Af8yrC5nkRZAjUT48oe6eEQKT1W8qO(3bfUG*c#P2=~x* zseU#hWIQ!3FYcqAlM^nw+Sn9eh#r^Fv=rF(z*Cqp=Al0_HbHhsLqmK7mXg_?t=8SA)ms!z>W=e)24gN~Pno$wthz#f4|Sa;|r>YnLNOys42r%cUshexhFOiEp(83{7%oXg|y{E~JhY!4)$QAsytaK#cwrq_Qn+ysJ z`fA82h&Nn#<%@IpE3JF+0~Q-GpNgDiRCA>Prf?Oz)ofd}_U5%l-OA--vttD*ed=t4lE2W1XUTOSwo+$P?zoi9|^!jHei;Ks{w z@gF3s!RtfO*zal~5=FkWIrqo|Bh>YA5w9>DQ<9i|K^tfb@~%bjHDp!uPGAnPv&4^p1V`9zHit5VF?9`1OW*Z0W2}&Z-@+Y!hz-&TP>6D> zJK8($l{w^@`=EdOhBR;&f$WFm`3t|(f1(V?cH%@Jxr93*1Z2opc{tyK# zJB9m{S`2Or91SfaXG5v>-3b(_nO5pm4Rdnp(NYSH@69p#;uB9dkEtZ#_^DbXitz7 zB2ECq=pO#PZ(6w`4<9devst0pB38=aM{=vdE-?{ECRC+F!yPB$GLbkrD)cBf^gP*C zyiZ;>E&tmmZe9_5T4K*)iT<8YW<1;XbbdT}GBu$Q`^|?`IIR*fSFt6;Wb&KD2!gJ5b_q4dqtRM@^n#h7hj2iS)f^w z$ZTq7+vGw|8V)o+T&S;899IatRAUU5nepO4ZFEe+Sn>@q%%Tf@ClJ(DdnTR{HyBQL z-x^>YpR#x@)`92R@(}RZgJ|{;wi6Z9uIVs2P^tG^{>nlk}bHP|4UinZn!y=IzMc8XJgQa?`m?0ITRrz?#pWLpwC6? z@(RQb<^&u7#2wR6x``Bt4K}$-T9YPr3Gdk>Fg<<0@3D6>5^UjjnK@2l^Oo%wlRaab z>~#_|jzMe)Sw8vLs>&+qb=uSYwa$fz&oQ~O!JbsAOYmLYCTczQ%&;Jk3Kfzrv!wjD zuR)9#KjH&-2xseAfd&6`OSxmLl>%*Ij)pB}P$g@4@3hU2uuTsHywuif`B*^FSxe*! zQIm9t6&tZL8Q>5#TL9w$B|WelE^k7sG}T)8M~*urmFk0?gyD0X^538GmQ@BiG@J zcAGxtzby=|aiT`J*c=oeMNRl(B8k&_^<)wp5kE0t1b8L(Vk$Wxd2-!N;uMtu&2ZwVhzF8(=)EoCT%X+T-{&e_mxA2qNq{0Ca;7iRVIm{;vD9#z5$(ZBMowxp3*#tawtewxl319X{^6u3H9Fn6MEil z2pb01-G{*W1OCrvVeAOA!cu&t0u7 z09LC}Tsp27+l%pdnp7C*gPe?NY#o`P``e`rX-rU zYs!courZo0kziLEleH&*0V6w7BHOpqo2Itd>evp{4<<5()f2w}ZfOs9u8DdzX({sl z8@jAFS%KDR@j3qt{5*{Nv6?nUK0alJR_uDk#s@y`Wo}*Mz24f+4l!3-xf%HZ<$mXE zj>{Cx;Y!xXArh~zJF73@qmbhX$d2Ng%+rO7l}L$I@lu}RgYw#!GB1?tyzoUU>q87+bDkw3P^^(R`ZH#>+&>5rsxaU9<*rz4NAn5aQp743T zp)k#HXLxm^$`hYZNMge^PK*b_{a{|WL3cq>P*0a+HMR>7niu~Qb5bX+4n}A zCYFc1iZW}8#^$O4QlT8t@pPnN@(T3)sRfO|siC{_SkgCLq9U^0(U|~!r1wG}+!Mhr zz3H*mV`VJ{-T^OsSeqMfV>knNn=dafkl4`sX<%jBp%w1oXq;Rbjk-$WXQ#0Dvj=#q}s&Vsn7X?E$w56Byzx#X&jhoy}fvjm1ThdokN z3Vxa}VO;mvhjy39h=Uq}C8qQA2BJfDxrDTT&UWmqgt5dxkx2^Uqa<>GD4}=*6=FJ= zL3h`5o*I0Hz8PvWK+ja4DHN0?PVGp)lhF+R;it%Udq&RT$Su4Gkv^9uD>5ES zsE~e^IJ~}Q(M#AJOiny#@PcoOGP0*D z=0*GB{FgZ2X4Jon)h1MNam`XcWW{Njml2{mHWTi7kn+*W;F_1a#o`mTPCg}z1o(m5 ztiP2Xu9H4de6Ese8HjdWru~wmR0QVVuHYsB#A%0*kZmFVxmpg(?+__UId^Bg7>8Q> zx}I?qs8~T>hsy0tJTGFg48`=Ki=1gO!Mh22p}NM5N-4wu;YClA{`Wf5fFqUqBk zlLE6xO=$DDC8$%ZW;<+!-iFm+LE(CDnmt?{x z-if1ZeQ2#CFLGAIeq+9+DIxq*#!a1}eXj_!pwyjG-ay9fIO8A$ux*@{pb~U6J!ZPD z*}+~3%^tr|B~Vn`7X>A!hjcGyCT)@#210lMPP^N>QjUqQL{__7Fo+*r5u{#~%(6-+ z2o|WyC`L-Rd7}E9Zl%nFU=8C+J7T$0(oWiBx)h=N`?7LP9<*|D{_oU(jxc1@-q(9c zY0PhO%}?s`x?`pTR3682bL((>bee`EdYdX4p)Fh7l~nGt?COF=un0UC;X z*moJ1*pFk*hfv9M;P4t2qHJ6nI&_;u&J5m2Ntt7~Sgpl+21ps#T{*5a z6B_c*{TiWPYVx)k{0}%fiOdgf?{65)4{E%t3hf!@o4;lY5y9;rzU(y#aU~;CFVTz% zJ?}dr1%Pq(BX3!9$grJV^g|1*KRfL|PyMX?N^Jm|1U$ZJmwz&YZ(E`d3v3p8Bp_0D z%Ap&Mh`o`RW`)vyORf`s6_C(H^-9aaQMaB_6XH)3BF6CZyT_n_VjMnPWp`B}Fc4owgS(uM$POu-%&6o=&Dm3?c4NrJQE zHc&*qSwh;snP>z%Bc@$$;OD#Xr^V?5z|I@g@0?>bCleaJ-ESdaMDX3%58~J*B=L2} z?B|;eMtAbo!^8or1(ocU{>RLG8tNslu6=`sEJVd=fP>!D@S3+GiEr0F?x#OiA+mAs zf3`Zu?+u8E8;SLY!+UUrLfh>r5k)T{VVNj_kSuy($~ix7*P%%q#)<>$gaf{lx133n zd!(8szSahpOJrw-JjcKTX_B=eC6lrJsCpBkgPirpsN z_kwp+YJ-%RtBGwgY?EYXy?MG=Fx??i4K?75?Uwe z=2Y}|QdZWL+sJPy{as5n`w5f(yzj@_c2unxr3$@gUP6&GDz-%QJ+j0>gw&zGy!Q|8 zmdzJ^tDJf#X-hFQX*IN*+z~`I(AWd+5CK3Q(rtCPm%zCnshGwi?i$nNvJE$r+f+>4 z3cmPRw`is{ntMw-ozb&&JKD#D4=y5pD2`L=Sz;!*dO9#= z?}U>g;12XmCNr%S&}0|GW$mb37Gtx8hQ;lme zF87JBE%9Sxx}r_u84WS@+wqQe-px)}b4lZr9vya`*KadnT>als#s{)vsH!EOM#V{% zMOa${T!H>OhpfvmS1Qk%@Hz7*289KCnC!k%8GPL3rN 
z@NH0+SVar0`oH(@_(AO&p*!05NCNDf`+V4~cu6ZkIsE5KTIOBbuCKpb3VnPYcsYK{ z+bj6dsx-+5Cl;mJAfn0DG;(9jm&58Uj`?@}mSZVmQIV%>eWh(ZXcwlfAL`2DSO(9b zzQ04Z=^BJxuACBYUxqfAAfrT(DHgGwg_*~Z;_ktZCnr{pw(#;&ToxwQe=krK-jt2u zz*f>)DcBdR1~hU48{-o{{nhx94rm-K=f9HaQx=Dm`^?aWOpmZ!n=ve{iND2gw{NCJ z0N7apBn5B2SKjeF)5&%4v(vP>2XDwGq2(RijVq~nYp?_qm1fgOu?9uDoR7`Ik{#U%=^Smr)0G0_7@a#*X{0uJdc@z+iuKzFvsi?% z&`L&ENBED_E`qzU+Kd&RPb7O+QL{Tn!wg2X8Vln@+}Q6uTQ7=h%saCD!BatI43Bv< z?bXCh(q{ON0`Ut<+@lNkxHz@(CLk0Ko%~k^^Iz`$nQb*TV=qQHa2g^VUfUSVns_Gl z2@YfdfOQ4 zLMEqsI3`H#mVzz1`b^%wfh<%bcoP@p1}bRpbtb@kiUSie%-aA4A)is6B`=?l??D^V z6It}CjLMs4jb~Jfo8kZonyB2?tttA|sSJ=whLc1@70+PQRt=6~mbq^VlfGfDefg7A zppuG3>OiMfq`P(WJwFdB3kIOowL!{pVA=tugX-*ajKw1ilf@vFSkVib67P+Sbb2e_ zV8XHM7cM%wvKWX&jy*Zn1_e{ERL}`;PFimRX0W>b_9;2lE|12Y=#)lWF`Om_v}qwXW7@~mxvj@ktG z3}?l?W1NvC^f3Wvi`M%adx|+Ktsp=-E=UHo%K3SN$%kL#K%YUc<2VZOo!w$I#J*S(9!JAuz**c{)Y% z)CMpt(ElPXcjkJdgE#22O{){Lw2J?bu(_RKfYOV2VGDj_Dbry-b8-+gX&)HcdsLM7 z#z}$<;$6uL-|6W2#Od}rdV35d#rS3(KOw51X)FU!qs?D%)5&6O58o_Qp%MAH{8RWx z8;9J#508?s{Ki0Xb6_A~3fd+3%lyrj+dKI07Q&61x@@Ev3d=$3J2lyrZ`_S!+7)%a zJ=HUHUAPpt5cklgJiSN64T`x`8xl3`{g(US5wE94>mqHh>rvMy;=PDiVL@z#$Vv-I z&z$;#pUZa+Po7HOk{%4hcQ5UIqLEgLv&+j<5~=#=_Nh^*(G9C;&w@dmee6Pd1(kb> zlR?OYF1y70=M*Q&S$U>0m&ylhE_$nFw|s3QO8F5f;=l{4Nc2=k;&0aq)HQbbMp+b_ znivciE;mr%h>hiF!mvx^N58TM`%A0{cz(j)$iE24Q_13<07*92ngz&~`1KRf=An|I zoc&UilD&!egjK!J74tY(C_%CYJgZ{gBkIuhoXZWGv|>6KWQJ!oP_xu3b2)Bkvzm-Fhd)$ zqWtM--`FCW<|KtrFBdtXw5~pZWd}~bKUo#J1wSa7G{$E;1z7mlygkTWRZYMbdfIAw zWZpz7MoH-roU|fkAf-D9Kvl0xYcCebagsr1EfMvsmhW(~TYt8oQ z=auvgx9&ga3Ivef8h>t+*2FX?EE+jCK{uv3u0Qp%5-hEhm?tuzRd7W6ngbOTSRB+< zIWQKOhFfczu<_0wNSbGi37+RG1=mdRX)7>}$ufT#jej2rMxCLvrwbpvyiI`2blUM^ znKJ9-%z4G*sf$Z0`^!YuuQ{FIDxrIt{uIL)Z7o2?1Y07o&(2o=$fqtg9}vKbD}dcc z2qR0CzcF^k4xy<>N8zVZODcI3Djvrs#jpr|RSfoAgmm^ENvIpo_=YV|cNAn)m2%ya zwpE;W$s}!qc*URF&9SOT_b1=nzn|0Z4%nl-QbjC*DZw3g;Do;kx zNuusNChv9mnMxO+7|L;GGImuOhp8PucY21(=M?A@-UlE70K=)qy*AumB8jq@g+Vlv z$$Q08>+|x)l_TSTdh9&z(1t>IdA7!t{KGx!SJh1y+{vu&Cs$C?$Ob@^kQqI}el(3N z2t7<#Vv$>Ng!K*HL`s;Bi}~x!VJj%@m|1rJ6WgA-`LW(7K+8#hXkktCNMFL-E;4Uu z#SIbx_CBz-Uq_15F-AC=C@(r1L!#-$VWk;b|AD+4&EH@ubpOb_7{*t5RQqm^ZG>jN zZI6Yxw?2_d=|VRboqG%r(H+k)ddeJPf$$~}+qel()cSUGqU4WXx^s$4R`DSz*0MTk zmL>UUDL1e?+$)@G9zyy;eb}f#M`Ak^Vf{{)sZRfO6k&jnRF;Z(MfsT5s4Q3QQGWL4e5t~I8Mnap(z)P_W+{~AjPk0|x>ZGiI2)bB9ZShp zFVZqr({}0IDnA3~ZtfsQaG1a(2|$){`I8jvZ{g}vYcjyr2KC2$I0{KT-SLmm?$wJ* z-#!7w<&k$;j*OD#j+gicN}_hXL|^V~R*jioT3lCdOd}3in4sJ{Yqt7iGQ)qNM1I+)9q~SnNBh;&(M5 zf0`Q==!orEO3;gui!1KE%&kVB5^Y!^jXKflMu&1Dx+)ofFD; z@D88gJAX7Ms_4$yT_g;O&wlqX>!mw1&zk&d3ox1S&AJmi;kX{+@s0I+tD7eO zdLKdnb?)XKPws;=XWJqsh&ZWo<&)-UJ4blF1S}`Dw-}$>jMe`nG9$Fzgl?kV#FqsQ1F`&rcMk3%2hc1SVJF1&D&^9a&?>Y3^r;>-g}wF>?q zCf)!!9?0E5DmHO4p7Xex$*9-tEK)CG>bD;t_s8d{R(=D&mDnYN=D8Tdj`)Y=lG>o~ zEc5l0zbE$mEJ^G;xVI>rQh$d|WFsC8ktHi|<#W{-%nqF)0(ShfT1>%r9t z4ieHKoR0IXhH8){PEc5wAC85=MOjxCbe%{&cIh-fTfp%rq9iH#D7;zdP$kI&tDiIP zNdQ;dL{iLoSi!s9)H(~4OLt=drqHPU-}+9 z6_^56fpGsrmAO9sLcuJy+7zF@Zu>@m2MkcXELx+43QeUW>Gui3f$46_3)J8ry}fY; z@|~(=!qH|Zw3o6vO)qM9O5sjk^R(TrI0T?jj%u8e+KL2)T51$vE zWzvlxrjRqPb~anI0`VEJcufPB z1Z8f`HW+hR!+>81q6v9~zN`6Ca~m4ue6+Z-S+u-r_ZYVn63}i5iNMH0xCM8(Ew!c! 
z*{c(_AGKG3LB9@q{Yn8Otw-8PQ*rk)T%9k++#;OLJ_}Kv~86ILQh> zmRr1ck#(_Zz^3o=se@;vr2mozNhp1mr+w;J=N#(3S6K#ir`S!&6Pm%afn09ZcOG3X zm8VwDa$4*2MOj5P45Su=;-Q@dlfgTkKDM+-x7*dht7XrVJur0AW`Q7UaB)YZ{Ebn6 z>-C$otEHEZE{FR^X%3PqsoJb!Lutn?+aZ`Vl+}1G(K5sP6#HDeM!K*4Fjk^X<(Q#R z9MTO0_TW}K#}rAXy3=Wg%uA8qi)5sW8H(_Gm^$CUiPI$ieTTU!*@KTzWQ1VM%>(YC zVw74gVC!J+#gF!W7JDFzsXc&<)tFw_8n0%HG3I&+hJG>-5C}j$d8b?o$ayoWKED$` zDp+~uL0J((5CuKIOQ|(JOP$Hzi$C$ICVEux(j5E(Ln-xyYer(t_mha%-~!$lW~GcK z{l#4MagJSRCZMVm-^kBWk@!1#NUiQp(8dNOEa}LY<;yVyo*A0kX}S|5PPf%oUht<= z$%W;USHDoTd-GZnc*$Di&r0RNK4{Q2Y$arEZS-|bY^%n}&+U#MF%N8(efAof{niQQ z=~Y=6MvvNxxOEDNAm^s~x8=maMF8 z@{n#p*oY*i*%WbP6PM~O`l%v`HOn>__0eZxRzp{qK7{Q0g8SLtoTS>t8w|+~C>@K} zodY_IjRwA#C_aA6du%q~f+?A_GBgn^l8=&X&O(iK%k+n*dsGf7$ZMZ}vr5Z6f=ebt zc%56Sr^lBlue*R zk>%-OjE^e2svc>xlo)=E;gi)3(*ghyO|^Rlp5KcB;z%fe8zMx!$HhU)y4EI-lQ-K~ z!tCS5bTv-I3JWRwx4?_!Y9*X`pZ!H2*mwPzx*0GETR42IsB;|(xK`@(iZ7cXS!=WWc8j!_{ z?vnT$MpR2MQDn5Mq~Fy56Vy!E3Es<-@!H|2o_>XoF{Ru|FocJE7}# z$RQ(v*^hDr5PgV0yI zq~Z`|oQ$5$MyXN#8(f%1TN5od+JTXL{6kB$Ph_VrQ}K6w=RbWTFr3@f4W@@C>LHf* ziysEHT@-1q@@PvhZg%<2{7%sMV*Fp6sa|%jD3zI6HajbVTPJLTJKnO-$ZTJ(rFD5! zdYG3V`rtFqdDJabtDnAX)|B+4y1GX_OgDin8C7wXg`^V}xxBO#MzOD~Gzi6QF05BE z*1tDxg+0joD~VS*f&io~EOUTR4>hIU{_VD^G#yOyZqdk{jSz=^i?Uz7W~(iPC-PJA zb4Txidgq@_@<{K|RY}V+8URD?(EGeY0a)S3xR2HbZym?Bo1i%;hoAwQthSF!0Yl2> zwr0GV<}q_n58jZBONlkao}J!xU+tyIJ+9B`ni)Jvj~hmQ!Wra;)+S(_Q7T z&mWjqmd#=N_%!m*VV|$(?z;S^L`4XkC3xN{Ob@}FNpWVfPAx&gs2x)((}IOYUo%?LcrR5uH!yyRY3iDZ6K&+ON7L)<3Z(Cu%uMqo2bbG84!O?#B~ZKU)= zB3PY{zTv-@dbtI$Ks&?qm>zTaFbcDp@yR8MzuBuRSLFKof8P0om;V2UM*crIFaCGf da_T>qHzdfTguy(sAlxm&r>Ux^Ql)Gk`9B4oDtZ6_ literal 0 HcmV?d00001 From bd12ad9d29a759ea0e68a11fcf20d59d5ed2e165 Mon Sep 17 00:00:00 2001 From: Tezan Sahu <31898274+tezansahu@users.noreply.github.com> Date: Tue, 26 May 2020 20:34:12 +0530 Subject: [PATCH 0986/2289] Update book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd Co-authored-by: istfer --- book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd index c6a63153008..7ae1c81a008 100644 --- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd +++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd @@ -103,7 +103,7 @@ This will not go into much detail about about how to use Docker -- for more deta ``` This should produce a lot of output describing the database operations happening under the hood. 
-   This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB:
+   This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB):

    ```
    docker-compose run bety user 'login' 'password' 'full name' 'email' 1 1

From 3033bc93fb789511935c2dbbd63dd4f439eb9d95 Mon Sep 17 00:00:00 2001
From: tezansahu
Date: Tue, 26 May 2020 20:40:36 +0530
Subject: [PATCH 0987/2289] fixes to issues in the docs

---
 .../02_demos_tutorials_workflows/01_install_pecan.Rmd | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
index c6a63153008..d1b39cf9dc0 100644
--- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
+++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
@@ -103,7 +103,8 @@ This will not go into much detail about about how to use Docker -- for more deta
    ```
    This should produce a lot of output describing the database operations happening under the hood.
-   This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB:
+   Some of these will look like errors, but _this is normal_.
+   This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB):

    ```
    docker-compose run bety user 'login' 'password' 'full name' 'email' 1 1
@@ -130,7 +131,7 @@ This will not go into much detail about about how to use Docker -- for more deta
       Done!
       ######################################################################
    ```
- e. Download the [`pecan/docker/env.example`](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml) & save it as `.env` file.
+ e. Download the [`pecan/docker/env.example`](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/env.example) & save it as `.env` file.
    Now, open the `.env` file & uncomment the lines mentioned below:
    ```{r, echo=FALSE, fig.align='center'}
    knitr::include_graphics(rep("figures/env-file.PNG"))
    ```
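The two fixes above correct a download URL and the BetyDB user syntax. As a minimal sketch of following the corrected instructions, the `curl` invocation is an assumption (the docs only say to download and save the file), while the `bety user` line is taken from the documentation itself:

```sh
# Save the example environment file, using the URL as corrected above
curl -o .env https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/env.example

# Create a BETY user with the documented syntax:
#   docker-compose run bety user 'login' 'password' 'full name' 'email' 1 1
docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1
```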
From a78cd186a0d1c94e62c43c5248a181c55945b95f Mon Sep 17 00:00:00 2001
From: Tezan Sahu <31898274+tezansahu@users.noreply.github.com>
Date: Tue, 26 May 2020 20:55:44 +0530
Subject: [PATCH 0988/2289] Update 01_install_pecan.Rmd

---
 book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
index 21d7d71fbe0..d1b39cf9dc0 100644
--- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
+++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
@@ -103,10 +103,7 @@ This will not go into much detail about about how to use Docker -- for more deta
    ```
    This should produce a lot of output describing the database operations happening under the hood.
-<<<<<<< HEAD
    Some of these will look like errors, but _this is normal_.
-=======
->>>>>>> bd12ad9d29a759ea0e68a11fcf20d59d5ed2e165
    This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB):

    ```
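The lines removed above are leftover merge-conflict markers. As a sketch of how one might scan the book source for similar leftovers before committing (plain `grep`; nothing PEcAn-specific is assumed):

```sh
# List any lines that still look like merge-conflict markers.
# Note that a bare ======= can also be a Markdown heading underline,
# so review the matches by hand.
grep -rnE '^(<{7}|={7}|>{7})' book_source/
```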
From d5e2b0799ec2decb66708662a482f9cd3af19405 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Tue, 26 May 2020 13:38:27 -0700
Subject: [PATCH 0989/2289] updated and re-organized Git documentation

* removed comments about VM
* deleted more instructions that are commonly available
* re-organized
* made section for git desktop software (github desktop and Rstudio)
* moved tagging and updating releases to advanced section
---
 .../02_git/01_using-git.Rmd | 139 +++++++-----------
 1 file changed, 56 insertions(+), 83 deletions(-)

diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd
index b40c8d8c1fa..170cf15b7a3 100644
--- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd
+++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd
@@ -66,22 +66,20 @@ The Milestones, issues, and tasks can be used to organize specific features or r

 ----------------------------------

-#### Quick and Easy
+#### Editing files on GitHub

 The **easiest** approach is to use GitHub's browser based workflow. This is useful when your change is a few lines, if you are editing a wiki, or if the edit is trivial (and won't break the code). The [GitHub documentation is here](https://help.github.com/articles/github-flow-in-the-browser), but it is simple: find the page or file you want to edit, click "edit", and the GitHub web application will automatically fork and branch, then allow you to submit a pull request. However, it should be noted that unless you are a member of the PEcAn project, the "edit" button will not be active and you'll want to follow the workflow described below for forking and then submitting a pull request.

-#### Recommended Git Workflow
+### Recommended Git Workflow

 Summary:

-* Branch "early and often"
-* First pull from develop
-* Branch before working on feature
-* One branch per feature
-* You can switch easily between branches
-* Merge feature into main line when branch done
-
+1. Fork
+2. Create Branch
+3. Develop
+4. Push changes to your fork
+5. Create pull request from branch on your fork to develop branch on pecanproject/pecan

 **Each feature should be in its own branch** (for example each issue is a branch, names of branches are often the issue in a bug tracking system).

 #### Before any work is done

-The first step below only needs to be done once when you first start working on the PEcAn code. The steps below that need to be done to set up PEcAn on your computer, and would need to be repeated if you move to a new computer. If you are working from the PEcAn VM, you can skip the "git clone" since the PEcAn code is already installed.
+The first step below only needs to be done once when you first start working on the PEcAn code. The steps below that need to be done to set up PEcAn on your computer, and would need to be repeated if you move to a new computer.

-Most people will not be able to work in the PEcAn repository directly and will need to create a fork of the PEcAn source code in their own folder. To fork PEcAn into your own github space ([github help: "fork a repo"](https://help.github.com/articles/fork-a-repo)). This forked repository will allow you to create branches and commit changes back to GitHub and create pull requests to the develop branch of PEcAn.
+All contributors should create a fork of the PEcAn source code in their own folder (see [github help: "fork a repo"](https://help.github.com/articles/fork-a-repo)). This forked repository will allow you to create branches and submit these changes back to GitHub using pull requests to the develop branch of PEcAn.

-The forked repository is the only way for external people to commit code back to PEcAn and BETY. The pull request will start a review process that will eventually result in the code being merged into the main copy of the codebase. See https://help.github.com/articles/fork-a-repo for more information, especially on how to keep your fork up to date with respect to the original. (Rstudio users should also see [Git + Rstudio](Using-Git.md#git--rstudio), below)
+The pull request will start a review process that will eventually result in the code being merged into the main copy of the codebase. See https://help.github.com/articles/fork-a-repo for more information, especially on how to keep your fork up to date with respect to the original. (Rstudio users should also see [Git + Rstudio](Using-Git.md#git--rstudio), below).

 You can set up SSH keys to make it easier to commit code back to GitHub. This might especially be true if you are working from a cluster; see [set up ssh keys](https://help.github.com/articles/generating-ssh-keys)

+There is a script in the scripts folder called `scripts/syncgit.sh` that will keep your fork in sync with the main pecanproject repository.
+
 1. Introduce yourself to GIT

 `git config --global user.name "FULLNAME"`

 `git clone git@github.com:<username>/pecan.git`

-If this does not work, try the https method
-
-`git clone https://github.com/PecanProject/pecan.git`

-4. Define upstream repository
+4. Define `PEcAnProject/pecan` as upstream repository

 ```
 cd pecan
 git remote add upstream git@github.com:PecanProject/pecan.git
 ```

-#### Recommended Workflow: A new branch for each change
+#### A new branch for each feature or bug fix

 1. Make sure you start in the develop branch

 5. Work/commit/etc

-`git add <file>`
-
-`git commit -m "<some descriptive information about what was done>"`
+```sh
+git add <file>
+git commit -m "<some descriptive information about what was done>"
+```

 6. Make sure that code compiles and documentation updated. The make document command will run roxygenise.

-`make document`
-`make`
+```sh
+make document
+make
+```

 7. Push this branch to your github space

 `git push -u origin <branchname>`

 `git branch -D <branchname>`

+#### Link commits to issues
+
+You can reference and close issues from comments, pull requests, and commit messages. This should be done when you commit code that is related to or will close/fix an existing issue.
+
+There are two ways to do this. One easy way is to include the following text in your commit message:
+
+* [**Github**](https://github.com/blog/1386-closing-issues-via-commit-messages)
+* to close: "closes gh-xxx" (or syn. close, closed, fixes, fix, fixed)
+* to reference: just the issue number (e.g. "gh-xxx")
+
+### Useful Git tools
+
+#### GitHub Desktop
+
+The easiest way to get working with GitHub is by installing the GitHub
+client. For instructions for your specific OS and download of the
+GitHub client, see https://help.github.com/articles/set-up-git.
+This will help you set up an SSH key to push code back to GitHub. To +check out a project you do not need to have an ssh key and you can use +the https or git url to check out the code. + +#### Git + Rstudio + +Rstudio is nicely integrated with many development tools, including git and GitHub. +It is quite easy to check out source code from within the Rstudio program or browser. +The Rstudio documentation includes useful overviews of [version control](http://www.rstudio.com/ide/docs/version_control/overview) +and [R package development](http://www.rstudio.com/ide/docs/packages/overview). + +Once you have git installed on your computer (see the [Rstudio version control](http://www.rstudio.com/ide/docs/version_control/overview) documentation for instructions), you can use the following steps to install the PEcAn source code in Rstudio. + +### Advanced + + #### Fixing a release Branch If you would like to make changes to a release branch, you must follow a different workflow, as the release branch will not contain the latest code on develop and must remain seperate. @@ -200,32 +232,6 @@ If you would like to make changes to a release branch, you must follow a differe 8. Make a pull request. It is essential that you compare your pull request to the remote release branch, NOT the develop branch. -#### Link commits to issues - -You can reference and close issues from comments, pull requests, and commit messages. This should be done when you commit code that is related to or will close/fix an existing issue. - -There are two ways to do this. One easy way is to include the following text in your commit message: - -* [**Github**](https://github.com/blog/1386-closing-issues-via-commit-messages) -* to close: "closes gh-xxx" (or syn. close, closed, fixes, fix, fixed) -* to reference: just the issue number (e.g. "gh-xxx") - - -#### Other Useful Git Commands: - - -* Delete a branch: `git branch -d ` -* To push a branch git: `push -u origin `` -* To check out a branch: - -``` -git fetch origin -git checkout --track origin/ -``` - -* Show graph of commits: - -`git log --graph --oneline --all` #### Tags @@ -249,40 +255,7 @@ To tag an earlier commit, just append the commit SHA to the command, e.g. `git tag -a v0.99 -m "last version before 1.0" 9fceb02` -**Using GitHub** The easiest way to get working with GitHub is by installing the GitHub -client. For instructions for your specific OS and download of the -GitHub client, see https://help.github.com/articles/set-up-git. -This will help you set up an SSH key to push code back to GitHub. To -check out a project you do not need to have an ssh key and you can use -the https or git url to check out the code. - -#### Git + Rstudio - - -Rstudio is nicely integrated with many development tools, including git and GitHub. -It is quite easy to check out source code from within the Rstudio program or browser. -The Rstudio documentation includes useful overviews of [version control](http://www.rstudio.com/ide/docs/version_control/overview) and [R package development](http://www.rstudio.com/ide/docs/packages/overview). - -Once you have git installed on your computer (see the [Rstudio version control](http://www.rstudio.com/ide/docs/version_control/overview) documentation for instructions), you can use the following steps to install the PEcAn source code in Rstudio. - -##### Changes without requiring a username and password: - -1. create account on github -2. create a fork of the PEcAn repository to your own account https://www.github.com/pecanproject/pecan -3. 
-4. generate an ssh key
-* in Rstudio:
-  * `Tools -> Options -> Git/SVN -> "create RSA key"`
-* `View public key -> ctrl+C to copy`
-* in GitHub
-* go to [ssh settings](https://github.com/settings/ssh)
-* `-> 'add ssh key' -> ctrl+V to paste -> 'add key'`
-2. Create project in Rstudio
-* `project (upper right) -> create project -> version control -> Git - clone a project from a Git Repository`
-* paste repository url `git@github.com:<username>/pecan.git>`
-* choose working dir. for repository

-#### References
+### References

 #### Git Documentation
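Condensing the workflow this patch documents into one runnable sketch; `<username>`, `<branchname>`, `<file>`, and the commit message are placeholders, and the `git pull upstream develop` step is one common way to update develop (the numbered steps in the document itself remain authoritative):

```sh
# One-time setup: clone your fork and point an upstream remote at PEcAn
git clone git@github.com:<username>/pecan.git
cd pecan
git remote add upstream git@github.com:PecanProject/pecan.git

# Per feature or bug fix: branch from an up-to-date develop
git checkout develop
git pull upstream develop
git checkout -b <branchname>

# ...edit, then commit, build, and push to your fork
git add <file>
git commit -m "<some descriptive information about what was done>"
make document && make
git push -u origin <branchname>
```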
The `volumes` folder will be ignored by git. You can create these at any location, however you will need to update the `docker-compose.dev.yml` file. The subfolders are used for the following: @@ -39,7 +60,7 @@ These folders will hold all the persistent data for each of the respective conta First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. -Once the database has finished starting up we will initialize the database using: `docker run --rm --network pecan_pecan pecan/db`. Once that is done we create two users for BETY: +Once the database has finished starting up we will initialize the database. Before we run the container we want to make sure we have the latest database infomration, you can do this with `docker pull pecan/db`, which will make sure you have the latest version of the database ready. Now you can load the database using: `docker run --rm --network pecan_pecan pecan/db` (in this case we use the latest image instead of develop since it refers to the actual database data, and not the actual code). Once that is done we create two users for BETY: ``` # guest user @@ -49,11 +70,11 @@ docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@exa docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1 ``` -#### copy web config file +#### load example data -The `docker-compose.dev.yaml` file has a section that will eanble editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using `cp docker/web/config.docker.php web/config.php`. +Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. To do this we first again make sure we have the latest code ready using `docker pull pecan/data:develop` and run this image using `docker run --rm --network pecan_pecan pecan/data:develop`. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers (which is mounted from `volumes/pecan` in your current folder. -#### copy R packages +#### copy R packages (optional but recommended) Next copy the R packages from a container to your local machine as the `volumes/lib` folder. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. @@ -61,6 +82,10 @@ You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). You can also always delete all files in the `volumes/lib` folder, and recompile PEcAn from scratch. +#### copy web config file (optional) + +The `docker-compose.dev.yaml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using `cp docker/web/config.docker.php web/config.php`. + ### PEcAn Development To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. 
You don't need to stop any running containers, you can use the following command to start all containers: `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d`. At this point you have PEcAn running in docker. @@ -124,10 +149,6 @@ Small scripts that are used as part of the development and installation of PEcAn # Advanced Development Options -## docker-compose - -To make it easier to start the containers and not having to remember to use `docker-compose -f docker-compose.yml -f docker-compose.dev.yml` when you want to use the docker-compose comannd, you can rename `docker-compose.dev.yml` to `docker-compose.override.yml`. The docker-compose command will automatically use the `docker-compose.yml`, `docker-compose.override.yml` and the `.env` files to start the right containers with the correct parameters. - ## Linux and User permissions (On Mac OSX and Windows files should automatically be owned by the user running the docker-compose commands) From a035f3cface61fe4e5363a79a3365e6aca9aee14 Mon Sep 17 00:00:00 2001 From: tezansahu Date: Wed, 27 May 2020 08:36:41 +0530 Subject: [PATCH 0991/2289] modified image for env file & added warning --- .../01_install_pecan.Rmd | 2 ++ book_source/figures/env-file.PNG | Bin 175488 -> 92110 bytes 2 files changed, 2 insertions(+) diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd index d1b39cf9dc0..645306491f4 100644 --- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd +++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd @@ -136,6 +136,8 @@ This will not go into much detail about about how to use Docker -- for more deta ```{r, echo=FALSE, fig.align='center'} knitr::include_graphics(rep("figures/env-file.PNG")) ``` + + Setting `PECAN_VERSION=develop` indicates that you want to run the bleeding-edge `develop` branch, meaning it may have bugs. To go ahead with the stable version you may set `PECAN_VERSION=latest` or `PECAN_VERSION=` (For example `1.7.0`). [Here](https://github.com/pecanproject/pecan/releases) are the details about all the PEcAn releases. 5. **Start the PEcAn stack**. 
From a035f3cface61fe4e5363a79a3365e6aca9aee14 Mon Sep 17 00:00:00 2001
From: tezansahu
Date: Wed, 27 May 2020 08:36:41 +0530
Subject: [PATCH 0991/2289] modified image for env file & added warning

---
 .../01_install_pecan.Rmd         |   2 ++
 book_source/figures/env-file.PNG | Bin 175488 -> 92110 bytes
 2 files changed, 2 insertions(+)

diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
index d1b39cf9dc0..645306491f4 100644
--- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
+++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
@@ -136,6 +136,8 @@ This will not go into much detail about about how to use Docker -- for more deta
    ```{r, echo=FALSE, fig.align='center'}
    knitr::include_graphics(rep("figures/env-file.PNG"))
    ```
+
+   Setting `PECAN_VERSION=develop` indicates that you want to run the bleeding-edge `develop` branch, meaning it may have bugs. To go ahead with the stable version you may set `PECAN_VERSION=latest` or `PECAN_VERSION=<release-number>` (for example `1.7.0`). [Here](https://github.com/pecanproject/pecan/releases) are the details about all the PEcAn releases.

 5. **Start the PEcAn stack**. Assuming all of the steps above completed successfully, start the full stack by running the following shell command:

diff --git a/book_source/figures/env-file.PNG b/book_source/figures/env-file.PNG
index c1b9d1048606277e1b49fbee6ab36041f41a8d67..f1ad8553b90458e1b4445b0767c4f304f5d3b89b 100644
GIT binary patch
literal 92110
[... base85-encoded binary image data omitted ...]
zqGZY5TU4F58#RNf9<+nWg{&dSA$T=&{!HhAtCn7N+*^VeVF7PM% ze(n@he8sDD>9|~Fyv%u@oEaUQN`}bq*Y+GwTQlZLi&KP2Upr$c1sM}=@zbpEtKGr{%A zVA`Sf%=+JqN3@x?qJt09MPAFvrQ&To9~8(le3XCJNlvu32%8Lt>vx`s7D^1#3|?>} z3JAG3MA0?_THJ#WvoOVNGQ-}z=i@(T-$R;Nj?;u@KHa_&nk?9APe_{;d+^h-%}7ow z*H)|#?OZa3hZWiD;><&@wSHsb1!_zA{SG=&%TGYxztAOT|4c|%Y;szt@SUk*@}6;@ zl_+iP&zZ;fXBAS0_@$N~VX?{V*m<%3FR2InN&bSZD?>3Vj^-XToab|i!qei47#A~? z3UsO$%wV?3yH|iEu1~;!5jmgubt1=dt5eRe<9j8>r?IR;Nu<&www!TJ(Y_ zJm@|8z!HeTHJc_y^-mz})%s?tNUJuWpP#52t73*9&P0D-~TQ8{fRy%G9QgXCD6pJ?*Gyb{>1;=t}#&bfrS}rjQ#91 zKPUVKyoh1hrGLu*(j|Nl!q+qt%1>}M>D>7aiftc%rS#Vl31YVIozHve@JQDRG0Y7K ziDnc()cX??d@aBIcfAt=>eKahL(nrq-~JU{@NoDF?K;q5)ShSiO(h#n1cEmP8#iAT zq5aLhI+U}HT)?JW3n(~RN;4x#K}BQ|)^~~&N4C$i$1vIqs@h=Mt0>dJ-0Rli)ID6s z-UJh?b4mn$3y!5s%vl4G&>WE zeHC@+k=Z6^FC8y2eXG&kI2iR%`#?Mi;VzdT8kAesC~jAMlj7Q7Q#T45;)delo3 z;hPZ6Zf24PBjR#Zjn}xmT-iUQZN#tzbOsY#@3RL)rM?q@-(f9Ml1-^ZVnu!_Gk~8$ zKE*mP-k4hu$(y~=f%(I2%GTZ4Vv{vEEK(B83%)?qbt0>tks$g29j@|c^ zrs@-R9sXdarsUKEPQGiF3@;~X04!3lQ~v=+%d_h5)HL}V{V6K?lT)bd?u<^x8B49+ zG%4&x0SieQ0u~w!=KnP2lT!$e&+2WB-}OQ&Rp}Z0VM&>gY#%U*Bk+Kzk+DJ|>Y5|T z8f~-{+;@L;fbNv&dA>q94EXKkUVxmviESxARk zP0=f>e!I;+4?N+fgJYDMoYKI|XK8wWjH9G^>6U&lB8y>^8sJ0o=8I;6IQ2>b0PkKqN?taIvdsZ;^W=4bRuQwFrq0xhO>)Ldd5F5^kL(bsiVHJazTwMWMl9+2PTOqrWePrVB5T z2Ikb}UuRX@hTCX$B0a#ad+}pY#{k3b1TU*sp`G)#YH@#+e`Sw8i7BZ@mC>HM`)xay zmvxGVl*b>_31-ZuG5^O`)5&>xs50MKh0N0mcKcUQ*|wwf;lN>_B|4w=s1vZ?cJyoG zyy%cq^Rls}GOHDfoFQ>T#x2Fq%Acj~xJ{66{^hJFOO13{V;-pEMp*Ho0dt)oPt`Co zf&FD3a4Lr>__PA$sE;=oAAo(}JCgW6*y+p*rr3+5A`)$sl|r|jtp4-AM8=ok2MgnW z_;wTdKri?|pqoVc|8Ku+ve{tD4+iWJK8sGwehY`#b)ye(CC9bmF?mePae4hfyZ47- zHSPa~a)>Qk;DK};)kIUtov)ZL{-X(H!PirmdhC%|(Zm3t^nu6k7sCkudltX#bJK65 zVZ;y1yENWjE@$B1eRXV%0O98QujAccg3mX|`W%v&zRV36d{kCnmBQFOdRnV!6KuC8 ziTe}{kYzguTn2{ARK_cL+wUw<6bGFr-2zLkc%M5p#7gXWod zWTDjV$eWI@`^whP#4u&*61s7zzZ3Szl1MZ@)^em}%+E$qnZl)u!Rv#&<2R#>W{=wz zZ$%P#NTs|SE~`JHOU=F2SmC_zW*b&tsMBfVvHhHn0M0_&A)Lg7H&CBbE*@}*VV>&V zw?DgA$DCq<@k#{3vwzFsfiD)xUrTLnvAnmE2|Y_s@ST-FmL3e`!UuXq<@f2Rvck>N=q0kVBEm0T^jY2w zK$_u*S^H#OHXR*~nQ7j%UH9mZC!cy|%z8_S)=CzRIa4q0tEDE@yA?1*{JjChH}Gdu z9>=MYk4NHdR-Q*;dP$2cljIzG<*-mi)5g%BBY!hd1WrYeD(Xd?M}hK@ce?d&1}8I$ zK`&6zHBZpn^m3Nr6siTh(ir3|9fnqZrFk_(-vuBC` z@8G*TRNAm8(aMjn$BqBEP|f-Uq$&a8Zv}+;z6l%G2C-yY@kN>TD8m>Tikea3dE{w# za9<$IA&XZN5;z)R4U#T=aBZIn#s>vcN9A|kPna-wyd`UCll|FZww0@$uCcsLhFj>~Ugt1l z3cnL`@r7m5+~xnZvF;wZ+u)oTXgT~43nZDyk6-&Aqk&E4d9nTP=(#2a_F1?{-~`RR zakS9P-pgRSfo8C{odn-I15iW?Uo2DgT3$0S{5_F($JobcC4i^eb1E5QyfjI2OPZrc z)gNiiiy*d*D{oglV;D*e=nNMAL)CkYB6XI*lY_<1=)1K{fRwH&+|#2q#Y3u-ioWg9 zB#?6WdXNUo+r5qBLgpJ#1|935`pdqFElPEA?w2aeM#tiWh^r>-`zRYx2I_b#U6hTD z<2riJPkcj?UeqLqxYXE@{xt&6**=8H>%GMi`eVw`{q3>72b2{LS} zx@)qo%_1!HL>_y?xLy3!Y)_)LushAJ$AL#enfZ*`SWm=*3H(b^c3i^>LHmt5Rg!I> zZI4_8Mc zdYaxI;8|LF&?$Sa&O0lP=MJEJr64b(a&hB`nt6Ai{m*o3TQ#>pXRyhS$emQh#B;v} za6qjE9|&RxP?(u6UYlznn?`|V9Z<4ueITBOC@id(VZp}4zK+}&Z*@87-I|Loa|-MgHe%*kXX^V%bC6khT2z@-lP zk>AtYiu4xjnj^#Z)gjF68t@ks1O!Q;?-n%hTl1)J2QvUlo6S54KJ3U>dxu0JvEfNy z0VOk+^Vh?M*%ZKWw7;3@y=CADCh5zAuAN?scx1~lmW4;gjiHy`PB%efj_(}9GfZOLx3D&JWj8#PR= z`<+u+lXv4`TdgmIq+!%ffGznYI(y=WWTMd*m&Cl%s3x*5vAT)7RFY2pFgeXZ7<$?GJvErTYW~pMH|O4@)=rOcSBp z?(U6gxgkXa-TFn7X$*4-SdI{^RGqYz>%Ha9AeKd=uazU)?$sK*kt_jW9Qb}rCsJu} z{$oZSw2~|0wzbaLp)D)I%~E-E2x1RwcB#ctJxtn#&g5*uAt2kvIHjoIv)H%X)Ha)G zV2cVHZ?kt;$4Wp)QL(+)WMyF(neeJeTYQg_i(noeHQ1CDD|r_d$+fbO@WgiS*8ZM> znpkvtc9>WJ5R3m{H%Qlia5RNPENwXSL>9+->QmI5t2L7oe=CDnYov%lZ29^f8|pTo zB*rhsR*v7LZXjYO`QfzZB5>(!72%C>ni|M8#ODF?Kz!VR6DT1W`$1W|VH`3&%}hQ8 z%v*o_fVhD6aDJ~0ova9Y$dVJET`Rbe`sRTgS`h-C9539czgY@wYuAvSrh<3s6mD00N@phK?k$H@;k8sOz3Q`+ZgR&$tP=dN{s~86D?2u!1Jq 
zBNrh>+gq_sa#RM!1Z-ttQ>GIWy+nE*(Zh5}5C!47x?;xfcQ&0IP8IVM#6`tKgsXR} zG`9Gv{s$PeMxEBW1dHRvn|Fi~;uqmKrnl0s1!3$=5~5s3M?vdfB*B&?WQWuWrRw!b8 zzZW#_^SN?G=42x}{gg}r&lP54fIN0cG7y_Z+TsUHjM`{>TpT;=D0$VAZ zR7xqK%=oc?3_9g14wK!;9Y)2a$_d%;w#O%|JH>{~l+LxO$@&HdyRyxu%-)an`$G69 z=?n10+-0Oeq0v8dkQL`suK3Z@7*WD+8hNT-3G3yFYRE#y*wr?wH9gq!7FGzQ@E&;E zCPE~~WgZZysSNTmbr8_$r@9&_uYd^1BFDrkyxHn_eUC)Syk7z$1hfPYKoKN7*bh|} zS<$v^Tgq`e>eqbSqg_odC<;Zc>6AWKu_Tqvq6&#Jk>!oHML1}^D1BOb#jAf|Wq!u` zKehU`DYA|s>D#4b6vy$Ay!oaLJ+ewF#==!exY zyd3+|oBcOiX{qYKQ27syDZE5|Z0<6>o&)wcmYQ;EL8wsZC~u#)TR~7wNCt(IRXWKp z{%Zm(kIuq$onJEvk{sa)Pxcg&`5QcgE7h8ctdrrBYYzJBu+P;fD#!u?WN`p=5pSh# zGzBUTRrGC@D3Mv^^C$GcmqZ-Sa<^G*O5{)xW)WRaz*{3fEtiU$H~LuH;?vgF&s7GN z3>9YGfmq!rz3?dwB46;)A%vzNGTh?k)d=e$4J0U zFtVE}KNWH`Dv6Th1`P4wL1#n}Phh-{*by7Fr72yW|9US9&5f>!X19Bp+Y==)i1M|d z#Bf8M{e`(=Xb1wnh0^>n%xUK7YX8Z~`|Ue{$u6j=-P)w&Mb^w|7N>;5o@u*8cnZmr zll}Eu#zY8$m8I0go;E2Z+k)@!{%7-I%(EEMfJiZ*?cQ)Ncm7M>np#5wfu?h7%GrCR zngkh~mS@@H8ByY0e(SKy1!SVOy=H~2%KiCgJ`!A)7DtJl6OEB&x2V2IrkGB{HewrP zcs3asV04gJ@!F)Vl0h%E)25KPhFFOsjW?`-BWR|c8v#_W^SUYRtXfIK`X}jvM3ST2 z9L13~rjq%o5+U?KTW+lXTC_XGg2^u|tdEgJ*1f+tUoSry;;BVPwZ!bwux^;w-D>Xt zc{CI!?}TP-_Q3q2XM?vQTY+h3R>*RsC5}Jd25lsB;@*i4;;Io}hZomAxESxd>_cvf zKm5*c_$(%R5#t^#%=PPo?`*_aD*(^oxx?iQ1EB^!4v zdJiR|?Zq;q#6pKWPZqZ0Rj)mmF4WKTBhp^zs0{lKO*rQTQO< zg0vdJFSROmE(bF4S2yQFA)Z)+<$sBxuaDktmzYZrojm0KG|kA9!D}3vIeYLaFALdX zJYAJ94`RPKYw7d-{nWG2n zgQdYs4T@D-dngZ&g;7uM^rUI-eaplXSGi&jBKN;Go6MG-*zG5l8>3{I_1l^f7B>fy z%JUOKL3=ivoTl%a;_BYd$XeEhX;ixDS3{%zOb}fwfRltIp_IzsAP9_9>xax4u!t^b z)%Cz9YLB_~Q7blCx3Rtr)qx^Ha$c){+m-4;0z6015c-0~$bB9I5sZ4T)w4i?+{Z2s z%%`?|LiAZfYs7Y=#Mt-!CBt&BZ9*;h`^s*RAQkF$@zbdmbGu&f;(@Fo2Mnv&THTmi zaXcbKv0(uN8?h?23g_*i6IoD~OL=$mlTV_s52Pb!IcJfI_A|g4=7mcMSZU+C29j}M zpFxpr5L}F(JaQ@R_+!Dt%UD5TU+HA-cJk_5$-tTGgj}7hoKKgt=v`QK3LF%2gy(Ny zF@6{wVjkuWD>IwY!<)$-t~Pfz2kP&9??5@1xRO|e5#te@;up%>R#a#FO)R6LJumDV zP_3l}?aRJ2)4lt;7WI4|mBB5jRn1`en3Yzb4*m<_y-B(wDA0p~kOvE)Cr}1y`dm?Y0;s__;6k1{neoN0Y_QeC zhue&!I7rzf(Rd-nGcC|OO4IzzGg^&ElCJ9>DUXm0*}VVCPTorI+Fe29ad~AF08L!& z&7|g6X&>eDwOF(t&?R(z3q#M-85UIkd@rD4_LE=1p?%PM2|*N@xXR&4F6Q5Wn}M32 z=5^tM@jE+nr5;ZLmB!t>ve4Vg)5v#fq?hgijUN++3um8^aEKO7-fZG2?wTjuaFSQA zHm@|E)M|Y?!+ITtkR75MU2OIar%92bx4*S(x-!T=&OYib z;(e7BH*#Nl7tsXlZWg_``A*3p^^J9a4xJt$tQ*89T5Y2>efDT*q=<*!mi)qtUbiSL z;etp9EQxxIy(Jh|EEEfZN&dEw@}?Q&%d9sOsFHB;-xlS?k|5GI zjyzBNLHLCb@7rx<9L#vg|3j)GiRZXU!=zE?s&)P^GN#@IZ@8O%`qXC8Y5zG1rAlSJ7b8qj&0tJF zNVi#{o?~E{Mxxv_UFPk043DjJWk5iWpk7r|I34Een2!+}QEo$*2gT(!hXiCRx4rI3 zC>?7gbGgQ6ntIwRsrH|noAR@Hp-ncO5AI~nGm}1|0PH;Oi}$)*<0sYj$$61)`pA=j zw~c|JuQ#Q!|D3eMmPpl^!tn%OTF2KEqlQLO#Ivl2@Az@U9wkCD5A~>WHKL+odVgv0 z9Itb^SWc;qMouyb5;k$=h9qeD(is%j=5YS?THoV)tw9=~huRv;0lO+)_Pj(RvV?Yu zZfmmEa=An+W%c`ppYqLV_n|{$CarAKUfV&`n@ygEatWdZPrE&%GG*af&nQDx+MA}4m$>;!3e)goaxv|4czaET!s`KvO+|Bv5Axh z{elwgA#52XNNF+8F;S8gZI}-8su;$_Lypu`08(y{MeU+u8SMOOSE=o>;mOrf&xzLb z;%7;o!~DpAAxpE5JFD&6@&)(}or)`$YKAd3b(fp>w$i%Du+>c{m1R}JwHPH^`D<&VvvMHB1%sb z$_CzhiX4hCt0y7#0W0~}k`=N+Z}HM18=g6FJz~!84X0-@{!wLK{M}6u@04f{Q-zl& zuIPbf0aLLNQt3YqR%W0Dyu_e$FF`7JSjtL*6#mDg@sA?a|7i-H3s3eWhtM74>_mvj z@vzzPk@Yzj2mIJjAY6i+&i>Ek@mVCksOleOXM*|L<=;=F!Ac{n7BU&1Q;6Z^0!~OG z$N!`+ei0A*M|yy(mj6$DMqv~N#z7f&JBoR3_lc1aB zPFeHc+vx}c-+m)J-1`6+JXyv_qvYbOz0d!=T5;TcPG{v>3OU)R2M9{I7?boO(2&iX zK14hTle`{j5spm@b9`>h#M-FumeL49%o65Cfuc_x->mC$AMg;83$5ZRydZV(YdxN+ zAspJEe&&ud3oMyK|JdorjdIq7Km+aR`4yA*hBv62mipL7TJdYdk~ioLV)}tN8`VAN zOW)g6Uda#)l%~5`crT^dLimX|wdp9AC6Pb7B>YOUEH4A(dW1!b`QG(H^4jdxPlGQm ze0fdosU$qehAF70r#i@_B1DcY3?W*S!sFSSx@r$&7O!-T$-Ca{@aIGq!FgN8d^{Ge z1iEp6G{er3o2T1NU&4P}JN^(A>M>6vSY_5_4glRK(1Qdtv0UwYJWit{@EUF+;5^KL 
z$Odb+m3?(n`7t40Mc#&hc63nWBeWF?a)gz*ZVlVjEs)>a~@2 z&)6@y29Oj;cSM%lB3f$0Z)Y1%$Zy7Dy-IJ|$Q10?|0s>QEU&+52MAbR<;JBtY$^;i zC)U((5(aJPome9lW)OGY5js8#RHB^jw4uAAvzB4xDdm@wL3my+L}dtISKp>-)N~-c zFP{f&9}ii$NH+#-Cl?XK8;qqtb6voh`skL^MvD6HHf7l@J+3@xF+aFmNWRzqRaCB^ z3bxo|(bW;txj$P|Ahb#QHob+H&lhBu7w4CCzV3~wH*iLtu^l(f2SKWCMk5jB3NK?O zQ71t5o4GsduqBKt849vQn*ZkUfJO-oSc>o+xKCv7Cdt~`4mvxdS#fy976Eg9a_>%< z4(1G8x~V`3ia&_H$o*1--e`BboEJ#3-fZNrPolX;5dB#BjFp$6puJ=B7)omxvd8{T*sixAwtVnlC0q{rBC)B z*MK@MF+I^9l_XByhTk6(16~vV`!wAhk5;vZEd$#K--rK-O?--1&;Ow`{4M`)=D9mz zAN3iT?!&acqH|bJlQm?V1|Ee-_V!uwStwmc`GJjPJVY~>%hnQEj|0}niLh{>dGu_K zwakdH9v;pQy?K>3xQ%3}(O5>3BM;*`RX(9OW#qx^R=x52UW4yk`XgX7WvKwM5NR9W zTlR_52qAZ^*bNt$d%0+F_Ddv5A9Jazw;V2S z!we6hX5+b$Zcd&7->F9r5t6IYz4U-?NN{I~>~v7=hid0>z3%D-^Xtvd8~!+z5-)9*J)(+KN1E)6L%l^jY7+_j&BUlf6evjVfVlQUYj)nFFA37IL?lh*6 zhV*g6;QG+52!ecFlGJQlZx6NBE%*&MPuY~C5kbWI%g9=spUZwGwnt5~P`Xx~^3?S5 zjuJ8Y_Nn(3RpB`>x8`Sj<$CqbYvDx4wGr0LiAe{JAD_+E=1BVf{(k=~j&VmziE}#l zkaYt(GH0v%0%nzUXr-ID^>NK3Cun;BzN)1Bm@vQ5{Zx4nNg{QJi%5_Dg15H(0P2k)oI zxEvRjv`JW0vd3Yi^=z3$-PhpeC(*!{aGZkg9b!&Mr@-&8O>@Cc{t~f|n}J29Eg^+` z;k8U8N@sn%ElVA9Ml|nMd*bCNgM9zstj4oAQf)nR?HfyVi@GWkSn~f^$7g%Kq&o=A zQs9eNoi{Pp`gvfjWO_kgG|{)ch0xlXbmjKg?juXExVi}#^jLDyMEWglHsp;s%f*BD z7q@6kI(4ElB{=b4y3dWXxI88G9sH0S?mYj-cW$tYW#mYIJ{6NnMboAAbJED~+c)XZ zJHtf);rvKat$TEgDnZuidV&CYA9l6q==Z3WHcTpZ3FGh3e(s44O9PX;4bucFMe*Ul z&0c``xz?+nWUW1pENtLe1`Mc=zAXC%l}K(biXaz>!}?XD`&Ww&^3mG_@h=HU-nkx- zI5{@`vmPtm4UZUzb?I&q?fvLUZd>k?$mKDgUBK<;Lo~k6kwMvJyqnypFo;llOfEG2 zNc^`lNWMFxi`Qi3m)TeEb6tw=njU6}1`Cd*`(7ye-V){(+SDc-443)JVg{u zCs?yum=$$Z4sCHq2LUW(_EK(%4qCCX8@1xsBXIcBovIBrB#Nnm*yhnlv`h5>jK-JO zG4o20r*!xGYkVKAI%Dd{%R3ST7PjkNp{V>2@Yw5O`xHV=4a#bbK2S5ShxwGvb zGLQ!n5qw%KF43giq{AGoCPLP?D8qlg89}F{w@Tfe;e-b>5$0kuJM$hX{4g7$ za4f+D?_{!~f{}>T_OhH<12L7S3UzKEiA_g}wb{qt@>{kRskNivkW$XT$BpV&fzO{_ z8OAk73ohlXL+HR+MUB@L{>TqEw!4xd(TPahIm~)rfxR3Hc(5qzN5S+XSMi=exws9+ zpQQvtlun@t@0r<6&%_*>Z;+`C=u9kxdL)F>(xi&t_g0#o(?Zkx%7{Vq#qZX-L&>!; zS*piIcoESA`*_tXegxz37tbSNV!lZa#B8R^Es;~OJfjF)mbeoiMI%HdA+khigTf^( ze<^ZjW=fTmb1f=$@ZyR`6r#^!)~uq{yJpNO)wB_3)#|gif3U<@d)q+6jZw8)8{~V9 zT)>?obK2V*n|qF^F1Ca34}Q!cUQLSzLKQk^E8}QOQVs-)$E&*Q03p%FG`Too%2#Gd zVA{|*Y(WMGZ_H<3EC9E1$!Za;7Ll7EckJfnhq?oPhtieoym&kvbZ|J_Te1$0me1fp z43y{N=wVAI8(;AKJj$_u!c!vnn+3itVP1zVmLVw`S7kLI3GYwZgtNM=Z%LS3Mpr(7 zOarrg*>mER!*Y>DcVl%-B_J#c z$>O~2)R6m4wuZ4tEo`Cm3>(zAs10_ZuV?iKbg9OH6K=;NxftEIJt~N@%yMV-gP=Ek zgkm1+*Yy(OleQZzDSc|-jA#e8+u_ha4!u5Ky;(|P5+#r-r_&EhC^b)~{6rNX5c9(~h<={?#daHdfM}l}Nc4KQs+HBk@u!s3 z+liVtDAmseA7q=N6-3;XV0?<4{wJTkRLEy;t66<(aHb!$KZ%GD!j%bse)s+c>lO>< zxNf`zk}N|G?i){!!**5{0mf?Vy?NkcSl`~>T*YNFH_$4OYU)aux$T3&tn0?n33z2; zO4C>T=-cM)UU^JbW~t~h!1w@9(w%Cj;6@l|Li_IVpCT-v4?wfx^`91IB$WyX=i2^D zQtSUs4CaQF?;Ic1>A+?%h_x%`Kl0%JAIX&nB@ijmz`mo1y`Q@wo{qcZpPJ$?rh4HB z#$012aoBV&oxoz(Ssqzt0Nt0y5mzz%e_!?Jww~VA?5=!#7N>= z^QwyX-{#!CkM+1pHeFwRraF%)n&`cz5AlZU5TbsmArV1hq8`N+rWkSWCR-&%ZAujthjHh9*r%h(_Xs!e!zr5y_Erp z0(W-(rF@bQ5s<#epVho_pV;EE*+*(deL!Wc9TLQ)Z{fZB;;7@3_RN+kN14R7WJ*o+ z&WY$uP)WH2N03v^eatN5q;JyneEDE|?vUwPBX|1e>;`4n7%fUNbf-I}%-}*1-s& zG6DvMn4?pW(l!!iV~O|PJIY__uO>gsYh?c2HmzNX$oM7Y5*9ok%3<|!=6LdK0Z90a zG+?WMX(OXUOITSx=8|6Xga2K0a0#7c)$=>a1_8u7My74Bv*onJ73FcW``#|0=!W&xW40l64!5o{nnX2ZvSXz5TvUU;dy^LBY_;nS{_sy%rL+1f!G>a>8l4VJ%z!rVUj_tVLLS;+xt zdPN8hR<1w~XxZ7{YRIl{R8mVY<@ z&IHv1L&st+L}u>3eIJ&;XM)|oncX^d+0e#zd^SFK(lf@$w{j~FD^GTjmKlGW3Q*%WO^{LgKblQsYo-!Jf<(w|B3Xk2zshn?DtfJoz zT1b3c7zm^Z!iM^bx~_+rNe*?^J+*SVZuUdq+NgPjtC%T($cm)-S-$<5fY z+=%?M_0W56VV4W`86A<=)CC=rI&;sLJ`Sl_hxu zoRf7!rtvRSU&eu?*qHj9ZGy;mj467N!+^e!*ew^twj2E$v>DGo+U7>@dLAC~UN3sg 
z7iiR-TyD-2N>DgnO#Dj^6YQnOh#&blQv+#|Uixysyzsk3qy!xU!P`{O>3sN&{|C(*65N}yR_Og8<{meAcgRzFJobrfpwqRTs(~1B! z^m+{;-V%k?U={@iPK}^k#!9YHr5v_)tjhmowbVbELBzXW6$OJ257{NlX9_+%{Sfjb zTRhynkt<}Z{*{ZZ(WvCMwrU~X(-c`9y))akI!$%{FV`xf{qnE!yeUY+QcT0*tT)rt zsGqU3j`UV^H|M91XGOIC%c!RZ2mAxX+})8lmpnZP5H*8KHn9k-$vkJ+ib<$@cZv*^ z3N@7T8p_cSYSV&-r0_+$zI@VGiYy+xP_|qe$;HhaPXHGo@*baG9_#jYTrV(%Sd2O{4P-RgY zjt1w6HT5g>N+G(Tu<89al%Fqu)6T!#v4(EeHO2BaZhvl-?H62Aez%*gb26>&f$8lL zh64t(1->-nEE=|&N`49PaC&fAEemWlw91K8_7ncz@EZ{MV0y7$RJof9PcIvC*|E{> z+wtK7@OTOgBg10#w)su{6H+=?Jstt`etgKxZR{qa3R;1{bZtn)=3FPRYmHmdp!<>G zUsdn}@xpF759j_y6zF^3QA*OW3Mxmd487N=dX9|dd{p^Ki0JWvI&N>&ARJBUgnW8x z7wErZNeE{nj@78aLb)H*jVm`2bRWGx{SL?G{^0dZ3qLd|xDC}^{G~9?SAi{!l;64B znebp`WVEUTfJtPIoR>JKK^=IOj{|All10Hf{8{FKS%9Z%Y04DGQ|$((7R*Z=v}Vwu z(y6|v3a@d|z$CuYCR$m8cVZXoYLyQOU621tEDf19SN%7{n6=Ii0_L&Vx{pT(bp1t% zhj{zfObTx|Fd@S)`1_Iqt1@C(aZ0sx=v}4A83S&;9}rrXKW)|uMVwy6VD1*VWP$~e z9XM!A3fGzsx>>+%#ZDh_d0XY47Z!lML(n{vACQ6D9v3VR@nf^^;U_N(5Yd|=%`?O> zRw?p>w|bmx9ApgR&fQvOfJR;@DOcyIG=rnz%$~Kj*oD`jis!kf-B_&9>yQfY-{(Ziq&*1X_uSWXc zr!mA~{x>28fZHxM?CBEWrR+=p`=~w#Spgo$Z+k{9sU1pYFM`w-MF5g_<73$yD}HZJ z2k^|?3&7V^A!Zf9?vuY>)H6b;MxK9HRS+%;81XNB(fG9?HKmZM@KKd7-l6xRtZnBX z&J4l*Avqo7<%3d{XCf`yw(<*Ai}}ZH)Q~+n4BL($7kz@)K0RBp-K{cQ-2_v2n3sn^ zzc&azu1$@5T9U^zde_}Cwa#SXFA*6!>`3Z|Q$iy7F8U6H`AMNz&c>dXB(prP@fc&S z|L+Z!R)wN$)^kAoiVv_E%tlr&RRgB<<7eReXwae$E5s(B9RZu83{qs| zSX>2DwR(7#+D72$)6+NEM+dipBpxq0hmY8CMu+RF-JqTe7`uD1W9{DW3HbEYlrwN7 zgJACj8XGh^1upV{Pm?hO<#Ty-skX9qpI#Q!96qpPt-Oi*BUVPj3i!cmEj8nIn8N9N zF}jP&v0BiTWkJ9GHN2FF)aT!2VN!PGt*`!1!n2C794MzXzKGN<`+)`mut>bQRWAvcoJ%ZVa*w2v_+ugU(0Uw41AZKKm2 zectWT2FYj4s%qaxl*xc^XwL)~jV5IBRkGbV+qKlFrioy3ZZOU4nEJH0BTVD)(s{3y z*dAnRO4KWUTke~u8^1tob_alWH;FOH)+9A?{Y!G5L07( zS@|H0q=+AHgzR?T%r*JEM{48%Tj=}_%gv5fRs3=iz#0N3_xPH`@wgU;duc#@?c?R@ zVYyq!A|ayGdmdn9W)FJ_;0g)->CT`L8~~~gPLRbMDt|R|teueTv~gXQmnt#WBgb`3 zAHh!miP~t^2XM6UM9YVAVv_cIQcg;Gv)7_5d4G5ly6`kN(~qo2+Ra^Ab=s$Z)u+$V zvCfx!D~Aj#DH7|I$c;^VEB~=Jt`uaovMnC>pv5+SGd~I`3!D%Ziug7+*J!Gk=iNN@ zvC6G6jtt%3<#$L*<8K6i!;I}i7U$h}7Z1w7X z>acYa>fyRMx^-1&ppBNk~lomWHRJ!*v1={gf5As$#y;`FwTC<0Tz{gQpY54RE zX2@Obgi~u=@2FAKTc-86`Jqn3$8V(BuM zqve9aLKFtg9-Q3Pe-vXJ=4=$c`0LWCCFJCj*$7EiBpM{zL;hAI<(+!>+vJJV(SW8? zL1NH-p|=5LBkUS!aAd&q!iw3~2JXV|=hHs@`y>Sp_vaH=|1$ou^(|@iUJimTBM*Q2=LMJclRqsnQ?k1MYP_*ah=sj`RKo^DdK! zNn29iGwkLgd=7A+{CdUU*QRw@il=C^;rnr&0gF3ZYV1tsq@G%JWQwdymBuGhG{@$6 zZE63C{1%(FCE@_%`k9D#ur_{#K~9i~i~UN(lGNVj3DC46-~?te=_(K(r{ zD$(i8w`bl<=dtj`Y(IhvXs-@i=Ps4WYBpo@Ti?CMKAKTiP;*tRk7jkFOHEU=;w_}{ zO~%2y{t7JcUj*aD7SWPpg~REDy{bt|8l+;3;bO!;`_1q3 zX)Da%laq9R<(-iYbz8*J(E9wk01@l)p@^Pj@Qw2ZT&YXB{7}qt(ivI;!H)%>R zBT-_HLfYZYj-DEyuBUJ29a*d=a^=yOI{tOwQJe+n@$Z}V07DYPXNnF_ zUxh$lZV;cA)~u!mwH8l6?v{h01!$ZR&;MFEkcfPRtM)QdK^*`%;!APKNgtVG&}(x3 z8g=v_(eoo=0?*_fu|fh+p7DAZy_);Xtz{#*@JnQiMa?){NGv(A)AT;-j?6vyu3O~T9Bm^9m4J0Mp4@#xlah`MXvWcp5%)8Q>D|`*uY(xvr&yrwx$TR=ZWPH zVV>NzD`Rw55vwE;W6YcTReDvuN+dX2KMwlYE^zVN4Lwz;~4zjH7=6yw7rB9TJ#RhxEItH73zZug##vJK)qUFd!>K4ea=|f?+-SxhZ_TtjT z_6OL2%~5(p+$JGsLC=gPIp!EZvJn^jKO<(=KcFS!)AQKyPA0b z8wTpxcOq$f4QLRUTCy;gL`W#)<9I@&7 zCOh;h#@sb?#mN=t=x=UWH%Uq9NcBUrQHOviiI30fCO@@ibvi5x)XVjXXmBVugwQVz ztw-=agoe`5ILK3D>+W&KIwerX^Tc%Tb3VyWeigCoYZ5RRG5X~7N??Q= z-L~~?xly%y0p-R|ZBFZOnVXrQNMn0>zmc5Z|9}{Q0XuRXW{NZZijNJ1bhS(l5^m~@ z_`WFM?)~)eQ>NfBW85|8)-jVB)+uuALdKk@?z=Jh6Netyc-v&w<71a& z*3$!-lv8_>T+u!)u*+4-trpcRrx(R$Ak!OL^`y%m>@)N(iC&r23O}-s0iL)!2ZNs? 
z>PFsViy1F+NwjfJ;kS6?gQF~75rYEb!dme!;~>pN30>xd(0gOt;V7M=w{hxe#6&(v zyA!h_#NlTptj2XT7cr{=v7fjsFKb&3Iy(&rW%YE)uoUy|%L5FqmANNZ>VpNKpviWll+(Y#oMuAqDt*lR2BcoQSBEbgbIkoJjQd}cY=?>u%- zoE+`IAwAOZ@3$Gh@hrXV;g~Fe3ePI!3LEzUcu1d~()yl{WTns^HZJCs;Hxk;o4}bQ z@}E;QrSh!gPn~9Aq8Zp$CcJO)XR&*XB5&S5vP*2-e<@ZQGBlU#^%BB??5rAJV898^ zFNn@6I^*|1&RPW)Lng3u`bN_s=7QI&@&ouCs zz<%fJwI;1DrIt&pLASSC**6J24(L63in!ZI7xUtei$cx)`7!#n zT4p{@I?OTiA6^~1|KJe_r_~o^L+30H! zRH7X_sI*nDy7&^l-i@Wj!0zoE8oqhiZ#dsbBAdKCL{E@SXdXzli2vZV$3jIV8#J&v zej`v9smT-z#9n0Dj^opi8on-@b(fpJEu|EFjR;Bw%*mm{f^Ia3Myh#*y&4W@oKuHO z^Q9MiP7dav^Lea@4N$uGor2dNhRTRW%K9Aff}1{n>%EsZU6}J6|2)QGFWGfnJ0JZ% zjrS#|O|%OcWgCiBM0)Bwns#m+51qh4lJ^jv~;LlZxN2O0Pj7v@9l4=$g=5hF!WgE_mQ05SSs>wPSwvm|Nv(K+;{q-QQvtSZ= z;pG+s7RiIJ+uP8st_4mNnH_5(QnoyefB}l6X|n)GlcR8KfWIv`4*#3SyV+J@WTQWx zs%)=TwkhuYkn9s7g493Ts$O4)_fRrU2^5V%?qTu@c4fyl&Z&Gaq{@>g6{{{s9zkgu+=HUdJ zVP0vL;8xR~K@YuaWiaRYEz>CR2@7O>VIBRsb$w}$_19F#O=Z;D4`dY2QGb+NIFx9dwCj``Wo8Ex`M)lo*1yA*IRTIT$aW0?HYDR zFfyJkSjt9?DLZTAHDWP$y`7-mfN$W+WxURjBT<uqt@aIbSagvZgOz|>{awKHNf zs;RRKQzkRTNGwR@mM5X)i!3fq<+N1BC61bMVbno5+ifL$O0@~+cXF9r-0Eg$puiVD zB54r1nF{#>v&#eTrEnn*1T~^1jJ*#lFS%^R#gab8NV0rw&{4n!II7cRCmC!gLE0rW z>c%NOPe8W3zER&L=m-@ELA$fhTlabTS27KiM0&sSWFVsU2PNX996^THUvJG`S? zP%Z$(Y>IlmdZ(@`f8`<+Lv4Mn)%0fJC1sd>1+(b2C!7iPH)lDel>yyGaa#x$5-MCP^ z&!{Mg5^J6up0YAI?gS(uR;S%@mR_mmh)(VnEd>!tyerx% zapE=NE|5vnS@+GvAW~HCQHKh#+2F1i7u9lMrEr7k#&@2a!;PvrLQc67Okuh36-h&H z4swxE&PnjTYy3ef_8!e5Lc9uq5Y#+77%o3zaK+GYrwP)|cP(c{<^JgVkZ(miDK=}r zU5Tid&i>nYPhIx1O`K-<04=yf4@UJunZ)uElV8VG+*FUF4DwzxvD-Wo9<@UHh(; zhX|_%U?xM5m8Q@u&Z;o4ug8;OS>U~ug`f{_-~%hbH>(%sRuN!oV@u&pVGkgI?v?Ht z(W~{KpE11XLD`cmOxr_PZPYpqD-kryMAZQ)nx3eb5 z%8+-;h`t`2b*VUVgOx_l2&tg1ZB0`QVj~|2M!Qp&`o#>8AaIza(Vc|TcQkUj{2E%i z*n5J6vzPUn6Xd@x3AMPL;3q$6J?SoeYEwxRuA|0j|9~16KAb?62MJ^(!ln?O7LMp* z(-G9Wo!=o%)0&t!lSD3YrREozDWk+W8@O?jkj)&pR=)WXm_Z%S=w z$uv=;a6)MJ4j}OgpbPn^z`Y+)zFxr+hEh0|WHfjk+0fqS`}zB8)$(7a!C_9TxYF{; zSo{8|ZT#84-gge`M&Ea{SUq7nsskZS9!b=Tahjx6OIUsc3}bGTvYgK?gt1Ji*DE?X zGWHx~Ru(8&75%0|kKeUSJjLi~hr2Rn6~-jT#jvbcg=O*pKcf#?-xglYs5Q849Uwfy zK!H<77n}dqs`{;}p7kZE_IAA<@ z0YdNth@-6fUn!9K>E1hk8}E-}!_v{fEYND$B(+?As0sgEc(N|eBb@Th0PCqJ{EYHC z$UiRZ({-b+Fc;MKy)6EsZ#fv$v3s)NLZnl1oddS9Skb~3Tm$`{9`0b|@7Js)wR9<; zBmA0+&JhRVR}SqGy!6D9UU%_(0xS&+2Kc@w=(SnF(K@_o`y~SM_*OGb`fg-Xd|6p` zbq+z!4`aO9?C^#*T|RU_qej)NZalzRb><}^BJ_5Y*+WHCGPZ?VDElJ&8HB~J%_n2t za?=Rd$f>1I@TXAkElJr!v{#{~mUU?WUoewpWR9a|2amTQ=ib`aWjaF_jpnT0x(aFU z2`89aSV$1g8A=?U{14jRDk`on+SWya1P#Fow*(08F2M;-aCdhnxJyA{!Cewu3U@8s zCAho0`>lVseec74J?*@&r&?>)9HWmu`gfaO0$Q$$+1wO{_Hv!lU15^Uk8+z&*zqE| z_+`=hG4pghag}za;j%Odmn-@7 zqr-Vt=T_s-c_=E!noc{npCdhkfViqk!fzAe0!@;A2{xilP)p(PlYx<-$LmyV-KO%( z@ihCSy5J}C5Sl07a*^(jtd%b+0Kq`f&;`*x`p+i%mpw*2Hns?U!>fV8x#=jBsta4H zW?tM7k8U-iKO`7Dul}w!jbyD5$^!jrrx3_yl!1rga{&b(QCLv8UrGR(;HKE_4RS~A zg`D~74``CJj}!HUf$dDE5P;`bdV}}ti~Z&Hef}ES@V(e31^n|8w7u$hWaZA!Mn-y&-89TU%PJ2hrk%u{^DQKfd@g|4TxfD&w`v`6VS~ zmlZa!c9+*&qykexm%a37ZRG&wTZ~>4R2EZuf`xba(uW^&lqNenL$C40 zS@~vezs^@(b`QrIEzv4hMy83`W^On!*D7)wnDbc47De<`ZB*FN+3Rf-f{dPfzwtQf zncsYuG$bA|k*7;v_#h$9MO;)PM}&mQE?D{R?X$M3Kzpc+kMro^`>lzcKy|Z+)~&d6 z@@^ex7NU!EXrv4ue(U@(Z6-mt?%km=AK~)aZ2{j0fkxedF^z~8IR=1Dw_?Sw=qUp} zD$LE)(JEJ)IodL#ttX`2#QxZ(j>H3kW|dr4AM55Yf@>3LgEI0ZX*mn*!#C2+Ds{e%&LZcZD$$4g;LViAf+^ zQ)JQ9UdG-@K~BLd9Ub>fR0%z_dy0VhS4zpROhHr9mdX`{=Z zZ6O=Xqc?9R(07<6KWqh)FCE4cbZxTCr2aoHwz;!gtd zvJML!C2tisV^C=CQ-s}9=A{N#((bBJE%i(P4Uv>GS_GA=nE31P#)O`fG4+S^nk6M$ zBdd$JrM5`bM$#Q9YwuzYVH{&FS^M|{PR|=TP5;0$^_P|9Dw7Utxktn4Qp$( z{MpduB3z_0Mazx!78z=jT)y<&nSuR9!?y00R&*N`!#MA+IqEjq|C~BLZI2oKLt9?K9ME+;@vGQC 
zfhjs@OP3%7^;*gd*TvzgVaeC#a+O%hSF@B}{8P%ii;HRaf}NChMo@+AM%dt~h*Z`B z+Ju(_j1juC$P+7TTIUNu`1#Fg7r=UAA-jr9x5K-_YDlge1P#|XPA`<6#XlfW&?f{j zeL?RpFa8fC3^v2LV`zdi=?t!a5J-8Htn*!Hn5$eU*Kkk?UTaO_C9HtYefAplnX`S% zw9^{%m}kM-|v|n0Za1Yu6=&87kWl49j#gOR|alq zRK*i~sS9fZcMt(yKmL+tyUOv3%a4=4oYAj?hTgHxFZaBS_jfS6%Hv$Pg$tHus~MUe zRUd5S^4okETD>}Btk3VxcxISC7OV%tkS-0fX`YeKRb_7IUS=!cb2CI&vSdXaKi?nm zOpo*(%|HjW#^UKg9v)6|Z5$%TF_G00UCAKAtuZL1@MKHZu^ z3^Ri_f%2EBvxZ9~x+{YTPjpeYKdbF~li=T`S}8;GF{_`Of|e`MO)QgD!fLaI5VqxX zT{a+K_OFh@?YtaN?GDDLOh+ND&Z)2+!fhP^wR*ndoz_%3=SEX+sMz&NmQ;kSdlWp>a7bM*OB%7XZ80g*`qzhwc*~tCXDa4ok>y zH@kk+DA(9yqWfCyX8|l~vyyXfY{eAcqiNoCm9pinIRPS9W}9yHThV})u^NWH#geVV zz`Hyx4i@CKGzHc0{xQCYMy@2-BUyp}ONH)ZmL1tg9G>{#BB)9$TZLgnTv1Qk_{K3z z@9*=WQN9jbq~6!35yD!Lb1#O;p!>OBdi?k5I3x#af4qc;4|jQs4dL8QIhSU+={gJ< zZvvX`P`ZtO$AJ&ps>xgIKI8$v5pGuxX&0?av z^RK6+x0QSEeBAOPukoegNj*$yk7X;-jGPl%llz{Mq{lgrCaYO>g|mUaPJk$0EDL6! z;ZL@9bGo%X3;I&`}e!}&=2rj4Y4)5U^+!6OmZ#<(A@lS>wZ~e~P(6zL|Css%zw*0MQozR=k3@w?f39^x3kaEOIam zlkm;@eqE)(lv=0y6H!f@9`C@1SV;4e;8yRe-qTEuAQBea-WthP=;Q#7cbT6rUt-oA z)M9`4aJ}s1!B{V~+%NT;)yl6lCJ-)dgHqr%+fz>02M&VAWv!XVMcGJdHi0ox-;Vjl zN51eO7N~d;su|v}U~^``jbmVjALF8tVT(!n=+fqtQEO>-sJYm`;(GVJo_t&X)@oUV zPY58o-poVKWHzq6r?!dQVg9~4-uJ4X1Y8%u?aGGmz3q3ytev_EQ)i%bcRivSPr4s3 zP-IsYrf_?MLph__xS(TMhwnaCOMw1Ca}3Z-CRX9KK$!H%H?GyXYQ!tW#{Jldp1;we z`Y%vNj)I5G2E?#fm`AJ|UcLBi8w09R}viaSN*RwbF4={$CU)jukv7aq}bt{6}C+?rm|TPua`lSTQ0C5b%b zr^tQ6;#^@DJ-~Wr7KT{!y_m;myTlJxZGb#IA19nWA+j3Og{tgQdX~6_LdsM#)~ATA z1AX+}&^3o00m)>C>4%R?mX7y1lF5f0LM2>Tpoed%?56wKq}gUr0QwmB#qqvtZ}2=z zR*GP|?b>3N_V|AKnKEu|`(@_;vc&Djz9F-_nRh zuree7Aw#+6Ad)(P#5ZpPCTXZ&@8v{AP%%qcJ{X8DxBPuVS72qHKFl*9mXv5jST_9M z*P$K_=wL;vF|_(ViG0pGUV@)0LOx&qh4^%rqNg?-Xw|5*7iO)Yg%YKtS3yAm37O5y z&yuPB`@`w!Ec&Cc4#7~LN9`$=yU&^>DrU!U@qp}S%o)a}FCkh#Zl1Dy8^?*Ayo&a6 zW8R-lKCQpkV&d!JNZ2bb&w)?3@&?RO#3yU!Vl7Hj|BV6MMB$5f2_LrL;A4dex{}R3 z3^+xYr~-iC7(oj#t~_GWR-=7{@vMzsUCy4!FniEz=74ldFzHLP&l)Y`L~nwCBtQL1 z_*)C5yF*9t^=X&0aH)5i2G_u_0UpGHM(2TX;OG;&yo1hggi2wsxj4(%^hK10+kW1N zXC1#^OP~jVldtCd(ZQ#;#G1G686k!i7NF<^%nv9sffJ;!Vg?I2GbM^trCzjh-~Zhs zNO7mD{g{o(OO+3e2~fS^Rvxhwp8y+jmK_X&^2k>O8?rXvFV+2)6ulOzkwMTDnxg9- zd&No3_6s=}EnV>xw5WX{)Fri-{nw8Vcki-YF0ERb8X+RPUykrj`BIpTp)vhgY#G~Y zG{@MARl;uZZ_u&~|v60K{tK zS3GS11zx-$&%)>}-O(Ygi`xvy!_60X?D43LWUWFMyWK(?F=SPQ%d-2e@oA(;lF9f{ ztOd`B4T7 zRb12OC@WQ~IKsYf=wvJwL6Pz{Zp8Sg3H}r5mJuXF@fvb}lgc(ML2yoXCGErES*mT_ zMyxjuQ8dTme~Yr($VI zh<<+cf&258FZvi2jY(omv-1!VAh5-Yl*gIXTgV1AtAGK2eOyfwUcRb5nlE9Pr;6mwB>ZQt0} z`EZuLB9mRL3==yFz|`b-F^07)=k=5x%Z0CJ$;Wy|7<&&fRi*}ySo5TNlDZDu47ho4bnJNFy78RvV;v8W}T zU#3+<6_SF~pp>Hp=Nzh5K_SUwotKAOYjpOMFelj(U=aW-x%M>;2rS4f+j2c+?i7Jc zA3(TbeC2HS6YLESKUGGt!7mw&pVKuO7<;UFSFvC8JnQniq8%yF0yy}9Z&uMfVcMFP z$z-h=f$`fWw7h!*W%@!Qtn(X*A}8;k+(TX!SyXM!d?D0*O5L2aXFpdu#$)T(>d7N_ zbo0IxeOdk(EkLP?u?6zWV?xn5X%&~MpUI6_#>J&W!{UwL*A3ED#lz2wpxcrWwY4v1 zU*Sqydkp*jHiwmOG1Lj43E5=Ij$5ex7D%e=V;x1J5ju@;Wt2=$8UDR1Y?#e^iAqKn z^OJ)Y)R;h}fL$lG@`AM|+(-1!lR=n74>Tt>HWx%%#px8@~9i4!<3l`P+mL zt#CME;tyKC50)Mi5z046{A1qPZqtkXwM-!2FN^K%b7<>5uQ3mF!f)II@~>`9rCT~s z#^QQZxedo#4as4@OKKJ#T*vWZG2|UTlQFmAdu2gdVmWI=Jlhr&$J!0FUoFEYsy2T&~h4lnj~K){GiK@*~~0fGo8gldcYEkCtJ? 
z$O}lc>4J>ijt=$xeW0=Hj7R4Fd%>og;EbgBLxR<+L0dma$WRDZNku;!J2DE+KSTJ zYEuIxz@g7`GBV*B`o{h7tczF6krNdzEj1&R6HyIKK)O}UU%@>^S^~j zap%4N3v<^0|D^T>0{FyeSYXf*njnDsajg}r_iny5zOq9{s|>{isX1RusSJoAc%p;Tf-{o^U@P%$8l=d-HBHRLbq# z6vnrHP5D^V%}V|VFE`2nW_1S!Mr!@HEtG-V?O~73pF=c|)2g?j5+%RFF@tmdc1Wbl z6O}CPB=Q#or(k+%Oe)e32ArkZuXYCd?efcis_q<~^fSDcnp%k{;JR0{5Zv+AwV)9P@8@Z0p=PzRh8ch4orO+A zviR>0&NFj#d1>0nTwTLJ2CG|QRgV0E)m$CE&5^wmg9j5QX2yqcv1i^!N-0bjLq|5X z40U$lSwbwI1n$LST5Un?f9wcgAq5JdB=b50% zjiLGrqdF1b`8ob2Yc`f0Rtyj-cx^f|A+915^>Ezu5f1ws+Ubuo#ns{5jCt+dXjhMG z+~p~3uVYJIe7|kG;wLZdXRrsDn=tak!(h7@BcRaQz2Ued8GaR0QlI~+UP5BZ>8uN* z;tW}zz%T54_(^7t8`3G;!SVrJ+NUxSxiP4m6mM&SO-Y5_F46vCV|-xcK)4-OjJ-$r zC90`_0w-U`@?I&L2CY;aGEQBCut$I*f{tsg>%pPHcFG z|6GI&c#8kD;MsauR9N?Uwdwj?pJNw^r9CCMiIm!pW3JnP^m41h8c8_S4k<}qLmY}4 zTjC(<^pp%=h-sBA{{DPh_lY3pXi0jcZhVMVm2~>Du>z$-oMwytqlL0jUTEE*%jA{m zpdof>rfld(kyWBEY<)dnZQlcYi0qwm_4}&MueU7jy_7TTH}ZQntn@y|TW->;(3V-a z)h(A_Kqv9Ch(RnJ`R&`%X2C9eq^DY z5LLh=3jB#4o*^tiuyiNBuAgW44}co((X=tnaZ%xfvbI)19;`pnC<%W~0%QMPLzec4 z$CP85`$fO6MXcyv#9^W-%}gI?*$)mSn`jS!~Hp|E~chz?34}7)0(cG z3v=wZ4n#<32yqXvFbdHfFW$UxgNr^mZX-~~oJ=b(58c%=WqTj~^hXD znUJr1ozjni8vi6g(l(v5+&@L2saMAG#oU6D+SRv(5Q18GmPXGDKdWNFkKecoZ`>wp zd%OCumxo}%y&p(PW{v(?PKTrzy-!~|IFEV}TQqntW!+Y~?2;HKP zQ%vKhaV17Ozh6@-GVaNQIpS_|Y%5zuB4CXadO?}U;2agzO9}%bZp;z+9&|f-6AUah z!QtAtm$0m2?ETEsgH#LV5B3jHF_mL97Ep1oOakHP&dR6>0XlySU+rVmV(Oz(9-JO| zhJiI}ZHi*pQWqLkEOFQnCtr%)S+VY9>kS>*SK1->I7KPjf8?mzX<@-LE3=oq?co?^4%ou~$DHe2+45cn#pr8mmTm3&cdgh~B?qROHH0k+7*bTtd!7S@$i~`}sh>cen6^6;{Mf(r!W4eu4<0qd^ zUC?;_<(G9^xzu1zhGyl$s3KvP^q#fNhC`Ttft#bl4~E1;?_=<=DH6_pOJ~KI7kDKh z2^;%GhGxR(=`v|uwjg!-4S}Xt?O!ewu^##gm?8|j`y$m7^0zenW4f-iC9yL#^Iy!W zJTIw8z3Ufy5qya)Z@dbq9+|K4-z9%(2?>f`HZIU;pM;ahSJ5`@-?5Si8xrVbJO~23 zLpL2DG#|h$dVLe;w}$@s%ix$S((nKD651CvZ8v7W%Mhs5%k5O+ zI69GSD^Pgtyzg8{$#Iq#`$sib^36|2q~#k3(kg(HF(B~f17{Ve`W3+V@)hph?kDTF z_3koe&+zPobF*-d|gTsX}A)%g3xsxd*hhet*sS{p&nXK+vi50V=QBdIko^*13Q zj#sOQ$2}y%TcX`1fN^<(czJ{|)yId7&z0HWV3781Nyj$lja>LJST z`+IxocIjnaM~u94?uj$fh?N`)In!r=wAf556G!1&^V7-vk@06@abkCbiB*yxREZ~s z9|pvFC=@Ri6z7%ZRNJsdsVS$v1537P{IBz4P-;^YAzAM?#@L{@s0m5L<#%e3dlC3` z#sa}{#}|`sIn?zBM;kl?$;25CyWZR+%voCr?)Ek`AVk;)jLk9LDYTe12pq`q%B=B< zsv#VEY@+{CMwK{weva>aX-H*u>z_DF&(GQRY%Q%GJJl5F9h{~`?4qcmKG6!cG=kYG zJ)guvZ0PK(mLYhtYYw|>@VlobX@AQ8rKNE7C`lsc;0OabtQM2sp`PX^HAifJP^iXj zdoIL}RmwvrwH4!C#tk>ewW04LyLoAKGZXDbqCbdq3hB6vMq*Bex4J3q@X?~T*WMYh z#IVl_ZmZVrOMWFJBw)2O5(j6IrY;gbJ!`fHO`8yMoon#l*FcJusm(F#E_b{cVzESu z9N92(Q@kVM6H_UZ!Z|H}s1LHtSuTi{SycPyX6nJJlguiOvBjv;M}Qi!geL>sGjC1U zAVnLRWx+wJdD1wxe`vHNBx-F!iTpL!Lw_Cxi4?#O7Q1cIU^Y)(!;w+%^Rm3&Y!p2i zReEfuy`pf)2F`}sid!+g=`ZIAr_*x~=fh&8$|;6#`dqRfEyyZ1#9Y+~a(!O5w`VU) z4hTvuuDrgPBuH(LUg^Bto{pq8UWz7rwsX_8dIrRNPjx0m%= znQ!E7>)t(tVu&oL8VuCRbH+{SysqUPT=u@(kAS2dAN63{Pb3~93Mw(bJp~cz2=#jd z+<(Ws7f7|;7?UA-xEX{G*R`pO*jJLYIVC2V)osv-3FTj}zaHRL6DOyy^qcjk`KtSkA-K zex2>{4D;#qT?$U&sy=UZhF&f73UtznMj;X=Y+F?M=_=qiUzJ5g4#@$l4VOcfh@~&4 zT9)7Drnug+{aO+wg`t2E z`>O&e9#C$mLP-DPRGzn@QB4lLmG8Kex`p_L95r!dWxpB<6-Qa9Nn}lFzVg#HmHx{E zE55nzCuDsmOR8@ZqJ8m|yEKn4c~(_NxAw?sQU4kJf1IdG}T?C+=>{rlhb6 zFVih?8HxV}x-fZ+qNZ(e_H}518sK`}?xqfmWJVVwi>TlXCbW$OA$$c_)$-AMmSsV& z226YmZdTSG(4h6EPm?5a`9WT3hzlf}hLp$eOMBxq%6KE0?8Cj2e$?mM(kTv-B{Dw3l2wTqMas`GO zlU3QY=_0(5LlwGiW*Gjz148&0XNlF zp@YE|DKWm$q|{^ACfWw{p1&Atlp#wOuYobqMp=b;I;mYgKFflhVRoNuNXG~5x{-&| zPhVEAnKDf?B+(ii9V<|~H(Jo|W{o6UH5e~UR=%SWW(AF(1*#HHU*gR$S$t^fbB*&H z1g*Dk_0+gAwT+pKU={3nXSI9?b@H zDGghjGC8iZXB%>`|B4}-cPUrMMU!cCEGOL&F#WZ4wUf^aTkg3Ydc}11E-mCwJH@v5 z@(&=i7|W}Vf982xyY-j1X-`}Dh}-kxWX@mj+YV0p!w%Piy?lMi>?`a(;GI$)sr`ki+Z{R6|A@E7}7L6>-k?7 
zPo3SK$T7rfQ`1?p@$bsCO|dn@XC{=aK05{?gVxLi&n_a$#Apht7v7T1)1E%naahnC zgm~$e(CF{~b@kJX{~PGhh^*f;Ok$Y7{sUd!SFWj{3Cumy#c1--$p7#JpN?+0>db77 z;;X5OT&MYcm5KYzPbc^rsX?^6*YL)eHw!UNG357tW)4<(dPYq>(nHF zfWc_pbflZ%BTnM!4T}MSK{Iv0GwEgSl=H=InU3L)q?GbG92=`{`+Krr!Oc>VjHv}8x~I(v`ezM8J1*6F1#jU`zLr04}npQ z@U1}g3laA?Cb?Fo z&TfxG&Rp<05H0;=VA`07dgiSqs_s8GgH>+x>KQ+q3|jB(VFXre``N$hT`- zX<@s+%D~mgBPpmuOrFE`V(V<)ZB*+-3}S>oOc=1Eiz?Pew{^u(z6hO_M4k)x^d^T> zT})u=YBognM-H?JR2c-L3jbOGHO?LLxc>~S(&~HW&`whY9_IkNsFss9qLgckQZ*qi zh@&?Jf7U?H#R1F0`OA`$fuVVIpU+85^O2G8L5<)ebZ%0E!=7YB1PNFYVc&nC+i*48 zl~R*M40Mxlo~C_f{kA3vu@BJ2X*gVi_uohv#{uWetku7M#~@xZm?EJpDn{<=zwG?L zKbnzW;Xz)|J=~DGwtLxg&~K|Z=sVRQSg6&Kno@$-F#z@&2v0iRn)%tTpY)W17;@YI zT%+Ao?-yG`w}d`fiBy{6!MbnR`2K#Yp_xv8rm8amrl}7y)lin$xrkd^q|H;!`5kRH zKGA*(jLZhcyy1!jD~bgJ`Wi;fZTOeXiKCoqr*Z$h*tliOj0w zvqFDdEet-cig+b}y1!Cu-?1$nSqFaG0@cN5TbmprOX_eC8=qE8yT|Tvanv3~hlpoK zCD&~=Ha=dC`;rnDIGsi#->usjVBD)bu*V=;_GOTAm?B-&>i87vpLK^>CzBvRQG47B ztd5>U=pT2dE>9hx>fd9XlEuWWk+C9C%GiAY&j^Zv2j=?{mQFXngQwF(2A`w?7X>NI z_-KuCs_77vm96fD+ORAIjJ(=x%vbe?-d$SV_E(HWKq=U?3hI_Wc z(W~}%zma%reNwY`yCxby*_MW@BlhJ&hh1ZAE!rb8(0@W9tA82>ce&*f5aFDw4W-a9 z?`#i`W}%-%&mPB?k=!giWP|iI7yXdbe3p)`=2`7UMW}k? z+yb9QCN=uV{IRFMx8-Gjt+HAn(u5wqK4F`AXN;&MRf)YMDFI%XREg$%gEJkd|7nab z{~B$_qgpwId%C+m<;n3msYn0! zrHXgXf5}WmpuHHEbyfz<1s_!rk@BGWRS>{FUS!7)776` z!FPm4x37uxOCNz~sdKX&9=?~C18?kREfnrN1%qMzate>`>x^WkqRFe`{=DSX$qV?ou*gj|WNS%^3Eb=3~xcq=Zw95}XXmD})h z&8I5kH%PK-44L}_a(6lR#YF!4CHW09kjpRdjpJ-e#num<^R9Kn|Is>MGU6Qa|I@Gk z==hkUNsA_cWX%CZcX#$Cuin`kg2A!Bw_vHwZ}?qe-J`JPP#wIKmiopV|NHjyDX9KO z0e3}hPW(>^hhCZgt5)N6%W5J((9u50n%!`3XrK2tsKRr{uwh|sllT8rT7)b2pN~p@ z6Ji*KEq#^K-0PVz_!k`8F4?=(^hziU7ryc^CQB%Xl<5B$0gw9leq$UFwU;9{=mNEJ11GGoO;jX*yl@h8KE#~dbPc1B3wMhsqz7Zu z;BcrOHu={B&kc5^+(2+uvfU$9-WTKg2)w>>D|~$+M*It{i@{R!rCA;5IdQg|MchwL z35Tn4@8iyx`m2@XOandh%%7N*_;|FA`UM@V2MIrQVVYOJ)_+gT+w>=OqH6Gm(ZFo@ zgu74PQQ=2+EgPI0E?V@p=v^Z`9s8X?jBz$bpvcx8qVGuuBkRXTdgvL#svlwpZwE8WzWS> z!C4e9Ocv$V@u+uSwSkaFSdNqy$Hac}`bp8807MT=)Z-o-BV3hl_FiA$?kn7l9``1- zHM1E|+mCb~7P0Ny+LenaK30U*8<(M$2-=nV^1;QPAytw5^>12OvWDgk_y!D z*14HWO>3I_PaM{R_ds>vD|GH`-Q6Dbz4ZdMfvnip>O9qM>IS<<1$2lk@yfLZv;JCX zw=5MeyA;IE|ISdaIjD-66&!2V^dcjiZTw2+J&(IGJd^8uZp&CXNQ+eKUHZ^ZCa4Kv zv)3Z*QZDR_I9&JRg2QC=pm4T?DIi8SvnTT1=!re|fXpm+?P#JiSKyo{H7Sj>a93HZ zcbxUW5~fyOOiTN@REPHSA$W5g@ZkAHdkizc+t1C+<{Hgn;Z~?^t>WRYt(Ke2eHI>e zZ&ISd*N5I^WK`Dr@sXjg3^ycLT@SWIuQZ5p;tmhCO#%uRR>gB*Q9`bT-Y`K90aYI{ zp`-fC>)d&A>|i3@rv2A({GTsPR@Of!OZgmGOG_;hIem&Fal9^WRp<6=@HA|q;HvvP z0Wg*=c{fp0Y$Fzg|t1xONV4RQtBBf6R2)O=z-zDVcn$fI_Pqi|ZNe|k$;^F%& z5{ylzH1*Hk;n#nskw*#hY_9q`BnUsH*{SU(#6aJ@;K}qENpo3bm(Qpav$JI^oJ7&9 z`td`qiU_FiYRR9N$5pyJ5sRzN&>0Ee;1oi_ec>_$eQ!XkV!u%n;g0d}z{4#~d~vNN z+j~M9sDyv z&$hD{A=9yBR{2gDsIBNo9g>o;n~h8ImpzoAO7d3jWq%!1l$hfte_8xjxjhDy#&0i= z)6zuBR(N~(@QU_4cC@Oi-9d-cYrn>P_jC@KkfH$XtA6o;!FuIP3{Xq#JIYaCN`tYx z0TH*}p4htp_Ihhi?9IVN6$wiWcs#wQ-=xW=>Gol18OOVah2h*ZrvUAo6e+n zxh?z|Y$}b74>%{w<=Y+(R^75@cm_^MDyis7yuc7PqE<9aFFKFjx!te?>4wGv)=@2e)RnrLUivumt?o0 zMN3cV|G9E}Fkoph>x2{^o$R_t(Cv>a*3CZQeX%X<9sCTIJG%T9;s~w#P(z<~O!QAm zr=0GE+ZSAVLvxh1Nbu*4#(Sp3_!OGt(7rjwtt4yuEEhC?VfO=xB7|W@U zI0SqK`~7o(hC&3yoxs-tRbswM#Pjv8V?)=fxE|aLtHb1=^x9t+u#$#y{$SD2oUrLq zW#!T|0OSqpmwb-l=IU5cLD-xZE+2GLZ}>Y##XmyASk!(%s*IK?ZaP3?g~=c}XK3O_byorO^6-haJkz`5}gbf@&Qsod3F zS+$2vd+WUnTAz^$gCZ#58}$thw_s3h^dq}Uu{YQG$OYa`JPX&CnFP_t3s(uLjy z8R%KZv;i+R$d9ZnOu0wr1V?w&v5_!4sIa=$2saVG^}cbf`Fw#ZD-(T`ffhUieHMH2 zt{|W=^)Ro=FRZ`?!iH0Q2u-Bu^QVzYYYXb22s>Myp4jRCp2d%xh=z>>RV)o>ez+iw4i0F75U6=QtsBR7Di%MVj@2+Gu(qf1XE3;PLCLWSi1)twj zBK 
zJ_ZM+S{iQCaYI?U`JSnPHj7AG?Fm3JCV{uWuoVuJM1?OQrh-!TyEjL#t7MxSs)nBo@#1nSBR`yRt*Xr`gS zRYMG4GY(Lk(kdJ;=b)v)!yo(74>wXmRTGKx1lPvMZrVu{-j!}`y^~_j;-`|TalU;4 zzQ33Jzy&YtgNz$7ZwYzrg1CMbMp1ZVh0g6p>!)@k?S-ol{8;p>)AdgLmBz*=r=)s5a98)MGAd%{HC@Y%&noM z$_s5xeWcnjEzQS5qKT^c>M~`KO2vVyY$uarR=WAs@)3Pp$P?egv2eSmOA(?&U2w@F zU59TKk&P_w3b?lmt6?+YFYL>8AaYxi_CU4_Lg8ZbPlcg|i`hVw4v!q9^HUY4p#&Ih zO$GbN(EaR~E*E;YCMzDPnFxV~pxyI1=w4GW4e8BRA0wgmD@sH4>uo%SHrKC<{aG{= zP9%c@W$)KCWPza@H!Vaw>Jr)+yBo)Emc>(!0It1S5_&6!4uREv%=e={ejV4udHT&1 zF5%I5F@_vFrUMI;M-wbhpWrZR!eYu{=0f#>LU&AfU8f{6q_sk2t-YQ-Dik=$eVf1^ zUlN#^tW~3fw{VD1+V4qkx;WoRJd(NpAgp;JMJg#tnSBLtPWEaOW{*WPNKzwU`ZC-Q znUI@z62&?k>i+Uy0jMUt2+Mh`h94D+Goz7n2f-#*+3co`HIS_pB51VKr>ohPx=vEH zuMP>UNmFe)G`i19Va1zM%_}=Gv?b3)l5l{v@&sa&tpM`d|H6i(E3?CPmr&)zr|#BM$2%^!D`oi%N&^ipiL685bQW>4gz%9dGO<_Nu z-%u-NI6D}w>68s> zo!Bbc{5;S?-qRB#g2#3Kc(Baz7~Hc9ox|mAXE*K#d^RyULdHg=Ay~N0CTcxQTEF*T zZkXd~I-J8>sUHk!jM`Ogi=Hk8NYrmyXs7UKc(9ar23ldmr*2JMe8# zxQhaqvmQVCxY6AerUC#2!-}-`{BH-L%>BLZK2-$V(FN>XIsJ;5_dgz79xtwIVMcQ9 zwz}Qu_Vky4!g^%iwKYbdK+*(e259L~XRIFyB1(BuqMYk#Sy`{Ial|RtJM3PKZDlpq zhgpnPz4F$Sj;wP!zO86-E{}{BLYxj}mZkN4{FeP3f$!0IfKp{%X83N-?ROu6)al(5 ztY_iP-=YSkO1W<*(7)0?A@90bx-@0(udKExoYxg=cB(+is#-7alZ=AD&Lwzt2a~-U ze26ihL3V{6L5ZxO#A3(ZJift=UuyOLHloqoNhYi9p_-U~Z)kdDv@;FIm2S8%p@~kKA^>B7h1%}j29h+#QmbHjN&gbA74e$c?sqj=^Qa+^wrsHrZVIeWAO_7h1 z5wf9#EsMm+b_X&sq0gSq#tZWXBml@O273&}29V5N0*5lvat7x;vK@x`X*g}KwNlyR z(mY8R(yX>lD~e_t&zn%bE(J+uY{?saNF^&RgLp}CTJrRC_6dEBtS%C+-p5X>9*Y_X zp~RDQi6g@#7I-|=m4w#!IV2n&(a$lebSdq#C@ZiC-Z5GFs3W_Da8{~Le<*HNi8gbq ze?nNjR{3oLgsM@rODr1BW+wq1D?eo=GHgqE_FPz;Ln5(W8KP;O9UXHJ$&%vdi;RUw ztf{Dn`jgn1$Nz%g`mBcEe|zNgdmc`A#6SmGDlbpJFl z!QqGLnq(lXgpBy!w^H{*PX)B)*7rCRvSX&Zy^E`bxoL1Xvtjz2Dz=7#>1&j2+iOBo zChl7Wdjr}@9 zvhrR|L<0!r{8v_7Ec9AZNMTd8@{~9B-O~<f3hB-8{Q%?V2XT<`f!co*V9{?bIi(-WXxVzPhfPAHITIBQN) z@e%LJj)WdFE5{w5pK#4x9%k+LKhaAJ#J+C}Dt`^vK9f;PmKC!FBnWZE5UUvV=|Ejg z@aL8DMy@IvIhc~QLB4;r9oKKTW&j!Xn++c^{4Qyj^_L4NKCKN^=PC4!L+Gqi_k5JM zY+p`VVCL-qO*v;T@4rcXN80m-DkQcksA?6PCcSJpiOD5AKDvUliqIxT7^y-^@igWK zL~chMmRC^TQyb@`LT#hS`2Ew(C)uYoz)wqxnX>%E{XXiYgPifYN4%%~rD7qhCAzt> zLF3_1*zGlT#eV3VjNd!EsoN^(rc1=VztGY0=jF;X@7!|5+tsm?>(iuma9`h~5|*|s z+%oge_uhOU>HH-RnR4c|6u{~0zC~qtx%|O>7ipq8N`3rCoUh3+0AzMNJ{Dk#n+jMC zw*zS?DUY>Gqx{*=0BG3e(HwW?+=ibSN;_=6@&h=##DRrSvRu#uF{%lwt@tqtMvY=7 z7$Gtb8ah+BH8i&A80fo{!oS179+r*$G9apr>Q%GGl#&mI4$FERKb5NM}YX2CvSbDQTJP3W+t$gk8=F-=| zNi$-)#S$@wll{x=?blS5>06GR_Aboh+!$OP4rmR9S)%$S%c%`G%tR6Ju4SE@vH;x~ z=ZOj8_JH&7T@C-CD))^eW?yz4n$#%-yU^AbTA-S3pG-D0(8UM|theoK$3lFF?)#DKspCDEFtr?;vhWatXj)6diuidY882#&bl1C%zQ;vkU}Qh+Kg+EZWyc= z*8a>vd<~0_DQq_z!uq}$isugYsaEtg<-nxY24EK$Ay9Go10!_?!tRO!UJ5~iwzNnn9^8t%ySuwP6nEDacP(Drf){spibHX00_=SM z-uvdM{^2tdc6hE2Gtt= z?LW_eCXK}cj8p7T96R^=gn`vW$3eKI!>ivb-s24r_$)LsUG4B`!eQ!Gr3oxwbEZX@ zZw~`+b_?7=HSNs+!x9xj@9wK?q8`72uruX=b?^OSU+lxtX<-@)Kj~p-pWO%$0m#aT zUtl-e4{6H`I(%1`1d#%HjYY=!UXyUe&snYCB*F^~n!DGss;9?qWnouC3>2_`^thmV zpUZwB@$JnAVv>5y=KvJO0OQBL!{iUtSp2UTaYSRxmba?!$LKgXCKHZlHDoO zssAvR^PLT?hG2}YTsV)bj@Yn4&F@PD{h%PBeTf9> zseS$O{=u$LWt$lXFi9^i9jfv)T7pW^&5S7Y&lcHZCbMbA<5)@zL)|?`k~f$}g3$9> zKyC3(D0aYC4XAHiZ@_nN`#2%gb;L9JwV8Xw1&8IM*HO^hYWMHNbJkhyG(i~b4Wf5n zI9(PRlyY96nn|tC#zf^0QSa3P3fbe?bgUkU|B8MZXn^5F*Li6nH14{u4_?1Mrc-nl zFY=@U09T@l;V1t%Mo*@6`!mLQMYKu4@zhM0rqW3vul}p#YO8O6DP?(>B|YbpQZNU1 zsYMSoZOk9UC>u{Lb=3jyq5wNk;(!?*xleywTV;eTx}#3f=pM1BpxD?BH&6HY`vtZ; z66`+!jMrh%jV*J-|13+4yByCA?yu(01E)-6RnyL|q)hY89Z4&>o?O;iEbzmq25zeleRxwQf0Rdc} zTida3x->WIhyh-5ry{TW_YEwSpWHT&f@tn*JPN)H4VBD^49lRB9IaHE{!#)D?7J&{ zx&Bke5=Y^7bQ$6GJ!y6yGj`kCRMu`_7UAbsP0^GUrNk{x$y7XU-a7gfy+>U_AwSN) 
z>n%BF^h*4J&&BIxi=Q`y#lFlDEJvD(U9P%t9q7FS1&Y;&xux$Hebmr??G`6wA~VhP zL0zk`6#Htvo-i!_&x-S;ixI?B$o?)Fa2iOcJ?Q)G7vHiZAJ|J94Ue(AMY+E^>dh_g z7;iz$n8>cnPCLf-qDRBY2|ZVcGRbR#!MN)T7E+SWBU*e5lAS2}8vS)-*$%MZzlhM#^TD2AwJRlxjTwI;*bSN|nv*JZXFtK8~8kzBv*j z{k}8Cxj+k^$3W_H#b~sO4VFCq)m+XkGS%8>hhj~lc3@Yvv*(>wkaZLV#`?XhLuqYB z3i|Qw%e^$?>H6m1A)5N<8G9av@>&h&!td%v0ovg5lC#5Y=5r|64S9G>L|IOyGOjps zdcTf(cx6d0InTO(Fz^((vZc0ZzH`KsY(HUu8D8wD@(z{(5}59YzUjsd zy=YnwruKPRHQ6HH9Cm7+-BOR8EhiM5>|$l76mo9W(RQ)>xKCfNpUVWHF8ZHK&(ap@1HD&+7jFc$~fCAo- zF_#1pqJqf1cG0=+!QqPVS)61h_y#Fr+z0Ko&mvh0J@j4hWil6>CVhzvzaTtZH00Q6 zNgyu#M>YTVl#bUK3Lv?l66XIvo!eWXIzwPzWVm`vni*W_LOzj1c&JcT;Z@eU9MOhz&nndRC*r?M!Q0C7YzQi<$_zHCt%(<+Q4<}vM#aaE!$Zqr9FLy* z3@mOU2>r46M=7(fQa7DTQ71IOZRz z)^dcu?oX3<(Cbmi838cInjMKa1Hd+9UtIsPwOv*FypAwZ?4EhloklWS6?@M0-`8d!ht`%KbgAArna@O3G z$chG0yfvrF8hw2`Fn6qJSKh|>E3%-^W`QZ_$MHdoq*}sFXX&@#6!53C^W2vx6S3PI zY0iMx;xgNdtm1nPV+t2z$*Q>QZ73lQp#s;{A+6rBW{J_90zgtL$V+ti9!T4IuR1Tt zb+1Z~wv|Y$UY$^`IjD^l*?SrBPt?!mhnV!&I|}mpt}m^tY}8B=&P6F4{4FwrBLyBu zaqhci&P+lOsVHS72bosY_I&Bit(iqled5H}Zq#mG_^#M;t3wi{VF&n|9`K4=!k(Js zoUv;lAqy`0g-EE0g~^78kIIeI)mbUE!n{bi;Gr^E7%@aj0`ALCqPSmhl9S4)^It}j zMlgV3SJt?TVR;j;?_#lBd@?h6S3*y}#cvFI<^DBNsgHA)RJRkTHqx28Y*YK`FC!OA zN=Ha(=i4?e{2s8hEKD9)4i7YJ{0}Z@t_q+n>4_ zi#%&*#sii*uMv&99!!z&O(@smp6)YP+!$;UIgE+4q!I{FUQ$M})XbkzYcA3bbar5t zuop%IzrDkm?JS!sEnQ==vfA3J5M~A|Fw*BGS{Sb@-5VE#WGsgF|59*H{66I>{(9H) zZus`&DhYLWgb+1Ut98~xj zN)szvCr!H5@Fj(vP?gg}A;P9FB&4TvyYD?Pf4-Rr2G8e1-g7Lq*3tDqMC)1(a-A3dqSTxt~s8FH7@ z$%KZ6V-q>*R@cMUlw8Fv*(vWI>i*t+!*GR3nf8%b_zXACw^P5VsYT5gnW&paN@f{% zVVLBD7-Cv_Sb%iF-ozc~o=mO2LI zSu@&K%;cPHri=5HS`hB&kA`s&&o(aaFtp!}#nC(@^h($wX5uLx+`Rk(?lwyj9flp}NV2{jrkDCH`k0;EX%PyR4aA8TB_-Da_ z?{M9lUIxe0qcu^8sN+!?fiEb{P~q#~(B+(l_%n&V+Uc^?zU4g&Yp6q)hLy>htV3s@ zmbi6TxEpBZjFir#W}?y&bvwqZIZ*6$X>2$H|KjI&@AZpcx9$oALcN=O4j@ z?XZ5AZ=N1?iiKas$&QTH-T=B zZY&DBkHbirY3wA8N1LNeQN{r>7js0i8TH*SGu^?xo-tbKS-Z0seSuWQ@>uS zkoyzR6SZJ~v_os)o*-JAd-yM?+zPG97(%rF!%Kl6&CRp}rK&Jfqnd>ba^Irg?5gz; zZXp3GsI}~CA7Yo!lHnwdYl<9pUr{XFT3!8^Zb?*y9rJlxyd@Hf#rmb+sK?u@M#`~X z0DGrB^1AE1u-oE3ZLmAqz6DJN6WX`+T^*Bparu1~b5v)Rl@(*a16wryovq?0=tBR$ z$o)0(6mg*_pW)_TG1x-CuS?#*)lSMmOAGpxVgk%1MwV%3@wW<95Xa_y3aoW)lCekK3fr1E1N3f5~ga z#y%E3@CkQ#>_zdY)w8!Oc3F~!2O?n?Vb1`-mXiLny!~jkmZVa|KZl>Q-Titf!#}JC zwa?@G7$Cd-sQC8ca4WoA^W*8~(FEHT%L&WxMtLqZ;$cu#7YF1BU7MgVR%;wocyHko zs~4=_>6-zV>s#mY^irrGD&fP0+giyKigGs9v#)mRDeqn%hfZ@Ga=ic0XtV<%(_$%!%2fNasOy;G7 z)+q6mI<^nsPH8vSF7o3zbQPdz;_@`YQr{yP=23uz&hgzu)Fy)%%CgEheW9PC@`bxk zPZ={anLS&$DT528$GPzK)DuNTlV{WFxJS@#e~?S!^_kGXaKDd_)Qt?o(Mh9iA#Bi7BENy>_Ia=OmcI3LN%?VOlzohnYkJ^^yiXmh z>(@?59s)*DZ-F>xQ=^zPB@v@K$N?rzV6bvSPjJs8LMkZSwpaQa)#ss!g=I>_AvexG zqJle%&#f3UlYe&L?*x;YQ{vDxA4R3D*I`}o8`bAl|M;M&{x9lFhzsfPj({Id?ulMvGr$z)sY zi+1Nl)F)xfO_`e-2sHdNj3tq%TpBJgypnOd=};QcW>g&{tc}B|3L&#sV=?e zBqxMvlM~OyZG7 zFFL@)r$oFTQaL&IKuXbjS40*|LfZRY0#mZ@*cZpNV|hz8Q*c(zdEV{PFyo0ebJOq5 zFsxjdf70XJ9QS1#M9K8Sm};fW)V2+A@RW*c1wu9YcL1Hyi(cB!olM_uTT&J=H7#r* zJCM=j)RNclnHGBWQybO78H^XCwewIN4f?4IBnS-JO2LSx`Vcg+A z`FX6RI_lvf_@|WnXf30mYGJz(@TddO(jSjY1(R~=9rVML31@Y99C?PR*+|q4-}l}s zqoUH-XwjPGSZ+tCT798GtBbsjs&R$r?hb2~k&KMhvE;XeMP)Plaw#!~-3TIOgwVUf zT7wRmiT@Zq8(?pK!S8vLL7?UK1acDvICbIv1(NR({Y#qa?)H#DF!cp3*JbVT{_az( zxi)2;3pqTivdGCibk86j+2LE(0~?8NWy-tdWjiINd_SeXq}q zh>EpXB)DM|3)n;?kB9zvUe;aln1pCbqK+k%_@C+S(GB@mA--Aaw@3Dc2_<_A`rEzi zPFr1gULLHWD9Ugu$-46wM0nWf&YD4Yb--)i&EAf=(R@D%HepIStk&CVzpwvg{g z+)s|)X)(Z<)mPNm>!M|s9WKR=Br)@>0aw@M!g zmE1r$B+6oJfgn#Qw(VKlR#%U?_u(C?*$I4qiGO#nWB8kGh0)e>!7dQgV@Vm+t4i6* zVp4;P=$Y*2iy!7--47$W;WznDJoJCA}@Jtr7b9TLZ{5(gg#;BKBYqEik#szO^RlBmLQ$ 
zXQzcG_*E5Wyc|?-KyojOay++cZq?i)yffbbmMcfUyMYnz_TDVU-0Ud;>zQZrE<@ZWLHQJp= z_;}xbr5k@2x)WfgK5GPL#!UJUnN$QS6_|2iledFoTI5YZAa|;EC;J#S+G?OIRc^8M&^Yj|C;;kkb8trgHb;a~83wOIby|exA zj-DwxwfFKVsb_?5?|X*Y?@J0nKliL@)%m-mA?__ryH%;~#*YEZIKZbSRzcWq1A+w6 zie=aF1>a=UL?>eEPYGu2*tCy6p)tf^{yFB>YT=B=ZIlAldq#|aDR`aBD^uAzG-Mfm zX1?jHDtGcws&F^g-+z=yuQvh26=IgPri9Ht0n;6W~4^XibJDd`IuG_@;zMEr1?-eKQ28adGzV zz2ldnS7eEes-yC^3QiPnwPT<(M3cC`S52j>!@M-N7Mp zAn?c0=ko*PzXQ}E3HZ@nx#K|}js~svwak?iybj%QyI)C}7n_zJ-wSv}jCSNw%;Gb9 zU-rbp_du1oa6;4iBJ#gdmJ;zK892NV$T>IhP^FSjA~Y#CLbk?Qz|vIQ#~rH_&O-fW8p4O^+F^D=NSM`n@_onu}O*9E9vP7}clh zH@(^=tV6LI8#Ph?>%Pw5h*cKD7a`ou?PC14&eu{jq0X$73J4^lEVL88>&F30PwLZW zK5BJx*cMgv%fZH5QCDSRcEb7S z=ESxxQ|z4GzK~0del+n3Go_mkq_8_B7Hp32rgOU#jFtg5wVU$3yvtom_&IM>7xz2_ z#e=+cW8m|&c*S{gI(smN$|{ETA`tC4LId<U|Eng;k=mFTMZZ@w zQNR5Q%KYs~h{BSLH<4=4i{ZoIL1YTam*j{^8pXjdj<|HQUa3+iZE2`>?w8j>BMJCx zo(ljTOgX~EbC+OXx+Ks%COT$f~#kLvG?(`2CU@RdR zFFfwyp`1AF!bqU2M{pea*(a9B;9YO|CZU_Yuvud~Zg~)sWie-hfr$srn8}*W{fSiR z!>js%HeEbmRjM#)b>{b-$4r^EI1OPVHqsnG;uFT^YH>j@{+)`CkEt&&_k1s1dFs$p zT{vX!IkP`;)7(1Jcq60UFh=`=#wT5J%{yfu>(p`r%ZL|heQksp7mIaLe%mdstyl&_ zD}95@yre9xt(b$~UJG`}51T!ws5Up7{$6g%qcL;{^7{Tv3UD;lYYtv&siL!e(>+-K zv2cri5NGkc05P)6*wS}%vjDY-D=1;F@Vmj?67g{yg_((4W%R>?yL}|-55Tx3Wh?{w z29kD{a)<$0Zw4HAur)Q5r3N3*X^kjo2q01fvlQn=eNYCz-2Rx1Ox|M~8VH=i{_P`! zi!!#h(P*RS6Cc)%M}~Q}RV;`3Oudk=7mJ0|A_wpvd~3r2NW4t92ihI@a}?Dne@=*ZThz=cc-1VPZ&tfUG<8ez;*|E47P|4x+7BRw}ysjr1Z9d z*>T6(y$$yB0z0?DexO}nomQ{vAao@1vmzL8N38lpdCeiQ5Mcs1E?oWvi{_m$qX0WNQM&A)I0a|8(M;^pLm1+mbot4!75$xx=#sPA76Px9wV9l zVc(TUNL#GL<8_-I{_s|65NABa!cIc{We)tB(HFpE+~i8>Ze;IS)Hy{gX(^y zh`co$s@Q*V;aSu@(woCB_c$Od3&DgJ;XB|(cp}tbOsF=hlV1j!XV1^=QU*k?BXq^f ziUivlEAewQJTK$Ck+c3t$&bv2yOJtDNdvvq0gA7}b7Wt(>y4vuD#!F5=c;%Qm@YqD zg^~vPN{U~32{BEoI4_OyIvDJV(xkwCcd2s{LF8RZk2;HoG5XaX8ND@NI1EuaYEXnKw5w4VY&jMA28nb>zA5p#)Qa5th62qa>WN?DGJ2Rsx{{D+)Cx@-(J$=d4rNtYA#Sm+wBsI z(#1y4Yyaz|Q<>L$xT6Cy@dZX=GShI`Iv}7+ZalJ%wLj=%8vwtrBcqUj5qHTEt`=Ju z7E2}1s06ejU4gM!i+Axgv1rQcIH@VT0ictqDSrm_*7x>a$yivEN{NZ)9|X}O-Bq<` zt`#V4igt^GE4oCF6V9D?&|WVKHI$uSk?_2n1g7?0@s9m1GP_Q_UpoGn-GJxh@K3ziO^dUWnX+ygO5af1vYL8ibHq**8UJDp%Pu^vv5lVeL- z>hH^^;GAu=M~=6lvsX_1VM|S$Z({1zRZ;kIn`=!s0DZeBsk_{=5XM%IJlup(+051_ zRlm)&p!^rf2iL5)UUfLh=c#WgGQBE<(PqAZaygV~T5hDwr0_mlHLhd7Z3to;~bUFWO;Y@N=eQ zS)F0AjE%!>W@CGApC6RN-Pgxd-xT8o{Iu+GpnTFdnNa>s%ka3zVwsp{xfg~`sFP3^ zm=%;=fZ8pz*86Bob8R}HqXv!#d$^LU=>L{-F*78UlpQ3U#yJ>gb2|1arL^85-q#zG zI*Mrdm!xZCykmvbKPP$$)$_Awut?|5oOo-(blL+S<&xBWJpV=G1NffF4(Tly!?wkx zqceJXow$~~*}_X*0+uSC+ahntXL1hp{@M%j)U-Yt14nq+YW0m={;)<(r_@k3vtn@> z-`24y8pLRI#TFw59o+aj-k<-Pvm=;e{k}E~@%811{XwiFdxm%wphOLAlMtRtdgB*| z(;tUV#@`{qXKIH{Zd_=S7hh!JoyZiKZ|cf=+7x<-!|YaflaHFEFr+T~tko;1&%Zj< zEehlhfjwS}k!CdhMRyZ)=-~Yp&Eb$5L9G(?Cnmf-hhopTj(r<&=T3Pqa0(f6p@=Tz$}2Hia`y3yAdn-mL&N^&h3k0?cg~)E!t{)L0e+_| z!aeZt+MucD#&*#XQ@p!yYS^h|rq>_rt}&u_C?-K|^y0l5paxNRqzWlL&-gj?ZHBVe+Fx}ij&Kd z98{DmkK7t3v-3wj}j zKW7S2NP3OzC{Al|#^6AkqMe2UBU-}?61-YnR`^3%RSwMJQyc`_MR3q2_-YawR!?GC zADna9hhHEYVV@@y_|8VoG!3Y?+tr{fjGWe@6(h?ZB~UjoP?t-n(LFU1E`ImBzP{p1 zlS;W3jp!F0k~}k%rL!M!g%&+f=+~4;W3>J*z_7G&Etct3Ku%IbOg>^o)7nJa?LKX) zY-gMGWxaSe@I|CQ)t+jutWbL-qfCwUpfsaA4ON`OMuQZwYi}`sC?i)F9GcPnXDCSq`_*qE z4!H9(H>|6}?eRS})^DV1A8jw_mDIXC?&Ir>?%E+9=6skUjA*5KCJfgOe=FP3y{$r( zH|=0COOTiNhDT&ysKS!Y<{jpfxKB{F9wz-SCjQ30FeLpirY|4L!IE{9xGLW< zkY@18L~spOWe7*Jqq(>iASW9ITX0B5HjRXp=#iOM=E}U@XlpELgz`dd?$x&9Yh>@n z7?{|S^Nv6|iW8NNm%{wnGZ)^aFGXZNF)s`}I3AGX(KtAWz=$(S-O&J$nZF1O8W8ZO z=QO^TkW=d1blbns6vMAPo^3f4Lt7noSZvtBtflBA!XZ;?_YFSl|rI2~|*F0SBx z9)kSGL<0_RUagb0gd-HixPUVDc0OOGG%{k1A|sxand^!?2n_hI?}(0Xf6h8RL`wvk 
z7Zopy0sBAI6KdkcZMQ;DX|MoKtK}9s8V(Ef!RL<9$Y&}QlJf~tOC!1>Y-jZSzlZwV z(nt-pgAgu4R2F06IU>Zd6-#q8j!!JW4wL90R)4ay&60v91xVzT2{SgnLJDe$`XK23 zZd;I3W3HU7SL{`U^wh)Vh;P1k{6)iQg5JId4cE>Hyl29PK~}vs?||;!6WlM4<6!(` za*;u8+vO!$eL+AI>`$}U;$s^Zj&*-yBT@J9)%=SiL!1E#`zRe}om*3sboFa?0cSJ$ zv{G-qhgD1bI0Yf6<6KgPr z-M)pm8L=F!*#)z;hE5Ra24%4nmtMET>B53sWog?AOf)`iYWa^(AE~y)Ua^$fK7Jcx zck_FAV(P0;z;k}BbdMvNvgfnMnv8HpBClR9q;mWgM0(BPm*CD+n535|XAUck{)0tY zferHTwujw?BXmFWIXb;*f69O+w*HQRdW5sX-R;-<|8OQI4Pj#zBuB0LH$OZ(Az9xk zX`fL^GQ}@Cx=z3(68z9gw1^I$p$Ao}va+apXCTGev_MIKVC;9YuSPvFGbRtCF8e=P z?F-Q<;|4@)JCunG3jNo?LK2qTEI^Xa-6s-AqFuYnZ^juh2D>=+c`0;i(6%`K`{OGk z|NDluv76FE_`h#mpZ~vikN#hu;kEruRI~G)pOAyj*>G`gUS61gnIdQ6^1@> z+I6}R^E2BrmNjJUHsJlh=T$9UotodT-n3RVQoPjk4BAi z&lAH__j}3?%JS)p0OmUdnTC{}X{E+1P>&>U;0S)3#NL4Nhz3ab~2=f=6L=ln7Y z`&1u5U@KEo@?siZsd$`c!rv}M`UO3MI11>_UkuI0rIWJX@p1*>*Hp1q*DFZxbPjAq zeW87OBNdHXqG(29f7~@LOD1bBAc&FC@)M@Cv8TnQoH61=vFljirRu%DchIx*FZy6$ zfIISCPt>*eKc#f1h=i!ACs2~r!S|8o=KZXJ3lMue;hoVFJ)}Y8{QX^=WmoXwuRHW8 zqXOqUU^6)M56wF^y0wG8l&q*!>CDSwEojXw*|Xsti)sgCI_23J zo(}2moZE%ebq+LlzxM|U!aihvO7{nRFq?2y6ffWLTN`!kpZJG1g*hgcO*=FQffwBC z_T1kmmaOc)?296j>F5 zFJY<<)=wppnR4NEuABUNn?a0Z;>VIhm`(jf^C;v} zE2Z@r8>_!TC^o(9uS} zgnv=6=mG|8C_)@x_>JN67DQ-shSj?vN?#WY)+yM+IG!j5!@b~2qY*4hLW}TwP9n2A z4(GMEC{?;&#u*Eo3skh&mIkgwd3~@r>pp2hqWdv+*}lLYRq#or)Vk)25gO(PR(d$x(gL1Pa1`cTI`%*ee74gUJP&eH( zEn@sd$bR>s)YSfqxgq6p6}7JtLPi|B za;7~s!<H+IW=QwRf`y6t!`+dbU&_zKUj#N7+=CTVuSqq2gt) z_ea}dF*Qs(sbB-+{_TMZeX?4egHf{Zh6ca%Jhg^5{F{XOd@O1==7Fsl8BR6S{TTg4 zw6>DrpHstNwNHG>i(veKaUoN4&Zc4CtuWE%+FA^@?kd+s#37t$s`HLOrGy@e z*1g7jzsSTxWdbQ(9sE`DpVeMkqgk_Z=jPlvQ;m;OFy#l$)fyi5n~my}BzIgRV)8B7 zT)I^2|9;W@Xs^eSa1-3%5LocmBd9me+)IB_#@-5hJF%V2$bBadqWo=tt%RY>J zX&Rq&tA)}%Wng^GIHpw#!0 zP#7x*vMv!91`I#n5s~C8M!rFk$BNXff65GglY*p`jmceY8z><@#;q}~0L+50D)%|- zCr@@e6wH?5JMD%>CpE${`&erlO*HGDDeV?`UKL?R#5xIyL~a{jdXdnqgP=VcAb;VaN5(hu8sd0CeIf-Au`+?jA*Pi~6Wq#r$gS z*GUk=@3LSYSZ4TMT<^&Q?+^NU%cT=5tCveLqED{*oM|@YwW`V(pnN7KZSu<5cjAM* zku1*U z^I>)2*b(zL6rZB9d49^18A*bx`SKf@d+ii*i78JZ*DWr)@}y=c0?P_JW`6&qeWGpk zxyAZvndCW#tz?EJRDb6dr*t8X3HBe_pp6eb@?t{b6 z5EAR`hto%+GSt8h1*U>CKV5w< zo4EL;#}!w8m?)Y~rF!SDDErb`oqpC&C+@*AM#m_q8p`R84X*)k>h7Q^ zBWLNSu$%L@ImPm+JO7YYN%rVDe}{{I*48gDp2&5|^FZl3R3Gc|O7ed`5DYUAs&o4h zBqNlAet)MtJfsF(R~E(2@>p?`kk(h1!|N9DBFG2O(Nt{6dR z8kjtd)xfN9?(BSgL|XUthfd5|$JrT%gh{fyAvV^(+K2LMBBuLvhry4&w%-#S%U@3W zZ#L|09uMc3M%5+XlaF?TiR+~au|_K^`9D=n!jONLMYR7>@frQh>7a8WDxDj3F(;~k zYxtfQiC0!t0!T+j+^gTxRG3Mz>;+7nEHAlsaaJLm{wg-D%aC8DJn=@*I2X6EAxK{h zc^`}u{0|?v$f--5d1z!*6PNLThHFYrJB#m2pyv~XE7Z0AHoTWIOr*9-Ueewlj%jS6 z({iiF_0+egicVpUmN5<^xJLc>k?EL zqI{hQvHW?ogn(e#AT1`W_85Tr_n!uXMZmywscS@db1>s!weKrAQM2-no>LO8=`O*U znXY0J53-t$TZ3p|Gqi*Kny$k};kGe*mSeX84Z12>0&>#f_L}sV3wl<2uDy5X_BU;< zor*xJ+IZ~u1zH#!^}qV@d>gp@oIk0o{2`Wp%=ejt$b*CTlBI7$Qc^OuY8cK>p_g#K ziXG57u6{eYFX4`|PwP`2?s_>~tikFk!@nrX5|gpT{zhrh^Cer0YU7%}G7b4|`=O8cEy zeL1o#RNHgjhvMolwfn@e11~dnwe@GG3cZ97<;qE5=*#t4;FyH$xPVSs_4z08yYoy` zD#*;;?$}E;XBicwjMj_+AtdKuRkHDB)oJ;b${R2CgD(DUK)+P{ z7S}y$Ag?DP83l~CIV9D%tdRW3NF5JIKLPUcWqz(&iz+eILa2$MkB?Nm` zL*u_-xGJUvOZgCW#E18v;|wSX)2NVFuMlC7VrLnsWMfAldG3rOG;q@`XkjAxaC19F z;=K)AQ<$iDPN_jN&+=kWz8p0v=#D{b#YX=s%mNrMRU*u?5#gfh*a? 
zXh(O3eY^>O9rs2&us`n&y8DAJ;QCw_VGxHJ8!I@JYl?%#;#<@*FZsjtsr~6`Z~zS< zj%(d6QaXv?A_=WE!Mq&4YdGmqj!fIkh=da7ur=9WNh9Zdoxz?Y3JaAq1XFGEE)8h& zy}lJ4D5>ZmYdx}xF3#_*gD@A(C?V%j*Q1kgvfSTHS!$uf`@1(E(TH|!eqKGbx4tfI ze1sMd#aO#7_lp(md3 zf?A5zkhrf=R~BrVU|KTnR%aF*WZdQbmkC>x##PRIg?O|(04mPhJRjQl_ZtPo&DG1H zG|nO;EGf2=LEB-m4uQ!spzu6v>)yoFn`kvRVvjf{gF;EzId6A6fLx&bY_Mx!rpunp z{Et)OlaCK7OOAKhk#i>0c^g4GWA0~PQFR%rV|;| zn2bS+bt9a3+skt{bghC?0{=IXcm!5O^G^O895;OzH~5khYk^Z)z*(2=?l|*wR#?ly zOE7aQ$k~XM*8U+p&{1HtotJ-z-=YqCZ<&h`%{_aYtSVo5C!W+pS18bFbr@bLVM3>}3xz3F6n^ur=(+|IB54;q|omU$mx%ddXBit4f_u0TNhIk(6H!=UD@S z&991D_!iUQkN$;y<*9k`F0#FuS#y3bwqb3bA%Jf$vs@E;-M`gQ=-<;62$GwxJ@?Yp zU2Ca0M<+{qsJMBwuJVNX{7TzWMm@i=8iU$3oW$N+uPwb65%a7h;I$QeRqxz@l`#0V zf?=PghsBi$cY3BIpxyt60koa&#MLWl=s9n$zkFNl0RFt zY=`e~vSG^W@r)1s69DSSQSm~CJZs_67F+;>N4Npv4_|QxCMKuEDOa=uq>>1=qmbvv zEpeJ;RYokjrWu{uf(3KanR$eoHo9*~$2jlZQ92Y5gkHiav$#$#2X!8w0+V5gd+6cDq8d1+F;%zWktENCTC|akBWN2H6fbb`oHcPykT;v17^2b{R5|6Y&r8 z%e$gC0cZySr1$QY@yu~T*?(Xgc5AN>nP)M=j6nA%bcmLA=JKme`jB|I+T}EW= zk)54quQ{K!esl8&S_ID_{2nc2{&9nQ-9~&IGS3!Dm>M6wp(Mrn36IG%XRVRLU)RtB zPE;I7*5xMR+4q%zOVD3%=A+gS?vDKBrX)6Fp@{~;-ww;NDTPxQ({WUNiscFRafv21 zamr^i)@44MhyWlJ*OW&`v&z8alYP;0-0X1gyDr%`q-Tq{#1EPaCA(Py^{W9;lql|y z^vzP$zz7uvR;QQJ=z6)GNyg74D;^o>18%JfE=`gZM%FJ^8C3_8i?@b)_p5vgP9g?9Po>xD;%@quJ)~40#yi4#)xhhU`OITSv{g#6 ztLtK_MQH5vyaDyFD$&U? z`(S6!4M5}4vDyDFcr2n^7l7%^t(F9ZExF*nnWNh#7u0f?8{r@7QUChx9d)Z)Lush+ zfzq2$bu!FWu+m`S1N^wqW>-2AEE6}2M{JPa%6T6iphatM$=x+6Bp4dG&I@AkA>ng{ zcjR9l@Wx6&xVX$@ZU!X7SV-+1ntjC^Q=gO#;k6$Ip0gmE{O;SH>h%$=WKMb6E^VdF zfz7>+q_*%rUK5JBT`(X`Cx4o2d$`@C6w3l|b9hJr6EP3s%@7{_$I{YjwIJs2NRWJt z$=knXq(3S{-OYp#CK8>AU@iFfR1`*ypfO4I6ipWg(Ukp4Ya*^0qR#B{&VrHL5@CWK zpZS!!^YucP+!KA;qfXZexkwi)OZt&+Mrtv|;R^$d0Nl$We{}I24|$X`4BI(jX*~F2 zeY~#r zK6E+qz#QB#$c-WrI(ojmv>g4H0AF%He~3}YYsx2djtg+;{_P$Cp_cXbf&;FD!#Cvp zm;+n4D6qvzuIdd5`*%~VkRn>1i5+r!)|6-}GqRjeAY4Dn_TlkylDy>yrue`wY_b;pBzZhTdF2uE4ODD zv+L&@Ij2RP{vhFKMj2>~uUOGsnqF}2 zVmemDL&Cj`co97*J=Mwn3 zz0WJ+0blyiOK5H8MIg_Ezh+55gq`1)JdkZ~nbQ%xnPrY+r;=>)okc2#lxfCt;t0<{W4wJtN+z@3*MVALQ+ zbWC`SY(vnt*8vYq!?ox>>@x&NfUiLGDH)ytJ~X;af#=8Egy=nDN+~;!Mr4Gnw?#PF z^ZJ!Ovw#kI`;xE&V5q#5RvI^o&=#m>>%x*4VfywF<~p>Qkx2HOX}p)8G=qCij7y-D zDG!=>ptP_pk~4KFA$^=pjF(bI%+1IGhI6tvsR0IT8WNCH<5m04fKAUk-HX*E6u0(f zHP8*eSnaSaYLmrXDod5wk2SP!J>S2|RXm0>tg-Cw4*Sau1OD-~`;^cKPHhJCqftL+ zw_eig35`wEq9-R@K=d}kP|J+jw$HdGqU0{}_z=x>%1idZ$`Arl!J@;1t3zjoV2#^d zm`*m`ORd>Ul@$y#!%~FC{p)qQ4jlTJaG4pM^3o3RWmtkL@X?v;N_P%qryu&*KY#jE zwt2Zrc*nM$GOrZ=i-nE9I1h+6Sm%iQ?B2G?@LE%jMji6(srsW-7O9~4S$8(AB%}bu zwQ|yZg@n6Hyo|~TL2ynjBBt}pb|Eu+)}~?JDbWL2Rco#B3pzyz!92(?MkY^LNCS0x`3@hHvBc(U0z7*Q4BW=XTne_W&&-&ub_jNQ#T`!`{Nk~Wey5_ zRn*d59muyy<)+~K^xf3KrL3*COEW&`^z(W|yDfinoFXITJgLmN*`IS0&z$dicv9R*->Y5OHq! 
[GIT binary patch: base85-encoded binary data omitted (literal 175488 bytes)]
z9#7|I2w|H@50uZn4^c3byFUewF1eh2_kj!@=My`8E$8osWo!DQva25AGb|3|X7;ASY8ecyU9XnIfZJevgZvK1ifPk0atAD}u49rVevc}KHP%$#$ ztUBuiBstvx%^Gb%nU&*R-r)30X{U)eHv{Aoz>m`}c*s(U!vvbXpNCCU(jc;d5MfE4D_SrL$do;lUB(!(XKJ(wwkDGrF>DhB=GX`4vkwlGk!~-g z=qdLkt9O#I3&Gfnm3+Yie3Ie)M#T{NtNcB7s^Vu`D!82Xz2alM~#;J zmkhs|UnktKe=SZ11l%4!nzi~CEG*KW&jsX6s^%-H)FzY6{2H^x@1Me59b1@;Al3(i zlx$!2ZqB0M&)VzIG05caXnK<9Q3clTVZjrg&y*rAb>mQdUd+v z=GyIgr^tUfdyU!Arw`vy=064wdJIAqG4rzyH+pwey4m*Q)9do~QyU8^-&EH$>F+k#~af8WS>{DJ>?qkt=BVjPYtf&v_K5%1*^!Ac4CP%YX;17x6MEhE%7-f z;1?3q(J-}}m3QLp#Roc>q(n3A`{7qEt}0wlSBo37^Ch>Oj3ZOom|ZEys4;J5cgG8{ z&K$`C>DDD~i2qLJZGY*sV!%&2-;69vc#d4tulD)%S$9g_Ztu@$E>SG<54XDT+7_`^ zLt}Q}^V~7#h>Ms7F)V_!8$PLS&!RYvHnv!K!n9M9JQhRpnN9F% z;L=+1U)@vP;%>o%F8Lu(7wnj|%NHiRys%W-5sof&@(QeYR7LN+d8>WfWYxU@GXgh# z3wV@Wq%#Fu5L!1L%Cs551u+U@joORdBCC&i&MB=>EqNVPA$n?k|L;MgTpnW&`3v$ceOJ-$ctzgyIF&3fShp)F! z2C{X6)NJz}4o_#k;!_ux{mZ%f;m57EQu%W$B)2^{#4A&ra{a4WJiwfIhBbFdnRdJn zV?Y!N{*mg{ui9hF5_fg|mnZ6_%4Z)Ev%Da2YhtGw^KvR$+06DrAkhobV}33<`s^q- z2)()dp{~o0WPBo!+HvZ)B(f3q4EtN;$h^&+#8*!2^Xhv3hEDHp~! z0>Z?wlrfr1*!$3Kl?d-7t8&+scznXt%7y?zqEzEbe==s8O4 zb(&YLv&Qn`4Ss7LOH>1{#Br^_YT5<(4d>WDCaK0JpL!Swo-Euk`#-iCZXL}LQ#1%c zLavG61@z(9Z!!)1FHPLieDCDCpZ@9cBX0A4KZZ~d9c`;;QVi5RKz+4Lo{y1E>Y(IP zkS|iHwXy5wh$Zl{uvMTlbD%WRSWZrFI>96GbiRMkz#Tm%xclb{iZU+t&ZlG3iIpic z7h=#Fp;B}3R?H%j(@^0aBfyBC*60sv zj^T!lK0V2}p;mA7@0T2Vvpu`m=N>n?=2Xj^t$~f#Cx+e3($^lxWxXMlbGUAqN%XA4 z+P;IWN!{e_yI3J_(e|)YgF|i@+m>5q=@wx`uu0d`J=KrA@NQC&f9Ql#?M)U;Z^1X9#8w9pY;8DRB$Hv1BiT3 zY0U`eRNk#Zw1?F1hTf$27?q31t^Bljp{;)pXIP{~UN8k?l8$OJh&8w9E50h-`6#5p z2SS_q10B@J_PAR;7|jRHzV2qTbxgTb6JsR(qLy^pYEQ=&I}r$$6?`mrcekj6Y&Ofn z{33yov6M_cT>T0P8JRw`)rY($f>JHd9#}END01U_S3mvI%a>F$|ggYU|VF zdBVEY5A*)0HpR=^GG_wrgr>xol_Yoa2)K0%>t8UPGuR%?xMQJz=~S~ewH7EAngqv| zih$}uJ(vhNn2Ptl(-o#$X&+pepFV-6c7jU`ftB`u*L5GtC^fP;N#kAqL{eCO4-Dnu zc6lpwIw#yQ#D7i|5GHUxtayJINlAu8DDET@C+2HP76ST--o(8YyfaG**4wV`S%vOoy~M3Ch@zCkzoNf1DO$;f=r}2#{6xWj9eOi~MbD+}NIe9OFpWpFP5 zsI4_*0H=}u1Qe`DSnB&U0b=2_=&7CoI%VYLp+*Xcf%-iYZ1USaRBErauuJJ$)fdvL z9#nRHX!@`wadgPM=!)|v1E6d2d|gz3J&evz_8#;%=tCsSUXOXa3_3tmH-fOf$;!P5 zHt=eyQB=w(u0iKr7P*z<$CM&3mh)SVz=DX+ae0wAiO$j2%vH|DfkMHNvp%y=S4^7v zl02xl?_xe2FNF=Sh|zmF5#5l9E!NeWlBo`+=@@&SXT}3BMHU1w<7^ELYt3D=-|$1O z^Ix{~2KehYMf;0&eMWwlDY+pKO^Tm^(s*)9@HM@;{EX zVen<4bP4z=O`h|bd$2CE#ktNrUvo8SRO}7!kH9b=3Bx7_fcf*rfP3pG^?A{1nxi#( zwfd5=DBVn{XN=PUL1H;_BRJUji||G0XRVMggn)>Xx-klE(>c&;xv-M%`^; zZML8{HoZ1u0_fIXi0Or#^eJ-<`S1Cz>tE0DYF1~et&qy|>x$3A?z?aZ9kkkXs<%%Y zcm@}DU|q-7BMY}h^;*1fb?s%wtyQs&In&$?fZ;`h+XVyG`B9d0yyuDiD~ASboz_DK ztrj&C=ALxhz!2w&#aP^<%%t>x=~Xq9J3%FDyyW%9#Bn+c88tEO{j=| z9TnYmKh#(t1GDn@yZYCe3Zm8f+-8MQnWY*|2n&$mv|-|);03!%s&;n1RyK0u?WrPBgp}#TjR+|@ z*|e9@BCdf9vXC>})k(Qq$#EBdf=tS|qh;cS3GB85>RMYt4uhH{p{@x1I@VgU-4Zse zJfypYtRJ&2@~qL;+LQulfejh(Mz9M=d3z!vJB@y%WlIY$d*?WtEM4awg)@|T&JA2& zw<2jTy^S5qIqTrX^AGPiv5w%=LI(CW8STVsj&3K!QFUs;6JQ!@6Nl%@2@( zqB6Xw`1Rf5(4oWIDNaE2nwhtY!hMI)L|xowGr;Yf*ueQ;jc(U>33c?D)`50Uq-7>q zFRd#l+oy+&-;+onNRq2UrA~QBfXO^|f78}ag4El*+ZLsfbg^}%es*S3%M^|5w?3&&bH)WE zV`==ZW}19XpU5u5n9U`Rq6a?j{fm{lIexueRy${T?blR|iTgGt)m6Dxos*(5vmG0kd#sO*&TZW!PZ7;-lYP8f- z1iI7BZGc0iK1IuJRcd&K$bvQDCTV}K14|R>M4hf9=3!y*P>7h52>`;aMn&M|hKGU* zOGd|++%RQ@#nZ}Jtg5?3jL_;v(Ohc}0LXglgbY>3M%Z7^Iep?%*S34j6Do$|Si{|z zrK1@VRsCjEAEB?~=%GQS5&q@4A{K?BkO|^%&%q`OH zTFLlOW-_!+>U3A0DC$v8jS|591Z+g@mLEzfvLq*?Wy10xjpc8zw))UO4pH6Lz8*9VTsqGq z%2%ajAgJ9v&zH?0k8h~@B;4XC#F`UxH?JW)Ch(Stjw=g`s*RFPWnJn!IGj^FxQd}| z-C)%8KIuYhkNTKoWB2OCl`BPfo*_-tiHFbmVeTH1$=P4k(SZUeP(CT@w1ge_OF}I( z!;72(o4uSX9C28@>}>%I$y>QuBrWZ0FPP7lv~Tr5~J_f;SVeFou^WO_4D*r 
z=oNmUYj#4hY;K@yq}bjb#aZ^}1{B;6@JU$u;|SEN)xXCP@V>*_cgSGx9Ij3E+652S zlWE)ctqe&h1@QdcIwp&?dA+SA)4b*V}yx&>5stT_o&SlXNxYRzx zBGA7(%xqnoa6AYf++77ioo+HLB8aHATOB~}Y5S0-*|DcPHk8&Nv%*rPk>XyGmk))( zLBmJ4RZe#Fx*r*Y>-QBG<*$W>!flGY5n0cG>M~-PXicJaycjNe&qw1o+U&{$MBF`NOd>=K3uJe@orH zg&#iyxQzo?#W+Q@J^j^Q$ug`-l--c=k@N~G5s3bV!5BEOc1Dr$T*kcN_JUX@vJg25 zMl5qahKe!fo3in%G=`*cpRydBg7MfL266Tf4%PVDLIq2`sUk1!-q^7ypjj}os^)eI$>O*S0myKf5gQH^|0 z8*ULbd|3y#mIn8V==%q^nkYA~NZv%8umv4_-3O%>ou*CPf?g~BTh&!Je})G0==NC% z5d>XNPo=AGh%aS8_`wYU~{cbDK4EAH+E zid%sIE$&*}-7OTCP~1}7g8R$8_x&Uv$w|(cJu`b|_L^BPTWGs?*T5iumrm_Nx;r$c zw8j8ikpP?t)+&HbR$QhjAlUZh;CH{KGYt$Cr8Clr+cSC5HFo3$&%J_vfLYCVA&2;# zW>dw7l;XZ-VTesxjMRg9&*1uqvcLHIBXyZOqbhxl|K}>aa3P{%-0Fd?hH^={luJu4 z7iw~<_H5a|ALGTuD$2qtF&+pi1_;z5ebm=&^~ay0zK{ZLD>zJFuDZ7YmW8dCZeOeK za`qS@1F2Lt)uv`o+Ls6AvP>F;1CVSGKLn2j*4kfHkv1)-1-{zW$E zItc%MXd6&D-+Gf2VrctO^jg^sn0q;x+N(*Mk8rEpJkgMsj*cHBEWSiw8(5%tQ}rzt z0P%y_xcG`nt4vZ zknoWoyw@IeRCB{}iysp}1!iE~D|Vj>jiI=I z8Y__2ga?eyscRdwT8y^^r8R1*CXAr*dkg9Xh=86|Cn15!-f-C6KDv}uSz`SPc#2wJ z!B2DI0?&`5-t-rP)>2u(8Wd8^WcKe^JK|_Ui-^4yoyOhg2V&@)V@kq7UCC)&5B9A= zaP17PuM(30oYF8iQHdvm?GrlXEulPm8I~M|p4ayuM+YwCp+g`r3G0K;811y)@Pyyt zrp)9P0K9DqS%CVT1qE$=%r1wu6aF*zLPg$+Zu)bgH-p-A7haKfjJoCN4yb`4aY8eh zT+@%i$9H{QF+mVNLRhyS@_hK5Pn^y~xh0O{4-FWjp9N%{yU2(w3IP~`G9~q;xO298 zwY<HnUD zZCbaj?-}3*!n& zh;=GFP3=de@kQo8k51B^ED!X7f|a%YrmMd>wQ{KDK?-DT-v3~xN$B})>qOC4jtWOk zYfn=&)mzM5B^4X*Ro1PQsH>RRIVtFC?Gd@4T(oQhL3>MUaFXfvM@3-3Ig9&LZ<-h9 zgDp^5p>!&%GQZrG(UOGl;Tv6yGMXxgs~s(HU)BjEFedatOr@*iz#Hm$%i~#-?tKku zMmNgcS-+M>_4DaaVv=CgQ}K^Ig&oCRp4SKBxAu%R>>Kwd^gWW%tw)&;FS*FRsgI66 zi~GhyPH9COe6e-sS|35xr4|pb5jg7OHjLUxgAxCZo^lK>38!hT;e24Ue_TQD8xBEl z>j_+3XxpB#zb(#h6oS1ltq)f&hcG?PAZCakhul2e+4NA;~rspy$r@c{qLAf5)H zxNr<_W9Y&(xKX5W%|Sx3W3XxV(5L|k)}UnFTX6|l6+wX!OE+Y+*U$^fMrxgcU|vsS zY{mC}0_}Y_1`33JNC4<_&D&p-Gj6|3E(isMIWpYyjXj`DgB?Ou!u|9be(cbbVGC&7G(94p0@s7Q4E~5p*h7P z$L=k~$U;wjN{=x#+fWIt*@Y73Tp{i1e~@gC5spV+b@xuy|9Fu9IGx5%t1_!lP=V? z)UTuq&AGG2?bnu`>XX&^6`Y5vG-1%U0+YzmD6+yPJHQr``2{PQOx@{6rIGr)ik^vb%9S zixQn#Uw?{$Aqea|W+3)*wO5sm1*Q&vwKT?zENi2Ky!;p=$k za6*cN;A+|lGe1o%Q@Je*?Qb=@L1N;Q z20tQ`wk=mxrK?z5Cq?2|s(|Ujh$8jkCQ1FnJM=v-ssCvy>D{pfyQ}3!=4nf;yN{1% zj0!`2D#O_kLxKPhe$Pp_1B?ASR9F@GCT7hvo+6LYxaz^_Lr%$p6=|M2ZI|==_V-I-_1r`1{f?n@w# zkocnNjG#-X|7nhty@Ua0bguNK0rTIa=uDAm8gA$fIn`=M6nWH9Rf=*{V7UF=Jc3|V zgm>&)M%w)7ko>~_<}kINX%ZbmZ5|L0^sPa$EkB6LcNTUj@LJioK-9c#%d`LXYNE(+ zQwOrPe5Av;$1(B4^_0K!)o1(g|51*rR~nwXtrz>JDKoiKonDTZ)AgSLXB zDxTUS8z>dvRxNxzc`qeBQ~6~MmO`knUhY;Rc|(_om=(68o6+}HNjMZDjOZY40SAM| zrMJvrMv+j$UV6F=pA@rvrB3fuZiN^9)=U!hHsD{SHw27Y+pwT)^n55EgeMoBxYihp zQLoDRmeg~OQ)lnMxoKN*Kug<{*~G8Q$3gdQyU{kQqE$6tk-FIA3);}k<(dBJ5sk_Y z&2oIhwsS~aMBN&6H z(v^ZWLfNI@2ZCAq?1Irk94yv(GMd*uQ2x_(NV*VXCNNDsuzKHiT`-;e+HgXOR4l&b zPX3J9F+((ba;{Kt1ApP*+E9h>pWT*gHT6z8+Wy>|$~rnl#?&Bcs3|A%D06cmmujJ{ z13>V>ii1i_C5*oc-p6w#vqI+2L4$SsCuyPq)C2FD%5I+d;Eq#=IDrC_Z$aHxt_*+WLEr@Vhw zlC#PUpm&{5#i*s?jcyxI4ESLa>!fH(^#g}WP;G$Ob(LtN5l0Ee^5ZI?rbU*spHg9< zQr+q4%*o`c)9fpFo-^xerS-!Eb*>Fx;I{E_V`%TkYe@je8q{G~%Fk%3l#hG5ThXqt za{!1Wd{=T!-!0vuNrh#r|L@SK0plwrr4jN^cA0)_J1_lpfbK^WE2MP5t~+0sHE+zY}`Jb zOK!$a;Adm`dT~OH%Nk;rU^VLEfc~w3?skXH)x(=g{G4?KaLkpy?~YOFJ4W9Ykh8$8 zAk!Apk?;QZNbteOjzk6DBYPK%y2s@V-TKE=Oq11)wbeOfZq51}reWKFpr|9=0sHF! 
z&vk^ERr7Yj(gL|&b@Me+J+}H;4 zmbK`*w3;8{4!8%w%zzApNEBm~<)L40RciyQk$}MztgeQt#ByP{SCK(HF=D2^OMGSK znKq(;cn^F2E79AW-EoHmrBu8?u=f18qrJh1h}?z7mylx?TT z$*#-~VbQ&3TNp(5P1Cyw>k8x6ZLM!rbn*BdG%?7Ee;m0ArqWElMhJU2b+9+vHhga& zf2hc~=~zUj{MLs3;~k}`Zos65AT;2Tp?&jx`w@iUkLjbTNkON-ih}(z#n7J5kMM3E z^&n`@7X3V6w^2eqJW^hm;rLUAJ=Vjv78t0MSu2lolnL>hT0MtKbqxgmkP3)N8tT*z zT|4mf5gof=G`V()3wLDL(YGOw{q-B?sO!S1pVOy^e{J|9pL|r(d#lsxtv@GzB~xvO zWCx}4k-xXe<l5Dr6-pMC6r~}+OoLSY@T&KB|x=5om~q+XzzGo;&mdzH z+jBo5eML=Jyx?nvh#%f|*4pvpp;EhjG=1s7$ld95Qd?3}bU=0(!IuT@-m@2|0op1s z7spQf-ZIY`YAu38s zqtOm$gMq&9<}O z?lygSy9JnbUeuM9migBAAi1e0LmECUT!#V{z@Q#3IIu=^!VozQpMTE;{dTY@=)bVj zI7B<1u1uFWRcN_!5Vmf68_TjDL_3`&xf;h#5&q0u3uJjVg2q~l8D7I-9$HX8@B>DS z+Fbu~uz#`-NUpL7()2f~(v(8xv?rWv2AN?hPdj)GnPC}0dv0p^GbDBy4g#3YOv3#u z3y&1-hAAfTvq!eA#iF*Ta-K-)Neg(#k#L!d_sYJ70%4LV4|jLU)Di)hPUPFV`*_aX z?Tub8jEXeebWlXUSrA63vb@#ZjK9^z!eC+zMfbuZ&_`b3r((46|oUsU3N=ixU$r2Ob)EZ;?cViAXs8d(5fm1_qeQqx|61)G)7xSsNVUBy{ zoQM)7W-knc%)t6?cAJ~kMX7WS{1I*1H}BzZ+saC&l)tr<;f7v1f*UCJ&ZHearqd4N zE3~J=;cEesP39IgO_ELV6RIFsyP;%+=lPcu7-2WdO||s976bY3o=VnzX>cm0x&vX5 zoAjFGI*Pze-_eKEjfQ;-V9p{2gUYo=NG8tvE=+f2chMA|OHMb`n};#c&4|W70FOo=wVEF)@8u@6 z_b&#C&CYB1x9K(Slk$1p>)=mqFLF_GTTf1XI}^}wby2@Nq^fb>|9ZbJ!=@t_vxQe3 zuZ8ajyu3;kV14T$ixRa9x#

F?kuR&jh2$0JHX#t zc++Gj5-&%(_s70+rVv^y6Pel0{?0mR2Nx2qzUQp7%XHt$pXyFD^*rZE|C?Su=&uH< z*A}d2ug%!_tWm$uF%+F-{cm$Bke%Ny5J#*2T7p?BNrJV5kR_)pJf;tQK>;fF!j ziRh)h#^2}?gx+3yvhNzr{T001>o*#tiw^79&D|bO{iH$9?oH=vBuwF_a%_>1$e*B1 z;j^k*S8}nHGM|v3gsygZrK!Cg%}i8-`xnGyTjXji{1PQ=@Bs-_F*Bk83OWdAc8*ZuK(#UOtsP3%(`qu=cA5`D08+!0f*CHmuBkx#` zOg$`|vMOB2$F$FBv=k)pCzaCDuVp6~X#jVUm1bDlzNB>sHCT9k;RxDvEDX6>*YWgy zH2HFPoA{f>vZvb42ZJN7Mo^HOmbUvWmYqfvkL0IX0KQdBd;6IW=!__D&W%ZYPvEtB zA{?H`#h_#-NT+(Ac2)-~5(S%5*vDt%>-@{p`{EojBosk*O%f2qL%UI*4nlB0k@w{G zhYO41vTWAaqHOKdK9EPX#u_r>YZ}XRNTe>xF6G|#*FK~=6%=6SY4(1vP~IScP;6JI zRI)WAKhB!ET`=t^ZM@6GW>`b~XTs#TBXqMb34(}{51K(cnpVUSQOt^!`)%cM zxPk&7inESZcIZSh+3?%A4Qw1KvY~9ySm+RVu(c)^XTg6RQWGbTZ3-mF35|eeX%u}G z=lA@^SM|pJPm8=kOb%ishZ2@W^qIiZwnXrD{^-=b8-!XAAa?>9(0$$-j%TFlks(G;jHr766 z*flPvX$|9IU+WWo$A&ykINl;M_^J7d+*1HkLl#b9FxigTV-)o_AaG)+4UsX%DR?=% zngZ%~?eL>BHpbOzVhlj=V^-DWc{+L}+vEu@#>+zsaPQY+6d^xVFsE=h7DXWTbsK}{ ze)+ps#*r0}`4uk4XESF-rYE$By)D{1@T%k<_VX1Vfjs9na$T@vJ}(f;13{yTk(Qj$ zNt2dou9wP%G96;eC57l3(e9N6dn3TjrCuYfsn8gjwA%O>pc0{!3s_*gb0M&5Exv_Z zXJkK81D_(zuJl*cb4Lh_FGPn$*cx8l@_rzPc(H94Ioo%R6bj@L(wzXhL z*Dud6rWhTu!fol=A#mKsJHk}^{G3iyxk!#F1>S6bp|M4AKl6h&y& zd$z4dq;_9wYeDR5fhqYTF(0QNRxW6vw@&D{=n{<-{b+nHIm+|7d>$1;NJInr13dl= z3rzZ9MzQQ87FbzA7P0WrqAZjGBl2xj;#&qadIHncH<9ldXbYi=59;yj_B})|fIUbn zRbZUkvg@QslVNUrsw>8Ae@b`v2g*F|Gwi&5&9m_qx>F1O)D?!BskwE}>xEXwwmS$;My%=pLkLO71kJ?MK}cB;Edi#N$>ZgZ`Ixs&yjx_n&ROm<5z-TkkZUPs zz%v}knv=Xm{d6U|QtS9vH*zDV(wWe>IMaa0GUFXYYBnT&CA| zi}nV0`Bm%_<q3PFj&Ty$exGQMR%EfS24YT6#N>nEZ-xFTEf^bnQa3z1Ih8 zqa#sfO&*|CC-ZCTAs(pFLGx4~6F#t3xE>M+6ulAHmmij1c0%oxe=w&^PzXTC8Xc>` zjbgWH#om-3c4`v;2A4E1beDOp8njrw>x#P$*x#n% za>rquF@Z{HULZ0^8`XJ5sTFKoBQiYWOV4z)6Q{axEA&LiZvzI&87dZM16~Fg9^&0l z9{MxVTFHP58s(s8#M@+KF$j5E&laVBEr&yc6(y(P=nWj>|FRhtQDy}Rrm=iLaAiSD?VRQHC`WQBEl$GN5(#kq8!exXLX%G_BiGO_H) z+Sea{6-;ha`#g!tbgB6jO5^hRsp(D4qn}ANX5`phqe3Fq%~xwwKFi+BqyL;t3k6zJ zHhjD9q~2j!S>nEshyqRj9GmsYgHr%)?J$@VBM}072n|=*3-$a5*5|wEpvcZUcwl1) zwMWm1ire@QlNt$3j%hy_seTo*$eNv^sK4mnATn&BjIJmK%KYNOLJCI6wr?{UpD>77 zyfidMx0IH5h6%*7m!Rsjw|7l(8N0@?)~qs0G?#YP0VX#d`I<80C|7OsSW$6pKj!E% z5#KofO{Hr@jxcc02Zm8qH3-t@$4-f%#?Jf}W_n8p`YAt;TQgN;91eQKX z6hR-JDsos;tkSK1^DoTyU=-RgGagR#iF8c4+UrP6Wv_Wpg`--99ej*}JI8{u6Q1fwzXc$*B(~#KRfg?#@V!$!b4$QH#oS-lfjOgzr`t zChr!j8>uEfNZd7!8-e*LPg@pvu-%-?y(x3n{qT~ie6(pTl# z_c8($R#%X>!eJNMbnFAwsi-fjObAxh$r7lmf!@g%*A_#yVhvQE5FfC_* zUw2T}zD&S2V2He+l~d3k-HB!P!-hbqB>}~0thQrqR}awDUc@j}yxLj}fg+IL8KZmP zI}drn8&!%_XZdJ=Rt_hT1VNu|ZS?~xRJIY#fW8cq6UIlpa&eW+?0?>>^^Hj8Ju(@7 zogA;%z*S`npRD@&z4Wrcnf$Enw~?vseNI^H?hv}?YExySIPTYxlN_RMgzq6M)HS1B z`-U?>2al?vM=Rlf;=j!;ipUcACxj&aq%Sy_3Q6g}$JGm+xm;r+NDLQJnNbq>*J`Yv z6$1!pb2vaVDSy+Iu_tb|s>yNdsWG2qHskAB1s(Dw$r$G`O_#WJQ}?K`UUa(e{2e=K z^)$?wZ~cl?y`7_}?=+$Qw}D{;L*_X&7wRdYq(0AXL*ydzV8w7qsJ_Xt!?aJ(i4O+D z>)a-_{u>emP8ThX8;~*_%}FZok%iLK@TaBZm4PuXQ=Qqy+X{?t2WZmei7qXYIDV)e zq#xn?pfTj8=eHSMdXk{M$QSI14r~6EQpnX@xD4*Pta{rWrzU4?iVbRvg-(=zy|3P@ z+=$z|mLy3R2T!!da7}>!lZ7#2`33-<>@+91dt^3L_S;SXWMhVong=$ErDI-uz0^KA zr|45#?0;WY>0Nbupa&kAl6KpCoE>Dh_4APvkLc*cJ2W8y z3w94r+TszNQ6PelN8Wa2j`+1cw+{&(eJiM{{cV!YHeB)&B=m+KCG|KESW3RV5@ z?qLRUOMj=M{7=#hdzL^n*meJSWLQS4H5%sw>e2M$d$01hG=vXFX7LR57(1Z`Yigis&I3jcYk9B}A)RF6!i00ws>lw`rX# zG8NXJ(L@=|OJ>?S%NwFUsLJXG`y<9SC#P|bbI+?=Y*V+r6dIAjXP}IVCMe!^l%wG&|l7oW>UJj)&JIsvj)F@FM zeVASYPjPeZ@0i3DJ7c(OT>~95)86$ym>P{;03>PhUMJSd$$n|cw z2NyjvZgqWz*@J9 zwFA{E-in=%s5e_Z>YIR_z=oUq6Mi~;ZN90HZz(ZA`)y%n&PQT<0<~i1C=0^LYjQMT- zVGaw$XND;1e{FuxB)*NdvVbDsFqQRg@7mheI&z~hvaWRn*z=8ZKCLwOIOVQOYL|fjYS=Y@1~xKSRe`kaH6{Ae2iqs*G0jSp&Pbh;Fo?|Me#Gv1np$-ewq|i 
z`lK0m!yd>{cP6rEZu;Uhkt0glTgv2;+ui2zp>YRDWIV#;xy86(K{|lNG-)|3_t#2@aXD;&{V_=UgPMSVTWR}u&H*^Ng#JH0NYn*Y6jxQc+CW@E zfb4gzBL}4_q!9-~Uo?bgW#%Qpt%ia+DN4Gv;8oCFQD6h95D3hwy8I30TmK)p;14az z!=RcJA}(HD?vfx1DG|<43w}zyRM#@E@`>!Ctv(ot9@7!`X&-Rsr`S!fEr=}rs04bWqYyN3a7_-lu>%#uP^F{oFL~P4A_CfrM-<;F^~^2 zPD`+Ev6ZwEUZtNTaf(WY<}K#(g;ORXB8zegNK?c zX&ZCNKP*@EM9j*Ld9!e>G<2A?I6E)RF7_gGkkd9+F8^GrhOuFE2sNt&ehHHf^u(C! zK8c38o8x9O(q{X%Kjs)Co8R82&Bdhd!vkl4G;Wc52w_4HRNpncn+67s6_$EJ-wAspP5im!* zwi5N^96qvOT<$sk_LhE;ONneVc!-p|J=1(8G@mMWbeXR54-2*c9nS;wFWDB#i?`$N zWg$^=1B!cAW7!TYj%r=A)_fu4%gm?Vt@XZB(3VAL(NLM*N?6^HwV*5 z4dXDkQ)yp|rJ#Fs%*xz^Frwo|_}pl_y$lDs8;RQYXez5>PO5c6C}uT<6#+?$ zepPhXn zy1m0y&;^dLYKYMe{RGLUHMFqV0@LE|ph6_DHk8~2gOQsj9_It7`}TAxVp7)YCgzXZ z@F~%m$_ZU-ZV+R={_Fr$WS&x^^n4#Hq)HOoIc>5Y(nV4W66ZHXN_%;2;zmhU@$`=XOrl>gq@6S>p9o z-iPRE4KtUo|I%a(St=pBIk!`2mp?;Cj*13!qYNW-FvcUV8K_~I*TZ*!kuE+0ufh$V6 zu~u-_>R6aO7&Aml6l|IIw=Ct;YJk*Z-$vFyN2WZ>3~mD5G7Wq8hk--O1lfiEqO! zTTCK}A(vj@6gP>19Q>~E4h`r2f;!GK*&sXW9^v+jMyO|F5EA(~Q6?*m?@x`2*9bXa{X!(WI$ZV`jL)9No7br@G& zef$n($iFcI-$U2CxtiXF2a!f@A<-;Zb71USuha{;PTk7P(I2MeUQVI#VC zS8?R;M>sS$oaf{t{5|qO-?;r{5%v+z3JV}95U&Tm{-75=qqi12#Q#^4$O?sI3H1}p zkPCoQIoV!h5~J`XrGYkTO9-B*W$n^~HSoyK;mwYhZstqOivb)B&8|rncHEIZ&6DkS zaKZdC=V|g5-SGAMCpGF)yHN7=d4fWoY-8dUN)DgIC`V6^QxeHQE3 zErRULpes4Prk8rcX|BH91XFiwj1Rn)vOfgq8B#xN_6) zME6?!!WM7g7aftUcXzF5d>$NdjVJ(*nc3%$)p};k zUoqNWF9FQFQS}6rGb~bXPbdqv-_bwaaq~Q;>>;c)WZ1m3m|LTND$f=sSf&c1t+({u z=5-r;TV_J=fD?dfZMKxz_Y<2Om?|QjO`Lbye7r!<+JtTKEhng8DCo74^+qUIeEzj- zv|8EhS1ne5=4TCA7Y<}jT#JU!G(ECp%*pz!qY39k+wwZS6cN#})Apm%!l3=8zQXe{ zTbujw{k~9 zVsEC>4Uegw=Q1b?Hy}@CJv_YgIZ?<5?0pBVF7{*P3WWYTm%$gl&j*L zTx>IKzLT<*kekt$ry0V>VeTY251@<;jU8ffWDdp>n!v}wVd&-`}*>TkS>ttoF*3f&T z`I}AfX3QV0QR}r+5<{%qwJv!?scCWEZWbx=%UXbsf)@HNH^v{j? z`Tkn-a~0xWJaqW#f_9s{0tZN&mLAgiexVPzLH~?0HM*~ywx6sl)QYSeHQTcyzs;Cy ztO`%DB0JI@@oC#9jy!5hZ%krqS`a#F4u>i&;|T?`HC|LNErzyWuM=n-O@zGpFdIP5 zV9QeE=D{&)A^n;-$A{7+ydHj>GR=lNBsOxLMsVIr*WA6>x#m}*uGjymwibb{05YSL z6yIp4W$3`nuz_iks(8UaAF%fsvhan3)}RfwBKq2c=Qv`5tC{O+`ExY)SK4}Ci-FVb zavN!2EA_oF5NiCvwop8>i>x`m>Ja{ zzT#-=_4!{d5qUF1Y9DBMjD~jNV1sB)OD2S;F9d2%f7gO7vzSt9(rDxS&cG8`<`3hj zC>WM0N)v>T?b_QuxmnH;(LyE3$YUH3j8Zn|yUT2Z>$TW*Pau1hUI(~eVjcleMIERc zDSa^%;J@@}H`kfhVYd(flh--NFlwQCcGNHU(~zy{W%u?b#{IZR_a}mKY|)M(?4-qv zp)c7scj%E7UKiSyGuk#)($+CoL;xwJ8tvor2A#^pg2sRFLysY~M^Ii$5xA zdhZVCrdO2`MUJ+1(RMkUWyNOT{WF(DHru3e@H<5|DbcPU_p~2r?RDbhP3aJcsdmYu zrmz~=&J%LW^@Pd)$dGf34D`6Q+o53NG~rUS$JF@|Rq%Rtn=iQ^{i0D@&cyiU72a*@ z818-NTuQ~6k_1b|%(ZDd)YTenEKWRzdy!*0=kF+1?nUB-8Rk(u zi=6Eu;gMDWpBOg-ko6O}SS7{d7abRBEV3dM*}*EE6ugh+k+jP{P*2`qjhkgH*csZL z7?&Fk2LsDqe!s_JX;jOeqEx^*;DqNQ8^c7Xc8)CG9jmKpVS(DjHA@(F%tpq3Sy_!Fn)B?MDelb+2S{@#{#-C2nP6r&M z6z=b5uaV;9-OVL;CR+qjPIs)Z0trYNugS9u%jtC)ePr}KFUH%J@%zGj2!iCa?iAW{ zu;($l)T+wbB?1X1pxnz9fW7yB-md1y#l`_v&;v)#DqHEK6s)Q#KW=B{tzCRDLA6|K zv(#Eh7P-|ic|@B3GM0~K(q(#*Hap~zpoAiVpg|Gn@uB1wFRkM6q2Yxlds8Jw#Y!lL zP^%C;b*kw%D&B&9Q}7e&l&$2+tB3h?!v3EQ7z?YX8*4~)({9yEQhwSK9uWe%mToDW z$Oc%m1$3D0Mljt*tV8e-#gl9E)LrVih!esqI$N?G_(a-2HH;uir~0!{tvl~`>E!r- zbpwC+Q07WLA!cd_B<}u5LtE3K&23=TGS82pzwM#jRFnwlN(6~@gVaxWqLU7ZNF;fS zybF?ObI|C?)0oD=e@eGC!A1+p7$>|B*PBpU|{?D9kkgsh^~4iT^1z^2fLI0pQ3DJ3js_*yv2q4>%klX=3G^rr| zoNL+t>c-T)QO>F(9Ez_~IU0{)2@1Dt&MqG&XrQSfNDo1u_QrfYZr^`Tb;d>&!l+`eHThbUa^O9{}$!Et$r_GOx6U& zb5YYt`?lWt#yvv%z1P)~7o)de!_lr@gki)iguMhUf=xgsh&h10l-N-@L2Lcu_qG0H zIn|XrK5D9%)JyAI7h{vDaPTKZRwJ1lo4<}KYTqr@0WR^v$+6RNsuFzncGxYUo<>T# zd#$oqjP;mfDZdfUKZNZn@bcUN$$OLR0C|VW^NTJ>)|(yyF$w#sQuaAKd-!r6riZzF z6sJNfC)*x+Id@_#_YmaX3uOxdH182&9z#F4wd;5b?IU)SF5A796@nKdh+UC^e6Z>~ 
z42v`Qb^S|yEw0{)D8@*|bSN)=6$Z<20rbx_6PYA>%QxL;4;u}&FK_x!c7sDb*l!6n zOd@hEq<^}Q@C!1Qnr7L2bY_P*E$jitQZ#7AexYHPW0O*Cz4#q|{#-~u9a4g!Q;YBS zNK+XU=O5d)NF|$>l@+fq{fsTEdrvUCSI#^TUl+SLSHk|)LI8YDQ3YAA0^ZwiK6yf2 z)e9U$bb?de$-_Y>n}1|TmZ?SQ@w&czx&@R}HTMH#_!m?`LKf@Tl#t+Vb7VvWz($n_^efvuvwc6Zp zbL|iVnKD9bw(%!Otg4jdLk%t$6I>4Y-g zgMal+xU=DV`IV8`3UfI<9Z`tgSQ?OA6$a6M)0%GHwQKfc;JWN$uIzw^oKw2I`;=Ol zI1CuY=bLkd%iqYW)?F~G=QEmU`gi?pOb~)#4Aw}wJCOGJ3iQ1&wyq+8G|ZC-k~4Wa zW@BVUBz}^HzT65{r9o4TWDKX~5g_B!AnHu`i$1MQr`F~XUcoiq1O|Zo~r(;;T)JIG5vinYfYfj~~Z2>)XrYY@mp~PulEtwVtwVmyN@?ET& z<*_benx^c%qGY)aTF7L0N>DO}aYtUJYEo3ImSy(juEO5vhb3jxwf-dePB&&~CrXAJ zKV5E=zZ=cz`;(6~NC<9?y3}}8=A#$Ab?e<@lngUk zu%coYKBZbJTM1viB#R0OOo$XX0lmJ#L;?u!(-x=4#9@GiF1WJtrbVGb1KuPrJ0WlOf2ppWlz??2F^&S+e#IH*f@ z7Ru7T%xD8qFmoPi7M3MdC#mMd9pkW#qcL8BilU+8MT(F28Ok3?6kzA$%UZB%kl!2p z#S*-PB2>YRzCB>b;796X!^kOEIs0DyozdcB4k)kQucR}&Uix7Suc#2!LlfO`8kt&jbWsJ9G?qwCtX0|W>j+}#}poiMn&1qdG83GVLh65KsN zkinf`LvSD5-Cgr>U(Z|h{qOGDRo%V!l4G5xU^T*b4pK6~=OK}@^x@X zgGyls59F@>X{>`BSr$(`3Gv^zIEO=xsWO7!;$^xe3_%~{fWjNN{*=D>n4Rsk&YK}h zzhle=e|;k9qrUK`10PUc-Dc~v+fLAv|JTF2s1Y%e<4V}H1dxde%XwoUY;r(yhuOc$ zsx`_0Srx+xRxZ*wbW7i5!GZ|fEIuLYMIqxZCF0OREZNFmI737+n#B(e@kLj;xh~p@d%7V51$2M8w?`?JN8yS&_a$Jq*AqES* zQ^q(+HBTTTFwcI;;N?C@yn1@VkKnc%YY6fyd}rmo5Q{(y3ct#Rk4m!nyK{4Q_XI%WkU?X zn`vB_)`w(rfFR6{wpfQo-K&|T7PreGdj9cA^HZ-OkuTvQE(x)2{p}0WE@+nPK}F0$ ziH;2KQBYRNPBznsrt8k+uqfsaJ!kr5Vve3jG7HpzbI`-CfP*@^!ev?@c0zW*Xo_c0 z-GxQP*qfX2r6V%mIsX4^_A<#RS_CMib>*r<+7i^H1G5ZF#43)#W#1#Uw)@qbPB)maN}DJU zcf+0i*k)OgT&kYcieUEpgFixOc0@Dr5$_-TZg*&M52s&}H^e{Nt9|#Qhy~(9dg3fp zyKu^JI^n(AswL2RyQ-+YQIdh_`r;mvfT^gaG(#Ks5TJa|qHU(tPd?SeTWtnXyH-o; zyrb)H#q#qD_}c{&zn-Wb0wW}36ljQUY@xX1RG|pxKVtNoZPDWz?M@~GUkAfEzfa>>7!v! zOYQ{?ARsT6wHQvuUQnK{@fi7)@PEYlLxspqbK5i7)Q>=6FS9{|j&JT#+ESXrgGTAV z+xMv*V0 zE2Z!VeUR#unD7HP`s8*v9$8iENi9eg(!|vC=LwhLz3Y_j??suWYLS=`fUynsy}suA z93VhJ?(jtEy|qt$@_&xj?uC$mofhmF`a4|@s!Lpr;$jsE_uOVT<83>fN?pIIE|?Qj zsgxxr&DxaAQ(TGfL)vAM;0xZNmq-i#g~~`xl7z@*PyfiG+-~*xPO8DYyUz!X!3VSx ziRkM1ermVPnyihYZ0E}ozD4)r1K-f=vhe})(zwqu)nl*UAgMG>!ap-ywp>mx|0@ap zAa058jKNlWU5!+Ic5g_2f~ojh{-NCTHJ$O{@`DH(X($#|>Lq`k+L!~BO-Z!2%-avyDNDDO`Mp6kAK^RXRv23|7 zo#Wkf?Wj=1anpWM)_zn4V3Spw?!*8eTX137;F;)$jyoP*p`??<+YD~>7NOJ=@QM9@hFg+?c?GIRy1K_c(uMR`-{o(B`c3Sg@ zdsh4$)tIV;3*R2#qXgx>awSq0H_F+&OT5skzjpO(3MmyjZ{m@fA*bY}yZW?%Xy1}C zOMm5Z9Wz^AQnKZ-Owsaeh%=)zafL5$B08B=`#aYsZn9k=XR>uO$TMBY)IcZ8yhHVn zU*yK@)L`|V_vOvJm~?L3a`;aeh06&+r?8>SG^W9N@!h>%#|93a(-KJx6fSvz>9*^YK3=8c@iQDHs{{^*@x7O*4cN88sK=c$*8MLj*;RZ z)}{~osN`D>s*3r~b%tKmBBD(F#Q5q~;Izk7(|&8*i1BR5dHo-_ zf*KS!mlXko%aX&&8g~7q1twcV#nFYWBsi9&IW3w|PJHY5e>p&$2!{eq&2YOorbl_h zacJHP7v3a1YnVbWJ-qP2C?Mo);Hx0SGBcws(mw zM%@6frJJ;@|M$o{u?vhemivO{@Mcoenq2||I;wxGhe*Vf+pP8(|HuC*er7p8xR6@u z7k~vUc_|4EMEaThzqq>dvx^7y*=Ej1^1%PWJpBR)q84R-__O~VjPFT{U-117^X7lV z&wJ9kfbjpLK#cGA{TF9}I+S~f9she>|3?LNYfskL+*30=-nJvfx%uS=*X*n1-fJ2a zG_j`qv+wf`4Cg~Rmt3nINA7tz9mDxE_ zby-+_bhf~S9}p5WAmOy|xOR6u_o)4Gq~OCn80wCR40U#y^H)44=+MZ$Sw4fk$1Ya9 zm^r()f&OC)^ee9B@6v`YBp!ha0tf_uqK8$^~h?a2ax8c>`@JEWuIqP`Kgf?LEM3}KLZ#=PL z4;5~*^get?0n)2bcv{n+r!RlMRGBw8I>ci7mWd>{l?Z}qbIflMTn<|po0CqdN)HFaYh$312q zsv46P6v;V+%%ke=h$y~by0#0b@!8{7QoS+ZLPQ&auUu>NwJ%2ss-hj2>@~-B?==S> zk6|uzZars|F}i~(ypXnw;eq$Al=n-&s?V`JG0wRn3aw8Q=i~mQfgMFv*7_APjFRRu z{%R@S`H0niOZ+Cl<(U+S3#MO69A`s5iN)9JZf<%noK$xsE74;vB6TG_OB=v)$G@m|>=jBj!tC=zdOX zd%apkTW(^iliz3J)#F=s^j>{K-o39?J_W{#LMfrUoB~4v`&-gC&)z1_sTh1Gsp^e6 zj>{zabQjzxZ@80BvqMKsWju}Pg#KK8D7mzw#?O*h=Zl!S?j+FGf*z@|U4k+8s!&s) z;hSD=zzbCl= zZYP$e{n)KBS&(#3#vQ})ykl$_&yp=NKPd`p{jc7pGiYeu4ur^=BXza3zc&*nr5f_^ 
zx~(@{+2l^K$$ue$^1OxR3H-=T%QU%Rbm?psiXREq}TQ9E_1bw2YlX5P@qs_@<$Xa z;Y)%dEmd27S4!r$ueg#H*5dHMLk0%RCN%@h7$`G{GSMxdf!m3)v50^Z+zxY!LXCT@ zv;I5b!16$1AWlC>c{4*9eOC3xps2d~h*kUdGS>gGrEPJG=Y7}AV= z+Nf%9ntaXlQ`wxApZ#_D$Zh6qQK*yl_x7-R6tIvtLW99$C!!KmBG~3?Z9oI4}SX7;=+%Qfpd@Gl?)F{UqZ!l+E&( ztn_Iv7Zjmg2dJB%2$mF2M57{(<>52mFJM$5uSBzFnQeG1@FW$Hr*hQL2m>8ry#s$v z%$7s~FX~xeWLQ5Un85wLZYa<_o%w-TUUQQVP2Jhnm3>*b;w(?)4{YBZL~P*}w!+SC z@m~5z=7&e2(+u?5dCR(N#NTE#Nx|n@G^YdXQuP-H?$aUBgPUgWP9m%QN{z!22rLyK z%4fzR0D|sYvF~6uy2H0c-daga&K|bU6^vd>!OjlX^M9Jy1}>Vh2R1a6(hJ>UdC^+V z-u~FHCqc~$WjyaS4yr#cq>(kKUf8Wg_5WsVsvcF}w=9l1T2`qnnk1wHH$nZ2;p zIU9>sL}NmJ8&g1-PeCEB6!s9s53|sueAu) zxk`{Rdynz{h_K_VC!uQl)>&cXE_NN}TETiw=Ot#YrgPlfLp_zEDIO;mCv-Su)1g$X}^_Ic2(VF4y#3 z+dbutZ3uptNT?-8?WxLDX%4wzjPI24FqEl6igtMD_8#_sGJdC`E?i<23>-7QX|7eW zQ}C6{W6|z=XbI%4j_-}<6!`$huTvc}BRD$NWcy=r+%iN4M^W=!LP z3l06&L}@&5hnC)j-mcqI%a%{%2di&Mp3bSW6Ur}lSz~uG_hq7#a!bfXIY0&(1Pi1T zyl{HfY?#3&piPzL6>oGU`f?gG;v3VO#cN~6@|$@h1Gf+hYosUEEagFQ^2BiPr*y8b z$Ityu+JVSr`F~bjtBtvCobEdzV0N3u>fw7$$Y4y78NmSzszkMLlRz`=e4yThbG2vL zI-R}Incbhw7Mq486LzB>sRRs{1r9b0ir|vJ#3RQ$X4O!mUk)H%4qhJfW)5YkgMLQn zd^dL!#)-L)ZJeFpwY@-ZUvpk>1m74w5KY5nfQ_Wl?&cipKsnw_3SW%KncEjhuMW+-WfY{&vOkT)P zDOitX#_PgXkB4x0J)wE}$Q7tBR`(a8?n0_0D^(q{fsXKaViogXmV%CaSYWqO8GdKd z@j!fDvwU?KCFn*F8f26rwBna*&ALj>?Qi(yh6iBFUb7zLQll& z0jQVv=wenm$a_-hpO#N7ktezDuESv$ySX@(Ta>@lc*kW8zPEnxa5&yWHuyavaed~kP$+-}hdsxc)0?-s)Xax?vV zi5-MWwevEWQ>SrMacUUoD8*?XvWM#t`E0D)1Z%Q}zsT%aJR7R|kK0H1&1ol%mbM!& zWt=TVnqKuUe9VKYc0cYqaeA@iZ%e<|J6OSG1`?Rt0mCwkPp60Sr`P1Qy01P1x%@;+W~hS- zoOe!Z(VT?|CajQ`!0HN8Ac;0^wl`F~JxHwMydZ2$+MYsgM-#qWh55IaN zaSN*U>9C@aMZyAK@;2eRccY}#+IMSeAk^?hYyyK2lAC{VV!9QM$j|PA5)JEtXyIGdi2 z??!YTe=huKDrW}IMFYwlaijzs4E!P;+wD4+;vK$AbHv#+@t#EU5saHyt`a*^{k%>m z$RdCFzN$mWw;w0%7Pax6d2k1jKVVF1WeoQri(u&c)cq)@RP#;P9jsl|){4*DTi9yE z*|+duieRBAy_)X|7N6tNcK1zD}(RV;EmoI^ZEW~jHXgnv31{BKBc7@|a&KM_aH-fe-|20z(RLUpFGbR@o#8HfBR zB8_^QXYPZG$6-{=O_nS@X}E^fBYZm+cO(~KT=Zg;5LRs<-E6i+!ibSL^rQMCD319lM39jyRJ$6O);h4aJ^`;lK z-j`&5vyNQjbeW)$$ZoBhvMI}bhJJWMifX}(S?xFwLzGg@!m@>6RtS!26@_5`5DACs z)q#lq_q?KuQcm!`m^UH)&LmAD@Ay7oFE#g)_dYhv0)PdLcj!UinvHd(9lwOG#EMt6 zigCB6w@yzGtjJkwg?c_|jqbo8JP1r^obW64e6&0Hv$zW8t4q(VfQsVW^K~N#vxi9B zyDsdT#AG2{KXJ!Jr;US&=6}XSd=`%`*wz^*$sGXV#rglssmuXt)%pLkoO?SMS~5v> z%sxip-`J8J9jq+gJtvOPX^GsV2=R><57?FmD^?UDKSTt7G~J0;eSH_xd=Cl-_t?lR)X7yTbI zw3j3NUNOh1-eBTfz6y_(utHn28Y#2<$j5 z)n4uts>2{FUseJ=&@xcQW`~F-InCyEv=~RWM!{GQL?{`5A1cKnrmlQ}YU!N#_cGLU zI?lh%Oz_xB{;js4tz5~3p0t6hCH1BlC^2iD$jWPbr8XJzp zJ@xsOH|`}D6r(ry!BROtq$^nMB}dq*Zv82oL7FO>i(&TPM0d$*>Z0FmG}oOG&KfW3 z1-I{I`~TqHNFU*g$(UCf3`!yJSZO!4vi@KpuWT=<+@kBdH287 z%-3yyt%h#2RdPcy(U~yJl$OF_)xtyzm2&l)?OiV1XG+l9VG+-Bng61-|8w5#$?oPK zMtSKfnU~1)Ao8MaTEp0(uYf|I!fQER-m*>BG>5NTL%sfIIw*B$g^idH%a!5p=&EiZ zK6d=iPum*L`hy$m7P&9zR|M*U&=ZnDtOPR{k48X|MO7T@LwRpJ2@g;@6W^)vc+$Z{ z)RUhFx?0;w9Y=n|FD;qx44^{YXBjjO@SFNMKYfDxl&&tN*!1W4z~)S}F-SU*A^@NB z#qiNQ`gO*C)%stzaMj|6)9JU5kz$SB1bnMGF~|q4~&?gR+IL^!E|xmX0SbdfBZEo0aXUL$ilE#_m)%#9XljOk;lCE&LCv1$InyVazFCM zJTd@X2pd}3IY|p`v-)Ap4l)E=QAL?^KD3WjWUADb8I_3fC&yu4+<%OlmWGBD^Z zd)KA5MnaMf!=equzRC)T&Y%~V_T z@UHL!#tEg<>X;z+G}cbNqlT?+1Woa;1vaS~h7do~Ha{!dXiQrSt6O9$)WUneofVdC zGDb@r=~S;AiK1wHKHFz3Ds@wUV}~*Bm0NA>sYx^E=H;3=%Z74PEjh#+{`4l{(7qvmc)u0u%W%=2blnCghY~D?(X=yQM{YN;g+}K}%^|l3yS?~1` zXKiDBiBk69e{`nK)SkLiE#c!f8l$%GzBFEa1x*q6Y=l%#Zrk;Xcb{K}hC5<8^@ajS zgb1Ohy0;+VEPAL+h2yj|3J=+L8+6k~uwy*c63;7#0MTN69@~rd zgD#In*YqwfW#SU|S(|NHubF3*FkIm*K`5IhN-&P>8*=P&Gw+E~;Q;ddFyUuc9i$x! 
z35oiu^6pU1t~8|s)0$r!Iupmc&cg?-Ud5$H&J_HA!=_`Ausw$(<#re$})IR0j1 zfpqTslABf=?BnH>%k+Nel<+3)aHoRvkXPz%DkDWbUH$x zNXCzH=jvD`<6Sroi1>Ss-=}ArYI@!_CUj1_9~&}r%*j?7Lhn?i0Vn4ZcV&yMlS-Sv z+*yo-sDp81;OxFGHqjkT89t2HaU-*S=`0IiJt9`|@_QyQ(*xtL}nw&MLOIoXV?$toUIReM? z;g~Wi^U9=Xb{}@9#(HMWE670OPcD`gk!|C#D!HN^;gO9IFt|n`<{VM`Dw-A<1h8mt zfMJt!am;fXF`Er;kYvh%!D+ymCY`wOAb>FbBvL$4_e{U<`+8$iubd#^ts~A{qB>>r zx%C@(#n4XKZYb>!lVusTSMD#BTrq2{Kb@}%)AO1jf4Z=WJpF$GJiXgp{!8k1L?utEq2#ZsViToWBxay2&DeNA*EvdSPq+o;EhG#h*qiu4A_ z5SXBYi=QRhYJF7ge}YDQ6vvK9wlIvcoth4_(~bs+ zy+x7amgR*-Dm+p976aC8txQVt={6ZUM zqWUJeSG-}>E4n0a*s0*#Y2Iw&Y)qf9 zk8|&Cm*qP8$cnZj?|Toz8B%ff!ZmR~RLV_5&-IAAj44=*CgF|)BV%uS7`8@`lo#Fi z-q$FfXzx9w7%-Nw^HHpi4eNQsneHEPxJSMF#}Mdr7Z?$H)2#H2Z{fTbPEMt}1PHdv zVTivgC^?>+?R6W_GLL?{ZjdBqa||qNN3_5<+{cZWtyTwZ|8qb9YP8@UKh~HRIoI+$SZfJB&l=FXWaDBc$ z*H{68qW+E-a;0y6v>j^I3sLX49MtA&525zO$tl>_`)BDbrI`x=gi!MG>O4YLddLzshgcAC5Cxcvl$7QzvjcgfjfAjFLABGT6U}#miJCkeY4Y9Y2;bUv@4nXUZHZM z#@0n7pFJO=nq1a9iKKn!=1OaVN*@xg7Hq_is5>FWBUR0aE6i3a3xMV|m^8Jd$wm_T zL0nI(iq5}rI}F;5`4kfBQd&MTeqQ?WzEx&72vH*3wEmYR5!&)ajP-erzUiCuZy)X} z71-p#idQKyH*z^gk>`Xl!4Gpb)85v%-#KADy+<#3aicn$XY0P&QBWrF!A6ENThB+&0IQ zi)n!6?d)tu(>BWzvj*pUW-~7uZ zsT0o40kq-hX5SV(t#oz`d4}?qD}IUZDG+2F{FwqX8HM;S+lRaP=I^&{2ON57d2mjn zWUY9G{+71Vuf24(%8NaN%^5U(E+J@n5k~hxn-u97%0ha|sX3e6{LIcstHTY)Po2^Q zV!W%Y0fmO-V$iKR@hfGx_O{x#ypPhx$w4X)U(K8wF0Swb3>ZB4Z8A5g&BG{0K5Yvq zW)DaZ<78a9s4?>yk&CG9tNtQ__*(VIA8IgT-sX8&#Ug7`Kqn|heDnJ0HdbgcY2EGL z@nv4uz=eAk!Lt8-JFDACOly4*(`$jyRO(v2s;?kIf>09b(Xa@gzSU>CEpi}u+Tc(- zR^HBI4$-8I^Ma}9kv9_cD^JzIrUiUZ#+Gq7rAcCL()J4JpAPPh6Ej-Rfa z*9mJkqfX6{;^76=K}A;yagSL##6Ib) zuLj7rWYaZ@Q006#ORAAc%Rbq?FAMFX;}lMYaUoEqGLii_<)a9DQj#;z{D2O;1GUxa z+)wT>n72Bue1?Wm=~j^Fv?sZ9C3&|(^>??%h_l zXJwIN;irX5SxoTgrIKK3PU*V?NWU~^>0W3y{u|KqSOSG-tD&d1j^eX^EVi}{VZ1R5 za05CL$QxNxzdtx9)(8^|Cz;nUs{yn_#l`U2RWa4&w$E3`lP2Uwu-H&FDNB%_(sWlg zQHg~M+`*<;mj85_7-{tUx`af+8^g*+N`8^=_n9>(x~6b}5LJp2E|ztSFRU;Q+VzTkOD4upU73T`LsryomH+v??_Y(eCbRK!fOV9_3ZYx@9nQ7bvQUXY}5XD zH&8;KjRW|NNuG{O@?)~M(XQ=ft($$WO)}$r(_2r6<`-?}`1rbu0n)LH#?r=AZZaxc zt{UwqpNN7p>EqbUiY;d8Vp-uSt|-Cug)mL75+~i4y?@?D$Y)Rf(n9vGHSTGp-_Y{z_*;VM6C_ ziDDY~i1$gE^E+f09@0%liq-~vDRD9bIY%j*MPC(;7>TY(YNw8lrWpyng*K}&NqXzO zf$XMF3;*(n7~vzCmepiuQuyf&Y?Z?enPw&jMoIx2h9hB3anNBsF`>6Rn|;@YY-Jfq*7ZDk8J zK`*=7W2^sCi$$MW>pWR<%%0<{TH}vq>m54)M2#pfwFrIx#PYdD__eeAWx zt<5*AF%;=7BifF;?$L0$?sC1F=-u^v$O!&UKz zFwJ{69KItNfO)nj^vHM<7zTpL^R1gVZ`ROVk#>C>jl_I*O2)juctar62l_jG#0UI+ zN4D@jnh;#>A2r-%nZUUb&E;FJFh=NQ;I3s|U%Axu4z?B&Yt;-hL(&n7+G|0iqmOO9 z;8CVc*T=_Bpo;J6j3=-=pfC<{sIn*RgZCovQEy*bONrtMXy86Pp+D;&`1wpN0~p`T zkK7G$V3^5lccWjL8UgIjP2noT^f+txI{;606;7_WQ4`+2z%`F7lte;A8i%Mf^hzx(VakAX`KJ zxXsL0Hlb;Ly6}W>`U@c?_f;=t%h;54p!hh0D#3sbZGXOmr_lj^K}*IYB~_rpf@8E_ z;ZW}RQ2Q_AxBx6lx)U3i+KllfT?27tv?Z+%2YNZ1)C)SVcM}aR%{fk8S?FZW2rlq< z{y@|WwBvLUXhBeRlKNLc@+d!YPGvZ;a}%c}KaMg;0fd*nAgReefj{fD$afFS-qZ9< z>Kg13KGn=As`|w-#F{CoJE10W`oTYdQzndY24!6HEN20a^h!%rdNA^Cl%Ul;=Dj|0 zo;mz&A_XX1rK00** z$Woaf*pIk(OJWKVF7aN)zK`S;?#)d=gP+Zcw zR3VYn@?AlLDg)?l;#Qm^=ekD~pXl2HFMYW*bm6N0vE796YgzgTGC*}V*z;KsNnKeP zu0HNMi=3w(r#5yc6JK(kdH)ZhduPj@s>i#e*<9)GhVrtSkZ@vz-ty4%wK%3z+4Ci+ zWBxMX3>k1o;xJ3`uVCx!nlK1UauN)=gj_|#RlUFtmMMAK<%__S7R@dzzrjF^_{9lb zd@*g_TMf;z-sDuuaY|ggEQOOKQJ!p5ueVUn%0BBpwGd&|w+hl!`c}xZ65hn%0 zrK$?;-dxqXtyqi=AAhhTIj?Gho|Ri)ZFgzpHHkasFlEgCZ!A)D!@y$}U>pS_o2yuT zX=^<4DI+R#3OFKKSeULVBA^P;Iij&fz3sda->EJ_ohn7a)*vg%Br5<#vHOvca7`<0 z^jHpTSb-$Betz$1RLCIF3>kYwVE~NpQphYZA*Uvy{y_hA$@;LFi?gE97QKb;HJlcN zVBv+eHlL61EbCtQVkN&FKIxY&5klN3zR$lKH^m+LWfTB^7C=`l&3Laj=neRFKp3>k zphwX*Jun%Q7-+-%LXW*ukzMG-XhB#)=`g(LkSjUg=t{DdPrOBdwAG&yXz(RvgkoN; 
z-ts+|#7iX3CymhY{P|o&k;3}t@soRk13{sHOtWaP-?aYJv}*`U@GcXYpmq$dcFuv@ zbnL~noYhr78b1+4AfV!)Lg6_6BnlQB`zWC10#90w~kYwZ&`)K9Q4BheviSYIsCq3yY z;d>N%f5nGxNpf>`P@?O#godz$2n>7+msn8ZEz}KWiqmO9BkGf0&JphstyeCSd5VLH z+dqZ@Yn?hi&=Lc=lQ^$X9F0lyk~mLO#ZMkOQnxjeb}E4(!L&=B|D$G!16jC5+$}Ts z5=Lte)WgJO9|>S2cSmMuSecs&T+)nvlpF++U~#Nz%-9te1O-i8>WeZYnjBOX`7Z1}pRz|8gEuREu-fyg=NbgfQsW$yCQ-)) z3w5xCQ+|`w9ubK{dfk6Wp-gjbCDO~=@Zs|D3whR|+c#)L-#o-b>{DXm97{R&Pe?x~d)g`MnV%db_-&NxtdSXS`Q+tJxpSiebsIj8W4f3&X6F7llWQrhi5 z3LX~Q!Cw8;4LBj9K05E1_)jMBR>b@dyqaDc6EjCnt)wg_55tvaNMVGU;kk1nv)oq* zcUx$*8*-HpviczPY%>#X4oWzVfTt4z?3KD?g&D)lSDHA?R6xqKYvL&=lXXgogK;!R zPeEl1$ISF#zL3F@d*!(yR1_xyAF}*uAZ6!!zFxBI5rbM7BK~Hiv>@L?WzaCPG?%A- zv;`FF``_7lm5vt0Z^bA()bvBbx45|BS^RU}RXV$iQe<)l@+X8wN1t3+#`j@-nO85v zIZ-tChtQcssDTVaQ8I4?aed4L!Uv%Q%?@y8cZrQL@dfI$_@^-V;}TbBkjfX;q%J_% zcD~@pO}}xC9~ChqFO`bJ{GL%xUOt6RTk0-`1(}+kC9s28GQC&|>puUBD4xrywGbq* zFL{aP7m)5L8X|1xuyt28kcadM$Ru@WzmP27> z7-oRCU=Rm`<>69eabPbgA)DBS>MDF-c7y$p|Es7%11DA=Y$fzzl}FYNzai(V3F@Ho zh6Guuet6G2KK?qHoDHtO1>ENrv$k^{y&H=u6Cw5zenv~=)*LViT>c-_Za^>NY6g*? zxPjygn3gCPVyi*apW+C?Kw^Jp}i^IWKYGSQ#H;Uxt*WnburtrlSQ5GPB~dRIc8{!UCbO z%KcFSkPQs}0~{e$QjCF!Y32*zOHY+bYbs14L)r|uBi_g%%vxwSSAae;Z6=ShQ^C`> zKxtxCcI2^#s5|~@@Ml$5Hmoo5*Q}Dltl^ofiZtiv;fZB-xYG+f5kW%61$H?$1vrMH zA;hOo5lP$1VT$7sF;!{*WXEAt+dldW_a?jOIc$ZhPSEUPS+c1)C|epn9dxHnE< zApaNP)W9#%R;2;+q^6X8#J>W&YqGLBKgRA+6txD0C_q|ZaZ@P?ct*?_b!n1mLHT&=7C(reX7|0 zTe4O9u$K&zo$oU)|Jz{$6mDfa8y$6csQ@@p034*eU6~zxrYCAK8y~*k55x=-sNX6{ zS0#My2{oodP%}_z<>oi3*(di}DnM4-#1LiTu|E58+*j*JE%ZQ~rj+K~f`6P3h9R-x z4DFgyk#a$LQ?2z7VHkE{DN+BtFU32bb{%=A(}8PIS83Y?yYJIRr(e^>aw!>MD@|HB zN}mpu({Uu3AnH56CX&o51&fxlBieGXMYYp6mRX17Mus-D{=|wVI%C_TvihP6b*B@~ zN<704evZ%0+mnAt5|_~i!W`jB{_$ejBon2`Q|T^-8Rk^+w_IAW8=hv?diP?aaW1q2 zYK}M_d>2s%OFE^AzPlMYI*^n>SaW-*0u+p(HFMi_lA^=mEl_VpF-ZsNhsnig`u!YgVtBD zkIOI(I1Mz!%WZ**CSoX?47phQ9wiuVe#_y%w5?Y^0`jE>hrl^%nQ zoybg$2Rrx(h9K)6@y>Tl!eSxB1GDyF^S^w;4(CccMB#7oYn>I_{EJsHZp8o@oh@bXB5-P zw`BDUu+;PhG{q~6X+rX;izt>65f8hY8Lx2DqA;ae}Mo&u(vsuYQjJ` z1Hzpo5|{pJbPDHH_jhYsVFwqi%YrYGA6Y$_kxWSfZ?JZ6zT{`anQjecQh8k#&{~a) zOCCXWDg5TTt@+I6%Ir_9DsqMS#v82^Tb!2!EL0kI9V*yOZ{8)fA~j>$%FDEn;JNz+ zy;M7WZl&xrx*-$-7GGTlq6cZN`+<@~${Vh=z$kXs_4wV6$gn2^(5Jkl|HjRfBK)e_ z&l96%jr~C&qap?$i{TO(aK8@10ZEwBdw`=)T&yAyX>ggSlR`DO$0{i<5jrALdZ8yl zV^UqNI+dSmrnb;|ME#*VdPBv_%kR(<3%7kgmjQUcq7j7#e~20u_w)dEkPBpXvf9*< z`ZgZ=fHe_hb#?Qii6Dlc5Jd=x%oT^IsuI(WFre0>#;v(8cpvmJoBb+sKyI{U+81|; zE;iG6^$5Xfqhe&V+}TELLM_N0lk@AVb=y!mc=WDZhia}7fQ+;C8f#KfP$O#w%28FD zHsfswg$ZE@Q3Kc5*|X7EmG|XmH3G@&%5cWS(X6M}^7o10A@*l4!}RG!UO zFv?DTBz7@XR2uh#AJD_SYkplx%Rt>#2t-Xw$6UubH~cD}0p{;z)a4|{>2d!Z7P#6H zzx{_{M-h7+#>AQ3Txn+IsR~WnCMILrye3O{s`-ZJBE3Aw=nUN{Nb81W#fw;Uq74iK znEIyi%uScG-TP>EHsWF3q2O4f@i~@oQ15rr&LBq%j}$Eh>23;zvYn5*F3w>z+gGmO zT1OXDB8Z7Xy9Oy(^)=?y_bBLy7sp<)x@KK< zdR6=Lee>P?@;T->+Adc6c`g!W#$8RUS?cGiOYO)`6m73VtL0dAY*G=5(tcVid|TP> zhTafEpBm-;_^+8AHi4?LdO%zyyGf|nd_`1?VdcA*b6MAN;21cjCB-sssQT0QZ`p-o zqwV@D?_ZZ^WD}KNjul~^&An1h6#j1LdXCfI^!foA6(rW#hC2hc(L}5b-ak2ixtu_3 za=8E4m?z32Y|5AuMn$wX-0kKBc^VUf&yIcVp!pb(7K=*l)R->siT00u>lr#X9M+^o zgu?0k7JpK)4V3hsY2Y-9eJS{H7{A z1|SmTA#73{Z=5ci-<*{)sSBR6_w3v#NYI7%TTS-XMi!FZc*l1adu?8;vitpSem?|Q z`JNUNpK+v(IIpxPdP}TFx#-_S8`=7hlI_@1%>aSd^pUMaHuU?2+Kuv_K`|N&_f8D@ zu9xNU1hRM#b^QftrlG{k5@C0=mbWAahOAY%hlyR3bcc)mr(l+$vSfUGV3KiML|m>Z zXiMmB+P-M#F~0LXyEGDd_jrm&JDOUXlhXLIR?lp=5Ns<}edu{MI53Yv)9CPL7y>J3l=PDHenf{5fZdU1S z%oPWQqZ3fLgdfN*9$yWsC>mEU^E=E{au0x+A_hMU2>(B--ZChTwhOyWf;$AA;1Jw` zySoH;4Giw??k>UIEx5bG0E2sQcXuaep7;As)j2<>tERiFYo_jf@7`-&xF%!Ifn)}X zYfPy$Efw@(2X9{_Kf^Ez``-P}OH+ee&9Y;g+-W_y2}R 
zpM74^@-Q`*913>0>!w<`PY0(^>pIl`%wUy$Z2LZl@vfO(>`MQ0PC@=yMuP^irx&HD zaEd4z4F5Z*2M0KkpEa;KZwy;gNXU4%`KM9@xnLSxARV3qgNZ(A`^==qf7XIKVS|b} zr)yeRFX+T~H%iH~xX%B?@uHZmQfryE5EpL+`rr6yu7oGzN%{Y=nqd_rPaE}3`os{f zlL+Q~JrgwcabAw_8U|xd#z(8?-289TlkZPr zxlmg9pUY+=q_>~C9w)j@t~Iw7a(C1uM@L6jJdjq_BhMb}|JYJ6y7SAv^Z$#5BJ87y z{ig)~uLR$a+=}^M4glmt0xD0k<|D*;m`q zWA*6Z+)+CBneGsK$?o$8ZSGQq!PPo4(teN9v|V2(q5B4XtLxsOcy6Z(A9mvg^2yTJ zl+OOE=RM)S+cm!mHh%Xf3xC#~jI)r1MnXA5?SdACWsN2c>x!Fm*K@uJqm3t#kj%JiWP zcJ<~Ll<_mt)4(RkqJ1ym;WxXEC=RI^Hh0vR_dCYlrBs@!bB)gu#ZH5>qG$LO_bn%51 zat4T-+Sc%jwXZ$h;$GsH3v45$@NIC^Xq>k4YAbD13R`VDpg}9z0}1vm7LDyp#f{@^ z(IkN1jjVRmQ@srkMj4s;Ybba|sgVok5plX{Y;5XKX#aNZIWMg!R{pp%tqtZ+o41e) z^nRzQl=<#pHsoJ*mi@QrbvZuf+!xIeht36MO>CvZv@d&uf? z{aS5XF2k#QI2tj70$dp4&N&4(YR%ai=b)w)XR<(9d`P;t{uj~`3yXt`^zt^!-ZBES zPwfULn0?;v-;%UuF;{D5sF*4gKn(&kLam<1iHvxer1!jFHI8vvsK?w%(W|&;i*KCX?5z*P`P|%!ACAM!Y>teK zgMa}HT8V!RKHS&afd|*dzXpEOn2ep%?zTrt=E)WEQQaAYH#8er7-6GRj$guhT$rs% ziyP<-m0lz=b=9?UHXFE?8A{5Balk4JanT5wd?koSy_-JS@E%VA^`EBae%pP2)^rc4 z`@ymKraS(YWkz3l?B{k^3W;@q3nqp27$Azi+!z$FcUu=|PehmxxUnND5;8?*M%2N7 z&=AyV*=*%)#@3UAhV}8GY$O&`Z>kX>)^<~{WG7?)+`ft8PVrOZ=QpiW$R3TMP8~U> z8WSSNF?+o6%+iRA7%-qqtcoe3PfkaxM<@X=x1F^4733p;J*?Or+=Ew>aS6XM`0B1E zAAC-(${$&*lC#$h@9kBZo`JrUcwT$`wJy*hShr8TM2jeuF!RD*_l{!R>;R)=O?S5d zJ9D4_mSv%KA5~slj9omV<}Z0$LZZbdeT%tzUJ}fmv4PVN+NCg~k-QfVCa`4Vny8`3 zs(1h_@)TkgY^xvV=B;q?ZX$-i}Qn4kx?VC$;w2UGwe zA4X`_YAKp~K1M^UhG#5xGibGPd&BHW9NqZCI+6Fv6&81m8)vA_M_OV$#h>-Fg5U&o zTNEy&tpPsD!fBs7s|<)&H(zEe-!}yWt?*M0oeBTFRNYl75M18Qj0<~%@v;RmRvvRu ztf32!-HWO+8#dsKAhY9HF(7W zeuk^y!`XvJ@=G6n&f~*bdn>A5sp>vI2GV<%+2*!ypVZX`)Ln53sJQLsdHCW0x7%-7 zryjSq*F()GLc!#Z1mQM1Of%yve?8N$F)<^nq3}iNSbf)jG3KE_`JGmwATdHWyX)^`L0D1E-P&8U-d=B+AZ@_8SDF-LEC_f7q#^MjzCgNy`;YzHg)IvxteGhuV8Ah zLJV#aywtJ9@eOM9UC7KlM?QV~%#P>?y)nz>m*Mjc{N{SX-7rV zW?ys~XzPaf#r(v6T5$|EXTksaN9lfsYHcr-AY@6${L*qRWes<(84H}o3-!|)J==64 zRiP|rfExbUv2A8$JvVp9glK&wV~$-g>k{>a>XBg;zZu4aTl_n}@V_eEu;_#d32_R{ z{E2D1F1@P=%-Vo!X4ht;;U9lB;!UvA*FVQGn74es0{crJTX?kiBrI1GqI3i>&xY|a z_4W`gVF9K2EIU<|X013m&N%j*c30*lN|hewyF`*U!J2Ir8UxKW#Zz1PNIZoj%lER< z@XGaicoDo06t1rG)~rBNI$#k>oKdu*1MWG-hY7=caU8WJgBXxA(T#U92QfN786zu7*IMQy7gMd zt^K7uWb*045mkSSMn|)|dg7(xr>30?*=8liWi8>pe6_62 zKk0z0;+e!=%|MY=l5O-f(})@SyZdd*AO~rb66k@= zn^9u2<%fi8pdaC_{h?#SC7+4|9q~7+5M<@b51y0F+A70Wnhrvsc63O7N!@hRK4I`d z;4LdL(jf>BX?#jN_~yi5^p~3eP6{iIQ4GvU;{zq?g2tSiqZ60-oefGv3v!`9-MepM)%d_vC9tV{*CxEwVClG!ZEWGNbvhGU%bM1CKZtb(7eHI__3cy}|N41Oj-2WzY z17?psq(4cdN2)(%)e-s8;Qbyz=De~-wCqSwsDN2q4O;%YjQ^MB<-5ePU@JaU6I-)^ zw|mBjIY^+9u0+uEOhbUfssj&-rhe|`oa4q$Q+gipLMzD!8_smKLc4)9bIg9+o20R*` zXH1O38vbg5t8vcb0DPx@QoT5liZ0qT)g1LXV9z-n62RyE$NWA_0Bf~|{}V>^%|y$* z4ILY{R-~yDF16j4LR?Ga&Cr0Emc*yjMsi-PcQad=A0qkLgwTX~E67+)cxYTE`c=zn zEt}uJ=}P8S_Fef9kk@rHHNPH(1qiCe_hJw&!+QK zBUXnf(a$kl7~?eevN~2q#GN&^J<)PJX>n-YnhDiff~X*U@Cb&16V&8<*BygIM;P_} zXYi##bZ7kPY%z(jV3uUc?@j9ZL??V|GLerM&S{DkD!#BIGsOMD} z9|Di$Y|}E%$80Db>#tpf-MIe}r(2WaLXzrgA5jk^1aOF2bDzrv@}sKLbN%s=e3ms8 zR|S3T*7?C1CAsTwMLqK+NIYSaT*vhXrLohU@l#WX&cxo-%?4)l(9t1K(IGZ?cfK#% z!0{<)eR+NOa{|XSWvtuKgZ~O1v|v^hEoc{sNPl79z+D*FD=rCfr$X+RG8VcQvI4}D zEc{`TAHtk0hHLfYk7KtqWM9vzB6Y=ryP~Lt#(ZtCbS3n|`{o-M`Tw2ldg4noB#&wU z11rVK8w>7m^A$O_;l~U38uvPYdX{<&8Rl0qIhvVv8|DBvjL=%RHSP9Op*mM9iNltvt`1WU$FNK9X);oTi z;%Yqb@~b0PTp!Y(kzZ<-iNJ(x(el_9bgf6aZy{751C><80A`rW0*@SXcWBY>Xx|y- zOLQAm!MF+qMsm!g))zA&mfVToFZ+DXcR=!Y*{@cmb*V}>2tYGJpl}?-{U~6G_7->W zl#1bE8kbkJw)#Ky_F-myE%9hPb&Bb#P7a2A4NVe<8?B;Up;zmVyhGV7%1zxwDX(G36(>_bR^|mju3(i8lZ$xiT8hl%_2ox7~;?>_kC`L&j z|8o>|4;BwT$M4eV`=kS%huh5!rR_7b`=IdF|EE<_uQWuO{I7a$U^4Ewsf;C| 
z7;G0_g*e_EM)bQn7TA{z<~-QNBl$mB-l{%EUl)PG6x3@a~nKMnxki-nRWU| zNDj1gb+NIHQF_Sb7&Sr_OXW(u_*?c>S2%(%xzgCqecd_PO9Hcwus#)RWv!Nz(5pzJ z2jTve8N{ZK{1i$#8fFU3Af8yrC5nkRZAjUT48oe6eEQKT1W8qO(3bfUG*c#P2=~x* zseU#hWIQ!3FYcqAlM^nw+Sn9eh#r^Fv=rF(z*Cqp=Al0_HbHhsLqmK7mXg_?t=8SA)ms!z>W=e)24gN~Pno$wthz#f4|Sa;|r>YnLNOys42r%cUshexhFOiEp(83{7%oXg|y{E~JhY!4)$QAsytaK#cwrq_Qn+ysJ z`fA82h&Nn#<%@IpE3JF+0~Q-GpNgDiRCA>Prf?Oz)ofd}_U5%l-OA--vttD*ed=t4lE2W1XUTOSwo+$P?zoi9|^!jHei;Ks{w z@gF3s!RtfO*zal~5=FkWIrqo|Bh>YA5w9>DQ<9i|K^tfb@~%bjHDp!uPGAnPv&4^p1V`9zHit5VF?9`1OW*Z0W2}&Z-@+Y!hz-&TP>6D> zJK8($l{w^@`=EdOhBR;&f$WFm`3t|(f1(V?cH%@Jxr93*1Z2opc{tyK# zJB9m{S`2Or91SfaXG5v>-3b(_nO5pm4Rdnp(NYSH@69p#;uB9dkEtZ#_^DbXitz7 zB2ECq=pO#PZ(6w`4<9devst0pB38=aM{=vdE-?{ECRC+F!yPB$GLbkrD)cBf^gP*C zyiZ;>E&tmmZe9_5T4K*)iT<8YW<1;XbbdT}GBu$Q`^|?`IIR*fSFt6;Wb&KD2!gJ5b_q4dqtRM@^n#h7hj2iS)f^w z$ZTq7+vGw|8V)o+T&S;899IatRAUU5nepO4ZFEe+Sn>@q%%Tf@ClJ(DdnTR{HyBQL z-x^>YpR#x@)`92R@(}RZgJ|{;wi6Z9uIVs2P^tG^{>nlk}bHP|4UinZn!y=IzMc8XJgQa?`m?0ITRrz?#pWLpwC6? z@(RQb<^&u7#2wR6x``Bt4K}$-T9YPr3Gdk>Fg<<0@3D6>5^UjjnK@2l^Oo%wlRaab z>~#_|jzMe)Sw8vLs>&+qb=uSYwa$fz&oQ~O!JbsAOYmLYCTczQ%&;Jk3Kfzrv!wjD zuR)9#KjH&-2xseAfd&6`OSxmLl>%*Ij)pB}P$g@4@3hU2uuTsHywuif`B*^FSxe*! zQIm9t6&tZL8Q>5#TL9w$B|WelE^k7sG}T)8M~*urmFk0?gyD0X^538GmQ@BiG@J zcAGxtzby=|aiT`J*c=oeMNRl(B8k&_^<)wp5kE0t1b8L(Vk$Wxd2-!N;uMtu&2ZwVhzF8(=)EoCT%X+T-{&e_mxA2qNq{0Ca;7iRVIm{;vD9#z5$(ZBMowxp3*#tawtewxl319X{^6u3H9Fn6MEil z2pb01-G{*W1OCrvVeAOA!cu&t0u7 z09LC}Tsp27+l%pdnp7C*gPe?NY#o`P``e`rX-rU zYs!courZo0kziLEleH&*0V6w7BHOpqo2Itd>evp{4<<5()f2w}ZfOs9u8DdzX({sl z8@jAFS%KDR@j3qt{5*{Nv6?nUK0alJR_uDk#s@y`Wo}*Mz24f+4l!3-xf%HZ<$mXE zj>{Cx;Y!xXArh~zJF73@qmbhX$d2Ng%+rO7l}L$I@lu}RgYw#!GB1?tyzoUU>q87+bDkw3P^^(R`ZH#>+&>5rsxaU9<*rz4NAn5aQp743T zp)k#HXLxm^$`hYZNMge^PK*b_{a{|WL3cq>P*0a+HMR>7niu~Qb5bX+4n}A zCYFc1iZW}8#^$O4QlT8t@pPnN@(T3)sRfO|siC{_SkgCLq9U^0(U|~!r1wG}+!Mhr zz3H*mV`VJ{-T^OsSeqMfV>knNn=dafkl4`sX<%jBp%w1oXq;Rbjk-$WXQ#0Dvj=#q}s&Vsn7X?E$w56Byzx#X&jhoy}fvjm1ThdokN z3Vxa}VO;mvhjy39h=Uq}C8qQA2BJfDxrDTT&UWmqgt5dxkx2^Uqa<>GD4}=*6=FJ= zL3h`5o*I0Hz8PvWK+ja4DHN0?PVGp)lhF+R;it%Udq&RT$Su4Gkv^9uD>5ES zsE~e^IJ~}Q(M#AJOiny#@PcoOGP0*D z=0*GB{FgZ2X4Jon)h1MNam`XcWW{Njml2{mHWTi7kn+*W;F_1a#o`mTPCg}z1o(m5 ztiP2Xu9H4de6Ese8HjdWru~wmR0QVVuHYsB#A%0*kZmFVxmpg(?+__UId^Bg7>8Q> zx}I?qs8~T>hsy0tJTGFg48`=Ki=1gO!Mh22p}NM5N-4wu;YClA{`Wf5fFqUqBk zlLE6xO=$DDC8$%ZW;<+!-iFm+LE(CDnmt?{x z-if1ZeQ2#CFLGAIeq+9+DIxq*#!a1}eXj_!pwyjG-ay9fIO8A$ux*@{pb~U6J!ZPD z*}+~3%^tr|B~Vn`7X>A!hjcGyCT)@#210lMPP^N>QjUqQL{__7Fo+*r5u{#~%(6-+ z2o|WyC`L-Rd7}E9Zl%nFU=8C+J7T$0(oWiBx)h=N`?7LP9<*|D{_oU(jxc1@-q(9c zY0PhO%}?s`x?`pTR3682bL((>bee`EdYdX4p)Fh7l~nGt?COF=un0UC;X z*moJ1*pFk*hfv9M;P4t2qHJ6nI&_;u&J5m2Ntt7~Sgpl+21ps#T{*5a z6B_c*{TiWPYVx)k{0}%fiOdgf?{65)4{E%t3hf!@o4;lY5y9;rzU(y#aU~;CFVTz% zJ?}dr1%Pq(BX3!9$grJV^g|1*KRfL|PyMX?N^Jm|1U$ZJmwz&YZ(E`d3v3p8Bp_0D z%Ap&Mh`o`RW`)vyORf`s6_C(H^-9aaQMaB_6XH)3BF6CZyT_n_VjMnPWp`B}Fc4owgS(uM$POu-%&6o=&Dm3?c4NrJQE zHc&*qSwh;snP>z%Bc@$$;OD#Xr^V?5z|I@g@0?>bCleaJ-ESdaMDX3%58~J*B=L2} z?B|;eMtAbo!^8or1(ocU{>RLG8tNslu6=`sEJVd=fP>!D@S3+GiEr0F?x#OiA+mAs zf3`Zu?+u8E8;SLY!+UUrLfh>r5k)T{VVNj_kSuy($~ix7*P%%q#)<>$gaf{lx133n zd!(8szSahpOJrw-JjcKTX_B=eC6lrJsCpBkgPirpsN z_kwp+YJ-%RtBGwgY?EYXy?MG=Fx??i4K?75?Uwe z=2Y}|QdZWL+sJPy{as5n`w5f(yzj@_c2unxr3$@gUP6&GDz-%QJ+j0>gw&zGy!Q|8 zmdzJ^tDJf#X-hFQX*IN*+z~`I(AWd+5CK3Q(rtCPm%zCnshGwi?i$nNvJE$r+f+>4 z3cmPRw`is{ntMw-ozb&&JKD#D4=y5pD2`L=Sz;!*dO9#= z?}U>g;12XmCNr%S&}0|GW$mb37Gtx8hQ;lme zF87JBE%9Sxx}r_u84WS@+wqQe-px)}b4lZr9vya`*KadnT>als#s{)vsH!EOM#V{% zMOa${T!H>OhpfvmS1Qk%@Hz7*289KCnC!k%8GPL3rN 
z@NH0+SVar0`oH(@_(AO&p*!05NCNDf`+V4~cu6ZkIsE5KTIOBbuCKpb3VnPYcsYK{ z+bj6dsx-+5Cl;mJAfn0DG;(9jm&58Uj`?@}mSZVmQIV%>eWh(ZXcwlfAL`2DSO(9b zzQ04Z=^BJxuACBYUxqfAAfrT(DHgGwg_*~Z;_ktZCnr{pw(#;&ToxwQe=krK-jt2u zz*f>)DcBdR1~hU48{-o{{nhx94rm-K=f9HaQx=Dm`^?aWOpmZ!n=ve{iND2gw{NCJ z0N7apBn5B2SKjeF)5&%4v(vP>2XDwGq2(RijVq~nYp?_qm1fgOu?9uDoR7`Ik{#U%=^Smr)0G0_7@a#*X{0uJdc@z+iuKzFvsi?% z&`L&ENBED_E`qzU+Kd&RPb7O+QL{Tn!wg2X8Vln@+}Q6uTQ7=h%saCD!BatI43Bv< z?bXCh(q{ON0`Ut<+@lNkxHz@(CLk0Ko%~k^^Iz`$nQb*TV=qQHa2g^VUfUSVns_Gl z2@YfdfOQ4 zLMEqsI3`H#mVzz1`b^%wfh<%bcoP@p1}bRpbtb@kiUSie%-aA4A)is6B`=?l??D^V z6It}CjLMs4jb~Jfo8kZonyB2?tttA|sSJ=whLc1@70+PQRt=6~mbq^VlfGfDefg7A zppuG3>OiMfq`P(WJwFdB3kIOowL!{pVA=tugX-*ajKw1ilf@vFSkVib67P+Sbb2e_ zV8XHM7cM%wvKWX&jy*Zn1_e{ERL}`;PFimRX0W>b_9;2lE|12Y=#)lWF`Om_v}qwXW7@~mxvj@ktG z3}?l?W1NvC^f3Wvi`M%adx|+Ktsp=-E=UHo%K3SN$%kL#K%YUc<2VZOo!w$I#J*S(9!JAuz**c{)Y% z)CMpt(ElPXcjkJdgE#22O{){Lw2J?bu(_RKfYOV2VGDj_Dbry-b8-+gX&)HcdsLM7 z#z}$<;$6uL-|6W2#Od}rdV35d#rS3(KOw51X)FU!qs?D%)5&6O58o_Qp%MAH{8RWx z8;9J#508?s{Ki0Xb6_A~3fd+3%lyrj+dKI07Q&61x@@Ev3d=$3J2lyrZ`_S!+7)%a zJ=HUHUAPpt5cklgJiSN64T`x`8xl3`{g(US5wE94>mqHh>rvMy;=PDiVL@z#$Vv-I z&z$;#pUZa+Po7HOk{%4hcQ5UIqLEgLv&+j<5~=#=_Nh^*(G9C;&w@dmee6Pd1(kb> zlR?OYF1y70=M*Q&S$U>0m&ylhE_$nFw|s3QO8F5f;=l{4Nc2=k;&0aq)HQbbMp+b_ znivciE;mr%h>hiF!mvx^N58TM`%A0{cz(j)$iE24Q_13<07*92ngz&~`1KRf=An|I zoc&UilD&!egjK!J74tY(C_%CYJgZ{gBkIuhoXZWGv|>6KWQJ!oP_xu3b2)Bkvzm-Fhd)$ zqWtM--`FCW<|KtrFBdtXw5~pZWd}~bKUo#J1wSa7G{$E;1z7mlygkTWRZYMbdfIAw zWZpz7MoH-roU|fkAf-D9Kvl0xYcCebagsr1EfMvsmhW(~TYt8oQ z=auvgx9&ga3Ivef8h>t+*2FX?EE+jCK{uv3u0Qp%5-hEhm?tuzRd7W6ngbOTSRB+< zIWQKOhFfczu<_0wNSbGi37+RG1=mdRX)7>}$ufT#jej2rMxCLvrwbpvyiI`2blUM^ znKJ9-%z4G*sf$Z0`^!YuuQ{FIDxrIt{uIL)Z7o2?1Y07o&(2o=$fqtg9}vKbD}dcc z2qR0CzcF^k4xy<>N8zVZODcI3Djvrs#jpr|RSfoAgmm^ENvIpo_=YV|cNAn)m2%ya zwpE;W$s}!qc*URF&9SOT_b1=nzn|0Z4%nl-QbjC*DZw3g;Do;kx zNuusNChv9mnMxO+7|L;GGImuOhp8PucY21(=M?A@-UlE70K=)qy*AumB8jq@g+Vlv z$$Q08>+|x)l_TSTdh9&z(1t>IdA7!t{KGx!SJh1y+{vu&Cs$C?$Ob@^kQqI}el(3N z2t7<#Vv$>Ng!K*HL`s;Bi}~x!VJj%@m|1rJ6WgA-`LW(7K+8#hXkktCNMFL-E;4Uu z#SIbx_CBz-Uq_15F-AC=C@(r1L!#-$VWk;b|AD+4&EH@ubpOb_7{*t5RQqm^ZG>jN zZI6Yxw?2_d=|VRboqG%r(H+k)ddeJPf$$~}+qel()cSUGqU4WXx^s$4R`DSz*0MTk zmL>UUDL1e?+$)@G9zyy;eb}f#M`Ak^Vf{{)sZRfO6k&jnRF;Z(MfsT5s4Q3QQGWL4e5t~I8Mnap(z)P_W+{~AjPk0|x>ZGiI2)bB9ZShp zFVZqr({}0IDnA3~ZtfsQaG1a(2|$){`I8jvZ{g}vYcjyr2KC2$I0{KT-SLmm?$wJ* z-#!7w<&k$;j*OD#j+gicN}_hXL|^V~R*jioT3lCdOd}3in4sJ{Yqt7iGQ)qNM1I+)9q~SnNBh;&(M5 zf0`Q==!orEO3;gui!1KE%&kVB5^Y!^jXKflMu&1Dx+)ofFD; z@D88gJAX7Ms_4$yT_g;O&wlqX>!mw1&zk&d3ox1S&AJmi;kX{+@s0I+tD7eO zdLKdnb?)XKPws;=XWJqsh&ZWo<&)-UJ4blF1S}`Dw-}$>jMe`nG9$Fzgl?kV#FqsQ1F`&rcMk3%2hc1SVJF1&D&^9a&?>Y3^r;>-g}wF>?q zCf)!!9?0E5DmHO4p7Xex$*9-tEK)CG>bD;t_s8d{R(=D&mDnYN=D8Tdj`)Y=lG>o~ zEc5l0zbE$mEJ^G;xVI>rQh$d|WFsC8ktHi|<#W{-%nqF)0(ShfT1>%r9t z4ieHKoR0IXhH8){PEc5wAC85=MOjxCbe%{&cIh-fTfp%rq9iH#D7;zdP$kI&tDiIP zNdQ;dL{iLoSi!s9)H(~4OLt=drqHPU-}+9 z6_^56fpGsrmAO9sLcuJy+7zF@Zu>@m2MkcXELx+43QeUW>Gui3f$46_3)J8ry}fY; z@|~(=!qH|Zw3o6vO)qM9O5sjk^R(TrI0T?jj%u8e+KL2)T51$vE zWzvlxrjRqPb~anI0`VEJcufPB z1Z8f`HW+hR!+>81q6v9~zN`6Ca~m4ue6+Z-S+u-r_ZYVn63}i5iNMH0xCM8(Ew!c! 
zH|4R0pA9MqFgvc6Ru&pl&*9T2{{&cFuiX^;<=#}4vN-mo01$)Hx&+-Z!uCjNU6XFj zW%j9-QJIMw1Cmv`%9P~2Wnzl#3?ZZT>k@TRuHuEjP55|`)B%=S2{%jpO`?&DY{J^_ z9o6m?k*`%A`B@4uxTAlnJbf!h{)CeudWzEu@yL%=Y0F4bddhA2hR6!)lL62T+bEQ5 zJPRtv=IWasFiEDMgOZU|)M52d{W(PM6K;Mg#}e}&LIQwtiZ6&DQ@C-`uC!DDS>MC} zip^cm4sd+w{`NfqXQ6S^de^wy%=hTD2l`gClOd&8Y?w7#BxjXTU(U{cH+o;DKo!|b zZA*>T(qVK->IP=#DR3eG{G7S658#i9Zkd9~IF^N_TFGD(a(d2r`(a-y^Vf4~FB^{e z$k?0XsVDK1jmUsykrg7X#$<9>?XeFDv9kxUACyp6bhD~#6)l*5dV+7=1i7>=dcZeR z05yiv3WYsn8U>)x+n(bn2p8a(b+_hC_e3$~`owfGweT4?M`HIK=E?Ea zKjOh>@vm3GXU21E%ww39jd){U zio{suFh82*kEUu(0>+kZ^JE4TT;Me8zYz)&OSVQ8L z<&fH`OZz)a$LWcy=cGe# zpHw42L2!)(ShVBmT4S!TRFFAW!j^!?mt7K{6}IzS$>9*;3n@=<+Ux{k@?3Jzi)7wiInp8$=5-n>-L-^{S=+c~G0WI8RJLN&i# z0{;Ez-0PUX&&XVzhtq#V{QV*}Hp6njBNO^NaW8%j`zLNyE^NpH1p!RzO7Tx4$aaqk zh7eMpdm(2y!WzFB|G2kQW3a6Kr-flszLT|48GKBWtHzFQhSa!}g=6x^>h6`F+%4R( z1>d}EcMA*jvr_VMdAfGuti(5WZlNWwDYD+2GMCv>wg*uA`k9f734V;~+DvN}Dct+G z9O6Zm5Q$cWki@@*vuH0&SG(Hi>x{Nl->>O=o6C=AB~>oip73(BF5S70b$QYNj?Upm zh2nD2rGlxL)AVbZ5hr+XC40s>p-&)s`1YSOd9rf#X|cp1quQKtFOA#pQALM&QE1N;C2ZJOQAN-`hF>}n`MRpG9!2`p^!DM!q{ zb*qWh2X{^bX0^Cr$0fL2YO)24;C$}Nc`CvOWcy;?ez=NeWD9MLdF1AvAOBJ zB-2K3g(!Ju^14@%kEoA+-}LGdAF+&iVvqF5=ua=uc`?$u#_Z%0!I_r|z%dM+4(>as zOa+7(_)U_kbng|cTBhotPL5%DV+jEm{_%Vy*Eo=nm+SOzgQQdLQbZ`VpAy#^&4eez zHuTZSMlnBJPB<#tET@y_me#0rd3^)gI8sifCc(}uCDAWpB+vHtKqrQof4+(Xz+D}& zWiD!4J(AS6t&F-F&z0}g%Pg*k)9nxOd+tXroct#vhrw2QlZCwwBv^0UZ3xoa(XFp6 zYnCvL!6d2t-w!7jRP_S!9NC9kesElg)DuH;wxDW4%6Ld?2+>Xp8FCw_PW-G7cgDgg zOt_X3$e+;VDGY39etRA48at-N;=2NiPXrzuQ~ww3X-8))ku`c(0Svl`va#*QB^fH0 zrO3urG$$11SZZJys`DpPDA$$D>ttcu9hth1GO-=*-~b~?wh<{sdQr&oRdnW!H{|s$ zAy4efEXnr)E7!cPR)RJ7YUqV#;!~W|G+iRwYv6)Q>7znUI8y13I9p1&m~N@4Q!Q*y zMMwxt+9B^G*dpIdKWb7pqkE7|*7jLoi1v>)L0a7!j7-$247-e?Z8?);OM#AEfw3fl zsR535V)O^PbeJzcTeMOin#WSFS?gQIGm?Nb~ObbU>`kse9tXr1);lBZy3 z{n*F3{kc_M)?W}qeVH*{;W6H#SEYnkuQ|pjSf$Eyw;%rN&br}L+YN4#|G-^H!d9O= z$RhR06b|)9wPV#=r{zV3^y;BO!XDP`U0(zTfFhehQw45-%cUj7|42i3vwZuZburH= z%Aa?^MJD59$J%~attD@(361mG(;l41FLxi(Z-=0g4>o;geuK8$1XaGJsTDrIbPp9u z>i-32cXn-&U7KBp``K9c%Hv{`{Evc1$&=!H{hajYU5_}z>*}gG=#^&nFkS8+g2bC# z%Kit0%M1!@K`kG`yXIG#_-KeFdi^A!%Y^bDmpQNs)rT(e#*%nO*D!a1kLz@L3Rz9A zu5g4i*C``FuR+uA$j5)mj8fY^WH-Y9JGqcO40*LQ)-ItJ)PMG{7W_!PQ%@^-N*#$e zCg7mo|L96PK;T>jr$tAKSJ(eit~U@!B+^(94w-lStllEu;mI!qA9nf%bb_}2tu56& z@Bc3ZmCL+IA?_UV`lPu*dgnhz(+(rB8`Xz%?tj=OVf@UZB>K9Q84rZy-2N##J#+j& zj&sB23W9uwwLk?<%D9>9dev6P}=_Qmp8%`b9JhkDVtmQn_{a6U$iG^*W!gALI1-{A?xm?$n?xo zg6;BZZ3kU`w)R@>zFEb?da5WdcO*t!?ahaY{QtcpeuSAnP05lMoppvf9TT2QphX=t zYiqvR<z$oj?8gUOh-q;bYBm#>9Xwg7QMmh=&ole%-j4dyT_xTLg2cH5 znUPa7KV>+bJy9kI4hrcqJRVN*kvwuxCvH4Mou#Y20bwM=yYgeLBPy=9Pdf3o`rX_H zsY}xn4p>(i&SU%BOt`smwho>sxmmdG)!Pjt2L85A&mPav)%ocI-tLigR}!yc9vb_H z`!)HQskpl0se9m_Aps2CI9sP~#u`aCQoh?*nAnM-LvX6?k+VKgTTYyYzRIfA=)ai2v+E50R~d6$(0HgRYeaIP143OFf0K3^ zg0DAAEf1{0vmr&3c$F61yxEt`GL=vyAFOKd%>M9v`eas+C3{_LhT}v@AL{@=lFF$i zXT^~Pf5^8jJh)VFOH=XU?|;f;f5m>G@Nh>fp9Y&Id!t+}TLjr|JYj7vun1}9%^UrbHB=lZPzW-(zbDp^*xdfSyhc@Amv?fPCjLo9rM10G}kiLauYRZUSQJ=@KUTJ()SAc+BLZ1+6YwEQ~-iz(fG}u0% z&VJ!d%rxNf1_!? 
zi>8~An=v-@R(W`XQB_~f=p1;3b(^T8F{SV{uc^3cLD0&fRHhEBZWVSsf17vz+2#tQ zY5{1(z~7!aL4g@t+iLKOHYZYc9X_BFGt1^zkk;#B zi}u--1MHl&C5l7=Eb#L;?~|<8%j~9@C}jwm>9sSg#_Pb*NT+0E)xv((r&rhX!&AOf zx53uhUg1 zr$F>lAri-mI8pYyiq zFOMg9d(bbhmluG3698PvDu#osFl?^==K(NYlM4PCh>+o@NyzoXsw1TrEpjVtvtl_a$aHLx^DMus&gDno6hY13LZ(nxTdvxii+O>& z-pZ}J!mEFZ-_sL2IzUn|6( zM2xij9dk&prNZ5;gT$kUav3QUo=gn)8w~dY3KN}P54E6WuX_cEla&|UP37blC_BSz zqt;s!8YBy;gNhs^L@6P^(pQOZf6MrQohdF~+i(WvH@u4wb?NZ}Ujt=zW#FT@q#}OY z&A(sMB}8IU;Kskkh4GUWe5;Wv6!m>V^<18MP7F+RHyg)bLI)j7QQ2UoBwyr z#LGyk@VT3SMDNUZO3C&!>$T0AhJOfVL`_>pW}bsp?&vz&&AZdpvt6^MWd!MP5o^4T zM*fWrf1DE{HHWA4_FJ2g`!J4gG{e%8RAuCPbF7rOW+CawC^h{^{JHxzxy`sU zTy?di%CidZ#ai zw>2UD!OZn%E|7RreVM{nmfJPM{dj-cv&BRAhb+HMz>CBJu!EK{O%gEnVWh+T%M`dJ zlQTw#%j*YK7Qy0IvO@*EgZlat`YTUI`Zw?jMwQngF2VC)vhoF4drl_{&NZ`dk>u6a zr&%(Np(7$G7Y?3U3CNG4SnKDumfPhD9?UD2k#RcOLB36VoZLC>LGnefCA0ve!(X|5+1JvT^*_!;^L3GYueAX zyO!IiFK+oQk~RV3uOiQFMPjXJeC;DG+=CKxn|2?TX?h!hM6y|KXQV9Z@siO(5}cA( zU&4QLx<5kTY@Aki+jot~kI!fikK%>OI={Ql%b`*i)4t`0e~YA}#!WDqmbyrm`h(=r zn@^%ubo!33A1@WO-5wWo9PsHD_q7Hq5Pq!elClh?L69N55GD|rttl4XWdfT zxL9oLTpUsc#;PAK8x5VB8&-Hn<&DMjYs!3E1p@6Y*Y`+daiX z6zs85w1~izwp-~WS>u4BYOHWZk08bw!$wYp9fz!gD&Q%<+CP443$r>58P1qV2h+gB zug#)luP`c2Ec)dH;0YKPO69l8zkg-GI)?QRcB8&b3zF>~I&zLK@C-{61!atc9g`fO z4sNZB8Z;cu?@0^9mvIv@h6}B#baBR{eA_Y-FwUxhM`;$EKLd$kIf}=J(SlMHV)q(; z#Q4U6s=9{NJ`%gq;-ZnqYHuV=&Fixvl;*&U%ACau$o-1=H8}EcpK8x6ztC@SMC@ve zkFR(iX;2(6r9XOSRfII4#Z6R5e0#oMf%QjyI^*E2dmf6Gg1Ps5A#k*b)WGAgSbLdu zNdHHdodbr#2ez4+>JjPT3dT(X>emS>+iq~)B!LM_sYZ52w<$Wgows%3Dc>e6i(ZF# zz)2`^6C#z4p8xUHW#;aHe~WBKVkfx-e{dzI#7#{osk}7i5t4v6Y(bt&>>&>s-5yCj zVZ&&0af)lebj@<=*eujqfNHAs@;=aubCz{J`yFRaBY{VX(7j!-!=%42X&4X+T~H4! zb-_gSpnPZ*JVEprw+yiI7EysaSAVB`(89FSz*=OGqgeNd^{_rf0X{#p3_xn}QUu$j zv0ny?%_*>~>psvk{t}JGc=hD+Z3hOvHQSRv>csTxpC?`i9hc)1E+TD|1Vk7ZIZ(sJ zb{zFX(`=iMPgm*W5%4({QfDnd)((%4vb^K(g7Y2n_Uv!1Twf$-btNp@i z?sIbsU9CA4StHcG8$9ZIHl6snVVOE|?hcXt$lc4Bz%^dI4;1exuIzI66RZ9Eh5vc? z1I^*-ou!CBXLC?4alqe(I6PH%bc8Y*N5tAokFcpEYc}DMm+(DovVOQc90NrX)LMNX zXpQ08+_iNzYx46xx9m)+rm0ub6LKV_0kWne-lc@ts;0dHOk5gsnJr*l`AV z*G6DpAm7Hz|NDNdC)E9V6J|Ui3~OPnC8#mC*vE19OSYf2dtP~#QURE~k;@a#L{+=x%91UY%fQ^ro8z#7Egtei#8u{ZB{H-y3cgn{^(ycC!l?|44&h zz^jz4V!)WjrMjRt-$@9^r??y4FZ*NY#~gNNO$MNgRVy`3e%qaa*f#Mi=h?T4n!|HC z1+}{3z2}hhO%7k8-`o--6&?s2!XJ-sz56=^WZ4myx&MQjGn^ZSS zV!<8v5L{}UIT$l`TYruvY%MP@{b92dAirM}Y~Lg3G7rkH z<=8BDQXhv`R$ph&t{(Xl;74qO@nSxTpeW6JC3PiLuc;{Z*R4O?XRYu3drHWw54m9> ziUuuWAnSFTaB?O@#?OtO6=Aqf3(G%aO!i+wnCn4u{$?a=Yg(v!{yLwJSut6e_d#I) z&oIm{s&5Sf>WqKT9Y9SX!u-@Lr7?_)7=uUuBYk2eToSHMa)MAnqXFg-C|V*(BRiSb z@kHJCNtryD&J0nP*LSw30D@9*+&)1d$syKvXqE`hbfj*q^cnk-SCh@Pm2 z`GJP@7Lf+ImPIf(e`gH5-a>mIc$@Z{rK^R{Y>)j*uaTpb$9-3UlkHq>Dq-UWVZPu~ zT)f=2SF3SGHm6xW7FZ(M$5~QCZ0d5cUWnn^M{;p+JwHD+&CAQZv0vV&t}8Q}Niqus zkhs0rgx}1gpM&pz;~lqaHIS4Dk&94X3!W>Vj_Ww|UPN}>K^AMp8rGY@PO1T)6}Ge! 
From 7dc0ffe1fdcb27bbf0943b6ff84b2105b9405acf Mon Sep 17 00:00:00 2001
From: tezansahu
Date: Thu, 28 May 2020 00:56:13 +0530
Subject: [PATCH 0992/2289] modified link for pecan releases

---
 book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
index 645306491f4..9943cdd3c28 100644
--- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
+++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd
@@ -137,7 +137,7 @@ This will not go into much detail about how to use Docker -- for more deta
 knitr::include_graphics(rep("figures/env-file.PNG"))
 ```

-   Setting `PECAN_VERSION=develop` indicates that you want to run the bleeding-edge `develop` branch, meaning it may have bugs. To go ahead with the stable version you may set `PECAN_VERSION=latest` or `PECAN_VERSION=` (For example `1.7.0`). [Here](https://github.com/pecanproject/pecan/releases) are the details about all the PEcAn releases.
+   Setting `PECAN_VERSION=develop` indicates that you want to run the bleeding-edge `develop` branch, meaning it may have bugs. To go ahead with the stable version you may set `PECAN_VERSION=latest` or `PECAN_VERSION=` (For example `1.7.0`). You can look at the list of all the [releases](https://github.com/pecanproject/pecan/releases) of PEcAn to see what options are available.

 5. **Start the PEcAn stack**.
Assuming all of the steps above completed successfully, start the full stack by running the following shell command: From c473a3b486819fe70dd2147b92bf1cd1a7910f40 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 27 May 2020 16:05:08 -0500 Subject: [PATCH 0993/2289] use copy instead of rename --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index f28cf71e32b..7fa114251e8 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -12,7 +12,7 @@ If running on a linux system it is recommended to add your user to the docker gr To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. -By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. To do this we simply rename the `docker-compose.dev.yml` file to `docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for develoopment. +By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. To do this we copy the `docker-compose.dev.yml` file to `docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for develoopment. If you in the past had loaded some of the data, but would like to start from scratch you can simply remove the `volumes` folder (and all the subfolders) and start with the "First time setup" section again. 
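For quick reference, the override-file setup that the patch above describes comes down to two shell commands. This is a minimal sketch, run from the root of the pecan checkout; the file names are taken from the DEV-INTRO.md text itself, and `-d` is simply the usual flag for starting the stack detached:

```
# keep the development settings as your local override;
# docker-compose reads docker-compose.yml plus docker-compose.override.yml by default
cp docker-compose.dev.yml docker-compose.override.yml

# bring up all containers with the development overrides applied
docker-compose up -d
```

Copying rather than renaming leaves `docker-compose.dev.yml` in place in the repository, while `docker-compose.override.yml` remains a local copy you can edit freely.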
From ce642787971130de46192d787d4ae94549ec6492 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Thu, 28 May 2020 02:29:54 -0500
Subject: [PATCH 0994/2289] started creating api endpoints; ping ready

---
 base/api/DESCRIPTION            |  20 +++
 base/api/NAMESPACE              |   3 +
 base/api/R/entrypoint.R         |   7 +
 base/api/R/ping.R               |  13 ++
 base/api/inst/pecanapi-spec.yml | 303 ++++++++++++++++++++++++++++++++
 base/api/man/ping.Rd            |  17 ++
 6 files changed, 363 insertions(+)
 create mode 100644 base/api/DESCRIPTION
 create mode 100644 base/api/NAMESPACE
 create mode 100644 base/api/R/entrypoint.R
 create mode 100644 base/api/R/ping.R
 create mode 100644 base/api/inst/pecanapi-spec.yml
 create mode 100644 base/api/man/ping.Rd

diff --git a/base/api/DESCRIPTION b/base/api/DESCRIPTION
new file mode 100644
index 00000000000..06a15434bb5
--- /dev/null
+++ b/base/api/DESCRIPTION
@@ -0,0 +1,20 @@
+Package: pecanapi
+Title: R API for Dockerized remote PEcAn instances
+Version: 1.7.1
+Authors@R:
+    person(given = "Tezan",
+           family = "Sahu",
+           role = c("aut", "cre"),
+           email = "tezansahu@gmail.com",
+           comment = c(ORCID = "0000-0003-1031-9683"))
+Description: The package contains functions that deliver the functionality for PEcAn's API
+Imports:
+    plumber,
+    yaml,
+    XML,
+    DBI
+License: BSD_3_clause + file LICENSE
+Encoding: UTF-8
+LazyData: true
+Roxygen: list(markdown = TRUE)
+RoxygenNote: 7.1.0
diff --git a/base/api/NAMESPACE b/base/api/NAMESPACE
new file mode 100644
index 00000000000..368ef012b6b
--- /dev/null
+++ b/base/api/NAMESPACE
@@ -0,0 +1,3 @@
+# Generated by roxygen2: do not edit by hand
+
+export(ping)
diff --git a/base/api/R/entrypoint.R b/base/api/R/entrypoint.R
new file mode 100644
index 00000000000..84e5cfed11a
--- /dev/null
+++ b/base/api/R/entrypoint.R
@@ -0,0 +1,7 @@
+root <- plumber::plumber$new()
+root$handle("GET", "/ping", pecanapi::ping)
+
+root$run(port=8000, swagger = function(pr, spec, ...) {
+  spec <- yaml::read_yaml("inst/pecanapi-spec.yml")
+  spec
+})
diff --git a/base/api/R/ping.R b/base/api/R/ping.R
new file mode 100644
index 00000000000..f159ac01b96
--- /dev/null
+++ b/base/api/R/ping.R
@@ -0,0 +1,13 @@
+#------------------------------------------- Ping function -------------------------------------------#
+##' Function to be executed when /ping API endpoint is called
+##'
+##' If successful connection to API server is established, this function will return the "pong" message
+##' @return Mapping containing response as "pong"
+##' @author Tezan Sahu
+##' @export
+ping <- function(){
+  list(
+    request = "ping",
+    response = "pong"
+  )
+}
diff --git a/base/api/inst/pecanapi-spec.yml b/base/api/inst/pecanapi-spec.yml
new file mode 100644
index 00000000000..c3244068477
--- /dev/null
+++ b/base/api/inst/pecanapi-spec.yml
@@ -0,0 +1,303 @@
+openapi: 3.0.0
+servers:
+  - description: Localhost
+    url: http://127.0.0.1:8000
+info:
+  title: PEcAn Project API
+  description: >-
+    This is the API for interacting with server(s) of the __PEcAn Project__. The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. Here's the link to [PEcAn's Github Repository](https://github.com/PecanProject/pecan).
+    PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models.
+ version: "1.0.0" + contact: + email: "pecanproj@gmail.com" + license: + name: University of Illinois/NCSA Open Source License + url: https://opensource.org/licenses/NCSA +externalDocs: + description: Find out more about PEcAn Project + url: https://pecanproject.github.io/ + +tags: + - name: general + description: Related to the overall working on the API & its details + - name: workflows + description: Everything about PEcAn workflows + - name: runs + description: Everything about PEcAn runs + +##################################################################################################################### +##################################################### API Endpoints ################################################# +##################################################################################################################### + +paths: + /: + get: + summary: Root of the API tree + tags: + - general + - + responses: + '200': + description: OK + content: + application/json: + schema: + type: object + properties: + message: + type: string + example: This is the API for the PEcAn Project + server: + type: string + status: + type: string + + /ping: + get: + summary: Ping the server to check if it is live + tags: + - general + - + responses: + '200': + description: OK + content: + text/plain: + schema: + type: string + example: pong + + + /workflow/{id}: + get: + tags: + - workflows + - + summary: Get the details of a PEcAn Workflow + parameters: + - in: path + name: id + description: ID of the PEcAn Workflow + required: true + schema: + type: string + responses: + '200': + description: Details of the requested PEcAn Workflow + content: + application/json: + schema: + $ref: '#/components/schemas/Workflow' + '403': + description: Access forbidden + '404': + description: Workflow with specified ID was not found + + + /workflow: + post: + tags: + - workflows + - + summary: Submit a new PEcAn workflow + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/Workflow' + responses: + '201': + description: Submitted workflow successfully + + + /workflow/{id}/runs: + get: + tags: + - workflows + - runs + summary: Get the list of all runs for a specified PEcAn Workflow + parameters: + - in: path + name: id + description: ID of the PEcAn Workflow + required: true + schema: + type: string + responses: + '200': + description: List of all runs for the requested PEcAn Workflow + content: + application/json: + schema: + type: array + items: + type: object + properties: + id: + type: string + '403': + description: Access forbidden + '404': + description: Workflow with specified ID was not found + + + /run/{id}: + get: + tags: + - runs + - + summary: Get the details of a specified PEcAn run + parameters: + - in: path + name: id + description: ID of the PEcAn run + required: true + schema: + type: string + responses: + '200': + description: Details about the requested run within the requested PEcAn workflow + content: + application/json: + schema: + $ref: '#/components/schemas/Run' + '403': + description: Access forbidden + '404': + description: Workflow with specified ID was not found + + +##################################################################################################################### +##################################################### Model Schemas ################################################# +##################################################################################################################### + 
+components: + schemas: + Run: + properties: + run_id: + type: string + workflow_id: + type: string + runtype: + type: string + quantile: + type: number + site: + type: object + properties: + id: + type: string + name: + type: string + lat: + type: number + lon: + type: number + pfts: + type: array + items: + type: object + properties: + name: + type: string + constants: + type: array + items: + type: number + model: + type: object + properties: + id: + type: string + name: + type: string + inputs: + type: array + items: + type: object + properties: + type: + type: string + id: + type: string + source: + type: string + path: + type: string + start_date: + type: string + end_date: + type: string + + Workflow: + properties: + id: + type: string + pfts: + type: array + items: + type: string + model: + type: object + properties: + id: + type: string + name: + type: string + meta_analysis: + type: object + properties: + iter: + type: number + random_effects: + type: object + properties: + "on": + type: boolean + use_ghs: + type: boolean + update: + type: boolean + threshold: + type: number + ensemble: + type: object + properties: + size: + type: number + variable: + type: string + sampling_space: + type: object + properties: + parameters.method: + type: string + met.method: + type: string + sensitivity_analysis: + type: object + properties: + quantiles_sigma: + type: array + items: + type: number + variable: + type: string + perpft: + type: boolean + runs: + type: array + items: + type: string + host: + type: object + properties: + name: + type: string + status: + type: string + + + diff --git a/base/api/man/ping.Rd b/base/api/man/ping.Rd new file mode 100644 index 00000000000..5c52c93de08 --- /dev/null +++ b/base/api/man/ping.Rd @@ -0,0 +1,17 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/ping.R +\name{ping} +\alias{ping} +\title{Function to be executed when /ping API endpoint is called} +\usage{ +ping() +} +\value{ +Mapping containing response as "pong" +} +\description{ +If successful connection to API server is established, this function will return the "pong" message +} +\author{ +Tezan Sahu +} From 07e1a1504f5fef00f203af176aacfdff845bb285 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 29 May 2020 14:21:08 -0500 Subject: [PATCH 0995/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 7fa114251e8..0a63852eb3b 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -60,7 +60,7 @@ These folders will hold all the persistent data for each of the respective conta First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. -Once the database has finished starting up we will initialize the database. Before we run the container we want to make sure we have the latest database infomration, you can do this with `docker pull pecan/db`, which will make sure you have the latest version of the database ready. Now you can load the database using: `docker run --rm --network pecan_pecan pecan/db` (in this case we use the latest image instead of develop since it refers to the actual database data, and not the actual code). 
Once that is done we create two users for BETY: +Once the database has finished starting up we will initialize the database. Before we run the container we want to make sure we have the latest database information, you can do this with `docker pull pecan/db`, which will make sure you have the latest version of the database ready. Now you can load the database using: `docker run --rm --network pecan_pecan pecan/db` (in this case we use the latest image instead of develop since it refers to the actual database data, and not the actual code). Once that is done we create two users for BETY: ``` # guest user From 46e1f76a3c8ddd856f957816e2cd99d49e7731d6 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 29 May 2020 14:22:51 -0500 Subject: [PATCH 0996/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 0a63852eb3b..4855628cad7 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -12,7 +12,7 @@ If running on a linux system it is recommended to add your user to the docker gr To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. -By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. To do this we copy the `docker-compose.dev.yml` file to `docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for develoopment. +By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e. `cp docker-compose.dev.yml docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for development. If you in the past had loaded some of the data, but would like to start from scratch you can simply remove the `volumes` folder (and all the subfolders) and start with the "First time setup" section again. From 109cd7658860bf23f08e468ab9c2ea96ff2d6e5d Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 29 May 2020 14:23:10 -0500 Subject: [PATCH 0997/2289] Update DEV-INTRO.md Co-authored-by: Kristina Riemer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 4855628cad7..ee6b195874b 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -35,7 +35,7 @@ You can copy the [`env.example`](docker/env.example) file as .env in your pecan * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers * `PECAN_VERSION` set this to develop, the docker image we start with - +Both of these variables should also be uncommented by removing the # preceding them. 
At the end you should see the following if you run the following command `egrep -v '^(#|$)' .env` ``` From d5cb02d3518684fc59e0954ab5f5862dbffd8136 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 29 May 2020 14:24:03 -0500 Subject: [PATCH 0998/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index ee6b195874b..7fdf0ef00d0 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -72,7 +72,7 @@ docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example #### load example data -Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. To do this we first again make sure we have the latest code ready using `docker pull pecan/data:develop` and run this image using `docker run --rm --network pecan_pecan pecan/data:develop`. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers (which is mounted from `volumes/pecan` in your current folder. +Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. To do this we first again make sure we have the latest code ready using `docker pull pecan/data:develop` and run this image using `docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop`. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers (which is mounted from `volumes/pecan` in your current folder. #### copy R packages (optional but recommended) From 724664f30c249af653e1758d98adee26f65fc612 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 29 May 2020 14:24:31 -0500 Subject: [PATCH 0999/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 7fdf0ef00d0..071257b76b5 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -90,7 +90,7 @@ The `docker-compose.dev.yaml` file has a section that will enable editing the we To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. You don't need to stop any running containers, you can use the following command to start all containers: `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d`. At this point you have PEcAn running in docker. -The current folder (most likely your clone of the git repository) is mounted in some containers as `/pecan`, and in the case of rstudio also in your home folder as `pecan`. You can see which containers exactly in `docker-compose.dev.yaml`. +The current folder (most likely your clone of the git repository) is mounted in some containers as `/pecan`, and in the case of rstudio also in your home folder as `pecan`. You can see which containers exactly in `docker-compose.override.yml`. You can now modify the code on your local machine, or you can use [rstudio](http://localhost:8000) in the docker stack. Once you made changes to the code you can compile the code either in the terminal of rstudio (`cd pecan && make`) or using `./scripts/compile.sh` from your machine (latter is nothing more than a shell script that runs `docker-compose exec executor sh -c 'cd /pecan && make'`. 
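As a companion to the compile step quoted in the patch above, the two invocations run the same build; a minimal sketch, assuming the development stack is already up:

```
# recompile PEcAn inside the running executor container
docker-compose exec executor sh -c 'cd /pecan && make'

# or the equivalent wrapper script, run from the repository root
./scripts/compile.sh
```

Because the build runs inside the container, code edited locally in the mounted `/pecan` folder is compiled with the container's toolchain rather than whatever happens to be installed on the host.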
From 3de66d005ce4a672ed606666c212947984cb62f8 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 29 May 2020 14:24:50 -0500 Subject: [PATCH 1000/2289] Update DEV-INTRO.md Co-authored-by: istfer --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 071257b76b5..47839e99b32 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -84,7 +84,7 @@ This only needs to be done once (or if the PEcAn base image changes drastically, #### copy web config file (optional) -The `docker-compose.dev.yaml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using `cp docker/web/config.docker.php web/config.php`. +The `docker-compose.override.yml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using `cp docker/web/config.docker.php web/config.php`. ### PEcAn Development From 852e081f8d7539e5e039ba03e207801fcc791bd2 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Tue, 2 Jun 2020 10:48:46 +0530 Subject: [PATCH 1001/2289] Update styler-actions.yml --- .github/workflows/styler-actions.yml | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index b199f960abd..5aafd00c247 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -19,23 +19,22 @@ jobs: - uses: r-lib/actions/setup-r@master - name: Install dependencies run: | - Rscript -e 'install.packages("styler")' - Rscript -e 'install.packages("devtools")' + Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")' Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")' - name: string operations run: | echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt - cat names.txt | tr -d '[]' > new.txt - text=$(cat new.txt) + cat names.txt | tr -d '[]' > changed_files.txt + text=$(cat changed_files.txt) IFS=',' read -ra ids <<< "$text" - for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> new2.txt; fi; done + for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done - name: Upload artifacts uses: actions/upload-artifact@v1 with: name: artifacts - path: new2.txt + path: files_to_style.txt - name: Style - run: for i in $(cat new2.txt); do Rscript -e "styler::style_file("$i")"; done + run: for i in $(cat files_to_style.txt); do Rscript -e "styler::style_file("$i")"; done - name: commit run: | git add \*.R @@ -63,11 +62,11 @@ jobs: - name : make shell: bash run: | - cut -d / -f 1-2 artifacts/new2.txt | tr -d '"' > new.txt - cat new.txt - sort new.txt | uniq > uniq.txt - cat uniq.txt - for i in $(cat uniq.txt); do make .doc/${i}; done + cut -d / -f 1-2 artifacts/files_to_style.txt | tr -d '"' > changed_dirs.txt + cat changed_dirs.txt + sort changed_dirs.txt | uniq > needs_documenting.txt + cat needs_documenting.txt + for i in $(cat needs_documenting.txt); do make .doc/${i}; done - name: commit run: | git config --global user.email "pecan_bot@example.com" From 0fea2c0b2713cde58e8ce5d3992387a4bb6082d3 Mon Sep 17 00:00:00 
2001
From: Mukul Maheshwari
Date: Tue, 2 Jun 2020 15:42:17 +0530
Subject: [PATCH 1002/2289] new sandbox repo

---
 actions/book.yml             |    31 +
 base/logger/DESCRIPTION      |     2 +-
 base/remote/DESCRIPTION      |     2 +-
 book_source/_main.Rmd        | 13402 +++++++++++++++++++++++++++++++++
 book_source/libgl1-mesa-dev  |     0
 book_source/libglpk-dev      |     0
 book_source/libglu1-mesa-dev |     4 +
 book_source/libnetcdf-dev    |     0
 book_source/librdf0-dev      |     0
 book_source/libudunits2-dev  |     0
 modules/emulator/DESCRIPTION |     2 +-
 11 files changed, 13440 insertions(+), 3 deletions(-)
 create mode 100644 actions/book.yml
 create mode 100644 book_source/_main.Rmd
 create mode 100644 book_source/libgl1-mesa-dev
 create mode 100644 book_source/libglpk-dev
 create mode 100644 book_source/libglu1-mesa-dev
 create mode 100644 book_source/libnetcdf-dev
 create mode 100644 book_source/librdf0-dev
 create mode 100644 book_source/libudunits2-dev

diff --git a/actions/book.yml b/actions/book.yml
new file mode 100644
index 00000000000..933b3174c34
--- /dev/null
+++ b/actions/book.yml
@@ -0,0 +1,31 @@
+# This is a basic workflow to help you get started with Actions
+
+name: CI
+
+on:
+  push:
+    branches: master
+  pull_request:
+    branches: master
+
+# A workflow run is made up of one or more jobs that can run sequentially or in parallel
+jobs:
+  # This workflow contains a single job called "build"
+  build:
+    # The type of runner that the job will run on
+    runs-on: ubuntu-latest
+    container: pecan/depends:develop
+
+    steps:
+    # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
+    - uses: actions/checkout@v2
+
+    - name: Building book from source
+      run: cd book_source && make
+
+    # Runs a set of commands using the runners shell
+    - name: Run a multi-line script
+      run: |
+        echo Add other actions to build,
+        echo test, and deploy your project.
+
diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION
index 8d2cd9ab22d..d78726612f1 100644
--- a/base/logger/DESCRIPTION
+++ b/base/logger/DESCRIPTION
@@ -11,5 +11,5 @@ Suggests: testthat
 License: BSD_3_clause + file LICENSE
 Encoding: UTF-8
 LazyData: true
-RoxygenNote: 7.0.2
+RoxygenNote: 7.1.0
 Roxygen: list(markdown = TRUE)
diff --git a/base/remote/DESCRIPTION b/base/remote/DESCRIPTION
index f7a1174e619..7e0fd00bcbc 100644
--- a/base/remote/DESCRIPTION
+++ b/base/remote/DESCRIPTION
@@ -21,4 +21,4 @@ License: BSD_3_clause + file LICENSE
 Encoding: UTF-8
 LazyData: true
 Roxygen: list(markdown = TRUE)
-RoxygenNote: 7.0.2
+RoxygenNote: 7.1.0
diff --git a/book_source/_main.Rmd b/book_source/_main.Rmd
new file mode 100644
index 00000000000..8e1e71962a2
--- /dev/null
+++ b/book_source/_main.Rmd
@@ -0,0 +1,13402 @@
+---
+title: "The Predictive Ecosystem Analyzer"
+date: "`r Sys.Date()`"
+site: bookdown::bookdown_site
+documentclass: book
+biblio-style: apalike
+link-citations: yes
+author: "By: PEcAn Team"
+---
+
+# Welcome {-}
+
+**Ecosystem science, policy, and management informed by the best available data and models**
+
+```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'}
+knitr::include_graphics(rep("figures/PecanLogo.png"))
+```
+
+
+**Our Mission:**
+
+
+**Develop and promote accessible tools for reproducible ecosystem modeling and forecasting**
+
+
+
+[PEcAn Website](http://pecanproject.github.io/)
+
+[Public Chat Room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ)
+
+[Github Repository](https://github.com/PecanProject/pecan)
+
+
+
+
+
+
+
+
+
+# (PART) Introduction {-}
+
+
+
+# Project Overview
+
+The Predictive Ecosystem Analyzer (PEcAn) is an integrated informatics toolbox for ecosystem modeling (Dietze et al. 2013, LeBauer et al. 2013). PEcAn consists of:
+
+1) An application program interface (API) that encapsulates an ecosystem model, providing a common interface, inputs, and output.
+
+2) Core utilities for handling and tracking model runs and the flows of information and uncertainties into and out of models and analyses
+
+3) An accessible web-based user interface and visualization tools
+
+4) An extensible collection of modules to handle specific types of analyses (sensitivity, uncertainty, ensemble), model-data syntheses (benchmarking, parameter data assimilation, state data assimilation), and data processing (model inputs and data constraints)
+
+```{r, echo=FALSE, fig.align='center'}
+knitr::include_graphics(rep("figures/PEcAn_Components.jpeg"))
+```
+
+This project is motivated by the fact that many of the most pressing questions about global change are limited by our ability to synthesize existing data and strategically prioritize the collection of new data. This project seeks to improve this ability by developing a framework for integrating multiple data sources in a sensible manner.
+
+The workflow system allows ecosystem modeling to be more reproducible, automated, and transparent in terms of operations applied to data, and thus ultimately more comprehensible to both peers and the public. It reduces the redundancy of effort among modeling groups, facilitates collaboration, and makes models more accessible to the rest of the research community.
+
+PEcAn is not itself an ecosystem model, and it can be used with a variety of different ecosystem models; integrating a model involves writing a wrapper to convert inputs and outputs to and from the standards used by PEcAn. Currently, PEcAn supports multiple models listed [PEcAn Models].
+
+
+
+**Acknowledgements**
+
+The PEcAn project is supported financially by the following:
+
+- National Science Foundation (NSF):
+  - [1062547](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1062547)
+  - [1062204](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1062204)
+  - [1241894](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1241894)
+  - [1261582](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1261582)
+  - [1318164](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1318164)
+  - [1346748](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1346748)
+  - [1458021](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1458021)
+  - [1638577](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1638577)
+  - [1655095](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1655095)
+  - [1702996](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1702996)
+- National Aeronautics and Space Administration (NASA)
+  - NNX14AH65G
+  - NNX16AO13H
+  - 80NSSC17K0711
+- Department of Defense, Strategic Environmental Research and Development Program (DOD-SERDP), grant [RC2636](https://www.serdp-estcp.org/Program-Areas/Resource-Conservation-and-Resiliency/Infrastructure-Resiliency/Vulnerability-and-Impact-Assessment/RC-2636/RC-2636)
+- Energy Biosciences Institute, University of Illinois
+- Amazon Web Services (AWS)
+- [Google Summer of Code](https://summerofcode.withgoogle.com/organizations/4612291316678656/)
+
+BETY-db is a product of the Energy Biosciences Institute at the University of Illinois at Urbana-Champaign. We gratefully acknowledge the great effort of other researchers who generously made their own data available for further study.
+
+PEcAn is a collaboration among research groups at the Department of Earth and Environment at Boston University, the Energy Biosciences Institute at the University of Illinois, the Image Spatial Data Analysis group at NCSA, the Department of Atmospheric & Oceanic Sciences at the University of Wisconsin-Madison, the Terrestrial Ecosystem Science & Technology (TEST) Group at Brookhaven National Laboratory, and the Joint Global Change Research Institute (JGCRI) at the Pacific Northwest National Laboratory.
+
+Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NASA, Boston University, University of Illinois, Brookhaven National Lab, Pacific National Lab, Battelle, the US Department of Defense, or the US Department of Energy.
+
+**PEcAn Publications**
+
+* Fer I, R Kelly, P Moorcroft, AD Richardson, E Cowdery, MC Dietze. 2018. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation. Biogeosciences Discussions
+* Feng X, Uriarte M, González G, et al. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling. Glob Change Biol. 2018;24:e213–e232. [doi:10.1111/gcb.13863](https://doi.org/10.1111/gcb.13863)
+* Dietze, M. C. (2017), Prediction in ecology: a first-principles framework. Ecol Appl, 27: 2048-2060. doi:10.1002/eap.1589
+* Fisher RA, Koven CD, Anderegg WRL, et al. 2017. Vegetation demographics in Earth System Models: A review of progress and priorities.
Glob Change Biol. https://doi.org/10.1111/gcb.13910 +* Rollinson, C. R., Liu, Y., Raiho, A., Moore, D. J.P., McLachlan, J., Bishop, D. A., Dye, A., Matthes, J. H., Hessl, A., Hickler, T., Pederson, N., Poulter, B., Quaife, T., Schaefer, K., Steinkamp, J. and Dietze, M. C. (2017), Emergent climate and CO2 sensitivities of net primary productivity in ecosystem models do not agree with empirical data in temperate forests of eastern North America. Glob Change Biol. Accepted Author Manuscript. doi:10.1111/gcb.13626 +* LeBauer, D., Kooper, R., Mulrooney, P., Rohde, S., Wang, D., Long, S. P. and Dietze, M. C. (2017), betydb: a yield, trait, and ecosystem service database applied to second-generation bioenergy feedstock production. GCB Bioenergy. doi:10.1111/gcbb.12420 +* Rogers A, BE Medlyn, J Dukes, G Bonan, S von Caemmerer, MC Dietze, J Kattge, ADB Leakey, LM Mercado, U Niinemets, IC Prentice, SP Serbin, S Sitch, DA Way, S Zaehle. 2017. "A Roadmap for Improving the Representation of Photosynthesis in Earth System Models" New Phytologist 213(1):22-42 DOI: 10.1111/nph.14283 +* Shiklomanov. A, MC Dietze, T Viskari, PA Townsend, SP Serbin. 2016 "Quantifying the influences of spectral resolution on uncertainty in leaf trait estimates through a Bayesian approach to RTM inversion" Remote Sensing of the Environment 183: 226-238 +* Viskari et al. 2015 Model-data assimilation of multiple phenological observations to constrain and forecast leaf area index. Ecological Applications 25(2): 546-558 +* Dietze, M. C., S. P. Serbin, C. Davidson, A. R. Desai, X. Feng, R. Kelly, R. Kooper, D. LeBauer, J. Mantooth, K. McHenry, and D. Wang (2014) A quantitative assessment of a terrestrial biosphere model's data needs across North American biomes. Journal of Geophysical Research-Biogeosciences [doi:10.1002/2013jg002392](https://doi.org/10.1002/2013jg002392) +* LeBauer, D.S., D. Wang, K. Richter, C. Davidson, & M.C. Dietze. (2013). Facilitating feedbacks between field measurements and ecosystem models. Ecological Monographs. [doi:10.1890/12-0137.1](https://doi.org/10.1890/12-0137.1) +* Wang, D, D.S. LeBauer, and M.C. Dietze(2013) Predicting yields of short-rotation hybrid poplar (Populus spp.) for the contiguous US through model-data synthesis. Ecological Applications [doi:10.1890/12-0854.1](https://doi.org/10.1890/12-0854.1) +* Dietze, M.C., D.S LeBauer, R. Kooper (2013) On improving the communication between models and data. Plant, Cell, & Environment [doi:10.1111/pce.12043](https://doi.org/10.1111/pce.12043) + + [Longer / auto-updated list of publications that mention PEcAn's full name in Google Scholar](https://scholar.google.com/scholar?start=0&q="predictive+ecosystem+analyzer+PEcAn") + + + +# Contributor Covenant Code of Conduct + +**Our Pledge** + +In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 
+
+**Our Standards**
+
+Examples of behavior that contributes to creating a positive environment include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+
+
+**Our Responsibilities**
+
+Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
+
+**Scope**
+
+This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
+
+**Enforcement**
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at pecanproj[at]gmail.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
+
+**Attribution**
+
+This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org/) version 1.4, available at [http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4/).
+
+
+
+# About the PEcAn Book
+
+This book serves as documentation for the PEcAn Project. It contains descriptions of topics necessary to inform both beginner and advanced users, as well as requisite materials for developers. It does not contain low-level descriptions of functions within PEcAn. Our aim for this documentation is to educate you about the PEcAn software, the possibilities of its usage, and the standards, expectations, and core workflows for developers.
+
+This book is organized into four main topics:
+
+**Introduction** - Brief explanation of PEcAn, how to obtain the PEcAn VM, and an explanation of basic web interface functions.
+
+**Tutorials/Demos/Workflows** - Tutorials, demos, and workflows for both users and developers, explaining how to use and extend PEcAn in different ways.
+
+**Topical Pages** - Explanation of the main PEcAn components and how they fit together.
+
+**Appendix** - External documentation, sources of information, and a FAQ section.
+
+## General Feedback/Comments/Suggestions
+
+*We want your ideas, thoughts, comments, and suggestions!* As a community we are committed to creating an inclusive and supportive atmosphere, so we encourage you to reach out to us in any of the following ways:
+
+**GitHub:** [https://github.com/PecanProject/pecan](https://github.com/PecanProject/pecan)
+This is the main hub of communication surrounding PEcAn development. Check out the issues section to see known bugs, upcoming features, and ideas for future development. Feel free to comment on existing issues or open new ones with questions, bug reports, feature requests, and/or ideas.
+
+**Slack:** [https://pecanproject.slack.com/](https://pecanproject.slack.com/)
+Slack serves as our day-to-day mode of communication. To join us on Slack, you will need to create an account first. This is done in three steps:
+
+1. Request an [invitation](https://publicslack.com/slacks/pecanproject/invites/new) to join Slack. It will be sent by email to the address you provided.
+2. Check your inbox for an email from Slack with subject "Rob Kooper has invited you to join a Slack workspace". This email should have a link that you can click to join Slack.
+3. Clicking the link opens a webpage that asks you to create an account; once that is done, you can log in to the Slack chat rooms.
+
+**Email:** pecanproj[at]gmail.com
+If you do not wish your communication with the team to be public, send us an email at the address above and we will get back to you as soon as possible.
+
+## Editing this book {#bookediting}
+
+The file organization of this documentation can be described simply as follows:
+
+- Each **chapter** is in its own file (within the corresponding section).
+- Each **group of chapters** (i.e. "part" in LaTeX) is in its own directory.
+
+Sections and chapters are rendered (and numbered) in alpha-numerical order of their corresponding file names.
+Therefore, each section directory and chapter file name should be **prefixed with a two-digit (zero-padded) number**.
+File and directory names should be as similar as possible to the name of the corresponding chapter or section.
+For instance, the file name for this chapter's source file is `06_reference/10_editing_this_book.Rmd`.
+This numbering means that if you need to create an additional chapter _before_ an existing one, you will have to renumber all chapters following it.
+
+To ensure correct rendering, you should also make sure that **each chapter starts with a level 1 heading** (`# heading`).
+For instance, this chapter's source starts with:
+
+```markdown
+# Editing this book {#bookediting}
+
+The file organization of this documentation can be described simply as follows:
+...
+```
+
+Furthermore, to keep the organization consistent, each chapter should have **exactly one level 1 heading** (i.e. do not combine multiple chapters into a single file).
+In other words, **do not spread a single chapter across multiple files**, and **do not put multiple chapters in the same file**.
+
+Each **section** directory has a file starting with `00` that contains only the section (or "Part") title.
+This is used to create the greyed-out section headers in the rendered HTML.
+
+For instance, this section has a file called `00_introduction.Rmd` which contains only the following:
+
+```markdown
+# (PART) Introduction {-}
+```
+
+To cross-reference a different section, use that section's unique tag (starts with `#`; appears next to the section heading surrounded in curly braces).
+For instance, the following Markdown contains two sections that cross-reference each other:
+
+```markdown
+## Introduction {#intro}
+
+Here is the intro. This is a link to the [next section](#section-one).
+
+## First section. {#section-one}
+
+As mentioned in the [previous section](#intro).
+```
+
+If no header tag exists for a section you want to cross-reference, you should create one.
+We have no strict rules about this, but it's useful to have tags that give some sense of their parent hierarchy and reference their parent sections (e.g. `#models`, `#models-ed`, and `#models-ed-xml` to refer to a chapter on models, with a subsection on ED and a sub-subsection on ED XML configuration).
+If section organization changes, it is fine to move header tags, but **avoid altering existing tags** as this will break all the links pointing to that tag.
+(Note that it is also possible to link to section headings by their exact title. However, this is not recommended because these section titles could change, which would break the links.)
+
+When referring to PEcAn packages or specific functions, it is a good idea to link to the [rendered package documentation](https://pecanproject.github.io/pkgdocs.html).
+For instance, here are links to the [`models/ed`](https://pecanproject.github.io/models/ed/docs/index.html) package, the [`PEcAn.ED2::modify_ed2in`](https://pecanproject.github.io/models/ed/docs/reference/modify_ed2in.html) function, and the PEcAnRTM package [vignette](https://pecanproject.github.io/modules/rtm/docs/articles/pecanrtm.vignette.html).
+If necessary, you can also link directly to specific lines or blocks in the source code on GitHub, [like this](https://github.com/PecanProject/pecan/blob/develop/models/ed/R/create_veg.R#L11-L25).
+(To get a link to a line, click its line number. To then select a block, shift-click another line number.)
+
+To insert figures, use `knitr::include_graphics("path/to/figure.png")` inside an [R code chunk](https://yihui.name/knitr/options/#chunk-options).
+For example:
+
+````
+```{r}`r ''`
+knitr::include_graphics("04_advanced_user_guide/images/Input_ID_name.png")
+```
+````
+
+Note that image file names are **relative to the `book_source` directory**, **NOT** to the markdown file.
+In other words, if `myimage.png` were in the same directory as this file, I would still have to reference it as `06_reference/myimage.png` -- I could _not_ just do `myimage.png`.
+The size, caption, and other properties of the rendered image can be controlled via [chunk options](https://yihui.name/knitr/options/#plots).
+
+For additional information about how `bookdown` works (including information about its syntax), see the [Bookdown free online book](https://bookdown.org/yihui/bookdown/).
+
+## How to create your own version of Documentation
+
+To create your own version of the documentation, follow these steps.
+These procedures assume you have a GitHub account, have forked the pecan repository, have cloned it locally, and have a [Travis CI](https://travis-ci.org/) account.
+ 1. Create a repository under your GitHub account with the name "pecan-documentation". Clear it of any files. Set up the repository with [GitHub Pages](https://pages.github.com/) by going to the settings tab for that repository.
+ 2. Create a personal access token for GitHub (https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line) and copy it.
+ 3. Create a Travis environment variable called `GITHUB_PAT` and save the access token you created as a secret variable.
+ 4. Create a branch from your local pecan repository with a name that starts with `release/` (e.g. `release/vtonydoc`).
+ 5. Make whichever changes you would like to the documentation and push them up to your fork.
+
+From here, Travis will build your documentation. The web version will be rendered at a URL of the form: `username.github.io/pecan-documentation/pattern_after_release/`
+
+
+
+# (PART) Tutorials, Demos and How To's {-}
+
+
+
+# Install PEcAn {#pecan-manual-setup}
+
+These instructions document how to install and set up PEcAn. They cover:
+
+- [Virtual machine](#install-vm)
+- [PEcAn Docker](#install-docker)
+- [PEcAn OS specific installation](#install-native)
+
+The PEcAn code and necessary infrastructure can be obtained and compiled in different ways.
+This set of instructions will help you choose a path and take the steps necessary to end up with a fully functioning PEcAn environment.
+
+## Virtual Machine (VM) {#install-vm}
+
+The PEcAn virtual machine consists of all of PEcAn pre-compiled within a Linux operating system and saved in a "virtual machine" (VM).
+Virtual machines allow for running consistent set-ups without worrying about differences between operating systems, library dependencies, compiling the code, etc.
+
+1. **Install VirtualBox** This is the software that runs the virtual machine. You can find the download link and instructions at [http://www.virtualbox.org](http://www.virtualbox.org). *NOTE: On Windows you may see a warning about Logo testing; it is okay to ignore the warning.*
+
+2. **Download the PEcAn VM** You can find the download link at [http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN](http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN), under the "**Files**" header. Click the ".ova" file to begin the download. Note that the file is ~7 GB, so this download can take several minutes to hours depending on your connection speed. Also, the VM requires >4 GB of RAM to operate correctly. Please check your current RAM usage and shut down processes as needed.
+
+3. **Import the VM** Once the download is complete, open VirtualBox. In the VirtualBox menus, go to "File" → "Import Appliance" and locate the downloaded ".ova" file.
+
+
+For VirtualBox version 5.x: In the Appliance Import Settings, make sure you select "Reinitialize the MAC address of all network cards" (picture below). This is not selected by default, and leaving it unselected can result in networking issues, since multiple machines might claim the same MAC address.
+
+```{r, echo=FALSE, fig.align='center'}
+knitr::include_graphics("figures/pic1.jpg")
+```
+
+For VirtualBox versions starting with 6.0, there is a slightly different interface (see figure). Select "Generate new MAC addresses for all network adapters" from the MAC Address Policy menu:
+
+```{r, echo=FALSE, fig.align='center'}
+knitr::include_graphics("figures/pic1v2.png")
+```
+
+NOTE: If you experience network connection difficulties in the VM with this enabled, try re-importing the VM without this setting selected.
+
+Finally, click "Import" to build the Virtual Machine from its image.
+
+4. **Launch PEcAn** Double-click the icon for the PEcAn VM. A terminal window will pop up showing the machine booting up, which may take a minute. It is done booting when you get to the `pecan login:` prompt. You do not need to log in, as the VM behaves like a server that we will access through your web browser. Feel free to minimize the VM window.
+
+* If you _do_ want to log in to the VM, the credentials are as follows: `username: carya`, `password: illinois` (after the pecan tree, [_Carya illinoinensis_][pecan-wikipedia]).
+
+5. **Open the PEcAn web interface** With the VM running in the background, open any web browser on the same machine and navigate to `localhost:6480/pecan/` to start the PEcAn workflow. (NOTE: The trailing slash may be necessary depending on your browser.)
+
+6. **Advanced interaction with the VM is mostly done through the command line.** You can perform these manipulations from inside the VM window. However, the VM is also configured for SSH access (username `carya`, hostname `localhost`, port 6422). For instance, to open an interactive shell inside the VM from a terminal on the host machine, use a command like `ssh -l carya -p 6422 localhost` (when prompted, the password is `illinois`, as above).
+
+These steps should be enough to get you started with running models and performing basic analyses with PEcAn.
+For advanced details on VM configuration and maintenance, see the [Advanced VM section](#working-with-vm).
+
+
+## Docker {#install-docker}
+
+This is a short guide to getting started with Docker and PEcAn.
+It does not go into much detail about how to use Docker -- for more details, see the main [Docker topical page](#docker-intro).
+
+1. **Install Docker**. Follow the instructions for your operating system at https://www.docker.com/community-edition#/download.
+   Once Docker is installed, make sure it is running.
+   To test that Docker is installed and running, open a terminal and run the following command:
+
+   ```bash
+   docker run hello-world
+   ```
+
+   If successful, this should return a message starting with `"Hello from Docker!"`.
+   If this doesn't work, there is something wrong with your configuration.
+   Refer to the Docker documentation for debugging.
+
+   NOTE: Depending on how Docker is installed and configured, you may have to run this command as `sudo`.
+   Try running the command without `sudo` first.
+   If that fails, but running as `sudo` succeeds, see [these instructions](https://docs.docker.com/install/linux/linux-postinstall/) for steps to use Docker as a non-root user.
+
+2. **Install docker-compose**. If you are running this on a Mac or Windows, it may already be installed. On Linux you will need to install it separately; see https://docs.docker.com/compose/install/.
+
+   To see if docker-compose is successfully installed, use the following shell command:
+
+   ```bash
+   docker-compose -v
+   ```
+
+   This should print the current version of docker-compose. We have tested the instructions below with docker-compose versions 1.22 and above.
+
+3. **Download the PEcAn docker-compose file**. It is located in the root directory of the [PEcAn source code](https://github.com/pecanproject/pecan). For reference, here are direct links to the [latest stable version](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml) and the [bleeding edge development version](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml). (To download the files, you should be able to right-click the link and select "Save link as".) Make sure the file is saved as `docker-compose.yml` in a directory called `pecan`.
+
+4. **Initialize the PEcAn database and data images**. The following `docker-compose` commands are used to download all the data PEcAn needs to start working. For more on how they work, see our [Docker topical pages](#pecan-docker-quickstart-init).
+
+   a. Create and start the PEcAn database container (without any data)
+
+      ```bash
+      docker-compose up -d postgres
+      ```
+
+      If this is successful, the end of the output should look like the following:
+
+      ```
+      Creating pecan_postgres_1 ... done
+      ```
+
+   b. "Initialize" the data for the PEcAn database.
+
+      ```bash
+      docker-compose run --rm bety initialize
+      ```
+
+      This should produce a lot of output describing the database operations happening under the hood.
+      Some of these will look like errors (including starting with `ERROR`), but _this is normal_.
+      This command succeeded if the output ends with the following:
+
+      ```
+      Added carya41 with access_level=4 and page_access_level=1 with id=323
+      Added carya42 with access_level=4 and page_access_level=2 with id=325
+      Added carya43 with access_level=4 and page_access_level=3 with id=327
+      Added carya44 with access_level=4 and page_access_level=4 with id=329
+      Added guestuser with access_level=4 and page_access_level=4 with id=331
+      ```
+
+   c. Download and configure the core PEcAn database files.
+
+      ```bash
+      docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+      ```
+
+      This will produce a lot of output describing file operations.
+      This command succeeded if the output ends with the following:
+
+      ```
+      ######################################################################
+      Done!
+      ######################################################################
+      ```
+
+5. **Start the PEcAn stack**. Assuming all of the steps above completed successfully, start the full stack by running the following shell command:
+
+   ```bash
+   docker-compose up -d
+   ```
+
+   If all of the containers started successfully, you should be able to access the various components from a browser via the following URLs (if you ran these commands on a remote machine, replace localhost with its hostname).
+
+   - PEcAn web interface (running models) -- http://localhost:8000/pecan/ (NOTE: The trailing slash is necessary.)
+   - PEcAn documentation and home page -- http://localhost:8000/
+   - BETY web interface -- http://localhost:8000/bety/
+   - File browser (minio) -- http://localhost:8000/minio/
+   - RabbitMQ management console (for managing queued processes) -- http://localhost:8000/rabbitmq/
+   - Traefik, a web server that maps URLs onto their respective containers -- http://localhost:8000/traefik/
+   - Monitor, a service that tracks which models are online, how many instances of each are running, and the number of jobs waiting; the output is in JSON -- http://localhost:8000/monitor/
+
+For troubleshooting and advanced configuration, see our [Docker topical pages](#docker-index).
+
+## (Advanced) Native install {#install-native}
+
+The full PEcAn system has a lot of dependencies, including R packages, compiled C and Fortran libraries, and system services and utilities.
+Installing all of these side by side, and getting them to work harmoniously, is very time-consuming and challenging, which is why we **strongly** encourage new users to use the VM or Docker if possible.
+
+In a nutshell, the process for manual installation is as follows:
+
+1. **Download the [PEcAn source code](https://github.com/pecanproject/pecan) from GitHub**. The recommended way to do this is with the shell command `git clone`, i.e. `git clone https://github.com/pecanproject/pecan`.
+
+2. **Download the BETY source code from GitHub**.
+
+3. **Install the PEcAn R packages and their dependencies**. This can be done by running the shell command `make` inside the PEcAn source code directory. Note that many of the R packages on which PEcAn depends have system dependencies that are not included by default on most operating systems, but almost all of which should be available via your operating system's package manager (e.g. Homebrew for macOS, `apt` for Ubuntu/Debian/Mint, `yum` for Red Hat/Fedora/CentOS).
+
+4. **Install and configure PostgreSQL**.
+5. **Install and configure the Apache web server**.
+
+For more details, see our notes about [OS Specific Installations](#osinstall).
+
+
+
+# Tutorials {#user-section}
+
+## How PEcAn Works in a nutshell {#pecan-in-a-nutshell}
+
+PEcAn provides an interface to a variety of ecosystem models and attempts to standardize and automate the processes of model parameterization, execution, and analysis. First, you choose an ecosystem model, then the time and location of interest (a site), the plant community (or crop) that you are interested in simulating, and a source of atmospheric data from the BETY database (LeBauer et al., 2010). These are set in a "settings" file, commonly named `pecan.xml`, which can be edited manually if desired. From here, PEcAn will take over, setting up and executing the selected model using your settings. The key is that PEcAn uses models as-is, and all of the translation steps are done within PEcAn, so no modifications to the model itself are required. Once the model run is finished, PEcAn allows you to create graphs of the simulation results as well as download them. It is also possible to see all past experiments and simulations.
+
+There are two ways of using PEcAn: via the web interface and directly within R. Even for users familiar with R, using the web interface is a good place to start because it provides a high-level overview of the PEcAn workflow. The quickest way to get started is to download the virtual machine or use an AWS instance.
+
+## PEcAn Demos {#demo-table}
+
+The following tutorials assume you have installed PEcAn. If you have not, please consult the [PEcAn Installation Section](#pecan-manual-setup).
+
+|Type|Title|Web Link| Source Rmd|
+|:--:|:---:|:------:|:---------:|
+|Demo| Basic Run| [html](https://pecanproject.github.io/pecan-documentation/tutorials/Demo01.html) | [Rmd](https://github.com/PecanProject/pecan/blob/develop/documentation/tutorials/01_Demo_Basic_Run/Demo01.Rmd)|
+|Demo| Uncertainty Analysis| [html](https://pecanproject.github.io/pecan-documentation/tutorials/Demo02.html) | [Rmd](https://github.com/PecanProject/pecan/tree/master/documentation/tutorials/02_Demo_Uncertainty_Analysis)|
+|Demo| Output Analysis|html |[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/AnalyzeOutput)|
+|Demo| MCMC |html|[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/MCMC)|
+|Demo|Parameter Assimilation |html |[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/ParameterAssimilation)|
+|Demo|State Assimilation|html|[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/StateAssimilation)|
+|Demo| Sensitivity|html|[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/sensitivity)|
+|Vignette|Allometries|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/allometry/vignettes/AllomVignette.Rmd)|
+|Vignette|MCMC|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/assim.batch/vignettes/AssimBatchVignette.Rmd)|
+|Vignette|Meteorological Data|html|[Rmd](https://github.com/PecanProject/pecan/tree/master/modules/data.atmosphere/vignettes)|
+|Vignette|Meta-Analysis|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/meta.analysis/vignettes/single.MA_demo.Rmd)|
+|Vignette|Photosynthetic Response Curves|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd)|
+|Vignette|Priors|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/priors/vignettes/priors_demo.Rmd)|
+|Vignette|Leaf Spectra:PROSPECT inversion|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/rtm/vignettes/pecanrtm.vignette.Rmd)|
+
+
+
+## Demo 01: Basic Run PEcAn
+
+```{r echo = FALSE,warning=FALSE}
+library(knitr)
+opts_chunk$set(echo = FALSE, message = FALSE, warning = FALSE,
+               fig.align = 'center', out.width = '100%')
+```
+
+#### Objective
+
+We will begin by exploring a set of web-based tools that are designed to run single-site model runs. A lot of the detail about what’s going on under the hood, and all the outputs that PEcAn produces, are left to Demo 2. This demo will also demonstrate how to use PEcAn outputs in additional analyses outside of PEcAn.
+
+#### PEcAn URL
+
+In the following demo, **URL** is the web address of a PEcAn server and will refer to one of the following:
+
+* If you are doing a live demo with the PEcAn team, **URL was provided**
+* If you are running the PEcAn [virtual machine](#basicvm): **URL = localhost:6480**
+* If you are running PEcAn using [Amazon Web Services (AWS)](#awsvm), **URL is the Public IP**
+* If you are running PEcAn using [Docker](#dockervm), **URL is localhost:8000/pecan/** (trailing slash is important!)
+* If you followed instructions found in [Install PEcAn by hand], **URL is your server’s IP**
+
+
+#### Start PEcAn:
+
+1. **Enter URL in your web browser**
+2. **Click “Run Models”**
+3. **Click the ‘Next’ button** to move to the “Site Selection” page.
+
+```{r startpecan}
+knitr::include_graphics('extfiles/startpecan.jpg')
+```
+
+#### Site Selection
+
+```{r mapmodel}
+knitr::include_graphics('extfiles/mapmodel.png')
+```
+
+#### Host
+
+**Select the local machine “pecan”**. Other options exist if you’ve read and followed instructions found in [Remote execution with PEcAn].
+
+
+#### Model
+
+Select **SIPNET** (r136) from the available models because it is quick and simple. Reference material can be found in [Models in PEcAn].
+
+
+#### Site Group
+
+To filter sites, you can **select a specific group of sites**. For this tutorial we will use **Ameriflux**.
+
+
+#### Conversion:
+
+**Select the conversion check box** to show all sites for which PEcAn can generate model drivers automatically. By default (unchecked), PEcAn only displays sites where model drivers already exist in the system database.
+
+
+#### Site:
+
+**For this tutorial, type _US-NR1_ in the search box to display the Niwot Ridge Ameriflux site (US-NR1), and then click on the pin icon**. When you click on a site’s flag on the map, it will give you the name and location of the site and put that site in the “Site:” box on the left-hand side, indicating your current selection.
+
+Once you are finished with the above steps, **click "Next"**.
+
+
+#### Run Specification
+
+```{r runspec}
+knitr::include_graphics('extfiles/runspec.png')
+```
+
+Next we will specify the settings required to run the model. Be aware that the inputs required for any particular model may vary somewhat, so there may be additional optional or required input selections available for other models.
+
+
+#### PFT (Plant Functional Type):
+
+**Niwot Ridge is temperate coniferous**. Available PFTs will vary by model, and some models allow multiple competing PFTs to be selected. Also select **soil** to control the soil parameters.
+
+
+#### Start/End Date:
+
+Select **2003/01/01** to **2006/12/31**. In general, be careful to select dates for which there is available driver data.
+
+
+#### Weather Data:
+
+**Select “Use AmerifluxLBL”** from the [Available Meteorological Drivers].
+
+#### Optional Settings:
+
+**Leave all blank for the demo run.**
+
+1. **Email** sends a message when the run is complete.
+2. **Use Brown Dog** will use the Brown Dog web services in order to do input file conversions. (**Note: Required if you select _Use NARR_ for Weather Data**)
+3. **Edit pecan.xml** allows you to configure advanced settings via the PEcAn settings file.
+4. **Edit model config** pauses the workflow after PEcAn has written all model-specific settings but before the model runs are called, and allows users to configure any additional settings internal to the model.
+5. **Advanced Setup** controls ensemble and sensitivity run settings discussed in Demo 2.
+
+Finally, **click "Next"** to start the model run.
+
+
+#### Data Use Policies
+
+The last step before the run starts is to **read and agree** to AmeriFlux's data policy and **give a valid username**. If you don't already have an Ameriflux username, click "register here" and create one. If you selected a different data source, this step may or may not be needed: you will need to agree to a data policy if your source has one, but if it doesn't then the run will start immediately.
+
+
+#### If you get an error in your run
+
+If you get an error in your run as part of a live demo or class activity, it is probably simplest to start over and try changing options and re-running (e.g. with a different site or PFT), as time does not permit detailed debugging. If the source of the error is not immediately obvious, you may want to take a look at `workflow.Rout` to see the PEcAn workflow log, or at `logfile.txt` to see the model execution log, and then refer to the [Documentation](http://pecanproject.github.io/documentation.html) or the [Chat Room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ) for help.
+
+#### Model Run Workflow
+
+```{r execstatus}
+knitr::include_graphics('extfiles/execstatus.jpg')
+```
+
+#### MET Process:
+
+First, PEcAn will download meteorological data based on the Weather Data source you chose and process it into the specific format required by the chosen model.
+
+
+#### TRAIT / META:
+
+PEcAn then estimates model parameters by performing a meta-analysis of the available trait data for a PFT. TRAIT will extract relevant trait data from the database. META performs a hierarchical Bayes meta-analysis of available trait data. The output of this analysis is a probability distribution for each model parameter. PEcAn selects the median value of this parameter as the default, but in Demo 2 we will see how PEcAn can use this parameter uncertainty to make probabilistic forecasts and assess model sensitivity and uncertainty. Errors at this stage usually indicate errors in the trait database or incorrectly specified PFTs (e.g. defining a variable twice).
+
+
+#### CONFIG:
+
+Writes model-specific settings and parameter files.
+
+
+#### MODEL:
+
+Runs the model.
+
+
+#### OUTPUT:
+
+All model outputs are converted to the [standard netCDF format](http://nacp.ornl.gov/MsTMIP_variables.shtml).
+
+
+#### ENSEMBLE & SENSITIVITY:
+
+If enabled, post-processes output for these analyses.
+
+If at any point a Stage Name has the **Status “ERROR”**, please notify the PEcAn team member that is administering the demo, or feel free to do any of the following:
+
+* Refer to the PEcAn documentation
+* Post the end of your workflow log in our Slack channel
+* Post an issue on GitHub.
+
+The entire PEcAn team welcomes any questions you may have!
+
+**If the Finished Stage has a Status of “DONE”, congratulations!** If you got this far, you have managed to run an ecosystem model without ever touching a line of code! Now it’s time to look at the results: **click Finished**.
+
+FYI, [adding a new model](https://pecanproject.github.io/pecan-documentation/master/adding-an-ecosystem-model.html) to PEcAn does not require modification of the model’s code, just the implementation of a wrapper function.
+
+
+#### Output and Visualization
+
+**For now, focus on the graphs; we will explore all of PEcAn’s outputs in more detail in Demo 02.**
+
+
+#### Graphs
+
+1. **Select a Year and Y-axis Variable, and then click 'Plot run/year/variable'.** Initially leave the X-axis as time.
+2. Within this figure the **points indicate the daily mean** for the variable while the **envelope encompasses the diurnal variability (max and min)**.
+3. Variable names and units are based on a [standard netCDF format](http://nacp.ornl.gov/MsTMIP_variables.shtml) (see the short sketch after this list for how to inspect these names and units in your own output files).
+4. Try looking at a number of different output variables over different years.
+5. Try **changing the X-axis** to look at bivariate plots of how different output variables are related to one another. Be aware that PEcAn currently runs a moving min/mean/max through bivariate plots, just as it does with time series plots. In some cases this makes more sense than others.
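+
+If you want to inspect these standardized variables directly, you can open a downloaded output file in R. This is a minimal sketch, assuming you have the `ncdf4` package installed and have downloaded one year of output; the file name `2004.nc` is just a placeholder, so substitute a year from your own run:
+
+```{r echo=TRUE, eval=FALSE}
+library(ncdf4)
+
+## open one year of standardized PEcAn output (example file name)
+nc <- nc_open("2004.nc")
+
+## list all output variables and their units
+sapply(nc$var, function(v) v$units)
+
+## read a single variable, e.g. net ecosystem exchange
+nee <- ncvar_get(nc, "NEE")
+
+nc_close(nc)
+```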
+
+
+#### Alternative Visualization: R Shiny
+
+1. **Click on Open SHINY**, which will open a new browser window. The Shiny app will automatically access your run’s output files and allow you to visualize all output variables as a function of time.
+
+```{r workflowshiny}
+knitr::include_graphics('extfiles/workflowshiny.png')
+```
+
+2. Use the pull-down menu under **Variable Name** to choose whichever output variable you wish to plot.
+
+
+#### Model Run Archive
+
+**Return to the output window and click on the HISTORY button. Click on any previous run in the “ID” column** to go to the current state of that run's execution -- you can always return to old runs and runs in-progress this way. The run you just did should be the most recent entry in the table. **For the next analysis, make note of the ID number from your run.**
+
+
+#### Next steps
+
+##### Analyzing model output
+
+Follow this tutorial, [Analyze Output], to learn how to **open model output in R and compare it to observed data**.
+
+
+#### DEMO 02
+[Demo 02: Sensitivity and Uncertainty Analysis] will show how to perform **Ensemble & Sensitivity Analyses** through the web interface and explore the PEcAn outputs in greater detail, including the **trait meta-analysis**.
+
+
+
+## Demo 02: Sensitivity and Uncertainty Analysis
+
+In Demo 2 we will be looking at how PEcAn can use information about parameter uncertainty to perform three automated analyses:
+
+* **Ensemble Analysis**: Repeats numerous model runs, each sampling from the parameter uncertainty, to generate a probability distribution of model projections. **Allows us to put a confidence interval on the model.**
+* **Sensitivity Analysis**: Repeats numerous model runs to assess how changes in model parameters will affect model outputs. **Allows us to identify which parameters the model is most sensitive to.**
+* **Uncertainty Analysis**: Combines information about model sensitivity with information about parameter uncertainty to determine the contribution of each model parameter to the uncertainty in model outputs. **Allows us to identify which parameters are driving model uncertainty.**
+
+#### Run Specification
+
+1. Return to the main menu for the PEcAn web interface: **URL > Run Models**
+
+2. Repeat the steps for site selection and run specification from Demo 01, but also **click on “Advanced setup”**, then click Next.
+
+3. After you click Advanced setup, PEcAn will first show an Analysis Menu, where we are going to specify new settings.
+
+    + For an ensemble analysis, increase the number of runs in the ensemble; in this case, set **Runs to 50**. In practice you would want to use a larger ensemble size (100-5000) than we are using in the demo. The ensemble analysis samples parameters from their posterior distributions to propagate this uncertainty into the model output.
+
+    + PEcAn's sensitivity analysis holds all parameters at their median value and then varies each parameter one-at-a-time based on the quantiles of the posterior distribution. PEcAn also includes a handy shortcut, which is the default behavior for the web interface, that converts a specified standard deviation into its Normal quantile equivalent (e.g. -1 and 1 are converted to 0.159 and 0.841; see the short R sketch after this list). In this example **set Sensitivity to -2,-1,1,2** (the median value, 0, occurs by default).
+
+    + We can also tell PEcAn which variable to run the sensitivity on. Here, **set Variables to NEE**, so we can compare against flux tower NEE observations.
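+
+As a quick check of that shortcut, the conversion from standard deviations to quantiles is just the standard Normal CDF. A minimal R sketch:
+
+```{r echo=TRUE, eval=FALSE}
+## quantile equivalents of -2, -1, 1, and 2 standard deviations
+pnorm(c(-2, -1, 1, 2))
+## approximately 0.023, 0.159, 0.841, and 0.977
+```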
+
+**Click Next**
+
+#### Additional Outputs:
+
+The PEcAn workflow will take considerably longer to complete since we have just asked for over a hundred model runs. Once the runs are complete you will return to the output visualization page, where there will be a few new outputs to explore, as well as outputs that were present earlier that we’ll now explore in greater detail:
+
+#### Run ID:
+While the sensitivity and ensemble analyses synthesize across runs, you can also select individual runs from the Run ID menu. You can use the Graphs menu to visualize each individual run, or open individual runs in Shiny.
+
+#### Inputs:
+This menu shows the contents of `/run`, which lets you look at and download:
+
+1. A summary file (README.txt) describing each run: location, run ID, model, dates, whether it was in the sensitivity or ensemble analysis, variables modified, etc.
+2. The model-specific input files fed into the model
+3. The `jobs.sh` file used to submit the model run
+
+#### Outputs:
+This menu shows the contents of `/out`. A number of files generated by the underlying ecosystem model are archived and available for download. These include:
+
+1. Output files in the standardized netCDF format ([year].nc) that can be downloaded for visualization and analysis (R, MATLAB, ncview, Panoply, etc.)
+2. Raw model output in model-specific format (e.g. sipnet.out).
+3. Logfile.txt, which contains job.sh and model error, warning, and informational messages
+
+#### PFTs:
+This menu shows the contents of `/pft`. There is a wide array of outputs available that are related to the process of estimating the model parameters and running sensitivity/uncertainty analyses for a specific Plant Functional Type.
+
+1. **TRAITS**: The Rdata files **trait.data.Rdata** and **madata.Rdata** are, respectively, the available trait data extracted from the database that was used to estimate the model parameters and that same data cleaned and formatted for the statistical code. The **list of variables that are queried is determined by what variables have priors associated with them in the definition of the PFTs**. Priors are output into **prior.distns.Rdata**. Likewise, the **list of species that are associated with a PFT determines what subset of data is extracted** out of all data matching a given variable name. Demo 3 will demonstrate how a PFT can be created or modified. To look at these files in RStudio, **click on these files to load them into your workspace**. You can further examine them in the _Environment_ window or by accessing them at the command line. For example, try typing ```names(trait.data)``` as this will tell you what variables were extracted, ```names(trait.data$Amax)``` will tell you the names of the columns in the Amax table, and ```summary(trait.data$Amax)``` will give you summary data about the Amax values.
+2. **META-ANALYSIS**:
+
+   ```*.bug```: The evaluation of the meta-analysis is done using a Bayesian statistical software package called JAGS that is called by the R code. For each trait, the R code will generate a [trait].model.bug file that is the JAGS code for the meta-analysis itself. This code is generated on the fly, with PEcAn adding or subtracting the site, treatment, and greenhouse terms depending upon the presence of these effects in the data itself. If the random effects tag in the settings is set to FALSE, then all random effects will be turned off even if there are multiple sites.
+
+   ```meta-analysis.log``` contains a number of diagnostics, including the summary statistics of the model, an assessment of whether the posterior is consistent with the prior, and the status of the Brooks-Gelman-Rubin convergence statistic (which is ideally 1.0 but should be less than 1.1).
+
+   ```ma.summaryplots.*.pdf``` are collections of diagnostic plots produced in R after the above JAGS code is run that are useful in assessing the statistical model. _Open up one of these pdfs_ to evaluate the shape of the posterior distributions (they should generally be unimodal), the convergence of the MCMC chains (all chains should be mixing well from the same distribution), and the autocorrelation of the samples (should be low).
+
+   ```traits.mcmc.Rdata``` contains the raw output from the statistical code. This includes samples from all of the parameters in the meta-analysis model, not just those that feed forward to the ecosystem model, but also the variances, fixed effects, and random effects.
+
+   ```post.distns.Rdata``` stores a simple table of the posterior distributions for all model parameters in terms of the name of the distribution and its parameters.
+
+   ```posteriors.pdf``` provides graphics showing, for each model parameter, the prior distribution, the data, the smoothed histogram of the posterior distribution (labeled post), and the best-fit analytical approximation to that smoothed histogram (labeled approx). _Open posteriors.pdf and compare the posteriors to the priors and data._
+
+3. **SENSITIVITY ANALYSIS**
+
+   ```sensitivity.analysis.[RunID].[Variable].[StartYear].[EndYear].pdf``` shows the raw data points from univariate one-at-a-time analyses and spline fits through the points. _Open this file_ to determine which parameters are most and least sensitive.
+
+4. **UNCERTAINTY ANALYSIS**
+
+   ```variance.decomposition.[RunID].[Variable].[StartYear].[EndYear].pdf``` contains three columns: the coefficient of variation (normalized posterior variance), the elasticity (normalized sensitivity), and the partial standard deviation of each model parameter. **Open this file for BOTH the soil and conifer PFTs and answer the following questions:**
+
+   The Variance Decomposition graph is sorted by the variable explaining the largest amount of variability in the model output (right-hand column). **From this graph identify the top-tier parameters that you would target for future constraint.**
+
+   A parameter can be important because it is highly sensitive, because it is highly uncertain, or both. **Identify parameters in your output that meet each of these criteria.** Additionally, **identify parameters that are highly uncertain but unimportant (due to low sensitivity) and those that are highly sensitive but unimportant (due to low uncertainty)**.
+
+   Parameter constraints could come from further literature synthesis, from direct measurement of the trait, or from data assimilation. **Choose the parameter that you think provides the most efficient means of reducing model uncertainty and propose how you might best reduce uncertainty in this process**. In making this choice remember that not all processes in models can be directly observed, and that the cost-per-sample for different measurements can vary tremendously (and thus the parameter you measure next is not always the one contributing the most to model variability). Also consider the role of parameter uncertainty versus model sensitivity in justifying your choice of what parameters to constrain.
+
+#### PEcAn Files:
+
+This menu shows the contents of the root workflow folder that are not in one of the folders indicated above. It mostly contains log files from the PEcAn workflow that are useful if the workflow generates an error, and that serve as metadata and provenance (a detailed record of how the data was generated).
+
+1. ```STATUS``` gives a summary of the steps of the workflow, the time they took, and whether they were successful
+2. ```pecan.*.xml``` are PEcAn settings files
+3. ```workflow.R``` is the workflow script
+4. ```workflow.Rout``` is the corresponding log file
+5. ```samples.Rdata``` contains the parameter values used in the runs. This file contains two data objects, sa.samples and ensemble.samples, that are the parameter values for the sensitivity analysis and ensemble runs, respectively
+6. ```sensitivity.output.[RunID].[Variable].[StartYear].[EndYear].Rdata``` contains the object sensitivity.output, which is the model outputs corresponding to the parameter values in sa.samples.
+7. ENSEMBLE ANALYSIS
+
+   ```ensemble.Rdata``` contains the object ensemble.output, which is the model predictions at the parameter values given in ensemble.samples.
+
+   ```ensemble.analysis.[RunID].[Variable].[StartYear].[EndYear].pdf``` contains the ensemble prediction as both a histogram and a boxplot.
+
+   ```ensemble.ts.[RunID].[Variable].[StartYear].[EndYear].pdf``` contains a time-series plot of the ensemble mean, median, and 95% CI.
+
+#### Global Sensitivity: Shiny
+
+**Navigate to URL/shiny/global-sensitivity.**
+
+This app uses the output from the ENSEMBLE runs to perform a global Monte Carlo sensitivity analysis. There are three modes controlled by Output type:
+
+1. **Pairwise** looks at the relationship between a specific parameter (X) and output (Y)
+2. **All parameters** looks at how all parameters affect a specific output (Y)
+3. **All variables** looks at how all outputs are affected by a specific parameter (X)
+
+In all of these analyses, the app also fits a linear regression to these scatterplots and reports a number of summary statistics. Among these, the slope is an indicator of **global sensitivity** and the R2 is an indicator of the contribution to **global uncertainty**.
+
+#### Next Steps
+
+The next set of tutorials will focus on the process of data assimilation and parameter estimation. The next two steps are in “.Rmd” files which can be viewed online.
+
+#### Assimilation 'by hand'
+
+[Explore](https://github.com/PecanProject/pecan/blob/master/documentation/tutorials/sensitivity/PEcAn_sensitivity_tutorial_v1.0.Rmd) how model error changes as a function of parameter value (i.e. data assimilation ‘by hand’).
+
+
+#### MCMC Concepts
+
+[Explore](https://github.com/PecanProject/pecan/blob/master/documentation/tutorials/MCMC/MCMC_Concepts.Rmd) Bayesian MCMC concepts using the photosynthesis module.
+
+
+#### More info about tools, analyses, and specific tasks…
+
+Additional information about specific tasks (adding sites, models, data; software updates; etc.) and analyses (e.g. data assimilation) can be found in the PEcAn [documentation](https://pecanproject.github.io/pecan-documentation/).
+
+If you encounter a problem with PEcAn that’s not covered in the documentation, or if PEcAn is missing functionality you need, please search [known bugs and issues](https://github.com/PecanProject/pecan/issues?q=), submit a [bug report](http://pecanproject.github.io/Report_an_issue.html), or ask a question in our [chat room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ). Additional questions can be directed to the [project manager](mailto:tonygard@bu.edu?subject=PEcAn Demo::).
+
+
+
+
+## Other Vignettes {#othervignettes}
+
+
+
+### Simple Model-Data Comparisons
+
+#### Authors: Istem Fer, Tess McCabe
+
+In this tutorial we will compare model outputs to data outside of the PEcAn web interface. The goal is to demonstrate how to perform additional analyses using PEcAn’s outputs. To do this you can download each of the output files and then perform the analyses using whatever software you prefer, or you can perform analyses directly on the PEcAn server itself. Here we’ll be analyzing model outputs in R using a browser-based version of RStudio that’s installed on the server.
+
+#### Starting RStudio Server
+
+1. Open RStudio Server in a new window at **URL/rstudio**
+
+2. The username is carya and the password is illinois.
+
+3. To open a new R script, click File > New File > R Script
+
+4. Use the Files browser in the lower-right pane to find where your run(s) are located
+
+   + All PEcAn outputs are stored in the output folder. Click on this to open it up.
+
+   + Within the outputs folder, there will be one folder for each workflow execution. For example, click to open the folder PEcAn_99000000001 if that’s your workflow ID.
+
+   + A workflow folder will have a few log and settings files (e.g. pecan.xml) and the following subfolders:
+
+```
+run contains all the inputs for each run
+out contains all the outputs from each run
+pft contains the parameter information for each PFT
+```
+
+Within both the run and out folders there will be one folder for each unique model run, where the folder name is the run ID. Click to open the out folder. For our simple case we only did one run, so there should be only one folder (e.g. 99000000001). Click to open this folder.
+
+   + Within this folder you will find, among other things, files with the extension `.nc`. Each of these files contains one year of model output in the standard PEcAn netCDF format. This is the model output that we will use to compare to data.
+
+
+#### Read in settings from an XML file
+
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+## Read in the XML
+settings <- PEcAn.settings::read.settings("~/output/PEcAn_99000000001/pecan.CONFIGS.xml")
+
+## Read in the model output
+## Note: if you are using an xml from a run with multiple ensembles,
+## this line will provide only the first run ID
+runid <- as.character(read.table(paste(settings$outdir, "/run/", "runs.txt", sep = ""))[1, 1])
+outdir <- paste(settings$outdir, "/out/", runid, sep = "")
+start.year <- as.numeric(lubridate::year(settings$run$start.date))
+end.year <- as.numeric(lubridate::year(settings$run$end.date))
+
+site.id <- settings$run$site$id
+File_path <- "~/output/dbfiles/AmerifluxLBL_site_0-772/AMF_US-NR1_BASE_HH_9-1.csv"
+
+## Open up a connection to the BETY database
+bety <- dplyr::src_postgres(host = settings$database$bety$host,
+                            user = settings$database$bety$user,
+                            password = settings$database$bety$password,
+                            dbname = settings$database$bety$dbname)
+
+```
+
+#### Read in model output from specific variables
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+model_vars <- c("time", "NEE") ## variables being read
+model <- PEcAn.utils::read.output(runid, outdir, start.year, end.year,
+                                  model_vars, dataframe = TRUE)
+```
+
+The arguments to `read.output` are the run ID, the folder where the run is located, the start year, the end year, and the variables being read. The README file in the Input file dropdown menu of any successful run lists the run ID, the output folder, and the start and end year.
+
+#### Compare model to flux observations
+
+**First** _load up the observations_ and take a look at the contents of the file
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+## Match the file with a premade "format", a template that describes how
+## the information in the file is organized
+File_format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 5000000002)
+
+## Tell PEcAn where the data comes from
+site <- PEcAn.DB::query.site(site.id, bety$con)
+
+## Load the data. This will throw an error that not all of the units can be
+## converted; that's OK, as the units of the variable of interest (NEE) are
+## being converted.
+observations <- PEcAn.benchmark::load_data(data.path = File_path,
+                                           format = File_format,
+                                           time.row = File_format$time.row,
+                                           site = site,
+                                           start_year = start.year,
+                                           end_year = end.year)
+```
+
+File_path refers to where you stored your observational data. In this example the default file path is an Ameriflux dataset from Niwot Ridge.
+
+File_format queries the database for the format your file is in. The default format ID "5000000002" is for CSV files downloaded from the Ameriflux website.
+You could query for different kinds of formats that exist in BETY or [make your own](https://pecanproject.github.io/pecan-documentation/adding-an-ecosystem-model.html#formats).
+
+Here 772 is the database site ID for Niwot Ridge Forest, which tells PEcAn where the data is from and what time zone to assign any time data read in.
+
+**Second** _apply a conservative u* filter to observations_
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+observations$NEE[observations$UST < 0.2] <- NA
+```
+
+**Third** _align model output and observations_
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+aligned_dat <- PEcAn.benchmark::align_data(model.calc = model, obvs.calc = observations,
+                                           var = "NEE", align_method = "match_timestep")
+
+```
+When we aligned the data, we got a data frame with the variables we requested as `NEE.m` and `NEE.o` columns. The `.o` suffix is for observations, and `.m` is for the model. The `posix` column allows for easy plotting along a time series.
+
+**Fourth**, _plot model predictions vs. observations_ and compare this to a 1:1 line
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+## predicted vs observed plot
+plot(aligned_dat$NEE.m, aligned_dat$NEE.o)
+abline(0, 1, col = "red") ## intercept=0, slope=1
+```
+
+**Fifth**, _calculate the Root Mean Square Error (RMSE)_ between the model and the data
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+rmse <- sqrt(mean((aligned_dat$NEE.m - aligned_dat$NEE.o)^2, na.rm = TRUE))
+```
+`na.rm = TRUE` makes sure we don’t include missing or screened values in either time series.
+
+**Finally**, _plot time-series_ of both the model and data together
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+## plot aligned data
+plot(aligned_dat$posix, aligned_dat$NEE.o, type = "l")
+lines(aligned_dat$posix, aligned_dat$NEE.m, col = "red")
+
+```
+
+**Bonus** _How would you compare aggregated data?_
+
+Try RMSE against monthly NEE instead of half-hourly. In this case, first average the values up to monthly in the observations. Then, use `align_data` to match the monthly timestep in the model output.
+
+**NOTE**: `align_data` uses two separate alignment functions: `match_timestep` and `mean_over_larger_timestep`. `match_timestep` uses only the data that is present in both the model and the observations, which is helpful for sparse observations. `mean_over_larger_timestep` aggregates the values over the largest timestep present. If you were to look at averaged monthly data, you would use `mean_over_larger_timestep`.
+
+```{r echo = TRUE, warning=FALSE, eval= FALSE}
+monthlyNEEobs <- aggregate(observations, by = list(lubridate::month(observations$posix)),
+                           simplify = TRUE, FUN = mean, na.rm = TRUE)
+plottable <- PEcAn.benchmark::align_data(model.calc = model, obvs.calc = monthlyNEEobs,
+                                         align_method = "mean_over_larger_timestep", var = "NEE")
+head(plottable)
+```
+
+
+
+### Data Assimilation Concepts
+
+
+The goal of this tutorial is to help you gain some hands-on familiarity with some of the concepts, tools, and techniques involved in Bayesian Calibration. As a warm-up to more advanced approaches to model-data fusion involving full ecosystem models, this example will focus on fitting the Farquhar, von Caemmerer, and Berry (1980) photosynthesis model [FvCB model] to leaf-level photosynthetic data. This is a simple, nonlinear model consisting of three equations that models net carbon assimilation, $A^{(m)}$, at the scale of an individual leaf as a function of light and CO2.
+
+$$A_j = \frac{\alpha Q}{\sqrt{1+(\alpha^2 Q^2)/J_{max}^2}} \frac{C_i- \Gamma}{4 C_i + 8 \Gamma}$$
+
+$$A_c = V_{cmax} \frac{C_i - \Gamma}{C_i+ K_C (1+[O]/K_O) }$$
+
+$$A^{(m)} = \min(A_j,A_c) - r$$
+
+The first equation, $A_j$, describes the RuBP-regeneration-limited case. In this equation the first fraction is a nonrectangular hyperbola predicting $J$, the electron transport rate, as a function of incident light $Q$, quantum yield $\alpha$, and the asymptotic saturation of $J$ at high light, $J_{max}$. The second equation, $A_c$, describes the Rubisco-limited case. The third equation says that the overall net assimilation is determined by whichever of the two above cases is limiting, minus the leaf respiration rate, $r$.
+
+To keep things simple, as a Data Model (a.k.a. Likelihood or Cost Function) we'll assume that the observed leaf-level assimilation $A^{(o)}$ is Normally distributed around the model predictions with residual precision $\tau$.
+
+To fit this model to data we're going to rely on a piece of statistical software known as JAGS. The above model would be written in JAGS as:
+
+```
+model{
+
+## Priors
+ Jmax ~ dlnorm(4.7,2.7)  ## maximum electron transport rate prior
+ alpha ~ dnorm(0.25,100) ## quantum yield (mol electrons/mole photon) prior
+ vmax ~ dlnorm(4.6,2.7)  ## maximum rubisco capacity prior
+
+ r ~ dlnorm(0.75,1.56)   ## leaf respiration prior
+ cp ~ dlnorm(1.9,2.7)    ## CO2 compensation point prior
+ tau ~ dgamma(0.1,0.1)
+
+ for(i in 1:n){
+
+   ## electron transport limited
+   Aj[i] <- (alpha*q[i]/(sqrt(1+(alpha*alpha*q[i]*q[i])/(Jmax*Jmax))))*(pi[i]-cp)/(4*pi[i]+8*cp)
+
+   ## maximum rubisco limited without covariates
+   Ac[i] <- vmax*(pi[i]-cp)/(pi[i]+Kc*(1+po/Ko))
+
+   Am[i] <- min(Aj[i], Ac[i]) - r ## predicted net photosynthesis
+   Ao[i] ~ dnorm(Am[i],tau)       ## likelihood
+ }
+
+}
+```
+
+The first chunk of code defines the _prior_ probability distributions. In Bayesian inference every unknown parameter that needs to be estimated is required to have a prior distribution. Priors are the expression of our belief about what values a parameter might take on **prior to observing the data**. They can arise from many sources of information (literature survey, meta-analysis, expert opinion, etc.) provided that they do not make use of the data that is being used to fit the model. In this particular case, the priors were defined by Feng and Dietze 2013. Most priors are lognormal or gamma, which were chosen because most of these parameters need to be positive.
+
+After the priors is the Data Model, which in JAGS needs to be implemented as a loop over every observation. This is simply a codified version of the earlier equations.
+
+Table 1: FvCB model parameters in the statistical code, their symbols in equations, and definitions
+
+Parameter | Symbol     | Definition
+----------|------------|-----------
+alpha0    | $\alpha$   | quantum yield (mol electrons/mole photon)
+Jmax      | $J_{max}$  | maximum electron transport
+cp        | $\Gamma$   | CO2 compensation point
+vmax0     | $V_{cmax}$ | maximum Rubisco capacity (a.k.a Vcmax)
+r         | $R_d$      | leaf respiration
+tau       | $\tau$     | residual precision
+q         | $Q$        | PAR
+pi        | $C_i$      | CO2 concentration
+
+#### Fitting the model
+
+To begin with we'll load up an example A-Ci and A-Q curve that was collected during the 2012 edition of the [Flux Course](http://www.fluxcourse.org/) at Niwot Ridge. The exact syntax below may be a bit confusing to those unaccustomed to R, but the essence is that the `filenames` line is looking up where the example data is stored in the PEcAn.photosynthesis package and the `dat` line is loading up two files (one the A-Ci curve, the other the A-Q curve) and concatenating them together.
+
+```{r echo=TRUE ,eval=FALSE}
+library(PEcAn.photosynthesis)
+
+### Load built in data
+filenames <- system.file("extdata", paste0("flux-course-3", c("aci", "aq")), package = "PEcAn.photosynthesis")
+dat <- do.call("rbind", lapply(filenames, read_Licor))
+
+## Simple plots
+aci = as.character(dat$fname) == basename(filenames[1])
+plot(dat$Ci[aci], dat$Photo[aci], main = "ACi")
+plot(dat$PARi[!aci], dat$Photo[!aci], main = "AQ")
+```
+
+In PEcAn we've written a wrapper function, $fitA$, around the statistical model discussed above, which has a number of other bells and whistles discussed in the [PEcAn Photosynthesis Vignette](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd).
For today we'll just use the most basic version, which takes as arguments the data and the number of MCMC iterations we want to run.
+
+```{r echo=TRUE, eval=FALSE}
+fit <- fitA(dat, model = list(n.iter = 10000))
+```
+
+#### What's going on
+
+Bayesian numerical methods for model calibration are based on sampling parameter values from the posterior distribution. Fundamentally what's returned is a matrix, with the number of iterations as rows and the number of parameters as columns, which are samples from the posterior distribution, from which we can approximate any quantity of interest (mean, median, variance, CI, etc.).
+
+The following plots follow the trajectory of two correlated parameters, Jmax and alpha. In the first figure, arrows show the first 10 iterations. Internally JAGS is choosing between a variety of different Bayesian sampling methods (e.g. Metropolis-Hastings, Gibbs sampling, slice sampling, rejection sampling, etc.) to draw a new value for each parameter conditional on the current value. After just 10 steps we don't have a good picture of the overall posterior, but it should still be clear that the sampling is not a complete random walk.
+
+```{r echo= TRUE, eval=FALSE}
+
+params <- as.matrix(fit$params)
+xrng = range(fit$params[, "alpha0"])
+yrng = range(fit$params[, "Jmax0"])
+
+n = 1:10
+plot(params[n, "alpha0"], params[n, "Jmax0"], type = 'n', xlim = xrng, ylim = yrng)
+arrows(params[n[-10], "alpha0"], params[n[-10], "Jmax0"], params[n[-1], "alpha0"], params[n[-1], "Jmax0"], length = 0.125, lwd = 1.1)
+
+```
+
+After 100 steps, we can see a cloud start to form, with occasional wanderings around the periphery.
+
+```{r echo= TRUE, eval=FALSE}
+n = 1:100
+plot(params[n, "alpha0"], params[n, "Jmax0"], type = 'l', xlim = xrng, ylim = yrng)
+```
+
+After $nrow(params)$ steps what we see is a point cloud of samples from the joint posterior distribution. When viewed sequentially, points are not independent, but we are interested in working with the overall distribution, where the order of samples is not important.
+
+```{r echo= TRUE, eval=FALSE}
+n = 1:nrow(params)
+plot(params[n, "alpha0"], params[n, "Jmax0"], type = 'p', pch = "+", cex = 0.5, xlim = xrng, ylim = yrng)
+```
+
+
+#### Evaluating the model output
+
+A really important aspect of Bayesian inference is that the output is the **joint** posterior probability of all the parameters. This is very different from an optimization approach, which tries to find a single best parameter value. It is also different from estimating the independent posterior probabilities of each parameter -- Bayesian posteriors frequently have strong correlation among parameters for reasons having to do both with model structure and the underlying data.
+
+The model we're fitting has six free parameters, and therefore the output matrix has six columns, of which we've only looked at two. Unfortunately it is impossible to visualize a six-dimensional parameter space on a two-dimensional screen, so a very common practice (for models with a small to medium number of parameters) is to look at all pairwise scatterplots. If parameters are uncorrelated we will typically see oval-shaped clouds that are oriented in the same directions as the axes. For parameters with linear correlations those clouds will lie along a diagonal. For parameters with nonlinear trade-offs the shapes of the parameter clouds can be more complex, such as banana or triangular shapes.
For the FvCB model we see very few parameters that are uncorrelated or have simple linear correlations, a fact that we should keep in mind when interpreting individual parameters.
+
+```{r echo= TRUE, eval=FALSE}
+pairs(params, pch = ".")
+```
+
+The three most common outputs examined in almost all Bayesian analyses are the MCMC chains themselves, the marginal distributions of each parameter, and the overall summary statistics.
+
+The 'trace' diagrams below show the history of individual parameters during the MCMC sampling. There are different color lines that represent the fact that JAGS ran the MCMC multiple times, with each run (i.e. each color) being referred to as a different $chain$. It is common to run multiple chains in order to assess whether the model, started from different points, consistently converges on the same answer. The ideal trace plot looks like white noise with all chains in agreement.
+
+The 'density' figures represent smoothed versions of the _marginal_ distributions of each parameter. The tick marks on the x-axis are the actual samples. You will note that some posteriors will look approximately Normal, while others may be skewed or have clearly defined boundaries. On occasion there will even be posteriors that are multimodal. There is no assumption built into Bayesian statistics that the posteriors need be Normal, so as long as an MCMC has converged this diversity of shapes is valid. [note: the most common cause of multimodal posteriors is a lack of convergence]
+
+Finally, the summary table reports, for each parameter, a mean, standard deviation, two variants of standard error, and standard quantile estimates (95% CI, interquartile, and median). The standard deviation of the posterior is a good summary statistic about **how uncertain we are about a parameter**. The Naive SE is the traditional $\frac{SD}{\sqrt{n}}$, which is an estimate of the **NUMERICAL accuracy in our estimate of the mean**. As we run the MCMC longer (i.e. take more samples), we get an answer that is numerically more precise (SE converges to 0) but the uncertainty in the parameter (i.e. SD) stays the same because that's determined by the sample size of the DATA, not the length of the MCMC. Finally, the Time-series SE is a variant of the SE calculation that accounts for the autocorrelation in the MCMC samples. In practice it is therefore more appropriate to use this term to assess numerical accuracy.
+
+```{r echo= TRUE, eval=FALSE}
+plot(fit$params, auto.layout = FALSE) ## MCMC diagnostic plots
+summary(fit$params) ## parameter estimates
+```
+
+Assessing the convergence of the MCMC is first done visually, but more generally the use of statistical diagnostics to assess convergence is highly encouraged. There are a number of metrics in the literature, but the most common is the Gelman-Brooks-Rubin statistic, which compares the variance within each chain to the variance across chains. If the chains have converged then this quantity should be 1. Values less than 1.05 are typically considered sufficient by most statisticians, but these are just rules-of-thumb.
+
+```{r echo= TRUE, eval=FALSE}
+gelman.plot(fit$params, auto.layout = FALSE)
+gelman.diag(fit$params)
+```
+
+As with any modeling, whether statistical or process-based, another common diagnostic is a predicted vs observed plot. In a perfect model the data would fall along the 1:1 line. The deviations away from this line are the model residuals.
If observations lie along a line other than the 1:1 line, this indicates that the model is biased in some way. This bias is often assessed by fitting a linear regression to the points, though two important things are noteworthy about this practice. First, the $R^2$ and residual error of this regression are not the appropriate statistics to use to assess model performance (though you will frequently find them reported incorrectly in the literature). The correct $R^2$ and residual error (a.k.a. Root Mean Square Error, RMSE) are based on deviations from the 1:1 line, not the regression. The code below shows these two terms calculated by hand. The second thing to note about the regression line is that the standard regression F-test, which assesses deviations from 0, is not the test you are actually interested in, which is whether the line differs from 1:1. Therefore, while the test on the intercept is correct, as this value should be 0 in an unbiased model, the test statistic on the slope is typically of less interest (unless your question really is about whether the model is doing better than random). However, this form of bias can easily be assessed by looking to see if the CI for the slope overlaps with 1.
+
+```{r echo= TRUE, eval=FALSE}
+## predicted vs observed plot
+par(mfrow = c(1, 1))
+mstats = summary(fit$predict)
+pmean = mstats$statistics[grep("pmean", rownames(mstats$statistics)), 1]
+plot(pmean, dat$Photo, pch = "+", xlab = "Predicted A", ylab = "Observed A")
+abline(0, 1, col = 2, lwd = 2)
+bias.fit = lm(dat$Photo ~ pmean)
+abline(bias.fit, col = 3, lty = 2, lwd = 2)
+legend("topleft", legend = c("1:1", "regression"), lwd = 2, col = 2:3, lty = 1:2)
+summary(bias.fit)
+RMSE = sqrt(mean((pmean - dat$Photo)^2))
+RMSE
+R2 = 1 - RMSE^2 / var(dat$Photo)
+R2
+confint(bias.fit)
+```
+
+In the final set of plots we look at the actual A-Ci and A-Q curves themselves. Here we've added two interval estimates around the curves. The CI captures the uncertainty in the _parameters_ and will asymptotically shrink with more and more data. The PI (predictive interval) includes the parameter and residual error. If our fit is good then the 95% PI should thus encompass at least 95% of the observations. That said, as with any statistical model we want to look for systematic deviations in the residuals from either the mean or the range of the PI.
+
+```{r echo= TRUE, eval=FALSE}
+## Response curve
+plot_photo(dat, fit)
+```
+
+Note: on the last figure you will get warnings about "No ACi" and "No AQ" which can be ignored. These are occurring because the file that had the ACi curve didn't have an AQ curve, and the file that had the AQ curve didn't have an ACi curve.
+
+
+#### Additional information
+
+There is a more detailed R Vignette on the use of the PEcAn photosynthesis module available in the [PEcAn Repository](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd).
+
+#### Citations
+
+Dietze, M.C. (2014). Gaps in knowledge and data driving uncertainty in models of photosynthesis. Photosynth. Res., 19, 3–14.
+
+Farquhar, G., Caemmerer, S. & Berry, J.A. (1980). A biochemical model of photosynthetic CO2 assimilation in leaves of C3 species. Planta, 149, 78–90.
+
+Feng, X. & Dietze, M.C. (2013). Scale dependence in the effects of leaf ecophysiological traits on photosynthesis: Bayesian parameterization of photosynthesis models. New Phytol., 200, 1132–1144.
+
+
+
+### Parameter Data Assimilation
+
+
+#### Objectives
+* Gain hands-on experience in using Bayesian MCMC techniques to calibrate a simple ecosystem model using parameter data assimilation (PDA)
+* Set up and run a PDA in PEcAn using the model emulation technique, assimilating NEE data from Niwot Ridge
+* Examine outputs from a PDA for the SIPNET model and evaluate the calibrated model against i) the data used to constrain the model, and ii) additional data for the same site
+
+#### Larger Context
+Parameter data assimilation (PDA) occurs as one step in the larger process of model calibration, validation, and application. The goal of PDA is to update our estimates of the posterior distributions of the model parameters using data that correspond to model outputs. This differs from our previous use of PEcAn to constrain a simple model using data that map directly to the model parameters. Briefly, the recommended overall approach for model calibration and validation consists of the following steps:
+
+1. Assemble and process data sets required by the model as drivers
+2. Perform an initial test-run of the model as a basic sanity check
+    + Were there errors in drivers? (return to 1)
+    + Is the model in the same ballpark as the data?
+3. Construct priors for model parameters
+4. Collect/assemble the data that can be used to constrain model parameters and outputs
+5. Meta-analysis
+6. Sensitivity analysis (SA)
+7. Variance Decomposition (VD)
+8. Determine what parameters need further constraint
+    + Does this data exist in the literature? (repeat 4-8)
+    + Can I collect this data in the field? (repeat 4-8)
+9. Ensemble Analysis
+    + Is reality within the range of the uncertainty in the model?
+10. Evaluate/estimate uncertainties in the data
+11. Parameter Data Assimilation:
+    + Propose new parameter values
+    + Evaluate L(data | param) & prior(param)
+    + Accept or reject the proposed parameter values
+    + Repeat many times until a histogram of accepted parameter values approximates the true posterior distribution.
+12. Model evaluation [preferably ensemble based]
+    + Against data used to constrain model
+    + Against additional data for this site
+        + Same variable, different time
+        + Different variables
+    + Against data at a new site
+    + Do I need more data? Repeat 4-9 (direct data constraint) or 6-11 (parameter data assimilation).
+13. Application [preferably ensemble forecast or hindcast]
+
+#### Connect to Rstudio
+Today, we're again going to work mostly in Rstudio, in order to easily edit advanced PEcAn settings and browse files. So if you haven't already, connect now to the Rstudio server on your VM ([URL]/rstudio).
+
+This tutorial assumes you have successfully completed an ensemble and a sensitivity analysis (Demo 2) before.
+
+#### Defining variables
+
+The following variables need to be set specifically for the site and workflow being run:
+```{r echo= TRUE, eval=FALSE}
+workflow_id <- 99000000002 ## comes from the History table, your successful ensemble run's workflow ID
+
+## from URL/bety/inputs.
+## Search by Ameriflux ID (e.g. US-NR1)
+## Find the "plain" site record (no CF or model name) that's on your server
+## (probably last in the list)
+## Click on the magnifying glass icon then look under "View Related Files"
+datadir <- "/home/carya/output/dbfiles/AmerifluxLBL_site_0-772/"
+
+## where PEcAn is saving output (default OK on VM)
+outdir <- "/home/carya/output/"
+```
+
+
+#### Initial Ensemble Analysis
+A good place to start when thinking about a new PDA analysis is to look at the current model fit to observed data. In fact, we want to compare data to a full ensemble prediction from the model. This is important because our current parameter distributions will be the priors for PDA. While the analysis will translate these priors into more optimal (in terms of producing model output that matches observations) and more confident (i.e. narrower) posterior distributions, these results are inherently constrained by the current parameter distributions. Thus, if reality falls far outside the prior ensemble confidence interval (which reflects the current uncertainty of all model parameters), data assimilation will not be able to fix this. In such cases, either the prior parameter estimates must already be over-constrained, or there are structural errors in the model itself that need fixing.
+
+To begin, let’s load up some NEE observations so that we can plot them along with our ensemble predictions. In the code below, the specific IDs and paths may vary depending on the site and your previous runs.
+
+```{r echo= TRUE, eval=FALSE}
+
+library(PEcAn.all)
+
+# read settings
+settings <- read.settings(file.path(outdir, paste0("PEcAn_", workflow_id), "pecan.CONFIGS.xml"))
+
+# open up a DB connection
+bety <- settings$database$bety
+bety <- dplyr::src_postgres(host = bety$host, user = bety$user, password = bety$password, dbname = bety$dbname)
+
+# Fill out the arguments needed by the load_data function
+
+# read file format information
+format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 5000000002)
+start_year <- lubridate::year(settings$run$start.date)
+end_year <- lubridate::year(settings$run$end.date)
+vars.used.index <- which(format$vars$bety_name %in% c("NEE", "UST"))
+
+obs <- PEcAn.benchmark::load_data(data.path = file.path(datadir, "AMF_US-NR1_BASE_HH_9-1.csv"),
+                                  format = format, start_year = start_year, end_year = end_year,
+                                  site = settings$run$site,
+                                  vars.used.index = vars.used.index,
+                                  time.row = format$time.row)
+
+obs$NEE[obs$UST < 0.4] <- NA ## U* filter
+NEEo <- obs$NEE
+```
+
+Now let's load up our ensemble outputs from the previous ensemble analysis (Demo 2) and plot our ensemble predictions against our NEE observations.
+
+```{r echo= TRUE, eval=FALSE}
+
+# load outputs, try not to delete the prePDA ensemble output filename from your environment
+prePDA_ensemble_output_file <- file.path(outdir, paste0("PEcAn_", workflow_id, "/ensemble.ts.", settings$ensemble$ensemble.id, ".NEE.2003.2006.Rdata"))
+load(prePDA_ensemble_output_file)
+
+# calculate CI
+pre_pda_ens <- ensemble.ts[["NEE"]]
+preCI <- apply(pre_pda_ens, 2, quantile, c(0.025, 0.5, 0.975), na.rm = TRUE)
+
+# plot model ensemble
+ymin <- min(min(c(preCI, NEEo), na.rm = TRUE))
+ymax <- max(max(c(preCI, NEEo), na.rm = TRUE))
+plot(preCI[2,], ylim = c(ymin, ymax), lwd = 2, xlab = "time", ylab = "NEE", main = "pre-PDA model ensemble vs data", type = "n")
+prepoly <- 1:dim(preCI)[2]
+polygon(c(prepoly, rev(prepoly)), c(preCI[3,], rev(preCI[1,])), col = 'khaki', border = NA)
+
+# add data
+points(NEEo, pch = ".", col = adjustcolor("purple", alpha.f = 0.5))
+legend("topright", legend = c("Data", "Pre-PDA Model"), pch = c(15, 15),
+       col = c("purple", "khaki"))
+```
+
+When interpreting your results it is important to remember the difference between a confidence interval, which just includes parameter uncertainties, and a predictive interval, which includes parameter and residual uncertainties. Your ensemble analysis plot illustrates the former—i.e., the confidence in the mean NEE. By contrast, the data reflect both changes in mean NEE and random variability. As such, we can't expect all the data to fall within the CI; in fact, if we had unlimited data to constrain mean NEE, the CI would collapse to a single line and none of the data would be contained! However, your plot will give you an idea of how much uncertainty there is in your model currently, and help to identify systematic errors like bias (values consistently too high or low) or poorly represented seasonal patterns.
+
+#### Questions:
+* Does your ensemble agree well with the data?
+    + If so, how much room for improvement is there, in terms of tightening the CI?
+    + If not, what are the greatest discrepancies?
+* What are some of the problems (with model, data, and/or PEcAn) that might explain the data-model disparity you see?
+
+#### Choosing Parameters
+Beyond exploratory exercises, the first step of a PDA analysis is to choose the model parameters you will target for optimization. PDA is computationally expensive (even when using an emulator), and the cost increases exponentially with the number of parameters targeted. The number you can handle in any given analysis completely depends on the complexity of the model and your available computational resources, but in practice it's going to be rather small (~1–10) relative to the large number of parameters in a mechanistic ecosystem model (~10–100).
+
+Given this limitation, it is important to target parameters that can contribute substantially to improving model fit. If you recall, identifying those parameters was the goal of the uncertainty analysis you conducted previously, in the second PEcAn demo. Let's revisit that analysis now: open your variance decomposition graph from Demo 2. From this figure, decide which parameters you will target with PDA. As noted, an obvious criterion is that a parameter should be contributing a large amount of uncertainty to the current model, because otherwise it simply can't change the model output much no matter how much you try to optimize it. But there are other considerations too. For example, if two parameters have similar or competing roles in the model, you may have trouble optimizing both simultaneously. In practice, there will likely be some guess-and-testing involved, though a good understanding of how the model works will help. It may also help to look at the shape of the sensitivity responses and details of model fit to data (your ensemble analysis from the previous section).
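+
+If you prefer ranking candidates numerically to reading them off the graph, a sketch like the one below can help. The file name and list-element names here are assumptions that may differ across PEcAn versions, so check what is actually on disk in your workflow directory before running it:
+
+```{r echo= TRUE, eval=FALSE}
+# Hypothetical sketch: rank parameters by their contribution to output variance.
+# The .Rdata file name and the element names below are assumptions; adjust them
+# to the sensitivity.results file you actually see under PEcAn_[workflow_id]/.
+load(file.path(outdir, paste0("PEcAn_", workflow_id), "sensitivity.results.NEE.Rdata"))
+vd <- sensitivity.results[["temperate.coniferous"]]$variance.decomposition.output
+sort(vd$variances, decreasing = TRUE)[1:10] # top 10 contributors to output variance
+```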
For the purposes of this demo, choose eight to ten parameters (in total, if you have more than one PFT) that contribute high uncertainty to model output and/or seem like good choices for some other rational reason.
+
+#### Questions:
+* Which parameters did you choose, and why?
+
+#### Editing PEcAn settings
+Now let’s add settings to tell PEcAn how to run the PDA with the emulator; we will come to the details of model emulation later. Open up the pecan.CONFIGS.xml file you located previously, and choose ```File > Save as...``` from the menu to save a new copy as **pecan.PDA.xml**. Now add the block of XML listed below to the file, immediately after the `<pecan>` line. Check and fill in the parts corresponding to your run where necessary.
+
+In this block, use the ```<param>``` tags to identify the parameters you’ve chosen for PDA (it's up to you to choose the number of parameters you want to constrain; you can then set the ```<n.knot>``` to be >= 10 per parameter you choose, e.g. 200 knots for 10 parameters). Here, you need to use PEcAn’s standard parameter names, which are generally not the same as what’s printed on your variance decomposition graph. To find your parameters look at the row names in the ```prior.distns.csv``` file for each PFT under the PFT pulldown menu. Insert the variable name (exactly, and case sensitive) into the ```<param>``` tags of the XML code below.
+
+In addition, you may need to edit the ```<inputs>``` block, depending on the site and year you ran previously. The rest of the settings control options for the PDA analysis (how long to run, etc.), and also identify the data to be used for assimilation. For more details, see the assim.batch vignette on the PEcAn GitHub page (https://goo.gl/9hYVPQ).
+
+```
+<?xml version="1.0"?>        <-- These lines are already in there. Don't duplicate them,
+<pecan>                      <-- just paste the block below right after them.
+ <assim.batch>
+  <method>emulator</method>
+  <n.knot>160</n.knot>       <-- FILL IN
+  <iter>25000</iter>
+  <chain>3</chain>
+  <param.names>
+   <YOUR_PFT_1_NAME>
+    <param>YOUR_PFT_1_PARAM_1</param>    <-- FILL IN
+    <param>YOUR_PFT_1_PARAM_2</param>    <-- FILL IN
+   </YOUR_PFT_1_NAME>
+   <YOUR_PFT_2_NAME>
+    <param>YOUR_PFT_2_PARAM_1</param>    <-- FILL IN
+    <param>YOUR_PFT_2_PARAM_2</param>    <-- FILL IN
+    <param>YOUR_PFT_2_PARAM_3</param>    <-- FILL IN
+    <param>YOUR_PFT_2_PARAM_4</param>    <-- FILL IN
+    <param>YOUR_PFT_2_PARAM_5</param>    <-- FILL IN
+    <param>YOUR_PFT_2_PARAM_6</param>    <-- FILL IN
+   </YOUR_PFT_2_NAME>
+  </param.names>
+  <jump>
+   <adapt>100</adapt>
+   <adj.min>0.1</adj.min>
+   <ar.target>0.3</ar.target>
+  </jump>
+  <inputs>
+   <file>
+    <path>
+     <path>/home/carya/output/dbfiles/AmerifluxLBL_site_0-772/AMF_US-NR1_BASE_HH_9-1.csv</path>
+    </path>
+    <format>5000000002</format>
+    <input.id>1000011238</input.id>      <-- FILL IN, from BETY inputs table, this is *NOT* the workflow ID
+    <likelihood>Laplace</likelihood>
+    <variable.name>
+     <variable.name>NEE</variable.name>
+     <variable.name>UST</variable.name>
+    </variable.name>
+    <variable.id>297</variable.id>
+   </file>
+  </inputs>
+ </assim.batch>
+```
+Once you’ve made and saved the changes to your XML, load the file and check that it contains the new settings:
+
+```{r echo= TRUE, eval=FALSE}
+settings <- read.settings(file.path(outdir, paste0("PEcAn_", workflow_id), "pecan.PDA.xml"))
+settings$assim.batch
+```
+
+If the printed list contains everything you just added to pecan.PDA.xml, you’re ready to proceed.
+
+#### Investigating PEcAn function pda.emulator (optional)
+
+Before we run the data assimilation, let's take a high-level look at the organization of the code. Use the Rstudio file browser to open up ```~/pecan/modules/assim.batch/R/pda.emulator.R```. This code works in much the same way as the pure statistical models that we learned about earlier in the week, except that the model being fit is a statistical model that emulates a complicated process-based computer simulation (i.e., an ecosystem model).
We could have directly used the ecosystem model (indeed, PEcAn's other PDA functions perform MCMC by actually running the ecosystem model at each iteration; see the pda.mcmc.R script as an example); however, this would require a lot more computational time than we have today. Instead, here we will use a technique called model emulation. This technique allows us to run the model a relatively small number of times, with parameter values that have been carefully chosen to give good coverage of parameter space. We can then interpolate the likelihood calculated for each of those runs to get a surface that "emulates" the true likelihood, and perform regular MCMC on it, except instead of actually running the model on every iteration to get a likelihood, this time we just get an approximation from the likelihood emulator. The general algorithm of this method can be expressed as:
+
+1. Propose an initial parameter-set sampling design
+2. Run the full model for each parameter set
+3. Evaluate the likelihoods
+4. Construct an emulator of the multivariate likelihood surface
+5. Use the emulator to estimate posterior parameter distributions
+6. (Optional) Refine the emulator by proposing new design points, go to 2
+
+For now, we just want you to get a glimpse at the overall structure of the code, which is laid out in the comment headers in ```pda.emulator()```. Most of the real work gets done by the functions this code calls, which are all located in the file ```~/pecan/modules/assim.batch/R/pda.utils.R```, and the MCMC will be performed by the ```mcmc.GP()``` function in ```~/pecan/modules/emulator/R/minimize.GP.R```. To delve deeper into how the code works, take a look at these files when you have the time.
+
+#### Running a demo PDA
+
+Now we can go ahead and run a data assimilation MCMC with the emulator. Since you've already loaded the settings containing your modified XML block, all you need to do to start the PDA is run `pda.emulator(settings)`. But in the emulator workflow there is a somewhat time-consuming step where we calculate the effective sample size of the input data, and we have already done this step for you. You can load it up and pass it to the function explicitly in order to skip this step:
+
+```{r echo= TRUE, eval=FALSE}
+# load input data
+load("/home/carya/pda/pda_externals.Rdata")
+postPDA.settings <- pda.emulator(settings, external.data = inputs_w_neff)
+```
+After executing the code above, you will see print-outs to the console. The code begins by loading the prior values, which in this case are the posterior distributions coming from your previous meta-analysis. Then, normally, it loads the observational data and carries out the necessary conversions and formatting to align it with model outputs, as we did separately above, but today it will skip this step as we are passing the data externally. After this step, you will see a progress bar while the actual model is run n.knot times with the proposed parameter sets, and then the outputs from these runs are read. Next, this model output is compared to the specified observational data, and the likelihood is calculated using the heteroskedastic Laplacian discussed previously. Once we calculate the likelihoods, we fit an emulator which interpolates the likelihood in parameter space between the points where the model has actually been run. Now we can put this emulator in the MCMC algorithm instead of the model itself. Within the MCMC loop the code proposes new parameter values from a multivariate normal jump distribution. The corresponding likelihood is approximated by the emulator, and the new parameter values are accepted or rejected based on their posterior probability relative to the current values.
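+
+As a toy illustration of the emulator idea (this is not PEcAn's implementation), the sketch below evaluates an "expensive" one-parameter log-likelihood at a handful of design points, interpolates it with a spline standing in for the Gaussian process emulator, and then runs a Metropolis sampler on the cheap interpolated surface:
+
+```{r echo= TRUE, eval=FALSE}
+# Toy emulator sketch (illustrative only; PEcAn uses a GP emulator in mcmc.GP)
+expensive_logL <- function(theta) dnorm(theta, mean = 2, sd = 0.5, log = TRUE)
+
+design <- seq(-2, 6, length.out = 15)  # "knots" spread over parameter space
+logL <- sapply(design, expensive_logL) # one (pretend-expensive) model run per knot
+emulator <- splinefun(design, logL)    # cheap stand-in for the GP emulator
+
+theta <- 0
+samples <- numeric(5000)
+for (i in seq_along(samples)) {
+  proposal <- rnorm(1, theta, 0.5)     # normal jump distribution (1-D here)
+  if (log(runif(1)) < emulator(proposal) - emulator(theta)) theta <- proposal
+  samples[i] <- theta
+}
+hist(samples) # approximates the posterior without rerunning the expensive model
+```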
+
+
+#### Outputs from PEcAn’s Parameter Data Assimilation
+
+When the PDA is finished, a number of outputs are automatically produced that are either the same as or similar to posterior outputs that we’ve seen before. These are located in the ```PEcAn_[workflow_id]/pft/*``` output directory and are identified by ```pda.[PFT]_[workflow_id]``` in the filenames:
+
+* posteriors.pda.[PFT]*.pdf shows the posterior distributions resulting from your PDA
+* trait.mcmc.pda.[PFT]*.Rdata contains all the parameter samples contained in the PDA posterior
+* mcmc.pda.[PFT]*.Rdata is essentially the same thing in a different format
+* mcmc.diagnostics.pda.[PFT]*.pdf shows trace plots and posterior densities for each assimilated parameter, as well as pairs plots showing all pairwise parameter correlations.
+
+Together, these files allow you to evaluate whether a completed PDA analysis has converged and how the posterior distributions compare to the priors, and to use the posterior samples in further analyses, including additional PDA.
+
+If you haven't done so already, take a look at all of the outputs described here.
+
+#### Questions:
+* Do the diagnostic figures indicate that your likelihood at least improved over the course of the analysis?
+* Does the MCMC appear to have converged?
+* Are the posterior distributions well resolved?
+
+#### Post-PDA analyses
+
+In addition to the outputs of the PDA itself, you may want to conduct ensemble and/or sensitivity analyses based on the posteriors of the data assimilation, in order to check progress towards improved model fit and/or changing sensitivity. For this, you need to generate new model runs based on parameters sampled from the updated (by PDA) posterior, which is a simple matter of rerunning several steps of the PEcAn workflow.
+
+The PDA you ran has automatically produced an updated XML file (`pecan.pda***.xml`) that includes the posterior id to be used in the next round of runs. Locate this file in your run directory and load the file for the post-PDA ensemble/sensitivity analysis (if you already have the `settings` list in your working environment you don't need to re-read the settings):
+
+
+```{r echo= TRUE, eval=FALSE}
+
+ # read post-PDA settings if you don't have them in your working environment
+ # replace the *** with the ensemble id given by the workflow
+ # postPDA.settings <- read.settings(file.path(outdir, paste0("PEcAn_", workflow_id), "pecan.pda***.xml"))
+
+ # Call model specific write.configs
+ postPDA.settings <- run.write.configs(postPDA.settings,
+                                       write = postPDA.settings$database$bety$write,
+                                       ens.sample.method = postPDA.settings$ensemble$method)
+
+ # Let's save the settings with the new ensemble id
+ PEcAn.settings::write.settings(postPDA.settings, outputfile = paste0('pecan.pda', postPDA.settings$assim.batch$ensemble.id, '.xml'))
+
+ # Start ecosystem model runs, this one takes a while...
+
+ PEcAn.remote::start.model.runs(postPDA.settings, postPDA.settings$database$bety$write)
+
+ # Get results of model runs
+ get.results(postPDA.settings)
+
+ # Repeat ensemble analysis with PDA-constrained params
+ run.ensemble.analysis(postPDA.settings, TRUE)
+
+ # let's re-load the pre-PDA ensemble outputs
+ load(prePDA_ensemble_output_file)
+ pre_pda_ens <- ensemble.ts[["NEE"]]
+
+ # now load the post-PDA ensemble outputs
+ postPDA_ensemble_output_file <- file.path(outdir, paste0("PEcAn_", workflow_id, "/ensemble.ts.", postPDA.settings$ensemble$ensemble.id, ".NEE.2003.2006.Rdata"))
+ load(postPDA_ensemble_output_file)
+ post_pda_ens <- ensemble.ts[["NEE"]]
+
+ # try changing the window value for daily, weekly, monthly smoothing later
+ # see if this changes your model-data agreement, why?
+ window <- 1 # no smoothing
+ pre_pda <- t(apply(pre_pda_ens, 1, function(x) {
+   tapply(x, rep(1:(length(x) / window + 1), each = window)[1:length(x)],
+          mean, na.rm = TRUE)}))
+ post_pda <- t(apply(post_pda_ens, 1, function(x) {
+   tapply(x, rep(1:(length(x) / window + 1), each = window)[1:length(x)],
+          mean, na.rm = TRUE)}))
+ fobs <- tapply(NEEo, rep(1:(length(NEEo) / window + 1),
+                          each = window)[1:length(NEEo)], mean, na.rm = TRUE)
+
+
+ # save the comparison plots to pdf
+ pdf(file.path(outdir, paste0("PEcAn_", workflow_id), "model.data.comparison.pdf"), onefile = TRUE,
+     paper = 'A4r', height = 15, width = 20)
+
+ # now plot the pre-PDA ensemble similar to the way we did before
+ preCI <- apply(pre_pda, 2, quantile, c(0.025, 0.5, 0.975), na.rm = TRUE)
+ ymin <- min(min(c(preCI, fobs), na.rm = TRUE))
+ ymax <- max(max(c(preCI, fobs), na.rm = TRUE))
+ plot(pre_pda[1,], ylim = c(ymin, ymax), lwd = 2, xlab = "time", ylab = "NEE", main = "pre-PDA vs post-PDA", type = "n")
+ prepoly <- 1:dim(preCI)[2]
+ polygon(c(prepoly, rev(prepoly)), c(preCI[3,], rev(preCI[1,])), col = 'khaki', border = NA)
+
+ # plot the post-PDA ensemble
+ postCI <- apply(post_pda, 2, quantile, c(0.025, 0.5, 0.975), na.rm = TRUE)
+ postpoly <- 1:dim(postCI)[2]
+ polygon(c(postpoly, rev(postpoly)), c(postCI[3,], rev(postCI[1,])), col = 'lightblue', border = NA)
+
+ # finally let's add the data and see how we did
+ points(fobs, pch = ".", col = adjustcolor("purple", alpha.f = 0.7))
+ legend("topright", legend = c("Data", "Pre-PDA Model", "Post-PDA Model"), pch = c(15, 15, 15),
+        col = c("purple", "khaki", "lightblue"))
+
+ dev.off()
+
+
+ # Repeat variance decomposition to see how constraints have changed
+ run.sensitivity.analysis(postPDA.settings)
+```
+
+
+Now you can check the new figures produced by your analyses under ```PEcAn_[workflow_id]/pft/*/variance.decomposition.*.pdf``` and ```PEcAn_[workflow_id]/pft/*/sensitivity.analysis.*.pdf```, and compare them to the previous ones. Also, take a look at the comparison of model outputs to data when we run SIPNET with pre- and post-PDA parameter (mean) values under ```PEcAn_[workflow_id]/model.data.comparison.pdf```.
+
+#### Questions:
+* Looking at the ensemble analysis outputs in order (i.e., in order of increasing ID in the filenames), qualitatively how did the model fit to data change over the course of the analysis?
+* Based on the final ensemble analysis, what are the major remaining discrepancies between model and data?
+    + Can you think of the processes / parameters that are likely contributing to the differences?
+    + What would be your next steps towards evaluating or improving model performance?
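+
+One way to ground your answers is to quantify the change in fit directly, e.g. by comparing the RMSE of the pre- and post-PDA ensemble medians against the (optionally smoothed) observations. A short sketch, reusing the objects created in the code above and assuming the model and data series line up as in the plot:
+
+```{r echo= TRUE, eval=FALSE}
+# Sketch: RMSE of the ensemble medians vs observations, before and after PDA
+rmse <- function(pred, obs) sqrt(mean((pred - obs)^2, na.rm = TRUE))
+rmse(preCI[2, ], fobs)  # pre-PDA
+rmse(postCI[2, ], fobs) # post-PDA
+```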
+
+
+
+### State-Variable Data Assimilation
+
+
+#### Objectives:
+* Assimilate tree-ring-estimated NPP & inventory AGB within the SIPNET model in order to:
+    + Reconcile data and model NPP & AGB estimates
+    + Constrain inferences about other ecosystem responses
+
+#### Overview:
+* Initial Run
+* Settings
+* Load and match plot and increment data
+* Estimate tree-level data uncertainties
+* Estimate allometric relationships
+* Estimate stand-level NPP
+* Sample initial conditions and parameters
+* Run Ensemble Kalman Filter
+
+#### Initial Run
+
+Perform a site-level SIPNET run using the following settings
+
+* Site = UNDERC
+* Start = 01/01/1979
+* End = 12/31/2015
+* Met = NARR
+* Check **Brown Dog**
+* When the run is complete, open the pecan.xml and cut-and-paste the **outdir** for later use
+
+#### Settings:
+
+* Open the PEcAn RStudio environment back up.
+
+* Set your working directory to the outdir from above ```setwd(outdir)``` and shift the file browser to that location (Files > More > Go To Working Directory)
+
+* Open up the latest settings file ```pecan.CONFIGS.xml```.
+
+* At the top of the file add the following tags to set the ensemble size
+
+```
+<state.data.assimilation>
+  <n.ensemble>35</n.ensemble>
+  <process.variance>FALSE</process.variance>
+  <sample.parameters>TRUE</sample.parameters>
+  <data>
+    <format_id>1000000040</format_id>
+    <input.id>1000013298</input.id>
+  </data>
+  <state.variables>
+    <variable>
+      <variable.name>NPP</variable.name>
+      <unit>MgC/ha/yr</unit>
+      <min_value>-9999</min_value>
+      <max_value>9999</max_value>
+    </variable>
+    <variable>
+      <variable.name>AbvGrndWood</variable.name>
+      <unit>KgC/m^2</unit>
+      <min_value>0</min_value>
+      <max_value>9999</max_value>
+    </variable>
+    <variable>
+      <variable.name>TotSoilCarb</variable.name>
+      <unit>KgC/m^2</unit>
+      <min_value>0</min_value>
+      <max_value>9999</max_value>
+    </variable>
+    <variable>
+      <variable.name>LeafC</variable.name>
+      <unit>m^2/m^2</unit>
+      <min_value>0</min_value>
+      <max_value>9999</max_value>
+    </variable>
+    <variable>
+      <variable.name>SoilMoistFrac</variable.name>
+      <unit></unit>
+      <min_value>0</min_value>
+      <max_value>9999</max_value>
+    </variable>
+    <variable>
+      <variable.name>SWE</variable.name>
+      <unit>cm</unit>
+      <min_value>0</min_value>
+      <max_value>9999</max_value>
+    </variable>
+    <variable>
+      <variable.name>Litter</variable.name>
+      <unit>gC/m^2</unit>
+      <min_value>0</min_value>
+      <max_value>9999</max_value>
+    </variable>
+  </state.variables>
+  <forecast.time.step>year</forecast.time.step>
+  <start.date>1980/01/01</start.date>
+  <end.date>2015/12/31</end.date>
+</state.data.assimilation>
+```
+
+* Delete the ```<pfts>``` block from the settings
+
+* In the PEcAn History, go to your PDA run and open ```pecan.pda[UNIQUEID].xml``` (the one PEcAn saved for you AFTER you finished the PDA)
+
+* Cut-and-paste the PDA ```<pfts>``` block into the SDA settings file
+
+* Save the file as ```pecan.SDA.xml```
+
+#### Loading data
+
+* If you have not done so already, clone (new) or pull (update) the PalEON Camp2016 repository
+    + Open a shell under Tools > Shell
+    + `cd` to go to your home directory
+    + To clone: ```git clone git@github.com:PalEON-Project/Camp2016.git```
+    + To pull: ```cd Camp2016; git pull https://github.com/PalEON-Project/Camp2016.git master```
+
+* Open the tree-ring data assimilation workflow under Home > pecan > scripts > workflow.treering.R
+
+* Run the script from the start up through the LOAD DATA section
+
+
+#### Estimating tree-level data uncertainties
+
+
+One thing that is critical for data assimilation, whether it is being used to estimate parameters or state variables, is the careful consideration and treatment of the uncertainties in the data itself. For this analysis we will be using a combination of forest plot and tree ring data in order to estimate stand-level productivity. The basic idea is that we will be using the plot-sample of measured DBHs as an estimate of the size structure of the forest, and will use the annual growth increments to project that forest backward in time. Tree biomass is estimated using empirical allometric equations relating DBH to aboveground biomass.
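+
+Stripped of all uncertainty, the core calculation is easy to sketch in a few lines of R; everything below (the DBH values, ring increments, and allometric coefficients b0 and b1) is made up purely for illustration:
+
+```{r echo= TRUE, eval=FALSE}
+# Illustrative sketch only: back-cast DBH from ring increments, apply allometry
+dbh_now <- c(25.3, 31.1, 18.9, 40.2)             # made-up measured DBHs (cm)
+inc <- matrix(runif(4 * 10, 0.2, 0.6), nrow = 4) # made-up annual increments (cm); cols = years back
+dbh <- cbind(dbh_now, dbh_now - t(apply(inc, 1, cumsum))) # col j+1 = DBH j years ago
+b0 <- 0.1; b1 <- 2.4                             # made-up allometric coefficients
+agb <- rev(colSums(b0 * dbh^b1))                 # stand biomass (kg), oldest year first
+npp <- diff(agb)                                 # 'zero-error' NPP as the year-to-year difference
+                                                 # (plot2AGB's unit.conv handles area and kg -> Mg/ha)
+```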
There are a number of sources of uncertainty in this approach, and before moving on you are encouraged to think about and write down a few:
+
+_______________________________________
+_______________________________________
+_______________________________________
+_______________________________________
+_______________________________________
+_______________________________________
+_______________________________________
+_______________________________________
+
+Today we will use a statistical model based on the model developed by Clark et al 2007 that partitions out a number of sources of variability and uncertainty in tree ring and plot data (Fig 1). This model is a Bayesian state-space model that treats the true diameters (D) and increments (X) as latent variables that are connected through a fairly simple mixed effects process model
+
+$$D_{ij,t+1} = D_{ij,t} + \mu + \alpha_{i} + \alpha_t + \epsilon_{ij,t}$$
+
+where i = individual, j = plot, t = time (year). Each of these terms is represented as a normal distribution, where $\mu$ is a fixed effect (overall mean growth rate) and individual and year are random effects
+
+$$\mu \sim N(0.5,0.5)$$
+$$\alpha_{i} \sim N(0,\tau_{i})$$
+$$\alpha_{t} \sim N(0,\tau_{t})$$
+$$\epsilon_{ij,t} \sim N(0,\tau_{e})$$
+
+The connection between the true (latent) variables and the observations is also represented as normal, with these variances representing measurement error:
+
+$$D_{ij,t}^O \sim N( D_{ij,t},\tau_D)$$
+$$X_{ij,t}^O \sim N( X_{ij,t},\tau_r)$$
+
+Finally, there are five gamma priors on the precisions: one for the residual process error ($\tau_{e}$), two for the random effects on individual ($\tau_{i}$) and time ($\tau_t$),
+and two for the measurement errors on DBH ($\tau_D$) and tree rings ($\tau_r$)
+
+$$\tau_{e} \sim Gamma(a_e,r_e)$$
+$$\tau_{i} \sim Gamma(a_i,r_i)$$
+$$\tau_{t} \sim Gamma(a_t,r_t)$$
+$$\tau_{D} \sim Gamma(a_D,r_D)$$
+$$\tau_{r} \sim Gamma(a_r,r_r)$$
+
+This model is encapsulated in the PEcAn function:
+
+```{r echo= TRUE, eval=FALSE}
+InventoryGrowthFusion(combined, n.iter)
+```
+
+where the first argument is the combined data set formatted for JAGS and the second is the number of MCMC iterations. The model itself is written for JAGS and is embedded in the function. Running InventoryGrowthFusion will run a full MCMC algorithm, so it does take a while to run. The code returns the results as an mcmc.list object, and the next line in the script saves this to the output directory. We then call the function InventoryGrowthFusionDiagnostics to print out a set of MCMC diagnostics and example time series for growth and DBH.
+
+#### Allometric equations
+
+Aboveground NPP is estimated as the increment in annual total aboveground biomass. This estimate is imperfect, but not unreasonable for demonstration purposes. As mentioned above, we will take an allometric approach of scaling from diameter to biomass: Biomass = b0 * DBH^b1. We will generate the allometric equation on a PFT level using another Bayesian model that synthesizes across a database of species-level allometric equations (Jenkins et al 2004). This model has two steps within the overall MCMC loop. First, it simulates data from each equation, including both parameter and residual uncertainties, and then it updates the parameters of a single allometric relationship across all observations. The code also fits a second model, which includes a random site effect, but for simplicity we will not be using output from this version. Prior to running the model we have to first query the species codes for our PFTs. Next we pass this PFT list to the model, AllomAve, which saves the results to the output directory in addition to returning a summary of the parameters and covariances.
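+
+Conceptually, each pass of that synthesis is doing something like the following deliberately simplified, non-Bayesian sketch (the coefficients are made up, and AllomAve itself samples parameter and residual uncertainty within MCMC rather than fitting a single regression):
+
+```{r echo= TRUE, eval=FALSE}
+# Simplified sketch of the allometry-synthesis idea (not AllomAve itself):
+# simulate pseudo-data from several published equations, then refit a single
+# pooled power law log(B) = log(b0) + b1 * log(DBH) across all of them.
+set.seed(1)
+eqns <- list(c(b0 = 0.08, b1 = 2.5), c(b0 = 0.12, b1 = 2.3), c(b0 = 0.10, b1 = 2.4))
+dbh <- runif(300, 5, 50)
+sim <- do.call(rbind, lapply(eqns, function(p) {
+  data.frame(dbh = dbh, B = p["b0"] * dbh^p["b1"] * exp(rnorm(300, 0, 0.1)))
+}))
+fit <- lm(log(B) ~ log(dbh), data = sim) # pooled cross-equation allometry
+c(b0 = exp(coef(fit)[1]), b1 = coef(fit)[2])
+```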
+
+#### Estimate stand-level NPP
+
+If we had no uncertainty in our data or allometric equations, we could estimate the stand aboveground biomass (AGB) for every year by summing the biomass of all the trees in the plot and then dividing by the plot area. We would then estimate NPP by the difference in AGB between years. One approach to propagating uncertainties into NPP would be to transform the distribution of DBH for each individual tree and year into a distribution for biomass, then sum over those distributions to get a distribution for AGB, and then subtract the distributions to get the distributions of NPP. However, if we do this we will greatly overestimate the uncertainty in NPP, because we ignore the fact that our increment data have much lower uncertainty than our diameter data. In essence, if we take a random draw from the distribution of AGB in one year and it comes back above average, the AGB is much more likely to also be above average the following year than if we were to do an independent draw from that distribution. Accounting for this covariance requires a fairly simple change in our approach and takes advantage of the nature of the MCMC output. The basic idea is that we are going to take a random draw from the full individual x year diameter matrix, as well as a random draw of allometric parameters, and perform the 'zero-error' calculation approach described above. We will then create a distribution of all the NPP estimates that comes out of repeated draws from the full diameter matrix. This approach is encapsulated in the function `plot2AGB`. The argument unit.conv is a factor that combines both the area of the plot and the unit conversion from tree biomass (kg/tree) to stand AGB (Mg/ha). There are two outputs from plot2AGB: a pdf depicting the estimated NPP and AGB (mean and 95% CI) time series, with each page being a plot; and plot2AGB.Rdata, a binary record of the function output that is read into the data assimilation code. The latter is also returned from the function and assigned to the variable “state”. Finally, we calculate the mean and standard deviation of NPP and save these as obs.
+
+#### Build Initial Conditions
+
+The function sample.IC.SIPNET uses the AGB estimate from the previous function in order to initialize the data assimilation routine. Specifically, it samples n.ensemble values from the first time step of the AGB estimate. Embedded in this function are also a number of prior distributions for other state variables, which are also sampled in order to create a full set of initial conditions for SIPNET.
+
+#### Load Priors
+
+The function sample.parameters samples values from the most recent posterior parameter distributions. You can also specify a specific set of parameters to sample from by setting ```<prior>``` within ```<assim.batch>``` to the posterior.id you want to use. This is useful to know if you want to go back and run with the meta-analysis posteriors, or if you end up rerunning the meta-analysis and need to go back and specify the parameter data assimilation posteriors instead of the most recent.
+
+#### Ensemble Kalman Filter
+
+The function `sda.enkf` will run SIPNET in Ensemble Kalman Filter mode.
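+
+The analysis (update) step at the heart of the EnKF is only a few lines of linear algebra. The generic sketch below illustrates the calculation; it is not `sda.enkf`'s exact internals:
+
+```{r echo= TRUE, eval=FALSE}
+# Generic EnKF analysis-step sketch (illustrative, not sda.enkf's exact code).
+# X: forecast ensemble (n.ensemble x n.state); y: observation vector; H: matrix
+# mapping states to observed quantities; R: observation error covariance.
+enkf_analysis <- function(X, y, H, R) {
+  mu.f <- colMeans(X)                                 # forecast mean
+  P.f <- cov(X)                                       # forecast covariance
+  K <- P.f %*% t(H) %*% solve(H %*% P.f %*% t(H) + R) # Kalman gain
+  mu.a <- mu.f + K %*% (y - H %*% mu.f)               # analysis mean
+  P.a <- (diag(length(mu.f)) - K %*% H) %*% P.f       # analysis covariance
+  list(mu.a = as.vector(mu.a), P.a = P.a)             # used to resample the ensemble
+}
+```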
The output of `sda.enkf` will be all of the run outputs, a PDF of diagnostics, and an Rdata object that includes three lists:
+
+* FORECAST will be the ensemble forecasts for each year
+* ANALYSIS will be the updated ensemble sample given the NPP observations
+* enkf.params contains the prior and posterior mean vector and covariance matrix for each time step.
+
+If you look within this function you will find that much of the format is similar to the pda.mcmc function, but in general it is much simpler. The function begins by setting parameters, opening a database connection, and generating workflow and ensemble IDs. Next we split the SIPNET clim meteorology file up into individual annual files, since we will be running SIPNET for a year at a time between updates. Next we perform an initial set of runs starting from the initial states and parameters we described above. In doing so we create the run and output directories, the README file, and the runs.txt file that is read by start.model.runs. Worth noting is that the README and runs.txt don't need to be updated within the forecast loop. Given this initial run we then enter the forecast loop. Within this loop over years we perform four basic steps. First, we read the output from the latest runs. Second, we calculate the updated posterior state estimates based on the model ensemble prior and observation likelihood. Third, we resample the state estimates based on these posterior parameters. Finally, we start a new set of runs based on this sample. The sda.enkf function then ends by saving the outputs and generating some diagnostic figures. The first set of these shows the data, forecast, and analysis. The second set shows pairs plots of the covariance structure for the Forecast and Analysis steps. The final set shows the time-series plots for the Analysis of the other state variables produced by SIPNET.
+
+#### Finishing up
+
+The final bit of code in the script will register the workflow as complete in the database. After this is run you should be able to find all of the runs, and all of the outputs generated above, from within the PEcAn webpages.
+
+
+
+### PEcAn: Testing the Sensitivity Analysis Against Observations
+
+#### Author: Ankur Desai
+
+
+#### Flux Measurements and Modeling Course, *Tutorial Part 2*
+This tutorial assumes you have successfully completed Demo01, Demo02 and the modelVSdata tutorial.
+
+#### Introduction
+Now that you have successfully run PEcAn through the web interface and have learned how to do a simple comparison of flux tower observations to model output, let’s start looking at how data assimilation and parameter estimation would work with an ecosystem model.
+
+Before we start a full data assimilation exercise, let’s try something simple – single parameter selection by hand.
+
++ Open Rstudio, or the Amazon URL/rstudio if running on the cloud.
+
+In Demo02, you ran a sensitivity analysis of SIPNET model runs at Niwot Ridge, sampling across quantiles of a parameter prior while holding all others to the median value. The pecan.xml file told PEcAn to run a sensitivity analysis, which simply meant SIPNET was run multiple times with the same drivers, but varying parameter values one at a time (while holding all others to their median), with the parameter range also specified in the pecan.xml file (as quantiles, which were then sampled against the BETY database of observed variation in the parameter for species within the specific plant functional type).
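+
+Throughout this tutorial, goodness-of-fit is measured with the root mean square error between modeled ($m$) and observed ($o$) NEE over the $n$ paired time steps, matching the `sqrt(mean(...))` calculation used in the code below:
+
+$$RMSE = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(NEE_{m,t} - NEE_{o,t}\right)^2}$$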
+
+Let’s try to compare Ameriflux NEE to SIPNET NEE across all these runs to make a plot of parameter vs. goodness-of-fit. We’ll start with root mean square error (**RMSE**), but then discuss some other tests, too.
+
+#### A. Read in settings object from a run xml
+
+Open up a connection to the bety database and create a settings object from your xml, just like in the modelVSdata tutorial.
+```{r echo= TRUE, eval=FALSE}
+
+settings <- PEcAn.settings::read.settings("~/output/PEcAn_99000000002/pecan.CONFIGS.xml")
+
+bety <- settings$database$bety
+bety <- dplyr::src_postgres(host = bety$host, user = bety$user, password = bety$password, dbname = bety$dbname)
+```
+
+We read in the pecan.CONFIGS.xml instead of the pecan.xml because the pecan.CONFIGS.xml has already incorporated some of the information from the database that we would have to query again if we used the pecan.xml.
+
+```{r echo= TRUE, eval=FALSE}
+
+runid <- as.character(read.table(paste(settings$outdir, "/run/", "runs.txt", sep = ""))[1, 1]) # Note: if you are using an xml from a run with multiple ensembles this line will provide only the first run id
+outdir <- paste(settings$outdir, "/out/", runid, sep = "")
+start.year <- as.numeric(lubridate::year(settings$run$start.date))
+end.year <- as.numeric(lubridate::year(settings$run$end.date))
+
+site.id <- settings$run$site$id
+```
+
+
+Back in the files pane, within the *run/* folder, find a folder called *pft/* and within that a folder with the pft name (such as *temperate.coniferous*). Within that is a PDF file that starts *sensitivity.analysis*. In Rstudio, just click on the PDF to view it. You discussed this PDF in the last tutorial, through the web interface. Here, we see how the model NEE in SIPNET changes with each parameter.
+
+Let’s read that sensitivity output. Navigate back up (*..*) to the *~/output/**RUNDIR**/* folder. Find a series of files that end in “*.RData*”. These files contain the R variables used to make these plots. In particular, there is **sensitivity.output.*.RData** which contains the annual NEE as a function of each parameter quantile. Click on it to load a variable into your environment. There is **sensitivity.results.*.RData** which contains plotting functions and variance decomposition output, which we don't need in this tutorial. And finally, there is **sensitivity.samples.*.RData** which contains the actual parameter values and the RunIDs associated with each sensitivity run.
+
+Click on *sensitivity.samples.*.RData* to load it into your environment, or run the `load()` script below. You should see a set of five new variables (pft.names, trait.names, sa.ensemble.id, sa.run.ids, sa.samples).
+
+Let’s extract a parameter and its sensitivity NEE output from the list sa.samples, which is organized by PFT, and then by parameter.
First, let’s look at a list of PFTs and parameters available:
+```{r echo= TRUE, eval=FALSE}
+load(paste(settings$outdir, "/sensitivity.samples.", settings$sensitivity.analysis$ensemble.id, ".Rdata", sep = ""))
+
+names(sa.samples)
+names(sa.samples$temperate.coniferous)
+```
+
+Now to see the actual parameter values used by the runs, just pick a parameter and type:
+```{r echo= TRUE, eval=FALSE}
+sa.samples$temperate.coniferous$psnTOpt
+```
+
+
+Let’s store that value for future use:
+```{r echo= TRUE, eval=FALSE}
+psnTOpt <- sa.samples$temperate.coniferous$psnTOpt
+```
+
+Now, to see the annual NEE output from the model for a particular PFT and parameter range, try
+```{r echo= TRUE, eval=FALSE}
+load(paste(settings$outdir, paste("/sensitivity.output", settings$sensitivity.analysis$ensemble.id, settings$sensitivity.analysis$variable, start.year, end.year, "Rdata", sep = "."), sep = ""))
+
+sensitivity.output$temperate.coniferous$psnTOpt
+```
+
+You could even plot the two:
+```{r echo= TRUE, eval=FALSE}
+plot(psnTOpt, sensitivity.output$temperate.coniferous$psnTOpt)
+```
+
+What do you notice?
+
+
+Let’s try to read the output from a single run id as you did in the earlier tutorial.
+```{r echo= TRUE, eval=FALSE}
+runids <- sa.run.ids$temperate.coniferous$psnTOpt
+arun <- PEcAn.utils::read.output(runids[1], paste(settings$outdir, "out", runids[1], sep = "/"), start.year = start.year, end.year = end.year, "NEE", dataframe = TRUE)
+
+plot(arun$posix, arun$NEE)
+```
+
+#### B. Now let’s bring in the actual observations
+
+Recall reading Ameriflux NEE in the modelVSdata tutorial.
+```{r echo= TRUE, eval=FALSE}
+File_path <- "~/output/dbfiles/AmerifluxLBL_site_0-772/AMF_US-NR1_BASE_HH_9-1.csv"
+
+File_format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 5000000002) # This matches the file with a premade "format", a template that describes how the information in the file is organized
+site <- PEcAn.DB::query.site(site.id = site.id, bety$con)
+
+obs <- PEcAn.benchmark::load_data(data.path = File_path, format = File_format, time.row = File_format$time.row, site = site, start_year = start.year, end_year = end.year)
+
+obs$NEE[obs$UST < 0.2] <- NA # Apply a U* filter
+
+plottable <- align_data(model.calc = arun, obvs.calc = obs, align_method = "match_timestep", var = "NEE")
+head(plottable)
+```
+
+
+#### C. Finally, we can compare model to data
+
+In the modelVSdata tutorial, you also compared NEE to the ensemble model run. Here we will do the same, except we include each sensitivity run.
+
+```{r echo= TRUE, eval=FALSE}
+plot(plottable$NEE.m, plottable$NEE.o)
+abline(0, 1, col = "red")
+```
+
+And remember the formula for RMSE:
+```{r echo= TRUE, eval=FALSE}
+sqrt(mean((plottable$NEE.o - plottable$NEE.m)^2, na.rm = TRUE))
+```
+
+All we need to do to go beyond this is to make a loop that reads in each sensitivity run's NEE based on runids, calculates RMSE against the observations, and stores it in an array, by combining the steps above in a for loop. Make sure you change the directory names and year to your specific run.
+```{r echo= TRUE, eval=FALSE}
+rmses <- rep(0, length(runids))
+for (r in 1:length(runids)) {
+  arun <- PEcAn.utils::read.output(runids[r], paste(settings$outdir, "out", runids[r], sep = "/"), 2004, 2004, "NEE", dataframe = TRUE)
+  plottable <- align_data(model.calc = arun, obvs.calc = obs, align_method = "match_timestep", var = "NEE")
+  rmses[r] <- sqrt(mean((plottable$NEE.o - plottable$NEE.m)^2, na.rm = TRUE))
+}
+
+rmses
+```
+
+Let’s plot that array
+```{r echo= TRUE, eval=FALSE}
+plot(psnTOpt, rmses)
+```
+
+Can you identify a minimum (if there is one)? If so, is there any reason to believe this is the “best” parameter? Why or why not? Think about all the other parameters.
+
+Now that you have the hang of it, here are a few more things to try:
+
+1. Try a different error function, given actual NEE uncertainty. You learned earlier that uncertainty in half-hourly observed NEE is not Gaussian. This makes RMSE not the correct measure for goodness-of-fit. Go to *~/pecan/modules/uncertainty/R*, open *flux_uncertainty.R*, and click on the *source* button in the program editing pane.
+
+Alternatively, you can source the function from the console using:
+```{r echo= TRUE, eval=FALSE}
+source("pecan/modules/uncertainty/R/flux_uncertainty.R")
+```
+
+Then you can run:
+```{r echo= TRUE, eval=FALSE}
+unc <- flux.uncertainty(plottable$NEE.o, QC = rep(0, 17520))
+plot_flux_uncertainty(unc)
+```
+The figure shows you uncertainty (err) as a function of NEE magnitude (mag). How might you use this information to change the RMSE calculation?
+
+2. Try a few other parameters. Repeat the above steps but with a different parameter. You might want to select one from the sensitivity PDF that has a large sensitivity, or one from the variance decomposition that is also poorly constrained.
+
+
+
+## Advanced User Guide {#advanced-user}
+
+- [Workflow curl submission](#web-curl-submission)
+
+
+
+### Submitting Workflow from Command Line {#web-curl-submission}
+
+This is how you can submit a workflow from the command line through the pecan web interface. This will use curl to submit all the required parameters to the web interface and trigger a run.
+
+```bash
+# the host where the model should run
+# never use remote sites since you will need to pass your username/password and that WILL be stored
+hostname=pecan.vm
+# the site id where to run the model (NIWOT in this case)
+siteid=772
+# start date and end date, / need to be replaced with %2F or use - (NOT TESTED)
+start=2004-01-01
+end=2004-12-31
+
+# id of model you want to run, rest of section parameters depend on the model selected (SIPNET 136)
+modelid=5000000002
+# PFT selected (we should just use a number here)
+# NOTE: the square brackets are needed and will need to be escaped with a \ if you call this from command line
+pft[]=temperate.coniferous
+# initial pool condition (-1 means nothing is selected)
+input_poolinitcond=-1
+# met data
+input_met=99000000006
+
+# variables to collect
+variables=NPP,GPP
+# ensemble size
+runs=10
+# use sensitivity analysis
+sensitivity=-1,1
+
+# redirect to the edit pecan.xml file
+pecan_edit=on
+# redirect to edit the model configuration files
+model_edit=on
+# use browndog
+browndog=on
+```
+
+For example the following will run the above workflow. Using -v in curl will show verbose output (needed) and the grep will make sure it only shows the redirect.
This will show the actual workflowid: + +``` +curl -s -v 'http://localhost:6480/pecan/04-runpecan.php?hostname=pecan.vm&siteid=772&start=2004-01-01&end=2004-12-31&modelid=5000000002&pft\[\]=temperate.coniferous&input_poolinitcond=-1&input_met=99000000006' 2>&1 | grep 'Location:' +< Location: 05-running.php?workflowid=99000000004 +``` + +In this case you can use the browser to see progress, or use the following to see the status: + +``` +curl -s 'http://localhost:6480/pecan/dataset.php?workflowid=99000000004&type=file&name=STATUS' +TRAIT 2017-12-13 08:56:56 2017-12-13 08:56:57 DONE +META 2017-12-13 08:56:57 2017-12-13 08:57:13 DONE +CONFIG 2017-12-13 08:57:13 2017-12-13 08:57:14 DONE +MODEL 2017-12-13 08:57:14 2017-12-13 08:57:15 DONE +OUTPUT 2017-12-13 08:57:15 2017-12-13 08:57:15 DONE +ENSEMBLE 2017-12-13 08:57:15 2017-12-13 08:57:16 DONE +FINISHED 2017-12-13 08:57:16 2017-12-13 08:57:16 DONE +``` + +Or to show the output log: + +``` +curl -s 'http://localhost:6480/pecan/dataset.php?workflowid=99000000004&type=file&name=workflow.Rout' + +R version 3.4.3 (2017-11-30) -- "Kite-Eating Tree" +Copyright (C) 2017 The R Foundation for Statistical Computing +Platform: x86_64-pc-linux-gnu (64-bit) + +R is free software and comes with ABSOLUTELY NO WARRANTY. +You are welcome to redistribute it under certain conditions. +Type 'license()' or 'licence()' for distribution details. + +R is a collaborative project with many contributors. +.... +``` + + + +# Basic Web workflow {#basic-web-workflow} + +This chapter describes the major steps of the PEcAn web-based workflow, which are as follows: + +- [Model and site selection](#web-site-model) +- [Model configuration](#web-model-config) +- Run execution -- TODO! +- Results -- TODO! +- Interactive visualizations -- TODO! + +We recommend that all new users begin with [PEcAn Hands-On Demo 01: Basic Run]. The documentation below assumes you are already familiar with how to navigate to PEcAn's interactive web interface for running models. + +## Site and model selection {#web-site-model} + +This page is used to select the model to run and the site at which you would like to run that model. + +**NOTE:** If this page does not load for you, it may be related to a known Google Maps API key issue. See [issue #1269][issue-1269] for a possible solution. + + +[issue-1269]: https://github.com/PecanProject/pecan/issues/1269 + +### Selecting a model + +1. On the **Select Host** webpage **use the Host pull-down menu to select the server you want to run on**. PEcAn is designed to allow models to be run both locally and on remote high-performance computing (HPC) resources (i.e. clusters). We recommend that users start with local runs. More information about connecting your PEcAn instance to a cluster can be found on the [Remote execution with PEcAn] page. + +2. Next, **select the model you want to run under the Model pull-down menu**. The list of models currently supported by PEcAn, along with information about these models, is available on the [PEcAn Models] page. + + i) If a PEcAn-supported model is not listed, this is most likely because the model has not been installed on the server. The PEcAn team does not have permissions to redistribute all of the models that are coupled to it, so you will have to install some PEcAn-compatible models yourself. Please consult the PEcAn model listing for information about obtaining and installing different models. 
Once the model is installed and you have added the location of the model executable to Bety (see [Adding an Ecosystem Model]), your model should appear on the PEcAn **Select Host** page after you refresh the page.
+
+   ii) If you would like to add a new model to PEcAn, please consult our guide for [Adding an Ecosystem Model] and contact the PEcAn team for assistance.
+
+3. If selecting your model causes your **site to disappear** from the Google Map, that means the site exists but no drivers for that model-site combination are registered in the database.
+
+   i) Click the "Conversion" checkbox. If your site reappears, that means PEcAn should be able to automatically generate the required inputs for this site by converting from existing input files in other formats.
+
+   ii) If the site still does not reappear, that means there are required input files for that model-site combination that PEcAn cannot autogenerate. This may be because the model has unique input requirements or because it has not yet been fully coupled to the PEcAn input processing workflow. Go to the troubleshooting section under [Selecting a site] for more information on diagnosing which drivers are missing.
+
+### Selecting a site
+
+### Site Groups
+
+1. PEcAn provides the option of organizing sites into groups to make them easier to find and easier to run as a group. We have pre-loaded a number of common research networks (e.g., FLUXNET, LTER, NEON), but you are free to create new site groups through Bety.
+
+2. If you are searching for a site that is not part of an existing site group, or you are unsure which site group it belongs to, select "All Sites" to see all sites in Bety. Note that this may take a while to render.
+
+### Using existing sites
+
+1. **Find the site on the map** The simplest way of determining if a site exists in PEcAn is through the Google Map interface of the web-based workflow. You'll want to make sure that the "Site Group" is set to "All Sites" and the "Model" is set to "All Models".
+
+2. **Find the site in BETY** If the site is not on the map, it may still be in Bety but with insufficient geographic information. To locate the site in Bety, first login to your local version of the BETY database. If using the VM, navigate to `localhost:6480/bety` and login with username `bety` and password `illinois`. Then, navigate to `Data > Sites` and use the "Search" box to search for your site. If you **do** find your site, click "Edit" and add geographic information so that the site will show up on the map. Also, note that the site ID number shows up in the URL for the "Show" or "Edit" pages. This ID is often useful to know, for example when editing a PEcAn settings file by hand. If you did not find your site, follow the instructions below to add a site.
+
+### Adding a new site
+
+(TODO: Move most of this out)
+
+1. Log into Bety as described above.
+
+2. **Pick a citation for your site** Each site requires an associated "citation" that must be added before the site itself is added. First, navigate to "Data > Citations" and use the "Search" box to see if the relevant citation already exists. If it does, click the check mark under "Actions" to proceed to site creation.
+
+* **To create a new citation**, click the **New Citation** button, fill in the fields, and then click "Create". The "URL" field should contain the web address that takes you to this publication on the publisher's website. The "PDF" field should be the full web address to a PDF for this citation.
+
+* Note that our definition of a citation is flexible, and a citation need not be a peer-reviewed publication. Most of the fields in "New Citation" can be left blank, but we recommend at least adding a descriptive title, such as "EBI Farm Field Data", and a relevant contact person as the "Author".
+
+3. Once the Citation is created or selected, this should automatically take you to the Sites page and list any Sites already associated with this citation. To create a new site, click the **New Site** button.
+
+4. When creating a new site, the most important fields are the **Site name** and coordinates (**latitude** and **longitude**). The coordinates can be entered by hand or by clicking on the site location on the Google Map interface. All other information is optional, but can be useful for searching and indexing purposes.
+
+5. When you are done, click **Create**. At this point, once the PEcAn site-level page is refreshed, the site should automatically appear.
+
+### Troubleshooting
+
+#### My site shows up when I don't have any model selected, but disappears once I select the model I want to run
+
+Selecting a model will cause PEcAn to filter the available sites based on whether they possess the required Inputs for a given model (e.g. meteorology). To check which Inputs are missing for a site, point your browser to the pecan/checksite.php webpage (e.g. localhost:6480/pecan/checksite.php). This page looks virtually identical to the site selection page, except that it has a *Check* button instead of *Prev* and *Next*. If you select a Machine, Model, and Site and then click *Check*, the page should return a list of the missing Inputs (listing both the name and the Format ID number). Don't forget that it's possible for PEcAn to have the required Inputs in its database, but just not have them for the Machine where you want to run.
+
+To see more about what Inputs a given model can accept, and which of those are required, take a look at the MODEL_TYPE table entry in the database (e.g. go to `localhost:6480/bety`; Select `Runs > Model Type`; and then click on the model you want to run).
+
+For information about loading missing Inputs into the database, visit [Input records in BETY], and also read the rest of the pages under this section, which provide important information about the specific classes of Inputs (e.g. meteorology, vegetation, etc).
+
+Finally, we are continually developing and refining workflows and standards for processing Input data in a model-agnostic way. The details about what Inputs can be processed automatically are discussed input-by-input in the sections below. For those looking to dive into the code or troubleshoot further, these conversions are ultimately handled under the `PEcAn.workflow::do_conversions` workflow module.
+
+## Model configuration {#web-model-config}
+
+This page is used for basic model configuration, including when your model will run and what input data it will use.
+
+### Choosing meteorology
+
+Once a Machine, Model, and Site have been selected, PEcAn will take you to the Input selection page. From this page you will select what Plant Functional Type (PFT) you want to run at a site, the start and end dates of the run, and various Input selections. The most common of these across all models is the need to specify meteorological forcing data. The exact name of the menu item for meteorology will vary by model because all of the Input requirements are generated individually for each model based on the MODEL_TYPE table.
In general, there are three possible cases for meteorology:
+
+* PEcAn already has driver files in its database
+* PEcAn does not have drivers, but can generate them from publicly available data
+* You need (or want) to upload your own drivers
+
+The first two cases will appear automatically in the pull-down menu. For meteorological files that already exist, you will see the date range that's available. By contrast, met that can be generated will appear as "Use `<source>`", where `<source>` is the origin of the data (e.g. "Use Ameriflux" will use the micromet from an Ameriflux eddy covariance tower, if one is present at the site).
+
+If you want to upload your own met data, this can be done in three ways.
+
+1. The default way to add met data is to incorporate it into the overall meteorological processing workflow. This is preferred if you are working with a common meteorological data product that is not yet in PEcAn's workflow. This case can be divided into two special cases:
+
+   i) Data is in a common MIME-type that PEcAn already has a converter for (e.g. CSV). In this case you'll want to create a new Format record for the metadata so that the existing converter can process this data. See the documentation for [Creating a new Format record in BETY] for more details.
+
+   ii) Data is in a more complicated format or an interactive database, but is large or useful enough to warrant a custom conversion function. Details on creating custom met conversions are in [Input Conversion], though at this stage you would also be strongly encouraged to contact the PEcAn development team.
+
+2. The second-best way is to upload data in PEcAn's standard meteorological format (netCDF files, CF metadata). See [Input Conversion] for details about variables and units. From this standard, PEcAn can then convert the file to the model-specific format required by the model you have chosen. This approach is preferred for a rare or one-off meteorological file format, because PEcAn will then also be able to convert the file into the format required by any other model.
+
+3. The last option for adding met data is to add it in a model-specific format, which is often easiest if you've already been running your model at a site and are just switching to using PEcAn.
+
+
+### Met workflow
+
+In a nutshell, the PEcAn met workflow is designed to reduce the problem of converting *n* possible met inputs into *m* possible model formats, which would require *n x m* conversion functions as well as numerous custom functions for downscaling, gap filling, etc. Instead, PEcAn works with a single met standard, and thus requires only *n* conversion functions, one for converting each data source into the PEcAn standard, and *m* conversion functions for converting from that standard to what an individual model requires. For a new model joining the PEcAn system the burden is particularly low -- writing one conversion function provides access to all *n* inputs. Similarly, PEcAn performs all other operations/manipulations (extracting a site, downscaling, gap filling, etc) within the PEcAn standard, which means these operations only need to be implemented once.
+
+Consider a generic met data product named MET for simplicity. PEcAn will use a function, download.MET, to pull data for the selected year from a public data source (e.g. Ameriflux, North American Regional Reanalysis, etc). Next, PEcAn will use a function, met2CF.MET, to convert the data into the PEcAn standard. If the data is already at the site scale, it will then gapfill the data.
If the data is a regional or global data product, PEcAn will instead permute the data to allow easier site-level extraction, and then extract data for the requested site and date range. Modules to address the temporal and spatial downscaling of meteorological data products, as well as their uncertainties, are in development but not yet part of the operational workflow. All of these functions are located within the data.atmosphere module.
+
+Once data is in the standard format and processed, it will be converted to the model-specific format using a met2model.MODEL function (located in that MODEL's module).
+
+More detailed information on how PEcAn processes inputs can be found on our [Input Conversion] page.
+
+### Troubleshooting meteorological conversions
+
+At the moment, most of the issues below address possible errors that the Ameriflux meteorology workflow might report.
+
+#### Could not do gapfill ... The following variables have NA's
+
+This error message means that there were gaps in the downloaded data, for the variables listed, which were larger than the current algorithm could fill. Particularly common is missing radiation or PAR data, as Ameriflux frequently converts nighttime data to NULL; work is in progress to detect this based on solar geometry. Also common are incomplete years (first or last year of tower operations).
+
+#### Could not get information about `<site>`. Is this an Ameriflux site?
+
+This message occurs when PEcAn believes that a site is part of Ameriflux (because it was listed on the Ameriflux or FLUXNET webpage and has a US-* site code), but no data is present on the Ameriflux server. The most common reasons for this are that you have selected a site that has not submitted data to Ameriflux yet (or whose data hasn't been processed yet), or that you have selected a year that's outside the tower's operational period. Visit [Ameriflux](http://ameriflux.lbl.gov/sites/site-list-and-pages/) and [FLUXNET](http://fluxnet.ornl.gov/site_status) for lists of available site years.
+
+#### Could not download data for `<site>` for the year `<year>`
+
+This is similar to the previous error, but in this case PEcAn did find data for the site listed, just not for the year requested. This can usually be fixed by altering the years of the run to match those with available data.
+
+
+#### I could not find the requested var (or dimvar) in the file!
+
+PEcAn could not find a required variable within the downloaded file. Most likely this is due to that variable not being measured at the site. The most common cause of failure is the absence of atmospheric pressure data (PRESS), but since most models have a low sensitivity to this variable we are working on methods to estimate it from other sources.
+
+## Selecting Plant Functional Types (PFTs) and other parameter groupings
+
+### Using existing PFTs
+
+PEcAn does not automatically know what vegetation types are present at your study site, so you need to select the PFT.
+
+Some models, such as ED2 and LINKAGES, support competition among multiple PFTs, and thus you are encouraged to highlight multiple choices. Other models, such as SIPNET and DALEC, only support one PFT at a site.
+
+Many models also have parameters that control non-vegetation processes (e.g. soil biogeochemistry and hydrology). PEcAn allows users to assign these parameters to functional groups as well (e.g. a `soils` PFT).
+
+### Creating new PFTs
+
+To modify or add a new Plant Functional Type (PFT), or to change a PFT's priors, navigate
+on the grey menu bar to Data > PFTs.
+
+1. To add a new PFT, click “new PFT” at the top and enter a name and description. (Hint:
+we're trying to name PFTs based on model.biome.pft; ED2 is the default model if one
+isn't specified.)
+
+2. To add new species to a PFT, click on [+] View Related Species and type the species,
+genus, or family you are looking for into the Search box. Click on the + to add.
+
+3. To remove a species from a PFT, click on [+] View Related Species and click on the X
+of the species you want to remove from the PFT.
+
+4. To remove a prior, click [-] View Related Prior and click on the X of the variable whose
+prior you want to remove. This will cause the parameter to be excluded from all
+analyses (meta-analysis, sensitivity analysis, etc) and revert to its default value.
+
+5. To add a prior, choose one from the white box of priors on the right.
+
+6. To view the specification of a prior, or to add a new prior, click BETY-DB > Priors and
+enter the information on the variable, distribution name, distribution parameters, etc. N
+is the sample size underlying the prior specification (0 is ok for uninformative priors).
+
+7. You can also go to Data > Variables in order to use the search function to find an
+existing variable (or create a new one). Please try not to create new variables
+unnecessarily (e.g. changes of variable name or units to what your model uses are handled
+internally, so you want to find the trait with the correct MEANING).
+
+Additional information on adding PFTs, Species, and Priors can be found in [Adding an Ecosystem Model].
+
+### Choosing initial vegetation
+
+On the Input Selection webpage, in addition to selecting PFTs, start & end dates, and meteorology, many models also require some way of specifying the initial conditions for the vegetation, which may range from setting the aboveground biomass and LAI up to detailed inventory-like data on species composition and stand structure.
+
+At the moment, PEcAn has three cases for initial conditions:
+
+1. If files already exist in the database, they can simply be selected from the menu. For ED2, there are 3 different veg files (site, pss, css) and it is important that you select a complete set, not mix and match.
+
+2. If files don't exist, they can be uploaded following the instructions in [Create a database file record for the input data].
+
+3. Automated vegetation initial condition workflow
+
+As with meteorology, PEcAn is working to develop a model-agnostic workflow for converting various sources of vegetation data to common standards, developing common processing tools, and then writing out to model-specific formats. This process is at a much earlier stage than the meteorology workflow, as we are still researching what the options are for standard formats, but it ultimately aims to be much broader in scope, considering not just plot inventory data but also historical documentation, paleoecological proxies, satellite remote sensing (e.g. LANDSAT), airborne hyperspectral imagery, and active remote sensing (Lidar, Radar).
+
+At the moment, what is functional is a prototype workflow that works for inventory-based vegetation data. This data can come either from files that have been registered with the BETY Inputs and Formats tables or be queried from the USFS Forest Inventory and Analysis (FIA).
For more information, visit Section 13.1.2.2, Vegetation Data.
+
+### US FIA
+
+This tool works with an internal copy of the FIA that is uploaded to a PostgreSQL database alongside BETY; however, for space reasons this database does not ship with the PEcAn VM. To turn this feature on:
+
+1. Download and install the FIA database. Instructions are in [Installing data for PEcAn].
+2. For web-based runs, specify the database settings in [config.php](https://github.com/PecanProject/pecan/blob/master/web/config.example.php).
+3. For R-based runs, specify the database settings in [The PEcAn XML].
+
+More detailed information on how PEcAn processes inputs can be found on our [Input Conversion] page.
+
+
+### Spin up
+
+A number of ecosystem models are typically initialized by spinning up to steady state. At the moment PEcAn doesn't handle spin-up automatically (e.g. looping met, checking for stability), but there are various ways to achieve a spin-up within the system.
+
+**Option 1:** If there are model-specific settings in a model's settings/config file, then that file can be accessed by clicking on the **Edit model config** check box. If this box is selected, PEcAn will pause the site run workflow after it has generated your model config file, but before it runs the model, and give you an opportunity to edit the file by hand, allowing you to change any model-specific spin-up settings (e.g. met recycling, spin-up length).
+
+**Option 2:** Set start_year very early and set the met drivers to be a long time series (e.g. PalEON, or something custom uploaded to Inputs).
+
+**Option 3:** In the MODEL_TYPE table, add your model's restart format as an optional input, modify the model-specific write.config function to use that restart, and then load a previous spin-up to the Inputs table.
+
+Beyond these options, we hope to eventually develop more general, model-agnostic tools for spin up. In particular, we have started to explore the accelerated spin-up and semi-analytical techniques being developed by Yiqi Luo's lab.
+
+
+
+### Selecting a soils product
+
+Many models have requirements for soils information, which may include: site-specific soil texture and depth information; soil biogeochemical initial conditions (e.g. soil carbon and nitrogen pools); soil moisture initial conditions; and soil thermal initial conditions.
+
+As with [Choosing initial vegetation], we eventually hope to develop data standards, soils workflows, and spin-up tools, but at the moment this workflow is in the early stages of development. Model requirements need to be met by [Creating a new Input record in BETY] or by using files that have already been uploaded. Similar to met, we recommend that this file be in the PEcAn-standard netCDF described below, but model-specific files can also be registered.
+
+### Soil texture, depth, and physical parameters
+
+A PEcAn-standard netCDF file format exists for soil texture, depth, and physical parameters, using PEcAn standard names that are largely a direct extension of the CF standard.
+
+The easiest way to create this file is with the PEcAn R function `soil2netcdf`, as described in the Soil Data section of the Advanced Users Guide.
+
+A table of standard names and units can be listed using `PEcAn.data.land::soil.units()` with no arguments.
+
+```{r, echo = FALSE, eval = FALSE}
+knitr::kable(PEcAn.data.land::soil.units())
+```
+
+More detailed information on how PEcAn processes inputs can be found on our [Input Conversion] page.
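+
+As a quick illustration of the `soil2netcdf` call mentioned above, here is a minimal sketch, assuming the function accepts a named list of soil variables (in PEcAn standard names) and an output file path; the three-layer values are made up for illustration:
+
+```{r, echo = TRUE, eval = FALSE}
+# Hypothetical three-layer soil profile; names follow the PEcAn soil standard
+soil.data <- list(volume_fraction_of_sand_in_soil = c(0.3, 0.4, 0.5),
+                  volume_fraction_of_clay_in_soil = c(0.3, 0.3, 0.3),
+                  soil_depth = c(0.2, 0.5, 1.0))
+
+# Write the PEcAn-standard soil netCDF file
+PEcAn.data.land::soil2netcdf(soil.data, "soil.nc")
+```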
+
+### Other model inputs
+
+Finally, any other model-specific inputs (e.g. N deposition, land use history, etc.) should be met by [Creating a new Input record in BETY] or by using files that have already been uploaded.
+
+
+
+# More on the PEcAn Web Interface {#intermediate-user}
+
+This section provides information for those wanting to take advantage of PEcAn's customizations from the web interface.
+
+* [Additional web configuration] - Advanced options available from the web interface
+  - [Brown Dog](#browndog)
+  - [Sensitivity and ensemble analyses][Advanced Setup][TODO: Under construction...]
+  - [Editing model configurations][TODO: Under construction...]
+* [Settings-configured analyses] - Analyses only available by manually editing `pecan.xml`
+  - [Parameter data assimilation (PDA)](#pda)
+  - [State data assimilation (SDA)](#sda)
+* [Remote execution with PEcAn](#pecan-remote) - Running analyses and generally working with external machines (HPC) in the context of PEcAn.
+
+
+
+
+## Additional web configuration
+
+Additional settings for web configuration:
+
+- [Web interface setup]{#intermediate-web-setup}
+- [Brown Dog]{#browndog}
+- [Advanced setup]{#intermediate-advanced-setup}
+  - [Sensitivity analysis] (TODO)
+  - [Uncertainty analysis] (TODO)
+- [Editing model configuration files]{#intermediate-model-config}
+
+### Web interface setup {#intermediate-web-setup}
+
+There are a few options that you can change via the web interface.
+
+To visit the configuration page, either click on the setups link on the introduction page, or type `/setups/` into the address bar.
+
+The configuration options available are:
+
+1. **Database configuration**: BETYdb (Biofuel Ecophysiological Traits and Yields database) configuration details; these can be edited as needed.
+
+2. **Browndog configuration**: Brown Dog configuration details, used to connect to Brown Dog. Included by default in the VM.
+
+3. **FIA Database**: FIA (Forest Inventory and Analysis) database configuration details; can be used to add additional data to models.
+
+4. **Google MapKey**: Google Maps key, used by PEcAn to access Google Maps.
+
+5. **Change Password**: Brief instructions for changing the VM user password. (This won't work if you are using the Docker image.)
+
+6. **Automatic Sync**: If ON, the database will be synced between the local machine and the remote servers. **Still under testing and might be buggy.**
+
+Work on additional editing features is ongoing; this page will be updated as new configuration options become available.
+
+### Brown Dog {#browndog}
+
+The Brown Dog service provides PEcAn with access to large and diverse sets of data, at the click of a button, in the format that PEcAn needs. By clicking the checkbox you will be using the Brown Dog service to process data.
+
+For more information regarding meteorological data, check out [Available Meteorological Drivers].
+
+More information can be found at the [Brown Dog website](http://browndog.ncsa.illinois.edu/).
+
+### Advanced Setup {#intermediate-advanced-setup}
+
+(TODO: Under construction...)
+
+### Editing model configurations {#intermediate-model-config}
+
+(TODO: Under construction...)
+
+
+
+## Settings-configured analyses
+
+These analyses can be run through the web interface, but they lack graphical interfaces and currently can only be configured through the XML settings. To run these analyses, use the **Edit pecan.xml** checkbox on the Input configuration page. Eventually, these modules will be integrated into the web user interface.
+
+- [Parameter Data Assimilation (PDA)](#pda)
+- [State Data Assimilation (SDA)](#sda)
+- [MultiSettings](#multisettings)
+- [Benchmarking](#benchmarking)
+
+(TODO: Add links)
+
+### Parameter data assimilation (PDA) {#pda}
+
+All functions pertaining to Parameter Data Assimilation are housed within: **pecan/modules/assim.batch**.
+
+For detailed usage of the module, please see the vignette under **pecan/modules/assim.batch/vignettes**.
+
+A hierarchical version of the PDA is also implemented; for more details, see the `MultiSitePDAVignette` [package vignette](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd) and the function-level documentation.
+
+#### **pda.mcmc.R**
+This is the main PDA code. It performs Bayesian MCMC on model parameters by proposing parameter values, running the model, calculating a likelihood (between model output and the supplied observations), and accepting or rejecting the proposed parameters (Metropolis algorithm). Additional notes:
+
+* The first argument is *settings*, followed by others that all default to *NULL*. *settings* is a list used throughout PEcAn, which contains all the user options for whatever analyses are being done. The easiest thing to do is just pass that whole object all around the PEcAn code and let different functions access whichever settings they need; that's what a lot of the rest of the PEcAn code does. But the flexibility to override most of the relevant settings in *settings* is there, by providing them directly as arguments to the function.
+
+* The *if(FALSE)...* block: if you're trying to step through the function you probably will have the *settings* object around, but those other variables will be undefined. If you set them all to NULL then they'll be ignored without causing errors. It is there for debugging purposes.
+
+* The next step calls pda.settings(), which is in the file pda.utils.R (see below). It checks whether any settings are being overridden by arguments, and in most cases supplies default values if it can't find either.
+
+
+* In the MCMC setup section
+  * The code is set up to allow you to start a new MCMC chain, or to continue a previous chain as specified in settings.
+  * The code writes a simple text file of parameter samples at every iteration, which lets you get some results and even re-start an MCMC that fails for some reason.
+  * The code has adaptive jump distributions, so you can see some initialization of the jump distributions and associated variables here.
+  * Finally, note that after all this setup a new XML settings file is saved. The idea is that the original pecan.xml you create is preserved for provenance, and then periodically throughout the workflow the settings (likely containing new information) are re-saved with descriptive filenames.
+
+* MCMC loop
+  * Periodically adjust the jump distribution to bring the acceptance rate closer to the target.
+  * Propose new parameters one at a time. For each:
+    * First, note that PEcAn may be handling many more parameters than are actually being targeted by PDA. PEcAn puts priors on any variables it has information for (in the BETY database), and then these get passed around throughout the analysis and every step (meta-analysis, sensitivity analysis, ensemble analysis, etc.). But for PDA, you specify a separate list of probably far fewer parameters to constrain with data. These are the ones that get looped over and varied here. The distinction between all parameters and only those dealt with in PDA is handled in the setup code above.
+
+    * First, a new value is proposed for the parameter of interest.
+    * Then, a new model run is set up, identical to the previous one except with the new proposed value for the one parameter being updated on this run.
+    * The model run is started, and outputs are collected after waiting for it to finish.
+    * A new likelihood is calculated based on the model outputs and the observed dataset provided.
+    * The standard Metropolis acceptance criterion is used to decide whether to keep the proposed parameter.
+    * Periodically (at an interval specified in settings), a diagnostic figure is saved to disk so you can check on progress.
+    * This currently works only for NEE.
+
+#### **pda.mcmc.bs.R**
+This file is basically identical to pda.mcmc.R, but rather than proposing parameters one at a time, it proposes new values for all parameters at once ("bs" stands for "block sampling"). You choose which option to use by specifying settings$assim.batch$method:
+  * "bruteforce" means sample parameters one at a time
+  * "bruteforce.bs" means use this version, sampling all parameters at once
+  * "emulator" means use the emulated-likelihood version
+
+#### **pda.emulator**
+This version of the PDA code again looks quite similar to the basic "bruteforce" one, but its mechanics are very different. The basic idea is, rather than running thousands of model iterations to explore parameter space via MCMC, to run a relatively small number of runs that have been carefully chosen to give good coverage of parameter space. Then, interpolate the likelihood calculated for each of those runs (actually, fit a Gaussian process to it) to get a surface that "emulates" the true likelihood. Now, perform regular MCMC (just like the "bruteforce" approach), except instead of actually running the model on every iteration to get a likelihood, just get an approximation from the likelihood emulator. Since the latter step takes virtually no time, you can run as long an MCMC as you need at little computational cost, once you have done the initial model runs to create the likelihood emulator.
+
+#### **pda.mcmc.recover.R**
+This function is for recovering a failed PDA MCMC run.
+
+#### **pda.utils.R**
+This file contains most of the individual functions used by the main PDA functions (pda.mcmc.*.R).
+
+ * *assim.batch* is the main function PEcAn calls to do PDA. It checks which method is requested (bruteforce, bruteforce.bs, or emulator) and calls the appropriate function described above.
+ * *pda.settings* handles settings. If a setting isn't found, the code can usually supply a reasonable default.
+ * *pda.load.priors* is fairly self-explanatory, except that it handles a lot of cases and gives different options priority over others. Basically, the priors to use for PDA parameters can come from either a PEcAn prior.distns or post.distns object (the latter would be, e.g., the posteriors of a meta-analysis or a previous PDA), or can be specified either by file path or BETY ID. If not told otherwise, the code tries to just find the most recent posterior in BETY and use that as the prior for PDA.
+ * *pda.create.ensemble* gets an ensemble ID for the PDA. All model runs associated with an individual PDA (any of the three methods) are considered part of a single ensemble. All this function does is register a new ensemble in BETY and return the ID that BETY gives it.
+ * *pda.define.prior.fn* creates R functions for all of the priors the PDA will use.
+
+ * *pda.init.params* sets up the parameter matrix for the run, which has one row per iteration and one column per parameter. Columns include all PEcAn parameters, not just the (probably small) subset that are being updated by PDA. This is for compatibility with other PEcAn components. If starting a fresh run, the returned matrix is just a big empty matrix to fill in as the PDA runs. If continuing an existing MCMC, then it will be the previous params matrix, with a bunch of blank rows added on for filling in during this round of PDA.
+ * *pda.init.run* is basically a big wrapper for PEcAn's write.config function (actually functions [plural], since every model in PEcAn has its own version). For the bruteforce and bruteforce.bs methods this will be run once per iteration, whereas the emulator method knows about all its runs ahead of time and runs this as one big batch of all runs at once.
+ * *pda.adjust.jumps* tweaks the jump distributions for the standard MCMC method, and *pda.adjust.jumps.bs* does the same for the block-sampled version.
+ * *pda.calc.llik* calculates the log-likelihood of the model given all datasets provided to compare it to.
+ * *pda.generate.knots* is for the emulator version of PDA. It uses a Latin hypercube design to sample a specified number of locations in parameter space. These locations are where the model will actually be run, and then the GP interpolates the likelihood surface in between.
+ * *pda.plot.params* provides basic MCMC diagnostics (trace and density) for the parameters being sampled.
+ * *pda.postprocess* prepares the posteriors of the PDA, stores them to files and the database, and performs some other cleanup functions.
+ * *pda.load.data.r* is the function that loads in the data that will be used to constrain the PDA. It's supposed to eventually be more integrated with PEcAn, which will know how to load all kinds of data from all kinds of sources. For now, it can do NEE from Ameriflux.
+ * *pda.define.llik.r* is a simple helper function that defines likelihood functions for different datasets. In the future this should probably be queried from the database or something similar; for now, it is extremely limited. The original test case of NEE assimilation uses a heteroskedastic Laplacian distribution.
+ * *pda.get.model.output.R* is another function that will eventually grow to handle many more cases, or perhaps be replaced by a better system altogether. For now, though, it again just handles Ameriflux NEE.
+
+#### **get.da.data.\*.R, plot.da.R**
+Old code written by Carl Davidson. It is now defunct, but is currently left in because it may contain useful ideas.
+
+
+### State data assimilation (SDA) {#sda}
+
+`sda.enkf.R` is housed within: `/pecan/modules/assim.sequential/R`
+
+The tree ring tutorial is housed within: `/pecan/documentation/tutorials/StateAssimilation`
+
+More descriptive SDA methods can be found at: `/pecan/book_source/adve_user_guide_web/SDA_Methods.Rmd`
+
+#### **sda.enkf.R Description**
+This is the main ensemble Kalman filter and generalized filter code. Originally, this was just ensemble Kalman filter code; Mike Dietze and Ann Raiho added a generalized ensemble filter to avoid filter divergence. The output of this function will be all of the run outputs, a PDF of diagnostics, and an Rdata object that includes three lists:
+
+* FORECAST will be the ensemble forecasts for each year
+* ANALYSIS will be the updated ensemble sample given the NPP observations
+* enkf.params contains the prior and posterior mean vector and covariance matrix for each time step.
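+
+As a minimal sketch of inspecting this output after a run (the file name `sda.output.Rdata` and its location under the `SDA` folder of the output directory follow the restart discussion later in this section; the internal structure of `enkf.params` elements is an assumption):
+
+```{r, echo = TRUE, eval = FALSE}
+# Load the saved SDA output object (assumed path and file name)
+load(file.path(settings$outdir, "SDA", "sda.output.Rdata"))
+
+names(FORECAST)        # one element per assimilation time point
+names(ANALYSIS)        # updated ensemble samples for the same time points
+str(enkf.params[[1]])  # prior/posterior mean vectors and covariance matrices
+```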
+
+#### **sda.enkf.R Arguments**
+
+* settings - (required) [State Data Assimilation Tags Example] settings object
+
+* obs.mean - (required) a list of observation means named with dates in YYYY/MM/DD format
+
+* obs.cov - (required) a list of observation covariances named with dates in YYYY/MM/DD format
+
+* IC - (optional) initial condition matrix (dimensions: ensemble member # by state variables). Default is NULL.
+
+* Q - (optional) process covariance matrix (dimensions: state variable by state variables). Default is NULL.
+
+#### State Data Assimilation Workflow
+Before running sda.enkf, these tasks must be completed (in no particular order):
+
+* Read in a [State Data Assimilation Tags Example] settings file with the tags listed below, i.e. read.settings('pecan.SDA.xml')
+
+* Load data means (obs.mean) and covariances (obs.cov) as lists with PEcAn naming and unit conventions. Each observation must have a date in YYYY/MM/DD format (optional time) associated with it. If there are missing data, the date must still be represented in the list, with an NA as the list object.
+
+* Create an initial conditions matrix (IC) that is state variables columns by ensemble members rows in dimension. [sample.IC.MODEL][sample.IC.MODEL.R] can be used to create the IC matrix, but it is not required. This IC matrix is fed into write.configs for the initial model runs.
+
+The main parts of the SDA function are:
+
+Setting up for initial runs:
+
+* Set parameters
+
+* Load initial run inputs via [split.inputs.MODEL][split.inputs.MODEL.R]
+
+* Open database connection
+
+* Get new workflow ids
+
+* Create ensemble ids
+
+Performing the initial set of runs
+
+Set up for data assimilation
+
+Loop over time
+
+* [read.restart.MODEL][read.restart.MODEL.R] - read the model restart files corresponding to the start.time and stop.time that you want to assimilate data into
+
+* Analysis - There are four choices, based on whether process variance is TRUE or FALSE and whether there is data or not. [See explanation below.][Analysis Options]
+
+* [write.restart.MODEL][write.restart.MODEL.R] - This function has two jobs: first, to insert the adjusted state back into the model restart file; second, to update start.time, stop.time, and job.sh.
+
+* run model
+
+Save outputs
+
+Create diagnostics
+
+#### State Data Assimilation Tags Example
+
+```
+<state.data.assimilation>
+  <adjustment>TRUE</adjustment>
+  <process.variance>FALSE</process.variance>
+  <sample.parameters>FALSE</sample.parameters>
+  <q.type>Single</q.type>
+  <state.variables>
+    <variable>
+      <variable.name>AGB.pft</variable.name>
+      <unit>MgC/ha/yr</unit>
+      <min_value>0</min_value>
+      <max_value>100000000</max_value>
+    </variable>
+    <variable>
+      <variable.name>TotSoilCarb</variable.name>
+      <unit>KgC/m^2</unit>
+      <min_value>0</min_value>
+      <max_value>100000000</max_value>
+    </variable>
+  </state.variables>
+  <spin.up>
+    <start.date>1950/01/01</start.date>
+    <end.date>1960/12/31</end.date>
+  </spin.up>
+  <forecast.time.step>1</forecast.time.step>
+  <start.date>1961/01/01</start.date>
+  <end.date>2010/12/31</end.date>
+</state.data.assimilation>
+```
+
+#### State Data Assimilation Tags Descriptions
+
+* **adjustment** : [optional] TRUE/FALSE flag for whether ensembles need to be adjusted based on weights estimated from their likelihoods during the analysis step. The default is TRUE.
+* **process.variance** : [optional] TRUE/FALSE flag for whether process variance should be estimated (TRUE) or not (FALSE). If TRUE, a generalized ensemble filter will be used. If FALSE, an ensemble Kalman filter will be used. Default is FALSE. If you use TRUE, you can set three more optional tags to control the MCMCs built for the generalized ensemble filter.
+* **nitrGEF** : [optional] numeric defining the length of the MCMC chains.
+* **nthin** : [optional] numeric defining the thinning length for the MCMC chains.
+* **nburnin** : [optional] numeric defining the number of burn-in iterations for the MCMCs.
+* **q.type** : [optional] If `process.variance` is set to TRUE, then this can take values of Single, Site, or PFT.
+
+* **censored.data** : [optional] logical; set TRUE for censored state variables.
+
+* **sample.parameters** : [optional] TRUE/FALSE flag for whether parameters should be sampled for each ensemble member or not. This allows for more spread in the initial conditions of the forecast.
+* **state.variable** : [required] State variable that is to be assimilated (in PEcAn standard format).
+* **spin.up** : [required] start.date and end.date for the initial model runs.
+* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because initial runs can be done over a subset of the full run.
+* **forecast.time.step** : [optional] In the future, this will be used to allow the forecast time step to vary from the data time step.
+* **start.date** : [optional] start date of the state data assimilation (in YYYY/MM/DD format)
+* **end.date** : [optional] end date of the state data assimilation (in YYYY/MM/DD format)
+* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because this analysis can be done over a subset of the run.
+
+#### Model Specific Functions for SDA Workflow
+
+#### read.restart.MODEL.R
+The purpose of read.restart is to read the model restart files and return a matrix that is site rows by state variable columns. The state variables must be in PEcAn names and units. The arguments are:
+
+* outdir - output directory
+
+* runid - ensemble member run ID
+
+* stop.time - used to determine which restart file to read (in POSIX format)
+
+* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object
+
+* var.names - vector of state variable names with PEcAn standard naming. Example: c('AGB.pft', 'TotSoilCarb')
+
+* params - parameters used by the ensemble member (same format as write.configs)
+
+#### write.restart.MODEL.R
+This model-specific function takes in the new state and new parameter matrices from sda.enkf.R after the analysis step and translates the new variables back to the model variables. It then updates start.time, stop.time, and job.sh so that start.model.runs() does the correct runs with the new states. In write.restart.LINKAGES and write.restart.SIPNET, job.sh is updated by using write.configs.MODEL.
+
+* outdir - output directory
+
+* runid - run ID for the ensemble member
+
+* start.time - beginning of the model run (in POSIX format)
+
+* stop.time - end of the model run (in POSIX format)
+
+* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object
+
+* new.state - matrix from the analysis of updated state variables with PEcAn names (dimensions: site rows by state variable columns)
+
+* new.params - In the future, this will allow us to update parameters based on states (same format as write.configs)
+
+* inputs - model-specific inputs from [split.inputs.MODEL][split.inputs.MODEL.R], used to run the model from start.time to stop.time
+
+* RENAME - [optional] flag used in write.restart.LINKAGES.R for development.
+
+#### split.inputs.MODEL.R
+This model-specific function supplies the correct met and/or other model inputs to settings$run$inputs, and returns settings$run$inputs to the inputs argument in sda.enkf.R. Not all models need their inputs to change, however; in that case the function should return settings$run$inputs unchanged.
+
+* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object
+
+* start.time - start time for the model run (in POSIX format)
+
+* stop.time - stop time for the model run (in POSIX format)
+
+#### sample.IC.MODEL.R
+This model-specific function is optional.
But it can be used to create the initial condition matrix (IC), with state variables as columns and ensemble members as rows. This IC matrix is used for the initial runs in sda.enkf.R, in the write.configs.MODEL function.
+
+* ne - number of ensemble members
+
+* state - matrix of state variables to get initial conditions from
+
+* year - used to determine which year to sample initial conditions from
+
+#### Analysis Options
+There are four options, depending on whether process variance is TRUE or FALSE and whether or not there is data.
+
+* If there is no data and process variance = FALSE, there is no analysis step.
+
+* If there is no data and process variance = TRUE, process variance is added to the forecast.
+
+* If there is data and process variance = TRUE, [the generalized ensemble filter][The Generalized Ensemble Filter] is implemented with MCMC.
+
+* If there is data and process variance = FALSE, the Kalman filter is used and solved analytically.
+
+#### The Generalized Ensemble Filter
+An ensemble filter is a sequential data assimilation algorithm with two procedures at every time step: a forecast followed by an analysis. The forecast ensembles arise from a model, while the analysis adjusts the forecast ensembles from the model towards the data. An ensemble Kalman filter is typically suggested for this type of analysis because of its computationally efficient analytical solution and its ability to update states based on an estimate of covariance structure. But in some cases the ensemble Kalman filter fails because of filter divergence. Filter divergence occurs when forecast variability is too small, which causes the analysis to favor the forecast and diverge from the data. Models often produce low forecast variability because there is little internal stochasticity. Our ensemble filter overcomes this problem in a Bayesian framework by including an estimation of model process variance. This methodology also maintains the benefits of the ensemble Kalman filter by updating the state vector based on the estimated covariance structure.
+
+This process begins after the model is spun up to equilibrium.
+
+The likelihood function uses the data vector $\left(\boldsymbol{y_{t}}\right)$ conditional on the estimated state vector $\left(\boldsymbol{x_{t}}\right)$ such that
+
+ $\boldsymbol{y}_{t}\sim\mathrm{multivariate\:normal}(\boldsymbol{x}_{t},\boldsymbol{R}_{t})$
+
+where $\boldsymbol{R}_{t}=\boldsymbol{\sigma}_{t}^{2}\boldsymbol{I}$ and $\boldsymbol{\sigma}_{t}^{2}$ is a vector of data variances. To obtain an estimate of the state vector $\left(\boldsymbol{x}_{t}\right)$, we use a process model that incorporates a process covariance matrix $\left(\boldsymbol{Q}_{t}\right)$. This process covariance matrix differentiates our methods from past ensemble filters. Our process model contains the following equations
+
+$\boldsymbol{x}_{t} \sim \mathrm{multivariate\: normal}(\boldsymbol{x}_{model_{t}},\boldsymbol{Q}_{t})$
+
+$\boldsymbol{x}_{model_{t}} \sim \mathrm{multivariate\: normal}(\boldsymbol{\mu}_{forecast_{t}},\boldsymbol{P}_{forecast_{t}})$
+
+where $\boldsymbol{\mu}_{forecast_{t}}$ is a vector of means from the ensemble forecasts and $\boldsymbol{P}_{forecast_{t}}$ is a covariance matrix calculated from the ensemble forecasts. The prior for our process covariance matrix is $\boldsymbol{Q}_{t}\sim\mathrm{Wishart}(\boldsymbol{V}_{t},n_{t})$ where $\boldsymbol{V}_{t}$ is a scale matrix and $n_{t}$ is the degrees of freedom.
The prior shape parameters are updated at each time step through moment matching such that
+
+$\boldsymbol{V}_{t+1} = n_{t}\bar{\boldsymbol{Q}}_{t}$
+
+$n_{t+1} = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\frac{v_{ijt}^{2}+v_{iit}v_{jjt}}{Var(\boldsymbol{\bar{Q}}_{t})}}{I\times J}$
+
+where we calculate the mean of the process covariance matrix $\left(\bar{\boldsymbol{Q}_{t}}\right)$ from the posterior samples at time t. Degrees of freedom for the Wishart are typically calculated element by element, where $v_{ij}$ are the elements of $\boldsymbol{V}_{t}$, and $I$ and $J$ index the rows and columns of $\boldsymbol{V}$. Here, we calculate a mean number of degrees of freedom for $t+1$ by summing over all the elements of the scale matrix $\left(\boldsymbol{V}\right)$ and dividing by the count of those elements $\left(I\times J\right)$. We fit this model sequentially through time in the R computing environment using the R package 'rjags.'
+
+Users have control over how best to estimate $Q$. Our code looks for the tag `q.type` in the XML settings under `state.data.assimilation`, which can take one of three values: Single, PFT, or Site. If `q.type` is set to Single, then one value of process variance is estimated across all sites or PFTs. On the other hand, when `q.type` is set to Site or PFT, a separate process variance is estimated for each site or PFT, at the cost of more time and computational power.
+
+#### Multi-site state data assimilation
+The `sda.enkf.multisite` function allows for assimilation of observed data at multiple sites at the same time. In order to run a multi-site SDA, one needs to send a multisettings pecan xml file to this function. This multisettings xml file needs to contain the information required for running at least two sites under the `run` tag. The code will automatically run the ensembles for all the sites and reformat the outputs to match the formats required for the analysis step.
+
+The observed mean and covariance need to be formatted as a list of dates with observations; for each element of this list, there needs to be a list with the mean vectors and covariance matrices of the different sites, named by their site id. If zero variance is estimated for a variable inside obs.cov, the SDA code automatically replaces it with half of the minimum variance from the other non-zero variables in that time step.
+ +This would look like something like this: +``` +> obs.mean + +$`2010/12/31` +$`2010/12/31`$`1000000650` + AbvGrndWood GWBI + 111.502 1.0746 + +$`2010/12/31`$`1000000651` + AbvGrndWood GWBI + 114.302 1.574695 +``` + +``` +> obs.cov + +$`2010/12/31` +$`2010/12/31`$`1000000650` + [,1] [,2] +[1,] 19.7821691 0.213584319 +[2,] 0.5135843 0.005162113 + +$`2010/12/31`$`1000000651` + [,1] [,2] +[1,] 15.2821691 0.513584319 +[2,] 0.1213583 0.001162113 +``` +An example of multi-settings pecan xml file also may look like below: +``` + + + + FALSE + TRUE + + 1000000040 + 1000013298 + + + + GWBI + KgC/m^2 + 0 + 9999 + + + AbvGrndWood + KgC/m^2 + 0 + 9999 + + + 1960/01/01 + 2000/12/31 + + + + -1 + + 2017/12/06 21:19:33 +0000 + + /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768 + + + bety + bety + 128.197.168.114 + bety + PostgreSQL + false + + /fs/data1/pecan.data/dbfiles/ + + + + temperate.deciduous_SDA + + 2 + + /fs/data2/output//PEcAn_1000008768/pft/temperate.deciduous_SDA + 1000008552 + + + + 3000 + + FALSE + TRUE + + + + 20 + 1000016146 + 1995 + 1999 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000022 + /fs/data3/hamzed/output/paleon_sda_SIPNET-8768/Bartlett.param + SIPNET + r136 + FALSE + /fs/data5/pecan.models/SIPNET/trunk/sipnet_ssr + + + 1000008768 + + + + + 1000000650 + 1960/01/01 + 1965/12/31 + Harvard Forest - Lyford Plots (PalEON PHA) + 42.53 + -72.18 + + + + CRUNCEP + SIPNET + + /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim + + + + 1960/01/01 + 1980/12/31 + + + + 1000000651 + 1960/01/01 + 1965/12/31 + Harvard Forest - Lyford Plots (PalEON PHA) + 42.53 + -72.18 + + + + CRUNCEP + SIPNET + + /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim + + + + 1960/01/01 + 1980/12/31 + + + + localhost + /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run + /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out + + + TRUE + TRUE + TRUE + + /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run + /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out + run + +``` + +### Running SDA on remote +In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote_Sync_launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote_Sync_launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. + +`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote_Sync_launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. 
+
+Additionally, the `Remote_Sync_launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, pings the given PID every 5 minutes and copies over the SDA output.
+
+Several points on how to prepare your xml settings for a remote SDA run:
+
+1. In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add are your state data assimilation tags.
+2. Inside the xml settings, an `` flag needs to be included and point to a local directory where `SDA_remote_launcher` will look for either a `sample.Rdata` file or a `pft` folder.
+3. You need to set your `<host>` tag according to the desired remote machine. You can learn more about this in the `Remote execution with PEcAn` section of the documentation. Please make sure that the `<folder>` tag inside `<host>` points to a directory where you would like to store and run your SDA job(s).
+4. Finally, make sure the tag inside the tag is set to the correct path on the remote machine.
+
+### Restart functionality in SDA
+If you prefer to run your SDA analysis in multiple stages, where each phase picks up where the previous one left off, you can use the `restart` argument in the `sda.enkf.multisite` function. You need to make sure that the output from the previous step exists in the `SDA` folder (in the `outfolder`), and that the `` is the same as the `` from the previous step. When you run the SDA with the restart parameter, it will load the output from the previous step and use the configs already written in the run folder to set itself up for the next step. Using the restart argument could be as easy as:
+
+```
+sda.enkf.multisite(settings,
+                   obs.mean = obs.mean,
+                   obs.cov = obs.cov,
+                   control = SDA.arguments(debug = FALSE, TimeseriesPlot = FALSE),
+                   restart = FALSE
+                   )
+
+```
+where the new `settings`, `obs.mean`, and `obs.cov` contain the relevant information for the next phase.
+
+
+### State Data Assimilation Methods
+
+
+*By Ann Raiho*
+
+Our goal is to build a fully generalizable state data assimilation (SDA) workflow that will assimilate multiple types of data and data products into ecosystem models within PEcAn, temporally and spatially. During development, however, specifically with PalEON goals in mind, we have been focusing on assimilating tree-ring-estimated NPP and AGB and pollen-derived fractional composition into two ecosystem models, SIPNET and LINKAGES, at Harvard Forest. This methodology will soon be expanded to include the PalEON sites listed on the [state data assimilation wiki page](https://paleon.geography.wisc.edu/doku.php/working_groups;state_data_assimilation).
+
+#### Data Products
+During workflow development, we have been working with tree-ring-estimated NPP and AGB and pollen-derived fractional composition data products. Both of these data products have been estimated with a full accounting of uncertainty, which provides us with a state variable observation mean vector and covariance matrix at each time step. These data products are discussed in more detail below. Even though we have been working with specific data products during development, our workflow is generalizable to alternative data products, as long as we can calculate a state variable observation mean vector and covariance for a time point.
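+
+As a concrete illustration of what such a mean vector and covariance look like for one site and one time point, here is a minimal sketch in the nested list-by-date, list-by-site format described in the multi-site section above (the date, values, and site id are hypothetical):
+
+```{r, echo = TRUE, eval = FALSE}
+# Hypothetical observation mean vector and covariance for a single time point
+obs.mean <- list("1970/12/31" = list(
+  "1000000650" = c(AbvGrndWood = 110.5, GWBI = 1.1)))
+
+obs.cov <- list("1970/12/31" = list(
+  "1000000650" = matrix(c(20.0, 0.5,
+                          0.5,  0.01), nrow = 2, byrow = TRUE)))
+```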
#### Tree Rings
We have been working primarily with the tree ring data product created by Andria Dawson and Chris Paciorek and with the PEcAn tree ring allometry module. They have developed a Bayesian model that estimates annual aboveground biomass increment (Mg/ha/yr) and aboveground biomass (Mg/ha) for each tree in a dataset. We obtain these data and aggregate them to the level appropriate for the ecosystem model. In SIPNET, we are assimilating annual gross woody increment (Mg/ha/yr) and aboveground woody biomass (Mg/ha). In LINKAGES, we are assimilating annual species biomass. More information on deriving these tree ring data products can be found in Dawson et al 201?.

We have been working mostly with tree data collected at Harvard Forest. Tree rings and census data were collected at the Lyford Plot between 1960 and 2010 in three separate plots. Other tree ring data will be added to this analysis in the future from past PEON courses (UNDERC), Kelly Heilman (Billy's Lake and Bigwoods), and Alex Dye (Huron Mt. Club).

#### Pollen
STEPPS is a Bayesian model developed by Paciorek and McLachlan 2009 and Dawson et al 2016 to estimate spatially gridded fractional composition from fossil pollen. We have been working with STEPPS1 output, specifically with the grid cell that contains Harvard Forest. The temporal resolution of this data product is centennial. Our workflow currently operates at annual time steps, but it does not require data at every time step. So, it is possible to assimilate fractional composition every one hundred years, or to assimilate fractional composition data every year by accounting for variance inflation.

In the future, pollen-derived biomass (ReFAB) will also be available for data assimilation, although we have not yet discussed how STEPPS and ReFAB data assimilation will work together.

#### Variance Inflation
*Side note: we probably want to call this something else now.*

Since the fractional composition data product has a centennial resolution, in order to use fractional composition information every year we need to change the weight the data has on the analysis. The basic idea is to downweight the likelihood relative to the prior to account for (a) the fact that we assimilate an observation multiple times and (b) the fact that the number of STEPPS observations is 'inflated' because of the autocorrelation. To do this, we take the likelihood and raise it to the power of (1/w), where 'w' is an inflation factor:

w = D * (N / ESS)

where D is the length of the time step (in our case D = 100), N is the number of time steps (in our case N = 11), and ESS is the effective sample size. The ESS is calculated with the following function, where ntimes is the same as N above and sims is a matrix with dimensions number of time steps (rows) by number of MCMC samples (columns).
```
ESS_calc <- function(ntimes, sims){
  # center based on mean at each time to remove baseline temporal correlation
  # (we want to estimate effective sample size effect from correlation of the errors)
  row.means.sims <- sims - rowMeans(sims)

  # compute all pairwise covariances at different times
  covars <- NULL
  for(lag in 1:(ntimes-1)){
    covars <- c(covars, rowMeans(row.means.sims[(lag+1):ntimes, , drop = FALSE] * row.means.sims[1:(ntimes-lag), , drop = FALSE]))
  }
  vars <- apply(row.means.sims, 1, var) # pointwise post variances at each time, might not be homoscedastic

  # nominal sample size scaled by ratio of variance of an average
  # under independence to variance of average of correlated values
  neff <- ntimes * sum(vars) / (sum(vars) + 2 * sum(covars))
  return(neff)
}
```

The ESS for the STEPPS1 data product is 3.6, so w in our assimilation of fractional composition at Harvard Forest will be w = 305.6.

#### Current Models
SIPNET and LINKAGES are the two ecosystem models that have been used during state data assimilation development within PEcAn. SIPNET is a simple ecosystem model that was built for... LINKAGES is a forest gap model created to simulate the process of succession that occurs when a gap is opened in the forest canopy. LINKAGES has 72 species-level plant functional types and the ability to simulate some belowground processes (C and N cycles).

#### Model Calibration
Without model calibration, both SIPNET and LINKAGES make incorrect predictions about Harvard Forest. To confront this problem, SIPNET and LINKAGES will both be calibrated using data collected at the Harvard Forest flux tower. Istem has completed calibration for SIPNET using a [parameter data assimilation emulator](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/R/pda.emulator.R) contained within the PEcAn workflow. LINKAGES will also be calibrated using this method. This method is also generalizable to other sites, assuming data independent of the assimilated data are available to calibrate against.

#### Initial Conditions
The initial conditions for SIPNET are sampled across state space based on data distributions at the time when the data assimilation will begin. We do not sample LINKAGES for initial conditions and instead perform model spin-up for 100 years prior to beginning data assimilation. In the future, we would like to estimate initial conditions based on data. We achieve adequate spread in the initial conditions by allowing the parameters to vary across ensemble members.

#### Drivers
We are currently using Global Climate Model (GCM) drivers from the PalEON model intercomparison. Christy Rollinson and John Tipton are creating downscaled GCM met drivers for the PalEON data assimilation sites. We will use these drivers when they are available because they are a closer representation of reality.

#### Sequential State Data Assimilation
We are using sequential state data assimilation methods to assimilate PalEON data products into ecosystem models because less computational power is required for sequential state data assimilation than for particle filter methods.

#### General Description
The general sequential data assimilation framework consists of three steps at each time step:
1. Read the state variable output for time t from the model forecast ensembles and save the forecast mean (muf) and covariance (Pf).
2. If there are a data mean (y) and covariance (R) at this time step, perform the data assimilation analysis (either EnKF or the generalized ensemble filter) to calculate the new mean (mua) and covariance (Pa) of the state variables.
3. Use mua and Pa to restart and run the ecosystem model ensembles with the new state variables for time t+1.

#### EnKF
There are two ways to implement sequential state data assimilation at this time. The first is the Ensemble Kalman Filter (EnKF). The EnKF has an analytical solution, so the Kalman gain, analysis mean vector, and analysis covariance matrix can be calculated directly:

```
K <- Pf %*% t(H) %*% solve((R + H %*% Pf %*% t(H)))  ## Kalman gain

mu.a <- mu.f + K %*% (Y - H %*% mu.f)  # analysis mean vector

Pa <- (diag(ncol(X)) - K %*% H) %*% Pf  # analysis covariance matrix
```

The EnKF is typically used for sequential state data assimilation, but we found that the EnKF led to filter divergence when combined with our uncertain data products. Filter divergence led us to create a generalized ensemble filter that estimates process variance.

#### Generalized Ensemble Filter
The generalized ensemble filter generally follows the three steps of sequential state data assimilation, but it adds a latent state vector that accounts for added process variance. Furthermore, instead of solving the analysis analytically like the EnKF, we have to estimate the analysis mean vector and covariance matrix with MCMC.

#### Mapping Ensemble Output to Tobit Space
There are some instances when we have right- or left-censored variables from the model forecast. For example, a model estimating species-level biomass may have several ensemble members that produce zero biomass for a given species. We consider this a left-censored state variable that needs to be mapped to normal space using a tobit model. We do this by creating two matrices with dimensions number of ensemble members by number of state variables. The first matrix is a matrix of indicator variables (y.ind), and the second is a matrix of censored variables (y.censored). When the indicator variable is 0, the state variable (j) for ensemble member (i) is sampled. This allows us to impute a normal distribution for each state variable that contains 'missing' forecasts or forecasts of zero.

```
tobit2space.model <- nimbleCode({
  for(i in 1:N){
    y.censored[i,1:J] ~ dmnorm(muf[1:J], cov = pf[1:J,1:J])
    for(j in 1:J){
      y.ind[i,j] ~ dconstraint(y.censored[i,j] > 0)
    }
  }

  muf[1:J] ~ dmnorm(mean = mu_0[1:J], cov = pf[1:J,1:J])

  Sigma[1:J,1:J] <- lambda_0[1:J,1:J]/nu_0
  pf[1:J,1:J] ~ dinvwish(S = Sigma[1:J,1:J], df = J)

})
```


#### Generalized Ensemble Filter Model Description
Below is the BUGS code for the full analysis model. The forecast mean and covariance are calculated from the tobit2space model above. We use a tobit likelihood in this model because there are instances when the data may be left or right censored. Process variance is included by adding a latent model state (X) with a process precision matrix (q). We update our prior on q at each time step using our estimate of q from the previous time step.
```
tobit.model <- nimbleCode({

  q[1:N,1:N] ~ dwish(R = aq[1:N,1:N], df = bq) ## aq and bq are estimated over time
  Q[1:N,1:N] <- inverse(q[1:N,1:N])
  X.mod[1:N] ~ dmnorm(muf[1:N], prec = pf[1:N,1:N]) ## Model Forecast ## muf and pf are assigned from ensembles

  ## add process error
  X[1:N] ~ dmnorm(X.mod[1:N], prec = q[1:N,1:N])

  # agb linear
  # y_star[1:YN,1:YN] <- X[1:YN,1:YN] # [choose]

  # f.comp non linear
  # y_star[1:YN] <- X[1:YN] / sum(X[1:YN])

  ## Analysis
  y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN,1:YN]) # is it an okay assumption to just have X and Y in the same order?

  # don't flag y.censored as data, y.censored in inits
  # remove y.censored samplers and only assign univariate samplers on NAs

  for(i in 1:YN){
    y.ind[i] ~ dconstraint(y.censored[i] > 0)
  }

})
```

#### Ensemble Adjustment
Each ensemble member has a different set of species parameters. We adjust the updated state variables by using an ensemble adjustment. The ensemble adjustment weights the ensemble members based on their likelihood during the analysis step.

```
S_f <- svd(Pf)
L_f <- S_f$d
V_f <- S_f$v

## normalize
Z <- X*0
for(i in seq_len(nrow(X))){
  Z[i,] <- 1/sqrt(L_f) * t(V_f) %*% (X[i,] - mu.f)
}
Z[is.na(Z)] <- 0

## analysis
S_a <- svd(Pa)
L_a <- S_a$d
V_a <- S_a$v

## analysis ensemble
X_a <- X*0
for(i in seq_len(nrow(X))){
  X_a[i,] <- V_a %*% diag(sqrt(L_a)) %*% Z[i,] + mu.a
}
```

#### Diagnostics
There are three diagnostics we have currently implemented: time series, bias time series, and process variance. The time series diagnostics show the data, forecast, and analysis time series for each state variable. These are useful for visually assessing the variance and magnitude of change of state variables through time. These time series are updated throughout the analysis and are also saved as a pdf at the end of the SDA workflow.

There are two types of bias time series: the first assesses the bias in the update (the forecast minus the analysis), and the second assesses the bias in the error (the forecast minus the data). These bias time series are useful for identifying which state variables have intrinsic bias within the model. For example, if red oak biomass in LINKAGES increases at every time step (the update and the error are always positive), this would suggest that LINKAGES has a positive growth or recruitment bias for red oak.

Finally, when using the generalized ensemble filter to estimate process variance, there are two additional plots for assessing the estimation of process variance. The first is a correlation plot of the process covariance matrix. This tells us which correlations are incorrectly represented by the model. For example, if red oak biomass and white pine biomass are highly negatively correlated in the process covariance matrix, this means that either 1) the model has no relationship between red oak and white pine when they should affect each other negatively, or 2) the model imposes a positive relationship between red oak and white pine when there shouldn't be any relationship. We can determine which of these is true by comparing the process covariance matrix to the model covariance matrix. The second process variance diagnostic plot shows how the degrees of freedom associated with estimating the process covariance matrix have changed through time. This plot should show increasing degrees of freedom through time.


### MultiSettings {#multisettings}

(TODO: Under construction...)
### Benchmarking {#benchmarking}

Benchmarking is the process of comparing model outputs against either experimental data or against other model outputs as a way to validate model performance.
We have a suite of statistical comparisons that provide benchmarking scores, as well as visual comparisons that help in diagnosing data-model and/or model-model differences.

#### Data Preparation

All data that you want to compare with model runs must be registered in the database.
This is currently a step that must be done by hand, either from the command line or through the online BETY interface.
The data must have three records:

1. An input record (instructions [here](#NewInput))

2. A database file record (instructions [here](#NewInput))

3. A format record (instructions [here](#NewFormat))

#### Model Runs

Model runs can be set up and executed:
- Using the PEcAn web interface online or with a VM ([see setup](#GettingStarted))
- By hand using the [pecan.xml](#pecanXML)

#### The Benchmarking Shiny App

The entire benchmarking process can be done through the Benchmarking R Shiny app.

When the model run has completed, navigate to the workflow visualization Shiny app.

- Load model data
  - Select the workflow and run id
  - Make sure that your model output is loading properly (i.e. you can see plots of your data)
- Load benchmarking data
  - Again, make sure that you can see the uploaded data plotted alongside the model output. In the future there will be more tools for double-checking that your uploaded data are appropriate for benchmarking, but for now you may need to do the sanity checks by hand.

#### Create a reference run record

- Navigate to the Benchmarking tab

The first step is to register the new model run as a reference run in the database. Benchmarking cannot be done before this step is completed. When the reference run record has been created, additional menus for benchmarking will appear.

#### Setup Benchmarks and metrics

- From the menus select
  - The variables in the uploaded data that you wish to compare with model output.
  - The numerical metrics you would like to use in your comparison.
  - Additional comparison plots that you would like to see.
- Note: All these selections populate the benchmarking section of the `pecan.BENCH.xml`, which is then saved in the same location as the original run output. This xml is purely for reference.

##### Benchmarking Output

- All benchmarking results are stored in the benchmarking directory, which is created in the same folder as the original model run.
- The benchmarking directory contains subdirectories for each of the datasets compared with the model output. The names of these directories are the same as the corresponding data set's input id in BETY.
- Each input directory contains `benchmarking.output.Rdata`, an Rdata file containing all the results of the benchmarking workflow. `load(benchmarking.output.Rdata)` loads a list called `result.out` which contains the following:
  - `bench.results`: a data frame of all numeric benchmarking scores
  - `format`: a data frame that can be used to see how the input data was transformed to make it comparable to the model output. This involves converting from the original variable names and units to the internal PEcAn standard.
  - `aligned.dat`: a data frame of the final aligned model and input values.
- All plots are saved as pdf files named `benchmark_plot-type_variable_input-id.pdf`.

- To view interactive results, navigate to the Benchmarking Plots tab in the Shiny app.



#### Benchmarking in pecan.xml

Before reading this section, it is recommended that you [familiarize yourself with the basics of the pecan.xml file](#pecanXML).

The `pecan.xml` has an _optional_ benchmarking section. Below, all the tags in the benchmarking section are explained. Many of these fields are filled in automatically during the benchmarking process when using the benchmarking Shiny app.

The only time one should edit the benchmarking section by hand is for performing clone runs. See the [clone run documentation](#CloneRun).

`<benchmarking>` settings:

- `ensemble_id`: the id of the ensemble that you will be using - the settings from this ensemble will be saved in a reference run record and then `ensemble_id` will be replaced with `reference_run_id`
- `new_run`: TRUE = create new run, FALSE = use existing run (required, default FALSE)

It is possible to look at more than one benchmark with a particular run.
The specific settings related to each benchmark are in a subsection called `benchmark`:

- `input_id`: the id of the benchmarking data (required)
- `variable_id`: the id of the variable of interest within the data. If you leave this blank, all variables that are shared between the input and model output will be used.
- `metric_id`: the id(s) of the metric(s) to be calculated. If you leave this blank, all metrics will be used.

Example:
In this example,
- we are using a pre-existing run from `ensemble_id = 1000010983` (`new_run = FALSE`)
- the output will be compared to data from `input_id = 1000013743`, specifically two variables of interest: `variable_id = 411` and `variable_id = 18`
- for `variable_id = 411` we will perform only one metric of comparison, `metric_id = 1000000001`
- for `variable_id = 18` we will perform two metrics of comparison, `metric_id = 1000000001` and `metric_id = 1000000002`

```xml
1000010983
FALSE

1000013743
411
853

1000000001

1000013743
18
853

1000000001
1000000002
```



# Developer guide {#developer-guide}

* [Update BETY](#updatebety)
* [Update PEcAn Code](#pecan-make)
* [PEcAn and Git](#pecan-git)
* [Coding Practices](#coding-practices)



## Updating PEcAn Code and Bety Database {#updatebety}

Release notes for all releases can be found [here](https://github.com/PecanProject/pecan/releases).

This page will only list any steps you have to do to upgrade an existing system. When updating PEcAn it is highly encouraged to update BETY. You can find instructions on how to do this, as well as how to update the database, on the [Updating BETYdb](https://pecan.gitbooks.io/betydb-documentation/content/updating_betydb_when_new_versions_are_released.html) gitbook page.


### Updating PEcAn {#pecan-make}

The latest version of the PEcAn code can be obtained from the PEcAn repository on GitHub:

```bash
cd pecan # If you are not already in the PEcAn directory
git pull
```

The PEcAn build system is based on GNU Make.
The simplest way to install is to run `make` from inside the PEcAn directory.
This will update the documentation for all packages and install them, as well as all required dependencies.

For more control, the following `make` commands are available:

* `make document` -- Use `devtools::document` to update the documentation for all packages.
Under the hood, this uses the `roxygen2` documentation system.

* `make install` -- Install all packages and their dependencies using `devtools::install`.
By default, this only installs packages whose code has changed, along with any packages that depend on them.

* `make check` -- Perform a rigorous check of packages using `devtools::check`

* `make test` -- Run all unit tests (based on the `testthat` package) for all packages, using `devtools::test`

* `make clean` -- Remove the make build cache, which is used to track which packages have changed.
Cache files are stored in the `.doc`, `.install`, `.check`, and `.test` subdirectories in the PEcAn main directory.
Running `make clean` will force the next invocation of `make` commands to operate on all PEcAn packages, regardless of changes.

The following are some additional `make` tricks that may be useful:

* Install, check, document, or test a specific package -- `make .<action>/<package>`; e.g. `make .install/utils` or `make .check/modules/rtm`

* Force `make` to run, even if the package has not changed -- `make -B <target>`

* Run `make` commands in parallel -- `make -j<n>`; e.g. `make -j4 install` to install packages using four parallel processes. Note that packages containing compiled code (e.g. PEcAn.RTM, PEcAn.BASGRA) might fail when `j` is greater than 1, because of limitations in the way R calls `make` internally while compiling them. See [GitHub issue 1976](https://github.com/PecanProject/pecan/issues/1976) for more details.

All instructions for the `make` build system are contained in the `Makefile` in the PEcAn root directory.
For full documentation on `make`, see the man pages by running `man make` from a terminal.



## Git and GitHub Workflow {#pecan-git}

[Using Git](#using-git)



### Using Git {#using-git}

This document describes the steps required to download PEcAn, make changes to the code, and submit your changes.

* If you are new to GitHub or to PEcAn, start with the one-time set-up instructions under [Before any work is done]. Also see the excellent tutorials and references in the [Git] section right below this list and at the bottom in [References].
* To make trivial changes, see [Quick and Easy].
* To make a few changes to the code, start with the [Basic Workflow].
* To make substantial changes and/or if you plan to contribute over time, see [Recommended Workflow: A new branch for each change].

#### Git

Git is a free & open source, distributed version control system designed
to handle everything from small to very large projects with speed and
efficiency. Every Git clone is a full-fledged repository with complete
history and full revision tracking capabilities, not dependent on
network access or a central server. Branching and merging are fast and
easy to do.

A good place to start is the [GitHub 5 minute illustrated tutorial](https://guides.github.com/introduction/flow/).
In addition, there are three fun tutorials for learning git:

* [Learn Git](https://www.codecademy.com/learn/learn-git) is a great web-based interactive tutorial.
* [LearnGitBranching](https://learngitbranching.js.org/)
* [TryGit](http://try.github.com)


**URLs** In the rest of this document we will use specific URLs to clone the code.
There are a few URLs you can use to clone a project, using https, ssh, or git. You can use either https or git to clone a repository and write to it. The git protocol is read-only.
#### PEcAn Project and GitHub
* Organization Repository: https://github.com/organizations/PecanProject
* PEcAn source code: https://github.com/PecanProject/pecan.git
* BETYdb source code: https://github.com/PecanProject/bety.git

These instructions apply to other repositories too.

#### PEcAn Project Branches
We follow the branch organization laid out on [this page](http://nvie.com/posts/a-successful-git-branching-model).

In short, there are three main branches you must be aware of:

* **develop** - Main branch containing the latest code. This is the branch you will make changes to.
* **master** - Branch containing the latest stable code. DO NOT MAKE CHANGES TO THIS BRANCH.
* **release/vX.X.X** - Named branches containing code specific to a release. Only make changes to a release branch if you are fixing a bug on that release.

#### Milestones, Issues, Tasks

Milestones, issues, and tasks can be used to organize specific features or research projects. In general, there is a hierarchy:

* milestones (big picture, "Epic"): contain many issues, organized by release.
* issues (specific features / bugs, "Story"): may contain a list of tasks.
* task list (to-do list, "Tasks"): list of steps required to close an issue, e.g.:

----------------------------------
* [ ] first do this
* [ ] then this
* [ ] completed when x and y
----------------------------------


#### Quick and Easy

The **easiest** approach is to use GitHub's browser-based workflow. This is useful when your change is a few lines, if you are editing a wiki, or if the edit is trivial (and won't break the code). The [GitHub documentation is here](https://help.github.com/articles/github-flow-in-the-browser), but it is simple: find the page or file you want to edit, click "edit", and the GitHub web application will automatically fork and branch, then allow you to submit a pull request. However, note that unless you are a member of the PEcAn project, the "edit" button will not be active and you'll want to follow the workflow described below for forking and then submitting a pull request.


#### Recommended Git Workflow


**Each feature should be in its own branch** (for example, each issue is a branch; names of branches are often the issue number in a bug tracking system).

**Commit and Push Frequency** On your branch, commit at minimum once a day before you push changes. Even better, commit every time you reach a stopping point and move to a new issue. Best, commit any time that you have done work that you do not want to re-do. Remember, pushing changes to your branch is like saving a draft. Submit a pull request when you are done.

#### Before any work is done

The first step below only needs to be done once, when you first start working on the PEcAn code. The remaining steps set up PEcAn on your computer and would need to be repeated if you move to a new computer. If you are working from the PEcAn VM, you can skip the `git clone` step since the PEcAn code is already installed.

Most people will not be able to work in the PEcAn repository directly and will need to create a fork of the PEcAn source code in their own folder. Fork PEcAn into your own GitHub space ([github help: "fork a repo"](https://help.github.com/articles/fork-a-repo)). This forked repository will allow you to create branches, commit changes back to GitHub, and create pull requests to the develop branch of PEcAn.
The forked repository is the only way for external people to commit code back to PEcAn and BETY. The pull request will start a review process that will eventually result in the code being merged into the main copy of the codebase. See https://help.github.com/articles/fork-a-repo for more information, especially on how to keep your fork up to date with respect to the original. (RStudio users should also see [Git + Rstudio](Using-Git.md#git--rstudio), below.)

You can set up SSH keys to make it easier to commit code back to GitHub. This is especially useful if you are working from a cluster; see [set up ssh keys](https://help.github.com/articles/generating-ssh-keys).

1. Introduce yourself to Git:

`git config --global user.name "FULLNAME"`
`git config --global user.email you@yourdomain.example.com`

2. Fork PEcAn on GitHub. Go to the PEcAn source code and click on the Fork button in the upper right. This will create a copy of PEcAn in your personal space.

3. Clone to your local machine via the command line:

`git clone git@github.com:<username>/pecan.git`

If this does not work, try the https method:

`git clone https://github.com/PecanProject/pecan.git`

4. Define the upstream repository:

```
cd pecan
git remote add upstream git@github.com:PecanProject/pecan.git
```

#### During development:

* commit often;
* each commit can address 0 or 1 issue; many commits can reference an issue;
* ensure that all tests are passing before anything is pushed into develop.

#### Basic Workflow

This workflow is for educational purposes only. Please use the Recommended Workflow if you plan on contributing to PEcAn. This workflow does not include creating branches, a feature we would like you to use.

1. Get the latest code from the main repository:

`git pull upstream develop`

2. Do some coding.

3. Commit after each chunk of code (multiple times a day):

`git commit -m "<message>"`

4. Push to YOUR GitHub (when a feature is working, a set of bugs is fixed, or you need to share progress with others):

`git push origin develop`

5. Before submitting code back to the main repository, make sure that the code compiles from the main directory:

`make`

6. Submit a pull request with a reference to the related issue;
* also see the [github documentation](https://help.github.com/articles/using-pull-requests)


#### Recommended Workflow: A new branch for each change

1. Make sure you start in develop:

`git checkout develop`

2. Make sure develop is up to date:

`git pull upstream develop`

3. Run the PEcAn Makefile to compile code from the main directory:

`make`

4. Create a branch and switch to it:

`git checkout -b <branchname>`

5. Work/commit/etc:

`git add <file>`

`git commit -m "<message>"`

6. Make sure that the code compiles and the documentation is updated. The `make document` command will run roxygenise:

`make document`
`make`

7. Push this branch to your GitHub space:

`git push origin <branchname>`

8. Submit a pull request and [link commits to issues](#link-commits-to-issues);
* also see the [explanation in this PecanProject/bety issue](https://github.com/PecanProject/bety/issues/57) and the [github documentation](https://help.github.com/articles/using-pull-requests)

#### After the pull request is merged

1. Make sure you start in develop:

`git checkout develop`

2. Delete the branch remotely:

`git push origin --delete <branchname>`
3. Delete the branch locally:

`git branch -D <branchname>`


#### Fixing a release Branch

If you would like to make changes to a release branch, you must follow a different workflow, as the release branch will not contain the latest code on develop and must remain separate.

1. Fetch the upstream remote branches:

`git fetch upstream`

2. Check out the correct release branch:

`git checkout -b release/vX.Y.Z`

3. Compile the code with make:

`make`

4. Make changes and commit them:

`git add <changed_file>`
`git commit -m "Describe changes"`

5. Compile and make roxygen changes:

`make`
`make document`

6. Commit and push any files that were changed by `make document`.

7. Make a pull request. It is essential that you compare your pull request to the remote release branch, NOT the develop branch.


#### Link commits to issues

You can reference and close issues from comments, pull requests, and commit messages. This should be done when you commit code that is related to or will close/fix an existing issue.

There are two ways to do this. One easy way is to include the following text in your commit message:

* [**Github**](https://github.com/blog/1386-closing-issues-via-commit-messages)
* to close: "closes gh-xxx" (or syn. close, closed, fixes, fix, fixed)
* to reference: just the issue number (e.g. "gh-xxx")


#### Other Useful Git Commands:

* Git encourages branching "early and often"
* First pull from develop
* Branch before working on a feature
* One branch per feature
* You can switch easily between branches
* Merge a feature into the main line when the branch is done

If during the above process you want to work on something else, commit all
your code, create a new branch, and work on the new branch.


* Delete a branch: `git branch -d <branchname>`
* To push a branch: `git push -u origin <branchname>`
* To check out a branch:

```
git fetch origin
git checkout --track origin/<branchname>
```

* Show a graph of commits:

`git log --graph --oneline --all`

#### Tags

Git supports two types of tags: lightweight and annotated. For more information see the [Tagging Chapter in the Git documentation](http://git-scm.com/book/ch2-6.html).

Lightweight tags are useful, but here we discuss the annotated tags that are used for marking stable versions, major releases, and versions associated with published results.

The basic command is `git tag`. The `-a` flag means 'annotated' and `-m` is used before a message. Here is an example:

`git tag -a v0.6 -m "stable version with foo and bar features, used in the foobar publication by Bob"`

Adding a tag to a remote repository must be done explicitly with a push, e.g.

`git push origin v0.6`

To use a tagged version, just check it out:

`git checkout v0.6`

To tag an earlier commit, just append the commit SHA to the command, e.g.

`git tag -a v0.99 -m "last version before 1.0" 9fceb02`

**Using GitHub** The easiest way to get working with GitHub is by installing the GitHub
client. For instructions for your specific OS and download of the
GitHub client, see https://help.github.com/articles/set-up-git.
This will help you set up an SSH key to push code back to GitHub. To
check out a project you do not need to have an ssh key, and you can use
the https or git url to check out the code.

#### Git + Rstudio


Rstudio is nicely integrated with many development tools, including git and GitHub.
It is quite easy to check out source code from within the Rstudio program or browser.
The Rstudio documentation includes useful overviews of [version control](http://www.rstudio.com/ide/docs/version_control/overview) and [R package development](http://www.rstudio.com/ide/docs/packages/overview).

Once you have git installed on your computer (see the [Rstudio version control](http://www.rstudio.com/ide/docs/version_control/overview) documentation for instructions), you can use the following steps to install the PEcAn source code in Rstudio.

#### Creating a Read-only version:

This is a fast way to clone the repository that does not support contributing new changes (though this can be enabled with further modification).

1. Install Rstudio (www.rstudio.com)
2. Click (upper right) project
    * create project
    * version control
    * Git - clone a project from a Git Repository
    * paste https://www.github.com/PecanProject/pecan
    * choose a working dir. for the repo

#### For development:

1. Create an account on GitHub
2. Create a fork of the PEcAn repository to your own account https://www.github.com/pecanproject/pecan
3. Install Rstudio (www.rstudio.com)
4. Generate an ssh key
    * in Rstudio:
        * `Tools -> Options -> Git/SVN -> "create RSA key"`
        * `View public key -> ctrl+C to copy`
    * in GitHub:
        * go to [ssh settings](https://github.com/settings/ssh)
        * `-> 'add ssh key' -> ctrl+V to paste -> 'add key'`
5. Create a project in Rstudio
    * `project (upper right) -> create project -> version control -> Git - clone a project from a Git Repository`
    * paste the repository url `git@github.com:<username>/pecan.git`
    * choose a working dir. for the repository

#### References

#### Git Documentation

* Scott Chacon, 'Pro Git book', [http://git-scm.com/book](http://git-scm.com/book)
* GitHub help pages, [https://help.github.com](https://help.github.com)
* Main Git page, [http://git-scm.com/documentation](http://git-scm.com/documentation)
* Another set of pages about branching, [http://sandofsky.com/blog/git-workflow.html](http://sandofsky.com/blog/git-workflow.html)
* [Stackoverflow highest voted questions tagged "git"](http://stackoverflow.com/questions/tagged/git?sort=votes&pagesize=50)


#### GitHub Documentation

When in doubt, the first step is to click the "Help" button at the top of the page.

* [GitHub Flow](http://scottchacon.com/2011/08/31/github-flow.html) by Scott Chacon (Git evangelist and Ruby developer working on GitHub.com)
* [GitHub FAQ](https://help.github.com/)
* [Using Pull Requests](https://help.github.com/articles/using-pull-requests)
* [SSH Keys](https://help.github.com/articles/generating-ssh-keys)



### GitHub use with PEcAn

In this section, development topics are introduced and discussed. PEcAn code lives in the [PEcAn repository on GitHub](https://github.com/PecanProject/pecan). If you are looking for an issue to work on, take a look through the issues labeled ["good first issue"](https://github.com/PecanProject/pecan/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).

We use GitHub to track development.

To learn about GitHub, it is worth taking some time to read through the [FAQ](https://help.github.com/). When in doubt, the first step is to click the "Help" button at the top of the page.

* **To address specific people**, use the GitHub @mentions feature, e.g. write @dlebauer, @robkooper, @mdietze, or @serbinsh in the issue to alert the user, as described in the [GitHub documentation on notifications](https://help.github.com/articles/notifications)


#### Bugs, Issues, Features, etc.


#### Reporting a bug
1. (For developers) Work through debugging.
2. Once you have identified a problem that you cannot resolve, you can write a bug report.
3. Write the bug report.
4. Submit the bug report.
5. If you do find the answer, explain the resolution (in the issue) and close the issue.

#### Required content

Note:

* **a bug is only a bug if it is reproducible**
* **clear bug reports save time**

1. Clear, specific title
2. Description -
   * What you did
   * What you expected to happen
   * What actually happened
   * What does work, under what conditions does it fail?
   * Reproduction steps - minimum steps required to reproduce the bug
3. Additional materials that could help identify the cause:
   * screen shots
   * stack traces, logs, scripts, output
   * specific code and data / settings / configuration files required to reproduce the bug
   * environment (operating system, browser, hardware)

#### Requesting a feature


(from The Pragmatic Programmer, available as an
[ebook](http://proquestcombo.safaribooksonline.com/0-201-61622-X/223)
through UI libraries, hardcopy on David's bookshelf)

* focus on "user stories", e.g. specific use cases
* be as specific as possible

* Here is an example:

  1. Bob is at www.mysite.edu/maps
  2. map of the region (based on user location, e.g. US, Asia, etc)
  3. option to "use current location" is provided; if clicked, the map zooms in to, e.g., state or county level
  4. for a site run:
     1. option to select an existing site or specify a point by lat/lon
     2. option to specify a bounding box and grid resolution in either lat/lon or polar stereographic
  5. asked to specify start and end times in terms of year, month, day, hour, minute. Time is recorded in UTC not local time; this should be indicated.

#### Closing an issue

1. Definition of "Done"
   * test
   * documentation
2. when the issue is resolved:
   * status is changed to "resolved"
   * assignee is changed to the original author
3. if the original author agrees that the issue has been resolved:
   * the original author changes the status to "closed"
4. except for trivial issues, issues are only closed by the author

#### When to submit an issue?

**Ideally, non-trivial code changes will be linked to an issue and a commit.**

This requires creating issues for each task, making small commits, and referencing the issue within your commit message. Issues can be created [on GitHub](http://pecanproject.github.io/Report_an_issue.html). These issues can be linked to commits by adding text such as `fixes gh-5`.

Rationale: This workflow is a small upfront investment that reduces error and time spent re-creating and debugging errors. Associating issues and commits makes it easier to identify why a change was made, and potential bugs that could arise when the code is changed. In addition, knowing which issue you are working on clarifies the scope and objectives of your current task.



## Coding Practices {#coding-practices}



### Coding Style {#developer-codestyle}

Consistent coding style improves readability and reduces errors in
shared code.

Unless otherwise noted, PEcAn follows the [Tidyverse style guide](https://style.tidyverse.org/), so please familiarize yourself with it before contributing.
In addition, note the following:

- **Document all functions using `roxygen2`**.
See [Roxygen2](#developer-roxygen) for more details.
- **Put your name on things**.
Any function that you create or make a meaningful contribution to should have your name listed after the author tag in the function documentation.
It is also often a good idea to add your name to extended comments describing particularly complex or strange code.
- **Write unit tests with `testthat`**.
Tests are a complement to documentation - they define what a function is (and is not) expected to do.
Not all functions necessarily need unit tests, but the more tests we have, the more confident we can be that changes don't break existing code.
Whenever you discover and fix a bug, it is a good idea to write a unit test that makes sure the same bug won't happen again.
See [Unit_Testing](#developer-testing) for instructions, and [R Packages: Tests](http://r-pkgs.had.co.nz/tests.html).
- **Do not use abbreviations**.
Always write out `TRUE` and `FALSE` (i.e. _do not_ use `T` or `F`).
Do not rely on partial argument matching -- write out all arguments in full.
- **Avoid dots in function names**.
R's S3 method system uses dots to denote object methods (e.g. `print.matrix` is the `print` method for objects of class `matrix`), which can cause confusion.
Use underscores instead (e.g. `do_analysis` instead of `do.analysis`).
(NOTE that many old PEcAn functions violate this convention. The plan is to deprecate those in PEcAn 2.0. See GitHub issue [#392](https://github.com/PecanProject/pecan/issues/392).)
- **Use informative file names with consistent extensions**.
Standard file extensions are `.R` for R scripts, `.rds` for individual objects (via the `saveRDS` function), and `.RData` (note: capital D!) for multiple objects (via the `save` function).
For function source code, prefer multiple files with fewer functions in each to large files with lots of functions (though it may be a good idea to group closely related functions in a single file).
File names should match, or at least closely reflect, their contents (e.g. function `do_analysis` should be defined in a file called `do_analysis.R`).
_Do not use spaces in file names_ -- use dashes (`-`) or underscores (`_`).
- **For using external packages, add the package to `Imports:` and call the corresponding function with `package::function`**.
_Do not_ use `@importFrom package function` or, worse yet, `@import package`.
(The exception is infix operators like `magrittr::%>%` or `ggplot2::%+%`, which can be imported via roxygen2 documentation like `@importFrom magrittr %>%`.)
_Do not_ add packages to `Depends`.
In general, try to avoid adding new dependencies (especially ones that depend on system libraries) unless they are necessary or already widely used in PEcAn (e.g. GDAL, NetCDF, XML, JAGS, `dplyr`).
For a more thorough and nuanced discussion, see the [package dependencies appendix](#package-dependencies).



### Logging {#developer-logging}

During development we often add many print statements to check how the code is doing, what is happening, what intermediate results there are, etc. When done with development it would be nice to turn this additional code off, but have the ability to quickly turn it back on if we discover a problem. This is where logging comes into play. Logging allows us to use "rules" to say what information should be shown.
For example, when working on code to create graphs, I do not need to see any debugging information about the SQL command being sent; but when trying to figure out what goes wrong in a SQL statement, it would be nice to show the SQL statements without adding any additional code.

PEcAn provides a set of `logger.*` functions that should be used in place of base R's `stop`, `warn`, `print`, and similar functions. The `logger` functions make it easier to print to a system log file, and to control the level of output produced by PEcAn.

* The file [test.logger.R](https://github.com/PecanProject/pecan/blob/develop/base/logger/tests/testthat/test.logger.R) provides descriptive examples
* This query provides a current overview of [functions that use logging](https://github.com/PecanProject/pecan/search?q=logger&ref=cmdform)
* Logger functions and their corresponding levels (in order of increasing level):
    * `logger.debug` (`"DEBUG"`) -- Low-level diagnostic messages that are hidden by default. Good examples of this are expanded file paths and raw results from database queries or other analyses.
    * `logger.info` (`"INFO"`) -- Informational messages that regular users may want to see, but which do not indicate anything unexpected. Good examples of this are progress updates for long-running processes, or brief summaries of queries or analyses.
    * `logger.warn` (`"WARN"`) -- Warning messages about issues that may lead to unexpected but valid results. Good examples of this are interactions between arguments that lead to some arguments being ignored, or removal of missing or extreme values.
    * `logger.error` (`"ERROR"`) -- Error messages from which PEcAn has some capacity to recover. Unless you have a very good reason, we recommend avoiding this in favor of either `logger.severe` to actually stop execution or `logger.warn` to more explicitly indicate that the problem is not fatal.
    * `logger.severe` -- Catastrophic errors that warrant immediate termination of the workflow. This is the only function that actually stops R's execution (via `stop`).
* The `logger.setLevel` function sets the level at which a message will be printed. For instance, `logger.setLevel("WARN")` will suppress `logger.info` and `logger.debug` messages, but will print `logger.warn` and `logger.error` messages. `logger.setLevel("OFF")` suppresses all logger messages.
* To print all messages to the console, use `logger.setUseConsole(TRUE)`



### Package Data {#developer-packagedata}

#### Summary:

Files with the following extensions will be read by R as data:

* plain R code in .R and .r files is sourced using `source()`
* text tables in .tab, .txt, and .csv files are read using `read.table()`
* objects in R image files (.RData, .rda) are loaded using `load()`
    * capitalization matters
    * all objects in foo.RData are loaded into the environment
    * pro: easiest way to store objects in R format
    * con: format is application (R) specific

Details are in `?data`, which is mostly a copy of the [Data section of Writing R Extensions](http://cran.r-project.org/doc/manuals/R-exts.html#Data-in-packages).

#### Accessing data

Data in the `data` directory will be accessed in the following ways:

* efficient way (especially for large data sets): using the `data` function:

```r
data(foo)  # accesses data with, e.g., load(foo.RData), read.table(foo.csv), or source(foo.R)
```

* easy way: by adding the following line to the package DESCRIPTION:
  *note:* this should be used with caution or it can cause difficulty, as discussed in redmine issue #1118

```
LazyData: TRUE
```

From the R help page:

Currently, a limited number of data formats can be accessed using the `data` function by placing one of the following filetypes in a package's `data` directory:

* files ending `.R` or `.r` are `source()`d in, with the R working directory changed temporarily to the directory containing the respective file. (`data` ensures that the `utils` package is attached, in case it had been run *via* `utils::data`.)
* files ending `.RData` or `.rda` are `load()`ed.
* files ending `.tab`, `.txt` or `.TXT` are read using `read.table(..., header = TRUE)`, and hence result in a data frame.
* files ending `.csv` or `.CSV` are read using `read.table(..., header = TRUE, sep = ';')`, and also result in a data frame.

If your data does not fall into those four categories, you can use the `system.file` function to get access to the data:

```r
system.file("data", "ed.trait.dictionary.csv", package = "PEcAn.utils")
[1] "/home/kooper/R/x86_64-pc-linux-gnu-library/2.15/PEcAn.utils/data/ed.trait.dictionary.csv"
```

The arguments are folder, filename(s), and then package. It will return the fully qualified path name to a file in a package, in this case pointing to the trait data. This is almost the same as the `data` function, except that we can now use any function to read the file, such as `read.csv` instead of `read.csv2`, which seems to be the default of `data`. This also allows us to store arbitrary files in the data folder, such as the BUGS file, and load them when we need them.

##### Examples of data in PEcAn packages

* outputs: `modules/uncertainties/data/output.RData`
* parameter samples: `modules/uncertainties/data/samples.RData`



### Documenting functions using `roxygen2` {#developer-roxygen}

This is the standard method for documenting R functions in PEcAn.
For detailed instructions, see one of the following resources:

* `roxygen2` [package documentation](https://roxygen2.r-lib.org/)
    - [Formatting overview](https://roxygen2.r-lib.org/articles/rd.html)
    - [Markdown formatting](https://blog.rstudio.com/2017/02/01/roxygen2-6-0-0/)
    - [Namespaces](https://roxygen2.r-lib.org/articles/namespace.html) (e.g. when to use `@export`)
* From "R packages" by Hadley Wickham:
    - [Object Documentation](http://r-pkgs.had.co.nz/man.html)
    - [Package Metadata](http://r-pkgs.had.co.nz/description.html)

Below is a complete template for a Roxygen documentation block.
Note that roxygen lines start with `#'`:

```r
#' Function title, in a few words
#'
#' Function description, in 2-3 sentences.
#'
#' (Optional) Package details.
#'
#' @param argument_1 A description of the argument
#' @param argument_2 Another argument to the function
#' @return A description of what the function returns.
#'
#' @author Your name
#' @examples
#' \dontrun{
#' # This example will NOT be run by R CMD check.
#' # Useful for long-running functions, or functions that
#' # depend on files or values that may not be accessible to R CMD check.
#' my_function("~/user/my_file")
#'}
#' # This example WILL be run by R CMD check
#' my_function(1:10, argument_2 = 5)
## ^^ A few examples of the function's usage
#' @export
# ^^ Whether or not the function will be "exported" (made available) to the user.
# If omitted, the function can only be used inside the package.
my_function <- function(argument_1, argument_2) {...}
```

Here is a complete example from the `PEcAn.utils::days_in_year()` function:

```r
#' Number of days in a year
#'
#' Calculate number of days in a year based on whether it is a leap year or not.
#'
#' @param year Numeric year (can be a vector)
#' @param leap_year Default = TRUE. If set to FALSE will always return 365
#'
#' @author Alexey Shiklomanov
#' @return integer vector, all either 365 or 366
#' @export
#' @examples
#' days_in_year(2010) # Not a leap year -- returns 365
#' days_in_year(2012) # Leap year -- returns 366
#' days_in_year(2000:2008) # Function is vectorized over years
days_in_year <- function(year, leap_year = TRUE) {...}
```

To update documentation throughout PEcAn, run `make document` in the PEcAn root directory.
_Make sure you do this before opening a pull request_ --
PEcAn's automated testing (Travis) will check if any documentation is out of date and will throw an error like the following if it is:

```
These files were changed by the build process:
{...}
```



## Testing {#developer-testing}

PEcAn uses two different kinds of testing -- [unit tests](#developer-testing-unit) and [integration tests](#developer-testing-integration).

### Unit testing {#developer-testing-unit}

Unit tests are short (<1 minute runtime) tests of the functionality of specific functions.
Ideally, every function should have at least one unit test associated with it.

A unit test *should* be written for each of the following situations:

1. Each bug should get a regression test.
   * The first step in handling a bug is to write code that reproduces the error
   * This code becomes the test
   * most important when the error could re-appear
   * essential when the error silently produces invalid results

2. Every time a (non-trivial) function is created or edited
   * Write tests that indicate how the function should perform
     * example: `expect_equal(sum(1, 1), 2)` indicates that the sum function should take the sum of its arguments
   * Write tests for cases under which the function should throw an error
     * example: `expect_error(sum("foo"))`
     * better: `expect_error(sum("foo"), "invalid 'type' (character)")`

3. Any functionality that you would like to protect over the long term. Functionality that is not tested is more likely to be lost.

PEcAn uses the `testthat` package for unit testing.
A general overview is provided in the ["Testing"](http://adv-r.had.co.nz/Testing.html) chapter of Hadley Wickham's book "R packages".
Another useful resource is the `testthat` [package documentation website](https://testthat.r-lib.org/).
See also our [`testthat` appendix](#appendix-testthat).
Below is a lightning introduction to unit testing with `testthat`.

Each package's unit tests live in `.R` scripts in the folder `<package>/tests/testthat`.
In addition, a `testthat`-enabled package has a file called `<package>/tests/testthat.R` with the following contents:

```r
library(testthat)
library(<package name>)

test_check("<package name>")
```

Tests should be placed in `<package>/tests/testthat/test-<functionality>.R`, and look like the following:

```r
context("Mathematical operators")

test_that("mathematical operators plus and minus work as expected",{
  sum1 <- sum(1, 1)
  expect_equal(sum1, 2)
  sum2 <- sum(-1, -1)
  expect_equal(sum2, -2)
  expect_equal(sum(1, NA), NA)
  expect_error(sum("cat"))
  set.seed(0)
  expect_equal(sum(matrix(1:100)), sum(data.frame(1:100)))
})

test_that("different testing functions work, giving excuse to demonstrate",{
  expect_identical(1, 1)
  # expect_identical(numeric(1), integer(1)) # would fail: identical() distinguishes double from integer
  expect_equivalent(numeric(1), integer(1))
  expect_warning(mean('1'))
  expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA"))
  expect_warning(mean('1'), "argument is not numeric or logical: returning NA")
  expect_message(message("a"), "a")
})
```

### Integration testing {#developer-testing-integration}

Integration tests consist of running the PEcAn workflow in full.
One way to do integration tests is to manually run workflows for a given version of PEcAn, either through the web interface or by manually creating a [`pecan.xml` file](#pecanXML).
Such manual tests are an important part of checking PEcAn functionality.

Alternatively, the [`base/workflow/inst/batch_run.R`][batch_run] script can be used to quickly run a series of user-specified integration tests without having to create a bunch of XML files.
This script is powered by the [`PEcAn.workflow::create_execute_test_xml()`][xml_fun] function,
which takes as input information about the model, meteorology driver, site ID, run dates, and others,
uses these to construct a PEcAn XML file,
and then uses the `system()` command to run a workflow with that XML.

If run without arguments, `batch_run.R` will try to run the model configurations specified in the [`base/workflow/inst/default_tests.csv`][default_tests] file.
This file contains a CSV table with the following columns:

- `model` -- The name of the model (`models.model_name` column in BETY)
- `revision` -- The version of the model (`models.revision` column in BETY)
- `met` -- The name of the meteorology driver source
- `site_id` -- The numeric site ID for the model run (`sites.site_id`)
- `pft` -- The name of the plant functional type to run. If `NA`, the script will use the first PFT associated with the model.
- `start_date`, `end_date` -- The start and end dates for the model run, respectively. These should be formatted according to the ISO standard (`YYYY-MM-DD`, e.g. `2010-03-16`)
- `sensitivity` -- Whether or not to run the sensitivity analysis. `TRUE` means run it, `FALSE` means do not.
- `ensemble_size` -- The number of ensemble members to run. Set this to 1 to do a single run at the trait median.
- `comment` -- A string providing some user-friendly information about the run.

The `batch_run.R` script will run a workflow for every row in the input table, sequentially (for now; eventually, it will try to run them in parallel),
and at the end of each workflow, will perform some basic checks, including whether or not the workflow finished and whether the model produced any output.
+The results of these checks are summarized in a CSV table (by default, a file called `test_result_table.csv`), with all of the columns of the input test CSV plus the following:
+
+- `outdir` -- Absolute path to the workflow directory.
+- `workflow_complete` -- Whether or not the PEcAn workflow completed. Note that this is a relatively low bar -- PEcAn workflows can complete without having run the model or finished some other steps.
+- `has_jobsh` -- Whether or not PEcAn was able to write the model's `job.sh` script. This is a good indication of whether or not the model's `write.configs` step was successful, and may be useful for separating model configuration errors from model execution errors.
+- `model_output_raw` -- Whether or not the model produced any output files at all. This is just a check to see if the workflow's `out` directory is empty or not. Note that some models may produce logfiles or similar artifacts as soon as they are executed, whether or not they ran even a single timestep, so this is not an indication of model success.
+- `model_output_processed` -- Whether or not PEcAn was able to post-process any model output. This test just checks whether there are any files of the form `YYYY.nc` (e.g. `1992.nc`) in the workflow's `out` directory.
+
+Right now, these checks are not particularly robust or comprehensive, but they should be sufficient for catching common errors.
+Development of more and better tests is ongoing.
+
+The `batch_run.R` script can take the following command-line arguments:
+
+- `--help` -- Prints a help message about the script's arguments
+- `--dbfiles=<path>` -- The path to the PEcAn `dbfiles` folder. The default value is `~/output/dbfiles`, based on the file structure of the PEcAn VM. Note that for this and all other paths, if a relative path is given, it is assumed to be relative to the current working directory, i.e. the directory from which the script was called.
+- `--table=<path>` -- Path to an alternate test table. The default is the `base/workflow/inst/default_tests.csv` file. See the preceding paragraph for a description of the format.
+- `--userid=<id>` -- The numeric user ID for registering the workflow. The default value is 99000000002, corresponding to the guest user on the PEcAn VM.
+- `--outdir=<path>` -- Path to a directory (which will be created if it doesn't exist) for storing the PEcAn workflow outputs. Default is `batch_test_output` (in the current working directory).
+- `--pecandir=<path>` -- Path to the PEcAn source code root directory. Default is the current working directory.
+- `--outfile=<path>` -- Full path (including file name) of the CSV file summarizing the results of the runs. Default is `test_result_table.csv`. The format of the output table is described above.
+
+[batch_run]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/batch_run.R
+[default_tests]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/default_tests.csv
+[xml_fun]:
+
+### Continuous Integration
+
+Every time anyone pushes a change to the PEcAn code on GitHub, it triggers an automated build and test of the full PEcAn codebase, and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code.
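+
+You can catch most of these problems before they ever reach CI by running the same checks locally, using the Makefile targets mentioned throughout this chapter:
+
+```bash
+# Run from the PEcAn root directory
+make document  # rebuild the Roxygen documentation
+make test      # run the package unit tests
+make check     # run R CMD check on the packages
+```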
+
+At this writing, PEcAn's CI builds primarily use [Travis CI](https://travis-ci.org/pecanProject/pecan), and the rest of this section assumes a Travis build. As of May 2020 we also have an experimental GitHub Actions build; if we switch completely to GitHub Actions, this guide will need to be rewritten.
+
+All our Travis builds run on the same version of Linux (currently Ubuntu 16.04, `xenial`) using four different versions of R in parallel: the two most recent previous releases, the current release, and nightly builds of the R development branch. In most cases the build should pass on all four versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on older releases as developer time and forward compatibility allow.
+
+Each build starts by launching a separate clean virtual machine for each R version and performs roughly the following actions on all of them:
+
+* Installs binary system dependencies needed by PEcAn (NetCDF and HDF5 libraries, JAGS, udunits, etc.).
+* Installs all the R packages that are declared as dependencies in any PEcAn package, as computed by `scripts/generate_dependencies.R`.
+* Clones the PEcAn repository from GitHub, and checks out the branch to be tested.
+* Retrieves any cached files available from previous Travis builds.
+  - The main thing in the cache is previously-installed dependency R packages, to avoid recompiling them every time.
+  - If the cache becomes stale or is preventing a package update needed by the build (e.g. to get a new version that contains a needed bug fix), delete the cache through the Travis web interface and it will be reconstructed on the next build.
+  - Because PEcAn has so many dependencies, builds with no cache will spend most of their time recompiling packages and will probably run out of time before the tests complete. You can fix this by using `scripts/travis/cache_buildup.sh` and `scripts/travis/prime_travis_cache.sh` to build up the cache incrementally through one or more [custom builds](https://blog.travis-ci.com/2017-08-24-trigger-custom-build), each of which installs some dependencies and then uploads a freshened cache *without* running any tests. Once all dependencies have been cached, restart the standard full build.
+* Initializes a skeleton version of the PEcAn database (BeTY) containing a few public records to be used by the test runs.
+* Installs all the PEcAn packages, recompiling documentation and installing dependencies as needed.
+* Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`).
+  - As discussed in [Unit testing](#developer-testing-unit), these tests should run quickly and test individual components in relative isolation.
+  - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time (e.g. large data product downloads) or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code!
+* Runs R package checks (the same ones you run locally with `make check` or `devtools::check(pkgname)`), skipping tests and documentation rebuild because we just did those in the previous steps.
+  - Any ERROR in the check output will stop the build immediately.
+  - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `<package>/tests/Rcheck_reference.log`.
+    If the package has no stored reference result, all WARNINGs and NOTEs are considered newly added and reported as build failures.
+  - If all messages from the current build were also present in the reference result, the check passes. If any messages are newly added, a build failure is reported.
+  - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it!
+  - The idea here is to enforce good coding practice and catch likely errors in all new code, while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once.
+  - As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear.
+  - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it as part of your PR anyway. It's frustrating to see tests complain about code you didn't touch, but the failures all need to be cleaned up eventually, and it's likely easier to fix the error than to figure out how to re-ignore it.
+* Runs a simple PEcAn workflow from beginning to end (three years of SIPNET at Niwot Ridge using stored weather data, meta-analysis on fixed effects only, sensitivity analysis on NPP), and verifies that the models complete successfully.
+* Checks whether any version-controlled files have been changed by the build and testing process, and marks a build failure if they have.
+  - If your build fails at this step, the most common cause is that you forgot to Roxygenize before committing.
+  - This step will also detect newly added files, e.g. tests improperly writing to the current working directory rather than `tempdir()` and then failing to clean up after themselves.
+
+If any of these steps reports an error, the build is marked as "broken" and stops before the next step. If they all pass, the Travis CI bot marks the build as successful and tells the GitHub bot that it's OK to allow your changes to be merged... but the final decision belongs to the human reviewing your code, and they might still ask you for other changes!
+
+After a successful build, Travis performs two post-build steps:
+
+* Compiles the PEcAn documentation book (`book_source`) and the tutorials (`documentation/tutorials`) and uploads them to the [PEcAn website](https://pecanproject.github.io/pecan-documentation).
+  - This is only done for commits to the `master` or `develop` branch, so changes to in-progress pull requests never change the live documentation until after they are merged.
+* Packs up selected build artifacts into a cache file and uploads it to the Travis servers for use on the next build.
+
+The post-build steps are allowed to fail without breaking the build. If you made documentation changes but don't see them deployed, or if your build seems to be reinstalling packages that ought to be cached, inspect the Travis logs of the previous supposedly-successful build to see if its uploads succeeded.
+
+All of the above descriptions apply to the build Travis generates when you push to the main `PecanProject/pecan` repository, either by directly pushing to a branch or by opening a pull request.
If you like, you can also [enable Travis builds](https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci) for your own PEcAn fork. This can be useful for several reasons:
+
+* It lets you test whether your changes work before you open a pull request.
+* It often gets you faster test results: The PEcAn project uses Travis CI's free tier, which only allows a few simultaneous build jobs per repository. If many people are pushing code at the same time, your build might wait in line for a long time at `PecanProject/pecan` but start immediately at `yourname/pecan`.
+* If you will be editing the documentation a lot and want to see rendered previews of your in-progress work (instead of waiting until it is merged into develop), you can fork the [pecan-documentation](https://github.com/PecanProject/pecan-documentation) repository to your own GitHub account and let Travis update it for you.
+
+
+
+## Download and Compile PEcAn
+
+Set `R_LIBS_USER`
+
+[CRAN Reference](http://cran.r-project.org/doc/manuals/r-devel/R-admin.html#Managing-libraries)
+
+```bash
+# point R to personal lib folder
+echo 'export R_LIBS_USER=${HOME}/R/library' >> ~/.profile
+source ~/.profile
+mkdir -p ${R_LIBS_USER}
+```
+
+### Download, compile and install PEcAn from GitHub
+
+```bash
+# download pecan
+cd
+git clone https://github.com/PecanProject/pecan.git
+
+# compile pecan
+cd pecan
+make
+```
+
+For more information on the capabilities of the PEcAn Makefile, check out our section on [Updating PEcAn].
+
+The following will run a small script to set up some hooks to prevent people from using the pecan demo user account to check in any code.
+
+```bash
+# prevent pecan user from checking in code
+./scripts/create-hooks.sh
+```
+
+### PEcAn Testrun
+
+Do the run. This assumes you have [installed the BETY database][Installing BETY], the [sites][Site Information] tar file, and [SIPNET].
+
+```bash
+# create folder
+cd
+mkdir testrun.pecan
+cd testrun.pecan
+
+# copy example of pecan workflow and configuration file
+cp ../pecan/tests/pecan32.sipnet.xml pecan.xml
+cp ../pecan/scripts/workflow.R workflow.R
+
+# execute workflow
+rm -rf pecan
+./workflow.R pecan.xml
+```
+
+NB: pecan.xml is configured for the virtual machine; you will need to change the paths from '/home/carya/' to wherever you installed your 'sites', usually $HOME.
+
+
+
+## Directory structure
+
+### Overview of PEcAn repository as of PEcAn 1.5.3
+
+```
+pecan/
+ +- base/             # Core functions
+    +- all            # Dummy package to load all PEcAn R packages
+    +- db             # Modules for querying the database
+    +- logger         # Report warnings without killing workflows
+    +- qaqc           # Model skill testing and integration testing
+    +- remote         # Communicate with and execute models on local and remote hosts
+    +- settings       # Functions to read and manipulate PEcAn settings files
+    +- utils          # Misc. utility functions
+    +- visualization  # Advanced PEcAn visualization module
+    +- workflow       # Functions to coordinate analysis steps
+ +- book_source/      # Main documentation and developer's guide
+ +- CHANGELOG.md      # Running log of changes in each version of PEcAn
+ +- docker/           # Experimental replacement for PEcAn virtual machine
+ +- documentation     # index_vm.html, references, other misc.
+ +- models/           # Wrappers to run models within PEcAn
+    +- ed/            # Wrapper scripts for running ED within PEcAn
+    +- sipnet/        # Wrapper scripts for running SIPNET within PEcAn
+    +- ...            # Wrapper scripts for running [...] within PEcAn
+    +- template/      # Sample wrappers to copy and modify when adding a new model
+ +- modules/          # Core modules
+    +- allometry
+    +- data.atmosphere
+    +- data.hydrology
+    +- data.land
+    +- meta.analysis
+    +- priors
+    +- rtm
+    +- uncertainty
+    +- ...
+ +- scripts/          # R and Shell scripts for use with PEcAn
+ +- shiny/            # Interactive visualization of model results
+ +- tests/            # Settings files for host-specific integration tests
+ +- web/              # Main PEcAn website files
+```
+
+### Generic R package structure
+
+See the R development wiki for more information on writing code and adding data.
+
+```
+ +- DESCRIPTION    # short description of the PEcAn library
+ +- R/             # location of R source code
+ +- man/           # documentation (automatically compiled by Roxygen)
+ +- inst/          # files to be installed with the package that aren't R functions
+    +- extdata/    # misc. data files (in misc. formats)
+ +- data/          # data used in testing and examples (saved as *.RData or *.rda files)
+ +- NAMESPACE      # declaration of package imports and exports (automatically compiled by Roxygen)
+ +- tests/         # PEcAn testing scripts
+    +- testthat/   # nearly all tests should use the testthat framework and live here
+```
+
+
+
+# (PART) Topical Pages {-}
+
+
+
+
+
+# VM configuration and maintenance {#working-with-vm}
+
+## Updating the VM {#maintain-vm}
+
+The PEcAn VM is distributed with specific versions of PEcAn compiled. However, you do not need to constantly download the VM in order to update your code and version of BETY. To update and maintain your code, you can follow the steps found in the Developer section in [Updating PecAn and Code and BETY Database](#updatebety). However, if you are using the latest version of the PEcAn VM (>= 1.7), you have the Dockerized version of PEcAn and need to follow the instructions under the [DOCKER](#docker-index) section to update your PEcAn and BETY containers.
+
+## Connecting to the VM via SSH {#ssh-vm}
+
+Once the VM is running on your machine, you can connect to it from a separate terminal via SSH as follows:
+
+```sh
+ssh -p 6422 carya@localhost
+```
+
+You will be prompted for a password. Like everywhere else in PEcAn, the username is `carya` and the password is `illinois`. The same password is used for any system maintenance you wish to do on the VM via `sudo`.
+
+As a shortcut, you can add the following to your `~/.ssh/config` file (or create one if it does not exist).
+
+```
+Host pecan-vm
+    Hostname localhost
+    Port 6422
+    User carya
+    ForwardX11Trusted yes
+```
+
+This will allow you to SSH into the VM with the simplified command, `ssh pecan-vm`.
+
+## Connecting to BETYdb on the VM via SSH {#ssh-vm-bety}
+
+Sometimes, you may want to develop code locally but connect to an instance of BETYdb on the VM.
+To do this, first open a new terminal and connect to the VM while enabling port forwarding (with the `-L` flag) and setting the port number. Using 5433 here ensures that the forwarded port will not conflict with a PostgreSQL database server running locally on the default port 5432.
+
+```
+ssh -L 5433:localhost:5432 -p 6422 carya@localhost
+```
+
+This makes port 5433 on the local machine match port 5432 on the VM.
+This means that connecting to `localhost:5433` will give you access to BETYdb on the VM.
+
+To test this on the command line, try the following command, which, if successful, will drop you into the `psql` console.
+
+```
+psql -d bety -U bety -h localhost -p 5433
+```
+
+To test this in R, open a Postgres connection using the analogous parameters:
+
+```r
+library(RPostgres)
+con <- dbConnect(
+  Postgres(),
+  user = "bety",
+  password = "bety",
+  dbname = "bety",
+  host = "localhost",
+  port = 5433
+)
+dbListTables(con)  # This should return a vector of bety tables
+```
+
+Note that the same general approach will work on any BETYdb server where port forwarding is enabled, but it requires ssh access.
+
+
+### Using Amazon Web Services for a VM (AWS) {#awsvm}
+
+Log in to [Amazon Web Services (AWS)](http://console.aws.amazon.com/) and select the EC2 Dashboard. If this is your first time using AWS, you will need to set up an account before you are able to access the EC2 Dashboard. Important: You will need a credit card number and access to a phone to be able to verify AWS account registration. AWS is free for one year.
+
+1. Choose AMI
++ On the top right next to your name, make sure the location setting is on U.S. East (N. Virginia), not U.S. West (Oregon)
++ On the left, click on EC2 (Virtual servers), then click on “AMIs”, also on the left
++ In the search window, toggle “Owned by me” to “Public images”
++ Type “pecan” into the search window
++ Click on the toggle button on the left next to PEcAn1.4.6
++ Click on the “Launch” button at the top
+2. Choose an Instance Type
++ Select what type of machine you want to run. For this demo the default, t2.micro, will be adequate. Be aware that different machine types incur very different costs, from 1.3 cents/hour to over \$5/hr (https://aws.amazon.com/ec2/pricing/)
++ Select t2.micro, then click “Next: Configure Instance Details”
+3. Configure Instance Details
++ The defaults are OK. Click “Next: Add Storage”
+4. Add Storage
++ The defaults are OK. Click “Next: Tag Instance”
+5. Tag Instance
++ You can name your instance if you want. Click “Next: Configure Security Group”
+6. Configure Security Group
++ You will need to add two new rules:
++ Click “Add Rule”, then select “HTTP” from the pull-down menu. This rule allows you to access the webserver on PEcAn.
++ Click “Add Rule”, leave the pull-down on “Custom TCP Rule”, and then change the Port Range from 0 to 8787. Set “Source” to Anywhere. This rule allows you to access RStudio Server on PEcAn.
++ Click “Review and Launch”. You will then see this pop-up:
+
+```{r, echo=FALSE,fig.align='center'}
+knitr::include_graphics("figures/pic2.jpg")
+```
+
+Select the default drive volume type and click Next.
+
+7. Review and Launch
++ Review the settings and then click “Launch”, which will pop up a select/create Key Pair window.
+8. Key Pair
++ Select “Create a new key pair” and give it a name. You won’t actually need this key unless you need to SSH into your PEcAn server, but AWS requires you to create one. Click on “Download Key Pair”, then on “Launch Instances”. Next click on “View Instances” at the bottom of the following page.
+
+```{r, echo=FALSE,fig.align='center'}
+knitr::include_graphics("figures/pic3.jpg")
+```
+
+9. Instances
++ You will see the status of your PEcAn VM, which will take a minute to boot up. Wait until the Instance State reads “running”. The most important piece of information here is the Public IP, which is the URL you will need in order to access your PEcAn instance from within your web browser (see Demo 1 below).
++ Be aware that it often takes ~1 hr for AWS instances to become fully operational, so if you get an error when you put the Public IP in your web browser, most of the time you just need to wait a bit longer.
+Congratulations! You just started a PEcAn server in the “cloud”!
+
+10. When you are done using PEcAn, you will want to return to the “Instances” menu to turn off your VM.
++ To STOP the instance (which will turn the machine off but keep your work), select your PEcAn instance and click Actions > Instance state > Stop. Be aware that a stopped instance will still accrue a small storage cost on AWS. To restart this instance at any point in the future, you do not need to repeat all the steps above; instead, you just need to select your instance and then click Actions > Instance state > Start.
++ To TERMINATE the instance (which will DELETE your PEcAn machine), select your instance and click Actions > Instance state > Terminate. Terminated instances will not incur costs. In most cases you will also want to go to the Volumes menu and delete the storage associated with your PEcAn VM. Remember, AWS is free for one year, but will automatically charge a fee in the second year if the account is not cancelled.
+
+
+### Creating a Virtual Machine {#createvm}
+
+First, create the virtual machine:
+
+```
+# ----------------------------------------------------------------------
+# CREATE VM USING FOLLOWING:
+# - VM NAME  = PEcAn
+# - CPU      = 2
+# - MEMORY   = 2GB
+# - DISK     = 100GB
+# - HOSTNAME = pecan
+# - FULLNAME = PEcAn Demo User
+# - USERNAME = xxxxxxx
+# - PASSWORD = yyyyyyy
+# - PACKAGE  = openssh
+# ----------------------------------------------------------------------
+```
+
+To enable tunnels, run the following on the host machine:
+
+```bash
+VBoxManage modifyvm "PEcAn" --natpf1 "ssh,tcp,,6422,,22"
+VBoxManage modifyvm "PEcAn" --natpf1 "www,tcp,,6480,,80"
+```
+
+Make sure the machine is up to date.
+
+UBUNTU
+```bash
+sudo apt-get update
+sudo apt-get -y dist-upgrade
+sudo reboot
+```
+
+CENTOS/REDHAT
+```bash
+sudo yum -y update
+sudo reboot
+```
+
+Install the compiler and other needed packages.
+
+UBUNTU
+```bash
+sudo apt-get -y install build-essential linux-headers-server dkms
+```
+
+CENTOS/REDHAT
+```bash
+sudo yum -y groupinstall "Development Tools"
+sudo yum -y install wget
+```
+
+Install the VirtualBox additions for better integration:
+
+```bash
+sudo mount /dev/cdrom /mnt
+sudo /mnt/VBoxLinuxAdditions.run
+sudo umount /mnt
+sudo usermod -a -G vboxsf carya
+```
+
+**Finishing up the machine**
+
+**Add a message to the login:**
+
+```bash
+sudo -s
+export PORT=$( hostname | sed 's/pecan//' )
+cat > /etc/motd << EOF
+PEcAn version 1.4.3
+
+For more information about:
+Pecan - http://pecanproject.org
+BETY  - http://www.betydb.org
+
+For a list of all models currently supported navigate [here](../users_guide/basic_users_guide/models_table.md)
+
+You can access this system using a webbrowser at
+ http://<hostname>:${PORT}80/
+or using SSH at
+ ssh -l carya -p ${PORT}22 <hostname>
+where <hostname> is the machine on which the VM runs.
+EOF
+exit
+```
+
+**Finishing up**
+
+A script to clean the VM and remove as much history as possible: [cleanvm.sh](http://isda.ncsa.uiuc.edu/~kooper/EBI/cleanvm.sh)
+
+```bash
+wget -O ~/cleanvm.sh http://isda.ncsa.uiuc.edu/~kooper/EBI/cleanvm.sh
+chmod 755 ~/cleanvm.sh
+```
+
+Make sure the machine has SSH keys ([rc.local](http://isda.ncsa.illinois.edu/~kooper/EBI/rc.local)):
+
+```bash
+sudo wget -O /etc/rc.local http://isda.ncsa.illinois.edu/~kooper/EBI/rc.local
+```
+
+Change the resolution of the console:
+
+```bash
+sudo sed -i -e 's/#GRUB_GFXMODE=640x480/GRUB_GFXMODE=1024x768/' /etc/default/grub
+sudo update-grub
+```
+
+Once all is done, stop the virtual machine:
+
+```bash
+history -c && ${HOME}/cleanvm.sh
+```
+
+
+
+# PEcAn standard formats {#pecan-standards}
+
+## Defining new input formats
+
+* New formats can be defined on the ['formats' page of BETYdb](http://betydb.org/formats)
+* After creating a new format, the contents should be defined by specifying the BETYdb variable name and the name used in the file.
+
+## Time Standard
+
+Internal PEcAn standard time follows the ISO_8601 format for dates and time (https://en.wikipedia.org/wiki/ISO_8601). For example, ordinal dates run from 1 to 365/366 (https://en.wikipedia.org/wiki/ISO_8601#Ordinal_dates). However, time used in met drivers or model outputs follows the CF convention, with Julian dates running from 0 to 364/365.
+
+To aid in the conversion between the PEcAn internal ISO_8601 standard and the CF convention used in all met drivers and PEcAn standard output, you can utilize the functions `cf2datetime`, `datetime2doy`, `cf2doy`, and, for SIPNET, `sipnet2datetime`.
+
+### Input Standards
+
+#### Meteorology Standards
+
+##### Dimensions:
+
+| CF standard-name | units |
+|:-----------------|:------|
+| time             | days since 1700-01-01 00:00:00 UTC |
+| longitude        | degrees_east |
+| latitude         | degrees_north |
+
+General Note: dates in the database should be date-time (preferably with timezone), and datetime passed around in PEcAn should be of type POSIXct.
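+
+As a rough sketch of these conversions (this assumes the `PEcAn.utils` helpers named above; check their help pages for the exact signatures):
+
+```r
+library(PEcAn.utils)
+
+# CF-style offset ("days since ...") to POSIXct -- assumed argument order:
+# the numeric offset first, then the CF units string
+cf2datetime(30, "days since 2010-01-01 00:00:00 UTC")  # 2010-01-31 UTC
+
+# POSIXct to ISO ordinal day of year (1-365/366)
+datetime2doy(as.POSIXct("2010-01-31", tz = "UTC"))  # 31
+```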
+
+
+##### The variable names should be `standard_name`
+
+```{r, echo=FALSE,warning=FALSE,message=FALSE}
+names <- c("air_temperature", "air_temperature_max", "air_temperature_min", "air_pressure",
+           "mole_fraction_of_carbon_dioxide_in_air", "moisture_content_of_soil_layer", "soil_temperature",
+           "relative_humidity", "specific_humidity", "water_vapor_saturation_deficit",
+           "surface_downwelling_longwave_flux_in_air", "surface_downwelling_shortwave_flux_in_air",
+           "surface_downwelling_photosynthetic_photon_flux_in_air", "precipitation_flux",
+           " ",  # wind_direction has no CF standard name
+           "wind_speed", "eastward_wind", "northward_wind")
+
+units <- c("K", "K", "K", "Pa", "mol/mol", "kg m-2", "K", "%", "1", "Pa", "W m-2", "W m-2",
+           "mol m-2 s-1", "kg m-2 s-1", "degrees", "m/s", "m/s", "m/s")
+
+bety <- c("airT", "", "", "air_pressure", "", "", "soilT", "relative_humidity", "specific_humidity",
+          "VPD", "same", "solar_radiation", "PAR", "cccc", "wind_direction", "Wspd",
+          "eastward_wind", "northward_wind")
+
+isimip <- c("tasAdjust", "tasmaxAdjust", "tasminAdjust", "", "", "", "", "rhurs", "NA", "",
+            "rldsAdjust", "rsdsAdjust", "", "prAdjust", "", "", "", "")
+cruncep <- c("tair", "NA", "NA", "", "", "", "", "NA", "qair", "", "lwdown", "swdown", "", "rain", "", "", "", "")
+narr <- c("air", "tmax", "tmin", "", "", "", "", "rhum", "shum", "", "dlwrf", "dswrf", "", "acpc", "", "", "", "")
+ameriflux <- c("TA(C)", "", "", "PRESS (KPa)", "CO2", "", "TS1(NOT DONE)",
+               "RH", "CALC(RH)", "VPD(NOT DONE)", "Rgl", "Rg", "PAR(NOT DONE)", "PREC", "WD", "WS",
+               "CALC(WS+WD)", "CALC(WS+WD)")
+
+in_tab <- cbind(names, units, bety, isimip, cruncep, narr, ameriflux)
+colnames(in_tab) <- c("CF standard-name", "units", "BETY", "Isimip", "CRUNCEP", "NARR", "Ameriflux")
+if (require("DT")) {
+  datatable(in_tab, extensions = c('FixedColumns', "Buttons"),
+            options = list(
+              dom = 'Bfrtip',
+              scrollX = TRUE,
+              fixedColumns = TRUE,
+              buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
+            ))
+}
+```
+
+* preferred variables indicated in bold
+* wind_direction has no CF equivalent and should not be converted; instead, the met2CF functions should convert wind_direction and wind_speed to eastward_wind and northward_wind
+* standard_name is the CF-convention standard name
+* units can be converted by udunits, so these can vary (e.g. the time denominator may change with the time frequency of inputs)
+* soil moisture for the full column, rather than a layer, is soil_moisture_content
+* A full list of PEcAn standard variable names, units and dimensions can be found here: https://github.com/PecanProject/pecan/blob/develop/base/utils/data/standard_vars.csv
+
+For example, in the [MsTMIP-CRUNCEP](https://www.betydb.org/inputs/280) data, the variable `rain` should be `precipitation_flux`.
+We want to standardize the units as well as part of the `met2CF.*` step. I believe we want to use the CF "canonical" units but retain the MsTMIP units any time CF is ambiguous about the units.
+
+The key is to process each type of met data (site, reanalysis, forecast, climate scenario, etc.) to the exact same standard. This way every operation after that (extract, gap fill, downscale, convert to a model, etc.) will always have the exact same inputs. This will make everything else much simpler to code and allow us to avoid a lot of unnecessary data checking, tests, etc. being repeated in every downstream function.
+
+### Soils and Vegetation Inputs
+
+##### Soil Data
+
+Check out the [Soil Data] section for more info on creating a standard soil data file.
+
+##### Vegetation Data
+
+Check out the [Vegetation Data] section for more info on creating a standard vegetation data file.
+
+
+
+### Output Standards
+
+* created by `model2netcdf` functions
+* based on the format used by [MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml)
+* the full set of standard output variables can be seen in the table below
+
+We originally used the [MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml) conventions. Since then, we've added the PaLEON variable conventions to our standard as well. If a variable isn't in one of those two, we stick to the CF conventions.
+
+```{r, echo=FALSE,warning=FALSE,message=FALSE}
+data("standard_vars", package = "PEcAn.utils")
+if (require("DT")) {
+  DT::datatable(standard_vars,
+                extensions = c('FixedColumns', "Buttons"),
+                options = list(
+                  dom = 'Bfrtip',
+                  scrollX = TRUE,
+                  fixedColumns = TRUE,
+                  buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
+                ))
+}
+```
+
+
+
+# The PEcAn XML {#pecanXML}
+
+The PEcAn system is configured using an XML file, often called `pecan.xml`.
+It contains the following major sections ("nodes"):
+
+- [Core configuration](#xml-core-config)
+  - [Top level structure](#xml-structure)
+  - [`info`](#xml-info) -- Run metadata
+  - [`outdir`](#xml-outdir) -- Output directory
+  - [`database`](#xml-database) -- PEcAn database settings
+  - [`pft`](#xml-pft) -- Plant functional type selection
+  - [`meta.analysis`](#xml-meta-analysis) -- Trait meta analysis
+  - [`model`](#xml-model) -- Model configuration
+  - [`run`](#xml-run) -- Run setup
+  - [`host`](#xml-host) -- Host information for remote execution
+- [Advanced features](#xml-advanced)
+  - [`ensemble`](#xml-ensemble) -- Ensemble runs
+  - [`sensitivity.analysis`](#xml-sensitivity-analysis) -- Sensitivity analysis
+  - [`parameter.data.assimilation`](#xml-parameter-data-assimilation) -- Parameter data assimilation
+  - [`multi.settings`](#xml-multi-settings) -- Multi Site Settings
+  - (experimental) [`state.data.assimilation`](#xml-state-data-assimilation) -- State data assimilation
+  - (experimental) [`browndog`](#xml-browndog) -- Brown Dog configuration
+  - (experimental) [`benchmarking`](#xml-benchmarking) -- Benchmarking
+
+A basic example looks like this:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<pecan>
+  <info>
+    <notes>Example run</notes>
+    <userid>-1</userid>
+    <username>guestuser</username>
+    <date>2018/09/18 19:12:28 +0000</date>
+  </info>
+  <outdir>/data/workflows/PEcAn_99000000006</outdir>
+  <database>
+    <bety>
+      <user>bety</user>
+      <password>bety</password>
+      <host>postgres</host>
+      <dbname>bety</dbname>
+      <driver>PostgreSQL</driver>
+      <write>true</write>
+    </bety>
+    <dbfiles>/data/dbfiles</dbfiles>
+  </database>
+  <pfts>
+    <pft>
+      <name>tundra.grasses</name>
+      <constants>
+        <num>1</num>
+      </constants>
+    </pft>
+  </pfts>
+  <meta.analysis>
+    <iter>3000</iter>
+    <random.effects>
+      <on>FALSE</on>
+      <use_ghs>TRUE</use_ghs>
+    </random.effects>
+  </meta.analysis>
+  <ensemble>
+    <size>1</size>
+    <variable>NPP</variable>
+    <samplingspace>
+      <parameters>
+        <method>uniform</method>
+      </parameters>
+      <met>
+        <method>sampling</method>
+      </met>
+    </samplingspace>
+  </ensemble>
+  <model>
+    <id>5000000002</id>
+  </model>
+  <workflow>
+    <id>99000000006</id>
+  </workflow>
+  <run>
+    <site>
+      <id>1000000098</id>
+      <met.start>2004/01/01</met.start>
+      <met.end>2004/12/31</met.end>
+    </site>
+    <inputs>
+      <met>
+        <source>CRUNCEP</source>
+        <output>SIPNET</output>
+      </met>
+    </inputs>
+    <start.date>2004/01/01</start.date>
+    <end.date>2004/12/31</end.date>
+  </run>
+  <host>
+    <name>localhost</name>
+    <rabbitmq>
+      <uri>amqp://guest:guest@rabbitmq:5672/%2F</uri>
+      <queue>SIPNET_136</queue>
+    </rabbitmq>
+  </host>
+</pecan>
+```
+
+In the following sections, we step through each of these sections in detail.
+
+## Core configuration {#xml-core-config}
+
+### Top-level structure {#xml-structure}
+
+The first line of the XML file should contain version and encoding information.
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+```
+
+The rest of the XML file should be surrounded by `<pecan>...</pecan>` tags.
+
+```xml
+<pecan>
+  ...XML body here...
+</pecan>
+```
+
+### `info`: Run metadata {#xml-info}
+
+This section contains run metadata.
+This information is not essential to a successful model run, but is useful for tracking run provenance.
+
+```xml
+<info>
+  <notes>Example run</notes>
+  <userid>-1</userid>
+  <username>guestuser</username>
+  <date>2018/09/18 19:12:28 +0000</date>
+</info>
+```
+
+The `<notes>` tag will be filled in by the web GUI if you provide notes, or you can add notes yourself within these tags. We suggest adding notes that help identify your run and a brief description of what the run is for.
Because these notes are searchable within the PEcAn database and web interface, they can be a useful way to distinguish between similar runs.
+
+The `<userid>` and `<username>` tags are filled in from the GUI if you are signed in. If you are not using the GUI, add a user name and ID that exist within the PEcAn database.
+
+The `<date>` tag is filled in automatically at the time of your run from the GUI. If you are not using the GUI, add the date you execute the run. This tag does not set the dates over which you would like to run your model simulation.
+
+### `outdir`: Output directory {#xml-outdir}
+
+The `<outdir>` tag is used to configure the output folder used by PEcAn.
+This is the directory where all model input and output files will be stored.
+By default, the web interface names this folder `PEcAn_<workflow ID>`, and its higher-level location is set by the `$output_folder$` variable in the `web/config.php` file.
+If no `outdir` is specified, PEcAn defaults to the working directory from which it is called, which may be counterintuitive.
+
+```xml
+<outdir>/data/workflows/PEcAn_99000000006</outdir>
+```
+
+### `database`: PEcAn database settings {#xml-database}
+
+#### `bety`: PEcAn database (Bety) configuration {#xml-bety}
+
+The `bety` tag defines the driver to use to connect to the database (the only driver we support, and the default, is `PostgreSQL`) and the parameters required to connect to the database. Note that connection parameters are passed *exactly* as entered to the underlying R database driver, and any invalid or extra parameters will result in an error.
+
+In other words, this configuration...
+
+```xml
+<database>
+  ...
+  <bety>
+    <user>bety</user>
+    <password>bety</password>
+    <host>postgres</host>
+    <dbname>bety</dbname>
+    <driver>PostgreSQL</driver>
+    <write>true</write>
+  </bety>
+  ...
+</database>
+```
+
+...will be translated into R code like the following:
+
+```r
+con <- DBI::dbConnect(
+  DBI::dbDriver("PostgreSQL"),
+  user = "bety",
+  password = "bety",
+  dbname = "bety",
+  host = "postgres",
+  write = TRUE
+)
+```
+
+Common parameters are described as follows:
+
+* `driver`: The driver to use to connect to the database. This should always be set to `PostgreSQL`, unless you absolutely know what you're doing.
+* `dbname`: The name of the database (formerly `name`), corresponding to the `-d` argument to `psql`. In most cases, this should be set to `bety`, and will only be different if you named your Bety instance something else (e.g. if you have multiple instances running at once). If unset, it will default to the user name of the current user, which is usually wrong!
+* `user`: The username to connect to the database (formerly `userid`), corresponding to the `-U` argument to `psql`. The default value is the username of the currently logged-in user (PostgreSQL uses `user` for this field).
+* `password`: The password to connect to the database (formerly `passwd`). If unspecified, no password is used. On standard PEcAn installations, the username and password are both `bety` (all lowercase).
+* `host`: The hostname of the `bety` database, corresponding to the `-h` argument to `psql`. On the VM, this will be `localhost` (the default). If using `docker`, this will be the name of the PostgreSQL container, which is `postgres` if using our standard `docker-compose`. If connecting to the PEcAn database on a remote server (e.g. `psql-pecan.bu.edu`), this should be the same as the hostname used for `ssh` access.
+* `write`: Logical. If `true` (the default), write results to the database. If `false`, PEcAn will run but will not store any information to `bety`.
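+
+The same parameter list can also be passed to PEcAn's own database wrapper from R. This is a minimal sketch, assuming the standard VM credentials shown above (`PEcAn.DB::db.open` is listed under "Key R functions" below):
+
+```r
+# The parameter names mirror the <bety> tags above
+params <- list(
+  driver   = "PostgreSQL",
+  user     = "bety",
+  password = "bety",
+  dbname   = "bety",
+  host     = "localhost"
+)
+con <- PEcAn.DB::db.open(params)
+# ... use the connection, e.g. with PEcAn.DB::db.query() ...
+PEcAn.DB::db.close(con)
+```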
+
+When using the web interface, this section is configured by the `web/config.php` file.
+The default `config.php` settings on any given platform (VM, Docker, etc.) or in example files (e.g. `config.php.example`) are a good place to get default values for these fields if writing `pecan.xml` by hand.
+
+Key R functions using these parameters are as follows:
+
+- `PEcAn.DB::db.open` -- Open a database connection and create a connection object, which is used by many other functions for communicating with the PEcAn database.
+
+#### `dbfiles`: Location of database files {#xml-dbfiles}
+
+The `dbfiles` tag is a path to the local location of files needed to run models using PEcAn, including model executables and inputs.
+
+```xml
+<database>
+  ...
+  <dbfiles>/data/dbfiles</dbfiles>
+  ...
+</database>
+```
+
+#### (Experimental) `fia`: FIA database connection parameters {#xml-fia}
+
+If a version of the FIA database is available, it can be configured using the `<fia>` node, whose syntax is identical to that of the `<bety>` node.
+
+```xml
+<database>
+  ...
+  <fia>
+    <dbname>fia5data</dbname>
+    <user>bety</user>
+    <password>bety</password>
+    <host>localhost</host>
+  </fia>
+  ...
+</database>
+```
+
+Currently, this is only used for extraction of specific site vegetation information (notably, for ED2 `css`, `pss`, and `site` files).
+Stability is not ensured as of 1.5.3.
+
+### `pft`: Plant functional type selection {#xml-pft}
+
+The PEcAn system requires at least one plant functional type (PFT) to be specified inside the `<pfts>` section.
+
+```xml
+<pfts>
+  <pft>
+    <name>tundra.grasses</name>
+    <constants>
+      <num>1</num>
+    </constants>
+    <posterior.files>Path to a post.distns.*.Rdata or prior.distns.Rdata</posterior.files>
+  </pft>
+</pfts>
+```
+
+* `name`: (required) The name of the PFT, which must *exactly* match the name in the PEcAn database.
+* `outdir`: (optional) Directory path in which PFT-specific output will be stored during meta-analysis and sensitivity analysis. If not specified (recommended), it will be written into `<outdir>/<pftname>`.
+* `constants`: (optional) This section contains information that will be written directly into the model-specific configuration files. For example, some models like ED2 use PFT numbers instead of names for PFTs, and those numbers can be specified here. See the documentation for the model-specific code for details.
+* `posterior.files`: (optional) This tag signals the write.config functions to use specific posterior/prior files (such as HPDA or MA analysis output) for generating samples without needing access to the BETY database.
+
+This information is currently used by the following PEcAn workflow function:
+
+- `get.traits` - ??????
+
+### `meta.analysis`: Trait Meta Analysis {#xml-meta-analysis}
+
+The `meta.analysis` section needs to exist for a meta-analysis to be executed, even though all tags inside are optional.
+Conversely, if you do not want to do a trait meta-analysis (e.g. if you want to manually set all parameters), you should omit this node.
+
+```xml
+<meta.analysis>
+  <iter>3000</iter>
+  <random.effects>
+    <on>FALSE</on>
+    <use_ghs>TRUE</use_ghs>
+  </random.effects>
+</meta.analysis>
+```
+
+Some of the tags that can go in this section are:
+
+* `iter`: [MCMC](http://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) (Markov Chain Monte Carlo) chain length, i.e. the total number of posterior samples in the meta-analysis; the default is 3000. Smaller numbers will run faster but produce larger errors.
+* `random.effects`: Settings related to whether to include random effects (site, treatment) in the meta-analysis model.
+  - `on`: Default is set to FALSE to work around convergence problems caused by an over-parameterized model (e.g. too many sites, not enough data). Can be set to TRUE to include hierarchical random effects.
+  - `use_ghs`: Default is set to TRUE to include greenhouse measurements.
+    Can be set to FALSE to exclude cases where all data are from the greenhouse.
+* `update`: Should previous results of the meta-analysis and get.trait.data be re-used? If set to TRUE, the meta-analysis and get.trait.data will always be executed. Setting this to FALSE will try to reuse existing results. Future versions will allow AUTO as well, which will try to reuse results if the PFT/traits have not changed. The default value is FALSE.
+* `threshold`: threshold for the Gelman-Rubin convergence diagnostic (MGPRF); the default is 1.2.
+
+This information is currently used by the following PEcAn workflow function:
+
+- `PEcAn.MA::run.meta.analysis` - ???
+
+### `model`: Model configuration {#xml-model}
+
+This section describes which model PEcAn should run and gives some instructions for how to run it.
+
+```xml
+<model>
+  <id>7</id>
+  <type>ED2</type>
+  <binary>/usr/local/bin/ed2.r82</binary>
+  <job.sh>module load hdf5</job.sh>
+</model>
+```
+
+Some important tags are as follows:
+
+- `id` -- The unique numeric ID of the model in the PEcAn database `models` table. If this is present, then `type` and `binary` are optional, since they can be determined from the PEcAn database.
+- `type` -- The model "type", matching the PEcAn database `modeltypes` table `name` column. This also refers to which PEcAn model-specific package will be used. In PEcAn, a "model" refers to a specific version (e.g. release, git commit) of a specific model, and "model type" is used to link different releases of the same model. Model "types" also have specific PFT definitions and other requirements associated with them (e.g. the ED2 model "type" requires a global land cover database).
+- `binary` -- The file path to the model executable. If omitted, PEcAn will use whatever path is registered in the PEcAn database for the current machine.
+- `job.sh` -- Additional options added to the `job.sh` script, which is used to execute the model. This is useful for setting specific environment variables, loading modules, etc.
+
+This information is currently used by the following PEcAn workflow functions:
+
+- `PEcAn.<MODEL>::write.config.<MODEL>` -- Write model-specific configuration files {#pecan-write-configs}
+- `PEcAn.remote::start.model.runs` -- Begin model run execution
+
+#### Model-specific configuration {#xml-model-specific}
+
+See the following:
+
+* [ED2][ED2 Configuration]
+* [SIPNET][SIPNET Configuration]
+* [BIOCRO][BioCro Configuration]
+
+#### ED2 specific tags {#xml-ed}
+
+The following variables are ED2-specific and are used in the [ED2 Configuration](ED2-Configuration).
+
+Starting at 1.3.7, the tags for inputs have moved to `<run><inputs>`. This includes veg, soil, psscss, and inputs.
+
+```xml
+<edin>/home/carya/runs/PEcAn_4/ED2IN.template</edin>
+<config.header>
+  <radiation>
+    <lai_min>0.01</lai_min>
+  </radiation>
+  <ed_misc>
+    <output_month>12</output_month>
+  </ed_misc>
+</config.header>
+<phenol.scheme>0</phenol.scheme>
+```
+
+* **edin** : [required] template used to write the ED2IN file
+* **veg** : **OBSOLETE** [required] location of the VEG database, now part of `<run><inputs>`
+* **soil** : **OBSOLETE** [required] location of the soil database, now part of `<run><inputs>`
+* **psscss** : **OBSOLETE** [required] location of site information, now part of `<run><inputs>`. Should be specified as `<pss>`, `<css>` and `<site>`.
+* **inputs** : **OBSOLETE** [required] location of additional input files (e.g. data assimilation data), now part of `<run><inputs>`. Should be specified as `<lu>` and `<thsums>`.
+
+### `run`: Run Setup {#xml-run}
+
+This section provides detailed configuration for the model run, including the site and time period for the simulation and what input files will be used.
+
+```xml
+<run>
+  <site>
+    <id>1000000098</id>
+    <met.start>2004/01/01</met.start>
+    <met.end>2004/12/31</met.end>
+    <site.pft>
+      <pft.name>temperate.needleleaf.evergreen</pft.name>
+      <pft.name>temperate.needleleaf.evergreen.test</pft.name>
+    </site.pft>
+  </site>
+  <inputs>
+    <met>
+      <source>CRUNCEP</source>
+      <output>SIPNET</output>
+    </met>
+  </inputs>
+  <start.date>2004/01/01</start.date>
+  <end.date>2004/12/31</end.date>
+</run>
+```
+
+#### `site`: Where to run the model {#xml-run-site}
+
+This contains the following tags:
+
+- `id` -- This is the numeric ID of the site in the PEcAn database (table `sites`, column `id`). PEcAn can automatically fill in other relevant information for the site (e.g. `name`, `lat`, `lon`) using the site ID, so those fields are optional if the ID is provided.
+- `name` -- The name of the site, as a string.
+- `lat`, `lon` -- The latitude and longitude coordinates of the site, as decimals.
+- `met.start`, `met.end` -- ???
+- `site.pft` (optional) -- If this tag is found under the site tag, then PEcAn automatically makes sure that only PFTs defined under this tag are used for generating parameter samples. The following shows an example of how this tag can be added to the PEcAn XML:
+
+```xml
+<site.pft>
+  <pft.name>temperate.needleleaf.evergreen</pft.name>
+  <pft.name>temperate.needleleaf.evergreen</pft.name>
+</site.pft>
+```
+
+For multi-site runs, if the `pft.site` tag (see [Model inputs](#xml-run-inputs)) is defined under `inputs`, then the above process is done automatically during the prepare-settings step of the main PEcAn workflow, and there is no need to add the tags manually. Using the `pft.site` tag, however, requires a lookup table as an input (see [Model inputs](#xml-run-inputs)).
+
+#### `inputs`: Model inputs {#xml-run-inputs}
+
+Models require several different types of inputs to run.
+Exact requirements differ from model to model, but common inputs include meteorological/climate drivers, site initial conditions (e.g. vegetation composition, carbon pools), and land use drivers.
+
+In general, all inputs should have the following tags:
+
+* `id`: Numeric ID of the input in the PEcAn database (table `inputs`, column `id`). If not specified, PEcAn will try to figure this out based on the `source` tag (described below).
+* `path`: The file path of the input. Usually, PEcAn will set this automatically based on the `id` (which, in turn, is determined from the `source`). However, this can be set manually for data that PEcAn does not know about (e.g. data that you have processed yourself and have not registered with the PEcAn database).
+* `source`: The input data type. This tag name needs to match the names in the corresponding conversion functions. If you are using PEcAn's automatic input processing, this is the only field you need to set. However, this field is ignored if `id` and/or `path` are provided.
+* `output`: ???
+
+The following are the most common types of inputs, along with their corresponding tags:
+
+##### `met`: Meteorological inputs {#xml-run-inputs-met}
+
+(Under construction. See the `PEcAn.data.atmosphere` package, located in `modules/data.atmosphere`, for more details.)
+
+##### (Experimental) `soil`: Soil inputs {#xml-run-inputs-soil}
+
+(Under construction. See the `PEcAn.data.land` package, located in `modules/data.land`, for more details.)
+
+##### (Experimental) `veg`: Vegetation initial conditions {#xml-run-inputs-veg}
+
+(Under construction. Follow developments in the `PEcAn.data.land` package, located in `modules/data.land` in the source code.)
+
+##### `pft.site`: Multi-site site / PFT mapping
+
+When performing multi-site runs, it is not uncommon to find that different sites need to be run with different PFTs, rather than running all PFTs at all sites.
If you want to use a specific PFT for a given site or sites, you can use the following tag to tell PEcAn which PFT needs to be used for which site.
+
+```xml
+<inputs>
+  <pft.site>site_pft.csv</pft.site>
+</inputs>
+```
+
+For example, using the above tag, the user needs to have a CSV file named `site_pft.csv` stored in the PEcAn folder. At the moment we have functions supporting just `.csv` and `.txt` files, which are comma-separated and have the following format:
+
+```
+site_id, pft_name
+1000025731,temperate.broadleaf.deciduous
+764,temperate.broadleaf.deciduous
+```
+
+PEcAn would then use this lookup table to inform the `write.ensemble.configs` function about which PFTs need to be used for which sites.
+
+#### `start.date` and `end.date`
+
+The start and end dates for the run, in a format parseable by R (e.g. `YYYY/MM/DD` or `YYYY-MM-DD`).
+These dates are inclusive; in other words, they refer to the first and last days of the run, respectively.
+
+NOTE: Any time-series inputs (e.g. meteorology drivers) must contain all of these dates.
+PEcAn tries to detect and throw informative errors when dates are outside the bounds of the inputs, but it may not know about some edge cases.
+
+#### Other tags
+
+The following tags are optional run settings that apply to any model:
+
+* `jobtemplate`: the template used when creating a `job.sh` file, which is used to launch the actual model. Each model has its own default template in the `inst` folder of the corresponding R package (for instance, here is the one for [ED2](https://github.com/PecanProject/pecan/blob/master/models/ed/inst/template.job)). The following variables can be used: `@SITE_LAT@`, `@SITE_LON@`, `@SITE_MET@`, `@START_DATE@`, `@END_DATE@`, `@OUTDIR@`, `@RUNDIR@`, which all come from variables in the `pecan.xml` file. The following two commands can be used to copy and clean the results from a scratch folder (specified as `scratch` in the run section below, for example local disk vs. network disk): `@SCRATCH_COPY@`, `@SCRATCH_CLEAR@`.
+
+Some models also have model-specific tags, which are described in the [PEcAn Models](#pecan-models) section.
+
+### `host`: Host information for remote execution {#xml-host}
+
+This section provides settings for remote model execution, i.e. any execution that happens on a machine (including "virtual" machines, like Docker containers) different from the one on which the main PEcAn workflow is running.
+A common use case for this section is to submit model runs as jobs to a high-performance computing cluster.
+If no `host` tag is provided, PEcAn assumes models are run on `localhost`, a.k.a. the same machine as PEcAn itself.
+
+For detailed instructions on remote execution, see the [Remote Execution](#pecan-remote) page.
+For detailed information on configuring this for RabbitMQ, see the [RabbitMQ](#rabbitmq-xml) page.
+The following provides a quick overview of XML tags related to remote execution.
+
+**NOTE**: Any paths specified in the `pecan.xml` refer to paths on the `host` specified in this section, *not* the machine on which PEcAn is running (unless models are running on `localhost` or this section is omitted).
+
+```xml
+<host>
+  <name>pecan2.bu.edu</name>
+  <rundir>/fs/data3/guestuser/pecan/testworkflow/run</rundir>
+  <outdir>/fs/data3/guestuser/pecan/testworkflow/out</outdir>
+  <scratchdir>/tmp/carya</scratchdir>
+  <clearscratch>TRUE</clearscratch>
+  <qsub>qsub -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash</qsub>
+  <qsub.jobid>Your job ([0-9]+) .*</qsub.jobid>
+  <qstat>qstat -j @JOBID@ &> /dev/null || echo DONE</qstat>
+  <job.sh>module load udunits R/R-3.0.0_gnu-4.4.6</job.sh>
+  <modellauncher>
+    <binary>/usr/local/bin/modellauncher</binary>
+    <qsub.extra>-pe omp 20</qsub.extra>
+  </modellauncher>
+</host>
+```
+
+The `host` section has the following tags:
+
+* `name`: [optional] name of the host server where the model is located and executed; if not specified, localhost is assumed.
+* `rundir`: [optional/required] location where all the configuration files are written. For localhost this is optional (`<outdir>/run` is the default); for any other host this is required.
+* `outdir`: [optional/required] location where all the outputs of the model are written. For localhost this is optional (`<outdir>/out` is the default); for any other host this is required.
+* `scratchdir`: [optional] location where output is written. If specified, the output from the model is written to this folder and copied to the outdir when the model is finished. This could significantly speed up the model execution (by using local or RAM disk).
+* `clearscratch`: [optional] if set to TRUE, the scratch folder is cleaned up after copying the results to the outdir; otherwise the folder will be left. The default is to clean up after copying.
+* `qsub`: [optional] the command to submit a job to the queuing system. There are 3 parameters you can use when specifying the qsub command, and you can add additional values for your specific setup (for example, -l walltime to specify the walltime, etc.). You can specify @NAME@, the pretty name; @STDOUT@, where to write stdout; and @STDERR@, where to write stderr. You can specify an empty element (`<qsub/>`), in which case it will use the default value `qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -s /bin/bash`.
+* `qsub.jobid`: [optional] the regular expression used to find the `jobid` returned from `qsub`. If not specified (and `qsub` is), it will use the default value `Your job ([0-9]+) .*`
+* `qstat`: [optional] the command to execute to check if a job is finished; this should return DONE if the job is finished. There is one parameter this command should take, `@JOBID@`, which is the ID of the job as returned by `qsub.jobid`. If not specified (and qsub is), it will use the default value `qstat -j @JOBID@ || echo DONE`
+* `job.sh`: [optional] additional options to add to the top of the job.sh file.
+* `modellauncher`: [optional] this is an experimental section that allows you to submit all the runs as a single job to an HPC system.
+
+The `modellauncher` section, if specified, will group all runs together and submit only a single job to the HPC cluster. This single job will leverage an MPI program that executes each individual run. Some HPC systems place a limit on the number of jobs that can be executed in parallel; this approach submits only a single job (using multiple nodes). In case there is no limit on the number of jobs, a single PEcAn run could potentially submit a lot of jobs, resulting in the full cluster running jobs for a single PEcAn run and preventing others from executing on the cluster.
+
+The `modellauncher` has 2 arguments:
+* `binary` : [required] The full path to the modellauncher binary. Source code for this file can be found in [`pecan/contrib/modellauncher`](https://github.com/PecanProject/pecan/tree/develop/contrib/modellauncher).
+* `qsub.extra` : [optional] Additional flags to pass to qsub besides those specified in the `qsub` tag in host.
+  This option can be used to specify that the MPI environment needs to be used and the number of nodes that should be used.
+
+## Advanced features {#xml-advanced}
+
+### `ensemble`: Ensemble Runs {#xml-ensemble}
+
+As with `meta.analysis`, if this section is missing, then PEcAn will not do an ensemble analysis.
+
+```xml
+<ensemble>
+  <size>1</size>
+  <variable>NPP</variable>
+  <samplingspace>
+    <parameters>
+      <method>uniform</method>
+    </parameters>
+    <met>
+      <method>sampling</method>
+    </met>
+  </samplingspace>
+</ensemble>
+```
+
+An alternative configuration is as follows:
+
+```xml
+<ensemble>
+  <size>5</size>
+  <variable>GPP</variable>
+  <start.year>1995</start.year>
+  <end.year>1999</end.year>
+  <samplingspace>
+    <parameters>
+      <method>lhc</method>
+    </parameters>
+    <met>
+      <method>sampling</method>
+    </met>
+  </samplingspace>
+</ensemble>
+```
+
+Tags in this block can be broken down into two categories: those used for setup (which determine how the ensemble analysis runs) and those used for post-hoc analysis and visualization (i.e. which do not affect how the ensemble is generated).
+
+Tags related to ensemble setup are:
+
+* `size` : (required) the number of runs in the ensemble.
+* `samplingspace` : (optional) Contains tags for defining how the ensembles will be generated.
+
+Each piece in the sampling space can potentially have a method tag and a parent tag. Method refers to the sampling method, and parent refers to cases where we need to link the samples of two components. When no tag is defined for one component, one sample will be generated and used for all the ensemble members. This allows for partitioning/studying different sources of uncertainty. For example, if no met tag is defined, then one met path will be used for all the ensemble members, and as a result the output uncertainty will come from the variability in the parameters. At the moment, no sampling method is implemented for soil and vegetation.
+Available sampling methods for `parameters` can be found in the documentation of the `PEcAn.utils::get.ensemble.samples` function.
+For cases where we need simulations with a predefined set of parameters, met, and initial conditions, we can use the restart argument. Restart needs to be a list with name tags of `runid`, `inputs`, `new.params` (parameters), `new.state` (initial condition), `ensemble.id` (ensemble ids), `start.time`, and `stop.time`.
+
+The restart functionality is implemented through model-specific functions called `write_restart.<modelname>`. You need to make sure that this function already exists for your desired model.
+
+Note: if the ensemble size is set to 1, PEcAn will select the **posterior median** parameter values rather than taking a single random draw from the posterior.
+
+Tags related to post-hoc analysis and visualization are:
+
+* `variable`: (optional) name of one (or more) variables the analysis should be run for. If not specified, the `sensitivity.analysis` variable is used; otherwise the default is GPP (Gross Primary Productivity).
+
+(NOTE: This static visualization functionality will soon be deprecated as PEcAn moves towards interactive visualization tools based on Shiny and htmlwidgets.)
+
+This information is currently used by the following PEcAn workflow functions:
+
+- `PEcAn.<MODEL>::write.config.<MODEL>` - See [above](#pecan-write-configs).
+- `PEcAn.uncertainty::write.ensemble.configs` - Write configuration files for ensemble analysis
+- `PEcAn.uncertainty::run.ensemble.analysis` - Run ensemble analysis
+
+### `sensitivity.analysis`: Sensitivity analysis {#xml-sensitivity-analysis}
+
+A sensitivity analysis is done only if this section is defined. This section will have `<quantiles>` or `<sigma>` nodes. If neither is given, the default is to use the median +/- [1 2 3] x sigma (i.e. the 0.00135, 0.0228, 0.159, 0.5, 0.841, 0.977, 0.999 quantiles). If the 0.5 (median) quantile is omitted, it will be added in the code.
+
+```xml
+<sensitivity.analysis>
+  <quantiles>
+    <sigma>-3</sigma>
+    <sigma>-2</sigma>
+    <sigma>-1</sigma>
+    <sigma>1</sigma>
+    <sigma>2</sigma>
+    <sigma>3</sigma>
+  </quantiles>
+  <variable>GPP</variable>
+  <perpft>TRUE</perpft>
+  <start.year>2004</start.year>
+  <end.year>2006</end.year>
+</sensitivity.analysis>
+```
+
+- `quantiles/sigma` : [optional] The number of standard deviations relative to the standard normal (i.e. "Z-score") for which to perform the ensemble analysis. For instance, `1` corresponds to the quantile associated with 1 standard deviation greater than the mean (i.e. 0.841). Use a separate `<sigma>` tag, all under the `<quantiles>` tag, to specify multiple quantiles. Note that we _do not automatically add the quantile associated with `-sigma`_ -- i.e. if you want +/- 1 standard deviation, then you must include both `1` _and_ `-1`.
+- `start.date` : [required?] start date of the sensitivity analysis (in YYYY/MM/DD format)
+- `end.date` : [required?] end date of the sensitivity analysis (in YYYY/MM/DD format)
+  - **_NOTE:_** `start.date` and `end.date` are distinct from the values set in the run tag because this analysis can be done over a subset of the run.
+- `variable` : [optional] name of one (or more) variables the analysis should be run for. If not specified, the sensitivity.analysis variable is used; otherwise the default is GPP.
+- `perpft` : [optional] if `TRUE`, a sensitivity analysis on PFT-specific outputs will be run. This is only possible if your model provides PFT-specific outputs for the `variable` requested. This tag only affects the output processing, not the number of samples proposed for the analysis nor the model execution.
+
+This information is currently used by the following PEcAn workflow functions:
+
+- `PEcAn.<MODEL>::write.configs.<MODEL>` -- See [above](#pecan-write-configs)
+- `PEcAn.uncertainty::run.sensitivity.analysis` -- Executes the uncertainty analysis
+
+### Parameter Data Assimilation {#xml-parameter-data-assimilation}
+
+The following tags can be used for parameter data assimilation. More detailed information can be found here: [Parameter Data Assimilation Documentation](#pda)
+
+### Multi-Settings {#xml-multi-settings}
+
+Multi-settings allows you to do multiple runs across different sites. This customization can also leverage site-group distinctions to expedite the setup. It takes your settings and applies the same settings across sites, changing only the site-level tags.
+
+To start, add the multisettings block to your XML:
+
+```xml
+<multisettings>
+  <multisettings>run</multisettings>
+</multisettings>
+```
+
+Additional tags for this section exist and can be seen in full here:
+
+```xml
+<multisettings>
+  <multisettings>assim.batch</multisettings>
+  <multisettings>ensemble</multisettings>
+  <multisettings>sensitivity.analysis</multisettings>
+  <multisettings>run</multisettings>
+</multisettings>
+```
+
+These tags correspond to the different PEcAn analyses that need to know that there will be multiple settings read in.
+
+Next you'll want to add the following tags to denote the group of sites you want to use. This leverages site groups, which are defined in BETY.
+
+```xml
+<sitegroup>
+  <id>1000000022</id>
+</sitegroup>
+```
+
+If you add this tag, you must remove the `<site>` tags from the `<run>` portion of your XML.
+The ID of your site group can be found by looking up your site group within BETY.
+
+You do not have to use the sitegroup tag. You can manually add multiple sites using the structure in the example below.
+
+Lastly, change the top-level tag to `<pecan.multi>`, meaning the top and bottom of your XML should look like this:
+
+```xml
+<pecan.multi>
+...
+</pecan.multi>
+```
+
+Once you have defined these tags, you can run PEcAn, but there may be further specifications needed if you know that different data sources have different dates available.
+
+Run workflow.R up until
+
+```r
+# Write pecan.CHECKED.xml
+PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml")
+```
+
+Once this section is run, you'll need to open `pecan.CHECKED.xml`.
+You will notice that it has expanded from your original `pecan.xml`.
+
+```xml
+<run>
+ <settings.1>
+  <site>
+   <id>796</id>
+   <met.start>2005/01/01</met.start>
+   <met.end>2011/12/31</met.end>
+   <name>Bartlett Experimental Forest (US-Bar)</name>
+   <lat>44.06464</lat>
+   <lon>-71.288077</lon>
+  </site>
+  <start.date>2005/01/01</start.date>
+  <end.date>2011/12/31</end.date>
+  <inputs>
+   <met>
+    <path>/fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-796/AMF_US-Bar_BASE_HH_4-1.2005-01-01.2011-12-31.clim</path>
+   </met>
+  </inputs>
+ </settings.1>
+ <settings.2>
+  <site>
+   <id>767</id>
+   <met.start>2001/01/01</met.start>
+   <met.end>2014/12/31</met.end>
+   <name>Morgan Monroe State Forest (US-MMS)</name>
+   <lat>39.3231</lat>
+   <lon>-86.4131</lon>
+  </site>
+  <start.date>2001/01/01</start.date>
+  <end.date>2014/12/31</end.date>
+  <inputs>
+   <met>
+    <path>/fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim</path>
+   </met>
+  </inputs>
+ </settings.2>
+....
+```
+* The `....` replaces the rest of the site settings for however many sites are within the site group.
+
+Looking at the example above, take a close look at the `<met.start>` and `<met.end>` tags. You will notice that the dates differ between the two sites. In this example they were edited by hand to cover the dates that are available for that site and source; you must know beforehand what dates your source provides. Only the CRUNCEP source has a check that will tell you if your dates are outside the available range. PEcAn will automatically populate these dates across sites according to the original setting of start and end dates.
+
+In addition, you will notice that the `<inputs>` section contains the model-specific meteorological data file. You can add that in by hand, or you can leave the normal tags that the met process workflow will use to process the data into your model-specific format:
+```
+<met>
+  <source>AmerifluxLBL</source>
+  <output>SIPNET</output>
+  <username>pecan</username>
+</met>
+```
+
+
+### (experimental) State Data Assimilation {#xml-state-data-assimilation}
+
+The following tags can be used for state data assimilation. More detailed information can be found here: [State Data Assimilation Documentation](#sda)
+
+```xml
+<state.data.assimilation>
+  <process.variance>FALSE</process.variance>
+  <sample.parameters>FALSE</sample.parameters>
+  <state.variables>
+    <variable>AGB.pft</variable>
+    <variable>TotSoilCarb</variable>
+  </state.variables>
+  <spin.up>
+    <start.date>2004/01/01</start.date>
+    <end.date>2006/12/31</end.date>
+  </spin.up>
+  <forecast.time.step>1</forecast.time.step>
+  <start.date>2004/01/01</start.date>
+  <end.date>2006/12/31</end.date>
+</state.data.assimilation>
+```
+
+* **process.variance** : [optional] TRUE/FALSE flag for whether process variance should be estimated (TRUE) or not (FALSE). If TRUE, a generalized ensemble filter will be used. If FALSE, an ensemble Kalman filter will be used. Default is FALSE.
+* **sample.parameters** : [optional] TRUE/FALSE flag for whether parameters should be sampled for each ensemble member or not. This allows for more spread in the initial conditions of the forecast.
+* **_NOTE:_** If TRUE, you must also assign a vector of trait names to pick.trait.params within the sda.enkf function.
+* **state.variable** : [required] State variable that is to be assimilated (in PEcAn standard format). Default is "AGB" - Above Ground Biomass.
+* **spin.up** : [required] start.date and end.date for model spin up.
+* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because spin up can be done over a subset of the run.
+* **forecast.time.step** : [optional] time step of the forecast.
+* **start.date** : [required?] start date of the state data assimilation (in YYYY/MM/DD format)
+* **end.date** : [required?] end date of the state data assimilation (in YYYY/MM/DD format)
+* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because this analysis can be done over a subset of the run.
+
+### (experimental) Brown Dog {#xml-browndog}
+
+This section describes how to connect to [Brown Dog](http://browndog.ncsa.illinois.edu). This facilitates processing and conversions of data.
+
+```xml
+<browndog>
+  <url>...</url>
+  <username>...</username>
+  <password>...</password>
+</browndog>
+```
+
+* `url`: (required) endpoint for Brown Dog to be used.
+* `username`: (optional) username to be used with the endpoint for Brown Dog.
+* `password`: (optional) password to be used with the endpoint for Brown Dog. + +This information is currently used by the following R functions: + +- `PEcAn.data.atmosphere::met.process` -- Generic function for processing meteorological input data. +- `PEcAn.benchmark::load_data` -- Generic, versatile function for loading data in various formats. + +### (experimental) Benchmarking {#xml-benchmarking} + +Coming soon... + + + + +# PEcAn workflow (web/workflow.R) {#workflow} + +- How the workflow works +- How each module is called +- How to do outside of web interface +- Link to "folder structure" section below for detailed descriptions + + +

+
+## Read Settings {#workflow-readsettings}
+
+(TODO: Under construction...)
+
+## Input Conversions {#workflow-input}
+
+## Input Data {#workflow-input-data}
+
+Models require input data as drivers, parameters, and boundary conditions. In order to make a variety of data sources with unique formats compatible with models, conversion scripts are written to convert them into a PEcAn standard format. That format is a netCDF file with variable names and units that conform to our standard variable table.
+
+Within the PEcAn repository, code pertaining to input conversion is in the MODULES directory under the data.atmosphere and data.land directories.
+
+## Initial Conditions {#workflow-input-initial}
+
+(TODO: Under construction)
+
+## Meteorological Data {#workflow-met}
+
+To convert meteorological data into the PEcAn standard and then into model formats, we follow four main steps:
+
+ 1. Downloading raw data
+    - [Currently supported products]()
+    - Example Code
+ 2. Converting raw data into a CF standard
+    - Example Code
+ 3. Downscaling and gapfilling
+    - Example Code
+ 4. Converting to model-specific format
+    - Example Code
+
+Common questions regarding met data:
+
+How do I add my meteorological data product to PEcAn?
+How do I use PEcAn to convert met data outside the workflow?
+
+
+The main script that handles met processing is [`met.process`](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/met.process.R). It acts as a wrapper function that calls individual modules to facilitate the processing of meteorological data from its original form to the PEcAn standard, and then from that standard to model-specific formats. It also handles recording these processes in the BETY database.
+
+ 1. Downloading raw data
+    - [Available Meteorological Drivers]
+    - Example code to download [Ameriflux data](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/download.AmerifluxLBL.R)
+ 2. Converting raw data into a CF standard (if needed)
+    - Example code to [convert from raw csv to CF standard](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/met2CF.csv.R)
+ 3. Downscaling and gapfilling (if needed)
+    - Example code to [gapfill](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/metgapfill.R)
+ 4. Converting to model-specific format
+    - Example code to [convert the standard into SIPNET format](https://github.com/PecanProject/pecan/blob/develop/models/sipnet/R/met2model.SIPNET.R)
+
+
+### Downloading Raw data (Description of Process) {#workflow-met-download}
+
+Given the information passed from the pecan.xml, met.process will call the `download.raw.met.module` to facilitate the execution of the necessary functions to download raw data.
+
+```xml
+<met>
+  <source>AmerifluxLBL</source>
+  <output>SIPNET</output>
+  <username>pecan</username>
+</met>
+```
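+
+The same download step can also be run by hand, which is useful for debugging. A hedged sketch (the site name, output folder, and dates are placeholders; see the function's documentation for the full argument list):
+
+```r
+library(PEcAn.data.atmosphere)
+
+# download raw AmerifluxLBL half-hourly data for one site (placeholder values)
+download.AmerifluxLBL(
+  sitename   = "US-WCr",
+  outfolder  = "/tmp/met/AmerifluxLBL_US-WCr",
+  start_date = "1999-01-01",
+  end_date   = "2006-12-31"
+)
+```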
+
+### Converting raw data to PEcAn standard {#workflow-met-standard}
+
+### Downscaling and gapfilling (optional) {#workflow-met-downscale}
+
+### Converting from PEcAn standard to model-specific format {#workflow-met-model}
+
+## Traits {#workflow-traits}
+
+(TODO: Under construction)
+
+## Meta Analysis {#workflow-metaanalysis}
+
+(TODO: Under construction)
+
+## Model Configuration {#workflow-modelconfig}
+
+(TODO: Under construction)
+
+## Run Execution {#workflow-modelrun}
+
+(TODO: Under construction)
+
+## Post Run Analysis {#workflow-postrun}
+
+(TODO: Under construction)
+
+## Advanced Analysis {#workflow-advanced}
+
+(TODO: Under construction)
+
+
+# PEcAn Models {#pecan-models}
+
+This section will contain information about all models and output variables that are supported by PEcAn.
+
+| Model Name | Available in the VM | Prescribed Inputs | Input Functions/Values | Restart Function |
+| -- | -- | -- | -- | -- |
+| [BioCro](#models-biocro) | Yes | Yes | Yes | No |
+| [CLM](#models-clm) | No | No | No | No |
+| [DALEC](#models-dalec) | Yes | Yes | Yes | No |
+| [ED2](#models-ed) | Yes | Yes | Yes | Yes |
+| FATES | No | Yes | | No |
+| [GDAY](#models-gday) | No | No | No | No |
+| [LINKAGES](#models-linkages) | Yes | Yes | Yes | Yes |
+| [LPJ-GUESS](#models-lpjguess) | No | Yes | No | No |
+| [MAESPA](#models-maespa) | Yes | Yes | No | No |
+| [PRELES](#models-preles) | Yes | Yes | Partially | No |
+| [SiPNET](#models-sipnet) | Yes | Yes | Yes | Yes |
+| [STICS](#models-stics) | Yes | Yes | No | No |
+
+*Available in the VM* - Denotes whether a model is publicly available with PEcAn.
+
+*Prescribed Inputs* - Denotes whether or not PEcAn can prescribe inputs.
+
+*Input Functions/Values* - Denotes whether or not PEcAn has functions to fully produce a model's input values.
+
+*Restart Function* - Denotes the status of the model's data assimilation capabilities.
+
+
+**Output Variables**
+
+PEcAn converts all model outputs to a single set of [Output Standards]. This standard evolved out of the MsTMIP project, which is itself based on NACP, LBA, and other model-intercomparison projects. This standard was expanded for the PalEON MIP and the needs of the PEcAn modeling community to support variables not in these standards.
+
+_Model developers_: do not add variables to your PEcAn output without first adding them to the PEcAn standard table! Also, do not create new variables equivalent to existing variables but just with different names or units.
+
+
+## BioCro {#models-biocro}
+
+| Model Information | |
+| -- | -- |
+| Home Page | https://github.com/ebimodeling/biocro/blob/master/README.md |
+| Source Code | https://github.com/ebimodeling/biocro |
+| License | [University of Illinois/NCSA Open Source License](https://github.com/ebimodeling/biocro/blob/master/LICENSE) |
+| Authors | Fernando E. Miguez, Deepak Jaiswal, Justin McGrath, David LeBauer, Scott Rohde, Dan Wang |
+| PEcAn Integration | David LeBauer, Dan Wang |
+
+### Introduction
+
+BioCro is a model that estimates photosynthesis at the leaf, canopy, and ecosystem levels and determines plant biomass allocation and crop yields, using underlying physiological and ecological processes to do so.
+
+### PEcAn configuration file additions
+
+The following sections of the PEcAn XML are relevant to the BioCro model:
+
+- `model`
+  - `revision` -- Model version number
+- `run`
+  - `site/id` -- ID associated with desired site from BETYdb site entry
+  - `inputs`
+    - `met/output` -- Set as BIOCRO
+    - `met/path` -- Path to file containing meteorological data
+
+### Model specific input files
+
+List of inputs required by model, such as met, etc.
+
+### Model configuration files
+
+Genus-specific parameter files are implicitly required; they are stored in the PEcAn.BIOCRO package and looked up behind the scenes.
+
+`write.configs.BIOCRO` looks for defaults in this order: first, any file at a path specified by `settings$pft$constants$file`; next, by matching the genus name in datasets exported by the BioCro package; and last, by matching the genus name in PEcAn.BIOCRO's extdata/defaults directory.
+
+When adding a new genus, it is necessary to provide a new default parameter file in PEcAn.BIOCRO [`inst/extdata/defaults`](https://github.com/PecanProject/pecan/tree/develop/models/biocro/inst/extdata/defaults) and also (for v<1.0) update the `call_biocro()` function.
+
+BioCro uses a config.xml file similar to ED2. At this time, no other template files are required.
+
+### Installation notes
+
+BioCro can be run standalone using the model's R package. Instructions for installing and using the package are in the GitHub repo's [README file](https://github.com/ebimodeling/biocro/blob/master/README.md).
+
+
+## CLM {#models-clm}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## DALEC {#models-dalec}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## ED2 {#models-ed}
+
+| Model Information | |
+| -- | -- |
+| Home Page | http://moorcroftlab.oeb.harvard.edu/ |
+| Source Code | https://github.com/EDmodel/ED2 |
+| License | |
+| Authors | Paul Moorcroft, ... |
+| PEcAn Integration | Michael Dietze, Rob Kooper |
+
+### Introduction
+
+Introduction about ED model
+
+### PEcAn configuration file additions
+
+The following sections of the PEcAn XML are relevant to the ED model:
+
+- `model`
+  - `id` -- BETY model ID. Each ID corresponds to a different revision of ED (see below)
+  - `revision` -- The revision (a.k.a. release) number of ED (e.g. "r82"). "rgit" indicates the latest code on the ED repository.
+  - `edin` -- Name of the template ED2IN configuration file. If this is a functional path that points to a specific file, that file is used. If no file is found following the path literally, the workflow will try the path relative to the `PEcAn.ED2` package using `system.file` (recall that files in `inst` are moved to the package root, but all other directory structure is preserved). If this is omitted, `PEcAn.ED2::write.configs.ED2` will look for a file called `ED2IN.<revision>` (e.g. `ED2IN.rgit`, `ED2IN.r86`) in the `PEcAn.ED2` package.
+    - **Example**: `ED2IN.rgit` will use the `ED2IN.rgit` file shipped with `PEcAn.ED2` _regardless of the revision of ED used_. (Note however that if a file called `ED2IN.rgit` exists in the workflow runtime directory, that file will be used instead).
+  - `start.date`, `end.date` -- Run start and end date, respectively
+  - `met.start`, `met.end` -- Start and end year of meteorology inputs. By default (if omitted), these are set to the years of `start.date` and `end.date`, respectively. Setting these values to a shorter interval than `start.date` and `end.date` will cause ED to loop the meteorology input over the specified years. This may be useful, for example, for spinning up ED under a set of constant climate conditions.
+  - `phenol.scheme`
+  - `phenol`
+  - `phenol.start`
+  - `phenol.end`
+  - `all_pfts` -- (Logical) If false or missing (default), only run ED2 with the PFTs configured via PEcAn (i.e. in the `<pfts>` section of the XML). If `true`, run with all 17 of ED2's PFTs, using ED2's internal default parameters for all PFTs not configured through PEcAn. See [below](#models-ed-pft-configuration) for more details.
+  - `ed2in_tags` -- Named list of additional tags in the ED2IN to be modified. These modifications override any of those set by other parts of the PEcAn workflow. These tags must be in all caps. Any tags that are not already in the ED2IN file will be added; this makes this an effective way to run newer versions of ED2 that have new ED2IN parameters without having to provide an entire new ED2IN. For example:
+
+    ```xml
+    <model>
+      <ed2in_tags>
+        <IOOUTPUT>0</IOOUTPUT>
+        <PLANT_HYDRO_SCHEME>0</PLANT_HYDRO_SCHEME>
+        <ISTOMATA_SCHEME>0</ISTOMATA_SCHEME>
+        <ISTRUCT_GROWTH_SCHEME>0</ISTRUCT_GROWTH_SCHEME>
+        <TRAIT_PLASTICITY_SCHEME>0</TRAIT_PLASTICITY_SCHEME>
+      </ed2in_tags>
+    </model>
+    ```
+
+  - `barebones_ed2in` -- Whether or not to try to annotate the ED2IN file with comments. If "true", skip all comments and only write the tags themselves. If "false" (default), try to transfer comments from the template file into the target file.
+  - `jobtemplate`
+  - `prerun` -- String of commands to be added to the `job.sh` model execution script before the model is run. Multiple commands should be separated by proper `bash` syntax -- i.e. either with `&&` or `;`.
+    - One common use of this argument is to load modules on some HPC systems -- for instance:
+      ```xml
+      <prerun>module load git hdf5</prerun>
+      ```
+    - If your particular version of ED is failing early during execution with a mysterious "Segmentation fault", that may indicate that its process is exceeding its stack limit. In this case, you may need to remove the stack limit restriction with a `prerun` command like the following:
+      ```xml
+      <prerun>ulimit -s unlimited</prerun>
+      ```
+  - `postrun` -- Same as `<prerun>`, but for commands to be run _after_ model execution.
+  - `binary` -- The full path to the ED2 binary on the target machine.
+  - `binary_args` -- Additional arguments to be passed to the ED2 binary. Some common arguments are:
+    - `-s` -- Delay OpenMPI initialization until the last possible moment. This is needed when running ED2 in a Docker container. It is included by default when the host is `rabbitmq`.
+    - `-f /path/to/ED2IN` -- Full path to a specific ED2IN namelist file. Typically, this is not needed because, by default, ED searches for the ED2IN in the current directory and the PEcAn workflow places the ED2IN file and a symbolic link to the ED executable in the same (run) directory for you.
+- `run/site`
+  - `lat` -- Latitude coordinate of site
+  - `lon` -- Longitude coordinate of site
+- `inputs`
+  - `met/path` -- Path to `ED_MET_DRIVER_HEADER` file
+  - `pss`: [required] location of patch file
+  - `css`: [required] location of cohort file
+  - `site`: [optional] location of site file
+  - `lu`: [required] location of land use file
+  - `thsums`: [required] location of thermal sums file
+  - `veg`: [required] location of vegetation data
+  - `soil`: [required] location of soil data
+
+### PFT configuration in ED2 {#models-ed-pft-configuration}
+
+ED2 has more detailed PFTs than many models, and a more complex system for configuring these PFTs.
+ED2 has 17 PFTs, based roughly on growth form (e.g. tree vs. grass), biome (tropical vs. temperate), leaf morphology (broad vs. needleleaf), leaf phenology (evergreen vs. deciduous), and successional status (e.g. early, mid, or late).
+Each PFT is assigned an integer (1-17), which is used by the ED model to assign default model parameters.
+The mappings of these integers onto PFT definitions are not absolute, and may change as the ED2 source code evolves.
+Unfortunately, _the only authoritative source for these PFT definitions for any given ED2 version is the Fortran source code of that version_.
+The following is the mapping as of ED2 commit [24e6df6a][ed2in-table] (October 2018):
+
+1. C4 grass
+2. Early-successional tropical
+3. Mid-successional tropical
+4. Late-successional tropical
+5. Temperate C3 grass
+6. Northern pine
+7. Southern pine
+8. Late-successional conifer
+9. Early-successional temperate deciduous
+10. Mid-successional temperate deciduous
+11. Late-successional temperate deciduous
+12. Agricultural (crop) 1
+13. Agricultural (crop) 2
+14. Agricultural (crop) 3
+15. Agricultural (crop) 4
+16. Subtropical C3 grass (C4 grass with C3 photosynthesis)
+17. "Araucaria" (non-optimized southern pine), or liana
+
+ED2 parameter defaults are hard-coded in its Fortran source code.
+However, most parameters can be modified via an XML file (determined by the `ED2IN` `IEDCNFGF` field; usually `config.xml` in the same directory as the `ED2IN` file).
+The complete record of all parameters (defaults and user overrides) used by a given ED2 run is stored in a `history.xml` file (usually in the same directory as the `ED2IN`) -- this is the file to check to make sure that an ED2 run is parameterized as you expect.
+
+As with other models, PEcAn can set ED2 parameters using its built-in trait meta analysis.
+The function specifically responsible for writing the `config.xml` is [`PEcAn.ED2::write.config.xml.ED2`][write-config-xml-ed2] (which is called as part of the more general [`PEcAn.ED2::write.config.ED2`][write-config-ed2]).
+The configuration process proceeds as follows:
+
+First, the mappings between PEcAn PFT names and ED2 PFT numbers are determined according to the following logic:
+
+- If the PFT has an `<ed2_pft_number>` XML tag, that number is used. For example:
+  ```xml
+  <pft>
+    <name>umbs.early_hardwood</name>
+    <ed2_pft_number>9</ed2_pft_number>
+  </pft>
+  ```
+- If `<ed2_pft_number>` is not provided, the code tries to cross-reference the PFT `name` against the `pftmapping.csv` data provided with the `PEcAn.ED2` package. If the name is not matched (perfectly!), the function will exit with an error.
+
+Second, the PFT number from the previous step is used to write that PFT's parameters to the `config.xml`.
+The order of precedence for parameters is as follows (from highest to lowest):
+
+1. **Explicit user overrides**.
+   These are specified via a `<constants>` tag in the PFT definition in the `pecan.xml`.
+   For example:
+   ```xml
+   <pft>
+     <name>umbs.early_hardwood</name>
+     <constants>
+       <sla>36.3</sla>
+     </constants>
+   </pft>
+   ```
+   Note that these values are passed through [`PEcAn.ED2::convert.samples.ED`][convert-samples-ed], so they should generally be given in PEcAn's default units rather than ED2's.
+2. **Samples from the PEcAn meta analysis**.
+   These are also converted via [`PEcAn.ED2::convert.samples.ED`][convert-samples-ed].
+3. **ED2 defaults that PEcAn knows about**.
+   These are stored in the `edhistory.csv` file inside of `PEcAn.ED2`.
+   This file is re-generated manually whenever there is a new version of ED2, so while we try our best to keep it up to date, there is no guarantee that it is.
+4. (Implicitly) **Defaults in the ED2 Fortran source code**.
+   In general, our goal is to set all parameters through PEcAn (via steps 1-3), but if you are running PEcAn with new or experimental versions of ED2, you should be extra careful to make sure ED2 is running with the parameters you intend.
+   Again, the best way to know which parameters ED2 is actually using is to check the `history.xml` file produced once the run starts.
+
+The `ED2IN` field `INCLUDE_THESE_PFT` controls which of these PFTs are included in a given ED2 run.
+By default, PEcAn will set this field to only include the PFTs specified by the user.
+This is the recommended behavior because it ensures that all PFTs in ED2 were parameterized (one way or another, at least partially) through PEcAn.
+However, if you would like ED2 to run with all 17 PFTs (NOTE: using ED2's internal defaults for all PFTs not specified by the user!), you can set the `<all_pfts>` XML tag (in the `<model>` section) to `true`:
+
+```xml
+<model>
+  <all_pfts>true</all_pfts>
+</model>
+```
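+
+Because `history.xml` is the authoritative record of the parameters a run actually used, it can be worth checking it programmatically. A minimal sketch, assuming the `xml2` package is available; the exact element layout varies by ED2 version, so treat the XPath below as a starting point rather than a fixed schema:
+
+```r
+library(xml2)
+
+# read the history.xml that ED2 writes next to the ED2IN
+hist_xml <- read_xml("history.xml")
+
+# list the PFT blocks that were parameterized (element names vary by ED2 version)
+pfts <- xml_find_all(hist_xml, ".//pft")
+length(pfts)
+```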
+
+[ed2in-table]: https://github.com/EDmodel/ED2/blob/24e6df6a75702337c5f29cbd3f3fad90467c9a51/ED/run/ED2IN#L1320-L1327
+
+[write-config-xml-ed2]: https://pecanproject.github.io/models/ed/docs/reference/write.config.xml.ED2.html
+
+[write-config-ed2]: https://pecanproject.github.io/models/ed/docs/reference/write.config.ED2.html
+
+[convert-samples-ed]: https://pecanproject.github.io/models/ed/docs/reference/convert.samples.ED.html
+
+
+### Model specific input files
+
+List of inputs required by model, such as met, etc.
+
+### Model configuration files
+
+ED2 is configured using 2 files which are placed in the run folder.
+
+* **ED2IN** : template for this file is located at models/ed/inst/ED2IN.\<revision\>. The values in this template that need to be modified are described below and are surrounded with @ symbols.
+* **config.xml** : this file is generated by PEcAn. Some values are stored in the pecan.xml in the \<pfts\>\<pft\>\<constants\> section as well as in the \<model\> section.
+
+An example of the template can be found in [ED2IN.r82](https://github.com/PecanProject/pecan/blob/master/models/ed/inst/ED2IN.r82)
+
+The ED2IN template can contain the following variables. These will be replaced with actual values when the model configuration is written.
+
+* **@ENSNAME@** : run id of the simulation, used in template for NL%EXPNME
+
+* **@START_MONTH@** : start of simulation UTC time, from \<run\>\<start.date\>, used in template for NL%IMONTHA
+* **@START_DAY@** : start of simulation UTC time, from \<run\>\<start.date\>, used in template for NL%IDATEA
+* **@START_YEAR@** : start of simulation UTC time, from \<run\>\<start.date\>, used in template for NL%IYEARA
+* **@END_MONTH@** : end of simulation UTC time, from \<run\>\<end.date\>, used in template for NL%IMONTHZ
+* **@END_DAY@** : end of simulation UTC time, from \<run\>\<end.date\>, used in template for NL%IDATEZ
+* **@END_YEAR@** : end of simulation UTC time, from \<run\>\<end.date\>, used in template for NL%IYEARZ
+
+* **@SITE_LAT@** : site latitude location, from \<run\>\<site\>\<lat\>, used in template for NL%POI_LAT
+* **@SITE_LON@** : site longitude location, from \<run\>\<site\>\<lon\>, used in template for NL%POI_LON
+
+* **@SITE_MET@** : met header location, from \<run\>\<inputs\>\<met\>\<path\>, used in template for NL%ED_MET_DRIVER_DB
+* **@MET_START@** : first year of met data, from \<run\>\<site\>\<met.start\>, used in template for NL%METCYC1
+* **@MET_END@** : last year of met data, from \<run\>\<site\>\<met.end\>, used in template for NL%METCYCF
+
+* **@PHENOL_SCHEME@** : phenology scheme, if this variable is 1 the following 3 fields will be used, otherwise they will be set to empty strings, from \<model\>\<phenol.scheme\>, used in template for NL%IPHEN_SCHEME
+* **@PHENOL_START@** : first year for phenology, from \<model\>\<phenol.start\>, used in template for NL%IPHENYS1 and NL%IPHENYF1
+* **@PHENOL_END@** : last year for phenology, from \<model\>\<phenol.end\>, used in template for NL%IPHENYSF and NL%IPHENYFF
+* **@PHENOL@** : path and prefix of the prescribed phenology data, from \<model\>\<phenol\>, used in template for NL%PHENPATH
+
+* **@SITE_PSSCSS@** : path and prefix of the previous ecosystem state, from \<model\>\<psscss\>, used in template for NL%SFILIN
+* **@ED_VEG@** : path and prefix of the vegetation database, used only to determine the land/water mask, from \<model\>\<veg\>, used in template for NL%VEG_DATABASE
+* **@ED_SOIL@** : path and prefix of the soil database, used to determine the soil type, from \<model\>\<soil\>, used in template for NL%SOIL_DATABASE
+* **@ED_INPUTS@** : input directory with dataset to initialise chilling degrees and growing degree days, which is used to drive the cold-deciduous phenology, from \<model\>\<inputs\>, used in template for NL%THSUMS_DATABASE
+
+* **@FFILOUT@** : path and prefix for analysis files, generated from \<run\>\<host\>\<outdir\>/run.id/analysis, used in template for NL%FFILOUT
+* **@SFILOUT@** : path and prefix for history files, generated from \<run\>\<host\>\<outdir\>/run.id/history, used in template for NL%SFILOUT
+
+* **@CONFIGFILE@** : XML file containing additional parameter settings, this is always "config.xml", used in template for NL%IEDCNFGF
+
+* **@OUTDIR@** : location where output files are written (**without the runid**), from \<run\>\<host\>\<outdir\>, should not be used.
+* **@SCRATCH@** : local scratch space for outputs, generated as /scratch/\<username\>/run\$scratch, should not be used right now since it only works on ebi-cluster.
+
+### Installation notes
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+#### VM
+
+#### BU geo
+
+#### TACC lonestar
+
+```bash
+module load hdf5
+curl -o ED.r82.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.tgz
+tar zxf ED.r82.tgz
+rm ED.r82.tgz
+cd ED.r82/ED/build/bin
+curl -o include.mk.lonestar http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.lonestar
+make OPT=lonestar
+```
+
+#### TACC stampede
+
+```bash
+module load hdf5
+curl -o ED.r82.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.tgz
+tar zxf ED.r82.tgz
+rm ED.r82.tgz
+cd ED.r82/ED/build/bin
+curl -o include.mk.stampede http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.stampede
+make OPT=stampede
+```
+
+
+## GDAY {#models-gday}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## LINKAGES {#models-linkages}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## LPJ-GUESS {#models-lpjguess}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## MAESPA {#models-maespa}
+
+| Model Information ||
+| -- | -- |
+| Home Page | http://maespa.github.io/ |
+| Source Code | http://maespa.github.io/download.html |
+| License | |
+| Authors | Belinda Medlyn and Remko Duursma |
+| PEcAn Integration | Tony Gardella, Martin De Kauwe, Remko Duursma |
+
+**Introduction**
+
+**PEcAn configuration file additions**
+
+**Model specific input files**
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+Installing the MAESPA model requires cloning the MAESPA Bitbucket repository, executing the makefile, and ensuring that the `Maeswrap` R package is correctly installed.
+
+To clone and compile the model, execute this code at the command line:
+```
+git clone https://bitbucket.org/remkoduursma/maespa.git
+cd maespa
+make clean
+make
+```
+
+`maespa.out` is your executable. Example input files can be found in the inputfiles directory. Executing `maespa.out` from within one of the example directories will produce output.
+
+MAESPA developers have also developed a wrapper package called `Maeswrap`. The usual R package installation method `install.packages` may present issues with downloading and unpacking a dependency package called `rgl`. Here are a couple of solutions:
+
+Solution 1
+
+**From the command line**
+```
+sudo apt-get install r-cran-rgl
+```
+then from within R
+```
+install.packages("Maeswrap")
+```
+
+Solution 2
+
+**From the command line**
+```
+sudo apt-get install libglu1-mesa-dev
+```
+then from within R
+```
+install.packages("Maeswrap")
+```
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## PRELES {#models-preles}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
+* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
+* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+**VM**
+
+
+## SiPNET {#models-sipnet}
+
+| Model Information ||
+| -- | -- |
+| Home Page | |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | Michael Dietze, Rob Kooper |
+
+**Introduction**
+
+Introduction about model
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+SIPNET is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.
+
+* **sipnet.in** : template for this file is located at models/sipnet/inst/sipnet.in and is not modified.
+* **sipnet.param-spatial** : template for this file is located at models/sipnet/inst/template.param-spatial and is not modified.
+* **sipnet.param** : template for this file is in models/sipnet/inst/template.param or it is specified in the \ section as \. The values in this template are replaced by those computed in the earlier stages of PEcAn.
+
+**Installation notes**
+
+This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.
+
+SIPNET version unk:
+
+```
+if [ ! -e ${HOME}/sipnet_unk ]; then
+  cd
+  curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/PEcAn/models/sipnet_unk.tar.gz
+  tar zxf sipnet_unk.tar.gz
+  rm sipnet_unk.tar.gz
+fi
+cd ${HOME}/sipnet_unk/
+make clean
+make
+sudo cp sipnet /usr/local/bin/sipnet.runk
+make clean
+```
+
+SIPNET version 136:
+
+```
+if [ ! -e ${HOME}/sipnet_r136 ]; then
+  cd
+  curl -o sipnet_r136.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_r136.tar.gz
+  tar zxf sipnet_r136.tar.gz
+  rm sipnet_r136.tar.gz
+  sed -i 's#$(LD) $(LIBLINKS) \(.*\)#$(LD) \1 $(LIBLINKS)#' ${HOME}/sipnet_r136/Makefile
+fi
+cd ${HOME}/sipnet_r136/
+make clean
+make
+sudo cp sipnet /usr/local/bin/sipnet.r136
+make clean
+```
+
+**VM**
+
+
+## STICS {#models-stics}
+
+| Model Information ||
+| -- | -- |
+| Home Page | https://www6.paca.inrae.fr/stics/ |
+| Source Code | |
+| License | |
+| Authors | |
+| PEcAn Integration | Istem Fer |
+
+**Introduction**
+
+STICS (Simulateur mulTIdisciplinaire pour les Cultures Standard) is a crop model that has been developed since 1996 at INRA (French National Institute for Agronomic Research) in collaboration with other research (CIRAD, Irstea, Ecole des Mines de Paris, ESA, LSCE), professional (ARVALIS, Terres Inovia, CTIFL, ITV, ITB, Agrotransferts, etc.), and teaching institutes or organizations.
+
+**PEcAn configuration file additions**
+
+Should list the model specific additions to the PEcAn file here
+
+**Model specific input files**
+
+List of inputs required by model, such as met, etc.
+
+**Model configuration files**
+
+STICS is configured using different XML files located in two fixed directories (config and plant) and in one or more user-defined workspace directories. A Java app called JavaStics allows users to generate these files.
+
+**Installation notes**
+
+The software (JavaStics interface and STICS model) is available for download after a registration procedure (see the Download procedure at http://www6.paca.inra.fr/stics_eng/).
+
+**VM**
+
+
+# Available Meteorological Drivers
+
+## Ameriflux
+
+Scale: site
+
+Resolution: 30 or 60 min
+
+Availability: varies by site http://ameriflux.lbl.gov/data/data-availability/
+
+Notes: Old ORNL server, use is deprecated
+
+## AmerifluxLBL
+
+Scale: site
+
+Resolution: 30 or 60 min
+
+Availability: varies by site http://ameriflux.lbl.gov/data/data-availability/
+
+Notes: new Lawrence Berkeley Lab server
+
+## Fluxnet2015
+
+Scale: site
+
+Resolution: 30 or 60 min
+
+Availability: varies by site [http://fluxnet.fluxdata.org/sites/site-list-and-pages](http://fluxnet.fluxdata.org/sites/site-list-and-pages/)
+
+Notes: Fluxnet 2015 synthesis product. Does not cover all FLUXNET sites
+
+## NARR
+
+Scale: North America
+
+Resolution: 3 hr, approx. 32 km (Lambert conical projection)
+
+Availability: 1979-present
+
+## CRUNCEP
+
+Scale: global
+
+Resolution: 6 hr, 0.5 degree
+
+Availability: 1901-2010
+
+## CMIP5
+
+Scale: varies by model
+
+Resolution: 3 hr
+
+Availability: 2006-2100
+
+Currently only GFDL available. Different scenarios and ensemble members can be set via Advanced Edit.
+
+## NLDAS
+
+Scale: Lower 48 + buffer
+
+Resolution: 1 hour, 0.125 degree
+
+Availability: 1980-present
+
+## GLDAS
+
+Scale: Global
+
+Resolution: 3 hr, 1 degree
+
+Availability: 1948-2010
+
+## PalEON
+
+Scale: -100 to -60 W longitude, 35 to 50 N latitude (US northern hardwoods + buffer)
+
+Resolution: 6 hr, 0.5 degree
+
+Availability: 850-2010
+
+## FluxnetLaThuile
+
+Scale: site
+
+Resolution: 30 or 60 min
+
+Availability: varies by site http://www.fluxdata.org/DataInfo/Dataset%20Doc%20Lib/SynthDataSummary.aspx
+
+Notes: 2007 synthesis. Fluxnet2015 supersedes this for sites that have been updated
+
+## Geostreams
+
+Scale: site
+
+Resolution: varies
+
+Availability: varies by site
+
+Notes: This is a protocol, not a single archive.
The PEcAn functions currently default to querying https://terraref.ncsa.illinois.edu/clowder/api/geostreams, which requires login and contains data from only two sites (Urbana IL and Maricopa AZ). However, the interface can be used with any server that supports the [Geostreams API](https://opensource.ncsa.illinois.edu/confluence/display/CATS/Geostreams+API).
+
+## ERA5
+
+Scale: Global
+
+Resolution: 3 hrs and 31 km
+
+Availability: 1950-present
+
+Notes:
+
+It's important to know that the raw ERA5 tiles need to be downloaded and registered in the database first. Inside the `inst` folder in the data.atmosphere package there are R files for downloading and registering files in BETY. However, this assumes that you have registered and set up your API requirements. Check out how to set up your API [here](https://confluence.ecmwf.int/display/CKB/How+to+download+ERA5#HowtodownloadERA5-3-DownloadERA5datathroughtheCDSAPI).
+In the `inst` folder you can find two files (`ERA5_db_register.R` and `ERA5_USA_download.R`). If you set up your `ecmwf` account as explained in the link above, `ERA5_USA_download.R` will help you download all the tiles with all the variables required for PEcAn's `extract.nc.ERA5` function to generate PEcAn standard met files. Besides installing the required packages for this file, it should work from top to bottom with no problem. After downloading the tiles, there is a simple script in `ERA5_db_register.R` which helps you register your tiles in BETY; `met.process` later uses that entry to find the required tiles for extracting met data for your sites. There are two important points about this file: 1) make sure you don't change the site id in the script (which is the same as the `ParentSite` in the ERA5 registration xml file); 2) make sure the start and end dates in that script match the downloaded tiles. Set your `ERA5.files.path` to where you downloaded the tiles, and the rest of the script should work fine.
+
+
+## Download GFDL
+
+The `download.GFDL` function downloads 3-hour-frequency CMIP5 outputs generated by multiple GFDL models. GFDL developed several distinct modeling streams on the timescale of CMIP5 and AR5. These models include CM3, ESM2M and ESM2G with a spatial resolution of 2 degrees latitude by 2.5 degrees longitude. Each model has future outputs for the AR5 Representative Concentration Pathways ranging from 2006-2100.
+
+## CM3
+
+GFDL's CMIP5 experiments with CM3 included many of the integrations found in the long-term CMIP5 experimental design. The focus of this physical climate model is on the role of aerosols, aerosol-cloud interactions, and atmospheric chemistry in climate variability and climate change.
+
+## ESM2M & ESM2G
+
+Two new models representing ocean physics with alternative numerical frameworks to explore the implications of some of the fundamental assumptions embedded in these models. Both ESM2M and ESM2G utilize a more advanced land model, LM3, than was available in ESM2.1, including a variety of enhancements (Milly et al., in prep). GFDL's CMIP5 experiments with Earth System Models included many of the integrations found in the long-term CMIP5 experimental design. The ESMs, by design, close the carbon cycle and are used to study the impact of climate change on ecosystems, ecosystem changes on climate, and human activities on ecosystems.
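+
+A hedged sketch of calling the download function directly (the output folder, dates, and coordinates are placeholders, and the `model`/`scenario`/`ensemble_member` argument names are assumptions based on the function's documentation, so double-check them against your PEcAn version):
+
+```r
+library(PEcAn.data.atmosphere)
+
+# download GFDL CM3 output for one grid cell (placeholder values)
+download.GFDL(
+  outfolder       = "/tmp/met/GFDL",
+  start_date      = "2006-01-01",
+  end_date        = "2010-12-31",
+  lat.in          = 45.5,
+  lon.in          = -84.7,
+  model           = "CM3",     # alternatives: "ESM2M", "ESM2G"
+  scenario        = "rcp45",
+  ensemble_member = "r1i1p1"
+)
+```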
+
+For more information please navigate [here](https://gfdl.noaa.gov/cmip)
+
+
+# Database synchronization {#database-sync}
+
+The database synchronization consists of 2 parts:
+- Getting the data from the remote servers to your server
+- Sharing your data with everybody else
+
+## How does it work?
+
+Each server that runs the BETY database will have a unique machine_id and a sequence of IDs associated with it. Whenever the user creates a new row in BETY it will receive an ID in the sequence. This allows us to uniquely identify where a row came from. This information is crucial for the code that works with the synchronization, since we can now copy those rows that have an ID in the sequence specified. If you have not asked for a unique ID, your ID will be 99.
+
+The synchronization code itself is split into two parts: loading data with the `load.bety.sh` script and exporting data using `dump.bety.sh`. If you do not plan to share data, you only need to use `load.bety.sh` to update your database.
+
+## Set up
+
+Requests for new machine IDs are currently handled manually. To request a machine ID contact Rob Kooper. In the examples below this ID is referred to as 'my siteid'.
+
+To set up the database to use this ID you need to call load.bety in 'CREATE' mode (replacing 'my siteid' with the ID of your site):
+
+```
+sudo -u postgres {$PECAN}/scripts/load.bety.sh -c -u -m <my siteid>
+```
+WARNING: At the moment running CREATE deletes all current records in the database. If you are running from the VM this includes both all runs you have done and all information that the database is prepopulated with (e.g. input and model records). Remote records can be fetched (see below), but local records will be lost (we're working on improving this!)
+
+## Fetch latest data
+
+When logged into the machine you can fetch the latest data using the load.bety.sh script. The script will check what site you want to get the data for and will remove all data in the database associated with that id. It will then reinsert all the data from the remote database.
+
+The script is configured using environment variables. The following variables are recognized:
+- DATABASE: the database where the script should write the results. The default is `bety`.
+- OWNER: the owner of the database (if it is to be created). The default is `bety`.
+- PG_OPT: additional options to be added to psql (default is nothing).
+- MYSITE: the (numerical) ID of your site. If you have not requested an ID, use 99; this is used for all sites that do not want to share their data (i.e. VM). 99 is in fact the default.
+- REMOTESITE: the ID of the site you want to fetch the data from. The default is 0 (EBI).
+- CREATE: If 'YES', this indicates that the existing database (`bety`, or the one specified by DATABASE) should be removed. Set to YES (in caps) to remove the database. **THIS WILL REMOVE ALL DATA** in DATABASE. The default is NO.
+- KEEPTMP: indicates whether the downloaded file should be preserved. Set to YES (in caps) to keep downloaded files; the default is NO.
+- USERS: determines if default users should be created. Set to YES (in caps) to create default users with default passwords. The default is NO.
+
+All of these variables can be specified as command line arguments as well; to see the options, use -h.
+
+```
+load.bety.sh -h
+./scripts/load.bety.sh [-c YES|NO] [-d database] [-h] [-m my siteid] [-o owner] [-p psql options] [-r remote siteid] [-t YES|NO] [-u YES|NO]
+ -c create database, THIS WILL ERASE THE CURRENT DATABASE, default is NO
+ -d database, default is bety
+ -h this help page
+ -m site id, default is 99 (VM)
+ -o owner of the database, default is bety
+ -p additional psql command line options, default is empty
+ -r remote site id, default is 0 (EBI)
+ -t keep temp folder, default is NO
+ -u create carya users, this will create some default users
+
+dump.bety.sh -h
+./scripts/dump.bety.sh [-a YES|NO] [-d database] [-h] [-l 0,1,2,3,4] [-m my siteid] [-o folder] [-p psql options] [-u YES|NO]
+ -a use anonymous user, default is YES
+ -d database, default is bety
+ -h this help page
+ -l level of data that can be dumped, default is 3
+ -m site id, default is 99 (VM)
+ -o output folder where dumped data is written, default is dump
+ -p additional psql command line options, default is -U bety
+ -u should unchecked data be dumped, default is NO
+```
+
+## Sharing data
+
+Sharing your data requires a few steps. First, before entering any data, you will need to request an ID from the PEcAn developers. Simply open an issue at GitHub and we will generate an ID for you. If possible, add the URL of your data host.
+
+You will now need to synchronize the database again and use your ID. For example if you are given ID=42 you can use the following command: `MYSITE=42 REMOTESITE=0 ./scripts/load.bety.sh`. This will load the EBI database and set the IDs such that any data you insert will have the right ID.
+
+To share your data you can now run dump.bety.sh. The script is configured using environment variables; the following variables are recognized:
+- DATABASE: the database where the script should write the results. The default is `bety`.
+- PG_OPT: additional options to be added to psql (default is nothing).
+- MYSITE: the ID of your site. If you have not requested an ID, use 99, which is used for all sites that do not want to share their data (i.e. VM). 99 is the default.
+- LEVEL: the minimum access-protection level of the data to be dumped (0=private, 1=restricted, 2=internal collaborators, 3=external collaborators, 4=public). The default level for exported data is level 3.
+  - note that currently only the traits and yields tables have restrictions on sharing. If you share data, records from other (meta-data) tables will be shared. If you wish to extend the access_level to other tables please [submit a feature request](https://github.com/pecanproject/bety/issues/new).
+- UNCHECKED: specifies whether unchecked traits and yields should be dumped. Set to YES (all caps) to dump unchecked data. The default is NO.
+- ANONYMOUS: specifies whether all users should be anonymized. The default is YES. Set to NO (all caps) to keep the original users (**INCLUDING PASSWORDS**) in the dump file.
+- OUTPUT: the location of where on disk to write the result file. The default is `${PWD}/dump`.
+
+NOTE: If you want your dumps to be accessible to other PEcAn servers you need to perform the following additional steps
+
+1. Open pecan/scripts/load.bety.sh
+2. In the DUMPURL section of the code add a new record indicating where you are dumping your data. Below is the example for SITE number 1 (Boston University)
+```
+ elif [ "${REMOTESITE}" == "1" ]; then
+ DUMPURL="http://psql-pecan.bu.edu/sync/dump/bety.tar.gz"
+```
+3. Check your Apache settings to make sure this location is public
+4. Commit this code and submit a Pull Request
+5. From the URL in the Pull Request, PEcAn administrators will update the machines table, the status map, and notify other users to update their cron jobs (see Automation below)
+
+Plans to simplify this process are in the works.
+
+## Automation
+
+Below is an example of a script to synchronize PEcAn database instances across the network.
+
+db.sync.sh
+```
+#!/bin/bash
+## make sure psql is in PATH
+export PATH=/usr/pgsql-9.3/bin/:$PATH
+## move to export directory
+cd /fs/data3/sync
+## Dump Data
+MYSITE=1 /home/dietze/pecan/scripts/dump.bety.sh
+## Load Data from other sites
+MYSITE=1 REMOTESITE=2 /home/dietze/pecan/scripts/load.bety.sh
+MYSITE=1 REMOTESITE=5 /home/dietze/pecan/scripts/load.bety.sh
+MYSITE=1 REMOTESITE=0 /home/dietze/pecan/scripts/load.bety.sh
+## Timestamp sync log
+echo $(date +%c) >> /home/dietze/db.sync.log
+```
+
+Typically such a script is set up to run as a cron job. Make sure to schedule this job (`crontab -e`) as the user that has database privileges (typically postgres). The example below is a cron table that runs the sync every hour at 12 min after the hour.
+
+```
+MAILTO=user@yourUniversity.edu
+12 * * * * /home/dietze/db.sync.sh
+```
+
+## Database maintenance
+
+All databases need maintenance performed on them. Depending upon the database type this can happen automatically, or it needs to be run through a scheduler or manually. The BETYdb database is Postgresql and it needs to be reindexed and vacuumed on a regular basis. Reindexing introduces efficiencies back into the database by reorganizing the indexes. Vacuuming the database frees up resources to the database by rearranging and compacting the database. Both of these operations are necessary and safe. As always, if there's a concern, a backup of the database should be made ahead of time. While running the reindexing and vacuuming commands, users will notice a slowdown at times. Therefore it's better to run these maintenance tasks during off hours.
+
+### Reindexing the database
+
+As mentioned above, reindexing allows the database to become more efficient. Over time as data gets updated and deleted, the indexes become less efficient. This has a negative impact on executed statements. Reindexing makes the indexes efficient again (at least for a while), allowing faster statement execution and reducing the overall load on the database.
+
+The reindex.bety.sh script is provided to simplify reindexing the database.
+
+```
+reindex.bety.sh -h
+./reindex.bety.sh [-c catalog] [-d database] [-h] [-i table names] [-p psql options] [-q] [-s] [-t tablename]
+ -c catalog, database catalog name used to search for tables, default is bety
+ -d database, default is bety
+ -h this help page
+ -i table names, list of space-separated table names to skip over when reindexing
+ -p additional psql command line options, default is -U bety
+ -q the reindexing should be quiet
+ -s reindex the database after reindexing the tables (this should be done sparingly)
+ -t tablename, the name of the one table to reindex
+```
+
+If the database is small enough, it's reasonable to reindex the entire database at one time. To do this, manually run or schedule the REINDEX statement. For example:
+
+```
+reindex.bety.sh -s
+```
+
+For larger databases it may be desirable to reindex entire tables at a time. An efficient way to do this is to reindex the larger tables and then the entire database. For example:
+
+```
+reindex.bety.sh -t traits; reindex.bety.sh -t yields;
+reindex.bety.sh -s
+```
+
+For very large databases it may be desirable to reindex one or more individual indexes before reindexing tables and the database. In this case, running specific psql commands to reindex those specific indexes, followed by reindexing the table, is a possible approach. For example:
+
+```
+psql -U bety -c "REINDEX INDEX index_yields_on_citation_id; REINDEX INDEX index_yields_on_cultivar_id;"
+reindex.bety.sh -t yields;
+```
+
+Splitting up the indexing commands over time allows the database to operate efficiently with minimal impact on users. One approach is to schedule the reindexing of large, complex tables at a specific off-time during the week, followed by a general reindexing that excludes those large tables on a weekend night.
+
+Please refer to the Automation section above for information on using cron to schedule reindexing commands.
+
+### Vacuuming the database
+
+Vacuuming the BETYdb Postgresql database reduces the amount of resources it uses and introduces its own efficiencies.
+
+Over time, modified and deleted records leave 'holes' in the storage of the database. This is a common feature for most databases. Each database has its own way of handling this; in Postgresql it's the VACUUM command. The VACUUM command performs two main operations: cleaning up tables to make memory use more efficient, and analyzing tables for optimum statement execution. The use of the keyword ANALYZE indicates the second operation should take place.
+
+The vacuum.bety.sh script is provided to simplify vacuuming the database.
+
+```
+vacuum.bety.sh -h
+./vacuum.bety.sh [-c catalog] [-d database] [-f] [-h] [-i table names] [-n] [-p psql options] [-q] [-s] [-t tablename] [-z]
+ -c catalog, database catalog name used to search for tables, default is bety
+ -d database, default is bety
+ -f perform a full vacuum to return resources to the system. Specify rarely, if ever
+ -h this help page
+ -i table names, list of space-separated table names to skip over when vacuuming
+ -n only vacuum the tables and do not analyze, default is to first vacuum and then analyze
+ -p additional psql command line options, default is -U bety
+ -q the export should be quiet
+ -s skip vacuuming the database after vacuuming the tables
+ -t tablename, the name of the one table to vacuum
+ -z only perform analyze, do not perform a regular vacuum, overrides -n and -f, sets -s
+```
+
+For small databases with light loads, it may be possible to set aside a time for a complete vacuum. During this time, commands executed against the database might fail (a temporary condition as the database gets cleaned up). The following command can be used to perform all the vacuum operations in one go:
+
+```
+vacuum.bety.sh -f
+```
+
+Generally it's not desirable to have downtime. If the system running the database doesn't need the resources that the database is using returned to it, a FULL vacuum can be avoided. This is the default behavior of the script:
+
+```
+vacuum.bety.sh
+```
+
+In larger databases, vacuuming the entire database can take a long time, causing a negative impact on users. This means that individual tables need to be vacuumed. How often a vacuum needs to be performed depends upon a table's activity: the more frequently updates and deletes occur on a table, the more frequent the vacuum should be. For large tables it may be desirable to separate the table cleanup from the analysis. An example for completely vacuuming and analyzing a table is:
An example for completely vacuuming and analyzing a table is:

```
psql -U bety -c "VACUUM traits; VACUUM ANALYZE traits;"
```

Similar to indexes, vacuuming the most active tables followed by general database vacuuming and vacuum analyze may be a desirable approach.

Also note that it isn't necessary to run VACUUM ANALYZE for each vacuum performed. Separating the commands and performing a VACUUM ANALYZE after several regular vacuums may be sufficient, with less load on the database.

If the BETYdb database is running on a system with limited resources, or with resources that have become limited, the VACUUM command can return resources to the system from the database. The normal vacuuming process releases resources back to the database for reuse, but not to the system; generally this isn't a problem. PostgreSQL's VACUUM command accepts the FULL keyword, which returns resources back to the system. Requesting a FULL vacuum will lock the table being vacuumed while it is being re-written, preventing any statements from being executed against it. If performing VACUUM FULL against the entire database, only the table being actively worked on is locked.

To minimize the impact a VACUUM FULL has on users, it's best to perform a normal vacuum before a FULL vacuum. If this approach is taken, there should be a minimal time gap between the normal VACUUM and the VACUUM FULL commands. A normal vacuum allows changes to be made, thus requiring the full vacuum to handle those changes, extending its run time. Reducing the time between the two commands lessens the work VACUUM FULL needs to do.

```
psql -U bety -c "VACUUM yields; VACUUM FULL yields; VACUUM ANALYZE yields;"
```

Given its impact, it's typically not desirable to perform a VACUUM FULL after every normal vacuum; it should be done on an "as needed" basis or infrequently.

## Troubleshooting

There are several possibilities if a scheduled cron job appears to be running but isn't producing the expected results. The following are suggestions on what to try to resolve the issue.

### Username and password

The user that scheduled a cron job may not have access permissions to the database. This can be easily confirmed by running the command line from the cron job while logged in as the user that scheduled the job. An error message will be shown if the user doesn't have permissions.

To resolve this, be sure to include a valid database user (not a BETYdb user) with their credentials on the command in crontab.

### pg_hba.conf file

It's possible that the machine hosting the docker image of the database doesn't have permission to access the database. This may be due to the cron job running on a machine that is not the docker instance of the database.

It may be necessary to look at the logs on the hosting machine to determine if database access permissions are causing a problem. Logs are stored in different locations depending upon the operating system of the host and upon other environmental factors. This document doesn't provide information on where to find the logs.

To begin, it's best to look at the contents of the relevant database configuration file. The following command will display the contents of the pg_hba.conf file.

```
psql -U postgres -qAt -c "show hba_file" | xargs grep -v -E '^[[:space:]]*#'
```

This command should return a series of text lines. For each row except those beginning with 'local', the fourth item describes the machines that can access the database.
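As an illustration (the subnet and authentication method here are hypothetical, not from a real deployment), a line granting a Docker subnet access to the `bety` database with password authentication might look like:

```
host    bety    bety    172.17.0.0/16    md5
```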
In some cases an IP mask is specified in the fifth column that further restricts the machines that have access. The special word 'all' in the fourth column grants permissions to all machines. The last column on each line contains the authentication option for the machine(s) specified in the fourth column (with a possible fifth-column IP mask modifier).

Ensure that the host machine is listed under the fourth column (machine address range, or 'all'), is also included in the IP mask if one was specified, and finally that the authentication option is not set to 'reject'. If the host machine is not included, the pg_hba.conf file will need to be updated to allow access.

## Network Status Map

https://pecan2.bu.edu/pecan/status.php

Nodes: red = down, yellow = out-of-date schema, green = good

Edges: red = fail, yellow = out-of-date sync, green = good


## Tasks

Following is a list of tasks we plan on working on to improve these scripts:

- [pecanproject/bety#368](https://github.com/PecanProject/bety/issues/368) allow site-specific customization of information and UI elements including title, contacts, logo, color scheme.



# Standalone tools (modules)

- Radiative transfer modeling and remote sensing ([`modules/rtm`](https://pecanproject.github.io/modules/rtm/docs/index.html)); [vignette](https://pecanproject.github.io/modules/rtm/docs/articles/pecanrtm.vignette.html)
- Photosynthesis ([`modules/photosynthesis`](https://pecanproject.github.io/modules/photosynthesis/docs/index.html)); [vignette](https://pecanproject.github.io/modules/photosynthesis/docs/articles/ResponseCurves.html)
- Allometry ([`modules/allometry`](https://pecanproject.github.io/modules/allometry/docs/index.html)); [vignette](https://pecanproject.github.io/modules/allometry/docs/articles/AllomVignette.html)
- Load data ([`modules/benchmark`](https://pecanproject.github.io/modules/benchmark/docs/index.html) -- `PEcAn.benchmark::load_data`)

## Loading Data in PEcAn {#LoadData}

If you are loading data into PEcAn for benchmarking, using the Benchmarking shiny app [provide link?] is recommended.

Data can be loaded manually using the `load_data` function, which in turn requires providing data format information using `query.format.vars` and the path to the data using `query.file.path`.

Below is a description of the `load_data` function and a simple example of loading data manually.

### Inputs

Required

- `data.path`: path to the data; the output of the function `query.file.path` (see example below)
- `format`: R list object; the output of the function `query.format.vars` (see example below)

Optional

- `start_year = NA`: first year of data to load; if unspecified, all years are loaded
- `end_year = NA`: last year of data to load
- `site = NA`
- `vars.used.index = NULL`: subset of variables to load; if unspecified, all variables are loaded

### Output

- R data frame containing the requested variables converted into PEcAn standard names and units, with time steps in `POSIX` format.

### Example

The data for this example has already been entered into the database. To add new data go to [new data documentation](#NewInput).

To load the Ameriflux data for the Harvard Forest (US-Ha1) site:

1. Create a connection to the BETY database. This can be done using the R function

``` R
bety = PEcAn.DB::betyConnect(php.config = "pecan/web/config.php")
```

   where the complete path to the `config.php` is specified. See [here](https://github.com/PecanProject/pecan/blob/master/web/config.example.php) for an example `config.php` file.

2. Look up the inputs record for the data in BETY.
```{r, echo=FALSE, out.height = "50%", out.width = "50%", fig.align = 'center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/Input_ID_name.png")
```

   To find the input ID, either look at

   - The URL of the record (see image above)

   - In R run

```R
library(dplyr)
input_name = "AmerifluxLBL_site_0-758" # copied directly from the record online
input.id = tbl(bety, "inputs") %>% filter(name == input_name) %>% pull(id)
```

3. Additional arguments to `query.format.vars` are optional

   1. If you only want to load a subset of dates in the data, specify start and end year; otherwise all data will be loaded.
   2. If you only want to load a select list of variables from the data, look up their IDs in BETY; otherwise all variables will be loaded.

4. In R run

```R
format = PEcAn.DB::query.format.vars(bety, input.id)
```

   Examine the resulting R list object to make sure it returned the correct information.

   The example format contains the following objects:

```R
$file_name
[1] "AMERIFLUX_BASE_HH"

$mimetype
[1] "csv"

$skip
[1] 2

$header
[1] 1

$na.strings
[1] "-9999" "-6999" "9999"  "NA"

$time.row
[1] 4

$site
[1] 758

$lat
[1] 42.5378

$lon
[1] -72.1715

$time_zone
[1] "America/New_York"
```

   The first 4 rows of the table `format$vars` look like this:

   | bety_name    | variable_id | input_name      | input_units | storage_type | column_number | bety_units | mstmip_name | mstmip_units   | pecan_name | pecan_units    |
   | ------------ | ----------- | --------------- | ----------- | ------------ | ------------- | ---------- | ----------- | -------------- | ---------- | -------------- |
   | air_pressure | 554         | PA              | kPa         |              | 19            | Pa         | Psurf       | Pa             | Psurf      | Pa             |
   | airT         | 86          | TA              | celsius     |              | 4             | degrees C  | Tair        | K              | Tair       | K              |
   | co2atm       | 135         | CO2_1           | umol mol-1  |              | 20            | umol mol-1 | CO2air      | micromol mol-1 | CO2air     | micromol mol-1 |
   | datetime     | 5000000001  | TIMESTAMP_START | ymd_hms     | %Y%m%d%H%M   | 1             | ymd_hms    | NA          | NA             | datetime   | ymd_hms        |

5. Get the path to the data

```R
data.path = PEcAn.DB::query.file.path(
  input.id = input.id,
  host_name = PEcAn.remote::fqdn(),
  con = bety$con)
```

6. Load the data

```R
data = PEcAn.benchmark::load_data(data.path = data.path, format = format)
```



# Shiny

## Testing the Shiny Server

Shiny can be difficult to debug because, when run as a web service, the R output is hidden in system log files that are hard to find and read.
One useful approach to debugging is to use port forwarding, as follows.

First, on the remote machine (including the VM), make sure R's working directory is set to the directory of the Shiny app (e.g., `setwd("/path/to/pecan/shiny/WorkflowPlots")`, or just open the app as an RStudio project).
Then, in the R console, run the app as:

```
shiny::runApp(port = XXXX)
# E.g. shiny::runApp(port = 5638)
```

Then, on your local machine, open a terminal and run the following command, matching `XXXX` to the port above and `YYYY` to any unused port on your local machine (any 4-digit number should work).

```
ssh -L YYYY:localhost:XXXX user@remotehost
# E.g., for the PEcAn VM, given the above port:
# ssh -L 5639:localhost:5638 carya@localhost -p 6422
```

Now, in a web browser on your local machine, browse to `localhost:YYYY` (e.g., `localhost:5639`) to run whatever app you started with `shiny::runApp` in the previous step.
All of the output should display in the R console where the `shiny::runApp` command was executed.
Note that this includes any `print`, `message`, `logger.*`, etc. statements in your Shiny app.

If the Shiny app hits an R error, the backtrace should include a line like `Hit error at of server.R#LXX` -- that `XX` being a line number that you can use to track down the error.
To return from the error to a normal R prompt, hit `Ctrl-C` (or, alternatively, the "Stop" button in RStudio).
To restart the app, run `shiny::runApp(port = XXXX)` again (keeping the same port).

Note that Shiny runs any code in the `pecan/shiny/` directory at the moment the app is launched.
So, any changes you make to the code in `server.R` and `ui.R` or scripts loaded therein will take effect the next time the app is started.

If for whatever reason this doesn't work with RStudio, you can always run R from the command line.
Also, note that the ability to forward ports (`ssh -L`) may depend on the `ssh` configuration of your remote machine.
These instructions have been tested on the PEcAn VM (v.1.5.2+).

## Debugging Shiny Apps

When developing Shiny apps you can run the application from RStudio and place breakpoints in the code. To do this you will need to do the following steps first (already done on the VM) before starting RStudio:

- `echo "options(shiny.port = 6438)" >> ${HOME}/.Rprofile`
- `echo "options(shiny.launch.browser = 'FALSE')" >> ${HOME}/.Rprofile`

Next you will need to create a tunnel for port 6438 to the VM, which will be used to open the Shiny app. The following command will create this tunnel: `ssh -l carya -p 6422 -L 6438:localhost:6438 localhost`.

Now you can run your application from RStudio using `shiny::runApp()` and it will show the output from the application in your console. You can now place breakpoints and evaluate the output.

## Checking Log Files

To create log files on the VM, execute the following:
```
sudo -s
echo "preserve_logs true;" >> /etc/shiny-server/shiny-server.conf
service shiny-server restart
```
Then within the directory `/var/log/shiny-server` you will see log files for your specific Shiny apps.



# Adding to PEcAn {#adding-to-pecan}

- Case studies
  - [Adding a model](#adding-model)
  - [Adding input data](#NewInput)
  - [Adding data through the web interface](#adding-data-web)
  - Adding new species, PFTs, and traits from a new site
    - Add a site
    - Add some species
    - Add PFT
    - Add trait data
  - Adding a benchmark
  - Adding a met driver
- [Reference](#editing-records) (How to edit records in bety)
  - Models
  - Species
  - PFTs
  - Traits
  - Inputs
  - DB files
  - Variables
  - Formats
  - (Link each section to relevant Bety tables)

## Adding An Ecosystem Model {#adding-model}

**Adding a model to PEcAn involves two activities:**

1. Updating the PEcAn database to register the model
2. Writing the interface modules between the model and PEcAn

**Note that coupling a model to PEcAn should not require any changes to the model code itself**. A key aspect of our design philosophy is that we want it to be easy to add models to the system, and we want to use the working version of the code that is used by all other model users, not a special branch (which would rapidly end up out-of-date).

### Using PEcAn Database

To run a model within PEcAn requires that the PEcAn database has sufficient information about the model.
This includes a MODEL_TYPE designation, the types of inputs the model requires, the location of the model executable, and the plant functional types used by the model.

The instructions in this section assume that you will be specifying this information using the BETYdb web-based interface. This can be done either on your local VM (localhost:3280/bety or localhost:6480/bety) or on a server installation of BETYdb. However you interact with BETYdb, we encourage you to set up your PEcAn instance to support [database syncs](#database-sync) so that these changes can be shared and backed-up across the PEcAn network.

![](03_topical_pages/11_images/bety_main_page.png)

The figure below summarizes the relevant database tables that need to be updated to add a new model and the primary variables that define each table.

![](https://www.lucidchart.com/publicSegments/view/54a8aea8-9360-4628-af9e-392a0a00c27b/image.png)

### Define MODEL_TYPE

The first step to adding a model is to create a new MODEL_TYPE, which defines the abstract model class. This MODEL_TYPE is used to specify input requirements, define plant functional types, and keep track of different model versions.

The MODEL_TYPE is created by selecting Runs > Model Type and then clicking on _New Model Type_. The MODEL_TYPE name should be identical to the MODEL package name (see Interface Modules below) and is case sensitive.

![](03_topical_pages/11_images/bety_modeltype_1.png)
![](03_topical_pages/11_images/bety_modeltype_2.png)

### MACHINE

The PEcAn design acknowledges that the same model executables and input files may exist on multiple computers. Therefore, we need to define the machine that we are using. If you are running on the VM then the local machine is already defined as _pecan_. Otherwise, you will need to select Runs > Machines, click _New Machine_, and enter the URL of your server (e.g. pecan2.bu.edu).

### MODEL

Next we are going to tell PEcAn where the model executable is. Select Runs > Files, and click ADD. Use the pull-down menu to specify the machine you just defined above and fill in the path and name for the executable. For example, if SIPNET is installed at /usr/local/bin/sipnet then the path is /usr/local/bin/ and the file (executable) is sipnet.

Now we will create the model record and associate this with the File we just registered. The first time you do this select Runs > Models and click _New Model_. Specify a descriptive name for the model (which doesn't have to be the same as MODEL_TYPE), select the MODEL_TYPE from the pull-down, and provide a revision identifier for the model (e.g. v3.2.1). Once the record is created, select it from the Models table and click EDIT RECORD. Click on "View Related Files" and when the search window appears, search for the model executable you just added (if you are unsure which file to choose you can go back to the Files menu and look up the unique ID number). You can then associate this Model record with the File by clicking on the +/- symbol. By contrast, clicking on the name itself will take you to the File record.

In the future, if you set up the SAME MODEL VERSION on a different computer you can add that Machine and File to PEcAn and then associate this new File with this same Model record. A single version of a model should only be entered into PEcAn **once**.

If a new version of the model is developed that is derived from the current version, you should add this as a new Model record but with the same MODEL_TYPE as the original.
Furthermore, you should set the previous version of the model as the Parent of the new version.

### FORMATS

The PEcAn database keeps track of all the input files passed to models, as well as any data used in model validation or data assimilation. Before we start to register these files with PEcAn we need to define the format these files will be in. To create a new format see [Formats Documentation](#NewFormat).

### MODEL_TYPE -> Formats

For each of the input formats you specify for your model, you will need to edit your MODEL_TYPE record to add an association between the format and the MODEL_TYPE. Go to Runs > Model Type, select your record and click on the Edit button. Next, click on "Edit Associated Formats" and choose the Format you just defined from the pull-down menu. If the *Input* box is checked then all matching Input records will be displayed in the PEcAn site run selection page when you are defining a model run. In other words, the set of model inputs available through the PEcAn web interface is model-specific and dynamically generated from the associations between MODEL_TYPEs and Formats. If you also check the *Required* box, then the Input will be treated as required and PEcAn will not run the model if that input is not available. Furthermore, on the site selection webpage, PEcAn will filter the available sites and only display pins on the Google Map for sites that have a full set of required inputs (or where those inputs could be generated using PEcAn's workflows). Similarly, to make a site appear on the Google Map, all you need to do is specify Inputs, as described in the next section, and the point should automatically appear on the map.

### INPUTS

After a file Format has been created, input files can be registered with the database. Instructions for creating Inputs can be found under [How to insert new Input data](#NewInput).

### Add Plant Functional Types (PFTs)

Since many of the PEcAn tools are designed to keep track of parameter uncertainties and assimilate data into models, to use PEcAn with a model it is important to define Plant Functional Types for the sites or regions where you will be running the model.

Create a new PFT entry by selecting Data > PFTs and then clicking on _New PFT_.

![](03_topical_pages/11_images/bety_pft_1.png)
![](03_topical_pages/11_images/bety_pft_2.png)

Give the PFT a descriptive name (e.g., temperate deciduous). PFTs are MODEL_TYPE specific, so choose your MODEL_TYPE from the pull-down menu.

![](03_topical_pages/11_images/bety_pft_3.png)

#### Species

Within PEcAn there are no predefined PFTs, and users can create new PFTs very easily at whatever taxonomic level is most appropriate, from PFTs for individual species up to one PFT for all plants globally. To allow PEcAn to query its trait database for information about a PFT, you will want to associate species with the PFT record by choosing Edit and then "View Related Species". Species can be searched for by common or scientific name and then added to a PFT using the +/- button.

#### Cultivars

You can also define PFTs whose members are *cultivars* instead of species. This is designed for analyses where you want to perform meta-analysis on within-species comparisons (e.g. cultivar evaluation in an agricultural model), but it may be useful for other cases when you want to specify different priors for some members of a species.
You cannot associate both species and cultivars with the same PFT, but the cultivars in a cultivar PFT may come from different species, potentially including all known cultivars of some of those species, provided you have thought about how to interpret the results.

It is not yet possible to add a cultivar PFT through the BETYdb web interface. See [this GitHub comment](https://github.com/PecanProject/pecan/pull/1826#issuecomment-360665864) for an example of how to define one manually in PostgreSQL.

### Adding Priors for Each Variable

In addition to adding species, a PFT is defined in PEcAn by the list of variables associated with the PFT. PEcAn takes a fundamentally Bayesian approach to representing model parameters, so variables are not entered as fixed constants but as prior probability distributions.

There is a wide variety of priors already defined in the PEcAn database, ranging from very diffuse and generic to very informative priors for specific PFTs.

These pre-existing prior distributions can be added to a PFT. Navigate to the PFT from Data > PFTs and select the edit button in the Actions column for the chosen PFT.

![](03_topical_pages/11_images/bety_priors_1.png)

Click on the "View Related Priors" button and search through the list for desired prior distributions. The list can be filtered by adding terms into the search box. Add a prior to the PFT by clicking on the far left button for the desired prior, changing it to an X.

![](03_topical_pages/11_images/bety_priors_2.png)

Save this by scrolling to the bottom of the PFT page and hitting the Update button.

![](03_topical_pages/11_images/bety_priors_3.png)

#### Creating new prior distributions

A new prior distribution can be created for a pre-existing variable if a more constrained or specific one is known.

* Select Data > Priors then “New Prior”
* In the _Citation_ box, type in or select an existing reference that indicates how the prior was defined. There are a number of unpublished citations in current use that simply state the expert opinion of an individual
* Fill the _Variable_ box by typing in part or all of a pre-existing variable's name and selecting it
* The _Phylogeny_ box allows one to specify what taxonomic grouping the prior is defined for, and it is important to note that this is just for reference: it doesn’t have to be specified in any standard way, nor does it have to be monophyletic (i.e. it can be a functional grouping)
* The prior distribution is defined by choosing an option from the drop-down _Distribution_ box, and then specifying values for both _Parameter a_ and _Parameter b_. The exact meaning of the two parameters depends on the distribution chosen. For example, for the Normal distribution a and b are the mean and standard deviation, while for the Uniform they are the minimum and maximum. All parameters are defined based on their standard parameterization in the R language
* Specify the prior sample size in _N_ if the prior is based on observed data (independent of data in the PEcAn database)
* When this is done, scroll down and hit the Create button

![](03_topical_pages/11_images/bety_priors_4.png)

The new prior distribution can then be added to a PFT as described in the "Adding Priors for Each Variable" section.

#### Creating new variables

It is important to note that priors are defined for the variable name and units as specified in the Variables table.
**If the variable name or units are different within the model, it is the responsibility of the write.configs.MODEL function to handle name and unit conversions** (see Interface Modules below). This can also include common but nonlinear transformations, such as converting SLA to LMA or changing the reference temperature for respiration rates.

To add a new variable, select Data > Variables and click the New Variable button. Fill in the _Name_ field with the desired name for the variable and the units in the _Units_ field. There are additional fields, such as _Standard Units_, _Notes_, and _Description_, that can be filled out if desired. When done, hit the Create button.

![](03_topical_pages/11_images/bety_priors_5.png)

The new variable can then be used to create a prior distribution as described in the "Creating new prior distributions" section.

### Interface Modules

#### Setting up the module directory (required)

PEcAn assumes that the interface modules are available as an R package in the models directory named after the model in question. The simplest way to get started on that R package is to make a copy of the [_template_](https://github.com/PecanProject/pecan/tree/master/models/template) directory in the pecan/models folder and re-name it to the name of your model. In the code, filenames, and examples below you will want to substitute the word **MODEL** for the name of your model (note: R is case-sensitive).

If you do not want to write the interface modules in R then it is fairly simple to set up the R functions described below to just call the script you want to run using R's _system_ command. Scripts that are not R functions should be placed in the _inst_ folder, and R can look up the location of these files using the function _system.file_, which takes as arguments the _local_ path of the file within the installed package and the name of the package (typically PEcAn.MODEL). For example

    ## Example met conversion wrapper function
    met2model.MODEL <- function(in.path, in.prefix, outfolder, start_date, end_date){
      ## files under inst/ are installed at the top level of the package,
      ## so look them up without the "inst/" prefix
      myMetScript <- system.file("met2model.MODEL.sh", package = "PEcAn.MODEL")
      system(paste(myMetScript, file.path(in.path, in.prefix), outfolder, start_date, end_date))
    }

would execute the following at the Linux command line

    met2model.MODEL.sh in.path/in.prefix outfolder start_date end_date

#### DESCRIPTION

Within the module folder open the *DESCRIPTION* file and change the package name to PEcAn.MODEL. Fill out other fields such as Title, Author, Maintainer, and Date.

#### NAMESPACE

Open the *NAMESPACE* file and change all instances of MODEL to the name of your model. If you are not going to implement one of the optional modules (described below) at this time then you will want to comment those out using the pound sign `#`. For a complete description of R NAMESPACE files [see here](http://cran.r-project.org/doc/manuals/r-devel/R-exts.html#Package-namespaces). If you create additional functions in your R package that you want to be used, make sure you include them in the NAMESPACE as well (internal functions don't need to be declared).

#### Building the package

Once the package is defined you will then need to add it to the PEcAn build scripts. From the root of the pecan directory, go into the _scripts_ folder and open the file _build.sh_. Within the section of code that includes PACKAGES=, add models/MODEL to the list of packages to compile, as sketched below.
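The exact contents of `build.sh` and its package list differ between PEcAn versions, so treat the following as an illustrative sketch rather than the literal line to paste (the other directory names here are placeholders):

```
# scripts/build.sh (illustrative excerpt): append your model's directory
PACKAGES="base/utils modules/data.atmosphere models/sipnet models/MODEL"
```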
If, in writing your module, you add any other R packages to the system, you will want to make sure those are listed in the DESCRIPTION and in the script **scripts/install.dependencies.R**. Next, from the root pecan directory open all/DESCRIPTION and add your model package to the *Suggests:* list.

At any point, if you want to check whether PEcAn can build your MODEL package successfully, just go to the Linux command prompt and run **scripts/build.sh**. You will need to do this before the system can use these packages.

#### write.config.MODEL (required)

This module performs two primary tasks. The first is to take the list of parameter values and model input files that it receives as inputs and write those out in whatever format(s) the MODEL reads (e.g. a settings file). The second is to write out a shell script, jobs.sh, which, when run, will start your model run and convert its output to the PEcAn standard (netCDF with metadata currently equivalent to the [MsTMIP standard](http://nacp.ornl.gov/MsTMIP_variables.shtml)). Within the MODEL directory take a close look at inst/template.job and the example write.config.MODEL to see an example of how this is done. It is important that this script writes or moves outputs to the correct location so that PEcAn can find them. The example function also shows an example of writing a model-specific settings/config file, also by using a template.

You are encouraged to read the section above on defining PFTs before writing write.config.MODEL so that you understand what model parameters PEcAn will be passing you, how they will be named, and what units they will be in. Also note that the (optional) PEcAn input/driver processing scripts are called by separate workflows, so the paths to any required inputs (e.g. meteorology) will already be in the model-specific format by the time write.config.MODEL receives that info.

#### Output Conversions

The module model2netcdf.MODEL converts model output into the PEcAn standard (netCDF with metadata currently equivalent to the [MsTMIP standard](http://nacp.ornl.gov/MsTMIP_variables.shtml)). This function was previously required, but now that the conversion is called within jobs.sh it may be easier for you to convert outputs using other approaches (or to just directly write outputs in the standard).

Whether you implement this function or convert outputs some other way, please note that PEcAn expects all outputs to be broken up into ANNUAL files with the year number as the file name (i.e. YEAR.nc), though these files may contain any number of scalars, vectors, matrices, or arrays of model outputs, such as time-series of each output variable at the model's native timestep.

Note: PEcAn reads all variable names from the files themselves, so it is possible to add additional variables that are not part of the MsTMIP standard. Similarly, there are no REQUIRED output variables, though *time* is highly encouraged. We are shortly going to establish a canonical list of PEcAn variables so that if users add additional output variables they become part of the standard. **We don't want two different models to call the same output by two different names or with different units**, as this would prohibit the multi-model syntheses and comparisons that PEcAn is designed to facilitate.

#### met2model.MODEL

`met2model.MODEL(in.path, in.prefix, outfolder, start_date, end_date)`

Converts meteorology input files from the PEcAn standard (netCDF, CF metadata) to the format required by the model.
This function is optional if you load all of your met files into the Inputs table as described in [How to insert new Input data](../developers_guide/How-to-insert-new-Input-data.html), which is often the easiest way to get up and running quickly. However, this function is required if you want to benefit from PEcAn's meteorology workflows and model run cloning. You'll want to take a close look at [Adding-an-Input-Converter] to see the exact variable names and units that PEcAn will be providing. Also note that PEcAn splits all meteorology up into ANNUAL files, with the year number explicitly included in the file name, and thus what PEcAn will actually be providing is **in.path**, the input path to the folder where multiple met files may be stored, and **in.prefix**, the start of the filename that precedes the year (i.e. an individual file will be named `<in.prefix>.YEAR.nc`). It is valid for in.prefix to be blank. The additional REQUIRED arguments to met2model.MODEL are **outfolder**, the output folder where PEcAn wants you to write your meteorology, and **start_date** and **end_date**, the time range the user has asked the meteorology to be processed for.

#### Commit changes

Once the MODEL modules are written, you should follow the [Using-Git](Using-Git.md) instructions on how to commit your changes to your local git repository, verify that PEcAn compiles using *scripts/build.sh*, push these changes to GitHub, and submit a pull request so that your model module is added to the PEcAn system. It is important to note that while we encourage users to make their models open, adding the PEcAn interface module to the GitHub repository in no way requires that the model code itself be made public. It does, however, allow anyone who already has a copy of the model code to use PEcAn, so we strongly encourage that any new model modules be committed to GitHub.




## Adding input data {#NewInput}

### Input records in BETY

All model input data or data used for model calibration/validation must be registered in the BETY database.

Before creating a new Input record, you must make sure that the format type of your data is registered in the database. If you need to make a new format record, see [Creating a new format record in BETY](#NewFormat).

### Create a database file record for the input data

An input record contains all the metadata required to identify the data; however, this record does not include the location of the data file. Since the same data may be stored in multiple places, every file has its own dbfile record.

From your BETY interface:

* Create a DBFILES entry for the path to the file
    + From the menu click RUNS then FILES
    + Click “New File”
    + Select the machine your file is located at
    + Fill in the File Path where your file is located (aka folder or directory) NOT including the name of the file itself
    + Fill in the File Name with the name of the file itself. Note that some types of input records will refer to ALL the files in a directory, in which case File Name can be blank
    + Click Update

### Creating a new Input record in BETY

From your BETY interface:

* Create an INPUT entry for your data
    + From the menu click RUNS then INPUTS
    + Click “New Input”
    + Select the SITE that is associated with the input data set
    + Other required fields are a unique name for the input, the start and end dates of the data set, and the format of the data. If the data is not in a currently known format you will need to create a NEW FORMAT and possibly a new input converter. Instructions on how to add a converter can be found here: [Input conversion](#InputConversions). Instructions on how to add a format record can be found [here](#NewFormat)
    + Parent ID is an optional variable to indicate that one dataset was derived from another.
    + Click “Create”
* Associate the DBFILE with the INPUT
    + In the RUNS -> INPUTS table, search and find the input record you just created
    + Click on the EDIT icon
    + Select “View related Files”
    + In the Search window, search for the DBFILE you just created
        * Once you have found the DBFILE, click on the “+” icon to add the file
        * Click on “Update” at the bottom when you are done.

### Adding a new input converter {#InputConversions}

Three types of data conversions are discussed below: meteorological data, vegetation data, and soil data. Each section provides instructions on how to convert data from its raw format into a PEcAn standard format, whether it comes from a database or you have the raw data in hand.

Also, see [PEcAn standard formats].

#### Meteorological Data

##### Adding a function to PEcAn to convert a met data source

In general, you will need to write a function to download the raw met data and one to convert it to the PEcAn standard.

Functions for downloading raw data are named `download.<source>.R`. These functions are stored within the PEcAn directory: [`/modules/data.atmosphere/R`](https://github.com/PecanProject/pecan/tree/develop/modules/data.atmosphere/R).

Conversion functions from raw to standard are named `met2CF.<source>.R`. These functions are stored within the PEcAn directory: [`/modules/data.atmosphere/R`](https://github.com/PecanProject/pecan/tree/develop/modules/data.atmosphere/R).

Current meteorological products that are coupled to PEcAn can be found in our [Available Meteorological Drivers] page.

Note: Unless you are also adding a new model, you will not need to write a script to convert from the PEcAn standard to PEcAn models. Those conversion scripts are written when a model is added and can be found within each model's PEcAn directory.

*Standard dimensions, names, and units can be found here:* [Input Standards]

##### Adding Single-Site Specific Meteorological Data

Perhaps you have meteorological data specific to one site, with a unique format that you would like to add to PEcAn. Your steps would be to:

 1. write a script or function to convert your files into the netCDF PEcAn standard
 2. insert that file as an input record for your site following these [instructions](#NewInput)

##### Processing Met data outside of the workflow using PEcAn functions

Perhaps you would like to obtain data from one of the sources coupled to PEcAn on its own. To do so you can run PEcAn functions on their own.

###### Example 1: Processing data from a database

Download AmerifluxLBL data from Niwot Ridge for the year 2004:

```
raw.file <- PEcAn.data.atmosphere::download.AmerifluxLBL(sitename = "US-NR1",
                                                         outfolder = ".",
                                                         start_date = "2004-01-01",
                                                         end_date = "2004-12-31")
```

Using the information returned as the object `raw.file` you will then convert the raw files into a standard file.

Open a connection with BETY. You may need to change the host name depending on what machine is hosting BETY. You can find the hostname listed in the machines table of BETY.
```
bety <- dplyr::src_postgres(dbname   = 'bety',
                            host     = 'localhost',
                            user     = 'bety',
                            password = 'bety')

con <- bety$con
```

Next you will set up the arguments for the function

```
in.path <- '.'
in.prefix <- raw.file$dbfile.name
outfolder <- '.'
format.id <- 5000000002
format <- PEcAn.DB::query.format.vars(format.id = format.id, bety = bety)
lon <- -105.54
lat <- 40.03
format$time_zone <- "America/Chicago"
```
Note: The format.id can be pulled from the BETY database if you know the format of the raw data.

Once these arguments are defined you can execute the `met2CF.csv` function

```
PEcAn.data.atmosphere::met2CF.csv(in.path = in.path,
                                  in.prefix = in.prefix,
                                  outfolder = outfolder,
                                  start_date = "2004-01-01",
                                  end_date = "2004-12-31",
                                  lat = lat,
                                  lon = lon,
                                  format = format)
```



###### Example 2: Processing data from data already in hand

If you have met data already in hand and you would like to convert it into the PEcAn standard, follow these instructions.

Update BETY with a file record, format record, and input record according to this page: [How to Insert new Input Data](#NewInput)

If your data is in a csv format you can use the `met2CF.csv` function to convert your data into a PEcAn standard file.

Open a connection with BETY. You may need to change the host name depending on what machine is hosting BETY. You can find the hostname listed in the machines table of BETY.

```
bety <- dplyr::src_postgres(dbname   = 'bety',
                            host     = 'localhost',
                            user     = 'bety',
                            password = 'bety')

con <- bety$con
```

Prepare the arguments you need to execute the met2CF.csv function

```
in.path <- 'path/where/the/raw/file/lives'
in.prefix <- 'prefix_of_the_raw_file'
outfolder <- 'path/to/where/you/want/to/output/thecsv/'
format.id <- 1000000000            # placeholder: the id of the format record you created
format <- PEcAn.DB::query.format.vars(format.id = format.id, bety = bety)
lon <- -90.0                       # placeholder: longitude of your site
lat <- 45.0                        # placeholder: latitude of your site
format$time_zone <- "America/Chicago"  # placeholder: time zone of your site
start_date <- "2004-01-01"         # placeholder: start date of your data, "y-m-d"
end_date <- "2004-12-31"           # placeholder: end date of your data, "y-m-d"
```

Next you can execute the function:
```
PEcAn.data.atmosphere::met2CF.csv(in.path = in.path,
                                  in.prefix = in.prefix,
                                  outfolder = outfolder,
                                  start_date = start_date,
                                  end_date = end_date,
                                  lat = lat,
                                  lon = lon,
                                  format = format)
```


#### Vegetation Data

Vegetation data will be required to parameterize your model. In these examples we will go over how to produce a standard initial condition file.

The main function to process cohort data is `ic.process.R`. As of now, however, if you require pool data you will run a separate function, `pool_ic_list2netcdf.R`.

###### Example 1: Processing Veg data from data in hand

In the following example we will process vegetation data that you have in hand using PEcAn.

First, you'll need to create an input record in BETY with a file record and format record reflecting the location and format of your file. Instructions can be found in our [How to Insert new Input Data](#NewInput) page.

Once you have created an input record you must take note of the input id of your record. An easy way to take note of this is in the URL of the BETY webpage that shows your input record. In this example we use an input record with the id `1000013064` which can be found at this url: https://psql-pecan.bu.edu/bety/inputs/1000013064# . Note that this is the Boston University BETY database.
If you are on a different machine, your URL will be different.

With the input id in hand you can now edit a pecan XML so that the PEcAn function `ic.process` will know where to look in order to process your data. The `inputs` section of your pecan XML will look like this. As of now, ic.process is set up to work with the ED2 model, so we will use ED2 settings and then grab the intermediary Rds data file that is created as the standard PEcAn file. For your inputs section you will need to enter your input id wherever you see the `useic` flag.
```
<inputs>
  <css>
    <source>FFT</source>
    <output>css</output>
    <username>pecan</username>
    <id>1000013064</id>
    <useic>TRUE</useic>
    <metadata>
      <trk>1</trk>
      <age>70</age>
      <area>400</area>
    </metadata>
  </css>
  <pss>
    <source>FFT</source>
    <output>pss</output>
    <username>pecan</username>
    <id>1000013064</id>
    <useic>TRUE</useic>
  </pss>
  <site>
    <source>FFT</source>
    <output>site</output>
    <username>pecan</username>
    <id>1000013064</id>
    <useic>TRUE</useic>
  </site>
  <met>
    <source>CRUNCEP</source>
    <output>ED2</output>
  </met>
  <lu>
    <id>294</id>
  </lu>
  <soil>
    <id>297</id>
  </soil>
  <thsum>
    <id>295</id>
  </thsum>
  <veg>
    <id>296</id>
  </veg>
</inputs>
```

This IC workflow also supports generating ensembles of initial conditions from posterior estimates of DBH. To do this, the tags below can be inserted into the pecan.xml:
```
<css>
  <source>PalEON</source>
  <output>css</output>
  <id>1000015682</id>
  <useic>TRUE</useic>
  <ensemble>20</ensemble>
  <metadata>
    <area>1256.637</area>
    <n.patch>3</n.patch>
  </metadata>
</css>
```
Here the `id` should point to a file that has MCMC samples to generate the ensemble from. The number between the `<ensemble>` tags defines the number of ensembles requested. The workflow will populate the settings list `run$inputs` tag with ensemble member information. E.g.:
```
<inputs>
  <css>
    <path>
      <path1>...</path1>
      <path2>...</path2>
      <path3>...</path3>
      <path4>...</path4>
      <path5>...</path5>
    </path>
  </css>
  ...
</inputs>
```

Once you edit your pecan.xml you can then create a settings object using PEcAn functions. Your `pecan.xml` must be in your working directory.

```
settings <- PEcAn.settings::read.settings("pecan.xml")
settings <- PEcAn.settings::prepare.settings(settings, force = FALSE)
```
You can then execute the `ic.process` function to convert data into a standard Rds file:

```
input <- settings$run$inputs
dir <- "."
ic.process(settings, input, dir, overwrite = FALSE)
```

Note that the argument `dir` is set to the current directory. You will find the final ED2 file there. More importantly, though, you will find the `.Rds` file within the same directory.




###### Example 3: Pool Initial Condition files

If you have pool vegetation data, you'll need the [`pool_ic_list2netcdf.R`](https://github.com/PecanProject/pecan/blob/develop/modules/data.land/R/pool_ic_list2netcdf.R) function to convert the pool data into the PEcAn standard.

The function stands alone and requires that you provide a named list of netCDF dimensions and values, and a named list of variables and values. Names and units need to match the standard_vars.csv table found [here](https://github.com/PecanProject/pecan/blob/develop/base/utils/data/standard_vars.csv).

```
# Create a list object with the necessary dimensions for your site
input <- list()
dims <- list(lat = 45, lon = -115, time = 1)
variables <- list(SoilResp = 8, TotLivBiom = 295)
input$dims <- dims
input$vals <- variables
```

Once this is done, set `outdir` to where you'd like the file to be written, and set a siteid. Siteid here is just used as a file name identifier. Once part of the automated workflow, siteid will reflect the site id within the BETY db.

```
outdir <- "."
siteid <- 772
pool_ic_list2netcdf(input = input, outdir = outdir, siteid = siteid)
```

You should now have a netCDF file with initial conditions.

#### Soil Data

###### Example 1: Converting Data in hand

Local data that has the correct names and units can easily be written out in PEcAn standard using the function soil2netcdf.
```
soil.data <- list(volume_fraction_of_sand_in_soil = c(0.3, 0.4, 0.5),
                  volume_fraction_of_clay_in_soil = c(0.3, 0.3, 0.3),
                  soil_depth = c(0.2, 0.5, 1.0))

soil2netcdf(soil.data, "soil.nc")
```

At the moment this file would need to be inserted into Inputs manually. By default, this function also calls soil_params, which will estimate a number of hydraulic and thermal parameters from texture. Be aware that at the moment not all model couplers are yet set up to read this file and/or convert it to model-specific formats.


###### Example 2: Converting PalEON data

In addition to location-specific soil data, PEcAn can extract soil texture information from the PalEON regional soil product, which itself is a subset of the MsTMIP Unified North American Soil Map. If this product is installed on your machine, the appropriate step in the do_conversions workflow is enabled by adding the following tag under `<inputs>` in your pecan.xml:

```xml
<soil>
  <id>1000012896</id>
</soil>
```

In the future we aim to extend this extraction to a wider range of soil products.


###### Example 3: Extracting soil properties from the gSSURGO database

In addition to location-specific soil data, PEcAn can extract soil texture information from the gSSURGO data product. This product needs no installation, and it extracts soil properties for the lower 48 states in the U.S. In order to let PEcAn know that you're planning to use gSSURGO, you can add the following XML tag under `<inputs>` in your pecan.xml file:

```xml
<inputs>
  <soil>
    <source>gSSURGO</source>
  </soil>
</inputs>
```




## PEcAn Data Ingest via Web Interface {#adding-data-web}

This tutorial explains the process of ingesting data into PEcAn via our Data-Ingest Application. In order to ingest data, users must first select data that they wish to upload. Then, they enter metadata to help PEcAn parse and load the data into the main PEcAn workflow.

### Loading Data

#### Selecting an Ingest Method

The Data-Ingest application is capable of loading data from the DataONE data federation and from the user's local machine. The first step in the workflow is therefore to select an upload method. The application defaults to uploading from DataONE. To upload data from a local device, simply select the radio button titled `Local Files`.

#### DataONE Upload Example
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/D1Ingest-1.gif")
```
The DataONE download feature allows the user to download data for a given DOI or DataONE-specific package id. To do so, enter the DOI or identifier in the `Import From DataONE` field and select `download`. The download process may take a couple of minutes to run depending on the number of files in the DataONE package. This may be a convenient option if the user does not wish to download files directly to their local machine. Once the files have been successfully downloaded from DataONE, they are displayed in a table. Before proceeding to the next step, the user can select a file to ingest by clicking on the corresponding row in the data table.
### Local Upload Example
To upload local files, the user should first select the `Local Files` button. From there, the user can upload files from their local machine by selecting `Browse` or by dragging and dropping files into the text box. The files will begin uploading automatically. From there, the user should select a file to ingest and then select the `Next Step` button.
After this step, the workflow is identical for both methods. However, if it becomes necessary to switch from loading data via `DataONE` to uploading local files after the first step, please restart the application.
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/Local_loader_sm.gif")
```
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/local_browse.gif")
```

### 2. Creating an Input Record

Creating an input record requires some basic metadata about the file that is being ingested. Each entry field is briefly explained below.
- Site: To link the selected file with a site, the user can scroll or type to search all the sites in PEcAn. See example:
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/Selectize_Input_sm.gif")
```
- Parent: To link the selected file with another dataset, type to search existing datasets in the `Parent` field.

- Name: This field should be autofilled by selecting a file in step 1.

- Format: If the selected file has an existing format name, the user can search for and select it in the `Format` field. If the selected file's format is not already in PEcAn, the user can create a new format by selecting `Create New Format`. Once this new format is created, it will automatically populate the `Format` box and the `Current Mimetype` box (see Section 3).

- Mimetype: If the format already exists, select an existing mimetype.

- Start and End Date and Time: Inputs can be entered manually or by using the user interface. See the example below:
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/DateTime.gif")
```

- Notes: Describe the data that is being uploaded. Please include any citations or references.

### 3. Creating a format record

If it is necessary to add a new format to PEcAn, the user should fill out the form attached to the `Create New Format` button. The inputs to this form are described below:

- Mimetype: Type to search existing mimetypes. If the mimetype is not in that list, please click on the link `Create New Mimetype` and create a new mimetype via the BETY website.

- New Format Name: Add the name of the new format. Please exclude spaces from the name; instead, please use underscores "_".

- Header: If there is space before the first line of data in the dataset, please select `Yes`.

- Skip: The number of lines in the header that should be skipped before the data.

- Notes: Please enter notes that describe the format.

Example:
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/new_format_record.gif")
```

### 4. Formats_Variables Record

The final step in the ingest process is to register a formats-variables record. This record links PEcAn variables with variables from the selected data.

- Variable: The PEcAn variable that is equivalent to the variable in the selected file.

- Name: The variable name in the imported data; this need only be specified if it differs from the BETY variable name.

- Unit: Should be in a format parseable by the udunits library and need only be specified if the units of the data in the file differ from the BETY standard.

- Storage Type: Need only be specified if the variable is stored in a format other than would be expected (e.g. if numeric values are stored as quoted character strings). Additionally, storage_type stores POSIX codes that are used to store any time variables (e.g. a column with a 4-digit year would be `%Y`).

- Column Number: Vector of integers listing the column numbers associated with the variables in a dataset. Required for text files that lack headers.
```{r, echo=FALSE, out.height="50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/D1Ingest-9_sm.gif")
```

Finally, the path to the ingested data is displayed in the `Select Files` box.


## Creating a new format {#NewFormat}

### Formats in BETY

The PEcAn database keeps track of all the input files passed to models, as well as any data used in model validation or data assimilation. Before we start to register these files with PEcAn we need to define the format these files will be in.

The main goal is to take all the metadata we have about a data file and create a record of it that PEcAn can use as a guide when parsing the data file.

This information is stored in a Format record in the BETY database. Make sure to read through the current Formats before deciding to make a new one.

### Creating a new format in BETY

If the Format you are looking for is not available, you will need to create a new record. Before entering information into the database, you need to be able to answer the following questions about your data:

- What is the file MIME type?
  - We have a suite of functions for loading in data in open formats such as CSV, txt, netCDF, etc.
  - PEcAn has partnered with the [NCSA BrownDog project](http://browndog.ncsa.illinois.edu/) to create a service that can read and convert as many data formats as possible. If your file type is less common or a proprietary type, you can use the [BrownDog DAP](http://dap.ncsa.illinois.edu/) to convert it to a format that can be used with PEcAn.
  - If BrownDog cannot convert your data, you will need to contact us about writing a data-specific load function.

- What variables does the file contain?
  - What are the variables named?
  - What are the variable units?
  - How do the variable names and units in the data map to PEcAn variables in the BETY database? See the Name and Unit section below for an example. It is most likely that you will NOT need to add variables to BETY. However, identifying the appropriate variable matches in the database may require some work. We are always available to help answer your questions.

- Is there a timestamp on the data?
  - What are the units of time?


Here is an example using a fake dataset:

![example_data](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/example_data.png)



This data started out as an Excel document, but was saved as a CSV file.

To create a Formats record for this data, in the web interface of BETY, select Runs > Formats and click _New Format_.

You will need to fill out the following fields:

- MIME type: File type (you can search for other formats in the text field)
- Name: The name of your format (this can be whatever you want)
- Header: Boolean that denotes whether or not your data contains a header as the first line of the data. (1 = TRUE, 0 = FALSE)
- Skip: The number of lines above the data that should be skipped. For example, metadata that should not be included when reading in the data, or blank spaces.
- Notes: Any additional information about the data, such as sources and citations.

Here is the Formats record for the example data:

![format_record_1](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/format_record_1.png)

When you have finished this section, hit Create. The final record will be displayed on the screen.
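To make the Header and Skip fields concrete, here is a rough R analogue of how they map onto reading a file, assuming a hypothetical record with Skip = 2 and Header = 1 (this is only an illustration with a made-up file name, not the code PEcAn runs internally):

```R
# skip the 2 metadata lines above the data, then read the next line as the header
dat <- read.csv("example_data.csv", skip = 2, header = TRUE,
                na.strings = c("-9999", "NA"))
```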
#### Formats -> Variables

After a Format entry has been created, you are encouraged to edit the entry to add relationships between the file's variables and the Variables table in PEcAn. Not only do these relationships provide metadata describing the file format, but they also allow PEcAn to search and (for some MIME types) read files.

To enter this data, select Edit Record and on the edit screen select View Related Variable.

Here is the record for the example data after adding related variables:

![format_record_2](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/format_record_2.png)

##### Name and Unit

For each variable in the file you will want, at a minimum, to specify the NAME of the variable within your file and match that to the equivalent Variable in the pulldown.

Make sure to search for your variables under Data > Variables before suggesting that we create a new variable record. This may not always be a straightforward process.

For example, BETY contains a record for Net Primary Productivity:

![var_record](04_advanced_user_guide/images/var_record.png)

This record does not have the same variable name or the same units as NPP in the example data.
You may have to do some reading to confirm that they are the same variable.
In this case:

- Both the data and the record are for Net Primary Productivity (the notes section provides additional resources for interpreting the variable.)
- The units of the data can be converted to those of the variable record (this can be checked by running `udunits2::ud.are.convertible("g C m-2 yr-1", "Mg C ha-1 yr-1")`)

Differences between the data and the variable record can be accounted for in the data's Formats record:

- Under Variable, select the variable as it is recorded in BETY.
- Under Name, write the name the variable has in your data file.
- Under Unit, write the units the variable has in your data file.

NOTE: All units must be written in a udunits-compliant format. To check that your units can be read by udunits, in R, load the udunits2 package and run `udunits2::is.parseable("g C m-2 yr-1")`.

**If the name or the units are the same**, you can leave the Name and Unit fields blank. This can be seen with the variable LAI.

##### Storage Type

_Storage Type_ only needs to be specified if the variable is stored in a format other than what would be expected (e.g. if numeric values are stored as quoted character strings).

One such example is *time variables*.

PEcAn converts all dates into POSIX format using R functions such as `strptime`. These functions require that the user specify the format in which the date is written.

The default is `"%Y-%m-%d %H:%M:%S"`, which would look like `"2017-01-01 00:00:00"`.

A list of date formats can be found in the [R documentation for the function `strptime`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/strptime.html)

Below are some commonly used codes:

| Code | Meaning |
| ---- | ---------------------------------------- |
| %d | Day of the month as decimal number (01–31). |
| %D | Date format such as %m/%d/%y. |
| %H | Hours as decimal number (00–23). |
| %m | Month as decimal number (01–12). |
| %M | Minute as decimal number (00–59). |
| %S | Second as integer (00–61), allowing for up to two leap-seconds (but POSIX-compliant implementations will ignore leap seconds). |
| %T | Equivalent to %H:%M:%S. |
| %y | Year without century (00–99). On input, values 00 to 68 are prefixed by 20 and 69 to 99 by 19 – that is the behaviour specified by the 2004 and 2008 POSIX standards, but they do also say ‘it is expected that in a future version the default century inferred from a 2-digit year will change’. |
+| %Y | Year with century. |
+
+
+##### Column Number
+
+If your data is in text format with variables in a standard order, then you can specify the Column Number for the variable. This is required for text files that lack headers.
+
+#### Retrieving Format Information
+
+To acquire Format information from a Format record, use the R function `query.format.vars`.
+
+##### Inputs
+
+- `bety`: connection to BETY
+- `input.id=NA` and/or `format.id=NA`: Input or Format record ID from BETY
+  - At least one must be specified. Defaults to `format.id` if both provided.
+- `var.ids=NA`: optional vector of variable IDs. If provided, limits results to these variables.
+
+##### Output
+
+- R list object containing many things. Fill this in.
+
+## Creating a new benchmark reference run {#NewBenchmark}
+
+The purpose of the reference run record in BETY is to store all the settings from a run that are necessary to exactly recreate it.
+
+The pecan.xml file is the home of absolutely all the settings for a particular run in PEcAn. However, much of the information in the pecan.xml file is server- and user-specific and, more importantly, the pecan.xml files are stored on individual servers and may not be available to the public.
+
+When a run that is performed using PEcAn is registered as a reference run, the settings that were used to make that run are made available to all users through the database.
+
+Completed runs are not automatically registered as reference runs. To register a run, navigate to the benchmarking section of the workflow visualizations Shiny app.
+
+## Editing records {#editing-records}
+
+- Models
+- Species
+- PFTs
+- Traits
+- Inputs
+- DB files
+- Variables
+- Formats
+- (Link each section to relevant BETY tables)
+
+
+
+# Troubleshooting and Debugging PEcAn
+
+## Cookies and pecan web pages
+
+You may need to disable cookies specifically for the PEcAn web server in your browser. This shouldn't be a problem running from the virtual machine, but your installation of php can include a 'PHPSESSID' that is quite long, and this can overflow the params field of the workflows table, depending on how long your hostname, model name, site name, etc. are.
+
+## `Warning: mkdir() [function.mkdir]: No such file or directory`
+
+If you are seeing: `Warning: mkdir() [function.mkdir]: No such file or directory in /path/to/pecan/web/runpecan.php at line 169` it is because you have used a relative path for \$output_folder in system.php.
+
+## After creating a new PFT the tag for PFT not passed to config.xml in ED
+
+This is a result of the rather clunky way we currently add PFTs to PEcAn. This is happening because you need to edit the ./pecan/models/ed/data/pftmapping.csv file to include your new PFTs.
+
+This is what the file looks like:
+
+```
+PEcAn;ED
+ebifarm.acru;11
+ebifarm.acsa3;11
+...
+```
+
+You just need to edit this file (in a text editor, not Excel) and add your PFT names and associated number to the end of the file. Once you do this, recompile PEcAn and it should then work for you. We currently need to reference this file in order to properly set the PFT number and maintain internal consistency between PEcAn and ED2.
+
+## Debugging
+
+This section describes how to identify the source of a problem.
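+
+As a generic starting point, the standard R debugging tools are often enough to locate where a workflow fails; the function in the sketch below is only an example, so substitute whichever function you suspect:
+
+```r
+# after an error, print the call stack to see where it happened
+traceback()
+
+# drop into an interactive browser at the moment an error is raised
+options(error = recover)
+
+# step through a single call of a suspect function (example function)
+debugonce(PEcAn.settings::read.settings)
+```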
+
+
+### Using `tests/workflow.R`
+
+This script, along with model-specific settings files in the `tests` folder, provides a working example. From inside the tests folder, `R CMD --vanilla -- --settings pecan.<model>.xml < workflow.R` should work.
+
+The next step is to add `debugonce(<function name>)` before running the test workflow.
+
+This allows you to step through the function and evaluate the different objects as they are created and/or transformed.
+
+See [tests README](https://github.com/PecanProject/pecan/blob/master/tests/README.md) for more information.
+
+
+
+## Useful scripts
+
+The following scripts (in `qaqc/vignettes`) identify, respectively:
+
+1. [relationships among functions across packages](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/function_relationships.Rmd)
+2. [function inputs and outputs](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/module_output.Rmd) (e.g., identifying which functions and outputs are used in a workflow).
+
+
+
+
+
+# BETY Database Administration {#database}
+
+This section provides additional details about the BETY database used by PEcAn. It will discuss best practices for setting up the BETY database, how to back up the database, and how to restore it.
+
+## Best practices {#database-setup}
+
+When using the BETY database in non-testing mode, it is best not to use the default users. This is accomplished when initializing the database. When the database is initially created, it will contain some default users (the best known is the carya user) as well as the guestuser that can be used in the BETY web application. To disable these users you will either need to disable the users from the web interface, or you can reinitialize the database and remove the `-u` flag from the command line (the `-u` flag will create the default users). To disable the guestuser as well, you can remove the `-g` flag from the command line, or disable the account from BETY.
+
+The default installation of BETY and PEcAn will assume there is a database called bety with a default username and password. The default installation will set up the database account to not have any superuser abilities. It is also best to limit access to the postgres database to trusted hosts, either by using firewalls, or by configuring postgresql to only accept connections from a limited set of hosts.
+
+## Backup of BETY database
+
+It is good practice to make sure you back up the BETY database. Just creating a copy of the files on disk is not enough to ensure you have a valid backup. Most likely if you do this you will end up with a corrupted backup of the database.
+
+To back up the database you can use the `pg_dump` command, which will make sure the database is backed up in a consistent state. You can run `sudo -u postgres pg_dump -d bety -Z 9 -f bety.sql.gz`; this will create a compressed file that can be used to restore the database.
+
+In the `scripts` folder of the PEcAn distribution there is a script called `backup.bety.sh`. This script will create the backup of the database. It will create multiple backups, allowing you to restore the database from one of these copies. The database will be backed up to one of the following files:
+ - bety-d-X, daily backup, where X is the day of the month.
+ - bety-w-X, weekly backup, where X is the week number in the year.
+ - bety-m-X, monthly backup, where X is the month of the year.
+ - bety-y-X, yearly backup, where X is the actual year.
+Using this scheme, we can restore the database using any of the files generated.
+
+It is recommended to run this script using a cronjob at midnight, such that you have a daily backup of the database and do not have to remember to create these backups. When running this script (either from cron or by hand), make sure to place the backups on a different machine than the machine that holds the database, in case of a larger system failure.
+
+## Restore of BETY database
+
+Hopefully this section will never need to be used. Following are the steps that have been used to restore the database. Before you start, it is worth reading up online a bit on restoring the database, as well as joining the Slack channel and asking any of the people there for help.
+
+1. stop apache (BETY/PEcAn web apps) `service httpd stop` or `service apache2 stop`
+2. backup database (just in case) `pg_dump -d bety > baddb.sql`
+3. drop database `sudo -u postgres psql -c 'drop database bety'`
+4. create database `sudo -u postgres psql -c 'create database bety with owner bety'`
+5. load database (assuming dump is called bety.sql.gz) `zcat bety.sql.gz | grep -v search_path | sudo -u postgres psql -d bety`
+6. start apache again `service httpd start` or `service apache2 start`
+
+If during step 5 there are a lot of errors, it is helpful to add `-v ON_ERROR_STOP=1` to the end of the command. This will stop the restore at the first error and will help with debugging the issue.
+
+
+
+# Workflow modules
+
+NOTE: As of PEcAn 1.2.6 -- needs to be updated significantly
+
+
+## Overview
+
+Workflow inputs and outputs (click to open in new page, then zoom). Code used to generate this image is provided in [qaqc/vignettes/module_output.Rmd](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/module_output.Rmd)
+
+[![PEcAn Workflow](http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg)](http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg)
+
+## Load Settings
+### `read.settings("/home/pecan/pecan.xml")`
+
+* loads settings
+* creates directories
+* generates a new xml and puts it in the output folder
+
+## Query Database
+### `get.trait.data()`
+
+Queries the database for both the trait data and prior distributions associated with the PFTs specified in the settings file. The list of variables that are queried is determined by what variables have priors associated with them in the definition of the PFT. Likewise, the list of species that are associated with a PFT determines what subset of data is extracted out of all data matching a given variable name.
+
+## Meta Analysis
+### `run.meta.analysis()`
+
+The meta-analysis code begins by distilling the trait.data to just the values needed for the meta-analysis statistical model, with this being stored in `madata.Rdata`. This reduced form includes the conversion of all error statistics into precision (1/variance), and the indexing of sites, treatments, and greenhouse. In reality, the core meta-analysis code can be run independently of the trait database as long as input data is correctly formatted into the form shown in `madata`.
+
+The evaluation of the meta-analysis is done using a Bayesian statistical software package called JAGS that is called by the R code. For each trait, the R code will generate a [trait].model.bug file that is the JAGS code for the meta-analysis itself. This code is generated on the fly, with PEcAn adding or subtracting the site, treatment, and greenhouse terms depending upon the presence of these effects in the data itself.
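+
+For intuition, a stripped-down sketch of the flavor of model that ends up in `[trait].model.bug` is shown below; this is illustrative only (the variable names here are hypothetical), and the generated file adds or drops the site, treatment, and greenhouse terms as described above:
+
+```r
+# illustrative JAGS meta-analysis model with a site random effect,
+# written as an R string; not the code PEcAn actually generates
+ma.model <- "
+model {
+  for (i in 1:n) {
+    Y[i] ~ dnorm(theta[site[i]], obs.prec[i])  # trait means, known precision
+  }
+  for (s in 1:nsite) {
+    theta[s] ~ dnorm(beta.o, tau.site)         # site-level random effect
+  }
+  beta.o   ~ dnorm(0, 0.001)                   # global trait mean (vague prior)
+  tau.site ~ dgamma(0.1, 0.1)                  # across-site precision
+}"
+```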
+
+Meta-analyses are run, and summary plots are produced.
+
+
+## Write Configuration Files
+### `write.configs(model)`
+
+* writes out a configuration file for each model run
+    * writes 500 configuration files for a 500-member ensemble
+    * for _n_ traits, writes `6 * n + 1` files for running the default Sensitivity Analysis (this number can be changed in the PEcAn settings file)
+
+## Start Runs
+### `start.runs(model)`
+
+This code starts the model runs using a model-specific run function named start.runs.[model]. If the ecosystem model is running on a remote server, this module also takes care of all of the communication with the remote server and its run queue. Each of your subdirectories should now have a [run.id].out file in it. One instance of the model is run for each configuration file generated by the previous write configs module.
+
+## Get Model Output
+### `get.model.output(model)`
+
+This code first uses a model-specific model2netcdf.[model] function to convert the model output into a standard output format ([MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml)). Then it extracts the data for requested variables specified in the settings file as `settings$ensemble$variable`, averages over the time-period specified as `start.date` and `end.date`, and stores the output in a file `output.Rdata`. The `output.Rdata` file contains two objects, `sensitivity.output` and `ensemble.output`, which are the model predictions for the parameter sets specified in `sa.samples` and `ensemble.samples`. In order to save bandwidth, if the model output is stored on a remote system PEcAn will perform these operations on the remote host and only return the `output.Rdata` object.
+
+## Ensemble Analysis
+### `run.ensemble.analysis()`
+
+This module makes some simple graphs of the ensemble output. Open ensemble.analysis.pdf to view the ensemble prediction as both a histogram and a boxplot. ensemble.ts.pdf provides a timeseries plot of the ensemble mean, median, and 95% CI.
+
+## Sensitivity Analysis, Variance Decomposition
+### `run.sensitivity.analysis()`
+
+This function processes the output of the previous module into sensitivity analysis plots, `sensitivityanalysis.pdf`, and a variance decomposition plot, `variancedecomposition.pdf`. In the sensitivity plots you will see the parameter values on the x-axis and the model output on the y-axis, with the dots being the model evaluations and the line being the spline fit.
+
+
+The variance decomposition plot is discussed more below. For your reference, the R list object, sensitivity.results, stored in sensitivity.results.Rdata, contains all the components of the variance decomposition table, as well as the input parameter space and splines from the sensitivity analysis (reminder: the output parameter space from the sensitivity analysis was in outputs.R).
+
+The variance decomposition plot contains three columns: the coefficient of variation (normalized posterior variance), the elasticity (normalized sensitivity), and the partial standard deviation of each model parameter. This graph is sorted by the variable explaining the largest amount of variability in the model output (right-hand column). From this graph, identify the top-tier parameters that you would target for future constraint.
+
+## Glossary
+
+* Inputs: data sets that are used, and file paths leading to them
+* Parameters: e.g. info set in settings file
+* Outputs: data sets that are dropped, and the file paths leading to them
+
+
+
+# Installation details
+
+This chapter contains details about installing and maintaining the uncontainerized version of PEcAn on a virtual machine or a server. If you are running PEcAn inside of Docker, many of the particulars will be different and you should refer to the [docker](#docker-index) chapter instead of this one.
+
+
+
+## PEcAn Virtual Machine {#pecanvm}
+
+See also other VM related documentation sections:
+
+* [Maintaining your PEcAn VM](#maintain-vm)
+* [Connecting to the VM via SSH](#ssh-vm)
+* [Connecting to bety on the VM via SSH](#ssh-vm-bety)
+* [Using Amazon Web Services for a VM (AWS)](#awsvm)
+* [Creating a Virtual Machine](#createvm)
+* [VM Desktop Conversion](#vm-dektop-conversion)
+* [Install RStudio Desktop](#install-rstudio)
+
+
+The PEcAn virtual machine consists of all of PEcAn pre-compiled within a Linux operating system and saved in a "virtual machine" (VM). Virtual machines allow for running consistent set-ups without worrying about differences between operating systems, library dependencies, compiling the code, etc.
+
+1. **Install VirtualBox** This is the software that runs the virtual machine. You can find the download link and instructions at [http://www.virtualbox.org](http://www.virtualbox.org). *NOTE: On Windows you may see a warning about Logo testing; it is okay to ignore the warning.*
+
+2. **Download the PEcAn VM** You can find the download link at [http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN](http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN), under the "**Files**" header. Click the ".ova" file to begin the download. Note that the file is ~7 GB, so this download can take several minutes to hours depending on your connection speed. Also, the VM requires >4 GB of RAM to operate correctly. Please check the current usage of RAM and shut down processes as needed.
+
+3. **Import the VM** Once the download is complete, open VirtualBox. In the VirtualBox menus, go to "File" → "Import Appliance" and locate the downloaded ".ova" file.
+
+
+For VirtualBox version 5.x: In the Appliance Import Settings, make sure you select "Reinitialize the MAC address of all network cards" (picture below). This is not selected by default and can result in networking issues since multiple machines might claim to have the same network MAC Address.
+
+```{r, echo=FALSE, fig.align='center'}
+knitr::include_graphics("figures/pic1.jpg")
+```
+
+For VirtualBox versions starting with 6.0, there is a slightly different interface (see figure). Select "Generate new MAC addresses for all network adapters" from the MAC Address Policy:
+
+```{r, echo=FALSE, fig.align='center'}
+knitr::include_graphics("figures/pic1v2.png")
+```
+
+NOTE: If you experience network connection difficulties in the VM with this enabled, try re-importing the VM without this setting selected.
+
+Finally, click "Import" to build the Virtual Machine from its image.
+
+4. **Launch PEcAn** Double click the icon for the PEcAn VM. A terminal window will pop up showing the machine booting up, which may take a minute. It is done booting when you get to the `pecan login:` prompt. You do not need to log in, as the VM behaves like a server that we will be accessing through your web browser. Feel free to minimize the VM window.
+
+* If you _do_ want to login to the VM, the credentials are as follows: `username: carya`, `password: illinois` (after the pecan tree, [Carya illinoinensis][pecan-wikipedia]).
+
+5. **Open the PEcAn web interface** With the VM running in the background, open any web browser on the same machine and navigate to `localhost:6480/pecan/` to start the PEcAn workflow. (NOTE: The trailing slash may be necessary depending on your browser.)
+
+* To ssh into the VM, open up a terminal on your machine and execute `ssh -l carya -p 6422 localhost`. Username and password are the same as when you log into the machine.
+
+
+
+
+## AWS Setup
+
+***********Mirror of earlier section in installation section?*********************
+
+The following are Mike's rough notes from a first attempt to port the PEcAn VM to AWS. This was done on a Mac.
+
+These notes are based on following the instructions [here](http://www.rittmanmead.com/2014/09/obiee-sampleapp-in-the-cloud-importing-virtualbox-machines-to-aws-ec2/).
+
+
+### Convert PEcAn VM
+
+AWS allows upload of files as VMDK, but the default PEcAn VM is in OVA format.
+
+1. If you haven't done so already, download the [PEcAn VM](http://isda.ncsa.illinois.edu/download/index.php?project=PEcAn&sort=category)
+
+2. Split the OVA file into OVF and VMDK files
+
+```
+tar xf <name_of_file>.ova
+```
+
+### Set up an account on [AWS](http://aws.amazon.com/)
+
+After you have an account you need to set up a user and save your [access key and secret key](http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html)
+
+In my case I created a user named 'carya'
+
+Note: the key that ended up working had to be made at [https://console.aws.amazon.com/iam/home#security_credential](https://console.aws.amazon.com/iam/home#security_credential), not the link above.
+
+### Install [EC2 command line tools](http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html)
+
+```
+wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
+
+sudo mkdir /usr/local/ec2
+
+sudo unzip ec2-api-tools.zip -d /usr/local/ec2
+```
+
+If need be, download and install [JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html)
+
+```
+export JAVA_HOME=$(/usr/libexec/java_home)
+
+export EC2_HOME=/usr/local/ec2/ec2-api-tools-<version>
+
+export PATH=$PATH:$EC2_HOME/bin
+```
+
+
+Then set your user credentials as environment variables:
+
+`export AWS_ACCESS_KEY=xxxxxxxxxxxxxx`
+
+`export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxx`
+
+Note: you may want to add all the variables set in the EXPORT commands above into your .bashrc or equivalent.
+
+### Create an AWS S3 'bucket' to upload VM to
+
+Go to [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3) and click "Create Bucket"
+
+In my case I named the bucket 'pecan'
+
+
+### Upload
+
+In the code below, make sure to change the PEcAn version, the name of the bucket, and the name of the region. Make sure that the PEcAn version matches the one you downloaded.
+
+Also, you may want to choose a considerably larger instance type. The one chosen below is that corresponding to the AWS Free Tier.
+
+```
+ec2-import-instance PEcAn_1.2.6-disk1.vmdk --instance-type t2.micro --format VMDK --architecture x86_64 --platform Linux --bucket pecan --region us-east-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY
+```
+
+Make sure to note the ID of the image since you'll need it to check the VM status.
+Once the image is uploaded it will take a while (typically about an hour) for Amazon to convert the image to one it can run. You can check on this progress by running
+
+```
+ec2-describe-conversion-tasks
+```
+
+### Configuring the VM
+
+On the EC2 management webpage, [https://console.aws.amazon.com/ec2](https://console.aws.amazon.com/ec2), if you select **Instances** on the left hand side (LHS) you should be able to see your new PEcAn image as an option under Launch Instance.
+
+Before launching, you will want to update the firewall to open up additional ports that PEcAn needs -- specifically port 80 for the webpage. Port 22 (ssh/sftp) should be open by default. Under "Security Groups" select "Inbound" then "Edit" and then add "HTTP".
+
+Select "Elastic IPs" on the LHS, and "Allocate New Address" in order to create a public IP for your VM.
+
+Next, select "Network Interfaces" on the LHS, and then under Actions select "Associate Addresses", then choose the Elastic IP you just created.
+
+See also http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/GetStarted.html
+
+### Set up multiple instances (optional)
+
+For info on setting up multiple instances with load balancing see: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/gs-ec2VPC.html
+
+Select "Load Balancers" on the LHS, click on "Create Load Balancer", and follow the wizard, keeping the defaults.
+
+To be able to launch multiple VMs: Under "Instances" convert the VM to an Image. When done, select Launch, enable multiple instances, and associate them with the previous security group. Once running, go back to "Load Balancers" and add the instances to the load balancer. Each instance can be accessed individually by its own public IP, but external users should access the system more generally via the Load Balancer's DNS.
+
+### Booting the VM
+
+Return to "Instances" using the menu on the LHS.
+
+To boot the VM select "Actions" then "Instance State" then "Start". In the future, once you have the VM loaded and configured, this last step is the only one you will need to repeat to turn your VM on and off.
+
+The menu provided should specify the Public IP where the VM has launched.
+
+
+
+## Shiny Setup
+
+Installing and configuring Shiny for PEcAn
+
+Authors: Alexey Shiklomanov, Rob Kooper
+
+**NOTE: Instructions are only tested for CentOS 6.5 and Ubuntu 16.04.**
+**NOTE: Pretty much every step here requires root access.**
+
+### Install the Shiny R package and Shiny server
+
+Follow the instructions on the [Shiny download page][shiny-download] for the operating system you are using.
+
+[shiny-download]: https://www.rstudio.com/products/shiny/download-server/
+
+### Modify the shiny configuration file
+
+The Shiny configuration file is located in `/etc/shiny-server/shiny-server.conf`. Comment out the entire file and add the following, replacing `<username>` with your user name and `<location>` with the URL location you want for your app. This will allow you to run Shiny apps from your web browser at https://your.server.edu/shiny/your-location
+
+```
+run_as shiny;
+server {
+  listen 3838;
+  location /<location>/ {
+    run_as <username>;
+    site_dir /path/to/your/shiny/app;
+    log_dir /var/log/shiny-server;
+    directory_index on;
+  }
+}
+```
+
+For example, my configuration on the old test-pecan looks like this.
+
+```
+run_as shiny;
+server {
+  listen 3838;
+  location /ashiklom/ {
+    run_as ashiklom;
+    site_dir /home/ashiklom/fs-data/pecan/shiny/;
+    log_dir /var/log/shiny-server;
+    directory_index on;
+  }
+}
+```
+
+...and I can access my Shiny apps at, for instance, https://test-pecan.bu.edu/shiny/ashiklom/workflowPlots.
+
+You can add as many `location { ... }` fields as you would like.
+
+```
+run_as shiny;
+server {
+  listen 3838;
+  location /ashiklom/ {
+    ...
+  }
+  location /bety/ {
+    ...
+  }
+}
+```
+
+If you change the configuration, for example to add a new location, you will need to restart Shiny server.
+*If you are setting up a new instance of Shiny*, skip this step and continue with the guide, since there are a few more steps to get Shiny working.
+*If there is an instance of Shiny already running*, you can restart it with:
+
+```
+## On CentOS
+sudo service shiny-server stop
+sudo service shiny-server start
+
+## On Ubuntu
+sudo systemctl stop shiny-server.service
+sudo systemctl start shiny-server.service
+```
+
+### Set the Apache proxy
+
+Create a file with the following name, based on the version of the operating system you are using:
+
+* Ubuntu 16.04 (pecan1, pecan2, test-pecan) -- `/etc/apache2/conf-available/shiny.conf`
+* CentOS 6.5 (psql-pecan) -- `/etc/httpd/conf.d/shiny.conf`
+
+Into this file, add the following:
+
+```
+ProxyPass /shiny/ http://localhost:3838/
+ProxyPassReverse /shiny/ http://localhost:3838/
+RedirectMatch permanent ^/shiny$ /shiny/
+```
+
+#### **Ubuntu only:** Enable the new shiny configuration
+
+```
+sudo a2enconf shiny
+```
+
+This will create a symbolic link to the newly created `shiny.conf` file inside the `/etc/apache2/conf-enabled` directory.
+You can do `ls -l /etc/apache2/conf-enabled` to confirm that this worked.
+
+### Enable and start the shiny server, and restart apache
+
+#### On CentOS
+
+```
+sudo ln -s /opt/shiny-server/config/init.d/redhat/shiny-server /etc/init.d
+sudo service shiny-server stop
+sudo service shiny-server start
+sudo service httpd restart
+```
+
+You can check that Shiny is running with `service shiny-server status`.
+
+#### On Ubuntu
+
+Enable the Shiny server service.
+This will make sure Shiny runs automatically on startup.
+
+```
+sudo systemctl enable shiny-server.service
+```
+
+Restart Apache.
+
+```
+sudo apachectl restart
+```
+
+Start the Shiny server.
+
+```
+sudo systemctl start shiny-server.service
+```
+
+If there are problems, you can stop the `shiny-server.service` with...
+
+```
+sudo systemctl stop shiny-server.service
+```
+
+...and then use `start` again to restart it.
+
+
+### Troubleshooting
+
+Refer to the log files for shiny (`/var/log/shiny-server.log`) and httpd (on CentOS, `/var/log/httpd/error-log`; on Ubuntu, `/var/log/apache2/error-log`).
+
+
+
+### Further reading
+
+* [Shiny server configuration reference](http://docs.rstudio.com/shiny-server/)
+
+
+
+
+## Thredds Setup
+
+Installing and configuring Thredds for PEcAn
+
+Author: Rob Kooper
+
+**NOTE: Instructions are only tested for Ubuntu 16.04 on the VM; if you have instructions for CentOS/RedHat, please update this documentation.**
+**NOTE: Pretty much every step here requires root access.**
+
+### Install the Tomcat 8 and Thredds webapp
+
+The Tomcat 8 server can be installed from the default Ubuntu repositories. The Thredds webapp will be downloaded and installed from Unidata.
+
+The first step is to install Tomcat 8 and configure it. The flag `-Dtds.content.root.path` should point to the location of where the thredds folder is located.
+This needs to be writable by the tomcat user. `-Djava.security.egd` is a special flag to use a different random number generator for tomcat; the default would take too long to generate a random number.
+
+```
+apt-get -y install tomcat8 openjdk-8-jdk
+echo JAVA_OPTS=\"-Dtds.content.root.path=/home/carya \${JAVA_OPTS}\" >> /etc/default/tomcat8
+echo JAVA_OPTS=\"-Djava.security.egd=file:/dev/./urandom \${JAVA_OPTS}\" >> /etc/default/tomcat8
+service tomcat8 restart
+```
+
+Next, install the webapp.
+
+
+```
+mkdir /home/carya/thredds
+chmod 777 /home/carya/thredds
+
+wget -O /var/lib/tomcat8/webapps/thredds.war ftp://ftp.unidata.ucar.edu/pub/thredds/4.6/current/thredds.war
+```
+
+Finally, we configure Apache to proxy the Thredds server:
+
+```
+cat > /etc/apache2/conf-available/thredds.conf << EOF
+ProxyPass /thredds/ http://localhost:8080/thredds/
+ProxyPassReverse /thredds/ http://localhost:8080/thredds/
+RedirectMatch permanent ^/thredds$ /thredds/
+EOF
+a2enmod proxy_http
+a2enconf thredds
+service apache2 reload
+```
+
+#### Customize the Thredds server
+
+To customize the Thredds server for your installation, edit the file /home/carya/thredds/threddsConfig.xml. For example, an abridged version of the file included in the VM looks like this:
+
+```
+<threddsConfig>
+  <serverInformation>
+    <name>PEcAn</name>
+    <logoUrl>/pecan/images/pecan_small.jpg</logoUrl>
+    <logoAltText>PEcAn</logoAltText>
+    <abstract>Scientific Data</abstract>
+    <keywords>meteorology, atmosphere, climate, ocean, earth science</keywords>
+    <contact>
+      <name>Rob Kooper</name>
+      <organization>NCSA</organization>
+      <email>kooper@illinois.edu</email>
+    </contact>
+    <hostInstitution>
+      <name>PEcAn</name>
+      <webSite>http://www.pecanproject.org/</webSite>
+      <logoUrl>/pecan/images/pecan_small.jpg</logoUrl>
+      <logoAltText>PEcAn Project</logoAltText>
+    </hostInstitution>
+  </serverInformation>
+  <htmlSetup>
+    <standardCssUrl>tds.css</standardCssUrl>
+    <catalogCssUrl>tdsCat.css</catalogCssUrl>
+    <datasetCssUrl>tdsDap.css</datasetCssUrl>
+  </htmlSetup>
+  <!-- remaining default options (catalog services, caching, WMS/WCS toggles, etc.) omitted -->
+</threddsConfig>
+```
+
+### Update the catalog
+
+For example, to update the catalog with the latest data, run the following command from the root crontab. This cronjob will also synchronize the database with remote servers and dump your database (by default in /home/carya/dump).
+
+```
+0 * * * * /home/carya/pecan/scripts/cron.sh -o /home/carya/dump
+```
+
+### Troubleshooting
+
+Refer to the log files for Tomcat (`/var/log/tomcat8/*`) and Thredds (`/home/carya/thredds/logs`).
+
+
+### Further reading
+
+* [Thredds reference](http://www.unidata.ucar.edu/software/thredds/current/tds/)
+
+
+
+
+## OS Specific Installations {#osinstall}
+
+- [Ubuntu](#ubuntu)
+- [CentOS](#centosredhat)
+- [OSX](#macosx)
+- [Install BETY](#install-bety) THIS PAGE IS DEPRECATED
+- [Install Models](#install-models)
+- [Install Data](#install-data)
+
+
+
+### Ubuntu {#ubuntu}
+
+These are specific notes for installing PEcAn on Ubuntu (14.04) and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser, you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.
+
+This document also contains information on how to install the Rstudio server edition as well as any other packages that can be helpful.
+
+#### Install build environment
+
+```bash
+sudo -s
+
+# point to latest R
+echo "deb http://cran.rstudio.com/bin/linux/ubuntu `lsb_release -s -c`/" > /etc/apt/sources.list.d/R.list
+apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9
+
+# update package list
+apt-get -y update
+
+# install packages needed for PEcAn
+apt-get -y install build-essential gfortran git r-base-core r-base-dev jags liblapack-dev libnetcdf-dev netcdf-bin bc libcurl4-gnutls-dev curl udunits-bin libudunits2-dev libgmp-dev python-dev libgdal1-dev libproj-dev expect
+
+# install packages needed for ED2
+apt-get -y install openmpi-bin libopenmpi-dev
+
+# install requirements for DALEC
+apt-get -y install libgsl0-dev
+
+# install packages for webserver
+apt-get -y install apache2 libapache2-mod-php5 php5
+
+# install packages to compile docs
+apt-get -y install texinfo texlive-latex-base texlive-latex-extra texlive-fonts-recommended
+
+# install devtools
+echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla
+
+# done as root
+exit
+```
+
+#### Install Postgres
+
+Documentation: http://trac.osgeo.org/postgis/wiki/UsersWikiPostGIS21UbuntuPGSQL93Apt
+
+```bash
+sudo -s
+
+# point to latest PostgreSQL
+echo "deb http://apt.postgresql.org/pub/repos/apt `lsb_release -s -c`-pgdg main" > /etc/apt/sources.list.d/pgdg.list
+wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
+
+# update package list
+apt-get -y update
+
+# install packages for postgresql (using a newer version than default)
+apt-get -y install libdbd-pgsql postgresql postgresql-client libpq-dev postgresql-9.4-postgis-2.1 postgresql-9.4-postgis-2.1-scripts
+
+# install following if you want to run pecan through the web
+apt-get -y install php5-pgsql
+
+# enable bety user to login with trust by adding the following lines after
+# the ability of postgres user to login in /etc/postgresql/9.4/main/pg_hba.conf
+local   all             bety                                    trust
+host    all             bety            127.0.0.1/32            trust
+host    all             bety            ::1/128                 trust
+
+# Once done restart postgresql
+/etc/init.d/postgresql restart
+
+exit
+```
+
+To install the BETYdb database, see [Installing BETY](#install-bety) below.
+
+#### Apache Configuration PEcAn
+
+```bash
+# become root
+sudo -s
+
+# get index page
+rm /var/www/html/index.html
+ln -s ${HOME}/pecan/documentation/index_vm.html /var/www/html/index.html
+
+# setup a redirect
+cat > /etc/apache2/conf-available/pecan.conf << EOF
+Alias /pecan ${HOME}/pecan/web
+<Directory ${HOME}/pecan/web>
+  DirectoryIndex index.php
+  Options +ExecCGI
+  Require all granted
+</Directory>
+EOF
+a2enconf pecan
+/etc/init.d/apache2 restart
+
+# done as root
+exit
+```
+
+#### Apache Configuration BETY
+
+```bash
+sudo -s
+
+# install all ruby related packages
+apt-get -y install ruby2.0 ruby2.0-dev libapache2-mod-passenger
+
+# link static content
+ln -s ${HOME}/bety/public /var/www/html/bety
+
+# setup a redirect
+cat > /etc/apache2/conf-available/bety.conf << EOF
+RailsEnv production
+RailsBaseURI /bety
+PassengerRuby /usr/bin/ruby2.0
+<Directory /var/www/html/bety>
+  Options +FollowSymLinks
+  Require all granted
+</Directory>
+EOF
+a2enconf bety
+/etc/init.d/apache2 restart
+```
+
+#### Rstudio-server
+
+*NOTE This will allow anybody to login to the machine through the rstudio interface and run any arbitrary code. The login used, however, is the same as the system login/password.*
+
+```bash
+wget http://download2.rstudio.org/rstudio-server-0.98.1103-amd64.deb
+```
+
+```bash
+# become root
+sudo -s
+
+# install required packages
+apt-get -y install libapparmor1 apparmor-utils libssl0.9.8
+
+# install rstudio
+dpkg -i rstudio-server-*
+rm rstudio-server-*
+echo "www-address=127.0.0.1" >> /etc/rstudio/rserver.conf
+echo "r-libs-user=~/R/library" >> /etc/rstudio/rsession.conf
+rstudio-server restart
+
+# setup rstudio forwarding in apache
+a2enmod proxy_http
+cat > /etc/apache2/conf-available/rstudio.conf << EOF
+ProxyPass /rstudio/ http://localhost:8787/
+ProxyPassReverse /rstudio/ http://localhost:8787/
+RedirectMatch permanent ^/rstudio$ /rstudio/
+EOF
+a2enconf rstudio
+/etc/init.d/apache2 restart
+
+# all done, exit root
+exit
+```
+
+#### Additional packages
+
+HDF5 Tools, netcdf, GDB and emacs
+```bash
+sudo apt-get -y install hdf5-tools cdo nco netcdf-bin ncview gdb emacs ess nedit
+```
+
+
+
+### CentOS/RedHat {#centosredhat}
+
+These are specific notes for installing PEcAn on CentOS (7) and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser, you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.
+
+This document also contains information on how to install the Rstudio server edition as well as any other packages that can be helpful.
+
+#### Install build environment
+
+```bash
+sudo -s
+
+# install packages needed for PEcAn
+yum -y groupinstall 'Development Tools'
+yum -y install git netcdf-fortran-openmpi-devel R bc curl libxml2-devel openssl-devel ed udunits2 udunits2-devel netcdf netcdf-devel gmp-devel python-devel gdal-devel proj-devel proj-epsg expect
+
+# jags
+yum -y install http://download.opensuse.org/repositories/home:/cornell_vrdc/CentOS_7/x86_64/jags3-3.4.0-54.1.x86_64.rpm
+yum -y install http://download.opensuse.org/repositories/home:/cornell_vrdc/CentOS_7/x86_64/jags3-devel-3.4.0-54.1.x86_64.rpm
+
+# fix include folder for udunits2
+ln -s /usr/include/udunits2/* /usr/include/
+
+# install packages needed for ED2
+yum -y install environment-modules openmpi openmpi-devel
+
+# install requirements for DALEC
+yum -y install gsl-devel
+
+# install packages for webserver
+yum -y install httpd php
+systemctl enable httpd
+systemctl start httpd
+firewall-cmd --zone=public --add-port=80/tcp --permanent
+firewall-cmd --reload
+
+# install packages to compile docs
+#apt-get -y install texinfo texlive-latex-base texlive-latex-extra texlive-fonts-recommended
+
+# install devtools
+echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla
+
+# done as root
+exit
+
+echo "module load mpi" >> ~/.bashrc
+module load mpi
+```
+
+#### Install and configure PostgreSQL, udunits2, NetCDF
+
+```bash
+sudo -s
+
+# point to latest PostgreSQL
+yum install -y epel-release
+yum -y install http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-1.noarch.rpm
+
+# install packages for postgresql (using a newer version than default)
+yum -y install postgresql94-server postgresql94-contrib postgis2_94 postgresql94-devel udunits2 netcdf
+
+# install following if you want to run pecan through the web
+yum -y install php-pgsql
+
+# enable bety user to login with trust by adding the following lines after
+# the ability of postgres user to login in /var/lib/pgsql/9.4/data/pg_hba.conf
+local all bety trust
+host all bety 127.0.0.1/32 trust
+host all bety ::1/128 trust
+
+# Initialize the database cluster
+/usr/pgsql-9.4/bin/postgresql94-setup initdb
+
+# Enable postgres
+systemctl enable postgresql-9.4
+systemctl start postgresql-9.4
+
+exit
+```
+
+#### Apache Configuration PEcAn
+
+Install and Start Apache
+```bash
+yum -y install httpd
+systemctl enable httpd
+systemctl start httpd
+```
+
+```bash
+# become root
+sudo -s
+
+# get index page
+rm /var/www/html/index.html
+ln -s /home/carya/pecan/documentation/index_vm.html /var/www/html/index.html
+
+# fix selinux context (does this need to be done after PEcAn is installed?)
+chcon -R -t httpd_sys_content_t /home/carya/pecan /home/carya/output
+
+# setup a redirect
+cat > /etc/httpd/conf.d/pecan.conf << EOF
+Alias /pecan /home/carya/pecan/web
+<Directory /home/carya/pecan/web>
+  DirectoryIndex index.php
+  Options +ExecCGI
+  Require all granted
+</Directory>
+EOF
+systemctl restart httpd
+
+# done as root
+exit
+```
+
+#### Apache Configuration BETY
+
+```bash
+sudo -s
+
+# install all ruby related packages
+sudo curl --fail -sSLo /etc/yum.repos.d/passenger.repo https://oss-binaries.phusionpassenger.com/yum/definitions/el-passenger.repo
+yum -y install ruby ruby-devel mod_passenger
+
+# link static content
+ln -s /home/carya/bety/public /var/www/html/bety
+
+# fix Gemfile
+echo 'gem "test-unit"' >> bety/Gemfile
+
+# fix selinux context (does this need to be done after bety is installed?)
+chcon -R -t httpd_sys_content_t /home/carya/bety
+
+# setup a redirect
+cat > /etc/httpd/conf.d/bety.conf << EOF
+RailsEnv production
+RailsBaseURI /bety
+PassengerRuby /usr/bin/ruby
+<Directory /var/www/html/bety>
+  Options +FollowSymLinks
+  Require all granted
+</Directory>
+EOF
+systemctl restart httpd
+```
+
+#### Rstudio-server
+
+NEED FIXING
+
+*NOTE This will allow anybody to login to the machine through the rstudio interface and run any arbitrary code. The login used, however, is the same as the system login/password.*
+
+```bash
+wget http://download2.rstudio.org/rstudio-server-0.98.1103-amd64.deb
+```
+
+```bash
+# become root
+sudo -s
+
+# install required packages
+apt-get -y install libapparmor1 apparmor-utils libssl0.9.8
+
+# install rstudio
+dpkg -i rstudio-server-*
+rm rstudio-server-*
+echo "www-address=127.0.0.1" >> /etc/rstudio/rserver.conf
+echo "r-libs-user=~/R/library" >> /etc/rstudio/rsession.conf
+rstudio-server restart
+
+# setup rstudio forwarding in apache
+a2enmod proxy_http
+cat > /etc/apache2/conf-available/rstudio.conf << EOF
+ProxyPass /rstudio/ http://localhost:8787/
+ProxyPassReverse /rstudio/ http://localhost:8787/
+RedirectMatch permanent ^/rstudio$ /rstudio/
+EOF
+a2enconf rstudio
+/etc/init.d/apache2 restart
+
+# all done, exit root
+exit
+```
+*Alternative Rstudio instructions*
+
+##### Install and configure Rstudio-server
+
+Install RStudio Server by following the [official documentation](https://www.rstudio.com/products/rstudio/download-server/) for your platform.
+Then, proceed with the following:
+
+* add `PATH=$PATH:/usr/sbin:/sbin` to `/etc/profile`
+```bash
+ echo "PATH=$PATH:/usr/sbin:/sbin; export PATH" >> /etc/profile
+```
+* add [rstudio.conf](https://gist.github.com/dlebauer/6921889) to /etc/httpd/conf.d/
+```bash
+ wget https://gist.github.com/dlebauer/6921889/raw/d1e0f945228e5519afa6223d6f49d6e0617262bd/rstudio.conf
+ sudo mv rstudio.conf /etc/httpd/conf.d/
+```
+* restart the Apache server: `sudo systemctl restart httpd`
+* now you should be able to access `http://<server>/rstudio`
+
+#### Install ruby-netcdf gem
+
+```bash
+cd $RUBY_APPLICATION_HOME
+export NETCDF_URL=http://www.gfd-dennou.org/arch/ruby/products/ruby-netcdf/release/ruby-netcdf-0.6.6.tar.gz
+export NETCDF_DIR=/usr/local/netcdf
+gem install narray
+export NARRAY_DIR="$(ls $GEM_HOME/gems | grep 'narray-')"
+export NARRAY_PATH="$GEM_HOME/gems/$NARRAY_DIR"
+cd $MY_RUBY_HOME/bin
+wget $NETCDF_URL -O ruby-netcdf.tgz
+tar zxf ruby-netcdf.tgz && cd ruby-netcdf-0.6.6/
+ruby -rubygems extconf.rb --with-narray-include=$NARRAY_PATH --with-netcdf-dir=/usr/local/netcdf-4.3.0
+sed -i 's|rb/$|rb|' Makefile
+make
+make install
+cd ../ && sudo rm -rf ruby-netcdf*
+
+cd $RUBY_APPLICATION
+bundle install --without development
+```
+
+
+#### Additional packages
+
+NEED FIXING
+
+HDF5 Tools, netcdf, GDB and emacs
+```bash
+sudo apt-get -y install hdf5-tools cdo nco netcdf-bin ncview gdb emacs ess nedit
+```
+
+
+
+### Mac OSX {#macosx}
+
+These are specific notes for installing PEcAn on Mac OSX and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser, you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.
+
+This document also contains information on how to install the Rstudio server edition as well as any other packages that can be helpful.
+
+
+#### Install build environment
+
+```bash
+# install R
+# download from http://cran.r-project.org/bin/macosx/
+
+# install gfortran
+# download from http://cran.r-project.org/bin/macosx/tools/
+
+# install OpenMPI
+curl -o openmpi-1.6.3.tar.gz http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.3.tar.gz
+tar zxf openmpi-1.6.3.tar.gz
+cd openmpi-1.6.3
+./configure --prefix=/usr/local
+make all
+sudo make install
+cd ..
+
+# install szip
+curl -o szip-2.1-MacOSX-intel.tar.gz ftp://ftp.hdfgroup.org/lib-external/szip/2.1/bin/szip-2.1-MacOSX-intel.tar.gz
+tar zxf szip-2.1-MacOSX-intel.tar.gz
+sudo mv szip-2.1-MacOSX-intel /usr/local/szip
+
+# install HDF5
+
+curl -o hdf5-1.8.11.tar.gz http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.11.tar.gz
+tar zxf hdf5-1.8.11.tar.gz
+cd hdf5-1.8.11
+sed -i -e 's/-O3/-O0/g' config/gnu-flags
+./configure --prefix=/usr/local/hdf5 --enable-fortran --enable-cxx --with-szlib=/usr/local/szip
+make
+# make check
+sudo make install
+# sudo make check-install
+cd ..
+```
+
+#### Install Postgres
+
+For those on a Mac, I use the following app for postgresql, which has postgis already installed: http://postgresapp.com/
+
+To get postgis, run the following commands in psql:
+
+```sql
+-- Enable PostGIS (includes raster)
+CREATE EXTENSION postgis;
+-- Enable Topology
+CREATE EXTENSION postgis_topology;
+-- fuzzy matching needed for Tiger
+CREATE EXTENSION fuzzystrmatch;
+-- Enable US Tiger Geocoder
+CREATE EXTENSION postgis_tiger_geocoder;
+```
+
+To check your postgis installation, run the following command, again in psql: `SELECT PostGIS_full_version();`
+
+#### Additional installs
+
+
+##### Install JAGS
+
+Download JAGS from http://sourceforge.net/projects/mcmc-jags/files/JAGS/3.x/Mac%20OS%20X/JAGS-Mavericks-3.4.0.dmg/download
+
+##### Install udunits
+
+Installing udunits-2 on MacOSX is done from source.
+
+* download the most recent [version of Udunits here](http://www.unidata.ucar.edu/downloads/udunits/index.jsp)
+* instructions for [compiling from source](http://www.unidata.ucar.edu/software/udunits/udunits-2/udunits2.html#Obtain)
+
+
+```bash
+curl -o udunits-2.1.24.tar.gz ftp://ftp.unidata.ucar.edu/pub/udunits/udunits-2.1.24.tar.gz
+tar zxf udunits-2.1.24.tar.gz
+cd udunits-2.1.24
+./configure
+make
+sudo make install
+```
+
+#### Apache Configuration
+
+Mac does not support pdo/postgresql by default. The easiest way to install it is to use: http://php-osx.liip.ch/
+
+To enable PEcAn to run from your web server:
+```bash
+cat > /etc/apache2/others/pecan.conf << EOF
+Alias /pecan ${PWD}/pecan/web
+<Directory ${PWD}/pecan/web>
+  DirectoryIndex index.php
+  Options +All
+  Require all granted
+</Directory>
+EOF
+```
+
+#### Ruby
+
+The default version of ruby should work. Or use [JewelryBox](https://jewelrybox.unfiniti.com/).
+
+#### Rstudio Server
+
+For the Mac you can download [Rstudio Desktop](http://www.rstudio.com/).
+
+
+
+### Installing BETY {#install-bety}
+
+**************THIS PAGE IS DEPRECATED*************
+
+Official Instructions for BETY are maintained here: https://pecan.gitbook.io/betydb-documentation
+
+If you would like to install the Docker Version of BETY, please consult the [PEcAn Docker](#pecan-docker) section.
+
+#### Install Database + Data
+
+* _note_ To install BETYdb without PEcAn, first download the [`load.bety.sh` script](https://raw.githubusercontent.com/PecanProject/pecan/master/scripts/load.bety.sh)
+
+```sh
+# install database (code assumes password is bety)
+sudo -u postgres createuser -d -l -P -R -S bety
+sudo -u postgres createdb -O bety bety
+sudo -u postgres ./scripts/load.bety.sh -c YES -u YES -r 0
+sudo -u postgres ./scripts/load.bety.sh -r 1
+sudo -u postgres ./scripts/load.bety.sh -r 2
+
+# configure for PEcAn web app (change password if needed)
+cp web/config.example.php web/config.php
+
+# add models to database (VM only)
+./scripts/add.models.sh
+
+# add data to database
+./scripts/add.data.sh
+
+# create outputs folder
+mkdir ~/output
+chmod 777 ~/output
+```
+
+#### Installing BETYdb Web Application
+
+There are two flavors of BETY, PHP and RUBY. The PHP version allows for a minimal interaction with the database, while the RUBY version allows for full interaction with the database.
+
+##### PHP version
+
+The PHP version comes with PEcAn and is already configured.
+
+##### RUBY version
+
+The RUBY version requires a few extra packages to be installed first.
+
+Next, we install the web app.
+
+```bash
+# install bety
+cd
+git clone https://github.com/PecanProject/bety.git
+
+# install gems
+cd bety
+sudo gem2.0 install bundler
+bundle install --without development:test:javascript_testing:debug
+```
+
+and configure BETY
+
+```bash
+# create folders for uploads
+mkdir paperclip/files paperclip/file_names
+chmod 777 paperclip/files paperclip/file_names
+
+# create folder for log files
+mkdir log
+touch log/production.log
+chmod 0666 log/production.log
+
+# fix configuration for vm
+cp config/additional_environment_vm.rb config/additional_environment.rb
+chmod go+w public/javascripts/cache/
+
+# setup bety database configuration
+cat > config/database.yml << EOF
+production:
+  adapter: postgis
+  encoding: utf-8
+  reconnect: false
+  database: bety
+  pool: 5
+  username: bety
+  password: bety
+EOF
+
+# setup login tokens
+cat > config/initializers/site_keys.rb << EOF
+REST_AUTH_SITE_KEY = 'thisisnotasecret'
+REST_AUTH_DIGEST_STRETCHES = 10
+EOF
+```
+
+
+
+### Install Models
+
+This page contains instructions on how to download and install ecosystem models that have been or are being coupled to PEcAn.
+These instructions have been tested on the PEcAn Ubuntu VM. Commands may vary on other operating systems.
+Also, some model downloads require permissions before downloading, making them unavailable to the general public. Please contact the PEcAn team if you would like access to a model that is not already installed on the default PEcAn VM.
+
+- [BioCro](#inst-biocro)
+- [CLM 4.5](#inst-clm45)
+- [DALEC](#inst-dalec)
+- [ED2](#inst-ed2)
+- [FATES](#inst-fates)
+- [GDAY](#inst-gday)
+- [JULES](#inst-jules)
+- [LINKAGES](#inst-linkages)
+- [LPJ-GUESS](#inst-lpj-guess)
+- [MAESPA](#inst-maespa)
+- [SIPNET](#inst-sipnet)
+
+
+
+#### BioCro {#inst-biocro}
+
+```bash
+# Public
+echo 'devtools::install_github("ebimodeling/biocro")' | R --vanilla
+# Development:
+echo 'devtools::install_github("ebimodeling/biocro-dev")' | R --vanilla
+```
+
+_BioCro Developers:_ request access from [@dlebauer on GitHub](https://github.com/dlebauer)
+
+
+
+#### CLM 4.5 {#inst-clm45}
+
+The version of CLM installed on PEcAn is the ORNL branch provided by Dan Ricciuto. This version includes Dan's point-level CLM processing scripts.
+
+Download the code (~300M compressed), input data (1.7GB compressed and expands to 14 GB), and a few misc inputs.
+
+```bash
+mkdir models
+cd models
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm4_5_1_r085.tar.gz
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm/ccsm_inputdata.tar.gz
+tar -xvzf clm4_5*
+tar -xvzf ccsm_inputdata.tar.gz
+
+#Parameter file:
+mkdir /home/carya/models/ccsm_inputdata/lnd/clm2/paramdata
+cd /home/carya/models/ccsm_inputdata/lnd/clm2/paramdata
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm_params.c130821.nc
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm_params.c140423.nc
+
+#Domain file:
+cd /home/carya/models/ccsm_inputdata/share/domains/domain.clm/
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/domain.lnd.1x1pt_US-UMB_navy.nc
+
+#Aggregated met data file:
+cd /home/carya/models/ccsm_inputdata/atm/datm7/CLM1PT_data/1x1pt_US-UMB
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/all_hourly.nc
+
+## lightning database
+cd /home/carya/models/ccsm_inputdata/atm/datm7/NASA_LIS/
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clmforc.Li_2012_climo1995-2011.T62.lnfm_Total_c140423.nc
+
+## surface data
+cd /home/carya/models/ccsm_inputdata/lnd/clm2/surfdata
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm/surfdata_360x720cru_simyr1850_c130927.nc
+cd /home/carya/models/ccsm_inputdata/lnd/clm2/surfdata_map
+wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm/surfdata_1x1pt_US-UMB_I1850CLM45CN_simyr1850.nc_new
+mv surfdata_1x1pt_US-UMB_I1850CLM45CN_simyr1850.nc_new surfdata_1x1pt_US-UMB_I1850CLM45CN_simyr1850.nc
+```
+Required libraries
+```bash
+sudo apt-get install mercurial csh tcsh subversion cmake
+
+sudo ln -s /usr/bin/make /usr/bin/gmake
+```
+Compile and build default inputs
+```bash
+cd /home/carya/models/clm4_5_1_r085/scripts
+python runCLM.py --site US-UMB --compset I1850CLM45CN --mach ubuntu --ccsm_input /home/carya/models/ccsm_inputdata --tstep 1 --nopointdata --coldstart --cpl_bypass --clean_build
+```
+
+##### CLM Test Run
+You will see a new directory in scripts: US-UMB_I1850CLM45CN
+Enter this directory and run (you shouldn't have to do this normally, but there is a bug with the python script, and doing this ensures all files get to the right place):
+```bash
+./US-UMB_I1850CLM45CN.build
+```
+
+Next you are ready to go to the run directory:
+```bash
+/home/carya/models/clm4_5_1_r085/run/US-UMB_I1850CLM45CN/run
+```
+Open the file datm.streams.txt.CLM1PT.CLM_USRDAT for editing, and check the file paths such that all paths start with /home/carya/models/ccsm_inputdata
+
+From this directory, launch the executable that resides in the bld directory:
+```bash
+/home/carya/clm4_5_1_r085/run/US-UMB_I1850CLM45CN/bld/cesm.exe
+```
+not sure this was the right location, but wherever the executable is
+
+You should begin to see output files that look like this:
+US-UMB_I1850CLM45CN.clm2.h0.yyyy-mm.nc (yyyy is year, mm is month)
+These are netCDF files containing monthly averages of lots of variables.
+
+The lnd_in file in the run directory can be modified to change the output file frequency and variables.
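+
+To take a quick look at one of these monthly files from R, something like the following works (the file name and the `GPP` variable are only examples; substitute whatever your run produced):
+
+```r
+library(ncdf4)
+# open one of the monthly history files written by the test run
+nc <- nc_open("US-UMB_I1850CLM45CN.clm2.h0.1850-01.nc")
+names(nc$var)                # list the variables CLM wrote
+gpp <- ncvar_get(nc, "GPP")  # read one variable, if it was output
+nc_close(nc)
+```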
+
+
+
+
+#### DALEC {#inst-dalec}
+
+```bash
+cd
+curl -o dalec_EnKF_pub.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/dalec_EnKF_pub.tgz
+tar zxf dalec_EnKF_pub.tgz
+rm dalec_EnKF_pub.tgz
+
+cd dalec_EnKF_pub
+make dalec_EnKF
+make dalec_seqMH
+sudo cp dalec_EnKF dalec_seqMH /usr/local/bin
+```
+
+
+
+
+#### ED2 {#inst-ed2}
+
+##### ED2.2 r46 (used in PEcAn manuscript)
+
+```bash
+# ----------------------------------------------------------------------
+# Get version r46 with a few patches for ubuntu
+cd
+curl -o ED.r46.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r46.tgz
+tar zxf ED.r46.tgz
+rm ED.r46.tgz
+# ----------------------------------------------------------------------
+# configure and compile ed
+cd ~/ED.r46/ED/build/bin
+curl -o include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s`
+make OPT=VM
+sudo cp ../ed_2.1-VM /usr/local/bin/ed2.r46
+```
+
+Perform a test run using pre-configured ED settings for ED2.2 r46:
+
+```bash
+# ----------------------------------------------------------------------
+# Create sample run
+cd
+mkdir testrun.ed.r46
+cd testrun.ed.r46
+curl -o ED2IN http://isda.ncsa.illinois.edu/~kooper/EBI/ED2IN.r46
+sed -i -e "s#\$HOME#$HOME#" ED2IN
+curl -o config.xml http://isda.ncsa.illinois.edu/~kooper/EBI/config.r46.xml
+# execute test run
+time ed2.r46
+```
+
+##### ED 2.2 r82
+
+```bash
+cd
+curl -o ED.r82.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.tgz
+tar zxf ED.r82.tgz
+rm ED.r82.tgz
+
+cd ED.r82
+curl -o ED.r82.patch http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.patch
+patch -p1 < ED.r82.patch
+cd ED/build/bin
+curl -o include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s`
+make OPT=VM
+sudo cp ../ed_2.1-VM /usr/local/bin/ed2.r82
+```
+
+Perform a test run using pre-configured ED settings for ED2.2 r82:
+
+```bash
+cd
+mkdir testrun.ed.r82
+cd testrun.ed.r82
+curl -o ED2IN http://isda.ncsa.illinois.edu/~kooper/EBI/ED2IN.r82
+sed -i -e "s#\$HOME#$HOME#" ED2IN
+curl -o config.xml http://isda.ncsa.illinois.edu/~kooper/EBI/config.r82.xml
+# execute test run
+time ed2.r82
+```
+
+##### ED 2.2 bleeding edge
+
+```bash
+cd
+git clone https://github.com/EDmodel/ED2.git
+
+cd ED2/ED/build/bin
+curl -o include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s`
+./generate_deps.sh
+make OPT=VM
+sudo cp ../ed_2.1-VM /usr/local/bin/ed2.git
+```
+
+
+
+
+#### CLM-FATES {#inst-fates}
+
+Prerequisites
+```
+sudo apt-get upgrade libnetcdf-dev
+sudo apt-get install subversion
+sudo apt-get install csh
+sudo apt-get install cmake
+sudo ln -s /usr/bin/make /usr/bin/gmake
+sudo rm /bin/sh
+sudo ln -s /bin/bash /bin/sh
+
+wget https://github.com/Unidata/netcdf-fortran/archive/v4.4.4.tar.gz
+tar zxf v4.4.4.tar.gz
+cd netcdf-fortran-4.4.4
+./configure
+make
+sudo make install
+```
+you might need to mess around with installing netcdf and netcdf-fortran to get a version FATES likes...
+
+Get the code from Github (currently private) and go to the cime/scripts directory:
+```
+git clone git@github.com:NGEET/ed-clm.git
+cd ed-clm/cime/scripts/
+```
+Within CLM-FATES, to be able to build an executable we need to create a reference run. We'll also use this reference run to grab defaults from, so we'll be registering the location of both the reference **case** (location of executable, scripts, etc) and the reference **inputs** with the PEcAn database. To begin, copy the reference run script from pecan:
+```
+cp ~/pecan/models/fates/inst/create_1x1_ref_case.sh .
+```
+Edit the reference case script to set NETCDF_HOME, CROOT (reference run case), and DIN_LOC_ROOT (reference run inputs). Also, make sure DIN_LOC_ROOT exists, as FATES will not create it itself. Then run the script:
+```
+./create_1x1_ref_case.sh
+```
+Be aware that this script WILL ask you for your password on the NCAR server to download the reference case input data (the guest password may work, haven't tried this). If it gives an error at the pio stage, check the log, but the most likely error is it being unable to find a version of netcdf it likes.
+
+Once FATES is installed, set the whole reference case directory as the Model path (leave the filename blank) and set the whole inputs directory as an Input with format clm_defaults.
+
+
+
+#### GDAY {#inst-gday}
+
+Navigate to a directory where you would like to store GDAY and run the following:
+
+```bash
+git clone https://github.com/mdekauwe/GDAY.git
+
+cd GDAY
+
+cd src
+
+make
+```
+
+```gday``` is your executable.
+
+
+
+#### JULES {#inst-jules}
+
+INSTALL STEPS:
+
+1) Download JULES and FCM
+
+JULES: The model requires registration to download. It is not to be put on the PEcAn VM.
+Getting Started documentation: https://jules.jchmr.org/content/getting-started
+Registration: http://jules-lsm.github.io/access_req/JULES_access.html
+
+FCM:
+```bash
+# https://github.com/metomi/fcm/
+wget https://github.com/metomi/fcm/archive/2015.05.0.tar.gz
+```
+
+2) Edit the makefile
+```bash
+# open etc/fcm-make/make.cfg
+# set JULES_NETCDF = actual instead of dummy
+# set path (e.g. /usr/) and lib_path /lib64 to netCDF libraries
+```
+
+3) Compile JULES
+
+```bash
+cd etc/fcm-make/
+{path.to.fcm}/fcm make -f etc/fcm-make/make.cfg --new
+```
+
+```bash
+# UBUNTU VERSION: installed without having to add any perl libraries
+# perl stuff that I had to install on pecan2, not the PEcAn VM
+sudo yum install perl-Digest-SHA
+sudo yum install perl-Time-modules
+sudo yum install cpan
+curl -L http://cpanmin.us | perl - --sudo App::cpanminus
+sudo cpanm Time/Piece.pm
+sudo cpanm IO/Uncompress/Gunzip.pm
+```
+
+
+The executable is under build/bin/jules.exe
+
+Example rundir: examples/point_loobos
+
+
+
+
+#### LINKAGES {#inst-linkages}
+
+##### R Installation
+```bash
+# Public
+echo 'devtools::install_github("araiho/linkages_package")' | R --vanilla
+```
+
+##### FORTRAN VERSION
+
+```bash
+# FORTRAN VERSION
+cd
+git clone https://github.com/araiho/Linkages.git
+cd Linkages
+gfortran -o linkages linkages.f
+sudo cp linkages /usr/local/bin/linkages.git
+```
+
+
+
+
+#### LPJ-GUESS {#inst-lpj-guess}
+
+Instructions to download the source code:
+
+Go to the [LPJ-GUESS website](http://web.nateko.lu.se/lpj-guess/download.html) for instructions to access the code.
+
+
+
+#### MAESPA {#inst-maespa}
+
+Navigate to a directory where you would like to store MAESPA and run the following:
+
+```bash
+git clone https://bitbucket.org/remkoduursma/maespa.git
+
+cd maespa
+
+make
+```
+
+```maespa.out``` is your executable. Example input files can be found in the ```inputfiles``` directory. Executing maespa.out from within one of the example directories will produce output.
+
+MAESPA developers have also developed a wrapper package called Maeswrap. The usual R package installation method ```install.packages``` may present issues with downloading and unpacking a dependency package called ```rgl```.
Here are a couple of solutions:
+
+##### Solution 1
+
+```bash
+### From the command line
+sudo apt-get install r-cran-rgl
+```
+then from within R
+```R
+install.packages("Maeswrap")
+```
+
+##### Solution 2
+
+```bash
+### From the command line
+sudo apt-get install libglu1-mesa-dev
+```
+then from within R
+```R
+install.packages("Maeswrap")
+```
+
+
+
+#### SIPNET {#inst-sipnet}
+
+```bash
+cd
+curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz
+tar zxf sipnet_unk.tar.gz
+rm sipnet_unk.tar.gz
+
+cd sipnet_unk
+make
+sudo cp sipnet /usr/local/bin/sipnet.runk
+```
+
+##### SIPNET testrun
+
+```bash
+cd
+curl -o testrun.sipnet.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.sipnet.tar.gz
+tar zxf testrun.sipnet.tar.gz
+rm testrun.sipnet.tar.gz
+cd testrun.sipnet
+sipnet.runk
+```
+
+
+
+### Installing data for PEcAn {#install-data}
+
+PEcAn assumes some of the data to be installed on the machine. This page describes how to install this data.
+
+#### Site Information
+
+These are large-ish files that contain data used with ED2 and SIPNET:
+
+```bash
+rm -rf sites
+curl -o sites.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/sites.tgz
+tar zxf sites.tgz
+sed -i -e "s#/home/kooper/Projects/EBI#${PWD}#" sites/*/ED_MET_DRIVER_HEADER
+rm sites.tgz
+
+rm -rf inputs
+curl -o inputs.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/inputs.tgz
+tar zxf inputs.tgz
+rm inputs.tgz
+```
+
+#### FIA database
+
+The FIA database is large and will add an extra 10GB to the installation.
+
+```bash
+# download and install database
+curl -o fia5data.psql.gz http://isda.ncsa.illinois.edu/~kooper/EBI/fia5data.psql.gz
+dropdb --if-exists fia5data
+createdb -O bety fia5data
+gunzip fia5data.psql.gz
+psql -U bety -d fia5data < fia5data.psql
+rm fia5data.psql
+```
+
+#### Flux Camp
+
+The following will install the data for flux camp (as well as the demo script for PEcAn).
+
+```bash
+cd
+curl -o plot.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/plot.tgz
+tar zxf plot.tgz
+rm plot.tgz
+```
+
+#### Harvard for ED tutorial
+
+Add datasets and runs:
+
+```bash
+curl -o Santarem_Km83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/Santarem_Km83.zip
+unzip -d sites Santarem_Km83.zip
+sed -i -e "s#/home/pecan#${HOME}#" sites/Santarem_Km83/ED_MET_DRIVER_HEADER
+rm Santarem_Km83.zip
+
+curl -o testrun.s83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.s83.zip
+unzip testrun.s83.zip
+sed -i -e "s#/home/pecan#${HOME}#" testrun.s83/ED2IN
+rm testrun.s83.zip
+
+curl -o ed2ws.harvard.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ed2ws.harvard.tgz
+tar zxf ed2ws.harvard.tgz
+mkdir ed2ws.harvard/analy ed2ws.harvard/histo
+sed -i -e "s#/home/pecan#${HOME}#g" ed2ws.harvard/input_harvard/met_driver/HF_MET_HEADER ed2ws.harvard/ED2IN ed2ws.harvard/*.r
+rm ed2ws.harvard.tgz
+
+curl -o testrun.PDG.zip http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.PDG.zip
+unzip testrun.PDG.zip
+sed -i -e "s#/home/pecan#${HOME}#" testrun.PDG/Met/PDG_MET_DRIVER testrun.PDG/Template/ED2IN
+sed -i -e 's#/n/scratch2/moorcroft_lab/kzhang/PDG/WFire_Pecan/##' testrun.PDG/Template/ED2IN
+rm testrun.PDG.zip
+
+curl -o create_met_driver.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/create_met_driver.tar.gz
+tar zxf create_met_driver.tar.gz
+rm create_met_driver.tar.gz
+```
+
+
+
+# Docker {#docker-index}
+
+This chapter describes the PEcAn Docker container infrastructure.
+It contains the following sections:
+
+- [Introduction to Docker](#docker-intro) -- Brief introduction to Docker and `docker-compose`
+- [Docker quickstart](#docker-quickstart) -- Brief tutorial for setting up a Docker-based PEcAn instance
+- [PEcAn Docker Architecture](#pecan-docker) -- Detailed description of the containers comprising the PEcAn Docker-based infrastructure
+- [Dockerfiles for models](#model-docker) -- General guide for writing Dockerfiles for new models
+- [Building and modifying images](#docker-build-images)
+- [Troubleshooting Docker](#docker-troubleshooting)
+- [Migrating from VM to Docker](#docker-migrate) -- Steps to migrate from running PEcAn on a VM to running it in Docker
+
+
+
+## Introduction to Docker {#docker-intro}
+
+* [What is Docker?](#what-is-docker)
+* [Working with Docker](#working-with-docker)
+* [`docker-compose`](#docker-compose)
+
+### What is Docker? {#what-is-docker}
+
+For a quick and accessible introduction to Docker, we suggest this YouTube video: [Learn Docker in 12 Minutes](https://www.youtube.com/watch?v=YFl2mCHdv24).
+
+For more comprehensive Docker documentation, we refer you to the [Docker documentation website](https://docs.docker.com/).
+
+For a useful analogy for Docker containerization, we refer you to the webcomic [xkcd](https://xkcd.com/1988/).
+
+Docker is a technology for encapsulating software in "containers", somewhat similar to virtual machines.
+Like virtual machines, Docker containers facilitate software distribution by bundling the software with all of its dependencies in a single location.
+Unlike virtual machines, Docker containers are meant to only run a single service or process and are built on top of existing services provided by the host OS (such as disk access, networking, memory management, etc.).
+
+In Docker, an **image** refers to a binary snapshot of a piece of software and all of its dependencies.
+A **container** refers to a running instance of a particular image.
+A good rule of thumb is that each container should be responsible for no more than one running process.
+A software **stack** refers to a collection of containers, each responsible for its own process, working together to power a particular application.
+Docker makes it easy to run multiple software stacks at the same time in parallel on the same machine.
+Stacks can be given a unique name, which is passed along as a prefix to all their containers.
+Inside these stacks, containers can communicate using generic names not prefixed with the stack name, making it easy to deploy multiple stacks with the same internal configuration.
+Containers within the same stack communicate with each other via a common **network**.
+Like virtual machines or system processes, Docker stacks can also be instructed to open specific ports to facilitate communication with the host and other machines.
+
+The PEcAn database BETY provides an instructive case study.
+BETY is comprised of two core processes -- a PostgreSQL database, and a web-based front-end to that database (Apache web server with Ruby on Rails).
+Running BETY as a "Dockerized" application therefore involves two containers -- one for the PostgreSQL database, and one for the web server.
+We could build these containers ourselves by starting from a container with nothing but the essentials of a particular operating system, but we can save some time and effort by starting with an [existing image for PostgreSQL](https://hub.docker.com/_/postgres/) from Docker Hub.
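+For instance, fetching such a pre-built image is a single command; a minimal sketch (the version tag here is only an example):
+
+```bash
+# download the official PostgreSQL image from Docker Hub
+docker pull postgres:9.6
+```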
+When starting a Dockerized BETY, we start the PostgreSQL container first, then start the BETY container, telling it how to communicate with the PostgreSQL container.
+To upgrade an existing BETY instance, we stop the BETY container, download the latest version, tell it to upgrade the database, and re-start the BETY container.
+There is no need to install new dependencies for BETY since they are all shipped as part of the container.
+
+The PEcAn Docker architecture is designed to facilitate installation and maintenance on a variety of systems by eliminating the need to install and maintain complex system dependencies (such as PostgreSQL, Apache web server, and Shiny server).
+Furthermore, using separate Docker containers for each ecosystem model helps avoid clashes between different software version requirements of different models (e.g. some models require GCC <5.0, while others may require GCC >=5.0).
+
+The full PEcAn Docker stack is described in more detail in the [next section](#pecan-docker).
+
+### Working with Docker {#working-with-docker}
+
+To run an image, you can use the Docker command line interface.
+For example, the following runs a PostgreSQL image based on the [pre-existing PostGIS image](https://hub.docker.com/r/mdillon/postgis/) by `mdillon`:
+
+```bash
+docker run \
+  --detach \
+  --rm \
+  --name postgresql \
+  --network pecan \
+  --publish 9876:5432 \
+  --volume ${PWD}/postgres:/var/lib/postgresql/data \
+  mdillon/postgis:9.6-alpine
+```
+
+This will start the PostgreSQL+PostGIS container.
+The following options were used:
+
+- `--detach` makes the container run in the background.
+- `--rm` removes the container when it is finished (make sure to use the volume below).
+- `--name` sets the name of the container, which is also the hostname that other docker containers in the same network can use to reach it.
+- `--network pecan` sets the network that the container should be running in. This leverages Docker's network isolation and allows other containers on the same network to connect to this one using the `postgresql` hostname.
+- `--publish` exposes the port to the outside world; much like SSH port forwarding, it maps port 9876 on the host to port 5432 in the docker container.
+- `--volume` maps a folder on your local machine to a folder in the container. This allows you to save data on your local machine.
+- `mdillon/postgis:9.6-alpine` is the actual image that will be run; in this case it comes from the group/person `mdillon`, the image is `postgis`, and the version is `9.6-alpine` (version 9.6 built on Alpine Linux).
+
+Other options that might be used:
+
+- `--tty` allocates a pseudo-TTY to send stdout and stderr back to the console.
+- `--interactive` keeps stdin open so the user can interact with the running application.
+- `--env` sets environment variables, which are often used to change the behavior of a docker container.
+
+To see a list of all running containers you can use the following command:
+
+```bash
+docker ps
+```
+
+To see the log files of this container you use the following command (you can use either the name or the id as returned by `docker ps`). The `-f` flag will follow the `stdout`/`stderr` from the container; use `Ctrl-C` to stop following the `stdout`/`stderr`.
+
+```bash
+docker logs -f postgresql
+```
+
+To stop a running container use:
+
+```
+docker stop postgresql
+```
+
+Containers that are running in the foreground (without the `--detach`) can be stopped by pressing `Ctrl-C`.
Any containers running in the background (with `--detach`) will continue running until the machine is restarted or the container is stopped using `docker stop`.
+
+### `docker-compose` {#docker-compose}
+
+For a quick introduction to `docker-compose`, we recommend the following YouTube video: [Docker Compose in 12 Minutes](https://www.youtube.com/watch?v=Qw9zlE3t8Ko).
+
+The complete `docker-compose` reference can be found on the [Docker documentation website](https://docs.docker.com/compose/).
+
+`docker-compose` provides a convenient way to configure and run a multi-container Docker stack.
+Basically, a `docker-compose` setup consists of a list of containers and their configuration parameters, which are then internally converted into a bunch of `docker` commands.
+To configure BETY as described above, we can use a `docker-compose.yml` file like the following:
+
+```yaml
+version: "3"
+services:
+  postgres:
+    image: mdillon/postgis:9.5
+  bety:
+    image: pecan/bety
+    depends_on:
+      - postgres
+```
+
+This simple file allows us to bring up a full BETY application consisting of both the database and the web application. The BETY app will not be brought up until the database container has started.
+
+You can now start this application by changing into the same directory as the `docker-compose.yml` file (`cd /path/to/file`) and then running:
+
+```
+docker-compose up
+```
+
+This will start the application, and you will see the log output of the two containers.
+
+
+
+## The PEcAn docker install process in detail {#docker-quickstart}
+
+### Configure docker-compose {#pecan-setup-compose-configure}
+
+This section will let you download some configuration files. The documentation provides links to the latest released version (master branch in GitHub) and the develop version that we are working on (develop branch in GitHub), which will become the next release. If you cloned the PEcAn GitHub repository you can use `git checkout <branch>` to switch branches.
+
+The PEcAn Docker stack is configured using a `docker-compose.yml` file. You can download just this file directly from GitHub ([latest](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml) or [develop](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml)). You can also find this file in the root of a cloned PEcAn GitHub repository. There is no need to edit the `docker-compose.yml` file itself: you can use the `.env` file to change some of the settings, or the `docker-compose.override.yml` file to modify the `docker-compose.yml` file. This makes it easier for you to get an updated version of the `docker-compose.yml` file without losing any changes you have made to it.
+
+Some of the settings in the `docker-compose.yml` can be set using a `.env` file. You can download either the [latest](https://raw.githubusercontent.com/PecanProject/pecan/master/docker/env.example) or the [develop](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/env.example) version. If you have cloned the GitHub repository it is also located in the docker folder. This file should be called `.env` and be placed in the same folder as your `docker-compose.yml` file. This file will allow you to set which version of PEcAn or BETY to use. See the comments in this file to control the settings. Options you might want to set are:
+
+- `PECAN_VERSION` : The docker images to use for PEcAn. The default is `latest`, which is the latest released version of PEcAn.
Setting this to `develop` will result in using the version of PEcAn which will become the next release.
+- `PECAN_FQDN` : The name of the server where PEcAn is running. This is what is used to register all files generated by this version of PEcAn (see also `TRAEFIK_HOST`).
+- `PECAN_NAME` : A short name for this PEcAn server that is shown in the pull-down menu and might be easier to recognize.
+- `BETY_VERSION` : This controls the version of BETY. The default is `latest`, which is the latest released version of BETY. Setting this to `develop` will result in using the version of BETY which will become the next release.
+- `TRAEFIK_HOST` : Should be the FQDN of the server; this is needed when generating a SSL certificate. For SSL certificates you will need to set `TRAEFIK_ACME_ENABLE` as well as `TRAEFIK_ACME_EMAIL`.
+- `TRAEFIK_IPFILTER` : Used to limit access to certain resources, such as RabbitMQ and the Traefik dashboard.
+
+A final file, which is optional, is a `docker-compose.override.yml`. You can download a version for the [latest](https://raw.githubusercontent.com/PecanProject/pecan/master/docker/docker-compose.example.yml) and [develop](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/docker-compose.example.yml) versions. If you have cloned the GitHub repository it is located in the docker folder. Use this file as an example of what you can do, and only copy over the pieces that you really need. It allows you to make changes to the docker-compose file for your local installation: you can use it to add additional containers to your stack, to change the path where docker stores the data for the containers, or to open up the postgresql port. For example:
+
+```yaml
+version: "3"
+
+services:
+  # expose database to localhost for ease of access
+  postgres:
+    ports:
+      - 5432:5432
+```
+
+Once you have the `docker-compose.yml` file as well as the optional `.env` and `docker-compose.override.yml` in a folder, you can start the PEcAn stack. The following instructions assume you are in the same directory as the file (if not, `cd` into it).
+
+In the rest of this section we will use a few arguments for the `docker-compose` application. The location of these arguments is important. The general syntax of `docker-compose` is `docker-compose <ARGUMENTS FOR DOCKER COMPOSE> <COMMAND> <ARGUMENTS FOR COMMAND> [SERVICES]`. More generally, `docker-compose` options are very sensitive to their location relative to other commands in the same line -- that is, `docker-compose -f /my/docker-compose.yml -p pecan up -d postgres` is _not_ the same as `docker-compose -d postgres -p pecan up -f /my/docker-compose.yml`. If arguments ever don't seem to be working as expected, check that they are in the right order.
+
+- `-f <filename>` : *ARGUMENTS FOR DOCKER COMPOSE* : Allows you to specify a docker-compose.yml file explicitly. You can use this argument multiple times. The default is to use the `docker-compose.yml` and `docker-compose.override.yml` in your current folder.
+- `-p <projectname>` : *ARGUMENTS FOR DOCKER COMPOSE* : Project name; all volumes, networks, and containers will be prefixed with this argument. The default is to use the current folder name.
+- `-d` : *ARGUMENTS FOR `up` COMMAND* : Will start all the containers in the background and return to the command shell.
+
+If no services are added to the docker-compose command, all available services will be started.
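+For example, a full invocation combining these arguments might look like the following sketch (the file names are the defaults discussed above):
+
+```bash
+# read both compose files explicitly, name the project "pecan",
+# and start all services in the background
+docker-compose -f docker-compose.yml -f docker-compose.override.yml -p pecan up -d
+```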
+
+
+### Initialize PEcAn (first time only) {#pecan-docker-quickstart-init}
+
+Before you can start to use PEcAn for the first time you will need to initialize the database (and optionally add some data). The following two sections will [first initialize the database](#pecan-docker-quickstart-init-db) and [second add some data](#pecan-docker-quickstart-init-data) to the system.
+
+#### Initialize the PEcAn database {#pecan-docker-quickstart-init-db}
+
+The commands described in this section will set up the PEcAn database (BETY) and pre-load it with some common "default" data.
+
+```bash
+docker-compose -p pecan up -d postgres
+
+# If you have a custom docker-compose file:
+# docker-compose -f /path/to/my-docker-compose.yml -p pecan up -d postgres
+```
+
+The breakdown of this command is as follows:
+
+- `-p pecan` -- This tells `docker-compose` to do all of this as part of a "project" we'll call `pecan`. By default, the project name is set to the name of the current working directory. The project name will be used as a prefix to all containers started by this `docker-compose` instance (so, if we have a service called `postgres`, this will create a container called `pecan_postgres`).
+- `up -d` -- `up` is a command that initializes the containers. Initialization involves downloading and building the target containers and any containers they depend on, and then running them. Normally, this happens in the foreground, printing logs directly to `stderr`/`stdout` (meaning you would have to interrupt it with Ctrl-C), but the `-d` flag forces this to happen more quietly and in the background.
+- `postgres` -- This indicates that we only want to initialize the service called `postgres` (and its dependencies). If we omitted this, `docker-compose` would initialize all containers in the stack.
+
+The end result of this command is to initialize a "blank" PostGIS container that will run in the background.
+This container is not connected to any data (yet), and is basically analogous to just installing and starting PostgreSQL on your system.
+As a side effect, the above command will also create blank data ["volumes"](https://docs.docker.com/storage/volumes/) and a ["network"](https://docs.docker.com/network/) that containers will use to communicate with each other.
+Because our project is called `pecan` and `docker-compose.yml` describes a network called `pecan`, the resulting network is called `pecan_pecan`.
+This is relevant to the following commands, which will actually initialize and populate the BETY database.
+
+Assuming the above ran successfully, next run the following:
+
+```bash
+docker-compose run --rm bety initialize
+```
+
+The breakdown of this command is as follows: {#docker-run-init}
+
+- `docker-compose run` -- This says we will be running a specific command inside the target service (`bety` in this case).
+- `--rm` -- This automatically removes the resulting container once the specified command exits, as well as any volumes associated with the container. This is useful as a general "clean-up" flag for one-off commands (like this one) to make sure you don't leave any "zombie" containers or volumes around at the end.
+- `bety` -- This is the name of the service in which we want to run the specified command.
+- Everything after the service name (here, `bety`) is interpreted as an argument to the image's specified [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint).
For the `bety` service, the entrypoint is the script [`docker/entrypoint.sh`](https://github.com/PecanProject/bety/blob/master/docker/entrypoint.sh) located in the BETY repository. Here, the `initialize` argument is parsed to mean "create a new database", which first runs `psql` commands to create the `bety` role and database and then runs the `load.bety.sh` script.
+  - NOTE: The entrypoint script that is used is the one copied into the Docker container at the time it was built, which, depending on the indicated image version and how often images are built on Docker Hub relative to updates to the source, may be older than whatever is in the source code.
+
+Note that this command may throw a bunch of errors related to functions and/or operators already existing.
+This is normal -- it just means that the PostGIS extension to PostgreSQL is already installed.
+The important thing is that you see output near the end like:
+
+```
+CREATED SCHEMA
+Loading schema_migrations : ADDED 61
+Started psql (pid=507)
+Updated formats : 35 (+35)
+Fixed formats : 46
+Updated machines : 23 (+23)
+Fixed machines : 24
+Updated mimetypes : 419 (+419)
+Fixed mimetypes : 1095
+...
+...
+...
+Added carya41 with access_level=4 and page_access_level=1 with id=323
+Added carya42 with access_level=4 and page_access_level=2 with id=325
+Added carya43 with access_level=4 and page_access_level=3 with id=327
+Added carya44 with access_level=4 and page_access_level=4 with id=329
+Added guestuser with access_level=4 and page_access_level=4 with id=331
+```
+
+If you do not see this output, you can look at the [troubleshooting](#docker-quickstart-troubleshooting) section at the end of this page for some troubleshooting tips, as well as some solutions to common problems.
+
+Once the command has finished successfully, proceed with the next step, which will load some initial data into the database and place the data in the docker volumes.
+
+#### Add example data (first time only) {#pecan-docker-quickstart-init-data}
+
+The following command will add some initial data to the PEcAn stack and register the data with the database.
+
+```bash
+docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+```
+
+The breakdown of this command is as follows:
+
+- `docker run` -- This says we will be running a specific command inside the target Docker container. See `docker run --help` and the [Docker run reference](https://docs.docker.com/engine/reference/run/) for more information.
+- `-ti` -- This is actually two flags: `-t` to allocate a pseudo-tty and `-i` to keep STDIN open even if detached. `-t` is necessary to ensure lower-level script commands run correctly. `-i` makes sure the container's standard input stays open and its output is displayed.
+- `--rm` -- This automatically removes the resulting container once the specified command exits, as well as any volumes associated with the container. This is useful as a general "clean-up" flag for one-off commands (like this one) to make sure you don't leave any "zombie" containers or volumes around at the end.
+- `--network pecan_pecan` -- This indicates that the container will use the existing `pecan_pecan` network. This network is what ensures communication between the `postgres` container (which, recall, is _just_ a PostGIS installation with some data) and the "volumes" where the actual data are persistently stored.
+- `pecan/data:develop` -- This is the name of the image in which to run the specified command, in the form `repository/image:version`.
This is interpreted as follows:
+  - First, it sees if there are any images called `pecan/data:develop` available on your local machine. If there are, it uses that one.
+  - If that image version is _not_ available locally, it will next try to find the image online. By default, it searches [Docker Hub](https://hub.docker.com/), such that `pecan/data` gets expanded to the container at `https://hub.docker.com/r/pecan/data`. For custom repositories, a full name can be given, such as `hub.ncsa.illinois.edu/pecan/data:latest`.
+  - If `:version` is omitted, Docker assumes `:latest`. NOTE that while online containers _should_ have a `:latest` version, not all of them do, and if a `:latest` version does not exist, Docker will be unable to find the image and will throw an error.
+- Everything after the image name (here, `pecan/data:develop`) is interpreted as an argument to the image's specified [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint).
+- `--volume pecan_pecan:/data` -- This mounts the data from the subsequent container (`pecan/data:develop`) onto the current project volume, called `pecan_pecan` (as with the network, the project name `pecan` is the prefix, and the volume name also happens to be `pecan` as specified in the `docker-compose.yml` file).
+- `--env FQDN=docker` -- The Fully Qualified Domain Name; this is the same value as specified in the `.env` file (for the web, monitor and executor containers). This will link the data files to the name in the machines table in BETY.
+- `pecan/data:develop` -- As above, this is the target image to run. Since there is no argument after the image name, this command will run the default command ([`CMD`](https://docs.docker.com/engine/reference/builder/#cmd)) specified for this docker container. In this case, it is the [`docker/add-data.sh`](https://github.com/PecanProject/pecan/blob/develop/docker/add-data.sh) script from the PEcAn repository.
+
+Under the hood, this container runs the `docker/add-data.sh` script, which copies a bunch of input files and registers them with the PEcAn database.
+
+Successful execution of this command should take some time because it involves copying reasonably large amounts of data and performing a number of database operations.
+
+
+#### Start PEcAn {#start-pecan}
+
+
+If you already completed the above steps, you can start the full stack by just running the following:
+
+```bash
+docker-compose -p pecan up -d
+```
+
+This will build and start all containers required to run PEcAn.
+With the `-d` flag, this will run all of these containers quietly in the background, and show a nice architecture diagram with the name and status of each container while they are starting.
+Once this is done you have a working instance of PEcAn.
+
+If all of the containers started successfully, you should be able to access the various components from a browser via the following URLs (if you run these commands on a remote machine replace localhost with the actual hostname).
+
+- PEcAn web interface (running models) -- http://localhost:8000/pecan/ (NOTE: The trailing slash is necessary.)
+- PEcAn documentation and home page -- http://localhost:8000/
+- BETY web interface -- http://localhost:8000/bety/
+- File browser (minio) -- http://localhost:8000/minio/
+- RabbitMQ management console (for managing queued processes) -- http://localhost:8000/rabbitmq/
+- Traefik, webserver showing mappings from URLs onto their respective containers -- http://localhost:8000/traefik/
+- Monitor, a service that monitors models and shows all models that are online, how many instances of each are online, and the number of jobs waiting. The output is in JSON -- http://localhost:8000/monitor/
+
+
+#### Start model runs using curl {#curl-model-runs}
+
+To test PEcAn you can use the following `curl` statement, or use the webpage to submit a request (if you run these commands on a remote machine replace localhost with the actual hostname):
+
+```bash
+curl -v -X POST \
+  -F 'hostname=docker' \
+  -F 'modelid=5000000002' \
+  -F 'sitegroupid=1' \
+  -F 'siteid=772' \
+  -F 'sitename=Niwot Ridge Forest/LTER NWT1 (US-NR1)' \
+  -F 'pft[]=temperate.coniferous' \
+  -F 'start=2004/01/01' \
+  -F 'end=2004/12/31' \
+  -F 'input_met=5000000005' \
+  -F 'email=' \
+  -F 'notes=' \
+  'http://localhost:8000/pecan/04-runpecan.php'
+```
+
+This should return some text that contains `Location:`, which shows the workflow id. You can prepend http://localhost:8000/pecan/ to the front of this, for example: http://localhost:8000/pecan/05-running.php?workflowid=99000000001. Here you will be able to see the progress of the workflow.
+
+To see what is happening behind the scenes you can look at the log files of the specific docker containers. Ones of interest are `pecan_executor_1`, the container that will execute a single workflow, and `pecan_sipnet_1`, which executes the sipnet model. To see the logs you use `docker logs pecan_executor_1`. Following is an example output:
+
+```
+2018-06-13 15:50:37,903 [MainThread ] INFO : pika.adapters.base_connection - Connecting to 172.18.0.2:5672
+2018-06-13 15:50:37,924 [MainThread ] INFO : pika.adapters.blocking_connection - Created channel=1
+2018-06-13 15:50:37,941 [MainThread ] INFO : root - [*] Waiting for messages. To exit press CTRL+C
+2018-06-13 19:44:49,523 [MainThread ] INFO : root - b'{"folder": "/data/workflows/PEcAn_99000000001", "workflowid": "99000000001"}'
+2018-06-13 19:44:49,524 [MainThread ] INFO : root - Starting job in /data/workflows/PEcAn_99000000001.
+2018-06-13 19:45:15,555 [MainThread ] INFO : root - Finished running job.
+```
+
+This shows that the executor connects to RabbitMQ and waits for messages. Once it picks up a message it will print the message and execute the workflow in the folder passed in with the message. Once the workflow (including any model executions) is finished it will print "Finished". The log file for `pecan_sipnet_1` is very similar; in that case it runs the `job.sh` in the run folder.
+
+To run multiple executors in parallel you can duplicate the executor section in the docker-compose file and rename it, for example from executor to executor1 and executor2. The same can be done for the models. To make this easier it helps to deploy the containers using Kubernetes, which allows you to easily scale the containers up and down.
+
+### Troubleshooting {#docker-quickstart-troubleshooting}
+
+When initializing the database, you will know you have encountered more serious errors if the command exits or hangs with output resembling the following:
+
+```
+LINE 1: SELECT count(*) FROM formats WHERE ...
+ ^
+Error: Relation `formats` does not exist
+```
+
+**If the above command fails**, you can try to fix things interactively by first opening a shell inside the container...
+
+```
+docker run -ti --rm --network pecan_pecan pecan/bety:latest /bin/bash
+```
+
+...and then running the following commands, which emulate the functionality of the `entrypoint.sh` script with the `initialize` argument.
+
+```bash
+# Create the bety role in the postgresql database
+psql -h postgres -p 5432 -U postgres -c "CREATE ROLE bety WITH LOGIN CREATEDB NOSUPERUSER NOCREATEROLE PASSWORD 'bety'"
+
+# Initialize the bety database itself, and set to be owned by role bety
+psql -h postgres -p 5432 -U postgres -c "CREATE DATABASE bety WITH OWNER bety"
+
+# If either of these fail with a "role/database bety already exists",
+# that's fine. You can safely proceed to the next command.
+
+# Load the actual bety database tables and values
+./script/load.bety.sh -a "postgres" -d "bety" -p "-h postgres -p 5432" -o bety -c -u -g -m ${LOCAL_SERVER} -r 0 -w https://ebi-forecast.igb.illinois.edu/pecan/dump/all/bety.tar.gz
+```
+
+
+
+## PEcAn Docker Architecture {#pecan-docker}
+
+* [Overview](#pecan-docker-overview)
+* [PEcAn's `docker-compose`](#pecan-docker-compose)
+* [Top-level structure](#pecan-dc-structure)
+* [`traefik`](#pecan-dc-traefik)
+* [`portainer`](#pecan-dc-portainer)
+* [`minio`](#pecan-dc-minio)
+* [`thredds`](#pecan-dc-thredds)
+* [`postgres`](#pecan-dc-postgres)
+* [`rabbitmq`](#pecan-dc-rabbitmq)
+* [`bety`](#pecan-dc-bety)
+* [`docs`](#pecan-dc-docs)
+* [`web`](#pecan-dc-web)
+* [`executor`](#pecan-dc-executor)
+* [`monitor`](#pecan-dc-monitor)
+* [Model-specific containers](#pecan-dc-models)
+
+
+### Overview {#pecan-docker-overview}
+
+The PEcAn docker architecture consists of many containers (see figure below) that communicate with each other. The goal of this architecture is to make it easy to expand the PEcAn system by deploying new model containers and registering them with PEcAn. Once this is done, users can use these new models in their work. The PEcAn framework will set up the configurations for the models and send a message to the model containers to start execution. Once the execution is finished, the PEcAn framework will continue. This is exactly as if the model were running on an HPC machine. Models can be executed in parallel by launching multiple model containers.
+
+```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'}
+knitr::include_graphics("03_topical_pages/94_docker/pecan-docker.png")
+```
+As can be seen in the figure, the architecture leverages two standard containers (in orange). The first container is PostgreSQL with PostGIS ([mdillon/postgis](https://hub.docker.com/r/mdillon/postgis/)), which is used to store the database used by both BETY and PEcAn. The second container is a message bus, more specifically RabbitMQ ([rabbitmq](https://hub.docker.com/_/rabbitmq/)).
+
+The BETY app container ([pecan/bety](https://hub.docker.com/r/pecan/bety/)) is the front end to the BETY database and is connected to the PostgreSQL container. An HTTP server can be put in front of this container for SSL termination as well as to allow for load balancing (by using multiple BETY app containers).
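+As a sketch of that scaling idea, `docker-compose` can start several instances of one service via its `--scale` flag (the service and project names follow the PEcAn stack described below):
+
+```bash
+# run two BETY application containers in the pecan project
+docker-compose -p pecan up -d --scale bety=2
+```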
+
+The PEcAn framework containers consist of multiple unique ways to interact with the PEcAn system (none of these containers will have any models installed):
+
+- PEcAn shiny hosts the Shiny applications developed and interacts with the database to get all information necessary to display.
+- PEcAn rstudio is an RStudio environment with the PEcAn libraries preloaded. This allows for prototyping of new algorithms that can be used as part of the PEcAn framework later.
+- PEcAn web allows the user to create a new PEcAn workflow. The workflow is stored in the database, and the models are executed by the model containers.
+- PEcAn cli allows the user to submit a pecan.xml file to be executed by the PEcAn framework. The workflow created from the XML file is stored in the database, and the models are executed by the model containers.
+
+The model containers contain the actual models that are executed as well as small wrappers to make them work in the PEcAn framework. The containers will run the model based on the parameters received from the message bus and convert the outputs back to the standard PEcAn output format. Once a container has finished processing a message it will immediately get the next message and start processing it.
+
+### PEcAn's `docker-compose` {#pecan-docker-compose}
+
+```{r comment='', echo = FALSE, results = 'hide'}
+docker_compose_file <- file.path("..", "docker-compose.yml")
+dc_yaml <- yaml::read_yaml(docker_compose_file)
+```
+
+The PEcAn Docker architecture is described in full by the PEcAn `docker-compose.yml` file.
+For full `docker-compose` syntax, see the [official documentation](https://docs.docker.com/compose/).
+
+This section describes the [top-level structure](#pecan-dc-structure) and each of the services, which are as follows:
+
+- [`traefik`](#pecan-dc-traefik)
+- [`portainer`](#pecan-dc-portainer)
+- [`minio`](#pecan-dc-minio)
+- [`thredds`](#pecan-dc-thredds)
+- [`postgres`](#pecan-dc-postgres)
+- [`rabbitmq`](#pecan-dc-rabbitmq)
+- [`bety`](#pecan-dc-bety)
+- [`docs`](#pecan-dc-docs)
+- [`web`](#pecan-dc-web)
+- [`executor`](#pecan-dc-executor)
+- [`monitor`](#pecan-dc-monitor)
+- [Model-specific services](#pecan-dc-models)
+
+For reference, the complete `docker-compose` file is as follows:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml, stdout())
+```
+
+There are two ways you can override different values in the `docker-compose.yml` file. The first method is to create a file called `.env`, placed in the same folder as the `docker-compose.yml` file, which can override some of the configuration variables used by `docker-compose`. An example of such an env file:
+
+```{r comment='', echo = FALSE}
+docker_env_file <- file.path("..", "docker", "env.example")
+writeLines(readLines(docker_env_file))
+```
+
+You can also extend the `docker-compose.yml` file with a `docker-compose.override.yml` file (in the same directory), allowing you to add more services, or for example to change where the volumes are stored (see [official documentation](https://docs.docker.com/compose/extends/)).
For example the following will change the volume for postgres to be stored in your home directory: + +``` +version: "3" + +volumes: + postgres: + driver_opts: + type: none + device: ${HOME}/postgres + o: bind +``` + +### Top-level structure {#pecan-dc-structure} + +The root of the `docker-compose.yml` file contains three sections: + +- `services` -- This is a list of services provided by the application, with each service corresponding to a container. +When communicating with each other internally, the hostnames of containers correspond to their names in this section. +For instance, regardless of the "project" name passed to `docker-compose up`, the hostname for connecting to the PostgreSQL database of any given container is _always_ going to be `postgres` (e.g. you should be able to access the PostgreSQL database by calling the following from inside the container: `psql -d bety -U bety -h postgres`). +The services comprising the PEcAn application are described below. + +- `networks` -- This is a list of networks used by the application. +Containers can only communicate with each other (via ports and hostnames) if they are on the same Docker network, and containers on different networks can only communicate through ports exposed by the host machine. +We just provide the network name (`pecan`) and resort to Docker's default network configuration. +Note that the services we want connected to this network include a `networks: ... - pecan` tag. +For more details on Docker networks, see the [official documentation](https://docs.docker.com/network/). + +- `volumes` -- Similarly to `networks`, this just contains a list of volume names we want. +Briefly, in Docker, volumes are directories containing files that are meant to be shared across containers. +Each volume corresponds to a directory, which can be mounted at a specific location by different containers. +For example, syntax like `volumes: ... - pecan:/data` in a service definition means to mount the `pecan` "volume" (including its contents) in the `/data` directory of that container. +Volumes also allow data to persist on containers between restarts, as normally, any data created by a container during its execution is lost when the container is re-launched. +For example, using a volume for the database allows data to be saved between different runs of the database container. +Without volumes, we would start with a blank database every time we restart the containers. +For more details on Docker volumes, see the [official documentation](https://docs.docker.com/storage/volumes/). +Here, we define three volumes: + + - `postgres` -- This contains the data files underlying the PEcAn PostgreSQL database (BETY). + Notice that it is mounted by the `postgres` container to `/var/lib/postgresql/data`. + This is the data that we pre-populate when we run the Docker commands to [initialize the PEcAn database](#pecan-docker-quickstart-init). + Note that these are the values stored _directly in the PostgreSQL database_. + The default files to which the database points (i.e. `dbfiles`) are stored in the `pecan` volume, described below. + + - `rabbitmq` -- This volume contains persistent data for RabbitMQ. + It is only used by the `rabbitmq` service. + + - `pecan` -- This volume contains PEcAn's `dbfiles`, which include downloaded and converted model inputs, processed configuration files, and outputs. + It is used by almost all of the services in the PEcAn stack, and is typically mounted to `/data`. 
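+To see how these networks and volumes look on a running system, you can list and inspect them with the Docker CLI; a minimal sketch (assuming the project name `pecan`, which prefixes the resource names):
+
+```bash
+# list the networks and volumes Docker knows about
+docker network ls
+docker volume ls
+
+# show where the pecan volume's data lives on the host
+docker volume inspect pecan_pecan
+```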
+
+### `traefik` {#pecan-dc-traefik}
+
+[Traefik](https://traefik.io/) manages communication among the different PEcAn services and between PEcAn and the web.
+Among other things, `traefik` facilitates the setup of web access to each PEcAn service via common and easy-to-remember URLs.
+For instance, the following lines in the `web` service configure access to the PEcAn web interface via the URL http://localhost:8000/pecan/ :
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services$web["labels"], stdout())
+```
+
+(Further details in the works...)
+
+The traefik service configuration looks like this:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["traefik"], stdout())
+```
+
+
+### `portainer` {#pecan-dc-portainer}
+
+[Portainer](https://portainer.io/) is a lightweight management UI that allows you to manage the docker host (or swarm). You can use this service to monitor the different containers, see the logfiles, and start and stop containers.
+
+The portainer service configuration looks like this:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["portainer"], stdout())
+```
+
+Portainer is accessible by browsing to `localhost:8000/portainer/`. You can either set the password in the `.env` file (for an example see env.example) or use the web browser and go to the portainer URL; on the first visit it will ask you for the password.
+
+### `minio` {#pecan-dc-minio}
+
+[Minio](https://github.com/minio/minio) is a service that provides access to a folder on disk through a variety of protocols, including S3 buckets and web-based access.
+We mainly use Minio to facilitate access to PEcAn data using a web browser without the need for CLI tools.
+
+Our current configuration is as follows:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["minio"], stdout())
+```
+
+The Minio interface is accessible by browsing to `localhost:8000/minio/`.
+From there, you can browse directories and download files.
+You can also upload files by clicking the red "+" in the bottom-right corner.
+
+Note that it is currently impossible to create or upload directories using the Minio interface (except in the `/data` root directory -- those folders are called "buckets" in Minio).
+Therefore, the recommended way to perform any file management tasks other than individual file uploads is through the command line, e.g.
+
+```bash
+docker run -it --rm --volume pecan_pecan:/data --volume /path/to/local/directory:/localdir ubuntu
+
+# Now, you can move files between `/data` and `/localdir`, create new directories, etc.
+```
+
+### `thredds` {#pecan-dc-thredds}
+
+This service allows PEcAn model outputs to be accessible via the [THREDDS data server (TDS)](https://www.unidata.ucar.edu/software/thredds/current/tds/).
+When the PEcAn stack is running, the catalog can be explored in a web browser at http://localhost:8000/thredds/catalog.html.
+Specific output files can also be accessed from the command line via commands like the following:
+
+```{r, eval = FALSE}
+nc <- ncdf4::nc_open("http://localhost:8000/thredds/dodsC/outputs/PEcAn_<workflow_id>/out/<run_id>/<file_name>.nc")
+```
+
+Note that everything after `outputs/` exactly matches the directory structure of the `workflows` directory.
+
+Which files are served, which subsetting services are available, and other aspects of the data server's behavior are configured in the `docker/thredds_catalog.xml` file.
+Specifically, this XML tells the data server to use the `datasetScan` tool to serve all files within the `/data/workflows` directory, with the additional `filter` that only files ending in `.nc` are served. +For additional information about the syntax of this file, see the extensive [THREDDS documentation](https://www.unidata.ucar.edu/software/thredds/current/tds/reference/index.html). + +Our current configuration is as follows: + +```{r, echo = FALSE, comment = ''} +yaml::write_yaml(dc_yaml$services["thredds"], stdout()) +``` + + +### `postgres` {#pecan-dc-postgres} + +This service provides a working PostGIS database. +Our configuration is fairly straightforward: + +```{r, echo = FALSE, comment = ''} +yaml::write_yaml(dc_yaml$services["postgres"], stdout()) +``` + +Some additional details about our configuration: + +- `image` -- This pulls a container with PostgreSQL + PostGIS pre-installed. +Note that by default, we use PostgreSQL version 9.5. +To experiment with other versions, you can change `9.5` accordingly. + +- `networks` -- This allows PostgreSQL to communicate with other containers on the `pecan` network. +As mentioned above, the hostname of this service is just its name, i.e. `postgres`, so to connect to the database from inside a running container, use a command like the following: `psql -d bety -U bety -h postgres` + +- `volumes` -- Note that the PostgreSQL data files (which store the values in the SQL database) are stored on a _volume_ called `postgres` (which is _not_ the same as the `postgres` _service_, even though they share the same name). + +### `rabbitmq` {#pecan-dc-rabbitmq} + +[RabbitMQ](https://www.rabbitmq.com/) is a message broker service. +In PEcAn, RabbitMQ functions as a task manager and scheduler, coordinating the execution of different tasks (such as running models and analyzing results) associated with the PEcAn workflow. + +Our configuration is as follows: + +```{r, echo = FALSE, comment = ''} +yaml::write_yaml(dc_yaml$services["rabbitmq"], stdout()) +``` + +Note that the `traefik.frontend.rule` indicates that browsing to http://localhost:8000/rabbitmq/ leads to the RabbitMQ management console. + +By default, the RabbitMQ management console has username/password `guest/guest`, which is highly insecure. +For production instances of PEcAn, we highly recommend changing these credentials to something more secure, and removing access to the RabbitMQ management console via Traefik. + +### `bety` {#pecan-dc-bety} + +This service operates the BETY web interface, which is effectively a web-based front-end to the PostgreSQL database. +Unlike the `postgres` service, which contains all the data needed to run PEcAn models, this service is not essential to the PEcAn workflow. +However, note that certain features of the PEcAn web interface do link to the BETY web interface and will not work if this container is not running. + +Our configuration is as follows: + +```{r, echo = FALSE, comment = ''} +yaml::write_yaml(dc_yaml$services["bety"], stdout()) +``` + +The BETY container Dockerfile is located in the root directory of the [BETY GitHub repository](https://github.com/PecanProject/bety) ([direct link](https://github.com/PecanProject/bety/blob/master/Dockerfile)). + +### `docs` {#pecan-dc-docs} + +This service will show the documentation for the version of PEcAn running as well as a homepage with links to all relevant endpoints. You can access this at http://localhost:8000/. You can find the documentation for PEcAn at http://localhost:8000/docs/pecan/. 
+
+Our current configuration is as follows:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["docs"], stdout())
+```
+
+### `web` {#pecan-dc-web}
+
+This service runs the PEcAn web interface.
+It is effectively a thin wrapper around a standard Apache web server container from Docker Hub that installs some additional dependencies and copies over the necessary files from the PEcAn source code.
+
+Our configuration is as follows:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["web"], stdout())
+```
+
+Its Dockerfile ships with the PEcAn source code, in [`docker/web/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/web/Dockerfile).
+
+In terms of [actively developing PEcAn using Docker](#pecan-docker-develop), this is the service to modify when making changes to the web interface (i.e. PHP, HTML, and JavaScript code located in the PEcAn `web` directory).
+
+### `executor` {#pecan-dc-executor}
+
+This service is in charge of running the R code underlying the core PEcAn workflow.
+However, it is _not_ in charge of executing the models themselves -- model binaries are located on their [own dedicated Docker containers](#pecan-dc-models), and model execution is coordinated by RabbitMQ.
+
+Our configuration is as follows:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["executor"], stdout())
+```
+
+Its Dockerfile ships with the PEcAn source code, in [`docker/executor/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/executor/Dockerfile).
+Its image is built on top of the `pecan/base` image ([`docker/base/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/base/Dockerfile)), which contains the actual PEcAn source.
+To facilitate caching, the `pecan/base` image is itself built on top of the `pecan/depends` image ([`docker/depends/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/depends/Dockerfile)), a large image that contains an R installation and PEcAn's many system and R package dependencies (which usually take ~30 minutes or longer to install from scratch).
+
+In terms of [actively developing PEcAn using Docker](#pecan-docker-develop), this is the service to modify when making changes to the PEcAn R source code.
+Note that, unlike changes to the `web` image's PHP code, changes to the R source code do not immediately propagate to the PEcAn container; instead, you have to re-compile the code by running `make` inside the container.
+
+### `monitor` {#pecan-dc-monitor}
+
+This service shows all models that are currently running at http://localhost:8000/monitor/. The list returned is JSON and shows all models (grouped by type and version) that are currently running or were seen in the past. The list also contains all currently active containers, as well as how many jobs are waiting to be processed.
+
+This service is also responsible for registering any new models with PEcAn so that users can select them and execute them from the web interface.
+
+Our current configuration is as follows:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["monitor"], stdout())
+```
+
+### Model-specific containers {#pecan-dc-models}
+
+Additional models are added as additional services.
+In general, their configuration should be similar to the following configuration for SIPNET, which ships with PEcAn:
+
+```{r, echo = FALSE, comment = ''}
+yaml::write_yaml(dc_yaml$services["sipnet"], stdout())
+```
+
+The PEcAn source contains Dockerfiles for ED2 ([`models/ed/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/models/ed/Dockerfile)) and SIPNET ([`models/sipnet/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/models/sipnet/Dockerfile)) that can serve as references.
+For additional tips on constructing a Dockerfile for your model, see [Dockerfiles for Models](#model-docker).
+
+
+
+## Models using Docker {#model-docker}
+
+This section will discuss how to add new models to PEcAn docker. Adding a new model to PEcAn when using docker is as simple as starting a new container. The model will come online and let the PEcAn framework know there is a new model available; there is no need to go through the process of registering this model with PEcAn. Users will be able to select this new model from the web interface and run with this model selected.
+
+For this process to work, a docker image of the model will need to be created as well as a small JSON file that is used to announce this new model. A separate service in PEcAn ([`monitor`](#pecan-dc-monitor)) will use this JSON file to keep track of all available models as well as register these models with PEcAn.
+
+- [Model information](#model-docker-json-file)
+- [Model build](#model-docker-Dockerfile)
+- [Common problems](#common-docker-problems)
+
+
+### Model information {#model-docker-json-file}
+
+Each model has a small JSON file called model_info.json that describes the model and is used by the monitor service to register the model with PEcAn. This file contains information about the model that is sent as part of the heartbeat of the container to the monitor service. Below is an example of this file for the ED model. The required fields are `name`, `type`, `version` and `binary`. The fields `type` and `version` are used by PEcAn to send the messages to RabbitMQ; there is no need to specify the queue name explicitly, since the queue name will be created by combining these two fields with an underscore. The field `binary` is used to point to the actual binary in the docker container.
+
+There are 2 special values that can be used: `@VERSION@`, which will be replaced by the version that is passed in when building the container, and `@BINARY@`, which will be replaced by the binary when building the docker image.
+
+```json
+{
+  "name": "ED2.2",
+  "type": "ED2",
+  "version": "@VERSION@",
+  "binary": "@BINARY@",
+  "description": "The Ecosystem Demography Biosphere Model (ED2) is an integrated terrestrial biosphere model incorporating hydrology, land-surface biophysics, vegetation dynamics, and soil carbon and nitrogen biogeochemistry",
+  "author": "Mike Dietze",
+  "contributors": ["David LeBauer", "Xiaohui Feng", "Dan Wang", "Carl Davidson", "Rob Kooper", "Shawn Serbin", "Alexey Shiklomanov"],
+  "links": {
+    "source": "https://github.com/EDmodel/ED2",
+    "issues": "https://github.com/EDmodel/ED2/issues"
+  },
+  "inputs": {},
+  "citation": ["Medvigy D., Wofsy S. C., Munger J. W., Hollinger D. Y., Moorcroft P. R. 2009. Mechanistic scaling of ecosystem function and dynamics in space and time: Ecosystem Demography model version 2. J. Geophys. Res.
114 (doi:10.1029/2008JG000812)"]
+}
+```
+
+Other fields that are recommended, but currently not used yet, are:
+
+- `description` : a longer description of the model.
+- `creator` : contact person for this docker image.
+- `contributor` : other people that have contributed to this docker image.
+- `links` : additional links to help people when using this docker image; for example, values that can be used are `source` to link to the source code, `issues` to link to the issue tracking system, and `documentation` to link to model-specific documentation.
+- `citation` : how the model should be cited in publications.
+
+
+### Model build {#model-docker-Dockerfile}
+
+
+In general we try to minimize the size of the images. To be able to do this we split the process of building the model images into two pieces (or leverage an existing image from the original model developers). If you look at the example Dockerfile you will see that there are 2 sections: the first section will build the model binary, and the second section will build the actual PEcAn model image, which copies the binary from the first section.
+
+This is an example of how the ED2 model is built. It will install all the packages needed to build the ED2 model, get the latest version from GitHub and build the model.
+
+The second section will create the actual model runner. This will leverage the PEcAn model image that has PEcAn already installed as well as the python code to listen for messages and run the actual model code. This will install some additional packages needed by the model binary (more about that below), copy the model_info.json file and change the `@VERSION@` and `@BINARY@` placeholders.
+
+It is important that the values for `type` and `version` are set correctly. The PEcAn code will use these to register the model with the BETY database, which is then used by PEcAn to send out a message to a specific worker queue; if you do not set these variables correctly your model executor will pick up messages for the wrong model.
+
+To build the docker image, we use a Dockerfile (see example below) and run `docker build`. This command expects the Dockerfile to live in the model-specific folder and is executed in the root pecan folder. It will copy the content of the pecan folder and make it available to the build process (in this example we do not need any additional files).
+
+Since we can have multiple different versions of a model available for PEcAn, we are using the following naming schema: `pecan/model-<modeltype>-<version>:<pecan version>`. The (abbreviated) build output will look something like:
+
+```
+... MORE OUTPUT ...
+ ---> 7f23c6302130
+Step 6/9 : FROM pecan/executor:latest
+ ---> f19d81b739f5
+... MORE OUTPUT ...
+Step 9/9 : COPY --from=model-binary /src/ED2/ED/build/ed_2.1-opt /usr/local/bin/ed2.${MODEL_VERSION}
+ ---> 07ac841be457
+Successfully built 07ac841be457
+Successfully tagged pecan/pecan-ed2:latest
+```
+
+At this point we have created a docker image with the binary and all PEcAn code that is needed to run the model. Some models (especially those built as native code) might be missing additional packages that need to be installed in the docker image. To see if all libraries are installed for the binary:
```bash
> docker run -ti --rm pecan/pecan-ed2 /bin/bash
root@8a95ee8b6b47:/work# ldd /usr/local/bin/ed2.git | grep "not found"
	libmpi_usempif08.so.40 => not found
	libmpi_usempi_ignore_tkr.so.40 => not found
	libmpi_mpifh.so.40 => not found
	libmpi.so.40 => not found
	libgfortran.so.5 => not found
```

Start the build-stage container again (its image ID is the number shown just before the line `FROM pecan/executor:latest` -- `7f23c6302130` in the example above), and look up which package provides each of the missing libraries listed above (for example `libmpi_usempif08.so.40`):

```bash
> docker run --rm -ti 7f23c6302130
root@e716c63c031f:/src# dpkg -S libmpi_usempif08.so.40
libopenmpi3:amd64: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_usempif08.so.40.10.1
libopenmpi3:amd64: /usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.40.10.1
libopenmpi3:amd64: /usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.40
```

This shows that the package to install is libopenmpi3. Do this for all missing libraries, modify the Dockerfile, and rebuild. The next time you run the `ldd` command, no more missing libraries should be listed.



## Building and modifying images {#docker-build-images}

The only other section on this page is:
[Local development and testing with Docker](#docker-local-devel)


For general use, it is sufficient to use the pre-built PEcAn images hosted on [Docker Hub](https://hub.docker.com/r/pecan/) (see [Docker quickstart](#docker-quickstart)).
However, there are cases where it makes sense to re-build the Docker images locally.
The following is a list of PEcAn-specific images and reasons why you would want to rebuild them locally:

- `pecan/depends` -- Rebuild if:
  - You modify the `docker/depends/Dockerfile`
  - You introduce new system dependencies (i.e. things that need to be installed with `apt-get`)
  - You introduce new R package dependencies, and you want those R package installations to be cached during future builds. For packages with fast build times, it may be fine to let them be installed as part of PEcAn's standard build process (i.e. `make`).
- `pecan/base` -- Rebuild if:
  - You built a new version of `pecan/depends` (on which `pecan/base` depends)
  - You modify the `docker/base/Dockerfile`
  - You made changes to the PEcAn R package source code, the Makefile, or `web/workflow.R`.
  - NOTE that changes to the web interface code affect `pecan/web`, _not_ `pecan/base`
- `pecan/executor` -- Rebuild if:
  - You built a new version of `pecan/base` (on which `pecan/executor` depends) and/or `pecan/depends` (on which `pecan/base` depends)
  - You modified the `docker/executor/Dockerfile`
  - You modified the RabbitMQ Python scripts (e.g. `docker/receiver.py`, `docker/sender.py`)
- `pecan/web` -- Rebuild if you modified any of the following:
  - `docker/web/Dockerfile`
  - The PHP/HTML/JavaScript code for the PEcAn web interface in `web/` (_except_ `web/workflow.R` -- that goes in `pecan/base`)
  - `docker/config.docker.php` (the `config.php` file for Docker web instances)
  - `documentation/index_vm.html` (the documentation HTML website)
  - NOTE: Because changes to this code are applied instantly (i.e. do not require compilation or installation), a more effective way to do local development may be to mount the `web/` or other relevant folders as a volume onto the `pecan/web` container.

The easiest way to quickly re-build all of the images is using the `docker.sh` script in the PEcAn source code root directory.
This script will build all of the docker images locally on your machine, and tag them as `latest`.
This will not build the `pecan/depends` image by default because that takes considerably longer.
However, you can force the script to build `pecan/depends` as well by setting the `DEPEND` environment variable to 1 (i.e. `DEPEND=1 ./docker.sh`).
The following instructions provide details on how to build each image individually.

To build an image locally, use the `docker build` command as described below.
For more details, see `docker build --help` or the [online Docker build documentation](https://docs.docker.com/engine/reference/commandline/build/).

First, in a terminal window, navigate (`cd`) into the PEcAn source code root directory.
From there, the general syntax for building an image looks like the following:

```bash
docker build -t pecan/<image name>:<image version> -f docker/base/Dockerfile.<image name> .
```

For instance, to build a local version of the `pecan/depends:latest` image, you would run:

```bash
docker build -t pecan/depends:latest -f docker/depends/Dockerfile .
```

The breakdown of this command is as follows:

- `docker build` -- This is the core command.
The standard syntax is `docker build [OPTIONS] <PATH>`, where `<PATH>` refers to the directory to be used as the "build context".
The "build context" is the working directory assumed by the Dockerfiles.
In PEcAn, this is always the PEcAn source code root directory, which allows Dockerfiles to use instructions such as `COPY web/workflow.R /work/`.
In this example, the `<PATH>` is set to the current working directory, i.e. `.`, because we are already in the PEcAn root directory.
If you were located in a different directory, you would have to provide a path to the PEcAn source code root directory.
Also, by default, `docker build` will look for a Dockerfile located at `<PATH>/Dockerfile`, but this is modified by the `-f` option described below.

- `-t pecan/depends:latest` -- The `-t/--tag` option specifies how the image will be labeled.
By default, Docker only defines unique image IDs, which are hexadecimal strings that are unintuitive and hard to remember.
Tags are useful for referring to specific images in a human-readable way.
Note that the same unique image can have multiple tags associated with it, so it is possible for, e.g. `pecan/depends:latest`, `pecan/depends:custom`, and even `mypecan/somethingelse:20.0` to refer to the same exact image.
To see a table of all local images, including their tags and IDs, run `docker image ls`.
  - **NOTE**: PEcAn's `docker-compose.yml` can be configured via the `PECAN` environment variable to point at different versions of PEcAn images.
  By default, it points to the `:latest` versions of all images.
  However, if you wanted to, for instance, build `:local` images corresponding to your local source code and then run that version of PEcAn, you would run:

  ```
  PECAN=local docker-compose -p pecan up -d
  ```

  This is an effective way to do local development and testing of different PEcAn versions, as described [below](#docker-local-devel).

- `-f docker/depends/Dockerfile` -- The `-f/--file` option is used to provide an alternative location and file name for the Dockerfile.

### Local development and testing with Docker {#docker-local-devel}

The following is an example of one possible workflow for developing and testing PEcAn using local Docker images.
The basic idea is to mount a local version of the PEcAn source code onto a running `pecan/executor` image, and then send a special "rebuild" RabbitMQ message to the container to trigger the rebuild whenever you make changes.
NOTE: All commands assume you are working from the PEcAn source code root directory.

1. In the PEcAn source code directory, create a `docker-compose.override.yml` file with the following contents:

   ```yml
   version: "3"
   services:
     executor:
       volumes:
         - .:/pecan
   ```

   This will mount the current directory `.` to the `/pecan` directory in the `executor` container.
   The special `docker-compose.override.yml` file is read automatically by `docker-compose` and overrides or extends any instructions set in the original `docker-compose.yml` file.
   It provides a convenient way to host server-specific configurations without having to modify the project-wide (and version-controlled) default configuration.
   For more details, see the [Docker Compose documentation](https://docs.docker.com/compose/extends/).

2. Update your PEcAn Docker stack with `docker-compose up -d`.
   If the stack is already running, this should only restart your `executor` instance while leaving the remaining containers running.

3. To update to the latest local code, run `./scripts/docker_rebuild.sh`.
   Under the hood, this uses `curl` to post a RabbitMQ message to a running Docker instance.
   By default, the script assumes that username and password are both `guest` and that the RabbitMQ URL is `http://localhost:8000/rabbitmq`.
   All of these can be customized by setting the environment variables `RABBITMQ_USER`, `RABBITMQ_PASSWORD`, and `RABBITMQ_URL`, respectively (or running the script prefixed with those variables, e.g. `RABBITMQ_USER=carya RABBITMQ_PASSWORD=illinois ./scripts/docker_rebuild.sh`).
   This step can be repeated whenever you want to trigger a rebuild of the local code.

NOTE: The updates with this workflow are _specific to the running container session_; restarting the `executor` container will revert to the previous versions of the installed packages.
To make persistent changes, you should re-build the `pecan/base` and `pecan/executor` containers against the current version of the source code.

NOTE: The mounted PEcAn source code directory includes _everything_ in your local source directory, _including installation artifacts used by make_.
This can lead to two common issues:
- Any previous make cache files (stuff in the `.install`, `.docs`, etc. directories) persist across container instances, even though the installed packages may not. To ensure a complete build, it's a good idea to run `make clean` on the host machine to remove these artifacts.
- Similarly, any installation artifacts from local builds will be carried over to the build. In particular, be wary of packages with compiled code, such as `modules/rtm` (`PEcAnRTM`) -- the compiled `.o`, `.so`, `.mod`, etc. files from compilation of such packages will carry over into the build, which can cause conflicts if the package was also built locally.

The `docker-compose.override.yml` is useful for some other local modifications.
For instance, the following adds a custom ED2 "develop" model container.

```yml
services:
  # ...
  ed2devel:
    image: pecan/model-ed2-develop:latest
    build:
      context: ../ED2 # Or wherever ED2 source code is found
    networks:
      - pecan
    depends_on:
      - rabbitmq
    volumes:
      - pecan:/data
    restart: unless-stopped
```

Similarly, this snippet modifies the `pecan` network to use a custom IP subnet mask.
This is required on the PNNL cluster because its servers' IP addresses often clash with Docker's default IP mask.
```yml
networks:
  pecan:
    ipam:
      config:
        - subnet: 10.17.1.0/24
```



## Troubleshooting Docker {#docker-troubleshooting}

### "Package not available" while building images

**PROBLEM**: Packages fail to install while building `pecan/depends` and/or `pecan/base` with an error like the following:

```
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Warning: unable to access index for repository https://mran.microsoft.com/snapshot/2018-09-01/src/contrib:
  cannot open URL 'https://mran.microsoft.com/snapshot/2018-09-01/src/contrib/PACKAGES'
Warning message:
package ‘<package name>’ is not available (for R version 3.5.1)
```

**CAUSE**: This can sometimes happen if there are problems with Microsoft's CRAN snapshots, which are the default repository for the `rocker/tidyverse` containers.
See GitHub issues [rocker-org/rocker-versioned#102](https://github.com/rocker-org/rocker-versioned/issues/102) and [#58](https://github.com/rocker-org/rocker-versioned/issues/58).

**SOLUTION**: Add the following line to the `depends` and/or `base` Dockerfiles _before_ (i.e. above) any commands that install R packages (e.g. `Rscript -e "install.packages(...)"`):

```
RUN echo "options(repos = c(CRAN = 'https://cran.rstudio.org'))" >> /usr/local/lib/R/etc/Rprofile.site
```

This will set the default repository to the more reliable (albeit more up-to-date; beware of breaking package changes!) RStudio CRAN mirror.
Then, build the image as usual.



## Migrating PEcAn from VM to Docker {#docker-migrate}

This document assumes you have read through the [Introduction to Docker](#docker-intro) as well as [Docker quickstart](#docker-quickstart) and have docker running on the VM.

This document will slowly replace each of the components with the appropriate docker images. At the end of this document you should be able to use the docker-compose command to bring up the full docker stack as if you had started with it originally.

### Running BETY as a docker container

This will replace the BETY application running on the machine with a docker image. This assumes you still have the database running on the local machine; the only thing we replace is the BETY application.

If you are running systemd (Ubuntu 16.04 or CentOS 7) you can copy the following file to /etc/systemd/system/bety.service (replace LOCAL_SERVER=99 with your actual server). If you have postgres running on another server, replace 127.0.0.1 with the actual ip address of the postgres server.

```
[Unit]
Description=BETY container
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run -t --rm --name bety --add-host=postgres:127.0.0.1 --network=host --env RAILS_RELATIVE_URL_ROOT=/bety --env LOCAL_SERVER=99 pecan/bety
ExecStop=/usr/bin/docker stop -t 2 bety

[Install]
WantedBy=local.target
```

At this point we can enable the bety service (this only needs to be done once). First we need to tell systemd a new service is available using `systemctl daemon-reload`. Next we enable the BETY service so it will restart automatically when the machine reboots, using `systemctl enable bety`. Finally we can start the BETY service using `systemctl start bety`. At this point BETY is running as a docker container on port 8000. You can see the log messages using `journalctl -u bety`.

Next we need to modify the Apache configuration files.
The file /etc/apache2/conf-enabled/bety.conf will be replaced with the following content:

```
ProxyPass /bety/ http://localhost:8000/bety/
ProxyPassReverse /bety/ http://localhost:8000/bety/
RedirectMatch permanent ^/bety$ /bety/
```

Once this is modified, we can restart Apache using `systemctl restart apache2`. At this point BETY is running in a container and is accessible through the webserver at http://server/bety/.


To upgrade to a new version of BETY you can now use docker commands. You can use the following commands to stop BETY, pull the latest image down, migrate the database (you made a backup, correct?) and start BETY again.

```
systemctl stop bety
docker pull pecan/bety:latest
docker run -ti --rm --add-host=postgres:127.0.0.1 --network=host --env LOCAL_SERVER=99 pecan/bety migrate
systemctl start bety
```

Once you are satisfied with the migration of BETY you can remove the bety folder as well as any ruby binaries you have installed.



## The PEcAn Docker API {#pecan-api}

If you have a running instance of Dockerized PEcAn (or other setup where PEcAn workflows are submitted via [RabbitMQ](#rabbitmq)),
you have the option of running and managing PEcAn workflows using the `pecanapi` package.

For more details, see the `pecanapi` [package vignette](https://github.com/PecanProject/pecan/blob/develop/api/vignettes/pecanapi.Rmd) and function-level documentation.
What follows is a lightning introduction.

#### Installation

The package can be installed directly from GitHub via `devtools::install_github`:

```r
devtools::install_github("pecanproject/pecan/api@develop")
```

#### Creating and submitting a workflow

With `pecanapi`, creating a workflow, submitting it to RabbitMQ, monitoring its progress, and processing its output can all be accomplished via an R script.

Start by loading the package (and the `magrittr` package, for the `%>%` pipe operator).

```r
library(pecanapi)
library(magrittr)
```

Set your PEcAn database user ID, and create a database connection object, which will be used for database operations throughout the workflow.

```r
options(pecanapi.user_id = 99000000002)
con <- DBI::dbConnect(
  RPostgres::Postgres(),
  user = "bety",
  password = "bety",
  host = "localhost",
  port = 5432
)
```

Find model and site IDs for the site and model you want to run.

```r
model_id <- get_model_id(con, "SIPNET", "136")
all_umbs <- search_sites(con, "umbs%disturbance")
site_id <- subset(all_umbs, !is.na(mat))[["id"]]
```

Insert a new workflow into the PEcAn database, and extract its ID.

```r
workflow <- insert_new_workflow(con, site_id, model_id,
                                start_date = "2004-01-01",
                                end_date = "2004-12-31")
workflow_id <- workflow[["id"]]
```

Pull all of this information together into a settings list object.

```r
settings <- list() %>%
  add_workflow(workflow) %>%
  add_database() %>%
  add_pft("temperate.deciduous") %>%
  add_rabbitmq(con = con) %>%
  modifyList(list(
    meta.analysis = list(iter = 3000, random.effects = list(on = FALSE, use_ghs = TRUE)),
    run = list(inputs = list(met = list(source = "CRUNCEP", output = "SIPNET", method = "ncss"))),
    ensemble = list(size = 1, variable = "NPP")
  ))
```

Submit the workflow via RabbitMQ, and monitor its progress in the R process.

```r
submit_workflow(settings)
watch_workflow(workflow_id)
```

Use THREDDS to access and analyze the output.
```r
sipnet_out <- ncdf4::nc_open(run_dap(workflow_id, "2004.nc"))
gpp <- ncdf4::ncvar_get(sipnet_out, "GPP")
time <- ncdf4::ncvar_get(sipnet_out, "time")
ncdf4::nc_close(sipnet_out)
plot(time, gpp, type = "l")
```



## RabbitMQ {#rabbitmq}

This section provides additional details about how PEcAn uses RabbitMQ to manage communication between its Docker containers.

In PEcAn, we use the Python [`pika`](http://www.rabbitmq.com/tutorials/tutorial-one-python.html) client to post and retrieve messages from RabbitMQ.
As such, every Docker container that communicates with RabbitMQ contains two Python scripts: `sender.py` and `receiver.py`.
Both are located in the `docker` directory in the PEcAn source code root.

### Producer -- `sender.py` {#rabbitmq-basics-sender}

The `sender.py` script is in charge of posting messages to RabbitMQ.
In the RabbitMQ documentation, it is known as a "producer".
It runs once for every message posted to RabbitMQ, and then immediately exits (unlike `receiver.py`, which runs continuously -- see [below](#rabbitmq-basics-receiver)).

Its usage is as follows:

```bash
python3 sender.py <uri> <queue> <message>
```

The arguments are:

- `<uri>` -- The unique identifier of the RabbitMQ instance, similar to a URL.
The format is `amqp://username:password@host/vhost`.
By default, this is `amqp://guest:guest@rabbitmq/%2F` (the `%2F` here is the percent (URL) encoding of the `/` character).

- `<queue>` -- The name of the queue on which to post the message.

- `<message>` -- The contents of the message to post, in JSON format.
A typical message posted by PEcAn looks like the following:

  ```json
  { "folder" : "/path/to/PEcAn_WORKFLOWID", "workflow" : "WORKFLOWID" }
  ```

The `PEcAn.remote::start_rabbitmq` function is a wrapper for this script that provides an easy way to post a `folder` message to RabbitMQ from R.

### Consumer -- `receiver.py` {#rabbitmq-basics-receiver}

Unlike `sender.py`, `receiver.py` runs like a daemon, constantly listening for messages.
In the RabbitMQ documentation, it is known as a "consumer".
In PEcAn, you can tell that it is ready to receive messages if the corresponding logs (e.g. `docker-compose logs executor`) show the following message:

```
[*] Waiting for messages. To exit press CTRL+C.
```

Our `receiver` is configured by three environment variables:

- `RABBITMQ_URI` -- This defines the URI where RabbitMQ is running.
See the corresponding argument in the [producer](#rabbitmq-basics-sender).

- `RABBITMQ_QUEUE` -- This is the name of the queue on which the consumer will listen for messages, just as in the [producer](#rabbitmq-basics-sender).

- `APPLICATION` -- This specifies the name (including the path) of the default executable to run when receiving a message.
  At the moment, it should be an executable that runs in the directory specified by the message's `folder` variable.
  In the case of PEcAn models, this is usually `./job.sh`, such that the `folder` corresponds to the `run` directory associated with a particular `runID` (i.e. where the `job.sh` is located).
  For the PEcAn workflow itself, this is set to `R CMD BATCH workflow.R`, such that the `folder` is the root directory of the workflow (in the `executor` Docker container, something like `/data/workflows/PEcAn_<workflow ID>`).
  This default executable is _overridden_ if the message contains a `custom_application` key.
  If included, the string specified by the `custom_application` key will be run as a command exactly as is on the container, from the directory specified by `folder`.
  For instance, in the example below, the container will print "Hello there!" instead of running its default application.

  ```json
  {"custom_application": "echo 'Hello there!'", "folder": "/path/to/my/dir"}
  ```

  NOTE that in RabbitMQ messages, the `folder` key is always required.

### RabbitMQ and the PEcAn web interface {#rabbitmq-web}

RabbitMQ is configured by the following variables in `config.php`:

- `$rabbitmq_host` -- The RabbitMQ server hostname (default: `rabbitmq`, because that is the name of the `rabbitmq` service in `docker-compose.yml`)
- `$rabbitmq_port` -- The port on which RabbitMQ listens for messages (default: 5672)
- `$rabbitmq_vhost` -- The path of the RabbitMQ [Virtual Host](https://www.rabbitmq.com/vhosts.html) (default: `/`).
- `$rabbitmq_queue` -- The name of the RabbitMQ queue associated with the PEcAn workflow (default: `pecan`)
- `$rabbitmq_username` -- The RabbitMQ username (default: `guest`)
- `$rabbitmq_password` -- The RabbitMQ password (default: `guest`)

In addition, for running models via RabbitMQ, you will also need to add an entry like the following to the `config.php` `$hostlist`:

```php
$hostlist=array($fqdn => array("rabbitmq" => "amqp://guest:guest@rabbitmq/%2F"), ...)
```

This maps the hostname of the current machine (defined by the `$fqdn` variable earlier in the `config.php` file) to an array with one entry, whose key is `rabbitmq` and whose value is the RabbitMQ URI (`amqp://...`).

These values are converted into the appropriate entries in the `pecan.xml` in `web/04-runpecan.php`.

### RabbitMQ in the PEcAn XML {#rabbitmq-xml}

RabbitMQ is a special case of remote execution, so it is configured by the `host` node.
An example RabbitMQ configuration is as follows:

```xml
<host>
  <rabbitmq>
    <uri>amqp://guest:guest@rabbitmq/%2F</uri>
    <queue>sipnet_136</queue>
  </rabbitmq>
</host>
```

Here, `uri` and `queue` have the same general meanings as described in ["producer"](#rabbitmq-basics-sender).
Note that `queue` here refers to the target model.
In PEcAn, RabbitMQ model queues are named as `MODELTYPE_REVISION`,
so the example above refers to the SIPNET model version 136.
Another example is `ED2_git`, referring to the latest `git` version of the ED2 model.

### RabbitMQ configuration in Dockerfiles {#rabbitmq-dockerfile}

As described in the ["consumer"](#rabbitmq-basics-receiver) section, our standard RabbitMQ receiver script is configured using three environment variables: `RABBITMQ_URI`, `RABBITMQ_QUEUE`, and `APPLICATION`.
Therefore, configuring a container to work with PEcAn's RabbitMQ instance requires setting these three variables in the Dockerfile using an [`ENV`](https://docs.docker.com/engine/reference/builder/#env) statement.

For example, this excerpt from `docker/base/Dockerfile.executor` (for the `pecan/executor` image responsible for the PEcAn workflow) sets these variables as follows:

```dockerfile
ENV RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" \
    RABBITMQ_QUEUE="pecan" \
    APPLICATION="R CMD BATCH workflow.R"
```

Similarly, this excerpt from `docker/models/Dockerfile.sipnet` (which builds the SIPNET model image) is a typical example for a model image.
Note the use of [`ARG`](https://docs.docker.com/engine/reference/builder/#arg) here to specify a default model version of 136 while allowing this to be configurable (via `--build-arg MODEL_VERSION=X`) at build time:

```dockerfile
ARG MODEL_VERSION=136

ENV APPLICATION="./job.sh" \
    MODEL_TYPE="SIPNET" \
    MODEL_VERSION="${MODEL_VERSION}"

ENV RABBITMQ_QUEUE="${MODEL_TYPE}_${MODEL_VERSION}"
```

**WARNING**: Dockerfile environment variables set via `ENV` are assigned _all at once_; _they do not evaluate successively, left to right_.
Consider the following block:

```dockerfile
# Don't do this!
ENV MODEL_TYPE="SIPNET" \
    MODEL_VERSION=136 \
    RABBITMQ_QUEUE=${MODEL_TYPE}_${MODEL_VERSION}   # <- Doesn't know about MODEL_TYPE or MODEL_VERSION!
```

In this block, the expansion for setting `RABBITMQ_QUEUE` _is not aware_ of the current values of `MODEL_TYPE` or `MODEL_VERSION`, and will therefore be set incorrectly to just `_` (unless they have been set previously, in which case it will be aware only of their earlier values).
As such, **variables depending on other variables must be set in a separate, subsequent `ENV` statement from the variables they depend on**.

### Case study: PEcAn web interface {#rabbitmq-case-study}

The following describes in general terms what happens during a typical run of the PEcAn web interface with RabbitMQ.

1. The user initializes all containers with `docker-compose up`.
All the services that interact with RabbitMQ (`executor` and all models) run `receiver.py` in the foreground, waiting for messages to tell them what to do.

2. The user browses to http://localhost:8000/pecan/ and steps through the web interface.
All the pages up to the `04-runpecan.php` run on the `web` container, and are primarily for setting up the `pecan.xml` file.

3. Once the user starts the PEcAn workflow at `04-runpecan.php`, the underlying PHP code connects to RabbitMQ (based on the URI provided in `config.php`) and posts the following message to the `pecan` queue (a sketch of posting such a message from R appears after this list):

   ```json
   {"folder": "/workflows/PEcAn_WORKFLOWID", "workflowid": "WORKFLOWID"}
   ```

4. The `executor` service, which is listening on the `pecan` queue, hears this message and executes its `APPLICATION` (`R CMD BATCH workflow.R`) in the working directory specified in the message's `folder`.
   The `executor` service then performs the pre-execution steps (e.g. trait meta-analysis, conversions) itself.
   Then, to actually execute the model, `executor` posts the following message to the target model's queue:

   ```json
   {"folder": "/workflows/PEcAn_WORKFLOWID/run/RUNID"}
   ```

5. The target model service, which is listening on its dedicated queue, hears this message and runs its `APPLICATION`, which is `job.sh`, in the directory indicated by the message.
   Upon exiting (normally), the model service writes its status into a file called `rabbitmq.out` in the same directory.

6. The `executor` container continuously looks for the `rabbitmq.out` file as an indication of the model run's status.
   Once it sees this file, it reads the status and proceeds with the post-execution parts of the workflow.
   (NOTE that this isn't perfect. If the model running process exits abnormally, the `rabbitmq.out` file may not be created, which can cause the `executor` container to hang. If this happens, the solution is to restart the `executor` container with `docker-compose restart executor`.)
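As a concrete illustration of step 3, the same kind of message can be posted from R through RabbitMQ's HTTP management API; this is essentially what `scripts/docker_rebuild.sh` does with `curl`. The following is only a sketch: it assumes the management interface is proxied at `http://localhost:8000/rabbitmq` with the default `guest`/`guest` credentials, and `WORKFLOWID` is a placeholder for a real workflow ID.

```r
library(httr)

# JSON body of the message, as in step 3 above
msg <- jsonlite::toJSON(
  list(folder = "/workflows/PEcAn_WORKFLOWID", workflowid = "WORKFLOWID"),
  auto_unbox = TRUE
)

# The management API addresses the default ("") exchange as "amq.default";
# for the default exchange, the routing key is simply the queue name.
resp <- POST(
  "http://localhost:8000/rabbitmq/api/exchanges/%2F/amq.default/publish",
  authenticate("guest", "guest"),
  body = list(
    properties = list(delivery_mode = 2),  # mark the message as persistent
    routing_key = "pecan",
    payload = as.character(msg),
    payload_encoding = "string"
  ),
  encode = "json"
)
content(resp)  # should contain routed = TRUE if a queue accepted the message
```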
# Remote execution with PEcAn {#pecan-remote}

Remote execution allows the user to leverage the power and storage of high performance computing clusters, AWS instances, or specially configured virtual machines, without leaving their local working environment.
PEcAn uses remote execution primarily to run ecosystem models.

The infrastructure for remote execution lives in the `PEcAn.remote` package (`base/remote` in the PEcAn repository).

This section describes the following:

1. Checking capabilities to connect to the remote machine correctly:
    + Basics of command line SSH
    + SSH authentication with keys and passwords
    + Basics of SSH tunnels, and how they are used in PEcAn

2. Description of PEcAn related tools that control remote execution
    + Basic remote execution R functions in `PEcAn.remote`

3. Setup -- configuration files and settings
    + Remote model execution configuration in the `pecan.xml` and `config.php`
    + Additional information about preparing remote servers for execution


## Basics of SSH

All of the PEcAn remote infrastructure depends on the system `ssh` utility, so it's important to make sure this works before attempting the advanced remote execution functionality in PEcAn.

To connect to a remote server interactively, the command is simply:

```sh
ssh <username>@<hostname>
```

For instance, my connection to the BU shared computing cluster looks like:

```sh
ssh ashiklom@geo.bu.edu
```

It will prompt me for my BU password, and, if successful, will drop me into a login shell on the remote machine.

As an alternative to the login shell, `ssh` can be used to execute arbitrary commands, whose output will be returned exactly as it would be if you ran the command locally.
For example, the following:

```sh
ssh ashiklom@geo.bu.edu pwd
```

will run the `pwd` command, and return the path to my home directory on the BU SCC.
The more advanced example below will run some simple R code on the BU SCC and return the output as if it was run locally.

```sh
ssh ashiklom@geo.bu.edu Rscript -e "seq(1, 10)"
```

## SSH authentication -- password vs. SSH key

Because the BU SCC server in the examples above uses passwords for authentication, these commands will prompt me for my password.

An alternative to password authentication is using SSH keys.
Under this system, the host machine (say, your laptop, or the PEcAn VM) has to generate a public and private key pair (using the `ssh-keygen` command).
The private key (by default, a file in `~/.ssh/id_rsa`) lives on the host machine, and should **never** be shared with anyone.
The public key will be distributed to any remote machines to which you want the host to be able to connect.
On each remote machine, the public key should be added to a list of authorized keys located in the `~/.ssh/authorized_keys` file (on the remote machine).
The authorized keys list indicates which machines (technically, which keys -- a single machine, and even a single user, can have many keys) are allowed to connect to it.
This is the system used by all of the PEcAn servers (`pecan1`, `pecan2`, `test-pecan`).

## SSH tunneling

SSH authentication can be more advanced than indicated above, especially on systems that require dual authentication.
Even simple password-protection can be tricky in scripts, since (by design) it is fairly difficult to get SSH to accept a password from anything other than the raw keyboard input (i.e. SSH doesn't let you pass passwords as input or arguments, because this exposes your password as plain text).
A convenient and secure way to follow SSH security protocol, but prevent having to go through the full authentication process every time, is to use SSH tunnels (or "sockets", which are effectively synonymous).
Essentially, an SSH socket is a read- and write-protected file that contains all of the information about an SSH connection.

To create an SSH tunnel, use a command like the following:

```sh
ssh -n -N -f -o ControlMaster=yes -S /path/to/socket/file <username>@<hostname>
```

If appropriate, this will prompt you for your password (if using password authentication), and then will drop you back to the command line (thanks to the `-N` flag, which runs SSH without executing a command, the `-f` flag, which pushes SSH into the background, and the `-n` flag, which prevents ssh from reading any input).
It will also create the file `/path/to/socket/file`.

To use this socket with another command, use the `-S /path/to/socket/file` flag, pointing to the same tunnel file you just created.

```sh
ssh -S /path/to/socket/file <hostname> <optional command>
```

This will let you access the server without any sort of authentication step.
As before, if `<optional command>` is blank, you will be dropped into an interactive shell on the remote, or if it's a command, that command will be executed and the output returned.

To close a socket, use the following:

```sh
ssh -S /path/to/socket/file -O exit <hostname>
```

This will delete the socket file and close the connection.
Alternatively, a scorched earth approach to closing the SSH tunnel if you don't remember where you put the socket file is something like the following:

```sh
pgrep ssh    # See which processes will be killed
pkill ssh    # Kill those processes
```

...which will kill all user processes called `ssh`.

To automatically create tunnels following a specific pattern, you can add the following to your `~/.ssh/config`:

```sh
Host <hostname>
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p
```

For more information, see `man ssh`.

## SSH tunnels and PEcAn

Many of the `PEcAn.remote` functions assume that a tunnel is already open.
If working from the web interface, the tunnel will be opened for you by some under-the-hood PHP and Bash code, but if debugging or working locally, you will have to create the tunnel yourself.
The best way to do this is to create the tunnel first, outside of R, as described above.
(In the following examples, I'll use my username `ashiklom` connecting to the `test-pecan` server with a socket stored in `/tmp/testpecan`.
To follow along, replace these with your own username and designated server, respectively.)

```sh
ssh -nNf -o ControlMaster=yes -S /tmp/testpecan ashiklom@test-pecan.bu.edu
```

Then, in R, create a `host` object, which is just a list containing the elements `name` (hostname) and `tunnel` (path to tunnel file).

```r
my_host <- list(name = "test-pecan.bu.edu", tunnel = "/tmp/testpecan")
```

This host object can then be used in any of the remote execution functions.


## Basic remote execute functions

The `PEcAn.remote::remote.execute.cmd` function runs a system command on a remote server (or on the local server, if `host$name == "localhost"`).

```
x <- PEcAn.remote::remote.execute.cmd(host = my_host, cmd = "echo", args = "Hello world")
x
```

Note that `remote.execute.cmd` is similar to base R's `system2`, in that the base command (in this case, `echo`) is passed separately from its arguments (`"Hello world"`).
Note also that the output of the remote command is returned as a character vector.
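For instance, with the tunnel from the previous section open, the result of the call above is just the command's standard output (a sketch of what you would see):

```r
x
#> [1] "Hello world"
```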
For R code, there is a special wrapper around `remote.execute.cmd` -- `PEcAn.remote::remote.execute.R`, which runs R code (passed as a string) on a remote and returns the output.

```
code <- "
   x <- 2:4
   y <- 3:1
   x ^ y
"
out <- PEcAn.remote::remote.execute.R(code = code, host = my_host)
```

For additional functions related to remote file operations and other utilities, see the `PEcAn.remote` package documentation.


## Remote model execution with PEcAn

The workhorse of remote model execution is the `PEcAn.remote::start.model.runs` function, which distributes execution of each run in a list of runs (e.g. multiple runs in an ensemble) to the local machine or a remote host, based on the configuration in the PEcAn settings.

Broadly, there are three major types of model execution:

- Serialized (`PEcAn.remote::start_serial`) -- This runs models one at a time, directly on the local machine or remote (i.e. same as calling the executables one at a time for each run).
- Via a queue system (`PEcAn.remote::start_qsub`) -- This uses a queue management system, such as SGE (e.g. `qsub`, `qstat`) found on the BU SCC machines, to submit jobs.
  For computationally intensive tasks, this is the recommended way to go.
- Via a model launcher script (`PEcAn.remote::setup_modellauncher`) -- This is a highly customizable approach where task submission is controlled by a user-provided script (`launcher.sh`).

## XML configuration

The relevant section of the PEcAn XML file is the `<host>` block.
Here is a minimal example from one of my recent runs:

```
<host>
  <name>geo.bu.edu</name>
  <user>ashiklom</user>
  <tunnel>/home/carya/output//PEcAn_99000000008/tunnel/tunnel</tunnel>
</host>
```

Breaking this down:

- `name` -- The hostname of the machine where the runs will be performed.
  Set it to `localhost` to run on the local machine.
- `user` -- Your username on the remote machine (note that this may be different from the username on your local machine).
- `tunnel` -- This is the tunnel file for the connection used by all remote execution files.
  The tunnel is created automatically by the web interface, but must be created by the user for command line execution.

This configuration will run in serialized mode.
To use `qsub`, the configuration is slightly more involved:

```
<host>
  <name>geo.bu.edu</name>
  <user>ashiklom</user>
  <qsub>qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash</qsub>
  <qsub.jobid>Your job ([0-9]+) .*</qsub.jobid>
  <qstat>qstat -j @JOBID@ || echo DONE</qstat>
  <tunnel>/home/carya/output//PEcAn_99000000008/tunnel/tunnel</tunnel>
</host>
```

The additional fields are as follows:

- `qsub` -- The command used to submit jobs to the queue system.
  Despite the name, this can be any command used for any queue system.
  The following variables are available to be set here:
  - `@NAME@` -- Job name to display
  - `@STDOUT@` -- File to which `stdout` will be redirected
  - `@STDERR@` -- File to which `stderr` will be redirected
- `qsub.jobid` -- A regular expression, from which the job ID will be determined.
  This string will be parsed by R as `jobid <- gsub(qsub.jobid, "\\1", output)` -- note that the first pattern match is taken as the job ID (see the worked example at the end of this section).
- `qstat` -- The command used to check the status of a job.
  Internally, PEcAn will look for the `DONE` string at the end, so a structure like `<some command> || echo DONE` is required.
  The `@JOBID@` here is the job ID determined from the `qsub.jobid` parsing.

Documentation for using the model launcher is currently unavailable.
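As a worked illustration of the `qsub.jobid` parsing described above (the qsub output line here is hypothetical):

```r
qsub_jobid <- "Your job ([0-9]+) .*"
# Hypothetical line printed by qsub when a job is submitted
output <- 'Your job 2781410 ("job.sh") has been submitted'

# PEcAn keeps the first capture group as the job ID
jobid <- gsub(qsub_jobid, "\\1", output)
jobid
#> [1] "2781410"
```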
## Configuration for PEcAn web interface

The `config.php` file has a few variables that control where the web interface can run jobs and how to run those jobs. It is located in the `web` directory; if you have not touched it yet, it will be named `config.example.php`. Rename it to `config.php` and edit it following the directions below.

These variables are `$hostlist`, `$qsublist`, `$qsuboptions`, and `$SSHtunnel`. In the near future `$hostlist`, `$qsublist`, and `$qsuboptions` will be combined into a single list.

`$SSHtunnel` : points to the script that creates an SSH tunnel.
The script is located in the web folder, and the default value of `dirname(__FILE__) . DIRECTORY_SEPARATOR . "sshtunnel.sh";` most likely will work.

`$hostlist` : is an array with, by default, a single value, only allowing jobs to run on the local server. Adding any other servers to this list will show them in the pull-down menu when selecting machines, and will trigger a web page to be shown that asks for a username and password for the remote execution (make sure to use an HTTPS setup when asking for passwords, to prevent them from being sent in the clear).

`$qsublist` : is an array of hosts that require qsub to be used when running the models. This list can include `$fqdn` to indicate that jobs on the local machine should use qsub to run the models.

`$qsuboptions` : is an array that lists options for each machine. Currently it supports the following options (see also [Run Setup] and look at the corresponding tags):

```
array("geo.bu.edu" =>
    array("qsub"   => "qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash",
          "jobid"  => "Your job ([0-9]+) .*",
          "qstat"  => "qstat -j @JOBID@ || echo DONE",
          "job.sh" => "module load udunits R/R-3.0.0_gnu-4.4.6",
          "models" => array("ED2" => "module load hdf5")))
```

In this list, `qsub` is the actual command line for qsub, `jobid` is the regular expression used to parse the job ID from the text returned by qsub, and `qstat` is the command used to check whether the job is finished. `job.sh` and the values in `models` are additional entries to add to the job.sh file generated to run the model; this can be used to make sure modules are loaded on the HPC cluster before running the actual model.

## Running PEcAn code remotely

You do not need a full PEcAn download on your remote machine: you can compile and install just the model-specific pieces of PEcAn on the cluster, without having to install the full PEcAn code base (and all OS dependencies). For example, to install `PEcAn.utils`:

```
devtools::install_github("pecanproject/pecan", subdir = 'base/utils')
```

Next we need to install the model-specific pieces; this is done almost the same way (example for ED2):

```
devtools::install_github("pecanproject/pecan", subdir = 'models/ed')
```

This should also install the required dependencies.
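Combining this with the remote execution tools described earlier, the same installation can even be triggered from your local machine (a sketch, assuming the `my_host` object and open tunnel from the SSH tunneling section; the choice of SIPNET is illustrative):

```r
# Install a model package on the remote host over the existing tunnel
PEcAn.remote::remote.execute.R(
  code = "devtools::install_github('pecanproject/pecan', subdir = 'models/sipnet')",
  host = my_host
)
```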
*The following are some notes on how to install the model-specific pieces on different HPC clusters.*

## Special case: `geo.bu.edu`

The following modules need to be loaded:

```
module load hdf5 udunits R/R-3.0.0_gnu-4.4.6
```

Next, the following packages need to be installed; otherwise R will fall back on the older versions installed in the site library:

```
install.packages(c('udunits2', 'lubridate'),
  configure.args=c(udunits2='--with-udunits2-lib=/project/earth/packages/udunits-2.1.24/lib --with-udunits2-include=/project/earth/packages/udunits-2.1.24/include'),
  repos='http://cran.us.r-project.org')
```

Finally, to install support for both ED and SIPNET:

```
devtools::install_github("pecanproject/pecan", subdir = 'base/utils')
devtools::install_github("pecanproject/pecan", subdir = 'models/sipnet')
devtools::install_github("pecanproject/pecan", subdir = 'models/ed')
```



# Data assimilation with DART

In addition to the state assimilation routines found in the assim.sequential module, another approach for state data assimilation in PEcAn is through the DART workflow created by the DARES group at NCAR.

This section gives a straightforward explanation of how to implement DART, focused on the technical aspects of the implementation. If there are any questions, feel free to send \@Viskari an email (`tt.viskari@gmail.com`) or contact DART support, as they are quite awesome in helping people with problems. Also, if there are any suggestions on how to improve the wiki, please let me know.

**Running with current folders in PEcAn**

Currently the DART folders in PEcAn are structured so that you can simply copy them over a downloaded DART workflow, and they should replace/add the relevant files and folders. The most important step after that is to check and change the run paths in the following files:

- the Path_name files in the work folders
- the T_ED2IN file, as it indicates where the run results will be written
- advance_model.csh, as it indicates where to copy files from/to

The second thing is setting the state vector size. This is explained in more detail below, but essentially it is governed by the variable model_size in model_mod.f90. In addition, it should be changed in the utils/F2R.f90 and R2F.f90 programs, which are responsible for reading and writing state variable information for the different ensembles. This will also be expanded on below. Finally, the new state vector size should be updated in any other executable that uses it.

The third thing needed is the initial condition and observation sequence files. They always follow the same format and are explained in more detail below.

Finally there is the ensemble size, which is the easiest to change. In the work subfolder, there is a file named input.nml; simply changing the ensemble size there will set it for the run itself. Also remember that the initial conditions file should contain as many state vectors as there are ensemble members.

**Adjusting the workflow**

The central file for the actual workflow is advance_model.csh. It is a script DART calls to determine how the state vector changes between the two observation times, and it is essentially the only file one needs to change when changing state models or observation operators. The file itself should be commented well enough to give a good idea of the flow, but below is a crude order of events:
1. Create a temporary folder to run the model in and copy/link the required files into it.
2. Read in the state vector values and times from DART.
Here it is important to note that the values will be in binary format and need to be read in by a Fortran program. In my system, there is a program called F2R which reads in the binary values and writes out, in ASCII form, the state vector values as well as which ED2 history files need to be copied based on the time stamps.
3. Run the observation operator, which writes the state vector values into the history files and adjusts them if necessary.
4. Run the model.
5. Read the new state vector values from the output files.
6. Convert the state vector values to binary. In my system, this is done by the program R2F.

**Initial conditions file**

The initial conditions file, commonly named filter_ics (although you can set it to something else in input.nml), is relatively simple in structure. It has one sequence repeating over the number of ensemble members:
- The first line contains two times: seconds and days. Just use one of them in this situation, but it has to match the starting time given in input.nml.
- After that, each line should contain a value from the state vector, in the order you want to treat them.

The R functions filter_ics.R and B_filter_ics.R in the R folder give good examples of how to create these.

**Observations files**

The file which contains the observations is commonly known as obs_seq.out, although again the name of the file can be changed in input.nml. The structure of the file is relatively straightforward, and the R function ObsSeq.R in the R subfolder shows the structure for writing it. Instead of writing it out here, I want to focus on a few really important details in this file.

Each observation will have a time, a value, an uncertainty, a location and a kind. The first four are self-explanatory. The kind is really important, but unfortunately also really easy to misunderstand. In this file, the kind does not refer to a unit or a type of observation, but to which member of the state vector the observation refers. So if the kind were, for example, 5, it would mean that the observation is of the fifth member of the state vector. However, if the kind value is positive, the system assumes that there is some sort of operator change involved in comparing the observation and the state vector value, which is specified in a subprogram in model_mod.f90.

So for a direct identity comparison between the observation and the state vector value, the kind needs to be the negative of the state vector component number. Thus, again, if the observation is of the fifth state vector value, the kind should be set to -5. It is therefore recommended that the state vector values have already been altered to be comparable with the observations.

As for location, there are many ways to set it in DART, and the method needs to be chosen when compiling the code by telling the program which of the location mods it is to use. In our examples we used a 1-dimensional location vector with scaled values between 0 and 1. In the future it makes sense to switch to a two-dimensional longitude and latitude scale, but for the time being the location does not impact the system a lot. The main impact will come if the covariances are localized, as that is decided based on their locations.


**State variable vector in DART**

Creating/adjusting a state variable vector in DART is relatively straightforward. The steps to specify a state variable vector in DART are listed below.

I. For each specific model, there should be its own folder within the DART root models folder.
In this folder there is a model_mod.f90, which contains the model specific subroutines necessary for a DART run.

At the beginning of this file there should be the following line:

integer, parameter :: model_size = [number]

The number here should be the number of variables in the vector. So, for example, if there were three state variables, then the line should look like this:

integer, parameter :: model_size = 3

This number should also be changed to match in any of the other executables called during the run, as indicated by the list above.


II. In the DART root, there should be a folder named obs_kind, which contains a file called DEFAULT_obs_kind_mod.F90. It is important to note that all changes should be made to this file instead of obs_kind_mod.f90, as during compilation DART creates obs_kind_mod.f90 from DEFAULT_obs_kind_mod.F90.
This program file contains all the defined observation types used by DART and numbers them for easier reference later. Different types are classified according to observation instrument or relevant observation phenomenon. Adding a new type only requires finding an unused number and starting a new identifying line with the following:

integer, parameter, public :: &
   KIND_...

Note that the observation kind should always be easy to understand, so avoid using unnecessary acronyms. For example, when adding an observation type for Leaf Area Index, it would look like below:

integer, parameter, public :: &
   KIND_LEAF_AREA_INDEX = [number]


III. In the DART root, there should be a folder named obs_def, which contains several files starting with obs_def_. These files contain the different available observation kinds, classified either according to observation instrument or observable system. Each file starts with the line

! BEGIN DART PREPROCESS KIND LIST

and ends with the line

! END DART PREPROCESS KIND LIST

The lines between these two should contain

! The desired observation reference, the observation type, COMMON_CODE.

For example, for observations relating to phenology, I have created a file called obs_def_phen_mod.f90. In this file I define the Leaf Area Index observations in the following way.

! BEGIN DART PREPROCESS KIND LIST
! LAI, TYPE_LEAF_AREA_INDEX, COMMON_CODE
! END DART PREPROCESS KIND LIST

Note that the exclamation marks are necessary for the file.


IV. In the model specific folder, in the work subfolder, there is a namelist file input.nml. This contains all the run specific information for DART. In it, there is a subtitle &preprocess, under which there is a line

input_files = ‘….’

This input_files section must be set to refer to the obs_def file created in step III. The input files can contain references to multiple obs_def files if necessary.

As an example, the reference to the obs_def_phen_mod.f90 would look like
input_files = ‘../../../obs_def/obs_def_phen_mod.f90’

V. Finally, as an optional step, the different values in the state vector can be typed. In model_mod, referred to in step I, there is a subroutine get_state_meta_data. In it, there is an input variable index_in, which refers to the vector component; so, for instance, for the second component of the vector, index_in would be 2. If this is done, the variable kind also has to be included at the beginning of the model_mod.f90 file, at the section which begins

use obs_kind_mod, only ::

The location of the variable can be set, but for the 0-dimensional model we are discussing here, this is not necessary.
Here, though, it is possible to set the variable types by including the following line

if(index_in .eq. [number]) var_type = [one of the variable kinds set in step II]

VI. If the length of the state vector is changed, it is important that the script run with DART produces a vector of that length. Change appropriately if necessary.

After these steps, DART should be able to run with the state vector of interest.



# (PART) Appendix {-}

# Miscellaneous

## TODO

* Add list of developers

## Using the PEcAn download.file() function

`download.file(url, destination_file, method)`
+
+
+This custom PEcAn function works together with the base R function download.file (https://stat.ethz.ch/R-manual/R-devel/library/utils/html/download.file.html), but provides expanded functionality that generalizes its use to a broad range of computing environments. Some computing environments are behind a firewall or proxy, including FTP firewalls, which may require the use of a custom FTP program and/or initial proxy-server authentication to retrieve the files needed by PEcAn (e.g. meteorology drivers and other inputs) to run certain model simulations or tools. For example, Brookhaven National Laboratory (BNL) requires an initial connection to an FTP proxy before downloading files via the FTP protocol. As a result, the computers running PEcAn behind the BNL firewall (e.g. https://modex.bnl.gov) use the ncftp client (http://www.ncftp.com/) to download files for PEcAn, because the base methods available to R's download.file(), such as curl and libcurl, do not have the functionality to provide credentials for a proxy, while those that do (such as wget) don't easily allow for connecting through a proxy server before downloading files. The current option for use in these instances is **ncftp**, specifically **ncftpget**.
+
+ +Examples:
+*HTTP*
+```
+download.file("http://lib.stat.cmu.edu/datasets/csb/ch11b.txt","~/test.download.txt")
+```
+*FTP*
+```
+download.file("ftp://ftp.cdc.noaa.gov/Datasets/NARR/monolevel/pres.sfc.2000.nc", "~/pres.sfc.2000.nc")
+```
+*customizing to use ncftp when running behind an FTP firewall (requires ncftp to be installed and available)*
+```
+download.file("ftp://ftp.cdc.noaa.gov/Datasets/NARR/monolevel/pres.sfc.2000.nc", "~/pres.sfc.2000.nc", method="ncftpget")
+```
On modex.bnl.gov, the ncftp firewall configuration file (e.g. ~/.ncftp/firewall) is configured as:

    firewall-type=1
    firewall-host=ftpgateway.sec.bnl.local
    firewall-port=21

which then allows for direct connection through the firewall using a command like:

```
ncftpget ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-fortran-4.4.4.tar.gz
```

To allow the use of ncftpget from within the download.file() function, you need to set the download.ftp.method option in your R profile. To see your current R options, run options() from the R command line; the output should look something like this:
```
> options()
$add.smooth
[1] TRUE

$bitmapType
[1] "cairo"

$browser
[1] "/usr/bin/xdg-open"

$browserNLdisabled
[1] FALSE

$CBoundsCheck
[1] FALSE

$check.bounds
[1] FALSE

$citation.bibtex.max
[1] 1

$continue
[1] "+ "

$contrasts
        unordered           ordered
"contr.treatment"      "contr.poly"
```

In order to set your download.ftp.method option, you need to add a line such as
```
# set default FTP
options(download.ftp.method = "ncftpget")
```
to your ~/.Rprofile. On modex at BNL we have set the global option in /usr/lib64/R/etc/Rprofile.site.

Once this is done you should be able to see the option set using this command in R:
```
> options("download.ftp.method")
$download.ftp.method
[1] "ncftpget"
```



# FAQ



# PEcAn Project Used in Courses

## University classes

### GE 375 - Environmental Modeling - Spring 2013, 2014 (Mike Dietze, Boston University)

The final "Case Study: Terrestrial Ecosystem Models" is a PEcAn-based hands-on activity. Each class has had 25 students.

### GE 585 - Ecological Forecasting - Fall 2013 (Mike Dietze, Boston University)


## Summer Courses / Workshops

### Annual summer course in flux measurement and advanced modeling (Mike Dietze, Ankur Desai) Niwot Ridge, CO

About 1/3 lecture, 2/3 hands-on (the syllabus actually lists it the other way around). Each class has 24 students.

[2013 Syllabus](http://www.fluxcourse.org/files/SyllabusFluxcourse_2013.pdf): see the Tuesday Week 2 Data Assimilation lectures and PEcAn demo, and the class projects and presentations on Thursday and Friday. (Most students use PEcAn for their group projects.) 2014 will be the third year that PEcAn has been used for this course.

### Assimilating Long-Term Data into Ecosystem Models: Paleo-Ecological Observatory Network (PalEON) Project

Here is a link to the course: https://www3.nd.edu/~paleolab/paleonproject/summer-course/

This course uses the same demo as above, including collecting data in the field and assimilating it (part 3).

### Integrating Evidence on Forest Response to Climate Change: Physiology to Regional Abundance

http://blue.for.msu.edu/macrosystems/workshop

May 13-14, 2013

Session 4: Integrating Forest Data Into Ecosystem Models

### Ecological Society of America meetings

[Workshop: Combining Field Measurements and Ecosystem Models](http://eco.confex.com/eco/2013/webprogram/Session9007.html)


## Selected Publications

1. Dietze, M.C., D.S. LeBauer, R. Kooper (2013) [On improving the communication between models and data](https://github.com/PecanProject/pecan/blob/master/documentation/dietze2013oic.pdf?raw=true). Plant, Cell, & Environment doi:10.1111/pce.12043
2. LeBauer, D.S., D. Wang, K. Richter, C. Davidson, & M.C. Dietze. (2013).
[Facilitating feedbacks between field measurements and ecosystem models](https://github.com/PecanProject/pecan/blob/master/documentation/lebauer2013ffb.pdf?raw=true). Ecological Monographs. doi:10.1890/12-0137.1
+
+
+
+# Package Dependencies {#package-dependencies}
+
+## Executive Summary: What to usually do
+
+When you're editing one PEcAn package and want to use a function from any other R package (including other PEcAn packages), the standard method is to add the other package to the `Imports:` field of your DESCRIPTION file, spell the function in fully namespaced form (`pkg::function()`) everywhere you call it, and be done. There are a few cases where this isn't enough, but they're rarer than you think. The rest of this section mostly deals with the exceptions to this rule and why it's best not to use them when they can be avoided.
+
+## Big Picture: What's possible to do
+
+To make one PEcAn package use functionality from another R package (including other PEcAn packages), you must do at least one and up to four things in your own package.
+
+* Always, *declare* which packages your package depends on, so that R can install them as needed when someone installs your package and so that human readers can understand what additional functionality it uses. Declare dependencies by manually adding them to your package's DESCRIPTION file.
+* Sometimes, *import* functions from the dependency package into your package's namespace, so that your functions know where to find them. This is only sometimes necessary, because you can usually use `::` to call functions without importing them. Import functions by writing Roxygen `@importFrom` statements and do not edit the NAMESPACE file by hand.
+* Rarely, *load* dependency code into the R environment, so that the person using your package can use it without loading it separately. This is usually a bad idea, has caused many subtle bugs, and in PEcAn it should only be used when unavoidable. When unavoidable, prefer `requireNamespace(..., quietly = TRUE)` over `Depends:` or `require()` or `library()`.
+* Only if your dependency relies on non-R tools, *install* any components that R won't know how to find for itself. These components are often but not always identifiable from a `SystemRequirements` field in the dependency's DESCRIPTION file. The exact installation procedure will vary case by case and from one operating system to another, and for PEcAn the key point is that you should skip this step until it proves necessary. When it does prove necessary, edit the documentation for your package to include advice on installing the dependency components, then edit the PEcAn build and testing scripts as needed so that they follow your advice.
+
+The advice below about each step is written specifically for PEcAn, although much of it holds for R packages in general. For more details about working with dependencies, start with Hadley Wickham's [R packages](http://r-pkgs.had.co.nz/description.html#dependencies) and treat the CRAN team's [Writing R Extensions](https://cran.r-project.org/doc/manuals/R-exts.html) as the final authority.
+
+
+## Declaring Dependencies: Depends, Suggests, Imports
+
+List all dependencies in the DESCRIPTION file. Every package that is used by your package's code must appear in exactly one of the sections `Depends`, `Imports`, or `Suggests`.
+
+Please list packages in alphabetical order within each section. R doesn't care about the order, but you will later when you're trying to check whether this package uses a particular dependency.
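+
+For concreteness, here is a minimal sketch of what these fields might look like in a hypothetical PEcAn package's DESCRIPTION (the package names are illustrative, not a prescription):
+
+```
+Imports:
+    dplyr,
+    ncdf4,
+    PEcAn.logger,
+    PEcAn.utils
+Suggests:
+    testthat
+```
+
+With `PEcAn.logger` declared in `Imports`, code in this package would then call it in namespaced form, e.g. `PEcAn.logger::logger.info(...)`, with no further setup needed.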
+
+* `Imports` is the correct place to declare most PEcAn dependencies. This ensures that they get installed, but *does not* automatically import any of their functions -- since PEcAn style prefers to mostly use `::` instead of importing, this is what we want.
+
+* `Depends` is, despite the name, usually the wrong place to declare PEcAn dependencies. The only difference between `Depends` and `Imports` is that when the user attaches your packages to their own R workspace (e.g. using `library("PEcAn.yourpkg")`), the packages in `Depends` are attached as well. Notice that a call like `PEcAn.yourpkg::yourfun()` *will not* attach your package *or* its dependencies, so your code still needs to import or `::`-qualify all functions from packages listed in `Depends`. In short, `Depends` is not a shortcut, is for user convenience not developer convenience, and makes it easy to create subtle bugs that appear to work during interactive test sessions but fail when run from scripts. As the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies) puts it (emphasis added):
+
+  > This [Imports and Depends] scheme was developed before all packages had namespaces (R 2.14.0 in October 2011), and good practice changed once that was in place. Field ‘Depends’ should nowadays be used rarely, only for packages which are intended to be put on the search path to make their facilities **available to the end user (and not to the package itself)**.
+
+* The `Suggests` field can be used to declare dependencies on packages that make your package more useful but are not completely essential. Again from the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies):
+
+  > The `Suggests` field uses the same syntax as `Depends` and lists packages that are not necessarily needed. This includes packages used only in examples, tests or vignettes (see [Writing package vignettes](https://cran.r-project.org/doc/manuals/R-exts.html#Writing-package-vignettes)), and packages loaded in the body of functions. E.g., suppose an example from package foo uses a dataset from package bar. Then it is not necessary to have bar use foo unless one wants to execute all the examples/tests/vignettes: it is useful to have bar, but not necessary.
+
+  Some of the PEcAn model interface packages push this definition of "not necessarily needed" by declaring their coupled model package in `Suggests` rather than `Imports`. For example, the `PEcAn.BIOCRO` package cannot do anything useful when the BioCro model is not installed, but it lists BioCro in Suggests because *PEcAn as a whole* can work without it. This is a compromise to simplify installation of PEcAn for users who only plan to use a few models, so that they can avoid the bother of installing BioCro if they only plan to run, say, SIPNET.
+
+  Since the point of Suggests is that they are allowed to be missing, all code that uses a suggested package must behave reasonably when the package is not found. Depending on the situation, "reasonably" could mean checking whether the package is available and throwing an error as needed (PEcAn.BIOCRO uses its `.onLoad` function to check at load time whether BioCro is installed and will refuse to load if it is not), or providing an alternative behavior (`PEcAn.data.atmosphere::get_NARR_thredds` checks at call time for either `parallel` or `doParallel` and uses whichever one it finds first), or something else, but your code should never just assume that the suggested package is available.
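+
+  As a minimal sketch of that call-time check pattern (the function name and message here are invented for illustration, not taken from PEcAn's code):
+
+  ```r
+  run_model_step <- function(...) {
+    # fail with an informative message rather than silently assuming
+    # that the suggested package is installed
+    if (!requireNamespace("BioCro", quietly = TRUE)) {
+      stop("The 'BioCro' package is required for this function. Please install it.")
+    }
+    # ... rest of the function, calling BioCro functions as BioCro::fn() ...
+  }
+  ```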
+
+  You are not allowed to import functions from `Suggests` into your package's namespace, so always call them in `::`-qualified form. By default R will not install suggested packages when your package is installed, but users can change this using the `dependencies` argument of `install.packages`. Note that for testing on Travis CI, PEcAn *does* install all `Suggests` (because they are required for full package checks), so any of your code that runs when a suggested package is not available will never be exercised by Travis checks.
+
+  It is often tempting to move a dependency from Imports to Suggests because it is a hassle to install (large, hard to compile, no longer available from CRAN, currently broken on GitHub, etc.), in the hopes that this will isolate the rest of PEcAn from the troublesome dependency. This helps for some cases, but fails for two very common ones: It does not reduce install time for CI builds, because all suggested packages need to be present when running full package checks (`R CMD check` or `devtools::check` or `make check`). It also does not prevent breakage when updating PEcAn via `make install`, because `devtools::install_deps` does not install suggested packages that are missing but does try to *upgrade* any that are already installed to the newest available version -- even if the installed version took ages to compile and would have worked just fine!
+
+## Importing Functions: Use Roxygen
+
+PEcAn style is to import very few functions and instead use fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file.
+
+If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package, which probably lives in a file called either `zzz.R` or `-package.R`:
+
+```r
+#' What your package does
+#'
+#' Longer description of the package goes here.
+#' Probably with links to other resources about it, citations, etc.
+#'
+#' @docType package
+#' @name PEcAn.yourpkg
+#' @importFrom magrittr %>%
+NULL
+```
+
+Roxygen will make sure there's only one NAMESPACE entry per imported function no matter how many `importFrom` statements there are, but please pick a scheme (either import on every usage or once for the whole package), stick with it, and do not make function `x()` rely on an importFrom in the comments above function `y()`.
+
+Please do *not* import entire package namespaces (`#' @import pkg`); it increases the chance of function name collisions and makes it much harder to understand which package a given function was called from.
+
+A special note about importing functions from the [tidyverse](https://tidyverse.org): Be sure to import from the package(s) that actually contain the functions you want to use, e.g. `Imports: dplyr, magrittr, purrr` / `@importFrom magrittr %>%` / `purrr::map(...)`, not `Imports: tidyverse` / `@importFrom tidyverse %>%` / `tidyverse::map(...)`.
The package named `tidyverse` is just an interactive shortcut that loads the whole collection of constituent packages; it doesn't export any functions in its own namespace, and therefore importing it into your package doesn't make them available.
+
+## Loading Code: Don't... But Use `requireNamespace` When You Do
+
+The very short version of this section: We want to maintain clear separation between the [package's namespace](http://r-pkgs.had.co.nz/namespace.html) (which we control and want to keep predictable) and the global namespace (which the user controls, which might change in ways we have no control over, and whose owner will be justifiably angry if we change it in ways they were not expecting). Therefore, avoid attaching packages to the search path (so no `Depends` and no `library()` or `require()` inside functions), and do not explicitly load other namespaces if you can help it.
+
+The longer version requires that we make a distinction often glossed over: *Loading* a package makes it possible for *R* to find things in the package namespace and performs any actions needed to make it ready for use (e.g. running its `.onLoad` method, loading DLLs if the package contains compiled code, etc.). *Attaching* a package (usually by calling `library("somePackage")`) loads it if it wasn't already loaded, and then adds it to the search path so that the *user* can find things in its namespace. As discussed in the "Declaring Dependencies" section above, dependencies listed in `Depends` will be attached when your package is attached, but they will be *neither attached nor loaded* when your package is loaded without being attached.
+
+Loading a dependency into the package namespace is undesirable because it makes it hard to understand our own code -- if we need to use something from elsewhere, we'd prefer to call it from its own namespace using `::` (which implicitly loads the dependency!) or explicitly import it with a Roxygen `@importFrom` directive. But in a few cases this isn't enough. The most common reason to need to explicitly load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.MA needs to call `as.matrix` on objects of class `mcmc.list`. When the `coda` namespace is loaded, `as.matrix(some_mcmc.list)` can be correctly dispatched by `base::as.matrix` to the unexported method `coda:::as.matrix.mcmc.list`, but when `coda` is not loaded this dispatch will fail. Unfortunately coda does not export `as.matrix.mcmc.list`, so we cannot call it directly or import it into the PEcAn.MA namespace; instead we [load the `coda` namespace](https://github.com/PecanProject/pecan/pull/1966/files#diff-e0b625a54a8654cc9b22d9c076e7a838R13) whenever PEcAn.MA is loaded.
+
+Attaching packages to the user's search path is even more problematic because it makes it hard for the user to understand *how your package will affect their own code*. Packages attached by a function stay attached after the function exits, so they can cause name collisions far downstream of the function call, potentially in code that has nothing to do with your package. And worse, since names are looked up in reverse order of package loading, it can cause behavior that differs in strange ways depending on the order of lines that look independent of each other:
+
+```r
+library(Hmisc)
+x = ...
+y = 3
+summarize(x)  # calls Hmisc::summarize
+y2 <- some_package_that_attaches_dplyr::innocent.looking.function(y)
+# Loading required package: dplyr
+summarize(x)  # Looks identical to previous summarize, but calls dplyr::summarize!
+```
+
+This is not to say that users will *never* want your package to attach another one for them, just that it's rare: attaching dependencies is much more likely to cause bugs than to fix them, and it doesn't usually save the package author any work anyway.
+
+One possible exception to the do-not-attach-packages rule is a case where your dependency ignores all good practice and wrongly assumes, without checking, that all of its own dependencies are attached; if its DESCRIPTION uses only `Depends` instead of `Imports`, this is often a warning sign. For example, a small-but-surprising number of packages depend on the `methods` package without proper checks (this is probably because most *but not all* R interpreters attach `methods` by default, so it's easy for an author to forget it might ever be otherwise unless they happen to test with one of the interpreters that doesn't).
+
+If you find yourself with a dependency that does this, accept first that you are relying on a package that is broken, and you should either convince its maintainer to fix it or find a way to remove the dependency from PEcAn. But as a short-term workaround, it is sometimes possible for your code to attach the direct dependency so that it will behave right with regard to its secondary dependencies. If so, make sure the attachment happens every time your package is loaded (e.g. by calling `library(depname)` inside your package's `.onLoad` method) and not just when your package is attached (e.g. by putting it in Depends).
+
+When you do need to load or attach a dependency, it is probably better to do it inside your package's `.onLoad` method rather than in individual functions, but this isn't ironclad. To only load, use `requireNamespace(pkgname, quietly = TRUE)` -- this will make it available inside your package's namespace while avoiding (most) annoying loadtime messages and not disturbing the user's search path. To attach when you really can't avoid it, declare the dependency in `Depends` and *also* attach it using `library(pkgname)` in your `.onLoad` method.
+
+Note that scripts in `inst/` are considered to be sample code rather than part of the package namespace, so it is acceptable for them to explicitly attach packages using `library()`. You may also see code that uses `require(pkgname)`; this is just like `library`, but returns FALSE instead of erroring if the package load fails. Using `require` is fine in `inst/` scripts that can do, *and do*, something useful when a dependency is missing, but if it is only used as `if(!require(pkg)){ stop(...)}` then replace it with `library(pkg)`.
+
+If you think your package needs to load or attach code for any reason, please note why in your pull request description and be prepared for questions about it during code review. If your reviewers can think of an alternate approach that avoids loading or attaching, they will likely ask you to use it even if it creates extra work for you.
+
+## Installing dependencies: Let the machines do it
+
+In most cases you won't need to think about how dependencies get installed -- just declare them in your package's DESCRIPTION and the installation will be handled automatically by R and devtools during the build process.
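+
+For example, after declaring a new dependency you can let devtools pull in anything that's missing by running something like the following from the PEcAn source tree (the package path here is illustrative):
+
+```r
+# installs any declared-but-missing dependencies of the given package
+devtools::install_deps("modules/assim.batch", dependencies = TRUE)
+```
+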
The main exception is when a dependency relies on non-R software that R does not know how to install automatically. For example, rjags relies on JAGS, which might be installed in a different place on every machine. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case.
+
+
+
+# Testing with the `testthat` package {#appendix-testthat}
+
+Tests are found in `/tests/testthat/` (for example, `base/utils/tests/testthat/`).
+
+See [http://r-pkgs.had.co.nz/tests.html](http://r-pkgs.had.co.nz/tests.html)
+for details on how to use the testthat package.
+
+## List of Expectations
+
+|Full |Abbreviation|
+|---|----|
+|expect_that(x, is_true()) |expect_true(x)|
+|expect_that(x, is_false()) |expect_false(x)|
+|expect_that(x, is_a(y)) |expect_is(x, y)|
+|expect_that(x, equals(y)) |expect_equal(x, y)|
+|expect_that(x, is_equivalent_to(y)) |expect_equivalent(x, y)|
+|expect_that(x, is_identical_to(y)) |expect_identical(x, y)|
+|expect_that(x, matches(y)) |expect_matches(x, y)|
+|expect_that(x, prints_text(y)) |expect_output(x, y)|
+|expect_that(x, shows_message(y)) |expect_message(x, y)|
+|expect_that(x, gives_warning(y)) |expect_warning(x, y)|
+|expect_that(x, throws_error(y)) |expect_error(x, y)|
+
+## Basic use of the `testthat` package
+
+Create a file called `/tests/testthat.R` with the following contents:
+
+```r
+library(testthat)
+library(mypackage)
+
+test_check("mypackage")
+```
+
+Tests should be placed in `/tests/testthat/test-<name>.R`, and look like the following:
+
+```r
+test_that("mathematical operators plus and minus work as expected", {
+  expect_equal(sum(1, 1), 2)
+  expect_equal(sum(-1, -1), -2)
+  expect_equal(sum(1, NA), NA)
+  expect_error(sum("cat"))
+  expect_equal(sum(matrix(1:100)), sum(data.frame(1:100)))
+})
+
+test_that("different testing functions work, giving excuse to demonstrate", {
+  expect_identical(1, 1)
+  # expect_identical() checks type as well as value,
+  # so this would fail (double vs. integer):
+  # expect_identical(numeric(1), integer(1))
+  expect_equivalent(numeric(1), integer(1))
+  expect_warning(mean('1'))
+  expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA"))
+  expect_warning(mean('1'), "argument is not numeric or logical: returning NA")
+  expect_message(message("a"), "a")
+})
+```
+
+## Data for tests
+
+Many of PEcAn’s functions require inputs that are provided as data.
+These can be in the `data` or the `inst/extdata` folders of a package.
+Data that are not package specific should be placed in the `PEcAn.all` (`base/all`) or
+`PEcAn.utils` (`base/utils`) packages.
+
+Some useful conventions:
+
+## Settings
+
+* A generic settings file can be found in the `PEcAn.all` package:
+```r
+settings.xml <- system.file("pecan.biocro.xml", package = "PEcAn.BIOCRO")
+settings <- read.settings(settings.xml)
+```
+
+* Database settings can be specified, and tests run only if a connection is available.
+
+We currently use the following database to run tests against; tests that require access to a database should check `db.exists()` and be skipped if it returns FALSE to avoid failed tests on systems that do not have the database installed.
+
+```r
+settings$database <- list(userid = "bety",
+                          passwd = "bety",
+                          name = "bety",      # database name
+                          host = "localhost") # server name
+test_that(..., {
+  skip_if_not(db.exists(settings$database))
+  ## write tests here
+})
+```
+
+* instructions for installing this are available on the [VM creation
+  wiki](VM-Creation.md)
+* examples can be found in the PEcAn.DB package (`base/db/tests/testthat/`).
+
+* Model specific settings can go in the model-specific module, for
+example:
+
+```r
+settings.xml <- system.file("extdata/pecan.biocro.xml", package = "PEcAn.BIOCRO")
+settings <- read.settings(settings.xml)
+```
+* test-specific settings:
+    - settings text can be specified inline:
+    ```
+    settings.text <- "
+    <pecan>
+      <nocheck>nope</nocheck> ## allows bypass of checks in the read.settings functions
+      <pfts>
+        <pft>
+          <name>ebifarm.pavi</name>
+          <outdir>test/</outdir>
+        </pft>
+      </pfts>
+      <outdir>test/</outdir>
+      <database>
+        <userid>bety</userid>
+        <passwd>bety</passwd>
+        <host>localhost</host>
+        <name>bety</name>
+      </database>
+    </pecan>"
+    settings <- read.settings(settings.text)
+    ```
+    - values in settings can be updated:
+    ```r
+    settings <- read.settings(settings.text)
+    settings$outdir <- "/tmp" ## or any other settings
+    ```
+
+## Helper functions for unit tests
+
+* `PEcAn.utils::tryl` returns `FALSE` if function gives error
+* `PEcAn.utils::temp.settings` creates temporary settings file
+* `PEcAn.DB::db.exists` returns `TRUE` if connection to database is available
+
+
+
+# `devtools` package {#developer-devtools}
+
+Provides functions to simplify development.
+
+Documentation:
+[The R devtools package](https://devtools.r-lib.org/)
+
+```r
+load_all("pkg")
+document("pkg")
+test("pkg")
+install("pkg")
+build("pkg")
+```
+
+Other tips for devtools (from the documentation):
+
+* Adding the following to your `~/.Rprofile` will load devtools when
+running R in interactive mode:
+```r
+# load devtools by default
+if (interactive()) {
+  suppressMessages(require(devtools))
+}
+```
+* Adding the following to your `~/.Rpackages` will allow devtools to recognize a package by folder name rather than full directory path:
+```r
+# in this example, devhome is the pecan trunk directory
+devhome <- "/home/dlebauer/R-dev/pecandev/"
+list(
+  default = function(x) {
+    file.path(devhome, x, x)
+  },
+  "utils" = paste(devhome, "utils", sep = ""),
+  "common" = paste(devhome, "common", sep = ""),
+  "all" = paste(devhome, "all", sep = ""),
+  "ed" = paste(devhome, "models/ed", sep = ""),
+  "uncertainty" = paste(devhome, "modules/uncertainty", sep = ""),
+  "meta.analysis" = paste(devhome, "modules/meta.analysis", sep = ""),
+  "db" = paste(devhome, "db", sep = "")
+)
+```
+
+Now, devtools can take `pkg` as an argument instead of `/path/to/pkg/`,
+e.g. so you can use `build("pkg")` instead of `build("/path/to/pkg/")`.
+
+
+
+# `singularity` {#models_singularity}
+
+Running a model using singularity.
+
+*This is work in progress.*
+
+This assumes you have [singularity](https://sylabs.io/singularity/) already installed.
+
+This will work on a Linux machine (x86_64).
+
+First make sure you have all the data files:
+
+<details>
+<summary>bash script to install required files (click to expand)</summary>
+
+```bash
+#!/bin/bash
+
+if [ ! -e sites ]; then
+  curl -s -o sites.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/sites.tgz
+  tar zxf sites.tgz
+  sed -i -e "s#/home/kooper/Projects/EBI#/data/sites#" sites/*/ED_MET_DRIVER_HEADER
+  rm sites.tgz
+fi
+
+if [ ! -e inputs ]; then
+  curl -s -o inputs.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/inputs.tgz
+  tar zxf inputs.tgz
+  rm inputs.tgz
+fi
+
+if [ ! -e testrun.s83 ]; then
+  curl -s -o testrun.s83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.s83.zip
+  unzip -q testrun.s83.zip
+  sed -i -e "s#/home/pecan#/data#" testrun.s83/ED2IN
+  rm testrun.s83.zip
+fi
+
+if [ ! -e sites/Santarem_Km83 ]; then
+  curl -s -o Santarem_Km83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/Santarem_Km83.zip
+  unzip -q -d sites Santarem_Km83.zip
+  sed -i -e "s#/home/pecan#/data#" sites/Santarem_Km83/ED_MET_DRIVER_HEADER
+  rm Santarem_Km83.zip
+fi
+```
+</details>
+
+Next edit the ED2IN file in `testrun.s83`:
+
+<details>
+ED2IN file (click to expand) +``` +!==========================================================================================! +!==========================================================================================! +! ED2IN . ! +! ! +! This is the file that contains the variables that define how ED is to be run. There ! +! is some brief information about the variables here. ! +!------------------------------------------------------------------------------------------! +$ED_NL + + !----- Simulation title (64 characters). -----------------------------------------------! + NL%EXPNME = 'ED version 2.1 test' + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! Type of run: ! + ! INITIAL -- Starts a new run, that can be based on a previous run (restart/history), ! + ! but then it will use only the biomass and soil carbon information. ! + ! HISTORY -- Resumes a simulation from the last history. This is different from ! + ! initial in the sense that exactly the same information written in the ! + ! history will be used here. ! + !---------------------------------------------------------------------------------------! + NL%RUNTYPE = 'INITIAL' + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! Start of simulation. Information must be given in UTC time. ! + !---------------------------------------------------------------------------------------! + NL%IMONTHA = 01 + NL%IDATEA = 01 + NL%IYEARA = 2001 + NL%ITIMEA = 0000 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! End of simulation. Information must be given in UTC time. ! + !---------------------------------------------------------------------------------------! + NL%IMONTHZ = 01 + NL%IDATEZ = 01 + NL%IYEARZ = 2002 + NL%ITIMEZ = 0000 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! DTLSM -- Time step to integrate photosynthesis, and the maximum time step for ! + ! integration of energy and water budgets (units: seconds). Notice that the ! + ! model will take steps shorter than this if this is too coarse and could ! + ! lead to loss of accuracy or unrealistic results in the biophysics. ! + ! Recommended values are < 60 seconds if INTEGRATION_SCHEME is 0, and 240-900 ! + ! seconds otherwise. ! + ! RADFRQ -- Time step to integrate radiation, in seconds. This must be an integer ! + ! multiple of DTLSM, and we recommend it to be exactly the same as DTLSM. ! + !---------------------------------------------------------------------------------------! + NL%DTLSM = 600. + NL%RADFRQ = 600. + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! The following variables are used in case the user wants to run a regional run. ! + ! ! + ! N_ED_REGION -- number of regions for which you want to run ED. This can be set to ! + ! zero provided that N_POI is not... ! + ! GRID_TYPE -- which kind of grid to run: ! + ! 0. Longitude/latitude grid ! + ! 1. 
Polar-stereographic ! + !---------------------------------------------------------------------------------------! + NL%N_ED_REGION = 0 + NL%GRID_TYPE = 1 + + !------------------------------------------------------------------------------------! + ! The following variables are used only when GRID_TYPE is set to 0. You must ! + ! provide one value for each grid, except otherwise noted. ! + ! ! + ! GRID_RES -- Grid resolution, in degrees (first grid only, the other grids ! + ! resolution will be defined by NSTRATX/NSTRATY). ! + ! ED_REG_LATMIN -- Southernmost point of each region. ! + ! ED_REG_LATMAX -- Northernmost point of each region. ! + ! ED_REG_LONMIN -- Westernmost point of each region. ! + ! ED_REG_LONMAX -- Easternmost point of each region. ! + !------------------------------------------------------------------------------------! + NL%GRID_RES = 1.0 + NL%ED_REG_LATMIN = -12.0, -7.5, 10.0, -6.0 + NL%ED_REG_LATMAX = 1.0, -3.5, 15.0, -1.0 + NL%ED_REG_LONMIN = -66.0,-58.5, 70.0, -63.0 + NL%ED_REG_LONMAX = -49.0,-54.5, 35.0, -53.0 + !------------------------------------------------------------------------------------! + + + + !------------------------------------------------------------------------------------! + ! The following variables are used only when GRID_TYPE is set to 1. ! + ! ! + ! NNXP -- number of points in the X direction. One value for each grid. ! + ! NNYP -- number of points in the Y direction. One value for each grid. ! + ! DELTAX -- grid resolution in the X direction, near the grid pole. Units: [ m]. ! + ! this value is used to define the first grid only, other grids are ! + ! defined using NNSTRATX. ! + ! DELTAY -- grid resolution in the Y direction, near the grid pole. Units: [ m]. ! + ! this value is used to define the first grid only, other grids are ! + ! defined using NNSTRATX. Unless you are running some specific tests, ! + ! both DELTAX and DELTAY should be the same. ! + ! POLELAT -- Latitude of the pole point. Set this close to CENTLAT for a more ! + ! traditional "square" domain. One value for all grids. ! + ! POLELON -- Longitude of the pole point. Set this close to CENTLON for a more ! + ! traditional "square" domain. One value for all grids. ! + ! CENTLAT -- Latitude of the central point. One value for each grid. ! + ! CENTLON -- Longitude of the central point. One value for each grid. ! + !------------------------------------------------------------------------------------! + NL%NNXP = 110 + NL%NNYP = 70 + NL%DELTAX = 60000. + NL%DELTAY = 60000. + NL%POLELAT = -2.609075 + NL%POLELON = -60.2093 + NL%CENTLAT = -2.609075 + NL%CENTLON = -60.2093 + !------------------------------------------------------------------------------------! + + + + !------------------------------------------------------------------------------------! + ! Nest ratios. These values are used by both GRID_TYPE=0 and GRID_TYPE=1. ! + ! NSTRATX -- this is will divide the values given by DELTAX or GRID_RES for the ! + ! nested grids. The first value should be always one. ! + ! NSTRATY -- this is will divide the values given by DELTAY or GRID_RES for the ! + ! nested grids. The first value should be always one, and this must ! + ! be always the same as NSTRATX when GRID_TYPE = 0, and this is also ! + ! strongly recommended for when GRID_TYPE = 1. ! + !------------------------------------------------------------------------------------! + NL%NSTRATX = 1,4 + NL%NSTRATY = 1,4 + !------------------------------------------------------------------------------------! 
+ !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables are used to define single polygon of interest runs, and ! + ! they are ignored when N_ED_REGION = 0. ! + ! ! + ! N_POI -- number of polygons of interest (POIs). This can be zero as long as ! + ! N_ED_REGION is not. ! + ! POI_LAT -- list of latitudes of each POI. ! + ! POI_LON -- list of longitudes of each POI. ! + ! POI_RES -- grid resolution of each POI (degrees). This is used only to define the ! + ! soil types ! + !---------------------------------------------------------------------------------------! + NL%N_POI = 1 + NL%POI_LAT = -3.018 + NL%POI_LON = -54.971 + NL%POI_RES = 1.00 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! LOADMETH -- Load balancing method. This is used only in regional runs run in ! + ! parallel. ! + ! 0. Let ED decide the best way of splitting the polygons. Commonest ! + ! option and default. ! + ! 1. One of the methods to split polygons based on their previous ! + ! work load. Developpers only. ! + ! 2. Try to load an equal number of SITES per node. Useful for when ! + ! total number of polygon is the same as the total number of cores. ! + ! 3. Another method to split polygons based on their previous work load. ! + ! Developpers only. ! + !---------------------------------------------------------------------------------------! + NL%LOADMETH = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ED2 File output. For all the variables 0 means no output and 3 means HDF5 output. ! + ! ! + ! IFOUTPUT -- Fast analysis. These are mostly polygon-level averages, and the time ! + ! interval between files is determined by FRQANL ! + ! IDOUTPUT -- Daily means (one file per day) ! + ! IMOUTPUT -- Monthly means (one file per month) ! + ! IQOUTPUT -- Monthly means of the diurnal cycle (one file per month). The number ! + ! of points for the diurnal cycle is 86400 / FRQANL ! + ! IYOUTPUT -- Annual output. ! + ! ITOUTPUT -- Instantaneous fluxes, mostly polygon-level variables, one file per year. ! + ! ISOUTPUT -- restart file, for HISTORY runs. The time interval between files is ! + ! determined by FRQHIS ! + !---------------------------------------------------------------------------------------! + NL%IFOUTPUT = 0 + NL%IDOUTPUT = 0 + NL%IMOUTPUT = 0 + NL%IQOUTPUT = 3 + NL%IYOUTPUT = 0 + NL%ITOUTPUT = 0 + NL%ISOUTPUT = 3 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ATTACH_METADATA -- Flag for attaching metadata to HDF datasets. Attaching metadata ! + ! will aid new users in quickly identifying dataset descriptions but ! + ! will compromise I/O performance significantly. ! + ! 0 = no metadata, 1 = attach metadata ! + !---------------------------------------------------------------------------------------! + NL%ATTACH_METADATA = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! 
UNITFAST -- The following variables control the units for FRQFAST/OUTFAST, and ! + ! UNITSTATE FRQSTATE/OUTSTATE, respectively. Possible values are: ! + ! 0. Seconds; ! + ! 1. Days; ! + ! 2. Calendar months (variable) ! + ! 3. Calendar years (variable) ! + ! ! + ! N.B.: 1. In case OUTFAST/OUTSTATE are set to special flags (-1 or -2) ! + ! UNITFAST/UNITSTATE will be ignored for them. ! + ! 2. In case IQOUTPUT is set to 3, then UNITFAST has to be 0. ! + ! ! + !---------------------------------------------------------------------------------------! + NL%UNITFAST = 0 + NL%UNITSTATE = 3 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! OUTFAST/OUTSTATE -- these control the number of times per file. ! + ! 0. Each time gets its own file ! + ! -1. One file per day ! + ! -2. One file per month ! + ! > 0. Multiple timepoints can be recorded to a single file reducing ! + ! the number of files and i/o time in post-processing. ! + ! Multiple timepoints should not be used in the history files ! + ! if you intend to use these for HISTORY runs. ! + !---------------------------------------------------------------------------------------! + NL%OUTFAST = 1. + NL%OUTSTATE = 1. + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ICLOBBER -- What to do in case the model finds a file that it was supposed the ! + ! written? 0 = stop the run, 1 = overwrite without warning. ! + ! FRQFAST -- time interval between analysis files, units defined by UNITFAST. ! + ! FRQSTATE -- time interval between history files, units defined by UNITSTATE. ! + !---------------------------------------------------------------------------------------! + NL%ICLOBBER = 1 + NL%FRQFAST = 3600. + NL%FRQSTATE = 1. + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! FFILOUT -- Path and prefix for analysis files (all but history/restart). ! + ! SFILOUT -- Path and prefix for history files. ! + !---------------------------------------------------------------------------------------! + NL%FFILOUT = '/data/testrun.s83/analy/ts83' + NL%SFILOUT = '/data/testrun.s83/histo/ts83' + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IED_INIT_MODE -- This controls how the plant community and soil carbon pools are ! + ! initialised. ! + ! ! + ! -1. Start from a true bare ground run, or an absolute desert run. This will ! + ! never grow any plant. ! + ! 0. Start from near-bare ground (only a few seedlings from each PFT to be included ! + ! in this run). ! + ! 1. This will use history files written by ED-1.0. It will grab the ecosystem ! + ! state (like biomass, LAI, plant density, etc.), but it will start the ! + ! thermodynamic state as a new simulation. ! + ! 2. Same as 1, but it uses history files from ED-2.0 without multiple sites, and ! + ! with the old PFT numbers. ! + ! 3. Same as 1, but using history files from ED-2.0 with multiple sites and ! + ! TOPMODEL hydrology. ! + ! 4. Same as 1, but using ED2.1 H5 history/state files that take the form: ! + ! 'dir/prefix-gxx.h5' ! + ! 
Initialization files MUST end with -gxx.h5 where xx is a two digit integer ! + ! grid number. Each grid has its own initialization file. As an example, if a ! + ! user has two files to initialize their grids with: ! + ! example_file_init-g01.h5 and example_file_init-g02.h5 ! + ! NL%SFILIN = 'example_file_init' ! + ! ! + ! 5. This is similar to option 4, except that you may provide several files ! + ! (including a mix of regional and POI runs, each file ending at a different ! + ! date). This will not check date nor grid structure, it will simply read all ! + ! polygons and match the nearest neighbour to each polygon of your run. SFILIN ! + ! must have the directory common to all history files that are sought to be used,! + ! up to the last character the files have in common. For example if your files ! + ! are ! + ! /mypath/P0001-S-2000-01-01-000000-g01.h5, ! + ! /mypath/P0002-S-1966-01-01-000000-g02.h5, ! + ! ... ! + ! /mypath/P1000-S-1687-01-01-000000-g01.h5: ! + ! NL%SFILIN = '/mypath/P' ! + ! ! + ! 6 - Initialize with ED-2 style files without multiple sites, exactly like option ! + ! 2, except that the PFT types are preserved. ! + !---------------------------------------------------------------------------------------! + NL%IED_INIT_MODE = 6 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! EDRES -- Expected input resolution for ED2.0 files. This is not used unless ! + ! IED_INIT_MODE = 3. ! + !---------------------------------------------------------------------------------------! + NL%EDRES = 1.0 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! SFILIN -- The meaning and the size of this variable depends on the type of run, set ! + ! at variable NL%RUNTYPE. ! + ! ! + ! 1. INITIAL. Then this is the path+prefix of the previous ecosystem state. This has ! + ! dimension of the number of grids so you can initialize each grid with a ! + ! different dataset. In case only one path+prefix is given, the same will ! + ! be used for every grid. Only some ecosystem variables will be set up ! + ! here, and the initial condition will be in thermodynamic equilibrium. ! + ! ! + ! 2. HISTORY. This is the path+prefix of the history file that will be used. Only the ! + ! path+prefix will be used, as the history for every grid must have come ! + ! from the same simulation. ! + !---------------------------------------------------------------------------------------! + NL%SFILIN = '/data/sites/Santarem_Km83/s83_default.' + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! History file information. These variables are used to continue a simulation from ! + ! a point other than the beginning. Time must be in UTC. ! + ! ! + ! IMONTHH -- the time of the history file. This is the only place you need to change ! + ! IDATEH dates for a HISTORY run. You may change IMONTHZ and related in case you ! + ! IYEARH want to extend the run, but yo should NOT change IMONTHA and related. ! + ! ITIMEH ! + !---------------------------------------------------------------------------------------! 
+ NL%ITIMEH = 0000 + NL%IDATEH = 01 + NL%IMONTHH = 05 + NL%IYEARH = 2001 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! NZG - number of soil layers. One value for all grids. ! + ! NZS - maximum number of snow/water pounding layers. This is used only for ! + ! snow, if only liquid water is standing, the water will be all collapsed ! + ! into a single layer, so if you are running for places where it doesn't snow ! + ! a lot, leave this set to 1. One value for all grids. ! + !---------------------------------------------------------------------------------------! + NL%NZG = 16 + NL%NZS = 4 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ISOILFLG -- this controls which soil type input you want to use. ! + ! 1. Read in from a dataset I will provide in the SOIL_DATABASE variable a ! + ! few lines below. ! + ! below. ! + ! 2. No data available, I will use constant values I will provide in ! + ! NSLCON or by prescribing the fraction of sand and clay (see SLXSAND ! + ! and SLXCLAY). ! + !---------------------------------------------------------------------------------------! + NL%ISOILFLG = 2 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! NSLCON -- ED-2 Soil classes that the model will use when ISOILFLG is set to 2. ! + ! Possible values are: ! + !---------------------------------------------------------------------------------------! + ! 1 -- sand | 7 -- silty clay loam | 13 -- bedrock ! + ! 2 -- loamy sand | 8 -- clayey loam | 14 -- silt ! + ! 3 -- sandy loam | 9 -- sandy clay | 15 -- heavy clay ! + ! 4 -- silt loam | 10 -- silty clay | 16 -- clayey sand ! + ! 5 -- loam | 11 -- clay | 17 -- clayey silt ! + ! 6 -- sandy clay loam | 12 -- peat ! + !---------------------------------------------------------------------------------------! + NL%NSLCON = 11 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ISOILCOL -- LEAF-3 and ED-2 soil colour classes that the model will use when ISOILFLG ! + ! is set to 2. Soil classes are from 1 to 20 (1 = lightest; 20 = darkest). ! + ! The values are the same as CLM-4.0. The table is the albedo for visible ! + ! and near infra-red. ! + !---------------------------------------------------------------------------------------! + ! ! + ! |-----------------------------------------------------------------------| ! + ! | | Dry soil | Saturated | | Dry soil | Saturated | ! + ! | Class |-------------+-------------| Class +-------------+-------------| ! + ! | | VIS | NIR | VIS | NIR | | VIS | NIR | VIS | NIR | ! + ! |-------+------+------+------+------+-------+------+------+------+------| ! + ! | 1 | 0.36 | 0.61 | 0.25 | 0.50 | 11 | 0.24 | 0.37 | 0.13 | 0.26 | ! + ! | 2 | 0.34 | 0.57 | 0.23 | 0.46 | 12 | 0.23 | 0.35 | 0.12 | 0.24 | ! + ! | 3 | 0.32 | 0.53 | 0.21 | 0.42 | 13 | 0.22 | 0.33 | 0.11 | 0.22 | ! + ! | 4 | 0.31 | 0.51 | 0.20 | 0.40 | 14 | 0.20 | 0.31 | 0.10 | 0.20 | ! + ! | 5 | 0.30 | 0.49 | 0.19 | 0.38 | 15 | 0.18 | 0.29 | 0.09 | 0.18 | ! + ! 
| 6 | 0.29 | 0.48 | 0.18 | 0.36 | 16 | 0.16 | 0.27 | 0.08 | 0.16 | ! + ! | 7 | 0.28 | 0.45 | 0.17 | 0.34 | 17 | 0.14 | 0.25 | 0.07 | 0.14 | ! + ! | 8 | 0.27 | 0.43 | 0.16 | 0.32 | 18 | 0.12 | 0.23 | 0.06 | 0.12 | ! + ! | 9 | 0.26 | 0.41 | 0.15 | 0.30 | 19 | 0.10 | 0.21 | 0.05 | 0.10 | ! + ! | 10 | 0.25 | 0.39 | 0.14 | 0.28 | 20 | 0.08 | 0.16 | 0.04 | 0.08 | ! + ! |-----------------------------------------------------------------------| ! + ! ! + ! Soil type 21 is a special case in which we use the albedo method that used to be ! + ! the default in ED-2.1. ! + !---------------------------------------------------------------------------------------! + NL%ISOILCOL = 21 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! These variables are used to define the soil properties when you don't want to use ! + ! the standard soil classes. ! + ! ! + ! SLXCLAY -- Prescribed fraction of clay [0-1] ! + ! SLXSAND -- Prescribed fraction of sand [0-1]. ! + ! ! + ! They are used only when ISOILFLG is 2, both values are between 0. and 1., and ! + ! theira sum doesn't exceed 1. Otherwise standard ED values will be used instead. ! + !---------------------------------------------------------------------------------------! + NL%SLXCLAY = 0.59 + NL%SLXSAND = 0.39 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! Soil grid and initial conditions if no file is provided: ! + ! ! + ! SLZ - soil depth in m. Values must be negative and go from the deepest layer to ! + ! the top. ! + ! SLMSTR - this is the initial soil moisture, now given as the soil moisture index. ! + ! Values can be fraction, in which case they will be linearly interpolated ! + ! between the special points (e.g. 0.5 will put soil moisture half way ! + ! between the wilting point and field capacity). ! + ! -1 = dry air soil moisture ! + ! 0 = wilting point ! + ! 1 = field capacity ! + ! 2 = porosity (saturation) ! + ! STGOFF - initial temperature offset (soil temperature = air temperature + offset) ! + !---------------------------------------------------------------------------------------! + NL%SLZ = -8.000, -6.959, -5.995, -5.108, -4.296, -3.560, -2.897, -2.307, + -1.789, -1.340, -0.961, -0.648, -0.400, -0.215, -0.089, -0.020 + NL%SLMSTR = 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, + 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00 + NL%STGOFF = 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, + 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! Input databases ! + ! VEG_DATABASE -- vegetation database, used only to determine the land/water mask. ! + ! Fill with the path and the prefix. ! + ! SOIL_DATABASE -- soil database, used to determine the soil type. Fill with the ! + ! path and the prefix. ! + ! LU_DATABASE -- land-use change disturbance rates database, used only when ! + ! IANTH_DISTURB is set to 1. Fill with the path and the prefix. ! + ! PLANTATION_FILE -- plantation fraction file. In case you don't have such a file or ! + ! you do not want to use it, you must leave this variable empty: ! + ! (NL%PLANTATION_FILE = '' ! + ! 
THSUMS_DATABASE -- input directory with dataset to initialise chilling degrees and ! + ! growing degree days, which is used to drive the cold-deciduous ! + ! phenology (you must always provide this, even when your PFTs are ! + ! not cold deciduous). ! + ! ED_MET_DRIVER_DB -- File containing information for meteorological driver ! + ! instructions (the "header" file). ! + ! SOILSTATE_DB -- Dataset in case you want to provide the initial conditions of ! + ! soil temperature and moisture. ! + ! SOILDEPTH_DB -- Dataset in case you want to read in soil depth information. ! + !---------------------------------------------------------------------------------------! + NL%VEG_DATABASE = '/data/oge2OLD/OGE2_' + NL%SOIL_DATABASE = '/data/faoOLD/FAO_' + NL%LU_DATABASE = '/data/ed_inputs/glu/' + NL%PLANTATION_FILE = '' + NL%THSUMS_DATABASE = '/data/ed_inputs/' + NL%ED_MET_DRIVER_DB = '/data/sites/Santarem_Km83/ED_MET_DRIVER_HEADER' + NL%SOILSTATE_DB = '' + NL%SOILDEPTH_DB = '' + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ISOILSTATEINIT -- Variable controlling how to initialise the soil temperature and ! + ! moisture ! + ! 0. Use SLMSTR and STGOFF. ! + ! 1. Read from SOILSTATE_DB. ! + ! ISOILDEPTHFLG -- Variable controlling how to initialise soil depth ! + ! 0. Constant, always defined by the first SLZ layer. ! + ! 1. Read from SOILDEPTH_DB. ! + !---------------------------------------------------------------------------------------! + NL%ISOILSTATEINIT = 0 + NL%ISOILDEPTHFLG = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ISOILBC -- This controls the soil moisture boundary condition at the bottom. If ! + ! unsure, use 0 for short-term simulations (couple of days), and 1 for long- ! + ! -term simulations (months to years). ! + ! 0. Bedrock. Flux from the bottom of the bottommost layer is set to 0. ! + ! 1. Gravitational flow. The flux from the bottom of the bottommost layer ! + ! is due to gradient of height only. ! + ! 2. Super drainage. Soil moisture of the ficticious layer beneath the ! + ! bottom is always at dry air soil moisture. ! + ! 3. Half-way. Assume that the fictious layer beneath the bottom is always ! + ! at field capacity. ! + ! 4. Aquifer. Soil moisture of the ficticious layer beneath the bottom is ! + ! always at saturation. ! + !---------------------------------------------------------------------------------------! + NL%ISOILBC = 1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IVEGT_DYNAMICS -- The vegetation dynamics scheme. ! + ! 0. No vegetation dynamics, the initial state will be preserved, ! + ! even though the model will compute the potential values. This ! + ! option is useful for theoretical simulations only. ! + ! 1. Normal ED vegetation dynamics (Moorcroft et al 2001). ! + ! The normal option for almost any simulation. ! + !---------------------------------------------------------------------------------------! + NL%IVEGT_DYNAMICS = 1 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! 
IBIGLEAF -- Do you want to run ED as a 'big leaf' model? ! + ! 0. No, use the standard size- and age-structure (Moorcroft et al. 2001) ! + ! This is the recommended method for most applications. ! + ! 1. 'big leaf' ED: this will have no horizontal or vertical hetero- ! + ! geneities; 1 patch per PFT and 1 cohort per patch; no vertical ! + ! growth, recruits will 'appear' instantaneously at maximum height. ! + ! ! + ! N.B. if you set IBIGLEAF to 1, you MUST turn off the crown model (CROWN_MOD = 0) ! + !---------------------------------------------------------------------------------------! + NL%IBIGLEAF = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! INTEGRATION_SCHEME -- The biophysics integration scheme. ! + ! 0. Euler step. The fastest, but it doesn't estimate ! + ! errors. ! + ! 1. Fourth-order Runge-Kutta method. ED-2.1 default method ! + ! 2. Heun's method (a second-order Runge-Kutta). ! + ! 3. Hybrid Stepping (BDF2 implicit step for the canopy air and ! + ! leaf temp, forward Euler for else, under development). ! + !---------------------------------------------------------------------------------------! + NL%INTEGRATION_SCHEME = 1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! RK4_TOLERANCE -- This is the relative tolerance for Runge-Kutta or Heun's ! + ! integration. Larger numbers will make runs go faster, at the ! + ! expense of being less accurate. Currently the valid range is ! + ! between 1.e-7 and 1.e-1, but recommended values are between 1.e-4 ! + ! and 1.e-2. ! + !---------------------------------------------------------------------------------------! + NL%RK4_TOLERANCE = 0.01 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IBRANCH_THERMO -- This determines whether branches should be included in the ! + ! vegetation thermodynamics and radiation or not. ! + ! 0. No branches in energy/radiation (ED-2.1 default); ! + ! 1. Branches are accounted in the energy and radiation. Branchwood ! + ! and leaf are treated separately in the canopy radiation scheme, ! + ! but solved as a single pool in the biophysics integration. ! + ! 2. Similar to 1, but branches are treated as separate pools in the ! + ! biophysics (thus doubling the number of prognostic variables). ! + !---------------------------------------------------------------------------------------! + NL%IBRANCH_THERMO = 1 + !---------------------------------------------------------------------------------------! + + !---------------------------------------------------------------------------------------! + ! IPHYSIOL -- This variable will determine the functional form that will control how ! + ! the various parameters will vary with temperature, and how the CO2 ! + ! compensation point for gross photosynthesis (Gamma*) will be found. ! + ! Options are: ! + ! ! + ! 0 -- Original ED-2.1, we use the "Arrhenius" function as in Foley et al. (1996) and ! + ! Moorcroft et al. (2001). Gamma* is found using the parameters for tau as in ! + ! Foley et al. (1996). ! + ! 1 -- Modified ED-2.1. In this case Gamma* is found using the Michaelis-Mentel ! + ! 
coefficients for CO2 and O2, as in Farquhar et al. (1980) and in CLM. ! + ! 2 -- Collatz et al. (1991). We use the power (Q10) equations, with Collatz et al. ! + ! parameters for compensation point, and the Michaelis-Mentel coefficients. The ! + ! correction for high and low temperatures are the same as in Moorcroft et al. ! + ! (2001). ! + ! 3 -- Same as 2, except that we find Gamma* as in Farquhar et al. (1980) and in CLM. ! + !---------------------------------------------------------------------------------------! + NL%IPHYSIOL = 2 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IALLOM -- Which allometry to use (this mostly affects tropical PFTs. Temperate PFTs ! + ! will use the new root allometry and the maximum crown area if IALLOM is set ! + ! to 1 or 2). ! + ! 0. Original ED-2.1 ! + ! 1. a. The coefficients for structural biomass are set so the total AGB ! + ! is similar to Baker et al. (2004), equation 2. Balive is the ! + ! default ED-2.1; ! + ! b. Experimental root depth that makes canopy trees to have root depths ! + ! of 5m and grasses/seedlings at 0.5 to have root depth of 0.5 m. ! + ! c. Crown area defined as in Poorter et al. (2006), imposing maximum ! + ! crown area ! + ! 2. Similar to 1, but with a few extra changes. ! + ! a. Height -> DBH allometry as in Poorter et al. (2006) ! + ! b. Balive is retuned, using a few leaf biomass allometric equations for ! + ! a few genuses in Costa Rica. References: ! + ! Cole and Ewel (2006), and Calvo Alvarado et al. (2008). ! + !---------------------------------------------------------------------------------------! + NL%IALLOM = 2 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IGRASS -- This controls the dynamics and growth calculation for grasses. A new ! + ! grass scheme is now available where bdead = 0, height is a function of bleaf! + ! and growth happens daily. ALS (3/3/12) ! + ! 0: grasses behave like trees as in ED2.1 (old scheme) ! + ! ! + ! 1: new grass scheme as described above ! + !---------------------------------------------------------------------------------------! + NL%IGRASS = 0 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! IPHEN_SCHEME -- It controls the phenology scheme. Even within each scheme, the ! + ! actual phenology will be different depending on the PFT. ! + ! ! + ! -1: grasses - evergreen; ! + ! tropical - evergreen; ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous (Botta et al.); ! + ! ! + ! 0: grasses - drought-deciduous (old scheme); ! + ! tropical - drought-deciduous (old scheme); ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous; ! + ! ! + ! 1: prescribed phenology ! + ! ! + ! 2: grasses - drought-deciduous (new scheme); ! + ! tropical - drought-deciduous (new scheme); ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous; ! + ! ! + ! 3: grasses - drought-deciduous (new scheme); ! + ! tropical - drought-deciduous (light phenology); ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous; ! + ! ! + ! Old scheme: plants shed their leaves once instantaneous amount of available water ! + ! becomes less than a critical value. ! + ! 
New scheme: plants shed their leaves once a 10-day running average of available ! + ! water becomes less than a critical value. ! + !---------------------------------------------------------------------------------------! + NL%IPHEN_SCHEME = 2 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! Parameters that control the phenology response to radiation, used only when ! + ! IPHEN_SCHEME = 3. ! + ! ! + ! RADINT -- Intercept ! + ! RADSLP -- Slope. ! + !---------------------------------------------------------------------------------------! + NL%RADINT = -11.3868 + NL%RADSLP = 0.0824 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! REPRO_SCHEME -- This controls plant reproduction and dispersal. ! + ! 0. Reproduction off. Useful for very short runs only. ! + ! 1. Original reproduction scheme. Seeds are exchanged between ! + ! patches belonging to the same site, but they can't go outside ! + ! their original site. ! + ! 2. Similar to 1, but seeds are exchanged between patches belonging ! + ! to the same polygon, even if they are in different sites. They ! + ! can't go outside their original polygon, though. This is the ! + ! same as option 1 if there is only one site per polygon. ! + ! 3. Similar to 2, but recruits will only be formed if their phenology ! + ! status would be "leaves fully flushed". This only matters for ! + ! drought deciduous plants. This option is for testing purposes ! + ! only, think 50 times before using it... ! + !---------------------------------------------------------------------------------------! + NL%REPRO_SCHEME = 2 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! LAPSE_SCHEME -- This specifies the met lapse rate scheme: ! + ! 0. No lapse rates ! + ! 1. phenomenological, global ! + ! 2. phenomenological, local (not yet implemented) ! + ! 3. mechanistic(not yet implemented) ! + !---------------------------------------------------------------------------------------! + NL%LAPSE_SCHEME = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! CROWN_MOD -- Specifies how tree crowns are represent in the canopy radiation model, ! + ! and in the turbulence scheme depending on ICANTURB. ! + ! 0. ED1 default, crowns are evenly spread throughout the patch area, and ! + ! cohorts are stacked on the top of each other. ! + ! 1. Dietze (2008) model. Cohorts have a finite radius, and cohorts are ! + ! stacked on the top of each other. ! + !---------------------------------------------------------------------------------------! + NL%CROWN_MOD = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the canopy radiation solver. ! + ! ! + ! ICANRAD -- Specifies how canopy radiation is solved. This variable sets both ! + ! shortwave and longwave. ! + ! 0. Two-stream model (Medvigy 2006), with the possibility to apply ! + ! 
finite crown area to direct shortwave radiation. ! + ! 1. Multiple-scattering model (Zhao and Qualls 2005,2006), with the ! + ! possibility to apply finite crown area to all radiation fluxes. ! + ! LTRANS_VIS -- Leaf transmittance for tropical plants - Visible/PAR ! + ! LTRANS_NIR -- Leaf transmittance for tropical plants - Near Infrared ! + ! LREFLECT_VIS -- Leaf reflectance for tropical plants - Visible/PAR ! + ! LREFLECT_NIR -- Leaf reflectance for tropical plants - Near Infrared ! + ! ORIENT_TREE -- Leaf orientation factor for tropical trees. Extremes are: ! + ! -1. All leaves are oriented in the vertical ! + ! 0. Leaf orientation is perfectly random ! + ! 1. All leaves are oriented in the horizontal ! + ! In practice, acceptable values range from -0.4 and 0.6 ! + ! ORIENT_GRASS -- Leaf orientation factor for tropical grasses. Extremes are: ! + ! -1. All leaves are oriented in the vertical ! + ! 0. Leaf orientation is perfectly random ! + ! 1. All leaves are oriented in the horizontal ! + ! In practice, acceptable values range from -0.4 and 0.6 ! + ! CLUMP_TREE -- Clumping factor for tropical trees. Extremes are: ! + ! lim -> 0. Black hole (0 itself is unacceptable) ! + ! 1. Homogeneously spread over the layer (i.e., no clumping) ! + ! CLUMP_GRASS -- Clumping factor for tropical grasses. Extremes are: ! + ! lim -> 0. Black hole (0 itself is unacceptable) ! + ! 1. Homogeneously spread over the layer (i.e., no clumping) ! + !---------------------------------------------------------------------------------------! + NL%ICANRAD = 0 + NL%LTRANS_VIS = 0.050 + NL%LTRANS_NIR = 0.230 + NL%LREFLECT_VIS = 0.100 + NL%LREFLECT_NIR = 0.460 + NL%ORIENT_TREE = 0.100 + NL%ORIENT_GRASS = 0.000 + NL%CLUMP_TREE = 0.800 + NL%CLUMP_GRASS = 1.000 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! DECOMP_SCHEME -- This specifies the dependence of soil decomposition on temperature. ! + ! 0. ED-2.0 default, the original exponential ! + ! 1. Lloyd and Taylor (1994) model ! + ! [[option 1 requires parameters to be set in xml]] ! + !---------------------------------------------------------------------------------------! + NL%DECOMP_SCHEME = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! H2O_PLANT_LIM -- this determines whether plant photosynthesis can be limited by ! + ! soil moisture, the FSW, defined as FSW = Supply / (Demand + Supply). ! + ! ! + ! Demand is always the transpiration rates in case soil moisture is ! + ! not limiting (the psi_0 term times LAI). The supply is determined ! + ! by Kw * nplant * Broot * Available_Water, and the definition of ! + ! available water changes depending on H2O_PLANT_LIM: ! + ! 0. Force FSW = 1 (effectively available water is infinity). ! + ! 1. Available water is the total soil water above wilting point, ! + ! integrated across all layers within the rooting zone. ! + ! 2. Available water is the soil water at field capacity minus ! + ! wilting point, scaled by the so-called wilting factor: ! + ! (psi(k) - (H - z(k)) - psi_wp) / (psi_fc - psi_wp) ! + ! where psi is the matric potentital at layer k, z is the layer ! + ! depth, H it the crown height and psi_fc and psi_wp are the ! + ! matric potentials at wilting point and field capacity. ! 
+ !---------------------------------------------------------------------------------------! + NL%H2O_PLANT_LIM = 2 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! The following variables are factors that control photosynthesis and respiration. ! + ! Notice that some of them are relative values whereas others are absolute. ! + ! ! + ! VMFACT_C3 -- Factor multiplying the default Vm0 for C3 plants (1.0 = default). ! + ! VMFACT_C4 -- Factor multiplying the default Vm0 for C4 plants (1.0 = default). ! + ! MPHOTO_TRC3 -- Stomatal slope (M) for tropical C3 plants ! + ! MPHOTO_TEC3 -- Stomatal slope (M) for conifers and temperate C3 plants ! + ! MPHOTO_C4 -- Stomatal slope (M) for C4 plants. ! + ! BPHOTO_BLC3 -- cuticular conductance for broadleaf C3 plants [umol/m2/s] ! + ! BPHOTO_NLC3 -- cuticular conductance for needleleaf C3 plants [umol/m2/s] ! + ! BPHOTO_C4 -- cuticular conductance for C4 plants [umol/m2/s] ! + ! KW_GRASS -- Water conductance for trees, in m2/yr/kgC_root. This is used only ! + ! when H2O_PLANT_LIM is not 0. ! + ! KW_TREE -- Water conductance for grasses, in m2/yr/kgC_root. This is used only ! + ! when H2O_PLANT_LIM is not 0. ! + ! GAMMA_C3 -- The dark respiration factor (gamma) for C3 plants. Subtropical ! + ! conifers will be scaled by GAMMA_C3 * 0.028 / 0.02 ! + ! GAMMA_C4 -- The dark respiration factor (gamma) for C4 plants. ! + ! D0_GRASS -- The transpiration control in gsw (D0) for ALL grasses. ! + ! D0_TREE -- The transpiration control in gsw (D0) for ALL trees. ! + ! ALPHA_C3 -- Quantum yield of ALL C3 plants. This is only applied when ! + ! QUANTUM_EFFICIENCY_T = 0. ! + ! ALPHA_C4 -- Quantum yield of C4 plants. This is always applied. ! + ! KLOWCO2IN -- The coefficient that controls the PEP carboxylase limited rate of ! + ! carboxylation for C4 plants. ! + ! RRFFACT -- Factor multiplying the root respiration factor for ALL PFTs. ! + ! (1.0 = default). ! + ! GROWTHRESP -- The actual growth respiration factor (C3/C4 tropical PFTs only). ! + ! (1.0 = default). ! + ! LWIDTH_GRASS -- Leaf width for grasses, in metres. This controls the leaf boundary ! + ! layer conductance (gbh and gbw). ! + ! LWIDTH_BLTREE -- Leaf width for trees, in metres. This controls the leaf boundary ! + ! layer conductance (gbh and gbw). This is applied to broadleaf trees ! + ! only. ! + ! LWIDTH_NLTREE -- Leaf width for trees, in metres. This controls the leaf boundary ! + ! layer conductance (gbh and gbw). This is applied to conifer trees ! + ! only. ! + ! Q10_C3 -- Q10 factor for C3 plants (used only if IPHYSIOL is set to 2 or 3). ! + ! Q10_C4 -- Q10 factor for C4 plants (used only if IPHYSIOL is set to 2 or 3). ! + !---------------------------------------------------------------------------------------! + NL%VMFACT_C3 = 1 + NL%VMFACT_C4 = 1 + NL%MPHOTO_TRC3 = 9 + NL%MPHOTO_TEC3 = 7.2 + NL%MPHOTO_C4 = 5.2 + NL%BPHOTO_BLC3 = 10000 + NL%BPHOTO_NLC3 = 1000 + NL%BPHOTO_C4 = 10000 + NL%KW_GRASS = 900 + NL%KW_TREE = 600 + NL%GAMMA_C3 = 0.0145 + NL%GAMMA_C4 = 0.035 + NL%D0_GRASS = 0.016 + NL%D0_TREE = 0.016 + NL%ALPHA_C3 = 0.08 + NL%ALPHA_C4 = 0.055 + NL%KLOWCO2IN = 4000 + NL%RRFFACT = 1 + NL%GROWTHRESP = 0.333 + NL%LWIDTH_GRASS = 0.05 + NL%LWIDTH_BLTREE = 0.1 + NL%LWIDTH_NLTREE = 0.05 + NL%Q10_C3 = 2.4 + NL%Q10_C4 = 2.4 + !---------------------------------------------------------------------------------------! 
+ + + + + !---------------------------------------------------------------------------------------! + ! THETACRIT -- Leaf drought phenology threshold. The sign matters here: ! + ! >= 0. -- This is the relative soil moisture above the wilting point ! + ! below which the drought-deciduous plants will start shedding ! + ! their leaves ! + ! < 0. -- This is the soil potential in MPa below which the drought- ! + ! -deciduous plants will start shedding their leaves. The wilt- ! + ! ing point is by definition -1.5MPa, so make sure that the value ! + ! is above -1.5. ! + !---------------------------------------------------------------------------------------! + NL%THETACRIT = -1.20 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! QUANTUM_EFFICIENCY_T -- Which quantum yield model should to use for C3 plants ! + ! 0. Original ED-2.1, quantum efficiency is constant. ! + ! 1. Quantum efficiency varies with temperature following ! + ! Ehleringer (1978) polynomial fit. ! + !---------------------------------------------------------------------------------------! + NL%QUANTUM_EFFICIENCY_T = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! N_PLANT_LIM -- This controls whether plant photosynthesis can be limited by nitrogen. ! + ! 0. No limitation ! + ! 1. ED-2.1 nitrogen limitation model. ! + !---------------------------------------------------------------------------------------! + NL%N_PLANT_LIM = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! N_DECOMP_LIM -- This controls whether decomposition can be limited by nitrogen. ! + ! 0. No limitation ! + ! 1. ED-2.1 nitrogen limitation model. ! + !---------------------------------------------------------------------------------------! + NL%N_DECOMP_LIM = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! The following parameters adjust the fire disturbance in the model. ! + ! INCLUDE_FIRE -- Which threshold to use for fires. ! + ! 0. No fires; ! + ! 1. (deprecated) Fire will be triggered with enough biomass and ! + ! integrated ground water depth less than a threshold. Based on ! + ! ED-1, the threshold assumes that the soil is 1 m, so deeper ! + ! soils will need to be much drier to allow fires to happen and ! + ! often will never allow fires. ! + ! 2. Fire will be triggered with enough biomass and the total soil ! + ! water at the top 75 cm falls below a threshold. ! + ! FIRE_PARAMETER -- If fire happens, this will control the intensity of the disturbance ! + ! given the amount of fuel (currently the total above-ground ! + ! biomass). ! + ! SM_FIRE -- This is used only when INCLUDE_FIRE = 2. The sign here matters. ! + ! >= 0. - Minimum relative soil moisture above dry air of the top 1m ! + ! that will prevent fires to happen. ! + ! < 0. - Minimum mean soil moisture potential in MPa of the top 1m ! + ! that will prevent fires to happen. The dry air soil ! + ! potential is defined as -3.1 MPa, so make sure SM_FIRE is ! + ! greater than this value. ! 
+ !---------------------------------------------------------------------------------------! + NL%INCLUDE_FIRE = 0 + NL%FIRE_PARAMETER = 0.5 + NL%SM_FIRE = -1.40 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IANTH_DISTURB -- This flag controls whether to include anthropogenic disturbances ! + ! such as land clearing, abandonment, and logging. ! + ! 0. no anthropogenic disturbance. ! + ! 1. use anthropogenic disturbance dataset. ! + !---------------------------------------------------------------------------------------! + NL%IANTH_DISTURB = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ICANTURB -- This flag controls the canopy roughness. ! + ! 0. Based on Leuning et al. (1995), wind is computed using the similarity ! + ! theory for the top cohort, and they are extinguished with cumulative ! + ! LAI. If using CROWN_MOD 1 or 2, this will use local LAI and average ! + ! by crown area. ! + ! 1. The default ED-2.1 scheme, except that it uses the zero-plane ! + ! displacement height. ! + ! 2. This uses the method of Massman (1997) using constant drag and no ! + ! sheltering factor. ! + ! 3. This is also based on Massman (1997), but with the option of varying ! + ! the drag and sheltering within the canopy. ! + ! 4. Same as 0, but if finds the ground conductance following CLM ! + ! technical note (equations 5.98-5.100). ! + !---------------------------------------------------------------------------------------! + NL%ICANTURB = 2 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ISFCLYRM -- Similarity theory model. The model that computes u*, T*, etc... ! + ! 1. BRAMS default, based on Louis (1979). It uses empirical relations to ! + ! estimate the flux based on the bulk Richardson number ! + ! ! + ! All models below use an interative method to find z/L, and the only change ! + ! is the functional form of the psi functions. ! + ! ! + ! 2. Oncley and Dudhia (1995) model, based on MM5. ! + ! 3. Beljaars and Holtslag (1991) model. Similar to 2, but it uses an alternative ! + ! method for the stable case that mixes more than the OD95. ! + ! 4. CLM (2004). Similar to 2 and 3, but they have special functions to deal with ! + ! very stable and very stable cases. ! + !---------------------------------------------------------------------------------------! + NL%ISFCLYRM = 3 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IED_GRNDVAP -- Methods to find the ground -> canopy conductance. ! + ! 0. Modified Lee Pielke (1992), adding field capacity, but using beta factor ! + ! without the square, like in Noilhan and Planton (1989). This is the closest ! + ! to the original ED-2.0 and LEAF-3, and it is also the recommended one. ! + ! 1. Test # 1 of Mahfouf and Noilhan (1991) ! + ! 2. Test # 2 of Mahfouf and Noilhan (1991) ! + ! 3. Test # 3 of Mahfouf and Noilhan (1991) ! + ! 4. Test # 4 of Mahfouf and Noilhan (1991) ! + ! 5. Combination of test #1 (alpha) and test #2 (soil resistance). ! + ! 
In all cases the beta term is modified so it approaches zero as soil moisture goes ! + ! to dry air soil. ! + !---------------------------------------------------------------------------------------! + NL%IED_GRNDVAP = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! The following variables are used to control the similarity theory model. For the ! + ! meaning of these parameters, check Beljaars and Holtslag (1991). ! + ! GAMM -- gamma coefficient for momentum, unstable case (dimensionless) ! + ! Ignored when ISTAR = 1 ! + ! GAMH -- gamma coefficient for heat, unstable case (dimensionless) ! + ! Ignored when ISTAR = 1 ! + ! TPRANDTL -- Turbulent Prandtl number ! + ! Ignored when ISTAR = 1 ! + ! RIBMAX -- maximum bulk Richardson number. ! + ! LEAF_MAXWHC -- Maximum water that can be intercepted by leaves, in kg/m2leaf. ! + !---------------------------------------------------------------------------------------! + NL%GAMM = 13.0 + NL%GAMH = 13.0 + NL%TPRANDTL = 0.74 + NL%RIBMAX = 0.50 + NL%LEAF_MAXWHC = 0.11 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IPERCOL -- This controls percolation and infiltration. ! + ! 0. Default method. Assumes soil conductivity constant and for the ! + ! temporary surface water, it sheds liquid in excess of a 1:9 liquid- ! + ! -to-ice ratio through percolation. Temporary surface water exists ! + ! only if the top soil layer is at saturation. ! + ! 1. Constant soil conductivity, and it uses the percolation model as in ! + ! Anderson (1976) NOAA technical report NWS 19. Temporary surface ! + ! water may exist after a heavy rain event, even if the soil doesn't ! + ! saturate. Recommended value. ! + ! 2. Soil conductivity decreases with depth even for constant soil moisture ! + ! , otherwise it is the same as 1. ! + !---------------------------------------------------------------------------------------! + NL%IPERCOL = 1 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the plant functional types (PFTs) that will be ! + ! used in this simulation. ! + ! ! + ! INCLUDE_THESE_PFT -- a list containing all the PFTs you want to include in this run ! + ! AGRI_STOCK -- which PFT should be used for agriculture ! + ! (used only when IANTH_DISTURB = 1) ! + ! PLANTATION_STOCK -- which PFT should be used for plantation ! + ! (used only when IANTH_DISTURB = 1) ! + ! ! + ! PFT table ! + !---------------------------------------------------------------------------------------! + ! 1 - C4 grass | 9 - early temperate deciduous ! + ! 2 - early tropical | 10 - mid temperate deciduous ! + ! 3 - mid tropical | 11 - late temperate deciduous ! + ! 4 - late tropical | 12:15 - agricultural PFTs ! + ! 5 - temperate C3 grass | 16 - Subtropical C3 grass ! + ! 6 - northern pines | (C4 grass with C3 photo). ! + ! 7 - southern pines | 17 - "Araucaria" (non-optimised ! + ! 8 - late conifers | Southern Pines). ! + !---------------------------------------------------------------------------------------! 
+ NL%INCLUDE_THESE_PFT = 1,2,3,4,16 + NL%AGRI_STOCK = 1 + NL%PLANTATION_STOCK = 3 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed ! + ! in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0) ! + ! 0. Stop the run ! + ! 1. Add the PFT in the INCLUDE_THESE_PFT list ! + ! 2. Ignore the cohort ! + !---------------------------------------------------------------------------------------! + NL%PFT_1ST_CHECK = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the size of sub-polygon structures in ED-2. ! + ! MAXSITE -- This is the strict maximum number of sites that each polygon can ! + ! contain. Currently this is used only when the user wants to run ! + ! the same polygon with multiple soil types. If there aren't that ! + ! many different soil types with a minimum area (check MIN_SITE_AREA ! + ! below), then the model will allocate just the amount needed. ! + ! MAXPATCH -- If number of patches in a given site exceeds MAXPATCH, force patch ! + ! fusion. If MAXPATCH is 0, then fusion will never happen. If ! + ! MAXPATCH is negative, then the absolute value is used only during ! + ! the initialization, and fusion will never happen again. Notice ! + ! that if the patches are too different, then the actual number of ! + ! patches in a site may exceed MAXPATCH. ! + ! MAXCOHORT -- If number of cohorts in a given patch exceeds MAXCOHORT, force ! + ! cohort fusion. If MAXCOHORT is 0, then fusion will never happen. ! + ! If MAXCOHORT is negative, then the absolute value is used only ! + ! during the initialization, and fusion will never happen again. ! + ! Notice that if the cohorts are too different, then the actual ! + ! number of cohorts in a patch may exceed MAXCOHORT. ! + ! MIN_SITE_AREA -- This is the minimum fraction area of a given soil type that allows ! + ! a site to be created (ignored if IED_INIT_MODE is set to 3). ! + ! MIN_PATCH_AREA -- This is the minimum fraction area of a given soil type that allows ! + ! a site to be created (ignored if IED_INIT_MODE is set to 3). ! + !---------------------------------------------------------------------------------------! + NL%MAXSITE = 1 + NL%MAXPATCH = 10 + NL%MAXCOHORT = 40 + NL%MIN_SITE_AREA = 0.005 + NL%MIN_PATCH_AREA = 0.005 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ZROUGH -- constant roughness, in metres, if for all domain ! + !---------------------------------------------------------------------------------------! + NL%ZROUGH = 0.1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! Treefall disturbance parameters. ! + ! TREEFALL_DISTURBANCE_RATE -- Sign-dependent treefall disturbance rate: ! + ! > 0. usual disturbance rate, in 1/years; ! + ! = 0. No treefall disturbance; ! + ! < 0. Treefall will be added as a mortality rate (it ! + ! will kill plants, but it won't create a new patch). ! + ! 
TIME2CANOPY -- Minimum patch age for treefall disturbance to happen. ! + ! If TREEFALL_DISTURBANCE_RATE = 0., this value will be ! + ! ignored. If this value is different than zero, then ! + ! TREEFALL_DISTURBANCE_RATE is internally adjusted so the ! + ! average patch age is still 1/TREEFALL_DISTURBANCE_RATE ! + !---------------------------------------------------------------------------------------! + NL%TREEFALL_DISTURBANCE_RATE = 0.014 + NL%TIME2CANOPY = 0.0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! RUNOFF_TIME -- In case a temporary surface water (TSW) is created, this is the "e- ! + ! -folding lifetime" of the TSW in seconds due to runoff. If you don't ! + ! want runoff to happen, set this to 0. ! + !---------------------------------------------------------------------------------------! + NL%RUNOFF_TIME = 3600.0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the minimum values of various velocities in the ! + ! canopy. This is needed to avoid the air to be extremely still, or to avoid singular- ! + ! ities. When defining the values, keep in mind that UBMIN >= UGBMIN >= USTMIN. ! + ! ! + ! UBMIN -- minimum wind speed at the top of the canopy air space [ m/s] ! + ! UGBMIN -- minimum wind speed at the leaf level [ m/s] ! + ! USTMIN -- minimum friction velocity, u*, in m/s. [ m/s] ! + !---------------------------------------------------------------------------------------! + NL%UBMIN = 0.65 + NL%UGBMIN = 0.25 + NL%USTMIN = 0.05 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! Control parameters for printing to standard output. Any variable can be printed ! + ! to standard output as long as it is one dimensional. Polygon variables have been ! + ! tested, no gaurtantees for other hierarchical levels. Choose any variables that are ! + ! defined in the variable table fill routine in ed_state_vars.f90. Choose the start ! + ! and end index of the polygon,site,patch or cohort. It should work in parallel. The ! + ! indices are global indices of the entire domain. The are printed out in rows of 10 ! + ! columns each. ! + ! ! + ! IPRINTPOLYS -- 0. Do not print information to screen ! + ! 1. Print polygon arrays to screen, use variables described below to ! + ! determine which ones and how ! + ! NPVARS -- Number of variables to be printed ! + ! PRINTVARS -- List of variables to be printed ! + ! PFMTSTR -- The standard fortran format for the prints. One format per variable ! + ! IPMIN -- First polygon (absolute index) to be print ! + ! IPMAX -- Last polygon (absolute index) to print ! + !---------------------------------------------------------------------------------------! + NL%IPRINTPOLYS = 0 + NL%NPVARS = 1 + NL%PRINTVARS = 'AVG_PCPG','AVG_CAN_TEMP','AVG_VAPOR_AC','AVG_CAN_SHV' + NL%PFMTSTR = 'f10.8','f5.1','f7.2','f9.5' + NL%IPMIN = 1 + NL%IPMAX = 60 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! Variables that control the meteorological forcing. ! + ! ! + ! 
IMETTYPE -- Format of the meteorological dataset ! + ! 0. ASCII (deprecated) ! + ! 1. HDF5 ! + ! ISHUFFLE -- How to choose an year outside the meterorological data range (see ! + ! METCYC1 and METCYCF). ! + ! 0. Sequentially cycle over years ! + ! 1. Randomly pick the years, using the same sequence. This has worked ! + ! with gfortran running in Mac OS X system, but it acts like option 2 ! + ! when running ifort. ! + ! 2. Randomly pick the years, choosing a different sequence each time ! + ! the model is run. ! + ! IMETCYC1 -- First year with meteorological information ! + ! IMETCYCF -- Last year with meteorological information ! + ! IMETAVG -- How the input radiation was originally averaged. You must tell this ! + ! because ED-2.1 can make a interpolation accounting for the cosine of ! + ! zenith angle. ! + ! -1. I don't know, use linear interpolation. ! + ! 0. No average, the values are instantaneous ! + ! 1. Averages ending at the reference time ! + ! 2. Averages beginning at the reference time ! + ! 3. Averages centred at the reference time ! + ! IMETRAD -- What should the model do with the input short wave radiation? ! + ! 0. Nothing, use it as is. ! + ! 1. Add them together, then use the SiB method to break radiation down ! + ! into the four components (PAR direct, PAR diffuse, NIR direct, ! + ! NIR diffuse). ! + ! 2. Add then together, then use the method by Weiss and Norman (1985) ! + ! to break radiation down to the four components. ! + ! 3. Gloomy -- All radiation goes to diffuse. ! + ! 4. Sesame street -- all radiation goes to direct, except at night. ! + ! INITIAL_CO2 -- Initial value for CO2 in case no CO2 is provided at the meteorological ! + ! driver dataset [Units: µmol/mol] ! + !---------------------------------------------------------------------------------------! + NL%IMETTYPE = 1 + NL%ISHUFFLE = 0 + NL%METCYC1 = 2000 + NL%METCYCF = 2003 + NL%IMETAVG = 1 + NL%IMETRAD = 2 + NL%INITIAL_CO2 = 378.0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the phenology prescribed from observations: ! + ! ! + ! IPHENYS1 -- First year for spring phenology ! + ! IPHENYSF -- Final year for spring phenology ! + ! IPHENYF1 -- First year for fall/autumn phenology ! + ! IPHENYFF -- Final year for fall/autumn phenology ! + ! PHENPATH -- path and prefix of the prescribed phenology data. ! + ! ! + ! If the years don't cover the entire simulation period, they will be recycled. ! + !---------------------------------------------------------------------------------------! + NL%IPHENYS1 = 1992 + NL%IPHENYSF = 2003 + NL%IPHENYF1 = 1992 + NL%IPHENYFF = 2003 + NL%PHENPATH = '/n/moorcroft_data/data/ed2_data/phenology/phenology' + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! These are some additional configuration files. ! + ! IEDCNFGF -- XML file containing additional parameter settings. If you don't have ! + ! one, leave it empty ! + ! EVENT_FILE -- file containing specific events that must be incorporated into the ! + ! simulation. ! + ! PHENPATH -- path and prefix of the prescribed phenology data. ! + !---------------------------------------------------------------------------------------! 
+ NL%IEDCNFGF = 'config.xml' + NL%EVENT_FILE = 'myevents.xml' + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables are used to control the detailed output for debugging ! + ! purposes. ! + ! ! + ! IDETAILED -- This flag controls the possible detailed outputs, mostly used for ! + ! debugging purposes. Notice that this doesn't replace the normal debug- ! + ! ger options, the idea is to provide detailed output to check bad ! + ! assumptions. The options are additive, and the indices below represent ! + ! the different types of output: ! + ! ! + ! 1 -- Detailed budget (every DTLSM) ! + ! 2 -- Detailed photosynthesis (every DTLSM) ! + ! 4 -- Detailed output from the integrator (every HDID) ! + ! 8 -- Thermodynamic bounds for sanity check (every DTLSM) ! + ! 16 -- Daily error stats (which variable caused the time step to shrink) ! + ! 32 -- Allometry parameters, and minimum and maximum sizes ! + ! (two files, only at the beginning) ! + ! ! + ! In case you don't want any detailed output (likely for most runs), set ! + ! IDETAILED to zero. In case you want to generate multiple outputs, add ! + ! the number of the sought options: for example, if you want detailed ! + ! photosynthesis and detailed output from the integrator, set IDETAILED ! + ! to 6 (2 + 4). Any combination of the above outputs is acceptable, al- ! + ! though all but the last produce a sheer amount of txt files, in which ! + ! case you may want to look at variable PATCH_KEEP. It is also a good ! + ! idea to set IVEGT_DYNAMICS to 0 when using the first five outputs. ! + ! ! + ! ! + ! PATCH_KEEP -- This option will eliminate all patches except one from the initial- ! + ! isation. This is only used when one of the first five types of ! + ! detailed output is active, otherwise it will be ignored. Options are: ! + ! -2. Keep only the patch with the lowest potential LAI ! + ! -1. Keep only the patch with the highest potential LAI ! + ! 0. Keep all patches. ! + ! > 0. Keep the patch with the provided index. In case the index is ! + ! not valid, the model will crash. ! + !---------------------------------------------------------------------------------------! + NL%IDETAILED = 0 + NL%PATCH_KEEP = 0 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! IOPTINPT -- Optimization configuration. (Currently not used) ! + !---------------------------------------------------------------------------------------! + !NL%IOPTINPT = '' + !---------------------------------------------------------------------------------------! + NL%IOOUTPUT = 3 + NL%IADD_SITE_MEANS = 0 + NL%IADD_PATCH_MEANS = 0 + NL%IADD_COHORT_MEANS = 0 + NL%GROWTH_RESP_SCHEME = 0 + NL%STORAGE_RESP_SCHEME = 0 + NL%PLANT_HYDRO_SCHEME = 0 + NL%ISTOMATA_SCHEME = 0 + NL%ISTRUCT_GROWTH_SCHEME = 0 + NL%TRAIT_PLASTICITY_SCHEME = 0 + NL%IDDMORT_SCHEME = 0 + NL%CBR_SCHEME = 0 + NL%DDMORT_CONST = 0 + NL%ICANRAD = 1 + NL%DT_CENSUS = 60 + NL%YR1ST_CENSUS = 2000 + NL%MON1ST_CENSUS = 6 + NL%MIN_RECRUIT_DBH = 50 +$END +!==========================================================================================! +!==========================================================================================! +``` +
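+Before launching a run it is worth verifying that the absolute paths referenced in the ED2IN above (met headers, phenology prefix, site files) actually resolve inside the container, since missing inputs are the most common cause of start-up failures. A minimal sketch, run from the directory containing the ED2IN and assuming the same image and bind mounts as the `singularity exec` call below (adjust both to your layout):
+
+```
+# Extract every quoted absolute path from the namelist and check that it
+# exists inside the container (add the other -B mounts from the run
+# command as needed).
+grep -oE "'/[^']+'" ED2IN | tr -d "'" | while read -r p; do
+  singularity exec -B ed_inputs:/data/ed_inputs -B sites:/data/sites \
+    ./model-ed2-git.simg test -e "$p" || echo "missing: $p"
+done
+```
+
+Note also that `NL%ICANRAD` is assigned twice in this example (0 in the canopy radiation block, 1 in the block of extra settings just before `$END`); in a Fortran namelist the last assignment read takes effect, so the multiple-scattering model is the one actually used here.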
+
+Convert the Docker image to a Singularity image:
+```
+singularity pull docker://pecan/model-ed2-git
+```
+
+Finally, you can run the Singularity image:
+```
+singularity exec -B ed_inputs:/data/ed_inputs -B faoOLD:/data/faoOLD -B oge2OLD:/data/oge2OLD -B sites:/data/sites -B testrun.s83:/data/testrun.s83 --pwd /data/testrun.s83 ./model-ed2-git.simg ed2.git -s
+```
+
+Note that each `-B` option bind-mounts the folder named before the `:` into the Singularity image at the path given after the `:`.
+
+The `ed2.git` command is started with the `-s` flag, which runs it in single-process mode without initializing or using MPI.
+
+Once the model has finished, the outputs should be available under `testrun.s83`.
+
+The example ED2IN file is not 100% correct and will result in the following error:
+
+<details>
+output of (failed) run (click to expand) +``` ++---------------- MPI parallel info: --------------------+ ++ - Machnum = 0 ++ - Machsize = 1 ++---------------- OMP parallel info: --------------------+ ++ - thread use: 1 ++ - threads max: 1 ++ - cpu use: 1 ++ - cpus max: 1 ++ Note: Max vals are for node, not sockets. ++--------------------------------------------------------+ +Reading namelist information +Copying namelist + +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!! WARNING! WARNING! WARNING! WARNING! WARNING! !!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + -> Outfast cannot be less than frqfast. + Oufast was redefined to 3600. seconds. +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!! WARNING! WARNING! WARNING! WARNING! WARNING! !!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + -> Outstate cannot be different than frqstate when + unitstate is set to 3 (years). + Oustate was set to 1. years. + Oustate was redefined to 1. years. +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + ++------------------------------------------------------------+ +| Ecosystem Demography Model, version 2.2 ++------------------------------------------------------------+ +| Input namelist filename is ED2IN +| +| Single process execution on INITIAL run. ++------------------------------------------------------------+ + => Generating the land/sea mask. +/data/oge2OLD/OGE2_HEADER + -> Getting file: /data/oge2OLD/OGE2_30S060W.h5... + + Work allocation, node 1; + + Polygon array allocation, node 1; + + Memory successfully allocated on none 1; + [+] Load_Ed_Ecosystem_Params... +---------------------------------------- + Treefall disturbance parameters: + - LAMBDA_REF = 1.40000E-02 + - LAMBDA_EFF = 1.40000E-02 + - TIME2CANOPY = 0.00000E+00 +---------------------------------------- + [+] Checking for XML config... +********************************************* +** WARNING! ** +** ** +** XML file wasn't found. Using default ** +** parameters in ED. ** +** (You provided config.xml). +** ** +********************************************* + [+] Alloc_Soilgrid... + [+] Set_Polygon_Coordinates... + [+] Sfcdata_ED... + [+] Load_Ecosystem_State... + + Doing sequential initialization over nodes. + + Initializing from ED restart file. Node: 001 + [-] filelist_f: Checking prefix: /data/sites/Santarem_Km83/s83_default. + + Showing first 10 files: + [-] File #: 1 /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.css + [-] File #: 2 /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.pss +Using patch file: /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.pss +Using cohort file: /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.css + + Initializing phenology. Node: 001 + - Reading thermal sums. + + Initializing anthropogenic disturbance forcing. 
Node: 001 + + -------------------------------------------------------- + Soil information: + + Polygon name : ts83 + Longitude : -54.971 + Latitude : -3.018 + Prescribed sand and clay : T + # of sites : 1 + + Site : 1 + - Type : 16 + - Clay fraction = 5.90000E-01 + - Sand fraction = 3.90000E-01 + - Silt fraction = 2.00000E-02 + - SLBS = 1.22460E+01 + - SLPOTS = -1.52090E-01 + - SLCONS = 2.30320E-06 + - Dry air soil = 2.29248E-01 + - Wilting point = 2.43249E-01 + - Field capacity = 3.24517E-01 + - Saturation = 4.27790E-01 + - Heat capacity = 1.30601E+06 + -------------------------------------------------------- + + [+] Init_Met_Drivers... + [+] Read_Met_Drivers_Init... +------------------------------ + - METCYC1 = 2000 + - METCYCF = 2003 + - NYEARS = 2 +------------------------------ + IYEAR YEAR_USE + 1 2001 + 2 2002 +------------------------------ + + [+] Update_met_drivers... + [+] Ed_Init_Atm... +Total count in node 1 for grid 1 : POLYGONS= 1 SITES= 1 PATCHES= 18 COHORTS= 1753 +Grid: 1 Poly: 1 Lon: -54.9710 Lat: -3.0180 Nplants: 0.73 Avg. LAI: 4.45 NPatches: 18 NCohorts: 660 + [+] initHydrology... +initHydrology | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 +Allocated | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 +Updated | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 +Deallocated | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 + [+] Filltab_Alltypes... + [+] Finding frqsum... + [+] Loading obstime_list +File /nowhere not found! +Specify OBSTIME_DB properly in ED namelist. +:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: +:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: +:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: + +-------------------------------------------------------------- + !!! FATAL ERROR !!! +-------------------------------------------------------------- + ---> File: ed_init.F90 + ---> Subroutine: read_obstime + ---> Reason: OBSTIME_DB not found! +-------------------------------------------------------------- + ED execution halts (see previous error message)... +-------------------------------------------------------------- +STOP fatal_error +``` +
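+
+The traceback points at `read_obstime`: with `NL%IOOUTPUT = 3` near the end of the ED2IN the model apparently tries to read an observation-time list, but `OBSTIME_DB` is left at the placeholder `/nowhere` (this reading of the flags is inferred from the log above, not from the ED2 documentation). A sketch of one way past it, editing the ED2IN in place:
+
+```
+# Assumption: IOOUTPUT = 0 disables the observation-time output so that
+# OBSTIME_DB is never read; alternatively, set NL%OBSTIME_DB to a real
+# observation-time file.
+sed -i 's/NL%IOOUTPUT *= *3/NL%IOOUTPUT = 0/' ED2IN
+```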
+</details>
+
+
diff --git a/book_source/libgl1-mesa-dev b/book_source/libgl1-mesa-dev
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/book_source/libglpk-dev b/book_source/libglpk-dev
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/book_source/libglu1-mesa-dev b/book_source/libglu1-mesa-dev
new file mode 100644
index 00000000000..e4ef8991e2b
--- /dev/null
+++ b/book_source/libglu1-mesa-dev
@@ -0,0 +1,4 @@
+Fedora Modular 31 - x86_64 - Updates 1.7 kB/s | 4.2 kB 00:02
+Fedora Modular 31 - x86_64 - Updates 39 kB/s | 417 kB 00:10
+Fedora 31 - x86_64 - Updates 5.7 kB/s | 3.8 kB 00:00
+Fedora 31 - x86_64 - Updates 25 kB/s | 657 kB 00:26
diff --git a/book_source/libnetcdf-dev b/book_source/libnetcdf-dev
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/book_source/librdf0-dev b/book_source/librdf0-dev
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/book_source/libudunits2-dev b/book_source/libudunits2-dev
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION
index 00204e8566c..306d5a443ae 100644
--- a/modules/emulator/DESCRIPTION
+++ b/modules/emulator/DESCRIPTION
@@ -21,4 +21,4 @@ Description: Implementation of a Gaussian Process model (both likelihood and
     for sampling design and prediction.
 License: BSD_3_clause + file LICENSE
 Encoding: UTF-8
-RoxygenNote: 7.0.2
+RoxygenNote: 7.1.0

From 98ce000961c48e7f453c64eaf34aa482ca525a0f Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Tue, 2 Jun 2020 16:33:33 +0530
Subject: [PATCH 1003/2289] create book.yml

---
 .github/workflows/book.yml | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)
 create mode 100644 .github/workflows/book.yml

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
new file mode 100644
index 00000000000..9c9e0f13fca
--- /dev/null
+++ b/.github/workflows/book.yml
@@ -0,0 +1,38 @@
+# This is a basic workflow to help you get started with Actions
+
+name: book
+
+on:
+  push:
+    branches: master
+  pull_request:
+    branches: master
+
+
+jobs:
+
+  build:
+    # The type of runner that the job will run on
+    runs-on: ubuntu-latest
+    container: pecan/depends:develop
+
+
+    steps:
+
+    - uses: actions/checkout@v2
+
+    # Building book from source using makefile
+    - name: Building book
+      run: cd book_source && make
+
+    - name: Looking for generated html
+      run: cd _book
+
+    - name: Committing the changes to pecan documentation repo
+      run: |
+        git init
+        git add .
+ git commit -m "documentation updated by actions" + git remote add origin https://github.com/MukulMaheshwari/pecan-documentation.git + git push -u origin master + From 8680132b465fd709469a080fb6ecea98b8c570a9 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:43:22 +0530 Subject: [PATCH 1004/2289] Delete ci.yml --- .github/workflows/ci.yml | 182 --------------------------------------- 1 file changed, 182 deletions(-) delete mode 100644 .github/workflows/ci.yml diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml deleted file mode 100644 index 518c4aad5de..00000000000 --- a/.github/workflows/ci.yml +++ /dev/null @@ -1,182 +0,0 @@ -name: CI - -on: - push: - branches: - - master - - develop - - tags: - - '*' - - pull_request: - -env: - # Would be more usual to set R_LIBS_USER, but R uses R_LIBS first if present - # ...and it's always present here, because the rocker/tidyverse base image - # checks at R startup time for R_LIBS and R_LIBS_USER, sets both if not found - R_LIBS: ~/R/library - -jobs: - - build: - runs-on: ubuntu-latest - container: pecan/depends:develop - steps: - - name: check git version - id: gitversion - run: | - v=$(git --version | grep -oE '[0-9\.]+') - v='cat(numeric_version("'${v}'") < "2.18")' - echo "##[set-output name=isold;]$(Rscript -e "${v}")" - - name: upgrade git if needed - # Hack: actions/checkout wants git >= 2.18, rocker 3.5 images have 2.11 - # Assuming debian stretch because newer images have git >= 2.20 already - if: steps.gitversion.outputs.isold == 'TRUE' - run: | - echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list - apt-get update - apt-get -t stretch-backports upgrade -y git - - uses: actions/checkout@v2 - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - shell: bash - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: cache .doc - uses: actions/cache@v1 - with: - key: doc-${{ github.sha }} - path: .doc - - name: cache .install - uses: actions/cache@v1 - with: - key: install-${{ github.sha }} - path: .install - - name: build - run: make -j1 - env: - NCPUS: 2 - CI: true - - name: check for out-of-date Rd files - uses: infotroph/tree-is-clean@v1 - - test: - needs: build - runs-on: ubuntu-latest - container: pecan/depends:develop - services: - postgres: - image: mdillon/postgis:9.5 - options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 - env: - NCPUS: 2 - PGHOST: postgres - CI: true - steps: - - uses: actions/checkout@v2 - - name: install utils - run: apt-get update && apt-get install -y openssh-client postgresql-client curl - - name: db setup - uses: docker://pecan/db:ci - - name: add models to db - run: ./scripts/add.models.sh - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: cache .doc - uses: actions/cache@v1 - with: - key: doc-${{ github.sha }} - path: .doc - - name: cache .install - uses: actions/cache@v1 - with: - key: install-${{ github.sha }} - path: .install - - name: test - run: make test - - check: - needs: build - runs-on: ubuntu-latest - container: pecan/depends:develop - env: - NCPUS: 2 - CI: true - _R_CHECK_LENGTH_1_CONDITION_: true - _R_CHECK_LENGTH_1_LOGIC2_: true - # Avoid compilation check warnings that come from the system 
Makevars - # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html - _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time - steps: - - uses: actions/checkout@v2 - - name: install ssh - run: apt-get update && apt-get install -y openssh-client qpdf - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: cache .doc - uses: actions/cache@v1 - with: - key: doc-${{ github.sha }} - path: .doc - - name: cache .install - uses: actions/cache@v1 - with: - key: install-${{ github.sha }} - path: .install - - name: check - run: make check - env: - REBUILD_DOCS: "FALSE" - RUN_TESTS: "FALSE" - - sipnet: - needs: build - runs-on: ubuntu-latest - container: pecan/depends:develop - services: - postgres: - image: mdillon/postgis:9.5 - options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 - env: - PGHOST: postgres - steps: - - uses: actions/checkout@v2 - - run: apt-get update && apt-get install -y curl postgresql-client - - name: install sipnet - run: | - cd ${HOME} - curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz - tar zxf sipnet_unk.tar.gz - cd sipnet_unk - make - - name: db setup - uses: docker://pecan/db:ci - - name: add models to db - run: ./scripts/add.models.sh - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: integration test - run: ./tests/integration.sh ghaction From e7382d422de471e1058c012cf4b771b1c9f43575 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:50:15 +0530 Subject: [PATCH 1005/2289] Delete _main.Rmd --- book_source/_main.Rmd | 13402 ---------------------------------------- 1 file changed, 13402 deletions(-) delete mode 100644 book_source/_main.Rmd diff --git a/book_source/_main.Rmd b/book_source/_main.Rmd deleted file mode 100644 index 8e1e71962a2..00000000000 --- a/book_source/_main.Rmd +++ /dev/null @@ -1,13402 +0,0 @@ ---- -title: "The Predictive Ecosystem Analyzer" -date: "`r Sys.Date()`" -site: bookdown::bookdown_site -documentclass: book -biblio-style: apalike -link-citations: yes -author: "By: PEcAn Team" ---- - -# Welcome {-} - -**Ecosystem science, policy, and management informed by the best available data and models** - -```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics(rep("figures/PecanLogo.png")) -``` - - -**Our Mission:** - - -**Develop and promote accessible tools for reproducible ecosystem modeling and forecasting** - - - -[PEcAn Website](http://pecanproject.github.io/) - -[Public Chat Room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ) - -[Github Repository](https://github.com/PecanProject/pecan) - - - - - - - - - -# (PART) Introduction {-} - - - -# Project Overview - -The Predictive Ecosystem Analyzer (PEcAn) is an integrated informatics toolbox for ecosystem modeling (Dietze et al. 2013, LeBauer et al. 2013). PEcAn consists of: - -1) An application program interface (API) that encapsulates an ecosystem model, providing a common interface, inputs, and output. 
- -2) Core utilities for handling and tracking model runs and the flows of information and uncertainties into and out of models and analyses - -3) An accessible web-based user interface and visualization tools - -4) An extensible collection of modules to handle specific types of analyses (sensitivity, uncertainty, ensemble), model-data syntheses (benchmarking, parameter data assimilation, state data assimilation), and data processing (model inputs and data constraints) - -```{r, echo=FALSE, fig.align='center'} -knitr::include_graphics(rep("figures/PEcAn_Components.jpeg")) -``` - -This project is motivated by the fact that many of the most pressing questions about global change are limited by our ability to synthesize existing data and strategically prioritize the collection of new data. This project seeks to improve this ability by developing a framework for integrating multiple data sources in a sensible manner. - -The workflow system allows ecosystem modeling to be more reproducible, automated, and transparent in terms of operations applied to data, and thus ultimately more comprehensible to both peers and the public. It reduces the redundancy of effort among modeling groups, facilitate collaboration, and make models more accessible the rest of the research community. - -PEcAn is not itself an ecosystem model, and it can be used to with a variety of different ecosystem models; integrating a model involves writing a wrapper to convert inputs and outputs to and from the standards used by PEcAn. Currently, PEcAn supports multiple models listed [PEcAn Models]. - - - -**Acknowledgements** - -The PEcAn project is supported financially by the following: - -- National Science Foundation (NSF): - - [1062547](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1062547) - - [1062204](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1062204) - - [1241894](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1241894) - - [1261582](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1261582) - - [1318164](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1318164) - - [1346748](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1346748) - - [1458021](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1458021) - - [1638577](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1638577) - - [1655095](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1655095) - - [1702996](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1702996) -- National Aeronautics and Space Administration (NASA) - - NNX14AH65G - - NNX16AO13H - - 80NSSC17K0711 -- Department of Defense, Strategic Environmental Research and Development Program (DOD-SERDP), grant [RC2636](https://www.serdp-estcp.org/Program-Areas/Resource-Conservation-and-Resiliency/Infrastructure-Resiliency/Vulnerability-and-Impact-Assessment/RC-2636/RC-2636) -- Energy Biosciences Institute, University of Illinois -- Amazon Web Services (AWS) -- [Google Summer of Code](https://summerofcode.withgoogle.com/organizations/4612291316678656/) - -BETY-db is a product of the Energy Biosciences Institute at the University of Illinois at Urbana-Champaign. We gratefully acknowledge the great effort of other researchers who generously made their own data available for further study. 
- -PEcAn is a collaboration among research groups at the Department of Earth And Environment at Boston University, the Energy Biosciences Institute at the University of Illinois, the Image Spatial Data Analysis group at NCSA, the Department of Atmospheric & Oceanic Sciences at the University Wisconsin-Madison, the Terrestrial Ecosystem Science & Technology (TEST) Group at Brookhaven National Laboratory, and the Joint Global Change Research Institute (JGCRI) at the Pacific Northwest National Laboratory. - -Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NASA, Boston University, University of Illinois, Brookhaven National Lab, Pacific National Lab, Battelle, the US Department of Defense, or the US Department of Energy. - -**PEcAn Publications** - -* Fer I, R Kelly, P Moorcroft, AD Richardson, E Cowdery, MC Dietze. 2018. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation. Biogeosciences Discussions -* Feng X, Uriarte M, González G, et al. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling. Glob Change Biol. 2018;24:e213–e232.[doi:10.1111/gcb.13863](https://doi.org/10.1111/gcb.13863) -* Dietze, M. C. (2017), Prediction in ecology: a first-principles framework. Ecol Appl, 27: 2048-2060. doi:10.1002/eap.1589 -* Fisher RA, Koven CD, Anderegg WRL, et al. 2017. Vegetation demographics in Earth System Models: A review of progress and priorities. Glob Change Biol. https://doi.org/10.1111/gcb.13910 -* Rollinson, C. R., Liu, Y., Raiho, A., Moore, D. J.P., McLachlan, J., Bishop, D. A., Dye, A., Matthes, J. H., Hessl, A., Hickler, T., Pederson, N., Poulter, B., Quaife, T., Schaefer, K., Steinkamp, J. and Dietze, M. C. (2017), Emergent climate and CO2 sensitivities of net primary productivity in ecosystem models do not agree with empirical data in temperate forests of eastern North America. Glob Change Biol. Accepted Author Manuscript. doi:10.1111/gcb.13626 -* LeBauer, D., Kooper, R., Mulrooney, P., Rohde, S., Wang, D., Long, S. P. and Dietze, M. C. (2017), betydb: a yield, trait, and ecosystem service database applied to second-generation bioenergy feedstock production. GCB Bioenergy. doi:10.1111/gcbb.12420 -* Rogers A, BE Medlyn, J Dukes, G Bonan, S von Caemmerer, MC Dietze, J Kattge, ADB Leakey, LM Mercado, U Niinemets, IC Prentice, SP Serbin, S Sitch, DA Way, S Zaehle. 2017. "A Roadmap for Improving the Representation of Photosynthesis in Earth System Models" New Phytologist 213(1):22-42 DOI: 10.1111/nph.14283 -* Shiklomanov. A, MC Dietze, T Viskari, PA Townsend, SP Serbin. 2016 "Quantifying the influences of spectral resolution on uncertainty in leaf trait estimates through a Bayesian approach to RTM inversion" Remote Sensing of the Environment 183: 226-238 -* Viskari et al. 2015 Model-data assimilation of multiple phenological observations to constrain and forecast leaf area index. Ecological Applications 25(2): 546-558 -* Dietze, M. C., S. P. Serbin, C. Davidson, A. R. Desai, X. Feng, R. Kelly, R. Kooper, D. LeBauer, J. Mantooth, K. McHenry, and D. Wang (2014) A quantitative assessment of a terrestrial biosphere model's data needs across North American biomes. Journal of Geophysical Research-Biogeosciences [doi:10.1002/2013jg002392](https://doi.org/10.1002/2013jg002392) -* LeBauer, D.S., D. Wang, K. Richter, C. Davidson, & M.C. Dietze. (2013). 
Facilitating feedbacks between field measurements and ecosystem models. Ecological Monographs. [doi:10.1890/12-0137.1](https://doi.org/10.1890/12-0137.1)
-* Wang, D, D.S. LeBauer, and M.C. Dietze (2013) Predicting yields of short-rotation hybrid poplar (Populus spp.) for the contiguous US through model-data synthesis. Ecological Applications [doi:10.1890/12-0854.1](https://doi.org/10.1890/12-0854.1)
-* Dietze, M.C., D.S. LeBauer, R. Kooper (2013) On improving the communication between models and data. Plant, Cell, & Environment [doi:10.1111/pce.12043](https://doi.org/10.1111/pce.12043)
-
- [Longer / auto-updated list of publications that mention PEcAn's full name in Google Scholar](https://scholar.google.com/scholar?start=0&q="predictive+ecosystem+analyzer+PEcAn")
-
-
-
-# Contributor Covenant Code of Conduct
-
-**Our Pledge**
-
-In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
-
-**Our Standards**
-
-Examples of behavior that contributes to creating a positive environment include:
-
- * Using welcoming and inclusive language
- * Being respectful of differing viewpoints and experiences
- * Gracefully accepting constructive criticism
- * Focusing on what is best for the community
- * Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a professional setting
-
-
-
-**Our Responsibilities**
-
-Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
-
-**Scope**
-
-This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
-
-**Enforcement**
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at pecanproj[at]gmail.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. 
Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
-
-**Attribution**
-
-This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org/) version 1.4, available at [http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4/).
-
-
-
-# About the PEcAn Book
-
-This book serves as documentation for the PEcAn Project. It contains descriptions of topics necessary to inform both beginner and advanced users, as well as requisite materials for developers. It does not contain low-level descriptions of functions within PEcAn. Our aim for this documentation is to educate you about the PEcAn software, the possibilities of its usage, and the standards, expectations, and core workflows for developers.
-
-This book is organized into four main topics:
-
-**Introduction** - Brief explanation of PEcAn, how to obtain the PEcAn VM, and explanation of basic web interface functions.
-
-**Tutorials/Demos/Workflows** - All User and Developer tutorials/demos/workflows to explain how to use and add to PEcAn in different ways.
-
-**Topical Pages** - Explanation of main PEcAn components and how they fit together.
-
-**Appendix** - External documentation and sources of information and a FAQ section.
-
-## General Feedback/Comments/Suggestions
-
-*We want your ideas, thoughts, comments, and suggestions!* As a community we are committed to creating an inclusive and supportive atmosphere, so please reach out to us in any of the following ways:
-
-**GitHub:** [https://github.com/PecanProject/pecan](https://github.com/PecanProject/pecan)
-This is the main hub of communication surrounding PEcAn development. Check out the issues section to see known bugs, upcoming features, and ideas for future development. Feel free to comment on existing issues or open new ones with questions, bug reports, feature requests, and/or ideas.
-
-**Slack:** [https://pecanproject.slack.com/](https://pecanproject.slack.com/)
-Slack serves as our day-to-day mode of communication. To join us on Slack you will need to create an account first. This is done in three steps:
-
-1. Request an [invitation](https://publicslack.com/slacks/pecanproject/invites/new) to join Slack; it will be sent by email to the address you provided.
-2. Check your inbox for an email from Slack with subject "Rob Kooper has invited you to join a Slack workspace". This email should have a link that you can click to join Slack.
-3. Clicking the link opens a webpage that asks you to create an account; once that is done you can log in to the Slack chat rooms.
-
-**Email:** pecanproj[at]gmail.com
-If you do not wish your communication with the team to be public, send us an email at the address above and we will get back to you as soon as possible.
-
-## Editing this book {#bookediting}
-
-The file organization of this documentation can be described simply as follows:
-
-- Each **chapter** is in its own file (within the corresponding section).
-- Each **group of chapters** (i.e. "part" in LaTeX) is in its own directory.
-
-Sections and chapters are rendered (and numbered) in alpha-numerical order of their corresponding file names.
-Therefore, each section directory and chapter file name should be **prefixed with a two-digit (zero-padded) number**. 
-File and directory names should be as similar as possible to the name of the corresponding chapter or section. -For instance, the file name for this chapter's source file is `06_reference/10_editing_this_book.Rmd`. -This numbering means that if you need to create an additional chapter _before_ an existing one, you will have to renumber all chapters following it. - -To ensure correct rendering, you should also make sure that **each chapter starts with a level 1 heading** (`# heading`). -For instance, this chapter's source starts with: - -```markdown -# Editing this book {#bookediting} - -The file organization of this documentation can be described simply as follows: -... -``` - -Furthermore, to keep the organization consistent, each chapter should have **exactly one level 1 heading** (i.e. do not combine multiple chapters into a single file). -In other words, **do not spread a single chapter across multiple files**, and **do not put multiple chapters in the same file**. - -Each **section** directory has a file starting with `00` that contains only the section (or "Part") title. -This is used to create the greyed-out section headers in the rendered HTML. -For instance, this section has a file called `00_introduction.Rmd` which contains only the following: - -```markdown -# (PART) Introduction {-} -``` - -To cross-reference a different section, use that section's unique tag (starts with `#`; appears next to the section heading surrounded in curly braces). -For instance, the following Markdown contains two sections that cross-reference each other: - -```markdown -## Introduction {#intro} - -Here is the intro. This is a link to the [next section](#section-one). - -## First section. {#section-one} - -As mentioned in the [previous section](#intro). -``` - -If no header tag exists for a section you want to cross-reference, you should create one. -We have no strict rules about this, but it's useful to have tags that give some sense of their parent hierarchy and reference their parent sections (e.g. `#models`, `#models-ed`, and `#models-ed-xml` to refer to a chapter on models, with a subsection on ED and a sub-subsection on ED XML configuration). -If section organization changes, it is fine to move header tags, but **avoid altering existing tags** as this will break all the links pointing to that tag. -(Note that it is also possible to link to section headings by their exact title. However, this is not recommended because these section titles could change, which would break the links.) - -When referring to PEcAn packages or specific functions, it is a good idea to link to the [rendered package documentation](https://pecanproject.github.io/pkgdocs.html). -For instance, here are links to the [`models/ed`](https://pecanproject.github.io/models/ed/docs/index.html) package, the [`PEcAn.ED2::modify_ed2in`](https://pecanproject.github.io/models/ed/docs/reference/modify_ed2in.html) function, and the PEcAnRTM package [vignette](https://pecanproject.github.io/modules/rtm/docs/articles/pecanrtm.vignette.html). -If necessary, you can also link directly to specific lines or blocks in the source code on GitHub, [like this](https://github.com/PecanProject/pecan/blob/develop/models/ed/R/create_veg.R#L11-L25). -(To get a link to a line, click its line number. To then select a block, shift-click another line number.) - -To insert figures, use `knitr::include_graphics("path/to/figure.png")` inside an [R code chunk](https://yihui.name/knitr/options/#chunk-options). 
-
-For example:
-
-````
-```{r}`r ''`
-knitr::include_graphics("04_advanced_user_guide/images/Input_ID_name.png")
-```
-````
-
-Note that image file names are **relative to the `book_source` directory**, **NOT** to the markdown file.
-In other words, if `myimage.png` was in the same directory as this file, I would still have to reference it as `06_reference/myimage.png` -- I could _not_ just do `myimage.png`.
-The size, caption, and other properties of the rendered image can be controlled via [chunk options](https://yihui.name/knitr/options/#plots).
-
-For additional information about how `bookdown` works (including information about its syntax), see the [Bookdown free online book](https://bookdown.org/yihui/bookdown/).
-
-## How to create your own version of Documentation
-
-To create your own version of documentation you'll need to follow these steps:
-These procedures assume you have a GitHub account, have forked pecan, have cloned it locally, and have a [Travis CI](https://travis-ci.org/) account.
- 1. Create a repository under your GitHub account with the name "pecan-documentation". Clear it of any files. Set up the repository with [GitHub Pages](https://pages.github.com/) by going to the settings tab for that repository.
- 2. Create a personal access token for GitHub: https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line and copy it.
- 3. Create a Travis environment variable called `GITHUB_PAT` and save the access token you created as a secret variable.
- 4. Create a branch from your local pecan repository with a name that starts with `release/` (e.g. `release/vtonydoc`).
- 5. Make whichever changes you would like to the documentation and push them up to your fork.
-
-From here Travis will build your documentation. The web version of your documentation will be rendered at a URL of the form: username.github.io/pecan-documentation/pattern_after_release/
-
-
-
-# (PART) Tutorials, Demos and How To's {-}
-
-
-
-# Install PEcAn {#pecan-manual-setup}
-
-These instructions are provided to document how to install and set up PEcAn. It includes:
-
-- [Virtual machine](#install-vm)
-- [PEcAn Docker](#install-docker)
-- [PEcAn OS specific installation](#install-native)
-
-The PEcAn code and necessary infrastructure can be obtained and compiled in different ways.
-This set of instructions will help you choose a path and complete the steps necessary to get a fully functioning PEcAn environment.
-
-## Virtual Machine (VM) {#install-vm}
-
-The PEcAn virtual machine consists of all of PEcAn pre-compiled within a Linux operating system and saved in a "virtual machine" (VM).
-Virtual machines allow for running consistent set-ups without worrying about differences between operating systems, library dependencies, compiling the code, etc.
-
-1. **Install VirtualBox** This is the software that runs the virtual machine. You can find the download link and instructions at [http://www.virtualbox.org](http://www.virtualbox.org). *NOTE: On Windows you may see a warning about Logo testing; it is okay to ignore the warning.*
-
-2. **Download the PEcAn VM** You can find the download link at [http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN](http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN), under the "**Files**" header. Click the ".ova" file to begin the download. Note that the file is ~7 GB, so this download can take several minutes to hours depending on your connection speed. 
Also, the VM requires >4 GB of RAM to operate correctly. Please check current usage of RAM and shut down processes as needed.
-
-3. **Import the VM** Once the download is complete, open VirtualBox. In the VirtualBox menus, go to "File" → "Import Appliance" and locate the downloaded ".ova" file.
-
-
-For VirtualBox version 5.x: In the Appliance Import Settings, make sure you select "Reinitialize the MAC address of all network cards" (picture below). This is not selected by default and can result in networking issues since multiple machines might claim to have the same network MAC Address.
-
-```{r, echo=FALSE, fig.align='center'}
-knitr::include_graphics("figures/pic1.jpg")
-```
-
-For VirtualBox versions starting with 6.0, there is a slightly different interface (see figure). Select "Generate new MAC addresses for all network adapters" from the MAC Address Policy:
-
-```{r, echo=FALSE, fig.align='center'}
-knitr::include_graphics("figures/pic1v2.png")
-```
-
-NOTE: If you experience network connection difficulties in the VM with this enabled, try re-importing the VM without this setting selected.
-
-Finally, click "Import" to build the Virtual Machine from its image.
-
-4. **Launch PEcAn** Double click the icon for the PEcAn VM. A terminal window will pop up showing the machine booting up, which may take a minute. It is done booting when you get to the `pecan login:` prompt. You do not need to log in, as the VM behaves like a server that we will access through your web browser. Feel free to minimize the VM window.
-
-* If you _do_ want to log in to the VM, the credentials are as follows: `username: carya`, `password: illinois` (after the pecan tree, [_Carya illinoinensis_][pecan-wikipedia]).
-
-5. **Open the PEcAn web interface** With the VM running in the background, open any web browser on the same machine and navigate to `localhost:6480/pecan/` to start the PEcAn workflow. (NOTE: The trailing slash may be necessary depending on your browser.)
-
-6. **Advanced interaction with the VM is mostly done through the command line.** You can perform these manipulations from inside the VM window. However, the VM is also configured for SSH access (username `carya`, hostname `localhost`, port 6422). For instance, to open an interactive shell inside the VM from a terminal on the host machine, use a command like `ssh -l carya -p 6422 localhost` (when prompted, the password is `illinois`, as above).
-
-These steps should be enough to get you started with running models and performing basic analyses with PEcAn.
-For advanced details on VM configuration and maintenance, see the [Advanced VM section](#working-with-vm).
-
-
-## Docker {#install-docker}
-
-This is a short introduction to getting started with Docker and PEcAn.
-It will not go into much detail about how to use Docker -- for more details, see the main [Docker topical page](#docker-intro).
-
-1. **Install Docker**. Follow the instructions for your operating system at https://www.docker.com/community-edition#/download.
-   Once Docker is installed, make sure it is running.
-   To test that Docker is installed and running, open a terminal and run the following commands:
-
-   ```bash
-   docker run hello-world
-   ```
-
-   If successful, this should return a message starting with `"Hello from Docker!"`.
-   If this doesn't work, there is something wrong with your configuration.
-   Refer to the Docker documentation for debugging.
-
-   NOTE: Depending on how Docker is installed and configured, you may have to run this command as `sudo`. 
-   Try running the command without `sudo` first.
-   If that fails, but running as `sudo` succeeds, see [these instructions](https://docs.docker.com/install/linux/linux-postinstall/) for steps to use Docker as a non-root user.
-
-2. **Install docker-compose**. If you are running this on a Mac or Windows this might be already installed. On Linux you will need to install it separately; see https://docs.docker.com/compose/install/.
-
-   To see if docker-compose is successfully installed, use the following shell command:
-
-   ```bash
-   docker-compose -v
-   ```
-
-   This should print the current version of docker-compose. We have tested the instructions below with docker-compose versions 1.22 and above.
-
-3. **Download the PEcAn docker-compose file**. It is located in the root directory of the [PEcAn source code](https://github.com/pecanproject/pecan). For reference, here are direct links to the [latest stable version](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml) and the [bleeding edge development version](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml). (To download the files, you should be able to right click the link and select "Save link as".) Make sure the file is saved as `docker-compose.yml` in a directory called `pecan`.
-
-4. **Initialize the PEcAn database and data images**. The following `docker-compose` commands are used to download all the data PEcAn needs to start working. For more on how they work, see our [Docker topical pages](#pecan-docker-quickstart-init).
-
-   a. Create and start the PEcAn database container (without any data)
-
-      ```bash
-      docker-compose up -d postgres
-      ```
-
-      If this is successful, the end of the output should look like the following:
-
-      ```
-      Creating pecan_postgres_1 ... done
-      ```
-
-   b. "Initialize" the data for the PEcAn database.
-
-      ```bash
-      docker-compose run --rm bety initialize
-      ```
-
-      This should produce a lot of output describing the database operations happening under the hood.
-      Some of these will look like errors (including starting with `ERROR`), but _this is normal_.
-      This command succeeded if the output ends with the following:
-
-      ```
-      Added carya41 with access_level=4 and page_access_level=1 with id=323
-      Added carya42 with access_level=4 and page_access_level=2 with id=325
-      Added carya43 with access_level=4 and page_access_level=3 with id=327
-      Added carya44 with access_level=4 and page_access_level=4 with id=329
-      Added guestuser with access_level=4 and page_access_level=4 with id=331
-      ```
-
-   c. Download and configure the core PEcAn database files.
-
-      ```bash
-      docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
-      ```
-
-      This will produce a lot of output describing file operations.
-      This command succeeded if the output ends with the following:
-
-      ```
-      ######################################################################
-      Done!
-      ######################################################################
-      ```
-
-5. **Start the PEcAn stack**. Assuming all of the steps above completed successfully, start the full stack by running the following shell command:
-
-   ```bash
-   docker-compose up -d
-   ```
-
-   If all of the containers started successfully, you should be able to access the various components from a browser via the following URLs (if you run these commands on a remote machine, replace localhost with the actual hostname). 
-
-   - PEcAn web interface (running models) -- http://localhost:8000/pecan/ (NOTE: The trailing slash is necessary.)
-   - PEcAn documentation and home page -- http://localhost:8000/
-   - BETY web interface -- http://localhost:8000/bety/
-   - File browser (minio) -- http://localhost:8000/minio/
-   - RabbitMQ management console (for managing queued processes) -- http://localhost:8000/rabbitmq/
-   - Traefik, a web server that maps URLs onto their respective containers -- http://localhost:8000/traefik/
-   - Monitor, a service that monitors the models and shows which models are online, how many instances of each are running, and the number of jobs waiting; the output is in JSON -- http://localhost:8000/monitor/
-
-For troubleshooting and advanced configuration, see our [Docker topical pages](#docker-index).
-
-## (Advanced) Native install {#install-native}
-
-The full PEcAn system has a lot of dependencies, including R packages, compiled C and Fortran libraries, and system services and utilities.
-Installing all of these side by side, and getting them to work harmoniously, is very time-consuming and challenging, which is why we **strongly** encourage new users to use the VM or Docker if possible.
-
-In a nutshell, the process for manual installation is as follows:
-
-1. **Download the [PEcAn source code](https://github.com/pecanproject/pecan) from GitHub**. The recommended way to do this is with the shell command `git clone`, i.e. `git clone https://github.com/pecanproject/pecan`.
-
-2. **Download the BETY source code from GitHub**.
-
-3. **Install the PEcAn R packages and their dependencies**. This can be done by running the shell command `make` inside the PEcAn source code directory. Note that many of the R packages on which PEcAn depends have system dependencies that are not included by default on most operating systems, but almost all of which should be available via your operating system's package manager (e.g. Homebrew for MacOS, `apt` for Ubuntu/Debian/Mint Linux, `yum` for RedHat Fedora/CentOS).
-
-4. **Install and configure PostgreSQL**.
-5. **Install and configure the Apache web server**.
-
-For more details, see our notes about [OS Specific Installations](#osinstall).
-
-
-
-# Tutorials {#user-section}
-
-## How PEcAn Works in a nutshell {#pecan-in-a-nutshell}
-
-PEcAn provides an interface to a variety of ecosystem models and attempts to standardize and automate the processes of model parameterization, execution, and analysis. First, you choose an ecosystem model, then the time and location of interest (a site), the plant community (or crop) that you are interested in simulating, and a source of atmospheric data from the BETY database (LeBauer et al., 2010). These are set in a "settings" file, commonly named `pecan.xml`, which can be edited manually if desired. From here, PEcAn will take over and set up and execute the selected model using your settings. The key is that PEcAn uses models as-is, and all of the translation steps are done within PEcAn, so no modifications are required of the model itself. Once the model is finished it will allow you to create graphs with the results of the simulation as well as download the results. It is also possible to see all past experiments and simulations.
-
-There are two ways of using PEcAn, via the web interface and directly within R. Even for users familiar with R, using the web interface is a good place to start because it provides a high level overview of the PEcAn workflow. The quickest way to get started is to download the virtual machine or use an AWS instance.
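-
-For those who prefer to work directly in R, the same settings file is the entry point there too. Below is a minimal sketch of loading a settings file from the R console; the path and the inspected elements are illustrative assumptions, not a fixed recipe.
-
-```{r echo = TRUE, eval = FALSE}
-## Read a workflow settings file into R (the path is an example only)
-settings <- PEcAn.settings::read.settings("~/output/PEcAn_99000000001/pecan.xml")
-
-## A settings object is a nested list; inspect a few elements
-settings$run$site$id  # site being simulated
-settings$model$type   # which ecosystem model will run
-```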
-
-## PEcAn Demos {#demo-table}
-
-The following tutorials assume you have installed PEcAn. If you have not, please consult the [PEcAn Installation Section](#pecan-manual-setup).
-
-|Type|Title|Web Link| Source Rmd|
-|:--:|:---:|:------:|:---------:|
-|Demo| Basic Run| [html](https://pecanproject.github.io/pecan-documentation/tutorials/Demo01.html) | [Rmd](https://github.com/PecanProject/pecan/blob/develop/documentation/tutorials/01_Demo_Basic_Run/Demo01.Rmd)|
-|Demo| Uncertainty Analysis| [html](https://pecanproject.github.io/pecan-documentation/tutorials/Demo02.html) | [Rmd](https://github.com/PecanProject/pecan/tree/master/documentation/tutorials/02_Demo_Uncertainty_Analysis)|
-|Demo| Output Analysis|html |[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/AnalyzeOutput)|
-|Demo| MCMC |html|[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/MCMC)|
-|Demo|Parameter Assimilation |html |[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/ParameterAssimilation)|
-|Demo|State Assimilation|html|[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/StateAssimilation)|
-|Demo| Sensitivity|html|[Rmd](https://github.com/PecanProject/pecan/tree/develop/documentation/tutorials/sensitivity)|
-|Vignette|Allometries|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/allometry/vignettes/AllomVignette.Rmd)|
-|Vignette|MCMC|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/assim.batch/vignettes/AssimBatchVignette.Rmd)|
-|Vignette|Meteorological Data|html|[Rmd](https://github.com/PecanProject/pecan/tree/master/modules/data.atmosphere/vignettes)|
-|Vignette|Meta-Analysis|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/meta.analysis/vignettes/single.MA_demo.Rmd)|
-|Vignette|Photosynthetic Response Curves|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd)|
-|Vignette|Priors|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/priors/vignettes/priors_demo.Rmd)|
-|Vignette|Leaf Spectra:PROSPECT inversion|html|[Rmd](https://github.com/PecanProject/pecan/blob/master/modules/rtm/vignettes/pecanrtm.vignette.Rmd)|
-
-
-
-## Demo 01: Basic Run PEcAn
-
-```{r echo = FALSE,warning=FALSE}
-library(knitr)
-opts_chunk$set(echo = FALSE, message = FALSE, warning = FALSE,
-               fig.align = 'center', out.width = '100%')
-```
-
-#### Objective
-
-We will begin by exploring a set of web-based tools that are designed to run single-site model runs. A lot of the detail about what’s going on under the hood, and all the outputs that PEcAn produces, are left to Demo 2. This demo will also demonstrate how to use PEcAn outputs in additional analyses outside of PEcAn.
-
-#### PEcAn URL
-
-In the following demo, **URL** is the web address of a PEcAn server and will refer to one of the following:
-
-* If you are doing a live demo with the PEcAn team, **URL was provided**
-* If you are running the PEcAn [virtual machine](#basicvm): **URL = localhost:6480**
-* If you are running PEcAn using [Amazon Web Services (AWS)](#awsvm), **URL is the Public IP**
-* If you are running PEcAn using [Docker](#dockervm), **URL is localhost:8000/pecan/** (trailing slash is important!)
-* If you followed instructions found in [Install PEcAn by hand], **URL is your server’s IP**
-
-
-#### Start PEcAn:
-
-1. 
**Enter URL in your web browser**
-2. **Click “Run Models”**
-3. **Click the ‘Next’ button** to move to the “Site Selection” page.
-
-```{r startpecan}
-knitr::include_graphics('extfiles/startpecan.jpg')
-```
-
-#### Site Selection
-
-```{r mapmodel}
-knitr::include_graphics('extfiles/mapmodel.png')
-```
-
-#### Host
-
-**Select the local machine “pecan”**. Other options exist if you’ve read and followed instructions found in [Remote execution with PEcAn].
-
-
-#### Model
-
-Select **SIPNET** (r136) from the available models because it is quick & simple. Reference material can be found in [Models in PEcAn].
-
-
-#### Site Group
-
-To filter sites, you can **select a specific group of sites**. For this tutorial we will use **Ameriflux**.
-
-
-#### Conversion:
-
-**Select the conversion check box** to show all sites that PEcAn is capable of generating model drivers for automatically. By default (unchecked), PEcAn only displays sites where model drivers already exist in the system database.
-
-
-#### Site:
-
-**For this tutorial, type _US-NR1_ in the search box to display the Niwot Ridge Ameriflux site (US-NR1), and then click on the pin icon**. When you click on a site’s flag on the map, it will give you the name and location of the site and put that site in the “Site:” box on the left-hand side, indicating your current selection.
-
-Once you are finished with the above steps, **click "Next"**.
-
-
-#### Run Specification
-
-```{r runspec}
-knitr::include_graphics('extfiles/runspec.png')
-```
-
-Next we will specify settings required to run the model. Be aware that the inputs required for any particular model may vary somewhat, so there may be additional optional or required input selections available for other models.
-
-
-#### PFT (Plant Functional Type):
-
-**Niwot Ridge is temperate coniferous**. Available PFTs will vary by model and some models allow multiple competing PFTs to be selected. Also select **soil** to control the soil parameters.
-
-
-#### Start/End Date:
-
-Select **2003/01/01** to **2006/12/31**. In general, be careful to select dates for which there is available driver data.
-
-
-#### Weather Data:
-
-**Select “Use AmerifluxLBL”** from the [Available Meteorological Drivers].
-
-#### Optional Settings:
-
-**Leave all blank for demo run**
-
-1. **Email** sends a message when the run is complete.
-2. **Use Brown Dog** will use the Brown Dog web services in order to do input file conversions. (**Note: Required if you select _Use NARR_ for Weather Data**)
-3. **Edit pecan.xml** allows you to configure advanced settings via the PEcAn settings file.
-4. **Edit model config** pauses the workflow after PEcAn has written all model specific settings but before the model runs are called, and allows users to configure any additional settings internal to the model.
-5. **Advanced Setup** controls ensemble and sensitivity run settings discussed in Demo 2.
-
-Finally, **click "Next"** to start the model run.
-
-
-#### Data Use Policies
-
-The last step before the run starts is to **read and agree** to AmeriFlux's data policy and **give a valid username**. If you don't already have an Ameriflux username, click "register here" and create one. If you selected a different data source, this step may or may not be needed: you will need to agree to a data policy if your source has one, but if it doesn't then the run will start immediately. 
-
-
-#### If you get an error in your run
-
-If you get an error in your run as part of a live demo or class activity, it is probably simplest to start over and try changing options and re-running (e.g. with a different site or PFT), as time does not permit detailed debugging. If the source of the error is not immediately obvious, you may want to take a look at `workflow.Rout` to see the log of the PEcAn workflow, or at `logfile.txt` to see the model execution output log, and then refer to the [Documentation](http://pecanproject.github.io/documentation.html) or the [Chat Room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ) for help.
-
-#### Model Run Workflow
-
-```{r execstatus}
-knitr::include_graphics('extfiles/execstatus.jpg')
-```
-
-#### MET Process:
-
-First, PEcAn will download meteorological data based on the type of Weather Data you chose, and process it into the specific format required by the chosen model.
-
-
-#### TRAIT / META:
-
-PEcAn then estimates model parameters by performing a meta-analysis of the available trait data for a PFT. TRAIT will extract relevant trait data from the database. META performs a hierarchical Bayes meta-analysis of available trait data. The output of this analysis is a probability distribution for each model parameter. PEcAn selects the median value of this parameter as the default, but in Demo 2 we will see how PEcAn can use this parameter uncertainty to make probabilistic forecasts and assess model sensitivity and uncertainty. Errors at this stage usually indicate errors in the trait database or incorrectly specified PFTs (e.g. defining a variable twice).
-
-
-#### CONFIG:
-
-Writes model-specific settings and parameter files.
-
-
-#### MODEL:
-
-Runs the model.
-
-
-#### OUTPUT:
-
-All model outputs are converted to [standard netCDF format](http://nacp.ornl.gov/MsTMIP_variables.shtml).
-
-
-#### ENSEMBLE & SENSITIVITY:
-
-If enabled, post-processes output for these analyses.
-
-If at any point a Stage Name has the **Status “ERROR”**, please notify the PEcAn team member that is administering the demo, or feel free to do any of the following:
-
-* Refer to the PEcAn documentation
-* Post the end of your workflow log on our Slack channel
-* Post an issue on GitHub.
-
-The entire PEcAn team welcomes any questions you may have!
-
-**If the Finished Stage has a Status of “DONE”, congratulations!** If you got this far, you have managed to run an ecosystem model without ever touching a line of code! Now it’s time to look at the results: **click Finished**.
-
-FYI, [adding a new model](https://pecanproject.github.io/pecan-documentation/master/adding-an-ecosystem-model.html) to PEcAn does not require modification of the model’s code, just the implementation of a wrapper function.
-
-
-#### Output and Visualization
-
-**For now, focus on the graphs; we will explore all of PEcAn’s outputs in more detail in Demo 02.**
-
-
-#### Graphs
-
-1. **Select a Year and Y-axis Variable, and then click 'Plot run/year/variable'.** Initially leave the X-axis as time.
-2. Within this figure the **points indicate the daily mean** for the variable while the **envelope encompasses the diurnal variability (max and min)**.
-3. Variable names and units are based on a [standard netCDF format](http://nacp.ornl.gov/MsTMIP_variables.shtml).
-4. Try looking at a number of different output variables over different years.
-5. 
Try **changing the X-axis** to look at bivariate plots of how different output variables are related to one another. Be aware that PEcAn currently runs a moving min/mean/max through bivariate plots, just as it does with time series plots. In some cases this makes more sense than others.
-
-
-#### Alternative Visualization: R Shiny
-
-1. **Click on Open SHINY**, which will open a new browser window. The shiny app will automatically access your run’s output files and allow you to visualize all output variables as a function of time.
-
-```{r workflowshiny}
-knitr::include_graphics('extfiles/workflowshiny.png')
-```
-
-2. Use the pull down menu under **Variable Name** to choose whichever output variable you wish to plot.
-
-
-#### Model Run Archive
-
-**Return to the output window and click on the HISTORY button. Click on any previous run in the “ID” column** to go to the current state of that run's execution -- you can always return to old runs and runs in-progress this way. The run you just did should be the most recent entry in the table. **For the next analysis, make note of the ID number from your run.**
-
-
-#### Next steps
-
-##### Analyzing model output
-
-Follow this tutorial, [Analyze Output], to learn how to **open model output in R and compare to observed data**
-
-
-#### DEMO 02
-[Demo 02: Sensitivity and Uncertainty Analysis] will show how to perform **Ensemble & Sensitivity Analyses** through the web interface and explore the PEcAn outputs in greater detail, including the **trait meta-analysis**
-
-
-
-## Demo 02: Sensitivity and Uncertainty Analysis
-
-In Demo 2 we will be looking at how PEcAn can use information about parameter uncertainty to perform three automated analyses:
-
-* **Ensemble Analysis**: Repeats numerous model runs, each sampling from the parameter uncertainty, to generate a probability distribution of model projections. **Allows us to put a confidence interval on the model**
-* **Sensitivity Analysis**: Repeats numerous model runs to assess how changes in model parameters will affect model outputs. **Allows us to identify which parameters the model is most sensitive to.**
-* **Uncertainty Analysis**: Combines information about model sensitivity with information about parameter uncertainty to determine the contribution of each model parameter to the uncertainty in model outputs. **Allows us to identify which parameters are driving model uncertainty.**
-
-#### Run Specification
-
-1. Return to the main menu for the PEcAn web interface: **URL > Run Models**
-
-2. Repeat the steps for site selection and run specification from Demo 01, but also **click on “Advanced setup”**, then click Next.
-
-3. By clicking Advanced setup, PEcAn will first show an Analysis Menu, where we are going to specify new settings.
-
-   + For an ensemble analysis, increase the number of runs in the ensemble, in this case set **Runs to 50**. In practice you would want to use a larger ensemble size (100-5000) than we are using in the demo. The ensemble analysis samples parameters from their posterior distributions to propagate this uncertainty into the model output.
-
-   + PEcAn's sensitivity analysis holds all parameters at their median value and then varies each parameter one-at-a-time based on the quantiles of the posterior distribution. PEcAn also includes a handy shortcut, which is the default behavior for the web interface, that converts a specified standard deviation into its Normal quantile equivalent (e.g. -1 and 1 are converted to the 0.159 and 0.841 quantiles; see the short R sketch after this list). In this example **set Sensitivity to -2,-1,1,2** (the median value, 0, occurs by default).
-
-   + We also can tell PEcAn which variable to run the sensitivity on. Here, **set Variables to NEE**, so we can compare against flux tower NEE observations.
-
-**Click Next**
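-
-As a quick sanity check of the standard-deviation-to-quantile shortcut mentioned above, here is a short sketch in plain R (not PEcAn code):
-
-```{r echo = TRUE, eval = FALSE}
-## Normal quantiles corresponding to -2, -1, +1, and +2 standard deviations
-pnorm(c(-2, -1, 1, 2))
-## [1] 0.02275013 0.15865525 0.84134475 0.97724987
-```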
-
-#### Additional Outputs:
-
-The PEcAn workflow will take considerably longer to complete since we have just asked for over a hundred model runs. Once the runs are complete you will return to the output visualization page, where there will be a few new outputs to explore, as well as outputs that were present earlier that we’ll explore in greater detail:
-
-#### Run ID:
-While the sensitivity and ensemble analyses synthesize across runs, you can also select individual runs from the Run ID menu. You can use the Graphs menu to visualize each individual run, or open individual runs in Shiny.
-
-#### Inputs:
-This menu shows the contents of `/run`, which lets you look at and download:
-
-1. A summary file (README.txt) describing each run: location, run ID, model, dates, whether it was in the sensitivity or ensemble analysis, variables modified, etc.
-2. The model-specific input files fed into the model
-3. The `jobs.sh` file used to submit the model run
-
-#### Outputs:
-This menu shows the contents of `/out`. A number of files generated by the underlying ecosystem model are archived and available for download. These include:
-
-1. Output files in the standardized netCDF format (`[year].nc`) that can be downloaded for visualization and analysis (R, MATLAB, ncview, Panoply, etc.)
-2. Raw model output in model-specific format (e.g. sipnet.out).
-3. `logfile.txt`, which contains job.sh and model error, warning, and informational messages
-
-#### PFTs:
-This menu shows the contents of `/pft`. There is a wide array of outputs available that are related to the process of estimating the model parameters and running sensitivity/uncertainty analyses for a specific Plant Functional Type.
-
-1. **TRAITS**: The Rdata files **trait.data.Rdata** and **madata.Rdata** are, respectively, the available trait data extracted from the database that was used to estimate the model parameters and that same data cleaned and formatted for the statistical code. The **list of variables that are queried is determined by what variables have priors associated with them in the definition of the PFTs**. Priors are output into **prior.distns.Rdata**. Likewise, the **list of species that are associated with a PFT determines what subset of data is extracted** out of all data matching a given variable name. Demo 3 will demonstrate how a PFT can be created or modified. To look at these files in RStudio **click on these files to load them into your workspace**. You can further examine them in the _Environment_ window or by accessing them at the command line. For example, try typing ```names(trait.data)``` as this will tell you what variables were extracted, ```names(trait.data$Amax)``` will tell you the names of the columns in the Amax table, and ```summary(trait.data$Amax)``` will give you summary data about the Amax values. (A short R sketch of loading these files appears after this list.)
-2. **META-ANALYSIS**:
-    + ```*.bug```: The evaluation of the meta-analysis is done using a Bayesian statistical software package called JAGS that is called by the R code. For each trait, the R code will generate a [trait].model.bug file that is the JAGS code for the meta-analysis itself. This code is generated on the fly, with PEcAn adding or subtracting the site, treatment, and greenhouse terms depending upon the presence of these effects in the data itself. 
If the `<random.effects>` tag is set to FALSE, then all random effects will be turned off even if there are multiple sites.
-    + ```meta-analysis.log``` contains a number of diagnostics, including the summary statistics of the model, an assessment of whether the posterior is consistent with the prior, and the status of the Gelman-Brooks-Rubin convergence statistic (which is ideally 1.0 but should be less than 1.1).
-    + ```ma.summaryplots.*.pdf``` are collections of diagnostic plots produced in R after the above JAGS code is run that are useful in assessing the statistical model. _Open up one of these pdfs_ to evaluate the shape of the posterior distributions (they should generally be unimodal), the convergence of the MCMC chains (all chains should be mixing well from the same distribution), and the autocorrelation of the samples (should be low).
-    + ```traits.mcmc.Rdata``` contains the raw output from the statistical code. This includes samples from all of the parameters in the meta-analysis model, not just those that feed forward to the ecosystem, but also the variances, fixed effects, and random effects.
-    + ```post.distns.Rdata``` stores a simple table of the posterior distributions for all model parameters in terms of the name of the distribution and its parameters.
-    + ```posteriors.pdf``` provides graphics showing, for each model parameter, the prior distribution, the data, the smoothed histogram of the posterior distribution (labeled post), and the best-fit analytical approximation to that smoothed histogram (labeled approx). _Open posteriors.pdf and compare the posteriors to the priors and data._
-
-3. **SENSITIVITY ANALYSIS**
-    + ```sensitivity.analysis.[RunID].[Variable].[StartYear].[EndYear].pdf``` shows the raw data points from univariate one-at-a-time analyses and spline fits through the points. _Open this file_ to determine which parameters are most and least sensitive.
-
-4. **UNCERTAINTY ANALYSIS**
-    + ```variance.decomposition.[RunID].[Variable].[StartYear].[EndYear].pdf``` contains three columns: the coefficient of variation (normalized posterior variance), the elasticity (normalized sensitivity), and the partial standard deviation of each model parameter. **Open this file for BOTH the soil and conifer PFTs and answer the following questions:**
-    + The Variance Decomposition graph is sorted by the variable explaining the largest amount of variability in the model output (right-hand column). **From this graph identify the top-tier parameters that you would target for future constraint.**
-    + A parameter can be important because it is highly sensitive, because it is highly uncertain, or both. **Identify parameters in your output that meet each of these criteria.** Additionally, **identify parameters that are highly uncertain but unimportant (due to low sensitivity) and those that are highly sensitive but unimportant (due to low uncertainty)**.
-    + Parameter constraints could come from further literature synthesis, from direct measurement of the trait, or from data assimilation. **Choose the parameter that you think provides the most efficient means of reducing model uncertainty and propose how you might best reduce uncertainty in this process**. In making this choice remember that not all processes in models can be directly observed, and that the cost-per-sample for different measurements can vary tremendously (and thus the parameter you measure next is not always the one contributing the most to model variability). Also consider the role of parameter uncertainty versus model sensitivity in justifying your choice of what parameters to constrain.
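-
-If you would rather inspect these PFT outputs from the R console than by clicking on them in RStudio, the sketch below shows one way to do it. The workflow ID and PFT folder name are hypothetical; substitute the paths from your own run.
-
-```{r echo = TRUE, eval = FALSE}
-## Hypothetical paths -- replace with your own workflow ID and PFT name
-pft_dir <- "~/output/PEcAn_99000000001/pft/temperate.coniferous"
-
-load(file.path(pft_dir, "trait.data.Rdata"))   # loads the object `trait.data`
-load(file.path(pft_dir, "post.distns.Rdata"))  # loads the object `post.distns`
-
-names(trait.data)  # which traits had data in the database
-post.distns        # table of fitted posterior distributions per parameter
-```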
-
-#### PEcAn Files:
-
-This menu shows the contents of the root workflow folder that are not in one of the folders indicated above. It mostly contains log files from the PEcAn workflow that are useful if the workflow generates an error, and that serve as metadata & provenance (a detailed record of how the data were generated).
-
-1. ```STATUS``` gives a summary of the steps of the workflow, the time they took, and whether they were successful
-2. ```pecan.*.xml``` are PEcAn settings files
-3. ```workflow.R``` is the workflow script
-4. ```workflow.Rout``` is the corresponding log file
-5. ```samples.Rdata``` contains the parameter values used in the runs. This file contains two data objects, sa.samples and ensemble.samples, that are the parameter values for the sensitivity analysis and ensemble runs, respectively.
-6. ```sensitivity.output.[RunID].[Variable].[StartYear].[EndYear].Rdata``` contains the object sensitivity.output, which holds the model outputs corresponding to the parameter values in sa.samples.
-7. ENSEMBLE ANALYSIS
-    + ```ensemble.Rdata``` contains the object ensemble.output, which is the model predictions at the parameter values given in ensemble.samples.
-    + ```ensemble.analysis.[RunID].[Variable].[StartYear].[EndYear].pdf``` contains the ensemble prediction as both a histogram and a boxplot.
-    + ```ensemble.ts.[RunID].[Variable].[StartYear].[EndYear].pdf``` contains a time-series plot of the ensemble mean, median, and 95% CI.
-
-#### Global Sensitivity: Shiny
-
-**Navigate to URL/shiny/global-sensitivity.**
-
-This app uses the output from the ENSEMBLE runs to perform a global Monte Carlo sensitivity analysis. There are three modes controlled by Output type:
-
-1. **Pairwise** looks at the relationship between a specific parameter (X) and output (Y)
-2. **All parameters** looks at how all parameters affect a specific output (Y)
-3. **All variables** looks at how all outputs are affected by a specific parameter (X)
-
-In all of these analyses, the app also fits a linear regression to these scatterplots and reports a number of summary statistics. Among these, the slope is an indicator of **global sensitivity** and the R2 is an indicator of the contribution to **global uncertainty**.
-
-#### Next Steps
-
-The next set of tutorials will focus on the process of data assimilation and parameter estimation. The next two steps are in “.Rmd” files, which can be viewed online.
-
-#### Assimilation 'by hand'
-
-[Explore](https://github.com/PecanProject/pecan/blob/master/documentation/tutorials/sensitivity/PEcAn_sensitivity_tutorial_v1.0.Rmd) how model error changes as a function of parameter value (i.e. data assimilation ‘by hand’)
-
-
-#### MCMC Concepts
-
-[Explore](https://github.com/PecanProject/pecan/blob/master/documentation/tutorials/MCMC/MCMC_Concepts.Rmd) Bayesian MCMC concepts using the photosynthesis module
-
-
-#### More info about tools, analyses, and specific tasks…
-
-Additional information about specific tasks (adding sites, models, data; software updates; etc.) and analyses (e.g. 
data assimilation) can be found in the PEcAn [documentation](https://pecanproject.github.io/pecan-documentation/).
-
-If you encounter a problem with PEcAn that’s not covered in the documentation, or if PEcAn is missing functionality you need, please search [known bugs and issues](https://github.com/PecanProject/pecan/issues?q=), submit a [bug report](http://pecanproject.github.io/Report_an_issue.html), or ask a question in our [chat room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ). Additional questions can be directed to the [project manager](mailto:tonygard@bu.edu?subject=PEcAn Demo::).
-
-
-
-
-## Other Vignettes {#othervignettes}
-
-
-
-### Simple Model-Data Comparisons
-
-#### Authors: Istem Fer, Tess McCabe
-
-In this tutorial we will compare model outputs to data outside of the PEcAn web interface. The goal of this is to demonstrate how to perform additional analyses using PEcAn’s outputs. To do this you can download each of the Output files, and then perform the analyses using whatever software you prefer, or you can perform analyses directly on the PEcAn server itself. Here we’ll be analyzing model outputs in R using a browser-based version of RStudio that’s installed on the server.
-
-#### Starting RStudio Server
-
-1. Open RStudio Server in a new window at **URL/rstudio**
-
-2. The username is carya and the password is illinois.
-
-3. To open a new R script click File > New File > R Script
-
-4. Use the Files browser on the lower right pane to find where your run(s) are located
-
- + All PEcAn outputs are stored in the output folder. Click on this to open it up.
-
- + Within the outputs folder, there will be one folder for each workflow execution. For example, click to open the folder PEcAn_99000000001 if that’s your workflow ID.
-
- + A workflow folder will have a few log and settings files (e.g. pecan.xml) and the following subfolders:
-
-```
-run contains all the inputs for each run
-out contains all the outputs from each run
-pft contains the parameter information for each PFT
-```
-
-Within both the run and out folders there will be one folder for each unique model run, where the folder name is the run ID. Click to open the out folder. For our simple case we only did one run so there should be only one folder (e.g. 99000000001). Click to open this folder.
-
- + Within this folder you will find, among other things, files of the format .nc. Each of these files contains one year of model output in the standard PEcAn netCDF format. This is the model output that we will use to compare to data. 
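-
-You can also confirm which output files a run produced without leaving the R console; a minimal sketch (the workflow and run IDs below are the example ones used in this tutorial):
-
-```{r echo = TRUE, eval = FALSE}
-## List the yearly NetCDF outputs for one run (IDs are examples only)
-list.files("~/output/PEcAn_99000000001/out/99000000001", pattern = "\\.nc$")
-```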
-
-
-#### Read in settings from an XML file
-
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-## Read in the xml
-settings <- PEcAn.settings::read.settings("~/output/PEcAn_99000000001/pecan.CONFIGS.xml")
-
-## To read in the model output
-## Note: if you are using an xml from a run with multiple ensembles,
-## this line will provide only the first run id
-runid <- as.character(read.table(paste(settings$outdir, "/run/", "runs.txt", sep = ""))[1, 1])
-outdir <- paste(settings$outdir, "/out/", runid, sep = "")
-start.year <- as.numeric(lubridate::year(settings$run$start.date))
-end.year <- as.numeric(lubridate::year(settings$run$end.date))
-
-site.id <- settings$run$site$id
-File_path <- "~/output/dbfiles/AmerifluxLBL_site_0-772/AMF_US-NR1_BASE_HH_9-1.csv"
-
-## Open up a connection to the BETY database
-bety <- dplyr::src_postgres(host = settings$database$bety$host,
-                            user = settings$database$bety$user,
-                            password = settings$database$bety$password,
-                            dbname = settings$database$bety$dbname)
-```
-
-#### Read in model output from specific variables
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-model_vars <- c("time", "NEE")  # variables being read
-model <- PEcAn.utils::read.output(runid, outdir, start.year, end.year,
-                                  model_vars, dataframe = TRUE)
-```
-
-The arguments to read.output are the run ID, the folder where the run is located, the start year, the end year, and the variables being read. The README file in the Input file dropdown menu of any successful run lists the run ID, the output folder, and the start and end year.
-
-#### Compare model to flux observations
-
-**First**, _load up the observations_ and take a look at the contents of the file.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-## This matches the file with a premade "format", a template that describes
-## how the information in the file is organized
-File_format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 5000000002)
-
-## This tells PEcAn where the data comes from
-site <- PEcAn.DB::query.site(site.id, bety$con)
-
-## This will throw an error that not all of the units can be converted;
-## that's ok, as the units of the variable of interest (NEE) are converted
-observations <- PEcAn.benchmark::load_data(data.path = File_path,
-                                           format = File_format,
-                                           time.row = File_format$time.row,
-                                           site = site,
-                                           start_year = start.year,
-                                           end_year = end.year)
-```
-
-File_path refers to where you stored your observational data. In this example the default file path is an Ameriflux dataset from Niwot Ridge.
-
-File_format queries the database for the format your file is in. The default format ID "5000000002" is for csv files downloaded from the Ameriflux website.
-You could query for different kinds of formats that exist in BETY or [make your own](https://pecanproject.github.io/pecan-documentation/adding-an-ecosystem-model.html#formats).
-
-Here 772 is the database site ID for Niwot Ridge Forest, which tells PEcAn where the data is from and what time zone to assign any time data read in.
-
-**Second**, _apply a conservative u* filter to observations_.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-observations$NEE[observations$UST < 0.2] <- NA
-```
-
-**Third**, _align model output and observations_.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-aligned_dat <- PEcAn.benchmark::align_data(model.calc = model,
-                                           obvs.calc = observations,
-                                           var = "NEE",
-                                           align_method = "match_timestep")
-```
-
-When we aligned the data, we got a data frame with the variables we requested as $NEE.m$ and $NEE.o$ columns. The $.o$ suffix is for observations, and the $.m$ suffix is for the model. The posix column allows for easy plotting along a time series.
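-
-Before plotting, it can be worth a quick sanity check on the aligned data frame; a minimal sketch (column names follow the NEE example above):
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-## Peek at the aligned model (NEE.m) and observed (NEE.o) values
-head(aligned_dat[, c("posix", "NEE.m", "NEE.o")])
-summary(aligned_dat$NEE.m - aligned_dat$NEE.o)  # rough look at the residuals
-```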
-
-**Fourth**, _plot model predictions vs. observations_ and compare this to a 1:1 line.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-## predicted vs observed plot
-plot(aligned_dat$NEE.m, aligned_dat$NEE.o)
-abline(0, 1, col = "red")  ## intercept = 0, slope = 1
-```
-
-**Fifth**, _calculate the Root Mean Square Error (RMSE)_ between the model and the data.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-rmse <- sqrt(mean((aligned_dat$NEE.m - aligned_dat$NEE.o)^2, na.rm = TRUE))
-```
-*na.rm* makes sure we don’t include missing or screened values in either time series.
-
-**Finally**, _plot time-series_ of both the model and data together.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-## plot aligned data
-plot(aligned_dat$posix, aligned_dat$NEE.o, type = "l")
-lines(aligned_dat$posix, aligned_dat$NEE.m, col = "red")
-```
-
-**Bonus** _How would you compare aggregated data?_
-
-Try RMSE against monthly NEE instead of half-hourly. In this case, first average the values up to monthly in the observations. Then, use `align_data` to match the monthly timestep in model output.
-
-**NOTE**: `align_data` uses two separate alignment functions, `match_timestep` and `mean_over_larger_timestep`. `match_timestep` uses only the data that are present in both the model and the observations, which is helpful for sparse observations. `mean_over_larger_timestep` aggregates the values over the largest timestep present. If you were to look at averaged monthly data, you would use `mean_over_larger_timestep`.
-
-```{r echo = TRUE, warning=FALSE, eval= FALSE}
-monthlyNEEobs <- aggregate(observations,
-                           by = list(lubridate::month(observations$posix)),
-                           FUN = mean, na.rm = TRUE, simplify = TRUE)
-plottable <- PEcAn.benchmark::align_data(model.calc = model,
-                                         obvs.calc = monthlyNEEobs,
-                                         align_method = "mean_over_larger_timestep",
-                                         var = "NEE")
-head(plottable)
-```
-
-
-
-### Data Assimilation Concepts
-
-
-The goal of this tutorial is to help you gain some hands-on familiarity with some of the concepts, tools, and techniques involved in Bayesian Calibration. As a warm-up to more advanced approaches to model-data fusion involving full ecosystem models, this example will focus on fitting the Farquhar, von Caemmerer, and Berry (1980) photosynthesis model [FvCB model] to leaf-level photosynthetic data. This is a simple, nonlinear model consisting of three equations that models net carbon assimilation, $A^{(m)}$, at the scale of an individual leaf as a function of light and CO2.
-
-$$A_j = \frac{\alpha Q}{\sqrt{1+(\alpha^2 Q^2)/J_{max}^2}} \frac{C_i - \Gamma}{4 C_i + 8 \Gamma}$$
-
-$$A_c = V_{cmax} \frac{C_i - \Gamma}{C_i + K_C (1 + [O]/K_O)}$$
-
-$$A^{(m)} = \min(A_j, A_c) - r$$
-
-The first equation, $A_j$, describes the RuBP-regeneration-limited case. In this equation the first fraction is a nonrectangular hyperbola predicting $J$, the electron transport rate, as a function of incident light $Q$, quantum yield $\alpha$, and the asymptotic saturation of $J$ at high light, $J_{max}$. The second equation, $A_c$, describes the Rubisco-limited case. The third equation says that the overall net assimilation is determined by whichever of the two above cases is limiting, minus the leaf respiration rate, $r$.
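-
-Purely for intuition, the three equations above can also be written as a few lines of R. This is a minimal sketch, not PEcAn code; the function name and the treatment of the Michaelis-Menten constants (`Kc`, `Ko`) and O2 level (`po`) as inputs are illustrative assumptions.
-
-```{r echo = TRUE, eval = FALSE}
-## A sketch of the FvCB equations above (illustrative only).
-## q = light (Q), ci = CO2 (C_i); Kc, Ko, and po are supplied as data,
-## as in the JAGS code shown below.
-fvcb <- function(q, ci, alpha, Jmax, vmax, cp, r, Kc, Ko, po) {
-  Aj <- (alpha * q / sqrt(1 + (alpha^2 * q^2) / Jmax^2)) *
-    (ci - cp) / (4 * ci + 8 * cp)                     # RuBP-regeneration limited
-  Ac <- vmax * (ci - cp) / (ci + Kc * (1 + po / Ko))  # Rubisco limited
-  pmin(Aj, Ac) - r                                    # net assimilation A^(m)
-}
-```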
-
-$$A^{(o)} \sim N(A^{(m)},\tau)$$
-
-To fit this model to data we're going to rely on a piece of statistical software known as JAGS. The above model would be written in JAGS as:
-
-```
-model{
-
-  ## Priors
-  Jmax ~ dlnorm(4.7,2.7)   ## maximum electron transport rate prior
-  alpha ~ dnorm(0.25,100)  ## quantum yield (mol electrons/mole photon) prior
-  vmax ~ dlnorm(4.6,2.7)   ## maximum rubisco capacity prior
-  r ~ dlnorm(0.75,1.56)    ## leaf respiration prior
-  cp ~ dlnorm(1.9,2.7)     ## CO2 compensation point prior
-  tau ~ dgamma(0.1,0.1)    ## residual precision prior
-
-  for(i in 1:n){
-
-    ## electron transport limited
-    Aj[i] <- (alpha*q[i]/(sqrt(1+(alpha*alpha*q[i]*q[i])/(Jmax*Jmax))))*(pi[i]-cp)/(4*pi[i]+8*cp)
-
-    ## maximum rubisco limited without covariates
-    Ac[i] <- vmax*(pi[i]-cp)/(pi[i]+Kc*(1+po/Ko))
-
-    Am[i] <- min(Aj[i], Ac[i]) - r  ## predicted net photosynthesis
-    Ao[i] ~ dnorm(Am[i],tau)        ## likelihood
-  }
-
-}
-```
-
-Note that anything not assigned a distribution or formula inside the model (here the observations Ao, the light and CO2 covariates q and pi, the sample size n, and the constants Kc, Ko, and po) must be passed to JAGS as data.
-
-The first chunk of code defines the _prior_ probability distributions. In Bayesian inference every unknown parameter that needs to be estimated is required to have a prior distribution. Priors are the expression of our belief about what values a parameter might take on **prior to observing the data**. They can arise from many sources of information (literature survey, meta-analysis, expert opinion, etc.) provided that they do not make use of the data that is being used to fit the model. In this particular case, the priors were defined by Feng and Dietze 2013. Most priors are lognormal or gamma, which were chosen because most of these parameters need to be positive.
-
-After the priors comes the data model, which in JAGS needs to be implemented as a loop over every observation. This is simply a codified version of the earlier equations.
-
-Table 1: FvCB model parameters in the statistical code, their symbols in equations, and definitions
-
-Parameter | Symbol | Definition
-----------|------------|-----------
-alpha0 | $\alpha$ | quantum yield (mol electrons/mole photon)
-Jmax | $J_{max}$ | maximum electron transport rate
-cp | $\Gamma$ | CO2 compensation point
-vmax0 | $V_{cmax}$ | maximum Rubisco capacity (a.k.a. Vcmax)
-r | $R_d$ | leaf respiration
-tau | $\tau$ | residual precision
-q | $Q$ | PAR
-pi | $C_i$ | CO2 concentration
-
-#### Fitting the model
-
-To begin with we'll load up an example A-Ci and A-Q curve that was collected during the 2012 edition of the [Flux Course](http://www.fluxcourse.org/) at Niwot Ridge. The exact syntax below may be a bit confusing to those unaccustomed to R, but the essence is that the `filenames` line is looking up where the example data is stored in the PEcAn.photosynthesis package and the `dat` line is loading up two files (one the A-Ci curve, the other the A-Q curve) and concatenating them together.
-
-```{r echo=TRUE ,eval=FALSE}
-library(PEcAn.photosynthesis)
-
-### Load built in data
-filenames <- system.file("extdata", paste0("flux-course-3", c("aci", "aq")), package = "PEcAn.photosynthesis")
-dat <- do.call("rbind", lapply(filenames, read_Licor))
-
-## Simple plots
-aci = as.character(dat$fname) == basename(filenames[1])
-plot(dat$Ci[aci], dat$Photo[aci], main = "ACi")
-plot(dat$PARi[!aci], dat$Photo[!aci], main = "AQ")
-```
-
-In PEcAn we've written a wrapper function, `fitA`, around the statistical model discussed above, which has a number of other bells and whistles discussed in the [PEcAn Photosynthesis Vignette](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd).
For today we'll just use the most basic version, which takes as arguments the data and the number of MCMC iterations we want to run.
-
-```{r echo=TRUE, eval=FALSE}
-fit <- fitA(dat, model = list(n.iter = 10000))
-```
-
-#### What's going on
-
-Bayesian numerical methods for model calibration are based on sampling parameter values from the posterior distribution. Fundamentally what's returned is a matrix, with the number of iterations as rows and the number of parameters as columns, containing samples from the posterior distribution, from which we can approximate any quantity of interest (mean, median, variance, CI, etc.).
-
-The following plots follow the trajectory of two correlated parameters, Jmax and alpha. In the first figure, arrows show the first 10 iterations. Internally JAGS is choosing between a variety of different Bayesian sampling methods (e.g. Metropolis-Hastings, Gibbs sampling, slice sampling, rejection sampling, etc.) to draw a new value for each parameter conditional on the current value. After just 10 steps we don't have a good picture of the overall posterior, but it should still be clear that the sampling is not a complete random walk.
-
-```{r echo= TRUE, eval=FALSE}
-
-params <- as.matrix(fit$params)
-xrng = range(fit$params[, "alpha0"])
-yrng = range(fit$params[, "Jmax0"])
-
-n = 1:10
-plot(params[n, "alpha0"], params[n, "Jmax0"], type = 'n', xlim = xrng, ylim = yrng)
-arrows(params[n[-10], "alpha0"], params[n[-10], "Jmax0"], params[n[-1], "alpha0"], params[n[-1], "Jmax0"], length = 0.125, lwd = 1.1)
-
-```
-
-After 100 steps, we can see a cloud start to form, with occasional wanderings around the periphery.
-
-```{r echo= TRUE, eval=FALSE}
-n = 1:100
-plot(params[n, "alpha0"], params[n, "Jmax0"], type = 'l', xlim = xrng, ylim = yrng)
-```
-
-After `nrow(params)` steps what we see is a point cloud of samples from the joint posterior distribution. When viewed sequentially, points are not independent, but we are interested in working with the overall distribution, where the order of samples is not important.
-
-```{r echo= TRUE, eval=FALSE}
-n = 1:nrow(params)
-plot(params[n, "alpha0"], params[n, "Jmax0"], type = 'p', pch = "+", cex = 0.5, xlim = xrng, ylim = yrng)
-```
-
-
-#### Evaluating the model output
-
-A really important aspect of Bayesian inference is that the output is the **joint** posterior probability of all the parameters. This is very different from an optimization approach, which tries to find a single best parameter value. It is also different from estimating the independent posterior probabilities of each parameter -- Bayesian posteriors frequently have strong correlation among parameters for reasons having to do both with model structure and the underlying data.
-
-The model we're fitting has six free parameters, and therefore the output matrix has 6 columns, of which we've only looked at two. Unfortunately it is impossible to visualize a 6-dimensional parameter space on a two-dimensional screen, so a very common practice (for models with a small to medium number of parameters) is to look at all pairwise scatterplots. If parameters are uncorrelated we will typically see oval-shaped clouds that are oriented in the same directions as the axes. For parameters with linear correlations those clouds will lie along a diagonal. For parameters with nonlinear trade-offs the shapes of the parameter clouds can be more complex, such as banana-shaped or triangular clouds.
For the FvCB model we see very few parameters that are uncorrelated or have simple linear correlations, a fact that we should keep in mind when interpreting individual parameters.
-
-```{r echo= TRUE, eval=FALSE}
-pairs(params, pch = ".")
-```
-
-The three most common outputs that are produced for almost all Bayesian analyses are the MCMC chains themselves, the marginal distributions of each parameter, and the overall summary statistics.
-
-The 'trace' diagrams below show the history of individual parameters during the MCMC sampling. There are different color lines that represent the fact that JAGS ran the MCMC multiple times, with each run (i.e. each color) being referred to as a different $chain$. It is common to run multiple chains in order to assess whether the model, started from different points, consistently converges on the same answer. The ideal trace plot looks like white noise with all chains in agreement.
-
-The 'density' figures represent smoothed versions of the _marginal_ distributions of each parameter. The tick marks on the x-axis are the actual samples. You will note that some posteriors will look approximately Normal, while others may be skewed or have clearly defined boundaries. On occasion there will even be posteriors that are multimodal. There is no assumption built into Bayesian statistics that the posteriors need be Normal, so as long as the MCMC has converged this diversity of shapes is valid. [note: the most common cause of multimodal posteriors is a lack of convergence]
-
-Finally, the summary table reports, for each parameter, a mean, standard deviation, two variants of standard error, and standard quantile estimates (95% CI, interquartile, and median). The standard deviation of the posterior is a good summary statistic about **how uncertain we are about a parameter**. The Naive SE is the traditional $\frac{SD}{\sqrt{n}}$, which is an estimate of the **NUMERICAL accuracy in our estimate of the mean**. As we run the MCMC longer (i.e. take more samples), we get an answer that is numerically more precise (SE converges to 0), but the uncertainty in the parameter (i.e. SD) stays the same, because that's determined by the sample size of the DATA, not the length of the MCMC. Finally, the Time-series SE is a variant of the SE calculation that accounts for the autocorrelation in the MCMC samples. In practice it is therefore more appropriate to use this term to assess numerical accuracy.
-
-```{r echo= TRUE, eval=FALSE}
-plot(fit$params, auto.layout = FALSE)  ## MCMC diagnostic plots
-summary(fit$params)                    ## parameter estimates
-```
-
-Assessing the convergence of the MCMC is first done visually, but more generally the use of statistical diagnostics to assess convergence is highly encouraged. There are a number of metrics in the literature, but the most common is the Brooks-Gelman-Rubin statistic, which compares the variance within each chain to the variance across chains. If the chains have converged then this quantity should be 1. Values less than 1.05 are typically considered sufficient by most statisticians, but these are just rules of thumb.
-
-```{r echo= TRUE, eval=FALSE}
-gelman.plot(fit$params, auto.layout = FALSE)
-gelman.diag(fit$params)
-```
-
-As with any modeling, whether statistical or process-based, another common diagnostic is a predicted vs observed plot. In a perfect model the data would fall along the 1:1 line. The deviations away from this line are the model residuals.
If observations lie along a line other than the 1:1 line, this indicates that the model is biased in some way. This bias is often assessed by fitting a linear regression to the points, though two important things are noteworthy about this practice. First, the $R^2$ and residual error of this regression are not the appropriate statistics to use to assess model performance (though you will frequently find them reported incorrectly in the literature). The correct $R^2$ and residual error (a.k.a. root mean square error, RMSE) are based on deviations from the 1:1 line, not the regression. The code below shows these two terms calculated by hand. The second thing to note about the regression line is that the standard regression F-test, which assesses deviations from 0, is not the test you are actually interested in, which is whether the line differs from 1:1. Therefore, while the test on the intercept is correct, as this value should be 0 in an unbiased model, the test statistic on the slope is typically of less interest (unless your question really is about whether the model is doing better than random). However, this form of bias can easily be assessed by looking to see if the CI for the slope overlaps with 1.
-
-```{r echo= TRUE, eval=FALSE}
-## predicted vs observed plot
-par(mfrow = c(1, 1))
-mstats = summary(fit$predict)
-pmean = mstats$statistics[grep("pmean", rownames(mstats$statistics)), 1]
-plot(pmean, dat$Photo, pch = "+", xlab = "Predicted A", ylab = "Observed A")
-abline(0, 1, col = 2, lwd = 2)
-bias.fit = lm(dat$Photo ~ pmean)
-abline(bias.fit, col = 3, lty = 2, lwd = 2)
-legend("topleft", legend = c("1:1", "regression"), lwd = 2, col = 2:3, lty = 1:2)
-summary(bias.fit)
-RMSE = sqrt(mean((pmean - dat$Photo)^2))
-RMSE
-R2 = 1 - RMSE^2/var(dat$Photo)
-R2
-confint(bias.fit)
-```
-
-In the final set of plots we look at the actual A-Ci and A-Q curves themselves. Here we've added two interval estimates around the curves. The CI captures the uncertainty in the _parameters_ and will asymptotically shrink with more and more data. The PI (predictive interval) includes the parameter and residual error. If our fit is good then the 95% PI should thus encompass at least 95% of the observations. That said, as with any statistical model we want to look for systematic deviations in the residuals from either the mean or the range of the PI.
-
-```{r echo= TRUE, eval=FALSE}
-## Response curve
-plot_photo(dat, fit)
-```
-
-Note: on the last figure you will get warnings about "No ACi" and "No AQ" which can be ignored. These are occurring because the file that had the ACi curve didn't have an AQ curve, and the file that had the AQ curve didn't have an ACi curve.
-
-
-#### Additional information
-
-There is a more detailed R vignette on the use of the PEcAn photosynthesis module available in the [PEcAn Repository](https://github.com/PecanProject/pecan/blob/master/modules/photosynthesis/vignettes/ResponseCurves.Rmd).
-
-#### Citations
-
-Dietze, M.C. (2014). Gaps in knowledge and data driving uncertainty in models of photosynthesis. Photosynth. Res., 119, 3–14.
-
-Farquhar, G., Caemmerer, S. & Berry, J.A. (1980). A biochemical model of photosynthetic CO2 assimilation in leaves of C3 species. Planta, 149, 78–90.
-
-Feng, X. & Dietze, M.C. (2013). Scale dependence in the effects of leaf ecophysiological traits on photosynthesis: Bayesian parameterization of photosynthesis models. New Phytol., 200, 1132–1144.
-
-
-
-### Parameter Data Assimilation
-
-
-#### Objectives
-* Gain hands-on experience in using Bayesian MCMC techniques to calibrate a simple ecosystem model using parameter data assimilation (PDA)
-* Set up and run a PDA in PEcAn using the model emulation technique, assimilating NEE data from Niwot Ridge
-* Examine outputs from a PDA for the SIPNET model and evaluate the calibrated model against i) the data used to constrain the model, ii) additional data for the same site
-
-#### Larger Context
-Parameter data assimilation (PDA) occurs as one step in the larger process of model calibration, validation, and application. The goal of PDA is to update our estimates of the posterior distributions of the model parameters using data that correspond to model outputs. This differs from our previous use of PEcAn to constrain a simple model using data that map directly to the model parameters. Briefly, the recommended overall approach for model calibration and validation consists of the following steps:
-
-1. Assemble and process data sets required by the model as drivers
-2. Perform an initial test-run of the model as a basic sanity check
-    + Were there errors in drivers? (return to 1)
-    + Is the model in the same ballpark as the data?
-3. Construct priors for model parameters
-4. Collect/assemble the data that can be used to constrain model parameters and outputs
-5. Meta-analysis
-6. Sensitivity analysis (SA)
-7. Variance Decomposition (VD)
-8. Determine what parameters need further constraint
-    + Does this data exist in the literature? (repeat 4-8)
-    + Can I collect this data in the field? (repeat 4-8)
-9. Ensemble Analysis
-    + Is reality within the range of the uncertainty in the model?
-10. Evaluate/estimate uncertainties in the data
-11. Parameter Data Assimilation:
-    + Propose new parameter values
-    + Evaluate L(data | param) & prior(param)
-    + Accept or reject the proposed parameter values
-    + Repeat many times until a histogram of accepted parameter values approximates the true posterior distribution.
-12. Model evaluation [preferably ensemble based]
-    + Against data used to constrain model
-    + Against additional data for this site
-        + Same variable, different time
-        + Different variables
-    + Against data at a new site
-    + Do I need more data? Repeat 4-9 (direct data constraint) or 6-11 (parameter data assimilation).
-13. Application [preferably ensemble forecast or hindcast]
-
-#### Connect to RStudio
-Today, we're again going to work mostly in RStudio, in order to easily edit advanced PEcAn settings and browse files. So if you haven't already, connect now to the RStudio server on your VM ([URL]/rstudio).
-
-This tutorial assumes you have successfully completed an ensemble and a sensitivity analysis (Demo 2) before.
-
-#### Defining variables
-
-The following variables need to be set for the specific site and workflow being run
-```{r echo= TRUE, eval=FALSE}
-workflow_id <- 99000000002 ## comes from the History table, your successful ensemble run's workflow ID
-
-## from URL/bety/inputs.
-## Search by Ameriflux ID (e.g. 
US-NR1) -## Find the "plain" site record (no CF or model name) that's on your server -## (probably last in the list) -## Click on the magnifying glass icon then look under "View Related Files" -datadir <- "/home/carya/output/dbfiles/AmerifluxLBL_site_0-772/" - -## where PEcAn is saving output (default OK on VM) -outdir <- "/home/carya/output/" -``` - - -#### Initial Ensemble Analysis -A good place to start when thinking about a new PDA analysis is to look at the current model fit to observed data. In fact, we want to compare data to a full ensemble prediction from the model. This is important because our current parameter distributions will be the priors for PDA. While the analysis will translate these priors into more optimal (in terms of producing model output that matches observations) and more confident (i.e. narrower) posterior distributions, these results are inherently constrained by the current parameter distributions. Thus, if reality falls far outside the prior ensemble confidence interval (which reflects the current uncertainty of all model parameters), data assimilation will not be able to fix this. In such cases, the prior parameter estimates must already be over-constrained, or there are structural errors in the model itself that need fixing. -To begin, let’s load up some NEE observations so that we can plot them along with our ensemble predictions. In the code below the elements in bold may vary depending on site and your previous runs. - -```{r echo= TRUE, eval=FALSE} - -library(PEcAn.all) - -# read settings -settings <- read.settings(file.path(outdir,paste0("PEcAn_",workflow_id),"pecan.CONFIGS.xml")) - -# open up a DB connection -bety<-settings$database$bety -bety <-dplyr::src_postgres(host = bety$host, user = bety$user, password = bety$password, dbname = bety$dbname) - -# Fill out the arguments needed by load_data function - -# read file format information -format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 5000000002) -start_year <- lubridate::year(settings$run$start.date) -end_year <- lubridate::year(settings$run$end.date) -vars.used.index <- which(format$vars$bety_name %in% c("NEE", "UST")) - -obs <-PEcAn.benchmark::load_data(data.path = file.path(datadir, "AMF_US-NR1_BASE_HH_9-1.csv"), - format = format, start_year = start_year, end_year = end_year, - site = settings$run$site, - vars.used.index = vars.used.index, - time.row = format$time.row) - -obs$NEE[obs$UST<0.4] <- NA ## U* filter -NEEo <- obs$NEE -``` - -Now let's load up our ensemble outputs from the previous ensemble analysis (Demo 2) and plot our ensemble predictions against our NEE observations. 
-```{r echo= TRUE, eval=FALSE}
-
-# load outputs, try not to delete the prePDA ensemble output filename from your environment
-prePDA_ensemble_output_file <- file.path(outdir, paste0("PEcAn_", workflow_id, "/ensemble.ts.", settings$ensemble$ensemble.id, ".NEE.2003.2006.Rdata"))
-load(prePDA_ensemble_output_file)
-
-# calculate CI
-pre_pda_ens <- ensemble.ts[["NEE"]]
-preCI <- apply(pre_pda_ens, 2, quantile, c(0.025, 0.5, 0.975), na.rm = TRUE)
-
-# plot model ensemble
-ymin <- min(c(preCI, NEEo), na.rm = TRUE)
-ymax <- max(c(preCI, NEEo), na.rm = TRUE)
-plot(preCI[2,], ylim = c(ymin, ymax), lwd = 2, xlab = "time", ylab = "NEE", main = "pre-PDA model ensemble vs data", type = "n")
-prepoly <- 1:dim(preCI)[2]
-polygon(c(prepoly, rev(prepoly)), c(preCI[3,], rev(preCI[1,])), col = 'khaki', border = NA)
-
-# add data
-points(NEEo, pch = ".", col = adjustcolor("purple", alpha.f = 0.5))
-legend("topright", legend = c("Data", "Pre-PDA Model"), pch = c(15, 15),
-       col = c("purple", "khaki"))
-```
-
-When interpreting your results it is important to remember the difference between a confidence interval, which just includes parameter uncertainties, and a predictive interval, which includes parameter and residual uncertainties. Your ensemble analysis plot illustrates the former—i.e., the confidence in the mean NEE. By contrast, the data reflect both changes in mean NEE and random variability. As such, we can't expect all the data to fall within the CI; in fact, if we had unlimited data to constrain mean NEE, the CI would collapse to a single line and none of the data would be contained! However, your plot will give you an idea of how much uncertainty there is in your model currently, and help to identify systematic errors like bias (values consistently too high or low) or poorly represented seasonal patterns.
-
-#### Questions:
-* Does your ensemble agree well with the data?
-    + If so, how much room for improvement is there, in terms of tightening the CI?
-    + If not, what are the greatest discrepancies?
-* What are some of the problems (with model, data, and/or PEcAn) that might explain the data-model disparity you see?
-
-#### Choosing Parameters
-Beyond exploratory exercises, the first step of a PDA analysis is to choose the model parameters you will target for optimization. PDA is computationally expensive (even when using an emulator), and the cost increases exponentially with the number of parameters targeted. The number you can handle in any given analysis completely depends on the complexity of the model and your available computational resources, but in practice it's going to be rather small (~1–10) relative to the large number of parameters in a mechanistic ecosystem model (~10–100).
-Given this limitation, it is important to target parameters that can contribute substantially to improving model fit. If you recall, identifying those parameters was the goal of the uncertainty analysis you conducted previously, in the second PEcAn demo. Let's revisit that analysis now: open your variance decomposition graph from Demo 2.
-From this figure decide which variables you will target with PDA. As noted, an obvious criterion is that the parameter should be contributing a large amount of uncertainty to the current model, because otherwise it simply can't change the model output much no matter how much you try to optimize it. But there are other considerations too. For example, if two parameters have similar or competing roles in the model, you may have trouble optimizing both simultaneously.
In practice, there will likely be some trial and error involved, though a good understanding of how the model works will help. It may also help to look at the shape of the sensitivity responses and the details of the model fit to data (your ensemble analysis from the previous section).
-For the purposes of this demo, choose eight to ten parameters (in total, if you have more than one PFT) that contribute high uncertainty to model output and/or seem like good choices for some other rational reason.
-
-#### Questions:
-* Which parameters did you choose, and why?
-
-#### Editing PEcAn settings
-Now let’s add settings to tell PEcAn how to run the PDA with the emulator; we will come back to the details of model emulation later. Open up the pecan.CONFIGS.xml file you located previously, and choose ```File > Save as...``` from the menu to save a new copy as **pecan.PDA.xml**. Now add the block of XML listed below to the file, immediately after the `<pecan>` line. Check and fill in the parts corresponding to your run when necessary.
-In this block, use the `<param>` tags to identify the parameters you’ve chosen for PDA (it's up to you to choose the number of parameters you want to constrain; you can then set `<n.knot>` to be >= 10 per parameter you choose, e.g. 200 knots for 10 parameters). Here, you need to use PEcAn’s standard parameter names, which are generally not the same as what’s printed on your variance decomposition graph. To find your parameters look at the row names in the ```prior.distns.csv``` file for each PFT under the PFT pulldown menu. Insert the variable name (exactly, and case sensitive) into the `<param>` tags of the XML code below.
-In addition, you may need to edit the `<inputs>` block, depending on the site and year you ran previously. The rest of the settings control options for the PDA analysis (how long to run, etc.), and also identify the data to be used for assimilation. For more details, see the assim.batch vignette on the PEcAn GitHub page (https://goo.gl/9hYVPQ).
-
-```
-<?xml version="1.0"?>                      <-- These lines are already in there. Don't duplicate them,
-<pecan>                                    <-- just paste the block below right after them.
-  <assim.batch>
-    <method>emulator</method>
-    <n.knot>160</n.knot>                   <-- FILL IN
-    <iter>25000</iter>
-    <chain>3</chain>
-    <param.names>
-      <YOUR_PFT_1_NAME>                    <-- FILL IN
-        <param>YOUR_PFT_1_PARAM_1</param>  <-- FILL IN
-        <param>YOUR_PFT_1_PARAM_2</param>  <-- FILL IN
-      </YOUR_PFT_1_NAME>
-      <YOUR_PFT_2_NAME>                    <-- FILL IN
-        <param>YOUR_PFT_2_PARAM_1</param>  <-- FILL IN
-        <param>YOUR_PFT_2_PARAM_2</param>  <-- FILL IN
-        <param>YOUR_PFT_2_PARAM_3</param>  <-- FILL IN
-        <param>YOUR_PFT_2_PARAM_4</param>  <-- FILL IN
-        <param>YOUR_PFT_2_PARAM_5</param>  <-- FILL IN
-        <param>YOUR_PFT_2_PARAM_6</param>  <-- FILL IN
-      </YOUR_PFT_2_NAME>
-    </param.names>
-    <jump>
-      <adapt>100</adapt>
-      <adj.min>0.1</adj.min>
-      <ar.target>0.3</ar.target>
-    </jump>
-    <inputs>
-      <file>
-        <path>
-          <path>/home/carya/output/dbfiles/AmerifluxLBL_site_0-772/AMF_US-NR1_BASE_HH_9-1.csv</path>
-        </path>
-        <format>5000000002</format>
-        <input.id>1000011238</input.id>    <-- FILL IN, from BETY inputs table, this is *NOT* the workflow ID
-        <likelihood>Laplace</likelihood>
-        <variable.name>
-          <variable.name>NEE</variable.name>
-          <variable.name>UST</variable.name>
-        </variable.name>
-        <variable.id>297</variable.id>
-      </file>
-    </inputs>
-  </assim.batch>
-```
-Once you’ve made and saved the changes to your XML, load the file and check that it contains the new settings:
-
-```{r echo= TRUE, eval=FALSE}
-settings <- read.settings(file.path(outdir, paste0("PEcAn_", workflow_id), "pecan.PDA.xml"))
-settings$assim.batch
-```
-
-If the printed list contains everything you just added to pecan.PDA.xml, you’re ready to proceed.
-
-#### Investigating PEcAn function pda.emulator (optional)
-
-Before we run the data assimilation, let's take a high-level look at the organization of the code. Use the RStudio file browser to open up ```~/pecan/modules/assim.batch/R/pda.emulator.R```. This code works in much the same way as the pure statistical models that we learned about earlier in the week, except that the model being fit is a statistical model that emulates a complicated process-based computer simulation (i.e., an ecosystem model).
We could have directly used the ecosystem model (indeed, PEcAn's other PDA functions perform MCMC by actually running the ecosystem model at each iteration; see the pda.mcmc.R script as an example); however, this would require a lot more computational time than we have today. Instead, here we will use a technique called model emulation. This technique allows us to run the model a relatively small number of times, with parameter values that have been carefully chosen to give good coverage of parameter space. Then we can interpolate the likelihood calculated for each of those runs to get a surface that "emulates" the true likelihood, and perform regular MCMC, except that instead of actually running the model on every iteration to get a likelihood, we just get an approximation from the likelihood emulator. The general algorithm of this method can be expressed as:
-
-1. Propose an initial parameter set sampling design
-2. Run the full model for each parameter set
-3. Evaluate the likelihoods
-4. Construct an emulator of the multivariate likelihood surface
-5. Use the emulator to estimate posterior parameter distributions
-6. (Optional) Refine the emulator by proposing new design points, then go back to step 2
-
-For now, we just want you to get a glimpse at the overall structure of the code, which is laid out in the comment headers in ```pda.emulator()```. Most of the real work gets done by the functions this code calls, which are all located in the file ```~/pecan/modules/assim.batch/R/pda.utils.R```, and the MCMC will be performed by the ```mcmc.GP()``` function in ```~/pecan/modules/emulator/R/minimize.GP.R```. To delve deeper into how the code works, take a look at these files when you have the time.
-
-#### Running a demo PDA
-
-Now we can go ahead and run a data assimilation MCMC with the emulator. Since you've already loaded the settings containing your modified XML block, all you need to do to start the PDA is run `pda.emulator(settings)`. But, in the emulator workflow, there is a bit of a time-consuming step where we calculate the effective sample size of the input data, and we have already done this step for you. You can load it up and pass it to the function explicitly in order to skip this step:
-
-```{r echo= TRUE, eval=FALSE}
-# load input data
-load("/home/carya/pda/pda_externals.Rdata")
-postPDA.settings <- pda.emulator(settings, external.data = inputs_w_neff)
-```
-After executing the code above, you will see print-outs to the console. The code begins by loading the prior values, which in this case are the posterior distributions coming from your previous meta-analysis. Then, normally, it would load the observational data and carry out the necessary conversions and formatting to align it with model outputs, as we did separately above, but today it will skip this step because we are passing the data externally. After this step, you will see a progress bar while the actual model is run n.knot times with the proposed parameter sets, and then the outputs from these runs are read. Next, this model output is compared to the specified observational data, and the likelihood is calculated using the heteroskedastic Laplacian discussed previously. Once we calculate the likelihoods, we fit an emulator which interpolates the likelihood in parameter space between the points where the model has actually been run. Now we can put this emulator in the MCMC algorithm instead of the model itself. Within the MCMC loop the code proposes a new parameter value from a multivariate normal jump distribution. The corresponding likelihood is approximated by the emulator, and the new parameter value is accepted or rejected based on its posterior probability relative to the current value.
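-
-To make the idea concrete, below is a minimal, self-contained toy sketch of emulator-based MCMC. This is not the ```pda.emulator()``` internals: the function `toy_model` and all numbers are hypothetical, and a simple spline stands in for the Gaussian process emulator that PEcAn actually fits.
-
-```{r echo= TRUE, eval=FALSE}
-set.seed(42)
-toy_model <- function(p) exp(-(p - 3)^2)       # cheap stand-in for an expensive model run
-obs <- toy_model(3.2) + rnorm(50, 0, 0.1)      # synthetic observations
-
-knots  <- seq(0, 6, length.out = 20)           # 1. design over parameter space
-pred   <- sapply(knots, toy_model)             # 2. run the "model" at each knot
-loglik <- sapply(pred, function(m) sum(dnorm(obs, m, 0.1, log = TRUE)))  # 3. likelihoods
-emulator <- splinefun(knots, loglik)           # 4. interpolate the likelihood surface
-
-chain <- numeric(5000); chain[1] <- 1          # 5. Metropolis MCMC on the emulator
-for (i in 2:length(chain)) {
-  prop <- rnorm(1, chain[i - 1], 0.5)          # normal jump distribution
-  if (prop < 0 || prop > 6) {                  # flat prior on [0, 6]
-    chain[i] <- chain[i - 1]
-  } else if (log(runif(1)) < emulator(prop) - emulator(chain[i - 1])) {
-    chain[i] <- prop                           # accept
-  } else {
-    chain[i] <- chain[i - 1]                   # reject
-  }
-}
-hist(chain, main = "Posterior sampled via the likelihood emulator")
-```
-
-Every emulator evaluation here is nearly free, which is what makes thousands of MCMC iterations affordable even when the underlying model is expensive; the price is the approximation error of interpolating between the knots.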
-
-
-#### Outputs from PEcAn’s Parameter Data Assimilation
-
-When the PDA is finished, a number of outputs are automatically produced that are either the same as or similar to posterior outputs that we’ve seen before. These are located in the ```PEcAn_[workflow_id]/pft/*``` output directory and are identified by ```pda.[PFT]_[workflow_id]``` in the filenames:
-
-* posteriors.pda.[PFT]*.pdf shows the posterior distributions resulting from your PDA
-* trait.mcmc.pda.[PFT]*.Rdata contains all the parameter samples contained in the PDA posterior
-* mcmc.pda.[PFT]*.Rdata is essentially the same thing in a different format
-* mcmc.diagnostics.pda.[PFT]*.pdf shows trace plots and posterior densities for each assimilated parameter, as well as pairs plots showing all pairwise parameter correlations.
-
-Together, these files allow you to evaluate whether a completed PDA analysis has converged and how the posterior distributions compare to the priors, and to use the posterior samples in further analyses, including additional PDA.
-If you haven't done so already, take a look at all of the outputs described here.
-
-#### Questions:
-* Do the diagnostic figures indicate that your likelihood at least improved over the course of the analysis?
-* Does the MCMC appear to have converged?
-* Are the posterior distributions well resolved?
-
-#### Post-PDA analyses
-
-In addition to the outputs of the PDA itself, you may want to conduct ensemble and/or sensitivity analyses based on the posteriors of the data assimilation, in order to check progress towards improved model fit and/or changing sensitivity. For this, you need to generate new model runs based on parameters sampled from the updated (by PDA) posterior, which is a simple matter of rerunning several steps of the PEcAn workflow.
-The PDA you ran has automatically produced an updated XML file (`pecan.pda***.xml`) that includes the posterior id to be used in the next round of runs. Locate this file in your run directory and load the file for the post-PDA ensemble/sensitivity analysis (if you already have the `settings` list in your working environment you don't need to re-read the settings):
-
-
-```{r echo= TRUE, eval=FALSE}
-
-  # read post-PDA settings if you don't have them in your working environment
-  # replace the *** with the ensemble id given by the workflow
-  # postPDA.settings <- read.settings(file.path(outdir, paste0("PEcAn_", workflow_id), "pecan.pda***.xml"))
-
-  # Call model specific write.configs
-  postPDA.settings <- run.write.configs(postPDA.settings,
-                        write = postPDA.settings$database$bety$write,
-                        ens.sample.method = postPDA.settings$ensemble$method)
-
-  # Let's save the settings with the new ensemble id
-  PEcAn.settings::write.settings(postPDA.settings, outputfile = paste0('pecan.pda', postPDA.settings$assim.batch$ensemble.id, '.xml'))
-
-  # Start ecosystem model runs, this one takes a while...
-  PEcAn.remote::start.model.runs(postPDA.settings, postPDA.settings$database$bety$write)
-
-  # Get results of model runs
-  get.results(postPDA.settings)
-
-  # Repeat ensemble analysis with PDA-constrained params
-  run.ensemble.analysis(postPDA.settings, TRUE)
-
-  # let's re-load the pre-PDA ensemble outputs
-  load(prePDA_ensemble_output_file)
-  pre_pda_ens <- ensemble.ts[["NEE"]]
-
-  # now load the post-PDA ensemble outputs
-  postPDA_ensemble_output_file <- file.path(outdir, paste0("PEcAn_", workflow_id, "/ensemble.ts.", postPDA.settings$ensemble$ensemble.id, ".NEE.2003.2006.Rdata"))
-  load(postPDA_ensemble_output_file)
-  post_pda_ens <- ensemble.ts[["NEE"]]
-
-  # try changing the window value for daily, weekly, monthly smoothing later
-  # see if this changes your model-data agreement, why?
-  window <- 1 # no smoothing
-  pre_pda <- t(apply(pre_pda_ens, 1, function(x) {
-    tapply(x, rep(1:(length(x)/window + 1), each = window)[1:length(x)],
-           mean, na.rm = TRUE)}))
-  post_pda <- t(apply(post_pda_ens, 1, function(x) {
-    tapply(x, rep(1:(length(x)/window + 1), each = window)[1:length(x)],
-           mean, na.rm = TRUE)}))
-  fobs <- tapply(NEEo, rep(1:(length(NEEo) / window + 1),
-                 each = window)[1:length(NEEo)], mean, na.rm = TRUE)
-
-
-  # save the comparison plots to pdf
-  pdf(file.path(outdir, paste0("PEcAn_", workflow_id), "model.data.comparison.pdf"), onefile = T,
-      paper = 'A4r', height = 15, width = 20)
-
-  # now plot the pre-PDA ensemble similar to the way we did before
-  preCI <- apply(pre_pda, 2, quantile, c(0.025, 0.5, 0.975), na.rm = TRUE)
-  ymin <- min(c(preCI, fobs), na.rm = TRUE)
-  ymax <- max(c(preCI, fobs), na.rm = TRUE)
-  plot(pre_pda[1,], ylim = c(ymin, ymax), lwd = 2, xlab = "time", ylab = "NEE", main = "pre-PDA vs post-PDA", type = "n")
-  prepoly <- 1:dim(preCI)[2]
-  polygon(c(prepoly, rev(prepoly)), c(preCI[3,], rev(preCI[1,])), col = 'khaki', border = NA)
-
-  # plot the post-PDA ensemble
-  postCI <- apply(post_pda, 2, quantile, c(0.025, 0.5, 0.975), na.rm = TRUE)
-  postpoly <- 1:dim(postCI)[2]
-  polygon(c(postpoly, rev(postpoly)), c(postCI[3,], rev(postCI[1,])), col = 'lightblue', border = NA)
-
-  # finally let's add the data and see how we did
-  points(fobs, pch = ".", col = adjustcolor("purple", alpha.f = 0.7))
-  legend("topright", legend = c("Data", "Pre-PDA Model", "Post-PDA Model"), pch = c(15, 15, 15),
-         col = c("purple", "khaki", "lightblue"))
-
-  dev.off()
-
-
-  # Repeat variance decomposition to see how constraints have changed
-  run.sensitivity.analysis(postPDA.settings)
-```
-
-
-Now you can check the new figures produced by your analyses under ```PEcAn_[workflow_id]/pft/*/variance.decomposition.*.pdf``` and ```PEcAn_[workflow_id]/pft/*/sensitivity.analysis.*.pdf```, and compare them to the previous ones. Also, take a look at the comparison of model outputs to data when we run SIPNET with pre- and post-PDA parameter (mean) values under ```PEcAn_[workflow_id]/model.data.comparison.pdf```.
-
-#### Questions:
-* Looking at the ensemble analysis outputs in order (i.e., in order of increasing ID in the filenames), qualitatively how did the model fit to data change over the course of the analysis?
-* Based on the final ensemble analysis, what are the major remaining discrepancies between model and data?
-    + Can you think of the processes / parameters that are likely contributing to the differences?
-    + What would be your next steps towards evaluating or improving model performance?
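-
-As a quick, quantitative handle on the first question, you can also compare the RMSE of the pre- and post-PDA ensemble-mean NEE against the (window-averaged) observations. A small sketch, assuming `pre_pda`, `post_pda`, and `fobs` from the code above are still in your environment:
-
-```{r echo= TRUE, eval=FALSE}
-rmse_pre  <- sqrt(mean((colMeans(pre_pda)  - fobs)^2, na.rm = TRUE))
-rmse_post <- sqrt(mean((colMeans(post_pda) - fobs)^2, na.rm = TRUE))
-c(pre = rmse_pre, post = rmse_post)  # did the PDA reduce model-data error?
-```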
-
-
-
-### State-Variable Data Assimilation
-
-
-#### Objectives:
-* Assimilate tree-ring-estimated NPP & inventory AGB within the SIPNET model in order to:
-    + Reconcile data and model NPP & AGB estimates
-    + Constrain inferences about other ecosystem responses
-
-#### Overview:
-* Initial Run
-* Settings
-* Load and match plot and increment data
-* Estimate tree-level data uncertainties
-* Estimate allometric relationships
-* Estimate stand-level NPP
-* Sample initial conditions and parameters
-* Run Ensemble Kalman Filter
-
-#### Initial Run
-
-Perform a site-level SIPNET run using the following settings
-
-* Site = UNDERC
-* Start = 01/01/1979
-* End = 12/31/2015
-* Met = NARR
-* Check **Brown Dog**
-* When the run is complete, open the pecan.xml and cut-and-paste the **outdir** for later use
-
-#### Settings:
-
-* Open the PEcAn RStudio environment back up.
-
-* Set your working directory to the outdir from above (```setwd(outdir)```) and shift the file browser to that location (Files > More > Go To Working Directory)
-
-* Open up the latest settings file ```pecan.CONFIGS.xml```.
-
-* At the top of the file add the following tags to set the ensemble size
-
-```
-<state.data.assimilation>
-  <n.ensemble>35</n.ensemble>
-  <process.variance>FALSE</process.variance>
-  <sample.parameters>TRUE</sample.parameters>
-  <data>
-    <format_id>1000000040</format_id>
-    <input.id>1000013298</input.id>
-  </data>
-  <state.variables>
-    <variable>
-      <variable.name>NPP</variable.name>
-      <unit>MgC/ha/yr</unit>
-      <min_value>-9999</min_value>
-      <max_value>9999</max_value>
-    </variable>
-    <variable>
-      <variable.name>AbvGrndWood</variable.name>
-      <unit>KgC/m^2</unit>
-      <min_value>0</min_value>
-      <max_value>9999</max_value>
-    </variable>
-    <variable>
-      <variable.name>TotSoilCarb</variable.name>
-      <unit>KgC/m^2</unit>
-      <min_value>0</min_value>
-      <max_value>9999</max_value>
-    </variable>
-    <variable>
-      <variable.name>LeafC</variable.name>
-      <unit>m^2/m^2</unit>
-      <min_value>0</min_value>
-      <max_value>9999</max_value>
-    </variable>
-    <variable>
-      <variable.name>SoilMoistFrac</variable.name>
-      <unit></unit>
-      <min_value>0</min_value>
-      <max_value>9999</max_value>
-    </variable>
-    <variable>
-      <variable.name>SWE</variable.name>
-      <unit>cm</unit>
-      <min_value>0</min_value>
-      <max_value>9999</max_value>
-    </variable>
-    <variable>
-      <variable.name>Litter</variable.name>
-      <unit>gC/m^2</unit>
-      <min_value>0</min_value>
-      <max_value>9999</max_value>
-    </variable>
-  </state.variables>
-  <forecast.time.step>year</forecast.time.step>
-  <start.date>1980/01/01</start.date>
-  <end.date>2015/12/31</end.date>
-</state.data.assimilation>
-```
-
-* Delete the ```<ensemble>``` block from the settings
-
-* In the PEcAn History, go to your PDA run and open ```pecan.pda[UNIQUEID].xml``` (the one PEcAn saved for you AFTER you finished the PDA)
-
-* Cut-and-paste the PDA ```<pfts>``` block into the SDA settings file
-
-* Save the file as ```pecan.SDA.xml```
-
-#### Loading data
-
-* If you have not done so already, clone (new) or pull (update) the PalEON Camp2016 repository
-    + Open a shell under Tools > Shell
-    + ```cd``` to go to your home directory
-    + To clone: ```git clone git@github.com:PalEON-Project/Camp2016.git```
-    + To pull: ```cd Camp2016; git pull https://github.com/PalEON-Project/Camp2016.git master```
-
-* Open the tree-ring data assimilation workflow under Home > pecan > scripts > workflow.treering.R
-
-* Run the script from the start up through the LOAD DATA section
-
-
-#### Estimating tree-level data uncertainties
-
-
-One thing that is critical for data assimilation, whether it is being used to estimate parameters or state variables, is the careful consideration and treatment of the uncertainties in the data itself. For this analysis we will be using a combination of forest plot and tree-ring data in order to estimate stand-level productivity. The basic idea is that we will be using the plot sample of measured DBHs as an estimate of the size structure of the forest, and will use the annual growth increments to project that forest backward in time. Tree biomass is estimated using empirical allometric equations relating DBH to aboveground biomass.
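-
-Conceptually, the backward projection works by subtracting each year's diameter increment (twice the ring width) from the current measured DBH. A naive, uncertainty-free sketch with hypothetical numbers:
-
-```{r echo= TRUE, eval=FALSE}
-dbh_now <- 30.0                     # diameter measured today (cm)
-rw <- c(0.31, 0.28, 0.35, 0.30)     # annual ring widths (cm), oldest first
-dbh_past <- dbh_now - rev(cumsum(rev(2 * rw)))  # radial increment x2 = diameter increment
-dbh_past                            # diameter at the start of each ring year
-```
-
-The point of the state-space model introduced below is to do this same bookkeeping while partitioning, rather than ignoring, the measurement and process uncertainties.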
-There are a number of sources of uncertainty in this approach, and before moving on you are encouraged to think about and write down a few:
-
-_______________________________________
-_______________________________________
-_______________________________________
-_______________________________________
-_______________________________________
-_______________________________________
-_______________________________________
-_______________________________________
-
-Today we will use a statistical model based on the model developed by Clark et al 2007 that partitions out a number of sources of variability and uncertainty in tree-ring and plot data (Fig 1). This model is a Bayesian state-space model that treats the true diameters (D) and increments (X) as latent variables that are connected through a fairly simple mixed-effects process model
-
-$$D_{ij,t+1} = D_{ij,t} + \mu + \alpha_{i} + \alpha_t + \epsilon_{ij,t}$$
-
-where i = individual, j = plot, t = time (year). Each of these terms is represented as a normal distribution, where $\mu$ is a fixed effect (overall mean growth rate) and individual and year are random effects
-
-$$\mu \sim N(0.5,0.5)$$
-$$\alpha_{i} \sim N(0,\tau_{i})$$
-$$\alpha_{t} \sim N(0,\tau_{t})$$
-$$\epsilon_{ij,t} \sim N(0,\tau_{e})$$
-
-The connection between the true (latent) variables and the observations is also represented as normal, with these variances representing measurement error:
-
-$$D_{ij,t}^O \sim N( D_{ij,t},\tau_D)$$
-$$X_{ij,t}^O \sim N( X_{ij,t},\tau_r)$$
-
-Finally, there are five gamma priors on the precisions: one for the residual process error ($\tau_{e}$), two for the random effects on individual ($\tau_{i}$) and time ($\tau_t$),
-and two for the measurement errors on DBH ($\tau_D$) and tree rings ($\tau_r$)
-
-$$\tau_{e} \sim Gamma(a_e,r_e)$$
-$$\tau_{i} \sim Gamma(a_i,r_i)$$
-$$\tau_{t} \sim Gamma(a_t,r_t)$$
-$$\tau_{D} \sim Gamma(a_D,r_D)$$
-$$\tau_{r} \sim Gamma(a_r,r_r)$$
-
-This model is encapsulated in the PEcAn function:
-
-```{r echo= TRUE, eval=FALSE}
-InventoryGrowthFusion(combined, n.iter)
-```
-
-where the first argument is the combined data set formatted for JAGS and the second is the number of MCMC iterations. The model itself is written for JAGS and is embedded in the function. Running InventoryGrowthFusion will run a full MCMC algorithm, so it does take a while to run. The code returns the results as an mcmc.list object, and the next line in the script saves this to the output directory. We then call the function InventoryGrowthFusionDiagnostics to print out a set of MCMC diagnostics and example time series for growth and DBH.
-
-#### Allometric equations
-
-Aboveground NPP is estimated as the increment in annual total aboveground biomass. This estimate is imperfect, but not unreasonable for demonstration purposes. As mentioned above, we will take an allometric approach of scaling from diameter to biomass:
-
-$$Biomass = b_0 \cdot DBH^{b_1}$$
-
-We will generate the allometric equation at the PFT level using another Bayesian model that synthesizes across a database of species-level allometric equations (Jenkins et al 2004). This model has two steps within the overall MCMC loop. First it simulates data from each equation, including both parameter and residual uncertainties, and then it updates the parameters of a single allometric relationship across all observations. The code also fits a second model, which includes a random site effect, but for simplicity we will not be using output from this version.
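-
-To see what the fitted allometry's uncertainties mean in practice, here is a small sketch that propagates parameter and residual draws through the power law above. All numbers are made up for illustration, not values from the actual fit:
-
-```{r echo= TRUE, eval=FALSE}
-nsamp <- 1000
-b0    <- rnorm(nsamp, 0.10, 0.01)   # assumed posterior draws of the intercept
-b1    <- rnorm(nsamp, 2.40, 0.05)   # assumed posterior draws of the exponent
-sigma <- 0.1                        # assumed residual SD on the log scale
-dbh   <- 25                         # an example tree diameter (cm)
-biomass <- b0 * dbh^b1 * exp(rnorm(nsamp, 0, sigma))  # kg/tree, parameter + residual error
-quantile(biomass, c(0.025, 0.5, 0.975))               # biomass uncertainty for one tree
-```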
-Prior to running the model we have to first query the species codes for our PFTs. Next we pass this PFT list to the model, AllomAve, which saves the results to the output directory in addition to returning a summary of the parameters and covariances.
-
-#### Estimate stand-level NPP
-
-If we had no uncertainty in our data or allometric equations, we could estimate the stand aboveground biomass (AGB) for every year by summing over the biomass of all the trees in the plot and then dividing by the plot area. We would then estimate NPP by the difference in AGB between years.
-
-One approach to propagating uncertainties into NPP would be to transform the distribution of DBH for each individual tree and year into a distribution for biomass, then sum over those distributions to get a distribution for AGB, and then subtract the distributions to get the distributions of NPP. However, if we do this we will greatly overestimate the uncertainty in NPP, because we ignore the fact that our increment data has much lower uncertainty than our diameter data. In essence, if we take a random draw from the distribution of AGB in one year and it comes back above average, the AGB is much more likely to also be above average the following year than if we were to do an independent draw from that distribution.
-
-Accounting for this covariance requires a fairly simple change in our approach, and takes advantage of the nature of the MCMC output. The basic idea is that we are going to take a random draw from the full individual x year diameter matrix, as well as a random draw of allometric parameters, and perform the 'zero-error' calculation approach described above. We then create a distribution of NPP estimates out of repeated draws of the full diameter matrix. This approach is encapsulated in the function `plot2AGB`. The argument unit.conv is a factor that combines both the area of the plot and the unit conversion from tree biomass (kg/tree) to stand AGB (Mg/ha). There are two outputs from plot2AGB: a pdf depicting the estimated NPP and AGB (mean and 95% CI) time series, with each page being a plot; and plot2AGB.Rdata, a binary record of the function output that is read into the data assimilation code. The latter is also returned from the function and assigned to the variable “state”. Finally, we calculate the mean and standard deviation of NPP and save this as obs.
-
-#### Build Initial Conditions
-
-The function sample.IC.SIPNET uses the AGB estimate from the previous function in order to initialize the data assimilation routine. Specifically, it samples n.ensemble values from the first time step of the AGB estimate. Embedded in this function are also a number of prior distributions for other state variables, which are also sampled in order to create a full set of initial conditions for SIPNET.
-
-#### Load Priors
-
-The function sample.parameters samples values from the most recent posterior parameter distributions. You can also specify a specific set of parameters to sample from by setting the `<posteriorid>` tag within `<pft>` to the posterior ID you want to use. This is useful to know if you want to go back and run with the meta-analysis posteriors, or if you end up rerunning the meta-analysis and need to go back and specify the parameter data assimilation posteriors instead of the most recent ones.
-
-#### Ensemble Kalman Filter
-
-The function `sda.enkf` will run SIPNET in Ensemble Kalman Filter mode.
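-
-Before walking through the function, it helps to have the generic analysis step of any ensemble Kalman filter in mind. A bare-bones sketch of that linear-Gaussian update, with hypothetical objects rather than the `sda.enkf` internals:
-
-```{r echo= TRUE, eval=FALSE}
-# X: forecast ensemble (n.ensemble x n.state); y: observation vector;
-# R: observation error covariance; H: observation operator (n.obs x n.state)
-mu.f <- colMeans(X)                                    # forecast mean
-Pf   <- cov(X)                                         # forecast covariance
-K    <- Pf %*% t(H) %*% solve(H %*% Pf %*% t(H) + R)   # Kalman gain
-mu.a <- mu.f + K %*% (y - H %*% mu.f)                  # analysis (posterior) mean
-Pa   <- (diag(nrow(Pf)) - K %*% H) %*% Pf              # analysis covariance
-```
-
-The ensemble is then resampled from this analysis distribution before the next forecast step.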
-The output of this function will be all of the run outputs, a PDF of diagnostics, and an Rdata object that includes three lists:
-
-* FORECAST will be the ensemble forecasts for each year
-* ANALYSIS will be the updated ensemble sample given the NPP observations
-* enkf.params contains the prior and posterior mean vector and covariance matrix for each time step.
-
-If you look within this function you will find that much of the format is similar to the pda.mcmc function, but in general it is much simpler. The function begins by setting parameters, opening a database connection, and generating workflow and ensemble IDs. Next we split the SIPNET clim meteorology file up into individual annual files, since we will be running SIPNET one year at a time between updates. Next we perform an initial set of runs starting from the initial states and parameters we described above. In doing so we create the run and output directories, the README file, and the runs.txt file that is read by start.model.runs. Worth noting is that the README and runs.txt don't need to be updated within the forecast loop.
-
-Given this initial run we then enter the forecast loop. Within this loop over years we perform four basic steps. First, we read the output from the latest runs. Second, we calculate the updated posterior state estimates based on the model ensemble prior and the observation likelihood. Third, we resample the state estimates based on these posterior parameters. Finally, we start a new set of runs based on this sample. The sda.enkf function then ends by saving the outputs and generating some diagnostic figures. The first set of these shows the data, forecast, and analysis. The second set shows pairs plots of the covariance structure for the Forecast and Analysis steps. The final set shows the time-series plots for the Analysis of the other state variables produced by SIPNET.
-
-#### Finishing up
-
-The final bit of code in the script will register the workflow as complete in the database. After this is run you should be able to find all of the runs, and all of the outputs generated above, from within the PEcAn web pages.
-
-
-
-### PEcAn: Testing the Sensitivity Analysis Against Observations
-
-#### Author: Ankur Desai
-
-
-#### Flux Measurements and Modeling Course, *Tutorial Part 2*
-This tutorial assumes you have successfully completed Demo01, Demo02, and the modelVSdata tutorial.
-
-#### Introduction
-Now that you have successfully run PEcAn through the web interface and have learned how to do a simple comparison of flux tower observations to model output, let’s start looking at how data assimilation and parameter estimation would work with an ecosystem model.
-
-Before we start a full data assimilation exercise, let’s try something simple – single parameter selection by hand.
-
-+ Open the RStudio server at [URL]/rstudio, or the Amazon URL/rstudio if running on the cloud.
-
-In Demo02, you ran a sensitivity analysis of SIPNET model runs at Niwot Ridge, sampling across quantiles of a parameter prior while holding all others at the median value. The pecan.xml file told PEcAn to run a sensitivity analysis, which simply meant SIPNET was run multiple times with the same drivers, varying parameter values one at a time (while holding all others at their median), with the parameter range also specified in the pecan.xml file (as quantiles, which were then sampled against the BETY database of observed variation in the parameter for species within the specific plant functional type).
-
-Let’s try to compare Ameriflux NEE to SIPNET NEE across all these runs to make a plot of parameter vs. goodness-of-fit. We’ll start with root mean square error (**RMSE**), but then discuss some other tests, too.
-
-#### A. Read in settings object from a run xml
-
-Open up a connection to the BETY database and create a settings object from your xml, just like in the modelVSdata tutorial.
-```{r echo= TRUE, eval=FALSE}
-
-settings <- PEcAn.settings::read.settings("~/output/PEcAn_99000000002/pecan.CONFIGS.xml")
-
-bety <- settings$database$bety
-bety <- dplyr::src_postgres(host = bety$host, user = bety$user, password = bety$password, dbname = bety$dbname)
-```
-
-We read in the pecan.CONFIGS.xml instead of the pecan.xml because the pecan.CONFIGS.xml has already incorporated some of the information from the database that we would otherwise have to query again if we used the pecan.xml.
-
-```{r echo= TRUE, eval=FALSE}
-
-runid <- as.character(read.table(paste(settings$outdir, "/run/", "runs.txt", sep = ""))[1, 1]) # Note: if you are using an xml from a run with multiple ensembles this line will provide only the first run id
-outdir <- paste(settings$outdir, "/out/", runid, sep = "")
-start.year <- as.numeric(lubridate::year(settings$run$start.date))
-end.year <- as.numeric(lubridate::year(settings$run$end.date))
-
-site.id <- settings$run$site$id
-```
-
-
-Back in the Files pane, within the run's output folder, find a folder called *pft/* and within that a folder with the PFT name (such as *temperate.coniferous*). Within that is a PDF file that starts *sensitivity.analysis*. In RStudio, just click on the PDF to view it. You discussed this PDF last tutorial, through the web interface. Here, we see how the model NEE in SIPNET changes with each parameter.
-
-Let’s read that sensitivity output. Navigate back up (*..*) to the *~/output/**RUNDIR**/* folder. Find a series of files that end in “*.RData*”. These files contain the R variables used to make these plots. In particular, there is **sensitivity.output.*.RData** which contains the annual NEE as a function of each parameter quantile. Click on it to load a variable into your environment. There is **sensitivity.results.*.RData** which contains plotting functions and variance decomposition output, which we don't need in this tutorial. And finally, there is **sensitivity.samples.*.RData** which contains the actual parameter values and the RunIDs associated with each sensitivity run.
-
-Click on *sensitivity.samples.*.RData* to load it into your environment, or run the `load()` call below. You should see a set of five new variables (pft.names, trait.names, sa.ensemble.id, sa.run.ids, sa.samples).
-
-Let’s extract a parameter and its sensitivity NEE output from the list sa.samples, which is organized by PFT, and then by parameter.
First, let’s look at a list of PFTs and parameters available:
-```{r echo= TRUE, eval=FALSE}
-load(paste(settings$outdir, "/sensitivity.samples.", settings$sensitivity.analysis$ensemble.id, ".Rdata", sep = ""))
-
-names(sa.samples)
-names(sa.samples$temperate.coniferous)
-```
-
-Now to see the actual parameter values used by the runs, just pick a parameter and type:
-```{r echo= TRUE, eval=FALSE}
-sa.samples$temperate.coniferous$psnTOpt
-```
-
-
-Let’s store that value for future use:
-```{r echo= TRUE, eval=FALSE}
-psnTOpt <- sa.samples$temperate.coniferous$psnTOpt
-```
-
-Now, to see the annual NEE output from the model for a particular PFT and parameter range, try
-```{r echo= TRUE, eval=FALSE}
-load(paste(settings$outdir, paste("/sensitivity.output", settings$sensitivity.analysis$ensemble.id, settings$sensitivity.analysis$variable, start.year, end.year, "Rdata", sep = "."), sep = ""))
-
-sensitivity.output$temperate.coniferous$psnTOpt
-```
-
-You could even plot the two:
-```{r echo= TRUE, eval=FALSE}
-plot(psnTOpt, sensitivity.output$temperate.coniferous$psnTOpt)
-```
-
-What do you notice?
-
-
-Let’s try to read the output from a single run id as you did in the earlier tutorial.
-```{r echo= TRUE, eval=FALSE}
-runids <- sa.run.ids$temperate.coniferous$psnTOpt
-arun <- PEcAn.utils::read.output(runids[1], paste(settings$outdir, "out", runids[1], sep = "/"), start.year = start.year, end.year = end.year, "NEE", dataframe = TRUE)
-
-plot(arun$posix, arun$NEE)
-```
-
-#### B. Now let’s bring in the actual observations
-
-Recall reading Ameriflux NEE in the modelVSdata tutorial.
-```{r echo= TRUE, eval=FALSE}
-File_path <- "~/output/dbfiles/AmerifluxLBL_site_0-772/AMF_US-NR1_BASE_HH_9-1.csv"
-
-File_format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 5000000002) # This matches the file with a premade "format", a template that describes how the information in the file is organized
-site <- PEcAn.DB::query.site(site.id = site.id, bety$con)
-
-obs <- PEcAn.benchmark::load_data(data.path = File_path, format = File_format, time.row = File_format$time.row, site = site, start_year = start.year, end_year = end.year)
-
-obs$NEE[obs$UST < 0.2] <- NA # Apply a U* filter
-
-plottable <- align_data(model.calc = arun, obvs.calc = obs, align_method = "match_timestep", var = "NEE")
-head(plottable)
-```
-
-
-#### C. Finally, compare model to data
-
-In the modelVSdata tutorial, you also compared NEE to the ensemble model run. Here we will do the same, except that we include each sensitivity run.
-
-```{r echo= TRUE, eval=FALSE}
-plot(plottable$NEE.m, plottable$NEE.o)
-abline(0, 1, col = "red")
-```
-
-And remember the formula for RMSE:
-```{r echo= TRUE, eval=FALSE}
-sqrt(mean((plottable$NEE.o - plottable$NEE.m)^2, na.rm = TRUE))
-```
-
-All we need to do to go beyond this is to make a loop that reads in each sensitivity run's NEE based on runids, calculates RMSE against the observations, and stores it in an array, by combining the steps above in a for loop. Make sure you change the directory names and year to your specific run.
-```{r echo= TRUE, eval=FALSE}
-rmses <- rep(0, length(runids))
-for(r in 1:length(runids)){
-  arun <- read.output(runids[r], paste(settings$outdir, "out", runids[r], sep = "/"), 2004, 2004, "NEE", dataframe = TRUE)
-  plottable <- align_data(model.calc = arun, obvs.calc = obs, align_method = "match_timestep", var = "NEE")
-  rmses[r] <- sqrt(mean((plottable$NEE.o - plottable$NEE.m)^2, na.rm = TRUE))
-}
-
-rmses
-```
-
-Let’s plot that array
-```{r echo= TRUE, eval=FALSE}
-plot(psnTOpt, rmses)
-```
-
-Can you identify a minimum (if there is one)? If so, is there any reason to believe this is the “best” parameter? Why or why not? Think about all the other parameters.
-
-Now that you have the hang of it, here are a few more things to try:
-
-1. Try a different error function, given actual NEE uncertainty. You learned earlier that uncertainty in half-hourly observed NEE is not Gaussian. This makes RMSE not the correct measure of goodness-of-fit. Go to *~/pecan/modules/uncertainty/R*, open *flux_uncertainty.R*, and click on the *source* button in the program editing pane.
-
-Alternatively, you can source the function from the console using:
-```{r echo= TRUE, eval=FALSE}
-source("~/pecan/modules/uncertainty/R/flux_uncertainty.R")
-```
-
-Then you can run:
-```{r echo= TRUE, eval=FALSE}
-unc <- flux.uncertainty(plottable$NEE.o, QC = rep(0, 17520))
-plot_flux_uncertainty(unc)
-```
-The figure shows you uncertainty (err) as a function of NEE magnitude (mag). How might you use this information to change the RMSE calculation?
-
-2. Try a few other parameters. Repeat the above steps but with a different parameter. You might want to select one from the sensitivity PDF that has a large sensitivity, or one from the variance decomposition that is poorly constrained.
-
-
-
-## Advanced User Guide {#advanced-user}
-
-- [Workflow curl submission](#web-curl-submission)
-
-
-
-### Submitting Workflow from Command Line {#web-curl-submission}
-
-This is how you can submit a workflow from the command line through the PEcAn web interface. This will use curl to submit all the required parameters to the web interface and trigger a run.
-
-```bash
-# the host where the model should run
-# never use remote sites since you will need to pass your username/password and that WILL be stored
-hostname=pecan.vm
-# the site id where to run the model (NIWOT in this case)
-siteid=772
-# start date and end date, / need to be replaced with %2F or use - (NOT TESTED)
-start=2004-01-01
-end=2004-12-31
-
-# id of the model you want to run; the rest of this section's parameters depend on the model selected (SIPNET 136)
-modelid=5000000002
-# PFT selected (we should just use a number here)
-# NOTE: the square brackets are needed and will need to be escaped with a \ if you call this from the command line
-pft[]=temperate.coniferous
-# initial pool condition (-1 means nothing is selected)
-input_poolinitcond=-1
-# met data
-input_met=99000000006
-
-# variables to collect
-variables=NPP,GPP
-# ensemble size
-runs=10
-# use sensitivity analysis
-sensitivity=-1,1
-
-# redirect to the edit pecan.xml file
-pecan_edit=on
-# redirect to edit the model configuration files
-model_edit=on
-# use browndog
-browndog=on
-```
-
-For example the following will run the above workflow. Using -v in curl will show verbose output (needed) and the grep will make sure it only shows the redirect.
This will show the actual workflowid: - -``` -curl -s -v 'http://localhost:6480/pecan/04-runpecan.php?hostname=pecan.vm&siteid=772&start=2004-01-01&end=2004-12-31&modelid=5000000002&pft\[\]=temperate.coniferous&input_poolinitcond=-1&input_met=99000000006' 2>&1 | grep 'Location:' -< Location: 05-running.php?workflowid=99000000004 -``` - -In this case you can use the browser to see progress, or use the following to see the status: - -``` -curl -s 'http://localhost:6480/pecan/dataset.php?workflowid=99000000004&type=file&name=STATUS' -TRAIT 2017-12-13 08:56:56 2017-12-13 08:56:57 DONE -META 2017-12-13 08:56:57 2017-12-13 08:57:13 DONE -CONFIG 2017-12-13 08:57:13 2017-12-13 08:57:14 DONE -MODEL 2017-12-13 08:57:14 2017-12-13 08:57:15 DONE -OUTPUT 2017-12-13 08:57:15 2017-12-13 08:57:15 DONE -ENSEMBLE 2017-12-13 08:57:15 2017-12-13 08:57:16 DONE -FINISHED 2017-12-13 08:57:16 2017-12-13 08:57:16 DONE -``` - -Or to show the output log: - -``` -curl -s 'http://localhost:6480/pecan/dataset.php?workflowid=99000000004&type=file&name=workflow.Rout' - -R version 3.4.3 (2017-11-30) -- "Kite-Eating Tree" -Copyright (C) 2017 The R Foundation for Statistical Computing -Platform: x86_64-pc-linux-gnu (64-bit) - -R is free software and comes with ABSOLUTELY NO WARRANTY. -You are welcome to redistribute it under certain conditions. -Type 'license()' or 'licence()' for distribution details. - -R is a collaborative project with many contributors. -.... -``` - - - -# Basic Web workflow {#basic-web-workflow} - -This chapter describes the major steps of the PEcAn web-based workflow, which are as follows: - -- [Model and site selection](#web-site-model) -- [Model configuration](#web-model-config) -- Run execution -- TODO! -- Results -- TODO! -- Interactive visualizations -- TODO! - -We recommend that all new users begin with [PEcAn Hands-On Demo 01: Basic Run]. The documentation below assumes you are already familiar with how to navigate to PEcAn's interactive web interface for running models. - -## Site and model selection {#web-site-model} - -This page is used to select the model to run and the site at which you would like to run that model. - -**NOTE:** If this page does not load for you, it may be related to a known Google Maps API key issue. See [issue #1269][issue-1269] for a possible solution. - - -[issue-1269]: https://github.com/PecanProject/pecan/issues/1269 - -### Selecting a model - -1. On the **Select Host** webpage **use the Host pull-down menu to select the server you want to run on**. PEcAn is designed to allow models to be run both locally and on remote high-performance computing (HPC) resources (i.e. clusters). We recommend that users start with local runs. More information about connecting your PEcAn instance to a cluster can be found on the [Remote execution with PEcAn] page. - -2. Next, **select the model you want to run under the Model pull-down menu**. The list of models currently supported by PEcAn, along with information about these models, is available on the [PEcAn Models] page. - - i) If a PEcAn-supported model is not listed, this is most likely because the model has not been installed on the server. The PEcAn team does not have permissions to redistribute all of the models that are coupled to it, so you will have to install some PEcAn-compatible models yourself. Please consult the PEcAn model listing for information about obtaining and installing different models. 
Once the model is installed and you have added the location of the model executable to Bety (see [Adding An Ecosystem Model]), your model should appear on the PEcAn **Select Host** page after you refresh the page.

    ii) If you would like to add a new model to PEcAn please consult our guide for [Adding an Ecosystem Model] and contact the PEcAn team for assistance.

3. If selecting your model causes your **site to disappear** from the Google Map, that means the site exists but there are no drivers for that model-site combination registered in the database.

    i) Click the "Conversion" checkbox. If your site reappears, that means PEcAn should be able to automatically generate the required inputs for this site by converting from existing input files in other formats.

    ii) If the site still does not reappear, that means there are required input files for that model-site combination that PEcAn cannot autogenerate. This may be because the model has unique input requirements or because it has not yet been fully coupled to the PEcAn input processing workflow. Go to the troubleshooting section under [Selecting a site] for more information on diagnosing what drivers are missing.

### Selecting a site

### Site Groups

1. PEcAn provides the option of organizing sites into groups to make them easier to find and easier to run as a group. We have pre-loaded a number of common research networks (e.g., FLUXNET, LTER, NEON), but you are free to create new site groups through Bety.

2. If you are searching for a site that is not part of an existing site group, or you are unsure which site group it belongs to, select "All Sites" to see all sites in Bety. Note that this may take a while to render.

### Using existing sites

1. **Find the site on the map** The simplest way of determining if a site exists in PEcAn is through the Google Map interface of the web-based workflow. You'll want to make sure that the "Site Group" is set to "All Sites" and the "Model" is set to "All Models".

2. **Find the site in BETY** If the site is not on the map, it may still be in Bety but with insufficient geographic information. To locate the site in Bety, first login to your local version of the BETY database. If using the VM, navigate to `localhost:6480/bety` and login with username `bety` and password `illinois`. Then, navigate to `Data > Sites` and use the "Search" box to search for your site. If you **do** find your site, click "Edit" and add geographic information so that the site will show up on the map. Also, note that the site ID number shows up in the URL for the "Show" or "Edit" pages. This ID is often useful to know, for example when editing a PEcAn settings file by hand. If you did not find your site, follow the instructions below to add a site.

### Adding a new site

(TODO: Move most of this out)

1. Log into Bety as described above.

2. **Pick a citation for your site** Each site requires an associated "citation" that must be added before the site itself is added. First, navigate to "Data > Citations" and use the "Search" box to see if the relevant citation already exists. If it does, click the check mark under "Actions" to proceed to site creation.

* **To create a new citation**, click the **New Citation** button, fill in the fields, and then click "Create". The "URL" field should contain the web address that takes you to this publication on the publisher's website. The "PDF" field should be the full web address to a PDF for this citation.
* Note that our definition of a citation is flexible, and a citation need not be a peer-reviewed publication. Most of the fields in "New Citation" can be left blank, but we recommend at least adding a descriptive title, such as "EBI Farm Field Data", and a relevant contact person as the "Author".

3. Once the Citation is created or selected, this should automatically take you to the Sites page and list any Sites already associated with this citation. To create a new site click the **New Site** button.

4. When creating a new site, the most important fields are the **Site name** and coordinates (**latitude** and **longitude**). The coordinates can be entered by hand or by clicking on the site location on the Google Map interface. All other information is optional, but can be useful for searching and indexing purposes.

5. When you are done click **Create**. At this point, once the PEcAn site-level page is refreshed, the site should automatically appear.

### Troubleshooting

#### My site shows up when I don't have any model selected, but disappears once I select the model I want to run

Selecting a model will cause PEcAn to filter the available sites based on whether they possess the required Inputs for a given model (e.g. meteorology). To check what Inputs are missing for a site, point your browser to the pecan/checksite.php webpage (e.g. localhost:6480/pecan/checksite.php). This page looks virtually identical to the site selection page, except that it has a *Check* button instead of *Prev* and *Next*. If you select a Machine, Model, and Site and then click *Check*, the page should return a list of what Inputs are missing (listing both the name and the Format ID number). Don't forget that it's possible for PEcAn to have required Inputs in its database, but just not have them for the Machine where you want to run.

To see more about what Inputs a given model can accept, and which of those are required, take a look at the MODEL_TYPE table entry in the database (e.g. go to `localhost:6480/bety`; select `Runs > Model Type`; and then click on the model you want to run).

For information about loading missing Inputs into the database visit [Input records in BETY], and also read the rest of the pages under this section, which will provide important information about the specific classes of Inputs (e.g. meteorology, vegetation, etc).

Finally, we are continually developing and refining workflows and standards for processing Input data in a model-agnostic way. The details about what Inputs can be processed automatically are discussed input-by-input in the sections below. For those looking to dive into the code or troubleshoot further, these conversions are ultimately handled under the `PEcAn.workflow::do_conversions` workflow module.
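If you prefer to check the MODEL_TYPE requirements from R rather than the web interface, here is a hedged sketch of querying them directly. The table and column names follow the BETY schema as we understand it (`modeltypes`, `modeltypes_formats`, `formats`) and may differ on your server; the model name is just an example.

```
# Sketch: list the Inputs a MODEL_TYPE expects, and which are required.
library(PEcAn.DB)

con <- db.open(settings$database$bety)  # settings from PEcAn.settings::read.settings()
inputs <- db.query(paste(
  "SELECT mt.name AS modeltype, mf.tag, mf.required, f.name AS format",
  "FROM modeltypes_formats mf",
  "JOIN modeltypes mt ON mt.id = mf.modeltype_id",
  "JOIN formats f ON f.id = mf.format_id",
  "WHERE mt.name = 'SIPNET'"), con)
print(inputs)  # rows with required = TRUE must exist for your Machine/Site
db.close(con)
```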
## Model configuration {#web-model-config}

This page is used for basic model configuration, including when your model will run and what input data it will use.

### Choosing meteorology

Once a Machine, Model, and Site have been selected, PEcAn will take you to the Input selection page. From this page you will select what Plant Functional Type (PFT) you want to run at a site, the start and end dates of the run, and various Input selections. The most common of these across all models is the need to specify meteorological forcing data. The exact name of the menu item for meteorology will vary by model because all of the Input requirements are generated individually for each model based on the MODEL_TYPE table.

In general there are 3 possible cases for meteorology:

* PEcAn already has driver files in its database
* PEcAn does not have drivers, but can generate them from publicly available data
* You need (or want) to upload your own drivers

The first two cases will appear automatically in the pull-down menu. For meteorological files that already exist you will see the date range that's available. By contrast, met that can be generated will appear as "Use `<source>`", where `<source>` is the origin of the data (e.g. "Use Ameriflux" will use the micromet from an Ameriflux eddy covariance tower, if one is present at the site).

If you want to upload your own met data this can be done in three ways.

1. The default way to add met data is to incorporate it into the overall meteorological processing workflow. This is preferred if you are working with a common meteorological data product that is not yet in PEcAn's workflow. This case can be divided into two special cases:

    i) Data is in a common MIME-type that PEcAn already has a converter for (e.g. CSV). In this case you'll want to create a new Format record for the meta-data so that the existing converter can process this data. See documentation for [Creating a new Format record in BETY] for more details.

    ii) Data is in a more complicated format or interactive database, but large/useful enough to warrant a custom conversion function. Details on creating custom met conversions are in the [Input Conversion] section, though at this stage you would also be strongly encouraged to contact the PEcAn development team.

2. The second-best way is to upload data in PEcAn's standard meteorological format (netCDF files, CF metadata). See [Input Conversion] for details about variables and units. From this standard, PEcAn can then convert the file to the model-specific format required by the model you have chosen. This approach is preferred for a rare or one-off meteorological file format, because PEcAn will also be able to convert the file into the format required by any other model as well.

3. The last option for adding met data is to add it in a model-specific format, which is often easiest if you've already been running your model at a site and are just switching to using PEcAn.

### Met workflow

In a nutshell, the PEcAn met workflow is designed to reduce the problem of converting *n* possible met inputs into *m* possible model formats, which would require *n x m* conversion functions as well as numerous custom functions for downscaling, gap filling, etc. Instead, PEcAn works with a single met standard, and thus requires *n* conversion functions, one for converting each data source into the PEcAn standard, and then *m* conversion functions for converting from that standard to what an individual model requires. For a new model joining the PEcAn system the burden is particularly low -- writing one conversion function provides access to *n* inputs. Similarly, PEcAn performs all other operations/manipulations (extracting a site, downscaling, gap filling, etc.) within the PEcAn standard, which means these operations only need to be implemented once.
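To make the *n + m* design concrete, here is a toy sketch (not PEcAn's actual implementation) of the naming-convention dispatch it relies on. `met2CF.MET` and `met2model.SIPNET` here are stand-ins defined locally, illustrating how a converter can be selected by pasting together a function name.

```
# Toy sketch: one converter per data source into the standard,
# one converter per model out of the standard, dispatched by name.
met2CF.MET <- function(x) { x$converted <- TRUE; x }                 # hypothetical product converter
met2model.SIPNET <- function(x) paste("SIPNET driver built from", x$src)  # hypothetical model converter

raw <- list(src = "MET", converted = FALSE)
standard <- do.call(paste0("met2CF.", "MET"), list(raw))       # product -> PEcAn standard
driver   <- do.call(paste0("met2model.", "SIPNET"), list(standard))  # standard -> model format
driver
```

Adding a new product or model then means writing one new function with the right name, not a full row or column of converters.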
Consider a generic met data product named MET for simplicity. PEcAn will use a function, download.MET, to pull data for the selected year from a public data source (e.g. Ameriflux, North American Regional Reanalysis, etc). Next, PEcAn will use a function, met2CF.MET, to convert the data into the PEcAn standard. If the data is already at the site scale it will then gapfill the data. If the data is a regional or global data product, PEcAn will then permute the data to allow easier site-level extraction, then it will extract data for the requested site and date range. Modules to address the temporal and spatial downscaling of meteorological data products, as well as their uncertainties, are in development but not yet part of the operational workflow. All of these functions are located within the data.atmosphere module.

Once data is in the standard format and processed, it will be converted to the model-specific format using a met2model.MODEL function (located in that MODEL's module).

More detailed information on how PEcAn processes inputs can be found on our [Input Conversion] page.

### Troubleshooting meteorological conversions

At the moment, most of the issues below address possible errors that the Ameriflux meteorology workflow might report.

#### Could not do gapfill ... The following variables have NA's

This error message means that there were gaps in the downloaded data, for whatever variables were listed, which were larger than the current algorithm could fill. Particularly common is missing radiation or PAR data, as Ameriflux frequently converts nighttime data to NULL, and work is in progress to detect this based on solar geometry. Also common are incomplete years (first or last year of tower operations).

#### Could not get information about `<site>`. Is this an Ameriflux site?

This message occurs when PEcAn believes that a site is part of Ameriflux (because it was listed on the Ameriflux or FLUXNET webpage and has a US-* site code), but no data is present on the Ameriflux server. The most common reasons for this are that you have selected a site that has not submitted data to Ameriflux yet (or whose data hasn't been processed yet), or that you have selected a year outside the tower's operational period. Visit [Ameriflux](http://ameriflux.lbl.gov/sites/site-list-and-pages/) and [FLUXNET](http://fluxnet.ornl.gov/site_status) for lists of available site years.

#### Could not download data for `<site>` for the year `<year>`

This is similar to the previous error, but in this case PEcAn did find data for the site listed, just not for the year requested. This can usually be fixed by altering the years of the run to match those with available data.

#### I could not find the requested var (or dimvar) in the file!

PEcAn could not find a required variable within the downloaded file. Most likely this is because that variable is not measured at this site. The most common cause of failure is the absence of atmospheric pressure data (PRESS), but since most models have a low sensitivity to this variable we are working on methods to estimate this from other sources.
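When debugging a gapfill failure, it can help to count the missing values yourself. A minimal sketch using the `ncdf4` package follows; the file name is hypothetical and should point at the downloaded CF-standard met file for your site and year.

```
# Sketch: count NA values per variable in a CF met file.
library(ncdf4)

nc <- nc_open("US-Bar.2005.nc")
na.counts <- sapply(names(nc$var), function(v) sum(is.na(ncvar_get(nc, v))))
nc_close(nc)

na.counts[na.counts > 0]  # variables with gaps; very large counts explain gapfill failures
```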
a `soils` PFT).

## Selecting Plant Functional Types (PFTs) and other parameter groupings

### Using existing PFTs

PEcAn does not automatically know what vegetation types are present at your study site so you need to select the PFT.

Some models, such as ED2 and LINKAGES, support competition among multiple PFTs and thus you are encouraged to highlight multiple choices. Other models, such as SIPNET and DALEC, only support one PFT at a site.

Many models also have parameters that control non-vegetation processes (e.g. soil biogeochemistry and hydrology). PEcAn allows users to assign these parameters to functional groups as well (e.g. a `soils` PFT).

### Creating new PFTs

To modify or add a new Plant Functional Type (PFT), or to change a PFT's priors, navigate on the grey menu bar to Data > PFTs.

1. To add a new PFT, click “new PFT” at the top and enter a name and description. (Hint: we're trying to name PFTs based on model.biome.pft; ED2 is the default model if one isn't specified.)

2. To add new species to a PFT click on [+] View Related Species and type the species, genus, or family you are looking for into the Search box. Click on the + to add.

3. To remove a species from a PFT, click on [+] View Related Species and click on the X of the species you want to remove from the PFT.

4. To remove a prior, click [-] View Related Prior and click on the X of the variable whose prior you want to remove. This will cause the parameter to be excluded from all analyses (meta-analysis, sensitivity analysis, etc.) and revert to its default value.

5. To add a prior, choose one from the white box of priors on the right.

6. To view the specification of a prior, or to add a new prior, click BETY-DB > Priors and enter the information on the variable, distribution name, distribution parameters, etc. N is the sample size underlying the prior specification (0 is ok for uninformative priors).

7. You can also go to Data > Variables in order to use the search function to find an existing variable (or create a new one). Please try not to create new variables unnecessarily (e.g. changes of variable name or units to what your model uses are handled internally, so you want to find the trait with the correct MEANING).

Additional information on adding PFTs, Species, and Priors can be found in [Adding an Ecosystem Model].

### Choosing initial vegetation

On the Input Selection webpage, in addition to selecting PFTs, start & end dates, and meteorology, many models also require some way of specifying the initial conditions for the vegetation, which may range from setting the aboveground biomass and LAI up to detailed inventory-like data on species composition and stand structure.

At the moment, PEcAn has three cases for initial conditions:

1. If files already exist in the database, they can simply be selected from the menu. For ED2, there are 3 different veg files (site, pss, css) and it is important that you select a complete set, not mix and match.

2. If files don't exist they can be uploaded following the instructions in [Create a database file record for the input data].

3. Automated vegetation initial condition workflow

As with meteorology, PEcAn is working to develop a model-agnostic workflow for converting various sources of vegetation data to common standards, developing common processing tools, and then writing out to model-specific formats. This process is at a much earlier stage than the meteorology workflow, as we are still researching what the options are for standard formats, but it ultimately aims to be much broader in scope, considering not just plot inventory data but also historical documentation, paleoecological proxies, satellite remote sensing (e.g. LANDSAT), airborne hyperspectral imagery, and active remote sensing (Lidar, Radar).

At the moment, what is functional is a prototype workflow that works for inventory-based vegetation data. This data can come from either files that have been registered with the BETY Inputs and Formats tables or can be queried from the USFS Forest Inventory and Analysis (FIA).
For more information visit Section 13.1.2.2 Vegetation Data.

### US FIA

This tool works with an internal copy of the FIA that is uploaded to a PostgreSQL database alongside BETY; however, for space reasons this database does not ship with the PEcAn VM. To turn this feature on:

1. Download and install the FIA database. Instructions are in [Installing data for PEcAn].
2. For web-based runs, specify the database settings in the [config.php](https://github.com/PecanProject/pecan/blob/master/web/config.example.php)
3. For R-based runs, specify the database settings in [THE PEcAn XML]

More detailed information on how PEcAn processes inputs can be found on our [Input Conversion] page.

### Spin up

A number of ecosystem models are typically initialized by spinning up to steady state. At the moment PEcAn doesn't handle spin up automatically (e.g. looping met, checking for stability), but there are various ways to achieve a spin-up within the system.

**Option 1:** If there are model-specific settings in a model's settings/config file, then that file can be accessed by clicking on the **Edit model config** check box. If this box is selected then PEcAn will pause the site run workflow after it has generated your model config file, but before it runs the model, and give you an opportunity to edit the file by hand, allowing you to change any model-specific spin up settings (e.g. met recycling, spin-up length).

**Option 2:** Set start_year very early and set the met drivers to be a long time series (e.g. PalEON, something custom uploaded to Inputs).

**Option 3:** In the MODEL_TYPE table, add your model's restart format as an optional input, modify the model-specific write.config function to use that restart, and then load a previous spin-up to the Inputs table.

Beyond these options, we hope to eventually develop more general, model-agnostic tools for spin up. In particular, we have started to explore the accelerated spin-up and semi-analytical techniques being developed by Yiqi Luo's lab.

### Selecting a soils product

Many models have requirements for soils information, which may include: site-specific soil texture and depth information; soil biogeochemical initial conditions (e.g. soil carbon and nitrogen pools); soil moisture initial conditions; and soil thermal initial conditions.

As with [Choosing initial vegetation], we eventually hope to develop data standards, soils workflows, and spin-up tools, but at the moment this workflow is in the early stages of development. Model requirements need to be met by [Creating a new Input record in BETY] or by using files that have already been uploaded. Similar to met, we recommend that this file be in the PEcAn-standard netCDF described below, but model-specific files can also be registered.

### Soil texture, depth, and physical parameters

A PEcAn-standard netCDF file format exists for soil texture, depth, and physical parameters, using PEcAn standard names that are largely a direct extension of the CF standard.

The easiest way to create this file is with the PEcAn R function `soil2netcdf` as described in the Soil Data section of the Advanced Users Guide.

A table of standard names and units can be listed using `PEcAn.data.land::soil.units()` with no arguments.

```{r, echo = FALSE, eval = FALSE}
knitr::kable(PEcAn.data.land::soil.units())
```

More detailed information on how PEcAn processes inputs can be found on our [Input Conversion] page.
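As a short illustration of the `soil2netcdf` call described above, here is a sketch of writing a three-layer soil profile to a PEcAn-standard file. The texture fractions and depths are made-up values.

```
# Sketch: write a PEcAn-standard soil file (made-up values for three layers).
soil.data <- list(volume_fraction_of_sand_in_soil = c(0.3, 0.4, 0.5),
                  volume_fraction_of_clay_in_soil = c(0.3, 0.3, 0.3),
                  soil_depth = c(0.2, 0.5, 1.0))

PEcAn.data.land::soil2netcdf(soil.data, "soil.nc")
```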
### Other model inputs

Finally, any other model-specific inputs (e.g. N deposition, land use history, etc.) should be met by [Creating a new Input record in BETY] or by using files that have already been uploaded.

# More on the PEcAn Web Interface {#intermediate-user}

This section will provide information to those wanting to take advantage of PEcAn's customizations from the web interface.

* [Additional web configuration] - Advanced options available from the web interface
  - [Brown Dog](#browndog)
  - [Sensitivity and ensemble analyses][Advanced Setup][TODO: Under construction...]
  - [Editing model configurations][TODO: Under construction...]
* [Settings-configured analyses] - Analyses only available by manually editing `pecan.xml`
  - [Parameter data assimilation (PDA)](#pda)
  - [State data assimilation (SDA)](#sda)
* [Remote execution with PEcAn](#pecan-remote) - Running analyses and generally working with external machines (HPC) in the context of PEcAn.

## Additional web configuration

Additional settings for web configuration:

- [Web interface setup]{#intermediate-web-setup}
- [Brown Dog]{#browndog}
- [Advanced setup]{#intermediate-advanced-setup}
  - [Sensitivity analysis] (TODO)
  - [Uncertainty analysis] (TODO)
- [Editing model configuration files]{#intermediate-model-config}

### Web interface setup {#intermediate-web-setup}

There are a few options that you can change via the web interface.

To visit the configuration page, either click on the setups link on the introduction page or append `/setups/` to your PEcAn URL.

The available configuration options are:

1. **Database configuration** : BETYdb (Biofuel Ecophysiological Traits and Yields database) configuration details; these can be edited as needed.

2. **Browndog configuration** : Brown Dog configuration details, used to connect to Brown Dog. It is included by default in the VM.

3. **FIA Database** : FIA (Forest Inventory and Analysis) database configuration details; can be used to add additional data to models.

4. **Google MapKey** : Google Maps API key, used by PEcAn to access Google Maps.

5. **Change Password** : Brief instructions for changing the VM user password. (This won't work if you are using the Docker image.)

6. **Automatic Sync** : If ON, the database will be synced between the local machine and the remote servers. **Still under testing; this might be buggy.**

Work on additional editing features is ongoing; this page will be updated as new configuration options become available.

### Brown Dog {#browndog}

The Browndog service provides PEcAn with access to large and diverse sets of data at the click of a button in the format that PEcAn needs. By clicking the checkbox you will be using the Browndog Service to process data.

For more information regarding meteorological data, check out [Available Meteorological Drivers].

More information can be found at the [Browndog website](http://browndog.ncsa.illinois.edu/).

### Advanced Setup {#intermediate-advanced-setup}

(TODO: Under construction...)

### Editing model configurations {#intermediate-model-config}

(TODO: Under construction...)

## Settings-configured analyses

These analyses can be run through the web interface, but lack graphical interfaces and currently can only be configured through the XML settings. To run these analyses use the **Edit pecan.xml** checkbox on the Input configuration page. Eventually, these modules will be integrated into the web user interface.
- [Parameter Data Assimilation (PDA)](#pda)
- [State Data Assimilation (SDA)](#sda)
- [MultiSettings](#multisettings)
- [Benchmarking](#benchmarking)

(TODO: Add links)

### Parameter data assimilation (PDA) {#pda}

All functions pertaining to Parameter Data Assimilation are housed within: **pecan/modules/assim.batch**.

For a detailed usage of the module, please see the vignette under **pecan/modules/assim.batch/vignettes**.

A hierarchical version of the PDA is also implemented; for more details, see the `MultiSitePDAVignette` [package vignette](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd) and the function-level documentation.

#### **pda.mcmc.R**
This is the main PDA code. It performs Bayesian MCMC on model parameters by proposing parameter values, running the model, calculating a likelihood (between model output and supplied observations), and accepting or rejecting the proposed parameters (Metropolis algorithm). Additional notes:

* The first argument is *settings*, followed by others that all default to *NULL*. *settings* is a list used throughout PEcAn, which contains all the user options for whatever analyses are being done. The easiest thing to do is just pass that whole object all around the PEcAn code and let different functions access whichever settings they need. That's what a lot of the rest of the PEcAn code does. But the flexibility to override most of the relevant settings in *settings* is there by providing them directly as arguments to the function.

* The *if(FALSE)...* : If you're trying to step through the function you probably will have the *settings* object around, but those other variables will be undefined. If you set them all to NULL then they'll be ignored without causing errors. It is there for debugging purposes.

* The next step calls pda.settings(), which is in the file pda.utils.R (see below). It checks whether any settings are being overridden by arguments, and in most cases supplies default values if it can't find either.

* In the MCMC setup section
  * The code is set up to allow you to start a new MCMC chain, or to continue a previous chain as specified in settings.
  * The code writes a simple text file of parameter samples at every iteration, which lets you get some results and even re-start an MCMC that fails for some reason.
  * The code has adaptive jump distributions. So you can see some initialization of the jump distributions and associated variables here.
  * Finally, note that after all this setup a new XML settings file is saved. The idea is that the original pecan.xml you create is preserved for provenance, and then periodically throughout the workflow the settings (likely containing new information) are re-saved with descriptive filenames.

* MCMC loop
  * Periodically adjust the jump distribution to bring the acceptance rate closer to the target.
  * Propose new parameters one at a time. For each:
    * First, note that PEcAn may be handling many more parameters than are actually being targeted by PDA. PEcAn puts priors on any variables it has information for (in the BETY database), and then these get passed around throughout the analysis and every step (meta-, sensitivity, ensemble analyses, etc.). But for PDA, you specify a separate list of probably far fewer parameters to constrain with data. These are the ones that get looped over and varied here. The distinction between all parameters and the subset targeted by PDA is handled in the setup code above.
    * First, a new value is proposed for the parameter of interest.
    * Then, a new model run is set up, identical to the previous except with the new proposed value for the one parameter being updated on this run.
    * The model run is started, and outputs collected after waiting for it to finish.
    * A new likelihood is calculated based on the model outputs and the observed dataset provided.
    * The standard Metropolis acceptance criterion is used to decide whether to keep the proposed parameter.
  * Periodically (at an interval specified in settings), a diagnostic figure is saved to disk so you can check on progress.
  * Currently this works only for NEE.

#### **pda.mcmc.bs.R**
This file is basically identical to pda.mcmc.R, but rather than propose parameters one at a time, it proposes new values for all parameters at once ("bs" stands for "block sampling"). You choose which option to use by specifying settings$assim.batch$method:
  * "bruteforce" means sample parameters one at a time
  * "bruteforce.bs" means use this version, sampling all parameters at once
  * "emulator" means use the emulated-likelihood version

#### **pda.emulator**
This version of the PDA code again looks quite similar to the basic "bruteforce" one, but its mechanics are very different. The basic idea is, rather than running thousands of model iterations to explore parameter space via MCMC, run a relatively smaller number of runs that have been carefully chosen to give good coverage of parameter space. Then, basically interpolate the likelihood calculated for each of those runs (actually, fit a Gaussian process to it), to get a surface that "emulates" the true likelihood. Now, perform regular MCMC (just like the "bruteforce" approach), except instead of actually running the model on every iteration to get a likelihood, just get an approximation from the likelihood emulator. Since the latter step takes virtually no time, you can run as long of an MCMC as you need at little computational cost, once you have done the initial model runs to create the likelihood emulator.

#### **pda.mcmc.recover.R**
This function is for recovering a failed PDA MCMC run.
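To make the emulator idea above concrete, here is a self-contained toy in base R. It is not `pda.emulator` itself (which fits a Gaussian process to real model runs and uses a Latin hypercube design); the spline interpolation and one-parameter "model" here are stand-ins.

```
# Toy sketch: pay for a few "model runs", then MCMC on the cheap emulated surface.
set.seed(42)
expensive.llik <- function(p) dnorm(p, mean = 0.6, sd = 0.1, log = TRUE)  # stand-in for a model run

design <- seq(0, 1, length.out = 10)             # design points ("knots")
llik.at.knots <- sapply(design, expensive.llik)  # the only expensive evaluations
emulator <- splinefun(design, llik.at.knots)     # stand-in for the GP fit

n.iter <- 5000
samples <- numeric(n.iter)
p <- 0.5
for (i in seq_len(n.iter)) {
  p.new <- p + rnorm(1, 0, 0.05)                 # propose
  if (p.new > 0 && p.new < 1 &&
      log(runif(1)) < emulator(p.new) - emulator(p)) p <- p.new  # Metropolis accept
  samples[i] <- p
}
hist(samples)  # concentrates near 0.6 with no further "model runs"
```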
#### **pda.utils.R**
This file contains most of the individual functions used by the main PDA functions (pda.mcmc.*.R).

  * *assim.batch* is the main function PEcAn calls to do PDA. It checks which method is requested (bruteforce, bruteforce.bs, or emulator) and calls the appropriate function described above.
  * *pda.settings* handles settings. If a setting isn't found, the code can usually supply a reasonable default.
  * *pda.load.priors* is fairly self-explanatory, except that it handles a lot of cases and gives different options priority over others. Basically, the priors to use for PDA parameters can come from either a PEcAn prior.distns or post.distns object (the latter would be, e.g., the posteriors of a meta-analysis or previous PDA), specified either by file path or BETY ID. If not told otherwise, the code tries to just find the most recent posterior in BETY, and use that as prior for PDA.
  * *pda.create.ensemble* gets an ensemble ID for the PDA. All model runs associated with an individual PDA (any of the three methods) are considered part of a single ensemble. All this function does is register a new ensemble in BETY and return the ID that BETY gives it.
  * *pda.define.prior.fn* creates R functions for all of the priors the PDA will use.
  * *pda.init.params* sets up the parameter matrix for the run, which has one row per iteration, and one column per parameter. Columns include all PEcAn parameters, not just the (probably small) subset that are being updated by PDA. This is for compatibility with other PEcAn components. If starting a fresh run, the returned matrix is just a big empty matrix to fill in as the PDA runs. If continuing an existing MCMC, then it will be the previous params matrix, with a bunch of blank rows added on for filling in during this round of PDA.
  * *pda.init.run* This is basically a big wrapper for PEcAn's write.config function (actually functions [plural], since every model in PEcAn has its own version). For the bruteforce and bruteforce.bs methods this will be run once per iteration, whereas the emulator method knows about all its runs ahead of time and this will be a big batch of all runs at once.
  * *pda.adjust.jumps* tweaks the jump distributions for the standard MCMC method, and *pda.adjust.jumps.bs* does the same for the block-sampled version.
  * *pda.calc.llik* calculates the log-likelihood of the model given all datasets provided to compare it to.
  * *pda.generate.knots* is for the emulator version of PDA. It uses a Latin hypercube design to sample a specified number of locations in parameter space. These locations are where the model will actually be run, and then the GP interpolates the likelihood surface in between.
  * *pda.plot.params* provides basic MCMC diagnostics (trace and density) for parameters being sampled.
  * *pda.postprocess* prepares the posteriors of the PDA, stores them to files and the database, and performs some other cleanup functions.
  * *pda.load.data.r* This is the function that loads in data that will be used to constrain the PDA. It's supposed to eventually be more integrated with PEcAn, which will know how to load all kinds of data from all kinds of sources. For now, it can do NEE from Ameriflux.
  * *pda.define.llik.r* A simple helper function that defines likelihood functions for different datasets. Probably in the future this should be queried from the database or something. For now, it is extremely limited. The original test case of NEE assimilation uses a heteroskedastic Laplacian distribution.
  * *pda.get.model.output.R* Another function that will eventually grow to handle many more cases, or perhaps be replaced by a better system altogether. For now though, it again just handles Ameriflux NEE.

#### **get.da.data.\*.R, plot.da.R**
Old code written by Carl Davidson. It is defunct now, but may contain good ideas, so it is currently left in.

### State data assimilation (SDA) {#sda}

`sda.enkf.R` is housed within: `/pecan/modules/assim.sequential/R`

The tree ring tutorial is housed within: `/pecan/documentation/tutorials/StateAssimilation`

More descriptive SDA methods can be found at: `/pecan/book_source/adve_user_guide_web/SDA_Methods.Rmd`

#### **sda.enkf.R Description**
This is the main ensemble Kalman filter and generalized filter code. Originally, this was just ensemble Kalman filter code. Mike Dietze and Ann Raiho added a generalized ensemble filter to avoid filter divergence. The output of this function will be all of the run outputs, a PDF of diagnostics, and an Rdata object that includes three lists:

* FORECAST will be the ensemble forecasts for each year
* ANALYSIS will be the updated ensemble sample given the NPP observations
* enkf.params contains the prior and posterior mean vector and covariance matrix for each time step.
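As a hedged sketch of inspecting these three lists after a run: the file name `sda.output.Rdata` and its location under an `SDA` folder in the output directory are assumptions (check your own run), and the element names inside `enkf.params` may differ.

```
# Sketch: load and inspect the SDA output object (paths/names are assumptions).
load(file.path(settings$outdir, "SDA", "sda.output.Rdata"))

names(FORECAST)      # one element per assimilation time point
dim(ANALYSIS[[1]])   # ensemble members x state variables
str(enkf.params[[1]])  # prior/posterior mean vectors and covariance matrices
```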
#### **sda.enkf.R Arguments**

* settings - (required) [State Data Assimilation Tags Example] settings object

* obs.mean - (required) a list of observation means named with dates in YYYY/MM/DD format

* obs.cov - (required) a list of observation covariances named with dates in YYYY/MM/DD format

* IC - (optional) initial condition matrix (dimensions: ensemble member # by state variables). Default is NULL.

* Q - (optional) process covariance matrix (dimensions: state variables by state variables). Default is NULL.

#### State Data Assimilation Workflow
Before running sda.enkf, these tasks must be completed (in no particular order):

* Read in a [State Data Assimilation Tags Example] settings file with tags listed below, i.e. `read.settings('pecan.SDA.xml')`

* Load data means (obs.mean) and covariances (obs.cov) as lists with PEcAn naming and unit conventions. Each observation must have a date in YYYY/MM/DD format (optional time) associated with it. If there are missing data, the date must still be represented in the list with an NA as the list object.

* Create an initial conditions matrix (IC) with ensemble members as rows and state variables as columns. [sample.IC.MODEL][sample.IC.MODEL.R] can be used to create the IC matrix, but it is not required. This IC matrix is fed into write.configs for the initial model runs.

The main parts of the SDA function are:

Setting up for initial runs:

* Set parameters

* Load initial run inputs via [split.inputs.MODEL][split.inputs.MODEL.R]

* Open database connection

* Get new workflow ids

* Create ensemble ids

Performing the initial set of runs

Set up for data assimilation

Loop over time

* [read.restart.MODEL][read.restart.MODEL.R] - read model restart files corresponding to start.time and stop.time that you want to assimilate data into

* Analysis - There are four choices based on whether process variance is TRUE or FALSE and whether or not there is data. [See explanation below.][Analysis Options]

* [write.restart.MODEL][write.restart.MODEL.R] - This function has two jobs. First, to insert the adjusted state back into the model restart file. Second, to update start.time, stop.time, and job.sh.

* run model

Save outputs

Create diagnostics

#### State Data Assimilation Tags Example

```
<state.data.assimilation>
  <adjustment>TRUE</adjustment>
  <process.variance>FALSE</process.variance>
  <sample.parameters>FALSE</sample.parameters>
  <q.type>Single</q.type>
  <state.variables>
    <variable>
      <variable.name>AGB.pft</variable.name>
      <unit>MgC/ha/yr</unit>
      <min_value>0</min_value>
      <max_value>100000000</max_value>
    </variable>
    <variable>
      <variable.name>TotSoilCarb</variable.name>
      <unit>KgC/m^2</unit>
      <min_value>0</min_value>
      <max_value>100000000</max_value>
    </variable>
  </state.variables>
  <spin.up>
    <start.date>1950/01/01</start.date>
    <end.date>1960/12/31</end.date>
  </spin.up>
  <forecast.time.step>1</forecast.time.step>
  <start.date>1961/01/01</start.date>
  <end.date>2010/12/31</end.date>
</state.data.assimilation>
```

#### State Data Assimilation Tags Descriptions

* **adjustment** : [optional] TRUE/FALSE flag for whether ensembles need to be adjusted based on weights estimated given their likelihood during the analysis step. The default for this flag is TRUE.
* **process.variance** : [optional] TRUE/FALSE flag for whether process variance should be estimated (TRUE) or not (FALSE). If TRUE, a generalized ensemble filter will be used. If FALSE, an ensemble Kalman filter will be used. Default is FALSE. If you use TRUE, you can set three more optional tags to control the MCMCs built for the generalized ensemble filter.
* **nitrGEF** : [optional] numeric defining the length of the MCMC chains.
* **nthin** : [optional] numeric defining the thinning interval for the MCMC chains.
* **nburnin** : [optional] numeric defining the number of burn-in iterations for the MCMCs.
* **q.type** : [optional] If `process.variance` is set to TRUE then this can take values of Single, Site or PFT.
* **censored.data** : [optional] logical; set TRUE for censored state variables.

* **sample.parameters** : [optional] TRUE/FALSE flag for whether parameters should be sampled for each ensemble member or not. This allows for more spread in the initial conditions of the forecast.
* **state.variable** : [required] State variable that is to be assimilated (in PEcAn standard format).
* **spin.up** : [required] start.date and end.date for initial model runs.
* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because initial runs can be done over a subset of the full run.
* **forecast.time.step** : [optional] In the future, this will be used to allow the forecast time step to vary from the data time step.
* **start.date** : [optional] start date of the state data assimilation (in YYYY/MM/DD format)
* **end.date** : [optional] end date of the state data assimilation (in YYYY/MM/DD format)
* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because this analysis can be done over a subset of the run.

#### Model Specific Functions for SDA Workflow

#### read.restart.MODEL.R
The purpose of read.restart is to read model restart files and return a matrix that is site rows by state variable columns. The state variables must be in PEcAn names and units. The arguments are:

* outdir - output directory

* runid - ensemble member run ID

* stop.time - used to determine which restart file to read (in POSIX format)

* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object

* var.names - vector with state variable names with PEcAn standard naming. Example: c('AGB.pft', 'TotSoilCarb')

* params - parameters used by ensemble member (same format as write.configs)

#### write.restart.MODEL.R
This model-specific function takes in new state and new parameter matrices from sda.enkf.R after the analysis step and translates the new variables back to the model variables. Then, it updates start.time, stop.time, and job.sh so that start.model.runs() does the correct runs with the new states. In write.restart.LINKAGES and write.restart.SIPNET, job.sh is updated by using write.configs.MODEL.

* outdir - output directory

* runid - run ID for ensemble member

* start.time - beginning of model run (in POSIX format)

* stop.time - end of model run (in POSIX format)

* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object

* new.state - matrix from the analysis of updated state variables with PEcAn names (dimensions: site rows by state variable columns)

* new.params - In the future, this will allow us to update parameters based on states (same format as write.configs)

* inputs - model-specific inputs from [split.inputs.MODEL][split.inputs.MODEL.R] used to run the model from start.time to stop.time

* RENAME - [optional] Flag used in write.restart.LINKAGES.R for development.

#### split.inputs.MODEL.R
This model-specific function gives the correct met and/or other model inputs to settings$run$inputs. This function returns settings$run$inputs to an inputs argument in sda.enkf.R. Not all models need their inputs to change between time steps; in that case the function should return settings$run$inputs unchanged.

* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object

* start.time - start time for model run (in POSIX format)

* stop.time - stop time for model run (in POSIX format)
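To illustrate the read.restart contract described above, here is a schematic skeleton for a hypothetical model "ModelX". The restart file name and parsing are invented placeholders; only the return shape (site rows by state-variable columns, PEcAn standard names and units) follows the contract.

```
# Schematic sketch of a model-specific read.restart (hypothetical "ModelX").
read.restart.ModelX <- function(outdir, runid, stop.time, settings, var.names, params) {
  restart.file <- file.path(outdir, runid, "restart.dat")  # hypothetical file name
  raw <- read.table(restart.file, header = TRUE)           # model-specific parsing goes here

  # one site row, columns named with PEcAn standard variable names
  forecast <- matrix(unlist(raw[var.names]), nrow = 1,
                     dimnames = list(NULL, var.names))
  return(forecast)
}
```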
#### sample.IC.MODEL.R
This model-specific function is optional, but it can be used to create the initial condition matrix (IC), with state variables as columns and ensemble members as rows. This IC matrix is used for the initial runs in sda.enkf.R in the write.configs.MODEL function.

* ne - number of ensemble members

* state - matrix of state variables to get initial conditions from

* year - used to determine which year to sample initial conditions from

#### Analysis Options
There are four options, depending on whether process variance is TRUE or FALSE and whether or not there is data.

* If there is no data and process variance = FALSE, there is no analysis step.

* If there is no data and process variance = TRUE, process variance is added to the forecast.

* If there is data and process variance = TRUE, [the generalized ensemble filter][The Generalized Ensemble Filter] is implemented with MCMC.

* If there is data and process variance = FALSE, the Kalman filter is used and solved analytically.
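For the last case, here is a minimal numeric sketch of the analytical Kalman update, with toy numbers for two state variables (everything here is illustrative, not PEcAn's implementation).

```
# Sketch: analytical Kalman update (toy numbers, two state variables).
mu.f <- c(10, 2)                          # forecast ensemble mean
P.f  <- matrix(c(4, 0.5, 0.5, 1), 2, 2)   # forecast ensemble covariance
y    <- c(12, 1.5)                        # observations
R    <- diag(c(1, 0.25))                  # observation error covariance
H    <- diag(2)                           # observe both states directly

K    <- P.f %*% t(H) %*% solve(H %*% P.f %*% t(H) + R)  # Kalman gain
mu.a <- mu.f + K %*% (y - H %*% mu.f)                   # analysis mean
P.a  <- (diag(2) - K %*% H) %*% P.f                     # analysis covariance
```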
#### The Generalized Ensemble Filter
An ensemble filter is a sequential data assimilation algorithm with two procedures at every time step: a forecast followed by an analysis. The forecast ensembles arise from a model while the analysis makes an adjustment of the forecast ensembles from the model towards the data. An ensemble Kalman filter is typically suggested for this type of analysis because of its computationally efficient analytical solution and its ability to update states based on an estimate of covariance structure. But, in some cases, the ensemble Kalman filter fails because of filter divergence. Filter divergence occurs when forecast variability is too small, which causes the analysis to favor the forecast and diverge from the data. Models often produce low forecast variability because there is little internal stochasticity. Our ensemble filter overcomes this problem in a Bayesian framework by including an estimation of model process variance. This methodology also maintains the benefits of the ensemble Kalman filter by updating the state vector based on the estimated covariance structure.

This process begins after the model is spun up to equilibrium.

The likelihood function uses the data vector $\left(\boldsymbol{y_{t}}\right)$ conditional on the estimated state vector $\left(\boldsymbol{x_{t}}\right)$ such that

$\boldsymbol{y}_{t}\sim\mathrm{multivariate\:normal}(\boldsymbol{x}_{t},\boldsymbol{R}_{t})$

where $\boldsymbol{R}_{t}=\boldsymbol{\sigma}_{t}^{2}\boldsymbol{I}$ and $\boldsymbol{\sigma}_{t}^{2}$ is a vector of data variances. To obtain an estimate of the state vector $\left(\boldsymbol{x}_{t}\right)$, we use a process model that incorporates a process covariance matrix $\left(\boldsymbol{Q}_{t}\right)$. This process covariance matrix differentiates our methods from past ensemble filters. Our process model contains the following equations

$\boldsymbol{x}_{t} \sim \mathrm{multivariate\: normal}(\boldsymbol{x}_{model_{t}},\boldsymbol{Q}_{t})$

$\boldsymbol{x}_{model_{t}} \sim \mathrm{multivariate\: normal}(\boldsymbol{\mu}_{forecast_{t}},\boldsymbol{P}_{forecast_{t}})$

where $\boldsymbol{\mu}_{forecast_{t}}$ is a vector of means from the ensemble forecasts and $\boldsymbol{P}_{forecast_{t}}$ is a covariance matrix calculated from the ensemble forecasts. The prior for our process covariance matrix is $\boldsymbol{Q}_{t}\sim\mathrm{Wishart}(\boldsymbol{V}_{t},n_{t})$ where $\boldsymbol{V}_{t}$ is a scale matrix and $n_{t}$ is the degrees of freedom. The prior shape parameters are updated at each time step through moment matching such that

$\boldsymbol{V}_{t+1} = n_{t}\bar{\boldsymbol{Q}}_{t}$

$n_{t+1} = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\frac{v_{ijt}^{2}+v_{iit}v_{jjt}}{Var(\boldsymbol{\bar{Q}}_{t})}}{I\times J}$

where we calculate the mean of the process covariance matrix $\left(\bar{\boldsymbol{Q}_{t}}\right)$ from the posterior samples at time t. Degrees of freedom for the Wishart are typically calculated element by element, where $v_{ij}$ are the elements of $\boldsymbol{V}_{t}$, and $I$ and $J$ index rows and columns of $\boldsymbol{V}$. Here, we calculate a mean number of degrees of freedom for $t+1$ by summing over all the elements of the scale matrix $\left(\boldsymbol{V}\right)$ and dividing by the count of those elements $\left(I\times J\right)$. We fit this model sequentially through time in the R computing environment using the R package 'rjags.'

Users have control over what they think is the best way to estimate $Q$. Our code will look for the tag `q.type` in the XML settings under `state.data.assimilation`, which can take the values Single, PFT or Site. If `q.type` is set to Single then one value of process variance will be estimated across all different sites or PFTs. On the other hand, when `q.type` is set to Site or PFT then a process variance will be estimated for each site or PFT, at the cost of more time and computational power.

#### Multi-site State Data Assimilation
The `sda.enkf.multisite` function allows for assimilation of observed data at multiple sites at the same time. In order to run a multi-site SDA, one needs to send a multisettings pecan xml file to this function. This multisettings xml file needs to contain the information required for running at least two sites under the `run` tag. The code will automatically run the ensembles for all the sites and reformat the outputs to match the formats required for the analysis step.

The observed means and covariances need to be formatted as a list of dates with observations; each element of this list is itself a list of mean vectors and covariance matrices for the different sites, named by their site id. If zero variance is estimated for a variable in obs.cov, the SDA code will automatically replace it with half of the minimum variance among the other, non-zero variables at that time step.
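A minimal sketch of assembling these nested lists in R, using the same (hypothetical) site ids and values as the printout that follows:

```
# Sketch: obs.mean / obs.cov for two sites and a single observation date.
date <- "2010/12/31"

obs.mean <- list(list(`1000000650` = c(AbvGrndWood = 111.502, GWBI = 1.0746),
                      `1000000651` = c(AbvGrndWood = 114.302, GWBI = 1.574695)))
names(obs.mean) <- date

obs.cov <- list(list(`1000000650` = matrix(c(19.7821691, 0.5135843,
                                             0.213584319, 0.005162113), 2, 2),
                     `1000000651` = matrix(c(15.2821691, 0.1213583,
                                             0.513584319, 0.001162113), 2, 2)))
names(obs.cov) <- date
```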
- -This would look like something like this: -``` -> obs.mean - -$`2010/12/31` -$`2010/12/31`$`1000000650` - AbvGrndWood GWBI - 111.502 1.0746 - -$`2010/12/31`$`1000000651` - AbvGrndWood GWBI - 114.302 1.574695 -``` - -``` -> obs.cov - -$`2010/12/31` -$`2010/12/31`$`1000000650` - [,1] [,2] -[1,] 19.7821691 0.213584319 -[2,] 0.5135843 0.005162113 - -$`2010/12/31`$`1000000651` - [,1] [,2] -[1,] 15.2821691 0.513584319 -[2,] 0.1213583 0.001162113 -``` -An example of multi-settings pecan xml file also may look like below: -``` - - - - FALSE - TRUE - - 1000000040 - 1000013298 - - - - GWBI - KgC/m^2 - 0 - 9999 - - - AbvGrndWood - KgC/m^2 - 0 - 9999 - - - 1960/01/01 - 2000/12/31 - - - - -1 - - 2017/12/06 21:19:33 +0000 - - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768 - - - bety - bety - 128.197.168.114 - bety - PostgreSQL - false - - /fs/data1/pecan.data/dbfiles/ - - - - temperate.deciduous_SDA - - 2 - - /fs/data2/output//PEcAn_1000008768/pft/temperate.deciduous_SDA - 1000008552 - - - - 3000 - - FALSE - TRUE - - - - 20 - 1000016146 - 1995 - 1999 - - - uniform - - - sampling - - - parameters - - - soil - - - - - 1000000022 - /fs/data3/hamzed/output/paleon_sda_SIPNET-8768/Bartlett.param - SIPNET - r136 - FALSE - /fs/data5/pecan.models/SIPNET/trunk/sipnet_ssr - - - 1000008768 - - - - - 1000000650 - 1960/01/01 - 1965/12/31 - Harvard Forest - Lyford Plots (PalEON PHA) - 42.53 - -72.18 - - - - CRUNCEP - SIPNET - - /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim - - - - 1960/01/01 - 1980/12/31 - - - - 1000000651 - 1960/01/01 - 1965/12/31 - Harvard Forest - Lyford Plots (PalEON PHA) - 42.53 - -72.18 - - - - CRUNCEP - SIPNET - - /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim - - - - 1960/01/01 - 1980/12/31 - - - - localhost - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out - - - TRUE - TRUE - TRUE - - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out - run - -``` - -### Running SDA on remote -In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote_Sync_launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote_Sync_launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. - -`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote_Sync_launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. 
Additionally, the `Remote_Sync_launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output.

Several points on how to prepare your xml settings for the remote SDA run:

1. In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags.
2. Inside the xml settings, a flag needs to be included that points to a local directory where `SDA_remote_launcher` will look for either a `sample.Rdata` file or a `pft` folder.
3. You need to set your `<host>` tag according to the desired remote machine. You can learn more about this in the [Remote execution with PEcAn] section of the documentation. Please make sure that the folder path inside `<host>` points to a directory where you would like to store and run your SDA job(s).
4. Finally, make sure the model binary path inside the `<model>` tag is set to the correct path on the remote machine.

### Restart functionality in SDA
If you prefer to run your SDA analysis in multiple stages, where each phase picks up where the previous one left off, you can use the `restart` argument in the `sda.enkf.multisite` function. You need to make sure that the output from the previous step exists in the `SDA` folder (in the `outfolder`), and that the `<start.date>` is the same as the `<end.date>` from the previous step. When you run the SDA with the restart parameter, it will load the output from the previous step and use the configs already written in the run folder to set itself up for the next step. Using the restart argument could be as easy as:

```
sda.enkf.multisite(settings,
                   obs.mean = obs.mean,
                   obs.cov = obs.cov,
                   control = SDA.arguments(debug = FALSE, TimeseriesPlot = FALSE),
                   restart = FALSE
                   )
```
Where the new `settings`, `obs.mean` and `obs.cov` contain the relevant information for the next phase.

### State Data Assimilation Methods

*By Ann Raiho*

Our goal is to build a fully generalizable state data assimilation (SDA) workflow that will assimilate multiple types of data and data products into ecosystem models within PEcAn temporally and spatially. But, during development, specifically with PalEON goals in mind, we have been focusing on assimilating tree ring estimated NPP and AGB and pollen derived fractional composition into two ecosystem models, SIPNET and LINKAGES, at Harvard Forest. This methodology will soon be expanded to include the PalEON sites listed on the [state data assimilation wiki page](https://paleon.geography.wisc.edu/doku.php/working_groups;state_data_assimilation).

#### Data Products
During workflow development, we have been working with tree ring estimated NPP and AGB and pollen derived fractional composition data products. Both of these data products have been estimated with a full accounting of uncertainty, which provides us with a state variable observation mean vector and covariance matrix at each time step. These data products are discussed in more detail below. Even though we have been working with specific data products during development, our workflow is generalizable to alternative data products as long as we can calculate a state variable observation mean vector and covariance for a time point.
-
-#### Tree Rings
-We have primarily been working with the tree ring data product created by Andria Dawson and Chris Paciorek and with the PEcAn tree ring allometry module. They have developed a Bayesian model that estimates annual aboveground biomass increment (Mg/ha/yr) and aboveground biomass (Mg/ha) for each tree in a dataset. We obtain these data and aggregate them to the level appropriate for the ecosystem model. In SIPNET, we are assimilating annual gross woody increment (Mg/ha/yr) and aboveground woody biomass (Mg/ha). In LINKAGES, we are assimilating annual species biomass. More information on deriving these tree ring data products can be found in Dawson et al 201?.
-
-We have been working mostly with tree data collected at Harvard Forest. Tree rings and census data were collected at the Lyford Plot between 1960 and 2010 in three separate plots. Other tree ring data will be added to this analysis in the future from past PEON courses (UNDERC), Kelly Heilman (Billy's Lake and Bigwoods), and Alex Dye (Huron Mt. Club).
-
-#### Pollen
-STEPPS is a Bayesian model developed by Paciorek and McLachlan 2009 and Dawson et al 2016 to estimate spatially gridded fractional composition from fossil pollen. We have been working with STEPPS1 output, specifically with the grid cell that contains Harvard Forest. The temporal resolution of this data product is centennial. Our workflow currently operates at annual time steps, but it does not require data at every time step. So, it is possible to assimilate fractional composition every one hundred years, or to assimilate fractional composition data every year by accounting for variance inflation.
-
-In the future, pollen-derived biomass (ReFAB) will also be available for data assimilation, although we have not yet discussed how STEPPS and ReFAB data assimilation will work.
-
-#### Variance Inflation
-*Side note: we probably want to call this something else now.*
-
-Since the fractional composition data product has a centennial resolution, in order to use fractional composition information every year we need to change the weight the data has on the analysis. The basic idea is to downweight the likelihood relative to the prior to account for (a) the fact that we assimilate an observation multiple times and (b) the fact that the number of STEPPS observations is 'inflated' because of the autocorrelation. To do this, we take the likelihood and raise it to the power of (1/w), where 'w' is an inflation factor:
-
-w = D * (N / ESS)
-
-where D is the length of the time step (in our case D = 100), N is the number of time steps (in our case N = 11), and ESS is the effective sample size. The ESS is calculated with the following function, where ntimes is the same as N above and sims is a matrix with one row per time step and one column per MCMC sample.
-
-```
-ESS_calc <- function(ntimes, sims){
-  # center based on the mean at each time to remove baseline temporal correlation
-  # (we want to estimate the effective sample size effect from correlation of the errors)
-  row.means.sims <- sims - rowMeans(sims)
-
-  # compute all pairwise covariances at different times
-  covars <- NULL
-  for(lag in 1:(ntimes-1)){
-    covars <- c(covars, rowMeans(row.means.sims[(lag+1):ntimes, , drop = FALSE] * row.means.sims[1:(ntimes-lag), , drop = FALSE]))
-  }
-  vars <- apply(row.means.sims, 1, var) # pointwise posterior variances at each time; might not be homoscedastic
-
-  # nominal sample size scaled by the ratio of the variance of an average
-  # under independence to the variance of an average of correlated values
-  neff <- ntimes * sum(vars) / (sum(vars) + 2 * sum(covars))
-  return(neff)
-}
-```
-
-The ESS for the STEPPS1 data product is 3.6, so w in our assimilation of fractional composition at Harvard Forest will be w = 100 * (11 / 3.6) = 305.6.
-
-#### Current Models
-SIPNET and LINKAGES are the two ecosystem models that have been used during state data assimilation development within PEcAn. SIPNET is a simple ecosystem model that was built for fast runtimes and efficient assimilation of eddy covariance flux data. LINKAGES is a forest gap model created to simulate the process of succession that occurs when a gap is opened in the forest canopy. LINKAGES has 72 species-level plant functional types and the ability to simulate some belowground processes (C and N cycles).
-
-#### Model Calibration
-Without model calibration, both SIPNET and LINKAGES make incorrect predictions about Harvard Forest. To confront this problem, SIPNET and LINKAGES will both be calibrated using data collected at the Harvard Forest flux tower. Istem has completed the calibration for SIPNET using a [parameter data assimilation emulator](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/R/pda.emulator.R) contained within the PEcAn workflow. LINKAGES will also be calibrated using this method. This method is also generalizable to other sites, assuming there are data, independent of the data being assimilated, available to calibrate against.
-
-#### Initial Conditions
-The initial conditions for SIPNET are sampled across state space based on data distributions at the time when the data assimilation will begin. We do not sample LINKAGES for initial conditions and instead perform model spin-up for 100 years prior to beginning data assimilation. In the future, we would like to estimate initial conditions based on data. We achieve adequate spread in the initial conditions by allowing the parameters to vary across ensemble members.
-
-#### Drivers
-We are currently using Global Climate Model (GCM) drivers from the PalEON model intercomparison. Christy Rollinson and John Tipton are creating downscaled GCM met drivers for the PalEON data assimilation sites. We will use these drivers when they are available because they are a closer representation of reality.
-
-#### Sequential State Data Assimilation
-We are using sequential state data assimilation methods to assimilate PalEON data products into ecosystem models because sequential state data assimilation requires less computational power than particle filter methods.
-
-#### General Description
-The general sequential data assimilation framework consists of three steps at each time step:
-
-1. Read the state variable output for time t from the model forecast ensembles and save the forecast mean (muf) and covariance (Pf).
-2. If there are a data mean (y) and covariance (R) at this time step, perform the data assimilation analysis (either EnKF or the generalized ensemble filter) to calculate the new mean (mua) and covariance (Pa) of the state variables.
-3. Use mua and Pa to restart and run the ecosystem model ensembles with the new state variables for time t+1.
-
-#### EnKF
-There are two ways to implement sequential state data assimilation at this time. The first is the Ensemble Kalman Filter (EnKF). The EnKF has an analytical solution, so the Kalman gain, analysis mean vector, and analysis covariance matrix can be calculated directly (here `H` is the observation operator mapping state variables to observations, `Y` is the observation mean vector, and `X` is the ensemble state matrix):
-
-```
-K    <- Pf %*% t(H) %*% solve((R + H %*% Pf %*% t(H))) ## Kalman gain
-
-mu.a <- mu.f + K %*% (Y - H %*% mu.f)    # Analysis mean vector
-
-Pa   <- (diag(ncol(X)) - K %*% H) %*% Pf # Analysis covariance matrix
-```
-
-The EnKF is typically used for sequential state data assimilation, but we found that the EnKF led to filter divergence when combined with our uncertain data products. Filter divergence led us to create a generalized ensemble filter that estimates process variance.
-
-#### Generalized Ensemble Filter
-The generalized ensemble filter generally follows the three steps of sequential state data assimilation, but it adds a latent state vector that accounts for added process variance. Furthermore, instead of solving the analysis analytically like the EnKF, we have to estimate the analysis mean vector and covariance matrix with MCMC.
-
-#### Mapping Ensemble Output to Tobit Space
-There are some instances when we have right- or left-censored variables from the model forecast. For example, a model estimating species-level biomass may have several ensemble members that produce zero biomass for a given species. We consider this case a left-censored state variable that needs to be mapped to normal space using a tobit model. We do this by creating two matrices with dimensions number of ensembles by number of state variables. The first matrix is a matrix of indicator variables (y.ind), and the second is a matrix of censored variables (y.censored). When the indicator variable is 0, the state variable (j) for ensemble member (i) is sampled. This allows us to impute a normal distribution for each state variable that contains 'missing' forecasts or forecasts of zero.
-
-```
-tobit2space.model <- nimbleCode({
-  for(i in 1:N){
-    y.censored[i,1:J] ~ dmnorm(muf[1:J], cov = pf[1:J,1:J])
-    for(j in 1:J){
-      y.ind[i,j] ~ dconstraint(y.censored[i,j] > 0)
-    }
-  }
-
-  muf[1:J] ~ dmnorm(mean = mu_0[1:J], cov = pf[1:J,1:J])
-
-  Sigma[1:J,1:J] <- lambda_0[1:J,1:J]/nu_0
-  pf[1:J,1:J] ~ dinvwish(S = Sigma[1:J,1:J], df = J)
-
-})
-```
-
-
-#### Generalized Ensemble Filter Model Description
-Below is the BUGS code for the full analysis model. The forecast mean and covariance are calculated from the tobit2space model above. We use a tobit likelihood in this model because there are instances when the data may be left or right censored. Process variance is included by adding a latent model state (X) with a process precision matrix (q). We update our prior on q at each time step using our estimate of q from the previous time step.
-
-```
-tobit.model <- nimbleCode({
-
-  q[1:N,1:N] ~ dwish(R = aq[1:N,1:N], df = bq) ## aq and bq are estimated over time
-  Q[1:N,1:N] <- inverse(q[1:N,1:N])
-  X.mod[1:N] ~ dmnorm(muf[1:N], prec = pf[1:N,1:N]) ## Model forecast; muf and pf are assigned from the ensembles
-
-  ## add process error
-  X[1:N] ~ dmnorm(X.mod[1:N], prec = q[1:N,1:N])
-
-  #agb linear
-  #y_star[1:YN,1:YN] <- X[1:YN,1:YN] #[choose]
-
-  #f.comp non linear
-  #y_star[1:YN] <- X[1:YN] / sum(X[1:YN])
-
-  ## Analysis
-  y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN,1:YN]) #is it an okay assumption to just have X and Y in the same order?
-
-  #don't flag y.censored as data; y.censored is in inits
-  #remove y.censored samplers and only assign univariate samplers on NAs
-
-  for(i in 1:YN){
-    y.ind[i] ~ dconstraint(y.censored[i] > 0)
-  }
-
-})
-```
-
-#### Ensemble Adjustment
-Each ensemble member has a different set of species parameters. We adjust the updated state variables by using an ensemble adjustment. The ensemble adjustment weights the ensemble members based on their likelihood during the analysis step.
-
-```
-## singular value decomposition of the forecast covariance
-S_f <- svd(Pf)
-L_f <- S_f$d
-V_f <- S_f$v
-
-## normalize
-Z <- X*0
-for(i in seq_len(nrow(X))){
-  Z[i,] <- 1/sqrt(L_f) * t(V_f) %*% (X[i,] - mu.f)
-}
-Z[is.na(Z)] <- 0
-
-## singular value decomposition of the analysis covariance
-S_a <- svd(Pa)
-L_a <- S_a$d
-V_a <- S_a$v
-
-## analysis ensemble
-X_a <- X*0
-for(i in seq_len(nrow(X))){
-  X_a[i,] <- V_a %*% diag(sqrt(L_a)) %*% Z[i,] + mu.a
-}
-```
-
-#### Diagnostics
-There are three diagnostics we have currently implemented: time series, bias time series, and process variance.
-
-The time series diagnostics show the data, forecast, and analysis time series for each state variable. These are useful for visually assessing the variance and magnitude of change of state variables through time. These time series are updated throughout the analysis and are also saved as a pdf at the end of the SDA workflow.
-
-There are two types of bias time series: the first assesses the bias in the update (the forecast minus the analysis), and the second assesses the bias in the error (the forecast minus the data). These bias time series are useful for identifying which state variables have intrinsic bias within the model. For example, if red oak biomass in LINKAGES increases at every time step (the update and the error are always positive), this would suggest that LINKAGES has a positive growth or recruitment bias for red oak.
-
-Finally, when using the generalized ensemble filter to estimate process variance, there are two additional plots for assessing the estimation of process variance. The first is a correlation plot of the process covariance matrix. This tells us which correlations are incorrectly represented by the model. For example, if red oak biomass and white pine biomass are highly negatively correlated in the process covariance matrix, this means either that 1) the model has no relationship between red oak and white pine when they should affect each other negatively, or 2) there is a positive relationship between red oak and white pine in the model when there shouldn't be any relationship. We can determine which of these is true by comparing the process covariance matrix to the model covariance matrix. The second process variance diagnostic plot shows how the degrees of freedom associated with estimating the process covariance matrix have changed through time. This plot should show increasing degrees of freedom through time.
-
-
-### MultiSettings {#multisettings}
-
-(TODO: Under construction...)
-
-### Benchmarking {#benchmarking}
-
-Benchmarking is the process of comparing model outputs against either experimental data or other model outputs as a way to validate model performance.
-We have a suite of statistical comparisons that provide benchmarking scores, as well as visual comparisons that help in diagnosing data-model and/or model-model differences.
-
-#### Data Preparation
-
-All data that you want to compare with model runs must be registered in the database.
-This is currently a step that must be done by hand, either from the command line or through the online BETY interface.
-The data must have three records:
-
-1. An input record (Instructions [here](#NewInput))
-
-2. A database file record (Instructions [here](#NewInput))
-
-3. A format record (Instructions [here](#NewFormat))
-
-#### Model Runs
-
-Model runs can be set up and executed
-- Using the PEcAn web interface online or with a VM ([see setup](#GettingStarted))
-- By hand using the [pecan.xml](#pecanXML)
-
-#### The Benchmarking Shiny App
-
-The entire benchmarking process can be done through the Benchmarking R Shiny app.
-
-When the model run has completed, navigate to the workflow visualization Shiny app.
-
-- Load model data
-  - Select the workflow and run id
-  - Make sure that your model output is loading properly (i.e. you can see plots of your data)
-- Load benchmarking data
-  - Again, make sure that you can see the uploaded data plotted alongside the model output. In the future there will be more tools for double checking that your uploaded data are appropriate for benchmarking, but for now you may need to do the sanity checks by hand.
-
-#### Create a reference run record
-
-- Navigate to the Benchmarking tab
-  + The first step is to register the new model run as a reference run in the database. Benchmarking cannot be done before this step is completed. When the reference run record has been created, additional menus for benchmarking will appear.
-
-#### Setup Benchmarks and metrics
-
-- From the menus, select
-  - The variables in the uploaded data that you wish to compare with model output.
-  - The numerical metrics you would like to use in your comparison.
-  - Additional comparison plots that you would like to see.
-- Note: All these selections populate the benchmarking section of the `pecan.BENCH.xml`, which is then saved in the same location as the original run output. This xml is purely for reference.
-
-##### Benchmarking Output
-
-- All benchmarking results are stored in the benchmarking directory, which is created in the same folder as the original model run.
-- The benchmarking directory contains subdirectories for each of the datasets compared with the model output. The names of these directories are the same as the corresponding data set's input id in BETY.
-- Each input directory contains `benchmarking.output.Rdata`, an Rdata file containing all the results of the benchmarking workflow. `load("benchmarking.output.Rdata")` loads a list called `result.out`, which contains the following:
-  - `bench.results`: a data frame of all numeric benchmarking scores
-  - `format`: a data frame that can be used to see how the input data was transformed to make it comparable to the model output. This involves converting from the original variable names and units to the internal PEcAn standard.
-  - `aligned.dat`: a data frame of the final aligned model and input values.
-
-- All plots are saved as pdf files named "benchmark_plot-type_variable_input-id.pdf".
-
-- To view interactive results, navigate to the Benchmarking Plots tab in the shiny app.
-
-
-
-#### Benchmarking in pecan.xml
-
-Before reading this section, it is recommended that you [familiarize yourself with the basics of the pecan.xml file](#pecanXML).
-
-The `pecan.xml` has an _optional_ benchmarking section. Below, all the tags in the benchmarking section are explained. Many of these fields are filled in automatically during the benchmarking process when using the benchmarking shiny app.
-
-The only time one should edit the benchmarking section by hand is for performing clone runs. See the [clone run documentation](#CloneRun).
-
-`` settings:
-
-- `ensemble_id`: the id of the ensemble that you will be using - the settings from this ensemble will be saved in a reference run record, and then `ensemble_id` will be replaced with `reference_run_id`
-- `new_run`: TRUE = create new run, FALSE = use existing run (required, default FALSE)
-
-It is possible to look at more than one benchmark with a particular run.
-The specific settings related to each benchmark are in a subsection called `benchmark`:
-
-- `input_id`: the id of the benchmarking data (required)
-- `variable_id`: the id of the variable of interest within the data. If you leave this blank, all variables that are shared between the input and model output will be used.
-- `metric_id`: the id(s) of the metric(s) to be calculated. If you leave this blank, all metrics will be used.
-
-Example:
-In this example,
-- we are using a pre-existing run from `ensemble_id = 1000010983` (`new_run = FALSE`)
-- the output will be compared to data from `input_id = 1000013743`, specifically two variables of interest: `variable_id = 411, variable_id = 18`
-- for `variable_id = 411` we will perform only one metric of comparison, `metric_id = 1000000001`
-- for `variable_id = 18` we will perform two metrics of comparison, `metric_id = 1000000001, metric_id = 1000000002`
-
-```xml
-
- 1000010983
- FALSE
-
- 1000013743
- 411
- 853
-
- 1000000001
-
-
-
- 1000013743
- 18
- 853
-
- 1000000001
- 1000000002
-
-
-
-```
-
-
-
-# Developer guide {#developer-guide}
-
-* [Update BETY](#updatebety)
-* [Update PEcAn Code](#pecan-make)
-* [PEcAn and Git](#pecan-git)
-* [Coding Practices](#coding-practices)
-
-
-
-## Updating PEcAn Code and Bety Database {#updatebety}
-
-Release notes for all releases can be found [here](https://github.com/PecanProject/pecan/releases).
-
-This page will only list the steps you have to do to upgrade an existing system. When updating PEcAn it is highly encouraged to update BETY as well. You can find instructions on how to do this, as well as on how to update the database, on the [Updating BETYdb](https://pecan.gitbooks.io/betydb-documentation/content/updating_betydb_when_new_versions_are_released.html) gitbook page.
-
-
-### Updating PEcAn {#pecan-make}
-
-The latest version of the PEcAn code can be obtained from the PEcAn repository on GitHub:
-
-```bash
-cd pecan # If you are not already in the PEcAn directory
-git pull
-```
-
-The PEcAn build system is based on GNU Make.
-The simplest way to install is to run `make` from inside the PEcAn directory.
-This will update the documentation for all packages and install them, as well as all required dependencies.
-
-For more control, the following `make` commands are available:
-
-* `make document` -- Use `devtools::document` to update the documentation for all packages.
-Under the hood, this uses the `roxygen2` documentation system.
-
-* `make install` -- Install all packages and their dependencies using `devtools::install`.
-By default, this only installs packages that have had their code changed, plus any dependent packages.
-
-* `make check` -- Perform a rigorous check of packages using `devtools::check`.
-
-* `make test` -- Run all unit tests (based on the `testthat` package) for all packages, using `devtools::test`.
-
-* `make clean` -- Remove the make build cache, which is used to track which packages have changed.
-Cache files are stored in the `.doc`, `.install`, `.check`, and `.test` subdirectories in the PEcAn main directory.
-Running `make clean` will force the next invocation of `make` commands to operate on all PEcAn packages, regardless of changes.
-
-The following are some additional `make` tricks that may be useful:
-
-* Install, check, document, or test a specific package -- `make .<command>/<package directory>`; e.g. `make .install/utils` or `make .check/modules/rtm`
-
-* Force `make` to run, even if the package has not changed -- `make -B <target>`
-
-* Run `make` commands in parallel -- `make -j`; e.g. `make -j4 install` to install packages using four parallel processes. Note that packages containing compiled code (e.g. PEcAn.RTM, PEcAn.BASGRA) might fail when `j` is greater than 1, because of limitations in the way R calls `make` internally while compiling them. See [GitHub issue 1976](https://github.com/PecanProject/pecan/issues/1976) for more details.
-
-All instructions for the `make` build system are contained in the `Makefile` in the PEcAn root directory.
-For full documentation on `make`, see the man pages by running `man make` from a terminal.
-
-
-
-## Git and GitHub Workflow {#pecan-git}
-
-[Using Git](#using-git)
-
-
-
-### Using Git {#using-git}
-
-This document describes the steps required to download PEcAn, make changes to the code, and submit your changes.
-
-* If you are new to GitHub or to PEcAn, start with the one-time set-up instructions under [Before any work is done]. Also see the excellent tutorials and references in the [Git] section right below this list and at the bottom in [References].
-* To make trivial changes, see [Quick and Easy].
-* To make a few changes to the code, start with the [Basic Workflow].
-* To make substantial changes and/or if you plan to contribute over time, see [Recommended Workflow: A new branch for each change].
-
-#### Git
-
-Git is a free & open source, distributed version control system designed
-to handle everything from small to very large projects with speed and
-efficiency. Every Git clone is a full-fledged repository with complete
-history and full revision tracking capabilities, not dependent on
-network access or a central server. Branching and merging are fast and
-easy to do.
-
-A good place to start is the [GitHub 5 minute illustrated tutorial](https://guides.github.com/introduction/flow/).
-In addition, there are three fun tutorials for learning git:
-
-* [Learn Git](https://www.codecademy.com/learn/learn-git) is a great web-based interactive tutorial.
-* [LearnGitBranching](https://learngitbranching.js.org/)
-* [TryGit](http://try.github.com)
-
-
-**URLs** In the rest of this document we will use specific URLs to clone the code.
-There are a few URLs you can use to clone a project, using https, ssh, and
-git. You can use either https or ssh to clone a repository and write to
-it; the git protocol is read-only.
-
-
-#### PEcAn Project and GitHub
-* Organization Repository: https://github.com/organizations/PecanProject
-* PEcAn source code: https://github.com/PecanProject/pecan.git
-* BETYdb source code: https://github.com/PecanProject/bety.git
-
-These instructions apply to the other repositories too.
-
-#### PEcAn Project Branches
-We follow the branch organization laid out on [this page](http://nvie.com/posts/a-successful-git-branching-model).
-
-In short, there are three main branches you must be aware of:
-
-* **develop** - Main branch containing the latest code. This is the main branch you will make changes to.
-* **master** - Branch containing the latest stable code. DO NOT MAKE CHANGES TO THIS BRANCH.
-* **release/vX.X.X** - Named branches containing code specific to a release. Only make changes to these branches if you are fixing a bug on a release branch.
-
-#### Milestones, Issues, Tasks
-
-Milestones, issues, and tasks can be used to organize specific features or research projects. In general, there is a hierarchy:
-
-* milestones (Big picture, "Epic"): contain many issues, organized by release.
-* issues (Specific features / bugs, "Story"): may contain a list of tasks and represent a specific feature or bug.
-* task list (to do list, "Tasks"): list of steps required to close an issue, e.g.:
-
-----------------------------------
-* [ ] first do this
-* [ ] then this
-* [ ] completed when x and y
-----------------------------------
-
-
-#### Quick and Easy
-
-The **easiest** approach is to use GitHub's browser-based workflow. This is useful when your change is a few lines, if you are editing a wiki, or if the edit is trivial (and won't break the code). The [GitHub documentation is here](https://help.github.com/articles/github-flow-in-the-browser), but it is simple: find the page or file you want to edit, click "edit", and the GitHub web application will automatically fork and branch, then allow you to submit a pull request. However, note that unless you are a member of the PEcAn project, the "edit" button will not be active, and you'll want to follow the workflow described below for forking and then submitting a pull request.
-
-
-#### Recommended Git Workflow
-
-
-**Each feature should be in its own branch** (for example, each issue is a branch; names of branches are often the issue number in a bug tracking system).
-
-**Commit and Push Frequency** On your branch, commit **at minimum once a day** before you push changes. Even better: commit every time you reach a stopping point and move to a new issue. Best: commit any time that you have done work that you do not want to re-do. Remember, pushing changes to your branch is like saving a draft. Submit a pull request when you are done.
-
-#### Before any work is done
-
-The first step below only needs to be done once, when you first start working on the PEcAn code. The steps after that set up PEcAn on your computer and would need to be repeated if you move to a new computer. If you are working from the PEcAn VM, you can skip the "git clone" step since the PEcAn code is already installed.
-
-Most people will not be able to work in the PEcAn repository directly and will need to create a fork of the PEcAn source code in their own folder. To do this, fork PEcAn into your own GitHub space ([github help: "fork a repo"](https://help.github.com/articles/fork-a-repo)). This forked repository will allow you to create branches, commit changes back to GitHub, and create pull requests to the develop branch of PEcAn.
-
-The forked repository is the only way for external people to commit code back to PEcAn and BETY. The pull request will start a review process that will eventually result in the code being merged into the main copy of the codebase. See https://help.github.com/articles/fork-a-repo for more information, especially on how to keep your fork up to date with respect to the original. (Rstudio users should also see [Git + Rstudio](Using-Git.md#git--rstudio), below.)
-
-You can set up SSH keys to make it easier to commit code back to GitHub. This may especially be true if you are working from a cluster; see [set up ssh keys](https://help.github.com/articles/generating-ssh-keys).
-
-1. Introduce yourself to git
-
-`git config --global user.name "FULLNAME"`
-`git config --global user.email you@yourdomain.example.com`
-
-2. Fork PEcAn on GitHub. Go to the PEcAn source code and click on the Fork button in the upper right. This will create a copy of PEcAn in your personal space.
-
-3. Clone to your local machine via the command line
-
-`git clone git@github.com:<username>/pecan.git`
-
-If this does not work, try the https method
-
-`git clone https://github.com/PecanProject/pecan.git`
-
-4. Define the upstream repository
-
-```
-cd pecan
-git remote add upstream git@github.com:PecanProject/pecan.git
-```
-
-#### During development:
-
-* commit often;
-* each commit can address 0 or 1 issue; many commits can reference an issue;
-* ensure that all tests are passing before anything is pushed into develop.
-
-#### Basic Workflow
-
-This workflow is for educational purposes only. Please use the Recommended Workflow if you plan on contributing to PEcAn. This workflow does not include creating branches, a feature we would like you to use.
-
-1. Get the latest code from the main repository
-
-`git pull upstream develop`
-
-2. Do some coding
-
-3. Commit after each chunk of code (multiple times a day)
-
-`git commit -m "<message>"`
-
-4. Push to YOUR GitHub (when a feature is working, a set of bugs are fixed, or you need to share progress with others)
-
-`git push origin develop`
-
-5. Before submitting code back to the main repository, make sure that the code compiles from the main directory.
-
-`make`
-
-6. Submit a pull request with a reference to the related issue;
-* also see the [github documentation](https://help.github.com/articles/using-pull-requests)
-
-
-#### Recommended Workflow: A new branch for each change
-
-1. Make sure you start in develop
-
-`git checkout develop`
-
-2. Make sure develop is up to date
-
-`git pull upstream develop`
-
-3. Run the PEcAn Makefile to compile code from the main directory.
-
-`make`
-
-4. Create a branch and switch to it
-
-`git checkout -b <branchname>`
-
-5. Work/commit/etc.
-
-`git add <file>`
-
-`git commit -m "<message>"`
-
-6. Make sure that the code compiles and the documentation is updated. The `make document` command will run roxygenise.
-
-`make document`
-`make`
-
-7. Push this branch to your GitHub space
-
-`git push origin <branchname>`
-
-8. Submit a pull request that [links commits to issues](#link-commits-to-issues);
-* also see the [explanation in this PecanProject/bety issue](https://github.com/PecanProject/bety/issues/57) and the [github documentation](https://help.github.com/articles/using-pull-requests)
-
-#### After the pull request is merged
-
-1. Make sure you start in develop
-
-`git checkout develop`
-
-2. Delete the branch remotely
-
-`git push origin --delete <branchname>`
-
-3. Delete the branch locally
-
-`git branch -D <branchname>`
-
-
-#### Fixing a release Branch
-
-If you would like to make changes to a release branch, you must follow a different workflow, as the release branch will not contain the latest code on develop and must remain separate.
-
-1. Fetch the upstream remote branches
-
-`git fetch upstream`
-
-2. Check out the correct release branch
-
-`git checkout -b release/vX.Y.Z`
-
-3. Compile the code with make
-
-`make`
-
-4. Make changes and commit them
-
-`git add <file>`
-`git commit -m "Describe changes"`
-
-5. Compile and make roxygen changes
-
-`make`
-`make document`
-
-6. Commit and push any files that were changed by `make document`
-
-7. Make a pull request. It is essential that you compare your pull request to the remote release branch, NOT the develop branch.
-
-
-#### Link commits to issues
-
-You can reference and close issues from comments, pull requests, and commit messages. This should be done when you commit code that is related to or will close/fix an existing issue.
-
-An easy way to do this is to include the following text in your commit message:
-
-* [**Github**](https://github.com/blog/1386-closing-issues-via-commit-messages)
-* to close: "closes gh-xxx" (or syn. close, closed, fixes, fix, fixed)
-* to reference: just the issue number (e.g. "gh-xxx")
-
-
-#### Other Useful Git Commands:
-
-* Git encourages branching "early and often"
-* First pull from develop
-* Branch before working on a feature
-* One branch per feature
-* You can switch easily between branches
-* Merge a feature into the main line when the branch is done
-
-If during the above process you want to work on something else, commit all
-your code, create a new branch, and work on the new branch.
-
-
-* Delete a branch: `git branch -d <branchname>`
-* To push a branch: `git push -u origin <branchname>`
-* To check out a branch:
-
-```
-git fetch origin
-git checkout --track origin/<branchname>
-```
-
-* Show a graph of commits:
-
-`git log --graph --oneline --all`
-
-#### Tags
-
-Git supports two types of tags: lightweight and annotated. For more information, see the [Tagging Chapter in the Git documentation](http://git-scm.com/book/ch2-6.html).
-
-Lightweight tags are useful, but here we discuss the annotated tags that are used for marking stable versions, major releases, and versions associated with published results.
-
-The basic command is `git tag`. The `-a` flag means 'annotated' and `-m` is used before a message. Here is an example:
-
-`git tag -a v0.6 -m "stable version with foo and bar features, used in the foobar publication by Bob"`
-
-Adding a tag to a remote repository must be done explicitly with a push, e.g.
-
-`git push origin v0.6`
-
-To use a tagged version, just check it out:
-
-`git checkout v0.6`
-
-To tag an earlier commit, just append the commit SHA to the command, e.g.
-
-`git tag -a v0.99 -m "last version before 1.0" 9fceb02`
-
-**Using GitHub** The easiest way to get working with GitHub is by installing the GitHub
-client. For instructions for your specific OS and to download the
-GitHub client, see https://help.github.com/articles/set-up-git.
-This will help you set up an SSH key to push code back to GitHub. To
-check out a project you do not need to have an ssh key, and you can use
-the https or git url to check out the code.
-
-#### Git + Rstudio
-
-
-Rstudio is nicely integrated with many development tools, including git and GitHub.
-It is quite easy to check out source code from within the Rstudio program or browser.
-
-The Rstudio documentation includes useful overviews of [version control](http://www.rstudio.com/ide/docs/version_control/overview) and [R package development](http://www.rstudio.com/ide/docs/packages/overview).
-
-Once you have git installed on your computer (see the [Rstudio version control](http://www.rstudio.com/ide/docs/version_control/overview) documentation for instructions), you can use the following steps to install the PEcAn source code in Rstudio.
-
-#### Creating a Read-only version:
-
-This is a fast way to clone the repository that does not support contributing new changes (though this can be enabled with further modification).
-
-1. Install Rstudio (www.rstudio.com)
-2. Click (upper right) project
-  * create project
-  * version control
-  * Git - clone a project from a Git Repository
-  * paste https://www.github.com/PecanProject/pecan
-  * choose working dir. for repo
-
-#### For development:
-
-1. Create an account on GitHub
-2. Create a fork of the PEcAn repository to your own account https://www.github.com/pecanproject/pecan
-3. Install Rstudio (www.rstudio.com)
-4. Generate an ssh key
-  * in Rstudio:
-    * `Tools -> Options -> Git/SVN -> "create RSA key"`
-    * `View public key -> ctrl+C to copy`
-  * in GitHub:
-    * go to [ssh settings](https://github.com/settings/ssh)
-    * `-> 'add ssh key' -> ctrl+V to paste -> 'add key'`
-5. Create a project in Rstudio
-  * `project (upper right) -> create project -> version control -> Git - clone a project from a Git Repository`
-  * paste the repository url `git@github.com:<username>/pecan.git`
-  * choose working dir. for the repository
-
-#### References
-
-#### Git Documentation
-
-* Scott Chacon, ‘Pro Git book’,
-[http://git-scm.com/book](http://git-scm.com/book)
-* GitHub help pages,
-[https://help.github.com](https://help.github.com)
-* Main GIT page,
-[http://git-scm.com/documentation](http://git-scm.com/documentation)
-* Another set of pages about branching,
-[http://sandofsky.com/blog/git-workflow.html](http://sandofsky.com/blog/git-workflow.html)
-* [Stackoverflow highest voted questions tagged "git"](http://stackoverflow.com/questions/tagged/git?sort=votes&pagesize=50)
-
-
-#### GitHub Documentation
-
-When in doubt, the first step is to click the "Help" button at the top of the page.
-
-* [GitHub Flow](http://scottchacon.com/2011/08/31/github-flow.html) by
-Scott Chacon (Git evangelist and Ruby developer working on GitHub.com)
-* [GitHub FAQ](https://help.github.com/)
-* [Using Pull Requests](https://help.github.com/articles/using-pull-requests)
-* [SSH Keys](https://help.github.com/articles/generating-ssh-keys)
-
-
-
-### GitHub use with PEcAn
-
-In this section, development topics are introduced and discussed. If you are looking for an issue to work on, take a look through issues labeled ["good first issue"](https://github.com/PecanProject/pecan/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
-
-We use GitHub to track development.
-
-To learn about GitHub, it is worth taking some time to read through the [FAQ](https://help.github.com/). When in doubt, the first step is to click the "Help" button at the top of the page.
-
-* **To address specific people**, use a GitHub feature called @mentions, e.g. write @dlebauer, @robkooper, @mdietze, or @serbinsh ... in the issue to alert the user, as described in the [GitHub documentation on notifications](https://help.github.com/articles/notifications)
-
-
-#### Bugs, Issues, Features, etc.
-
-
-#### Reporting a bug
-
-1. (For developers) work through debugging.
-2. Once you have identified a problem that you cannot resolve, you can write a bug report.
-3. Write the bug report.
-4. Submit the bug report.
-5. If you do find the answer, explain the resolution (in the issue) and close the issue.
-
-#### Required content
-
-Note:
-
-* **a bug is only a bug if it is reproducible**
-* **clear bug reports save time**
-
-1. Clear, specific title
-2. Description
-   * What you did
-   * What you expected to happen
-   * What actually happened
-   * What does work; under what conditions does it fail?
-   * Reproduction steps - minimum steps required to reproduce the bug
-3. Additional materials that could help identify the cause:
-   * screen shots
-   * stack traces, logs, scripts, output
-   * specific code and data / settings / configuration files required to reproduce the bug
-   * environment (operating system, browser, hardware)
-
-#### Requesting a feature
-
-
-(from The Pragmatic Programmer, available as an
-[ebook](http://proquestcombo.safaribooksonline.com/0-201-61622-X/223)
-through UI libraries, hardcopy on David’s bookshelf)
-
-* focus on “user stories”, e.g. specific use cases
-* be as specific as possible
-
-* Here is an example:
-
-  1. Bob is at www.mysite.edu/maps
-  2. map of the region (based on user location, e.g. US, Asia, etc.)
-  3. option to “use current location” is provided; if clicked, the map zooms in to, e.g., state or county level
-  4. for a site run:
-     1. option to select an existing site or specify a point by lat/lon
-     2. option to specify a bounding box and grid resolution in
-        either lat/lon or polar stereographic.
-  5. asked to specify start and end times in terms of year, month, day, hour, minute. Time is recorded in UTC, not local time; this should be indicated.
-
-#### Closing an issue
-
-1. Definition of “Done”
-   * test
-   * documentation
-2. When the issue is resolved:
-   * status is changed to “resolved”
-   * assignee is changed to the original author
-3. If the original author agrees that the issue has been resolved:
-   * the original author changes the status to “closed”
-4. Except for trivial issues, issues are only closed by the author.
-
-#### When to submit an issue?
-
-**Ideally, non-trivial code changes will be linked to an issue and a commit.**
-
-This requires creating issues for each task, making small commits, and referencing the issue within your commit message. Issues can be created [on GitHub](http://pecanproject.github.io/Report_an_issue.html). These issues can be linked to commits by adding text such as `fixes gh-5`.
-
-Rationale: This workflow is a small upfront investment that reduces error and time spent re-creating and debugging errors. Associating issues and commits makes it easier to identify why a change was made, and potential bugs that could arise when the code is changed. In addition, knowing which issue you are working on clarifies the scope and objectives of your current task.
-
-
-
-## Coding Practices {#coding-practices}
-
-
-
-### Coding Style {#developer-codestyle}
-
-Consistent coding style improves readability and reduces errors in
-shared code.
-
-Unless otherwise noted, PEcAn follows the [Tidyverse style guide](https://style.tidyverse.org/), so please familiarize yourself with it before contributing.
-In addition, note the following:
-
-- **Document all functions using `roxygen2`**.
-See [Roxygen2](#developer-roxygen) for more details.
-- **Put your name on things**.
-
-Any function that you create or make a meaningful contribution to should have your name listed after the author tag in the function documentation.
-It is also often a good idea to add your name to extended comments describing particularly complex or strange code.
-- **Write unit tests with `testthat`**.
-Tests are a complement to documentation - they define what a function is (and is not) expected to do.
-Not all functions necessarily need unit tests, but the more tests we have, the more confident we can be that changes don't break existing code.
-Whenever you discover and fix a bug, it is a good idea to write a unit test that makes sure the same bug won't happen again.
-See [Unit_Testing](#developer-testing) for instructions, and [R packages: Tests](http://r-pkgs.had.co.nz/tests.html).
-- **Do not use abbreviations**.
-Always write out `TRUE` and `FALSE` (i.e. _do not_ use `T` or `F`).
-Do not rely on partial argument matching -- write out all arguments in full.
-- **Avoid dots in function names**.
-R's S3 methods system uses dots to denote object methods (e.g. `print.matrix` is the `print` method for objects of class `matrix`), which can cause confusion.
-Use underscores instead (e.g. `do_analysis` instead of `do.analysis`).
-(NOTE that many old PEcAn functions violate this convention. The plan is to deprecate those in PEcAn 2.0. See GitHub issue [#392](https://github.com/PecanProject/pecan/issues/392)).
-- **Use informative file names with consistent extensions**.
-Standard file extensions are `.R` for R scripts, `.rds` for individual objects (via the `saveRDS` function), and `.RData` (note: capital D!) for multiple objects (via the `save` function).
-For function source code, prefer multiple files with fewer functions in each over large files with lots of functions (though it may be a good idea to group closely related functions in a single file).
-File names should match, or at least closely reflect, their contents (e.g. function `do_analysis` should be defined in a file called `do_analysis.R`).
-_Do not use spaces in file names_ -- use dashes (`-`) or underscores (`_`).
-- **For using external packages, add the package to `Imports:` and call the corresponding function with `package::function`**.
-_Do not_ use `@importFrom package function` or, worse yet, `@import package`.
-(The exception is infix operators like `magrittr::%>%` or `ggplot2::%+%`, which can be imported via roxygen2 documentation like `@importFrom magrittr %>%`).
-_Do not_ add packages to `Depends`.
-In general, try to avoid adding new dependencies (especially ones that depend on system libraries) unless they are necessary or already widely used in PEcAn (e.g. GDAL, NetCDF, XML, JAGS, `dplyr`).
-For a more thorough and nuanced discussion, see the [package dependencies appendix](#package-dependencies).
-
-
-
-### Logging {#developer-logging}
-
-During development we often add many print statements to check how the code is doing, what is happening, what intermediate results there are, etc. When done with development, it would be nice to turn this additional code off, but have the ability to quickly turn it back on if we discover a problem. This is where logging comes into play. Logging allows us to use "rules" to say what information should be shown. For example, when I am working on the code to create graphs, I do not need to see any debugging information about the SQL commands being sent; but when trying to figure out what goes wrong in a SQL statement, it would be nice to show the SQL statements without adding any additional code.
-
-PEcAn provides a set of `logger.*` functions that should be used in place of base R's `stop`, `warning`, `print`, and similar functions. The `logger` functions make it easier to print to a system log file and to control the level of output produced by PEcAn.
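-
-As a quick illustration of the functions described in the list below (the message text here is invented):
-
-```r
-library(PEcAn.logger)
-
-logger.setLevel("DEBUG")  # show everything, including low-level diagnostics
-logger.debug("Sending query:", "SELECT * FROM traits")
-
-logger.setLevel("WARN")   # now DEBUG and INFO messages are suppressed
-logger.info("this message is hidden")
-logger.warn("this warning still appears")
-```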
-
-* The file [test.logger.R](https://github.com/PecanProject/pecan/blob/develop/base/logger/tests/testthat/test.logger.R) provides descriptive examples.
-* This query provides a current overview of [functions that use logging](https://github.com/PecanProject/pecan/search?q=logger&ref=cmdform).
-* Logger functions and their corresponding levels (in order of increasing level):
-  * `logger.debug` (`"DEBUG"`) -- Low-level diagnostic messages that are hidden by default. Good examples of this are expanded file paths and raw results from database queries or other analyses.
-  * `logger.info` (`"INFO"`) -- Informational messages that regular users may want to see, but which do not indicate anything unexpected. Good examples of this are progress updates for long-running processes, or brief summaries of queries or analyses.
-  * `logger.warn` (`"WARN"`) -- Warning messages about issues that may lead to unexpected but valid results. Good examples of this are interactions between arguments that lead to some arguments being ignored, or removal of missing or extreme values.
-  * `logger.error` (`"ERROR"`) -- Error messages from which PEcAn has some capacity to recover. Unless you have a very good reason, we recommend avoiding this in favor of either `logger.severe` to actually stop execution or `logger.warn` to more explicitly indicate that the problem is not fatal.
-  * `logger.severe` -- Catastrophic errors that warrant immediate termination of the workflow. This is the only function that actually stops R's execution (via `stop`).
-* The `logger.setLevel` function sets the level at which a message will be printed. For instance, `logger.setLevel("WARN")` will suppress `logger.info` and `logger.debug` messages, but will print `logger.warn` and `logger.error` messages. `logger.setLevel("OFF")` suppresses all logger messages.
-* To print all messages to the console, use `logger.setUseConsole(TRUE)`.
-
-
-
-### Package Data {#developer-packagedata}
-
-#### Summary:
-
-Files with the following extensions will be read by R as data:
-
-* plain R code in .R and .r files is sourced using `source()`
-* text tables in .tab, .txt, and .csv files are read using `read.table()`
-* objects in R image files (.RData, .rda) are loaded using `load()`
-  * capitalization matters
-  * all objects in foo.RData are loaded into the environment
-  * pro: easiest way to store objects in R format
-  * con: format is application (R) specific
-
-Details are in `?data`, which is mostly a copy of the [Data section of
-Writing R
-Extensions](http://cran.r-project.org/doc/manuals/R-exts.html#Data-in-packages).
-
-#### Accessing data
-
-Data in the [data] directory will be accessed in the following ways:
-
-* efficient way (especially for large data sets): using the `data`
-function:
-
-```r
-data(foo) # accesses data with, e.g.
-          # load(foo.RData), read(foo.csv), or source(foo.R)
-```
-
-* easy way: by adding the following line to the package DESCRIPTION:
-  *note:* this should be used with caution, or it can cause difficulty as discussed in redmine issue #1118
-
-```
-LazyData: TRUE
-```
-
-From the R help page:
-
-Currently, a limited number of data formats can be accessed using the `data` function by placing one of the following filetypes in a package's `data` directory:
-
-* files ending `.R` or `.r` are `source()`d in, with the R working
-directory changed temporarily to the directory containing the respective
-file. (`data` ensures that the `utils` package is attached, in case it
-had been run *via* `utils::data`.)
-* files ending `.RData` or `.rda` are `load()`ed.
-* files ending `.tab`, `.txt` or `.TXT` are read using `read.table(..., header = TRUE)`, and hence result in a data frame.
-* files ending `.csv` or `.CSV` are read using `read.table(..., header = TRUE, sep = ';')`, and also result in a data frame.
-
-If your data does not fall into those four categories, you can use the
-`system.file` function to get access to the data:
-
-```r
-system.file("data", "ed.trait.dictionary.csv", package = "PEcAn.utils")
-[1] "/home/kooper/R/x86_64-pc-linux-gnu-library/2.15/PEcAn.utils/data/ed.trait.dictionary.csv"
-```
-
-The arguments are the folder, the filename(s), and then the package. It will return
-the fully qualified path name to a file in a package; in this case it
-points to the trait data. This is almost the same as the `data` function,
-however we can now use any function to read the file, such as `read.csv`
-instead of `read.csv2`, which seems to be the default of `data`. This also
-allows us to store arbitrary files in the data folder, such as the
-bug file, and load them when we need them.
-
-##### Examples of data in PEcAn packages
-
-* outputs: [/modules/uncertainties/data/output.RData]
-* parameter samples: [/modules/uncertainties/data/samples.RData]
-
-
-
-### Documenting functions using `roxygen2` {#developer-roxygen}
-
-This is the standard method for documenting R functions in PEcAn.
-For detailed instructions, see one of the following resources:
-
-* `roxygen2` [package documentation](https://roxygen2.r-lib.org/)
-  - [Formatting overview](https://roxygen2.r-lib.org/articles/rd.html)
-  - [Markdown formatting](https://blog.rstudio.com/2017/02/01/roxygen2-6-0-0/)
-  - [Namespaces](https://roxygen2.r-lib.org/articles/namespace.html) (e.g. when to use `@export`)
-* From "R packages" by Hadley Wickham:
-  - [Object Documentation](http://r-pkgs.had.co.nz/man.html)
-  - [Package Metadata](http://r-pkgs.had.co.nz/description.html)
-
-Below is a complete template for a Roxygen documentation block.
-Note that roxygen lines start with `#'`:
-
-```r
-#' Function title, in a few words
-#'
-#' Function description, in 2-3 sentences.
-#'
-#' (Optional) Package details.
-#'
-#' @param argument_1 A description of the argument
-#' @param argument_2 Another argument to the function
-#' @return A description of what the function returns.
-#'
-#' @author Your name
-#' @examples
-#' \dontrun{
-#' # This example will NOT be run by R CMD check.
-#' # Useful for long-running functions, or functions that
-#' # depend on files or values that may not be accessible to R CMD check.
-#' my_function("~/user/my_file")
-#' }
-#' # This example WILL be run by R CMD check
-#' my_function(1:10, argument_2 = 5)
-## ^^ A few examples of the function's usage
-#' @export
-# ^^ Whether or not the function will be "exported" (made available) to the user.
-# If omitted, the function can only be used inside the package.
-my_function <- function(argument_1, argument_2) {...}
-```
-
-Here is a complete example from the `PEcAn.utils::days_in_year()` function:
-
-```r
-#' Number of days in a year
-#'
-#' Calculate number of days in a year based on whether it is a leap year or not.
-#'
-#' @param year Numeric year (can be a vector)
-#' @param leap_year Default = TRUE. If set to FALSE will always return 365
-#'
-#' @author Alexey Shiklomanov
-#' @return integer vector, all either 365 or 366
-#' @export
-#' @examples
-#' days_in_year(2010)      # Not a leap year -- returns 365
-#' days_in_year(2012)      # Leap year -- returns 366
-#' days_in_year(2000:2008) # Function is vectorized over years
-days_in_year <- function(year, leap_year = TRUE) {...}
-```
-
-To update documentation throughout PEcAn, run `make document` in the PEcAn root directory.
-_Make sure you do this before opening a pull request_ --
-PEcAn's automated testing (Travis) will check if any documentation is out of date and will throw an error like the following if it is:
-
-```
-These files were changed by the build process:
-{...}
-```
-
-
-
-## Testing {#developer-testing}
-
-PEcAn uses two different kinds of testing -- [unit tests](#developer-testing-unit) and [integration tests](#developer-testing-integration).
-
-### Unit testing {#developer-testing-unit}
-
-Unit tests are short (<1 minute runtime) tests of the functionality of specific functions.
-Ideally, every function should have at least one unit test associated with it.
-
-A unit test *should* be written for each of the following situations:
-
-1. Each bug should get a regression test.
-   * The first step in handling a bug is to write code that reproduces the error
-   * This code becomes the test
-   * most important when the error could re-appear
-   * essential when the error silently produces invalid results
-
-2. Every time a (non-trivial) function is created or edited
-   * Write tests that indicate how the function should perform
-     * example: `expect_equal(sum(1, 1), 2)` indicates that the `sum`
-       function should take the sum of its arguments
-   * Write tests for cases under which the function should throw an
-     error
-     * example: `expect_error(sum("foo"))`
-     * better: `expect_error(sum("foo"), "invalid 'type' (character)")`
-3. Any functionality that you would like to protect over the long term. Functionality that is not tested is more likely to be lost.
-
-PEcAn uses the `testthat` package for unit testing.
-A general overview is provided in the ["Testing"](http://r-pkgs.had.co.nz/tests.html) chapter of Hadley Wickham's book "R packages".
-Another useful resource is the `testthat` [package documentation website](https://testthat.r-lib.org/).
-See also our [`testthat` appendix](#appendix-testthat).
-Below is a lightning introduction to unit testing with `testthat`.
-
-Each package's unit tests live in `.R` scripts in the folder `<package>/tests/testthat`.
-
-In addition, a `testthat`-enabled package has a file called `<package>/tests/testthat.R` with the following contents:
-
-```r
-library(testthat)
-library(<package name>)
-
-test_check("<package name>")
-```
-
-Tests should be placed in `<package>/tests/testthat/test-<sourcefilename>.R`, and look like the following:
-
-```r
-context("Mathematical operators")
-
-test_that("mathematical operators plus and minus work as expected", {
-  sum1 <- sum(1, 1)
-  expect_equal(sum1, 2)
-  sum2 <- sum(-1, -1)
-  expect_equal(sum2, -2)
-  expect_equal(sum(1, NA), NA)
-  expect_error(sum("cat"))
-  set.seed(0)
-  expect_equal(sum(matrix(1:100)), sum(data.frame(1:100)))
-})
-
-test_that("different testing functions work, giving excuse to demonstrate", {
-  expect_identical(1, 1)
-  expect_identical(numeric(1), integer(1))
-  expect_equivalent(numeric(1), integer(1))
-  expect_warning(mean('1'))
-  expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA"))
-  expect_warning(mean('1'), "argument is not numeric or logical: returning NA")
-  expect_message(message("a"), "a")
-})
-```
-
-### Integration testing {#developer-testing-integration}
-
-Integration tests consist of running the PEcAn workflow in full.
-One way to do integration tests is to manually run workflows for a given version of PEcAn, either through the web interface or by manually creating a [`pecan.xml` file](#pecanXML).
-Such manual tests are an important part of checking PEcAn functionality.
-
-Alternatively, the [`base/workflow/inst/batch_run.R`][batch_run] script can be used to quickly run a series of user-specified integration tests without having to create a bunch of XML files.
-This script is powered by the [`PEcAn.workflow::create_execute_test_xml()`][xml_fun] function,
-which takes as input information about the model, meteorology driver, site ID, run dates, and others,
-uses these to construct a PEcAn XML file,
-and then uses the `system()` command to run a workflow with that XML.
-
-If run without arguments, `batch_run.R` will try to run the model configurations specified in the [`base/workflow/inst/default_tests.csv`][default_tests] file.
-This file contains a CSV table with the following columns:
-
-- `model` -- The name of the model (`models.model_name` column in BETY)
-- `revision` -- The version of the model (`models.revision` column in BETY)
-- `met` -- The name of the meteorology driver source
-- `site_id` -- The numeric site ID for the model run (`sites.site_id`)
-- `pft` -- The name of the plant functional type to run. If `NA`, the script will use the first PFT associated with the model.
-- `start_date`, `end_date` -- The start and end dates for the model run, respectively. These should be formatted according to the ISO standard (`YYYY-MM-DD`, e.g. `2010-03-16`)
-- `sensitivity` -- Whether or not to run the sensitivity analysis. `TRUE` means run it, `FALSE` means do not.
-- `ensemble_size` -- The number of ensemble members to run. Set this to 1 to do a single run at the trait median.
-- `comment` -- A string providing some user-friendly information about the run.
-
-The `batch_run.R` script will run a workflow for every row in the input table, sequentially (for now; eventually, it will try to run them in parallel), and at the end of each workflow will perform some basic checks, including whether or not the workflow finished and if the model produced any output.
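-
-For reference, a single test workflow can also be constructed directly from R. The sketch below is hypothetical: the argument names are assumptions based on the inputs described above, not the function's documented signature, so check the function's help page before use:
-
-```r
-# Hypothetical call to the XML-construction helper; all argument names here
-# are assumptions, matching the inputs (model, met, site, dates, ensemble
-# size) described in the text above.
-PEcAn.workflow::create_execute_test_xml(
-  model_id      = 5000000002,   # invented BETY model ID, for illustration only
-  met           = "CRUNCEP",
-  site_id       = 772,          # invented BETY site ID
-  start_date    = "2004-01-01",
-  end_date      = "2004-12-31",
-  ensemble_size = 1
-)
-```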
-These results are summarized in a CSV table (by default, a file called `test_result_table.csv`), with all of the columns of the input test CSV plus the following:
-
-- `outdir` -- Absolute path to the workflow directory.
-- `workflow_complete` -- Whether or not the PEcAn workflow completed. Note that this is a relatively low bar -- PEcAn workflows can complete without having run the model or finished some other steps.
-- `has_jobsh` -- Whether or not PEcAn was able to write the model's `job.sh` script. This is a good indication of whether or not the model's `write.configs` step was successful, and may be useful for separating model configuration errors from model execution errors.
-- `model_output_raw` -- Whether or not the model produced any output files at all. This is just a check to see if the `<outdir>/out` directory is empty or not. Note that some models may produce logfiles or similar artifacts as soon as they are executed, whether or not they ran even a single timestep, so this is not an indication of model success.
-- `model_output_processed` -- Whether or not PEcAn was able to post-process any model output. This test just sees if there are any files of the form `YYYY.nc` (e.g. `1992.nc`) in the `<outdir>/out` directory.
-
-Right now, these checks are not particularly robust or comprehensive, but they should be sufficient for catching common errors.
-Development of more, better tests is ongoing.
-
-The `batch_run.R` script can take the following command-line arguments:
-
-- `--help` -- Prints a help message about the script's arguments
-- `--dbfiles=<path>` -- The path to the PEcAn `dbfiles` folder. The default value is `~/output/dbfiles`, based on the file structure of the PEcAn VM. Note that for this and all other paths, if a relative path is given, it is assumed to be relative to the current working directory, i.e. the directory from which the script was called.
-- `--table=<path>` -- Path to an alternate test table. The default is the `base/workflow/inst/default_tests.csv` file. See the preceding paragraph for a description of the format.
-- `--userid=<id>` -- The numeric user ID for registering the workflow. The default value is 99000000002, corresponding to the guest user on the PEcAn VM.
-- `--outdir=<path>` -- Path to a directory (which will be created if it doesn't exist) for storing the PEcAn workflow outputs. Default is `batch_test_output` (in the current working directory).
-- `--pecandir=<path>` -- Path to the PEcAn source code root directory. Default is the current working directory.
-- `--outfile=<path>` -- Full path (including file name) of the CSV file summarizing the results of the runs. Default is `test_result_table.csv`. The format of the output table is described above.
-
-[batch_run]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/batch_run.R
-[default_tests]: https://github.com/pecanproject/pecan/tree/develop/base/workflow/inst/default_tests.csv
-[xml_fun]:
-
-### Continuous Integration
-
-Every time anyone commits a change to the PEcAn code, the act of pushing to GitHub triggers an automated build and test of the full PEcAn codebase, and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code.
- -At this writing PEcAn's CI builds primarily use [Travis CI](https://travis-ci.org/pecanProject/pecan) and the rest of this section assumes a Travis build, but as of May 2020 we also have an experimental GitHub Actions build, and if we switch completely to GitHub Actions then this guide will need to be rewritten. - -All our Travis builds run on the same version of Linux (currently Ubuntu 16.04, `xenial`) using four different versions of R in parallel: the two most recent previous releases, current release, and nightly builds of the R development branch. In most cases the build should pass on all four versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on older releases as developer time and forward compatibility allow. - -Each build starts by launching a separate clean virtual machine for each R version and performs roughly the following actions on all of them: - -* Installs binary system dependencies needed by PEcAn (NetCDF and HDF5 libraries, JAGS, udunits, etc). -* Installs all the R packages that are declared as dependencies in any PEcAn package, as computed by `scripts/generate_dependencies.R`. -* Clones the PEcAn repository from GitHub, and checks out the branch to be tested. -* Retrieves any cached files available from previous Travis builds. - - The main thing in the cache is previously-installed dependency R packages, to avoid recompiling them every time. - - If the cache becomes stale or is preventing a package update needed by the build (e.g. to get a new version that contains a needed bug fix), delete the cache through the Travis web interface and it will be reconstructed on the next build. - - Because PEcAn has so many dependencies, builds with no cache will spend most of their time recompiling packages and will probably run out of time before the tests complete. You can fix this by using `scripts/travis/cache_buildup.sh` and `scripts/travis/prime_travis_cache.sh` to build up the cache incrementally through one or more [custom builds](https://blog.travis-ci.com/2017-08-24-trigger-custom-build), each of which installs some dependencies and then uploads a freshened cache *without* running any tests. Once all dependencies have been cached, restart the standard full build. -* Initializes a skeleton version of the PEcAn database (BeTY) containing a few public records to be used by the test runs. -* Installs all the PEcAn packages, recompiling documentation and installing dependencies as needed. -* Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`). - - As discussed in [Unit testing](#developer-testing-unit), these tests should run quickly and test individual components in relative isolation. - - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time (e.g. large data product downloads) or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code! -* Runs R package checks (the same ones you run locally with `make check` or `devtools::check(pkgname)`), skipping tests and documentation rebuild because we just did those in the previous steps. - - Any ERROR in the check output will stop the build immediately. - - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. 
If the package has no stored reference result, all WARNINGs and NOTEs are considered newly added and reported as build failures.
-  - If all messages from the current build were also present in the reference result, the check passes. If any messages are newly added, a build failure is reported.
-  - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it!
-  - The idea here is to enforce good coding practice and catch likely errors in all new code while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once.
-  - As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear.
-  - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it as part of your PR anyway. It's frustrating to see tests complain about code you didn't touch, but the failures all need to be cleaned up eventually and it's likely easier to fix the error than to figure out how to re-ignore it.
-* Runs a simple PEcAn workflow from beginning to end (three years of SIPNET at Niwot Ridge using stored weather data, meta-analysis on fixed effects only, sensitivity analysis on NPP), and verifies that the models complete successfully.
-* Checks whether any version-controlled files have been changed by the build and testing process, and marks a build failure if they have.
-  - If your build fails at this step, the most common cause is that you forgot to Roxygenize before committing.
-  - This step will also detect newly added files, e.g. tests improperly writing to the current working directory rather than `tempdir()` and then failing to clean up after themselves.
-
-If any of these steps reports an error, the build is marked as "broken" and stops before the next step. If they all pass, the Travis CI bot marks the build as successful and tells the GitHub bot that it's OK to allow your changes to be merged... but the final decision belongs to the human reviewing your code and they might still ask you for other changes!
-
-After a successful build, Travis performs two post-build steps:
-
-* Compiles the PEcAn documentation book (`book_source`) and the tutorials (`documentation/tutorials`) and uploads them to the [PEcAn website](https://pecanproject.github.io/pecan-documentation).
-  - This is only done for commits to the `master` or `develop` branch, so changes to in-progress pull requests never change the live documentation until after they are merged.
-* Packs up selected build artifacts into a cache file and uploads it to the Travis servers for use on the next build.
-
-The post-build steps are allowed to fail without breaking the build. If you made documentation changes but don't see them deployed, or if your build seems to be reinstalling packages that ought to be cached, inspect the Travis logs of the previous supposedly-successful build to see if their uploads succeeded.
-
-All of the above descriptions apply to the build Travis generates when you push to the main `PecanProject/pecan` repository, either by directly pushing to a branch or by opening a pull request.
If you like, you can also [enable Travis builds](https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci) from your own PEcAn fork. This can be useful for several reasons:
-
-* It lets you test whether your changes worked before you open a pull request.
-* It often lets you get faster test results: The PEcAn project uses Travis CI's free tier, which only allows a few simultaneous build jobs per repository. If many people are pushing code at the same time, your build might wait in line for a long time at `PecanProject/pecan` but start immediately at `yourname/pecan`.
-* If you will be editing the documentation a lot and want to see rendered previews of your in-progress work (instead of waiting until it is merged into develop), you can clone the [pecan-documentation](https://github.com/PecanProject/pecan-documentation) repository to your own GitHub account and let Travis update it for you.
-
-
-
-## Download and Compile PEcAn
-
-Set `R_LIBS_USER`
-
-[CRAN Reference](http://cran.r-project.org/doc/manuals/r-devel/R-admin.html#Managing-libraries)
-
-```bash
-# point R to personal lib folder
-echo 'export R_LIBS_USER=${HOME}/R/library' >> ~/.profile
-source ~/.profile
-mkdir -p ${R_LIBS_USER}
-```
-
-
-### Download, compile and install PEcAn from GitHub
-
-```bash
-# download pecan
-cd
-git clone https://github.com/PecanProject/pecan.git
-
-# compile pecan
-cd pecan
-make
-```
-For more information on the capabilities of the PEcAn Makefile, check out our section on [Updating PEcAn].
-
-The following will run a small script that sets up git hooks to prevent people from using the pecan demo user account to check in any code.
-
-```bash
-# prevent pecan user from checking in code
-./scripts/create-hooks.sh
-```
-
-### PEcAn Testrun
-
-Do the run. This assumes you have [installed the BETY database][Installing BETY], the [sites][Site Information] tar file, and [SIPNET].
-
-```bash
-# create folder
-cd
-mkdir testrun.pecan
-cd testrun.pecan
-
-# copy example of pecan workflow and configuration file
-cp ../pecan/tests/pecan32.sipnet.xml pecan.xml
-cp ../pecan/scripts/workflow.R workflow.R
-
-# execute workflow
-rm -rf pecan
-./workflow.R pecan.xml
-```
-
-NB: pecan.xml is configured for the virtual machine; you will need to change the fields pointing to '/home/carya/' to wherever you installed your 'sites', usually $HOME
-
-
-
-## Directory structure
-
-### Overview of PEcAn repository as of PEcAn 1.5.3
-
-
-```
-pecan/
- +- base/            # Core functions
-     +- all          # Dummy package to load all PEcAn R packages
-     +- db           # Modules for querying the database
-     +- logger       # Report warnings without killing workflows
-     +- qaqc         # Model skill testing and integration testing
-     +- remote       # Communicate with and execute models on local and remote hosts
-     +- settings     # Functions to read and manipulate PEcAn settings files
-     +- utils        # Misc. utility functions
-     +- visualization # Advanced PEcAn visualization module
-     +- workflow     # functions to coordinate analysis steps
- +- book_source/     # Main documentation and developer's guide
- +- CHANGELOG.md     # Running log of changes in each version of PEcAn
- +- docker/          # Experimental replacement for PEcAn virtual machine
- +- documentation    # index_vm.html, references, other misc.
- +- models/          # Wrappers to run models within PEcAn
-     +- ed/          # Wrapper scripts for running ED within PEcAn
-     +- sipnet/      # Wrapper scripts for running SIPNET within PEcAn
-     +- ...          # Wrapper scripts for running [...]
within PEcAn
-     +- template/    # Sample wrappers to copy and modify when adding a new model
- +- modules          # Core modules
-     +- allometry
-     +- data.atmosphere
-     +- data.hydrology
-     +- data.land
-     +- meta.analysis
-     +- priors
-     +- rtm
-     +- uncertainty
-     +- ...
- +- scripts          # R and Shell scripts for use with PEcAn
- +- shiny/           # Interactive visualization of model results
- +- tests/           # Settings files for host-specific integration tests
- +- web              # Main PEcAn website files
-```
-
-
-### Generic R package structure:
-
-See the R development wiki for more information on writing code and adding data.
-
-```
- +- DESCRIPTION      # short description of the PEcAn library
- +- R/               # location of R source code
- +- man/             # Documentation (automatically compiled by Roxygen)
- +- inst/            # files to be installed with package that aren't R functions
-     +- extdata/     # misc. data files (in misc. formats)
- +- data/            # data used in testing and examples (saved as *.RData or *.rda files)
- +- NAMESPACE        # declaration of package imports and exports (automatically compiled by Roxygen)
- +- tests/           # PEcAn testing scripts
-     +- testthat/    # nearly all tests should use the testthat framework and live here
-```
-
-
-
-# (PART) Topical Pages {-}
-
-
-
-
-
-# VM configuration and maintenance {#working-with-vm}
-
-## Updating the VM {#maintain-vm}
-
-
-
-The PEcAn VM is distributed with specific versions of PEcAn compiled. However, you do not need to constantly download the VM in order to update your code and version of BETY. To update and maintain your code you can follow the steps found in the Developer section in [Updating PecAn and Code and BETY Database](#updatebety). However, if you are using the latest version of the PEcAn VM (>=1.7) you have the Dockerized version of PEcAn and need to follow the instructions under the [DOCKER](#docker-index) section to update your PEcAn and BETY containers.
-
-## Connecting to the VM via SSH {#ssh-vm}
-
-Once the VM is running anywhere on your machine, you can connect to it from a separate terminal via SSH as follows:
-
-```sh
-ssh -p 6422 carya@localhost
-```
-
-You will be prompted for a password. Like everywhere else in PEcAn, the username is `carya` and the password is `illinois`. The same password is used for any system maintenance you wish to do on the VM via `sudo`.
-
-As a shortcut, you can add the following to your `~/.ssh/config` file (or create one if it does not exist).
-
-```
-Host pecan-vm
-    Hostname localhost
-    Port 6422
-    User carya
-    ForwardX11Trusted yes
-```
-
-This will allow you to SSH into the VM with the simplified command, `ssh pecan-vm`.
-
-## Connecting to BETYdb on the VM via SSH {#ssh-vm-bety}
-
-Sometimes, you may want to develop code locally but connect to an instance of BETYdb on the VM.
-To do this, first open a new terminal and connect to the VM while enabling port forwarding (with the `-L` flag) and setting the port number. Using 5433 avoids the PostgreSQL default port of 5432, so the forwarded port will not conflict with a PostgreSQL database server running locally.
-
-```
-ssh -p 6422 -L 5433:localhost:5432 carya@localhost
-```
-
-This makes port 5433 on the local machine match port 5432 on the VM.
-This means that connecting to `localhost:5433` will give you access to BETYdb on the VM.
-
-To test this on the command line, try the following command, which, if successful, will drop you into the `psql` console.
-
-```
-psql -d bety -U bety -h localhost -p 5433
-```
-
-To test this in R, open a PostgreSQL connection using the analogous parameters:
-
-```
-library(RPostgres)
-con <- dbConnect(
-  Postgres(),
-  user = "bety",
-  password = "bety",
-  dbname = "bety",
-  host = "localhost",
-  port = 5433
-)
-dbListTables(con)  # This should return a vector of bety tables
-```
-
-Note that the same general approach will work on any BETYdb server where port forwarding is enabled, but it requires ssh access.
-
-
-### Using Amazon Web Services for a VM (AWS) {#awsvm}
-
-Log in to [Amazon Web Services (AWS)](http://console.aws.amazon.com/) and select the EC2 Dashboard. If this is your first time using AWS you will need to set up an account before you are able to access the EC2 Dashboard. Important: You will need a credit card number and access to a phone to be able to verify AWS account registration. AWS is free for one year.
-
-1. Choose AMI
-+ On the top right next to your name, make sure the location setting is on U.S. East (N. Virginia), not U.S. West (Oregon)
-+ On the left, click on EC2 (Virtual servers), then click on “AMIs”, also on the left
-+ In the search window toggle to change “Owned by me” to “Public images”
-+ Type “pecan” into the search window
-+ Click on the toggle button on the left next to PEcAn1.4.6
-+ Click on the “Launch” button at the top
-2. Choose an Instance Type
-+ Select what type of machine you want to run. For this demo the default, t2.micro, will be adequate. Be aware that different machine types incur very different costs, from 1.3 cents/hour to over \$5/hr https://aws.amazon.com/ec2/pricing/
-+ Select t2.micro, then click “Next: Configure Instance Details”
-3. Configure Instance Details
-+ The defaults are OK. Click “Next: Add Storage”
-4. Add Storage
-+ The defaults are OK. Click “Next: Tag Instance”
-5. Tag Instance
-+ You can name your instance if you want. Click “Next: Configure Security Group”
-6. Configure Security Group
-+ You will need to add two new rules:
-+ Click “Add Rule” then select “HTTP” from the pull down menu. This rule allows you to access the webserver on PEcAn.
-+ Click “Add Rule”, leave the pull down on “Custom TCP Rule”, and then change the Port Range from 0 to 8787. Set “Source” to Anywhere. This rule allows you to access RStudio Server on PEcAn.
-+ Click “Review and Launch”. You will then see this pop-up:
-
-```{r, echo=FALSE,fig.align='center'}
-knitr::include_graphics("figures/pic2.jpg")
-```
-
-Select the default drive volume type and click Next
-
-7. Review and Launch
-+ Review the settings and then click “Launch”, which will pop up a select/create Key Pair window.
-8. Key Pair
-+ Select “Create a new key pair” and give it a name. You won’t actually need this key unless you need to SSH into your PEcAn server, but AWS requires you to create one. Click on “Download Key Pair” then on “Launch Instances”. Next click on “View Instances” at the bottom of the following page.
-
-
-```{r, echo=FALSE,fig.align='center'}
-knitr::include_graphics("figures/pic3.jpg")
-```
-
-9. Instances
-+ You will see the status of your PEcAn VM, which will take a minute to boot up. Wait until the Instance State reads “running”. The most important piece of information here is the Public IP, which is the URL you will need in order to access your PEcAn instance from within your web browser (see Demo 1 below).
-+ Be aware that it often takes ~1 hr for AWS instances to become fully operational, so if you get an error when you put the Public IP in your web browser, most of the time you just need to wait a bit longer.
-Congratulations! You just started a PEcAn server in the “cloud”!
-
-10. When you are done using PEcAn, you will want to return to the “Instances” menu to turn off your VM.
-+ To STOP the instance (which will turn the machine off but keep your work), select your PEcAn instance and click Actions > Instance state > Stop. Be aware that a stopped instance will still accrue a small storage cost on AWS. To restart this instance at any point in the future, you do not need to repeat the steps above; just select your instance and then click Actions > Instance state > Start.
-+ To TERMINATE the instance (which will DELETE your PEcAn machine), select your instance and click Actions > Instance state > Terminate. Terminated instances will not incur costs. In most cases you will also want to go to the Volumes menu and delete the storage associated with your PEcAn VM. Remember, AWS is free for one year, but will automatically charge a fee in the second year if the account is not cancelled.
-
-
-### Creating a Virtual Machine {#createvm}
-
-First, create the virtual machine:
-
-```
-# ----------------------------------------------------------------------
-# CREATE VM USING FOLLOWING:
-# - VM NAME  = PEcAn
-# - CPU      = 2
-# - MEMORY   = 2GB
-# - DISK     = 100GB
-# - HOSTNAME = pecan
-# - FULLNAME = PEcAn Demo User
-# - USERNAME = xxxxxxx
-# - PASSWORD = yyyyyyy
-# - PACKAGE  = openssh
-# ----------------------------------------------------------------------
-```
-
-To enable tunnels, run the following on the host machine:
-
-```bash
-VBoxManage modifyvm "PEcAn" --natpf1 "ssh,tcp,,6422,,22"
-VBoxManage modifyvm "PEcAn" --natpf1 "www,tcp,,6480,,80"
-```
-
-Make sure the machine is up to date.
-
-UBUNTU
-```bash
-sudo apt-get update
-sudo apt-get -y dist-upgrade
-sudo reboot
-```
-
-CENTOS/REDHAT
-```bash
-sudo yum -y update
-sudo reboot
-```
-
-Install the compiler and other needed packages and tools.
-
-UBUNTU
-```bash
-sudo apt-get -y install build-essential linux-headers-server dkms
-```
-
-CENTOS/REDHAT
-```bash
-sudo yum -y groupinstall "Development Tools"
-sudo yum -y install wget
-```
-
-Install the VirtualBox additions for better integration
-
-```bash
-sudo mount /dev/cdrom /mnt
-sudo /mnt/VBoxLinuxAdditions.run
-sudo umount /mnt
-sudo usermod -a -G vboxsf carya
-```
-
-**Finishing up the machine**
-
-**Add a message to the login:**
-
-```bash
-sudo -s
-export PORT=$( hostname | sed 's/pecan//' )
-cat > /etc/motd << EOF
-PEcAn version 1.4.3
-
-For more information about:
-Pecan - http://pecanproject.org
-BETY - http://www.betydb.org
-
-For a list of all models currently supported navigate [here](../users_guide/basic_users_guide/models_table.md)
-
-
-You can access this system using a webbrowser at
-  http://<hostname>:${PORT}80/
-or using SSH at
-  ssh -l carya -p ${PORT}22 <hostname>
-where <hostname> is the machine where the VM runs on.
-EOF
-exit
-```
-
-**Finishing up**
-
-Script to clean the VM and remove as much history as possible: [cleanvm.sh](http://isda.ncsa.uiuc.edu/~kooper/EBI/cleanvm.sh)
-
-```bash
-wget -O ~/cleanvm.sh http://isda.ncsa.uiuc.edu/~kooper/EBI/cleanvm.sh
-chmod 755 ~/cleanvm.sh
-```
-
-Make sure the machine has SSH keys: [rc.local](http://isda.ncsa.illinois.edu/~kooper/EBI/rc.local)
-
-```bash
-sudo wget -O /etc/rc.local http://isda.ncsa.illinois.edu/~kooper/EBI/rc.local
-```
-
-Change the resolution of the console:
-
-```bash
-sudo sed -i -e 's/#GRUB_GFXMODE=640x480/GRUB_GFXMODE=1024x768/' /etc/default/grub
-sudo update-grub
-```
-
-Once all is done, stop the virtual machine:
-```bash
-history -c && ${HOME}/cleanvm.sh
-```
-
-
-
-# PEcAn standard formats {#pecan-standards}
-
-## Defining new input formats
-
-* New formats can be defined on the ['formats' page of BETYdb](http://betydb.org/formats)
-* After creating a new format, the contents should be defined by specifying the BETYdb variable name and the name used in the file.
-
-## Time Standard
-Internal PEcAn standard time follows the ISO_8601 format for dates and time (https://en.wikipedia.org/wiki/ISO_8601). For example, ordinal dates go from 1 to 365/366 (https://en.wikipedia.org/wiki/ISO_8601#Ordinal_dates). However, time used in met drivers or model outputs follows the CF convention, with Julian dates following the 0 to 364/365 format.
-
-To aid in the conversion between the PEcAn internal ISO_8601 standard and the CF convention used in all met drivers and PEcAn standard output, you can utilize the functions `cf2datetime`, `datetime2doy`, and `cf2doy`, and for SIPNET `sipnet2datetime`.
-
-### Input Standards
-
-#### Meteorology Standards
-
-##### Dimensions:
-
-
-| CF standard-name | units                              |
-|:-----------------|:-----------------------------------|
-| time             | days since 1700-01-01 00:00:00 UTC |
-| longitude        | degrees_east                       |
-| latitude         | degrees_north                      |
-
-General Note: dates in the database should be date-time (preferably with timezone), and datetime passed around in PEcAn should be of type POSIXct.
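-
-For instance, a CF time coordinate of `days since 1700-01-01 00:00:00 UTC` can be converted to the POSIXct values PEcAn passes around with base R alone; this is essentially what conversion helpers like `cf2datetime` do internally (a sketch, not the exact implementation):
-
-```r
-# Two timesteps on a CF axis "days since 1700-01-01 00:00:00 UTC"
-days_since <- c(109572.5, 109573.5)
-
-# 86400 seconds per day; both values fall at noon UTC in early January 2000
-as.POSIXct(days_since * 86400, origin = "1700-01-01 00:00:00", tz = "UTC")
-# [1] "2000-01-01 12:00:00 UTC" "2000-01-02 12:00:00 UTC"
-```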
-
-##### The variable names should be `standard_name`
-```{r, echo=FALSE,warning=FALSE,message=FALSE}
-names <- c("air_temperature", "air_temperature_max", "air_temperature_min", "air_pressure",
-           "mole_fraction_of_carbon_dioxide_in_air", "moisture_content_of_soil_layer", "soil_temperature",
-           "relative_humidity", "specific_humidity", "water_vapor_saturation_deficit",
-           "surface_downwelling_longwave_flux_in_air",
-           "surface_downwelling_shortwave_flux_in_air", "surface_downwelling_photosynthetic_photon_flux_in_air",
-           "precipitation_flux", " ", "wind_speed", "eastward_wind", "northward_wind")
-
-units <- c("K", "K", "K", "Pa", "mol/mol", "kg m-2", "K", "%", "1", "Pa", "W m-2", "W m-2", "mol m-2 s-1",
-           "kg m-2 s-1", "degrees", "m/s", "m/s", "m/s")
-
-bety <- c("airT", "", "", "air_pressure", "", "", "soilT", "relative_humidity", "specific_humidity", "VPD",
-          "same", "solar_radiation", "PAR", "cccc", "wind_direction", "Wspd", "eastward_wind", "northward_wind")
-
-isimip <- c("tasAdjust", "tasmaxAdjust", "tasminAdjust", "", "", "", "", "rhurs", "NA", "", "rldsAdjust",
-            "rsdsAdjust", "", "prAdjust", "", "", "", "")
-cruncep <- c("tair", "NA", "NA", "", "", "", "", "NA", "qair", "", "lwdown", "swdown", "", "rain", "", "", "", "")
-narr <- c("air", "tmax", "tmin", "", "", "", "", "rhum", "shum", "", "dlwrf", "dswrf", "", "acpc", "", "", "", "")
-ameriflux <- c("TA(C)", "", "", "PRESS (KPa)", "CO2", "", "TS1(NOT DONE)",
-               "RH", "CALC(RH)", "VPD(NOT DONE)", "Rgl", "Rg", "PAR(NOT DONE)", "PREC", "WD", "WS",
-               "CALC(WS+WD)", "CALC(WS+WD)")
-
-in_tab <- cbind(names, units, bety, isimip, cruncep, narr, ameriflux)
-colnames(in_tab) <- c("CF standard-name", "units", "BETY", "Isimip", "CRUNCEP", "NARR", "Ameriflux")
-if (require("DT")) {
-  datatable(in_tab, extensions = c('FixedColumns', "Buttons"),
-            options = list(
-              dom = 'Bfrtip',
-              scrollX = TRUE,
-              fixedColumns = TRUE,
-              buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
-            ))
-}
-```
-
-* preferred variables indicated in bold
-* wind_direction has no CF equivalent and should not be converted; instead, the met2CF functions should convert wind_direction and wind_speed to eastward_wind and northward_wind
-* standard_name is the CF-convention standard name
-* units can be converted by udunits, so these can vary (e.g. the time denominator may change with time frequency of inputs)
-* soil moisture for the full column, rather than a layer, is soil_moisture_content
-* A full list of PEcAn standard variable names, units and dimensions can be found here: https://github.com/PecanProject/pecan/blob/develop/base/utils/data/standard_vars.csv
-
-
-For example, in the [MsTMIP-CRUNCEP](https://www.betydb.org/inputs/280) data, the variable `rain` should be `precipitation_rate`.
-We want to standardize the units as well as part of the `met2CF.*` step. I believe we want to use the CF "canonical" units but retain the MsTMIP units any time CF is ambiguous about the units.
-
-The key is to process each type of met data (site, reanalysis, forecast, climate scenario, etc) to the exact same standard. This way every operation after that (extract, gap fill, downscale, convert to a model, etc) will always have the exact same inputs. This will make everything else much simpler to code and allow us to avoid a lot of unnecessary data checking, tests, etc being repeated in every downstream function.
-
-### Soils and Vegetation Inputs
-
-##### Soil Data
-
-Check out the [Soil Data] section for more info on creating a standard soil data file.
-
-##### Vegetation Data
-
-Check out the [Vegetation Data] section for more info on creating a standard vegetation data file.
-
-
-
-### Output Standards
-
-* created by `model2netcdf` functions
-* based on format used by [MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml)
-* Can be seen at HERE
-
-We originally used the [MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml) conventions. Since then, we've added the PaLEON variable conventions to our standard as well. If a variable isn't in one of those two, we stick to the CF conventions.
-
-```{r, echo=FALSE,warning=FALSE,message=FALSE}
-data("standard_vars", package = "PEcAn.utils")
-if (require("DT")) {
-  DT::datatable(standard_vars,
-                extensions = c('FixedColumns', "Buttons"),
-                options = list(
-                  dom = 'Bfrtip',
-                  scrollX = TRUE,
-                  fixedColumns = TRUE,
-                  buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
-                ))
-}
-```
-
-
-
-# The PEcAn XML {#pecanXML}
-
-The PEcAn system is configured using an XML file, often called `pecan.xml`.
-It contains the following major sections ("nodes"):
-
-- [Core configuration](#xml-core-config)
-  - [Top level structure](#xml-structure)
-  - [`info`](#xml-info) -- Run metadata
-  - [`outdir`](#xml-outdir) -- Output directory
-  - [`database`](#xml-database) -- PEcAn database settings
-  - [`pft`](#xml-pft) -- Plant functional type selection
-  - [`meta.analysis`](#xml-meta-analysis) -- Trait meta analysis
-  - [`model`](#xml-model) -- Model configuration
-  - [`run`](#xml-run) -- Run setup
-  - [`host`](#xml-host) -- Host information for remote execution
-- [Advanced features](#xml-advanced)
-  - [`ensemble`](#xml-ensemble) -- Ensemble runs
-  - [`sensitivity.analysis`](#xml-sensitivity-analysis) -- Sensitivity analysis
-  - [`parameter.data.assimilation`](#xml-parameter-data-assimilation) -- Parameter data assimilation
-  - [`multi.settings`](#xml-multi-settings) -- Multi Site Settings
-  - (experimental) [`state.data.assimilation`](#xml-state-data-assimilation) -- State data assimilation
-  - (experimental) [`browndog`](#xml-browndog) -- Brown Dog configuration
-  - (experimental) [`benchmarking`](#xml-benchmarking) -- Benchmarking
-
-A basic example looks like this:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<pecan>
-  <info>
-    <notes>Example run</notes>
-    <userid>-1</userid>
-    <username>guestuser</username>
-    <date>2018/09/18 19:12:28 +0000</date>
-  </info>
-  <outdir>/data/workflows/PEcAn_99000000006</outdir>
-  <database>
-    <bety>
-      <user>bety</user>
-      <password>bety</password>
-      <host>postgres</host>
-      <dbname>bety</dbname>
-      <driver>PostgreSQL</driver>
-      <write>true</write>
-    </bety>
-    <dbfiles>/data/dbfiles</dbfiles>
-  </database>
-  <pfts>
-    <pft>
-      <name>tundra.grasses</name>
-      <constants>
-        <num>1</num>
-      </constants>
-    </pft>
-  </pfts>
-  <meta.analysis>
-    <iter>3000</iter>
-    <random.effects>
-      <on>FALSE</on>
-      <use_ghs>TRUE</use_ghs>
-    </random.effects>
-  </meta.analysis>
-  <ensemble>
-    <size>1</size>
-    <variable>NPP</variable>
-    <samplingspace>
-      <parameters>
-        <method>uniform</method>
-      </parameters>
-      <met>
-        <method>sampling</method>
-      </met>
-    </samplingspace>
-  </ensemble>
-  <model>
-    <id>5000000002</id>
-  </model>
-  <workflow>
-    <id>99000000006</id>
-  </workflow>
-  <run>
-    <site>
-      <id>1000000098</id>
-      <met.start>2004/01/01</met.start>
-      <met.end>2004/12/31</met.end>
-    </site>
-    <inputs>
-      <met>
-        <source>CRUNCEP</source>
-        <output>SIPNET</output>
-      </met>
-    </inputs>
-    <start.date>2004/01/01</start.date>
-    <end.date>2004/12/31</end.date>
-  </run>
-  <host>
-    <name>localhost</name>
-    <rabbitmq>
-      <uri>amqp://guest:guest@rabbitmq:5672/%2F</uri>
-      <queue>SIPNET_136</queue>
-    </rabbitmq>
-  </host>
-</pecan>
-```
-
-In the following sections, we step through each of these sections in detail.
-
-## Core configuration {#xml-core-config}
-
-### Top-level structure {#xml-structure}
-
-The first line of the XML file should contain version and encoding information.
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-```
-
-The rest of the XML file should be surrounded by `<pecan>...</pecan>` tags.
-
-```xml
-<pecan>
-  ...XML body here...
-</pecan>
-```
-
-### `info`: Run metadata {#xml-info}
-
-This section contains run metadata.
-This information is not essential to a successful model run, but is useful for tracking run provenance.
-
-```xml
-<info>
-  <notes>Example run</notes>
-  <userid>-1</userid>
-  <username>guestuser</username>
-  <date>2018/09/18 19:12:28 +0000</date>
-</info>
-```
-
-The `<notes>` tag will be filled in by the web GUI if you provide notes, or you can add notes yourself within these tags. We suggest adding notes that help identify your run and a brief description of what the run is for.
Because these notes are searchable within the PEcAn database and web interface, they can be a useful way to distinguish between similar runs.
-
-The `<userid>` and `<username>` sections are filled in from the GUI if you are signed in. If you are not using the GUI, add the user name and ID you are associated with that exists within the PEcAn database.
-
-The `<date>` tag is filled automatically at the time of your run from the GUI. If you are not using the GUI, add the date you execute the run. This tag is not the tag for the dates you would like to run your model simulation.
-
-### `outdir`: Output directory {#xml-outdir}
-
-The `<outdir>` tag is used to configure the output folder used by PEcAn.
-This is the directory where all model input and output files will be stored.
-By default, the web interface names this folder `PEcAn_<workflow ID>`, and its higher-level location is set by the `$output_folder$` variable in the `web/config.php` file.
-If no `outdir` is specified, PEcAn defaults to the working directory from which it is called, which may be counterintuitive.
-
-```xml
-<outdir>/data/workflows/PEcAn_99000000006</outdir>
-```
-
-### `database`: PEcAn database settings {#xml-database}
-
-#### `bety`: PEcAn database (Bety) configuration {#xml-bety}
-
-The `bety` tag defines the driver to use to connect to the database (the only driver we support, and the default, is `PostgreSQL`) and parameters required to connect to the database. Note that connection parameters are passed *exactly* as entered to the underlying R database driver, and any invalid or extra parameters will result in an error.
-
-In other words, this configuration...
-
-```xml
-<database>
-  ...
-  <bety>
-    <user>bety</user>
-    <password>bety</password>
-    <host>postgres</host>
-    <dbname>bety</dbname>
-    <driver>PostgreSQL</driver>
-    <write>true</write>
-  </bety>
-  ...
-</database>
-```
-
-...will be translated into R code like the following:
-
-```r
-con <- DBI::dbConnect(
-  DBI::dbDriver("PostgreSQL"),
-  user = "bety",
-  password = "bety",
-  dbname = "bety",
-  host = "postgres",
-  write = TRUE
-)
-```
-
-Common parameters are described as follows:
-
-* `driver`: The driver to use to connect to the database. This should always be set to `PostgreSQL`, unless you absolutely know what you're doing.
-* `dbname`: The name of the database (formerly `name`), corresponding to the `-d` argument to `psql`. In most cases, this should be set to `bety`, and will only be different if you named your Bety instance something else (e.g. if you have multiple instances running at once). If unset, it will default to the user name of the current user, which is usually wrong!
-* `user`: The username to connect to the database (formerly `userid`), corresponding to the `-U` argument to `psql`. The default value is the username of the current logged-in user (PostgreSQL uses `user` for this field).
-* `password`: The password to connect to the database (formerly `passwd`), which `psql` asks for via its password prompt. If unspecified, no password is used. On standard PEcAn installations, the username and password are both `bety` (all lowercase).
-* `host`: The hostname of the `bety` database, corresponding to the `-h` argument to `psql`. On the VM, this will be `localhost` (the default). If using `docker`, this will be the name of the PostgreSQL container, which is `postgres` if using our standard `docker-compose`. If connecting to the PEcAn database on a remote server (e.g. `psql-pecan.bu.edu`), this should be the same as the hostname used for `ssh` access.
-* `write`: Logical. If `true` (the default), write results to the database. If `false`, PEcAn will run but will not store any information to `bety`.
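-
-As a quick sanity check that these parameters work outside of a full PEcAn workflow, you can open a connection with `PEcAn.DB` and issue a test query -- a minimal sketch, assuming a reachable BETYdb with the settings shown above:
-
-```r
-library(PEcAn.DB)
-
-# the same parameters as the <bety> block above, as an R list
-params <- list(user = "bety", password = "bety", host = "postgres",
-               dbname = "bety", driver = "PostgreSQL")
-
-con <- db.open(params)                                # wraps the DBI connection shown above
-db.query("SELECT count(*) FROM traits;", con = con)   # any simple query works as a test
-db.close(con)
-```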
-
-When using the web interface, this section is configured by the `web/config.php` file.
-The default `config.php` settings on any given platform (VM, Docker, etc.) or in example files (e.g. `config.php.example`) are a good place to get default values for these fields if writing `pecan.xml` by hand.
-
-Key R functions using these parameters are as follows:
-
-- `PEcAn.DB::db.open` -- Open a database connection and create a connection object, which is used by many other functions for communicating with the PEcAn database.
-
-#### `dbfiles`: Location of database files {#xml-dbfiles}
-
-The `dbfiles` is a path to the local location of files needed to run models using PEcAn, including model executables and inputs.
-
-```xml
-<database>
-  ...
-  <dbfiles>/data/dbfiles</dbfiles>
-  ...
-</database>
-```
-
-#### (Experimental) `fia`: FIA database connection parameters {#xml-fia}
-
-If a version of the FIA database is available, it can be configured using the `<fia>` node, whose syntax is identical to that of the `<bety>` node.
-
-```xml
-<database>
-  ...
-  <fia>
-    <dbname>fia5data</dbname>
-    <user>bety</user>
-    <password>bety</password>
-    <host>localhost</host>
-  </fia>
-  ...
-</database>
-```
-
-Currently, this is only used for extraction of specific site vegetation information (notably, for ED2 `css`, `pss`, and `site` files).
-Stability is not ensured as of 1.5.3.
-
-### `pft`: Plant functional type selection {#xml-pft}
-
-The PEcAn system requires at least 1 plant functional type (PFT) to be specified inside the `<pfts>` section.
-
-```xml
-<pfts>
-  <pft>
-    <name>tundra.grasses</name>
-    <constants>
-      <num>1</num>
-    </constants>
-    <posterior.files>Path to a post.distns.*.Rdata or prior.distns.Rdata</posterior.files>
-  </pft>
-</pfts>
-```
-
-* `name` : (required) the name of the PFT, which must *exactly* match the name in the PEcAn database.
-* `outdir`: (optional) Directory path in which PFT-specific output will be stored during meta-analysis and sensitivity analysis. If not specified (recommended), it will be written into `<outdir>/<pft name>`.
-* `constants`: (optional) this section contains information that will be written directly into the model-specific configuration files. For example, some models like ED2 use PFT numbers instead of names for PFTs, and those numbers can be specified here. See documentation for model-specific code for details.
-* `posterior.files`: (optional) this tag helps to signal write.config functions to use specific posterior/prior files (such as HPDA or MA analysis) for generating samples without needing access to the bety database.
-
-This information is currently used by the following PEcAn workflow function:
-
-- `get.traits` - ??????
-
-### `meta.analysis`: Trait Meta Analysis {#xml-meta-analysis}
-
-The section meta.analysis needs to exist for a meta.analysis to be executed, even though all tags inside are optional.
-Conversely, if you do not want to do a trait meta-analysis (e.g. if you want to manually set all parameters), you should omit this node.
-
-```xml
-<meta.analysis>
-  <iter>3000</iter>
-  <random.effects>
-    <on>FALSE</on>
-    <use_ghs>TRUE</use_ghs>
-  </random.effects>
-</meta.analysis>
-```
-
-Some of the tags that can go in this section are:
-
-* `iter`: [MCMC](http://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) (Markov Chain Monte Carlo) chain length, i.e. the total number of posterior samples in the meta-analysis; default is 3000. Smaller numbers will run faster but produce larger errors.
-* `random.effects`: Settings related to whether to include random effects (site, treatment) in the meta-analysis model.
-  - `on`: Default is set to FALSE to work around convergence problems caused by an over-parameterized model (e.g. too many sites, not enough data). Can be turned to TRUE for including hierarchical random effects.
-  - `use_ghs`: Default is set to TRUE to include greenhouse measurements.
Can be set to FALSE to exclude cases where all data is from the greenhouse.
-* `update`: Should previous results of meta.analysis and get.traits be re-used. If set to TRUE the meta-analysis and get.trait.data will always be executed. Setting this to FALSE will try and reuse existing results. Future versions will allow for AUTO as well, which will try and reuse if the PFT/traits have not changed. The default value is FALSE.
-* `threshold`: threshold for the Gelman-Rubin convergence diagnostic (MGPRF); default is 1.2.
-
-This information is currently used by the following PEcAn workflow function:
-
-- `PEcAn.MA::run.meta.analysis` - ???
-
-### `model`: Model configuration {#xml-model}
-
-This section describes which model PEcAn should run and some instructions for how to run it.
-
-```xml
-<model>
-  <id>7</id>
-  <type>ED2</type>
-  <binary>/usr/local/bin/ed2.r82</binary>
-  <job.sh>module load hdf5</job.sh>
-</model>
-```
-
-Some important tags are as follows:
-
-- `id` -- The unique numeric ID of the model in the PEcAn database `models` table. If this is present, then `type` and `binary` are optional since they can be determined from the PEcAn database.
-- `type` -- The model "type", matching the PEcAn database `modeltypes` table `name` column. This also refers to which PEcAn model-specific package will be used. In PEcAn, a "model" refers to a specific version (e.g. release, git commit) of a specific model, and "model type" is used to link different releases of the same model. Model "types" also have specific PFT definitions and other requirements associated with them (e.g. the ED2 model "type" requires a global land cover database).
-- `binary` -- The file path to the model executable. If omitted, PEcAn will use whatever path is registered in the PEcAn database for the current machine.
-- `job.sh` -- Additional options added to the `job.sh` script, which is used to execute the model. This is useful for setting specific environment variables, loading modules, etc.
-
-This information is currently used by the following PEcAn workflow functions:
-
-- `PEcAn.<MODEL>::write.config.<MODEL>` -- Write model-specific configuration files {#pecan-write-configs}
-- `PEcAn.remote::start.model.runs` -- Begin model run execution
-
-#### Model-specific configuration {#xml-model-specific}
-
-See the following:
-
-* [ED2][ED2 Configuration]
-* [SIPNET][SIPNET Configuration]
-* [BIOCRO][BioCro Configuration]
-
-#### ED2 specific tags {#xml-ed}
-
-The following variables are ED-specific and are used in the [ED2 Configuration](ED2-Configuration).
-
-Starting at 1.3.7 the tags for inputs have moved to `<run><inputs>`. This includes veg, soil, psscss, and inputs.
-
-```xml
-<edin>/home/carya/runs/PEcAn_4/ED2IN.template</edin>
-<config.header>
-  <radiation>
-    <lai_min>0.01</lai_min>
-  </radiation>
-  <ed_misc>
-    <output_month>12</output_month>
-  </ed_misc>
-</config.header>
-<phenol.scheme>0</phenol.scheme>
-```
-
-* **edin** : [required] template used to write the ED2IN file
-* **veg** : **OBSOLETE** [required] location of VEG database, now part of `<run><inputs>`
-* **soil** : **OBSOLETE** [required] location of soil database, now part of `<run><inputs>`
-* **psscss** : **OBSOLETE** [required] location of site information, now part of `<run><inputs>`. Should be specified as `<pss>`, `<css>` and `<site>`.
-* **inputs** : **OBSOLETE** [required] location of additional input files (e.g. data assimilation data), now part of `<run><inputs>`. Should be specified as `<lu>` and `<thsum>`.
-
-### `run`: Run Setup {#xml-run}
-
-This section provides detailed configuration for the model run, including the site and time period for the simulation and what input files will be used.
-
-```xml
-<run>
-  <site>
-    <id>1000000098</id>
-    <met.start>2004/01/01</met.start>
-    <met.end>2004/12/31</met.end>
-    <site.pft>
-      <pft.name>temperate.needleleaf.evergreen</pft.name>
-      <pft.name>temperate.needleleaf.evergreen.test</pft.name>
-    </site.pft>
-  </site>
-  <inputs>
-    <met>
-      <source>CRUNCEP</source>
-      <output>SIPNET</output>
-    </met>
-  </inputs>
-  <start.date>2004/01/01</start.date>
-  <end.date>2004/12/31</end.date>
-</run>
-```
-
-#### `site`: Where to run the model {#xml-run-site}
-
-This contains the following tags:
-
-- `id` -- This is the numeric ID of the site in the PEcAn database (table `sites`, column `id`). PEcAn can automatically fill in other relevant information for the site (e.g. `name`, `lat`, `lon`) using the site ID, so those fields are optional if ID is provided.
-- `name` -- The name of the site, as a string.
-- `lat`, `lon` -- The latitude and longitude coordinates of the site, as decimals.
-- `met.start`, `met.end` -- ???
-- `<site.pft>` (optional) If this tag is found under the site tag, then PEcAn automatically makes sure that only PFTs defined under this tag are used for generating parameter samples. The following shows an example of how this tag can be added to the PEcAn xml:
-
-```xml
-<site.pft>
-  <pft.name>temperate.needleleaf.evergreen</pft.name>
-  <pft.name>temperate.needleleaf.evergreen</pft.name>
-</site.pft>
-```
-
-For multi-site runs, if the `pft.site` tag (see {#xml-run-inputs}) is defined under `input`, then the above process will be done automatically under the prepare settings step in the PEcAn main workflow and there is no need for adding the tags manually. Using the `pft.site` tag, however, requires a lookup table as an input (see {#xml-run-inputs}).
-
-
-#### `inputs`: Model inputs {#xml-run-inputs}
-
-Models require several different types of inputs to run.
-Exact requirements differ from model to model, but common inputs include meteorological/climate drivers, site initial conditions (e.g. vegetation composition, carbon pools), and land use drivers.
-
-In general, all inputs should have the following tags:
-
-* `id`: Numeric ID of the input in the PEcAn database (table `inputs`, column `id`). If not specified, PEcAn will try to figure this out based on the `source` tag (described below).
-* `path`: The file path of the input. Usually, PEcAn will set this automatically based on the `id` (which, in turn, is determined from the `source`). However, this can be set manually for data that PEcAn does not know about (e.g. data that you have processed yourself and have not registered with the PEcAn database).
-* `source`: The input data type. This tag name needs to match the names in the corresponding conversion functions. If you are using PEcAn's automatic input processing, this is the only field you need to set. However, this field is ignored if `id` and/or `path` are provided.
-* `output`: ???
-
-The following are the most common types of inputs, along with their corresponding tags:
-
-##### `met`: Meteorological inputs {#xml-run-inputs-met}
-
-(Under construction. See the `PEcAn.data.atmosphere` package, located in `modules/data.atmosphere`, for more details.)
-
-##### (Experimental) `soil`: Soil inputs {#xml-run-inputs-soil}
-
-(Under construction. See the `PEcAn.data.land` package, located in `modules/data.land`, for more details).
-
-##### (Experimental) `veg`: Vegetation initial conditions {#xml-run-inputs-veg}
-
-(Under construction. Follow developments in the `PEcAn.data.land` package, located in `modules/data.land` in the source code).
-##### `pft.site` Multi-site site / PFT mapping
-
-When performing multi-site runs, it is not uncommon to find that different sites need to be run with different PFTs, rather than running all PFTs at all sites.
If you want to use a specific PFT for your site(s), you can use the following tag to tell PEcAn which PFT should be used for which site.
-```xml
-<inputs>
-  <pft.site>
-    <path>site_pft.csv</path>
-  </pft.site>
-</inputs>
-```
-For example, using the above tag, the user needs to have a csv file named `site_pft` stored in the pecan folder. At the moment we have functions supporting just `.csv` and `.txt` files, which are comma separated and have the following format:
-```
-site_id, pft_name
-1000025731,temperate.broadleaf.deciduous
-764,temperate.broadleaf.deciduous
-```
-Then pecan would use this lookup table to inform the `write.ensemble.configs` function about what PFTs need to be used for what sites.
-
-#### `start.date` and `end.date`
-
-The start and end date for the run, in a format parseable by R (e.g. `YYYY/MM/DD` or `YYYY-MM-DD`).
-These dates are inclusive; in other words, they refer to the first and last days of the run, respectively.
-
-NOTE: Any time-series inputs (e.g. meteorology drivers) must contain all of these dates.
-PEcAn tries to detect and throw informative errors when dates are out of bounds of the inputs, but it may not know about some edge cases.
-
-#### Other tags
-
-The following tags are optional run settings that apply to any model:
-
-* `jobtemplate`: the template used when creating a `job.sh` file, which is used to launch the actual model. Each model has its own default template in the `inst` folder of the corresponding R package (for instance, here is the one for [ED2](https://github.com/PecanProject/pecan/blob/master/models/ed/inst/template.job)). The following variables can be used: `@SITE_LAT@`, `@SITE_LON@`, `@SITE_MET@`, `@START_DATE@`, `@END_DATE@`, `@OUTDIR@`, `@RUNDIR@`, which all come from variables in the `pecan.xml` file. The following two commands can be used to copy and clean the results from a scratch folder (specified as scratch in the run section below, for example local disk vs network disk): `@SCRATCH_COPY@`, `@SCRATCH_CLEAR@`.
-
-Some models also have model-specific tags, which are described in the [PEcAn Models](#pecan-models) section.
-
-### `host`: Host information for remote execution {#xml-host}
-
-This section provides settings for remote model execution, i.e. any execution that happens on a machine (including "virtual" machines, like Docker containers) different from the one on which the main PEcAn workflow is running.
-A common use case for this section is to submit model runs as jobs to a high-performance computing cluster.
-If no `host` tag is provided, PEcAn assumes models are run on `localhost`, a.k.a. the same machine as PEcAn itself.
-
-For detailed instructions on remote execution, see the [Remote Execution](#pecan-remote) page.
-For detailed information on configuring this for RabbitMQ, see the [RabbitMQ](#rabbitmq-xml) page.
-The following provides a quick overview of XML tags related to remote execution.
-
-**NOTE**: Any paths specified in the `pecan.xml` refer to paths on the `host` specified in this section, /not/ the machine on which PEcAn is running (unless models are running on `localhost` or this section is omitted).
-
-```xml
-<host>
-  <name>pecan2.bu.edu</name>
-  <rundir>/fs/data3/guestuser/pecan/testworkflow/run</rundir>
-  <outdir>/fs/data3/guestuser/pecan/testworkflow/out</outdir>
-  <scratchdir>/tmp/carya</scratchdir>
-  <clearscratch>TRUE</clearscratch>
-  <qsub>qsub -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash</qsub>
-  <qsub.jobid>Your job ([0-9]+) .*</qsub.jobid>
-  <qstat>qstat -j @JOBID@ &> /dev/null || echo DONE</qstat>
-  <job.sh>module load udunits R/R-3.0.0_gnu-4.4.6</job.sh>
-  <modellauncher>
-    <binary>/usr/local/bin/modellauncher</binary>
-    <qsub.extra>-pe omp 20</qsub.extra>
-  </modellauncher>
-</host>
-```
-
-The `host` section has the following tags:
-
-* `name`: [optional] name of the host server where the model is located and executed; if not specified, localhost is assumed.
-* `rundir`: [optional/required] location where all the configuration files are written. For localhost this is optional (`<outdir>/run` is the default); for any other host this is required.
-* `outdir`: [optional/required] location where all the outputs of the model are written. For localhost this is optional (`<outdir>/out` is the default); for any other host this is required.
-* `scratchdir`: [optional] location where output is written. If specified, the output from the model is written to this folder and copied to the outdir when the model is finished; this could significantly speed up the model execution (by using local or ram disk).
-* `clearscratch`: [optional] if set to TRUE, the scratch folder is cleaned up after copying the results to the outdir; otherwise the folder will be left. The default is to clean up after copying.
-* `qsub`: [optional] the command to submit a job to the queuing system. There are 3 parameters you can use when specifying the qsub command; you can add additional values for your specific setup (for example, -l walltime to specify the walltime, etc). You can specify @NAME@ the pretty name, @STDOUT@ where to write stdout, and @STDERR@ where to write stderr. You can specify an empty element (`<qsub/>`), in which case it will use the default value `qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -s /bin/bash`.
-* `qsub.jobid`: [optional] the regular expression used to find the `jobid` returned from `qsub`. If not specified (and `qsub` is), it will use the default value `Your job ([0-9]+) .*`
-* `qstat`: [optional] the command to execute to check if a job is finished; this should return DONE if the job is finished. There is one parameter this command should take, `@JOBID@`, which is the ID of the job as returned by `qsub.jobid`. If not specified (and qsub is), it will use the default value `qstat -j @JOBID@ || echo DONE`
-* `job.sh`: [optional] additional options to add to the top of the job.sh.
-* `modellauncher`: [optional] this is an experimental section that will allow you to submit all the runs as a single job to an HPC system.
-
-The `modellauncher` section, if specified, will group all runs together and only submit a single job to the HPC cluster. This single job will leverage an MPI program that executes the individual runs. Some HPC systems will place a limit on the number of jobs that can be executed in parallel; this will only submit a single job (using multiple nodes). In case there is no limit on the number of jobs, a single PEcAn run could potentially submit a lot of jobs, resulting in the full cluster running jobs for a single PEcAn run and preventing others from executing on the cluster.
-
-The `modellauncher` has 2 arguments:
-* `binary` : [required] The full path to the modellauncher binary. Source code for this file can be found in [`pecan/contrib/modellauncher`](https://github.com/PecanProject/pecan/tree/develop/contrib/modellauncher).
-* `qsub.extra` : [optional] Additional flags to pass to qsub besides those specified in the `qsub` tag in host.
This option can be used to specify that the MPI environment needs to be used and the number of nodes that should be used.
-
-## Advanced features {#xml-advanced}
-
-### `ensemble`: Ensemble Runs {#xml-ensemble}
-
-As with `meta.analysis`, if this section is missing, then PEcAn will not do an ensemble analysis.
-
-```xml
-<ensemble>
-  <size>1</size>
-  <variable>NPP</variable>
-  <samplingspace>
-    <parameters>
-      <method>uniform</method>
-    </parameters>
-    <met>
-      <method>sampling</method>
-    </met>
-  </samplingspace>
-</ensemble>
-```
-
-An alternative configuration is as follows:
-
-```xml
-<ensemble>
-  <size>5</size>
-  <variable>GPP</variable>
-  <start.year>1995</start.year>
-  <end.year>1999</end.year>
-  <samplingspace>
-    <parameters>
-      <method>lhc</method>
-    </parameters>
-    <met>
-      <method>sampling</method>
-    </met>
-  </samplingspace>
-</ensemble>
-```
-
-Tags in this block can be broken down into two categories: those used for setup (which determine how the ensemble analysis runs) and those used for post-hoc analysis and visualization (i.e. which do not affect how the ensemble is generated).
-
-Tags related to ensemble setup are:
-
-* `size` : (required) the number of runs in the ensemble.
-* `samplingspace`: (optional) Contains tags for defining how the ensembles will be generated.
-
-Each piece in the sampling space can potentially have a method tag and a parent tag. Method refers to the sampling method, and parent refers to the cases where we need to link the samples of two components. When no tag is defined for one component, one sample will be generated and used for all the ensembles. This allows for partitioning/studying different sources of uncertainties. For example, if no met tag is defined, then one met path will be used for all the ensembles and, as a result, the output uncertainty will come from the variability in the parameters. At the moment no sampling method is implemented for soil and vegetation.
-Available sampling methods for `parameters` can be found in the documentation of the `PEcAn.utils::get.ensemble.samples` function.
-For the cases where we need simulations with a predefined set of parameters, met, and initial condition, we can use the restart argument. Restart needs to be a list with name tags of `runid`, `inputs`, `new.params` (parameters), `new.state` (initial condition), `ensemble.id` (ensemble ids), `start.time`, and `stop.time`.
-
-The restart functionality is implemented through model-specific functions called `write_restart.<modelname>`. You need to make sure first that this function already exists for your desired model.
-
-Note: if the ensemble size is set to 1, PEcAn will select the **posterior median** parameter values rather than taking a single random draw from the posterior.
-
-Tags related to post-hoc analysis and visualization are:
-
-* `variable`: (optional) name of one (or more) variables the analysis should be run for. If not specified, the `sensitivity.analysis` variable is used; otherwise the default is GPP (Gross Primary Productivity).
-
-(NOTE: This static visualization functionality will soon be deprecated as PEcAn moves towards interactive visualization tools based on Shiny and htmlwidgets).
-
-This information is currently used by the following PEcAn workflow functions:
-
-- `PEcAn.<MODEL>::write.config.<MODEL>` - See [above](#pecan-write-configs).
-- `PEcAn.uncertainty::write.ensemble.configs` - Write configuration files for ensemble analysis
-- `PEcAn.uncertainty::run.ensemble.analysis` - Run ensemble analysis
-
-### `sensitivity.analysis`: Sensitivity analysis {#xml-sensitivity-analysis}
-
-A sensitivity analysis is done only if this section is defined. This section will have `<quantiles>` or `<sigma>` nodes. If neither are given, the default is to use the median +/- [1 2 3] x sigma (e.g. the 0.00135 0.0228 0.159 0.5 0.841 0.977 0.999 quantiles); if the 0.5 (median) quantile is omitted, it will be added in the code.
-
-```xml
-<sensitivity.analysis>
-  <quantiles>
-    <sigma>-3</sigma>
-    <sigma>-2</sigma>
-    <sigma>-1</sigma>
-    <sigma>1</sigma>
-    <sigma>2</sigma>
-    <sigma>3</sigma>
-  </quantiles>
-  <variable>GPP</variable>
-  <perpft>TRUE</perpft>
-  <start.year>2004</start.year>
-  <end.year>2006</end.year>
-</sensitivity.analysis>
-```
-
-- `quantiles/sigma` : [optional] The number of standard deviations relative to the standard normal (i.e. "Z-score") for which to perform the ensemble analysis. For instance, `1` corresponds to the quantile associated with 1 standard deviation greater than the mean (i.e. 0.841). Use a separate `<sigma>` tag, all under the `<quantiles>` tag, to specify multiple quantiles. Note that we _do not automatically add the quantile associated with `-sigma`_ -- i.e. if you want +/- 1 standard deviation, then you must include both `1` _and_ `-1`.
-- `start.date` : [required?] start date of the sensitivity analysis (in YYYY/MM/DD format)
-- `end.date` : [required?] end date of the sensitivity analysis (in YYYY/MM/DD format)
-  - **_NOTE:_** `start.date` and `end.date` are distinct from values set in the run tag because this analysis can be done over a subset of the run.
-- `variable` : [optional] name of one (or more) variables the analysis should be run for. If not specified, the sensitivity.analysis variable is used; otherwise the default is GPP.
-- `perpft` : [optional] if `TRUE` a sensitivity analysis on PFT-specific outputs will be run. This is only possible if your model provides PFT-specific outputs for the `variable` requested. This tag only affects the output processing, not the number of samples proposed for the analysis nor the model execution.
-
-This information is currently used by the following PEcAn workflow functions:
-
-- `PEcAn.<MODEL>::write.configs.<MODEL>` -- See [above](#pecan-write-configs)
-- `PEcAn.uncertainty::run.sensitivity.analysis` -- Executes the uncertainty analysis
-
-### Parameter Data Assimilation {#xml-parameter-data-assimilation}
-
-The following tags can be used for parameter data assimilation. More detailed information can be found here: [Parameter Data Assimilation Documentation](#pda)
-
-### Multi-Settings {#xml-multi-settings}
-
-Multi-settings allows you to do multiple runs across different sites. This customization can also leverage site group distinctions to expedite the customization. It takes your settings and applies the same settings, changing only the site-level tags, across sites.
-
-To start, add the multisettings tag within the `<multisettings>` section of your xml:
-```xml
-<multisettings>
-  <multisettings>run</multisettings>
-</multisettings>
-```
-Additional tags for this section exist and can fully be seen here:
-```xml
-<multisettings>
-  <multisettings>assim.batch</multisettings>
-  <multisettings>ensemble</multisettings>
-  <multisettings>sensitivity.analysis</multisettings>
-  <multisettings>run</multisettings>
-</multisettings>
-```
-These tags correspond to different PEcAn analyses that need to know that there will be multiple settings read in.
-
-
-Next you'll want to add the following tags to denote the group of sites you want to use. It leverages site groups, which are defined in BETY.
-
-```xml
-<sitegroup>
-  <id>1000000022</id>
-</sitegroup>
-```
-
-If you add this tag, you must remove the `<site>` tags from the `<run>` portion of your xml.
-The id of your sitegroup can be found by looking up your site group within BETY.
-
-You do not have to use the sitegroup tag. You can manually add multiple sites using the structure in the example below.
-
-Lastly, change the top-level tag to `<pecan.multi>`, meaning the top and bottom of your xml should look like this:
-
-```xml
-<pecan.multi>
-...
-</pecan.multi>
-```
-
-Once you have defined these tags, you can run PEcAn, but there may be further specifications needed if you know that different data sources have different dates available.
-
-Run workflow.R up until
-```
-# Write pecan.CHECKED.xml
-PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml")
-```
-Once this section is run, you'll need to open `pecan.CHECKED.xml`.
You will notice that it has expanded from your original `pecan.xml`.

```xml
<run>
  <settings.1>
    <site>
      <id>796</id>
      <met.start>2005/01/01</met.start>
      <met.end>2011/12/31</met.end>
      <name>Bartlett Experimental Forest (US-Bar)</name>
      <lat>44.06464</lat>
      <lon>-71.288077</lon>
    </site>
    <start.date>2005/01/01</start.date>
    <end.date>2011/12/31</end.date>
    <inputs>
      <met>
        <path>/fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-796/AMF_US-Bar_BASE_HH_4-1.2005-01-01.2011-12-31.clim</path>
      </met>
    </inputs>
  </settings.1>
  <settings.2>
    <site>
      <id>767</id>
      <met.start>2001/01/01</met.start>
      <met.end>2014/12/31</met.end>
      <name>Morgan Monroe State Forest (US-MMS)</name>
      <lat>39.3231</lat>
      <lon>-86.4131</lon>
    </site>
    <start.date>2001/01/01</start.date>
    <end.date>2014/12/31</end.date>
    <inputs>
      <met>
        <path>/fs/data1/pecan.data/dbfiles/AmerifluxLBL_SIPNET_site_0-767/AMF_US-MMS_BASE_HR_8-1.2001-01-01.2014-12-31.clim</path>
      </met>
    </inputs>
  </settings.2>
....
```

* The `....` replaces the rest of the site settings for however many sites are within the site group.

Looking at the example above, take a close look at the `<met.start>` and `<met.end>` tags. You will notice that the dates differ between the two sites. In this example they were edited by hand to include the dates that are available for that site and source; you must know your source's date availability beforehand. Only the CRUNCEP source has a check that will tell you if your dates are outside the available range. PEcAn will automatically populate these dates across sites according to the original setting of start and end dates.

In addition, you will notice that the `<inputs>` section contains the model-specific meteorological data file. You can add that in by hand, or you can leave the normal tags that the met process workflow will use to process the data into your model-specific format:

```
<met>
  <source>AmerifluxLBL</source>
  <output>SIPNET</output>
  <username>pecan</username>
</met>
```

### (experimental) State Data Assimilation {#xml-state-data-assimilation}

The following tags can be used for state data assimilation. More detailed information can be found here: [State Data Assimilation Documentation](#sda)

```xml
<state.data.assimilation>
  <process.variance>FALSE</process.variance>
  <sample.parameters>FALSE</sample.parameters>
  <state.variables>
    <variable>AGB.pft</variable>
    <variable>TotSoilCarb</variable>
  </state.variables>
  <spin.up>
    <start.date>2004/01/01</start.date>
    <end.date>2006/12/31</end.date>
  </spin.up>
  <forecast.time.step>1</forecast.time.step>
  <start.date>2004/01/01</start.date>
  <end.date>2006/12/31</end.date>
</state.data.assimilation>
```

* **process.variance** : [optional] TRUE/FALSE flag for whether process variance should be estimated (TRUE) or not (FALSE). If TRUE, a generalized ensemble filter will be used. If FALSE, an ensemble Kalman filter will be used. Default is FALSE.
* **sample.parameters** : [optional] TRUE/FALSE flag for whether parameters should be sampled for each ensemble member or not. This allows for more spread in the initial conditions of the forecast.
* **_NOTE:_** If TRUE, you must also assign a vector of trait names to `pick.trait.params` within the `sda.enkf` function.
* **state.variable** : [required] State variable that is to be assimilated (in PEcAn standard format). Default is "AGB" - Above Ground Biomass.
* **spin.up** : [required] start.date and end.date for model spin up.
* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because spin up can be done over a subset of the run.
* **forecast.time.step** : [optional] time step of the forecast.
* **start.date** : [required?] start date of the state data assimilation (in YYYY/MM/DD format)
* **end.date** : [required?] end date of the state data assimilation (in YYYY/MM/DD format)
* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because this analysis can be done over a subset of the run.

### (experimental) Brown Dog {#xml-browndog}

This section describes how to connect to [Brown Dog](http://browndog.ncsa.illinois.edu). This facilitates processing and conversions of data.

```xml
<browndog>
  <url>...</url>
  <username>...</username>
  <password>...</password>
</browndog>
```

* `url`: (required) endpoint for Brown Dog to be used.
* `username`: (optional) username to be used with the endpoint for Brown Dog.
* `password`: (optional) password to be used with the endpoint for Brown Dog.

This information is currently used by the following R functions:

- `PEcAn.data.atmosphere::met.process` -- Generic function for processing meteorological input data.
- `PEcAn.benchmark::load_data` -- Generic, versatile function for loading data in various formats.

### (experimental) Benchmarking {#xml-benchmarking}

Coming soon...

# PEcAn workflow (web/workflow.R) {#workflow}

- How the workflow works
- How each module is called
- How to run the workflow outside of the web interface (see the sketch below)
- Link to "folder structure" section below for detailed descriptions
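Running the workflow outside of the web interface boils down to reading a settings file from an R session and then either sourcing the workflow script or calling the individual module functions yourself. A minimal sketch, assuming a complete `pecan.xml` in the working directory and a local PEcAn checkout (this is an illustration, not the definitive invocation):

``` R
# Minimal sketch: run the PEcAn workflow outside the web interface.
# Assumes a complete pecan.xml in the working directory.
library(PEcAn.settings)

settings <- PEcAn.settings::read.settings("pecan.xml")
PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml")

# From here, either source the full workflow script shipped with PEcAn...
# source("web/workflow.R")
# ...or call the individual module functions (trait meta-analysis,
# write.configs, model runs, etc.) step by step, as described below.
```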
## Read Settings {#workflow-readsettings}

(TODO: Under construction...)

## Input Conversions {#workflow-input}

## Input Data {#workflow-input-data}

Models require input data as drivers, parameters, and boundary conditions. In order to make a variety of data sources with unique formats compatible with models, conversion scripts are written to convert them into a PEcAn standard format. That format is a netCDF file with variable names and units specified by our standard variable table.

Within the PEcAn repository, code pertaining to input conversion is in the MODULES directory under the data.atmosphere and data.land directories.

## Initial Conditions {#workflow-input-initial}

(TODO: Under construction)

## Meteorological Data {#workflow-met}

To convert meteorological data into the PEcAn standard and then into model formats, we follow four main steps:

1. Downloading raw data
    - [Currently supported products]()
    - Example Code
2. Converting raw data into a CF standard
    - Example Code
3. Downscaling and gapfilling
    - Example Code
4. Converting to model-specific format
    - Example Code

Common questions regarding met data:

- How do I add my meteorological data product to PEcAn?
- How do I use PEcAn to convert met data outside the workflow?

The main script that handles met processing is [`met.process`](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/met.process.R). It acts as a wrapper function that calls individual modules to facilitate the processing of meteorological data from its original form to the PEcAn standard, and then from that standard to model-specific formats. It also handles recording these processes in the BETY database.

1. Downloading raw data
    - [Available Meteorological Drivers]
    - Example code to download [Ameriflux data](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/download.AmerifluxLBL.R)
2. Converting raw data into a CF standard (if needed)
    - Example code to [convert from raw csv to CF standard](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/met2CF.csv.R)
3. Downscaling and gapfilling (if needed)
    - Example code to [gapfill](https://github.com/PecanProject/pecan/blob/develop/modules/data.atmosphere/R/metgapfill.R)
4. Converting to model-specific format
    - Example code to [convert the standard into SIPNET format](https://github.com/PecanProject/pecan/blob/develop/models/sipnet/R/met2model.SIPNET.R)

### Downloading Raw data (Description of Process) {#workflow-met-download}

Given the information passed from the pecan.xml, `met.process` will call the `download.raw.met.module` to facilitate the execution of the necessary functions to download raw data.
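The download step can also be run by hand. A hedged sketch calling the AmerifluxLBL downloader directly (argument names follow the function linked above but should be checked against the current documentation; the site, dates, and output folder are hypothetical examples):

``` R
# Hedged sketch: download raw AmerifluxLBL data for one site, outside met.process.
# The sitename, dates, and outfolder below are hypothetical examples.
library(PEcAn.data.atmosphere)

PEcAn.data.atmosphere::download.AmerifluxLBL(
  sitename   = "US-Ha1",                    # Ameriflux site code
  outfolder  = "/tmp/AmerifluxLBL_US-Ha1",  # where raw files are written
  start_date = "2010-01-01",
  end_date   = "2011-12-31",
  username   = "pecan"
)
```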
Within `pecan.xml`, the corresponding `<met>` section looks like:

```xml
<met>
  <source>AmerifluxLBL</source>
  <output>SIPNET</output>
  <username>pecan</username>
</met>
```

### Converting raw data to PEcAn standard {#workflow-met-standard}

### Downscaling and gapfilling (optional) {#workflow-met-downscale}

### Converting from PEcAn standard to model-specific format {#workflow-met-model}

## Traits {#workflow-traits}

(TODO: Under construction)

## Meta Analysis {#workflow-metaanalysis}

(TODO: Under construction)

## Model Configuration {#workflow-modelconfig}

(TODO: Under construction)

## Run Execution {#workflow-modelrun}

(TODO: Under construction)

## Post Run Analysis {#workflow-postrun}

(TODO: Under construction)

## Advanced Analysis {#workflow-advanced}

(TODO: Under construction)

# PEcAn Models {#pecan-models}

This section contains information about all models and output variables that are supported by PEcAn.

| Model Name | Available in the VM | Prescribed Inputs | Input Functions/Values | Restart Function |
| -- | -- | -- | -- | -- |
| [BioCro](#models-biocro) | Yes | Yes | Yes | No |
| [CLM](#models-clm) | No | No | No | No |
| [DALEC](#models-dalec) | Yes | Yes | Yes | No |
| [ED2](#models-ed) | Yes | Yes | Yes | Yes |
| FATES | No | Yes | | No |
| [GDAY](#models-gday) | No | No | No | No |
| [LINKAGES](#models-linkages) | Yes | Yes | Yes | Yes |
| [LPJ-GUESS](#models-lpjguess) | No | Yes | No | No |
| [MAESPA](#models-maespa) | Yes | Yes | No | No |
| [PRELES](#models-preles) | Yes | Yes | Partially | No |
| [SiPNET](#models-sipnet) | Yes | Yes | Yes | Yes |
| [STICS](#models-stics) | Yes | Yes | No | No |

*Available in the VM* - Denotes whether a model is publicly available with PEcAn.

*Prescribed Inputs* - Denotes whether or not PEcAn can prescribe inputs.

*Input Functions/Values* - Denotes whether or not PEcAn has functions to fully produce a model's input values.

*Restart Function* - Denotes the status of the model's data assimilation capabilities.

**Output Variables**

PEcAn converts all model outputs to a single [Output Standards] format. This standard evolved out of the MsTMIP project, which is itself based on NACP, LBA, and other model-intercomparison projects. This standard was expanded for the PalEON MIP and the needs of the PEcAn modeling community to support variables not in these standards.

_Model developers_: do not add variables to your PEcAn output without first adding them to the PEcAn standard table! Also, do not create new variables equivalent to existing variables but just with different names or units.

## BioCro {#models-biocro}

| Model Information | |
| -- | -- |
| Home Page | https://github.com/ebimodeling/biocro/blob/master/README.md |
| Source Code | https://github.com/ebimodeling/biocro |
| License | [University of Illinois/NCSA Open Source License](https://github.com/ebimodeling/biocro/blob/master/LICENSE) |
| Authors | Fernando E. Miguez, Deepak Jaiswal, Justin McGrath, David LeBauer, Scott Rohde, Dan Wang |
| PEcAn Integration | David LeBauer, Dan Wang |

### Introduction

BioCro is a model that estimates photosynthesis at the leaf, canopy, and ecosystem levels and determines plant biomass allocation and crop yields, using underlying physiological and ecological processes to do so.
### PEcAn configuration file additions

The following sections of the PEcAn XML are relevant to the BioCro model:

- `model`
    - `revision` -- Model version number
- `run`
    - `site/id` -- ID associated with desired site from BETYdb site entry
    - `inputs`
        - `met/output` -- Set as BIOCRO
        - `met/path` -- Path to file containing meteorological data

### Model specific input files

List of inputs required by model, such as met, etc.

### Model configuration files

Genus-specific parameter files are required, but they are not specified directly in the PEcAn XML; they are stored in the PEcAn.BIOCRO package and looked up under the hood.

`write.configs.BIOCRO` looks for defaults in this order: first, any file at a path specified by `settings$pft$constants$file`; next, by matching the genus name in datasets exported by the BioCro package; and last, by matching the genus name in PEcAn.BIOCRO's extdata/defaults directory.

When adding a new genus, it is necessary to provide a new default parameter file in PEcAn.BIOCRO [`inst/extdata/defaults`](https://github.com/PecanProject/pecan/tree/develop/models/biocro/inst/extdata/defaults) and also (for v<1.0) update the `call_biocro()` function.

BioCro uses a config.xml file similar to ED2. At this time, no other template files are required.

### Installation notes

BioCro can be run standalone using the model's R package. Instructions for installing and using the package are in the GitHub repo's [README file](https://github.com/ebimodeling/biocro/blob/master/README.md).

## CLM {#models-clm}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>. The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

**VM**

## DALEC {#models-dalec}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>.
The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

**VM**

## ED2 {#models-ed}

| Model Information | |
| -- | -- |
| Home Page | http://moorcroftlab.oeb.harvard.edu/ |
| Source Code | https://github.com/EDmodel/ED2 |
| License | |
| Authors | Paul Moorcroft, ... |
| PEcAn Integration | Michael Dietze, Rob Kooper |

### Introduction

Introduction about ED model

### PEcAn configuration file additions

The following sections of the PEcAn XML are relevant to the ED model:

- `model`
    - `id` -- BETY model ID. Each ID corresponds to a different revision of ED (see below)
    - `revision` -- The revision (a.k.a. release) number of ED (e.g. "r82"). "rgit" indicates the latest code on the ED repository.
    - `edin` -- Name of the template ED2IN configuration file. If this is a functional path that points to a specific file, that file is used. If no file is found following the path literally, the workflow will try the path relative to the `PEcAn.ED2` package using `system.file` (recall that files in `inst` are moved to the package root, but all other directory structure is preserved). If this is omitted, `PEcAn.ED2::write.configs.ED2` will look for a file called `ED2IN.<revision>` (e.g. `ED2IN.rgit`, `ED2IN.r86`) in the `PEcAn.ED2` package.
        - **Example**: `ED2IN.rgit` will use the `ED2IN.rgit` file shipped with `PEcAn.ED2` _regardless of the revision of ED used_. (Note however that if a file called `ED2IN.rgit` exists in the workflow runtime directory, that file will be used instead).
    - `start.date`, `end.date` -- Run start and end date, respectively
    - `met.start`, `met.end` -- Start and end year of meteorology inputs. By default (if omitted), these are set to the years of `start.date` and `end.date`, respectively. Setting these values to a shorter interval than `start.date` and `end.date` will cause ED to loop the meteorology input over the specified years. This may be useful, for example, for spinning up ED under a set of constant climate conditions.
    - `phenol.scheme`
    - `phenol`
    - `phenol.start`
    - `phenol.end`
    - `all_pfts` -- (Logical) If false or missing (default), only run ED2 with the PFTs configured via PEcAn (i.e. in the `<pfts>` section of the XML). If `true`, run with all 17 of ED2's PFTs, using ED2's internal default parameters for all PFTs not configured through PEcAn. See [below](#models-ed-pft-configuration) for more details.
    - `ed2in_tags` -- Named list of additional tags in the ED2IN to be modified. These modifications override any of those set by other parts of the PEcAn workflow. These tags must be in all caps. Any tags that are not already in the ED2IN file will be added; this makes this an effective way to run newer versions of ED2 that have new ED2IN parameters without having to provide an entire new ED2IN. For example:

      ```xml
      <model>
        <ed2in_tags>
          <IOOUTPUT>0</IOOUTPUT>
          <PLANT_HYDRO_SCHEME>0</PLANT_HYDRO_SCHEME>
          <ISTOMATA_SCHEME>0</ISTOMATA_SCHEME>
          <ISTRUCT_GROWTH_SCHEME>0</ISTRUCT_GROWTH_SCHEME>
          <TRAIT_PLASTICITY_SCHEME>0</TRAIT_PLASTICITY_SCHEME>
        </ed2in_tags>
      </model>
      ```

    - `barebones_ed2in` -- Whether or not to try to annotate the ED2IN file with comments. If "true", skip all comments and only write the tags themselves. If "false" (default), try to transfer comments from the template file into the target file.
    - `jobtemplate`
    - `prerun` -- String of commands to be added to the `job.sh` model execution script before the model is run. Multiple commands should be separated by proper `bash` syntax -- i.e. either with `&&` or `;`.
        - One common use of this argument is to load modules on some HPC systems -- for instance:

          ```xml
          <prerun>module load git hdf5</prerun>
          ```

        - If your particular version of ED is failing early during execution with a mysterious "Segmentation fault", that may indicate that its process is exceeding its stack limit. In this case, you may need to remove the stack limit restriction with a `prerun` command like the following:

          ```xml
          <prerun>ulimit -s unlimited</prerun>
          ```

    - `postrun` -- Same as `<prerun>`, but for commands to be run _after_ model execution.
    - `binary` -- The full path to the ED2 binary on the target machine.
    - `binary_args` -- Additional arguments to be passed to the ED2 binary. Some common arguments are:
        - `-s` -- Delay OpenMPI initialization until the last possible moment. This is needed when running ED2 in a Docker container. It is included by default when the host is `rabbitmq`.
        - `-f /path/to/ED2IN` -- Full path to a specific ED2IN namelist file. Typically, this is not needed because, by default, ED searches for the ED2IN in the current directory, and the PEcAn workflow places the ED2IN file and a symbolic link to the ED executable in the same (run) directory for you.
- `run/site`
    - `lat` -- Latitude coordinate of site
    - `lon` -- Longitude coordinate of site
- `inputs`
    - `met/path` -- Path to `ED_MET_DRIVER_HEADER` file
    - `pss`: [required] location of patch file
    - `css`: [required] location of cohort file
    - `site`: [optional] location of site file
    - `lu`: [required] location of land use file
    - `thsums`: [required] location of thermal sums file
    - `veg`: [required] location of vegetation data
    - `soil`: [required] location of soil data

### PFT configuration in ED2 {#models-ed-pft-configuration}

ED2 has more detailed PFTs than many models, and a more complex system for configuring these PFTs.
ED2 has 17 PFTs, based roughly on growth form (e.g. tree vs. grass), biome (tropical vs. temperate), leaf morphology (broad vs. needleleaf), leaf phenology (evergreen vs. deciduous), and successional status (e.g. early, mid, or late).
Each PFT is assigned an integer (1-17), which is used by the ED model to assign default model parameters.
The mappings of these integers onto PFT definitions are not absolute, and may change as the ED2 source code evolves.
Unfortunately, _the only authoritative source for these PFT definitions for any given ED2 version is the Fortran source code of that version_.
The following is the mapping as of ED2 commit [24e6df6a][ed2in-table] (October 2018):

1. C4 grass
2. Early-successional tropical
3. Mid-successional tropical
4. Late-successional tropical
5. Temperate C3 grass
6. Northern pine
7. Southern pine
8. Late-successional conifer
9. Early-successional temperate deciduous
10. Mid-successional temperate deciduous
11. Late-successional temperate deciduous
12. Agricultural (crop) 1
13. Agricultural (crop) 2
14. Agricultural (crop) 3
15. Agricultural (crop) 4
16. Subtropical C3 grass (C4 grass with C3 photosynthesis)
17. "Araucaria" (non-optimized southern pine), or liana

ED2 parameter defaults are hard-coded in its Fortran source code.
However, most parameters can be modified via an XML file (determined by the `ED2IN` `IEDCNFGF` field; usually `config.xml` in the same directory as the `ED2IN` file).
The complete record of all parameters (defaults and user overrides) used by a given ED2 run is stored in a `history.xml` file (usually in the same directory as the `ED2IN`) -- this is the file to check to make sure that an ED2 run is parameterized as you expect.

As with other models, PEcAn can set ED2 parameters using its built-in trait meta analysis.
The function specifically responsible for writing the `config.xml` is [`PEcAn.ED2::write.config.xml.ED2`][write-config-xml-ed2] (which is called as part of the more general [`PEcAn.ED2::write.config.ED2`][write-config-ed2]).
The configuration process proceeds as follows:

First, the mappings between PEcAn PFT names and ED2 PFT numbers are determined according to the following logic:

- If the PFT has an `<ed2_pft_number>` XML tag, that number is used. For example:

  ```xml
  <pft>
    <name>umbs.early_hardwood</name>
    <ed2_pft_number>9</ed2_pft_number>
  </pft>
  ```

- If `<ed2_pft_number>` is not provided, the code tries to cross-reference the PFT `name` against the `pftmapping.csv` data provided with the `PEcAn.ED2` package. If the name is not matched (perfectly!), the function will exit with an error.

Second, the PFT number from the previous step is used to write that PFT's parameters to the `config.xml`.
The order of precedence for parameters is as follows (from highest to lowest):

1. **Explicit user overrides**.
   These are specified via a `<constants>` tag in the PFT definition in the `pecan.xml`.
   For example:

   ```xml
   <pft>
     <name>umbs.early_hardwood</name>
     <constants>
       <SLA>36.3</SLA>
     </constants>
   </pft>
   ```

   Note that these values are passed through [`PEcAn.ED2::convert.samples.ED`][convert-samples-ed], so they should generally be given in PEcAn's default units rather than ED2's.
2. **Samples from the PEcAn meta analysis**.
   These are also converted via [`PEcAn.ED2::convert.samples.ED`][convert-samples-ed].
3. **ED2 defaults that PEcAn knows about**.
   These are stored in the `edhistory.csv` file inside of `PEcAn.ED2`.
   This file is re-generated manually whenever there is a new version of ED2, so while we try our best to keep it up to date, there is no guarantee that it is.
4. (Implicitly) **Defaults in the ED2 Fortran source code**.
   In general, our goal is to set all parameters through PEcAn (via steps 1-3), but if you are running PEcAn with new or experimental versions of ED2, you should be extra careful to make sure ED2 is running with the parameters you intend.
   Again, the best way to know which parameters ED2 is actually using is to check the `history.xml` file produced once the run starts.

The `ED2IN` field `INCLUDE_THESE_PFT` controls which of these PFTs are included in a given ED2 run.
By default, PEcAn will set this field to only include the PFTs specified by the user.
This is the recommended behavior because it ensures that all PFTs in ED2 were parameterized (one way or another, at least partially) through PEcAn.
However, if you would like ED2 to run with all 17 PFTs (NOTE: using ED2's internal defaults for all PFTs not specified by the user!), you can set the `<all_pfts>` XML tag (in the `<model>` section) to `true`:

```xml
<model>
  <all_pfts>true</all_pfts>
</model>
```

[ed2in-table]: https://github.com/EDmodel/ED2/blob/24e6df6a75702337c5f29cbd3f3fad90467c9a51/ED/run/ED2IN#L1320-L1327

[write-config-xml-ed2]: https://pecanproject.github.io/models/ed/docs/reference/write.config.xml.ED2.html

[write-config-ed2]: https://pecanproject.github.io/models/ed/docs/reference/write.config.ED2.html

[convert-samples-ed]: https://pecanproject.github.io/models/ed/docs/reference/convert.samples.ED.html

### Model specific input files

List of inputs required by model, such as met, etc.

### Model configuration files

ED2 is configured using 2 files which are placed in the run folder.

* **ED2IN** : template for this file is located at models/ed/inst/ED2IN.\<revision\>. The values in this template that need to be modified are described below and are surrounded with @ symbols.
* **config.xml** : this file is generated by PEcAn. Some values are stored in the pecan.xml in the \<pfts\>\<pft\>\<constants\> section as well as in the \<model\> section.

An example of the template can be found in [ED2IN.r82](https://github.com/PecanProject/pecan/blob/master/models/ed/inst/ED2IN.r82)

The ED2IN template can contain the following variables. These will be replaced with actual values when the model configuration is written.

* **@ENSNAME@** : run id of the simulation, used in template for NL%EXPNME

* **@START_MONTH@** : start of simulation UTC time, from \<run\>\<start.date\>, used in template for NL%IMONTHA
* **@START_DAY@** : start of simulation UTC time, from \<run\>\<start.date\>, used in template for NL%IDATEA
* **@START_YEAR@** : start of simulation UTC time, from \<run\>\<start.date\>, used in template for NL%IYEARA
* **@END_MONTH@** : end of simulation UTC time, from \<run\>\<end.date\>, used in template for NL%IMONTHZ
* **@END_DAY@** : end of simulation UTC time, from \<run\>\<end.date\>, used in template for NL%IDATEZ
* **@END_YEAR@** : end of simulation UTC time, from \<run\>\<end.date\>, used in template for NL%IYEARZ

* **@SITE_LAT@** : site latitude location, from \<run\>\<site\>\<lat\>, used in template for NL%POI_LAT
* **@SITE_LON@** : site longitude location, from \<run\>\<site\>\<lon\>, used in template for NL%POI_LON

* **@SITE_MET@** : met header location, from \<run\>\<inputs\>\<met\>\<path\>, used in template for NL%ED_MET_DRIVER_DB
* **@MET_START@** : first year of met data, from \<run\>\<site\>\<met.start\>, used in template for NL%METCYC1
* **@MET_END@** : last year of met data, from \<run\>\<site\>\<met.end\>, used in template for NL%METCYCF

* **@PHENOL_SCHEME@** : phenology scheme, if this variable is 1 the following 3 fields will be used, otherwise they will be set to empty strings, from \<model\>\<phenol.scheme\>, used in template for NL%IPHEN_SCHEME
* **@PHENOL_START@** : first year for phenology, from \<model\>\<phenol.start\>, used in template for NL%IPHENYS1 and NL%IPHENYF1
* **@PHENOL_END@** : last year for phenology, from \<model\>\<phenol.end\>, used in template for NL%IPHENYSF and NL%IPHENYFF
* **@PHENOL@** : path and prefix of the prescribed phenology data, from \<model\>\<phenol\>, used in template for NL%PHENPATH

* **@SITE_PSSCSS@** : path and prefix of the previous ecosystem state, from \<inputs\>\<pss\>, used in template for NL%SFILIN
* **@ED_VEG@** : path and prefix of the vegetation database, used only to determine the land/water mask, from \<inputs\>\<veg\>, used in template for NL%VEG_DATABASE
* **@ED_SOIL@** : path and prefix of the soil database, used to determine the soil type, from \<inputs\>\<soil\>, used in template for NL%SOIL_DATABASE
* **@ED_INPUTS@** : input directory with dataset to initialise chilling degrees and growing degree days, which is used to drive
the cold-deciduous phenology, from \<inputs\>\<thsums\>, used in template for NL%THSUMS_DATABASE

* **@FFILOUT@** : path and prefix for analysis files, generated from \<run\>\<host\>\<outdir\>/run.id/analysis, used in template for NL%FFILOUT
* **@SFILOUT@** : path and prefix for history files, generated from \<run\>\<host\>\<outdir\>/run.id/history, used in template for NL%SFILOUT

* **@CONFIGFILE@** : XML file containing additional parameter settings, this is always "config.xml", used in template for NL%IEDCNFGF

* **@OUTDIR@** : location where output files are written (**without the runid**), from \<run\>\<host\>\<outdir\>, should not be used.
* **@SCRATCH@** : local scratch space for outputs, generated as /scratch/\<username\>/run\$scratch, should not be used right now since it only works on ebi-cluster

### Installation notes

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

#### VM

#### BU geo

#### TACC lonestar

```bash
module load hdf5
curl -o ED.r82.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.tgz
tar zxf ED.r82.tgz
rm ED.r82.tgz
cd ED.r82/ED/build/bin
curl -o include.mk.lonestar http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.lonestar
make OPT=lonestar
```

#### TACC stampede

```bash
module load hdf5
curl -o ED.r82.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.tgz
tar zxf ED.r82.tgz
rm ED.r82.tgz
cd ED.r82/ED/build/bin
curl -o include.mk.stampede http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.stampede
make OPT=stampede
```

## GDAY {#models-gday}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>. The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

**VM**

## LINKAGES {#models-linkages}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>.
The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

**VM**

## LPJ-GUESS {#models-lpjguess}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>. The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

**VM**

## MAESPA {#models-maespa}

| Model Information ||
| -- | -- |
| Home Page | http://maespa.github.io/ |
| Source Code | http://maespa.github.io/download.html |
| License | |
| Authors | Belinda Medlyn and Remko Duursma |
| PEcAn Integration | Tony Gardella, Martin De Kauwe, Remko Duursma |

**Introduction**

**PEcAn configuration file additions**

**Model specific input files**

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>. The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

Installing the MAESPA model requires cloning the MAESPA Bitbucket repository, executing the makefile, and ensuring that the Maeswrap R package is correctly installed.

To clone and compile the model, execute this code at the command line:

```
git clone https://bitbucket.org/remkoduursma/maespa.git
cd maespa
make clean
make
```

`maespa.out` is your executable. Example input files can be found in the inputfiles directory. Executing maespa.out from within one of the example directories will produce output.

MAESPA developers have also developed a wrapper package called `Maeswrap`. The usual R package installation method `install.packages` may present issues with downloading and unpacking a dependency package called `rgl`. Here are a couple of solutions:

Solution 1

**From the command line**
```
sudo apt-get install r-cran-rgl
```
then from within R
```
install.packages("Maeswrap")
```

Solution 2

**From the command line**
```
sudo apt-get install libglu1-mesa-dev
```
then from within R
```
install.packages("Maeswrap")
```

This section contains notes on how to compile the model.
The notes for the VM might work on other machines or configurations as well.

**VM**

## PRELES {#models-preles}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

MODEL is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **file1** : template for this file is located at models/MODEL/inst/file1 and is not modified.
* **file2** : template for this file is located at models/MODEL/inst/file2 and is not modified.
* **file3** : template for this file is in models/MODEL/inst/file3 or it is specified in the \<model\> section as \<template\>. The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

**VM**

## SiPNET {#models-sipnet}

| Model Information ||
| -- | -- |
| Home Page | |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | Michael Dietze, Rob Kooper |

**Introduction**

Introduction about model

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

SIPNET is configured using 3 files which are placed in the run folder, as well as a symbolic link to the met file.

* **sipnet.in** : template for this file is located at models/sipnet/inst/sipnet.in and is not modified.
* **sipnet.param-spatial** : template for this file is located at models/sipnet/inst/template.param-spatial and is not modified.
* **sipnet.param** : template for this file is in models/sipnet/inst/template.param or it is specified in the \<model\> section as \<template\>. The values in this template are replaced by those computed in the earlier stages of PEcAn.

**Installation notes**

This section contains notes on how to compile the model. The notes for the VM might work on other machines or configurations as well.

SIPNET version unk:

```
if [ ! -e ${HOME}/sipnet_unk ]; then
  cd
  curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/PEcAn/models/sipnet_unk.tar.gz
  tar zxf sipnet_unk.tar.gz
  rm sipnet_unk.tar.gz
fi
cd ${HOME}/sipnet_unk/
make clean
make
sudo cp sipnet /usr/local/bin/sipnet.runk
make clean
```

SIPNET version 136:

```
if [ ! -e ${HOME}/sipnet_r136 ]; then
  cd
  curl -o sipnet_r136.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_r136.tar.gz
  tar zxf sipnet_r136.tar.gz
  rm sipnet_r136.tar.gz
  sed -i 's#$(LD) $(LIBLINKS) \(.*\)#$(LD) \1 $(LIBLINKS)#' ${HOME}/sipnet_r136/Makefile
fi
cd ${HOME}/sipnet_r136/
make clean
make
sudo cp sipnet /usr/local/bin/sipnet.r136
make clean
```

**VM**

## STICS {#models-stics}

| Model Information ||
| -- | -- |
| Home Page | https://www6.paca.inrae.fr/stics/ |
| Source Code | |
| License | |
| Authors | |
| PEcAn Integration | Istem Fer |

**Introduction**

STICS (Simulateur mulTIdisciplinaire pour les Cultures Standard) is a crop model that has been developed since 1996 at INRA (the French National Institute for Agronomic Research) in collaboration with other research institutes (CIRAD, Irstea, Ecole des Mines de Paris, ESA, LSCE) and professional or teaching organizations (ARVALIS, Terres Inovia, CTIFL, ITV, ITB, Agrotransferts, etc.).

**PEcAn configuration file additions**

Should list the model specific additions to the PEcAn file here

**Model specific input files**

List of inputs required by model, such as met, etc.

**Model configuration files**

STICS is configured using different XML files located in two fixed directories, config and plant, and in one or more user-defined workspace directories. A Java app called JavaStics allows users to generate these files.

**Installation notes**

The software (JavaStics interface and STICS model) is available for download after a registration procedure (see the procedure at http://www6.paca.inra.fr/stics_eng/ under Download).

**VM**

# Available Meteorological Drivers

## Ameriflux

Scale: site

Resolution: 30 or 60 min

Availability: varies by site http://ameriflux.lbl.gov/data/data-availability/

Notes: Old ORNL server, use is deprecated

## AmerifluxLBL

Scale: site

Resolution: 30 or 60 min

Availability: varies by site http://ameriflux.lbl.gov/data/data-availability/

Notes: new Lawrence Berkeley Lab server

## Fluxnet2015

Scale: site

Resolution: 30 or 60 min

Availability: varies by site [http://fluxnet.fluxdata.org/sites/site-list-and-pages](http://fluxnet.fluxdata.org/sites/site-list-and-pages/)

Notes: Fluxnet 2015 synthesis product. Does not cover all FLUXNET sites

## NARR

Scale: North America

Resolution: 3 hr, approx. 32 km (Lambert conical projection)

Availability: 1979-present

## CRUNCEP

Scale: global

Resolution: 6 hr, 0.5 degree

Availability: 1901-2010

## CMIP5

Scale: varies by model

Resolution: 3 hr

Availability: 2006-2100

Currently only GFDL is available. Different scenarios and ensemble members can be set via Advanced Edit.

## NLDAS

Scale: Lower 48 + buffer

Resolution: 1 hour, 0.125 degree

Availability: 1980-present

## GLDAS

Scale: Global

Resolution: 3 hr, 1 degree

Availability: 1948-2010

## PalEON

Scale: -100 to -60 W Lon, 35 to 50 N Latitude (US northern hardwoods + buffer)

Resolution: 6 hr, 0.5 degree

Availability: 850-2010

## FluxnetLaThuile

Scale: site

Resolution: 30 or 60 min

Availability: varies by site http://www.fluxdata.org/DataInfo/Dataset%20Doc%20Lib/SynthDataSummary.aspx

Notes: 2007 synthesis. Fluxnet2015 supersedes this for sites that have been updated

## Geostreams

Scale: site

Resolution: varies

Availability: varies by site

Notes: This is a protocol, not a single archive.
The PEcAn functions currently default to querying [https://terraref.ncsa.illinois.edu/clowder/api/geostreams], which requires login and contains data from only two sites (Urbana IL and Maricopa AZ). However, the interface can be used with any server that supports the [Geostreams API](https://opensource.ncsa.illinois.edu/confluence/display/CATS/Geostreams+API).

## ERA5

Scale: Global

Resolution: 3 hrs and 31 km

Availability: 1950-present

Notes:

It's important to know that the raw ERA5 tiles need to be downloaded and registered in the database first. Inside the `inst` folder in the data.atmosphere package there are R files for downloading the tiles and registering them in BETY. However, this assumes that you have registered and set up your API requirements. Check out how to set up your API [here](https://confluence.ecmwf.int/display/CKB/How+to+download+ERA5#HowtodownloadERA5-3-DownloadERA5datathroughtheCDSAPI).
In the `inst` folder you can find two files (`ERA5_db_register.R` and `ERA5_USA_download.R`). If you set up your `ecmwf` account as explained in the link above, `ERA5_USA_download.R` will help you download all the tiles with all the variables required for PEcAn's `extract.nc.ERA5` function to generate PEcAn standard met files. Besides installing the required packages for this file, it should work from top to bottom with no problem. After downloading the tiles, the simple script in `ERA5_db_register.R` helps you register your tiles in BETY; `met.process` later uses that entry to find the required tiles for extracting met data for your sites. There are two important points about this script: (1) make sure you don't change the site id in the script (which is the same as the `ParentSite` in the ERA5 registration xml file), and (2) make sure the start and end dates in the script match the downloaded tiles. Set your `ERA5.files.path` to where you downloaded the tiles, and then the rest of the script should work fine.

## Download GFDL

The `download.GFDL` function assimilates 3-hour frequency CMIP5 outputs generated by multiple GFDL models. GFDL developed several distinct modeling streams on the timescale of CMIP5 and AR5. These models include CM3, ESM2M and ESM2G, with a spatial resolution of 2 degrees latitude by 2.5 degrees longitude. Each model has future outputs for the AR5 Representative Concentration Pathways ranging from 2006-2100.

## CM3

GFDL's CMIP5 experiments with CM3 included many of the integrations found in the long-term CMIP5 experimental design. The focus of this physical climate model is on the role of aerosols, aerosol-cloud interactions, and atmospheric chemistry in climate variability and climate change.

## ESM2M & ESM2G

Two new models representing ocean physics with alternative numerical frameworks were developed to explore the implications of some of the fundamental assumptions embedded in these models. Both ESM2M and ESM2G utilize a more advanced land model, LM3, than was available in ESM2.1, including a variety of enhancements (Milly et al., in prep). GFDL's CMIP5 experiments with Earth System Models included many of the integrations found in the long-term CMIP5 experimental design. The ESMs, by design, close the carbon cycle and are used to study the impact of climate change on ecosystems, ecosystem changes on climate, and human activities on ecosystems.

For more information please navigate [here](https://gfdl.noaa.gov/cmip).

The following table lists the ensemble members available for each scenario and model:
| Scenario | CM3 | ESM2M | ESM2G |
| -- | -- | -- | -- |
| rcp26 | | r1i1p1 | r1i1p1 |
| rcp45 | r1i1p1, r3i1p1, r5i1p1 | r1i1p1 | r1i1p1 |
| rcp60 | | r1i1p1 | r1i1p1 |
| rcp85 | r1i1p1 | r1i1p1 | r1i1p1 |
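A hedged sketch of calling the downloader directly from R. The `model`, `scenario`, and `ensemble_member` arguments follow the table above but should be checked against the current `PEcAn.data.atmosphere::download.GFDL` documentation; the coordinates, dates, and output folder are hypothetical examples:

``` R
# Hedged sketch: download GFDL CM3 rcp45 driver data for one location.
# outfolder, dates, lat.in, and lon.in are hypothetical example values.
library(PEcAn.data.atmosphere)

PEcAn.data.atmosphere::download.GFDL(
  outfolder       = "/tmp/GFDL_CM3_rcp45",
  start_date      = "2026-01-01",
  end_date        = "2030-12-31",
  lat.in          = 45.5,
  lon.in          = -84.7,
  model           = "CM3",
  scenario        = "rcp45",
  ensemble_member = "r1i1p1"
)
```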
# Database synchronization {#database-sync}

The database synchronization consists of 2 parts:
- Getting the data from the remote servers to your server
- Sharing your data with everybody else

## How does it work?

Each server that runs the BETY database will have a unique machine_id and a sequence of IDs associated with it. Whenever the user creates a new row in BETY, it will receive an ID in that sequence. This allows us to uniquely identify where a row came from. This information is crucial for the code that handles the synchronization, since we can now copy those rows that have an ID in the specified sequence. If you have not asked for a unique ID, your ID will be 99.

The synchronization code itself is split into two parts: loading data with the `load.bety.sh` script and exporting data using `dump.bety.sh`. If you do not plan to share data, you only need to use `load.bety.sh` to update your database.

## Set up

Requests for new machine IDs are currently handled manually. To request a machine ID contact Rob Kooper. In the examples below this ID is referred to as 'my siteid'.

To set up the database to use this ID you need to call load.bety in 'CREATE' mode (replacing 'my siteid' with the ID of your site):

```
sudo -u postgres {$PECAN}/scripts/load.bety.sh -c -u -m <my siteid>
```

WARNING: At the moment, running CREATE deletes all current records in the database. If you are running from the VM this includes both all runs you have done and all information that the database is prepopulated with (e.g. input and model records). Remote records can be fetched (see below), but local records will be lost (we're working on improving this!).

## Fetch latest data

When logged into the machine you can fetch the latest data using the load.bety.sh script. The script will check what site you want to get the data for and will remove all data in the database associated with that id. It will then reinsert all the data from the remote database.

The script is configured using environment variables. The following variables are recognized:
- DATABASE: the database where the script should write the results. The default is `bety`.
- OWNER: the owner of the database (if it is to be created). The default is `bety`.
- PG_OPT: additional options to be added to psql (default is nothing).
- MYSITE: the (numerical) ID of your site. If you have not requested an ID, use 99; this is used for all sites that do not want to share their data (i.e. VM). 99 is in fact the default.
- REMOTESITE: the ID of the site you want to fetch the data from. The default is 0 (EBI).
- CREATE: If 'YES', this indicates that the existing database (`bety`, or the one specified by DATABASE) should be removed. Set to YES (in caps) to remove the database. **THIS WILL REMOVE ALL DATA** in DATABASE. The default is NO.
- KEEPTMP: indicates whether the downloaded file should be preserved. Set to YES (in caps) to keep downloaded files; the default is NO.
- USERS: determines if default users should be created. Set to YES (in caps) to create default users with default passwords. The default is NO.

All of these variables can be specified as command line arguments as well; to see the options use -h.
```
load.bety.sh -h
./scripts/load.bety.sh [-c YES|NO] [-d database] [-h] [-m my siteid] [-o owner] [-p psql options] [-r remote siteid] [-t YES|NO] [-u YES|NO]
 -c create database, THIS WILL ERASE THE CURRENT DATABASE, default is NO
 -d database, default is bety
 -h this help page
 -m site id, default is 99 (VM)
 -o owner of the database, default is bety
 -p additional psql command line options, default is empty
 -r remote site id, default is 0 (EBI)
 -t keep temp folder, default is NO
 -u create carya users, this will create some default users

dump.bety.sh -h
./scripts/dump.bety.sh [-a YES|NO] [-d database] [-h] [-l 0,1,2,3,4] [-m my siteid] [-o folder] [-p psql options] [-u YES|NO]
 -a use anonymous user, default is YES
 -d database, default is bety
 -h this help page
 -l level of data that can be dumped, default is 3
 -m site id, default is 99 (VM)
 -o output folder where dumped data is written, default is dump
 -p additional psql command line options, default is -U bety
 -u should unchecked data be dumped, default is NO
```

## Sharing data

Sharing your data requires a few steps. First, before entering any data, you will need to request an ID from the PEcAn developers. Simply open an issue at github and we will generate an ID for you. If possible, add the URL of your data host.

You will now need to synchronize the database again using your ID. For example, if you are given ID=42 you can use the following command: `MYSITE=42 REMOTESITE=0 ./scripts/load.bety.sh`. This will load the EBI database and set the IDs such that any data you insert will have the right ID.

To share your data you can now run dump.bety.sh. The script is configured using environment variables; the following variables are recognized:
- DATABASE: the database where the script should write the results. The default is `bety`.
- PG_OPT: additional options to be added to psql (default is nothing).
- MYSITE: the ID of your site. If you have not requested an ID, use 99, which is used for all sites that do not want to share their data (i.e. VM). 99 is the default.
- LEVEL: the minimum access-protection level of the data to be dumped (0=private, 1=restricted, 2=internal collaborators, 3=external collaborators, 4=public). The default level for exported data is level 3.
  - Note that currently only the traits and yields tables have restrictions on sharing. If you share data, records from other (meta-data) tables will be shared. If you wish to extend the access_level to other tables please [submit a feature request](https://github.com/pecanproject/bety/issues/new).
- UNCHECKED: specifies whether unchecked traits and yields should be dumped. Set to YES (all caps) to dump unchecked data. The default is NO.
- ANONYMOUS: specifies whether all users should be anonymized. Set to YES (all caps) to keep the original users (**INCLUDING PASSWORDS**) in the dump file. The default is NO.
- OUTPUT: the location on disk where the result file will be written. The default is `${PWD}/dump`.

NOTE: If you want your dumps to be accessible to other PEcAn servers you need to perform the following additional steps:

1. Open pecan/scripts/load.bety.sh
2. In the DUMPURL section of the code add a new record indicating where you are dumping your data. Below is the example for SITE number 1 (Boston University):
```
 elif [ "${REMOTESITE}" == "1" ]; then
   DUMPURL="http://psql-pecan.bu.edu/sync/dump/bety.tar.gz"
```
3. Check your Apache settings to make sure this location is public
4. Commit this code and submit a Pull Request
5. From the URL in the Pull Request, PEcAn administrators will update the machines table, the status map, and notify other users to update their cron jobs (see Automation below)

Plans to simplify this process are in the works.

## Automation

Below is an example of a script to synchronize PEcAn database instances across the network.

db.sync.sh
```
#!/bin/bash
## make sure psql is in PATH
export PATH=/usr/pgsql-9.3/bin/:$PATH
## move to export directory
cd /fs/data3/sync
## Dump Data
MYSITE=1 /home/dietze/pecan/scripts/dump.bety.sh
## Load Data from other sites
MYSITE=1 REMOTESITE=2 /home/dietze/pecan/scripts/load.bety.sh
MYSITE=1 REMOTESITE=5 /home/dietze/pecan/scripts/load.bety.sh
MYSITE=1 REMOTESITE=0 /home/dietze/pecan/scripts/load.bety.sh
## Timestamp sync log
echo $(date +%c) >> /home/dietze/db.sync.log
```

Typically such a script is set up to run as a cron job. Make sure to schedule this job (`crontab -e`) as a user that has database privileges (typically postgres). The example below is a cron table that runs the sync every hour at 12 minutes past the hour.

```
MAILTO=user@yourUniversity.edu
12 * * * * /home/dietze/db.sync.sh
```

## Database maintenance

All databases need maintenance performed on them. Depending upon the database type this can happen automatically, or it needs to be run through a scheduler or manually. The BETYdb database is PostgreSQL and it needs to be reindexed and vacuumed on a regular basis. Reindexing introduces efficiencies back into the database by reorganizing the indexes. Vacuuming the database frees up resources by rearranging and compacting the database. Both of these operations are necessary and safe. As always, if there's a concern, a backup of the database should be made ahead of time. While the reindexing and vacuuming commands are running, users will notice a slowdown at times; therefore it's better to run these maintenance tasks during off hours.

### Reindexing the database

As mentioned above, reindexing allows the database to become more efficient. Over time, as data gets updated and deleted, the indexes become less efficient. This has a negative impact on executed statements. Reindexing makes the indexes efficient again (at least for a while), allowing faster statement execution and reducing the overall load on the database.

The reindex.bety.sh script is provided to simplify reindexing the database.

```
reindex.bety.sh -h
./reindex.bety.sh [-c datalog] [-d database] [-h] [-i table names] [-p psql options] [-q] [-s] [-t tablename]
 -c catalog, database catalog name used to search for tables, default is bety
 -d database, default is bety
 -h this help page
 -i table names, list of space-separated table names to skip over when reindexing
 -p additional psql command line options, default is -U bety
 -q the reindexing should be quiet
 -s reindex the database after reindexing the tables (this should be done sparingly)
 -t tablename, the name of the one table to reindex
```

If the database is small enough, it's reasonable to reindex the entire database at one time. To do this manually, run or schedule the REINDEX statement. For example:

```
reindex.bety.sh -s
```

For larger databases it may be desirable to reindex entire tables at a time. An efficient way to do this is to reindex the larger tables and then the entire database.
For example:

```
reindex.bety.sh -t traits; reindex.bety.sh -t yields;
reindex.bety.sh -s
```

For very large databases it may be desirable to reindex one or more individual indexes before reindexing tables and the database. In this case, running specific psql commands to reindex those specific indexes, followed by reindexing the table, is a possible approach. For example:

```
psql -U bety -c "REINDEX INDEX index_yields_on_citation_id; REINDEX INDEX index_yields_on_cultivar_id;"
reindex.bety.sh -t yields;
```

Splitting up the indexing commands over time allows the database to operate efficiently with minimal impact on users. One approach is to schedule the reindexing of large, complex tables at a specific off-time during the week, followed by a general reindexing (excluding those large tables) on a weekend night.

Please refer to the Automation section above for information on using cron to schedule reindexing commands.

### Vacuuming the database

Vacuuming the BETYdb PostgreSQL database reduces the amount of resources it uses and introduces its own efficiencies.

Over time, modified and deleted records leave 'holes' in the storage of the database. This is a common feature for most databases. Each database has its own way of handling this; in PostgreSQL it's the VACUUM command. The VACUUM command performs two main operations: cleaning up tables to make memory use more efficient, and analyzing tables for optimum statement execution. The use of the keyword ANALYZE indicates the second operation should take place.

The vacuum.bety.sh script is provided to simplify vacuuming the database.

```
vacuum.bety.sh -h
./vacuum.bety.sh [-c datalog] [-d database] [-f] [-h] [-i table names] [-n] [-p psql options] [-q] [-s] [-t tablename] [-z]
 -c catalog, database catalog name used to search for tables, default is bety
 -d database, default is bety
 -f perform a full vacuum to return resources to the system. Specify rarely, if ever
 -h this help page
 -i table names, list of space-separated table names to skip over when vacuuming
 -n only vacuum the tables and do not analyze, default is to first vacuum and then analyze
 -p additional psql command line options, default is -U bety
 -q the export should be quiet
 -s skip vacuuming the database after vacuuming the tables
 -t tablename, the name of the one table to vacuum
 -z only perform analyze, do not perform a regular vacuum, overrides -n and -f, sets -s
```

For small databases with light loads, it may be possible to set aside a time for a complete vacuum. During this time, commands executed against the database might fail (a temporary condition as the database gets cleaned up). The following command can be used to perform all the vacuum operations in one go:

```
vacuum.bety.sh -f
```

Generally it's not desirable to have downtime. If the system running the database doesn't need the resources that the database is using returned to it, a FULL vacuum can be avoided. This is the default behavior of the script:

```
vacuum.bety.sh
```

In larger databases, vacuuming the entire database can take a long time, causing a negative impact on users. This means that individual tables need to be vacuumed. How often a vacuum needs to be performed depends upon a table's activity: the more frequently updates and deletes occur on a table, the more frequent the vacuum should be. For large tables it may be desirable to separate the table cleanup from the analysis.
An example for completely vacuuming and analyzing a table is:

```
psql -U bety -c "VACUUM traits; VACUUM ANALYZE traits;"
```

Similar to indexes, vacuuming the most active tables followed by general database vacuuming and vacuum analyze may be a desirable approach.

Also note that it isn't necessary to run VACUUM ANALYZE for each vacuum performed. Separating the commands and performing a VACUUM ANALYZE after several regular vacuums may be sufficient, with less load on the database.

If the BETYdb database is running on a system with limited resources, or with resources that have become limited, the VACUUM command can return resources to the system from the database. The normal vacuuming process releases resources back to the database for reuse, but not to the system; generally this isn't a problem. Postgresql has a VACUUM keyword FULL that returns resources back to the system. Requesting a FULL vacuum will lock the table being vacuumed while it is being re-written, preventing any statements from being executed against it. If performing VACUUM FULL against the entire database, only the table being actively worked on is locked.

To minimize the impact a VACUUM FULL has on users, it's best to perform a normal vacuum before a FULL vacuum. If this approach is taken, there should be a minimal time gap between the normal VACUUM and the VACUUM FULL commands. A normal vacuum allows changes to be made, thus requiring the full vacuum to handle those changes, extending its run time. Reducing the time between the two commands lessens the work VACUUM FULL needs to do.

```
psql -U bety -c "VACUUM yields; VACUUM FULL yields; VACUUM ANALYZE yields;"
```

Given its impact, it's typically not desirable to perform a VACUUM FULL after every normal vacuum; it should be done on an "as needed" basis or infrequently.

## Troubleshooting

There are several possibilities if a scheduled cron job appears to be running but isn't producing the expected results. The following are suggestions on what to try to resolve the issue.

### Username and password

The user that scheduled a cron job may not have access permissions to the database. This can be easily confirmed by running the command line from the cron job while logged in as the user that scheduled the job. An error message will be shown if the user doesn't have permissions.

To resolve this, be sure to include a valid database user (not a BETYdb user) with their credentials on the command in crontab.

### db_hba.conf file

It's possible that the machine hosting the docker image of the database doesn't have permissions to access the database. This may be due to the cron job running on a machine that is not the docker instance of the database.

It may be necessary to look at the logs on the hosting machine to determine if database access permissions are causing a problem. Logs are stored in different locations depending upon the Operating System of the host and upon other environmental factors. This document doesn't provide information on where to find the logs.

To begin, it's best to look at the contents of the relevant database configuration file. The following command will display the contents of the db_hba.conf file.

```
psql -U postgres -qAt -c "show hba_file" | xargs grep -v -E '^[[:space:]]*#'
```

This command should return a series of text lines. For each row except those beginning with 'local', the fourth item describes the machines that can access the database.
In some cases an IP mask is specified in the fifth field that further restricts the machines that have access. The special word 'all' in the fourth column grants permissions to all machines. The last column on each line contains the authentication option for the machine(s) specified in the fourth column (with a possible fifth column IP mask modifier).

Ensure that the host machine is listed under the fourth column (machine address range, or 'all'), is also included in the IP mask if one was specified, and finally that any authentication options are not set to 'reject'. If the host machine is not included, the db_hba.conf file will need to be updated to allow access.

## Network Status Map

https://pecan2.bu.edu/pecan/status.php

Nodes: red = down, yellow = out-of-date schema, green = good

Edges: red = fail, yellow = out-of-date sync, green = good


## Tasks

Following is a list of tasks we plan on working on to improve these scripts:
- [pecanproject/bety#368](https://github.com/PecanProject/bety/issues/368) allow site-specific customization of information and UI elements including title, contacts, logo, color scheme.



# Standalone tools (modules)

- Radiative transfer modeling and remote sensing ([`modules/rtm`](https://pecanproject.github.io/modules/rtm/docs/index.html)); [vignette](https://pecanproject.github.io/modules/rtm/docs/articles/pecanrtm.vignette.html)
- Photosynthesis ([`modules/photosynthesis`](https://pecanproject.github.io/modules/photosynthesis/docs/index.html)); [vignette](https://pecanproject.github.io/modules/photosynthesis/docs/articles/ResponseCurves.html)
- Allometry ([`modules/allometry`](https://pecanproject.github.io/modules/allometry/docs/index.html)); [vignette](https://pecanproject.github.io/modules/allometry/docs/articles/AllomVignette.html)
- Load data ([`modules/benchmark`](https://pecanproject.github.io/modules/benchmark/docs/index.html) - `PEcAn.benchmark::load_data`)

## Loading Data in PEcAn {#LoadData}

If you are loading data into PEcAn for benchmarking, using the Benchmarking shiny app [provide link?] is recommended.

Data can be loaded manually using the `load_data` function, which in turn requires providing data format information using `query.format.vars` and the path to the data using `query.file.path`.

Below is a description of the `load_data` function and a simple example of loading data manually.

### Inputs

Required

- `data.path`: path to the data that is the output of the function `query.file.path` (see example below)
- `format`: R list object that is the output of the function `query.format.vars` (see example below)

Optional

- `start_year = NA`
- `end_year = NA`
- `site = NA`
- `vars.used.index = NULL`

### Output

- R data frame containing the requested variables converted into PEcAn standard names and units, with time steps in `POSIX` format.

### Example

The data for this example has already been entered into the database. To add new data go to [new data documentation](#NewInput).

To load the Ameriflux data for the Harvard Forest (US-Ha1) site:

1. Create a connection to the BETY database. This can be done using the R function

``` R
bety = PEcAn.DB::betyConnect(php.config = "pecan/web/config.php")
```

   where the complete path to the `config.php` is specified. See [here](https://github.com/PecanProject/pecan/blob/master/web/config.example.php) for an example `config.php` file.

2. Look up the inputs record for the data in BETY.
```{r, echo=FALSE, out.height = "50%", out.width = "50%", fig.align = 'center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/Input_ID_name.png")
```

   To find the input ID, either look at

   - The url of the record (see image above)

   - In R run

````R
library(dplyr)
input_name = "AmerifluxLBL_site_0-758" # copied directly from online
input.id = tbl(bety, "inputs") %>% filter(name == input_name) %>% pull(id)
````

3. Additional arguments to `query.format.vars` are optional

   1. If you only want to load a subset of dates in the data, specify start and end year; otherwise all data will be loaded.
   2. If you only want to load a select list of variables from the data, look up their IDs in BETY; otherwise all variables will be loaded.

4. In R run

```R
  format = PEcAn.DB::query.format.vars(bety, input.id)
```

   Examine the resulting R list object to make sure it returned the correct information.

   The example format contains the following objects:

```R
  $file_name
  [1] "AMERIFLUX_BASE_HH"

  $mimetype
  [1] "csv"

  $skip
  [1] 2

  $header
  [1] 1

  $na.strings
  [1] "-9999" "-6999" "9999"  "NA"

  $time.row
  [1] 4

  $site
  [1] 758

  $lat
  [1] 42.5378

  $lon
  [1] -72.1715

  $time_zone
  [1] "America/New_York"
```

   The first 4 rows of the table `format$vars` look like this:

   | bety_name | variable_id | input_name | input_units | storage_type | column_number | bety_units | mstmip_name | mstmip_units | pecan_name | pecan_units |
   | ------------ | ----------- | --------------- | ----------- | ------------ | ------------- | ---------- | ----------- | -------------- | ---------- | -------------- |
   | air_pressure | 554 | PA | kPa | | 19 | Pa | Psurf | Pa | Psurf | Pa |
   | airT | 86 | TA | celsius | | 4 | degrees C | Tair | K | Tair | K |
   | co2atm | 135 | CO2_1 | umol mol-1 | | 20 | umol mol-1 | CO2air | micromol mol-1 | CO2air | micromol mol-1 |
   | datetime | 5000000001 | TIMESTAMP_START | ymd_hms | %Y%m%d%H%M | 1 | ymd_hms | NA | NA | datetime | ymd_hms |

5. Get the path to the data

```R
  data.path = PEcAn.DB::query.file.path(
    input.id = input.id,
    host_name = PEcAn.remote::fqdn(),
    con = bety$con)
```

6. Load the data

```R
  data = PEcAn.benchmark::load_data(data.path = data.path, format = format)
```



# Shiny

## Testing the Shiny Server

Shiny can be difficult to debug because, when run as a web service, the R output is hidden in system log files that are hard to find and read.
One useful approach to debugging is to use port forwarding, as follows.

First, on the remote machine (including the VM), make sure R's working directory is set to the directory of the Shiny app (e.g., `setwd("/path/to/pecan/shiny/WorkflowPlots")`, or just open the app as an RStudio project).
Then, in the R console, run the app as:

```
shiny::runApp(port = XXXX)
# E.g. shiny::runApp(port = 5638)
```

Then, on your local machine, open a terminal and run the following command, matching `XXXX` to the port above and `YYYY` to any unused port on your local machine (any 4-digit number should work).

```
ssh -L YYYY:localhost:XXXX <user>@<remote machine>
# E.g., for the PEcAn VM, given the above port:
# ssh -L 5639:localhost:5638 carya@localhost -p 6422
```

Now, in a web browser on your local machine, browse to `localhost:YYYY` (e.g., `localhost:5639`) to run whatever app you started with `shiny::runApp` in the previous step.
All of the output should display in the R console where the `shiny::runApp` command was executed.
Note that this includes any `print`, `message`, `logger.*`, etc. statements in your Shiny app.

If the Shiny app hits an R error, the backtrace should include a line like `Hit error at of server.R#LXX` -- that `XX` being a line number that you can use to track down the error.
To return from the error to a normal R prompt, hit `Ctrl-C` (alternatively, the "Stop" button in RStudio).
To restart the app, run `shiny::runApp(port = XXXX)` again (keeping the same port).

Note that Shiny runs any code in the `pecan/shiny/` directory at the moment the app is launched.
So, any changes you make to the code in `server.R` and `ui.R` or scripts loaded therein will take effect the next time the app is started.

If for whatever reason this doesn't work with RStudio, you can always run R from the command line.
Also, note that the ability to forward ports (`ssh -L`) may depend on the `ssh` configuration of your remote machine.
These instructions have been tested on the PEcAn VM (v.1.5.2+).

## Debugging Shiny Apps

When developing shiny apps you can run the application from RStudio and place breakpoints in the code. To do this you will need to do the following steps first (already done on the VM) before starting RStudio:
- echo "options(shiny.port = 6438)" >> ${HOME}/.Rprofile
- echo "options(shiny.launch.browser = 'FALSE')" >> ${HOME}/.Rprofile

Next you will need to create a tunnel for port 6438 to the VM, which will be used to open the shiny app. The following command will create this tunnel: `ssh -l carya -p 6422 -L 6438:localhost:6438 localhost`.

Now you can run your application from RStudio using `shiny::runApp()` and it will show the output from the application in your console. You can now place breakpoints and evaluate the output.

## Checking Log Files
To create Log files on the VM, execute the following:
```
sudo -s
echo "preserve_logs true;" >> /etc/shiny-server/shiny-server.conf
service shiny-server restart
```
Then within the directory `/var/log/shiny-server` you will see log files for your specific shiny apps.



# Adding to PEcAn {#adding-to-pecan}

- Case studies
  - [Adding a model](#adding-model)
  - [Adding input data](#NewInput)
  - [Adding data through the web interface](#adding-data-web)
  - Adding new species, PFTs, and traits from a new site
    - Add a site
    - Add some species
    - Add PFT
    - Add trait data
  - Adding a benchmark
  - Adding a met driver
- [Reference](#editing-records) (How to edit records in bety)
  - Models
  - Species
  - PFTs
  - Traits
  - Inputs
  - DB files
  - Variables
  - Formats
  - (Link each section to relevant Bety tables)

## Adding An Ecosystem Model {#adding-model}

**Adding a model to PEcAn involves two activities:**

1. Updating the PEcAn database to register the model
2. Writing the interface modules between the model and PEcAn

**Note that coupling a model to PEcAn should not require any changes to the model code itself**. A key aspect of our design philosophy is that we want it to be easy to add models to the system and we want to use the working version of the code that is used by all other model users, not a special branch (which would rapidly end up out-of-date).

### Using PEcAn Database

To run a model within PEcAn requires that the PEcAn database has sufficient information about the model.
This includes a MODEL_TYPE designation, the types of inputs the model requires, the location of the model executable, and the plant functional types used by the model.

The instructions in this section assume that you will be specifying this information using the BETYdb web-based interface. This can be done either on your local VM (localhost:3280/bety or localhost:6480/bety) or on a server installation of BETYdb. However you interact with BETYdb, we encourage you to set up your PEcAn instance to support [database syncs](#database-sync) so that these changes can be shared and backed-up across the PEcAn network.

![](03_topical_pages/11_images/bety_main_page.png)

The figure below summarizes the relevant database tables that need to be updated to add a new model and the primary variables that define each table.

![](https://www.lucidchart.com/publicSegments/view/54a8aea8-9360-4628-af9e-392a0a00c27b/image.png)

### Define MODEL_TYPE

The first step to adding a model is to create a new MODEL_TYPE, which defines the abstract model class. This MODEL_TYPE is used to specify input requirements, define plant functional types, and keep track of different model versions.

The MODEL_TYPE is created by selecting Runs > Model Type and then clicking on _New Model Type_. The MODEL_TYPE name should be identical to the MODEL package name (see Interface Module below) and is case sensitive.

![](03_topical_pages/11_images/bety_modeltype_1.png)
![](03_topical_pages/11_images/bety_modeltype_2.png)

### MACHINE

The PEcAn design acknowledges that the same model executables and input files may exist on multiple computers. Therefore, we need to define the machine that we are using. If you are running on the VM then the local machine is already defined as _pecan_. Otherwise, you will need to select Runs > Machines, click _New Machine_, and enter the URL of your server (e.g. pecan2.bu.edu).

### MODEL

Next we are going to tell PEcAn where the model executable is. Select Runs > Files, and click ADD. Use the pull down menu to specify the machine you just defined above and fill in the path and name for the executable. For example, if SIPNET is installed at /usr/local/bin/sipnet then the path is /usr/local/bin/ and the file (executable) is sipnet.

Now we will create the model record and associate this with the File we just registered. The first time you do this select Runs > Models and click _New Model_. Specify a descriptive name of the model (which doesn't have to be the same as MODEL_TYPE), select the MODEL_TYPE from the pull down, and provide a revision identifier for the model (e.g. v3.2.1). Once the record is created select it from the Models table and click EDIT RECORD. Click on "View Related Files" and when the search window appears search for the model executable you just added (if you are unsure which file to choose you can go back to the Files menu and look up the unique ID number). You can then associate this Model record with the File by clicking on the +/- symbol. By contrast, clicking on the name itself will take you to the File record.

In the future, if you set up the SAME MODEL VERSION on a different computer you can add that Machine and File to PEcAn and then associate this new File with this same Model record. A single version of a model should only be entered into PEcAn **once**.

If a new version of the model is developed that is derived from the current version you should add this as a new Model record but with the same MODEL_TYPE as the original.
Furthermore, you should set the previous version of the model as Parent of this new version.

### FORMATS

The PEcAn database keeps track of all the input files passed to models, as well as any data used in model validation or data assimilation. Before we start to register these files with PEcAn we need to define the format these files will be in. To create a new format see [Formats Documentation](#NewFormat).

### MODEL_TYPE -> Formats

For each of the input formats you specify for your model, you will need to edit your MODEL_TYPE record to add an association between the format and the MODEL_TYPE. Go to Runs > Model Type, select your record and click on the Edit button. Next, click on "Edit Associated Formats" and choose the Format you just defined from the pull down menu. If the *Input* box is checked then all matching Input records will be displayed in the PEcAn site run selection page when you are defining a model run. In other words, the set of model inputs available through the PEcAn web interface is model-specific and dynamically generated from the associations between MODEL_TYPEs and Formats. If you also check the *Required* box, then the Input will be treated as required and PEcAn will not run the model if that input is not available. Furthermore, on the site selection webpage, PEcAn will filter the available sites and only display pins on the Google Map for sites that have a full set of required inputs (or where those inputs could be generated using PEcAn's workflows). Similarly, to make a site appear on the Google Map, all you need to do is specify Inputs, as described in the next section, and the point should automatically appear on the map.

### INPUTS

After a file Format has been created, input files can be registered with the database. Creating Inputs can be found under [How to insert new Input data](#NewInput).

### Add Plant Functional Types (PFTs)

Since many of the PEcAn tools are designed to keep track of parameter uncertainties and assimilate data into models, to use PEcAn with a model it is important to define Plant Functional Types for the sites or regions where you will be running the model.

Create a new PFT entry by selecting Data > PFTs and then clicking on _New PFT_.

![](03_topical_pages/11_images/bety_pft_1.png)
![](03_topical_pages/11_images/bety_pft_2.png)

Give the PFT a descriptive name (e.g., temperate deciduous). PFTs are MODEL_TYPE specific, so choose your MODEL_TYPE from the pull down menu.

![](03_topical_pages/11_images/bety_pft_3.png)

#### Species

Within PEcAn there are no predefined PFTs and users can create new PFTs very easily at whatever taxonomic level is most appropriate, from PFTs for individual species up to one PFT for all plants globally. To allow PEcAn to query its trait database for information about a PFT, you will want to associate species with the PFT record by choosing Edit and then "View Related Species". Species can be searched for by common or scientific name and then added to a PFT using the +/- button.

#### Cultivars

You can also define PFTs whose members are *cultivars* instead of species. This is designed for analyses where you want to perform meta-analysis on within-species comparisons (e.g. cultivar evaluation in an agricultural model) but may be useful for other cases when you want to specify different priors for some member of a species.
You cannot associate both species and cultivars with the same PFT, but the cultivars in a cultivar PFT may come from different species, potentially including all known cultivars from some of the species, if you wish to and have thought about how to interpret the results.

It is not yet possible to add a cultivar PFT through the BETYdb web interface. See [this GitHub comment](https://github.com/PecanProject/pecan/pull/1826#issuecomment-360665864) for an example of how to define one manually in PostgreSQL.

### Adding Priors for Each Variable

In addition to adding species, a PFT is defined in PEcAn by the list of variables associated with the PFT. PEcAn takes a fundamentally Bayesian approach to representing model parameters, so variables are not entered as fixed constants but as prior probability distributions.

There are a wide variety of priors already defined in the PEcAn database that often range from very diffuse and generic to very informative priors for specific PFTs.

These pre-existing prior distributions can be added to a PFT. Navigate to the PFT from Data > PFTs and select the edit button in the Actions column for the chosen PFT.

![](03_topical_pages/11_images/bety_priors_1.png)

Click on the "View Related Priors" button and search through the list for desired prior distributions. The list can be filtered by adding terms into the search box. Add a prior to the PFT by clicking on the far left button for the desired prior, changing it to an X.

![](03_topical_pages/11_images/bety_priors_2.png)

Save this by scrolling to the bottom of the PFT page and hitting the Update button.

![](03_topical_pages/11_images/bety_priors_3.png)

#### Creating new prior distributions

A new prior distribution can be created for a pre-existing variable, if a more constrained or specific one is known.

* Select Data > Priors then “New Prior”
* In the _Citation_ box, type in or select an existing reference that indicates how the prior was defined. There are a number of unpublished citations in current use that simply state the expert opinion of an individual
* Fill the _Variable_ box by typing in part or all of a pre-existing variable's name and selecting it
* The _Phylogeny_ box allows one to specify what taxonomic grouping the prior is defined for, and it is important to note that this is just for reference and doesn’t have to be specified in any standard way nor does it have to be monophyletic (i.e. it can be a functional grouping)
* The prior distribution is defined by choosing an option from the drop-down _Distribution_ box, and then specifying values for both _Parameter a_ and _Parameter b_. The exact meaning of the two parameters depends on the distribution chosen. For example, for the Normal distribution a and b are the mean and standard deviation while for the Uniform they are the minimum and maximum. All parameters are defined based on their standard parameterization in the R language
* Specify the prior sample size in _N_ if the prior is based on observed data (independent of data in the PEcAn database)
* When this is done, scroll down and hit the Create button

![](03_topical_pages/11_images/bety_priors_4.png)

The new prior distribution can then be added to a PFT as described in the "Adding Priors for Each Variable" section.

#### Creating new variables

It is important to note that the priors are defined for the variable name and units as specified in the Variables table.
**If the variable name or units are different within the model it is the responsibility of the write.config.MODEL function to handle name and unit conversions** (see Interface Modules below). This can also include common but nonlinear transformations, such as converting SLA to LMA or changing the reference temperature for respiration rates.

To add a new variable, select Data > Variables and click the New Variable button. Fill in the _Name_ field with the desired name for the variable and the units in the _Units_ field. There are additional fields, such as _Standard Units_, _Notes_, and _Description_, that can be filled out if desired. When done, hit the Create button.

![](03_topical_pages/11_images/bety_priors_5.png)

The new variable can be used to create a prior distribution for it as in the "Creating new prior distributions" section.

### Interface Modules

#### Setting up the module directory (required)

PEcAn assumes that the interface modules are available as an R package in the models directory named after the model in question. The simplest way to get started on that R package is to make a copy of the [_template_](https://github.com/PecanProject/pecan/tree/master/models/template) directory in the pecan/models folder and re-name it to the name of your model. In the code, filenames, and examples below you will want to substitute the word **MODEL** for the name of your model (note: R is case-sensitive).

If you do not want to write the interface modules in R then it is fairly simple to set up the R functions described below to just call the script you want to run using R's _system_ command. Scripts that are not R functions should be placed in the _inst_ folder and R can look up the location of these files using the function _system.file_, which takes as arguments the _local_ path of the file within the installed package folder and the name of the package (typically PEcAn.MODEL). For example

    ## Example met conversion wrapper function
    met2model.MODEL <- function(in.path, in.prefix, outfolder, start_date, end_date){
      # files placed in inst/ are installed at the package root, so no "inst/" prefix
      myMetScript <- system.file("met2model.MODEL.sh", package = "PEcAn.MODEL")
      system(paste(myMetScript, file.path(in.path, in.prefix), outfolder, start_date, end_date))
    }

would execute the following at the Linux command line

    met2model.MODEL.sh in.path/in.prefix outfolder start_date end_date

#### DESCRIPTION

Within the module folder open the *DESCRIPTION* file and change the package name to PEcAn.MODEL. Fill out other fields such as Title, Author, Maintainer, and Date.

#### NAMESPACE

Open the *NAMESPACE* file and change all instances of MODEL to the name of your model. If you are not going to implement one of the optional modules (described below) at this time then you will want to comment those out using the pound sign `#`. For a complete description of R NAMESPACE files [see here](http://cran.r-project.org/doc/manuals/r-devel/R-exts.html#Package-namespaces). If you create additional functions in your R package that you want to be used make sure you include them in the NAMESPACE as well (internal functions don't need to be declared).

#### Building the package

Once the package is defined you will then need to add it to the PEcAn build scripts. From the root of the pecan directory, go into the _scripts_ folder and open the file _build.sh_. Within the section of code that includes PACKAGES= add models/MODEL to the list of packages to compile.
If, in writing your module, you add any other R packages to the system you will want to make sure those are listed in the DESCRIPTION and in the script **scripts/install.dependencies.R**. Next, from the root pecan directory open all/DESCRIPTION and add your model package to the *Suggests:* list.

At any point, if you want to check if PEcAn can build your MODEL package successfully, just go to the linux command prompt and run **scripts/build.sh**. You will need to do this before the system can use these packages.

#### write.config.MODEL (required)

This module performs two primary tasks. The first is to take the list of parameter values and model input files that it receives as inputs and write those out in whatever format(s) the MODEL reads (e.g. a settings file). The second is to write out a shell script, jobs.sh, which, when run, will start your model run and convert its output to the PEcAn standard (netCDF with metadata currently equivalent to the [MsTMIP standard](http://nacp.ornl.gov/MsTMIP_variables.shtml)). Within the MODEL directory take a close look at inst/template.job and the example write.config.MODEL to see an example of how this is done. It is important that this script writes or moves outputs to the correct location so that PEcAn can find them. The example function also shows an example of writing a model-specific settings/config file, also by using a template.

You are encouraged to read the section above on defining PFTs before writing write.config.MODEL so that you understand what model parameters PEcAn will be passing you, how they will be named, and what units they will be in. Also note that the (optional) PEcAn input/driver processing scripts are called by separate workflows, so the paths to any required inputs (e.g. meteorology) will already be in the model-specific format by the time write.config.MODEL receives that info.

#### Output Conversions

The module model2netcdf.MODEL converts model output into the PEcAn standard (netCDF with metadata currently equivalent to the [MsTMIP standard](http://nacp.ornl.gov/MsTMIP_variables.shtml)). This function was previously required, but now that the conversion is called within jobs.sh it may be easier for you to convert outputs using other approaches (or to just directly write outputs in the standard).

Whether you implement this function or convert outputs some other way, please note that PEcAn expects all outputs to be broken up into ANNUAL files with the year number as the file name (i.e. YEAR.nc), though these files may contain any number of scalars, vectors, matrices, or arrays of model outputs, such as time-series of each output variable at the model's native timestep.

Note: PEcAn reads all variable names from the files themselves so it is possible to add additional variables that are not part of the MsTMIP standard. Similarly, there are no REQUIRED output variables, though *time* is highly encouraged. We are shortly going to establish a canonical list of PEcAn variables so that if users add additional output variables they become part of the standard. **We don't want two different models to call the same output with two different names or different units** as this would prohibit the multi-model syntheses and comparisons that PEcAn is designed to facilitate.

#### met2model.MODEL

`met2model.MODEL(in.path, in.prefix, outfolder, start_date, end_date)`

Converts meteorology input files from the PEcAn standard (netCDF, CF metadata) to the format required by the model.
This file is optional if you want to load all of your met files into the Inputs table as described in [How to insert new Input data](../developers_guide/How-to-insert-new-Input-data.html), which is often the easiest way to get up and running quickly. However, this function is required if you want to benefit from PEcAn's meteorology workflows and model run cloning. You'll want to take a close look at [Adding-an-Input-Converter] to see the exact variable names and units that PEcAn will be providing. Also note that PEcAn splits all meteorology up into ANNUAL files, with the year number explicitly included in the file name, and thus what PEcAn will actually be providing is **in.path**, the input path to the folder where multiple met files may be stored, and **in.prefix**, the start of the filename that precedes the year (i.e. an individual file will be named `<in.prefix>.YEAR.nc`). It is valid for in.prefix to be blank. The additional REQUIRED arguments to met2model.MODEL are **outfolder**, the output folder where PEcAn wants you to write your meteorology, and **start_date** and **end_date**, the time range the user has asked the meteorology to be processed for.

#### Commit changes

Once the MODEL modules are written, you should follow the [Using-Git](Using-Git.md) instructions on how to commit your changes to your local git repository, verify that PEcAn compiles using *scripts/build.sh*, push these changes to Github, and submit a pull request so that your model module is added to the PEcAn system. It is important to note that while we encourage users to make their models open, adding the PEcAn interface module to the Github repository in no way requires that the model code itself be made public. It does, however, allow anyone who already has a copy of the model code to use PEcAn so we strongly encourage that any new model modules be committed to Github.




## Adding input data {#NewInput}

### Input records in BETY

All model input data or data used for model calibration/validation must be registered in the BETY database.

Before creating a new Input record, you must make sure that the format type of your data is registered in the database. If you need to make a new format record, see [Creating a new format record in BETY](#NewFormat).

### Create a database file record for the input data

An input record contains all the metadata required to identify the data; however, this record does not include the location of the data file. Since the same data may be stored in multiple places, every file has its own dbfile record.

From your BETY interface:

* Create a DBFILES entry for the path to the file
  + From the menu click RUNS then FILES
  + Click “New File”
  + Select the machine your file is located at
  + Fill in the File Path where your file is located (aka folder or directory) NOT including the name of the file itself
  + Fill in the File Name with the name of the file itself. Note that some types of input records will refer to ALL the files in a directory and thus File Name can be blank
  + Click Update

### Creating a new Input record in BETY

From your BETY interface:

* Create an INPUT entry for your data
  + From the menu click RUNS then INPUTS
  + Click “New Input”
  + Select the SITE that the input data set is associated with
  + Other required fields are a unique name for the input, the start and end dates of the data set, and the format of the data.
If the data is not in a currently known format you will need to create a NEW FORMAT and possibly a new input converter. Instructions on how to add a converter can be found here: [Input conversion](#InputConversions). Instructions on how to add a format record can be found [here](#NewFormat)
  + Parent ID is an optional variable to indicate that one dataset was derived from another.
  + Click “Create”
* Associate the DBFILE with the INPUT
  + In the RUNS -> INPUTS table, search and find the input record you just created
  + Click on the EDIT icon
  + Select “View related Files”
  + In the Search window, search for the DBFILE you just created
    * Once you have found the DBFILE, click on the “+” icon to add the file
    * Click on “Update” at the bottom when you are done.

### Adding a new input converter {#InputConversions}

Three types of data conversions are discussed below: Meteorological data, Vegetation data, and Soil data. Each section provides instructions on how to convert data from their raw formats into a PEcAn standard format, whether it be from a database or if you have raw data in hand.

Also, see [PEcAn standard formats].

#### Meteorological Data

##### Adding a function to PEcAn to convert a met data source

In general, you will need to write a function to download the raw met data and one to convert it to the PEcAn standard.

Functions for downloading raw data are named `download.<source>.R`. These functions are stored within the PEcAn directory: [`/modules/data.atmosphere/R`](https://github.com/PecanProject/pecan/tree/develop/modules/data.atmosphere/R).

Conversion functions from raw to standard are named `met2CF.<source>.R`. These functions are stored within the PEcAn directory: [`/modules/data.atmosphere/R`](https://github.com/PecanProject/pecan/tree/develop/modules/data.atmosphere/R).

Current Meteorological products that are coupled to PEcAn can be found in our [Available Meteorological Drivers] page.

Note: Unless you are also adding a new model, you will not need to write a script to convert from the PEcAn standard to PEcAn models. Those conversion scripts are written when a model is added and can be found within each model's PEcAn directory.

*Standard dimensions, names, and units can be found here:* [Input Standards]

##### Adding Single-Site Specific Meteorological Data

Perhaps you have meteorological data specific to one site, with a unique format that you would like to add to PEcAn. Your steps would be to:

 1. write a script or function to convert your files into the netcdf PEcAn standard
 2. insert that file as an input record for your site following these [instructions](#NewInput)

##### Processing Met data outside of the workflow using PEcAn functions

Perhaps you would like to obtain data from one of the sources coupled to PEcAn on its own. To do so you can run PEcAn functions on their own.

###### Example 1: Processing data from a database

Download AmerifluxLBL data from Niwot Ridge for the year 2004:

```
raw.file <- PEcAn.data.atmosphere::download.AmerifluxLBL(sitename = "US-NR1",
                                                         outfolder = ".",
                                                         start_date = "2004-01-01",
                                                         end_date = "2004-12-31")
```

Using the information returned as the object `raw.file` you will then convert the raw files into a standard file.

Open a connection with BETY. You may need to change the host name depending on what machine you are hosting BETY. You can find the hostname listed in the machines table of BETY.
```
bety <- dplyr::src_postgres(dbname = 'bety',
                            host = 'localhost',
                            user = "bety",
                            password = "bety")

con <- bety$con
```

Next you will set up the arguments for the function

```
in.path <- '.'
in.prefix <- raw.file$dbfile.name
outfolder <- '.'
format.id <- 5000000002
format <- PEcAn.DB::query.format.vars(format.id = format.id, bety = bety)
lon <- -105.54
lat <- 40.03
format$time_zone <- "America/Chicago"
```
Note: The format.id can be pulled from the BETY database if you know the format of the raw data.

Once these arguments are defined you can execute the `met2CF.csv` function

```
PEcAn.data.atmosphere::met2CF.csv(in.path = in.path,
                                  in.prefix = in.prefix,
                                  outfolder = ".",
                                  start_date = "2004-01-01",
                                  end_date = "2004-12-31",
                                  lat = lat,
                                  lon = lon,
                                  format = format)
```



###### Example 2: Processing data from data already in hand

If you have met data already in hand and you would like to convert it into the PEcAn standard, follow these instructions.

Update BETY with a file record, format record, and input record according to this page: [How to Insert new Input Data](#NewInput)

If your data is in a csv format you can use the `met2CF.csv` function to convert your data into a PEcAn standard file.

Open a connection with BETY. You may need to change the host name depending on what machine you are hosting BETY. You can find the hostname listed in the machines table of BETY.

```
bety <- dplyr::src_postgres(dbname = 'bety',
                            host = 'localhost',
                            user = "bety",
                            password = "bety")

con <- bety$con
```

Prepare the arguments you need to execute the met2CF.csv function

```
in.path <- 'path/where/the/raw/file/lives'
in.prefix <- 'prefix_of_the_raw_file'
outfolder <- 'path/to/where/you/want/to/output/thecsv/'
format.id <- 1000000001              # placeholder: id of the format you created
format <- PEcAn.DB::query.format.vars(format.id = format.id, bety = bety)
lon <- 0.00                          # longitude of your site
lat <- 0.00                          # latitude of your site
format$time_zone <- "Country/City"   # time zone of your site
start_date <- "y-m-d"                # start date of your data
end_date <- "y-m-d"                  # end date of your data
```

Next you can execute the function:
```
PEcAn.data.atmosphere::met2CF.csv(in.path = in.path,
                                  in.prefix = in.prefix,
                                  outfolder = outfolder,
                                  start_date = start_date,
                                  end_date = end_date,
                                  lat = lat,
                                  lon = lon,
                                  format = format)
```


#### Vegetation Data

Vegetation data will be required to parameterize your model. In these examples we will go over how to produce a standard initial condition file.

The main function to process cohort data is the `ic.process.R` function. As of now however, if you require pool data you will run a separate function, `pool_ic_list2netcdf.R`.

###### Example 1: Processing Veg data from data in hand.

In the following example we will process vegetation data that you have in hand using PEcAn.

First, you'll need to create an input record in BETY that will have a file record and format record reflecting the location and format of your file. Instructions can be found in our [How to Insert new Input Data](#NewInput) page.

Once you have created an input record you must take note of the input id of your record. An easy way to take note of this is in the URL of the BETY webpage that shows your input record. In this example we use an input record with the id `1000013064` which can be found at this url: https://psql-pecan.bu.edu/bety/inputs/1000013064# . Note that this is the Boston University BETY database.
If you are on a different machine, your url will be different.

With the input id in hand you can now edit a pecan XML so that the PEcAn function `ic.process` will know where to look in order to process your data. The `inputs` section of your pecan XML will look like this. As of now ic.process is set up to work with the ED2 model so we will use ED2 settings and then grab the intermediary Rds data file that is created as the standard PEcAn file. For your Inputs section you will need to input your input id wherever you see the `useic` flag.
```
<inputs>
  <css>
    <source>FFT</source>
    <output>css</output>
    <username>pecan</username>
    <id>1000013064</id>
    <useic>TRUE</useic>
    <metadata>
      <trk>1</trk>
      <age>70</age>
      <area>400</area>
    </metadata>
  </css>
  <pss>
    <source>FFT</source>
    <output>pss</output>
    <username>pecan</username>
    <id>1000013064</id>
    <useic>TRUE</useic>
  </pss>
  <site>
    <source>FFT</source>
    <output>site</output>
    <username>pecan</username>
    <id>1000013064</id>
    <useic>TRUE</useic>
  </site>
  <met>
    <source>CRUNCEP</source>
    <output>ED2</output>
  </met>
  <lu>
    <id>294</id>
  </lu>
  <thsum>
    <id>297</id>
  </thsum>
  <veg>
    <id>295</id>
  </veg>
  <soil>
    <id>296</id>
  </soil>
</inputs>
```

This IC workflow also supports generating ensembles of initial conditions from posterior estimates of DBH. To do this the tags below can be inserted to the pecan.xml:
```
<css>
  <source>PalEON</source>
  <output>css</output>
  <id>1000015682</id>
  <useic>TRUE</useic>
  <ensemble>20</ensemble>
  <metadata>
    <area>1256.637</area>
    <n.patch>3</n.patch>
  </metadata>
</css>
```
Here the `id` should point to a file that has MCMC samples to generate the ensemble from. The number between the `<ensemble>` tags defines the number of ensembles requested. The workflow will populate the settings list `run$inputs` tag with ensemble member information. E.g.:
```
<inputs>
  <css>
    <path>
      <path1>...</path1>
      <path2>...</path2>
      <path3>...</path3>
      <path4>...</path4>
      <path5>...</path5>
    </path>
  </css>
  <pss>
    <path>
      <path1>...</path1>
      <path2>...</path2>
      <path3>...</path3>
      <path4>...</path4>
      <path5>...</path5>
    </path>
  </pss>
  <site>
    <path>
      <path1>...</path1>
      <path2>...</path2>
      <path3>...</path3>
      <path4>...</path4>
      <path5>...</path5>
    </path>
  </site>
</inputs>
```

Once you edit your PEcAn.xml you can then create a settings object using PEcAn functions. Your `pecan.xml` must be in your working directory.

```
settings <- PEcAn.settings::read.settings("pecan.xml")
settings <- PEcAn.settings::prepare.settings(settings, force = FALSE)
```
You can then execute the `ic.process` function to convert data into a standard Rds file:

```
input <- settings$run$inputs
dir <- "."
ic.process(settings, input, dir, overwrite = FALSE)
```

Note that the argument `dir` is set to the current directory. You will find the final ED2 file there. More importantly, though, you will find the `.Rds` file within the same directory.




###### Example 3: Pool Initial Condition files

If you have pool vegetation data, you'll need the [`pool_ic_list2netcdf.R`](https://github.com/PecanProject/pecan/blob/develop/modules/data.land/R/pool_ic_list2netcdf.R) function to convert the pool data into PEcAn standard.

The function stands alone and requires that you provide a named list of netcdf dimensions and values, and a named list of variables and values. Names and units need to match the standard_vars.csv table found [here](https://github.com/PecanProject/pecan/blob/develop/base/utils/data/standard_vars.csv).

```
# Create a list object with necessary dimensions for your site
input <- list()
dims <- list(lat = -115, lon = 45, time = 1)
variables <- list(SoilResp = 8, TotLivBiom = 295)
input$dims <- dims
input$vals <- variables
```

Once this is done, set `outdir` to where you'd like the file to write out to and a siteid. Siteid here is used as a file name identifier. Once part of the automated workflow, siteid will reflect the site id within the BETY db.

```
outdir <- "."
siteid <- 772
pool_ic_list2netcdf(input = input, outdir = outdir, siteid = siteid)
```

You should now have a netcdf file with initial conditions.

#### Soil Data

###### Example 1: Converting Data in hand

Local data that has the correct names and units can easily be written out in PEcAn standard using the function soil2netcdf.
```
soil.data <- list(volume_fraction_of_sand_in_soil = c(0.3, 0.4, 0.5),
                  volume_fraction_of_clay_in_soil = c(0.3, 0.3, 0.3),
                  soil_depth = c(0.2, 0.5, 1.0))

soil2netcdf(soil.data, "soil.nc")
```

At the moment this file would need to be inserted into Inputs manually. By default, this function also calls soil_params, which will estimate a number of hydraulic and thermal parameters from texture. Be aware that at the moment not all model couplers are yet set up to read this file and/or convert it to model-specific formats.


###### Example 2: Converting PalEON data

In addition to location-specific soil data, PEcAn can extract soil texture information from the PalEON regional soil product, which itself is a subset of the MsTMIP Unified North American Soil Map. If this product is installed on your machine, the appropriate step in the do_conversions workflow is enabled by adding the following tag under `<inputs>` in your pecan.xml

```xml
<soil>
  <id>1000012896</id>
</soil>
```

In the future we aim to extend this extraction to a wider range of soil products.


###### Example 3: Extracting soil properties from gSSURGO database

In addition to location-specific soil data, PEcAn can extract soil texture information from the gSSURGO data product. This product needs no installation and it extracts soil properties for the lower 48 states of the U.S. In order to let PEcAn know that you're planning to use gSSURGO, you can add the following XML tag under `<inputs>` in your pecan.xml file.

```xml
<inputs>
  <soil>
    <source>gSSURGO</source>
  </soil>
</inputs>
```




## Pecan Data Ingest via Web Interface {#adding-data-web}

This tutorial explains the process of ingesting data into PEcAn via our Data-Ingest Application. In order to ingest data, the users must first select data that they wish to upload. Then, they enter metadata to help PEcAn parse and load the data into the main PEcAn workflow.

### Loading Data

#### Selecting Ingest Method
The Data-Ingest application is capable of loading data from the DataONE data federation and from the user's local machine. The first step in the workflow is therefore to select an upload method. The application defaults to uploading from DataONE. To upload data from a local device, simply select the radio button titled `Local Files`.

#### DataONE Upload Example
- -```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/D1Ingest-1.gif") -``` -
The DataONE download feature allows the user to download data at a given DOI or DataONE-specific package id. To do so, enter the DOI or identifier in the `Import From DataONE` field and select `download`. The download process may take a couple of minutes to run depending on the number of files in the DataONE package. This may be a convenient option if the user does not wish to download files directly to their local machine. Once the files have been successfully downloaded from DataONE, they are displayed in a table. Before proceeding to the next step, the user can select a file to ingest by clicking on the corresponding row in the data table.
- -### Local Upload Example - -
To upload local files, the user should first select the `Local Files` button. From there, the user can upload files from their local machine by selecting `Browse` or by dragging and dropping files into the text box. The files will begin uploading automatically. Once the upload is complete, the user should select a file to ingest and then select the `Next Step` button.
-After this step, the workflow is identical for both methods. However, please note that if it becomes necessary to switch from loading data via `DataONE` to uploading local files after the first step, please restart the application. -
- -```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/Local_loader_sm.gif") -``` -
-```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/local_browse.gif") -``` -### 2. Creating an Input Record -Creating an input record requires some basic metadata about the file that is being ingested. Each entry field is briefly explained below. -
- - - Site: To link the selected file with a site, the user can scroll or type to search all the sites in PEcAn. See Example: -
-```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/Selectize_Input_sm.gif") -``` -
-- Parent: To link the selected file with another dataset, type to search existing datasets in the `Parent` field. - -- Name: this field should be autofilled by selecting a file in step 1. - -- Format: If the selected file has an existing format name, the user can search and select in the `Format` field. If the selected file's format is not already in pecan, the user can create a new format by selecting `Create New Format`. Once this new format is created, it will automatically populate the `Format` box and the `Current Mimetype` box (See Section 3). - -- Mimetype: If the format already exists, select an existing mimetype. - -- Start and End Date and Time: Inputs can be entered manually or by using the user interface. See example - -
-```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/DateTime.gif") -``` - -- Notes: Describe the data that is being uploaded. Please include any citations or references. - -### 3. Creating a format record -If it is necessary to add a new format to PEcAn, the user should fill out the form attached to the `Create New Format` button. The inputs to this form are described below: - -- Mimetype: type to search existing mimetypes. If the mimetype is not in that list, please click on the link `Create New Mimetype` and create a new mimetype via the BETY website. - -- New Format Name: Add the name of the new format. Please exclude spaces from the name. Instead please use underscores "_". - -- Header: If there is space before the first line of data in the dataset, please select `Yes` - -- Skip: The number of lines in the header that should be skipped before the data. - -- Please enter notes that describe the format. - -Example: -
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/new_format_record.gif")
```
### 4. Formats_Variables Record
The final step in the ingest process is to register a formats-variables record. This record links pecan variables with variables from the selected data.

- Variable: PEcAn variable that is equivalent to the variable in the selected file.

- Name: The variable name in the imported data need only be specified if it differs from the BETY variable name.

- Unit: Should be in a format parseable by the udunits library and need only be specified if the units of the data in the file differ from the BETY standard.

- Storage Type: Storage type need only be specified if the variable is stored in a format other than would be expected (e.g. if numeric values are stored as quoted character strings). Additionally, storage_type stores POSIX codes that are used to store any time variables (e.g. a column with a 4-digit year would be `%Y`).

- Column Number: Vector of integers that list the column numbers associated with variables in a dataset. Required for text files that lack headers.
```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'}
knitr::include_graphics("02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/data-ingest/D1Ingest-9_sm.gif")
```

Finally, the path to the ingest data is displayed in the `Select Files` box.


## Creating a new format {#NewFormat}
### Formats in BETY

The PEcAn database keeps track of all the input files passed to models, as well as any data used in model validation or data assimilation. Before we start to register these files with PEcAn we need to define the format these files will be in.

The main goal is to take all the meta-data we have about a data file and create a record of it that pecan can use as a guide when parsing the data file.

This information is stored in a Format record in the bety database. Make sure to read through the current Formats before deciding to make a new one.

### Creating a new format in BETY

If the Format you are looking for is not available, you will need to create a new record. Before entering information into the database, you need to be able to answer the following questions about your data:

- What is the file MIME type?

  - We have a suite of functions for loading in data in open formats such as CSV, txt, netCDF, etc.

  - PEcAn has partnered with the [NCSA BrownDog project](http://browndog.ncsa.illinois.edu/) to create a service that can read and convert as many data formats as possible. If your file type is less common or a proprietary type, you can use the [BrownDog DAP](http://dap.ncsa.illinois.edu/) to convert it to a format that can be used with PEcAn.

  - If BrownDog cannot convert your data, you will need to contact us about writing a data specific load function.

- What variables does the file contain?

  - What are the variables named?

  - What are the variable units?

  - How do the variable names and units in the data map to PEcAn variables in the BETY database? See the Name and Unit section below for an example. It is most likely that you will NOT need to add variables to BETY. However, identifying the appropriate variable matches in the database may require some work. We are always available to help answer your questions.

- Is there a timestamp on the data?

  - What are the units of time?


Here is an example using a fake dataset:

![example_data](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/example_data.png)



This data started out as an excel document, but was saved as a CSV file.

To create a Formats record for this data, in the web interface of BETY, select Runs > Formats and click _New Format_.

You will need to fill out the following fields:

- MIME type: File type (you can search for other formats in the text field)
- Name: The name of your format (this can be whatever you want)
- Header: Boolean that denotes whether or not your data contains a header as the first line of the data. (1 = TRUE, 0 = FALSE)
- Skip: The number of lines above the data that should be skipped. For example, metadata that should not be included when reading in the data or blank spaces.
- Notes: Any additional information about the data such as sources and citations.

Here is the Formats record for the example data:

![format_record_1](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/format_record_1.png)

When you have finished this section, hit Create. The final record will be displayed on the screen.
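Once the record exists, it can be sanity-checked from R before any Inputs are linked to it. The sketch below is illustrative only: the `format.id` value is a placeholder for the id of the record you just created (visible in the record's URL), and `query.format.vars` is described in more detail under "Retrieving Format Information" below.

```R
# Connect to BETY; adjust php.config to match your installation
bety <- PEcAn.DB::betyConnect(php.config = "pecan/web/config.php")

# 1000000100 is a placeholder: substitute the id of your new Format record
format <- PEcAn.DB::query.format.vars(bety = bety, format.id = 1000000100)

# Confirm that header, skip, and na.strings match what you entered, and that
# format$vars maps your file's variables as intended
str(format)
```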
#### Formats -> Variables

After a Format entry has been created, you are encouraged to edit the entry to add relationships between the file's variables and the Variables table in PEcAn. Not only do these relationships provide meta-data describing the file format, but they also allow PEcAn to search and (for some MIME types) read files.

To enter this data, select Edit Record and on the edit screen select View Related Variable.

Here is the record for the example data after adding related variables:

![format_record_2](02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/format_record_2.png)

##### Name and Unit

For each variable in the file you will want at a minimum to specify the NAME of the variable within your file and match that to the equivalent Variable in the pulldown.

Make sure to search for your variables under Data > Variables before suggesting that we create a new variable record. This may not always be a straightforward process.

For example bety contains a record for Net Primary Productivity:

![var_record](04_advanced_user_guide/images/var_record.png)

This record does not have the same variable name or the same units as NPP in the example data.
You may have to do some reading to confirm that they are the same variable.
In this case
- Both the data and the record are for Net Primary Productivity (the notes section provides additional resources for interpreting the variable.)
- The units of the data can be converted to those of the variable record (this can be checked by running `udunits2::ud.are.convertible("g C m-2 yr-1", "Mg C ha-1 yr-1")`)

Differences between the data and the variable record can be accounted for in the data Formats record.

- Under Variable, select the variable as it is recorded in bety.
- Under Name, write the name the variable has in your data file.
- Under Unit, write the units the variable has in your data file.

NOTE: All units must be written in a udunits compliant format. To check that your units can be read by udunits, in R, load the udunits2 package and run `udunits2::is.parseable("g C m-2 yr-1")`

**If the name or the units are the same**, you can leave the Name and Unit fields blank. This can be seen with the variable LAI.

##### Storage Type

_Storage Type_ only needs to be specified if the variable is stored in a format other than what would be expected (e.g. if numeric values are stored as quoted character strings).

One such example is *time variables*.

PEcAn converts all dates into POSIX format using R functions such as `strptime`. These functions require that the user specify the format in which the date is written.

The default is `"%Y-%m-%d %H:%M:%S"` which would look like `"2017-01-01 00:00:00"`

A list of date formats can be found in the [R documentation for the function `strptime`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/strptime.html)

Below are some commonly used codes:

| %d | Day of the month as decimal number (01–31). |
| ---- | ---------------------------------------- |
| %D | Date format such as %m/%d/%y. |
| %H | Hours as decimal number (00–23). |
| %m | Month as decimal number (01–12). |
| %M | Minute as decimal number (00–59). |
| %S | Second as integer (00–61), allowing for up to two leap-seconds (but POSIX-compliant implementations will ignore leap seconds). |
| %T | Equivalent to %H:%M:%S. |
| %y | Year without century (00–99). On input, values 00 to 68 are prefixed by 20 and 69 to 99 by 19 – that is the behaviour specified by the 2004 and 2008 POSIX standards, but they do also say ‘it is expected that in a future version the default century inferred from a 2-digit year will change’. |
-| %Y | Year with century. |
-
-
-##### Column Number
-
-If your data is in text format with variables in a standard order, then you can specify the Column Number for the variable. This is required for text files that lack headers.
-
-#### Retrieving Format Information
-
-To acquire Format information from a Format record, use the R function `query.format.vars`.
-
-##### Inputs
-
-- `bety`: connection to BETY
-- `input.id=NA` and/or `format.id=NA`: Input or Format record ID from BETY
-  - At least one must be specified. Defaults to `format.id` if both are provided.
-- `var.ids=NA`: optional vector of variable IDs. If provided, limits results to these variables.
-
-##### Output
-
-- R list object containing many things. Fill this in.
-
-## Creating a new benchmark reference run {#NewBenchmark}
-
-The purpose of the reference run record in BETY is to store all the settings from a run that are necessary to exactly recreate it.
-
-The pecan.xml file contains all of the settings for a particular run in PEcAn. However, much of the information in the pecan.xml file is server- and user-specific and, more importantly, the pecan.xml files are stored on individual servers and may not be available to the public.
-
-When a run that is performed using PEcAn is registered as a reference run, the settings that were used to make that run are made available to all users through the database.
-
-Completed runs are not automatically registered as reference runs. To register a run, navigate to the benchmarking section of the workflow visualizations Shiny app.
-
-## Editing records {#editing-records}
-
-- Models
-- Species
-- PFTs
-- Traits
-- Inputs
-- DB files
-- Variables
-- Formats
-- (Link each section to relevant BETY tables)
-
-
-# Troubleshooting and Debugging PEcAn
-
-## Cookies and PEcAn web pages
-
-You may need to disable cookies specifically for the pecan webserver in your browser. This shouldn't be a problem running from the virtual machine, but your installation of php can include a 'PHPSESSID' that is quite long, and this can overflow the params field of the workflows table, depending on how long your hostname, model name, site name, etc. are.
-
-## `Warning: mkdir() [function.mkdir]: No such file or directory`
-
-If you are seeing: `Warning: mkdir() [function.mkdir]: No such file or directory in /path/to/pecan/web/runpecan.php at line 169` it is because you have used a relative path for \$output_folder in system.php.
-
-## After creating a new PFT the tag for PFT not passed to config.xml in ED
-
-This is a result of the rather clunky way we currently add PFTs to PEcAn: you need to edit the ./pecan/models/ed/data/pftmapping.csv file to include your new PFTs.
-
-This is what the file looks like:
-
-```
-PEcAn;ED
-ebifarm.acru;11
-ebifarm.acsa3;11
-...
-```
-
-You just need to edit this file (in a text editor, not Excel) and add your PFT names and associated numbers to the end of the file. Once you do this, recompile PEcAn and it should then work for you. We currently need to reference this file in order to properly set the PFT number and maintain internal consistency between PEcAn and ED2.
-
-## Debugging
-
-How to identify the source of a problem.
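-
-A generic R pattern for this, expanded in the next subsection, is to flag a single function for debugging and then step through it. This is only a sketch; the function name below is a placeholder — substitute whichever PEcAn function appears in the traceback of your failed run:
-
-```r
-debugonce(run.meta.analysis)  # placeholder: break inside this function on its next call
-# ... re-run the failing step; inside the browser use ls(), str(), n, c, Q
-traceback()                   # after an error, print the call stack
-```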
-
-
-### Using `tests/workflow.R`
-
-This script, along with the model-specific settings files in the `tests` folder, provides a working example. From inside the tests folder, `R --vanilla -- --settings pecan.<model>.xml < workflow.R` should work.
-
-The next step is to add `debugonce(<function name>)` before running the test workflow.
-
-This allows you to step through the function and evaluate the different objects as they are created and/or transformed.
-
-See the [tests README](https://github.com/PecanProject/pecan/blob/master/tests/README.md) for more information.
-
-
-## Useful scripts
-
-The following scripts (in `qaqc/vignettes`) identify, respectively:
-
-1. [relationships among functions across packages](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/function_relationships.Rmd)
-2. [function inputs and outputs](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/module_output.Rmd) (e.g., identifying which functions and outputs are used in a workflow).
-
-
-# BETY Database Administration {#database}
-
-This section provides additional details about the BETY database used by PEcAn. It discusses best practices for setting up the BETY database, how to back up the database, and how to restore it.
-
-## Best practices {#database-setup}
-
-When using the BETY database in non-testing mode, it is best not to use the default users. This is handled when initializing the database. When the database is initially created, it will contain some default users (the best known is the carya user) as well as the guestuser that can be used in the BETY web application. To disable these users you will either need to disable them from the web interface, or you can reinitialize the database and remove the `-u` flag from the command line (the `-u` flag will create the default users). To disable the guestuser as well, you can remove the `-g` flag from the command line, or disable the account from BETY.
-
-The default installation of BETY and PEcAn will assume there is a database called bety with a default username and password. The default installation will set up the database account without any superuser abilities. It is also best to restrict access to the PostgreSQL database to trusted hosts, either by using firewalls or by configuring postgresql to only accept connections from a limited set of hosts.
-
-## Backup of BETY database
-
-It is good practice to make sure you back up the BETY database. Just creating a copy of the files on disk is not enough to ensure you have a valid backup; most likely, if you do this, you will end up with a corrupted backup of the database.
-
-To back up the database you can use the `pg_dump` command, which will make sure the database is backed up in a consistent state. You can run `sudo -u postgres pg_dump -d bety -Z 9 -f bety.sql.gz`; this will create a compressed file that can be used to restore the database.
-
-In the scripts folder of the PEcAn distribution there is a script called `backup.bety.sh`. This script will create the backup of the database. It will create multiple backups, allowing you to restore the database from one of these copies. The database will be backed up to one of the following files:
- - bety-d-X, daily backup, where X is the day of the month.
- - bety-w-X, weekly backup, where X is the week number in the year.
- - bety-m-X, monthly backup, where X is the month of the year.
- - bety-y-X, yearly backup, where X is the actual year.
-Using this scheme, we can restore the database using any of the files generated.
-
-It is recommended to run this script using a cronjob at midnight, such that you have a daily backup of the database and do not have to remember to create these backups. When running this script (either from cron or by hand), make sure to place the backups on a different machine than the machine that holds the database, in case of a larger system failure.
-
-## Restore of BETY database
-
-Hopefully this section will never need to be used. The following six steps have been used to restore the database. Before you start, it is worth reading up online a bit on restoring the database, as well as joining the Slack channel and asking any of the people there for help.
-
-1. stop apache (BETY/PEcAn web apps) `service httpd stop` or `service apache2 stop`
-2. back up the database (just in case) `pg_dump -d bety > baddb.sql`
-3. drop the database `sudo -u postgres psql -c 'drop database bety'`
-4. create the database `sudo -u postgres psql -c 'create database bety with owner bety'`
-5. load the database (assuming the dump is called bety.sql.gz) `zcat bety.sql.gz | grep -v search_path | sudo -u postgres psql -d bety`
-6. start apache again `service httpd start` or `service apache2 start`
-
-If step 5 produces a lot of errors, it is helpful to add `-v ON_ERROR_STOP=1` to the end of the command. This will stop the restore at the first error and will help with debugging the issue.
-
-
-# Workflow modules
-
-NOTE: As of PEcAn 1.2.6 -- needs to be updated significantly
-
-
-## Overview
-
-Workflow inputs and outputs (click to open in new page, then zoom). Code used to generate this image is provided in [qaqc/vignettes/module_output.Rmd](https://github.com/PecanProject/pecan/blob/master/qaqc/vignettes/module_output.Rmd)
-
-[![PEcAn Workflow](http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg)](http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg)
-
-## Load Settings
-### `read.settings("/home/pecan/pecan.xml")`
-
-* loads settings
-* creates directories
-* generates a new xml, placed in the output folder
-
-## Query Database
-### `get.trait.data()`
-
-Queries the database for both the trait data and the prior distributions associated with the PFTs specified in the settings file. The list of variables that are queried is determined by which variables have priors associated with them in the definition of the PFT. Likewise, the list of species that are associated with a PFT determines what subset of data is extracted out of all data matching a given variable name.
-
-## Meta Analysis
-### `run.meta.analysis()`
-
-The meta-analysis code begins by distilling the trait.data to just the values needed for the meta-analysis statistical model, with this being stored in `madata.Rdata`. This reduced form includes the conversion of all error statistics into precision (1/variance), and the indexing of sites, treatments, and greenhouse. In reality, the core meta-analysis code can be run independently of the trait database as long as the input data is correctly formatted into the form shown in `madata`.
-
-The evaluation of the meta-analysis is done using a Bayesian statistical software package called JAGS that is called by the R code. For each trait, the R code will generate a [trait].model.bug file that is the JAGS code for the meta-analysis itself. This code is generated on the fly, with PEcAn adding or subtracting the site, treatment, and greenhouse terms depending upon the presence of these effects in the data itself.
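-
-As a toy illustration of the error-statistic conversion mentioned above (the numbers are made up, not from any real trait record), a reported standard error is turned into the precision (1/variance) that the meta-analysis model expects:
-
-```r
-se   <- 0.35      # reported standard error of a trait mean
-prec <- 1 / se^2  # precision = 1/variance
-prec
-#> [1] 8.163265
-```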
-
-Meta-analyses are run, and summary plots are produced.
-
-
-## Write Configuration Files
-### `write.configs(model)`
-
-* writes out a configuration file for each model run
-  * writes 500 configuration files for a 500-member ensemble
-  * for _n_ traits, writes `6 * n + 1` files for running the default Sensitivity Analysis (the number can be changed in the pecan settings file)
-
-## Start Runs
-### `start.runs(model)`
-
-This code starts the model runs using a model-specific run function named start.runs.[model]. If the ecosystem model is running on a remote server, this module also takes care of all of the communication with the remote server and its run queue. Each of your subdirectories should now have a [run.id].out file in it. One instance of the model is run for each configuration file generated by the previous write configs module.
-
-## Get Model Output
-### `get.model.output(model)`
-
-This code first uses a model-specific model2netcdf.[model] function to convert the model output into a standard output format ([MsTMIP](http://nacp.ornl.gov/MsTMIP_variables.shtml)). Then it extracts the data for the requested variables specified in the settings file as `settings$ensemble$variable`, averages over the time period specified as `start.date` and `end.date`, and stores the output in a file `output.Rdata`. The `output.Rdata` file contains two objects, `sensitivity.output` and `ensemble.output`, which are the model predictions for the parameter sets specified in `sa.samples` and `ensemble.samples`. In order to save bandwidth, if the model output is stored on a remote system PEcAn will perform these operations on the remote host and only return the `output.Rdata` object.
-
-## Ensemble Analysis
-### `run.ensemble.analysis()`
-
-This module makes some simple graphs of the ensemble output. Open ensemble.analysis.pdf to view the ensemble prediction as both a histogram and a boxplot. ensemble.ts.pdf provides a timeseries plot of the ensemble mean, median, and 95% CI.
-
-## Sensitivity Analysis, Variance Decomposition
-### `run.sensitivity.analysis()`
-
-This function processes the output of the previous module into sensitivity analysis plots, `sensitivityanalysis.pdf`, and a variance decomposition plot, `variancedecomposition.pdf`. In the sensitivity plots you will see the parameter values on the x-axis and the model output on the y-axis, with the dots being the model evaluations and the line being the spline fit.
-
-The variance decomposition plot is discussed more below. For your reference, the R list object, sensitivity.results, stored in sensitivity.results.Rdata, contains all the components of the variance decomposition table, as well as the input parameter space and splines from the sensitivity analysis (reminder: the output parameter space from the sensitivity analysis was in outputs.R).
-
-The variance decomposition plot contains three columns: the coefficient of variation (normalized posterior variance), the elasticity (normalized sensitivity), and the partial standard deviation of each model parameter. This graph is sorted by the variable explaining the largest amount of variability in the model output (right-hand column). From this graph, identify the top-tier parameters that you would target for future constraint.
-
-## Glossary
-
-* Inputs: data sets that are used, and the file paths leading to them
-* Parameters: e.g. info set in the settings file
-* Outputs: data sets that are dropped, and the file paths leading to them
-
-
-
-# Installation details
-
-This chapter contains details about installing and maintaining the uncontainerized version of PEcAn on a virtual machine or a server. If you are running PEcAn inside of Docker, many of the particulars will be different and you should refer to the [docker](#docker-index) chapter instead of this one.
-
-
-## PEcAn Virtual Machine {#pecanvm}
-
-See also other VM-related documentation sections:
-
-* [Maintaining your PEcAn VM](#maintain-vm)
-* [Connecting to the VM via SSH](#ssh-vm)
-* [Connecting to bety on the VM via SSH](#ssh-vm-bety)
-* [Using Amazon Web Services for a VM (AWS)](#awsvm)
-* [Creating a Virtual Machine](#createvm)
-* [VM Desktop Conversion](#vm-dektop-conversion)
-* [Install RStudio Desktop](#install-rstudio)
-
-
-The PEcAn virtual machine consists of all of PEcAn pre-compiled within a Linux operating system and saved in a "virtual machine" (VM). Virtual machines allow for running consistent set-ups without worrying about differences between operating systems, library dependencies, compiling the code, etc.
-
-1. **Install VirtualBox** This is the software that runs the virtual machine. You can find the download link and instructions at [http://www.virtualbox.org](http://www.virtualbox.org). *NOTE: On Windows you may see a warning about Logo testing; it is okay to ignore the warning.*
-
-2. **Download the PEcAn VM** You can find the download link at [http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN](http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN), under the "**Files**" header. Click the ".ova" file to begin the download. Note that the file is ~7 GB, so this download can take several minutes to hours depending on your connection speed. Also, the VM requires >4 GB of RAM to operate correctly. Please check your current usage of RAM and shut down processes as needed.
-
-3. **Import the VM** Once the download is complete, open VirtualBox. In the VirtualBox menus, go to "File" → "Import Appliance" and locate the downloaded ".ova" file.
-
-For VirtualBox version 5.x: In the Appliance Import Settings, make sure you select "Reinitialize the MAC address of all network cards" (picture below). This is not selected by default and can result in networking issues, since multiple machines might claim to have the same network MAC address.
-
-```{r, echo=FALSE, fig.align='center'}
-knitr::include_graphics("figures/pic1.jpg")
-```
-
-For VirtualBox versions starting with 6.0, there is a slightly different interface (see figure). Select "Generate new MAC addresses for all network adapters" from the MAC Address Policy:
-
-```{r, echo=FALSE, fig.align='center'}
-knitr::include_graphics("figures/pic1v2.png")
-```
-
-NOTE: If you experience network connection difficulties in the VM with this enabled, try re-importing the VM without this setting selected.
-
-Finally, click "Import" to build the Virtual Machine from its image.
-
-4. **Launch PEcAn** Double click the icon for the PEcAn VM. A terminal window will pop up showing the machine booting up, which may take a minute. It is done booting when you get to the `pecan login:` prompt. You do not need to log in, as the VM behaves like a server that we will be accessing through your web browser. Feel free to minimize the VM window.
-
-* If you _do_ want to log in to the VM, the credentials are as follows: `username: carya`, `password: illinois` (after the pecan tree, [Carya illinoinensis][pecan-wikipedia]).
-
-5. **Open the PEcAn web interface** With the VM running in the background, open any web browser on the same machine and navigate to `localhost:6480/pecan/` to start the PEcAn workflow. (NOTE: The trailing slash may be necessary depending on your browser.)
-
-* To ssh into the VM, open up a terminal on your machine and execute `ssh -l carya -p 6422 localhost`. Username and password are the same as when you log into the machine.
-
-
-
-## AWS Setup
-
-***********Mirror of earlier section in installation section?*********************
-
-The following are Mike's rough notes from a first attempt to port the PEcAn VM to AWS. This was done on a Mac.
-
-These notes are based on following the instructions [here](http://www.rittmanmead.com/2014/09/obiee-sampleapp-in-the-cloud-importing-virtualbox-machines-to-aws-ec2/)
-
-
-### Convert PEcAn VM
-
-AWS allows upload of files as VMDK, but the default PEcAn VM is in OVA format.
-
-1. If you haven't done so already, download the [PEcAn VM](http://isda.ncsa.illinois.edu/download/index.php?project=PEcAn&sort=category)
-
-2. Split the OVA file into OVF and VMDK files
-
-```
-tar xf <name>.ova
-```
-
-### Set up an account on [AWS](http://aws.amazon.com/)
-
-After you have an account, you need to set up a user and save your [access key and secret key](http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html)
-
-In my case I created a user named 'carya'
-
-Note: the key that ended up working had to be made at [https://console.aws.amazon.com/iam/home#security_credential](https://console.aws.amazon.com/iam/home#security_credential), not the link above.
-
-### Install [EC2 command line tools](http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html)
-
-```
-wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
-
-sudo mkdir /usr/local/ec2
-
-sudo unzip ec2-api-tools.zip -d /usr/local/ec2
-```
-
-If need be, download and install the [JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html)
-
-```
-export JAVA_HOME=$(/usr/libexec/java_home)
-
-export EC2_HOME=/usr/local/ec2/ec2-api-tools-<version>
-
-export PATH=$PATH:$EC2_HOME/bin
-```
-
-
-Then set your user credentials as environment variables:
-
-`export AWS_ACCESS_KEY=xxxxxxxxxxxxxx`
-
-`export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxx`
-
-Note: you may want to add all the variables set in the EXPORT commands above into your .bashrc or equivalent.
-
-### Create an AWS S3 'bucket' to upload the VM to
-
-Go to [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3) and click "Create Bucket"
-
-In my case I named the bucket 'pecan'
-
-
-### Upload
-
-In the code below, make sure to change the PEcAn version, the name of the bucket, and the name of the region. Make sure that the PEcAn version matches the one you downloaded.
-
-Also, you may want to choose a considerably larger instance type. The one chosen below corresponds to the AWS Free Tier.
-
-```
-ec2-import-instance PEcAn_1.2.6-disk1.vmdk --instance-type t2.micro --format VMDK --architecture x86_64 --platform Linux --bucket pecan --region us-east-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY
-```
-
-Make sure to note the ID of the image, since you'll need it to check the VM status. Once the image is uploaded, it will take a while (typically about an hour) for Amazon to convert the image to one it can run. You can check on this progress by running
-
-```
-ec2-describe-conversion-tasks
-```
-
-### Configuring the VM
-
-On the EC2 management webpage, [https://console.aws.amazon.com/ec2](https://console.aws.amazon.com/ec2), if you select **Instances** on the left hand side (LHS) you should be able to see your new PEcAn image as an option under Launch Instance.
-
-Before launching, you will want to update the firewall to open up additional ports that PEcAn needs -- specifically port 80 for the webpage. Port 22 (ssh/sftp) should be open by default. Under "Security Groups" select "Inbound", then "Edit", and then add "HTTP".
-
-Select "Elastic IPs" on the LHS, and "Allocate New Address" in order to create a public IP for your VM.
-
-Next, select "Network Interfaces" on the LHS, and then under Actions select "Associate Addresses" and choose the Elastic IP you just created.
-
-See also http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/GetStarted.html
-
-### Set up multiple instances (optional)
-
-For info on setting up multiple instances with load balancing see: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/gs-ec2VPC.html
-
-Select "Load Balancers" on the LHS, click on "Create Load Balancer", and follow the wizard, keeping the defaults.
-
-To be able to launch multiple VMs: under "Instances", convert the VM to an Image. When done, select Launch, enable multiple instances, and associate them with the previous security group. Once running, go back to "Load Balancers" and add the instances to the load balancer. Each instance can be accessed individually by its own public IP, but external users should access the system more generally via the Load Balancer's DNS.
-
-### Booting the VM
-
-Return to "Instances" using the menu on the LHS.
-
-To boot the VM select "Actions", then "Instance State", then "Start". In the future, once you have the VM loaded and configured, this last step is the only one you will need to repeat to turn your VM on and off.
-
-The menu provided should specify the Public IP where the VM has launched.
-
-
-
-## Shiny Setup
-
-Installing and configuring Shiny for PEcAn
-authors - Alexey Shiklomanov
-        - Rob Kooper
-
-**NOTE: Instructions are only tested for CentOS 6.5 and Ubuntu 16.04**
-**NOTE: Pretty much every step here requires root access.**
-
-### Install the Shiny R package and Shiny server
-
-Follow the instructions on the [Shiny download page][shiny-download] for the operating system you are using.
-
-[shiny-download]: https://www.rstudio.com/products/shiny/download-server/
-
-### Modify the shiny configuration file
-
-The Shiny configuration file is located in `/etc/shiny-server/shiny-server.conf`. Comment out the entire file and add the following, replacing `<username>` with your user name and `<location>` with the URL location you want for your app. This will allow you to run Shiny apps from your web browser at https://your.server.edu/shiny/your-location
-
-```
-run_as shiny;
-server {
-  listen 3838;
-  location /<location>/ {
-    run_as <username>;
-    site_dir /path/to/your/shiny/app;
-    log_dir /var/log/shiny-server;
-    directory_index on;
-  }
-}
-```
-
-For example, my configuration on the old test-pecan looks like this.
-
-```
-run_as shiny;
-server {
-  listen 3838;
-  location /ashiklom/ {
-    run_as ashiklom;
-    site_dir /home/ashiklom/fs-data/pecan/shiny/;
-    log_dir /var/log/shiny-server;
-    directory_index on;
-  }
-}
-```
-
-...and I can access my Shiny apps at, for instance, https://test-pecan.bu.edu/shiny/ashiklom/workflowPlots.
-
-You can add as many `location { ... }` fields as you would like.
-
-```
-run_as shiny;
-server {
-  listen 3838;
-  location /ashiklom/ {
-    ...
-  }
-  location /bety/ {
-    ...
-  }
-}
-```
-
-If you change the configuration, for example to add a new location, you will need to restart the Shiny server.
-*If you are setting up a new instance of Shiny*, skip this step and continue with the guide, since there are a few more steps to get Shiny working.
-*If there is an instance of Shiny already running*, you can restart it with:
-
-```
-## On CentOS
-sudo service shiny-server stop
-sudo service shiny-server start
-
-## On Ubuntu
-sudo systemctl stop shiny-server.service
-sudo systemctl start shiny-server.service
-```
-
-### Set the Apache proxy
-
-Create a file with the following name, based on the version of the operating system you are using:
-
-* Ubuntu 16.04 (pecan1, pecan2, test-pecan) -- `/etc/apache2/conf-available/shiny.conf`
-* CentOS 6.5 (psql-pecan) -- `/etc/httpd/conf.d/shiny.conf`
-
-Into this file, add the following:
-
-```
-ProxyPass /shiny/ http://localhost:3838/
-ProxyPassReverse /shiny/ http://localhost:3838/
-RedirectMatch permanent ^/shiny$ /shiny/
-```
-
-#### **Ubuntu only:** Enable the new shiny configuration
-
-```
-sudo a2enconf shiny
-```
-
-This will create a symbolic link to the newly created `shiny.conf` file inside the `/etc/apache2/conf-enabled` directory.
-You can run `ls -l /etc/apache2/conf-enabled` to confirm that this worked.
-
-### Enable and start the shiny server, and restart apache
-
-#### On CentOS
-
-```
-sudo ln -s /opt/shiny-server/config/init.d/redhat/shiny-server /etc/init.d
-sudo service shiny-server stop
-sudo service shiny-server start
-sudo service httpd restart
-```
-
-You can check that Shiny is running with `service shiny-server status`.
-
-#### On Ubuntu
-
-Enable the Shiny server service.
-This will make sure Shiny runs automatically on startup.
-
-```
-sudo systemctl enable shiny-server.service
-```
-
-Restart Apache.
-
-```
-sudo apachectl restart
-```
-
-Start the Shiny server.
-
-```
-sudo systemctl start shiny-server.service
-```
-
-If there are problems, you can stop the `shiny-server.service` with...
-
-```
-sudo systemctl stop shiny-server.service
-```
-
-...and then use `start` again to restart it.
-
-
-### Troubleshooting
-
-Refer to the log files for shiny (`/var/log/shiny-server.log`) and httpd (on CentOS, `/var/log/httpd/error-log`; on Ubuntu, `/var/log/apache2/error-log`).
-
-
-### Further reading
-
-* [Shiny server configuration reference](http://docs.rstudio.com/shiny-server/)
-
-
-
-## Thredds Setup
-
-Installing and configuring Thredds for PEcAn
-authors - Rob Kooper
-
-**NOTE: Instructions are only tested for Ubuntu 16.04 on the VM; if you have instructions for CentOS/RedHat, please update this documentation.**
-**NOTE: Pretty much every step here requires root access.**
-
-### Install the Tomcat 8 and Thredds webapp
-
-The Tomcat 8 server can be installed from the default Ubuntu repositories. The thredds webapp will be downloaded and installed from Unidata.
-
-The first step is to install Tomcat 8 and configure it. The flag `-Dtds.content.root.path` should point to the location of the thredds folder. This needs to be writable by the tomcat user. `-Djava.security.egd` is a special flag to use a different random number generator for tomcat; the default would take too long to generate a random number.
-
-```
-apt-get -y install tomcat8 openjdk-8-jdk
-echo JAVA_OPTS=\"-Dtds.content.root.path=/home/carya \${JAVA_OPTS}\" >> /etc/default/tomcat8
-echo JAVA_OPTS=\"-Djava.security.egd=file:/dev/./urandom \${JAVA_OPTS}\" >> /etc/default/tomcat8
-service tomcat8 restart
-```
-
-Next, install the webapp.
-
-```
-mkdir /home/carya/thredds
-chmod 777 /home/carya/thredds
-
-wget -O /var/lib/tomcat8/webapps/thredds.war ftp://ftp.unidata.ucar.edu/pub/thredds/4.6/current/thredds.war
-```
-
-Finally, we configure Apache to proxy the thredds server.
-
-```
-cat > /etc/apache2/conf-available/thredds.conf << EOF
-ProxyPass /thredds/ http://localhost:8080/thredds/
-ProxyPassReverse /thredds/ http://localhost:8080/thredds/
-RedirectMatch permanent ^/thredds$ /thredds/
-EOF
-a2enmod proxy_http
-a2enconf thredds
-service apache2 reload
-```
-
-#### Customize the Thredds server
-
-To customize the thredds server for your installation, edit the file /home/carya/thredds/threddsConfig.xml. An example threddsConfig.xml is included in the VM; it sets the server information (name PEcAn, logo /pecan/images/pecan_small.jpg, keywords "meteorology, atmosphere, climate, ocean, earth science"), the contact (Rob Kooper, NCSA, kooper@illinois.edu), the project host (PEcAn, http://www.pecanproject.org/), the CSS files (tds.css, tdsCat.css, tdsDap.css), and the catalog and caching options.
-
-### Update the catalog
-
-To update the catalog with the latest data, run the following command from the root crontab. This cronjob will also synchronize the database with remote servers and dump your database (by default in /home/carya/dump).
-
-```
-0 * * * * /home/carya/pecan/scripts/cron.sh -o /home/carya/dump
-```
-
-### Troubleshooting
-
-Refer to the log files for Tomcat (`/var/log/tomcat8/*`) and Thredds (`/home/carya/thredds/logs`).
-
-
-### Further reading
-
-* [Thredds reference](http://www.unidata.ucar.edu/software/thredds/current/tds/)
-
-
-
-## OS Specific Installations {#osinstall}
-
-- [Ubuntu](#ubuntu)
-- [CentOS](#centosredhat)
-- [OSX](#macosx)
-- [Install BETY](#install-bety) THIS PAGE IS DEPRECATED
-- [Install Models](#install-models)
-- [Install Data](#install-data)
-
-
-
-### Ubuntu {#ubuntu}
-
-These are specific notes for installing PEcAn on Ubuntu (14.04) and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser, you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.
-
-This document also contains information on how to install the RStudio server edition, as well as any other packages that can be helpful.
-
-#### Install build environment
-
-```bash
-sudo -s
-
-# point to latest R
-echo "deb http://cran.rstudio.com/bin/linux/ubuntu `lsb_release -s -c`/" > /etc/apt/sources.list.d/R.list
-apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9
-
-# update package list
-apt-get -y update
-
-# install packages needed for PEcAn
-apt-get -y install build-essential gfortran git r-base-core r-base-dev jags liblapack-dev libnetcdf-dev netcdf-bin bc libcurl4-gnutls-dev curl udunits-bin libudunits2-dev libgmp-dev python-dev libgdal1-dev libproj-dev expect
-
-# install packages needed for ED2
-apt-get -y install openmpi-bin libopenmpi-dev
-
-# install requirements for DALEC
-apt-get -y install libgsl0-dev
-
-# install packages for webserver
-apt-get -y install apache2 libapache2-mod-php5 php5
-
-# install packages to compile docs
-apt-get -y install texinfo texlive-latex-base texlive-latex-extra texlive-fonts-recommended
-
-# install devtools
-echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla
-
-# done as root
-exit
-```
-
-#### Install Postgres
-
-Documentation: http://trac.osgeo.org/postgis/wiki/UsersWikiPostGIS21UbuntuPGSQL93Apt
-
-```bash
-sudo -s
-
-# point to latest PostgreSQL
-echo "deb http://apt.postgresql.org/pub/repos/apt `lsb_release -s -c`-pgdg main" > /etc/apt/sources.list.d/pgdg.list
-wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
-
-# update package list
-apt-get -y update
-
-# install packages for postgresql (using a newer version than default)
-apt-get -y install libdbd-pgsql postgresql postgresql-client libpq-dev postgresql-9.4-postgis-2.1 postgresql-9.4-postgis-2.1-scripts
-
-# install following if you want to run pecan through the web
-apt-get -y install php5-pgsql
-
-# enable bety user to login with trust by adding the following lines after
-# the lines allowing the postgres user to login in /etc/postgresql/9.4/main/pg_hba.conf
-local   all             bety                                    trust
-host    all             bety            127.0.0.1/32            trust
-host    all             bety            ::1/128                 trust
-
-# Once done, restart postgresql
-/etc/init.d/postgresql restart
-
-exit
-```
-
-To install the BETYdb database, see [Installing BETY](#install-bety).
-
-#### Apache Configuration PEcAn
-
-```bash
-# become root
-sudo -s
-
-# get index page
-rm /var/www/html/index.html
-ln -s ${HOME}/pecan/documentation/index_vm.html /var/www/html/index.html
-
-# setup a redirect
-cat > /etc/apache2/conf-available/pecan.conf << EOF
-Alias /pecan ${HOME}/pecan/web
-<Directory ${HOME}/pecan/web>
-  DirectoryIndex index.php
-  Options +ExecCGI
-  Require all granted
-</Directory>
-EOF
-a2enconf pecan
-/etc/init.d/apache2 restart
-
-# done as root
-exit
-```
-
-#### Apache Configuration BETY
-
-```bash
-sudo -s
-
-# install all ruby related packages
-apt-get -y install ruby2.0 ruby2.0-dev libapache2-mod-passenger
-
-# link static content
-ln -s ${HOME}/bety/public /var/www/html/bety
-
-# setup a redirect
-cat > /etc/apache2/conf-available/bety.conf << EOF
-RailsEnv production
-RailsBaseURI /bety
-PassengerRuby /usr/bin/ruby2.0
-<Directory /var/www/html/bety>
-  Options +FollowSymLinks
-  Require all granted
-</Directory>
-EOF
-a2enconf bety
-/etc/init.d/apache2 restart
-```
-
-#### Rstudio-server
-
-*NOTE: This will allow anybody to log in to the machine through the rstudio interface and run any arbitrary code. The login used, however, is the same as the system login/password.*
-
-```bash
-wget http://download2.rstudio.org/rstudio-server-0.98.1103-amd64.deb
-```
-
-```bash
-# become root
-sudo -s
-
-# install required packages
-apt-get -y install libapparmor1 apparmor-utils libssl0.9.8
-
-# install rstudio
-dpkg -i rstudio-server-*
-rm rstudio-server-*
-echo "www-address=127.0.0.1" >> /etc/rstudio/rserver.conf
-echo "r-libs-user=~/R/library" >> /etc/rstudio/rsession.conf
-rstudio-server restart
-
-# setup rstudio forwarding in apache
-a2enmod proxy_http
-cat > /etc/apache2/conf-available/rstudio.conf << EOF
-ProxyPass /rstudio/ http://localhost:8787/
-ProxyPassReverse /rstudio/ http://localhost:8787/
-RedirectMatch permanent ^/rstudio$ /rstudio/
-EOF
-a2enconf rstudio
-/etc/init.d/apache2 restart
-
-# all done, exit root
-exit
-```
-
-#### Additional packages
-
-HDF5 tools, netcdf, GDB, and emacs:
-```bash
-sudo apt-get -y install hdf5-tools cdo nco netcdf-bin ncview gdb emacs ess nedit
-```
-
-
-
-### CentOS/RedHat {#centosredhat}
-
-These are specific notes for installing PEcAn on CentOS (7) and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser, you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.
-
-This document also contains information on how to install the RStudio server edition, as well as any other packages that can be helpful.
-
-#### Install build environment
-
-```bash
-sudo -s
-
-# install packages needed for PEcAn
-yum -y groupinstall 'Development Tools'
-yum -y install git netcdf-fortran-openmpi-devel R bc curl libxml2-devel openssl-devel ed udunits2 udunits2-devel netcdf netcdf-devel gmp-devel python-devel gdal-devel proj-devel proj-epsg expect
-
-# jags
-yum -y install http://download.opensuse.org/repositories/home:/cornell_vrdc/CentOS_7/x86_64/jags3-3.4.0-54.1.x86_64.rpm
-yum -y install http://download.opensuse.org/repositories/home:/cornell_vrdc/CentOS_7/x86_64/jags3-devel-3.4.0-54.1.x86_64.rpm
-
-# fix include folder for udunits2
-ln -s /usr/include/udunits2/* /usr/include/
-
-# install packages needed for ED2
-yum -y install environment-modules openmpi openmpi-devel
-
-# install requirements for DALEC
-yum -y install gsl-devel
-
-# install packages for webserver
-yum -y install httpd php
-systemctl enable httpd
-systemctl start httpd
-firewall-cmd --zone=public --add-port=80/tcp --permanent
-firewall-cmd --reload
-
-# install packages to compile docs
-#apt-get -y install texinfo texlive-latex-base texlive-latex-extra texlive-fonts-recommended
-
-# install devtools
-echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla
-
-# done as root
-exit
-
-echo "module load mpi" >> ~/.bashrc
-module load mpi
-```
-
-#### Install and configure PostgreSQL, udunits2, NetCDF
-
-```bash
-sudo -s
-
-# point to latest PostgreSQL
-yum install -y epel-release
-yum -y install http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-1.noarch.rpm
-
-# install packages for postgresql (using a newer version than default)
-yum -y install postgresql94-server postgresql94-contrib postgis2_94 postgresql94-devel udunits2 netcdf
-
-# install following if you want to run pecan through the web
-yum -y install php-pgsql
-
-# enable bety user to login with trust by adding the following lines after
-# the lines allowing the postgres user to login in /var/lib/pgsql/9.4/data/pg_hba.conf
-local all bety trust
-host all bety 127.0.0.1/32 trust
-host all bety ::1/128 trust
-
-# Create database
-/usr/pgsql-9.4/bin/postgresql94-setup initdb
-
-# Enable postgres
-systemctl enable postgresql-9.4
-systemctl start postgresql-9.4
-
-exit
-```
-
-#### Apache Configuration PEcAn
-
-Install and start Apache:
-```bash
-yum -y install httpd
-systemctl enable httpd
-systemctl start httpd
-```
-
-```bash
-# become root
-sudo -s
-
-# get index page
-rm /var/www/html/index.html
-ln -s /home/carya/pecan/documentation/index_vm.html /var/www/html/index.html
-
-# fix selinux context (does this need to be done after PEcAn is installed?)
-chcon -R -t httpd_sys_content_t /home/carya/pecan /home/carya/output
-
-# setup a redirect
-cat > /etc/httpd/conf.d/pecan.conf << EOF
-Alias /pecan /home/carya/pecan/web
-<Directory /home/carya/pecan/web>
-  DirectoryIndex index.php
-  Options +ExecCGI
-  Require all granted
-</Directory>
-EOF
-systemctl restart httpd
-
-# done as root
-exit
-```
-
-#### Apache Configuration BETY
-
-```bash
-sudo -s
-
-# install all ruby related packages
-sudo curl --fail -sSLo /etc/yum.repos.d/passenger.repo https://oss-binaries.phusionpassenger.com/yum/definitions/el-passenger.repo
-yum -y install ruby ruby-devel mod_passenger
-
-# link static content
-ln -s /home/carya/bety/public /var/www/html/bety
-
-# fix Gemfile
-echo 'gem "test-unit"' >> bety/Gemfile
-
-# fix selinux context (does this need to be done after bety is installed?)
-chcon -R -t httpd_sys_content_t /home/carya/bety
-
-# setup a redirect
-cat > /etc/httpd/conf.d/bety.conf << EOF
-RailsEnv production
-RailsBaseURI /bety
-PassengerRuby /usr/bin/ruby
-<Directory /var/www/html/bety>
-  Options +FollowSymLinks
-  Require all granted
-</Directory>
-EOF
-systemctl restart httpd
-```
-
-#### Rstudio-server
-
-NEED FIXING
-
-*NOTE: This will allow anybody to log in to the machine through the rstudio interface and run any arbitrary code. The login used, however, is the same as the system login/password.*
-
-```bash
-wget http://download2.rstudio.org/rstudio-server-0.98.1103-amd64.deb
-```
-
-```bash
-# become root
-sudo -s
-
-# install required packages
-apt-get -y install libapparmor1 apparmor-utils libssl0.9.8
-
-# install rstudio
-dpkg -i rstudio-server-*
-rm rstudio-server-*
-echo "www-address=127.0.0.1" >> /etc/rstudio/rserver.conf
-echo "r-libs-user=~/R/library" >> /etc/rstudio/rsession.conf
-rstudio-server restart
-
-# setup rstudio forwarding in apache
-a2enmod proxy_http
-cat > /etc/apache2/conf-available/rstudio.conf << EOF
-ProxyPass /rstudio/ http://localhost:8787/
-ProxyPassReverse /rstudio/ http://localhost:8787/
-RedirectMatch permanent ^/rstudio$ /rstudio/
-EOF
-a2enconf rstudio
-/etc/init.d/apache2 restart
-
-# all done, exit root
-exit
-```
-*Alternative Rstudio instructions*
-
-##### Install and configure Rstudio-server
-
-Install RStudio Server by following the [official documentation](https://www.rstudio.com/products/rstudio/download-server/) for your platform.
-
-Then, proceed with the following:
-
-* add `PATH=$PATH:/usr/sbin:/sbin` to `/etc/profile`
-```bash
-  echo "PATH=$PATH:/usr/sbin:/sbin; export PATH" >> /etc/profile
-```
-* add [rstudio.conf](https://gist.github.com/dlebauer/6921889) to /etc/httpd/conf.d/
-```bash
-  wget https://gist.github.com/dlebauer/6921889/raw/d1e0f945228e5519afa6223d6f49d6e0617262bd/rstudio.conf
-  sudo mv rstudio.conf /etc/httpd/conf.d/
-```
-* restart the Apache server: `sudo apachectl restart`
-* now you should be able to access `http://<server>/rstudio`
-
-#### Install ruby-netcdf gem
-
-```bash
-cd $RUBY_APPLICATION_HOME
-export NETCDF_URL=http://www.gfd-dennou.org/arch/ruby/products/ruby-netcdf/release/ruby-netcdf-0.6.6.tar.gz
-export NETCDF_DIR=/usr/local/netcdf
-gem install narray
-export NARRAY_DIR="$(ls $GEM_HOME/gems | grep 'narray-')"
-export NARRAY_PATH="$GEM_HOME/gems/$NARRAY_DIR"
-cd $MY_RUBY_HOME/bin
-wget $NETCDF_URL -O ruby-netcdf.tgz
-tar zxf ruby-netcdf.tgz && cd ruby-netcdf-0.6.6/
-ruby -rubygems extconf.rb --with-narray-include=$NARRAY_PATH --with-netcdf-dir=/usr/local/netcdf-4.3.0
-sed -i 's|rb/$|rb|' Makefile
-make
-make install
-cd ../ && sudo rm -rf ruby-netcdf*
-
-cd $RUBY_APPLICATION
-bundle install --without development
-```
-
-
-#### Additional packages
-
-NEED FIXING
-
-HDF5 tools, netcdf, GDB, and emacs:
-```bash
-sudo apt-get -y install hdf5-tools cdo nco netcdf-bin ncview gdb emacs ess nedit
-```
-
-
-
-### Mac OSX {#macosx}
-
-These are specific notes for installing PEcAn on Mac OSX and will be referenced from the main [installing PEcAn](Installing-PEcAn) page. You will at least need to install the build environment and Postgres sections. If you want to access the database/PEcAn using a web browser, you will need to install Apache. To access the database using the BETY interface, you will need to have Ruby installed.
-
-This document also contains information on how to install the RStudio server edition, as well as any other packages that can be helpful.
-
-
-#### Install build environment
-
-```bash
-# install R
-# download from http://cran.r-project.org/bin/macosx/
-
-# install gfortran
-# download from http://cran.r-project.org/bin/macosx/tools/
-
-# install OpenMPI
-curl -o openmpi-1.6.3.tar.gz http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.3.tar.gz
-tar zxf openmpi-1.6.3.tar.gz
-cd openmpi-1.6.3
-./configure --prefix=/usr/local
-make all
-sudo make install
-cd ..
-
-# install szip
-curl -o szip-2.1-MacOSX-intel.tar.gz ftp://ftp.hdfgroup.org/lib-external/szip/2.1/bin/szip-2.1-MacOSX-intel.tar.gz
-tar zxf szip-2.1-MacOSX-intel.tar.gz
-sudo mv szip-2.1-MacOSX-intel /usr/local/szip
-
-# install HDF5
-curl -o hdf5-1.8.11.tar.gz http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.11.tar.gz
-tar zxf hdf5-1.8.11.tar.gz
-cd hdf5-1.8.11
-sed -i -e 's/-O3/-O0/g' config/gnu-flags
-./configure --prefix=/usr/local/hdf5 --enable-fortran --enable-cxx --with-szlib=/usr/local/szip
-make
-# make check
-sudo make install
-# sudo make check-install
-cd ..
-```
-
-#### Install Postgres
-
-On a Mac you can use the following app for PostgreSQL, which has PostGIS already installed (http://postgresapp.com/).
-
-To get postgis, run the following commands in psql:
-
-```sql
--- Enable PostGIS (includes raster)
-CREATE EXTENSION postgis;
--- Enable Topology
-CREATE EXTENSION postgis_topology;
--- fuzzy matching needed for Tiger
-CREATE EXTENSION fuzzystrmatch;
--- Enable US Tiger Geocoder
-CREATE EXTENSION postgis_tiger_geocoder;
-```
-
-To check your postgis, run the following command again in psql: `SELECT PostGIS_full_version();`
-
-#### Additional installs
-
-
-##### Install JAGS
-
-Download JAGS from http://sourceforge.net/projects/mcmc-jags/files/JAGS/3.x/Mac%20OS%20X/JAGS-Mavericks-3.4.0.dmg/download
-
-##### Install udunits
-
-Installing udunits-2 on MacOSX is done from source.
-
-* download the most recent [version of Udunits here](http://www.unidata.ucar.edu/downloads/udunits/index.jsp)
-* instructions for [compiling from source](http://www.unidata.ucar.edu/software/udunits/udunits-2/udunits2.html#Obtain)
-
-```bash
-curl -o udunits-2.1.24.tar.gz ftp://ftp.unidata.ucar.edu/pub/udunits/udunits-2.1.24.tar.gz
-tar zxf udunits-2.1.24.tar.gz
-cd udunits-2.1.24
-./configure
-make
-sudo make install
-```
-
-#### Apache Configuration
-
-Mac does not support pdo/postgresql by default. The easiest way to install it is to use: http://php-osx.liip.ch/
-
-To enable PEcAn to run from your webserver, add the following configuration:
-```bash
-cat > /etc/apache2/others/pecan.conf << EOF
-Alias /pecan ${PWD}/pecan/web
-<Directory ${PWD}/pecan/web>
-  DirectoryIndex index.php
-  Options +All
-  Require all granted
-</Directory>
-EOF
-```
-
-#### Ruby
-
-The default version of ruby should work. Or use [JewelryBox](https://jewelrybox.unfiniti.com/).
-
-#### Rstudio Server
-
-For the Mac you can download [Rstudio Desktop](http://www.rstudio.com/).
-
-
-
-### Installing BETY {#install-bety}
-
-**************THIS PAGE IS DEPRECATED*************
-
-Official instructions for BETY are maintained here: https://pecan.gitbook.io/betydb-documentation
-
-If you would like to install the Docker version of BETY, please consult the [PEcAn Docker](#pecan-docker) section.
-
-#### Install Database + Data
-
-* _note_ To install BETYdb without PEcAn, first download the [`load.bety.sh` script](https://raw.githubusercontent.com/PecanProject/pecan/master/scripts/load.bety.sh)
-
-```sh
-# install database (code assumes password is bety)
-sudo -u postgres createuser -d -l -P -R -S bety
-sudo -u postgres createdb -O bety bety
-sudo -u postgres ./scripts/load.bety.sh -c YES -u YES -r 0
-sudo -u postgres ./scripts/load.bety.sh -r 1
-sudo -u postgres ./scripts/load.bety.sh -r 2
-
-# configure for PEcAn web app (change password if needed)
-cp web/config.example.php web/config.php
-
-# add models to database (VM only)
-./scripts/add.models.sh
-
-# add data to database
-./scripts/add.data.sh
-
-# create outputs folder
-mkdir ~/output
-chmod 777 ~/output
-```
-
-#### Installing BETYdb Web Application
-
-There are two flavors of BETY, PHP and Ruby. The PHP version allows for minimal interaction with the database, while the Ruby version allows for full interaction with the database.
-
-##### PHP version
-
-The PHP version comes with PEcAn and is already configured.
-
-##### RUBY version
-
-The Ruby version requires a few extra packages to be installed first.
-
-Next we install the web app.
-
-```bash
-# install bety
-cd
-git clone https://github.com/PecanProject/bety.git
-
-# install gems
-cd bety
-sudo gem2.0 install bundler
-bundle install --without development:test:javascript_testing:debug
-```
-
-and configure BETY:
-
-```bash
-# create folders for uploads
-mkdir paperclip/files paperclip/file_names
-chmod 777 paperclip/files paperclip/file_names
-
-# create folder for log files
-mkdir log
-touch log/production.log
-chmod 0666 log/production.log
-
-# fix configuration for vm
-cp config/additional_environment_vm.rb config/additional_environment.rb
-chmod go+w public/javascripts/cache/
-
-# setup bety database configuration
-cat > config/database.yml << EOF
-production:
-  adapter: postgis
-  encoding: utf-8
-  reconnect: false
-  database: bety
-  pool: 5
-  username: bety
-  password: bety
-EOF
-
-# setup login tokens
-cat > config/initializers/site_keys.rb << EOF
-REST_AUTH_SITE_KEY = 'thisisnotasecret'
-REST_AUTH_DIGEST_STRETCHES = 10
-EOF
-```
-
-
-
-### Install Models
-
-This page contains instructions on how to download and install ecosystem models that have been or are being coupled to PEcAn.
-These instructions have been tested on the PEcAn Ubuntu VM. Commands may vary on other operating systems.
-Also, some model downloads require permissions before downloading, making them unavailable to the general public. Please contact the PEcAn team if you would like access to a model that is not already installed on the default PEcAn VM.
-
-- [BioCro](#inst-biocro)
-- [CLM 4.5](#inst-clm45)
-- [DALEC](#inst-dalec)
-- [ED2](#inst-ed2)
-- [FATES](#inst-fates)
-- [GDAY](#inst-gday)
-- [JULES](#inst-jules)
-- [LINKAGES](#inst-linkages)
-- [LPJ-GUESS](#inst-lpj-guess)
-- [MAESPA](#inst-maespa)
-- [SIPNET](#inst-sipnet)
-
-
-
-#### BioCro {#inst-biocro}
-
-```bash
-# Public
-echo 'devtools::install_github("ebimodeling/biocro")' | R --vanilla
-# Development:
-echo 'devtools::install_github("ebimodeling/biocro-dev")' | R --vanilla
-```
-
-_BioCro Developers:_ request from [@dlebauer on GitHub](https://github.com/dlebauer)
-
-
-
-#### CLM 4.5 {#inst-clm45}
-
-The version of CLM installed on PEcAn is the ORNL branch provided by Dan Ricciuto. This version includes Dan's point-level CLM processing scripts.
-
-Download the code (~300M compressed), input data (1.7GB compressed, expanding to 14 GB), and a few misc inputs.
-
-```bash
-mkdir models
-cd models
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm4_5_1_r085.tar.gz
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm/ccsm_inputdata.tar.gz
-tar -xvzf clm4_5*
-tar -xvzf ccsm_inputdata.tar.gz
-
-# Parameter file:
-mkdir /home/carya/models/ccsm_inputdata/lnd/clm2/paramdata
-cd /home/carya/models/ccsm_inputdata/lnd/clm2/paramdata
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm_params.c130821.nc
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm_params.c140423.nc
-
-# Domain file:
-cd /home/carya/models/ccsm_inputdata/share/domains/domain.clm/
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/domain.lnd.1x1pt_US-UMB_navy.nc
-
-# Aggregated met data file:
-cd /home/carya/models/ccsm_inputdata/atm/datm7/CLM1PT_data/1x1pt_US-UMB
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/all_hourly.nc
-
-# lightning database
-cd /home/carya/models/ccsm_inputdata/atm/datm7/NASA_LIS/
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clmforc.Li_2012_climo1995-2011.T62.lnfm_Total_c140423.nc
-
-# surface data
-cd /home/carya/models/ccsm_inputdata/lnd/clm2/surfdata
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm/surfdata_360x720cru_simyr1850_c130927.nc
-cd /home/carya/models/ccsm_inputdata/lnd/clm2/surfdata_map
-wget ftp://nacp.ornl.gov/synthesis/2008/firenze/site/clm/surfdata_1x1pt_US-UMB_I1850CLM45CN_simyr1850.nc_new
-mv surfdata_1x1pt_US-UMB_I1850CLM45CN_simyr1850.nc_new surfdata_1x1pt_US-UMB_I1850CLM45CN_simyr1850.nc
-```
-Required libraries:
-```bash
-sudo apt-get install mercurial csh tcsh subversion cmake
-
-sudo ln -s /usr/bin/make /usr/bin/gmake
-```
-Compile and build the default inputs:
-```bash
-cd /home/carya/models/clm4_5_1_r085/scripts
-python runCLM.py --site US-UMB --compset I1850CLM45CN --mach ubuntu --ccsm_input /home/carya/models/ccsm_inputdata --tstep 1 --nopointdata --coldstart --cpl_bypass --clean_build
-```
-
-##### CLM Test Run
-You will see a new directory in scripts: US-UMB_I1850CLM45CN
-Enter this directory and run (you shouldn't have to do this normally, but there is a bug with the python script and doing this ensures all files get to the right place):
-```bash
-./US-UMB_I1850CLM45CN.build
-```
-
-Next you are ready to go to the run directory:
-```bash
-/home/carya/models/clm4_5_1_r085/run/US-UMB_I1850CLM45CN/run
-```
-Open and edit the file datm.streams.txt.CLM1PT.CLM_USRDAT, checking the file paths so that all paths start with /home/carya/models/ccsm_inputdata
-
-From this directory, launch the executable that resides in the bld directory:
-```bash
-/home/carya/clm4_5_1_r085/run/US-UMB_I1850CLM45CN/bld/cesm.exe
-```
-(not sure this was the right location, but wherever the executable is)
-
-You should begin to see output files that look like this:
-US-UMB_I1850CLM45CN.clm2.h0.yyyy-mm.nc (yyyy is the year, mm is the month)
-These are netcdf files containing monthly averages of lots of variables.
-
-The lnd_in file in the run directory can be modified to change the output file frequency and variables.
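-
-As a quick sanity check of these outputs, you can open one of the monthly files from R. This is only a sketch: the file name below follows the pattern above but is hypothetical, and `GPP` is used as an example history variable.
-
-```r
-library(ncdf4)
-
-nc <- nc_open("US-UMB_I1850CLM45CN.clm2.h0.1850-01.nc")  # example file name
-names(nc$var)                # list the variables the file contains
-gpp <- ncvar_get(nc, "GPP")  # read one example variable
-nc_close(nc)
-```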
-
-
-
-
-#### DALEC {#inst-dalec}
-
-```bash
-cd
-curl -o dalec_EnKF_pub.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/dalec_EnKF_pub.tgz
-tar zxf dalec_EnKF_pub.tgz
-rm dalec_EnKF_pub.tgz
-
-cd dalec_EnKF_pub
-make dalec_EnKF
-make dalec_seqMH
-sudo cp dalec_EnKF dalec_seqMH /usr/local/bin
-```
-
-
-
-#### ED2 {#inst-ed2}
-
-##### ED2.2 r46 (used in PEcAn manuscript)
-
-```bash
-# ----------------------------------------------------------------------
-# Get version r46 with a few patches for ubuntu
-cd
-curl -o ED.r46.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r46.tgz
-tar zxf ED.r46.tgz
-rm ED.r46.tgz
-# ----------------------------------------------------------------------
-# configure and compile ed
-cd ~/ED.r46/ED/build/bin
-curl -o include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s`
-make OPT=VM
-sudo cp ../ed_2.1-VM /usr/local/bin/ed2.r46
-```
-
-Perform a test run using pre-configured ED settings for ED2.2 r46:
-
-```bash
-# ----------------------------------------------------------------------
-# Create sample run
-cd
-mkdir testrun.ed.r46
-cd testrun.ed.r46
-curl -o ED2IN http://isda.ncsa.illinois.edu/~kooper/EBI/ED2IN.r46
-sed -i -e "s#\$HOME#$HOME#" ED2IN
-curl -o config.xml http://isda.ncsa.illinois.edu/~kooper/EBI/config.r46.xml
-# execute test run
-time ed2.r46
-```
-
-##### ED 2.2 r82
-
-```bash
-cd
-curl -o ED.r82.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.tgz
-tar zxf ED.r82.tgz
-rm ED.r82.tgz
-
-cd ED.r82
-curl -o ED.r82.patch http://isda.ncsa.illinois.edu/~kooper/EBI/ED.r82.patch
-patch -p1 < ED.r82.patch
-cd ED/build/bin
-curl -o include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s`
-make OPT=VM
-sudo cp ../ed_2.1-VM /usr/local/bin/ed2.r82
-```
-
-Perform a test run using pre-configured ED settings for ED2.2 r82:
-
-```bash
-cd
-mkdir testrun.ed.r82
-cd testrun.ed.r82
-curl -o ED2IN http://isda.ncsa.illinois.edu/~kooper/EBI/ED2IN.r82
-sed -i -e "s#\$HOME#$HOME#" ED2IN
-curl -o config.xml http://isda.ncsa.illinois.edu/~kooper/EBI/config.r82.xml
-# execute test run
-time ed2.r82
-```
-
-##### ED 2.2 bleeding edge
-
-```bash
-cd
-git clone https://github.com/EDmodel/ED2.git
-
-cd ED2/ED/build/bin
-curl -o include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s`
-./generate_deps.sh
-make OPT=VM
-sudo cp ../ed_2.1-VM /usr/local/bin/ed2.git
-```
-
-
-
-#### CLM-FATES {#inst-fates}
-
-Prerequisites:
-```
-sudo apt-get upgrade libnetcdf-dev
-sudo apt-get install subversion
-sudo apt-get install csh
-sudo apt-get install cmake
-sudo ln -s /usr/bin/make /usr/bin/gmake
-sudo rm /bin/sh
-sudo ln -s /bin/bash /bin/sh
-
-wget https://github.com/Unidata/netcdf-fortran/archive/v4.4.4.tar.gz
-tar zxf v4.4.4.tar.gz
-cd netcdf-fortran-4.4.4
-./configure
-make
-sudo make install
-```
-You might need to mess around with installing netcdf and netcdf-fortran to get a version FATES likes...
-
-Get the code from GitHub (currently private) and go to the cime/scripts directory:
-```
-git clone git@github.com:NGEET/ed-clm.git
-cd ed-clm/cime/scripts/
-```
-Within CLM-FATES, to be able to build an executable we need to create a reference run. We'll also use this reference run to grab defaults from, so we'll be registering the location of both the reference **case** (location of executable, scripts, etc.) and the reference **inputs** with the PEcAn database. To begin, copy the reference run script from PEcAn:
-```
-cp ~/pecan/models/fates/inst/create_1x1_ref_case.sh .
-``` -Edit reference case script to set NETCDF_HOME, CROOT (reference run case), DIN_LOC_ROOT (reference run inputs). Also, make sure DIN_LOC_ROOT exists as FATES will not create it itself. Then run the script -``` -./create_1x1_ref_case.sh -``` -Be aware that this script WILL ask you for your password on the NCAR server to download the reference case input data (the guest password may work, haven't tried this). If it gives an error at the pio stage check the log, but the most likely error is it being unable to find a version of netcdf it likes. - -Once FATES is installed, set the whole reference case directory as the Model path (leave filename blank) and set the whole inputs directory as an Input with format clm_defaults. - - - -#### GDAY {#inst-gday} - -Navigate to a directory you would like to store GDAY and run the following: - -```bash -git clone https://github.com/mdekauwe/GDAY.git - -cd GDAY - -cd src - -make -``` - -```gday``` is your executable. - - - -#### JULES {#inst-jules} - -INSTALL STEPS: - 1) Download JULES and FCM -JULES: - Model requires registration to download. Not to be put on PEcAn VM -Getting Started documentation: https://jules.jchmr.org/content/getting-started -Registration: http://jules-lsm.github.io/access_req/JULES_access.html - -FCM: -```bash - https://github.com/metomi/fcm/ - wget https://github.com/metomi/fcm/archive/2015.05.0.tar.gz -``` - -2) edit makefile -```bash -open etc/fcm-make/make.cfg - -set JULES_NETCDF = actual instead of dummy -set path (e.g. /usr/) and lib_path /lib64 to netCDF libraries -``` - -3) compile JULES - -```bash -cd etc/fcm-make/ -{path.to.fcm}/fcm make -f etc/fcm-make/make.cfg --new - -``` - -```bash -UBUNTU VERSION: installed without having to add any perl libraries -#perl stuff that I had to install on pecan2 not PEcAN VM -sudo yum install perl-Digest-SHA -sudo yum install perl-Time-modules -sudo yum install cpan -curl -L http://cpanmin.us | perl - --sudo App::cpanminus -sudo cpanm Time/Piece.pm -sudo cpanm IO/Uncompress/Gunzip.pm -``` - - -Executable is under build/bin/jules.exe - -Example rundir: examples/point_loobos - - - - -#### LINKAGES {#inst-linkages} - -##### R Installation -```r -# Public -echo 'devtools::install_github("araiho/linkages_package")' | R --vanilla -``` - -##### FORTRAN VERSION - -```r -#FORTRAN VERSION -cd -git clone https://github.com/araiho/Linkages.git -cd Linkages -gfortran -o linkages linkages.f -sudo cp linkages /usr/local/bin/linkages.git -``` - - - - -#### LPJ-GUESS {#inst-lpj-guess} - -Instructions to download source code - -Go to [LPJ-GUESS website](http://web.nateko.lu.se/lpj-guess/download.html) for instructions to access code. - - - -#### MAESPA {#inst-maespa} - -Navigate to a directory you would like store MAESPA and run the following: - -```bash -git clone https://bitbucket.org/remkoduursma/maespa.git - -cd maespa - -make -``` - -```maespa.out``` is your executable. Example input files can be found in the ```inpufiles``` directory. Executing measpa.out from within one of the example directories will produce output. - -MAESPA developers have also developed a wrapper package called Maeswrap. The usual R package installation method ```install.packages``` may present issues with downloading an unpacking a dependency package called ```rgl```. 
-Here are a couple of solutions:
-
-##### Solution 1
-
-```bash
-### From the command line
-sudo apt-get install r-cran-rgl
-```
-then from within R
-```R
-install.packages("Maeswrap")
-```
-
-##### Solution 2
-
-```bash
-### From the command line
-sudo apt-get install libglu1-mesa-dev
-```
-then from within R
-```R
-install.packages("Maeswrap")
-```
-
-
-
-#### SIPNET {#inst-sipnet}
-
-```bash
-cd
-curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz
-tar zxf sipnet_unk.tar.gz
-rm sipnet_unk.tar.gz
-
-cd sipnet_unk
-make
-sudo cp sipnet /usr/local/bin/sipnet.runk
-```
-
-##### SIPNET testrun
-
-```bash
-cd
-curl -o testrun.sipnet.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.sipnet.tar.gz
-tar zxf testrun.sipnet.tar.gz
-rm testrun.sipnet.tar.gz
-cd testrun.sipnet
-sipnet.runk
-```
-
-
-
-### Installing data for PEcAn {#install-data}
-
-PEcAn assumes some of the data to be installed on the machine. This page describes how to install this data.
-
-#### Site Information
-
-These are large-ish files that contain data used with ED2 and SIPNET:
-
-```bash
-rm -rf sites
-curl -o sites.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/sites.tgz
-tar zxf sites.tgz
-sed -i -e "s#/home/kooper/Projects/EBI#${PWD}#" sites/*/ED_MET_DRIVER_HEADER
-rm sites.tgz
-
-rm -rf inputs
-curl -o inputs.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/inputs.tgz
-tar zxf inputs.tgz
-rm inputs.tgz
-```
-
-#### FIA database
-
-The FIA database is large and will add an extra 10GB to the installation.
-
-```bash
-# download and install database
-curl -o fia5data.psql.gz http://isda.ncsa.illinois.edu/~kooper/EBI/fia5data.psql.gz
-dropdb --if-exists fia5data
-createdb -O bety fia5data
-gunzip fia5data.psql.gz
-psql -U bety -d fia5data < fia5data.psql
-rm fia5data.psql
-```
-
-#### Flux Camp
-
-The following will install the data for flux camp (as well as the demo script for PEcAn).
-
-```bash
-cd
-curl -o plot.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/plot.tgz
-tar zxf plot.tgz
-rm plot.tgz
-```
-
-#### Harvard for ED tutorial
-
-Add datasets and runs:
-
-```bash
-curl -o Santarem_Km83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/Santarem_Km83.zip
-unzip -d sites Santarem_Km83.zip
-sed -i -e "s#/home/pecan#${HOME}#" sites/Santarem_Km83/ED_MET_DRIVER_HEADER
-rm Santarem_Km83.zip
-
-curl -o testrun.s83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.s83.zip
-unzip testrun.s83.zip
-sed -i -e "s#/home/pecan#${HOME}#" testrun.s83/ED2IN
-rm testrun.s83.zip
-
-curl -o ed2ws.harvard.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/ed2ws.harvard.tgz
-tar zxf ed2ws.harvard.tgz
-mkdir ed2ws.harvard/analy ed2ws.harvard/histo
-sed -i -e "s#/home/pecan#${HOME}#g" ed2ws.harvard/input_harvard/met_driver/HF_MET_HEADER ed2ws.harvard/ED2IN ed2ws.harvard/*.r
-rm ed2ws.harvard.tgz
-
-curl -o testrun.PDG.zip http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.PDG.zip
-unzip testrun.PDG.zip
-sed -i -e "s#/home/pecan#${HOME}#" testrun.PDG/Met/PDG_MET_DRIVER testrun.PDG/Template/ED2IN
-sed -i -e 's#/n/scratch2/moorcroft_lab/kzhang/PDG/WFire_Pecan/##' testrun.PDG/Template/ED2IN
-rm testrun.PDG.zip
-
-curl -o create_met_driver.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/create_met_driver.tar.gz
-tar zxf create_met_driver.tar.gz
-rm create_met_driver.tar.gz
-```
-
-
-
-# Docker {#docker-index}
-
-This chapter describes the PEcAn Docker container infrastructure.
-It contains the following sections:
-
-- [Introduction to Docker](#docker-intro) -- Brief introduction to Docker and `docker-compose`
-- [Docker quickstart](#docker-quickstart) -- Brief tutorial for setting up a Docker-based PEcAn instance
-- [PEcAn Docker Architecture](#pecan-docker) -- Detailed description of the containers comprising the PEcAn Docker-based infrastructure
-- [Dockerfiles for models](#model-docker) -- General guide for writing Dockerfiles for new models
-- [Building and modifying images](#docker-build-images)
-- [Troubleshooting Docker](#docker-troubleshooting)
-- [Migrating from VM to Docker](#docker-migrate) -- Steps to migrate from running PEcAn on a VM to running it in Docker.
-
-
-
-## Introduction to Docker {#docker-intro}
-
-* [What is Docker](#what-is-docker)
-* [Working with Docker](#working-with-docker)
-* [`docker-compose`](#docker-compose)
-
-### What is Docker? {#what-is-docker}
-
-For a quick and accessible introduction to Docker, we suggest this YouTube video: [Learn Docker in 12 Minutes](https://www.youtube.com/watch?v=YFl2mCHdv24).
-
-For more comprehensive Docker documentation, we refer you to the [Docker documentation website](https://docs.docker.com/).
-
-For a useful analogy for Docker containerization, we refer you to the webcomic [xkcd](https://xkcd.com/1988/).
-
-Docker is a technology for encapsulating software in "containers", somewhat similarly to virtual machines.
-Like virtual machines, Docker containers facilitate software distribution by bundling the software with all of its dependencies in a single location.
-Unlike virtual machines, Docker containers are meant to only run a single service or process and are built on top of existing services provided by the host OS (such as disk access, networking, memory management, etc.).
-
-In Docker, an **image** refers to a binary snapshot of a piece of software and all of its dependencies.
-A **container** refers to a running instance of a particular image.
-A good rule of thumb is that each container should be responsible for no more than one running process.
-A software **stack** refers to a collection of containers, each responsible for its own process, working together to power a particular application.
-Docker makes it easy to run multiple software stacks in parallel on the same machine.
-Stacks can be given a unique name, which is passed along as a prefix to all their containers.
-Inside these stacks, containers can communicate using generic names not prefixed with the stack name, making it easy to deploy multiple stacks with the same internal configuration.
-Containers within the same stack communicate with each other via a common **network**.
-Like virtual machines or system processes, Docker stacks can also be instructed to open specific ports to facilitate communication with the host and other machines.
-
-The PEcAn database BETY provides an instructive case study.
-BETY consists of two core processes -- a PostgreSQL database, and a web-based front-end to that database (Apache web server with Ruby on Rails).
-Running BETY as a "Dockerized" application therefore involves two containers -- one for the PostgreSQL database, and one for the web server.
-We could build these containers ourselves by starting from a container with nothing but the essentials of a particular operating system, but we can save some time and effort by starting with an [existing image for PostgreSQL](https://hub.docker.com/_/postgres/) from Docker Hub.
-When starting a Dockerized BETY, we start the PostgreSQL container first, then start the BETY container, telling it how to communicate with the PostgreSQL container.
-To upgrade an existing BETY instance, we stop the BETY container, download the latest version, tell it to upgrade the database, and re-start the BETY container.
-There is no need to install new dependencies for BETY since they are all shipped as part of the container.
-
-The PEcAn Docker architecture is designed to facilitate installation and maintenance on a variety of systems by eliminating the need to install and maintain complex system dependencies (such as PostgreSQL, Apache web server, and Shiny server).
-Furthermore, using separate Docker containers for each ecosystem model helps avoid clashes between different software version requirements of different models (e.g. some models require GCC <5.0, while others may require GCC >=5.0).
-
-The full PEcAn Docker stack is described in more detail in the [next section](#pecan-docker).
-
-### Working with Docker {#working-with-docker}
-
-To run an image, you can use the Docker command line interface.
-For example, the following runs a PostgreSQL image based on the [pre-existing PostGIS image](https://hub.docker.com/r/mdillon/postgis/) by `mdillon`:
-
-```bash
-docker run \
-  --detach \
-  --rm \
-  --name postgresql \
-  --network pecan \
-  --publish 9876:5432 \
-  --volume ${PWD}/postgres:/var/lib/postgresql/data \
-  mdillon/postgis:9.6-alpine
-```
-
-This will start the PostgreSQL+PostGIS container.
-The following options were used:
-
-- `--detach` makes the container run in the background.
-- `--rm` removes the container when it is finished (make sure to use the volume below).
-- `--name` the name of the container; this is also the hostname of the container, which other Docker containers on the same network can use to reach it.
-- `--network pecan` the network the container should run in; this leverages Docker's network isolation and allows other containers on the same network to connect to this one using the `postgresql` hostname.
-- `--publish` exposes the port to the outside world, similar to SSH port forwarding; here it maps port 9876 on the host to port 5432 in the docker container.
-- `--volume` maps a folder on your local machine into the container. This allows you to keep the data on your local machine.
-- `mdillon/postgis:9.6-alpine` is the actual image that will be run; in this case it comes from the group/person `mdillon`, the image is `postgis`, and the version is `9.6-alpine` (version 9.6 built on Alpine Linux).
-
-Other options that might be used:
-
-- `--tty` allocate a pseudo-TTY to send stdout and stderr back to the console.
-- `--interactive` keeps stdin open so the user can interact with the running application.
-- `--env` sets environment variables; these are often used to change the behavior of the docker container.
-
-To see a list of all running containers you can use the following command:
-
-```bash
-docker ps
-```
-
-To see the log files of a container you use the following command (you can use either the name or the id as returned by `docker ps`). The `-f` flag will follow the `stdout`/`stderr` from the container; use `Ctrl-C` to stop following the `stdout`/`stderr`.
-
-```bash
-docker logs -f postgresql
-```
-
-To stop a running container use:
-
-```
-docker stop postgresql
-```
-
-Containers that are running in the foreground (without the `--detach`) can be stopped by pressing `Ctrl-C`. Any containers running in the background (with `--detach`) will continue running until the machine is restarted or the container is stopped using `docker stop`.
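-
-To get a shell inside a running container for debugging, you can use `docker exec` (shown here, purely as an illustration, against the `postgresql` container started above):
-
-```bash
-# open an interactive shell inside the running postgresql container
-docker exec -ti postgresql /bin/bash
-```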
-
-### `docker-compose` {#docker-compose}
-
-For a quick introduction to `docker-compose`, we recommend the following YouTube video: [Docker Compose in 12 Minutes](https://www.youtube.com/watch?v=Qw9zlE3t8Ko).
-
-The complete `docker-compose` reference can be found on the [Docker documentation website](https://docs.docker.com/compose/).
-
-`docker-compose` provides a convenient way to configure and run a multi-container Docker stack.
-Basically, a `docker-compose` setup consists of a list of containers and their configuration parameters, which are then internally converted into the equivalent `docker` commands.
-To configure BETY as described above, we can use a `docker-compose.yml` file like the following:
-
-```yaml
-version: "3"
-services:
-  postgres:
-    image: mdillon/postgis:9.5
-  bety:
-    image: pecan/bety
-    depends_on:
-      - postgres
-```
-
-This simple file allows us to bring up the full BETY application, with both the database and the BETY web application. The BETY app will not be brought up until the database container has started.
-
-You can now start this application by changing into the same directory as the `docker-compose.yml` file (`cd /path/to/file`) and then running:
-
-```
-docker-compose up
-```
-
-This will start the application, and you will see the log files for the two different containers.
-
-
-
-## The PEcAn docker install process in detail {#docker-quickstart}
-
-### Configure docker-compose {#pecan-setup-compose-configure}
-
-This section will let you download some configuration files. The documentation provides links to the latest released version (master branch in GitHub) or the develop version that we are working on (develop branch in GitHub) which will become the next release. If you cloned the PEcAn GitHub repository you can use `git checkout <branchname>` to switch branches.
-
-The PEcAn Docker stack is configured using a `docker-compose.yml` file. You can download just this file directly from GitHub: [latest](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml) or [develop](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml). You can also find this file in the root of the cloned PEcAn GitHub repository. There is no need to edit the `docker-compose.yml` file. You can use either the `.env` file to change some of the settings, or the `docker-compose.override.yml` file to modify the `docker-compose.yml` file. This makes it easier for you to get an updated version of the `docker-compose.yml` file without losing any changes you have made to it.
-
-Some of the settings in the `docker-compose.yml` can be set using a `.env` file. You can download either the [latest](https://raw.githubusercontent.com/PecanProject/pecan/master/docker/env.example) or the [develop](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/env.example) version. If you have cloned the GitHub repository it is also located in the docker folder. This file should be called `.env` and be placed in the same folder as your `docker-compose.yml` file. This file will allow you to set which version of PEcAn or BETY to use. See the comments in this file to control the settings. Options you might want to set are:
-
-- `PECAN_VERSION` : The docker images to use for PEcAn. The default is `latest`, which is the latest released version of PEcAn. Setting this to `develop` will result in using the version of PEcAn which will become the next release.
-- `PECAN_FQDN` : The name of the server where PEcAn is running. This is used to register all files generated by this instance of PEcAn (see also `TRAEFIK_HOST`).
-- `PECAN_NAME` : A short name for this PEcAn server that is shown in the pull-down menu and might be easier to recognize.
-- `BETY_VERSION` : This controls the version of BETY. The default is `latest`, which is the latest released version of BETY. Setting this to `develop` will result in using the version of BETY which will become the next release.
-- `TRAEFIK_HOST` : Should be the FQDN of the server; this is needed when generating an SSL certificate. For SSL certificates you will need to set `TRAEFIK_ACME_ENABLE` as well as `TRAEFIK_ACME_EMAIL`.
-- `TRAEFIK_IPFILTER` : Used to limit access to certain resources, such as RabbitMQ and the Traefik dashboard.
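-
-As a minimal sketch (the values below are illustrative placeholders, not defaults; see `docker/env.example` for the full list of settings), a `.env` file might look like:
-
-```bash
-# illustrative .env file -- replace the hostnames and names with your own
-PECAN_VERSION=develop
-PECAN_FQDN=pecan.example.edu
-PECAN_NAME=pecan-demo
-BETY_VERSION=develop
-TRAEFIK_HOST=pecan.example.edu
-```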
-
-A final file, which is optional, is a `docker-compose.override.yml`. You can download a version for the [latest](https://raw.githubusercontent.com/PecanProject/pecan/master/docker/docker-compose.example.yml) and [develop](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/docker-compose.example.yml) versions. If you have cloned the GitHub repository it is located in the docker folder. Use this file as an example of what you can do; copy over only the pieces that you really need. This will allow you to make changes to the docker-compose file for your local installation. You can use this to add additional containers to your stack, change the path where docker stores the data for the containers, or open up the postgresql port.
-
-```yaml
-version: "3"
-
-services:
-  # expose database to localhost for ease of access
-  postgres:
-    ports:
-      - 5432:5432
-```
-
-Once you have the `docker-compose.yml` file, as well as the optional `.env` and `docker-compose.override.yml`, in a folder, you can start the PEcAn stack. The following instructions assume you are in the same directory as the file (if not, `cd` into it).
-
-In the rest of this section we will use a few arguments for the `docker-compose` application. The location of these arguments is important. The general syntax of `docker-compose` is `docker-compose <ARGUMENTS FOR DOCKER COMPOSE> <COMMAND> <ARGUMENTS FOR COMMAND> [SERVICES]`. More generally, `docker-compose` options are very sensitive to their location relative to other commands in the same line -- that is, `docker-compose -f /my/docker-compose.yml -p pecan up -d postgres` is _not_ the same as `docker-compose -d postgres -p pecan up -f /my/docker-compose.yml`. If things ever don't seem to be working as expected, check that the arguments are in the right order.
-
-- `-f <filename>` : *ARGUMENTS FOR DOCKER COMPOSE* : Allows you to specify a docker-compose.yml file explicitly. You can use this argument multiple times. The default is to use the docker-compose.yml and docker-compose.override.yml in your current folder.
-- `-p <projectname>` : *ARGUMENTS FOR DOCKER COMPOSE* : Project name; all volumes, networks, and containers will be prefixed with this argument. The default is to use the current folder name.
-- `-d` : *ARGUMENTS FOR `up` COMMAND* : Will start all the containers in the background and return to the command shell.
-
-If no services are given on the `docker-compose` command line, all possible services will be started.
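-
-To check how the base file, the override file, and your `.env` settings combine, you can print the fully merged configuration (a quick sanity check; run this in the folder containing the files):
-
-```bash
-# print the merged configuration from docker-compose.yml,
-# docker-compose.override.yml, and .env
-docker-compose config
-```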
-
-
-### Initialize PEcAn (first time only) {#pecan-docker-quickstart-init}
-
-Before you can start to use PEcAn for the first time you will need to initialize the database (and optionally add some data). The following two sections will [first initialize the database](#pecan-docker-quickstart-init-db) and [then add some data](#pecan-docker-quickstart-init-data) to the system.
-
-#### Initialize the PEcAn database {#pecan-docker-quickstart-init-db}
-
-The commands described in this section will set up the PEcAn database (BETY) and pre-load it with some common "default" data.
-
-```bash
-docker-compose -p pecan up -d postgres
-
-# If you have a custom docker-compose file:
-# docker-compose -f /path/to/my-docker-compose.yml -p pecan up -d postgres
-```
-
-The breakdown of this command is as follows:
-
-- `-p pecan` -- This tells `docker-compose` to do all of this as part of a "project" (`-p`) we'll call `pecan`. By default, the project name is set to the name of the current working directory. The project name will be used as a prefix to all containers started by this `docker-compose` instance (so, if we have a service called `postgres`, this will create a container called `pecan_postgres`).
-- `up -d` -- `up` is a command that initializes the containers. Initialization involves downloading and building the target containers and any containers they depend on, and then running them. Normally, this happens in the foreground, printing logs directly to `stderr`/`stdout` (meaning you would have to interrupt it with Ctrl-C), but the `-d` flag forces this to happen more quietly and in the background.
-- `postgres` -- This indicates that we only want to initialize the service called `postgres` (and its dependencies). If we omitted this, `docker-compose` would initialize all containers in the stack.
-
-The end result of this command is to initialize a "blank" PostGIS container that will run in the background.
-This container is not connected to any data (yet), and is basically analogous to just installing and starting PostgreSQL on your system.
-As a side effect, the above command will also create blank data ["volumes"](https://docs.docker.com/storage/volumes/) and a ["network"](https://docs.docker.com/network/) that containers will use to communicate with each other.
-Because our project is called `pecan` and `docker-compose.yml` describes a network called `pecan`, the resulting network is called `pecan_pecan`.
-This is relevant to the following commands, which will actually initialize and populate the BETY database.
-
-Assuming the above ran successfully, next run the following:
-
-```bash
-docker-compose run --rm bety initialize
-```
-
-The breakdown of this command is as follows: {#docker-run-init}
-
-- `docker-compose run` -- This says we will be running a specific command inside the target service (`bety` in this case).
-- `--rm` -- This automatically removes the resulting container once the specified command exits, as well as any volumes associated with the container. This is useful as a general "clean-up" flag for one-off commands (like this one) to make sure you don't leave any "zombie" containers or volumes around at the end.
-- `bety` -- This is the name of the service in which we want to run the specified command.
-- Everything after the service name (here, `bety`) is interpreted as an argument to the image's specified [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint). For the `bety` service, the entrypoint is the script [`docker/entrypoint.sh`](https://github.com/PecanProject/bety/blob/master/docker/entrypoint.sh) located in the BETY repository. Here, the `initialize` argument is parsed to mean "Create a new database", which first runs `psql` commands to create the `bety` role and database and then runs the `load.bety.sh` script.
-
-    NOTE: The entrypoint script that is used is the one copied into the Docker container at the time it was built, which, depending on the indicated image version and how often images are built on Docker Hub relative to updates to the source, may be older than whatever is in the source code.
-
-Note that this command may throw a bunch of errors related to functions and/or operators already existing.
-This is normal -- it just means that the PostGIS extension to PostgreSQL is already installed.
-The important thing is that you see output near the end like:
-
-```
-CREATED SCHEMA
-Loading schema_migrations : ADDED 61
-Started psql (pid=507)
-Updated formats : 35 (+35)
-Fixed formats : 46
-Updated machines : 23 (+23)
-Fixed machines : 24
-Updated mimetypes : 419 (+419)
-Fixed mimetypes : 1095
-...
-...
-...
-Added carya41 with access_level=4 and page_access_level=1 with id=323
-Added carya42 with access_level=4 and page_access_level=2 with id=325
-Added carya43 with access_level=4 and page_access_level=3 with id=327
-Added carya44 with access_level=4 and page_access_level=4 with id=329
-Added guestuser with access_level=4 and page_access_level=4 with id=331
-```
-
-If you do not see this output, refer to the [troubleshooting](#docker-quickstart-troubleshooting) section at the end of this section for tips and solutions to common problems.
-
-Once the command has finished successfully, proceed with the next step, which will load some initial data into the database and place the data in the docker volumes.
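-
-Optionally, you can verify that the database was created and populated by listing its tables from inside the `postgres` container (a quick sanity check, assuming the project name `pecan` used above):
-
-```bash
-# list the tables of the freshly initialized BETY database
-docker-compose -p pecan exec postgres psql -U bety -d bety -c '\dt'
-```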
-
-#### Add example data (first time only) {#pecan-docker-quickstart-init-data}
-
-The following command will add some initial data to the PEcAn stack and register the data with the database.
-
-```bash
-docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
-```
-
-The breakdown of this command is as follows:
-
-- `docker run` -- This says we will be running a specific command inside the target Docker container. See `docker run --help` and the [Docker run reference](https://docs.docker.com/engine/reference/run/) for more information.
-- `-ti` -- This is actually two flags: `-t` to allocate a pseudo-tty and `-i` to keep STDIN open even if detached. `-t` is necessary to ensure lower-level script commands run correctly, and `-i` makes sure that your keyboard input reaches the container.
-- `--rm` -- This automatically removes the resulting container once the specified command exits, as well as any volumes associated with the container. This is useful as a general "clean-up" flag for one-off commands (like this one) to make sure you don't leave any "zombie" containers or volumes around at the end.
-- `--network pecan_pecan` -- This indicates that the container will use the existing `pecan_pecan` network. This network is what allows the container to communicate with the `postgres` container (which, recall, is _just_ a PostGIS installation with some data).
-- `pecan/data:develop` -- This is the name of the image in which to run the specified command, in the form `repository/image:version`. This is interpreted as follows:
-  - First, it sees if there are any images called `pecan/data:develop` available on your local machine. If there are, it uses that one.
-  - If that image version is _not_ available locally, it will next try to find the image online. By default, it searches [Docker Hub](https://hub.docker.com/), such that `pecan/data` gets expanded to the container at `https://hub.docker.com/r/pecan/data`. For custom repositories, a full name can be given, such as `hub.ncsa.illinois.edu/pecan/data:latest`.
-  - If `:version` is omitted, Docker assumes `:latest`. NOTE that while online containers _should_ have a `:latest` version, not all of them do, and if a `:latest` version does not exist, Docker will be unable to find the image and will throw an error.
-- Everything after the image name (here, `pecan/data:develop`) is interpreted as an argument to the image's specified [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint).
-- `--volume pecan_pecan:/data` -- This mounts the current project's volume, called `pecan_pecan`, at `/data` inside the container, so the files copied by `pecan/data:develop` persist in the volume (as with the network, the project name `pecan` is the prefix, and the volume name also happens to be `pecan` as specified in the `docker-compose.yml` file).
-- `--env FQDN=docker` -- the Fully Qualified Domain Name; this is the same value as specified in the `.env` file (for the web, monitor and executor containers). This will link the data files to the name in the machines table in BETY.
-- `pecan/data:develop` -- As above, this is the target image to run. Since there is no argument after the image name, this command will run the default command ([`CMD`](https://docs.docker.com/engine/reference/builder/#cmd)) specified for this docker container. In this case, it is the [`docker/add-data.sh`](https://github.com/PecanProject/pecan/blob/develop/docker/add-data.sh) script from the PEcAn repository.
-
-Under the hood, this container runs the `docker/add-data.sh` script, which copies a bunch of input files and registers them with the PEcAn database.
-
-Successful execution of this command should take some time because it involves copying reasonably large amounts of data and performing a number of database operations.
-
-
-#### Start PEcAn {#start-pecan}
-
-
-If you already completed the above steps, you can start the full stack by just running the following:
-
-```bash
-docker-compose -p pecan up -d
-```
-
-This will build and start all containers required to run PEcAn.
-With the `-d` flag, this will run all of these containers quietly in the background, and show a nice architecture diagram with the name and status of each container while they are starting.
-Once this is done you have a working instance of PEcAn.
-
-If all of the containers started successfully, you should be able to access the various components from a browser via the following URLs (if you run these commands on a remote machine, replace localhost with the actual hostname).
-
-- PEcAn web interface (running models) -- http://localhost:8000/pecan/ (NOTE: The trailing slash is necessary.)
-- PEcAn documentation and home page -- http://localhost:8000/
-- BETY web interface -- http://localhost:8000/bety/
-- File browser (minio) -- http://localhost:8000/minio/
-- RabbitMQ management console (for managing queued processes) -- http://localhost:8000/rabbitmq/
-- Traefik, web server showing the maps from URLs onto their respective containers -- http://localhost:8000/traefik/
-- Monitor, service that monitors models and shows all models that are online, as well as how many instances are online and the number of jobs waiting. The output is in JSON -- http://localhost:8000/monitor/
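-
-To confirm that the containers are all up before browsing to these URLs, you can list the state of the stack's services (assuming the project name `pecan` used above):
-
-```bash
-# show the status of every container in the pecan project
-docker-compose -p pecan ps
-```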
-
-
-#### Start model runs using curl {#curl-model-runs}
-
-To test PEcAn you can use the following `curl` statement, or use the webpage to submit a request (if you run these commands on a remote machine, replace localhost with the actual hostname):
-
-```bash
-curl -v -X POST \
-    -F 'hostname=docker' \
-    -F 'modelid=5000000002' \
-    -F 'sitegroupid=1' \
-    -F 'siteid=772' \
-    -F 'sitename=Niwot Ridge Forest/LTER NWT1 (US-NR1)' \
-    -F 'pft[]=temperate.coniferous' \
-    -F 'start=2004/01/01' \
-    -F 'end=2004/12/31' \
-    -F 'input_met=5000000005' \
-    -F 'email=' \
-    -F 'notes=' \
-    'http://localhost:8000/pecan/04-runpecan.php'
-```
-
-This should return some text containing a `Location:` header, which shows the workflow id. You can prepend http://localhost:8000/pecan/ to the front of it, for example: http://localhost:8000/pecan/05-running.php?workflowid=99000000001. There you will be able to see the progress of the workflow.
-
-To see what is happening behind the scenes you can look at the log files of the specific docker containers. The ones of interest are `pecan_executor_1`, the container that will execute a single workflow, and `pecan_sipnet_1`, which executes the SIPNET model. To see the logs you use `docker logs pecan_executor_1`. The following is an example of the output:
-
-```
-2018-06-13 15:50:37,903 [MainThread ] INFO : pika.adapters.base_connection - Connecting to 172.18.0.2:5672
-2018-06-13 15:50:37,924 [MainThread ] INFO : pika.adapters.blocking_connection - Created channel=1
-2018-06-13 15:50:37,941 [MainThread ] INFO : root - [*] Waiting for messages. To exit press CTRL+C
-2018-06-13 19:44:49,523 [MainThread ] INFO : root - b'{"folder": "/data/workflows/PEcAn_99000000001", "workflowid": "99000000001"}'
-2018-06-13 19:44:49,524 [MainThread ] INFO : root - Starting job in /data/workflows/PEcAn_99000000001.
-2018-06-13 19:45:15,555 [MainThread ] INFO : root - Finished running job.
-```
-
-This shows that the executor connects to RabbitMQ and waits for messages. Once it picks up a message, it prints the message and executes the workflow in the folder passed in with the message. Once the workflow (including any model executions) is finished, it prints "Finished". The log file for `pecan_sipnet_1` is very similar; in this case it runs the `job.sh` in the run folder.
-
-To run multiple executors in parallel you can duplicate the executor section in the docker-compose file and just rename it, for example from `executor` to `executor1` and `executor2`. The same can be done for the models. To make this easier it helps to deploy the containers using Kubernetes, which allows you to easily scale the containers up and down.
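-
-Depending on your docker-compose version and compose file, you may also be able to scale a service without editing the file at all (a hypothetical quick alternative; this will not work if the service defines a fixed `container_name`):
-
-```bash
-# run three executor containers in parallel
-docker-compose -p pecan up -d --scale executor=3
-```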
-
-### Troubleshooting {#docker-quickstart-troubleshooting}
-
-When initializing the database, you will know you have encountered more serious errors if the command exits or hangs with output resembling the following:
-
-```
-LINE 1: SELECT count(*) FROM formats WHERE ...
-                             ^
-Error: Relation `formats` does not exist
-```
-
-**If the above command fails**, you can try to fix things interactively by first opening a shell inside the container...
-
-```
-docker run -ti --rm --network pecan_pecan pecan/bety:latest /bin/bash
-```
-
-...and then running the following commands, which emulate the functionality of the `entrypoint.sh` with the `initialize` argument.
-
-```bash
-# Create the bety role in the postgresql database
-psql -h postgres -p 5432 -U postgres -c "CREATE ROLE bety WITH LOGIN CREATEDB NOSUPERUSER NOCREATEROLE PASSWORD 'bety'"
-
-# Initialize the bety database itself, and set to be owned by role bety
-psql -h postgres -p 5432 -U postgres -c "CREATE DATABASE bety WITH OWNER bety"
-
-# If either of these fail with a "role/database bety already exists",
-# that's fine. You can safely proceed to the next command.
-
-# Load the actual bety database tables and values
-./script/load.bety.sh -a "postgres" -d "bety" -p "-h postgres -p 5432" -o bety -c -u -g -m ${LOCAL_SERVER} -r 0 -w https://ebi-forecast.igb.illinois.edu/pecan/dump/all/bety.tar.gz
-```
-
-
-
-## PEcAn Docker Architecture {#pecan-docker}
-
-* [Overview](#pecan-docker-overview)
-* [PEcAn's `docker-compose`](#pecan-docker-compose)
-* [Top-level structure](#pecan-dc-structure)
-* [`traefik`](#pecan-dc-traefik)
-* [`portainer`](#pecan-dc-portainer)
-* [`minio`](#pecan-dc-minio)
-* [`thredds`](#pecan-dc-thredds)
-* [`postgres`](#pecan-dc-postgres)
-* [`rabbitmq`](#pecan-dc-rabbitmq)
-* [`bety`](#pecan-dc-bety)
-* [`docs`](#pecan-dc-docs)
-* [`web`](#pecan-dc-web)
-* [`executor`](#pecan-dc-executor)
-* [`monitor`](#pecan-dc-monitor)
-* [Model-specific containers](#pecan-dc-models)
-
-
-### Overview {#pecan-docker-overview}
-
-The PEcAn docker architecture consists of many containers (see figure below) that communicate with each other. The goal of this architecture is to make it easy to expand the PEcAn system by deploying new model containers and registering them with PEcAn. Once this is done, users can use these new models in their work. The PEcAn framework will set up the configurations for the models, and send a message to the model containers to start execution. Once the execution is finished the PEcAn framework will continue. This is exactly as if the model were running on an HPC machine. Models can be executed in parallel by launching multiple model containers.
-
-```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'}
-knitr::include_graphics("03_topical_pages/94_docker/pecan-docker.png")
-```
-As can be seen in the figure, the architecture leverages two standard containers (in orange). The first container is PostgreSQL with PostGIS ([mdillon/postgis](https://hub.docker.com/r/mdillon/postgis/)), which is used to store the database used by both BETY and PEcAn. The second container is a message bus, more specifically RabbitMQ ([rabbitmq](https://hub.docker.com/_/rabbitmq/)).
-
-The BETY app container ([pecan/bety](https://hub.docker.com/r/pecan/bety/)) is the front end to the BETY database and is connected to the postgresql container. An HTTP server can be put in front of this container for SSL termination, as well as to allow for load balancing (by using multiple BETY app containers).
-
-The PEcAn framework containers provide multiple unique ways to interact with the PEcAn system (none of these containers will have any models installed):
-
-- PEcAn shiny hosts the Shiny applications developed and interacts with the database to get all information necessary for display
-- PEcAn rstudio is an RStudio environment with the PEcAn libraries preloaded. This allows for prototyping of new algorithms that can be used as part of the PEcAn framework later.
-- PEcAn web allows the user to create a new PEcAn workflow. The workflow is stored in the database, and the models are executed by the model containers.
-- PEcAn cli allows the user to submit a pecan.xml file to be executed by the PEcAn framework. The workflow created from the XML file is stored in the database, and the models are executed by the model containers.
-
-The model containers contain the actual models that are executed as well as small wrappers to make them work in the PEcAn framework. The containers will run the model based on the parameters received from the message bus and convert the outputs back to the standard PEcAn output format. Once the container is finished processing a message it will immediately get the next message and start processing it.
-
-### PEcAn's `docker-compose` {#pecan-docker-compose}
-
-```{r comment='', echo = FALSE, results = 'hide'}
-docker_compose_file <- file.path("..", "docker-compose.yml")
-dc_yaml <- yaml::read_yaml(docker_compose_file)
-```
-
-The PEcAn Docker architecture is described in full by the PEcAn `docker-compose.yml` file.
-For full `docker-compose` syntax, see the [official documentation](https://docs.docker.com/compose/).
-
-This section describes the [top-level structure](#pecan-dc-structure) and each of the services, which are as follows:
-
-- [`traefik`](#pecan-dc-traefik)
-- [`portainer`](#pecan-dc-portainer)
-- [`minio`](#pecan-dc-minio)
-- [`thredds`](#pecan-dc-thredds)
-- [`postgres`](#pecan-dc-postgres)
-- [`rabbitmq`](#pecan-dc-rabbitmq)
-- [`bety`](#pecan-dc-bety)
-- [`docs`](#pecan-dc-docs)
-- [`web`](#pecan-dc-web)
-- [`executor`](#pecan-dc-executor)
-- [`monitor`](#pecan-dc-monitor)
-- [Model-specific services](#pecan-dc-models)
-
-For reference, the complete `docker-compose` file is as follows:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml, stdout())
-```
-
-There are two ways you can override different values in the docker-compose.yml file. The first method is to create a file called `.env` that is placed in the same folder as the docker-compose.yml file. This file can override some of the configuration variables used by docker-compose. The following is an example of the env file:
-
-```{r comment='', echo = FALSE}
-docker_env_file <- file.path("..", "docker", "env.example")
-writeLines(readLines(docker_env_file))
-```
-
-You can also extend the `docker-compose.yml` file with a `docker-compose.override.yml` file (in the same directory), allowing you to add more services, or for example to change where the volumes are stored (see [official documentation](https://docs.docker.com/compose/extends/)).
For example the following will change the volume for postgres to be stored in your home directory: - -``` -version: "3" - -volumes: - postgres: - driver_opts: - type: none - device: ${HOME}/postgres - o: bind -``` - -### Top-level structure {#pecan-dc-structure} - -The root of the `docker-compose.yml` file contains three sections: - -- `services` -- This is a list of services provided by the application, with each service corresponding to a container. -When communicating with each other internally, the hostnames of containers correspond to their names in this section. -For instance, regardless of the "project" name passed to `docker-compose up`, the hostname for connecting to the PostgreSQL database of any given container is _always_ going to be `postgres` (e.g. you should be able to access the PostgreSQL database by calling the following from inside the container: `psql -d bety -U bety -h postgres`). -The services comprising the PEcAn application are described below. - -- `networks` -- This is a list of networks used by the application. -Containers can only communicate with each other (via ports and hostnames) if they are on the same Docker network, and containers on different networks can only communicate through ports exposed by the host machine. -We just provide the network name (`pecan`) and resort to Docker's default network configuration. -Note that the services we want connected to this network include a `networks: ... - pecan` tag. -For more details on Docker networks, see the [official documentation](https://docs.docker.com/network/). - -- `volumes` -- Similarly to `networks`, this just contains a list of volume names we want. -Briefly, in Docker, volumes are directories containing files that are meant to be shared across containers. -Each volume corresponds to a directory, which can be mounted at a specific location by different containers. -For example, syntax like `volumes: ... - pecan:/data` in a service definition means to mount the `pecan` "volume" (including its contents) in the `/data` directory of that container. -Volumes also allow data to persist on containers between restarts, as normally, any data created by a container during its execution is lost when the container is re-launched. -For example, using a volume for the database allows data to be saved between different runs of the database container. -Without volumes, we would start with a blank database every time we restart the containers. -For more details on Docker volumes, see the [official documentation](https://docs.docker.com/storage/volumes/). -Here, we define three volumes: - - - `postgres` -- This contains the data files underlying the PEcAn PostgreSQL database (BETY). - Notice that it is mounted by the `postgres` container to `/var/lib/postgresql/data`. - This is the data that we pre-populate when we run the Docker commands to [initialize the PEcAn database](#pecan-docker-quickstart-init). - Note that these are the values stored _directly in the PostgreSQL database_. - The default files to which the database points (i.e. `dbfiles`) are stored in the `pecan` volume, described below. - - - `rabbitmq` -- This volume contains persistent data for RabbitMQ. - It is only used by the `rabbitmq` service. - - - `pecan` -- This volume contains PEcAn's `dbfiles`, which include downloaded and converted model inputs, processed configuration files, and outputs. - It is used by almost all of the services in the PEcAn stack, and is typically mounted to `/data`. 
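-
-Once the stack is running under the project name `pecan`, you can inspect these networks and volumes from the command line (a quick check; the `pecan_` prefix comes from the project name):
-
-```bash
-docker network ls --filter name=pecan
-docker volume ls --filter name=pecan
-# show where a volume's data actually lives on the host
-docker volume inspect pecan_pecan
-```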
-
-### `traefik` {#pecan-dc-traefik}
-
-[Traefik](https://traefik.io/) manages communication among the different PEcAn services and between PEcAn and the web.
-Among other things, `traefik` facilitates the setup of web access to each PEcAn service via common and easy-to-remember URLs.
-For instance, the following lines in the `web` service configure access to the PEcAn web interface via the URL http://localhost:8000/pecan/ :
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services$web["labels"], stdout())
-```
-
-(Further details in the works...)
-
-The traefik service configuration looks like this:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["traefik"], stdout())
-```
-
-
-### `portainer` {#pecan-dc-portainer}
-
-[portainer](https://portainer.io/) is a lightweight management UI that allows you to manage the docker host (or swarm). You can use this service to monitor the different containers, see the log files, and start and stop containers.
-
-The portainer service configuration looks like this:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["portainer"], stdout())
-```
-
-Portainer is accessible by browsing to `localhost:8000/portainer/`. You can either set the password in the `.env` file (for an example see env.example) or go to the portainer URL in your web browser; the first time you visit, it will ask you to set a password.
-
-### `minio` {#pecan-dc-minio}
-
-[Minio](https://github.com/minio/minio) is a service that provides access to a folder on disk through a variety of protocols, including S3 buckets and web-based access.
-We mainly use Minio to facilitate access to PEcAn data using a web browser without the need for CLI tools.
-
-Our current configuration is as follows:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["minio"], stdout())
-```
-
-The Minio interface is accessible by browsing to `localhost:8000/minio/`.
-From there, you can browse directories and download files.
-You can also upload files by clicking the red "+" in the bottom-right corner.
-
-Note that it is currently impossible to create or upload directories using the Minio interface (except in the `/data` root directory -- those folders are called "buckets" in Minio).
-Therefore, the recommended way to perform any file management tasks other than individual file uploads is through the command line, e.g.
-
-```bash
-docker run -it --rm --volume pecan_pecan:/data --volume /path/to/local/directory:/localdir ubuntu
-
-# Now, you can move files between `/data` and `/localdir`, create new directories, etc.
-```
-
-### `thredds` {#pecan-dc-thredds}
-
-This service allows PEcAn model outputs to be accessible via the [THREDDS data server (TDS)](https://www.unidata.ucar.edu/software/thredds/current/tds/).
-When the PEcAn stack is running, the catalog can be explored in a web browser at http://localhost:8000/thredds/catalog.html.
-Specific output files can also be accessed from the command line via commands like the following:
-
-```{r, eval = FALSE}
-nc <- ncdf4::nc_open("http://localhost:8000/thredds/dodsC/outputs/PEcAn_<workflow_id>/out/<run_id>/<year>.nc")
-```
-
-Note that everything after `outputs/` exactly matches the directory structure of the `workflows` directory.
-
-Which files are served, which subsetting services are available, and other aspects of the data server's behavior are configured in the `docker/thredds_catalog.xml` file.
-Specifically, this XML tells the data server to use the `datasetScan` tool to serve all files within the `/data/workflows` directory, with the additional `filter` that only files ending in `.nc` are served. -For additional information about the syntax of this file, see the extensive [THREDDS documentation](https://www.unidata.ucar.edu/software/thredds/current/tds/reference/index.html). - -Our current configuration is as follows: - -```{r, echo = FALSE, comment = ''} -yaml::write_yaml(dc_yaml$services["thredds"], stdout()) -``` - - -### `postgres` {#pecan-dc-postgres} - -This service provides a working PostGIS database. -Our configuration is fairly straightforward: - -```{r, echo = FALSE, comment = ''} -yaml::write_yaml(dc_yaml$services["postgres"], stdout()) -``` - -Some additional details about our configuration: - -- `image` -- This pulls a container with PostgreSQL + PostGIS pre-installed. -Note that by default, we use PostgreSQL version 9.5. -To experiment with other versions, you can change `9.5` accordingly. - -- `networks` -- This allows PostgreSQL to communicate with other containers on the `pecan` network. -As mentioned above, the hostname of this service is just its name, i.e. `postgres`, so to connect to the database from inside a running container, use a command like the following: `psql -d bety -U bety -h postgres` - -- `volumes` -- Note that the PostgreSQL data files (which store the values in the SQL database) are stored on a _volume_ called `postgres` (which is _not_ the same as the `postgres` _service_, even though they share the same name). - -### `rabbitmq` {#pecan-dc-rabbitmq} - -[RabbitMQ](https://www.rabbitmq.com/) is a message broker service. -In PEcAn, RabbitMQ functions as a task manager and scheduler, coordinating the execution of different tasks (such as running models and analyzing results) associated with the PEcAn workflow. - -Our configuration is as follows: - -```{r, echo = FALSE, comment = ''} -yaml::write_yaml(dc_yaml$services["rabbitmq"], stdout()) -``` - -Note that the `traefik.frontend.rule` indicates that browsing to http://localhost:8000/rabbitmq/ leads to the RabbitMQ management console. - -By default, the RabbitMQ management console has username/password `guest/guest`, which is highly insecure. -For production instances of PEcAn, we highly recommend changing these credentials to something more secure, and removing access to the RabbitMQ management console via Traefik. - -### `bety` {#pecan-dc-bety} - -This service operates the BETY web interface, which is effectively a web-based front-end to the PostgreSQL database. -Unlike the `postgres` service, which contains all the data needed to run PEcAn models, this service is not essential to the PEcAn workflow. -However, note that certain features of the PEcAn web interface do link to the BETY web interface and will not work if this container is not running. - -Our configuration is as follows: - -```{r, echo = FALSE, comment = ''} -yaml::write_yaml(dc_yaml$services["bety"], stdout()) -``` - -The BETY container Dockerfile is located in the root directory of the [BETY GitHub repository](https://github.com/PecanProject/bety) ([direct link](https://github.com/PecanProject/bety/blob/master/Dockerfile)). - -### `docs` {#pecan-dc-docs} - -This service will show the documentation for the version of PEcAn running as well as a homepage with links to all relevant endpoints. You can access this at http://localhost:8000/. You can find the documentation for PEcAn at http://localhost:8000/docs/pecan/. 
-
-Our current configuration is as follows:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["docs"], stdout())
-```
-
-### `web` {#pecan-dc-web}
-
-This service runs the PEcAn web interface.
-It is effectively a thin wrapper around a standard Apache web server container from Docker Hub that installs some additional dependencies and copies over the necessary files from the PEcAn source code.
-
-Our configuration is as follows:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["web"], stdout())
-```
-
-Its Dockerfile ships with the PEcAn source code, in [`docker/web/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/web/Dockerfile).
-
-In terms of [actively developing PEcAn using Docker](#pecan-docker-develop), this is the service to modify when making changes to the web interface (i.e. PHP, HTML, and JavaScript code located in the PEcAn `web` directory).
-
-### `executor` {#pecan-dc-executor}
-
-This service is in charge of running the R code underlying the core PEcAn workflow.
-However, it is _not_ in charge of executing the models themselves -- model binaries are located on their [own dedicated Docker containers](#pecan-dc-models), and model execution is coordinated by RabbitMQ.
-
-Our configuration is as follows:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["executor"], stdout())
-```
-
-Its Dockerfile ships with the PEcAn source code, in [`docker/executor/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/executor/Dockerfile).
-Its image is built on top of the `pecan/base` image ([`docker/base/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/base/Dockerfile)), which contains the actual PEcAn source.
-To facilitate caching, the `pecan/base` image is itself built on top of the `pecan/depends` image ([`docker/depends/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/docker/depends/Dockerfile)), a large image that contains an R installation and PEcAn's many system and R package dependencies (which usually take ~30 minutes or longer to install from scratch).
-
-In terms of [actively developing PEcAn using Docker](#pecan-docker-develop), this is the service to modify when making changes to the PEcAn R source code.
-Note that, unlike changes to the `web` image's PHP code, changes to the R source code do not immediately propagate to the PEcAn container; instead, you have to re-compile the code by running `make` inside the container.
-
-### `monitor` {#pecan-dc-monitor}
-
-This service shows all models that are currently running at http://localhost:8000/monitor/. The list returned is JSON and shows all models (grouped by type and version) that are currently running or were seen in the past. It also contains a list of all currently active containers, as well as how many jobs are waiting to be processed.
-
-This service is also responsible for registering any new models with PEcAn so users can select and execute them from the web interface.
-
-Our current configuration is as follows:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["monitor"], stdout())
-```
-
-### Model-specific containers {#pecan-dc-models}
-
-Additional models are added as additional services.
-In general, their configuration should be similar to the following configuration for SIPNET, which ships with PEcAn:
-
-```{r, echo = FALSE, comment = ''}
-yaml::write_yaml(dc_yaml$services["sipnet"], stdout())
-```
-
-The PEcAn source contains Dockerfiles for ED2 ([`models/ed/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/models/ed/Dockerfile)) and SIPNET ([`models/sipnet/Dockerfile`](https://github.com/PecanProject/pecan/blob/develop/models/sipnet/Dockerfile)) that can serve as references.
-For additional tips on constructing a Dockerfile for your model, see [Dockerfiles for Models](#model-docker).
-
-
-
-## Models using Docker {#model-docker}
-
-This section discusses how to add new models to the PEcAn Docker stack. Adding a new model to PEcAn when using Docker is as simple as starting a new container. The model will come online and let the PEcAn framework know there is a new model available; there is no need to go through the process of registering this model with PEcAn. Users will be able to select this new model from the web interface and run with this model selected.
-
-For this process to work, a docker image of the model will need to be created, as well as a small JSON file that is used to announce this new model. A separate service in PEcAn ([`monitor`](#pecan-dc-monitor)) will use this JSON file to keep track of all models available as well as register these models with PEcAn.
-
-- [Model information](#model-docker-json-file)
-- [Model build](#model-docker-Dockerfile)
-- [Common problems](#common-docker-problems)
-
-
-### Model information {#model-docker-json-file}
-
-Each model will have a small JSON file called model_info.json that is used to describe the model and is used by the monitor service to register the model with PEcAn. This file will contain information about the model that is sent as part of the heartbeat of the container to the monitor service. Below is an example of this file for the ED model. The required fields are `name`, `type`, `version` and `binary`. The fields `type` and `version` are used by PEcAn to send the messages to RabbitMQ. There is no need to specify the queue name explicitly; the queue name will be created by combining these two fields with an underscore. The field `binary` is used to point to the actual binary in the docker container.
-
-There are two special values that can be used: `@VERSION@`, which will be replaced by the version that is passed in when building the container, and `@BINARY@`, which will be replaced by the binary when building the docker image.
-
-```json
-{
-  "name": "ED2.2",
-  "type": "ED2",
-  "version": "@VERSION@",
-  "binary": "@BINARY@",
-  "description": "The Ecosystem Demography Biosphere Model (ED2) is an integrated terrestrial biosphere model incorporating hydrology, land-surface biophysics, vegetation dynamics, and soil carbon and nitrogen biogeochemistry",
-  "author": "Mike Dietze",
-  "contributors": ["David LeBauer", "Xiaohui Feng", "Dan Wang", "Carl Davidson", "Rob Kooper", "Shawn Serbin", "Alexey Shiklomanov"],
-  "links": {
-    "source": "https://github.com/EDmodel/ED2",
-    "issues": "https://github.com/EDmodel/ED2/issues"
-  },
-  "inputs": {},
-  "citation": ["Medvigy D., Wofsy S. C., Munger J. W., Hollinger D. Y., Moorcroft P. R. 2009. Mechanistic scaling of ecosystem function and dynamics in space and time: Ecosystem Demography model version 2. J. Geophys. Res. 114 (doi:10.1029/2008JG000812)"]
-}
-```
-
-Other fields that are recommended, but currently not used yet, are:
-
-- `description` : a longer description of the model.
-- `creator` : the contact person for this docker image.
-- `contributor` : other people that have contributed to this docker image.
-- `links` : additional links to help people when using this docker image; for example, values that can be used are `source` to link to the source code, `issues` to link to the issue tracking system, and `documentation` to link to model specific documentation.
-- `citation` : how the model should be cited in publications.
-
-
-### Model build {#model-docker-Dockerfile}
-
-
-In general we try to minimize the size of the images. To be able to do this we split the building of the model images into two pieces (or leverage an image that exists from the original model developers). If you look at the example Dockerfile you will see that there are two sections: the first section will build the model binary; the second section will build the actual PEcAn model image, which copies the binary from the first section.
-
-This is an example of how the ED2 model is built. This will install all the packages needed to build the ED2 model, get the latest version from GitHub, and build the model.
-
-The second section will create the actual model runner. This will leverage the PEcAn model image that has PEcAn already installed as well as the python code to listen for messages and run the actual model code. This will install some additional packages needed by the model binary (more about that below), copy the model_info.json file, and change the `@VERSION@` and `@BINARY@` placeholders.
-
-It is important that the values for `type` and `version` are set correctly. The PEcAn code will use these to register the model with the BETY database, which is then used by PEcAn to send out a message to a specific worker queue; if you do not set these variables correctly, your model executor will pick up messages for the wrong model.
-
-To build the docker image, we use a Dockerfile (see example below) and run the following command. This command expects the Dockerfile to live in the model-specific folder and is executed in the root pecan folder. It will copy the content of the pecan folder and make it available to the build process (in this example we do not need any additional files).
-
-Since we can have multiple different versions of a model available for PEcAn, we are using the naming schema `pecan/model-<type>-<version>:<pecan version>`. Building the image produces output similar to the following (truncated):
-
-```
-...
- ---> 7f23c6302130
-Step 6/9 : FROM pecan/executor:latest
- ---> f19d81b739f5
-... MORE OUTPUT ...
-Step 9/9 : COPY --from=model-binary /src/ED2/ED/build/ed_2.1-opt /usr/local/bin/ed2.${MODEL_VERSION}
- ---> 07ac841be457
-Successfully built 07ac841be457
-Successfully tagged pecan/pecan-ed2:latest
-```
-
-At this point we have created a docker image with the binary and all PEcAn code that is needed to run the model. Some models (especially those built as native code) might be missing additional packages that need to be installed in the docker image. To check whether all libraries needed by the binary are installed, run `ldd` inside the image:
-
-```bash
-> docker run -ti --rm pecan/pecan-ed2 /bin/bash
-root@8a95ee8b6b47:/work# ldd /usr/local/bin/ed2.git | grep "not found"
-        libmpi_usempif08.so.40 => not found
-        libmpi_usempi_ignore_tkr.so.40 => not found
-        libmpi_mpifh.so.40 => not found
-        libmpi.so.40 => not found
-        libgfortran.so.5 => not found
-```
-
-Start the build container again (its ID is the number right before the line `FROM pecan/executor:latest`; `7f23c6302130` in the example above), and find which package provides each of the missing libraries listed above (for example libmpi_usempif08.so.40):
-
-```bash
-> docker run --rm -ti 7f23c6302130
-root@e716c63c031f:/src# dpkg -S libmpi_usempif08.so.40
-libopenmpi3:amd64: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_usempif08.so.40.10.1
-libopenmpi3:amd64: /usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.40.10.1
-libopenmpi3:amd64: /usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.40
-```
-
-This shows that the package to install is libopenmpi3. Do this for all missing libraries, modify the Dockerfile, and rebuild. The next time you run the ldd command, no more missing libraries should be listed.
-
-
-
-## Building and modifying images {#docker-build-images}
-
-The only other section on this page is:
-[Local development and testing with Docker](#docker-local-devel)
-
-
-For general use, it is sufficient to use the pre-built PEcAn images hosted on [Docker Hub](https://hub.docker.com/r/pecan/) (see [Docker quickstart](#docker-quickstart)).
-However, there are cases where it makes sense to re-build the Docker images locally.
-The following is a list of PEcAn-specific images and reasons why you would want to rebuild them locally:
-
-- `pecan/depends` -- Rebuild if:
-  - You modify the `docker/depends/Dockerfile`
-  - You introduce new system dependencies (i.e. things that need to be installed with `apt-get`)
-  - You introduce new R package dependencies, and you want those R package installations to be cached during future builds. For packages with fast build times, it may be fine to let them be installed as part of PEcAn's standard build process (i.e. `make`).
-- `pecan/base` -- Rebuild if:
-  - You built a new version of `pecan/depends` (on which `pecan/base` depends)
-  - You modify the `docker/base/Dockerfile`
-  - You made changes to the PEcAn R package source code, the Makefile, or `web/workflow.R`.
-  - NOTE that changes to the web interface code affect `pecan/web`, _not_ `pecan/base`
-- `pecan/executor` -- Rebuild if:
-  - You built a new version of `pecan/base` (on which `pecan/executor` depends) and/or `pecan/depends` (on which `pecan/base` depends)
-  - You modified the `docker/executor/Dockerfile`
-  - You modified the RabbitMQ Python scripts (e.g. `docker/receiver.py`, `docker/sender.py`)
-- `pecan/web` -- Rebuild if you modified any of the following:
-  - `docker/web/Dockerfile`
-  - The PHP/HTML/JavaScript code for the PEcAn web interface in `web/` (_except_ `web/workflow.R` -- that goes in `pecan/base`)
-  - `docker/config.docker.php` (the `config.php` file for Docker web instances)
-  - `documentation/index_vm.html` (the documentation HTML website)
-  - NOTE: Because changes to this code are applied instantly (i.e. do not require compilation or installation), a more effective way to do local development may be to mount the `web/` or other relevant folders as a volume onto the `pecan/web` container.
-
-The easiest way to quickly re-build all of the images is using the `docker.sh` script in the PEcAn source code root directory.
-This script will build all of the docker images locally on your machine, and tag them as `latest`.
-This will not build the `pecan/depends` image by default because that takes considerably longer.
-However, you can force the script to build `pecan/depends` as well by setting the `DEPEND` environment variable to 1 (i.e. `DEPEND=1 ./docker.sh`).
-The following instructions provide details on how to build each image individually.
-
-To build an image locally, use the `docker build` command as described below.
-For more details, see `docker build --help` or the [online Docker build documentation](https://docs.docker.com/engine/reference/commandline/build/).
-
-First, in a terminal window, navigate (`cd`) into the PEcAn source code root directory.
-From there, the general syntax for building an image looks like the following:
-
-```bash
-docker build -t pecan/<image name>:<image version> -f docker/base/Dockerfile .
-```
-
-For instance, to build a local version of the `pecan/depends:latest` image, you would run:
-
-```bash
-docker build -t pecan/depends:latest -f docker/depends/Dockerfile .
-```
-
-The breakdown of this command is as follows:
-
-- `docker build` -- This is the core command.
-The standard syntax is `docker build [OPTIONS] <context>`, where `<context>` refers to the directory to be used as the "build context".
-The "build context" is the working directory assumed by the Dockerfiles.
-In PEcAn, this is always the PEcAn source code root directory, which allows Dockerfiles to use instructions such as `COPY web/workflow.R /work/`.
-In this example, the `<context>` is set to the current working directory, i.e. `.` because we are already in the PEcAn root directory.
-If you were located in a different directory, you would have to provide a path to the PEcAn source code root directory.
-Also, by default, `docker build` will look for a Dockerfile located at `<context>/Dockerfile`, but this is modified by the `-f` option described below.
-
-- `-t pecan/depends:latest` -- The `-t/--tag` option specifies how the image will be labeled.
-By default, Docker only defines unique image IDs, which are hexadecimal strings that are unintuitive and hard to remember.
-Tags are useful for referring to specific images in a human-readable way.
-Note that the same unique image can have multiple tags associated with it, so it is possible for, e.g. `pecan/depends:latest`, `pecan/depends:custom`, and even `mypecan/somethingelse:20.0` to refer to the same exact image.
-To see a table of all local images, including their tags and IDs, run `docker image ls`.
-
-  **NOTE**: PEcAn's `docker-compose.yml` can be configured via the `PECAN` environment variable to point at different versions of PEcAn images.
-  By default, it points to the `:latest` versions of all images.
-  However, if you wanted to, for instance, build `:local` images corresponding to your local source code and then run that version of PEcAn, you would run:
-
-  ```
-  PECAN=local docker-compose -p pecan up -d
-  ```
-
-  This is an effective way to do local development and testing of different PEcAn versions, as described [below](#docker-local-devel).
-
-- `-f docker/depends/Dockerfile` -- The `-f/--file` option is used to provide an alternative location and file name for the Dockerfile.
-
-### Local development and testing with Docker {#docker-local-devel}
-
-The following is an example of one possible workflow for developing and testing PEcAn using local Docker images.
-The basic idea is to mount a local version of the PEcAn source code onto a running `pecan/executor` image, and then send a special "rebuild" RabbitMQ message to the container to trigger the rebuild whenever you make changes.
-NOTE: All commands assume you are working from the PEcAn source code root directory.
-
-1. In the PEcAn source code directory, create a `docker-compose.override.yml` file with the following contents:
-
-    ```yml
-    version: "3"
-    services:
-      executor:
-        volumes:
-          - .:/pecan
-    ```
-
-    This will mount the current directory `.` to the `/pecan` directory in the `executor` container.
-    The special `docker-compose.override.yml` file is read automatically by `docker-compose` and overrides or extends any instructions set in the original `docker-compose.yml` file.
-    It provides a convenient way to host server-specific configurations without having to modify the project-wide (and version-controlled) default configuration.
-    For more details, see the [Docker Compose documentation](https://docs.docker.com/compose/extends/).
-
-2. Update your PEcAn Docker stack with `docker-compose up -d`.
-   If the stack is already running, this should only restart your `executor` instance while leaving the remaining containers running.
-
-3. To update to the latest local code, run `./scripts/docker_rebuild.sh`.
-   Under the hood, this uses `curl` to post a RabbitMQ message to a running Docker instance.
-   By default, the script assumes that the username and password are both `guest` and that the RabbitMQ URL is `http://localhost:8000/rabbitmq`.
-   All of these can be customized by setting the environment variables `RABBITMQ_USER`, `RABBITMQ_PASSWORD`, and `RABBITMQ_URL`, respectively (or running the script prefixed with those variables, e.g. `RABBITMQ_USER=carya RABBITMQ_PASSWORD=illinois ./scripts/docker_rebuild.sh`).
-   This step can be repeated whenever you want to trigger a rebuild of the local code.
-
-NOTE: The updates with this workflow are _specific to the running container session_; restarting the `executor` container will revert to the previous versions of the installed packages.
-To make persistent changes, you should re-build the `pecan/base` and `pecan/executor` containers against the current version of the source code.
-
-NOTE: The mounted PEcAn source code directory includes _everything_ in your local source directory, _including installation artifacts used by make_.
-This can lead to two common issues:
-- Any previous make cache files (stuff in the `.install`, `.docs`, etc. directories) persist across container instances, even though the installed packages may not. To ensure a complete build, it's a good idea to run `make clean` on the host machine to remove these artifacts.
-- Similarly, any installation artifacts from local builds will be carried over to the build. In particular, be wary of packages with compiled code, such as `modules/rtm` (`PEcAnRTM`) -- the compiled `.o`, `.so`, `.mod`, etc. files from compilation of such packages will carry over into the build, which can cause conflicts if the package was also built locally.
-
-The `docker-compose.override.yml` file is useful for some other local modifications.
-For instance, the following adds a custom ED2 "develop" model container.
-
-```yml
-services:
-  # ...
-  ed2devel:
-    image: pecan/model-ed2-develop:latest
-    build:
-      context: ../ED2 # Or wherever ED2 source code is found
-    networks:
-      - pecan
-    depends_on:
-      - rabbitmq
-    volumes:
-      - pecan:/data
-    restart: unless-stopped
-```
-
-Similarly, this snippet modifies the `pecan` network to use a custom IP subnet mask.
-This is required on the PNNL cluster because its servers' IP addresses often clash with Docker's default IP mask.
-
-```yml
-networks:
-  pecan:
-    ipam:
-      config:
-        - subnet: 10.17.1.0/24
-```
-
-
-
-## Troubleshooting Docker {#docker-troubleshooting}
-
-### "Package not available" while building images
-
-**PROBLEM**: Packages fail to install while building `pecan/depends` and/or `pecan/base` with an error like the following:
-
-```
-Installing package into ‘/usr/local/lib/R/site-library’
-(as ‘lib’ is unspecified)
-Warning: unable to access index for repository https://mran.microsoft.com/snapshot/2018-09-01/src/contrib:
-  cannot open URL 'https://mran.microsoft.com/snapshot/2018-09-01/src/contrib/PACKAGES'
-Warning message:
-package ‘’ is not available (for R version 3.5.1)
-```
-
-**CAUSE**: This can sometimes happen if there are problems with Microsoft's CRAN snapshots, which are the default repository for the `rocker/tidyverse` containers.
-See GitHub issues [rocker-org/rocker-versioned#102](https://github.com/rocker-org/rocker-versioned/issues/102) and [#58](https://github.com/rocker-org/rocker-versioned/issues/58).
-
-**SOLUTION**: Add the following line to the `depends` and/or `base` Dockerfiles _before_ (i.e. above) any commands that install R packages (e.g. `Rscript -e "install.packages(...)"`):
-
-```
-RUN echo "options(repos = c(CRAN = 'https://cran.rstudio.org'))" >> /usr/local/lib/R/etc/Rprofile.site
-```
-
-This will set the default repository to the more reliable (albeit more up-to-date; beware of breaking package changes!) RStudio CRAN mirror.
-Then, build the image as usual.
-
-
-
-## Migrating PEcAn from VM to Docker {#docker-migrate}
-
-This document assumes you have read through the [Introduction to Docker](#docker-intro) as well as [Docker quickstart](#docker-quickstart) and have docker running on the VM.
-
-This document will replace each of the components step by step with the appropriate docker images. At the end of this document you should be able to use the docker-compose command to bring up the full docker stack as if you had started with this originally.
-
-### Running BETY as a docker container
-
-This will replace the BETY application running on the machine with a docker image. It assumes you still have the database running on the local machine; the only thing we replace is the BETY application.
-
-If you are running systemd (Ubuntu 16.04 or CentOS 7) you can copy the following file to /etc/systemd/system/bety.service (replace LOCAL_SERVER=99 with your actual server ID). If you have postgres running on another server, replace 127.0.0.1 with the actual IP address of the postgres server.
-
-```
-[Unit]
-Description=BETY container
-After=docker.service
-
-[Service]
-Restart=always
-ExecStart=/usr/bin/docker run -t --rm --name bety --add-host=postgres:127.0.0.1 --network=host --env RAILS_RELATIVE_URL_ROOT=/bety --env LOCAL_SERVER=99 pecan/bety
-ExecStop=/usr/bin/docker stop -t 2 bety
-
-[Install]
-WantedBy=multi-user.target
-```
-
-At this point we can enable the bety service (this only needs to be done once). First we need to tell systemd a new service is available using `systemctl daemon-reload`. Next we enable the BETY service so it will restart automatically when the machine reboots, using `systemctl enable bety`. Finally we can start the BETY service using `systemctl start bety`. At this point BETY is running as a docker container on port 8000. You can see the log messages using `journalctl -u bety`.
-Next we need to modify the apache configuration files. The file `/etc/apache2/conf-enabled/bety.conf` will be replaced with the following content:
-
-```
-ProxyPass        /bety/ http://localhost:8000/bety/
-ProxyPassReverse /bety/ http://localhost:8000/bety/
-RedirectMatch permanent ^/bety$ /bety/
-```
-
-Once this is modified we can restart apache using `systemctl restart apache2`. At this point BETY is running in a container and is accessible through the webserver at http://server/bety/.
-
-
-To upgrade to a new version of BETY you can now use the docker commands. You can use the following commands to stop BETY, pull the latest image down, migrate the database (you made a backup, correct?) and start BETY again.
-
-```
-systemctl stop bety
-docker pull pecan/bety:latest
-docker run -ti --rm --add-host=postgres:127.0.0.1 --network=host --env LOCAL_SERVER=99 pecan/bety migrate
-systemctl start bety
-```
-
-Once you are satisfied with the migration of BETY you can remove the bety folder as well as any ruby binaries you have installed.
-
-
-
-## The PEcAn Docker API {#pecan-api}
-
-If you have a running instance of Dockerized PEcAn (or another setup where PEcAn workflows are submitted via [RabbitMQ](#rabbitmq)),
-you have the option of running and managing PEcAn workflows using the `pecanapi` package.
-
-For more details, see the `pecanapi` [package vignette](https://github.com/PecanProject/pecan/blob/develop/api/vignettes/pecanapi.Rmd) and function-level documentation.
-What follows is a lightning introduction.
-
-#### Installation
-
-The package can be installed directly from GitHub via `devtools::install_github`:
-
-```r
-devtools::install_github("pecanproject/pecan/api@develop")
-```
-
-#### Creating and submitting a workflow
-
-With `pecanapi`, creating a workflow, submitting it to RabbitMQ, monitoring its progress, and processing its output can all be accomplished via an R script.
-
-Start by loading the package (and the `magrittr` package, for the `%>%` pipe operator).
-
-```r
-library(pecanapi)
-library(magrittr)
-```
-
-Set your PEcAn database user ID, and create a database connection object, which will be used for database operations throughout the workflow.
-
-```r
-options(pecanapi.user_id = 99000000002)
-con <- DBI::dbConnect(
-  RPostgres::Postgres(),
-  user = "bety",
-  password = "bety",
-  host = "localhost",
-  port = 5432
-)
-```
-
-Find model and site IDs for the site and model you want to run.
-
-```r
-model_id <- get_model_id(con, "SIPNET", "136")
-all_umbs <- search_sites(con, "umbs%disturbance")
-site_id <- subset(all_umbs, !is.na(mat))[["id"]]
-```
-
-Insert a new workflow into the PEcAn database, and extract its ID.
-
-```r
-workflow <- insert_new_workflow(con, site_id, model_id,
-                                start_date = "2004-01-01",
-                                end_date = "2004-12-31")
-workflow_id <- workflow[["id"]]
-```
-
-Pull all of this information together into a settings list object.
-
-```r
-settings <- list() %>%
-  add_workflow(workflow) %>%
-  add_database() %>%
-  add_pft("temperate.deciduous") %>%
-  add_rabbitmq(con = con) %>%
-  modifyList(list(
-    meta.analysis = list(iter = 3000, random.effects = list(on = FALSE, use_ghs = TRUE)),
-    run = list(inputs = list(met = list(source = "CRUNCEP", output = "SIPNET", method = "ncss"))),
-    ensemble = list(size = 1, variable = "NPP")
-  ))
-```
-
-Submit the workflow via RabbitMQ, and monitor its progress in the R process.
-
-```r
-submit_workflow(settings)
-watch_workflow(workflow_id)
-```
-
-Use THREDDS to access and analyze the output.
-
-```r
-sipnet_out <- ncdf4::nc_open(run_dap(workflow_id, "2004.nc"))
-gpp <- ncdf4::ncvar_get(sipnet_out, "GPP")
-time <- ncdf4::ncvar_get(sipnet_out, "time")
-ncdf4::nc_close(sipnet_out)
-plot(time, gpp, type = "l")
-```
-
-
-
-## RabbitMQ {#rabbitmq}
-
-This section provides additional details about how PEcAn uses RabbitMQ to manage communication between its Docker containers.
-
-In PEcAn, we use the Python [`pika`](http://www.rabbitmq.com/tutorials/tutorial-one-python.html) client to post and retrieve messages from RabbitMQ.
-As such, every Docker container that communicates with RabbitMQ contains two Python scripts: `sender.py` and `receiver.py`.
-Both are located in the `docker` directory in the PEcAn source code root.
-
-### Producer -- `sender.py` {#rabbitmq-basics-sender}
-
-The `sender.py` script is in charge of posting messages to RabbitMQ.
-In the RabbitMQ documentation, it is known as a "producer".
-It runs once for every message posted to RabbitMQ, and then immediately exits (unlike the `receiver.py`, which runs continuously -- see [below](#rabbitmq-basics-receiver)).
-
-Its usage is as follows:
-
-```bash
-python3 sender.py <uri> <queue> <message>
-```
-
-The arguments are:
-
-- `<uri>` -- The unique identifier of the RabbitMQ instance, similar to a URL.
-The format is `amqp://username:password@host/vhost`.
-By default, this is `amqp://guest:guest@rabbitmq/%2F` (the `%2F` here is the hexadecimal encoding for the `/` character).
-
-- `<queue>` -- The name of the queue on which to post the message.
-
-- `<message>` -- The contents of the message to post, in JSON format.
-A typical message posted by PEcAn looks like the following:
-
-  ```json
-  { "folder" : "/path/to/PEcAn_WORKFLOWID", "workflow" : "WORKFLOWID" }
-  ```
-
-The `PEcAn.remote::start_rabbitmq` function is a wrapper for this script that provides an easy way to post a `folder` message to RabbitMQ from R.
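-For example, a minimal sketch of posting a run message from R might look like the following (the argument names shown here are assumptions based on the `sender.py` usage above; check the `PEcAn.remote` documentation for the authoritative signature):
-
-```r
-# Post a message telling the SIPNET worker to process the given run folder.
-# The URI, queue name, and folder are the same values sender.py expects.
-PEcAn.remote::start_rabbitmq(
-  folder         = "/data/workflows/PEcAn_WORKFLOWID/run/RUNID",
-  rabbitmq_uri   = "amqp://guest:guest@rabbitmq/%2F",
-  rabbitmq_queue = "SIPNET_136"
-)
-```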
-
-### Consumer -- `receiver.py` {#rabbitmq-basics-receiver}
-
-Unlike `sender.py`, `receiver.py` runs like a daemon, constantly listening for messages.
-In the RabbitMQ documentation, it is known as a "consumer".
-In PEcAn, you can tell that it is ready to receive messages if the corresponding logs (e.g. `docker-compose logs executor`) show the following message:
-
-```
-[*] Waiting for messages. To exit press CTRL+C.
-```
-
-Our `receiver` is configured by three environment variables:
-
-- `RABBITMQ_URI` -- This defines the URI where RabbitMQ is running.
-See the corresponding argument in the [producer](#rabbitmq-basics-sender).
-
-- `RABBITMQ_QUEUE` -- This is the name of the queue on which the consumer will listen for messages, just as in the [producer](#rabbitmq-basics-sender).
-
-- `APPLICATION` -- This specifies the name (including the path) of the default executable to run when receiving a message.
-  At the moment, it should be an executable that runs in the directory specified by the message's `folder` variable.
-  In the case of PEcAn models, this is usually `./job.sh`, such that the `folder` corresponds to the `run` directory associated with a particular `runID` (i.e. where the `job.sh` is located).
-  For the PEcAn workflow itself, this is set to `R CMD BATCH workflow.R`, such that the `folder` is the root directory of the workflow (in the `executor` Docker container, something like `/data/workflows/PEcAn_<workflowID>`).
-  This default executable is _overridden_ if the message contains a `custom_application` key.
-  If included, the string specified by the `custom_application` key will be run as a command exactly as is on the container, from the directory specified by `folder`.
-  For instance, in the example below, the container will print "Hello there!" instead of running its default application.
-
-  ```json
-  {"custom_application": "echo 'Hello there!'", "folder": "/path/to/my/dir"}
-  ```
-
-  NOTE that in RabbitMQ messages, the `folder` key is always required.
-
-### RabbitMQ and the PEcAn web interface {#rabbitmq-web}
-
-RabbitMQ is configured by the following variables in `config.php`:
-
-- `$rabbitmq_host` -- The RabbitMQ server hostname (default: `rabbitmq`, because that is the name of the `rabbitmq` service in `docker-compose.yml`)
-- `$rabbitmq_port` -- The port on which RabbitMQ listens for messages (default: `5672`)
-- `$rabbitmq_vhost` -- The path of the RabbitMQ [Virtual Host](https://www.rabbitmq.com/vhosts.html) (default: `/`)
-- `$rabbitmq_queue` -- The name of the RabbitMQ queue associated with the PEcAn workflow (default: `pecan`)
-- `$rabbitmq_username` -- The RabbitMQ username (default: `guest`)
-- `$rabbitmq_password` -- The RabbitMQ password (default: `guest`)
-
-In addition, for running models via RabbitMQ, you will also need to add an entry like the following to the `config.php` `$hostlist`:
-
-```php
-$hostlist=array($fqdn => array("rabbitmq" => "amqp://guest:guest@rabbitmq/%2F"), ...)
-```
-
-This will map the hostname of the current machine (defined by the `$fqdn` variable earlier in the `config.php` file) to an array with one entry, whose key is `rabbitmq` and whose value is the RabbitMQ URI (`amqp://...`).
-
-These values are converted into the appropriate entries in the `pecan.xml` in `web/04-runpecan.php`.
-
-### RabbitMQ in the PEcAn XML {#rabbitmq-xml}
-
-RabbitMQ is a special case of remote execution, so it is configured by the `host` node.
-An example RabbitMQ configuration is as follows:
-
-```xml
-<host>
-  <rabbitmq>
-    <uri>amqp://guest:guest@rabbitmq/%2F</uri>
-    <queue>sipnet_136</queue>
-  </rabbitmq>
-</host>
-```
-
-Here, `uri` and `queue` have the same general meanings as described in the ["producer"](#rabbitmq-basics-sender) section.
-Note that `queue` here refers to the target model.
-In PEcAn, RabbitMQ model queues are named as `MODELTYPE_REVISION`,
-so the example above refers to the SIPNET model version 136.
-Another example is `ED2_git`, referring to the latest `git` version of the ED2 model.
-
-### RabbitMQ configuration in Dockerfiles {#rabbitmq-dockerfile}
-
-As described in the ["consumer"](#rabbitmq-basics-receiver) section, our standard RabbitMQ receiver script is configured using three environment variables: `RABBITMQ_URI`, `RABBITMQ_QUEUE`, and `APPLICATION`.
-Therefore, configuring a container to work with PEcAn's RabbitMQ instance requires setting these three variables in the Dockerfile using an [`ENV`](https://docs.docker.com/engine/reference/builder/#env) statement.
-
-For example, this excerpt from `docker/base/Dockerfile.executor` (for the `pecan/executor` image responsible for the PEcAn workflow) sets these variables as follows:
-
-```dockerfile
-ENV RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" \
-    RABBITMQ_QUEUE="pecan" \
-    APPLICATION="R CMD BATCH workflow.R"
-```
-
-Similarly, this excerpt from `docker/models/Dockerfile.sipnet` (which builds the SIPNET model image) is a typical example for a model image.
-Note the use of [`ARG`](https://docs.docker.com/engine/reference/builder/#arg) here to specify a default model version of 136 while allowing this to be configurable (via `--build-arg MODEL_VERSION=X`) at build time:
-
-```dockerfile
-ARG MODEL_VERSION=136
-
-ENV APPLICATION="./job.sh" \
-    MODEL_TYPE="SIPNET" \
-    MODEL_VERSION="${MODEL_VERSION}"
-
-ENV RABBITMQ_QUEUE="${MODEL_TYPE}_${MODEL_VERSION}"
-```
-
-**WARNING**: Dockerfile environment variables set via `ENV` are assigned _all at once_; _they do not evaluate successively, left to right_.
-Consider the following block:
-
-```dockerfile
-# Don't do this!
-ENV MODEL_TYPE="SIPNET" \
-    MODEL_VERSION=136 \
-    RABBITMQ_QUEUE=${MODEL_TYPE}_${MODEL_VERSION}   # <- Doesn't know about MODEL_TYPE or MODEL_VERSION!
-```
-
-In this block, the expansion for setting `RABBITMQ_QUEUE` _is not aware_ of the current values of `MODEL_TYPE` or `MODEL_VERSION`, and will therefore be set incorrectly to just `_` (unless they have been set previously, in which case it will be aware only of their earlier values).
-As such, **variables depending on other variables must be set in a separate `ENV` statement that comes after the one setting the variables they depend on**.
-
-### Case study: PEcAn web interface {#rabbitmq-case-study}
-
-The following describes in general terms what happens during a typical run of the PEcAn web interface with RabbitMQ.
-
-1. The user initializes all containers with `docker-compose up`.
-   All the services that interact with RabbitMQ (`executor` and all models) run `receiver.py` in the foreground, waiting for messages to tell them what to do.
-
-2. The user browses to http://localhost:8000/pecan/ and steps through the web interface.
-   All the pages up to `04-runpecan.php` run on the `web` container, and are primarily for setting up the `pecan.xml` file.
-
-3. Once the user starts the PEcAn workflow at `04-runpecan.php`, the underlying PHP code connects to RabbitMQ (based on the URI provided in `config.php`) and posts the following message to the `pecan` queue:
-
-   ```json
-   {"folder": "/workflows/PEcAn_WORKFLOWID", "workflowid": "WORKFLOWID"}
-   ```
-
-4. The `executor` service, which is listening on the `pecan` queue, hears this message and executes its `APPLICATION` (`R CMD BATCH workflow.R`) in the working directory specified in the message's `folder`.
-   The `executor` service then performs the pre-execution steps (e.g. trait meta-analysis, conversions) itself.
-   Then, to actually execute the model, `executor` posts the following message to the target model's queue:
-
-   ```json
-   {"folder": "/workflows/PEcAn_WORKFLOWID/run/RUNID"}
-   ```
-
-5. The target model service, which is listening on its dedicated queue, hears this message and runs its `APPLICATION`, which is `job.sh`, in the directory indicated by the message.
-   Upon exiting (normally), the model service writes its status into a file called `rabbitmq.out` in the same directory.
-
-6. The `executor` container continuously looks for the `rabbitmq.out` file as an indication of the model run's status.
-   Once it sees this file, it reads the status and proceeds with the post-execution parts of the workflow.
-   (NOTE that this isn't perfect. If the model running process exits abnormally, the `rabbitmq.out` file may not be created, which can cause the `executor` container to hang. If this happens, the solution is to restart the `executor` container with `docker-compose restart executor`.) A minimal sketch of this polling logic is shown below.
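-The polling in step 6 can be pictured with a short R sketch. This is purely illustrative -- the function name and timeout are invented here, and the real logic lives in the PEcAn workflow code:
-
-```r
-# Wait for the model container to write its status file, as described in step 6.
-wait_for_rabbitmq_out <- function(rundir, timeout_s = 3600, poll_s = 5) {
-  status_file <- file.path(rundir, "rabbitmq.out")
-  start <- Sys.time()
-  while (!file.exists(status_file)) {
-    if (as.numeric(difftime(Sys.time(), start, units = "secs")) > timeout_s) {
-      stop("Timed out waiting for ", status_file)
-    }
-    Sys.sleep(poll_s)
-  }
-  readLines(status_file)  # the status written by the model service
-}
-```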
-
-
-
-# Remote execution with PEcAn {#pecan-remote}
-
-Remote execution allows the user to leverage the power and storage of high performance computing clusters, AWS instances, or specially configured virtual machines, without leaving their local working environment.
-PEcAn uses remote execution primarily to run ecosystem models.
-
-The infrastructure for remote execution lives in the `PEcAn.remote` package (`base/remote` in the PEcAn repository).
-
-This section describes the following:
-
-1. Checking your ability to connect to the remote machine correctly:
-    + Basics of command line SSH
-    + SSH authentication with keys and passwords
-    + Basics of SSH tunnels, and how they are used in PEcAn
-
-2. Description of PEcAn related tools that control remote execution:
-    + Basic remote execution R functions in `PEcAn.remote`
-
-3. Setup: configuration files and settings:
-    + Remote model execution configuration in the `pecan.xml` and `config.php`
-    + Additional information about preparing remote servers for execution
-
-
-## Basics of SSH
-
-All of the PEcAn remote infrastructure depends on the system `ssh` utility, so it's important to make sure this works before attempting the advanced remote execution functionality in PEcAn.
-
-To connect to a remote server interactively, the command is simply:
-
-```sh
-ssh <username>@<hostname>
-```
-
-For instance, my connection to the BU shared computing cluster looks like:
-
-```sh
-ssh ashiklom@geo.bu.edu
-```
-
-It will prompt me for my BU password, and, if successful, will drop me into a login shell on the remote machine.
-
-As an alternative to the login shell, `ssh` can be used to execute arbitrary code, whose output will be returned exactly as it would be if you ran the command locally.
-For example, the following:
-
-```sh
-ssh ashiklom@geo.bu.edu pwd
-```
-
-will run the `pwd` command, and return the path to my home directory on the BU SCC.
-The more advanced example below will run some simple R code on the BU SCC and return the output as if it was run locally.
-
-```sh
-ssh ashiklom@geo.bu.edu Rscript -e "seq(1, 10)"
-```
-
-## SSH authentication -- password vs. SSH key
-
-Because the BU SCC uses passwords for authentication, the commands above will prompt me for my password each time.
-
-An alternative to password authentication is using SSH keys.
-Under this system, the host machine (say, your laptop, or the PEcAn VM) has to generate a public and private key pair (using the `ssh-keygen` command).
-The private key (by default, a file in `~/.ssh/id_rsa`) lives on the host machine, and should **never** be shared with anyone.
-The public key will be distributed to any remote machines to which you want the host to be able to connect.
-On each remote machine, the public key should be added to a list of authorized keys located in the `~/.ssh/authorized_keys` file (on the remote machine).
-The authorized keys list indicates which machines (technically, which keys -- a single machine, and even a single user, can have many keys) are allowed to connect to it.
-This is the system used by all of the PEcAn servers (`pecan1`, `pecan2`, `test-pecan`).
-
-## SSH tunneling
-
-SSH authentication can be more advanced than indicated above, especially on systems that require dual authentication.
-Even simple password-protection can be tricky in scripts, since (by design) it is fairly difficult to get SSH to accept a password from anything other than raw keyboard input (i.e. SSH doesn't let you pass passwords as input or arguments, because this exposes your password as plain text).
-
-A convenient and secure way to follow SSH security protocol, but avoid having to go through the full authentication process every time, is to use SSH tunnels (or "sockets", which are effectively synonymous).
-Essentially, an SSH socket is a read- and write-protected file that contains all of the information about an SSH connection.
-
-To create an SSH tunnel, use a command like the following:
-
-```sh
-ssh -n -N -f -o ControlMaster=yes -S /path/to/socket/file <username>@<hostname>
-```
-
-If appropriate, this will prompt you for your password (if using password authentication), and then will drop you back to the command line (thanks to the `-N` flag, which runs SSH without executing a command, the `-f` flag, which pushes SSH into the background, and the `-n` flag, which prevents ssh from reading any input).
-It will also create the file `/path/to/socket/file`.
-
-To use this socket with another command, use the `-S /path/to/file` flag, pointing to the same tunnel file you just created.
-
-```sh
-ssh -S /path/to/socket/file <hostname> <command>
-```
-
-This will let you access the server without any sort of authentication step.
-As before, if `<command>` is blank, you will be dropped into an interactive shell on the remote, or if it's a command, that command will be executed and the output returned.
-
-To close a socket, use the following:
-
-```sh
-ssh -S /path/to/socket/file -O exit <hostname>
-```
-
-This will delete the socket file and close the connection.
-Alternatively, a scorched-earth approach to closing the SSH tunnel, if you don't remember where you put the socket file, is something like the following:
-
-```sh
-pgrep ssh   # See which processes will be killed
-pkill ssh   # Kill those processes
-```
-
-...which will kill all user processes called `ssh`.
-
-To automatically create tunnels following a specific pattern, you can add the following to your `~/.ssh/config`:
-
-```sh
-Host <hostname>
-    ControlMaster auto
-    ControlPath /tmp/%r@%h:%p
-```
-
-For more information, see `man ssh`.
-
-## SSH tunnels and PEcAn
-
-Many of the `PEcAn.remote` functions assume that a tunnel is already open.
-If working from the web interface, the tunnel will be opened for you by some under-the-hood PHP and Bash code, but if debugging or working locally, you will have to create the tunnel yourself.
-The best way to do this is to create the tunnel first, outside of R, as described above.
-(In the following examples, I'll use my username `ashiklom` connecting to the `test-pecan` server with a socket stored in `/tmp/testpecan`.
-To follow along, replace these with your own username and designated server, respectively.)
-
-```sh
-ssh -nNf -o ControlMaster=yes -S /tmp/testpecan ashiklom@test-pecan.bu.edu
-```
-
-Then, in R, create a `host` object, which is just a list containing the elements `name` (hostname) and `tunnel` (path to tunnel file).
-
-```r
-my_host <- list(name = "test-pecan.bu.edu", tunnel = "/tmp/testpecan")
-```
-
-This host object can then be used in any of the remote execution functions.
-
-
-## Basic remote execute functions
-
-The `PEcAn.remote::remote.execute.cmd` function runs a system command on a remote server (or on the local server, if `host$name == "localhost"`).
-
-```r
-x <- PEcAn.remote::remote.execute.cmd(host = my_host, cmd = "echo", args = "Hello world")
-x
-```
-
-Note that `remote.execute.cmd` is similar to base R's `system2`, in that the base command (in this case, `echo`) is passed separately from its arguments (`"Hello world"`).
-Note also that the output of the remote command is returned as a character vector.
-
-For R code, there is a special wrapper around `remote.execute.cmd` -- `PEcAn.remote::remote.execute.R`, which runs R code (passed as a string) on a remote machine and returns the output.
-
-```r
-code <- "
-    x <- 2:4
-    y <- 3:1
-    x ^ y
-"
-out <- PEcAn.remote::remote.execute.R(code = code, host = my_host)
-```
-
-For additional functions related to remote file operations and other utilities, see the `PEcAn.remote` package documentation.
-
-
-## Remote model execution with PEcAn
-
-The workhorse of remote model execution is the `PEcAn.remote::start.model.runs` function, which distributes execution of each run in a list of runs (e.g. multiple runs in an ensemble) to the local machine or a remote machine based on the configuration in the PEcAn settings.
-
-Broadly, there are three major types of model execution:
-
-- Serialized (`PEcAn.remote::start_serial`) -- This runs models one at a time, directly on the local machine or remote (i.e. same as calling the executables one at a time for each run).
-- Via a queue system (`PEcAn.remote::start_qsub`) -- This uses a queue management system, such as SGE (e.g. `qsub`, `qstat`) found on the BU SCC machines, to submit jobs.
-  For computationally intensive tasks, this is the recommended way to go.
-- Via a model launcher script (`PEcAn.remote::setup_modellauncher`) -- This is a highly customizable approach where task submission is controlled by a user-provided script (`launcher.sh`).
-
-## XML configuration
-
-The relevant section of the PEcAn XML file is the `<host>` block.
-Here is a minimal example from one of my recent runs:
-
-```xml
-<host>
-  <name>geo.bu.edu</name>
-  <user>ashiklom</user>
-  <tunnel>/home/carya/output//PEcAn_99000000008/tunnel/tunnel</tunnel>
-</host>
-```
-
-Breaking this down:
-
-- `name` -- The hostname of the machine where the runs will be performed.
-  Set it to `localhost` to run on the local machine.
-- `user` -- Your username on the remote machine (note that this may be different from the username on your local machine).
-- `tunnel` -- This is the tunnel file for the connection used by all remote execution files.
-  The tunnel is created automatically by the web interface, but must be created by the user for command line execution.
-
-This configuration will run in serialized mode.
-To use `qsub`, the configuration is slightly more involved:
-
-```xml
-<host>
-  <name>geo.bu.edu</name>
-  <user>ashiklom</user>
-  <qsub>qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash</qsub>
-  <qsub.jobid>Your job ([0-9]+) .*</qsub.jobid>
-  <qstat>qstat -j @JOBID@ || echo DONE</qstat>
-  <tunnel>/home/carya/output//PEcAn_99000000008/tunnel/tunnel</tunnel>
-</host>
-```
-
-The additional fields are as follows:
-
-- `qsub` -- The command used to submit jobs to the queue system.
-  Despite the name, this can be any command used for any queue system.
-  The following variables are available to be set here:
-  - `@NAME@` -- Job name to display
-  - `@STDOUT@` -- File to which `stdout` will be redirected
-  - `@STDERR@` -- File to which `stderr` will be redirected
-- `qsub.jobid` -- A regular expression, from which the job ID will be determined.
-  This string will be parsed by R as `jobid <- gsub(qsub.jobid, "\\1", output)` -- note that the first pattern match is taken as the job ID (see the short sketch after this list).
-- `qstat` -- The command used to check the status of a job.
-  Internally, PEcAn will look for the `DONE` string at the end, so a structure like `<some command> || echo DONE` is required.
-  The `@JOBID@` here is the job ID determined from the `qsub.jobid` parsing.
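-To make the `qsub.jobid` parsing concrete, here is a minimal R sketch using the regular expression from the example above (the qsub output line is invented for illustration):
-
-```r
-qsub.jobid <- "Your job ([0-9]+) .*"
-# Example line printed by qsub when a job is submitted
-output <- 'Your job 2750597 ("job.sh") has been submitted'
-jobid <- gsub(qsub.jobid, "\\1", output)
-jobid
-#> [1] "2750597"
-```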
-
-Documentation for using the model launcher is currently unavailable.
-
-## Configuration for PEcAn web interface
-
-The `config.php` file has a few variables that control where the web interface can run jobs and how to run those jobs. It is located in the `/web` directory; if you have not touched it yet it will be named `config.example.php`. Rename it to `config.php` and edit it following the directions below.
-
-These variables are `$hostlist`, `$qsublist`, `$qsuboptions`, and `$SSHtunnel`. In the near future `$hostlist`, `$qsublist`, and `$qsuboptions` will be combined into a single list.
-
-`$SSHtunnel` : points to the script that creates an SSH tunnel. The script is located in the web folder, and the default value of `dirname(__FILE__) . DIRECTORY_SEPARATOR . "sshtunnel.sh";` most likely will work.
-
-`$hostlist` : an array with, by default, a single value, only allowing jobs to run on the local server. Adding any other servers to this list will show them in the pull-down menu when selecting machines, and will trigger the web page to ask for a username and password for the remote execution (make sure to use an HTTPS setup when asking for passwords, to prevent them from being sent in the clear).
-
-`$qsublist` : an array of hosts that require qsub to be used when running the models. This list can include `$fqdn` to indicate that jobs on the local machine should use qsub to run the models.
-
-`$qsuboptions` : an array that lists options for each machine. Currently it supports the following options (see also [Run Setup] and look at the `<host>` tags):
-
-```
-array("geo.bu.edu" =>
-    array("qsub"   => "qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash",
-          "jobid"  => "Your job ([0-9]+) .*",
-          "qstat"  => "qstat -j @JOBID@ || echo DONE",
-          "job.sh" => "module load udunits R/R-3.0.0_gnu-4.4.6",
-          "models" => array("ED2" => "module load hdf5")))
-```
-
-In this list `qsub` is the actual command line for qsub, `jobid` is the text returned from qsub, and `qstat` is the command to check whether the job is finished. `job.sh` and the values in `models` are additional entries to add to the job.sh file generated to run the model. This can be used to make sure modules are loaded on the HPC cluster before running the actual model.
-
-## Running PEcAn code remotely
-
-You do not need to install PEcAn fully on your remote machine. You can compile and install just the model specific pieces of PEcAn on the cluster, without having to install the full code base of PEcAn (and all of its OS dependencies). Use `devtools::install_github` to install the `PEcAn.utils` package:
-
-```
-devtools::install_github("pecanproject/pecan", subdir = 'base/utils')
-```
-
-Next we need to install the model specific pieces; this is done almost the same way (example for ED2):
-
-```
-devtools::install_github("pecanproject/pecan", subdir = 'models/ed')
-```
-
-This should install the required dependencies.
-
-*The following are some notes on how to install the model specific pieces on different HPC clusters.*
-
-## Special case: `geo.bu.edu`
-
-The following modules need to be loaded:
-
-```
-module load hdf5 udunits R/R-3.0.0_gnu-4.4.6
-```
-
-Next, the following packages need to be installed; otherwise R will fall back on the older versions installed in the site library:
-
-```
-install.packages(c('udunits2', 'lubridate'),
-  configure.args=c(udunits2='--with-udunits2-lib=/project/earth/packages/udunits-2.1.24/lib --with-udunits2-include=/project/earth/packages/udunits-2.1.24/include'),
-  repos='http://cran.us.r-project.org')
-```
-
-Finally, to install support for both ED and SIPNET:
-
-```
-devtools::install_github("pecanproject/pecan", subdir = 'base/utils')
-devtools::install_github("pecanproject/pecan", subdir = 'models/sipnet')
-devtools::install_github("pecanproject/pecan", subdir = 'models/ed')
-```
-
-
-
-# Data assimilation with DART
-
-In addition to the state assimilation routines found in the assim.sequential module, another approach for state data assimilation in PEcAn is through the DART workflow created by the DAReS group at NCAR.
-
-This section gives a straightforward explanation of how to implement DART, focused on the technical aspects of the implementation. If there are any questions, feel free to send \@Viskari an email (`tt.viskari@gmail.com`) or contact DART support, as they are quite awesome about helping people with problems. Also, if there are any suggestions on how to improve this documentation, please let me know.
-
-**Running with current folders in PEcAn**
-
-The DART folders in PEcAn are currently set up so that you can simply copy their structure over a downloaded DART workflow and it should replace/add the relevant files and folders. The most important step after that is to check and change the run paths in the following files:
-
-- the path_names files in the work folders,
-- the T_ED2IN file, as it indicates where the run results will be written, and
-- advance_model.csh, as it indicates where to copy files from/to.
-
-The second thing is setting the state vector size. This is explained in more detail below, but essentially it is governed by the variable model_size in model_mod.f90. In addition to this, it should be changed in the utils/F2R.f90 and R2F.f90 programs, which are responsible for reading and writing state variable information for the different ensembles. This is also expanded on below. Finally, the new state vector size should be updated for any other executable that uses it.
-
-The third thing needed are the initial condition and observation sequence files. They always follow the same format and are explained in more detail below.
-
-Finally there is the ensemble size, which is the easiest to change. In the work subfolder, there is a file named input.nml. Simply changing the ensemble size there will set it for the run itself. Also remember that the initial conditions file should contain as many state vectors as there are ensemble members.
-
-**Adjusting the workflow**
-
-The central file for the actual workflow is advance_model.csh. It is a script DART calls to determine how the state vector changes between two observation times, and it is essentially the only file one needs to change when changing state models or observation operators. The file itself should be commented well enough to give a good idea of the flow, but below is a crude order of events:
-
-1. Create a temporary folder to run the model in and copy/link the required files into it.
-2. Read in the state vector values and times from DART. Here it is important to note that the values will be in binary format and need to be read in by a Fortran program. In my system, there is a program called F2R which reads in the binary values and writes out, in ascii form, the state vector values as well as which ED2 history files it needs to copy based on the time stamps.
-3. Run the observation operator, which writes the state vector state into the history files and adjusts them if necessary.
-4. Run the model.
-5. Read the new state vector values from the output files.
-6. Convert the state vector values to binary. In my system, this is done by the program R2F.
-
-**Initial conditions file**
-
-The initial conditions file, commonly named filter_ics (although you can set it to something else in input.nml), is relatively simple in structure. It has one sequence repeating over the number of ensemble members.
-The first line contains two times: seconds and days. Just use one of them in this situation, but it has to match the starting time given in input.nml.
-After that, each line should contain a value from the state vector, in the order you want to treat them.
-The R functions filter_ics.R and B_filter_ics.R in the R folder give good examples of how to create these.
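-To make the format concrete, here is a minimal R sketch of a filter_ics writer in the spirit of those functions. The function name and the example numbers are illustrative only:
-
-```r
-# state_matrix: one row per ensemble member, one column per state vector element
-write_filter_ics <- function(state_matrix, days, seconds, path = "filter_ics") {
-  con <- file(path, "w")
-  on.exit(close(con))
-  for (i in seq_len(nrow(state_matrix))) {
-    # Times line: must match the starting time given in input.nml
-    writeLines(paste(seconds, days), con)
-    # Then one state vector value per line, in order
-    writeLines(format(state_matrix[i, ]), con)
-  }
-}
-
-# e.g. 20 ensemble members, each with a 3-element state vector
-write_filter_ics(matrix(rnorm(60), nrow = 20), days = 148000, seconds = 0)
-```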
-
-**Observations files**
-
-The file which contains the observations is commonly known as obs_seq.out, although again the name of the file can be changed in input.nml. The structure of the file is relatively straightforward, and the R function ObsSeq.R in the R subfolder has the right structure for this. Instead of writing it out here, I want to focus on a few really important details in this file.
-
-Each observation will have a time, a value, an uncertainty, a location and a kind. The first four are self-explanatory, but the kind is really important and unfortunately really easy to misunderstand. In this file, the kind does not refer to a unit or a type of observation, but to which member of the state vector this is an observation of. So if the kind was, for example, 5, it would mean that it was an observation of the fifth member of the state vector. However, if the kind value is positive, the system assumes that there is some sort of operator change involved in comparing the observation and the state vector value, which is specified in a subprogram in model_mod.f90.
-
-So for a direct identity comparison between the observation and the state vector value, the kind needs to be the negative number of the state vector component. Thus, again, if the observation is of the fifth state vector value, the kind should be set as -5. It is therefore recommended that the state vector values have already been altered to be comparable with the observations.
-
-As for the location, there are many ways to set it in DART, and the method needs to be chosen when compiling the code by telling the program which of the location mods it is to use. In our examples we used a 1-dimensional location vector with scaled values between 0 and 1. In the future it makes sense to switch to a 2-dimensional longitude/latitude scale, but for the time being the location does not impact the system a lot. The main impact will be if the covariances are localized, as that is decided based on their locations.
-
-
-**State variable vector in DART**
-
-Creating/adjusting a state variable vector in DART is relatively straightforward. Below are listed the steps to specify a state variable vector in DART.
-
-I. For each specific model, there should be its own folder within the DART root models folder. In this folder there is a model_mod.f90, which contains the model specific subroutines necessary for a DART run.
-
-At the beginning of this file there should be the following line:
-
-integer, parameter :: model_size = [number]
-
-The number here should be the number of variables in the vector. So for example if there were three state variables, then the line should look like this:
-
-integer, parameter :: model_size = 3
-
-This number should also be changed to match in any of the other executables called during the run, as indicated by the list above.
-
-
-II. In the DART root, there should be a folder named obs_kind, which contains a file called DEFAULT_obs_kind_mod.F90. It is important to note that all changes should be done to this file instead of obs_kind_mod.f90, as during compilation DART creates obs_kind_mod.f90 from DEFAULT_obs_kind_mod.F90.
-This program file contains all the defined observation types used by DART and numbers them for easier reference later. Different types are classified according to observation instrument or relevant observation phenomenon. Adding a new type only requires finding an unused number and starting a new identifying line with the following:
-
-integer, parameter, public :: &
-    KIND_...
-
-Note that the observation kind should always be easy to understand, so avoid using unnecessary acronyms. For example, when adding an observation type for Leaf Area Index, it would look like below:
-
-integer, parameter, public :: &
-    KIND_LEAF_AREA_INDEX = [number]
-
-
-III. In the DART root, there should be a folder named obs_def, which contains several files starting with obs_def_. These files contain the different available observation kinds, classified either according to observation instrument or observable system. Each file starts with the line
-
-! BEGIN DART PREPROCESS KIND LIST
-
-and ends with the line
-
-! END DART PREPROCESS KIND LIST
-
-The lines between these two should contain
-
-! The desired observation reference, the observation type, COMMON_CODE.
-
-For example, for observations relating to phenology, I have created a file called obs_def_phen_mod.f90. In this file I define the Leaf Area Index observations in the following way.
-
-! BEGIN DART PREPROCESS KIND LIST
-! LAI, TYPE_LEAF_AREA_INDEX, COMMON_CODE
-! END DART PREPROCESS KIND LIST
-
-Note that the exclamation marks are necessary for the file.
-
-
-IV. In the model specific folder, in the work subfolder, there is a namelist file input.nml. This contains all the run specific information for DART. In it, there is a subtitle &preprocess, under which there is a line
-
-input_files = '....'
-
-This input_files section must be set to refer to the obs_def file created in step III. The input files can contain references to multiple obs_def files if necessary.
-
-As an example, the reference to obs_def_phen_mod.f90 would look like
-
-input_files = '../../../obs_def/obs_def_phen_mod.f90'
-
-V. Finally, as an optional step, the different values in the state vector can be typed. In model_mod, referred to in step I, there is a subroutine get_state_meta_data. In it, there is an input variable index_in, which refers to the vector component. So for instance, for the second component of the vector, index_in would be 2. If this is done, the variable kind also has to be included at the beginning of the model_mod.f90 file, in the section which begins
-
-use obs_kind_mod, only ::
-
-The location of the variable can be set, but for the 0-dimensional model we are discussing here, this is not necessary.
-
-Here, though, it is possible to set the variable types by including the following line
-
-if(index_in .eq. [number]) var_type = [One of the variable kinds set in step II]
-
-VI. If the length of the state vector is changed, it is important that the script run with DART produces a vector of that length. Change appropriately if necessary.
-
-After these steps, DART should be able to run with the state vector of interest.
-
-
-
-# (PART) Appendix {-}
-
-# Miscellaneous
-
-## TODO
-
-* Add list of developers
-
-## Using the PEcAn download.file() function
-
-```
-download.file(url, destination_file, method)
-```
-
-This custom PEcAn function works together with the base R function download.file (https://stat.ethz.ch/R-manual/R-devel/library/utils/html/download.file.html), but provides expanded functionality that generalizes its use to a broad range of computing environments. Some computing environments sit behind a firewall or proxy, including FTP firewalls, which may require a custom FTP program and/or initial proxy server authentication to retrieve the files needed by PEcAn (e.g. meteorology drivers and other inputs) to run certain model simulations or tools. For example, Brookhaven National Laboratory (BNL) requires an initial connection to an FTP proxy before downloading files via the FTP protocol. As a result, the computers running PEcAn behind the BNL firewall (e.g. https://modex.bnl.gov) use the ncftp client (http://www.ncftp.com/) to download files for PEcAn, because the methods available to the base R download.file(), such as curl and libcurl, cannot provide credentials for a proxy, and those that can, such as wget, do not easily allow connecting through a proxy server before downloading files. The current option for use in these instances is **ncftp**, specifically **ncftpget**.
-
-Examples:
-
-*HTTP*
-```
-download.file("http://lib.stat.cmu.edu/datasets/csb/ch11b.txt", "~/test.download.txt")
-```
-
-*FTP*
-```
-download.file("ftp://ftp.cdc.noaa.gov/Datasets/NARR/monolevel/pres.sfc.2000.nc", "~/pres.sfc.2000.nc")
-```
-
-*Customizing to use ncftp when running behind an FTP firewall (requires ncftp to be installed and available)*
-```
-download.file("ftp://ftp.cdc.noaa.gov/Datasets/NARR/monolevel/pres.sfc.2000.nc", "~/pres.sfc.2000.nc", method = "ncftpget")
-```
-
-On modex.bnl.gov, the ncftp firewall configuration file (e.g. ~/.ncftp/firewall) is configured as:
-
-```
-firewall-type=1
-firewall-host=ftpgateway.sec.bnl.local
-firewall-port=21
-```
-
-which then allows for direct connection through the firewall using a command like:
-
-```
-ncftpget ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-fortran-4.4.4.tar.gz
-```
-
-To allow the use of ncftpget from within the download.file() function, you need to set the download.ftp.method option in your R options list. To see your current R options, run options() from the R command line; the output should look something like this:
-
-```
-> options()
-$add.smooth
-[1] TRUE
-
-$bitmapType
-[1] "cairo"
-
-$browser
-[1] "/usr/bin/xdg-open"
-
-$browserNLdisabled
-[1] FALSE
-
-$CBoundsCheck
-[1] FALSE
-
-$check.bounds
-[1] FALSE
-
-$citation.bibtex.max
-[1] 1
-
-$continue
-[1] "+ "
-
-$contrasts
-        unordered           ordered
-"contr.treatment"      "contr.poly"
-```
-
-In order to set your download.ftp.method option, you need to add a line such as the following to your ~/.Rprofile:
-
-```
-# set default FTP
-options(download.ftp.method = "ncftpget")
-```
-
-On modex at BNL we have set the global option in /usr/lib64/R/etc/Rprofile.site.
-
-Once this is done you should be able to see the option set using this command in R:
-
-```
-> options("download.ftp.method")
-$download.ftp.method
-[1] "ncftpget"
-```
-
-
-
-# FAQ
-
-
-
-# PEcAn Project Used in Courses
-
-## University classes
-
-### GE 375 - Environmental Modeling - Spring 2013, 2014 (Mike Dietze, Boston University)
-
-The final "Case Study: Terrestrial Ecosystem Models" is a PEcAn-based hands-on activity. Each class has had 25 students.
-
-### GE 585 - Ecological Forecasting - Fall 2013 (Mike Dietze, Boston University)
-
-
-## Summer Courses / Workshops
-
-### Annual summer course in flux measurement and advanced modeling (Mike Dietze, Ankur Desai) Niwot Ridge, CO
-
-About 1/3 lecture, 2/3 hands-on (the syllabus is actually wrong, as it lists it the other way around). Each class has 24 students.
-
-[2013 Syllabus](http://www.fluxcourse.org/files/SyllabusFluxcourse_2013.pdf): see the Tuesday Week 2 Data Assimilation lectures and PEcAn demo, and the class projects and presentations on Thursday and Friday. Most students use PEcAn for their group projects; 2014 will be the third year that PEcAn has been used for this course.
-
-### Assimilating Long-Term Data into Ecosystem Models: Paleo-Ecological Observatory Network (PalEON) Project
-
-Here is a link to the course: https://www3.nd.edu/~paleolab/paleonproject/summer-course/
-
-This course uses the same demo as above, including collecting data in the field and assimilating it (part 3).
-
-### Integrating Evidence on Forest Response to Climate Change: Physiology to Regional Abundance
-
-http://blue.for.msu.edu/macrosystems/workshop
-
-May 13-14, 2013
-
-Session 4: Integrating Forest Data Into Ecosystem Models
-
-### Ecological Society of America meetings
-
-[Workshop: Combining Field Measurements and Ecosystem Models](http://eco.confex.com/eco/2013/webprogram/Session9007.html)
-
-
-## Selected Publications
-
-1. Dietze, M.C., D.S. LeBauer, R. Kooper (2013) [On improving the communication between models and data](https://github.com/PecanProject/pecan/blob/master/documentation/dietze2013oic.pdf?raw=true). Plant, Cell, & Environment. doi:10.1111/pce.12043
-2. LeBauer, D.S., D. Wang, K. Richter, C. Davidson, & M.C. Dietze. (2013). [Facilitating feedbacks between field measurements and ecosystem models](https://github.com/PecanProject/pecan/blob/master/documentation/lebauer2013ffb.pdf?raw=true). Ecological Monographs. doi:10.1890/12-0137.1
-
-
-
-# Package Dependencies {#package-dependencies}
-
-## Executive Summary: What to usually do
-
-When you're editing one PEcAn package and want to use a function from any other R package (including other PEcAn packages), the standard method is to add the other package to the `Imports:` field of your DESCRIPTION file, spell the function in fully namespaced form (`pkg::function()`) everywhere you call it, and be done. There are a few cases where this isn't enough, but they're rarer than you think. The rest of this section mostly deals with the exceptions to this rule and why not to use them when they can be avoided.
-
-## Big Picture: What's possible to do
-
-To make one PEcAn package use functionality from another R package (including other PEcAn packages), you must do at least one and up to four things in your own package.
-
-* Always, *declare* which packages your package depends on, so that R can install them as needed when someone installs your package and so that human readers can understand what additional functionality it uses. Declare dependencies by manually adding them to your package's DESCRIPTION file.
-* Sometimes, *import* functions from the dependency package into your package's namespace, so that your functions know where to find them. This is only sometimes necessary, because you can usually use `::` to call functions without importing them. Import functions by writing Roxygen `@importFrom` statements; do not edit the NAMESPACE file by hand.
-* Rarely, *load* dependency code into the R environment, so that the person using your package can use it without loading it separately. This is usually a bad idea, has caused many subtle bugs, and in PEcAn it should only be used when unavoidable. When unavoidable, prefer `requireNamespace(..., quietly = TRUE)` over `Depends:` or `require()` or `library()`.
-* Only if your dependency relies on non-R tools, *install* any components that R won't know how to find for itself. These components are often but not always identifiable from a `SystemRequirements` field in the dependency's DESCRIPTION file. The exact installation procedure will vary case by case and from one operating system to another, and for PEcAn the key point is that you should skip this step until it proves necessary. When it does prove necessary, edit the documentation for your package to include advice on installing the dependency components, then edit the PEcAn build and testing scripts as needed so that they follow your advice.
-
-The advice below about each step is written specifically for PEcAn, although much of it holds for R packages in general. For more details about working with dependencies, start with Hadley Wickham's [R packages](http://r-pkgs.had.co.nz/description.html#dependencies) and treat the R Core Team's [Writing R Extensions](https://cran.r-project.org/doc/manuals/R-exts.html) as the final authority.
-
-
-## Declaring Dependencies: Depends, Suggests, Imports
-
-List all dependencies in the DESCRIPTION file. Every package that is used by your package's code must appear in exactly one of the sections `Depends`, `Imports`, or `Suggests`.
-
-Please list packages in alphabetical order within each section. R doesn't care about the order, but you will later when you're trying to check whether this package uses a particular dependency.
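-
-For orientation, here is a minimal sketch of how these fields might look in a DESCRIPTION file (the package names are purely illustrative); the points below explain when each field is the right choice:
-
-```
-Imports:
-    dplyr,
-    ncdf4,
-    PEcAn.logger
-Suggests:
-    rjags,
-    testthat
-```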
-
-* `Imports` is the correct place to declare most PEcAn dependencies. This ensures that they get installed, but *does not* automatically import any of their functions -- since PEcAn style prefers to mostly use `::` instead of importing, this is what we want.
-
-* `Depends` is, despite the name, usually the wrong place to declare PEcAn dependencies. The only difference between `Depends` and `Imports` is that when the user attaches your package to their own R workspace (e.g. using `library("PEcAn.yourpkg")`), the packages in `Depends` are attached as well. Notice that a call like `PEcAn.yourpkg::yourfun()` *will not* attach your package *or* its dependencies, so your code still needs to import or `::`-qualify all functions from packages listed in `Depends`. In short, `Depends` is not a shortcut, is for user convenience not developer convenience, and makes it easy to create subtle bugs that appear to work during interactive test sessions but fail when run from scripts. As the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies) puts it (emphasis added):
-
-    > This [Imports and Depends] scheme was developed before all packages had namespaces (R 2.14.0 in October 2011), and good practice changed once that was in place. Field ‘Depends’ should nowadays be used rarely, only for packages which are intended to be put on the search path to make their facilities **available to the end user (and not to the package itself)**.
-
-* The `Suggests` field can be used to declare dependencies on packages that make your package more useful but are not completely essential. Again from the [R extensions manual](https://cran.r-project.org/doc/manuals/R-exts.html#Package-Dependencies):
-
-    > The `Suggests` field uses the same syntax as `Depends` and lists packages that are not necessarily needed. This includes packages used only in examples, tests or vignettes (see [Writing package vignettes](https://cran.r-project.org/doc/manuals/R-exts.html#Writing-package-vignettes)), and packages loaded in the body of functions. E.g., suppose an example from package foo uses a dataset from package bar. Then it is not necessary to have bar installed in order to use foo unless one wants to execute all the examples/tests/vignettes: it is useful to have bar, but not necessary.
-
-    Some of the PEcAn model interface packages push this definition of "not necessarily needed" by declaring their coupled model package in `Suggests` rather than `Imports`. For example, the `PEcAn.BIOCRO` package cannot do anything useful when the BioCro model is not installed, but it lists BioCro in Suggests because *PEcAn as a whole* can work without it. This is a compromise to simplify installation of PEcAn for users who only plan to use a few models, so that they can avoid the bother of installing BioCro if they only plan to run, say, SIPNET.
-
-    Since the point of Suggests is that they are allowed to be missing, all code that uses a suggested package must behave reasonably when the package is not found. Depending on the situation, "reasonably" could mean checking whether the package is available and throwing an error as needed (PEcAn.BIOCRO uses its `.onLoad` function to check at load time whether BioCro is installed and will refuse to load if it is not), or providing an alternative behavior (`PEcAn.data.atmosphere::get_NARR_thredds` checks at call time for either `parallel` or `doParallel` and uses whichever one it finds first), or something else, but your code should never just assume that the suggested package is available.
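-
-    As an illustration, here is a minimal sketch of the check-at-call-time pattern (the function is hypothetical, not actual PEcAn code; ggplot2 stands in for any suggested package):
-
-    ```r
-    # Assumes ggplot2 is listed in this package's Suggests field.
-    plot_timeseries <- function(y) {
-      # Check availability at call time; fail informatively if it is missing.
-      if (!requireNamespace("ggplot2", quietly = TRUE)) {
-        stop("plot_timeseries() requires the 'ggplot2' package; please install it.")
-      }
-      dat <- data.frame(t = seq_along(y), y = y)
-      # Suggested packages cannot be imported, so every call is ::-qualified.
-      ggplot2::ggplot(dat, ggplot2::aes(x = t, y = y)) + ggplot2::geom_line()
-    }
-    ```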
-
-    You are not allowed to import functions from `Suggests` into your package's namespace, so always call them in `::`-qualified form. By default R will not install suggested packages when your package is installed, but users can change this using the `dependencies` argument of `install.packages`. Note that for testing on Travis CI, PEcAn *does* install all `Suggests` (because they are required for full package checks), so any of your code that runs when a suggested package is not available will never be exercised by Travis checks.
-
-    It is often tempting to move a dependency from Imports to Suggests because it is a hassle to install (large, hard to compile, no longer available from CRAN, currently broken on GitHub, etc.), in the hopes that this will isolate the rest of PEcAn from the troublesome dependency. This helps in some cases, but fails for two very common ones: It does not reduce install time for CI builds, because all suggested packages need to be present when running full package checks (`R CMD check` or `devtools::check` or `make check`). It also does not prevent breakage when updating PEcAn via `make install`, because `devtools::install_deps` does not install suggested packages that are missing but does try to *upgrade* any that are already installed to the newest available version -- even if the installed version took ages to compile and would have worked just fine!
-
-## Importing Functions: Use Roxygen
-
-PEcAn style is to import very few functions and instead use fully namespaced function calls (`pkg::fn()`) everywhere it's practical to do so. In cases where double-colon notation would be especially burdensome, such as when importing custom binary operators like `%>%`, it's acceptable to import specific functions into the package namespace. Do this by using Roxygen comments of the form `#' @importFrom pkg function`, not by hand-editing the NAMESPACE file.
-
-If the import is only used in one or a few functions, use an `@importFrom` in the documentation for each function that uses it. If it is used all over the package, use a single `@importFrom` in the Roxygen documentation block for the whole package, which probably lives in a file called either `zzz.R` or `-package.R`:
-
-```r
-#' What your package does
-#'
-#' Longer description of the package goes here.
-#' Probably with links to other resources about it, citations, etc.
-#'
-#' @docType package
-#' @name PEcAn.yourpkg
-#' @importFrom magrittr %>%
-NULL
-```
-
-Roxygen will make sure there's only one NAMESPACE entry per imported function no matter how many `importFrom` statements there are, but please pick a scheme (either import on every usage or once for the whole package), stick with it, and do not make function `x()` rely on an importFrom in the comments above function `y()`.
-
-Please do *not* import entire package namespaces (`#' @import pkg`); it increases the chance of function name collisions and makes it much harder to understand which package a given function was called from.
-
-A special note about importing functions from the [tidyverse](https://tidyverse.org): Be sure to import from the package(s) that actually contain the functions you want to use, e.g. `Imports: dplyr, magrittr, purrr` / `@importFrom magrittr %>%` / `purrr::map(...)`, not `Imports: tidyverse` / `@importFrom tidyverse %>%` / `tidyverse::map(...)`.
-The package named `tidyverse` is just an interactive shortcut that loads the whole collection of constituent packages; it doesn't export any functions in its own namespace, and therefore importing it into your package doesn't make them available.
-
-## Loading Code: Don't... But Use `requireNamespace` When You Do
-
-The very short version of this section: We want to maintain clear separation between the [package's namespace](http://r-pkgs.had.co.nz/namespace.html) (which we control and want to keep predictable) and the global namespace (which the user controls, might change in ways we have no control over, and will be justifiably angry if we change it in ways they were not expecting). Therefore, avoid attaching packages to the search path (so no `Depends` and no `library()` or `require()` inside functions), and do not explicitly load other namespaces if you can help it.
-
-The longer version requires that we make a distinction often glossed over: *Loading* a package makes it possible for *R* to find things in the package namespace and does any actions needed to make it ready for use (e.g. running its .onLoad method, loading DLLs if the package contains compiled code, etc.). *Attaching* a package (usually by calling `library("somePackage")`) loads it if it wasn't already loaded, and then adds it to the search path so that the *user* can find things in its namespace. As discussed in the "Declaring Dependencies" section above, dependencies listed in `Depends` will be attached when your package is attached, but they will be *neither attached nor loaded* when your package is loaded without being attached.
-
-Loading a dependency into the package namespace is undesirable because it makes it hard to understand our own code -- if we need to use something from elsewhere, we'd prefer to call it from its own namespace using `::` (which implicitly loads the dependency!) or explicitly import it with a Roxygen `@importFrom` directive. But in a few cases this isn't enough. The most common reason to need to explicitly load a dependency is that some packages *define* new S3 methods for generic functions defined in other packages, but do not *export* these methods directly. We would prefer that these packages did not do this, but sometimes we have to use them anyway. An [example from PEcAn](https://github.com/PecanProject/pecan/issues/1368) is that PEcAn.MA needs to call `as.matrix` on objects of class `mcmc.list`. When the `coda` namespace is loaded, `as.matrix(some_mcmc.list)` can be correctly dispatched by `base::as.matrix` to the unexported method `coda:::as.matrix.mcmc.list`, but when `coda` is not loaded this dispatch will fail. Unfortunately coda does not export `as.matrix.mcmc.list`, so we cannot call it directly or import it into the PEcAn.MA namespace; instead we [load the `coda` namespace](https://github.com/PecanProject/pecan/pull/1966/files#diff-e0b625a54a8654cc9b22d9c076e7a838R13) whenever PEcAn.MA is loaded.
-
-Attaching packages to the user's search path is even more problematic because it makes it hard for the user to understand *how your package will affect their own code*. Packages attached by a function stay attached after the function exits, so they can cause name collisions far downstream of the function call, potentially in code that has nothing to do with your package. And worse, since names are looked up in reverse order of package loading, it can cause behavior that differs in strange ways depending on the order of lines that look independent of each other:
-
-```r
-library(Hmisc)
-x = ...
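-# ('...' above is a placeholder; imagine x is real data you want to summarize)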
-y = 3
-summarize(x) # calls Hmisc::summarize
-y2 <- some_package_that_attaches_dplyr::innocent.looking.function(y)
-# Loading required package: dplyr
-summarize(x) # Looks identical to previous summarize, but calls dplyr::summarize!
-```
-
-This is not to say that users will *never* want your package to attach another one for them, just that it's rare: attaching dependencies is much more likely to cause bugs than to fix them, and it doesn't usually save the package author any work anyway.
-
-One possible exception to the do-not-attach-packages rule is a case where your dependency ignores all good practice and wrongly assumes, without checking, that all of its own dependencies are attached; if its DESCRIPTION uses only `Depends` instead of `Imports`, this is often a warning sign. For example, a small-but-surprising number of packages depend on the `methods` package without proper checks (probably because most *but not all* R interpreters attach `methods` by default, so it's easy for an author to forget that it might ever be otherwise unless they happen to test with an interpreter that doesn't).
-
-If you find yourself with a dependency that does this, accept first that you are relying on a package that is broken, and you should either convince its maintainer to fix it or find a way to remove the dependency from PEcAn. But as a short-term workaround, it is sometimes possible for your code to attach the direct dependency so that it will behave right with regard to its secondary dependencies. If so, make sure the attachment happens every time your package is loaded (e.g. by calling `library(depname)` inside your package's `.onLoad` method) and not just when your package is attached (e.g. by putting it in Depends).
-
-When you do need to load or attach a dependency, it is probably better to do it inside your package's `.onLoad` method rather than in individual functions, but this isn't ironclad. To only load, use `requireNamespace(pkgname, quietly = TRUE)` -- this will make it available inside your package's namespace while avoiding (most) annoying load-time messages and not disturbing the user's search path. To attach when you really can't avoid it, declare the dependency in `Depends` and *also* attach it using `library(pkgname)` in your `.onLoad` method.
-
-Note that scripts in `inst/` are considered to be sample code rather than part of the package namespace, so it is acceptable for them to explicitly attach packages using `library()`. You may also see code that uses `require(pkgname)`; this is just like `library`, but returns FALSE instead of erroring if package load fails. It is OK in scripts in `inst/` that can do, *and do*, something useful when a dependency is missing, but if it is used as `if (!require(pkg)) { stop(...) }` then replace it with `library(pkg)`.
-
-If you think your package needs to load or attach code for any reason, please note why in your pull request description and be prepared for questions about it during code review. If your reviewers can think of an alternate approach that avoids loading or attaching, they will likely ask you to use it even if it creates extra work for you.
-
-## Installing dependencies: Let the machines do it
-
-In most cases you won't need to think about how dependencies get installed -- just declare them in your package's DESCRIPTION and the installation will be handled automatically by R and devtools during the build process.
-The main exception is when a dependency relies on non-R software that R does not know how to install automatically. For example, rjags relies on JAGS, which might be installed in a different place on every machine. If your dependency falls in this category, you will know (because your CI builds will keep failing until you figure out how to fix it), but the exact details of the fix will differ for every case.
-
-
-
-# Testing with the `testthat` package {#appendix-testthat}
-
-Tests are found in `/tests/testthat/` (for example, `base/utils/tests/testthat/`)
-
-See [http://r-pkgs.had.co.nz/tests.html](http://r-pkgs.had.co.nz/tests.html)
-for details on how to use the testthat package.
-
-## List of Expectations
-
-|Full |Abbreviation|
-|---|----|
-|expect_that(x, is_true()) |expect_true(x)|
-|expect_that(x, is_false()) |expect_false(x)|
-|expect_that(x, is_a(y)) |expect_is(x, y)|
-|expect_that(x, equals(y)) |expect_equal(x, y)|
-|expect_that(x, is_equivalent_to(y)) |expect_equivalent(x, y)|
-|expect_that(x, is_identical_to(y)) |expect_identical(x, y)|
-|expect_that(x, matches(y)) |expect_matches(x, y)|
-|expect_that(x, prints_text(y)) |expect_output(x, y)|
-|expect_that(x, shows_message(y)) |expect_message(x, y)|
-|expect_that(x, gives_warning(y)) |expect_warning(x, y)|
-|expect_that(x, throws_error(y)) |expect_error(x, y)|
-
-## Basic use of the `testthat` package
-
-Create a file called `/tests/testthat.R` with the following contents:
-
-```r
-library(testthat)
-library(mypackage)
-
-test_check("mypackage")
-```
-
-Tests should be placed in `/tests/testthat/test-<name>.R` and look like the following:
-
-```r
-test_that("mathematical operators plus and minus work as expected", {
-  expect_equal(sum(1, 1), 2)
-  expect_equal(sum(-1, -1), -2)
-  expect_equal(sum(1, NA), NA_real_) # sum() propagates missing values
-  expect_error(sum("cat"))
-  set.seed(0)
-  expect_equal(sum(matrix(1:100)), sum(data.frame(1:100)))
-})
-
-test_that("different testing functions work, giving an excuse to demonstrate them", {
-  expect_identical(1, 1)
-  # numeric(1) and integer(1) differ in storage type, so they are
-  # equivalent but not identical:
-  expect_false(identical(numeric(1), integer(1)))
-  expect_equivalent(numeric(1), integer(1))
-  expect_warning(mean('1'))
-  expect_that(mean('1'), gives_warning("argument is not numeric or logical: returning NA"))
-  expect_warning(mean('1'), "argument is not numeric or logical: returning NA")
-  expect_message(message("a"), "a")
-})
-```
-
-## Data for tests
-
-Many of PEcAn’s functions require inputs that are provided as data.
-These can be in the `data` or the `inst/extdata` folders of a package.
-Data that are not package specific should be placed in the `PEcAn.all` (`base/all`) or
-`PEcAn.utils` (`base/utils`) packages.
-
-Some useful conventions:
-
-## Settings
-
-* A generic settings file can be found in the `PEcAn.all` package:
-
-```r
-settings.xml <- system.file("pecan.biocro.xml", package = "PEcAn.BIOCRO")
-settings <- read.settings(settings.xml)
-```
-
-* database settings can be specified, and tests run only if a connection is available
-
-We currently use the following database to run tests against; tests that require access to a database should check `db.exists()` and be skipped if it returns FALSE, to avoid failed tests on systems that do not have the database installed.
-
-```r
-settings$database <- list(userid = "bety",
-                          passwd = "bety",
-                          name = "bety",      # database name
-                          host = "localhost") # server name
-
-test_that("...", {
-  skip_if_not(db.exists(settings$database))
-  ## write tests here
-})
-```
-
-* instructions for installing this database are available on the [VM creation
-  wiki](VM-Creation.md)
-* examples can be found in the PEcAn.DB package (`base/db/tests/testthat/`).
-* Model-specific settings can go in the model-specific module, for
-  example:
-
-```r
-settings.xml <- system.file("extdata/pecan.biocro.xml", package = "PEcAn.BIOCRO")
-settings <- read.settings(settings.xml)
-```
-
-* test-specific settings:
-    - settings text can be specified inline:
-
-    ```
-    settings.text <- "
-
-      nope ## allows bypass of checks in the read.settings functions
-
-      ebifarm.pavi
-      test/
-
-      test/
-
-      bety
-      bety
-      localhost
-      bety
-    "
-    settings <- read.settings(settings.text)
-    ```
-
-    - values in settings can be updated:
-
-    ```r
-    settings <- read.settings(settings.text)
-    settings$outdir <- "/tmp" ## or any other settings
-    ```
-
-## Helper functions for unit tests
-
-* `PEcAn.utils::tryl` returns `FALSE` if a function gives an error
-* `PEcAn.utils::temp.settings` creates a temporary settings file
-* `PEcAn.DB::db.exists` returns `TRUE` if a connection to the database is available
-
-
-
-# `devtools` package {#developer-devtools}
-
-Provides functions to simplify development.
-
-Documentation:
-[The R devtools package](https://devtools.r-lib.org/)
-
-```r
-load_all("pkg")
-document("pkg")
-test("pkg")
-install("pkg")
-build("pkg")
-```
-
-Other tips for devtools (from the documentation):
-
-* Adding the following to your `~/.Rprofile` will load devtools when
-running R in interactive mode:
-
-```r
-# load devtools by default
-if (interactive()) {
-  suppressMessages(require(devtools))
-}
-```
-
-* Adding the following to your `.Rpackages` will allow devtools to recognize a package by its folder name rather than its full path:
-
-```r
-# in this example, devhome is the pecan trunk directory
-devhome <- "/home/dlebauer/R-dev/pecandev/"
-list(
-  default = function(x) {
-    file.path(devhome, x, x)
-  },
-  "utils" = paste(devhome, "utils", sep = ""),
-  "common" = paste(devhome, "common", sep = ""),
-  "all" = paste(devhome, "all", sep = ""),
-  "ed" = paste(devhome, "models/ed", sep = ""),
-  "uncertainty" = paste(devhome, "modules/uncertainty", sep = ""),
-  "meta.analysis" = paste(devhome, "modules/meta.analysis", sep = ""),
-  "db" = paste(devhome, "db", sep = "")
-)
-```
-
-Now devtools can take `pkg` as an argument instead of `/path/to/pkg/`,
-so you can use `build("pkg")` instead of `build("/path/to/pkg/")`.
-
-
-
-# `singularity` {#models_singularity}
-
-Running a model using singularity.
-
-*This is work in progress.*
-
-This assumes you have [singularity](https://sylabs.io/singularity/) already installed.
-
-This will work on a Linux machine (x86_64).
-
-First make sure you have all the data files:
-
-<details>
-<summary>bash script to install required files (click to expand)</summary>
-
-```bash
-#!/bin/bash
-
-if [ ! -e sites ]; then
-  curl -s -o sites.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/sites.tgz
-  tar zxf sites.tgz
-  sed -i -e "s#/home/kooper/Projects/EBI#/data/sites#" sites/*/ED_MET_DRIVER_HEADER
-  rm sites.tgz
-fi
-
-if [ ! -e inputs ]; then
-  curl -s -o inputs.tgz http://isda.ncsa.illinois.edu/~kooper/EBI/inputs.tgz
-  tar zxf inputs.tgz
-  rm inputs.tgz
-fi
-
-if [ ! -e testrun.s83 ]; then
-  curl -s -o testrun.s83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/testrun.s83.zip
-  unzip -q testrun.s83.zip
-  sed -i -e "s#/home/pecan#/data#" testrun.s83/ED2IN
-  rm testrun.s83.zip
-fi
-
-if [ ! -e ${HOME}/sites/Santarem_Km83 ]; then
-  curl -s -o Santarem_Km83.zip http://isda.ncsa.illinois.edu/~kooper/EBI/Santarem_Km83.zip
-  unzip -q -d sites Santarem_Km83.zip
-  sed -i -e "s#/home/pecan#/data#" sites/Santarem_Km83/ED_MET_DRIVER_HEADER
-  rm Santarem_Km83.zip
-fi
-```
-
-</details>
-
-Next edit the ED2IN file in testrun.s83:
-
-ED2IN file (click to expand) -``` -!==========================================================================================! -!==========================================================================================! -! ED2IN . ! -! ! -! This is the file that contains the variables that define how ED is to be run. There ! -! is some brief information about the variables here. ! -!------------------------------------------------------------------------------------------! -$ED_NL - - !----- Simulation title (64 characters). -----------------------------------------------! - NL%EXPNME = 'ED version 2.1 test' - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! Type of run: ! - ! INITIAL -- Starts a new run, that can be based on a previous run (restart/history), ! - ! but then it will use only the biomass and soil carbon information. ! - ! HISTORY -- Resumes a simulation from the last history. This is different from ! - ! initial in the sense that exactly the same information written in the ! - ! history will be used here. ! - !---------------------------------------------------------------------------------------! - NL%RUNTYPE = 'INITIAL' - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! Start of simulation. Information must be given in UTC time. ! - !---------------------------------------------------------------------------------------! - NL%IMONTHA = 01 - NL%IDATEA = 01 - NL%IYEARA = 2001 - NL%ITIMEA = 0000 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! End of simulation. Information must be given in UTC time. ! - !---------------------------------------------------------------------------------------! - NL%IMONTHZ = 01 - NL%IDATEZ = 01 - NL%IYEARZ = 2002 - NL%ITIMEZ = 0000 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! DTLSM -- Time step to integrate photosynthesis, and the maximum time step for ! - ! integration of energy and water budgets (units: seconds). Notice that the ! - ! model will take steps shorter than this if this is too coarse and could ! - ! lead to loss of accuracy or unrealistic results in the biophysics. ! - ! Recommended values are < 60 seconds if INTEGRATION_SCHEME is 0, and 240-900 ! - ! seconds otherwise. ! - ! RADFRQ -- Time step to integrate radiation, in seconds. This must be an integer ! - ! multiple of DTLSM, and we recommend it to be exactly the same as DTLSM. ! - !---------------------------------------------------------------------------------------! - NL%DTLSM = 600. - NL%RADFRQ = 600. - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used in case the user wants to run a regional run. ! - ! ! - ! N_ED_REGION -- number of regions for which you want to run ED. This can be set to ! - ! zero provided that N_POI is not... ! - ! GRID_TYPE -- which kind of grid to run: ! - ! 0. Longitude/latitude grid ! - ! 1. 
Polar-stereographic ! - !---------------------------------------------------------------------------------------! - NL%N_ED_REGION = 0 - NL%GRID_TYPE = 1 - - !------------------------------------------------------------------------------------! - ! The following variables are used only when GRID_TYPE is set to 0. You must ! - ! provide one value for each grid, except otherwise noted. ! - ! ! - ! GRID_RES -- Grid resolution, in degrees (first grid only, the other grids ! - ! resolution will be defined by NSTRATX/NSTRATY). ! - ! ED_REG_LATMIN -- Southernmost point of each region. ! - ! ED_REG_LATMAX -- Northernmost point of each region. ! - ! ED_REG_LONMIN -- Westernmost point of each region. ! - ! ED_REG_LONMAX -- Easternmost point of each region. ! - !------------------------------------------------------------------------------------! - NL%GRID_RES = 1.0 - NL%ED_REG_LATMIN = -12.0, -7.5, 10.0, -6.0 - NL%ED_REG_LATMAX = 1.0, -3.5, 15.0, -1.0 - NL%ED_REG_LONMIN = -66.0,-58.5, 70.0, -63.0 - NL%ED_REG_LONMAX = -49.0,-54.5, 35.0, -53.0 - !------------------------------------------------------------------------------------! - - - - !------------------------------------------------------------------------------------! - ! The following variables are used only when GRID_TYPE is set to 1. ! - ! ! - ! NNXP -- number of points in the X direction. One value for each grid. ! - ! NNYP -- number of points in the Y direction. One value for each grid. ! - ! DELTAX -- grid resolution in the X direction, near the grid pole. Units: [ m]. ! - ! this value is used to define the first grid only, other grids are ! - ! defined using NNSTRATX. ! - ! DELTAY -- grid resolution in the Y direction, near the grid pole. Units: [ m]. ! - ! this value is used to define the first grid only, other grids are ! - ! defined using NNSTRATX. Unless you are running some specific tests, ! - ! both DELTAX and DELTAY should be the same. ! - ! POLELAT -- Latitude of the pole point. Set this close to CENTLAT for a more ! - ! traditional "square" domain. One value for all grids. ! - ! POLELON -- Longitude of the pole point. Set this close to CENTLON for a more ! - ! traditional "square" domain. One value for all grids. ! - ! CENTLAT -- Latitude of the central point. One value for each grid. ! - ! CENTLON -- Longitude of the central point. One value for each grid. ! - !------------------------------------------------------------------------------------! - NL%NNXP = 110 - NL%NNYP = 70 - NL%DELTAX = 60000. - NL%DELTAY = 60000. - NL%POLELAT = -2.609075 - NL%POLELON = -60.2093 - NL%CENTLAT = -2.609075 - NL%CENTLON = -60.2093 - !------------------------------------------------------------------------------------! - - - - !------------------------------------------------------------------------------------! - ! Nest ratios. These values are used by both GRID_TYPE=0 and GRID_TYPE=1. ! - ! NSTRATX -- this is will divide the values given by DELTAX or GRID_RES for the ! - ! nested grids. The first value should be always one. ! - ! NSTRATY -- this is will divide the values given by DELTAY or GRID_RES for the ! - ! nested grids. The first value should be always one, and this must ! - ! be always the same as NSTRATX when GRID_TYPE = 0, and this is also ! - ! strongly recommended for when GRID_TYPE = 1. ! - !------------------------------------------------------------------------------------! - NL%NSTRATX = 1,4 - NL%NSTRATY = 1,4 - !------------------------------------------------------------------------------------! 
- !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used to define single polygon of interest runs, and ! - ! they are ignored when N_ED_REGION = 0. ! - ! ! - ! N_POI -- number of polygons of interest (POIs). This can be zero as long as ! - ! N_ED_REGION is not. ! - ! POI_LAT -- list of latitudes of each POI. ! - ! POI_LON -- list of longitudes of each POI. ! - ! POI_RES -- grid resolution of each POI (degrees). This is used only to define the ! - ! soil types ! - !---------------------------------------------------------------------------------------! - NL%N_POI = 1 - NL%POI_LAT = -3.018 - NL%POI_LON = -54.971 - NL%POI_RES = 1.00 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! LOADMETH -- Load balancing method. This is used only in regional runs run in ! - ! parallel. ! - ! 0. Let ED decide the best way of splitting the polygons. Commonest ! - ! option and default. ! - ! 1. One of the methods to split polygons based on their previous ! - ! work load. Developpers only. ! - ! 2. Try to load an equal number of SITES per node. Useful for when ! - ! total number of polygon is the same as the total number of cores. ! - ! 3. Another method to split polygons based on their previous work load. ! - ! Developpers only. ! - !---------------------------------------------------------------------------------------! - NL%LOADMETH = 0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! ED2 File output. For all the variables 0 means no output and 3 means HDF5 output. ! - ! ! - ! IFOUTPUT -- Fast analysis. These are mostly polygon-level averages, and the time ! - ! interval between files is determined by FRQANL ! - ! IDOUTPUT -- Daily means (one file per day) ! - ! IMOUTPUT -- Monthly means (one file per month) ! - ! IQOUTPUT -- Monthly means of the diurnal cycle (one file per month). The number ! - ! of points for the diurnal cycle is 86400 / FRQANL ! - ! IYOUTPUT -- Annual output. ! - ! ITOUTPUT -- Instantaneous fluxes, mostly polygon-level variables, one file per year. ! - ! ISOUTPUT -- restart file, for HISTORY runs. The time interval between files is ! - ! determined by FRQHIS ! - !---------------------------------------------------------------------------------------! - NL%IFOUTPUT = 0 - NL%IDOUTPUT = 0 - NL%IMOUTPUT = 0 - NL%IQOUTPUT = 3 - NL%IYOUTPUT = 0 - NL%ITOUTPUT = 0 - NL%ISOUTPUT = 3 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! ATTACH_METADATA -- Flag for attaching metadata to HDF datasets. Attaching metadata ! - ! will aid new users in quickly identifying dataset descriptions but ! - ! will compromise I/O performance significantly. ! - ! 0 = no metadata, 1 = attach metadata ! - !---------------------------------------------------------------------------------------! - NL%ATTACH_METADATA = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! 
UNITFAST -- The following variables control the units for FRQFAST/OUTFAST, and ! - ! UNITSTATE FRQSTATE/OUTSTATE, respectively. Possible values are: ! - ! 0. Seconds; ! - ! 1. Days; ! - ! 2. Calendar months (variable) ! - ! 3. Calendar years (variable) ! - ! ! - ! N.B.: 1. In case OUTFAST/OUTSTATE are set to special flags (-1 or -2) ! - ! UNITFAST/UNITSTATE will be ignored for them. ! - ! 2. In case IQOUTPUT is set to 3, then UNITFAST has to be 0. ! - ! ! - !---------------------------------------------------------------------------------------! - NL%UNITFAST = 0 - NL%UNITSTATE = 3 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! OUTFAST/OUTSTATE -- these control the number of times per file. ! - ! 0. Each time gets its own file ! - ! -1. One file per day ! - ! -2. One file per month ! - ! > 0. Multiple timepoints can be recorded to a single file reducing ! - ! the number of files and i/o time in post-processing. ! - ! Multiple timepoints should not be used in the history files ! - ! if you intend to use these for HISTORY runs. ! - !---------------------------------------------------------------------------------------! - NL%OUTFAST = 1. - NL%OUTSTATE = 1. - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! ICLOBBER -- What to do in case the model finds a file that it was supposed the ! - ! written? 0 = stop the run, 1 = overwrite without warning. ! - ! FRQFAST -- time interval between analysis files, units defined by UNITFAST. ! - ! FRQSTATE -- time interval between history files, units defined by UNITSTATE. ! - !---------------------------------------------------------------------------------------! - NL%ICLOBBER = 1 - NL%FRQFAST = 3600. - NL%FRQSTATE = 1. - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! FFILOUT -- Path and prefix for analysis files (all but history/restart). ! - ! SFILOUT -- Path and prefix for history files. ! - !---------------------------------------------------------------------------------------! - NL%FFILOUT = '/data/testrun.s83/analy/ts83' - NL%SFILOUT = '/data/testrun.s83/histo/ts83' - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! IED_INIT_MODE -- This controls how the plant community and soil carbon pools are ! - ! initialised. ! - ! ! - ! -1. Start from a true bare ground run, or an absolute desert run. This will ! - ! never grow any plant. ! - ! 0. Start from near-bare ground (only a few seedlings from each PFT to be included ! - ! in this run). ! - ! 1. This will use history files written by ED-1.0. It will grab the ecosystem ! - ! state (like biomass, LAI, plant density, etc.), but it will start the ! - ! thermodynamic state as a new simulation. ! - ! 2. Same as 1, but it uses history files from ED-2.0 without multiple sites, and ! - ! with the old PFT numbers. ! - ! 3. Same as 1, but using history files from ED-2.0 with multiple sites and ! - ! TOPMODEL hydrology. ! - ! 4. Same as 1, but using ED2.1 H5 history/state files that take the form: ! - ! 'dir/prefix-gxx.h5' ! - ! 
Initialization files MUST end with -gxx.h5 where xx is a two digit integer ! - ! grid number. Each grid has its own initialization file. As an example, if a ! - ! user has two files to initialize their grids with: ! - ! example_file_init-g01.h5 and example_file_init-g02.h5 ! - ! NL%SFILIN = 'example_file_init' ! - ! ! - ! 5. This is similar to option 4, except that you may provide several files ! - ! (including a mix of regional and POI runs, each file ending at a different ! - ! date). This will not check date nor grid structure, it will simply read all ! - ! polygons and match the nearest neighbour to each polygon of your run. SFILIN ! - ! must have the directory common to all history files that are sought to be used,! - ! up to the last character the files have in common. For example if your files ! - ! are ! - ! /mypath/P0001-S-2000-01-01-000000-g01.h5, ! - ! /mypath/P0002-S-1966-01-01-000000-g02.h5, ! - ! ... ! - ! /mypath/P1000-S-1687-01-01-000000-g01.h5: ! - ! NL%SFILIN = '/mypath/P' ! - ! ! - ! 6 - Initialize with ED-2 style files without multiple sites, exactly like option ! - ! 2, except that the PFT types are preserved. ! - !---------------------------------------------------------------------------------------! - NL%IED_INIT_MODE = 6 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! EDRES -- Expected input resolution for ED2.0 files. This is not used unless ! - ! IED_INIT_MODE = 3. ! - !---------------------------------------------------------------------------------------! - NL%EDRES = 1.0 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! SFILIN -- The meaning and the size of this variable depends on the type of run, set ! - ! at variable NL%RUNTYPE. ! - ! ! - ! 1. INITIAL. Then this is the path+prefix of the previous ecosystem state. This has ! - ! dimension of the number of grids so you can initialize each grid with a ! - ! different dataset. In case only one path+prefix is given, the same will ! - ! be used for every grid. Only some ecosystem variables will be set up ! - ! here, and the initial condition will be in thermodynamic equilibrium. ! - ! ! - ! 2. HISTORY. This is the path+prefix of the history file that will be used. Only the ! - ! path+prefix will be used, as the history for every grid must have come ! - ! from the same simulation. ! - !---------------------------------------------------------------------------------------! - NL%SFILIN = '/data/sites/Santarem_Km83/s83_default.' - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! History file information. These variables are used to continue a simulation from ! - ! a point other than the beginning. Time must be in UTC. ! - ! ! - ! IMONTHH -- the time of the history file. This is the only place you need to change ! - ! IDATEH dates for a HISTORY run. You may change IMONTHZ and related in case you ! - ! IYEARH want to extend the run, but yo should NOT change IMONTHA and related. ! - ! ITIMEH ! - !---------------------------------------------------------------------------------------! 
- NL%ITIMEH = 0000 - NL%IDATEH = 01 - NL%IMONTHH = 05 - NL%IYEARH = 2001 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! NZG - number of soil layers. One value for all grids. ! - ! NZS - maximum number of snow/water pounding layers. This is used only for ! - ! snow, if only liquid water is standing, the water will be all collapsed ! - ! into a single layer, so if you are running for places where it doesn't snow ! - ! a lot, leave this set to 1. One value for all grids. ! - !---------------------------------------------------------------------------------------! - NL%NZG = 16 - NL%NZS = 4 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! ISOILFLG -- this controls which soil type input you want to use. ! - ! 1. Read in from a dataset I will provide in the SOIL_DATABASE variable a ! - ! few lines below. ! - ! below. ! - ! 2. No data available, I will use constant values I will provide in ! - ! NSLCON or by prescribing the fraction of sand and clay (see SLXSAND ! - ! and SLXCLAY). ! - !---------------------------------------------------------------------------------------! - NL%ISOILFLG = 2 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! NSLCON -- ED-2 Soil classes that the model will use when ISOILFLG is set to 2. ! - ! Possible values are: ! - !---------------------------------------------------------------------------------------! - ! 1 -- sand | 7 -- silty clay loam | 13 -- bedrock ! - ! 2 -- loamy sand | 8 -- clayey loam | 14 -- silt ! - ! 3 -- sandy loam | 9 -- sandy clay | 15 -- heavy clay ! - ! 4 -- silt loam | 10 -- silty clay | 16 -- clayey sand ! - ! 5 -- loam | 11 -- clay | 17 -- clayey silt ! - ! 6 -- sandy clay loam | 12 -- peat ! - !---------------------------------------------------------------------------------------! - NL%NSLCON = 11 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! ISOILCOL -- LEAF-3 and ED-2 soil colour classes that the model will use when ISOILFLG ! - ! is set to 2. Soil classes are from 1 to 20 (1 = lightest; 20 = darkest). ! - ! The values are the same as CLM-4.0. The table is the albedo for visible ! - ! and near infra-red. ! - !---------------------------------------------------------------------------------------! - ! ! - ! |-----------------------------------------------------------------------| ! - ! | | Dry soil | Saturated | | Dry soil | Saturated | ! - ! | Class |-------------+-------------| Class +-------------+-------------| ! - ! | | VIS | NIR | VIS | NIR | | VIS | NIR | VIS | NIR | ! - ! |-------+------+------+------+------+-------+------+------+------+------| ! - ! | 1 | 0.36 | 0.61 | 0.25 | 0.50 | 11 | 0.24 | 0.37 | 0.13 | 0.26 | ! - ! | 2 | 0.34 | 0.57 | 0.23 | 0.46 | 12 | 0.23 | 0.35 | 0.12 | 0.24 | ! - ! | 3 | 0.32 | 0.53 | 0.21 | 0.42 | 13 | 0.22 | 0.33 | 0.11 | 0.22 | ! - ! | 4 | 0.31 | 0.51 | 0.20 | 0.40 | 14 | 0.20 | 0.31 | 0.10 | 0.20 | ! - ! | 5 | 0.30 | 0.49 | 0.19 | 0.38 | 15 | 0.18 | 0.29 | 0.09 | 0.18 | ! - ! 
| 6 | 0.29 | 0.48 | 0.18 | 0.36 | 16 | 0.16 | 0.27 | 0.08 | 0.16 | ! - ! | 7 | 0.28 | 0.45 | 0.17 | 0.34 | 17 | 0.14 | 0.25 | 0.07 | 0.14 | ! - ! | 8 | 0.27 | 0.43 | 0.16 | 0.32 | 18 | 0.12 | 0.23 | 0.06 | 0.12 | ! - ! | 9 | 0.26 | 0.41 | 0.15 | 0.30 | 19 | 0.10 | 0.21 | 0.05 | 0.10 | ! - ! | 10 | 0.25 | 0.39 | 0.14 | 0.28 | 20 | 0.08 | 0.16 | 0.04 | 0.08 | ! - ! |-----------------------------------------------------------------------| ! - ! ! - ! Soil type 21 is a special case in which we use the albedo method that used to be ! - ! the default in ED-2.1. ! - !---------------------------------------------------------------------------------------! - NL%ISOILCOL = 21 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! These variables are used to define the soil properties when you don't want to use ! - ! the standard soil classes. ! - ! ! - ! SLXCLAY -- Prescribed fraction of clay [0-1] ! - ! SLXSAND -- Prescribed fraction of sand [0-1]. ! - ! ! - ! They are used only when ISOILFLG is 2, both values are between 0. and 1., and ! - ! theira sum doesn't exceed 1. Otherwise standard ED values will be used instead. ! - !---------------------------------------------------------------------------------------! - NL%SLXCLAY = 0.59 - NL%SLXSAND = 0.39 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! Soil grid and initial conditions if no file is provided: ! - ! ! - ! SLZ - soil depth in m. Values must be negative and go from the deepest layer to ! - ! the top. ! - ! SLMSTR - this is the initial soil moisture, now given as the soil moisture index. ! - ! Values can be fraction, in which case they will be linearly interpolated ! - ! between the special points (e.g. 0.5 will put soil moisture half way ! - ! between the wilting point and field capacity). ! - ! -1 = dry air soil moisture ! - ! 0 = wilting point ! - ! 1 = field capacity ! - ! 2 = porosity (saturation) ! - ! STGOFF - initial temperature offset (soil temperature = air temperature + offset) ! - !---------------------------------------------------------------------------------------! - NL%SLZ = -8.000, -6.959, -5.995, -5.108, -4.296, -3.560, -2.897, -2.307, - -1.789, -1.340, -0.961, -0.648, -0.400, -0.215, -0.089, -0.020 - NL%SLMSTR = 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, - 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00 - NL%STGOFF = 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, - 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! Input databases ! - ! VEG_DATABASE -- vegetation database, used only to determine the land/water mask. ! - ! Fill with the path and the prefix. ! - ! SOIL_DATABASE -- soil database, used to determine the soil type. Fill with the ! - ! path and the prefix. ! - ! LU_DATABASE -- land-use change disturbance rates database, used only when ! - ! IANTH_DISTURB is set to 1. Fill with the path and the prefix. ! - ! PLANTATION_FILE -- plantation fraction file. In case you don't have such a file or ! - ! you do not want to use it, you must leave this variable empty: ! - ! (NL%PLANTATION_FILE = '' ! - ! 
THSUMS_DATABASE -- input directory with dataset to initialise chilling degrees and ! - ! growing degree days, which is used to drive the cold-deciduous ! - ! phenology (you must always provide this, even when your PFTs are ! - ! not cold deciduous). ! - ! ED_MET_DRIVER_DB -- File containing information for meteorological driver ! - ! instructions (the "header" file). ! - ! SOILSTATE_DB -- Dataset in case you want to provide the initial conditions of ! - ! soil temperature and moisture. ! - ! SOILDEPTH_DB -- Dataset in case you want to read in soil depth information. ! - !---------------------------------------------------------------------------------------! - NL%VEG_DATABASE = '/data/oge2OLD/OGE2_' - NL%SOIL_DATABASE = '/data/faoOLD/FAO_' - NL%LU_DATABASE = '/data/ed_inputs/glu/' - NL%PLANTATION_FILE = '' - NL%THSUMS_DATABASE = '/data/ed_inputs/' - NL%ED_MET_DRIVER_DB = '/data/sites/Santarem_Km83/ED_MET_DRIVER_HEADER' - NL%SOILSTATE_DB = '' - NL%SOILDEPTH_DB = '' - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! ISOILSTATEINIT -- Variable controlling how to initialise the soil temperature and ! - ! moisture ! - ! 0. Use SLMSTR and STGOFF. ! - ! 1. Read from SOILSTATE_DB. ! - ! ISOILDEPTHFLG -- Variable controlling how to initialise soil depth ! - ! 0. Constant, always defined by the first SLZ layer. ! - ! 1. Read from SOILDEPTH_DB. ! - !---------------------------------------------------------------------------------------! - NL%ISOILSTATEINIT = 0 - NL%ISOILDEPTHFLG = 0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! ISOILBC -- This controls the soil moisture boundary condition at the bottom. If ! - ! unsure, use 0 for short-term simulations (couple of days), and 1 for long- ! - ! -term simulations (months to years). ! - ! 0. Bedrock. Flux from the bottom of the bottommost layer is set to 0. ! - ! 1. Gravitational flow. The flux from the bottom of the bottommost layer ! - ! is due to gradient of height only. ! - ! 2. Super drainage. Soil moisture of the ficticious layer beneath the ! - ! bottom is always at dry air soil moisture. ! - ! 3. Half-way. Assume that the fictious layer beneath the bottom is always ! - ! at field capacity. ! - ! 4. Aquifer. Soil moisture of the ficticious layer beneath the bottom is ! - ! always at saturation. ! - !---------------------------------------------------------------------------------------! - NL%ISOILBC = 1 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! IVEGT_DYNAMICS -- The vegetation dynamics scheme. ! - ! 0. No vegetation dynamics, the initial state will be preserved, ! - ! even though the model will compute the potential values. This ! - ! option is useful for theoretical simulations only. ! - ! 1. Normal ED vegetation dynamics (Moorcroft et al 2001). ! - ! The normal option for almost any simulation. ! - !---------------------------------------------------------------------------------------! - NL%IVEGT_DYNAMICS = 1 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! 
IBIGLEAF -- Do you want to run ED as a 'big leaf' model? ! - ! 0. No, use the standard size- and age-structure (Moorcroft et al. 2001) ! - ! This is the recommended method for most applications. ! - ! 1. 'big leaf' ED: this will have no horizontal or vertical hetero- ! - ! geneities; 1 patch per PFT and 1 cohort per patch; no vertical ! - ! growth, recruits will 'appear' instantaneously at maximum height. ! - ! ! - ! N.B. if you set IBIGLEAF to 1, you MUST turn off the crown model (CROWN_MOD = 0) ! - !---------------------------------------------------------------------------------------! - NL%IBIGLEAF = 0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! INTEGRATION_SCHEME -- The biophysics integration scheme. ! - ! 0. Euler step. The fastest, but it doesn't estimate ! - ! errors. ! - ! 1. Fourth-order Runge-Kutta method. ED-2.1 default method ! - ! 2. Heun's method (a second-order Runge-Kutta). ! - ! 3. Hybrid Stepping (BDF2 implicit step for the canopy air and ! - ! leaf temp, forward Euler for else, under development). ! - !---------------------------------------------------------------------------------------! - NL%INTEGRATION_SCHEME = 1 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! RK4_TOLERANCE -- This is the relative tolerance for Runge-Kutta or Heun's ! - ! integration. Larger numbers will make runs go faster, at the ! - ! expense of being less accurate. Currently the valid range is ! - ! between 1.e-7 and 1.e-1, but recommended values are between 1.e-4 ! - ! and 1.e-2. ! - !---------------------------------------------------------------------------------------! - NL%RK4_TOLERANCE = 0.01 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! IBRANCH_THERMO -- This determines whether branches should be included in the ! - ! vegetation thermodynamics and radiation or not. ! - ! 0. No branches in energy/radiation (ED-2.1 default); ! - ! 1. Branches are accounted in the energy and radiation. Branchwood ! - ! and leaf are treated separately in the canopy radiation scheme, ! - ! but solved as a single pool in the biophysics integration. ! - ! 2. Similar to 1, but branches are treated as separate pools in the ! - ! biophysics (thus doubling the number of prognostic variables). ! - !---------------------------------------------------------------------------------------! - NL%IBRANCH_THERMO = 1 - !---------------------------------------------------------------------------------------! - - !---------------------------------------------------------------------------------------! - ! IPHYSIOL -- This variable will determine the functional form that will control how ! - ! the various parameters will vary with temperature, and how the CO2 ! - ! compensation point for gross photosynthesis (Gamma*) will be found. ! - ! Options are: ! - ! ! - ! 0 -- Original ED-2.1, we use the "Arrhenius" function as in Foley et al. (1996) and ! - ! Moorcroft et al. (2001). Gamma* is found using the parameters for tau as in ! - ! Foley et al. (1996). ! - ! 1 -- Modified ED-2.1. In this case Gamma* is found using the Michaelis-Mentel ! - ! 
coefficients for CO2 and O2, as in Farquhar et al. (1980) and in CLM. ! - ! 2 -- Collatz et al. (1991). We use the power (Q10) equations, with Collatz et al. ! - ! parameters for compensation point, and the Michaelis-Mentel coefficients. The ! - ! correction for high and low temperatures are the same as in Moorcroft et al. ! - ! (2001). ! - ! 3 -- Same as 2, except that we find Gamma* as in Farquhar et al. (1980) and in CLM. ! - !---------------------------------------------------------------------------------------! - NL%IPHYSIOL = 2 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! IALLOM -- Which allometry to use (this mostly affects tropical PFTs. Temperate PFTs ! - ! will use the new root allometry and the maximum crown area if IALLOM is set ! - ! to 1 or 2). ! - ! 0. Original ED-2.1 ! - ! 1. a. The coefficients for structural biomass are set so the total AGB ! - ! is similar to Baker et al. (2004), equation 2. Balive is the ! - ! default ED-2.1; ! - ! b. Experimental root depth that makes canopy trees to have root depths ! - ! of 5m and grasses/seedlings at 0.5 to have root depth of 0.5 m. ! - ! c. Crown area defined as in Poorter et al. (2006), imposing maximum ! - ! crown area ! - ! 2. Similar to 1, but with a few extra changes. ! - ! a. Height -> DBH allometry as in Poorter et al. (2006) ! - ! b. Balive is retuned, using a few leaf biomass allometric equations for ! - ! a few genuses in Costa Rica. References: ! - ! Cole and Ewel (2006), and Calvo Alvarado et al. (2008). ! - !---------------------------------------------------------------------------------------! - NL%IALLOM = 2 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! IGRASS -- This controls the dynamics and growth calculation for grasses. A new ! - ! grass scheme is now available where bdead = 0, height is a function of bleaf! - ! and growth happens daily. ALS (3/3/12) ! - ! 0: grasses behave like trees as in ED2.1 (old scheme) ! - ! ! - ! 1: new grass scheme as described above ! - !---------------------------------------------------------------------------------------! - NL%IGRASS = 0 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! IPHEN_SCHEME -- It controls the phenology scheme. Even within each scheme, the ! - ! actual phenology will be different depending on the PFT. ! - ! ! - ! -1: grasses - evergreen; ! - ! tropical - evergreen; ! - ! conifers - evergreen; ! - ! hardwoods - cold-deciduous (Botta et al.); ! - ! ! - ! 0: grasses - drought-deciduous (old scheme); ! - ! tropical - drought-deciduous (old scheme); ! - ! conifers - evergreen; ! - ! hardwoods - cold-deciduous; ! - ! ! - ! 1: prescribed phenology ! - ! ! - ! 2: grasses - drought-deciduous (new scheme); ! - ! tropical - drought-deciduous (new scheme); ! - ! conifers - evergreen; ! - ! hardwoods - cold-deciduous; ! - ! ! - ! 3: grasses - drought-deciduous (new scheme); ! - ! tropical - drought-deciduous (light phenology); ! - ! conifers - evergreen; ! - ! hardwoods - cold-deciduous; ! - ! ! - ! Old scheme: plants shed their leaves once instantaneous amount of available water ! - ! becomes less than a critical value. ! - ! 
New scheme: plants shed their leaves once a 10-day running average of available ! - ! water becomes less than a critical value. ! - !---------------------------------------------------------------------------------------! - NL%IPHEN_SCHEME = 2 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! Parameters that control the phenology response to radiation, used only when ! - ! IPHEN_SCHEME = 3. ! - ! ! - ! RADINT -- Intercept ! - ! RADSLP -- Slope. ! - !---------------------------------------------------------------------------------------! - NL%RADINT = -11.3868 - NL%RADSLP = 0.0824 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! REPRO_SCHEME -- This controls plant reproduction and dispersal. ! - ! 0. Reproduction off. Useful for very short runs only. ! - ! 1. Original reproduction scheme. Seeds are exchanged between ! - ! patches belonging to the same site, but they can't go outside ! - ! their original site. ! - ! 2. Similar to 1, but seeds are exchanged between patches belonging ! - ! to the same polygon, even if they are in different sites. They ! - ! can't go outside their original polygon, though. This is the ! - ! same as option 1 if there is only one site per polygon. ! - ! 3. Similar to 2, but recruits will only be formed if their phenology ! - ! status would be "leaves fully flushed". This only matters for ! - ! drought deciduous plants. This option is for testing purposes ! - ! only, think 50 times before using it... ! - !---------------------------------------------------------------------------------------! - NL%REPRO_SCHEME = 2 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! LAPSE_SCHEME -- This specifies the met lapse rate scheme: ! - ! 0. No lapse rates ! - ! 1. phenomenological, global ! - ! 2. phenomenological, local (not yet implemented) ! - ! 3. mechanistic (not yet implemented) ! - !---------------------------------------------------------------------------------------! - NL%LAPSE_SCHEME = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! CROWN_MOD -- Specifies how tree crowns are represented in the canopy radiation model, ! - ! and in the turbulence scheme depending on ICANTURB. ! - ! 0. ED1 default, crowns are evenly spread throughout the patch area, and ! - ! cohorts are stacked on the top of each other. ! - ! 1. Dietze (2008) model. Cohorts have a finite radius, and cohorts are ! - ! stacked on the top of each other. ! - !---------------------------------------------------------------------------------------! - NL%CROWN_MOD = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables control the canopy radiation solver. ! - ! ! - ! ICANRAD -- Specifies how canopy radiation is solved. This variable sets both ! - ! shortwave and longwave. ! - ! 0. Two-stream model (Medvigy 2006), with the possibility to apply ! - ! 
finite crown area to direct shortwave radiation. ! - ! 1. Multiple-scattering model (Zhao and Qualls 2005,2006), with the ! - ! possibility to apply finite crown area to all radiation fluxes. ! - ! LTRANS_VIS -- Leaf transmittance for tropical plants - Visible/PAR ! - ! LTRANS_NIR -- Leaf transmittance for tropical plants - Near Infrared ! - ! LREFLECT_VIS -- Leaf reflectance for tropical plants - Visible/PAR ! - ! LREFLECT_NIR -- Leaf reflectance for tropical plants - Near Infrared ! - ! ORIENT_TREE -- Leaf orientation factor for tropical trees. Extremes are: ! - ! -1. All leaves are oriented in the vertical ! - ! 0. Leaf orientation is perfectly random ! - ! 1. All leaves are oriented in the horizontal ! - ! In practice, acceptable values range from -0.4 to 0.6 ! - ! ORIENT_GRASS -- Leaf orientation factor for tropical grasses. Extremes are: ! - ! -1. All leaves are oriented in the vertical ! - ! 0. Leaf orientation is perfectly random ! - ! 1. All leaves are oriented in the horizontal ! - ! In practice, acceptable values range from -0.4 to 0.6 ! - ! CLUMP_TREE -- Clumping factor for tropical trees. Extremes are: ! - ! lim -> 0. Black hole (0 itself is unacceptable) ! - ! 1. Homogeneously spread over the layer (i.e., no clumping) ! - ! CLUMP_GRASS -- Clumping factor for tropical grasses. Extremes are: ! - ! lim -> 0. Black hole (0 itself is unacceptable) ! - ! 1. Homogeneously spread over the layer (i.e., no clumping) ! - !---------------------------------------------------------------------------------------! - NL%ICANRAD = 0 - NL%LTRANS_VIS = 0.050 - NL%LTRANS_NIR = 0.230 - NL%LREFLECT_VIS = 0.100 - NL%LREFLECT_NIR = 0.460 - NL%ORIENT_TREE = 0.100 - NL%ORIENT_GRASS = 0.000 - NL%CLUMP_TREE = 0.800 - NL%CLUMP_GRASS = 1.000 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! DECOMP_SCHEME -- This specifies the dependence of soil decomposition on temperature. ! - ! 0. ED-2.0 default, the original exponential ! - ! 1. Lloyd and Taylor (1994) model ! - ! [[option 1 requires parameters to be set in xml]] ! - !---------------------------------------------------------------------------------------! - NL%DECOMP_SCHEME = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! H2O_PLANT_LIM -- this determines whether plant photosynthesis can be limited by ! - ! soil moisture, the FSW, defined as FSW = Supply / (Demand + Supply). ! - ! ! - ! Demand is always the transpiration rate in case soil moisture is ! - ! not limiting (the psi_0 term times LAI). The supply is determined ! - ! by Kw * nplant * Broot * Available_Water, and the definition of ! - ! available water changes depending on H2O_PLANT_LIM: ! - ! 0. Force FSW = 1 (effectively available water is infinity). ! - ! 1. Available water is the total soil water above wilting point, ! - ! integrated across all layers within the rooting zone. ! - ! 2. Available water is the soil water at field capacity minus ! - ! wilting point, scaled by the so-called wilting factor: ! - ! (psi(k) - (H - z(k)) - psi_wp) / (psi_fc - psi_wp) ! - ! where psi is the matric potential at layer k, z is the layer ! - ! depth, H is the crown height, and psi_fc and psi_wp are the ! - ! matric potentials at field capacity and wilting point. ! 
- !---------------------------------------------------------------------------------------! - NL%H2O_PLANT_LIM = 2 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! The following variables are factors that control photosynthesis and respiration. ! - ! Notice that some of them are relative values whereas others are absolute. ! - ! ! - ! VMFACT_C3 -- Factor multiplying the default Vm0 for C3 plants (1.0 = default). ! - ! VMFACT_C4 -- Factor multiplying the default Vm0 for C4 plants (1.0 = default). ! - ! MPHOTO_TRC3 -- Stomatal slope (M) for tropical C3 plants ! - ! MPHOTO_TEC3 -- Stomatal slope (M) for conifers and temperate C3 plants ! - ! MPHOTO_C4 -- Stomatal slope (M) for C4 plants. ! - ! BPHOTO_BLC3 -- cuticular conductance for broadleaf C3 plants [umol/m2/s] ! - ! BPHOTO_NLC3 -- cuticular conductance for needleleaf C3 plants [umol/m2/s] ! - ! BPHOTO_C4 -- cuticular conductance for C4 plants [umol/m2/s] ! - ! KW_GRASS -- Water conductance for grasses, in m2/yr/kgC_root. This is used only ! - ! when H2O_PLANT_LIM is not 0. ! - ! KW_TREE -- Water conductance for trees, in m2/yr/kgC_root. This is used only ! - ! when H2O_PLANT_LIM is not 0. ! - ! GAMMA_C3 -- The dark respiration factor (gamma) for C3 plants. Subtropical ! - ! conifers will be scaled by GAMMA_C3 * 0.028 / 0.02 ! - ! GAMMA_C4 -- The dark respiration factor (gamma) for C4 plants. ! - ! D0_GRASS -- The transpiration control in gsw (D0) for ALL grasses. ! - ! D0_TREE -- The transpiration control in gsw (D0) for ALL trees. ! - ! ALPHA_C3 -- Quantum yield of ALL C3 plants. This is only applied when ! - ! QUANTUM_EFFICIENCY_T = 0. ! - ! ALPHA_C4 -- Quantum yield of C4 plants. This is always applied. ! - ! KLOWCO2IN -- The coefficient that controls the PEP carboxylase limited rate of ! - ! carboxylation for C4 plants. ! - ! RRFFACT -- Factor multiplying the root respiration factor for ALL PFTs. ! - ! (1.0 = default). ! - ! GROWTHRESP -- The actual growth respiration factor (C3/C4 tropical PFTs only). ! - ! (1.0 = default). ! - ! LWIDTH_GRASS -- Leaf width for grasses, in metres. This controls the leaf boundary ! - ! layer conductance (gbh and gbw). ! - ! LWIDTH_BLTREE -- Leaf width for trees, in metres. This controls the leaf boundary ! - ! layer conductance (gbh and gbw). This is applied to broadleaf trees ! - ! only. ! - ! LWIDTH_NLTREE -- Leaf width for trees, in metres. This controls the leaf boundary ! - ! layer conductance (gbh and gbw). This is applied to conifer trees ! - ! only. ! - ! Q10_C3 -- Q10 factor for C3 plants (used only if IPHYSIOL is set to 2 or 3). ! - ! Q10_C4 -- Q10 factor for C4 plants (used only if IPHYSIOL is set to 2 or 3). ! - !---------------------------------------------------------------------------------------! - NL%VMFACT_C3 = 1 - NL%VMFACT_C4 = 1 - NL%MPHOTO_TRC3 = 9 - NL%MPHOTO_TEC3 = 7.2 - NL%MPHOTO_C4 = 5.2 - NL%BPHOTO_BLC3 = 10000 - NL%BPHOTO_NLC3 = 1000 - NL%BPHOTO_C4 = 10000 - NL%KW_GRASS = 900 - NL%KW_TREE = 600 - NL%GAMMA_C3 = 0.0145 - NL%GAMMA_C4 = 0.035 - NL%D0_GRASS = 0.016 - NL%D0_TREE = 0.016 - NL%ALPHA_C3 = 0.08 - NL%ALPHA_C4 = 0.055 - NL%KLOWCO2IN = 4000 - NL%RRFFACT = 1 - NL%GROWTHRESP = 0.333 - NL%LWIDTH_GRASS = 0.05 - NL%LWIDTH_BLTREE = 0.1 - NL%LWIDTH_NLTREE = 0.05 - NL%Q10_C3 = 2.4 - NL%Q10_C4 = 2.4 - !---------------------------------------------------------------------------------------! 
- - - - - !---------------------------------------------------------------------------------------! - ! THETACRIT -- Leaf drought phenology threshold. The sign matters here: ! - ! >= 0. -- This is the relative soil moisture above the wilting point ! - ! below which the drought-deciduous plants will start shedding ! - ! their leaves ! - ! < 0. -- This is the soil potential in MPa below which the drought- ! - ! -deciduous plants will start shedding their leaves. The wilt- ! - ! ing point is by definition -1.5 MPa, so make sure that the value ! - ! is above -1.5. ! - !---------------------------------------------------------------------------------------! - NL%THETACRIT = -1.20 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! QUANTUM_EFFICIENCY_T -- Which quantum yield model should be used for C3 plants ! - ! 0. Original ED-2.1, quantum efficiency is constant. ! - ! 1. Quantum efficiency varies with temperature following ! - ! Ehleringer (1978) polynomial fit. ! - !---------------------------------------------------------------------------------------! - NL%QUANTUM_EFFICIENCY_T = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! N_PLANT_LIM -- This controls whether plant photosynthesis can be limited by nitrogen. ! - ! 0. No limitation ! - ! 1. ED-2.1 nitrogen limitation model. ! - !---------------------------------------------------------------------------------------! - NL%N_PLANT_LIM = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! N_DECOMP_LIM -- This controls whether decomposition can be limited by nitrogen. ! - ! 0. No limitation ! - ! 1. ED-2.1 nitrogen limitation model. ! - !---------------------------------------------------------------------------------------! - NL%N_DECOMP_LIM = 0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! The following parameters adjust the fire disturbance in the model. ! - ! INCLUDE_FIRE -- Which threshold to use for fires. ! - ! 0. No fires; ! - ! 1. (deprecated) Fire will be triggered with enough biomass and ! - ! integrated ground water depth less than a threshold. Based on ! - ! ED-1, the threshold assumes that the soil is 1 m, so deeper ! - ! soils will need to be much drier to allow fires to happen and ! - ! often will never allow fires. ! - ! 2. Fire will be triggered with enough biomass when the total soil ! - ! water in the top 75 cm falls below a threshold. ! - ! FIRE_PARAMETER -- If fire happens, this will control the intensity of the disturbance ! - ! given the amount of fuel (currently the total above-ground ! - ! biomass). ! - ! SM_FIRE -- This is used only when INCLUDE_FIRE = 2. The sign here matters. ! - ! >= 0. - Minimum relative soil moisture above dry air of the top 1m ! - ! that will prevent fires from happening. ! - ! < 0. - Minimum mean soil moisture potential in MPa of the top 1m ! - ! that will prevent fires from happening. The dry air soil ! - ! potential is defined as -3.1 MPa, so make sure SM_FIRE is ! - ! greater than this value. ! 
- !---------------------------------------------------------------------------------------! - NL%INCLUDE_FIRE = 0 - NL%FIRE_PARAMETER = 0.5 - NL%SM_FIRE = -1.40 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! IANTH_DISTURB -- This flag controls whether to include anthropogenic disturbances ! - ! such as land clearing, abandonment, and logging. ! - ! 0. no anthropogenic disturbance. ! - ! 1. use anthropogenic disturbance dataset. ! - !---------------------------------------------------------------------------------------! - NL%IANTH_DISTURB = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! ICANTURB -- This flag controls the canopy roughness. ! - ! 0. Based on Leuning et al. (1995), wind is computed using the similarity ! - ! theory for the top cohort, and it is extinguished with cumulative ! - ! LAI. If using CROWN_MOD 1 or 2, this will use local LAI and average ! - ! by crown area. ! - ! 1. The default ED-2.1 scheme, except that it uses the zero-plane ! - ! displacement height. ! - ! 2. This uses the method of Massman (1997) using constant drag and no ! - ! sheltering factor. ! - ! 3. This is also based on Massman (1997), but with the option of varying ! - ! the drag and sheltering within the canopy. ! - ! 4. Same as 0, but it finds the ground conductance following the CLM ! - ! technical note (equations 5.98-5.100). ! - !---------------------------------------------------------------------------------------! - NL%ICANTURB = 2 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! ISFCLYRM -- Similarity theory model. The model that computes u*, T*, etc... ! - ! 1. BRAMS default, based on Louis (1979). It uses empirical relations to ! - ! estimate the flux based on the bulk Richardson number ! - ! ! - ! All models below use an iterative method to find z/L, and the only change ! - ! is the functional form of the psi functions. ! - ! ! - ! 2. Oncley and Dudhia (1995) model, based on MM5. ! - ! 3. Beljaars and Holtslag (1991) model. Similar to 2, but it uses an alternative ! - ! method for the stable case that mixes more than the OD95. ! - ! 4. CLM (2004). Similar to 2 and 3, but they have special functions to deal with ! - ! very stable and very unstable cases. ! - !---------------------------------------------------------------------------------------! - NL%ISFCLYRM = 3 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! IED_GRNDVAP -- Methods to find the ground -> canopy conductance. ! - ! 0. Modified Lee Pielke (1992), adding field capacity, but using the beta factor ! - ! without the square, like in Noilhan and Planton (1989). This is the closest ! - ! to the original ED-2.0 and LEAF-3, and it is also the recommended one. ! - ! 1. Test # 1 of Mahfouf and Noilhan (1991) ! - ! 2. Test # 2 of Mahfouf and Noilhan (1991) ! - ! 3. Test # 3 of Mahfouf and Noilhan (1991) ! - ! 4. Test # 4 of Mahfouf and Noilhan (1991) ! - ! 5. Combination of test #1 (alpha) and test #2 (soil resistance). ! - ! 
In all cases the beta term is modified so it approaches zero as soil moisture goes ! - ! to dry air soil. ! - !---------------------------------------------------------------------------------------! - NL%IED_GRNDVAP = 0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used to control the similarity theory model. For the ! - ! meaning of these parameters, check Beljaars and Holtslag (1991). ! - ! GAMM -- gamma coefficient for momentum, unstable case (dimensionless) ! - ! Ignored when ISTAR = 1 ! - ! GAMH -- gamma coefficient for heat, unstable case (dimensionless) ! - ! Ignored when ISTAR = 1 ! - ! TPRANDTL -- Turbulent Prandtl number ! - ! Ignored when ISTAR = 1 ! - ! RIBMAX -- maximum bulk Richardson number. ! - ! LEAF_MAXWHC -- Maximum water that can be intercepted by leaves, in kg/m2leaf. ! - !---------------------------------------------------------------------------------------! - NL%GAMM = 13.0 - NL%GAMH = 13.0 - NL%TPRANDTL = 0.74 - NL%RIBMAX = 0.50 - NL%LEAF_MAXWHC = 0.11 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! IPERCOL -- This controls percolation and infiltration. ! - ! 0. Default method. Assumes constant soil conductivity; for the ! - ! temporary surface water, it sheds liquid in excess of a 1:9 liquid- ! - ! -to-ice ratio through percolation. Temporary surface water exists ! - ! only if the top soil layer is at saturation. ! - ! 1. Constant soil conductivity, and it uses the percolation model as in ! - ! Anderson (1976) NOAA technical report NWS 19. Temporary surface ! - ! water may exist after a heavy rain event, even if the soil doesn't ! - ! saturate. Recommended value. ! - ! 2. Soil conductivity decreases with depth even for constant soil moisture; ! - ! otherwise it is the same as 1. ! - !---------------------------------------------------------------------------------------! - NL%IPERCOL = 1 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables control the plant functional types (PFTs) that will be ! - ! used in this simulation. ! - ! ! - ! INCLUDE_THESE_PFT -- a list containing all the PFTs you want to include in this run ! - ! AGRI_STOCK -- which PFT should be used for agriculture ! - ! (used only when IANTH_DISTURB = 1) ! - ! PLANTATION_STOCK -- which PFT should be used for plantation ! - ! (used only when IANTH_DISTURB = 1) ! - ! ! - ! PFT table ! - !---------------------------------------------------------------------------------------! - ! 1 - C4 grass | 9 - early temperate deciduous ! - ! 2 - early tropical | 10 - mid temperate deciduous ! - ! 3 - mid tropical | 11 - late temperate deciduous ! - ! 4 - late tropical | 12:15 - agricultural PFTs ! - ! 5 - temperate C3 grass | 16 - Subtropical C3 grass ! - ! 6 - northern pines | (C4 grass with C3 photo). ! - ! 7 - southern pines | 17 - "Araucaria" (non-optimised ! - ! 8 - late conifers | Southern Pines). ! - !---------------------------------------------------------------------------------------! 
- NL%INCLUDE_THESE_PFT = 1,2,3,4,16 - NL%AGRI_STOCK = 1 - NL%PLANTATION_STOCK = 3 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed ! - ! in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0) ! - ! 0. Stop the run ! - ! 1. Add the PFT to the INCLUDE_THESE_PFT list ! - ! 2. Ignore the cohort ! - !---------------------------------------------------------------------------------------! - NL%PFT_1ST_CHECK = 0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables control the size of sub-polygon structures in ED-2. ! - ! MAXSITE -- This is the strict maximum number of sites that each polygon can ! - ! contain. Currently this is used only when the user wants to run ! - ! the same polygon with multiple soil types. If there aren't that ! - ! many different soil types with a minimum area (check MIN_SITE_AREA ! - ! below), then the model will allocate just the amount needed. ! - ! MAXPATCH -- If number of patches in a given site exceeds MAXPATCH, force patch ! - ! fusion. If MAXPATCH is 0, then fusion will never happen. If ! - ! MAXPATCH is negative, then the absolute value is used only during ! - ! the initialization, and fusion will never happen again. Notice ! - ! that if the patches are too different, then the actual number of ! - ! patches in a site may exceed MAXPATCH. ! - ! MAXCOHORT -- If number of cohorts in a given patch exceeds MAXCOHORT, force ! - ! cohort fusion. If MAXCOHORT is 0, then fusion will never happen. ! - ! If MAXCOHORT is negative, then the absolute value is used only ! - ! during the initialization, and fusion will never happen again. ! - ! Notice that if the cohorts are too different, then the actual ! - ! number of cohorts in a patch may exceed MAXCOHORT. ! - ! MIN_SITE_AREA -- This is the minimum fraction area of a given soil type that allows ! - ! a site to be created (ignored if IED_INIT_MODE is set to 3). ! - ! MIN_PATCH_AREA -- This is the minimum fraction area that allows a patch to be ! - ! created (ignored if IED_INIT_MODE is set to 3). ! - !---------------------------------------------------------------------------------------! - NL%MAXSITE = 1 - NL%MAXPATCH = 10 - NL%MAXCOHORT = 40 - NL%MIN_SITE_AREA = 0.005 - NL%MIN_PATCH_AREA = 0.005 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! ZROUGH -- constant roughness, in metres, used for the whole domain ! - !---------------------------------------------------------------------------------------! - NL%ZROUGH = 0.1 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! Treefall disturbance parameters. ! - ! TREEFALL_DISTURBANCE_RATE -- Sign-dependent treefall disturbance rate: ! - ! > 0. usual disturbance rate, in 1/years; ! - ! = 0. No treefall disturbance; ! - ! < 0. Treefall will be added as a mortality rate (it ! - ! will kill plants, but it won't create a new patch). ! - ! 
TIME2CANOPY -- Minimum patch age for treefall disturbance to happen. ! - ! If TREEFALL_DISTURBANCE_RATE = 0., this value will be ! - ! ignored. If this value is different from zero, then ! - ! TREEFALL_DISTURBANCE_RATE is internally adjusted so the ! - ! average patch age is still 1/TREEFALL_DISTURBANCE_RATE ! - !---------------------------------------------------------------------------------------! - NL%TREEFALL_DISTURBANCE_RATE = 0.014 - NL%TIME2CANOPY = 0.0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! RUNOFF_TIME -- In case a temporary surface water (TSW) is created, this is the "e- ! - ! -folding lifetime" of the TSW in seconds due to runoff. If you don't ! - ! want runoff to happen, set this to 0. ! - !---------------------------------------------------------------------------------------! - NL%RUNOFF_TIME = 3600.0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! The following variables control the minimum values of various velocities in the ! - ! canopy. This is needed to keep the air from being extremely still, and to avoid ! - ! singularities. When defining the values, keep in mind that UBMIN >= UGBMIN >= USTMIN. ! - ! ! - ! UBMIN -- minimum wind speed at the top of the canopy air space [ m/s] ! - ! UGBMIN -- minimum wind speed at the leaf level [ m/s] ! - ! USTMIN -- minimum friction velocity, u*, in m/s. [ m/s] ! - !---------------------------------------------------------------------------------------! - NL%UBMIN = 0.65 - NL%UGBMIN = 0.25 - NL%USTMIN = 0.05 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! Control parameters for printing to standard output. Any variable can be printed ! - ! to standard output as long as it is one dimensional. Polygon variables have been ! - ! tested, no guarantees for other hierarchical levels. Choose any variables that are ! - ! defined in the variable table fill routine in ed_state_vars.f90. Choose the start ! - ! and end index of the polygon, site, patch, or cohort. It should work in parallel. The ! - ! indices are global indices of the entire domain. They are printed out in rows of 10 ! - ! columns each. ! - ! ! - ! IPRINTPOLYS -- 0. Do not print information to screen ! - ! 1. Print polygon arrays to screen, use variables described below to ! - ! determine which ones and how ! - ! NPVARS -- Number of variables to be printed ! - ! PRINTVARS -- List of variables to be printed ! - ! PFMTSTR -- The standard fortran format for the prints. One format per variable ! - ! IPMIN -- First polygon (absolute index) to be printed ! - ! IPMAX -- Last polygon (absolute index) to print ! - !---------------------------------------------------------------------------------------! - NL%IPRINTPOLYS = 0 - NL%NPVARS = 1 - NL%PRINTVARS = 'AVG_PCPG','AVG_CAN_TEMP','AVG_VAPOR_AC','AVG_CAN_SHV' - NL%PFMTSTR = 'f10.8','f5.1','f7.2','f9.5' - NL%IPMIN = 1 - NL%IPMAX = 60 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! Variables that control the meteorological forcing. ! - ! ! - ! 
IMETTYPE -- Format of the meteorological dataset ! - ! 0. ASCII (deprecated) ! - ! 1. HDF5 ! - ! ISHUFFLE -- How to choose a year outside the meteorological data range (see ! - ! METCYC1 and METCYCF). ! - ! 0. Sequentially cycle over years ! - ! 1. Randomly pick the years, using the same sequence. This has worked ! - ! with gfortran running in Mac OS X system, but it acts like option 2 ! - ! when running ifort. ! - ! 2. Randomly pick the years, choosing a different sequence each time ! - ! the model is run. ! - ! METCYC1 -- First year with meteorological information ! - ! METCYCF -- Last year with meteorological information ! - ! IMETAVG -- How the input radiation was originally averaged. You must tell this ! - ! because ED-2.1 can make an interpolation accounting for the cosine of ! - ! zenith angle. ! - ! -1. I don't know, use linear interpolation. ! - ! 0. No average, the values are instantaneous ! - ! 1. Averages ending at the reference time ! - ! 2. Averages beginning at the reference time ! - ! 3. Averages centred at the reference time ! - ! IMETRAD -- What should the model do with the input short wave radiation? ! - ! 0. Nothing, use it as is. ! - ! 1. Add them together, then use the SiB method to break radiation down ! - ! into the four components (PAR direct, PAR diffuse, NIR direct, ! - ! NIR diffuse). ! - ! 2. Add them together, then use the method by Weiss and Norman (1985) ! - ! to break radiation down to the four components. ! - ! 3. Gloomy -- All radiation goes to diffuse. ! - ! 4. Sesame street -- all radiation goes to direct, except at night. ! - ! INITIAL_CO2 -- Initial value for CO2 in case no CO2 is provided in the meteorological ! - ! driver dataset [Units: µmol/mol] ! - !---------------------------------------------------------------------------------------! - NL%IMETTYPE = 1 - NL%ISHUFFLE = 0 - NL%METCYC1 = 2000 - NL%METCYCF = 2003 - NL%IMETAVG = 1 - NL%IMETRAD = 2 - NL%INITIAL_CO2 = 378.0 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables control the phenology prescribed from observations: ! - ! ! - ! IPHENYS1 -- First year for spring phenology ! - ! IPHENYSF -- Final year for spring phenology ! - ! IPHENYF1 -- First year for fall/autumn phenology ! - ! IPHENYFF -- Final year for fall/autumn phenology ! - ! PHENPATH -- path and prefix of the prescribed phenology data. ! - ! ! - ! If the years don't cover the entire simulation period, they will be recycled. ! - !---------------------------------------------------------------------------------------! - NL%IPHENYS1 = 1992 - NL%IPHENYSF = 2003 - NL%IPHENYF1 = 1992 - NL%IPHENYFF = 2003 - NL%PHENPATH = '/n/moorcroft_data/data/ed2_data/phenology/phenology' - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! These are some additional configuration files. ! - ! IEDCNFGF -- XML file containing additional parameter settings. If you don't have ! - ! one, leave it empty ! - ! EVENT_FILE -- file containing specific events that must be incorporated into the ! - ! simulation. ! - ! PHENPATH -- path and prefix of the prescribed phenology data. ! - !---------------------------------------------------------------------------------------! 
- NL%IEDCNFGF = 'config.xml' - NL%EVENT_FILE = 'myevents.xml' - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used to control the detailed output for debugging ! - ! purposes. ! - ! ! - ! IDETAILED -- This flag controls the possible detailed outputs, mostly used for ! - ! debugging purposes. Notice that this doesn't replace the normal debug- ! - ! ger options; the idea is to provide detailed output to check bad ! - ! assumptions. The options are additive, and the indices below represent ! - ! the different types of output: ! - ! ! - ! 1 -- Detailed budget (every DTLSM) ! - ! 2 -- Detailed photosynthesis (every DTLSM) ! - ! 4 -- Detailed output from the integrator (every HDID) ! - ! 8 -- Thermodynamic bounds for sanity check (every DTLSM) ! - ! 16 -- Daily error stats (which variable caused the time step to shrink) ! - ! 32 -- Allometry parameters, and minimum and maximum sizes ! - ! (two files, only at the beginning) ! - ! ! - ! In case you don't want any detailed output (likely for most runs), set ! - ! IDETAILED to zero. In case you want to generate multiple outputs, add ! - ! the number of the sought options: for example, if you want detailed ! - ! photosynthesis and detailed output from the integrator, set IDETAILED ! - ! to 6 (2 + 4). Any combination of the above outputs is acceptable, al- ! - ! though all but the last produce a sheer number of txt files, in which ! - ! case you may want to look at variable PATCH_KEEP. It is also a good ! - ! idea to set IVEGT_DYNAMICS to 0 when using the first five outputs. ! - ! ! - ! ! - ! PATCH_KEEP -- This option will eliminate all patches except one from the initial- ! - ! isation. This is only used when one of the first five types of ! - ! detailed output is active; otherwise it will be ignored. Options are: ! - ! -2. Keep only the patch with the lowest potential LAI ! - ! -1. Keep only the patch with the highest potential LAI ! - ! 0. Keep all patches. ! - ! > 0. Keep the patch with the provided index. In case the index is ! - ! not valid, the model will crash. ! - !---------------------------------------------------------------------------------------! - NL%IDETAILED = 0 - NL%PATCH_KEEP = 0 - !---------------------------------------------------------------------------------------! - - - !---------------------------------------------------------------------------------------! - ! IOPTINPT -- Optimization configuration. (Currently not used) ! - !---------------------------------------------------------------------------------------! - !NL%IOPTINPT = '' - !---------------------------------------------------------------------------------------! - NL%IOOUTPUT = 3 - NL%IADD_SITE_MEANS = 0 - NL%IADD_PATCH_MEANS = 0 - NL%IADD_COHORT_MEANS = 0 - NL%GROWTH_RESP_SCHEME = 0 - NL%STORAGE_RESP_SCHEME = 0 - NL%PLANT_HYDRO_SCHEME = 0 - NL%ISTOMATA_SCHEME = 0 - NL%ISTRUCT_GROWTH_SCHEME = 0 - NL%TRAIT_PLASTICITY_SCHEME = 0 - NL%IDDMORT_SCHEME = 0 - NL%CBR_SCHEME = 0 - NL%DDMORT_CONST = 0 - NL%ICANRAD = 1 - NL%DT_CENSUS = 60 - NL%YR1ST_CENSUS = 2000 - NL%MON1ST_CENSUS = 6 - NL%MIN_RECRUIT_DBH = 50 -$END -!==========================================================================================! -!==========================================================================================! -``` -
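Since the IDETAILED options above are additive flags, a combined value can be computed and patched into the namelist from the shell. The snippet below is only a minimal sketch, assuming the namelist is saved as `ED2IN` in the current directory; the `sed` one-liner is illustrative and not part of this workflow:

```
# hypothetical example: detailed photosynthesis (2) + detailed integrator output (4) = 6
sed -i 's/NL%IDETAILED.*/NL%IDETAILED = 6/' ED2IN
# verify the new value
grep 'NL%IDETAILED' ED2IN
```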
- -Convert the docker image to a singularity image -``` -singularity pull docker://pecan/model-ed2-git -``` - -Finally, you can run the singularity image: -``` -singularity exec -B ed_inputs:/data/ed_inputs -B faoOLD:/data/faoOLD -B oge2OLD:/data/oge2OLD -B sites:/data/sites -B testrun.s83:/data/testrun.s83 --pwd /data/testrun.s83 ./model-ed2-git.simg ed2.git -s -``` - -Note that the `-B` option mounts the folder into the singularity image at the path given as the second argument (after the `:`). - -The `ed2.git` command is started with the `-s` flag, which runs it in single-process mode and does not initialize or use MPI. - -Once the model has finished, the outputs should be available under `testrun.s83`. - -The example ED2IN file is not 100% correct and will result in the following error: - -
-output of (failed) run (click to expand) -``` -+---------------- MPI parallel info: --------------------+ -+ - Machnum = 0 -+ - Machsize = 1 -+---------------- OMP parallel info: --------------------+ -+ - thread use: 1 -+ - threads max: 1 -+ - cpu use: 1 -+ - cpus max: 1 -+ Note: Max vals are for node, not sockets. -+--------------------------------------------------------+ -Reading namelist information -Copying namelist - -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!! WARNING! WARNING! WARNING! WARNING! WARNING! !!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - -> Outfast cannot be less than frqfast. - Oufast was redefined to 3600. seconds. -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - - -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!! WARNING! WARNING! WARNING! WARNING! WARNING! !!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - -> Outstate cannot be different than frqstate when - unitstate is set to 3 (years). - Oustate was set to 1. years. - Oustate was redefined to 1. years. -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - -+------------------------------------------------------------+ -| Ecosystem Demography Model, version 2.2 -+------------------------------------------------------------+ -| Input namelist filename is ED2IN -| -| Single process execution on INITIAL run. -+------------------------------------------------------------+ - => Generating the land/sea mask. -/data/oge2OLD/OGE2_HEADER - -> Getting file: /data/oge2OLD/OGE2_30S060W.h5... - + Work allocation, node 1; - + Polygon array allocation, node 1; - + Memory successfully allocated on none 1; - [+] Load_Ed_Ecosystem_Params... ----------------------------------------- - Treefall disturbance parameters: - - LAMBDA_REF = 1.40000E-02 - - LAMBDA_EFF = 1.40000E-02 - - TIME2CANOPY = 0.00000E+00 ----------------------------------------- - [+] Checking for XML config... -********************************************* -** WARNING! ** -** ** -** XML file wasn't found. Using default ** -** parameters in ED. ** -** (You provided config.xml). -** ** -********************************************* - [+] Alloc_Soilgrid... - [+] Set_Polygon_Coordinates... - [+] Sfcdata_ED... - [+] Load_Ecosystem_State... - + Doing sequential initialization over nodes. - + Initializing from ED restart file. Node: 001 - [-] filelist_f: Checking prefix: /data/sites/Santarem_Km83/s83_default. - + Showing first 10 files: - [-] File #: 1 /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.css - [-] File #: 2 /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.pss -Using patch file: /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.pss -Using cohort file: /data/sites/Santarem_Km83/s83_default.lat-3.018lon-54.971.css - + Initializing phenology. Node: 001 - - Reading thermal sums. - + Initializing anthropogenic disturbance forcing. 
Node: 001 - - -------------------------------------------------------- - Soil information: - - Polygon name : ts83 - Longitude : -54.971 - Latitude : -3.018 - Prescribed sand and clay : T - # of sites : 1 - - Site : 1 - - Type : 16 - - Clay fraction = 5.90000E-01 - - Sand fraction = 3.90000E-01 - - Silt fraction = 2.00000E-02 - - SLBS = 1.22460E+01 - - SLPOTS = -1.52090E-01 - - SLCONS = 2.30320E-06 - - Dry air soil = 2.29248E-01 - - Wilting point = 2.43249E-01 - - Field capacity = 3.24517E-01 - - Saturation = 4.27790E-01 - - Heat capacity = 1.30601E+06 - -------------------------------------------------------- - - [+] Init_Met_Drivers... - [+] Read_Met_Drivers_Init... ------------------------------- - - METCYC1 = 2000 - - METCYCF = 2003 - - NYEARS = 2 ------------------------------- - IYEAR YEAR_USE - 1 2001 - 2 2002 ------------------------------- - - [+] Update_met_drivers... - [+] Ed_Init_Atm... -Total count in node 1 for grid 1 : POLYGONS= 1 SITES= 1 PATCHES= 18 COHORTS= 1753 -Grid: 1 Poly: 1 Lon: -54.9710 Lat: -3.0180 Nplants: 0.73 Avg. LAI: 4.45 NPatches: 18 NCohorts: 660 - [+] initHydrology... -initHydrology | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 -Allocated | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 -Updated | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 -Deallocated | mynum= 1 ngrids= 1 mpolys= 1 msites= 1 - [+] Filltab_Alltypes... - [+] Finding frqsum... - [+] Loading obstime_list -File /nowhere not found! -Specify OBSTIME_DB properly in ED namelist. -:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: -:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: -:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: - --------------------------------------------------------------- - !!! FATAL ERROR !!! --------------------------------------------------------------- - ---> File: ed_init.F90 - ---> Subroutine: read_obstime - ---> Reason: OBSTIME_DB not found! --------------------------------------------------------------- - ED execution halts (see previous error message)... --------------------------------------------------------------- -STOP fatal_error -``` -
- - - From eb3ef7f3626f6c419248bca2f6f9d95e36cbcf60 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:50:47 +0530 Subject: [PATCH 1006/2289] Delete index.Rmd --- book_source/index.Rmd | 39 --------------------------------------- 1 file changed, 39 deletions(-) delete mode 100644 book_source/index.Rmd diff --git a/book_source/index.Rmd b/book_source/index.Rmd deleted file mode 100644 index 8b71e4c339d..00000000000 --- a/book_source/index.Rmd +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: "The Predictive Ecosystem Analyzer" -date: "`r Sys.Date()`" -site: bookdown::bookdown_site -documentclass: book -biblio-style: apalike -link-citations: yes -author: "By: PEcAn Team" ---- - -# Welcome {-} - -**Ecosystem science, policy, and management informed by the best available data and models** - -```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} -knitr::include_graphics(rep("figures/PecanLogo.png")) -``` - - -**Our Mission:** - - -**Develop and promote accessible tools for reproducible ecosystem modeling and forecasting** - - - -[PEcAn Website](http://pecanproject.github.io/) - -[Public Chat Room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ) - -[Github Repository](https://github.com/PecanProject/pecan) - - - - - - From 33249a12bf69249222ed6fd447219cc5ecd81aed Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:51:05 +0530 Subject: [PATCH 1007/2289] Delete libgl1-mesa-dev --- book_source/libgl1-mesa-dev | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 book_source/libgl1-mesa-dev diff --git a/book_source/libgl1-mesa-dev b/book_source/libgl1-mesa-dev deleted file mode 100644 index e69de29bb2d..00000000000 From 10bb55b75a0a295f60d068c4c993243c2fba77ce Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:51:19 +0530 Subject: [PATCH 1008/2289] Delete libglpk-dev --- book_source/libglpk-dev | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 book_source/libglpk-dev diff --git a/book_source/libglpk-dev b/book_source/libglpk-dev deleted file mode 100644 index e69de29bb2d..00000000000 From c6454359fe48520ff4b3a6629b41fb98a826d783 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:51:32 +0530 Subject: [PATCH 1009/2289] Delete libglu1-mesa-dev --- book_source/libglu1-mesa-dev | 4 ---- 1 file changed, 4 deletions(-) delete mode 100644 book_source/libglu1-mesa-dev diff --git a/book_source/libglu1-mesa-dev b/book_source/libglu1-mesa-dev deleted file mode 100644 index e4ef8991e2b..00000000000 --- a/book_source/libglu1-mesa-dev +++ /dev/null @@ -1,4 +0,0 @@ -Fedora Modular 31 - x86_64 - Updates 1.7 kB/s | 4.2 kB 00:02 -Fedora Modular 31 - x86_64 - Updates 39 kB/s | 417 kB 00:10 -Fedora 31 - x86_64 - Updates 5.7 kB/s | 3.8 kB 00:00 -Fedora 31 - x86_64 - Updates 25 kB/s | 657 kB 00:26 From c4e959e5247a0c4fb4fa4a4e8d764bd377ba8396 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:51:43 +0530 Subject: [PATCH 1010/2289] Delete libnetcdf-dev --- book_source/libnetcdf-dev | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 book_source/libnetcdf-dev diff --git 
a/book_source/libnetcdf-dev b/book_source/libnetcdf-dev deleted file mode 100644 index e69de29bb2d..00000000000 From e107fb75eb1bbbea18208708f2f30ae41b6f1b24 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:51:57 +0530 Subject: [PATCH 1011/2289] Delete librdf0-dev --- book_source/librdf0-dev | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 book_source/librdf0-dev diff --git a/book_source/librdf0-dev b/book_source/librdf0-dev deleted file mode 100644 index e69de29bb2d..00000000000 From e98615ad302debaead7e1ae58062874dca41963b Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:52:07 +0530 Subject: [PATCH 1012/2289] Delete libudunits2-dev --- book_source/libudunits2-dev | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 book_source/libudunits2-dev diff --git a/book_source/libudunits2-dev b/book_source/libudunits2-dev deleted file mode 100644 index e69de29bb2d..00000000000 From cbb09ebb0062f3169d557dbe0a323872c44d9580 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 16:54:55 +0530 Subject: [PATCH 1013/2289] created index.Rmd --- book_source/index.Rmd | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) create mode 100644 book_source/index.Rmd diff --git a/book_source/index.Rmd b/book_source/index.Rmd new file mode 100644 index 00000000000..8b71e4c339d --- /dev/null +++ b/book_source/index.Rmd @@ -0,0 +1,39 @@ +--- +title: "The Predictive Ecosystem Analyzer" +date: "`r Sys.Date()`" +site: bookdown::bookdown_site +documentclass: book +biblio-style: apalike +link-citations: yes +author: "By: PEcAn Team" +--- + +# Welcome {-} + +**Ecosystem science, policy, and management informed by the best available data and models** + +```{r, echo=FALSE,out.height= "50%", out.width="50%", fig.align='center'} +knitr::include_graphics(rep("figures/PecanLogo.png")) +``` + + +**Our Mission:** + + +**Develop and promote accessible tools for reproducible ecosystem modeling and forecasting** + + + +[PEcAn Website](http://pecanproject.github.io/) + +[Public Chat Room](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ) + +[Github Repository](https://github.com/PecanProject/pecan) + + + + + + From b6ecfd3547336f87e7c0183128d75297e77dc60d Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 17:05:43 +0530 Subject: [PATCH 1014/2289] updated book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 9c9e0f13fca..e6c1d82b0d5 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends:develop + container: pecan/depends steps: From 4332a418597ea9ab232c609b776d05d6f48bc0e8 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 17:17:41 +0530 Subject: [PATCH 1015/2289] adeed dependencies --- .github/workflows/book.yml | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 
e6c1d82b0d5..68d2241d9e3 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -20,6 +20,19 @@ jobs: steps: - uses: actions/checkout@v2 + - name: installing dependencies + run: | + apt-get update \ + && apt-get -y --no-install-recommends install \ + jags \ + time \ + libgdal-dev \ + libglpk-dev \ + librdf0-dev \ + libnetcdf-dev \ + libudunits2-dev \ + libgl1-mesa-dev \ + libglu1-mesa-dev # Building book from source using makefile - name: Building book From f09cbb36ab7a19b5efcebf4d07d1a5283753ea18 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 19:06:41 +0530 Subject: [PATCH 1016/2289] added more updates --- .github/workflows/book.yml | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 68d2241d9e3..32d95090c74 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -13,7 +13,7 @@ jobs: build: # The type of runner that the job will run on - runs-on: ubuntu-latest + runs-on: debian-latest container: pecan/depends @@ -21,18 +21,18 @@ jobs: - uses: actions/checkout@v2 - name: installing dependencies - run: | - apt-get update \ - && apt-get -y --no-install-recommends install \ - jags \ - time \ - libgdal-dev \ - libglpk-dev \ - librdf0-dev \ - libnetcdf-dev \ - libudunits2-dev \ - libgl1-mesa-dev \ - libglu1-mesa-dev + run : | + apt-get update \ + && apt-get -y --no-install-recommends install \ + jags \ + time \ + libgdal-dev \ + libglpk-dev \ + librdf0-dev \ + libnetcdf-dev \ + libudunits2-dev \ + libgl1-mesa-dev \ + libglu1-mesa-dev # Building book from source using makefile - name: Building book From d19a4a5f3313c1598ef67872756fff760f9954e0 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 19:08:35 +0530 Subject: [PATCH 1017/2289] updated book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 32d95090c74..88b8de72025 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -13,7 +13,7 @@ jobs: build: # The type of runner that the job will run on - runs-on: debian-latest + runs-on: ubuntu-latest container: pecan/depends From 1c970839bd1dcd188ce646b3d3aea3d030d6686c Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 19:18:47 +0530 Subject: [PATCH 1018/2289] updated image name --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 88b8de72025..97c25743d4a 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends + container: pecan/depends:latest steps: From 2c60a175aadf652681dc66c530562e637dcb4eae Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 20:08:46 +0530 Subject: [PATCH 1019/2289] update container image --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 97c25743d4a..57dc894091d 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The 
type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends:latest + container: pecan/depends:devel steps: From 27dda34d03d3b22ca3369e1e6cfcf61715312264 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 20:12:45 +0530 Subject: [PATCH 1020/2289] updated image name --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 57dc894091d..4b537ee263e 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends:devel + container: pecan/depends:develop steps: From 823dcb3c40bd74729565e424266f94362e69bfbe Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 2 Jun 2020 11:04:50 -0400 Subject: [PATCH 1021/2289] replace package --- docker/depends/pecan.depends | 2 +- modules/assim.batch/DESCRIPTION | 2 +- modules/emulator/DESCRIPTION | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index cd041decf6e..6228573ce78 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -111,8 +111,8 @@ install2.r -e -s -l "${RLIB}" -n -1\ tictoc \ tidyr \ tidyverse \ - tmvtnorm \ tools \ + TruncatedNormal \ truncnorm \ udunits2 \ utils \ diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index 122e2da0e83..faddbebc3de 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -42,7 +42,7 @@ Imports: stats, prodlim, MCMCpack, - tmvtnorm, + TruncatedNormal (>= 2.2), udunits2 (>= 0.11), utils, XML, diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION index 00204e8566c..a782b0ab330 100644 --- a/modules/emulator/DESCRIPTION +++ b/modules/emulator/DESCRIPTION @@ -13,7 +13,7 @@ Imports: mlegp, coda (>= 0.18), MASS, - tmvtnorm, + TruncatedNormal (>= 2.2), lqmm, MCMCpack Description: Implementation of a Gaussian Process model (both likelihood and From 902fdbfae3e456bd74bb3c3a0a941a19f34a7836 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 2 Jun 2020 11:11:34 -0400 Subject: [PATCH 1022/2289] replace function calls --- modules/assim.batch/R/hier.mcmc.R | 20 ++++++++++---------- modules/assim.batch/R/pda.utils.R | 8 ++++---- modules/emulator/R/minimize.GP.R | 10 +++++----- 3 files changed, 19 insertions(+), 19 deletions(-) diff --git a/modules/assim.batch/R/hier.mcmc.R b/modules/assim.batch/R/hier.mcmc.R index de8b668c512..6bf159169b4 100644 --- a/modules/assim.batch/R/hier.mcmc.R +++ b/modules/assim.batch/R/hier.mcmc.R @@ -208,11 +208,11 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # propose new site parameter vectors thissite <- g %% nsites if(thissite == 0) thissite <- nsites - proposed <- tmvtnorm::rtmvnorm(1, - mean = mu_site_curr[thissite,], + proposed <- TruncatedNormal::rtmvnorm(1, + mu = mu_site_curr[thissite,], sigma = jcov.arr[,,thissite], - lower = rng_orig[,1], - upper = rng_orig[,2]) + lb = rng_orig[,1], + ub = rng_orig[,2]) mu_site_new <- matrix(rep(proposed, nsites),ncol=nparam, byrow = TRUE) @@ -228,9 +228,9 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # calculate jump probabilities currHR <- sapply(seq_len(nsites), function(v) { - tmvtnorm::dtmvnorm(mu_site_curr[v,], mu_site_new[v,], jcov.arr[,,v], - lower = rng_orig[,1], - upper = 
rng_orig[,2], log = TRUE) + TruncatedNormal::dtmvnorm(mu_site_curr[v,], mu_site_new[v,], jcov.arr[,,v], + lb = rng_orig[,1], + ub = rng_orig[,2], log = TRUE, B = 1e2) }) # predict new SS @@ -246,9 +246,9 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # calculate jump probabilities newHR <- sapply(seq_len(nsites), function(v) { - tmvtnorm::dtmvnorm(mu_site_new[v,], mu_site_curr[v,], jcov.arr[,,v], - lower = rng_orig[,1], - upper = rng_orig[,2], log = TRUE) + TruncatedNormal::dtmvnorm(mu_site_new[v,], mu_site_curr[v,], jcov.arr[,,v], + lb = rng_orig[,1], + ub = rng_orig[,2], log = TRUE, B = 1e2) }) # Accept/reject with MH rule diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 2180709a3c2..0763fa465e7 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -901,11 +901,11 @@ generate_hierpost <- function(mcmc.out, prior.fn.all, prior.ind.all){ # calculate hierarchical posteriors from mu_global_samp and tau_global_samp hierarchical_samp <- mu_global_samp for(si in seq_len(iter_size)){ - hierarchical_samp[si,] <- tmvtnorm::rtmvnorm(1, - mean = mu_global_samp[si,], + hierarchical_samp[si,] <- TruncatedNormal::rtmvnorm(1, + mu = mu_global_samp[si,], sigma = sigma_global_samp[si,,], - lower = lower_lim, - upper = upper_lim) + lb = lower_lim, + ub = upper_lim) } mcmc.out[[i]]$hierarchical_samp <- hierarchical_samp diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R index 79a92468729..6e1c4508306 100644 --- a/modules/emulator/R/minimize.GP.R +++ b/modules/emulator/R/minimize.GP.R @@ -271,7 +271,7 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn } ## propose new parameters - xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2]) + xnew <- TruncatedNormal::rtmvnorm(1, mu = c(xcurr), sigma = jcov, lb = rng[,1], ub = rng[,2]) # if(bounded(xnew,rng)){ # re-predict SS @@ -282,16 +282,16 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn # don't update the currllp ( = llik.par, e.g. tau) yet # calculate posterior with xcurr | currllp ycurr <- get_y(currSS, xcurr, llik.fn, priors, currllp) - HRcurr <- tmvtnorm::dtmvnorm(c(xnew), c(xcurr), jcov, - lower = rng[,1], upper = rng[,2], log = TRUE) + HRcurr <- TruncatedNormal::dtmvnorm(c(xnew), c(xcurr), jcov, + lb = rng[,1], ub = rng[,2], log = TRUE, B = 1e2) newSS <- get_ss(gp, xnew, pos.check) if(all(newSS != -Inf)){ newllp <- pda.calc.llik.par(settings, n.of.obs, newSS, hyper.pars) ynew <- get_y(newSS, xnew, llik.fn, priors, newllp) - HRnew <- tmvtnorm::dtmvnorm(c(xcurr), c(xnew), jcov, - lower = rng[,1], upper = rng[,2], log = TRUE) + HRnew <- TruncatedNormal::dtmvnorm(c(xcurr), c(xnew), jcov, + lb = rng[,1], ub = rng[,2], log = TRUE, B = 1e2) if (is.accepted(ycurr+HRcurr, ynew+HRnew)) { xcurr <- xnew From b813fea99a5dd037df063396f61ad340bff00891 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 2 Jun 2020 11:13:39 -0400 Subject: [PATCH 1023/2289] update changelog --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index b8ecb9bf59b..b42abb07411 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -24,6 +24,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Changed +- Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up. 
- Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083). - `PEcAn.DB::insert_table` now uses `DBI::dbAppendTable` internally instead of manually constructed SQL (#2552). From 9249e462dae827b5559fca57de81213aa59cf876 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 22:44:41 +0530 Subject: [PATCH 1024/2289] updated container image --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 4b537ee263e..51bcf34aca8 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -13,7 +13,7 @@ jobs: build: # The type of runner that the job will run on - runs-on: ubuntu-latest + runs-on: macos-latest container: pecan/depends:develop From e1b2d53eb702ce0f080f60c495999ae7973b4d3f Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 2 Jun 2020 22:47:28 +0530 Subject: [PATCH 1025/2289] added changes --- .github/workflows/book.yml | 18 +++--------------- 1 file changed, 3 insertions(+), 15 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 51bcf34aca8..6aafe44f5cf 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -13,27 +13,15 @@ jobs: build: # The type of runner that the job will run on - runs-on: macos-latest + runs-on: ubuntu-latest container: pecan/depends:develop steps: - uses: actions/checkout@v2 - - name: installing dependencies - run : | - apt-get update \ - && apt-get -y --no-install-recommends install \ - jags \ - time \ - libgdal-dev \ - libglpk-dev \ - librdf0-dev \ - libnetcdf-dev \ - libudunits2-dev \ - libgl1-mesa-dev \ - libglu1-mesa-dev - + + # Building book from source using makefile - name: Building book run: cd book_source && make From b829beebf15322bb2bed83a5111f8a1c022eef92 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 2 Jun 2020 13:33:40 -0400 Subject: [PATCH 1026/2289] add issue no --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index b42abb07411..d7661d6efa7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -24,7 +24,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Changed -- Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up. +- Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up per #2621. - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083). - `PEcAn.DB::insert_table` now uses `DBI::dbAppendTable` internally instead of manually constructed SQL (#2552). 
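The `tmvtnorm` to `TruncatedNormal` swap in the commits above is essentially a drop-in API change: the two packages expose near-identical truncated multivariate normal samplers and densities, but the argument names differ (`mean`/`lower`/`upper` become `mu`/`lb`/`ub`) and `TruncatedNormal::dtmvnorm()` takes an extra `B` argument giving the number of Monte Carlo samples used to estimate the truncated normalizing constant. A minimal sketch of the mapping follows; the parameter values are made up for illustration, and only the argument names are taken from the patches:

```r
# Minimal sketch (illustrative values only) of the tmvtnorm -> TruncatedNormal
# mapping; assumes TruncatedNormal (>= 2.2), as pinned in the DESCRIPTION
# files above.
library(TruncatedNormal)

xcurr <- c(0.5, 1.0)            # current parameter vector (illustrative)
jcov  <- diag(c(0.1, 0.2))      # jump covariance (illustrative)
lb    <- c(0, 0)                # was `lower` in tmvtnorm
ub    <- c(2, 2)                # was `upper` in tmvtnorm

# tmvtnorm::rtmvnorm(1, mean = xcurr, sigma = jcov, lower = lb, upper = ub)
# becomes:
xnew <- TruncatedNormal::rtmvnorm(1, mu = xcurr, sigma = jcov, lb = lb, ub = ub)

# tmvtnorm::dtmvnorm(x, mean, sigma, lower, upper, log = TRUE) becomes the
# call below; B is the number of Monte Carlo samples used to estimate the
# truncated normalizing constant, and the patches use a small B = 1e2 for
# speed.
lp <- TruncatedNormal::dtmvnorm(c(xnew), mu = xcurr, sigma = jcov,
                                lb = lb, ub = ub, log = TRUE, B = 1e2)
```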
From 9521887a66a147be24879839da1abfd7e6ebb59c Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Wed, 3 Jun 2020 12:48:27 -0700
Subject: [PATCH 1027/2289] don't override random.effects = TRUE

---
 modules/meta.analysis/R/meta.analysis.R | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/modules/meta.analysis/R/meta.analysis.R b/modules/meta.analysis/R/meta.analysis.R
index 113c6e4b757..8ff675f134d 100644
--- a/modules/meta.analysis/R/meta.analysis.R
+++ b/modules/meta.analysis/R/meta.analysis.R
@@ -109,11 +109,10 @@ pecan.ma <- function(trait.data, prior.distns,
   ## check for excess missing data
   if (all(is.na(data[["obs.prec"]]))) {
-    if (verbose) {
-      writeLines("NO ERROR STATS PROVIDED, DROPPING RANDOM EFFECTS")
-    }
-    data$site <- rep(1, nrow(data))
-    data$trt <- rep(0, nrow(data))
+    logger.warn("NO ERROR STATS PROVIDED\n Check meta-analysis Model Convergence",
+                "and consider turning off Random Effects by",
+                "setting <random.effects>FALSE</random.effects>",
+                "in your pecan.xml settings file ")
   }

   if (!random) {

From 5fe037daeafe1de1b0c51c74294268b01c236a67 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Wed, 3 Jun 2020 13:00:32 -0700
Subject: [PATCH 1028/2289] Update CHANGELOG.md

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index b8ecb9bf59b..7996e843a80 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - When building sipnet model would not set correct model version
 - Update pecan/depends docker image to have latest Roxygen and devtools.
 - Update ED docker build, will now build version 2.2.0 and git
+- Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625

 ### Changed

From bc252328f8a13755ea0f7ee69d6496bf90faadbe Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Wed, 3 Jun 2020 15:47:53 -0500
Subject: [PATCH 1029/2289] updates to doc and speedups

---
 .gitignore             |  4 ---
 DEV-INTRO.md           | 48 ++++++++++++++++++------------
 docker-compose.dev.yml | 66 +++++++++++++++++++++++-------------------
 docker-compose.yml     |  2 ++
 docker/env.example     |  2 +-
 5 files changed, 70 insertions(+), 52 deletions(-)

diff --git a/.gitignore b/.gitignore
index cbded97662d..1e523cbe68f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -99,7 +99,3 @@ contrib/modellauncher/modellauncher

 # don't checkin renv
 /renv/
-
-# folder with the data folders for the docker stack
-/volumes
-

diff --git a/DEV-INTRO.md b/DEV-INTRO.md
index 47839e99b32..aba762612a4 100644
--- a/DEV-INTRO.md
+++ b/DEV-INTRO.md
@@ -12,9 +12,9 @@

To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development.

By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment; in our case we will use it to set up the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e.
`cp docker-compose.dev.yml docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for development. +By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e. `cp docker-compose.dev.yml docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for development. **The rest of this document assumes you have done this step.** -If you in the past had loaded some of the data, but would like to start from scratch you can simply remove the `volumes` folder (and all the subfolders) and start with the "First time setup" section again. +If you want to start from scratch and remove all old data, but keep your pecan checked out folder, you can remove the folders where you have written the data (see `folders` below). You will also need to remove any of the docker managed volumes. To see all volumes you can do `docker volume ls -q -f name=pecan`. If you are sure, you can either remove them one by one, or use `docker volume rm $(docker volume ls -q -f name=pecan)` to remove them all. **THIS DESTROYS ALL DATA IN DOCKER MANAGED VOLUMES.**. If you changed the path in `docker-compose.override.yml` you will need to remove that volume as well. This will however not destroy the data since it is already on your local machine. ### First time setup @@ -45,20 +45,24 @@ PECAN_VERSION=develop #### folders -Next we will create the folders that will hold all the data for the docker containers using: `mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The `volumes` folder will be ignored by git. You can create these at any location, however you will need to update the `docker-compose.dev.yml` file. The subfolders are used for the following: +The goal of the development is to share the development folder with your container, whilst minimizing the latency. What this will do is setup the folders to allow for your pecan folder to be shared, and keep the rest of the folders managed by docker. Some of this is based on a presentation done during [DockerCon 2020](https://docker.events.cube365.net/docker/dockercon/content/Videos/92BAM7vob5uQ2spZf). In this talk it is recommended to keep the database on the filesystem managed by docker, as well as any other folders that are not directly modified on the host system (not using the docker managed volumes could lead to a large speed loss when reading/writing to the disk). The `docker-compose.override.yml` can be modified to copy all the data to the local filesystem, you will need to comment out the appropriate blocks. If you are sharing more than the pecan home directory you will need to make sure that these folder exist. As from the video, it is recommended to keep these folders outside of the actual pecan folder to allow for better caching capabilities of the docker system. -- **lib** holds all the R packages for the specific version of PEcAn and R. This folder will be shared amongst all other containers, and will contain the compiled PEcAn code. -- **pecan** this holds all the data, such as workflows and any downloaded data. 
- **portainer** if you enabled the portainer service this folder is used to hold persistent data for this service
- **postgres** holds the actual database data. If you want to back up the database, you can stop the postgres container and zip up the folder.
- **rabbitmq** holds persistent information of the message broker (rabbitmq).
- **traefik** holds persistent data for the web proxy, that directs incoming traffic to the correct container.
+The goal of the development setup is to share the development folder with your container, whilst minimizing the latency. What this will do is set up the folders to allow for your pecan folder to be shared, and keep the rest of the folders managed by docker. Some of this is based on a presentation done during [DockerCon 2020](https://docker.events.cube365.net/docker/dockercon/content/Videos/92BAM7vob5uQ2spZf). In this talk it is recommended to keep the database on the filesystem managed by docker, as well as any other folders that are not directly modified on the host system (not using the docker managed volumes could lead to a large speed loss when reading/writing to the disk). The `docker-compose.override.yml` can be modified to copy all the data to the local filesystem; you will need to comment out the appropriate blocks. If you are sharing more than the pecan home directory you will need to make sure that these folders exist. As recommended in the video, keep these folders outside of the actual pecan folder to allow for better caching capabilities of the docker system.
+
+If you have commented out the volumes in `docker-compose.override.yml` you will need to create the folders. Assuming you have not modified the values, you can do this with: `mkdir -p $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The following volumes are specified:
+
+- **pecan_home** : is the checked out folder of PEcAn. This is shared with the executor and rstudio container, allowing you to share and compile PEcAn. (defaults to current folder)
+- **pecan_web** : is the checked out web folder of PEcAn. This is shared with the web container, allowing you to share and modify the PEcAn web app. (defaults to web folder in the current folder)
+- **pecan_lib** : holds all the R packages for the specific version of PEcAn and R. This folder will be shared amongst all other containers, and will contain the compiled PEcAn code. (defaults to managed by docker, or $HOME/volumes/pecan/lib)
+- **pecan** this holds all the data, such as workflows and any downloaded data. (defaults to managed by docker, or $HOME/volumes/pecan/pecan)
+- **traefik** holds persistent data for the web proxy, that directs incoming traffic to the correct container. (defaults to managed by docker, or $HOME/volumes/pecan/traefik)
+- **postgres** holds the actual database data. If you want to back up the database, you can stop the postgres container and zip up the folder. (defaults to managed by docker, or $HOME/volumes/pecan/postgres)
+- **rabbitmq** holds persistent information of the message broker (rabbitmq). (defaults to managed by docker, or $HOME/volumes/pecan/rabbitmq)
+- **portainer** if you enabled the portainer service this folder is used to hold persistent data for this service. You will need to enable this service. (defaults to managed by docker, or $HOME/volumes/pecan/portainer)

These folders will hold all the persistent data for each of the respective containers and can grow. For example the postgres database is multiple GB. The pecan folder will hold all data produced by the workflows, including any downloaded data, and can grow to many gigabytes.

#### postgresql database

-First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready.
+First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready.

Once the database has finished starting up we will initialize the database. Before we run the container we want to make sure we have the latest database information, you can do this with `docker pull pecan/db`, which will make sure you have the latest version of the database ready.
Now you can load the database using: `docker run --rm --network pecan_pecan pecan/db` (in this case we use the latest image instead of develop since it refers to the actual database data, and not the actual code). Once that is done we create two users for BETY: @@ -72,15 +76,13 @@ docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example #### load example data -Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. To do this we first again make sure we have the latest code ready using `docker pull pecan/data:develop` and run this image using `docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop`. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers (which is mounted from `volumes/pecan` in your current folder. +Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. To do this we first again make sure we have the latest code ready using `docker pull pecan/data:develop` and run this image using `docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop`. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers. #### copy R packages (optional but recommended) -Next copy the R packages from a container to your local machine as the `volumes/lib` folder. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. - -You can copy all the data using `docker run -ti --rm -v ${PWD}/volumes/lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/`. This will copy all compiled packages to your local machine. +Next copy the R packages from a container to volume `pecan_lib`. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. This folder is shared with all PEcAn containers, allowing you to compile the code in one place, and have the compiled code available in all other containers. For example modify the code for a model, allows you to compile the code in rstudio container, and see the results in the model container. -This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). You can also always delete all files in the `volumes/lib` folder, and recompile PEcAn from scratch. +You can copy all the data using `docker run -ti --rm -v pecan_lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/`. This will copy all compiled packages to your local machine. #### copy web config file (optional) @@ -88,7 +90,7 @@ The `docker-compose.override.yml` file has a section that will enable editing th ### PEcAn Development -To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. You don't need to stop any running containers, you can use the following command to start all containers: `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d`. At this point you have PEcAn running in docker. +To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. 
You don't need to stop any running containers; you can use the following command to start all containers: `docker-compose up -d`. At this point you have PEcAn running in docker.

The current folder (most likely your clone of the git repository) is mounted in some containers as `/pecan`, and in the case of rstudio also in your home folder as `pecan`. You can see exactly which containers in `docker-compose.override.yml`.

@@ -149,9 +151,19 @@ Small scripts that are used as part of the development and installation of PEcAn

# Advanced Development Options

+## Reset all containers/database
+
+At some point you might want to reset your database.
+
+## New version of R
+
+This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). First stop the full stack (using `docker-compose down`). You can delete the volume using `docker volume rm pecan_lib` (and also remove the folder on your local disk), copy the R packages, and start the full stack.
+
 ## Linux and User permissions

-(On Mac OSX and Windows files should automatically be owned by the user running the docker-compose commands)
+(On Mac OSX and Windows files should automatically be owned by the user running the docker-compose commands).
+
+If you use mounted folders, make sure that these folders are writable by the containers. Docker on Linux will try to preserve the file permissions. To do this it might be necessary for the folders to have rw permissions. This can be done by using `chmod 777 $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik}`.

This will leverage NFS to mount the file system in your local docker image, changing the files to be owned by the user specified in the export file. Try to limit this to only your PEcAn folder, since this will allow anybody on this system to get access to the exported folder as you!

@@ -175,7 +187,7 @@ sudo exportfs -va

At this point you have exported your home directory, only to your local machine. All files written to that exported filesystem will be owned by you (`id -u`) and your primary group (`id -g`).

-Finally we can modify the docker-compose.dev.yaml file to allow for writing files to your PEcAn folder as you:
+Finally we can modify the `docker-compose.override.yml` file to allow for writing files to your PEcAn folder as you:

``` volumes:

diff --git a/docker-compose.dev.yml b/docker-compose.dev.yml
index b11ee4ed02c..a35993435ac 100644
--- a/docker-compose.dev.yml
+++ b/docker-compose.dev.yml
@@ -54,44 +54,52 @@ services:
 #      - /var/run/docker.sock:/var/run/docker.sock
 #      - portainer:/data

+# -----------------------------------------------------------------------
+# These are the volumes mounted into the containers. For speed reasons
+# it is best to use docker native volumes (less important on Linux).
+# The pecan_home and pecan_web are important since this allows us to
+# share the PEcAn source code from local machine to docker containers.
+# Volumes are placed outside of the PEcAn source tree to allow for
+# optimized caching of the changed files.
+# ----------------------------------------------------------------------- volumes: pecan_home: driver_opts: type: none device: '${PWD}' o: bind - pecan_lib: - driver_opts: - type: none - device: '${PWD}/volumes/lib' - o: bind pecan_web: driver_opts: type: none device: '${PWD}/web/' o: bind - traefik: - driver_opts: - type: none - device: '${PWD}/volumes/traefik' - o: bind - postgres: - driver_opts: - type: none - device: '${PWD}/volumes/postgres' - o: bind - rabbitmq: - driver_opts: - type: none - device: '${PWD}/volumes/rabbitmq' - o: bind - pecan: - driver_opts: - type: none - device: '${PWD}/volumes/pecan' - o: bind + pecan_lib: + # driver_opts: + # type: none + # device: '${HOME}/volumes/pecan/lib' + # o: bind + #pecan: + # driver_opts: + # type: none + # device: '${HOME}/volumes/pecan/pecan' + # o: bind + #traefik: + # driver_opts: + # type: none + # device: '${HOME}/volumes/pecan/traefik' + # o: bind + #postgres: + # driver_opts: + # type: none + # device: '${HOME}/volumes/pecan/postgres' + # o: bind + #rabbitmq: + # driver_opts: + # type: none + # device: '${HOME}/volumes/pecan/rabbitmq' + # o: bind portainer: - driver_opts: - type: none - device: '${PWD}/volumes/portainer' - o: bind + # driver_opts: + # type: none + # device: '${HOME}/volumes/pecan/portainer' + # o: bind diff --git a/docker-compose.yml b/docker-compose.yml index e4b4552d396..4688a117c8d 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -145,6 +145,8 @@ services: image: pecan/rstudio-nginx:${PECAN_VERSION:-latest} networks: - pecan + depends_on: + - rstudio labels: - "traefik.enable=true" - "traefik.backend=rstudio" diff --git a/docker/env.example b/docker/env.example index e2083fc8e86..ee0ca6d8ec1 100644 --- a/docker/env.example +++ b/docker/env.example @@ -6,7 +6,7 @@ # ---------------------------------------------------------------------- # project name (-p flag for docker-compose) -#COMPOSE_PROJECT_NAME=dev +#COMPOSE_PROJECT_NAME=pecan # ---------------------------------------------------------------------- # TRAEFIK CONFIGURATION From 6562b58f4ee8baa90bd0c4f2af566a828f9d6ac7 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 15:09:15 -0700 Subject: [PATCH 1030/2289] Update modules/meta.analysis/R/meta.analysis.R --- modules/meta.analysis/R/meta.analysis.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/meta.analysis.R b/modules/meta.analysis/R/meta.analysis.R index 8ff675f134d..e572b6f5128 100644 --- a/modules/meta.analysis/R/meta.analysis.R +++ b/modules/meta.analysis/R/meta.analysis.R @@ -109,7 +109,7 @@ pecan.ma <- function(trait.data, prior.distns, ## check for excess missing data if (all(is.na(data[["obs.prec"]]))) { - logger.warn("NO ERROR STATS PROVIDED\n Check meta-analysis Model Convergence", + PEcAn.logger::logger.warn("NO ERROR STATS PROVIDED\n Check meta-analysis Model Convergence", "and consider turning off Random Effects by", "setting FALSE", "in your pecan.xml settings file ") From d035e51b2ff198b287be98ab50e996ddc2277858 Mon Sep 17 00:00:00 2001 From: Ken Youens-Clark Date: Wed, 3 Jun 2020 15:27:38 -0700 Subject: [PATCH 1031/2289] Add optparse to workflow.R ## Description The current version of workflow.R incorrectly parses the command-line arguments for the given XML configuration file. 
Looking at the following section: ``` args <- commandArgs(trailingOnly = TRUE) if (is.na(args[1])) { settings <- PEcAn.settings::read.settings("pecan.xml") } else { settings_file <- args[1] settings <- PEcAn.settings::read.settings(settings_file) } ``` Per the documentation on `commandArgs`, `args` will be: A character vector containing the name of the executable and the user-supplied command line arguments. The first element is the name of the executable by which R was invoked. The exact form of this element is platform dependent: it may be the fully qualified name, or simply the last component (or basename) of the application, or for an embedded R it can be anything the programmer supplied. This program is intended to be executed like so: ./workflow.R --settings input.xml This code is using `args[1]` which will actually be the path to the R executable, not the name of the "settings" XML document. The only reason the program succeeds in using the given XML is because the "read.settings" function in "base/settings/R/read.settings.R" separately parses the `commandArgs` for the presence of a "--settings" option and grabs the "input.xml" value. FWIW, I would strongly argue AGAINST this as this hidden behavior has allowed this bug to persist. The function should only use and validate the given "inputfile" and should definitely NOT parse the command-line arguments OR inspect Sys.getenv("PECAN_SETTINGS"), but that is another battle. ## Motivation and Context The "optparse" establishes the parameters for a program in a way that both validates the inputs and generates documentation. This program also, apparently, accepts a "--continue" flag, but the only way to know this is to read the source code itself. As written, this program could be run with any other arbitrary arguments without error. By describing the parameters to "optparse," the program will now reject unknown arguments and also generate a "usage" statement when executed with "-h" or "--help": ``` $ ./workflow.R -h Usage: ./workflow.R [options] Options: -s FILE, --settings=FILE Settings XML file -c, --continue Continue processing -h, --help Show this help message and exit ``` Note that unknown arguments are rejected: ``` $ ./workflow.R --foo bar Error in getopt_options(object, args) : Error in getopt(spec = spec, opt = args) : long flag "foo" is invalid Calls: main ... get_args -> parse_args -> parse_options -> getopt_options Execution halted ``` And invalid file arguments for "--settings" are rejected: ``` $ ./workflow.R --settings foo.xml Usage: ./workflow.R [options] Options: -s FILE, --settings=FILE Settings XML file -c, --continue Continue processing -h, --help Show this help message and exit Error in get_args() : --settings "foo.xml" not a valid file Calls: main -> get_args Execution halted ``` ## Review Time Estimate - [ ] Immediately - [ ] Within one week - [x] When possible ## Types of changes - [ ] Bug fix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Checklist: - [ ] My change requires a change to the documentation. - [ ] I have updated the CHANGELOG.md. - [ ] I have updated the documentation accordingly. - [ ] I have read the **CONTRIBUTING** document. - [ ] I have added tests to cover my changes. - [ ] All new and existing tests passed. 
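The pattern this PR adopts boils down to a short standalone script. The sketch below is illustrative only: the option names mirror the PR, but the default value is a placeholder and the `PECAN_SETTINGS` environment-variable fallback used in the actual diff is omitted for brevity.

```r
#!/usr/bin/env Rscript
# Illustrative sketch of the optparse pattern adopted in the diff below;
# this is not the actual workflow.R.
suppressMessages(library(optparse))

option_list <- list(
  optparse::make_option(c("-s", "--settings"),
                        type = "character", default = "pecan.xml",
                        help = "Settings XML file", metavar = "FILE"),
  optparse::make_option(c("-c", "--continue"),
                        action = "store_true", default = FALSE,
                        help = "Continue processing")
)

parser <- optparse::OptionParser(option_list = option_list)
args   <- optparse::parse_args(parser)

# Unknown flags make parse_args() abort, and -h/--help prints the usage
# statement generated from the option list, as in the transcripts above.
if (!file.exists(args$settings)) {
  optparse::print_help(parser)
  stop(sprintf('--settings "%s" not a valid file', args$settings))
}
cat("Using settings file:", args$settings, "\n")
```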
--- web/workflow.R | 395 +++++++++++++++++++++++++++---------------------- 1 file changed, 214 insertions(+), 181 deletions(-) diff --git a/web/workflow.R b/web/workflow.R index 19fac3880e2..006c2a21e2b 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -11,184 +11,217 @@ # ---------------------------------------------------------------------- # Load required libraries # ---------------------------------------------------------------------- -library(PEcAn.all) -library(PEcAn.utils) -library(RCurl) - -# make sure always to call status.end -options(warn = 1) -options(error = quote({ - try(PEcAn.utils::status.end("ERROR")) - try(PEcAn.remote::kill.tunnel(settings)) - if (!interactive()) { - q(status = 1) - } -})) - -# ---------------------------------------------------------------------- -# PEcAn Workflow -# ---------------------------------------------------------------------- -# Open and read in settings file for PEcAn run. -args <- commandArgs(trailingOnly = TRUE) -if (is.na(args[1])) { - settings <- PEcAn.settings::read.settings("pecan.xml") -} else { - settings_file <- args[1] - settings <- PEcAn.settings::read.settings(settings_file) -} - -# Check for additional modules that will require adding settings -if ("benchmarking" %in% names(settings)) { - library(PEcAn.benchmark) - settings <- papply(settings, read_settings_BRR) -} - -if ("sitegroup" %in% names(settings)) { - if (is.null(settings$sitegroup$nSite)) { - settings <- PEcAn.settings::createSitegroupMultiSettings( - settings, - sitegroupId = settings$sitegroup$id) - } else { - settings <- PEcAn.settings::createSitegroupMultiSettings( - settings, - sitegroupId = settings$sitegroup$id, - nSite = settings$sitegroup$nSite) - } - # zero out so don't expand a second time if re-reading - settings$sitegroup <- NULL -} - -# Update/fix/check settings. 
-# Will only run the first time it's called, unless force=TRUE -settings <- PEcAn.settings::prepare.settings(settings, force = FALSE) - -# Write pecan.CHECKED.xml -PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") - -# start from scratch if no continue is passed in -status_file <- file.path(settings$outdir, "STATUS") -if (length(which(commandArgs() == "--continue")) == 0 - && file.exists(status_file)) { - file.remove(status_file) -} - -# Do conversions -settings <- PEcAn.workflow::do_conversions(settings) - -# Query the trait database for data and priors -if (PEcAn.utils::status.check("TRAIT") == 0) { - PEcAn.utils::status.start("TRAIT") - settings <- PEcAn.workflow::runModule.get.trait.data(settings) - PEcAn.settings::write.settings( - settings, - outputfile = "pecan.TRAIT.xml") - PEcAn.utils::status.end() -} else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { - settings <- PEcAn.settings::read.settings( - file.path(settings$outdir, "pecan.TRAIT.xml")) -} - - -# Run the PEcAn meta.analysis -if (!is.null(settings$meta.analysis)) { - if (PEcAn.utils::status.check("META") == 0) { - PEcAn.utils::status.start("META") - PEcAn.MA::runModule.run.meta.analysis(settings) - PEcAn.utils::status.end() - } -} - -# Write model specific configs -if (PEcAn.utils::status.check("CONFIG") == 0) { - PEcAn.utils::status.start("CONFIG") - settings <- PEcAn.workflow::runModule.run.write.configs(settings) - PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") - PEcAn.utils::status.end() -} else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { - settings <- PEcAn.settings::read.settings( - file.path(settings$outdir, "pecan.CONFIGS.xml")) -} - -if ((length(which(commandArgs() == "--advanced")) != 0) - && (PEcAn.utils::status.check("ADVANCED") == 0)) { - PEcAn.utils::status.start("ADVANCED") - q(); -} - -# Start ecosystem model runs -if (PEcAn.utils::status.check("MODEL") == 0) { - PEcAn.utils::status.start("MODEL") - PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE) - PEcAn.utils::status.end() -} - -# Get results of model runs -if (PEcAn.utils::status.check("OUTPUT") == 0) { - PEcAn.utils::status.start("OUTPUT") - runModule.get.results(settings) - PEcAn.utils::status.end() -} - -# Run ensemble analysis on model output. 
-if ("ensemble" %in% names(settings) - && PEcAn.utils::status.check("ENSEMBLE") == 0) { - PEcAn.utils::status.start("ENSEMBLE") - runModule.run.ensemble.analysis(settings, TRUE) - PEcAn.utils::status.end() -} - -# Run sensitivity analysis and variance decomposition on model output -if ("sensitivity.analysis" %in% names(settings) - && PEcAn.utils::status.check("SENSITIVITY") == 0) { - PEcAn.utils::status.start("SENSITIVITY") - runModule.run.sensitivity.analysis(settings) - PEcAn.utils::status.end() -} - -# Run parameter data assimilation -if ("assim.batch" %in% names(settings)) { - if (PEcAn.utils::status.check("PDA") == 0) { - PEcAn.utils::status.start("PDA") - settings <- PEcAn.assim.batch::runModule.assim.batch(settings) - PEcAn.utils::status.end() - } -} - -# Run state data assimilation -if ("state.data.assimilation" %in% names(settings)) { - if (PEcAn.utils::status.check("SDA") == 0) { - PEcAn.utils::status.start("SDA") - settings <- sda.enfk(settings) - PEcAn.utils::status.end() - } -} - -# Run benchmarking -if ("benchmarking" %in% names(settings) - && "benchmark" %in% names(settings$benchmarking)) { - PEcAn.utils::status.start("BENCHMARKING") - results <- papply(settings, function(x) calc_benchmark(x, bety)) - PEcAn.utils::status.end() -} - -# Pecan workflow complete -if (PEcAn.utils::status.check("FINISHED") == 0) { - PEcAn.utils::status.start("FINISHED") - PEcAn.remote::kill.tunnel(settings) - db.query(paste("UPDATE workflows SET finished_at=NOW() WHERE id=", - settings$workflow$id, "AND finished_at IS NULL"), - params = settings$database$bety) - - # Send email if configured - if (!is.null(settings$email) - && !is.null(settings$email$to) - && (settings$email$to != "")) { - sendmail(settings$email$from, settings$email$to, - paste0("Workflow has finished executing at ", base::date()), - paste0("You can find the results on ", settings$email$url)) - } - PEcAn.utils::status.end() -} - -db.print.connections() -print("---------- PEcAn Workflow Complete ----------") +suppressMessages(library("PEcAn.all")) +suppressMessages(library("PEcAn.utils")) +suppressMessages(library("RCurl")) +suppressMessages(library("optparse")) + +# -------------------------------------------------- +get_args <- function () { + option_list = list( + make_option( + c("-s", "--settings"), + default = ifelse(Sys.getenv("PECAN_SETTINGS") != "", + Sys.getenv("PECAN_SETTINGS"), "pecan.xml"), + type = "character", + help = "Settings XML file", + metavar = "FILE", + ), + make_option( + c("-c", "--continue"), + default = FALSE, + action = "store_true", + type = "logical", + help = "Continue processing", + ) + ) + + parser = OptionParser(option_list = option_list) + args = parse_args(parser) + + if (!file.exists(args$settings)) { + print_help(parser) + stop(sprintf('--settings "%s" not a valid file\n', args$settings)) + } + + return(invisible(args)) +} + +# -------------------------------------------------- +main <- function () { + # get command-line arguments + args = get_args() + + # make sure always to call status.end + options(warn = 1) + options(error = quote({ + try(PEcAn.utils::status.end("ERROR")) + try(PEcAn.remote::kill.tunnel(settings)) + if (!interactive()) { + q(status = 1) + } + })) + + # ---------------------------------------------------------------------- + # PEcAn Workflow + # ---------------------------------------------------------------------- + # Open and read in settings file for PEcAn run. 
+ settings <- PEcAn.settings::read.settings(args$settings) + + # Check for additional modules that will require adding settings + if ("benchmarking" %in% names(settings)) { + library(PEcAn.benchmark) + settings <- papply(settings, read_settings_BRR) + } + + if ("sitegroup" %in% names(settings)) { + if (is.null(settings$sitegroup$nSite)) { + settings <- PEcAn.settings::createSitegroupMultiSettings( + settings, + sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings( + settings, + sitegroupId = settings$sitegroup$id, + nSite = settings$sitegroup$nSite) + } + # zero out so don't expand a second time if re-reading + settings$sitegroup <- NULL + } + + # Update/fix/check settings. + # Will only run the first time it's called, unless force=TRUE + settings <- PEcAn.settings::prepare.settings(settings, force = FALSE) + + # Write pecan.CHECKED.xml + PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") + + # start from scratch if no continue is passed in + status_file <- file.path(settings$outdir, "STATUS") + if (args$continue && file.exists(status_file)) { + file.remove(status_file) + } + + # Do conversions + settings <- PEcAn.workflow::do_conversions(settings) + + # Query the trait database for data and priors + if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings( + settings, + outputfile = "pecan.TRAIT.xml") + PEcAn.utils::status.end() + } else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { + settings <- PEcAn.settings::read.settings( + file.path(settings$outdir, "pecan.TRAIT.xml")) + } + + + # Run the PEcAn meta.analysis + if (!is.null(settings$meta.analysis)) { + if (PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } + } + + # Write model specific configs + if (PEcAn.utils::status.check("CONFIG") == 0) { + PEcAn.utils::status.start("CONFIG") + settings <- PEcAn.workflow::runModule.run.write.configs(settings) + PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") + PEcAn.utils::status.end() + } else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { + settings <- PEcAn.settings::read.settings( + file.path(settings$outdir, "pecan.CONFIGS.xml")) + } + + if ((length(which(commandArgs() == "--advanced")) != 0) + && (PEcAn.utils::status.check("ADVANCED") == 0)) { + PEcAn.utils::status.start("ADVANCED") + q(); + } + + # Start ecosystem model runs + if (PEcAn.utils::status.check("MODEL") == 0) { + PEcAn.utils::status.start("MODEL") + PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE) + PEcAn.utils::status.end() + } + + # Get results of model runs + if (PEcAn.utils::status.check("OUTPUT") == 0) { + PEcAn.utils::status.start("OUTPUT") + runModule.get.results(settings) + PEcAn.utils::status.end() + } + + # Run ensemble analysis on model output. 
+ if ("ensemble" %in% names(settings) + && PEcAn.utils::status.check("ENSEMBLE") == 0) { + PEcAn.utils::status.start("ENSEMBLE") + runModule.run.ensemble.analysis(settings, TRUE) + PEcAn.utils::status.end() + } + + # Run sensitivity analysis and variance decomposition on model output + if ("sensitivity.analysis" %in% names(settings) + && PEcAn.utils::status.check("SENSITIVITY") == 0) { + PEcAn.utils::status.start("SENSITIVITY") + runModule.run.sensitivity.analysis(settings) + PEcAn.utils::status.end() + } + + # Run parameter data assimilation + if ("assim.batch" %in% names(settings)) { + if (PEcAn.utils::status.check("PDA") == 0) { + PEcAn.utils::status.start("PDA") + settings <- PEcAn.assim.batch::runModule.assim.batch(settings) + PEcAn.utils::status.end() + } + } + + # Run state data assimilation + if ("state.data.assimilation" %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + settings <- sda.enfk(settings) + PEcAn.utils::status.end() + } + } + + # Run benchmarking + if ("benchmarking" %in% names(settings) + && "benchmark" %in% names(settings$benchmarking)) { + PEcAn.utils::status.start("BENCHMARKING") + results <- papply(settings, function(x) calc_benchmark(x, bety)) + PEcAn.utils::status.end() + } + + # Pecan workflow complete + if (PEcAn.utils::status.check("FINISHED") == 0) { + PEcAn.utils::status.start("FINISHED") + PEcAn.remote::kill.tunnel(settings) + db.query(paste("UPDATE workflows SET finished_at=NOW() WHERE id=", + settings$workflow$id, "AND finished_at IS NULL"), + params = settings$database$bety) + + # Send email if configured + if (!is.null(settings$email) + && !is.null(settings$email$to) + && (settings$email$to != "")) { + sendmail(settings$email$from, settings$email$to, + paste0("Workflow has finished executing at ", base::date()), + paste0("You can find the results on ", settings$email$url)) + } + PEcAn.utils::status.end() + } + + db.print.connections() + print("---------- PEcAn Workflow Complete ----------") +} + +main() From fb9dbc684b44c793f4d50bfe3d056b9a8fc13e6f Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 17:19:17 -0700 Subject: [PATCH 1032/2289] Update web/workflow.R --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index 006c2a21e2b..acdd43526ff 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -36,7 +36,7 @@ get_args <- function () { ) ) - parser = OptionParser(option_list = option_list) + parser <- optparse::OptionParser(option_list = option_list) args = parse_args(parser) if (!file.exists(args$settings)) { From bf8c6a6cb95a9a372b75f51021bcf9d6d3b8d937 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 17:19:32 -0700 Subject: [PATCH 1033/2289] Update web/workflow.R --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index acdd43526ff..c4e1b93ac44 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -37,7 +37,7 @@ get_args <- function () { ) parser <- optparse::OptionParser(option_list = option_list) - args = parse_args(parser) + args <- optparse::parse_args(parser) if (!file.exists(args$settings)) { print_help(parser) From 84cec5643e151ebee21601a92ae2a5eaf113be74 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 17:20:22 -0700 Subject: [PATCH 1034/2289] Update web/workflow.R --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index c4e1b93ac44..55167357292 
100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -48,7 +48,7 @@ get_args <- function () { } # -------------------------------------------------- -main <- function () { +workflow <- function(settings_file = "pecan.xml", continue = FALSE) { # get command-line arguments args = get_args() From 32be861ae56fd802cd2a8e1cfa9fe3c2265909ff Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 17:20:35 -0700 Subject: [PATCH 1035/2289] Update web/workflow.R --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index 55167357292..2a767d94f1a 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -19,7 +19,7 @@ suppressMessages(library("optparse")) # -------------------------------------------------- get_args <- function () { option_list = list( - make_option( + optparse::make_option( c("-s", "--settings"), default = ifelse(Sys.getenv("PECAN_SETTINGS") != "", Sys.getenv("PECAN_SETTINGS"), "pecan.xml"), From acda74c15ccfa0f436d69b37b54ab566c4390e99 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 17:20:45 -0700 Subject: [PATCH 1036/2289] Update web/workflow.R --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index 2a767d94f1a..858e0b0ddad 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -40,7 +40,7 @@ get_args <- function () { args <- optparse::parse_args(parser) if (!file.exists(args$settings)) { - print_help(parser) + optparse::print_help(parser) stop(sprintf('--settings "%s" not a valid file\n', args$settings)) } From e9cc9c60e8a88584203cad1605e621f7e135ae22 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 3 Jun 2020 17:20:55 -0700 Subject: [PATCH 1037/2289] Update web/workflow.R --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index 858e0b0ddad..acdac5645b2 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -27,7 +27,7 @@ get_args <- function () { help = "Settings XML file", metavar = "FILE", ), - make_option( + optparse::make_option( c("-c", "--continue"), default = FALSE, action = "store_true", From a417de0874eb63746e478725f2e5e7d5b9d85052 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 14:09:51 +0530 Subject: [PATCH 1038/2289] updated container image --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 6aafe44f5cf..f6ea38c68c0 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends:develop + container: pecan/docs steps: From 030462172ab7223406ea7f3ddf38d831b8ddf171 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 14:15:37 +0530 Subject: [PATCH 1039/2289] updated image --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index f6ea38c68c0..55efa86bac2 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/docs + container: pecan/depends:latest steps: From fc1c1eab5a9868f0a6120c39d81335efe190a106 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari 
<31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 14:35:07 +0530 Subject: [PATCH 1040/2289] updated dependencies --- .github/workflows/book.yml | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 55efa86bac2..c574a983bff 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -20,7 +20,11 @@ jobs: steps: - uses: actions/checkout@v2 - + - run : | + apt-get update \ + && apt-get install -y --no-install-recommends pandoc \ + && install2.r -e -s -n -1 bookdown \ + && rm -rf /var/lib/apt/lists/* # Building book from source using makefile - name: Building book From 9ae6b777cf94bea414d945ddc3fa8bc5d0abe6e6 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 14:42:40 +0530 Subject: [PATCH 1041/2289] updated dependencies --- .github/workflows/book.yml | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index c574a983bff..39330778db4 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends:latest + # container: pecan/depends:latest steps: @@ -25,6 +25,19 @@ jobs: && apt-get install -y --no-install-recommends pandoc \ && install2.r -e -s -n -1 bookdown \ && rm -rf /var/lib/apt/lists/* + + - run : | + apt-get update \ + && apt-get -y --no-install-recommends install \ + jags \ + time \ + libgdal-dev \ + libglpk-dev \ + librdf0-dev \ + libnetcdf-dev \ + libudunits2-dev \ + libgl1-mesa-dev \ + libglu1-mesa-dev # Building book from source using makefile - name: Building book From aef80825ffb902b3c06a064b1e5ac7bb2e6aad81 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 14:46:26 +0530 Subject: [PATCH 1042/2289] changes --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 39330778db4..914a6796c5f 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - # container: pecan/depends:latest + container: pecan/depends:latest steps: From 3429b9aaa886319463550abba1b4cd430d5e18c6 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 18:13:32 +0530 Subject: [PATCH 1043/2289] updated container image --- .github/workflows/book.yml | 5 ----- 1 file changed, 5 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 914a6796c5f..3c09ef9c2d7 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -20,11 +20,6 @@ jobs: steps: - uses: actions/checkout@v2 - - run : | - apt-get update \ - && apt-get install -y --no-install-recommends pandoc \ - && install2.r -e -s -n -1 bookdown \ - && rm -rf /var/lib/apt/lists/* - run : | apt-get update \ From 5f07193baac508544ad2350def060de5a2c91b27 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 4 Jun 2020 08:49:14 -0400 Subject: [PATCH 1044/2289] vary ic --- models/basgra/NAMESPACE | 2 + models/basgra/R/read_restart.BASGRA.R | 19 ++++- models/basgra/R/write.config.BASGRA.R | 88 +++++++++++------------ models/basgra/man/read_restart.BASGRA.Rd | 30 ++++++++ 
models/basgra/man/write.config.BASGRA.Rd | 4 +- models/basgra/man/write_restart.BASGRA.Rd | 46 ++++++++++++ 6 files changed, 143 insertions(+), 46 deletions(-) create mode 100644 models/basgra/man/read_restart.BASGRA.Rd create mode 100644 models/basgra/man/write_restart.BASGRA.Rd diff --git a/models/basgra/NAMESPACE b/models/basgra/NAMESPACE index ce060a2b8c5..5d45555d209 100644 --- a/models/basgra/NAMESPACE +++ b/models/basgra/NAMESPACE @@ -1,5 +1,7 @@ # Generated by roxygen2: do not edit by hand +export(read_restart.BASGRA) export(run_BASGRA) export(write.config.BASGRA) +export(write_restart.BASGRA) useDynLib(PEcAn.BASGRA, .registration = TRUE) diff --git a/models/basgra/R/read_restart.BASGRA.R b/models/basgra/R/read_restart.BASGRA.R index 572a5978edb..0cbb48cbfb1 100644 --- a/models/basgra/R/read_restart.BASGRA.R +++ b/models/basgra/R/read_restart.BASGRA.R @@ -13,6 +13,13 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p forecast <- list() + year <- lubridate::year(stop.time) + + start_date <- as.POSIXlt(settings$run$start.date, tz = "UTC") + end_date <- as.POSIXlt(settings$run$end.date, tz = "UTC") + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) + # Read ensemble output ens <- read.output(runid = runid, outdir = file.path(outdir, runid), @@ -20,7 +27,17 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p end.year = lubridate::year(stop.time), variables = var.names) - last <- length(ens[[1]]) + if(year == start_year & year != end_year){ + simdays <- seq(lubridate::yday(start_date), lubridate::yday(stop.time)) + } + # To BE CONTINUED... + #else if(year != start_year & year == end_year){ + # simdays <- seq(1, lubridate::yday(end_date)) + #}else{ + # simdays <- seq(lubridate::yday(start_date), lubridate::yday(end_date)) + #} + + last <- length(simdays) params$restart <- c() diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index aef391e17e0..4ddd0412125 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -266,9 +266,9 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N }else if(!is.null(settings$run$inputs$poolinitcond$path)){ IC.path <- settings$run$inputs$poolinitcond$path - IC.pools <- PEcAn.data.land::prepare_pools(IC.path, constants = list(sla = SLA)) + #IC.pools <- PEcAn.data.land::prepare_pools(IC.path, constants = list(sla = SLA)) - if(!is.null(IC.pools)){ + #if(!is.null(IC.pools)){ IC.nc <- ncdf4::nc_open(IC.path) ## laiInit m2/m2 @@ -286,55 +286,55 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # This is IC # Initial value of SOM (g C m-2) - # csom0 <- try(ncdf4::ncvar_get(IC.nc, "SOC"), silent = TRUE) - # if (!is.na(csom0) && is.numeric(csom0)) { - # run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(csom0, "Mg ha-1", "kg C m-2"), "kg", "g") - # } - - # Initial fraction of SOC that is fast (g C g-1 C) - if ("r_fSOC" %in% pft.names) { - run_params[which(names(run_params) == "FCSOMF0")] <- pft.traits[which(pft.names == "r_fSOC")] - } - - # This is IC, change later - # Initial C-N ratio of litter (g C g-1 N) - if ("c2n_litter" %in% pft.names) { - run_params[which(names(run_params) == "CNLITT0")] <- 100*pft.traits[which(pft.names == "c2n_litter")] - } - - # Initial C-N ratio of fast SOM (g C g-1 N) - if ("c2n_fSOM" %in% pft.names) { - run_params[which(names(run_params) == "CNSOMF0")] <- 
pft.traits[which(pft.names == "c2n_fSOM")] - } - - # Initial C-N ratio of slow SOM (g C g-1 N) - if ("c2n_sSOM" %in% pft.names) { - run_params[which(names(run_params) == "CNSOMS0")] <- pft.traits[which(pft.names == "c2n_sSOM")] - } + csom0 <- try(ncdf4::ncvar_get(IC.nc, "TotSoilCarb"), silent = TRUE) + if (!is.na(csom0) && is.numeric(csom0)) { + run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(csom0, "kg", "g") + } - # Initial value of soil mineral N (g N m-2) - if ("NMIN" %in% pft.names) { - run_params[which(names(run_params) == "NMIN0")] <- pft.traits[which(pft.names == "NMIN")] - } - - # This is IC, change later - # Initial value of soil water concentration (m3 m-3) - if ("initial_volume_fraction_of_condensed_water_in_soil" %in% pft.names) { - run_params[which(names(run_params) == "WCI")] <- pft.traits[which(pft.names == "initial_volume_fraction_of_condensed_water_in_soil")] - } - - - # Water concentration at saturation (m3 m-3) - if ("volume_fraction_of_water_in_soil_at_saturation" %in% pft.names) { - run_params[which(names(run_params) == "WCST")] <- pft.traits[which(pft.names == "volume_fraction_of_water_in_soil_at_saturation")] - } + # # Initial fraction of SOC that is fast (g C g-1 C) + # if ("r_fSOC" %in% pft.names) { + # run_params[which(names(run_params) == "FCSOMF0")] <- pft.traits[which(pft.names == "r_fSOC")] + # } + # + # # This is IC, change later + # # Initial C-N ratio of litter (g C g-1 N) + # if ("c2n_litter" %in% pft.names) { + # run_params[which(names(run_params) == "CNLITT0")] <- 100*pft.traits[which(pft.names == "c2n_litter")] + # } + # + # # Initial C-N ratio of fast SOM (g C g-1 N) + # if ("c2n_fSOM" %in% pft.names) { + # run_params[which(names(run_params) == "CNSOMF0")] <- pft.traits[which(pft.names == "c2n_fSOM")] + # } + # + # # Initial C-N ratio of slow SOM (g C g-1 N) + # if ("c2n_sSOM" %in% pft.names) { + # run_params[which(names(run_params) == "CNSOMS0")] <- pft.traits[which(pft.names == "c2n_sSOM")] + # } + # + # # Initial value of soil mineral N (g N m-2) + # if ("NMIN" %in% pft.names) { + # run_params[which(names(run_params) == "NMIN0")] <- pft.traits[which(pft.names == "NMIN")] + # } + # + # # This is IC, change later + # # Initial value of soil water concentration (m3 m-3) + # if ("initial_volume_fraction_of_condensed_water_in_soil" %in% pft.names) { + # run_params[which(names(run_params) == "WCI")] <- pft.traits[which(pft.names == "initial_volume_fraction_of_condensed_water_in_soil")] + # } + # + # + # # Water concentration at saturation (m3 m-3) + # if ("volume_fraction_of_water_in_soil_at_saturation" %in% pft.names) { + # run_params[which(names(run_params) == "WCST")] <- pft.traits[which(pft.names == "volume_fraction_of_water_in_soil_at_saturation")] + # } # # Temperature that kills half the plants in a day (degrees Celcius) # if ("plant_min_temp" %in% pft.names) { # run_params[which(names(run_params) == "LT50I")] <- pft.traits[which(pft.names == "plant_min_temp")] # } - } + #} } #----------------------------------------------------------------------- diff --git a/models/basgra/man/read_restart.BASGRA.Rd b/models/basgra/man/read_restart.BASGRA.Rd new file mode 100644 index 00000000000..020d0d57d73 --- /dev/null +++ b/models/basgra/man/read_restart.BASGRA.Rd @@ -0,0 +1,30 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/read_restart.BASGRA.R +\name{read_restart.BASGRA} +\alias{read_restart.BASGRA} +\title{Read restart function for SDA with BASGRA} +\usage{ +read_restart.BASGRA(outdir, runid, 
stop.time, settings, var.names, params) +} +\arguments{ +\item{outdir}{Output directory} + +\item{runid}{Run ID} + +\item{stop.time}{Year that is being read} + +\item{settings}{PEcAn settings object} + +\item{var.names}{Variable names to be extracted} + +\item{params}{Any parameters required for state calculations} +} +\value{ +X.vec vector of forecasts +} +\description{ +Read Restart for BASGRA +} +\author{ +Istem Fer +} diff --git a/models/basgra/man/write.config.BASGRA.Rd b/models/basgra/man/write.config.BASGRA.Rd index 7e20e3efc49..3fbf94d632f 100644 --- a/models/basgra/man/write.config.BASGRA.Rd +++ b/models/basgra/man/write.config.BASGRA.Rd @@ -4,7 +4,7 @@ \alias{write.config.BASGRA} \title{Write BASGRA configuration files} \usage{ -write.config.BASGRA(defaults, trait.values, settings, run.id) +write.config.BASGRA(defaults, trait.values, settings, run.id, IC = NULL) } \arguments{ \item{defaults}{list of defaults to process} @@ -14,6 +14,8 @@ write.config.BASGRA(defaults, trait.values, settings, run.id) \item{settings}{list of settings from pecan settings file} \item{run.id}{id of run} + +\item{IC}{initial conditions list} } \value{ configuration file for BASGRA for given run diff --git a/models/basgra/man/write_restart.BASGRA.Rd b/models/basgra/man/write_restart.BASGRA.Rd new file mode 100644 index 00000000000..aa970cc931d --- /dev/null +++ b/models/basgra/man/write_restart.BASGRA.Rd @@ -0,0 +1,46 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/write_restart.BASGRA.R +\name{write_restart.BASGRA} +\alias{write_restart.BASGRA} +\title{write_restart.SIPNET} +\usage{ +write_restart.BASGRA( + outdir, + runid, + start.time, + stop.time, + settings, + new.state, + RENAME = TRUE, + new.params = FALSE, + inputs +) +} +\arguments{ +\item{outdir}{outout directory} + +\item{runid}{run id} + +\item{start.time}{Time of current assimilation step} + +\item{stop.time}{Time of next assimilation step} + +\item{settings}{pecan settings list} + +\item{new.state}{Analysis state matrix returned by \code{sda.enkf}} + +\item{RENAME}{flag to either rename output file or not} + +\item{new.params}{optional, additionals params to pass write.configs that are deterministically related to the parameters updated by the analysis} + +\item{inputs}{new input paths updated by the SDA workflow, will be passed to write.configs} +} +\value{ +TRUE if successful +} +\description{ +Write restart files for BASGRA +} +\author{ +Istem Fer +} From 816958d5918f119320671afc0649f0f4a1b329d0 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 4 Jun 2020 10:26:08 -0400 Subject: [PATCH 1045/2289] inital vars --- models/basgra/R/write.config.BASGRA.R | 9 ++++++--- models/basgra/R/write_restart.BASGRA.R | 10 +++------- 2 files changed, 9 insertions(+), 10 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 4ddd0412125..f10b19ef93d 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -257,9 +257,12 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ic.names <- names(IC) - # Initial value of leaf area index m2 m-2 - logged) - if ("ilai" %in% ic.names) { - run_params[which(names(run_params) == "LOG10LAII")] <- log(IC$lai) + if ("LAI" %in% ic.names) { + run_params[which(names(run_params) == "LOG10LAII")] <- log(IC$LAI) + } + + if ("TotSoilCarb" %in% ic.names) { + run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(IC$TotSoilCarb, "kg", "g") } diff --git 
a/models/basgra/R/write_restart.BASGRA.R b/models/basgra/R/write_restart.BASGRA.R index 3c9149b26ee..37e52977382 100644 --- a/models/basgra/R/write_restart.BASGRA.R +++ b/models/basgra/R/write_restart.BASGRA.R @@ -11,6 +11,7 @@ write_restart.BASGRA <- function(outdir, runid, start.time, stop.time, settings, new.state, RENAME = TRUE, new.params = FALSE, inputs) { + rundir <- settings$host$rundir variables <- colnames(new.state) @@ -26,9 +27,9 @@ write_restart.BASGRA <- function(outdir, runid, start.time, stop.time, settings, } if ("TotSoilCarb" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- udunits2::ud.convert(new.state$TotSoilCarb, "kg m-2", "g m-2") + analysis.save[[length(analysis.save) + 1]] <- new.state$TotSoilCarb if (new.state$TotSoilCarb < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("SOC") + names(analysis.save[[length(analysis.save)]]) <- c("TotSoilCarb") } if (!is.null(analysis.save) && length(analysis.save) > 0){ @@ -48,11 +49,6 @@ write_restart.BASGRA <- function(outdir, runid, start.time, stop.time, settings, inputs = inputs, IC = analysis.save.mat)) - # write.config.BASGRA (defaults, trait.values, settings, run.id) - - - - return(TRUE) } # write_restart.BASGRA \ No newline at end of file From 94a0ba99004cad1f5603e26146f2fb80fc1aa18e Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 4 Jun 2020 20:04:40 +0530 Subject: [PATCH 1046/2289] updated container image --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 3c09ef9c2d7..79fb9e7e89c 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,7 +14,7 @@ jobs: build: # The type of runner that the job will run on runs-on: ubuntu-latest - container: pecan/depends:latest + container: pecan/base:latest steps: From 1d942df692e86465d1b7c667c5eb638a50096fe7 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 6 Jun 2020 12:35:23 +0530 Subject: [PATCH 1047/2289] updated shell version --- .github/workflows/book.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 79fb9e7e89c..be1ed4cd943 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -40,6 +40,7 @@ jobs: - name: looking for generated html run: cd _book + shell: bash - name: Commiting the changes to pecan documentation repo run: | From e35ed3c16163b10d1df94bc1f021e66b351a2de8 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 6 Jun 2020 12:41:36 +0530 Subject: [PATCH 1048/2289] Update book.yml --- .github/workflows/book.yml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index be1ed4cd943..9740788f2e7 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -39,7 +39,9 @@ jobs: run: cd book_source && make - name: looking for generated html - run: cd _book + run: | + ls + cd _book shell: bash - name: Commiting the changes to pecan documentation repo From a02c721cbd6e788a1783f3935d2245a1dd48a62f Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 6 Jun 2020 12:46:03 +0530 Subject: [PATCH 1049/2289] Update book.yml --- .github/workflows/book.yml | 4 +--- 1 file changed, 1 
insertion(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 9740788f2e7..53b1a51c003 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -39,9 +39,7 @@ jobs: run: cd book_source && make - name: looking for generated html - run: | - ls - cd _book + run: cd /book_source/_book shell: bash - name: Commiting the changes to pecan documentation repo From 36d8119ba49efe567c45729a95a9ac7d96acac2a Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 6 Jun 2020 12:57:09 +0530 Subject: [PATCH 1050/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 53b1a51c003..f17d4da3acb 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -39,7 +39,7 @@ jobs: run: cd book_source && make - name: looking for generated html - run: cd /book_source/_book + run: cd /__w/pecan-sandbox/pecan-sandbox/book_source/_book && ls shell: bash - name: Commiting the changes to pecan documentation repo From f87b57f1aaebb8859ceec0445a870308a112633b Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 6 Jun 2020 13:05:06 +0530 Subject: [PATCH 1051/2289] updated git repo changes --- .github/workflows/book.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index f17d4da3acb..f4a097bf128 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -44,6 +44,8 @@ jobs: - name: Commiting the changes to pecan documentation repo run: | + git config --global user.email "pecan_bot@example.com" + git config --global user.name "Mukul Maheshwari" git init git add . 
git commit -m "documentation updated by actions" From 7e9a2a3bb0c935308008232e29f5f43f384e955c Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 6 Jun 2020 14:51:56 -0500 Subject: [PATCH 1052/2289] added ping & model api, authentication & swagger yaml --- apps/api/auth.R | 77 +++++++ apps/api/entrypoint.R | 21 ++ apps/api/models.R | 42 ++++ apps/api/pecanapi-spec.yml | 411 +++++++++++++++++++++++++++++++++++++ apps/api/ping.R | 11 + 5 files changed, 562 insertions(+) create mode 100644 apps/api/auth.R create mode 100644 apps/api/entrypoint.R create mode 100644 apps/api/models.R create mode 100644 apps/api/pecanapi-spec.yml create mode 100644 apps/api/ping.R diff --git a/apps/api/auth.R b/apps/api/auth.R new file mode 100644 index 00000000000..46d30969809 --- /dev/null +++ b/apps/api/auth.R @@ -0,0 +1,77 @@ +library(digest) +library(PEcAn.DB) +library(DBI) +require(RPostgreSQL) + +#* Obtain the encrypted password for a user +#* @param username Username, which is also the 'salt' +#* @param password Unencrypted password +#* @param secretkey Secret Key, which if null, is set to 'notasecret' +#* @return Encrypted password +#* @author Tezan Sahu +get_crypt_pass <- function(username, password, secretkey = NULL) { + secretkey <- if(is.null(secretkey)) "notasecret" else secretkey + dig <- secretkey + salt <- username + for (i in 1:10) { + dig <- digest(paste(dig, salt, password, secretkey, sep="--"), algo="sha1", serialize=FALSE) + } + return(dig) +} + + +#* Check if the encrypted password for the user is valid +#* @param username Username +#* @param crypt_pass Encrypted password +#* @return TRUE if encrypted password is correct, else FALSE +#* @author Tezan Sahu +validate_crypt_pass <- function(username, crypt_pass) { + settings <-list(database = list(bety = list(driver = "PostgreSQL", user = "bety", dbname = "bety", password = "bety", host="postgres"))) + dbcon <- db.open(settings$database$bety) + + qry_statement <- paste0("SELECT crypted_password FROM users WHERE login='", username, "'") + res <- db.query(qry_statement, dbcon) + + if (nrow(res) == 1 && res[1, 1] == crypt_pass) { + return(TRUE) + } + + return(FALSE) +} + +#* Filter to authenticate a user calling the PEcAn API +#* @param req The request +#* @param res The response to be set +#* @return Appropriate response +#* @author Tezan Sahu +authenticate_user <- function(req, res) { + # If the API endpoint called does not requires authentication, allow it to pass through as is + if ( + grepl("swagger", req$PATH_INFO, ignore.case = TRUE) || + grepl("openapi.json", req$PATH_INFO, fixed = TRUE) || + grepl("ping", req$PATH_INFO, ignore.case = TRUE)) + { + return(forward()) + } + + if (!is.null(req$HTTP_AUTHORIZATION)) { + # HTTP_AUTHORIZATION is of the form "Basic ", + # where the is contains : + auth_details <- strsplit(rawToChar(base64_dec(strsplit(req$HTTP_AUTHORIZATION, " +")[[1]][2])), ":")[[1]] + username <- auth_details[1] + password <- auth_details[2] + crypt_pass <- get_crypt_pass(username, password) + + if(validate_crypt_pass(username, crypt_pass)){ + return(forward()) + } + + } + else{ + res$status <- 401 # Unauthorized + return(list(error="Authentication required")) + } + + + +} diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R new file mode 100644 index 00000000000..9b310278d37 --- /dev/null +++ b/apps/api/entrypoint.R @@ -0,0 +1,21 @@ +library("plumber") + +source("auth.R") +source("ping.R") + +root <- plumber$new() +root$setSerializer(serializer_unboxed_json()) + +root$filter("require-auth", authenticate_user) + 
+root$handle("GET", "/api/ping", ping) + +models_pr <- plumber$new("models.R") +root$mount("/api/models", models_pr) + +root$run(host="0.0.0.0", port=8000, swagger = function(pr, spec, ...) { + spec <- yaml::read_yaml("pecanapi-spec.yml") + spec +}) + + diff --git a/apps/api/models.R b/apps/api/models.R new file mode 100644 index 00000000000..966349ecd4e --- /dev/null +++ b/apps/api/models.R @@ -0,0 +1,42 @@ +library(PEcAn.DB) +library(DBI) +library(jsonlite) +require(RPostgreSQL) + + +#' Retrieve the details of a particular version of a model +#' @param name Model name (character) +#' @param revision Model version/revision (character) +#' @return Model details +#' @author Tezan Sahu +#* @get / +getModels <- function(model_name="all", revision="all", res){ + settings <-list(database = list(bety = list(driver = "PostgreSQL", user = "bety", dbname = "bety", password = "bety", host="postgres"))) + dbcon <- db.open(settings$database$bety) + + qry_statement <- "SELECT m.id AS model_id, m.model_name, m.revision, m.modeltype_id, t.name AS model_type FROM models m, modeltypes t WHERE m.modeltype_id = t.id" + if (model_name == "all" & revision == "all"){ + # Leave as it is + } + else if (model_name != "all" & revision == "all"){ + qry_statement <- paste0(qry_statement, " and model_name = '", model_name, "'") + } + else if (model_name == "all" & revision != "all"){ + qry_statement <- paste0(qry_statement, " and revision = '", revision, "'") + } + else{ + qry_statement <- paste0(qry_statement, " and model_name = '", model_name, "' and revision = '", revision, "'") + } + + qry_statement <- paste0(qry_statement, " ORDER BY m.id DESC") + + qry_res <- db.query(qry_statement, dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Model(s) not found")) + else { + qry_res + } +} + diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml new file mode 100644 index 00000000000..0afe6512138 --- /dev/null +++ b/apps/api/pecanapi-spec.yml @@ -0,0 +1,411 @@ +openapi: 3.0.0 +servers: + - description: PEcAn Tezan VM + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/b1303344 + - description: Localhost + url: http://127.0.0.1:8000 + +info: + title: PEcAn Project API + description: >- + This is the API for interacting with server(s) of the __PEcAn Project__. The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. Here's the link to [PEcAn's Github Repository](https://github.com/PecanProject/pecan). + PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models. 
+ version: "1.0.0" + contact: + email: "pecanproj@gmail.com" + license: + name: University of Illinois/NCSA Open Source License + url: https://opensource.org/licenses/NCSA +externalDocs: + description: Find out more about PEcAn Project + url: https://pecanproject.github.io/ + +tags: + - name: general + description: Related to the overall working on the API & its details + - name: workflows + description: Everything about PEcAn workflows + - name: runs + description: Everything about PEcAn runs + - name: models + description: Everything about PEcAn models + +##################################################################################################################### +##################################################### API Endpoints ################################################# +##################################################################################################################### +security: + - basicAuth: [] + +paths: + /api/: + get: + summary: Root of the API tree + tags: + - general + - + responses: + '200': + description: OK + content: + application/json: + schema: + type: object + properties: + message: + type: string + example: This is the API for the PEcAn Project + server: + type: string + status: + type: string + + /api/ping: + get: + summary: Ping the server to check if it is live + tags: + - general + - + responses: + '200': + description: OK + content: + application/json: + schema: + type: object + properties: + req: + type: string + resp: + type: string + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Models not found + + /api/models/: + get: + tags: + - models + - + summary: Details of model(s) + parameters: + - in: query + name: model_name + description: Name of the model + required: false + schema: + type: string + default: all + - in: query + name: revision + description: Model version/revision + required: false + schema: + type: string + default: all + responses: + '200': + description: Available Models + content: + application/json: + schema: + type: array + items: + type: object + $ref: '#/components/schemas/Model' + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Model(s) not found + + + /api/workflows: + get: + tags: + - workflows + - + summary: Get the list of workflows + parameters: + - in: query + name: model_id + description: If provided, returns all workflows that use the provided model_id + required: false + schema: + type: string + responses: + '200': + description: List of workflow ids + content: + application/json: + schema: + type: array + items: string + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Workflows not found + + post: + tags: + - workflows + - + summary: Submit a new PEcAn workflow + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/Workflow' + responses: + '201': + description: Submitted workflow successfully + '401': + description: Authentication required + + + /api/workflows/{id}: + get: + tags: + - workflows + - + summary: Get the details of a PEcAn Workflow + parameters: + - in: path + name: id + description: ID of the PEcAn Workflow + required: true + schema: + type: string + responses: + '200': + description: Details of the requested PEcAn Workflow + content: + application/json: + schema: + $ref: '#/components/schemas/Workflow' + '401': + description: Authentication required 
+ '403': + description: Access forbidden + '404': + description: Workflow with specified ID was not found + + + + + + /api/workflows/{id}/runs: + get: + tags: + - workflows + - runs + summary: Get the list of all runs for a specified PEcAn Workflow + parameters: + - in: path + name: id + description: ID of the PEcAn Workflow + required: true + schema: + type: string + responses: + '200': + description: List of all runs for the requested PEcAn Workflow + content: + application/json: + schema: + type: array + items: + type: object + properties: + id: + type: string + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Workflow with specified ID was not found + + + /api/run/{id}: + get: + tags: + - runs + - + summary: Get the details of a specified PEcAn run + parameters: + - in: path + name: id + description: ID of the PEcAn run + required: true + schema: + type: string + responses: + '200': + description: Details about the requested run within the requested PEcAn workflow + content: + application/json: + schema: + $ref: '#/components/schemas/Run' + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Workflow with specified ID was not found + + +##################################################################################################################### +###################################################### Components ################################################### +##################################################################################################################### + +components: + schemas: + Model: + properties: + model_id: + type: string + model_name: + type: string + revision: + type: string + modeltype_id: + type: string + + Run: + properties: + run_id: + type: string + workflow_id: + type: string + runtype: + type: string + quantile: + type: number + site: + type: object + properties: + id: + type: string + name: + type: string + lat: + type: number + lon: + type: number + pfts: + type: array + items: + type: object + properties: + name: + type: string + constants: + type: array + items: + type: number + model: + type: object + properties: + id: + type: string + name: + type: string + inputs: + type: array + items: + type: object + properties: + type: + type: string + id: + type: string + source: + type: string + path: + type: string + start_date: + type: string + end_date: + type: string + + Workflow: + properties: + id: + type: string + pfts: + type: array + items: + type: string + model: + type: object + properties: + id: + type: string + name: + type: string + meta_analysis: + type: object + properties: + iter: + type: number + random_effects: + type: object + properties: + "on": + type: boolean + use_ghs: + type: boolean + update: + type: boolean + threshold: + type: number + ensemble: + type: object + properties: + size: + type: number + variable: + type: string + sampling_space: + type: object + properties: + parameters.method: + type: string + met.method: + type: string + sensitivity_analysis: + type: object + properties: + quantiles_sigma: + type: array + items: + type: number + variable: + type: string + perpft: + type: boolean + runs: + type: array + items: + type: string + host: + type: object + properties: + name: + type: string + status: + type: string + securitySchemes: + basicAuth: + type: http + scheme: basic + + + diff --git a/apps/api/ping.R b/apps/api/ping.R new file mode 100644 index 00000000000..7af47a9bd4e --- 
/dev/null +++ b/apps/api/ping.R @@ -0,0 +1,11 @@ +#------------------------------------------- Ping function -------------------------------------------# +#* Function to be executed when /ping API endpoint is called +#* +#* If successful connection to API server is established, this function will return the "pong" message +#* @return Mapping containing response as "pong" +#* @author Tezan Sahu +ping <- function(){ + res <- list(request="ping", response="pong") + res +} + From e1736978c0218692033a7d7bacb9a7bdfb752af6 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 6 Jun 2020 15:18:25 -0500 Subject: [PATCH 1053/2289] directory refactoring --- base/api/DESCRIPTION | 20 --- base/api/NAMESPACE | 3 - base/api/R/entrypoint.R | 7 - base/api/R/ping.R | 13 -- base/api/inst/pecanapi-spec.yml | 303 -------------------------------- base/api/man/ping.Rd | 17 -- 6 files changed, 363 deletions(-) delete mode 100644 base/api/DESCRIPTION delete mode 100644 base/api/NAMESPACE delete mode 100644 base/api/R/entrypoint.R delete mode 100644 base/api/R/ping.R delete mode 100644 base/api/inst/pecanapi-spec.yml delete mode 100644 base/api/man/ping.Rd diff --git a/base/api/DESCRIPTION b/base/api/DESCRIPTION deleted file mode 100644 index 06a15434bb5..00000000000 --- a/base/api/DESCRIPTION +++ /dev/null @@ -1,20 +0,0 @@ -Package: pecanapi -Title: R API for Dockerized remote PEcAn instances -Version: 1.7.1 -Authors@R: - person(given = "Tezan", - family = "Sahu", - role = c("aut", "cre"), - email = "tezansahu@gmail.com", - comment = c(ORCID = "0000-0003-1031-9683")) -Description:The package contains functions that deliver the functionalty for PEcAn's API -Imports: - plumber, - yaml, - XML, - DBI -License: BSD_3_clause + file LICENSE -Encoding: UTF-8 -LazyData: true -Roxygen: list(markdown = TRUE) -RoxygenNote: 7.1.0 diff --git a/base/api/NAMESPACE b/base/api/NAMESPACE deleted file mode 100644 index 368ef012b6b..00000000000 --- a/base/api/NAMESPACE +++ /dev/null @@ -1,3 +0,0 @@ -# Generated by roxygen2: do not edit by hand - -export(ping) diff --git a/base/api/R/entrypoint.R b/base/api/R/entrypoint.R deleted file mode 100644 index 84e5cfed11a..00000000000 --- a/base/api/R/entrypoint.R +++ /dev/null @@ -1,7 +0,0 @@ -root <- plumber::plumber$new() -root$handle("GET", "/ping", pecanapi::ping) - -root$run(port=8000, swagger = function(pr, spec, ...) { - spec <- yaml::read_yaml("inst/pecan-api-spec.yml") - spec -}) diff --git a/base/api/R/ping.R b/base/api/R/ping.R deleted file mode 100644 index f159ac01b96..00000000000 --- a/base/api/R/ping.R +++ /dev/null @@ -1,13 +0,0 @@ -#------------------------------------------- Ping function -------------------------------------------# -##' Function to be executed when /ping API endpoint is called -##' -##' If successful connection to API server is established, this function will return the "pong" message -##' @return Mapping containing response as "pong" -##' @author Tezan Sahu -##' @export -ping <- function(){ - list( - request = "ping", - response = "pong" - ) -} diff --git a/base/api/inst/pecanapi-spec.yml b/base/api/inst/pecanapi-spec.yml deleted file mode 100644 index c3244068477..00000000000 --- a/base/api/inst/pecanapi-spec.yml +++ /dev/null @@ -1,303 +0,0 @@ -openapi: 3.0.0 -servers: - - description: Localhost - url: http://127.0.0.1:8000 -info: - title: PEcAn Project API - description: >- - This is the API for interacting with server(s) of the __PEcAn Project__. 
The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. Here's the link to [PEcAn's Github Repository](https://github.com/PecanProject/pecan). - PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models. - version: "1.0.0" - contact: - email: "pecanproj@gmail.com" - license: - name: University of Illinois/NCSA Open Source License - url: https://opensource.org/licenses/NCSA -externalDocs: - description: Find out more about PEcAn Project - url: https://pecanproject.github.io/ - -tags: - - name: general - description: Related to the overall working on the API & its details - - name: workflows - description: Everything about PEcAn workflows - - name: runs - description: Everything about PEcAn runs - -##################################################################################################################### -##################################################### API Endpoints ################################################# -##################################################################################################################### - -paths: - /: - get: - summary: Root of the API tree - tags: - - general - - - responses: - '200': - description: OK - content: - application/json: - schema: - type: object - properties: - message: - type: string - example: This is the API for the PEcAn Project - server: - type: string - status: - type: string - - /ping: - get: - summary: Ping the server to check if it is live - tags: - - general - - - responses: - '200': - description: OK - content: - text/plain: - schema: - type: string - example: pong - - - /workflow/{id}: - get: - tags: - - workflows - - - summary: Get the details of a PEcAn Workflow - parameters: - - in: path - name: id - description: ID of the PEcAn Workflow - required: true - schema: - type: string - responses: - '200': - description: Details of the requested PEcAn Workflow - content: - application/json: - schema: - $ref: '#/components/schemas/Workflow' - '403': - description: Access forbidden - '404': - description: Workflow with specified ID was not found - - - /workflow: - post: - tags: - - workflows - - - summary: Submit a new PEcAn workflow - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/Workflow' - responses: - '201': - description: Submitted workflow successfully - - - /workflow/{id}/runs: - get: - tags: - - workflows - - runs - summary: Get the list of all runs for a specified PEcAn Workflow - parameters: - - in: path - name: id - description: ID of the PEcAn Workflow - required: true - schema: - type: string - responses: - '200': - description: List of all runs for the requested PEcAn Workflow - content: - application/json: - schema: - type: array - items: - type: object - properties: - id: - type: string - '403': - description: Access forbidden - '404': - description: Workflow with specified ID was not found - - - /run/{id}: - get: - tags: - - runs - - - summary: Get the details of a specified PEcAn run - parameters: - - in: path - name: id - description: ID of the PEcAn run - required: true - schema: - type: string - responses: - '200': - description: Details about the requested run within 
the requested PEcAn workflow - content: - application/json: - schema: - $ref: '#/components/schemas/Run' - '403': - description: Access forbidden - '404': - description: Workflow with specified ID was not found - - -##################################################################################################################### -##################################################### Model Schemas ################################################# -##################################################################################################################### - -components: - schemas: - Run: - properties: - run_id: - type: string - workflow_id: - type: string - runtype: - type: string - quantile: - type: number - site: - type: object - properties: - id: - type: string - name: - type: string - lat: - type: number - lon: - type: number - pfts: - type: array - items: - type: object - properties: - name: - type: string - constants: - type: array - items: - type: number - model: - type: object - properties: - id: - type: string - name: - type: string - inputs: - type: array - items: - type: object - properties: - type: - type: string - id: - type: string - source: - type: string - path: - type: string - start_date: - type: string - end_date: - type: string - - Workflow: - properties: - id: - type: string - pfts: - type: array - items: - type: string - model: - type: object - properties: - id: - type: string - name: - type: string - meta_analysis: - type: object - properties: - iter: - type: number - random_effects: - type: object - properties: - "on": - type: boolean - use_ghs: - type: boolean - update: - type: boolean - threshold: - type: number - ensemble: - type: object - properties: - size: - type: number - variable: - type: string - sampling_space: - type: object - properties: - parameters.method: - type: string - met.method: - type: string - sensitivity_analysis: - type: object - properties: - quantiles_sigma: - type: array - items: - type: number - variable: - type: string - perpft: - type: boolean - runs: - type: array - items: - type: string - host: - type: object - properties: - name: - type: string - status: - type: string - - - diff --git a/base/api/man/ping.Rd b/base/api/man/ping.Rd deleted file mode 100644 index 5c52c93de08..00000000000 --- a/base/api/man/ping.Rd +++ /dev/null @@ -1,17 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/ping.R -\name{ping} -\alias{ping} -\title{Function to be executed when /ping API endpoint is called} -\usage{ -ping() -} -\value{ -Mapping containing response as "pong" -} -\description{ -If successful connection to API server is established, this function will return the "pong" message -} -\author{ -Tezan Sahu -} From 0ba03e0dd16069cec41c26aa0a8958d3a342876c Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 7 Jun 2020 05:33:52 +0000 Subject: [PATCH 1054/2289] removed importing of dependencies --- apps/api/auth.R | 38 ++++++++++++++++++++------------------ apps/api/entrypoint.R | 8 +++----- apps/api/models.R | 19 ++++++++++--------- apps/api/pecanapi-spec.yml | 2 +- apps/api/ping.R | 3 +-- 5 files changed, 35 insertions(+), 35 deletions(-) diff --git a/apps/api/auth.R b/apps/api/auth.R index 46d30969809..261339f2e48 100644 --- a/apps/api/auth.R +++ b/apps/api/auth.R @@ -1,8 +1,3 @@ -library(digest) -library(PEcAn.DB) -library(DBI) -require(RPostgreSQL) - #* Obtain the encrypted password for a user #* @param username Username, which is also the 'salt' #* @param password Unencrypted password @@ -14,23 
+9,34 @@ get_crypt_pass <- function(username, password, secretkey = NULL) { dig <- secretkey salt <- username for (i in 1:10) { - dig <- digest(paste(dig, salt, password, secretkey, sep="--"), algo="sha1", serialize=FALSE) + dig <- digest::digest( + paste(dig, salt, password, secretkey, sep="--"), + algo="sha1", + serialize=FALSE + ) } return(dig) } + #* Check if the encrypted password for the user is valid #* @param username Username #* @param crypt_pass Encrypted password #* @return TRUE if encrypted password is correct, else FALSE #* @author Tezan Sahu validate_crypt_pass <- function(username, crypt_pass) { - settings <-list(database = list(bety = list(driver = "PostgreSQL", user = "bety", dbname = "bety", password = "bety", host="postgres"))) - dbcon <- db.open(settings$database$bety) + settings <-list(database = list(bety = list( + driver = "PostgreSQL", + user = "bety", + dbname = "bety", + password = "bety", + host="postgres" + ))) + dbcon <- PEcAn.DB::db.open(settings$database$bety) qry_statement <- paste0("SELECT crypted_password FROM users WHERE login='", username, "'") - res <- db.query(qry_statement, dbcon) + res <- PEcAn.DB::db.query(qry_statement, dbcon) if (nrow(res) == 1 && res[1, 1] == crypt_pass) { return(TRUE) @@ -51,27 +57,23 @@ authenticate_user <- function(req, res) { grepl("openapi.json", req$PATH_INFO, fixed = TRUE) || grepl("ping", req$PATH_INFO, ignore.case = TRUE)) { - return(forward()) + return(plumber::forward()) } if (!is.null(req$HTTP_AUTHORIZATION)) { # HTTP_AUTHORIZATION is of the form "Basic ", # where the is contains : - auth_details <- strsplit(rawToChar(base64_dec(strsplit(req$HTTP_AUTHORIZATION, " +")[[1]][2])), ":")[[1]] + auth_details <- strsplit(rawToChar(jsonlite::base64_dec(strsplit(req$HTTP_AUTHORIZATION, " +")[[1]][2])), ":")[[1]] username <- auth_details[1] password <- auth_details[2] crypt_pass <- get_crypt_pass(username, password) if(validate_crypt_pass(username, crypt_pass)){ - return(forward()) + return(plumber::forward()) } } - else{ - res$status <- 401 # Unauthorized - return(list(error="Authentication required")) - } - - + res$status <- 401 # Unauthorized + return(list(error="Authentication required")) } diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R index 9b310278d37..84d311b36bd 100644 --- a/apps/api/entrypoint.R +++ b/apps/api/entrypoint.R @@ -1,16 +1,14 @@ -library("plumber") - source("auth.R") source("ping.R") -root <- plumber$new() -root$setSerializer(serializer_unboxed_json()) +root <- plumber::plumber$new() +root$setSerializer(plumber::serializer_unboxed_json()) root$filter("require-auth", authenticate_user) root$handle("GET", "/api/ping", ping) -models_pr <- plumber$new("models.R") +models_pr <- plumber::plumber$new("models.R") root$mount("/api/models", models_pr) root$run(host="0.0.0.0", port=8000, swagger = function(pr, spec, ...) 
{ diff --git a/apps/api/models.R b/apps/api/models.R index 966349ecd4e..374c7da3f20 100644 --- a/apps/api/models.R +++ b/apps/api/models.R @@ -1,9 +1,3 @@ -library(PEcAn.DB) -library(DBI) -library(jsonlite) -require(RPostgreSQL) - - #' Retrieve the details of a particular version of a model #' @param name Model name (character) #' @param revision Model version/revision (character) @@ -11,8 +5,14 @@ require(RPostgreSQL) #' @author Tezan Sahu #* @get / getModels <- function(model_name="all", revision="all", res){ - settings <-list(database = list(bety = list(driver = "PostgreSQL", user = "bety", dbname = "bety", password = "bety", host="postgres"))) - dbcon <- db.open(settings$database$bety) + settings <-list(database = list(bety = list( + driver = "PostgreSQL", + user = "bety", + dbname = "bety", + password = "bety", + host="postgres" + ))) + dbcon <- PEcAn.DB::db.open(settings$database$bety) qry_statement <- "SELECT m.id AS model_id, m.model_name, m.revision, m.modeltype_id, t.name AS model_type FROM models m, modeltypes t WHERE m.modeltype_id = t.id" if (model_name == "all" & revision == "all"){ @@ -30,11 +30,12 @@ getModels <- function(model_name="all", revision="all", res){ qry_statement <- paste0(qry_statement, " ORDER BY m.id DESC") - qry_res <- db.query(qry_statement, dbcon) + qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Model(s) not found")) + } else { qry_res } diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 0afe6512138..af7627baa94 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/b1303344 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/baaeb5ea - description: Localhost url: http://127.0.0.1:8000 diff --git a/apps/api/ping.R b/apps/api/ping.R index 7af47a9bd4e..32a99b9c8c6 100644 --- a/apps/api/ping.R +++ b/apps/api/ping.R @@ -1,5 +1,4 @@ -#------------------------------------------- Ping function -------------------------------------------# -#* Function to be executed when /ping API endpoint is called +#* Function to be executed when /api/ping endpoint is called #* #* If successful connection to API server is established, this function will return the "pong" message #* @return Mapping containing response as "pong" From cf000aaca6201e9139096830c67766fbd5bcfb10 Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 7 Jun 2020 04:42:03 -0400 Subject: [PATCH 1055/2289] last values --- models/basgra/R/read_restart.BASGRA.R | 19 +----- models/basgra/R/run_BASGRA.R | 6 +- models/basgra/R/write.config.BASGRA.R | 86 ++++++++++++++++++++++++- models/basgra/R/write_restart.BASGRA.R | 1 - models/basgra/inst/BASGRA_params.Rdata | Bin 2675 -> 2919 bytes models/basgra/src/parameters_plant.f90 | 8 +-- models/basgra/src/parameters_site.f90 | 16 ++--- models/basgra/src/set_params.f90 | 15 ++++- 8 files changed, 117 insertions(+), 34 deletions(-) diff --git a/models/basgra/R/read_restart.BASGRA.R b/models/basgra/R/read_restart.BASGRA.R index 0cbb48cbfb1..4a56334eb87 100644 --- a/models/basgra/R/read_restart.BASGRA.R +++ b/models/basgra/R/read_restart.BASGRA.R @@ -13,12 +13,7 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p forecast <- list() - year <- lubridate::year(stop.time) - - start_date <- as.POSIXlt(settings$run$start.date, tz = "UTC") - end_date <- as.POSIXlt(settings$run$end.date, tz = "UTC") - start_year <- 
lubridate::year(start_date) - end_year <- lubridate::year(end_date) + # maybe have some checks here to make sure the first run is actually ran for the period you requested # Read ensemble output ens <- read.output(runid = runid, @@ -27,17 +22,7 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p end.year = lubridate::year(stop.time), variables = var.names) - if(year == start_year & year != end_year){ - simdays <- seq(lubridate::yday(start_date), lubridate::yday(stop.time)) - } - # To BE CONTINUED... - #else if(year != start_year & year == end_year){ - # simdays <- seq(1, lubridate::yday(end_date)) - #}else{ - # simdays <- seq(lubridate::yday(start_date), lubridate::yday(end_date)) - #} - - last <- length(simdays) + last <- length(ens[[1]]) params$restart <- c() diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 26243f55356..be05d36db3f 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -264,6 +264,10 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, NOUT, matrix(0, NDAYS, NOUT))[[8]] + # for now a hack to write other states out + last_vals <- output[nrow(output),] + names(last_vals) <- outputNames + save(last_vals, file = file.path(outdir, "last_vals_basgra.Rdata")) ############################# WRITE OUTPUTS ########################### # writing model outputs already in standard format @@ -291,7 +295,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, csoms <- output[thisyear, which(outputNames == "CSOMS")] # (g C m-2) outlist[[5]] <- udunits2::ud.convert(csoms, "g m-2", "kg m-2") - outlist[[6]] <- udunits2::ud.convert(clitt + csomf + csoms, "g m-2", "kg m-2") + outlist[[6]] <- udunits2::ud.convert(csomf + csoms, "g m-2", "kg m-2") # Soil Respiration in kgC/m2/s rsoil <- output[thisyear, which(outputNames == "Rsoil")] # (g C m-2 d-1) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index f10b19ef93d..f8c75e71636 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -251,6 +251,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N } #### End parameter update + #### Update initial conditions if (!is.null(IC)) { @@ -258,7 +259,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ic.names <- names(IC) if ("LAI" %in% ic.names) { - run_params[which(names(run_params) == "LOG10LAII")] <- log(IC$LAI) + run_params[which(names(run_params) == "LOG10LAII")] <- log10(IC$LAI) } if ("TotSoilCarb" %in% ic.names) { @@ -277,7 +278,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ## laiInit m2/m2 lai <- try(ncdf4::ncvar_get(IC.nc, "LAI"), silent = TRUE) if (!is.na(lai) && is.numeric(lai)) { - run_params[which(names(run_params) == "LOG10LAII")] <- log(lai) + run_params[which(names(run_params) == "LOG10LAII")] <- log10(lai) } # This is IC @@ -339,6 +340,87 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N #} } + + ################################################################## + ######################### PREVIOUS STATE ######################### + ################################################################## + + # developing hack: overwrite initial values with previous time steps + last_states_file <- file.path(outdir, "last_vals_basgra.Rdata") + + if(file.exists(last_states_file)){ + + load(last_states_file) + + # LOG10CLVI = pa(1) 
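Each line in the block that follows maps one saved output in `last_vals` back onto the matching initial-condition parameter; the `pa(n)` comments give that parameter's position in BASGRA's Fortran parameter vector (see `set_params.f90` in this commit's file list). Carbon pools are carried as log10 values, hence the `log10()` transforms. A compressed sketch of the round trip between `run_BASGRA` and `write.config.BASGRA`, with hypothetical numbers:

```r
# run_BASGRA saves the final state of a run ...
last_vals <- c(CLV = 48.2, CRES = 9.7)   # hypothetical leaf/reserve pools (g C m-2)
save(last_vals, file = "last_vals_basgra.Rdata")

# ... and write.config.BASGRA reloads it to warm-start the next run
load("last_vals_basgra.Rdata")
run_params <- c(LOG10CLVI = 0, LOG10CRESI = 0)
run_params["LOG10CLVI"]  <- log10(last_vals["CLV"])    # pools are stored as log10
run_params["LOG10CRESI"] <- log10(last_vals["CRES"])
```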
+ run_params[names(run_params) == "LOG10CLVI"] <- log10(last_vals[names(last_vals) == "CLV"]) + + # LOG10CRESI = pa(2) + run_params[names(run_params) == "LOG10CRESI"] <- log10(last_vals[names(last_vals) == "CRES"]) + + # LOG10CRTI = pa(3) + run_params[names(run_params) == "LOG10CRTI"] <- log10(last_vals[names(last_vals) == "CRT"]) + + # CSTI = pa(4) + run_params[names(run_params) == "CSTI"] <- last_vals[names(last_vals) == "CST"] + + # LOG10LAII handled above + + # PHENI = pa(6) + run_params[names(run_params) == "PHENI"] <- last_vals[names(last_vals) == "PHEN"] + + # TILTOTI = pa(7) + run_params[names(run_params) == "TILTOTI"] <- last_vals[names(last_vals) == "TILG"] + last_vals[names(last_vals) == "TILV"] + + # FRTILGI = pa(8) + run_params[names(run_params) == "FRTILGI"] <- last_vals[names(last_vals) == "FRTILG"] + + # LT50I = pa(9) + run_params[names(run_params) == "LT50I"] <- last_vals[names(last_vals) == "LT50"] + + # CLITT0 = pa( 82) ! (g C m-2) Initial C in litter + run_params[names(run_params) == "CLITT0"] <- last_vals[names(last_vals) == "CLITT"] + + # CSOM0 = pa( 83) ! (g C m-2) Initial C in OM - handled above + + # CNLITT0 = pa( 84) ! (g C g-1 N) Initial C/N ratio of litter + run_params[names(run_params) == "CNLITT0"] <- last_vals[names(last_vals) == "CLITT"] / last_vals[names(last_vals) == "NLITT"] + + # FCSOMF0 = pa( 87) ! (-) Initial C in fast-decomposing OM as a fraction of total OM + # preserve the ratio + run_params[which(names(run_params) == "FCSOMF0")] <- last_vals[names(last_vals) == "CSOMF"] / + (last_vals[names(last_vals) == "CSOMF"] + last_vals[names(last_vals) == "CSOMS"]) + + # CNSOMF0 = pa( 85) ! (g C g-1 N) Initial C/N ratio of fast-decomposing OM + csomf <- run_params[which(names(run_params) == "FCSOMF0")] * run_params[which(names(run_params) == "CSOM0")] + run_params[names(run_params) == "CNSOMF0"] <- csomf / last_vals[names(last_vals) == "NSOMF"] + + # CNSOMS0 = pa( 86) ! 
(g C g-1 N) Initial C/N ratio of slowly decomposing OM
+    csoms <- (1 - run_params[which(names(run_params) == "FCSOMF0")]) * run_params[which(names(run_params) == "CSOM0")]
+    run_params[names(run_params) == "CNSOMS0"] <- csoms / last_vals[names(last_vals) == "NSOMS"]
+
+    PHENRF <- (1 - run_params[names(run_params) == "PHENI"])/(1 - run_params[names(run_params) == "PHENCR"])
+    if (PHENRF > 1.0) PHENRF = 1.0
+    if (PHENRF < 0.0) PHENRF = 0.0
+    run_params[names(run_params) == "NELLVM"] <- last_vals[names(last_vals) == "NELLVG"] / PHENRF
+
+    run_params[names(run_params) == "CLVDI"] <- last_vals[names(last_vals) == "CLVD"]
+    run_params[names(run_params) == "YIELDI"] <- last_vals[names(last_vals) == "YIELD"]
+    run_params[names(run_params) == "CSTUBI"] <- last_vals[names(last_vals) == "CSTUB"]
+
+    run_params[names(run_params) == "ROOTDM"] <- last_vals[names(last_vals) == "ROOTD"]
+
+    run_params[names(run_params) == "DRYSTORI"] <- last_vals[names(last_vals) == "DRYSTOR"]
+    run_params[names(run_params) == "FdepthI"] <- last_vals[names(last_vals) == "Fdepth"]
+    run_params[names(run_params) == "SDEPTHI"] <- last_vals[names(last_vals) == "Sdepth"]
+    run_params[names(run_params) == "TANAERI"] <- last_vals[names(last_vals) == "TANAER"]
+    run_params[names(run_params) == "WAPLI"] <- last_vals[names(last_vals) == "WAPL"]
+    run_params[names(run_params) == "WAPSI"] <- last_vals[names(last_vals) == "WAPS"]
+    run_params[names(run_params) == "WASI"] <- last_vals[names(last_vals) == "WAS"]
+    run_params[names(run_params) == "WETSTORI"] <- last_vals[names(last_vals) == "WETSTOR"]
+
+  }
+
   #-----------------------------------------------------------------------
   # write job.sh
diff --git a/models/basgra/R/write_restart.BASGRA.R b/models/basgra/R/write_restart.BASGRA.R
index 37e52977382..52f3e384002 100644
--- a/models/basgra/R/write_restart.BASGRA.R
+++ b/models/basgra/R/write_restart.BASGRA.R
@@ -46,7 +46,6 @@ write_restart.BASGRA <- function(outdir, runid, start.time, stop.time, settings,
                                  trait.values = new.params,
                                  settings = settings,
                                  run.id = runid,
-                                 inputs = inputs,
                                  IC = analysis.save.mat))
 
   return(TRUE)
 } # write_restart.BASGRA
diff --git a/models/basgra/inst/BASGRA_params.Rdata b/models/basgra/inst/BASGRA_params.Rdata
index 4e800797b7b47a5c73719ed59d370d657ba7aba9..81cf98282011e146fb583635b626273b3090bf8e 100644
GIT binary patch
delta 269
zcmew?@?30!1!K)d%P!^s1}cNeyP1_3Yc}6w?qjQGVc-Os&g$$F=Hkf!;jl$|y80k^
z&cPv}PB1wRm!Qbt5dR<;pWQ7bwV)&e&JA{P4G8gob3+{c99`jxS;HLzd|+}QPOzs6
Z)J~Rg$6%NYN4RSU#100Ae?Z6m0|0MCA%*||
delta 31
lcmaDZ_E}_t1!Lhx%P!{0(kxPpg`3S-`q
[extraction gap: the remainder of this binary delta, the diffs for models/basgra/src/parameters_plant.f90, parameters_site.f90, and set_params.f90 listed in the commit stat above, and the "From" header lines of the next commit are missing]
Date: Sun, 7 Jun 2020 06:22:37 -0400
Subject: [PATCH 1056/2289] few more states

---
 models/basgra/R/write.config.BASGRA.R | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R
index f8c75e71636..e80c137b0ef 100644
--- a/models/basgra/R/write.config.BASGRA.R
+++ b/models/basgra/R/write.config.BASGRA.R
@@ -419,6 +419,13 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N
     run_params[names(run_params) == "WETSTORI"] <- last_vals[names(last_vals) == "WETSTOR"]
 
+    run_params[names(run_params) == "WCI"] <- last_vals[names(last_vals) == "WAL"] / (1000 * last_vals[names(last_vals) == "ROOTD"])
+
+    # this is probably not changing
+    run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"]/
+      (last_vals[names(last_vals) == "FRTILG1"] + last_vals[names(last_vals) == "FRTILG2"])
+
+
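The `WCI` line above converts units rather than copying a state one-to-one: `WAL` is root-zone water in mm (that is, 10^-3 m3 per m2 of ground), so dividing by 1000 and by the rooting depth `ROOTD` (m) yields the volumetric water content that `WCI` expects. A quick check with hypothetical numbers:

```r
WAL   <- 300                   # hypothetical root-zone water, mm (= 0.3 m3 m-2)
ROOTD <- 0.6                   # hypothetical rooting depth, m
WCI   <- WAL / (1000 * ROOTD)  # 0.5, a volumetric fraction in m3 m-3
```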
run_params[names(run_params) == "NMIN0"] <- last_vals[names(last_vals) == "NMIN"] } From e69454e9674c863db4bce486521223690c744a72 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 7 Jun 2020 10:55:24 +0000 Subject: [PATCH 1057/2289] db connections closed --- apps/api/auth.R | 2 ++ apps/api/models.R | 1 + 2 files changed, 3 insertions(+) diff --git a/apps/api/auth.R b/apps/api/auth.R index 261339f2e48..479406bdb0b 100644 --- a/apps/api/auth.R +++ b/apps/api/auth.R @@ -38,6 +38,8 @@ validate_crypt_pass <- function(username, crypt_pass) { qry_statement <- paste0("SELECT crypted_password FROM users WHERE login='", username, "'") res <- PEcAn.DB::db.query(qry_statement, dbcon) + PEcAn.DB::db.close(dbcon) + if (nrow(res) == 1 && res[1, 1] == crypt_pass) { return(TRUE) } diff --git a/apps/api/models.R b/apps/api/models.R index 374c7da3f20..a1af1dfac4c 100644 --- a/apps/api/models.R +++ b/apps/api/models.R @@ -31,6 +31,7 @@ getModels <- function(model_name="all", revision="all", res){ qry_statement <- paste0(qry_statement, " ORDER BY m.id DESC") qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + PEcAn.DB::db.close(dbcon) if (nrow(qry_res) == 0) { res$status <- 404 From 182ff64cf4ba5af84cf62424adbf0c9b5eb6065a Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 7 Jun 2020 10:56:08 +0000 Subject: [PATCH 1058/2289] added api to filter workflow by model/site id [w/o pagination] --- apps/api/entrypoint.R | 3 +++ apps/api/pecanapi-spec.yml | 17 +++++++++---- apps/api/workflows.R | 51 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 66 insertions(+), 5 deletions(-) create mode 100644 apps/api/workflows.R diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R index 84d311b36bd..9689f22513a 100644 --- a/apps/api/entrypoint.R +++ b/apps/api/entrypoint.R @@ -11,6 +11,9 @@ root$handle("GET", "/api/ping", ping) models_pr <- plumber::plumber$new("models.R") root$mount("/api/models", models_pr) +workflows_pr <- plumber::plumber$new("workflows.R") +root$mount("/api/workflows", workflows_pr) + root$run(host="0.0.0.0", port=8000, swagger = function(pr, spec, ...) 
{ spec <- yaml::read_yaml("pecanapi-spec.yml") spec diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index af7627baa94..03f3c8ed8ed 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/baaeb5ea + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/626b0168 - description: Localhost url: http://127.0.0.1:8000 @@ -97,14 +97,12 @@ paths: required: false schema: type: string - default: all - in: query name: revision description: Model version/revision required: false schema: type: string - default: all responses: '200': description: Available Models @@ -123,7 +121,7 @@ paths: description: Model(s) not found - /api/workflows: + /api/workflows/: get: tags: - workflows @@ -136,6 +134,12 @@ paths: required: false schema: type: string + - in: query + name: site_id + description: If provided, returns all workflows that use the provided site_id + required: false + schema: + type: string responses: '200': description: List of workflow ids @@ -143,7 +147,10 @@ paths: application/json: schema: type: array - items: string + items: + type: object + $ref: '#/components/schemas/Workflow' + '401': description: Authentication required '403': diff --git a/apps/api/workflows.R b/apps/api/workflows.R new file mode 100644 index 00000000000..42065f8d13c --- /dev/null +++ b/apps/api/workflows.R @@ -0,0 +1,51 @@ +#' Get the list of workflows (using a particular model & site, if specified) +#' @param model_id Model id (character) +#' @param site_id Site id (character) +#' @return List of workflows (using a particular model & site, if specified) +#' @author Tezan Sahu +#* @get / +getWorkflows <- function(model_id=NULL, site_id=NULL, res){ + settings <-list(database = list(bety = list( + driver = "PostgreSQL", + user = "bety", + dbname = "bety", + password = "bety", + host="postgres" + ))) + dbcon <- PEcAn.DB::db.open(settings$database$bety) + qry_statement <- paste( + "SELECT w.id, a.value AS properties", + "FROM workflows w", + "FULL OUTER JOIN attributes a", + "ON (w.id = a.container_id)" + ) + + if (is.null(model_id) & is.null(site_id)){ + # Leave as it is + } + else if (!is.null(model_id) & is.null(site_id)){ + qry_statement <- paste0(qry_statement, " WHERE w.model_id = '", model_id, "'") + } + else if (is.null(model_id) & !is.null(site_id)){ + qry_statement <- paste0(qry_statement, " WHERE w.site_id = '", site_id, "'") + } + else{ + qry_statement <- paste0(qry_statement, " WHERE w.model_id = '", model_id, "' and w.site_id = '", site_id, "'") + } + + qry_statement <- paste0(qry_statement, " ORDER BY id") + + qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Workflows not found")) + } + else { + qry_res$properties[is.na(qry_res$properties)] = "{}" + qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) + return(list(workflows = qry_res)) + } +} \ No newline at end of file From a06c1149b81cddb51a6713ef302f3b7cce782bc8 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 7 Jun 2020 12:01:46 +0000 Subject: [PATCH 1059/2289] added endpoint to get details of workflow by id --- apps/api/pecanapi-spec.yml | 15 ++++++++----- apps/api/workflows.R | 45 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+), 6 deletions(-) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 03f3c8ed8ed..136c5e4ebb7 
100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/626b0168 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/84e574de - description: Localhost url: http://127.0.0.1:8000 @@ -142,14 +142,17 @@ paths: type: string responses: '200': - description: List of workflow ids + description: List of workflows content: application/json: schema: - type: array - items: - type: object - $ref: '#/components/schemas/Workflow' + type: object + properties: + workflows: + type: array + items: + type: object + $ref: '#/components/schemas/Workflow' '401': description: Authentication required diff --git a/apps/api/workflows.R b/apps/api/workflows.R index 42065f8d13c..dbdfba6e524 100644 --- a/apps/api/workflows.R +++ b/apps/api/workflows.R @@ -48,4 +48,49 @@ getWorkflows <- function(model_id=NULL, site_id=NULL, res){ qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) return(list(workflows = qry_res)) } +} + +################################################################################################# + +#' Get the of the workflow specified by the id +#' @param id Workflow id (character) +#' @return Details of requested workflow +#' @author Tezan Sahu +#* @get / +getWorkflowDetails <- function(id, res){ + settings <-list(database = list(bety = list( + driver = "PostgreSQL", + user = "bety", + dbname = "bety", + password = "bety", + host="postgres" + ))) + dbcon <- PEcAn.DB::db.open(settings$database$bety) + qry_statement <- paste0( + "SELECT w.id, w.model_id, w.site_id, a.value AS properties ", + "FROM workflows w ", + "FULL OUTER JOIN attributes a ", + "ON (w.id = a.container_id) ", + "WHERE w.id='", id, "'" + ) + + qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Workflow with specified ID was not found")) + } + else { + if(is.na(qry_res$properties)){ + res <- list(id = id, modelid = qry_res$model_id, siteid = qry_res$site_id) + } + else{ + res <- jsonlite::parse_json(qry_res$properties[[1]]) + res$id <- id + } + + return(res) + } } \ No newline at end of file From f32b2dda9983ce05a2e87ea5f57c3fc55643c9cc Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 7 Jun 2020 14:50:06 +0000 Subject: [PATCH 1060/2289] added pagination for workflows endpoint --- apps/api/pecanapi-spec.yml | 29 ++++++++++++++++++++++++++++- apps/api/workflows.R | 31 ++++++++++++++++++++++++++++--- 2 files changed, 56 insertions(+), 4 deletions(-) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 136c5e4ebb7..0e12255cde0 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/84e574de + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/36ce16fa - description: Localhost url: http://127.0.0.1:8000 @@ -140,6 +140,27 @@ paths: required: false schema: type: string + - in: query + name: offset + description: The number of workflows to skip before starting to collect the result set. + schema: + type: integer + minimum: 0 + default: 0 + required: false + - in: query + name: limit + description: The number of workflows to return. 
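The `limit` parameter defined here (its schema and allowed values follow just below) pairs with `offset` for paging; the handler change in `workflows.R` further down mirrors the same allowed values and returns `count`, `next_page`, and `prev_page` alongside the results. A client-side paging sketch, reusing the assumed local deployment and demo login from the earlier example:

```r
library(httr)

base_url <- "http://localhost:8000/api/workflows/"   # assumed host and port
offset <- 0
limit  <- 50
workflows <- list()
repeat {
  res  <- GET(paste0(base_url, "?offset=", offset, "&limit=", limit),
              authenticate("carya", "illinois", type = "basic"))  # assumed login
  page <- content(res)
  workflows <- c(workflows, page$workflows)
  if (is.null(page$next_page)) break   # next_page is only set on full pages
  offset <- offset + limit
}
```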
+ schema: + type: integer + default: 50 + enum: + - 10 + - 20 + - 50 + - 100 + - 500 + required: false responses: '200': description: List of workflows @@ -153,6 +174,12 @@ paths: items: type: object $ref: '#/components/schemas/Workflow' + count: + type: integer + next_page: + type: string + prev_page: + type: string '401': description: Authentication required diff --git a/apps/api/workflows.R b/apps/api/workflows.R index dbdfba6e524..dca2452a33a 100644 --- a/apps/api/workflows.R +++ b/apps/api/workflows.R @@ -4,7 +4,11 @@ #' @return List of workflows (using a particular model & site, if specified) #' @author Tezan Sahu #* @get / -getWorkflows <- function(model_id=NULL, site_id=NULL, res){ +getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res){ + if (! limit %in% c(10, 20, 50, 100, 500)) { + res$status <- 400 + return(list(error = "Invalid value for parameter")) + } settings <-list(database = list(bety = list( driver = "PostgreSQL", user = "bety", @@ -33,7 +37,7 @@ getWorkflows <- function(model_id=NULL, site_id=NULL, res){ qry_statement <- paste0(qry_statement, " WHERE w.model_id = '", model_id, "' and w.site_id = '", site_id, "'") } - qry_statement <- paste0(qry_statement, " ORDER BY id") + qry_statement <- paste0(qry_statement, " ORDER BY id LIMIT ", limit, " OFFSET ", offset, ";") qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) @@ -46,7 +50,28 @@ getWorkflows <- function(model_id=NULL, site_id=NULL, res){ else { qry_res$properties[is.na(qry_res$properties)] = "{}" qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) - return(list(workflows = qry_res)) + result <- list(workflows = qry_res) + result$count <- nrow(qry_res) + if(nrow(qry_res) == limit){ + result$next_page <- paste0( + req$PATH_INFO, + substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]), + (as.numeric(limit) + as.numeric(offset)), + "&limit=", + limit + ) + } + if(as.numeric(offset) != 0) { + result$prev_page <- paste0( + req$PATH_INFO, + substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]), + max(0, (as.numeric(offset) - as.numeric(limit))), + "&limit=", + limit + ) + } + + return(result) } } From 050b4c1cf0503bf28b625de98254917fd5a8040c Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 8 Jun 2020 15:27:51 +0530 Subject: [PATCH 1061/2289] changed ic.process to ic_process --- book_source/03_topical_pages/11_adding_to_pecan.Rmd | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/book_source/03_topical_pages/11_adding_to_pecan.Rmd b/book_source/03_topical_pages/11_adding_to_pecan.Rmd index c4829297eb5..9c5d0051121 100644 --- a/book_source/03_topical_pages/11_adding_to_pecan.Rmd +++ b/book_source/03_topical_pages/11_adding_to_pecan.Rmd @@ -382,7 +382,7 @@ PEcAn.data.atmosphere::met2CF.csv(in.path = in.path, Vegetation data will be required to parameterize your model. In these examples we will go over how to produce a standard initial condition file. -The main function to process cohort data is the `ic.process.R` function. As of now however, if you require pool data you will run a separate function, `pool_ic_list2netcdf.R`. +The main function to process cohort data is the `ic_process.R` function. As of now however, if you require pool data you will run a separate function, `pool_ic_list2netcdf.R`. ###### Example 1: Processing Veg data from data in hand. 
@@ -392,7 +392,7 @@
 First, you'll need to create an input record in BETY that will have a file record and format record reflecting the location and format of your file.
 
 Once you have created an input record you must take note of the input id of your record. An easy way to take note of this is in the URL of the BETY webpage that shows your input record. In this example we use an input record with the id `1000013064` which can be found at this url: https://psql-pecan.bu.edu/bety/inputs/1000013064# . Note that this is the Boston University BETY database. If you are on a different machine, your url will be different.
 
-With the input id in hand you can now edit a pecan XML so that the PEcAn function `ic.process` will know where to look in order to process your data. The `inputs` section of your pecan XML will look like this. As of now ic.process is set up to work with the ED2 model so we will use ED2 settings and then grab the intermediary Rds data file that is created as the standard PEcAn file. For your Inputs section you will need to input your input id wherever you see the `useic` flag.
+With the input id in hand you can now edit a pecan XML so that the PEcAn function `ic_process` will know where to look in order to process your data. The `inputs` section of your pecan XML will look like this. As of now ic_process is set up to work with the ED2 model so we will use ED2 settings and then grab the intermediary Rds data file that is created as the standard PEcAn file. For your Inputs section you will need to input your input id wherever you see the `useic` flag.
 
 ```
@@ -496,12 +496,12 @@ Once you edit your PEcAn.xml you can then create a settings object using PEcAn f
 settings <- PEcAn.settings::read.settings("pecan.xml")
 settings <- PEcAn.settings::prepare.settings(settings, force=FALSE)
 ```
-You can then execute the `ic.process` function to convert data into a standard Rds file:
+You can then execute the `ic_process` function to convert data into a standard Rds file:
 
 ```
 input <- settings$run$inputs
 dir <- "."
-ic.process(settings, input, dir, overwrite = FALSE)
+ic_process(settings, input, dir, overwrite = FALSE)
 ```
 
 Note that the argument `dir` is set to the current directory. You will find the final ED2 file there. More importantly though you will find the `.Rds` file within the same directory.

From 436104dc32ee67689362020d72f72f4854201a19 Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Mon, 8 Jun 2020 18:18:18 +0530
Subject: [PATCH 1062/2289] Update book.yml

---
 .github/workflows/book.yml | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index f4a097bf128..20136e3dd24 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -39,16 +39,17 @@ jobs:
         run: cd book_source && make
 
       - name: looking for generated html
-        run: cd /__w/pecan-sandbox/pecan-sandbox/book_source/_book && ls
+        run: cd /__w/pecan-sandbox/pecan-sandbox/book_source/_book
         shell: bash
 
       - name: Commiting the changes to pecan documentation repo
         run: |
-          git config --global user.email "pecan_bot@example.com"
+          git config --global user.email "mukulmaheshwari12@gmail.com"
           git config --global user.name "Mukul Maheshwari"
           git init
-          git add .
- git commit -m "documentation updated by actions" - git remote add origin https://github.com/MukulMaheshwari/pecan-documentation.git - git push -u origin master + git add --all * + git commit -m "Update the book `date`" || true + git push -q origin master + + From d23b369757016f543ed6ce4caab0004040ab5918 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 8 Jun 2020 19:19:09 +0530 Subject: [PATCH 1063/2289] Update book.yml --- .github/workflows/book.yml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 20136e3dd24..5ac7a91a598 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -50,6 +50,9 @@ jobs: git add --all * git commit -m "Update the book `date`" || true git push -q origin master + - uses: r-lib/actions/pr-push@master + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} From 9e87b067b3f31cda6ff7d366815e5503f4303873 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 8 Jun 2020 22:17:16 +0530 Subject: [PATCH 1064/2289] updated repo token --- .github/workflows/book.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 5ac7a91a598..fb5780f2c8f 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -49,10 +49,10 @@ jobs: git init git add --all * git commit -m "Update the book `date`" || true - git push -q origin master + - uses: r-lib/actions/pr-push@master with: - repo-token: ${{ secrets.GITHUB_TOKEN }} + repo-token: ${{ secrets.REPO_TOKEN }} From 2c045e5b1f593548dbf11a45ad22b57d3ce8d07a Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 8 Jun 2020 23:50:15 +0530 Subject: [PATCH 1065/2289] updated remote repo --- .github/workflows/book.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index fb5780f2c8f..ddc6ac4f28b 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -49,10 +49,10 @@ jobs: git init git add --all * git commit -m "Update the book `date`" || true + git remote add origin https://github.com/${{secrets.GH_USER}}/pecan-documentation.git + git push origin - - uses: r-lib/actions/pr-push@master - with: - repo-token: ${{ secrets.REPO_TOKEN }} + From e4723ba1cabdfff1e7d9034426e50601106adb73 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 9 Jun 2020 00:12:35 +0530 Subject: [PATCH 1066/2289] Update book.yml --- .github/workflows/book.yml | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index ddc6ac4f28b..e6acb448e6a 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -15,6 +15,8 @@ jobs: # The type of runner that the job will run on runs-on: ubuntu-latest container: pecan/base:latest + env: + GITHUB_PAT: ${{ secrets.REPO_TOKEN }} steps: @@ -50,7 +52,8 @@ jobs: git add --all * git commit -m "Update the book `date`" || true git remote add origin https://github.com/${{secrets.GH_USER}}/pecan-documentation.git - git push origin + git push --set-upstream origin master + From 245ef87b79fca2755d5e3938851dc1970b6cf691 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 9 Jun 2020 00:36:43 +0530 Subject: [PATCH 
1067/2289] Update book.yml --- .github/workflows/book.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index e6acb448e6a..7c497aebba0 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -53,6 +53,7 @@ jobs: git commit -m "Update the book `date`" || true git remote add origin https://github.com/${{secrets.GH_USER}}/pecan-documentation.git git push --set-upstream origin master + git config --global user.name "MukulMaheshwari" From 8d26b00792a0b93ecb48b667b4643ba0fcc2dbb6 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 9 Jun 2020 00:40:28 +0530 Subject: [PATCH 1068/2289] Update book.yml --- .github/workflows/book.yml | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 7c497aebba0..39590b4040b 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -46,14 +46,13 @@ jobs: - name: Commiting the changes to pecan documentation repo run: | - git config --global user.email "mukulmaheshwari12@gmail.com" - git config --global user.name "Mukul Maheshwari" git init git add --all * git commit -m "Update the book `date`" || true git remote add origin https://github.com/${{secrets.GH_USER}}/pecan-documentation.git git push --set-upstream origin master - git config --global user.name "MukulMaheshwari" + git config --global user.email "mukulmaheshwari12@gmail.com" + git config --global user.name "Mukul Maheshwari" From 196e9c36c8e87e08103f4f5af98f495fe1984c87 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Tue, 9 Jun 2020 00:43:40 +0530 Subject: [PATCH 1069/2289] Update book.yml --- .github/workflows/book.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 39590b4040b..63a9505beb8 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -46,14 +46,14 @@ jobs: - name: Commiting the changes to pecan documentation repo run: | + git config --global user.email "mukulmaheshwari12@gmail.com" + git config --global user.name "MukulMaheshwari" git init git add --all * git commit -m "Update the book `date`" || true git remote add origin https://github.com/${{secrets.GH_USER}}/pecan-documentation.git git push --set-upstream origin master - git config --global user.email "mukulmaheshwari12@gmail.com" - git config --global user.name "Mukul Maheshwari" - + From 306024dea93dde365558cf34b1f9a9ab1709301e Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 9 Jun 2020 02:57:04 +0000 Subject: [PATCH 1070/2289] documentation bugfixes --- apps/api/entrypoint.R | 2 +- apps/api/pecanapi-spec.yml | 2 +- apps/api/workflows.R | 8 ++++++++ 3 files changed, 10 insertions(+), 2 deletions(-) diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R index 9689f22513a..5c55808e3a2 100644 --- a/apps/api/entrypoint.R +++ b/apps/api/entrypoint.R @@ -14,7 +14,7 @@ root$mount("/api/models", models_pr) workflows_pr <- plumber::plumber$new("workflows.R") root$mount("/api/workflows", workflows_pr) -root$run(host="0.0.0.0", port=8000, swagger = function(pr, spec, ...) { +root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...) 
{ spec <- yaml::read_yaml("pecanapi-spec.yml") spec }) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 0e12255cde0..ea945a6d6b8 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/36ce16fa + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/1623282b - description: Localhost url: http://127.0.0.1:8000 diff --git a/apps/api/workflows.R b/apps/api/workflows.R index dca2452a33a..f477e9b3a1a 100644 --- a/apps/api/workflows.R +++ b/apps/api/workflows.R @@ -1,6 +1,8 @@ #' Get the list of workflows (using a particular model & site, if specified) #' @param model_id Model id (character) #' @param site_id Site id (character) +#' @param offset +#' @param limit #' @return List of workflows (using a particular model & site, if specified) #' @author Tezan Sahu #* @get / @@ -54,6 +56,9 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r result$count <- nrow(qry_res) if(nrow(qry_res) == limit){ result$next_page <- paste0( + req$rook.url_scheme, "://", + req$HTTP_HOST, + "/api/workflows", req$PATH_INFO, substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]), (as.numeric(limit) + as.numeric(offset)), @@ -63,6 +68,9 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r } if(as.numeric(offset) != 0) { result$prev_page <- paste0( + req$rook.url_scheme, "://", + req$HTTP_HOST, + "/api/workflows", req$PATH_INFO, substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]), max(0, (as.numeric(offset) - as.numeric(limit))), From 72e620af95c5a48ccf7c24fd85241b2c8b64ef47 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Jun 2020 06:22:04 -0400 Subject: [PATCH 1071/2289] track separate pools --- models/basgra/R/read_restart.BASGRA.R | 11 +++-- models/basgra/R/run_BASGRA.R | 3 +- models/basgra/R/write.config.BASGRA.R | 56 +++++++++++++++++++------- models/basgra/R/write_restart.BASGRA.R | 14 +++++-- models/basgra/src/BASGRA.f90 | 8 ++-- 5 files changed, 66 insertions(+), 26 deletions(-) diff --git a/models/basgra/R/read_restart.BASGRA.R b/models/basgra/R/read_restart.BASGRA.R index 4a56334eb87..79bc53d8a38 100644 --- a/models/basgra/R/read_restart.BASGRA.R +++ b/models/basgra/R/read_restart.BASGRA.R @@ -31,9 +31,14 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p names(forecast[[length(forecast)]]) <- c("LAI") } - if ("TotSoilCarb" %in% var.names) { - forecast[[length(forecast) + 1]] <- ens$TotSoilCarb[last] # kg C m-2 - names(forecast[[length(forecast)]]) <- c("TotSoilCarb") + if ("fast_soil_pool_carbon_content" %in% var.names) { + forecast[[length(forecast) + 1]] <- ens$fast_soil_pool_carbon_content[last] # kg C m-2 + names(forecast[[length(forecast)]]) <- c("fast_soil_pool_carbon_content") + } + + if ("slow_soil_pool_carbon_content" %in% var.names) { + forecast[[length(forecast) + 1]] <- ens$slow_soil_pool_carbon_content[last] # kg C m-2 + names(forecast[[length(forecast)]]) <- c("slow_soil_pool_carbon_content") } PEcAn.logger::logger.info(runid) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index be05d36db3f..96d59619bb9 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -169,7 +169,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, "YIELD" , "CRES" , "CRT" , "CST" , "CSTUB" , "DRYSTOR" , "Fdepth" , "LAI" , 
"LT50" , "O2" , "PHEN" , "ROOTD" , "Sdepth" , "TANAER" , "TILG" , "TILV" , "WAL" , "WAPL" , - "WAPS" , "WAS" , "WETSTOR" , "DM" , "RES" , "LERG" , + "WAPS" , "WAS" , "WETSTOR" , "DM" , "RES" , "PHENCR" , "NELLVG" , "RLEAF" , "SLA" , "TILTOT" , "FRTILG" , "FRTILG1" , "FRTILG2" , "RDRT" , "VERN" , "CLITT" , "CSOMF", "CSOMS" , "NLITT" , "NSOMF", @@ -265,6 +265,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, matrix(0, NDAYS, NOUT))[[8]] # for now a hack to write other states out + save(output, file = file.path(outdir, "output_basgra.Rdata")) last_vals <- output[nrow(output),] names(last_vals) <- outputNames save(last_vals, file = file.path(outdir, "last_vals_basgra.Rdata")) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index e80c137b0ef..3d82ae118ad 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -262,8 +262,17 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "LOG10LAII")] <- log10(IC$LAI) } - if ("TotSoilCarb" %in% ic.names) { - run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(IC$TotSoilCarb, "kg", "g") + #if ("TotSoilCarb" %in% ic.names) { + # run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(IC$TotSoilCarb, "kg", "g") + #} + + + if ("fast_soil_pool_carbon_content" %in% ic.names & + "slow_soil_pool_carbon_content" %in% ic.names) { + run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(IC$fast_soil_pool_carbon_content + + IC$slow_soil_pool_carbon_content, "kg", "g") + run_params[which(names(run_params) == "FCSOMF0")] <- IC$fast_soil_pool_carbon_content / + (IC$fast_soil_pool_carbon_content + IC$slow_soil_pool_carbon_content) } @@ -293,7 +302,21 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N csom0 <- try(ncdf4::ncvar_get(IC.nc, "TotSoilCarb"), silent = TRUE) if (!is.na(csom0) && is.numeric(csom0)) { run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(csom0, "kg", "g") - } + } + + # Initial mineral N + nmin0 <- try(ncdf4::ncvar_get(IC.nc, "soil_nitrogen_content"), silent = TRUE) + if (!is.na(nmin0) && is.numeric(nmin0)) { + run_params[which(names(run_params) == "NMIN0")] <- udunits2::ud.convert(nmin0, "kg", "g") + } + + + # WCI + wci <- try(ncdf4::ncvar_get(IC.nc, "SoilMoistFrac"), silent = TRUE) + if (!is.na(wci) && is.numeric(wci)) { + run_params[which(names(run_params) == "WCI")] <- wci + } + # # Initial fraction of SOC that is fast (g C g-1 C) # if ("r_fSOC" %in% pft.names) { @@ -321,11 +344,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # run_params[which(names(run_params) == "NMIN0")] <- pft.traits[which(pft.names == "NMIN")] # } # - # # This is IC, change later - # # Initial value of soil water concentration (m3 m-3) - # if ("initial_volume_fraction_of_condensed_water_in_soil" %in% pft.names) { - # run_params[which(names(run_params) == "WCI")] <- pft.traits[which(pft.names == "initial_volume_fraction_of_condensed_water_in_soil")] - # } + # # # # Water concentration at saturation (m3 m-3) @@ -386,10 +405,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # CNLITT0 = pa( 84) ! (g C g-1 N) Initial C/N ratio of litter run_params[names(run_params) == "CNLITT0"] <- last_vals[names(last_vals) == "CLITT"] / last_vals[names(last_vals) == "NLITT"] - # FCSOMF0 = pa( 87) ! 
(-) Initial C in fast-decomposing OM as a fraction of total OM - # preserve the ratio - run_params[which(names(run_params) == "FCSOMF0")] <- last_vals[names(last_vals) == "CSOMF"] / - (last_vals[names(last_vals) == "CSOMF"] + last_vals[names(last_vals) == "CSOMS"]) + # FCSOMF0 handled above # CNSOMF0 = pa( 85) ! (g C g-1 N) Initial C/N ratio of fast-decomposing OM csomf <- run_params[which(names(run_params) == "FCSOMF0")] * run_params[which(names(run_params) == "CSOM0")] @@ -402,7 +418,11 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N PHENRF <- (1 - run_params[names(run_params) == "PHENI"])/(1 - run_params[names(run_params) == "PHENCR"]) if (PHENRF > 1.0) PHENRF = 1.0 if (PHENRF < 0.0) PHENRF = 0.0 - run_params[names(run_params) == "NELLVM"] <- last_vals[names(last_vals) == "NELLVG"] / PHENRF + run_params[names(run_params) == "NELLVM"] <- last_vals[names(last_vals) == "NELLVG"] / PHENRF + if(is.nan(run_params[names(run_params) == "NELLVM"])) run_params[names(run_params) == "NELLVM"] <- 0 + if(run_params[names(run_params) == "NELLVM"] == Inf) run_params[names(run_params) == "NELLVM"] <- 0 + + #run_params[names(run_params) == "PHENCR"] <- last_vals[names(last_vals) == "PHENCR"] run_params[names(run_params) == "CLVDI"] <- last_vals[names(last_vals) == "CLVD"] run_params[names(run_params) == "YIELDI"] <- last_vals[names(last_vals) == "YIELD"] @@ -422,10 +442,16 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "WCI"] <- last_vals[names(last_vals) == "WAL"] / (1000 * last_vals[names(last_vals) == "ROOTD"]) # this is probably not changing - run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"]/ - (last_vals[names(last_vals) == "FRTILG1"] + last_vals[names(last_vals) == "FRTILG2"]) + run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"] / last_vals[names(last_vals) == "FRTILG"] run_params[names(run_params) == "NMIN0"] <- last_vals[names(last_vals) == "NMIN"] + + # run_params[names(run_params) == "NCR"] <- last_vals[names(last_vals) == "NRT"] / last_vals[names(last_vals) == "CRT"] + + # Don't think this changes + # O2 = FGAS * ROOTDM * FO2MX * 1000./22.4 + #run_params[names(run_params) == "FGAS"] <- last_vals[names(last_vals) == "O2"] / + # (last_vals[names(last_vals) == "ROOTD"] * run_params[names(run_params) == "FO2MX"] * (1000./22.4) ) } diff --git a/models/basgra/R/write_restart.BASGRA.R b/models/basgra/R/write_restart.BASGRA.R index 52f3e384002..fa5bb4477ae 100644 --- a/models/basgra/R/write_restart.BASGRA.R +++ b/models/basgra/R/write_restart.BASGRA.R @@ -26,10 +26,16 @@ write_restart.BASGRA <- function(outdir, runid, start.time, stop.time, settings, names(analysis.save[[length(analysis.save)]]) <- c("LAI") } - if ("TotSoilCarb" %in% variables) { - analysis.save[[length(analysis.save) + 1]] <- new.state$TotSoilCarb - if (new.state$TotSoilCarb < 0) analysis.save[[length(analysis.save)]] <- 0 - names(analysis.save[[length(analysis.save)]]) <- c("TotSoilCarb") + if ("fast_soil_pool_carbon_content" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- new.state$fast_soil_pool_carbon_content + if (new.state$fast_soil_pool_carbon_content < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("fast_soil_pool_carbon_content") + } + + if ("slow_soil_pool_carbon_content" %in% variables) { + analysis.save[[length(analysis.save) + 1]] <- 
new.state$slow_soil_pool_carbon_content + if (new.state$slow_soil_pool_carbon_content < 0) analysis.save[[length(analysis.save)]] <- 0 + names(analysis.save[[length(analysis.save)]]) <- c("slow_soil_pool_carbon_content") } if (!is.null(analysis.save) && length(analysis.save) > 0){ diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index 87d59f5eb18..856fa6bbbf4 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -24,7 +24,7 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & implicit none integer, dimension(100,2) :: DAYS_HARVEST -real :: PARAMS(120) +real :: PARAMS(130) #ifdef weathergen integer, parameter :: NWEATHER = 7 #else @@ -205,7 +205,7 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & F_WALL_DM,F_WALL_DMSH,F_WALL_LV,F_WALL_ST, & F_DIGEST_DM,F_DIGEST_DMSH,F_DIGEST_LV,F_DIGEST_ST,F_DIGEST_WALL) - !================ + !================ ! Outputs !================ y(day, 1) = year + (doy-0.5)/366 ! "Time" = Decimal year (approximation) @@ -239,7 +239,7 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & y(day,28) = DM ! "DM" = Aboveground dry matter in g m-2 y(day,29) = DMRES / DM ! "RES" = Reserves in g g-1 aboveground dry matter - y(day,30) = LERG ! + y(day,30) = PHENCR ! y(day,31) = NELLVG ! y(day,32) = RLEAF ! y(day,33) = LAI / DMLV ! "SLA" = m2 leaf area g-1 dry matter leaves @@ -370,6 +370,8 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & WAPS = WAPS - THAWPS + FREEZEPL WAS = WAS - THAWS + FREEZEL WETSTOR = WETSTOR + Wremain - WETSTOR + + enddo From a849a732b25829daaeb2b1213dd7e8e660da2122 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Jun 2020 11:11:56 -0400 Subject: [PATCH 1072/2289] better hop test --- models/basgra/R/write.config.BASGRA.R | 33 ++++++++++++++++++++++++- models/basgra/inst/BASGRA_params.Rdata | Bin 2919 -> 3024 bytes models/basgra/src/BASGRA.f90 | 10 ++++---- models/basgra/src/parameters_plant.f90 | 3 ++- models/basgra/src/set_params.f90 | 7 ++++++ 5 files changed, 46 insertions(+), 7 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 3d82ae118ad..e052535a93f 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -360,6 +360,31 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N #} } + # THESE "PARAMETERS" (IN FACT, INITIAL CONDITIONS) WERE NOT PART OF THE ORIGINAL VECTOR + # THESE DERIVATIONS WERE PART OF THE BASGRA CODE, NOW TAKEN OUT HERE + + # NRT = NCR * CRTI + run_params[which(names(run_params) == "NRTI")] <- (10^run_params[names(run_params) == "LOG10CRTI"])* + run_params[names(run_params) == "NCR"] + + # NCSHI = NCSHMAX * (1-EXP(-K*LAII)) / (K*LAII) + # NSH = NCSHI * (CLVI+CSTI) + lai_tmp <- (10^run_params[names(run_params) == "LOG10LAII"]) + ncshi <- run_params[names(run_params) == "NCSHMAX"] * + (1-exp(-run_params[names(run_params) == "K"]*lai_tmp)) / (run_params[names(run_params) == "K"]*lai_tmp) + run_params[which(names(run_params) == "NSHI")] <- ncshi * + ((10^run_params[names(run_params) == "LOG10CLVI"]) + run_params[names(run_params) == "CSTI"]) + + + # TILG1 = TILTOTI * FRTILGI * FRTILGG1I + # TILG2 = TILTOTI * FRTILGI * (1-FRTILGG1I) + # TILV = TILTOTI * (1. 
- FRTILGI) + tiltot_tmp <- run_params[names(run_params) == "TILTOTI"] + frtilg_tmp <- run_params[names(run_params) == "FRTILGI"] + run_params[names(run_params) == "TILG1I"] <- tiltot_tmp * frtilg_tmp * run_params[names(run_params) == "FRTILGG1I"] + run_params[names(run_params) == "TILG2I"] <- tiltot_tmp * frtilg_tmp * (1 - run_params[names(run_params) == "FRTILGG1I"]) + run_params[names(run_params) == "TILVI"] <- tiltot_tmp * (1 - frtilg_tmp) + ################################################################## ######################### PREVIOUS STATE ######################### ################################################################## @@ -444,9 +469,15 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # this is probably not changing run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"] / last_vals[names(last_vals) == "FRTILG"] + run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "FRTILG1"] * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "FRTILG2"] * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILVI"] <- last_vals[names(last_vals) == "TILV"] + + run_params[names(run_params) == "NMIN0"] <- last_vals[names(last_vals) == "NMIN"] - # run_params[names(run_params) == "NCR"] <- last_vals[names(last_vals) == "NRT"] / last_vals[names(last_vals) == "CRT"] + run_params[names(run_params) == "NRTI"] <- last_vals[names(last_vals) == "NRT"] + run_params[which(names(run_params) == "NSHI")] <- last_vals[names(last_vals) == "NSH"] # Don't think this changes # O2 = FGAS * ROOTDM * FO2MX * 1000./22.4 diff --git a/models/basgra/inst/BASGRA_params.Rdata b/models/basgra/inst/BASGRA_params.Rdata index 81cf98282011e146fb583635b626273b3090bf8e..25c688f5dc5bd0db1e81a8166f9a64a220ca5154 100644 GIT binary patch delta 132 zcmaDZc0qiC1!Lnz%LmNL_Se$D!2aP(5SZnl#sCHmN>I9Q!DJB@MaIU>W-MCl9xM!; mKyenopb$?67(3Vl#%2rg^l>*t;TgertU$RiPoU<1APfM`ZyVbH delta 31 lcmca0{# Date: Tue, 9 Jun 2020 16:51:32 +0000 Subject: [PATCH 1073/2289] added endpoints for runs --- apps/api/entrypoint.R | 3 ++ apps/api/pecanapi-spec.yml | 33 +++++++++--- apps/api/runs.R | 107 +++++++++++++++++++++++++++++++++++++ 3 files changed, 137 insertions(+), 6 deletions(-) create mode 100644 apps/api/runs.R diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R index 5c55808e3a2..d0ab1fb2104 100644 --- a/apps/api/entrypoint.R +++ b/apps/api/entrypoint.R @@ -14,6 +14,9 @@ root$mount("/api/models", models_pr) workflows_pr <- plumber::plumber$new("workflows.R") root$mount("/api/workflows", workflows_pr) +runs_pr <- plumber::plumber$new("runs.R") +root$mount("/api/runs", runs_pr) + root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...) 
{ spec <- yaml::read_yaml("pecanapi-spec.yml") spec diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index ea945a6d6b8..1100831ea23 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/1623282b + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/503b5316 - description: Localhost url: http://127.0.0.1:8000 @@ -237,19 +237,40 @@ paths: - /api/workflows/{id}/runs: + /api/runs/: get: tags: - - workflows - runs + - summary: Get the list of all runs for a specified PEcAn Workflow parameters: - - in: path - name: id + - in: query + name: workflow_id description: ID of the PEcAn Workflow required: true schema: type: string + - in: query + name: offset + description: The number of workflows to skip before starting to collect the result set. + schema: + type: integer + minimum: 0 + default: 0 + required: false + - in: query + name: limit + description: The number of workflows to return. + schema: + type: integer + default: 50 + enum: + - 10 + - 20 + - 50 + - 100 + - 500 + required: false responses: '200': description: List of all runs for the requested PEcAn Workflow @@ -270,7 +291,7 @@ paths: description: Workflow with specified ID was not found - /api/run/{id}: + /api/runs/{id}: get: tags: - runs diff --git a/apps/api/runs.R b/apps/api/runs.R new file mode 100644 index 00000000000..d07b88a8cfb --- /dev/null +++ b/apps/api/runs.R @@ -0,0 +1,107 @@ +#' Get the list of runs (belonging to a particuar workflow) +#' @param workflow_id Workflow id (character) +#' @param offset +#' @param limit +#' @return List of runs (belonging to a particuar workflow) +#' @author Tezan Sahu +#* @get / +getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){ + if (! 
limit %in% c(10, 20, 50, 100, 500)) {
+    res$status <- 400
+    return(list(error = "Invalid value for parameter 'limit'"))
+  }
+  settings <- list(database = list(bety = list(
+    driver = "PostgreSQL",
+    user = "bety",
+    dbname = "bety",
+    password = "bety",
+    host = "postgres"
+  )))
+  dbcon <- PEcAn.DB::db.open(settings$database$bety)
+  qry_statement <- paste0(
+    "SELECT r.id, e.runtype, r.model_id, r.site_id, r.parameter_list, r.ensemble_id, ",
+    "e.workflow_id, r.start_time, r.finish_time ",
+    "FROM runs r FULL OUTER JOIN ensembles e ",
+    "ON (r.ensemble_id = e.id) ",
+    "WHERE e.workflow_id = '", workflow_id,
+    "' ORDER BY r.id DESC ",
+    "LIMIT ", limit,
+    " OFFSET ", offset, ";"
+  )
+
+  qry_res <- PEcAn.DB::db.query(qry_statement, dbcon)
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error = "Run(s) not found"))
+  }
+  else {
+    result <- list(runs = qry_res)
+    result$count <- nrow(qry_res)
+    if (nrow(qry_res) == limit) {
+      result$next_page <- paste0(
+        req$rook.url_scheme, "://",
+        req$HTTP_HOST,
+        "/api/runs",
+        req$PATH_INFO,
+        substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
+        (as.numeric(limit) + as.numeric(offset)),
+        "&limit=",
+        limit
+      )
+    }
+    if (as.numeric(offset) != 0) {
+      result$prev_page <- paste0(
+        req$rook.url_scheme, "://",
+        req$HTTP_HOST,
+        "/api/runs",
+        req$PATH_INFO,
+        substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
+        max(0, (as.numeric(offset) - as.numeric(limit))),
+        "&limit=",
+        limit
+      )
+    }
+
+    return(result)
+  }
+}
+
+#################################################################################################
+
+#' Get the details of the run specified by the id
+#' @param id Run id (character)
+#' @return Details of requested run
+#' @author Tezan Sahu
+#* @get /<id>
+getRunDetails <- function(id, res){
+  settings <- list(database = list(bety = list(
+    driver = "PostgreSQL",
+    user = "bety",
+    dbname = "bety",
+    password = "bety",
+    host = "postgres"
+  )))
+  dbcon <- PEcAn.DB::db.open(settings$database$bety)
+  qry_statement <- paste0(
+    "SELECT r.id, e.runtype, r.model_id, r.site_id, r.parameter_list, r.ensemble_id, e.workflow_id, ",
+    "r.start_time, r.finish_time, r.created_at, r.updated_at, r.started_at, r.finished_at ",
+    "FROM runs r FULL OUTER JOIN ensembles e ",
+    "ON (r.ensemble_id = e.id) ",
+    "WHERE r.id = '", id, "';"
+  )
+
+  qry_res <- PEcAn.DB::db.query(qry_statement, dbcon)
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error = "Run with specified ID was not found"))
+  }
+  else {
+    return(qry_res)
+  }
+}
\ No newline at end of file

From 57ce5380094e73fcc009580cb7ac654ff68d03c7 Mon Sep 17 00:00:00 2001
From: istfer
Date: Wed, 10 Jun 2020 08:42:31 -0400
Subject: [PATCH 1074/2289] pass more states

---
 models/basgra/R/write.config.BASGRA.R  | 229 +++++++++++++------------
 models/basgra/inst/BASGRA_params.Rdata | Bin 3024 -> 3173 bytes
 models/basgra/src/BASGRA.f90           |  16 +-
 models/basgra/src/parameters_plant.f90 |  54 +++---
 models/basgra/src/parameters_site.f90  |  72 ++++----
 models/basgra/src/set_params.f90       |  11 +-
 6 files changed, 204 insertions(+), 178 deletions(-)

diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R
index e052535a93f..866874cb0fb 100644
--- a/models/basgra/R/write.config.BASGRA.R
+++ b/models/basgra/R/write.config.BASGRA.R
@@ -25,7 +25,7 @@
write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = NULL) { - + # find out where to write run/ouput rundir <- file.path(settings$host$rundir, run.id) outdir <- file.path(settings$host$outdir, run.id) @@ -250,8 +250,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N } } #### End parameter update - - + + #### Update initial conditions if (!is.null(IC)) { @@ -262,19 +262,14 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "LOG10LAII")] <- log10(IC$LAI) } - #if ("TotSoilCarb" %in% ic.names) { - # run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(IC$TotSoilCarb, "kg", "g") - #} - - if ("fast_soil_pool_carbon_content" %in% ic.names & - "slow_soil_pool_carbon_content" %in% ic.names) { - run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(IC$fast_soil_pool_carbon_content + - IC$slow_soil_pool_carbon_content, "kg", "g") - run_params[which(names(run_params) == "FCSOMF0")] <- IC$fast_soil_pool_carbon_content / - (IC$fast_soil_pool_carbon_content + IC$slow_soil_pool_carbon_content) + if ("fast_soil_pool_carbon_content" %in% ic.names) { + run_params[which(names(run_params) == "CSOMF0")] <- udunits2::ud.convert(IC$fast_soil_pool_carbon_content, "kg", "g") } + if ("slow_soil_pool_carbon_content" %in% ic.names) { + run_params[which(names(run_params) == "CSOMS0")] <- udunits2::ud.convert(IC$slow_soil_pool_carbon_content, "kg", "g") + } }else if(!is.null(settings$run$inputs$poolinitcond$path)){ @@ -282,81 +277,85 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N #IC.pools <- PEcAn.data.land::prepare_pools(IC.path, constants = list(sla = SLA)) #if(!is.null(IC.pools)){ - IC.nc <- ncdf4::nc_open(IC.path) - - ## laiInit m2/m2 - lai <- try(ncdf4::ncvar_get(IC.nc, "LAI"), silent = TRUE) - if (!is.na(lai) && is.numeric(lai)) { - run_params[which(names(run_params) == "LOG10LAII")] <- log10(lai) - } - - # This is IC - # Initial value of litter C (g C m-2) - clitt0 <- try(ncdf4::ncvar_get(IC.nc, "litter_carbon_content"), silent = TRUE) - if (!is.na(clitt0) && is.numeric(clitt0)) { - run_params[which(names(run_params) == "CLITT0")] <- udunits2::ud.convert(clitt0, "kg", "g") - } - - # This is IC - # Initial value of SOM (g C m-2) - csom0 <- try(ncdf4::ncvar_get(IC.nc, "TotSoilCarb"), silent = TRUE) - if (!is.na(csom0) && is.numeric(csom0)) { - run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(csom0, "kg", "g") - } - - # Initial mineral N - nmin0 <- try(ncdf4::ncvar_get(IC.nc, "soil_nitrogen_content"), silent = TRUE) - if (!is.na(nmin0) && is.numeric(nmin0)) { - run_params[which(names(run_params) == "NMIN0")] <- udunits2::ud.convert(nmin0, "kg", "g") - } - - - # WCI - wci <- try(ncdf4::ncvar_get(IC.nc, "SoilMoistFrac"), silent = TRUE) - if (!is.na(wci) && is.numeric(wci)) { - run_params[which(names(run_params) == "WCI")] <- wci - } - - - # # Initial fraction of SOC that is fast (g C g-1 C) - # if ("r_fSOC" %in% pft.names) { - # run_params[which(names(run_params) == "FCSOMF0")] <- pft.traits[which(pft.names == "r_fSOC")] - # } - # - # # This is IC, change later - # # Initial C-N ratio of litter (g C g-1 N) - # if ("c2n_litter" %in% pft.names) { - # run_params[which(names(run_params) == "CNLITT0")] <- 100*pft.traits[which(pft.names == "c2n_litter")] - # } - # - # # Initial C-N ratio of fast SOM (g C g-1 N) - # if ("c2n_fSOM" %in% pft.names) { - # run_params[which(names(run_params) == 
"CNSOMF0")] <- pft.traits[which(pft.names == "c2n_fSOM")] - # } - # - # # Initial C-N ratio of slow SOM (g C g-1 N) - # if ("c2n_sSOM" %in% pft.names) { - # run_params[which(names(run_params) == "CNSOMS0")] <- pft.traits[which(pft.names == "c2n_sSOM")] - # } - # - # # Initial value of soil mineral N (g N m-2) - # if ("NMIN" %in% pft.names) { - # run_params[which(names(run_params) == "NMIN0")] <- pft.traits[which(pft.names == "NMIN")] - # } - # - - # - # - # # Water concentration at saturation (m3 m-3) - # if ("volume_fraction_of_water_in_soil_at_saturation" %in% pft.names) { - # run_params[which(names(run_params) == "WCST")] <- pft.traits[which(pft.names == "volume_fraction_of_water_in_soil_at_saturation")] - # } - - # # Temperature that kills half the plants in a day (degrees Celcius) - # if ("plant_min_temp" %in% pft.names) { - # run_params[which(names(run_params) == "LT50I")] <- pft.traits[which(pft.names == "plant_min_temp")] - # } - + IC.nc <- ncdf4::nc_open(IC.path) + + ## laiInit m2/m2 + lai <- try(ncdf4::ncvar_get(IC.nc, "LAI"), silent = TRUE) + if (!is.na(lai) && is.numeric(lai)) { + run_params[which(names(run_params) == "LOG10LAII")] <- log10(lai) + } + + # This is IC + # Initial value of litter C (g C m-2) + clitt0 <- try(ncdf4::ncvar_get(IC.nc, "litter_carbon_content"), silent = TRUE) + if (!is.na(clitt0) && is.numeric(clitt0)) { + run_params[which(names(run_params) == "CLITT0")] <- udunits2::ud.convert(clitt0, "kg", "g") + } + + # This is IC + # Initial value of SOM (g C m-2) + csom0 <- try(ncdf4::ncvar_get(IC.nc, "TotSoilCarb"), silent = TRUE) + if (!is.na(csom0) && is.numeric(csom0)) { + run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(csom0, "kg", "g") + # CSOMF = CSOM0 * FCSOMF0 + run_params[names(run_params) == "CSOMF0"] <- udunits2::ud.convert(csom0 * run_params[names(run_params) == "FCSOMF0"], "kg", "g") + # CSOMS = CSOM0 * (1-FCSOMF0) + run_params[names(run_params) == "CSOMS0"] <- udunits2::ud.convert(csom0 * (1 - run_params[names(run_params) == "FCSOMF0"]), "kg", "g") + } + + # Initial mineral N + nmin0 <- try(ncdf4::ncvar_get(IC.nc, "soil_nitrogen_content"), silent = TRUE) + if (!is.na(nmin0) && is.numeric(nmin0)) { + run_params[which(names(run_params) == "NMIN0")] <- udunits2::ud.convert(nmin0, "kg", "g") + } + + + # WCI + wci <- try(ncdf4::ncvar_get(IC.nc, "SoilMoistFrac"), silent = TRUE) + if (!is.na(wci) && is.numeric(wci)) { + run_params[which(names(run_params) == "WCI")] <- wci + } + + + # # Initial fraction of SOC that is fast (g C g-1 C) + # if ("r_fSOC" %in% pft.names) { + # run_params[which(names(run_params) == "FCSOMF0")] <- pft.traits[which(pft.names == "r_fSOC")] + # } + # + # # This is IC, change later + # # Initial C-N ratio of litter (g C g-1 N) + # if ("c2n_litter" %in% pft.names) { + # run_params[which(names(run_params) == "CNLITT0")] <- 100*pft.traits[which(pft.names == "c2n_litter")] + # } + # + # # Initial C-N ratio of fast SOM (g C g-1 N) + # if ("c2n_fSOM" %in% pft.names) { + # run_params[which(names(run_params) == "CNSOMF0")] <- pft.traits[which(pft.names == "c2n_fSOM")] + # } + # + # # Initial C-N ratio of slow SOM (g C g-1 N) + # if ("c2n_sSOM" %in% pft.names) { + # run_params[which(names(run_params) == "CNSOMS0")] <- pft.traits[which(pft.names == "c2n_sSOM")] + # } + # + # # Initial value of soil mineral N (g N m-2) + # if ("NMIN" %in% pft.names) { + # run_params[which(names(run_params) == "NMIN0")] <- pft.traits[which(pft.names == "NMIN")] + # } + # + + # + # + # # Water concentration at saturation (m3 m-3) + # if 
("volume_fraction_of_water_in_soil_at_saturation" %in% pft.names) { + # run_params[which(names(run_params) == "WCST")] <- pft.traits[which(pft.names == "volume_fraction_of_water_in_soil_at_saturation")] + # } + + # # Temperature that kills half the plants in a day (degrees Celcius) + # if ("plant_min_temp" %in% pft.names) { + # run_params[which(names(run_params) == "LT50I")] <- pft.traits[which(pft.names == "plant_min_temp")] + # } + #} } @@ -374,7 +373,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N (1-exp(-run_params[names(run_params) == "K"]*lai_tmp)) / (run_params[names(run_params) == "K"]*lai_tmp) run_params[which(names(run_params) == "NSHI")] <- ncshi * ((10^run_params[names(run_params) == "LOG10CLVI"]) + run_params[names(run_params) == "CSTI"]) - + # TILG1 = TILTOTI * FRTILGI * FRTILGG1I # TILG2 = TILTOTI * FRTILGI * (1-FRTILGG1I) @@ -385,6 +384,20 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "TILG2I"] <- tiltot_tmp * frtilg_tmp * (1 - run_params[names(run_params) == "FRTILGG1I"]) run_params[names(run_params) == "TILVI"] <- tiltot_tmp * (1 - frtilg_tmp) + # WAL = 1000. * ROOTDM * WCI + run_params[names(run_params) == "WALI"] <- 1000. * run_params[names(run_params) == "ROOTDM"] * run_params[names(run_params) == "WCI"] + + # O2 = FGAS * ROOTDM * FO2MX * 1000./22.4 + run_params[names(run_params) == "O2I"] <- run_params[names(run_params) == "FGAS"] * + run_params[names(run_params) == "ROOTDM"] * run_params[names(run_params) == "FO2MX"] * 1000./22.4 + + #NLITT = CLITT0 / CNLITT0 + run_params[names(run_params) == "NLITT0"] <- run_params[names(run_params) == "CLITT0"] / run_params[names(run_params) == "CNLITT0"] + + #NSOMF = (CSOM0 * FCSOMF0) / CNSOMF0 + run_params[names(run_params) == "NSOMF0"] <- run_params[names(run_params) == "CSOMF0"] / run_params[names(run_params) == "CNSOMF0"] + run_params[names(run_params) == "NSOMS0"] <- run_params[names(run_params) == "CSOMS0"] / run_params[names(run_params) == "CNSOMS0"] + ################################################################## ######################### PREVIOUS STATE ######################### ################################################################## @@ -440,12 +453,12 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N csoms <- (1 - run_params[which(names(run_params) == "FCSOMF0")]) * run_params[which(names(run_params) == "CSOM0")] run_params[names(run_params) == "CNSOMS0"] <- csoms / last_vals[names(last_vals) == "NSOMS"] - PHENRF <- (1 - run_params[names(run_params) == "PHENI"])/(1 - run_params[names(run_params) == "PHENCR"]) - if (PHENRF > 1.0) PHENRF = 1.0 - if (PHENRF < 0.0) PHENRF = 0.0 - run_params[names(run_params) == "NELLVM"] <- last_vals[names(last_vals) == "NELLVG"] / PHENRF - if(is.nan(run_params[names(run_params) == "NELLVM"])) run_params[names(run_params) == "NELLVM"] <- 0 - if(run_params[names(run_params) == "NELLVM"] == Inf) run_params[names(run_params) == "NELLVM"] <- 0 + # PHENRF <- (1 - run_params[names(run_params) == "PHENI"])/(1 - run_params[names(run_params) == "PHENCR"]) + # if (PHENRF > 1.0) PHENRF = 1.0 + # if (PHENRF < 0.0) PHENRF = 0.0 + # run_params[names(run_params) == "NELLVM"] <- last_vals[names(last_vals) == "NELLVG"] / PHENRF + # if(is.nan(run_params[names(run_params) == "NELLVM"])) run_params[names(run_params) == "NELLVM"] <- 0 + # if(run_params[names(run_params) == "NELLVM"] == Inf) run_params[names(run_params) == "NELLVM"] <- 0 
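# A hedged aside, not part of the patch: the two guards being commented out
# above exist because NELLVG / PHENRF can evaluate to NaN (when 0/0) or Inf
# (when dividing by zero). The same protection as a small standalone R helper,
# shown only to illustrate the pattern (safe_ratio is not a PEcAn function):
safe_ratio <- function(num, den) {
  r <- num / den
  if (!is.finite(r)) r <- 0  # is.finite() is FALSE for NaN, Inf and -Inf
  r
}
# e.g. safe_ratio(0, 0) returns 0 instead of NaN; safe_ratio(1, 0) returns 0 instead of Inf.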
#run_params[names(run_params) == "PHENCR"] <- last_vals[names(last_vals) == "PHENCR"] @@ -464,13 +477,13 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "WASI"] <- last_vals[names(last_vals) == "WAS"] run_params[names(run_params) == "WETSTORI"] <- last_vals[names(last_vals) == "WETSTOR"] - run_params[names(run_params) == "WCI"] <- last_vals[names(last_vals) == "WAL"] / (1000 * last_vals[names(last_vals) == "ROOTD"]) + #run_params[names(run_params) == "WCI"] <- last_vals[names(last_vals) == "WAL"] / (1000 * last_vals[names(last_vals) == "ROOTD"]) # this is probably not changing - run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"] / last_vals[names(last_vals) == "FRTILG"] + #run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"] / last_vals[names(last_vals) == "FRTILG"] - run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "FRTILG1"] * run_params[names(run_params) == "TILTOTI"] - run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "FRTILG2"] * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "TILG1"] # * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "TILG2"] # * run_params[names(run_params) == "TILTOTI"] run_params[names(run_params) == "TILVI"] <- last_vals[names(last_vals) == "TILV"] @@ -479,13 +492,15 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "NRTI"] <- last_vals[names(last_vals) == "NRT"] run_params[which(names(run_params) == "NSHI")] <- last_vals[names(last_vals) == "NSH"] - # Don't think this changes - # O2 = FGAS * ROOTDM * FO2MX * 1000./22.4 - #run_params[names(run_params) == "FGAS"] <- last_vals[names(last_vals) == "O2"] / - # (last_vals[names(last_vals) == "ROOTD"] * run_params[names(run_params) == "FO2MX"] * (1000./22.4) ) + + run_params[names(run_params) == "WALI"] <- last_vals[names(last_vals) == "WAL"] + run_params[names(run_params) == "O2I"] <- last_vals[names(last_vals) == "O2"] + run_params[names(run_params) == "NLITT0"] <- last_vals[names(last_vals) == "NLITT"] + run_params[names(run_params) == "NSOMF0"] <- last_vals[names(last_vals) == "NSOMF"] + run_params[names(run_params) == "NSOMS0"] <- last_vals[names(last_vals) == "NSOMS"] } - - + + #----------------------------------------------------------------------- # write job.sh # create launch script (which will create symlink) @@ -537,5 +552,5 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh")) Sys.chmod(file.path(settings$rundir, run.id, "job.sh")) - -} # write.config.MODEL + +} # write.config.BASGRA diff --git a/models/basgra/inst/BASGRA_params.Rdata b/models/basgra/inst/BASGRA_params.Rdata index 25c688f5dc5bd0db1e81a8166f9a64a220ca5154..5732a4a154dba6aca23e8dd4c0c597be6b29636d 100644 GIT binary patch delta 176 zcmca0{#0Uu1!KoXOAZ!ehw=jqAmG3QrWhPH9RM>Njy{0U5@3qKp#)0TFif^#QDp4c z9LJK)p2EVw2^41ucl7aOfUueUjbKbR=U{(dHv^~ym=_G^`T2N;gdlh*D*b>wph^aY He;^D1WV{>a delta 31 lcmaDVaY1~71!LnzOAeOFl`Jxhjhm;jWV5p~F#KZxVgRQE36%f< diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index 16667ee9765..2f4bc9b4406 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -24,7 +24,7 @@ subroutine 
BASGRA( PARAMS, MATRIX_WEATHER, & implicit none integer, dimension(100,2) :: DAYS_HARVEST -real :: PARAMS(130) +real :: PARAMS(140) #ifdef weathergen integer, parameter :: NWEATHER = 7 #else @@ -121,18 +121,18 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & ! Initial constants for soil state variables CLITT = CLITT0 -CSOMF = CSOM0 * FCSOMF0 -CSOMS = CSOM0 * (1-FCSOMF0) +CSOMF = CSOMF0 +CSOMS = CSOMS0 DRYSTOR = DRYSTORI Fdepth = FdepthI -NLITT = CLITT0 / CNLITT0 -NSOMF = (CSOM0 * FCSOMF0) / CNSOMF0 -NSOMS = (CSOM0 * (1-FCSOMF0)) / CNSOMS0 +NLITT = NLITT0 +NSOMF = NSOMF0 +NSOMS = NSOMS0 NMIN = NMIN0 -O2 = FGAS * ROOTDM * FO2MX * 1000./22.4 +O2 = O2I Sdepth = SDEPTHI TANAER = TANAERI -WAL = 1000. * ROOTDM * WCI +WAL = WALI WAPL = WAPLI WAPS = WAPSI WAS = WASI diff --git a/models/basgra/src/parameters_plant.f90 b/models/basgra/src/parameters_plant.f90 index aa0772d2714..3b96fc2492f 100644 --- a/models/basgra/src/parameters_plant.f90 +++ b/models/basgra/src/parameters_plant.f90 @@ -3,36 +3,36 @@ module parameters_plant implicit none ! Initial constants - real :: LOG10CLVI, LOG10CRESI, LOG10CRTI, CSTI, LOG10LAII - real :: CLVI, CRESI, CRTI, LAII - real :: PHENI, TILTOTI, FRTILGI, FRTILGG1I, TILG1I, TILG2I, TILVI - real :: NSHI , NRTI +real :: LOG10CLVI, LOG10CRESI, LOG10CRTI, CSTI, LOG10LAII +real :: CLVI, CRESI, CRTI, LAII +real :: PHENI, TILTOTI, FRTILGI, FRTILGG1I, TILG1I, TILG2I, TILVI +real :: NSHI , NRTI ! Initial constants, continued - real :: CLVDI - real :: YIELDI - real :: CSTUBI - real :: LT50I +real :: CLVDI +real :: YIELDI +real :: CSTUBI +real :: LT50I ! Process parameters - real :: CLAIV , COCRESMX, CSTAVM, DAYLB , DAYLG1G2, DAYLP , DLMXGE, FSLAMIN - real :: FSMAX , HAGERE , K , KLUETILG, LAICR , LAIEFT , LAITIL, LFWIDG - real :: LFWIDV , NELLVM , PHENCR, PHY , RDRSCO , RDRSMX , RDRTEM, RGENMX - real :: RGRTG1G2, ROOTDM , RRDMAX, RUBISC , SHAPE , SIMAX1T, SLAMAX, SLAMIN - real :: TBASE , TCRES , TOPTGE, TRANCO , YG - real :: RDRTMIN , TVERN - real :: NCSHMAX , NCR - real :: RDRROOT , RDRSTUB - real :: FNCGSHMIN, TCNSHMOB, TCNUPT - - real :: F_WALL_LV_FMIN, F_WALL_LV_MAX, F_WALL_ST_FMIN, F_WALL_ST_MAX - real :: F_DIGEST_WALL_FMIN, F_DIGEST_WALL_MAX - +real :: CLAIV , COCRESMX, CSTAVM, DAYLB , DAYLG1G2, DAYLP , DLMXGE, FSLAMIN +real :: FSMAX , HAGERE , K , KLUETILG, LAICR , LAIEFT , LAITIL, LFWIDG +real :: LFWIDV , NELLVM , PHENCR, PHY , RDRSCO , RDRSMX , RDRTEM, RGENMX +real :: RGRTG1G2, ROOTDM , RRDMAX, RUBISC , SHAPE , SIMAX1T, SLAMAX, SLAMIN +real :: TBASE , TCRES , TOPTGE, TRANCO , YG +real :: RDRTMIN , TVERN +real :: NCSHMAX , NCR +real :: RDRROOT , RDRSTUB +real :: FNCGSHMIN, TCNSHMOB, TCNUPT + +real :: F_WALL_LV_FMIN, F_WALL_LV_MAX, F_WALL_ST_FMIN, F_WALL_ST_MAX +real :: F_DIGEST_WALL_FMIN, F_DIGEST_WALL_MAX + ! Process parameters, continued - real :: Dparam, Hparam, KRDRANAER, KRESPHARD, KRSR3H - real :: LDT50A, LDT50B, LT50MN, LT50MX, RATEDMX - real :: reHardRedDay - real, parameter :: reHardRedEnd = 91. ! If LAT<0, this is changed to 91+183 in plant.f90 - real :: THARDMX, TsurfDiff - +real :: Dparam, Hparam, KRDRANAER, KRESPHARD, KRSR3H +real :: LDT50A, LDT50B, LT50MN, LT50MX, RATEDMX +real :: reHardRedDay +real, parameter :: reHardRedEnd = 91. ! 
If LAT<0, this is changed to 91+183 in plant.f90 +real :: THARDMX, TsurfDiff + end module parameters_plant diff --git a/models/basgra/src/parameters_site.f90 b/models/basgra/src/parameters_site.f90 index 33cc2d2fee3..5d4fb1ec0c9 100644 --- a/models/basgra/src/parameters_site.f90 +++ b/models/basgra/src/parameters_site.f90 @@ -1,61 +1,63 @@ module parameters_site ! Simulation period and time step - real, parameter :: DELT = 1.0 +real, parameter :: DELT = 1.0 ! Geography - real :: LAT +real :: LAT ! Atmospheric conditions - real, parameter :: CO2A = 350 +real, parameter :: CO2A = 350 ! Soil - real, parameter :: DRATE = 50 - real :: WCI - real :: FWCAD, FWCWP, FWCFC, FWCWET, WCST - real :: WCAD, WCWP, WCFC, WCWET - real, parameter :: KNFIX = 0, RRUNBULK = 0.05 +real, parameter :: DRATE = 50 +real :: WCI +real :: FWCAD, FWCWP, FWCFC, FWCWET, WCST +real :: WCAD, WCWP, WCFC, WCWET +real, parameter :: KNFIX = 0, RRUNBULK = 0.05 ! Soil - WINTER PARAMETERS - real :: FGAS, FO2MX, gamma, KRTOTAER, KSNOW - real, parameter :: LAMBDAice = 1.9354e+005 - real :: LAMBDAsoil - real, parameter :: LatentHeat = 335000. - real, parameter :: poolInfilLimit = 0.2 - real :: RHOnewSnow, RHOpack - real, parameter :: RHOwater = 1000. - real :: SWret, SWrf, TmeltFreeze, TrainSnow - real :: WpoolMax - +real :: FGAS, FO2MX, gamma, KRTOTAER, KSNOW, O2I +real, parameter :: LAMBDAice = 1.9354e+005 +real :: LAMBDAsoil +real, parameter :: LatentHeat = 335000. +real, parameter :: poolInfilLimit = 0.2 +real :: RHOnewSnow, RHOpack +real, parameter :: RHOwater = 1000. +real :: SWret, SWrf, TmeltFreeze, TrainSnow +real :: WpoolMax + ! Soil initial values (parameters) real :: CLITT0, CSOM0, CNLITT0, CNSOMF0, CNSOMS0, FCSOMF0, NMIN0 +real :: CSOMF0, CSOMS0, NLITT0, NSOMF0, NSOMS0 real :: FLITTSOMF, FSOMFSOMS, RNLEACH, KNEMIT real :: TCLITT, TCSOMF, TCSOMS, TMAXF, TSIGMAF, RFN2O real :: WFPS50N2O ! Soil initial constants - real :: DRYSTORI - real :: FdepthI - real :: SDEPTHI - real :: TANAERI - real :: WAPLI - real :: WAPSI - real :: WASI - real :: WETSTORI - +real :: DRYSTORI +real :: FdepthI +real :: SDEPTHI +real :: TANAERI +real :: WAPLI +real :: WAPSI +real :: WASI +real :: WETSTORI +real :: WALI + ! Management: harvest dates and irrigation - integer, dimension(3) :: doyHA - real, parameter :: IRRIGF = 0. +integer, dimension(3) :: doyHA +real, parameter :: IRRIGF = 0. ! Mathematical constants - real, parameter :: pi = 3.141592653589793 +real, parameter :: pi = 3.141592653589793 ! real, parameter :: Freq = 2.*pi / 365. - real, parameter :: Kmin = 4. - real, parameter :: Ampl = 0.625 - real, parameter :: Bias = Kmin + Ampl - +real, parameter :: Kmin = 4. +real, parameter :: Ampl = 0.625 +real, parameter :: Bias = Kmin + Ampl + ! SA parameters - real :: NFERTMULT +real :: NFERTMULT end module parameters_site diff --git a/models/basgra/src/set_params.f90 b/models/basgra/src/set_params.f90 index 3595d1c44fc..3695a9e3f81 100644 --- a/models/basgra/src/set_params.f90 +++ b/models/basgra/src/set_params.f90 @@ -3,7 +3,7 @@ Subroutine set_params(pa) use parameters_site use parameters_plant implicit none -real :: pa(130) ! The length of pa() should be at least as high as the number of parameters +real :: pa(140) ! The length of pa() should be at least as high as the number of parameters ! Initial constants LOG10CLVI = pa(1) @@ -154,6 +154,15 @@ Subroutine set_params(pa) TILG2I = pa(128) TILVI = pa(129) +WALI = pa(130) +O2I = pa(131) + +CSOMF0 = pa(132) +CSOMS0 = pa(133) +NLITT0 = pa(134) +NSOMF0 = pa(135) +NSOMS0 = pa(136) + ! 
Parameter transformations CLVI = 10**LOG10CLVI CRESI = 10**LOG10CRESI From f90a3a55a6df6d4f43a76e5c9b129a5ff2423605 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 10 Jun 2020 10:36:08 -0400 Subject: [PATCH 1075/2289] fix bug --- models/basgra/R/write.config.BASGRA.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 866874cb0fb..22e26294a1d 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -482,8 +482,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # this is probably not changing #run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"] / last_vals[names(last_vals) == "FRTILG"] - run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "TILG1"] # * run_params[names(run_params) == "TILTOTI"] - run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "TILG2"] # * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "FRTILG1"] * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "FRTILG2"] * run_params[names(run_params) == "TILTOTI"] run_params[names(run_params) == "TILVI"] <- last_vals[names(last_vals) == "TILV"] From 2b0c16831beb215104823f028af244bf1519a695 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Wed, 10 Jun 2020 23:36:27 +0530 Subject: [PATCH 1076/2289] Update book.yml --- .github/workflows/book.yml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 63a9505beb8..5c43a945f39 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -15,8 +15,7 @@ jobs: # The type of runner that the job will run on runs-on: ubuntu-latest container: pecan/base:latest - env: - GITHUB_PAT: ${{ secrets.REPO_TOKEN }} + steps: @@ -51,8 +50,9 @@ jobs: git init git add --all * git commit -m "Update the book `date`" || true - git remote add origin https://github.com/${{secrets.GH_USER}}/pecan-documentation.git - git push --set-upstream origin master + - uses: r-lib/actions/pr-push@master + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} From 7877d59a0b0be873f8525ce4af7c846ebfb312b5 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 10 Jun 2020 18:51:29 -0500 Subject: [PATCH 1077/2289] use get_postgres_envvars in betyConnect --- CHANGELOG.md | 1 + base/db/R/query.dplyr.R | 31 +++++++++++++------ base/db/R/query.format.vars.R | 16 +++++----- base/workflow/inst/batch_run.R | 3 +- base/workflow/inst/permutation_tests.R | 1 - .../03_topical_pages/09_standalone_tools.Rmd | 2 +- .../scripts/benchmark.workflow.FATES_BCI.R | 2 +- shiny/BrownDog/server.R | 10 +++--- shiny/ViewMet/server.R | 2 +- 9 files changed, 38 insertions(+), 30 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7996e843a80..7cd2ee2a85e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Fixed +- PEcAn.DB::betyConnect() is now smarter, and will try to use either config.php or environment variables to create a connection. It has switched to use db.open helper function. - PEcAn.utils::tranformstats() assumed the statistic names column of its input was a factor. 
It now accepts character too, and returns the same class given as input (#2545). - fixed and added tests for `get.rh` function in PEcAn.data.atmosphere - Invalid .zenodo.json that broke automatic archiving on Zenodo ([b56ef53](https://github.com/PecanProject/pecan/commit/b56ef53888d73904c893b9e8c8cfaeedd7b1edbe)) diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R index 4882ed53c0a..2235a302a56 100644 --- a/base/db/R/query.dplyr.R +++ b/base/db/R/query.dplyr.R @@ -4,19 +4,32 @@ #' betyConnect <- function(php.config = "../../web/config.php") { ## Read PHP config file for webserver + if (file.exists(php.config)) { + php_params <- PEcAn.utils::read_web_config(php.config) + } else { + php_params <- list() + } + + ## helper function + getphp = function (item, default = "") { + value = php_params[[item]] + if (is.null(value)) default else value + } + + ## fill in all data from environment variables + dbparams <- get_postgres_envvars(host = getphp("db_bety_hostname", "localhost"), + port = getphp("db_bety_port", "5432"), + dbname = getphp("db_bety_database", "bety"), + user = getphp("db_bety_username", "bety"), + password = getphp("db_bety_password", "bety")) - config.list <- PEcAn.utils::read_web_config(php.config) + ## force driver to be postgres, needed when switching to db,open + dbparams[["driver"]] <- "Postgres" ## Database connection - # TODO: The latest version of dplyr/dbplyr works with standard DBI-based - # objects, so we should replace this with a standard `db.open` call. - dplyr::src_postgres(dbname = config.list$db_bety_database, - host = config.list$db_bety_hostname, - user = config.list$db_bety_username, - password = config.list$db_bety_password) + db.open(dbparams) } # betyConnect - #' Convert number to scientific notation pretty expression #' @param l Number to convert to scientific notation #' @export @@ -59,7 +72,7 @@ ncdays2date <- function(time, unit) { #' @export dbHostInfo <- function(bety) { # get host id - result <- db.query(query = "select cast(floor(nextval('users_id_seq') / 1e9) as bigint);", con = bety$con) + result <- db.query(query = "select cast(floor(nextval('users_id_seq') / 1e9) as bigint);", con = bety) hostid <- result[["floor"]] # get machine start and end based on hostid diff --git a/base/db/R/query.format.vars.R b/base/db/R/query.format.vars.R index 79f88eba8d8..61ba7cf3e0e 100644 --- a/base/db/R/query.format.vars.R +++ b/base/db/R/query.format.vars.R @@ -13,8 +13,6 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { PEcAn.logger::logger.error("Must specify input id or format id") } - con <- bety$con - # get input info either form input.id or format.id, depending which is provided # defaults to format.id if both provided # also query site information (id/lat/lon) if an input.id @@ -27,9 +25,9 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { if (is.na(format.id)) { f <- PEcAn.DB::db.query( query = paste("SELECT * from formats as f join inputs as i on f.id = i.format_id where i.id = ", input.id), - con = con + con = bety ) - site.id <- PEcAn.DB::db.query(query = paste("SELECT site_id from inputs where id =", input.id), con = con) + site.id <- PEcAn.DB::db.query(query = paste("SELECT site_id from inputs where id =", input.id), con = bety) if (is.data.frame(site.id) && nrow(site.id)>0) { site.id <- site.id$site_id site.info <- @@ -38,17 +36,17 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { "SELECT id, time_zone, ST_X(ST_CENTROID(geometry)) AS lon, 
ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id =", site.id ), - con = con + con = bety ) site.lat <- site.info$lat site.lon <- site.info$lon site.time_zone <- site.info$time_zone } } else { - f <- PEcAn.DB::db.query(query = paste("SELECT * from formats where id = ", format.id), con = con) + f <- PEcAn.DB::db.query(query = paste("SELECT * from formats where id = ", format.id), con = bety) } - mimetype <- PEcAn.DB::db.query(query = paste("SELECT * from mimetypes where id = ", f$mimetype_id), con = con)[["type_string"]] + mimetype <- PEcAn.DB::db.query(query = paste("SELECT * from mimetypes where id = ", f$mimetype_id), con = bety)[["type_string"]] f$mimetype <- utils::tail(unlist(strsplit(mimetype, "/")),1) # get variable names and units of input data @@ -56,7 +54,7 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { query = paste( "SELECT variable_id,name,unit,storage_type,column_number from formats_variables where format_id = ", f$id ), - con = con + con = bety ) if(all(!is.na(var.ids))){ @@ -84,7 +82,7 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { vars_bety[i, (ncol(vars_bety) - 1):ncol(vars_bety)] <- as.matrix(PEcAn.DB::db.query( query = paste("SELECT name, units from variables where id = ", fv$variable_id[i]), - con = con + con = bety )) } diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index 9a5c561fe21..63940dae476 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -50,7 +50,6 @@ php_file <- file.path(pecan_path, "web", "config.php") stopifnot(file.exists(php_file)) config.list <- PEcAn.utils::read_web_config(php_file) bety <- PEcAn.DB::betyConnect(php_file) -con <- bety$con # Create outfile directory if it doesn't exist dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE) @@ -73,7 +72,7 @@ for (i in seq_len(nrow(input_table))) { revision <- table_row$revision message("Model: ", shQuote(model)) message("Revision: ", shQuote(revision)) - model_df <- tbl(con, "models") %>% + model_df <- tbl(bety, "models") %>% filter(model_name == !!model, revision == !!revision) %>% collect() diff --git a/base/workflow/inst/permutation_tests.R b/base/workflow/inst/permutation_tests.R index 69af5f96598..5821ea28980 100755 --- a/base/workflow/inst/permutation_tests.R +++ b/base/workflow/inst/permutation_tests.R @@ -31,7 +31,6 @@ php_file <- file.path(pecan_path, "web", "config.php") stopifnot(file.exists(php_file)) config.list <- PEcAn.utils::read_web_config(php_file) bety <- PEcAn.DB::betyConnect(php_file) -con <- bety$con # Create path for outfile dir.create(dirname(outfile), showWarnings = FALSE, recursive = TRUE) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index a4d6690af56..006d90c7653 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -125,7 +125,7 @@ input.id = tbl(bety,"inputs") %>% filter(name == input_name) %>% pull(id) data.path = PEcAn.DB::query.file.path( input.id = input.id, host_name = PEcAn.remote::fqdn(), - con = bety$con) + con = bety) ``` 6. 
Load the data diff --git a/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R b/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R index 2c2d47e08dc..7697cd65205 100644 --- a/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R +++ b/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R @@ -28,7 +28,7 @@ settings <- PEcAn.settings::read.settings(settings.file) input_id <- 1000011171 ## 4) Edit Input to associate File ## 5) Verify that PEcAn is able to find and load file -input <- PEcAn.DB::query.file.path(input_id,host_name = "localhost",con = bety$con) +input <- PEcAn.DB::query.file.path(input_id,host_name = "localhost",con = bety) format <- PEcAn.DB::query.format.vars(bety,input_id) field <- PEcAn.benchmark::load_data(input,format) ## 6) Look up variable_id in database diff --git a/shiny/BrownDog/server.R b/shiny/BrownDog/server.R index bdf6cf74178..99ea2fbe366 100644 --- a/shiny/BrownDog/server.R +++ b/shiny/BrownDog/server.R @@ -33,9 +33,8 @@ server <- shinyServer(function(input, output, session) { output$modelSelector <- renderUI({ bety <- betyConnect("../../web/config.php") - con <- bety$con - on.exit(db.close(con), add = TRUE) - models <- db.query("SELECT name FROM modeltypes;", con) + on.exit(db.close(bety), add = TRUE) + models <- db.query("SELECT name FROM modeltypes;", bety) selectInput("model", "Model", models) }) @@ -75,8 +74,7 @@ server <- shinyServer(function(input, output, session) { observeEvent(input$type, { # get all sites name, lat and lon by sitegroups bety <- betyConnect("../../web/config.php") - con <- bety$con - on.exit(db.close(con), add = TRUE) + on.exit(db.close(bety), add = TRUE) sites <- db.query( paste0( @@ -87,7 +85,7 @@ server <- shinyServer(function(input, output, session) { input$type, "');" ), - con + bety ) if(length(sites) > 0){ diff --git a/shiny/ViewMet/server.R b/shiny/ViewMet/server.R index 4cbacdea701..3cb45c022e0 100644 --- a/shiny/ViewMet/server.R +++ b/shiny/ViewMet/server.R @@ -138,7 +138,7 @@ server <- function(input, output, session) { formatid <- tbl(bety, "inputs") %>% filter(id == inputid) %>% pull(format_id) siteid <- tbl(bety, "inputs") %>% filter(id == inputid) %>% pull(site_id) - site = query.site(con = bety$con, siteid) + site = query.site(con = bety, siteid) current_nc <- ncdf4::nc_open(rv$load.paths[i]) vars_in_file <- names(current_nc[["var"]]) From ddbfe3f476fd7dd92490c4ff828e2b081873bb2c Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 11 Jun 2020 05:32:05 -0400 Subject: [PATCH 1078/2289] preserve ncr --- models/basgra/R/run_BASGRA.R | 4 +- models/basgra/R/write.config.BASGRA.R | 26 ++++--- models/basgra/src/BASGRA.f90 | 108 +++++++++++++------------- 3 files changed, 71 insertions(+), 67 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 96d59619bb9..e77bbd85783 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -170,8 +170,8 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, "Fdepth" , "LAI" , "LT50" , "O2" , "PHEN" , "ROOTD" , "Sdepth" , "TANAER" , "TILG" , "TILV" , "WAL" , "WAPL" , "WAPS" , "WAS" , "WETSTOR" , "DM" , "RES" , "PHENCR" , - "NELLVG" , "RLEAF" , "SLA" , "TILTOT" , "FRTILG" , "FRTILG1" , - "FRTILG2" , "RDRT" , "VERN" , + "NELLVG" , "NELLVM" , "SLA" , "TILTOT" , "FRTILG" , "TILG1" , + "TILG2" , "RDRT" , "VERN" , "CLITT" , "CSOMF", "CSOMS" , "NLITT" , "NSOMF", "NSOMS" , "NMIN" , "PHOT" , "RplantAer" ,"Rsoil" , "NemissionN2O", "NemissionNO", "Nfert", "Ndep" , "RWA" , diff --git 
a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 22e26294a1d..4497cdaad83 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -259,16 +259,16 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ic.names <- names(IC) if ("LAI" %in% ic.names) { - run_params[which(names(run_params) == "LOG10LAII")] <- log10(IC$LAI) + run_params[names(run_params) == "LOG10LAII"] <- log10(IC$LAI) } if ("fast_soil_pool_carbon_content" %in% ic.names) { - run_params[which(names(run_params) == "CSOMF0")] <- udunits2::ud.convert(IC$fast_soil_pool_carbon_content, "kg", "g") + run_params[names(run_params) == "CSOMF0"] <- udunits2::ud.convert(IC$fast_soil_pool_carbon_content, "kg", "g") } if ("slow_soil_pool_carbon_content" %in% ic.names) { - run_params[which(names(run_params) == "CSOMS0")] <- udunits2::ud.convert(IC$slow_soil_pool_carbon_content, "kg", "g") + run_params[names(run_params) == "CSOMS0"] <- udunits2::ud.convert(IC$slow_soil_pool_carbon_content, "kg", "g") } }else if(!is.null(settings$run$inputs$poolinitcond$path)){ @@ -430,7 +430,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "TILTOTI"] <- last_vals[names(last_vals) == "TILG"] + last_vals[names(last_vals) == "TILV"] # FRTILGI = pa(8) - run_params[names(run_params) == "FRTILGI"] <- last_vals[names(last_vals) == "FRTILG"] + #run_params[names(run_params) == "FRTILGI"] <- last_vals[names(last_vals) == "FRTILG"] # LT50I = pa(9) run_params[names(run_params) == "LT50I"] <- last_vals[names(last_vals) == "LT50"] @@ -441,17 +441,17 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # CSOM0 = pa( 83) ! (g C m-2) Initial C in OM - handled above # CNLITT0 = pa( 84) ! (g C g-1 N) Initial C/N ratio of litter - run_params[names(run_params) == "CNLITT0"] <- last_vals[names(last_vals) == "CLITT"] / last_vals[names(last_vals) == "NLITT"] + # run_params[names(run_params) == "CNLITT0"] <- last_vals[names(last_vals) == "CLITT"] / last_vals[names(last_vals) == "NLITT"] # FCSOMF0 handled above # CNSOMF0 = pa( 85) ! (g C g-1 N) Initial C/N ratio of fast-decomposing OM - csomf <- run_params[which(names(run_params) == "FCSOMF0")] * run_params[which(names(run_params) == "CSOM0")] - run_params[names(run_params) == "CNSOMF0"] <- csomf / last_vals[names(last_vals) == "NSOMF"] + # csomf <- run_params[which(names(run_params) == "FCSOMF0")] * run_params[which(names(run_params) == "CSOM0")] + # run_params[names(run_params) == "CNSOMF0"] <- csomf / last_vals[names(last_vals) == "NSOMF"] # CNSOMS0 = pa( 86) ! 
(g C g-1 N) Initial C/N ratio of slowly decomposing OM - csoms <- (1 - run_params[which(names(run_params) == "FCSOMF0")]) * run_params[which(names(run_params) == "CSOM0")] - run_params[names(run_params) == "CNSOMS0"] <- csoms / last_vals[names(last_vals) == "NSOMS"] + # csoms <- (1 - run_params[which(names(run_params) == "FCSOMF0")]) * run_params[which(names(run_params) == "CSOM0")] + # run_params[names(run_params) == "CNSOMS0"] <- csoms / last_vals[names(last_vals) == "NSOMS"] # PHENRF <- (1 - run_params[names(run_params) == "PHENI"])/(1 - run_params[names(run_params) == "PHENCR"]) # if (PHENRF > 1.0) PHENRF = 1.0 @@ -482,8 +482,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # this is probably not changing #run_params[names(run_params) == "FRTILGG1I"] <- last_vals[names(last_vals) == "FRTILG1"] / last_vals[names(last_vals) == "FRTILG"] - run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "FRTILG1"] * run_params[names(run_params) == "TILTOTI"] - run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "FRTILG2"] * run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG1I"] <- last_vals[names(last_vals) == "TILG1"] #* run_params[names(run_params) == "TILTOTI"] + run_params[names(run_params) == "TILG2I"] <- last_vals[names(last_vals) == "TILG2"] #* run_params[names(run_params) == "TILTOTI"] run_params[names(run_params) == "TILVI"] <- last_vals[names(last_vals) == "TILV"] @@ -498,6 +498,10 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "NLITT0"] <- last_vals[names(last_vals) == "NLITT"] run_params[names(run_params) == "NSOMF0"] <- last_vals[names(last_vals) == "NSOMF"] run_params[names(run_params) == "NSOMS0"] <- last_vals[names(last_vals) == "NSOMS"] + + #ratio to be preserved + # NRT = NCR * CRTI + run_params[which(names(run_params) == "NCR")] <- last_vals[names(last_vals) == "NCRT"] } diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index 2f4bc9b4406..4ffaa1ed04a 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -193,7 +193,7 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & DMRES = CRES / 0.40 DMSH = DMLV + DMST + DMRES DM = DMSH + DMSTUB - TILTOT = TILG1 + TILG2 + TILV + NSH_DMSH = NSH / DMSH ! N content in shoot DM; g N g-1 DM @@ -205,6 +205,56 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & F_WALL_DM,F_WALL_DMSH,F_WALL_LV,F_WALL_ST, & F_DIGEST_DM,F_DIGEST_DMSH,F_DIGEST_LV,F_DIGEST_ST,F_DIGEST_WALL) +! State equations plants + CLV = CLV + GLV - DLV - HARVLV + CLVD = CLVD + DLV + CRES = CRES + GRES - RESMOB - HARVRE + CRT = CRT + GRT - DRT + CST = CST + GST - HARVST + CSTUB = CSTUB + GSTUB - DSTUB + LAI = LAI + GLAI - DLAI - HARVLA + LT50 = LT50 + DeHardRate - HardRate + PHEN = min(1., PHEN + GPHEN - DPHEN - HARVPH) + ROOTD = ROOTD + RROOTD + TILG1 = TILG1 + TILVG1 - TILG1G2 + TILG2 = TILG2 + TILG1G2 - HARVTILG2 + TILV = TILV + GTILV - TILVG1 - DTILV + TILTOT = TILG1 + TILG2 + TILV + if((LAT>0).AND.(doy==305)) VERN = 0 + if((LAT<0).AND.(doy==122)) VERN = 0 + if(DAVTMP0) YIELD_LAST = YIELD + YIELD_TOT = YIELD_TOT + YIELD + + NRT = NRT + GNRT - DNRT + NSH = NSH + GNSH - DNSH - HARVNSH - NSHmob + + Nfert_TOT = Nfert_TOT + Nfert + DM_MAX = max( DM, DM_MAX ) + +! 
State equations soil + CLITT = CLITT + DLV + DSTUB - rCLITT - dCLITT + CSOMF = CSOMF + DRT + dCLITTsomf - rCSOMF - dCSOMF + CSOMS = CSOMS + dCSOMFsoms - dCSOMS + DRYSTOR = DRYSTOR + reFreeze + Psnow - SnowMelt + Fdepth = Fdepth + Frate + NLITT = NLITT + DNSH - rNLITT - dNLITT + NSOMF = NSOMF + DNRT + NLITTsomf - rNSOMF - dNSOMF + NSOMS = NSOMS + NSOMFsoms - dNSOMS + NMIN = NMIN + Ndep + Nfert + Nmineralisation + Nfixation + NSHmobsoil & + - Nupt - Nleaching - Nemission + NMIN = max(0.,NMIN) + O2 = O2 + O2IN - O2OUT + Sdepth = Sdepth + Psnow/RHOnewSnow - PackMelt + TANAER = TANAER + dTANAER + WAL = WAL + THAWS - FREEZEL + poolDrain + INFIL +EXPLOR+IRRIG-DRAIN-RUNOFF-EVAP-TRAN + WAPL = WAPL + THAWPS - FREEZEPL + poolInfil - poolDrain + WAPS = WAPS - THAWPS + FREEZEPL + WAS = WAS - THAWS + FREEZEL + WETSTOR = WETSTOR + Wremain - WETSTOR + + !================ ! Outputs !================ @@ -241,12 +291,12 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & y(day,29) = DMRES / DM ! "RES" = Reserves in g g-1 aboveground dry matter y(day,30) = PHENCR ! y(day,31) = NELLVG ! - y(day,32) = RLEAF ! + y(day,32) = NELLVM ! y(day,33) = LAI / DMLV ! "SLA" = m2 leaf area g-1 dry matter leaves y(day,34) = TILTOT ! "TILTOT" = Total tiller number in # m-2 y(day,35) = (TILG1+TILG2) / TILTOT ! "FRTILG" = Fraction of tillers that is generative - y(day,36) = TILG1 / TILTOT ! "FRTILG1" = Fraction of tillers that is in TILG1 - y(day,37) = TILG2 / TILTOT ! "FRTILG2" = Fraction of tillers that is in TILG2 + y(day,36) = TILG1 + y(day,37) = TILG2 y(day,38) = RDRT y(day,39) = VERN @@ -323,56 +373,6 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & y(day,102) = DAYL -! State equations plants - CLV = CLV + GLV - DLV - HARVLV - CLVD = CLVD + DLV - CRES = CRES + GRES - RESMOB - HARVRE - CRT = CRT + GRT - DRT - CST = CST + GST - HARVST - CSTUB = CSTUB + GSTUB - DSTUB - LAI = LAI + GLAI - DLAI - HARVLA - LT50 = LT50 + DeHardRate - HardRate - PHEN = min(1., PHEN + GPHEN - DPHEN - HARVPH) - ROOTD = ROOTD + RROOTD - TILG1 = TILG1 + TILVG1 - TILG1G2 - TILG2 = TILG2 + TILG1G2 - HARVTILG2 - TILV = TILV + GTILV - TILVG1 - DTILV - if((LAT>0).AND.(doy==305)) VERN = 0 - if((LAT<0).AND.(doy==122)) VERN = 0 - if(DAVTMP0) YIELD_LAST = YIELD - YIELD_TOT = YIELD_TOT + YIELD - - NRT = NRT + GNRT - DNRT - NSH = NSH + GNSH - DNSH - HARVNSH - NSHmob - - Nfert_TOT = Nfert_TOT + Nfert - DM_MAX = max( DM, DM_MAX ) - -! 
State equations soil - CLITT = CLITT + DLV + DSTUB - rCLITT - dCLITT - CSOMF = CSOMF + DRT + dCLITTsomf - rCSOMF - dCSOMF - CSOMS = CSOMS + dCSOMFsoms - dCSOMS - DRYSTOR = DRYSTOR + reFreeze + Psnow - SnowMelt - Fdepth = Fdepth + Frate - NLITT = NLITT + DNSH - rNLITT - dNLITT - NSOMF = NSOMF + DNRT + NLITTsomf - rNSOMF - dNSOMF - NSOMS = NSOMS + NSOMFsoms - dNSOMS - NMIN = NMIN + Ndep + Nfert + Nmineralisation + Nfixation + NSHmobsoil & - - Nupt - Nleaching - Nemission - NMIN = max(0.,NMIN) - O2 = O2 + O2IN - O2OUT - Sdepth = Sdepth + Psnow/RHOnewSnow - PackMelt - TANAER = TANAER + dTANAER - WAL = WAL + THAWS - FREEZEL + poolDrain + INFIL +EXPLOR+IRRIG-DRAIN-RUNOFF-EVAP-TRAN - WAPL = WAPL + THAWPS - FREEZEPL + poolInfil - poolDrain - WAPS = WAPS - THAWPS + FREEZEPL - WAS = WAS - THAWS + FREEZEL - WETSTOR = WETSTOR + Wremain - WETSTOR - - - enddo end \ No newline at end of file From 94c2b2cd2c0a426cdd44cda3e2202af729168c34 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 11 Jun 2020 09:27:36 -0500 Subject: [PATCH 1079/2289] Update base/db/R/query.dplyr.R Co-authored-by: Chris Black --- base/db/R/query.dplyr.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R index 2235a302a56..a3c64fce868 100644 --- a/base/db/R/query.dplyr.R +++ b/base/db/R/query.dplyr.R @@ -23,7 +23,7 @@ betyConnect <- function(php.config = "../../web/config.php") { user = getphp("db_bety_username", "bety"), password = getphp("db_bety_password", "bety")) - ## force driver to be postgres, needed when switching to db,open + ## force driver to be postgres (only value supported by db.open) dbparams[["driver"]] <- "Postgres" ## Database connection From ce51edd782c6100eee13253472dfda1462b727bd Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 11 Jun 2020 09:27:47 -0500 Subject: [PATCH 1080/2289] Update CHANGELOG.md Co-authored-by: Chris Black --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7723d97f95b..df47224b858 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,7 +9,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Fixed -- PEcAn.DB::betyConnect() is now smarter, and will try to use either config.php or environment variables to create a connection. It has switched to use db.open helper function. +- PEcAn.DB::betyConnect() is now smarter, and will try to use either config.php or environment variables to create a connection. It has switched to use db.open helper function (#2632). - PEcAn.utils::tranformstats() assumed the statistic names column of its input was a factor. It now accepts character too, and returns the same class given as input (#2545). 
- fixed and added tests for `get.rh` function in PEcAn.data.atmosphere - Invalid .zenodo.json that broke automatic archiving on Zenodo ([b56ef53](https://github.com/PecanProject/pecan/commit/b56ef53888d73904c893b9e8c8cfaeedd7b1edbe)) From 1fe6d919f05a3e2e9dc138824d27efcfdd409cdd Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 12 Jun 2020 03:42:18 -0400 Subject: [PATCH 1081/2289] hop test passes --- models/basgra/R/write.config.BASGRA.R | 20 ++++++++++++-------- models/basgra/inst/BASGRA_params.Rdata | Bin 3173 -> 3194 bytes models/basgra/src/BASGRA.f90 | 2 +- models/basgra/src/set_params.f90 | 9 +++++---- 4 files changed, 18 insertions(+), 13 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 4497cdaad83..503c9aeb497 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -259,7 +259,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ic.names <- names(IC) if ("LAI" %in% ic.names) { - run_params[names(run_params) == "LOG10LAII"] <- log10(IC$LAI) + run_params[names(run_params) == "LOG10LAII"] <- IC$LAI } @@ -282,7 +282,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ## laiInit m2/m2 lai <- try(ncdf4::ncvar_get(IC.nc, "LAI"), silent = TRUE) if (!is.na(lai) && is.numeric(lai)) { - run_params[which(names(run_params) == "LOG10LAII")] <- log10(lai) + run_params[which(names(run_params) == "LOG10LAII")] <- lai } # This is IC @@ -363,16 +363,16 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # THESE DERIVATIONS WERE PART OF THE BASGRA CODE, NOW TAKEN OUT HERE # NRT = NCR * CRTI - run_params[which(names(run_params) == "NRTI")] <- (10^run_params[names(run_params) == "LOG10CRTI"])* + run_params[which(names(run_params) == "NRTI")] <- run_params[names(run_params) == "LOG10CRTI"]* run_params[names(run_params) == "NCR"] # NCSHI = NCSHMAX * (1-EXP(-K*LAII)) / (K*LAII) # NSH = NCSHI * (CLVI+CSTI) - lai_tmp <- (10^run_params[names(run_params) == "LOG10LAII"]) + lai_tmp <- run_params[names(run_params) == "LOG10LAII"] ncshi <- run_params[names(run_params) == "NCSHMAX"] * (1-exp(-run_params[names(run_params) == "K"]*lai_tmp)) / (run_params[names(run_params) == "K"]*lai_tmp) run_params[which(names(run_params) == "NSHI")] <- ncshi * - ((10^run_params[names(run_params) == "LOG10CLVI"]) + run_params[names(run_params) == "CSTI"]) + ((run_params[names(run_params) == "LOG10CLVI"]) + run_params[names(run_params) == "CSTI"]) # TILG1 = TILTOTI * FRTILGI * FRTILGG1I @@ -398,6 +398,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[names(run_params) == "NSOMF0"] <- run_params[names(run_params) == "CSOMF0"] / run_params[names(run_params) == "CNSOMF0"] run_params[names(run_params) == "NSOMS0"] <- run_params[names(run_params) == "CSOMS0"] / run_params[names(run_params) == "CNSOMS0"] + + ################################################################## ######################### PREVIOUS STATE ######################### ################################################################## @@ -410,13 +412,13 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N load(last_states_file) # LOG10CLVI = pa(1) - run_params[names(run_params) == "LOG10CLVI"] <- log10(last_vals[names(last_vals) == "CLV"]) + run_params[names(run_params) == "LOG10CLVI"] <- last_vals[names(last_vals) == "CLV"] # LOG10CRESI = pa(2) - run_params[names(run_params) == "LOG10CRESI"] <- 
log10(last_vals[names(last_vals) == "CRES"]) + run_params[names(run_params) == "LOG10CRESI"] <- last_vals[names(last_vals) == "CRES"] # LOG10CRTI = pa(3) - run_params[names(run_params) == "LOG10CRTI"] <- log10(last_vals[names(last_vals) == "CRT"]) + run_params[names(run_params) == "LOG10CRTI"] <- last_vals[names(last_vals) == "CRT"] # CSTI = pa(4) run_params[names(run_params) == "CSTI"] <- last_vals[names(last_vals) == "CST"] @@ -502,6 +504,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N #ratio to be preserved # NRT = NCR * CRTI run_params[which(names(run_params) == "NCR")] <- last_vals[names(last_vals) == "NCRT"] + + } diff --git a/models/basgra/inst/BASGRA_params.Rdata b/models/basgra/inst/BASGRA_params.Rdata index 5732a4a154dba6aca23e8dd4c0c597be6b29636d..446d20dd2742a97cda61b3439a1de5f34e30845b 100644 GIT binary patch delta 47 zcmaDV@k?TY1!Lz%O9vJv`v;SYS(F$%H&0_Z!7jqWz{$YC!0O@{>Ej7x`~zVCXZ;Nr delta 31 lcmew*@l;}h1!KoXO9z(8jVv;Z9h>K|oM2~XVED%X!~nKH3UdGe diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index 4ffaa1ed04a..a0edf55dc88 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -141,8 +141,8 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & do day = 1, NDAYS ! Environment - call DDAYL (doy) call set_weather_day(day,DRYSTOR, year,doy) + call DDAYL (doy) call SoilWaterContent(Fdepth,ROOTD,WAL) call Physics (DAVTMP,Fdepth,ROOTD,Sdepth,WAS, Frate) call MicroClimate (doy,DRYSTOR,Fdepth,Frate,LAI,Sdepth,Tsurf,WAPL,WAPS,WETSTOR, & diff --git a/models/basgra/src/set_params.f90 b/models/basgra/src/set_params.f90 index 3695a9e3f81..1ca7d54c76d 100644 --- a/models/basgra/src/set_params.f90 +++ b/models/basgra/src/set_params.f90 @@ -163,11 +163,12 @@ Subroutine set_params(pa) NSOMF0 = pa(135) NSOMS0 = pa(136) + ! 
Parameter transformations -CLVI = 10**LOG10CLVI -CRESI = 10**LOG10CRESI -CRTI = 10**LOG10CRTI -LAII = 10**LOG10LAII +CLVI = LOG10CLVI +CRESI = LOG10CRESI +CRTI = LOG10CRTI +LAII = LOG10LAII WCAD = FWCAD * WCST WCWP = FWCWP * WCST From 5337bf57d0f58fb9f542dc049f215466f28f7b9c Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 12 Jun 2020 10:03:06 -0400 Subject: [PATCH 1082/2289] hop test passes --- models/basgra/src/BASGRA.f90 | 2 ++ 1 file changed, 2 insertions(+) diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index a0edf55dc88..c7a4ac36cc6 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -229,6 +229,8 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & NRT = NRT + GNRT - DNRT NSH = NSH + GNSH - DNSH - HARVNSH - NSHmob + + NCR = NRT / CRT Nfert_TOT = Nfert_TOT + Nfert DM_MAX = max( DM, DM_MAX ) From 6bcb0f2435cc51b6b9bd97c054e2081b78ce3518 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 13 Jun 2020 00:05:05 -0500 Subject: [PATCH 1083/2289] replaces bety connection pipeline with --- apps/api/auth.R | 10 ++-------- apps/api/models.R | 10 ++-------- apps/api/pecanapi-spec.yml | 2 +- apps/api/runs.R | 10 ++-------- apps/api/workflows.R | 19 +++---------------- 5 files changed, 10 insertions(+), 41 deletions(-) diff --git a/apps/api/auth.R b/apps/api/auth.R index 479406bdb0b..9f080962829 100644 --- a/apps/api/auth.R +++ b/apps/api/auth.R @@ -26,14 +26,8 @@ get_crypt_pass <- function(username, password, secretkey = NULL) { #* @return TRUE if encrypted password is correct, else FALSE #* @author Tezan Sahu validate_crypt_pass <- function(username, crypt_pass) { - settings <-list(database = list(bety = list( - driver = "PostgreSQL", - user = "bety", - dbname = "bety", - password = "bety", - host="postgres" - ))) - dbcon <- PEcAn.DB::db.open(settings$database$bety) + + dbcon <- PEcAn.DB::betyConnect() qry_statement <- paste0("SELECT crypted_password FROM users WHERE login='", username, "'") res <- PEcAn.DB::db.query(qry_statement, dbcon) diff --git a/apps/api/models.R b/apps/api/models.R index a1af1dfac4c..aaad5ca7672 100644 --- a/apps/api/models.R +++ b/apps/api/models.R @@ -5,14 +5,8 @@ #' @author Tezan Sahu #* @get / getModels <- function(model_name="all", revision="all", res){ - settings <-list(database = list(bety = list( - driver = "PostgreSQL", - user = "bety", - dbname = "bety", - password = "bety", - host="postgres" - ))) - dbcon <- PEcAn.DB::db.open(settings$database$bety) + + dbcon <- PEcAn.DB::betyConnect() qry_statement <- "SELECT m.id AS model_id, m.model_name, m.revision, m.modeltype_id, t.name AS model_type FROM models m, modeltypes t WHERE m.modeltype_id = t.id" if (model_name == "all" & revision == "all"){ diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 1100831ea23..7e3ef867dfd 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/503b5316 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/a024c7b8 - description: Localhost url: http://127.0.0.1:8000 diff --git a/apps/api/runs.R b/apps/api/runs.R index d07b88a8cfb..65566fc013c 100644 --- a/apps/api/runs.R +++ b/apps/api/runs.R @@ -10,14 +10,8 @@ getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){ res$status <- 400 return(list(error = "Invalid value for parameter")) } - settings <-list(database = list(bety = list( - driver = "PostgreSQL", - user = "bety", - dbname = "bety", - password = 
"bety", - host="postgres" - ))) - dbcon <- PEcAn.DB::db.open(settings$database$bety) + + dbcon <- PEcAn.DB::betyConnect() qry_statement <- paste0( "SELECT r.id, e.runtype, r.model_id, r.site_id, r.parameter_list, r.ensemble_id, ", "e.workflow_id, r.start_time, r.finish_time ", diff --git a/apps/api/workflows.R b/apps/api/workflows.R index f477e9b3a1a..62b91207186 100644 --- a/apps/api/workflows.R +++ b/apps/api/workflows.R @@ -11,14 +11,8 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r res$status <- 400 return(list(error = "Invalid value for parameter")) } - settings <-list(database = list(bety = list( - driver = "PostgreSQL", - user = "bety", - dbname = "bety", - password = "bety", - host="postgres" - ))) - dbcon <- PEcAn.DB::db.open(settings$database$bety) + + dbcon <- PEcAn.DB::betyConnect() qry_statement <- paste( "SELECT w.id, a.value AS properties", "FROM workflows w", @@ -91,14 +85,7 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r #' @author Tezan Sahu #* @get / getWorkflowDetails <- function(id, res){ - settings <-list(database = list(bety = list( - driver = "PostgreSQL", - user = "bety", - dbname = "bety", - password = "bety", - host="postgres" - ))) - dbcon <- PEcAn.DB::db.open(settings$database$bety) + dbcon <- PEcAn.DB::betyConnect() qry_statement <- paste0( "SELECT w.id, w.model_id, w.site_id, a.value AS properties ", "FROM workflows w ", From 6f8b0b5ac428180a571d33cfa425e971ffe6cd9a Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 16:52:35 +0530 Subject: [PATCH 1084/2289] Update book.yml --- .github/workflows/book.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 5c43a945f39..2db970921e0 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -50,6 +50,7 @@ jobs: git init git add --all * git commit -m "Update the book `date`" || true + git remote -v - uses: r-lib/actions/pr-push@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} From e243af6547826db8c7d7e158c910187e5d5b73e0 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 17:14:05 +0530 Subject: [PATCH 1085/2289] Update book.yml --- .github/workflows/book.yml | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 2db970921e0..07e030d67c7 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -15,7 +15,8 @@ jobs: # The type of runner that the job will run on runs-on: ubuntu-latest container: pecan/base:latest - + env: + GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} steps: @@ -50,10 +51,11 @@ jobs: git init git add --all * git commit -m "Update the book `date`" || true - git remote -v - - uses: r-lib/actions/pr-push@master - with: - repo-token: ${{ secrets.GITHUB_TOKEN }} + git remote add origin git@github.com:MukulMaheshwari/pecan-documentation.git + git config master.remote origin + git config master.merge refs/heads/master + + From 3247d4f03bb22c30de774e1d0a2422838e161d61 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 17:19:33 +0530 Subject: [PATCH 1086/2289] Update book.yml --- .github/workflows/book.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 07e030d67c7..c6c7bf1ea2b 100644 --- 
a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -54,6 +54,7 @@ jobs: git remote add origin git@github.com:MukulMaheshwari/pecan-documentation.git git config master.remote origin git config master.merge refs/heads/master + git push origin master From 40ad66148753a6d78baf7b8872ad754f551c804d Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 17:58:53 +0530 Subject: [PATCH 1087/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index c6c7bf1ea2b..fc62195e619 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -54,7 +54,7 @@ jobs: git remote add origin git@github.com:MukulMaheshwari/pecan-documentation.git git config master.remote origin git config master.merge refs/heads/master - git push origin master + git push -aq origin master From b501fbca8215d278036b487da20ffe6a6de87ecc Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 22:49:16 +0530 Subject: [PATCH 1088/2289] updated book.yml --- .github/workflows/book.yml | 91 ++++++++++++++++++-------------------- 1 file changed, 43 insertions(+), 48 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index fc62195e619..20118c5fefc 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -1,60 +1,55 @@ -# This is a basic workflow to help you get started with Actions - -name: book - on: push: - branches: master - pull_request: - branches: master + branches: + - master + +name: renderbook jobs: - - build: - # The type of runner that the job will run on - runs-on: ubuntu-latest + bookdown: + name: Render-Book + runs-on: macOS-latest container: pecan/base:latest - env: - GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} - - steps: - - - uses: actions/checkout@v2 + - uses: actions/checkout@v1 + - uses: r-lib/actions/setup-r@v1 + - uses: r-lib/actions/setup-pandoc@v1 + - name: Install rmarkdown + run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' + - name: Render Book + run: Rscript -e 'bookdown::render_book("index.Rmd")' + - uses: actions/upload-artifact@v1 + with: + name: _book + path: _book/ + +# Need to first create an empty gh-pages branch +# see https://pkgdown.r-lib.org/reference/deploy_site_github.html +# and also add secrets for a GITHUB_PAT and EMAIL to the repository +# gh-action from Cecilapp/GitHub-Pages-deploy + checkout-and-deploy: + runs-on: ubuntu-latest + needs: bookdown + steps: + - name: Checkout + uses: actions/checkout@master + - name: Download artifact + uses: actions/download-artifact@v1.0.0 + with: + # Artifact name + name: _book # optional + # Destination path + path: _book # optional + - name: Deploy to GitHub Pages + uses: Cecilapp/GitHub-Pages-deploy@master + env: + EMAIL: ${{ secrets.EMAIL }} # must be a verified email + GH_TOKEN: ${{ secrets.GITHUB_PA }} # https://github.com/settings/tokens + BUILD_DIR: _book/ # "_site/" by default + - - run : | - apt-get update \ - && apt-get -y --no-install-recommends install \ - jags \ - time \ - libgdal-dev \ - libglpk-dev \ - librdf0-dev \ - libnetcdf-dev \ - libudunits2-dev \ - libgl1-mesa-dev \ - libglu1-mesa-dev - - # Building book from source using makefile - - name: Building book - run: cd book_source && make - - - name: looking for generated html - run: cd /__w/pecan-sandbox/pecan-sandbox/book_source/_book - shell: bash - - name: 
Commiting the changes to pecan documentation repo - run: | - git config --global user.email "mukulmaheshwari12@gmail.com" - git config --global user.name "MukulMaheshwari" - git init - git add --all * - git commit -m "Update the book `date`" || true - git remote add origin git@github.com:MukulMaheshwari/pecan-documentation.git - git config master.remote origin - git config master.merge refs/heads/master - git push -aq origin master From aab4efe080e12dcb5021da617ef9bd8f36628678 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 22:52:57 +0530 Subject: [PATCH 1089/2289] Update book.yml --- .github/workflows/book.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 20118c5fefc..171b0af8bdc 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -10,7 +10,6 @@ jobs: bookdown: name: Render-Book runs-on: macOS-latest - container: pecan/base:latest steps: - uses: actions/checkout@v1 - uses: r-lib/actions/setup-r@v1 From c46047145e3e6461cca1be264e8af9f4b0778157 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 23:16:24 +0530 Subject: [PATCH 1090/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 171b0af8bdc..f4174ffcfbc 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -17,7 +17,7 @@ jobs: - name: Install rmarkdown run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book - run: Rscript -e 'bookdown::render_book("index.Rmd")' + run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")' - uses: actions/upload-artifact@v1 with: name: _book From 243471dc18df35a24cb67d24e2a055a92b56a902 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 23:20:02 +0530 Subject: [PATCH 1091/2289] Update book.yml --- .github/workflows/book.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index f4174ffcfbc..bfde0502504 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -9,7 +9,8 @@ name: renderbook jobs: bookdown: name: Render-Book - runs-on: macOS-latest + runs-on: ubuntu-latest + container: pecan/base:latest steps: - uses: actions/checkout@v1 - uses: r-lib/actions/setup-r@v1 From 6d33db32b6ef22d03dd686d140993cd7841b6c46 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 13 Jun 2020 23:34:42 +0530 Subject: [PATCH 1092/2289] Update book.yml --- .github/workflows/book.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index bfde0502504..6e29f497221 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -22,7 +22,7 @@ jobs: - uses: actions/upload-artifact@v1 with: name: _book - path: _book/ + path: book_source/_book/ # Need to first create an empty gh-pages branch # see https://pkgdown.r-lib.org/reference/deploy_site_github.html @@ -40,13 +40,13 @@ jobs: # Artifact name name: _book # optional # Destination path - path: _book # optional + path: book_source/_book/ # optional - name: Deploy to GitHub Pages uses: Cecilapp/GitHub-Pages-deploy@master env: EMAIL: ${{ secrets.EMAIL }} # must 
be a verified email GH_TOKEN: ${{ secrets.GITHUB_PA }} # https://github.com/settings/tokens - BUILD_DIR: _book/ # "_site/" by default + BUILD_DIR: book_source/_book/ # "_site/" by default From 44b39c6708b4d85bc88ab687890a50afbd21444e Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun, 14 Jun 2020 11:46:00 +0530 Subject: [PATCH 1093/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 6e29f497221..9d8c2ba2310 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -40,7 +40,7 @@ jobs: # Artifact name name: _book # optional # Destination path - path: book_source/_book/ # optional + path: _book/ # optional - name: Deploy to GitHub Pages uses: Cecilapp/GitHub-Pages-deploy@master env: From bd912a6d8a400d67efbe2921f5c7939a21816502 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun, 14 Jun 2020 11:52:26 +0530 Subject: [PATCH 1094/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 9d8c2ba2310..819aeba6269 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -46,7 +46,7 @@ jobs: env: EMAIL: ${{ secrets.EMAIL }} # must be a verified email GH_TOKEN: ${{ secrets.GITHUB_PA }} # https://github.com/settings/tokens - BUILD_DIR: book_source/_book/ # "_site/" by default + BUILD_DIR: _book/ # "_site/" by default From 299c240277d616463979cdbe09ef57731b8167fc Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 14 Jun 2020 03:57:33 -0500 Subject: [PATCH 1095/2289] converted all betydb calls to dplyr --- apps/api/auth.R | 13 ++++--- apps/api/models.R | 37 ++++++++++--------- apps/api/pecanapi-spec.yml | 2 +- apps/api/runs.R | 64 +++++++++++++++++---------------- apps/api/workflows.R | 74 ++++++++++++++++++++++---------------- 5 files changed, 108 insertions(+), 82 deletions(-) diff --git a/apps/api/auth.R b/apps/api/auth.R index 9f080962829..55c4accb9ab 100644 --- a/apps/api/auth.R +++ b/apps/api/auth.R @@ -1,3 +1,5 @@ +library(dplyr) + #* Obtain the encrypted password for a user #* @param username Username, which is also the 'salt' #* @param password Unencrypted password @@ -29,12 +31,15 @@ validate_crypt_pass <- function(username, crypt_pass) { dbcon <- PEcAn.DB::betyConnect() - qry_statement <- paste0("SELECT crypted_password FROM users WHERE login='", username, "'") - res <- PEcAn.DB::db.query(qry_statement, dbcon) - + res <- tbl(bety, "users") %>% + filter(login == username, + crypted_password == crypt_pass) %>% + count() %>% + collect() + PEcAn.DB::db.close(dbcon) - if (nrow(res) == 1 && res[1, 1] == crypt_pass) { + if (res == 1) { return(TRUE) } diff --git a/apps/api/models.R b/apps/api/models.R index aaad5ca7672..cddfc4442a9 100644 --- a/apps/api/models.R +++ b/apps/api/models.R @@ -1,3 +1,5 @@ +library(dplyr) + #' Retrieve the details of a particular version of a model #' @param name Model name (character) #' @param revision Model version/revision (character) @@ -7,24 +9,27 @@ getModels <- function(model_name="all", revision="all", res){ dbcon <- PEcAn.DB::betyConnect() - - qry_statement <- "SELECT m.id AS model_id, m.model_name, m.revision, m.modeltype_id, t.name AS model_type FROM models m, modeltypes t WHERE m.modeltype_id = t.id" - if (model_name == "all" & revision == "all"){ - # Leave as 
it is - } - else if (model_name != "all" & revision == "all"){ - qry_statement <- paste0(qry_statement, " and model_name = '", model_name, "'") - } - else if (model_name == "all" & revision != "all"){ - qry_statement <- paste0(qry_statement, " and revision = '", revision, "'") + + Models <- tbl(dbcon, "models") %>% + select(model_id = id, model_name, revision, modeltype_id) + + if (model_name != "all"){ + Models <- Models %>% + filter(model_name == !!model_name) } - else{ - qry_statement <- paste0(qry_statement, " and model_name = '", model_name, "' and revision = '", revision, "'") + + if (revision != "all"){ + Models <- Models %>% + filter(revision == !!revision) } - - qry_statement <- paste0(qry_statement, " ORDER BY m.id DESC") - qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + Models <- tbl(dbcon, "modeltypes") %>% + select(modeltype_id = id, model_type = name) %>% + inner_join(Models, by = "modeltype_id") %>% + arrange(model_id) + + qry_res <- Models %>% collect() + PEcAn.DB::db.close(dbcon) if (nrow(qry_res) == 0) { @@ -32,7 +37,7 @@ getModels <- function(model_name="all", revision="all", res){ return(list(error="Model(s) not found")) } else { - qry_res + return(list(models=qry_res)) } } diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 7e3ef867dfd..175a944e4c1 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/a024c7b8 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/c5b30dae - description: Localhost url: http://127.0.0.1:8000 diff --git a/apps/api/runs.R b/apps/api/runs.R index 65566fc013c..c3daab5fa53 100644 --- a/apps/api/runs.R +++ b/apps/api/runs.R @@ -12,29 +12,38 @@ getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){ } dbcon <- PEcAn.DB::betyConnect() - qry_statement <- paste0( - "SELECT r.id, e.runtype, r.model_id, r.site_id, r.parameter_list, r.ensemble_id, ", - "e.workflow_id, r.start_time, r.finish_time ", - "FROM runs r FULL OUTER JOIN ensembles e ", - "ON (r.ensemble_id = e.id) ", - "WHERE e.workflow_id = '", workflow_id, - "' ORDER BY r.id DESC ", - "LIMIT ", limit, - " OFFSET ", offset, ";" - ) - PEcAn.DB::db.query(qry_statement, dbcon) + Runs <- tbl(dbcon, "runs") %>% + select(id, model_id, site_id, parameter_list, ensemble_id, start_time, finish_time) + + Runs <- tbl(dbcon, "ensembles") %>% + select(runtype, ensemble_id=id, workflow_id) %>% + full_join(Runs, by="ensemble_id") %>% + filter(workflow_id == !!workflow_id) + + qry_res <- Runs %>% + arrange(id) %>% + collect() PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { + if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) { res$status <- 404 return(list(error="Run(s) not found")) } else { + has_next <- FALSE + has_prev <- FALSE + if (nrow(qry_res) > (as.numeric(offset) + as.numeric(limit))) { + has_next <- TRUE + } + if (as.numeric(offset) != 0) { + has_prev <- TRUE + } + qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ] result <- list(runs = qry_res) result$count <- nrow(qry_res) - if(nrow(qry_res) == limit){ + if(has_next){ result$next_page <- paste0( req$rook.url_scheme, "://", req$HTTP_HOST, @@ -46,7 +55,7 @@ getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){ limit ) } - if(as.numeric(offset) != 0) { + if(has_prev) { result$prev_page <- paste0( req$rook.url_scheme, "://", req$HTTP_HOST, @@ -71,23 +80,18 @@ getWorkflows <- 
function(req, workflow_id, offset=0, limit=50, res){ #' @author Tezan Sahu #* @get / getWorkflowDetails <- function(id, res){ - settings <-list(database = list(bety = list( - driver = "PostgreSQL", - user = "bety", - dbname = "bety", - password = "bety", - host="postgres" - ))) - dbcon <- PEcAn.DB::db.open(settings$database$bety) - qry_statement <- paste0( - "SELECT r.id, e.runtype, r.model_id, r.site_id, r.parameter_list, r.ensemble_id, e.workflow_id, ", - "r.start_time, r.finish_time, r.created_at, r.updated_at, r.started_at, r.finished_at ", - "FROM runs r FULL OUTER JOIN ensembles e ", - "ON (r.ensemble_id = e.id) ", - "WHERE r.id = '", id, "';" - ) - qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + dbcon <- PEcAn.DB::betyConnect() + + Runs <- tbl(dbcon, "runs") %>% + select(-outdir, -outprefix, -setting) + + Runs <- tbl(dbcon, "ensembles") %>% + select(runtype, ensemble_id=id, workflow_id) %>% + full_join(Runs, by="ensemble_id") %>% + filter(id == !!id) + + qry_res <- Runs %>% collect() PEcAn.DB::db.close(dbcon) diff --git a/apps/api/workflows.R b/apps/api/workflows.R index 62b91207186..4ecbe94acea 100644 --- a/apps/api/workflows.R +++ b/apps/api/workflows.R @@ -1,3 +1,5 @@ +library(dplyr) + #' Get the list of workflows (using a particular model & site, if specified) #' @param model_id Model id (character) #' @param site_id Site id (character) @@ -13,42 +15,51 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r } dbcon <- PEcAn.DB::betyConnect() - qry_statement <- paste( - "SELECT w.id, a.value AS properties", - "FROM workflows w", - "FULL OUTER JOIN attributes a", - "ON (w.id = a.container_id)" - ) - if (is.null(model_id) & is.null(site_id)){ - # Leave as it is - } - else if (!is.null(model_id) & is.null(site_id)){ - qry_statement <- paste0(qry_statement, " WHERE w.model_id = '", model_id, "'") - } - else if (is.null(model_id) & !is.null(site_id)){ - qry_statement <- paste0(qry_statement, " WHERE w.site_id = '", site_id, "'") - } - else{ - qry_statement <- paste0(qry_statement, " WHERE w.model_id = '", model_id, "' and w.site_id = '", site_id, "'") - } + Workflow <- tbl(dbcon, "workflows") %>% + select(id, model_id, site_id) + + Workflow <- tbl(dbcon, "attributes") %>% + select(id = container_id, properties = value) %>% + full_join(Workflow, by = "id") - qry_statement <- paste0(qry_statement, " ORDER BY id LIMIT ", limit, " OFFSET ", offset, ";") + if (!is.null(model_id)) { + Workflow <- Workflow %>% + filter(model_id == !!model_id) + } - qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + if (!is.null(site_id)) { + Workflow <- Workflow %>% + filter(site_id == !!site_id) + } + qry_res <- Workflow %>% + arrange(id) %>% + collect() + PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { + if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) { res$status <- 404 return(list(error="Workflows not found")) } else { + has_next <- FALSE + has_prev <- FALSE + if (nrow(qry_res) > (as.numeric(offset) + as.numeric(limit))) { + has_next <- TRUE + } + if (as.numeric(offset) != 0) { + has_prev <- TRUE + } + + qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ] + qry_res$properties[is.na(qry_res$properties)] = "{}" qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) result <- list(workflows = qry_res) result$count <- nrow(qry_res) - if(nrow(qry_res) == limit){ + if(has_next){ result$next_page <- paste0( req$rook.url_scheme, "://", req$HTTP_HOST, @@ -60,7 +71,7 @@ 
getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r limit ) } - if(as.numeric(offset) != 0) { + if(has_prev) { result$prev_page <- paste0( req$rook.url_scheme, "://", req$HTTP_HOST, @@ -86,15 +97,16 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r #* @get / getWorkflowDetails <- function(id, res){ dbcon <- PEcAn.DB::betyConnect() - qry_statement <- paste0( - "SELECT w.id, w.model_id, w.site_id, a.value AS properties ", - "FROM workflows w ", - "FULL OUTER JOIN attributes a ", - "ON (w.id = a.container_id) ", - "WHERE w.id='", id, "'" - ) - qry_res <- PEcAn.DB::db.query(qry_statement, dbcon) + Workflow <- tbl(dbcon, "workflows") %>% + select(id, model_id, site_id) + + Workflow <- tbl(dbcon, "attributes") %>% + select(id = container_id, properties = value) %>% + full_join(Workflow, by = "id") %>% + filter(id == !!id) + + qry_res <- Workflow %>% collect() PEcAn.DB::db.close(dbcon) From 64c623fd29195458c197d43a0dbf0c543d725f92 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun, 14 Jun 2020 23:53:14 +0530 Subject: [PATCH 1096/2289] Update book.yml --- .github/workflows/book.yml | 39 ++++++++++++++++---------------------- 1 file changed, 16 insertions(+), 23 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 819aeba6269..e4ab84f7ea8 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -19,42 +19,35 @@ jobs: run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")' - - uses: actions/upload-artifact@v1 + - uses: actions/upload-artifact@v2 with: name: _book path: book_source/_book/ -# Need to first create an empty gh-pages branch -# see https://pkgdown.r-lib.org/reference/deploy_site_github.html -# and also add secrets for a GITHUB_PAT and EMAIL to the repository -# gh-action from Cecilapp/GitHub-Pages-deploy + checkout-and-deploy: runs-on: ubuntu-latest needs: bookdown steps: - - name: Checkout - uses: actions/checkout@master + - name: Checkout documentation repo + uses: actions/checkout@v2 + with: + repository: MukulMaheshwari/pecan-documentation + path: pecan-documentation - name: Download artifact - uses: actions/download-artifact@v1.0.0 + uses: actions/download-artifact@v2 with: # Artifact name name: _book # optional # Destination path path: _book/ # optional - - name: Deploy to GitHub Pages - uses: Cecilapp/GitHub-Pages-deploy@master - env: - EMAIL: ${{ secrets.EMAIL }} # must be a verified email - GH_TOKEN: ${{ secrets.GITHUB_PA }} # https://github.com/settings/tokens - BUILD_DIR: _book/ # "_site/" by default - - - - - - - + - name: Commit changes + run: | + git clone https://${GITHUB_PA}@github.com/${GH_USER}/pecan-documentation.git book_hosted + cd book_hosted + rsync -a --delete ../_book/ $VERSION/ + git add --all * + git commit -m "Update the book `date`" || true + git push -q origin master - - From 3f3f93f204354ec70ae053ea8a6aad67609c70f7 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 15 Jun 2020 14:55:45 +0530 Subject: [PATCH 1097/2289] Update book.yml --- .github/workflows/book.yml | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index e4ab84f7ea8..aa7d859fd09 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -42,12 +42,15 @@ jobs: # Destination 
path path: _book/ # optional - name: Commit changes + env: + EMAIL: ${{ secrets.EMAIL }} + GH_TOKEN: ${{ secrets.GITHUB_PA}} run: | git clone https://${GITHUB_PA}@github.com/${GH_USER}/pecan-documentation.git book_hosted cd book_hosted - rsync -a --delete ../_book/ $VERSION/ + rsync -a --delete ../_book/ VERSION/ git add --all * git commit -m "Update the book `date`" || true git push -q origin master - + From 6ff92f0d33bc4cb13d76a99a1d3ffd62ea3611b4 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 15 Jun 2020 15:01:45 +0530 Subject: [PATCH 1098/2289] Update book.yml --- .github/workflows/book.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index aa7d859fd09..b91d5d80d0c 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -28,6 +28,9 @@ jobs: checkout-and-deploy: runs-on: ubuntu-latest needs: bookdown + env: + EMAIL: ${{ secrets.EMAIL }} + GH_TOKEN: ${{ secrets.GITHUB_PA}} steps: - name: Checkout documentation repo uses: actions/checkout@v2 @@ -42,9 +45,6 @@ jobs: # Destination path path: _book/ # optional - name: Commit changes - env: - EMAIL: ${{ secrets.EMAIL }} - GH_TOKEN: ${{ secrets.GITHUB_PA}} run: | git clone https://${GITHUB_PA}@github.com/${GH_USER}/pecan-documentation.git book_hosted cd book_hosted From 907d9e48be811299359471c3b865d923120bd1ad Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 15 Jun 2020 16:19:39 +0530 Subject: [PATCH 1099/2289] Update book.yml --- .github/workflows/book.yml | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index b91d5d80d0c..8ad0137fbe9 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -29,8 +29,7 @@ jobs: runs-on: ubuntu-latest needs: bookdown env: - EMAIL: ${{ secrets.EMAIL }} - GH_TOKEN: ${{ secrets.GITHUB_PA}} + EMAIL: ${{ secrets.EMAIL }} steps: - name: Checkout documentation repo uses: actions/checkout@v2 @@ -44,7 +43,10 @@ jobs: name: _book # optional # Destination path path: _book/ # optional - - name: Commit changes + - name: authentication + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + - name: commit changes run: | git clone https://${GITHUB_PA}@github.com/${GH_USER}/pecan-documentation.git book_hosted cd book_hosted @@ -52,5 +54,6 @@ jobs: git add --all * git commit -m "Update the book `date`" || true git push -q origin master - + + From c3d9944f7cf9681bdc58f6d269c09d333da8f98b Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Mon, 15 Jun 2020 16:26:22 +0530 Subject: [PATCH 1100/2289] Update book.yml --- .github/workflows/book.yml | 2 -- 1 file changed, 2 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 8ad0137fbe9..0a6826453c0 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -43,8 +43,6 @@ jobs: name: _book # optional # Destination path path: _book/ # optional - - name: authentication - with: repo-token: ${{ secrets.GITHUB_TOKEN }} - name: commit changes run: | From 3a24ca1121bfc7775dafe9ad2c6465321c6e5ad4 Mon Sep 17 00:00:00 2001 From: mccabete Date: Mon, 15 Jun 2020 09:29:50 -0400 Subject: [PATCH 1101/2289] Improve checking for output variable names --- models/ed/R/model2netcdf.ED2.R | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git 
a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 9b1a7047ed6..c569a394c6d 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -280,9 +280,19 @@ read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){ } } - CheckED2Version <- function(nc) { + CheckED2Variables <- function(nc) { + vars_detected <- NULL + if ("FMEAN_BDEAD_PY" %in% names(nc$var)) { - return("Git") + vars_detected <- c(vars_detected,"FMEAN_BDEAD_PY") + return("Contains_FMEAN") + } + if ("FMEAN_SOIL_TEMP_PY" %in% names(nc$var)) { + vars_detected <- c(vars_detected, "FMEAN_SOIL_TEMP_PY") + return("Contains_FMEAN") + } + if(!is.null(vars_detected)){ + PEcAn.logger::logger.warn(paste("Found variable(s): ", paste(vars_detected, collapse = " "), ", now processing FMEAN* named variables. Note that varible naming conventions may change with ED2 version.")) } } @@ -324,8 +334,8 @@ read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){ slzdata <- array(c(-2, -1.5, -1, -0.8, -0.6, -0.4, -0.2, -0.1, -0.05)) } - ## Check for which version of ED2 we are using. - ED2vc <- CheckED2Version(ncT) + ## Check for what naming convention of ED2 vars we are using. May change with ED2 version. + ED2vc <- CheckED2Variables(ncT) ## store for later use, will only use last data dz <- diff(slzdata) From 9b4b2f01d4b42a7b51ae987dba6f75ee78744477 Mon Sep 17 00:00:00 2001 From: Tess McCabe Date: Mon, 15 Jun 2020 10:14:56 -0400 Subject: [PATCH 1102/2289] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index df47224b858..6184d6666dd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -23,6 +23,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Update pecan/depends docker image to have latest Roxygen and devtools. - Update ED docker build, will now build version 2.2.0 and git - Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625 +- model2netcdf.ED2 no longer detecting which varibles names `-T-` files have based on ED2 version (#2623) ### Changed From d098bc36bc173bda507c89e1c44437427848b475 Mon Sep 17 00:00:00 2001 From: mccabete Date: Mon, 15 Jun 2020 13:12:46 -0400 Subject: [PATCH 1103/2289] fix if statments so warning will be triggered --- models/ed/R/model2netcdf.ED2.R | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index c569a394c6d..3170d9b8457 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -282,18 +282,21 @@ read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){ CheckED2Variables <- function(nc) { vars_detected <- NULL + name_convention <- NULL if ("FMEAN_BDEAD_PY" %in% names(nc$var)) { vars_detected <- c(vars_detected,"FMEAN_BDEAD_PY") - return("Contains_FMEAN") + name_convention <- "Contains_FMEAN" } if ("FMEAN_SOIL_TEMP_PY" %in% names(nc$var)) { vars_detected <- c(vars_detected, "FMEAN_SOIL_TEMP_PY") - return("Contains_FMEAN") + name_convention <- "Contains_FMEAN" } if(!is.null(vars_detected)){ PEcAn.logger::logger.warn(paste("Found variable(s): ", paste(vars_detected, collapse = " "), ", now processing FMEAN* named variables. 
     }
+
+    return(name_convention)
   }

   # note that there is always one Tower file per year

From b1c3f939bc38338a4e23501796ab04762907e806 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Tue, 16 Jun 2020 16:44:12 +0530
Subject: [PATCH 1104/2289] Sentinel 2 NDVI script for remote data module
 (#2634)

* added satellitetools

* added s2ndvi

* added test file

* removed satellitetool's old functions

* Apply suggestions from code review (fixes in docs)

Co-authored-by: istfer

* added example run and dependencies required

* moved test file to satellitetools

* updated docs

* minor fixes

* removed pycache

Co-authored-by: istfer
Co-authored-by: Michael Dietze
---
 modules/data.remote/inst/s2ndvi.py            |  83 ++
 .../data.remote/inst/satellitetools/gee.py    | 716 ++++++++++++++++++
 .../inst/satellitetools/test.geojson          |  38 +
 3 files changed, 837 insertions(+)
 create mode 100644 modules/data.remote/inst/s2ndvi.py
 create mode 100755 modules/data.remote/inst/satellitetools/gee.py
 create mode 100644 modules/data.remote/inst/satellitetools/test.geojson

diff --git a/modules/data.remote/inst/s2ndvi.py b/modules/data.remote/inst/s2ndvi.py
new file mode 100644
index 00000000000..b98affd99ee
--- /dev/null
+++ b/modules/data.remote/inst/s2ndvi.py
@@ -0,0 +1,83 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Module to calculate NDVI from Sentinel 2 data, based on Satellitetools by Olli Nevalainen.
+This will be used to create automated workflows for the remote data module.
+
+Please use Python 3 for this module.
+
+Warning: Point coordinates as input have currently not been implemented.
+
+Author: Ayush Prasad
+"""
+
+from satellitetools import gee
+import geopandas as gpd
+import os
+
+
+def s2_ndvi(geofile, outdir, start, end, qi_threshold):
+    """
+    Downloads Sentinel 2 data, calculates NDVI and saves it in a netCDF file at the specified location.
+
+    Parameters
+    ----------
+
+    geofile (str) -- path to the file containing the coordinates of the AOI, currently tested with geojson.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    qi_threshold (float) -- From satellitetools: Threshold value to filter images based on the used qi filter. The qi filter holds labels of classes whose percentages within the AOI are summed. If the sum is larger than the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved
+
+
+    Returns
+    -------
+    Nothing:
+    output netCDF is saved in the specified directory.
+
+    Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray
+
+    To test this function, please run the following code; a test file is included.
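+
+    Note: a prerequisite assumed here (not stated in the original file) is that
+    the earthengine-api has been authenticated once on this machine, e.g. with
+    the `earthengine authenticate` CLI command, since gee.py calls
+    ee.Initialize() at import time.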
+
+    s2_ndvi(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1)
+
+    """
+
+    # read in the input file containing coordinates
+    df = gpd.read_file(geofile)
+
+    request = gee.S2RequestParams(start, end)
+
+    # filter area of interest from the coordinates in the input file
+    area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0])
+
+    # calculate qi attribute for the AOI
+    gee.ee_get_s2_quality_info(area, request)
+
+    # get the final data
+    gee.ee_get_s2_data(area, request, qi_threshold=qi_threshold)
+
+    # convert dataframe to an xarray dataset, used later for converting to netCDF
+    gee.s2_data_to_xarray(area, request)
+
+    # calculate NDVI for the selected area
+    area.data = gee.compute_ndvi(area.data)
+
+    timeseries = {}
+    timeseries_variable = ["ndvi"]
+
+    # if the specified output directory does not exist, create it.
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    # create a timeseries and save the netCDF file
+    area.data.to_netcdf(os.path.join(outdir, area.name + ".nc"))
+    timeseries[area.name] = gee.xr_dataset_to_timeseries(area.data, timeseries_variable)
diff --git a/modules/data.remote/inst/satellitetools/gee.py b/modules/data.remote/inst/satellitetools/gee.py
new file mode 100755
index 00000000000..cb74eb74d5d
--- /dev/null
+++ b/modules/data.remote/inst/satellitetools/gee.py
@@ -0,0 +1,716 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Feb 6 15:24:12 2020
+
+Module to retrieve Sentinel-2 data from Google Earth Engine (GEE).
+Warning: the data is currently retrieved with 10m resolution (scale=10), so
+the 20m resolution bands are resampled.
+TODO: Add option for specifying the request spatial resolution.
+
+@author: Olli Nevalainen (olli.nevalainen@fmi.fi),
+         Finnish Meteorological Institute
+"""
+import sys
+import os
+import ee
+import datetime
+import pandas as pd
+import geopandas as gpd
+import numpy as np
+import xarray as xr
+from functools import reduce
+
+ee.Initialize()
+
+
+NO_DATA = -99999
+S2_REFL_TRANS = 10000
+# ----------------- Sentinel-2 -------------------------------------
+s2_qi_labels = ['NODATA',
+                'SATURATED_DEFECTIVE',
+                'DARK_FEATURE_SHADOW',
+                'CLOUD_SHADOW',
+                'VEGETATION',
+                'NOT_VEGETATED',
+                'WATER',
+                'UNCLASSIFIED',
+                'CLOUD_MEDIUM_PROBA',
+                'CLOUD_HIGH_PROBA',
+                'THIN_CIRRUS',
+                'SNOW_ICE']
+
+s2_filter1 = ['NODATA',
+              'SATURATED_DEFECTIVE',
+              'CLOUD_SHADOW',
+              'UNCLASSIFIED',
+              'CLOUD_MEDIUM_PROBA',
+              'CLOUD_HIGH_PROBA',
+              'THIN_CIRRUS',
+              'SNOW_ICE']
+
+
+class S2RequestParams():
+    """S2 data request parameters.
+
+    Attributes
+    ----------
+    datestart : str
+        Starting date for data request in form "2019-01-01".
+    dateend : str
+        Ending date for data request in form "2019-12-31".
+    bands : list, optional
+        List of strings with band name.
+        The default is ['B3', 'B4', 'B5',
+        'B6', 'B7', 'B8A', 'B11', 'B12'].
+    """
+
+    def __init__(self, datestart, dateend, bands=None):
+        """.
+
+        Parameters
+        ----------
+        datestart : str
+            Starting date for data request in form "2019-01-01".
+        dateend : str
+            Ending date for data request in form "2019-12-31".
+        bands : list, optional
+            List of strings with band name.
+            The default is ['B3', 'B4', 'B5',
+            'B6', 'B7', 'B8A', 'B11', 'B12'].
+
+        Returns
+        -------
+        None.
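+
+        Example (an illustrative sketch, simply exercising the constructor
+        defined above; the band subset chosen is arbitrary):
+
+        >>> req = S2RequestParams("2019-01-01", "2019-12-31",
+        ...                       bands=["B4", "B8A"])
+        >>> req.bands
+        ['B4', 'B8A']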
+
+        """
+        default_bands = ['B3', 'B4', 'B5', 'B6', 'B7', 'B8A', 'B11', 'B12']
+
+        self.datestart = datestart
+        self.dateend = dateend
+        self.bands = bands if bands else default_bands
+
+
+class AOI():
+    """Area of interest for area info and data.
+
+    Attributes
+    ----------
+    name : str
+        Name of the area.
+    geometry : str
+        Geometry of the area of interest e.g. from geopandas.
+        Currently only polygons tested. The default is None.
+    coordinate_list : list, optional
+        List of coordinates of a polygon
+        (loop should be closed). Computed from geometry if not
+        provided. The default is None.
+    tile : str, optional
+        Tile id as string for the data. Used to keep the data in the
+        same crs because an area can be in multiple tiles with
+        different crs. The default is None.
+    qi : pandas dataframe
+        Dataframe with quality information about available imagery for the AOI.
+        qi is empty at init and can be computed with the
+        ee_get_s2_quality_info function.
+    data : pandas dataframe or xarray
+        Dataframe holding data retrieved from GEE. data is empty at init and
+        can be computed with the ee_get_s2_data function, then converted to
+        xarray using the s2_data_to_xarray function.
+
+    Methods
+    -------
+    __init__
+    """
+
+    def __init__(self, name, geometry=None, coordinate_list=None, tile=None):
+        """.
+
+        Parameters
+        ----------
+        name : str
+            Name of the area.
+        geometry : geometry in wkt, optional
+            Geometry of the area of interest e.g. from geopandas.
+            Currently only polygons tested. The default is None.
+        coordinate_list : list, optional
+            List of coordinates of a polygon
+            (loop should be closed). Computed from geometry if not
+            provided. The default is None.
+        tile : str, optional
+            Tile id as string for the data. Used to keep the data in the
+            same crs because an area can be in multiple tiles with
+            different crs. The default is None.
+
+        Returns
+        -------
+        None.
+
+        """
+        if not geometry and not coordinate_list:
+            sys.exit("AOI has to get either geometry or coordinates as list!")
+        elif geometry and not coordinate_list:
+            coordinate_list = list(geometry.exterior.coords)
+        elif coordinate_list and not geometry:
+            geometry = None
+
+        self.name = name
+        self.geometry = geometry
+        self.coordinate_list = coordinate_list
+        self.qi = None
+        self.data = None
+        self.tile = tile
+
+
+def ee_get_s2_quality_info(AOIs, req_params):
+    """Get S2 quality information from GEE.
+
+    Parameters
+    ----------
+    AOIs : list or AOI instance
+        List of AOI instances or a single AOI instance. If multiple AOIs are
+        provided, the computation on the GEE server is parallelized.
+        If too many areas with a long time range are provided, the user might
+        hit GEE memory limits; in that case, call this function
+        sequentially for each AOI.
+    req_params : S2RequestParams instance
+        S2RequestParams instance with request details.
+
+    Returns
+    -------
+    Nothing:
+        Computes qi attribute for the given AOI instances.
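+
+    Example (illustrative; mirrors the call made in s2ndvi.py, where `area`
+    is an AOI instance and `request` an S2RequestParams instance):
+
+    >>> ee_get_s2_quality_info(area, request)
+    >>> area.qi.head()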
+
+    """
+    # if single AOI instance, make a list
+    if isinstance(AOIs, AOI):
+        AOIs = list([AOIs])
+
+    features = [ee.Feature(
+        ee.Geometry.Polygon(a.coordinate_list),
+        {'name': a.name}) for a in AOIs]
+    feature_collection = ee.FeatureCollection(features)
+
+    def ee_get_s2_quality_info_feature(feature):
+
+        area = feature.geometry()
+        image_collection = ee.ImageCollection("COPERNICUS/S2_SR") \
+            .filterBounds(area) \
+            .filterDate(req_params.datestart, req_params.dateend) \
+            .select(['SCL'])
+
+        def ee_get_s2_quality_info_image(img):
+            productid = img.get('PRODUCT_ID')
+            assetid = img.id()
+            tileid = img.get('MGRS_TILE')
+            system_index = img.get('system:index')
+            proj = img.select("SCL").projection()
+
+            # apply reducer to list
+            img = img.reduceRegion(
+                reducer=ee.Reducer.toList(),
+                geometry=area,
+                maxPixels=1e8,
+                scale=10)
+
+            # get data into arrays
+            classdata = ee.Array(
+                ee.Algorithms.If(img.get("SCL"),
+                                 ee.Array(img.get("SCL")),
+                                 ee.Array([0])))
+
+            totalcount = classdata.length()
+            classpercentages = {
+                key:
+                    classdata.eq(i).reduce(ee.Reducer.sum(), [0])
+                    .divide(totalcount).get([0])
+                for i, key in enumerate(s2_qi_labels)}
+
+            tmpfeature = ee.Feature(ee.Geometry.Point([0, 0])) \
+                .set('productid', productid) \
+                .set('system_index', system_index) \
+                .set('assetid', assetid) \
+                .set('tileid', tileid) \
+                .set('projection', proj) \
+                .set(classpercentages)
+            return tmpfeature
+
+        s2_qi_image_collection = image_collection.map(
+            ee_get_s2_quality_info_image)
+
+        return feature \
+            .set('productid', s2_qi_image_collection
+                 .aggregate_array('productid')) \
+            .set('system_index', s2_qi_image_collection
+                 .aggregate_array('system_index')) \
+            .set('assetid', s2_qi_image_collection
+                 .aggregate_array('assetid')) \
+            .set('tileid', s2_qi_image_collection
+                 .aggregate_array('tileid')) \
+            .set('projection', s2_qi_image_collection
+                 .aggregate_array('projection')) \
+            .set({key: s2_qi_image_collection
+                  .aggregate_array(key) for key in s2_qi_labels})
+
+    s2_qi_feature_collection = feature_collection.map(
+        ee_get_s2_quality_info_feature).getInfo()
+
+    s2_qi = s2_feature_collection_to_dataframes(s2_qi_feature_collection)
+
+    for a in AOIs:
+        name = a.name
+        a.qi = s2_qi[name]
+
+
+def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1):
+    """Get S2 data (level L2A, bottom of atmosphere data) from GEE.
+
+    Warning: the data is currently retrieved with 10m resolution (scale=10), so
+    the 20m resolution bands are resampled.
+    TODO: Add option for specifying the request spatial resolution.
+
+    Parameters
+    ----------
+    AOIs : list or AOI instance
+        List of AOI instances or a single AOI instance. If multiple AOIs are
+        provided, the computation on the GEE server is parallelized.
+        If too many areas with a long time range are provided, the user might
+        hit GEE memory limits; in that case, call this function
+        sequentially for each AOI. AOIs should have the qi attribute computed first.
+    req_params : S2RequestParams instance
+        S2RequestParams instance with request details.
+    qi_threshold : float
+        Threshold value to filter images based on the used qi filter.
+        The qi filter holds labels of classes whose percentages within the AOI
+        are summed. If the sum is larger than the qi_threshold, data will not be
+        retrieved for that date/image. The default is 1, meaning all data is
+        retrieved.
+    qi_filter : list
+        List of strings with class labels (of unwanted classes) used to compute the qi value,
+        see qi_threshold.
The default is s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE']. + + Returns + ------- + Nothing: + Computes data attribute for the given AOI instances. + + """ + datestart = req_params.datestart + dateend = req_params. dateend + bands = req_params.bands + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [] + for a in AOIs: + filtered_qi = filter_s2_qi_dataframe(a.qi, qi_threshold, qi_filter) + if len(filtered_qi) == 0: + print('No observations to retrieve for area %s' % a.name) + continue + + if a.tile is None: + min_tile = min(filtered_qi['tileid'].values) + filtered_qi = (filtered_qi[ + filtered_qi['tileid'] == min_tile]) + a.tile = min_tile + else: + filtered_qi = (filtered_qi[ + filtered_qi['tileid'] == a.tile]) + + full_assetids = "COPERNICUS/S2_SR/" + filtered_qi['assetid'] + image_list = [ee.Image(asset_id) for asset_id in full_assetids] + crs = filtered_qi['projection'].values[0]["crs"] + feature = ee.Feature(ee.Geometry.Polygon(a.coordinate_list), + {'name': a.name, + 'image_list': image_list}) + + features.append(feature) + + if len(features) == 0: + print('No data to be retrieved!') + return None + + feature_collection = ee.FeatureCollection(features) + + def ee_get_s2_data_feature(feature): + geom = feature.geometry(0.01, crs) + image_collection = \ + ee.ImageCollection.fromImages(feature.get('image_list')) \ + .filterBounds(geom) \ + .filterDate(datestart, dateend) \ + .select(bands + ['SCL']) + + def ee_get_s2_data_image(img): + # img = img.clip(geom) + productid = img.get('PRODUCT_ID') + assetid = img.id() + tileid = img.get('MGRS_TILE') + system_index = img.get('system:index') + proj = img.select(bands[0]).projection() + sun_azimuth = img.get('MEAN_SOLAR_AZIMUTH_ANGLE') + sun_zenith = img.get('MEAN_SOLAR_ZENITH_ANGLE') + view_azimuth = ee.Array( + [img.get('MEAN_INCIDENCE_AZIMUTH_ANGLE_%s' % b) + for b in bands]) \ + .reduce(ee.Reducer.mean(), [0]).get([0]) + view_zenith = ee.Array( + [img.get('MEAN_INCIDENCE_ZENITH_ANGLE_%s' % b) + for b in bands]) \ + .reduce(ee.Reducer.mean(), [0]).get([0]) + + img = img.resample('bilinear') \ + .reproject(crs=crs, scale=10) + + # get the lat lon and add the ndvi + image_grid = ee.Image.pixelCoordinates( + ee.Projection(crs)) \ + .addBands([img.select(b) for b in bands + ['SCL']]) + + # apply reducer to list + image_grid = image_grid.reduceRegion( + reducer=ee.Reducer.toList(), + geometry=geom, + maxPixels=1e8, + scale=10) + + # get data into arrays + x_coords = ee.Array(image_grid.get("x")) + y_coords = ee.Array(image_grid.get("y")) + band_data = {b: ee.Array(image_grid.get("%s" % b)) for b in bands} + + scl_data = ee.Array(image_grid.get("SCL")) + + # perform LAI et al. computation possibly here! 
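+            # A possible sketch of such a computation (not implemented here):
+            # NDVI derived server-side from the ee.Array band data, assuming
+            # 'B4' and 'B8A' are among the requested bands, e.g.
+            #   ndvi_data = band_data['B8A'].subtract(band_data['B4']) \
+            #       .divide(band_data['B8A'].add(band_data['B4']))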
+
+            tmpfeature = ee.Feature(ee.Geometry.Point([0, 0])) \
+                .set('productid', productid) \
+                .set('system_index', system_index) \
+                .set('assetid', assetid) \
+                .set('tileid', tileid) \
+                .set('projection', proj) \
+                .set('sun_zenith', sun_zenith) \
+                .set('sun_azimuth', sun_azimuth) \
+                .set('view_zenith', view_zenith) \
+                .set('view_azimuth', view_azimuth) \
+                .set('x_coords', x_coords) \
+                .set('y_coords', y_coords) \
+                .set('SCL', scl_data) \
+                .set(band_data)
+            return tmpfeature
+
+        s2_data_feature = image_collection.map(ee_get_s2_data_image)
+
+        return feature \
+            .set('productid', s2_data_feature
+                 .aggregate_array('productid')) \
+            .set('system_index', s2_data_feature
+                 .aggregate_array('system_index')) \
+            .set('assetid', s2_data_feature
+                 .aggregate_array('assetid')) \
+            .set('tileid', s2_data_feature
+                 .aggregate_array('tileid')) \
+            .set('projection', s2_data_feature
+                 .aggregate_array('projection')) \
+            .set('sun_zenith', s2_data_feature
+                 .aggregate_array('sun_zenith')) \
+            .set('sun_azimuth', s2_data_feature
+                 .aggregate_array('sun_azimuth')) \
+            .set('view_zenith', s2_data_feature
+                 .aggregate_array('view_zenith')) \
+            .set('view_azimuth', s2_data_feature
+                 .aggregate_array('view_azimuth')) \
+            .set('x_coords', s2_data_feature
+                 .aggregate_array('x_coords')) \
+            .set('y_coords', s2_data_feature
+                 .aggregate_array('y_coords')) \
+            .set('SCL', s2_data_feature
+                 .aggregate_array('SCL')) \
+            .set({b: s2_data_feature
+                  .aggregate_array(b) for b in bands})
+
+    s2_data_feature_collection = feature_collection.map(
+        ee_get_s2_data_feature).getInfo()
+
+    s2_data = s2_feature_collection_to_dataframes(s2_data_feature_collection)
+
+    for a in AOIs:
+        name = a.name
+        a.data = s2_data[name]
+
+
+def filter_s2_qi_dataframe(s2_qi_dataframe, qi_thresh, s2_filter=s2_filter1):
+    """Filter qi dataframe.
+
+    Parameters
+    ----------
+    s2_qi_dataframe : pandas dataframe
+        S2 quality information dataframe (AOI instance qi attribute).
+    qi_thresh : float
+        Threshold value to filter images based on the used qi filter.
+        The qi filter holds labels of classes whose percentages within the AOI
+        are summed. If the sum is larger than the qi_thresh, data will not be
+        retrieved for that date/image. The default is 1, meaning all data is
+        retrieved.
+    s2_filter : list
+        List of strings with class labels (of unwanted classes) used to compute the qi value,
+        see qi_thresh. The default is s2_filter1 = ['NODATA',
+                                                    'SATURATED_DEFECTIVE',
+                                                    'CLOUD_SHADOW',
+                                                    'UNCLASSIFIED',
+                                                    'CLOUD_MEDIUM_PROBA',
+                                                    'CLOUD_HIGH_PROBA',
+                                                    'THIN_CIRRUS',
+                                                    'SNOW_ICE'].
+
+    Returns
+    -------
+    filtered_s2_qi_df : pandas dataframe
+        Filtered dataframe.
+
+    """
+    filtered_s2_qi_df = s2_qi_dataframe.loc[
+        s2_qi_dataframe[s2_filter1].sum(axis=1) < qi_thresh]
+
+    return filtered_s2_qi_df
+
+
+def s2_feature_collection_to_dataframes(s2_feature_collection):
+    """Convert feature collection dict from GEE to pandas dataframe.
+
+    Parameters
+    ----------
+    s2_feature_collection : dict
+        Dictionary returned by GEE.
+
+    Returns
+    -------
+    dataframes : pandas dataframe
+        GEE dictionary converted to pandas dataframe.
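+
+    Example (illustrative; `fc_dict` stands for the dictionary returned by a
+    feature_collection .getInfo() call, and "Reykjavik" is the feature name
+    used in the bundled test.geojson):
+
+    >>> dfs = s2_feature_collection_to_dataframes(fc_dict)
+    >>> dfs["Reykjavik"]["Date"]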
+
+    """
+    dataframes = {}
+
+    for featnum in range(len(s2_feature_collection['features'])):
+        tmp_dict = {}
+        key = s2_feature_collection['features'][featnum]['properties']['name']
+        productid = (s2_feature_collection
+                     ['features']
+                     [featnum]
+                     ['properties']
+                     ['productid'])
+
+        dates = [datetime.datetime.strptime(
+            d.split('_')[2], '%Y%m%dT%H%M%S') for d in productid]
+
+        tmp_dict.update({'Date': dates})  # , 'crs': crs}
+        properties = s2_feature_collection['features'][featnum]['properties']
+        for prop, data in properties.items():
+            if prop not in ['Date']:  # 'crs', 'projection'
+                tmp_dict.update({prop: data})
+        dataframes[key] = pd.DataFrame(tmp_dict)
+    return dataframes
+
+
+def compute_ndvi(dataset):
+    """Compute NDVI.
+
+    Parameters
+    ----------
+    dataset : xarray dataset
+
+    Returns
+    -------
+    xarray dataset
+        Adds 'ndvi' xr array to xr dataset.
+
+    """
+    b4 = dataset.band_data.sel(band='B4')
+    b8 = dataset.band_data.sel(band='B8A')
+    ndvi = (b8 - b4) / (b8 + b4)
+    return dataset.assign({'ndvi': ndvi})
+
+
+def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True):
+    """Convert AOI.data dataframe to xarray dataset.
+
+    Parameters
+    ----------
+    aoi : AOI instance
+        AOI instance.
+    request_params : S2RequestParams
+        S2RequestParams.
+    convert_to_reflectance : boolean, optional
+        Convert S2 data from GEE (integers) to reflectances (floats),
+        i.e., divide by 10000.
+        The default is True.
+
+    Returns
+    -------
+    Nothing.
+        Converts the data attribute dataframe to xarray Dataset.
+        xarray is better for handling multiband data. It also has an
+        implementation for saving the data in NetCDF format.
+
+    """
+    # check that all bands have full data!
+    datalengths = [aoi.data[b].apply(
+        lambda d: len(d)) == len(aoi.data.iloc[0]['x_coords'])
+        for b in request_params.bands]
+    consistent_data = reduce(lambda a, b: a & b, datalengths)
+    aoi.data = aoi.data[consistent_data]
+
+    # 2D data
+    bands = request_params.bands
+
+    # 1D data
+    list_vars = ['assetid', 'productid', 'sun_azimuth',
+                 'sun_zenith', 'system_index',
+                 'view_azimuth', 'view_zenith']
+
+    # crs from projection
+    crs = aoi.data['projection'].values[0]['crs']
+    tileid = aoi.data['tileid'].values[0]
+    # original number of pixels requested (pixels inside AOI)
+    aoi_pixels = len(aoi.data.iloc[0]['x_coords'])
+
+    # transform 2D data to arrays
+    for b in bands:
+
+        aoi.data[b] = aoi.data.apply(
+            lambda row: s2_lists_to_array(
+                row['x_coords'], row['y_coords'], row[b],
+                convert_to_reflectance=convert_to_reflectance), axis=1)
+
+    aoi.data['SCL'] = aoi.data.apply(
+        lambda row: s2_lists_to_array(
+            row['x_coords'], row['y_coords'], row['SCL'],
+            convert_to_reflectance=False), axis=1)
+
+    array = aoi.data[bands].values
+
+    # this will stack the array to ndarray with
+    # dimension order = (time, band, x, y)
+    narray = np.stack(
+        [np.stack(array[:, b], axis=2) for b in range(len(bands))],
+        axis=2).transpose()  # .swapaxes(2, 3)
+
+    scl_array = np.stack(aoi.data['SCL'].values, axis=2).transpose()
+
+    coords = {'time': aoi.data['Date'].values,
+              'band': bands,
+              'x': np.unique(aoi.data.iloc[0]['x_coords']),
+              'y': np.unique(aoi.data.iloc[0]['y_coords'])
+              }
+
+    dataset_dict = {'band_data': (['time', 'band', 'x', 'y'], narray),
+                    'SCL': (['time', 'x', 'y'], scl_array)}
+    var_dict = {var: (['time'], aoi.data[var]) for var in list_vars}
+    dataset_dict.update(var_dict)
+
+    ds = xr.Dataset(dataset_dict,
+                    coords=coords,
+                    attrs={'name': aoi.name,
+                           'crs': crs,
+                           'tile_id': tileid,
+                           'aoi_geometry': aoi.geometry.to_wkt(),
+                           'aoi_pixels': aoi_pixels})
+    aoi.data = ds
+
+
+def s2_lists_to_array(x_coords, y_coords, data, convert_to_reflectance=True):
+    """Convert 1D lists of coordinates and corresponding values to 2D array.
+
+    Parameters
+    ----------
+    x_coords : list
+        List of x-coordinates.
+    y_coords : list
+        List of y-coordinates.
+    data : list
+        List of data values corresponding to the coordinates.
+    convert_to_reflectance : boolean, optional
+        Convert S2 data from GEE (integers) to reflectances (floats),
+        i.e., divide by 10000.
+        The default is True.
+
+    Returns
+    -------
+    arr : 2D numpy array
+        Return 2D numpy array.
+
+    """
+    # get the unique coordinates
+    uniqueYs = np.unique(y_coords)
+    uniqueXs = np.unique(x_coords)
+
+    # get number of columns and rows from coordinates
+    ncols = len(uniqueXs)
+    nrows = len(uniqueYs)
+
+    # determine pixelsizes
+    # ys = uniqueYs[1] - uniqueYs[0]
+    # xs = uniqueXs[1] - uniqueXs[0]
+
+    y_vals, y_idx = np.unique(y_coords, return_inverse=True)
+    x_vals, x_idx = np.unique(x_coords, return_inverse=True)
+    if convert_to_reflectance:
+        arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64)
+        arr.fill(np.nan)
+        arr[y_idx, x_idx] = np.array(data, dtype=np.float64) / S2_REFL_TRANS
+    else:
+        arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.int32)
+        arr.fill(NO_DATA)  # or whatever your desired missing data flag is
+        arr[y_idx, x_idx] = data
+    arr = np.flipud(arr)
+    return arr
+
+
+def xr_dataset_to_timeseries(xr_dataset, variables):
+    """Compute timeseries dataframe from xr dataset.
+
+    Parameters
+    ----------
+    xr_dataset : xarray dataset
+
+    variables : list
+        list of variable names as string.
+
+    Returns
+    -------
+    df : pandas dataframe
+        Pandas dataframe with mean, std, se and percentage of NaNs inside AOI.
+
+    """
+    df = pd.DataFrame({'Date': pd.to_datetime(xr_dataset.time.values)})
+
+    for var in variables:
+        df[var] = xr_dataset[var].mean(dim=['x', 'y'])
+        df[var+'_std'] = xr_dataset[var].std(dim=['x', 'y'])
+
+        # NaNs occur due to missing data from the 1D to 2D array conversion
+        # (pixels outside the polygon);
+        # with the snap algorithm, NaNs occur due to input/output out-of-bounds
+        # checking.
+        # TODO: flagging with the snap biophys algorithm or some other solution to
+        # check which NaNs are from the snap algorithm and which from the 1D to 2D transformation
+        nans = np.isnan(xr_dataset[var]).sum(dim=['x', 'y'])
+        sample_n = len(xr_dataset[var].x) * len(xr_dataset[var].y) - nans
+
+        # compute how many of the nans are inside aoi (due to snap algorithm)
+        out_of_aoi_pixels = (len(xr_dataset[var].x) * len(xr_dataset[var].y)
+                             - xr_dataset.aoi_pixels)
+        nans_inside_aoi = nans - out_of_aoi_pixels
+        df['aoi_nan_percentage'] = nans_inside_aoi / xr_dataset.aoi_pixels
+
+        df[var+'_se'] = df[var+'_std'] / np.sqrt(sample_n)
+
+    return df
diff --git a/modules/data.remote/inst/satellitetools/test.geojson b/modules/data.remote/inst/satellitetools/test.geojson
new file mode 100644
index 00000000000..9a890595d4c
--- /dev/null
+++ b/modules/data.remote/inst/satellitetools/test.geojson
@@ -0,0 +1,38 @@
+{
+  "type": "FeatureCollection",
+  "features": [
+    {
+      "type": "Feature",
+      "properties": {
+        "name": "Reykjavik"
+      },
+      "geometry": {
+        "type": "Polygon",
+        "coordinates": [
+          [
+            [
+              -21.788935661315918,
+              64.04460250271562
+            ],
+            [
+              -21.786317825317383,
+              64.04460250271562
+            ],
+            [
+              -21.786317825317383,
+              64.04537258754581
+            ],
+            [
+              -21.788935661315918,
+              64.04537258754581
+            ],
+            [
+              -21.788935661315918,
+              64.04460250271562
+            ]
+          ]
+        ]
+      }
+    }
+  ]
+}

From c5d7169f0506df54287866ec65af443f7b7abe46 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Tue, 16 Jun 2020 17:34:26 +0530
Subject: [PATCH 1105/2289] added SNAP biophysical processor

Co-authored-by: Olli Nevalainen
---
 .../inst/satellitetools/biophys_xarray.py | 408 ++++++++++++++++++
 1 file changed, 408 insertions(+)
 create mode 100644 modules/data.remote/inst/satellitetools/biophys_xarray.py

diff --git a/modules/data.remote/inst/satellitetools/biophys_xarray.py b/modules/data.remote/inst/satellitetools/biophys_xarray.py
new file mode 100644
index 00000000000..8fb44ab9d4a
--- /dev/null
+++ b/modules/data.remote/inst/satellitetools/biophys_xarray.py
@@ -0,0 +1,408 @@
+# -*- coding: utf-8 -*-
+"""
+Created on Mon May 11 14:34:08 2020
+
+@author: Olli Nevalainen (olli.nevalainen@fmi.fi),
+         Finnish Meteorological Institute
+
+Olli's python implementation of the ESA SNAP s2toolbox biophysical processor and
+computation of vegetation indices.
+See ATBD at https://step.esa.int/docs/extra/ATBD_S2ToolBox_L2B_V1.1.pdf
+And java source code at
+https://github.com/senbox-org/s2tbx/tree/master/s2tbx-biophysical/src/main/java/org/esa/s2tbx/biophysical
+
+Caveats
+Currently changes out-of-bounds inputs and outputs to nan (or min or max value
+if the output is within tolerance). Maybe output flagging information as well
+(i.e. different flags for input and output out of bounds).
+
+Convex hull input checking currently disabled. It's computationally slow and
+it is not clear that it adds benefit. Better to filter out bad data based on
+L2A quality info/classification and hope averaging removes some bad pixels.
+"""
+
+import requests
+import io
+import numpy as np
+import xarray as xr
+
+# url to Sentinel 2 Toolbox's auxdata
+# This base_url points towards the original toolbox (not the one created by Olli)
+base_url = "https://raw.githubusercontent.com/senbox-org/s2tbx/master/s2tbx-biophysical/src/main/resources/auxdata/2_1/{}/{}"
+
+
+def get_fromurl(var, pattern):
+    """
+    Fetches the contents of a text file from the base url and stores it in a ndarray.
+
+    Author: Ayush Prasad
+
+    Parameters
+    ----------
+    var (str) -- type of the product, one of FAPAR, FCOVER, LAI, LAI_Cab and LAI_Cw.
+ pattern (str) -- name of the file excluding the initial variable part. + + Returns + ------- + ndarray -- loaded with the contents of the text file. + """ + # attach variable and file name to the base url + res_url = base_url.format(var, str(var) + "%s" % str(pattern)) + # make a GET request to the url to fetch the data. + res_url = requests.get(res_url) + # check the HTTP status code to see if any error has occured. + res_url.raise_for_status() + # store the contents of the url in an in-memory buffer and use it to load the ndarray. + return np.loadtxt(io.BytesIO(res_url.content), delimiter=",") + + +# Read SNAP Biophysical processor neural network parameters +nn_params = {} +for var in ["FAPAR", "FCOVER", "LAI", "LAI_Cab", "LAI_Cw"]: + norm_minmax = get_fromurl(var, "_Normalisation") + denorm_minmax = get_fromurl(var, "_Denormalisation") + layer1_weights = get_fromurl(var, "_Weights_Layer1_Neurons") + layer1_bias = get_fromurl(var, "_Weights_Layer1_Bias").reshape(-1, 1) + layer2_weights = get_fromurl(var, "_Weights_Layer2_Neurons").reshape(1, -1) + layer2_bias = get_fromurl(var, "_Weights_Layer2_Bias").reshape(1, -1) + extreme_cases = get_fromurl(var, "_ExtremeCases") + + if var == "FCOVER": + nn_params[var] = { + "norm_minmax": norm_minmax, + "denorm_minmax": denorm_minmax, + "layer1_weights": layer1_weights, + "layer1_bias": layer1_bias, + "layer2_weights": layer2_weights, + "layer2_bias": layer2_bias, + "extreme_cases": extreme_cases, + } + else: + defdom_min = get_fromurl(var, "_DefinitionDomain_MinMax")[0, :].reshape(-1, 1) + defdom_max = get_fromurl(var, "_DefinitionDomain_MinMax")[1, :].reshape(-1, 1) + defdom_grid = get_fromurl(var, "_DefinitionDomain_Grid") + nn_params[var] = { + "norm_minmax": norm_minmax, + "denorm_minmax": denorm_minmax, + "layer1_weights": layer1_weights, + "layer1_bias": layer1_bias, + "layer2_weights": layer2_weights, + "layer2_bias": layer2_bias, + "defdom_min": defdom_min, + "defdom_max": defdom_max, + "defdom_grid": defdom_grid, + "extreme_cases": extreme_cases, + } + + +def _normalization(x, x_min, x_max): + x_norm = 2 * (x - x_min) / (x_max - x_min) - 1 + return x_norm + + +def _denormalization(y_norm, y_min, y_max): + y = 0.5 * (y_norm + 1) * (y_max - y_min) + return y + + +def _input_ouf_of_range(x, variable): + x_copy = x.copy() + x_bands = x_copy[:8, :] + + # check min max domain + defdom_min = nn_params[variable]["defdom_min"][:, 0].reshape(-1, 1) + defdom_max = nn_params[variable]["defdom_max"][:, 0].reshape(-1, 1) + bad_input_mask = (x_bands < defdom_min) | (x_bands > defdom_max) + bad_vector = np.any(bad_input_mask, axis=0) + x_bands[:, bad_vector] = np.nan + + # convex hull check, currently disabled due to time consumption vs benefit + # gridProject = lambda v: np.floor(10 * (v - defdom_min) / (defdom_max - defdom_min) + 1 ).astype(int) + # x_bands = gridProject(x_bands) + # isInGrid = lambda v: any((v == x).all() for x in nn_params[variable]['defdom_grid']) + # notInGrid = ~np.array([isInGrid(v) for v in x_bands.T]) + # x[:,notInGrid | bad_vector] = np.nan + + x_copy[:, bad_vector] = np.nan + return x_copy + + +def _output_ouf_of_range(output, variable): + new_output = np.copy(output) + tolerance = nn_params[variable]["extreme_cases"][0] + output_min = nn_params[variable]["extreme_cases"][1] + output_max = nn_params[variable]["extreme_cases"][2] + + new_output[output < (output_min + tolerance)] = np.nan + new_output[(output > (output_min + tolerance)) & (output < output_min)] = output_min + new_output[(output < (output_max - tolerance)) & 
(output > output_max)] = output_max + new_output[output > (output_max - tolerance)] = np.nan + return new_output + + +def _compute_variable(x, variable): + + x_norm = np.zeros_like(x) + x = _input_ouf_of_range(x, variable) + x_norm = _normalization( + x, + nn_params[variable]["norm_minmax"][:, 0].reshape(-1, 1), + nn_params[variable]["norm_minmax"][:, 1].reshape(-1, 1), + ) + + out_layer1 = np.tanh( + nn_params[variable]["layer1_weights"].dot(x_norm) + + nn_params[variable]["layer1_bias"] + ) + out_layer2 = ( + nn_params[variable]["layer2_weights"].dot(out_layer1) + + nn_params[variable]["layer2_bias"] + ) + output = _denormalization( + out_layer2, + nn_params[variable]["denorm_minmax"][0], + nn_params[variable]["denorm_minmax"][1], + )[0] + output = _output_ouf_of_range(output, variable) + output = output.reshape(1, np.shape(x)[1]) + return output + + +def _s2_lists_to_pixel_vectors(single_date_dict): + band_list = ["B3", "B4", "B5", "B6", "B7", "B8A", "B11", "B12"] + pixel_vector = np.zeros(shape=(11, len(single_date_dict[band_list[0]]))) + + for i, b in enumerate(band_list): + pixel_vector[i, :] = np.array(single_date_dict[b]) / 10000.0 + + pixel_vector[8, :] = np.cos(np.radians(single_date_dict["view_zenith"])) + pixel_vector[9, :] = np.cos(np.radians(single_date_dict["sun_zenith"])) + pixel_vector[10, :] = np.cos( + np.radians(single_date_dict["sun_azimuth"] - single_date_dict["view_azimuth"]) + ) + return pixel_vector + + +def compute_ci_red_edge(dataset): + """Compute CI_Red_Edge vegetation index. + + Parameters + ---------- + dataset : xarray dataset + + Returns + ------- + xarray dataset + Adds 'ci_red_edge' xr array to xr dataset. + + """ + b5 = dataset.band_data.sel(band="B5") + b7 = dataset.band_data.sel(band="B7") + ci_red_edge = (b7 / b5) - 1 + return dataset.assign({"ci_red_edge": ci_red_edge}) + + +def run_snap_biophys(dataset, variable): + """Compute specified variable using the SNAP algorithm. + + See ATBD at https://step.esa.int/docs/extra/ATBD_S2ToolBox_L2B_V1.1.pdf + + Parameters + ---------- + dataset : xr dataset + xarray dataset. + variable : str + Options 'FAPAR', 'FCOVER', 'LAI', 'LAI_Cab' or 'LAI_Cw' + + Returns + ------- + xarray dataset + Adds the specified variable array to dataset (variable name in + lowercase). 
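+
+    Example (illustrative; assumes `aoi.data` was built with
+    s2_data_to_xarray, so that the required bands and angles are present):
+
+    >>> aoi.data = run_snap_biophys(aoi.data, "LAI")
+    >>> aoi.data.lai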
+ + """ + # generate view angle bands/layers + vz = ( + np.ones_like(dataset.band_data[:, 0, :, :]).T + * np.cos(np.radians(dataset.view_zenith)).values + ) + vz = vz[..., np.newaxis] + vzarr = xr.DataArray( + vz, + coords=[dataset.y, dataset.x, dataset.time, ["view_zenith"]], + dims=["y", "x", "time", "band"], + ) + + sz = ( + np.ones_like(dataset.band_data[:, 0, :, :]).T + * np.cos(np.radians(dataset.sun_zenith)).values + ) + sz = sz[..., np.newaxis] + szarr = xr.DataArray( + sz, + coords=[dataset.y, dataset.x, dataset.time, ["sun_zenith"]], + dims=["y", "x", "time", "band"], + ) + + raz = ( + np.ones_like(dataset.band_data[:, 0, :, :]).T + * np.cos(np.radians(dataset.sun_azimuth - dataset.view_azimuth)).values + ) + raz = raz[..., np.newaxis] + razarr = xr.DataArray( + raz, + coords=[dataset.y, dataset.x, dataset.time, ["relative_azimuth"]], + dims=["y", "x", "time", "band"], + ) + + newarr = xr.concat([dataset.band_data, vzarr, szarr, razarr], dim="band") + newarr = newarr.stack(xy=("x", "y")) + arr = xr.apply_ufunc( + _compute_variable, + newarr, + input_core_dims=[["band", "xy"]], + output_core_dims=[["xy"]], + kwargs={"variable": variable}, + vectorize=True, + ).unstack() + return dataset.assign({variable.lower(): arr}) + + +def estimate_gpp_vi_lue(vi, daily_par, model_name="ci_red_edge_1"): + """Estimate GPP using simple vegetation index based models and PAR. + + This function has not been properly tested (i.e.used for a while) + + Parameters + ---------- + vi : float + Vegetation index values. + daily_par : float + Daily PAR as MJ/s/m². + model_name : str, optional + Name of the model (see biophys_xarray.GPP_LUE_models). + The default is 'ci_red_edge_1'. + + Returns + ------- + gpp : float + Estimated gross primary productivity. + + """ + vi_name = "_".join(model_name.split("_")[:-1]) + gpp = GPP_LUE_models[vi_name][model_name]["model"]( + np.array(vi), np.array(daily_par) + ) + return gpp + + +# GPP estimation models +GPP_LUE_models = { + "ci_red_edge": { + "ci_red_edge_1": { + "model": lambda vi, par: 4.80 * np.log(vi * par * 1e3) - 37.93, + "species": "soybean", + "reference": "Peng & Gitelson, 2012", + }, + "ci_red_edge_2": { + "model": lambda vi, par: 0.31 * (vi * par) - 0.1, + "species": "grass", + "reference": "Huang et al. 
2019", + }, + }, + "ci_green": { + "ci_green_1": { + "model": lambda vi, par: 5.13 * np.log(vi * par * 1e3) - 46.92, + "species": "soybean", + "reference": "Peng & Gitelson, 2012", + }, + "ci_green_2": { + "model": lambda vi, par: 14.7 * np.log(vi * par * 1e3 + 27900.61) - 154, + "species": "maize", + "reference": "Peng & Gitelson, 2012", + }, + }, + "NDVI": { + "NDVI_1": { + "model": lambda vi, par: 2.07 * (vi * par) - 6.19, + "species": "soybean", + "reference": "Gitelson et al., 2012", + }, + "NDVI_2": { + "model": lambda vi, par: 3.11 * (vi * par) - 9.22, + "species": "maize", + "reference": "Gitelson et al., 2012", + }, + "NDVI_3": { + "model": lambda vi, par: ( + -3.26 * 1e-8 * (vi * par * 1e3) ** 2 + + 1.7 * 1e-3 * (vi * par * 1e3) + - 2.17 + ), + "species": "soybean", + "reference": "Peng & Gitelson, 2012", + }, + "NDVI_4": { + "model": lambda vi, par: 1.94e-3 * (vi * par * 1e3) - 2.59, + "species": "maize", + "reference": "Peng & Gitelson, 2012", + }, + }, + "gndvi": { + "gndvi_1": { + "model": lambda vi, par: 2.86 * (vi * par) - 11.9, + "species": "soybean", + "reference": "Gitelson et al., 2012", + }, + "gndvi_2": { + "model": lambda vi, par: 4 * (vi * par) - 15.4, + "species": "maize", + "reference": "Gitelson et al., 2012", + }, + }, + "evi": { + "evi_1": { + "model": lambda vi, par: (2.26 * (vi * par) - 3.73), + "species": "soybean", + "reference": "Peng et al., 2013", + }, + "evi_2": { + "model": lambda vi, par: (3.49 * (vi * par) - 4.92), + "species": "maize", + "reference": "Peng et al., 2013", + }, + }, + "reNDVI": { + "reNDVI_1": { + "model": lambda vi, par: 1.61 * (vi * par) - 1.75, + "species": "mixed", + "reference": "Wolanin et al., 2019", + }, + "reNDVI_2": { + "model": lambda vi, par: ( + -1.19 * 1e-7 * (vi * par * 1e3) ** 2 + + 3 * 1e-3 * (vi * par * 1e3) + - 2.70 + ), + "species": "soybean", + "reference": "Peng & Gitelson, 2012", + }, + "reNDVI_3": { + "model": lambda vi, par: ( + -3.41 * 1e-8 * (vi * par * 1e3) ** 2 + + 2.77 * 1e-3 * (vi * par * 1e3) + - 2.06 + ), + "species": "maize", + "reference": "Peng & Gitelson, 2012", + }, + }, + "fapar": { + "fapar_1": { + "model": lambda vi, par: 1.10 * (vi * par), + "species": "grass", + "reference": "Olli Qvidja test fapar*x*PAR", + } + }, +} From 1921baa7797fd5cd007234ba693a9bcc1b94ff28 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 16 Jun 2020 17:35:39 +0530 Subject: [PATCH 1106/2289] script to calculate LAI from gee --- modules/data.remote/inst/s2lai.py | 80 +++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 modules/data.remote/inst/s2lai.py diff --git a/modules/data.remote/inst/s2lai.py b/modules/data.remote/inst/s2lai.py new file mode 100644 index 00000000000..d83961a9911 --- /dev/null +++ b/modules/data.remote/inst/s2lai.py @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +""" +Calculates LAI from Sentinel 2 data. Uses geeapi to download data and the python implementation of the SNAP toolbox to calculate LAI. + +Requires Python3 + +Warning: Point coordinates as input has currently not been implemented. + +Author: Ayush Prasad +""" + +import satellitetools.biophys_xarray as bio +from satellitetools import gee +import geopandas as gpd +import os + + +def s2_lai(geofile, outdir, start, end, qi_threshold): + """ + Downloads sentinel 2 data, calculates LAI and saves it in a netCDF file at the specified location. + + Parameters + ---------- + + geofile (str) -- path to the file containing the coordinates and name of the AOI, currently tested with geojson. 
+ + outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + + start (str) -- starting date of the data request in the form YYYY-MM-DD + + end (str) -- ending date of the data request in the form YYYY-MM-DD + + qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved + + + Returns + ------- + Nothing: + output netCDF is saved in the specified directory. + + Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray + + To test this function please run the following code inside a python shell after importing this module, testfile is included. + + s2_lai(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1) + + """ + + # read in the input file containing coordinates + df = gpd.read_file(geofile) + + request = gee.S2RequestParams(start, end) + + # filter area of interest from the coordinates in the input file + area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) + + # calcuate qi attribute for the AOI + gee.ee_get_s2_quality_info(area, request) + + # get the final data + gee.ee_get_s2_data(area, request, qi_threshold=qi_threshold) + + # convert dataframe to an xarray dataset, used later for converting to netCDF + gee.s2_data_to_xarray(area, request) + + # calculate LAI for the selected area using SNAP + area.data = bio.run_snap_biophys(area.data, "LAI") + + timeseries = {} + timeseries_variable = ["lai"] + + # if specified output directory does not exist, create it. + if not os.path.exists(outdir): + os.makedirs(outdir, exist_ok=True) + + # creating a timerseries and saving the netCDF file + area.data.to_netcdf(os.path.join(outdir, area.name + ".nc")) + timeseries[area.name] = gee.xr_dataset_to_timeseries(area.data, timeseries_variable) From e15bf8de2e038af9459b409bbd13a7cb41bd4717 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 18 Jun 2020 11:22:27 +0530 Subject: [PATCH 1107/2289] removed GPP and other unused functions --- .../inst/satellitetools/biophys_xarray.py | 173 ------------------ 1 file changed, 173 deletions(-) diff --git a/modules/data.remote/inst/satellitetools/biophys_xarray.py b/modules/data.remote/inst/satellitetools/biophys_xarray.py index 8fb44ab9d4a..ca63c18a062 100644 --- a/modules/data.remote/inst/satellitetools/biophys_xarray.py +++ b/modules/data.remote/inst/satellitetools/biophys_xarray.py @@ -169,40 +169,6 @@ def _compute_variable(x, variable): return output -def _s2_lists_to_pixel_vectors(single_date_dict): - band_list = ["B3", "B4", "B5", "B6", "B7", "B8A", "B11", "B12"] - pixel_vector = np.zeros(shape=(11, len(single_date_dict[band_list[0]]))) - - for i, b in enumerate(band_list): - pixel_vector[i, :] = np.array(single_date_dict[b]) / 10000.0 - - pixel_vector[8, :] = np.cos(np.radians(single_date_dict["view_zenith"])) - pixel_vector[9, :] = np.cos(np.radians(single_date_dict["sun_zenith"])) - pixel_vector[10, :] = np.cos( - np.radians(single_date_dict["sun_azimuth"] - single_date_dict["view_azimuth"]) - ) - return pixel_vector - - -def compute_ci_red_edge(dataset): - """Compute CI_Red_Edge vegetation index. 
- - Parameters - ---------- - dataset : xarray dataset - - Returns - ------- - xarray dataset - Adds 'ci_red_edge' xr array to xr dataset. - - """ - b5 = dataset.band_data.sel(band="B5") - b7 = dataset.band_data.sel(band="B7") - ci_red_edge = (b7 / b5) - 1 - return dataset.assign({"ci_red_edge": ci_red_edge}) - - def run_snap_biophys(dataset, variable): """Compute specified variable using the SNAP algorithm. @@ -267,142 +233,3 @@ def run_snap_biophys(dataset, variable): vectorize=True, ).unstack() return dataset.assign({variable.lower(): arr}) - - -def estimate_gpp_vi_lue(vi, daily_par, model_name="ci_red_edge_1"): - """Estimate GPP using simple vegetation index based models and PAR. - - This function has not been properly tested (i.e.used for a while) - - Parameters - ---------- - vi : float - Vegetation index values. - daily_par : float - Daily PAR as MJ/s/m². - model_name : str, optional - Name of the model (see biophys_xarray.GPP_LUE_models). - The default is 'ci_red_edge_1'. - - Returns - ------- - gpp : float - Estimated gross primary productivity. - - """ - vi_name = "_".join(model_name.split("_")[:-1]) - gpp = GPP_LUE_models[vi_name][model_name]["model"]( - np.array(vi), np.array(daily_par) - ) - return gpp - - -# GPP estimation models -GPP_LUE_models = { - "ci_red_edge": { - "ci_red_edge_1": { - "model": lambda vi, par: 4.80 * np.log(vi * par * 1e3) - 37.93, - "species": "soybean", - "reference": "Peng & Gitelson, 2012", - }, - "ci_red_edge_2": { - "model": lambda vi, par: 0.31 * (vi * par) - 0.1, - "species": "grass", - "reference": "Huang et al. 2019", - }, - }, - "ci_green": { - "ci_green_1": { - "model": lambda vi, par: 5.13 * np.log(vi * par * 1e3) - 46.92, - "species": "soybean", - "reference": "Peng & Gitelson, 2012", - }, - "ci_green_2": { - "model": lambda vi, par: 14.7 * np.log(vi * par * 1e3 + 27900.61) - 154, - "species": "maize", - "reference": "Peng & Gitelson, 2012", - }, - }, - "NDVI": { - "NDVI_1": { - "model": lambda vi, par: 2.07 * (vi * par) - 6.19, - "species": "soybean", - "reference": "Gitelson et al., 2012", - }, - "NDVI_2": { - "model": lambda vi, par: 3.11 * (vi * par) - 9.22, - "species": "maize", - "reference": "Gitelson et al., 2012", - }, - "NDVI_3": { - "model": lambda vi, par: ( - -3.26 * 1e-8 * (vi * par * 1e3) ** 2 - + 1.7 * 1e-3 * (vi * par * 1e3) - - 2.17 - ), - "species": "soybean", - "reference": "Peng & Gitelson, 2012", - }, - "NDVI_4": { - "model": lambda vi, par: 1.94e-3 * (vi * par * 1e3) - 2.59, - "species": "maize", - "reference": "Peng & Gitelson, 2012", - }, - }, - "gndvi": { - "gndvi_1": { - "model": lambda vi, par: 2.86 * (vi * par) - 11.9, - "species": "soybean", - "reference": "Gitelson et al., 2012", - }, - "gndvi_2": { - "model": lambda vi, par: 4 * (vi * par) - 15.4, - "species": "maize", - "reference": "Gitelson et al., 2012", - }, - }, - "evi": { - "evi_1": { - "model": lambda vi, par: (2.26 * (vi * par) - 3.73), - "species": "soybean", - "reference": "Peng et al., 2013", - }, - "evi_2": { - "model": lambda vi, par: (3.49 * (vi * par) - 4.92), - "species": "maize", - "reference": "Peng et al., 2013", - }, - }, - "reNDVI": { - "reNDVI_1": { - "model": lambda vi, par: 1.61 * (vi * par) - 1.75, - "species": "mixed", - "reference": "Wolanin et al., 2019", - }, - "reNDVI_2": { - "model": lambda vi, par: ( - -1.19 * 1e-7 * (vi * par * 1e3) ** 2 - + 3 * 1e-3 * (vi * par * 1e3) - - 2.70 - ), - "species": "soybean", - "reference": "Peng & Gitelson, 2012", - }, - "reNDVI_3": { - "model": lambda vi, par: ( - -3.41 * 1e-8 * (vi * par * 
1e3) ** 2 - + 2.77 * 1e-3 * (vi * par * 1e3) - - 2.06 - ), - "species": "maize", - "reference": "Peng & Gitelson, 2012", - }, - }, - "fapar": { - "fapar_1": { - "model": lambda vi, par: 1.10 * (vi * par), - "species": "grass", - "reference": "Olli Qvidja test fapar*x*PAR", - } - }, -} From 22c5ce12bd6772a48ec7a9e074736b40cc569b33 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 18 Jun 2020 11:24:55 +0530 Subject: [PATCH 1108/2289] added gee2pecan_bands --- .../inst/{s2lai.py => gee2pecan_bands.py} | 36 ++++---- modules/data.remote/inst/s2ndvi.py | 83 ------------------- 2 files changed, 14 insertions(+), 105 deletions(-) rename modules/data.remote/inst/{s2lai.py => gee2pecan_bands.py} (64%) delete mode 100644 modules/data.remote/inst/s2ndvi.py diff --git a/modules/data.remote/inst/s2lai.py b/modules/data.remote/inst/gee2pecan_bands.py similarity index 64% rename from modules/data.remote/inst/s2lai.py rename to modules/data.remote/inst/gee2pecan_bands.py index d83961a9911..3c325784962 100644 --- a/modules/data.remote/inst/s2lai.py +++ b/modules/data.remote/inst/gee2pecan_bands.py @@ -2,29 +2,31 @@ # -*- coding: utf-8 -*- """ -Calculates LAI from Sentinel 2 data. Uses geeapi to download data and the python implementation of the SNAP toolbox to calculate LAI. +Downloads ESA Sentinel 2, Level-2A Bottom of Atmosphere data and saves it in a netCDF file. +Bands retrieved: B3, B4, B5, B6, B7, B8A, B11 and B12 +More information about the bands and the process followed to get the data can be found out at /satellitetools/geeapi.py + +Warning: Point coordinates as input has currently not been implemented. Requires Python3 -Warning: Point coordinates as input has currently not been implemented. +Uses satellitetools created by Olli Nevalainen. Author: Ayush Prasad """ -import satellitetools.biophys_xarray as bio from satellitetools import gee import geopandas as gpd import os -def s2_lai(geofile, outdir, start, end, qi_threshold): +def gee2pecan_bands(geofile, outdir, start, end, qi_threshold): """ - Downloads sentinel 2 data, calculates LAI and saves it in a netCDF file at the specified location. + Downloads Sentinel 2 data from gee and saves it in a netCDF file at the specified location. Parameters ---------- - - geofile (str) -- path to the file containing the coordinates and name of the AOI, currently tested with geojson. + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. @@ -34,18 +36,15 @@ def s2_lai(geofile, outdir, start, end, qi_threshold): qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved - Returns ------- Nothing: output netCDF is saved in the specified directory. Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray - To test this function please run the following code inside a python shell after importing this module, testfile is included. 
- - s2_lai(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1) - + + gee2pecan_bands(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1) """ # read in the input file containing coordinates @@ -65,16 +64,9 @@ def s2_lai(geofile, outdir, start, end, qi_threshold): # convert dataframe to an xarray dataset, used later for converting to netCDF gee.s2_data_to_xarray(area, request) - # calculate LAI for the selected area using SNAP - area.data = bio.run_snap_biophys(area.data, "LAI") - - timeseries = {} - timeseries_variable = ["lai"] - - # if specified output directory does not exist, create it. + # if specified output directory does not exist, create it if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) - # creating a timerseries and saving the netCDF file - area.data.to_netcdf(os.path.join(outdir, area.name + ".nc")) - timeseries[area.name] = gee.xr_dataset_to_timeseries(area.data, timeseries_variable) + # create a timerseries and save the netCDF file + area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc")) diff --git a/modules/data.remote/inst/s2ndvi.py b/modules/data.remote/inst/s2ndvi.py deleted file mode 100644 index b98affd99ee..00000000000 --- a/modules/data.remote/inst/s2ndvi.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -""" -Module to calculate NDVI from sentinel 2 data, based on Satellitetools by Olli Nevalainen. -This will be used to create automated workflows for the remote data module. - -Please use Python3 for using this module. - -Warning: Point coordinates as input has currently not been implemented. - -Author: Ayush Prasad -""" - -from satellitetools import gee -import geopandas as gpd -import os - - -def s2_ndvi(geofile, outdir, start, end, qi_threshold): - """ - Downloads sentinel 2 data, calculates NDVI and saves it in a netCDF file at the specified location. - - Parameters - ---------- - - geofile (str) -- path to the file containing the coordinates of AOI, currently tested with geojson. - - outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. - - start (str) -- starting date of the data request in the form YYYY-MM-DD - - end (str) -- ending date of the data request in the form YYYY-MM-DD - - qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved - - - Returns - ------- - Nothing: - output netCDF is saved in the specified directory. - - Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray - - To test this function please run the following code, testfile is included. 
- - s2_ndvi(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1) - - - - - """ - - # read in the input file containing coordinates - df = gpd.read_file(geofile) - - request = gee.S2RequestParams(start, end) - - # filter area of interest from the coordinates in the input file - area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) - - # calcuate qi attribute for the AOI - gee.ee_get_s2_quality_info(area, request) - - # get the final data - gee.ee_get_s2_data(area, request, qi_threshold=qi_threshold) - - # convert dataframe to an xarray dataset, used later for converting to netCDF - gee.s2_data_to_xarray(area, request) - - # calculate NDVI for the selected area - area.data = gee.compute_ndvi(area.data) - - timeseries = {} - timeseries_variable = ["ndvi"] - - # if specified output directory does not exist, create it. - if not os.path.exists(outdir): - os.makedirs(outdir, exist_ok=True) - - # creating a timerseries and saving the netCDF file - area.data.to_netcdf(os.path.join(outdir, area.name + ".nc")) - timeseries[area.name] = gee.xr_dataset_to_timeseries(area.data, timeseries_variable) From 390d1839b1443c06fb32d64dd06c110780301c03 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 18 Jun 2020 11:26:25 +0530 Subject: [PATCH 1109/2289] added bands2lai_snap --- modules/data.remote/inst/bands2lai_snap.py | 47 ++++++++++++++++++++++ 1 file changed, 47 insertions(+) create mode 100644 modules/data.remote/inst/bands2lai_snap.py diff --git a/modules/data.remote/inst/bands2lai_snap.py b/modules/data.remote/inst/bands2lai_snap.py new file mode 100644 index 00000000000..86707027432 --- /dev/null +++ b/modules/data.remote/inst/bands2lai_snap.py @@ -0,0 +1,47 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +""" +Calculates LAI using SNAP. + +Author: Ayush Prasad +""" + +from satellitetools import gee +import satellitetools.biophys_xarray as bio +import geopandas as gpd +import xarray as xr +import os + + +def bands2lai_snap(inputfile, outdir): + """ + Calculates LAI for the input netCDF file and saves it in a new netCDF file. + + Parameters + ---------- + input (str) -- path to the input netCDF file containing bands. + + outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + + Returns + ------- + Nothing: + output netCDF is saved in the specified directory. + + """ + # load the input file + ds_disk = xr.open_dataset(inputfile) + # calculate LAI using SNAP + area = bio.run_snap_biophys(ds_disk, "LAI") + + timeseries = {} + timeseries_variable = ["lai"] + + # if specified output directory does not exist, create it. 
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    # creating a timeseries and saving the netCDF file
+    area.to_netcdf(os.path.join(outdir, area.name + "_lai.nc"))
+    timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable)
\ No newline at end of file

From 3ee0d5a88bda65adb619c090d059e2841bc55c30 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 18 Jun 2020 11:27:51 +0530
Subject: [PATCH 1110/2289] added bands2ndvi

---
 modules/data.remote/inst/bands2ndvi.py | 46 ++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)
 create mode 100644 modules/data.remote/inst/bands2ndvi.py

diff --git a/modules/data.remote/inst/bands2ndvi.py b/modules/data.remote/inst/bands2ndvi.py
new file mode 100644
index 00000000000..216c3b1e77c
--- /dev/null
+++ b/modules/data.remote/inst/bands2ndvi.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Calculates NDVI using gee.
+
+Author: Ayush Prasad
+"""
+
+import xarray as xr
+from satellitetools import gee
+import geopandas as gpd
+import os
+
+
+def bands2ndvi(inputfile, outdir):
+    """
+    Calculates NDVI for the input netCDF file and saves it in a new netCDF file.
+
+    Parameters
+    ----------
+    input (str) -- path to the input netCDF file containing bands.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    Returns
+    -------
+    Nothing:
+    output netCDF is saved in the specified directory.
+
+    """
+    # load the input file
+    ds_disk = xr.open_dataset(inputfile)
+    # calculate NDVI using gee
+    area = gee.compute_ndvi(ds_disk)
+
+    timeseries = {}
+    timeseries_variable = ["ndvi"]
+
+    # if the specified output directory does not exist, create it.
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    # creating a timeseries and saving the netCDF file
+    area.to_netcdf(os.path.join(outdir, area.name + "_ndvi.nc"))
+    timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable)

From 24e2184b59423eb8bc0d9c179ce15a8aa3b17681 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Thu, 18 Jun 2020 13:26:51 +0000
Subject: [PATCH 1111/2289] added status endpoint & modified dbHostInfo()

---
 CHANGELOG.md               |   1 +
 apps/api/auth.R            |   9 +-
 apps/api/entrypoint.R      |  20 +++-
 apps/api/general.R         |  20 ++++
 apps/api/models.R          |   3 +-
 apps/api/pecanapi-spec.yml | 227 ++++++++++++++++---------------------
 apps/api/ping.R            |  10 --
 apps/api/workflows.R       |   1 +
 base/db/R/query.dplyr.R    |  14 ++-
 9 files changed, 149 insertions(+), 156 deletions(-)
 create mode 100644 apps/api/general.R
 delete mode 100644 apps/api/ping.R

diff --git a/CHANGELOG.md b/CHANGELOG.md
index df47224b858..6d463a343a9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -42,6 +42,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha

 ### Added

+- PEcAn API that can be used to talk to PEcAn servers. Endpoints to GET the details about the server that the user is talking to, PEcAn models, workflows & runs. Authentication enabled. (#2631)
 - New versioned ED2IN template: ED2IN.2.2.0 (#2143) (replaces ED2IN.git)
 - model_info.json and Dockerfile to template (#2567)
 - Dockerize BASGRA_N model.
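As a quick smoke test of the endpoints added in this patch (a sketch, assuming the server below is running locally on port 8000 and that valid BETYdb credentials are available; the placeholder credentials are assumptions):

    import requests

    # unauthenticated endpoints (allowed through by the filter in auth.R)
    print(requests.get("http://localhost:8000/api/ping").json())
    print(requests.get("http://localhost:8000/api/status").json())

    # all other endpoints expect HTTP basic authentication
    print(requests.get("http://localhost:8000/api/models",
                       auth=("<bety_user>", "<bety_password>")).json())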
diff --git a/apps/api/auth.R b/apps/api/auth.R
index 55c4accb9ab..6cf3114a7e3 100644
--- a/apps/api/auth.R
+++ b/apps/api/auth.R
@@ -31,7 +31,7 @@ validate_crypt_pass <- function(username, crypt_pass) {
 
   dbcon <- PEcAn.DB::betyConnect()
 
-  res <- tbl(bety, "users") %>%
+  res <- tbl(dbcon, "users") %>%
     filter(login == username,
            crypted_password == crypt_pass) %>%
     count() %>%
@@ -52,11 +52,12 @@ validate_crypt_pass <- function(username, crypt_pass) {
 #* @return Appropriate response
 #* @author Tezan Sahu
 authenticate_user <- function(req, res) {
-  # If the API endpoint called does not requires authentication, allow it to pass through as is
+  # If the API endpoint called does not require authentication, allow it to pass through
   if (
     grepl("swagger", req$PATH_INFO, ignore.case = TRUE) ||
     grepl("openapi.json", req$PATH_INFO, fixed = TRUE) ||
-    grepl("ping", req$PATH_INFO, ignore.case = TRUE))
+    grepl("ping", req$PATH_INFO, ignore.case = TRUE) ||
+    grepl("status", req$PATH_INFO, ignore.case = TRUE))
   {
     return(plumber::forward())
   }
@@ -77,4 +78,4 @@ authenticate_user <- function(req, res) {
 
   res$status <- 401 # Unauthorized
   return(list(error="Authentication required"))
-}
+}
\ No newline at end of file

diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R
index d0ab1fb2104..9e01dd8ca21 100644
--- a/apps/api/entrypoint.R
+++ b/apps/api/entrypoint.R
@@ -1,25 +1,37 @@
+#* This is the entry point to the PEcAn API.
+#* All API endpoints (& filters) are mounted here
+#* @author Tezan Sahu
+
 source("auth.R")
-source("ping.R")
+source("general.R")
 
 root <- plumber::plumber$new()
 root$setSerializer(plumber::serializer_unboxed_json())
 
+# Filter for authenticating users trying to hit the API endpoints
 root$filter("require-auth", authenticate_user)
 
+# The /api/ping & /api/status are standalone API endpoints
+# implemented using handle() because of restrictions of plumber
+# to mount multiple endpoints on the same path (or subpath)
 root$handle("GET", "/api/ping", ping)
+root$handle("GET", "/api/status", status)
 
+# The endpoints defined here are related to details of PEcAn models
 models_pr <- plumber::plumber$new("models.R")
 root$mount("/api/models", models_pr)
 
+# The endpoints defined here are related to details of PEcAn workflows
 workflows_pr <- plumber::plumber$new("workflows.R")
 root$mount("/api/workflows", workflows_pr)
 
+# The endpoints defined here are related to details of PEcAn runs
 runs_pr <- plumber::plumber$new("runs.R")
 root$mount("/api/runs", runs_pr)
 
+# The API server is bound to 0.0.0.0 on port 8000
+# The Swagger UI for the API draws its source from the pecanapi-spec.yml file
 root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...)
{ spec <- yaml::read_yaml("pecanapi-spec.yml") spec -}) - - +}) \ No newline at end of file diff --git a/apps/api/general.R b/apps/api/general.R new file mode 100644 index 00000000000..c19c477f98c --- /dev/null +++ b/apps/api/general.R @@ -0,0 +1,20 @@ +#* Function to be executed when /api/ping endpoint is called +#* If successful connection to API server is established, this function will return the "pong" message +#* @return Mapping containing response as "pong" +#* @author Tezan Sahu +ping <- function(req){ + res <- list(request="ping", response="pong") + res +} + +#* Function to get the status & basic information about the Database Host +#* @return Details about the database host +#* @author Tezan Sahu +status <- function() { + dbcon <- PEcAn.DB::betyConnect() + res <- list(host_details = PEcAn.DB::dbHostInfo(dbcon)) + + # Needs to be completed using env var + res$pecan_details <- list(version="1.7.0", branch="api_1", gitsha1="unknown") + return(res) +} \ No newline at end of file diff --git a/apps/api/models.R b/apps/api/models.R index cddfc4442a9..ef8da6582cd 100644 --- a/apps/api/models.R +++ b/apps/api/models.R @@ -39,5 +39,4 @@ getModels <- function(model_name="all", revision="all", res){ else { return(list(models=qry_res)) } -} - +} \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 175a944e4c1..93b8968b709 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/c5b30dae + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/eb9c2d61 - description: Localhost url: http://127.0.0.1:8000 @@ -22,7 +22,7 @@ externalDocs: tags: - name: general - description: Related to the overall working on the API & its details + description: Related to the overall working on the API, details of PEcAn & the server - name: workflows description: Everything about PEcAn workflows - name: runs @@ -37,9 +37,10 @@ security: - basicAuth: [] paths: - /api/: + + /api/ping: get: - summary: Root of the API tree + summary: Ping the server to check if it is live tags: - general - @@ -48,20 +49,21 @@ paths: description: OK content: application/json: - schema: - type: object - properties: - message: - type: string - example: This is the API for the PEcAn Project - server: - type: string - status: - type: string - - /api/ping: + schema: + type: object + properties: + req: + type: string + resp: + type: string + '403': + description: Access forbidden + '404': + description: Models not found + + /api/status: get: - summary: Ping the server to check if it is live + summary: Obtain general information about PEcAn & the details of the database host tags: - general - @@ -73,12 +75,31 @@ paths: schema: type: object properties: - req: - type: string - resp: - type: string - '401': - description: Authentication required + pecan_details: + type: object + properties: + version: + type: string + branch: + type: string + gitsha1: + type: string + host_details: + type: object + properties: + hostid: + type: integer + hostname: + type: string + start: + type: string + end: + type: string + sync_url: + type: string + sync_contact: + type: string + '403': description: Access forbidden '404': @@ -277,12 +298,19 @@ paths: content: application/json: schema: - type: array - items: - type: object - properties: - id: - type: string + type: object + properties: + runs: + type: array + items: + type: object + $ref: '#/components/schemas/Run' + count: 
+ type: integer + next_page: + type: string + prev_page: + type: string '401': description: Authentication required '403': @@ -335,62 +363,28 @@ components: type: string modeltype_id: type: string + model_type: + type: string Run: properties: - run_id: + id: type: string workflow_id: type: string runtype: type: string - quantile: - type: number - site: - type: object - properties: - id: - type: string - name: - type: string - lat: - type: number - lon: - type: number - pfts: - type: array - items: - type: object - properties: - name: - type: string - constants: - type: array - items: - type: number - model: - type: object - properties: - id: - type: string - name: - type: string - inputs: - type: array - items: - type: object - properties: - type: - type: string - id: - type: string - source: - type: string - path: - type: string - start_date: + ensemble_id: + type: string + model_id: + type: string + site_id: type: string - end_date: + parameter_list: + type: string + start_time: + type: string + finish_time: type: string Workflow: @@ -401,63 +395,32 @@ components: type: array items: type: string - model: - type: object - properties: - id: - type: string - name: - type: string - meta_analysis: - type: object - properties: - iter: - type: number - random_effects: - type: object - properties: - "on": - type: boolean - use_ghs: - type: boolean - update: - type: boolean - threshold: - type: number - ensemble: - type: object - properties: - size: - type: number - variable: - type: string - sampling_space: - type: object - properties: - parameters.method: - type: string - met.method: - type: string - sensitivity_analysis: - type: object - properties: - quantiles_sigma: - type: array - items: - type: number - variable: - type: string - perpft: - type: boolean + input_met: + type: string + modelid: + type: string + siteid: + type: string + sitename: + type: string + sitegroupid: + type: string + start: + type: string + end: + type: string + variables: + type: string + sensitivity: + type: string + email: + type: string + notes: + type: string runs: - type: array - items: - type: string - host: - type: object - properties: - name: - type: string + type: string + pecan_edit: + type: string status: type: string securitySchemes: diff --git a/apps/api/ping.R b/apps/api/ping.R deleted file mode 100644 index 32a99b9c8c6..00000000000 --- a/apps/api/ping.R +++ /dev/null @@ -1,10 +0,0 @@ -#* Function to be executed when /api/ping endpoint is called -#* -#* If successful connection to API server is established, this function will return the "pong" message -#* @return Mapping containing response as "pong" -#* @author Tezan Sahu -ping <- function(){ - res <- list(request="ping", response="pong") - res -} - diff --git a/apps/api/workflows.R b/apps/api/workflows.R index 4ecbe94acea..fcdc78795ec 100644 --- a/apps/api/workflows.R +++ b/apps/api/workflows.R @@ -34,6 +34,7 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r } qry_res <- Workflow %>% + select(-model_id, -site_id) %>% arrange(id) %>% collect() diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R index a3c64fce868..3099d0757a5 100644 --- a/base/db/R/query.dplyr.R +++ b/base/db/R/query.dplyr.R @@ -77,17 +77,23 @@ dbHostInfo <- function(bety) { # get machine start and end based on hostid machine <- dplyr::tbl(bety, "machines") %>% - dplyr::filter(sync_host_id == !!hostid) %>% - dplyr::select(sync_start, sync_end) + dplyr::filter(sync_host_id == !!hostid) + if (is.na(nrow(machine)) || nrow(machine) == 0) { 
return(list(hostid = hostid, + hostname = "", start = 1e+09 * hostid, - end = 1e+09 * (hostid + 1) - 1)) + end = 1e+09 * (hostid + 1) - 1, + sync_url = "", + sync_contact = "")) } else { return(list(hostid = hostid, + hostname = machnie$hostname, start = machine$sync_start, - end = machine$sync_end)) + end = machine$sync_end, + sync_url = machines$sync_url, + sync_contact = machines$sync_contact)) } } # dbHostInfo From 8d7e28f26d08b96a09de0282e500a843257bb836 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 18 Jun 2020 14:59:03 +0000 Subject: [PATCH 1112/2289] status endpoint uses env vars --- apps/api/entrypoint.R | 6 +++--- apps/api/general.R | 14 ++++++++++++-- apps/api/pecanapi-spec.yml | 2 +- 3 files changed, 16 insertions(+), 6 deletions(-) diff --git a/apps/api/entrypoint.R b/apps/api/entrypoint.R index 9e01dd8ca21..9512f2f232c 100644 --- a/apps/api/entrypoint.R +++ b/apps/api/entrypoint.R @@ -17,15 +17,15 @@ root$filter("require-auth", authenticate_user) root$handle("GET", "/api/ping", ping) root$handle("GET", "/api/status", status) -# The endpoints defined here are related to details of PEcAn models +# The endpoints mounted here are related to details of PEcAn models models_pr <- plumber::plumber$new("models.R") root$mount("/api/models", models_pr) -# The endpoints defined here are related to details of PEcAn workflows +# The endpoints mounted here are related to details of PEcAn workflows workflows_pr <- plumber::plumber$new("workflows.R") root$mount("/api/workflows", workflows_pr) -# The endpoints defined here are related to details of PEcAn runs +# The endpoints mounted here are related to details of PEcAn runs runs_pr <- plumber::plumber$new("runs.R") root$mount("/api/runs", runs_pr) diff --git a/apps/api/general.R b/apps/api/general.R index c19c477f98c..5b8fcc7fd67 100644 --- a/apps/api/general.R +++ b/apps/api/general.R @@ -11,10 +11,20 @@ ping <- function(req){ #* @return Details about the database host #* @author Tezan Sahu status <- function() { + + ## helper function to obtain environment variables + get_env_var = function (item, default = "unknown") { + value = Sys.getenv(item) + if (value == "") default else value + } + dbcon <- PEcAn.DB::betyConnect() res <- list(host_details = PEcAn.DB::dbHostInfo(dbcon)) - # Needs to be completed using env var - res$pecan_details <- list(version="1.7.0", branch="api_1", gitsha1="unknown") + res$pecan_details <- list( + version = get_env_var("PECAN_VERSION"), + branch = get_env_var("PECAN_GIT_BRANCH"), + gitsha1 = get_env_var("PECAN_GIT_CHECKSUM") + ) return(res) } \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 93b8968b709..c4adf652f47 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -88,7 +88,7 @@ paths: type: object properties: hostid: - type: integer + type: string hostname: type: string start: From 81bd7a357c804460d27c2be7501312f10bff645f Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 18 Jun 2020 15:35:44 +0000 Subject: [PATCH 1113/2289] minor bugfix in dbHostInfo --- base/db/R/query.dplyr.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R index 3099d0757a5..b08751f1337 100644 --- a/base/db/R/query.dplyr.R +++ b/base/db/R/query.dplyr.R @@ -89,11 +89,11 @@ dbHostInfo <- function(bety) { sync_contact = "")) } else { return(list(hostid = hostid, - hostname = machnie$hostname, + hostname = machine$hostname, start = machine$sync_start, end = machine$sync_end, - sync_url = 
machines$sync_url, - sync_contact = machines$sync_contact)) + sync_url = machine$sync_url, + sync_contact = machine$sync_contact)) } } # dbHostInfo From f4a360617f77bcccc05fa96ff0a3a348d69c82a5 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 18 Jun 2020 23:44:46 +0530 Subject: [PATCH 1114/2289] updated auth and build jobs --- .github/workflows/book.yml | 35 +++++++++++++++++------------------ 1 file changed, 17 insertions(+), 18 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 0a6826453c0..8f5d5c9e483 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -28,14 +28,7 @@ jobs: checkout-and-deploy: runs-on: ubuntu-latest needs: bookdown - env: - EMAIL: ${{ secrets.EMAIL }} steps: - - name: Checkout documentation repo - uses: actions/checkout@v2 - with: - repository: MukulMaheshwari/pecan-documentation - path: pecan-documentation - name: Download artifact uses: actions/download-artifact@v2 with: @@ -44,14 +37,20 @@ jobs: # Destination path path: _book/ # optional repo-token: ${{ secrets.GITHUB_TOKEN }} - - name: commit changes - run: | - git clone https://${GITHUB_PA}@github.com/${GH_USER}/pecan-documentation.git book_hosted - cd book_hosted - rsync -a --delete ../_book/ VERSION/ - git add --all * - git commit -m "Update the book `date`" || true - git push -q origin master - - - + - name: Checkout documentation repo + uses: actions/checkout@v2 + with: + repository: ${{ github.repository_owner }}/pecan-documentation + path: pecan-documentation + token: ${{ secrets.GITHUB_PA }} + - run: | + export VERSION=${GITHUB_REF##*/}_test + cd pecan-documentation && mkdir -p $VERSION + git config --global user.email "pecanproj@gmail.com" + git config --global user.name "GitHub Documentation Robot" + rsync -a --delete ../_book/ $VERSION + git add --all * + git commit -m "Build book from pecan revision $GITHUB_SHA" || true + git push -q origin master + + From ac23b024e0b5f158ca507b91b7b3c0168d440170 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Thu, 18 Jun 2020 23:51:46 +0530 Subject: [PATCH 1115/2289] Update book.yml --- .github/workflows/book.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 8f5d5c9e483..47915f83296 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -36,13 +36,13 @@ jobs: name: _book # optional # Destination path path: _book/ # optional - repo-token: ${{ secrets.GITHUB_TOKEN }} + # repo-token: ${{ secrets.GITHUB_TOKEN }} - name: Checkout documentation repo uses: actions/checkout@v2 with: repository: ${{ github.repository_owner }}/pecan-documentation path: pecan-documentation - token: ${{ secrets.GITHUB_PA }} + token: ${{ secrets.GH_PAT }} - run: | export VERSION=${GITHUB_REF##*/}_test cd pecan-documentation && mkdir -p $VERSION From 3cf8cd7fc7e0c34e839235bad44e49c1055f60c9 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 19 Jun 2020 16:28:15 +0530 Subject: [PATCH 1116/2289] added remote_process v1 --- modules/data.remote/inst/remote_process.py | 119 +++++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 modules/data.remote/inst/remote_process.py diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py new file mode 100644 index 00000000000..2ba406471d6 --- /dev/null +++ b/modules/data.remote/inst/remote_process.py @@ -0,0 +1,119 @@ 
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+remote_process controls the individual functions to create an automatic workflow for downloading and performing computation on remote sensing data.
+
+Requires Python3
+
+Author: Ayush Prasad
+"""
+
+from gee2pecan_bands import gee2pecan_bands
+from bands2ndvi import bands2ndvi
+from bands2lai_snap import bands2lai_snap
+from satellitetools import gee
+import geopandas as gpd
+import os
+
+
+def remote_process(
+    geofile,
+    outdir,
+    start,
+    end,
+    qi_threshold,
+    source="gee",
+    collection="COPERNICUS/S2_SR",
+    input_type="bands",
+    output=["lai", "ndvi"],
+):
+    """
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    qi_threshold (float) -- Threshold value to filter images based on the used qi filter. The qi filter holds labels of classes whose percentages within the AOI are summed. If the sum is larger than the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved.
+
+    source (str) -- source from where data is to be downloaded
+
+    collection (str) -- dataset ID
+
+    input_type (str) -- type of raw intermediate data
+
+    output (list of str) -- type of output data requested
+
+    Returns
+    -------
+    Nothing:
+            output netCDF is saved in the specified directory.
+
+    Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray
+
+    To test this function run: python3 remote_process.py
+    """
+
+    # this part will be removed in the next version, after deciding whether to pass the file or the extracted data to initial functions
+    df = gpd.read_file(geofile)
+    area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0])
+
+    # selecting the initial data download function by concatenating source and input_type
+    initial_step = "".join([source, "2pecan", input_type])
+    if initial_step == "gee2pecanbands":
+        if collection == "COPERNICUS/S2_SR":
+            gee2pecan_bands(geofile, outdir, start, end, qi_threshold)
+        else:
+            print("other gee download options go here, currently WIP")
+            # This should be a function being called from another file
+            """
+            data = ee.ImageCollection(collection)
+            filtered_data = (data.filterDate(start, end).select(bands).filterBounds(ee.Geometry(pathtofile))
+            filtered_data = filtered_data.getInfo()
+            ...
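+            # hypothetical continuation (names are illustrative): getInfo()
+            # returns a header row followed by data rows, which could be
+            # flattened and saved the same way gee2pecan_bands does
+            records = [dict(zip(filtered_data[0], row)) for row in filtered_data[1:]]
+            xr.Dataset(pd.DataFrame(records)).to_netcdf(os.path.join(outdir, "bands.nc"))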
+ """ + + else: + print("other sources like AppEEARS go here") + return + + # if raw data is the requested output, process is completed + if input_type == output: + print("process is complete") + + else: + # locate the raw data file formed in initial step + input_file = "".join([outdir, area.name, "_", str(input_type), ".nc"]) + + # store all the requested conversions in a list + conversions = [] + for conv_type in output: + conversions.append("".join([input_type, "2", conv_type])) + + # perform the available conversions + if "bands2lai" in conversions: + print("using SNAP to calculate LAI") + bands2lai_snap(input_file, outdir) + + if "bands2ndvi" in conversions: + print("using GEE to calculate NDVI") + bands2ndvi(input_file, outdir) + + +if __name__ == "__main__": + remote_process( + "./satellitetools/test.geojson", + "./out/", + "2019-01-01", + "2019-12-31", + 1, + "gee", + "COPERNICUS/S2_SR", + "bands", + ["lai", "ndvi"], + ) From be7098f045c1c650b2c8caa0133cf696b917358e Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 19 Jun 2020 16:38:14 +0530 Subject: [PATCH 1117/2289] unused import --- modules/data.remote/inst/remote_process.py | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index 2ba406471d6..2d0de3b42f8 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -14,7 +14,6 @@ from bands2lai_snap import bands2lai_snap from satellitetools import gee import geopandas as gpd -import os def remote_process( From 32e95940cbd85d8ccf4efb78774695ae2bd3b366 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 19 Jun 2020 22:40:16 +0200 Subject: [PATCH 1118/2289] pass connection not list (plus misc roxygen fixes) --- modules/data.atmosphere/R/met.process.R | 51 +++++++++---------- modules/data.atmosphere/R/met.process.stage.R | 1 + modules/data.atmosphere/man/browndog.met.Rd | 20 ++++---- .../data.atmosphere/man/db.site.lat.lon.Rd | 9 +++- .../data.atmosphere/man/met.process.stage.Rd | 2 +- .../tests/Rcheck_reference.log | 5 +- 6 files changed, 45 insertions(+), 43 deletions(-) diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index e1d8e49c52b..d91cc9d8b87 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -87,12 +87,12 @@ met.process <- function(site, input_met, start_date, end_date, model, } # set up connection and host information - bety <- dplyr::src_postgres(dbname = dbparms$dbname, - host = dbparms$host, - user = dbparms$user, - password = dbparms$password) - - con <- bety$con + con <- dplyr::src_postgres( + dbname = dbparms$dbname, + host = dbparms$host, + user = dbparms$user, + password = dbparms$password)$con + on.exit(PEcAn.DB::db.close(con), add = TRUE) username <- ifelse(is.null(input_met$username), "pecan", input_met$username) machine.host <- ifelse(host == "localhost" || host$name == "localhost", PEcAn.remote::fqdn(), host$name) @@ -128,10 +128,10 @@ met.process <- function(site, input_met, start_date, end_date, model, # first attempt at function that designates where to start met.process if (is.null(input_met$id)) { stage <- list(download.raw = TRUE, met2cf = TRUE, standardize = TRUE, met2model = TRUE) - format.vars <- PEcAn.DB::query.format.vars(bety = bety, format.id = register$format$id) # query variable info from format id + format.vars <- PEcAn.DB::query.format.vars(bety = con, format.id = register$format$id) # query variable info from 
format id } else { stage <- met.process.stage(input.id=input_met$id, raw.id=register$format$id, con) - format.vars <- PEcAn.DB::query.format.vars(bety = bety, input.id = input_met$id) # query DB to get format variable information if available + format.vars <- PEcAn.DB::query.format.vars(bety = con, input.id = input_met$id) # query DB to get format variable information if available # Is there a situation in which the input ID could be given but not the file path? # I'm assuming not right now assign(stage$id.name, @@ -280,7 +280,7 @@ met.process <- function(site, input_met, start_date, end_date, model, con = con, host = host, overwrite = overwrite$met2cf, format.vars = format.vars, - bety = bety) + bety = con) } else { if (! met %in% c("ERA5")) cf.id = input_met$id } @@ -411,11 +411,11 @@ met.process <- function(site, input_met, start_date, end_date, model, ################################################################################################################################# -##' @name db.site.lat.lon -##' @title db.site.lat.lon +##' Look up lat/lon from siteid +##' ##' @export -##' @param site.id -##' @param con +##' @param site.id BeTY ID of site to look up +##' @param con database connection ##' @author Betsy Cowdery db.site.lat.lon <- function(site.id, con) { site <- PEcAn.DB::db.query(paste("SELECT id, ST_X(ST_CENTROID(geometry)) AS lon, ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id =", @@ -434,20 +434,19 @@ db.site.lat.lon <- function(site.id, con) { ################################################################################################################################# -##' @name browndog.met -##' @description Use browndog to get the met data for a specific model -##' @title get met data from browndog +##' Use browndog to get the met data for a specific model +##' ##' @export -##' @param browndog, list with url, username and password to connect to browndog -##' @param source, the source of the met data, currently only NARR an Ameriflux is supported -##' @param site, site information should have id, lat, lon and name (ameriflux id) -##' @param start_date, start date for result -##' @param end_date, end date for result -##' @param model, model to convert the met data to -##' @param dir, folder where results are stored (in subfolder) -##' @param username, used when downloading data from Ameriflux like sites -##' @param con, database connection -## +##' @param browndog list with url, username and password to connect to browndog +##' @param source the source of the met data, currently only NARR an Ameriflux is supported +##' @param site site information should have id, lat, lon and name (ameriflux id) +##' @param start_date start date for result +##' @param end_date end date for result +##' @param model model to convert the met data to +##' @param dir folder where results are stored (in subfolder) +##' @param username used when downloading data from Ameriflux like sites +##' @param con database connection +##' ##' @author Rob Kooper browndog.met <- function(browndog, source, site, start_date, end_date, model, dir, username, con) { folder <- tempfile("BD-", dir) diff --git a/modules/data.atmosphere/R/met.process.stage.R b/modules/data.atmosphere/R/met.process.stage.R index 41b2c756166..e0ca23964a8 100644 --- a/modules/data.atmosphere/R/met.process.stage.R +++ b/modules/data.atmosphere/R/met.process.stage.R @@ -4,6 +4,7 @@ ##' ##' @param input.id ##' @param raw.id +##' @param con database connection ##' ##' @author Elizabeth Cowdery met.process.stage <- 
function(input.id, raw.id, con) { diff --git a/modules/data.atmosphere/man/browndog.met.Rd b/modules/data.atmosphere/man/browndog.met.Rd index 18b8bdb63f9..16f5b224972 100644 --- a/modules/data.atmosphere/man/browndog.met.Rd +++ b/modules/data.atmosphere/man/browndog.met.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/met.process.R \name{browndog.met} \alias{browndog.met} -\title{get met data from browndog} +\title{Use browndog to get the met data for a specific model} \usage{ browndog.met( browndog, @@ -17,23 +17,23 @@ browndog.met( ) } \arguments{ -\item{browndog, }{list with url, username and password to connect to browndog} +\item{browndog}{list with url, username and password to connect to browndog} -\item{source, }{the source of the met data, currently only NARR an Ameriflux is supported} +\item{source}{the source of the met data, currently only NARR an Ameriflux is supported} -\item{site, }{site information should have id, lat, lon and name (ameriflux id)} +\item{site}{site information should have id, lat, lon and name (ameriflux id)} -\item{start_date, }{start date for result} +\item{start_date}{start date for result} -\item{end_date, }{end date for result} +\item{end_date}{end date for result} -\item{model, }{model to convert the met data to} +\item{model}{model to convert the met data to} -\item{dir, }{folder where results are stored (in subfolder)} +\item{dir}{folder where results are stored (in subfolder)} -\item{username, }{used when downloading data from Ameriflux like sites} +\item{username}{used when downloading data from Ameriflux like sites} -\item{con, }{database connection} +\item{con}{database connection} } \description{ Use browndog to get the met data for a specific model diff --git a/modules/data.atmosphere/man/db.site.lat.lon.Rd b/modules/data.atmosphere/man/db.site.lat.lon.Rd index 46aa66c45fd..9a2cfed78f9 100644 --- a/modules/data.atmosphere/man/db.site.lat.lon.Rd +++ b/modules/data.atmosphere/man/db.site.lat.lon.Rd @@ -2,12 +2,17 @@ % Please edit documentation in R/met.process.R \name{db.site.lat.lon} \alias{db.site.lat.lon} -\title{db.site.lat.lon} +\title{Look up lat/lon from siteid} \usage{ db.site.lat.lon(site.id, con) } +\arguments{ +\item{site.id}{BeTY ID of site to look up} + +\item{con}{database connection} +} \description{ -db.site.lat.lon +Look up lat/lon from siteid } \author{ Betsy Cowdery diff --git a/modules/data.atmosphere/man/met.process.stage.Rd b/modules/data.atmosphere/man/met.process.stage.Rd index fe3eaa0122a..cf056773ff7 100644 --- a/modules/data.atmosphere/man/met.process.stage.Rd +++ b/modules/data.atmosphere/man/met.process.stage.Rd @@ -7,7 +7,7 @@ met.process.stage(input.id, raw.id, con) } \arguments{ -\item{raw.id}{} +\item{con}{database connection} } \description{ met.process.stage diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index bf1308c9e6f..b0445673eaf 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -289,9 +289,6 @@ Undocumented arguments in documentation object 'closest_xy' Undocumented arguments in documentation object 'daygroup' ‘date’ ‘flx’ -Undocumented arguments in documentation object 'db.site.lat.lon' - ‘site.id’ ‘con’ - Undocumented arguments in documentation object 'debias_met' ‘outfolder’ ‘site_id’ ‘...’ @@ -364,7 +361,7 @@ Undocumented arguments in documentation object 'met.process' ‘browndog’ Undocumented arguments in documentation object 'met.process.stage' - 
‘input.id’ ‘con’ + ‘input.id’ ‘raw.id’ Undocumented arguments in documentation object 'met2CF.ALMA' ‘verbose’ From a84d5b1fda2d37494b4f12d223347943a470f110 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 20 Jun 2020 21:17:52 +0200 Subject: [PATCH 1119/2289] src_postgres is deprecated --- modules/data.atmosphere/R/met.process.R | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index d91cc9d8b87..33328c9a306 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -87,11 +87,7 @@ met.process <- function(site, input_met, start_date, end_date, model, } # set up connection and host information - con <- dplyr::src_postgres( - dbname = dbparms$dbname, - host = dbparms$host, - user = dbparms$user, - password = dbparms$password)$con + con <- PEcAn.DB::db.open(dbparms) on.exit(PEcAn.DB::db.close(con), add = TRUE) username <- ifelse(is.null(input_met$username), "pecan", input_met$username) From 581b4b2c96a2f71b57cff1208a591cd8d7f45d90 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Mon, 22 Jun 2020 12:29:28 +0000 Subject: [PATCH 1120/2289] docker setup for api --- apps/api/Dockerfile | 30 ++++++++++++++++++++++++++++++ docker-compose.yml | 27 ++++++++++++++++++++++++--- docker.sh | 16 ++++++++++++++++ 3 files changed, 70 insertions(+), 3 deletions(-) create mode 100644 apps/api/Dockerfile diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile new file mode 100644 index 00000000000..7073a3fc06b --- /dev/null +++ b/apps/api/Dockerfile @@ -0,0 +1,30 @@ +# this needs to be at the top, what version are we building +ARG IMAGE_VERSION="latest" + + +# -------------------------------------------------------------------------- +# PECAN FOR MODEL BASE IMAGE +# -------------------------------------------------------------------------- +FROM pecan/base:${IMAGE_VERSION} +LABEL maintainer="Tezan Sahu " + + +COPY ./ /api + +WORKDIR /api + +# -------------------------------------------------------------------------- +# Variables to store in docker image (most of them come from the base image) +# -------------------------------------------------------------------------- +ENV AUTH_REQ="yes" \ + HOST_ONLY="no" \ + PGHOST="postgres" + +# COMMAND TO RUN +RUN apt-get update +RUN apt-get install libsodium-dev -y +RUN Rscript -e "devtools::install_version('promises', '1.1.0', repos = 'http://cran.rstudio.com')" \ + && Rscript -e "devtools::install_version('webutils', '1.1', repos = 'http://cran.rstudio.com')" \ + && Rscript -e "devtools::install_github('rstudio/swagger')" \ + && Rscript -e "devtools::install_github('rstudio/plumber')" +CMD Rscript entrypoint.R \ No newline at end of file diff --git a/docker-compose.yml b/docker-compose.yml index e0f5661b0fb..f906e3aca71 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -310,9 +310,9 @@ services: volumes: - pecan:/data -# ---------------------------------------------------------------------- -# Shiny Apps -# ---------------------------------------------------------------------- + # ---------------------------------------------------------------------- + # Shiny Apps + # ---------------------------------------------------------------------- # PEcAn DB Sync visualization dbsync: image: pecan/shiny-dbsync:${PECAN_VERSION:-latest} @@ -327,6 +327,27 @@ services: - "traefik.port=3838" - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip:/dbsync/" + + # 
----------------------------------------------------------------------
+  # PEcAn API
+  # ----------------------------------------------------------------------
+  api:
+    image: pecan/api:${PECAN_VERSION:-latest}
+    networks:
+      - pecan
+    environment:
+      - PECAN_VERSION=${PECAN_VERSION:-1.7.0}
+      - PECAN_GIT_BRANCH=${PECAN_GIT_BRANCH:-develop}
+      - PECAN_GIT_CHECKSUM=${PECAN_GIT_CHECKSUM:-unknown}
+      - PECAN_GIT_DATE=${PECAN_GIT_DATE:-unknown}
+      - PGHOST=${PGHOST:-postgres}
+      - HOST_ONLY=${HOST_ONLY:-FALSE}
+      - AUTH_REQ=${AUTH_REQ:-TRUE}
+    labels:
+      - "traefik.enable=true"
+      - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/"
+      - "traefik.backend=api"
+
 # ----------------------------------------------------------------------
 # Name of network to be used by all containers
 # ----------------------------------------------------------------------

diff --git a/docker.sh b/docker.sh
index 87c48b5d13b..372093d7279 100755
--- a/docker.sh
+++ b/docker.sh
@@ -214,3 +214,19 @@ for version in git r136; do
     --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \
     models/sipnet
 done
+
+# --------------------------------------------------------------------------------
+# PEcAn Apps
+# --------------------------------------------------------------------------------
+
+# build API
+for x in api; do
+  ${DEBUG} docker build \
+    --tag pecan/$x:${IMAGE_VERSION} \
+    --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \
+    --build-arg PECAN_VERSION="${VERSION}" \
+    --build-arg PECAN_GIT_BRANCH="${PECAN_GIT_BRANCH}" \
+    --build-arg PECAN_GIT_CHECKSUM="${PECAN_GIT_CHECKSUM}" \
+    --build-arg PECAN_GIT_DATE="${PECAN_GIT_DATE}" \
+    apps/$x/
+done

From 3ee0d5a88bda65adb619c090d059e2841bc55c30 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Tue, 23 Jun 2020 18:37:05 +0000
Subject: [PATCH 1121/2289] API documentation for GET endpoints involving models, workflows, runs & general host details

---
 .../07_remote_access/01_pecan_api.Rmd | 44 +++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd

diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
new file mode 100644
index 00000000000..f00de81088e
--- /dev/null
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -0,0 +1,44 @@
+# PEcAn Project API
+
+## Introduction
+
+##### Welcome to the PEcAn Project API Documentation.
+
+The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. PEcAn can be considered an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models, allowing users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models.
+
+Our API allows users to remotely interact with the PEcAn servers and leverage the functionalities provided by the PEcAn Project. It has been designed to follow common RESTful API conventions. Most operations are performed using the HTTP methods: `GET` (retrieve) & `POST` (create).
+
+_Please note that the PEcAn Project API is currently under active development and it is possible that any information in this document is subject to change._
+
+## Authentication
+
+Authentication to the PEcAn API occurs via [Basic HTTP Auth](https://en.wikipedia.org/wiki/Basic_access_authentication).
The credentials for using the API are the same as those used to log into PEcAn & BetyDB. Here is how you use basic HTTP auth with `curl`:
+```
+$ curl --user '<username>:<password>' <api-endpoint>
+```
+
+Authentication also depends on the PEcAn server that the user interacts with. Some servers, at the time of deployment, have `AUTH_REQ = FALSE`, meaning that such servers do not require user authentication for the usage of the PEcAn APIs. Regardless of the type of server, the endpoints defined under the General section can be accessed without any authentication.
+
+## RESTful API Endpoints
+
+This page contains the high-level overviews & the functionalities offered by the different RESTful endpoints of the PEcAn API. For more detailed information including the request & response structure along with relevant schemas, you can visit the [OpenAPI 3 Documentation for the PEcAn API](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/PecanProject/pecan/299c240277d616463979cdbe09ef57731b8167fc/apps/api/pecanapi-spec.yml).
+
+The currently implemented functionalities include:
+
+* __General:__
+    * `GET /api/ping`: Ping the server to check if it is live
+    * `GET /api/status`: Obtain general information about PEcAn & the details of the database host
+
+* __Models:__
+    * `GET /api/models/`: Retrieve the details of environmental model(s) used by PEcAn (can be filtered based on `model_name` & `revision`)
+
+* __Workflows:__
+    * `GET /api/workflows/`: Retrieve a list of PEcAn workflows (can be filtered based on `model_id` & `site_id`)
+    * `POST /api/workflows/` *: Submit a new PEcAn workflow
+    * `GET /api/workflows/{id}`: Obtain the details of a particular PEcAn workflow by supplying its ID
+
+* __Runs:__
+    * `GET /api/runs`: Get the list of all the runs for a specified PEcAn Workflow
+    * `GET /api/runs/{id}`: Fetch the details of a specified PEcAn run
+
+_* indicates that the particular API is under development & may not be ready for use_

From be7098f045c1c650b2c8caa0133cf696b917358e Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Thu, 25 Jun 2020 17:58:29 +0000
Subject: [PATCH 1122/2289] api folder restructuring

---
 apps/api/Dockerfile           |  2 +-
 apps/api/{ => R}/auth.R       |  0
 apps/api/{ => R}/entrypoint.R |  2 +-
 apps/api/{ => R}/general.R    |  0
 apps/api/{ => R}/models.R     |  0
 apps/api/{ => R}/runs.R       |  0
 apps/api/{ => R}/workflows.R  |  0
 apps/api/README.md            | 20 ++++++++++++++++++++
 apps/api/test_pecanapi.sh     |  7 +++++++
 9 files changed, 29 insertions(+), 2 deletions(-)
 rename apps/api/{ => R}/auth.R (100%)
 rename apps/api/{ => R}/entrypoint.R (96%)
 rename apps/api/{ => R}/general.R (100%)
 rename apps/api/{ => R}/models.R (100%)
 rename apps/api/{ => R}/runs.R (100%)
 rename apps/api/{ => R}/workflows.R (100%)
 create mode 100644 apps/api/README.md
 create mode 100644 apps/api/test_pecanapi.sh

diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile
index 7073a3fc06b..abec70cff90 100644
--- a/apps/api/Dockerfile
+++ b/apps/api/Dockerfile
@@ -11,7 +11,7 @@ LABEL maintainer="Tezan Sahu "
 
 COPY ./ /api
 
-WORKDIR /api
+WORKDIR /api/R
 
 # --------------------------------------------------------------------------
 # Variables to store in docker image (most of them come from the base image)

diff --git a/apps/api/auth.R b/apps/api/R/auth.R
similarity index 100%
rename from apps/api/auth.R
rename to apps/api/R/auth.R

diff --git a/apps/api/entrypoint.R b/apps/api/R/entrypoint.R
similarity index 96%
rename from apps/api/entrypoint.R
rename to apps/api/R/entrypoint.R
index 9512f2f232c..fc3ed32d604 100644
--- a/apps/api/entrypoint.R
+++
b/apps/api/R/entrypoint.R @@ -32,6 +32,6 @@ root$mount("/api/runs", runs_pr) # The API server is bound to 0.0.0.0 on port 8000 # The Swagger UI for the API draws its source from the pecanapi-spec.yml file root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...) { - spec <- yaml::read_yaml("pecanapi-spec.yml") + spec <- yaml::read_yaml("../pecanapi-spec.yml") spec }) \ No newline at end of file diff --git a/apps/api/general.R b/apps/api/R/general.R similarity index 100% rename from apps/api/general.R rename to apps/api/R/general.R diff --git a/apps/api/models.R b/apps/api/R/models.R similarity index 100% rename from apps/api/models.R rename to apps/api/R/models.R diff --git a/apps/api/runs.R b/apps/api/R/runs.R similarity index 100% rename from apps/api/runs.R rename to apps/api/R/runs.R diff --git a/apps/api/workflows.R b/apps/api/R/workflows.R similarity index 100% rename from apps/api/workflows.R rename to apps/api/R/workflows.R diff --git a/apps/api/README.md b/apps/api/README.md new file mode 100644 index 00000000000..2e1649ee023 --- /dev/null +++ b/apps/api/README.md @@ -0,0 +1,20 @@ +# PEcAn RESTful API Server + +This folder contains the code & tests for PEcAn's RESTful API server. The API allows users to remotely interact with the PEcAn servers and leverage the functionalities provided by the PEcAn Project. It has been designed to follow common RESTful API conventions. Most operations are performed using the HTTP methods: `GET` (retrieve) & `POST` (create). + +#### For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/PecanProject/pecan/299c240277d616463979cdbe09ef57731b8167fc/apps/api/pecanapi-spec.yml). + +## Starting the PEcAn server: + +Follow the following steps to spin up the PEcAn API server locally: + +```bash +$ cd R +$ Rscript entrypoint.R +``` + +## Running the tests: + +```bash +$ ./test_pecanapi.sh +``` diff --git a/apps/api/test_pecanapi.sh b/apps/api/test_pecanapi.sh new file mode 100644 index 00000000000..35543046251 --- /dev/null +++ b/apps/api/test_pecanapi.sh @@ -0,0 +1,7 @@ +#!/bin/bash + +R R/entrypoint.R & +PID=$! + +R test/alltests.R +kill $PID \ No newline at end of file From b0e2f3b935da3aa716a69bbce8ab07bbf887f9d2 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Thu, 25 Jun 2020 14:44:02 -0700 Subject: [PATCH 1123/2289] Update DEV-INTRO.md --- DEV-INTRO.md | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index aba762612a4..aaccafe31c3 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -6,9 +6,23 @@ This is a minimal guide to getting started with PEcAn development under Docker. We recommend following the the [gitflow](https://nvie.com/posts/a-successful-git-branching-model/) workflow and working in your own [fork of the PEcAn repsitory](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). See the [PEcAn developer guide](book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd) for further details. In the `/scripts` folder there is a script called [syncgit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository. 
+To clone the PEcAn repository:
+
+```sh
+git clone git@github.com:pecanproject/pecan
+
+# alternatively, if you haven't set up ssh keys with GitHub
+# git clone https://github.com/PecanProject/pecan
+```
+
 ## Developing in Docker
 
-If running on a linux system it is recommended to add your user to the docker group. This will prevent you from having to use `sudo` to start the docker containers, and makes sure that any file that is written to a mounted volume is owned by you. This can be done using `sudo adduser ${USER} docker`.
+_Note for Linux users:_ add your user to the docker group. This will prevent you from having to use `sudo` to start the docker containers, and makes sure that any file that is written to a mounted volume is owned by you. This can be done using:
+```sh
+# for linux users
+sudo adduser ${USER} docker
+```
+
 To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development.

From 89c51eaaafb2aa3c0a717d371ee5f3933150504a Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Sat, 27 Jun 2020 11:52:19 +0530
Subject: [PATCH 1124/2289] updated tags field

---
 .github/workflows/book.yml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index 47915f83296..45f0b7fa387 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -2,6 +2,11 @@ on:
   push:
     branches:
       - master
+      - develop
+      - release/*
+    tags:
+      - v1
+      - v1.*
 
 
 name: renderbook

From 691b82346762327ea338edd9d8770baeda86361b Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Sat, 27 Jun 2020 12:41:29 +0530
Subject: [PATCH 1125/2289] Update book.yml

---
 .github/workflows/book.yml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index 45f0b7fa387..bca59413958 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -5,7 +5,6 @@ on:
       - develop
       - release/*
     tags:
-      - v1
       - v1.*
 
 

From ef3ceadeccb29ce7eebb34190033f45273393fe1 Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Sat, 27 Jun 2020 12:47:10 +0530
Subject: [PATCH 1126/2289] Update book.yml

---
 .github/workflows/book.yml | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index bca59413958..47915f83296 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -2,10 +2,6 @@ on:
   push:
     branches:
       - master
-      - develop
-      - release/*
-    tags:
-      - v1.*
 
 name: renderbook

From 84bef4753ded00715c04805842d103e1dd4870b1 Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Sat, 27 Jun 2020 12:53:46 +0530
Subject: [PATCH 1127/2289] Update book.yml

---
 .github/workflows/book.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index 47915f83296..234c20a04ff 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -3,7 +3,7 @@ on:
     branches:
       - master
 
-
+# render book
 name: renderbook
 
 jobs:

From b59e33d0f32418201e68222abdc0682376a0af11 Mon Sep 17 00:00:00
2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 27 Jun 2020 13:34:50 +0530 Subject: [PATCH 1128/2289] Update book.yml --- .github/workflows/book.yml | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 234c20a04ff..752985cac18 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -2,6 +2,11 @@ on: push: branches: - master + - develop + - release/* + tags: + - v1 + - v1* # render book name: renderbook @@ -19,6 +24,7 @@ jobs: run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")' + - uses: actions/upload-artifact@v2 with: name: _book From 56a97a83707d330c27feb0543d50906b1edc4701 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 27 Jun 2020 16:01:35 +0530 Subject: [PATCH 1129/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 752985cac18..c149e274d91 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -23,7 +23,7 @@ jobs: - name: Install rmarkdown run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book - run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")' + run: Rscript -e 'bookdown::render_book("index.Rmd")' - uses: actions/upload-artifact@v2 with: From 90c985220e60018d53aa3551f8f8f3663c7e9eb7 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 27 Jun 2020 16:37:03 +0530 Subject: [PATCH 1130/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index c149e274d91..2b6c92c87ef 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -23,7 +23,7 @@ jobs: - name: Install rmarkdown run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book - run: Rscript -e 'bookdown::render_book("index.Rmd")' + run: cd book_source && make - uses: actions/upload-artifact@v2 with: From cf24ccb05732d155aa4b301c76725dd0eee7ee6f Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 27 Jun 2020 16:43:19 +0530 Subject: [PATCH 1131/2289] Update book.yml --- .github/workflows/book.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 2b6c92c87ef..227dc5c54fe 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -24,6 +24,7 @@ jobs: run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book run: cd book_source && make + working-directory: ${{ github.repository_owner }}/pecan-sandbox/documentation/ - uses: actions/upload-artifact@v2 with: From 10f9b15dc2083036de6d4017cdf4d9eb560554f2 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 27 Jun 2020 17:00:26 +0530 Subject: [PATCH 1132/2289] Update book.yml --- .github/workflows/book.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 227dc5c54fe..ee0ddab338d 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -23,8 +23,8 @@ jobs: - name: Install rmarkdown run: 
Rscript -e 'install.packages(c("rmarkdown","bookdown"))'
     - name: Render Book
-      run: cd book_source && make
-      working-directory: ${{ github.repository_owner }}/pecan-sandbox/documentation/
+      run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")'
+      working-directory: /documentation/
     - uses: actions/upload-artifact@v2
       with:

From 63e45d0d65dcdf53fa5f5b1b74086b7fc25a1afc Mon Sep 17 00:00:00 2001
From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com>
Date: Sat, 27 Jun 2020 17:15:31 +0530
Subject: [PATCH 1133/2289] Update book.yml

---
 .github/workflows/book.yml | 2 --
 1 file changed, 2 deletions(-)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index ee0ddab338d..96b7508b909 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -24,8 +24,6 @@ jobs:
       run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))'
     - name: Render Book
       run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")'
-      working-directory: /documentation/
-
     - uses: actions/upload-artifact@v2
       with:
         name: _book

From 01b0fdf2c7507d03b987a1863fe232137fe0bf0f Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 27 Jun 2020 17:49:10 +0530
Subject: [PATCH 1134/2289] added l8gee2pecan_bands

---
 modules/data.remote/inst/l8gee2pecan_bands.py | 213 ++++++++++++++++++
 1 file changed, 213 insertions(+)
 create mode 100644 modules/data.remote/inst/l8gee2pecan_bands.py

diff --git a/modules/data.remote/inst/l8gee2pecan_bands.py b/modules/data.remote/inst/l8gee2pecan_bands.py
new file mode 100644
index 00000000000..5992b987288
--- /dev/null
+++ b/modules/data.remote/inst/l8gee2pecan_bands.py
@@ -0,0 +1,213 @@
+"""
+Extracts Landsat 8 surface reflectance band data from Google Earth Engine and saves it in a netCDF file
+
+Requires Python3
+
+If ROI is a Point, this function can be used for getting SR data from Landsat 7, 5 and 4 as well.
+
+Author: Ayush Prasad
+"""
+
+import ee
+import pandas as pd
+import datetime
+import geopandas as gpd
+import os
+import xarray as xr
+import numpy as np
+import re
+
+ee.Initialize()
+
+
+def l8gee2pecan_bands(geofile, outdir, start, end, ic, vi, qc, bands=["B5", "B4"]):
+    """
+    Extracts Landsat 8 SR band data from GEE
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    ic (str) -- image collection id of the Landsat SR product from Google Earth Engine
+
+    vi (bool) -- set to True if NDVI needs to be calculated
+
+    qc (bool) -- uses the cloud masking function if set to True
+
+    bands (list of str) -- bands to be retrieved. Default: B5, B4
+
+    Returns
+    -------
+    Nothing:
+            output netCDF is saved in the specified directory.
+
+    """
+
+    # read in the geojson file
+    df = gpd.read_file(geofile)
+
+    # scale (int) Default: 30
+    scale = 30
+
+    def reduce_region(image):
+        """
+        Reduces the selected region
+        currently set to mean, can be changed as per requirements.
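+        e.g. swapping ee.Reducer.mean() for ee.Reducer.median() in the body
+        below would reduce to per-band medians instead (illustrative
+        alternative, not exercised here).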
+ """ + stat_dict = image.reduceRegion(ee.Reducer.mean(), geo, scale) + sensingtime = image.get("SENSING_TIME") + return ee.Feature(None, stat_dict).set("sensing_time", sensingtime) + + def mask(image): + """ + Masks clouds and cloud shadows using the pixel_qa band + Can be configured as per requirements + """ + clear = image.select("pixel_qa") + return image.updateMask(clear) + + # create image collection depending upon the qc choice + if qc == True: + landsat = ( + ee.ImageCollection(ic) + .filterDate(start, end) + .map(mask) + .sort("system:time_start", True) + ) + + else: + landsat = ( + ee.ImageCollection(ic) + .filterDate(start, end) + .sort("system:time_start", True) + ) + + # If NDVI is to be calculated select the appropriate bands and create the image collection + if vi == True: + + def calcNDVI(image): + """ + Calculates NDVI and adds the band to the image. + """ + ndvi = image.normalizedDifference(["B5", "B4"]).rename("NDVI") + return image.addBands(ndvi) + + # map NDVI to the image collection and select the NDVI band + landsat = landsat.map(calcNDVI).select("NDVI") + + # select the user specified bands if NDVI is not be calculated + else: + landsat = landsat.select(bands) + + # if ROI is a point + if (df.geometry.type == "Point").bool(): + # extract the coordinates + lon = float(df.geometry.x) + lat = float(df.geometry.y) + # create geometry + geo = ee.Geometry.Point(lon, lat) + + # get the data + l_data = landsat.filterBounds(geo).getRegion(geo, scale).getInfo() + # put the data inside a list of dictionary + l_data_dict = [dict(zip(l_data[0], values)) for values in l_data[1:]] + + def getdate(filename): + """ + calculates Date from the landsat id + """ + string = re.compile( + r"(?PLC08|LE07|LT05|LT04)_(?P\d{6})_(?P\d{8})" + ) + x = string.search(filename) + d = datetime.datetime.strptime(x.group("date"), "%Y%m%d").date() + return d + + # pop out unnecessary keys and add date + for d in l_data_dict: + d.pop("longitude", None) + d.pop("latitude", None) + d.pop("time", None) + d.update(time=getdate(d["id"])) + + # Put data in a dataframe + datadf = pd.DataFrame(l_data_dict) + # converting date to the numpy date format + datadf["time"] = datadf["time"].apply(lambda x: np.datetime64(x)) + + # if ROI is a polygon + elif (df.geometry.type == "Polygon").bool(): + # extract coordinates + area = [ + list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) + ] + geo = ee.Geometry.Polygon(area) + + # get the data + l_data = landsat.filterBounds(geo).map(reduce_region).getInfo() + + def l8_fc2dict(fc): + """ + Converts feature collection to dictionary form. 
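+            Illustrative example (hypothetical values):
+            {"features": [{"id": "LC08_...", "properties": {"NDVI": 0.71}}]}
+            becomes
+            [{"NDVI": 0.71, "id": "LC08_..."}]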
+ """ + + def eachf2dict(f): + id = f["id"] + out = f["properties"] + out.update(id=id) + return out + + out = [eachf2dict(x) for x in fc["features"]] + return out + + # convert to dictionary + l_data_dict = l8_fc2dict(l_data) + # create a dataframe from the dictionary + datadf = pd.DataFrame(l_data_dict) + # convert date to a more readable format + datadf["sensing_time"] = datadf["sensing_time"].apply( + lambda x: datetime.datetime.strptime(x.split(".")[0], "%Y-%m-%dT%H:%M:%S") + ) + # if ROI type is not a Point ot Polygon + else: + raise ValueError("geometry choice not supported") + + site_name = df[df.columns[0]].iloc[0] + AOI = str(df[df.columns[1]].iloc[0]) + # convert the dataframe to an xarray dataset + tosave = xr.Dataset( + datadf, + attrs={ + "site_name": site_name, + "start_date": start, + "end_date": end, + "AOI": AOI, + "sensor": ic, + }, + ) + + # if specified path does not exist create it + if not os.path.exists(outdir): + os.makedirs(outdir, exist_ok=True) + + # convert the dataset to a netCDF file and save it + file_name = "l8ndvi" if vi == True else "l8bands" + tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc")) + + +if __name__ == "__main__": + l8gee2pecan_bands( + "./satellitetools/test.geojson", + "./outing", + "2018-01-01", + "2018-12-31", + "LANDSAT/LC08/C01/T1_SR", + True, + True, + ) From ce7a9a757ff6c22f9bbdb5476444a4f4d2f9b1d3 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 27 Jun 2020 18:04:52 +0530 Subject: [PATCH 1135/2289] small fix --- modules/data.remote/inst/l8gee2pecan_bands.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/inst/l8gee2pecan_bands.py b/modules/data.remote/inst/l8gee2pecan_bands.py index 5992b987288..e5dfeb09841 100644 --- a/modules/data.remote/inst/l8gee2pecan_bands.py +++ b/modules/data.remote/inst/l8gee2pecan_bands.py @@ -204,7 +204,7 @@ def eachf2dict(f): if __name__ == "__main__": l8gee2pecan_bands( "./satellitetools/test.geojson", - "./outing", + "./out/", "2018-01-01", "2018-12-31", "LANDSAT/LC08/C01/T1_SR", From 932f931bb5a8ae78cba2f1a69ca474ef99d8a15e Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sat, 27 Jun 2020 18:37:40 +0530 Subject: [PATCH 1136/2289] updated bookdown bug --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 96b7508b909..a633707d5f1 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -23,7 +23,7 @@ jobs: - name: Install rmarkdown run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))' - name: Render Book - run: cd book_source && Rscript -e 'bookdown::render_book("index.Rmd")' + run: cd book_source && Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd")' - uses: actions/upload-artifact@v2 with: name: _book From 3464bffad024cda5f6baeb4c8e8129fb3195646b Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Sat, 27 Jun 2020 22:36:55 +0530 Subject: [PATCH 1137/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 5aafd00c247..cba01d836c2 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -27,7 +27,7 @@ jobs: cat names.txt | tr -d 
'[]' > changed_files.txt text=$(cat changed_files.txt) IFS=',' read -ra ids <<< "$text" - for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done + for i in "${ids[@]}"; do if [ "$i" == *.R\" || "$i" == *.Rmd\" ]; then echo "$i" >> files_to_style.txt; fi; done - name: Upload artifacts uses: actions/upload-artifact@v1 with: From 5f7617045ece7546ae751995e2044a489adfda83 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Sat, 27 Jun 2020 22:37:10 +0530 Subject: [PATCH 1138/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index cba01d836c2..035fa14e6e7 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -39,7 +39,7 @@ jobs: run: | git add \*.R git add \*.Rmd - if [[ $(git diff --name-only --cached) != "" ]]; then git commit -m 'automated syle update' ; fi + if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated syle update' ; fi - uses: r-lib/actions/pr-push@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} From c4b1bd17f80eb7a09aa5a563b5b6359bc25c8e65 Mon Sep 17 00:00:00 2001 From: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> Date: Sat, 27 Jun 2020 22:37:23 +0530 Subject: [PATCH 1139/2289] Update .github/workflows/styler-actions.yml Co-authored-by: Chris Black --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 035fa14e6e7..206d0c3a080 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -72,7 +72,7 @@ jobs: git config --global user.email "pecan_bot@example.com" git config --global user.name "PEcAn stylebot" git add \*.Rd - if [[ $(git diff --name-only --cached) != "" ]]; then git commit -m 'automated documentation update' ; fi + if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated documentation update' ; fi - uses: r-lib/actions/pr-push@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} From acf68510bd6d5688b0cd0cb1e0717dd4ef1d0f48 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 27 Jun 2020 19:42:53 +0000 Subject: [PATCH 1140/2289] updated docs with examples --- apps/api/R/workflows.R | 5 +- apps/api/pecanapi-spec.yml | 84 ++-- .../07_remote_access/01_pecan_api.Rmd | 437 +++++++++++++++++- 3 files changed, 483 insertions(+), 43 deletions(-) diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index fcdc78795ec..f6007a342e7 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -117,11 +117,10 @@ getWorkflowDetails <- function(id, res){ } else { if(is.na(qry_res$properties)){ - res <- list(id = id, modelid = qry_res$model_id, siteid = qry_res$site_id) + res <- list(id = id, properties = list(modelid = qry_res$model_id, siteid = qry_res$site_id)) } else{ - res <- jsonlite::parse_json(qry_res$properties[[1]]) - res$id <- id + res <- list(id = id, properties = jsonlite::parse_json(qry_res$properties[[1]])) } return(res) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index c4adf652f47..a5d0ffc73e8 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn Tezan VM - url: 
https://pecan-tezan-rstudio.ncsa.illinois.edu/p/eb9c2d61 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/912db446 - description: Localhost url: http://127.0.0.1:8000 @@ -130,10 +130,13 @@ paths: content: application/json: schema: - type: array - items: - type: object - $ref: '#/components/schemas/Model' + type: object + properties: + models: + type: array + items: + type: object + $ref: '#/components/schemas/Model' '401': description: Authentication required '403': @@ -391,38 +394,45 @@ components: properties: id: type: string - pfts: - type: array - items: - type: string - input_met: - type: string - modelid: - type: string - siteid: - type: string - sitename: - type: string - sitegroupid: - type: string - start: - type: string - end: - type: string - variables: - type: string - sensitivity: - type: string - email: - type: string - notes: - type: string - runs: - type: string - pecan_edit: - type: string - status: - type: string + "properties": + type: object + properties: + pfts: + type: array + items: + type: string + input_met: + type: string + modelid: + type: string + siteid: + type: string + sitename: + type: string + sitegroupid: + type: string + start: + type: string + end: + type: string + variables: + type: string + sensitivity: + type: string + email: + type: string + notes: + type: string + runs: + type: string + pecan_edit: + type: string + status: + type: string + fluxusername: + type: string + input_poolinitcond: + type: string securitySchemes: basicAuth: type: http diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index f00de81088e..245055bb38a 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -21,7 +21,9 @@ Authentication also depends on the PEcAn server that the user interacts with. So ## RESTful API Endpoints -This page contains the high-level overviews & the functionalities offered by the different RESTful endpoints of the PEcAn API. For more detailed information including the request & response structure along with relevant schemas, you can visit the [OpenAPI 3 Documentation for the PEcAn API](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/PecanProject/pecan/299c240277d616463979cdbe09ef57731b8167fc/apps/api/pecanapi-spec.yml). +This page contains the high-level overviews & the functionalities offered by the different RESTful endpoints of the PEcAn API. 
+ +__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/PecanProject/pecan/299c240277d616463979cdbe09ef57731b8167fc/apps/api/pecanapi-spec.yml).__ The currently implemented functionalities include: @@ -30,10 +32,10 @@ The currently implemented functionalities include: * `GET /api/status`: Obtain general information about PEcAn & the details of the database host * __Models:__ - * `GET /api/models/`: Retrieve the details of environmental model(s) used by PEcAn (can be filtered based on `model_name` & `revision`) + * `GET /api/models/`: Retrieve the details of model(s) used by PEcAn * __Workflows:__ - * `GET /api/workflows/`: Retrieve a list of PEcAn workflows (can be filtered based on `model_id` & `site_id`) + * `GET /api/workflows/`: Retrieve a list of PEcAn workflows * `POST /api/workflows/` *: Submit a new PEcAn workflow * `GET /api/workflows/{id}`: Obtain the details of a particular PEcAn workflow by supplying its ID @@ -42,3 +44,432 @@ The currently implemented functionalities include: * `GET /api/runs/{id}`: Fetch the details of a specified PEcAn run _* indicates that the particular API is under development & may not be ready for use_ + + +## Examples: + +#### Prerequisites to interact with the PEcAn API Server {.tabset .tabset-pills} + +##### R Packages +* [httr](https://cran.r-project.org/web/packages/httr/index.html) +* [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html) + +##### Python Packages +* [requests](https://requests.readthedocs.io/en/master/) +* [json](https://docs.python.org/3/library/json.html) + +#### {-} + + +Following are some example snippets to call the PEcAn API endpoints: + +### `GET /api/ping` {.tabset .tabset-pills} + +#### R Snippet + +```R +res <- httr::GET("http://localhost:8000/api/ping") +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $request +## [1] "ping" + +## $response +## [1] "pong" +``` +#### Python Snippet + +```python +response = requests.get("http://localhost:8000/api/ping") +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "request": "ping", +## "response": "pong" +## } +``` +### {-} + + +### `GET /api/status` {.tabset .tabset-pills} + +#### R Snippet + +```R +res <- httr::GET("http://localhost:8000/api/status") +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $pecan_details$version +## [1] "1.7.0" + +## $pecan_details$branch +## [1] "develop" + +## $pecan_details$gitsha1 +## [1] "unknown" + +## $host_details$hostid +## [1] 99 + +## $host_details$hostname +## [1] "" + +## $host_details$start +## [1] 99000000000 + +## $host_details$end +## [1] 99999999999 + +## $host_details$sync_url +## [1] "" + +## $host_details$sync_contact +## [1] "" +``` + +#### Python Snippet + +```python +response = requests.get("http://localhost:8000/api/status") +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "pecan_details": { +## "version": "1.7.0", +## "branch": "develop", +## "gitsha1": "unknown" +## }, +## "host_details": { +## "hostid": 99, +## "hostname": "", +## "start": 99000000000, +## "end": 99999999999, +## "sync_url": "", +## "sync_contact": "" +## } +## } +``` + +### {-} + +### `GET /api/models/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get model(s) with `model_name` = SIPNET & `revision` = ssr +res <- httr::GET( + "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr", + authenticate("carya", "illinois") + ) 
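+# res$content is raw bytes; rawToChar() decodes it to a JSON string before jsonlite parses it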
+print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $models +## model_id model_name revision modeltype_id model_type +## 1 1000000022 SIPNET ssr 3 SIPNET +## ... +``` + +#### Python Snippet + +```python +# Get model(s) with `model_name` = SIPNET & `revision` = ssr +response = requests.get( + "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "models": [ +## { +## "model_id": "1000000022", +## "model_name": "SIPNET", +## "revision": "ssr", +## "modeltype_id": 3, +## "model_type": "SIPNET" +## }, +## ... +## ] +## } +``` + +### {-} + +### `GET /api/workflows/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get workflow(s) that use `model_id` = 1000000022 [SIPNET] & `site_id` = 676 [Willow Creek (US-WCr)] +res <- httr::GET( + "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676", + authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $workflows +## id properties +## +## 1 1000009172 +## ... + +## $count +## [1] 5 +``` + +#### Python Snippet + +```python +# Get workflow(s) that use `model_id` = 1000000022 [SIPNET] & `site_id` = 676 [Willow Creek (US-WCr)] +response = requests.get( + "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "workflows": [ +## { +## "id": 1000009172, +## "properties": { +## "end": "2004/12/31", +## "pft": [ +## "soil.IF", +## "temperate.deciduous.IF" +## ], +## "email": "", +## "notes": "", +## "start": "2004/01/01", +## "siteid": "676", +## "modelid": "1000000022", +## "hostname": "test-pecan.bu.edu", +## "sitename": "WillowCreek(US-WCr)", +## "input_met": "AmerifluxLBL.SIPNET", +## "pecan_edit": "on", +## "sitegroupid": "1000000022", +## "fluxusername": "pecan", +## "input_poolinitcond": "-1" +## } +## }, +## ... 
+## ], +## "count": 5 +## } +``` + +### {-} + +### `GET /api/workflows/{id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get details of workflow with `id` = '1000009172' +res <- httr::GET( + "http://localhost:8000/api/workflows/1000009172", + authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $id +## [1] "1000009172" + +## $properties +## $properties$end +## [1] "2004/12/31" + +## $properties$pft +## $properties$pft[[1]] +## [1] "soil.IF" + +## $properties$pft[[2]] +## [1] "temperate.deciduous.IF" + + +## $properties$email +## [1] "" + +## $properties$notes +## [1] "" + +## $properties$start +## [1] "2004/01/01" + +## $properties$siteid +## [1] "676" + +## $properties$modelid +## [1] "1000000022" + +## $properties$hostname +## [1] "test-pecan.bu.edu" + +## $properties$sitename +## [1] "WillowCreek(US-WCr)" + +## $properties$input_met +## [1] "AmerifluxLBL.SIPNET" + +## $properties$pecan_edit +## [1] "on" + +## $properties$sitegroupid +## [1] "1000000022" + +## $properties$fluxusername +## [1] "pecan" + +## $properties$input_poolinitcond +## [1] "-1" +``` + +#### Python Snippet + +```python +# Get details of workflow with `id` = '1000009172' +response = requests.get( + "http://localhost:8000/api/workflows/1000009172", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "id": "1000009172", +## "properties": { +## "end": "2004/12/31", +## "pft": [ +## "soil.IF", +## "temperate.deciduous.IF" +## ], +## "email": "", +## "notes": "", +## "start": "2004/01/01", +## "siteid": "676", +## "modelid": "1000000022", +## "hostname": "test-pecan.bu.edu", +## "sitename": "WillowCreek(US-WCr)", +## "input_met": "AmerifluxLBL.SIPNET", +## "pecan_edit": "on", +## "sitegroupid": "1000000022", +## "fluxusername": "pecan", +## "input_poolinitcond": "-1" +## } +## } +``` + +### {-} + +### `GET /api/runs/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get list of run belonging to the workflow with `workflow_id` = '1000009172' +res <- httr::GET( + "http://localhost:8000/api/runs/?workflow_id=1000009172", + authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $runs +## runtype ensemble_id workflow_id id model_id site_id parameter_list start_time +## finish_time +## 1 ensemble 1000017624 1000009172 1002042201 1000000022 796 ensemble=1 2005-01-01 +## 00:00:00 2011-12-31 00:00:00 +## ... + +## $count +## [1] 50 +``` + +#### Python Snippet + +```python +# Get list of run belonging to the workflow with `workflow_id` = '1000009172' +response = requests.get( + "http://localhost:8000/api/runs/?workflow_id=1000009172", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "runs": [ +## { +## "runtype": "ensemble", +## "ensemble_id": 1000017624, +## "workflow_id": 1000009172, +## "id": 1002042201, +## "model_id": 1000000022, +## "site_id": 796, +## "parameter_list": "ensemble=1", +## "start_time": "2005-01-01", +## "finish_time": "2011-12-31" +## }, +## ... 
+## ] +## "count": 50, +## "next_page": "http://localhost:8000/api/workflows/?workflow_id=1000009172&offset=50&limit=50" +## } +``` + +### {-} + +### `GET /api/runs/{id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get details of run belonging with `id` = '1002042201' +res <- httr::GET( + "http://localhost:8000/api/runs/1002042201", + authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## runtype ensemble_id workflow_id id model_id site_id start_time finish_time parameter_list +## 1 ensemble 1000017624 1000009172 1002042201 1000000022 796 2005-01-01 2011-12-31 ensemble=1 +## created_at updated_at started_at finished_at +## 1 2018-04-11 22:20:31 2018-04-11 22:22:08 2018-04-11 18:21:57 2018-04-11 18:22:08 +``` + +#### Python Snippet + +```python +# Get details of run with `id` = '1002042201' +response = requests.get( + "http://localhost:8000/api/runs/1002042201", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## [ +## { +## "runtype": "ensemble", +## "ensemble_id": 1000017624, +## "workflow_id": 1000009172, +## "id": 1002042201, +## "model_id": 1000000022, +## "site_id": 796, +## "parameter_list": "ensemble=1", +## "start_time": "2005-01-01", +## "finish_time": "2011-12-31" +## } +## ] +``` + +### {-} From b7de3a672bed0e3901488c0f3ef5161e976b9db2 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 27 Jun 2020 22:39:18 +0200 Subject: [PATCH 1141/2289] Update .github/workflows/styler-actions.yml --- .github/workflows/styler-actions.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 206d0c3a080..ff3869db622 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -22,12 +22,13 @@ jobs: Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")' Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")' - name: string operations + shell: bash run: | echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt cat names.txt | tr -d '[]' > changed_files.txt text=$(cat changed_files.txt) IFS=',' read -ra ids <<< "$text" - for i in "${ids[@]}"; do if [ "$i" == *.R\" || "$i" == *.Rmd\" ]; then echo "$i" >> files_to_style.txt; fi; done + for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done - name: Upload artifacts uses: actions/upload-artifact@v1 with: From 71e6fb72f6c23f5a4cce8528b6f558131b2a0670 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 27 Jun 2020 21:33:39 +0000 Subject: [PATCH 1142/2289] updated swagger docs link in documentation & readme --- apps/api/README.md | 2 +- book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/apps/api/README.md b/apps/api/README.md index 2e1649ee023..69ae57643fc 100644 --- a/apps/api/README.md +++ b/apps/api/README.md @@ -2,7 +2,7 @@ This folder contains the code & tests for PEcAn's RESTful API server. The API allows users to remotely interact with the PEcAn servers and leverage the functionalities provided by the PEcAn Project. It has been designed to follow common RESTful API conventions. Most operations are performed using the HTTP methods: `GET` (retrieve) & `POST` (create). 
-#### For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/PecanProject/pecan/299c240277d616463979cdbe09ef57731b8167fc/apps/api/pecanapi-spec.yml). +#### For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml). ## Starting the PEcAn server: diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 245055bb38a..4a69f62d495 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -23,7 +23,7 @@ Authentication also depends on the PEcAn server that the user interacts with. So This page contains the high-level overviews & the functionalities offered by the different RESTful endpoints of the PEcAn API. -__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/PecanProject/pecan/299c240277d616463979cdbe09ef57731b8167fc/apps/api/pecanapi-spec.yml).__ +__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml).__ The currently implemented functionalities include: From c4cbe5729261d7e8859334d847b6191af4a3d59e Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 28 Jun 2020 10:38:45 +0530 Subject: [PATCH 1143/2289] changed conditional for filename --- modules/data.remote/inst/l8gee2pecan_bands.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.remote/inst/l8gee2pecan_bands.py b/modules/data.remote/inst/l8gee2pecan_bands.py index e5dfeb09841..2394bdb2405 100644 --- a/modules/data.remote/inst/l8gee2pecan_bands.py +++ b/modules/data.remote/inst/l8gee2pecan_bands.py @@ -100,10 +100,12 @@ def calcNDVI(image): # map NDVI to the image collection and select the NDVI band landsat = landsat.map(calcNDVI).select("NDVI") + file_name = "_l8ndvi" # select the user specified bands if NDVI is not be calculated else: landsat = landsat.select(bands) + file_name = "_l8bands" # if ROI is a point if (df.geometry.type == "Point").bool(): @@ -197,7 +199,6 @@ def eachf2dict(f): os.makedirs(outdir, exist_ok=True) # convert the dataset to a netCDF file and save it - file_name = "l8ndvi" if vi == True else "l8bands" tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc")) From dcf27a1e9cca8495e25be2fbcbca6a0f26747265 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 28 Jun 2020 09:58:51 +0200 Subject: [PATCH 1144/2289] work around "Input files not all in same directory, please supply explicit wd" from bookdown 0.20 --- book_source/Makefile | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/book_source/Makefile b/book_source/Makefile index 5d57f169a91..1411d6fcd75 100755 --- a/book_source/Makefile +++ b/book_source/Makefile @@ -23,7 +23,10 @@ DEMO_1_FIGS := $(wildcard ../documentation/tutorials/01_Demo_Basic_Run/extfiles/ build: bkdcheck mkdir -p extfiles cp -f ${DEMO_1_FIGS} extfiles/ - Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::gitbook")' + # options call is a workaround for a behavior change and probable bug in bookdown 0.20: + # https://stackoverflow.com/a/62583304 + # Remove when this is 
fixed in Bookdown + Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd", "bookdown::gitbook")' clean: rm -rf ../book/* @@ -32,4 +35,4 @@ deploy: build ./deploy.sh pdf: bkdcheck - Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::pdf_book")' + Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd", "bookdown::pdf_book")' From a868addb471e5188c370f97c64d93c551f2e893c Mon Sep 17 00:00:00 2001 From: runner Date: Sun, 28 Jun 2020 09:29:08 +0000 Subject: [PATCH 1145/2289] automated syle update --- models/biocro/R/call_biocro.R | 81 +++++++++++++++++++---------------- 1 file changed, 45 insertions(+), 36 deletions(-) diff --git a/models/biocro/R/call_biocro.R b/models/biocro/R/call_biocro.R index dfd8e636368..226bcbca1f8 100644 --- a/models/biocro/R/call_biocro.R +++ b/models/biocro/R/call_biocro.R @@ -10,7 +10,7 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, # Check that all variables are present in the expected order -- # BioGro < 1.0 accesses weather vars by position and DOES NOT check headers. expected_cols <- c("year", "doy", "hour", "[Ss]olar", "Temp", "RH", "WS|windspeed", "precip") - if(!all(mapply(grepl, expected_cols, colnames(WetDat)))){ + if (!all(mapply(grepl, expected_cols, colnames(WetDat)))) { PEcAn.logger::logger.severe("Format error in weather file: Columns must be (", expected_cols, "), in that order.") } day1 <- min(WetDat$doy) # data already subset upstream, but BioCro 0.9 assumes a full year if day1/dayn are unset @@ -26,39 +26,46 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, # If not, rescale day1 and dayn to be relative to the start of the input. # Scaling is derived by inverting Biocro's day->index equations. biocro_checks_doy <- tryCatch( - {m <- BioCro::BioGro( - WetDat = matrix(c(0,10,0,0,0,0,0,0), nrow = 1), - day1 = 10, dayn = 10, timestep = 24); - inherits(m, "BioGro") }, - error = function(e){FALSE}) - if (!biocro_checks_doy && min(WetDat[,"doy"])>1) { - if (!is.null(day1)){ + { + m <- BioCro::BioGro( + WetDat = matrix(c(0, 10, 0, 0, 0, 0, 0, 0), nrow = 1), + day1 = 10, dayn = 10, timestep = 24 + ) + inherits(m, "BioGro") + }, + error = function(e) { + FALSE + } + ) + if (!biocro_checks_doy && min(WetDat[, "doy"]) > 1) { + if (!is.null(day1)) { # Biocro calculates line number as `indes1 <- (day1 - 1) * 24` - indes1 <- Position(function(x)x==day1, WetDat[,"doy"]) - day1 <- indes1/24 + 1 + indes1 <- Position(function(x) x == day1, WetDat[, "doy"]) + day1 <- indes1 / 24 + 1 } - if (!is.null(dayn)){ + if (!is.null(dayn)) { # Biocro calculates line number as `indesn <- (dayn) * 24` - indesn <- Position(function(x)x==dayn, WetDat[,"doy"], right = TRUE) - dayn <- indesn/24 + indesn <- Position(function(x) x == dayn, WetDat[, "doy"], right = TRUE) + dayn <- indesn / 24 } } - coppice.interval = config$pft$coppice.interval - if(is.null(coppice.interval)) { - coppice.interval = 1 # i.e. harvest every year + coppice.interval <- config$pft$coppice.interval + if (is.null(coppice.interval)) { + coppice.interval <- 1 # i.e. 
harvest every year } - if (genus == "Saccharum") { - #probably should be handled like coppice shrubs or perennial grasses + if (genus == "Saccharum") { + # probably should be handled like coppice shrubs or perennial grasses tmp.result <- BioCro::caneGro( WetDat = WetDat, lat = lat, - soilControl = l2n(config$pft$soilControl)) + soilControl = l2n(config$pft$soilControl) + ) # Addin Rhizome and Grain to avoid error in subsequent script processing results tmp.result$Rhizome <- 0 tmp.result$Grain <- 0 - } else if (genus %in% c("Salix", "Populus")) {#coppice trees / shrubs + } else if (genus %in% c("Salix", "Populus")) { # coppice trees / shrubs if (year_in_run == 1) { iplant <- config$pft$iPlantControl } else { @@ -66,13 +73,13 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, iplant$iRoot <- data.table::last(tmp.result$Root) iplant$iStem <- data.table::last(tmp.result$Stem) - if ((year_in_run - 1)%%coppice.interval == 0) { + if ((year_in_run - 1) %% coppice.interval == 0) { # coppice when remainder = 0 HarvestedYield <- round(data.table::last(tmp.result$Stem) * 0.95, 2) - } else if ((year_in_run - 1)%%coppice.interval == 1) { + } else if ((year_in_run - 1) %% coppice.interval == 1) { # year after coppice iplant$iStem <- iplant$iStem * 0.05 - } # else { # do nothing if neither coppice year nor year following + } # else { # do nothing if neither coppice year nor year following } ## run willowGro tmp.result <- BioCro::willowGro( @@ -86,9 +93,9 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, canopyControl = l2n(config$pft$canopyControl), willowphenoControl = l2n(config$pft$phenoParms), seneControl = l2n(config$pft$seneControl), - photoControl = l2n(config$pft$photoParms)) - - } else if (genus %in% c("Miscanthus", "Panicum")) {#perennial grasses + photoControl = l2n(config$pft$photoParms) + ) + } else if (genus %in% c("Miscanthus", "Panicum")) { # perennial grasses if (year_in_run == 1) { iRhizome <- config$pft$iPlantControl$iRhizome } else { @@ -105,9 +112,9 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, phenoControl = l2n(config$pft$phenoParms), seneControl = l2n(config$pft$seneControl), iRhizome = as.numeric(iRhizome), - photoControl = config$pft$photoParms) - - } else if (genus %in% c("Sorghum", "Setaria")) { #annual grasses + photoControl = config$pft$photoParms + ) + } else if (genus %in% c("Sorghum", "Setaria")) { # annual grasses # Perennial Sorghum exists but is not a major crop # assume these are replanted from seed each year # https://landinstitute.org/our-work/perennial-crops/perennial-sorghum/ @@ -118,19 +125,20 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, iRhizome = as.numeric(iplant$iRhizome), iRoot = as.numeric(iplant$iRoot), iStem = as.numeric(iplant$iStem), - iLeaf = as.numeric(iplant$iLeaf), + iLeaf = as.numeric(iplant$iLeaf), day1 = day1, dayn = dayn, soilControl = l2n(config$pft$soilControl), canopyControl = l2n(config$pft$canopyControl), phenoControl = l2n(config$pft$phenoParms), seneControl = l2n(config$pft$seneControl), - photoControl = l2n(config$pft$photoParms)) - + photoControl = l2n(config$pft$photoParms) + ) } else { PEcAn.logger::logger.severe( "Genus '", genus, "' is not supported by PEcAn.BIOCRO when using BioCro 0.9x.", - "Supported genera: Saccharum, Salix, Populus, Sorghum, Miscanthus, Panicum, Setaria") + "Supported genera: Saccharum, Salix, Populus, Sorghum, Miscanthus, Panicum, Setaria" + ) } names(tmp.result) <- sub("DayofYear", "doy", names(tmp.result)) names(tmp.result) <- sub("Hour", "hour", names(tmp.result)) 
@@ -146,7 +154,6 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run, call_biocro_1 <- function(WetDat, genus, year_in_run, config, lat, lon, tmp.result, HarvestedYield) { - if (year_in_run == 1) { initial_values <- config$pft$initial_values } else { @@ -160,13 +167,15 @@ call_biocro_1 <- function(WetDat, genus, year_in_run, initial_values = initial_values, parameters = config$pft$parameters, varying_parameters = WetDat, - modules = config$pft$modules) + modules = config$pft$modules + ) tmp.result <- dplyr::rename(tmp.result, ThermalT = "TTc", LAI = "lai", SoilEvaporation = "soil_evaporation", - CanopyTrans = "canopy_transpiration") + CanopyTrans = "canopy_transpiration" + ) tmp.result$AboveLitter <- tmp.result$LeafLitter + tmp.result$StemLitter tmp.result$BelowLitter <- tmp.result$RootLitter + tmp.result$RhizomeLitter From d9cde51c408eee225b2e0b8728e3acec07190620 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 29 Jun 2020 08:59:40 -0500 Subject: [PATCH 1146/2289] more explicit about commands for windows/linux/mac --- DEV-INTRO.md | 137 ++++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 114 insertions(+), 23 deletions(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index aaccafe31c3..b0156e7399f 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -26,16 +26,26 @@ sudo adduser ${USER} docker`. To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yaml](docker-compose.dev.yaml) file that adds additional containers, and changes some services to make it easier for development. -By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e. `cp docker-compose.dev.yml docker-compose.override.yml`. You can now use the command `docker-compose` to launch all the containers setup for development. **The rest of this document assumes you have done this step.** +By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment, in our case we will use it to setup the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e. : -If you want to start from scratch and remove all old data, but keep your pecan checked out folder, you can remove the folders where you have written the data (see `folders` below). You will also need to remove any of the docker managed volumes. To see all volumes you can do `docker volume ls -q -f name=pecan`. If you are sure, you can either remove them one by one, or use `docker volume rm $(docker volume ls -q -f name=pecan)` to remove them all. **THIS DESTROYS ALL DATA IN DOCKER MANAGED VOLUMES.**. If you changed the path in `docker-compose.override.yml` you will need to remove that volume as well. This will however not destroy the data since it is already on your local machine. 
+For Linux/MacOSX + +``` +cp docker-compose.dev.yml docker-compose.override.yml +``` + +For Windows + +``` +copy docker-compose.dev.yml docker-compose.override.yml +``` + +You can now use the command `docker-compose` to work with the containers setup for development. **The rest of this document assumes you have done this step.** ### First time setup The steps in this section only need to be done the fist time you start working with the stack in docker. After this is done you can skip these steps. You can find more detail about the docker commands in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html). -Before doing anything it is recommended to make sure you have the lastest docker images ready. You can do a `docker-compose pull` to get the latest images. - * setup .env file * create folders to hold the data * load the postgresql database @@ -48,20 +58,52 @@ Before doing anything it is recommended to make sure you have the lastest docker You can copy the [`env.example`](docker/env.example) file as .env in your pecan folder. The variables we want to modify are: * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers -* `PECAN_VERSION` set this to develop, the docker image we start with -Both of these variables should also be uncommented by removing the # preceding them. -At the end you should see the following if you run the following command `egrep -v '^(#|$)' .env` +* PECAN_VERSION` set this to develop, the docker image we start with + +Both of these variables should also be uncommented by removing the # preceding them. At the end you should see the following if you run the following command `egrep -v '^(#|$)' .env`. If you have a windows system, you will need to set the variable PWD as well, and for linux you will need to set UID and GID (for rstudio). + +For Linux + +``` +echo "COMPOSE_PROJECT_NAME=pecan" >> .env +echo "PECAN_VERSION=develop" >> .env +echo "UID=$(id -u)" >> .env +echo "GID=$(id -g)" >> .env +``` + +For MacOSX + +``` +echo "COMPOSE_PROJECT_NAME=pecan" >> .env +echo "PECAN_VERSION=develop" >> .env +``` + +For windows: + +``` +echo "COMPOSE_PROJECT_NAME=pecan" >> .env +echo "PECAN_VERSION=develop" >> .env +echo "PWD=%CD%" >> .env +``` + +Once you have setup `docker-compose.override.yml` and the `.env` files, it is time to pull all docker images that will be used. Doing this will make sure you have the latest version of those images on your local system. ``` -COMPOSE_PROJECT_NAME=pecan -PECAN_VERSION=develop +docker-compose pull ``` -#### folders +#### folders (optional) The goal of the development is to share the development folder with your container, whilst minimizing the latency. What this will do is setup the folders to allow for your pecan folder to be shared, and keep the rest of the folders managed by docker. Some of this is based on a presentation done during [DockerCon 2020](https://docker.events.cube365.net/docker/dockercon/content/Videos/92BAM7vob5uQ2spZf). In this talk it is recommended to keep the database on the filesystem managed by docker, as well as any other folders that are not directly modified on the host system (not using the docker managed volumes could lead to a large speed loss when reading/writing to the disk). The `docker-compose.override.yml` can be modified to copy all the data to the local filesystem, you will need to comment out the appropriate blocks. If you are sharing more than the pecan home directory you will need to make sure that these folder exist. 
As from the video, it is recommended to keep these folders outside of the actual pecan folder to allow for better caching capabilities of the docker system. -If you have commented out the volumes in `docker-compose.override.yml` you will need to create the folders. Assuming you have not modified the values, you can do this with: `mkdir -p $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik}`. The following volumes are specified: +If you have commented out the volumes in `docker-compose.override.yml` you will need to create the folders. Assuming you have not modified the values, you can do this with: + +``` +mkdir -p $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik} +``` + + +The following volumes are specified: - **pecan_home** : is the checked out folder of PEcAn. This is shared with the executor and rstudio container allowing you to share and compile PEcAn. (defaults to current folder) - **pecan_web** : is the checked out web folder of PEcAn. This is shared with the web container allowing you to share and modify the PEcAn web app. (defaults to web folder in the current folder) @@ -76,35 +118,73 @@ These folders will hold all the persistent data for each of the respective conta #### postgresql database -First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): `docker-compose up -d postgres rabbitmq`. This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. +First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): + +``` +docker-compose up -d postgres rabbitmq +``` + +This will start postgresql and rabbitmq. We need to wait for a few minutes (you can look at the logs using `docker-compose logs postgres`) to see if it is ready. -Once the database has finished starting up we will initialize the database. Before we run the container we want to make sure we have the latest database information, you can do this with `docker pull pecan/db`, which will make sure you have the latest version of the database ready. Now you can load the database using: `docker run --rm --network pecan_pecan pecan/db` (in this case we use the latest image instead of develop since it refers to the actual database data, and not the actual code). Once that is done we create two users for BETY: +Once the database has finished starting up we will initialize the database. Now you can load the database using the following commands. The first command will make sure we have the latest version of the image, the second command will actually load the information into the database. ``` -# guest user -docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4 +docker pull pecan/db +docker run --rm --network pecan_pecan pecan/db +``` + + +Once that is done we create two users for BETY, first user is the guest user that you can use to login in the BETY interface. The second user is a user with admin rights. -# example user +``` +docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4 docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1 ``` #### load example data -Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. 
To do this we first again make sure we have the latest code ready using `docker pull pecan/data:develop` and run this image using `docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop`. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers.
+Once the database is loaded we can add some example data; some of the example runs, and the runs for the ED model, assume this data is available. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers. As with the database we first pull the latest version of the image, and then execute the image to copy all the data:
+
+```
+docker pull pecan/data:develop
+docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+```
 
 #### copy R packages (optional but recommended)
 
 Next copy the R packages from a container to volume `pecan_lib`. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. This folder is shared with all PEcAn containers, allowing you to compile the code in one place, and have the compiled code available in all other containers. For example, modifying the code for a model allows you to compile the code in the rstudio container, and see the results in the model container.
 
-You can copy all the data using `docker run -ti --rm -v pecan_lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/`. This will copy all compiled packages to your local machine.
+You can copy all the data using the following command. This will copy all compiled packages to your local machine.
+
+```
+docker run -ti --rm -v pecan_lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/
+```
 
 #### copy web config file (optional)
 
-The `docker-compose.override.yml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using `cp docker/web/config.docker.php web/config.php`.
+The `docker-compose.override.yml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncomment it you will need to first copy the config.php from the docker/web folder. You can do this using
+
+For Linux/MacOSX
+
+```
+cp docker/web/config.docker.php web/config.php
+```
+
+For Windows
+
+```
+copy docker\web\config.docker.php web\config.php
+```
+
+
 ### PEcAn Development
 
-To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. You don't need to stop any running containers, you can use the following command to start all containers: `docker-compose up -d`. At this point you have PEcAn running in docker.
+To begin development we first have to bring up the full PEcAn stack. This assumes you have already done the steps above once. You don't need to stop any running containers, you can use the following command to start all containers. At this point you have PEcAn running in docker.
+
+```
+docker-compose up -d
+```
 
 The current folder (most likely your clone of the git repository) is mounted in some containers as `/pecan`, and in the case of rstudio also in your home folder as `pecan`. You can see which containers exactly in `docker-compose.override.yml`.
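+
+As a quick check that the full stack came up (a minimal example; `docker-compose ps` is a standard command, and the exact service list will depend on your `docker-compose.override.yml`):
+
+```
+docker-compose ps
+```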
@@ -167,11 +247,22 @@ Small scripts that are used as part of the development and installation of PEcAn ## Reset all containers/database -At some point you might want to reset your database. +If you want to start from scratch and remove all old data, but keep your pecan checked out folder, you can remove the folders where you have written the data (see `folders` below). You will also need to remove any of the docker managed volumes. To see all volumes you can do `docker volume ls -q -f name=pecan`. If you are sure, you can either remove them one by one, or remove them all at once using the command below. **THIS DESTROYS ALL DATA IN DOCKER MANAGED VOLUMES.**. + +``` +docker volume rm $(docker volume ls -q -f name=pecan) +``` + +If you changed the docker-compose.override.yml file to point to a location on disk for some of the containers (instead of having them managed by docker) you will need to actually delete the data on your local disk, docker will NOT do this. + +## Reset the lib folder -## New version of R +If you want to reset the pecan lib folder that is mounted across all machines, for example when there is a new version of PEcAn or a a new version of R, you will need to delete the volume pecan_lib, and repopulate it. To delete the volume use the following command, and then look at "copy R packages" to copy the data again. -This only needs to be done once (or if the PEcAn base image changes drastically, for example a new version of R). Firs stop the full stack (using `docker-compose down`). You can delete the volume using `docker rm pecan_lib` (also remvove the folder on your local disk), copy the R packages, and start the full stack. +``` +docker-compose down +docker rm pecan_lib +``` ## Linux and User permissions From 1e43abe79fbdfc9b8bf90b0b479093a73e35b4c9 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 29 Jun 2020 12:23:44 -0500 Subject: [PATCH 1147/2289] missing quote --- DEV-INTRO.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index b0156e7399f..2ee8602ba9b 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -58,7 +58,7 @@ The steps in this section only need to be done the fist time you start working w You can copy the [`env.example`](docker/env.example) file as .env in your pecan folder. The variables we want to modify are: * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers -* PECAN_VERSION` set this to develop, the docker image we start with +* `PECAN_VERSION` set this to develop, the docker image we start with Both of these variables should also be uncommented by removing the # preceding them. At the end you should see the following if you run the following command `egrep -v '^(#|$)' .env`. If you have a windows system, you will need to set the variable PWD as well, and for linux you will need to set UID and GID (for rstudio). 
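+
+For example, on a Linux system the egrep command above might print the following (the UID and GID values shown are illustrative and will differ per machine):
+
+```
+COMPOSE_PROJECT_NAME=pecan
+PECAN_VERSION=develop
+UID=1000
+GID=1000
+```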
From f4f0c0a43a4c3a72574009c4d098c9effb76178f Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 30 Jun 2020 03:19:46 -0500 Subject: [PATCH 1148/2289] bugfix in documentation --- .../03_topical_pages/07_remote_access/01_pecan_api.Rmd | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 4a69f62d495..a8e3890997f 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -164,7 +164,7 @@ print(json.dumps(response.json(), indent=2)) # Get model(s) with `model_name` = SIPNET & `revision` = ssr res <- httr::GET( "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr", - authenticate("carya", "illinois") + httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` @@ -210,7 +210,7 @@ print(json.dumps(response.json(), indent=2)) # Get workflow(s) that use `model_id` = 1000000022 [SIPNET] & `site_id` = 676 [Willow Creek (US-WCr)] res <- httr::GET( "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676", - authenticate("carya", "illinois") + httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` @@ -276,7 +276,7 @@ print(json.dumps(response.json(), indent=2)) # Get details of workflow with `id` = '1000009172' res <- httr::GET( "http://localhost:8000/api/workflows/1000009172", - authenticate("carya", "illinois") + httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` @@ -378,7 +378,7 @@ print(json.dumps(response.json(), indent=2)) # Get list of run belonging to the workflow with `workflow_id` = '1000009172' res <- httr::GET( "http://localhost:8000/api/runs/?workflow_id=1000009172", - authenticate("carya", "illinois") + httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` @@ -435,7 +435,7 @@ print(json.dumps(response.json(), indent=2)) # Get details of run belonging with `id` = '1002042201' res <- httr::GET( "http://localhost:8000/api/runs/1002042201", - authenticate("carya", "illinois") + httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` From 120c8b5f2dc60c513fc9be727a63bd446b62aa3f Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 30 Jun 2020 03:21:33 -0500 Subject: [PATCH 1149/2289] basic tests added for api endpoints --- apps/api/R/entrypoint.R | 2 ++ apps/api/README.md | 2 +- apps/api/test_pecanapi.sh | 9 +++++++-- apps/api/tests/alltests.R | 3 +++ apps/api/tests/test.models.R | 9 +++++++++ apps/api/tests/test.ping.R | 6 ++++++ apps/api/tests/test.runs.R | 20 ++++++++++++++++++++ apps/api/tests/test.status.R | 6 ++++++ apps/api/tests/test.workflows.R | 20 ++++++++++++++++++++ apps/api/tests/testthat.R | 1 + 10 files changed, 75 insertions(+), 3 deletions(-) mode change 100644 => 100755 apps/api/R/entrypoint.R mode change 100644 => 100755 apps/api/test_pecanapi.sh create mode 100755 apps/api/tests/alltests.R create mode 100644 apps/api/tests/test.models.R create mode 100644 apps/api/tests/test.ping.R create mode 100644 apps/api/tests/test.runs.R create mode 100644 apps/api/tests/test.status.R create mode 100644 apps/api/tests/test.workflows.R create mode 100644 apps/api/tests/testthat.R diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R old mode 100644 new mode 100755 index fc3ed32d604..98d82ee69e6 --- 
a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -1,3 +1,5 @@ +#!/usr/bin/env Rscript + #* This is the entry point to the PEcAn API. #* All API endpoints (& filters) are mounted here #* @author Tezan Sahu diff --git a/apps/api/README.md b/apps/api/README.md index 69ae57643fc..1ffc908408d 100644 --- a/apps/api/README.md +++ b/apps/api/README.md @@ -10,7 +10,7 @@ Follow the following steps to spin up the PEcAn API server locally: ```bash $ cd R -$ Rscript entrypoint.R +$ ./entrypoint.R ``` ## Running the tests: diff --git a/apps/api/test_pecanapi.sh b/apps/api/test_pecanapi.sh old mode 100644 new mode 100755 index 35543046251..4cd27998dd3 --- a/apps/api/test_pecanapi.sh +++ b/apps/api/test_pecanapi.sh @@ -1,7 +1,12 @@ #!/bin/bash -R R/entrypoint.R & +cd R; ./entrypoint.R & PID=$! -R test/alltests.R +while ! curl --output /dev/null --silent http://localhost:8000 +do + sleep 1 && echo -n . +done + +cd ../tests; ./alltests.R kill $PID \ No newline at end of file diff --git a/apps/api/tests/alltests.R b/apps/api/tests/alltests.R new file mode 100755 index 00000000000..ffd3c10d4f6 --- /dev/null +++ b/apps/api/tests/alltests.R @@ -0,0 +1,3 @@ +#!/usr/bin/env Rscript + +testthat::test_dir("./") \ No newline at end of file diff --git a/apps/api/tests/test.models.R b/apps/api/tests/test.models.R new file mode 100644 index 00000000000..2763a83d2db --- /dev/null +++ b/apps/api/tests/test.models.R @@ -0,0 +1,9 @@ +context("Testing the /api/models/ endpoint") + +test_that("Calling /api/models/ returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) \ No newline at end of file diff --git a/apps/api/tests/test.ping.R b/apps/api/tests/test.ping.R new file mode 100644 index 00000000000..defd30bdb45 --- /dev/null +++ b/apps/api/tests/test.ping.R @@ -0,0 +1,6 @@ +context("Testing the /api/ping endpoint") + +test_that("Calling /api/ping returns Status 200", { + res <- httr::GET("http://localhost:8000/api/ping") + expect_equal(res$status, 200) +}) \ No newline at end of file diff --git a/apps/api/tests/test.runs.R b/apps/api/tests/test.runs.R new file mode 100644 index 00000000000..e4be12d8b9f --- /dev/null +++ b/apps/api/tests/test.runs.R @@ -0,0 +1,20 @@ +context("Testing the /api/runs/ endpoint") + +test_that("Calling /api/runs/ returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/runs/?workflow_id=1000009172", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + + +context("Testing the /api/runs/{id} endpoint") + +test_that("Calling /api/runs/{id} returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/runs/1002042201", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) \ No newline at end of file diff --git a/apps/api/tests/test.status.R b/apps/api/tests/test.status.R new file mode 100644 index 00000000000..ffd3ae54d34 --- /dev/null +++ b/apps/api/tests/test.status.R @@ -0,0 +1,6 @@ +context("Testing the /api/status endpoint") + +test_that("Calling /api/status returns Status 200", { + res <- httr::GET("http://localhost:8000/api/status") + expect_equal(res$status, 200) +}) \ No newline at end of file diff --git a/apps/api/tests/test.workflows.R b/apps/api/tests/test.workflows.R new file mode 100644 index 00000000000..641d289862a --- /dev/null +++ b/apps/api/tests/test.workflows.R @@ -0,0 +1,20 @@ +context("Testing the /api/workflows/ endpoint") + +test_that("Calling 
/api/workflows/ returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+
+context("Testing the /api/workflows/{id} endpoint")
+
+test_that("Calling /api/workflows/{id} returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/workflows/1000009172",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
\ No newline at end of file
diff --git a/apps/api/tests/testthat.R b/apps/api/tests/testthat.R
new file mode 100644
index 00000000000..527042724a4
--- /dev/null
+++ b/apps/api/tests/testthat.R
@@ -0,0 +1 @@
+library(testthat)

From 766922b3e7d7ed4d047e3aee2baf346432675e5a Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Tue, 30 Jun 2020 16:56:03 +0530
Subject: [PATCH 1150/2289] added gee2pecan_smap()

---
 modules/data.remote/inst/gee2pecan_smap.py | 152 +++++++++++++++++
 1 file changed, 152 insertions(+)
 create mode 100644 modules/data.remote/inst/gee2pecan_smap.py

diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py
new file mode 100644
index 00000000000..5ea7b62b455
--- /dev/null
+++ b/modules/data.remote/inst/gee2pecan_smap.py
@@ -0,0 +1,152 @@
+"""
+Downloads SMAP Global Soil Moisture Data from Google Earth Engine and saves it in a netCDF file.
+
+Requires Python3
+
+Author: Ayush Prasad
+"""
+
+import ee
+import pandas as pd
+import geopandas as gpd
+import os
+import xarray as xr
+import datetime
+
+ee.Initialize()
+
+
+def gee2pecan_smap(geofile, outdir, start, end, var):
+    """
+    Downloads and saves SMAP data from GEE
+
+    Parameters
+    ----------
+    geofile (str) -- path to the geojson file containing the name and coordinates of ROI
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+ + start (str) -- starting date of the data request in the form YYYY-MM-dd + + end (str) -- ending date of the data request in the form YYYY-MM-dd + + var (str) -- one of ssm, susm, smp, ssma, susma + + Returns + ------- + Nothing: + output netCDF is saved in the specified directory + """ + + # read in the geojson file + df = gpd.read_file(geofile) + + if (df.geometry.type == "Point").bool(): + # extract coordinates + lon = float(df.geometry.x) + lat = float(df.geometry.y) + # create geometry + geo = ee.Geometry.Point(lon, lat) + + elif (df.geometry.type == "Polygon").bool(): + # extract coordinates + area = [ + list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) + ] + # create geometry + geo = ee.Geometry.Polygon(area) + + else: + # if the input geometry type is neither Point nor Polygon, raise an error + raise ValueError("geometry type not supported") + + def smap_ts(geo, start, end, var): + # extract a feature from the geometry + features = [ee.Feature(geo)] + # create a feature collection from the features + featureCollection = ee.FeatureCollection(features) + + def smap_ts_feature(feature): + area = feature.geometry() + # create the image collection + collection = ( + ee.ImageCollection("NASA_USDA/HSL/SMAP_soil_moisture") + .filterBounds(area) + .filterDate(start, end) + .select([var]) + ) + + def smap_ts_image(img): + # scale (int) Default: 30 + scale = 30 + # extract date from the image + dateinfo = ee.Date(img.get("system:time_start")).format("YYYY-MM-dd") + # reduce the region to a list, can be configured as per requirements + img = img.reduceRegion( + reducer=ee.Reducer.toList(), + geometry=area, + maxPixels=1e8, + scale=scale, + ) + # store data in an ee.Array + smapdata = ee.Array(img.get(var)) + tmpfeature = ( + ee.Feature(ee.Geometry.Point([0, 0])) + .set("smapdata", smapdata) + .set("dateinfo", dateinfo) + ) + return tmpfeature + + # map tmpfeature over the image collection + smap_timeseries = collection.map(smap_ts_image) + return feature.set( + "smapdata", smap_timeseries.aggregate_array("smapdata") + ).set("dateinfo", smap_timeseries.aggregate_array("dateinfo")) + + # map feature over featurecollection + featureCollection = featureCollection.map(smap_ts_feature).getInfo() + return featureCollection + + fc = smap_ts(geo=geo, start=start, end=end, var=var) + + def fc2dataframe(fc): + smapdatalist = [] + datelist = [] + # extract var and date data from fc dictionary and store it in smapdatalist and datelist + for i in range(len(fc["features"][0]["properties"]["smapdata"])): + smapdatalist.append(fc["features"][0]["properties"]["smapdata"][i][0]) + datelist.append( + datetime.datetime.strptime( + (fc["features"][0]["properties"]["dateinfo"][i]).split(".")[0], + "%Y-%m-%d", + ) + ) + fc_dict = {"date": datelist, var: smapdatalist} + # create a pandas dataframe and store the data + fcdf = pd.DataFrame(fc_dict, columns=["date", var]) + return fcdf + + datadf = fc2dataframe(fc) + + site_name = df[df.columns[0]].iloc[0] + AOI = str(df[df.columns[1]].iloc[0]) + + # convert the dataframe to an xarray dataset, used for converting it to a netCDF file + tosave = xr.Dataset( + datadf, + attrs={ + "site_name": site_name, + "start_date": start, + "end_date": end, + "AOI": AOI, + "product": var, + }, + ) + + # if the specified output path does not exist, create it + if not os.path.exists(outdir): + os.makedirs(outdir, exist_ok=True) + + file_name = "_" + var + # convert to netCDF and save the file + tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc")) From 
994074f0aded6413a98c699ec43bab0174a80fe6 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 1 Jul 2020 08:56:33 -0400 Subject: [PATCH 1151/2289] basgra passing hop-test --- models/basgra/R/run_BASGRA.R | 51 ++++++++++++++++++++++----- models/basgra/R/write.config.BASGRA.R | 18 +++++----- models/basgra/src/BASGRA.f90 | 35 +++++++++++------- models/basgra/src/environment.f90 | 7 ++-- models/basgra/src/parameters_site.f90 | 2 +- models/basgra/src/plant.f90 | 4 +-- models/basgra/src/soil.f90 | 14 ++++---- 7 files changed, 87 insertions(+), 44 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index e77bbd85783..7f2d117713f 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -13,18 +13,21 @@ ##' @param run_met path to CF met ##' @param run_params parameter vector ##' @param site_harvest path to harvest file +##' @param site_fertilize path to fertilizer application file ##' @param start_date start time of the simulation ##' @param end_date end time of the simulation ##' @param outdir where to write BASGRA output ##' @param sitelat latitude of the site ##' @param sitelon longitude of the site +##' @param co2_file path to daily atmospheric CO2 concentration file, optional, defaults to 350 ppm when missing ##' ##' @export ##' @useDynLib PEcAn.BASGRA, .registration = TRUE ##' @author Istem Fer ##-------------------------------------------------------------------------------------------------# -run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, outdir, sitelat, sitelon){ +run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_date, end_date, outdir, + sitelat, sitelon, co2_file = NULL){ start_date <- as.POSIXlt(start_date, tz = "UTC") end_date <- as.POSIXlt(end_date, tz = "UTC") @@ -50,7 +53,12 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, }else if(year != start_year & year == end_year){ simdays <- seq(1, lubridate::yday(end_date)) }else{ - simdays <- seq(lubridate::yday(start_date), lubridate::yday(end_date)) + if(year == start_year & year == end_year){ + simdays <- seq(lubridate::yday(start_date), lubridate::yday(end_date)) + }else{ + simdays <- 1:365 #seq_len(PEcAn.utils::days_in_year(year)) + } + } @@ -146,7 +154,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, matrix_weather <- do.call("rbind", out.list) #BASGRA wants the matrix_weather to be of 10000 x 8 matrix - NMAXDAYS <- as.integer(10000) + NMAXDAYS <- as.integer(365000) nmw <- nrow(matrix_weather) if(nmw > NMAXDAYS){ matrix_weather <- matrix_weather[seq_len(NMAXDAYS), ] @@ -234,15 +242,27 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, NDAYS <- as.integer(sum(matrix_weather[,1] != 0)) - calendar_fert <- matrix( 0, nrow=100, ncol=3 ) - calendar_Ndep <- matrix( 0, nrow=100, ncol=3 ) + matrix_weather <- cbind( matrix_weather, matrix_weather[,8]) # add a 9th column to hold daily CO2 + if(!is.null(co2_file)){ + co2val <- utils::read.table(co2_file, header=TRUE, sep = ",") + matrix_weather[1:NDAYS,9] <- co2val[paste0(co2val[,1], co2val[,2]) %in% paste0(matrix_weather[,1], matrix_weather[,2]),3] + }else{ + PEcAn.logger::logger.info("No atmospheric CO2 concentration was provided. 
Using default 350 ppm.") + matrix_weather[1:NDAYS,9] <- 350 + } + + + calendar_fert <- matrix( 0, nrow=300, ncol=3 ) + + # read in harvest days + f_days <- as.matrix(utils::read.table(site_fertilize, header = TRUE, sep = ",")) + calendar_fert[1:nrow(f_days),] <- f_days + + calendar_Ndep <- matrix( 0, nrow=300, ncol=3 ) #calendar_Ndep[1,] <- c(1900, 1,0) #calendar_Ndep[2,] <- c(2100, 366, 0) - days_harvest <- matrix( as.integer(-1), nrow=100, ncol=2 ) # hardcoding these for now, should be able to modify later on - calendar_fert[1,] <- c( 2018, 202, 65*1000/ 10000 ) # 140 kg N ha-1 applied on day 115 - calendar_fert[2,] <- c( 2018, 233, 40*1000/ 10000 ) # 80 kg N ha-1 applied on day 150 # calendar_fert[3,] <- c( 2001, 123, 0*1000/ 10000 ) # 0 kg N ha-1 applied on day 123 calendar_Ndep[1,] <- c( 1900, 1, 0*1000/(10000*365) ) # 2 kg N ha-1 y-1 N-deposition in 1900 calendar_Ndep[2,] <- c( 1980, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 @@ -250,9 +270,22 @@ run_BASGRA <- function(run_met, run_params, site_harvest, start_date, end_date, # read in harvest days h_days <- as.matrix(utils::read.table(site_harvest, header = TRUE, sep = ",")) - days_harvest[1:nrow(h_days),] <- h_days + days_harvest[1:nrow(h_days),] <- h_days[,1:2] + days_harvest <- as.integer(days_harvest) + # This is a management specific parameter + # I'll pass it via harvest file as the 3rd column + # even though it won't change from harvest to harvest, it may change from run to run + # but just in case users forgot to add the third column to the harvest file: + if(nrow(h_days) == 3){ + run_params[names(run_params) == "CLAIV"] <- h_days[1,3] + }else{ + PEcAn.logger::logger.info("Maximum LAI remaining after harvest is not provided via harvest file. Assuming CLAIV=1.") + run_params[names(run_params) == "CLAIV"] <- 1 + } + + # run model output <- .Fortran('BASGRA', run_params, diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 503c9aeb497..7116165827e 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -47,6 +47,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[ind] <- pft.traits[mi] } + + # Maximum SLA of new leaves if ("SLAMAX" %in% pft.names) { run_params[which(names(run_params) == "SLAMAX")] <- udunits2::ud.convert(pft.traits[which(pft.names == "SLAMAX")], "kg-1","g-1") @@ -219,6 +221,11 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "DAYLP")] <- pft.traits[which(pft.names == "min_daylength_slow")] } + # Day length below which DAYLGE becomes less than 1 + if ("daylength_effect" %in% pft.names) { + run_params[which(names(run_params) == "DLMXGE")] <- pft.traits[which(pft.names == "daylength_effect")] + } + # LAI above which shading induces leaf senescence if ("lai_senescence" %in% pft.names) { run_params[which(names(run_params) == "LAICR")] <- pft.traits[which(pft.names == "lai_senescence")] @@ -316,11 +323,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "WCI")] <- wci } - - # # Initial fraction of SOC that is fast (g C g-1 C) - # if ("r_fSOC" %in% pft.names) { - # run_params[which(names(run_params) == "FCSOMF0")] <- pft.traits[which(pft.names == "r_fSOC")] - # } + # # # This is IC, change later # # Initial C-N ratio of litter (g C g-1 N) @@ -338,11 +341,6 @@ write.config.BASGRA <- function(defaults, trait.values, settings, 
run.id, IC = N # run_params[which(names(run_params) == "CNSOMS0")] <- pft.traits[which(pft.names == "c2n_sSOM")] # } # - # # Initial value of soil mineral N (g N m-2) - # if ("NMIN" %in% pft.names) { - # run_params[which(names(run_params) == "NMIN0")] <- pft.traits[which(pft.names == "NMIN")] - # } - # # # diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index c7a4ac36cc6..67b7c0c9c89 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -23,17 +23,17 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & use plant implicit none -integer, dimension(100,2) :: DAYS_HARVEST +integer, dimension(300,2) :: DAYS_HARVEST real :: PARAMS(140) #ifdef weathergen integer, parameter :: NWEATHER = 7 #else - integer, parameter :: NWEATHER = 8 + integer, parameter :: NWEATHER = 9 #endif real :: MATRIX_WEATHER(NMAXDAYS,NWEATHER) -real , dimension(100,3) :: CALENDAR_FERT, CALENDAR_NDEP -integer, dimension(100,2) :: DAYS_FERT , DAYS_NDEP -real , dimension(100) :: NFERTV , NDEPV +real , dimension(300,3) :: CALENDAR_FERT, CALENDAR_NDEP +integer, dimension(300,2) :: DAYS_FERT , DAYS_NDEP +real , dimension(300) :: NFERTV , NDEPV integer :: day, doy, i, NDAYS, NOUT, year real :: y(NDAYS,NOUT) @@ -86,6 +86,7 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & VPI = MATRIX_WEATHER(:,6) RAINI = MATRIX_WEATHER(:,7) WNI = MATRIX_WEATHER(:,8) + CO2 = MATRIX_WEATHER(:,9) #endif ! Calendars @@ -164,6 +165,16 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & ! Plant call Harvest (CLV,CRES,CST,year,doy,DAYS_HARVEST,LAI,PHEN,TILG1,TILG2,TILV, & GSTUB,HARVLA,HARVLV,HARVPH,HARVRE,HARVST,HARVTILG2) + + + CLV = CLV - HARVLV + CRES = CRES - HARVRE + CST = CST - HARVST + LAI = LAI - HARVLA + PHEN = min(1., PHEN - HARVPH) + TILG2 = TILG2 - HARVTILG2 + NSH = NSH - HARVNSH + call Biomass (CLV,CRES,CST) call Phenology (DAYL,PHEN, DPHEN,GPHEN) call Foliage1 @@ -206,18 +217,18 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & F_DIGEST_DM,F_DIGEST_DMSH,F_DIGEST_LV,F_DIGEST_ST,F_DIGEST_WALL) ! 
State equations plants - CLV = CLV + GLV - DLV - HARVLV + CLV = CLV + GLV - DLV CLVD = CLVD + DLV - CRES = CRES + GRES - RESMOB - HARVRE + CRES = CRES + GRES - RESMOB CRT = CRT + GRT - DRT - CST = CST + GST - HARVST + CST = CST + GST CSTUB = CSTUB + GSTUB - DSTUB - LAI = LAI + GLAI - DLAI - HARVLA + LAI = LAI + GLAI - DLAI LT50 = LT50 + DeHardRate - HardRate - PHEN = min(1., PHEN + GPHEN - DPHEN - HARVPH) + PHEN = min(1., PHEN + GPHEN - DPHEN) ROOTD = ROOTD + RROOTD TILG1 = TILG1 + TILVG1 - TILG1G2 - TILG2 = TILG2 + TILG1G2 - HARVTILG2 + TILG2 = TILG2 + TILG1G2 TILV = TILV + GTILV - TILVG1 - DTILV TILTOT = TILG1 + TILG2 + TILV if((LAT>0).AND.(doy==305)) VERN = 0 @@ -228,7 +239,7 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & YIELD_TOT = YIELD_TOT + YIELD NRT = NRT + GNRT - DNRT - NSH = NSH + GNSH - DNSH - HARVNSH - NSHmob + NSH = NSH + GNSH - DNSH - NSHmob NCR = NRT / CRT diff --git a/models/basgra/src/environment.f90 b/models/basgra/src/environment.f90 index d9ca5616a81..f4f6348089e 100644 --- a/models/basgra/src/environment.f90 +++ b/models/basgra/src/environment.f90 @@ -3,10 +3,10 @@ module environment use parameters_site use parameters_plant implicit none -integer, parameter :: NMAXDAYS = 10000 -real :: GR, TMMN, TMMX, VP, WN +integer, parameter :: NMAXDAYS = 365000 +real :: GR, TMMN, TMMX, VP, WN, CO2A real :: YEARI(NMAXDAYS), DOYI(NMAXDAYS) , RAINI(NMAXDAYS), GRI(NMAXDAYS) -real :: TMMNI(NMAXDAYS), TMMXI(NMAXDAYS), VPI(NMAXDAYS) , WNI(NMAXDAYS) +real :: TMMNI(NMAXDAYS), TMMXI(NMAXDAYS), VPI(NMAXDAYS) , WNI(NMAXDAYS), CO2(NMAXDAYS) #ifdef weathergen real :: PETI(NMAXDAYS) @@ -48,6 +48,7 @@ Subroutine set_weather_day(day,DRYSTOR, year,doy) TMMX = TMMXI(day) ! maximum (or average) temperature (degrees Celsius) VP = VPI(day) ! vapour pressure (kPa) WN = WNI(day) ! mean wind speed (m s-1) + CO2A = CO2(day) DAVTMP = (TMMN + TMMX)/2.0 DTR = GR * exp(-KSNOW*DRYSTOR) PAR = 0.5*4.56*DTR diff --git a/models/basgra/src/parameters_site.f90 b/models/basgra/src/parameters_site.f90 index 5d4fb1ec0c9..512383ea63c 100644 --- a/models/basgra/src/parameters_site.f90 +++ b/models/basgra/src/parameters_site.f90 @@ -7,7 +7,7 @@ module parameters_site real :: LAT ! Atmospheric conditions -real, parameter :: CO2A = 350 +! real, parameter :: CO2A = 350 ! Soil real, parameter :: DRATE = 50 diff --git a/models/basgra/src/plant.f90 b/models/basgra/src/plant.f90 index 48fc23cd65a..7da7de30d59 100644 --- a/models/basgra/src/plant.f90 +++ b/models/basgra/src/plant.f90 @@ -19,7 +19,7 @@ module plant Subroutine Harvest(CLV,CRES,CST,year,doy,DAYS_HARVEST,LAI,PHEN,TILG1,TILG2,TILV, & GSTUB,HARVLA,HARVLV,HARVPH,HARVRE,HARVST,HARVTILG2) integer :: doy,year - integer,dimension(100,2) :: DAYS_HARVEST + integer,dimension(300,2) :: DAYS_HARVEST real :: CLV, CRES, CST, LAI, PHEN, TILG1, TILG2, TILV real :: GSTUB, HARVLV, HARVLA, HARVRE, HARVTILG2, HARVST, HARVPH real :: CLAI, HARVFR, TV1 @@ -27,7 +27,7 @@ Subroutine Harvest(CLV,CRES,CST,year,doy,DAYS_HARVEST,LAI,PHEN,TILG1,TILG2,TILV, HARV = 0 NOHARV = 1 - do i=1,100 + do i=1,300 if ( (year==DAYS_HARVEST(i,1)) .and. 
(doy==DAYS_HARVEST(i,2)) ) then HARV = 1 NOHARV = 0 diff --git a/models/basgra/src/soil.f90 b/models/basgra/src/soil.f90 index f6f70aa98e1..fb8323da9af 100644 --- a/models/basgra/src/soil.f90 +++ b/models/basgra/src/soil.f90 @@ -109,11 +109,11 @@ end Subroutine O2fluxes Subroutine N_fert(year,doy,DAYS_FERT,NFERTV, Nfert) integer :: year,doy,i - integer,dimension(100,2) :: DAYS_FERT - real ,dimension(100 ) :: NFERTV + integer,dimension(300,2) :: DAYS_FERT + real ,dimension(300 ) :: NFERTV real :: Nfert Nfert = 0 - do i=1,100 + do i=1,300 if ( (year==DAYS_FERT (i,1)) .and. (doy==DAYS_FERT (i,2)) ) then Nfert = NFERTV (i) end if @@ -122,15 +122,15 @@ end Subroutine N_fert Subroutine N_dep(year,doy,DAYS_NDEP,NDEPV, Ndep) integer :: year,doy,j - integer,dimension(100,2) :: DAYS_NDEP - real ,dimension(100 ) :: NDEPV + integer,dimension(300,2) :: DAYS_NDEP + real ,dimension(300 ) :: NDEPV integer :: idep real :: NDEPV_interval,t - real ,dimension(100) :: tNdep + real ,dimension(300) :: tNdep real :: Ndep t = year + (doy -0.5)/366 tNdep = DAYS_NDEP(:,1) + (DAYS_NDEP(:,2)-0.5)/366 - do j = 2,100 + do j = 2,300 if ( (tNdep(j-1)<t) .and. (tNdep(j)>=t) ) idep = j-1 end do NDEPV_interval = NDEPV(idep+1) - NDEPV(idep) From 829dc78463dcfc2c86aa724ad46caaf84e82a6df Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 1 Jul 2020 09:19:59 -0400 Subject: [PATCH 1152/2289] pass fertilization and co2 --- models/basgra/R/write.config.BASGRA.R | 6 ++++++ models/basgra/inst/template.job | 2 +- models/basgra/man/run_BASGRA.Rd | 8 +++++++- 3 files changed, 14 insertions(+), 2 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 7116165827e..c9647c03026 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -543,6 +543,12 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N jobsh <- gsub("@SITE_MET@", settings$run$inputs$met$path, jobsh) jobsh <- gsub("@SITE_HARVEST@", settings$run$inputs$harvest$path, jobsh) + jobsh <- gsub("@SITE_FERTILIZE@", settings$run$inputs$fertilize$path, jobsh) + if(!is.null(settings$run$inputs$co2_file$path)){ + jobsh <- gsub("@SITE_CO2FILE@", settings$run$inputs$co2_file$path, jobsh) + }else{ + jobsh <- gsub("@SITE_CO2FILE@", 'NULL', jobsh) + } jobsh <- gsub("@START_DATE@", settings$run$start.date, jobsh) jobsh <- gsub("@END_DATE@", settings$run$end.date, jobsh) diff --git a/models/basgra/inst/template.job b/models/basgra/inst/template.job index 1cbf436fcfe..288a142745d 100644 --- a/models/basgra/inst/template.job +++ b/models/basgra/inst/template.job @@ -15,7 +15,7 @@ if [ ! -e "@OUTDIR@/results.csv" ]; then # convert to MsTMIP echo "library (PEcAn.BASGRA) -run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@SITE_HARVEST@', '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@) +run_BASGRA('@SITE_MET@', @RUN_PARAMS@, '@SITE_HARVEST@', '@SITE_FERTILIZE@', '@START_DATE@', '@END_DATE@', '@OUTDIR@', @SITE_LAT@, @SITE_LON@, '@SITE_CO2FILE@') " | R --vanilla STATUS=$? 
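Once write.config.BASGRA fills in the placeholders above, the generated job script reduces to a single run_BASGRA() call. For reference, here is a minimal sketch of an equivalent direct call; it is illustrative only — the file paths, coordinates, and dates are hypothetical, and the CSV column layouts (year, doy, value; read with utils::read.table(..., header = TRUE, sep = ",")) are inferred from the reader code in run_BASGRA.R above, with the header names chosen here for illustration.

```r
library(PEcAn.BASGRA)

# Hypothetical input files -- layouts inferred from run_BASGRA.R:
#   harvest.csv:   year,doy[,CLAIV]  (3rd column optional; CLAIV falls back to 1)
#   fertilize.csv: year,doy,Nfert    (g N m-2, filled into calendar_fert)
#   co2.csv:       year,doy,CO2      (ppm; matched to matrix_weather rows by year+doy)
run_BASGRA(
  run_met        = "inputs/site.clim",   # CF met already converted for BASGRA
  run_params     = run_params,           # named parameter vector from write.config.BASGRA
  site_harvest   = "inputs/harvest.csv",
  site_fertilize = "inputs/fertilize.csv",
  start_date     = "2019-01-01",
  end_date       = "2019-12-31",
  outdir         = "out/run1",
  sitelat        = 61.8,
  sitelon        = 24.3,
  co2_file       = NULL                  # omit to use the logged 350 ppm default
)
```

Passing co2_file = NULL exercises the else branch added in PATCH 1151, which fills column 9 of matrix_weather with a constant 350 ppm.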
diff --git a/models/basgra/man/run_BASGRA.Rd b/models/basgra/man/run_BASGRA.Rd index d8a2e58fea7..8cbe14b3760 100644 --- a/models/basgra/man/run_BASGRA.Rd +++ b/models/basgra/man/run_BASGRA.Rd @@ -8,11 +8,13 @@ run_BASGRA( run_met, run_params, site_harvest, + site_fertilize, start_date, end_date, outdir, sitelat, - sitelon + sitelon, + co2_file = NULL ) } \arguments{ @@ -22,6 +24,8 @@ run_BASGRA( \item{site_harvest}{path to harvest file} +\item{site_fertilize}{path to fertilizer application file} + \item{start_date}{start time of the simulation} \item{end_date}{end time of the simulation} @@ -31,6 +35,8 @@ run_BASGRA( \item{sitelat}{latitude of the site} \item{sitelon}{longitude of the site} + +\item{co2_file}{path to daily atmospheric CO2 concentration file, optional, defaults to 350 ppm when missing} } \description{ BASGRA wrapper function. Runs and writes model outputs in PEcAn standard. From 80d0b8a597aca3d213f2cb7c8b53511518d9d6d0 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 1 Jul 2020 10:09:50 -0400 Subject: [PATCH 1153/2289] more ic --- models/basgra/R/write.config.BASGRA.R | 112 +++++++++++++++----------- 1 file changed, 66 insertions(+), 46 deletions(-) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index c9647c03026..6a29de28508 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -292,22 +292,37 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "LOG10LAII")] <- lai } - # This is IC # Initial value of litter C (g C m-2) clitt0 <- try(ncdf4::ncvar_get(IC.nc, "litter_carbon_content"), silent = TRUE) if (!is.na(clitt0) && is.numeric(clitt0)) { run_params[which(names(run_params) == "CLITT0")] <- udunits2::ud.convert(clitt0, "kg", "g") } - # This is IC - # Initial value of SOM (g C m-2) - csom0 <- try(ncdf4::ncvar_get(IC.nc, "TotSoilCarb"), silent = TRUE) - if (!is.na(csom0) && is.numeric(csom0)) { - run_params[which(names(run_params) == "CSOM0")] <- udunits2::ud.convert(csom0, "kg", "g") - # CSOMF = CSOM0 * FCSOMF0 - run_params[names(run_params) == "CSOMF0"] <- udunits2::ud.convert(csom0 * run_params[names(run_params) == "FCSOMF0"], "kg", "g") - # CSOMS = CSOM0 * (1-FCSOMF0) - run_params[names(run_params) == "CSOMS0"] <- udunits2::ud.convert(csom0 * (1 - run_params[names(run_params) == "FCSOMF0"]), "kg", "g") + + # Initial value of slow SOM (g C m-2) + csoms0 <- try(ncdf4::ncvar_get(IC.nc, "slow_soil_pool_carbon_content"), silent = TRUE) + if (!is.na(csoms0) && is.numeric(csoms0)) { + run_params[which(names(run_params) == "CSOMS0")] <- udunits2::ud.convert(csoms0, "kg", "g") + } + + # Initial value of fast SOM (g C m-2) + csomf0 <- try(ncdf4::ncvar_get(IC.nc, "fast_soil_pool_carbon_content"), silent = TRUE) + if (!is.na(csomf0) && is.numeric(csomf0)) { + run_params[which(names(run_params) == "CSOMF0")] <- udunits2::ud.convert(csomf0, "kg", "g") + } + + # Initial value of root C (g C m-2) + crti <- try(ncdf4::ncvar_get(IC.nc, "root_carbon_content"), silent = TRUE) + if (!is.na(crti) && is.numeric(crti)) { + # not log10 anymore, don't mind the name + run_params[which(names(run_params) == "LOG10CRTI")] <- udunits2::ud.convert(crti, "kg", "g") + } + + # Initial value of leaf C (g C m-2) + clvi <- try(ncdf4::ncvar_get(IC.nc, "leaf_carbon_content"), silent = TRUE) + if (!is.na(clvi) && is.numeric(clvi)) { + # not log10 anymore, don't mind the name + run_params[which(names(run_params) == "LOG10CLVI")] <- 
udunits2::ud.convert(clvi, "kg", "g") } # Initial mineral N @@ -316,6 +331,11 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "NMIN0")] <- udunits2::ud.convert(nmin0, "kg", "g") } + # Rooting depth (m) + rootd <- try(ncdf4::ncvar_get(IC.nc, "rooting_depth"), silent = TRUE) + if (!is.na(rootd) && is.numeric(rootd)) { + run_params[which(names(run_params) == "ROOTDM")] <- rootd + } # WCI wci <- try(ncdf4::ncvar_get(IC.nc, "SoilMoistFrac"), silent = TRUE) @@ -323,38 +343,43 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N run_params[which(names(run_params) == "WCI")] <- wci } - - # - # # This is IC, change later - # # Initial C-N ratio of litter (g C g-1 N) - # if ("c2n_litter" %in% pft.names) { - # run_params[which(names(run_params) == "CNLITT0")] <- 100*pft.traits[which(pft.names == "c2n_litter")] - # } - # - # # Initial C-N ratio of fast SOM (g C g-1 N) - # if ("c2n_fSOM" %in% pft.names) { - # run_params[which(names(run_params) == "CNSOMF0")] <- pft.traits[which(pft.names == "c2n_fSOM")] - # } - # - # # Initial C-N ratio of slow SOM (g C g-1 N) - # if ("c2n_sSOM" %in% pft.names) { - # run_params[which(names(run_params) == "CNSOMS0")] <- pft.traits[which(pft.names == "c2n_sSOM")] - # } - # + # Tiller density (m-2) + tiltoti <- try(ncdf4::ncvar_get(IC.nc, "tiller_density"), silent = TRUE) + if (!is.na(tiltoti) && is.numeric(tiltoti)) { + run_params[which(names(run_params) == "TILTOTI")] <- tiltoti + } - # - # - # # Water concentration at saturation (m3 m-3) - # if ("volume_fraction_of_water_in_soil_at_saturation" %in% pft.names) { - # run_params[which(names(run_params) == "WCST")] <- pft.traits[which(pft.names == "volume_fraction_of_water_in_soil_at_saturation")] - # } + # Phenological stage + pheni <- try(ncdf4::ncvar_get(IC.nc, "phenological_stage"), silent = TRUE) + if (!is.na(pheni) && is.numeric(pheni)) { + run_params[which(names(run_params) == "PHENI")] <- pheni + } - # # Temperature that kills half the plants in a day (degrees Celcius) - # if ("plant_min_temp" %in% pft.names) { - # run_params[which(names(run_params) == "LT50I")] <- pft.traits[which(pft.names == "plant_min_temp")] - # } + # Initial C in reserves (g C m-2) + cresi <- try(ncdf4::ncvar_get(IC.nc, "reserve_carbon_content"), silent = TRUE) + if (!is.na(cresi) && is.numeric(cresi)) { + # not log10 anymore, don't mind the name + run_params[which(names(run_params) == "LOG10CRESI")] <- udunits2::ud.convert(cresi, "kg", "g") + } + + # N-C ratio of roots + n2c <- try(ncdf4::ncvar_get(IC.nc, "n2c_roots"), silent = TRUE) + if (!is.na(n2c) && is.numeric(n2c)) { + run_params[which(names(run_params) == "NCR")] <- n2c + } + + # Initial C-N ratio of fast SOM + c2n <- try(ncdf4::ncvar_get(IC.nc, "c2n_fast_pool"), silent = TRUE) + if (!is.na(c2n) && is.numeric(c2n)) { + run_params[which(names(run_params) == "CNSOMF0")] <- c2n + } + + # Water concentration at saturation (m3 m-3) + wcst <- try(ncdf4::ncvar_get(IC.nc, "water_concentration_at_saturation"), silent = TRUE) + if (!is.na(wcst) && is.numeric(wcst)) { + run_params[which(names(run_params) == "WCST")] <- wcst + } - #} } # THESE "PARAMETERS" (IN FACT, INITIAL CONDITIONS) WERE NOT PART OF THE ORIGINAL VECTOR @@ -402,7 +427,8 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N ######################### PREVIOUS STATE ######################### ################################################################## - # developing hack: overwrite initial 
values with previous time steps + # overwrite initial values with previous time steps + # as model2netcdf is developed, some or all of these can be dropped? last_states_file <- file.path(outdir, "last_vals_basgra.Rdata") if(file.exists(last_states_file)){ @@ -443,12 +469,6 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # CNLITT0 = pa( 84) ! (g C g-1 N) Initial C/N ratio of litter # run_params[names(run_params) == "CNLITT0"] <- last_vals[names(last_vals) == "CLITT"] / last_vals[names(last_vals) == "NLITT"] - # FCSOMF0 handled above - - # CNSOMF0 = pa( 85) ! (g C g-1 N) Initial C/N ratio of fast-decomposing OM - # csomf <- run_params[which(names(run_params) == "FCSOMF0")] * run_params[which(names(run_params) == "CSOM0")] - # run_params[names(run_params) == "CNSOMF0"] <- csomf / last_vals[names(last_vals) == "NSOMF"] - # CNSOMS0 = pa( 86) ! (g C g-1 N) Initial C/N ratio of slowly decomposing OM # csoms <- (1 - run_params[which(names(run_params) == "FCSOMF0")]) * run_params[which(names(run_params) == "CSOM0")] # run_params[names(run_params) == "CNSOMS0"] <- csoms / last_vals[names(last_vals) == "NSOMS"] From 8a43738a80193bff19c1c02f6f0cacb186d8c59e Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 1 Jul 2020 10:28:20 -0400 Subject: [PATCH 1154/2289] bugfix --- models/basgra/R/run_BASGRA.R | 1 + 1 file changed, 1 insertion(+) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 7f2d117713f..152a68fe1c2 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -268,6 +268,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ calendar_Ndep[2,] <- c( 1980, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 1980 calendar_Ndep[3,] <- c( 2100, 366, 0*1000/(10000*365) ) # 20 kg N ha-1 y-1 N-deposition in 2100 + days_harvest <- matrix(as.integer(-1), nrow= 300, ncol = 2) # read in harvest days h_days <- as.matrix(utils::read.table(site_harvest, header = TRUE, sep = ",")) days_harvest[1:nrow(h_days),] <- h_days[,1:2] From d9e5e70f8455b90ebcec75612922dd6eb0d8c3cf Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 1 Jul 2020 11:05:44 -0400 Subject: [PATCH 1155/2289] output evap and tran --- models/basgra/R/run_BASGRA.R | 4 ++-- models/basgra/src/BASGRA.f90 | 2 ++ 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index 152a68fe1c2..bb552b7ed61 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -197,7 +197,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ "NSOURCE" , "NSINK" , # 96:97 "NRT" , "NCRT" , # 98:99 "rNLITT" , "rNSOMF" , # 100:101 - "DAYL" # 102 + "DAYL" , "EVAP" , "TRAN" # 102:104 ) outputUnits <- c( @@ -226,7 +226,7 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ "(g N m-2 d-1)", "(g N m-2 d-1)", # 96:97 "(g N m-2)" , "(g N g-1 C)" , # 98:99 "(g N m-2)" , "(g N g-1 C)" , # 100:101 - "(d d-1)" # 102 + "(d d-1)" , "(mm d-1)" , "(mm d-1)" # 102:104 ) NOUT <- as.integer( length(outputNames) ) diff --git a/models/basgra/src/BASGRA.f90 b/models/basgra/src/BASGRA.f90 index 67b7c0c9c89..cf931693cff 100644 --- a/models/basgra/src/BASGRA.f90 +++ b/models/basgra/src/BASGRA.f90 @@ -385,6 +385,8 @@ subroutine BASGRA( PARAMS, MATRIX_WEATHER, & y(day,101) = rNSOMF y(day,102) = DAYL + y(day,103) = EVAP + y(day,104) = TRAN enddo From 6dd0eb1dfbfe2565db7d1e18a0c1f5b338775fc5 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: 
Wed, 1 Jul 2020 18:21:48 +0000 Subject: [PATCH 1156/2289] added failure tests --- apps/api/tests/test.auth.R | 24 ++++++++++++++++++++++++ apps/api/tests/test.models.R | 8 ++++++++ apps/api/tests/test.runs.R | 25 +++++++++++++++++++++---- apps/api/tests/test.workflows.R | 25 +++++++++++++++++++++---- 4 files changed, 74 insertions(+), 8 deletions(-) create mode 100644 apps/api/tests/test.auth.R diff --git a/apps/api/tests/test.auth.R b/apps/api/tests/test.auth.R new file mode 100644 index 00000000000..20fe35ef213 --- /dev/null +++ b/apps/api/tests/test.auth.R @@ -0,0 +1,24 @@ +context("Testing authentication for API") + +test_that("Using correct username & password returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/models/", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Using incorrect username & password returns Status 401", { + res <- httr::GET( + "http://localhost:8000/api/models/", + httr::authenticate("carya", "wrong_password") + ) + expect_equal(res$status, 401) +}) + +test_that("Not using username & password returns Status 401", { + res <- httr::GET( + "http://localhost:8000/api/models/" + ) + expect_equal(res$status, 401) +}) \ No newline at end of file diff --git a/apps/api/tests/test.models.R b/apps/api/tests/test.models.R index 2763a83d2db..827b8593278 100644 --- a/apps/api/tests/test.models.R +++ b/apps/api/tests/test.models.R @@ -6,4 +6,12 @@ test_that("Calling /api/models/ returns Status 200", { httr::authenticate("carya", "illinois") ) expect_equal(res$status, 200) +}) + +test_that("Calling /api/models/ with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/models/?model_name=random&revision=random", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) }) \ No newline at end of file diff --git a/apps/api/tests/test.runs.R b/apps/api/tests/test.runs.R index e4be12d8b9f..3a49cfad77d 100644 --- a/apps/api/tests/test.runs.R +++ b/apps/api/tests/test.runs.R @@ -1,6 +1,6 @@ -context("Testing the /api/runs/ endpoint") +context("Testing all runs endpoints") -test_that("Calling /api/runs/ returns Status 200", { +test_that("Calling /api/runs/ with a valid workflow id returns Status 200", { res <- httr::GET( "http://localhost:8000/api/runs/?workflow_id=1000009172", httr::authenticate("carya", "illinois") @@ -9,12 +9,29 @@ test_that("Calling /api/runs/ returns Status 200", { }) -context("Testing the /api/runs/{id} endpoint") -test_that("Calling /api/runs/{id} returns Status 200", { +test_that("Calling /api/runs/{id} with a valid run id returns Status 200", { res <- httr::GET( "http://localhost:8000/api/runs/1002042201", httr::authenticate("carya", "illinois") ) expect_equal(res$status, 200) +}) + +test_that("Calling /api/runs/ with an invalid workflow id returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/runs/?workflow_id=1000000000", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) +}) + + + +test_that("Calling /api/runs/{id} with an invalid run id returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/runs/1000000000", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) }) \ No newline at end of file diff --git a/apps/api/tests/test.workflows.R b/apps/api/tests/test.workflows.R index 641d289862a..b8bf5f66ab1 100644 --- a/apps/api/tests/test.workflows.R +++ b/apps/api/tests/test.workflows.R @@ -1,6 +1,6 @@ -context("Testing the /api/workflows/ endpoint") 
+context("Testing all workflows endpoints") -test_that("Calling /api/workflows/ returns Status 200", { +test_that("Calling /api/workflows/ with valid parameters returns Status 200", { res <- httr::GET( "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676", httr::authenticate("carya", "illinois") @@ -9,12 +9,29 @@ test_that("Calling /api/workflows/ returns Status 200", { }) -context("Testing the /api/workflows/{id} endpoint") -test_that("Calling /api/workflows/{id} returns Status 200", { +test_that("Calling /api/workflows/{id} with valid workflow id returns Status 200", { res <- httr::GET( "http://localhost:8000/api/workflows/1000009172", httr::authenticate("carya", "illinois") ) expect_equal(res$status, 200) +}) + +test_that("Calling /api/workflows/ with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/workflows/?model_id=1000000000&site_id=1000000000", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) +}) + + + +test_that("Calling /api/workflows/{id} with invalid workflow id returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/workflows/1000000000", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) }) \ No newline at end of file From 2f186b440005607acc95496789e9a35e11e16a78 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 1 Jul 2020 18:24:34 +0000 Subject: [PATCH 1157/2289] updated swagger docs link --- apps/api/README.md | 2 +- book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/apps/api/README.md b/apps/api/README.md index 1ffc908408d..828bca112a9 100644 --- a/apps/api/README.md +++ b/apps/api/README.md @@ -2,7 +2,7 @@ This folder contains the code & tests for PEcAn's RESTful API server. The API allows users to remotely interact with the PEcAn servers and leverage the functionalities provided by the PEcAn Project. It has been designed to follow common RESTful API conventions. Most operations are performed using the HTTP methods: `GET` (retrieve) & `POST` (create). -#### For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml). +#### For the most up-to-date documentation, you can visit the [PEcAn API Documentation](http://pecan-dev.ncsa.illinois.edu/swagger/). ## Starting the PEcAn server: diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index a8e3890997f..b467cf2844d 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -23,7 +23,7 @@ Authentication also depends on the PEcAn server that the user interacts with. So This page contains the high-level overviews & the functionalities offered by the different RESTful endpoints of the PEcAn API. 
-__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml).__ +__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](http://pecan-dev.ncsa.illinois.edu/swagger/).__ The currently implemented functionalities include: From e04d204e8545c5636a37f993ef32c90480186a80 Mon Sep 17 00:00:00 2001 From: Kristina Riemer Date: Thu, 2 Jul 2020 12:48:22 -0700 Subject: [PATCH 1158/2289] Update ED2IN v 2.2.0 name --- models/ed/inst/{ED2IN.2.2.0 => ED2IN.r2.2.0} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename models/ed/inst/{ED2IN.2.2.0 => ED2IN.r2.2.0} (100%) diff --git a/models/ed/inst/ED2IN.2.2.0 b/models/ed/inst/ED2IN.r2.2.0 similarity index 100% rename from models/ed/inst/ED2IN.2.2.0 rename to models/ed/inst/ED2IN.r2.2.0 From 856a7e45a32d26ad5bca982f39c8... wait

856a7e45a32d26ad5bca982f46... 

From 856a7e45a32d26ad5bca982f
The variables we want to modify are: +```sh +cp docker/env.example .env +``` * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers * `PECAN_VERSION` set this to develop, the docker image we start with From bfbdd5aff7f56949d2c991a9bc14153a2030312f Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 3 Jul 2020 13:52:06 -0700 Subject: [PATCH 1160/2289] Update DEV-INTRO.md --- DEV-INTRO.md | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 32e99234bcf..b559c3fc07a 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -17,9 +17,11 @@ cd pecan ## Developing in Docker +The use of Docker in PEcAn is described in detail in the [PEcAn documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html). This is intended as a quick start. + ### Installing Docker -To install Docker and docker-compose, see the docker documentation for installing +To install Docker and docker-compose, see the docker documentation: - Docker Desktop in [Mac OSX](https://docs.docker.com/docker-for-mac/install/) or [Windows](https://docs.docker.com/docker-for-windows/install/) - Docker (e.g. [Ubuntu](https://docs.docker.com/compose/install/)) and [docker-compose](https://docs.docker.com/compose/install/) on your linux operating system. @@ -64,10 +66,18 @@ The steps in this section only need to be done the first time you start working w You can copy the [`docker/env.example`](docker/env.example) file as .env in your pecan folder. The variables we want to modify are: +For Linux/MacOSX + ```sh cp docker/env.example .env ``` +For Windows + +``` +copy docker/env.example .env +``` * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers * `PECAN_VERSION` set this to develop, the docker image we start with @@ -89,7 +99,7 @@ echo "COMPOSE_PROJECT_NAME=pecan" >> .env echo "PECAN_VERSION=develop" >> .env ``` -For windows: +For Windows: ``` echo "COMPOSE_PROJECT_NAME=pecan" >> .env From 72c5742d5b504018b03ec42d344ebac0ef5c1479 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun, 5 Jul 2020 14:47:53 +0530 Subject: [PATCH 1161/2289] created ci.yml --- .github/workflows/ci.yml | 168 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 168 insertions(+) create mode 100644 .github/workflows/ci.yml diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 00000000000..7965ee45819 --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,168 @@ +name: CI + +on: + push: + branches: + - master + - develop + + tags: + - '*' + + pull_request: + +env: + # Would be more usual to set R_LIBS_USER, but R uses R_LIBS first if present + # ...and it's always present here, because the rocker/tidyverse base image + # checks at R startup time for R_LIBS and R_LIBS_USER, sets both if not found + R_LIBS: ~/R/library + +jobs: + + build: + runs-on: ubuntu-latest + container: pecan/depends:develop + steps: + - uses: actions/checkout@v2 + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + shell: bash + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: cache .doc + uses: actions/cache@v1 + with: + key: doc-${{ github.sha }} + path: .doc + - name: cache .install + uses: actions/cache@v1 + with: + key: install-${{ github.sha }} + path: .install + - name: build + run: make -j1 + env: + NCPUS: 2 + CI: true + - name: check for out-of-date Rd files + uses: 
infotroph/tree-is-clean@v1 + + test: + needs: build + runs-on: ubuntu-latest + container: pecan/depends:develop + services: + postgres: + image: mdillon/postgis:9.5 + options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 + env: + NCPUS: 2 + PGHOST: postgres + CI: true + steps: + - uses: actions/checkout@v2 + - name: install utils + run: apt-get update && apt-get install -y openssh-client postgresql-client curl + - name: db setup + uses: docker://pecan/db:ci + - name: add models to db + run: ./scripts/add.models.sh + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: cache .doc + uses: actions/cache@v1 + with: + key: doc-${{ github.sha }} + path: .doc + - name: cache .install + uses: actions/cache@v1 + with: + key: install-${{ github.sha }} + path: .install + - name: test + run: make test + + check: + needs: build + runs-on: ubuntu-latest + container: pecan/depends:develop + env: + NCPUS: 2 + CI: true + _R_CHECK_LENGTH_1_CONDITION_: true + _R_CHECK_LENGTH_1_LOGIC2_: true + # Avoid compilation check warnings that come from the system Makevars + # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html + _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + steps: + - uses: actions/checkout@v2 + - name: install ssh + run: apt-get update && apt-get install -y openssh-client qpdf + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: cache .doc + uses: actions/cache@v1 + with: + key: doc-${{ github.sha }} + path: .doc + - name: cache .install + uses: actions/cache@v1 + with: + key: install-${{ github.sha }} + path: .install + - name: check + run: make check + env: + REBUILD_DOCS: "FALSE" + RUN_TESTS: "FALSE" + + sipnet: + needs: build + runs-on: ubuntu-latest + container: pecan/depends:develop + services: + postgres: + image: mdillon/postgis:9.5 + options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 + env: + PGHOST: postgres + steps: + - uses: actions/checkout@v2 + - run: apt-get update && apt-get install -y curl postgresql-client + - name: install sipnet + run: | + cd ${HOME} + curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz + tar zxf sipnet_unk.tar.gz + cd sipnet_unk + make + - name: db setup + uses: docker://pecan/db:ci + - name: add models to db + run: ./scripts/add.models.sh + - run: mkdir -p "${HOME}${R_LIBS#'~'}" + - name: cache R packages + uses: actions/cache@v1 + with: + key: pkgcache-${{ github.sha }} + path: ${{ env.R_LIBS }} + restore-keys: | + pkgcache- + - name: integration test + run: ./tests/integration.sh ghaction From c6a9e71f082616ea15bcd9a1405f97c61a9651aa Mon Sep 17 00:00:00 2001 From: istfer Date: Sun, 5 Jul 2020 06:22:58 -0400 Subject: [PATCH 1162/2289] correct units --- models/basgra/R/run_BASGRA.R | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index bb552b7ed61..b029cf07a27 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -351,6 +351,10 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ # again this is not technically GPP outlist[[10]] <- 
udunits2::ud.convert(phot, "g m-2", "kg m-2") / sec_in_day + # Qle W/m2 + outlist[[11]] <- ( output[thisyear, which(outputNames == "EVAP")] + output[thisyear, which(outputNames == "TRAN")] ) * + PEcAn.data.atmosphere::get.lv() / sec_in_day + # ******************** Declare netCDF dimensions and variables ********************# t <- ncdf4::ncdim_def(name = "time", units = paste0("days since ", y, "-01-01 00:00:00"), @@ -365,16 +369,17 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ dims <- list(lon = lon, lat = lat, time = t) nc_var <- list() - nc_var[[1]] <- PEcAn.utils::to_ncvar("LAI", dims) - nc_var[[2]] <- PEcAn.utils::to_ncvar("CropYield", dims) - nc_var[[3]] <- PEcAn.utils::to_ncvar("litter_carbon_content", dims) - nc_var[[4]] <- PEcAn.utils::to_ncvar("fast_soil_pool_carbon_content", dims) - nc_var[[5]] <- PEcAn.utils::to_ncvar("slow_soil_pool_carbon_content", dims) - nc_var[[6]] <- PEcAn.utils::to_ncvar("TotSoilCarb", dims) - nc_var[[7]] <- PEcAn.utils::to_ncvar("SoilResp", dims) - nc_var[[8]] <- PEcAn.utils::to_ncvar("AutoResp", dims) - nc_var[[9]] <- PEcAn.utils::to_ncvar("NEE", dims) + nc_var[[1]] <- PEcAn.utils::to_ncvar("LAI", dims) + nc_var[[2]] <- PEcAn.utils::to_ncvar("CropYield", dims) + nc_var[[3]] <- PEcAn.utils::to_ncvar("litter_carbon_content", dims) + nc_var[[4]] <- PEcAn.utils::to_ncvar("fast_soil_pool_carbon_content", dims) + nc_var[[5]] <- PEcAn.utils::to_ncvar("slow_soil_pool_carbon_content", dims) + nc_var[[6]] <- PEcAn.utils::to_ncvar("TotSoilCarb", dims) + nc_var[[7]] <- PEcAn.utils::to_ncvar("SoilResp", dims) + nc_var[[8]] <- PEcAn.utils::to_ncvar("AutoResp", dims) + nc_var[[9]] <- PEcAn.utils::to_ncvar("NEE", dims) nc_var[[10]] <- PEcAn.utils::to_ncvar("GPP", dims) + nc_var[[11]] <- PEcAn.utils::to_ncvar("Qle", dims) # ******************** Declare netCDF variables ********************# From 0e212a707b691419e621f45f7bb9042e9dcb8329 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun, 5 Jul 2020 19:19:00 +0530 Subject: [PATCH 1163/2289] Delete book.yml --- actions/book.yml | 31 ------------------------------- 1 file changed, 31 deletions(-) delete mode 100644 actions/book.yml diff --git a/actions/book.yml b/actions/book.yml deleted file mode 100644 index 933b3174c34..00000000000 --- a/actions/book.yml +++ /dev/null @@ -1,31 +0,0 @@ -# This is a basic workflow to help you get started with Actions - -name: CI - -on: - push: - branches: master - pull_request: - branches: master - -# A workflow run is made up of one or more jobs that can run sequentially or in parallel -jobs: - # This workflow contains a single job called "build" - build: - # The type of runner that the job will run on - runs-on: ubuntu-latest - container: pecan/depends:develop - - steps: - # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it - - uses: actions/checkout@v2 - - - name: Building book from source - run: cd book_source && make - - # Runs a set of commands using the runners shell - - name: Run a multi-line script - run: | - echo Add other actions to build, - echo test, and deploy your project. 
- From 7a3455bcd90cb0c1d9faaee20624c47285f0f6e2 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun, 5 Jul 2020 19:34:54 +0530 Subject: [PATCH 1164/2289] changed version from v1 to v2 Co-authored-by: Rahul Agrawal <41531498+rahul799@users.noreply.github.com> --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index a633707d5f1..8f976666157 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -17,7 +17,7 @@ jobs: runs-on: ubuntu-latest container: pecan/base:latest steps: - - uses: actions/checkout@v1 + - uses: actions/checkout@v2 - uses: r-lib/actions/setup-r@v1 - uses: r-lib/actions/setup-pandoc@v1 - name: Install rmarkdown From ffaabbc3704faabfbaeb0dcbde3ec1d8e3016db3 Mon Sep 17 00:00:00 2001 From: Mukul Maheshwari Date: Sun, 5 Jul 2020 20:17:48 +0530 Subject: [PATCH 1165/2289] Squashed commit of the following: commit 671027d70d8c04fb74d0c4f1aec5c6a8aa16b288 Merge: 72c5742d5 e11e42782 Author: Mukul Maheshwari Date: Sun Jul 5 16:48:35 2020 +0530 Merge branch 'develop' into book-build-on-actions updated develop with book actions commit 72c5742d5b504018b03ec42d344ebac0ef5c1479 Author: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Sun Jul 5 14:47:53 2020 +0530 created ci.yml commit e11e42782b18d537c55f1863cf8b3bf08253a8ec Merge: bea3f00a6 4f1324a30 Author: Rob Kooper Date: Fri Jul 3 17:47:45 2020 -0500 Merge pull request #2649 from dlebauer/dev_intro2 Minor changes to DEV-INTRO.md commit 4f1324a30bfe01e4935661d84cc86a98ad21921c Merge: bfbdd5aff bea3f00a6 Author: David LeBauer Date: Fri Jul 3 13:54:53 2020 -0700 Merge branch 'develop' into dev_intro2 commit bea3f00a6cf172d410f7ea88c3449d992d0ee031 Merge: b689d08ed e04d204e8 Author: David LeBauer Date: Fri Jul 3 13:53:41 2020 -0700 Merge pull request #2648 from KristinaRiemer/update_ed2in_name Update ED2IN v 2.2.0 name commit bfbdd5aff7f56949d2c991a9bc14153a2030312f Author: David LeBauer Date: Fri Jul 3 13:52:06 2020 -0700 Update DEV-INTRO.md commit 856a7e45a32d26ad5bca982f30968eb9b0e051ae Author: David LeBauer Date: Thu Jul 2 17:32:09 2020 -0700 Update DEV-INTRO.md commit e04d204e8545c5636a37f993ef32c90480186a80 Author: Kristina Riemer Date: Thu Jul 2 12:48:22 2020 -0700 Update ED2IN v 2.2.0 name commit b689d08ed6dcc325c187c37dabeb6fd77f6032dc Merge: 16fbbb3c8 766922b3e Author: Michael Dietze Date: Tue Jun 30 09:32:09 2020 -0400 Merge pull request #2645 from ayushprd/geesmap GEE - SMAP script commit 766922b3e7d7ed4d047e3aee2baf346432675e5a Author: Ayush Prasad Date: Tue Jun 30 16:56:03 2020 +0530 added gee2pecan_smap() commit 16fbbb3c8d20e429552e5d8aa0b88227338f7d65 Merge: 092a43e10 4c7a6e7b8 Author: Michael Dietze Date: Mon Jun 29 15:28:59 2020 -0400 Merge pull request #2610 from robkooper/docker-develop how to develop using docker commit 4c7a6e7b8fd41fb4de1952cc86e9ffb6cf2172a5 Merge: 1e43abe79 092a43e10 Author: Rob Kooper Date: Mon Jun 29 13:18:21 2020 -0500 Merge branch 'develop' into docker-develop commit 1e43abe79fbdfc9b8bf90b0b479093a73e35b4c9 Author: Rob Kooper Date: Mon Jun 29 12:23:44 2020 -0500 missing quote commit d9cde51c408eee225b2e0b8728e3acec07190620 Author: Rob Kooper Date: Mon Jun 29 08:59:40 2020 -0500 more explicit about commands for windows/linux/mac commit 092a43e10b7a8e3184742311b867f08c613aa4ae Merge: d1a30bef4 88a487526 Author: Michael Dietze Date: Sun Jun 28 18:13:20 2020 -0400 Merge pull request #2496 from 
PecanProject/biocro_annual_grass: no carbon storage across years for annual grasses in BioCro

commit 88a487526f3d79b8ebfb66beeac3d5e1fd4584ef
Merge: a868addb4 d1a30bef4
Author: Michael Dietze
Date: Sun Jun 28 17:14:26 2020 -0400

    Merge branch 'develop' into biocro_annual_grass

Merged pull requests in the incoming 'develop' history include, among other commits:

- #2643 Fix for failing documentation build (work around bookdown 0.20)
- #2641 Pass correct object to query.format.vars
- #2637 LAI script for remote data module
- #2636 Improve checking for output variable names in model2netcdf.ED2
- #2634 Sentinel 2 NDVI script for remote data module
- #2632 Use get_postgres_envvars in betyConnect
- #2631 API endpoints for ping, models, workflows & runs (with authentication)
- #2630 Changed ic.process to ic_process
- #2625 Don't override random.effects = TRUE
- #2622 Replace tmvtnorm with TruncatedNormal
- #2617 Updated steps for installing PEcAn using Docker
- #2616 Fix small typos that cause errors for SA runs
- #2613 Fix db-dbsync crash: add .sh
- #2611 Fixed not triggering of "All values bad" in call_MODIS
- #2610 Docker development documentation (DEV-INTRO.md)
- #2608 Do apt-get before install in dbsync build
- #2606 Change order of arguments in call_MODIS()
- #2605 Set ED2 Dockerfile default values to match docker.sh
- #2603 Fix typo in depends Dockerfile
- #2601 Add a true Docker quickstart
- #2599 Delete ED2IN.git
- #2598 Create ED2IN v2.2.0
- #2596 Remove git install
- #2590 FTP has status code 226 for ok
- #2588 Fix not starting after editing files (fixes #2587)
- #2569 Styler workflow

commit dc27dd82de6812553116276a0b9e044779e8d619
Author: David LeBauer
Date: Thu Dec 19 15:53:14 2019 -0700

    clarifying comment

commit f4d2eff335753a77769bf95e86cde50f31faebbc
Author: David LeBauer
Date: Thu Dec 19 15:51:27 2019 -0700

    Update CHANGELOG.md

commit 646c7a0e2eab5b94aacce6c8a2fda1bafeaec834
Author: David LeBauer
Date: Thu Dec 19 15:49:00 2019 -0700

    no carbon storage across years for annual grasses

    When there are multiple years, always restart annual plants from seed.
    Otherwise (I think) these would be handled the same as Miscanthus + Switchgrass or Sugarcane
---
 .github/workflows/book.yml | 61 +
 .github/workflows/ci.yml | 14 -
 .github/workflows/styler-actions.yml | 79 +
 .gitignore | 3 -
 CHANGELOG.md | 14 +
 DEV-INTRO.md | 346 ++-
 actions/book.yml | 31 +
 apps/api/Dockerfile | 30 +
 apps/api/R/auth.R | 81 +
 apps/api/R/entrypoint.R | 37 +
 apps/api/R/general.R | 30 +
 apps/api/R/models.R | 42 +
 apps/api/R/runs.R | 105 +
 apps/api/R/workflows.R | 128 ++
 apps/api/README.md | 20 +
 apps/api/pecanapi-spec.yml | 442 ++++
 apps/api/test_pecanapi.sh | 7 +
 base/db/R/query.dplyr.R | 45 +-
 base/db/R/query.format.vars.R | 16 +-
 base/logger/DESCRIPTION | 2 +-
 base/remote/DESCRIPTION | 2 +-
 base/settings/R/check.all.settings.R | 2 +-
 base/utils/R/sensitivity.R | 5 +-
 base/workflow/inst/batch_run.R | 3 +-
 base/workflow/inst/permutation_tests.R | 1 -
 .../01_install_pecan.Rmd | 34 +-
 .../07_remote_access/01_pecan_api.Rmd | 475 ++++
 .../03_topical_pages/09_standalone_tools.Rmd | 2 +-
 .../03_topical_pages/11_adding_to_pecan.Rmd | 8 +-
 .../94_docker/02_quickstart.Rmd | 82 +-
 book_source/Makefile | 7 +-
 book_source/figures/env-file.PNG | Bin 0 -> 92110 bytes
 docker-compose.dev.yml | 105 +
 docker-compose.yml | 52 +-
 docker.sh | 24 +-
 docker/depends/Dockerfile | 2 +-
 docker/depends/pecan.depends | 2 +-
 docker/env.example | 2 +-
 docker/executor/Dockerfile | 2 +-
 docker/executor/executor.py | 8 +
 models/biocro/R/call_biocro.R | 91 +-
 models/ed/Dockerfile | 6 +-
 models/ed/R/model2netcdf.ED2.R | 21 +-
 models/ed/inst/ED2IN.git | 1259 -----------
 models/ed/inst/ED2IN.r2.2.0 | 1997 +++++++++++++++++
 modules/assim.batch/DESCRIPTION | 2 +-
 modules/assim.batch/R/hier.mcmc.R | 20 +-
 modules/assim.batch/R/pda.utils.R | 8 +-
 .../scripts/benchmark.workflow.FATES_BCI.R | 2 +-
 modules/data.atmosphere/R/met.process.R | 47 +-
 modules/data.atmosphere/R/met.process.stage.R | 1 +
 modules/data.atmosphere/man/browndog.met.Rd | 20 +-
 .../data.atmosphere/man/db.site.lat.lon.Rd | 9 +-
 .../data.atmosphere/man/met.process.stage.Rd | 2 +-
 .../tests/Rcheck_reference.log | 5 +-
 modules/data.remote/R/call_MODIS.R | 20 +-
 modules/data.remote/inst/bands2lai_snap.py | 47 +
 modules/data.remote/inst/bands2ndvi.py | 46 +
 modules/data.remote/inst/gee2pecan_bands.py | 72 +
 modules/data.remote/inst/gee2pecan_smap.py | 152 ++
 modules/data.remote/inst/remote_process.py | 118 +
 .../inst/satellitetools/biophys_xarray.py | 235 ++
 .../data.remote/inst/satellitetools/gee.py | 716 ++++++
 .../inst/satellitetools/test.geojson | 38 +
 modules/data.remote/man/call_MODIS.Rd | 24 +-
 modules/emulator/DESCRIPTION | 4 +-
 modules/emulator/R/minimize.GP.R | 10 +-
 modules/meta.analysis/R/meta.analysis.R | 9 +-
 scripts/compile.sh | 3 +
 shiny/BrownDog/server.R | 10 +-
 shiny/ViewMet/server.R | 2 +-
 shiny/dbsync/Dockerfile | 10 +-
 shiny/dbsync/app.R | 45 +-
 tests/docker.sipnet.xml | 67 +
 web/04-runpecan.php | 6 +-
 web/07-continue.php | 29 +-
 76 files changed, 5910 insertions(+), 1594 deletions(-)
 create mode 100644 .github/workflows/book.yml
 create mode 100644 .github/workflows/styler-actions.yml
 create mode 100644 actions/book.yml
 create mode 100644 apps/api/Dockerfile
 create mode 100644 apps/api/R/auth.R
 create mode 100644 apps/api/R/entrypoint.R
 create mode 100644 apps/api/R/general.R
 create mode 100644 apps/api/R/models.R
 create mode 100644 apps/api/R/runs.R
 create mode 100644 apps/api/R/workflows.R
 create mode 100644 apps/api/README.md
 create mode 100644 apps/api/pecanapi-spec.yml
 create mode 100644 apps/api/test_pecanapi.sh
 create mode 100644 book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
 create mode 100644 book_source/figures/env-file.PNG
 create mode 100644 docker-compose.dev.yml
 delete mode 100644 models/ed/inst/ED2IN.git
 create mode 100644 models/ed/inst/ED2IN.r2.2.0
 create mode 100644 modules/data.remote/inst/bands2lai_snap.py
 create mode 100644 modules/data.remote/inst/bands2ndvi.py
 create mode 100644 modules/data.remote/inst/gee2pecan_bands.py
 create mode 100644 modules/data.remote/inst/gee2pecan_smap.py
 create mode 100644 modules/data.remote/inst/remote_process.py
 create mode 100644 modules/data.remote/inst/satellitetools/biophys_xarray.py
 create mode 100755 modules/data.remote/inst/satellitetools/gee.py
 create mode 100644 modules/data.remote/inst/satellitetools/test.geojson
 create mode 100755 scripts/compile.sh
 create mode 100644 tests/docker.sipnet.xml

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
new file mode 100644
index 00000000000..a633707d5f1
--- /dev/null
+++ b/.github/workflows/book.yml
@@ -0,0 +1,61 @@
+on:
+  push:
+    branches:
+      - master
+      - develop
+      - release/*
+    tags:
+      - v1
+      - v1*
+
+# render book
+name: renderbook
+
+jobs:
+  bookdown:
+    name: Render-Book
+    runs-on: ubuntu-latest
+    container: pecan/base:latest
+    steps:
+      - uses: actions/checkout@v1
+      - uses: r-lib/actions/setup-r@v1
+      - uses: r-lib/actions/setup-pandoc@v1
+      - name: Install rmarkdown
+        run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))'
+      - name: Render Book
+        run: cd book_source && Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd")'
+      - uses: actions/upload-artifact@v2
+        with:
+          name: _book
+          path: book_source/_book/
+
+  checkout-and-deploy:
+    runs-on: ubuntu-latest
+    needs: bookdown
+    steps:
+      - name: Download artifact
+        uses: actions/download-artifact@v2
+        with:
+          # Artifact name
+          name: _book # optional
+          # Destination path
+          path: _book/ # optional
+          # repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Checkout documentation repo
+        uses: actions/checkout@v2
+        with:
+          repository: ${{ github.repository_owner }}/pecan-documentation
+          path: pecan-documentation
+          token: ${{ secrets.GH_PAT }}
+      - run: |
+          export VERSION=${GITHUB_REF##*/}_test
+          cd pecan-documentation && mkdir -p $VERSION
+          git config --global user.email "pecanproj@gmail.com"
+          git config --global user.name "GitHub Documentation Robot"
+          rsync -a --delete ../_book/ $VERSION
+          git add --all *
+          git commit -m "Build book from pecan revision $GITHUB_SHA" || true
+          git push -q origin master

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 518c4aad5de..7965ee45819 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -23,20 +23,6 @@ jobs:
     runs-on: ubuntu-latest
     container: pecan/depends:develop
     steps:
-      - name: check git version
-        id: gitversion
-        run: |
-          v=$(git --version | grep -oE '[0-9\.]+')
-          v='cat(numeric_version("'${v}'") < "2.18")'
-          echo "##[set-output name=isold;]$(Rscript -e "${v}")"
-      - name: upgrade git if needed
-        # Hack: actions/checkout wants git >= 2.18, rocker 3.5 images have 2.11
-        # Assuming debian stretch because newer images have git >= 2.20 already
-        if: steps.gitversion.outputs.isold == 'TRUE'
-        run: |
-          echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list
-          apt-get update
-          apt-get -t stretch-backports upgrade -y git
       - uses: actions/checkout@v2
       - run: mkdir -p "${HOME}${R_LIBS#'~'}"
        shell: bash
diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
new file mode 100644
index 00000000000..ff3869db622
--- /dev/null
+++ b/.github/workflows/styler-actions.yml
@@ -0,0 +1,79 @@
+on:
+  issue_comment:
+    types: [created]
+name: Commands
+jobs:
+  style:
+    if: startsWith(github.event.comment.body, '/style')
+    name: style
+    runs-on: macOS-latest
+    steps:
+      - id: file_changes
+        uses: trilom/file-changes-action@v1.2.3
+      - name: testing
+        run: echo '${{ steps.file_changes.outputs.files_modified}}'
+      - uses: actions/checkout@v2
+      - uses: r-lib/actions/pr-fetch@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: r-lib/actions/setup-r@master
+      - name: Install dependencies
+        run: |
+          Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")'
+          Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")'
+      - name: string operations
+        shell: bash
+        run: |
+          echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt
+          cat names.txt | tr -d '[]' > changed_files.txt
+          text=$(cat changed_files.txt)
+          IFS=',' read -ra ids <<< "$text"
+          for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done
+      - name: Upload artifacts
+        uses: actions/upload-artifact@v1
+        with:
+          name: artifacts
+          path: files_to_style.txt
+      - name: Style
+        run: for i in $(cat files_to_style.txt); do Rscript -e "styler::style_file("$i")"; done
+      - name: commit
+        run: |
+          git add \*.R
+          git add \*.Rmd
+          if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated syle update' ; fi
+      - uses: r-lib/actions/pr-push@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+
+
+  check:
+    needs: [style]
+    runs-on: ubuntu-latest
+    container: pecan/depends:develop
+    steps:
+      - uses: actions/checkout@v2
+      - uses: r-lib/actions/pr-fetch@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: r-lib/actions/setup-r@master
+      - name : download artifacts
+        uses: actions/download-artifact@v1
+        with:
+          name: artifacts
+      - name : make
+        shell: bash
+        run: |
+          cut -d / -f 1-2 artifacts/files_to_style.txt | tr -d '"' > changed_dirs.txt
+          cat changed_dirs.txt
+          sort changed_dirs.txt | uniq > needs_documenting.txt
+          cat needs_documenting.txt
+          for i in $(cat needs_documenting.txt); do make .doc/${i}; done
+      - name: commit
+        run: |
+          git config --global user.email "pecan_bot@example.com"
+          git config --global user.name "PEcAn stylebot"
+          git add \*.Rd
+          if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated documentation update' ; fi
+      - uses: r-lib/actions/pr-push@master
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}

diff --git a/.gitignore b/.gitignore
index badc2505dbf..1e523cbe68f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -99,6 +99,3 @@ contrib/modellauncher/modellauncher
 
 # don't checkin renv
 /renv/
-
-# ignore IP mapping to lat/lon (is about 65MB)
-shiny/dbsync/IP2LOCATION-LITE-DB5.BIN

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 19e373161d5..dab05d00119 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,8 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 
 ### Fixed
 
+- Use initial biomass pools for Sorghum and Setaria #2495, #2496
+- PEcAn.DB::betyConnect() is now smarter, and will try to use either config.php or environment variables to create a connection. It has switched to use the db.open helper function (#2632).
 - PEcAn.utils::tranformstats() assumed the statistic names column of its input was a factor. It now accepts character too, and returns the same class given as input (#2545).
 - fixed and added tests for `get.rh` function in PEcAn.data.atmosphere
 - Invalid .zenodo.json that broke automatic archiving on Zenodo ([b56ef53](https://github.com/PecanProject/pecan/commit/b56ef53888d73904c893b9e8c8cfaeedd7b1edbe))
@@ -21,8 +23,12 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - When building sipnet model would not set correct model version
 - Update pecan/depends docker image to have latest Roxygen and devtools.
 - Update ED docker build, will now build version 2.2.0 and git
+- Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625
+- model2netcdf.ED2 no longer detects which variable names `-T-` files have based on ED2 version (#2623)
 
 ### Changed
+
+- Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up per #2621.
 - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592).
 - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083).
 - `PEcAn.DB::insert_table` now uses `DBI::dbAppendTable` internally instead of manually constructed SQL (#2552).
@@ -34,8 +40,13 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - No longer writing an arbitrary num for each PFT, this was breaking ED runs potentially.
 - The pecan/data container has no longer hardcoded path for postgres
 - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511).
+- data.remote: Arguments to the function `call_MODIS()` have been changed (issue #2519).
 
 ### Added
+
+- Documentation in [DEV-INTRO.md](DEV-INTRO.md) on development in a docker environment (#2553)
+- PEcAn API that can be used to talk to PEcAn servers. Endpoints to GET the details about the server that the user is talking to, PEcAn models, workflows & runs. Authentication enabled. (#2631)
+- New versioned ED2IN template: ED2IN.2.2.0 (#2143) (replaces ED2IN.git)
 - model_info.json and Dockerfile to template (#2567)
 - Dockerize BASGRA_N model.
 - Basic coupling for models BASGRA_N and STICS.
@@ -47,10 +58,13 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - New shiny application to show database synchronization status (shiny/dbsync)
 
 ### Removed
+
+- Removed ED2IN.git (#2599) 'definitely going to break things for people' - but they can still use PEcAn <=1.7.1
 - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563).
 - Scripts `dump.pgsql.sh` and `dump.mysql.sh` have been deleted. See the ["BeTY database administration"](https://pecanproject.github.io/pecan-documentation/develop/database.html) chapter of the PEcAn documentation for current recommendations (#2563).
 - Old dependency management scripts `check.dependencies.sh`, `update.dependencies.sh`, and `install_deps.R` have been deleted. Use `generate_dependencies.R` and the automatic dependency handling built into `make install` instead (#2563).
+
 
 ## [1.7.1] - 2018-09-12
 
 ### Fixed

diff --git a/DEV-INTRO.md b/DEV-INTRO.md
index b1881ba44f4..b559c3fc07a 100644
--- a/DEV-INTRO.md
+++ b/DEV-INTRO.md
@@ -1,59 +1,325 @@
-PEcAn Development
-=================
+# PEcAn Development
 
-Directory Structure
--------------------
+This is a minimal guide to getting started with PEcAn development under Docker. You can find more information about docker in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html).
 
-### pecan/
+## Git Repository and Workflow
 
-* modules/ Contains the modules that make up PEcAn
-* web/ The PEcAn web app to start a run and view outputs.
-* models/ Code to create the model specific configurations.
-* documentation/ Documentation about what PEcAn is and how to use it.
+We recommend following the [gitflow](https://nvie.com/posts/a-successful-git-branching-model/) workflow and working in your own [fork of the PEcAn repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). See the [PEcAn developer guide](book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd) for further details. In the `/scripts` folder there is a script called [syncgit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository.
 
-### Modules (R packages)
+To clone the PEcAn repository:
 
-* General
-** all
-** utils
-** db
-* Analysis
-** modules/meta.analysis
-** modules/uncertainty"
-** modules/data.land
-** modules/data.atmosphere
-** modules/assim.batch
-** modules/assim.sequential
-** modules/priors
-* Model Interfaces
-** models/ed
-** models/sipnet
-** models/biocro
+```sh
+git clone git@github.com:pecanproject/pecan
+cd pecan
+# alternatively, if you haven't set up ssh keys with GitHub
+# git clone https://github.com/PecanProject/pecan
+```
+
+## Developing in Docker
+
+The use of Docker in PEcAn is described in detail in the [PEcAn documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html). This is intended as a quick start.
+
+### Installing Docker
+
+To install Docker and docker-compose, see the docker documentation:
+- Docker Desktop on [Mac OSX](https://docs.docker.com/docker-for-mac/install/) or [Windows](https://docs.docker.com/docker-for-windows/install/)
+- Docker (e.g. [Ubuntu](https://docs.docker.com/compose/install/)) and [docker-compose](https://docs.docker.com/compose/install/) on your Linux operating system.
+
+_Note for Linux users:_ add your user to the docker group. This will prevent you from having to use `sudo` to start the docker containers, and makes sure that any file that is written to a mounted volume is owned by you. This can be done using
+
+```sh
+# for linux users
+sudo adduser ${USER} docker
+```
+
+### Deploying PEcAn in Docker
+
+To get started with development in docker we need to bring up the docker stack first. In the main pecan folder you will find the [docker-compose.yml](docker-compose.yml) file that can be used to bring up the pecan stack. There is also the [docker-compose.dev.yml](docker-compose.dev.yml) file, which adds additional containers and changes some services to make development easier.
+
+By default docker-compose will use the files `docker-compose.yml` and `docker-compose.override.yml`. We will use the default `docker-compose.yml` file from PEcAn. The `docker-compose.override.yml` file can be used to configure it for your specific environment; in our case we will use it to set up the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e.:
+
+For Linux/MacOSX
+
+```
+cp docker-compose.dev.yml docker-compose.override.yml
+```
+
+For Windows
+
+```
+copy docker-compose.dev.yml docker-compose.override.yml
+```
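+As a quick sanity check (a suggestion, not part of the original workflow), you can confirm that docker-compose now merges both files; the standard `config` subcommand prints the combined configuration, and `--services` reduces that to the service names:
+
+```sh
+# list the services defined by docker-compose.yml merged with docker-compose.override.yml
+docker-compose config --services
+```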
The `docker-compose.override.yml` file can be used to configure the stack for your specific environment; in our case, we will use it to set up the docker environment for development. Copy the `docker-compose.dev.yml` file to `docker-compose.override.yml` to start working with your own override file, i.e.:
+
+For Linux/MacOSX
+
+```
+cp docker-compose.dev.yml docker-compose.override.yml
+```
+
+For Windows
+
+```
+copy docker-compose.dev.yml docker-compose.override.yml
+```
+
+You can now use the command `docker-compose` to work with the containers setup for development. **The rest of this document assumes you have done this step.**
+
+### First time setup
+
+The steps in this section only need to be done the first time you start working with the stack in docker. After this is done you can skip these steps. You can find more detail about the docker commands in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html).
+
+* setup .env file
+* create folders to hold the data
+* load the postgresql database
+* load some test data
+* copy all R packages (optional but recommended)
+* setup for web folder development (optional)
+
+#### .env file
+
+You can copy the [`docker/env.example`](docker/env.example) file to `.env` in your pecan folder:
+
+For Linux/MacOSX
+
+```sh
+cp docker/env.example .env
+```
+
+For Windows
+
+```
+copy docker\env.example .env
+```
+
+The variables we want to modify are:
+
+* `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers
+* `PECAN_VERSION` set this to develop, the docker image we start with
+
+Both of these variables should also be uncommented by removing the `#` preceding them. Running `egrep -v '^(#|$)' .env` afterwards should show the active settings, as produced by the commands below. On Windows you will also need to set the variable `PWD`, and on Linux you will need to set `UID` and `GID` (for rstudio).
+
+For Linux
+
+```
+echo "COMPOSE_PROJECT_NAME=pecan" >> .env
+echo "PECAN_VERSION=develop" >> .env
+echo "UID=$(id -u)" >> .env
+echo "GID=$(id -g)" >> .env
+```
+
+For MacOSX
+
+```
+echo "COMPOSE_PROJECT_NAME=pecan" >> .env
+echo "PECAN_VERSION=develop" >> .env
+```
+
+For Windows:
+
+```
+echo "COMPOSE_PROJECT_NAME=pecan" >> .env
+echo "PECAN_VERSION=develop" >> .env
+echo "PWD=%CD%" >> .env
+```
+
+Once you have set up `docker-compose.override.yml` and the `.env` file, it is time to pull all docker images that will be used. Doing this will make sure you have the latest version of those images on your local system.
+
+```
+docker-compose pull
+```
+
+#### folders (optional)
+
+The goal of the development setup is to share the development folder with your container whilst minimizing the latency. The folders are set up so that your pecan folder is shared, while the rest of the folders stay managed by docker. Some of this is based on a presentation done during [DockerCon 2020](https://docker.events.cube365.net/docker/dockercon/content/Videos/92BAM7vob5uQ2spZf).
In this talk it is recommended to keep the database on the filesystem managed by docker, as well as any other folders that are not directly modified on the host system (not using the docker managed volumes could lead to a large speed loss when reading/writing to the disk). The `docker-compose.override.yml` file can be modified to keep all the data on the local filesystem; you will need to comment out the appropriate blocks. If you are sharing more than the pecan home directory you will need to make sure that these folders exist. As recommended in the video, keep these folders outside of the actual pecan folder to allow for better caching by the docker system.
+
+If you have commented out the volumes in `docker-compose.override.yml` you will need to create the folders. Assuming you have not modified the values, you can do this with:
+
+```
+mkdir -p $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik}
+```
+
+The following volumes are specified:
+
+- **pecan_home** : the checked out folder of PEcAn. This is shared with the executor and rstudio containers, allowing you to share and compile PEcAn. (defaults to the current folder)
+- **pecan_web** : the checked out web folder of PEcAn. This is shared with the web container, allowing you to share and modify the PEcAn web app. (defaults to the web folder in the current folder)
+- **pecan_lib** : holds all the R packages for the specific version of PEcAn and R. This folder is shared amongst all other containers, and will contain the compiled PEcAn code. (defaults to managed by docker, or $HOME/volumes/pecan/lib)
+- **pecan** : holds all the data, such as workflows and any downloaded data. (defaults to managed by docker, or $HOME/volumes/pecan/pecan)
+- **traefik** : holds persistent data for the web proxy, which directs incoming traffic to the correct container. (defaults to managed by docker, or $HOME/volumes/pecan/traefik)
+- **postgres** : holds the actual database data. If you want to back up the database, you can stop the postgres container and zip up this folder. (defaults to managed by docker, or $HOME/volumes/pecan/postgres)
+- **rabbitmq** : holds persistent information for the message broker (rabbitmq). (defaults to managed by docker, or $HOME/volumes/pecan/rabbitmq)
+- **portainer** : holds persistent data for the portainer service, if you have enabled it. (defaults to managed by docker, or $HOME/volumes/pecan/portainer)
+
+These folders will hold all the persistent data for each of the respective containers and can grow. For example, the postgres database is multiple GB, and the pecan folder will hold all data produced by the workflows, including any downloaded data, and can grow to many gigabytes.
+
+#### postgresql database
+
+First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start):
+
+```
+docker-compose up -d postgres rabbitmq
+```
+
+Wait a few minutes for postgresql to become ready; you can follow its progress with `docker-compose logs postgres`.
+
+Once the database has finished starting up, you can initialize and load it using the following commands. The first command makes sure we have the latest version of the image; the second actually loads the information into the database.
+
+```
+docker pull pecan/db
+docker run --rm --network pecan_pecan pecan/db
+```
+
+Once that is done, we create two users for BETY: the first is a guest user that you can use to log in to the BETY interface; the second is a user with admin rights.
+
+```
+docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4
+docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1
+```
+
+#### load example data
+
+Once the database is loaded we can add some example data; some of the example runs, and runs for the ED model, assume this data is available. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers. As with the database, we first pull the latest version of the image, and then execute the image to copy all the data:
+
+```
+docker pull pecan/data:develop
+docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+```
+
+#### copy R packages (optional but recommended)
+
+Next, copy the R packages from a container to the volume `pecan_lib`. This is not strictly needed, but will speed up the first compilation. Later we will put our newly compiled code here as well. This folder is shared with all PEcAn containers, allowing you to compile the code in one place and have the compiled code available in all other containers. For example, after modifying the code for a model, you can compile it in the rstudio container and see the results in the model container.
+
+You can copy all the data using the following command. This will copy all compiled packages to your local machine.
+
+```
+docker run -ti --rm -v pecan_lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/
+```
+
+#### copy web config file (optional)
+
+The `docker-compose.override.yml` file has a section that enables editing the web application; it is commented out by default. Before uncommenting it, you will need to copy config.php from the docker/web folder. You can do this using:
+
+For Linux/MacOSX
-```
\ No newline at end of file
+```
+cp docker/web/config.docker.php web/config.php
+```
+
+For Windows
+
+```
+copy docker\web\config.docker.php web\config.php
+```
+
+### PEcAn Development
+
+To begin development we first have to bring up the full PEcAn stack. This assumes you have completed the first-time setup above. You don't need to stop any running containers; you can use the following command to start all containers. At this point you have PEcAn running in docker.
+
+```
+docker-compose up -d
+```
+
+The current folder (most likely your clone of the git repository) is mounted in some containers as `/pecan`, and in the case of rstudio also in your home folder as `pecan`. You can see exactly which containers in `docker-compose.override.yml`.
+
+You can now modify the code on your local machine, or you can use [rstudio](http://localhost:8000) in the docker stack. Once you have made changes to the code, you can compile it either in the terminal of rstudio (`cd pecan && make`) or using `./scripts/compile.sh` from your machine (the latter is nothing more than a shell script that runs `docker-compose exec executor sh -c 'cd /pecan && make'`).
+
+The compiled code is written to `/usr/local/lib/R/site-library`, which is mapped to `volumes/lib` on your machine. This same folder is mounted in many other containers, allowing you to share the same PEcAn modules in all containers.
Now if you change a module and compile it, all other containers will see and use the new version of your module.
+
+To compile the PEcAn code you can run the `make` command in either the rstudio container or the executor container. The script [`compile.sh`](scripts/compile.sh) will run make inside the executor container.
+
+### Workflow Submission
+
+You can submit your workflow either from the executor container or from the rstudio container. For example, to run the `docker.sipnet.xml` workflow located in the tests folder you can use:
+
+```
+docker-compose exec executor bash
+# inside the container
+cd /pecan/tests
+Rscript ../web/workflow.R docker.sipnet.xml
+```
+
+A better way of doing this was developed as part of GSoC; you can leverage the RESTful interface, or use the new R PEcAn API package.
+
+# Directory Structure
+
+Following are the main folders inside the pecan repository.
+
+### base (R packages)
+
+These are the core packages of PEcAn. Most other packages will depend on the packages in this folder.
+
+### models (R packages)
+
+Each subfolder contains the required pieces to run the model in PEcAn.
+
+### modules (R packages)
+
+Contains packages that either do analysis, or download and convert different data products.
+
+### web (PHP + javascript)
+
+The PEcAn web application.
+
+### shiny (R + shiny)
+
+Each subfolder is its own shiny application.
+
+### book_source (RMarkdown)
+
+The PEcAn documentation that is compiled and uploaded to the PEcAn webpage.
+
+### docker
+
+Some of the docker build files. The Dockerfiles for each model are placed in the models folder.
+
+### scripts
+
+Small scripts that are used as part of the development and installation of PEcAn.
+
+# Advanced Development Options
+
+## Reset all containers/database
+
+If you want to start from scratch and remove all old data, but keep your pecan checked out folder, you can remove the folders where you have written the data (see "folders" above). You will also need to remove any of the docker managed volumes. To see all volumes you can do `docker volume ls -q -f name=pecan`. If you are sure, you can either remove them one by one, or remove them all at once using the command below. **THIS DESTROYS ALL DATA IN DOCKER MANAGED VOLUMES.**
+
+```
+docker volume rm $(docker volume ls -q -f name=pecan)
+```
+
+If you changed the `docker-compose.override.yml` file to point to a location on disk for some of the containers (instead of having them managed by docker), you will need to delete the data on your local disk yourself; docker will NOT do this.
+
+## Reset the lib folder
+
+If you want to reset the pecan lib folder that is mounted across all containers, for example when there is a new version of PEcAn or a new version of R, you will need to delete the volume pecan_lib and repopulate it. To delete the volume use the following commands, and then look at "copy R packages" above to copy the data again.
+
+```
+docker-compose down
+docker volume rm pecan_lib
+```
+
+## Linux and User permissions
+
+(On Mac OSX and Windows, files should automatically be owned by the user running the docker-compose commands.)
+
+If you use mounted folders, make sure that these folders are writable by the containers. Docker on Linux will try to preserve the file permissions, so it might be necessary to give the folders read/write permissions for everybody. This can be done by using `chmod 777 $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik}`.
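+
+To check that the permissions and ownership look right, you can list the shared folders (a quick sketch, assuming the default `$HOME/volumes/pecan` layout used above):
+
+```
+ls -ld $HOME/volumes/pecan/{lib,pecan,portainer,postgres,rabbitmq,traefik}
+```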
+
+Alternatively, you can leverage NFS to mount the filesystem into your local docker image, changing the files to be owned by the user specified in the export file. Try to limit this to only your PEcAn folder, since it will allow anybody on this system to get access to the exported folder as you!
+
+First install the NFS server:
+
+```
+apt-get install nfs-kernel-server
+```
+
+Next export your home directory:
+
+```
+echo -e "$PWD\t127.0.0.1(rw,no_subtree_check,all_squash,anonuid=$(id -u),anongid=$(id -g))" | sudo tee -a /etc/exports
+```
+
+Then export the filesystem:
+
+```
+sudo exportfs -va
+```
+
+At this point you have exported your home directory, but only to your local machine. All files written to that exported filesystem will be owned by you (`id -u`) and your primary group (`id -g`).
+
+Finally we can modify the `docker-compose.override.yml` file to allow for writing files to your PEcAn folder as you:
+
+```
+volumes:
+  pecan_home:
+    driver_opts:
+      type: "nfs"
+      device: ":${PWD}"
+      o: "addr=127.0.0.1"
+```
diff --git a/actions/book.yml b/actions/book.yml
new file mode 100644
index 00000000000..933b3174c34
--- /dev/null
+++ b/actions/book.yml
@@ -0,0 +1,31 @@
+# This is a basic workflow to help you get started with Actions
+
+name: CI
+
+on:
+  push:
+    branches: master
+  pull_request:
+    branches: master
+
+# A workflow run is made up of one or more jobs that can run sequentially or in parallel
+jobs:
+  # This workflow contains a single job called "build"
+  build:
+    # The type of runner that the job will run on
+    runs-on: ubuntu-latest
+    container: pecan/depends:develop
+
+    steps:
+      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
+      - uses: actions/checkout@v2
+
+      - name: Building book from source
+        run: cd book_source && make
+
+      # Runs a set of commands using the runner's shell
+      - name: Run a multi-line script
+        run: |
+          echo Add other actions to build,
+          echo test, and deploy your project.
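+          # (the echo steps above are placeholders from the GitHub Actions
+          #  starter template; replace them with real build/test/deploy steps)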
+
diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile
new file mode 100644
index 00000000000..abec70cff90
--- /dev/null
+++ b/apps/api/Dockerfile
@@ -0,0 +1,30 @@
+# this needs to be at the top, what version are we building
+ARG IMAGE_VERSION="latest"
+
+
+# --------------------------------------------------------------------------
+# PECAN FOR MODEL BASE IMAGE
+# --------------------------------------------------------------------------
+FROM pecan/base:${IMAGE_VERSION}
+LABEL maintainer="Tezan Sahu "
+
+
+COPY ./ /api
+
+WORKDIR /api/R
+
+# --------------------------------------------------------------------------
+# Variables to store in docker image (most of them come from the base image)
+# --------------------------------------------------------------------------
+ENV AUTH_REQ="yes" \
+    HOST_ONLY="no" \
+    PGHOST="postgres"
+
+RUN apt-get update
+RUN apt-get install libsodium-dev -y
+RUN Rscript -e "devtools::install_version('promises', '1.1.0', repos = 'http://cran.rstudio.com')" \
+    && Rscript -e "devtools::install_version('webutils', '1.1', repos = 'http://cran.rstudio.com')" \
+    && Rscript -e "devtools::install_github('rstudio/swagger')" \
+    && Rscript -e "devtools::install_github('rstudio/plumber')"
+
+# COMMAND TO RUN
+CMD Rscript entrypoint.R
\ No newline at end of file
diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R
new file mode 100644
index 00000000000..6cf3114a7e3
--- /dev/null
+++ b/apps/api/R/auth.R
@@ -0,0 +1,81 @@
+library(dplyr)
+
+#* Obtain the encrypted password for a user
+#* @param username Username, which is also the 'salt'
+#* @param password Unencrypted password
+#* @param secretkey Secret Key, which if null, is set to 'notasecret'
+#* @return Encrypted password
+#* @author Tezan Sahu
+get_crypt_pass <- function(username, password, secretkey = NULL) {
+  secretkey <- if(is.null(secretkey)) "notasecret" else secretkey
+  dig <- secretkey
+  salt <- username
+  for (i in 1:10) {
+    dig <- digest::digest(
+      paste(dig, salt, password, secretkey, sep="--"),
+      algo="sha1",
+      serialize=FALSE
+    )
+  }
+  return(dig)
+}
+
+
+
+#* Check if the encrypted password for the user is valid
+#* @param username Username
+#* @param crypt_pass Encrypted password
+#* @return TRUE if encrypted password is correct, else FALSE
+#* @author Tezan Sahu
+validate_crypt_pass <- function(username, crypt_pass) {
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  res <- tbl(dbcon, "users") %>%
+    filter(login == username,
+           crypted_password == crypt_pass) %>%
+    count() %>%
+    collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (res == 1) {
+    return(TRUE)
+  }
+
+  return(FALSE)
+}
+
+#* Filter to authenticate a user calling the PEcAn API
+#* @param req The request
+#* @param res The response to be set
+#* @return Appropriate response
+#* @author Tezan Sahu
+authenticate_user <- function(req, res) {
+  # API endpoints that do not require authentication
+  if (
+    grepl("swagger", req$PATH_INFO, ignore.case = TRUE) ||
+    grepl("openapi.json", req$PATH_INFO, fixed = TRUE) ||
+    grepl("ping", req$PATH_INFO, ignore.case = TRUE) ||
+    grepl("status", req$PATH_INFO, ignore.case = TRUE))
+  {
+    return(plumber::forward())
+  }
+
+  if (!is.null(req$HTTP_AUTHORIZATION)) {
+    # HTTP_AUTHORIZATION is of the form "Basic <base64-encoded credentials>",
+    # where the decoded <credentials> string is of the form <username>:<password>
+    auth_details <- strsplit(rawToChar(jsonlite::base64_dec(strsplit(req$HTTP_AUTHORIZATION, " +")[[1]][2])), ":")[[1]]
+    username <- auth_details[1]
+    password <- auth_details[2]
+    crypt_pass <- get_crypt_pass(username, password)
+
+    if(validate_crypt_pass(username, crypt_pass)){
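+      # the supplied credentials match a user in BETY, so let the request
+      # continue on to the endpoint that was originally requested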
+      return(plumber::forward())
+    }
+
+  }
+
+  res$status <- 401 # Unauthorized
+  return(list(error="Authentication required"))
+}
\ No newline at end of file
diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R
new file mode 100644
index 00000000000..fc3ed32d604
--- /dev/null
+++ b/apps/api/R/entrypoint.R
@@ -0,0 +1,37 @@
+#* This is the entry point to the PEcAn API.
+#* All API endpoints (& filters) are mounted here
+#* @author Tezan Sahu
+
+source("auth.R")
+source("general.R")
+
+root <- plumber::plumber$new()
+root$setSerializer(plumber::serializer_unboxed_json())
+
+# Filter for authenticating users trying to hit the API endpoints
+root$filter("require-auth", authenticate_user)
+
+# The /api/ping & /api/status are standalone API endpoints
+# implemented using handle() because of restrictions of plumber
+# to mount multiple endpoints on the same path (or subpath)
+root$handle("GET", "/api/ping", ping)
+root$handle("GET", "/api/status", status)
+
+# The endpoints mounted here are related to details of PEcAn models
+models_pr <- plumber::plumber$new("models.R")
+root$mount("/api/models", models_pr)
+
+# The endpoints mounted here are related to details of PEcAn workflows
+workflows_pr <- plumber::plumber$new("workflows.R")
+root$mount("/api/workflows", workflows_pr)
+
+# The endpoints mounted here are related to details of PEcAn runs
+runs_pr <- plumber::plumber$new("runs.R")
+root$mount("/api/runs", runs_pr)
+
+# The API server is bound to 0.0.0.0 on port 8000
+# The Swagger UI for the API draws its source from the pecanapi-spec.yml file
+root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...) {
+  spec <- yaml::read_yaml("../pecanapi-spec.yml")
+  spec
+})
\ No newline at end of file
diff --git a/apps/api/R/general.R b/apps/api/R/general.R
new file mode 100644
index 00000000000..5b8fcc7fd67
--- /dev/null
+++ b/apps/api/R/general.R
@@ -0,0 +1,30 @@
+#* Function to be executed when the /api/ping endpoint is called
+#* If a successful connection to the API server is established, this function will return the "pong" message
+#* @return Mapping containing response as "pong"
+#* @author Tezan Sahu
+ping <- function(req){
+  res <- list(request="ping", response="pong")
+  res
+}
+
+#* Function to get the status & basic information about the Database Host
+#* @return Details about the database host
+#* @author Tezan Sahu
+status <- function() {
+
+  ## helper function to obtain environment variables
+  get_env_var = function (item, default = "unknown") {
+    value = Sys.getenv(item)
+    if (value == "") default else value
+  }
+
+  dbcon <- PEcAn.DB::betyConnect()
+  res <- list(host_details = PEcAn.DB::dbHostInfo(dbcon))
+
+  res$pecan_details <- list(
+    version = get_env_var("PECAN_VERSION"),
+    branch = get_env_var("PECAN_GIT_BRANCH"),
+    gitsha1 = get_env_var("PECAN_GIT_CHECKSUM")
+  )
+
+  PEcAn.DB::db.close(dbcon)
+  return(res)
+}
\ No newline at end of file
diff --git a/apps/api/R/models.R b/apps/api/R/models.R
new file mode 100644
index 00000000000..ef8da6582cd
--- /dev/null
+++ b/apps/api/R/models.R
@@ -0,0 +1,42 @@
+library(dplyr)
+
+#' Retrieve the details of a particular version of a model
+#' @param model_name Model name (character)
+#' @param revision Model version/revision (character)
+#' @return Model details
+#' @author Tezan Sahu
+#* @get /
+getModels <- function(model_name="all", revision="all", res){
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Models <- tbl(dbcon, "models") %>%
+    select(model_id = id, model_name, revision, modeltype_id)
+
+  if (model_name != "all"){
+    Models <- Models %>%
+      filter(model_name == !!model_name)
+  }
+
+  if (revision != "all"){
+    Models <- Models %>%
+      filter(revision == !!revision)
+  }
+
+  Models <- tbl(dbcon, "modeltypes") %>%
+    select(modeltype_id = id, model_type = name) %>%
+    inner_join(Models, by = "modeltype_id") %>%
+    arrange(model_id)
+
+  qry_res <- Models %>% collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error="Model(s) not found"))
+  }
+  else {
+    return(list(models=qry_res))
+  }
+}
\ No newline at end of file
diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
new file mode 100644
index 00000000000..c3daab5fa53
--- /dev/null
+++ b/apps/api/R/runs.R
@@ -0,0 +1,105 @@
+library(dplyr)
+
+#' Get the list of runs (belonging to a particular workflow)
+#' @param workflow_id Workflow id (character)
+#' @param offset The number of runs to skip before starting to collect the result set
+#' @param limit The number of runs to return (one of 10, 20, 50, 100 or 500)
+#' @return List of runs (belonging to a particular workflow)
+#' @author Tezan Sahu
+#* @get /
+getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){
+  if (! limit %in% c(10, 20, 50, 100, 500)) {
+    res$status <- 400
+    return(list(error = "Invalid value for parameter"))
+  }
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Runs <- tbl(dbcon, "runs") %>%
+    select(id, model_id, site_id, parameter_list, ensemble_id, start_time, finish_time)
+
+  Runs <- tbl(dbcon, "ensembles") %>%
+    select(runtype, ensemble_id=id, workflow_id) %>%
+    full_join(Runs, by="ensemble_id") %>%
+    filter(workflow_id == !!workflow_id)
+
+  qry_res <- Runs %>%
+    arrange(id) %>%
+    collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) {
+    res$status <- 404
+    return(list(error="Run(s) not found"))
+  }
+  else {
+    has_next <- FALSE
+    has_prev <- FALSE
+    if (nrow(qry_res) > (as.numeric(offset) + as.numeric(limit))) {
+      has_next <- TRUE
+    }
+    if (as.numeric(offset) != 0) {
+      has_prev <- TRUE
+    }
+    qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ]
+    result <- list(runs = qry_res)
+    result$count <- nrow(qry_res)
+    if(has_next){
+      result$next_page <- paste0(
+        req$rook.url_scheme, "://",
+        req$HTTP_HOST,
+        "/api/runs",
+        req$PATH_INFO,
+        substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
+        (as.numeric(limit) + as.numeric(offset)),
+        "&limit=",
+        limit
+      )
+    }
+    if(has_prev) {
+      result$prev_page <- paste0(
+        req$rook.url_scheme, "://",
+        req$HTTP_HOST,
+        "/api/runs",
+        req$PATH_INFO,
+        substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
+        max(0, (as.numeric(offset) - as.numeric(limit))),
+        "&limit=",
+        limit
+      )
+    }
+
+    return(result)
+  }
+}
+
+#################################################################################################
+
+#' Get the details of the run specified by the id
+#' @param id Run id (character)
+#' @return Details of requested run
+#' @author Tezan Sahu
+#* @get /
+getWorkflowDetails <- function(id, res){
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Runs <- tbl(dbcon, "runs") %>%
+    select(-outdir, -outprefix, -setting)
+
+  Runs <- tbl(dbcon, "ensembles") %>%
+    select(runtype, ensemble_id=id, workflow_id) %>%
+    full_join(Runs, by="ensemble_id") %>%
+    filter(id == !!id)
+
+  qry_res <- Runs %>% collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error="Run with specified ID was not found"))
+  }
+  else {
+    return(qry_res)
+  }
+}
\ No newline at end of file
diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R
new file mode 100644
index 00000000000..f6007a342e7
--- /dev/null
+++ b/apps/api/R/workflows.R
@@ -0,0 +1,128 @@
+library(dplyr)
+
+#' Get the list of workflows (using a particular model & site, if specified)
+#' @param model_id Model id (character)
+#' @param site_id Site id (character)
+#' @param offset The number of workflows to skip before starting to collect the result set
+#' @param limit The number of workflows to return (one of 10, 20, 50, 100 or 500)
+#' @return List of workflows (using a particular model & site, if specified)
+#' @author Tezan Sahu
+#* @get /
+getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res){
+  if (! limit %in% c(10, 20, 50, 100, 500)) {
+    res$status <- 400
+    return(list(error = "Invalid value for parameter"))
+  }
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Workflow <- tbl(dbcon, "workflows") %>%
+    select(id, model_id, site_id)
+
+  Workflow <- tbl(dbcon, "attributes") %>%
+    select(id = container_id, properties = value) %>%
+    full_join(Workflow, by = "id")
+
+  if (!is.null(model_id)) {
+    Workflow <- Workflow %>%
+      filter(model_id == !!model_id)
+  }
+
+  if (!is.null(site_id)) {
+    Workflow <- Workflow %>%
+      filter(site_id == !!site_id)
+  }
+
+  qry_res <- Workflow %>%
+    select(-model_id, -site_id) %>%
+    arrange(id) %>%
+    collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) {
+    res$status <- 404
+    return(list(error="Workflows not found"))
+  }
+  else {
+    has_next <- FALSE
+    has_prev <- FALSE
+    if (nrow(qry_res) > (as.numeric(offset) + as.numeric(limit))) {
+      has_next <- TRUE
+    }
+    if (as.numeric(offset) != 0) {
+      has_prev <- TRUE
+    }
+
+    qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ]
+
+    qry_res$properties[is.na(qry_res$properties)] = "{}"
+    qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json)
+    result <- list(workflows = qry_res)
+    result$count <- nrow(qry_res)
+    if(has_next){
+      result$next_page <- paste0(
+        req$rook.url_scheme, "://",
+        req$HTTP_HOST,
+        "/api/workflows",
+        req$PATH_INFO,
+        substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
+        (as.numeric(limit) + as.numeric(offset)),
+        "&limit=",
+        limit
+      )
+    }
+    if(has_prev) {
+      result$prev_page <- paste0(
+        req$rook.url_scheme, "://",
+        req$HTTP_HOST,
+        "/api/workflows",
+        req$PATH_INFO,
+        substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
+        max(0, (as.numeric(offset) - as.numeric(limit))),
+        "&limit=",
+        limit
+      )
+    }
+
+    return(result)
+  }
+}
+
+#################################################################################################
+
+#' Get the details of the workflow specified by the id
+#' @param id Workflow id (character)
+#' @return Details of requested workflow
+#' @author Tezan Sahu
+#* @get /
+getWorkflowDetails <- function(id, res){
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Workflow <- tbl(dbcon, "workflows") %>%
+    select(id, model_id, site_id)
+
+  Workflow <- tbl(dbcon, "attributes") %>%
+    select(id = container_id, properties = value) %>%
+    full_join(Workflow, by = "id") %>%
+    filter(id == !!id)
+
+  qry_res <- Workflow %>% collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error="Workflow with specified ID was not found"))
+  }
+  else {
+    if(is.na(qry_res$properties)){
+      res <- list(id = id, properties = list(modelid = qry_res$model_id, siteid = qry_res$site_id))
+    }
+    else{
+      res <- list(id = id, properties = jsonlite::parse_json(qry_res$properties[[1]]))
+    }
+
+    return(res)
+  }
+}
\ No newline at end of file
diff --git a/apps/api/README.md b/apps/api/README.md
new file mode 100644
index 00000000000..69ae57643fc
--- /dev/null
+++ b/apps/api/README.md
@@ -0,0 +1,20 @@
+# PEcAn RESTful API Server
+
+This folder contains the code & tests for PEcAn's RESTful API server. The API allows users to remotely interact with the PEcAn servers and leverage the functionalities provided by the PEcAn Project. It has been designed to follow common RESTful API conventions. Most operations are performed using the HTTP methods: `GET` (retrieve) & `POST` (create).
+
+#### For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml).
+
+## Starting the PEcAn API server:
+
+Follow these steps to spin up the PEcAn API server locally:
+
+```bash
+$ cd R
+$ Rscript entrypoint.R
+```
+
+## Running the tests:
+
+```bash
+$ ./test_pecanapi.sh
+```
diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
new file mode 100644
index 00000000000..a5d0ffc73e8
--- /dev/null
+++ b/apps/api/pecanapi-spec.yml
@@ -0,0 +1,442 @@
+openapi: 3.0.0
+servers:
+  - description: PEcAn Tezan VM
+    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/912db446
+  - description: Localhost
+    url: http://127.0.0.1:8000
+
+info:
+  title: PEcAn Project API
+  description: >-
+    This is the API for interacting with server(s) of the __PEcAn Project__. The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. Here's the link to [PEcAn's Github Repository](https://github.com/PecanProject/pecan).
+    PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models.
+ version: "1.0.0" + contact: + email: "pecanproj@gmail.com" + license: + name: University of Illinois/NCSA Open Source License + url: https://opensource.org/licenses/NCSA +externalDocs: + description: Find out more about PEcAn Project + url: https://pecanproject.github.io/ + +tags: + - name: general + description: Related to the overall working on the API, details of PEcAn & the server + - name: workflows + description: Everything about PEcAn workflows + - name: runs + description: Everything about PEcAn runs + - name: models + description: Everything about PEcAn models + +##################################################################################################################### +##################################################### API Endpoints ################################################# +##################################################################################################################### +security: + - basicAuth: [] + +paths: + + /api/ping: + get: + summary: Ping the server to check if it is live + tags: + - general + - + responses: + '200': + description: OK + content: + application/json: + schema: + type: object + properties: + req: + type: string + resp: + type: string + '403': + description: Access forbidden + '404': + description: Models not found + + /api/status: + get: + summary: Obtain general information about PEcAn & the details of the database host + tags: + - general + - + responses: + '200': + description: OK + content: + application/json: + schema: + type: object + properties: + pecan_details: + type: object + properties: + version: + type: string + branch: + type: string + gitsha1: + type: string + host_details: + type: object + properties: + hostid: + type: string + hostname: + type: string + start: + type: string + end: + type: string + sync_url: + type: string + sync_contact: + type: string + + '403': + description: Access forbidden + '404': + description: Models not found + + /api/models/: + get: + tags: + - models + - + summary: Details of model(s) + parameters: + - in: query + name: model_name + description: Name of the model + required: false + schema: + type: string + - in: query + name: revision + description: Model version/revision + required: false + schema: + type: string + responses: + '200': + description: Available Models + content: + application/json: + schema: + type: object + properties: + models: + type: array + items: + type: object + $ref: '#/components/schemas/Model' + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Model(s) not found + + + /api/workflows/: + get: + tags: + - workflows + - + summary: Get the list of workflows + parameters: + - in: query + name: model_id + description: If provided, returns all workflows that use the provided model_id + required: false + schema: + type: string + - in: query + name: site_id + description: If provided, returns all workflows that use the provided site_id + required: false + schema: + type: string + - in: query + name: offset + description: The number of workflows to skip before starting to collect the result set. + schema: + type: integer + minimum: 0 + default: 0 + required: false + - in: query + name: limit + description: The number of workflows to return. 
+          schema:
+            type: integer
+            default: 50
+            enum:
+              - 10
+              - 20
+              - 50
+              - 100
+              - 500
+          required: false
+      responses:
+        '200':
+          description: List of workflows
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  workflows:
+                    type: array
+                    items:
+                      type: object
+                      $ref: '#/components/schemas/Workflow'
+                  count:
+                    type: integer
+                  next_page:
+                    type: string
+                  prev_page:
+                    type: string
+
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Workflows not found
+
+    post:
+      tags:
+        - workflows
+      summary: Submit a new PEcAn workflow
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/Workflow'
+      responses:
+        '201':
+          description: Submitted workflow successfully
+        '401':
+          description: Authentication required
+
+
+  /api/workflows/{id}:
+    get:
+      tags:
+        - workflows
+      summary: Get the details of a PEcAn Workflow
+      parameters:
+        - in: path
+          name: id
+          description: ID of the PEcAn Workflow
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          description: Details of the requested PEcAn Workflow
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Workflow'
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Workflow with specified ID was not found
+
+
+
+
+
+  /api/runs/:
+    get:
+      tags:
+        - runs
+      summary: Get the list of all runs for a specified PEcAn Workflow
+      parameters:
+        - in: query
+          name: workflow_id
+          description: ID of the PEcAn Workflow
+          required: true
+          schema:
+            type: string
+        - in: query
+          name: offset
+          description: The number of runs to skip before starting to collect the result set.
+          schema:
+            type: integer
+            minimum: 0
+            default: 0
+          required: false
+        - in: query
+          name: limit
+          description: The number of runs to return.
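+          # note: as for workflows, other limit values are rejected with HTTP 400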
+          schema:
+            type: integer
+            default: 50
+            enum:
+              - 10
+              - 20
+              - 50
+              - 100
+              - 500
+          required: false
+      responses:
+        '200':
+          description: List of all runs for the requested PEcAn Workflow
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  runs:
+                    type: array
+                    items:
+                      type: object
+                      $ref: '#/components/schemas/Run'
+                  count:
+                    type: integer
+                  next_page:
+                    type: string
+                  prev_page:
+                    type: string
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Run(s) not found
+
+
+  /api/runs/{id}:
+    get:
+      tags:
+        - runs
+      summary: Get the details of a specified PEcAn run
+      parameters:
+        - in: path
+          name: id
+          description: ID of the PEcAn run
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          description: Details about the requested run within the requested PEcAn workflow
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Run'
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Run with specified ID was not found
+
+
+#####################################################################################################################
+###################################################### Components ###################################################
+#####################################################################################################################
+
+components:
+  schemas:
+    Model:
+      properties:
+        model_id:
+          type: string
+        model_name:
+          type: string
+        revision:
+          type: string
+        modeltype_id:
+          type: string
+        model_type:
+          type: string
+
+    Run:
+      properties:
+        id:
+          type: string
+        workflow_id:
+          type: string
+        runtype:
+          type: string
+        ensemble_id:
+          type: string
+        model_id:
+          type: string
+        site_id:
+          type: string
+        parameter_list:
+          type: string
+        start_time:
+          type: string
+        finish_time:
+          type: string
+
+    Workflow:
+      properties:
+        id:
+          type: string
+        "properties":
+          type: object
+          properties:
+            pfts:
+              type: array
+              items:
+                type: string
+            input_met:
+              type: string
+            modelid:
+              type: string
+            siteid:
+              type: string
+            sitename:
+              type: string
+            sitegroupid:
+              type: string
+            start:
+              type: string
+            end:
+              type: string
+            variables:
+              type: string
+            sensitivity:
+              type: string
+            email:
+              type: string
+            notes:
+              type: string
+            runs:
+              type: string
+            pecan_edit:
+              type: string
+            status:
+              type: string
+            fluxusername:
+              type: string
+            input_poolinitcond:
+              type: string
+
+  securitySchemes:
+    basicAuth:
+      type: http
+      scheme: basic
+
+
+
diff --git a/apps/api/test_pecanapi.sh b/apps/api/test_pecanapi.sh
new file mode 100644
index 00000000000..35543046251
--- /dev/null
+++ b/apps/api/test_pecanapi.sh
@@ -0,0 +1,7 @@
+#!/bin/bash
+
+Rscript R/entrypoint.R &
+PID=$!
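+
+# give the API server a few seconds to come up before running the tests
+# (the 5-second delay here is an arbitrary, conservative choice)
+sleep 5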
+
+Rscript test/alltests.R
+kill $PID
\ No newline at end of file
diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R
index 4882ed53c0a..b08751f1337 100644
--- a/base/db/R/query.dplyr.R
+++ b/base/db/R/query.dplyr.R
@@ -4,19 +4,32 @@
 #'
 betyConnect <- function(php.config = "../../web/config.php") {
   ## Read PHP config file for webserver
+  if (file.exists(php.config)) {
+    php_params <- PEcAn.utils::read_web_config(php.config)
+  } else {
+    php_params <- list()
+  }
+
+  ## helper function
+  getphp = function (item, default = "") {
+    value = php_params[[item]]
+    if (is.null(value)) default else value
+  }
-  config.list <- PEcAn.utils::read_web_config(php.config)
+  ## fill in all data from environment variables
+  dbparams <- get_postgres_envvars(host = getphp("db_bety_hostname", "localhost"),
+                                   port = getphp("db_bety_port", "5432"),
+                                   dbname = getphp("db_bety_database", "bety"),
+                                   user = getphp("db_bety_username", "bety"),
+                                   password = getphp("db_bety_password", "bety"))
+
+  ## force driver to be postgres (only value supported by db.open)
+  dbparams[["driver"]] <- "Postgres"
   ## Database connection
-  # TODO: The latest version of dplyr/dbplyr works with standard DBI-based
-  # objects, so we should replace this with a standard `db.open` call.
-  dplyr::src_postgres(dbname = config.list$db_bety_database,
-                      host = config.list$db_bety_hostname,
-                      user = config.list$db_bety_username,
-                      password = config.list$db_bety_password)
+  db.open(dbparams)
 } # betyConnect
-
 #' Convert number to scientific notation pretty expression
 #' @param l Number to convert to scientific notation
 #' @export
@@ -59,22 +72,28 @@ ncdays2date <- function(time, unit) {
 #' @export
 dbHostInfo <- function(bety) {
   # get host id
-  result <- db.query(query = "select cast(floor(nextval('users_id_seq') / 1e9) as bigint);", con = bety$con)
+  result <- db.query(query = "select cast(floor(nextval('users_id_seq') / 1e9) as bigint);", con = bety)
   hostid <- result[["floor"]]
   # get machine start and end based on hostid
   machine <- dplyr::tbl(bety, "machines") %>%
-    dplyr::filter(sync_host_id == !!hostid) %>%
-    dplyr::select(sync_start, sync_end)
+    dplyr::filter(sync_host_id == !!hostid)
+
   if (is.na(nrow(machine)) || nrow(machine) == 0) {
     return(list(hostid = hostid,
+                hostname = "",
                 start = 1e+09 * hostid,
-                end = 1e+09 * (hostid + 1) - 1))
+                end = 1e+09 * (hostid + 1) - 1,
+                sync_url = "",
+                sync_contact = ""))
   } else {
     return(list(hostid = hostid,
+                hostname = machine$hostname,
                 start = machine$sync_start,
-                end = machine$sync_end))
+                end = machine$sync_end,
+                sync_url = machine$sync_url,
+                sync_contact = machine$sync_contact))
   }
 } # dbHostInfo
diff --git a/base/db/R/query.format.vars.R b/base/db/R/query.format.vars.R
index 79f88eba8d8..61ba7cf3e0e 100644
--- a/base/db/R/query.format.vars.R
+++ b/base/db/R/query.format.vars.R
@@ -13,8 +13,6 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) {
     PEcAn.logger::logger.error("Must specify input id or format id")
   }
-  con <- bety$con
-
   # get input info either form input.id or format.id, depending which is provided
   # defaults to format.id if both provided
   # also query site information (id/lat/lon) if an input.id
@@ -27,9 +25,9 @@
   if (is.na(format.id)) {
     f <- PEcAn.DB::db.query(
       query = paste("SELECT * from formats as f join inputs as i on f.id = i.format_id where i.id = ", input.id),
-      con = con
+      con = bety
     )
-    site.id <- PEcAn.DB::db.query(query = paste("SELECT site_id from inputs where id =",
input.id), con = con) + site.id <- PEcAn.DB::db.query(query = paste("SELECT site_id from inputs where id =", input.id), con = bety) if (is.data.frame(site.id) && nrow(site.id)>0) { site.id <- site.id$site_id site.info <- @@ -38,17 +36,17 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { "SELECT id, time_zone, ST_X(ST_CENTROID(geometry)) AS lon, ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id =", site.id ), - con = con + con = bety ) site.lat <- site.info$lat site.lon <- site.info$lon site.time_zone <- site.info$time_zone } } else { - f <- PEcAn.DB::db.query(query = paste("SELECT * from formats where id = ", format.id), con = con) + f <- PEcAn.DB::db.query(query = paste("SELECT * from formats where id = ", format.id), con = bety) } - mimetype <- PEcAn.DB::db.query(query = paste("SELECT * from mimetypes where id = ", f$mimetype_id), con = con)[["type_string"]] + mimetype <- PEcAn.DB::db.query(query = paste("SELECT * from mimetypes where id = ", f$mimetype_id), con = bety)[["type_string"]] f$mimetype <- utils::tail(unlist(strsplit(mimetype, "/")),1) # get variable names and units of input data @@ -56,7 +54,7 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { query = paste( "SELECT variable_id,name,unit,storage_type,column_number from formats_variables where format_id = ", f$id ), - con = con + con = bety ) if(all(!is.na(var.ids))){ @@ -84,7 +82,7 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { vars_bety[i, (ncol(vars_bety) - 1):ncol(vars_bety)] <- as.matrix(PEcAn.DB::db.query( query = paste("SELECT name, units from variables where id = ", fv$variable_id[i]), - con = con + con = bety )) } diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 8d2cd9ab22d..d78726612f1 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -11,5 +11,5 @@ Suggests: testthat License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true -RoxygenNote: 7.0.2 +RoxygenNote: 7.1.0 Roxygen: list(markdown = TRUE) diff --git a/base/remote/DESCRIPTION b/base/remote/DESCRIPTION index f7a1174e619..7e0fd00bcbc 100644 --- a/base/remote/DESCRIPTION +++ b/base/remote/DESCRIPTION @@ -21,4 +21,4 @@ License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true Roxygen: list(markdown = TRUE) -RoxygenNote: 7.0.2 +RoxygenNote: 7.1.0 diff --git a/base/settings/R/check.all.settings.R b/base/settings/R/check.all.settings.R index c8e12950de0..44c5dd7465c 100644 --- a/base/settings/R/check.all.settings.R +++ b/base/settings/R/check.all.settings.R @@ -835,7 +835,7 @@ check.model.settings <- function(settings, dbcon = NULL) { con = dbcon) if (nrow(model) > 1) { PEcAn.logger::logger.warn( - "multiple records for", settings$model$name, + "multiple records for", settings$model$type, "returned; using the latest") row <- which.max(model$updated_at) if (length(row) == 0) row <- nrow(model) diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R index 72f441fb3e8..da87d48a069 100644 --- a/base/utils/R/sensitivity.R +++ b/base/utils/R/sensitivity.R @@ -261,7 +261,7 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model, settings$run$site$id, "', '", settings$run$start.date, "', '", settings$run$end.date, "', '", - settings$run$outdir, "', '", + settings$run$outdir, "', ", ensemble.id, ", '", paramlist, "') RETURNING id"), con = con) run.id <- insert_result[["id"]] @@ -270,8 +270,7 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model, for (pft in defaults) { 
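     # link each PFT's posterior to the newly created ensemble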
PEcAn.DB::db.query(paste0("INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) values (", pft$posteriorid, ", ", - ensemble.id, ", '", - "');"), con = con) + ensemble.id, ");"), con = con) } # associate inputs with runs diff --git a/base/workflow/inst/batch_run.R b/base/workflow/inst/batch_run.R index 9a5c561fe21..63940dae476 100755 --- a/base/workflow/inst/batch_run.R +++ b/base/workflow/inst/batch_run.R @@ -50,7 +50,6 @@ php_file <- file.path(pecan_path, "web", "config.php") stopifnot(file.exists(php_file)) config.list <- PEcAn.utils::read_web_config(php_file) bety <- PEcAn.DB::betyConnect(php_file) -con <- bety$con # Create outfile directory if it doesn't exist dir.create(dirname(outfile), recursive = TRUE, showWarnings = FALSE) @@ -73,7 +72,7 @@ for (i in seq_len(nrow(input_table))) { revision <- table_row$revision message("Model: ", shQuote(model)) message("Revision: ", shQuote(revision)) - model_df <- tbl(con, "models") %>% + model_df <- tbl(bety, "models") %>% filter(model_name == !!model, revision == !!revision) %>% collect() diff --git a/base/workflow/inst/permutation_tests.R b/base/workflow/inst/permutation_tests.R index 69af5f96598..5821ea28980 100755 --- a/base/workflow/inst/permutation_tests.R +++ b/base/workflow/inst/permutation_tests.R @@ -31,7 +31,6 @@ php_file <- file.path(pecan_path, "web", "config.php") stopifnot(file.exists(php_file)) config.list <- PEcAn.utils::read_web_config(php_file) bety <- PEcAn.DB::betyConnect(php_file) -con <- bety$con # Create path for outfile dir.create(dirname(outfile), showWarnings = FALSE, recursive = TRUE) diff --git a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd index fcc3f4cb37d..9c8801408da 100644 --- a/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd +++ b/book_source/02_demos_tutorials_workflows/01_install_pecan.Rmd @@ -80,7 +80,7 @@ This will not go into much detail about about how to use Docker -- for more deta This should print the current version of docker-compose. We have tested the instruction below with versions of docker-compose 1.22 and above. -3. **Download the PEcAn docker-compose file**. It is located in the root directory of the [PEcAn source code](https://github.com/pecanproject/pecan). For reference, here are direct links to the [latest stable version](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml) and the [bleeding edge development version](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml). (To download the files, you should be able to right click the link and select "Save link as".) Make sure the file is saved as `docker-compose.yml` in a directory called `pecan`. +3. **Download the PEcAn docker-compose file**. It is located in the root directory of the [PEcAn source code](https://github.com/pecanproject/pecan). For reference, here are direct links to the [latest stable version](https://raw.githubusercontent.com/PecanProject/pecan/master/docker-compose.yml) and the [bleeding edge development version](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker-compose.yml). (To download the files, you should be able to right click the link and select "Save link as".) Make sure the file is saved as `docker-compose.yml` in a directory called `pecan`. 4. **Initialize the PEcAn database and data images**. The following `docker-compose` commands are used to download all the data PEcAn needs to start working. 
 For more on how they work, see our [Docker topical pages](#pecan-docker-quickstart-init).
@@ -99,22 +99,25 @@ This will not go into much detail about about how to use Docker -- for more deta
    b. "Initialize" the data for the PEcAn database.
 
    ```bash
-   docker-compose run --rm bety initialize
+   docker run --rm --network pecan_pecan pecan/db
    ```
 
    This should produce a lot of output describing the database operations happening under the hood.
-   Some of these will look like errors (including starting with `ERROR`), but _this is normal_.
-   This command succeeded if the output ends with the following:
+   Some of these will look like errors, but _this is normal_.
+   This command succeeded if the output ends with the following (the syntax for creating a new user for accessing BetyDB):
 
    ```
-   Added carya41 with access_level=4 and page_access_level=1 with id=323
-   Added carya42 with access_level=4 and page_access_level=2 with id=325
-   Added carya43 with access_level=4 and page_access_level=3 with id=327
-   Added carya44 with access_level=4 and page_access_level=4 with id=329
-   Added guestuser with access_level=4 and page_access_level=4 with id=331
+   docker-compose run bety user 'login' 'password' 'full name' 'email' 1 1
   ```
-
-   c. Download and configure the core PEcAn database files.
+   c. Add a user to BetyDB using the example syntax provided as the last line of the output of the previous step:
+   ```bash
+   # guest user
+   docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4
+
+   # example user
+   docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1
+   ```
+   d. Download and configure the core PEcAn database files.
 
    ```bash
    docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
@@ -128,7 +131,16 @@ This will not go into much detail about about how to use Docker -- for more deta
    Done!
    ######################################################################
    ```
+   e. Download the [`pecan/docker/env.example`](https://raw.githubusercontent.com/PecanProject/pecan/develop/docker/env.example) file & save it as `.env`.
+   Now, open the `.env` file & uncomment the lines mentioned below:
+   ```{r, echo=FALSE, fig.align='center'}
+   knitr::include_graphics(rep("figures/env-file.PNG"))
+   ```
+
+   Setting `PECAN_VERSION=develop` indicates that you want to run the bleeding-edge `develop` branch, meaning it may have bugs. To go ahead with the stable version you may set `PECAN_VERSION=latest` or `PECAN_VERSION=<release-number>` (for example `1.7.0`). You can look at the list of all the [releases](https://github.com/pecanproject/pecan/releases) of PEcAn to see what options are available.
+
+
 5. **Start the PEcAn stack**. Assuming all of the steps above completed successfully, start the full stack by running the following shell command:
 
    ```bash
diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
new file mode 100644
index 00000000000..4a69f62d495
--- /dev/null
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -0,0 +1,475 @@
+# PEcAn Project API
+
+## Introduction
+
+##### Welcome to the PEcAn Project API Documentation.
+
+The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems.
PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models.
+
+Our API allows users to remotely interact with the PEcAn servers and leverage the functionalities provided by the PEcAn Project. It has been designed to follow common RESTful API conventions. Most operations are performed using the HTTP methods: `GET` (retrieve) & `POST` (create).
+
+_Please note that the PEcAn Project API is currently under active development, and any information in this document is subject to change._
+
+## Authentication
+
+Authentication to the PEcAn API occurs via [Basic HTTP Auth](https://en.wikipedia.org/wiki/Basic_access_authentication). The credentials for using the API are the same as those used to log into PEcAn & BetyDB. Here is how you use basic HTTP auth with `curl`:
+```
+$ curl --user '<username>:<password>' <api-endpoint>
+```
+
+Authentication also depends on the PEcAn server that the user interacts with. Some servers are deployed with `AUTH_REQ = FALSE`, meaning that they do not require user authentication for the usage of the PEcAn APIs. Regardless of the type of server, the endpoints defined under the General section can be accessed without any authentication.
+
+## RESTful API Endpoints
+
+This page contains the high-level overviews & the functionalities offered by the different RESTful endpoints of the PEcAn API.
+
+__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml).__
+
+The currently implemented functionalities include:
+
+* __General:__
+  * `GET /api/ping`: Ping the server to check if it is live
+  * `GET /api/status`: Obtain general information about PEcAn & the details of the database host
+
+* __Models:__
+  * `GET /api/models/`: Retrieve the details of model(s) used by PEcAn
+
+* __Workflows:__
+  * `GET /api/workflows/`: Retrieve a list of PEcAn workflows
+  * `POST /api/workflows/` *: Submit a new PEcAn workflow
+  * `GET /api/workflows/{id}`: Obtain the details of a particular PEcAn workflow by supplying its ID
+
+* __Runs:__
+  * `GET /api/runs`: Get the list of all the runs for a specified PEcAn Workflow
+  * `GET /api/runs/{id}`: Fetch the details of a specified PEcAn run
+
+_* indicates that the particular API is under development & may not be ready for use_
+
+
+## Examples:
+
+#### Prerequisites to interact with the PEcAn API Server {.tabset .tabset-pills}
+
+##### R Packages
+* [httr](https://cran.r-project.org/web/packages/httr/index.html)
+* [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html)
+
+##### Python Packages
+* [requests](https://requests.readthedocs.io/en/master/)
+* [json](https://docs.python.org/3/library/json.html)
+
+#### {-}
+
+
+Following are some example snippets to call the PEcAn API endpoints:
+
+### `GET /api/ping` {.tabset .tabset-pills}
+
+#### R Snippet
+
+```R
+res <- httr::GET("http://localhost:8000/api/ping")
+print(jsonlite::fromJSON(rawToChar(res$content)))
+```
+```
+## $request
+## [1] "ping"

+## $response
+## [1] "pong"
+```
+#### Python Snippet
+
+```python
+response = requests.get("http://localhost:8000/api/ping")
+print(json.dumps(response.json(), indent=2))
+```
+```
+## {
+##   "request": "ping",
+##   "response": "pong"
+##
## RESTful API Endpoints

This page contains a high-level overview of the functionalities offered by the different RESTful endpoints of the PEcAn API.

__For the most up-to-date documentation, you can visit the [PEcAn API Documentation](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/tezansahu/pecan/api_1/apps/api/pecanapi-spec.yml).__

The currently implemented functionalities include:

* __General:__
  * `GET /api/ping`: Ping the server to check if it is live
  * `GET /api/status`: Obtain general information about PEcAn & the details of the database host

* __Models:__
  * `GET /api/models/`: Retrieve the details of model(s) used by PEcAn

* __Workflows:__
  * `GET /api/workflows/`: Retrieve a list of PEcAn workflows
  * `POST /api/workflows/` *: Submit a new PEcAn workflow
  * `GET /api/workflows/{id}`: Obtain the details of a particular PEcAn workflow by supplying its ID

* __Runs:__
  * `GET /api/runs`: Get the list of all the runs for a specified PEcAn Workflow
  * `GET /api/runs/{id}`: Fetch the details of a specified PEcAn run

_* indicates that the particular API is under development & may not be ready for use_


## Examples

#### Prerequisites to interact with the PEcAn API Server {.tabset .tabset-pills}

##### R Packages
* [httr](https://cran.r-project.org/web/packages/httr/index.html)
* [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html)

##### Python Packages
* [requests](https://requests.readthedocs.io/en/master/)
* [json](https://docs.python.org/3/library/json.html)

#### {-}


Following are some example snippets to call the PEcAn API endpoints:

### `GET /api/ping` {.tabset .tabset-pills}

#### R Snippet

```R
res <- httr::GET("http://localhost:8000/api/ping")
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
## $request
## [1] "ping"

## $response
## [1] "pong"
```
#### Python Snippet

```python
import requests
import json

response = requests.get("http://localhost:8000/api/ping")
print(json.dumps(response.json(), indent=2))
```
```
## {
## "request": "ping",
## "response": "pong"
## }
```
### {-}
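The same liveness check is easy to run from a plain shell, which can be handy inside a container where neither R nor Python is available. A minimal sketch, against the same `localhost:8000` server as above:

```bash
# A live server answers with {"request":"ping","response":"pong"}
curl -s http://localhost:8000/api/ping
```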
### `GET /api/status` {.tabset .tabset-pills}

#### R Snippet

```R
res <- httr::GET("http://localhost:8000/api/status")
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
## $pecan_details$version
## [1] "1.7.0"

## $pecan_details$branch
## [1] "develop"

## $pecan_details$gitsha1
## [1] "unknown"

## $host_details$hostid
## [1] 99

## $host_details$hostname
## [1] ""

## $host_details$start
## [1] 99000000000

## $host_details$end
## [1] 99999999999

## $host_details$sync_url
## [1] ""

## $host_details$sync_contact
## [1] ""
```

#### Python Snippet

```python
response = requests.get("http://localhost:8000/api/status")
print(json.dumps(response.json(), indent=2))
```
```
## {
## "pecan_details": {
## "version": "1.7.0",
## "branch": "develop",
## "gitsha1": "unknown"
## },
## "host_details": {
## "hostid": 99,
## "hostname": "",
## "start": 99000000000,
## "end": 99999999999,
## "sync_url": "",
## "sync_contact": ""
## }
## }
```

### {-}

### `GET /api/models/` {.tabset .tabset-pills}

#### R Snippet

```R
# Get model(s) with `model_name` = SIPNET & `revision` = ssr
res <- httr::GET(
  "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr",
  httr::authenticate("carya", "illinois")
  )
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
## $models
##     model_id model_name revision modeltype_id model_type
## 1 1000000022     SIPNET      ssr            3     SIPNET
## ...
```

#### Python Snippet

```python
from requests.auth import HTTPBasicAuth

# Get model(s) with `model_name` = SIPNET & `revision` = ssr
response = requests.get(
  "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr",
  auth=HTTPBasicAuth('carya', 'illinois')
  )
print(json.dumps(response.json(), indent=2))
```
```
## {
## "models": [
## {
## "model_id": "1000000022",
## "model_name": "SIPNET",
## "revision": "ssr",
## "modeltype_id": 3,
## "model_type": "SIPNET"
## },
## ...
## ]
## }
```

### {-}

### `GET /api/workflows/` {.tabset .tabset-pills}

#### R Snippet

```R
# Get workflow(s) that use `model_id` = 1000000022 [SIPNET] & `site_id` = 676 [Willow Creek (US-WCr)]
res <- httr::GET(
  "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676",
  httr::authenticate("carya", "illinois")
  )
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
## $workflows
##   id         properties
##
## 1 1000009172
## ...

## $count
## [1] 5
```

#### Python Snippet

```python
# Get workflow(s) that use `model_id` = 1000000022 [SIPNET] & `site_id` = 676 [Willow Creek (US-WCr)]
response = requests.get(
  "http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676",
  auth=HTTPBasicAuth('carya', 'illinois')
  )
print(json.dumps(response.json(), indent=2))
```
```
## {
## "workflows": [
## {
## "id": 1000009172,
## "properties": {
## "end": "2004/12/31",
## "pft": [
## "soil.IF",
## "temperate.deciduous.IF"
## ],
## "email": "",
## "notes": "",
## "start": "2004/01/01",
## "siteid": "676",
## "modelid": "1000000022",
## "hostname": "test-pecan.bu.edu",
## "sitename": "WillowCreek(US-WCr)",
## "input_met": "AmerifluxLBL.SIPNET",
## "pecan_edit": "on",
## "sitegroupid": "1000000022",
## "fluxusername": "pecan",
## "input_poolinitcond": "-1"
## }
## },
## ...
## ],
## "count": 5
## }
```

### {-}
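When scripting against these list endpoints it is often enough to pull a single field out of the JSON response. A minimal sketch using `jq` (an assumption: `jq` is installed locally, it is not part of the PEcAn stack itself), with the same query and demo credentials as above:

```bash
# Print just the workflow ids for the SIPNET / US-WCr query
curl -s --user 'carya:illinois' \
  'http://localhost:8000/api/workflows/?model_id=1000000022&site_id=676' \
  | jq -r '.workflows[].id'
```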
### `GET /api/workflows/{id}` {.tabset .tabset-pills}

#### R Snippet

```R
# Get details of workflow with `id` = '1000009172'
res <- httr::GET(
  "http://localhost:8000/api/workflows/1000009172",
  httr::authenticate("carya", "illinois")
  )
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
## $id
## [1] "1000009172"

## $properties
## $properties$end
## [1] "2004/12/31"

## $properties$pft
## $properties$pft[[1]]
## [1] "soil.IF"

## $properties$pft[[2]]
## [1] "temperate.deciduous.IF"


## $properties$email
## [1] ""

## $properties$notes
## [1] ""

## $properties$start
## [1] "2004/01/01"

## $properties$siteid
## [1] "676"

## $properties$modelid
## [1] "1000000022"

## $properties$hostname
## [1] "test-pecan.bu.edu"

## $properties$sitename
## [1] "WillowCreek(US-WCr)"

## $properties$input_met
## [1] "AmerifluxLBL.SIPNET"

## $properties$pecan_edit
## [1] "on"

## $properties$sitegroupid
## [1] "1000000022"

## $properties$fluxusername
## [1] "pecan"

## $properties$input_poolinitcond
## [1] "-1"
```

#### Python Snippet

```python
# Get details of workflow with `id` = '1000009172'
response = requests.get(
  "http://localhost:8000/api/workflows/1000009172",
  auth=HTTPBasicAuth('carya', 'illinois')
  )
print(json.dumps(response.json(), indent=2))
```
```
## {
## "id": "1000009172",
## "properties": {
## "end": "2004/12/31",
## "pft": [
## "soil.IF",
## "temperate.deciduous.IF"
## ],
## "email": "",
## "notes": "",
## "start": "2004/01/01",
## "siteid": "676",
## "modelid": "1000000022",
## "hostname": "test-pecan.bu.edu",
## "sitename": "WillowCreek(US-WCr)",
## "input_met": "AmerifluxLBL.SIPNET",
## "pecan_edit": "on",
## "sitegroupid": "1000000022",
## "fluxusername": "pecan",
## "input_poolinitcond": "-1"
## }
## }
```

### {-}

### `GET /api/runs/` {.tabset .tabset-pills}

#### R Snippet

```R
# Get the list of runs belonging to the workflow with `workflow_id` = '1000009172'
res <- httr::GET(
  "http://localhost:8000/api/runs/?workflow_id=1000009172",
  httr::authenticate("carya", "illinois")
  )
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
## $runs
##    runtype ensemble_id workflow_id         id   model_id site_id parameter_list start_time
## finish_time
## 1 ensemble  1000017624  1000009172 1002042201 1000000022     796     ensemble=1 2005-01-01
## 00:00:00 2011-12-31 00:00:00
## ...

## $count
## [1] 50
```

#### Python Snippet

```python
# Get the list of runs belonging to the workflow with `workflow_id` = '1000009172'
response = requests.get(
  "http://localhost:8000/api/runs/?workflow_id=1000009172",
  auth=HTTPBasicAuth('carya', 'illinois')
  )
print(json.dumps(response.json(), indent=2))
```
```
## {
## "runs": [
## {
## "runtype": "ensemble",
## "ensemble_id": 1000017624,
## "workflow_id": 1000009172,
## "id": 1002042201,
## "model_id": 1000000022,
## "site_id": 796,
## "parameter_list": "ensemble=1",
## "start_time": "2005-01-01",
## "finish_time": "2011-12-31"
## },
## ...
## ],
## "count": 50,
## "next_page": "http://localhost:8000/api/workflows/?workflow_id=1000009172&offset=50&limit=50"
## }
```

### {-}

### `GET /api/runs/{id}` {.tabset .tabset-pills}

#### R Snippet

```R
# Get details of the run with `id` = '1002042201'
res <- httr::GET(
  "http://localhost:8000/api/runs/1002042201",
  httr::authenticate("carya", "illinois")
  )
print(jsonlite::fromJSON(rawToChar(res$content)))
```
```
##    runtype ensemble_id workflow_id         id   model_id site_id start_time finish_time parameter_list
## 1 ensemble  1000017624  1000009172 1002042201 1000000022     796 2005-01-01  2011-12-31     ensemble=1
##            created_at          updated_at          started_at         finished_at
## 1 2018-04-11 22:20:31 2018-04-11 22:22:08 2018-04-11 18:21:57 2018-04-11 18:22:08
```

#### Python Snippet

```python
# Get details of the run with `id` = '1002042201'
response = requests.get(
  "http://localhost:8000/api/runs/1002042201",
  auth=HTTPBasicAuth('carya', 'illinois')
  )
print(json.dumps(response.json(), indent=2))
```
```
## [
## {
## "runtype": "ensemble",
## "ensemble_id": 1000017624,
## "workflow_id": 1000009172,
## "id": 1002042201,
## "model_id": 1000000022,
## "site_id": 796,
## "parameter_list": "ensemble=1",
## "start_time": "2005-01-01",
## "finish_time": "2011-12-31"
## }
## ]
```

### {-}
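The list endpoints page their results, as the `next_page` URL in the runs example above shows. A minimal sketch that passes the `offset` and `limit` query parameters yourself, with the same workflow and demo credentials; note that the exact parameter handling is an assumption here, since this part of the API is still evolving:

```bash
# Fetch the second page of runs for the workflow, 50 records at a time,
# mirroring the parameters seen in the next_page URL returned by the API
curl -s --user 'carya:illinois' \
  'http://localhost:8000/api/runs/?workflow_id=1000009172&offset=50&limit=50'
```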
diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index a4d6690af56..006d90c7653 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -125,7 +125,7 @@ input.id = tbl(bety,"inputs") %>% filter(name == input_name) %>% pull(id)
 data.path = PEcAn.DB::query.file.path(
   input.id = input.id,
   host_name = PEcAn.remote::fqdn(),
-  con = bety$con)
+  con = bety)
 ```

 6. Load the data

diff --git a/book_source/03_topical_pages/11_adding_to_pecan.Rmd b/book_source/03_topical_pages/11_adding_to_pecan.Rmd
index c4829297eb5..9c5d0051121 100644
--- a/book_source/03_topical_pages/11_adding_to_pecan.Rmd
+++ b/book_source/03_topical_pages/11_adding_to_pecan.Rmd
@@ -382,7 +382,7 @@ PEcAn.data.atmosphere::met2CF.csv(in.path = in.path,

 Vegetation data will be required to parameterize your model. In these examples we will go over how to produce a standard initial condition file.

-The main function to process cohort data is the `ic.process.R` function. As of now however, if you require pool data you will run a separate function, `pool_ic_list2netcdf.R`.
+The main function to process cohort data is the `ic_process.R` function. As of now, however, if you require pool data you will run a separate function, `pool_ic_list2netcdf.R`.

 ###### Example 1: Processing Veg data from data in hand.

@@ -392,7 +392,7 @@ First, you'll need to create an input record in BETY that will have a file record

 Once you have created an input record, take note of its input id. An easy way to find it is in the URL of the BETY webpage that shows your input record. In this example we use an input record with the id `1000013064`, which can be found at this url: https://psql-pecan.bu.edu/bety/inputs/1000013064# . Note that this is the Boston University BETY database. If you are on a different machine, your url will be different.

-With the input id in hand you can now edit a pecan XML so that the PEcAn function `ic.process` will know where to look in order to process your data. The `inputs` section of your pecan XML will look like this. As of now ic.process is set up to work with the ED2 model so we will use ED2 settings and then grab the intermediary Rds data file that is created as the standard PEcAn file. For your Inputs section you will need to input your input id wherever you see the `useic` flag.
+With the input id in hand you can now edit a pecan XML so that the PEcAn function `ic_process` will know where to look in order to process your data. The `inputs` section of your pecan XML will look like this. As of now ic_process is set up to work with the ED2 model, so we will use ED2 settings and then grab the intermediate Rds data file that is created as the standard PEcAn file. For your Inputs section you will need to enter your input id wherever you see the `useic` flag.

 ```
@@ -496,12 +496,12 @@ Once you edit your PEcAn.xml you can then create a settings object using PEcAn functions:
 settings <- PEcAn.settings::read.settings("pecan.xml")
 settings <- PEcAn.settings::prepare.settings(settings, force=FALSE)
 ```
-You can then execute the `ic.process` function to convert data into a standard Rds file:
+You can then execute the `ic_process` function to convert data into a standard Rds file:

 ```
 input <- settings$run$inputs
 dir <- "."
-ic.process(settings, input, dir, overwrite = FALSE)
+ic_process(settings, input, dir, overwrite = FALSE)
 ```

 Note that the argument `dir` is set to the current directory. You will find the final ED2 file there. More importantly, though, you will find the `.Rds` file within the same directory.

diff --git a/book_source/03_topical_pages/94_docker/02_quickstart.Rmd b/book_source/03_topical_pages/94_docker/02_quickstart.Rmd
index d23ed2bf550..552da07e98d 100644
--- a/book_source/03_topical_pages/94_docker/02_quickstart.Rmd
+++ b/book_source/03_topical_pages/94_docker/02_quickstart.Rmd
@@ -1,4 +1,37 @@
-## The PEcAn docker install process in detail {#docker-quickstart}
+## Quick-start docker install {#docker-quickstart}
+
+```bash
+git clone git@github.com:pecanproject/pecan
+cd pecan
+
+# start database
+docker-compose -p pecan up -d postgres
+
+# add example data (first time only)
+docker-compose run --rm bety initialize
+docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+
+# start PEcAn
+docker-compose -p pecan up -d
+
+# run a model
+curl -v -X POST \
+    -F 'hostname=docker' \
+    -F 'modelid=5000000002' \
+    -F 'sitegroupid=1' \
+    -F 'siteid=772' \
+    -F 'sitename=Niwot Ridge Forest/LTER NWT1 (US-NR1)' \
+    -F 'pft[]=temperate.coniferous' \
+    -F 'start=2004/01/01' \
+    -F 'end=2004/12/31' \
+    -F 'input_met=5000000005' \
+    -F 'email=' \
+    -F 'notes=' \
+    'http://localhost:8000/pecan/04-runpecan.php'
+```
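Before the `curl` submission at the end of the quick start, it is worth confirming that the stack actually came up. A minimal sketch, assuming the compose project name `pecan` used above; the `/pecan/` path is inferred from the submission URL in the quick start:

```bash
# All services should report State "Up"
docker-compose -p pecan ps

# The web front end should answer on port 8000 once the containers are ready
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000/pecan/
```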
+
+## The PEcAn docker install process in detail

 ### Configure docker-compose {#pecan-setup-compose-configure}

@@ -65,48 +98,49 @@ As a side effect, the above command will also create blank data ["volumes"](http

 Because our project is called `pecan` and `docker-compose.yml` describes a network called `pecan`, the resulting network is called `pecan_pecan`. This is relevant to the following commands, which will actually initialize and populate the BETY database.

-Assuming the above ran successfully, next run the following:
+Assuming the above has run successfully, next run the following:

 ```bash
-docker-compose run --rm bety initialize
+docker run --rm --network pecan_pecan pecan/db
 ```

 The breakdown of this command is as follows: {#docker-run-init}

-- `docker-compose run` -- This says we will be running a specific command inside the target service (bety in this case).
+- `docker run` -- This says we will be running a container.
- `--rm` -- This automatically removes the resulting container once the specified command exits, as well as any volumes associated with the container. This is useful as a general "clean-up" flag for one-off commands (like this one) to make sure you don't leave any "zombie" containers or volumes around at the end.
-- `bety` -- This is the name of the service in which we want to run the specified command.
-- Everything after the service name (here, `bety`) is interpreted as an argument to the image's specified [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint). For the `bety` service, the entrypoint is the script [`docker/entrypoint.sh`](https://github.com/PecanProject/bety/blob/master/docker/entrypoint.sh) located in the BETY repository. Here, the `initialize` argument is parsed to mean "Create a new database", which first runs `psql` commands to create the `bety` role and database and then runs the `load.bety.sh` script.
-  - NOTE: The entrypoint script that is used is the one copied into the Docker container at the time it was built, which, depending on the indicated image version and how often images are built on Docker Hub relative to updates to the source, may be older than whatever is in the source code.
+- `--network pecan_pecan` -- This will start the container in the same network space as the postgres container, allowing it to push data into the database.
+- `pecan/db` -- This is the name of the image; it holds a copy of the database used to initialize the PostgreSQL database.

 Note that this command may throw a bunch of errors related to functions and/or operators already existing. This is normal -- it just means that the PostGIS extension to PostgreSQL is already installed. The important thing is that you see output near the end like:

 ```
-CREATED SCHEMA
-Loading  schema_migrations         : ADDED 61
-Started psql (pid=507)
-Updated  formats                   :      35 (+35)
-Fixed    formats                   :      46
-Updated  machines                  :      23 (+23)
-Fixed    machines                  :      24
-Updated  mimetypes                 :     419 (+419)
-Fixed    mimetypes                 :    1095
-...
-...
-...
-Added carya41 with access_level=4 and page_access_level=1 with id=323
-Added carya42 with access_level=4 and page_access_level=2 with id=325
-Added carya43 with access_level=4 and page_access_level=3 with id=327
-Added carya44 with access_level=4 and page_access_level=4 with id=329
-Added guestuser with access_level=4 and page_access_level=4 with id=331
+----------------------------------------------------------------------
+Safety checks
+
+----------------------------------------------------------------------
+
+----------------------------------------------------------------------
+Making sure user 'bety' exists.
 ```

 If you do not see this output, you can look at the [troubleshooting](#docker-quickstart-troubleshooting) section at the end of this section for tips, as well as solutions to common problems.

 Once the command has finished successfully, proceed with the next step, which will load some initial data into the database and place the data in the docker volumes.
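To double-check that the initialization actually reached the database before loading data, you can query PostgreSQL directly. A sketch under the assumption that the compose service is named `postgres` (as in the quick start above) and that the role and database are both called `bety`, as the initialization output suggests:

```bash
# A freshly initialized BETY database should report a healthy number of tables
docker-compose -p pecan exec postgres \
  psql -U bety -d bety -c "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';"
```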
+#### Add first user to PEcAn database
+
+You can add an initial user to the BETY database; for example, the following commands will add the guestuser account as well as the demo `carya` account:
+
+```
+# guest user
+docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@example.com 4 4
+
+# example user
+docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1
+```
+
 #### Add example data (first time only) {#pecan-docker-quickstart-init-data}

 The following command will add some initial data to the PEcAn stack and register the data with the database.

diff --git a/book_source/Makefile b/book_source/Makefile
index 5d57f169a91..1411d6fcd75 100755
--- a/book_source/Makefile
+++ b/book_source/Makefile
@@ -23,7 +23,10 @@ DEMO_1_FIGS := $(wildcard ../documentation/tutorials/01_Demo_Basic_Run/extfiles/
 build: bkdcheck
 	mkdir -p extfiles
 	cp -f ${DEMO_1_FIGS} extfiles/
-	Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::gitbook")'
+	# options call is a workaround for a behavior change and probable bug in bookdown 0.20:
+	# https://stackoverflow.com/a/62583304
+	# Remove when this is fixed in Bookdown
+	Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd", "bookdown::gitbook")'

 clean:
 	rm -rf ../book/*
@@ -32,4 +35,4 @@ deploy: build
 	./deploy.sh

 pdf: bkdcheck
-	Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::pdf_book")'
+	Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd", "bookdown::pdf_book")'
diff --git a/book_source/figures/env-file.PNG b/book_source/figures/env-file.PNG
new file mode 100644
index 0000000000000000000000000000000000000000..f1ad8553b90458e1b4445b0767c4f304f5d3b89b
GIT binary patch
z7Zopy0sBAI6KdkcZMQ;DX|MoKtK}9s8V(Ef!RL<9$Y&}QlJf~tOC!1>Y-jZSzlZwV z(nt-pgAgu4R2F06IU>Zd6-#q8j!!JW4wL90R)4ay&60v91xVzT2{SgnLJDe$`XK23 zZd;I3W3HU7SL{`U^wh)Vh;P1k{6)iQg5JId4cE>Hyl29PK~}vs?||;!6WlM4<6!(` za*;u8+vO!$eL+AI>`$}U;$s^Zj&*-yBT@J9)%=SiL!1E#`zRe}om*3sboFa?0cSJ$ zv{G-qhgD1bI0Yf6<6KgPr z-M)pm8L=F!*#)z;hE5Ra24%4nmtMET>B53sWog?AOf)`iYWa^(AE~y)Ua^$fK7Jcx zck_FAV(P0;z;k}BbdMvNvgfnMnv8HpBClR9q;mWgM0(BPm*CD+n535|XAUck{)0tY zferHTwujw?BXmFWIXb;*f69O+w*HQRdW5sX-R;-<|8OQI4Pj#zBuB0LH$OZ(Az9xk zX`fL^GQ}@Cx=z3(68z9gw1^I$p$Ao}va+apXCTGev_MIKVC;9YuSPvFGbRtCF8e=P z?F-Q<;|4@)JCunG3jNo?LK2qTEI^Xa-6s-AqFuYnZ^juh2D>=+c`0;i(6%`K`{OGk z|NDluv76FE_`h#mpZ~vikN#hu;kEruRI~G)pOAyj*>G`gUS61gnIdQ6^1@> z+I6}R^E2BrmNjJUHsJlh=T$9UotodT-n3RVQoPjk4BAi z&lAH__j}3?%JS)p0OmUdnTC{}X{E+1P>&>U;0S)3#NL4Nhz3ab~2=f=6L=ln7Y z`&1u5U@KEo@?siZsd$`c!rv}M`UO3MI11>_UkuI0rIWJX@p1*>*Hp1q*DFZxbPjAq zeW87OBNdHXqG(29f7~@LOD1bBAc&FC@)M@Cv8TnQoH61=vFljirRu%DchIx*FZy6$ zfIISCPt>*eKc#f1h=i!ACs2~r!S|8o=KZXJ3lMue;hoVFJ)}Y8{QX^=WmoXwuRHW8 zqXOqUU^6)M56wF^y0wG8l&q*!>CDSwEojXw*|Xsti)sgCI_23J zo(}2moZE%ebq+LlzxM|U!aihvO7{nRFq?2y6ffWLTN`!kpZJG1g*hgcO*=FQffwBC z_T1kmmaOc)?296j>F5 zFJY<<)=wppnR4NEuABUNn?a0Z;>VIhm`(jf^C;v} zE2Z@r8>_!TC^o(9uS} zgnv=6=mG|8C_)@x_>JN67DQ-shSj?vN?#WY)+yM+IG!j5!@b~2qY*4hLW}TwP9n2A z4(GMEC{?;&#u*Eo3skh&mIkgwd3~@r>pp2hqWdv+*}lLYRq#or)Vk)25gO(PR(d$x(gL1Pa1`cTI`%*ee74gUJP&eH( zEn@sd$bR>s)YSfqxgq6p6}7JtLPi|B za;7~s!<H+IW=QwRf`y6t!`+dbU&_zKUj#N7+=CTVuSqq2gt) z_ea}dF*Qs(sbB-+{_TMZeX?4egHf{Zh6ca%Jhg^5{F{XOd@O1==7Fsl8BR6S{TTg4 zw6>DrpHstNwNHG>i(veKaUoN4&Zc4CtuWE%+FA^@?kd+s#37t$s`HLOrGy@e z*1g7jzsSTxWdbQ(9sE`DpVeMkqgk_Z=jPlvQ;m;OFy#l$)fyi5n~my}BzIgRV)8B7 zT)I^2|9;W@Xs^eSa1-3%5LocmBd9me+)IB_#@-5hJF%V2$bBadqWo=tt%RY>J zX&Rq&tA)}%Wng^GIHpw#!0 zP#7x*vMv!91`I#n5s~C8M!rFk$BNXff65GglY*p`jmceY8z><@#;q}~0L+50D)%|- zCr@@e6wH?5JMD%>CpE${`&erlO*HGDDeV?`UKL?R#5xIyL~a{jdXdnqgP=VcAb;VaN5(hu8sd0CeIf-Au`+?jA*Pi~6Wq#r$gS z*GUk=@3LSYSZ4TMT<^&Q?+^NU%cT=5tCveLqED{*oM|@YwW`V(pnN7KZSu<5cjAM* zku1*U z^I>)2*b(zL6rZB9d49^18A*bx`SKf@d+ii*i78JZ*DWr)@}y=c0?P_JW`6&qeWGpk zxyAZvndCW#tz?EJRDb6dr*t8X3HBe_pp6eb@?t{b6 z5EAR`hto%+GSt8h1*U>CKV5w< zo4EL;#}!w8m?)Y~rF!SDDErb`oqpC&C+@*AM#m_q8p`R84X*)k>h7Q^ zBWLNSu$%L@ImPm+JO7YYN%rVDe}{{I*48gDp2&5|^FZl3R3Gc|O7ed`5DYUAs&o4h zBqNlAet)MtJfsF(R~E(2@>p?`kk(h1!|N9DBFG2O(Nt{6dR z8kjtd)xfN9?(BSgL|XUthfd5|$JrT%gh{fyAvV^(+K2LMBBuLvhry4&w%-#S%U@3W zZ#L|09uMc3M%5+XlaF?TiR+~au|_K^`9D=n!jONLMYR7>@frQh>7a8WDxDj3F(;~k zYxtfQiC0!t0!T+j+^gTxRG3Mz>;+7nEHAlsaaJLm{wg-D%aC8DJn=@*I2X6EAxK{h zc^`}u{0|?v$f--5d1z!*6PNLThHFYrJB#m2pyv~XE7Z0AHoTWIOr*9-Ueewlj%jS6 z({iiF_0+egicVpUmN5<^xJLc>k?EL zqI{hQvHW?ogn(e#AT1`W_85Tr_n!uXMZmywscS@db1>s!weKrAQM2-no>LO8=`O*U znXY0J53-t$TZ3p|Gqi*Kny$k};kGe*mSeX84Z12>0&>#f_L}sV3wl<2uDy5X_BU;< zor*xJ+IZ~u1zH#!^}qV@d>gp@oIk0o{2`Wp%=ejt$b*CTlBI7$Qc^OuY8cK>p_g#K ziXG57u6{eYFX4`|PwP`2?s_>~tikFk!@nrX5|gpT{zhrh^Cer0YU7%}G7b4|`=O8cEy zeL1o#RNHgjhvMolwfn@e11~dnwe@GG3cZ97<;qE5=*#t4;FyH$xPVSs_4z08yYoy` zD#*;;?$}E;XBicwjMj_+AtdKuRkHDB)oJ;b${R2CgD(DUK)+P{ z7S}y$Ag?DP83l~CIV9D%tdRW3NF5JIKLPUcWqz(&iz+eILa2$MkB?Nm` zL*u_-xGJUvOZgCW#E18v;|wSX)2NVFuMlC7VrLnsWMfAldG3rOG;q@`XkjAxaC19F z;=K)AQ<$iDPN_jN&+=kWz8p0v=#D{b#YX=s%mNrMRU*u?5#gfh*a? 
zXh(O3eY^>O9rs2&us`n&y8DAJ;QCw_VGxHJ8!I@JYl?%#;#<@*FZsjtsr~6`Z~zS< zj%(d6QaXv?A_=WE!Mq&4YdGmqj!fIkh=da7ur=9WNh9Zdoxz?Y3JaAq1XFGEE)8h& zy}lJ4D5>ZmYdx}xF3#_*gD@A(C?V%j*Q1kgvfSTHS!$uf`@1(E(TH|!eqKGbx4tfI ze1sMd#aO#7_lp(md3 zf?A5zkhrf=R~BrVU|KTnR%aF*WZdQbmkC>x##PRIg?O|(04mPhJRjQl_ZtPo&DG1H zG|nO;EGf2=LEB-m4uQ!spzu6v>)yoFn`kvRVvjf{gF;EzId6A6fLx&bY_Mx!rpunp z{Et)OlaCK7OOAKhk#i>0c^g4GWA0~PQFR%rV|;| zn2bS+bt9a3+skt{bghC?0{=IXcm!5O^G^O895;OzH~5khYk^Z)z*(2=?l|*wR#?ly zOE7aQ$k~XM*8U+p&{1HtotJ-z-=YqCZ<&h`%{_aYtSVo5C!W+pS18bFbr@bLVM3>}3xz3F6n^ur=(+|IB54;q|omU$mx%ddXBit4f_u0TNhIk(6H!=UD@S z&991D_!iUQkN$;y<*9k`F0#FuS#y3bwqb3bA%Jf$vs@E;-M`gQ=-<;62$GwxJ@?Yp zU2Ca0M<+{qsJMBwuJVNX{7TzWMm@i=8iU$3oW$N+uPwb65%a7h;I$QeRqxz@l`#0V zf?=PghsBi$cY3BIpxyt60koa&#MLWl=s9n$zkFNl0RFt zY=`e~vSG^W@r)1s69DSSQSm~CJZs_67F+;>N4Npv4_|QxCMKuEDOa=uq>>1=qmbvv zEpeJ;RYokjrWu{uf(3KanR$eoHo9*~$2jlZQ92Y5gkHiav$#$#2X!8w0+V5gd+6cDq8d1+F;%zWktENCTC|akBWN2H6fbb`oHcPykT;v17^2b{R5|6Y&r8 z%e$gC0cZySr1$QY@yu~T*?(Xgc5AN>nP)M=j6nA%bcmLA=JKme`jB|I+T}EW= zk)54quQ{K!esl8&S_ID_{2nc2{&9nQ-9~&IGS3!Dm>M6wp(Mrn36IG%XRVRLU)RtB zPE;I7*5xMR+4q%zOVD3%=A+gS?vDKBrX)6Fp@{~;-ww;NDTPxQ({WUNiscFRafv21 zamr^i)@44MhyWlJ*OW&`v&z8alYP;0-0X1gyDr%`q-Tq{#1EPaCA(Py^{W9;lql|y z^vzP$zz7uvR;QQJ=z6)GNyg74D;^o>18%JfE=`gZM%FJ^8C3_8i?@b)_p5vgP9g?9Po>xD;%@quJ)~40#yi4#)xhhU`OITSv{g#6 ztLtK_MQH5vyaDyFD$&U? z`(S6!4M5}4vDyDFcr2n^7l7%^t(F9ZExF*nnWNh#7u0f?8{r@7QUChx9d)Z)Lush+ zfzq2$bu!FWu+m`S1N^wqW>-2AEE6}2M{JPa%6T6iphatM$=x+6Bp4dG&I@AkA>ng{ zcjR9l@Wx6&xVX$@ZU!X7SV-+1ntjC^Q=gO#;k6$Ip0gmE{O;SH>h%$=WKMb6E^VdF zfz7>+q_*%rUK5JBT`(X`Cx4o2d$`@C6w3l|b9hJr6EP3s%@7{_$I{YjwIJs2NRWJt z$=knXq(3S{-OYp#CK8>AU@iFfR1`*ypfO4I6ipWg(Ukp4Ya*^0qR#B{&VrHL5@CWK zpZS!!^YucP+!KA;qfXZexkwi)OZt&+Mrtv|;R^$d0Nl$We{}I24|$X`4BI(jX*~F2 zeY~#r zK6E+qz#QB#$c-WrI(ojmv>g4H0AF%He~3}YYsx2djtg+;{_P$Cp_cXbf&;FD!#Cvp zm;+n4D6qvzuIdd5`*%~VkRn>1i5+r!)|6-}GqRjeAY4Dn_TlkylDy>yrue`wY_b;pBzZhTdF2uE4ODD zv+L&@Ij2RP{vhFKMj2>~uUOGsnqF}2 zVmemDL&Cj`co97*J=Mwn3 zz0WJ+0blyiOK5H8MIg_Ezh+55gq`1)JdkZ~nbQ%xnPrY+r;=>)okc2#lxfCt;t0<{W4wJtN+z@3*MVALQ+ zbWC`SY(vnt*8vYq!?ox>>@x&NfUiLGDH)ytJ~X;af#=8Egy=nDN+~;!Mr4Gnw?#PF z^ZJ!Ovw#kI`;xE&V5q#5RvI^o&=#m>>%x*4VfywF<~p>Qkx2HOX}p)8G=qCij7y-D zDG!=>ptP_pk~4KFA$^=pjF(bI%+1IGhI6tvsR0IT8WNCH<5m04fKAUk-HX*E6u0(f zHP8*eSnaSaYLmrXDod5wk2SP!J>S2|RXm0>tg-Cw4*Sau1OD-~`;^cKPHhJCqftL+ zw_eig35`wEq9-R@K=d}kP|J+jw$HdGqU0{}_z=x>%1idZ$`Arl!J@;1t3zjoV2#^d zm`*m`ORd>Ul@$y#!%~FC{p)qQ4jlTJaG4pM^3o3RWmtkL@X?v;N_P%qryu&*KY#jE zwt2Zrc*nM$GOrZ=i-nE9I1h+6Sm%iQ?B2G?@LE%jMji6(srsW-7O9~4S$8(AB%}bu zwQ|yZg@n6Hyo|~TL2ynjBBt}pb|Eu+)}~?JDbWL2Rco#B3pzyz!92(?MkY^LNCS0x`3@hHvBc(U0z7*Q4BW=XTne_W&&-&ub_jNQ#T`!`{Nk~Wey5_ zRn*d59muyy<)+~K^xf3KrL3*COEW&`^z(W|yDfinoFXITJgLmN*`IS0&z$dicv9R*->Y5OHq! 
zn*>XHAbN7^w9j)M_0(x<0zH&`l(^<2wb{tLQVpGvndLRikcoi$U&`>s!}Mm$(dSGXeETO zb6>xIU8_rqh!}|d0=JheLaRySs+T0d}OP2CIA3C_bn z9;zdR8l0xbYM(tyfARe-gDj!JmoYy$+vOx7Hc}J%{FDaT%+}9tB-#%POfXzw9WCeN z8}jZJmY(nu%rwtc%qCMvJ#gXH`-HqRg7H(lL4UMjY?LppN{Qy7-8i0QROCbC`_8^Kut+2}|Z`+n0z@ zq-N_*i||oddXjgawVH3iSm*YSCnk+_U@<8qS@9}l*Gw0+BLC$xrhXp@^0xaGJvc4v zuD(ne0($gOCj+=csxnb@w^xc>?t+kbUux5dulOIAdt1VWowvl9DVnRZ9W{r|jk{@K z>f-Xefx=~lPz*6FtRS?=()Yq99jLMc%Y}PGP@TS3k4G_n>ci^1_ra>|HANmmpX-OM zHlAoi;QMiesRY??k36RW-s%>2&8Ux~-@kdE^ zJ7B7iqG!s@H}=hM4{KFVFh9@fmZJ@*!wc+@(xCwe93xBQdgCv5mCbF$dKbD}Tzcy_BGhr>8ZO9>Hwx}u6HPdh ztrp~A6Xfp8qD)-Gw707(w(UO4tF|Q#y^>FdKZPA1%Yw|3X)b=z(iTl2ggKe>_ma}W z(+g#~N%nFcXacAlcz3=5k$<NS8W7QxE7hyt=CK`K28*c=UL4X!@bOUQ_;aGqQf-YlcLyBag)IU$tjN zGSh3>5m(2DZH3u0mvnf7pqYLz#^b#ckQMz`azelo)ZU zjKhN6e-)#?QZyG3?{{_Z<-*+qF%PYH?|F=C-)lquav%dw@Uv`CY6X;}tRHaTktk0j z1t;{c+CEEYh$30<6!W@YpT@Z2QN=f0rg>oNoMbMYJ0l{&p?6xd)ilr&@k$l%mjyx7 zPkxG~VLv%1pb3+dB5HA1-?k28H0s04F#9%MlT2EevsSos9W*@ZKy`PfU+8(|v_l-` z+V{I}dwLTFYmmlY8(I3F@31PeCX(e{vYu!mP@D`qanQ&txk?rf9)gElbGe)^<1w$&$i;<+BA!4C6q zY(%O&<)3P1i%9zA^H4u(%46IfoVP6D+Z&i@S5W`0PcTXfmxu`9zHPz0#)}4FqFxr2 z!Er3*8?~-H3_GHhFL$NX+)ymT5D22ayZ3^oR5u`9S~7o8iv0FHnAdpVj3JBS6)c># zjNeu+vY)qXXc5oD6sO>h4ftH>6#A+6xDQx_kvfiCKQLft?nr zDOiNAZY##^Q_Q=OuYDQ4?X3L)#c)J(*P>GUDu7P9SWHgQ*DV_f9X1*81R(MA?FV{d zt9zQzKMh3OAZ2^>M&7z~`cFb>^L=SlJ3E{Gmz5)K9S9;~zN74d<(NHu7*=AM`RE_X zY}L0Btnh?Sc5SGFs=(jdiCZ8`3Wa8Uvq4s&EGDL;D*gJss8W-N^95ATcLvYY)*Bi( znAhQZO2@qR6$XHN}PQ_eo0j!h@V~J=6AQv37lal z&cXlo(L_b5|AzU<+`vs&R^t|J_{gDned+^B&elNpmckR`x%^l;%xVa{Is_n zL=r#GF`&fI(N%@!B(p@J+Qm(et~C!L3L#NvyfB5v=~E{pSF9{d8f6fWU2j{NSDE9Q zH})Y}don`UA@62f=%_DUGk2W@CIy5tXu0l*ex_9bcW?6N@~2Qd-Ep!fs-MpkNNUJ( zvq%sHLRXxnhlJ4NV;wB6cXDoz=2oKFxon@)bATynD~7%Q=2pIpW~RhUe%fiv^nz?* zO^FV}E<(Ra`ybj&hba-ZBh1g`0 z_@tD>iWo&z=0kH6H(e(idQaf~Zb9NK*O285htS?VTG4vrNDz%1{dYODCs-&kMOBvp zx{7@r^Zd$ca0{+afw(8EJfx?ZdTf{i8yZ~d_g(d?n)~Gh9_;1d$#I`=uc9jIY@UZl ze0@t|Jt0Q&tW#!+6)*F142u3?t-@2g6>Y^Wz;KJFhUtUV>e)ijW09-Ypv_?Y>y6xS@S`wg*#JMyz1y>}LDGyB+vFKbTl(Y!#mENUP zg2eN6pj7m{r1H)|*A?GnL_Nl76&mJP4a{Ph8lIB!yVkHGxiDLLbE46ol}lw5DZ82joWzy;F?C%1rqD!a^aLy8Vo}KUD6q$_2Rxr_)nr0| zlNz1nAj0&AcCu&YhE+zI;gb3q$!`5uH>4p1XJ@Oj)g(xlzd!C^xsEo z6%!GX^FCvk7-^Hfo9eu7DgKwyl!$0KRQU!;!~qcStbF@%VUjuaiG9t#S*533fDn=% zYhY>z=rJdX99Ud{n4SP%E~UI|)k!i_mf_Tif&M_>eZO-->d9`mYA$=@6y#%9E84$q z7%8TkW+@pw9eM+G{-B)ebA-{Zmq*_fp4E_AtM<@O1W6KBgHFr`u$AcZxB961AI-yD zkHSt2D5E7DM!RA!33wU(i6mi`-Pbx2ke7d{7F}NKKaB7NPp%C-iJv6H>R$I0NZP07)XF-rwo)d2_5SGFHxa zC_GN%J+e}VghcMjgTwsp?G+Z5dv}#kcBaF~yZTk8WyC^JbN*dsOQkRpgkXcl@JD2= zI(`3e4g5zI-Un5W{RbZ;!eeFrHu1g#ZFrHarRcytJktG=G}1rc8P_^81=mBr+S&HW zDphVQCj#1!x_%?+YLw;8e?#t8E!tNH`^E$5nKysWKP%#RJAs`{8Bz529l7sVpm`Q^ z^%*I%v?b7!j#m~mtMu7gC_b`;l3yCIj2P5dBW$n#w0GG+VxKcF_=X zvfm^o^Ys*1-I3Mh??L!hs-jmBac>c}rd?pfne;nM5U!B-Vc!!M&X2i37%ST~Cl1Q} z(BgiqUUBvtx&XKsb3);$`))|#%f_X1UTIoM#;(;%BTXm6=cMTY?U{KzP-yOEYRC!c z&0Ei%xP85558Is5%p#Q0A8XESma7Y*CCr>94w~f==qm^%P1LGK^d#1k*adS0($~hIWvP^`DUm=UVyLoGUkd6*8`}0HzEmYnEWc$ zR~IpH-eC$0ugw(#hu@2Apq1wt5i=>Sbu6ut8C8!t?hGX%12suqTw$k(K}IB%h_#)Y z)GtCHfvNjT)$(D03s())kjGL^m;hI4)M(sba6ErVAk>Sbd-fGSod}2HmfQggpk&o}~pC*z?$%PAmonJ+jg3z&QyOZrQTd~F6-o0!8J zWa9utmWRw}Y+1c_8L34zV_@4+YE9M|21Trfa-Q%M_M{l+z5mu-%FFjX`-C2Av{A1C zrOh?ndB6`J%6(kQh_qpK4GngP4sfw+aZnEyfSxm&)x?DrXl;tnmC$jbfoI8#8=8RB zejJJWPbx2W9wgj4#O3e7*6=ihxZW25(T$tF+Ji=Vq&pj%CQJ{@8XZ0rtF@6RQRg-k z^>PEnI(bdDyrmR%&Ece#zLL&_bN_N=)Q$6u37+ja7*z@)LWL9@c7ZQ~^D$Hwfb2!T zr>U&2u$tnyrrjHgf4}t%qG12=6jXwzbcqOCH#(+ENdGnQ8&lhn(gZSIiuV!*CJ2-3 z8(UKi~f^uW|E%-yN^e52s`6WnW6|Rw~Crq)Z3Lroyff%oG$r7_Ie#&O%mjhw 
zBC@{_2yatO)%Z>a;SPva-01RkA#^aP`OE-A*JkzU(U^2kB%{^Z2uioo_E&jUprg{U zI>8K=5!r#Da~4lNq(Wm&b-qRaj6XS|pD1Dsn9H^GF(qr+RZd?1CE_EL)n5}K^Mp?sQM-7$CbhQu@Pxv}(oDu~_7s$% z?QYr{aiCtDgZ+(ox;b;^;;$7`CWpVq$-A#ZUWevgJbcEHe>bFS1cfVznS^xltUn=R zW`pQ1W$c=A5cVNmq`o{~reyrmNZ*k5)vLy{wu$iXsl`^lqUiwv!Z{8R4<92J%vm- ztwla0cm^B)+V$aH~>X^Uy#2AkYYHQ1imLc>L&=}Xv~ zK*P4Gwq9Nd!e0KM&=6zP8fdgjXxNlgKA~~4Xr0jYjII6Y0`0p3 zx>WFz$B@XA>#?3U`L9}{J%=pEfWA2J_v4BebCIOhf`~Y`+$?^%eLF}Ht+IK@2>6_z z<_npAPE@ff2pQ0I$P1}YXPu9MLd?54Z`t z$~YS+y-|4jHO%i`L;QWXxRRQ%3*WIK253yR=7n27R8&bK-Pe#fD?BzF2kH1G)6UZD zDt22{cNG(Mgfrpn^A9~d28h&ST|2T)?}zU1MkdaZ4p~ktd?>~Zo;rW%1gW9ZcD?#1 z%b}D192o`-9m$d~;tT|e3*Mkq?^JE6B>xGzu3xu@Y48j+{$-a%G=8p%k%KQgIx6}%@p8%xcX#J364Ae9l5H2S?X4dA}j*wiZo5gyczg zp0BZ$_V6%njNA5dyH8R8Y~B&DF0>=TNg3Nv9Zm&VD&m>>Efb2qA)A-hku1mpEQ|h; z4w^rhvT z^7l4awEhIHRV0MuB~J8!ejR;{EZ;Ct10rf`N~rB@kK=QOpBs zY|FTpJD&(4;-IlO%;Lao!=)3XRp?(M!ltlye_9+Cu}u$J6UW1P0iexzKTM^>+iqna z%eJ)&2N}RLq&%?vn6BmB*pQCWr)E7b!nQf#&CMIL*?wQe`Q}g!LM%WXIV;m2mBXzQ zkX>;J59nZE7EzLlh>~-C;Vn|LhSF3rh8EV*VN1E?V6$_)BB|WCKQiqi6W!z zSz8ydWqAN369bkP?fRGEFV&!_*>0DUl1&rpJ&?hZtjs!u{DTAE?8_VxlYAU@)%&o? zO{soahem(0k!~l(S{(zVph9b69hbPf?M-9V=IQO;F~Gw@LCvU;i!PIg)CfPcGp5az zFAuIMlusrvDh5Q=ry>;W!T_~&2-9B|!KV=f_1Gk|AS6oWA?^B;U-CL_;0O)CszmwKy4k*n`*KID=ET%d2EdJIiJ+<+4C^hd2s26O>o} zG;{Q+4w^WK&(P{%+A#dPC9uE9cO!4k$?p0-;g_R03|Al|O!&hpU7Op_Sb--Igczpc z&sd+-EE%|?R6?VhnEwy~U6npG*9p4ulk{E;vZ8!f%7ypmw1ZuJSjNRJvynO9!BHou zI07dfz8{y7OAUyzvT1bHyAX1bDAcA5J@}ldHE#!ep$51ghGY0%J0tM+7#@t*?`PTS zk7|;P*HzK0U^gHU8@raU(YaiNhW6OLx3_@87gv`dg;2K7YquBFP0{C{i~L83CyzWE zbAFe?$l|}NZaM7r3!>KriNs7mAk5f$HV#E8_!B6G=LUs3lZ zC+QXiTv0%`NJpRK)jLESoPRiCfIL1^X^M)A98VWVVR1(T3)7tBJ{GW(6izBl%86Y) zw7utru=dy(-?AEr{SSK{E6dD-YZyJFOws!q^N zOKg*pgub+id1oQH3JzGirw1uV55{#g;ha?8yM1~O4)HgdqYN<8+<&!Rsi9!iL8-of zc4y&&x4+i{bU0{W+vasa29bGW&AlM!`LG3zCH-)})QUs%I zIg;k-i1mHsY$J1pI)Nj7ne4#Obv0I>THw%Xm9&Y9154qzE_x!4lRSMCWNzVapKuBk zaIa5u`jZ(6Z>9*CNKH0Jn_W_6S{+gXSCsXsu63Gpd+;aELPUHLg|=VRA~#i;KSXx+bR@g=J?s#M(l7_5!6 zsyKU1ivYa2DmP~tF3Xyzx_lEi4G-Y=R1ajzBK}C52ya~Kp62tjPY$Umk9HY2G&My# zvwdG|PF`8@YqB*~m)b8mIYa%HwY^U@eV0eMMcJ4)O1;~JjjL)Ye{0y=>ig%o#uzX39e<rJAotHuSmF@5{=0MWIqhMj~jlnN8S|ACas`OiDh)agVayAa)!`} z+MGc6Lz9Fg1@hq=QHsjdts>xa7*8)qli@QO`{^gen>;wpddV*8If0UuPE;qu)RDLO zCb&y-P;`hUt`;<2bvkK}SP1&q#Kqbiw!IakJV=S4JP*jETZeyWCU48u(>C5(T8G#A zbs=OoR63z2H|To5`_CNSX_$yZL?7J$&XB)BMrLt#ll{*wmiZ2QWk)RqTYsS=d-N|8 zW{Lv$zwTkaU}bU=vBR=&rTV`T0ShNJ zX1VrWv0A5E1T$tmLelQD|6?pn=$~5;6nR6Qp2~fa1$YwLLmYoKg$I=n{nK0!D7Rl# z=Mz=Txl^uQdG}*0jMY62jU}{@ioshAYDxcdRhE3#9qaaOkw0EcaKyeF>YGPuQr6hj zGrjIg7|b=Moe9k;3T`==d8Lz0JzD2S9)BqGZ*_x=4`!_}+lc);9fHy`InUJZ0zEAy z_0?((tIe!gTjcg2q~n+qn9C63ue2pBes7$5xSS7BER;#7v(Hd zel`4-vZBPgIEA!9X^toqkuf(VIBkE-<5BCT%+acliTEVIYtkv8!( zSiyS%VO)Y!BI~l+q=emJn;Ym#f6{UQVPzzKd_gx7%Fdw?-M2++dqaoU+eLIm}Z|6{f6|bSFeaieKO1@l?nWVnsfv+iC7GQ9)OX`XB z%EIWFuStpe6VPs)EFQhO20r8DluCL>L|Cl4zqR}v1CR>#$q#Ow{9%`8McuQzQ-wo< z-Tq@jnsfA1Ec=%-k{8&?0cVycutteHqzz_pTsrMH?`})93x0Z;> zX*C_5-?dKV58QL5O{xecEGz%8vIhC<^nc$`FPLVjZ)P0I_H>oUHb1w-Ef=&e2+a6= zrskZVdn_*fNQyM}p7-HB>6EebV70)dB;n*$gxCGgvIv5b3pKlqKB3;X=1>I!SJY#X z-7jZTkW63y#7k1x=+rpZsV>e?8<@L}wK*o}{chU<{W+?8}R=sOa(4ezQp; zyyJY9GwFbKL8$mWwR(&3x*ocQk(53?bOrc#Wr}Z916_<|3wcDoO)ky#I}(Xed^gyw zEb7Eo<*73AL4u`*ErTSawX(bz6^F~)uzt@Y$Ln5L99hR%iU6~Aq-$y2(}?jTMPFmW z=D#nhvF`g7zH~773D8p|+hcXBsb8vybk?I~9Jplm^Ym!YXg?-r)6r`{H66xT#W zM*B(tSK4OJ!d=nJs4GC{Ap_~h>6oX&V#ZCp=6tfv^pU1I9+1#}dh>Pa z1=+uwwiWu!C@PbC_+O47T7pdf4!A>#*nJ>3@D`T54d${@dHD42c2&H*_=-_ova7bo zKlfmN5Wo0aCZC;V@$TH7&xb)8`vZmrnu{vP0X{#fT>9R2n|^cb6|P@J^3XbEpMl| 
z@H_gBRYYoI6p!Nr71oadh4_SoqL2T~i$O%nlG=aX#T3PVAH(_)`u{j$+5d!gP&DeZ zfWM*hAw-`0hmw4>hBf^x@7 z%Vj1^zNzWkvFX;`qhcm!fA^CtA6mE_1=S+>Es%ChfGT9lKe}jHNK}AN7?34N>*(oeLq3|{JmnjwxW3ZH4XyS-`nUNcrq>;Po*{8wNS5XJ?)slBG&d4 z94~NyLt-GIzjtk0-}>loTlf)2>@}teh;8$-1C1T2W|8xNrfw?)^G9I3H(%u`?g3Jf zkj%pz3Ql#>=@85C&vwr6bGNy0Y-D<1X0E*+^ieUzXO2I7vG>P6-^GSN*yY33?e!2K zY^O_Gl>_kQJ63H{iXQnJ;{_b=2PFD2piOmg-LFNtla zo}bzYNwo?Ke)#6AJ7VSS?JN=~8y8r~Bm1&|V0G1T zo0xginMl-gY2p=`vmpVbtFh*5Rcg_sC0_?R+MLZx+@*nC)qzyE(FofjXRQv>OWeas zU$qD(76S}O=o!0D==c3Y1RR9BTyM`z^8KO5EJ%}Vn3*q+IBGA<{}eiUi=`4YsP0U# z2D7)hq8eqWS4zv}>AlyR`ba^2T_USso6{(9Bpj+c>-S|iIs@L1ImRYh7BC-AH76Rp zG8R6j!Igp~Z|}EmG048oklY*_6u>`Q>bsrY*G-fpxK)2-f9*K4qF9LzJRWhXQ;TlS z8RT-iACQNjUvE}gdXq#tyJ8)gWfHMzLOsH2J02x_mOf{!&E365JUg0X2HHvvES*|K z|M*b{8atV1MX?iP8tD8h;OX$vx?sC1P8zgGcFAggZQ_Stkv!bLn%EHazHWP#KqPyYbpuw^lfs z;cn1~G;Q1uC8GnftA1q}kMO#+p}Mv4O96#l;CYo=td{=fK#D}{XkC*MgZAhX@{@w_ z$x49%<#?X-zBp#Ff6KuNLUGDhaJ7y-Lj&bJ! zfgIzl=7N(wO3BThAnx)ne2F;H%OmAeFFQ)J-icp>#}$w3VfoTXKr=g-^$gQ3mcc!jKch&4lJ3QxlEBJNt z5suJZH{!agn#79=B=vR`$Hi4&o#9G{v5NJ#$-XBIubjZNMC;+uS~@e3fI~1MgE=ZM z`VnmtjE+u6Fm(Og*rBVJ0tz>f!W8fZ43g#8Y>`h7#9K3@9)naq3H-{@!xC@Q6wN_o zjWW=a?&v601bmcrvEPUdj76;6U+>3Dj{U~4=vAxgqH_Z0iTq4we)NhAwlumwsYFch93ZrJF$&$_hzU8eV}iB5kZU@CDW{x-~3Loz-T zrtpBYOFSO)dOx%-^3aD(ak5C@Sh97dQ`mB!0VzW%kPZqeA`}!rg|Ko|E|1oLZZV=D zk36(kd(U)7ZeDzW&JBq9l zls917!qCYfp%I@tmIi`jI6yqyoOF~+mYwp<3Bp#^EdVw@!rC)8P{#{5)5{VkrLG$G zMnYM2H1F##!z>AtvyE3-VkM~>rr%rgAc8=y;dI4&4{p@fvs2<3sSf>uar?l=ZFTk z27p-uJVEcs!$mdbwt@mPAsD*uFoW%EAL<>z!ZRLRt)A6uaGBYkS^Fr%?hV8D-`n(P zGe_WO698v&#e^pYXG8wlZ!D{9+yQ31T8DmRm`}u^=Er^D1=$-ekL(=IZV}3)4zRb^ z)bxSWE>EbB0C3F^>N(wWlJiX0;C24OTkk;x@;x@t;3W+645j0Qbv2@}}_aFM3)tN@fz~J`1Um$Kb+P)gt=Zmx6(C z*`led4m}vg`E`Dre&#du?F??JEdjkn=imuMcQOan$_W;0Z~&bs1A_-+STPp^55-CE zN+bDqZ2F0aAH-Kx-Eq^DsfnfA+~~oV2cd77oqrI50{7$07X28GpPQ+-n3cl_Bcmd_ zdoLzmRr5N-8^PdTxil-=3;ebYUiV}a4DJAbTgrlm**=NZXvvF7>` ze!il6Q&Ux^E%l3QWiggX+cI4va|dJa_M?a$k8P4t?Ar6d$Cpb+RQ7|-mwg;Ng@RDd zmD^OcHy0u4-t|(BUUj5n-PonE_I%a6pGXLsr04W+oQv|hUrS|B+rUd_g0ry7w_|V#8Md+hkf)67KQJg66-g4^EAzM<4o@= z2=FTlNNDrFzm^t;wq+!hEtVEQnjz!?51B_FR8>3&=v7HcDx}X43~KPBdfo(vU~cMN zx!j-QYiN#!nL}%1k~!Pox@Y(FYs8&;S_ab5Px@^U|g4AgEi2N2DNb@i6KP<%5gdN z$!_C>CMN`8z;3*eIPbHgXloX3+2QIw&GCRkll222=(zg`57t5!p?S|@AXoP0bjjrM z#eTW0?h62@F@A<8N4E3B3~1uA2TYN);_r=AxRL%0>r)T*>wLBEba-!?Rr~D2v>Y!# z&iMUUr%-+hr>#}D>jklR-Y-jB5Jx>j#hH#4vvMuVPQnQ)?7VRWJn(UsbSour=RhSe7M zHPcbPm{H7^Pkch7F)K)1%dLs?e~XkD8GIwNX?pI?<}sMU4FS(i% zU8PzUlu?jcCI(g;pQrcd;P2`wG6P>5_3y4dnWW(6&Qvh)ptHMueTuf$)KB${mHx6e zr$)pyl@6%qk50OG-|qoiYzHQ#8N%_!E%1?5vqE5!gQRq$ppbSu+7$TP9(QFqDxPqZ zI-isfn~Dr!hOzLU6+Bgf?qD6x;ZLQP2Wz|wisSIh0`4R% zicZxma6E11HGo;6IA78L085Q~TfCp8GyFL7Y_pCGIAp&MaPE@!`cXqj?nB-P-{XD) z>+@))OOyyaX%|{FdMJ_=e_B(dy`pMf8j|YBFyr1g#`cY^k3JRFwsNWdT5Dpe^xB3( zolS(xf|zEZsWhlT{D7QpV|Z2NzM-n}ud9bfcaNHr8SMNNL^?b~8mZqF5P;WzU^Rb-wuh$*@!EBOIlccyHP#!kPfdcorI13P8 z#vuJ0(J<;y{J>IvVXqj{abvrxF!Xrd>Mb#Lv1 z{jBP!4^ESs9d_Bt_$hG(j#gijtDsN(;-CmmzJYv0*;w_-yBs3g2psNkb3?;p4gC1w z>#}m=W$s2Di!U=E);XIrUz{< zC)*Xn3g4*Ge+|PHIW>jQn544MRf3e|`|4)35-P)3s3%4>#LBy!sa!d+4$z{JqfqhfU8;Vdgiy&Bf7d zydUFrigV(XB|C9y<^_>{MjG;UlaR3LeaS+BpQ>s^=`~07j_ntM9=e*BAEndCYScQM zc22L)Ua}7HGm~v7n?WVH9Wr0){rnulH{baLd#Cy4_3%Vf5?+!v@=7Ex7C0rBt8vo7 zyXDj?UP{@P;IDIadpi7p#HU1p4t`jm)YJlxh{`?4|0jUy`ZzqKz+QaK7r zdZI)_l;psi+UG~WM#OGD1zd=mBa35T0P7;onK1mj0~fRD(1AV;QM0x&`QOw-pFXX| zre~Qp3y<;pit?ag=14;vW=Bjx&s5YSa;Xm+BfN^*_aK6ypW^;O?BkIeAsTqlBlN#U zI+Kh}xO8$Ulne=aquS~TI_r_(iNL<;hk}NDK^{4(d%0RdF!-phD?F66VjW)xy{?YS z=_$eKkH{Wu=8h=SNUPR=X2K*5f7G?DAO9&_(LpYQmA`BoP4blxm*U&Hs 
zNS7iE-3%#6N{4{bAw3{5q|%+z9fL#H#{YiKbI#S-&;G98@4Z+TYrXLc2YSL+WR=NU zD?G#4%^`RFy)O?zAHBVuXUaSCzxpjbqBufOc7MSq3b-i_A6LjYRCOT|Fl&8D75<5j zKRl5oygFOog&=EFSx) zk#7ugZo^B_wphXH$q0xF8uGtd=>ns-t(tdi35A=9wM%(467DjUacfKt_Qb!4I<*?y z5rPJUSKi{)elwB3M67){ePTN&9I+za9qqBdHA<(GaEJEyzmnVRZzg=Y8%E+}yYj0o z#ET)xNH_pS=%F~LXeLV*z?sS+|FGgv-;!;b5I`_m-*+{61- z?DmTo5^k{ONN;KTI|nnLV$R{;{Z@#^@hdv|QuWq)xtSP$s5hNLfu-l}<3TQ28atf` zo`rahgv^){F~PkV?XQ0>k(m3HQ^zy3m}TW+Fr#RB0H1e{0p0N{!fKusDg4bK1m|0p zV`lc6YU8rhH3o4NI}HLcgKe+dMGTl`f1`^KU-Yy=iEvodA1@pH<}(WsPSrOgIZ7Rd zN=TjR@zM<04Vz+mN}6CuK`^ZOWmb(LY4W58N8sIJyxGXZIKJWVPJaohfS9sL_M{Q_*uZ&N0Ox(S)&$ zZ_~k$I;{?kO)rwZ_;fmC09t1O9a1ExHPS=i34IV}%%ScoA_<0Ecw*pv^OOHv^gEW> zD#yV8oSh8(RtyU+=-eg|b?eZ2qC*f@&-L|lnDMJP%V`Drd0^0MnCT0$-)hSrv>)jp znG*nL>a4|3R6o${;p)6wpsAU2#k|(a^S8~64ST~_6QRJHpSFcH59?}puK6(vwxhJY zpDY;{#r-$1M2FPHSu-r;ad|1+HTf_lho2=Xou7FQx^;zdS8dYO&6M5im9OhOhFaC8 zFUWZEHczS|%g$od2a|ujm&c_fgr&M|+=N2aaClloU$T9|l{$2xUB1`jXq)DB=+@Vq zphTyY2tLQwtGIUt7}s+Z65+oinSFAmr~HfMd4jj(RX$Qk(yjeTmX~hrX`>pP=%$Pm zwtWCFjN|%TllS*W0rNHc3avqhyoGkez;1`u;P(`g_xcq6o?3wDd#X+a-dH&(8IHgy zzI_vXQZ@vjqF6+;dYn%)Gu@Md4@AUpQyp@ki*5G(fb}oZ`s}xU&j~|K8MV~QA$1Az zk=k)4t~U-1{E%5(Hxd@Jj5#M@R13ookQHFa+5gUS#$01yj`k-7Bm0bTK`#5EJJ+=i zu2Gt&=OcoOn2~5VO|bO1UtrMOhICn}wOM#@a>kAU60i^Ecb!jGYxGwq#Od^*kkebZ zr=sLoI}n#!S!sE;Wj?X~s?e+V2N~gSi<~Xv?e~!0h{-_d#kHcLDxT-&9poU@guA07 zhcxZoXO(GFd*K*MmFR(cSHv<8EONQ+SyVfMwA05JIguao6re84D!+JjuI@e(79N;e zZ`5nP^MWD&<9T#1R!QtfeN|2z?)J(8_`$pulDXSE(UvL0?{t{U43ScCIv}a}Eq}S2 z)kFq|)Euo{KBg(!vDZ$lg!dvv7Y?s)&`Z^jE|eVm_#1)%1=`1XASH0v(F5+= zJNy;I7PXlB*0-Qw+;Gp$CgfkGx~(*Hgx-Eo*pWG%{1N}r)jN9FhyA@#K14lLbxzA@ zG@Wv#)$Iue@rbI>je{cd<@dala{->rshwwLCRu!RF>7${6nyE}xQrte*Y^11VKqhW ziYp_tPd>Vi^QHMg&JmD)&)s8 zDR>@|0FN@!B zvATj}Bb6=LZ*ZAdx?MsJzDt&@Hk`j~jAk(NF=jTCkh1RcC1;Dt@X>f$r&~!&aFd8~ z_7RTxej%e+wwVwH$Utq&nq>>!nbR;veJCiI2({)0mXU1lQJ8{)E?CM2*~A;>_c%@&PI=FZp&O>JmK}Cr|_A^w;MOUMu4CKSrr+efVnrIuRB?Y0PYx zayoa{C@~#Lo(elKcMWi53QT!!CD97u9{<|Id~G{^Pi7)52w$PrfqDYyX{5m>p#j< zwtR?qaZCywMV}3gqkuXRBnUy`?0#HusRdxccJVE=JY?ssywgoObVnb!UVb3*+E>X_ zn70v%4V(NxWut)_$Pw@6Xs_y%kpykY;>}W{ce`Wv2X4RbRfFlItq%gMfoIb9F;fn{Fnh}#h@dIOT=S)ZVQ6i-qJy#PxU zzXA^*cgrR8$fRBn#Z&2)kB&lsNR!czvV?SI^xY8=^y>hgN(t6P3Od3K??tXjousY- z8inD)gr&Z=MoX!->jK!#U2CIp-k;NC(5tM?*h$wZ_AnLu15qOHTS741JB#|Xdgw2< z%F_7ffy)SZ8c4G>$k5Z%3HCa z*5mq0>o|l)qp@6Ne}2gj2n{{>_+nc+7M-$jHodo8K|wh4SEaIqk`s_ubl*{->5wv> zP8hxO#uGzSY_!Iu4I`(2lG4CErOc9Ms3KixZzi_MYye5nf6MksnsT|w#Y{q)X3YQ|FK(z^@WaaFzzUB#73>z z;c@LD6vn5nPc_e|qqR|K&oP(uyFThAB@za`_O6KZmCs?VDj=?1VrqT{HLZD+lGQI7 zxp6nUSVA?OA?KcW;^I%S*wM7rGtz_<(EafuuvY^8RgMFkDL2^MCmsB4HGqL2x5%%n z$8lX-NN&syZ1y$QJK@lwpWc8TlT``f4Z(1*3D*Ewwl zNJqYO2|adh@&oqL9(i4~p8V_7Dk<(gasJb)eWv(7d&&QCarwUvSmom&MfpW`YxpuB zmBJZZKON)v%d1ZYB|SRI0Oa7A7gzstVSt(7Bu*kba~<(s+;S#;ZCdG7b4w4JA9h?* zRqj>Ru82*?J{VThyep*<;EFl|U#zAa{-y|Wdy+25dX^Ae{=z2(*X@kE^Dq@dgJx3t9+MSj z>pMI753*#a)dMCHNzEY-St@B>&U2nAUYYHc^+N1^4pvlibwy>e!QjI`rgksZx-j3K z@g3hL{DgJMP=oTW8SpKCTRv+jh99X5EL6&K1xSXzl+;a7a@kmTPx(|9+Ly<4kJfpj zL<2zXw^2oD2Xb8lCMLHV_YyFOudD@aSIZtZKNW4Jrd6tYe{<4?@qPOyDL%^bPKuy8 zZ}retW(FvT3UMP&)aEQRE4CWp!LD3N4HVj=1Ak#a);SZC?|xW7zspryq07~7S#dPF z$IPu^4)igb;B~Zl#8>;C{{@ZGH2=+}hV(v^B0BwkvI~6vxl!X*Jk+!~>~G7ri-Xed z*WQbd&h{SUu)I7c}TvM*lSlMof@S&u8MU9sTExuHU>*d z_Em}Y96?u$i)jyg5fTvcp4!2<+a$M4rq3eIy5Qnuke4?lPsO7PqIu2b{!X=8Z{nSw z&DKnB>g15?vt;Xv@6m_O&wlYz(l;Jyx_ zBIAp$9CFujaZ5m4jHN}9rj%e&jNtLu^CuU#KB7g$RmTt1@tf7KXZ4i?s|uSwinIf= zpE5TsECgNm7N0;S-a(;Hk(q~5%M7mLcZUK*)8IUXC&Dyyi=706{F$OL!9L%@{a+AY z#Y9_b>Wq}%o0oF=x_sI2pKlQuC;{vmgd=>vy;*ga5by6W)m0citaF>LgzZ-`mR0*NpbgUl5Vxq{r&6JT<}Wk}>z`^O;-ROrXYU~!?o)qB 
zEGI&6zeBCtXEJ5)8}LDA}b$EkSa zuFv-}6s^rvxCO%$!fE~*3+qQ0u=b>g9-c3VUFsCB`fEFXZ}^9*Ri7|JsG*0dM;oZi z$M|VCUZgmDXi&Jl!l-9`dhz=sGmW|P^ZwK}f%y+*m6*VSq(yLP5^#GiH$Sn)1hPSU zhKY?Y2gcdc$E`{LLeCX3Z>A;d^tl$#Ahq%#(oKf4gCa`Oin0v1#R{%_FUU-Iv$CaU zq78HkRyB1yI>hUKKF^k4Y@bnZGb9w^IZUF<&F4~+_bi{tOIAcht6I^h10kNIN_)1<1ETOU;4YiKXP^sK;Ym zjk;4Q$_G-^U;A7Iij5pLJUbT0!ibsm_+%C9<=VbZB(=hE7eRicJ_q!ylt?&*R+nb~ zx2f2B{zepCl4k?&{mo;4UpK%U6He5K!<%|oE$%LGVxpGwvfmrN&JkhsLqlG>CfKP( zS1un^f2c?1u$JvrsMUcWiEtadl_vZ`jtuM+WKk0c^G%F>gm22rm6(oIHowd0kRsFOy)6ONmuXjLXnTRP!+Q-l(zMqtdezOb z{j^5bs@YmUz}_7&Wuv}Z(H{C|`gKaQuC-))a?%D-YV=9WeT@prxHi#&{x@TdNGeCK4qMJYYkK05z!prRGY0nxYCan$Z zAdOEhP&jq=TZ*3=#REx<(o5rY8dPQVzVB@M{eFt8zAjyLI|(8#0$wnUXN!k+)~I=Z zv*^!%hA3cjwlL9}nngHAUg1!u(D`_`usQbN52Yj?v_lLT3BLl7mtuL{_=>mEoso3m0hX zEVdJ@_zv^o^3u*{dHGVTQ5lQ5(q*uh*;2(|@mK0PM_J>Y zjMPc9dxF&iks{OHjBM0GiGQXByUGJV3hqW3+5XS2mb~BXFOJ5q2Tl=uy);<&KLg+$ zQtcpLxTZiZ&vmyA0Evh&DS9HsG1&o)HuHx?flUne-LuxEB*tihN#Sp-gnnvdBDxs>%#e|;!i zdp$5sT-6B|69b=7B0DIh7kmw{{9zh2(e)fkDz&IJ4#L*cH(iJL1+0TUFR=iQpRYtU zPr!r&)R@F4Ef5;9ZUxYWRp1V6v#xEdtm->~=)AB;GZ9MNN(r0FIx=__6L z-r9KIz!UamwIeDecaaN$2rT zrch?NN|~~Geuo-(dwZT0`Vj%+xVFZDQ@(vmR3;1PCl)OcXxx$H=a5jsv!!O|wcA)> z5%fEM!D*QWf7~H@Tjn2TUCr3KF}j{YWAk5LcmCX#p;Q(6Rs|WA?jpMB-Qfk(*>o)w zSm7b^-FuqiEXN~iHi&@ruWyr=?PZh`LOfWzd49wk6h{CY4sL;-)_1uy&RGr?EaE>` zKSjMvweNd-xh$4yXSl>JPRTi2(rIyK9piD3t_5bWVQMt*)ra{M;Lbv7Wn?=vv4uv& z#H!2`KP1MM4#%ycM{#YEYJ4BW!n!)ShZ{ypE8(@QPq!&XsvRSO1x}2u@SI%4%MkJT>&Y^h~ zXP;GaH8f~B*#B_o?`+HAkRDI>LR!FzqY&&Tw`3TnY>7$Zj){cIbaM}l^2ayMu8{s#!1#W!2cl+%@ge{C4u~8L!)JEEK zpn=N+j|#jZ{;M9lGD|Jfcsp#l|2y8&1Xnz73Ac@Rro-hfQ;M52R4-yNShH5yw)N4? z*(Jti9BL^}HJ_u{iMVlTF76R_lj2n1$(5cXa8MaszbKDGy+GeXPZCy@k(~%p-EZZC zdTm@*W!!yDgGXP9DX~|%Zm`iv2az$IgTq0gRW=ckk<= zP<8I_Pf}8|Y%M19NuB+lg}wpZgI87`q-vD$G!~-*%V5(x8`AsBh(`R~UPYm??-CRBY(3#@qAGz!t}7opMXAFlYKbs!_W2LQ%FN zcRIeFpRIPdWe32skTV+G}MzV-he>EhGl;qc6-ljX!-gbe`M%T#qEiX+CbB3CzfDO1+#tWLv zP;s8r+%RSI;nIy%FVn~GL(;=ocpJBfD_eTq>b$v9sawmeaIEVx1*BR#N&c>?zxe|- z$Jbjc)ej)`T=&Mq4MQ)2zqL-c*Rr}*IV3#79DjAiAb92}wQhNpQ+Zrx8>{=R4?x6_ z80|ZLihFmVKsww;_S5Ri$Lsk{CDdE;k#G3~!;a%0%7f6>2oQ548_QR zEYC>b1QF6Qd3v88Qi;MT?f3Nd>z&pwODCxShLAP|Y5K157TP`(wbrkgN*mho<3FF7 z2s6h*)4#5^B?g=7;n(ZTBKO+gR#tZWd)z!^#&v=TdQ!z6JnEv_n((R03)6}N=@y53 z@~wgUHtrI}7hCKu)gCK~9%jgtM zUPM&4a7`gLg;SM~ayjD0$`7M)sa{BVOk(qBjoY-sYReO6yeMO)K}*q%>3G3G)SkT% z2{RSPORJlpZ*6`ZXcT}rb=C#h35eQjb7hJz{hVPc+moW5pW1^wC=X>Y?kLv3(0%)` zlp24DSdwY_*ZF^$O$smnsc3m@s{r*a+NH*f5w_P14_bXi0S7c)6&+0`dm-`EFOSf3BSbh zz`OQGgpU`=^IA2&YY!U`7d`Xfd8zq?`%bB=cmepYFW1;`Xirjh4hj>q6=}Sw5Y)#= z`bYgPy~&`A3>kbCou7-`SJ}9Fc4&H|JiKl$4NHktRS>=i;HJNCq5k*kB{2Lq^Qe#7 zYV$ue3P;5Mtx+CQ!3$-T{(G=N{Qr=Y{{8u{i245qVEnwt+mtKnE9>h+U_5-v^6GLG IG8V!A1wT>sz5oCK literal 0 HcmV?d00001 diff --git a/docker-compose.dev.yml b/docker-compose.dev.yml new file mode 100644 index 00000000000..a35993435ac --- /dev/null +++ b/docker-compose.dev.yml @@ -0,0 +1,105 @@ +# mkdir -p volumes/{lib,pecan,portainer,postgres,rabbitmq,traefik} +# +# docker-compose -f docker-compose.yml -f docker-compose.dev.yml + +version: '3.2' + +services: + + # web application. 
diff --git a/docker-compose.yml b/docker-compose.yml
index e0f5661b0fb..64843b53e11 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,4 +1,4 @@
-version: "3"
+version: "3.2"
 
 services:
 
@@ -43,24 +43,6 @@ services:
       - /var/run/docker.sock:/var/run/docker.sock:ro
       - traefik:/config
 
-  # Allow to see all docker containers running, restart and see log files.
-  portainer:
-    image: portainer/portainer:latest
-    command:
-      - --admin-password=${PORTAINER_PASSWORD:-}
-      - --host=unix:///var/run/docker.sock
-    restart: unless-stopped
-    networks:
-      - pecan
-    labels:
-      - "traefik.enable=true"
-      - "traefik.backend=portainer"
-      - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip: /portainer"
-      - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}"
-    volumes:
-      - /var/run/docker.sock:/var/run/docker.sock
-      - portainer:/data
-
   # ----------------------------------------------------------------------
   # Access to the files generated and used by PEcAn, both through a
   # web interface (minio) as well using the thredds server.
@@ -163,6 +145,8 @@ services:
     image: pecan/rstudio-nginx:${PECAN_VERSION:-latest}
     networks:
       - pecan
+    depends_on:
+      - rstudio
     labels:
       - "traefik.enable=true"
       - "traefik.backend=rstudio"
@@ -176,6 +160,9 @@ services:
     networks:
       - pecan
     environment:
+      - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F}
+      - FQDN=${PECAN_FQDN:-docker}
+      - NAME=${PECAN_NAME:-docker}
       - USER=${PECAN_RSTUDIO_USER:-carya}
       - PASSWORD=${PECAN_RSTUDIO_PASS:-illinois}
     entrypoint: /init
@@ -310,9 +297,9 @@ services:
     volumes:
       - pecan:/data
 
-# ----------------------------------------------------------------------
-# Shiny Apps
-# ----------------------------------------------------------------------
+  # ----------------------------------------------------------------------
+  # Shiny Apps
+  # ----------------------------------------------------------------------
   # PEcAn DB Sync visualization
   dbsync:
     image: pecan/shiny-dbsync:${PECAN_VERSION:-latest}
@@ -327,6 +314,27 @@ services:
       - "traefik.port=3838"
       - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip:/dbsync/"
 
+
+  # ----------------------------------------------------------------------
+  # PEcAn API
+  # ----------------------------------------------------------------------
+  api:
+    image: pecan/api:${PECAN_VERSION:-latest}
+    networks:
+      - pecan
+    environment:
+      - PECAN_VERSION=${PECAN_VERSION:-1.7.0}
+      - PECAN_GIT_BRANCH=${PECAN_GIT_BRANCH:-develop}
+      - PECAN_GIT_CHECKSUM=${PECAN_GIT_CHECKSUM:-unknown}
+      - PECAN_GIT_DATE=${PECAN_GIT_DATE:-unknown}
+      - PGHOST=${PGHOST:-postgres}
+      - HOST_ONLY=${HOST_ONLY:-FALSE}
+      - AUTH_REQ=${AUTH_REQ:-TRUE}
+    labels:
+      - "traefik.enable=true"
+      - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/"
+      - "traefik.backend=api"
+
 # ----------------------------------------------------------------------
 # Name of network to be used by all containers
 # ----------------------------------------------------------------------
diff --git a/docker.sh b/docker.sh
index 3f05e57c524..372093d7279 100755
--- a/docker.sh
+++ b/docker.sh
@@ -61,7 +61,7 @@ IMAGE_VERSION or the option -i.
 To run the script in debug mode without actually building any images you
 can use the environment variable DEBUG or option -d.
 
-By default the docker.sh process will try and use a prebuild dependency
+By default the docker.sh process will try and use a prebuilt dependency
 image since this image takes a long time to build. To force this image
 to be build use the DEPEND="build" environment flag, or use option -f.
 
@@ -103,8 +103,8 @@ echo "# test this build you can use:"
 echo "# PECAN_VERSION='${IMAGE_VERSION}' docker-compose up"
 echo "#"
 echo "# The docker image for dependencies takes a long time to build. You"
-echo "# can use a prebuild version (default) or force a new versin to be"
-echo "# build locally using: DEPEND=build $0"
+echo "# can use a prebuilt version (default) or force a new version to be"
+echo "# built locally using: DEPEND=build $0"
 echo "# ----------------------------------------------------------------------"
 
 # not building dependencies image, following command will build this
@@ -188,7 +188,7 @@ for version in 0.95; do
 done
 
 # build ed2
-for version in git 2.2.0; do
+for version in 2.2.0; do
 	${DEBUG} docker build \
 		--tag pecan/model-ed2-${version}:${IMAGE_VERSION} \
 		--build-arg MODEL_VERSION="${version}" \
@@ -214,3 +214,19 @@ for version in git r136; do
 		--build-arg IMAGE_VERSION="${IMAGE_VERSION}" \
 		models/sipnet
 done
+
+# --------------------------------------------------------------------------------
+# PEcAn Apps
+# --------------------------------------------------------------------------------
+
+# build API
+for x in api; do
+	${DEBUG} docker build \
+		--tag pecan/$x:${IMAGE_VERSION} \
+		--build-arg IMAGE_VERSION="${IMAGE_VERSION}" \
+		--build-arg PECAN_VERSION="${VERSION}" \
+		--build-arg PECAN_GIT_BRANCH="${PECAN_GIT_BRANCH}" \
+		--build-arg PECAN_GIT_CHECKSUM="${PECAN_GIT_CHECKSUM}" \
+		--build-arg PECAN_GIT_DATE="${PECAN_GIT_DATE}" \
+		apps/$x/
+done
diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile
index 2e419aab5ff..fc309ad12d7 100644
--- a/docker/depends/Dockerfile
+++ b/docker/depends/Dockerfile
@@ -10,7 +10,7 @@ MAINTAINER Rob Kooper
 # UPDATE GIT
 # This is needed for stretch and github actions
 # ----------------------------------------------------------------------
-RUN if [ "$(lsb_release -s -c)" == "stretch" ]; then \
+RUN if [ "$(lsb_release -s -c)" = "stretch" ]; then \
         echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list \
     && apt-get update \
     && apt-get -t stretch-backports upgrade -y git \
diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends
index cd041decf6e..6228573ce78 100644
--- a/docker/depends/pecan.depends
+++ b/docker/depends/pecan.depends
@@ -111,8 +111,8 @@ install2.r -e -s -l "${RLIB}" -n -1\
     tictoc \
     tidyr \
     tidyverse \
-    tmvtnorm \
     tools \
+    TruncatedNormal \
     truncnorm \
     udunits2 \
     utils \
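The hunk above swaps the `tmvtnorm` dependency for `TruncatedNormal`. As a minimal sketch of what that means for downstream code, both packages expose a truncated multivariate-normal sampler, though the argument names differ (the names below follow each package's documentation and should be double-checked; the numbers are made-up illustration):

```r
# Draw 100 samples from a bivariate standard normal truncated to [-1, 1]^2.
mu    <- c(0, 0)
sigma <- diag(2)
lb    <- c(-1, -1)
ub    <- c(1, 1)

# before this change (tmvtnorm):
# x <- tmvtnorm::rtmvnorm(100, mean = mu, sigma = sigma, lower = lb, upper = ub)

# after this change (TruncatedNormal):
x <- TruncatedNormal::rtmvnorm(100, mu = mu, sigma = sigma, lb = lb, ub = ub)
```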
You" -echo "# can use a prebuild version (default) or force a new versin to be" -echo "# build locally using: DEPEND=build $0" +echo "# can use a prebuilt version (default) or force a new version to be" +echo "# built locally using: DEPEND=build $0" echo "# ----------------------------------------------------------------------" # not building dependencies image, following command will build this @@ -188,7 +188,7 @@ for version in 0.95; do done # build ed2 -for version in git 2.2.0; do +for version in 2.2.0; do ${DEBUG} docker build \ --tag pecan/model-ed2-${version}:${IMAGE_VERSION} \ --build-arg MODEL_VERSION="${version}" \ @@ -214,3 +214,19 @@ for version in git r136; do --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ models/sipnet done + +# -------------------------------------------------------------------------------- +# PEcAn Apps +# -------------------------------------------------------------------------------- + +# build API +for x in api; do + ${DEBUG} docker build \ + --tag pecan/$x:${IMAGE_VERSION} \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg PECAN_VERSION="${VERSION}" \ + --build-arg PECAN_GIT_BRANCH="${PECAN_GIT_BRANCH}" \ + --build-arg PECAN_GIT_CHECKSUM="${PECAN_GIT_CHECKSUM}" \ + --build-arg PECAN_GIT_DATE="${PECAN_GIT_DATE}" \ + apps/$x/ +done diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index 2e419aab5ff..fc309ad12d7 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -10,7 +10,7 @@ MAINTAINER Rob Kooper # UPDATE GIT # This is needed for stretch and github actions # ---------------------------------------------------------------------- -RUN if [ "$(lsb_release -s -c)" == "stretch" ]; then \ +RUN if [ "$(lsb_release -s -c)" = "stretch" ]; then \ echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list \ && apt-get update \ && apt-get -t stretch-backports upgrade -y git \ diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index cd041decf6e..6228573ce78 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -111,8 +111,8 @@ install2.r -e -s -l "${RLIB}" -n -1\ tictoc \ tidyr \ tidyverse \ - tmvtnorm \ tools \ + TruncatedNormal \ truncnorm \ udunits2 \ utils \ diff --git a/docker/env.example b/docker/env.example index e2083fc8e86..ee0ca6d8ec1 100644 --- a/docker/env.example +++ b/docker/env.example @@ -6,7 +6,7 @@ # ---------------------------------------------------------------------- # project name (-p flag for docker-compose) -#COMPOSE_PROJECT_NAME=dev +#COMPOSE_PROJECT_NAME=pecan # ---------------------------------------------------------------------- # TRAEFIK CONFIGURATION diff --git a/docker/executor/Dockerfile b/docker/executor/Dockerfile index 4b68c7ba897..0cf5622e487 100644 --- a/docker/executor/Dockerfile +++ b/docker/executor/Dockerfile @@ -20,7 +20,7 @@ WORKDIR /work # variables to store in docker image ENV RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" \ RABBITMQ_QUEUE="pecan" \ - APPLICATION="R CMD BATCH workflow.R" + APPLICATION="workflow" # actual application that will be executed COPY executor.py sender.py /work/ diff --git a/docker/executor/executor.py b/docker/executor/executor.py index b79e089e277..a432df87ecb 100644 --- a/docker/executor/executor.py +++ b/docker/executor/executor.py @@ -55,6 +55,14 @@ def runfunc(self): application = "R CMD BATCH workflow.R" elif custom_application is not None: application = custom_application + elif default_application == "workflow": + application = "R CMD BATCH" + if 
diff --git a/models/biocro/R/call_biocro.R b/models/biocro/R/call_biocro.R
index 17f029b32c3..226bcbca1f8 100644
--- a/models/biocro/R/call_biocro.R
+++ b/models/biocro/R/call_biocro.R
@@ -10,7 +10,7 @@ call_biocro_0.9 <- function(WetDat, genus, year_in_run,
 
   # Check that all variables are present in the expected order --
   # BioGro < 1.0 accesses weather vars by position and DOES NOT check headers.
   expected_cols <- c("year", "doy", "hour", "[Ss]olar", "Temp", "RH", "WS|windspeed", "precip")
-  if(!all(mapply(grepl, expected_cols, colnames(WetDat)))){
+  if (!all(mapply(grepl, expected_cols, colnames(WetDat)))) {
     PEcAn.logger::logger.severe("Format error in weather file: Columns must be (", expected_cols, "), in that order.")
   }
   day1 <- min(WetDat$doy) # data already subset upstream, but BioCro 0.9 assumes a full year if day1/dayn are unset
@@ -26,38 +26,46 @@
   # If not, rescale day1 and dayn to be relative to the start of the input.
   # Scaling is derived by inverting Biocro's day->index equations.
   biocro_checks_doy <- tryCatch(
-    {m <- BioCro::BioGro(
-      WetDat = matrix(c(0,10,0,0,0,0,0,0), nrow = 1),
-      day1 = 10, dayn = 10, timestep = 24);
-     inherits(m, "BioGro") },
-    error = function(e){FALSE})
-  if (!biocro_checks_doy && min(WetDat[,"doy"])>1) {
-    if (!is.null(day1)){
+    {
+      m <- BioCro::BioGro(
+        WetDat = matrix(c(0, 10, 0, 0, 0, 0, 0, 0), nrow = 1),
+        day1 = 10, dayn = 10, timestep = 24
+      )
+      inherits(m, "BioGro")
+    },
+    error = function(e) {
+      FALSE
+    }
+  )
+  if (!biocro_checks_doy && min(WetDat[, "doy"]) > 1) {
+    if (!is.null(day1)) {
       # Biocro calculates line number as `indes1 <- (day1 - 1) * 24`
-      indes1 <- Position(function(x)x==day1, WetDat[,"doy"])
-      day1 <- indes1/24 + 1
+      indes1 <- Position(function(x) x == day1, WetDat[, "doy"])
+      day1 <- indes1 / 24 + 1
     }
-    if (!is.null(dayn)){
+    if (!is.null(dayn)) {
       # Biocro calculates line number as `indesn <- (dayn) * 24`
-      indesn <- Position(function(x)x==dayn, WetDat[,"doy"], right = TRUE)
-      dayn <- indesn/24
+      indesn <- Position(function(x) x == dayn, WetDat[, "doy"], right = TRUE)
+      dayn <- indesn / 24
     }
   }
-  coppice.interval = config$pft$coppice.interval
-  if(is.null(coppice.interval)) {
-    coppice.interval = 1 # i.e. harvest every year
+  coppice.interval <- config$pft$coppice.interval
+  if (is.null(coppice.interval)) {
+    coppice.interval <- 1 # i.e. harvest every year
   }
 
   if (genus == "Saccharum") {
+    # probably should be handled like coppice shrubs or perennial grasses
     tmp.result <- BioCro::caneGro(
       WetDat = WetDat, lat = lat,
-      soilControl = l2n(config$pft$soilControl))
-    # Addin Rhizome an Grain to avoid error in subsequent script processing results
+      soilControl = l2n(config$pft$soilControl)
+    )
+    # Add in Rhizome and Grain to avoid error in subsequent script processing results
     tmp.result$Rhizome <- 0
     tmp.result$Grain <- 0
-  } else if (genus %in% c("Salix", "Populus")) {
+  } else if (genus %in% c("Salix", "Populus")) { # coppice trees / shrubs
     if (year_in_run == 1) {
       iplant <- config$pft$iPlantControl
     } else {
@@ -65,13 +73,13 @@
       iplant$iRoot <- data.table::last(tmp.result$Root)
       iplant$iStem <- data.table::last(tmp.result$Stem)
 
-      if ((year_in_run - 1)%%coppice.interval == 0) {
+      if ((year_in_run - 1) %% coppice.interval == 0) { # coppice when remainder = 0
         HarvestedYield <- round(data.table::last(tmp.result$Stem) * 0.95, 2)
-      } else if ((year_in_run - 1)%%coppice.interval == 1) {
+      } else if ((year_in_run - 1) %% coppice.interval == 1) { # year after coppice
         iplant$iStem <- iplant$iStem * 0.05
-      } # else { # do nothing if neither coppice year nor year following
+      } # else { # do nothing if neither coppice year nor year following
     }
     ## run willowGro
     tmp.result <- BioCro::willowGro(
@@ -85,9 +93,9 @@
       canopyControl = l2n(config$pft$canopyControl),
       willowphenoControl = l2n(config$pft$phenoParms),
       seneControl = l2n(config$pft$seneControl),
-      photoControl = l2n(config$pft$photoParms))
-
-  } else if (genus %in% c("Miscanthus", "Panicum")) {
+      photoControl = l2n(config$pft$photoParms)
+    )
+  } else if (genus %in% c("Miscanthus", "Panicum")) { # perennial grasses
     if (year_in_run == 1) {
       iRhizome <- config$pft$iPlantControl$iRhizome
     } else {
@@ -104,35 +112,33 @@
       phenoControl = l2n(config$pft$phenoParms),
       seneControl = l2n(config$pft$seneControl),
       iRhizome = as.numeric(iRhizome),
-      photoControl = config$pft$photoParms)
-
-  } else if (genus %in% c("Sorghum", "Setaria")) {
-    if (year_in_run == 1) {
-      iplant <- config$pft$iPlantControl
-    } else {
-      iplant$iRhizome <- data.table::last(tmp.result$Rhizome)
-      iplant$iRoot <- data.table::last(tmp.result$Root)
-      iplant$iStem <- data.table::last(tmp.result$Stem)
-    }
+      photoControl = config$pft$photoParms
+    )
+  } else if (genus %in% c("Sorghum", "Setaria")) { # annual grasses
+    # Perennial Sorghum exists but is not a major crop
+    # assume these are replanted from seed each year
+    # https://landinstitute.org/our-work/perennial-crops/perennial-sorghum/
+    iplant <- config$pft$iPlantControl
     ## run BioGro
     tmp.result <- BioCro::BioGro(
       WetDat = WetDat,
       iRhizome = as.numeric(iplant$iRhizome),
       iRoot = as.numeric(iplant$iRoot),
       iStem = as.numeric(iplant$iStem),
-      iLeaf = as.numeric(iplant$iLeaf), 
+      iLeaf = as.numeric(iplant$iLeaf),
       day1 = day1,
       dayn = dayn,
       soilControl = l2n(config$pft$soilControl),
       canopyControl = l2n(config$pft$canopyControl),
       phenoControl = l2n(config$pft$phenoParms),
       seneControl = l2n(config$pft$seneControl),
-      photoControl = l2n(config$pft$photoParms))
-
+      photoControl = l2n(config$pft$photoParms)
+    )
   } else {
     PEcAn.logger::logger.severe(
       "Genus '", genus, "' is not supported by PEcAn.BIOCRO when using BioCro 0.9x.",
-      "Supported genera: Saccharum, Salix, Populus, Sorghum, Miscanthus, Panicum, Setaria")
+      "Supported genera: Saccharum, Salix, Populus, Sorghum, Miscanthus, Panicum, Setaria"
+    )
   }
   names(tmp.result) <- sub("DayofYear", "doy", names(tmp.result))
   names(tmp.result) <- sub("Hour", "hour", names(tmp.result))
@@ -148,7 +154,6 @@
 
 call_biocro_1 <- function(WetDat, genus, year_in_run,
                           config, lat, lon, tmp.result, HarvestedYield) {
-
   if (year_in_run == 1) {
     initial_values <- config$pft$initial_values
   } else {
@@ -162,13 +167,15 @@
     initial_values = initial_values,
     parameters = config$pft$parameters,
     varying_parameters = WetDat,
-    modules = config$pft$modules)
+    modules = config$pft$modules
+  )
 
   tmp.result <- dplyr::rename(tmp.result,
     ThermalT = "TTc",
     LAI = "lai",
     SoilEvaporation = "soil_evaporation",
-    CanopyTrans = "canopy_transpiration")
+    CanopyTrans = "canopy_transpiration"
+  )
 
   tmp.result$AboveLitter <- tmp.result$LeafLitter + tmp.result$StemLitter
   tmp.result$BelowLitter <- tmp.result$RootLitter + tmp.result$RhizomeLitter
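A small standalone worked example (not part of the patch) of the `day1` rescaling above: BioCro 0.9 converts `day1` to a row index as `indes1 <- (day1 - 1) * 24`, which assumes hourly data starting on day-of-year 1. When the weather data start later, the code finds the true row with `Position()` and inverts that formula; the rescaled `day1` need not be an integer, since BioCro only ever uses it through the index equation:

```r
# Hourly weather for doy 100..150 (24 rows per day).
doy <- rep(100:150, each = 24)
day1 <- 120 # requested start day

# First row whose doy matches day1: (120 - 100) * 24 + 1 = 481.
indes1 <- Position(function(x) x == day1, doy)

# Rescaled day1, fed back through BioCro's index formula,
# recovers the correct row:
day1_rescaled <- indes1 / 24 + 1 # 481 / 24 + 1 = 21.0417
(day1_rescaled - 1) * 24         # = 481, the right row again
```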
diff --git a/models/ed/Dockerfile b/models/ed/Dockerfile
index c1d64ba2e3c..34cad0989a1 100644
--- a/models/ed/Dockerfile
+++ b/models/ed/Dockerfile
@@ -7,8 +7,8 @@ ARG IMAGE_VERSION="latest"
 FROM debian:stretch as model-binary
 
 # Some variables that can be used to set control the docker build
-ARG MODEL_VERSION=git
-ARG BINARY_VERSION=2.1
+ARG MODEL_VERSION="2.2.0"
+ARG BINARY_VERSION="2.2"
 
 # specify fortran compiler
 ENV FC_TYPE=GNU
@@ -59,7 +59,7 @@ RUN apt-get update \
 # ----------------------------------------------------------------------
 
 # Some variables that can be used to set control the docker build
-ARG MODEL_VERSION=git
+ARG MODEL_VERSION="2.2.0"
 
 # Setup model_info file
 COPY model_info.json /work/model.json
diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R
index 9b1a7047ed6..3170d9b8457 100644
--- a/models/ed/R/model2netcdf.ED2.R
+++ b/models/ed/R/model2netcdf.ED2.R
@@ -280,10 +280,23 @@ read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){
     }
   }
 
-  CheckED2Version <- function(nc) {
+  CheckED2Variables <- function(nc) {
+    vars_detected <- NULL
+    name_convention <- NULL
+
     if ("FMEAN_BDEAD_PY" %in% names(nc$var)) {
-      return("Git")
+      vars_detected <- c(vars_detected, "FMEAN_BDEAD_PY")
+      name_convention <- "Contains_FMEAN"
+    }
+    if ("FMEAN_SOIL_TEMP_PY" %in% names(nc$var)) {
+      vars_detected <- c(vars_detected, "FMEAN_SOIL_TEMP_PY")
+      name_convention <- "Contains_FMEAN"
     }
+    if (!is.null(vars_detected)) {
+      PEcAn.logger::logger.warn(paste("Found variable(s):", paste(vars_detected, collapse = " "), ", now processing FMEAN* named variables. Note that variable naming conventions may change with ED2 version."))
+    }
+
+    return(name_convention)
   }
 
   # note that there is always one Tower file per year
@@ -324,8 +337,8 @@
       slzdata <- array(c(-2, -1.5, -1, -0.8, -0.6, -0.4, -0.2, -0.1, -0.05))
     }
 
-    ## Check for which version of ED2 we are using.
-    ED2vc <- CheckED2Version(ncT)
+    ## Check which naming convention the ED2 variables use; this may change with the ED2 version.
+    ED2vc <- CheckED2Variables(ncT)
 
     ## store for later use, will only use last data
     dz <- diff(slzdata)
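`CheckED2Variables()` (defined locally inside `read_T_files()` above) only reports which naming convention a Tower file uses; the caller still has to pick variable names accordingly. A hedged sketch of that downstream use with `ncdf4`, which this reader already relies on (the file name and the non-FMEAN variable name are hypothetical):

```r
nc <- ncdf4::nc_open("analysis-T-2004-00-00-000000-g01.h5") # hypothetical file name
if (identical(CheckED2Variables(nc), "Contains_FMEAN")) {
  gpp <- ncdf4::ncvar_get(nc, "FMEAN_GPP_PY") # FMEAN_* naming convention
} else {
  gpp <- ncdf4::ncvar_get(nc, "AVG_GPP")      # hypothetical older name
}
ncdf4::nc_close(nc)
```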
diff --git a/models/ed/inst/ED2IN.git b/models/ed/inst/ED2IN.git
deleted file mode 100644
index 58a154572b2..00000000000
--- a/models/ed/inst/ED2IN.git
+++ /dev/null
@@ -1,1259 +0,0 @@
-!==========================================================================================!
-!==========================================================================================! -! ED2IN . ! -! ! -! This is the file that contains the variables that define how ED is to be run. There ! -! is some brief information about the variables here. ! -!------------------------------------------------------------------------------------------! -$ED_NL - - !----- Simulation title (64 characters). -----------------------------------------------! - NL%EXPNME = 'ED2 vGITHUB PEcAn @ENSNAME@' - !---------------------------------------------------------------------------------------! - - !---------------------------------------------------------------------------------------! - ! Type of run: ! - ! INITIAL -- Starts a new run, that can be based on a previous run (restart/history), ! - ! but then it will use only the biomass and soil carbon information. ! - ! HISTORY -- Resumes a simulation from the last history. This is different from ! - ! initial in the sense that exactly the same information written in the ! - ! history will be used here. ! - !---------------------------------------------------------------------------------------! - NL%RUNTYPE = 'INITIAL' - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! Start of simulation. Information must be given in UTC time. ! - !---------------------------------------------------------------------------------------! - NL%IMONTHA = @START_MONTH@ - NL%IDATEA = @START_DAY@ - NL%IYEARA = @START_YEAR@ - NL%ITIMEA = 0000 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! End of simulation. Information must be given in UTC time. ! - !---------------------------------------------------------------------------------------! - NL%IMONTHZ = @END_MONTH@ - NL%IDATEZ = @END_DAY@ - NL%IYEARZ = @END_YEAR@ - NL%ITIMEZ = 0000 - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! DTLSM -- Time step to integrate photosynthesis, and the maximum time step for ! - ! integration of energy and water budgets (units: seconds). Notice that the ! - ! model will take steps shorter than this if this is too coarse and could ! - ! lead to loss of accuracy or unrealistic results in the biophysics. ! - ! Recommended values are < 60 seconds if INTEGRATION_SCHEME is 0, and 240-900 ! - ! seconds otherwise. ! - ! RADFRQ -- Time step to integrate radiation, in seconds. This must be an integer ! - ! multiple of DTLSM, and we recommend it to be exactly the same as DTLSM. ! - !---------------------------------------------------------------------------------------! - NL%DTLSM = 600. - NL%RADFRQ = 600. - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used in case the user wants to run a regional run. ! - ! ! - ! N_ED_REGION -- number of regions for which you want to run ED. This can be set to ! - ! zero provided that N_POI is not... ! - ! GRID_TYPE -- which kind of grid to run: ! - ! 0. Longitude/latitude grid ! - ! 1. Polar-stereographic ! 
- !---------------------------------------------------------------------------------------! - NL%N_ED_REGION = 0 - NL%GRID_TYPE = 0 - - !------------------------------------------------------------------------------------! - ! The following variables are used only when GRID_TYPE is set to 0. You must ! - ! provide one value for each grid, except otherwise noted. ! - ! ! - ! GRID_RES -- Grid resolution, in degrees (first grid only, the other grids ! - ! resolution will be defined by NSTRATX/NSTRATY). ! - ! ED_REG_LATMIN -- Southernmost point of each region. ! - ! ED_REG_LATMAX -- Northernmost point of each region. ! - ! ED_REG_LONMIN -- Westernmost point of each region. ! - ! ED_REG_LONMAX -- Easternmost point of each region. ! - !------------------------------------------------------------------------------------! - NL%GRID_RES = 1.0 ! This is the grid resolution scale in degrees. [\*/] - NL%ED_REG_LATMIN = -90 ! List of minimum latitudes; - NL%ED_REG_LATMAX = 90 ! List of maximum latitudes; - NL%ED_REG_LONMIN = -180 ! List of minimum longitudes; - NL%ED_REG_LONMAX = 180 ! List of maximum longitudes; - !------------------------------------------------------------------------------------! - - - - !------------------------------------------------------------------------------------! - ! The following variables are used only when GRID_TYPE is set to 1. ! - ! ! - ! NNXP -- number of points in the X direction. One value for each grid. ! - ! NNYP -- number of points in the Y direction. One value for each grid. ! - ! DELTAX -- grid resolution in the X direction, near the grid pole. Units: [ m]. ! - ! this value is used to define the first grid only, other grids are ! - ! defined using NNSTRATX. ! - ! DELTAY -- grid resolution in the Y direction, near the grid pole. Units: [ m]. ! - ! this value is used to define the first grid only, other grids are ! - ! defined using NNSTRATX. Unless you are running some specific tests, ! - ! both DELTAX and DELTAY should be the same. ! - ! POLELAT -- Latitude of the pole point. Set this close to CENTLAT for a more ! - ! traditional "square" domain. One value for all grids. ! - ! POLELON -- Longitude of the pole point. Set this close to CENTLON for a more ! - ! traditional "square" domain. One value for all grids. ! - ! CENTLAT -- Latitude of the central point. One value for each grid. ! - ! CENTLON -- Longitude of the central point. One value for each grid. ! - !------------------------------------------------------------------------------------! - NL%NNXP = 110 - NL%NNYP = 70 - NL%DELTAX = 60000. - NL%DELTAY = 60000. - NL%POLELAT = -2.609075 - NL%POLELON = -60.2093 - NL%CENTLAT = -2.609075 - NL%CENTLON = -60.2093 - !------------------------------------------------------------------------------------! - - - - !------------------------------------------------------------------------------------! - ! Nest ratios. These values are used by both GRID_TYPE=0 and GRID_TYPE=1. ! - ! NSTRATX -- this is will divide the values given by DELTAX or GRID_RES for the ! - ! nested grids. The first value should be always one. ! - ! NSTRATY -- this is will divide the values given by DELTAY or GRID_RES for the ! - ! nested grids. The first value should be always one, and this must ! - ! be always the same as NSTRATX when GRID_TYPE = 0, and this is also ! - ! strongly recommended for when GRID_TYPE = 1. ! - !------------------------------------------------------------------------------------! 
- NL%NSTRATX = 1,4 - NL%NSTRATY = 1,4 - !------------------------------------------------------------------------------------! - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! The following variables are used to define single polygon of interest runs, and ! - ! they are ignored when N_ED_REGION = 0. ! - ! ! - ! N_POI -- number of polygons of interest (POIs). This can be zero as long as ! - ! N_ED_REGION is not. ! - ! POI_LAT -- list of latitudes of each POI. ! - ! POI_LON -- list of longitudes of each POI. ! - ! POI_RES -- grid resolution of each POI (degrees). This is used only to define the ! - ! soil types ! - !---------------------------------------------------------------------------------------! - NL%N_POI = 1 ! number of polygons of interest (POIs). This could be zero. - NL%POI_LAT = @SITE_LAT@ ! list of the latitudes of the POIs (degrees north) - NL%POI_LON = @SITE_LON@ ! list of the longitudes of the POIs (degrees east) - NL%POI_RES = 1.00 - !---------------------------------------------------------------------------------------! - !---------------------------------------------------------------------------------------! - ! LOADMETH -- Load balancing method. This is used only in regional runs run in ! - ! parallel. ! - ! 0. Let ED decide the best way of splitting the polygons. Commonest ! - ! option and default. ! - ! 1. One of the methods to split polygons based on their previous ! - ! work load. Developpers only. ! - ! 2. Try to load an equal number of SITES per node. Useful for when ! - ! total number of polygon is the same as the total number of cores. ! - ! 3. Another method to split polygons based on their previous work load. ! - ! Developpers only. ! - !---------------------------------------------------------------------------------------! - NL%LOADMETH = 0 - !---------------------------------------------------------------------------------------! - - - - - !---------------------------------------------------------------------------------------! - ! ED2 File output. For all the variables 0 means no output and 3 means HDF5 output. ! - ! ! - ! IFOUTPUT -- Fast analysis. These are mostly polygon-level averages, and the time ! - ! interval between files is determined by FRQANL ! - ! IDOUTPUT -- Daily means (one file per day) ! - ! IMOUTPUT -- Monthly means (one file per month) ! - ! IQOUTPUT -- Monthly means of the diurnal cycle (one file per month). The number ! - ! of points for the diurnal cycle is 86400 / FRQANL ! - ! IYOUTPUT -- Annual output. ! - ! ITOUTPUT -- Instantaneous fluxes, mostly polygon-level variables, one file per year. ! - ! ISOUTPUT -- restart file, for HISTORY runs. The time interval between files is ! - ! determined by FRQHIS ! - !---------------------------------------------------------------------------------------! - NL%IFOUTPUT = 0 ! Instantaneous analysis (site average) - NL%IDOUTPUT = 0 ! Daily means (site average) - NL%IMOUTPUT = 0 ! Monthly means (site average) - NL%IQOUTPUT = 0 ! Monthly means (diurnal cycle) - NL%IYOUTPUT = 3 ! Annual output - NL%ITOUTPUT = 3 ! Instantaneous fluxes (site average) --> "Tower" Files - NL%ISOUTPUT = 0 ! History files - !---------------------------------------------------------------------------------------! - - - - !---------------------------------------------------------------------------------------! - ! 
-   !    The following variables control whether site-, patch-, and cohort-level time       !
-   ! means and mean sum of squares should be included in the output files or not.          !
-   !                                                                                       !
-   ! Reasons to add them:                                                                  !
-   !    a. Sub-polygon variables are more comprehensive.                                   !
-   !    b. Explore heterogeneity within a polygon and make interesting analysis.           !
-   !    c. More chances to create cool 3-D plots.                                          !
-   !                                                                                       !
-   ! Reasons to NOT add them:                                                              !
-   !    a. Output files will become much larger!                                           !
-   !    b. In regional/coupled runs, the output files will be ridiculously large.          !
-   !    c. You may fill up the disk.                                                       !
-   !    d. Other people's jobs may crash due to insufficient disk space.                   !
-   !    e. You will gain a bad reputation amongst your colleagues.                         !
-   !    f. And it will be entirely your fault.                                             !
-   !                                                                                       !
-   !    Either way, polygon-level averages are always included, and so are the instan-     !
-   ! taneous site-, patch-, and cohort-level variables needed for resuming the run.        !
-   !                                                                                       !
-   ! IADD_SITE_MEANS   -- Add site-level averages to the output   (0 = no; 1 = yes)        !
-   ! IADD_PATCH_MEANS  -- Add patch-level averages to the output  (0 = no; 1 = yes)        !
-   ! IADD_COHORT_MEANS -- Add cohort-level averages to the output (0 = no; 1 = yes)        !
-   !                                                                                       !
-   !---------------------------------------------------------------------------------------!
-   NL%IADD_SITE_MEANS   = 0
-   NL%IADD_PATCH_MEANS  = 0
-   NL%IADD_COHORT_MEANS = 0
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! ATTACH_METADATA -- Flag for attaching metadata to HDF datasets.  Attaching metadata   !
-   !                    will aid new users in quickly identifying dataset descriptions but !
-   !                    will compromise I/O performance significantly.                     !
-   !                    0 = no metadata, 1 = attach metadata                               !
-   !---------------------------------------------------------------------------------------!
-   NL%ATTACH_METADATA = 0
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! UNITFAST  -- The following variables control the units for FRQFAST/OUTFAST, and       !
-   ! UNITSTATE    FRQSTATE/OUTSTATE, respectively.  Possible values are:                   !
-   !                 0.  Seconds;                                                          !
-   !                 1.  Days;                                                             !
-   !                 2.  Calendar months (variable)                                        !
-   !                 3.  Calendar years  (variable)                                        !
-   !                                                                                       !
-   ! N.B.: 1. In case OUTFAST/OUTSTATE are set to special flags (-1 or -2)                 !
-   !          UNITFAST/UNITSTATE will be ignored for them.                                 !
-   !       2. In case IQOUTPUT is set to 3, then UNITFAST has to be 0.                     !
-   !                                                                                       !
-   !---------------------------------------------------------------------------------------!
-   NL%UNITFAST  = 0
-   NL%UNITSTATE = 3
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! OUTFAST/OUTSTATE -- these control the number of times per file.                       !
-   !                      0. Each time gets its own file                                   !
-   !                     -1. One file per day                                              !
-   !                     -2. One file per month                                            !
-   !                    > 0. Multiple timepoints can be recorded to a single file,         !
-   !                         reducing the number of files and I/O time in post-processing. !
-   !                         Multiple timepoints should not be used in the history files   !
-   !                         if you intend to use these for HISTORY runs.                  !
-   !---------------------------------------------------------------------------------------!
-   NL%OUTFAST  = -1  ! orig. 3600.
-   NL%OUTSTATE = 0   ! orig. 1.
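-   !    Illustration (editorial note based on this template's settings): with              !
-   ! OUTFAST = -1 and FRQFAST = 1800 s (set further below), each analysis file holds one   !
-   ! day of output, i.e. 86400 / 1800 = 48 half-hourly records per file, while             !
-   ! OUTSTATE = 0 keeps each history time in its own file.                                 !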
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! ICLOBBER -- What to do in case the model finds a file that it was supposed to have    !
-   !             written?  0 = stop the run, 1 = overwrite without warning.                !
-   ! FRQFAST  -- time interval between analysis files, units defined by UNITFAST.          !
-   ! FRQSTATE -- time interval between history files, units defined by UNITSTATE.          !
-   !---------------------------------------------------------------------------------------!
-   NL%ICLOBBER = 1       ! 0 = stop if files exist, 1 = overwrite files
-   NL%FRQFAST  = 1800.   ! Time interval between analysis files
-   NL%FRQSTATE = 86400.  ! Time interval between history files
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! FFILOUT -- Path and prefix for analysis files (all but history/restart).              !
-   ! SFILOUT -- Path and prefix for history files.                                         !
-   !---------------------------------------------------------------------------------------!
-   NL%FFILOUT = '@FFILOUT@'  ! Analysis output prefix;
-   NL%SFILOUT = '@SFILOUT@'  ! History output prefix
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! IED_INIT_MODE -- This controls how the plant community and soil carbon pools are      !
-   !                  initialised.                                                         !
-   !                                                                                       !
-   ! -1.  Start from a true bare ground run, or an absolute desert run.  This will         !
-   !      never grow any plant.                                                            !
-   !  0.  Start from near-bare ground (only a few seedlings from each PFT to be included   !
-   !      in this run).                                                                    !
-   !  1.  This will use history files written by ED-1.0.  It will grab the ecosystem       !
-   !      state (like biomass, LAI, plant density, etc.), but it will start the            !
-   !      thermodynamic state as a new simulation.                                         !
-   !  2.  Same as 1, but it uses history files from ED-2.0 without multiple sites, and     !
-   !      with the old PFT numbers.                                                        !
-   !  3.  Same as 1, but using history files from ED-2.0 with multiple sites and           !
-   !      TOPMODEL hydrology.                                                              !
-   !  4.  Same as 1, but using ED2.1 H5 history/state files that take the form:            !
-   !      'dir/prefix-gxx.h5'                                                              !
-   !      Initialization files MUST end with -gxx.h5 where xx is a two digit integer       !
-   !      grid number.  Each grid has its own initialization file.  As an example, if a    !
-   !      user has two files to initialize their grids with:                               !
-   !         example_file_init-g01.h5 and example_file_init-g02.h5                         !
-   !         NL%SFILIN = 'example_file_init'                                               !
-   !                                                                                       !
-   !  5.  This is similar to option 4, except that you may provide several files           !
-   !      (including a mix of regional and POI runs, each file ending at a different       !
-   !      date).  This will not check date nor grid structure, it will simply read all     !
-   !      polygons and match the nearest neighbour to each polygon of your run.  SFILIN    !
-   !      must have the directory common to all history files that are sought to be used,  !
-   !      up to the last character the files have in common.  For example if your files    !
-   !      are                                                                              !
-   !         /mypath/P0001-S-2000-01-01-000000-g01.h5,                                     !
-   !         /mypath/P0002-S-1966-01-01-000000-g02.h5,                                     !
-   !         ...                                                                           !
-   !         /mypath/P1000-S-1687-01-01-000000-g01.h5:                                     !
-   !         NL%SFILIN = '/mypath/P'                                                       !
-   !                                                                                       !
-   !  6.  Initialize with ED-2 style files without multiple sites, exactly like option     !
-   !      2, except that the PFT types are preserved.                                      !
-   !---------------------------------------------------------------------------------------!
-   NL%IED_INIT_MODE = @INIT_MODEL@
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! EDRES -- Expected input resolution for ED2.0 files.  This is not used unless          !
-   !          IED_INIT_MODE = 3.                                                           !
-   !---------------------------------------------------------------------------------------!
-   NL%EDRES = 1.0
-   !---------------------------------------------------------------------------------------!
-
-
-   !---------------------------------------------------------------------------------------!
-   ! SFILIN -- The meaning and the size of this variable depends on the type of run, set   !
-   !           at variable NL%RUNTYPE.                                                     !
-   !                                                                                       !
-   ! 1. INITIAL.  Then this is the path+prefix of the previous ecosystem state.  This has  !
-   !              dimension of the number of grids so you can initialize each grid with a  !
-   !              different dataset.  In case only one path+prefix is given, the same will !
-   !              be used for every grid.  Only some ecosystem variables will be set up    !
-   !              here, and the initial condition will be in thermodynamic equilibrium.    !
-   !                                                                                       !
-   ! 2. HISTORY.  This is the path+prefix of the history file that will be used.  Only the !
-   !              path+prefix will be used, as the history for every grid must have come   !
-   !              from the same simulation.                                                !
-   !---------------------------------------------------------------------------------------!
-   NL%SFILIN = '@SITE_PSSCSS@'
-   !---------------------------------------------------------------------------------------!
-   !    History file information.  These variables are used to continue a simulation from  !
-   ! a point other than the beginning.  Time must be in UTC.                               !
-   !                                                                                       !
-   ! IMONTHH -- the time of the history file.  This is the only place you need to change   !
-   ! IDATEH     dates for a HISTORY run.  You may change IMONTHZ and related in case you   !
-   ! IYEARH     want to extend the run, but you should NOT change IMONTHA and related.     !
-   ! ITIMEH                                                                                !
-   !---------------------------------------------------------------------------------------!
-   NL%ITIMEH  = 0000
-   NL%IDATEH  = 01
-   NL%IMONTHH = 01
-   NL%IYEARH  = 1500
-   !---------------------------------------------------------------------------------------!
-   ! NZG - number of soil layers.  One value for all grids.                                !
-   ! NZS - maximum number of snow/water ponding layers.  This is used only for             !
-   !       snow, if only liquid water is standing, the water will be all collapsed         !
-   !       into a single layer, so if you are running for places where it doesn't snow     !
-   !       a lot, leave this set to 1.  One value for all grids.                           !
-   !---------------------------------------------------------------------------------------!
-   NL%NZG = 9
-   NL%NZS = 1
-   !---------------------------------------------------------------------------------------!
-
-
-
-   !---------------------------------------------------------------------------------------!
-   ! ISOILFLG -- this controls which soil type input you want to use.                      !
-   !             1. Read in from a dataset I will provide in the SOIL_DATABASE variable a  !
-   !                few lines below.                                                       !
-   !             2. No data available, I will use constant values I will provide in        !
-   !                NSLCON or by prescribing the fraction of sand and clay (see SLXSAND    !
-   !                and SLXCLAY).                                                          !
-   !---------------------------------------------------------------------------------------!
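-   !    Example (editorial note based on this template's settings): with ISOILFLG = 2 the  !
-   ! texture can be prescribed directly; SLXSAND = 0.54 and SLXCLAY = 0.13 below sum to    !
-   ! 0.67 <= 1, implicitly leaving 0.33 as silt, which is consistent with the sandy loam   !
-   ! class (NSLCON = 3) listed in the table below.                                         !
-   !---------------------------------------------------------------------------------------!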
-   NL%ISOILFLG = 2
-   !---------------------------------------------------------------------------------------!
-   ! NSLCON -- ED-2 Soil classes that the model will use when ISOILFLG is set to 2.        !
-   !           Possible values are:                                                        !
-   !---------------------------------------------------------------------------------------!
-   !   1 -- sand                |   7 -- silty clay loam     |  13 -- bedrock              !
-   !   2 -- loamy sand          |   8 -- clayey loam         |  14 -- silt                 !
-   !   3 -- sandy loam          |   9 -- sandy clay          |  15 -- heavy clay           !
-   !   4 -- silt loam           |  10 -- silty clay          |  16 -- clayey sand          !
-   !   5 -- loam                |  11 -- clay                |  17 -- clayey silt          !
-   !   6 -- sandy clay loam     |  12 -- peat                                              !
-   !---------------------------------------------------------------------------------------!
-   NL%NSLCON = 3  ! 3 US-WCr, 2 US-Syv, 10 US-Los
-   !---------------------------------------------------------------------------------------!
-   ! ISOILCOL -- LEAF-3 and ED-2 soil colour classes that the model will use when ISOILFLG !
-   !             is set to 2.  Soil classes are from 1 to 20 (1 = lightest; 20 = darkest). !
-   !             The values are the same as CLM-4.0.  The table is the albedo for visible  !
-   !             and near infra-red.                                                       !
-   !---------------------------------------------------------------------------------------!
-   !                                                                                       !
-   !   |-----------------------------------------------------------------------|          !
-   !   |       |   Dry soil  |  Saturated  |       |   Dry soil  |  Saturated  |          !
-   !   | Class |-------------+-------------| Class +-------------+-------------|          !
-   !   |       |  VIS |  NIR |  VIS |  NIR |       |  VIS |  NIR |  VIS |  NIR |          !
-   !   |-------+------+------+------+------+-------+------+------+------+------|          !
-   !   |     1 | 0.36 | 0.61 | 0.25 | 0.50 |    11 | 0.24 | 0.37 | 0.13 | 0.26 |          !
-   !   |     2 | 0.34 | 0.57 | 0.23 | 0.46 |    12 | 0.23 | 0.35 | 0.12 | 0.24 |          !
-   !   |     3 | 0.32 | 0.53 | 0.21 | 0.42 |    13 | 0.22 | 0.33 | 0.11 | 0.22 |          !
-   !   |     4 | 0.31 | 0.51 | 0.20 | 0.40 |    14 | 0.20 | 0.31 | 0.10 | 0.20 |          !
-   !   |     5 | 0.30 | 0.49 | 0.19 | 0.38 |    15 | 0.18 | 0.29 | 0.09 | 0.18 |          !
-   !   |     6 | 0.29 | 0.48 | 0.18 | 0.36 |    16 | 0.16 | 0.27 | 0.08 | 0.16 |          !
-   !   |     7 | 0.28 | 0.45 | 0.17 | 0.34 |    17 | 0.14 | 0.25 | 0.07 | 0.14 |          !
-   !   |     8 | 0.27 | 0.43 | 0.16 | 0.32 |    18 | 0.12 | 0.23 | 0.06 | 0.12 |          !
-   !   |     9 | 0.26 | 0.41 | 0.15 | 0.30 |    19 | 0.10 | 0.21 | 0.05 | 0.10 |          !
-   !   |    10 | 0.25 | 0.39 | 0.14 | 0.28 |    20 | 0.08 | 0.16 | 0.04 | 0.08 |          !
-   !   |-----------------------------------------------------------------------|          !
-   !                                                                                       !
-   !   Soil type 21 is a special case in which we use the albedo method that used to be    !
-   !   the default in ED-2.1.                                                              !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILCOL = 10  ! 21 12 for US-Los
-   !---------------------------------------------------------------------------------------!
-   !    These variables are used to define the soil properties when you don't want to use  !
-   ! the standard soil classes.                                                            !
-   !                                                                                       !
-   ! SLXCLAY -- Prescribed fraction of clay  [0-1]                                         !
-   ! SLXSAND -- Prescribed fraction of sand  [0-1].                                        !
-   !                                                                                       !
-   !    They are used only when ISOILFLG is 2, both values are between 0. and 1., and      !
-   ! their sum doesn't exceed 1.  Otherwise standard ED values will be used instead.       !
-   !---------------------------------------------------------------------------------------!
-   NL%SLXCLAY = 0.13  ! 0.13 US-WCr, 0.06 US-Syv, 0.0663 US-PFa, 0.68 default
-   NL%SLXSAND = 0.54  ! 0.54 US-WCr, 0.57 US-Syv, 0.5931 US-PFa, 0.20 default
-   !---------------------------------------------------------------------------------------!
-   !    Soil grid and initial conditions if no file is provided:                           !
-   !                                                                                       !
-   ! SLZ    - soil depth in m.  Values must be negative and go from the deepest layer to   !
-   !          the top.                                                                     !
-   ! SLMSTR - this is the initial soil moisture, now given as the soil moisture index.     !
-   !          Values can be fraction, in which case they will be linearly interpolated     !
-   !          between the special points (e.g. 0.5 will put soil moisture half way         !
-   !          between the wilting point and field capacity).                               !
-   !              -1 = dry air soil moisture                                               !
-   !               0 = wilting point                                                       !
-   !               1 = field capacity                                                      !
-   !               2 = porosity (saturation)                                               !
-   ! STGOFF - initial temperature offset (soil temperature = air temperature + offset)     !
-   !---------------------------------------------------------------------------------------!
-   NL%SLZ    = -2.0, -1.5, -1.0, -0.80, -0.60, -0.40, -0.2, -0.10, -0.05
-   NL%SLMSTR = 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65
-   NL%STGOFF = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
-   !---------------------------------------------------------------------------------------!
-   !    Input databases                                                                    !
-   ! VEG_DATABASE     -- vegetation database, used only to determine the land/water mask.  !
-   !                     Fill with the path and the prefix.                                !
-   ! SOIL_DATABASE    -- soil database, used to determine the soil type.  Fill with the    !
-   !                     path and the prefix.                                              !
-   ! LU_DATABASE      -- land-use change disturbance rates database, used only when        !
-   !                     IANTH_DISTURB is set to 1.  Fill with the path and the prefix.    !
-   ! PLANTATION_FILE  -- plantation fraction file.  In case you don't have such a file or  !
-   !                     you do not want to use it, you must leave this variable empty:    !
-   !                     (NL%PLANTATION_FILE = '')                                         !
-   ! THSUMS_DATABASE  -- input directory with dataset to initialise chilling degrees and   !
-   !                     growing degree days, which is used to drive the cold-deciduous    !
-   !                     phenology (you must always provide this, even when your PFTs are  !
-   !                     not cold deciduous).                                              !
-   ! ED_MET_DRIVER_DB -- File containing information for meteorological driver             !
-   !                     instructions (the "header" file).                                 !
-   ! SOILSTATE_DB     -- Dataset in case you want to provide the initial conditions of     !
-   !                     soil temperature and moisture.                                    !
-   ! SOILDEPTH_DB     -- Dataset in case you want to read in soil depth information.       !
-   !---------------------------------------------------------------------------------------!
-   NL%VEG_DATABASE     = '@ED_VEG@'
-   NL%SOIL_DATABASE    = '@ED_SOIL@'
-   NL%LU_DATABASE      = '@ED_LU@'
-   NL%PLANTATION_FILE  = ''
-   NL%THSUMS_DATABASE  = '@ED_THSUM@'
-   NL%ED_MET_DRIVER_DB = '@SITE_MET@'
-   NL%SOILSTATE_DB     = ''
-   NL%SOILDEPTH_DB     = ''
-   !---------------------------------------------------------------------------------------!
-   ! ISOILSTATEINIT -- Variable controlling how to initialise the soil temperature and     !
-   !                   moisture                                                            !
-   !                   0.  Use SLMSTR and STGOFF.                                          !
-   !                   1.  Read from SOILSTATE_DB.                                         !
-   ! ISOILDEPTHFLG  -- Variable controlling how to initialise soil depth                   !
-   !                   0.  Constant, always defined by the first SLZ layer.                !
-   !                   1.  Read from SOILDEPTH_DB.                                         !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILSTATEINIT = 0
-   NL%ISOILDEPTHFLG  = 0
-   !---------------------------------------------------------------------------------------!
-   ! ISOILBC -- This controls the soil moisture boundary condition at the bottom.  If      !
-   !            unsure, use 0 for short-term simulations (couple of days), and 1 for long- !
-   !            -term simulations (months to years).                                       !
-   !            0.  Bedrock.  Flux from the bottom of the bottommost layer is set to 0.    !
-   !            1.  Gravitational flow.  The flux from the bottom of the bottommost layer  !
-   !                is due to gradient of height only.                                     !
-   !            2.  Super drainage.  Soil moisture of the fictitious layer beneath the     !
-   !                bottom is always at dry air soil moisture.                             !
-   !            3.  Half-way.  Assume that the fictitious layer beneath the bottom is      !
-   !                always at field capacity.                                              !
-   !            4.  Aquifer.  Soil moisture of the fictitious layer beneath the bottom is  !
-   !                always at saturation.                                                  !
-   !---------------------------------------------------------------------------------------!
-   NL%ISOILBC = 1
-   !---------------------------------------------------------------------------------------!
-   ! SLDRAIN -- This is used only when ISOILBC is set to 2.  In this case SLDRAIN is the   !
-   !            equivalent slope that will slow down drainage.  If this is set to zero,    !
-   !            then lateral drainage reduces to flat bedrock, and if this is set to 90,   !
-   !            then lateral drainage becomes free drainage.  SLDRAIN must be between 0    !
-   !            and 90.                                                                    !
-   !---------------------------------------------------------------------------------------!
-   NL%SLDRAIN = 10.
-   !---------------------------------------------------------------------------------------!
-   ! IVEGT_DYNAMICS -- The vegetation dynamics scheme.                                     !
-   !                   0.  No vegetation dynamics, the initial state will be preserved,    !
-   !                       even though the model will compute the potential values.  This  !
-   !                       option is useful for theoretical simulations only.              !
-   !                   1.  Normal ED vegetation dynamics (Moorcroft et al 2001).           !
-   !                       The normal option for almost any simulation.                    !
-   !---------------------------------------------------------------------------------------!
-   NL%IVEGT_DYNAMICS = 1
-   !---------------------------------------------------------------------------------------!
-   ! IBIGLEAF -- Do you want to run ED as a 'big leaf' model?                              !
-   !             0.  No, use the standard size- and age-structure (Moorcroft et al. 2001)  !
-   !                 This is the recommended method for most applications.                 !
-   !             1.  'big leaf' ED: this will have no horizontal or vertical hetero-       !
-   !                 geneities; 1 patch per PFT and 1 cohort per patch; no vertical        !
-   !                 growth, recruits will 'appear' instantaneously at maximum height.     !
-   !                                                                                       !
-   ! N.B. if you set IBIGLEAF to 1, you MUST turn off the crown model (CROWN_MOD = 0)      !
-   !---------------------------------------------------------------------------------------!
-   NL%IBIGLEAF = 0
-   !---------------------------------------------------------------------------------------!
-   ! INTEGRATION_SCHEME -- The biophysics integration scheme.                              !
-   !                       0.  Euler step.  The fastest, but it doesn't estimate           !
-   !                           errors.                                                     !
-   !                       1.  Fourth-order Runge-Kutta method.  ED-2.1 default method     !
-   !                       2.  Heun's method (a second-order Runge-Kutta).                 !
-   !                       3.  Hybrid Stepping (BDF2 implicit step for the canopy air and  !
-   !                           leaf temp, forward Euler for everything else; under         !
-   !                           development).                                               !
-   !---------------------------------------------------------------------------------------!
-   NL%INTEGRATION_SCHEME = 1
-   !---------------------------------------------------------------------------------------!
-   ! RK4_TOLERANCE -- This is the relative tolerance for Runge-Kutta or Heun's             !
-   !                  integration.  Larger numbers will make runs go faster, at the        !
-   !                  expense of being less accurate.  Currently the valid range is        !
-   !                  between 1.e-7 and 1.e-1, but recommended values are between 1.e-4    !
-   !                  and 1.e-2.                                                           !
-   !---------------------------------------------------------------------------------------!
-   NL%RK4_TOLERANCE = 0.01
-   !---------------------------------------------------------------------------------------!
-   ! IBRANCH_THERMO -- This determines whether branches should be included in the          !
-   !                   vegetation thermodynamics and radiation or not.                     !
-   !                   0.  No branches in energy/radiation (ED-2.1 default);               !
-   !                   1.  Branches are accounted in the energy and radiation.  Branchwood !
-   !                       and leaf are treated separately in the canopy radiation scheme, !
-   !                       but solved as a single pool in the biophysics integration.      !
-   !                   2.  Similar to 1, but branches are treated as separate pools in the !
-   !                       biophysics (thus doubling the number of prognostic variables).  !
-   !---------------------------------------------------------------------------------------!
-   NL%IBRANCH_THERMO = 0
-   !---------------------------------------------------------------------------------------!
-   ! IPHYSIOL -- This variable will determine the functional form that will control how    !
-   !             the various parameters will vary with temperature, and how the CO2        !
-   !             compensation point for gross photosynthesis (Gamma*) will be found.       !
-   !             Options are:                                                              !
-   !                                                                                       !
-   ! 0 -- Original ED-2.1, we use the "Arrhenius" function as in Foley et al. (1996) and   !
-   !      Moorcroft et al. (2001).  Gamma* is found using the parameters for tau as in     !
-   !      Foley et al. (1996).                                                             !
-   ! 1 -- Modified ED-2.1.  In this case Gamma* is found using the Michaelis-Menten        !
-   !      coefficients for CO2 and O2, as in Farquhar et al. (1980) and in CLM.            !
-   ! 2 -- Collatz et al. (1991).  We use the power (Q10) equations, with Collatz et al.    !
-   !      parameters for compensation point, and the Michaelis-Menten coefficients.  The   !
-   !      correction for high and low temperatures are the same as in Moorcroft et al.     !
-   !      (2001).                                                                          !
-   ! 3 -- Same as 2, except that we find Gamma* as in Farquhar et al. (1980) and in CLM.   !
-   !---------------------------------------------------------------------------------------!
-   NL%IPHYSIOL = 2
-   !---------------------------------------------------------------------------------------!
-   ! IALLOM -- Which allometry to use (this mostly affects tropical PFTs.  Temperate PFTs  !
-   !           will use the new root allometry and the maximum crown area if IALLOM is set !
-   !           to 1 or 2).                                                                 !
-   !           0.  Original ED-2.1                                                         !
-   !           1.  a.  The coefficients for structural biomass are set so the total AGB    !
-   !                   is similar to Baker et al. (2004), equation 2.  Balive is the       !
-   !                   default ED-2.1;                                                     !
-   !               b.  Experimental root depth that makes canopy trees have root depths    !
-   !                   of 5 m and grasses/seedlings at 0.5 m have root depths of 0.5 m.    !
-   !               c.  Crown area defined as in Poorter et al. (2006), imposing maximum    !
-   !                   crown area                                                          !
-   !           2.  Similar to 1, but with a few extra changes.                             !
-   !               a.  Height -> DBH allometry as in Poorter et al. (2006)                 !
-   !               b.  Balive is retuned, using a few leaf biomass allometric equations    !
-   !                   for a few genera in Costa Rica.  References:                        !
-   !                   Cole and Ewel (2006), and Calvo Alvarado et al. (2008).             !
-   !---------------------------------------------------------------------------------------!
-   NL%IALLOM = 2
-   !---------------------------------------------------------------------------------------!
-   ! IGRASS -- This controls the dynamics and growth calculation for grasses.  A new       !
-   !           grass scheme is now available where bdead = 0, height is a function of      !
-   !           bleaf and growth happens daily.  ALS (3/3/12)                               !
-   !           0: grasses behave like trees as in ED2.1 (old scheme)                       !
-   !                                                                                       !
-   !           1: new grass scheme as described above                                      !
-   !---------------------------------------------------------------------------------------!
-   NL%IGRASS = 0
-   !---------------------------------------------------------------------------------------!
-   ! IPHEN_SCHEME -- It controls the phenology scheme.  Even within each scheme, the       !
-   !                 actual phenology will be different depending on the PFT.              !
-   !                                                                                       !
-   ! -1:  grasses   - evergreen;                                                           !
-   !      tropical  - evergreen;                                                           !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous (Botta et al.);                                       !
-   !                                                                                       !
-   !  0:  grasses   - drought-deciduous (old scheme);                                      !
-   !      tropical  - drought-deciduous (old scheme);                                      !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous;                                                      !
-   !                                                                                       !
-   !  1:  prescribed phenology                                                             !
-   !                                                                                       !
-   !  2:  grasses   - drought-deciduous (new scheme);                                      !
-   !      tropical  - drought-deciduous (new scheme);                                      !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous;                                                      !
-   !                                                                                       !
-   !  3:  grasses   - drought-deciduous (new scheme);                                      !
-   !      tropical  - drought-deciduous (light phenology);                                 !
-   !      conifers  - evergreen;                                                           !
-   !      hardwoods - cold-deciduous;                                                      !
-   !                                                                                       !
-   ! Old scheme: plants shed their leaves once the instantaneous amount of available       !
-   !             water becomes less than a critical value.                                 !
-   ! New scheme: plants shed their leaves once a 10-day running average of available       !
-   !             water becomes less than a critical value.                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%IPHEN_SCHEME = @PHENOL_SCHEME@
-   !---------------------------------------------------------------------------------------!
-   !    Parameters that control the phenology response to radiation, used only when        !
-   ! IPHEN_SCHEME = 3.                                                                     !
-   !                                                                                       !
-   ! RADINT -- Intercept                                                                   !
-   ! RADSLP -- Slope.                                                                      !
-   !---------------------------------------------------------------------------------------!
-   NL%RADINT = -11.3868
-   NL%RADSLP = 0.0824
-   !---------------------------------------------------------------------------------------!
-   ! REPRO_SCHEME -- This controls plant reproduction and dispersal.                       !
-   !                 0.  Reproduction off.  Useful for very short runs only.               !
-   !                 1.  Original reproduction scheme.  Seeds are exchanged between        !
-   !                     patches belonging to the same site, but they can't go outside     !
-   !                     their original site.                                              !
-   !                 2.  Similar to 1, but seeds are exchanged between patches belonging   !
-   !                     to the same polygon, even if they are in different sites.  They   !
-   !                     can't go outside their original polygon, though.  This is the     !
-   !                     same as option 1 if there is only one site per polygon.           !
-   !                 3.  Similar to 2, but recruits will only be formed if their phenology !
-   !                     status would be "leaves fully flushed".  This only matters for    !
-   !                     drought deciduous plants.  This option is for testing purposes    !
-   !                     only, think 50 times before using it...                           !
-   !---------------------------------------------------------------------------------------!
-   NL%REPRO_SCHEME = 0
-   !---------------------------------------------------------------------------------------!
-   ! LAPSE_SCHEME -- This specifies the met lapse rate scheme:                             !
-   !                 0.  No lapse rates                                                    !
-   !                 1.  phenomenological, global                                          !
-   !                 2.  phenomenological, local (not yet implemented)                     !
-   !                 3.  mechanistic (not yet implemented)                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%LAPSE_SCHEME = 0
-   !---------------------------------------------------------------------------------------!
-   ! CROWN_MOD -- Specifies how tree crowns are represented in the canopy radiation model, !
-   !              and in the turbulence scheme depending on ICANTURB.                      !
-   !              0.  ED1 default, crowns are evenly spread throughout the patch area, and !
-   !                  cohorts are stacked on the top of each other.                        !
-   !              1.  Dietze (2008) model.  Cohorts have a finite radius, and cohorts are  !
-   !                  stacked on the top of each other.                                    !
-   !---------------------------------------------------------------------------------------!
-   NL%CROWN_MOD = 1
-   !---------------------------------------------------------------------------------------!
-   !    The following variables control the canopy radiation solver.                       !
-   !                                                                                       !
-   ! ICANRAD      -- Specifies how canopy radiation is solved.  This variable sets both    !
-   !                 shortwave and longwave.                                               !
-   !                 0.  Two-stream model (Medvigy 2006), with the possibility to apply    !
-   !                     finite crown area to direct shortwave radiation.                  !
-   !                 1.  Multiple-scattering model (Zhao and Qualls 2005,2006), with the   !
-   !                     possibility to apply finite crown area to all radiation fluxes.   !
-   ! LTRANS_VIS   -- Leaf transmittance for tropical plants - Visible/PAR                  !
-   ! LTRANS_NIR   -- Leaf transmittance for tropical plants - Near Infrared                !
-   ! LREFLECT_VIS -- Leaf reflectance for tropical plants - Visible/PAR                    !
-   ! LREFLECT_NIR -- Leaf reflectance for tropical plants - Near Infrared                  !
-   ! ORIENT_TREE  -- Leaf orientation factor for tropical trees.  Extremes are:            !
-   !                 -1.  All leaves are oriented in the vertical                          !
-   !                  0.  Leaf orientation is perfectly random                             !
-   !                  1.  All leaves are oriented in the horizontal                        !
-   !                 In practice, acceptable values range from -0.4 to 0.6                 !
-   ! ORIENT_GRASS -- Leaf orientation factor for tropical grasses.  Extremes are:          !
-   !                 -1.  All leaves are oriented in the vertical                          !
-   !                  0.  Leaf orientation is perfectly random                             !
-   !                  1.  All leaves are oriented in the horizontal                        !
-   !                 In practice, acceptable values range from -0.4 to 0.6                 !
-   ! CLUMP_TREE   -- Clumping factor for tropical trees.  Extremes are:                    !
-   !                 lim -> 0.  Black hole (0 itself is unacceptable)                      !
-   !                        1.  Homogeneously spread over the layer (i.e., no clumping)    !
-   ! CLUMP_GRASS  -- Clumping factor for tropical grasses.  Extremes are:                  !
-   !                 lim -> 0.  Black hole (0 itself is unacceptable)                      !
-   !                        1.  Homogeneously spread over the layer (i.e., no clumping)    !
-   !---------------------------------------------------------------------------------------!
-   NL%ICANRAD      = 0
-   NL%LTRANS_VIS   = 0.050
-   NL%LTRANS_NIR   = 0.270
-   NL%LREFLECT_VIS = 0.150
-   NL%LREFLECT_NIR = 0.540
-   NL%ORIENT_TREE  = 0.100
-   NL%ORIENT_GRASS = -0.100
-   NL%CLUMP_TREE   = 0.800
-   NL%CLUMP_GRASS  = 1.000
-   !---------------------------------------------------------------------------------------!
-   ! DECOMP_SCHEME -- This specifies the dependence of soil decomposition on temperature.  !
-   !                  0.  ED-2.0 default, the original exponential                         !
-   !                  1.  Lloyd and Taylor (1994) model                                    !
-   !                      [[option 1 requires parameters to be set in xml]]                !
-   !---------------------------------------------------------------------------------------!
-   NL%DECOMP_SCHEME = 0
-   !---------------------------------------------------------------------------------------!
-   ! H2O_PLANT_LIM -- this determines whether plant photosynthesis can be limited by       !
-   !                  soil moisture, the FSW, defined as FSW = Supply / (Demand + Supply). !
-   !                                                                                       !
-   !                  Demand is always the transpiration rate in case soil moisture is     !
-   !                  not limiting (the psi_0 term times LAI).  The supply is determined   !
-   !                  by Kw * nplant * Broot * Available_Water, and the definition of      !
-   !                  available water changes depending on H2O_PLANT_LIM:                  !
-   !                  0.  Force FSW = 1 (effectively available water is infinity).         !
-   !                  1.  Available water is the total soil water above wilting point,     !
-   !                      integrated across all layers within the rooting zone.            !
-   !                  2.  Available water is the soil water at field capacity minus        !
-   !                      wilting point, scaled by the so-called wilting factor:           !
-   !                      (psi(k) - (H - z(k)) - psi_wp) / (psi_fc - psi_wp)               !
-   !                      where psi is the matric potential at layer k, z is the layer     !
-   !                      depth, H is the crown height, and psi_fc and psi_wp are the      !
-   !                      matric potentials at field capacity and wilting point.           !
-   !---------------------------------------------------------------------------------------!
-   NL%H2O_PLANT_LIM = 1
-   !---------------------------------------------------------------------------------------!
-   ! IDDMORT_SCHEME -- This flag determines whether storage should be accounted in the     !
-   !                   carbon balance.                                                     !
-   !                   0 -- Carbon balance is done in terms of fluxes only.  This is the   !
-   !                        default in ED-2.1                                              !
-   !                   1 -- Carbon balance is offset by the storage pool.  Plants will be  !
-   !                        in negative carbon balance only when they run out of storage   !
-   !                        and are still losing more carbon than gaining.                 !
-   !                                                                                       !
-   ! DDMORT_CONST   -- This constant (k) determines the relative contribution of light     !
-   !                   and soil moisture to the density-dependent mortality rate.  Values  !
-   !                   range from 0 (soil moisture only) to 1 (light only).                !
-   !                                                                                       !
-   !                              mort1                                                    !
-   !              mu_DD = -------------------------                                        !
-   !                       1 + exp [ mort2 * cr ]                                          !
-   !                                                                                       !
-   !                             CB                       CB                               !
-   !              cr    = k ------------- + (1 - k) -------------                          !
-   !                         CB_lightmax             CB_watermax                           !
-   !---------------------------------------------------------------------------------------!
-   NL%IDDMORT_SCHEME = 0
-   NL%DDMORT_CONST   = 0.8
-   !---------------------------------------------------------------------------------------!
-   !    The following variables are factors that control photosynthesis and respiration.   !
-   ! Notice that some of them are relative values whereas others are absolute.             !
-   !                                                                                       !
-   ! VMFACT_C3     -- Factor multiplying the default Vm0 for C3 plants (1.0 = default).    !
-   ! VMFACT_C4     -- Factor multiplying the default Vm0 for C4 plants (1.0 = default).    !
-   ! MPHOTO_TRC3   -- Stomatal slope (M) for tropical C3 plants                            !
-   ! MPHOTO_TEC3   -- Stomatal slope (M) for conifers and temperate C3 plants              !
-   ! MPHOTO_C4     -- Stomatal slope (M) for C4 plants.                                    !
-   ! BPHOTO_BLC3   -- cuticular conductance for broadleaf C3 plants  [umol/m2/s]           !
-   ! BPHOTO_NLC3   -- cuticular conductance for needleleaf C3 plants [umol/m2/s]           !
-   ! BPHOTO_C4     -- cuticular conductance for C4 plants            [umol/m2/s]           !
-   ! KW_GRASS      -- Water conductance for grasses, in m2/yr/kgC_root.  This is used      !
-   !                  only when H2O_PLANT_LIM is not 0.                                    !
-   ! KW_TREE       -- Water conductance for trees, in m2/yr/kgC_root.  This is used only   !
-   !                  when H2O_PLANT_LIM is not 0.                                         !
-   ! GAMMA_C3      -- The dark respiration factor (gamma) for C3 plants.  Subtropical      !
-   !                  conifers will be scaled by GAMMA_C3 * 0.028 / 0.02                   !
-   ! GAMMA_C4      -- The dark respiration factor (gamma) for C4 plants.                   !
-   ! D0_GRASS      -- The transpiration control in gsw (D0) for ALL grasses.               !
-   ! D0_TREE       -- The transpiration control in gsw (D0) for ALL trees.                 !
-   ! ALPHA_C3      -- Quantum yield of ALL C3 plants.  This is only applied when           !
-   !                  QUANTUM_EFFICIENCY_T = 0.                                            !
-   ! ALPHA_C4      -- Quantum yield of C4 plants.  This is always applied.                 !
-   ! KLOWCO2IN     -- The coefficient that controls the PEP carboxylase limited rate of    !
-   !                  carboxylation for C4 plants.                                         !
-   ! RRFFACT       -- Factor multiplying the root respiration factor for ALL PFTs.         !
-   !                  (1.0 = default).                                                     !
-   ! GROWTHRESP    -- The actual growth respiration factor (C3/C4 tropical PFTs only).     !
-   !                  (1.0 = default).                                                     !
-   ! LWIDTH_GRASS  -- Leaf width for grasses, in metres.  This controls the leaf boundary  !
-   !                  layer conductance (gbh and gbw).                                     !
-   ! LWIDTH_BLTREE -- Leaf width for trees, in metres.  This controls the leaf boundary    !
-   !                  layer conductance (gbh and gbw).  This is applied to broadleaf       !
-   !                  trees only.                                                          !
-   ! LWIDTH_NLTREE -- Leaf width for trees, in metres.  This controls the leaf boundary    !
-   !                  layer conductance (gbh and gbw).  This is applied to conifer trees   !
-   !                  only.                                                                !
-   ! Q10_C3        -- Q10 factor for C3 plants (used only if IPHYSIOL is set to 2 or 3).   !
-   ! Q10_C4        -- Q10 factor for C4 plants (used only if IPHYSIOL is set to 2 or 3).   !
-   !---------------------------------------------------------------------------------------!
-   NL%VMFACT_C3     = 1.00
-   NL%VMFACT_C4     = 1.00
-   NL%MPHOTO_TRC3   = 9.0
-   NL%MPHOTO_TEC3   = 7.2
-   NL%MPHOTO_C4     = 5.2
-   NL%BPHOTO_BLC3   = 10000.
-   NL%BPHOTO_NLC3   = 1000.
-   NL%BPHOTO_C4     = 10000.
-   NL%KW_GRASS      = 900.
-   NL%KW_TREE       = 600.
-   NL%GAMMA_C3      = 0.015
-   NL%GAMMA_C4      = 0.040
-   NL%D0_GRASS      = 0.016
-   NL%D0_TREE       = 0.016
-   NL%ALPHA_C3      = 0.080
-   NL%ALPHA_C4      = 0.055
-   NL%KLOWCO2IN     = 4000.
-   NL%RRFFACT       = 1.000
-   NL%GROWTHRESP    = 0.333
-   NL%LWIDTH_GRASS  = 0.05
-   NL%LWIDTH_BLTREE = 0.10
-   NL%LWIDTH_NLTREE = 0.05
-   NL%Q10_C3        = 2.4
-   NL%Q10_C4        = 2.4
-   !---------------------------------------------------------------------------------------!
-   ! THETACRIT -- Leaf drought phenology threshold.  The sign matters here:                !
-   !              >= 0. -- This is the relative soil moisture above the wilting point      !
-   !                       below which the drought-deciduous plants will start shedding    !
-   !                       their leaves                                                    !
-   !              <  0. -- This is the soil potential in MPa below which the drought-      !
-   !                       -deciduous plants will start shedding their leaves.  The wilt-  !
-   !                       ing point is by definition -1.5 MPa, so make sure that the      !
-   !                       value is above -1.5.                                            !
-   !---------------------------------------------------------------------------------------!
-   NL%THETACRIT = -1.15
-   !---------------------------------------------------------------------------------------!
-   ! QUANTUM_EFFICIENCY_T -- Which quantum yield model should be used for C3 plants        !
-   !                         0.  Original ED-2.1, quantum efficiency is constant.          !
-   !                         1.  Quantum efficiency varies with temperature following      !
-   !                             Ehleringer (1978) polynomial fit.                         !
-   !---------------------------------------------------------------------------------------!
-   NL%QUANTUM_EFFICIENCY_T = 0
-   !---------------------------------------------------------------------------------------!
-   ! N_PLANT_LIM -- This controls whether plant photosynthesis can be limited by nitrogen. !
-   !                0.  No limitation                                                      !
-   !                1.  ED-2.1 nitrogen limitation model.                                  !
-   !---------------------------------------------------------------------------------------!
-   NL%N_PLANT_LIM = 0
-   !---------------------------------------------------------------------------------------!
-   ! N_DECOMP_LIM -- This controls whether decomposition can be limited by nitrogen.       !
-   !                 0.  No limitation                                                     !
-   !                 1.  ED-2.1 nitrogen limitation model.                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%N_DECOMP_LIM = 0
-   !---------------------------------------------------------------------------------------!
-   !    The following parameters adjust the fire disturbance in the model.                 !
-   ! INCLUDE_FIRE   -- Which threshold to use for fires.                                   !
-   !                   0.  No fires;                                                       !
-   !                   1.  (deprecated) Fire will be triggered with enough biomass and     !
-   !                       integrated ground water depth less than a threshold.  Based on  !
-   !                       ED-1, the threshold assumes that the soil is 1 m, so deeper     !
-   !                       soils will need to be much drier to allow fires to happen and   !
-   !                       often will never allow fires.                                   !
-   !                   2.  Fire will be triggered when there is enough biomass and the     !
-   !                       total soil water in the top 75 cm falls below a threshold.      !
-   ! FIRE_PARAMETER -- If fire happens, this will control the intensity of the             !
-   !                   disturbance given the amount of fuel (currently the total           !
-   !                   above-ground biomass).                                              !
-   ! SM_FIRE        -- This is used only when INCLUDE_FIRE = 2.  The sign here matters.    !
-   !                   >= 0. - Minimum relative soil moisture above dry air of the top     !
-   !                           1 m that will prevent fires from happening.                 !
-   !                   <  0. - Minimum mean soil moisture potential in MPa of the top 1 m  !
-   !                           that will prevent fires from happening.  The dry air soil   !
-   !                           potential is defined as -3.1 MPa, so make sure SM_FIRE is   !
-   !                           greater than this value.                                    !
-   !---------------------------------------------------------------------------------------!
-   NL%INCLUDE_FIRE   = 0  ! default is 2
-   NL%FIRE_PARAMETER = 0.2
-   NL%SM_FIRE        = -1.45
-   !---------------------------------------------------------------------------------------!
-   ! IANTH_DISTURB -- This flag controls whether to include anthropogenic disturbances     !
-   !                  such as land clearing, abandonment, and logging.                     !
-   !                  0.  no anthropogenic disturbance.                                    !
-   !                  1.  use anthropogenic disturbance dataset.                           !
-   !---------------------------------------------------------------------------------------!
-   NL%IANTH_DISTURB = 0
-   !---------------------------------------------------------------------------------------!
-   ! ICANTURB -- This flag controls the canopy roughness.                                  !
-   !             0.  Based on Leuning et al. (1995), wind is computed using the            !
-   !                 similarity theory for the top cohort, and winds are extinguished      !
-   !                 with cumulative LAI.  If using CROWN_MOD 1 or 2, this will use local  !
-   !                 LAI and average by crown area.                                        !
-   !             1.  The default ED-2.1 scheme, except that it uses the zero-plane         !
-   !                 displacement height.                                                  !
-   !             2.  This uses the method of Massman (1997) using constant drag and no     !
-   !                 sheltering factor.                                                    !
-   !             3.  This is also based on Massman (1997), but with the option of varying  !
-   !                 the drag and sheltering within the canopy.                            !
-   !             4.  Same as 0, but it finds the ground conductance following the CLM      !
-   !                 technical note (equations 5.98-5.100).                                !
-   !---------------------------------------------------------------------------------------!
-   NL%ICANTURB = 1
-   !---------------------------------------------------------------------------------------!
-   ! ISFCLYRM -- Similarity theory model.  The model that computes u*, T*, etc...          !
-   !             1.  BRAMS default, based on Louis (1979).  It uses empirical relations    !
-   !                 to estimate the flux based on the bulk Richardson number              !
-   !                                                                                       !
-   !    All models below use an iterative method to find z/L, and the only change          !
-   ! is the functional form of the psi functions.                                          !
-   !                                                                                       !
-   !             2.  Oncley and Dudhia (1995) model, based on MM5.                         !
-   !             3.  Beljaars and Holtslag (1991) model.  Similar to 2, but it uses an     !
-   !                 alternative method for the stable case that mixes more than the       !
-   !                 OD95.                                                                 !
-   !             4.  CLM (2004).  Similar to 2 and 3, but they have special functions to   !
-   !                 deal with very stable and very unstable cases.                        !
-   !---------------------------------------------------------------------------------------!
-   NL%ISFCLYRM = 4  ! 3 set by default
-   !---------------------------------------------------------------------------------------!
-   ! IED_GRNDVAP -- Methods to find the ground -> canopy conductance.                      !
-   !                0.  Modified Lee Pielke (1992), adding field capacity, but using beta  !
-   !                    factor without the square, like in Noilhan and Planton (1989).     !
-   !                    This is the closest to the original ED-2.0 and LEAF-3, and it is   !
-   !                    also the recommended one.                                          !
-   !                1.  Test # 1 of Mahfouf and Noilhan (1991)                             !
-   !                2.  Test # 2 of Mahfouf and Noilhan (1991)                             !
-   !                3.  Test # 3 of Mahfouf and Noilhan (1991)                             !
-   !                4.  Test # 4 of Mahfouf and Noilhan (1991)                             !
-   !                5.  Combination of test #1 (alpha) and test #2 (soil resistance).      !
-   !    In all cases the beta term is modified so it approaches zero as soil moisture      !
-   ! goes to dry air soil.                                                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%IED_GRNDVAP = 0
-   !---------------------------------------------------------------------------------------!
-   !    The following variables are used to control the similarity theory model.  For the  !
-   ! meaning of these parameters, check Beljaars and Holtslag (1991).                      !
-   ! GAMM        -- gamma coefficient for momentum, unstable case (dimensionless)          !
-   !                Ignored when ISTAR = 1                                                 !
-   ! GAMH        -- gamma coefficient for heat, unstable case (dimensionless)              !
-   !                Ignored when ISTAR = 1                                                 !
-   ! TPRANDTL    -- Turbulent Prandtl number                                               !
-   !                Ignored when ISTAR = 1                                                 !
-   ! RIBMAX      -- maximum bulk Richardson number.                                        !
-   ! LEAF_MAXWHC -- Maximum water that can be intercepted by leaves, in kg/m2leaf.         !
-   !---------------------------------------------------------------------------------------!
-   NL%GAMM        = 13.0
-   NL%GAMH        = 13.0
-   NL%TPRANDTL    = 0.74
-   NL%RIBMAX      = 0.50
-   NL%LEAF_MAXWHC = 0.11
-   !---------------------------------------------------------------------------------------!
-   ! IPERCOL -- This controls percolation and infiltration.                                !
-   !            0.  Default method.  Assumes soil conductivity constant and for the        !
-   !                temporary surface water, it sheds liquid in excess of a 1:9 liquid-    !
-   !                -to-ice ratio through percolation.  Temporary surface water exists     !
-   !                only if the top soil layer is at saturation.                           !
-   !            1.  Constant soil conductivity, and it uses the percolation model as in    !
-   !                Anderson (1976) NOAA technical report NWS 19.  Temporary surface       !
-   !                water may exist after a heavy rain event, even if the soil doesn't     !
-   !                saturate.  Recommended value.                                          !
-   !            2.  Soil conductivity decreases with depth even for constant soil          !
-   !                moisture, otherwise it is the same as 1.                               !
-   !---------------------------------------------------------------------------------------!
-   NL%IPERCOL = 1
-   !---------------------------------------------------------------------------------------!
-   !    The following variables control the plant functional types (PFTs) that will be     !
-   ! used in this simulation.                                                              !
-   !                                                                                       !
-   ! INCLUDE_THESE_PFT -- a list containing all the PFTs you want to include in this run   !
-   ! AGRI_STOCK        -- which PFT should be used for agriculture                         !
-   !                      (used only when IANTH_DISTURB = 1)                               !
-   ! PLANTATION_STOCK  -- which PFT should be used for plantation                          !
-   !                      (used only when IANTH_DISTURB = 1)                               !
-   !                                                                                       !
-   ! PFT table                                                                             !
-   !---------------------------------------------------------------------------------------!
-   !  1 - C4 grass                 |  9 - early temperate deciduous                        !
-   !  2 - early tropical           | 10 - mid temperate deciduous                          !
-   !  3 - mid tropical             | 11 - late temperate deciduous                         !
-   !  4 - late tropical            | 12:15 - agricultural PFTs                             !
-   !  5 - temperate C3 grass       | 16 - Subtropical C3 grass                             !
-   !  6 - northern pines           |      (C4 grass with C3 photo).                        !
-   !  7 - southern pines           | 17 - "Araucaria" (non-optimised                       !
-   !  8 - late conifers            |      Southern Pines).                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%INCLUDE_THESE_PFT = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17  ! List of PFTs to be included
-   NL%AGRI_STOCK       = 5  ! Agriculture PFT (used only if ianth_disturb=1)
-   NL%PLANTATION_STOCK = 6  ! Plantation PFT (used only if ianth_disturb=1)
-   !---------------------------------------------------------------------------------------!
-   ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed   !
-   !                  in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0)           !
-   !                  0.  Stop the run                                                     !
-   !                  1.  Add the PFT in the INCLUDE_THESE_PFT list                        !
-   !                  2.  Ignore the cohort                                                !
-   !---------------------------------------------------------------------------------------!
-   NL%PFT_1ST_CHECK = 0
-   !---------------------------------------------------------------------------------------!
-   !    The following variables control the size of sub-polygon structures in ED-2.        !
-   ! MAXSITE        -- This is the strict maximum number of sites that each polygon can    !
-   !                   contain.  Currently this is used only when the user wants to run    !
-   !                   the same polygon with multiple soil types.  If there aren't that    !
-   !                   many different soil types with a minimum area (check MIN_SITE_AREA  !
-   !                   below), then the model will allocate just the amount needed.        !
-   ! MAXPATCH       -- If number of patches in a given site exceeds MAXPATCH, force patch  !
-   !                   fusion.  If MAXPATCH is 0, then fusion will never happen.  If       !
-   !                   MAXPATCH is negative, then the absolute value is used only during   !
-   !                   the initialization, and fusion will never happen again.  Notice     !
-   !                   that if the patches are too different, then the actual number of    !
-   !                   patches in a site may exceed MAXPATCH.                              !
-   ! MAXCOHORT      -- If number of cohorts in a given patch exceeds MAXCOHORT, force      !
-   !                   cohort fusion.  If MAXCOHORT is 0, then fusion will never happen.   !
-   !                   If MAXCOHORT is negative, then the absolute value is used only      !
-   !                   during the initialization, and fusion will never happen again.      !
-   !                   Notice that if the cohorts are too different, then the actual       !
-   !                   number of cohorts in a patch may exceed MAXCOHORT.                  !
-   ! MIN_SITE_AREA  -- This is the minimum fraction area of a given soil type that allows  !
-   !                   a site to be created (ignored if IED_INIT_MODE is set to 3).        !
-   ! MIN_PATCH_AREA -- This is the minimum fraction area that allows a patch to be         !
-   !                   created (ignored if IED_INIT_MODE is set to 3).                     !
-   !---------------------------------------------------------------------------------------!
-   NL%MAXSITE        = 6
-   NL%MAXPATCH       = 30
-   NL%MAXCOHORT      = 60
-   NL%MIN_SITE_AREA  = 0.005
-   NL%MIN_PATCH_AREA = 0.005
-   !---------------------------------------------------------------------------------------!
-   ! ZROUGH -- constant roughness, in metres, applied to the whole domain                  !
-   !---------------------------------------------------------------------------------------!
-   NL%ZROUGH = 0.1
-   !---------------------------------------------------------------------------------------!
-   !    Treefall disturbance parameters.                                                   !
-   ! TREEFALL_DISTURBANCE_RATE -- Sign-dependent treefall disturbance rate:                !
-   !                              > 0. usual disturbance rate, in 1/years;                 !
-   !                              = 0. No treefall disturbance;                            !
-   !                              < 0. Treefall will be added as a mortality rate (it      !
-   !                                   will kill plants, but it won't create a new patch). !
-   ! TIME2CANOPY               -- Minimum patch age for treefall disturbance to happen.    !
-   !                              If TREEFALL_DISTURBANCE_RATE = 0., this value will be    !
-   !                              ignored.  If this value is different from zero, then     !
-   !                              TREEFALL_DISTURBANCE_RATE is internally adjusted so the  !
-   !                              average patch age is still 1/TREEFALL_DISTURBANCE_RATE   !
-   !---------------------------------------------------------------------------------------!
-   NL%TREEFALL_DISTURBANCE_RATE = 0.0  ! 0.014
-   NL%TIME2CANOPY               = 0.0
-   !---------------------------------------------------------------------------------------!
-   ! RUNOFF_TIME -- In case a temporary surface water (TSW) is created, this is the "e-    !
-   !                -folding lifetime" of the TSW in seconds due to runoff.  If you don't  !
-   !                want runoff to happen, set this to 0.                                  !
-   !---------------------------------------------------------------------------------------!
-   NL%RUNOFF_TIME = 86400.0
-   !---------------------------------------------------------------------------------------!
-   !    The following variables control the minimum values of various velocities in the    !
-   ! canopy.  This is needed to avoid the air to be extremely still, or to avoid           !
-   ! singular- ities.  When defining the values, keep in mind that                         !
-   ! UBMIN >= UGBMIN >= USTMIN.                                                            !
-   !                                                                                       !
-   ! UBMIN  -- minimum wind speed at the top of the canopy air space       [ m/s]          !
-   ! UGBMIN -- minimum wind speed at the leaf level                        [ m/s]          !
-   ! USTMIN -- minimum friction velocity, u*, in m/s.                      [ m/s]          !
-   !---------------------------------------------------------------------------------------!
-   NL%UBMIN  = 0.65
-   NL%UGBMIN = 0.25
-   NL%USTMIN = 0.05
-   !---------------------------------------------------------------------------------------!
-   !    Control parameters for printing to standard output.  Any variable can be printed   !
-   ! to standard output as long as it is one dimensional.  Polygon variables have been     !
-   ! tested, no guarantees for other hierarchical levels.  Choose any variables that are   !
-   ! defined in the variable table fill routine in ed_state_vars.f90.  Choose the start    !
-   ! and end index of the polygon, site, patch, or cohort.  It should work in parallel.    !
-   ! The indices are global indices of the entire domain.  They are printed out in rows    !
-   ! of 10 columns each.                                                                   !
-   !                                                                                       !
-   ! IPRINTPOLYS -- 0.  Do not print information to screen                                 !
-   !                1.  Print polygon arrays to screen, use variables described below to   !
-   !                    determine which ones and how                                       !
-   ! NPVARS      -- Number of variables to be printed                                      !
-   ! PRINTVARS   -- List of variables to be printed                                        !
-   ! PFMTSTR     -- The standard fortran format for the prints.  One format per variable   !
-   ! IPMIN       -- First polygon (absolute index) to print                                !
-   ! IPMAX       -- Last polygon (absolute index) to print                                 !
-   !---------------------------------------------------------------------------------------!
-   NL%IPRINTPOLYS = 0
-   NL%NPVARS      = 1
-   NL%PRINTVARS   = 'AVG_PCPG','AVG_CAN_TEMP','AVG_VAPOR_AC','AVG_CAN_SHV'
-   NL%PFMTSTR     = 'f10.8','f5.1','f7.2','f9.5'
-   NL%IPMIN       = 1
-   NL%IPMAX       = 60
-   !---------------------------------------------------------------------------------------!
-   !    Variables that control the meteorological forcing.                                 !
-   !                                                                                       !
-   ! IMETTYPE -- Format of the meteorological dataset                                      !
-   !             0.  ASCII (deprecated)                                                    !
-   !             1.  HDF5                                                                  !
-   ! ISHUFFLE -- How to choose a year outside the meteorological data range (see           !
-   !             METCYC1 and METCYCF).                                                     !
-   !             0.  Sequentially cycle over years                                         !
-   !             1.  Randomly pick the years, using the same sequence.  This has worked    !
-   !                 with gfortran running on Mac OS X systems, but it acts like option 2  !
-   !                 when running ifort.                                                   !
-   !             2.  Randomly pick the years, choosing a different sequence each time      !
-   !                 the model is run.                                                     !
-   ! METCYC1  -- First year with meteorological information                                !
-   ! METCYCF  -- Last year with meteorological information                                 !
-   ! IMETAVG  -- How the input radiation was originally averaged.  You must tell this      !
-   !             because ED-2.1 can make an interpolation accounting for the cosine of     !
-   !             zenith angle.                                                             !
-   !             -1.  I don't know, use linear interpolation.                              !
-   !              0.  No average, the values are instantaneous                             !
-   !              1.  Averages ending at the reference time                                !
-   !              2.  Averages beginning at the reference time                             !
-   !              3.  Averages centred at the reference time                               !
-   ! IMETRAD  -- What should the model do with the input short wave radiation?             !
-   !             0.  Nothing, use it as is.                                                !
-   !             1.  Add them together, then use the SiB method to break radiation down    !
-   !                 into the four components (PAR direct, PAR diffuse, NIR direct,        !
-   !                 NIR diffuse).                                                         !
-   !             2.  Add them together, then use the method by Weiss and Norman (1985)     !
-   !                 to break radiation down to the four components.                       !
-   !             3.  Gloomy -- All radiation goes to diffuse.                              !
-   !             4.  Sesame street -- all radiation goes to direct, except at night.       !
-   ! INITIAL_CO2 -- Initial value for CO2 in case no CO2 is provided at the meteorological !
-   !                driver dataset [Units: µmol/mol]                                       !
-   !---------------------------------------------------------------------------------------!
-   NL%IMETTYPE    = 1            ! 0 = ASCII, 1 = HDF5
-   NL%ISHUFFLE    = 2            ! 2. Randomly pick recycled years
-   NL%METCYC1     = @MET_START@  ! First year of met data
-   NL%METCYCF     = @MET_END@    ! Last year of met data
-   NL%IMETAVG     = @MET_SOURCE@
-   NL%IMETRAD     = 0
-   NL%INITIAL_CO2 = 370.0        ! Initial value for CO2 in case no CO2 is provided at the
-                                 ! meteorological driver dataset
-   !---------------------------------------------------------------------------------------!
-   !    The following variables control the phenology prescribed from observations:        !
-   !                                                                                       !
-   ! IPHENYS1 -- First year for spring phenology                                           !
-   ! IPHENYSF -- Final year for spring phenology                                           !
-   ! IPHENYF1 -- First year for fall/autumn phenology                                      !
-   ! IPHENYFF -- Final year for fall/autumn phenology                                      !
-   ! PHENPATH -- path and prefix of the prescribed phenology data.                         !
-   !                                                                                       !
-   !    If the years don't cover the entire simulation period, they will be recycled.      !
-   !---------------------------------------------------------------------------------------!
-   NL%IPHENYS1 = @PHENOL_START@
-   NL%IPHENYSF = @PHENOL_END@
-   NL%IPHENYF1 = @PHENOL_START@
-   NL%IPHENYFF = @PHENOL_END@
-   NL%PHENPATH = '@PHENOL@'
-   !---------------------------------------------------------------------------------------!
-   !    These are some additional configuration files.                                     !
-   ! IEDCNFGF   -- XML file containing additional parameter settings.  If you don't have   !
-   !               one, leave it empty                                                     !
-   ! EVENT_FILE -- file containing specific events that must be incorporated into the      !
-   !               simulation.                                                             !
-   ! PHENPATH   -- path and prefix of the prescribed phenology data.                       !
-   !---------------------------------------------------------------------------------------!
-   NL%IEDCNFGF   = '@CONFIGFILE@'
-   NL%EVENT_FILE = 'myevents.xml'
-   !---------------------------------------------------------------------------------------!
-   !    Census variables.  This is going to assign unique census statuses to cohorts, to   !
-   ! better compare the model with census observations.  In case you don't intend to       !
-   ! compare the model with census data, set DT_CENSUS to 1; otherwise you may reduce      !
-   ! cohort fusion.                                                                        !
-   ! DT_CENSUS       -- Time between census, in months.  Currently the maximum is 60       !
-   !                    months, to avoid excessive memory allocation.  Every time the      !
-   !                    simulation reaches the census time step, all census tags will be   !
-   !                    reset.                                                             !
-   ! YR1ST_CENSUS    -- In which year was the first census conducted?                      !
-   ! MON1ST_CENSUS   -- In which month was the first census conducted?                     !
-   ! MIN_RECRUIT_DBH -- Minimum DBH that is measured in the census, in cm.                 !
-   !---------------------------------------------------------------------------------------!
-   NL%DT_CENSUS       = 1
-   NL%YR1ST_CENSUS    = 1901
-   NL%MON1ST_CENSUS   = 7
-   NL%MIN_RECRUIT_DBH = 10
-   !---------------------------------------------------------------------------------------!
-   !    The following variables are used to control the detailed output for debugging      !
-   ! purposes.                                                                             !
-   !                                                                                       !
-   ! IDETAILED  -- This flag controls the possible detailed outputs, mostly used for       !
-   !               debugging purposes.  Notice that this doesn't replace the normal debug- !
-   !               ger options, the idea is to provide detailed output to check bad        !
-   !               assumptions.  The options are additive, and the indices below represent !
-   !               the different types of output:                                          !
-   !                                                                                       !
-   !                1 -- Detailed budget (every DTLSM)                                     !
-   !                2 -- Detailed photosynthesis (every DTLSM)                             !
-   !                4 -- Detailed output from the integrator (every HDID)                  !
-   !                8 -- Thermodynamic bounds for sanity check (every DTLSM)               !
-   !               16 -- Daily error stats (which variable caused the time step to shrink) !
-   !               32 -- Allometry parameters, and minimum and maximum sizes               !
-   !                     (two files, only at the beginning)                                !
-   !                                                                                       !
-   !               In case you don't want any detailed output (likely for most runs), set  !
-   !               IDETAILED to zero.  In case you want to generate multiple outputs, add  !
-   !               the number of the sought options: for example, if you want detailed     !
-   !               photosynthesis and detailed output from the integrator, set IDETAILED   !
-   !               to 6 (2 + 4).  Any combination of the above outputs is acceptable, al-  !
-   !               though all but the last produce a huge number of text files, in which   !
-   !               case you may want to look at variable PATCH_KEEP.  It is also a good    !
-   !               idea to set IVEGT_DYNAMICS to 0 when using the first five outputs.      !
-   !                                                                                       !
-   !                                                                                       !
-   ! PATCH_KEEP -- This option will eliminate all patches except one from the initial-     !
-   !               isation.  This is only used when one of the first five types of         !
-   !               detailed output is active, otherwise it will be ignored.  Options are:  !
-   !               -2.  Keep only the patch with the lowest potential LAI                  !
-   !               -1.  Keep only the patch with the highest potential LAI                 !
-   !                0.  Keep all patches.                                                  !
-   !               > 0. Keep the patch with the provided index.  In case the index is      !
-   !                    not valid, the model will crash.                                   !
-   !---------------------------------------------------------------------------------------!
-   NL%IDETAILED  = 0
-   NL%PATCH_KEEP = 0
-   !---------------------------------------------------------------------------------------!
-
-
-   !---------------------------------------------------------------------------------------!
-   ! IOPTINPT -- Optimization configuration. (Currently not used)                          !
-   !---------------------------------------------------------------------------------------!
-   NL%IOPTINPT = ''
-   !---------------------------------------------------------------------------------------!
-$END
-!==========================================================================================!
-!==========================================================================================!
diff --git a/models/ed/inst/ED2IN.r2.2.0 b/models/ed/inst/ED2IN.r2.2.0
new file mode 100644
index 00000000000..14e29ff6d56
--- /dev/null
+++ b/models/ed/inst/ED2IN.r2.2.0
@@ -0,0 +1,1997 @@
+!==========================================================================================!
+!==========================================================================================!
ED2IN . !
+!                                                                                          !
+!    This is the file that contains the variables that define how ED is to be run.  There !
+! is some brief information about the variables here.  Some of the variables allow        !
+! switching between algorithms; in this case we highlight the status of each              !
+! implementation using the following labels:                                              !
+!                                                                                          !
+! ED-2.2 default.                                                                          !
+!    These are the options described in the ED-2.2 technical note (L19) and that have     !
+!    been thoroughly tested.  When unsure, we recommend using this option.                 !
+!                                                                                          !
+! ED-2.2 alternative.                                                                      !
+!    These are the options described either in the ED-2.2 technical note or in other      !
+!    publications (mostly X16), and should be fully functional.  Depending on the         !
+!    application, this may be the most appropriate option.                                 !
+!                                                                                          !
+! Legacy.                                                                                  !
+!    Older implementations, inherited from ED-1.0, ED-2.0, or ED-2.1.  These options are  !
+!    still fully functional and may be the most appropriate option depending on the       !
+!    question.                                                                             !
+!                                                                                          !
+! Beta.                                                                                    !
+!    Well-developed alternative implementations to the ED-2.2 default.  These             !
+!    implementations are nearly complete, but they have not been thoroughly tested.  Feel !
+!    free to try if you think it is useful, but bear in mind that they may still need     !
+!    some adjustments.                                                                     !
+!                                                                                          !
+! Under development.                                                                       !
+!    Alternative implementations to the ED-2.2 default, but not yet fully implemented.    !
+!    Do not use these options unless you are willing to contribute to the development.    !
+!                                                                                          !
+! Deprecated.                                                                              !
+!    Older implementations that have shown important limitations.  They are included for  !
+!    back-compatibility but we strongly discourage their use in most cases.               !
+!                                                                                          !
+! Non-functional.                                                                          !
+!    Older implementations that have been discontinued or methods not yet implemented.    !
+!    Do not use these options.                                                             !
+!                                                                                          !
+! References:                                                                              !
+!                                                                                          !
+! Longo M, Knox RG, Medvigy DM, Levine NM, Dietze MC, Kim Y, Swann ALS, Zhang K,           !
+!    Rollinson CR, Bras RL, Wofsy SC, Moorcroft PR. 2019. The biophysics, ecology, and     !
+!    biogeochemistry of functionally diverse, vertically and horizontally heterogeneous   !
+!    ecosystems: the Ecosystem Demography model, version 2.2 - part 1: Model description. !
+!    Geosci. Model Dev. 12: 4309-4346, doi:10.5194/gmd-12-4309-2019 (L19).                 !
+!                                                                                          !
+! Xu X, Medvigy D, Powers JS, Becknell JM, Guan K. 2016. Diversity in plant hydraulic      !
+!    traits explains seasonal and inter-annual variations of vegetation dynamics in       !
+!    seasonally dry tropical forests. New Phytol. 212: 80-95, doi:10.1111/nph.14009 (X16). !
+!------------------------------------------------------------------------------------------!
+$ED_NL
+
+   !----- Simulation title (64 characters). -----------------------------------------------!
+   NL%EXPNME = 'ED2 vGITHUB PEcAn @ENSNAME@'
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! Type of run:                                                                           !
+   !                                                                                        !
+   ! INITIAL -- ED-2 will start a new run.  Initial conditions will be set by               !
+   !            IED_INIT_MODE.  The initial conditions can be based on a previous run       !
+   !            (restart/history), but then it will use only the biomass information as a   !
+   !            simple initial condition.  Energy, water, and CO2 will use standard         !
+   !            initial conditions.                                                         !
+   ! HISTORY -- ED-2 will resume a simulation from the last history, and every variable     !
+   !            (including forest structure and thermodynamics) will be assigned based on   !
+   !            the history file.                                                           !
+   !            IMPORTANT: this option is intended for continuing interrupted simulations  !
+   !            (e.g. power outage).  We discourage users from selecting this option with  !
+   !            restart files generated by different commits.                              !
+   !---------------------------------------------------------------------------------------!
+   NL%RUNTYPE = 'INITIAL'
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! Start of simulation.  Information must be given in UTC time.                           !
+   !---------------------------------------------------------------------------------------!
+   NL%IMONTHA = @START_MONTH@   ! Month
+   NL%IDATEA  = @START_DAY@     ! Day
+   NL%IYEARA  = @START_YEAR@    ! Year
+   NL%ITIMEA  = 0000            ! UTC
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! End of simulation.  Information must be given in UTC time.                             !
+   !---------------------------------------------------------------------------------------!
+   NL%IMONTHZ = @END_MONTH@     ! Month
+   NL%IDATEZ  = @END_DAY@       ! Day
+   NL%IYEARZ  = @END_YEAR@      ! Year
+   NL%ITIMEZ  = 0000            ! UTC
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! DTLSM  -- Basic time step [seconds] for photosynthesis, and maximum step for thermo-   !
+   !           dynamics.  Recommended values range from 240 to 900 seconds when using the   !
+   !           4th-order Runge-Kutta integrator (INTEGRATION_SCHEME=1).  We discourage      !
+   !           using the forward Euler scheme (INTEGRATION_SCHEME=0), but in case you       !
+   !           really want to use it, set the time step to 60 seconds or shorter.           !
+   ! RADFRQ -- Time step for the canopy radiative transfer model [seconds].  This value     !
+   !           must be an integer multiple of DTLSM, and we recommend it to be exactly      !
+   !           the same as DTLSM.                                                           !
+   !---------------------------------------------------------------------------------------!
+   NL%DTLSM  = 600
+   NL%RADFRQ = 600
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! MONTH_YRSTEP -- Month in which the yearly time step (patch dynamics) should occur.     !
+   !---------------------------------------------------------------------------------------!
+   NL%MONTH_YRSTEP = 7
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! The following variables are used in case the user wants to perform a regional run.    !
+   !                                                                                        !
+   ! N_ED_REGION -- number of regions for which you want to run ED.  This can be set to     !
+   !                zero provided that N_POI is not...                                      !
+   ! GRID_TYPE   -- which kind of grid to run:                                              !
+   !                0. Longitude/latitude grid                                              !
+   !                1. Polar-stereographic                                                  !
+   !---------------------------------------------------------------------------------------!
+   NL%N_ED_REGION = 0
+   NL%GRID_TYPE   = 0
+
+      !------------------------------------------------------------------------------------!
+      ! The following variables are used only when GRID_TYPE is set to 0.  You must         !
+      ! provide one value for each grid, except where otherwise noted.                      !
+      !                                                                                     !
+      ! GRID_RES      -- Grid resolution, in degrees (first grid only, the other grids      !
+      !                  resolution will be defined by NSTRATX/NSTRATY).                   !
+      ! ED_REG_LATMIN -- Southernmost point of each region.                                !
+      ! ED_REG_LATMAX -- Northernmost point of each region.                                !
+      ! ED_REG_LONMIN -- Westernmost point of each region.                                 !
+      ! ED_REG_LONMAX -- Easternmost point of each region.                                 !
+      !------------------------------------------------------------------------------------!
+      NL%GRID_RES      = 1.0
+      NL%ED_REG_LATMIN = -12.0, -7.5, 10.0, -6.0
+      NL%ED_REG_LATMAX =   1.0, -3.5, 15.0, -1.0
+      NL%ED_REG_LONMIN = -66.0,-58.5, 70.0, -63.0
+      NL%ED_REG_LONMAX = -49.0,-54.5, 35.0, -53.0
+      !------------------------------------------------------------------------------------!
+
+
+
+      !------------------------------------------------------------------------------------!
+      ! The following variables are used only when GRID_TYPE is set to 1.                   !
+      !                                                                                     !
+      ! NNXP    -- number of points in the X direction.  One value for each grid.           !
+      ! NNYP    -- number of points in the Y direction.  One value for each grid.           !
+      ! DELTAX  -- grid resolution in the X direction, near the grid pole.  Units: [ m].    !
+      !            this value is used to define the first grid only, other grids are        !
+      !            defined using NSTRATX.                                                   !
+      ! DELTAY  -- grid resolution in the Y direction, near the grid pole.  Units: [ m].    !
+      !            this value is used to define the first grid only, other grids are        !
+      !            defined using NSTRATY.  Unless you are running some specific tests,      !
+      !            both DELTAX and DELTAY should be the same.                               !
+      ! POLELAT -- Latitude of the pole point.  Set this close to CENTLAT for a more        !
+      !            traditional "square" domain.  One value for all grids.                   !
+      ! POLELON -- Longitude of the pole point.  Set this close to CENTLON for a more       !
+      !            traditional "square" domain.  One value for all grids.                   !
+      ! CENTLAT -- Latitude of the central point.  One value for each grid.                 !
+      ! CENTLON -- Longitude of the central point.  One value for each grid.                !
+      !------------------------------------------------------------------------------------!
+      NL%NNXP    = 110
+      NL%NNYP    = 70
+      NL%DELTAX  = 60000
+      NL%DELTAY  = 60000
+      NL%POLELAT = -2.857
+      NL%POLELON = -54.959
+      NL%CENTLAT = -2.857
+      NL%CENTLON = -54.959
+      !------------------------------------------------------------------------------------!
+
+
+
+      !------------------------------------------------------------------------------------!
+      ! Nest ratios.  These values are used by both GRID_TYPE=0 and GRID_TYPE=1.            !
+      ! NSTRATX -- this will divide the values given by DELTAX or GRID_RES for the          !
+      !            nested grids.  The first value should always be one.                     !
+      ! NSTRATY -- this will divide the values given by DELTAY or GRID_RES for the          !
+      !            nested grids.  The first value should always be one, and this must       !
+      !            always be the same as NSTRATX when GRID_TYPE = 0, and this is also       !
+      !            strongly recommended when GRID_TYPE = 1.                                 !
+      !------------------------------------------------------------------------------------!
+      NL%NSTRATX = 1,4
+      NL%NSTRATY = 1,4
+      !------------------------------------------------------------------------------------!
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! The following variables are used to define single polygon of interest runs, and       !
+   ! they are ignored when N_POI = 0.                                                      !
+   !                                                                                        !
+   ! N_POI   -- number of polygons of interest (POIs).  This can be zero as long as         !
+   !            N_ED_REGION is not.                                                         !
+   ! POI_LAT -- list of latitudes of each POI.                                              !
+   ! 
POI_LON -- list of longitudes of each POI. ! + ! POI_RES -- grid resolution of each POI (degrees). This is used only to define the ! + ! soil types ! + !---------------------------------------------------------------------------------------! + NL%N_POI = 1 + NL%POI_LAT = @SITE_LAT@ + NL%POI_LON = @SITE_LON@ + NL%POI_RES = 1.00 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! LOADMETH -- Load balancing method. This is used only in regional runs run in ! + ! parallel. ! + ! 0. Let ED decide the best way of splitting the polygons. Commonest ! + ! option and default. ! + ! 1. One of the methods to split polygons based on their previous ! + ! work load. Developpers only. ! + ! 2. Try to load an equal number of SITES per node. Useful for when ! + ! total number of polygon is the same as the total number of cores. ! + ! 3. Another method to split polygons based on their previous work load. ! + ! Developpers only. ! + !---------------------------------------------------------------------------------------! + NL%LOADMETH = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ED2 File output. For all the variables 0 means no output and 3 means HDF5 output. ! + ! ! + ! IFOUTPUT -- Fast analysis. These are mostly polygon-level averages, and the time ! + ! interval between files is determined by FRQANL ! + ! IDOUTPUT -- Daily means (one file per day) ! + ! IMOUTPUT -- Monthly means (one file per month) ! + ! IQOUTPUT -- Monthly means of the diurnal cycle (one file per month). The number ! + ! of points for the diurnal cycle is 86400 / FRQANL ! + ! IYOUTPUT -- Annual output. ! + ! ITOUTPUT -- Instantaneous fluxes, mostly polygon-level variables, one file per year. ! + ! IOOUTPUT -- Observation time output. Equivalent to IFOUTPUT, except only at the ! + ! times specified in OBSTIME_DB. ! + ! ISOUTPUT -- restart file, for HISTORY runs. The time interval between files is ! + ! determined by FRQHIS ! + !---------------------------------------------------------------------------------------! + NL%IFOUTPUT = 0 + NL%IDOUTPUT = 0 + NL%IMOUTPUT = 3 + NL%IQOUTPUT = 0 + NL%IYOUTPUT = 0 + NL%ITOUTPUT = 3 + NL%IOOUTPUT = 0 + NL%ISOUTPUT = 3 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control whether site-, patch-, and cohort-level time ! + ! means and mean sum of squares should be included in the output files or not. ! + ! If these options are on, then they provide much more detailed output, but they may ! + ! add a lot of disk space (especially if you want the fast output to have the detailed ! + ! output). ! + ! ! + ! IADD_SITE_MEANS -- Add site-level averages to the output ! + ! IADD_PATCH_MEANS -- Add patch-level averages to the output ! + ! IADD_COHORT_MEANS -- Add cohort-level averages to the output ! + ! ! + ! The options are additive, and the indices below represent the different types of ! + ! output: ! + ! ! + ! 0 -- No detailed output. ! + ! 1 -- Include the level in monthly output (IMOUTPUT and IQOUTPUT) ! + ! 2 -- Include the level in daily output (IDOUTPUT). ! + ! 4 -- Include the level in sub-daily output (IFOUTPUT and IOOUTPUT). ! + ! ! + ! 
For example, in case you don't want any cohort output, set IADD_COHORT_MEANS to zero.      !
+   ! In case you want to include cohort means in both the daily and monthly outputs,       !
+   ! but not the sub-daily means, set IADD_COHORT_MEANS to 3 (1 + 2).  Any combination of  !
+   ! the above outputs is acceptable (i.e., any number from 0 to 7).                       !
+   !---------------------------------------------------------------------------------------!
+   NL%IADD_SITE_MEANS   = 1
+   NL%IADD_PATCH_MEANS  = 1
+   NL%IADD_COHORT_MEANS = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ATTACH_METADATA -- Flag for attaching metadata to HDF datasets.  Attaching metadata    !
+   !                    will aid new users in quickly identifying dataset descriptions but  !
+   !                    will compromise I/O performance significantly.                      !
+   !                    0 = no metadata, 1 = attach metadata                                !
+   !---------------------------------------------------------------------------------------!
+   NL%ATTACH_METADATA = 1
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! UNITFAST  -- The following variables control the units for FRQFAST/OUTFAST, and        !
+   ! UNITSTATE    FRQSTATE/OUTSTATE, respectively.  Possible values are:                    !
+   !              0. Seconds;                                                               !
+   !              1. Days;                                                                  !
+   !              2. Calendar months (variable)                                             !
+   !              3. Calendar years  (variable)                                             !
+   !                                                                                        !
+   ! N.B.: 1. In case OUTFAST/OUTSTATE are set to special flags (-1 or -2)                  !
+   !          UNITFAST/UNITSTATE will be ignored for them.                                  !
+   !       2. In case IQOUTPUT is set to 3, then UNITFAST has to be 0.                      !
+   !                                                                                        !
+   !---------------------------------------------------------------------------------------!
+   NL%UNITFAST  = 0
+   NL%UNITSTATE = 3
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! OUTFAST/OUTSTATE -- these control the number of times per file.                        !
+   !                      0. Each time gets its own file                                    !
+   !                     -1. One file per day                                               !
+   !                     -2. One file per month                                             !
+   !                    > 0. Multiple timepoints can be recorded to a single file,          !
+   !                         reducing the number of files and I/O time in post-processing.  !
+   !                         Multiple timepoints should not be used in the history files    !
+   !                         if you intend to use these for HISTORY runs.                   !
+   !---------------------------------------------------------------------------------------!
+   NL%OUTFAST  = 0
+   NL%OUTSTATE = 0
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ICLOBBER -- What to do in case the model finds a file that it was supposed to have     !
+   !             written?  0 = stop the run, 1 = overwrite without warning.                 !
+   ! FRQFAST  -- time interval between analysis files, units defined by UNITFAST.           !
+   ! FRQSTATE -- time interval between history files, units defined by UNITSTATE.           !
+   !---------------------------------------------------------------------------------------!
+   NL%ICLOBBER = 1
+   NL%FRQFAST  = 3600.
+   NL%FRQSTATE = 1.
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! FFILOUT -- Path and prefix for analysis files (all but history/restart).               !
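+   !            (Illustrative example, not part of the original template: with the         !
+   !            monthly output enabled above, a prefix such as '/mypath/analysis' is       !
+   !            expected to yield files named like analysis-E-yyyy-mm-00-000000-g01.h5;    !
+   !            the exact letter tag depends on the output type, e.g. '-S-' for the        !
+   !            history files shown further below.)                                        !
+   ! 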
SFILOUT -- Path and prefix for history files. ! + !---------------------------------------------------------------------------------------! + NL%FFILOUT = '@FFILOUT@' + NL%SFILOUT = '@SFILOUT@' + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IED_INIT_MODE -- This controls how the plant community and soil carbon pools are ! + ! initialised. ! + ! ! + ! -1. Start from a true bare ground run, or an absolute desert run. This will ! + ! never grow any plant. ! + ! 0. Start from near-bare ground (only a few seedlings from each PFT to be included ! + ! in this run). ! + ! 1. (Deprecated) This will use history files written by ED-1.0. It will read the ! + ! ecosystem state (like biomass, LAI, plant density, etc.), but it will start ! + ! the thermodynamic state as a new simulation. ! + ! 2. (Deprecated) Same as 1, but it uses history files from ED-2.0 without multiple ! + ! sites, and with the old PFT numbers. ! + ! 3. Same as 1, but using history files from ED-2.0 with multiple sites and ! + ! TOPMODEL hydrology. ! + ! 4. Same as 1, but using ED2.1 H5 history/state files that take the form: ! + ! 'dir/prefix-gxx.h5' ! + ! Initialization files MUST end with -gxx.h5 where xx is a two digit integer ! + ! grid number. Each grid has its own initialization file. As an example, if a ! + ! user has two files to initialize their grids with: ! + ! example_file_init-g01.h5 and example_file_init-g02.h5 ! + ! SFILIN = 'example_file_init' ! + ! ! + ! 5. This is similar to option 4, except that you may provide several files ! + ! (including a mix of regional and POI runs, each file ending at a different ! + ! date). This will not check date nor grid structure, it will simply read all ! + ! polygons and match the nearest neighbour to each polygon of your run. SFILIN ! + ! must have the directory common to all history files that are sought to be used,! + ! up to the last character the files have in common. For example if your files ! + ! are ! + ! /mypath/P0001-S-2000-01-01-000000-g01.h5, ! + ! /mypath/P0002-S-1966-01-01-000000-g02.h5, ! + ! ... ! + ! /mypath/P1000-S-1687-01-01-000000-g01.h5: ! + ! SFILIN = '/mypath/P' ! + ! ! + ! 6 - Initialize with ED-2 style files without multiple sites, exactly like option ! + ! 2, except that the PFT types are preserved. ! + ! ! + ! 7. Initialize from a list of both POI and gridded ED2.1 state files, organized ! + ! in the same manner as 5. This method overrides the soil database info and ! + ! takes the soil texture and soil moisture information from the initializing ! + ! ED2.1 state file. It allows for different layering, and assigns via nearest ! + ! neighbor. ! + !---------------------------------------------------------------------------------------! + NL%IED_INIT_MODE = @INIT_MODEL@ + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! EDRES -- Expected input resolution for ED2.0 files. This is not used unless ! + ! IED_INIT_MODE = 3. ! + !---------------------------------------------------------------------------------------! + NL%EDRES = 1.0 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! 
SFILIN -- The meaning and the size of this variable depend on the type of run, set         !
+   !           at variable RUNTYPE.                                                        !
+   !                                                                                        !
+   ! 1. INITIAL.  Then this is the path+prefix of the previous ecosystem state.  This has   !
+   !              dimension of the number of grids so you can initialize each grid with a   !
+   !              different dataset.  In case only one path+prefix is given, the same will  !
+   !              be used for every grid.  Only some ecosystem variables will be set up     !
+   !              here, and the initial condition will be in thermodynamic equilibrium.     !
+   !                                                                                        !
+   ! 2. HISTORY.  This is the path+prefix of the history file that will be used.  Only the  !
+   !              path+prefix will be used, as the history for every grid must have come    !
+   !              from the same simulation.                                                 !
+   !---------------------------------------------------------------------------------------!
+   NL%SFILIN = '@SITE_PSSCSS@'
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! History file information.  These variables are used to continue a simulation from      !
+   ! a point other than the beginning.  Time must be in UTC.                                !
+   !                                                                                        !
+   ! IMONTHH -- the time of the history file.  This is the only place you need to change    !
+   ! IDATEH     dates for a HISTORY run.  You may change IMONTHZ and related in case you    !
+   ! IYEARH     want to extend the run, but you should NOT change IMONTHA and related.      !
+   ! ITIMEH                                                                                 !
+   !---------------------------------------------------------------------------------------!
+   NL%IYEARH  = 2000   ! Year
+   NL%IMONTHH = 08     ! Month
+   NL%IDATEH  = 01     ! Day
+   NL%ITIMEH  = 0000   ! UTC
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! NZG - Number of soil layers.  One value for all regions and polygons of interest.      !
+   ! NZS - Maximum number of snow/water ponding layers.  One value for all regions          !
+   !       and polygons of interest.  This is used only when snow is accumulating.          !
+   !       If only liquid water is standing, the water will always be collapsed             !
+   !       into a single layer.                                                             !
+   !---------------------------------------------------------------------------------------!
+   NL%NZG = 16
+   NL%NZS = 4
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! ISOILFLG -- This controls how to initialise soil texture.  This must be a list with    !
+   !             N_ED_REGION+N_POI elements.  The first N_ED_REGION elements correspond to  !
+   !             each gridded domain (from first to last).  Elements between N_ED_REGION+1  !
+   !             and N_ED_REGION+N_POI correspond to the polygons of interest (from 1 to    !
+   !             N_POI).  Options are:                                                      !
+   !             1 -- Read in soil textural class from the files set in SOIL_DATABASE.      !
+   !             2 -- Assign either the value set by NSLCON (see below) or define soil      !
+   !                  texture from SLXSAND and SLXCLAY.                                     !
+   !---------------------------------------------------------------------------------------!
+   NL%ISOILFLG = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! NSLCON -- ED-2 Soil classes that the model will use when ISOILFLG is set to 2.         !
+   !           Possible values are:                                                         !
+   !---------------------------------------------------------------------------------------!
+   ! 
1 -- sand | 7 -- silty clay loam | 13 -- bedrock ! + ! 2 -- loamy sand | 8 -- clayey loam | 14 -- silt ! + ! 3 -- sandy loam | 9 -- sandy clay | 15 -- heavy clay ! + ! 4 -- silt loam | 10 -- silty clay | 16 -- clayey sand ! + ! 5 -- loam | 11 -- clay | 17 -- clayey silt ! + ! 6 -- sandy clay loam | 12 -- peat ! + !---------------------------------------------------------------------------------------! + NL%NSLCON = 6 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ISOILCOL -- LEAF-3 and ED-2 soil colour classes that the model will use when ISOILFLG ! + ! is set to 2. Soil classes are from 1 to 20 (1 = lightest; 20 = darkest). ! + ! The values are the same as CLM-4.0. The table is the albedo for visible ! + ! and near infra-red. ! + !---------------------------------------------------------------------------------------! + ! ! + ! |-----------------------------------------------------------------------| ! + ! | | Dry soil | Saturated | | Dry soil | Saturated | ! + ! | Class |-------------+-------------| Class +-------------+-------------| ! + ! | | VIS | NIR | VIS | NIR | | VIS | NIR | VIS | NIR | ! + ! |-------+------+------+------+------+-------+------+------+------+------| ! + ! | 1 | 0.36 | 0.61 | 0.25 | 0.50 | 11 | 0.24 | 0.37 | 0.13 | 0.26 | ! + ! | 2 | 0.34 | 0.57 | 0.23 | 0.46 | 12 | 0.23 | 0.35 | 0.12 | 0.24 | ! + ! | 3 | 0.32 | 0.53 | 0.21 | 0.42 | 13 | 0.22 | 0.33 | 0.11 | 0.22 | ! + ! | 4 | 0.31 | 0.51 | 0.20 | 0.40 | 14 | 0.20 | 0.31 | 0.10 | 0.20 | ! + ! | 5 | 0.30 | 0.49 | 0.19 | 0.38 | 15 | 0.18 | 0.29 | 0.09 | 0.18 | ! + ! | 6 | 0.29 | 0.48 | 0.18 | 0.36 | 16 | 0.16 | 0.27 | 0.08 | 0.16 | ! + ! | 7 | 0.28 | 0.45 | 0.17 | 0.34 | 17 | 0.14 | 0.25 | 0.07 | 0.14 | ! + ! | 8 | 0.27 | 0.43 | 0.16 | 0.32 | 18 | 0.12 | 0.23 | 0.06 | 0.12 | ! + ! | 9 | 0.26 | 0.41 | 0.15 | 0.30 | 19 | 0.10 | 0.21 | 0.05 | 0.10 | ! + ! | 10 | 0.25 | 0.39 | 0.14 | 0.28 | 20 | 0.08 | 0.16 | 0.04 | 0.08 | ! + ! |-----------------------------------------------------------------------| ! + ! ! + ! Soil type 21 is a special case in which we use the albedo method that used to be ! + ! the default in ED-2.1. ! + !---------------------------------------------------------------------------------------! + NL%ISOILCOL = 14 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! These variables are used to define the soil properties when you don't want to use ! + ! the standard soil classes. ! + ! ! + ! SLXCLAY -- Prescribed fraction of clay [0-1] ! + ! SLXSAND -- Prescribed fraction of sand [0-1]. ! + ! ! + ! They are used only when ISOILFLG is 2, both values are between 0. and 1., and ! + ! their sum doesn't exceed 1. In case ISOILFLG is 2 but the fractions do not meet the ! + ! criteria, ED-2 uses NSLCON instead. ! + !---------------------------------------------------------------------------------------! + NL%SLXCLAY = 0.345 + NL%SLXSAND = 0.562 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! Soil grid and initial conditions in case no file is provided. Provide NZG values ! + ! for the following variables (always from deepest to shallowest layer). ! + ! ! + ! 
SLZ - depth of the bottom of each soil layer [m]. Values must be negative. ! + ! SLMSTR - this is the initial soil moisture, now given as the soil moisture index. ! + ! -1 = dry air soil moisture ! + ! 0 = wilting point ! + ! 1 = field capacity ! + ! 2 = porosity (saturation) ! + ! Values can be fraction, in which case they will be linearly interpolated ! + ! between the special points (e.g. 0.5 will put soil moisture half way ! + ! between the wilting point and field capacity). ! + ! STGOFF - initial temperature offset (soil temperature = air temperature + offset) ! + !---------------------------------------------------------------------------------------! + NL%SLZ = -8.000,-7.072,-6.198,-5.380,-4.617,-3.910,-3.259,-2.664,-2.127,-1.648, + -1.228,-0.866,-0.566,-0.326,-0.150,-0.040 + NL%SLMSTR = 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, + 1.000, 1.000, 1.000, 1.000, 1.000, 1.000 + NL%STGOFF = 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, + 0.000, 0.000, 0.000, 0.000, 0.000, 0.000 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! Input databases ! + ! VEG_DATABASE -- vegetation database, used only to determine the land/water mask. ! + ! Fill with the path and the prefix. ! + ! SOIL_DATABASE -- soil database, used to determine the soil type. Fill with the ! + ! path and the prefix. ! + ! LU_DATABASE -- land-use change disturbance rates database, used only when ! + ! IANTH_DISTURB is set to 1. Fill with the path and the prefix. ! + ! PLANTATION_FILE -- Character string for the path to the forest plantation fraction ! + ! file. This is used only when IANTH_DISTURB is set to 1 and ! + ! the user wants to simulate forest plantations. Otherwise, leave ! + ! it empty (PLANTATION_FILE='') ! + ! THSUMS_DATABASE -- input directory with dataset to initialise chilling-degree and ! + ! growing-degree days, which is used to drive the cold-deciduous ! + ! phenology (you must always provide this, even when your PFTs are ! + ! not cold deciduous). ! + ! ED_MET_DRIVER_DB -- File containing information for meteorological driver ! + ! instructions (the "header" file). ! + ! OBSTIME_DB -- File containing times of desired IOOUTPUT ! + ! Reference file: /ED/run/obstime_template.time ! + ! SOILSTATE_DB -- If ISOILSTATEINIT=1, this variable specifies the full path of ! + ! the file that contains soil moisture and temperature ! + ! information. ! + ! SOILDEPTH_DB -- If ISOILDEPTHFLG=1, this variable specifies the full path of the ! + ! file that contains soil moisture and temperature information. ! + !---------------------------------------------------------------------------------------! + NL%VEG_DATABASE = '@ED_VEG@' + NL%SOIL_DATABASE = '@ED_SOIL@' + NL%LU_DATABASE = '@ED_LU@' + NL%PLANTATION_FILE = '' + NL%THSUMS_DATABASE = '@ED_THSUM@' + NL%ED_MET_DRIVER_DB = '@SITE_MET@' + NL%OBSTIME_DB = '' !Reference file: /ED/run/obstime_template.time + NL%SOILSTATE_DB = '/mypath/soil_data/temp+moist/STW1996OCT.dat' + NL%SOILDEPTH_DB = '/mypath/soil_data/depth/H250mBRD/H250mBRD_' + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ISOILSTATEINIT -- Variable controlling how to initialise the soil temperature and ! + ! moisture ! + ! 0. Use SLMSTR and STGOFF. ! + ! 1. Read from SOILSTATE_DB. 
! + ! ISOILDEPTHFLG -- Variable controlling how to initialise soil depth ! + ! 0. Constant, always defined by the first (deepest) SLZ layer. ! + ! 1. Read from SOILDEPTH_DB (ED-1.0 style, ascii file). ! + ! 2. Read from SOILDEPTH_DB (ED-2.2 style, hdf5 files + header). ! + !---------------------------------------------------------------------------------------! + NL%ISOILSTATEINIT = 0 + NL%ISOILDEPTHFLG = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! ISOILBC -- This controls the soil moisture boundary condition at the bottom. Choose ! + ! the option according to the site characteristics. ! + ! 0. Flat bedrock. Flux from the bottom of the bottommost layer is zero. ! + ! 1. Gravitational flow (free drainage). The flux from the bottom of the ! + ! bottommost layer is due to gradient of height only. ! + ! 2. Lateral drainage. Similar to free drainage, but the gradient is ! + ! reduced by the slope not being completely vertical. The reduction is ! + ! controlled by variable SLDRAIN. In the future options 0, 1, and 2 may ! + ! be combined into a single option. ! + ! 3. Aquifer. Soil moisture of the ficticious layer beneath the bottom is ! + ! always at saturation. ! + !---------------------------------------------------------------------------------------! + NL%ISOILBC = 1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! SLDRAIN -- This is used only when ISOILBC is set to 2. In this case SLDRAIN is the ! + ! equivalent slope that will slow down drainage. If this is set to zero, ! + ! then lateral drainage reduces to flat bedrock, and if this is set to 90, ! + ! then lateral drainage becomes free drainage. SLDRAIN must be between 0 ! + ! and 90. ! + !---------------------------------------------------------------------------------------! + NL%SLDRAIN = 90. + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IVEGT_DYNAMICS -- The vegetation dynamics scheme. ! + ! 0. No vegetation dynamics, the initial state will be preserved, ! + ! even though the model will compute the potential values. This ! + ! option is useful for theoretical simulations only. ! + ! 1. Normal ED vegetation dynamics (Moorcroft et al 2001). ! + ! The normal option for almost any simulation. ! + !---------------------------------------------------------------------------------------! + NL%IVEGT_DYNAMICS = 1 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! IBIGLEAF -- Do you want to run ED as a 'big leaf' model? ! + ! 0. No, use the standard size- and age-structure (Moorcroft et al. 2001, ! + ! Ecol. Monogr.). This is the recommended method for most ! + ! applications. ! + ! 1. 'big leaf' ED (Levine et al. 2016, PNAS): this will have no ! + ! horizontal or vertical heterogeneities; 1 patch per PFT and 1 cohort ! + ! per patch; no vertical growth, recruits will 'appear' instantaneously ! + ! at maximum height. ! + ! ! + ! N.B. if you set IBIGLEAF to 1, you MUST turn off the crown model (CROWN_MOD = 0) ! 
+ !---------------------------------------------------------------------------------------! + NL%IBIGLEAF = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! INTEGRATION_SCHEME -- The biophysics integration scheme. ! + ! 0. (Deprecated) Euler step. The fastest, but it has only a ! + ! very crude estimate of time-step errors. ! + ! 1. (ED-2.2 default) Fourth-order Runge-Kutta method. ! + ! 2. (Deprecated) Second-order Runge-Kutta method (Heun's). ! + ! This is not faster than option 1, and it will be eventually ! + ! removed. ! + ! 3. (Under development) Hybrid Stepping (Backward Euler BDF2 ! + ! implicit step for the canopy air and leaf temp, forward ! + ! Euler for else). This has not been thoroughly tested for ! + ! energy, water, and CO2 conservation. ! + !---------------------------------------------------------------------------------------! + NL%INTEGRATION_SCHEME = 1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! NSUB_EULER -- The number of sub-steps in case we are running forward Euler. The ! + ! maximum time step will then be DTLSM / NSUB_EULER. This is needed to ! + ! make sure we don't take too long steps with Euler, as we cannot ! + ! estimate errors using first-order schemes. This number is ignored ! + ! except when INTEGRATION_SCHEME is 0. ! + !---------------------------------------------------------------------------------------! + NL%NSUB_EULER = 40 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! RK4_TOLERANCE -- This is the relative tolerance for Runge-Kutta or Heun's ! + ! integration. Currently the valid range is between 1.e-7 and 1.e-1, ! + ! but recommended values are between 1.e-4 and 1.e-2. ! + !---------------------------------------------------------------------------------------! + NL%RK4_TOLERANCE = 0.01 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IBRANCH_THERMO -- This determines whether branches should be included in the ! + ! vegetation thermodynamics and radiation or not. ! + ! 0. (Legacy) No branches in energy/radiation. ! + ! 1. (ED-2.2 default) Branches are accounted in the energy and ! + ! radiation. Branchwood and leaf are treated separately in the ! + ! canopy radiation scheme, but solved as a single pool in the ! + ! biophysics integration. ! + ! 2. (Beta) Similar to 1, but branches are treated as separate pools ! + ! in the biophysics (thus doubling the number of prognostic ! + ! variables). ! + !---------------------------------------------------------------------------------------! + NL%IBRANCH_THERMO = 1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IPHYSIOL -- This variable will determine the functional form that will control how ! + ! the various parameters will vary with temperature, and how the CO2 ! + ! compensation point for gross photosynthesis (Gamma*) will be found. ! + ! Options are: ! + ! ! + ! 
0 -- (Legacy) Original ED-2.1, we use the "Arrhenius" function as in Foley et al.           !
+   !      (1996, Global Biogeochem. Cycles) and Moorcroft et al. (2001, Ecol. Monogr.).     !
+   !      Gamma* is found using the parameters for tau as in Foley et al. (1996).  This     !
+   !      option causes optimal temperature to be quite low, even in the tropics (Rogers    !
+   !      et al. 2017, New Phytol.).                                                        !
+   ! 1 -- (Beta) Similar to case 0, but we use Jmax to determine the RubP-regeneration      !
+   !      (aka light) limitation case, account for the triose phosphate utilisation         !
+   !      limitation case (C3), and use the Michaelis-Menten coefficients along with other  !
+   !      parameters from von Caemmerer (2000, Biochemical models of leaf photosynthesis).  !
+   ! 2 -- (ED-2.2 default) Collatz et al. (1991, Agric. For. Meteorol.).  We use the power  !
+   !      (Q10) equations, with Collatz et al. (1991) parameters for compensation point,    !
+   !      and the Michaelis-Menten coefficients.  The corrections for high and low          !
+   !      temperatures are the same as in Moorcroft et al. (2001).                          !
+   ! 3 -- (Beta) Similar to case 2, but we use Jmax to determine the RubP-regeneration      !
+   !      (aka light) limitation case, account for the triose phosphate utilisation         !
+   !      limitation case (C3), and use the Michaelis-Menten coefficients along with other  !
+   !      parameters from von Caemmerer (2000).                                             !
+   ! 4 -- (Beta) Use "Arrhenius" function as in Harley et al. (1991).  This must be run     !
+   !      with ISTOMATA_SCHEME = 1                                                          !
+   !---------------------------------------------------------------------------------------!
+   NL%IPHYSIOL = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! IALLOM -- Which allometry to use (this mostly affects tropical PFTs; temperate PFTs    !
+   !           will use the new root allometry and the maximum crown area unless IALLOM is  !
+   !           set to 0).                                                                   !
+   !           0. (Legacy) Original ED-1.0, included for back compatibility.                !
+   !           1. (Legacy)                                                                  !
+   !              a. The coefficients for structural biomass are set so the total AGB       !
+   !                 is similar to Baker et al. (2004, Glob. Change Biol.), equation 2.     !
+   !              b. Experimental root depth that makes canopy trees have root depths       !
+   !                 of 5 m and grasses/seedlings at 0.5 m have root depths of 0.5 m.       !
+   !              c. Crown area defined as in Poorter et al. (2006, Ecology), imposing      !
+   !                 maximum crown area.                                                    !
+   !           2. (ED-2.2 default) Similar to 1, but with a few extra changes.              !
+   !              a. Height -> DBH allometry as in Poorter et al. (2006)                    !
+   !              b. Balive is retuned, using a few leaf biomass allometric equations for   !
+   !                 a few genera in Costa Rica.  References:                               !
+   !                 Cole and Ewel (2006, Forest Ecol. Manag.), and Calvo-Alvarado et al.   !
+   !                 (2008, Tree Physiol.).                                                 !
+   !           3. (Beta) Updated allometry for tropical PFTs based on data from             !
+   !              Sustainable Landscapes Brazil (height and crown area), Chave et al.       !
+   !              (2014, Glob. Change Biol.) (biomass) and the BAAD database, Falster et    !
+   !              al. (2015, Ecology) (leaf area).  Both leaf and structural biomass take   !
+   !              DBH and height as dependent variables, and DBH-Height takes a simpler     !
+   !              log-linear form fitted using SMA so it can be inverted (useful for        !
+   !              airborne lidar initialisation).                                           !
+   !---------------------------------------------------------------------------------------!
+   NL%IALLOM = 3
+   !---------------------------------------------------------------------------------------!
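+   !---------------------------------------------------------------------------------------!
+   !    Illustrative note (coefficients a and b are hypothetical): the log-linear           !
+   ! DBH-height fit used by IALLOM = 3 has the form                                         !
+   !                                                                                        !
+   !    ln(H) = a + b * ln(DBH),                                                            !
+   !                                                                                        !
+   ! which can be inverted exactly as DBH = exp( (ln(H) - a) / b ), so canopy heights       !
+   ! (e.g. from airborne lidar) can be mapped directly to DBH when initialising the model.  !
+   !---------------------------------------------------------------------------------------!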
+ + + + !---------------------------------------------------------------------------------------! + ! ECONOMICS_SCHEME -- Temporary variable for testing the relationship amongst traits in ! + ! the tropics. ! + ! 0. (ED-2.2 default) ED-2.1 standard, based on Reich et al. (1997, ! + ! PNAS), Moorcroft et al. (2001, Ecol. Monogr.) and some updates ! + ! following Kim et al. (2012, Glob. Change Biol.). ! + ! 1. When available, trait relationships were derived from more ! + ! up-to-date data sets, including the TRY database (Kattge et ! + ! al. 2011, Glob. Change Biol.), NGEE-Tropics (Norby et al. ! + ! 2017, New Phytol.), RAINFOR (Bahar et al. 2017, New Phytol.), ! + ! and GLOPNET (Wright et al. 2004, Nature). Check ! + ! ed_params.f90 for details. ! + !---------------------------------------------------------------------------------------! + NL%ECONOMICS_SCHEME = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! IGRASS -- This controls the dynamics and growth calculation for grasses. ! + ! ! + ! 0. (Legacy) Original ED-1/ED-2.0 method, grasses are miniature trees, grasses have ! + ! heartwood biomass (albeit small), and growth happens monthly. ! + ! 1. (ED-2.2 default). Heartwood biomass is always 0, height is a function of leaf ! + ! biomass , and growth happens daily. With this option, grasses are evergreen ! + ! regardless of IPHEN_SCHEME. ! + !---------------------------------------------------------------------------------------! + NL%IGRASS = 1 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IPHEN_SCHEME -- It controls the phenology scheme. Even within each scheme, the ! + ! actual phenology will be different depending on the PFT. ! + ! ! + ! -1: (ED-2.2 default for evergreen tropical). ! + ! grasses - evergreen; ! + ! tropical - evergreen; ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous (Botta et al.); ! + ! ! + ! 0: (Deprecated). ! + ! grasses - drought-deciduous (old scheme); ! + ! tropical - drought-deciduous (old scheme); ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous; ! + ! ! + ! 1: (ED-2.2 default for prescribed phenology; deprecated for tropical PFTs). ! + ! phenology is prescribed for cold-deciduous broadleaf trees. ! + ! ! + ! 2: (ED-2.2 default). ! + ! grasses - drought-deciduous (new scheme); ! + ! tropical - drought-deciduous (new scheme); ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous; ! + ! ! + ! 3: (Beta). ! + ! grasses - drought-deciduous (new scheme); ! + ! tropical - drought-deciduous (light phenology); ! + ! conifers - evergreen; ! + ! hardwoods - cold-deciduous; ! + ! ! + ! Old scheme: plants shed their leaves once instantaneous amount of available water ! + ! becomes less than a critical value. ! + ! New scheme: plants shed their leaves once a 10-day running average of available ! + ! water becomes less than a critical value. ! + !---------------------------------------------------------------------------------------! + NL%IPHEN_SCHEME = @PHENOL_SCHEME@ + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! Parameters that control the phenology response to radiation, used only when ! + ! 
IPHEN_SCHEME = 3. ! + ! ! + ! RADINT -- Intercept ! + ! RADSLP -- Slope. ! + !---------------------------------------------------------------------------------------! + NL%RADINT = -11.3868 + NL%RADSLP = 0.0824 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! REPRO_SCHEME -- This controls plant reproduction and dispersal. ! + ! 0. Reproduction off. Useful for very short runs only. ! + ! 1. (Legacy) Original reproduction scheme. Seeds are exchanged ! + ! between patches belonging to the same site, but they can't go ! + ! outside their original site. ! + ! 2. (ED-2.2 default) Similar to 1, but seeds are exchanged between ! + ! patches belonging to the same polygon, even if they are in ! + ! different sites. They can't go outside their original polygon, ! + ! though. This is the same as option 1 if there is only one site ! + ! per polygon. ! + ! 3. (Beta) Similar to 2, but allocation to reproduction may be set as ! + ! a function of height using an asymptotic curve. ! + !---------------------------------------------------------------------------------------! + NL%REPRO_SCHEME = 2 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! LAPSE_SCHEME -- This specifies the met lapse rate scheme: ! + ! 0. (ED-2.2 default) No lapse rates ! + ! 1. (Beta) Phenomenological, global ! + ! 2. (Non-functional) Phenomenological, local ! + ! 3. (Non-functional) Mechanistic ! + !---------------------------------------------------------------------------------------! + NL%LAPSE_SCHEME = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! CROWN_MOD -- Specifies how tree crowns are represent in the canopy radiation model, ! + ! and in the turbulence scheme depending on ICANTURB. ! + ! 0. (ED-2.2 default) Flat-top, infinitesimally thin crowns. ! + ! 1. (Under development) Finite radius mixing model (Dietze). ! + ! This is only implemented for direct radiation with ICANRAD=0. ! + !---------------------------------------------------------------------------------------! + NL%CROWN_MOD = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the canopy radiation solver. ! + ! ! + ! ICANRAD -- Specifies how canopy radiation is solved. This variable sets both ! + ! shortwave and longwave. ! + ! 0. (Deprecated) original two-stream model from Medvigy (2006), with ! + ! the possibility to apply finite crown area to direct shortwave ! + ! radiation. This option is no longer supported and may be removed ! + ! in future releases. ! + ! 1. (Deprecated) Multiple scattering model from Zhao and Qualls (2005, ! + ! 2006, Water Resour. Res.). This option is no longer supported and ! + ! may be removed in future releases. ! + ! 2. (ED-2.2 default) Updated two-stream model from Liou (2002, An ! + ! introduction to atmospheric radiation). ! + ! IHRZRAD -- Specifies how horizontal canopy radiation is solved. ! + ! 0. (ED-2.2 default) No horizontal patch shading. All patches ! + ! receive the same amount of light at the top. ! + ! 1. 
(Beta) A realized map of the plant community is built by randomly ! + ! assigning gaps associated with gaps (number of gaps proportional ! + ! to the patch area), and populating them with individuals, ! + ! respecting the cohort distribution in each patch. The crown ! + ! closure index is calculated for the entire landscape and used ! + ! to change the amount of direct light reaching the top of the ! + ! canopy. Patches are then split into 1-3 patches based on the ! + ! light condition, so expect simulations to be slower. (Morton et ! + ! al., in review). ! + ! 2. (Beta) Similar to option 1, except that height for trees with ! + ! DBH > DBH_crit are rescaled to calculate CCI. ! + ! 3. (Beta) This creates patches following IHRZRAD = 1, but then ! + ! assumes that the light scaling factor is 1 for all patches. This ! + ! is only useful to isolate the effect of heterogeneous ! + ! illumination from the patch count. ! + ! 4. (Beta) Similar to option 3, but it applies the same method as ! + ! IHRZRAD=2. ! + !---------------------------------------------------------------------------------------! + NL%ICANRAD = 2 + NL%IHRZRAD = 0 + !---------------------------------------------------------------------------------------! + + + !---------------------------------------------------------------------------------------! + ! The variables below will be eventually removed from ED2IN, use XML initialisation ! + ! file to set these parameters instead. ! + ! LTRANS_VIS -- Leaf transmittance for tropical plants - Visible/PAR ! + ! LTRANS_NIR -- Leaf transmittance for tropical plants - Near Infrared ! + ! LREFLECT_VIS -- Leaf reflectance for tropical plants - Visible/PAR ! + ! LREFLECT_NIR -- Leaf reflectance for tropical plants - Near Infrared ! + ! ORIENT_TREE -- Leaf orientation factor for tropical trees. Extremes are: ! + ! -1. All leaves are oriented in the vertical ! + ! 0. Leaf orientation is perfectly random ! + ! 1. All leaves are oriented in the horizontal ! + ! In practice, acceptable values range from -0.4 and 0.6 ! + ! (Goudriaan, 1977). ! + ! ORIENT_GRASS -- Leaf orientation factor for tropical grasses. Extremes are: ! + ! -1. All leaves are oriented in the vertical ! + ! 0. Leaf orientation is perfectly random ! + ! 1. All leaves are oriented in the horizontal ! + ! In practice, acceptable values range from -0.4 and 0.6 ! + ! (Goudriaan, 1977). ! + ! CLUMP_TREE -- Clumping factor for tropical trees. Extremes are: ! + ! lim -> 0. Black hole (0 itself is unacceptable) ! + ! 1. Homogeneously spread over the layer (i.e., no clumping) ! + ! CLUMP_GRASS -- Clumping factor for tropical grasses. Extremes are: ! + ! lim -> 0. Black hole (0 itself is unacceptable) ! + ! 1. Homogeneously spread over the layer (i.e., no clumping) ! + !---------------------------------------------------------------------------------------! + NL%LTRANS_VIS = 0.05 + NL%LTRANS_NIR = 0.2 + NL%LREFLECT_VIS = 0.1 + NL%LREFLECT_NIR = 0.4 + NL%ORIENT_TREE = 0.1 + NL%ORIENT_GRASS = 0.0 + NL%CLUMP_TREE = 0.8 + NL%CLUMP_GRASS = 1.0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IGOUTPUT -- In case IHRZRAD is not zero, should the model write the patch table and ! + ! gap realisation files? (0 -- no; 1 -- yes). Note these files are still ! + ! in text files so they may take considerable disk space. ! + ! GFILOUT -- Prefix for the output patch table/gap files. ! 
+   !---------------------------------------------------------------------------------------!
+   NL%IGOUTPUT = 0
+   NL%GFILOUT  = '/mypath/generic-prefix'
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! DECOMP_SCHEME -- This specifies the soil Carbon (decomposition) model.                 !
+   !                                                                                        !
+   ! 0 - (Deprecated) ED-2.0 default.  Exponential with low-temperature limitation only.    !
+   !     This option is known to cause excessive accumulation of soil carbon in the         !
+   !     tropics.                                                                           !
+   ! 1 - (Beta) Lloyd and Taylor (1994, Funct. Ecol.) model.  Additional parameters must    !
+   !     be set in an XML file.                                                             !
+   ! 2 - (ED-2.2 default) Similar to ED-1.0 and the CENTURY model; heterotrophic            !
+   !     respiration reaches a maximum at around 38C (using the default parameters), then   !
+   !     quickly falls to zero at around 50C.  It applies a similar function for soil       !
+   !     moisture, which allows higher decomposition rates when it is close to the          !
+   !     optimum, plummeting when it is almost saturated.                                   !
+   ! 3 - (Beta) Similar to option 0, but it uses an empirical moisture limit equation       !
+   !     from Moyano et al. (2012), Biogeosciences.                                         !
+   ! 4 - (Beta) Similar to option 1, but it uses an empirical moisture limit equation       !
+   !     from Moyano et al. (2012), Biogeosciences.                                         !
+   ! 5 - (Beta) Based on the Bolker et al. (1998, Ecol. Appl.) CENTURY model.  Five         !
+   !     necromass pools (litter aka fast, structural, microbial, humified aka slow, and    !
+   !     passive).  Temperature and moisture functions are the same as 2.                   !
+   !---------------------------------------------------------------------------------------!
+   NL%DECOMP_SCHEME = 2
+   !---------------------------------------------------------------------------------------!
+
+
+
+   !---------------------------------------------------------------------------------------!
+   ! H2O_PLANT_LIM -- this determines whether plant photosynthesis can be limited by        !
+   !                  soil moisture, the FSW, defined as FSW = Supply / (Demand + Supply).  !
+   !                                                                                        !
+   ! Demand is always the transpiration rate in case soil moisture is not limiting (the     !
+   ! psi_0 term times LAI).  The supply is determined by                                    !
+   !                                                                                        !
+   !    Kw * nplant * Broot * Available_Water,                                              !
+   !                                                                                        !
+   ! and the definition of available water changes depending on H2O_PLANT_LIM:              !
+   ! 0. Force FSW = 1 (effectively available water is infinity).                            !
+   ! 1. (Legacy) Available water is the total soil water above wilting point, integrated    !
+   !    across all layers within the rooting zone.                                          !
+   ! 2. (ED-2.2 default) Available water is the soil water at field capacity minus wilt-    !
+   !    ing point, scaled by the so-called wilting factor:                                  !
+   !                                                                                        !
+   !    (psi(k) - (H - z(k)) - psi_wp) / (psi_fc - psi_wp)                                  !
+   !                                                                                        !
+   !    where psi is the matric potential at layer k, z is the layer depth, H is the        !
+   !    crown height and psi_fc and psi_wp are the matric potentials at field capacity      !
+   !    and wilting point, respectively.                                                    !
+   ! 3. (Beta) Use leaf water potential to modify fsw following Powell et al. (2017).       !
+   !    This setting requires PLANT_HYDRO_SCHEME to be non-zero.                            !
+   ! 4. (Beta) Use leaf water potential to modify the optimization-based stomatal model     !
+   !    following Xu et al. (2016).  This setting requires PLANT_HYDRO_SCHEME to be         !
+   !    non-zero and ISTOMATA_SCHEME to be set to 1.                                        !
+   ! 5. (Beta) Similar to 2, but the water supply directly affects gsw, as opposed to       !
+   !    fsw.  This is done by making D0 a function of soil moisture.  Note that this        !
+   !    
still uses Kw but Kw must be significantly lower, at least for tropical trees ! + ! (1/15 - 1/10 of the original). This works only with PLANT_HYDRO_SCHEME set to 0. ! + !---------------------------------------------------------------------------------------! + NL%H2O_PLANT_LIM = 2 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! PLANT_HYDRO_SCHEME -- Flag to set dynamic plant hydraulics. ! + ! 0 - (ED-2.2 default) No dynamic hydraulics (leaf and wood are always saturated). ! + ! 1 - (ED-2.2 alternative) Track plant hydrodynamics. Model framework from X16, using ! + ! parameters from C16. ! + ! 2 - (Deprecated) Track plant hydrodynamics. Model framework from X16, using ! + ! parameters from X16. ! + ! ! + ! References: ! + ! ! + ! Christoffersen BO, Gloor M, Fauset S, Fyllas NM, Galbraith DR, Baker TR, Kruijt B, ! + ! Rowland L, Fisher RA, Binks OJ et al. 2016. Linking hydraulic traits to tropical ! + ! forest function in a size- structured and trait-driven model (TFS v.1-Hydro). ! + ! Geosci. Model Dev., 9: 4227-4255. doi:10.5194/gmd- 9-4227-2016 (C16). ! + ! ! + ! Xu X, Medvigy D, Powers JS, Becknell JM , Guan K. 2016. Diversity in plant hydraulic ! + ! traits explains seasonal and inter-annual variations of vegetation dynamics in ! + ! seasonally dry tropical forests. New Phytol., 212: 80-95. doi:10.1111/nph.14009 ! + ! (X16). ! + !---------------------------------------------------------------------------------------! + NL%PLANT_HYDRO_SCHEME = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ISTRUCT_GROWTH_SCHEME -- Different methods to perform structural growth. ! + ! 0. (ED-2.2 default) Use all bstorage allocation to growth to increase heartwood. ! + ! This option will be eventually deprecated, as it creates problems for drought- ! + ! deciduous plants and for allometric settings that properly calculate sapwood ! + ! (IALLOM = 3). ! + ! 1. (ED-2.2 alternative) Correct the fraction of storage allocated to heartwood, so ! + ! storage has sufficient carbon to increment all living tissues in the upcoming ! + ! month. This option will eventually become the default. ! + !---------------------------------------------------------------------------------------! + NL%ISTRUCT_GROWTH_SCHEME = 1 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! ISTOMATA_SCHEME -- Which stomatal conductance model to use. ! + ! 0. (ED-2.2 default) Leuning (L95) model. ! + ! 1. (Beta) Katul's optimization-based model (see X16) ! + ! ! + ! References: ! + ! ! + ! Leuning R. 1995. A critical appraisal of a combined stomatal-photosynthesis model for ! + ! C3 plants. Plant Cell Environ., 18: 339-355. ! + ! doi:10.1111/j.1365-3040.1995.tb00370.x (L95). ! + ! ! + ! Xu X, Medvigy D, Powers JS, Becknell JM , Guan K. 2016. Diversity in plant hydraulic ! + ! traits explains seasonal and inter-annual variations of vegetation dynamics in ! + ! seasonally dry tropical forests. New Phytol., 212: 80-95. doi:10.1111/nph.14009 ! + ! (X16). ! + !---------------------------------------------------------------------------------------! 
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! TRAIT_PLASTICITY_SCHEME -- Whether/How plant traits vary with local environment. !
+ ! !
+ ! 0 - (ED-2.2 default) No trait plasticity. Trait parameters for each PFT are fixed. !
+ ! 1 - (Beta) Vm0, SLA and leaf turnover rate change annually with cohort light !
+ ! environment. The parametrisation is based on Lloyd et al. (2010, !
+ ! Biogeosciences), with additional data from Keenan and Niinemets (2016, Nat. !
+ ! Plants) and Russo and Kitajima (2016, Tropical Tree Physiology book). For each !
+ ! cohort, Vm0 and leaf turnover rates decrease, and SLA increases with shading. !
+ ! The magnitude of changes is calculated using overtopping LAI and corresponding !
+ ! extinction factors for each trait. This is not applicable to grass PFTs. !
+ ! (Xu et al. in prep.) !
+ ! 2 - (Beta) Similar to 1, but traits are updated monthly. !
+ ! -1 - (Beta) Similar to 1, but use height to adjust SLA. !
+ ! -2 - (Beta) Similar to 2, but use height to adjust SLA. !
+ !---------------------------------------------------------------------------------------!
+ NL%TRAIT_PLASTICITY_SCHEME = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! IDDMORT_SCHEME -- This flag determines whether storage should be accounted for in !
+ ! the carbon balance. !
+ ! 0 -- (Legacy) Carbon balance is done in terms of fluxes only. !
+ ! 1 -- (ED-2.2 default) Carbon balance is offset by the storage !
+ ! pool. Plants will be in negative carbon balance only when !
+ ! they run out of storage and are still losing more carbon than !
+ ! gaining. !
+ ! !
+ ! CBR_SCHEME -- This flag determines which carbon stress scheme is used: !
+ ! 0 -- (ED-2.2 default) Single stress. CBR = cb/cb_mlmax !
+ ! cb_mlmax is the carbon balance in full sun and no moisture !
+ ! limitation !
+ ! 1 -- (Legacy) Co-limitation from light and moisture (Longo et al. !
+ ! 2018, New Phytol.). CBR_LIGHT = cb/cb_lightmax and !
+ ! CBR_MOIST = cb/cb_moistmax. CBR_LIGHT and CBR_MOIST are then !
+ ! weighted according to DDMORT_CONST (below) !
+ ! 2 -- (Beta) Liebig style, i.e. limitation from either light or !
+ ! moisture depending on which is lower at a given point in time !
+ ! CBR = cb/max(cb_lightmax, cb_moistmax) !
+ ! !
+ ! DDMORT_CONST -- CBR_SCHEME = 1 only !
+ ! This constant (k) determines the relative contribution of light !
+ ! and soil moisture to the density-dependent mortality rate. Values !
+ ! range from 0 (soil moisture only) to 1 (light only, which is the !
+ ! ED-1.0 and ED-2.0 default). !
+ ! !
+ ! mu_DD = mort1 / (1 + exp(mort2 * CBR)) !
+ ! !
+ ! 1 / (CBR - CBR_SS) = DDMORT_CONST / (CBR_LIGHT - CBR_SS) !
+ ! + (1 - DDMORT_CONST) / (CBR_MOIST - CBR_SS) !
+ ! !
+ !---------------------------------------------------------------------------------------!
+ NL%IDDMORT_SCHEME = 1
+ NL%CBR_SCHEME = 0
+ NL%DDMORT_CONST = 0.8
+ !---------------------------------------------------------------------------------------!
+
+
+ !---------------------------------------------------------------------------------------!
+ ! These variables will be eventually removed from ED2IN, use the XML initialisation !
+ ! file to set these parameters instead. The following variables are factors that !
+ ! control photosynthesis and respiration. Note that some of them are relative values, !
+ ! whereas others are absolute. !
+ ! !
+ ! VMFACT_C3 -- Factor multiplying the default Vm0 for C3 plants (1.0 = default). !
+ ! VMFACT_C4 -- Factor multiplying the default Vm0 for C4 plants (1.0 = default). !
+ ! MPHOTO_TRC3 -- Stomatal slope (M) for tropical C3 plants !
+ ! MPHOTO_TEC3 -- Stomatal slope (M) for conifers and temperate C3 plants !
+ ! MPHOTO_C4 -- Stomatal slope (M) for C4 plants. !
+ ! BPHOTO_BLC3 -- cuticular conductance for broadleaf C3 plants [umol/m2/s] !
+ ! BPHOTO_NLC3 -- cuticular conductance for needleleaf C3 plants [umol/m2/s] !
+ ! BPHOTO_C4 -- cuticular conductance for C4 plants [umol/m2/s] !
+ ! KW_GRASS -- Water conductance for grasses, in m2/yr/kgC_root. This is used only !
+ ! when H2O_PLANT_LIM is not 0. !
+ ! KW_TREE -- Water conductance for trees, in m2/yr/kgC_root. This is used only !
+ ! when H2O_PLANT_LIM is not 0. !
+ ! GAMMA_C3 -- The dark respiration factor (gamma) for C3 plants. In case this !
+ ! number is set to 0, find the factor based on Atkin et al. (2015). !
+ ! GAMMA_C4 -- The dark respiration factor (gamma) for C4 plants. In case this !
+ ! number is set to 0, find the factor based on Atkin et al. (2015). !
+ ! (assumed to be twice as large as C3 grasses, as Atkin et al. 2015 !
+ ! did not estimate Rd0 for C4 grasses). !
+ ! D0_GRASS -- The transpiration control in gsw (D0) for ALL grasses. !
+ ! D0_TREE -- The transpiration control in gsw (D0) for ALL trees. !
+ ! ALPHA_C3 -- Quantum yield of ALL C3 plants. This is only applied when !
+ ! QUANTUM_EFFICIENCY_T = 0. !
+ ! ALPHA_C4 -- Quantum yield of C4 plants. This is always applied. !
+ ! KLOWCO2IN -- The coefficient that controls the PEP carboxylase limited rate of !
+ ! carboxylation for C4 plants. !
+ ! RRFFACT -- Factor multiplying the root respiration factor for ALL PFTs. !
+ ! (1.0 = default). !
+ ! GROWTHRESP -- The actual growth respiration factor (C3/C4 tropical PFTs only). !
+ ! (1.0 = default). !
+ ! LWIDTH_GRASS -- Leaf width for grasses, in metres. This controls the leaf boundary !
+ ! layer conductance (gbh and gbw). !
+ ! LWIDTH_BLTREE -- Leaf width for trees, in metres. This controls the leaf boundary !
+ ! layer conductance (gbh and gbw). This is applied to broadleaf trees !
+ ! only. !
+ ! LWIDTH_NLTREE -- Leaf width for trees, in metres. This controls the leaf boundary !
+ ! layer conductance (gbh and gbw). This is applied to conifer trees !
+ ! only. !
+ ! Q10_C3 -- Q10 factor for C3 plants (used only if IPHYSIOL is set to 2 or 3). !
+ ! Q10_C4 -- Q10 factor for C4 plants (used only if IPHYSIOL is set to 2 or 3). !
+ !---------------------------------------------------------------------------------------!
+ NL%VMFACT_C3 = 1.00
+ NL%VMFACT_C4 = 1.00
+ NL%MPHOTO_TRC3 = 9.0
+ NL%MPHOTO_TEC3 = 7.2
+ NL%MPHOTO_C4 = 5.2
+ NL%BPHOTO_BLC3 = 10000.
+ NL%BPHOTO_NLC3 = 1000.
+ NL%BPHOTO_C4 = 10000.
+ NL%KW_GRASS = 900.
+ NL%KW_TREE = 600.
+ NL%GAMMA_C3 = 0.0145
+ NL%GAMMA_C4 = 0.035
+ NL%D0_GRASS = 0.016
+ NL%D0_TREE = 0.016
+ NL%ALPHA_C3 = 0.08
+ NL%ALPHA_C4 = 0.055
+ NL%KLOWCO2IN = 17949.
+ NL%RRFFACT = 1.000
+ NL%GROWTHRESP = 0.333
+ NL%LWIDTH_GRASS = 0.05
+ NL%LWIDTH_BLTREE = 0.10
+ NL%LWIDTH_NLTREE = 0.05
+ NL%Q10_C3 = 2.4
+ NL%Q10_C4 = 2.4
+ !---------------------------------------------------------------------------------------!
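Since the factors above are slated to move out of ED2IN, per-PFT values are normally supplied through the XML file that NL%IEDCNFGF points to. A minimal R sketch of writing such an override file follows; the `<pft>`/`<num>`/`<Vm0>` tag layout is an assumption and should be checked against the ED2 XML schema:

```
# Write a minimal ED2-style parameter XML that overrides Vm0 for one PFT.
# Tag names are assumptions; verify against the ED2 config schema before use.
writeLines(c(
  "<?xml version=\"1.0\"?>",
  "<config>",
  "  <pft>",
  "    <num>9</num>",      # PFT number to override
  "    <Vm0>18.75</Vm0>",  # sets Vm0 directly rather than scaling via VMFACT_*
  "  </pft>",
  "</config>"
), "config.xml")
```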
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! THETACRIT -- Leaf drought phenology threshold. The sign matters here: !
+ ! >= 0. -- This is the relative soil moisture above the wilting point !
+ ! below which the drought-deciduous plants will start shedding !
+ ! their leaves !
+ ! < 0. -- This is the soil potential in MPa below which the drought- !
+ ! -deciduous plants will start shedding their leaves. The wilt- !
+ ! ing point is by definition -1.5MPa, so make sure that the value !
+ ! is above -1.5. !
+ !---------------------------------------------------------------------------------------!
+ NL%THETACRIT = -1.20
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! QUANTUM_EFFICIENCY_T -- Which quantum yield model should be used for C3 plants !
+ ! 0. (ED-2.2 default) Quantum efficiency is constant. !
+ ! 1. (Beta) Quantum efficiency varies with temperature !
+ ! following Ehleringer (1978, Oecologia) polynomial fit. !
+ !---------------------------------------------------------------------------------------!
+ NL%QUANTUM_EFFICIENCY_T = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! N_PLANT_LIM -- This controls whether plant photosynthesis can be limited by nitrogen. !
+ ! 0. No limitation !
+ ! 1. Activate nitrogen limitation model. As of ED-2.2, this option has !
+ ! not been thoroughly tested in the tropics. !
+ !---------------------------------------------------------------------------------------!
+ NL%N_PLANT_LIM = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! N_DECOMP_LIM -- This controls whether decomposition can be limited by nitrogen. !
+ ! 0. No limitation !
+ ! 1. Activate nitrogen limitation model. As of ED-2.2, this option has !
+ ! not been thoroughly tested in the tropics. !
+ !---------------------------------------------------------------------------------------!
+ NL%N_DECOMP_LIM = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! The following parameters adjust the fire disturbance in the model. !
+ ! INCLUDE_FIRE -- Which threshold to use for fires. !
+ ! 0. No fires; !
+ ! 1. (deprecated) Fire will be triggered with enough fuel (assumed !
+ ! to be total above-ground biomass) and integrated ground water !
+ ! depth less than a threshold. Based on ED-1, the threshold !
+ ! assumes that the soil is 1 m, so deeper soils will need to be !
+ ! much drier to allow fires to happen. !
+ ! 2. (ED-2.2 default) Fire will be triggered when there is enough biomass !
+ ! and the total soil water at the top 50 cm falls below a threshold. !
+ ! 3. (Under development) This will eventually become SPITFIRE and/or !
+ ! HESFIRE. Currently this is similar to 2, except that fuel is !
+ ! defined as above-ground litter and coarse woody debris, !
+ ! grasses, and trees shorter than 2 m. Ignitions are currently !
+ ! restricted to areas with human presence (i.e. any non-natural !
+ ! patch). !
+ ! FIRE_PARAMETER -- If fire happens, this will control the intensity of the disturbance !
+ ! given the amount of fuel. !
+ ! SM_FIRE -- This is used only when INCLUDE_FIRE = 2 or 3, and it has different !
+ ! meanings. The sign here matters. !
+ ! When INCLUDE_FIRE = 2: !
+ ! >= 0. - Minimum relative soil moisture above dry air of the top !
+ ! soil layers that will prevent fires from happening. !
+ ! < 0. - Minimum mean soil moisture potential in MPa of the top !
+ ! soil layers that will prevent fires from happening. Although !
+ ! this variable can be as negative as -3.1 MPa (residual !
+ ! soil water), it is recommended that SM_FIRE > -1.5 MPa !
+ ! (wilting point), otherwise fires may never occur. !
+ !---------------------------------------------------------------------------------------!
+ NL%INCLUDE_FIRE = 0
+ NL%FIRE_PARAMETER = 0.5
+ NL%SM_FIRE = -1.4
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! IANTH_DISTURB -- This flag controls whether to include anthropogenic disturbances !
+ ! such as land clearing, abandonment, and logging. !
+ ! 0. No anthropogenic disturbance. !
+ ! 1. Use anthropogenic disturbance dataset (ED-2.2 default when !
+ ! anthropogenic disturbance is sought). !
+ ! 2. Site-specific forest plantation or selective logging cycle. !
+ ! (Longo et al., in prep.) (Beta) !
+ ! !
+ ! The following variables are used only when IANTH_DISTURB is 2. !
+ ! !
+ ! SL_SCALE -- This flag sets whether the simulation scale is local or !
+ ! landscape. This controls the recurrence of logging. !
+ ! 0. Local. The simulation represents one logging unit. Apply !
+ ! logging only once every SL_NYRS !
+ ! 1. Landscape. The simulation represents a landscape. Logging !
+ ! occurs every year but it is restricted to patches with age !
+ ! greater than or equal to SL_NYRS !
+ ! SL_YR_FIRST -- The first year to apply logging. In case IANTH_DISTURB is 2 it !
+ ! must be a simulation year (i.e. between IYEARA and IYEARZ). !
+ ! SL_NYRS -- This variable defines the logging cycle, in years (see variable !
+ ! SL_SCALE above) !
+ ! SL_PFT -- PFTs that can be harvested. !
+ ! SL_PROB_HARVEST -- Logging intensity (one value for each PFT provided in SL_PFT). !
+ ! Values should be between 0.0 and 1.0, with 0 meaning no !
+ ! removal, and 1 removal of all trees needed to meet demands. !
+ ! SL_MINDBH_HARVEST -- Minimum DBH for logging (one value for each PFT provided in !
+ ! SL_PFT). !
+ ! SL_BIOMASS_HARVEST -- Target biomass to be harvested in each cycle, in kgC/m2. If !
+ ! zero, then all trees that meet the minimum DBH and minimum !
+ ! patch age will be logged. In case you don't want logging to !
+ ! occur, don't set this value to zero! Instead, set IANTH_DISTURB !
+ ! to zero. !
+ ! !
+ ! The following variables are used when IANTH_DISTURB is 1 or 2. !
+ ! !
+ ! SL_SKID_REL_AREA -- area damaged by skid trails (relative to felled area). !
+ ! SL_SKID_S_GTHARV -- survivorship of trees with DBH > MINDBH in skid trails. !
+ ! SL_SKID_S_LTHARV -- survivorship of trees with DBH < MINDBH in skid trails. !
+ ! SL_FELLING_S_LTHARV -- survivorship of trees with DBH < MINDBH in felling gaps. !
+ ! !
+ ! Cropland variables, used when IANTH_DISTURB is 1 or 2. !
+ ! !
+ ! CL_FSEEDS_HARVEST -- fraction of seeds that is harvested. !
+ ! CL_FSTORAGE_HARVEST -- fraction of non-structural carbon that is harvested. !
+ ! CL_FLEAF_HARVEST -- fraction of leaves that is harvested in croplands. !
+ !---------------------------------------------------------------------------------------!
+ NL%IANTH_DISTURB = 0
+ NL%SL_SCALE = 0
+ NL%SL_YR_FIRST = 1992
+ NL%SL_NYRS = 50
+ NL%SL_PFT = 2,3,4
+ NL%SL_PROB_HARVEST = 1.0,1.0,1.0
+ NL%SL_MINDBH_HARVEST = 50.,50.,50.
+ NL%SL_BIOMASS_HARVEST = 0
+ NL%SL_SKID_REL_AREA = 1
+ NL%SL_SKID_S_GTHARV = 1
+ NL%SL_SKID_S_LTHARV = 0.6
+ NL%SL_FELLING_S_LTHARV = 0.35
+ NL%CL_FSEEDS_HARVEST = 0.75
+ NL%CL_FSTORAGE_HARVEST = 0.00
+ NL%CL_FLEAF_HARVEST = 0.00
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ICANTURB -- This flag controls the canopy roughness. !
+ ! !
+ ! 0. (Legacy) Based on Leuning et al. (Oct 1995, Plant Cell Environ.) and LEAF-3 !
+ ! (Walko et al. 2000, J. Appl. Meteorol.). Roughness and displacement height are !
+ ! found using simple relations with vegetation height; wind is computed using the !
+ ! similarity theory for the top cohort, then it is assumed that wind extinguishes !
+ ! following an exponential decay with "perceived" cumulative LAI (local LAI with !
+ ! finite crown area). !
+ ! 1. (Legacy) Similar to option 0, but the wind profile is not based on LAI, instead !
+ ! it uses the cohort height. !
+ ! 2. (ED-2.2 default) This uses the method of Massman (1997, Boundary-Layer Meteorol.) !
+ ! assuming constant drag and no sheltering factor. !
+ ! 3. (ED-2.2 alternative) This is based on Massman and Weil (1999, Boundary-Layer !
+ ! Meteorol.). Similar to 2, but with the option of varying the drag and sheltering !
+ ! within the canopy. !
+ ! 4. Similar to 0, but it finds the ground conductance following the CLM-4.5 technical !
+ ! note (Oleson et al. 2013, NCAR/TN-503+STR) (equations 5.98-5.100). !
+ !---------------------------------------------------------------------------------------!
+ NL%ICANTURB = 2
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ISFCLYRM -- Similarity theory model. The model that computes u*, T*, etc... !
+ ! 1. (Legacy) BRAMS default, based on Louis (1979, Boundary-Layer Meteorol.). It uses !
+ ! empirical relations to estimate the flux based on the bulk Richardson number. !
+ ! !
+ ! All models below use an iterative method to find z/L, and the only change !
+ ! is the functional form of the psi functions. !
+ ! !
+ ! 2. (Legacy) Oncley and Dudhia (1995) model, based on MM5. !
+ ! 3. (ED-2.2 default) Beljaars and Holtslag (1991) model. Similar to 2, but it uses an !
+ ! alternative method for the stable case that mixes more than the OD95. !
+ ! 4. (Beta) CLM-based (Oleson et al. 2013, NCAR/TN-503+STR). Similar to options 2 !
+ ! and 3, but it uses special functions to deal with very stable and very unstable !
+ ! cases. It also accounts for different roughness lengths between momentum and !
+ ! heat. !
+ !---------------------------------------------------------------------------------------!
+ NL%ISFCLYRM = 3
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! IED_GRNDVAP -- Methods to find the ground -> canopy conductance. !
+ ! 0. (ED-2.2 default) Modified Lee and Pielke (1992, J. Appl.
Meteorol.), adding ! + ! field capacity, but using beta factor without the square, like in ! + ! Noilhan and Planton (1989, Mon. Wea. Rev.). ! + ! 1. (Legacy) Test # 1 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.). ! + ! 2. (Legacy) Test # 2 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.). ! + ! 3. (Legacy) Test # 3 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.). ! + ! 4. (Legacy) Test # 4 of Mahfouf and Noilhan (1991, J. Appl. Meteorol.). ! + ! 5. (Legacy) Combination of test #1 (alpha) and test #2 (soil resistance). ! + ! In all cases the beta term is modified so it approaches zero as soil moisture goes ! + ! to dry air soil. ! + !---------------------------------------------------------------------------------------! + NL%IED_GRNDVAP = 0 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! These variables will be eventually removed from ED2IN, use XML initialisation file ! + ! to set these parameters instead. These variables are used to control the similarity ! + ! theory model. For the meaning of these parameters, check Beljaars and Holtslag ! + ! (1991, J. Appl. Meteorol.). ! + ! ! + ! GAMM -- gamma coefficient for momentum, unstable case (dimensionless) ! + ! Ignored when ISTAR = 1 ! + ! GAMH -- gamma coefficient for heat, unstable case (dimensionless) ! + ! Ignored when ISTAR = 1 ! + ! TPRANDTL -- Turbulent Prandtl number ! + ! Ignored when ISTAR = 1 ! + ! RIBMAX -- maximum bulk Richardson number. ! + ! LEAF_MAXWHC -- Maximum water that can be intercepted by leaves, in kg/m2leaf. ! + !---------------------------------------------------------------------------------------! + NL%GAMM = 13.0 + NL%GAMH = 13.0 + NL%TPRANDTL = 0.74 + NL%RIBMAX = 0.50 + NL%LEAF_MAXWHC = 0.11 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! IPERCOL -- This controls percolation and infiltration. ! + ! 0. (ED-2.2 default) Based on LEAF-3 (Walko et al. 2000, J. Appl. ! + ! Meteorol.). This assumes soil conductivity constant and for the ! + ! temporary surface water, it sheds liquid in excess of a 1:9 liquid- ! + ! -to-ice ratio through percolation. Temporary surface water exists ! + ! only if the top soil layer is at saturation. ! + ! 1. (Beta). Constant soil conductivity, and it uses the percolation ! + ! model as in Anderson (1976, NOAA technical report NWS 19). Temporary ! + ! surface water may exist after a heavy rain event, even if the soil ! + ! doesn't saturate. ! + ! 2. (Beta). Similar to 1, but soil conductivity decreases with depth even ! + ! for constant soil moisture. ! + !---------------------------------------------------------------------------------------! + NL%IPERCOL = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the plant functional types (PFTs) that will be ! + ! used in this simulation. ! + ! ! + ! INCLUDE_THESE_PFT -- which PFTs to be considered for the simulation. ! + ! PASTURE_STOCK -- which PFT should be used for pastures ! + ! (used only when IANTH_DISTURB = 1 or 2) ! + ! AGRI_STOCK -- which PFT should be used for agriculture ! + ! (used only when IANTH_DISTURB = 1 or 2) ! + ! 
PLANTATION_STOCK -- which PFT should be used for plantation ! + ! (used only when IANTH_DISTURB = 1 or 2) ! + ! ! + ! PFT table ! + !---------------------------------------------------------------------------------------! + ! 1 - C4 Grass ! + ! 2 - Tropical broadleaf, early successional ! + ! 3 - Tropical broadleaf, mid-successional ! + ! 4 - Tropical broadleaf, late successional ! + ! 5 - Temperate C3 grass ! + ! 6 - Northern North American pines ! + ! 7 - Southern North American pines ! + ! 8 - Late-successional North American conifers ! + ! 9 - Temperate broadleaf, early successional ! + ! 10 - Temperate broadleaf, mid-successional ! + ! 11 - Temperate broadleaf, late successional ! + ! 12 - (Beta) Tropical broadleaf, early successional (thick bark) ! + ! 13 - (Beta) Tropical broadleaf, mid-successional (thick bark) ! + ! 14 - (Beta) Tropical broadleaf, late successional (thick bark) ! + ! 15 - Araucaria ! + ! 16 - Tropical/subtropical C3 grass ! + ! 17 - (Beta) Lianas ! + !---------------------------------------------------------------------------------------! + NL%INCLUDE_THESE_PFT = 1,2,3,4,16 + NL%PASTURE_STOCK = 1 + NL%AGRI_STOCK = 1 + NL%PLANTATION_STOCK = 3 + !---------------------------------------------------------------------------------------! + + + + + !---------------------------------------------------------------------------------------! + ! PFT_1ST_CHECK -- What to do if the initialisation file has a PFT that is not listed ! + ! in INCLUDE_THESE_PFT (ignored if IED_INIT_MODE is -1 or 0) ! + ! 0. Stop the run ! + ! 1. Add the PFT in the INCLUDE_THESE_PFT list ! + ! 2. Ignore the cohort ! + !---------------------------------------------------------------------------------------! + NL%PFT_1ST_CHECK = 0 + !---------------------------------------------------------------------------------------! + + + + !---------------------------------------------------------------------------------------! + ! The following variables control the size of sub-polygon structures in ED-2. ! + ! IFUSION -- Control on patch/cohort fusion scheme ! + ! 0. (ED-2.2 default). This is the original ED-2 scheme. This will ! + ! be eventually superseded by IFUSION=1. ! + ! 1. (Beta) New scheme, developed to address a few issues that ! + ! become more evident when initialising ED with large (>1000) ! + ! number of patches. It uses absolute difference in light levels ! + ! to avoid fusing patches with very different canopies, and also ! + ! makes sure that remaining patches have area above ! + ! MIN_PATCH_AREA and that a high percentage of the original ! + ! landscape is retained. ! + ! ! + ! MAXSITE -- This is the strict maximum number of sites that each polygon can ! + ! contain. Currently this is used only when the user wants to run ! + ! the same polygon with multiple soil types. If there aren't that ! + ! many different soil types with a minimum area (check MIN_SITE_AREA ! + ! below), then the model will allocate just the amount needed. ! + ! MAXPATCH -- A variable controlling the sought number of patches per site. ! + ! Possible values are: ! + ! 0. Disable any patch fusion. This may lead to a large number ! + ! of patches in century-long simulations. ! + ! 1. The model will force fusion until the total number of ! + ! patches is 1 for each land use type. ! + ! -1. Similar to 1, but fusion will only happen during ! + ! initialisation ! + ! >= 2. The model will seek fusion of patches every year, aiming to ! + ! keep the number of patches below NL%MAXPATCH. ! + ! <= -2. 
Similar to >= 2, but fusion will only happen during !
+ ! initialisation. The target number of patches will be the !
+ ! absolute value of NL%MAXPATCH. !
+ ! !
+ ! IMPORTANT: A given site may contain more patches than MAXPATCH in !
+ ! case the patches are so different that they cannot be !
+ ! fused even when the fusion threshold is relaxed. !
+ ! !
+ ! MAXCOHORT -- A variable controlling the sought number of cohorts per patch. !
+ ! Possible values are: !
+ ! 0. Disable cohort fusion. This may lead to a large number of !
+ ! cohorts in century-long simulations. !
+ ! >= 1. The model will seek fusion of cohorts every month, aiming to !
+ ! keep the number of cohorts per patch below MAXCOHORT. !
+ ! <= -1. Similar to >= 1, but fusion will only happen during !
+ ! initialisation. The target number of cohorts will be the !
+ ! absolute value of MAXCOHORT. !
+ ! !
+ ! IMPORTANT: A given patch may contain more cohorts than MAXCOHORT in !
+ ! case the cohorts are so different that they cannot be !
+ ! fused even when the fusion threshold is relaxed. !
+ ! !
+ ! MIN_SITE_AREA -- This is the minimum fraction area of a given soil type that allows !
+ ! a site to be created. !
+ ! !
+ ! MIN_PATCH_AREA -- This is the minimum fraction area required for a patch to be !
+ ! retained (ignored if IED_INIT_MODE is set to 3). !
+ ! IMPORTANT: This is not enforced by the model, but we recommend that !
+ ! MIN_PATCH_AREA >= 1/MAXPATCH, otherwise the model may !
+ ! never reach MAXPATCH. !
+ !---------------------------------------------------------------------------------------!
+ NL%IFUSION = 0
+ NL%MAXSITE = 1
+ NL%MAXPATCH = 30
+ NL%MAXCOHORT = 80
+ NL%MIN_SITE_AREA = 0.001
+ NL%MIN_PATCH_AREA = 0.001
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! ZROUGH -- Roughness length [metres] of non-vegetated soil. This variable will be !
+ ! eventually removed from ED2IN, use the XML initialisation file to set this !
+ ! parameter instead. !
+ !---------------------------------------------------------------------------------------!
+ NL%ZROUGH = 0.1
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! Treefall disturbance parameters. !
+ ! TREEFALL_DISTURBANCE_RATE -- Sign-dependent treefall disturbance rate: !
+ ! > 0. usual disturbance rate, in 1/years; !
+ ! = 0. No treefall disturbance; !
+ ! TIME2CANOPY -- Minimum patch age for treefall disturbance to happen. !
+ ! If TREEFALL_DISTURBANCE_RATE = 0., this value will be !
+ ! ignored. If this value is different from zero, then !
+ ! TREEFALL_DISTURBANCE_RATE is internally adjusted so the !
+ ! average patch age is still 1/TREEFALL_DISTURBANCE_RATE !
+ !---------------------------------------------------------------------------------------!
+ NL%TREEFALL_DISTURBANCE_RATE = 0.0125
+ NL%TIME2CANOPY = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! RUNOFF_TIME -- In case a temporary surface water (TSW) is created, this is the "e- !
+ ! -folding lifetime" of the TSW in seconds due to runoff. If you don't !
+ ! want runoff to happen, set this to 0. !
+ !---------------------------------------------------------------------------------------!
+ NL%RUNOFF_TIME = 3600.
+ !---------------------------------------------------------------------------------------!
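As a quick sanity check on what an e-folding RUNOFF_TIME implies: over a time step dt, the fraction of temporary surface water shed as runoff is 1 - exp(-dt / RUNOFF_TIME). A toy R illustration with made-up values, not ED2 internals:

```
# Toy illustration of e-folding runoff; values are arbitrary.
runoff_time <- 3600  # s, e-folding lifetime of temporary surface water (TSW)
dt          <- 900   # s, an example model time step
tsw         <- 5.0   # kg/m2, standing TSW at the start of the step

frac_removed <- 1 - exp(-dt / runoff_time)  # ~0.22 for these values
runoff       <- tsw * frac_removed          # kg/m2 shed during this step
```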
+
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! These variables will be eventually removed from ED2IN, use the XML initialisation !
+ ! file to set these parameters instead. !
+ ! !
+ ! The following variables control the minimum values of various velocities in the !
+ ! canopy. This is needed to prevent the air from being extremely still, or to avoid !
+ ! singularities. When defining the values, keep in mind that UBMIN >= UGBMIN >= USTMIN. !
+ ! !
+ ! UBMIN -- minimum wind speed at the top of the canopy air space [ m/s] !
+ ! UGBMIN -- minimum wind speed at the leaf level [ m/s] !
+ ! USTMIN -- minimum friction velocity, u*, in m/s. [ m/s] !
+ !---------------------------------------------------------------------------------------!
+ NL%UBMIN = 1.00
+ NL%UGBMIN = 0.25
+ NL%USTMIN = 0.10
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! Control parameters for printing to standard output. Any variable can be printed !
+ ! to standard output as long as it is one dimensional. Polygon variables have been !
+ ! tested, no guarantees for other hierarchical levels. Choose any variables that are !
+ ! defined in the variable table fill routine in ed_state_vars.f90. Choose the start !
+ ! and end index of the polygon, site, patch or cohort. It should work in parallel. The !
+ ! indices are global indices of the entire domain. They are printed out in rows of 10 !
+ ! columns each. !
+ ! !
+ ! IPRINTPOLYS -- 0. Do not print information to screen !
+ ! 1. Print polygon arrays to screen, use variables described below to !
+ ! determine which ones and how !
+ ! NPVARS -- Number of variables to be printed !
+ ! PRINTVARS -- List of variables to be printed !
+ ! PFMTSTR -- The standard fortran format for the prints. One format per variable !
+ ! IPMIN -- First polygon (absolute index) to print !
+ ! IPMAX -- Last polygon (absolute index) to print !
+ !---------------------------------------------------------------------------------------!
+ NL%IPRINTPOLYS = 0
+ NL%NPVARS = 1
+ NL%PRINTVARS = 'AVG_PCPG','AVG_CAN_TEMP','AVG_VAPOR_AC','AVG_CAN_SHV'
+ NL%PFMTSTR = 'f10.8','f5.1','f7.2','f9.5'
+ NL%IPMIN = 1
+ NL%IPMAX = 60
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! Variables that control the meteorological forcing. !
+ ! !
+ ! IMETTYPE -- Format of the meteorological dataset !
+ ! 0. (Non-functional) ASCII !
+ ! 1. (ED-2.2 default) HDF5 !
+ ! ISHUFFLE -- How to choose a year outside the meteorological data range (see !
+ ! METCYC1 and METCYCF). !
+ ! 0. (ED-2.2 default) Sequentially cycle over years !
+ ! 1. (Under development) Randomly pick a year. The sequence of randomly !
+ ! picked years will be the same every time the simulation is re-run, !
+ ! provided that the initial year and met driver time span remain the !
+ ! same. There have been reports that this option behaves like !
+ ! option 2 (completely random). !
+ ! 2. (Beta) Randomly pick the years, choosing a different sequence each !
+ ! time the model is run. !
+ ! !
+ ! IMPORTANT: Regardless of the ISHUFFLE option, the model always uses the !
+ ! correct year for the period in which meteorological drivers !
+ ! exist. !
+ ! !
+ ! METCYC1 -- First year for which meteorological driver files exist. !
+ ! METCYCF -- Last year for which meteorological driver files exist. In addition, !
+ ! the model assumes that files exist for all years between METCYC1 and !
+ ! METCYCF. !
+ ! IMETAVG -- How the input radiation was originally averaged. You must tell this !
+ ! because ED-2.1 can make an interpolation accounting for the cosine of !
+ ! zenith angle. !
+ ! -1. (Deprecated) I don't know, use linear interpolation. !
+ ! 0. No average, the values are instantaneous !
+ ! 1. Averages ending at the reference time !
+ ! 2. Averages beginning at the reference time !
+ ! 3. Averages centred at the reference time !
+ ! !
+ ! IMPORTANT: The user must obtain the correct information for each !
+ ! meteorological driver before running the model, and set !
+ ! this variable consistently. Inconsistent settings are !
+ ! known to cause numerical instabilities, particularly at !
+ ! around sunrise and sunset times. !
+ ! !
+ ! IMETRAD -- What should the model do with the input short wave radiation? !
+ ! 0. (ED-2.2 default, when radiation components were measured) !
+ ! Nothing, use it as is. !
+ ! 1. (Legacy) Add radiation components together, then use the SiB !
+ ! method (Sellers et al. 1986, J. Atmos. Sci) to split radiation !
+ ! into the four components (PAR direct, PAR diffuse, NIR direct, !
+ ! NIR diffuse). !
+ ! 2. (ED-2.2 default when radiation components were not measured) !
+ ! Add the components together, then use the method of Weiss and !
+ ! Norman (1985, Agric. For. Meteorol.) to split radiation into !
+ ! the four components. !
+ ! 3. All radiation goes to diffuse. Useful for theoretical studies !
+ ! only. !
+ ! 4. All radiation goes to direct, except at night. Useful for !
+ ! theoretical studies only. !
+ ! 5. (Beta) Add radiation components back together, then split !
+ ! radiation to the four components based on clearness index (Bendix !
+ ! et al. 2010, Int. J. Biometeorol.). !
+ ! INITIAL_CO2 -- Initial value for CO2 in case no CO2 is provided in the meteorological !
+ ! driver dataset [Units: umol/mol] !
+ !---------------------------------------------------------------------------------------!
+ NL%IMETTYPE = 1
+ NL%ISHUFFLE = 0
+ NL%METCYC1 = @MET_START@
+ NL%METCYCF = @MET_END@
+ NL%IMETAVG = @MET_SOURCE@
+ NL%IMETRAD = 5
+ NL%INITIAL_CO2 = 410.
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! The following variables control the phenology prescribed from observations: !
+ ! !
+ ! IPHENYS1 -- First year for spring phenology !
+ ! IPHENYSF -- Final year for spring phenology !
+ ! IPHENYF1 -- First year for fall/autumn phenology !
+ ! IPHENYFF -- Final year for fall/autumn phenology !
+ ! PHENPATH -- path and prefix of the prescribed phenology data. !
+ ! !
+ ! If the years don't cover the entire simulation period, they will be recycled. !
+ !---------------------------------------------------------------------------------------!
+ NL%IPHENYS1 = @PHENOL_START@
+ NL%IPHENYSF = @PHENOL_END@
+ NL%IPHENYF1 = @PHENOL_START@
+ NL%IPHENYFF = @PHENOL_END@
+ NL%PHENPATH = '@PHENOL@'
+ !---------------------------------------------------------------------------------------!
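The @...@ tokens above are placeholders that the workflow fills in before launching ED2. A minimal sketch of that substitution step follows; the file names and replacement values here are hypothetical:

```
# Fill ED2IN template placeholders (hypothetical paths and values).
ed2in <- readLines("ED2IN.template")
replacements <- c("@MET_START@"    = "2005",
                  "@MET_END@"      = "2011",
                  "@MET_SOURCE@"   = "1",
                  "@PHENOL_START@" = "2005",
                  "@PHENOL_END@"   = "2011",
                  "@PHENOL@"       = "/mypath/phenology/prescribed",
                  "@CONFIGFILE@"   = "config.xml")
for (token in names(replacements)) {
  ed2in <- gsub(token, replacements[[token]], ed2in, fixed = TRUE)
}
writeLines(ed2in, "ED2IN")
```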
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! These are some additional configuration files. !
+ ! IEDCNFGF -- XML file containing additional parameter settings. If you don't have !
+ ! one, leave it empty !
+ ! EVENT_FILE -- file containing specific events that must be incorporated into the !
+ ! simulation. !
+ !---------------------------------------------------------------------------------------!
+ NL%IEDCNFGF = '@CONFIGFILE@'
+ NL%EVENT_FILE = '/mypath/event.xml'
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! Census variables. These assign unique census statuses to cohorts, to !
+ ! better compare the model with census observations. In case you don't intend to !
+ ! compare the model with census data, set DT_CENSUS to 1., otherwise you may reduce !
+ ! cohort fusion. !
+ ! DT_CENSUS -- Time between censuses, in months. Currently the maximum is 60 !
+ ! months, to avoid excessive memory allocation. Every time the !
+ ! simulation reaches the census time step, all census tags will be !
+ ! reset. !
+ ! YR1ST_CENSUS -- In which year was the first census conducted? !
+ ! MON1ST_CENSUS -- In which month was the first census conducted? !
+ ! MIN_RECRUIT_DBH -- Minimum DBH that is measured in the census, in cm. !
+ !---------------------------------------------------------------------------------------!
+ NL%DT_CENSUS = 24
+ NL%YR1ST_CENSUS = 2004
+ NL%MON1ST_CENSUS = 3
+ NL%MIN_RECRUIT_DBH = 10
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! The following variables are used to control the detailed output for debugging !
+ ! purposes. !
+ ! !
+ ! IDETAILED -- This flag controls the possible detailed outputs, mostly used for !
+ ! debugging purposes. Notice that this doesn't replace the normal debug- !
+ ! ger options, the idea is to provide detailed output to check bad !
+ ! assumptions. The options are additive, and the indices below represent !
+ ! the different types of output: !
+ ! !
+ ! 0 -- (ED-2.2 default) No detailed output. !
+ ! 1 -- Detailed budget (every DTLSM) !
+ ! 2 -- Detailed photosynthesis (every DTLSM) !
+ ! 4 -- Detailed output from the integrator (every HDID) !
+ ! 8 -- Thermodynamic bounds for sanity check (every DTLSM) !
+ ! 16 -- Daily error stats (which variable caused the time step to shrink) !
+ ! 32 -- Allometry parameters, photosynthesis parameters, and minimum and !
+ ! maximum sizes (three files, only at the beginning) !
+ ! 64 -- Detailed disturbance rate output. Two types of detailed !
+ ! transitions will be written (single polygon runs only). !
+ ! a. A text file that looks like the .lu files. This is written !
+ ! only once, at the beginning of the simulation. !
+ ! b. Detailed information about the transition matrix. This is !
+ ! written to the standard output (e.g. screen), every time the !
+ ! patch dynamics is called. !
+ ! !
+ ! In case you don't want any detailed output (likely for most runs), set !
+ ! IDETAILED to zero. In case you want to generate multiple outputs, add !
+ ! the number of the sought options: for example, if you want detailed !
+ ! photosynthesis and detailed output from the integrator, set IDETAILED !
+ ! to 6 (2 + 4). Any combination of the above outputs is acceptable, al- !
+ ! though all but the last produce a very large number of text files, in !
+ ! which case you may want to look at variable PATCH_KEEP. !
+ ! !
+ ! IMPORTANT: The first five options will only work for single site !
+ ! simulations, and it is strongly recommended to set !
+ ! IVEGT_DYNAMICS to 0. These options generate tons of !
+ ! output, so don't try these options with long simulations. !
+ ! !
+ ! !
+ ! PATCH_KEEP -- This option will eliminate all patches except one from the initial- !
+ ! isation. This is only used when one of the first five types of !
+ ! detailed output is active, otherwise it will be ignored. Options are: !
+ ! -2. Keep only the patch with the lowest potential LAI !
+ ! -1. Keep only the patch with the highest potential LAI !
+ ! 0. Keep all patches. !
+ ! > 0. Keep the patch with the provided index. In case the index is !
+ ! not valid, the model will crash. !
+ !---------------------------------------------------------------------------------------!
+ NL%IDETAILED = 0
+ NL%PATCH_KEEP = 0
+ !---------------------------------------------------------------------------------------!
+
+
+
+ !---------------------------------------------------------------------------------------!
+ ! GROWTH_RESP_SCHEME -- This flag indicates how growth respiration fluxes are treated. !
+ ! !
+ ! 0 - (Legacy) Growth respiration is treated as a tax on GPP, at a PFT-specific !
+ ! rate given by growth_resp_factor. All growth respiration is treated as an !
+ ! aboveground wood -> canopy-airspace flux. !
+ ! 1 - (ED-2.2 default) Growth respiration is calculated as in 0, but split into !
+ ! fluxes entering the CAS from Leaf, Fine Root, Sapwood (above- and below- !
+ ! -ground), and Bark (above- and below-ground, only when IALLOM=3), !
+ ! proportionally to the biomass of each tissue. This does not affect the !
+ ! carbon budget at all; it merely provides greater within-ecosystem flux !
+ ! resolution. !
+ !---------------------------------------------------------------------------------------!
+ NL%GROWTH_RESP_SCHEME = 1
+ !---------------------------------------------------------------------------------------!
+
+
+ !---------------------------------------------------------------------------------------!
+ ! STORAGE_RESP_SCHEME -- This flag controls how storage respiration fluxes are treated. !
+ ! !
+ ! 0 - (Legacy) Storage resp. is an aboveground wood -> canopy-airspace flux. !
+ ! 1 - (ED-2.2 default) Storage respiration is calculated as in 0, but split into !
+ ! fluxes entering the CAS from Leaf, Fine Root, Sapwood (above- and below- !
+ ! -ground), and Bark (above- and below-ground, only when IALLOM=3), !
+ ! proportionally to the biomass of each tissue. This does not affect the !
+ ! carbon budget at all; it merely provides greater within-ecosystem flux !
+ ! resolution. !
+ !---------------------------------------------------------------------------------------!
+ NL%STORAGE_RESP_SCHEME = 1
+ !---------------------------------------------------------------------------------------!
+$END
+!==========================================================================================!
+!==========================================================================================!
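The diffs below swap tmvtnorm for TruncatedNormal in the hierarchical MCMC. Both packages provide truncated multivariate normal draws and densities, but under different argument names (mean/lower/upper vs mu/lb/ub). A toy sketch of the new calls, with made-up values:

```
library(TruncatedNormal)

mu    <- c(0, 0)   # proposal mean (toy values)
sigma <- diag(2)   # jump covariance
lb    <- c(-1, -1) # lower truncation bounds
ub    <- c(1, 1)   # upper truncation bounds

# One proposal draw (tmvtnorm::rtmvnorm used mean/lower/upper instead)
x <- TruncatedNormal::rtmvnorm(n = 1, mu = mu, sigma = sigma, lb = lb, ub = ub)

# Log jump density; B is the Monte Carlo sample size used to estimate the
# normalising constant of the truncated distribution
TruncatedNormal::dtmvnorm(x, mu = mu, sigma = sigma, lb = lb, ub = ub,
                          log = TRUE, B = 1e2)
```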
diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index 122e2da0e83..faddbebc3de 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -42,7 +42,7 @@ Imports: stats, prodlim, MCMCpack, - tmvtnorm, + TruncatedNormal (>= 2.2), udunits2 (>= 0.11), utils, XML, diff --git a/modules/assim.batch/R/hier.mcmc.R b/modules/assim.batch/R/hier.mcmc.R index de8b668c512..6bf159169b4 100644 --- a/modules/assim.batch/R/hier.mcmc.R +++ b/modules/assim.batch/R/hier.mcmc.R @@ -208,11 +208,11 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # propose new site parameter vectors thissite <- g %% nsites if(thissite == 0) thissite <- nsites - proposed <- tmvtnorm::rtmvnorm(1, - mean = mu_site_curr[thissite,], + proposed <- TruncatedNormal::rtmvnorm(1, + mu = mu_site_curr[thissite,], sigma = jcov.arr[,,thissite], - lower = rng_orig[,1], - upper = rng_orig[,2]) + lb = rng_orig[,1], + ub = rng_orig[,2]) mu_site_new <- matrix(rep(proposed, nsites),ncol=nparam, byrow = TRUE) @@ -228,9 +228,9 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # calculate jump probabilities currHR <- sapply(seq_len(nsites), function(v) { - tmvtnorm::dtmvnorm(mu_site_curr[v,], mu_site_new[v,], jcov.arr[,,v], - lower = rng_orig[,1], - upper = rng_orig[,2], log = TRUE) + TruncatedNormal::dtmvnorm(mu_site_curr[v,], mu_site_new[v,], jcov.arr[,,v], + lb = rng_orig[,1], + ub = rng_orig[,2], log = TRUE, B = 1e2) }) # predict new SS @@ -246,9 +246,9 @@ hier.mcmc <- function(settings, gp.stack, nstack = NULL, nmcmc, rng_orig, # calculate jump probabilities newHR <- sapply(seq_len(nsites), function(v) { - tmvtnorm::dtmvnorm(mu_site_new[v,], mu_site_curr[v,], jcov.arr[,,v], - lower = rng_orig[,1], - upper = rng_orig[,2], log = TRUE) + TruncatedNormal::dtmvnorm(mu_site_new[v,], mu_site_curr[v,], jcov.arr[,,v], + lb = rng_orig[,1], + ub = rng_orig[,2], log = TRUE, B = 1e2) }) # Accept/reject with MH rule diff --git a/modules/assim.batch/R/pda.utils.R b/modules/assim.batch/R/pda.utils.R index 2180709a3c2..0763fa465e7 100644 --- a/modules/assim.batch/R/pda.utils.R +++ b/modules/assim.batch/R/pda.utils.R @@ -901,11 +901,11 @@ generate_hierpost <- function(mcmc.out, prior.fn.all, prior.ind.all){ # calculate hierarchical posteriors from mu_global_samp and tau_global_samp hierarchical_samp <- mu_global_samp for(si in seq_len(iter_size)){ - hierarchical_samp[si,] <- tmvtnorm::rtmvnorm(1, - mean = mu_global_samp[si,], + hierarchical_samp[si,] <- TruncatedNormal::rtmvnorm(1, + mu = mu_global_samp[si,], sigma = sigma_global_samp[si,,], - lower = lower_lim, - upper = upper_lim) + lb = lower_lim, + ub = upper_lim) } mcmc.out[[i]]$hierarchical_samp <- hierarchical_samp diff --git a/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R b/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R index 2c2d47e08dc..7697cd65205 100644 --- a/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R +++ b/modules/benchmark/inst/scripts/benchmark.workflow.FATES_BCI.R @@ -28,7 +28,7 @@ settings <- PEcAn.settings::read.settings(settings.file) input_id <- 1000011171 ## 4) Edit Input to associate File ## 5) Verify that PEcAn is able to find and load file -input <- PEcAn.DB::query.file.path(input_id,host_name = "localhost",con = bety$con) +input <- PEcAn.DB::query.file.path(input_id,host_name = "localhost",con = bety) format <- PEcAn.DB::query.format.vars(bety,input_id) field <- PEcAn.benchmark::load_data(input,format) ## 6) Look up variable_id in 
database diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index e1d8e49c52b..33328c9a306 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -87,12 +87,8 @@ met.process <- function(site, input_met, start_date, end_date, model, } # set up connection and host information - bety <- dplyr::src_postgres(dbname = dbparms$dbname, - host = dbparms$host, - user = dbparms$user, - password = dbparms$password) - - con <- bety$con + con <- PEcAn.DB::db.open(dbparms) + on.exit(PEcAn.DB::db.close(con), add = TRUE) username <- ifelse(is.null(input_met$username), "pecan", input_met$username) machine.host <- ifelse(host == "localhost" || host$name == "localhost", PEcAn.remote::fqdn(), host$name) @@ -128,10 +124,10 @@ met.process <- function(site, input_met, start_date, end_date, model, # first attempt at function that designates where to start met.process if (is.null(input_met$id)) { stage <- list(download.raw = TRUE, met2cf = TRUE, standardize = TRUE, met2model = TRUE) - format.vars <- PEcAn.DB::query.format.vars(bety = bety, format.id = register$format$id) # query variable info from format id + format.vars <- PEcAn.DB::query.format.vars(bety = con, format.id = register$format$id) # query variable info from format id } else { stage <- met.process.stage(input.id=input_met$id, raw.id=register$format$id, con) - format.vars <- PEcAn.DB::query.format.vars(bety = bety, input.id = input_met$id) # query DB to get format variable information if available + format.vars <- PEcAn.DB::query.format.vars(bety = con, input.id = input_met$id) # query DB to get format variable information if available # Is there a situation in which the input ID could be given but not the file path? # I'm assuming not right now assign(stage$id.name, @@ -280,7 +276,7 @@ met.process <- function(site, input_met, start_date, end_date, model, con = con, host = host, overwrite = overwrite$met2cf, format.vars = format.vars, - bety = bety) + bety = con) } else { if (! 
met %in% c("ERA5")) cf.id = input_met$id } @@ -411,11 +407,11 @@ met.process <- function(site, input_met, start_date, end_date, model, ################################################################################################################################# -##' @name db.site.lat.lon -##' @title db.site.lat.lon +##' Look up lat/lon from siteid +##' ##' @export -##' @param site.id -##' @param con +##' @param site.id BeTY ID of site to look up +##' @param con database connection ##' @author Betsy Cowdery db.site.lat.lon <- function(site.id, con) { site <- PEcAn.DB::db.query(paste("SELECT id, ST_X(ST_CENTROID(geometry)) AS lon, ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id =", @@ -434,20 +430,19 @@ db.site.lat.lon <- function(site.id, con) { ################################################################################################################################# -##' @name browndog.met -##' @description Use browndog to get the met data for a specific model -##' @title get met data from browndog +##' Use browndog to get the met data for a specific model +##' ##' @export -##' @param browndog, list with url, username and password to connect to browndog -##' @param source, the source of the met data, currently only NARR an Ameriflux is supported -##' @param site, site information should have id, lat, lon and name (ameriflux id) -##' @param start_date, start date for result -##' @param end_date, end date for result -##' @param model, model to convert the met data to -##' @param dir, folder where results are stored (in subfolder) -##' @param username, used when downloading data from Ameriflux like sites -##' @param con, database connection -## +##' @param browndog list with url, username and password to connect to browndog +##' @param source the source of the met data, currently only NARR an Ameriflux is supported +##' @param site site information should have id, lat, lon and name (ameriflux id) +##' @param start_date start date for result +##' @param end_date end date for result +##' @param model model to convert the met data to +##' @param dir folder where results are stored (in subfolder) +##' @param username used when downloading data from Ameriflux like sites +##' @param con database connection +##' ##' @author Rob Kooper browndog.met <- function(browndog, source, site, start_date, end_date, model, dir, username, con) { folder <- tempfile("BD-", dir) diff --git a/modules/data.atmosphere/R/met.process.stage.R b/modules/data.atmosphere/R/met.process.stage.R index 41b2c756166..e0ca23964a8 100644 --- a/modules/data.atmosphere/R/met.process.stage.R +++ b/modules/data.atmosphere/R/met.process.stage.R @@ -4,6 +4,7 @@ ##' ##' @param input.id ##' @param raw.id +##' @param con database connection ##' ##' @author Elizabeth Cowdery met.process.stage <- function(input.id, raw.id, con) { diff --git a/modules/data.atmosphere/man/browndog.met.Rd b/modules/data.atmosphere/man/browndog.met.Rd index 18b8bdb63f9..16f5b224972 100644 --- a/modules/data.atmosphere/man/browndog.met.Rd +++ b/modules/data.atmosphere/man/browndog.met.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/met.process.R \name{browndog.met} \alias{browndog.met} -\title{get met data from browndog} +\title{Use browndog to get the met data for a specific model} \usage{ browndog.met( browndog, @@ -17,23 +17,23 @@ browndog.met( ) } \arguments{ -\item{browndog, }{list with url, username and password to connect to browndog} +\item{browndog}{list with url, username and password to connect to browndog} -\item{source, }{the 
source of the met data, currently only NARR an Ameriflux is supported} +\item{source}{the source of the met data, currently only NARR an Ameriflux is supported} -\item{site, }{site information should have id, lat, lon and name (ameriflux id)} +\item{site}{site information should have id, lat, lon and name (ameriflux id)} -\item{start_date, }{start date for result} +\item{start_date}{start date for result} -\item{end_date, }{end date for result} +\item{end_date}{end date for result} -\item{model, }{model to convert the met data to} +\item{model}{model to convert the met data to} -\item{dir, }{folder where results are stored (in subfolder)} +\item{dir}{folder where results are stored (in subfolder)} -\item{username, }{used when downloading data from Ameriflux like sites} +\item{username}{used when downloading data from Ameriflux like sites} -\item{con, }{database connection} +\item{con}{database connection} } \description{ Use browndog to get the met data for a specific model diff --git a/modules/data.atmosphere/man/db.site.lat.lon.Rd b/modules/data.atmosphere/man/db.site.lat.lon.Rd index 46aa66c45fd..9a2cfed78f9 100644 --- a/modules/data.atmosphere/man/db.site.lat.lon.Rd +++ b/modules/data.atmosphere/man/db.site.lat.lon.Rd @@ -2,12 +2,17 @@ % Please edit documentation in R/met.process.R \name{db.site.lat.lon} \alias{db.site.lat.lon} -\title{db.site.lat.lon} +\title{Look up lat/lon from siteid} \usage{ db.site.lat.lon(site.id, con) } +\arguments{ +\item{site.id}{BeTY ID of site to look up} + +\item{con}{database connection} +} \description{ -db.site.lat.lon +Look up lat/lon from siteid } \author{ Betsy Cowdery diff --git a/modules/data.atmosphere/man/met.process.stage.Rd b/modules/data.atmosphere/man/met.process.stage.Rd index fe3eaa0122a..cf056773ff7 100644 --- a/modules/data.atmosphere/man/met.process.stage.Rd +++ b/modules/data.atmosphere/man/met.process.stage.Rd @@ -7,7 +7,7 @@ met.process.stage(input.id, raw.id, con) } \arguments{ -\item{raw.id}{} +\item{con}{database connection} } \description{ met.process.stage diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index bf1308c9e6f..b0445673eaf 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -289,9 +289,6 @@ Undocumented arguments in documentation object 'closest_xy' Undocumented arguments in documentation object 'daygroup' ‘date’ ‘flx’ -Undocumented arguments in documentation object 'db.site.lat.lon' - ‘site.id’ ‘con’ - Undocumented arguments in documentation object 'debias_met' ‘outfolder’ ‘site_id’ ‘...’ @@ -364,7 +361,7 @@ Undocumented arguments in documentation object 'met.process' ‘browndog’ Undocumented arguments in documentation object 'met.process.stage' - ‘input.id’ ‘con’ + ‘input.id’ ‘raw.id’ Undocumented arguments in documentation object 'met2CF.ALMA' ‘verbose’ diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 71ecd18e0a6..2cb138472a1 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -3,7 +3,7 @@ ##' @name call_MODIS ##' @title call_MODIS ##' @export -##' @param outdir where the output file will be stored. Default is NULL +##' @param outdir where the output file will be stored. Default is NULL and in this case only values are returned. When path is provided values are returned and written to disk. ##' @param var the simple name of the modis dataset variable (e.g. 
lai)
 ##' @param site_info Bety list of site info for parsing MODIS data: list(site_id, site_name, lat,
 ##' lon, time_zone)
@@ -35,14 +35,14 @@
 ##' lon = 90,
 ##' time_zone = "UTC")
 ##' test_modistools <- call_MODIS(
-##' outdir = NULL,
 ##' var = "lai",
+##' product = "MOD15A2H",
+##' band = "Lai_500m",
 ##' site_info = site_info,
 ##' product_dates = c("2001150", "2001365"),
+##' outdir = NULL,
 ##' run_parallel = TRUE,
 ##' ncores = NULL,
-##' product = "MOD15A2H",
-##' band = "Lai_500m",
 ##' package_method = "MODISTools",
 ##' QC_filter = TRUE,
 ##' progress = FALSE)
@@ -50,12 +50,12 @@
 ##' @importFrom foreach %do% %dopar%
 ##' @author Bailey Morrison
 ##'
-call_MODIS <- function(outdir = NULL,
-                       var, site_info,
-                       product_dates,
+call_MODIS <- function(var, product,
+                       band, site_info,
+                       product_dates,
+                       outdir = NULL,
                        run_parallel = FALSE,
-                       ncores = NULL,
-                       product, band,
+                       ncores = NULL,
                        package_method = "MODISTools",
                        QC_filter = FALSE,
                        progress = FALSE) {
@@ -244,7 +244,7 @@ call_MODIS <- function(outdir = NULL,
     output$qc[i] <- substr(convert, nchar(convert) - 2, nchar(convert))
   }
   good <- which(output$qc %in% c("000", "001"))
-  if (length(good) > 0 || !(is.null(good)))
+  if (length(good) > 0)
   {
     output <- output[good, ]
   } else {
diff --git a/modules/data.remote/inst/bands2lai_snap.py b/modules/data.remote/inst/bands2lai_snap.py
new file mode 100644
index 00000000000..86707027432
--- /dev/null
+++ b/modules/data.remote/inst/bands2lai_snap.py
@@ -0,0 +1,47 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Calculates LAI using SNAP.
+
+Author: Ayush Prasad
+"""
+
+from satellitetools import gee
+import satellitetools.biophys_xarray as bio
+import geopandas as gpd
+import xarray as xr
+import os
+
+
+def bands2lai_snap(inputfile, outdir):
+    """
+    Calculates LAI for the input netCDF file and saves it in a new netCDF file.
+
+    Parameters
+    ----------
+    inputfile (str) -- path to the input netCDF file containing bands.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    Returns
+    -------
+    Nothing:
+            output netCDF is saved in the specified directory.
+
+    """
+    # load the input file
+    ds_disk = xr.open_dataset(inputfile)
+    # calculate LAI using SNAP
+    area = bio.run_snap_biophys(ds_disk, "LAI")
+
+    timeseries = {}
+    timeseries_variable = ["lai"]
+
+    # if the specified output directory does not exist, create it.
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    # create a timeseries and save the netCDF file
+    area.to_netcdf(os.path.join(outdir, area.name + "_lai.nc"))
+    timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable)
\ No newline at end of file
diff --git a/modules/data.remote/inst/bands2ndvi.py b/modules/data.remote/inst/bands2ndvi.py
new file mode 100644
index 00000000000..216c3b1e77c
--- /dev/null
+++ b/modules/data.remote/inst/bands2ndvi.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Calculates NDVI using gee.
+
+Author: Ayush Prasad
+"""
+
+import xarray as xr
+from satellitetools import gee
+import geopandas as gpd
+import os
+
+
+def bands2ndvi(inputfile, outdir):
+    """
+    Calculates NDVI for the input netCDF file and saves it in a new netCDF file.
+
+    Parameters
+    ----------
+    inputfile (str) -- path to the input netCDF file containing bands.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    Returns
+    -------
+    Nothing:
+            output netCDF is saved in the specified directory.
+
+    """
+    # load the input file
+    ds_disk = xr.open_dataset(inputfile)
+    # calculate NDVI using gee
+    area = gee.compute_ndvi(ds_disk)
+
+    timeseries = {}
+    timeseries_variable = ["ndvi"]
+
+    # if specified output directory does not exist, create it.
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    # create a timeseries and save the netCDF file
+    area.to_netcdf(os.path.join(outdir, area.name + "_ndvi.nc"))
+    timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable)
diff --git a/modules/data.remote/inst/gee2pecan_bands.py b/modules/data.remote/inst/gee2pecan_bands.py
new file mode 100644
index 00000000000..3c325784962
--- /dev/null
+++ b/modules/data.remote/inst/gee2pecan_bands.py
@@ -0,0 +1,72 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Downloads ESA Sentinel 2, Level-2A Bottom of Atmosphere data and saves it in a netCDF file.
+Bands retrieved: B3, B4, B5, B6, B7, B8A, B11 and B12
+More information about the bands and the process followed to get the data can be found at /satellitetools/gee.py
+
+Warning: Point coordinates as input have currently not been implemented.
+
+Requires Python3
+
+Uses satellitetools created by Olli Nevalainen.
+
+Author: Ayush Prasad
+"""
+
+from satellitetools import gee
+import geopandas as gpd
+import os
+
+
+def gee2pecan_bands(geofile, outdir, start, end, qi_threshold):
+    """
+    Downloads Sentinel 2 data from gee and saves it in a netCDF file at the specified location.
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    qi_threshold (float) -- From satellitetools: Threshold value to filter images based on the used qi filter. The qi filter holds labels of classes whose percentages within the AOI are summed. If the sum is larger than the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved.
+
+    Returns
+    -------
+    Nothing:
+            output netCDF is saved in the specified directory.
+
+    Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray
+    To test this function, run the following code inside a Python shell after importing this module; a test file is included.
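+
+    Note that the example issues live Google Earth Engine requests, so it
+    assumes you have already authenticated (e.g. with `earthengine authenticate`).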
+
+    gee2pecan_bands(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1)
+    """
+
+    # read in the input file containing coordinates
+    df = gpd.read_file(geofile)
+
+    request = gee.S2RequestParams(start, end)
+
+    # filter area of interest from the coordinates in the input file
+    area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0])
+
+    # calculate qi attribute for the AOI
+    gee.ee_get_s2_quality_info(area, request)
+
+    # get the final data
+    gee.ee_get_s2_data(area, request, qi_threshold=qi_threshold)
+
+    # convert dataframe to an xarray dataset, used later for converting to netCDF
+    gee.s2_data_to_xarray(area, request)
+
+    # if specified output directory does not exist, create it
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    # create a timeseries and save the netCDF file
+    area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc"))
diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py
new file mode 100644
index 00000000000..5ea7b62b455
--- /dev/null
+++ b/modules/data.remote/inst/gee2pecan_smap.py
@@ -0,0 +1,152 @@
+"""
+Downloads SMAP Global Soil Moisture Data from Google Earth Engine and saves it in a netCDF file.
+
+Requires Python3
+
+Author: Ayush Prasad
+"""
+
+import ee
+import pandas as pd
+import geopandas as gpd
+import os
+import xarray as xr
+import datetime
+
+ee.Initialize()
+
+
+def gee2pecan_smap(geofile, outdir, start, end, var):
+    """
+    Downloads and saves SMAP data from GEE
+
+    Parameters
+    ----------
+    geofile (str) -- path to the geojson file containing the name and coordinates of ROI
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+ + start (str) -- starting date of the data request in the form YYYY-MM-dd + + end (str) -- ending date areaof the data request in the form YYYY-MM-dd + + var (str) -- one of ssm, susm, smp, ssma, susma + + Returns + ------- + Nothing: + output netCDF is saved in the specified directory + """ + + # read in the geojson file + df = gpd.read_file(geofile) + + if (df.geometry.type == "Point").bool(): + # extract coordinates + lon = float(df.geometry.x) + lat = float(df.geometry.y) + # create geometry + geo = ee.Geometry.Point(lon, lat) + + elif (df.geometry.type == "Polygon").bool(): + # extract coordinates + area = [ + list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) + ] + # create geometry + geo = ee.Geometry.Polygon(area) + + else: + # if the input geometry type is not + raise ValueError("geometry type not supported") + + def smap_ts(geo, start, end, var): + # extract a feature from the geometry + features = [ee.Feature(geo)] + # create a feature collection from the features + featureCollection = ee.FeatureCollection(features) + + def smap_ts_feature(feature): + area = feature.geometry() + # create the image collection + collection = ( + ee.ImageCollection("NASA_USDA/HSL/SMAP_soil_moisture") + .filterBounds(area) + .filterDate(start, end) + .select([var]) + ) + + def smap_ts_image(img): + # scale (int) Default: 30 + scale = 30 + # extract date from the image + dateinfo = ee.Date(img.get("system:time_start")).format("YYYY-MM-dd") + # reduce the region to a list, can be configured as per requirements + img = img.reduceRegion( + reducer=ee.Reducer.toList(), + geometry=area, + maxPixels=1e8, + scale=scale, + ) + # store data in an ee.Array + smapdata = ee.Array(img.get(var)) + tmpfeature = ( + ee.Feature(ee.Geometry.Point([0, 0])) + .set("smapdata", smapdata) + .set("dateinfo", dateinfo) + ) + return tmpfeature + + # map tmpfeature over the image collection + smap_timeseries = collection.map(smap_ts_image) + return feature.set( + "smapdata", smap_timeseries.aggregate_array("smapdata") + ).set("dateinfo", smap_timeseries.aggregate_array("dateinfo")) + + # map feature over featurecollection + featureCollection = featureCollection.map(smap_ts_feature).getInfo() + return featureCollection + + fc = smap_ts(geo=geo, start=start, end=end, var=var) + + def fc2dataframe(fc): + smapdatalist = [] + datelist = [] + # extract var and date data from fc dictionary and store in it in smapdatalist and datelist + for i in range(len(fc["features"][0]["properties"]["smapdata"])): + smapdatalist.append(fc["features"][0]["properties"]["smapdata"][i][0]) + datelist.append( + datetime.datetime.strptime( + (fc["features"][0]["properties"]["dateinfo"][i]).split(".")[0], + "%Y-%m-%d", + ) + ) + fc_dict = {"date": datelist, var: smapdatalist} + # create a pandas dataframe and store the data + fcdf = pd.DataFrame(fc_dict, columns=["date", var]) + return fcdf + + datadf = fc2dataframe(fc) + + site_name = df[df.columns[0]].iloc[0] + AOI = str(df[df.columns[1]].iloc[0]) + + # convert the dataframe to an xarray dataset, used for converting it to a netCDF file + tosave = xr.Dataset( + datadf, + attrs={ + "site_name": site_name, + "start_date": start, + "end_date": end, + "AOI": AOI, + "product": var, + }, + ) + + # # if specified output path does not exist create it + if not os.path.exists(outdir): + os.makedirs(outdir, exist_ok=True) + + file_name = "_" + var + # convert to netCDF and save the file + tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc")) diff --git 
a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py new file mode 100644 index 00000000000..2d0de3b42f8 --- /dev/null +++ b/modules/data.remote/inst/remote_process.py @@ -0,0 +1,118 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +""" +remote_process controls the individual functions to create an automatic workflow for downloading and performing computation on remote sensing data. + +Requires Python3 + +Author: Ayush Prasad +""" + +from gee2pecan_bands import gee2pecan_bands +from bands2ndvi import bands2ndvi +from bands2lai_snap import bands2lai_snap +from satellitetools import gee +import geopandas as gpd + + +def remote_process( + geofile, + outdir, + start, + end, + qi_threshold, + source="gee", + collection="COPERNICUS/S2_SR", + input_type="bands", + output=["lai", "ndvi"], +): + """ + Parameters + ---------- + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. + + outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + + start (str) -- starting date of the data request in the form YYYY-MM-DD + + end (str) -- ending date areaof the data request in the form YYYY-MM-DD + + qi_threshold (float) -- Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved + + source (str) -- source from where data is to be downloaded + + collection (str) -- dataset ID + + input_type (str) -- type of raw intermediate data + + output (list of str) -- type of output data requested + + Returns + ------- + Nothing: + output netCDF is saved in the specified directory. + + Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray + + To test this function run: python3 remote_process.py + """ + + # this part will be removed in the next version, after deciding whether to pass the file or the extracted data to initial functions + df = gpd.read_file(geofile) + area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) + + # selecting the initial data download function by concatenating source and input_type + initial_step = "".join([source, "2pecan", input_type]) + if initial_step == "gee2pecanbands": + if collection == "COPERNICUS/S2_SR": + gee2pecan_bands(geofile, outdir, start, end, qi_threshold) + else: + print("other gee download options go here, currently WIP") + # This should be a function being called from an another file + """ + data = ee.ImageCollection(collection) + filtered_data = (data.filterDate(start, end).select(bands).filterBounds(ee.Geometry(pathtofile)) + filtered_data = filtered_data.getInfo() + ... 
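+        (a hypothetical continuation of this sketch: reshape filtered_data
+        into a dataframe and save it as netCDF, mirroring what
+        gee2pecan_bands does for the Sentinel-2 collection)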
+ """ + + else: + print("other sources like AppEEARS go here") + return + + # if raw data is the requested output, process is completed + if input_type == output: + print("process is complete") + + else: + # locate the raw data file formed in initial step + input_file = "".join([outdir, area.name, "_", str(input_type), ".nc"]) + + # store all the requested conversions in a list + conversions = [] + for conv_type in output: + conversions.append("".join([input_type, "2", conv_type])) + + # perform the available conversions + if "bands2lai" in conversions: + print("using SNAP to calculate LAI") + bands2lai_snap(input_file, outdir) + + if "bands2ndvi" in conversions: + print("using GEE to calculate NDVI") + bands2ndvi(input_file, outdir) + + +if __name__ == "__main__": + remote_process( + "./satellitetools/test.geojson", + "./out/", + "2019-01-01", + "2019-12-31", + 1, + "gee", + "COPERNICUS/S2_SR", + "bands", + ["lai", "ndvi"], + ) diff --git a/modules/data.remote/inst/satellitetools/biophys_xarray.py b/modules/data.remote/inst/satellitetools/biophys_xarray.py new file mode 100644 index 00000000000..ca63c18a062 --- /dev/null +++ b/modules/data.remote/inst/satellitetools/biophys_xarray.py @@ -0,0 +1,235 @@ +# -*- coding: utf-8 -*- +""" +Created on Mon May 11 14:34:08 2020 + +@author: Olli Nevalainen (olli.nevalainen@fmi.fi), + Finnish Meteorological Institute) + +Olli's python implementation of ESA SNAP s2toolbox biophysical processor and +computation of vegetation indices. +See ATBD at https://step.esa.int/docs/extra/ATBD_S2ToolBox_L2B_V1.1.pdf +And java source code at +https://github.com/senbox-org/s2tbx/tree/master/s2tbx-biophysical/src/main/java/org/esa/s2tbx/biophysical + +Caveats +Currently changes out of bounds inputs and outputs to nan (or min or max value +if output wihtin tolerance). Maybe output flagging information as well ( i.e. +diffferent flags input and output out of bounds). + +Convex hull input checking currently disabled. It's computationally slow and + not sure of its benefits. Better to filter out bad data based on L2A quality + info/classification\ + and hope averaging removes some bad pixels. +""" + +import requests +import io +import numpy as np +import xarray as xr + +# url to Sentinel 2 Toolbox's auxdata +# This base_url points towards the original toolbox(not the one created by Olli) +base_url = "https://raw.githubusercontent.com/senbox-org/s2tbx/master/s2tbx-biophysical/src/main/resources/auxdata/2_1/{}/{}" + + +def get_fromurl(var, pattern): + """ + Fetches the contents of a text file from the base url and stores it in a ndarray. + + Author: Ayush Prasad + + Parameters + ---------- + var (str) -- type of the product, one of FAPAR, FCOVER, LAI, LAI_Cab and LAI_Cw. + pattern (str) -- name of the file excluding the initial variable part. + + Returns + ------- + ndarray -- loaded with the contents of the text file. + """ + # attach variable and file name to the base url + res_url = base_url.format(var, str(var) + "%s" % str(pattern)) + # make a GET request to the url to fetch the data. + res_url = requests.get(res_url) + # check the HTTP status code to see if any error has occured. + res_url.raise_for_status() + # store the contents of the url in an in-memory buffer and use it to load the ndarray. 
+ return np.loadtxt(io.BytesIO(res_url.content), delimiter=",") + + +# Read SNAP Biophysical processor neural network parameters +nn_params = {} +for var in ["FAPAR", "FCOVER", "LAI", "LAI_Cab", "LAI_Cw"]: + norm_minmax = get_fromurl(var, "_Normalisation") + denorm_minmax = get_fromurl(var, "_Denormalisation") + layer1_weights = get_fromurl(var, "_Weights_Layer1_Neurons") + layer1_bias = get_fromurl(var, "_Weights_Layer1_Bias").reshape(-1, 1) + layer2_weights = get_fromurl(var, "_Weights_Layer2_Neurons").reshape(1, -1) + layer2_bias = get_fromurl(var, "_Weights_Layer2_Bias").reshape(1, -1) + extreme_cases = get_fromurl(var, "_ExtremeCases") + + if var == "FCOVER": + nn_params[var] = { + "norm_minmax": norm_minmax, + "denorm_minmax": denorm_minmax, + "layer1_weights": layer1_weights, + "layer1_bias": layer1_bias, + "layer2_weights": layer2_weights, + "layer2_bias": layer2_bias, + "extreme_cases": extreme_cases, + } + else: + defdom_min = get_fromurl(var, "_DefinitionDomain_MinMax")[0, :].reshape(-1, 1) + defdom_max = get_fromurl(var, "_DefinitionDomain_MinMax")[1, :].reshape(-1, 1) + defdom_grid = get_fromurl(var, "_DefinitionDomain_Grid") + nn_params[var] = { + "norm_minmax": norm_minmax, + "denorm_minmax": denorm_minmax, + "layer1_weights": layer1_weights, + "layer1_bias": layer1_bias, + "layer2_weights": layer2_weights, + "layer2_bias": layer2_bias, + "defdom_min": defdom_min, + "defdom_max": defdom_max, + "defdom_grid": defdom_grid, + "extreme_cases": extreme_cases, + } + + +def _normalization(x, x_min, x_max): + x_norm = 2 * (x - x_min) / (x_max - x_min) - 1 + return x_norm + + +def _denormalization(y_norm, y_min, y_max): + y = 0.5 * (y_norm + 1) * (y_max - y_min) + return y + + +def _input_ouf_of_range(x, variable): + x_copy = x.copy() + x_bands = x_copy[:8, :] + + # check min max domain + defdom_min = nn_params[variable]["defdom_min"][:, 0].reshape(-1, 1) + defdom_max = nn_params[variable]["defdom_max"][:, 0].reshape(-1, 1) + bad_input_mask = (x_bands < defdom_min) | (x_bands > defdom_max) + bad_vector = np.any(bad_input_mask, axis=0) + x_bands[:, bad_vector] = np.nan + + # convex hull check, currently disabled due to time consumption vs benefit + # gridProject = lambda v: np.floor(10 * (v - defdom_min) / (defdom_max - defdom_min) + 1 ).astype(int) + # x_bands = gridProject(x_bands) + # isInGrid = lambda v: any((v == x).all() for x in nn_params[variable]['defdom_grid']) + # notInGrid = ~np.array([isInGrid(v) for v in x_bands.T]) + # x[:,notInGrid | bad_vector] = np.nan + + x_copy[:, bad_vector] = np.nan + return x_copy + + +def _output_ouf_of_range(output, variable): + new_output = np.copy(output) + tolerance = nn_params[variable]["extreme_cases"][0] + output_min = nn_params[variable]["extreme_cases"][1] + output_max = nn_params[variable]["extreme_cases"][2] + + new_output[output < (output_min + tolerance)] = np.nan + new_output[(output > (output_min + tolerance)) & (output < output_min)] = output_min + new_output[(output < (output_max - tolerance)) & (output > output_max)] = output_max + new_output[output > (output_max - tolerance)] = np.nan + return new_output + + +def _compute_variable(x, variable): + + x_norm = np.zeros_like(x) + x = _input_ouf_of_range(x, variable) + x_norm = _normalization( + x, + nn_params[variable]["norm_minmax"][:, 0].reshape(-1, 1), + nn_params[variable]["norm_minmax"][:, 1].reshape(-1, 1), + ) + + out_layer1 = np.tanh( + nn_params[variable]["layer1_weights"].dot(x_norm) + + nn_params[variable]["layer1_bias"] + ) + out_layer2 = ( + 
nn_params[variable]["layer2_weights"].dot(out_layer1) + + nn_params[variable]["layer2_bias"] + ) + output = _denormalization( + out_layer2, + nn_params[variable]["denorm_minmax"][0], + nn_params[variable]["denorm_minmax"][1], + )[0] + output = _output_ouf_of_range(output, variable) + output = output.reshape(1, np.shape(x)[1]) + return output + + +def run_snap_biophys(dataset, variable): + """Compute specified variable using the SNAP algorithm. + + See ATBD at https://step.esa.int/docs/extra/ATBD_S2ToolBox_L2B_V1.1.pdf + + Parameters + ---------- + dataset : xr dataset + xarray dataset. + variable : str + Options 'FAPAR', 'FCOVER', 'LAI', 'LAI_Cab' or 'LAI_Cw' + + Returns + ------- + xarray dataset + Adds the specified variable array to dataset (variable name in + lowercase). + + """ + # generate view angle bands/layers + vz = ( + np.ones_like(dataset.band_data[:, 0, :, :]).T + * np.cos(np.radians(dataset.view_zenith)).values + ) + vz = vz[..., np.newaxis] + vzarr = xr.DataArray( + vz, + coords=[dataset.y, dataset.x, dataset.time, ["view_zenith"]], + dims=["y", "x", "time", "band"], + ) + + sz = ( + np.ones_like(dataset.band_data[:, 0, :, :]).T + * np.cos(np.radians(dataset.sun_zenith)).values + ) + sz = sz[..., np.newaxis] + szarr = xr.DataArray( + sz, + coords=[dataset.y, dataset.x, dataset.time, ["sun_zenith"]], + dims=["y", "x", "time", "band"], + ) + + raz = ( + np.ones_like(dataset.band_data[:, 0, :, :]).T + * np.cos(np.radians(dataset.sun_azimuth - dataset.view_azimuth)).values + ) + raz = raz[..., np.newaxis] + razarr = xr.DataArray( + raz, + coords=[dataset.y, dataset.x, dataset.time, ["relative_azimuth"]], + dims=["y", "x", "time", "band"], + ) + + newarr = xr.concat([dataset.band_data, vzarr, szarr, razarr], dim="band") + newarr = newarr.stack(xy=("x", "y")) + arr = xr.apply_ufunc( + _compute_variable, + newarr, + input_core_dims=[["band", "xy"]], + output_core_dims=[["xy"]], + kwargs={"variable": variable}, + vectorize=True, + ).unstack() + return dataset.assign({variable.lower(): arr}) diff --git a/modules/data.remote/inst/satellitetools/gee.py b/modules/data.remote/inst/satellitetools/gee.py new file mode 100755 index 00000000000..cb74eb74d5d --- /dev/null +++ b/modules/data.remote/inst/satellitetools/gee.py @@ -0,0 +1,716 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +""" +Created on Thu Feb 6 15:24:12 2020 + +Module to retrieve Sentinel-2 data from Google Earth Engine (GEE). +Warning: the data is currently retrieved with 10m resolution (scale=10), so +the 20m resolution bands are resampled. +TODO: Add option for specifying the request spatial resolution. + +@author: Olli Nevalainen (olli.nevalainen@fmi.fi), + Finnish Meteorological Institute) + + +""" +import sys +import os +import ee +import datetime +import pandas as pd +import geopandas as gpd +import numpy as np +import xarray as xr +from functools import reduce + +ee.Initialize() + + +NO_DATA = -99999 +S2_REFL_TRANS = 10000 +# ----------------- Sentinel-2 ------------------------------------- +s2_qi_labels = ['NODATA', + 'SATURATED_DEFECTIVE', + 'DARK_FEATURE_SHADOW', + 'CLOUD_SHADOW', + 'VEGETATION', + 'NOT_VEGETATED', + 'WATER', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE'] + +s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE'] + + +class S2RequestParams(): + """S2 data request paramaters. 
+ + Attributes + ---------- + datestart : str + Starting date for data request in form "2019-01-01". + dateend : str + Starting date for data request in form "2019-12-31". + bands : list, optional + List of strings with band name. + the default is ['B3', 'B4', 'B5', + 'B6', 'B7', 'B8A', 'B11', 'B12']. + """ + + def __init__(self, datestart, dateend, bands=None): + """. + + Parameters + ---------- + datestart : str + Starting date for data request in form "2019-01-01". + dateend : str + Starting date for data request in form "2019-12-31". + bands : list, optional + List of strings with band name. + The default is ['B3', 'B4', 'B5', + 'B6', 'B7', 'B8A', 'B11', 'B12']. + + Returns + ------- + None. + + """ + default_bands = ['B3', 'B4', 'B5', 'B6', 'B7', 'B8A', 'B11', 'B12'] + + self.datestart = datestart + self.dateend = dateend + self.bands = bands if bands else default_bands + + +class AOI(): + """Area of interest for area info and data. + + Attributes + ---------- + name : str + Name of the area. + geometry : str + Geometry of the area of interest e.g. from geopandas. + Currently only polygons tested. The default is None. + coordinate_list : list, optional + List of coordinates of a polygon + (loop should be closed). Computed from geometry if not + provided. The default is None. + tile : str, optional + Tile id as string for the data. Used to keep the data in + same crs because an area can be in multiple tiles with + different crs. The default is None. + qi : pandas dataframe + Dataframe with quality information about available imagery for the AOI. + qi is empty at init and can be computed with + ee_get_s2_quality_info function. + data : pandas dataframe or xarray + Dataframe holding data retrieved from GEE. Data can be computed using + function + qi is empty at init and can be computed with ee_get_s2_data and + converted to xarray using s2_data_to_xarray function. + + Methods + ------- + __init__ + """ + + def __init__(self, name, geometry=None, coordinate_list=None, tile=None): + """. + + Parameters + ---------- + name : str + Name of the area. + geometry : geometry in wkt, optional + Geometry of the area of interest e.g. from geopandas. + Currently only polygons tested. The default is None. + coordinate_list : list, optional + List of coordinates of a polygon + (loop should be closed). Computed from geometry if not + provided. The default is None. + tile : str, optional + Tile id as string for the data. Used to keep the data in + same crs because an area can be in multiple tiles with + different crs. The default is None. + + Returns + ------- + None. + + """ + if not geometry and not coordinate_list: + sys.exit("AOI has to get either geometry or coordinates as list!") + elif geometry and not coordinate_list: + coordinate_list = list(geometry.exterior.coords) + elif coordinate_list and not geometry: + geometry = None + + self.name = name + self.geometry = geometry + self.coordinate_list = coordinate_list + self.qi = None + self.data = None + self.tile = tile + + +def ee_get_s2_quality_info(AOIs, req_params): + """Get S2 quality information from GEE. + + Parameters + ---------- + AOIs : list or AOI instance + List of AOI instances or single AOI instance. If multiple AOIs + proviveded the computation in GEE server is parallellized. + If too many areas with long time range is provided, user might + hit GEE memory limits. Then you should call this function + sequentally to all AOIs. + req_params : S2RequestParams instance + S2RequestParams instance with request details. 
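+
+    Typical flow: build an AOI and an S2RequestParams instance, call this
+    function, then read the per-image class percentages from the AOI's qi
+    attribute.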
+ + Returns + ------- + Nothing: + Computes qi attribute for the given AOI instances. + + """ + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [ee.Feature( + ee.Geometry.Polygon(a.coordinate_list), + {'name': a.name}) for a in AOIs] + feature_collection = ee.FeatureCollection(features) + + def ee_get_s2_quality_info_feature(feature): + + area = feature.geometry() + image_collection = ee.ImageCollection("COPERNICUS/S2_SR") \ + .filterBounds(area) \ + .filterDate(req_params.datestart, req_params.dateend) \ + .select(['SCL']) + + def ee_get_s2_quality_info_image(img): + productid = img.get('PRODUCT_ID') + assetid = img.id() + tileid = img.get('MGRS_TILE') + system_index = img.get('system:index') + proj = img.select("SCL").projection() + + # apply reducer to list + img = img.reduceRegion( + reducer=ee.Reducer.toList(), + geometry=area, + maxPixels=1e8, + scale=10) + + # get data into arrays + classdata = ee.Array( + ee.Algorithms.If(img.get("SCL"), + ee.Array(img.get("SCL")), + ee.Array([0]))) + + totalcount = classdata.length() + classpercentages = { + key: + classdata.eq(i).reduce(ee.Reducer.sum(), [0]) + .divide(totalcount).get([0]) + for i, key in enumerate(s2_qi_labels)} + + tmpfeature = ee.Feature(ee.Geometry.Point([0, 0])) \ + .set('productid', productid) \ + .set('system_index', system_index) \ + .set('assetid', assetid) \ + .set('tileid', tileid) \ + .set('projection', proj) \ + .set(classpercentages) + return tmpfeature + + s2_qi_image_collection = image_collection.map( + ee_get_s2_quality_info_image) + + return feature \ + .set('productid', s2_qi_image_collection + .aggregate_array('productid')) \ + .set('system_index', s2_qi_image_collection + .aggregate_array('system_index')) \ + .set('assetid', s2_qi_image_collection + .aggregate_array('assetid')) \ + .set('tileid', s2_qi_image_collection + .aggregate_array('tileid')) \ + .set('projection', s2_qi_image_collection + .aggregate_array('projection')) \ + .set({key: s2_qi_image_collection + .aggregate_array(key) for key in s2_qi_labels}) + + s2_qi_feature_collection = feature_collection.map( + ee_get_s2_quality_info_feature).getInfo() + + s2_qi = s2_feature_collection_to_dataframes(s2_qi_feature_collection) + + for a in AOIs: + name = a.name + a.qi = s2_qi[name] + + +def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): + """Get S2 data (level L2A, bottom of atmosphere data) from GEE. + + Warning: the data is currently retrieved with 10m resolution (scale=10), so + the 20m resolution bands are resampled. + TODO: Add option for specifying the request spatial resolution. + + Parameters + ---------- + AOIs : list or AOI instance + List of AOI instances or single AOI instance. If multiple AOIs + proviveded the computation in GEE server is parallellized. + If too many areas with long time range is provided, user might + hit GEE memory limits. Then you should call this function + sequentally to all AOIs. AOIs should have qi attribute computed first. + req_params : S2RequestParams instance + S2RequestParams instance with request details. + qi_threshold : float + Threshold value to filter images based on used qi filter. + qi filter holds labels of classes whose percentages within the AOI + is summed. If the sum is larger then the qi_threhold, data will not be + retrieved for that date/image. The default is 1, meaning all data is + retrieved. + qi_filter : list + List of strings with class labels (of unwanted classes) used to compute qi value, + see qi_threhold. 
The default is s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE']. + + Returns + ------- + Nothing: + Computes data attribute for the given AOI instances. + + """ + datestart = req_params.datestart + dateend = req_params. dateend + bands = req_params.bands + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [] + for a in AOIs: + filtered_qi = filter_s2_qi_dataframe(a.qi, qi_threshold, qi_filter) + if len(filtered_qi) == 0: + print('No observations to retrieve for area %s' % a.name) + continue + + if a.tile is None: + min_tile = min(filtered_qi['tileid'].values) + filtered_qi = (filtered_qi[ + filtered_qi['tileid'] == min_tile]) + a.tile = min_tile + else: + filtered_qi = (filtered_qi[ + filtered_qi['tileid'] == a.tile]) + + full_assetids = "COPERNICUS/S2_SR/" + filtered_qi['assetid'] + image_list = [ee.Image(asset_id) for asset_id in full_assetids] + crs = filtered_qi['projection'].values[0]["crs"] + feature = ee.Feature(ee.Geometry.Polygon(a.coordinate_list), + {'name': a.name, + 'image_list': image_list}) + + features.append(feature) + + if len(features) == 0: + print('No data to be retrieved!') + return None + + feature_collection = ee.FeatureCollection(features) + + def ee_get_s2_data_feature(feature): + geom = feature.geometry(0.01, crs) + image_collection = \ + ee.ImageCollection.fromImages(feature.get('image_list')) \ + .filterBounds(geom) \ + .filterDate(datestart, dateend) \ + .select(bands + ['SCL']) + + def ee_get_s2_data_image(img): + # img = img.clip(geom) + productid = img.get('PRODUCT_ID') + assetid = img.id() + tileid = img.get('MGRS_TILE') + system_index = img.get('system:index') + proj = img.select(bands[0]).projection() + sun_azimuth = img.get('MEAN_SOLAR_AZIMUTH_ANGLE') + sun_zenith = img.get('MEAN_SOLAR_ZENITH_ANGLE') + view_azimuth = ee.Array( + [img.get('MEAN_INCIDENCE_AZIMUTH_ANGLE_%s' % b) + for b in bands]) \ + .reduce(ee.Reducer.mean(), [0]).get([0]) + view_zenith = ee.Array( + [img.get('MEAN_INCIDENCE_ZENITH_ANGLE_%s' % b) + for b in bands]) \ + .reduce(ee.Reducer.mean(), [0]).get([0]) + + img = img.resample('bilinear') \ + .reproject(crs=crs, scale=10) + + # get the lat lon and add the ndvi + image_grid = ee.Image.pixelCoordinates( + ee.Projection(crs)) \ + .addBands([img.select(b) for b in bands + ['SCL']]) + + # apply reducer to list + image_grid = image_grid.reduceRegion( + reducer=ee.Reducer.toList(), + geometry=geom, + maxPixels=1e8, + scale=10) + + # get data into arrays + x_coords = ee.Array(image_grid.get("x")) + y_coords = ee.Array(image_grid.get("y")) + band_data = {b: ee.Array(image_grid.get("%s" % b)) for b in bands} + + scl_data = ee.Array(image_grid.get("SCL")) + + # perform LAI et al. computation possibly here! 
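+            # (sketch of what that would take, not implemented here: the SNAP
+            #  network would have to be ported to ee server-side operations;
+            #  in this workflow the bands are instead downloaded and LAI is
+            #  computed client-side with biophys_xarray.run_snap_biophys.)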
+
+            tmpfeature = ee.Feature(ee.Geometry.Point([0, 0])) \
+                .set('productid', productid) \
+                .set('system_index', system_index) \
+                .set('assetid', assetid) \
+                .set('tileid', tileid) \
+                .set('projection', proj) \
+                .set('sun_zenith', sun_zenith) \
+                .set('sun_azimuth', sun_azimuth) \
+                .set('view_zenith', view_zenith) \
+                .set('view_azimuth', view_azimuth) \
+                .set('x_coords', x_coords) \
+                .set('y_coords', y_coords) \
+                .set('SCL', scl_data) \
+                .set(band_data)
+            return tmpfeature
+
+        s2_data_feature = image_collection.map(ee_get_s2_data_image)
+
+        return feature \
+            .set('productid', s2_data_feature
+                 .aggregate_array('productid')) \
+            .set('system_index', s2_data_feature
+                 .aggregate_array('system_index')) \
+            .set('assetid', s2_data_feature
+                 .aggregate_array('assetid')) \
+            .set('tileid', s2_data_feature
+                 .aggregate_array('tileid')) \
+            .set('projection', s2_data_feature
+                 .aggregate_array('projection')) \
+            .set('sun_zenith', s2_data_feature
+                 .aggregate_array('sun_zenith')) \
+            .set('sun_azimuth', s2_data_feature
+                 .aggregate_array('sun_azimuth')) \
+            .set('view_zenith', s2_data_feature
+                 .aggregate_array('view_zenith')) \
+            .set('view_azimuth', s2_data_feature
+                 .aggregate_array('view_azimuth')) \
+            .set('x_coords', s2_data_feature
+                 .aggregate_array('x_coords')) \
+            .set('y_coords', s2_data_feature
+                 .aggregate_array('y_coords')) \
+            .set('SCL', s2_data_feature
+                 .aggregate_array('SCL')) \
+            .set({b: s2_data_feature
+                  .aggregate_array(b) for b in bands})
+    s2_data_feature_collection = feature_collection.map(
+        ee_get_s2_data_feature).getInfo()
+
+    s2_data = s2_feature_collection_to_dataframes(s2_data_feature_collection)
+
+    for a in AOIs:
+        name = a.name
+        a.data = s2_data[name]
+
+def filter_s2_qi_dataframe(s2_qi_dataframe, qi_thresh, s2_filter=s2_filter1):
+    """Filter qi dataframe.
+
+    Parameters
+    ----------
+    s2_qi_dataframe : pandas dataframe
+        S2 quality information dataframe (AOI instance qi attribute).
+    qi_thresh : float
+        Threshold value to filter images based on the used qi filter.
+        The qi filter holds labels of classes whose percentages within the AOI
+        are summed. If the sum is larger than qi_thresh, data will not be
+        retrieved for that date/image. A value of 1 means all data is
+        retrieved.
+    s2_filter : list
+        List of strings with class labels (of unwanted classes) used to compute qi value,
+        see qi_thresh. The default is s2_filter1 = ['NODATA',
+                          'SATURATED_DEFECTIVE',
+                          'CLOUD_SHADOW',
+                          'UNCLASSIFIED',
+                          'CLOUD_MEDIUM_PROBA',
+                          'CLOUD_HIGH_PROBA',
+                          'THIN_CIRRUS',
+                          'SNOW_ICE'].
+
+    Returns
+    -------
+    filtered_s2_qi_df : pandas dataframe
+        Filtered dataframe.
+
+    """
+    filtered_s2_qi_df = s2_qi_dataframe.loc[
+        s2_qi_dataframe[s2_filter].sum(axis=1) < qi_thresh]
+
+    return filtered_s2_qi_df
+
+
+def s2_feature_collection_to_dataframes(s2_feature_collection):
+    """Convert feature collection dict from GEE to pandas dataframe.
+
+    Parameters
+    ----------
+    s2_feature_collection : dict
+        Dictionary returned by GEE.
+
+    Returns
+    -------
+    dataframes : pandas dataframe
+        GEE dictionary converted to pandas dataframe.
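+        One dataframe per feature/AOI, keyed by the feature's name property,
+        with one row per image date.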
+ + """ + dataframes = {} + + for featnum in range(len(s2_feature_collection['features'])): + tmp_dict = {} + key = s2_feature_collection['features'][featnum]['properties']['name'] + productid = (s2_feature_collection + ['features'] + [featnum] + ['properties'] + ['productid']) + + dates = [datetime.datetime.strptime( + d.split('_')[2], '%Y%m%dT%H%M%S') for d in productid] + + tmp_dict.update({'Date': dates}) # , 'crs': crs} + properties = s2_feature_collection['features'][featnum]['properties'] + for prop, data in properties.items(): + if prop not in ['Date'] : # 'crs' ,, 'projection' + tmp_dict.update({prop: data}) + dataframes[key] = pd.DataFrame(tmp_dict) + return dataframes + +def compute_ndvi(dataset): + """Compute NDVI + + Parameters + ---------- + dataset : xarray dataset + + Returns + ------- + xarray dataset + Adds 'ndvi' xr array to xr dataset. + + """ + b4 = dataset.band_data.sel(band='B4') + b8 = dataset.band_data.sel(band='B8A') + ndvi = (b8 - b4) / (b8 + b4) + return dataset.assign({'ndvi': ndvi}) + + + +def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True): + """Convert AOI.data dataframe to xarray dataset. + + Parameters + ---------- + aoi : AOI instance + AOI instance. + request_params : S2RequestParams + S2RequestParams. + convert_to_reflectance : boolean, optional + Convert S2 data from GEE (integers) to reflectances (floats), + i,e, divide by 10000. + The default is True. + + Returns + ------- + Nothing. + Converts the data atrribute dataframe to xarray Dataset. + xarray is better for handling multiband data. It also has + implementation for saving the data in NetCDF format. + + """ + # check that all bands have full data! + datalengths = [aoi.data[b].apply( + lambda d: len(d)) == len(aoi.data.iloc[0]['x_coords']) + for b in request_params.bands] + consistent_data = reduce(lambda a, b: a & b, datalengths) + aoi.data = aoi.data[consistent_data] + + # 2D data + bands = request_params.bands + + # 1D data + list_vars = ['assetid', 'productid', 'sun_azimuth', + 'sun_zenith', 'system_index', + 'view_azimuth', 'view_zenith'] + + # crs from projection + crs = aoi.data['projection'].values[0]['crs'] + tileid = aoi.data['tileid'].values[0] + # original number of pixels requested (pixels inside AOI) + aoi_pixels = len(aoi.data.iloc[0]['x_coords']) + + # transform 2D data to arrays + for b in bands: + + aoi.data[b] = aoi.data.apply( + lambda row: s2_lists_to_array( + row['x_coords'], row['y_coords'], row[b], + convert_to_reflectance=convert_to_reflectance), axis=1) + + aoi.data['SCL'] = aoi.data.apply( + lambda row: s2_lists_to_array( + row['x_coords'], row['y_coords'], row['SCL'], + convert_to_reflectance=False), axis=1) + + array = aoi.data[bands].values + + # this will stack the array to ndarray with + # dimension order = (time, band, x,y) + narray = np.stack( + [np.stack(array[:, b], axis=2) for b in range(len(bands))], + axis=2).transpose() # .swapaxes(2, 3) + + scl_array = np.stack(aoi.data['SCL'].values, axis=2).transpose() + + coords = {'time': aoi.data['Date'].values, + 'band': bands, + 'x': np.unique(aoi.data.iloc[0]['x_coords']), + 'y': np.unique(aoi.data.iloc[0]['y_coords']) + } + + dataset_dict = {'band_data': (['time', 'band', 'x', 'y'], narray), + 'SCL': (['time', 'x', 'y'], scl_array)} + var_dict = {var: (['time'], aoi.data[var]) for var in list_vars} + dataset_dict.update(var_dict) + + ds = xr.Dataset(dataset_dict, + coords=coords, + attrs={'name': aoi.name, + 'crs': crs, + 'tile_id': tileid, + 'aoi_geometry': aoi.geometry.to_wkt(), + 
'aoi_pixels': aoi_pixels})
+    aoi.data = ds
+
+
+def s2_lists_to_array(x_coords, y_coords, data, convert_to_reflectance=True):
+    """Convert 1D lists of coordinates and corresponding values to 2D array.
+
+    Parameters
+    ----------
+    x_coords : list
+        List of x-coordinates.
+    y_coords : list
+        List of y-coordinates.
+    data : list
+        List of data values corresponding to the coordinates.
+    convert_to_reflectance : boolean, optional
+        Convert S2 data from GEE (integers) to reflectances (floats),
+        i.e., divide by 10000.
+        The default is True.
+
+    Returns
+    -------
+    arr : 2D numpy array
+        Return 2D numpy array.
+
+    """
+    # get the unique coordinates
+    uniqueYs = np.unique(y_coords)
+    uniqueXs = np.unique(x_coords)
+
+    # get number of columns and rows from coordinates
+    ncols = len(uniqueXs)
+    nrows = len(uniqueYs)
+
+    # determine pixelsizes
+    # ys = uniqueYs[1] - uniqueYs[0]
+    # xs = uniqueXs[1] - uniqueXs[0]
+
+    y_vals, y_idx = np.unique(y_coords, return_inverse=True)
+    x_vals, x_idx = np.unique(x_coords, return_inverse=True)
+    if convert_to_reflectance:
+        arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64)
+        arr.fill(np.nan)
+        arr[y_idx, x_idx] = np.array(data, dtype=np.float64) / S2_REFL_TRANS
+    else:
+        arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.int32)
+        arr.fill(NO_DATA)  # or whatever your desired missing data flag is
+        arr[y_idx, x_idx] = data
+    arr = np.flipud(arr)
+    return arr
+
+
+def xr_dataset_to_timeseries(xr_dataset, variables):
+    """Compute timeseries dataframe from xr dataset.
+
+    Parameters
+    ----------
+    xr_dataset : xarray dataset
+
+    variables : list
+        list of variable names as string.
+
+    Returns
+    -------
+    df : pandas dataframe
+        Pandas dataframe with mean, std, se and percentage of NaNs inside AOI.
+
+    """
+    df = pd.DataFrame({'Date': pd.to_datetime(xr_dataset.time.values)})
+
+    for var in variables:
+        df[var] = xr_dataset[var].mean(dim=['x', 'y'])
+        df[var+'_std'] = xr_dataset[var].std(dim=['x', 'y'])
+
+        # NaNs occur due to missing data from 1D to 2D array
+        # (pixels outside the polygon);
+        # from the SNAP algorithm NaNs occur due to input/output out of bounds
+        # checking.
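+        # (illustrative numbers: on a 20 x 20 pixel grid with aoi_pixels = 250,
+        #  out_of_aoi_pixels = 400 - 250 = 150, so a scene with 180 NaNs has
+        #  180 - 150 = 30 NaNs inside the AOI.)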
+ # TODO: flaggging with snap biophys algorith or some other solution to + # check which nan are from snap algorithm and which from 1d to 2d transformation + nans = np.isnan(xr_dataset[var]).sum(dim=['x', 'y']) + sample_n = len(xr_dataset[var].x) * len(xr_dataset[var].y) - nans + + # compute how many of the nans are inside aoi (due to snap algorithm) + out_of_aoi_pixels = (len(xr_dataset[var].x) * len(xr_dataset[var].y) + - xr_dataset.aoi_pixels) + nans_inside_aoi = nans - out_of_aoi_pixels + df['aoi_nan_percentage'] = nans_inside_aoi / xr_dataset.aoi_pixels + + df[var+'_se'] = df[var+'_std'] / np.sqrt(sample_n) + + return df diff --git a/modules/data.remote/inst/satellitetools/test.geojson b/modules/data.remote/inst/satellitetools/test.geojson new file mode 100644 index 00000000000..9a890595d4c --- /dev/null +++ b/modules/data.remote/inst/satellitetools/test.geojson @@ -0,0 +1,38 @@ +{ + "type": "FeatureCollection", + "features": [ + { + "type": "Feature", + "properties": { + "name": "Reykjavik" + }, + "geometry": { + "type": "Polygon", + "coordinates": [ + [ + [ + -21.788935661315918, + 64.04460250271562 + ], + [ + -21.786317825317383, + 64.04460250271562 + ], + [ + -21.786317825317383, + 64.04537258754581 + ], + [ + -21.788935661315918, + 64.04537258754581 + ], + [ + -21.788935661315918, + 64.04460250271562 + ] + ] + ] + } + } + ] +} diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index cf54ceaf091..52fc0422110 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -5,39 +5,39 @@ \title{call_MODIS} \usage{ call_MODIS( - outdir = NULL, var, + product, + band, site_info, product_dates, + outdir = NULL, run_parallel = FALSE, ncores = NULL, - product, - band, package_method = "MODISTools", QC_filter = FALSE, progress = FALSE ) } \arguments{ -\item{outdir}{where the output file will be stored. Default is NULL} - \item{var}{the simple name of the modis dataset variable (e.g. lai)} +\item{product}{string value for MODIS product number} + +\item{band}{string value for which measurement to extract} + \item{site_info}{Bety list of site info for parsing MODIS data: list(site_id, site_name, lat, lon, time_zone)} \item{product_dates}{a character vector of the start and end date of the data in YYYYJJJ} +\item{outdir}{where the output file will be stored. Default is NULL and in this case only values are returned. When path is provided values are returned and written to disk.} + \item{run_parallel}{optional method to download data paralleize. Only works if more than 1 site is needed and there are >1 CPUs available.} \item{ncores}{number of cpus to use if run_parallel is set to TRUE. If you do not know the number of CPU's available, enter NULL.} -\item{product}{string value for MODIS product number} - -\item{band}{string value for which measurement to extract} - \item{package_method}{string value to inform function of which package method to use to download modis data. 
Either "MODISTools" or "reticulate" (optional)} @@ -64,14 +64,14 @@ site_info <- list( lon = 90, time_zone = "UTC") test_modistools <- call_MODIS( - outdir = NULL, var = "lai", + product = "MOD15A2H", + band = "Lai_500m", site_info = site_info, product_dates = c("2001150", "2001365"), + outdir = NULL, run_parallel = TRUE, ncores = NULL, - product = "MOD15A2H", - band = "Lai_500m", package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION index 00204e8566c..cd10e25c424 100644 --- a/modules/emulator/DESCRIPTION +++ b/modules/emulator/DESCRIPTION @@ -13,7 +13,7 @@ Imports: mlegp, coda (>= 0.18), MASS, - tmvtnorm, + TruncatedNormal (>= 2.2), lqmm, MCMCpack Description: Implementation of a Gaussian Process model (both likelihood and @@ -21,4 +21,4 @@ Description: Implementation of a Gaussian Process model (both likelihood and for sampling design and prediction. License: BSD_3_clause + file LICENSE Encoding: UTF-8 -RoxygenNote: 7.0.2 +RoxygenNote: 7.1.0 diff --git a/modules/emulator/R/minimize.GP.R b/modules/emulator/R/minimize.GP.R index 79a92468729..6e1c4508306 100644 --- a/modules/emulator/R/minimize.GP.R +++ b/modules/emulator/R/minimize.GP.R @@ -271,7 +271,7 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn } ## propose new parameters - xnew <- tmvtnorm::rtmvnorm(1, mean = c(xcurr), sigma = jcov, lower = rng[,1], upper = rng[,2]) + xnew <- TruncatedNormal::rtmvnorm(1, mu = c(xcurr), sigma = jcov, lb = rng[,1], ub = rng[,2]) # if(bounded(xnew,rng)){ # re-predict SS @@ -282,16 +282,16 @@ mcmc.GP <- function(gp, x0, nmcmc, rng, format = "lin", mix = "joint", splinefcn # don't update the currllp ( = llik.par, e.g. tau) yet # calculate posterior with xcurr | currllp ycurr <- get_y(currSS, xcurr, llik.fn, priors, currllp) - HRcurr <- tmvtnorm::dtmvnorm(c(xnew), c(xcurr), jcov, - lower = rng[,1], upper = rng[,2], log = TRUE) + HRcurr <- TruncatedNormal::dtmvnorm(c(xnew), c(xcurr), jcov, + lb = rng[,1], ub = rng[,2], log = TRUE, B = 1e2) newSS <- get_ss(gp, xnew, pos.check) if(all(newSS != -Inf)){ newllp <- pda.calc.llik.par(settings, n.of.obs, newSS, hyper.pars) ynew <- get_y(newSS, xnew, llik.fn, priors, newllp) - HRnew <- tmvtnorm::dtmvnorm(c(xcurr), c(xnew), jcov, - lower = rng[,1], upper = rng[,2], log = TRUE) + HRnew <- TruncatedNormal::dtmvnorm(c(xcurr), c(xnew), jcov, + lb = rng[,1], ub = rng[,2], log = TRUE, B = 1e2) if (is.accepted(ycurr+HRcurr, ynew+HRnew)) { xcurr <- xnew diff --git a/modules/meta.analysis/R/meta.analysis.R b/modules/meta.analysis/R/meta.analysis.R index 113c6e4b757..e572b6f5128 100644 --- a/modules/meta.analysis/R/meta.analysis.R +++ b/modules/meta.analysis/R/meta.analysis.R @@ -109,11 +109,10 @@ pecan.ma <- function(trait.data, prior.distns, ## check for excess missing data if (all(is.na(data[["obs.prec"]]))) { - if (verbose) { - writeLines("NO ERROR STATS PROVIDED, DROPPING RANDOM EFFECTS") - } - data$site <- rep(1, nrow(data)) - data$trt <- rep(0, nrow(data)) + PEcAn.logger::logger.warn("NO ERROR STATS PROVIDED\n Check meta-analysis Model Convergence", + "and consider turning off Random Effects by", + "setting FALSE", + "in your pecan.xml settings file ") } if (!random) { diff --git a/scripts/compile.sh b/scripts/compile.sh new file mode 100755 index 00000000000..462c2ac55aa --- /dev/null +++ b/scripts/compile.sh @@ -0,0 +1,3 @@ +#!/bin/bash + +docker-compose exec executor sh -c 'cd /pecan && make' diff --git a/shiny/BrownDog/server.R 
b/shiny/BrownDog/server.R index bdf6cf74178..99ea2fbe366 100644 --- a/shiny/BrownDog/server.R +++ b/shiny/BrownDog/server.R @@ -33,9 +33,8 @@ server <- shinyServer(function(input, output, session) { output$modelSelector <- renderUI({ bety <- betyConnect("../../web/config.php") - con <- bety$con - on.exit(db.close(con), add = TRUE) - models <- db.query("SELECT name FROM modeltypes;", con) + on.exit(db.close(bety), add = TRUE) + models <- db.query("SELECT name FROM modeltypes;", bety) selectInput("model", "Model", models) }) @@ -75,8 +74,7 @@ server <- shinyServer(function(input, output, session) { observeEvent(input$type, { # get all sites name, lat and lon by sitegroups bety <- betyConnect("../../web/config.php") - con <- bety$con - on.exit(db.close(con), add = TRUE) + on.exit(db.close(bety), add = TRUE) sites <- db.query( paste0( @@ -87,7 +85,7 @@ server <- shinyServer(function(input, output, session) { input$type, "');" ), - con + bety ) if(length(sites) > 0){ diff --git a/shiny/ViewMet/server.R b/shiny/ViewMet/server.R index 4cbacdea701..3cb45c022e0 100644 --- a/shiny/ViewMet/server.R +++ b/shiny/ViewMet/server.R @@ -138,7 +138,7 @@ server <- function(input, output, session) { formatid <- tbl(bety, "inputs") %>% filter(id == inputid) %>% pull(format_id) siteid <- tbl(bety, "inputs") %>% filter(id == inputid) %>% pull(site_id) - site = query.site(con = bety$con, siteid) + site = query.site(con = bety, siteid) current_nc <- ncdf4::nc_open(rv$load.paths[i]) vars_in_file <- names(current_nc[["var"]]) diff --git a/shiny/dbsync/Dockerfile b/shiny/dbsync/Dockerfile index 2506d2236c5..d5a89a6ca0f 100644 --- a/shiny/dbsync/Dockerfile +++ b/shiny/dbsync/Dockerfile @@ -6,10 +6,16 @@ ENV PGHOST=postgres \ PGPASSWORD=bety \ GEOCACHE=/srv/shiny-server/geoip.json -RUN apt-get -y install libpq-dev libssl-dev \ +RUN apt-get update \ + && apt-get -y install libpq-dev libssl-dev \ && install2.r -e -s -n -1 curl dbplyr DT leaflet RPostgreSQL \ - && rm -rf /srv/shiny-server/* + && rm -rf /srv/shiny-server/* \ + && rm -rf /var/lib/apt/lists/* ADD . /srv/shiny-server/ +ADD https://raw.githubusercontent.com/rocker-org/shiny/master/shiny-server.sh /usr/bin/ + +RUN chmod +x /usr/bin/shiny-server.sh + # special script to start shiny server and preserve env variable CMD /srv/shiny-server/save-env-shiny.sh diff --git a/shiny/dbsync/app.R b/shiny/dbsync/app.R index 98fdda1bc21..da4f3065247 100644 --- a/shiny/dbsync/app.R +++ b/shiny/dbsync/app.R @@ -28,6 +28,9 @@ host_mapping <- list( "paleon-pecan.virtual.crc.nd.edu"="crc.nd.edu" ) +# ignored servers, is reset on refresh +ignored_servers <- c() + # given a IP address lookup geo spatital info # uses a cache to prevent to many requests (1000 per day) get_geoip <- function(ip) { @@ -55,6 +58,8 @@ get_geoip <- function(ip) { # get a list of all servers in BETY and their geospatial location get_servers <- function() { + ignored_servers <<- c() + # connect to BETYdb bety <- DBI::dbConnect( DBI::dbDriver("PostgreSQL"), @@ -104,16 +109,21 @@ get_servers <- function() { # fetch information from the actual servers check_servers <- function(servers, progress) { + check_servers <- servers$sync_url[! 
servers$sync_host_id %in% ignored_servers] + # generic failure message to increment progress failure <- function(res) { + print(res) progress$inc(amount = 1) } # version information server_version <- function(res) { + url <- sub("version.txt", "bety.tar.gz", res$url) progress$inc(amount = 0, message = paste("Processing", progress$getValue(), "of", progress$getMax())) - if (res$status == 200) { - url <- sub("version.txt", "bety.tar.gz", res$url) + print(paste(res$status, url)) + if (res$status == 200 || res$status == 226) { + check_servers <<- check_servers[check_servers != url] version <- strsplit(rawToChar(res$content), '\t', fixed = TRUE)[[1]] if (!is.na(as.numeric(version[1]))) { servers[servers$sync_url == url,'version'] <<- version[2] @@ -127,14 +137,15 @@ check_servers <- function(servers, progress) { } progress$inc(amount = 1) } - urls <- sapply(servers[,'sync_url'], function(x) { sub("bety.tar.gz", "version.txt", x) }) - lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_version, fail = failure, handle = curl::new_handle(connecttimeout=1)) }) + urls <- sapply(check_servers, function(x) { sub("bety.tar.gz", "version.txt", x) }) + lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_version, fail = failure) } ) # log information server_log <- function(res) { + url <- sub("sync.log", "bety.tar.gz", res$url) progress$inc(amount = 0, message = paste("Processing", progress$getValue(), "of", progress$getMax())) - if (res$status == 200) { - url <- sub("sync.log", "bety.tar.gz", res$url) + print(paste(res$status, url)) + if (res$status == 200 || res$status == 226) { lines <- strsplit(rawToChar(res$content), '\n', fixed = TRUE)[[1]] now <- as.POSIXlt(Sys.time(), tz="UTC") for (line in tail(lines, maxlines)) { @@ -152,12 +163,13 @@ check_servers <- function(servers, progress) { } progress$inc(amount = 1) } - urls <- sapply(servers[,'sync_url'], function(x) { sub("bety.tar.gz", "sync.log", x) }) - lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_log, fail = failure, handle = curl::new_handle(connecttimeout=1)) }) + urls <- sapply(check_servers, function(x) { sub("bety.tar.gz", "sync.log", x) }) + lapply(urls, function(x) { curl::curl_fetch_multi(x, done = server_log, fail = failure) } ) # run queries in parallel curl::multi_run() - myservers <<- servers + ignored_servers <<- c(ignored_servers, servers[servers$sync_url %in% check_servers, "sync_host_id"]) + return(servers) } @@ -257,12 +269,15 @@ server <- function(input, output, session) { # update sync list (slow) observeEvent(input$refresh_sync, { + servers <- values$servers session$sendCustomMessage("disableUI", "") - progress <- Progress$new(session, min=0, max=2*nrow(values$servers)) - values$servers <- check_servers(values$servers, progress) - values$sync <- check_sync(values$servers) + progress <- Progress$new(session, min=0, max=2*(nrow(servers)-length(ignored_servers))) + servers <- check_servers(servers, progress) + sync <- check_sync(servers) progress$close() session$sendCustomMessage("enableUI", "") + values$servers <- servers + values$sync <- sync }) # create a map of all servers that have a sync_host_id and sync_url @@ -282,7 +297,11 @@ server <- function(input, output, session) { # create a table of all servers that have a sync_host_id and sync_url output$table <- DT::renderDataTable({ - DT::datatable(values$servers %>% dplyr::select("sync_host_id", "hostname", "city", "country", "lastdump", "migrations")) + ignored <- rep("gray", length(ignored_servers) + 1) + 
DT::datatable(values$servers %>% + dplyr::select("sync_host_id", "hostname", "city", "country", "lastdump", "migrations"), + rownames = FALSE) %>% + DT::formatStyle('sync_host_id', target = "row", color = DT::styleEqual(c(ignored_servers, "-1"), ignored)) }) } diff --git a/tests/docker.sipnet.xml b/tests/docker.sipnet.xml new file mode 100644 index 00000000000..3da74156bc5 --- /dev/null +++ b/tests/docker.sipnet.xml @@ -0,0 +1,67 @@ + + + /data/tests/sipnet + + + + PostgreSQL + bety + bety + postgres + bety + FALSE + + + + + + temperate.coniferous + + + + + 3000 + FALSE + 1.2 + AUTO + + + + NPP + + + + + -1 + 1 + + NPP + + + + SIPNET + r136 + + + + + 772 + + + + 5000000005 + + + 2002-01-01 00:00:00 + 2005-12-31 00:00:00 + pecan/dbfiles + + + + localhost + + amqp://guest:guest@rabbitmq/%2F + SIPNET_r136 + + + diff --git a/web/04-runpecan.php b/web/04-runpecan.php index 16d2f44626c..2bbf990d70e 100644 --- a/web/04-runpecan.php +++ b/web/04-runpecan.php @@ -532,7 +532,11 @@ } # create the message - $message = '{"folder": "' . $folder . '", "workflowid": "' . $workflowid . '"}'; + $message = '{"folder": "' . $folder . '", "workflowid": "' . $workflowid . '"'; + if ($model_edit) { + $message .= ', "modeledit": true'; + } + $message .= '}'; send_rabbitmq_message($message, $rabbitmq_uri, $rabbitmq_queue); #done diff --git a/web/07-continue.php b/web/07-continue.php index 9b319de9799..6e5142c5a44 100644 --- a/web/07-continue.php +++ b/web/07-continue.php @@ -53,7 +53,6 @@ $stmt->closeCursor(); close_database(); -$exec = "R_LIBS_USER=\"$R_library_path\" $Rbinary CMD BATCH"; $path = "05-running.php?workflowid=$workflowid&hostname=${hostname}"; if ($pecan_edit) { $path .= "&pecan_edit=pecan_edit"; @@ -74,13 +73,6 @@ $fh = fopen($folder . DIRECTORY_SEPARATOR . "STATUS", 'a') or die("can't open file"); fwrite($fh, "\t" . date("Y-m-d H:i:s") . "\tDONE\t\n"); fclose($fh); - - $exec .= " --continue workflow.R workflow2.Rout"; -} else { - if ($model_edit) { - $exec .= " --advanced"; - } - $exec .= " workflow.R"; } # start the workflow again @@ -91,11 +83,28 @@ } else { $rabbitmq_queue = "pecan"; } - $msg_exec = str_replace("\"", "'", $exec); - $message = '{"folder": "' . $folder . '", "custom_application": "' . $msg_exec . '"}'; + + $message = '{"folder": "' . $folder . '", "workflowid": "' . $workflowid . '"'; + if (file_exists($folder . DIRECTORY_SEPARATOR . "STATUS")) { + $message .= ', "continue": true'; + } else if ($model_edit) { + $message .= ', "modeledit": true'; + } + $message .= '}'; send_rabbitmq_message($message, $rabbitmq_uri, $rabbitmq_queue); } else { chdir($folder); + + $exec = "R_LIBS_USER=\"$R_library_path\" $Rbinary CMD BATCH"; + if (file_exists($folder . DIRECTORY_SEPARATOR . 
"STATUS")) { + $exec .= " --continue workflow.R workflow2.Rout"; + } else { + if ($model_edit) { + $exec .= " --advanced"; + } + $exec .= " workflow.R"; + } + pclose(popen("$exec &", 'r')); } From d45690cf60bceb67d13a1c4414c218bc64049fe0 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 6 Jul 2020 10:32:42 +0530 Subject: [PATCH 1166/2289] Update and rename l8gee2pecan_bands.py to gee2pecan_l8.py --- .../{l8gee2pecan_bands.py => gee2pecan_l8.py} | 79 +++++++------------ 1 file changed, 28 insertions(+), 51 deletions(-) rename modules/data.remote/inst/{l8gee2pecan_bands.py => gee2pecan_l8.py} (75%) diff --git a/modules/data.remote/inst/l8gee2pecan_bands.py b/modules/data.remote/inst/gee2pecan_l8.py similarity index 75% rename from modules/data.remote/inst/l8gee2pecan_bands.py rename to modules/data.remote/inst/gee2pecan_l8.py index 2394bdb2405..020a4ad2155 100644 --- a/modules/data.remote/inst/l8gee2pecan_bands.py +++ b/modules/data.remote/inst/gee2pecan_l8.py @@ -3,15 +3,17 @@ Requires Python3 +Bands retrieved: B1, B2, B3, B4, B5, B6, B7, B10, B11 along with computed NDVI + If ROI is a Point, this function can be used for getting SR data from Landsat 7, 5 and 4 as well. Author: Ayush Prasad """ - +from gee_utils import create_geo, get_siteaoi, get_sitename import ee import pandas as pd -import datetime import geopandas as gpd +import datetime import os import xarray as xr import numpy as np @@ -20,7 +22,7 @@ ee.Initialize() -def l8gee2pecan_bands(geofile, outdir, start, end, ic, vi, qc, bands=["B5", "B4"]): +def gee2pecan_l8(geofile, outdir, start, end, qc=1): """ Extracts Landsat 8 SR band data from GEE @@ -49,11 +51,10 @@ def l8gee2pecan_bands(geofile, outdir, start, end, ic, vi, qc, bands=["B5", "B4" """ - # read in the geojson file - df = gpd.read_file(geofile) - # scale (int) Default: 30 scale = 30 + # bands retrieved + bands = ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11"] def reduce_region(image): """ @@ -75,7 +76,7 @@ def mask(image): # create image collection depending upon the qc choice if qc == True: landsat = ( - ee.ImageCollection(ic) + ee.ImageCollection("LANDSAT/LC08/C01/T1_SR") .filterDate(start, end) .map(mask) .sort("system:time_start", True) @@ -83,37 +84,28 @@ def mask(image): else: landsat = ( - ee.ImageCollection(ic) + ee.ImageCollection("LANDSAT/LC08/C01/T1_SR") .filterDate(start, end) .sort("system:time_start", True) ) - # If NDVI is to be calculated select the appropriate bands and create the image collection - if vi == True: - - def calcNDVI(image): - """ - Calculates NDVI and adds the band to the image. - """ - ndvi = image.normalizedDifference(["B5", "B4"]).rename("NDVI") - return image.addBands(ndvi) - - # map NDVI to the image collection and select the NDVI band - landsat = landsat.map(calcNDVI).select("NDVI") - file_name = "_l8ndvi" + def calcNDVI(image): + """ + Calculates NDVI and adds the band to the image. 
+ """ + ndvi = image.normalizedDifference(["B5", "B4"]).rename("NDVI") + return image.addBands(ndvi) - # select the user specified bands if NDVI is not be calculated - else: - landsat = landsat.select(bands) - file_name = "_l8bands" + # map NDVI to the image collection and select the bands + landsat = landsat.map(calcNDVI).select( + ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11", "NDVI"] + ) # if ROI is a point + df = gpd.read_file(geofile) if (df.geometry.type == "Point").bool(): - # extract the coordinates - lon = float(df.geometry.x) - lat = float(df.geometry.y) - # create geometry - geo = ee.Geometry.Point(lon, lat) + + geo = create_geo(geofile) # get the data l_data = landsat.filterBounds(geo).getRegion(geo, scale).getInfo() @@ -145,11 +137,8 @@ def getdate(filename): # if ROI is a polygon elif (df.geometry.type == "Polygon").bool(): - # extract coordinates - area = [ - list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) - ] - geo = ee.Geometry.Polygon(area) + + geo = create_geo(geofile) # get the data l_data = landsat.filterBounds(geo).map(reduce_region).getInfo() @@ -180,8 +169,8 @@ def eachf2dict(f): else: raise ValueError("geometry choice not supported") - site_name = df[df.columns[0]].iloc[0] - AOI = str(df[df.columns[1]].iloc[0]) + site_name = get_sitename(geofile) + AOI = get_siteaoi(geofile) # convert the dataframe to an xarray dataset tosave = xr.Dataset( datadf, @@ -190,7 +179,7 @@ def eachf2dict(f): "start_date": start, "end_date": end, "AOI": AOI, - "sensor": ic, + "sensor": "LANDSAT/LC08/C01/T1_SR", }, ) @@ -199,16 +188,4 @@ def eachf2dict(f): os.makedirs(outdir, exist_ok=True) # convert the dataset to a netCDF file and save it - tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc")) - - -if __name__ == "__main__": - l8gee2pecan_bands( - "./satellitetools/test.geojson", - "./out/", - "2018-01-01", - "2018-12-31", - "LANDSAT/LC08/C01/T1_SR", - True, - True, - ) + tosave.to_netcdf(os.path.join(outdir, site_name + "_bands.nc")) From 57ff0e347a0b0b55662ade7059abc370b440d22a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 6 Jul 2020 10:42:33 +0530 Subject: [PATCH 1167/2289] removed gee.py --- .../data.remote/inst/satellitetools/gee.py | 716 ------------------ 1 file changed, 716 deletions(-) delete mode 100755 modules/data.remote/inst/satellitetools/gee.py diff --git a/modules/data.remote/inst/satellitetools/gee.py b/modules/data.remote/inst/satellitetools/gee.py deleted file mode 100755 index cb74eb74d5d..00000000000 --- a/modules/data.remote/inst/satellitetools/gee.py +++ /dev/null @@ -1,716 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Thu Feb 6 15:24:12 2020 - -Module to retrieve Sentinel-2 data from Google Earth Engine (GEE). -Warning: the data is currently retrieved with 10m resolution (scale=10), so -the 20m resolution bands are resampled. -TODO: Add option for specifying the request spatial resolution. 
- -@author: Olli Nevalainen (olli.nevalainen@fmi.fi), - Finnish Meteorological Institute) - - -""" -import sys -import os -import ee -import datetime -import pandas as pd -import geopandas as gpd -import numpy as np -import xarray as xr -from functools import reduce - -ee.Initialize() - - -NO_DATA = -99999 -S2_REFL_TRANS = 10000 -# ----------------- Sentinel-2 ------------------------------------- -s2_qi_labels = ['NODATA', - 'SATURATED_DEFECTIVE', - 'DARK_FEATURE_SHADOW', - 'CLOUD_SHADOW', - 'VEGETATION', - 'NOT_VEGETATED', - 'WATER', - 'UNCLASSIFIED', - 'CLOUD_MEDIUM_PROBA', - 'CLOUD_HIGH_PROBA', - 'THIN_CIRRUS', - 'SNOW_ICE'] - -s2_filter1 = ['NODATA', - 'SATURATED_DEFECTIVE', - 'CLOUD_SHADOW', - 'UNCLASSIFIED', - 'CLOUD_MEDIUM_PROBA', - 'CLOUD_HIGH_PROBA', - 'THIN_CIRRUS', - 'SNOW_ICE'] - - -class S2RequestParams(): - """S2 data request paramaters. - - Attributes - ---------- - datestart : str - Starting date for data request in form "2019-01-01". - dateend : str - Starting date for data request in form "2019-12-31". - bands : list, optional - List of strings with band name. - the default is ['B3', 'B4', 'B5', - 'B6', 'B7', 'B8A', 'B11', 'B12']. - """ - - def __init__(self, datestart, dateend, bands=None): - """. - - Parameters - ---------- - datestart : str - Starting date for data request in form "2019-01-01". - dateend : str - Starting date for data request in form "2019-12-31". - bands : list, optional - List of strings with band name. - The default is ['B3', 'B4', 'B5', - 'B6', 'B7', 'B8A', 'B11', 'B12']. - - Returns - ------- - None. - - """ - default_bands = ['B3', 'B4', 'B5', 'B6', 'B7', 'B8A', 'B11', 'B12'] - - self.datestart = datestart - self.dateend = dateend - self.bands = bands if bands else default_bands - - -class AOI(): - """Area of interest for area info and data. - - Attributes - ---------- - name : str - Name of the area. - geometry : str - Geometry of the area of interest e.g. from geopandas. - Currently only polygons tested. The default is None. - coordinate_list : list, optional - List of coordinates of a polygon - (loop should be closed). Computed from geometry if not - provided. The default is None. - tile : str, optional - Tile id as string for the data. Used to keep the data in - same crs because an area can be in multiple tiles with - different crs. The default is None. - qi : pandas dataframe - Dataframe with quality information about available imagery for the AOI. - qi is empty at init and can be computed with - ee_get_s2_quality_info function. - data : pandas dataframe or xarray - Dataframe holding data retrieved from GEE. Data can be computed using - function - qi is empty at init and can be computed with ee_get_s2_data and - converted to xarray using s2_data_to_xarray function. - - Methods - ------- - __init__ - """ - - def __init__(self, name, geometry=None, coordinate_list=None, tile=None): - """. - - Parameters - ---------- - name : str - Name of the area. - geometry : geometry in wkt, optional - Geometry of the area of interest e.g. from geopandas. - Currently only polygons tested. The default is None. - coordinate_list : list, optional - List of coordinates of a polygon - (loop should be closed). Computed from geometry if not - provided. The default is None. - tile : str, optional - Tile id as string for the data. Used to keep the data in - same crs because an area can be in multiple tiles with - different crs. The default is None. - - Returns - ------- - None. 
- - """ - if not geometry and not coordinate_list: - sys.exit("AOI has to get either geometry or coordinates as list!") - elif geometry and not coordinate_list: - coordinate_list = list(geometry.exterior.coords) - elif coordinate_list and not geometry: - geometry = None - - self.name = name - self.geometry = geometry - self.coordinate_list = coordinate_list - self.qi = None - self.data = None - self.tile = tile - - -def ee_get_s2_quality_info(AOIs, req_params): - """Get S2 quality information from GEE. - - Parameters - ---------- - AOIs : list or AOI instance - List of AOI instances or single AOI instance. If multiple AOIs - proviveded the computation in GEE server is parallellized. - If too many areas with long time range is provided, user might - hit GEE memory limits. Then you should call this function - sequentally to all AOIs. - req_params : S2RequestParams instance - S2RequestParams instance with request details. - - Returns - ------- - Nothing: - Computes qi attribute for the given AOI instances. - - """ - # if single AOI instance, make a list - if isinstance(AOIs, AOI): - AOIs = list([AOIs]) - - features = [ee.Feature( - ee.Geometry.Polygon(a.coordinate_list), - {'name': a.name}) for a in AOIs] - feature_collection = ee.FeatureCollection(features) - - def ee_get_s2_quality_info_feature(feature): - - area = feature.geometry() - image_collection = ee.ImageCollection("COPERNICUS/S2_SR") \ - .filterBounds(area) \ - .filterDate(req_params.datestart, req_params.dateend) \ - .select(['SCL']) - - def ee_get_s2_quality_info_image(img): - productid = img.get('PRODUCT_ID') - assetid = img.id() - tileid = img.get('MGRS_TILE') - system_index = img.get('system:index') - proj = img.select("SCL").projection() - - # apply reducer to list - img = img.reduceRegion( - reducer=ee.Reducer.toList(), - geometry=area, - maxPixels=1e8, - scale=10) - - # get data into arrays - classdata = ee.Array( - ee.Algorithms.If(img.get("SCL"), - ee.Array(img.get("SCL")), - ee.Array([0]))) - - totalcount = classdata.length() - classpercentages = { - key: - classdata.eq(i).reduce(ee.Reducer.sum(), [0]) - .divide(totalcount).get([0]) - for i, key in enumerate(s2_qi_labels)} - - tmpfeature = ee.Feature(ee.Geometry.Point([0, 0])) \ - .set('productid', productid) \ - .set('system_index', system_index) \ - .set('assetid', assetid) \ - .set('tileid', tileid) \ - .set('projection', proj) \ - .set(classpercentages) - return tmpfeature - - s2_qi_image_collection = image_collection.map( - ee_get_s2_quality_info_image) - - return feature \ - .set('productid', s2_qi_image_collection - .aggregate_array('productid')) \ - .set('system_index', s2_qi_image_collection - .aggregate_array('system_index')) \ - .set('assetid', s2_qi_image_collection - .aggregate_array('assetid')) \ - .set('tileid', s2_qi_image_collection - .aggregate_array('tileid')) \ - .set('projection', s2_qi_image_collection - .aggregate_array('projection')) \ - .set({key: s2_qi_image_collection - .aggregate_array(key) for key in s2_qi_labels}) - - s2_qi_feature_collection = feature_collection.map( - ee_get_s2_quality_info_feature).getInfo() - - s2_qi = s2_feature_collection_to_dataframes(s2_qi_feature_collection) - - for a in AOIs: - name = a.name - a.qi = s2_qi[name] - - -def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): - """Get S2 data (level L2A, bottom of atmosphere data) from GEE. - - Warning: the data is currently retrieved with 10m resolution (scale=10), so - the 20m resolution bands are resampled. 
- TODO: Add option for specifying the request spatial resolution. - - Parameters - ---------- - AOIs : list or AOI instance - List of AOI instances or single AOI instance. If multiple AOIs - proviveded the computation in GEE server is parallellized. - If too many areas with long time range is provided, user might - hit GEE memory limits. Then you should call this function - sequentally to all AOIs. AOIs should have qi attribute computed first. - req_params : S2RequestParams instance - S2RequestParams instance with request details. - qi_threshold : float - Threshold value to filter images based on used qi filter. - qi filter holds labels of classes whose percentages within the AOI - is summed. If the sum is larger then the qi_threhold, data will not be - retrieved for that date/image. The default is 1, meaning all data is - retrieved. - qi_filter : list - List of strings with class labels (of unwanted classes) used to compute qi value, - see qi_threhold. The default is s2_filter1 = ['NODATA', - 'SATURATED_DEFECTIVE', - 'CLOUD_SHADOW', - 'UNCLASSIFIED', - 'CLOUD_MEDIUM_PROBA', - 'CLOUD_HIGH_PROBA', - 'THIN_CIRRUS', - 'SNOW_ICE']. - - Returns - ------- - Nothing: - Computes data attribute for the given AOI instances. - - """ - datestart = req_params.datestart - dateend = req_params. dateend - bands = req_params.bands - # if single AOI instance, make a list - if isinstance(AOIs, AOI): - AOIs = list([AOIs]) - - features = [] - for a in AOIs: - filtered_qi = filter_s2_qi_dataframe(a.qi, qi_threshold, qi_filter) - if len(filtered_qi) == 0: - print('No observations to retrieve for area %s' % a.name) - continue - - if a.tile is None: - min_tile = min(filtered_qi['tileid'].values) - filtered_qi = (filtered_qi[ - filtered_qi['tileid'] == min_tile]) - a.tile = min_tile - else: - filtered_qi = (filtered_qi[ - filtered_qi['tileid'] == a.tile]) - - full_assetids = "COPERNICUS/S2_SR/" + filtered_qi['assetid'] - image_list = [ee.Image(asset_id) for asset_id in full_assetids] - crs = filtered_qi['projection'].values[0]["crs"] - feature = ee.Feature(ee.Geometry.Polygon(a.coordinate_list), - {'name': a.name, - 'image_list': image_list}) - - features.append(feature) - - if len(features) == 0: - print('No data to be retrieved!') - return None - - feature_collection = ee.FeatureCollection(features) - - def ee_get_s2_data_feature(feature): - geom = feature.geometry(0.01, crs) - image_collection = \ - ee.ImageCollection.fromImages(feature.get('image_list')) \ - .filterBounds(geom) \ - .filterDate(datestart, dateend) \ - .select(bands + ['SCL']) - - def ee_get_s2_data_image(img): - # img = img.clip(geom) - productid = img.get('PRODUCT_ID') - assetid = img.id() - tileid = img.get('MGRS_TILE') - system_index = img.get('system:index') - proj = img.select(bands[0]).projection() - sun_azimuth = img.get('MEAN_SOLAR_AZIMUTH_ANGLE') - sun_zenith = img.get('MEAN_SOLAR_ZENITH_ANGLE') - view_azimuth = ee.Array( - [img.get('MEAN_INCIDENCE_AZIMUTH_ANGLE_%s' % b) - for b in bands]) \ - .reduce(ee.Reducer.mean(), [0]).get([0]) - view_zenith = ee.Array( - [img.get('MEAN_INCIDENCE_ZENITH_ANGLE_%s' % b) - for b in bands]) \ - .reduce(ee.Reducer.mean(), [0]).get([0]) - - img = img.resample('bilinear') \ - .reproject(crs=crs, scale=10) - - # get the lat lon and add the ndvi - image_grid = ee.Image.pixelCoordinates( - ee.Projection(crs)) \ - .addBands([img.select(b) for b in bands + ['SCL']]) - - # apply reducer to list - image_grid = image_grid.reduceRegion( - reducer=ee.Reducer.toList(), - geometry=geom, - maxPixels=1e8, - scale=10) 
- - # get data into arrays - x_coords = ee.Array(image_grid.get("x")) - y_coords = ee.Array(image_grid.get("y")) - band_data = {b: ee.Array(image_grid.get("%s" % b)) for b in bands} - - scl_data = ee.Array(image_grid.get("SCL")) - - # perform LAI et al. computation possibly here! - - tmpfeature = ee.Feature(ee.Geometry.Point([0, 0])) \ - .set('productid', productid) \ - .set('system_index', system_index) \ - .set('assetid', assetid) \ - .set('tileid', tileid) \ - .set('projection', proj) \ - .set('sun_zenith', sun_zenith) \ - .set('sun_azimuth', sun_azimuth) \ - .set('view_zenith', view_zenith) \ - .set('view_azimuth', view_azimuth) \ - .set('x_coords', x_coords) \ - .set('y_coords', y_coords) \ - .set('SCL', scl_data) \ - .set(band_data) - return tmpfeature - - s2_data_feature = image_collection.map(ee_get_s2_data_image) - - return feature \ - .set('productid', s2_data_feature - .aggregate_array('productid')) \ - .set('system_index', s2_data_feature - .aggregate_array('system_index')) \ - .set('assetid', s2_data_feature - .aggregate_array('assetid')) \ - .set('tileid', s2_data_feature - .aggregate_array('tileid')) \ - .set('projection', s2_data_feature - .aggregate_array('projection')) \ - .set('sun_zenith', s2_data_feature - .aggregate_array('sun_zenith')) \ - .set('sun_azimuth', s2_data_feature - .aggregate_array('sun_azimuth')) \ - .set('view_zenith', s2_data_feature - .aggregate_array('view_zenith')) \ - .set('view_azimuth', s2_data_feature - .aggregate_array('view_azimuth')) \ - .set('x_coords', s2_data_feature - .aggregate_array('x_coords')) \ - .set('y_coords', s2_data_feature - .aggregate_array('y_coords')) \ - .set('SCL', s2_data_feature - .aggregate_array('SCL')) \ - .set({b: s2_data_feature - .aggregate_array(b) for b in bands}) - s2_data_feature_collection = feature_collection.map( - ee_get_s2_data_feature).getInfo() - - s2_data = s2_feature_collection_to_dataframes(s2_data_feature_collection) - - for a in AOIs: - name = a.name - a.data = s2_data[name] - -def filter_s2_qi_dataframe(s2_qi_dataframe, qi_thresh, s2_filter=s2_filter1): - """Filter qi dataframe. - - Parameters - ---------- - s2_qi_dataframe : pandas dataframe - S2 quality information dataframe (AOI instance qi attribute). - qi_thresh : float - Threshold value to filter images based on used qi filter. - qi filter holds labels of classes whose percentages within the AOI - is summed. If the sum is larger then the qi_threhold, data will not be - retrieved for that date/image. The default is 1, meaning all data is - retrieved. - s2_filter : list - List of strings with class labels (of unwanted classes) used to compute qi value, - see qi_threhold. The default is s2_filter1 = ['NODATA', - 'SATURATED_DEFECTIVE', - 'CLOUD_SHADOW', - 'UNCLASSIFIED', - 'CLOUD_MEDIUM_PROBA', - 'CLOUD_HIGH_PROBA', - 'THIN_CIRRUS', - 'SNOW_ICE']. - - Returns - ------- - filtered_s2_qi_df : pandas dataframe - Filtered dataframe. - - """ - filtered_s2_qi_df = s2_qi_dataframe.loc[ - s2_qi_dataframe[s2_filter1].sum(axis=1) < qi_thresh] - - return filtered_s2_qi_df - - -def s2_feature_collection_to_dataframes(s2_feature_collection): - """Convert feature collection dict from GEE to pandas dataframe. - - Parameters - ---------- - s2_feature_collection : dict - Dictionary returned by GEE. - - Returns - ------- - dataframes : pandas dataframe - GEE dictinary converted to pandas dataframe. 
- - """ - dataframes = {} - - for featnum in range(len(s2_feature_collection['features'])): - tmp_dict = {} - key = s2_feature_collection['features'][featnum]['properties']['name'] - productid = (s2_feature_collection - ['features'] - [featnum] - ['properties'] - ['productid']) - - dates = [datetime.datetime.strptime( - d.split('_')[2], '%Y%m%dT%H%M%S') for d in productid] - - tmp_dict.update({'Date': dates}) # , 'crs': crs} - properties = s2_feature_collection['features'][featnum]['properties'] - for prop, data in properties.items(): - if prop not in ['Date'] : # 'crs' ,, 'projection' - tmp_dict.update({prop: data}) - dataframes[key] = pd.DataFrame(tmp_dict) - return dataframes - -def compute_ndvi(dataset): - """Compute NDVI - - Parameters - ---------- - dataset : xarray dataset - - Returns - ------- - xarray dataset - Adds 'ndvi' xr array to xr dataset. - - """ - b4 = dataset.band_data.sel(band='B4') - b8 = dataset.band_data.sel(band='B8A') - ndvi = (b8 - b4) / (b8 + b4) - return dataset.assign({'ndvi': ndvi}) - - - -def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True): - """Convert AOI.data dataframe to xarray dataset. - - Parameters - ---------- - aoi : AOI instance - AOI instance. - request_params : S2RequestParams - S2RequestParams. - convert_to_reflectance : boolean, optional - Convert S2 data from GEE (integers) to reflectances (floats), - i,e, divide by 10000. - The default is True. - - Returns - ------- - Nothing. - Converts the data atrribute dataframe to xarray Dataset. - xarray is better for handling multiband data. It also has - implementation for saving the data in NetCDF format. - - """ - # check that all bands have full data! - datalengths = [aoi.data[b].apply( - lambda d: len(d)) == len(aoi.data.iloc[0]['x_coords']) - for b in request_params.bands] - consistent_data = reduce(lambda a, b: a & b, datalengths) - aoi.data = aoi.data[consistent_data] - - # 2D data - bands = request_params.bands - - # 1D data - list_vars = ['assetid', 'productid', 'sun_azimuth', - 'sun_zenith', 'system_index', - 'view_azimuth', 'view_zenith'] - - # crs from projection - crs = aoi.data['projection'].values[0]['crs'] - tileid = aoi.data['tileid'].values[0] - # original number of pixels requested (pixels inside AOI) - aoi_pixels = len(aoi.data.iloc[0]['x_coords']) - - # transform 2D data to arrays - for b in bands: - - aoi.data[b] = aoi.data.apply( - lambda row: s2_lists_to_array( - row['x_coords'], row['y_coords'], row[b], - convert_to_reflectance=convert_to_reflectance), axis=1) - - aoi.data['SCL'] = aoi.data.apply( - lambda row: s2_lists_to_array( - row['x_coords'], row['y_coords'], row['SCL'], - convert_to_reflectance=False), axis=1) - - array = aoi.data[bands].values - - # this will stack the array to ndarray with - # dimension order = (time, band, x,y) - narray = np.stack( - [np.stack(array[:, b], axis=2) for b in range(len(bands))], - axis=2).transpose() # .swapaxes(2, 3) - - scl_array = np.stack(aoi.data['SCL'].values, axis=2).transpose() - - coords = {'time': aoi.data['Date'].values, - 'band': bands, - 'x': np.unique(aoi.data.iloc[0]['x_coords']), - 'y': np.unique(aoi.data.iloc[0]['y_coords']) - } - - dataset_dict = {'band_data': (['time', 'band', 'x', 'y'], narray), - 'SCL': (['time', 'x', 'y'], scl_array)} - var_dict = {var: (['time'], aoi.data[var]) for var in list_vars} - dataset_dict.update(var_dict) - - ds = xr.Dataset(dataset_dict, - coords=coords, - attrs={'name': aoi.name, - 'crs': crs, - 'tile_id': tileid, - 'aoi_geometry': aoi.geometry.to_wkt(), - 
'aoi_pixels': aoi_pixels}) - aoi.data = ds - - -def s2_lists_to_array(x_coords, y_coords, data, convert_to_reflectance=True): - """Convert 1D lists of coordinates and corresponding values to 2D array. - - Parameters - ---------- - x_coords : list - List of x-coordinates. - y_coords : list - List of y-coordinates. - data : list - List of data values corresponding to the coordinates. - convert_to_reflectance : boolean, optional - Convert S2 data from GEE (integers) to reflectances (floats), - i,e, divide by 10000. - The default is True. - - Returns - ------- - arr : 2D numpy array - Return 2D numpy array. - - """ - # get the unique coordinates - uniqueYs = np.unique(y_coords) - uniqueXs = np.unique(x_coords) - - # get number of columns and rows from coordinates - ncols = len(uniqueXs) - nrows = len(uniqueYs) - - # determine pixelsizes - # ys = uniqueYs[1] - uniqueYs[0] - # xs = uniqueXs[1] - uniqueXs[0] - - y_vals, y_idx = np.unique(y_coords, return_inverse=True) - x_vals, x_idx = np.unique(x_coords, return_inverse=True) - if convert_to_reflectance: - arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64) - arr.fill(np.nan) - arr[y_idx, x_idx] = np.array(data, dtype=np.float64) / S2_REFL_TRANS - else: - arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.int32) - arr.fill(NO_DATA) # or whatever yor desired missing data flag is - arr[y_idx, x_idx] = data - arr = np.flipud(arr) - return arr - - -def xr_dataset_to_timeseries(xr_dataset, variables): - """Compute timeseries dataframe from xr dataset. - - Parameters - ---------- - xr_dataset : xarray dataset - - variables : list - list of variable names as string. - - Returns - ------- - df : pandas dataframe - Pandas dataframe with mean, std, se and percentage of NaNs inside AOI. - - """ - df = pd.DataFrame({'Date': pd.to_datetime(xr_dataset.time.values)}) - - for var in variables: - df[var] = xr_dataset[var].mean(dim=['x', 'y']) - df[var+'_std'] = xr_dataset[var].std(dim=['x', 'y']) - - # nans occure due to missging data from 1D to 2D array - #(pixels outside the polygon), - # from snap algorihtm nans occure due to input/output ouf of bounds - # checking. - # TODO: flaggging with snap biophys algorith or some other solution to - # check which nan are from snap algorithm and which from 1d to 2d transformation - nans = np.isnan(xr_dataset[var]).sum(dim=['x', 'y']) - sample_n = len(xr_dataset[var].x) * len(xr_dataset[var].y) - nans - - # compute how many of the nans are inside aoi (due to snap algorithm) - out_of_aoi_pixels = (len(xr_dataset[var].x) * len(xr_dataset[var].y) - - xr_dataset.aoi_pixels) - nans_inside_aoi = nans - out_of_aoi_pixels - df['aoi_nan_percentage'] = nans_inside_aoi / xr_dataset.aoi_pixels - - df[var+'_se'] = df[var+'_std'] / np.sqrt(sample_n) - - return df From 196e80deea9e37c957b18061bd369dbb56a45bfa Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 6 Jul 2020 10:44:49 +0530 Subject: [PATCH 1168/2289] added gee2pecan_s2 --- modules/data.remote/inst/gee2pecan_s2.py | 800 +++++++++++++++++++++++ 1 file changed, 800 insertions(+) create mode 100644 modules/data.remote/inst/gee2pecan_s2.py diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py new file mode 100644 index 00000000000..23c0950e843 --- /dev/null +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -0,0 +1,800 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +""" +Created on Thu Feb 6 15:24:12 2020 + +Module to retrieve Sentinel-2 data from Google Earth Engine (GEE). 
+Warning: the data is currently retrieved with 10m resolution (scale=10), so +the 20m resolution bands are resampled. +TODO: Add option for specifying the request spatial resolution. + +@author: Olli Nevalainen (olli.nevalainen@fmi.fi), + Finnish Meteorological Institute) + + +""" +import sys +import os +import ee +import datetime +import pandas as pd +import geopandas as gpd +import numpy as np +import xarray as xr +from functools import reduce + +ee.Initialize() + + +NO_DATA = -99999 +S2_REFL_TRANS = 10000 +# ----------------- Sentinel-2 ------------------------------------- +s2_qi_labels = [ + "NODATA", + "SATURATED_DEFECTIVE", + "DARK_FEATURE_SHADOW", + "CLOUD_SHADOW", + "VEGETATION", + "NOT_VEGETATED", + "WATER", + "UNCLASSIFIED", + "CLOUD_MEDIUM_PROBA", + "CLOUD_HIGH_PROBA", + "THIN_CIRRUS", + "SNOW_ICE", +] + +s2_filter1 = [ + "NODATA", + "SATURATED_DEFECTIVE", + "CLOUD_SHADOW", + "UNCLASSIFIED", + "CLOUD_MEDIUM_PROBA", + "CLOUD_HIGH_PROBA", + "THIN_CIRRUS", + "SNOW_ICE", +] + + +class S2RequestParams: + """S2 data request paramaters. + + Attributes + ---------- + datestart : str + Starting date for data request in form "2019-01-01". + dateend : str + Starting date for data request in form "2019-12-31". + bands : list, optional + List of strings with band name. + the default is ['B3', 'B4', 'B5', + 'B6', 'B7', 'B8A', 'B11', 'B12']. + """ + + def __init__(self, datestart, dateend, bands=None): + """. + + Parameters + ---------- + datestart : str + Starting date for data request in form "2019-01-01". + dateend : str + Starting date for data request in form "2019-12-31". + bands : list, optional + List of strings with band name. + The default is ['B3', 'B4', 'B5', + 'B6', 'B7', 'B8A', 'B11', 'B12']. + + Returns + ------- + None. + + """ + default_bands = ["B3", "B4", "B5", "B6", "B7", "B8A", "B11", "B12"] + + self.datestart = datestart + self.dateend = dateend + self.bands = bands if bands else default_bands + + +class AOI: + """Area of interest for area info and data. + + Attributes + ---------- + name : str + Name of the area. + geometry : str + Geometry of the area of interest e.g. from geopandas. + Currently only polygons tested. The default is None. + coordinate_list : list, optional + List of coordinates of a polygon + (loop should be closed). Computed from geometry if not + provided. The default is None. + tile : str, optional + Tile id as string for the data. Used to keep the data in + same crs because an area can be in multiple tiles with + different crs. The default is None. + qi : pandas dataframe + Dataframe with quality information about available imagery for the AOI. + qi is empty at init and can be computed with + ee_get_s2_quality_info function. + data : pandas dataframe or xarray + Dataframe holding data retrieved from GEE. Data can be computed using + function + qi is empty at init and can be computed with ee_get_s2_data and + converted to xarray using s2_data_to_xarray function. + + Methods + ------- + __init__ + """ + + def __init__(self, name, geometry=None, coordinate_list=None, tile=None): + """. + + Parameters + ---------- + name : str + Name of the area. + geometry : geometry in wkt, optional + Geometry of the area of interest e.g. from geopandas. + Currently only polygons tested. The default is None. + coordinate_list : list, optional + List of coordinates of a polygon + (loop should be closed). Computed from geometry if not + provided. The default is None. + tile : str, optional + Tile id as string for the data. 
Used to keep the data in + same crs because an area can be in multiple tiles with + different crs. The default is None. + + Returns + ------- + None. + + """ + if not geometry and not coordinate_list: + sys.exit("AOI has to get either geometry or coordinates as list!") + elif geometry and not coordinate_list: + coordinate_list = list(geometry.exterior.coords) + elif coordinate_list and not geometry: + geometry = None + + self.name = name + self.geometry = geometry + self.coordinate_list = coordinate_list + self.qi = None + self.data = None + self.tile = tile + + +def ee_get_s2_quality_info(AOIs, req_params): + """Get S2 quality information from GEE. + + Parameters + ---------- + AOIs : list or AOI instance + List of AOI instances or single AOI instance. If multiple AOIs + proviveded the computation in GEE server is parallellized. + If too many areas with long time range is provided, user might + hit GEE memory limits. Then you should call this function + sequentally to all AOIs. + req_params : S2RequestParams instance + S2RequestParams instance with request details. + + Returns + ------- + Nothing: + Computes qi attribute for the given AOI instances. + + """ + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [ + ee.Feature(ee.Geometry.Polygon(a.coordinate_list), {"name": a.name}) + for a in AOIs + ] + feature_collection = ee.FeatureCollection(features) + + def ee_get_s2_quality_info_feature(feature): + + area = feature.geometry() + image_collection = ( + ee.ImageCollection("COPERNICUS/S2_SR") + .filterBounds(area) + .filterDate(req_params.datestart, req_params.dateend) + .select(["SCL"]) + ) + + def ee_get_s2_quality_info_image(img): + productid = img.get("PRODUCT_ID") + assetid = img.id() + tileid = img.get("MGRS_TILE") + system_index = img.get("system:index") + proj = img.select("SCL").projection() + + # apply reducer to list + img = img.reduceRegion( + reducer=ee.Reducer.toList(), geometry=area, maxPixels=1e8, scale=10 + ) + + # get data into arrays + classdata = ee.Array( + ee.Algorithms.If( + img.get("SCL"), ee.Array(img.get("SCL")), ee.Array([0]) + ) + ) + + totalcount = classdata.length() + classpercentages = { + key: classdata.eq(i) + .reduce(ee.Reducer.sum(), [0]) + .divide(totalcount) + .get([0]) + for i, key in enumerate(s2_qi_labels) + } + + tmpfeature = ( + ee.Feature(ee.Geometry.Point([0, 0])) + .set("productid", productid) + .set("system_index", system_index) + .set("assetid", assetid) + .set("tileid", tileid) + .set("projection", proj) + .set(classpercentages) + ) + return tmpfeature + + s2_qi_image_collection = image_collection.map(ee_get_s2_quality_info_image) + + return ( + feature.set( + "productid", s2_qi_image_collection.aggregate_array("productid") + ) + .set("system_index", s2_qi_image_collection.aggregate_array("system_index")) + .set("assetid", s2_qi_image_collection.aggregate_array("assetid")) + .set("tileid", s2_qi_image_collection.aggregate_array("tileid")) + .set("projection", s2_qi_image_collection.aggregate_array("projection")) + .set( + { + key: s2_qi_image_collection.aggregate_array(key) + for key in s2_qi_labels + } + ) + ) + + s2_qi_feature_collection = feature_collection.map( + ee_get_s2_quality_info_feature + ).getInfo() + + s2_qi = s2_feature_collection_to_dataframes(s2_qi_feature_collection) + + for a in AOIs: + name = a.name + a.qi = s2_qi[name] + + +def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): + """Get S2 data (level L2A, bottom of atmosphere data) from GEE. 
+ + Warning: the data is currently retrieved with 10m resolution (scale=10), so + the 20m resolution bands are resampled. + TODO: Add option for specifying the request spatial resolution. + + Parameters + ---------- + AOIs : list or AOI instance + List of AOI instances or single AOI instance. If multiple AOIs + proviveded the computation in GEE server is parallellized. + If too many areas with long time range is provided, user might + hit GEE memory limits. Then you should call this function + sequentally to all AOIs. AOIs should have qi attribute computed first. + req_params : S2RequestParams instance + S2RequestParams instance with request details. + qi_threshold : float + Threshold value to filter images based on used qi filter. + qi filter holds labels of classes whose percentages within the AOI + is summed. If the sum is larger then the qi_threhold, data will not be + retrieved for that date/image. The default is 1, meaning all data is + retrieved. + qi_filter : list + List of strings with class labels (of unwanted classes) used to compute qi value, + see qi_threhold. The default is s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE']. + + Returns + ------- + Nothing: + Computes data attribute for the given AOI instances. + + """ + datestart = req_params.datestart + dateend = req_params.dateend + bands = req_params.bands + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [] + for a in AOIs: + filtered_qi = filter_s2_qi_dataframe(a.qi, qi_threshold, qi_filter) + if len(filtered_qi) == 0: + print("No observations to retrieve for area %s" % a.name) + continue + + if a.tile is None: + min_tile = min(filtered_qi["tileid"].values) + filtered_qi = filtered_qi[filtered_qi["tileid"] == min_tile] + a.tile = min_tile + else: + filtered_qi = filtered_qi[filtered_qi["tileid"] == a.tile] + + full_assetids = "COPERNICUS/S2_SR/" + filtered_qi["assetid"] + image_list = [ee.Image(asset_id) for asset_id in full_assetids] + crs = filtered_qi["projection"].values[0]["crs"] + feature = ee.Feature( + ee.Geometry.Polygon(a.coordinate_list), + {"name": a.name, "image_list": image_list}, + ) + + features.append(feature) + + if len(features) == 0: + print("No data to be retrieved!") + return None + + feature_collection = ee.FeatureCollection(features) + + def ee_get_s2_data_feature(feature): + geom = feature.geometry(0.01, crs) + image_collection = ( + ee.ImageCollection.fromImages(feature.get("image_list")) + .filterBounds(geom) + .filterDate(datestart, dateend) + .select(bands + ["SCL"]) + ) + + def ee_get_s2_data_image(img): + # img = img.clip(geom) + productid = img.get("PRODUCT_ID") + assetid = img.id() + tileid = img.get("MGRS_TILE") + system_index = img.get("system:index") + proj = img.select(bands[0]).projection() + sun_azimuth = img.get("MEAN_SOLAR_AZIMUTH_ANGLE") + sun_zenith = img.get("MEAN_SOLAR_ZENITH_ANGLE") + view_azimuth = ( + ee.Array( + [img.get("MEAN_INCIDENCE_AZIMUTH_ANGLE_%s" % b) for b in bands] + ) + .reduce(ee.Reducer.mean(), [0]) + .get([0]) + ) + view_zenith = ( + ee.Array([img.get("MEAN_INCIDENCE_ZENITH_ANGLE_%s" % b) for b in bands]) + .reduce(ee.Reducer.mean(), [0]) + .get([0]) + ) + + img = img.resample("bilinear").reproject(crs=crs, scale=10) + + # get the lat lon and add the ndvi + image_grid = ee.Image.pixelCoordinates(ee.Projection(crs)).addBands( + [img.select(b) for b in bands + ["SCL"]] + ) + + # apply reducer to 
list + image_grid = image_grid.reduceRegion( + reducer=ee.Reducer.toList(), geometry=geom, maxPixels=1e8, scale=10 + ) + + # get data into arrays + x_coords = ee.Array(image_grid.get("x")) + y_coords = ee.Array(image_grid.get("y")) + band_data = {b: ee.Array(image_grid.get("%s" % b)) for b in bands} + + scl_data = ee.Array(image_grid.get("SCL")) + + # perform LAI et al. computation possibly here! + + tmpfeature = ( + ee.Feature(ee.Geometry.Point([0, 0])) + .set("productid", productid) + .set("system_index", system_index) + .set("assetid", assetid) + .set("tileid", tileid) + .set("projection", proj) + .set("sun_zenith", sun_zenith) + .set("sun_azimuth", sun_azimuth) + .set("view_zenith", view_zenith) + .set("view_azimuth", view_azimuth) + .set("x_coords", x_coords) + .set("y_coords", y_coords) + .set("SCL", scl_data) + .set(band_data) + ) + return tmpfeature + + s2_data_feature = image_collection.map(ee_get_s2_data_image) + + return ( + feature.set("productid", s2_data_feature.aggregate_array("productid")) + .set("system_index", s2_data_feature.aggregate_array("system_index")) + .set("assetid", s2_data_feature.aggregate_array("assetid")) + .set("tileid", s2_data_feature.aggregate_array("tileid")) + .set("projection", s2_data_feature.aggregate_array("projection")) + .set("sun_zenith", s2_data_feature.aggregate_array("sun_zenith")) + .set("sun_azimuth", s2_data_feature.aggregate_array("sun_azimuth")) + .set("view_zenith", s2_data_feature.aggregate_array("view_zenith")) + .set("view_azimuth", s2_data_feature.aggregate_array("view_azimuth")) + .set("x_coords", s2_data_feature.aggregate_array("x_coords")) + .set("y_coords", s2_data_feature.aggregate_array("y_coords")) + .set("SCL", s2_data_feature.aggregate_array("SCL")) + .set({b: s2_data_feature.aggregate_array(b) for b in bands}) + ) + + s2_data_feature_collection = feature_collection.map( + ee_get_s2_data_feature + ).getInfo() + + s2_data = s2_feature_collection_to_dataframes(s2_data_feature_collection) + + for a in AOIs: + name = a.name + a.data = s2_data[name] + + +def filter_s2_qi_dataframe(s2_qi_dataframe, qi_thresh, s2_filter=s2_filter1): + """Filter qi dataframe. + + Parameters + ---------- + s2_qi_dataframe : pandas dataframe + S2 quality information dataframe (AOI instance qi attribute). + qi_thresh : float + Threshold value to filter images based on used qi filter. + qi filter holds labels of classes whose percentages within the AOI + is summed. If the sum is larger then the qi_threhold, data will not be + retrieved for that date/image. The default is 1, meaning all data is + retrieved. + s2_filter : list + List of strings with class labels (of unwanted classes) used to compute qi value, + see qi_threhold. The default is s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE']. + + Returns + ------- + filtered_s2_qi_df : pandas dataframe + Filtered dataframe. + + """ + filtered_s2_qi_df = s2_qi_dataframe.loc[ + s2_qi_dataframe[s2_filter1].sum(axis=1) < qi_thresh + ] + + return filtered_s2_qi_df + + +def s2_feature_collection_to_dataframes(s2_feature_collection): + """Convert feature collection dict from GEE to pandas dataframe. + + Parameters + ---------- + s2_feature_collection : dict + Dictionary returned by GEE. + + Returns + ------- + dataframes : pandas dataframe + GEE dictinary converted to pandas dataframe. 
+ + """ + dataframes = {} + + for featnum in range(len(s2_feature_collection["features"])): + tmp_dict = {} + key = s2_feature_collection["features"][featnum]["properties"]["name"] + productid = s2_feature_collection["features"][featnum]["properties"][ + "productid" + ] + + dates = [ + datetime.datetime.strptime(d.split("_")[2], "%Y%m%dT%H%M%S") + for d in productid + ] + + tmp_dict.update({"Date": dates}) # , 'crs': crs} + properties = s2_feature_collection["features"][featnum]["properties"] + for prop, data in properties.items(): + if prop not in ["Date"]: # 'crs' ,, 'projection' + tmp_dict.update({prop: data}) + dataframes[key] = pd.DataFrame(tmp_dict) + return dataframes + + +def compute_ndvi(dataset): + """Compute NDVI + + Parameters + ---------- + dataset : xarray dataset + + Returns + ------- + xarray dataset + Adds 'ndvi' xr array to xr dataset. + + """ + b4 = dataset.band_data.sel(band="B4") + b8 = dataset.band_data.sel(band="B8A") + ndvi = (b8 - b4) / (b8 + b4) + return dataset.assign({"ndvi": ndvi}) + + +def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True): + """Convert AOI.data dataframe to xarray dataset. + + Parameters + ---------- + aoi : AOI instance + AOI instance. + request_params : S2RequestParams + S2RequestParams. + convert_to_reflectance : boolean, optional + Convert S2 data from GEE (integers) to reflectances (floats), + i,e, divide by 10000. + The default is True. + + Returns + ------- + Nothing. + Converts the data atrribute dataframe to xarray Dataset. + xarray is better for handling multiband data. It also has + implementation for saving the data in NetCDF format. + + """ + # check that all bands have full data! + datalengths = [ + aoi.data[b].apply(lambda d: len(d)) == len(aoi.data.iloc[0]["x_coords"]) + for b in request_params.bands + ] + consistent_data = reduce(lambda a, b: a & b, datalengths) + aoi.data = aoi.data[consistent_data] + + # 2D data + bands = request_params.bands + + # 1D data + list_vars = [ + "assetid", + "productid", + "sun_azimuth", + "sun_zenith", + "system_index", + "view_azimuth", + "view_zenith", + ] + + # crs from projection + crs = aoi.data["projection"].values[0]["crs"] + tileid = aoi.data["tileid"].values[0] + # original number of pixels requested (pixels inside AOI) + aoi_pixels = len(aoi.data.iloc[0]["x_coords"]) + + # transform 2D data to arrays + for b in bands: + + aoi.data[b] = aoi.data.apply( + lambda row: s2_lists_to_array( + row["x_coords"], + row["y_coords"], + row[b], + convert_to_reflectance=convert_to_reflectance, + ), + axis=1, + ) + + aoi.data["SCL"] = aoi.data.apply( + lambda row: s2_lists_to_array( + row["x_coords"], row["y_coords"], row["SCL"], convert_to_reflectance=False + ), + axis=1, + ) + + array = aoi.data[bands].values + + # this will stack the array to ndarray with + # dimension order = (time, band, x,y) + narray = np.stack( + [np.stack(array[:, b], axis=2) for b in range(len(bands))], axis=2 + ).transpose() # .swapaxes(2, 3) + + scl_array = np.stack(aoi.data["SCL"].values, axis=2).transpose() + + coords = { + "time": aoi.data["Date"].values, + "band": bands, + "x": np.unique(aoi.data.iloc[0]["x_coords"]), + "y": np.unique(aoi.data.iloc[0]["y_coords"]), + } + + dataset_dict = { + "band_data": (["time", "band", "x", "y"], narray), + "SCL": (["time", "x", "y"], scl_array), + } + var_dict = {var: (["time"], aoi.data[var]) for var in list_vars} + dataset_dict.update(var_dict) + + ds = xr.Dataset( + dataset_dict, + coords=coords, + attrs={ + "name": aoi.name, + "crs": crs, + "tile_id": tileid, + 
"aoi_geometry": aoi.geometry.to_wkt(), + "aoi_pixels": aoi_pixels, + }, + ) + aoi.data = ds + + +def s2_lists_to_array(x_coords, y_coords, data, convert_to_reflectance=True): + """Convert 1D lists of coordinates and corresponding values to 2D array. + + Parameters + ---------- + x_coords : list + List of x-coordinates. + y_coords : list + List of y-coordinates. + data : list + List of data values corresponding to the coordinates. + convert_to_reflectance : boolean, optional + Convert S2 data from GEE (integers) to reflectances (floats), + i,e, divide by 10000. + The default is True. + + Returns + ------- + arr : 2D numpy array + Return 2D numpy array. + + """ + # get the unique coordinates + uniqueYs = np.unique(y_coords) + uniqueXs = np.unique(x_coords) + + # get number of columns and rows from coordinates + ncols = len(uniqueXs) + nrows = len(uniqueYs) + + # determine pixelsizes + # ys = uniqueYs[1] - uniqueYs[0] + # xs = uniqueXs[1] - uniqueXs[0] + + y_vals, y_idx = np.unique(y_coords, return_inverse=True) + x_vals, x_idx = np.unique(x_coords, return_inverse=True) + if convert_to_reflectance: + arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64) + arr.fill(np.nan) + arr[y_idx, x_idx] = np.array(data, dtype=np.float64) / S2_REFL_TRANS + else: + arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.int32) + arr.fill(NO_DATA) # or whatever yor desired missing data flag is + arr[y_idx, x_idx] = data + arr = np.flipud(arr) + return arr + + +def xr_dataset_to_timeseries(xr_dataset, variables): + """Compute timeseries dataframe from xr dataset. + + Parameters + ---------- + xr_dataset : xarray dataset + + variables : list + list of variable names as string. + + Returns + ------- + df : pandas dataframe + Pandas dataframe with mean, std, se and percentage of NaNs inside AOI. + + """ + df = pd.DataFrame({"Date": pd.to_datetime(xr_dataset.time.values)}) + + for var in variables: + df[var] = xr_dataset[var].mean(dim=["x", "y"]) + df[var + "_std"] = xr_dataset[var].std(dim=["x", "y"]) + + # nans occure due to missging data from 1D to 2D array + # (pixels outside the polygon), + # from snap algorihtm nans occure due to input/output ouf of bounds + # checking. + # TODO: flaggging with snap biophys algorith or some other solution to + # check which nan are from snap algorithm and which from 1d to 2d transformation + nans = np.isnan(xr_dataset[var]).sum(dim=["x", "y"]) + sample_n = len(xr_dataset[var].x) * len(xr_dataset[var].y) - nans + + # compute how many of the nans are inside aoi (due to snap algorithm) + out_of_aoi_pixels = ( + len(xr_dataset[var].x) * len(xr_dataset[var].y) - xr_dataset.aoi_pixels + ) + nans_inside_aoi = nans - out_of_aoi_pixels + df["aoi_nan_percentage"] = nans_inside_aoi / xr_dataset.aoi_pixels + + df[var + "_se"] = df[var + "_std"] / np.sqrt(sample_n) + + return df + + +def gee2pecan_s2(geofile, outdir, start, end, qi_threshold): + """ + Downloads Sentinel 2 data from gee and saves it in a netCDF file at the specified location. + + Parameters + ---------- + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. + + outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + + start (str) -- starting date of the data request in the form YYYY-MM-DD + + end (str) -- ending date of the data request in the form YYYY-MM-DD + + qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. 
qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved + + Returns + ------- + Nothing: + output netCDF is saved in the specified directory. + + Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray + """ + + # read in the input file containing coordinates + df = gpd.read_file(geofile) + + request = S2RequestParams(start, end) + + # filter area of interest from the coordinates in the input file + area = AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) + + # calcuate qi attribute for the AOI + ee_get_s2_quality_info(area, request) + + # get the final data + ee_get_s2_data(area, request, qi_threshold=qi_threshold) + + # convert dataframe to an xarray dataset, used later for converting to netCDF + s2_data_to_xarray(area, request) + + area.data = compute_ndvi(area.data) + + # if specified output directory does not exist, create it + if not os.path.exists(outdir): + os.makedirs(outdir, exist_ok=True) + + # create a timerseries and save the netCDF file + timeseries = {} + timeseries_variable = ["ndvi"] + + area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc")) + timeseries[area.name] = xr_dataset_to_timeseries(area.data, timeseries_variable) \ No newline at end of file From f8c90c347ea37264c8b9596cabd4e3abc1eeb26c Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 6 Jul 2020 10:46:13 +0530 Subject: [PATCH 1169/2289] added gee_utils --- modules/data.remote/inst/gee_utils.py | 82 +++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 modules/data.remote/inst/gee_utils.py diff --git a/modules/data.remote/inst/gee_utils.py b/modules/data.remote/inst/gee_utils.py new file mode 100644 index 00000000000..e09d3ff9ed6 --- /dev/null +++ b/modules/data.remote/inst/gee_utils.py @@ -0,0 +1,82 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +""" +GEE utility functions + +Requires Python3 + +Author: Ayush Prasad +""" + +import ee +import geopandas as gpd + + +def create_geo(geofile): + """ + creates a GEE geometry from the input file + + Parameters + ---------- + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. + + Returns + ------- + geo -- object of ee.Geometry type + """ + df = gpd.read_file(geofile) + if (df.geometry.type == "Point").bool(): + # extract coordinates + lon = float(df.geometry.x) + lat = float(df.geometry.y) + # create geometry + geo = ee.Geometry.Point(lon, lat) + + elif (df.geometry.type == "Polygon").bool(): + # extract coordinates + area = [ + list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) + ] + # create geometry + geo = ee.Geometry.Polygon(area) + + else: + # if the input geometry type is not + raise ValueError("geometry type not supported") + + return geo + + +def get_sitename(geofile): + """ + extracts AOI name from the input file + + Parameters + ---------- + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. + + Returns + ------- + site_name (str) -- name of the AOI + """ + df = gpd.read_file(geofile) + site_name = df[df.columns[0]].iloc[0] + return site_name + + +def get_siteaoi(geofile): + """ + extracts AOI coordinates from the input file + + Parameters + ---------- + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. 
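A short usage sketch may help when reading the gee2pecan_s2 patch above. It is illustrative only: the GeoJSON path and dates are assumptions, and it requires an authenticated earthengine-api session plus a polygon AOI (name in the first column, geometry in the second).

```
# Hedged sketch, not part of the patch: download Sentinel-2 bands for one AOI.
from gee2pecan_s2 import gee2pecan_s2

gee2pecan_s2(
    geofile="./satellitetools/test.geojson",  # illustrative path
    outdir="./out",
    start="2019-01-01",
    end="2019-12-31",
    qi_threshold=1,  # per the docstring, 1 retrieves all imagery
)
# Expected output: ./out/<sitename>_bands.nc with band_data, SCL and ndvi.
```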
From f8c90c347ea37264c8b9596cabd4e3abc1eeb26c Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 6 Jul 2020 10:46:13 +0530
Subject: [PATCH 1169/2289] added gee_utils

---
 modules/data.remote/inst/gee_utils.py | 82 +++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100644 modules/data.remote/inst/gee_utils.py

diff --git a/modules/data.remote/inst/gee_utils.py b/modules/data.remote/inst/gee_utils.py
new file mode 100644
index 00000000000..e09d3ff9ed6
--- /dev/null
+++ b/modules/data.remote/inst/gee_utils.py
@@ -0,0 +1,82 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+GEE utility functions
+
+Requires Python3
+
+Author: Ayush Prasad
+"""
+
+import ee
+import geopandas as gpd
+
+
+def create_geo(geofile):
+    """
+    creates a GEE geometry from the input file
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    Returns
+    -------
+    geo -- object of ee.Geometry type
+    """
+    df = gpd.read_file(geofile)
+    if (df.geometry.type == "Point").bool():
+        # extract coordinates
+        lon = float(df.geometry.x)
+        lat = float(df.geometry.y)
+        # create geometry
+        geo = ee.Geometry.Point(lon, lat)
+
+    elif (df.geometry.type == "Polygon").bool():
+        # extract coordinates
+        area = [
+            list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0])
+        ]
+        # create geometry
+        geo = ee.Geometry.Polygon(area)
+
+    else:
+        # if the input geometry type is neither Point nor Polygon, raise an error
+        raise ValueError("geometry type not supported")
+
+    return geo
+
+
+def get_sitename(geofile):
+    """
+    extracts AOI name from the input file
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    Returns
+    -------
+    site_name (str) -- name of the AOI
+    """
+    df = gpd.read_file(geofile)
+    site_name = df[df.columns[0]].iloc[0]
+    return site_name
+
+
+def get_siteaoi(geofile):
+    """
+    extracts AOI coordinates from the input file
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    Returns
+    -------
+    site_aoi (str) -- coordinates of the AOI
+    """
+    df = gpd.read_file(geofile)
+    site_aoi = str(df[df.columns[1]].iloc[0])
+    return site_aoi
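The gee_utils helpers above all assume the same file layout: the site name in the first column and the geometry in the second. A minimal sketch of producing such a file (the site name and coordinates are made up):

```
# Hypothetical input file for gee_utils; any Point or Polygon geometry works.
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame({"name": ["my_site"]}, geometry=[Point(24.94, 60.17)])
gdf.to_file("my_site.geojson", driver="GeoJSON")

from gee_utils import get_sitename, get_siteaoi

print(get_sitename("my_site.geojson"))  # "my_site"
print(get_siteaoi("my_site.geojson"))   # "POINT (24.94 60.17)"
# create_geo("my_site.geojson") additionally needs ee.Initialize() to have run.
```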
From 473f822d2c6143898ff4d985b20acb7df537319f Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 6 Jul 2020 10:47:02 +0530
Subject: [PATCH 1170/2289] added get_remote_data

---
 modules/data.remote/inst/get_remote_data.py | 65 +++++++++++++++++++++
 1 file changed, 65 insertions(+)
 create mode 100644 modules/data.remote/inst/get_remote_data.py

diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py
new file mode 100644
index 00000000000..e9a664c4dae
--- /dev/null
+++ b/modules/data.remote/inst/get_remote_data.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+get_remote_data controls GEE and AppEEARS functions to download data.
+
+Requires Python3
+
+Author: Ayush Prasad
+"""
+
+from importlib import import_module
+
+# dictionary used to map the GEE image collection id to PEcAn specific function name
+collection_dict = {
+    "LANDSAT/LC08/C01/T1_SR": "l8",
+    "COPERNICUS/S2_SR": "s2",
+    "NASA_USDA/HSL/SMAP_soil_moisture": "smap",
+    # "insert GEE collection id": "insert PEcAn specific name",
+}
+
+
+def get_remote_data(geofile, outdir, start, end, source, collection, qc=None):
+    """
+    uses GEE and AppEEARS functions to download data
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    source (str) -- source from where data is to be downloaded
+
+    collection (str) -- dataset ID
+
+    qc (float) -- quality control parameter
+
+    Returns
+    -------
+    Nothing:
+        output netCDF is saved in the specified directory.
+    """
+    if source == "gee":
+        try:
+            # map the collection id to its PEcAn specific short name
+            collection = collection_dict[collection]
+        except KeyError:
+            raise ValueError("Requested image collection is not available")
+        # construct the function name
+        func_name = "".join([source, "2pecan", "_", collection])
+        # import the module
+        module = import_module(func_name)
+        # import the function from the module
+        func = getattr(module, func_name)
+        # if a qc parameter is specified pass these arguments to the function
+        if qc:
+            func(geofile, outdir, start, end, qc)
+        # this part takes care of functions which do not perform any quality checks, e.g. SMAP
+        else:
+            func(geofile, outdir, start, end)
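Since get_remote_data dispatches purely by name, the mapping from a request to a function can be traced by hand. The resolve helper below is illustrative, not in the patch; it reproduces the same string construction:

```
# Name-based dispatch: module and function share the constructed name.
from importlib import import_module

collection_dict = {
    "LANDSAT/LC08/C01/T1_SR": "l8",
    "COPERNICUS/S2_SR": "s2",
    "NASA_USDA/HSL/SMAP_soil_moisture": "smap",
}

def resolve(source, collection):
    # e.g. ("gee", "COPERNICUS/S2_SR") -> "gee2pecan_s2"
    func_name = "".join([source, "2pecan", "_", collection_dict[collection]])
    module = import_module(func_name)  # expects gee2pecan_s2.py on sys.path
    return getattr(module, func_name)  # the function inside shares the name
```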
From ebb49853a292c21a38cc621f294c8ee20eae6851 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 6 Jul 2020 10:47:58 +0530
Subject: [PATCH 1171/2289] added process_remote_data

---
 .../data.remote/inst/process_remote_data.py | 47 +++++++++++++++++++
 1 file changed, 47 insertions(+)
 create mode 100644 modules/data.remote/inst/process_remote_data.py

diff --git a/modules/data.remote/inst/process_remote_data.py b/modules/data.remote/inst/process_remote_data.py
new file mode 100644
index 00000000000..3ef21de0372
--- /dev/null
+++ b/modules/data.remote/inst/process_remote_data.py
@@ -0,0 +1,47 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+process_remote_data controls functions which perform further computation on the data.
+
+Requires Python3
+
+Author: Ayush Prasad
+"""
+
+from importlib import import_module
+
+
+def process_remote_data(aoi_name, output, outdir, algorithm):
+    """
+    uses processing functions to perform computation on input data
+
+    Parameters
+    ----------
+    aoi_name (str) -- name of the AOI.
+
+    output (dict) -- dictionary containing the keys get_data and process_data
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    algorithm (str) -- name of the algorithm used to perform computation.
+
+    Returns
+    -------
+    Nothing:
+        output netCDF is saved in the specified directory.
+    """
+    # get the type of the input data
+    input_type = output["get_data"]
+    # locate the input file
+    input_file = "".join([outdir, "/", aoi_name, "_", input_type, ".nc"])
+    # extract the computation which is to be done
+    output = output["process_data"]
+    # construct the function name
+    func_name = "".join([input_type, "2", output, "_", algorithm])
+    # import the module
+    module = import_module(func_name)
+    # import the function from the module
+    func = getattr(module, func_name)
+    # call the function
+    func(input_file, outdir)
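The naming convention that links the two stages is easiest to see with concrete values. A worked example using the defaults from remote_process (the site name is hypothetical):

```
# How process_remote_data locates its input and picks a processing function.
outdir, aoi_name = "./out", "my_site"
output = {"get_data": "bands", "process_data": "lai"}
algorithm = "snap"

input_file = "".join([outdir, "/", aoi_name, "_", output["get_data"], ".nc"])
func_name = "".join([output["get_data"], "2", output["process_data"], "_", algorithm])

print(input_file)  # ./out/my_site_bands.nc, written earlier by get_remote_data
print(func_name)   # bands2lai_snap, the module and function that get imported
```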
From b6213c64308555dbeda80aec909b20e2abcacb2a Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 6 Jul 2020 10:51:06 +0530
Subject: [PATCH 1173/2289] updated bands2lai_snap

---
 modules/data.remote/inst/bands2lai_snap.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/data.remote/inst/bands2lai_snap.py b/modules/data.remote/inst/bands2lai_snap.py
index 86707027432..270b927043b 100644
--- a/modules/data.remote/inst/bands2lai_snap.py
+++ b/modules/data.remote/inst/bands2lai_snap.py
@@ -7,7 +7,7 @@
 Author: Ayush Prasad
 """
-from satellitetools import gee
+import gee2pecan_s2 as gee
 import satellitetools.biophys_xarray as bio
 import geopandas as gpd
 import xarray as xr
@@ -34,6 +34,7 @@ def bands2lai_snap(inputfile, outdir):
     ds_disk = xr.open_dataset(inputfile)
     # calculate LAI using SNAP
     area = bio.run_snap_biophys(ds_disk, "LAI")
+    area = area[["lai"]]
 
     timeseries = {}
     timeseries_variable = ["lai"]

From 72bab4526d96db81277eab2d5a51529b394484d7 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 6 Jul 2020 10:52:40 +0530
Subject: [PATCH 1174/2289] updated gee2pecan_smap

---
 modules/data.remote/inst/gee2pecan_smap.py | 102 +++++++++++----------
 1 file changed, 55 insertions(+), 47 deletions(-)

diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py
index 5ea7b62b455..688542f9a0a 100644
--- a/modules/data.remote/inst/gee2pecan_smap.py
+++ b/modules/data.remote/inst/gee2pecan_smap.py
@@ -1,14 +1,15 @@
 """
 Downloads SMAP Global Soil Moisture Data from Google Earth Engine and saves it in a netCDF file.
 
+Data retrieved: ssm, susm, smp, ssma, susma
+
 Requires Python3
 
 Author: Ayush Prasad
 """
-
+from gee_utils import create_geo, get_siteaoi, get_sitename
 import ee
 import pandas as pd
-import geopandas as gpd
 import os
 import xarray as xr
 import datetime
@@ -16,7 +17,7 @@
 ee.Initialize()
 
 
-def gee2pecan_smap(geofile, outdir, start, end, var):
+def gee2pecan_smap(geofile, outdir, start, end):
     """
     Downloads and saves SMAP data from GEE
 
@@ -30,37 +31,15 @@
 
     end (str) -- ending date of the data request in the form YYYY-MM-dd
 
-    var (str) -- one of ssm, susm, smp, ssma, susma
-
     Returns
     -------
     Nothing:
            output netCDF is saved in the specified directory
 
     """
-    # read in the geojson file
-    df = gpd.read_file(geofile)
-
-    if (df.geometry.type == "Point").bool():
-        # extract coordinates
-        lon = float(df.geometry.x)
-        lat = float(df.geometry.y)
-        # create geometry
-        geo = ee.Geometry.Point(lon, lat)
-
-    elif (df.geometry.type == "Polygon").bool():
-        # extract coordinates
-        area = [
-            list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0])
-        ]
-        # create geometry
-        geo = ee.Geometry.Polygon(area)
-
-    else:
-        # if the input geometry type is not
-        raise ValueError("geometry type not supported")
-
-    def smap_ts(geo, start, end, var):
+    geo = create_geo(geofile)
+
+    def smap_ts(geo, start, end):
         # extract a feature from the geometry
         features = [ee.Feature(geo)]
         # create a feature collection from the features
@@ -73,7 +52,7 @@ def smap_ts_feature(feature):
                 ee.ImageCollection("NASA_USDA/HSL/SMAP_soil_moisture")
                 .filterBounds(area)
                 .filterDate(start, end)
-                .select([var])
+                .select(["ssm", "susm", "smp", "ssma", "susma"])
             )
 
             def smap_ts_image(img):
@@ -89,47 +68,77 @@ def smap_ts_image(img):
                     scale=scale,
                 )
                 # store data in an ee.Array
-                smapdata = ee.Array(img.get(var))
+                ssm = ee.Array(img.get("ssm"))
+                susm = ee.Array(img.get("susm"))
+                smp = ee.Array(img.get("smp"))
+                ssma = ee.Array(img.get("ssma"))
+                susma = ee.Array(img.get("susma"))
                 tmpfeature = (
                     ee.Feature(ee.Geometry.Point([0, 0]))
-                    .set("smapdata", smapdata)
+                    .set("ssm", ssm)
+                    .set("susm", susm)
+                    .set("smp", smp)
+                    .set("ssma", ssma)
+                    .set("susma", susma)
                     .set("dateinfo", dateinfo)
                 )
                 return
tmpfeature # map tmpfeature over the image collection smap_timeseries = collection.map(smap_ts_image) - return feature.set( - "smapdata", smap_timeseries.aggregate_array("smapdata") - ).set("dateinfo", smap_timeseries.aggregate_array("dateinfo")) + return ( + feature.set("ssm", smap_timeseries.aggregate_array("ssm")) + .set("susm", smap_timeseries.aggregate_array("susm")) + .set("smp", smap_timeseries.aggregate_array("smp")) + .set("ssma", smap_timeseries.aggregate_array("ssma")) + .set("susma", smap_timeseries.aggregate_array("susma")) + .set("dateinfo", smap_timeseries.aggregate_array("dateinfo")) + ) # map feature over featurecollection featureCollection = featureCollection.map(smap_ts_feature).getInfo() return featureCollection - fc = smap_ts(geo=geo, start=start, end=end, var=var) + fc = smap_ts(geo=geo, start=start, end=end) def fc2dataframe(fc): - smapdatalist = [] - datelist = [] + ssm_datalist = [] + susm_datalist = [] + smp_datalist = [] + ssma_datalist = [] + susma_datalist = [] + date_list = [] # extract var and date data from fc dictionary and store in it in smapdatalist and datelist - for i in range(len(fc["features"][0]["properties"]["smapdata"])): - smapdatalist.append(fc["features"][0]["properties"]["smapdata"][i][0]) - datelist.append( + for i in range(len(fc["features"][0]["properties"]["ssm"])): + ssm_datalist.append(fc["features"][0]["properties"]["ssm"][i][0]) + susm_datalist.append(fc["features"][0]["properties"]["susm"][i][0]) + smp_datalist.append(fc["features"][0]["properties"]["smp"][i][0]) + ssma_datalist.append(fc["features"][0]["properties"]["ssma"][i][0]) + susma_datalist.append(fc["features"][0]["properties"]["susma"][i][0]) + date_list.append( datetime.datetime.strptime( (fc["features"][0]["properties"]["dateinfo"][i]).split(".")[0], "%Y-%m-%d", ) ) - fc_dict = {"date": datelist, var: smapdatalist} + fc_dict = { + "date": date_list, + "ssm": ssm_datalist, + "susm": susm_datalist, + "smp": smp_datalist, + "ssma": ssma_datalist, + "susma": susma_datalist, + } # create a pandas dataframe and store the data - fcdf = pd.DataFrame(fc_dict, columns=["date", var]) + fcdf = pd.DataFrame( + fc_dict, columns=["date", "ssm", "susm", "smp", "ssma", "susma"] + ) return fcdf datadf = fc2dataframe(fc) - site_name = df[df.columns[0]].iloc[0] - AOI = str(df[df.columns[1]].iloc[0]) + site_name = get_sitename(geofile) + AOI = get_siteaoi(geofile) # convert the dataframe to an xarray dataset, used for converting it to a netCDF file tosave = xr.Dataset( @@ -139,7 +148,6 @@ def fc2dataframe(fc): "start_date": start, "end_date": end, "AOI": AOI, - "product": var, }, ) @@ -147,6 +155,6 @@ def fc2dataframe(fc): if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) - file_name = "_" + var # convert to netCDF and save the file - tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc")) + tosave.to_netcdf(os.path.join(outdir, site_name + "_smap.nc")) + From 9a80e6bc908a91721d1adb0a57526158cf9e88fe Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 6 Jul 2020 11:02:00 +0530 Subject: [PATCH 1175/2289] removed bands2ndvi.py --- modules/data.remote/inst/bands2ndvi.py | 46 ------------- modules/data.remote/inst/gee2pecan_bands.py | 72 --------------------- 2 files changed, 118 deletions(-) delete mode 100644 modules/data.remote/inst/bands2ndvi.py delete mode 100644 modules/data.remote/inst/gee2pecan_bands.py diff --git a/modules/data.remote/inst/bands2ndvi.py b/modules/data.remote/inst/bands2ndvi.py deleted file mode 100644 index 216c3b1e77c..00000000000 --- 
a/modules/data.remote/inst/bands2ndvi.py +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -""" -Calculates NDVI using gee. - -Author: Ayush Prasad -""" - -import xarray as xr -from satellitetools import gee -import geopandas as gpd -import os - - -def bands2ndvi(inputfile, outdir): - """ - Calculates NDVI for the input netCDF file and saves it in a new netCDF file. - - Parameters - ---------- - input (str) -- path to the input netCDF file containing bands. - - outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. - - Returns - ------- - Nothing: - output netCDF is saved in the specified directory. - - """ - # load the input file - ds_disk = xr.open_dataset(inputfile) - # calculate NDVI using gee - area = gee.compute_ndvi(ds_disk) - - timeseries = {} - timeseries_variable = ["ndvi"] - - # if specified output directory does not exist, create it. - if not os.path.exists(outdir): - os.makedirs(outdir, exist_ok=True) - - # creating a timerseries and saving the netCDF file - area.to_netcdf(os.path.join(outdir, area.name + "_ndvi.nc")) - timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable) diff --git a/modules/data.remote/inst/gee2pecan_bands.py b/modules/data.remote/inst/gee2pecan_bands.py deleted file mode 100644 index 3c325784962..00000000000 --- a/modules/data.remote/inst/gee2pecan_bands.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -""" -Downloads ESA Sentinel 2, Level-2A Bottom of Atmosphere data and saves it in a netCDF file. -Bands retrieved: B3, B4, B5, B6, B7, B8A, B11 and B12 -More information about the bands and the process followed to get the data can be found out at /satellitetools/geeapi.py - -Warning: Point coordinates as input has currently not been implemented. - -Requires Python3 - -Uses satellitetools created by Olli Nevalainen. - -Author: Ayush Prasad -""" - -from satellitetools import gee -import geopandas as gpd -import os - - -def gee2pecan_bands(geofile, outdir, start, end, qi_threshold): - """ - Downloads Sentinel 2 data from gee and saves it in a netCDF file at the specified location. - - Parameters - ---------- - geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. - - outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. - - start (str) -- starting date of the data request in the form YYYY-MM-DD - - end (str) -- ending date of the data request in the form YYYY-MM-DD - - qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved - - Returns - ------- - Nothing: - output netCDF is saved in the specified directory. - - Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray - To test this function please run the following code inside a python shell after importing this module, testfile is included. 
- - gee2pecan_bands(geofile="./satellitetools/test.geojson", outdir="./out/", start="2019-01-01", end="2019-12-31", qi_threshold=1) - """ - - # read in the input file containing coordinates - df = gpd.read_file(geofile) - - request = gee.S2RequestParams(start, end) - - # filter area of interest from the coordinates in the input file - area = gee.AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) - - # calcuate qi attribute for the AOI - gee.ee_get_s2_quality_info(area, request) - - # get the final data - gee.ee_get_s2_data(area, request, qi_threshold=qi_threshold) - - # convert dataframe to an xarray dataset, used later for converting to netCDF - gee.s2_data_to_xarray(area, request) - - # if specified output directory does not exist, create it - if not os.path.exists(outdir): - os.makedirs(outdir, exist_ok=True) - - # create a timerseries and save the netCDF file - area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc")) From dd7e6c6248b0c41a46ae0895299cd331c1f999bb Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 6 Jul 2020 15:35:28 -0400 Subject: [PATCH 1176/2289] updating the downscale of precipitation for GEFS download --- .../R/download.NOAA_GEFS_downscale.R | 27 ++++++++++++------- 1 file changed, 17 insertions(+), 10 deletions(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R index c090a95a326..95d74c35c1c 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R @@ -44,7 +44,7 @@ ##' -download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, start_date = Sys.time(), end_date = (as.POSIXct(start_date, tz="UTC") + lubridate::days(16)), +download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, start_date, end_date, overwrite = FALSE, verbose = FALSE, ...) { start_date <- as.POSIXct(start_date, tz = "UTC") @@ -63,7 +63,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st PEcAn.logger::logger.severe("Invalid dates: end date occurs before start date") } else if (as.numeric(end_date - start_date, units="hours") < 6) { #Done separately to produce a more helpful error message. PEcAn.logger::logger.severe("Times not far enough appart for a forecast to fall between them. Forecasts occur every six hours; make sure start - and end dates are at least 6 hours appart.") + and end dates are at least 6 hours apart.") } #Set the end forecast date (default is the full 16 days) @@ -74,7 +74,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st #Round the starting date/time down to the previous block of 6 hours. Adjust the time frame to match. forecast_hour = (lubridate::hour(start_date) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - increments = as.integer(as.numeric(end_date - start_date, units = "hours") / 6) #Calculating the number of forecasts between start and end dates. + increments = as.integer(as.numeric(end_date - start_date, units = "hours") / 6) #Calculating the number of forecasts between start and end dates increments = increments + ((lubridate::hour(end_date) - lubridate::hour(start_date)) %/% 6) #These calculations are required to use the rnoaa package. 
end_hour = sprintf("%04d", ((forecast_hour + (increments * 6)) %% 24) * 100) #Calculating the starting hour as a string, which is required type to access the @@ -87,7 +87,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st end_date = start_date + lubridate::hours(increments * 6) - + #Bounds date checking #NOAA's GEFS database maintains a rolling 12 days of forecast data for access through this function. #We do want Sys.Date() here - NOAA makes data unavaliable days at a time, not forecasts at a time. @@ -136,12 +136,12 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st noaa_data = list() #Downloading the data here. It is stored in a matrix, where columns represent time in intervals of 6 hours, and rows represent - #each ensemble member. Each variable getxs its own matrix, which is stored in the list noaa_data. - + #each ensemble member. Each variable gets its own matrix, which is stored in the list noaa_data. - for (i in 1:length(noaa_var_names)) { - noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data - } + + for (i in 1:length(noaa_var_names)) { + noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data + } #Fills in data with NaNs if there happens to be missing columns. for (i in 1:length(noaa_var_names)) { @@ -226,7 +226,8 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st forecasts$wind_speed <- sqrt(forecasts$eastward_wind^ 2 + forecasts$northward_wind^ 2) ### Downscale state variables - gefs_hour <- PEcAn.data.atmosphere::downscale_spline_to_hourly(df = forecasts, VarNamesStates = c("air_temperature", "wind_speed", "specific_humidity", "precipitation_flux", "air_pressure")) + gefs_hour <- PEcAn.data.atmosphere::downscale_spline_to_hourly(df = forecasts, VarNamesStates = c("air_temperature", "wind_speed", "specific_humidity", "air_pressure")) + ## convert longwave to hourly (just copy 6 hourly values over past 6-hour time period) nonSW.flux.hrly <- forecasts %>% @@ -241,8 +242,14 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st dplyr::group_by_at(c("NOAA.member", "timestamp")) %>% dplyr::summarize(surface_downwelling_shortwave_flux_in_air = mean(surface_downwelling_shortwave_flux_in_air)) + ## Downscale Precipitation Flux + precip.hrly <- forecasts %>% + dplyr::select(timestamp, NOAA.member, precipitation_flux) %>% + tidyr::complete(timestamp = nonSW.flux.hrly$timestamp, nesting(NOAA.member), fill = list(precipitation_flux = 0)) + joined<- dplyr::inner_join(gefs_hour, nonSW.flux.hrly, by = c("NOAA.member", "timestamp")) + joined<- dplyr::inner_join(joined, precip.hrly, by = c("NOAA.member", "timestamp")) joined <- dplyr::inner_join(joined, ShortWave.ds, by = c("NOAA.member", "timestamp")) %>% dplyr::distinct() %>% From 1ba2f30f00abb3b505eec71ce66e729a6aef96bf Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 6 Jul 2020 15:55:57 -0400 Subject: [PATCH 1177/2289] adding updated .Rd file --- modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd index 3cb0ba7644f..fbb32863ad6 100644 --- 
a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd
+++ b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd
@@ -9,8 +9,8 @@ download.NOAA_GEFS_downscale(
   lat.in,
   lon.in,
   sitename,
-  start_date = Sys.time(),
-  end_date = (as.POSIXct(start_date, tz = "UTC") + lubridate::days(16)),
+  start_date,
+  end_date,
   overwrite = FALSE,
   verbose = FALSE,
   ...

From 1dea3f68a14e8b5a8195416aac4df88a956b94e5 Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Mon, 6 Jul 2020 16:51:09 -0400
Subject: [PATCH 1178/2289] changing file names to be more appropriate for
 generalized scripts

---
 .../inst/NEFI/automatic_graphs.sh             |   4 +
 .../inst/NEFI/forecast.graphs.R               | 196 ++++++++++++++++++
 .../inst/NEFI/graph_fluxtowers.R              |   4 +-
 3 files changed, 202 insertions(+), 2 deletions(-)
 create mode 100755 modules/assim.sequential/inst/NEFI/automatic_graphs.sh
 create mode 100644 modules/assim.sequential/inst/NEFI/forecast.graphs.R

diff --git a/modules/assim.sequential/inst/NEFI/automatic_graphs.sh b/modules/assim.sequential/inst/NEFI/automatic_graphs.sh
new file mode 100755
index 00000000000..32951d72601
--- /dev/null
+++ b/modules/assim.sequential/inst/NEFI/automatic_graphs.sh
@@ -0,0 +1,4 @@
+Rscript -e "rmarkdown::render('/fs/data3/kzarada/NEFI/Willow_Creek/index.Rmd')"
+Rscript "/fs/data3/kzarada/NEFI/Willow_Creek/email.R"
+Rscript "/fs/data3/kzarada/NEFI/graph_fluxtowers.R"
+Rscript "/fs/data3/kzarada/NEFI/graph_SDA_fluxtowers.R"
\ No newline at end of file

diff --git a/modules/assim.sequential/inst/NEFI/forecast.graphs.R b/modules/assim.sequential/inst/NEFI/forecast.graphs.R
new file mode 100644
index 00000000000..2e4b1203567
--- /dev/null
+++ b/modules/assim.sequential/inst/NEFI/forecast.graphs.R
@@ -0,0 +1,196 @@
+#### need to create a graph function here to call with the args of start time
+
+forecast.graphs <- function(args){
+  start_date <- tryCatch(as.POSIXct(args[1]), error = function(e) {NULL} )
+  if (is.null(start_date)) {
+    in_wid <- as.integer(args[1])
+  }
+  dbparms = list()
+  dbparms$dbname = "bety"
+  dbparms$host = "128.197.168.114"
+  dbparms$user = "bety"
+  dbparms$password = "bety"
+  #Connection code copied and pasted from met.process
+  bety <- dplyr::src_postgres(dbname = dbparms$dbname,
+                              host = dbparms$host,
+                              user = dbparms$user,
+                              password = dbparms$password)
+  con <- bety$con #Connection to the database. dplyr returns a list.
+  # Identify the workflow with the proper information
+  if (!is.null(start_date)) {
+    workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE start_date='", format(start_date, "%Y-%m-%d %H:%M:%S"),
+                                           "' ORDER BY id"), con)
+  } else {
+    workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE id='", in_wid, "'"), con)
+  }
+  print(workflows)
+
+  workflows <- workflows[which(workflows$site_id == args[3]),]
+
+  if (nrow(workflows) > 1) {
+    workflow <- workflows[1,]
+  } else {
+    workflow <- workflows
+  }
+
+
+  print(paste0("Using workflow ", workflow$id))
+  wid <- workflow$id
+  outdir <- args[4]
+  pecan_out_dir <- paste0(outdir, "PEcAn_", wid, "/out");
+  pecan_out_dirs <- list.dirs(path = pecan_out_dir)
+  if (is.na(pecan_out_dirs[1])) {
+    print(paste0(pecan_out_dirs, " does not exist."))
+  }
+
+
+  #neemat <- matrix(1:64, nrow=1, ncol=64) # Proxy row, will be deleted later.
+  #qlemat <- matrix(1:64, nrow=1, ncol=64)# Proxy row, will be deleted later.
+
+  neemat <- vector()
+  qlemat <- vector()
+  soilmoist <- vector()
+  time <- vector()
+
+  num_results <- 0;
+  for (i in 2:length(pecan_out_dirs)) {
+    #datafile <- file.path(pecan_out_dirs[i], format(workflow$start_date, "%Y.nc"))
+    datafiles <- list.files(pecan_out_dirs[i])
+    datafiles <- datafiles[grep("*.nc$", datafiles)]
+
+    if (length(datafiles) == 0) {
+      print(paste0("File ", pecan_out_dirs[i], " does not exist."))
+      next
+    }
+
+    if(length(datafiles) == 1){
+
+      file = paste0(pecan_out_dirs[i],'/', datafiles[1])
+
+      num_results <- num_results + 1
+
+      #open netcdf file
+      ncptr <- ncdf4::nc_open(file);
+
+      # Attach data to matrices
+      nee <- ncdf4::ncvar_get(ncptr, "NEE")
+      if(i == 2){ neemat <- nee} else{neemat <- cbind(neemat,nee)}
+
+      qle <- ncdf4::ncvar_get(ncptr, "Qle")
+      if(i == 2){ qlemat <- qle} else{qlemat <- cbind(qlemat,qle)}
+
+      soil <- ncdf4::ncvar_get(ncptr, "SoilMoistFrac")
+      if(i == 2){ soilmoist <- soil} else{soilmoist <- cbind(soilmoist,soil)}
+
+      sec <- ncptr$dim$time$vals
+      origin <- strsplit(ncptr$dim$time$units, " ")[[1]][3]
+
+      # Close netcdf file
+      ncdf4::nc_close(ncptr)
+    }
+
+    if(length(datafiles) > 1){
+
+
+      file = paste0(pecan_out_dirs[i],'/', datafiles[1])
+      file2 = paste0(pecan_out_dirs[i],'/', datafiles[2])
+
+      num_results <- num_results + 1
+
+      #open netcdf file
+      ncptr1 <- ncdf4::nc_open(file);
+      ncptr2 <- ncdf4::nc_open(file2);
+      # Attach data to matrices
+      nee1 <- ncdf4::ncvar_get(ncptr1, "NEE")
+      nee2 <- ncdf4::ncvar_get(ncptr2, "NEE")
+      nee <- c(nee1, nee2)
+      if(i == 2){ neemat <- nee} else{neemat <- cbind(neemat,nee)}
+
+      qle1 <- ncdf4::ncvar_get(ncptr1, "Qle")
+      qle2 <- ncdf4::ncvar_get(ncptr2, "Qle")
+      qle <- c(qle1, qle2)
+
+      if(i == 2){ qlemat <- qle} else{qlemat <- cbind(qlemat,qle)}
+
+      soil1 <- ncdf4::ncvar_get(ncptr1, "SoilMoistFrac")
+      soil2 <- ncdf4::ncvar_get(ncptr2, "SoilMoistFrac")
+      soil <- c(soil1, soil2)
+      if(i == 2){ soilmoist <- soil} else{soilmoist <- cbind(soilmoist,soil)}
+
+
+      sec <- c(ncptr1$dim$time$vals, ncptr2$dim$time$vals+ last(ncptr1$dim$time$vals))
+      origin <- strsplit(ncptr1$dim$time$units, " ")[[1]][3]
+
+
+      # Close netcdf file
+      ncdf4::nc_close(ncptr1)
+      ncdf4::nc_close(ncptr2)
+
+    }
+
+  }
+
+  if (num_results == 0) {
+    print("No results found.")
+    quit("no")
+  } else {
+    print(paste0(num_results, " results found."))
+  }
+
+  # Time
+  time <- seq(1, length.out= length(sec))
+
+
+  # Calculate means
+  neemins <- NULL
+  neemaxes <- NULL
+  quantiles <- apply(neemat,1,quantile,c(0.025,0.5,0.975), na.rm=TRUE)
+  neelower95 <- quantiles[1,]
+  neemeans <- quantiles[2,]
+  neeupper95 <- quantiles[3,]
+  needf <- data.frame(time = time, Lower = neelower95, Predicted = neemeans, Upper = neeupper95)
+  needf$date <- as.Date(sec, origin = origin)
+  #$needf$Time <- c(6,12,18, rep(c(0,6,12,18),length.out = (length(needf$date) - 3)))
+  needf$start_date <- rep(start_date, each = length(sec))
+  needf$Time <- round(abs(sec - floor(sec)) * 24)
+
+
+
+
+  quantiles <- apply(qlemat,1,quantile,c(0.025,0.5,0.975), na.rm=TRUE)
+  qlelower95 <- quantiles[1,]
+  qlemeans <- quantiles[2,]
+  qleupper95 <- quantiles[3,]
+  qledf <- data.frame(time = time, Lower = qlelower95, Predicted = qlemeans, Upper = qleupper95)
+  qledf$date <- as.Date(sec, origin = origin)
+  #qledf$Time <- c(6,12,18, rep(c(0,6,12,18),length.out = (length(qledf$date) - 3)))
+  qledf$start_date <- rep(start_date, each = length(sec))
+  qledf$Time <- round(abs(sec - floor(sec)) * 24)
+
+
+  #soil moisture
+  soilmins <- NULL
+  soilmaxes <- NULL
+  quantiles <- apply(soilmoist,1,quantile,c(0.025,0.5,0.975), na.rm=TRUE)
+  soillower95 <- quantiles[1,]
+  soilmeans <- quantiles[2,]
+  soilupper95 <- quantiles[3,]
+  soildf <- data.frame(time = time, Lower = soillower95, Predicted = soilmeans, Upper = soilupper95)
+  soildf$date <- as.Date(sec, origin = origin)
+  #$needf$Time <- c(6,12,18, rep(c(0,6,12,18),length.out = (length(needf$date) - 3)))
+  soildf$start_date <- rep(start_date, each = length(sec))
+  soildf$Time <- round(abs(sec - floor(sec)) * 24)
+
+
+  # close the database connection before returning the requested variable
+  PEcAn.DB::db.close(con)
+
+  if (args[2] == "NEE") {
+    return(needf)
+  } else if (args[2] == "LE") {
+    return(qledf)
+  } else {
+    return(soildf)
+  }
+}
+
+
+
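[Editor's note: forecast.graphs() collapses the ensemble to a median and a 95% band by taking the 2.5/50/97.5% quantiles across members at every time step, one variable at a time. A quick sketch of the same reduction, with numpy standing in for R's apply/quantile pair; the matrix shape and member count are illustrative only:]

```python
import numpy as np

# rows = time steps, columns = ensemble members, mirroring neemat/qlemat above
neemat = np.random.randn(64, 21)

# 2.5%, 50% and 97.5% quantiles across members, one value per time step
lower, median, upper = np.nanquantile(neemat, [0.025, 0.5, 0.975], axis=1)
```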
diff --git a/modules/assim.sequential/inst/NEFI/graph_fluxtowers.R b/modules/assim.sequential/inst/NEFI/graph_fluxtowers.R
index 019d46fe3da..a531a8dd78d 100644
--- a/modules/assim.sequential/inst/NEFI/graph_fluxtowers.R
+++ b/modules/assim.sequential/inst/NEFI/graph_fluxtowers.R
@@ -6,7 +6,7 @@ library("gganimate")
 library("tidyverse")
 library('PEcAn.all')
 library("RCurl")
-source("/fs/data3/kzarada/NEFI/Willow_Creek/wcr.graphs.R")
+source("/fs/data3/kzarada/NEFI/Willow_Creek/forecast.graphs.R")
 #source("/fs/data3/kzarada/NEFI/Willow_Creek/download_WCr_met.R")
 
 #WLEF
@@ -64,7 +64,7 @@ for(j in 1:length(vars)){
 
     args = c(as.character(ctime[i]), vars[j], site.num, outdir)
 
-    assign(paste0(ctime[i], "_", vars[j]), wcr.graphs(args))
+    assign(paste0(ctime[i], "_", vars[j]), forecast.graphs(args))
   }
 }

From 9398cf7372f6f1006919044dd7853db8aa29715e Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Mon, 6 Jul 2020 16:52:18 -0400
Subject: [PATCH 1179/2289] adding WCr shell and xml to US_WCr folder

---
 .../assim.sequential/inst/NEFI/US_WCr/wcr.sh  | 30 +++++++
 .../assim.sequential/inst/NEFI/US_WCr/wcr.xml | 79 +++++++++++++++++++
 2 files changed, 109 insertions(+)
 create mode 100755 modules/assim.sequential/inst/NEFI/US_WCr/wcr.sh
 create mode 100644 modules/assim.sequential/inst/NEFI/US_WCr/wcr.xml

diff --git a/modules/assim.sequential/inst/NEFI/US_WCr/wcr.sh b/modules/assim.sequential/inst/NEFI/US_WCr/wcr.sh
new file mode 100755
index 00000000000..994492f91e7
--- /dev/null
+++ b/modules/assim.sequential/inst/NEFI/US_WCr/wcr.sh
@@ -0,0 +1,30 @@
+# This script first runs a program which sets up the xml file for a current
+# NOAA_GEFS PEcAn run, then runs PEcAn with that file.
+# @author Luke Dramko
+
+# REPLACE < username > WITH YOUR USERNAME
+# If running from a CRON job, these paths MUST be absolute paths. This is because CRON assumes that the directory it is in is the working directory.
+xmlfile="/fs/data3/kzarada/NEFI/US_WCr/wcr.xml" #Path to, and name of, the base xml file.
+workflow_path="/fs/data3/kzarada/pecan/web/" #Path to workflow.R (in pecan/web for the standard version or pecan/scripts for the custom version).
+output_path="/fs/data3/kzarada/output/" #Path to the directory where all PEcAn output is put.
+xmlscript="/fs/data3/kzarada/NEFI/NEFI_tools/pecan_scripts/generate.gefs.xml.R" #Path to, and name of, the script that modifies the xml.
+# Could also be just workflow.R in pecan/web
+workflow_name="workflow.R" #"workflow.wcr.assim.R" #Name of the workflow.R version
+
+# Generates the xml file based on a given input file. Overwrites the
+# input file.
+Rscript $xmlscript $xmlfile $1 &> /dev/null
+if [ $? -eq 11 ];
+then
+echo "xml file not found."
+elif [ $? -eq 12 ]
+then
+echo "Database connection failed."
+else
+  # Find the most recently created output directory.
This system is kind of hack-y, and is messed up if anything after + # "PEcAn", alphabetically, is put in the directory. Fortunately, that is unlikely to happen. + output_dir=$(ls $output_path | sort -V | grep "PEcAn_" | tail -n 1) +# Runs the PEcAn workflow. +Rscript ${workflow_path}${workflow_name} $xmlfile &> ${output_path}/${output_dir}/workflow.log.txt +echo "Workflow completed." +fi \ No newline at end of file diff --git a/modules/assim.sequential/inst/NEFI/US_WCr/wcr.xml b/modules/assim.sequential/inst/NEFI/US_WCr/wcr.xml new file mode 100644 index 00000000000..a8a6c966e9b --- /dev/null +++ b/modules/assim.sequential/inst/NEFI/US_WCr/wcr.xml @@ -0,0 +1,79 @@ + + + + + -1 + + 2020/07/06 04:00:02 +0000 + + /fs/data3/kzarada/output/PEcAn_1000013659/ + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + false + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.ALL + + 1 + + 1000012409 + + + + 3000 + FALSE + + + 100 + NEE + 2020 + 2020 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + 1000013659 + + + + 676 + 2020-06-24 + 2020-07-22 + + + + NOAA_GEFS_downscale + SIPNET + + + 2020-07-06 + 2020-07-22 + + + localhost + + From adae1caff6b8e5dcf0aeff44302e49f284bad223 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 6 Jul 2020 16:53:01 -0400 Subject: [PATCH 1180/2289] removing files that have been renamed and moved to proper folders --- .../assim.sequential/inst/NEFI/WCR_graphs.sh | 4 - .../inst/NEFI/run.gefs.sipnet.EXAMPLE.sh | 29 --- .../assim.sequential/inst/NEFI/wcr.graphs.R | 196 ------------------ 3 files changed, 229 deletions(-) delete mode 100755 modules/assim.sequential/inst/NEFI/WCR_graphs.sh delete mode 100644 modules/assim.sequential/inst/NEFI/run.gefs.sipnet.EXAMPLE.sh delete mode 100644 modules/assim.sequential/inst/NEFI/wcr.graphs.R diff --git a/modules/assim.sequential/inst/NEFI/WCR_graphs.sh b/modules/assim.sequential/inst/NEFI/WCR_graphs.sh deleted file mode 100755 index 32951d72601..00000000000 --- a/modules/assim.sequential/inst/NEFI/WCR_graphs.sh +++ /dev/null @@ -1,4 +0,0 @@ -Rscript -e "rmarkdown::render('/fs/data3/kzarada/NEFI/Willow_Creek/index.Rmd')" -Rscript "/fs/data3/kzarada/NEFI/Willow_Creek/email.R" -Rscript "/fs/data3/kzarada/NEFI/graph_fluxtowers.R" -Rscript "/fs/data3/kzarada/NEFI/graph_SDA_fluxtowers.R" \ No newline at end of file diff --git a/modules/assim.sequential/inst/NEFI/run.gefs.sipnet.EXAMPLE.sh b/modules/assim.sequential/inst/NEFI/run.gefs.sipnet.EXAMPLE.sh deleted file mode 100644 index 872aed955e9..00000000000 --- a/modules/assim.sequential/inst/NEFI/run.gefs.sipnet.EXAMPLE.sh +++ /dev/null @@ -1,29 +0,0 @@ -# This script first runs a program which sets up the xml file for a current -# NOAA_GEFS PEcAn run, then runs PEcAn with that file. -# @author Luke Dramko - -# REPLACE < username > WITH YOUR USERNAME -# If running from a CRON job, these paths MUST be absolute paths. This is because CRON assumes that the directory it is in is the working directory. -xmlfile="./gefs.sipnet.source.xml" #Path to, and name of, the base xml file. -workflow_path="./../../../../scripts/" #Path to workflow.R (in pecan/web for the standard version or pecan/scripts for the custom version). -output_path="/fs/data3/kzarada/output/" #Path to the directory where all PEcAn output is put. -xmlscript="./generate.gefs.xml.R" #Path to, and name of, the script that modifies the xml. 
-workflow_name="workflow.wcr.assim.R" #"workflow.wcr.assim.R" Name of the workflow.R version# Could also be just workflow.R in pecan/web - -# Generates the xml file based on a given input file. Overwrites the -# input file. -Rscript $xmlscript $xmlfile $1 &> /dev/null -if [ $? -eq 11 ]; -then - echo "xml file not found." -elif [ $? -eq 12 ] -then - echo "Database connection failed." -else - # Find the most recently created output directory. This system is kind of hack-y, and is messed up if anything after - # "PEcAn", alphabetically, is put in the directory. Fortunately, that is unlikely to happen. - output_dir=$(ls $output_path | sort -V | tail -n 1) - # Runs the PEcAn workflow. - Rscript ${workflow_path}${workflow_name} $xmlfile &> ${output_path}/${output_dir}/workflow.log.txt - echo "Workflow completed." -fi diff --git a/modules/assim.sequential/inst/NEFI/wcr.graphs.R b/modules/assim.sequential/inst/NEFI/wcr.graphs.R deleted file mode 100644 index 88295c1903e..00000000000 --- a/modules/assim.sequential/inst/NEFI/wcr.graphs.R +++ /dev/null @@ -1,196 +0,0 @@ -#### need to create a graph funciton here to call with the args of start time - -wcr.graphs <- function(args){ - start_date <- tryCatch(as.POSIXct(args[1]), error = function(e) {NULL} ) - if (is.null(start_date)) { - in_wid <- as.integer(args[1]) - } - dbparms = list() - dbparms$dbname = "bety" - dbparms$host = "128.197.168.114" - dbparms$user = "bety" - dbparms$password = "bety" - #Connection code copied and pasted from met.process - bety <- dplyr::src_postgres(dbname = dbparms$dbname, - host = dbparms$host, - user = dbparms$user, - password = dbparms$password) - con <- bety$con #Connection to the database. dplyr returns a list. - # Identify the workflow with the proper information - if (!is.null(start_date)) { - workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE start_date='", format(start_date, "%Y-%m-%d %H:%M:%S"), - "' ORDER BY id"), con) - } else { - workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE id='", in_wid, "'"), con) - } - print(workflows) - - workflows <- workflows[which(workflows$site_id == args[3]),] - - if (nrow(workflows) > 1) { - workflow <- workflows[1,] - } else { - workflow <- workflows - } - - - print(paste0("Using workflow ", workflow$id)) - wid <- workflow$id - outdir <- args[4] - pecan_out_dir <- paste0(outdir, "PEcAn_", wid, "/out"); - pecan_out_dirs <- list.dirs(path = pecan_out_dir) - if (is.na(pecan_out_dirs[1])) { - print(paste0(pecan_out_dirs, " does not exist.")) - } - - - #neemat <- matrix(1:64, nrow=1, ncol=64) # Proxy row, will be deleted later. - #qlemat <- matrix(1:64, nrow=1, ncol=64)# Proxy row, will be deleted later. 
- - neemat <- vector() - qlemat <- vector() - soilmoist <- vector() - time <- vector() - - num_results <- 0; - for (i in 2:length(pecan_out_dirs)) { - #datafile <- file.path(pecan_out_dirs[i], format(workflow$start_date, "%Y.nc")) - datafiles <- list.files(pecan_out_dirs[i]) - datafiles <- datafiles[grep("*.nc$", datafiles)] - - if (length(datafiles) == 0) { - print(paste0("File ", pecan_out_dirs[i], " does not exist.")) - next - } - - if(length(datafiles) == 1){ - - file = paste0(pecan_out_dirs[i],'/', datafiles[1]) - - num_results <- num_results + 1 - - #open netcdf file - ncptr <- ncdf4::nc_open(file); - - # Attach data to matricies - nee <- ncdf4::ncvar_get(ncptr, "NEE") - if(i == 2){ neemat <- nee} else{neemat <- cbind(neemat,nee)} - - qle <- ncdf4::ncvar_get(ncptr, "Qle") - if(i == 2){ qlemat <- qle} else{qlemat <- cbind(qlemat,qle)} - - soil <- ncdf4::ncvar_get(ncptr, "SoilMoistFrac") - if(i == 2){ soilmoist <- soil} else{soilmoist <- cbind(soilmoist,soil)} - - sec <- ncptr$dim$time$vals - origin <- strsplit(ncptr$dim$time$units, " ")[[1]][3] - - # Close netcdf file - ncdf4::nc_close(ncptr) - } - - if(length(datafiles) > 1){ - - - file = paste0(pecan_out_dirs[i],'/', datafiles[1]) - file2 = paste0(pecan_out_dirs[i],'/', datafiles[2]) - - num_results <- num_results + 1 - - #open netcdf file - ncptr1 <- ncdf4::nc_open(file); - ncptr2 <- ncdf4::nc_open(file2); - # Attach data to matricies - nee1 <- ncdf4::ncvar_get(ncptr1, "NEE") - nee2 <- ncdf4::ncvar_get(ncptr2, "NEE") - nee <- c(nee1, nee2) - if(i == 2){ neemat <- nee} else{neemat <- cbind(neemat,nee)} - - qle1 <- ncdf4::ncvar_get(ncptr1, "Qle") - qle2 <- ncdf4::ncvar_get(ncptr2, "Qle") - qle <- c(qle1, qle2) - - if(i == 2){ qlemat <- qle} else{qlemat <- cbind(qlemat,qle)} - - soil1 <- ncdf4::ncvar_get(ncptr1, "SoilMoistFrac") - soil2 <- ncdf4::ncvar_get(ncptr2, "SoilMoistFrac") - soil <- c(soil1, soil2) - if(i == 2){ soilmoist <- soil} else{soilmoist <- cbind(soilmoist,soil)} - - - sec <- c(ncptr1$dim$time$vals, ncptr2$dim$time$vals+ last(ncptr1$dim$time$vals)) - origin <- strsplit(ncptr1$dim$time$units, " ")[[1]][3] - - - # Close netcdf file - ncdf4::nc_close(ncptr1) - ncdf4::nc_close(ncptr2) - - } - - } - - if (num_results == 0) { - print("No results found.") - quit("no") - } else { - print(paste0(num_results, " results found.")) - } - - # Time - time <- seq(1, length.out= length(sec)) - - - # Caluclate means - neemins <- NULL - neemaxes <- NULL - quantiles <- apply(neemat,1,quantile,c(0.025,0.5,0.975), na.rm=TRUE) - neelower95 <- quantiles[1,] - neemeans <- quantiles[2,] - neeupper95 <- quantiles[3,] - needf <- data.frame(time = time, Lower = neelower95, Predicted = neemeans, Upper = neeupper95) - needf$date <- as.Date(sec, origin = origin) - #$needf$Time <- c(6,12,18, rep(c(0,6,12,18),length.out = (length(needf$date) - 3))) - needf$start_date <- rep(start_date, each = length(sec)) - needf$Time <- round(abs(sec - floor(sec)) * 24) - - - - - quantiles <- apply(qlemat,1,quantile,c(0.025,0.5,0.975), na.rm=TRUE) - qlelower95 <- quantiles[1,] - qlemeans <- quantiles[2,] - qleupper95 <- quantiles[3,] - qledf <- data.frame(time = time, Lower = qlelower95, Predicted = qlemeans, Upper = qleupper95) - qledf$date <- as.Date(sec, origin = origin) - #qledf$Time <- c(6,12,18, rep(c(0,6,12,18),length.out = (length(qledf$date) - 3))) - qledf$start_date <- rep(start_date, each = length(sec)) - qledf$Time <- round(abs(sec - floor(sec)) * 24) - - - #soil moisture - soilmins <- NULL - soilmaxes <- NULL - quantiles <- 
apply(soilmoist,1,quantile,c(0.025,0.5,0.975), na.rm=TRUE)
-  soillower95 <- quantiles[1,]
-  soilmeans <- quantiles[2,]
-  soilupper95 <- quantiles[3,]
-  soildf <- data.frame(time = time, Lower = soillower95, Predicted = soilmeans, Upper = soilupper95)
-  soildf$date <- as.Date(sec, origin = origin)
-  #$needf$Time <- c(6,12,18, rep(c(0,6,12,18),length.out = (length(needf$date) - 3)))
-  soildf$start_date <- rep(start_date, each = length(sec))
-  soildf$Time <- round(abs(sec - floor(sec)) * 24)
-
-
-
-  if(args[2] == "NEE"){
-    return(needf)}
-  if(args[2]== "LE"){
-    return(qledf)}
-  else(return(soildf))
-
-  PEcAn.DB::db.close(con)
-}
-
-
-

From fad005b5d04957b4af6e75c455079f1ba8474531 Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Mon, 6 Jul 2020 16:53:39 -0400
Subject: [PATCH 1181/2289] adding Read me for forecasts

---
 .../assim.sequential/inst/NEFI/README.html    | 452 ++++++++++++++++++
 modules/assim.sequential/inst/NEFI/README.md  |  28 ++
 2 files changed, 480 insertions(+)
 create mode 100644 modules/assim.sequential/inst/NEFI/README.html
 create mode 100644 modules/assim.sequential/inst/NEFI/README.md

diff --git a/modules/assim.sequential/inst/NEFI/README.html b/modules/assim.sequential/inst/NEFI/README.html
new file mode 100644
index 00000000000..6c67d52b5fe
--- /dev/null
+++ b/modules/assim.sequential/inst/NEFI/README.html
@@ -0,0 +1,452 @@
[452 lines of knitr-rendered HTML omitted: page boilerplate (CSS, JavaScript, markup) for "Steps for running and adding SIPNET forecasts"; the rendered content duplicates the README.md added below]
diff --git a/modules/assim.sequential/inst/NEFI/README.md b/modules/assim.sequential/inst/NEFI/README.md
new file mode 100644
index 00000000000..fad78dba3d1
--- /dev/null
+++ b/modules/assim.sequential/inst/NEFI/README.md
@@ -0,0 +1,28 @@
+Steps for running and adding SIPNET forecasts
+==========================================================================
+
+### Files associated with running forecasts:
+1. `generate.gefs.xml` : R script for setting start and end date in xml; creates workflow id and associated directory
+2. `*sitename*.xml` : site specific xml script
+3. `*sitename*.sh` : site specific shell file to run forecasts with cron jobs
+
+### Files associated with graphing forecasts:
+1. `download_*sitename*.R` : R script for downloading and cleaning Flux data from towers
+2. `graph_fluxtowers.R` : script to create RData for shiny app
+3. `forecast.graphs.R` : script that is used to get forecasted data for `graph_fluxtowers.R`
+4. `graph_SDA_fluxtowers.R` : script to graph SDA fluxtowers
+5. `automatic_graphs.sh` : shell for running `email.R`, `graph_fluxtowers.R`, `graph_SDA_fluxtowers.R`
+
+### Files for automatic WCr emails:
+1. `email_graphs.R` - creates graphs for the email - this is specifically for WCr right now
+2. `email.R` - sends the email out with graphs for today's forecast
+
+
+
+
+### To add forecast and add to dashboard:
+1. create a flux download for site
+2. create site specific xml
+3. create site specific shell file to run forecast automatically
+4. Add site.num, site.abv, outdir, db.num to graph_fluxtowers.R
+

From c8065dba204a9c47fe2baaa8abc5790e6d0e4b85 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Tue, 7 Jul 2020 06:31:41 +0000
Subject: [PATCH 1182/2289] bugfix in /api/runs/

---
 apps/api/R/runs.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index c3daab5fa53..30fdfeab34c 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -47,7 +47,7 @@ getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){
     result$next_page <- paste0(
       req$rook.url_scheme,
       "://",
       req$HTTP_HOST,
-      "/api/workflows",
+      "/api/runs",
       req$PATH_INFO,
       substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
       (as.numeric(limit) + as.numeric(offset)),
@@ -59,7 +59,7 @@ getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){
     result$prev_page <- paste0(
       req$rook.url_scheme,
       "://",
       req$HTTP_HOST,
-      "/api/workflows",
+      "/api/runs",
       req$PATH_INFO,
       substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]),
       max(0, (as.numeric(offset) - as.numeric(limit))),
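[Editor's note: the one-word change above matters because this handler builds its own pagination links, and pointing them at /api/workflows sent clients of /api/runs to the wrong endpoint on the next page. The offset arithmetic itself is unchanged; a sketch of it for orientation, with illustrative values rather than the API's exact query strings:]

```python
def page_offsets(offset, limit):
    # next page advances by one limit; previous page is clamped at zero,
    # mirroring max(0, offset - limit) in runs.R
    return {"next": offset + limit, "prev": max(0, offset - limit)}

# page_offsets(offset=50, limit=50) -> {"next": 100, "prev": 0}
```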
From adae1caff6b8e5dcf0aeff44302e49f284bad223 Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Tue, 7 Jul 2020 11:45:19 -0400
Subject: [PATCH 1183/2289] addressing PR comments

---
 CHANGELOG.md                                          |  4 +++-
 .../data.atmosphere/R/download.NOAA_GEFS_downscale.R  | 10 ++++++----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index dab05d00119..1492790aeb2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -25,6 +25,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - Update ED docker build, will now build version 2.2.0 and git
 - Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625
 - model2netcdf.ED2 no longer detecting which varibles names `-T-` files have based on ED2 version (#2623)
+
 
 ### Changed
@@ -40,7 +41,8 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 - No longer writing an arbitrary num for each PFT, this was breaking ED runs potentially.
 - The pecan/data container has no longer hardcoded path for postgres
 - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511).
-- data.remote: Arguments to the function `call_MODIS()` have been changed (issue #2519).
+- data.remote: Arguments to the function `call_MODIS()` have been changed (issue #2519).
+- Changed precipitation downscale in `PEcAn.data.atmosphere::download.NOAA_GEFS_downscale`. Precipitation was being downscaled via a spline which was causing fake rain events. Instead the 6 hr precipitation flux values from GEFS are preserved with 0's filling in the hours between.
 
 ### Added

diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
index 95d74c35c1c..e70c64baffd 100644
--- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
+++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
@@ -243,12 +243,14 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st
     dplyr::summarize(surface_downwelling_shortwave_flux_in_air = mean(surface_downwelling_shortwave_flux_in_air))
 
   ## Downscale Precipitation Flux
+  #fills in the hours between the 6hr GEFS with zeros using the timestamp from downscaled Flux
   precip.hrly <- forecasts %>%
-    dplyr::select(timestamp, NOAA.member, precipitation_flux) %>%
-    tidyr::complete(timestamp = nonSW.flux.hrly$timestamp, nesting(NOAA.member), fill = list(precipitation_flux = 0))
+    dplyr::select(timestamp, NOAA.member, precipitation_flux) %>%
+    tidyr::complete(timestamp = nonSW.flux.hrly$timestamp, tidyr::nesting(NOAA.member), fill = list(precipitation_flux = 0))
 
-
-  joined<- dplyr::inner_join(gefs_hour, nonSW.flux.hrly, by = c("NOAA.member", "timestamp"))
+#join together the 4 different downscaled data frames
+  #checks for errors in downscaled data; removes NA times; replaces erroneous values with 0's or NA's
+  joined<- dplyr::inner_join(gefs_hour, nonSW.flux.hrly, by = c("NOAA.member", "timestamp"))
   joined<- dplyr::inner_join(joined, precip.hrly, by = c("NOAA.member", "timestamp"))
   joined <- dplyr::inner_join(joined, ShortWave.ds, by = c("NOAA.member", "timestamp")) %>%
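[Editor's note: the substance of the fix is that precipitation no longer goes through the hourly spline, which invented rain between forecast times; the 6-hourly flux values stay on their original timestamps and every other hour is filled with zero. A rough pandas analogue of what tidyr::complete() with fill = list(precipitation_flux = 0) does here — column names and dates are illustrative:]

```python
import pandas as pd

six_hourly = pd.DataFrame({
    "timestamp": pd.date_range("2020-07-06", periods=3, freq="6H"),
    "precipitation_flux": [0.2, 0.0, 0.5],
})

# expand onto the hourly grid: GEFS values stay put, gaps become 0.0 rather
# than interpolated values
hourly = (
    six_hourly.set_index("timestamp")
    .reindex(pd.date_range("2020-07-06", periods=13, freq="H"), fill_value=0.0)
    .rename_axis("timestamp")
    .reset_index()
)
```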
"LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR" for gee - qc (float) -- quality control parameter + qc (float) -- quality control parameter, only required for gee queries, None by default - algorithm (str) -- algorithm used for processing data in process_data() + algorithm (str) -- algorithm used for processing data in process_data(), currently only SNAP is implemented to estimate LAI from Sentinel-2 bands, None by default - output (dict) -- "get_data" - the type of raw data, "process_data" - final proccesed data + output (dict) -- "get_data" - the type of output variable requested from get_data module, "process_data" - the type of output variable requested from process_data module stage (dict) -- temporary argument to imitate database checks From f5bf36f28df82009f122b2ed3a53d16db2414fe8 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:33:13 +0530 Subject: [PATCH 1185/2289] updated remote_process --- modules/data.remote/inst/remote_process.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index f901f774cbf..464cc503a03 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -6,7 +6,7 @@ Requires Python3 -Author: Ayush Prasad +Author(s): Ayush Prasad, Istem Fer """ from get_remote_data import get_remote_data @@ -22,7 +22,7 @@ def remote_process( source, collection, qc=None, - algorithm="snap", + algorithm=None, output={"get_data": "bands", "process_data": "lai"}, stage={"get_data": True, "process_data": True}, ): From 42e5aa3216135aeb90c465b5b5caa61db72f7009 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:35:08 +0530 Subject: [PATCH 1186/2289] moved ndvi calc to gee_utils --- modules/data.remote/inst/gee_utils.py | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/modules/data.remote/inst/gee_utils.py b/modules/data.remote/inst/gee_utils.py index e09d3ff9ed6..a1488856494 100644 --- a/modules/data.remote/inst/gee_utils.py +++ b/modules/data.remote/inst/gee_utils.py @@ -65,7 +65,7 @@ def get_sitename(geofile): return site_name -def get_siteaoi(geofile): +def get_sitecoord(geofile): """ extracts AOI coordinates from the input file @@ -80,3 +80,23 @@ def get_siteaoi(geofile): df = gpd.read_file(geofile) site_aoi = str(df[df.columns[1]].iloc[0]) return site_aoi + +def calc_ndvi(nir, red): + """ + calculates NDVI on GEE + + Parameters + ---------- + nir (str) -- NIR band of the image collection + + red (str) -- RED band of the image collection + + Returns + ------- + image -- with added NDVI band + + """ + def add_ndvi(image): + ndvi = image.normalizedDifference([nir, red]).rename("NDVI") + return image.addBands(ndvi) + return add_ndvi \ No newline at end of file From 1e5f865163438b432389fcdb7e8bc3086e9a1d57 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:38:03 +0530 Subject: [PATCH 1187/2289] moved s2 ndvi function to gee --- modules/data.remote/inst/gee2pecan_s2.py | 888 ++++------------------- 1 file changed, 134 insertions(+), 754 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py index 23c0950e843..ec132d70e8a 100644 --- a/modules/data.remote/inst/gee2pecan_s2.py +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -1,800 +1,180 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- """ -Created on Thu Feb 6 15:24:12 2020 +Extracts Landsat 8 surface reflactance band data from Google 
Earth Engine and saves it in a netCDF file -Module to retrieve Sentinel-2 data from Google Earth Engine (GEE). -Warning: the data is currently retrieved with 10m resolution (scale=10), so -the 20m resolution bands are resampled. -TODO: Add option for specifying the request spatial resolution. +Requires Python3 -@author: Olli Nevalainen (olli.nevalainen@fmi.fi), - Finnish Meteorological Institute) +Bands retrieved: B1, B2, B3, B4, B5, B6, B7, B10, B11 along with computed NDVI +If ROI is a Point, this function can be used for getting SR data from Landsat 7, 5 and 4 as well. +Author: Ayush Prasad """ -import sys -import os +from gee_utils import create_geo, get_sitecoord, get_sitename, calc_ndvi import ee -import datetime import pandas as pd import geopandas as gpd -import numpy as np +import datetime +import os import xarray as xr -from functools import reduce +import numpy as np +import re ee.Initialize() -NO_DATA = -99999 -S2_REFL_TRANS = 10000 -# ----------------- Sentinel-2 ------------------------------------- -s2_qi_labels = [ - "NODATA", - "SATURATED_DEFECTIVE", - "DARK_FEATURE_SHADOW", - "CLOUD_SHADOW", - "VEGETATION", - "NOT_VEGETATED", - "WATER", - "UNCLASSIFIED", - "CLOUD_MEDIUM_PROBA", - "CLOUD_HIGH_PROBA", - "THIN_CIRRUS", - "SNOW_ICE", -] - -s2_filter1 = [ - "NODATA", - "SATURATED_DEFECTIVE", - "CLOUD_SHADOW", - "UNCLASSIFIED", - "CLOUD_MEDIUM_PROBA", - "CLOUD_HIGH_PROBA", - "THIN_CIRRUS", - "SNOW_ICE", -] - - -class S2RequestParams: - """S2 data request paramaters. - - Attributes - ---------- - datestart : str - Starting date for data request in form "2019-01-01". - dateend : str - Starting date for data request in form "2019-12-31". - bands : list, optional - List of strings with band name. - the default is ['B3', 'B4', 'B5', - 'B6', 'B7', 'B8A', 'B11', 'B12']. - """ - - def __init__(self, datestart, dateend, bands=None): - """. - - Parameters - ---------- - datestart : str - Starting date for data request in form "2019-01-01". - dateend : str - Starting date for data request in form "2019-12-31". - bands : list, optional - List of strings with band name. - The default is ['B3', 'B4', 'B5', - 'B6', 'B7', 'B8A', 'B11', 'B12']. - - Returns - ------- - None. - - """ - default_bands = ["B3", "B4", "B5", "B6", "B7", "B8A", "B11", "B12"] - - self.datestart = datestart - self.dateend = dateend - self.bands = bands if bands else default_bands - - -class AOI: - """Area of interest for area info and data. - - Attributes - ---------- - name : str - Name of the area. - geometry : str - Geometry of the area of interest e.g. from geopandas. - Currently only polygons tested. The default is None. - coordinate_list : list, optional - List of coordinates of a polygon - (loop should be closed). Computed from geometry if not - provided. The default is None. - tile : str, optional - Tile id as string for the data. Used to keep the data in - same crs because an area can be in multiple tiles with - different crs. The default is None. - qi : pandas dataframe - Dataframe with quality information about available imagery for the AOI. - qi is empty at init and can be computed with - ee_get_s2_quality_info function. - data : pandas dataframe or xarray - Dataframe holding data retrieved from GEE. Data can be computed using - function - qi is empty at init and can be computed with ee_get_s2_data and - converted to xarray using s2_data_to_xarray function. 
- - Methods - ------- - __init__ +def gee2pecan_l8(geofile, outdir, start, end, qc=1): """ - - def __init__(self, name, geometry=None, coordinate_list=None, tile=None): - """. - - Parameters - ---------- - name : str - Name of the area. - geometry : geometry in wkt, optional - Geometry of the area of interest e.g. from geopandas. - Currently only polygons tested. The default is None. - coordinate_list : list, optional - List of coordinates of a polygon - (loop should be closed). Computed from geometry if not - provided. The default is None. - tile : str, optional - Tile id as string for the data. Used to keep the data in - same crs because an area can be in multiple tiles with - different crs. The default is None. - - Returns - ------- - None. - - """ - if not geometry and not coordinate_list: - sys.exit("AOI has to get either geometry or coordinates as list!") - elif geometry and not coordinate_list: - coordinate_list = list(geometry.exterior.coords) - elif coordinate_list and not geometry: - geometry = None - - self.name = name - self.geometry = geometry - self.coordinate_list = coordinate_list - self.qi = None - self.data = None - self.tile = tile - - -def ee_get_s2_quality_info(AOIs, req_params): - """Get S2 quality information from GEE. + Extracts Landsat 8 SR band data from GEE Parameters ---------- - AOIs : list or AOI instance - List of AOI instances or single AOI instance. If multiple AOIs - proviveded the computation in GEE server is parallellized. - If too many areas with long time range is provided, user might - hit GEE memory limits. Then you should call this function - sequentally to all AOIs. - req_params : S2RequestParams instance - S2RequestParams instance with request details. - - Returns - ------- - Nothing: - Computes qi attribute for the given AOI instances. 
- - """ - # if single AOI instance, make a list - if isinstance(AOIs, AOI): - AOIs = list([AOIs]) - - features = [ - ee.Feature(ee.Geometry.Polygon(a.coordinate_list), {"name": a.name}) - for a in AOIs - ] - feature_collection = ee.FeatureCollection(features) - - def ee_get_s2_quality_info_feature(feature): - - area = feature.geometry() - image_collection = ( - ee.ImageCollection("COPERNICUS/S2_SR") - .filterBounds(area) - .filterDate(req_params.datestart, req_params.dateend) - .select(["SCL"]) - ) - - def ee_get_s2_quality_info_image(img): - productid = img.get("PRODUCT_ID") - assetid = img.id() - tileid = img.get("MGRS_TILE") - system_index = img.get("system:index") - proj = img.select("SCL").projection() - - # apply reducer to list - img = img.reduceRegion( - reducer=ee.Reducer.toList(), geometry=area, maxPixels=1e8, scale=10 - ) - - # get data into arrays - classdata = ee.Array( - ee.Algorithms.If( - img.get("SCL"), ee.Array(img.get("SCL")), ee.Array([0]) - ) - ) - - totalcount = classdata.length() - classpercentages = { - key: classdata.eq(i) - .reduce(ee.Reducer.sum(), [0]) - .divide(totalcount) - .get([0]) - for i, key in enumerate(s2_qi_labels) - } - - tmpfeature = ( - ee.Feature(ee.Geometry.Point([0, 0])) - .set("productid", productid) - .set("system_index", system_index) - .set("assetid", assetid) - .set("tileid", tileid) - .set("projection", proj) - .set(classpercentages) - ) - return tmpfeature - - s2_qi_image_collection = image_collection.map(ee_get_s2_quality_info_image) - - return ( - feature.set( - "productid", s2_qi_image_collection.aggregate_array("productid") - ) - .set("system_index", s2_qi_image_collection.aggregate_array("system_index")) - .set("assetid", s2_qi_image_collection.aggregate_array("assetid")) - .set("tileid", s2_qi_image_collection.aggregate_array("tileid")) - .set("projection", s2_qi_image_collection.aggregate_array("projection")) - .set( - { - key: s2_qi_image_collection.aggregate_array(key) - for key in s2_qi_labels - } - ) - ) - - s2_qi_feature_collection = feature_collection.map( - ee_get_s2_quality_info_feature - ).getInfo() - - s2_qi = s2_feature_collection_to_dataframes(s2_qi_feature_collection) - - for a in AOIs: - name = a.name - a.qi = s2_qi[name] - - -def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): - """Get S2 data (level L2A, bottom of atmosphere data) from GEE. + geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. + + outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + + start (str) -- starting date of the data request in the form YYYY-MM-DD + + end (str) -- ending date areaof the data request in the form YYYY-MM-DD - Warning: the data is currently retrieved with 10m resolution (scale=10), so - the 20m resolution bands are resampled. - TODO: Add option for specifying the request spatial resolution. + qc (bool) -- uses the cloud masking function if set to True - Parameters - ---------- - AOIs : list or AOI instance - List of AOI instances or single AOI instance. If multiple AOIs - proviveded the computation in GEE server is parallellized. - If too many areas with long time range is provided, user might - hit GEE memory limits. Then you should call this function - sequentally to all AOIs. AOIs should have qi attribute computed first. - req_params : S2RequestParams instance - S2RequestParams instance with request details. 
- qi_threshold : float - Threshold value to filter images based on used qi filter. - qi filter holds labels of classes whose percentages within the AOI - is summed. If the sum is larger then the qi_threhold, data will not be - retrieved for that date/image. The default is 1, meaning all data is - retrieved. - qi_filter : list - List of strings with class labels (of unwanted classes) used to compute qi value, - see qi_threhold. The default is s2_filter1 = ['NODATA', - 'SATURATED_DEFECTIVE', - 'CLOUD_SHADOW', - 'UNCLASSIFIED', - 'CLOUD_MEDIUM_PROBA', - 'CLOUD_HIGH_PROBA', - 'THIN_CIRRUS', - 'SNOW_ICE']. + bands (list of str) -- bands to be retrieved. Default: B5, B4 Returns ------- Nothing: - Computes data attribute for the given AOI instances. - + output netCDF is saved in the specified directory. + """ - datestart = req_params.datestart - dateend = req_params.dateend - bands = req_params.bands - # if single AOI instance, make a list - if isinstance(AOIs, AOI): - AOIs = list([AOIs]) - - features = [] - for a in AOIs: - filtered_qi = filter_s2_qi_dataframe(a.qi, qi_threshold, qi_filter) - if len(filtered_qi) == 0: - print("No observations to retrieve for area %s" % a.name) - continue - - if a.tile is None: - min_tile = min(filtered_qi["tileid"].values) - filtered_qi = filtered_qi[filtered_qi["tileid"] == min_tile] - a.tile = min_tile - else: - filtered_qi = filtered_qi[filtered_qi["tileid"] == a.tile] - - full_assetids = "COPERNICUS/S2_SR/" + filtered_qi["assetid"] - image_list = [ee.Image(asset_id) for asset_id in full_assetids] - crs = filtered_qi["projection"].values[0]["crs"] - feature = ee.Feature( - ee.Geometry.Polygon(a.coordinate_list), - {"name": a.name, "image_list": image_list}, - ) - - features.append(feature) - if len(features) == 0: - print("No data to be retrieved!") - return None + # scale (int) Default: 30 + scale = 30 + # bands retrieved ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11"] - feature_collection = ee.FeatureCollection(features) + def reduce_region(image): + """ + Reduces the selected region + currently set to mean, can be changed as per requirements. 
+ """ + stat_dict = image.reduceRegion(ee.Reducer.mean(), geo, scale) + sensingtime = image.get("SENSING_TIME") + return ee.Feature(None, stat_dict).set("sensing_time", sensingtime) - def ee_get_s2_data_feature(feature): - geom = feature.geometry(0.01, crs) - image_collection = ( - ee.ImageCollection.fromImages(feature.get("image_list")) - .filterBounds(geom) - .filterDate(datestart, dateend) - .select(bands + ["SCL"]) + def mask(image): + """ + Masks clouds and cloud shadows using the pixel_qa band + Can be configured as per requirements + """ + clear = image.select("pixel_qa") + return image.updateMask(clear) + + # create image collection depending upon the qc choice + if qc == True: + landsat = ( + ee.ImageCollection("LANDSAT/LC08/C01/T1_SR") + .filterDate(start, end) + .map(mask) + .sort("system:time_start", True) ) - def ee_get_s2_data_image(img): - # img = img.clip(geom) - productid = img.get("PRODUCT_ID") - assetid = img.id() - tileid = img.get("MGRS_TILE") - system_index = img.get("system:index") - proj = img.select(bands[0]).projection() - sun_azimuth = img.get("MEAN_SOLAR_AZIMUTH_ANGLE") - sun_zenith = img.get("MEAN_SOLAR_ZENITH_ANGLE") - view_azimuth = ( - ee.Array( - [img.get("MEAN_INCIDENCE_AZIMUTH_ANGLE_%s" % b) for b in bands] - ) - .reduce(ee.Reducer.mean(), [0]) - .get([0]) - ) - view_zenith = ( - ee.Array([img.get("MEAN_INCIDENCE_ZENITH_ANGLE_%s" % b) for b in bands]) - .reduce(ee.Reducer.mean(), [0]) - .get([0]) - ) - - img = img.resample("bilinear").reproject(crs=crs, scale=10) - - # get the lat lon and add the ndvi - image_grid = ee.Image.pixelCoordinates(ee.Projection(crs)).addBands( - [img.select(b) for b in bands + ["SCL"]] - ) - - # apply reducer to list - image_grid = image_grid.reduceRegion( - reducer=ee.Reducer.toList(), geometry=geom, maxPixels=1e8, scale=10 - ) - - # get data into arrays - x_coords = ee.Array(image_grid.get("x")) - y_coords = ee.Array(image_grid.get("y")) - band_data = {b: ee.Array(image_grid.get("%s" % b)) for b in bands} - - scl_data = ee.Array(image_grid.get("SCL")) - - # perform LAI et al. computation possibly here! 
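# The removed "perform LAI et al. computation possibly here" hook is
# handled offline later in this patch series: bands2lai_snap reads the
# saved band netCDF and runs the SNAP biophysical processor. A minimal
# sketch, assuming the satellitetools.biophys_xarray module shipped
# with this module (band list taken from the bands2lai_snap patch
# further down; the file path is illustrative):
#
#     import xarray as xr
#     import satellitetools.biophys_xarray as bio
#
#     ds = xr.open_dataset("out/mysite_bands.nc")
#     ds = ds.sel(band=["B3", "B4", "B5", "B6", "B7", "B8A", "B11", "B12"])
#     lai = bio.run_snap_biophys(ds, "LAI")[["lai"]]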
- - tmpfeature = ( - ee.Feature(ee.Geometry.Point([0, 0])) - .set("productid", productid) - .set("system_index", system_index) - .set("assetid", assetid) - .set("tileid", tileid) - .set("projection", proj) - .set("sun_zenith", sun_zenith) - .set("sun_azimuth", sun_azimuth) - .set("view_zenith", view_zenith) - .set("view_azimuth", view_azimuth) - .set("x_coords", x_coords) - .set("y_coords", y_coords) - .set("SCL", scl_data) - .set(band_data) - ) - return tmpfeature - - s2_data_feature = image_collection.map(ee_get_s2_data_image) - - return ( - feature.set("productid", s2_data_feature.aggregate_array("productid")) - .set("system_index", s2_data_feature.aggregate_array("system_index")) - .set("assetid", s2_data_feature.aggregate_array("assetid")) - .set("tileid", s2_data_feature.aggregate_array("tileid")) - .set("projection", s2_data_feature.aggregate_array("projection")) - .set("sun_zenith", s2_data_feature.aggregate_array("sun_zenith")) - .set("sun_azimuth", s2_data_feature.aggregate_array("sun_azimuth")) - .set("view_zenith", s2_data_feature.aggregate_array("view_zenith")) - .set("view_azimuth", s2_data_feature.aggregate_array("view_azimuth")) - .set("x_coords", s2_data_feature.aggregate_array("x_coords")) - .set("y_coords", s2_data_feature.aggregate_array("y_coords")) - .set("SCL", s2_data_feature.aggregate_array("SCL")) - .set({b: s2_data_feature.aggregate_array(b) for b in bands}) + else: + landsat = ( + ee.ImageCollection("LANDSAT/LC08/C01/T1_SR") + .filterDate(start, end) + .sort("system:time_start", True) ) - s2_data_feature_collection = feature_collection.map( - ee_get_s2_data_feature - ).getInfo() - - s2_data = s2_feature_collection_to_dataframes(s2_data_feature_collection) - - for a in AOIs: - name = a.name - a.data = s2_data[name] - - -def filter_s2_qi_dataframe(s2_qi_dataframe, qi_thresh, s2_filter=s2_filter1): - """Filter qi dataframe. - - Parameters - ---------- - s2_qi_dataframe : pandas dataframe - S2 quality information dataframe (AOI instance qi attribute). - qi_thresh : float - Threshold value to filter images based on used qi filter. - qi filter holds labels of classes whose percentages within the AOI - is summed. If the sum is larger then the qi_threhold, data will not be - retrieved for that date/image. The default is 1, meaning all data is - retrieved. - s2_filter : list - List of strings with class labels (of unwanted classes) used to compute qi value, - see qi_threhold. The default is s2_filter1 = ['NODATA', - 'SATURATED_DEFECTIVE', - 'CLOUD_SHADOW', - 'UNCLASSIFIED', - 'CLOUD_MEDIUM_PROBA', - 'CLOUD_HIGH_PROBA', - 'THIN_CIRRUS', - 'SNOW_ICE']. - - Returns - ------- - filtered_s2_qi_df : pandas dataframe - Filtered dataframe. - - """ - filtered_s2_qi_df = s2_qi_dataframe.loc[ - s2_qi_dataframe[s2_filter1].sum(axis=1) < qi_thresh - ] - return filtered_s2_qi_df - - -def s2_feature_collection_to_dataframes(s2_feature_collection): - """Convert feature collection dict from GEE to pandas dataframe. - - Parameters - ---------- - s2_feature_collection : dict - Dictionary returned by GEE. - - Returns - ------- - dataframes : pandas dataframe - GEE dictinary converted to pandas dataframe. 
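# The two qc branches above differ only in the .map(mask) step; an
# equivalent, shorter formulation would be (a sketch, not part of this
# patch):
#
#     landsat = (
#         ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")
#         .filterDate(start, end)
#         .sort("system:time_start", True)
#     )
#     if qc:
#         landsat = landsat.map(mask)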
- - """ - dataframes = {} - - for featnum in range(len(s2_feature_collection["features"])): - tmp_dict = {} - key = s2_feature_collection["features"][featnum]["properties"]["name"] - productid = s2_feature_collection["features"][featnum]["properties"][ - "productid" - ] - - dates = [ - datetime.datetime.strptime(d.split("_")[2], "%Y%m%dT%H%M%S") - for d in productid - ] - - tmp_dict.update({"Date": dates}) # , 'crs': crs} - properties = s2_feature_collection["features"][featnum]["properties"] - for prop, data in properties.items(): - if prop not in ["Date"]: # 'crs' ,, 'projection' - tmp_dict.update({prop: data}) - dataframes[key] = pd.DataFrame(tmp_dict) - return dataframes - - -def compute_ndvi(dataset): - """Compute NDVI - - Parameters - ---------- - dataset : xarray dataset - - Returns - ------- - xarray dataset - Adds 'ndvi' xr array to xr dataset. - - """ - b4 = dataset.band_data.sel(band="B4") - b8 = dataset.band_data.sel(band="B8A") - ndvi = (b8 - b4) / (b8 + b4) - return dataset.assign({"ndvi": ndvi}) + # map NDVI to the image collection and select the bands + landsat = landsat.map(calc_ndvi(nir="B5", red="B4")).select( + ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11", "NDVI"] + ) + # if ROI is a point + df = gpd.read_file(geofile) + if (df.geometry.type == "Point").bool(): -def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True): - """Convert AOI.data dataframe to xarray dataset. + geo = create_geo(geofile) - Parameters - ---------- - aoi : AOI instance - AOI instance. - request_params : S2RequestParams - S2RequestParams. - convert_to_reflectance : boolean, optional - Convert S2 data from GEE (integers) to reflectances (floats), - i,e, divide by 10000. - The default is True. + # get the data + l_data = landsat.filterBounds(geo).getRegion(geo, scale).getInfo() + # put the data inside a list of dictionary + l_data_dict = [dict(zip(l_data[0], values)) for values in l_data[1:]] - Returns - ------- - Nothing. - Converts the data atrribute dataframe to xarray Dataset. - xarray is better for handling multiband data. It also has - implementation for saving the data in NetCDF format. - - """ - # check that all bands have full data! 
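# getRegion() returns a list of lists whose first row is a header such
# as ["id", "longitude", "latitude", "time", "B1", ..., "NDVI"]; the
# dict(zip(...)) comprehension above turns every following row into one
# record per observation. A toy illustration with made-up values:
#
#     header = ["id", "longitude", "latitude", "time", "B4", "B5"]
#     rows = [["LC08_014032_20180312", -71.29, 44.06, 1520812800000, 341, 2032]]
#     records = [dict(zip(header, values)) for values in rows]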
- datalengths = [ - aoi.data[b].apply(lambda d: len(d)) == len(aoi.data.iloc[0]["x_coords"]) - for b in request_params.bands - ] - consistent_data = reduce(lambda a, b: a & b, datalengths) - aoi.data = aoi.data[consistent_data] - - # 2D data - bands = request_params.bands - - # 1D data - list_vars = [ - "assetid", - "productid", - "sun_azimuth", - "sun_zenith", - "system_index", - "view_azimuth", - "view_zenith", - ] - - # crs from projection - crs = aoi.data["projection"].values[0]["crs"] - tileid = aoi.data["tileid"].values[0] - # original number of pixels requested (pixels inside AOI) - aoi_pixels = len(aoi.data.iloc[0]["x_coords"]) - - # transform 2D data to arrays - for b in bands: - - aoi.data[b] = aoi.data.apply( - lambda row: s2_lists_to_array( - row["x_coords"], - row["y_coords"], - row[b], - convert_to_reflectance=convert_to_reflectance, - ), - axis=1, + def getdate(filename): + """ + calculates Date from the landsat id + """ + string = re.compile( + r"(?PLC08|LE07|LT05|LT04)_(?P\d{6})_(?P\d{8})" + ) + x = string.search(filename) + d = datetime.datetime.strptime(x.group("date"), "%Y%m%d").date() + return d + + # pop out unnecessary keys and add date + for d in l_data_dict: + d.pop("longitude", None) + d.pop("latitude", None) + d.pop("time", None) + d.update(time=getdate(d["id"])) + + # Put data in a dataframe + datadf = pd.DataFrame(l_data_dict) + # converting date to the numpy date format + datadf["time"] = datadf["time"].apply(lambda x: np.datetime64(x)) + + # if ROI is a polygon + elif (df.geometry.type == "Polygon").bool(): + + geo = create_geo(geofile) + + # get the data + l_data = landsat.filterBounds(geo).map(reduce_region).getInfo() + + def l8_fc2dict(fc): + """ + Converts feature collection to dictionary form. + """ + + def eachf2dict(f): + id = f["id"] + out = f["properties"] + out.update(id=id) + return out + + out = [eachf2dict(x) for x in fc["features"]] + return out + + # convert to dictionary + l_data_dict = l8_fc2dict(l_data) + # create a dataframe from the dictionary + datadf = pd.DataFrame(l_data_dict) + # convert date to a more readable format + datadf["sensing_time"] = datadf["sensing_time"].apply( + lambda x: datetime.datetime.strptime(x.split(".")[0], "%Y-%m-%dT%H:%M:%S") ) + # if ROI type is not a Point ot Polygon + else: + raise ValueError("geometry choice not supported") - aoi.data["SCL"] = aoi.data.apply( - lambda row: s2_lists_to_array( - row["x_coords"], row["y_coords"], row["SCL"], convert_to_reflectance=False - ), - axis=1, - ) - - array = aoi.data[bands].values - - # this will stack the array to ndarray with - # dimension order = (time, band, x,y) - narray = np.stack( - [np.stack(array[:, b], axis=2) for b in range(len(bands))], axis=2 - ).transpose() # .swapaxes(2, 3) - - scl_array = np.stack(aoi.data["SCL"].values, axis=2).transpose() - - coords = { - "time": aoi.data["Date"].values, - "band": bands, - "x": np.unique(aoi.data.iloc[0]["x_coords"]), - "y": np.unique(aoi.data.iloc[0]["y_coords"]), - } - - dataset_dict = { - "band_data": (["time", "band", "x", "y"], narray), - "SCL": (["time", "x", "y"], scl_array), - } - var_dict = {var: (["time"], aoi.data[var]) for var in list_vars} - dataset_dict.update(var_dict) - - ds = xr.Dataset( - dataset_dict, - coords=coords, + site_name = get_sitename(geofile) + AOI = get_sitecoord(geofile) + # convert the dataframe to an xarray dataset + tosave = xr.Dataset( + datadf, attrs={ - "name": aoi.name, - "crs": crs, - "tile_id": tileid, - "aoi_geometry": aoi.geometry.to_wkt(), - "aoi_pixels": aoi_pixels, + 
"site_name": site_name, + "start_date": start, + "end_date": end, + "AOI": AOI, + "sensor": "LANDSAT/LC08/C01/T1_SR", }, ) - aoi.data = ds - - -def s2_lists_to_array(x_coords, y_coords, data, convert_to_reflectance=True): - """Convert 1D lists of coordinates and corresponding values to 2D array. - - Parameters - ---------- - x_coords : list - List of x-coordinates. - y_coords : list - List of y-coordinates. - data : list - List of data values corresponding to the coordinates. - convert_to_reflectance : boolean, optional - Convert S2 data from GEE (integers) to reflectances (floats), - i,e, divide by 10000. - The default is True. - - Returns - ------- - arr : 2D numpy array - Return 2D numpy array. - """ - # get the unique coordinates - uniqueYs = np.unique(y_coords) - uniqueXs = np.unique(x_coords) - - # get number of columns and rows from coordinates - ncols = len(uniqueXs) - nrows = len(uniqueYs) - - # determine pixelsizes - # ys = uniqueYs[1] - uniqueYs[0] - # xs = uniqueXs[1] - uniqueXs[0] - - y_vals, y_idx = np.unique(y_coords, return_inverse=True) - x_vals, x_idx = np.unique(x_coords, return_inverse=True) - if convert_to_reflectance: - arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64) - arr.fill(np.nan) - arr[y_idx, x_idx] = np.array(data, dtype=np.float64) / S2_REFL_TRANS - else: - arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.int32) - arr.fill(NO_DATA) # or whatever yor desired missing data flag is - arr[y_idx, x_idx] = data - arr = np.flipud(arr) - return arr - - -def xr_dataset_to_timeseries(xr_dataset, variables): - """Compute timeseries dataframe from xr dataset. - - Parameters - ---------- - xr_dataset : xarray dataset - - variables : list - list of variable names as string. - - Returns - ------- - df : pandas dataframe - Pandas dataframe with mean, std, se and percentage of NaNs inside AOI. - - """ - df = pd.DataFrame({"Date": pd.to_datetime(xr_dataset.time.values)}) - - for var in variables: - df[var] = xr_dataset[var].mean(dim=["x", "y"]) - df[var + "_std"] = xr_dataset[var].std(dim=["x", "y"]) - - # nans occure due to missging data from 1D to 2D array - # (pixels outside the polygon), - # from snap algorihtm nans occure due to input/output ouf of bounds - # checking. - # TODO: flaggging with snap biophys algorith or some other solution to - # check which nan are from snap algorithm and which from 1d to 2d transformation - nans = np.isnan(xr_dataset[var]).sum(dim=["x", "y"]) - sample_n = len(xr_dataset[var].x) * len(xr_dataset[var].y) - nans - - # compute how many of the nans are inside aoi (due to snap algorithm) - out_of_aoi_pixels = ( - len(xr_dataset[var].x) * len(xr_dataset[var].y) - xr_dataset.aoi_pixels - ) - nans_inside_aoi = nans - out_of_aoi_pixels - df["aoi_nan_percentage"] = nans_inside_aoi / xr_dataset.aoi_pixels - - df[var + "_se"] = df[var + "_std"] / np.sqrt(sample_n) - - return df - - -def gee2pecan_s2(geofile, outdir, start, end, qi_threshold): - """ - Downloads Sentinel 2 data from gee and saves it in a netCDF file at the specified location. - - Parameters - ---------- - geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. - - outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. 
- - start (str) -- starting date of the data request in the form YYYY-MM-DD - - end (str) -- ending date of the data request in the form YYYY-MM-DD - - qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved - - Returns - ------- - Nothing: - output netCDF is saved in the specified directory. - - Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray - """ - - # read in the input file containing coordinates - df = gpd.read_file(geofile) - - request = S2RequestParams(start, end) - - # filter area of interest from the coordinates in the input file - area = AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) - - # calcuate qi attribute for the AOI - ee_get_s2_quality_info(area, request) - - # get the final data - ee_get_s2_data(area, request, qi_threshold=qi_threshold) - - # convert dataframe to an xarray dataset, used later for converting to netCDF - s2_data_to_xarray(area, request) - - area.data = compute_ndvi(area.data) - - # if specified output directory does not exist, create it + # if specified path does not exist create it if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) - # create a timerseries and save the netCDF file - timeseries = {} - timeseries_variable = ["ndvi"] - - area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc")) - timeseries[area.name] = xr_dataset_to_timeseries(area.data, timeseries_variable) \ No newline at end of file + # convert the dataset to a netCDF file and save it + tosave.to_netcdf(os.path.join(outdir, site_name + "_bands.nc")) From eea3d1838833a7e27ad4c3d29164d8e63daba522 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:40:21 +0530 Subject: [PATCH 1188/2289] update gee2pecan_l8 --- modules/data.remote/inst/gee2pecan_l8.py | 19 ++++--------------- 1 file changed, 4 insertions(+), 15 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_l8.py b/modules/data.remote/inst/gee2pecan_l8.py index 020a4ad2155..ec132d70e8a 100644 --- a/modules/data.remote/inst/gee2pecan_l8.py +++ b/modules/data.remote/inst/gee2pecan_l8.py @@ -9,7 +9,7 @@ Author: Ayush Prasad """ -from gee_utils import create_geo, get_siteaoi, get_sitename +from gee_utils import create_geo, get_sitecoord, get_sitename, calc_ndvi import ee import pandas as pd import geopandas as gpd @@ -36,10 +36,6 @@ def gee2pecan_l8(geofile, outdir, start, end, qc=1): end (str) -- ending date areaof the data request in the form YYYY-MM-DD - ic (str) -- image collection id of the Landsat SR product from Google Earth Engine - - vi (bool) -- set to True if NDVI needs to be calculated - qc (bool) -- uses the cloud masking function if set to True bands (list of str) -- bands to be retrieved. Default: B5, B4 @@ -53,8 +49,7 @@ def gee2pecan_l8(geofile, outdir, start, end, qc=1): # scale (int) Default: 30 scale = 30 - # bands retrieved - bands = ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11"] + # bands retrieved ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11"] def reduce_region(image): """ @@ -89,15 +84,9 @@ def mask(image): .sort("system:time_start", True) ) - def calcNDVI(image): - """ - Calculates NDVI and adds the band to the image. 
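# gee_utils.calc_ndvi itself is not shown in this series; given the new
# call pattern .map(calc_ndvi(nir="B5", red="B4")), it presumably
# returns a per-image mapper much like the removed calcNDVI above, e.g.
# (a sketch under that assumption):
#
#     def calc_ndvi(nir, red):
#         def add_ndvi(image):
#             ndvi = image.normalizedDifference([nir, red]).rename("NDVI")
#             return image.addBands(ndvi)
#         return add_ndvi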
- """ - ndvi = image.normalizedDifference(["B5", "B4"]).rename("NDVI") - return image.addBands(ndvi) # map NDVI to the image collection and select the bands - landsat = landsat.map(calcNDVI).select( + landsat = landsat.map(calc_ndvi(nir="B5", red="B4")).select( ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11", "NDVI"] ) @@ -170,7 +159,7 @@ def eachf2dict(f): raise ValueError("geometry choice not supported") site_name = get_sitename(geofile) - AOI = get_siteaoi(geofile) + AOI = get_sitecoord(geofile) # convert the dataframe to an xarray dataset tosave = xr.Dataset( datadf, From 4e181c3bbd0c04429d76bea7d56efbbf9576ab30 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:42:02 +0530 Subject: [PATCH 1189/2289] updated gee2pecan_smap --- modules/data.remote/inst/gee2pecan_smap.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py index 688542f9a0a..5acffef4b01 100644 --- a/modules/data.remote/inst/gee2pecan_smap.py +++ b/modules/data.remote/inst/gee2pecan_smap.py @@ -7,7 +7,7 @@ Author: Ayush Prasad """ -from gee_utils import create_geo, get_siteaoi, get_sitename +from gee_utils import create_geo, get_sitecoord, get_sitename import ee import pandas as pd import os @@ -137,7 +137,7 @@ def fc2dataframe(fc): datadf = fc2dataframe(fc) - site_name = get_sitename(geofile) + site_name = get_sitecoord(geofile) AOI = get_siteaoi(geofile) # convert the dataframe to an xarray dataset, used for converting it to a netCDF file From b91596502ddceecea3f85de2fb5e4bea7354347a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:44:59 +0530 Subject: [PATCH 1190/2289] updated bands2lai_snap --- modules/data.remote/inst/bands2lai_snap.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/modules/data.remote/inst/bands2lai_snap.py b/modules/data.remote/inst/bands2lai_snap.py index 270b927043b..03a0017b10a 100644 --- a/modules/data.remote/inst/bands2lai_snap.py +++ b/modules/data.remote/inst/bands2lai_snap.py @@ -32,6 +32,8 @@ def bands2lai_snap(inputfile, outdir): """ # load the input file ds_disk = xr.open_dataset(inputfile) + # select the required bands + ds_disk = ds_disk.sel(band=["B3", "B4", "B5", "B6", "B7", "B8A", "B11", "B12"]) # calculate LAI using SNAP area = bio.run_snap_biophys(ds_disk, "LAI") area = area[["lai"]] From 2a6892410b7c63769de69e94f205c94dd9adf4d8 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 14:50:41 +0530 Subject: [PATCH 1191/2289] updated gee2pecan_s2 --- modules/data.remote/inst/gee2pecan_s2.py | 889 +++++++++++++++++++---- 1 file changed, 755 insertions(+), 134 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py index ec132d70e8a..a6cc4833874 100644 --- a/modules/data.remote/inst/gee2pecan_s2.py +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -1,180 +1,801 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- """ -Extracts Landsat 8 surface reflactance band data from Google Earth Engine and saves it in a netCDF file +Created on Thu Feb 6 15:24:12 2020 -Requires Python3 +Module to retrieve Sentinel-2 data from Google Earth Engine (GEE). +Warning: the data is currently retrieved with 10m resolution (scale=10), so +the 20m resolution bands are resampled. +TODO: Add option for specifying the request spatial resolution. 
-Bands retrieved: B1, B2, B3, B4, B5, B6, B7, B10, B11 along with computed NDVI +@author: Olli Nevalainen (olli.nevalainen@fmi.fi), + Finnish Meteorological Institute) -If ROI is a Point, this function can be used for getting SR data from Landsat 7, 5 and 4 as well. -Author: Ayush Prasad """ -from gee_utils import create_geo, get_sitecoord, get_sitename, calc_ndvi +import sys +import os import ee +import datetime import pandas as pd import geopandas as gpd -import datetime -import os -import xarray as xr import numpy as np -import re +import xarray as xr +from functools import reduce +from gee_utils import calc_ndvi ee.Initialize() -def gee2pecan_l8(geofile, outdir, start, end, qc=1): +NO_DATA = -99999 +S2_REFL_TRANS = 10000 +# ----------------- Sentinel-2 ------------------------------------- +s2_qi_labels = [ + "NODATA", + "SATURATED_DEFECTIVE", + "DARK_FEATURE_SHADOW", + "CLOUD_SHADOW", + "VEGETATION", + "NOT_VEGETATED", + "WATER", + "UNCLASSIFIED", + "CLOUD_MEDIUM_PROBA", + "CLOUD_HIGH_PROBA", + "THIN_CIRRUS", + "SNOW_ICE", +] + +s2_filter1 = [ + "NODATA", + "SATURATED_DEFECTIVE", + "CLOUD_SHADOW", + "UNCLASSIFIED", + "CLOUD_MEDIUM_PROBA", + "CLOUD_HIGH_PROBA", + "THIN_CIRRUS", + "SNOW_ICE", +] + + +class S2RequestParams: + """S2 data request paramaters. + + Attributes + ---------- + datestart : str + Starting date for data request in form "2019-01-01". + dateend : str + Starting date for data request in form "2019-12-31". + bands : list, optional + List of strings with band name. + the default is ['B3', 'B4', 'B5', + 'B6', 'B7', 'B8A', 'B11', 'B12']. """ - Extracts Landsat 8 SR band data from GEE - Parameters + def __init__(self, datestart, dateend, bands=None): + """. + + Parameters + ---------- + datestart : str + Starting date for data request in form "2019-01-01". + dateend : str + Starting date for data request in form "2019-12-31". + bands : list, optional + List of strings with band name. + + Returns + ------- + None. + + """ + default_bands = [ + "B1", + "B2", + "B3", + "B4", + "B5", + "B6", + "B7", + "B8", + "B8A", + "B9", + "B11", + "B12", + ] + + self.datestart = datestart + self.dateend = dateend + self.bands = bands if bands else default_bands + + +class AOI: + """Area of interest for area info and data. + + Attributes ---------- - geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. - - outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. - - start (str) -- starting date of the data request in the form YYYY-MM-DD - - end (str) -- ending date areaof the data request in the form YYYY-MM-DD + name : str + Name of the area. + geometry : str + Geometry of the area of interest e.g. from geopandas. + Currently only polygons tested. The default is None. + coordinate_list : list, optional + List of coordinates of a polygon + (loop should be closed). Computed from geometry if not + provided. The default is None. + tile : str, optional + Tile id as string for the data. Used to keep the data in + same crs because an area can be in multiple tiles with + different crs. The default is None. + qi : pandas dataframe + Dataframe with quality information about available imagery for the AOI. + qi is empty at init and can be computed with + ee_get_s2_quality_info function. + data : pandas dataframe or xarray + Dataframe holding data retrieved from GEE. 
Data can be computed using + function + qi is empty at init and can be computed with ee_get_s2_data and + converted to xarray using s2_data_to_xarray function. + + Methods + ------- + __init__ + """ - qc (bool) -- uses the cloud masking function if set to True + def __init__(self, name, geometry=None, coordinate_list=None, tile=None): + """. + + Parameters + ---------- + name : str + Name of the area. + geometry : geometry in wkt, optional + Geometry of the area of interest e.g. from geopandas. + Currently only polygons tested. The default is None. + coordinate_list : list, optional + List of coordinates of a polygon + (loop should be closed). Computed from geometry if not + provided. The default is None. + tile : str, optional + Tile id as string for the data. Used to keep the data in + same crs because an area can be in multiple tiles with + different crs. The default is None. + + Returns + ------- + None. - bands (list of str) -- bands to be retrieved. Default: B5, B4 + """ + if not geometry and not coordinate_list: + sys.exit("AOI has to get either geometry or coordinates as list!") + elif geometry and not coordinate_list: + coordinate_list = list(geometry.exterior.coords) + elif coordinate_list and not geometry: + geometry = None + + self.name = name + self.geometry = geometry + self.coordinate_list = coordinate_list + self.qi = None + self.data = None + self.tile = tile + + +def ee_get_s2_quality_info(AOIs, req_params): + """Get S2 quality information from GEE. + + Parameters + ---------- + AOIs : list or AOI instance + List of AOI instances or single AOI instance. If multiple AOIs + proviveded the computation in GEE server is parallellized. + If too many areas with long time range is provided, user might + hit GEE memory limits. Then you should call this function + sequentally to all AOIs. + req_params : S2RequestParams instance + S2RequestParams instance with request details. Returns ------- Nothing: - output netCDF is saved in the specified directory. - + Computes qi attribute for the given AOI instances. + """ + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [ + ee.Feature(ee.Geometry.Polygon(a.coordinate_list), {"name": a.name}) + for a in AOIs + ] + feature_collection = ee.FeatureCollection(features) + + def ee_get_s2_quality_info_feature(feature): + + area = feature.geometry() + image_collection = ( + ee.ImageCollection("COPERNICUS/S2_SR") + .filterBounds(area) + .filterDate(req_params.datestart, req_params.dateend) + .select(["SCL"]) + ) - # scale (int) Default: 30 - scale = 30 - # bands retrieved ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11"] + def ee_get_s2_quality_info_image(img): + productid = img.get("PRODUCT_ID") + assetid = img.id() + tileid = img.get("MGRS_TILE") + system_index = img.get("system:index") + proj = img.select("SCL").projection() - def reduce_region(image): - """ - Reduces the selected region - currently set to mean, can be changed as per requirements. 
- """ - stat_dict = image.reduceRegion(ee.Reducer.mean(), geo, scale) - sensingtime = image.get("SENSING_TIME") - return ee.Feature(None, stat_dict).set("sensing_time", sensingtime) + # apply reducer to list + img = img.reduceRegion( + reducer=ee.Reducer.toList(), geometry=area, maxPixels=1e8, scale=10 + ) - def mask(image): - """ - Masks clouds and cloud shadows using the pixel_qa band - Can be configured as per requirements - """ - clear = image.select("pixel_qa") - return image.updateMask(clear) - - # create image collection depending upon the qc choice - if qc == True: - landsat = ( - ee.ImageCollection("LANDSAT/LC08/C01/T1_SR") - .filterDate(start, end) - .map(mask) - .sort("system:time_start", True) + # get data into arrays + classdata = ee.Array( + ee.Algorithms.If( + img.get("SCL"), ee.Array(img.get("SCL")), ee.Array([0]) + ) + ) + + totalcount = classdata.length() + classpercentages = { + key: classdata.eq(i) + .reduce(ee.Reducer.sum(), [0]) + .divide(totalcount) + .get([0]) + for i, key in enumerate(s2_qi_labels) + } + + tmpfeature = ( + ee.Feature(ee.Geometry.Point([0, 0])) + .set("productid", productid) + .set("system_index", system_index) + .set("assetid", assetid) + .set("tileid", tileid) + .set("projection", proj) + .set(classpercentages) + ) + return tmpfeature + + s2_qi_image_collection = image_collection.map(ee_get_s2_quality_info_image) + + return ( + feature.set( + "productid", s2_qi_image_collection.aggregate_array("productid") + ) + .set("system_index", s2_qi_image_collection.aggregate_array("system_index")) + .set("assetid", s2_qi_image_collection.aggregate_array("assetid")) + .set("tileid", s2_qi_image_collection.aggregate_array("tileid")) + .set("projection", s2_qi_image_collection.aggregate_array("projection")) + .set( + { + key: s2_qi_image_collection.aggregate_array(key) + for key in s2_qi_labels + } + ) ) - else: - landsat = ( - ee.ImageCollection("LANDSAT/LC08/C01/T1_SR") - .filterDate(start, end) - .sort("system:time_start", True) + s2_qi_feature_collection = feature_collection.map( + ee_get_s2_quality_info_feature + ).getInfo() + + s2_qi = s2_feature_collection_to_dataframes(s2_qi_feature_collection) + + for a in AOIs: + name = a.name + a.qi = s2_qi[name] + + +def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): + """Get S2 data (level L2A, bottom of atmosphere data) from GEE. + + Warning: the data is currently retrieved with 10m resolution (scale=10), so + the 20m resolution bands are resampled. + TODO: Add option for specifying the request spatial resolution. + + Parameters + ---------- + AOIs : list or AOI instance + List of AOI instances or single AOI instance. If multiple AOIs + proviveded the computation in GEE server is parallellized. + If too many areas with long time range is provided, user might + hit GEE memory limits. Then you should call this function + sequentally to all AOIs. AOIs should have qi attribute computed first. + req_params : S2RequestParams instance + S2RequestParams instance with request details. + qi_threshold : float + Threshold value to filter images based on used qi filter. + qi filter holds labels of classes whose percentages within the AOI + is summed. If the sum is larger then the qi_threhold, data will not be + retrieved for that date/image. The default is 1, meaning all data is + retrieved. + qi_filter : list + List of strings with class labels (of unwanted classes) used to compute qi value, + see qi_threhold. 
The default is s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE']. + + Returns + ------- + Nothing: + Computes data attribute for the given AOI instances. + + """ + datestart = req_params.datestart + dateend = req_params.dateend + bands = req_params.bands + # if single AOI instance, make a list + if isinstance(AOIs, AOI): + AOIs = list([AOIs]) + + features = [] + for a in AOIs: + filtered_qi = filter_s2_qi_dataframe(a.qi, qi_threshold, qi_filter) + if len(filtered_qi) == 0: + print("No observations to retrieve for area %s" % a.name) + continue + + if a.tile is None: + min_tile = min(filtered_qi["tileid"].values) + filtered_qi = filtered_qi[filtered_qi["tileid"] == min_tile] + a.tile = min_tile + else: + filtered_qi = filtered_qi[filtered_qi["tileid"] == a.tile] + + full_assetids = "COPERNICUS/S2_SR/" + filtered_qi["assetid"] + image_list = [ee.Image(asset_id) for asset_id in full_assetids] + crs = filtered_qi["projection"].values[0]["crs"] + feature = ee.Feature( + ee.Geometry.Polygon(a.coordinate_list), + {"name": a.name, "image_list": image_list}, ) + features.append(feature) - # map NDVI to the image collection and select the bands - landsat = landsat.map(calc_ndvi(nir="B5", red="B4")).select( - ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11", "NDVI"] - ) + if len(features) == 0: + print("No data to be retrieved!") + return None - # if ROI is a point - df = gpd.read_file(geofile) - if (df.geometry.type == "Point").bool(): + feature_collection = ee.FeatureCollection(features) - geo = create_geo(geofile) + def ee_get_s2_data_feature(feature): + geom = feature.geometry(0.01, crs) + image_collection = ( + ee.ImageCollection.fromImages(feature.get("image_list")) + .filterBounds(geom) + .filterDate(datestart, dateend) + # .select(bands + ["SCL"]) + ) + image_collection = image_collection.map(calc_ndvi(nir="B8A", red="B4")).select( + bands + ["SCL", "NDVI"] + ) - # get the data - l_data = landsat.filterBounds(geo).getRegion(geo, scale).getInfo() - # put the data inside a list of dictionary - l_data_dict = [dict(zip(l_data[0], values)) for values in l_data[1:]] + def ee_get_s2_data_image(img): + # img = img.clip(geom) + productid = img.get("PRODUCT_ID") + assetid = img.id() + tileid = img.get("MGRS_TILE") + system_index = img.get("system:index") + proj = img.select(bands[0]).projection() + sun_azimuth = img.get("MEAN_SOLAR_AZIMUTH_ANGLE") + sun_zenith = img.get("MEAN_SOLAR_ZENITH_ANGLE") + view_azimuth = ( + ee.Array( + [img.get("MEAN_INCIDENCE_AZIMUTH_ANGLE_%s" % b) for b in bands] + ) + .reduce(ee.Reducer.mean(), [0]) + .get([0]) + ) + view_zenith = ( + ee.Array([img.get("MEAN_INCIDENCE_ZENITH_ANGLE_%s" % b) for b in bands]) + .reduce(ee.Reducer.mean(), [0]) + .get([0]) + ) + + img = img.resample("bilinear").reproject(crs=crs, scale=10) + + # get the lat lon and add the ndvi + image_grid = ee.Image.pixelCoordinates(ee.Projection(crs)).addBands( + [img.select(b) for b in bands + ["SCL", "NDVI"]] + ) - def getdate(filename): - """ - calculates Date from the landsat id - """ - string = re.compile( - r"(?PLC08|LE07|LT05|LT04)_(?P\d{6})_(?P\d{8})" + # apply reducer to list + image_grid = image_grid.reduceRegion( + reducer=ee.Reducer.toList(), geometry=geom, maxPixels=1e8, scale=10 ) - x = string.search(filename) - d = datetime.datetime.strptime(x.group("date"), "%Y%m%d").date() - return d - - # pop out unnecessary keys and add date - for d in l_data_dict: - d.pop("longitude", 
None) - d.pop("latitude", None) - d.pop("time", None) - d.update(time=getdate(d["id"])) - - # Put data in a dataframe - datadf = pd.DataFrame(l_data_dict) - # converting date to the numpy date format - datadf["time"] = datadf["time"].apply(lambda x: np.datetime64(x)) - - # if ROI is a polygon - elif (df.geometry.type == "Polygon").bool(): - - geo = create_geo(geofile) - - # get the data - l_data = landsat.filterBounds(geo).map(reduce_region).getInfo() - - def l8_fc2dict(fc): - """ - Converts feature collection to dictionary form. - """ - - def eachf2dict(f): - id = f["id"] - out = f["properties"] - out.update(id=id) - return out - - out = [eachf2dict(x) for x in fc["features"]] - return out - - # convert to dictionary - l_data_dict = l8_fc2dict(l_data) - # create a dataframe from the dictionary - datadf = pd.DataFrame(l_data_dict) - # convert date to a more readable format - datadf["sensing_time"] = datadf["sensing_time"].apply( - lambda x: datetime.datetime.strptime(x.split(".")[0], "%Y-%m-%dT%H:%M:%S") + + # get data into arrays + x_coords = ee.Array(image_grid.get("x")) + y_coords = ee.Array(image_grid.get("y")) + band_data = {b: ee.Array(image_grid.get("%s" % b)) for b in bands} + + scl_data = ee.Array(image_grid.get("SCL")) + ndvi_data = ee.Array(image_grid.get("NDVI")) + + # perform LAI et al. computation possibly here! + + tmpfeature = ( + ee.Feature(ee.Geometry.Point([0, 0])) + .set("productid", productid) + .set("system_index", system_index) + .set("assetid", assetid) + .set("tileid", tileid) + .set("projection", proj) + .set("sun_zenith", sun_zenith) + .set("sun_azimuth", sun_azimuth) + .set("view_zenith", view_zenith) + .set("view_azimuth", view_azimuth) + .set("x_coords", x_coords) + .set("y_coords", y_coords) + .set("SCL", scl_data) + .set("NDVI", ndvi_data) + .set(band_data) + ) + return tmpfeature + + s2_data_feature = image_collection.map(ee_get_s2_data_image) + + return ( + feature.set("productid", s2_data_feature.aggregate_array("productid")) + .set("system_index", s2_data_feature.aggregate_array("system_index")) + .set("assetid", s2_data_feature.aggregate_array("assetid")) + .set("tileid", s2_data_feature.aggregate_array("tileid")) + .set("projection", s2_data_feature.aggregate_array("projection")) + .set("sun_zenith", s2_data_feature.aggregate_array("sun_zenith")) + .set("sun_azimuth", s2_data_feature.aggregate_array("sun_azimuth")) + .set("view_zenith", s2_data_feature.aggregate_array("view_zenith")) + .set("view_azimuth", s2_data_feature.aggregate_array("view_azimuth")) + .set("x_coords", s2_data_feature.aggregate_array("x_coords")) + .set("y_coords", s2_data_feature.aggregate_array("y_coords")) + .set("SCL", s2_data_feature.aggregate_array("SCL")) + .set("NDVI", s2_data_feature.aggregate_array("NDVI")) + .set({b: s2_data_feature.aggregate_array(b) for b in bands}) ) - # if ROI type is not a Point ot Polygon - else: - raise ValueError("geometry choice not supported") - site_name = get_sitename(geofile) - AOI = get_sitecoord(geofile) - # convert the dataframe to an xarray dataset - tosave = xr.Dataset( - datadf, + s2_data_feature_collection = feature_collection.map( + ee_get_s2_data_feature + ).getInfo() + + s2_data = s2_feature_collection_to_dataframes(s2_data_feature_collection) + + for a in AOIs: + name = a.name + a.data = s2_data[name] + + +def filter_s2_qi_dataframe(s2_qi_dataframe, qi_thresh, s2_filter=s2_filter1): + """Filter qi dataframe. 
+ + Parameters + ---------- + s2_qi_dataframe : pandas dataframe + S2 quality information dataframe (AOI instance qi attribute). + qi_thresh : float + Threshold value to filter images based on used qi filter. + qi filter holds labels of classes whose percentages within the AOI + is summed. If the sum is larger then the qi_threhold, data will not be + retrieved for that date/image. The default is 1, meaning all data is + retrieved. + s2_filter : list + List of strings with class labels (of unwanted classes) used to compute qi value, + see qi_threhold. The default is s2_filter1 = ['NODATA', + 'SATURATED_DEFECTIVE', + 'CLOUD_SHADOW', + 'UNCLASSIFIED', + 'CLOUD_MEDIUM_PROBA', + 'CLOUD_HIGH_PROBA', + 'THIN_CIRRUS', + 'SNOW_ICE']. + + Returns + ------- + filtered_s2_qi_df : pandas dataframe + Filtered dataframe. + + """ + filtered_s2_qi_df = s2_qi_dataframe.loc[ + s2_qi_dataframe[s2_filter1].sum(axis=1) < qi_thresh + ] + + return filtered_s2_qi_df + + +def s2_feature_collection_to_dataframes(s2_feature_collection): + """Convert feature collection dict from GEE to pandas dataframe. + + Parameters + ---------- + s2_feature_collection : dict + Dictionary returned by GEE. + + Returns + ------- + dataframes : pandas dataframe + GEE dictinary converted to pandas dataframe. + + """ + dataframes = {} + + for featnum in range(len(s2_feature_collection["features"])): + tmp_dict = {} + key = s2_feature_collection["features"][featnum]["properties"]["name"] + productid = s2_feature_collection["features"][featnum]["properties"][ + "productid" + ] + + dates = [ + datetime.datetime.strptime(d.split("_")[2], "%Y%m%dT%H%M%S") + for d in productid + ] + + tmp_dict.update({"Date": dates}) # , 'crs': crs} + properties = s2_feature_collection["features"][featnum]["properties"] + for prop, data in properties.items(): + if prop not in ["Date"]: # 'crs' ,, 'projection' + tmp_dict.update({prop: data}) + dataframes[key] = pd.DataFrame(tmp_dict) + return dataframes + + +def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True): + """Convert AOI.data dataframe to xarray dataset. + + Parameters + ---------- + aoi : AOI instance + AOI instance. + request_params : S2RequestParams + S2RequestParams. + convert_to_reflectance : boolean, optional + Convert S2 data from GEE (integers) to reflectances (floats), + i,e, divide by 10000. + The default is True. + + Returns + ------- + Nothing. + Converts the data atrribute dataframe to xarray Dataset. + xarray is better for handling multiband data. It also has + implementation for saving the data in NetCDF format. + + """ + # check that all bands have full data! 
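# The consistency check below builds one boolean Series per band (True
# where that band's pixel list has the expected length) and ANDs them
# together with functools.reduce, so only complete acquisitions are
# kept. A toy equivalent with made-up masks:
#
#     from functools import reduce
#     import pandas as pd
#     masks = [pd.Series([True, True]), pd.Series([True, False])]
#     keep = reduce(lambda a, b: a & b, masks)  # -> [True, False]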
+ datalengths = [ + aoi.data[b].apply(lambda d: len(d)) == len(aoi.data.iloc[0]["x_coords"]) + for b in request_params.bands + ] + consistent_data = reduce(lambda a, b: a & b, datalengths) + aoi.data = aoi.data[consistent_data] + + # 2D data + bands = request_params.bands + + # 1D data + list_vars = [ + "assetid", + "productid", + "sun_azimuth", + "sun_zenith", + "system_index", + "view_azimuth", + "view_zenith", + ] + + # crs from projection + crs = aoi.data["projection"].values[0]["crs"] + tileid = aoi.data["tileid"].values[0] + # original number of pixels requested (pixels inside AOI) + aoi_pixels = len(aoi.data.iloc[0]["x_coords"]) + + # transform 2D data to arrays + for b in bands: + + aoi.data[b] = aoi.data.apply( + lambda row: s2_lists_to_array( + row["x_coords"], + row["y_coords"], + row[b], + convert_to_reflectance=convert_to_reflectance, + ), + axis=1, + ) + + aoi.data["SCL"] = aoi.data.apply( + lambda row: s2_lists_to_array( + row["x_coords"], row["y_coords"], row["SCL"], convert_to_reflectance=False + ), + axis=1, + ) + + aoi.data["NDVI"] = aoi.data.apply( + lambda row: s2_lists_to_array( + row["x_coords"], row["y_coords"], row["NDVI"], convert_to_reflectance=False + ), + axis=1, + ) + + array = aoi.data[bands].values + + # this will stack the array to ndarray with + # dimension order = (time, band, x,y) + narray = np.stack( + [np.stack(array[:, b], axis=2) for b in range(len(bands))], axis=2 + ).transpose() # .swapaxes(2, 3) + + scl_array = np.stack(aoi.data["SCL"].values, axis=2).transpose() + ndvi_array = np.stack(aoi.data["NDVI"].values, axis=2).transpose() + + coords = { + "time": aoi.data["Date"].values, + "band": bands, + "x": np.unique(aoi.data.iloc[0]["x_coords"]), + "y": np.unique(aoi.data.iloc[0]["y_coords"]), + } + + dataset_dict = { + "band_data": (["time", "band", "x", "y"], narray), + "SCL": (["time", "x", "y"], scl_array), + "NDVI": (["time", "x", "y"], ndvi_array), + } + var_dict = {var: (["time"], aoi.data[var]) for var in list_vars} + dataset_dict.update(var_dict) + + ds = xr.Dataset( + dataset_dict, + coords=coords, attrs={ - "site_name": site_name, - "start_date": start, - "end_date": end, - "AOI": AOI, - "sensor": "LANDSAT/LC08/C01/T1_SR", + "name": aoi.name, + "crs": crs, + "tile_id": tileid, + "aoi_geometry": aoi.geometry.to_wkt(), + "aoi_pixels": aoi_pixels, }, ) + aoi.data = ds + + +def s2_lists_to_array(x_coords, y_coords, data, convert_to_reflectance=True): + """Convert 1D lists of coordinates and corresponding values to 2D array. + + Parameters + ---------- + x_coords : list + List of x-coordinates. + y_coords : list + List of y-coordinates. + data : list + List of data values corresponding to the coordinates. + convert_to_reflectance : boolean, optional + Convert S2 data from GEE (integers) to reflectances (floats), + i,e, divide by 10000. + The default is True. + + Returns + ------- + arr : 2D numpy array + Return 2D numpy array. 
+
+    """
+    # get the unique coordinates
+    uniqueYs = np.unique(y_coords)
+    uniqueXs = np.unique(x_coords)
+
+    # get number of columns and rows from coordinates
+    ncols = len(uniqueXs)
+    nrows = len(uniqueYs)
+
+    # determine pixelsizes
+    # ys = uniqueYs[1] - uniqueYs[0]
+    # xs = uniqueXs[1] - uniqueXs[0]
+
+    y_vals, y_idx = np.unique(y_coords, return_inverse=True)
+    x_vals, x_idx = np.unique(x_coords, return_inverse=True)
+    if convert_to_reflectance:
+        arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64)
+        arr.fill(np.nan)
+        arr[y_idx, x_idx] = np.array(data, dtype=np.float64) / S2_REFL_TRANS
+    else:
+        arr = np.empty(y_vals.shape + x_vals.shape, dtype=np.float64)
+        arr.fill(NO_DATA)  # or whatever your desired missing-data flag is
+        arr[y_idx, x_idx] = data
+    arr = np.flipud(arr)
+    return arr
+
+
+def xr_dataset_to_timeseries(xr_dataset, variables):
+    """Compute a timeseries dataframe from an xr dataset.
+
+    Parameters
+    ----------
+    xr_dataset : xarray dataset
+
+    variables : list
+        List of variable names as strings.
+
+    Returns
+    -------
+    df : pandas dataframe
+        Pandas dataframe with mean, std, se and percentage of NaNs inside the AOI.
+
+    """
+    df = pd.DataFrame({"Date": pd.to_datetime(xr_dataset.time.values)})
+
+    for var in variables:
+        df[var] = xr_dataset[var].mean(dim=["x", "y"])
+        df[var + "_std"] = xr_dataset[var].std(dim=["x", "y"])
+
+        # NaNs occur due to missing data when going from the 1D lists to the
+        # 2D arrays (pixels outside the polygon); the SNAP algorithm also
+        # produces NaNs through its input/output out-of-bounds checking.
+        # TODO: flagging with the SNAP biophys algorithm or some other solution
+        # to tell which NaNs come from the SNAP algorithm and which from the
+        # 1D-to-2D transformation
+        nans = np.isnan(xr_dataset[var]).sum(dim=["x", "y"])
+        sample_n = len(xr_dataset[var].x) * len(xr_dataset[var].y) - nans
+
+        # compute how many of the NaNs are inside the AOI (due to the SNAP algorithm)
+        out_of_aoi_pixels = (
+            len(xr_dataset[var].x) * len(xr_dataset[var].y) - xr_dataset.aoi_pixels
+        )
+        nans_inside_aoi = nans - out_of_aoi_pixels
+        df["aoi_nan_percentage"] = nans_inside_aoi / xr_dataset.aoi_pixels
+
+        df[var + "_se"] = df[var + "_std"] / np.sqrt(sample_n)
+
+    return df
+
+
+def gee2pecan_s2(geofile, outdir, start, end, qi_threshold):
+    """
+    Downloads Sentinel-2 data from GEE and saves it in a netCDF file at the specified location.
+
+    Parameters
+    ----------
+    geofile (str) -- path to the file containing the name and coordinates of the ROI, currently tested with geojson.
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    qi_threshold (float) -- From satellitetools: Threshold value to filter images based on the used qi filter. The qi filter holds labels of classes whose percentages within the AOI are summed. If the sum is larger than the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved.
+
+    Returns
+    -------
+    Nothing:
+        output netCDF is saved in the specified directory.
+ + Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray + """ + + # read in the input file containing coordinates + df = gpd.read_file(geofile) + + request = S2RequestParams(start, end) + + # filter area of interest from the coordinates in the input file + area = AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) + + # calcuate qi attribute for the AOI + ee_get_s2_quality_info(area, request) + + # get the final data + ee_get_s2_data(area, request, qi_threshold=qi_threshold) + + # convert dataframe to an xarray dataset, used later for converting to netCDF + s2_data_to_xarray(area, request) - # if specified path does not exist create it + # if specified output directory does not exist, create it if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) - # convert the dataset to a netCDF file and save it - tosave.to_netcdf(os.path.join(outdir, site_name + "_bands.nc")) + area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc")) From 3eb68d34d35e1633e6c54643e66d4ed9b07fd5ea Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 8 Jul 2020 15:04:27 +0530 Subject: [PATCH 1192/2289] corrected gee_utils function name --- modules/data.remote/inst/gee2pecan_smap.py | 4 ++-- modules/data.remote/inst/get_remote_data.py | 9 +++++++-- 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py index 5acffef4b01..fcf91a31623 100644 --- a/modules/data.remote/inst/gee2pecan_smap.py +++ b/modules/data.remote/inst/gee2pecan_smap.py @@ -137,8 +137,8 @@ def fc2dataframe(fc): datadf = fc2dataframe(fc) - site_name = get_sitecoord(geofile) - AOI = get_siteaoi(geofile) + site_name = get_sitename(geofile) + AOI = get_sitecoord(geofile) # convert the dataframe to an xarray dataset, used for converting it to a netCDF file tosave = xr.Dataset( diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index e9a664c4dae..aa0b5568479 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -6,7 +6,7 @@ Requires Python3 -Author: Ayush Prasad +Author(s): Ayush Prasad, Istem Fer """ from importlib import import_module @@ -50,7 +50,12 @@ def get_remote_data(geofile, outdir, start, end, source, collection, qc=None): # get collection id from the dictionary collection = collection_dict[collection] except KeyError: - print("Requested image collection is not available") + print( + "Please check if the collection name you requested is one of these and spelled correctly. 
If not, you need to implement a corresponding gee2pecan_{} function and add it to the collection dictionary.".format(
+                collection
+            )
+        )
+        print(collection_dict.keys())
     # construct the function name
     func_name = "".join([source, "2pecan", "_", collection])
     # import the module

From 2e23c56bfcb8a1248e06ec2f5b757a5e6d2d978d Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 9 Jul 2020 12:28:27 +0530
Subject: [PATCH 1193/2289] added requirements file

---
 modules/data.remote/inst/requirements.txt | 40 +++++++++++++++++++++++
 1 file changed, 40 insertions(+)
 create mode 100644 modules/data.remote/inst/requirements.txt

diff --git a/modules/data.remote/inst/requirements.txt b/modules/data.remote/inst/requirements.txt
new file mode 100644
index 00000000000..bfd099d95d7
--- /dev/null
+++ b/modules/data.remote/inst/requirements.txt
@@ -0,0 +1,40 @@
+attrs==19.3.0
+cachetools==4.1.0
+certifi==2020.4.5.2
+cftime==1.1.3
+chardet==3.0.4
+click==7.1.2
+click-plugins==1.1.1
+cligj==0.5.0
+earthengine-api==0.1.224
+Fiona==1.8.13.post1
+future==0.18.2
+geopandas==0.7.0
+google-api-core==1.20.0
+google-api-python-client==1.9.2
+google-auth==1.16.1
+google-auth-httplib2==0.0.3
+google-cloud-core==1.3.0
+google-cloud-storage==1.28.1
+google-resumable-media==0.5.1
+googleapis-common-protos==1.52.0
+httplib2==0.18.1
+httplib2shim==0.0.3
+idna==2.9
+munch==2.5.0
+netCDF4==1.5.3
+numpy==1.18.5
+pandas==1.0.4
+protobuf==3.12.2
+pyasn1==0.4.8
+pyasn1-modules==0.2.8
+pyproj==2.6.1.post1
+python-dateutil==2.8.1
+pytz==2020.1
+requests==2.23.0
+rsa==4.0
+Shapely==1.7.0
+six==1.15.0
+uritemplate==3.0.1
+urllib3==1.25.9
+xarray==0.15.1

From 7a2729cef2990cdd429ddbadeb089d810d3a802d Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 9 Jul 2020 12:33:04 +0530
Subject: [PATCH 1194/2289] added remote data module documentation to book

---
 .../03_topical_pages/09_standalone_tools.Rmd  | 72 +++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 006d90c7653..2544e65a388 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -133,3 +133,75 @@ input.id = tbl(bety,"inputs") %>% filter(name == input_name) %>% pull(id)
 ```R
 data = PEcAn.benchmark::load_data(data.path = data.path, format = format)
 ```
+
+## Remote data module
+The remote data module retrieves remote sensing data from MODISTools and Google Earth Engine (with plans to support AppEEARS in the near future). The downloaded data can be used for performing further analysis in PEcAn.
+
+#### Google Earth Engine
+[Google Earth Engine](https://earthengine.google.com/) is a cloud-based platform for performing analysis on satellite data. It provides access to a [large data catalog](https://developers.google.com/earth-engine/datasets) through an online JavaScript code editor and a Python API.
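+
+For orientation, a minimal Earth Engine Python API call looks like the following (the point coordinates and dates are illustrative; `COPERNICUS/S2_SR` is the Sentinel-2 surface reflectance collection this module uses):
+
+```python
+import ee
+ee.Initialize()  # assumes you have run `earthengine authenticate` once
+
+# count the Sentinel-2 SR images available over a point during 2018
+point = ee.Geometry.Point([-72.17, 42.54])
+n = (
+    ee.ImageCollection("COPERNICUS/S2_SR")
+    .filterBounds(point)
+    .filterDate("2018-01-01", "2018-12-31")
+    .size()
+    .getInfo()
+)
+print(n)
+```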
+
+Datasets currently available for use in PEcAn via Google Earth Engine are:
+
+* [Sentinel-2 MSI](https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR) [`gee2pecan_s2()`](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/gee2pecan_s2.py)
+* [SMAP Global Soil Moisture Data](https://developers.google.com/earth-engine/datasets/catalog/NASA_USDA_HSL_SMAP_soil_moisture) [`gee2pecan_smap()`](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/gee2pecan_smap.py)
+* [Landsat 8 Surface Reflectance](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR) [`gee2pecan_l8()`](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/gee2pecan_l8.py)
+
+Processing functions currently available are:
+
+* [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py)
+
+#### Set-up instructions:
+
+1. **Sign up for Google Earth Engine**. Follow the instructions [here](https://earthengine.google.com/new_signup/) to sign up for GEE. You need your own GEE account to use the GEE download functions in this module.
+
+2. **Install the Python dependencies**. Using this module requires Python 3 and the pip package manager to be installed on your system. To install the additional Python dependencies required:
+a. Navigate to `pecan/modules/data.remote/inst`. From inside the pecan directory, this can be done with:
+```bash
+cd modules/data.remote/inst
+```
+b. Use pip to install the dependencies:
+```bash
+pip install -r requirements.txt
+```
+3. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials (a one-time step). The credentials are stored locally on your system. This can be done with:
+```bash
+# this will open a browser and ask you to sign in with the Google account registered for GEE
+earthengine authenticate
+```
+
+#### Usage guide:
+This module contains GEE and other processing functions, each of which retrieves or processes data from a specific source. For example, `gee2pecan_s2()` downloads bands from Sentinel-2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI with the SNAP algorithm. These functions can be used independently, but the recommended entry point is `remote_process()`, the main function that coordinates the individual functions and provides an organized way of downloading and handling remote sensing data in PEcAn.
+
+
+**Input data**: The input data must be the coordinates of the area of interest (point or polygon type) and the name of the site/AOI. This information must be provided in a GeoJSON file.
+
+**Output data**: The output data returned by the functions are netCDF files.
+
+#### Example use
+This example downloads Sentinel-2 bands for an area in Reykjavik (a test file is included) for the period 2018-01-01 to 2018-12-31 and then uses the SNAP algorithm to compute Leaf Area Index.
+
+1. Open a Python shell at `pecan/modules/data.remote/inst`
+
+2.
In the Python shell run the following, +```python +# import remote_process +from remote_process import remote_process + +# call remote_process +remote_process( + geofile="./satellitetools/test.geojson", + outdir="./out", + start="2018-01-01", + end="2018-12-31", + source="gee", + collection="COPERNICUS/S2_SR", + qc=1, + algorithm="snap", + output={"get_data": "bands", "process_data": "lai"}, + stage={"get_data": True, "process_data": True}, + ) +``` + +The output netCDF files(bands and LAI) will be saved at `./out` +More information about the function and its arguments can be found out by `help(remote_process)` \ No newline at end of file From 37a9f74f682072b1c2399050269a0a255e2bfdb6 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 9 Jul 2020 18:15:41 +0530 Subject: [PATCH 1195/2289] added argument to specify scale in s2 and l8 --- modules/data.remote/inst/gee2pecan_l8.py | 11 +++----- modules/data.remote/inst/gee2pecan_s2.py | 32 ++++++++++++++---------- 2 files changed, 22 insertions(+), 21 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_l8.py b/modules/data.remote/inst/gee2pecan_l8.py index ec132d70e8a..814d2b58344 100644 --- a/modules/data.remote/inst/gee2pecan_l8.py +++ b/modules/data.remote/inst/gee2pecan_l8.py @@ -22,7 +22,7 @@ ee.Initialize() -def gee2pecan_l8(geofile, outdir, start, end, qc=1): +def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1): """ Extracts Landsat 8 SR band data from GEE @@ -36,9 +36,9 @@ def gee2pecan_l8(geofile, outdir, start, end, qc=1): end (str) -- ending date areaof the data request in the form YYYY-MM-DD - qc (bool) -- uses the cloud masking function if set to True + scale (int) -- spatial resolution - bands (list of str) -- bands to be retrieved. Default: B5, B4 + qc (bool) -- uses the cloud masking function if set to True Returns ------- @@ -47,10 +47,6 @@ def gee2pecan_l8(geofile, outdir, start, end, qc=1): """ - # scale (int) Default: 30 - scale = 30 - # bands retrieved ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11"] - def reduce_region(image): """ Reduces the selected region @@ -84,7 +80,6 @@ def mask(image): .sort("system:time_start", True) ) - # map NDVI to the image collection and select the bands landsat = landsat.map(calc_ndvi(nir="B5", red="B4")).select( ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B10", "B11", "NDVI"] diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py index a6cc4833874..1e668fb9bad 100644 --- a/modules/data.remote/inst/gee2pecan_s2.py +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -4,9 +4,6 @@ Created on Thu Feb 6 15:24:12 2020 Module to retrieve Sentinel-2 data from Google Earth Engine (GEE). -Warning: the data is currently retrieved with 10m resolution (scale=10), so -the 20m resolution bands are resampled. -TODO: Add option for specifying the request spatial resolution. @author: Olli Nevalainen (olli.nevalainen@fmi.fi), Finnish Meteorological Institute) @@ -66,13 +63,15 @@ class S2RequestParams: Starting date for data request in form "2019-01-01". dateend : str Starting date for data request in form "2019-12-31". + scale : int + Spatial resolution bands : list, optional List of strings with band name. the default is ['B3', 'B4', 'B5', 'B6', 'B7', 'B8A', 'B11', 'B12']. """ - def __init__(self, datestart, dateend, bands=None): + def __init__(self, datestart, dateend, scale, bands=None): """. Parameters @@ -81,6 +80,8 @@ def __init__(self, datestart, dateend, bands=None): Starting date for data request in form "2019-01-01". 
dateend : str Starting date for data request in form "2019-12-31". + scale : int + Spatial resolution bands : list, optional List of strings with band name. @@ -106,6 +107,7 @@ def __init__(self, datestart, dateend, bands=None): self.datestart = datestart self.dateend = dateend + self.scale = scale self.bands = bands if bands else default_bands @@ -230,7 +232,10 @@ def ee_get_s2_quality_info_image(img): # apply reducer to list img = img.reduceRegion( - reducer=ee.Reducer.toList(), geometry=area, maxPixels=1e8, scale=10 + reducer=ee.Reducer.toList(), + geometry=area, + maxPixels=1e8, + scale=req_params.scale, ) # get data into arrays @@ -292,10 +297,6 @@ def ee_get_s2_quality_info_image(img): def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): """Get S2 data (level L2A, bottom of atmosphere data) from GEE. - Warning: the data is currently retrieved with 10m resolution (scale=10), so - the 20m resolution bands are resampled. - TODO: Add option for specifying the request spatial resolution. - Parameters ---------- AOIs : list or AOI instance @@ -400,7 +401,7 @@ def ee_get_s2_data_image(img): .get([0]) ) - img = img.resample("bilinear").reproject(crs=crs, scale=10) + img = img.resample("bilinear").reproject(crs=crs, scale=req_params.scale) # get the lat lon and add the ndvi image_grid = ee.Image.pixelCoordinates(ee.Projection(crs)).addBands( @@ -409,7 +410,10 @@ def ee_get_s2_data_image(img): # apply reducer to list image_grid = image_grid.reduceRegion( - reducer=ee.Reducer.toList(), geometry=geom, maxPixels=1e8, scale=10 + reducer=ee.Reducer.toList(), + geometry=geom, + maxPixels=1e8, + scale=req_params.scale, ) # get data into arrays @@ -753,7 +757,7 @@ def xr_dataset_to_timeseries(xr_dataset, variables): return df -def gee2pecan_s2(geofile, outdir, start, end, qi_threshold): +def gee2pecan_s2(geofile, outdir, start, end, scale, qi_threshold): """ Downloads Sentinel 2 data from gee and saves it in a netCDF file at the specified location. @@ -766,6 +770,8 @@ def gee2pecan_s2(geofile, outdir, start, end, qi_threshold): start (str) -- starting date of the data request in the form YYYY-MM-DD end (str) -- ending date of the data request in the form YYYY-MM-DD + + scale (int) - spatial resolution, recommended value 10 qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. 
The default is 1, meaning all data is retrieved @@ -780,7 +786,7 @@ def gee2pecan_s2(geofile, outdir, start, end, qi_threshold): # read in the input file containing coordinates df = gpd.read_file(geofile) - request = S2RequestParams(start, end) + request = S2RequestParams(start, end, scale) # filter area of interest from the coordinates in the input file area = AOI(df[df.columns[0]].iloc[0], df[df.columns[1]].iloc[0]) From f90541f961e41fdf10d78d435a8a6975ae7a19d2 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 9 Jul 2020 18:17:06 +0530 Subject: [PATCH 1196/2289] updated remote_process, get_remote_data to add scale argument --- modules/data.remote/inst/get_remote_data.py | 6 ++++-- modules/data.remote/inst/remote_process.py | 6 +++++- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index aa0b5568479..6b97986044d 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -20,7 +20,7 @@ } -def get_remote_data(geofile, outdir, start, end, source, collection, qc=None): +def get_remote_data(geofile, outdir, start, end, source, collection, scale=None, qc=None): """ uses GEE and AppEEARS functions to download data @@ -38,6 +38,8 @@ def get_remote_data(geofile, outdir, start, end, source, collection, qc=None): collection (str) -- dataset ID + scale (int) -- spatial resolution of the image + qc (float) -- quality control parameter Returns @@ -64,7 +66,7 @@ def get_remote_data(geofile, outdir, start, end, source, collection, qc=None): func = getattr(module, func_name) # if a qc parameter is specified pass these arguments to the function if qc: - func(geofile, outdir, start, end, qc) + func(geofile, outdir, start, end, scale, qc) # this part takes care of functions which do not perform any quality checks, e.g. SMAP else: func(geofile, outdir, start, end) diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index 464cc503a03..e7d4286929c 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -21,6 +21,7 @@ def remote_process( end, source, collection, + scale=None, qc=None, algorithm=None, output={"get_data": "bands", "process_data": "lai"}, @@ -44,6 +45,8 @@ def remote_process( collection (str) -- dataset or product name as it is provided on the source, e.g. 
"LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR" for gee + scale (int) -- spatial resolution of the image, None by default, recommended to use 10 for Sentinel 2 + qc (float) -- quality control parameter, only required for gee queries, None by default algorithm (str) -- algorithm used for processing data in process_data(), currently only SNAP is implemented to estimate LAI from Sentinel-2 bands, None by default @@ -63,7 +66,7 @@ def remote_process( aoi_name = get_sitename(geofile) if stage["get_data"]: - get_remote_data(geofile, outdir, start, end, source, collection, qc) + get_remote_data(geofile, outdir, start, end, source, collection, scale, qc) if stage["process_data"]: process_remote_data(aoi_name, output, outdir, algorithm) @@ -77,6 +80,7 @@ def remote_process( end="2018-12-31", source="gee", collection="COPERNICUS/S2_SR", + scale=10, qc=1, algorithm="snap", output={"get_data": "bands", "process_data": "lai"}, From 1f311be2b28e9e6270c1268d42df2bae2b9df924 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 9 Jul 2020 18:26:52 +0530 Subject: [PATCH 1197/2289] updated scale comment --- modules/data.remote/inst/gee2pecan_l8.py | 2 +- modules/data.remote/inst/gee2pecan_s2.py | 6 +++--- modules/data.remote/inst/get_remote_data.py | 2 +- modules/data.remote/inst/remote_process.py | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_l8.py b/modules/data.remote/inst/gee2pecan_l8.py index 814d2b58344..c221fd04790 100644 --- a/modules/data.remote/inst/gee2pecan_l8.py +++ b/modules/data.remote/inst/gee2pecan_l8.py @@ -36,7 +36,7 @@ def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1): end (str) -- ending date areaof the data request in the form YYYY-MM-DD - scale (int) -- spatial resolution + scale (int) -- pixel resolution qc (bool) -- uses the cloud masking function if set to True diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py index 1e668fb9bad..a017f0f7e8f 100644 --- a/modules/data.remote/inst/gee2pecan_s2.py +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -64,7 +64,7 @@ class S2RequestParams: dateend : str Starting date for data request in form "2019-12-31". scale : int - Spatial resolution + pixel resolution bands : list, optional List of strings with band name. the default is ['B3', 'B4', 'B5', @@ -81,7 +81,7 @@ def __init__(self, datestart, dateend, scale, bands=None): dateend : str Starting date for data request in form "2019-12-31". scale : int - Spatial resolution + pixel resolution bands : list, optional List of strings with band name. @@ -771,7 +771,7 @@ def gee2pecan_s2(geofile, outdir, start, end, scale, qi_threshold): end (str) -- ending date of the data request in the form YYYY-MM-DD - scale (int) - spatial resolution, recommended value 10 + scale (int) - pixel resolution, recommended value 10 qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. 
The default is 1, meaning all data is retrieved diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index 6b97986044d..2782d7ddb3a 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -38,7 +38,7 @@ def get_remote_data(geofile, outdir, start, end, source, collection, scale=None, collection (str) -- dataset ID - scale (int) -- spatial resolution of the image + scale (int) -- pixel resolution qc (float) -- quality control parameter diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index e7d4286929c..3a6794f3442 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -45,7 +45,7 @@ def remote_process( collection (str) -- dataset or product name as it is provided on the source, e.g. "LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR" for gee - scale (int) -- spatial resolution of the image, None by default, recommended to use 10 for Sentinel 2 + scale (int) -- pixel resolution, None by default, recommended to use 10 for Sentinel 2 qc (float) -- quality control parameter, only required for gee queries, None by default From 90e6acc5aaf26fd6d897c3896bdddfc7026b87ef Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 9 Jul 2020 18:29:16 +0530 Subject: [PATCH 1198/2289] updated book documentation --- book_source/03_topical_pages/09_standalone_tools.Rmd | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 2544e65a388..e1d03acaf99 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -135,7 +135,7 @@ input.id = tbl(bety,"inputs") %>% filter(name == input_name) %>% pull(id) ``` ## Remote data module -Remote data module retrieves remote sensing data from MODISTools and Google Earth Engine (with plans to support AppEEARS in near future). The downloaded data can be used for preforming further analysis in PEcAn. +Remote data module retrieves remote sensing data from MODISTools and Google Earth Engine (with plans to support AppEEARS in near future). The downloaded data can be used for performing further analysis in PEcAn. #### Google Earth Engine [Google Earth Engine](https://earthengine.google.com/) is a cloud-based platform for performing analysis on satellite data. It provides access to a [large data catalog](https://developers.google.com/earth-engine/datasets) through an online JavaScript code editor and a Python API. @@ -150,7 +150,7 @@ Processing functions currently available are, * [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py) -#### Set-Up instructions: +#### Set-Up instructions (one time step): 1. **Sign up for the Google Earth Engine**. Follow the instructions [here](https://earthengine.google.com/new_signup/) to sign up for using GEE. You need to have your own GEE account for using the GEE download functions in this module. @@ -164,7 +164,7 @@ b. Use pip to install the dependencies. ```bash pip install -r requirements.txt ``` -3. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials (one time step). The credentials will be stored locally on your system. This can be done by, +3. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials. 
The credentials will be stored locally on your system. This can be done by,
```bash
#this will open a browser and ask you to sign in with the Google account registered for GEE
earthengine authenticate
```
@@ -178,6 +178,8 @@ This module contains GEE and other processing functions which retrieve data from

**Output data**: The output data returned by the functions are in the form of netCDF files.

+**scale**: Some of the GEE functions require a pixel resolution argument. Information about how GEE handles scale can be found [here](https://developers.google.com/earth-engine/scale)
+
#### Example use
This example will download Sentinel 2 bands for an area in Reykjavik (test file is included) for the time period 2018-01-01 to 2018-12-31 and then use the SNAP algorithm to compute Leaf Area Index.
From 3a7dd6dec447b18f4af9282788b1d2691e1d0a9c Mon Sep 17 00:00:00 2001
From: Ayush Prasad <ayush.prd@gmail.com>
Date: Thu, 9 Jul 2020 18:31:45 +0530
Subject: [PATCH 1199/2289] fixed error in book

---
 book_source/03_topical_pages/09_standalone_tools.Rmd | 1 +
 1 file changed, 1 insertion(+)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index e1d03acaf99..8b9a4d23753 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -198,6 +198,7 @@ remote_process(
     end="2018-12-31",
     source="gee",
     collection="COPERNICUS/S2_SR",
+    scale=10,
     qc=1,
     algorithm="snap",
     output={"get_data": "bands", "process_data": "lai"},
From fe47b7b9cdaf699666c33a2c4b95356dd950d6f1 Mon Sep 17 00:00:00 2001
From: Ayush Prasad <ayush.prd@gmail.com>
Date: Fri, 10 Jul 2020 19:07:56 +0530
Subject: [PATCH 1200/2289] Update book_source/03_topical_pages/09_standalone_tools.Rmd

Co-authored-by: istfer <istfer@users.noreply.github.com>
---
 book_source/03_topical_pages/09_standalone_tools.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 8b9a4d23753..c8ce4ef1558 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -150,7 +150,7 @@ Processing functions currently available are,
 
 * [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py)
 
-#### Set-Up instructions (one time step):
+#### Set-Up instructions (first time and one time only):
 
 1. **Sign up for the Google Earth Engine**. Follow the instructions [here](https://earthengine.google.com/new_signup/) to sign up for using GEE. You need to have your own GEE account for using the GEE download functions in this module. 
@@ -207,4 +207,4 @@ remote_process( ``` The output netCDF files(bands and LAI) will be saved at `./out` -More information about the function and its arguments can be found out by `help(remote_process)` \ No newline at end of file +More information about the function and its arguments can be found out by `help(remote_process)` From ed59f677294401b30a762cdb4b06094b85d06cd0 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 9 Jul 2020 11:55:28 -0400 Subject: [PATCH 1201/2289] Parallelizing some parts --- modules/assim.sequential/R/Remote_helpers.R | 27 ++++++++++++--------- 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R index d88ef859221..f37f329f226 100644 --- a/modules/assim.sequential/R/Remote_helpers.R +++ b/modules/assim.sequential/R/Remote_helpers.R @@ -94,7 +94,7 @@ Obs.data.prepare.MultiSite <- function(obs.path, site.ids) { SDA_remote_launcher <-function(settingPath, ObsPath, run.bash.args){ - + plan(multiprocess) #--------------------------------------------------------------- # Reading the settings #--------------------------------------------------------------- @@ -133,7 +133,7 @@ SDA_remote_launcher <-function(settingPath, fname_p2<-"" } - + folder_name<-"SDA" folder_name <- paste0(c("SDA",fname_p1,fname_p2), collapse = "_") #creating a folder on remote out <- remote.execute.R(script=paste0("dir.create(\"/",settings$host$folder,"//",folder_name,"\")"), @@ -229,7 +229,7 @@ SDA_remote_launcher <-function(settingPath, #--------------------------------------------------------------- # Finding all the met paths in your settings if (is.MultiSettings(settings)){ - input.paths <-settings %>% map(~.x[['run']] ) %>% map(~.x[['inputs']] %>% map(~.x[['path']])) %>% unlist() + input.paths <-settings$run %>% map(~.x[['inputs']] %>% map(~.x[['path']])) %>% unlist() } else { input.paths <-settings$run$inputs %>% map(~.x[['path']]) %>% unlist() } @@ -263,18 +263,21 @@ SDA_remote_launcher <-function(settingPath, unique() %>% discard(~ .x %in% c(".", "/fs/data1/pecan.data/dbfiles" )) - #copy over - remote.copy.to( - my_host, - need.copy.dirs, - file.path(settings$host$folder, folder_name, "inputs"), - delete = FALSE, - stderr = FALSE - ) + need.copy.dirs %>% + purrr::walk( ~ #copy over + remote.copy.to( + my_host, + .x, + file.path(settings$host$folder, folder_name, "inputs"), + delete = FALSE, + stderr = FALSE + )) + + need.copy%>% - walk(function(missing.input){ + furrr::future_map(function(missing.input){ tryCatch({ From fe87105391736d3c7d5b73f7f2ffdd84fd72a7b5 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 9 Jul 2020 12:07:49 -0400 Subject: [PATCH 1202/2289] Addressing comments --- modules/assim.sequential/R/Remote_helpers.R | 2 +- .../inst/RemoteLauncher/SDA_launcher.R | 30 ++++++++++--------- modules/data.land/R/ic_process.R | 6 ++-- 3 files changed, 20 insertions(+), 18 deletions(-) diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R index f37f329f226..f3eccf08ff8 100644 --- a/modules/assim.sequential/R/Remote_helpers.R +++ b/modules/assim.sequential/R/Remote_helpers.R @@ -261,7 +261,7 @@ SDA_remote_launcher <-function(settingPath, need.copy.dirs <- dirname(need.copy) %>% unique() %>% - discard(~ .x %in% c(".", "/fs/data1/pecan.data/dbfiles" )) + discard(~ .x %in% c(".")) need.copy.dirs %>% diff --git a/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R b/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R index 
ea4c8402b80..abefb8bac4a 100644 --- a/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R +++ b/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R @@ -35,20 +35,22 @@ if (is.na(args[2])){ #--------------------------------------------------------------- setwd(settings$outdir) # This is how I delete large folders -c('run', 'out', 'SDA') %>% - map(function(dir.delete) { - if (dir.exists(file.path(settings$outdir, dir.delete))) { - setwd(settings$outdir) - list.dirs(dir.delete, full.names = T) %>% - furrr::future_map(function(del.dir) { - setwd(file.path(settings$outdir, del.dir)) - system(paste0("perl -e 'for(<*>){((stat)[9]<(unlink))}'")) - }) - PEcAn.logger::logger.info(paste0("I just deleted ", dir.delete, " folder !")) - } - }) - -unlink(c('run', 'out', 'SDA'), recursive = TRUE) +# In case there is an SDA run already performed in this dir and you're planning to use the same dir for some reason +# These next lines could be uncommented to delete the necessary dirs. +# c('run', 'out', 'SDA') %>% +# map(function(dir.delete) { +# if (dir.exists(file.path(settings$outdir, dir.delete))) { +# setwd(settings$outdir) +# list.dirs(dir.delete, full.names = T) %>% +# furrr::future_map(function(del.dir) { +# setwd(file.path(settings$outdir, del.dir)) +# system(paste0("perl -e 'for(<*>){((stat)[9]<(unlink))}'")) +# }) +# PEcAn.logger::logger.info(paste0("I just deleted ", dir.delete, " folder !")) +# } +# }) +# +# unlink(c('run', 'out', 'SDA'), recursive = TRUE) #---------------------------------------------------------------- # Find what sites we are running for #--------------------------------------------------------------- diff --git a/modules/data.land/R/ic_process.R b/modules/data.land/R/ic_process.R index 6bc5c580017..e1d9aa73150 100644 --- a/modules/data.land/R/ic_process.R +++ b/modules/data.land/R/ic_process.R @@ -50,11 +50,11 @@ ic_process <- function(settings, input, dir, overwrite = FALSE){ con <- bety$con on.exit(db.close(con), add = TRUE) - + latlon <- PEcAn.data.atmosphere::db.site.lat.lon(site$id, con = con) # setup site database number, lat, lon and name and copy for format.vars if new input new.site <- data.frame(id = as.numeric(site$id), - lat = PEcAn.data.atmosphere::db.site.lat.lon(site$id, con = con)$lat, - lon = PEcAn.data.atmosphere::db.site.lat.lon(site$id, con = con)$lon) + lat = latlon$lat, + lon = latlon$lon) new.site$name <- settings$run$site$name From 9a14fe39073447427cf0154e314b7a196393d56c Mon Sep 17 00:00:00 2001 From: araiho Date: Thu, 9 Jul 2020 15:21:59 -0400 Subject: [PATCH 1203/2289] trying to replace <<- with globalVariables --- modules/assim.sequential/R/Analysis_sda.R | 33 ++++++++++++++++------- 1 file changed, 23 insertions(+), 10 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 9ca1e3e1ea9..0955cac0d01 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -121,7 +121,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 operators <- sapply(settings$state.data.assimilation$inputs, '[[', "operator") #Loading nimbles functions - PEcAn.assim.sequential::load_nimble() + #PEcAn.assim.sequential::load_nimble() #Forecast inputs Q <- Forecast$Q # process error @@ -142,6 +142,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 aqq <- extraArg$aqq bqq <- extraArg$bqq wts <- extraArg$wts/sum(extraArg$wts) + if(any(is.na(wts))){ logger.warn(paste('We found an NA in the 
wts for the ensemble members. Is this what you want? For now, we will change the NA to a zero.')) wts[is.na(wts)] <- 0 @@ -157,7 +158,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 ###Snow no snow hack for(ii in 1:ncol(X)){ - try(if( sum(X[,ii],na.rm=T)==0 ) X[sample(x = 1:nrow(X),size = .2*nrow(X)),ii] <- .00000001) + try(if( sum(X[,ii],na.rm=T)==0 ) X[sample(x = 1:nrow(X),size = .2*nrow(X)),ii] <- .001) } ####getting ready to calculate y.ind and x.ind @@ -188,10 +189,10 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 #The purpose of this step is to impute data for mu.f #where there are zero values so that #mu.f is in 'tobit space' in the full model - constants.tobit2space <<- list(N = nrow(X), + constants.tobit2space <- list(N = nrow(X), J = length(mu.f)) - data.tobit2space <<- list(y.ind = x.ind, + data.tobit2space <- list(y.ind = x.ind, y.censored = x.censored, mu_0 = rep(0,length(mu.f)), lambda_0 = diag(1000,length(mu.f)), #can try solve @@ -205,7 +206,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 cholesky = chol(solve(stats::cov(X))))) #ptm <- proc.time() - tobit2space_pred <<- nimbleModel(tobit2space.model, data = data.tobit2space, + tobit2space_pred <- nimbleModel(tobit2space.model, data = data.tobit2space, constants = constants.tobit2space, inits = inits.tobit2space(), name = 'space') @@ -214,13 +215,13 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 if(logprob_y_tobit2space < -1000000) logger.warn(paste('Log probability very low for y in tobit2space model during time',t,'. Check initial conditions.')) ## Adding X.mod,q,r as data for building model. - conf_tobit2space <<- configureMCMC(tobit2space_pred, thin = 10, print=TRUE) + conf_tobit2space <- configureMCMC(tobit2space_pred, thin = 10, print=TRUE) conf_tobit2space$addMonitors(c("pf", "muf","y.censored")) conf_tobit2space$removeSampler('pf') conf_tobit2space$addSampler('pf','conj_wt_wishart_sampler') - samplerNumberOffset_tobit2space <<- length(conf_tobit2space$getSamplers()) + samplerNumberOffset_tobit2space <- length(conf_tobit2space$getSamplers()) for(j in seq_along(mu.f)){ for(n in seq_len(nrow(X))){ @@ -231,11 +232,11 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 } } - Rmcmc_tobit2space <<- buildMCMC(conf_tobit2space) + Rmcmc_tobit2space <- buildMCMC(conf_tobit2space) #restarting at good initial conditions is somewhat important here - Cmodel_tobit2space <<- compileNimble(tobit2space_pred) - Cmcmc_tobit2space <<- compileNimble(Rmcmc_tobit2space, project = tobit2space_pred) + Cmodel_tobit2space <- compileNimble(tobit2space_pred) + Cmcmc_tobit2space <- compileNimble(Rmcmc_tobit2space, project = tobit2space_pred) for(i in seq_along(X)) { ## ironically, here we have to "toggle" the value of y.ind[i] @@ -244,6 +245,18 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i]) } + utils::globalVariables( + 'constants.tobit2space', + 'data.tobit2space', + 'inits.tobit2space', + 'tobit2space_pred', + 'conf_tobit2space', + 'samplerNumberOffset_tobit2space', + 'Rmcmc_tobit2space', + 'Cmodel_tobit2space', + 'Cmcmc_tobit2space' + ) + }else{ Cmodel_tobit2space$wts <- wts*nrow(X) From 2b5db2f7aa241e28d633705f5faedf1224b0c3e5 Mon Sep 17 00:00:00 2001 From: hamzed Date: Thu, 9 Jul 2020 15:34:20 -0400 
Subject: [PATCH 1204/2289] adding the namespace --- modules/assim.sequential/R/Remote_helpers.R | 3 ++- modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R index f3eccf08ff8..6da0a791a3d 100644 --- a/modules/assim.sequential/R/Remote_helpers.R +++ b/modules/assim.sequential/R/Remote_helpers.R @@ -94,7 +94,8 @@ Obs.data.prepare.MultiSite <- function(obs.path, site.ids) { SDA_remote_launcher <-function(settingPath, ObsPath, run.bash.args){ - plan(multiprocess) + + future::plan(future::multiprocess) #--------------------------------------------------------------- # Reading the settings #--------------------------------------------------------------- diff --git a/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R b/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R index abefb8bac4a..8d481157a51 100644 --- a/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R +++ b/modules/assim.sequential/inst/RemoteLauncher/SDA_launcher.R @@ -11,6 +11,7 @@ library(furrr) library(nimble) library(reshape2) library(tictoc) + plan(multiprocess) #---------------------------------------------------------------- # Reading settings and paths From 5837c1fd4f08693614501279ba5bf55815c48f5c Mon Sep 17 00:00:00 2001 From: araiho Date: Thu, 9 Jul 2020 17:11:13 -0400 Subject: [PATCH 1205/2289] added export to get rid of <<- --- modules/assim.sequential/R/Nimble_codes.R | 496 ++++++++++++---------- 1 file changed, 277 insertions(+), 219 deletions(-) diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R index 37bf19c445b..d2fd9848f14 100644 --- a/modules/assim.sequential/R/Nimble_codes.R +++ b/modules/assim.sequential/R/Nimble_codes.R @@ -1,250 +1,308 @@ ##' @title load_nimble ##' @name load_nimble ##' @author Ann Raiho, Hamze Dokoohaki -##' +##' ##' @description This functions is internally used to register a series of nimble functions inside GEF analysis function. 
-##' +##' #' @import nimble -load_nimble <- function(){ - #y_star_create-------------------------------------------------------------------- - y_star_create <<- nimbleFunction( - run = function(X = double(1)) { - returnType(double(1)) - - y_star <- X - - return(y_star) - }) +#' +#' +#' @export +y_star_create <- nimbleFunction( + run = function(X = double(1)) { + returnType(double(1)) + + y_star <- X + + return(y_star) + }, + where = nimble::getLoadingNamespace() +) - alr <<- nimbleFunction( - run = function(y = double(1)) { - returnType(double(1)) - - y[y < .00001] <- .000001 - - y_out <- log(y[1:(length(y)-1)] / y[length(y)]) - - return(y_out) - }) - - - inv.alr <<- nimbleFunction( - run = function(alr = double(1)) { - returnType(double(1)) - - y = exp(c(alr, 0)) / sum(exp(c(alr, 0))) - - return(y) - }) - - rwtmnorm <<- nimbleFunction( - run = function(n = integer(0), mean = double(1), - prec = double(2), wt = double(0)){ - returnType(double(1)) - if(n != 1) nimPrint("rwtmnorm only allows n = 1; using n = 1.") - Prob <- rmnorm_chol(n=1, mean, chol(prec), prec_param = TRUE) * wt - return(Prob) +#' @export +alr <- nimbleFunction( + run = function(y = double(1)) { + returnType(double(1)) + + y[y < .00001] <- .000001 + + y_out <- log(y[1:(length(y) - 1)] / y[length(y)]) + + return(y_out) + }, + where = nimble::getLoadingNamespace() +) + +#' @export +inv.alr <- nimbleFunction( + run = function(alr = double(1)) { + returnType(double(1)) + + y = exp(c(alr, 0)) / sum(exp(c(alr, 0))) + + return(y) + }, + where = nimble::getLoadingNamespace() +) + +#' @export +rwtmnorm <- nimbleFunction( + run = function(n = integer(0), + mean = double(1), + prec = double(2), + wt = double(0)) { + returnType(double(1)) + if (n != 1) + nimPrint("rwtmnorm only allows n = 1; using n = 1.") + Prob <- + rmnorm_chol(n = 1, mean, chol(prec), prec_param = TRUE) * wt + return(Prob) + }, + where = nimble::getLoadingNamespace() +) + +#' @export +dwtmnorm <- nimbleFunction( + run = function(x = double(1), + mean = double(1), + prec = double(2), + wt = double(0), + log = integer(0, default = 0)) { + returnType(double(0)) + + logProb <- + dmnorm_chol( + x = x, + mean = mean, + cholesky = chol(prec), + prec_param = TRUE, + log = TRUE + ) * wt + + if (log) { + return((logProb)) + } else { + return((exp(logProb))) } + }, + where = nimble::getLoadingNamespace() +) + +registerDistributions(list(dwtmnorm = list( + BUGSdist = "dwtmnorm(mean, prec, wt)", + types = c( + 'value = double(1)', + 'mean = double(1)', + 'prec = double(2)', + 'wt = double(0)' ) - - dwtmnorm <<- nimbleFunction( - run = function(x = double(1), mean = double(1), prec = double(2), - wt = double(0), log = integer(0, default=0)){ - returnType(double(0)) - - logProb <- dmnorm_chol(x = x, mean = mean, cholesky = chol(prec), - prec_param = TRUE, log = TRUE) * wt - - if(log){return((logProb))} else {return((exp(logProb)))} +))) + +#tobit2space.model------------------------------------------------------------------------------------------------ +#' @export +tobit2space.model <- nimbleCode({ + for (i in 1:N) { + y.censored[i, 1:J] ~ dwtmnorm(mean = muf[1:J], + prec = pf[1:J, 1:J], + wt = wts[i]) # + for (j in 1:J) { + y.ind[i, j] ~ dinterval(y.censored[i, j], 0) } - ) + } + + muf[1:J] ~ dmnorm(mean = mu_0[1:J], prec = Sigma_0[1:J, 1:J]) + pf[1:J, 1:J] ~ dwish(S = lambda_0[1:J, 1:J], df = nu_0) + +}) + +#tobit.model--This does the GEF ---------------------------------------------------- +#' @export +tobit.model <- nimbleCode({ + q[1:N, 1:N] ~ dwish(R = aq[1:N, 1:N], df = 
bq) ## aq and bq are estimated over time + Q[1:N, 1:N] <- inverse(q[1:N, 1:N]) + X.mod[1:N] ~ dmnorm(muf[1:N], prec = pf[1:N, 1:N]) ## Model Forecast ##muf and pf are assigned from ensembles - registerDistributions(list(dwtmnorm = list(BUGSdist = "dwtmnorm(mean, prec, wt)", - types = c('value = double(1)','mean = double(1)', 'prec = double(2)', 'wt = double(0)')))) + ## add process error + X[1:N] ~ dmnorm(X.mod[1:N], prec = q[1:N, 1:N]) - #tobit2space.model------------------------------------------------------------------------------------------------ - tobit2space.model <<- nimbleCode({ + #observation operator + + if (direct_TRUE) { + y_star[X_direct_start:X_direct_end] <- + y_star_create(X[X_direct_start:X_direct_end]) + } else{ - for(i in 1:N){ - - y.censored[i,1:J] ~ dwtmnorm(mean = muf[1:J], - prec = pf[1:J,1:J], - wt = wts[i]) # - for(j in 1:J){ - y.ind[i,j] ~ dinterval(y.censored[i,j], 0) - } - } + } + + if (fcomp_TRUE) { + y_star[X_fcomp_start:X_fcomp_end] <- + alr(X[X_fcomp_model_start:X_fcomp_model_end]) + } else{ - muf[1:J] ~ dmnorm(mean = mu_0[1:J], prec = Sigma_0[1:J,1:J]) - pf[1:J,1:J] ~ dwish(S = lambda_0[1:J,1:J], df = nu_0) + } + + if (pft2total_TRUE) { + y_star[X_pft2total_start] <- + sum(X[X_pft2total_model_start:X_pft2total_model_end]) + } else{ - }) + } - #tobit.model--This does the GEF ---------------------------------------------------- - tobit.model <<- nimbleCode({ + #likelihood + y.censored[1:YN] ~ dmnorm(y_star[1:YN], prec = r[1:YN, 1:YN]) + for (i in 1:YN) { + y.ind[i] ~ dinterval(y.censored[i], 0) + } + + +}) - q[1:N,1:N] ~ dwish(R = aq[1:N,1:N], df = bq) ## aq and bq are estimated over time - Q[1:N,1:N] <- inverse(q[1:N,1:N]) - X.mod[1:N] ~ dmnorm(muf[1:N], prec = pf[1:N,1:N]) ## Model Forecast ##muf and pf are assigned from ensembles - - ## add process error - X[1:N] ~ dmnorm(X.mod[1:N], prec = q[1:N,1:N]) +#tobit.model--This does the GEF for multi Site ------------------------------------- +#' @export +GEF.MultiSite.Nimble <- nimbleCode({ + if (q.type == 1) { + # Sorting out qs + qq ~ dgamma(aq, bq) ## aq and bq are estimated over time + q[1:YN, 1:YN] <- qq * diag(YN) + } else if (q.type == 2) { + # Sorting out qs + q[1:YN, 1:YN] ~ dwish(R = aq[1:YN, 1:YN], df = bq) ## aq and bq are estimated over time - #observation operator + } + + # X model + X.mod[1:N] ~ dmnorm(mean = muf[1:N], cov = pf[1:N, 1:N]) + + for (i in 1:nH) { + tmpX[i] <- X.mod[H[i]] + Xs[i] <- tmpX[i] + } + ## add process error to x model but just for the state variables that we have data and H knows who + X[1:YN] ~ dmnorm(Xs[1:YN], prec = q[1:YN, 1:YN]) + + ## Likelihood + y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN, 1:YN]) + + # #puting the ones that they don't have q in Xall - They come from X.model + # # If I don't have data on then then their q is not identifiable, so we use the same Xs as Xmodel + for (j in 1:nNotH) { + tmpXmod[j] <- X.mod[NotH[j]] + Xall[NotH[j]] <- tmpXmod[j] + } + #These are the one that they have data and their q can be estimated. 
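+  # Note: H[i] holds the index of observed state variable i in the full state
+  # vector, so Xall[H[i]] receives the corresponding updated value X[i].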
+ for (i in 1:nH) { + tmpXH[i] <- X[i] + Xall[H[i]] <- tmpXH[i] + } + + for (i in 1:YN) { + y.ind[i] ~ dinterval(y.censored[i], 0) + } + +}) - if(direct_TRUE){ - y_star[X_direct_start:X_direct_end] <- y_star_create(X[X_direct_start:X_direct_end]) - } else{ - - } +#sampler_toggle------------------------------------------------------------------------------------------------ +#' @export +sampler_toggle <- nimbleFunction( + contains = sampler_BASE, + setup = function(model, mvSaved, target, control) { + type <- control$type + nested_sampler_name <- paste0('sampler_', type) + control_new <- nimbleOptions('MCMCcontrolDefaultList') + control_new[[names(control)]] <- control + nested_sampler_list <- nimbleFunctionList(sampler_BASE) + nested_sampler_list[[1]] <- + do.call(nested_sampler_name, + list(model, mvSaved, target, control_new)) + toggle <- 1 + }, + run = function() { + if (toggle == 1) + nested_sampler_list[[1]]$run() + }, + methods = list( + reset = function() + nested_sampler_list[[1]]$reset() + ), + where = nimble::getLoadingNamespace() +) + +#' @export +conj_wt_wishart_sampler <- nimbleFunction( + contains = sampler_BASE, + setup = function(model, mvSaved, target, control) { + targetAsScalar <- + model$expandNodeNames(target, returnScalarComponents = TRUE) + d <- sqrt(length(targetAsScalar)) - if(fcomp_TRUE){ - y_star[X_fcomp_start:X_fcomp_end] <- alr(X[X_fcomp_model_start:X_fcomp_model_end]) - } else{ - - } + dep_dmnorm_nodeNames <- + model$getDependencies(target, self = F, includeData = T) + N_dep_dmnorm <- length(dep_dmnorm_nodeNames) - if(pft2total_TRUE){ - y_star[X_pft2total_start] <- sum(X[X_pft2total_model_start:X_pft2total_model_end]) - } else{ - - } + dep_dmnorm_nodeSize <- d #ragged problem refered to on github? - #likelihood - y.censored[1:YN] ~ dmnorm(y_star[1:YN], prec = r[1:YN,1:YN]) - for(i in 1:YN){ - y.ind[i] ~ dinterval(y.censored[i], 0) - } - + calcNodes <- model$getDependencies(target) - }) - - #tobit.model--This does the GEF for multi Site ------------------------------------- - GEF.MultiSite.Nimble <<- nimbleCode({ + dep_dmnorm_values <- + array(0, c(N_dep_dmnorm, dep_dmnorm_nodeSize)) + dep_dmnorm_mean <- + array(0, c(N_dep_dmnorm, dep_dmnorm_nodeSize)) + dep_dmnorm_wt <- numeric(N_dep_dmnorm) - if (q.type == 1){ - # Sorting out qs - qq ~ dgamma(aq, bq) ## aq and bq are estimated over time - q[1:YN, 1:YN] <- qq * diag(YN) - } else if (q.type == 2){ - # Sorting out qs - q[1:YN, 1:YN] ~ dwish(R = aq[1:YN, 1:YN], df = bq) ## aq and bq are estimated over time - - } - - # X model - X.mod[1:N] ~ dmnorm(mean = muf[1:N], cov = pf[1:N, 1:N]) + }, + run = function() { + #Find Prior Values + prior_R <- model$getParam(target[1], "R") + prior_df <- model$getParam(target[1], "df") - for (i in 1:nH) { - tmpX[i] <- X.mod[H[i]] - Xs[i] <- tmpX[i] + #Loop over multivariate normal + for (iDep in 1:N_dep_dmnorm) { + dep_dmnorm_values[iDep, 1:dep_dmnorm_nodeSize] <<- + model$getParam(dep_dmnorm_nodeNames[iDep], + "value") + dep_dmnorm_mean[iDep, 1:dep_dmnorm_nodeSize] <<- + model$getParam(dep_dmnorm_nodeNames[iDep], + "mean") + dep_dmnorm_wt[iDep] <<- + model$getParam(dep_dmnorm_nodeNames[iDep], + "wt") + } - ## add process error to x model but just for the state variables that we have data and H knows who - X[1:YN] ~ dmnorm(Xs[1:YN], prec = q[1:YN, 1:YN]) - - ## Likelihood - y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN, 1:YN]) - # #puting the ones that they don't have q in Xall - They come from X.model - # # If I don't have data on then then their q is not identifiable, so we 
use the same Xs as Xmodel - for (j in 1:nNotH) { - tmpXmod[j] <- X.mod[NotH[j]] - Xall[NotH[j]] <- tmpXmod[j] - } - #These are the one that they have data and their q can be estimated. - for (i in 1:nH) { - tmpXH[i] <- X[i] - Xall[H[i]] <- tmpXH[i] + #Calculate contribution parameters for wishart based on multivariate normal + contribution_R <<- nimArray(0, dim = nimC(d, d)) + contribution_df <<- 0 + for (iDep in 1:N_dep_dmnorm) { + tmp_diff <<- + sqrt(dep_dmnorm_wt[iDep]) * asRow(dep_dmnorm_values[iDep, 1:d] - dep_dmnorm_mean[iDep, 1:d]) + + contribution_R <<- contribution_R + t(tmp_diff) %*% tmp_diff + + contribution_df <<- contribution_df + dep_dmnorm_wt[iDep] } - for (i in 1:YN) { - y.ind[i] ~ dinterval(y.censored[i], 0) - } + #Draw a new value based on prior and contribution parameters + newValue <- + rwish_chol( + 1, + cholesky = chol(prior_R + contribution_R), + df = prior_df + contribution_df, + scale_param = 0 + ) + model[[target]] <<- newValue - }) - - #sampler_toggle------------------------------------------------------------------------------------------------ - sampler_toggle <<- nimbleFunction( - contains = sampler_BASE, - setup = function(model, mvSaved, target, control) { - type <- control$type - nested_sampler_name <- paste0('sampler_', type) - control_new <- nimbleOptions('MCMCcontrolDefaultList') - control_new[[names(control)]] <- control - nested_sampler_list <- nimbleFunctionList(sampler_BASE) - nested_sampler_list[[1]] <- do.call(nested_sampler_name, list(model, mvSaved, target, control_new)) - toggle <- 1 - }, - run = function() { - if(toggle == 1) - nested_sampler_list[[1]]$run() - }, - methods = list( - reset = function() - nested_sampler_list[[1]]$reset() + #Calculate probability + calculate(model, calcNodes) + nimCopy( + from = model, + to = mvSaved, + row = 1, + nodes = calcNodes, + logProb = TRUE ) + }, + methods = list( + reset = function () { + } ) - - - conj_wt_wishart_sampler <<- nimbleFunction( - contains = sampler_BASE, - setup = function(model, mvSaved, target, control) { - - targetAsScalar <- model$expandNodeNames(target, returnScalarComponents = TRUE) - d <- sqrt(length(targetAsScalar)) - - dep_dmnorm_nodeNames <- model$getDependencies(target, self = F, includeData = T) - N_dep_dmnorm <- length(dep_dmnorm_nodeNames) - - dep_dmnorm_nodeSize <- d #ragged problem refered to on github? 
- - calcNodes <- model$getDependencies(target) - - dep_dmnorm_values <- array(0, c(N_dep_dmnorm,dep_dmnorm_nodeSize)) - dep_dmnorm_mean <- array(0, c(N_dep_dmnorm,dep_dmnorm_nodeSize)) - dep_dmnorm_wt <- numeric(N_dep_dmnorm) - - }, - run = function(){ - - #Find Prior Values - prior_R <- model$getParam(target[1], "R") - prior_df <- model$getParam(target[1], "df") - - #Loop over multivariate normal - for (iDep in 1:N_dep_dmnorm) { - dep_dmnorm_values[iDep, 1:dep_dmnorm_nodeSize] <<- model$getParam(dep_dmnorm_nodeNames[iDep], - "value") - dep_dmnorm_mean[iDep, 1:dep_dmnorm_nodeSize] <<- model$getParam(dep_dmnorm_nodeNames[iDep], - "mean") - dep_dmnorm_wt[iDep] <<- model$getParam(dep_dmnorm_nodeNames[iDep], - "wt") - - } - - #Calculate contribution parameters for wishart based on multivariate normal - contribution_R <<- nimArray(0, dim = nimC(d, d)) - contribution_df <<- 0 - for (iDep in 1:N_dep_dmnorm) { - tmp_diff <<- sqrt(dep_dmnorm_wt[iDep]) * asRow(dep_dmnorm_values[iDep, 1:d] - dep_dmnorm_mean[iDep, 1:d]) - - contribution_R <<- contribution_R + t(tmp_diff)%*%tmp_diff - - contribution_df <<- contribution_df + dep_dmnorm_wt[iDep] - } - - #Draw a new value based on prior and contribution parameters - newValue <- rwish_chol(1, cholesky = chol(prior_R + contribution_R), - df = prior_df + contribution_df, scale_param = 0) - model[[target]] <<- newValue - - #Calculate probability - calculate(model, calcNodes) - nimCopy(from = model, to = mvSaved, row = 1, nodes = calcNodes, - logProb = TRUE) - }, - methods = list( reset = function () {} ) - ) - - -} +) From cbb586a9ab29f3460ec4611ef5beba5e1b5437b5 Mon Sep 17 00:00:00 2001 From: araiho Date: Thu, 9 Jul 2020 17:16:45 -0400 Subject: [PATCH 1206/2289] adding more global vars --- modules/assim.sequential/R/Analysis_sda.R | 38 +++++++++++++++-------- 1 file changed, 25 insertions(+), 13 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 0955cac0d01..5586d85c2d6 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -425,7 +425,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 } if(t == 1 | recompileGEF){ - constants.tobit <<- list( + constants.tobit <- list( N = ncol(X), YN = length(y.ind), X_direct_start = X_direct_start, @@ -442,25 +442,25 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 pft2total_TRUE = pft2total_TRUE ) - dimensions.tobit <<- list(X = length(mu.f), X.mod = ncol(X), + dimensions.tobit <- list(X = length(mu.f), X.mod = ncol(X), Q = c(length(mu.f),length(mu.f)), y_star = (length(y.censored))) - data.tobit <<- list(muf = as.vector(mu.f), + data.tobit <- list(muf = as.vector(mu.f), pf = solve(Pf), aq = aqq, bq = bqq, y.ind = y.ind, y.censored = y.censored, r = R) #precision - inits.pred <<- - function() - list( + inits.pred <- function(){ + list( q = diag(ncol(X)) * stats::runif(1, length(mu.f), length(mu.f) + 1), X.mod = stats::rnorm(length(mu.f), mu.f, 1), X = stats::rnorm(length(mu.f), mu.f, .1), y_star = stats::rnorm(length(y.censored), 0, 1) - ) + ) + } # # inits.pred <- list(q = diag(length(mu.f))*(length(mu.f)+1), @@ -468,13 +468,13 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 # X = rnorm(length(mu.f),mu.f,1), # y_star = rnorm(length(y.censored),0,1)) - model_pred <<- nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit, + model_pred <- nimbleModel(tobit.model, data = data.tobit, 
dimensions = dimensions.tobit, constants = constants.tobit, inits = inits.pred(), name = 'base') model_pred$initializeInfo() ## Adding X.mod,q,r as data for building model. - conf <<- configureMCMC(model_pred, print=TRUE,thin = 10) + conf <- configureMCMC(model_pred, print=TRUE,thin = 10) conf$addMonitors(c("X","X.mod","q","Q", "y_star","y.censored")) if(ncol(X) > length(y.censored)){ @@ -500,7 +500,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 ## important! ## this is needed for correct indexing later - samplerNumberOffset <<- length(conf$getSamplers()) + samplerNumberOffset <- length(conf$getSamplers()) for(i in 1:length(y.ind)) { node <- paste0('y.censored[',i,']') @@ -513,10 +513,10 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 ## can monitor y.censored, if you wish, to verify correct behaviour #conf$addMonitors('y.censored') - Rmcmc <<- buildMCMC(conf) + Rmcmc <- buildMCMC(conf) - Cmodel <<- compileNimble(model_pred) - Cmcmc <<- compileNimble(Rmcmc, project = model_pred) + Cmodel <- compileNimble(model_pred) + Cmcmc <- compileNimble(Rmcmc, project = model_pred) for(i in 1:length(y.ind)) { ## ironically, here we have to "toggle" the value of y.ind[i] @@ -525,6 +525,18 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i]) } + utils::globalVariables( + 'constants.tobit', + 'data.tobit', + 'inits.tobit', + 'model_pred', + 'conf', + 'samplerNumberOffset', + 'Rmcmc', + 'Cmodel', + 'Cmcmc' + ) + }else{ Cmodel$y.ind <- y.ind Cmodel$y.censored <- y.censored From 8b9192a2015e154b357dda84513e9b707a02d3f2 Mon Sep 17 00:00:00 2001 From: araiho Date: Thu, 9 Jul 2020 17:18:21 -0400 Subject: [PATCH 1207/2289] adding the roxygen stuff --- modules/assim.sequential/NAMESPACE | 10 ++++++++++ modules/assim.sequential/man/load_nimble.Rd | 3 ++- 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE index 654a3ab8541..6063886de62 100644 --- a/modules/assim.sequential/NAMESPACE +++ b/modules/assim.sequential/NAMESPACE @@ -9,6 +9,7 @@ export(EnKF) export(EnKF.MultiSite) export(GEF) export(GEF.MultiSite) +export(GEF.MultiSite.Nimble) export(Local.support) export(Obs.data.prepare.MultiSite) export(Remote.Sync.launcher) @@ -16,12 +17,16 @@ export(SDA_control) export(SDA_remote_launcher) export(adj.ens) export(alltocs) +export(alr) export(assessParams) export(block_matrix) +export(conj_wt_wishart_sampler) +export(dwtmnorm) export(generate_colors_sda) export(get_ensemble_weights) export(hop_test) export(interactive.plotting.sda) +export(inv.alr) export(load_data_paleon_sda) export(outlier.detector.boxplot) export(piecew.poly.local) @@ -31,11 +36,16 @@ export(post.analysis.multisite.ggplot) export(postana.bias.plotting.sda) export(postana.bias.plotting.sda.corr) export(postana.timeser.plotting.sda) +export(rwtmnorm) export(sample_met) +export(sampler_toggle) export(sda.enkf) export(sda.enkf.multisite) export(sda.enkf.original) export(simple.local) +export(tobit.model) +export(tobit2space.model) +export(y_star_create) import(furrr) import(lubridate) import(nimble) diff --git a/modules/assim.sequential/man/load_nimble.Rd b/modules/assim.sequential/man/load_nimble.Rd index 703ca040d1c..903fc4f0372 100644 --- a/modules/assim.sequential/man/load_nimble.Rd +++ b/modules/assim.sequential/man/load_nimble.Rd @@ -2,9 +2,10 @@ % Please edit 
documentation in R/Nimble_codes.R \name{load_nimble} \alias{load_nimble} +\alias{y_star_create} \title{load_nimble} \usage{ -load_nimble() +y_star_create(X) } \description{ This functions is internally used to register a series of nimble functions inside GEF analysis function. From 5c6e113d5802ecad4b90fb35f33ea91f376ef95d Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Fri, 10 Jul 2020 12:30:58 +0530 Subject: [PATCH 1208/2289] reverted roxygen to 7.0.2 --- base/logger/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index d78726612f1..8d2cd9ab22d 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -11,5 +11,5 @@ Suggests: testthat License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true -RoxygenNote: 7.1.0 +RoxygenNote: 7.0.2 Roxygen: list(markdown = TRUE) From 5be6cfa4056c89d71183b144ae15c0d5664d2f55 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Fri, 10 Jul 2020 12:36:19 +0530 Subject: [PATCH 1209/2289] reverted version to 7.0.2 --- base/remote/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/remote/DESCRIPTION b/base/remote/DESCRIPTION index 7e0fd00bcbc..f7a1174e619 100644 --- a/base/remote/DESCRIPTION +++ b/base/remote/DESCRIPTION @@ -21,4 +21,4 @@ License: BSD_3_clause + file LICENSE Encoding: UTF-8 LazyData: true Roxygen: list(markdown = TRUE) -RoxygenNote: 7.1.0 +RoxygenNote: 7.0.2 From fc54515e6896d5312e96ce67577c8aac97a910be Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Fri, 10 Jul 2020 12:40:18 +0530 Subject: [PATCH 1210/2289] delete book.yml --- actions/book.yml | 31 ------------------------------- 1 file changed, 31 deletions(-) delete mode 100644 actions/book.yml diff --git a/actions/book.yml b/actions/book.yml deleted file mode 100644 index 933b3174c34..00000000000 --- a/actions/book.yml +++ /dev/null @@ -1,31 +0,0 @@ -# This is a basic workflow to help you get started with Actions - -name: CI - -on: - push: - branches: master - pull_request: - branches: master - -# A workflow run is made up of one or more jobs that can run sequentially or in parallel -jobs: - # This workflow contains a single job called "build" - build: - # The type of runner that the job will run on - runs-on: ubuntu-latest - container: pecan/depends:develop - - steps: - # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it - - uses: actions/checkout@v2 - - - name: Building book from source - run: cd book_source && make - - # Runs a set of commands using the runners shell - - name: Run a multi-line script - run: | - echo Add other actions to build, - echo test, and deploy your project. - From 02f98a7d92f9ca96953c95b968cded970f296660 Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Fri, 10 Jul 2020 12:43:03 +0530 Subject: [PATCH 1211/2289] reverted roxygen to 7.0.2 --- modules/emulator/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION index cd10e25c424..a782b0ab330 100644 --- a/modules/emulator/DESCRIPTION +++ b/modules/emulator/DESCRIPTION @@ -21,4 +21,4 @@ Description: Implementation of a Gaussian Process model (both likelihood and for sampling design and prediction. 
License: BSD_3_clause + file LICENSE Encoding: UTF-8 -RoxygenNote: 7.1.0 +RoxygenNote: 7.0.2 From ebfbf410032dcbf3cbf8193b0e8e5f86e212ca4b Mon Sep 17 00:00:00 2001 From: MukulMaheshwari <31155543+MukulMaheshwari@users.noreply.github.com> Date: Fri, 10 Jul 2020 13:11:07 +0530 Subject: [PATCH 1212/2289] updated book.yml --- .github/workflows/book.yml | 4 ---- 1 file changed, 4 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index e1672d7c385..8f976666157 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -17,11 +17,7 @@ jobs: runs-on: ubuntu-latest container: pecan/base:latest steps: -<<<<<<< HEAD - - uses: actions/checkout@v1 -======= - uses: actions/checkout@v2 ->>>>>>> 7a3455bcd90cb0c1d9faaee20624c47285f0f6e2 - uses: r-lib/actions/setup-r@v1 - uses: r-lib/actions/setup-pandoc@v1 - name: Install rmarkdown From 3be84647b0666a7b22a21e613c81c4a08559b71c Mon Sep 17 00:00:00 2001 From: araiho Date: Fri, 10 Jul 2020 11:31:23 -0400 Subject: [PATCH 1213/2289] plot is not a variable it's a function within an apply function --- modules/assim.sequential/R/Analysis_sda.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 5586d85c2d6..86b2b8a62bf 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -648,7 +648,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 for(rr in 1:nrow(dat_save)) { eigen_save[rr,] <- eigen((matrix(dat_save[rr, iq],ncol(X),ncol(X))))$values } - apply(eigen_save,2,plot,typ='l') + apply(eigen_save,2,graphics::plot,typ='l') dev.off() return(list(mu.f = mu.f, From 0295a60869cbbddc0f7487b7535c6a645cbc899c Mon Sep 17 00:00:00 2001 From: araiho Date: Fri, 10 Jul 2020 11:45:47 -0400 Subject: [PATCH 1214/2289] trying to get rid of some rogue library calls --- modules/assim.sequential/DESCRIPTION | 3 +++ modules/assim.sequential/R/Analysis_sda_multiSite.R | 2 +- modules/assim.sequential/R/load_data_paleon_sda.R | 2 +- modules/assim.sequential/R/sda.particle.R | 9 ++++----- modules/assim.sequential/R/sda_plotting.R | 3 +-- 5 files changed, 10 insertions(+), 9 deletions(-) diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index 59f071e7855..774da4e1a71 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -11,11 +11,14 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific streamline the interaction between data and models, and to improve the efficacy of scientific investigation. 
Imports: + corrplot, + ggrepel, PEcAn.logger, PEcAn.remote, plyr (>= 1.8.4), magic (>= 1.5.0), lubridate (>= 1.6.0), + plotrix, reshape2 (>= 1.4.2), nimble, tictoc, diff --git a/modules/assim.sequential/R/Analysis_sda_multiSite.R b/modules/assim.sequential/R/Analysis_sda_multiSite.R index bb951120465..fae437e2c86 100644 --- a/modules/assim.sequential/R/Analysis_sda_multiSite.R +++ b/modules/assim.sequential/R/Analysis_sda_multiSite.R @@ -92,7 +92,7 @@ GEF.MultiSite<-function(setting, Forecast, Observed, H, extraArg,...){ q.type <- ifelse(q.type == "SITE", Site.q, pft.q) } #Loading nimbles functions - if (!exists('GEF.MultiSite.Nimble')) PEcAn.assim.sequential:::load_nimble() + if (!exists('GEF.MultiSite.Nimble')) PEcAn.assim.sequential::load_nimble() #load_nimble() #Forecast inputs Q <- Forecast$Q # process error diff --git a/modules/assim.sequential/R/load_data_paleon_sda.R b/modules/assim.sequential/R/load_data_paleon_sda.R index 16a7bb790b8..ae978551e0a 100644 --- a/modules/assim.sequential/R/load_data_paleon_sda.R +++ b/modules/assim.sequential/R/load_data_paleon_sda.R @@ -24,7 +24,7 @@ load_data_paleon_sda <- function(settings){ return(obs.list) } - library(plyr) #need to load to use .fnc below + # library(plyr) #need to load to use .fnc below d <- settings$database$bety[c("dbname", "password", "host", "user")] bety <- src_postgres(host = d$host, user = d$user, password = d$password, dbname = d$dbname) diff --git a/modules/assim.sequential/R/sda.particle.R b/modules/assim.sequential/R/sda.particle.R index bbb5b448759..f06c34cf773 100644 --- a/modules/assim.sequential/R/sda.particle.R +++ b/modules/assim.sequential/R/sda.particle.R @@ -148,20 +148,19 @@ sda.particle <- function(model) { } ## Weighted distributions - library(plotrix) for (i in c(2, 3)) { - weighted.hist(ensp.conv[[i]][, nt], wc[, nt]/sum(wc[, nt]), main = names(ensp.all)[i]) + plotrix::weighted.hist(ensp.conv[[i]][, nt], wc[, nt]/sum(wc[, nt]), main = names(ensp.all)[i]) } # for(i in c(1,2,4,5)){ for (i in c(2, 3)) { if (i == 5) { - weighted.hist(ensp.conv[[i]][, nt], + plotrix::weighted.hist(ensp.conv[[i]][, nt], wc[, nt]/sum(wc[, nt]), main = names(ensp.all)[i], col = 2) } else { - weighted.hist(ensp.conv[[i]][, nt], + plotrix::weighted.hist(ensp.conv[[i]][, nt], wc[, nt]/sum(wc[, nt]), main = names(ensp.all)[i], xlim = range(ensp.conv[[i]][, nt]) * c(0.9, 1.1), @@ -172,7 +171,7 @@ sda.particle <- function(model) { for (i in c(2, 3)) { h <- hist(ensp.conv[[i]][, nt], plot = FALSE) - w <- weighted.hist(ensp.conv[[i]][, nt], wc[, nt] / sum(wc[, nt]), plot = FALSE, breaks = h$breaks) + w <- plotrix::weighted.hist(ensp.conv[[i]][, nt], wc[, nt] / sum(wc[, nt]), plot = FALSE, breaks = h$breaks) dx <- diff(h$breaks)[1] plot(w$mids - dx/2, w$density/dx, diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 7155a89ad33..facf9feaf28 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -293,14 +293,13 @@ postana.bias.plotting.sda.corr<-function(t, obs.times, X, aqq, bqq){ generate_colors_sda() #--- - library(corrplot) pdf('SDA/process.var.plots.pdf') cor.mat <- cov2cor(aqq[t,,] / bqq[t]) colnames(cor.mat) <- colnames(X) rownames(cor.mat) <- colnames(X) par(mfrow = c(1, 1), mai = c(1, 1, 4, 1)) - corrplot(cor.mat, type = "upper", tl.srt = 45,order='FPC') + corrplot::corrplot(cor.mat, type = "upper", tl.srt = 45,order='FPC') par(mfrow=c(1,1)) plot(as.Date(obs.times[t1:t]), bqq[t1:t], From 
bf4e1fd6a7454f36da66108cbd23dab0aa48d9e3 Mon Sep 17 00:00:00 2001
From: araiho
Date: Fri, 10 Jul 2020 12:41:37 -0400
Subject: [PATCH 1215/2289] trying to fix description errors

---
 modules/assim.sequential/DESCRIPTION | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION
index 774da4e1a71..fe98afd94d0 100644
--- a/modules/assim.sequential/DESCRIPTION
+++ b/modules/assim.sequential/DESCRIPTION
@@ -12,18 +12,31 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific
     efficacy of scientific investigation.
 Imports:
     corrplot,
+    devtools,
+    dplyr,
+    future,
     ggrepel,
+    Matrix,
+    ncdf4,
+    PEcAn.benchmark,
+    PEcAn.DB,
     PEcAn.logger,
     PEcAn.remote,
+    PEcAn.settings,
+    PEcAn.utils,
     plyr (>= 1.8.4),
     magic (>= 1.5.0),
     lubridate (>= 1.6.0),
     plotrix,
     reshape2 (>= 1.4.2),
+    sf,
+    sp,
     nimble,
-    tictoc,
+    tictoc,
+    tidyr,
     purrr,
     furrr,
+    XML,
     coda
 Suggests:
     testthat

From b9118961e65070b61bc444dcc11e46282472297c Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Fri, 10 Jul 2020 20:41:20 +0000
Subject: [PATCH 1216/2289] changes for docker deployment of api

---
 apps/api/Dockerfile | 21 +++++++++++++--------
 apps/api/R/runs.R   |  6 ++++--
 docker-compose.yml  |  5 ++++-
 3 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile
index abec70cff90..0e88133c538 100644
--- a/apps/api/Dockerfile
+++ b/apps/api/Dockerfile
@@ -8,10 +8,7 @@ ARG IMAGE_VERSION="latest"
 FROM pecan/base:${IMAGE_VERSION}
 LABEL maintainer="Tezan Sahu "
-COPY ./ /api
-
-WORKDIR /api/R
+EXPOSE 8000

 # --------------------------------------------------------------------------
 # Variables to store in docker image (most of them come from the base image)
@@ -21,10 +18,18 @@ ENV AUTH_REQ="yes" \
     PGHOST="postgres"

 # COMMAND TO RUN
-RUN apt-get update
-RUN apt-get install libsodium-dev -y
-RUN Rscript -e "devtools::install_version('promises', '1.1.0', repos = 'http://cran.rstudio.com')" \
+RUN apt-get update \
+    && apt-get install libsodium-dev -y \
+    && rm -rf /var/lib/apt/lists/* \
+    && Rscript -e "devtools::install_version('promises', '1.1.0', repos = 'http://cran.rstudio.com')" \
    && Rscript -e "devtools::install_version('webutils', '1.1', repos = 'http://cran.rstudio.com')" \
    && Rscript -e "devtools::install_github('rstudio/swagger')" \
    && Rscript -e "devtools::install_github('rstudio/plumber')"
-CMD Rscript entrypoint.R
\ No newline at end of file
+
+WORKDIR /api/R
+
+CMD Rscript entrypoint.R
+
+COPY ./ /api
+
+
diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index 30fdfeab34c..48f790ee2a7 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -1,3 +1,5 @@
+library(dplyr)
+
 #' Get the list of runs (belonging to a particular workflow)
 #' @param workflow_id Workflow id (character)
 #' @param offset
 #' @param limit
 #' @return List of runs (belonging to a particular workflow)
 #' @author Tezan Sahu
 #* @get /
-getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){
+getRuns <- function(req, workflow_id, offset=0, limit=50, res){
   if (!
 limit %in% c(10, 20, 50, 100, 500)) {
     res$status <- 400
     return(list(error = "Invalid value for parameter"))
@@ -79,7 +81,7 @@ getWorkflows <- function(req, workflow_id, offset=0, limit=50, res){
 #' @return Details of requested run
 #' @author Tezan Sahu
 #* @get /
-getWorkflowDetails <- function(id, res){
+getRunDetails <- function(id, res){

   dbcon <- PEcAn.DB::betyConnect()

diff --git a/docker-compose.yml b/docker-compose.yml
index 64843b53e11..6deef1deb00 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -332,8 +332,11 @@ services:
       - AUTH_REQ=${AUTH_REQ:-TRUE}
     labels:
       - "traefik.enable=true"
-      - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/"
+      - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/api"
       - "traefik.backend=api"
+      - "traefik.port=8000"
+    depends_on:
+      - postgres
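After this patch the API container is published behind Traefik under the `/api` prefix. A minimal smoke test of the deployment, sketched under the assumption that the stack is reachable at `http://localhost:8000` and uses the demo `carya`/`illinois` credentials from the documentation patch further below (both values are assumptions, not fixed defaults):

```R
library(httr)

# Hit the general status endpoint through the new /api route; a 200 response
# means the plumber container is up and routed correctly by Traefik.
res <- GET(
  "http://localhost:8000/api/status",
  authenticate("carya", "illinois")  # assumed demo credentials
)
print(content(res))
```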
From 39e3cbb275e553ddb7704e2e6350bee924d604c7 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Fri, 10 Jul 2020 21:15:13 +0000
Subject: [PATCH 1217/2289] added search functionality for models & sites

---
 apps/api/R/entrypoint.R    |   4 +
 apps/api/R/models.R        |  55 ++++++++----
 apps/api/R/sites.R         |  58 +++++++++++++
 apps/api/pecanapi-spec.yml | 171 +++++++++++++++++++++++++++++++++++--
 4 files changed, 262 insertions(+), 26 deletions(-)
 create mode 100644 apps/api/R/sites.R

diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R
index 98d82ee69e6..8d28eab09bd 100755
--- a/apps/api/R/entrypoint.R
+++ b/apps/api/R/entrypoint.R
@@ -23,6 +23,10 @@ root$handle("GET", "/api/status", status)
 models_pr <- plumber::plumber$new("models.R")
 root$mount("/api/models", models_pr)

+# The endpoints mounted here are related to details of PEcAn sites
+sites_pr <- plumber::plumber$new("sites.R")
+root$mount("/api/sites", sites_pr)
+
 # The endpoints mounted here are related to details of PEcAn workflows
 workflows_pr <- plumber::plumber$new("workflows.R")
 root$mount("/api/workflows", workflows_pr)
diff --git a/apps/api/R/models.R b/apps/api/R/models.R
index ef8da6582cd..b68d55894ab 100644
--- a/apps/api/R/models.R
+++ b/apps/api/R/models.R
@@ -1,31 +1,52 @@
 library(dplyr)

-#' Retrieve the details of a particular version of a model
-#' @param name Model name (character)
-#' @param revision Model version/revision (character)
+#' Retrieve the details of a PEcAn model, based on model_id
+#' @param model_id Model ID (character)
 #' @return Model details
 #' @author Tezan Sahu
-#* @get /
-getModels <- function(model_name="all", revision="all", res){
+#* @get /
+getModel <- function(model_id){

   dbcon <- PEcAn.DB::betyConnect()

-  Models <- tbl(dbcon, "models") %>%
-    select(model_id = id, model_name, revision, modeltype_id)
+  Model <- tbl(dbcon, "models") %>%
+    select(model_id = id, model_name, revision, modeltype_id) %>%
+    filter(model_id == !!model_id)

-  if (model_name != "all"){
-    Models <- Models %>%
-      filter(model_name == !!model_name)
-  }
+  Model <- tbl(dbcon, "modeltypes") %>%
+    select(modeltype_id = id, model_type = name) %>%
+    inner_join(Model, by = "modeltype_id")
+
+  qry_res <- Model %>% collect()

-  if (revision != "all"){
-    Models <- Models %>%
-      filter(revision == !!revision)
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error="Model not found"))
   }
+  else {
+    return(qry_res)
+  }
+}
+
+#########################################################################
+
+#' Search for PEcAn models containing wildcards for filtering
+#' @param model_name Model 
name search string (character) +#' @param revision Model version/revision search string (character) +#' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @return Model subset matching the model search string +#' @author Tezan Sahu +#* @get / +searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ - Models <- tbl(dbcon, "modeltypes") %>% - select(modeltype_id = id, model_type = name) %>% - inner_join(Models, by = "modeltype_id") %>% + dbcon <- PEcAn.DB::betyConnect() + + Models <- tbl(dbcon, "models") %>% + select(model_id = id, model_name, revision) %>% + filter(grepl(!!model_name, model_name, ignore.case=ignore_case)) %>% + filter(grepl(!!revision, revision, ignore.case=ignore_case)) %>% arrange(model_id) qry_res <- Models %>% collect() diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R new file mode 100644 index 00000000000..d98eebe7ff8 --- /dev/null +++ b/apps/api/R/sites.R @@ -0,0 +1,58 @@ +library(dplyr) + +#' Retrieve the details of a PEcAn site, based on site_id +#' @param site_id Site ID (character) +#' @return Site details +#' @author Tezan Sahu +#* @get / +getSite <- function(site_id){ + + dbcon <- PEcAn.DB::betyConnect() + + site <- tbl(dbcon, "sites") %>% + select(-created_at, -updated_at, -user_id, -geometry) %>% + filter(id == !!site_id) + + + qry_res <- site %>% collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Site not found")) + } + else { + return(qry_res) + } +} + +################################################################################################# + +#' Search for PEcAn sites containing wildcards for filtering +#' @param sitename Site name search string (character) +#' @param ignore_case Logical. 
If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @return Site subset matching the site search string +#' @author Tezan Sahu +#* @get / +searchSite <- function(sitename="", ignore_case=TRUE, res){ + dbcon <- PEcAn.DB::betyConnect() + + sites <- tbl(dbcon, "sites") %>% + select(id, sitename) %>% + filter(grepl(!!sitename, sitename, ignore.case=ignore_case)) %>% + arrange(id) + + + qry_res <- sites %>% collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Site(s) not found")) + } + else { + return(list(sites=qry_res, count = nrow(qry_res))) + } +} \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index a5d0ffc73e8..24b0964fb03 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,9 @@ openapi: 3.0.0 servers: - - description: PEcAn Tezan VM - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/912db446 + - description: PEcAn API Server + url: https://pecan-tezan.ncsa.illinois.edu/ + - description: PEcAn Test Server + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/9272c48c - description: Localhost url: http://127.0.0.1:8000 @@ -29,6 +31,8 @@ tags: description: Everything about PEcAn runs - name: models description: Everything about PEcAn models + - name: sites + description: Everything about PEcAn sites ##################################################################################################################### ##################################################### API Endpoints ################################################# @@ -105,28 +109,66 @@ paths: '404': description: Models not found + /api/models/{model_id}: + get: + tags: + - models + - + summary: Details of requested model + parameters: + - in: path + name: model_id + description: Model ID + required: true + schema: + type: string + responses: + '200': + description: Model Details + content: + application/json: + schema: + $ref: '#/components/schemas/Model' + + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Model not found + /api/models/: get: tags: - models - - summary: Details of model(s) + summary: Search for model(s) using search pattern based on model name & revision parameters: - in: query name: model_name - description: Name of the model + description: Search string for model name required: false schema: type: string - in: query name: revision - description: Model version/revision + description: Search string for revision required: false schema: type: string + - in: query + name: ignore_case + description: Indicator of case sensitive or case insensitive search + required: false + schema: + type: string + default: "TRUE" + enum: + - "TRUE" + - "FALSE" responses: '200': - description: Available Models + description: List of sites matching search pattern content: application/json: schema: @@ -136,14 +178,92 @@ paths: type: array items: type: object - $ref: '#/components/schemas/Model' + properties: + model_id: + type: string + model_name: + type: string + revision: + type: string '401': description: Authentication required '403': description: Access forbidden '404': description: Model(s) not found - + + /api/sites/{site_id}: + get: + tags: + - sites + - + summary: Details of a site + parameters: + - in: path + name: site_id + description: PEcAn site ID + required: true + schema: + type: string + responses: + '200': + description: Site Details + content: + application/json: + 
schema: + $ref: '#/components/schemas/Site' + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Site not found + + /api/sites/: + get: + tags: + - sites + - + summary: Search for sites using search pattern based on site name + parameters: + - in: query + name: sitename + description: Search string for site name + required: false + schema: + type: string + - in: query + name: ignore_case + description: Indicator of case sensitive or case insensitive search + required: false + schema: + type: string + default: "TRUE" + enum: + - "TRUE" + - "FALSE" + responses: + '200': + description: List of sites matching search pattern + content: + application/json: + schema: + sites: + type: array + items: + type: object + properties: + id: + type: string + sitename: + type: string + + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Site(s) not found /api/workflows/: get: @@ -223,6 +343,9 @@ paths: application/json: schema: $ref: '#/components/schemas/Workflow' + application/xml: + schema: + $ref: '#/components/schemas/Workflow' responses: '201': description: Submitted workflow successfully @@ -389,7 +512,37 @@ components: type: string finish_time: type: string - + + Site: + properties: + id: + type: string + sitename: + type: string + city: + type: string + state: + type: string + country: + type: string + mat: + type: number + map: + type: number + soil: + type: string + som: + type: number + notes: + type: string + soilnotes: + type: string + greenhouse: + type: string + sand_pct: + type: number + time_zone: + type: string Workflow: properties: id: From 6d4a2c591f5ecd9a6f44fa3beb251302a3f12566 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 10 Jul 2020 17:14:44 -0700 Subject: [PATCH 1218/2289] added instructions for using syncgit.sh --- .../02_git/01_using-git.Rmd | 78 +++++++++++++++---- 1 file changed, 63 insertions(+), 15 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd index 170cf15b7a3..3a25be86f07 100644 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd @@ -73,7 +73,7 @@ The **easiest** approach is to use GitHub's browser based workflow. This is usef ### Recommended Git Workflow -Summary: +Summary: development should occur on a fork of the main repository. 1. Fork 2. Create Branch @@ -99,39 +99,80 @@ There is a script in the scripts folder called `scripts/syncgit.sh` that will ke 1. Introduce yourself to GIT -`git config --global user.name "FULLNAME"` -`git config --global user.email you@yourdomain.example.com` +```sh +git config --global user.name "FULLNAME" +git config --global user.email you@yourdomain.example.com +``` 2. Fork PEcAn on GitHub. Go to the PEcAn source code and click on the Fork button in the upper right. This will create a copy of PEcAn in your personal space. 3. Clone to your local machine via command line -`git clone git@github.com:/pecan.git` +```sh +git clone git@github.com:/pecan.git +``` 4. 
Define `PEcAnProject/pecan` as upstream repository -``` +```sh cd pecan git remote add upstream git@github.com:PecanProject/pecan.git ``` -#### A new branch for each feature or bug fix +##### Hint: Keeping your fork in sync + +If you have used the instructions above, you can use the helper script called [`scripts/syncgit.sh`](https://github.com/PecanProject/pecan/blob/master/scripts/syncgit.sh) to keep the master and develop branches of your own fork in sync with the PEcAnProject/pecan repository. + +After following the above, your .git/config file will include the following: + +``` +... +[remote "origin"] + url = git@github.com:/pecan.git + fetch = +refs/heads/*:refs/remotes/origin/* +[branch "develop"] + remote = origin + merge = refs/heads/develop +[remote "upstream"] + url = git@github.com:PecanProject/pecan.git + fetch = +refs/heads/*:refs/remotes/upstream/* +``` + +Then, you can run: + +```sh +./scripts/syncgit.sh +``` + +Now the master and develop branches on your fork will be up to date. + +#### Using Branching + +Ideally, a developer should create a new branch for each feature or bug fix 1. Make sure you start in the develop branch -`git checkout develop` +```sh +git checkout develop +``` 2. Make sure develop is up to date -`git pull upstream develop` +```sh +git pull upstream develop +``` 3. Run the PEcAn MAKEFILE to compile code from the main directory. -`make` +```sh +make +``` 4. Create a new branch and switch to it -`git checkout -b ` +```sh +git checkout -b +``` 5. Work/commit/etc @@ -149,7 +190,9 @@ make 7. Push this branch to your github space -`git push origin ` +```sh +git push origin +``` 8. submit pull request with [[link commits to issues|Using-Git#link-commits-to-issuess]]; * also see [github documentation](https://help.github.com/articles/using-pull-requests) @@ -158,16 +201,21 @@ make 1. Make sure you start in master -`git checkout develop` +```sh +git checkout develop` +``` 2. delete branch remotely -`git push origin --delete ` +```sh +git push origin --delete ` +``` 3. delete branch locally -`git branch -D ` - +```sh +git branch -D ` +``` #### Link commits to issues From ab125d08eb04a320d6756c6d75c95009c530a53a Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 11 Jul 2020 05:12:29 +0000 Subject: [PATCH 1219/2289] updated docs for models & sites API endpoints --- apps/api/R/models.R | 2 +- .../07_remote_access/01_pecan_api.Rmd | 199 ++++++++++++++++-- 2 files changed, 182 insertions(+), 19 deletions(-) diff --git a/apps/api/R/models.R b/apps/api/R/models.R index b68d55894ab..726bd600f2f 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -58,6 +58,6 @@ searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ return(list(error="Model(s) not found")) } else { - return(list(models=qry_res)) + return(list(models=qry_res, count = nrow(qry_res))) } } \ No newline at end of file diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index b467cf2844d..ef2493cc71f 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -2,7 +2,7 @@ ## Introduction -##### Welcome to the PEcAn Project API Documentation. +__Welcome to the PEcAn Project API Documentation.__ The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. 
PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models. @@ -32,7 +32,12 @@ The currently implemented functionalities include: * `GET /api/status`: Obtain general information about PEcAn & the details of the database host * __Models:__ - * `GET /api/models/`: Retrieve the details of model(s) used by PEcAn + * `GET /api/models/`: Search for model(s) using search pattern based on model name & revision + * `GET /api/models/{model_id}`: Fetch the details of specific model + +* __Sites:__ + * `GET /api/sites/`: Search for site(s) using search pattern based on site name + * `GET /api/sites/{site_id}`: Fetch the details of specific site * __Workflows:__ * `GET /api/workflows/`: Retrieve a list of PEcAn workflows @@ -48,17 +53,17 @@ _* indicates that the particular API is under development & may not be ready for ## Examples: -#### Prerequisites to interact with the PEcAn API Server {.tabset .tabset-pills} +### Prerequisites to interact with the PEcAn API Server {.tabset .tabset-pills} -##### R Packages +#### R Packages * [httr](https://cran.r-project.org/web/packages/httr/index.html) * [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html) -##### Python Packages +#### Python Packages * [requests](https://requests.readthedocs.io/en/master/) * [json](https://docs.python.org/3/library/json.html) -#### {-} +### {-} Following are some example snippets to call the PEcAn API endpoints: @@ -161,26 +166,28 @@ print(json.dumps(response.json(), indent=2)) #### R Snippet ```R -# Get model(s) with `model_name` = SIPNET & `revision` = ssr +# Search model(s) with `model_name` containing "sip" & `revision` containing "ssr" res <- httr::GET( - "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr", + "http://localhost:8000/api/models/?model_name=sip&revision=ssr&ignore_case=TRUE", httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` ## $models -## model_id model_name revision modeltype_id model_type -## 1 1000000022 SIPNET ssr 3 SIPNET -## ... 
+## model_id model_name revision +## 1 1000000022 SIPNET ssr + +## $count +## [1] 1 ``` #### Python Snippet ```python -# Get model(s) with `model_name` = SIPNET & `revision` = ssr +# Search model(s) with `model_name` containing "sip" & `revision` containing "ssr" response = requests.get( - "http://localhost:8000/api/models/?model_name=SIPNET&revision=ssr", + "http://localhost:8000/api/models/?model_name=sip&revision=ssr&ignore_case=TRUE", auth=HTTPBasicAuth('carya', 'illinois') ) print(json.dumps(response.json(), indent=2)) @@ -191,17 +198,173 @@ print(json.dumps(response.json(), indent=2)) ## { ## "model_id": "1000000022", ## "model_name": "SIPNET", -## "revision": "ssr", -## "modeltype_id": 3, -## "model_type": "SIPNET" +## "revision": "ssr" +## } +## ], +## "count": 1 +## } +``` + +### {-} + +### `GET /api/models/{model_id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Fetch the details of PEcAn model with id = 1000000022 +res <- httr::GET( + "http://localhost:8000/api/models/1000000022", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## model_id model_name revision modeltype_id model_type +## 1 1000000022 SIPNET ssr 3 SIPNET +``` + +#### Python Snippet + +```python +# Fetch the details of PEcAn model with id = 1000000022 +response = requests.get( + "http://localhost:8000/api/models/1000000022", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## [ +## { +## "model_id": "1000000022", +## "model_name": "SIPNET", +## "revision": "ssr", +## "modeltype_id": 3, +## "model_type": "SIPNET" +## } +## ] +``` + +### {-} + +### `GET /api/sites/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Search site(s) with `site_name` containing "willow" +res <- httr::GET( + "http://localhost:8000/api/sites/?sitename=willow&ignore_case=TRUE", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $sites +## id sitename +## 1 676 Willow Creek (US-WCr) +## 2 1108 Willow Creek (WC)-Chequamegon National Forest +## 3 1202 Tully_willow +## 4 1223 Saare SRF willow plantation +## 5 1000005151 Willow Creek (US-WCr) + +## $count +## [1] 5 +``` + +#### Python Snippet + +```python +# Search site(s) with `site_name` containing "willow" +response = requests.get( + "http://localhost:8000/api/models/?sitename=willow&ignore_case=TRUE", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "sites": [ +## { +## "id": 676, +## "sitename": "Willow Creek (US-WCr)" ## }, -## ... 
-## ] +## { +## "id": 1108, +## "sitename": "Willow Creek (WC)-Chequamegon National Forest" +## }, +## { +## "id": 1202, +## "sitename": "Tully_willow" +## }, +## { +## "id": 1223, +## "sitename": "Saare SRF willow plantation" +## }, +## { +## "id": 1000005151, +## "sitename": "Willow Creek (US-WCr)" +## } +## ], +## "count": 5 ## } ``` ### {-} +### `GET /api/sites/{site_id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Fetch the details of PEcAn site with id = 676 +res <- httr::GET( + "http://localhost:8000/api/sites/676", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## id city state country mat map soil notes soilnotes sitename +## 1 676 Park Falls Ranger District Wisconsin US 4 815 MF Willow Creek (US-WCr) +## greenhouse sand_pct clay_pct time_zone +## 1 FALSE 42.52 20.17 America/Chicago +``` + +#### Python Snippet + +```python +# Fetch the details of PEcAn site with id = 676 +response = requests.get( + "http://localhost:8000/api/sites/676", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## [ +## { +## "id": 676, +## "city": "Park Falls Ranger District", +## "state": "Wisconsin", +## "country": "US", +## "mat": 4, +## "map": 815, +## "soil": "", +## "notes": "MF", +## "soilnotes": "", +## "sitename": "Willow Creek (US-WCr)", +## "greenhouse": false, +## "sand_pct": 42.52, +## "clay_pct": 20.17, +## "time_zone": "America/Chicago" +## } +## ] +``` + +### {-} + ### `GET /api/workflows/` {.tabset .tabset-pills} #### R Snippet From e28a9c57cbc9a4ea6913fa3d1242937c0e15f31b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 11 Jul 2020 20:06:23 +0530 Subject: [PATCH 1220/2289] correct reference to Remote.Sync.launcher --- .../04_more_web_interface/02_hidden_analyses.Rmd | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd index 0fa96db4c8b..a553c0b5deb 100644 --- a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd +++ b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd @@ -479,11 +479,11 @@ An example of multi-settings pecan xml file also may look like below: ``` ### Running SDA on remote -In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote_Sync_launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote_Sync_launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. +In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote.Sync.launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote.Sync.launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. -`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. 
This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote_Sync_launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. +`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote.Sync.launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. -`Additionally, the Remote_Sync_launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. +`Additionally, the Remote.Sync.launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. Several points on how to prepare your xml settings for the remote SDA run: 1 - In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags. 
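The next patch fixes a latent bug in the `models.R` and `sites.R` handlers added earlier: `getModel` and `getSite` set `res$status <- 404` without declaring `res`, so the assignment errors at request time. A minimal plumber sketch (a hypothetical endpoint, not PEcAn code) of why the response object must appear in the handler signature:

```R
library(plumber)

#* Plumber injects `req` and `res` only when they are declared as arguments;
#* without `res` below, setting the status code would fail with an
#* "object 'res' not found" error.
#* @get /lookup
function(key = "", res) {
  if (!identical(key, "known")) {
    res$status <- 404
    return(list(error = "not found"))
  }
  list(key = key, value = 42)
}
```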
From 80c1a9b99202477932f2023c920301b2dd701854 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 11 Jul 2020 22:26:23 -0500 Subject: [PATCH 1221/2289] 404 bugfix; modified tests for models & sites --- apps/api/R/models.R | 2 +- apps/api/R/sites.R | 2 +- apps/api/tests/test.models.R | 18 +++++++++++++++++- apps/api/tests/test.sites.R | 33 +++++++++++++++++++++++++++++++++ 4 files changed, 52 insertions(+), 3 deletions(-) create mode 100644 apps/api/tests/test.sites.R diff --git a/apps/api/R/models.R b/apps/api/R/models.R index 726bd600f2f..660ccaaa9f1 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -5,7 +5,7 @@ library(dplyr) #' @return Model details #' @author Tezan Sahu #* @get / -getModel <- function(model_id){ +getModel <- function(model_id, res){ dbcon <- PEcAn.DB::betyConnect() diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R index d98eebe7ff8..17343650cb9 100644 --- a/apps/api/R/sites.R +++ b/apps/api/R/sites.R @@ -5,7 +5,7 @@ library(dplyr) #' @return Site details #' @author Tezan Sahu #* @get / -getSite <- function(site_id){ +getSite <- function(site_id, res){ dbcon <- PEcAn.DB::betyConnect() diff --git a/apps/api/tests/test.models.R b/apps/api/tests/test.models.R index 827b8593278..d1be63b006f 100644 --- a/apps/api/tests/test.models.R +++ b/apps/api/tests/test.models.R @@ -1,4 +1,4 @@ -context("Testing the /api/models/ endpoint") +context("Testing all models endpoints") test_that("Calling /api/models/ returns Status 200", { res <- httr::GET( @@ -14,4 +14,20 @@ test_that("Calling /api/models/ with invalid parameters returns Status 404", { httr::authenticate("carya", "illinois") ) expect_equal(res$status, 404) +}) + +test_that("Calling /api/models/{model_id} returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/models/1000000014", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Calling /api/models/{model_id} with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/models/1", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) }) \ No newline at end of file diff --git a/apps/api/tests/test.sites.R b/apps/api/tests/test.sites.R new file mode 100644 index 00000000000..a636390ab28 --- /dev/null +++ b/apps/api/tests/test.sites.R @@ -0,0 +1,33 @@ +context("Testing all sites endpoints") + +test_that("Calling /api/sites/ returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/sites/?sitename=washington", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Calling /api/sites/ with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/sites/?sitename=random", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) +}) + +test_that("Calling /api/sites/{site_id} returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/sites/676", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Calling /api/sites/{site_id} with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/sites/0", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) +}) \ No newline at end of file From 95fea903f36e1f3432bf122827c47d5405ae178e Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 13 Jul 2020 16:38:15 -0500 Subject: [PATCH 1222/2289] send/receive message to rabbitmq rest interface --- base/remote/NAMESPACE | 2 + 
 base/remote/R/rabbitmq.R                 | 179 +++++++++++++++++++++
 base/remote/man/rabbitmq_get.Rd          |  30 ++++
 base/remote/man/rabbitmq_parse_uri.Rd    |  24 +++
 base/remote/man/rabbitmq_post.Rd         |  32 ++++
 base/remote/man/rabbitmq_send_message.Rd |  26 ++++
 6 files changed, 293 insertions(+)
 create mode 100644 base/remote/R/rabbitmq.R
 create mode 100644 base/remote/man/rabbitmq_get.Rd
 create mode 100644 base/remote/man/rabbitmq_parse_uri.Rd
 create mode 100644 base/remote/man/rabbitmq_post.Rd
 create mode 100644 base/remote/man/rabbitmq_send_message.Rd

diff --git a/base/remote/NAMESPACE b/base/remote/NAMESPACE
index c91a6b220e9..a6361fb0465 100644
--- a/base/remote/NAMESPACE
+++ b/base/remote/NAMESPACE
@@ -7,6 +7,8 @@ export(kill.tunnel)
 export(open_tunnel)
 export(qsub_get_jobid)
 export(qsub_run_finished)
+export(rabbitmq_get)
+export(rabbitmq_post)
 export(remote.copy.from)
 export(remote.copy.to)
 export(remote.copy.update)
diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R
new file mode 100644
index 00000000000..e98dece3700
--- /dev/null
+++ b/base/remote/R/rabbitmq.R
@@ -0,0 +1,179 @@
+#' parse the RabbitMQ URI. This will parse the uri into smaller pieces that can
+#' be used to talk to the rest endpoint for RabbitMQ.
+#'
+#' @param uri the amqp URI
+#' @param prefix the prefix that the RabbitMQ management interface uses
+#' @param port the port for the rabbitmq management interface
+#' @return a list that contains the url to the management interface, username,
+#' password and vhost.
+rabbitmq_parse_uri <- function(uri, prefix="", port=15672) {
+  # save username/password
+  if (!grepl("@", uri, fixed = TRUE)) {
+    print("rabbitmq uri is not recognized, missing username and password, assuming guest/guest")
+    upw <- c("guest", "guest")
+  } else {
+    upw <- strsplit(sub(".*://([^@]*).*", "\\1", uri), ":")[[1]]
+    if (length(upw) != 2) {
+      print("rabbitmq uri is not recognized, missing username or password")
+      return(NA)
+    }
+  }
+
+  # split uri and check scheme
+  url_split <- urltools::url_parse(uri)
+  if (!startsWith(url_split$scheme, "amqp")) {
+    print("rabbitmq uri is not recognized, invalid scheme (need amqp(s) or http(s))")
+    return(NA)
+  }
+
+  # convert uri to rabbitmq rest/management call
+  url_split["scheme"] <- sub("amqp", "http", url_split["scheme"])
+  url_split["port"] <- port
+  vhost <- url_split["path"]
+  prefix <- sub("^/+", "", prefix)
+  if (prefix == "") {
+    url_split["path"] <- ""
+  } else if (endsWith(prefix, "/")) {
+    url_split["path"] <- prefix
+  } else {
+    url_split["path"] <- paste0(prefix, "/")
+  }
+
+  url <- urltools::url_compose(url_split)
+
+  return(list(url=url, vhost=vhost, username=upw[[1]], password=upw[[2]]))
+}
+
+#' Send a message to RabbitMQ rest API. It will check the resulting status code
+#' and print a message in case something goes wrong.
+#'
+#' @param url the full endpoint rest url
+#' @param auth authentication for rabbitmq in httr:auth
+#' @param body the actual body to send, this is a rabbitmq message.
+#' @param action the rest action to perform
+#' @return will return NA if message failed, otherwise it will either
+#' return the resulting message, or if not available an empty string "".
+rabbitmq_send_message <- function(url, auth, body, action="POST") {
+  if (action == "GET") {
+    result <- httr::GET(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE))
+  } else if (action == "PUT") {
+    result <- httr::PUT(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE))
+  } else if (action == "DELETE") {
+    result <- httr::DELETE(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE))
+  } else if (action == "POST") {
+    result <- httr::POST(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE))
+  } else {
+    print(paste("error sending message to rabbitmq, unknown action", action))
+    return(NA)
+  }
+
+  if (result$status_code >= 200 && result$status_code < 300) {
+    content <- httr::content(result)
+    if (length(content) == 0) {
+      return("")
+    } else {
+      return(content)
+    }
+  } else if (result$status_code == 401) {
+    print("error sending message to rabbitmq, make sure username/password is correct")
+    return(NA)
+  } else {
+    output <<- httr::content(result)
+    if ("reason" %in% names(output)) {
+      print(paste("error sending message to rabbitmq,", output$reason))
+    } else {
+      print("error sending message to rabbitmq")
+    }
+    return(NA)
+  }
+}
+
+#' Post message to RabbitMQ. This will submit a message to RabbitMQ, if the
+#' queue does not exist it will be created. The message will be converted to
+#' a json message that is submitted.
+#'
+#' @param uri RabbitMQ URI or URL to rest endpoint
+#' @param queue the queue the message is submitted to
+#' @param message the message to submit, will be converted to json.
+#' @param prefix prefix for the rabbitmq api endpoint, default is for no prefix.
+#' @param port port for the management interface, the default is 15672.
+#' @return the result of the post if message was sent, or NA if it failed.
+#' @author Alexey Shiklomanov, Rob Kooper
+#' @export
+rabbitmq_post <- function(uri, queue, message, prefix="", port=15672) {
+  # parse rabbitmq URI
+  rabbitmq <- rabbitmq_parse_uri(uri, prefix, port)
+  if (length(rabbitmq) != 4) {
+    return(NA)
+  }
+
+  # create authentication
+  auth <- httr::authenticate(rabbitmq$username, rabbitmq$password)
+
+  # create message to be sent to create the queue
+  body <- list(
+    auto_delete = FALSE,
+    durable = FALSE
+  )
+  url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue)
+  if (is.na(rabbitmq_send_message(url, auth, body, "PUT"))) {
+    return(NA)
+  }
+
+  # send actual message to queue
+  body <- list(
+    properties = list(delivery_mode = 2),
+    routing_key = queue,
+    payload = jsonlite::toJSON(message, auto_unbox = TRUE),
+    payload_encoding = "string"
+  )
+  url <- paste0(rabbitmq$url, "api/exchanges/", rabbitmq$vhost, "//publish")
+  return(rabbitmq_send_message(url, auth, body, "POST"))
+}
+
+#' Get message from RabbitMQ. This will get a message from RabbitMQ, if the
+#' queue does not exist it will be created. The message will be converted to
+#' a json message that is returned.
+#'
+#' @param uri RabbitMQ URI or URL to rest endpoint
+#' @param queue the queue the message is received from.
+#' @param prefix prefix for the rabbitmq api endpoint, default is for no prefix.
+#' @param port port for the management interface, the default is 15672.
+#' @return NA if no message was retrieved, or a list of the messages payload.
+#' @author Alexey Shiklomanov, Rob Kooper
+#' @export
+rabbitmq_get <- function(uri, queue, count=1, prefix="", port=15672) {
+  # parse rabbitmq URI
+  rabbitmq <- rabbitmq_parse_uri(uri, prefix, port)
+  if (length(rabbitmq) != 4) {
+    return(NA)
+  }
+
+  # create authentication
+  auth <- httr::authenticate(rabbitmq$username, rabbitmq$password)
+
+  # create message to be sent to create the queue
+  body <- list(
+    auto_delete = FALSE,
+    durable = FALSE
+  )
+  url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue)
+  if (is.na(rabbitmq_send_message(url, auth, body, "PUT"))) {
+    return(NA)
+  }
+
+  # get actual message from queue
+  body <- list(
+    count = count,
+    ackmode = "ack_requeue_false",
+    encoding = "auto"
+  )
+  url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue, "/get")
+  result <<- rabbitmq_send_message(url, auth, body, "POST")
+  if (length(result) == 1 && is.na(result)) {
+    return(NA)
+  } else {
+    return(lapply(result, function(x) { jsonlite::fromJSON(x$payload) }))
+  }
+}
+
diff --git a/base/remote/man/rabbitmq_get.Rd b/base/remote/man/rabbitmq_get.Rd
new file mode 100644
index 00000000000..91680d36e7b
--- /dev/null
+++ b/base/remote/man/rabbitmq_get.Rd
@@ -0,0 +1,30 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/rabbitmq.R
+\name{rabbitmq_get}
+\alias{rabbitmq_get}
+\title{Get message from RabbitMQ. This will get a message from RabbitMQ, if the
+queue does not exist it will be created. The message will be converted to
+a json message that is returned.}
+\usage{
+rabbitmq_get(uri, queue, count = 1, prefix = "", port = 15672)
+}
+\arguments{
+\item{uri}{RabbitMQ URI or URL to rest endpoint}
+
+\item{queue}{the queue the message is received from.}
+
+\item{prefix}{prefix for the rabbitmq api endpoint, default is for no prefix.}
+
+\item{port}{port for the management interface, the default is 15672.}
+}
+\value{
+NA if no message was retrieved, or a list of the messages payload.
+}
+\description{
+Get message from RabbitMQ. This will get a message from RabbitMQ, if the
+queue does not exist it will be created. The message will be converted to
+a json message that is returned.
+}
+\author{
+Alexey Shiklomanov, Rob Kooper
+}
diff --git a/base/remote/man/rabbitmq_parse_uri.Rd b/base/remote/man/rabbitmq_parse_uri.Rd
new file mode 100644
index 00000000000..9505e11745e
--- /dev/null
+++ b/base/remote/man/rabbitmq_parse_uri.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/rabbitmq.R
+\name{rabbitmq_parse_uri}
+\alias{rabbitmq_parse_uri}
+\title{parse the RabbitMQ URI. This will parse the uri into smaller pieces that can
+be used to talk to the rest endpoint for RabbitMQ.}
+\usage{
+rabbitmq_parse_uri(uri, prefix = "", port = 15672)
+}
+\arguments{
+\item{uri}{the amqp URI}
+
+\item{prefix}{the prefix that the RabbitMQ management interface uses}
+
+\item{port}{the port for the rabbitmq management interface}
+}
+\value{
+a list that contains the url to the management interface, username,
+password and vhost.
+}
+\description{
+parse the RabbitMQ URI. This will parse the uri into smaller pieces that can
+be used to talk to the rest endpoint for RabbitMQ.
+}
diff --git a/base/remote/man/rabbitmq_post.Rd b/base/remote/man/rabbitmq_post.Rd
new file mode 100644
index 00000000000..bcb368f33a8
--- /dev/null
+++ b/base/remote/man/rabbitmq_post.Rd
@@ -0,0 +1,32 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/rabbitmq.R
+\name{rabbitmq_post}
+\alias{rabbitmq_post}
+\title{Post message to RabbitMQ. This will submit a message to RabbitMQ, if the
+queue does not exist it will be created. The message will be converted to
+a json message that is submitted.}
+\usage{
+rabbitmq_post(uri, queue, message, prefix = "", port = 15672)
+}
+\arguments{
+\item{uri}{RabbitMQ URI or URL to rest endpoint}
+
+\item{queue}{the queue the message is submitted to}
+
+\item{message}{the message to submit, will be converted to json.}
+
+\item{prefix}{prefix for the rabbitmq api endpoint, default is for no prefix.}
+
+\item{port}{port for the management interface, the default is 15672.}
+}
+\value{
+the result of the post if message was sent, or NA if it failed.
+}
+\description{
+Post message to RabbitMQ. This will submit a message to RabbitMQ, if the
+queue does not exist it will be created. The message will be converted to
+a json message that is submitted.
+}
+\author{
+Alexey Shiklomanov, Rob Kooper
+}
diff --git a/base/remote/man/rabbitmq_send_message.Rd b/base/remote/man/rabbitmq_send_message.Rd
new file mode 100644
index 00000000000..0af1a899bfd
--- /dev/null
+++ b/base/remote/man/rabbitmq_send_message.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/rabbitmq.R
+\name{rabbitmq_send_message}
+\alias{rabbitmq_send_message}
+\title{Send a message to RabbitMQ rest API. It will check the resulting status code
+and print a message in case something goes wrong.}
+\usage{
+rabbitmq_send_message(url, auth, body, action = "POST")
+}
+\arguments{
+\item{url}{the full endpoint rest url}
+
+\item{auth}{authentication for rabbitmq in httr:auth}
+
+\item{body}{the actual body to send, this is a rabbitmq message.}
+
+\item{action}{the rest action to perform}
+}
+\value{
+will return NA if message failed, otherwise it will either
+return the resulting message, or if not available an empty string "".
+}
+\description{
+Send a message to RabbitMQ rest API. It will check the resulting status code
+and print a message in case something goes wrong.
+}

From 31b6259c5147beefd1e8a3ff8d458160171eac6b Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 13 Jul 2020 16:40:17 -0500
Subject: [PATCH 1223/2289] update CHANGELOG

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1492790aeb2..e34d9227904 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -46,6 +46,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha

 ### Added

+- Functions to send/receive messages to/from rabbitmq.
 - Documentation in [DEV-INTRO.md](DEV-INTRO.md) on development in a docker environment (#2553)
 - PEcAn API that can be used to talk to PEcAn servers. Endpoints to GET the details about the server that user is talking to, PEcAn models, workflows & runs. Authentication enabled. (#2631)
 - New versioned ED2IN template: ED2IN.2.2.0 (#2143) (replaces ED2IN.git)
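The two exported helpers added in PATCH 1222 talk to RabbitMQ's management REST API rather than AMQP directly. A minimal usage sketch; the broker URL, vhost (`%2F`), queue name, and message fields below are illustrative assumptions, not values mandated by the package:

```R
library(PEcAn.remote)

# assumed local broker with the default guest account and vhost "/"
uri <- "amqp://guest:guest@localhost/%2F"

# publish a message; the queue is created first if it does not exist yet
rabbitmq_post(uri, queue = "pecan",
              message = list(folder = "/data/workflows/PEcAn_99"))  # hypothetical payload

# pull (up to) one message back; JSON payloads are parsed into lists
msg <- rabbitmq_get(uri, queue = "pecan", count = 1)
```

Both helpers return `NA` on failure, so callers can check the result with `is.na()` before using it.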
From ddcfa869c9a3dc7898854b416abe54f7700f731a Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 13 Jul 2020 22:04:40 -0500
Subject: [PATCH 1224/2289] use PEcAn.logger and fix decode of just string

---
 base/remote/R/rabbitmq.R | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R
index e98dece3700..4e546052cd7 100644
--- a/base/remote/R/rabbitmq.R
+++ b/base/remote/R/rabbitmq.R
@@ -9,12 +9,12 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) {
   # save username/password
   if (!grepl("@", uri, fixed = TRUE)) {
-    print("rabbitmq uri is not recognized, missing username and password, assuming guest/guest")
+    PEcAn.logger::logger.info("rabbitmq uri is not recognized, missing username and password, assuming guest/guest")
     upw <- c("guest", "guest")
   } else {
     upw <- strsplit(sub(".*://([^@]*).*", "\\1", uri), ":")[[1]]
     if (length(upw) != 2) {
-      print("rabbitmq uri is not recognized, missing username or password")
+      PEcAn.logger::logger.error("rabbitmq uri is not recognized, missing username or password")
       return(NA)
     }
   }
@@ -22,7 +22,7 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) {
   # split uri and check scheme
   url_split <- urltools::url_parse(uri)
   if (!startsWith(url_split$scheme, "amqp")) {
-    print("rabbitmq uri is not recognized, invalid scheme (need amqp(s) or http(s))")
+    PEcAn.logger::logger.error("rabbitmq uri is not recognized, invalid scheme (need amqp(s) or http(s))")
     return(NA)
   }
@@ -63,7 +63,7 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") {
-    print(paste("error sending message to rabbitmq, unknown action", action))
+    PEcAn.logger::logger.error(paste("error sending message to rabbitmq, unknown action", action))
     return(NA)
   }
@@ -75,14 +75,14 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") {
   } else if (result$status_code == 401) {
-    print("error sending message to rabbitmq, make sure username/password is correct")
+    PEcAn.logger::logger.error("error sending message to rabbitmq, make sure username/password is correct")
     return(NA)
   } else {
     output <<- httr::content(result)
     if ("reason" %in% names(output)) {
-      print(paste("error sending message to rabbitmq,", output$reason))
+      PEcAn.logger::logger.error(paste("error sending message to rabbitmq,", output$reason))
     } else {
-      print("error sending message to rabbitmq")
+      PEcAn.logger::logger.error("error sending message to rabbitmq")
     }
     return(NA)
   }
@@ -148,10 +148,10 @@ rabbitmq_get <- function(uri, queue, count=1, prefix="", port=15672) {
   if (length(rabbitmq) != 4) {
     return(NA)
   }
-
+
   # create authentication
   auth <- httr::authenticate(rabbitmq$username, rabbitmq$password)
-
+
   # create message to be sent to create the queue
   body <- list(
     auto_delete = FALSE,
     durable = FALSE
   )
   url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue)
   if (is.na(rabbitmq_send_message(url, auth, body, "PUT"))) {
     return(NA)
   }
-
+
   # get actual message from queue
   body <- list(
     count = count,
     ackmode = "ack_requeue_false",
     encoding = "auto"
   )
   url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue, "/get")
-  result <<- rabbitmq_send_message(url, auth, body, "POST")
+  result <- rabbitmq_send_message(url, auth, body, "POST")
   if (length(result) == 1 && is.na(result)) {
     return(NA)
   } else {
-    return(lapply(result, function(x) { jsonlite::fromJSON(x$payload) }))
+    return(lapply(result, function(x) { tryCatch(jsonlite::fromJSON(x$payload), error=function(e) { x$payload }) }))
   }
 }

From 862c1ffe645acd4209e0552ec9f5f709f59b00d1 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 14 Jul 2020 08:57:05 -0500
Subject: [PATCH 1225/2289] update DESCRIPTION, remove debug

---
 base/remote/DESCRIPTION  |  5 ++++-
 base/remote/R/rabbitmq.R | 22 +++++++++++-----------
 2 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/base/remote/DESCRIPTION b/base/remote/DESCRIPTION
index f7a1174e619..d39d6616c5b 100644
--- a/base/remote/DESCRIPTION
+++ b/base/remote/DESCRIPTION
@@ -12,7 +12,10 @@ Maintainer: Alexey Shiklomanov
 Description: This package contains utilities for communicating with and executing code on local and remote hosts.
     In particular, it has PEcAn-specific utilities for starting ecosystem model runs.
 Imports:
-    PEcAn.logger
+    PEcAn.logger,
+    httr,
+    jsonlite,
+    urltools
 Suggests:
     testthat,
     tools,
diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R
index 4e546052cd7..0f5753214dc 100644
--- a/base/remote/R/rabbitmq.R
+++ b/base/remote/R/rabbitmq.R
@@ -1,6 +1,6 @@
 #' parse the RabbitMQ URI. This will parse the uri into smaller pieces that can
 #' be used to talk to the rest endpoint for RabbitMQ.
-#'
+#'
 #' @param uri the amqp URI
@@ -18,14 +18,14 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) {
       return(NA)
     }
   }
-
+
   # split uri and check scheme
   url_split <- urltools::url_parse(uri)
   if (!startsWith(url_split$scheme, "amqp")) {
     PEcAn.logger::logger.error("rabbitmq uri is not recognized, invalid scheme (need amqp(s) or http(s))")
     return(NA)
   }
-
+
   # convert uri to rabbitmq rest/management call
   url_split["scheme"] <- sub("amqp", "http", url_split["scheme"])
   url_split["port"] <- port
@@ -38,7 +38,7 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) {
   } else {
     url_split["path"] <- paste0(prefix, "/")
   }
-
+
   url <- urltools::url_compose(url_split)

   return(list(url=url, vhost=vhost, username=upw[[1]], password=upw[[2]]))
@@ -46,7 +46,7 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) {

 #' Send a message to RabbitMQ rest API. It will check the resulting status code
 #' and print a message in case something goes wrong.
-#'
+#'
 #' @param url the full endpoint rest url
 #' @param auth authentication for rabbitmq in httr:auth
 #' @param body the actual body to send, this is a rabbitmq message.
@@ -66,7 +66,7 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") {
     PEcAn.logger::logger.error(paste("error sending message to rabbitmq, unknown action", action))
     return(NA)
   }
-
+
   if (result$status_code >= 200 && result$status_code < 300) {
     content <- httr::content(result)
@@ -78,7 +78,7 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") {
     PEcAn.logger::logger.error("error sending message to rabbitmq, make sure username/password is correct")
     return(NA)
   } else {
-    output <- httr::content(result)
+    output <- httr::content(result)
     if ("reason" %in% names(output)) {
@@ -148,10 +148,10 @@ rabbitmq_get <- function(uri, queue, count=1, prefix="", port=15672) {
   if (length(rabbitmq) != 4) {
     return(NA)
   }
-
+
   # create authentication
   auth <- httr::authenticate(rabbitmq$username, rabbitmq$password)
-
+
   # create message to be sent to create the queue
   body <- list(
     auto_delete = FALSE,
@@ -161,7 +161,7 @@ rabbitmq_get <- function(uri, queue, count=1, prefix="", port=15672) {
     encoding = "auto"
   )
   url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue, "/get")
-  result <<- rabbitmq_send_message(url, auth, body, "POST")
+  result <- rabbitmq_send_message(url, auth, body, "POST")
   if (length(result) == 1 && is.na(result)) {
     return(NA)
   } else {

From 0c92cbee705aacf7baa27921119dd42be12c128a Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 14 Jul 2020 10:04:00 -0500
Subject: [PATCH 1226/2289] update documentation

---
 base/remote/R/rabbitmq.R        | 2 +-
 base/remote/man/rabbitmq_get.Rd | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R
index 0f5753214dc..653c399aa3a 100644
--- a/base/remote/R/rabbitmq.R
+++ b/base/remote/R/rabbitmq.R
@@ -137,7 +137,7 @@
 #'
 #' @param uri RabbitMQ URI or URL to rest endpoint
 #' @param queue the queue the message is received from.
-#' @param prefix prefix for the rabbitmq api endpoint, default is for no prefix.
+#' @param count the number of messages to retrieve from the queue.
 #' @param port port for the management interface, the default is 15672.
 #' @return NA if no message was retrieved, or a list of the messages payload.
 #' @author Alexey Shiklomanov, Rob Kooper
diff --git a/base/remote/man/rabbitmq_get.Rd b/base/remote/man/rabbitmq_get.Rd
index 91680d36e7b..a5693bcf489 100644
--- a/base/remote/man/rabbitmq_get.Rd
+++ b/base/remote/man/rabbitmq_get.Rd
@@ -13,7 +13,7 @@ rabbitmq_get(uri, queue, count = 1, prefix = "", port = 15672)

 \item{queue}{the queue the message is received from.}

-\item{prefix}{prefix for the rabbitmq api endpoint, default is for no prefix.}
+\item{count}{the number of messages to retrieve from the queue.}

 \item{port}{port for the management interface, the default is 15672.}
 }
 \value{

From 5fee2a8d918c17439f2bf9b92c8431d5ac92df8e Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 14 Jul 2020 10:24:15 -0500
Subject: [PATCH 1227/2289] fix documentation

---
 base/remote/R/rabbitmq.R        | 1 +
 base/remote/man/rabbitmq_get.Rd | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R
index 653c399aa3a..7bcb1ca8392 100644
--- a/base/remote/R/rabbitmq.R
+++ b/base/remote/R/rabbitmq.R
@@ -138,6 +138,7 @@ rabbitmq_post <- function(uri, queue, message, prefix="", port=15672) {
 #' @param uri RabbitMQ URI or URL to rest endpoint
 #' @param queue the queue the message is received from.
 #' @param count the number of messages to retrieve from the queue.
+#' @param prefix prefix for the rabbitmq api endpoint, default is for no prefix.
 #' @param port port for the management interface, the default is 15672.
 #' @return NA if no message was retrieved, or a list of the messages payload.
 #' @author Alexey Shiklomanov, Rob Kooper
diff --git a/base/remote/man/rabbitmq_get.Rd b/base/remote/man/rabbitmq_get.Rd
index a5693bcf489..abbb750ba22 100644
--- a/base/remote/man/rabbitmq_get.Rd
+++ b/base/remote/man/rabbitmq_get.Rd
@@ -15,6 +15,8 @@ rabbitmq_get(uri, queue, count = 1, prefix = "", port = 15672)

 \item{count}{the number of messages to retrieve from the queue.}

+\item{prefix}{prefix for the rabbitmq api endpoint, default is for no prefix.}
+
 \item{port}{port for the management interface, the default is 15672.}
 }
 \value{

From 89ca2679950099fd389d823ccce2738d001419b0 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 14 Jul 2020 11:11:41 -0500
Subject: [PATCH 1228/2289] missing dependency

---
 docker/depends/pecan.depends | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends
index 6228573ce78..57b50c4d3b3 100644
--- a/docker/depends/pecan.depends
+++ b/docker/depends/pecan.depends
@@ -115,6 +115,7 @@ install2.r -e -s -l "${RLIB}" -n -1\
     TruncatedNormal \
     truncnorm \
     udunits2 \
+    urltools \
     utils \
     XML \
     xtable \

From 3a33940cd1184790b603ff1ea12ba7804ad3fc21 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 16 Jul 2020 11:02:03 +0530
Subject: [PATCH 1229/2289] added appeears2pecan function

---
 modules/data.remote/inst/appeears2pecan.py | 206 +++++++++++++++++
 1 file changed, 206 insertions(+)
 create mode 100644 modules/data.remote/inst/appeears2pecan.py

diff --git a/modules/data.remote/inst/appeears2pecan.py b/modules/data.remote/inst/appeears2pecan.py
new file mode 100644
index 00000000000..4a848bdcfd9
--- /dev/null
+++ b/modules/data.remote/inst/appeears2pecan.py
@@ -0,0 +1,206 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+appeears2pecan downloads remote sensing data using the AppEEARS API
+
+AppEEARS API documentation: https://lpdaacsvc.cr.usgs.gov/appeears/api/?language=Python%203#introduction
+
+Requires Python3
+
+Author(s): Ayush Prasad
+"""
+
+import requests as r
+import geopandas as gpd
+import getpass
+import time
+import os
+import cgi
+import json
+from gee_utils import get_sitename
+from datetime import datetime
+
+
+def appeears2pecan(geofile, outdir, start, end, product, projection=None):
+    """
+    Downloads remote sensing data from AppEEARS
+
+    Parameters
+    ----------
+    geofile (str) -- path to the GeoJSON file containing the name and coordinates of AOI
+
+    outdir (str) -- path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+
+    start (str) -- starting date of the data request in the form YYYY-MM-DD
+
+    end (str) -- ending date of the data request in the form YYYY-MM-DD
+
+    product (str) -- product name followed by "." and the product version, e.g. "SPL3SMP_E.003"
+
+    projection (str) -- type of projection, only required for polygon AOI type. None by default
+
+    Returns
+    -------
+    Nothing:
+            Output files are saved in the specified directory.
+            Output file is of netCDF type when AOI is a Polygon and csv type when AOI is a Point.
+    """
+
+    # API url
+    api = "https://lpdaacsvc.cr.usgs.gov/appeears/api/"
+
+    def authenticate():
+        """
+        uses the user provided NASA Earthdata credentials to request a token.
+
+        Returns
+        -------
+        head (dict) : header containing the authentication token
+        """
+        try:
+            # if the user does not want to enter their credentials every time they use this function, they need to store their username and password in a JSON file, preferably not in a git initialized directory
+            with open("path/to/cred.json", "r") as f:
+                cred = json.load(f)
+            user = cred["username"]
+            password = cred["password"]
+        except:
+            # if user does not want to store the credentials
+            user = getpass.getpass(prompt="Enter NASA Earthdata Login Username: ")
+            password = getpass.getpass(prompt="Enter NASA Earthdata Login Password: ")
+        # use the credentials to call log in service.
Use request's HTTP Basic Auth to do the authentication + response = r.post("{}login".format(api), auth=(user, password)) + # delete the user and password variables as they are no longer needed + del user, password + # raise an exception if the POST request returned an unsuccessful status code + response.raise_for_status() + token_response = response.json() + # extract the token + token = token_response["token"] + head = {"Authorization": "Bearer {}".format(token)} + return head + + head = authenticate() + + # query the available layers for the product and store it in a layer + product tuple + lst_response = r.get("{}product/{}".format(api, product)).json() + l = list(lst_response.keys()) + layers = [(product, each_l) for each_l in l] + prodLayer = [({"layer": l[1], "product": l[0]}) for l in layers] + + # special case to handle SMAP products + # SMAP products individually have more than 40 layers all of which are not allowed by the API to be downloaded in a single request + # if the requested product is one of the SMAP products then select the first 25 layers + if product in [ + "SPL3FTP.002", + "SPL3SMP.006", + "SPL3SMP_E.003", + "SPL4CMDL.004", + "SPL4SMGP.004", + ]: + # chabge this part to select your own SMAP layers + prodLayer = prodLayer[0:25] + + site_name = get_sitename(geofile) + + task_name = site_name + product + + # convert start date to MM-DD-YY format as needed by the API + start = datetime.strptime(start, "%Y-%m-%d") + start = datetime.strftime(start, "%m-%d-%Y") + # convert end date to MM-DD-YY format + end = datetime.strptime(end, "%Y-%m-%d") + end = datetime.strftime(end, "%m-%d-%Y") + + # read in the GeoJSON file containing name and coordinates of the AOI + df = gpd.read_file(geofile) + + if (df.geometry.type == "Point").bool(): + # extract coordinates + lon = float(df.geometry.x) + lat = float(df.geometry.y) + coordinates = [{"longitude": lon, "latitude": lat,}] + # compile the JSON task request + task = { + "task_type": "point", + "task_name": task_name, + "params": { + "dates": [{"startDate": start, "endDate": end}], + "layers": prodLayer, + "coordinates": coordinates, + }, + } + + elif (df.geometry.type == "Polygon").bool(): + # query the projections + projections = r.get("{}spatial/proj".format(api)).json() + projs = {} + for p in projections: + projs[p["Name"]] = p + # select the projection which user requested + proj = projs[projection]["Name"] + # extract the coordinates from the dataframe and convert it to JSON + geo = df[df["name"] == site_name].to_json() + geo = json.loads(geo) + # compile the JSON task request + task = { + "task_type": "area", + "task_name": task_name, + "params": { + "dates": [{"startDate": start, "endDate": end}], + "layers": prodLayer, + "output": {"format": {"type": "netcdf4"}, "projection": proj}, + "geo": geo, + }, + } + + else: + # if the input geometry is not of Polygon or Point Type + raise ValueError("geometry type not supported") + + # submit the task request + task_response = r.post("{}task".format(api), json=task, headers=head).json() + + # limit response to 2 + params = { + "limit": 2, + "pretty": True, + } + + # retrieve task response and extract task id + tasks_response = r.get("{}task".format(api), params=params, headers=head).json() + task_id = task_response["task_id"] + + # wait_time (float) : time (sceconds) after which task status is checked + wait_time = 60.0 + + # check the task status as per the wait_time specified + starttime = time.time() + while ( + r.get("{}task/{}".format(api, task_id), 
headers=head).json()["status"] != "done" + ): + print(r.get("{}task/{}".format(api, task_id), headers=head).json()["status"]) + time.sleep(wait_time - ((time.time() - starttime) % wait_time)) + print(r.get("{}task/{}".format(api, task_id), headers=head).json()["status"]) + + # if specified output directory does not exist create it + if not os.path.exists(outdir): + os.makedirs(outdir, exist_ok=True) + + # query the created files using the bundle + bundle = r.get("{}bundle/{}".format(api, task_id)).json() + + # use the contents of the bundle to store file name and id in a dictionary + files = {} + for f in bundle["files"]: + files[f["file_id"]] = f["file_name"] + # download and save the files + for f in files: + dl = r.get("{}bundle/{}/{}".format(api, task_id, f), stream=True) + filename = os.path.basename( + cgi.parse_header(dl.headers["Content-Disposition"])[1]["filename"] + ) + filepath = os.path.join(outdir, filename) + with open(filepath, "wb") as f: + for data in dl.iter_content(chunk_size=8192): + f.write(data) From 189f545ded51b2a5aaf4da6ebed2ee57ee966c69 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 11:03:14 +0530 Subject: [PATCH 1230/2289] update remote_process --- modules/data.remote/inst/remote_process.py | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index 3a6794f3442..c3e442c433d 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -22,9 +22,10 @@ def remote_process( source, collection, scale=None, + projection=None, qc=None, algorithm=None, - output={"get_data": "bands", "process_data": "lai"}, + process_data=None, stage={"get_data": True, "process_data": True}, ): @@ -41,17 +42,19 @@ def remote_process( end (str) -- ending date area of the data request in the form YYYY-MM-DD - source (str) -- source from where data is to be downloaded, e.g. "gee" or "appEEARS" etc. Currently only "gee" implemented + source (str) -- source from where data is to be downloaded, e.g. "gee" or "appeears" - collection (str) -- dataset or product name as it is provided on the source, e.g. "LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR" for gee + collection (str) -- dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears scale (int) -- pixel resolution, None by default, recommended to use 10 for Sentinel 2 + projection (str) -- type of projection. Only required for appeears polygon AOI type. None by default. 
+ qc (float) -- quality control parameter, only required for gee queries, None by default algorithm (str) -- algorithm used for processing data in process_data(), currently only SNAP is implemented to estimate LAI from Sentinel-2 bands, None by default - output (dict) -- "get_data" - the type of output variable requested from get_data module, "process_data" - the type of output variable requested from process_data module + process_data (str) -- the type of output variable requested from process_data module stage (dict) -- temporary argument to imitate database checks @@ -66,10 +69,12 @@ def remote_process( aoi_name = get_sitename(geofile) if stage["get_data"]: - get_remote_data(geofile, outdir, start, end, source, collection, scale, qc) + get_remote_data( + geofile, outdir, start, end, source, collection, scale, projection, qc + ) if stage["process_data"]: - process_remote_data(aoi_name, output, outdir, algorithm) + process_remote_data(aoi_name, process_data, outdir, algorithm) if __name__ == "__main__": @@ -83,6 +88,6 @@ def remote_process( scale=10, qc=1, algorithm="snap", - output={"get_data": "bands", "process_data": "lai"}, + process_data="lai", stage={"get_data": True, "process_data": True}, ) From ee91f769d003effb83bf2dc4db1a92c9c1356d7e Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 11:04:14 +0530 Subject: [PATCH 1231/2289] integrate appeears with get_remote_data --- modules/data.remote/inst/get_remote_data.py | 24 +++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index 2782d7ddb3a..cef5dcd7671 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -10,17 +10,28 @@ """ from importlib import import_module +from appeears2pecan import appeears2pecan # dictionary used to map the GEE image collection id to PEcAn specific function name collection_dict = { "LANDSAT/LC08/C01/T1_SR": "l8", "COPERNICUS/S2_SR": "s2", "NASA_USDA/HSL/SMAP_soil_moisture": "smap", - # "insert GEE collection id": "insert PEcAn specific name", + # "insert GEE collection id": "insert PEcAn specific name", } -def get_remote_data(geofile, outdir, start, end, source, collection, scale=None, qc=None): +def get_remote_data( + geofile, + outdir, + start, + end, + source, + collection, + scale=None, + projection=None, + qc=None, +): """ uses GEE and AppEEARS functions to download data @@ -36,9 +47,11 @@ def get_remote_data(geofile, outdir, start, end, source, collection, scale=None, source (str) -- source from where data is to be downloaded - collection (str) -- dataset ID + collection (str) -- dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears - scale (int) -- pixel resolution + scale (int) -- pixel resolution, None by default + + projection (str) -- type of projection. Only required for appeears polygon AOI type. None by default. qc (float) -- quality control parameter @@ -70,3 +83,6 @@ def get_remote_data(geofile, outdir, start, end, source, collection, scale=None, # this part takes care of functions which do not perform any quality checks, e.g. 
SMAP else: func(geofile, outdir, start, end) + + if source == "appeears": + appeears2pecan(geofile, outdir, start, end, collection, projection) From 492a01dd64109fbcdd66fee3833e1383b6ed08e3 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 11:04:39 +0530 Subject: [PATCH 1232/2289] update process_remote_data --- modules/data.remote/inst/process_remote_data.py | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/modules/data.remote/inst/process_remote_data.py b/modules/data.remote/inst/process_remote_data.py index 3ef21de0372..c4335cfb6eb 100644 --- a/modules/data.remote/inst/process_remote_data.py +++ b/modules/data.remote/inst/process_remote_data.py @@ -12,15 +12,15 @@ from importlib import import_module -def process_remote_data(aoi_name, output, outdir, algorithm): +def process_remote_data(aoi_name, process_data, outdir, algorithm): """ uses processing functions to perform computation on input data Parameters ---------- - aoi_name (str) -- name to the AOI. + aoi_name (str) -- name of the AOI. - output (dict) -- dictionary contatining the keys get_data and process_data + process_data (str) -- the type of output variable outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. @@ -31,14 +31,13 @@ def process_remote_data(aoi_name, output, outdir, algorithm): Nothing: output netCDF is saved in the specified directory. """ - # get the type of the input data - input_type = output["get_data"] + # locate the input file - input_file = "".join([outdir, "/", aoi_name, "_", input_type, ".nc"]) + input_file = "".join([outdir, "/", aoi_name, "_", "bands", ".nc"]) # extract the computation which is to be done - output = output["process_data"] + output = process_data # construct the function name - func_name = "".join([input_type, "2", output, "_", algorithm]) + func_name = "".join(["bands", "2", output, "_", algorithm]) # import the module module = import_module(func_name) # import the function from the module From 84a599d5c9a39ff487a1994ffde3beb67a15dd3c Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 11:05:06 +0530 Subject: [PATCH 1233/2289] add book documentation --- .../03_topical_pages/09_standalone_tools.Rmd | 43 ++++++++++++++++--- 1 file changed, 37 insertions(+), 6 deletions(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index c8ce4ef1558..116af6d3bc0 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -135,7 +135,7 @@ input.id = tbl(bety,"inputs") %>% filter(name == input_name) %>% pull(id) ``` ## Remote data module -Remote data module retrieves remote sensing data from MODISTools and Google Earth Engine (with plans to support AppEEARS in near future). The downloaded data can be used for performing further analysis in PEcAn. +Remote data module retrieves remote sensing data from MODISTools, Google Earth Engine and AppEEARS. The downloaded data can be used for performing further analysis in PEcAn. #### Google Earth Engine [Google Earth Engine](https://earthengine.google.com/) is a cloud-based platform for performing analysis on satellite data. It provides access to a [large data catalog](https://developers.google.com/earth-engine/datasets) through an online JavaScript code editor and a Python API. 
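For orientation, a minimal sketch of what working with the Python API involves (illustrative only, not code from this module; it assumes the `earthengine-api` package is installed and authenticated):

```python
import ee

ee.Initialize()  # assumes `earthengine authenticate` has been run beforehand

# count Sentinel-2 surface reflectance scenes over an arbitrary point in 2018
point = ee.Geometry.Point(-72.17, 42.54)  # lon, lat (illustrative values)
scenes = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(point)
    .filterDate("2018-01-01", "2018-12-31")
)
print(scenes.size().getInfo())
```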
@@ -146,6 +146,9 @@ Datasets currently available for use in PEcAn via Google Earth Engine are, * [SMAP Global Soil Moisture Data](https://developers.google.com/earth-engine/datasets/catalog/NASA_USDA_HSL_SMAP_soil_moisture) [`gee2pecan_smap()`](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/gee2pecan_smap.py) * [Landsat 8 Surface Reflectance](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR) [`gee2pecan_l8()`](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/gee2pecan_l8.py) +#### AppEEARS +[AppEEARS (Application for Extracting and Exploring Analysis Ready Samples)](https://lpdaacsvc.cr.usgs.gov/appeears/) is an online tool whhich provides an easy to use interface for downloading analysis ready remote sensing data. [Products available on AppEEARS.](https://lpdaacsvc.cr.usgs.gov/appeears/products) Note: AppEEARS uses a task based system for processing the data request, it is possible for a task to run for long hours before it gets completed. The module checks the task status after every 60 seconds and saves the files when the task gets completed. + Processing functions currently available are, * [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py) @@ -154,7 +157,9 @@ Processing functions currently available are, 1. **Sign up for the Google Earth Engine**. Follow the instructions [here](https://earthengine.google.com/new_signup/) to sign up for using GEE. You need to have your own GEE account for using the GEE download functions in this module. -2. **Install the Python dependencies**. Using this module requires Python3 and the package manager pip to be installed in your system. +2. **Sign up for NASA Earthdata**. Using AppEEARS requires an Earthdata account visit this [page](https://urs.earthdata.nasa.gov/users/new) to create your own account. + +3. **Install the Python dependencies**. Using this module requires Python3 and the package manager pip to be installed in your system. To install the additional Python dependencies required, a. Navigate to `pecan/modules/data.remote/inst` If you are inside the pecan directory, this can be done by, ```bash @@ -164,11 +169,12 @@ b. Use pip to install the dependencies. ```bash pip install -r requirements.txt ``` -3. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials. The credentials will be stored locally on your system. This can be done by, +4. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials. The credentials will be stored locally on your system. This can be done by, ```bash #this will open a browser and ask you to sign in with the Google account registered for GEE earthengine authenticate ``` +5. **Save the Earthdata credentials (optional)**. If you do not wish to enter your credentials every time you use AppEEARS, you may save your username and password inside a JSON file and then add its path [here](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/appeears2pecan.py#L63) #### Usage guide: This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. 
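(The processing step resolves its function dynamically from such names; a minimal sketch of the convention used in `process_remote_data()` — the `getattr` lookup is an assumption about the implementation:)

```python
from importlib import import_module

# "bands" + "2" + "lai" + "_" + "snap" -> "bands2lai_snap", which is both the
# module name and the name of the function inside it
func_name = "".join(["bands", "2", "lai", "_", "snap"])
module = import_module(func_name)  # the module must be importable from sys.path
func = getattr(module, func_name)  # assumed lookup of the processing function
```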
These functions can be used independently but the recommended way is to use `remote_process()` which is a main function that controls all the individual functions to create an organized way of downloading and handling remote sensing data in PEcAn. @@ -176,11 +182,11 @@ This module contains GEE and other processing functions which retrieve data from **Input data**: The input data must be the coordinates of the area of interest (point or polygon type) and the name of the site/AOI. These information must be provided in a GeoJSON file. -**Output data**: The output data returned by the functions are in the form of netCDF files. +**Output data**: The output data returned by the GEE are in the form of netCDF files. When using AppEEARS, output is in the form of netCDF files if the AOI type is a polygon and in the form of csv files if the AOI type is a point. **scale**: Some of the GEE functions require a pixel resolution argument. Information about how GEE handles scale can be found out [here](https://developers.google.com/earth-engine/scale) -#### Example use +#### Example use (GEE) This example will download Sentinel 2 bands for an area in Reykjavik(test file is included) for the time period 2018-01-01 to 2018-12-31 and then use the SNAP algorithm to compute Leaf Area Index. 1. Open a Python shell at `pecan/modules/data.remote/inst` @@ -201,10 +207,35 @@ remote_process( scale=10, qc=1, algorithm="snap", - output={"get_data": "bands", "process_data": "lai"}, + process_data="lai", stage={"get_data": True, "process_data": True}, ) ``` The output netCDF files(bands and LAI) will be saved at `./out` More information about the function and its arguments can be found out by `help(remote_process)` + +#### Example use (AppEEARS) + +This example will download the layers of a SMAP product(SPL3SMP_E.003) for an area in Reykjavik(test file is included) for the time period 2018-01-01 to 2018-01-31 + +1. Open a Python shell at `pecan/modules/data.remote/inst` + +2. 
In the Python shell run the following, +```python +# import remote_process +from remote_process import remote_process + +# call remote_process +remote_process( + geofile="./satellitetools/test.geojson", + outdir="./out", + start="2018-01-01", + end="2018-01-31", + source="appeears", + collection="SPL3SMP_E.003", + projection="native", + stage={"get_data": True, "process_data": False}, + ) +``` +The output netCDF file will be saved at `./out` From 3fb64f2c7b9205349d304f901984cae0c31e94c1 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 11:18:03 +0530 Subject: [PATCH 1234/2289] fix typo in comment --- modules/data.remote/inst/appeears2pecan.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/inst/appeears2pecan.py b/modules/data.remote/inst/appeears2pecan.py index 4a848bdcfd9..9525d930f6c 100644 --- a/modules/data.remote/inst/appeears2pecan.py +++ b/modules/data.remote/inst/appeears2pecan.py @@ -98,7 +98,7 @@ def authenticate(): "SPL4CMDL.004", "SPL4SMGP.004", ]: - # chabge this part to select your own SMAP layers + # change this part to select your own SMAP layers prodLayer = prodLayer[0:25] site_name = get_sitename(geofile) From 39c8a880cd25c73441a36bbccfcc8b2dfd52f6e4 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 16 Jul 2020 06:07:04 +0000 Subject: [PATCH 1235/2289] userid & username forwarded after authentication filter --- apps/api/R/auth.R | 15 ++++++++++----- apps/api/pecanapi-spec.yml | 2 +- docker-compose.yml | 7 +++---- 3 files changed, 14 insertions(+), 10 deletions(-) diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R index 6cf3114a7e3..2b398360a0b 100644 --- a/apps/api/R/auth.R +++ b/apps/api/R/auth.R @@ -34,16 +34,15 @@ validate_crypt_pass <- function(username, crypt_pass) { res <- tbl(dbcon, "users") %>% filter(login == username, crypted_password == crypt_pass) %>% - count() %>% collect() PEcAn.DB::db.close(dbcon) - if (res == 1) { - return(TRUE) + if (nrow(res) == 1) { + return(res$id) } - return(FALSE) + return(NA) } #* Filter to authenticate a user calling the PEcAn API @@ -59,6 +58,8 @@ authenticate_user <- function(req, res) { grepl("ping", req$PATH_INFO, ignore.case = TRUE) || grepl("status", req$PATH_INFO, ignore.case = TRUE)) { + req$user$userid <- NA + req$user$username <- "" return(plumber::forward()) } @@ -70,7 +71,11 @@ authenticate_user <- function(req, res) { password <- auth_details[2] crypt_pass <- get_crypt_pass(username, password) - if(validate_crypt_pass(username, crypt_pass)){ + userid <- validate_crypt_pass(username, crypt_pass) + + if(! 
is.na(userid)){ + req$user$userid <- userid + req$user$username <- username return(plumber::forward()) } diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 24b0964fb03..2669ed099ce 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -3,7 +3,7 @@ servers: - description: PEcAn API Server url: https://pecan-tezan.ncsa.illinois.edu/ - description: PEcAn Test Server - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/9272c48c + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/027e1bde - description: Localhost url: http://127.0.0.1:8000 diff --git a/docker-compose.yml b/docker-compose.yml index 6deef1deb00..cfcafc2eefa 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -323,13 +323,10 @@ services: networks: - pecan environment: - - PECAN_VERSION=${PECAN_VERSION:-1.7.0} - - PECAN_GIT_BRANCH=${PECAN_GIT_BRANCH:-develop} - - PECAN_GIT_CHECKSUM=${PECAN_GIT_CHECKSUM:-unknown} - - PECAN_GIT_DATE=${PECAN_GIT_DATE:-unknown} - PGHOST=${PGHOST:-postgres} - HOST_ONLY=${HOST_ONLY:-FALSE} - AUTH_REQ=${AUTH_REQ:-TRUE} + - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} labels: - "traefik.enable=true" - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/api" @@ -337,6 +334,8 @@ services: - "traefik.port=8000" depends_on: - postgres + volumes: + - pecan:/data/ # ---------------------------------------------------------------------- # Name of network to be used by all containers From bd3755916fe28c2b6f5986094f1814e5db24ddc5 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 16:44:29 +0530 Subject: [PATCH 1236/2289] Update book_source/03_topical_pages/09_standalone_tools.Rmd Co-authored-by: istfer --- book_source/03_topical_pages/09_standalone_tools.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 116af6d3bc0..d985f4df70c 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -147,7 +147,7 @@ Datasets currently available for use in PEcAn via Google Earth Engine are, * [Landsat 8 Surface Reflectance](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR) [`gee2pecan_l8()`](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/gee2pecan_l8.py) #### AppEEARS -[AppEEARS (Application for Extracting and Exploring Analysis Ready Samples)](https://lpdaacsvc.cr.usgs.gov/appeears/) is an online tool whhich provides an easy to use interface for downloading analysis ready remote sensing data. [Products available on AppEEARS.](https://lpdaacsvc.cr.usgs.gov/appeears/products) Note: AppEEARS uses a task based system for processing the data request, it is possible for a task to run for long hours before it gets completed. The module checks the task status after every 60 seconds and saves the files when the task gets completed. +[AppEEARS (Application for Extracting and Exploring Analysis Ready Samples)](https://lpdaacsvc.cr.usgs.gov/appeears/) is an online tool which provides an easy to use interface for downloading analysis ready remote sensing data. [Products available on AppEEARS.](https://lpdaacsvc.cr.usgs.gov/appeears/products) Note: AppEEARS uses a task based system for processing the data request, it is possible for a task to run for long hours before it gets completed. 
The module checks the task status after every 60 seconds and saves the files when the task gets completed.
 
 Processing functions currently available are,
 
 * [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py)
From ecbb036a129db7bf6a97fb57e49ca3f66e6a530c Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 16 Jul 2020 19:17:35 +0530
Subject: [PATCH 1237/2289] add credfile argument in appeears2pecan

---
 modules/data.remote/inst/appeears2pecan.py | 25 ++++++++++++++--------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/modules/data.remote/inst/appeears2pecan.py b/modules/data.remote/inst/appeears2pecan.py
index 9525d930f6c..0c4a399ae7e 100644
--- a/modules/data.remote/inst/appeears2pecan.py
+++ b/modules/data.remote/inst/appeears2pecan.py
@@ -20,9 +20,10 @@
 import json
 from gee_utils import get_sitename
 from datetime import datetime
+from warnings import warn
 
 
-def appeears2pecan(geofile, outdir, start, end, product, projection=None):
+def appeears2pecan(geofile, outdir, start, end, product, projection=None, credfile=None):
     """
     Downloads remote sensing data from AppEEARS
 
@@ -36,10 +37,12 @@
 
     end (str) -- ending date of the data request in the form YYYY-MM-DD
 
-    product (str) -- product name followed by "." and the product version, e.g. "SPL3SMP_E.003"
+    product (str) -- product name followed by "." and the product version, e.g. "SPL3SMP_E.003", as listed on AppEEARS website.
 
     projection (str) -- type of projection, only required for polygon AOI type. None by default
 
+    credfile (str) -- path to JSON file containing Earthdata username and password. None by default
+
     Returns
     -------
    Nothing:
@@ -58,13 +61,16 @@ def authenticate():
         -------
         head (dict) : header containing the authentication token
         """
-        try:
-            # if the user does not want to enter their credentials every time they use this function, they need to store their username and password in a JSON file, preferably not in a git initialized directory
-            with open("path/to/cred.json", "r") as f:
-                cred = json.load(f)
-            user = cred["username"]
-            password = cred["password"]
-        except:
+        if credfile:
+            try:
+                # if the user does not want to enter their credentials every time they use this function, they need to store their username and password in a JSON file, preferably not in a git initialized directory
+                with open(credfile, "r") as f:
+                    cred = json.load(f)
+                user = cred["username"]
+                password = cred["password"]
+            except IOError:
+                print("file does not exist")
+        else:
             # if user does not want to store the credentials
             user = getpass.getpass(prompt="Enter NASA Earthdata Login Username: ")
             password = getpass.getpass(prompt="Enter NASA Earthdata Login Password: ")
@@ -98,7 +104,7 @@ def authenticate():
         "SPL4CMDL.004",
         "SPL4SMGP.004",
     ]:
+        warn("Since you have requested a SMAP product, all layers cannot be downloaded, selecting first 25 layers..")
         # change this part to select your own SMAP layers
         prodLayer = prodLayer[0:25]
 
     site_name = get_sitename(geofile)
From 9c4c35ef98dc0ce70d78d970815da389cd936883 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 16 Jul 2020 19:18:51 +0530
Subject: [PATCH 1238/2289] add output dict to remote_process

---
 modules/data.remote/inst/remote_process.py | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py
index c3e442c433d..dc29c1a6bfe 100644
--- a/modules/data.remote/inst/remote_process.py
+++ b/modules/data.remote/inst/remote_process.py
@@ -25,7 +25,8 @@ def remote_process(
     projection=None,
qc=None, algorithm=None, - process_data=None, + credfile=None, + output={"get_data": None, "process_data": None}, stage={"get_data": True, "process_data": True}, ): @@ -54,7 +55,9 @@ def remote_process( algorithm (str) -- algorithm used for processing data in process_data(), currently only SNAP is implemented to estimate LAI from Sentinel-2 bands, None by default - process_data (str) -- the type of output variable requested from process_data module + credfile (str) -- path to JSON file containing Earthdata username and password, only required for AppEEARS, None by default + + output (dict) -- "get_data" - the type of output variable requested from get_data module, "process_data" - the type of output variable requested from process_data module stage (dict) -- temporary argument to imitate database checks @@ -70,11 +73,11 @@ def remote_process( if stage["get_data"]: get_remote_data( - geofile, outdir, start, end, source, collection, scale, projection, qc + geofile, outdir, start, end, source, collection, scale, projection, qc, credfile ) if stage["process_data"]: - process_remote_data(aoi_name, process_data, outdir, algorithm) + process_remote_data(aoi_name, output, outdir, algorithm) if __name__ == "__main__": @@ -88,6 +91,6 @@ def remote_process( scale=10, qc=1, algorithm="snap", - process_data="lai", + output={"get_data": "bands", "process_data": "lai"}, stage={"get_data": True, "process_data": True}, ) From f25252bcd15bad4dadcf44b519f5cbfb499d6379 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 19:19:27 +0530 Subject: [PATCH 1239/2289] add credfile argument to get_remote_data --- modules/data.remote/inst/get_remote_data.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index cef5dcd7671..b22bb45717e 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -31,6 +31,7 @@ def get_remote_data( scale=None, projection=None, qc=None, + credfile=None, ): """ uses GEE and AppEEARS functions to download data @@ -85,4 +86,4 @@ def get_remote_data( func(geofile, outdir, start, end) if source == "appeears": - appeears2pecan(geofile, outdir, start, end, collection, projection) + appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) From 84f55f7ddc089eea3b5e9f34d55cdbbd4f405ec0 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 19:20:04 +0530 Subject: [PATCH 1240/2289] add output dict to process_remote_data --- .../data.remote/inst/process_remote_data.py | 21 +++++++------------ 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/modules/data.remote/inst/process_remote_data.py b/modules/data.remote/inst/process_remote_data.py index c4335cfb6eb..1ef7ac5b76b 100644 --- a/modules/data.remote/inst/process_remote_data.py +++ b/modules/data.remote/inst/process_remote_data.py @@ -3,41 +3,36 @@ """ process_remote_data controls functions which perform further computation on the data. - Requires Python3 - Author: Ayush Prasad """ from importlib import import_module -def process_remote_data(aoi_name, process_data, outdir, algorithm): +def process_remote_data(aoi_name, output, outdir, algorithm): """ uses processing functions to perform computation on input data Parameters ---------- - aoi_name (str) -- name of the AOI. - - process_data (str) -- the type of output variable - + aoi_name (str) -- name to the AOI. 
+    output (dict) -- dictionary containing the keys get_data and process_data
 
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
-
     algorithm (str) -- name of the algorithm used to perform computation.
-
     Returns
     -------
     Nothing:
             output netCDF is saved in the specified directory.
     """
-
+    # get the type of the input data
+    input_type = output["get_data"]
     # locate the input file
-    input_file = "".join([outdir, "/", aoi_name, "_", "bands", ".nc"])
+    input_file = "".join([outdir, "/", aoi_name, "_", input_type, ".nc"])
     # extract the computation which is to be done
-    output = process_data
+    output = output["process_data"]
     # construct the function name
-    func_name = "".join(["bands", "2", output, "_", algorithm])
+    func_name = "".join([input_type, "2", output, "_", algorithm])
     # import the module
     module = import_module(func_name)
     # import the function from the module
From 36ac374fc1dd822117856fb4d34a823f81324d65 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 16 Jul 2020 19:20:59 +0530
Subject: [PATCH 1241/2289] update function arguments in book

---
 book_source/03_topical_pages/09_standalone_tools.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index d985f4df70c..2976729ae70 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -174,7 +174,7 @@ pip install -r requirements.txt
 #this will open a browser and ask you to sign in with the Google account registered for GEE
 earthengine authenticate
 ```
-5. **Save the Earthdata credentials (optional)**. If you do not wish to enter your credentials every time you use AppEEARS, you may save your username and password inside a JSON file and then add its path [here](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/appeears2pecan.py#L63)
+5. **Save the Earthdata credentials (optional)**. If you do not wish to enter your credentials every time you use AppEEARS, you may save your username and password inside a JSON file and then pass its file path as an argument in `remote_process`
 
 #### Usage guide:
 This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. 
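For reference, `authenticate()` reads the credentials file with `json.load` and expects the keys `username` and `password`; a minimal sketch of writing a compatible file (keep it out of version control) would be:

```python
import json

# hypothetical credentials file; the key names match what appeears2pecan() reads
with open("cred.json", "w") as f:
    json.dump({"username": "<earthdata_user>", "password": "<earthdata_pass>"}, f)
```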
@@ -207,7 +207,7 @@ remote_process( scale=10, qc=1, algorithm="snap", - process_data="lai", + output={"get_data": "bands", "process_data": "lai"}, stage={"get_data": True, "process_data": True}, ) ``` From 0e8becf5c735265f727b15b275ac23b89158ba76 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 19:24:17 +0530 Subject: [PATCH 1242/2289] add file not found error message in appeears2pecan --- modules/data.remote/inst/appeears2pecan.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/inst/appeears2pecan.py b/modules/data.remote/inst/appeears2pecan.py index 0c4a399ae7e..0d1a410535a 100644 --- a/modules/data.remote/inst/appeears2pecan.py +++ b/modules/data.remote/inst/appeears2pecan.py @@ -69,7 +69,7 @@ def authenticate(): user = cred["username"] password = cred["password"] except IOError: - print("file does not exist") + print("specified file does not exist, please make sure that you have specified the path correctly") else: # if user does not want to store the credentials user = getpass.getpass(prompt="Enter NASA Earthdata Login Username: ") From bfd893f2cf6af9e506f96a65f4e96e639817694d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 16 Jul 2020 20:38:51 +0530 Subject: [PATCH 1243/2289] add approx download time --- book_source/03_topical_pages/09_standalone_tools.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 2976729ae70..8ce9aab0fcc 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -238,4 +238,4 @@ remote_process( stage={"get_data": True, "process_data": False}, ) ``` -The output netCDF file will be saved at `./out` +This will approximately take 20 minutes to complete and the output netCDF file will be saved at `./out` From 7bf6fd6514980b9de515b224ede3c62caaadb53a Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Fri, 17 Jul 2020 15:48:45 +0000 Subject: [PATCH 1244/2289] added API endpoint to submit workflow as XML --- .docker-compose.yml.swp | Bin 0 -> 1024 bytes apps/api/R/submit.workflow.R | 172 +++++++++++++++++++++++++++++++++++ apps/api/R/workflows.R | 14 +++ tests/api.sipnet.xml | 46 ++++++++++ 4 files changed, 232 insertions(+) create mode 100644 .docker-compose.yml.swp create mode 100644 apps/api/R/submit.workflow.R create mode 100644 tests/api.sipnet.xml diff --git a/.docker-compose.yml.swp b/.docker-compose.yml.swp new file mode 100644 index 0000000000000000000000000000000000000000..e80831b61edf5b2de492c39c744cbd74c55a8b6c GIT binary patch literal 1024 zcmYc?$V<%2S1{7E)H7y40&l|^7)nyB67z}^GfI)fu`vr$lN0lF!K$%I!^Kkale1Hc cbd&RQ3-XIo^(u37;8LTE(GVC7fdL2s0A~CX;Q#;t literal 0 HcmV?d00001 diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R new file mode 100644 index 00000000000..ca684af5399 --- /dev/null +++ b/apps/api/R/submit.workflow.R @@ -0,0 +1,172 @@ +library(dplyr) + +#* Submit a workflow submitted as XML +#* @param workflowXmlString String containing the XML workflow from request body +#* @param userDetails List containing userid & username +#* @return ID & status of the submitted workflow +#* @author Tezan Sahu +submit.workflow.xml <- function(workflowXmlString, userDetails){ + + workflowXml <- XML::xmlParseString(stringr::str_replace(workflowXmlString, "\n", "")) + workflowList <- XML::xmlToList(workflowXml) + + # Fix details about the database + workflowList$database <- list(bety = 
PEcAn.DB::get_postgres_envvars( + host = "localhost", + dbname = "bety", + user = "bety", + password = "bety", + driver = "PostgreSQL", + write = FALSE + ) + ) + + # Fix RabbitMQ details + dbcon <- PEcAn.DB::betyConnect() + hostInfo <- PEcAn.DB::dbHostInfo(dbcon) + PEcAn.DB::db.close(dbcon) + workflowList$host <- list( + rabbitmq = list( + uri = Sys.getenv("RABBITMQ_URI", "amqp://guest:guest@localhost/%2F"), + queue = paste0(workflowList$model$type, "_", workflowList$model$revision) + ) + ) + workflowList$host$name <- if(hostInfo$hostname == "") "localhost" else hostInfo$hostname + + # Fix the info + workflowList$info$notes <- workflowList$info$notes + if(is.null(workflowList$info$userid)){ + workflowList$info$userid <- userDetails$userid + } + if(is.null(workflowList$info$username)){ + workflowList$info$username <- userDetails$username + } + if(is.null(workflowList$info$date)){ + workflowList$info$date <- Sys.time() + } + + # Add entry to workflows table in database + workflow_id <- insert.workflow(workflowList) + workflowList$workflow$id <- workflow_id + + # Add entry to attributes table in database + insert.attribute(workflowList) + + # Fix the output directory + outdir <- paste0("data/workflows/PEcAn_", workflow_id) + workflowList$outdir <- outdir + + # Create output diretory + dir.create(paste0("/", outdir), recursive=TRUE) + + # Convert settings list to XML & save it into outdir + workflowXml <- PEcAn.settings::listToXml(workflowList, "pecan") + XML::saveXML(workflowXml, paste0("/", outdir, "/pecan.xml")) + + # Post workflow to RabbitMQ + message <- list(folder = outdir, workflowid = workflow_id) + PEcAn.remote::rabbitmq_post(workflowList$host$rabbitmq$uri, "pecan", message) + +} + + +#* Insert the workflow into workflows table to obtain the workflow_id +#* @param workflowList List containing the workflow details +#* @return ID of the submitted workflow +#* @author Tezan Sahu +insert.workflow <- function(workflowList){ + dbcon <- PEcAn.DB::betyConnect() + + model_id <- workflowList$model$id + if(is.null(model_id)){ + model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), dbcon) + } + + start_time <- Sys.time() + + workflow_df <- tibble::tibble( + "site_id" = c(bit64::as.integer64(workflowList$run$site$id)), + "model_id" = c(bit64::as.integer64(model_id)), + "folder" = "temp_dir", + "hostname" = c(workflowList$host$name), + "start_date" = c(as.POSIXct(workflowList$run$start.date)), + "end_date" = c(as.POSIXct(workflowList$run$end.date)), + "advanced_edit" = c(FALSE), + "started_at" = c(start_time), + stringsAsFactors = FALSE + ) + + if(! 
is.na(userDetails$userid)){ + workflow_df <- workflow_df %>% tibble::add_column("user_id" = c(bit64::as.integer64(workflowList$info$userid))) + } + + insert <- PEcAn.DB::insert_table(workflow_df, "workflows", dbcon) + workflow_id <- dplyr::tbl(dbcon, "workflows") %>% + filter(started_at == start_time + && site_id == bit64::as.integer64(workflowList$run$site$id) + && model_id == bit64::as.integer64(model_id) + ) %>% + pull(id) + + update_qry <- paste0("UPDATE workflows SET folder = 'data/workflows/PEcAn_", workflow_id, "' WHERE id = '", workflow_id, "';") + PEcAn.DB::db.query(update_qry, dbcon) + + PEcAn.DB::db.close(dbcon) + + return(workflow_id) +} + + +#* Insert the workflow into attributes table +#* @param workflowList List containing the workflow details +#* @author Tezan Sahu +insert.attribute <- function(workflowList){ + dbcon <- PEcAn.DB::betyConnect() + + # Create an array of PFTs + pfts <- c() + for(i in seq(length(workflowList$pfts))){ + pfts <- c(pfts, workflowList$pfts[i]$pft$name) + } + + # Obtain the model_id + model_id <- workflowList$model$id + if(is.null(model_id)){ + model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), dbcon) + } + + # Fill in the properties + properties <- list( + start = as.POSIXct(workflowList$run$start.date), + end = as.POSIXct(workflowList$run$end.date), + pfts = pfts, + runs = if(is.null(workflowList$ensemble$size)) 1 else workflowList$ensemble$size, + modelid = model_id, + siteid = bit64::as.integer64(workflowList$run$site$id), + sitename = dplyr::tbl(dbcon, "sites") %>% filter(id == bit64::as.integer64(workflowList$run$site$id)) %>% pull(sitename), + #sitegroupid <- + lat = if(is.null(workflowList$run$site$lat)) "" else workflowList$run$site$lat, + lon = if(is.null(workflowList$run$site$lon)) "" else workflowList$run$site$lon, + email = if(is.na(workflowList$info$userid) || workflowList$info$userid == -1) "" else + dplyr::tbl(dbcon, "users") %>% filter(id == bit64::as.integer64(workflowList$info$userid)) %>% pull(email), + notes = if(is.null(workflowList$info$notes)) "" else workflowList$info$notes, + input_met = workflowList$run$inputs$met$id, + variables = workflowList$ensemble$variable + ) + if(! is.null(workflowList$ensemble$parameters$method)) properties$parm_method <- workflowList$ensemble$parameters$method + if(! 
is.null(workflowList$sensitivity.analysis$quantiles)){ + sensitivity <- c() + for(i in seq(length(workflowList$sensitivity.analysis$quantiles))){ + sensitivity <- c(sensitivity, workflowList$sensitivity.analysis$quantiles[i]$sigma) + } + properties$sensitivity <- paste0(sensitivity, collapse=",") + } + # More variables can be added later + + # Insert properties into attributes table + value_json <- as.character(jsonlite::toJSON(properties, auto_unbox = TRUE)) + + res <- DBI::dbSendStatement(dbcon, + "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", + list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) +} \ No newline at end of file diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index f6007a342e7..34ca8c44a4f 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -1,4 +1,5 @@ library(dplyr) +source("submit.workflow.R") #' Get the list of workflows (using a particular model & site, if specified) #' @param model_id Model id (character) @@ -125,4 +126,17 @@ getWorkflowDetails <- function(id, res){ return(res) } +} + +################################################################################################# + +#' Post a workflow for execution +#' @param req Request sent +#' @return ID & status of the submitted workflow +#' @author Tezan Sahu +#* @post / +submitWorkflow <- function(req){ + if(req$HTTP_CONTENT_TYPE == "application/xml"){ + return(submit.workflow.xml(req$postBody, req$user)) + } } \ No newline at end of file diff --git a/tests/api.sipnet.xml b/tests/api.sipnet.xml new file mode 100644 index 00000000000..d690454a76e --- /dev/null +++ b/tests/api.sipnet.xml @@ -0,0 +1,46 @@ + + + + + temperate.coniferous + + + + + 3000 + FALSE + 1.2 + AUTO + + + + NPP + + + + + -1 + 1 + + NPP + + + + SIPNET + r136 + + + + + 772 + + + + 5000000005 + + + 2002-01-01 00:00:00 + 2005-12-31 00:00:00 + pecan/dbfiles + + From 70bfe17c4214059055bba350c5265015d8a0a406 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 19 Jul 2020 10:08:40 -0500 Subject: [PATCH 1245/2289] if queue already exist, leave it alone --- base/remote/NAMESPACE | 4 +- base/remote/R/rabbitmq.R | 63 +++++++++++++------ base/remote/man/rabbitmq_create_queue.Rd | 43 +++++++++++++ ...abbitmq_get.Rd => rabbitmq_get_message.Rd} | 6 +- ...bitmq_post.Rd => rabbitmq_post_message.Rd} | 6 +- 5 files changed, 95 insertions(+), 27 deletions(-) create mode 100644 base/remote/man/rabbitmq_create_queue.Rd rename base/remote/man/{rabbitmq_get.Rd => rabbitmq_get_message.Rd} (87%) rename base/remote/man/{rabbitmq_post.Rd => rabbitmq_post_message.Rd} (87%) diff --git a/base/remote/NAMESPACE b/base/remote/NAMESPACE index a6361fb0465..ec730637600 100644 --- a/base/remote/NAMESPACE +++ b/base/remote/NAMESPACE @@ -7,8 +7,8 @@ export(kill.tunnel) export(open_tunnel) export(qsub_get_jobid) export(qsub_run_finished) -export(rabbitmq_get) -export(rabbitmq_post) +export(rabbitmq_get_message) +export(rabbitmq_post_message) export(remote.copy.from) export(remote.copy.to) export(remote.copy.update) diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R index 7bcb1ca8392..de200d65c0a 100644 --- a/base/remote/R/rabbitmq.R +++ b/base/remote/R/rabbitmq.R @@ -55,7 +55,11 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) { #' return the resulting message, or if not availble an empty string "". 
rabbitmq_send_message <- function(url, auth, body, action="POST") { if (action == "GET") { - result <- httr::GET(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE)) + if (is.na(body)) { + result <- httr::GET(url, auth) + } else { + result <- httr::GET(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE)) + } } else if (action == "PUT") { result <- httr::PUT(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE)) } else if (action == "DELETE") { @@ -67,7 +71,7 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") { return(NA) } - if (result$status_code >= 200 && result$status_code < 300) { + if (result$status_code >= 200 && result$status_code <= 299) { content <- httr::content(result) if (length(content) == 0) { return("") @@ -80,7 +84,7 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") { } else { output <- httr::content(result) if ("reason" %in% names(output)) { - PEcAn.logger::logger.error(paste("error sending message to rabbitmq,", output$reason)) + PEcAn.logger::logger.error(paste0("error sending message to rabbitmq [", result$status_code, "], ", output$reason)) } else { PEcAn.logger::logger.error("error sending message to rabbitmq") } @@ -88,6 +92,37 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") { } } +#' Create a queu in RabbitMQ. This will first check to see if the queue +#' already exists in RabbitMQ, if not it will create the queue. If the +#' queue exists, or is created it will return TRUE, it will return FALSE +#' otherwise. +#' +#' @param url parsed RabbitMQ URL. +#' @param auth the httr authentication object to use. +#' @param vhost the vhost where to create the queue. +#' @param queue the queue that should be checked/created. +#' @param auto_delete should the queue be deleted afterwards (FALSE is default) +#' @param durable should the messages exists after a server restart (TRUE is default) +#' @return TRUE if the queue now exists, FALSE otherwise. +#' @author Rob Kooper +rabbitmq_create_queue <- function(url, auth, vhost, queue, auto_delete = FALSE, durable = TRUE) { + resturl <- paste0(url, "api/queues/", vhost, "/", queue) + + # check if queue exists + result <- rabbitmq_send_message(resturl, auth, NA, "GET") + if (length(result) > 1 || !is.na(result)) { + return(TRUE) + } + + # create the queue + body <- list( + auto_delete = auto_delete, + durable = durable + ) + result <- rabbitmq_send_message(url, auth, body, "PUT") + return(length(result) > 1 || !is.na(result)) +} + #' Post message to RabbitMQ. This will submit a message to RabbitMQ, if the #' queue does not exist it will be created. The message will be converted to #' a json message that is submitted. @@ -100,7 +135,7 @@ rabbitmq_send_message <- function(url, auth, body, action="POST") { #' @return the result of the post if message was send, or NA if it failed. 
#' @author Alexey Shiklomanov, Rob Kooper #' @export -rabbitmq_post <- function(uri, queue, message, prefix="", port=15672) { +rabbitmq_post_message <- function(uri, queue, message, prefix="", port=15672) { # parse rabbitmq URI rabbitmq <- rabbitmq_parse_uri(uri, prefix, port) if (length(rabbitmq) != 4) { @@ -110,13 +145,8 @@ rabbitmq_post <- function(uri, queue, message, prefix="", port=15672) { # create authentication auth <- httr::authenticate(rabbitmq$username, rabbitmq$password) - # create message to be send to create the queue - body <- list( - auto_delete = FALSE, - durable = FALSE - ) - url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue) - if (is.na(rabbitmq_send_message(url, auth, body, "PUT"))) { + # make sure the queue exists + if (!rabbitmq_queue(rabbitmq$url, auth, rabbitmq$vhost, queue)) { return(NA) } @@ -143,7 +173,7 @@ rabbitmq_post <- function(uri, queue, message, prefix="", port=15672) { #' @return NA if no message was retrieved, or a list of the messages payload. #' @author Alexey Shiklomanov, Rob Kooper #' @export -rabbitmq_get <- function(uri, queue, count=1, prefix="", port=15672) { +rabbitmq_get_message <- function(uri, queue, count=1, prefix="", port=15672) { # parse rabbitmq URI rabbitmq <- rabbitmq_parse_uri(uri, prefix, port) if (length(rabbitmq) != 4) { @@ -153,13 +183,8 @@ rabbitmq_get <- function(uri, queue, count=1, prefix="", port=15672) { # create authentication auth <- httr::authenticate(rabbitmq$username, rabbitmq$password) - # create message to be send to create the queue - body <- list( - auto_delete = FALSE, - durable = FALSE - ) - url <- paste0(rabbitmq$url, "api/queues/", rabbitmq$vhost, "/", queue) - if (is.na(rabbitmq_send_message(url, auth, body, "PUT"))) { + # make sure the queue exists + if (!rabbitmq_queue(rabbitmq$url, auth, rabbitmq$vhost, queue)) { return(NA) } diff --git a/base/remote/man/rabbitmq_create_queue.Rd b/base/remote/man/rabbitmq_create_queue.Rd new file mode 100644 index 00000000000..ec21e5a57f0 --- /dev/null +++ b/base/remote/man/rabbitmq_create_queue.Rd @@ -0,0 +1,43 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rabbitmq.R +\name{rabbitmq_create_queue} +\alias{rabbitmq_create_queue} +\title{Create a queu in RabbitMQ. This will first check to see if the queue +already exists in RabbitMQ, if not it will create the queue. If the +queue exists, or is created it will return TRUE, it will return FALSE +otherwise.} +\usage{ +rabbitmq_create_queue( + url, + auth, + vhost, + queue, + auto_delete = FALSE, + durable = TRUE +) +} +\arguments{ +\item{url}{parsed RabbitMQ URL.} + +\item{auth}{the httr authentication object to use.} + +\item{vhost}{the vhost where to create the queue.} + +\item{queue}{the queue that should be checked/created.} + +\item{auto_delete}{should the queue be deleted afterwards (FALSE is default)} + +\item{durable}{should the messages exists after a server restart (TRUE is default)} +} +\value{ +TRUE if the queue now exists, FALSE otherwise. +} +\description{ +Create a queu in RabbitMQ. This will first check to see if the queue +already exists in RabbitMQ, if not it will create the queue. If the +queue exists, or is created it will return TRUE, it will return FALSE +otherwise. 
+} +\author{ +Rob Kooper +} diff --git a/base/remote/man/rabbitmq_get.Rd b/base/remote/man/rabbitmq_get_message.Rd similarity index 87% rename from base/remote/man/rabbitmq_get.Rd rename to base/remote/man/rabbitmq_get_message.Rd index abbb750ba22..755ddd4bb30 100644 --- a/base/remote/man/rabbitmq_get.Rd +++ b/base/remote/man/rabbitmq_get_message.Rd @@ -1,12 +1,12 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/rabbitmq.R -\name{rabbitmq_get} -\alias{rabbitmq_get} +\name{rabbitmq_get_message} +\alias{rabbitmq_get_message} \title{Get message from RabbitMQ. This will get a message from RabbitMQ, if the queue does not exist it will be created. The message will be converted to a json message that is returned.} \usage{ -rabbitmq_get(uri, queue, count = 1, prefix = "", port = 15672) +rabbitmq_get_message(uri, queue, count = 1, prefix = "", port = 15672) } \arguments{ \item{uri}{RabbitMQ URI or URL to rest endpoint} diff --git a/base/remote/man/rabbitmq_post.Rd b/base/remote/man/rabbitmq_post_message.Rd similarity index 87% rename from base/remote/man/rabbitmq_post.Rd rename to base/remote/man/rabbitmq_post_message.Rd index bcb368f33a8..c5655ed36a0 100644 --- a/base/remote/man/rabbitmq_post.Rd +++ b/base/remote/man/rabbitmq_post_message.Rd @@ -1,12 +1,12 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/rabbitmq.R -\name{rabbitmq_post} -\alias{rabbitmq_post} +\name{rabbitmq_post_message} +\alias{rabbitmq_post_message} \title{Post message to RabbitMQ. This will submit a message to RabbitMQ, if the queue does not exist it will be created. The message will be converted to a json message that is submitted.} \usage{ -rabbitmq_post(uri, queue, message, prefix = "", port = 15672) +rabbitmq_post_message(uri, queue, message, prefix = "", port = 15672) } \arguments{ \item{uri}{RabbitMQ URI or URL to rest endpoint} From 712109685a3644cc147e6893799a84efb7d22bb8 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 19 Jul 2020 14:13:31 -0500 Subject: [PATCH 1246/2289] don't rename function just before push --- base/remote/R/rabbitmq.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R index de200d65c0a..8b2236758d0 100644 --- a/base/remote/R/rabbitmq.R +++ b/base/remote/R/rabbitmq.R @@ -146,7 +146,7 @@ rabbitmq_post_message <- function(uri, queue, message, prefix="", port=15672) { auth <- httr::authenticate(rabbitmq$username, rabbitmq$password) # make sure the queue exists - if (!rabbitmq_queue(rabbitmq$url, auth, rabbitmq$vhost, queue)) { + if (!rabbitmq_create_queue(rabbitmq$url, auth, rabbitmq$vhost, queue)) { return(NA) } @@ -184,7 +184,7 @@ rabbitmq_get_message <- function(uri, queue, count=1, prefix="", port=15672) { auth <- httr::authenticate(rabbitmq$username, rabbitmq$password) # make sure the queue exists - if (!rabbitmq_queue(rabbitmq$url, auth, rabbitmq$vhost, queue)) { + if (!rabbitmq_create_queue(rabbitmq$url, auth, rabbitmq$vhost, queue)) { return(NA) } From a5337cab215a837d693c8c0cb1c91fbb6a1b2eea Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Mon, 20 Jul 2020 03:47:27 +0000 Subject: [PATCH 1247/2289] added api for workflow submission --- .docker-compose.yml.swp | Bin 1024 -> 0 bytes apps/api/R/submit.workflow.R | 12 ++++++++++-- apps/api/R/workflows.R | 11 +++++++++-- apps/api/pecanapi-spec.yml | 2 +- docker-compose.yml | 3 +++ 5 files changed, 23 insertions(+), 5 deletions(-) delete mode 100644 .docker-compose.yml.swp diff --git a/.docker-compose.yml.swp 
b/.docker-compose.yml.swp deleted file mode 100644 index e80831b61edf5b2de492c39c744cbd74c55a8b6c..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1024 zcmYc?$V<%2S1{7E)H7y40&l|^7)nyB67z}^GfI)fu`vr$lN0lF!K$%I!^Kkale1Hc cbd&RQ3-XIo^(u37;8LTE(GVC7fdL2s0A~CX;Q#;t diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index ca684af5399..f21b7e9549d 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -65,7 +65,15 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){ # Post workflow to RabbitMQ message <- list(folder = outdir, workflowid = workflow_id) - PEcAn.remote::rabbitmq_post(workflowList$host$rabbitmq$uri, "pecan", message) + res <- PEcAn.remote::rabbitmq_post_message(workflowList$host$rabbitmq$uri, "pecan", message, "rabbitmq") + + if(res$routed){ + return(list(workflow_id = workflow_id, status = "Submitted successfully")) + } + else{ + return(list(status = "Error", message = "Could not submit to RabbitMQ")) + } + } @@ -96,7 +104,7 @@ insert.workflow <- function(workflowList){ stringsAsFactors = FALSE ) - if(! is.na(userDetails$userid)){ + if(! is.na(workflowList$info$userid)){ workflow_df <- workflow_df %>% tibble::add_column("user_id" = c(bit64::as.integer64(workflowList$info$userid))) } diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 34ca8c44a4f..1bd4b1e14a4 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -135,8 +135,15 @@ getWorkflowDetails <- function(id, res){ #' @return ID & status of the submitted workflow #' @author Tezan Sahu #* @post / -submitWorkflow <- function(req){ +submitWorkflow <- function(req, res){ if(req$HTTP_CONTENT_TYPE == "application/xml"){ - return(submit.workflow.xml(req$postBody, req$user)) + submission_res <- submit.workflow.xml(req$postBody, req$user) + if(submission_res$status == "Error"){ + res$status <- 400 + return(submission_res) + } } + + res$status <- 201 + return(submission_res) } \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 2669ed099ce..dbe8a5284c5 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -3,7 +3,7 @@ servers: - description: PEcAn API Server url: https://pecan-tezan.ncsa.illinois.edu/ - description: PEcAn Test Server - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/027e1bde + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/769e6872 - description: Localhost url: http://127.0.0.1:8000 diff --git a/docker-compose.yml b/docker-compose.yml index cfcafc2eefa..fda167091c5 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -159,6 +159,9 @@ services: restart: unless-stopped networks: - pecan + depends_on: + - rabbitmq + - postgres environment: - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} - FQDN=${PECAN_FQDN:-docker} From 05e21d1aae404746fb68be61bb25b5131a45e06f Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 21 Jul 2020 21:54:22 +0000 Subject: [PATCH 1248/2289] updates to rabbitmq functions --- base/remote/R/rabbitmq.R | 29 ++++++++++++++++-------- base/remote/man/rabbitmq_send_message.Rd | 2 +- 2 files changed, 20 insertions(+), 11 deletions(-) diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R index 8b2236758d0..95479cdafbc 100644 --- a/base/remote/R/rabbitmq.R +++ b/base/remote/R/rabbitmq.R @@ -53,7 +53,7 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) { #' @param action the rest action to perform #' @return will return NA if message failed, otherwise it 
will either #' return the resulting message, or if not available an empty string "". -rabbitmq_send_message <- function(url, auth, body, action="POST") { +rabbitmq_send_message <- function(url, auth, body, action = "POST", silent = FALSE) { if (action == "GET") { if (is.na(body)) { result <- httr::GET(url, auth) @@ -67,7 +67,9 @@ } else if (action == "POST") { result <- httr::POST(url, auth, body = jsonlite::toJSON(body, auto_unbox = TRUE)) } else { - PEcAn.logger::logger.error(paste("error sending message to rabbitmq, unknown action", action)) + if (!silent) { + PEcAn.logger::logger.error(paste("error sending message to rabbitmq, unknown action", action)) + } return(NA) } @@ -82,11 +84,13 @@ PEcAn.logger::logger.error("error sending message to rabbitmq, make sure username/password is correct") return(NA) } else { - output <- httr::content(result) - if ("reason" %in% names(output)) { - PEcAn.logger::logger.error(paste0("error sending message to rabbitmq [", result$status_code, "], ", output$reason)) - } else { - PEcAn.logger::logger.error("error sending message to rabbitmq") + if (!silent) { + output <- httr::content(result) + if ("reason" %in% names(output)) { + PEcAn.logger::logger.error(paste0("error sending message to rabbitmq [", result$status_code, "], ", output$reason)) + } else { + PEcAn.logger::logger.error("error sending message to rabbitmq") + } } return(NA) } @@ -109,17 +113,18 @@ rabbitmq_create_queue <- function(url, auth, vhost, queue, auto_delete = FALSE, durable = TRUE) { resturl <- paste0(url, "api/queues/", vhost, "/", queue) # check if queue exists - result <- rabbitmq_send_message(resturl, auth, NA, "GET") + result <- rabbitmq_send_message(resturl, auth, NA, "GET", silent = TRUE) if (length(result) > 1 || !is.na(result)) { return(TRUE) } # create the queue + PEcAn.logger::logger.info("creating queue", queue, "in rabbitmq") body <- list( auto_delete = auto_delete, durable = durable ) - result <- rabbitmq_send_message(url, auth, body, "PUT") + result <- rabbitmq_send_message(resturl, auth, body, "PUT") return(length(result) > 1 || !is.na(result)) } @@ -199,6 +204,10 @@ rabbitmq_get_message <- function(uri, queue, count=1, prefix="", port=15672) { if (length(result) == 1 && is.na(result)) { return(NA) } else { - return(lapply(result, function(x) { tryCatch(jsonlite::fromJSON(x$payload), error=function(e) { x$payload }) })) + if (length(result) == 1 && result == "") { + return(c()) + } else { + return(lapply(result, function(x) { tryCatch(jsonlite::fromJSON(x$payload), error=function(e) { x$payload }) })) + } } } diff --git a/base/remote/man/rabbitmq_send_message.Rd b/base/remote/man/rabbitmq_send_message.Rd index 0af1a899bfd..407e058d616 100644 --- a/base/remote/man/rabbitmq_send_message.Rd +++ b/base/remote/man/rabbitmq_send_message.Rd @@ -5,7 +5,7 @@ \title{Send a message to RabbitMQ rest API.
It will check the resulting status code and print a message in case something goes wrong.} \usage{ -rabbitmq_send_message(url, auth, body, action = "POST") +rabbitmq_send_message(url, auth, body, action = "POST", silent = FALSE) } \arguments{ \item{url}{the full endpoint rest url} From a365964944dfd0b665efcefdb102f751e5fab7b7 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 21 Jul 2020 21:54:46 +0000 Subject: [PATCH 1249/2289] bugfixes for xml workflow submission --- apps/api/Dockerfile | 4 +--- apps/api/R/submit.workflow.R | 12 ++++++------ tests/api.sipnet.xml | 2 +- 3 files changed, 8 insertions(+), 10 deletions(-) diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile index 0e88133c538..c9f14b50b8b 100644 --- a/apps/api/Dockerfile +++ b/apps/api/Dockerfile @@ -30,6 +30,4 @@ WORKDIR /api/R CMD Rscript entrypoint.R -COPY ./ /api - - +COPY ./ /api \ No newline at end of file diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index f21b7e9549d..0eaa40cbf5f 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -16,8 +16,7 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){ dbname = "bety", user = "bety", password = "bety", - driver = "PostgreSQL", - write = FALSE + driver = "PostgreSQL" ) ) @@ -53,15 +52,16 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){ insert.attribute(workflowList) # Fix the output directory - outdir <- paste0("data/workflows/PEcAn_", workflow_id) + outdir <- paste0("/data/workflows/PEcAn_", workflow_id) workflowList$outdir <- outdir # Create output directory - dir.create(paste0("/", outdir), recursive=TRUE) + dir.create(outdir, recursive=TRUE) # Convert settings list to XML & save it into outdir workflowXml <- PEcAn.settings::listToXml(workflowList, "pecan") - XML::saveXML(workflowXml, paste0("/", outdir, "/pecan.xml")) + XML::saveXML(workflowXml, paste0(outdir, "/pecan.xml")) + res <- file.copy("/work/workflow.R", outdir) # Post workflow to RabbitMQ message <- list(folder = outdir, workflowid = workflow_id) @@ -96,7 +96,7 @@ insert.workflow <- function(workflowList){ "site_id" = c(bit64::as.integer64(workflowList$run$site$id)), "model_id" = c(bit64::as.integer64(model_id)), "folder" = "temp_dir", - "hostname" = c(workflowList$host$name), + "hostname" = c("docker"), "start_date" = c(as.POSIXct(workflowList$run$start.date)), "end_date" = c(as.POSIXct(workflowList$run$end.date)), "advanced_edit" = c(FALSE), diff --git a/tests/api.sipnet.xml b/tests/api.sipnet.xml index d690454a76e..2c390bdf0d1 100644 --- a/tests/api.sipnet.xml +++ b/tests/api.sipnet.xml @@ -36,7 +36,7 @@ - 5000000005 + 99000000003 2002-01-01 00:00:00 From db651c8652dd3a714c336c21c86107221bfd24c8 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 06:44:25 +0000 Subject: [PATCH 1250/2289] added test for xml workflow submission & updated swagger yaml to fix errors --- apps/api/Dockerfile | 3 +- apps/api/R/workflows.R | 10 +- apps/api/pecanapi-spec.yml | 134 ++++++++++++++----- apps/api/test_pecanapi.sh | 2 +- apps/api/tests/test.workflows.R | 11 ++ apps/api/tests/test_workflows/api.sipnet.xml | 46 +++++++ 6 files changed, 170 insertions(+), 36 deletions(-) create mode 100644 apps/api/tests/test_workflows/api.sipnet.xml diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile index c9f14b50b8b..4dfc56b94f3 100644 --- a/apps/api/Dockerfile +++ b/apps/api/Dockerfile @@ -15,7 +15,8 @@ EXPOSE 8000 # -------------------------------------------------------------------------- ENV AUTH_REQ="yes" \ HOST_ONLY="no" \ -
PGHOST="postgres" + PGHOST="postgres"\ + RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" # COMMAND TO RUN RUN apt-get update \ diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 1bd4b1e14a4..ff23d1bd104 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -136,14 +136,18 @@ getWorkflowDetails <- function(id, res){ #' @author Tezan Sahu #* @post / submitWorkflow <- function(req, res){ + print(req$HTTP_CONTENT_TYPE) if(req$HTTP_CONTENT_TYPE == "application/xml"){ submission_res <- submit.workflow.xml(req$postBody, req$user) if(submission_res$status == "Error"){ res$status <- 400 return(submission_res) } + res$status <- 201 + return(submission_res) + } + else{ + res$status <- 415 + return(paste("Unsupported request content type:", req$HTTP_CONTENT_TYPE)) } - - res$status <- 201 - return(submission_res) } \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index dbe8a5284c5..61b83b8f342 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -10,7 +10,7 @@ servers: info: title: PEcAn Project API description: >- - This is the API for interacting with server(s) of the __PEcAn Project__. The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. Here's the link to [PEcAn's Github Repository](https://github.com/PecanProject/pecan). + This is the API for interacting with server(s) of the __PEcAn Project__. The Predictive Ecosystem Analyser (PEcAn) Project is an open source framework initiated to meet the demands for more accessible, transparent & repeatable modeling of ecosystems. Here's the link to [PEcAn's Github Repository](https://github.com/PecanProject/pecan).

PEcAn can be considered as an ecoinformatics toolbox combined with a set of workflows that wrap around ecosystem models that allow users to effectively perform data synthesis, propagation of uncertainty through a model & ecological predictions in an integrated fashion using a diverse repository of data & models. version: "1.0.0" contact: @@ -47,7 +47,6 @@ paths: summary: Ping the server to check if it is live tags: - general - - responses: '200': description: OK @@ -70,7 +69,6 @@ paths: summary: Obtain general information about PEcAn & the details of the database host tags: - general - - responses: '200': description: OK @@ -113,7 +111,6 @@ paths: get: tags: - models - - summary: Details of requested model parameters: - in: path @@ -141,7 +138,6 @@ paths: get: tags: - models - - summary: Search for model(s) using search pattern based on model name & revision parameters: - in: query @@ -196,7 +192,6 @@ paths: get: tags: - sites - - summary: Details of a site parameters: - in: path @@ -223,7 +218,6 @@ paths: get: tags: - sites - - summary: Search for sites using search pattern based on site name parameters: - in: query @@ -248,15 +242,17 @@ paths: content: application/json: schema: - sites: - type: array - items: - type: object - properties: - id: - type: string - sitename: - type: string + type: object + properties: + sites: + type: array + items: + type: object + properties: + id: + type: string + sitename: + type: string '401': description: Authentication required @@ -269,7 +265,6 @@ paths: get: tags: - workflows - - summary: Get the list of workflows parameters: - in: query @@ -316,7 +311,6 @@ paths: workflows: type: array items: - type: object $ref: '#/components/schemas/Workflow' count: type: integer @@ -335,29 +329,113 @@ paths: post: tags: - workflows - - summary: Submit a new PEcAn workflow requestBody: required: true content: - application/json: - schema: - $ref: '#/components/schemas/Workflow' + # application/json: + # schema: + # $ref: '#/components/schemas/Workflow' application/xml: schema: - $ref: '#/components/schemas/Workflow' + xml: + name: pecan + + type: object + properties: + pfts: + type: object + properties: + pft: + type: object + properties: + name: + type: string + example: temperate.coniferous + meta.analysis: + type: object + properties: + iter: + type: integer + example: 100 + random.effects: + type: boolean + example: "FALSE" + threshold: + type: number + example: 1.2 + update: + type: string + example: AUTO + ensemble: + type: object + properties: + variable: + type: string + example: NPP + sensitivity.analysis: + type: object + properties: + quantiles: + type: object + properties: + sigma: + type: array + items: + type: integer + example: [-1, 1] + variable: + type: string + example: NPP + model: + type: object + properties: + "type": + type: string + example: SIPNET + revision: + type: string + example: r136 + run: + type: object + properties: + site: + type: object + properties: + id: + type: string + example: 772 + inputs: + type: object + properties: + met: + type: object + properties: + id: + type: string + example: "99000000003" + start.date: + type: string + example: "2002-01-01 00:00:00" + end.date: + type: string + example: "2002-12-31 00:00:00" + dbfiles: + type: string + example: pecan/dbfiles responses: '201': description: Submitted workflow successfully '401': description: Authentication required + '415': + description: Unsupported request content type /api/workflows/{id}: get: tags: - workflows - - summary: Get the details of a PEcAn Workflow 
parameters: - in: path @@ -388,7 +466,6 @@ paths: get: tags: - runs - - summary: Get the list of all runs for a specified PEcAn Workflow parameters: - in: query @@ -429,7 +506,6 @@ paths: runs: type: array items: - type: object $ref: '#/components/schemas/Run' count: type: integer @@ -449,7 +525,6 @@ paths: get: tags: - runs - - summary: Get the details of a specified PEcAn run parameters: - in: path @@ -589,7 +664,4 @@ components: securitySchemes: basicAuth: type: http - scheme: basic - - - + scheme: basic \ No newline at end of file diff --git a/apps/api/test_pecanapi.sh b/apps/api/test_pecanapi.sh index 4cd27998dd3..59064c6b3e1 100755 --- a/apps/api/test_pecanapi.sh +++ b/apps/api/test_pecanapi.sh @@ -1,6 +1,6 @@ #!/bin/bash -cd R; ./entrypoint.R & +cd R; ./entrypoint.R 2>/dev/null & PID=$! while ! curl --output /dev/null --silent http://localhost:8000 diff --git a/apps/api/tests/test.workflows.R b/apps/api/tests/test.workflows.R index b8bf5f66ab1..ce6522ee0e5 100644 --- a/apps/api/tests/test.workflows.R +++ b/apps/api/tests/test.workflows.R @@ -34,4 +34,15 @@ test_that("Calling /api/workflows/{id} with invalid workflow id returns Status 4 httr::authenticate("carya", "illinois") ) expect_equal(res$status, 404) +}) + +test_that("Submitting XML workflow to /api/workflows/ returns Status 201", { + xml_string <- paste0(xml2::read_xml("test_workflows/api.sipnet.xml")) + res <- httr::POST( + "http://localhost:8000/api/workflows/", + httr::authenticate("carya", "illinois"), + httr::content_type("application/xml"), + body = xml_string + ) + expect_equal(res$status, 201) }) \ No newline at end of file diff --git a/apps/api/tests/test_workflows/api.sipnet.xml b/apps/api/tests/test_workflows/api.sipnet.xml new file mode 100644 index 00000000000..1f871b6a731 --- /dev/null +++ b/apps/api/tests/test_workflows/api.sipnet.xml @@ -0,0 +1,46 @@ + + + + + temperate.coniferous + + + + + 3000 + FALSE + 1.2 + AUTO + + + + NPP + + + + + -1 + 1 + + NPP + + + + SIPNET + r136 + + + + + 772 + + + + 99000000003 + + + 2002-01-01 00:00:00 + 2002-12-31 00:00:00 + pecan/dbfiles + + \ No newline at end of file From 7f9acc9ed7d2dba5d0ef5def24b9ac403bca44fe Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 07:59:59 +0000 Subject: [PATCH 1251/2289] workflow submission response modified; documentation for workflow submission --- apps/api/R/submit.workflow.R | 2 +- apps/api/R/workflows.R | 1 - .../07_remote_access/01_pecan_api.Rmd | 49 +++++++++++++++++++ 3 files changed, 50 insertions(+), 2 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 0eaa40cbf5f..96daadcec68 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -68,7 +68,7 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){ res <- PEcAn.remote::rabbitmq_post_message(workflowList$host$rabbitmq$uri, "pecan", message, "rabbitmq") if(res$routed){ - return(list(workflow_id = workflow_id, status = "Submitted successfully")) + return(list(workflow_id = as.character(workflow_id), status = "Submitted successfully")) } else{ return(list(status = "Error", message = "Could not submit to RabbitMQ")) diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index ff23d1bd104..0d9c3f0ddb9 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -136,7 +136,6 @@ getWorkflowDetails <- function(id, res){ #' @author Tezan Sahu #* @post / submitWorkflow <- function(req, res){ - print(req$HTTP_CONTENT_TYPE) if(req$HTTP_CONTENT_TYPE == "application/xml"){ submission_res <- 
submit.workflow.xml(req$postBody, req$user) if(submission_res$status == "Error"){ res$status <- 400 return(submission_res) } diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index ef2493cc71f..0403a1eeae6 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -58,10 +58,12 @@ _* indicates that the particular API is under development & may not be ready for #### R Packages * [httr](https://cran.r-project.org/web/packages/httr/index.html) * [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html) +* [xml2](https://cran.r-project.org/web/packages/xml2/index.html) #### Python Packages * [requests](https://requests.readthedocs.io/en/master/) * [json](https://docs.python.org/3/library/json.html) +* [xml](https://docs.python.org/3/library/xml.html) ### {-} @@ -431,6 +433,53 @@ print(json.dumps(response.json(), indent=2)) ### {-} +### `POST /api/workflows/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Submit a workflow in XML format for execution +xmlFile <- "pecan/tests/api.sipnet.xml" +xml_string <- paste0(xml2::read_xml(xmlFile)) +res <- httr::POST( + "http://localhost:8000/api/workflows/", + httr::authenticate("carya", "illinois"), + httr::content_type("application/xml"), + body = xml_string + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $workflow_id +## [1] 99000000001 + +## $status +## [1] "Submitted successfully" +``` + +#### Python Snippet + +```python +# Submit a workflow in XML format for execution +xml_file = "pecan/tests/api.sipnet.xml" +root = xml.etree.ElementTree.parse(xml_file).getroot() +response = requests.post( + "http://localhost:8000/api/workflows/", + auth=HTTPBasicAuth('carya', 'illinois'), + headers = {'Content-Type': 'application/xml'}, + data = xml.etree.ElementTree.tostring(root, encoding='unicode', method='xml') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "workflow_id": "99000000001", +## "status": "Submitted successfully" +## } +``` + +### {-} + ### `GET /api/workflows/{id}` {.tabset .tabset-pills} #### R Snippet From 68b069b8784247545c05175119121d4519b28c9a Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 08:26:33 +0000 Subject: [PATCH 1252/2289] fixed rabbitmq function documentation --- base/remote/R/rabbitmq.R | 1 + base/remote/man/rabbitmq_send_message.Rd | 2 ++ 2 files changed, 3 insertions(+) diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R index 95479cdafbc..da7023b9d30 100644 --- a/base/remote/R/rabbitmq.R +++ b/base/remote/R/rabbitmq.R @@ -51,6 +51,7 @@ #' @param auth authentication for rabbitmq in httr:auth #' @param body the actual body to send, this is a rabbitmq message. #' @param action the rest action to perform +#' @param silent boolean to indicate if logging should be performed. #' @return will return NA if message failed, otherwise it will either #' return the resulting message, or if not available an empty string "".
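#' @examples
#' # Hypothetical usage sketch added for illustration (not part of the
#' # original patch): assumes a local RabbitMQ management API on port
#' # 15672 with the default guest/guest credentials and vhost "%2F",
#' # matching the RABBITMQ_URI default used elsewhere in this series.
#' \dontrun{
#' auth <- httr::authenticate("guest", "guest")
#' # A GET with an NA body checks for a queue, mirroring the existence
#' # check inside rabbitmq_create_queue():
#' rabbitmq_send_message("http://localhost:15672/api/queues/%2F/pecan",
#'                       auth, NA, "GET")
#' }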
rabbitmq_send_message <- function(url, auth, body, action = "POST", silent = FALSE) { diff --git a/base/remote/man/rabbitmq_send_message.Rd b/base/remote/man/rabbitmq_send_message.Rd index 407e058d616..581e40fda14 100644 --- a/base/remote/man/rabbitmq_send_message.Rd +++ b/base/remote/man/rabbitmq_send_message.Rd @@ -15,6 +15,8 @@ rabbitmq_send_message(url, auth, body, action = "POST", silent = FALSE) \item{body}{the actual body to send, this is a rabbitmq message.} \item{action}{the rest action to perform} + +\item{silent}{boolean to indicate if logging should be performed.} } \value{ will return NA if message failed, otherwise it will either From b97181540a1f347723615121ae2a031a388a4987 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 11:50:18 +0000 Subject: [PATCH 1253/2289] created endpoints for getting PFT details --- apps/api/R/entrypoint.R | 4 ++ apps/api/R/models.R | 4 +- apps/api/R/pfts.R | 77 ++++++++++++++++++++++++++ apps/api/pecanapi-spec.yml | 110 +++++++++++++++++++++++++++++++++++++ 4 files changed, 193 insertions(+), 2 deletions(-) create mode 100644 apps/api/R/pfts.R diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 8d28eab09bd..039772ad5f1 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -27,6 +27,10 @@ root$mount("/api/models", models_pr) sites_pr <- plumber::plumber$new("sites.R") root$mount("/api/sites", sites_pr) +# The endpoints mounted here are related to details of PEcAn pfts +pfts_pr <- plumber::plumber$new("pfts.R") +root$mount("/api/pfts", pfts_pr) + # The endpoints mounted here are related to details of PEcAn workflows workflows_pr <- plumber::plumber$new("workflows.R") root$mount("/api/workflows", workflows_pr) diff --git a/apps/api/R/models.R b/apps/api/R/models.R index 660ccaaa9f1..571065fd9c1 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -32,8 +32,8 @@ getModel <- function(model_id, res){ ######################################################################### -#' Search for PEcAn sites containing wildcards for filtering -#' @param name Model name search string (character) +#' Search for PEcAn model(s) containing wildcards for filtering +#' @param model_name Model name search string (character) #' @param revision Model version/revision search string (character) #' @param ignore_case Logical. 
If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search #' @return Model subset matching the model search string diff --git a/apps/api/R/pfts.R b/apps/api/R/pfts.R new file mode 100644 index 00000000000..bc5a75b432b --- /dev/null +++ b/apps/api/R/pfts.R @@ -0,0 +1,77 @@ +library(dplyr) + +#' Retrieve the details of a PEcAn PFT, based on pft_id +#' @param pft_id PFT ID (character) +#' @return PFT details +#' @author Tezan Sahu +#* @get /<pft_id> +getPfts <- function(pft_id, res){ + + dbcon <- PEcAn.DB::betyConnect() + + pft <- tbl(dbcon, "pfts") %>% + select(pft_id = id, pft_name = name, definition, pft_type, modeltype_id) %>% + filter(pft_id == !!pft_id) + + pft <- tbl(dbcon, "modeltypes") %>% + select(modeltype_id = id, model_type = name) %>% + inner_join(pft, by = "modeltype_id") + + qry_res <- pft %>% + select(-modeltype_id) %>% + collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="PFT not found")) + } + else { + return(qry_res) + } +} + +######################################################################### + +#' Search for PFTs containing wildcards for filtering +#' @param pft_name PFT name search string (character) +#' @param pft_type PFT type (either 'plant' or 'cultivar') (character) +#' @param model_type Model type search string (character) +#' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @return PFT subset matching the search criteria +#' @author Tezan Sahu +#* @get / +searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE, res){ + if(! pft_type %in% c("", "plant", "cultivar")){ + res$status <- 400 + return(list(error = "Invalid pft_type")) + } + + dbcon <- PEcAn.DB::betyConnect() + + pfts <- tbl(dbcon, "pfts") %>% + select(pft_id = id, pft_name = name, pft_type, modeltype_id) + + pfts <- tbl(dbcon, "modeltypes") %>% + select(modeltype_id = id, model_type = name) %>% + inner_join(pfts, by = "modeltype_id") + + qry_res <- pfts %>% + filter(grepl(!!pft_name, pft_name, ignore.case=ignore_case)) %>% + filter(grepl(!!pft_type, pft_type, ignore.case=ignore_case)) %>% + filter(grepl(!!model_type, model_type, ignore.case=ignore_case)) %>% + select(-modeltype_id) %>% + arrange(pft_id) %>% + collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="PFT(s) not found")) + } + else { + return(list(pfts=qry_res, count = nrow(qry_res))) + } +} \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 61b83b8f342..fcddc9a76fc 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -33,6 +33,8 @@ tags: description: Everything about PEcAn models - name: sites description: Everything about PEcAn sites + - name: pfts + description: Everything about PEcAn PFTs (Plant Functional Types) ##################################################################################################################### ##################################################### API Endpoints ################################################# /api/pfts/{pft_id}: + get: + tags: + - pfts + summary: Details of a PFT + parameters: + - in: path + name: pft_id + description: PEcAn PFT ID + required: true + schema: + type: string + responses: + '200': + description: PFT Details + content: + application/json: + schema: + $ref: '#/components/schemas/PFT' + '401':
description: Authentication required + '403': + description: Access forbidden + '404': + description: PFT not found + + /api/pfts/: + get: + tags: + - pfts + summary: Search for PFTs using search pattern matching + parameters: + - in: query + name: pft_name + description: Search string for PFT name + required: false + schema: + type: string + - in: query + name: pft_type + description: PFT Type + required: false + schema: + type: string + default: "" + enum: + - "plant" + - "cultivar" + - "" + - in: query + name: model_type + description: Search string for Model type + required: false + schema: + type: string + - in: query + name: ignore_case + description: Indicator of case sensitive or case insensitive search + required: false + schema: + type: string + default: "TRUE" + enum: + - "TRUE" + - "FALSE" + responses: + '200': + description: List of PFTs matching search pattern + content: + application/json: + schema: + type: object + properties: + pfts: + type: array + items: + type: object + properties: + pft_id: + type: string + pft_name: + type: string + pft_type: + type: string + model_type: + type: string + + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: PFT(s) not found + /api/workflows/: get: tags: - workflows From ac6082b0858c5aac232f39bae2738fcbf5e555b0 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 13:39:23 +0000 Subject: [PATCH 1254/2289] added documentation for pft endpoints --- .../07_remote_access/01_pecan_api.Rmd | 101 ++++++++++++++++++ 1 file changed, 101 insertions(+) diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 0403a1eeae6..0cb57cde81a 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -39,6 +39,10 @@ The currently implemented functionalities include: * `GET /api/sites/`: Search for site(s) using search pattern based on site name * `GET /api/sites/{site_id}`: Fetch the details of specific site +* __PFTs:__ + * `GET /api/pfts/`: Search for PFT(s) using search pattern based on PFT name, PFT type & Model type + * `GET /api/pfts/{pft_id}`: Fetch the details of specific PFT + * __Workflows:__ * `GET /api/workflows/`: Retrieve a list of PEcAn workflows * `POST /api/workflows/` *: Submit a new PEcAn workflow @@ -367,6 +371,103 @@ print(json.dumps(response.json(), indent=2)) ### {-} +### `GET /api/pfts/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Search pft(s) of "plant" type with `pft_name` containing "temperate" & belonging to `model_type` "SIPNET" +res <- httr::GET( + "http://localhost:8000/api/pfts/?pft_name=temperate&pft_type=plant&model_type=sipnet&ignore_case=TRUE", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $pfts +## model_type pft_id pft_name pft_type +## +## 1 SIPNET 41 temperate.deciduous plant +## 2 SIPNET 1000000105 temperate.deciduous.IF plant +## 3 SIPNET 1000000107 temperate.deciduous_SDA plant +## 4 SIPNET 1000000115 temperate.deciduous.ALL plant +## 5 SIPNET 1000000118 temperate.deciduous.ALL.NORMAL plant +## 6 SIPNET 2000000017 tundra.deciduous.NGEE_Arctic plant +## 7
SIPNET 2000000045 temperate.broadleaf.deciduous plant + +## $count +## [1] 7 +``` + +#### Python Snippet + +```python +# Search pft(s) of "plant" type with `pft_name` containing "temperate" & belonging to `model_type` "SIPNET" +response = requests.get( + "http://localhost:8000/api/pfts/?pft_name=temperate&pft_type=plant&model_type=sipnet&ignore_case=TRUE", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "pfts": [ +## { +## "model_type": "SIPNET", +## "pft_id": 41, +## "pft_name": "temperate.deciduous", +## "pft_type": "plant" +## }, +## ... +## ], +## "count": 7 +## } +``` + +### {-} + +### `GET /api/pfts/{pft_id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Fetch the details of PEcAn PFT with id = 2000000045 +res <- httr::GET( + "http://localhost:8000/api/pfts/2000000045", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## model_type pft_id pft_name definition pft_type +## +## 1 SIPNET 2000000045 temperate.broadleaf.deciduous SIPNET Temperate Deciduous PFT with priors on all parameters plant +``` + +#### Python Snippet + +```python +# Fetch the details of PEcAn PFT with id = 2000000045 +response = requests.get( + "http://localhost:8000/api/pfts/2000000045", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## [ +## { +## "model_type": "SIPNET", +## "pft_id": 2000000045, +## "pft_name": "temperate.broadleaf.deciduous", +## "definition": "SIPNET Temperate Deciduous PFT with priors on all parameters", +## "pft_type": "plant" +## } +## ] +``` + +### {-} + ### `GET /api/workflows/` {.tabset .tabset-pills} #### R Snippet From 51a0af3efab3fb5e629b727d546deb29b3a626ce Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 19:12:15 +0000 Subject: [PATCH 1255/2289] changed absolute directories to env vars --- apps/api/Dockerfile | 13 +++++++---- apps/api/R/submit.workflow.R | 24 ++++++++++++-------- apps/api/tests/test_workflows/api.sipnet.xml | 8 ------- docker-compose.yml | 2 ++ 4 files changed, 25 insertions(+), 22 deletions(-) diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile index 4dfc56b94f3..25112a1449e 100644 --- a/apps/api/Dockerfile +++ b/apps/api/Dockerfile @@ -13,11 +13,7 @@ EXPOSE 8000 # -------------------------------------------------------------------------- # Variables to store in docker image (most of them come from the base image) # -------------------------------------------------------------------------- -ENV AUTH_REQ="yes" \ - HOST_ONLY="no" \ - PGHOST="postgres"\ - RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" - + # COMMAND TO RUN RUN apt-get update \ && apt-get install libsodium-dev -y \ @@ -27,6 +23,13 @@ RUN apt-get update \ && Rscript -e "devtools::install_github('rstudio/swagger')" \ && Rscript -e "devtools::install_github('rstudio/plumber')" +ENV AUTH_REQ="TRUE" \ + HOST_ONLY="FALSE" \ + PGHOST="postgres"\ + RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F"\ + DATA_DIR="/data/"\ + DBFILES_DIR="/data/dbfiles/" + WORKDIR /api/R CMD Rscript entrypoint.R diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 96daadcec68..6004057d697 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -12,12 +12,12 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){ # Fix details about the database workflowList$database <- list(bety = PEcAn.DB::get_postgres_envvars( - host = "localhost", - dbname = "bety", - user = "bety", - 
password = "bety", - driver = "PostgreSQL" - ) + host = "localhost", + dbname = "bety", + user = "bety", + password = "bety", + driver = "PostgreSQL" + ) ) # Fix RabbitMQ details @@ -52,12 +52,18 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){ insert.attribute(workflowList) # Fix the output directory - outdir <- paste0("/data/workflows/PEcAn_", workflow_id) + outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id) workflowList$outdir <- outdir # Create output diretory dir.create(outdir, recursive=TRUE) + # Modify the `dbfiles` path & create the directory if needed + workflowList$run$dbfiles <- Sys.getenv("DBFILES_DIR", "/data/dbfiles/") + if(! dir.exists(workflowList$run$dbfiles)){ + dir.create(workflowList$run$dbfiles, recursive = TRUE) + } + # Convert settings list to XML & save it into outdir workflowXml <- PEcAn.settings::listToXml(workflowList, "pecan") XML::saveXML(workflowXml, paste0(outdir, "/pecan.xml")) @@ -175,6 +181,6 @@ insert.attribute <- function(workflowList){ value_json <- as.character(jsonlite::toJSON(properties, auto_unbox = TRUE)) res <- DBI::dbSendStatement(dbcon, - "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", - list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) + "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", + list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) } \ No newline at end of file diff --git a/apps/api/tests/test_workflows/api.sipnet.xml b/apps/api/tests/test_workflows/api.sipnet.xml index 1f871b6a731..d71eeb00b81 100644 --- a/apps/api/tests/test_workflows/api.sipnet.xml +++ b/apps/api/tests/test_workflows/api.sipnet.xml @@ -17,14 +17,6 @@ NPP
- - - -1 - 1 - - NPP - - SIPNET r136 diff --git a/docker-compose.yml b/docker-compose.yml index fda167091c5..37c7daffefb 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -330,6 +330,8 @@ services: - HOST_ONLY=${HOST_ONLY:-FALSE} - AUTH_REQ=${AUTH_REQ:-TRUE} - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} + - DATA_DIR=${DATA_DIR:-/data/} + - DBFILES_DIR=${DBFILES_DIR:-/data/dbfiles/} labels: - "traefik.enable=true" - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/api" From 322e6ffdb6ead1c7174bfb1568a3c10103a933bd Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 22 Jul 2020 20:22:26 +0000 Subject: [PATCH 1256/2289] added information about modeltype_format in model details --- apps/api/R/models.R | 8 ++++++-- apps/api/R/runs.R | 10 +++++++--- apps/api/pecanapi-spec.yml | 14 ++++++++++++-- 3 files changed, 25 insertions(+), 7 deletions(-) diff --git a/apps/api/R/models.R b/apps/api/R/models.R index 571065fd9c1..fbf357dfc19 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -19,13 +19,17 @@ getModel <- function(model_id, res){ qry_res <- Model %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { + PEcAn.DB::db.close(dbcon) res$status <- 404 return(list(error="Model not found")) } else { + inputs_req <- tbl(dbcon, "modeltypes_formats") %>% + filter(modeltype_id == bit64::as.integer64(qry_res$modeltype_id)) %>% + select(input=tag, required) %>% collect() + qry_res <- qry_res %>% tibble::add_column(inputs = gsub('(\")', '"', jsonlite::toJSON(inputs_req))) + PEcAn.DB::db.close(dbcon) return(qry_res) } } diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R index 48f790ee2a7..abae4f9d5c9 100644 --- a/apps/api/R/runs.R +++ b/apps/api/R/runs.R @@ -7,7 +7,7 @@ library(dplyr) #' @return List of runs (belonging to a particuar workflow) #' @author Tezan Sahu #* @get / -getRuns <- function(req, workflow_id, offset=0, limit=50, res){ +getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){ if (! limit %in% c(10, 20, 50, 100, 500)) { res$status <- 400 return(list(error = "Invalid value for parameter")) @@ -20,8 +20,12 @@ getRuns <- function(req, workflow_id, offset=0, limit=50, res){ Runs <- tbl(dbcon, "ensembles") %>% select(runtype, ensemble_id=id, workflow_id) %>% - full_join(Runs, by="ensemble_id") %>% - filter(workflow_id == !!workflow_id) + full_join(Runs, by="ensemble_id") + + if(! 
is.null(workflow_id)){ + Runs <- Runs %>% + filter(workflow_id == !!workflow_id) + } qry_res <- Runs %>% arrange(id) %>% diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index fcddc9a76fc..d66f9f34b19 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -567,7 +567,7 @@ paths: - in: query name: workflow_id description: ID of the PEcAn Workflow - required: true + required: false schema: type: string - in: query @@ -662,6 +662,15 @@ components: type: string model_type: type: string + inputs: + type: array + items: + type: object + properties: + input: + type: string + required: + type: boolean Run: properties: @@ -762,7 +771,8 @@ components: notes: type: string runs: - type: string + type: integer + example: 1 pecan_edit: type: string status: From e0a5e3d161f4a21a847afd87ece10c4de1d1f0e7 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 23 Jul 2020 04:33:35 +0000 Subject: [PATCH 1257/2289] converted responses for model, site & run from tibble to list --- apps/api/R/models.R | 10 ++++++++-- apps/api/R/runs.R | 9 +++++++-- apps/api/R/sites.R | 7 ++++++- apps/api/pecanapi-spec.yml | 4 ++++ 4 files changed, 25 insertions(+), 5 deletions(-) diff --git a/apps/api/R/models.R b/apps/api/R/models.R index fbf357dfc19..f8ca3e2a044 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -25,12 +25,18 @@ getModel <- function(model_id, res){ return(list(error="Model not found")) } else { + # Convert the response from tibble to list + response <- list() + for(colname in colnames(qry_res)){ + response[colname] <- qry_res[colname] + } + inputs_req <- tbl(dbcon, "modeltypes_formats") %>% filter(modeltype_id == bit64::as.integer64(qry_res$modeltype_id)) %>% select(input=tag, required) %>% collect() - qry_res <- qry_res %>% tibble::add_column(inputs = gsub('(\")', '"', jsonlite::toJSON(inputs_req))) + response$inputs <- jsonlite::fromJSON(gsub('(\")', '"', jsonlite::toJSON(inputs_req))) PEcAn.DB::db.close(dbcon) - return(qry_res) + return(response) } } diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R index abae4f9d5c9..295c2368886 100644 --- a/apps/api/R/runs.R +++ b/apps/api/R/runs.R @@ -90,7 +90,7 @@ getRunDetails <- function(id, res){ dbcon <- PEcAn.DB::betyConnect() Runs <- tbl(dbcon, "runs") %>% - select(-outdir, -outprefix, -setting) + select(-outdir, -outprefix, -setting, -created_at, -updated_at) Runs <- tbl(dbcon, "ensembles") %>% select(runtype, ensemble_id=id, workflow_id) %>% @@ -106,6 +106,11 @@ getRunDetails <- function(id, res){ return(list(error="Run with specified ID was not found")) } else { - return(qry_res) + # Convert the response from tibble to list + response <- list() + for(colname in colnames(qry_res)){ + response[colname] <- qry_res[colname] + } + return(response) } } \ No newline at end of file diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R index 17343650cb9..7cc6599ed20 100644 --- a/apps/api/R/sites.R +++ b/apps/api/R/sites.R @@ -23,7 +23,12 @@ getSite <- function(site_id, res){ return(list(error="Site not found")) } else { - return(qry_res) + # Convert the response from tibble to list + response <- list() + for(colname in colnames(qry_res)){ + response[colname] <- qry_res[colname] + } + return(response) } } diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index d66f9f34b19..c3aa89a1c05 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -692,6 +692,10 @@ components: type: string finish_time: type: string + started_at: + type: string + finished_at: + type: string Site: properties: 
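The tibble-to-list copy introduced in this patch changes how a single database record is serialized: jsonlite renders a one-row data frame as a JSON array of objects, while a named list of scalars renders as a single object. A minimal standalone sketch of that difference (hypothetical column values, with jsonlite called directly rather than through plumber's serializer, whose exact options may differ):

```r
library(jsonlite)

# A one-row query result, as dplyr::collect() would return it:
qry_res <- tibble::tibble(id = 676, sitename = "Willow Creek (US-WCr)")
toJSON(qry_res)
#> [{"id":676,"sitename":"Willow Creek (US-WCr)"}]

# Copying each column into a named list (the pattern used above)
# yields a single JSON object instead of a one-element array:
response <- list()
for (colname in colnames(qry_res)) {
  response[colname] <- qry_res[colname]
}
toJSON(response, auto_unbox = TRUE)
#> {"id":676,"sitename":"Willow Creek (US-WCr)"}
```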
From a0f7f3989891ae86018b27c6e6ea1b61c38a1e29 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 23 Jul 2020 07:05:46 +0000 Subject: [PATCH 1258/2289] /runs/ now returns output details & variables if present on host --- apps/api/R/runs.R | 49 ++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 48 insertions(+), 1 deletion(-) diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R index 295c2368886..907eaac2a11 100644 --- a/apps/api/R/runs.R +++ b/apps/api/R/runs.R @@ -80,7 +80,7 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){ ################################################################################################# -#' Get the of the run specified by the id +#' Get the details of the run specified by the id #' @param id Run id (character) #' @return Details of requested run #' @author Tezan Sahu @@ -111,6 +111,53 @@ getRunDetails <- function(id, res){ for(colname in colnames(qry_res)){ response[colname] <- qry_res[colname] } + + # If outputs exist on the host, add them to the response + outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", response$workflow_id, "/out/", id) + if(dir.exists(outdir)){ + response$outputs <- getRunOutputs(outdir) + } + return(response) } +} + +################################################################################################# + +#' Get the outputs of a run (if the files exist on the host) +#' @param outdir Run output directory (character) +#' @return Output details of the run +#' @author Tezan Sahu + +getRunOutputs <- function(outdir){ + outputs <- list() + if(file.exists(paste0(outdir, "/logfile.txt"))){ + outputs$logfile <- "logfile.txt" + } + + if(file.exists(paste0(outdir, "/README.txt"))){ + outputs$info <- "README.txt" + } + + year_files <- list.files(outdir, pattern="*.nc$") + years <- stringr::str_replace_all(year_files, ".nc", "") + years_data <- c() + outputs$years <- list() + for(year in years){ + var_lines <- readLines(paste0(outdir, "/", year, ".nc.var")) + keys <- stringr::word(var_lines, 1) + values <- stringr::word(var_lines, 2, -1) + vars <- list() + for(i in 1:length(keys)){ + vars[keys[i]] <- values[i] + } + years_data <- c(years_data, list(list( + data = paste0(year, ".nc"), + variables = vars + ))) + } + for(i in 1:length(years)){ + outputs$years[years[i]] <- years_data[i] + } + return(outputs) } \ No newline at end of file From 8da18b83ed63bf9b6f5ffd1e7db837046d929119 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 23 Jul 2020 10:00:43 +0000 Subject: [PATCH 1259/2289] updated documentation --- apps/api/R/pfts.R | 8 +- apps/api/pecanapi-spec.yml | 204 ++++++++------ .../07_remote_access/01_pecan_api.Rmd | 259 +++++++++++++----- 3 files changed, 317 insertions(+), 154 deletions(-) diff --git a/apps/api/R/pfts.R b/apps/api/R/pfts.R index bc5a75b432b..f8e508001e6 100644 --- a/apps/api/R/pfts.R +++ b/apps/api/R/pfts.R @@ -28,7 +28,13 @@ getPfts <- function(pft_id, res){ return(list(error="PFT not found")) } else { - return(qry_res) + # Convert the response from tibble to list + response <- list() + for(colname in colnames(qry_res)){ + response[colname] <- qry_res[colname] + } + + return(response) } } diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index c3aa89a1c05..452189e8c8e 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -407,7 +407,7 @@ paths: workflows: type: array items: - $ref: '#/components/schemas/Workflow' + $ref: '#/components/schemas/Workflow_GET' count: type: integer next_page: @@ -429,96 +429,13 @@ paths: 
requestBody: required: true content: - # application/json: - # schema: - # $ref: '#/components/schemas/Workflow' application/xml: schema: - xml: - name: pecan + $ref: '#/components/schemas/Workflow_POST' + application/json: + schema: + $ref: '#/components/schemas/Workflow_POST' - type: object - properties: - pfts: - type: object - properties: - pft: - type: object - properties: - name: - type: string - example: temperate.coniferous - meta.analysis: - type: object - properties: - iter: - type: integer - example: 100 - random.effects: - type: boolean - example: "FALSE" - threshold: - type: number - example: 1.2 - update: - type: string - example: AUTO - ensemble: - type: object - properties: - variable: - type: string - example: NPP - sensitivity.analysis: - type: object - properties: - quantiles: - type: object - properties: - sigma: - type: array - items: - type: integer - example: [-1, 1] - variable: - type: string - example: NPP - model: - type: object - properties: - "type": - type: string - example: SIPNET - revision: - type: string - example: r136 - run: - type: object - properties: - site: - type: object - properties: - id: - type: string - example: 772 - inputs: - type: object - properties: - met: - type: object - properties: - id: - type: string - example: "99000000003" - start.date: - type: string - example: "2002-01-01 00:00:00" - end.date: - type: string - example: "2002-12-31 00:00:00" - dbfiles: - type: string - example: pecan/dbfiles responses: '201': description: Submitted workflow successfully @@ -546,7 +463,7 @@ paths: content: application/json: schema: - $ref: '#/components/schemas/Workflow' + $ref: '#/components/schemas/Workflow_GET' '401': description: Authentication required '403': @@ -696,6 +613,27 @@ components: type: string finished_at: type: string + outputs: + type: object + properties: + logfile: + type: string + info: + type: string + years: + type: object + properties: + "": + type: object + properties: + data: + type: string + variables: + type: object + properties: + "": + type: string + Site: properties: @@ -741,7 +679,7 @@ components: model_type: type: string - Workflow: + Workflow_GET: properties: id: type: string @@ -785,6 +723,92 @@ components: type: string input_poolinitcond: type: string + + Workflow_POST: + type: object + xml: + name: pecan + properties: + pfts: + type: object + properties: + pft: + type: object + properties: + name: + type: string + example: temperate.coniferous + meta.analysis: + type: object + properties: + iter: + type: integer + example: 100 + random.effects: + type: boolean + example: "FALSE" + threshold: + type: number + example: 1.2 + update: + type: string + example: AUTO + ensemble: + type: object + properties: + variable: + type: string + example: NPP + sensitivity.analysis: + type: object + properties: + quantiles: + type: object + properties: + sigma: + type: array + items: + type: integer + example: [-1, 1] + variable: + type: string + example: NPP + model: + type: object + properties: + "type": + type: string + example: SIPNET + revision: + type: string + example: r136 + run: + type: object + properties: + site: + type: object + properties: + id: + type: string + example: 772 + inputs: + type: object + properties: + met: + type: object + properties: + id: + type: string + example: "99000000003" + start.date: + type: string + example: "2002-01-01 00:00:00" + end.date: + type: string + example: "2002-12-31 00:00:00" + dbfiles: + type: string + example: pecan/dbfiles securitySchemes: basicAuth: type: http diff --git 
a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 0cb57cde81a..720267fd781 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -226,8 +226,25 @@ res <- httr::GET( print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` -## model_id model_name revision modeltype_id model_type -## 1 1000000022 SIPNET ssr 3 SIPNET +## $modeltype_id +## [1] 3 + +## $model_type +## [1] "SIPNET" + +## $model_id +## [1] 1000000022 + +## $model_name +## [1] "SIPNET" + +## $revision +## [1] "ssr" + +## $inputs +## input required +## 1 met TRUE +## 2 poolinitcond FALSE ``` #### Python Snippet @@ -241,15 +258,23 @@ response = requests.get( print(json.dumps(response.json(), indent=2)) ``` ``` -## [ -## { -## "model_id": "1000000022", -## "model_name": "SIPNET", -## "revision": "ssr", -## "modeltype_id": 3, -## "model_type": "SIPNET" -## } -## ] +## { +## "model_id": "1000000022", +## "model_name": "SIPNET", +## "revision": "ssr", +## "modeltype_id": 3, +## "model_type": "SIPNET", +## "inputs": [ +## { +## "input": "met", +## "required": true +## }, +## { +## "input": "poolinitcond", +## "required": false +## } +## ] +## } ``` ### {-} @@ -332,10 +357,50 @@ res <- httr::GET( print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` -## id city state country mat map soil notes soilnotes sitename -## 1 676 Park Falls Ranger District Wisconsin US 4 815 MF Willow Creek (US-WCr) -## greenhouse sand_pct clay_pct time_zone -## 1 FALSE 42.52 20.17 America/Chicago +## $id +## [1] 676 + +## $city +## [1] "Park Falls Ranger District" + +## $state +## [1] "Wisconsin" + +## $country +## [1] "US" + +## $mat +## [1] 4 + +## $map +## [1] 815 + +## $soil +## [1] "" + +## $som +## [1] "NA" + +## $notes +## [1] "MF" + +## $soilnotes +## [1] "" + +## $sitename +## [1] "Willow Creek (US-WCr)" + +## $greenhouse +## [1] FALSE + +## $sand_pct +## [1] 42.52 + +## $clay_pct +## [1] 20.17 + +## $time_zone +## [1] "America/Chicago" ``` #### Python Snippet @@ -349,24 +414,22 @@ response = requests.get( print(json.dumps(response.json(),
indent=2)) ``` ``` -## [ -## { -## "id": 676, -## "city": "Park Falls Ranger District", -## "state": "Wisconsin", -## "country": "US", -## "mat": 4, -## "map": 815, -## "soil": "", -## "notes": "MF", -## "soilnotes": "", -## "sitename": "Willow Creek (US-WCr)", -## "greenhouse": false, -## "sand_pct": 42.52, -## "clay_pct": 20.17, -## "time_zone": "America/Chicago" -## } -## ] +## { +## "id": 676, +## "city": "Park Falls Ranger District", +## "state": "Wisconsin", +## "country": "US", +## "mat": 4, +## "map": 815, +## "soil": "", +## "notes": "MF", +## "soilnotes": "", +## "sitename": "Willow Creek (US-WCr)", +## "greenhouse": false, +## "sand_pct": 42.52, +## "clay_pct": 20.17, +## "time_zone": "America/Chicago" +## } ``` ### {-} @@ -439,9 +502,20 @@ res <- httr::GET( print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` -## model_type pft_id pft_name definition pft_type -## -## 1 SIPNET 2000000045 temperate.broadleaf.deciduous SIPNET Temperate Deciduous PFT with priors on all parameters plant +## $model_type +## [1] "SIPNET" + +## $pft_id +## [1] 2000000045 + +## $pft_name +## [1] "temperate.broadleaf.deciduous" + +## $definition +## [1] "SIPNET Temperate Deciduous PFT with priors on all parameters" + +## $pft_type +## [1] "plant" ``` #### Python Snippet @@ -455,15 +529,13 @@ response = requests.get( print(json.dumps(response.json(), indent=2)) ``` ``` -## [ -## { -## "model_type": "SIPNET", -## "pft_id": 2000000045, -## "pft_name": "temperate.broadleaf.deciduous", -## "definition": "SIPNET Temperate Deciduous PFT with priors on all parameters", -## "pft_type": "plant" -## } -## ] +## { +## "model_type": "SIPNET", +## "pft_id": 2000000045, +## "pft_name": "temperate.broadleaf.deciduous", +## "definition": "SIPNET Temperate Deciduous PFT with priors on all parameters", +## "pft_type": "plant" +## } ``` ### {-} @@ -745,18 +817,67 @@ #### R Snippet ```R -# Get details of run with `id` = '99000000282' +# Get details of run with `id` = '99000000282' res <- httr::GET( "http://localhost:8000/api/runs/99000000282", httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` -## runtype ensemble_id workflow_id id model_id site_id start_time finish_time parameter_list -## 1 ensemble 1000017624 1000009172 1002042201 1000000022 796 2005-01-01 2011-12-31 ensemble=1 -## created_at updated_at started_at finished_at -## 1 2018-04-11 22:20:31 2018-04-11 22:22:08 2018-04-11 18:21:57 2018-04-11 18:22:08 +## $runtype +## [1] "sensitivity analysis" + +## $ensemble_id +## [1] 99000000016 + +## $workflow_id +## [1] 99000000031 + +## $id +## [1] 99000000282 + +## $model_id +## [1] 1000000014 + +## $site_id +## [1] 772 + +## $start_time +## [1] "2002-01-01" + +## $finish_time +## [1] "2002-12-31" + +## $parameter_list +## [1] "quantile=MEDIAN,trait=all,pft=temperate.coniferous" + +## $started_at +## [1] "2020-07-22 07:02:40" + +## $finished_at +## [1] "2020-07-22 07:02:57" + +## $outputs +## $outputs$logfile +## [1] "logfile.txt" + +## $outputs$info +## [1] "README.txt" + +## $outputs$years +## $outputs$years$`2002` +## $outputs$years$`2002`$data +## [1] "2002.nc" + +## $outputs$years$`2002`$variables +## $outputs$years$`2002`$variables$GPP +## [1] "Gross Primary Productivity" + +## $outputs$years$`2002`$variables$NPP +## [1] "Net Primary Productivity" + +## ... ``` #### Python Snippet ```python # Get details of run with `id` = '1002042201' response = requests.get( "http://localhost:8000/api/runs/1002042201", auth=HTTPBasicAuth('carya', 'illinois') ) print(json.dumps(response.json(), indent=2)) ``` ``` -## [ -## { -## "runtype": "ensemble", -## "ensemble_id": 1000017624, -## "workflow_id": 1000009172, -## "id": 1002042201, -## "model_id": 1000000022, -## "site_id": 796, -## "parameter_list": "ensemble=1", -## "start_time": "2005-01-01", -## "finish_time": "2011-12-31" +## { +## "runtype": "ensemble", +## "ensemble_id": 1000017624, +## "workflow_id": 1000009172, +## "id": 1002042201, +## "model_id": 1000000022, +## "site_id": 796, +## "parameter_list": "ensemble=1", +## "start_time": "2005-01-01", +## "finish_time": "2011-12-31", +## "outputs": { +## "logfile": "logfile.txt", +## "info": "README.txt", +## "years": { +## "2002": { +## "data": "2002.nc", +## "variables": { +## "GPP": "Gross Primary Productivity", +## "NPP": "Net Primary Productivity", +## ...
+## } +## } +## } ## } -## ] +## } ``` ### {-} From 824a945189a1a844ab69026d8137212b7979bd0e Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 23 Jul 2020 11:40:35 +0000 Subject: [PATCH 1260/2289] created endpoint for plotting run outputs; updated documentation --- apps/api/R/runs.R | 49 ++++++++++++++++++++++++++++++++++++++++++++++- apps/api/pecanapi-spec.yml | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-- .../07_remote_access/01_pecan_api.Rmd | 43 ++++++++++++++++++++++++++++++++++++++-- book_source/figures/run_output_plot.png | Bin 0 -> 34996 bytes 4 files changed, 145 insertions(+), 13 deletions(-) create mode 100644 book_source/figures/run_output_plot.png diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R index 907eaac2a11..f6a3dd045ca 100644 --- a/apps/api/R/runs.R +++ b/apps/api/R/runs.R @@ -81,11 +81,11 @@ ################################################################################################# #' Get the details of the run specified by the id -#' @param id Run id (character) +#' @param run_id Run id (character) #' @return Details of requested run #' @author Tezan Sahu -#* @get /<id> -getRunDetails <- function(id, res){ +#* @get /<run_id> +getRunDetails <- function(run_id, res){ dbcon <- PEcAn.DB::betyConnect() @@ -95,7 +95,7 @@ getRunDetails <- function(id, res){ Runs <- tbl(dbcon, "ensembles") %>% select(runtype, ensemble_id=id, workflow_id) %>% full_join(Runs, by="ensemble_id") %>% - filter(id == !!id) + filter(id == !!run_id) qry_res <- Runs %>% collect() @@ -113,7 +113,7 @@ # If outputs exist on the host, add them to the response - outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", response$workflow_id, "/out/", id) + outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", response$workflow_id, "/out/", run_id) if(dir.exists(outdir)){ response$outputs <- getRunOutputs(outdir) } @@ -124,6 +124,45 @@ getRunDetails <- function(id, res){ ################################################################################################# +#' Plot the results obtained from a run +#' @param run_id Run id (character) +#' @param year the year this data is for +#' @param y_var the variable to plot along the y-axis. +#' @param x_var the variable to plot along the x-axis, by default time is used. +#' @param width the width of the image generated, default is 800 pixels. +#' @param height the height of the image generated, default is 600 pixels. +#' @return PNG plot of the requested variable for the run +#' @author Tezan Sahu +#* @get /<run_id>/graph/<year>/<y_var> +#* @png +plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600, res){ + # Get workflow_id for the run + dbcon <- PEcAn.DB::betyConnect() + + Run <- tbl(dbcon, "runs") %>% + filter(id == !!run_id) + + workflow_id <- tbl(dbcon, "ensembles") %>% + select(ensemble_id=id, workflow_id) %>% + full_join(Run, by="ensemble_id") %>% + filter(id == !!run_id) %>% + pull(workflow_id) + + PEcAn.DB::db.close(dbcon) + + # Check if the data file exists on the host + datafile <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/out/", run_id, "/", year, ".nc") + if(!
diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
index 452189e8c8e..7466c7de57b 100644
--- a/apps/api/pecanapi-spec.yml
+++ b/apps/api/pecanapi-spec.yml
@@ -534,21 +534,21 @@ paths:
         description: Workflow with specified ID was not found

-  /api/runs/{id}:
+  /api/runs/{run_id}:
     get:
       tags:
         - runs
       summary: Get the details of a specified PEcAn run
       parameters:
         - in: path
-          name: id
+          name: run_id
           description: ID of the PEcAn run
           required: true
           schema:
             type: string
       responses:
         '200':
           description: Details about the requested run
           content:
             application/json:
               schema:
                 $ref: '#/components/schemas/Run'
         '401':
           description: Authentication required
         '404':
           description: Workflow with specified ID was not found
-
+  /api/runs/{run_id}/graph/{year}/{y_var}:
+    get:
+      tags:
+        - runs
+      summary: Plot the desired variables for a run output
+      parameters:
+        - in: path
+          name: run_id
+          description: ID of the PEcAn run
+          required: true
+          schema:
+            type: string
+        - in: path
+          name: year
+          description: Year the plot is for
+          required: true
+          schema:
+            type: string
+        - in: path
+          name: y_var
+          description: Variable to plot along the Y-axis
+          required: true
+          schema:
+            type: string
+        - in: query
+          name: x_var
+          description: Variable to plot along the X-axis
+          required: false
+          schema:
+            type: string
+            default: time
+        - in: query
+          name: width
+          description: Width of the image generated
+          required: false
+          schema:
+            type: string
+            default: 800
+        - in: query
+          name: height
+          description: Height of the image generated
+          required: false
+          schema:
+            type: string
+            default: 600
+      responses:
+        '200':
+          description: Plot of the desired output variables obtained from a run
+          content:
+            image/png:
+              schema:
+                type: string
+                format: binary
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Run data not found
 #####################################################################################################################
 ###################################################### Components ###################################################
 #####################################################################################################################

diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
index 720267fd781..562cbe2fdb8 100644
--- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -45,12 +45,13 @@ The currently implemented functionalities include:

 * __Workflows:__
   * `GET /api/workflows/`: Retrieve a list of PEcAn workflows
-  * `POST /api/workflows/` *: Submit a new PEcAn workflow
+  * `POST /api/workflows/`: Submit a new PEcAn workflow
   * `GET /api/workflows/{id}`: Obtain the details of a particular PEcAn workflow by supplying its ID

 * __Runs:__
-  * `GET /api/runs`: Get the list of all the runs for a specified PEcAn Workflow
-  * `GET /api/runs/{id}`: Fetch the details of a specified PEcAn run
+  * `GET /api/runs`: Get the
list of all the runs + * `GET /api/runs/{run_id}`: Fetch the details of a specified PEcAn run + * `GET /api/runs/{run_id}/graph/{year}/{y_var}`: Plot the graph of desired output variables for a run _* indicates that the particular API is under development & may not be ready for use_ @@ -812,7 +813,7 @@ print(json.dumps(response.json(), indent=2)) ### {-} -### `GET /api/runs/{id}` {.tabset .tabset-pills} +### `GET /api/runs/{run_id}` {.tabset .tabset-pills} #### R Snippet @@ -918,4 +919,38 @@ print(json.dumps(response.json(), indent=2)) ## } ``` +### {-} + +### `GET /api/runs/{run_id}/graph/{year}/{y_var}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Plot the Gross Primary Productivity vs Time for the run with ID `99000000282` for the year 2002 +res <- httr::GET( + "http://localhost:8000/api/runs/99000000282/graph/2002/GPP", + httr::authenticate("carya", "illinois") + ) +writeBin(res$content, "test.png") +``` +```{r, echo=FALSE, fig.align='center'} +knitr::include_graphics(rep("figures/run_output_plot.png")) +``` + +#### Python Snippet + +```python +# Plot the Gross Primary Productivity vs Time for the run with ID `99000000282` for the year 2002 +response = requests.get( + "http://localhost:8000/api/runs/99000000282/graph/2002/GPP", + auth=HTTPBasicAuth('carya', 'illinois') + ) +with open("test.png", "wb") as file: + file.write(response.content) +``` +```{r, echo=FALSE, fig.align='center'} +knitr::include_graphics(rep("figures/run_output_plot.png")) +``` + + ### {-} diff --git a/book_source/figures/run_output_plot.png b/book_source/figures/run_output_plot.png new file mode 100644 index 0000000000000000000000000000000000000000..d8179f2970bd3b75573b5dc066f089a7ca89998d GIT binary patch literal 34996 zcmc$`c|4T;-#%On#y++oi5bRHSu<*~&t$D^sVs>Q4MmZJP{vx;C`!>-LfN+@`!XTX z6cVy4d-f%%es6v6`+4qv?&q)Pc|G&GuGiINPBZ8ES&sMdK91wlT~lLyPBvk-UAuO1 z8XD-F+O=!<$j-k=2KdSHtNo3;c42lI>JU%+Je^DN_2)nHdE@GhwdBrP6zL20PM#Tt zf=t*-jbIgkdzP^89I`J!LqU8DHwwY%KwjVrBT|y`At7ha<7_y6%i4!K0h#gYoR8b zYaiRTqIHFPM+AAe_P4o-#+fr`p8FKHj0d$ds2eCn3l}YjsCh@(KN8jO+jNSntv2hp zt-P@?{472`y?0hn&boU-FOE*qCfzw`QIMIH)%e#S_iX-1;RC0>{Bsq|KDfJg?}o(5 zpMSGD+p8cc$rQFqKgPY^cFwyh_|JaV?mGdKQy()z;(iQP9jOcr{Jq6P3fvrJX{b1I z+%tO9Tg-PPhy{sl3ga9ozig2ZE{AL^c5Km$72UUQU&^CLK7W62Jktyi?tG?oJ8*k* z3BD^u^+vL~ucwhw^tx}kH)@y|#KoHGzidb{_>ih``_IoyL{x!I<)MHrFOor(_q;R>Vk<`N z2PP!Xsx+(35sgSIDLJ%%|6SA}pT!a56jdJ58}E7RbLVC~E8pB);Ii9wWY}LBJ5f;e zgBL-ZueMg}*fYS1Wb<0A^#566oGlSDH(d8D)vIdIM185b`7YlHcpX2E#Cmvm+*+Tj z{FZpGc5{rKh*eKD7x+6-){4YrE&dtCRy^b@e9mG~b+!Lk^If-r zSJ6>X)xXyVM!(&uW{5vB^WpX%W$eX2+dr@U8vgs|CaHCGx{KCp;J)!KA2odFQm06? 
z#mmiCrnTG4CrActLEMBY+VzkE{3+IvynH#LC zILi3hXZWl85soUymKa7BE<{pWuJ(12-n!ejNd^l;HCLj52-)`GgP@&6uwvzo1yZ&S%D>U`-EAL(qK<TdsqwMsEIdZu|k=Wi35`okUjpt-Y*X>hHJC?KzID z^jUJy)n#fg*{A9iw^tJY>g}&ru001x2E(-h^WJJ7R*oJ$T8mtTh02Q6@LRoBHAt#a z|6TWY<8MfqXy|6NCqcx_%uKfZlLcqt3IFH`Z?R#vBEsi_^SQr|f4dqfyaS2s?4_2n zg7P+Z4F1kn42NY)nWUaXT2=cL?+j7dEgjqiwEkj{U8tafZRRB|xRat(xIMxiH2QGh zwhuwma{`H-_))bG6A^(4;SkfThA~)V8O}C&mXniXo^32CDak3SPLP$IgOC65r8uMC zmxfJI_gx-o2xekpf)$mjdSgcG?@yDh2R=gg3YnZ|Ms%ZvlqmL(yyyFQNa~{895-O+ zaD_`8JeYgAw}2}=7}em-P95mEIKR7+WO+i>P4cH7WLJvAgG#P0U*wV#vE7Cz?>)CTpcg3?BD^;^*P?BwTRmX+Q^_S;f9j?7vy`-n3)6A|=y=yt` z^JXx&rUBBb!hH}n3lb}w-j}ocVC_})50TH}W1k-2@$ZU=%}7rlNwY0H@}z?~nCSL& zS9P4xlfz~~pKDX`-}6x!o5x4f-u}(#aDIY!`83g%V1DXUEj%(Yn3;#<;ONMTefaR< zH6vJg*v#`)-ma59{X}6VfEkybmjc_0{+pxW-1}5+W4suo@{#NOb@Kqo=%oICkDH~pU+#ye*z-bI?(O~ zF|sr+_^G(p{a%N`f|+pp`nnkQ$(V0ZR@zeS+H_Yqsssv7c9$V?cMIdLN`(UlUgqWT zBkTv?+*sXjaN$dBfEsc*YLD_MbMxLjt78gOEw1K^&dzUXv>@%9b6edmSk}JI9mkQi;a}?@)jf# z0~V*LSqn&r@}aOmJs;I7fW@T)VWo5q9ckS)7J-JCe zu0|#%cD4S#0CF%nBQGKa6^tJ#?tu59L(4#H2!LEq|E_kK@bU6`sSYNqT%KU~4H(rp zt9otZ&UVLxuRq4mVbNqL&2n^w>P3orRPf)Hv9U4Wl1G3qC@Fb-$}|KHc`b6xHyZet zJ%FwjC4`?wqs^~R$`Tgr$fye6qI}as{K{kHQ zKNqaQhQK0VXmDSyykcTUBX02{Nl55N46aq$P%!`Fe^pat=mrsYphw|^x70td{9jjU zM>4YRg8|LlEku+^V}xJE@7kFo?I684<`Ep0R~5P)14ft>Qn)s96-xbA_nW&^NpatU z(JZ{G2okmH-z#>X8v0HxD)aFLM05z(?h*o8&KiI7$JhPlSx=8%mr~RaH|VNI3bXE8 z4Vspclyq8~=>f>wd&J$w!C~y#-|flw@83f`V$QmWSlag#r$Y+AW6T>PtTF@S>Vwwb z?VSyLu{91#BvT1o!c<0xqhsk-BC_pJqgPXI4gvJO?=Qd1_*ISK!z9Or^FoGb*Zutd zL@&e4Gv5BuZboJV^(MlFf9EleVGvt=m!2^NefO<>^F~*~irqsZjeU zJ|3!MdU|@4o?~yp`N{W>$Gub;>1q@_+rKYHF=#lwdjJet_S~zyva;@Po=`&aEsN8O ziWH@#rFYdx7-ak#S%&-06&~Yn$Hp8YX$(7@3P4x=+DLt8iW? zeOm3*j&5rkth(7kqcQz^+c=x=zuvC7*cK1$sWVlh3ZQ_8ho`)}JTx@)`Sa(%e0NXn zgOcS(H<&^pgN#p{P=k?=*{24i|GPqV%vH_TYk1uY7`e%@P|gKuXwV~IRdkCiVm z*P0`ic(S`(4n3yqtV7%)L&u8$*|X?BAvz>&YIiE_`_DJWG1Oe2?0k68g7ibVz2pPz)N@yBo_hpXT3Vhw z88vay$;rvc$cR0LmxydIB*9=m)vf?qsimc*p|MXCy5BP3aKgRWnL957<3MOVfyK$G z=m7MXieF=>xi;FEmzQUjp?Cg`mkXfTENnjbQ@x))3Q_q3J}y?|XbF{i0V*?0p2^9R zPmf%+{k1;N_*o6m+QgH6i>ugYBPyp!O1w zB1GnY{I3CH5!Y^B)sdJ3$WOa$o>M}&cHZ0$YZidvXq(~jNI zjHF;rpFSNi*}_Dumpo^X)8HY(wzD6`N-a%I>A4n#uV250Mjtw3W@FV9V_y|B1gkvR z1B2Ctope_^)dx5d?3@;{+uIWfCO?L1)OMwa8}RLHf&C2kX&x|v)z#{WPS81NZB1pg zh~Ao89BtY?B}f!z*P|O4@I+j|YHFss+rl>LDZTUox)%r;JO1NNYV5Cemc+EidE(4d?x?Gq6Y zkgWXi>G=tx9};S6w*Z=d{`@JGa2Lh9ckj+-`ug=N^lA7?BZLd9B1MmMkC~yysXY;% z=BfQ7-AMG$3Rsu@QG1WXXyS#V`~JQD%rg;_S$*i13aqR=nGKD=HGtOhKS;wE(1Vb| zD#GwnD-l@r@M;iZr69y2s6?t#3L~qkwfDfwGr0_hxe%;O7vfE`yOz|&=87>9S=t|x zRSsSl*N!s<>`SKwg)k88qcL^ZR|JX%8u5|;-?qCVg5+h>MnnVpip#Kj+1saKkbhUS z{;qsWvV$oCC}Q%lMDp@aow^!Dy>F^bq{0&hfUa;|qyprDzS2tyw$+={+0VGPTT!^h z9f&_AgIo<43-p=$qLk>gqZ&XNb04bynOj)P_V1z#0FH1`83y_B<3~WS34;4beUM*` zs2tY>bC9*j4yN8ARet>Sqv~pN#11k+vs&f(?Hz2zgw#~g&HGq`k+g)6f7em8Z^LSV z#%9e&9r&Xiv>0d4e&#b#=lOS816~c6X^JA(*qXEpmAiB?C(Os~Wr-QLOmq@r23+T_ z4?Tcf-n+9fn4ns1Gk1*b)D@;o200{2S2vX4kNT(F6lIW5TBfH@cYisjon%J6k4g*D z7Ubt&o#{C!FJAy8>+lL#2ZS@VgoAXnV1VD|~!B zRx5D1P5S1-(7rr-ticK}DwjDqFB1~%Gk$X1|F*gtg!fSJd{u)(U(Q{Vgh9qfP{+r| zS5{U=3DDTVq`(-lzs*54bAOp%e>)?4(c=Os9Uzz+Mrs3!1?ED=tO(@K&hMAN7CVU9Z7l8t%!@&H^ZMa9r>|DLte+8hba%*3fCr6eZmSp-V7 z-arVDp%mbSYix%-(JbyKLh!qpf^6#o13Xndq?=KIc2=#?iWkz2-q-jfREQ%lFphP{dy~qGh&)EwWs2 zb2@ar3=fd3{>(dnOxJ}97abj)qH_6cOU%CC0|bhUL$PK3*d5%xjweS7&sVKOuig7P z^oQZpF!D`S7IWlF2_tPRMO58q5!&zVU%R0zva0dB;qAQ+0s@To?>6aY+6-cpG7)rS zs2)8e4oGdG{go4QhDf)qT$PD15|%OqE&FU?-4L|i5fMngWq{$1g)7}TW-MGHn}2>S zK;Nl+YyC>6QSQh}sqViuT6T z$ELb~YG8BADhwP=972lqb^EXpLWxe;%6wQLu%Ywv-sfsJC0hK6rov{I;)`%F{Gj*?;h;ej}ftYb;ZiXp^87ph$- 
zZ^mJf_$UNXSeG#fjTi-_5mk2PP`4>{HUf5RcWYjyP>Z8{NAbyi7MKi3x_9qoMIq2E zVBoevS>1SYwbsSB6q&v|<%dKFxEC!gEi9Mg>C;|ghpaPQ@TjR!cEMVZbYs<*k?NrK z&&5lg(EK@(P%#`>bu<8^(5rnVx}KgJKL$aT?r=d9@w;AxIu;S2*Yz(Zc&Vn`&h0yz zVhp4=>mw#|{P)K2-|dwTF#AYuIdnUaw7!D#RnQ=Ai2dEvPxxgCQ1zwQ5x{K_syKKN z(Bd{WHh%i_32Ll!^gd!Y54+cm8!9@Cw$ihNnBeq1fl*XY_So>B@WRMds5@Kh^9L)_ zZ*BYl#nIJR47MjU(*wnh`XKB9%>k(Z1RT(_7jKGv^1Rd7Sf~D>NwamakXi0NZKv=j zxM1i3TdO}o*=FGs#%cb9G4%%3_3@w(M;I`;h9#)IQ%aOF6pGk&U}tVrQ#u`r$u@Ps z2w%6?p~Kt1HCiHEggMoxo<4nQm>_kxy;A^<)&$y;AZ4`dnDp9;z`IM^m6N#%`W2XB zIdo)?YpBEaJ%Di_3+2s;7Jzj|SwzI|#U}2gq$I%O9bKudtqsUJkdkB|3JE5fd-9gB z23=H!`6VBruwqNuvHnAzR}MdE8BaUN`g@ibTrU2K#?vgAd7|dotsj+b3DT33lh4#{ z764ZT?qgtVyngk+?!}&ynK7@}S?D{cwYAqqDgA2z@$=pl&8Z<5Jnr;S((s>T_+34k z-@g5`6npzh+8>|P5AWVJnliY!kP+uCELhNspwB<^{l2Gs*6lDTgL0QVwCz#+AmuY%;(R+jCgVT?;j|c zH-~&DAM7^=!Lh(Q0N5(DCpc+t5dw7{Cyw03;D#55uO8$^RkPDxG`@MG8r#D7Ve4j> zrKO?F(AvGppYWr?qqmW~grK3y>yz&u9vp3v`k!Rq)6?_spWl+wU|_}47EyM)LPOnH zT@d+Q(-(wI12uH$fJSLeqjAVpfJ2~jtKd;|*`3GKS@0v?02_%{h7|@b)A-wHkO|{$ zhjomu_Pjh37#Iltgu1%lDj8V6O2Jjwm++6I^pPuHOi!KqRJI#)gdb0VVGz)#i?p;yDrdB=(IsU4J6>!46LzD03+VBG!{@QMWO43rwx8f#E;v zu&WBK%V&Bl?A$y}*ahX^zk8RgXx|7sLdqz4HC$_Zw7l2$^4nG(9rhXGz?H)u2s!R( zsIhZ>rEZ7Mxw*N){>^yV2OU!9Wi!1TT0@DffK%(_pswuYZ$?X#4#Z$r?esOzw|25Rm*Fd{->g-v)jj!F75#QMJ zt;@l;TLvz3Qi`uL4kO1VllPGKvHg_yPqT{UR8| z+?)_1Lyw3BDx49k`Nns}InfDtDiF`lc~&c+nzg)ryF-FvHMhuoTQGu*{vy{sJwd+p zR{X^bXbhJexw(l z1POX=wwGJ<5BxSj@-MIyyI^%YdwWmO)8x**su$UU2~LQQPwvA|a?%bP%2?Q5TLi@~ zS}y~Gv}NjduP}eHCP`s7->X6XM;XRjVt2G@?8F!?5(syN>Xn|@Q5?5VaJ=zq0nO{#k&RM)OIBV|< znFCj0ih+#z`d0Q`S*MM~AZ8FV8Krx1()O4xbm1<+x_2JfSr2EFURGJ2!QSq>Q!@dGd!Q9wDd8zQ}eFg zT;YrdCU)!;6Fw={qcO*V^!%X%xa8#YaeQ-g^X70s1(k}Cm5o9z!=6(ql`ie`?=m{l zumrjtXeNiR4#Ae6ay}f1xn~;mYGB^gmrlX#Gz1xX(Z^4o#D#=}hE@)`7y8np%L&x| zP#~o+a$ghz34nNsv#;T^4<2kEy;f!;*c#2TMg^C(dyLxqdho}N3jhcLJ`fBzYQk>~ zB&YBN2BU+sGizJg--*zjeQ_=S^G)Hl%SYR|L_HOXyGt0y9?DEqnFyXD6+%Z^r&k+i zm%e)S3N#6rYvxi@bd=ve87P?n6~_+udPveaB&e# z)*ZpjDH>oHgOjgKZ5X4Y$sUOLo?zI@arwBmVE5^MI&2ca2H0EwZh)Cd_-8=65b=a8 z*Q41gzcp8Xe{}(J4oiNDv%WNIjkSnZi)D(jpaFNkbLpUGyBttE*PR=eLOpJUb!K|& zB}nwUg?4!|k~GNT_o$NZz(9}E%jmnp#m#->@kw~5=*PIc{QMpv=5hkV9Dv)a)>ebU z(5G=IeIoS;OvKfp>f~f$KWAty|8DixC4X({eqjzU>w5T!%WkQ6fRV{xWhrFs73y7Q zM9PB)6P}fgO--P`uwSmVN?cPlG|=t4tfzc>R(kh`v7Fr8$cPB-pxPlb)D+0Jo}EvR z0vdYR^*XyRzWJYRze@H#&?2oPgN65npc0*E@20#|6A}`@#@`bL#oQOVuJiJO6U58< z^s#A8&Lc3~?FzM<;<5=m`t;qFIS!?LynK9S)WVxkVN9NBf+w_k^$4HT$<_}aHm*Ja z(zZs8){}@HQ<6raCwA2<9?Tyb6^S%h6z8rh%6M%@Dlss|4>k%YQu>gE%?N zKJV;7jkc1!i}9w_TG);;%#ai-jUnIO1Ek%su*u!>U%2Z;$%ufZG2NwKcWe{@L|MfA zQJWr^h}&r=Hk%u96$BguXE~wTnD>*FKHa82DFp6n9XzhX#^Zgvc9aF?;2`5QcFKj) zKI=05Oi5GuME?K$e)@jnZx7hZUVqQGEeyX@W6WzUiw1IV+2o{D!oWOw@BRnibi8e8 zX)68_%vF!GcXE2(aDi&as{Y|}Hlui&A!86MwBa(BE@Y*7x(0pXc8yjwZv zr7FpT7x6hivk#4CW|;S^gilTG2fz+>AK!0`VU42gZVC<;X|YEHXnfM9S`^w2f*iKf zT3BWfQMc2`Y6j~&1#1i`e299tLIa>i`SD5lr1mR^Z>fux_RV=#%AzAD4pj^o2|PoI zC$`sHis)tB*Q1j?Y_jnvvdo`7d-v|mY`{beoAQy{aA`VJ>_|@x00z~S{d?G$>jIDg z91ceh5Bi#b3fvPR$HhN2%m3Aj4R&Qt5ihs3yT_(Doui5VfJ0cnU7-rXCqC18glvku z%vJ1_!+TGgJl9xOk7WqZpwQyuhNbWElwFjx?~>&bUp}ZV8mmqT%LeeAI(zO~Bq1_G z_sB$EZZ0_F4w0Y+7n<`U{QApW0QBGP6N{S_)(uLOJ?&zRI1iQWHybRZ!%t`o0W;nE zzf3nemwvpW{z(zwDk!W#(h5W&JE*2*J+2*XZHC6ioo#J>uiT!*o-4o_7>q`;4B~^N z(w`nRDCPS{kV3vVm)^a^B`I3=+OCfAd{P9NKGzbprxX+; z=#F?vJIyZ8j2&fdw-}Cs7gpi6wk>+58B%jPB8#?rw%X;JM8yC|!}6~Pgcx&vaxu%G zOo44pzopyrA8Y{vJ4*$&6t~uPTtdQ*&ZnoR*SP1O^bSgTu#;-sEK};FfChQ~`H-y~ zSJj~LhAs9un&ra^wYMmnRYSpcW>5ygLc&!wB^LvE_0E`?eGD18Ka#$vHVY;kP=@Sh za^X!M!IC9KG+aPVyc$N1-NZe4`_f88_?wJ)X9Z>v95R`)ung! 
z;Jr!TJWkjb7)luT!4{8r5h`>l($rv5ob&kjagxVjrfBM2ze=-57W~;44M>haTG3Nd z=H}*IUkpia%;lJD7J&YM?(PL0+RqFU!?9qbuy?SSvU5~67W6sR&pR`m!|J6h%{u5_ z(Qx5{RU&hyL$MSmll7P~J{qcm*v$g4;K#nTC$VCSO2x66B_;7O?=Bt!L-|(SpRLO) z;UNePrbtJS=7YpoK;#i)vR2}@&iE?xK$~|!n0G+uDGgglosfW91)VF4$S-$~iSOS7 zmuP}IiJHg^Fp$-Jmowk4GZ-K#-}LSeOKW}}Gqst?_g1@aC1Of2RH7vM&>hGZa^FIW z#?HFgJzKToc7&^FE$^~jwwe_AEtqXjqliw=edU9LO^_0l6 zciwp$BKlffHHGVeu?>M0mEC73%k{F|RjnvuZ`yd?ix*2D+rRt%fwGr56prQWnuDf# zN?drtkC^s#3VO+-_KhJQGxUGah{xjSM0byf6NkGTqh-17(y#^XyG?nzP3;SLl+J*< zcK&=yY@`npQASo4C(bSW-pl7GI#Tk$0XC+sEmA|Uv5`><1n!`*k%(0{&ySns z5wqYu;78Z#8dDVF7;_3*mgTaGm&Z9h!9_@rJ@%-(`J_TP!Lw@6t8iDwM}w~2V17f7 zJKs$ST$pI?G3l$Rp%Fl*65T7Rs;fn^9~20lW@muT z*89iTqmq)ksGTGQljlj~Cx8F_M-J?M@031GJa%oS9hY`E-qM!g!^+rIq?pC)MfG9W znvh2Mh)g~!^37Kh3?C(>kK;?PZR8)F&>@L|hWN&7dUI(U+;ZoR3xi(>-PhEdj{hAW7~tl=a+5;xh)tbeRa0V|tUMB#5Ow!L2|1cG zYkqE}q2mN0DC4nyApE^%7cV8$J9<>wKxV+Az~*jbw%e z)uU|utg7ME2{cO&vBp+mC=YRb9MJf@JRL-fzjT?%rWOl*2^axJg%J0b;;#r`JVVUJ z_?c$PlP6CS6F);}Lu^2wA-0k9(7q|`?zj^`7&H=~p0oIok)JsHU(m3`>h6JD8ZiYy zQzy%$$yZujLH&B<%)(Rlo&7gwF$*V88KJ{i-B3tC06LU4kUF99+Rdc;pyA&hEjJ`_ zIyNLZ7#4LuIk2x>ZHfs)$H$U5ng^qKryv!EA{XX1rCqih%OQKjZ8hF1YTJdFw#2D6 zv+7iSgapS3gE_7qFHKobrtSWwH(jhwNz^}@X-jqaIv=SWY&AOWZH1?x3vvAVbbMqm z8~%@d7=g`~4#;O!M*avMF)J2YULP=To=g+7|GmcX61)c0@@t7Fifju(pb%I(%WW-6 zY-h5zT}K!;55Am0{;pEJw>$W-a5U;xnszvWjp6rHMl9Gw_yY&- zp+12U6214x_lXI7=@XEyIzT-D33G?}JDVfN*ljI%KPe_OmdruOD1w^@KZWVGcD;h& zoOvNf_mTwVk~5?nzNVuwYZ&jqZ7o9lMfRAOoE%Rr1?Q1^{rdIZ{Bym9c3LC@Nb^Fn<0~3U% z`eGWJfTr(wcO`Ssy3>mzEgl?Ao1(-gC$m6)YBL0OleG{Wiw?&ip_tdaCErgxHp-|9 z{RwouJJtl))e8IfTNOLdfEjQ1h_#peN9vxUFtKxqs!tU<^XO1|4M~RiW@2gRaNzeh z1|XmyGUTd5$IOEKZ|VME(pN@n-VtFm8_D2xG>X8T+&vi#1G~N4t^iOXLnjg&#x~pS z{*-_?v$(jJp&wf!`wMzlqsNDyAN?ni$7_V`bOIogQ;&9t8#wZ?mpR1E3RKdtJmd&} zNobl*Q>W84%Wc6UM4dGh6uxPQNUpWrd=sxj*JN9TWST6R{c|Wc+dDBjH~_zt0)9hU zB9t+8^77@&O-)VT>NI$o>wM%AanEq8Ot0CUc?6h*#f@}>vJSFNQy9x_aaJPH$vFNh zbi5(S7Xf_&KR>@0xQnu~S0Kp=$<0M4L5Ai98Kl#jsBlrgoTC8*kr+WceVRC$E+Wj7gDwlc>)~kkm%;e*z=cE0Tu~doPYOfHyq{ z9#X=9PMp+8BuhvC{K+OxqvnWco&!q`YPwCq%)vqz_q~k2_v|+WKtb&O4PqkSV-~kV zkNr2OwB&rx~i#~?nWRR;MxL&>-E2_cP;t-i(c97NnE=D6Gkv<@5K~u4}k%b#Q6y@xqYx(^&>x$KD>R z5q4?sWsj*fKaWyR`{$QyYvfM{iNwNQ&Fy(P{|Hu2Bg-o2rhHmaQBhZ%_4KW+`$1mw zp8H}3aR)Oq9zH&02%^E8QWw+24X!WJ#^`DyQGN(H<7?|a>KDGA0kM<)LQ1MNS|5%b zbWP)hIF{dai^hTye>-eJz}4eU>Xy5Q!FSzn3`P}EOUW!9E7eV-yIO3j%J@Ec`rM`d z4sEy~q~gji0^;sHQ3(U~(Q?c9mn`{vh>>z!G};*cAxVvr*phjHz%M9xbBa!M@KEF! 
zcT`bpi)F5)5hcilwD|hqv&t9a>ggkO%C>hQD+*dLci@~S0BYbBP9neA*H>p8JY@ca z1sjs)9Sm13KaQcUavfwY`G%2t*z(G~LQP4D$A+uz)Kx>NXA25e@*q=x#A(^da&4-9 zlrg51$#5dJl6E?+B)cq-DIlE$q+CqR`wGBwFc?jc(JFcyj*-*pDj~?AdfrL1G7+Iy zm3gpR>4m=}w#8$gnl~Z{4W0of=dc91D1S7{;VD=7Di}yljit*8ijfjB_)QL7eSJO> zq{6)5G{?VCJ!l?UzvfDz58M9(H-LBUd^%m2h0@`q8Ir``{h)V_nMP8leS;fJ!J$wh zAZ2vAgcCxf658O2M7SP#C+O25jZhRTS5Cl;JiKx%XDNP9SXw8+Y?-Pd%8iug=lB>y zO)-}CkG4na7nyD%ss`OX?pHz-ODoWKS^e3=htJoi_2Ux5^s0n;@MgmaaYOEkLZBcc z%Y2@T>CjCNSbHiU%iiMB`mTg|=V&*j^A491Fc-7Vw}%R(hDLlzLM9s@NZ;AT|C;2- zj~@eT`n_}X0SW@FDG0Hs)MZ0J1)^=hVBsgg+E-UsK`A{_XN54+kwcr7Qnq^Ra*Sotp z<;*`3*)4sGYJ$@3x;qiJ1*J z38*8|p*t-f(2z>72m$?*VJA_#;G8LMci`qXh$cFjokxHcA|sfF9r+u+)hy!UPu=I3*=vnOny z`9wQ}(3l!Y!#sevsFr@TunJ(e@5* znD=|c`{2Vu92`wKx>zr{XXz}Qw#pq&?i3bIwDQxm#sjGXAwwUYX_=UGLL?e2#Q4vI zn9{y0iwrQLXtrl*tb9^)5O2fUI*)-Ka1s#PM>VCHv@6x^yKI=p#obA8|{x0q($W@LkNDN#xwaW5zO6G{TS zSrwHvaP~JczJ)*tF-X57IBe~+jwHodT{P9{?(z3BT%Mp)%Q;CG6ENl^kJC~KteIy& z28aD_q)B6CK6i#E*vO*EAZl&TfXW10T>S5?-f6Em9s@-Rmnd81Dm2$1-v!L(S6G+3 z>fXjdEIh9*VCQ*8mQ>rbg^0=`(nYdZ94tgg>BVnL7H374_JJd(sFCCB@#v(~>r=(( zaGTFPQNo6^4ym^|)5NnMvGK-?# zSYt?ui@OHFqPyCWD(CYWm+@*Ukxds51&x5X^^h}PfcElp|E3ImbsLg#0Gb&x^s*6^ zIi9W@drTm4OGyjJ;yC0v5pQ0<_5sr(Bu;Iix->6O*5DY#^r%WT%uMrBqB0Hzx^xacvAqWc z$;EE@&K}wi3DJ19sndnM(l{KnmR;HPTK69eLW0M;Y7qL=G;R=#CnE2G9w7-94YViu zXLBur^nh^~Z0rL0CZ8_*hoNRVxvZ76Ac)k8pU7KIWNGG)JHJ&Ye~eAm`C|WQF}9 zwnKoODYBw&63dD$!|U@AJjO!$&4J33J$HC)7z?M`6B*4Lavh+Tkjhr5G`-Sl(XxJA z8sS#IL$M8b44S%3ejpfk)8UDLa5@B`Z(nbHSOGi&OmlGvqg^`mc-60_?c%9}g~xA8 zUPwM)1o=RaRq*-ZhYp>3>fiyUlhlUNjhSvR6q7r?Lv*Rld>O+;+-`*6i}AmMg>YH{ zXW|RFs%%FeTlZhE?#G^gx!~cGg2ROveB~g2y`UK3cqW;aX8)pU@W+lO7T!;b2WR*A zxiBV`qBCU<(0(FPm>C8|1U_GAPn6rq;x{QqqUBuguJdG?!ND&(1Vki6KCMreI)L)d zcu#{2{9TrQL4lsF9Z72-@aWQoJ<3|fA#wal6C1)Ij7PdM@pZImyxHgGy++BEK z{M5n)9!!QWL}|xi{)()0T^_`)WMw%@ z36v9e5ULUqO$urs#0^&9ROjI*MoLmOXkw{slZWZq7hvVL$t&+GrTXZ~_eya}81 zO~D8i+CJ2hG<4!)WIq$)7kRAYk!gNN*_XUsI|NEXT-^Q5v=rg8_*dEbv+RUD> zvOezPZeRMl3Lo`25Y$#7gwg!rK*8HYj7cfKWG9#wRxFS2kgf77B?#g1SZ=R>ii)f37M%I^Zhr5s(m5K z3CipHj*hiAEeMFgQ!t7lLM0BXvHzU~cpy%;UNMgWvgGa|1NX3tLGG^0`{&P9t=<<1 z@t+8JB)^8PlHsL(w}BH?U(15FoyvRUW!u*wMlQ>x8 z)`;H&n@SWeUN(jRIEw{gH-)qB6>`pBIK+-=Aem@=DQ?ayzgoFsf$7{ zE;B65Q|4nQ&p#EXyH(@2G)CJ=eR#C&F%3{ri*VzDKX( zMW3m(e@;7QYn$44MH51%r%uIhql;OLkj2*dMJ)%ZN`{ta&M>jmE(E2^L>9%H%F=5y z`}S}!gUA9(lpW4C0dGnT*2Nr-){U!|@RmO5btoT)7qInf)tlfeJo^5A|6Zl^`I(*{kAzOu4XMh@8~bahMBerr=LF~` z#I%Y}N%_zrE}!o`r+E78tNzLr{XUxKUi zyyb5NcHHeQ>i^J_uz2d9daSie+vCXXQF;oIQ{XW+eoF9NJ{Um|A?;?3jc;Cr!{nV7*5h00Glc$8GPPGNPVJxIPfpC0< z9B9x!Aqt*5RZu7_SE1SvX^+Han;bhp1No}wN7bwDDov2P*8$F?%z_=X*#rhJ)pUH1 zik7VXDGIeOGRqqs-jBnMs8ABw%O=oK)Zoy@$?3FkQSlI*q~yZ~54It{2yU3~Xs95A zrXr-#8CrN!V%XWu{dZD%FOPgySd%)A4o_yuw7uJrKkx?hP9f*lbc3`DG|gkafe?WJoDorS(1DaRct4C<1cb{WRpQ8=w;AGe zPcHvevyF=3TEBKVC({mN6cP)S7&4u!!km%E3VYR3jA@(JcIRffxF@6aQrc|+dpo(*Z`RA*KKg|(hy7zLe9ydhDCH8{{ z@l(vJUcbi1_et`Vvx2rOel?xrqo>m^q_jWJ9mXfhOsBMr z`@(q$frTEF=_A~PkyOg@P$}_y9WRpP9w+fFyp+_x#cKk#=tU0$_p7H%FrPfZyxs{b zgX9mJM9|h}6q>06`}gm*5GEE#Cd14h(HqBX4j%lUgWM> ziK$nlSy`!8oGXcs!{_-bm=(i!iJ|4*sFO_rhV+C*qC$>5CKJ_C+9mgdS)$US?l*hi zaeLEkdxcM&Tqs($dgg-mJ`Vl|8*m7C77{RVFrt4?fQ0Z_#%*5+1{+!L!O(Q=Z0^rb zk|s6U?0z-IxRP$2^+zY~hlYygu30zy*}|)lB4`dTUc3nUgkMG`-%!QMe{L<=)r@AD z^_&)-c|o9nA_HzndDlqyr#U>dko#P5uVH|<)@C6b0f8e%En&oxV6QkiQdfk?&K&Fj zvBwXoW-T#2-qY#Vt0d9-aJU{+86wDJA0S72^}zn=*ABE;b~K9Q#RZK-w4pi)Sx$(M zp+KYerR-BMxg?%o`QMF{`BqrIJPqd=BJAQZNFOz>#LWUntE!ynR>zMvvaA3h_oQt@HG34^d4ar zZtD_=VSHA~J693Yy>QvMdOnB3eujhy{Ttj6%(5_0!QCZ#e|B%GwCZ3{vEv@1kjJn2 z-cF*ZmquQ?_K~N!2Rb0;3G39!_W5!5&*IXW$58q^0YV5i@DeXuAef*Hcp>oOg}!az 
z&(p@ndU|1Fa{W)th@D+aO7Y?hw@3-mdnm zp7BZzjcCCe79MUMlqFgweTes6_>2hG+i@&~dY^@JHDEOK0k;-nGRSR;2!FFnvG$Xb zQs6pqP7seMO8N{rme0(4tJwDxutf8CMNltz_=76HLn;F#7d$4-Inyt|vy99)n)p7(k9hvMS`R(z9;9-Y7Q%IBjZ zxI?@M>q##Nr_x~IOd%orxhvD$AB3N>A;HMtm+-gYoIb_*@L@rYOsA{U?99l14eZ-6 z+idAQ`vXN zQ{Df6A2LohAt5=EWMt3G6S7yx9)&pejLbL@LM5|gWrwVcj504Po9wLYm5j3P*SWsG z@8ffJlw;1(4mxs>`|J>{Y~Be;J4 zHo?l7ysoY;2D1wG5nJt-omJJ;u4|+S1MUL->0;2yiilK#e+fh!#R&p?34pzn%+i6~ zg;w_Wuk)GbM(AdE?Xgt!+j-(t{_4xq^`3{@@MGJT0v$}*`xK}wqtHq0Sf}Eq)v0aZ zI#*YV`|1@|8$7-FCyN?Xg|@4*r~Ni71CVGdi{Md8&JE>vKSrWJR00jDsmcSAHTiI^ z0TpzZ6%;~Xo^v!};!=qjjtq`p2L`Y5;oH2mu122T7kP6Jfcc%pHi znhltY>g7TVo|!|EBhCj&J&jXVg7TK!=BpaWh}c-Zn>Q02A0X+8o88{F;L88JUNgk+oIiDamh;Mu>^ygudXW(Fmn$*nD75hwpr}*aE%= zZwzofKn;8fvX_^Q7eZKhu;>63Kf2yZc>TSH#G(AoTb}c&q^^D(Y3dl z0fr2Q%pX{nC!NvHN()q(<++>6K8!iqGPK#2tJIN3uKco@wqxhodnLSIn(fM*8$f8x zjEed#y9t>jaG*E98|sksQ*s4n7^g&J0ss_P=u58Rpwb7bx!dYcK7)?`6V_8j0m>9m zK0HkttF0xX59bV_zw?)$!_NzB{5ljZ-0)72>MPV9&9PWD%QLt2T9q*~-qi&>#m$HdemRZOOIx)Vz1k|=e zd{>F(t$P;x>@C@?^1zV?{J;k;1|Ae24-ZaTn)Z10M&s|DHYS(j6|JDim>7_%SpX9o zICg-HrOD;I`0c7iQ$<;%d^{j~kX?ew-SU=hLT`4hY2x(L!nhpRDUz5teG&lD~G! zd@0Uq78fs{&77GHR&C+xZd6D=GSa2gZ~@nPT@gfsn!L%V$clMM zya(6U@P19pNm^-KRe(M~ zer@eI`*^3p($4bvS@CaNNYjA7;h$Pj0+K#(Rdc0=O}^$D>GolJ09tPj&p{)IJ1M+^ ziku+DiUH@rCBPURILSFO>skjw!8CyB!QJ7G zgCaW~D+jnd0DNl5ehKEcW8KN>9|k+N>(U2zIO4v|rt7O1$$xw&t)>*ij(yRs?)Hx6 zVX2(NUGtKQclusrUjIZ3x514B9P~Q&3p%APEDj5@ltteL=sjZKd=|fQ|g0NN11zBiyw^*rlfdB?-Y7d{V- z*I!a+hx4(6Pq}!y0!QkPehle7QdfruD9aQrEN8wq=|OzUo7PkE(EKH3cOu5yo%#;G zTxjfF5Ho`6Wma5J@D<#{vQ{0($Nx}2BvmdmG2H^&K{&5~*jUp64F8LI1~h^xHhU%g zeIQeYa`>w;-N@`RSpHR(TLh{gQGN&Sy1v;m5M??VOl>bVHCY=hc!1MiA>HM_g!4eA1dhO1Y%Lu__;mqa}S5Z+ZBAd>okB zK=kx6ynFYYe3f{}$c3h-;crycGH5ZO6NK`8E0!|5tGrdFHG2kkbaj7g$?ET|8zm$I zlp4|c3VosRQV$~!sO6z|2$V0Ps4URPfag+0g|L{I7kuBNVB!21*jo%EW&H9+nFMf2t#Jc+zY?{dKcJ>S9)4(T{o^_RfO{UV9(W}{3MewvH^cG! 
zTN>oTpeh=x^96U44}&!jg+6(0PHxpKMRI7YS~T*!!I_I#n45p2X>xtrP#gnUTKj#K zREap5Zr2Za59Zv91W#VGISuXjXM^2#SF32HMe1;~WJNC=?u$a>OG#or#3 z_Hii#kPb8g*1o^17{#dD$J$Ffuudlfk2Zc;-%V|A{w3A!MijOp5)`-u!1T|DX+4lx zsi;W!Y_p=?gT0XMW*#x`JN9EI2{ML3=5{JHLs*H$ZW9yP9CEz92amt!<~G#tK!$Q@ z_+uM0PtRk>5AFi3Q5R)Teh)sm0xlAymBODN$6H;d(9Ax5YMn3E2b4C5YO!9D`5)Gp zHj@bPoh%uq?S@8&l!s)6gvxs-Yg|nas~S@S7pU7dK>o7TH+xp~XlX|vb?w}jwDRtR z-gaA%Q}u!nW6@p3t<){@cI;iQR=2*nSO{2<5#$Z{#5^E)BE;xO zi~!C|14w$R+hp~~`zlze!UW!;Z!qkouSAlVbJ4NWs1^ova#F}Er%DJN)GTQNbo5|c z_&kasy$7Bg;Ovgd%Kw0mizF;+kl;f!Oixd5v$z3$2w`EIk%zdrIMBd3IhVoGC|Z^m zos+EU#!w(F$CN8xFiS^b?_7MyuFWi)tH%`;6=j4l&#-jZkl)q!>udWhZFl}lHb~x% zw}KwE#a<*%1{IqV;5pC{Wp01sMhWRfs1X;|>U+VH_Z?i$ckbZO_nZ^RpDN&mVweE2!1+VAi4j!>dLjF2Q%GDGd)S5BCcgjU?$OF3oU0dE zZyz(yPJ3-XJ-U5cM`}y^Dyq-wy8$ZcJynIBFkb1;VBJt8I~^jaCn9&dx)4j-`w zfxO`?jm30wfyG6}ix=Zul6kcB$z2{~RX>UqLiW+!yg8?imd|)0_1Jncl!NT{j+BXR z=H2{fI+T)5zpPV>ip<=9RUXXbp6y;~w(|D3p>{*KdU$*eI+N)tyRST~wKtM0PLRB= zLYb0Nm<|8-9PY+~@~?b2$7rZ6!ION`H!7A4=l9cjzbXX3gTg8|Eznp`uL6is}QWt^r|UZ|v2=!ov7b6IXm^by;<4YU<9bYGkq{7csCde*9^sv~imF ziRfEg6v>VtR#ZUfSws~xYOc$3(;~j)>&+?9gjpHJ%+ry4`pGP8VgPRoNps%BYeO4c zFC`_*Y-}v~BX4&bZxij|0XzlR6qx1mc}GS?Mz*Njyib|JPgbIhB$BpM8AkMUS=X5s zTRdPS`$@tWvq(ko1cfa+c8ONbPMZ6O9K1ns!WjbYTagv`Lwr2@C{f5R3-@T+}#ls9XLL2zY)_J!4A$jZWt3PpRBJQ zY&jpE-{Xq*>#Hy*EU&tFtBAJ!JZQEDR6ymEsftX-X44x>`Ns-96 zHY4b-$%(}8t9YLtyfsdTAV_FssZdOMKuc_(C;xLa32`E>zcH)R_3Lw8baWsiszmn& zi!?r>(wG;X^bNeC^2%!>?`2x_Hw#QdmM)>6={Q}5+z?vU2{vAmQ~z2!OmH|`tjyve zU^!{IEw{_B-DSQ{QG4?S1tOi`$z3k_b@>bb?sVzM2b0xqA*>6Fi{QW~5;%7ScL@*O zIZz1DYyM6aSsTbr8`x^Jm{&)K>UnAOp5Md!tMbUFISf1zho(b+Iy%*Ku46eXe(A6m z4a-pUVG#&L9p-iw#B;UK&$YEQXfkd~eS5=7mI`^yoXXuqzFtyf-b(px8<&)-QeWnD zBl>f2>&QMxe}!zjzt?!=9_op)vE2v7gR`?Uc>@vddhV%F8s{36x3)evFd6iaiUSIw z*sI1e&97=u6Vfp>9fiv)TZi*?ZpMz`D zLT@jn0TG8GSNz@jhhdt89!Bv#j;mKa4!1QTBEq&JQ=N+i~T&^!9vCxZl>K6H!Ng~)d{{|5a1dI3@T=EHg6s9aN5z#Ua}& zPYZMkhwoF4TmM>65te-{j6%u#E@@6k`xW;l?NondzGG-`B^^&Lcd!(@eLur3EW>A- z-w<4a;4@BoH{#`}wPm(Mj52$jsrl-n4Rd)cyf%1A!dz?r#dg-mb1-@-mXR z(*PdX;(cXb;eyJ7js5xtatZAa+u}Y{6f6Xx!WGp6+LcrAS~+9Z_D+uyA3OkkzFtaV;zOzAt`{<`s5ZCe z{k{v=Yu`#{pSitEvxxt+Q}@uA@qPB!l|h87ZC65y?zKFlWMm%@e?S$ccNxAOAJF4z~7Q zvTs-unG(|%^|^A~b?*Z&1Fqp`4*9tXb&jr%4tTm@;rEWj;kaq2`{}zZGS*Ix&l-r$_Fkqlq=S>nRy1R_m$wXX zPl93D8c4rM`pn9F=bh+o`aVK~orbKBUPoF^7P5O`;orx9c5x)SH+xMu<-gLyA%Bb! 
zkIF-rnJ*p~VG-#Q2SkQ(WGGehofzNKQ=maUhmb$_Bf54%FnJwpR$!m1%j4zhscIB$ z{uZz4c5BmOj8JfQ`E4g~Zqxhs^4H6^3Mz_>A(b*@wtaYh_*}(ja9f(L2j8t-umBbr z^lzM;G)-1F@Y^KIXoKx!)tJPvqqy-W8gv5AT?tbWkpquRFQpc@(W!DT${d*fdUY+N z|GW4Fa%+ZypE5TTb>`BgOLw_%N;p-As&jnyT1%*^5-S>Z=ZbF`GET?ZT$1mLPYl%w zroZPqQq|%^+b(@2ZOy`e$0vM1*MN#&?v2Hs3%{Zc&~HiUd0f4`_I7vIbn*p886@1_ zi(BpjD2a!M2Q4rP3JP#C>+0@y?6#*aAUvr^#(L?jZxq$K&38zUubK_NA2*E3%`=YZ z+`f#~p}5Z#mWhz#H+0im*w_H#Wg0X>28@fohl;ZFqcrt8dcWC5E5##u3e&2U?NUTV zBIi4N4HLf7hF?oMlx_spOiq0pSmw0`jYF{r@@ruMftNWeS~7pEK`?!*;MD4=_Fa^g z2~YmbTD{XM?SQhhoy8V=v-q<{t3)+rWo>KQ$0Zh&i1(RI9U}>yzETr!t4i}nLN|d& z0d}yh2E*SWjgNdW`GF_DU)e~W>sowsUd!x&KE;A~vli;(2ZD1I*YqesXrV(p3_Ap} zyS1}SP(9dLUpM{@ZT~AW{)WtEVnL_=j~*SgrCmV9*#$#VvbrQJ(yhtPN$WYxHshsx ziGDAeqH=$KVBInL^Q~65m`|Tx#GMN}m#2|!W0x9JRVt7iN`#2aG(?QsDRbwGwBkvo z&XOrL@eBV<=gsI8!}$z?K?qnG6GJq(2t_^4{uv0G{KbyC766N|VLd^+FD-3032mFE z5tLTA_POE7P|ZBOnY$yx-4tBmUxyE{kN;+``#eSKnyRzYH1Wq&#+>&Jp_1o{PfEGT z>F(hz`mGj86A?JZ(ZoMq(Dm4^tMhqo(gVuz;{yLsoF)#k_7i+1XuknG%_!r42;veZ zk7#(IAc!+8GME9|Z?G6Zym;|~ACi2da$Av~70BUCfqbHRSj5NE^|4oOahw`c7)6eJX7trC8!H7D=KTur>|ubb0q=<8)U2BA<(cq_7!RpZqhBG{dhvNe>L@bq+hi1yBjUtcT{B@6~Iaytub zOA=^$d38>yC^hv<8P>`)HC0+l%2nT-d-Q_?iGGJVfLt43KrY%X4!&kLB_$uj=Ij~8 z$EMuAOK8_<(^cEr7CF!P?$umg5Wor@)<#B~;grN}Ek14SQTO`CCN3nIf!F z-TIvK2vYnM8wqwS&t|@U{hG3DTDGxQA%=1Ph4zb%;H6!^gN9qqlQny5g}tfyR_?5i zy?>a>3`r&nweGNcyG*~G`TS^#a$(UOtb}v*#wY6wy}XG+XNXfpolQ+WmKde4ek6`I z&dphs7vq-a7#tg0ade7bSs5>Qy10!(;gj{{RTg9~#P9uRMHCs_RZ`T#Sn$FR@ylh6 zs8EkUh(bBjdp78MYhLUr5nvLxC?5bO@RJT0*Z^=q-*pvbxZs!${@rKSe zcl_P+l8rz0^48X_d_=e|e--yjFfn`S4m;xGo73coJ@Yo&FV_T0jU(LD-B`cS5@U+G zr6_7Us1sTd(f_&J4S6w3uQSPaG(0EPj<&Sd5stYH}8va-5Lf+)A`!3`zCscqcm6c9<^EDkVIAPBbHdI7#1 z9UR;RI7~HLcT$3t^#>S2!bBG?1kRsmbo2ns$9TFTw`xCs2FR6-l~qnkN4X|ch#;Cn^_nac|$<6J@jaRyCz5Wfy&@QmJs3@9}LZD zxpE^Cy!Qv~gb`qd;sFr@ym-`~e?z3}=7Uon&T<||k8n&oLlRHc{4qY_6rgbgr&ylN z{L0D=rH4D+o12>{ZTcovJh*j(>6f;1Tc7IR-mbmE7Ox8KfoE_&i%5djV$E#b&pNiNhf>1Lw%J&SwX3=D&eeIP-7v zp!ddyeD=6sA{azj+i|^od3}AgG6-DB;WPN@Yqz!FB?OOYb)R_b*2V@o0tdYTfVuSJ z$2MrawNivvpjW*Pqa9|Q9WO*@meLfhHnYCJb8egW6EW|=*tl`+`JW>&7Xx9*!10ge zULFKLxb^(uRR9%0<*J!1Xal9-eb$@Y+~tNQJ()H^q0|K`YQ4C1Sgfj6?#bQVGd&#$ zW*TdwC42=Rz7+P`v9Ynq9R4^LnF*vd@fg3;KTCU43xIIpBkb!dzy7;f=~F`k6JUsi zl(={`ydN;d)a8l?@45}J5{LmsFapQh_V73YqX9u7A$SXZzZN>+vf^R6|V%h)QY<;Eu_VCt1WTpuD`n-fIETZ@a8 zz^DPbfie{qOrOA`daSMOC*M%selZyIsAoRG5rzVO2f0m*8P{J_KAWGj;Rr z{Oe+<1Y5_(lIN=-hDdIvy?L{?v;-DL^#)_J&`d<4xWO;NPfM$}cB>KA8Swo-buN>M zcT=FO^RJj;;xpk0DvqE&xa;Lf!X!|BHij5P-=`z zAJavXQ&5a(y$Qrne?LAx1_m+C!oA_%la#1l7!lDg48Hs5lXt+eg(}}Z!O*miS;EdO z$;{gN{j3tRZMgOza7YDicW_r4PX#OP022hR8Hl{bk(q6=S7hB6{(;QD4sa4MG&A|+ zm<*o$pyC4Rij{x5A%Qp84@2msJMeUM?Hgs71dl7Qq|es14=7=yPrEE=jT;{W?#UnF z5kN$Rgo}`XfPjeT8`w;Cyt)>2G%@2J5CEjzJxGU&jVo163&*=%lP;r+zCwn(HPh%4 zMykU83Z}_0(0?e1s=XLpl(WzF5wIlSp^#UrZnQpx5qr4qHLMVL-QpY^wvcH+-??PQ z3Bv}@HG{q+GqXvm{}KW^$pCmjb!JeKwQ49lV#up#2~AK4M7yB!5&gkcc`;2tUk@xm zn?cSc>A4J+(0pdKE&S^BGA?%o*7$Fo#DdI^Ayub*PW;<|2LF}FOwLY!HDa-J8i&hK zZH$H4$aOff%pb2@t1+DpE|jV~boB8#gyRsfms-1iHqv5Fr{tY~iVpL+(Lb-ptKRZl zwtVN%VJ%p0MvCFZpGewag79UJd%nHzAafSCA9xQt)L=rM3Ja?p0XkRR=9~TO#@p^c zk*IoLS3UOev8x=nttU~`f!902-~R!8z2S5KYK+5|Qm-2_pJ88s95?8A{tCn=$a`EA z6cjE9*PeDW{>y7`sP^5>IUFx*Or^qT07(P3Cqy)BP;kOoA0`o9>xbDHgCh`vwK^o4 zU?JHEVq5TmgHvP(<8@c0VH3V-Z?+ot4?_wynwJZNRH~UwIo-Us0$s?ub$)eguG3oV(YlqaRRa= zm`S(0Z@ z69|D87J7d6>M&?j;k=)~Fae2VeQ|NPj$EC0GrF1AsH|l`Lyq3I z0TBSd6;6oA3&=}h$zXZ|Nd{)jg6$V;-clXlo6&cgPzyx~web|d&GP(&#Jh%|pewM^BlHq!x5`zR| z71k$3hi$-JHd{!)#;2lb19!3+VtMt)|$c*uSf4WT9P?#Qt{qWi*fr&5} zq7y(iKtLT`%Y&!Vn=AzDA^Ya7>({Og>-LeY*jE~dS@GysRTHAHcEXG&gRg()^xIV! 
zJCT|t>QE7wm`u=*A3la1C;_6?22;JkGQ(gQi22Z9#S~Odv%`vv0baWQl8Xq=oEz>A ztZusp$VaoBoLc@7@pnm*DwHmJ@+> zI-sls^88DR)abmYP03g!$~xPX1BFH!-FU0=3?4^;NH2SffVUU+!TRd69DCDer7_#) zBIiBG%5mj6fNF5Y?Wi}w-V>Se0?2O|}={F&waj)W>@c;e`u(%UoFo1#r3<;ST;4LG8Loi9oR}f+P*;`}s{x7B3 zDL-J<7#C@(tHTfrfZm|S4PpIJFPH#BPqjILWC*6dz*SyMN@57HD99NhIrkgJ4^3v> zq1@RkX*^w>pMNm6Zx{idqi{O^v*;cWEb5!Jy$O@>gxJ{?{LOT9~MvsdQWevko;RpqAHH(vEOJ8GubLt3qb5q~G;S|Kd|GCGC(9qj3H0==D z2}xEA@v1gs97fmU#7MSS5CCt0bH^F@sRLmVNkW2OcTWU*zul0V)`9pQ0-i;!=Xn>s zLCn)28=T3Hh={&_YD&uL;^NaF^<-<@GatlMf_@QpJ-C5UX9>aMzIO8_v?qBHtZ^MU zL?C0t#BNNk`)Kr)u3mr$uBl%t`Z!3uisL5tK(gVBa^z;-gB;my5;^V9G^!V@$ zV5UG?;`~`9>G?-m{wWyfTBpGmu_2q_o+kgT0Kq{PXgVz$K3OSLKpn?z9l9KGDs-OL zm;ZY)|36px|NmdCGNu!nj$T1wy9=*9AdrdPnHTM?3_=C!uNGf!1LN0;blNW*w_m30L zB;NTNXvc9M3>0jzCI}Dh9G6tZ%}msI|E?W7jk4HDJxjz>%Tn@ZHq+A`FeSgEk&&MM zWo)b_Su+r(&Bdz@W3NAtjXl`eDXpraJb%8%dn2V9SMq@4J|rK>{8k}KS@||u9nIv) zQX~i@>dBKQ5cN1rhJ&+Kr)`CjU(!THrk+YPf61gyMU zv&&vv(NRaEIYZ}iEAlUiUh}0`WDMJNvnwepwg1+3wUGWlOn@9o6R;Kmjb#H2T?AeO z_+SBDkp}H4X$ImJm}LV~rp&?2+`5AX!C%wAFdMcc=1Ysfd1yiQ1rr5?#tMj-^$$ef z{Gqar#^fKu%86LJ`1*|H{V9)W4W@dve6}+voLMSJxAAt(r~6;0$(@#-PVoAw)Nq;B zbD8dqxh{6Pe*V6FbNBSs%hWOT?0<9~z#up#rnTAGe(;=vR&xv3o=!HvQ~^p#%52>m z`$tF=5PW_@Dc<@B>^H<9Edk%;3m4`AK?esm=mj?qGB(D;ky!8dXs0z@J``$udTQ!^ zsO5mS7ZHx5;zED}TnOg~2aqOe0fH~=2$-mF8BBTr&Yt}P^)K`g*c!mTJDn4j`xaE| zoBdhg)jP0{_I7qq*rZ9jU#`u&Z=@or${-y*OAmNk z2GP=XPmpv{V)k8Z<6nt>>}1KNJn>WXom&$u`q^A>QGDX$mwdIkgAFec|ZjN>828xioHX@R92YkL~8-|_K$si$0jDgj%=d0S8H+^ zN*##H@>J^1!oJnBe>7MK5m4lVbs?OGLcT&wu6k1wWC*bE^su!2pGSk2LM86A*BziC znw@|^P$BTK8J;$a2<@*cbDEomk3`Mq*%s~6IZ2SSH}Hj9X;feGsbAjbI-M}dg$h{% z7}M~JiwyV-KgvxmFDw9grU(=h)Mz~}eQ>ryKy8*P;Q<3l7J#GQ|vb6pN}BuL5~$0{@c5|6Ocd~8h(dm z#ZJ=^bapfY>2`C;3)O6zTSsv6-~ct~4c&ZO&@={LoHuOP~3z?Nivv?T3tysJyiAN5ih zH?jm?zBX;P9v3v^@V)?HD(LJKR2iRPcFn)zoPi_+124f9p1rFKmm)#kz#XsJD#Xxe zX+diGKm_!DlF~HPv2;{lf6^6)) zwbRq@cAYT$xzejn{!qc7>bb4dI5*8#* zE&^KvScu?gR|swqxE;_c9X<1FZ*SOQ=&gwO^+1oo_?(V;!a)d+~91&rd76;;o z?@-(3p>7Hrf|iPv7A0&|*%~6YR4w}V+*7@*Y!4RO|B+V^5EIKn zuLUL^F}K6U{0qZji4k#38Z6G=S>oz|7$@Oy*z1NuAg-8})wGIQ(;Ev;itKQZk+ zqw4ugE9vPdcI-C(+0@u={?BP}pcM3mam$k4D<-KjVcC@RbF}hRc~f(B<6hXrhsP>= zV_kC;1zHwmezL+pWJ{!Z{sAh#u9lRJ88YM}j0=XI5LlDqX8(d^EA|D@pTW;S9CN2IpfDmblEwmN#lS4|rje16j*b=Z6DY{b>q{FsKPdgGC3-LEEo&GwZu{P)3` zH&l(jXguB>uKL7hWn_?e@2fk0KEjKT>3hb}@y^}aPh0E{vKdlMKs2BD_N_m}$VBtU z9Rp7C%SJ8q4yvRzf?@joAu&{l1eHTyC} zkM_O6?J0?{49=AJS~BD`l9%T;Qp*F4$GL^yNpfD3&}!3A_}x5bVdH`lf4HGn*dG!wiD=Q-H z(cahIrPwwX`Vgg4r|F;vyvMVUB42x;Pr`KV>LRvXi{LzpfFND|p6)=^?W)3qt-2|X z-^ZiVw)RR?Sd;vY@)<_HQTm$C1@AW;FEJt&IaSkUT>XV0nMS)zWOFh-f6i(jhU=?HW3jqkxk zOUgYublXU9LDQ_j4#LV+05GBiV(^;!5|7DQGipq!Ms48DmPc?Y$&5b3eQ8 z58W%>I#~W8M?L*fO})h*i5Etd@>I~g+K83M=t$$?l^M*atGBl;LzsHJ0}bZ>@+w9(3t^};zI9^}RgLK8f( zI86)MUtiXg?yz2_M#!f>i%D3(EOS$G{k&;3e>UmC!RN8RZsfKz25Npt*N&m3KerF< zTUVmrg%*srAd`hTjOdI*XpE*TmqY!ppM@8kogDLR(}g&eujT~FY25chRF}9?x3%wg zip?3_udPy=7{s1(UO0mN`I1hBO&{m zV`)sSf9{VtGmRFLEvH+@Z(?0Wb2ZiaF_jROKyMBf=YAST4ENCZ2ze^plpJsf!+ChY zBR4KGm>jEqK?D6O#t(EP2xT`&v=l z7Wx9KSj|R|Ezx^*;ApRrJc{9{!1U&I{lpQG{U%brKH(m>w^31Qj*gDdRczS+hRt7= zD&2f=Q5}Ly2nIdgvo4Y|NC1Im1^o>y6`MO-@AR3%JazptbI1HwA5RaSN}VE_0mxpxB+dN&k0IZJXXHK-z zIfg`Pbdm8MQf;ups`Oq;PU}n8zr64;$(s4qg$VD=t183s=K-Dki`BqO&IQB+im;UV zRe0IbJayxymX_(0xU`sxfdO966^cuOEf5XCK^bQy2q@7J94>1?SF~yAsHroqbY;vi j(9qnE(1MN9vv`Kb#M85&kZtCKfRBozrb02=GUR^&6jvZC literal 0 HcmV?d00001 From c7da191569d9ab6ad3ebc379d7cb4606edb622f6 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Fri, 24 Jul 2020 13:33:17 +0000 Subject: [PATCH 1261/2289] bugfixes for runs graphs; tests for runs & pfts endpoints --- 
 apps/api/R/runs.R          |  9 +++++++--
 apps/api/pecanapi-spec.yml |  2 +-
 apps/api/tests/test.pfts.R | 33 +++++++++++++++++++++++++++++++++
 apps/api/tests/test.runs.R | 18 ++++++++++++++++--
 4 files changed, 57 insertions(+), 5 deletions(-)
 create mode 100644 apps/api/tests/test.pfts.R

diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index f6a3dd045ca..3e61fea9792 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -134,7 +134,8 @@ getRunDetails <- function(run_id, res){
 #' @return PNG plot of the requested variable for the run
 #' @author Tezan Sahu
 #* @get /<run_id>/graph/<year>/<y_var>
-#* @png
+#* @serializer contentType list(type='image/png')
+
 plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600, res){
   # Get workflow_id for the run
   dbcon <- PEcAn.DB::betyConnect()
@@ -158,7 +159,11 @@ plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600
   }

   # Plot & return
-  return(PEcAn.visualization::plot_netcdf(datafile, y_var, x_var, width, height, year=year))
+  filename <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/temp", stringi::stri_rand_strings(1, 10), ".png")
+  PEcAn.visualization::plot_netcdf(datafile, y_var, x_var, as.integer(width), as.integer(height), year=year, filename=filename)
+  img_bin <- readBin(filename,'raw',n = file.info(filename)$size)
+  file.remove(filename)
+  return(img_bin)
 }

 #################################################################################################

diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
index 7466c7de57b..10591471239 100644
--- a/apps/api/pecanapi-spec.yml
+++ b/apps/api/pecanapi-spec.yml
@@ -3,7 +3,7 @@ servers:
   - description: PEcAn API Server
     url: https://pecan-tezan.ncsa.illinois.edu/
   - description: PEcAn Test Server
-    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/769e6872
+    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/0ac36769
   - description: Localhost
     url: http://127.0.0.1:8000

diff --git a/apps/api/tests/test.pfts.R b/apps/api/tests/test.pfts.R
new file mode 100644
index 00000000000..b4de5cbe393
--- /dev/null
+++ b/apps/api/tests/test.pfts.R
@@ -0,0 +1,33 @@
+context("Testing all PFTs endpoints")
+
+test_that("Calling /api/pfts/ returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/pfts/?pft_name=temperate&pft_type=plant&model_type=sipnet",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/pfts/ with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/pfts/?pft_name=random&model_type=random",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
+
+test_that("Calling /api/pfts/{pft_id} returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/pfts/2000000045",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/pfts/{pft_id} with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/pfts/0",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
\ No newline at end of file

diff --git a/apps/api/tests/test.runs.R b/apps/api/tests/test.runs.R
index 3a49cfad77d..87857680114 100644
--- a/apps/api/tests/test.runs.R
+++ b/apps/api/tests/test.runs.R
@@ -26,12 +26,26 @@ test_that("Calling /api/runs/ with an invalid workflow id returns Status 404", {
   expect_equal(res$status, 404)
 })

-
-
 test_that("Calling /api/runs/{id} with an invalid run id returns Status 404", {
   res <- httr::GET(
     "http://localhost:8000/api/runs/1000000000",
     httr::authenticate("carya", "illinois")
   )
   expect_equal(res$status, 404)
+})
+
+test_that("Calling /api/runs/{run_id}/graph/{year}/{y_var}/ with valid inputs returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/runs/99000000282/graph/2002/GPP",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/runs/{run_id}/graph/{year}/{y_var}/ with invalid inputs returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/runs/1000000000/graph/100/GPP",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
 })
\ No newline at end of file
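With the serializer fix above, the graph endpoint now returns raw PNG bytes with an `image/png` content type instead of relying on plumber's `@png` device. A small sketch of how a client might consume it, reusing the test fixtures above (run `99000000282` and the `carya`/`illinois` demo credentials) plus the optional `width`/`height` query parameters — none of which are guaranteed to exist outside the test server:

```R
library(httr)

# Request the GPP-vs-time plot for run 99000000282, year 2002,
# overriding the default image size via query parameters
res <- GET(
  "http://localhost:8000/api/runs/99000000282/graph/2002/GPP?width=1200&height=900",
  authenticate("carya", "illinois")
)

# The endpoint returns raw PNG bytes; check the content type before saving
if (http_type(res) == "image/png") {
  writeBin(content(res, "raw"), "gpp_2002.png")
}
```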
"http://localhost:8000/api/runs/1000000000", httr::authenticate("carya", "illinois") ) expect_equal(res$status, 404) +}) + +test_that("Calling /api/runs/{run_id}/graph/{year}/{yvar}/ with valid inputs returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/runs/99000000282/graph/2002/GPP", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Calling /api/runs/{run_id}/graph/{year}/{yvar}/ with valid inputs returns Status 200", { + res <- httr::GET( + "http://localhost:8000/api/runs/1000000000/graph/100/GPP", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) }) \ No newline at end of file From 96a02a980df1f330371936de83b8e54bad7d3f79 Mon Sep 17 00:00:00 2001 From: Juliette Bateman Date: Fri, 24 Jul 2020 13:00:10 -0400 Subject: [PATCH 1262/2289] Adding the SMAP download, and a minor change to gee2pecan_smap.py date output --- .../inst/download_SMAP_gee2pecan.R | 123 ++++++++++++++++++ modules/data.remote/inst/gee2pecan_smap.py | 106 +++++++-------- 2 files changed, 172 insertions(+), 57 deletions(-) create mode 100644 modules/data.remote/inst/download_SMAP_gee2pecan.R diff --git a/modules/data.remote/inst/download_SMAP_gee2pecan.R b/modules/data.remote/inst/download_SMAP_gee2pecan.R new file mode 100644 index 00000000000..ee1b7d12ec0 --- /dev/null +++ b/modules/data.remote/inst/download_SMAP_gee2pecan.R @@ -0,0 +1,123 @@ +##'@name download_SMAP_gee2pecan +##'@description: +##'Download SMAP data from GEE by date and site location +##' +##'@param start start date as YYYY-mm-dd (chr) +##'@param end end date YYYY-mm-dd (chr) +##'@param site_id Bety site location id number(s) +##'@param var variable/band to be extracted from GEE SMAP archive (e.g "ssm" for surface soil moisture) +##' Go to https://developers.google.com/earth-engine/datasets/catalog/NASA_USDA_HSL_SMAP_soil_moisture#bands for options +##'@param geoJSON_outdir directory to store site GeoJSON, must be the location same as 'gee2pecan_smap.py' +##'@param smap_outdir directory to store netCDF file of SMAP data, if directory folder does not exist it will be created +##'@return data.frame of SMAP data w/ Date, NA's filling missing data +##' +##'Requires python3 and earthengine-api. +##'Untill 'gee2pecan_smap' is integrated into PEcAn workflow, +##'follow GEE registration 'Installation Instructions' here: +##'https://github.com/PecanProject/pecan/pull/2645 +##' +##'@authors Juliette Bateman, Ayush Prasad (gee2pecan_smap.py) +##' +##'@examples +##'\dontrun{ +##'test <- download_SMAP_gee2pecan( +##' start = "2019-11-01", +##' end = "2019-11-10", +##' site_id = 676, +##' geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst", +##' smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst", +##' var = "ssm") +##'} + + +download_SMAP_gee2pecan <- function(start, end, + site_id, + geoJSON_outdir, smap_outdir, var) { + + + #################### Variable Entry Parameter Check #################### + + ## Check that there is a valid variable entered + if (var %in% c("ssm", "susm", "smp", "ssma", "susma") == FALSE) { + + PEcAn.logger::logger.severe( + "ERROR: The variable (",var,") is not valid. Please enter a different variable. Visit https://developers.google.com/earth-engine/datasets/catalog/NASA_USDA_HSL_SMAP_soil_moisture#bands for available variables." 
+    )
+  }
+
+  #################### Connect to BETY ####################
+
+  bety <- list(user='bety', password='bety', host='localhost',
+               dbname='bety', driver='PostgreSQL',write=TRUE)
+  con <- PEcAn.DB::db.open(bety)
+  bety$con <- con
+  site_ID <- as.character(site_id)
+  suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon,
+                                              ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})",
+                                              ids = site_ID, .con = con))
+  suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry))
+  suppressWarnings(qry_results <- DBI::dbFetch(qry_results))
+  site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat,
+                    lon=qry_results$lon, time_zone=qry_results$time_zone)
+
+
+  #################### Begin Data Extraction ####################
+
+  # Create geoJSON file for site
+  site_GeoJSON <- data.frame(site_info$lon, site_info$lat) %>%
+    setNames(c("lon","lat")) %>%
+    leafletR::toGeoJSON(name = site_info$site_name, dest = geoJSON_outdir, overwrite = TRUE) %>%
+    rgdal::readOGR()
+  site_GeoJSON$name = site_info$site_name
+  site_GeoJSON = site_GeoJSON[-1] %>%
+    leafletR::toGeoJSON(name = site_info$site_name, dest = geoJSON_outdir, overwrite = TRUE)
+
+  # Locate gee2pecan_smap.py function and load into R
+  script.path = file.path("/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst/gee2pecan_smap.py")
+  reticulate::source_python(script.path)
+
+  # Run gee2pecan_smap function
+  smap.out = gee2pecan_smap(geofile = site_GeoJSON, outdir = smap_outdir, start = start, end = end, var = var)
+  output = ncdf4::nc_open(file.path(smap_outdir, paste0(site_info$site_name, "_", var, ".nc")))
+  smap.data = cbind((ncdf4::ncvar_get(output, "date")), as.numeric(ncdf4::ncvar_get(output, var))) %>%
+    as.data.frame(stringsAsFactors = FALSE) %>%
+    setNames(c("Date", "var")) %>%
+    dplyr::mutate(var = as.numeric(var)) %>%
+    dplyr::mutate(Date = as.Date(Date)) %>%
+    tidyr::complete(Date = seq.Date(as.Date(start), as.Date(end), by="day"))
+  smap.data = smap.data %>% setNames(c("Date", var))
+
+
+
+  #################### Convert to % Soil Moisture (when necessary) ####################
+
+  ## If variable is ssm or susm, must convert unit from mm --> %
+  # SSM (surface soil moisture) represents top 0-5cm (50mm) of soil
+  if (var == "ssm") {
+    smap.data$ssm.vol = unlist((smap.data[,2] / 50) * 100) %>% as.numeric()
+  # SUSM (subsurface soil moisture) represents top 0-100 cm (1000mm) of soil
+  } else if (var == "susm") {
+    smap.data$susm.vol = unlist((smap.data[,2] / 1000) * 100) %>% as.numeric()
+  }
+
+
+
+  #################### Date Entry Parameter Check ####################
+
+  ## Check if there is data for the date range entered
+  if (all(is.na(smap.data[-1])) == TRUE) {
+
+    PEcAn.logger::logger.error(
+      "There are no SMAP data observations for this date range (", start, " to ", end,
+ PEcAn.logger::logger.warn( + "WARNING: There are some missing SMAP observations during this date range (", start, " to ", end, ").") + + return(smap.data) } + +} + diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py index fcf91a31623..ab1d11497cc 100644 --- a/modules/data.remote/inst/gee2pecan_smap.py +++ b/modules/data.remote/inst/gee2pecan_smap.py @@ -1,15 +1,14 @@ """ Downloads SMAP Global Soil Moisture Data from Google Earth Engine and saves it in a netCDF file. -Data retrieved: ssm, susm, smp, ssma, susma - Requires Python3 Author: Ayush Prasad """ -from gee_utils import create_geo, get_sitecoord, get_sitename + import ee import pandas as pd +import geopandas as gpd import os import xarray as xr import datetime @@ -17,7 +16,7 @@ ee.Initialize() -def gee2pecan_smap(geofile, outdir, start, end): +def gee2pecan_smap(geofile, outdir, start, end, var): """ Downloads and saves SMAP data from GEE @@ -31,15 +30,37 @@ def gee2pecan_smap(geofile, outdir, start, end): end (str) -- ending date areaof the data request in the form YYYY-MM-dd + var (str) -- one of ssm, susm, smp, ssma, susma + Returns ------- Nothing: output netCDF is saved in the specified directory """ - geo = create_geo(geofile) - - def smap_ts(geo, start, end): + # read in the geojson file + df = gpd.read_file(geofile) + + if (df.geometry.type == "Point").bool(): + # extract coordinates + lon = float(df.geometry.x) + lat = float(df.geometry.y) + # create geometry + geo = ee.Geometry.Point(lon, lat) + + elif (df.geometry.type == "Polygon").bool(): + # extract coordinates + area = [ + list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) + ] + # create geometry + geo = ee.Geometry.Polygon(area) + + else: + # if the input geometry type is not + raise ValueError("geometry type not supported") + + def smap_ts(geo, start, end, var): # extract a feature from the geometry features = [ee.Feature(geo)] # create a feature collection from the features @@ -52,7 +73,7 @@ def smap_ts_feature(feature): ee.ImageCollection("NASA_USDA/HSL/SMAP_soil_moisture") .filterBounds(area) .filterDate(start, end) - .select(["ssm", "susm", "smp", "ssma", "susma"]) + .select([var]) ) def smap_ts_image(img): @@ -68,77 +89,47 @@ def smap_ts_image(img): scale=scale, ) # store data in an ee.Array - ssm = ee.Array(img.get("ssm")) - susm = ee.Array(img.get("susm")) - smp = ee.Array(img.get("smp")) - ssma = ee.Array(img.get("ssma")) - susma = ee.Array(img.get("susma")) + smapdata = ee.Array(img.get(var)) tmpfeature = ( ee.Feature(ee.Geometry.Point([0, 0])) - .set("ssm", ssm) - .set("susm", susm) - .set("smp", smp) - .set("ssma", ssma) - .set("susma", susma) + .set("smapdata", smapdata) .set("dateinfo", dateinfo) ) return tmpfeature # map tmpfeature over the image collection smap_timeseries = collection.map(smap_ts_image) - return ( - feature.set("ssm", smap_timeseries.aggregate_array("ssm")) - .set("susm", smap_timeseries.aggregate_array("susm")) - .set("smp", smap_timeseries.aggregate_array("smp")) - .set("ssma", smap_timeseries.aggregate_array("ssma")) - .set("susma", smap_timeseries.aggregate_array("susma")) - .set("dateinfo", smap_timeseries.aggregate_array("dateinfo")) - ) + return feature.set( + "smapdata", smap_timeseries.aggregate_array("smapdata") + ).set("dateinfo", smap_timeseries.aggregate_array("dateinfo")) # map feature over featurecollection featureCollection = featureCollection.map(smap_ts_feature).getInfo() return featureCollection - fc = smap_ts(geo=geo, start=start, end=end) + 
+    fc = smap_ts(geo=geo, start=start, end=end, var=var)

     def fc2dataframe(fc):
-        ssm_datalist = []
-        susm_datalist = []
-        smp_datalist = []
-        ssma_datalist = []
-        susma_datalist = []
-        date_list = []
+        smapdatalist = []
+        datelist = []
         # extract var and date data from fc dictionary and store it in smapdatalist and datelist
-        for i in range(len(fc["features"][0]["properties"]["ssm"])):
-            ssm_datalist.append(fc["features"][0]["properties"]["ssm"][i][0])
-            susm_datalist.append(fc["features"][0]["properties"]["susm"][i][0])
-            smp_datalist.append(fc["features"][0]["properties"]["smp"][i][0])
-            ssma_datalist.append(fc["features"][0]["properties"]["ssma"][i][0])
-            susma_datalist.append(fc["features"][0]["properties"]["susma"][i][0])
-            date_list.append(
-                datetime.datetime.strptime(
+        for i in range(len(fc["features"][0]["properties"]["smapdata"])):
+            smapdatalist.append(fc["features"][0]["properties"]["smapdata"][i][0])
+            datelist.append(
+                str(datetime.datetime.strptime(
                     (fc["features"][0]["properties"]["dateinfo"][i]).split(".")[0],
                     "%Y-%m-%d",
-                )
+                ))
             )

-        fc_dict = {
-            "date": date_list,
-            "ssm": ssm_datalist,
-            "susm": susm_datalist,
-            "smp": smp_datalist,
-            "ssma": ssma_datalist,
-            "susma": susma_datalist,
-        }
+        fc_dict = {"date": datelist, var: smapdatalist}
         # create a pandas dataframe and store the data
-        fcdf = pd.DataFrame(
-            fc_dict, columns=["date", "ssm", "susm", "smp", "ssma", "susma"]
-        )
+        fcdf = pd.DataFrame(fc_dict, columns=["date", var])
         return fcdf

     datadf = fc2dataframe(fc)

-    site_name = get_sitename(geofile)
-    AOI = get_sitecoord(geofile)
+    site_name = df[df.columns[0]].iloc[0]
+    AOI = str(df[df.columns[1]].iloc[0])

     # convert the dataframe to an xarray dataset, used for converting it to a netCDF file
     tosave = xr.Dataset(
             "start_date": start,
             "end_date": end,
             "AOI": AOI,
+            "product": var,
         },
     )

     if not os.path.exists(outdir):
         os.makedirs(outdir, exist_ok=True)

+    file_name = "_" + var
     # convert to netCDF and save the file
-    tosave.to_netcdf(os.path.join(outdir, site_name + "_smap.nc"))
-
+    tosave.to_netcdf(os.path.join(outdir, site_name + file_name + ".nc"))
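One practical note on the SMAP bands handled above before the NEON patch below: GEE returns `ssm` and `susm` as millimetres of water over a fixed soil column (0–5 cm for `ssm`, 0–100 cm for `susm`), which is what the R wrapper's conversion block turns into volumetric percentages. A quick worked example of that arithmetic, with made-up values for illustration:

```R
# ssm: mm of water in the top 50 mm of soil -> volumetric %
ssm_mm  <- 15
ssm_pct <- (ssm_mm / 50) * 100      # = 30% volumetric soil moisture

# susm: mm of water in the top 1000 mm of soil -> volumetric %
susm_mm  <- 120
susm_pct <- (susm_mm / 1000) * 100  # = 12% volumetric soil moisture
```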
From 7fad5ecfe47e48ba370a911ddf70d1ef0a81e640 Mon Sep 17 00:00:00 2001
From: Juliette Bateman
Date: Fri, 24 Jul 2020 14:47:24 -0400
Subject: [PATCH 1263/2289] NEON soil data download

---
 modules/data.land/R/download_NEON_soildata.R | 85 ++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 modules/data.land/R/download_NEON_soildata.R

diff --git a/modules/data.land/R/download_NEON_soildata.R b/modules/data.land/R/download_NEON_soildata.R
new file mode 100644
index 00000000000..a8b7a81f20d
--- /dev/null
+++ b/modules/data.land/R/download_NEON_soildata.R
@@ -0,0 +1,85 @@
+##' @name download_NEON_soildata
+##' @description:
+##' Download NEON Soil Water Content and Soil Salinity data by date and site name
+##'
+##' @param site four letter NEON site code name(s). If no site is specified, it will download all of them (chr) (e.g. "BART" or multiple c("SRER", "KONA", "BART"))
+##' @param avg averaging interval: either 30 minute OR 1 minute, default will return BOTH (chr) (e.g. 1 or 30)
+##' @param var variable of interest, either "SWC" (soil water content) or "SIC" (soil ion content)
+##' @param startdate start date as YYYY-mm. If left empty, all available data will be downloaded (chr)
+##' @param enddate end date as YYYY-mm. If left empty, all available data will be downloaded (chr)
+##' @param outdir output directory to store .rds files with individual site data
+##' @return Data.frames for each site of interest loaded to the R Global Environment, AND saved as .rds files
+##'
+##' @author Juliette Bateman
+##'
+##' @examples
+##' \dontrun{
+##' test <- download_NEON_soildata(
+##'   site = c("SRER", "BART"),
+##'   avg = 30,
+##'   startdate = "2019-01",
+##'   enddate = "2020-01",
+##'   var = "SWC",
+##'   outdir = getwd())}

+## Install NEON libs
+#devtools::install_github("NEONScience/NEON-geolocation/geoNEON")
+#devtools::install_github("NEONScience/NEON-utilities/neonUtilities", force = TRUE)
+#install.packages("BiocManager")
+# BiocManager::install("rhdf5")
+
+
+download_NEON_soildata <- function(site, avg = "all", var,
+                                   startdate = NA, enddate = NA,
+                                   outdir) {
+
+
+  #################### Data Download from NEON ####################
+  soil.raw = neonUtilities::loadByProduct(dpID = "DP1.00094.001", site = site, avg = avg, startdate = startdate, enddate = enddate, check.size = FALSE)
+
+  # Only select data from list and remove flagged observations
+  if (avg == 30) {
+    data.raw = soil.raw$SWS_30_minute
+  } else if (avg == 1) {
+    data.raw = soil.raw$SWS_1_minute
+  } else {
+    data.raw = list(soil.raw$SWS_1_minute, soil.raw$SWS_30_minute)
+  }
+
+
+  # Separate selected variable, omit NA values
+  if (var == "SWC") {
+    data.raw = (split(data.raw, data.raw$VSWCFinalQF))$'0'
+    data.raw = data.raw[,-c(15:22)] %>% na.omit()
+  } else if (var == "SIC") {
+    data.raw = (split(data.raw, data.raw$VSICFinalQF))$'0'
+    data.raw = data.raw[,-c(7:14)] %>% na.omit()
+  }
+
+  # Separate dataframe into list of each site of interest
+  data.sites = data.raw %>%
+    split(data.raw$siteID)
+
+  # Separate each site by sensors (horizontal and vertical position)
+  data.sitePositions = vector(mode = "list", length = length(data.sites))
+  names(data.sitePositions) = names(data.sites)
+  for (i in seq_along(data.sites)) {
+    data.sitePositions[[i]] = data.sites[[i]] %>%
+      split(list(data.sites[[i]]$horizontalPosition, data.sites[[i]]$verticalPosition))
+  }
+
+  # Load individual site list into the Global Environment
+  data.sitePositions %>%
+    list2env(envir = .GlobalEnv)
+
+  # Save site lists as .rds files to outdir
+  for (i in names(data.sitePositions)){
+    #rlist::list.save(data.sites[[i]], file = paste0(outdir, "/", i, ".json"), type = tools::file_ext(".json"))
+    #write.csv(data.sites[[1]], paste0(outdir, "/", i,".csv"))
+    saveRDS(data.sitePositions[[i]], file = paste0(outdir, "/", i, ".rds"))
+  }
+
+}
+
+
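A note on consuming what this function writes: each site ends up as one `.rds` file whose elements are split by sensor horizontal/vertical position. A hedged sketch of reading one back — the file name follows the example above, and the `"001.501"` position key is hypothetical, since actual sensor positions vary by site:

```R
# Read back the per-site list saved by download_NEON_soildata()
bart <- readRDS("BART.rds")

# Elements are named "<horizontalPosition>.<verticalPosition>"
names(bart)

# Pull the records for one (hypothetical) sensor location
swc_sensor <- bart[["001.501"]]
```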
From decc2bf2dccba449b753a9b20b6bc6dc367 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Mon, 27 Jul 2020 12:50:18 -0500
Subject: [PATCH 1264/2289] updated docs to add internal links to api examples

---
 .../07_remote_access/01_pecan_api.Rmd | 32 +++++++++----------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
index 720267fd781..1521d98af19 100644
--- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -28,30 +28,30 @@ The currently implemented functionalities include:

 * __General:__
-  * `GET /api/ping`: Ping the server to check if it is live
-  * `GET /api/status`: Obtain general information about PEcAn & the details of the database host
+  * [`GET /api/ping`](#get-apiping): Ping the server to check if it is live
+  * [`GET /api/status`](#get-apistatus): Obtain general information about PEcAn & the details of the database host

 * __Models:__
-  * `GET /api/models/`: Search for model(s) using search pattern based on model name & revision
-  * `GET /api/models/{model_id}`: Fetch the details of specific model
+  * [`GET /api/models/`](#get-apimodels): Search for model(s) using search pattern based on model name & revision
+  * [`GET /api/models/{model_id}`](#get-apimodelsmodel_id): Fetch the details of specific model

 * __Sites:__
-  * `GET /api/sites/`: Search for site(s) using search pattern based on site name
-  * `GET /api/sites/{site_id}`: Fetch the details of specific site
+  * [`GET /api/sites/`](#get-apisites): Search for site(s) using search pattern based on site name
+  * [`GET /api/sites/{site_id}`](#get-apisitessite_id): Fetch the details of specific site

 * __PFTs:__
-  * `GET /api/pfts/`: Search for PFT(s) using search pattern based on PFT name, PFT type & Model type
-  * `GET /api/pfts/{pft_id}`: Fetch the details of specific PFT
+  * [`GET /api/pfts/`](#get-apipfts): Search for PFT(s) using search pattern based on PFT name, PFT type & Model type
+  * [`GET /api/pfts/{pft_id}`](#get-apipftspft_id): Fetch the details of specific PFT

 * __Workflows:__
-  * `GET /api/workflows/`: Retrieve a list of PEcAn workflows
-  * `POST /api/workflows/`: Submit a new PEcAn workflow
-  * `GET /api/workflows/{id}`: Obtain the details of a particular PEcAn workflow by supplying its ID
+  * [`GET /api/workflows/`](#get-apiworkflows): Retrieve a list of PEcAn workflows
+  * [`POST /api/workflows/`](#post-apiworkflows): Submit a new PEcAn workflow
+  * [`GET /api/workflows/{id}`](#get-apiworkflowsid): Obtain the details of a particular PEcAn workflow by supplying its ID

 * __Runs:__
-  * `GET /api/runs`: Get the list of all the runs
-  * `GET /api/runs/{run_id}`: Fetch the details of a specified PEcAn run
-  * `GET /api/runs/{run_id}/graph/{year}/{y_var}`: Plot the graph of desired output variables for a run
+  * [`GET /api/runs`](#get-apiruns): Get the list of all the runs
+  * [`GET /api/runs/{run_id}`](#get-apirunsrun_id): Fetch the details of a specified PEcAn run
+  * [`GET /api/runs/{run_id}/graph/{year}/{y_var}`](#get-apirunsrun_idgraphyeary_var): Plot the graph of desired output variables for a run

 _* indicates that the particular API is under development & may not be ready for use_

 ### `GET /api/runs/{run_id}/graph/{year}/{y_var}` {.tabset .tabset-pills}

 #### R Snippet

 ```R
 # Plot the Gross Primary Productivity vs Time for the run with ID `99000000282` for the year 2002
 res <- httr::GET(
   "http://localhost:8000/api/runs/99000000282/graph/2002/GPP",
   httr::authenticate("carya", "illinois")
   )
 writeBin(res$content, "test.png")
 ```
 ```{r, echo=FALSE, fig.align='center'}
-knitr::include_graphics(rep("figures/run_output_plot.png"))
+knitr::include_graphics(rep("../../figures/run_output_plot.png"))
 ```

 #### Python Snippet

 ```python
 # Plot the Gross Primary Productivity vs Time for the run with ID `99000000282` for the year 2002
 response = requests.get(
   "http://localhost:8000/api/runs/99000000282/graph/2002/GPP",
   auth=HTTPBasicAuth('carya', 'illinois')
   )
 with open("test.png", "wb") as file:
   file.write(response.content)
 ```
 ```{r, echo=FALSE, fig.align='center'}
-knitr::include_graphics(rep("figures/run_output_plot.png"))
+knitr::include_graphics(rep("../../figures/run_output_plot.png"))
 ```

 ### {-}

From a593b79b9d0b5f61dc880aa0a54df531486a7f6a Mon Sep 17 00:00:00 2001
From: Juliette Bateman
Date: Wed, 29 Jul 2020 11:03:19 -0400
Subject: [PATCH 1265/2289] Updated download_SMAP function

---
 modules/data.remote/inst/download_SMAP_gee2pecan.R | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/modules/data.remote/inst/download_SMAP_gee2pecan.R b/modules/data.remote/inst/download_SMAP_gee2pecan.R
--- a/modules/data.remote/inst/download_SMAP_gee2pecan.R
+++ b/modules/data.remote/inst/download_SMAP_gee2pecan.R
@@ -76,16 +76,13 @@ download_SMAP_gee2pecan <- function(start, end,

-  #################### Convert to % Soil Moisture (when necessary) ####################
+  #################### Convert to % Soil Moisture ####################

   ## If variable is ssm or susm, must convert unit from mm --> %
   # SSM (surface soil moisture) represents top 0-5cm (50mm) of soil
-  if (var == "ssm") {
-    smap.data$ssm.vol = unlist((smap.data[,2] / 50) * 100) %>% as.numeric()
+  smap.data$ssm.vol = unlist((smap.data[,2] / 50) * 100) %>% as.numeric()
   # SUSM (subsurface soil moisture) represents top 0-100 cm (1000mm) of soil
-  } else if (var == "susm") {
-    smap.data$susm.vol = unlist((smap.data[,2] / 1000) * 100) %>% as.numeric()
-  }
+  smap.data$susm.vol = unlist((smap.data[,2] / 1000) * 100) %>% as.numeric()

From 0dd26becd0c7dd8c02450d0bc385e9b35a98c33a Mon Sep 17 00:00:00 2001
From: hamzed
Date: Thu, 30 Jul 2020 12:34:22 -0400
Subject: [PATCH 1266/2289] Addressing comments on pft move section

---
 modules/assim.sequential/R/Remote_helpers.R | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R
index 6da0a791a3d..f30f541050b 100644
--- a/modules/assim.sequential/R/Remote_helpers.R
+++ b/modules/assim.sequential/R/Remote_helpers.R
@@ -163,16 +163,17 @@ SDA_remote_launcher <-function(settingPath,
         )
     # change the pft folder inside the setting
-
+
+  settings$pfts %>% purrr::map('outdir') %>% purrr::walk(function(pft.dir) {
     settings <<- rapply(settings,
                         function(x)
                           ifelse(
-                            x == missing.input,
+                            x == pft.dir,
                             file.path(settings$host$folder,
-                                      folder_name, "pft", fdir, fname) ,
+                                      folder_name, "pft") ,
                             x
                           ),
                         how = "replace")
@@ -262,7 +263,7 @@
   need.copy.dirs <- dirname(need.copy) %>%
     unique() %>%
-    discard(~ .x %in% c("."))
+    purrr::discard(~ .x %in% c("."))

   need.copy.dirs %>%

From c51b2c0b68b5b1ab53cc2ff604f08268ca8e530e Mon Sep 17 00:00:00 2001
From: Juliette Bateman
Date: Thu, 30 Jul 2020 14:00:00 -0400
Subject: [PATCH 1267/2289] Fixed file path

---
 modules/data.remote/inst/download_SMAP_gee2pecan.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/inst/download_SMAP_gee2pecan.R b/modules/data.remote/inst/download_SMAP_gee2pecan.R
index d89918b08e3..90bb9408aa0 100644
--- a/modules/data.remote/inst/download_SMAP_gee2pecan.R
+++ b/modules/data.remote/inst/download_SMAP_gee2pecan.R
@@ -61,7 +61,7 @@ download_SMAP_gee2pecan <- function(start, end,
     leafletR::toGeoJSON(name = site_info$site_name, dest = geoJSON_outdir, overwrite = TRUE)

   # Locate gee2pecan_smap.py function and load into R
-  script.path = file.path("/fs/data3/jbateman/pecan/modules/data.remote/inst/gee2pecan_smap.py")
+  script.path = file.path(system.file("gee2pecan_smap.py", package = "PEcAn.data.remote"))
   reticulate::source_python(script.path)

   # Run gee2pecan_smap function

From 80d1131c4db7e2899d25d478332207d7812396a2 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Thu, 30 Jul 2020 16:36:15 -0400
Subject: [PATCH 1268/2289] Update sda_plotting.R

fix namespace on map

---
 modules/assim.sequential/R/sda_plotting.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R
index 40407bd5b78..1020d53994a 100755
--- a/modules/assim.sequential/R/sda_plotting.R
+++ b/modules/assim.sequential/R/sda_plotting.R
@@ -451,7 +451,7 @@
post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.co setNames(names(obs.mean))%>% purrr::map_df(function(one.day.data){ #CI - purrr::map2_df(sqrt(one.day.data$covs %>% map( ~ diag(.x)) %>% unlist), one.day.data$means, + purrr::map2_df(sqrt(one.day.data$covs %>% purrr::map( ~ diag(.x)) %>% unlist), one.day.data$means, function(sd,mean){ data.frame(mean-(sd*1.96), mean+(sd*1.96)) From 9f054794271bd19901ff41b0030655361091054b Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 30 Jul 2020 17:23:30 -0400 Subject: [PATCH 1269/2289] Update extract_ERA5.R --- modules/data.atmosphere/R/extract_ERA5.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/extract_ERA5.R b/modules/data.atmosphere/R/extract_ERA5.R index c4fb4bf74e8..bf219b4a509 100644 --- a/modules/data.atmosphere/R/extract_ERA5.R +++ b/modules/data.atmosphere/R/extract_ERA5.R @@ -35,7 +35,7 @@ extract.nc.ERA5 <- overwrite = FALSE, ...) { - library(xts) + # library(xts) # Distributing the job between whatever core is available. years <- seq(lubridate::year(start_date), From 3957101f4a5e9e6a4afcbd873790ec4889fc8cac Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 30 Jul 2020 18:04:51 -0400 Subject: [PATCH 1270/2289] Update ic_process.R --- modules/data.land/R/ic_process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.land/R/ic_process.R b/modules/data.land/R/ic_process.R index e1d9aa73150..b6c88e35be1 100644 --- a/modules/data.land/R/ic_process.R +++ b/modules/data.land/R/ic_process.R @@ -83,7 +83,7 @@ ic_process <- function(settings, input, dir, overwrite = FALSE){ #see if there is already files generated there newfile <-list.files(outfolder, "*.nc$", full.names = TRUE) %>% as.list()%>% - setNames(rep("path", length(.))) + stats::setNames(rep("path", length(.))) if (length(newfile)==0){ newfile <- PEcAn.data.land::BADM_IC_process(settings, dir=outfolder, overwrite=FALSE) From 0182ea3399cf436e85c50b0e3fe61e96adc67e5b Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 30 Jul 2020 18:05:34 -0400 Subject: [PATCH 1271/2289] Update soil_process.R --- modules/data.land/R/soil_process.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.land/R/soil_process.R b/modules/data.land/R/soil_process.R index 77d98e8a608..42669f44029 100644 --- a/modules/data.land/R/soil_process.R +++ b/modules/data.land/R/soil_process.R @@ -50,7 +50,7 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T #see if there is already files generated there newfile <-list.files(outfolder, "*.nc$", full.names = TRUE) %>% as.list()%>% - setNames(rep("path", length(.))) + stats::setNames(rep("path", length(.))) if(length(newfile)==0){ radiusL <- ifelse(is.null(settings$run$input$soil$radius), 500, as.numeric(settings$run$input$soil$radius)) @@ -93,4 +93,4 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T newfile <- PEcAn.data.land::extract_soil_nc(source.file,outfolder,lat = latlon$lat,lon=latlon$lon) return(newfile) -} # ic_process \ No newline at end of file +} # ic_process From e51ebae5da1c68e63b20451e1fad1a9892a1ed2b Mon Sep 17 00:00:00 2001 From: mccabete Date: Thu, 30 Jul 2020 19:19:48 -0400 Subject: [PATCH 1272/2289] Deleting met2model CO2 check --- models/ed/R/met2model.ED2.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/models/ed/R/met2model.ED2.R b/models/ed/R/met2model.ED2.R index 59d7b257d34..c867c1684a3 100644 --- 
a/models/ed/R/met2model.ED2.R +++ b/models/ed/R/met2model.ED2.R @@ -341,10 +341,10 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l update_frequency = dt, flag = 1 ) - if (!useCO2) { - metvar_table[metvar_table$variable == "co2", - c("update_frequency", "flag")] <- list(380, 4) - } + # if (!useCO2) { + # metvar_table[metvar_table$variable == "co2", + # c("update_frequency", "flag")] <- list(380, 4) + # } ed_metheader <- list(list( path_prefix = met_folder, From 443a08749fc3a403fa01db7e2ec26a30ecff50c0 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Fri, 31 Jul 2020 10:22:42 -0400 Subject: [PATCH 1273/2289] Update ic_process.R --- modules/data.land/R/ic_process.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.land/R/ic_process.R b/modules/data.land/R/ic_process.R index b6c88e35be1..8ec256668f0 100644 --- a/modules/data.land/R/ic_process.R +++ b/modules/data.land/R/ic_process.R @@ -82,8 +82,8 @@ ic_process <- function(settings, input, dir, overwrite = FALSE){ #see if there is already files generated there newfile <-list.files(outfolder, "*.nc$", full.names = TRUE) %>% - as.list()%>% - stats::setNames(rep("path", length(.))) + as.list() + names(newfile) <- rep("path", length(newfile)) if (length(newfile)==0){ newfile <- PEcAn.data.land::BADM_IC_process(settings, dir=outfolder, overwrite=FALSE) From 7b4a299996a1375abea78ee2ea6425fb31f19400 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Fri, 31 Jul 2020 10:23:30 -0400 Subject: [PATCH 1274/2289] Update soil_process.R --- modules/data.land/R/soil_process.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.land/R/soil_process.R b/modules/data.land/R/soil_process.R index 42669f44029..6982ef58bfe 100644 --- a/modules/data.land/R/soil_process.R +++ b/modules/data.land/R/soil_process.R @@ -49,8 +49,8 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T #see if there is already files generated there newfile <-list.files(outfolder, "*.nc$", full.names = TRUE) %>% - as.list()%>% - stats::setNames(rep("path", length(.))) + as.list() + names(newfile) <- rep("path", length(newfile)) if(length(newfile)==0){ radiusL <- ifelse(is.null(settings$run$input$soil$radius), 500, as.numeric(settings$run$input$soil$radius)) From ec044ca6e25406df14653ace6d88417402bcaac3 Mon Sep 17 00:00:00 2001 From: Juliette Bateman Date: Fri, 31 Jul 2020 10:53:57 -0400 Subject: [PATCH 1275/2289] Fixed requested edits throughout --- .../data.land/R/download_NEON_soilmoisture.R | 111 ++++++++++++++++++ 1 file changed, 111 insertions(+) create mode 100644 modules/data.land/R/download_NEON_soilmoisture.R diff --git a/modules/data.land/R/download_NEON_soilmoisture.R b/modules/data.land/R/download_NEON_soilmoisture.R new file mode 100644 index 00000000000..2132c980638 --- /dev/null +++ b/modules/data.land/R/download_NEON_soilmoisture.R @@ -0,0 +1,111 @@ +##' @name download_NEON_soilmoisture +##' @description: +##' Download NEON Soil Water Content and Soil Salinity data by date and site name +##' +##' @param site four letter NEON site code name(s). If no site is specified, it will download all of them (chr) (e.g "BART" or c("SRER", "KONA", "BART")) +##' @param avg averaging interval (minutes): 1, 30, or both ("all") . default returns both +##' @param var variable of interest: "SWC" (soil water content) or "SIC" (soil ion content) or both ("all") default returns both. 
+##' Both variables will be saved in outdir automatically (chr)
+##' @param startdate start date as YYYY-mm. If left empty, all data available will be downloaded (chr)
+##' @param enddate end date as YYYY-mm. If left empty, all data available will be downloaded (chr)
+##' @param outdir out directory to store the following data:
+##' .rds list files of SWC and SIC data for each site and sensor position,
+##' sensor positions .csv for each site,
+##' variable description .csv file,
+##' readme .csv file
+##' @return List of specified variable(s) AND prints the path to output folder
+##'
+##' @author Juliette Bateman
+##'
+##' @example
+##' \dontrun{
+##' test <- download_NEON_soilmoist(
+##' site = c("SRER", "BART", "KONA"),
+##' avg = 30,
+##' var = "SWC",
+##' startdate = "2019-01",
+##' enddate = "2020-01",
+##' outdir = getwd())}
+
+## Install NEON libs
+#devtools::install_github("NEONScience/NEON-geolocation/geoNEON")
+#devtools::install_github("NEONScience/NEON-utilities/neonUtilities", force = TRUE)
+#install.packages("BiocManager")
+# BiocManager::install("rhdf5")
+
+
+download_NEON_soilmoist <- function(site, avg = "all", var = "all",
+                                    startdate = NA, enddate = NA,
+                                    outdir) {
+
+
+  #################### Data Download from NEON ####################
+  soil.raw = neonUtilities::loadByProduct(dpID = "DP1.00094.001", site = site, avg = avg, startdate = startdate, enddate = enddate, check.size = FALSE)
+
+  # Export into new folder in outdir
+  dir = paste0(outdir, "/NEONSoilMoist", "_", startdate, "-", enddate)
+  dir.create(dir)
+
+  #################### Clean-up Data Observations ####################
+  # Only select data from list and remove flagged observations
+  if (avg == 30) {
+    data.raw = soil.raw$SWS_30_minute %>% na.omit()
+  } else if (avg == 1) {
+    data.raw = soil.raw$SWS_1_minute %>% na.omit()
+  } else {
+    data.raw = list(soil.raw$SWS_1_minute, soil.raw$SWS_30_minute) %>% na.omit()
+  }
+
+  # Separate variables, omit flagged data obs
+  data.raw.SWC = (split(data.raw, data.raw$VSWCFinalQF))$'0' %>%
+    select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime", "VSWCMean", "VSWCMinimum", "VSWCMaximum", "VSWCVariance", "VSWCNumPts", "VSWCExpUncert", "VSWCStdErMean"))
+  data.raw.SIC = (split(data.raw, data.raw$VSICFinalQF))$'0' %>%
+    select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime","VSICMean", "VSICMinimum", "VSICMaximum", "VSICVariance", "VSICNumPts", "VSICExpUncert", "VSICStdErMean"))
+
+  data.raw.both = list(data.raw.SWC, data.raw.SIC)
+  names(data.raw.both) <- c("SWC", "SIC")
+  data.split.both = lapply(data.raw.both, function(x) split(x, x$siteID))
+
+  # Separate dataframe into lists by site and sensor position
+  data.SWC.sites = split(data.raw.SWC, data.raw.SWC$siteID)
+  data.SIC.sites = split(data.raw.SIC, data.raw.SIC$siteID)
+  for (i in 1:length(data.SWC.sites)){
+    data.SWC.sites[i]=lapply(data.SWC.sites[i], function(x) split(x, list(x$horizontalPosition, x$verticalPosition)))
+  }
+  for (i in 1:length(data.SIC.sites)){
+    data.SIC.sites[i]=lapply(data.SIC.sites[i], function(x) split(x, list(x$horizontalPosition, x$verticalPosition)))
+  }
+
+  #################### Save data into folders ####################
+
+  # Saving metadata
+  write.csv(soil.raw$readme_00094, file = (paste0(dir,"/readme.csv")))
+  write.csv(soil.raw$variables_00094, file = paste0(dir, "/variable_description.csv"))
+  sensor.pos = split(soil.raw$sensor_positions_00094, soil.raw$sensor_positions_00094$siteID)
+  for
(i in names(sensor.pos)){ + write.csv(sensor.pos[[i]], file = paste0(dir, "/", i, "_sensor_positions.csv")) + } + + # Save site data lists as .rds files to outdir + for (i in names(data.SIC.sites)) { + saveRDS(data.SIC.sites[[i]], file = paste0(dir, "/", i, "_SIC_data.rds")) + } + for (i in names(data.SWC.sites)) { + saveRDS(data.SWC.sites[[i]], file = paste0(dir, "/", i, "_SWC_data.rds")) + } + + # Return file path to data and print lists of + PEcAn.logger::logger.info("Done! NEON soil data has been downloaded and stored in ", paste0(dir), ".") + if (var == "SWC") { + data.SWC = data.SWC.sites + return(data.SWC) + } else if (var == "SIC") { + data.SIC = data.SIC.sites + return(data.SIC) + } else if (var == "all") { + both.var = list(data.SWC, data.SIC) + names(both.var) = c("SWC", "SIC") + return(both.var) + } + +} From 52542d2ee8557234d4bf2402d337d921ce22fff8 Mon Sep 17 00:00:00 2001 From: mccabete Date: Fri, 31 Jul 2020 14:09:31 -0400 Subject: [PATCH 1276/2289] Prevent met-check for CO2 --- models/ed/R/check_ed_metheader.R | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/models/ed/R/check_ed_metheader.R b/models/ed/R/check_ed_metheader.R index e68a76a3788..ed30db3033f 100644 --- a/models/ed/R/check_ed_metheader.R +++ b/models/ed/R/check_ed_metheader.R @@ -16,7 +16,10 @@ check_ed_metheader <- function(ed_metheader, check_files = TRUE) { testthat::test_that( "ED met header object is a nested list", { - testthat::expect_true(!is.null(names(ed_metheader[[1]]))) + names_required <- names(ed_metheader[[1]]) + names_required <- names_required[ names_required != "co2"] ## "co2" provided optionally for ED2 + + testthat::expect_true(!is.null(names_required)) } ) .z <- lapply(ed_metheader, check_ed_metheader_format, check_files = check_files) From cf9c4f527c8055e71dbe3c07b8bb84e3359acdb4 Mon Sep 17 00:00:00 2001 From: mccabete Date: Fri, 31 Jul 2020 14:22:36 -0400 Subject: [PATCH 1277/2289] Prevent met-check for CO2 --- models/ed/R/check_ed_metheader.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/ed/R/check_ed_metheader.R b/models/ed/R/check_ed_metheader.R index ed30db3033f..a5db9d94423 100644 --- a/models/ed/R/check_ed_metheader.R +++ b/models/ed/R/check_ed_metheader.R @@ -22,7 +22,7 @@ check_ed_metheader <- function(ed_metheader, check_files = TRUE) { testthat::expect_true(!is.null(names_required)) } ) - .z <- lapply(ed_metheader, check_ed_metheader_format, check_files = check_files) + .z <- lapply(names_required, check_ed_metheader_format, check_files = check_files) invisible(TRUE) } From 497f2add88fe11cc966614090a5c479f2ad5386b Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Fri, 31 Jul 2020 15:47:42 -0400 Subject: [PATCH 1278/2289] updating soil process to add inputs to BETY --- CHANGELOG.md | 3 +++ base/db/R/dbfiles.R | 34 ++++++++++++++++++++------- modules/data.land/R/extract_soil_nc.R | 29 ++++++++++++++++++++++- modules/data.land/R/soil_process.R | 2 +- 4 files changed, 57 insertions(+), 11 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index e34d9227904..201dc673399 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -25,6 +25,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Update ED docker build, will now build version 2.2.0 and git - Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625 - model2netcdf.ED2 no longer detecting which varibles names `-T-` files have based on ED2 version (#2623) +-gSSURGO file download now added as inputs 
into BETY through extract_soil_gssurgo (#2666) ### Changed @@ -43,6 +44,8 @@ For more information about this file see also [Keep a Changelog](http://keepacha - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511). - data.remote: Arguments to the function `call_MODIS()` have been changed (issue #2519). - Changed precipitaion downscale in `PEcAn.data.atmosphere::download.NOAA_GEFS_downscale`. Precipitation was being downscaled via a spline which was causing fake rain events. Instead the 6 hr precipitation flux values from GEFS are preserved with 0's filling in the hours between. +-Changed `dbfile.input.insert` to work with inputs (i.e soils) that don't have start and end dates associated with them + ### Added diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index f52823e922a..05f7bfd9c10 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -79,16 +79,18 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, inputid <- NULL if (nrow(existing.input) > 0) { # Convert dates to Date objects and strip all time zones (DB values are timezone-free) - startdate <- lubridate::force_tz(time = lubridate::as_date(startdate), tzone = 'UTC') - enddate <- lubridate::force_tz(time = lubridate::as_date(enddate), tzone = 'UTC') + if(!is.null(startdate)){ startdate <- lubridate::force_tz(time = lubridate::as_date(startdate), tzone = 'UTC')} + if(!is.null(enddate)){enddate <- lubridate::force_tz(time = lubridate::as_date(enddate), tzone = 'UTC')} existing.input$start_date <- lubridate::force_tz(time = lubridate::as_date(existing.input$start_date), tzone = 'UTC') existing.input$end_date <- lubridate::force_tz(time = lubridate::as_date(existing.input$end_date), tzone = 'UTC') for (i in seq_len(nrow(existing.input))) { existing.input.i <- existing.input[i,] - if (existing.input.i$start_date == startdate && existing.input.i$end_date == enddate) { + if (is.na(existing.input.i$start_date) && is.null(startdate)) { inputid <- existing.input.i[['id']] - break + }else if(existing.input.i$start_date == startdate && existing.input.i$end_date == enddate){ + inputid <- existing.input.i[['id']] + } } @@ -109,7 +111,13 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, if (is.null(inputid)) { # Either there was no existing input, or there was but the dates don't match and # allow.conflicting.dates==TRUE. So, insert new input record. 
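+    # Illustration (values purely hypothetical) of the two statements this
+    # branch builds: a date-less input such as a soil file produces
+    #   INSERT INTO inputs (site_id, format_id, name) VALUES (...) RETURNING id
+    # while a dated input additionally carries start_date and end_date columns.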
-  if (parent == "") {
+  # adding is.null(startdate) to add inputs like soil that don't have dates
+  if(parent == "" && is.null(startdate)) {
+    cmd <- paste0("INSERT INTO inputs ",
+                  "(site_id, format_id, name) VALUES (",
+                  siteid, ", ", formatid, ", '", name,
+                  "'",") RETURNING id")
+  } else if(parent == "" && !is.null(startdate)) {
     cmd <- paste0("INSERT INTO inputs ",
                   "(site_id, format_id, start_date, end_date, name) VALUES (",
                   siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name,
@@ -123,7 +131,15 @@
   inserted.id <-db.query(query = cmd, con = con)
   name.s <- name
-  inputid <- db.query(
+  if(is.null(startdate)){
+    inputid <- db.query(
+      query = paste0(
+        "SELECT id FROM inputs WHERE site_id=", siteid,
+        " AND format_id=", formatid),
+      con = con
+    )$id
+
+  }else{inputid <- db.query(
     query = paste0(
       "SELECT id FROM inputs WHERE site_id=", siteid,
       " AND format_id=", formatid,
@@ -132,7 +148,7 @@
       "'" , parent, ";"
     ),
     con = con
-  )$id
+  )$id}
   }else{
     inserted.id <- data.frame(id=inputid) # in the case that inputid is not null then this means that there was an exsiting input
   }
@@ -150,7 +166,7 @@
   # find appropriate dbfile, if not in database, insert new dbfile
   dbfile <- dbfile.check(type = 'Input', container.id = inputid, con = con, hostname = hostname)
-  if (nrow(dbfile) > 0 ) {
+  if (nrow(dbfile) > 0 && !ens) {
     if (nrow(dbfile) > 1) {
       print(dbfile)
@@ -158,7 +174,7 @@
       dbfile <- dbfile[nrow(dbfile),]
     }
-    if (dbfile$file_name != in.prefix || dbfile$file_path != in.path ) {
+    if ((dbfile$file_name != in.prefix || dbfile$file_path != in.path) && !ens) {
       print(dbfile, digits = 10)
       PEcAn.logger::logger.error(paste0(
         "The existing dbfile record printed above has the same machine_id and container ",
diff --git a/modules/data.land/R/extract_soil_nc.R b/modules/data.land/R/extract_soil_nc.R
index dff30b31335..6b3a0666080 100644
--- a/modules/data.land/R/extract_soil_nc.R
+++ b/modules/data.land/R/extract_soil_nc.R
@@ -19,7 +19,7 @@
 #' }
 #' @author Hamze Dokoohaki
 #'
-extract_soil_gssurgo<-function(outdir, lat, lon, size=1, radius=500, depths=c(0.15,0.30,0.60)){
+extract_soil_gssurgo<-function(outdir, lat, lon, settings, size=1, radius=500, depths=c(0.15,0.30,0.60)){
   # I keep all the ensembles here
   all.soil.ens <-list()
@@ -237,6 +237,33 @@
out.ense<-out.ense%>% setNames(rep("path", length(out.ense))) + #connect to db + # Extract info from settings and setup + site <- settings$run$site + dbparms <- settings$database + + bety <- dplyr::src_postgres(dbname = dbparms$bety$dbname, + host = dbparms$bety$host, + user = dbparms$bety$user, + password = dbparms$bety$password) + con <- bety$con + # register files in DB + for(i in 1:length(out.ense)){ + in.path = paste0(dirname(out.ense[i]$path), '/') + in.prefix = stringr::str_remove(basename(out.ense[i]$path), ".nc") + + + dbfile.input.insert (in.path, + in.prefix, + site$id, + startdate = NULL, + enddate = NULL, + mimetype = "application/x-netcdf", + formatname = "gSSURGO Soil", + con = con, + ens=TRUE) + } + return(out.ense) } diff --git a/modules/data.land/R/soil_process.R b/modules/data.land/R/soil_process.R index 6982ef58bfe..7f781723552 100644 --- a/modules/data.land/R/soil_process.R +++ b/modules/data.land/R/soil_process.R @@ -55,7 +55,7 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T if(length(newfile)==0){ radiusL <- ifelse(is.null(settings$run$input$soil$radius), 500, as.numeric(settings$run$input$soil$radius)) - newfile<-extract_soil_gssurgo(outfolder, lat = latlon$lat, lon=latlon$lon, radius=radiusL) + newfile<-extract_soil_gssurgo(outfolder, lat = latlon$lat, lon=latlon$lon, settings, radius = radiusL) } return(newfile) From 42d08ec7e7287e1ec8011e0842fb3e3c38d9182a Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Fri, 31 Jul 2020 16:15:55 -0400 Subject: [PATCH 1279/2289] adding updated Rd files --- base/db/man/insert.format.vars.Rd | 2 +- base/db/man/take.samples.Rd | 2 +- base/logger/man/print2string.Rd | 4 ++-- modules/data.land/man/extract_soil_gssurgo.Rd | 1 + modules/rtm/man/invert_bt.Rd | 2 +- modules/rtm/man/params2edr.Rd | 2 +- 6 files changed, 7 insertions(+), 6 deletions(-) diff --git a/base/db/man/insert.format.vars.Rd b/base/db/man/insert.format.vars.Rd index b0902645918..cd6b463dc4c 100644 --- a/base/db/man/insert.format.vars.Rd +++ b/base/db/man/insert.format.vars.Rd @@ -55,7 +55,7 @@ formats_variables_tibble <- tibble::tibble( variable_id = c(411, 135, 382), name = c("NPP", NA, "YEAR"), unit = c("g C m-2 yr-1", NA, NA), - storage_type = c(NA, NA, "\%Y"), + storage_type = c(NA, NA, "%Y"), column_number = c(2, NA, 4)) insert.format.vars( diff --git a/base/db/man/take.samples.Rd b/base/db/man/take.samples.Rd index 0c8f693365f..f834d60df80 100644 --- a/base/db/man/take.samples.Rd +++ b/base/db/man/take.samples.Rd @@ -20,7 +20,7 @@ sample from normal distribution, given summary stats \examples{ ## return the mean when stat = NA take.samples(summary = data.frame(mean = 10, stat = NA)) -## return vector of length \code{sample.size} from N(mean,stat) +## return vector of length \\code{sample.size} from N(mean,stat) take.samples(summary = data.frame(mean = 10, stat = 10), sample.size = 10) } diff --git a/base/logger/man/print2string.Rd b/base/logger/man/print2string.Rd index 5c1b76b3283..a6cd87c17b7 100644 --- a/base/logger/man/print2string.Rd +++ b/base/logger/man/print2string.Rd @@ -23,9 +23,9 @@ functions, you should always add the \code{wrap = FALSE} argument, and probably add a newline (\code{"\\n"}) before the output of this function. 
} \examples{ -logger.info("First few rows of Iris:\n", print2string(iris[1:10, -5]), wrap = FALSE) +logger.info("First few rows of Iris:\\n", print2string(iris[1:10, -5]), wrap = FALSE) df <- data.frame(test = c("download", "process", "plot"), status = c(TRUE, TRUE, FALSE)) -logger.debug("Current status:\n", print2string(df, row.names = FALSE), wrap = FALSE) +logger.debug("Current status:\\n", print2string(df, row.names = FALSE), wrap = FALSE) } \author{ Alexey Shiklomanov diff --git a/modules/data.land/man/extract_soil_gssurgo.Rd b/modules/data.land/man/extract_soil_gssurgo.Rd index d8231132824..1e97e964124 100644 --- a/modules/data.land/man/extract_soil_gssurgo.Rd +++ b/modules/data.land/man/extract_soil_gssurgo.Rd @@ -8,6 +8,7 @@ extract_soil_gssurgo( outdir, lat, lon, + settings, size = 1, radius = 500, depths = c(0.15, 0.3, 0.6) diff --git a/modules/rtm/man/invert_bt.Rd b/modules/rtm/man/invert_bt.Rd index 89f2d1e8e0d..32a2ad229b3 100644 --- a/modules/rtm/man/invert_bt.Rd +++ b/modules/rtm/man/invert_bt.Rd @@ -34,7 +34,7 @@ specified by the user; see Details). This is most common for the initial number of iterations, which is the minimum expected for convergence. \item \code{loop} -- BayesianTools settings for iterations inside the convergence -checking \code{while} loop. This is most commonly for setting a smaller +checking \verb{while} loop. This is most commonly for setting a smaller iteration count than in \code{init}. \item \code{other} -- Miscellaneous (non-BayesianTools) settings, including: \itemize{ diff --git a/modules/rtm/man/params2edr.Rd b/modules/rtm/man/params2edr.Rd index 69ce0a4ae26..bd9338ac985 100644 --- a/modules/rtm/man/params2edr.Rd +++ b/modules/rtm/man/params2edr.Rd @@ -33,7 +33,7 @@ The regular expression defining the separation is greedy, i.e. and \code{SLA} (assuming the default \code{sep = "."}). Therefore, it is crucial that trait names not contain any \code{sep} characters (neither ED nor PROSPECT parameters should anyway). If this is a problem, use an alternate separator -(e.g. \code{|}). +(e.g. \verb{|}). Note that using \code{sep = "."} allows this function to directly invert the default behavior of \code{unlist}. That is, calling From 73893e7883415bfda01fe643f9cc0927e85935ed Mon Sep 17 00:00:00 2001 From: Juliette Bateman Date: Fri, 31 Jul 2020 17:29:42 -0400 Subject: [PATCH 1280/2289] Same edits, but on original file (I hope) --- modules/data.land/R/download_NEON_soildata.R | 85 ------------------- .../data.land/R/download_NEON_soilmoisture.R | 26 +++--- 2 files changed, 16 insertions(+), 95 deletions(-) delete mode 100644 modules/data.land/R/download_NEON_soildata.R diff --git a/modules/data.land/R/download_NEON_soildata.R b/modules/data.land/R/download_NEON_soildata.R deleted file mode 100644 index a8b7a81f20d..00000000000 --- a/modules/data.land/R/download_NEON_soildata.R +++ /dev/null @@ -1,85 +0,0 @@ -##' @name download_NEON_soildata -##' @description: -##' Download NEON Soil Water Content and Soil Salinity data by date and site name -##' -##' @param site four letter NEON site code name(s). If no site is specified, it will download all of them (chr) (e.g "BART" or multiple c("SRER", "KONA", "BART")) -##' @param avg averaging interval: either 30 minute OR 1 minute, default will return BOTH (chr) (e.g 1 or 30) -##' @param var variable of interest, either "SWC" (soil water content) or "SIC" (soil ion content) -##' @param startdate start date as YYYY-mm. 
If left empty, all data will available will be downloaded (chr) -##' @param enddate start date as YYYY-mm. If left empty, all data will available will be downloaded (chr) -##' @param outdir out directory to store .CSV files with individual site data -##' @return Data.frames for each site of interest loaded to the R Global Environment, AND saved as .csv files -##' -##' @author Juliette Bateman -##' -##' @example -##' \dontrun{ -##' test <- download_NEON_soildata( -##' site = c("SRER", "BART"), -##' avg = 30, -##' startdate = "2019-01", -##' enddate = "2020-01", -##' var = "SWC", -##' outdir = getwd())} - -## Install NEON libs -#devtools::install_github("NEONScience/NEON-geolocation/geoNEON") -#devtools::install_github("NEONScience/NEON-utilities/neonUtilities", force = TRUE) -#install.packages("BiocManager") -# BiocManager::install("rhdf5") - - -download_NEON_soildata <- function(site, avg = "all", var, - startdate = NA, enddate = NA, - outdir) { - - - #################### Data Download from NEON #################### - soil.raw = neonUtilities::loadByProduct(dpID = "DP1.00094.001", site = "KONA", avg = avg, startdate = startdate, enddate = enddate, check.size = FALSE) - - # Only select data from list and remove flagged observations - if (avg == 30) { - data.raw = soil.raw$SWS_30_minute - } else if (avg == 1) { - data.raw = soil.raw$SWS_1_minute - } else { - data.raw = list(soil.raw$SWS_1_minute, soil.raw$SWS_30_minute) - } - - - # Separate selected variable, omit NA values - if (var == "SWC") { - data.raw = (split(data.raw, data.raw$VSWCFinalQF))$'0' - data.raw = data.raw[,-c(15:22)] %>% na.omit() - } else if (var == "SIC") { - data.raw = (split(data.raw, data.raw$VSICFinalQF))$'0' - data.raw = data.raw[,-c(7:14)] %>% na.omit() - } - - # Separate dataframe into list of each site of interest - data.sites = data.raw %>% - split(data.raw$siteID) - - # Separate each site by sensors (horizontal and vertical position) - data.sitePositions = vector(mode = "list", length = length(data.sites)) - names(data.sitePositions) = names(data.sites) - for (i in length(data.sites)) { - data.sitePositions[[i]] = data.sites[[i]] %>% - split(list(data.sites[[i]]$horizontalPosition, data.sites[[i]]$verticalPosition)) - } - - # Load individual site list into the Global Environment - data.sitePositions %>% - list2env(envir = .GlobalEnv) - - # Save site lists as .rds files to outdir - for (i in names(data.sitePositions)){ - #rlist::list.save(data.sites[[i]], file = paste0(outdir, "/", i, ".json"), type = tools::file_ext(".json")) - #write.csv(data.sites[[1]], paste0(outdir, "/", i,".csv")) - saveRDS(data.sitePositions[[i]], file = paste0(outdir, "/", i, ".rds")) - } - -} - - - diff --git a/modules/data.land/R/download_NEON_soilmoisture.R b/modules/data.land/R/download_NEON_soilmoisture.R index 2132c980638..17d073b82a8 100644 --- a/modules/data.land/R/download_NEON_soilmoisture.R +++ b/modules/data.land/R/download_NEON_soilmoisture.R @@ -35,8 +35,8 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", - startdate = NA, enddate = NA, - outdir) { + startdate = NA, enddate = NA, + outdir) { #################### Data Download from NEON #################### @@ -61,7 +61,7 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime", "VSWCMean", "VSWCMinimum", "VSWCMaximum", "VSWCVariance", "VSWCNumPts", "VSWCExpUncert", "VSWCStdErMean")) data.raw.SIC = (split(data.raw, 
data.raw$VSICFinalQF))$'0' %>% select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime","VSICMean", "VSICMinimum", "VSICMaximum", "VSICVariance", "VSICNumPts", "VSICExpUncert", "VSICStdErMean")) - + data.raw.both = list(data.raw.SWC, data.raw.SIC) names(data.raw.both) <- c("SWC", "SIC") data.split.both = lapply(data.raw.both, function(x) split(x, x$siteID)) @@ -75,24 +75,30 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", for (i in 1:length(data.SIC.sites)){ data.SIC.sites[i]=lapply(data.SIC.sites[i], function(x) split(x, list(x$horizontalPosition, x$verticalPosition))) } - + #################### Save data into folders #################### - # Saving metadata - write.csv(soil.raw$readme_00094, file = (paste0(dir,"/readme.csv"))) - write.csv(soil.raw$variables_00094, file = paste0(dir, "/variable_description.csv")) + # Saving metadata and site data lists as .rds files to outdir, organize into site specific folders sensor.pos = split(soil.raw$sensor_positions_00094, soil.raw$sensor_positions_00094$siteID) for (i in names(sensor.pos)){ write.csv(sensor.pos[[i]], file = paste0(dir, "/", i, "_sensor_positions.csv")) } - - # Save site data lists as .rds files to outdir for (i in names(data.SIC.sites)) { - saveRDS(data.SIC.sites[[i]], file = paste0(dir, "/", i, "_SIC_data.rds")) + saveRDS(data.SIC.sites[[i]], file = paste0(dir, "/", i, "_SIC_data.rds")) } for (i in names(data.SWC.sites)) { saveRDS(data.SWC.sites[[i]], file = paste0(dir, "/", i, "_SWC_data.rds")) } + for (i in 1:length(site)){ + folders = paste0(dir, "/", site[1:i]) + dir.create(folders[i]) + #fs::file_move(paste0(dir, "/", site[i], "_sensor_positions.csv"), folders[i]) + fs::file_move(paste0(dir, "/", site[i], "_SIC_data.rds"), folders[i]) + fs::file_move(paste0(dir, "/", site[i], "_SWC_data.rds"), folders[i]) + } + + write.csv(soil.raw$readme_00094, file = (paste0(dir,"/readme.csv"))) + write.csv(soil.raw$variables_00094, file = paste0(dir, "/variable_description.csv")) # Return file path to data and print lists of PEcAn.logger::logger.info("Done! NEON soil data has been downloaded and stored in ", paste0(dir), ".") From 3ce1558b2affe2658d387b943f002bc961ffee4b Mon Sep 17 00:00:00 2001 From: Juliette Bateman Date: Fri, 31 Jul 2020 18:53:02 -0400 Subject: [PATCH 1281/2289] Fixed bugs! 
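
The bug: the `var == "all"` branch of `download_NEON_soilmoist()` built its
return list from `data.SWC`/`data.SIC` before those objects were assigned, and
the per-site sensor-position files were never moved into the site folders.
A minimal usage sketch against the fixed function (site codes and dates are
illustrative only, and the call needs network access to the NEON API):

```r
# Named list with one element per variable ("SWC", "SIC");
# each element is keyed by site, then by sensor position.
soil <- download_NEON_soilmoist(site = c("SRER", "BART"),
                                avg = 30, var = "all",
                                startdate = "2019-01", enddate = "2020-01",
                                outdir = tempdir())
str(soil$SWC$SRER, max.level = 1)  # per-sensor data frames for site SRER
```
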
--- modules/data.land/R/download_NEON_soilmoisture.R | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/data.land/R/download_NEON_soilmoisture.R b/modules/data.land/R/download_NEON_soilmoisture.R index 17d073b82a8..ad5f47092c6 100644 --- a/modules/data.land/R/download_NEON_soilmoisture.R +++ b/modules/data.land/R/download_NEON_soilmoisture.R @@ -1,4 +1,4 @@ -##' @name download_NEON_soilmoisture +##' @name download_NEON_soilmoist ##' @description: ##' Download NEON Soil Water Content and Soil Salinity data by date and site name ##' @@ -92,7 +92,7 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", for (i in 1:length(site)){ folders = paste0(dir, "/", site[1:i]) dir.create(folders[i]) - #fs::file_move(paste0(dir, "/", site[i], "_sensor_positions.csv"), folders[i]) + fs::file_move(paste0(dir, "/", site[i], "_sensor_positions.csv"), folders[i]) fs::file_move(paste0(dir, "/", site[i], "_SIC_data.rds"), folders[i]) fs::file_move(paste0(dir, "/", site[i], "_SWC_data.rds"), folders[i]) } @@ -109,6 +109,8 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", data.SIC = data.SIC.sites return(data.SIC) } else if (var == "all") { + data.SWC <- data.SWC.sites + data.SIC <- data.SIC.sites both.var = list(data.SWC, data.SIC) names(both.var) = c("SWC", "SIC") return(both.var) From 5e09844b013471dacca92cd9e372efa24e7cabf1 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 1 Aug 2020 08:37:51 +0530 Subject: [PATCH 1282/2289] changed func arguments --- modules/data.remote/inst/bands2lai_snap.py | 11 +++- modules/data.remote/inst/gee2pecan_s2.py | 11 ++-- modules/data.remote/inst/get_remote_data.py | 25 ++++++-- modules/data.remote/inst/merge_files.py | 58 +++++++++++++++++++ .../data.remote/inst/process_remote_data.py | 22 ++++--- modules/data.remote/inst/remote_process.py | 52 ++++++++++++----- 6 files changed, 146 insertions(+), 33 deletions(-) create mode 100644 modules/data.remote/inst/merge_files.py diff --git a/modules/data.remote/inst/bands2lai_snap.py b/modules/data.remote/inst/bands2lai_snap.py index 03a0017b10a..5d3229fd7b0 100644 --- a/modules/data.remote/inst/bands2lai_snap.py +++ b/modules/data.remote/inst/bands2lai_snap.py @@ -12,7 +12,7 @@ import geopandas as gpd import xarray as xr import os - +import time def bands2lai_snap(inputfile, outdir): """ @@ -45,6 +45,11 @@ def bands2lai_snap(inputfile, outdir): if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) + timestamp = time.strftime("%y%m%d%H%M%S") + + save_path = os.path.join(outdir, area.name + "_lai_snap_" + timestamp + ".nc") # creating a timerseries and saving the netCDF file - area.to_netcdf(os.path.join(outdir, area.name + "_lai.nc")) - timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable) \ No newline at end of file + area.to_netcdf(save_path) + timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable) + + return os.path.abspath(save_path) \ No newline at end of file diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py index a017f0f7e8f..e4ad2f8142b 100644 --- a/modules/data.remote/inst/gee2pecan_s2.py +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -10,6 +10,7 @@ """ +import time import sys import os import ee @@ -584,11 +585,8 @@ def s2_data_to_xarray(aoi, request_params, convert_to_reflectance=True): # 1D data list_vars = [ - "assetid", - "productid", "sun_azimuth", "sun_zenith", - "system_index", "view_azimuth", "view_zenith", ] @@ -804,4 +802,9 @@ def 
gee2pecan_s2(geofile, outdir, start, end, scale, qi_threshold): if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) - area.data.to_netcdf(os.path.join(outdir, area.name + "_bands.nc")) + timestamp = time.strftime("%y%m%d%H%M%S") + save_path = os.path.join(outdir, "gee_" + "s2_" + area.name + "_"+ timestamp + ".nc") + + area.data.to_netcdf(save_path) + + return os.path.abspath(save_path) diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index b22bb45717e..d835a24c210 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -8,9 +8,10 @@ Author(s): Ayush Prasad, Istem Fer """ - +from nc_merge import nc_merge from importlib import import_module from appeears2pecan import appeears2pecan +import os # dictionary used to map the GEE image collection id to PEcAn specific function name collection_dict = { @@ -32,6 +33,8 @@ def get_remote_data( projection=None, qc=None, credfile=None, + raw_merge=None, + existing_raw_file_path=None, ): """ uses GEE and AppEEARS functions to download data @@ -61,6 +64,11 @@ def get_remote_data( Nothing: output netCDF is saved in the specified directory. """ + + + + + if source == "gee": try: # get collection id from the dictionary @@ -80,10 +88,17 @@ def get_remote_data( func = getattr(module, func_name) # if a qc parameter is specified pass these arguments to the function if qc: - func(geofile, outdir, start, end, scale, qc) + get_datareturn_path = func(geofile, outdir, start, end, scale, qc) # this part takes care of functions which do not perform any quality checks, e.g. SMAP else: - func(geofile, outdir, start, end) + get_datareturn_path = func(geofile, outdir, start, end) + + # if source == "appeears": + # get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) + + if raw_merge == "TRUE": + + get_datareturn_path = nc_merge(existing_raw_file_path, get_datareturn_path, outdir) + + return get_datareturn_path - if source == "appeears": - appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) diff --git a/modules/data.remote/inst/merge_files.py b/modules/data.remote/inst/merge_files.py new file mode 100644 index 00000000000..9a70caac67f --- /dev/null +++ b/modules/data.remote/inst/merge_files.py @@ -0,0 +1,58 @@ +import xarray +import os +import time +import pandas as pd + +def nc_merge(old, new, outdir): + + + head, tail = os.path.split(new) + + + orig_nameof_newfile = new + timestamp = time.strftime("%y%m%d%H%M%S") + changed_new = os.path.join(outdir, tail + 'temp' + timestamp + '.nc') + + os.rename(orig_nameof_newfile, changed_new) + + + ds = xarray.open_mfdataset([old, changed_new], combine='by_coords') + + + + ds.to_netcdf(os.path.join(outdir, tail)) + + return os.path.abspath(os.path.join(outdir, tail)) + + +def csv_merge(old, new, outdir): + head, tail = os.path.split(new) + + orig_nameof_newfile = new + timestamp = time.strftime("%y%m%d%H%M%S") + changed_new = os.path.join(outdir, tail + 'temp' + timestamp + '.csv') + + os.rename(orig_nameof_newfile, changed_new) + + df_old = pd.read_csv(old) + df_changed_new = pd.read_csv(changed_new) + + merged_df = pd.concat([df_old, df_changed_new]) + + merged_df = merged_df.sort_values(by='Date') + + merged_df.to_csv(os.path.join(outdir, tail), index=False) + + return os.path.abspath(os.path.join(outdir, tail)) + + + + + + + + + + + + diff --git a/modules/data.remote/inst/process_remote_data.py 
b/modules/data.remote/inst/process_remote_data.py index 1ef7ac5b76b..74989083846 100644 --- a/modules/data.remote/inst/process_remote_data.py +++ b/modules/data.remote/inst/process_remote_data.py @@ -6,11 +6,12 @@ Requires Python3 Author: Ayush Prasad """ - +from nc_merge import nc_merge from importlib import import_module +import os +import time - -def process_remote_data(aoi_name, output, outdir, algorithm): +def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge=None, existing_pro_file_path=None): """ uses processing functions to perform computation on input data @@ -25,12 +26,14 @@ def process_remote_data(aoi_name, output, outdir, algorithm): Nothing: output netCDF is saved in the specified directory. """ + + # get the type of the input data - input_type = output["get_data"] + input_type = out_get_data # locate the input file - input_file = "".join([outdir, "/", aoi_name, "_", input_type, ".nc"]) + # input_file = os.path.join(outdir, aoi_name, "_", input_type, ".nc") # extract the computation which is to be done - output = output["process_data"] + output = out_process_data # construct the function name func_name = "".join([input_type, "2", output, "_", algorithm]) # import the module @@ -38,4 +41,9 @@ def process_remote_data(aoi_name, output, outdir, algorithm): # import the function from the module func = getattr(module, func_name) # call the function - func(input_file, outdir) + process_datareturn_path = func(input_file, outdir) + + if pro_merge == "TRUE": + process_datareturn_path = nc_merge(existing_pro_file_path, process_datareturn_path, outdir) + + return process_datareturn_path diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index dc29c1a6bfe..e814d64cb44 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -25,9 +25,19 @@ def remote_process( projection=None, qc=None, algorithm=None, + input_file=None, credfile=None, - output={"get_data": None, "process_data": None}, - stage={"get_data": True, "process_data": True}, + out_get_data=None, + out_process_data=None, + stage_get_data=None, + stage_process_data=None, + raw_merge=None, + pro_merge=None, + existing_raw_file_path=None, + existing_pro_file_path=None, + + # output={"get_data": "bands", "process_data": "lai"}, + # stage={"get_data": True, "process_data": True}, ): """ @@ -70,27 +80,41 @@ def remote_process( # when db connections are made, this will be removed aoi_name = get_sitename(geofile) + + if stage_get_data: + get_datareturn_path = get_remote_data( + geofile, outdir, start, end, source, collection, scale, projection, qc, credfile, raw_merge, existing_raw_file_path + ) + + if stage_process_data: + if input_file is None: + input_file = get_datareturn_path + process_datareturn_path = process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge, existing_pro_file_path) + + + output = {"raw_data": None, "process_data": None} - if stage["get_data"]: - get_remote_data( - geofile, outdir, start, end, source, collection, scale, projection, qc, credfile - ) + if stage_get_data: + output['raw_data'] = get_datareturn_path - if stage["process_data"]: - process_remote_data(aoi_name, output, outdir, algorithm) + if stage_process_data: + output['process_data'] = process_datareturn_path + return output + +""" if __name__ == "__main__": remote_process( - geofile="./satellitetools/test.geojson", - outdir="./out", + 
geofile="/home/carya/pecan/modules/data.remote/inst/satellitetools/test.geojson", + outdir="/home/carya/pecan/modules/data.remote/inst/out", start="2018-01-01", end="2018-12-31", source="gee", - collection="COPERNICUS/S2_SR", - scale=10, - qc=1, - algorithm="snap", + collection="LANDSAT/LC08/C01/T1_SR", + scale=30, output={"get_data": "bands", "process_data": "lai"}, stage={"get_data": True, "process_data": True}, ) +""" + From 4c05270dfbecf7509f7820f4c2893c42b746fb35 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 06:53:58 -0400 Subject: [PATCH 1283/2289] namespace fixes --- modules/data.land/R/download_NEON_soilmoisture.R | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/modules/data.land/R/download_NEON_soilmoisture.R b/modules/data.land/R/download_NEON_soilmoisture.R index ad5f47092c6..78ae44326d1 100644 --- a/modules/data.land/R/download_NEON_soilmoisture.R +++ b/modules/data.land/R/download_NEON_soilmoisture.R @@ -49,18 +49,18 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", #################### Clean-up Data Observations #################### # Only select data from list and remove flagged observations if (avg == 30) { - data.raw = soil.raw$SWS_30_minute %>% na.omit() + data.raw = soil.raw$SWS_30_minute %>% stats::na.omit() } else if (avg == 1) { - data.raw = soil.raw$SWS_1_minute %>% na.omit() + data.raw = soil.raw$SWS_1_minute %>% stats::na.omit() } else { - data.raw = list(soil.raw$SWS_1_minute, soil.raw$SWS_30_minute) %>% na.omit() + data.raw = list(soil.raw$SWS_1_minute, soil.raw$SWS_30_minute) %>% stats::na.omit() } # Separate variables, omit flagged data obs data.raw.SWC = (split(data.raw, data.raw$VSWCFinalQF))$'0' %>% - select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime", "VSWCMean", "VSWCMinimum", "VSWCMaximum", "VSWCVariance", "VSWCNumPts", "VSWCExpUncert", "VSWCStdErMean")) + dplyr::select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime", "VSWCMean", "VSWCMinimum", "VSWCMaximum", "VSWCVariance", "VSWCNumPts", "VSWCExpUncert", "VSWCStdErMean")) data.raw.SIC = (split(data.raw, data.raw$VSICFinalQF))$'0' %>% - select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime","VSICMean", "VSICMinimum", "VSICMaximum", "VSICVariance", "VSICNumPts", "VSICExpUncert", "VSICStdErMean")) + dplyr::select(c("domainID", "siteID", "horizontalPosition", "verticalPosition", "startDateTime", "endDateTime","VSICMean", "VSICMinimum", "VSICMaximum", "VSICVariance", "VSICNumPts", "VSICExpUncert", "VSICStdErMean")) data.raw.both = list(data.raw.SWC, data.raw.SIC) names(data.raw.both) <- c("SWC", "SIC") @@ -81,7 +81,7 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", # Saving metadata and site data lists as .rds files to outdir, organize into site specific folders sensor.pos = split(soil.raw$sensor_positions_00094, soil.raw$sensor_positions_00094$siteID) for (i in names(sensor.pos)){ - write.csv(sensor.pos[[i]], file = paste0(dir, "/", i, "_sensor_positions.csv")) + utils::write.csv(sensor.pos[[i]], file = paste0(dir, "/", i, "_sensor_positions.csv")) } for (i in names(data.SIC.sites)) { saveRDS(data.SIC.sites[[i]], file = paste0(dir, "/", i, "_SIC_data.rds")) @@ -97,8 +97,8 @@ download_NEON_soilmoist <- function(site, avg = "all", var = "all", fs::file_move(paste0(dir, "/", site[i], "_SWC_data.rds"), folders[i]) } - write.csv(soil.raw$readme_00094, file = 
(paste0(dir,"/readme.csv"))) - write.csv(soil.raw$variables_00094, file = paste0(dir, "/variable_description.csv")) + utils::write.csv(soil.raw$readme_00094, file = (paste0(dir,"/readme.csv"))) + utils::write.csv(soil.raw$variables_00094, file = paste0(dir, "/variable_description.csv")) # Return file path to data and print lists of PEcAn.logger::logger.info("Done! NEON soil data has been downloaded and stored in ", paste0(dir), ".") From 80b7217cbf016b31b29dc1d4f3aa2a8feda132bc Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 07:26:04 -0400 Subject: [PATCH 1284/2289] updated dependencies in data.land --- modules/data.land/DESCRIPTION | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index f7a0b25b87c..8a04a34d8fa 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -43,6 +43,17 @@ Suggests: rgdal, RPostgreSQL, testthat (>= 1.0.2), + coda, + fs, + lubridate, + neonUtilities, + PEcAn.benchmark, + PEcAn.data.atmosphere, + PEcAn.visualization, + raster, + RCurl, + traits, + udunits2 License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes From fe11e73335c744d6cc774e6349d8f004064cbd67 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 1 Aug 2020 18:22:04 +0530 Subject: [PATCH 1285/2289] process and raw --- modules/data.remote/R/call_remote_process.R | 258 ++++++++++++++++++ modules/data.remote/inst/get_remote_data.py | 26 +- .../data.remote/inst/process_remote_data.py | 9 +- modules/data.remote/inst/remote_process.py | 18 +- 4 files changed, 278 insertions(+), 33 deletions(-) create mode 100644 modules/data.remote/R/call_remote_process.R diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R new file mode 100644 index 00000000000..ccc7dfd29d9 --- /dev/null +++ b/modules/data.remote/R/call_remote_process.R @@ -0,0 +1,258 @@ +call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, geofile, outdir, start, end, source, collection, scale=NULL, projection=NULL, qc=NULL, algorithm=NULL, credfile=NULL, pro_mimetype=NULL, pro_formatname=NULL, out_get_data=NULL, out_process_data=NULL){ + + reticulate::import_from_path("remote_process", "/home/carya/pecan/modules/data.remote/inst") + reticulate::source_python('/home/carya/pecan/modules/data.remote/inst/remote_process.py') + + + input_file <- NULL + stage_get_data <- NULL + stage_process_data <- NULL + raw_merge <- NULL + pro_merge <- NULL + existing_raw_file_path <- NULL + existing_pro_file_path <- NULL + projection <- NULL + flag <- 0 + + collection_lut <- data.frame(stringsAsFactors=FALSE, + original_name = c("LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR", "NASA_USDA/HSL/SMAP_soil_moisture"), + pecan_code = c("l8", "s2", "smap") + ) + getpecancode <- collection_lut$pecan_code + names(getpecancode) <- collection_lut$original_name + + collection = unname(getpecancode[collection]) + print(collection) + + req_start <- start + req_end <- end + + raw_file_name = paste0(source, "_", collection, "_", sitename) + + existing_data <- PEcAn.DB::db.query(paste0("select * from inputs where site_id=", siteid), dbcon) + + if(nrow(existing_data) >= 1){ + + if(!is.null(out_process_data)){ + # construct pro file name + pro_file_name = paste0(sitename, "_", out_process_data, "_", algorithm) + + print("print pro file name") + print(pro_file_name) + + # check if pro file exists + if(nrow(pro_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, 
inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", pro_file_name), dbcon)) == 1){
+      datalist <- set_date_stage(pro_check, req_start, req_end, stage_process_data)
+      pro_start <- as.character(datalist[[1]])
+      pro_end <- as.character(datalist[[2]])
+      write_pro_start <- datalist[[5]]
+      write_pro_end <- datalist[[6]]
+      if(pro_start == "dont write" || pro_end == "dont write"){
+        # requested window is already covered by the existing processed file; nothing to do
+      }else{
+        stage_process_data <- datalist[[3]]
+        pro_merge <- datalist[[4]]
+        if(pro_merge == TRUE){
+          existing_pro_file_path <- pro_check$file_path
+        }else{
+          existing_pro_file_path <- NULL
+        }
+        if(stage_process_data == TRUE){
+          # check the status of the raw file
+          raw_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", raw_file_name), dbcon)
+          raw_datalist <- set_date_stage(raw_check, pro_start, pro_end, stage_get_data)
+          start <- as.character(raw_datalist[[1]])
+          end <- as.character(raw_datalist[[2]])
+          write_raw_start <- raw_datalist[[5]]
+          write_raw_end <- raw_datalist[[6]]
+          stage_get_data <- raw_datalist[[3]]
+          raw_merge <- raw_datalist[[4]]
+          if(stage_get_data == FALSE){
+            input_file = raw_check$file_path
+          }
+          if(raw_merge == TRUE){
+            existing_raw_file_path <- raw_check$file_path
+          }else{
+            existing_raw_file_path <- NULL
+          }
+        }
+
+      }
+    }else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", raw_file_name), dbcon)) ==1){
+      # no processed file found; at least check whether raw data exists
+      PEcAn.logger::logger.debug("raw file name: ", raw_file_name)
+      datalist <- set_date_stage(raw_check, req_start, req_end, stage_get_data)
+      start <- as.character(datalist[[1]])
+      end <- as.character(datalist[[2]])
+      write_raw_start <- datalist[[5]]
+      write_raw_end <- datalist[[6]]
+      write_pro_start <- req_start
+      write_pro_end <- req_end
+      stage_get_data <- datalist[[3]]
+      raw_merge <- datalist[[4]]
+      PEcAn.logger::logger.debug("raw_merge set to: ", raw_merge)
+      stage_process_data <- TRUE
+      pro_merge <- FALSE
+      if(raw_merge == TRUE){
+        existing_raw_file_path = raw_check$file_path
+      }else{
+        existing_raw_file_path = NULL
+      }
+    }else{
+      # neither processed nor raw data of the requested type exists yet
+      start <- req_start
+      end <- req_end
+      write_raw_start <- req_start
+      write_raw_end <- req_end
+      write_pro_start <- req_start
+      write_pro_end <- req_end
+      stage_get_data <- TRUE
+      raw_merge <- FALSE
+      existing_raw_file_path = NULL
+      stage_process_data <- TRUE
+      pro_merge <- FALSE
+      existing_pro_file_path = NULL
+      flag <- 1
+    }
+
+  }
+
+  }else{
+    # db is completely empty for the given siteid
+    start <- req_start
+    end <- req_end
+    write_raw_start <- req_start
+    write_raw_end <- req_end
+    stage_get_data <- TRUE
+    raw_merge <- FALSE
+    existing_raw_file_path = NULL
+    pro_merge <- FALSE
+    existing_pro_file_path <- NULL
+    if(!is.null(out_process_data)){
+      stage_process_data <- TRUE
+      write_pro_start <- req_start
+      write_pro_end <- req_end
+      flag <- 1
+    }else{
+      stage_process_data <- FALSE
+    }
+  }
+
+
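+  # At this point the branching above has fixed, for each stage:
+  #   start / end                         : dates that still need to be downloaded
+  #   write_raw_* / write_pro_*           : date range to record in the inputs table
+  #   stage_get_data / stage_process_data : whether each stage runs at all
+  #   raw_merge / pro_merge               : whether new output is merged into an existing file
+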
+  #################### calling remote_process ##########################
+  # single debug line in place of per-argument prints; silenced unless the
+  # logger level is lowered to DEBUG
+  PEcAn.logger::logger.debug(
+    "Calling remote_process with start=", start, ", end=", end,
+    ", stage_get_data=", stage_get_data, ", raw_merge=", raw_merge,
+    ", existing_raw_file_path=", existing_raw_file_path,
+    ", stage_process_data=", stage_process_data, ", pro_merge=", pro_merge,
+    ", existing_pro_file_path=", existing_pro_file_path)
+  output = remote_process(geofile=geofile, outdir=outdir, start=start, end=end, source=source, collection=collection, scale=scale, projection=projection, qc=qc, algorithm=algorithm, input_file=input_file, credfile=credfile, out_get_data=out_get_data, out_process_data=out_process_data, stage_get_data=stage_get_data, stage_process_data=stage_process_data, raw_merge=raw_merge, pro_merge=pro_merge, existing_raw_file_path=existing_raw_file_path, existing_pro_file_path=existing_pro_file_path)
+
+
+  ################### saving files to DB ################################
+
+  if(!is.null(out_process_data)){
+    # requested processed and raw data are both new for this site: insert records
+    if(flag == 1){
+      PEcAn.logger::logger.info("inserting for first time")
+      # insert processed data
+      PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon)
+      # insert raw file
+      PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon)
+    }else if(stage_get_data == FALSE){
+      pro_id = pro_check$id
+      PEcAn.logger::logger.info("updating process")
+      PEcAn.DB::db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
+      PEcAn.DB::db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$process_data_path, output$process_data_name, pro_id), dbcon)
+    }else{
+      pro_id = pro_check$id
+      raw_id = raw_check$id
+      PEcAn.logger::logger.info("updating process and raw")
+      PEcAn.DB::db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
+      PEcAn.DB::db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$process_data_path, output$process_data_name, pro_id), dbcon)
+      PEcAn.logger::logger.info("now updating raw")
+      PEcAn.DB::db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
+      PEcAn.DB::db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
+    }
+  }
+
+}
+
+
+# Decide which part of the requested window [req_start, req_end] still needs
+# to be fetched, given an existing inputs record, and whether the new output
+# should be merged with the existing file.
+# Returns: list(start, end, stage, merge, write_start, write_end)
+set_date_stage <- function(result, req_start, req_end, stage){
+  db_start = as.Date(result$start_date)
+  db_end = as.Date(result$end_date)
+  req_start = as.Date(req_start)
+  req_end = as.Date(req_end)
+  stage <- TRUE
+  merge <- TRUE
+
+  # requested dates are already covered by the existing file: nothing to do
+  if((req_start >= db_start) && (req_end <= db_end)){
+    req_start <- "dont write"
+    req_end <- "dont write"
+    stage <- FALSE
+    merge <- FALSE
+    write_start <- NULL
+    write_end <- NULL
+  }else if(req_start < db_start && db_end < req_end){
+    # requested window strictly contains the existing file: replace it
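+    # Worked example for this branch (dates hypothetical): the DB holds
+    # 2019-03-01..2019-06-30 while the request is 2019-01-01..2019-12-31,
+    # so the existing file cannot be extended at just one end and is
+    # instead replaced over the full requested window.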
merge <- "replace" + write_start <-req_start + write_end <-req_end + stage <- TRUE + }else if( ( (req_end > db_end) && (db_start <= req_start && req_start <= db_end) ) || ( (req_start > db_start) && (req_end > db_end))){ + # forward case + print("extending forward") + req_start <- db_end + 1 + write_start <- db_start + write_end <- req_end + }else if( ( (req_start < db_start) && (db_start <= req_end && req_end <= db_end)) || ( (req_start < db_start) && (req_end < db_end))) { + # backward case + req_end <- db_start - 1 + write_end <- db_end + write_start <- req_start + } + return (list(req_start, req_end, stage, merge, write_start, write_end)) + +} + diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py index d835a24c210..a797ee7c5f2 100644 --- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -8,19 +8,11 @@ Author(s): Ayush Prasad, Istem Fer """ -from nc_merge import nc_merge +from merge_files import nc_merge from importlib import import_module from appeears2pecan import appeears2pecan import os -# dictionary used to map the GEE image collection id to PEcAn specific function name -collection_dict = { - "LANDSAT/LC08/C01/T1_SR": "l8", - "COPERNICUS/S2_SR": "s2", - "NASA_USDA/HSL/SMAP_soil_moisture": "smap", - # "insert GEE collection id": "insert PEcAn specific name", -} - def get_remote_data( geofile, @@ -66,20 +58,7 @@ def get_remote_data( """ - - - if source == "gee": - try: - # get collection id from the dictionary - collection = collection_dict[collection] - except KeyError: - print( - "Please check if the collection name you requested is one of these and spelled correctly. If not, you need to implement a corresponding gee2pecan_{} function and add it to the collection dictionary.".format( - collection - ) - ) - print(collection_dict.keys()) # construct the function name func_name = "".join([source, "2pecan", "_", collection]) # import the module @@ -96,8 +75,7 @@ def get_remote_data( # if source == "appeears": # get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) - if raw_merge == "TRUE": - + if raw_merge == True and raw_merge != "replace": get_datareturn_path = nc_merge(existing_raw_file_path, get_datareturn_path, outdir) return get_datareturn_path diff --git a/modules/data.remote/inst/process_remote_data.py b/modules/data.remote/inst/process_remote_data.py index 74989083846..cef6320baae 100644 --- a/modules/data.remote/inst/process_remote_data.py +++ b/modules/data.remote/inst/process_remote_data.py @@ -6,7 +6,7 @@ Requires Python3 Author: Ayush Prasad """ -from nc_merge import nc_merge +from merge_files import nc_merge from importlib import import_module import os import time @@ -43,7 +43,10 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori # call the function process_datareturn_path = func(input_file, outdir) - if pro_merge == "TRUE": + if pro_merge == True and pro_merge != "replace": + try: process_datareturn_path = nc_merge(existing_pro_file_path, process_datareturn_path, outdir) - + except: + print(existing_pro_file_path) + print(process_datareturn_path) return process_datareturn_path diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index e814d64cb44..bfc9c366570 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -8,10 +8,11 @@ Author(s): Ayush Prasad, Istem Fer """ - +from merge_files import 
 from get_remote_data import get_remote_data
 from process_remote_data import process_remote_data
 from gee_utils import get_sitename
+import os
 
 
 def remote_process(
@@ -80,25 +81,30 @@ def remote_process(
     # when db connections are made, this will be removed
     aoi_name = get_sitename(geofile)
 
-
+    get_datareturn_path = None  # placeholder; set below when raw data is staged
+
+
     if stage_get_data:
         get_datareturn_path = get_remote_data(
             geofile, outdir, start, end, source, collection, scale, projection, qc, credfile, raw_merge, existing_raw_file_path
         )
+        get_datareturn_name = os.path.split(get_datareturn_path)
 
     if stage_process_data:
         if input_file is None:
             input_file = get_datareturn_path
         process_datareturn_path = process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge, existing_pro_file_path)
+        process_datareturn_name = os.path.split(process_datareturn_path)
 
-
-    output = {"raw_data": None, "process_data": None}
+    output = {"raw_data_name": None, "raw_data_path": None, "process_data_name": None, "process_data_path": None}
 
     if stage_get_data:
-        output['raw_data'] = get_datareturn_path
+        output['raw_data_name'] = get_datareturn_name[1]
+        output['raw_data_path'] = get_datareturn_path
 
     if stage_process_data:
-        output['process_data'] = process_datareturn_path
+        output['process_data_name'] = process_datareturn_name[1]
+        output['process_data_path'] = process_datareturn_path
 
     return output
From 0e462b7a81473237a88b2669a044843d85c7da Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sat, 1 Aug 2020 08:53:19 -0400
Subject: [PATCH 1286/2289] Update DESCRIPTION

---
 modules/data.land/DESCRIPTION | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION
index 8a04a34d8fa..bd2964636fb 100644
--- a/modules/data.land/DESCRIPTION
+++ b/modules/data.land/DESCRIPTION
@@ -24,36 +24,35 @@ Depends:
   PEcAn.DB,
   PEcAn.utils,
   redland,
-  sirt,
-  sf
+  sirt
 Imports:
+  coda,
   dplyr,
+  fields,
+  fs,
+  lubridate,
   magrittr,
   ncdf4 (>= 1.15),
+  neonUtilities,
   PEcAn.logger,
   PEcAn.remote,
   purrr,
   rlang,
   sp,
   sf,
-  XML (>= 3.98-1.4)
+  traits,
+  udunits2,
+  XML (>= 3.98-1.4)
 Suggests:
-  fields,
-  PEcAn.settings,
-  rgdal,
-  RPostgreSQL,
-  testthat (>= 1.0.2),
-  coda,
-  fs,
-  lubridate,
-  neonUtilities,
   PEcAn.benchmark,
PEcAn.data.atmosphere, PEcAn.settings, diff --git a/modules/data.land/R/InventoryGrowthFusion.R b/modules/data.land/R/InventoryGrowthFusion.R index cf6f0ba7f27..94bfa06f34c 100644 --- a/modules/data.land/R/InventoryGrowthFusion.R +++ b/modules/data.land/R/InventoryGrowthFusion.R @@ -13,7 +13,6 @@ ##' @return an mcmc.list object ##' @export InventoryGrowthFusion <- function(data, cov.data=NULL, time_data = NULL, n.iter=5000, n.chunk = n.iter, n.burn = min(n.chunk, 2000), random = NULL, fixed = NULL,time_varying=NULL, burnin_plot = FALSE, save.jags = "IGF.txt", z0 = NULL, save.state=TRUE,restart = NULL) { - library(rjags) # baseline variables to monitor burnin.variables <- c("tau_add", "tau_dbh", "tau_inc", "mu") # process variability, dbh and tree-ring observation error, intercept @@ -426,11 +425,11 @@ model{ PEcAn.logger::logger.info("COMPILE JAGS MODEL") - j.model <- jags.model(file = textConnection(TreeDataFusionMV), data = data, inits = init, n.chains = 3) + j.model <- rjags::jags.model(file = textConnection(TreeDataFusionMV), data = data, inits = init, n.chains = 3) if(n.burn > 0){ PEcAn.logger::logger.info("BURN IN") - jags.out <- coda.samples(model = j.model, + jags.out <- rjags::coda.samples(model = j.model, variable.names = burnin.variables, n.iter = n.burn) if (burnin_plot) { @@ -450,7 +449,7 @@ model{ } ## sample chunk - jags.out <- coda.samples(model = j.model, variable.names = vnames, n.iter = n.chunk) + jags.out <- rjags::coda.samples(model = j.model, variable.names = vnames, n.iter = n.chunk) ## save chunk ofile <- paste("IGF",model,k,"RData",sep=".") diff --git a/modules/data.land/R/fia2ED.R b/modules/data.land/R/fia2ED.R index 0f8c325c6d5..c4d7c0e14d5 100644 --- a/modules/data.land/R/fia2ED.R +++ b/modules/data.land/R/fia2ED.R @@ -7,9 +7,6 @@ # http://opensource.ncsa.illinois.edu/license.html #------------------------------------------------------------------------------- -library(PEcAn.utils) -library(PEcAn.DB) - ##' convert x into a table ##' ##' @title fia.to.psscss @@ -37,14 +34,14 @@ fia.to.psscss <- function(settings, lonmin <- lon - gridres ## connect to database - con <- db.open(settings$database$bety) - on.exit(db.close(con), add = TRUE) + con <- PEcAn.DB::db.open(settings$database$bety) + on.exit(PEcAn.DB::db.close(con), add = TRUE) # Check whether inputs exist already if(!overwrite) { existing.files <- list() for(format in formatnames) { - existing.files[[format]] <- dbfile.input.check( + existing.files[[format]] <- PEcAn.DB::dbfile.input.check( siteid = settings$run$site$id, startdate = startdate, enddate = enddate, @@ -82,7 +79,7 @@ fia.to.psscss <- function(settings, query <- paste0(query, " OR bp.name = '", pft$name, "'") } } - pfts <- db.query(query, con = con) + pfts <- PEcAn.DB::db.query(query, con = con) # Convert PFT names to ED2 Numbers data(pftmapping) @@ -109,7 +106,7 @@ fia.to.psscss <- function(settings, bad <- pfts$spcd[duplicated(pfts$spcd)] if (length(bad) > 0) { # Coerce spcds back into species names using data from FIA manual. Makes a more readable warning. 
- symbol.table <- db.query("SELECT spcd, \"Symbol\" FROM species where spcd IS NOT NULL", con = con) + symbol.table <- PEcAn.DB::db.query("SELECT spcd, \"Symbol\" FROM species where spcd IS NOT NULL", con = con) names(symbol.table) <- tolower(names(symbol.table)) # grab the names where we have bad spcds in the symbol.table, exclude NAs @@ -121,8 +118,8 @@ fia.to.psscss <- function(settings, } ## connect to database - fia.con <- db.open(settings$database$fia) - on.exit(db.close(fia.con), add = TRUE) + fia.con <- PEcAn.DB::db.open(settings$database$fia) + on.exit(PEcAn.DB::db.close(fia.con), add = TRUE) ################## ## ## @@ -137,7 +134,7 @@ fia.to.psscss <- function(settings, " AND p.lat <= ", latmax, " AND p.measyear >= ", min.year, " AND p.measyear <= ", max.year, " GROUP BY p.cn") - pss <- db.query(query, con = fia.con) + pss <- PEcAn.DB::db.query(query, con = fia.con) if (nrow(pss) == 0) { PEcAn.logger::logger.severe("No pss data found.") } @@ -195,7 +192,7 @@ fia.to.psscss <- function(settings, " and p.lon < ", lonmax, " and p.lat >= ", latmin, " and p.lat < ", latmax) - css <- db.query(query, con = fia.con) + css <- PEcAn.DB::db.query(query, con = fia.con) names(css) <- tolower(names(css)) if (nrow(css) == 0) { PEcAn.logger::logger.severe("No FIA data found.") @@ -232,7 +229,7 @@ fia.to.psscss <- function(settings, if (length(pft.only) > 0) { if (!exists("symbol.table")) { - symbol.table <- db.query("SELECT spcd, \"Symbol\" FROM species where spcd IS NOT NULL", con = con) + symbol.table <- PEcAn.DB::db.query("SELECT spcd, \"Symbol\" FROM species where spcd IS NOT NULL", con = con) names(symbol.table) <- tolower(names(symbol.table)) } name.list <- na.omit(symbol.table$symbol[symbol.table$spcd %in% pft.only]) @@ -247,7 +244,7 @@ fia.to.psscss <- function(settings, if (length(fia.only) > 0) { if (!exists("symbol.table")) { - symbol.table <- db.query("SELECT spcd, \"Symbol\" FROM species where spcd IS NOT NULL", con = con) + symbol.table <- PEcAn.DB::db.query("SELECT spcd, \"Symbol\" FROM species where spcd IS NOT NULL", con = con) names(symbol.table) <- tolower(names(symbol.table)) } name.list <- na.omit(symbol.table$symbol[symbol.table$spcd %in% fia.only]) From 751528c478ef7579a824d64ffb9981a7edac6ae8 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 1 Aug 2020 19:33:14 +0530 Subject: [PATCH 1288/2289] working raw --- modules/data.remote/R/call_remote_process.R | 72 +++++++++++++++++---- 1 file changed, 60 insertions(+), 12 deletions(-) diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R index ccc7dfd29d9..78652970490 100644 --- a/modules/data.remote/R/call_remote_process.R +++ b/modules/data.remote/R/call_remote_process.R @@ -120,30 +120,67 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, existing_pro_file_path = NULL flag <- 1 } - - + }else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", raw_file_name), dbcon)) == 1){ + # if only raw data is requested + print(raw_check) + datalist <- set_date_stage(raw_check, req_start, req_end, stage_get_data) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + stage_get_data <- datalist[[3]] + raw_merge <- datalist[[4]] + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + stage_process_data <- FALSE + if(raw_merge == 
TRUE){ + existing_raw_file_path <- raw_check$file_path + }else{ + existing_raw_file_path <- NULL + } + existing_pro_file_path <- NULL + }else{ + # nothing of requested type exists + flag <- 1 + start <- req_start + end <- req_end + if(!is.null(out_get_data)){ + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path = NULL + } + if(!is.null(out_process_data)){ + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + process_file_name <- NULL + existing_pro_file_path <- NULL + } } + + }else{ # db is completely empty for the given siteid + flag <- 1 print("i am here") start <- req_start end <- req_end - write_raw_start <- req_start - write_raw_end <- req_end - stage_get_data <- TRUE - raw_merge <- FALSE - existing_raw_file_path = NULL - pro_merge <- FALSE - existing_pro_file_path <- NULL + if(!is.null(out_get_data)){ + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path = NULL + } if(!is.null(out_process_data)){ stage_process_data <- TRUE write_pro_start <- req_start write_pro_end <- req_end - flag <- 1 - }else{ - stage_process_data <- FALSE + pro_merge <- FALSE + existing_pro_file_path <- NULL } } @@ -201,6 +238,17 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$raw_data_path, output$raw_data_name, raw_id), dbcon) } + }else{ + if(flag == 1){ + PEcAn.logger::logger.info(("inserting raw for the first time")) + PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) + }else{ + PEcAn.logger::logger.info("updating raw data") + raw_id = raw_check$id + db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) + db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$raw_data_path, output$raw_data_name, raw_id), dbcon) + } + } From 265e4e43598a46af824b080fba34082b1e4222c9 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 1 Aug 2020 16:24:14 +0000 Subject: [PATCH 1289/2289] created api endpoint for getting workflow status --- apps/api/R/workflows.R | 71 ++++++++++++++++++++++++++++++++---------- 1 file changed, 54 insertions(+), 17 deletions(-) diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 0d9c3f0ddb9..36adec8560c 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -92,12 +92,35 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r ################################################################################################# +#' Post a workflow for execution +#' @param req Request sent +#' @return ID & status of the submitted workflow +#' @author Tezan Sahu +#* @post / +submitWorkflow <- function(req, res){ + if(req$HTTP_CONTENT_TYPE == "application/xml"){ + submission_res <- submit.workflow.xml(req$postBody, req$user) + if(submission_res$status == "Error"){ + res$status <- 400 + return(submission_res) + } + res$status <- 
201
+    return(submission_res)
+  }
+  else{
+    res$status <- 415
+    return(paste("Unsupported request content type:", req$HTTP_CONTENT_TYPE))
+  }
+}
+
+#################################################################################################
+
 #' Get the details of the workflow specified by the id
 #' @param id Workflow id (character)
 #' @return Details of requested workflow
 #' @author Tezan Sahu
 #* @get /<id>
-getWorkflowDetails <- function(id, res){
+getWorkflowDetails <- function(id, req, res){
   dbcon <- PEcAn.DB::betyConnect()
 
   Workflow <- tbl(dbcon, "workflows") %>%
@@ -130,23 +153,37 @@ getWorkflowDetails <- function(id, res){
 
 #################################################################################################
 
-#' Post a workflow for execution
-#' @param req Request sent
-#' @return ID & status of the submitted workflow
+#' Get the status of the workflow specified by the id
+#' @param id Workflow id (character)
+#' @return Status of requested workflow
 #' @author Tezan Sahu
-#* @post /
-submitWorkflow <- function(req, res){
-  if(req$HTTP_CONTENT_TYPE == "application/xml"){
-    submission_res <- submit.workflow.xml(req$postBody, req$user)
-    if(submission_res$status == "Error"){
-      res$status <- 400
-      return(submission_res)
-    }
-    res$status <- 201
-    return(submission_res)
+#* @get /<id>/status
+getWorkflowStatus <- function(req, id, res){
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Workflow <- tbl(dbcon, "workflows") %>%
+    select(id, user_id) %>%
+    filter(id == !!id)
+
+
+  qry_res <- Workflow %>% collect()
+
+  PEcAn.DB::db.close(dbcon)
+
+  if (nrow(qry_res) == 0) {
+    res$status <- 404
+    return(list(error="Workflow with specified ID was not found on this host"))
   }
-  else{
-    res$status <- 415
-    return(paste("Unsupported request content type:", req$HTTP_CONTENT_TYPE))
+  else {
+    # Check if the STATUS file exists on the host
+    statusfile <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", qry_res$id, "/STATUS")
+    if(! 
file.exists(datafile)){ + res$status <- 404 + return(list(error="Workflow with specified ID was not found on this host")) + } + + wf_status <- readLines(statusfile) + wf_status <- stringr::str_replace_all(wf_status, "\t", " ") + return(list(workflow_id=id, status=wf_status)) } } \ No newline at end of file From 23a9c7010843899d8eccc611435b46a7ffe0a4cf Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 13:17:02 -0400 Subject: [PATCH 1290/2289] more namespace fixes in data.land --- modules/data.land/DESCRIPTION | 1 - modules/data.land/R/Read_Tuscon.R | 4 +--- modules/data.land/R/find.land.R | 5 ++--- modules/data.land/R/gis.functions.R | 16 ++++++++-------- modules/data.land/R/land.utils.R | 11 +++++------ modules/data.land/R/plot2AGB.R | 4 +--- 6 files changed, 17 insertions(+), 24 deletions(-) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index e31235db8e5..569b3d02a60 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -26,7 +26,6 @@ Depends: Imports: coda, dplyr, - fields, fs, lubridate, magrittr, diff --git a/modules/data.land/R/Read_Tuscon.R b/modules/data.land/R/Read_Tuscon.R index 3b5e8eadb96..0aa0be32a6d 100644 --- a/modules/data.land/R/Read_Tuscon.R +++ b/modules/data.land/R/Read_Tuscon.R @@ -44,8 +44,6 @@ Clean_Tucson <- function(file) { ##' (WinDendro can sometimes create duplicate records when editing) Read_Tucson <- function(folder) { - library(dplR) - filenames <- dir(folder, pattern = "TXT", full.names = TRUE) filenames <- c(filenames, dir(folder, pattern = "rwl", full.names = TRUE)) filenames <- c(filenames, dir(folder, pattern = "rw", full.names = TRUE)) @@ -56,7 +54,7 @@ Read_Tucson <- function(folder) { filedata <- list() for (file in filenames) { file <- Clean_Tucson(file) - filedata[[file]] <- read.tucson(file, header = FALSE) + filedata[[file]] <- dplR::read.tucson(file, header = FALSE) } return(filedata) diff --git a/modules/data.land/R/find.land.R b/modules/data.land/R/find.land.R index 480dee37103..808d09f5b75 100644 --- a/modules/data.land/R/find.land.R +++ b/modules/data.land/R/find.land.R @@ -11,13 +11,12 @@ ##' @export ##' @author David LeBauer find.land <- function(lat, lon, plot = FALSE) { - library(maptools) - data(wrld_simpl) + data(wrld_simpl,package="maptools") ## Create a SpatialPoints object points <- expand.grid(lon, lat) colnames(points) <- c("lat", "lon") - pts <- SpatialPoints(points, proj4string = CRS(proj4string(wrld_simpl))) + pts <- sp::SpatialPoints(points, proj4string = sp::CRS(sp::proj4string(wrld_simpl))) ## Find which points fall over land landmask <- cbind(points, data.frame(land = !is.na(over(pts, wrld_simpl)$FIPS))) diff --git a/modules/data.land/R/gis.functions.R b/modules/data.land/R/gis.functions.R index fc5ae542dc7..152c43cb693 100644 --- a/modules/data.land/R/gis.functions.R +++ b/modules/data.land/R/gis.functions.R @@ -70,8 +70,8 @@ shp2kml <- function(dir, ext, kmz = FALSE, proj4 = NULL, color = NULL, NameField # Read in shapefile(s) & get coordinates/projection info shp.file <- # readShapeSpatial(file.path(dir,i),verbose=TRUE) coordinates(test) <- ~X+Y - layers <- ogrListLayers(file.path(dir, i)) - info <- ogrInfo(file.path(dir, i), layers) + layers <- rgdal::ogrListLayers(file.path(dir, i)) + info <- rgdal::ogrInfo(file.path(dir, i), layers) # shp.file <- readOGR(file.path(dir,i),layer=layers) # no need to read in file # Display vector info to the console @@ -131,10 +131,10 @@ shp2kml <- function(dir, ext, kmz = FALSE, proj4 = NULL, color = NULL, 
NameField
 ##' @author Shawn P. Serbin
 get.attributes <- function(file, coords) {
   # ogr tools do not seem to function properly in R. Need to figure out a work around: reading in
   # kml files drops important fields inside the layers.
 
-  library(fields)
-  require(rgdal)
+  #library(fields)
+  #require(rgdal)
 
-  # print('NOT IMPLEMENTED YET') subset.layer(file,coords)
+  # print('NOT IMPLEMENTED YET') subset_layer(file,coords)
 } # get.attributes
diff --git a/modules/data.land/R/land.utils.R b/modules/data.land/R/land.utils.R
index 8b549222862..483333dab64 100644
--- a/modules/data.land/R/land.utils.R
+++ b/modules/data.land/R/land.utils.R
@@ -1,13 +1,12 @@
 get.elevation <- function(lat, lon) {
   # http://stackoverflow.com/a/8974308/199217
-  library(RCurl)
-
+
   url <- paste("http://www.earthtools.org/height", lat, lon, sep = "/")
-  page <- getURL(url)
-  ans <- xmlTreeParse(page, useInternalNodes = TRUE)
-  heightNode <- xpathApply(ans, "//meters")[[1]]
-  return(as.numeric(xmlValue(heightNode)))
+  page <- RCurl::getURL(url)
+  ans <- XML::xmlTreeParse(page, useInternalNodes = TRUE)
+  heightNode <- XML::xpathApply(ans, "//meters")[[1]]
+  return(as.numeric(XML::xmlValue(heightNode)))
 } # get.elevation
diff --git a/modules/data.land/R/plot2AGB.R b/modules/data.land/R/plot2AGB.R
index 7e535c6e224..5d10ba040b9 100644
--- a/modules/data.land/R/plot2AGB.R
+++ b/modules/data.land/R/plot2AGB.R
@@ -21,8 +21,6 @@
 ##' @export
 plot2AGB <- function(combined, out, outfolder, allom.stats, unit.conv = 0.02) {
 
-  library(mvtnorm)
-
   ## Jenkins: hemlock (kg) b0 <- -2.5384 b1 <- 2.4814 ## Allometric statistics
@@ -55,7 +53,7 @@ plot2AGB <- function(combined, out, outfolder, allom.stats, unit.conv = 0.02) {
 
   for (g in seq_len(nrep)) {
     ## Draw allometries
-    b <- rmvnorm(1, B, Bcov)
+    b <- mvtnorm::rmvnorm(1, B, Bcov)
 
     ## convert tree diameter to biomass
     biomass <- matrix(exp(b[1] + b[2] * log(out[g, ])), ntree, nt)
From e2c34b74a824162d688c80777482d7895d587416 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Sat, 1 Aug 2020 17:35:54 +0000
Subject: [PATCH 1291/2289] added docs & tests for workflow status get endpoint

---
 apps/api/R/workflows.R                              |  2 +-
 apps/api/pecanapi-spec.yml                          | 33 ++++++++++-
 apps/api/tests/test.workflows.R                     | 21 +++++++
 .../07_remote_access/01_pecan_api.Rmd               | 55 +++++++++++++++++++
 4 files changed, 108 insertions(+), 3 deletions(-)

diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R
index 36adec8560c..83bec757326 100644
--- a/apps/api/R/workflows.R
+++ b/apps/api/R/workflows.R
@@ -177,7 +177,7 @@ getWorkflowStatus <- function(req, id, res){
   else {
     # Check if the STATUS file exists on the host
     statusfile <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", qry_res$id, "/STATUS")
-    if(! file.exists(datafile)){
+    if(! file.exists(statusfile)){
      res$status <- 404
      return(list(error="Workflow with specified ID was not found on this host"))
    }

    wf_status <- readLines(statusfile)
    wf_status <- stringr::str_replace_all(wf_status, "\t", "     ")
    return(list(workflow_id=id, status=wf_status))
  }
}
\ No newline at end of file
diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
index 10591471239..269565577a2 100644
--- a/apps/api/pecanapi-spec.yml
+++ b/apps/api/pecanapi-spec.yml
@@ -3,7 +3,7 @@ servers:
   - description: PEcAn API Server
     url: https://pecan-tezan.ncsa.illinois.edu/
   - description: PEcAn Test Server
-    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/0ac36769
+    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/446c2044
   - description: Localhost
     url: http://127.0.0.1:8000
 
@@ -471,7 +471,36 @@ paths:
         '404':
           description: Workflow with specified ID was not found
 
-
+  /api/workflows/{id}/status:
+    get:
+      tags:
+        - workflows
+      summary: Get the status of a PEcAn Workflow execution
+      parameters:
+        - in: path
+          name: id
+          description: ID of the PEcAn Workflow
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          description: Status of the requested PEcAn Workflow
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  workflow_id:
+                    type: string
+                  status:
+                    type: string
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Workflow with specified ID was not found
diff --git a/apps/api/tests/test.workflows.R b/apps/api/tests/test.workflows.R
index ce6522ee0e5..1d18a605a84 100644
--- a/apps/api/tests/test.workflows.R
+++ b/apps/api/tests/test.workflows.R
@@ -36,6 +36,8 @@ test_that("Calling /api/workflows/{id} with invalid workflow id returns Status 4
   expect_equal(res$status, 404)
 })
 
+submitted_workflow_id <- NULL
+
 test_that("Submitting XML workflow to /api/workflows/ returns Status 201", {
   xml_string <- paste0(xml2::read_xml("test_workflows/api.sipnet.xml"))
   res <- httr::POST(
@@ -44,5 +46,24 @@ test_that("Submitting XML workflow to /api/workflows/ returns Status 201", {
     httr::content_type("application/xml"),
     body = xml_string
   )
+
+  submitted_workflow_id <<- jsonlite::fromJSON(rawToChar(res$content))$workflow_id
   expect_equal(res$status, 201)
+})
+
+test_that("Calling /api/workflows/{id}/status with valid workflow id returns Status 200", {
+  Sys.sleep(5)
+  res <- httr::GET(
+    paste0("http://localhost:8000/api/workflows/", submitted_workflow_id, "/status"),
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/workflows/{id}/status with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/workflows/0/status",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
\ No newline at end of file
diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
index 1521d98af19..1e0f16fdfcd 100644
--- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -47,6 +47,7 @@ The currently implemented functionalities include:
   * [`GET /api/workflows/`](#get-apiworkflows): Retrieve a list of PEcAn workflows
   * [`POST /api/workflows/`](#post-apiworkflows): Submit a new PEcAn workflow
   * [`GET /api/workflows/{id}`](#get-apiworkflowsid): Obtain the details of a particular PEcAn workflow by supplying its ID
+  * [`GET /api/workflows/{id}/status`](#get-apiworkflowsidstatus): Obtain the status of a particular PEcAn workflow
 
 * __Runs:__
   * [`GET /api/runs`](#get-apiruns): Get the list of all the runs
@@ -756,6 +757,60 @@ print(json.dumps(response.json(), indent=2))
 
 ### {-}
 
+### `GET /api/workflows/{id}/status` {.tabset .tabset-pills}
+
+#### R Snippet
+
+```R
+# Get the status of the workflow with `workflow_id` = '99000000001'
+res <- httr::GET(
+  "http://localhost:8000/api/workflows/99000000001/status",
+  httr::authenticate("carya", "illinois")
+  )
+print(jsonlite::fromJSON(rawToChar(res$content)))
+```
+```
+## $workflow_id
+## [1] "99000000001"
+
+## $status
+## [1] "TRAIT 2020-07-22 07:02:33 2020-07-22 07:02:35 DONE "
+## [2] "META 2020-07-22 07:02:35 2020-07-22 07:02:38 DONE "
+## [3] "CONFIG 2020-07-22 07:02:38 2020-07-22 07:02:40 DONE "
+## [4] "MODEL 2020-07-22 07:02:40 2020-07-22 07:04:07 DONE "
+## [5] "OUTPUT 2020-07-22 07:04:07 2020-07-22 07:04:08 DONE "
+## [6] "ENSEMBLE 2020-07-22 07:04:08 2020-07-22 07:04:09 DONE "
+## [7] "SENSITIVITY 2020-07-22 07:04:09 2020-07-22 07:04:16 DONE "
+## [8] "FINISHED 2020-07-22 07:04:16 2020-07-22 07:04:16 DONE "
+```
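+
+Each status line describes one workflow stage: name, start timestamp, end
+timestamp and state. A minimal sketch for parsing these lines client-side,
+assuming the whitespace-delimited layout shown above:
+
+```R
+status <- jsonlite::fromJSON(rawToChar(res$content))$status
+# one row per stage: stage, start date, start time, end date, end time, state
+parsed <- do.call(rbind, strsplit(trimws(status), "\\s+"))
+```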
+
+#### Python Snippet
+
+```python
+# Get the status of the workflow with `workflow_id` = '99000000001'
+response = requests.get(
+  "http://localhost:8000/api/workflows/99000000001/status",
+  auth=HTTPBasicAuth('carya', 'illinois')
+  )
+
print(json.dumps(response.json(), indent=2)) ### {-} +### `GET /api/workflows/{id}/status` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get list of run belonging to the workflow with `workflow_id` = '99000000001' +res <- httr::GET( + "http://localhost:8000/api/workflows/99000000001/status", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $workflow_id +## [1] "99000000001" + +## $status +## [1] "TRAIT 2020-07-22 07:02:33 2020-07-22 07:02:35 DONE " +## [2] "META 2020-07-22 07:02:35 2020-07-22 07:02:38 DONE " +## [3] "CONFIG 2020-07-22 07:02:38 2020-07-22 07:02:40 DONE " +## [4] "MODEL 2020-07-22 07:02:40 2020-07-22 07:04:07 DONE " +## [5] "OUTPUT 2020-07-22 07:04:07 2020-07-22 07:04:08 DONE " +## [6] "ENSEMBLE 2020-07-22 07:04:08 2020-07-22 07:04:09 DONE " +## [7] "SENSITIVITY 2020-07-22 07:04:09 2020-07-22 07:04:16 DONE " +## [8] "FINISHED 2020-07-22 07:04:16 2020-07-22 07:04:16 DONE " +``` + +#### Python Snippet + +```python +# Get list of run belonging to the workflow with `workflow_id` = '99000000001' +response = requests.get( + "http://localhost:8000/api/workflows/99000000001/status", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "workflow_id": "99000000001", +## "status": [ +## "TRAIT 2020-07-22 07:02:33 2020-07-22 07:02:35 DONE ", +## "META 2020-07-22 07:02:35 2020-07-22 07:02:38 DONE ", +## "CONFIG 2020-07-22 07:02:38 2020-07-22 07:02:40 DONE ", +## "MODEL 2020-07-22 07:02:40 2020-07-22 07:04:07 DONE ", +## "OUTPUT 2020-07-22 07:04:07 2020-07-22 07:04:08 DONE ", +## "ENSEMBLE 2020-07-22 07:04:08 2020-07-22 07:04:09 DONE ", +## "SENSITIVITY 2020-07-22 07:04:09 2020-07-22 07:04:16 DONE ", +## "FINISHED 2020-07-22 07:04:16 2020-07-22 07:04:16 DONE " +## ] +## } +``` +### {-} + ### `GET /api/runs/` {.tabset .tabset-pills} #### R Snippet From 6db91452ebd86e1ce55d0d6d52e028477b612408 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 13:47:17 -0400 Subject: [PATCH 1292/2289] tried moving infrequently used packages from depends to suggests --- modules/data.land/DESCRIPTION | 9 ++++----- modules/data.land/R/gis.functions.R | 2 -- 2 files changed, 4 insertions(+), 7 deletions(-) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index 569b3d02a60..86c49130f69 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -18,11 +18,6 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific model parameterization, execution, and analysis. The goal of PECAn is to streamline the interaction between data and models, and to improve the efficacy of scientific investigation. -Depends: - datapack, - dataone, - redland, - sirt Imports: coda, dplyr, @@ -44,6 +39,10 @@ Imports: udunits2, XML (>= 3.98-1.4) Suggests: + datapack, + dataone, + redland, + sirt, dplR, maptools, mvtnorm, diff --git a/modules/data.land/R/gis.functions.R b/modules/data.land/R/gis.functions.R index 152c43cb693..b9f83b68156 100644 --- a/modules/data.land/R/gis.functions.R +++ b/modules/data.land/R/gis.functions.R @@ -36,8 +36,6 @@ ##' @author Shawn P. Serbin shp2kml <- function(dir, ext, kmz = FALSE, proj4 = NULL, color = NULL, NameField = NULL, out.dir = NULL) { - require(rgdal) - # TODO: Enable compression of KML files using zip/gzip utility. 
Not quite figured this out yet # TODO: Allow assignment of output projection info by entering proj4 string # TODO: Allow for customization of output fill colors and line size From 466de00c7ecbe271e42c3338e89f1e075fa422dd Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sat, 1 Aug 2020 18:23:18 +0000 Subject: [PATCH 1293/2289] added input file names to run response --- apps/api/R/runs.R | 23 +++++++++++++++++++ apps/api/pecanapi-spec.yml | 9 ++++++++ .../07_remote_access/01_pecan_api.Rmd | 16 +++++++++++++ 3 files changed, 48 insertions(+) diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R index 3e61fea9792..af8619f1a37 100644 --- a/apps/api/R/runs.R +++ b/apps/api/R/runs.R @@ -112,6 +112,12 @@ getRunDetails <- function(run_id, res){ response[colname] <- qry_res[colname] } + # If inputs exist on the host, add them to the response + indir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", response$workflow_id, "/run/", run_id) + if(dir.exists(indir)){ + response$inputs <- getRunInputs(indir) + } + # If outputs exist on the host, add them to the response outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", response$workflow_id, "/out/", run_id) if(dir.exists(outdir)){ @@ -168,6 +174,23 @@ plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600 ################################################################################################# +#' Get the inputs of a run (if the files exist on the host) +#' @param indir Run input directory (character) +#' @return Input details of the run +#' @author Tezan Sahu + +getRunInputs <- function(indir){ + inputs <- list() + if(file.exists(paste0(indir, "/README.txt"))){ + inputs$info <- "README.txt" + } + all_files <- list.files(indir) + inputs$others <- all_files[!all_files %in% c("job.sh", "rabbitmq.out", "README.txt")] + return(inputs) +} + +################################################################################################# + #' Get the outputs of a run (if the files exist on the host) #' @param outdir Run output directory (character) #' @return Output details of the run diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 269565577a2..2daf324c596 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -700,6 +700,15 @@ components: type: string finished_at: type: string + inputs: + type: object + properties: + info: + type: string + others: + type: array + items: + type: string outputs: type: object properties: diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 1e0f16fdfcd..6f28c4f98a6 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -914,6 +914,13 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ## $finished_at ## [1] "2020-07-22 07:02:57" +## $inputs +## $inputs$info +## [1] "README.txt" + +## $inputs$others +## [1] "sipnet.clim" "sipnet.in" "sipnet.param" "sipnet.param-spatial" + ## $outputs ## $outputs$logfile ## [1] "logfile.txt" @@ -957,6 +964,15 @@ print(json.dumps(response.json(), indent=2)) ## "parameter_list": "ensemble=1", ## "start_time": "2005-01-01", ## "finish_time": "2011-12-31", +## "inputs": { +## "info": "README.txt", +## "others": [ +## "sipnet.clim", +## "sipnet.in", +## "sipnet.param", +## "sipnet.param-spatial" +## ] +## } ## "outputs": { ## "logfile": "logfile.txt", ## "info": "README.txt", From 
768c79c2a824162d688c80777482d7895d587416 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 14:44:29 -0400 Subject: [PATCH 1294/2289] more data.land namespace --- modules/data.land/DESCRIPTION | 17 +++--- modules/data.land/NAMESPACE | 2 +- modules/data.land/R/IC_BADM_Utilities.R | 2 +- modules/data.land/R/diametergrow.R | 76 ++++++++++++------------- modules/data.land/R/find.land.R | 2 +- modules/data.land/R/gis.functions.R | 12 ++-- modules/data.land/R/ic_utils.R | 4 ++ modules/data.land/R/plot2AGB.R | 2 +- 8 files changed, 59 insertions(+), 58 deletions(-) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index 86c49130f69..7f368b8106d 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -3,16 +3,13 @@ Type: Package Title: PEcAn functions used for ecological forecasts and reanalysis Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Mike","Dietze"), - person("David","LeBauer"), - person("Xiaohui", "Feng"), - person("Dan"," Wang"), - person("Carl", "Davidson"), - person("Rob","Kooper"), - person("Alexey", "Shiklomanov")) -Author: David LeBauer, Mike Dietze, Xiaohui Feng, Dan Wang, - Carl Davidson, Rob Kooper, Alexey Shiklomanov -Maintainer: Mike Dietze , David LeBauer +Authors@R: c(person("Mike","Dietze",email=dietze@bu.edu,"cre"), + person("David","LeBauer",role="aut"), + person("Xiaohui", "Feng",role="ctb"), + person("Dan"," Wang",role="ctb"), + person("Carl", "Davidson",role="ctb"), + person("Rob","Kooper",role="ctb"), + person("Alexey", "Shiklomanov",role="ctb")) Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. The goal of PECAn is to diff --git a/modules/data.land/NAMESPACE b/modules/data.land/NAMESPACE index 439d00a934b..10802961cfb 100644 --- a/modules/data.land/NAMESPACE +++ b/modules/data.land/NAMESPACE @@ -46,7 +46,7 @@ export(soil.units) export(soil2netcdf) export(soil_params) export(soil_process) -export(subset.layer) +export(subset_layer) export(to.Tag) export(to.TreeCode) export(write_ic) diff --git a/modules/data.land/R/IC_BADM_Utilities.R b/modules/data.land/R/IC_BADM_Utilities.R index 600b7f18107..d4e51feacbf 100644 --- a/modules/data.land/R/IC_BADM_Utilities.R +++ b/modules/data.land/R/IC_BADM_Utilities.R @@ -258,7 +258,7 @@ BADM_IC_process <- function(settings, dir, overwrite=TRUE){ ens=.x)) out.ense <- out.ense %>% - setNames(rep("path", length(out.ense))) + stats::setNames(rep("path", length(out.ense))) return(out.ense) } diff --git a/modules/data.land/R/diametergrow.R b/modules/data.land/R/diametergrow.R index ef80a2174a4..589cc2ec624 100644 --- a/modules/data.land/R/diametergrow.R +++ b/modules/data.land/R/diametergrow.R @@ -14,8 +14,8 @@ diametergrow <- function(diameters, increment, survival = NULL) { ## ## - plotend <- function(fname) { dev.off() } - plotstart <- function(fname) { pdf(fname) } + plotend <- function(fname) { grDevices::dev.off() } + plotstart <- function(fname) { grDevices::pdf(fname) } ##################################################################################### tnorm <- function(n, lo, hi, mu, sig) { # normal truncated lo and hi @@ -27,8 +27,8 @@ diametergrow <- function(diameters, increment, survival = NULL) { hi <- rep(hi, length(mu)) } - z <- runif(n, pnorm(lo, mu, sig), pnorm(hi, mu, sig)) - z <- qnorm(z, mu, sig) + z <- stats::runif(n, stats::pnorm(lo, mu, sig), stats::pnorm(hi, mu, sig)) + z <- stats::qnorm(z, 
mu, sig) z[z == Inf] <- lo[z == Inf] z[z == -Inf] <- hi[z == -Inf] z @@ -53,7 +53,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { wi <- which(is.finite(dcens[i, ]), arr.ind = TRUE) xi <- time[wi] - wf + 1 # recenter to first year yi <- dcens[i, wi] - intercept <- mean(yi) - mean(xi) * (cov(xi, yi) / var(xi)) + intercept <- mean(yi) - mean(xi) * (stats::cov(xi, yi) / stats::var(xi)) ## modification: if only one census, assume mean increment if (length(xi) == 1) { @@ -99,13 +99,13 @@ diametergrow <- function(diameters, increment, survival = NULL) { v <- crossprod(X, (dgrow[aincr] - teffect[aincr])) / allvars + prior.IVm %*% prior.mu V <- solve(crossprod(X) / allvars + prior.IVm) - alpha <- matrix(rmvnorm(1, V %*% v, V), (ncovars + 1), 1) + alpha <- matrix(mvtnorm::rmvnorm(1, V %*% v, V), (ncovars + 1), 1) mumat[aincr] <- X %*% alpha v <- apply((dgrow - mumat), 2, sum, na.rm = TRUE) / allvars V <- 1 / (ntt / allvars + 1 / prior.Vmu) - beta <- rnorm(length(v), (V * v), sqrt(V)) + beta <- stats::rnorm(length(v), (V * v), sqrt(V)) mb <- mean(beta[ntt > 0]) #extract mean beta.t <- beta - mb beta.t[ntt == 0] <- 0 @@ -234,12 +234,12 @@ diametergrow <- function(diameters, increment, survival = NULL) { # errors: ss <- sum((dcens[dobs] - diam.t[dobs]) ^ 2, na.rm = TRUE) #diameter error - sw <- 1 / (rgamma(1, (w1 + ndobs / 2), (w2 + 0.5 * ss))) + sw <- 1 / (stats::rgamma(1, (w1 + ndobs / 2), (w2 + 0.5 * ss))) sv <- 0 if (length(iobs) > 0) { ss <- sum((dgrow[iobs] - dincr[iobs]) ^ 2, na.rm = TRUE) #growth error - sv <- 1 / (rgamma(1, (v11 + 0.5 * niobs), (v22 + 0.5 * ss))) + sv <- 1 / (stats::rgamma(1, (v11 + 0.5 * niobs), (v22 + 0.5 * ss))) } list(diam.t = diam.t, sw = sw, sv = sv, ad = ad, aa = aa) @@ -272,17 +272,17 @@ diametergrow <- function(diameters, increment, survival = NULL) { pnew <- pnow # diameter data - pnow[dobs] <- pnow[dobs] + dnorm(dcens[dobs], diam.t[dobs], sqrt(w.error), log = TRUE) - pnew[dobs] <- pnew[dobs] + dnorm(dcens[dobs], diamnew[dobs], sqrt(w.error), log = TRUE) + pnow[dobs] <- pnow[dobs] + stats::dnorm(dcens[dobs], diam.t[dobs], sqrt(w.error), log = TRUE) + pnew[dobs] <- pnew[dobs] + stats::dnorm(dcens[dobs], diamnew[dobs], sqrt(w.error), log = TRUE) # regression - pnow[, -1] <- pnow[, -1] + dnorm(dgrow, lreg, sqrt(sig), log = TRUE) - pnew[, -1] <- pnew[, -1] + dnorm(dnew, lreg, sqrt(sig), log = TRUE) + pnow[, -1] <- pnow[, -1] + stats::dnorm(dgrow, lreg, sqrt(sig), log = TRUE) + pnew[, -1] <- pnew[, -1] + stats::dnorm(dnew, lreg, sqrt(sig), log = TRUE) # increment data if (length(iobs) > 0) { - pnow[iobs] <- pnow[iobs] + dnorm(dincr[iobs], dgrow[iobs], sqrt(v.error), log = TRUE) - pnew[iobs] <- pnew[iobs] + dnorm(dincr[iobs], dnew[iobs], sqrt(v.error), log = TRUE) + pnow[iobs] <- pnow[iobs] + stats::dnorm(dincr[iobs], dgrow[iobs], sqrt(v.error), log = TRUE) + pnew[iobs] <- pnew[iobs] + stats::dnorm(dincr[iobs], dnew[iobs], sqrt(v.error), log = TRUE) } pnow <- apply(pnow, 1, sum, na.rm = TRUE) @@ -299,10 +299,10 @@ diametergrow <- function(diameters, increment, survival = NULL) { # errors: ss <- sum((dcens[dobs] - diam.t[dobs]) ^ 2, na.rm = TRUE) #diameter error - sw <- 1 / (rgamma(1, (w1 + ndobs / 2), (w2 + 0.5 * ss))) + sw <- 1 / (stats::rgamma(1, (w1 + ndobs / 2), (w2 + 0.5 * ss))) ss <- sum((dgrow[iobs] - dincr[iobs])^2, na.rm = TRUE) #growth error - sv <- 1 / (rgamma(1, (v11 + 0.5 * niobs), (v22 + 0.5 * ss))) + sv <- 1 / (stats::rgamma(1, (v11 + 0.5 * niobs), (v22 + 0.5 * ss))) if (length(iobs) == 0) { sv <- 0 } @@ -312,18 +312,18 @@ diametergrow 
<- function(diameters, increment, survival = NULL) { sd.update <- function() { # variance on random effects - 1 / rgamma(1, (vi1 + n/2), (vi2 + 0.5 * sum(beta.i ^ 2))) + 1 / stats::rgamma(1, (vi1 + n/2), (vi2 + 0.5 * sum(beta.i ^ 2))) } # sd.update sp.update <- function() { # variance on random plot effects - 1 / rgamma(1, (pi1 + mplot / 2), (pi2 + 0.5 * sum(beta.p ^ 2))) + 1 / stats::rgamma(1, (pi1 + mplot / 2), (pi2 + 0.5 * sum(beta.p ^ 2))) } # sp.update se.update <- function() { # process error ss <- sum((dgrow - mumat - ieffect - teffect - peffect) ^ 2, na.rm = TRUE) - 1 / (rgamma(1, (s1 + 0.5 * sum(nti)), (s2 + 0.5 * ss))) + 1 / (stats::rgamma(1, (s1 + 0.5 * sum(nti)), (s2 + 0.5 * ss))) } # se.update @@ -462,7 +462,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { ncovars <- 1 #number of covariates X <- matrix(1, nrow(aincr), (ncovars + 1)) nx <- nrow(X) - X[, 2] <- rnorm(nx, (dgrow[aincr] * 0.5 + 1), 0.1) #simulated data + X[, 2] <- stats::rnorm(nx, (dgrow[aincr] * 0.5 + 1), 0.1) #simulated data prior.mu <- rep(0, (1 + ncovars)) prior.Vmu <- rep(10, (1 + ncovars)) prior.IVm <- solve(diag(prior.Vmu)) @@ -505,11 +505,11 @@ diametergrow <- function(diameters, increment, survival = NULL) { ############# initial values ################ mu <- tnorm(1, 0, 1, prior.mu, 1) - sig <- 1 / rgamma(1, s1, s2) - sigd <- 1 / rgamma(1, vi1, vi2) - sigp <- 1 / rgamma(1, pi1, pi2) - w.error <- 1 / rgamma(1, w1, w2) - v.error <- 1 / rgamma(1, vi1, vi2) + sig <- 1 / stats::rgamma(1, s1, s2) + sigd <- 1 / stats::rgamma(1, vi1, vi2) + sigp <- 1 / stats::rgamma(1, pi1, pi2) + w.error <- 1 / stats::rgamma(1, w1, w2) + v.error <- 1 / stats::rgamma(1, vi1, vi2) beta.i <- rep(0, n) #individual random effects beta.p <- rep(0, mplot) #plot random effects beta.t <- rep(0, (nt - 1)) #fixed year effects @@ -678,7 +678,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { estimate <- c(apply(cbind(mgibbs, sgibbs, tgibbs)[keep, ], 2, mean), peff) std_err <- c(apply(cbind(mgibbs, sgibbs, tgibbs)[keep, ], 2, sd), sdp) - p3 <- t(apply(cbind(mgibbs, sgibbs, tgibbs)[keep, ], 2, quantile, c(0.025, 0.975))) + p3 <- t(apply(cbind(mgibbs, sgibbs, tgibbs)[keep, ], 2, stats::quantile, c(0.025, 0.975))) p3 <- rbind(p3, pci) nn <- c(rep(nall, (ncovars + 2)), n, mplot, ndobs, niobs, ntt, ntp) p3 <- cbind(nn, estimate, std_err, p3) @@ -693,7 +693,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { colnames(diampars) <- c(colnames(p3), "par1", "par2", "prior mean") outfile <- file.path(outfolder, "diampars.txt") - write.table(signif(diampars, 3), outfile, row.names = TRUE, col.names = TRUE, quote = FALSE) + utils::write.table(signif(diampars, 3), outfile, row.names = TRUE, col.names = TRUE, quote = FALSE) # determine posterior means and sd's for diameter, growth, and other columns in treemat @@ -769,8 +769,8 @@ diametergrow <- function(diameters, increment, survival = NULL) { yjvec <- c(1:nyr[j]) for (w in wna) { - lfit <- lm(md[w, ] ~ yjvec) - newvals <- predict.lm(lfit, newdata = data.frame(yjvec)) + lfit <- stats::lm(md[w, ] ~ yjvec) + newvals <- stats::predict.lm(lfit, newdata = data.frame(yjvec)) md[w, is.na(md[w, ])] <- newvals[is.na(md[w, ])] check <- diff(md[w, ]) @@ -812,7 +812,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { plotfile <- file.path(outfolder, "incrementdata.ps") plotstart(plotfile) - par(mfrow = c(6, 2), mar = c(1, 1, 2, 1), bty = "n") + graphics::par(mfrow = c(6, 2), mar = c(1, 1, 2, 1), bty = "n") for (j in seq_len(mplot)) { if 
(mtree[j] == 0) { @@ -869,7 +869,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { prior.mu <- 0.3 #prior mean variance for mean growth rate prior.Vmu <- 10 - par(mfrow = c(3, 2)) + graphics::par(mfrow = c(3, 2)) for (j in 1:5) { mj <- vp[j, 2] / (vp[j, 1] - 1) if (max(sgibbs[keep, j], na.rm = TRUE) == 0) { @@ -884,7 +884,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { title(colnames(sgibbs)[j]) } vt <- seq(-0.3, 0.3, length = 100) - plot(vt, dnorm(vt, 0, sqrt(prior.Vmu)), + plot(vt, stats::dnorm(vt, 0, sqrt(prior.Vmu)), col = "darkgreen", type = "l", lwd = 2, ylim = c(0, 60), xlab = "Parameter value", ylab = "Density") title("yr effects") @@ -901,7 +901,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { # var comparison sdi <- apply(tgibbs, 1, sd) - par(mfrow = c(2, 1)) + graphics::par(mfrow = c(2, 1)) meanyr <- apply(tgibbs[keep, ], 2, mean) plot(yrvec[-nt], log10(mgrow[1, ]), ylim = c(-2, 0.5), type = "l", xlab = "Year", ylab = "Diameter increment (log cm)") @@ -967,7 +967,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { plotfile <- file.path(outfolder, "diam_ind.ps") plotstart(plotfile) - par(mfrow = c(2, 2)) + graphics::par(mfrow = c(2, 2)) for (j in seq_along(mgibbs)) { plot(mgibbs[keep, j], type = "l") # title(ins[j]) @@ -983,7 +983,7 @@ diametergrow <- function(diameters, increment, survival = NULL) { # diameters and growth rates ## par(mfrow=c(2,2)) - par(mfrow = c(1, 1)) + graphics::par(mfrow = c(1, 1)) if (length(iobs) > 0) { lo <- mgrow[iobs] - 1.96 * sgrow[iobs] @@ -1018,8 +1018,8 @@ diametergrow <- function(diameters, increment, survival = NULL) { jj <- jj + 1 iplot <- sort(sample(seq_len(n), 5)) - par(mfrow = c(5, 2)) - par(mar = c(3, 2, 2, 1)) + graphics::par(mfrow = c(5, 2)) + graphics::par(mar = c(3, 2, 2, 1)) for (j in 1:5) { md <- exp(mldiam[iplot[j], ]) diff --git a/modules/data.land/R/find.land.R b/modules/data.land/R/find.land.R index 808d09f5b75..b82d5cf4e85 100644 --- a/modules/data.land/R/find.land.R +++ b/modules/data.land/R/find.land.R @@ -11,7 +11,7 @@ ##' @export ##' @author David LeBauer find.land <- function(lat, lon, plot = FALSE) { - data(wrld_simpl,package="maptools") + data("wrld_simpl",package="maptools") ## Create a SpatialPoints object points <- expand.grid(lon, lat) diff --git a/modules/data.land/R/gis.functions.R b/modules/data.land/R/gis.functions.R index b9f83b68156..e51b80c7198 100644 --- a/modules/data.land/R/gis.functions.R +++ b/modules/data.land/R/gis.functions.R @@ -134,7 +134,7 @@ get.attributes <- function(file, coords) { #library(fields) #require(rgdal) - # print('NOT IMPLEMENTED YET') subset.layer(file,coords) + # print('NOT IMPLEMENTED YET') subset_layer(file,coords) } # get.attributes @@ -157,16 +157,16 @@ get.attributes <- function(file, coords) { ##' file <- Sys.glob(file.path(R.home(), 'library', 'PEcAn.data.land','data','*.shp')) ##' out.dir <- path.expand('~/temp') ##' # with clipping enabled -##' subset.layer(file=file,coords=c(-95,42,-84,47),clip=TRUE,out.dir=out.dir) +##' subset_layer(file=file,coords=c(-95,42,-84,47),clip=TRUE,out.dir=out.dir) ##' # without clipping enables -##' subset.layer(file=file,coords=c(-95,42,-84,47),out.dir=out.dir) +##' subset_layer(file=file,coords=c(-95,42,-84,47),out.dir=out.dir) ##' system(paste('rm -r',out.dir,sep='')) ##' } ##' -##' @export subset.layer +##' @export subset_layer ##' ##' @author Shawn P. 
Serbin -subset.layer <- function(file, coords = NULL, sub.layer = NULL, clip = FALSE, out.dir = NULL, out.name = NULL) { +subset_layer <- function(file, coords = NULL, sub.layer = NULL, clip = FALSE, out.dir = NULL, out.name = NULL) { # if (!require(rgdal)) { # print("install rgdal") @@ -211,4 +211,4 @@ subset.layer <- function(file, coords = NULL, sub.layer = NULL, clip = FALSE, ou # Run subset command system(OGRstring) -} # subset.layer +} # subset_layer diff --git a/modules/data.land/R/ic_utils.R b/modules/data.land/R/ic_utils.R index 248e6abe59a..3615ead69f3 100644 --- a/modules/data.land/R/ic_utils.R +++ b/modules/data.land/R/ic_utils.R @@ -2,6 +2,10 @@ ##' ##' @name write_veg ##' @title write_veg +##' @param outfolder output folder +##' @param start_date start date +##' @param veg_info vegetation data to be saved +##' @param source name of data source (used in file naming) ##' @export write_veg <- function(outfolder, start_date, veg_info, source){ diff --git a/modules/data.land/R/plot2AGB.R b/modules/data.land/R/plot2AGB.R index 5d10ba040b9..d545d191526 100644 --- a/modules/data.land/R/plot2AGB.R +++ b/modules/data.land/R/plot2AGB.R @@ -113,7 +113,7 @@ plot2AGB <- function(combined, out, outfolder, allom.stats, unit.conv = 0.02) { lines(yrvec, upA) lines(yrvec, lowA) } - dev.off() + grDevices::dev.off() save(AGB, NPP, mNPP, sNPP, mAGB, sAGB, yrvec, mbiomass_tsca, sbiomass_tsca, mbiomass_acsa3, sbiomass_acsa3, From b4f7b4f9293e957002aed9c30a89010b66827734 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 14:46:00 -0400 Subject: [PATCH 1295/2289] roxygen --- .../data.land/man/{subset.layer.Rd => subset_layer.Rd} | 10 +++++----- modules/data.land/man/write_veg.Rd | 9 +++++++++ 2 files changed, 14 insertions(+), 5 deletions(-) rename modules/data.land/man/{subset.layer.Rd => subset_layer.Rd} (86%) diff --git a/modules/data.land/man/subset.layer.Rd b/modules/data.land/man/subset_layer.Rd similarity index 86% rename from modules/data.land/man/subset.layer.Rd rename to modules/data.land/man/subset_layer.Rd index c2fd5a6ab8a..a147dd48c62 100644 --- a/modules/data.land/man/subset.layer.Rd +++ b/modules/data.land/man/subset_layer.Rd @@ -1,11 +1,11 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/gis.functions.R -\name{subset.layer} -\alias{subset.layer} +\name{subset_layer} +\alias{subset_layer} \title{Function to subset and clip a GIS vector or raster layer by a bounding box or clip/subset layer (e.g. shapefile/KML)} \usage{ -\method{subset}{layer}( +subset_layer( file, coords = NULL, sub.layer = NULL, @@ -39,9 +39,9 @@ or clip/subset layer (e.g. 
shapefile/KML) file <- Sys.glob(file.path(R.home(), 'library', 'PEcAn.data.land','data','*.shp')) out.dir <- path.expand('~/temp') # with clipping enabled -subset.layer(file=file,coords=c(-95,42,-84,47),clip=TRUE,out.dir=out.dir) +subset_layer(file=file,coords=c(-95,42,-84,47),clip=TRUE,out.dir=out.dir) # without clipping enables -subset.layer(file=file,coords=c(-95,42,-84,47),out.dir=out.dir) +subset_layer(file=file,coords=c(-95,42,-84,47),out.dir=out.dir) system(paste('rm -r',out.dir,sep='')) } diff --git a/modules/data.land/man/write_veg.Rd b/modules/data.land/man/write_veg.Rd index 27efd6e70a1..b50d4165f50 100644 --- a/modules/data.land/man/write_veg.Rd +++ b/modules/data.land/man/write_veg.Rd @@ -6,6 +6,15 @@ \usage{ write_veg(outfolder, start_date, veg_info, source) } +\arguments{ +\item{outfolder}{output folder} + +\item{start_date}{start date} + +\item{veg_info}{vegetation data to be saved} + +\item{source}{name of data source (used in file naming)} +} \description{ Function to save intermediate rds file } From d11cf4270e30b3f9196d9b636cc5423eafc61c25 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 14:46:49 -0400 Subject: [PATCH 1296/2289] DESCRIPTION --- modules/data.land/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index 7f368b8106d..0aef6c7a65d 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -3,7 +3,7 @@ Type: Package Title: PEcAn functions used for ecological forecasts and reanalysis Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Mike","Dietze",email=dietze@bu.edu,"cre"), +Authors@R: c(person("Mike","Dietze",email="dietze@bu.edu",role="cre"), person("David","LeBauer",role="aut"), person("Xiaohui", "Feng",role="ctb"), person("Dan"," Wang",role="ctb"), From a8f017c9e15a07101934d47dedb405de26a1a200 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 1 Aug 2020 15:20:48 -0400 Subject: [PATCH 1297/2289] trying to get maptools data not to throw a new build note! 
--- modules/data.land/R/find.land.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/modules/data.land/R/find.land.R b/modules/data.land/R/find.land.R
index b82d5cf4e85..e8b421c2e99 100644
--- a/modules/data.land/R/find.land.R
+++ b/modules/data.land/R/find.land.R
@@ -11,6 +11,7 @@
 ##' @export
 ##' @author David LeBauer
 find.land <- function(lat, lon, plot = FALSE) {
+  requireNamespace("maptools")
   data("wrld_simpl",package="maptools")
 
   ## Create a SpatialPoints object
From 9d9b9c9883258fb59e0e681aa3e365a3ec1aa4c9 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Sat, 1 Aug 2020 20:15:14 +0000
Subject: [PATCH 1298/2289] created endpoints to download input & output files
 for a run

---
 apps/api/R/get.file.R      | 30 ++++++++++
 apps/api/R/runs.R          | 96 ++++++++++++++++++++++++++++++++++++++
 apps/api/pecanapi-spec.yml | 70 ++++++++++++++++++++++++++-
 3 files changed, 194 insertions(+), 2 deletions(-)
 create mode 100644 apps/api/R/get.file.R

diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R
new file mode 100644
index 00000000000..01284588dfc
--- /dev/null
+++ b/apps/api/R/get.file.R
@@ -0,0 +1,30 @@
+library(dplyr)
+
+get.file <- function(filepath, userid) {
+  # Check if the workflow for run after obtaining absolute path is owned by the user or not
+  parent_dir <- dirname(filepath)
+  run_id <- substr(parent_dir, stringi::stri_locate_last(parent_dir, regex="/")[1] + 1, stringr::str_length(parent_dir))
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Run <- tbl(dbcon, "runs") %>%
+    filter(id == !!run_id)
+  Run <- tbl(dbcon, "ensembles") %>%
+    select(ensemble_id=id, workflow_id) %>%
+    full_join(Run, by="ensemble_id") %>%
+    filter(id == !!run_id)
+  user_id <- tbl(dbcon, "workflows") %>%
+    select(workflow_id=id, user_id) %>%
+    full_join(Run, by="workflow_id") %>%
+    filter(id == !!run_id) %>%
+    pull(user_id)
+
+  PEcAn.DB::db.close(dbcon)
+
+  if(! user_id == userid) {
+    return(list(status = "Error", message = "Access forbidden"))
+  }
+
+  # Read the data in binary form & return it
+  bin <- readBin(filepath,'raw', n = file.info(filepath)$size)
+  return(list(file_contents = bin))
+}
\ No newline at end of file
diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index af8619f1a37..68e5de34a0d 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -1,4 +1,5 @@
 library(dplyr)
+source("get.file.R")
 
 #' Get the list of runs (belonging to a particular workflow)
 #' @param workflow_id Workflow id (character)
@@ -130,6 +131,100 @@ getRunDetails <- function(run_id, res){
 
 #################################################################################################
 
+#' Get the input file specified by user for a run
+#' @param run_id Run id (character)
+#' @param filename Name of the input file (character)
+#' @return Input file specified by user for the run
+#' @author Tezan Sahu
+#* @serializer contentType list(type="application/octet-stream")
+#* @get /<run_id>/input/<filename>
+getRunInputFile <- function(req, run_id, filename, res){
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Run <- tbl(dbcon, "runs") %>%
+    filter(id == !!run_id)
+
+  workflow_id <- tbl(dbcon, "ensembles") %>%
+    select(ensemble_id=id, workflow_id) %>%
+    full_join(Run, by="ensemble_id") %>%
+    filter(id == !!run_id) %>%
+    pull(workflow_id)
+
+  user_id <- tbl(dbcon, "workflows") %>%
+    select(id, user_id) %>%
+    filter(id == !!workflow_id) %>%
+    pull(user_id)
+
+  # Obtain the absolute path of the requested file
+  inputpath <- normalizePath(
+    paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/run/", run_id, "/", filename)
+  )
+
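+  # e.g. with the default DATA_DIR and the sample IDs used in the docs below:
+  # /data/workflows/PEcAn_99000000001/run/99000000282/sipnet.in
+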
+  if(! file.exists(inputpath)){
+    res$status <- 404
+    return()
+  }
+
+  result <- get.file(inputpath, req$user$userid)
+  if(is.null(result$file_contents)){
+    if(result$status == "Error" && result$message == "Access forbidden") {
+      res$status <- 403
+      return()
+    }
+  }
+  return(result$file_contents)
+}
+
+#################################################################################################
+
+#' Get the output file specified by user for a run
+#' @param run_id Run id (character)
+#' @param filename Name of the output file (character)
+#' @return Output file specified by user for the run
+#' @author Tezan Sahu
+#* @serializer contentType list(type="application/octet-stream")
+#* @get /<run_id>/output/<filename>
+getRunOutputFile <- function(req, run_id, filename, res){
+
+  dbcon <- PEcAn.DB::betyConnect()
+
+  Run <- tbl(dbcon, "runs") %>%
+    filter(id == !!run_id)
+
+  workflow_id <- tbl(dbcon, "ensembles") %>%
+    select(ensemble_id=id, workflow_id) %>%
+    full_join(Run, by="ensemble_id") %>%
+    filter(id == !!run_id) %>%
+    pull(workflow_id)
+
+  user_id <- tbl(dbcon, "workflows") %>%
+    select(id, user_id) %>%
+    filter(id == !!workflow_id) %>%
+    pull(user_id)
+
+  # Obtain the absolute path of the requested file
+  outputpath <- normalizePath(
+    paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/out/", run_id, "/", filename)
+  )
+
+  if(! file.exists(outputpath)){
+    res$status <- 404
+    return()
+  }
+
+  result <- get.file(outputpath, req$user$userid)
+  if(is.null(result$file_contents)){
+    if(result$status == "Error" && result$message == "Access forbidden") {
+      res$status <- 403
+      return()
+    }
+  }
+  return(result$file_contents)
+}
+
+#################################################################################################
+
 #' Plot the results obtained from a run
 #' @param run_id Run id (character)
 #' @param year the year this data is for
@@ -172,6 +267,7 @@ plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600
   return(img_bin)
 }
 
+
 #################################################################################################
 
 #' Get the inputs of a run (if the files exist on the host)
diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
index 2daf324c596..72129098dae 100644
--- a/apps/api/pecanapi-spec.yml
+++ b/apps/api/pecanapi-spec.yml
@@ -560,7 +560,7 @@ paths:
         '403':
           description: Access forbidden
         '404':
-          description: Workflow with specified ID was not found
+          description: Run(s) not found
 
 
   /api/runs/{run_id}:
@@ -587,7 +587,73 @@ paths:
         '403':
           description: Access forbidden
         '404':
-          description: Workflow with specified ID was not found
+          description: Run with specified ID was not found
+
+  /api/runs/{run_id}/input/{filename}:
+    get:
+      tags:
+        - runs
+      summary: Download the desired input file for a specified PEcAn run
+      parameters:
+        - in: path
+          name: run_id
+          description: ID of the PEcAn run
+          required: true
+          schema:
+            type: string
+        - in: path
+          name: filename
+          description: Name of input file desired
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          description: Contents of the input file
+          content:
+            application/octet-stream:
+              schema:
+                type: string
+
+        '401':
+          description: Authentication required
+        '403':
+          description: Access forbidden
+        '404':
+          description: Input file not found on host
+
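+  # NOTE: the endpoint below mirrors /api/runs/{run_id}/input/{filename};
+  # the only difference is that files are served from the run's out/
+  # directory rather than its run/ (input) directory.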
From 4d59ab22fc90bd359c208b2bb9202aeb8fab3d5c Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Sun, 2 Aug 2020 05:47:10 +0000
Subject: [PATCH 1299/2289] bugfix in file download endpoints

---
 apps/api/R/get.file.R |  9 +++++++--
 apps/api/R/runs.R     | 40 +++++++++++++---------------------------
 2 files changed, 20 insertions(+), 29 deletions(-)

diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R
index 01284588dfc..03dfb5aa9c4 100644
--- a/apps/api/R/get.file.R
+++ b/apps/api/R/get.file.R
@@ -1,10 +1,15 @@
 library(dplyr)
 
 get.file <- function(filepath, userid) {
+  # Check if the file path is valid
+  if(! file.exists(filepath)){
+    return(list(status = "Error", message = "File not found"))
+  }
+
   # Check whether the workflow this run belongs to (identified from the file path) is owned by the user
-  parent_dir <- dirname(filepath)
+  parent_dir <- normalizePath(dirname(filepath))
+
   run_id <- substr(parent_dir, stringi::stri_locate_last(parent_dir, regex="/")[1] + 1, stringr::str_length(parent_dir))
-  
   dbcon <- PEcAn.DB::betyConnect()
 
   Run <- tbl(dbcon, "runs") %>%
diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index 68e5de34a0d..bdf3c66247c 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -151,27 +151,20 @@ getRunInputFile <- function(req, run_id, filename, res){
     filter(id == !!run_id) %>%
     pull(workflow_id)
   
-  user_id <- tbl(dbcon, "workflows") %>%
-    select(id, user_id) %>%
-    filter(id == !!workflow_id) %>%
-    pull(user_id)
-  
-  # Obtain the absolute path of the requested file
-  inputpath <- normalizePath(
-    paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/run/", run_id, "/", filename)
-  )
-  
-  if(! file.exists(inputpath)){
-    res$status <- 404
-    return()
-  }
+  PEcAn.DB::db.close(dbcon)
 
+  inputpath <- paste0( Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/run/", run_id, "/", filename)
+  
   result <- get.file(inputpath, req$user$userid)
   if(is.null(result$file_contents)){
     if(result$status == "Error" && result$message == "Access forbidden") {
       res$status <- 403
       return()
     }
+    if(result$status == "Error" && result$message == "File not found") {
+      res$status <- 404
+      return()
+    }
   }
   return(result$file_contents)
 }
@@ -198,20 +191,9 @@ getRunOutputFile <- function(req, run_id, filename, res){
     filter(id == !!run_id) %>%
     pull(workflow_id)
   
-  user_id <- tbl(dbcon, "workflows") %>%
-    select(id, user_id) %>%
-    filter(id == !!workflow_id) %>%
-    pull(user_id)
-  
-  # Obtain the absolute path of the requested file
-  outputpath <- normalizePath(
-    paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/out/", run_id, "/", filename)
-  )
+  PEcAn.DB::db.close(dbcon)
 
-  if(! file.exists(outputpath)){
-    res$status <- 404
-    return()
-  }
+  outputpath <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/out/", run_id, "/", filename)
 
   result <- get.file(outputpath, req$user$userid)
   if(is.null(result$file_contents)){
@@ -219,6 +201,10 @@ getRunOutputFile <- function(req, run_id, filename, res){
       res$status <- 403
       return()
     }
+    if(result$status == "Error" && result$message == "File not found") {
+      res$status <- 404
+      return()
+    }
   }
   return(result$file_contents)
 }
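After this bugfix, `get.file()` owns both failure modes and the route handlers only translate its status messages into HTTP codes. A sketch of that contract, with hypothetical inputs:

```R
# get.file() signals errors as a list instead of raising a condition
get.file("/nonexistent/file.nc", userid = 1)
#> list(status = "Error", message = "File not found")    # handler responds 404
get.file("/data/workflows/PEcAn_2/out/7/2002.nc", userid = 99)  # someone else's run
#> list(status = "Error", message = "Access forbidden")  # handler responds 403
```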
From b255c9ed3185593db42a08baa91de37cf1a0a796 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Sun, 2 Aug 2020 06:52:21 +0000
Subject: [PATCH 1300/2289] docs & testing for file download endpoints

---
 apps/api/tests/test.runs.R                    | 34 ++++++++++-
 .../07_remote_access/01_pecan_api.Rmd         | 56 +++++++++++++++++++
 2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/apps/api/tests/test.runs.R b/apps/api/tests/test.runs.R
index 87857680114..f88e8977f82 100644
--- a/apps/api/tests/test.runs.R
+++ b/apps/api/tests/test.runs.R
@@ -48,4 +48,36 @@ test_that("Calling /api/runs/{run_id}/graph/{year}/{yvar}/ with valid inputs ret
     httr::authenticate("carya", "illinois")
   )
   expect_equal(res$status, 404)
-})
\ No newline at end of file
+})
+
+test_that("Calling /api/runs/{run_id}/input/{filename} with valid inputs returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/runs/99000000282/input/sipnet.in",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/runs/{run_id}/input/{filename} with invalid inputs returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/runs/1000000000/input/randomfile",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
+
+test_that("Calling /api/runs/{run_id}/output/{filename} with valid inputs returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/runs/99000000282/output/2002.nc",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/runs/{run_id}/output/{filename} with invalid inputs returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/runs/1000000000/output/randomfile",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
index 6f28c4f98a6..c620570de64 100644
--- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -52,6 +52,8 @@ The currently implemented functionalities include:
   * __Runs:__
     * [`GET /api/runs`](#get-apiruns): Get the list of all the runs
     * [`GET /api/runs/{run_id}`](#get-apirunsrun_id): Fetch the details of a specified PEcAn run
+    * [`GET /api/runs/{run_id}/input/{filename}`](#get-apirunsrun_idinputfilename): Download the desired input file for a run
+    * [`GET /api/runs/{run_id}/output/{filename}`](#get-apirunsrun_idoutputfilename): Download the desired output file for a run
     * [`GET /api/runs/{run_id}/graph/{year}/{y_var}`](#get-apirunsrun_idgraphyeary_var): Plot the graph of desired output variables for a run
 
 _* indicates that the particular API is under development & may not be ready for use_
@@ -992,6 +994,60 @@ print(json.dumps(response.json(), indent=2))
 
 ### {-}
 
+### `GET /api/runs/{run_id}/input/{filename}` {.tabset .tabset-pills}
+
+#### R Snippet
+
+```R
+# Download the 'sipnet.in' input file for
the run with id = 99000000282 +res <- httr::GET( + "http://localhost:8000/api/runs/99000000282/input/sipnet.in", + httr::authenticate("carya", "illinois") + ) +writeBin(res$content, "test.sipnet.in") +``` + +#### Python Snippet + +```python +# Download the 'sipnet.in' input file for the run with id = 99000000282 +response = requests.get( + "http://localhost:8000/api/runs/99000000282/input/sipnet.in", + auth=HTTPBasicAuth('carya', 'illinois') + ) +with open("test.sipnet.in", "wb") as file: + file.write(response.content) +``` + +### {-} + +### `GET /api/runs/{run_id}/output/{filename}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Download the '2002.nc' output file for the run with id = 99000000282 +res <- httr::GET( + "http://localhost:8000/api/runs/99000000282/output/2002.nc", + httr::authenticate("carya", "illinois") + ) +writeBin(res$content, "test.2002.nc") +``` + +#### Python Snippet + +```python +# Download the '2002.nc' output file for the run with id = 99000000282 +response = requests.get( + "http://localhost:8000/api/runs/99000000282/output/2002.nc", + auth=HTTPBasicAuth('carya', 'illinois') + ) +with open("test.2002.nc", "wb") as file: + file.write(response.content) +``` + +### {-} + ### `GET /api/runs/{run_id}/graph/{year}/{y_var}` {.tabset .tabset-pills} #### R Snippet From 073dc796776ac4c4f47830e742e9fffa28376b73 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 2 Aug 2020 06:56:35 +0000 Subject: [PATCH 1301/2289] documentation fix --- .../03_topical_pages/07_remote_access/01_pecan_api.Rmd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index c620570de64..97b2e077976 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -1061,7 +1061,7 @@ res <- httr::GET( writeBin(res$content, "test.png") ``` ```{r, echo=FALSE, fig.align='center'} -knitr::include_graphics(rep("../../figures/run_output_plot.png")) +knitr::include_graphics(rep("figures/run_output_plot.png")) ``` #### Python Snippet @@ -1076,7 +1076,7 @@ with open("test.png", "wb") as file: file.write(response.content) ``` ```{r, echo=FALSE, fig.align='center'} -knitr::include_graphics(rep("../../figures/run_output_plot.png")) +knitr::include_graphics(rep("figures/run_output_plot.png")) ``` From fe12633eb1c6bf078d24fdde4ca0bba2215d283c Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 2 Aug 2020 19:08:30 +0530 Subject: [PATCH 1302/2289] make appeears2pecan follow output name convention --- modules/data.remote/inst/appeears2pecan.py | 39 +++++++++++++++++++--- 1 file changed, 35 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/inst/appeears2pecan.py b/modules/data.remote/inst/appeears2pecan.py index 0d1a410535a..49e72ccc7cd 100644 --- a/modules/data.remote/inst/appeears2pecan.py +++ b/modules/data.remote/inst/appeears2pecan.py @@ -21,9 +21,13 @@ from gee_utils import get_sitename from datetime import datetime from warnings import warn +import os.path +import time -def appeears2pecan(geofile, outdir, start, end, product, projection=None, credfile=None): +def appeears2pecan( + geofile, outdir, start, end, product, projection=None, credfile=None +): """ Downloads remote sensing data from AppEEARS @@ -69,7 +73,9 @@ def authenticate(): user = cred["username"] password = cred["password"] except IOError: - print("specified file does not exist, please make sure that you 
have specified the path correctly") + print( + "specified file does not exist, please make sure that you have specified the path correctly" + ) else: # if user does not want to store the credentials user = getpass.getpass(prompt="Enter NASA Earthdata Login Username: ") @@ -104,7 +110,9 @@ def authenticate(): "SPL4CMDL.004", "SPL4SMGP.004", ]: - warn("Since you have requested a SMAP product, all layers cannot be downloaded, selecting first 25 layers..") + warn( + "Since you have requested a SMAP product, all layers cannot be downloaded, selecting first 25 layers.." + ) # change this part to select your own SMAP layers prodLayer = prodLayer[0:25] @@ -137,6 +145,7 @@ def authenticate(): "coordinates": coordinates, }, } + outformat = "csv" elif (df.geometry.type == "Polygon").bool(): # query the projections @@ -160,6 +169,7 @@ def authenticate(): "geo": geo, }, } + outformat = "nc" else: # if the input geometry is not of Polygon or Point Type @@ -201,7 +211,7 @@ def authenticate(): files = {} for f in bundle["files"]: files[f["file_id"]] = f["file_name"] - # download and save the files + # download and save the file for f in files: dl = r.get("{}bundle/{}/{}".format(api, task_id, f), stream=True) filename = os.path.basename( @@ -211,3 +221,24 @@ def authenticate(): with open(filepath, "wb") as f: for data in dl.iter_content(chunk_size=8192): f.write(data) + if os.path.splitext(filename)[1][1:] == outformat: + break + + if projection is None: + projection = "NA" + timestamp = time.strftime("%y%m%d%H%M%S") + save_path = os.path.join( + outdir, + site_name + + "_appeears_" + + product + + "_NA_" + + projection + + "_NA_" + + timestamp + + "." + + outformat, + ) + os.rename(filepath, save_path) + + return os.path.abspath(save_path) From 8ed0fe75ebd94af326cf85612010914d760c037c Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sun, 2 Aug 2020 09:52:43 -0400 Subject: [PATCH 1303/2289] @infotroph gave me permission to update the warning --- modules/data.land/tests/Rcheck_reference.log | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.land/tests/Rcheck_reference.log b/modules/data.land/tests/Rcheck_reference.log index a68defe393c..6fa69f29f0a 100644 --- a/modules/data.land/tests/Rcheck_reference.log +++ b/modules/data.land/tests/Rcheck_reference.log @@ -379,7 +379,7 @@ Found the following calls to data() loading into the global environment: File ‘PEcAn.data.land/R/fia2ED.R’: data(pftmapping) File ‘PEcAn.data.land/R/find.land.R’: - data(wrld_simpl) + data("wrld_simpl", package = "maptools") See section ‘Good practice’ in ‘?data’. * checking Rd files ... 
WARNING checkRd: (7) gSSURGO.Query.Rd:29-35: Tag \dontrun not recognized From 985ff5152a7837c0b50efcae7a0f27f62d8bbe79 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 2 Aug 2020 19:27:47 +0530 Subject: [PATCH 1304/2289] make s2 follow naming convention --- modules/data.remote/inst/gee2pecan_s2.py | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/gee2pecan_s2.py index e4ad2f8142b..a42d8ce67be 100644 --- a/modules/data.remote/inst/gee2pecan_s2.py +++ b/modules/data.remote/inst/gee2pecan_s2.py @@ -801,10 +801,19 @@ def gee2pecan_s2(geofile, outdir, start, end, scale, qi_threshold): # if specified output directory does not exist, create it if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) - timestamp = time.strftime("%y%m%d%H%M%S") - save_path = os.path.join(outdir, "gee_" + "s2_" + area.name + "_"+ timestamp + ".nc") + save_path = os.path.join( + outdir, + area.name + + "_gee_s2_" + + str(scale) + + "_NA_" + + str(qi_threshold) + + "_" + + timestamp + + ".nc", + ) area.data.to_netcdf(save_path) - + return os.path.abspath(save_path) From 385889075cad4055ce01832ef386bffa0acca0e6 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 2 Aug 2020 20:03:28 +0530 Subject: [PATCH 1305/2289] make l8 follow naming convention --- modules/data.remote/inst/gee2pecan_l8.py | 32 +++++++++++++++++++----- 1 file changed, 26 insertions(+), 6 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_l8.py b/modules/data.remote/inst/gee2pecan_l8.py index c221fd04790..5ab1990daef 100644 --- a/modules/data.remote/inst/gee2pecan_l8.py +++ b/modules/data.remote/inst/gee2pecan_l8.py @@ -18,6 +18,7 @@ import xarray as xr import numpy as np import re +import time ee.Initialize() @@ -42,8 +43,8 @@ def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1): Returns ------- - Nothing: - output netCDF is saved in the specified directory. + Absolute path to the output file + output netCDF is saved in the specified directory. 
""" @@ -118,7 +119,7 @@ def getdate(filename): datadf = pd.DataFrame(l_data_dict) # converting date to the numpy date format datadf["time"] = datadf["time"].apply(lambda x: np.datetime64(x)) - + datadf.rename(columns={"time": "sensing_time"}, inplace=True) # if ROI is a polygon elif (df.geometry.type == "Polygon").bool(): @@ -155,15 +156,19 @@ def eachf2dict(f): site_name = get_sitename(geofile) AOI = get_sitecoord(geofile) + coords = { + "time": datadf["sensing_time"].values, + } # convert the dataframe to an xarray dataset tosave = xr.Dataset( datadf, + coords=coords, attrs={ "site_name": site_name, - "start_date": start, - "end_date": end, "AOI": AOI, "sensor": "LANDSAT/LC08/C01/T1_SR", + "scale": scale, + "qc parameter": qc, }, ) @@ -172,4 +177,19 @@ def eachf2dict(f): os.makedirs(outdir, exist_ok=True) # convert the dataset to a netCDF file and save it - tosave.to_netcdf(os.path.join(outdir, site_name + "_bands.nc")) + timestamp = time.strftime("%y%m%d%H%M%S") + filepath = os.path.join( + outdir, + site_name + + "_gee_l8_" + + str(scale) + + "_NA_" + + str(qc) + + "_" + + timestamp + + ".nc", + ) + + tosave.to_netcdf(filepath) + + return os.path.abspath(filepath) From 50ab11f43b2080d89ba60a9887cd190049c13010 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 2 Aug 2020 20:15:16 +0530 Subject: [PATCH 1306/2289] make gee_smap follow naming convention --- modules/data.remote/inst/gee2pecan_smap.py | 35 +++++++++++++--------- 1 file changed, 21 insertions(+), 14 deletions(-) diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/gee2pecan_smap.py index 804015e7f07..6d9cdbb72a7 100644 --- a/modules/data.remote/inst/gee2pecan_smap.py +++ b/modules/data.remote/inst/gee2pecan_smap.py @@ -13,6 +13,7 @@ import os import xarray as xr import datetime +import time ee.Initialize() @@ -33,8 +34,8 @@ def gee2pecan_smap(geofile, outdir, start, end): Returns ------- - Nothing: - output netCDF is saved in the specified directory + Absolute path to the output file + output netCDF is saved in the specified directory """ geo = create_geo(geofile) @@ -116,10 +117,12 @@ def fc2dataframe(fc): ssma_datalist.append(fc["features"][0]["properties"]["ssma"][i][0]) susma_datalist.append(fc["features"][0]["properties"]["susma"][i][0]) date_list.append( - str(datetime.datetime.strptime( - (fc["features"][0]["properties"]["dateinfo"][i]).split(".")[0], - "%Y-%m-%d", - )) + str( + datetime.datetime.strptime( + (fc["features"][0]["properties"]["dateinfo"][i]).split(".")[0], + "%Y-%m-%d", + ) + ) ) fc_dict = { "date": date_list, @@ -140,21 +143,25 @@ def fc2dataframe(fc): site_name = get_sitename(geofile) AOI = get_sitecoord(geofile) + coords = { + "time": datadf["date"].values, + } # convert the dataframe to an xarray dataset, used for converting it to a netCDF file tosave = xr.Dataset( - datadf, - attrs={ - "site_name": site_name, - "start_date": start, - "end_date": end, - "AOI": AOI, - }, + datadf, coords=coords, attrs={"site_name": site_name, "AOI": AOI,}, ) # # if specified output path does not exist create it if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) + timestamp = time.strftime("%y%m%d%H%M%S") + + filepath = os.path.join( + outdir, site_name + "_gee_smap_" + "NA_NA_NA_" + timestamp + ".nc" + ) + # convert to netCDF and save the file - tosave.to_netcdf(os.path.join(outdir, site_name + "_smap.nc")) + tosave.to_netcdf(filepath) + return os.path.abspath(filepath) From b1adf166fc20e0ccb112cf149176d2fde46654e6 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 2 
Aug 2020 19:35:18 +0200 Subject: [PATCH 1307/2289] pick up any dependency updates --- .github/workflows/ci.yml | 2 ++ .github/workflows/styler-actions.yml | 2 ++ 2 files changed, 4 insertions(+) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 7965ee45819..20aa4013e2d 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -43,6 +43,8 @@ jobs: with: key: install-${{ github.sha }} path: .install + - name: update dependency lists + run: Rscript scripts/generate_dependencies.R - name: build run: make -j1 env: diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index ff3869db622..d14246432c3 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -60,6 +60,8 @@ jobs: uses: actions/download-artifact@v1 with: name: artifacts + - name: update dependency lists + run: Rscript scripts/generate_dependencies.R - name : make shell: bash run: | From 6d0125fba31006641dcc0fa95d733a1ec4804c3d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 2 Aug 2020 23:24:22 +0200 Subject: [PATCH 1308/2289] update dependencies missed in #2670 --- Makefile.depends | 2 +- docker/depends/pecan.depends | 5 ++++- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index cda562d943d..3d243afbe48 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -14,7 +14,7 @@ $(call depends,modules/assim.sequential): | .install/base/logger .install/base/r $(call depends,modules/benchmark): | .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils -$(call depends,modules/data.land): | .install/base/db .install/base/utils .install/base/logger .install/base/remote .install/base/settings +$(call depends,modules/data.land): | .install/base/db .install/base/logger .install/base/remote .install/base/utils .install/modules/benchmark .install/modules/data.atmosphere .install/base/settings .install/base/visualization $(call depends,modules/data.remote): | .install/base/db .install/base/utils .install/base/logger .install/base/remote $(call depends,modules/emulator): | .install/base/logger $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .install/base/logger .install/base/settings diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index 57b50c4d3b3..c6f63b543db 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -31,10 +31,11 @@ install2.r -e -s -l "${RLIB}" -n -1\ DBI \ dbplyr \ doParallel \ + dplR \ dplyr \ ellipse \ - fields \ foreach \ + fs \ furrr \ geonames \ getPass \ @@ -71,6 +72,7 @@ install2.r -e -s -l "${RLIB}" -n -1\ MODISTools \ mvtnorm \ ncdf4 \ + neonUtilities \ nimble \ nneo \ parallel \ @@ -112,6 +114,7 @@ install2.r -e -s -l "${RLIB}" -n -1\ tidyr \ tidyverse \ tools \ + traits \ TruncatedNormal \ truncnorm \ udunits2 \ From 8ceb9be2bdd0752e74b744924ebff9ab31410aab Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Sun, 2 Aug 2020 21:38:41 +0000 Subject: [PATCH 1309/2289] added functionality for AUTH_REQ in API --- apps/api/R/auth.R | 5 +++-- apps/api/R/general.R | 1 + apps/api/R/get.file.R | 17 +++++++++++------ apps/api/R/runs.R | 36 ++++++++++++++++++++++++++++++++++-- apps/api/R/workflows.R | 2 +- 
 apps/api/pecanapi-spec.yml |  2 +-
 6 files changed, 51 insertions(+), 12 deletions(-)

diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R
index 2b398360a0b..337e7f4aa33 100644
--- a/apps/api/R/auth.R
+++ b/apps/api/R/auth.R
@@ -53,10 +53,11 @@ validate_crypt_pass <- function(username, crypt_pass) {
 authenticate_user <- function(req, res) {
   # API endpoints that do not require authentication
   if (
+    Sys.getenv("AUTH_REQ") == FALSE ||
     grepl("swagger", req$PATH_INFO, ignore.case = TRUE) ||
     grepl("openapi.json", req$PATH_INFO, fixed = TRUE) ||
-    grepl("ping", req$PATH_INFO, ignore.case = TRUE) ||
-    grepl("status", req$PATH_INFO, ignore.case = TRUE))
+    grepl("/api/ping", req$PATH_INFO, ignore.case = TRUE) ||
+    grepl("/api/status", req$PATH_INFO, ignore.case = TRUE))
   {
     req$user$userid <- NA
     req$user$username <- ""
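The new `AUTH_REQ` switch makes the entire authentication layer opt-out. A minimal sketch of the intended behaviour, assuming the variable is exported in the environment of the plumber process before it starts:

```R
# with AUTH_REQ=FALSE set for the API process, every route is treated as public,
# so an unauthenticated request succeeds (no httr::authenticate() needed)
res <- httr::GET("http://localhost:8000/api/runs/99000000282")
res$status_code
#> 200
```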
diff --git a/apps/api/R/general.R b/apps/api/R/general.R
index 5b8fcc7fd67..dd51a8bd798 100644
--- a/apps/api/R/general.R
+++ b/apps/api/R/general.R
@@ -20,6 +20,7 @@ status <- function() {
   dbcon <- PEcAn.DB::betyConnect()
 
   res <- list(host_details = PEcAn.DB::dbHostInfo(dbcon))
+  res$host_details$authentication_required = get_env_var("AUTH_REQ")
   
   res$pecan_details <- list(
     version = get_env_var("PECAN_VERSION"), 
diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R
index 03dfb5aa9c4..b227c19b19b 100644
--- a/apps/api/R/get.file.R
+++ b/apps/api/R/get.file.R
@@ -18,15 +18,20 @@ get.file <- function(filepath, userid) {
     select(ensemble_id=id, workflow_id) %>%
     full_join(Run, by="ensemble_id") %>%
     filter(id == !!run_id)
-  user_id <- tbl(dbcon, "workflows") %>%
-    select(workflow_id=id, user_id) %>% full_join(Run, by="workflow_id") %>%
-    filter(id == !!run_id) %>%
-    pull(user_id)
+  
+  if(Sys.getenv("AUTH_REQ") == TRUE) {
+    user_id <- tbl(dbcon, "workflows") %>%
+      select(workflow_id=id, user_id) %>% full_join(Run, by="workflow_id") %>%
+      filter(id == !!run_id) %>%
+      pull(user_id)
+  }
   
   PEcAn.DB::db.close(dbcon)
   
-  if(! user_id == userid) {
-    return(list(status = "Error", message = "Access forbidden"))
+  if(Sys.getenv("AUTH_REQ") == TRUE) {
+    if(! user_id == userid) {
+      return(list(status = "Error", message = "Access forbidden"))
+    }
   }
   
   # Read the data in binary form & return it
diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index bdf3c66247c..29b4d5a351d 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -86,7 +86,7 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){
 #' @return Details of requested run
 #' @author Tezan Sahu
 #* @get /<run_id>
-getRunDetails <- function(run_id, res){
+getRunDetails <- function(req, run_id, res){
   
   dbcon <- PEcAn.DB::betyConnect()
   
@@ -100,6 +100,13 @@ getRunDetails <- function(run_id, res){
 
   qry_res <- Runs %>% collect()
   
+  if(Sys.getenv("AUTH_REQ") == TRUE){
+    user_id <- tbl(dbcon, "workflows") %>%
+      select(workflow_id=id, user_id) %>% full_join(Runs, by="workflow_id") %>%
+      filter(id == !!run_id) %>%
+      pull(user_id)
+  }
+
   PEcAn.DB::db.close(dbcon)
   
   if (nrow(qry_res) == 0) {
@@ -107,6 +114,15 @@ getRunDetails <- function(run_id, res){
     return(list(error="Run with specified ID was not found"))
   }
   else {
+    
+    if(Sys.getenv("AUTH_REQ") == TRUE) {
+      # If user id of requested run does not match the caller of the API, return 403 Access forbidden
+      if(is.na(user_id) || user_id != req$user$userid){
+        res$status <- 403
+        return(list(error="Access forbidden"))
+      }
+    }
+    
     # Convert the response from tibble to list
     response <- list()
     for(colname in colnames(qry_res)){
@@ -223,7 +239,7 @@ getRunOutputFile <- function(req, run_id, filename, res){
 
 #* @get /<run_id>/graph/<year>/<y_var>
 #* @serializer contentType list(type='image/png')
-plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600, res){
+plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, height=600, res){
   
   # Get workflow_id for the run
   dbcon <- PEcAn.DB::betyConnect()
@@ -236,6 +252,14 @@ plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600
     filter(id == !!run_id) %>%
     pull(workflow_id)
   
+  if(Sys.getenv("AUTH_REQ") == TRUE){
+    user_id <- tbl(dbcon, "workflows") %>%
+      select(id, user_id) %>%
+      filter(id == !!workflow_id) %>%
+      pull(user_id)
+  }
+  
+  
   PEcAn.DB::db.close(dbcon)
   
   # Check if the data file exists on the host
@@ -245,6 +269,14 @@ plotResults <- function(run_id, year, y_var, x_var="time", width=800, height=600
     return()
   }
   
+  if(Sys.getenv("AUTH_REQ") == TRUE) {
+    # If user id of requested run does not match the caller of the API, return 403 Access forbidden
+    if(is.na(user_id) || user_id != req$user$userid){
+      res$status <- 403
+      return(list(error="Access forbidden"))
+    }
+  }
+  
   # Plot & return
   filename <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/temp", stringi::stri_rand_strings(1, 10), ".png")
   PEcAn.visualization::plot_netcdf(datafile, y_var, x_var, as.integer(width), as.integer(height), year=year, filename=filename)
diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R
index 83bec757326..b3f37ab1818 100644
--- a/apps/api/R/workflows.R
+++ b/apps/api/R/workflows.R
@@ -158,7 +158,7 @@ getWorkflowDetails <- function(id, req, res){
 #' @return Details of requested workflow
 #' @author Tezan Sahu
 #* @get /<id>/status
-getWorkflowDetails <- function(req, id, res){
+getWorkflowStatus <- function(req, id, res){
   dbcon <- PEcAn.DB::betyConnect()
   
   Workflow <- tbl(dbcon, "workflows") %>%
diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
index 72129098dae..9f44d40cd15 100644
--- a/apps/api/pecanapi-spec.yml
+++ b/apps/api/pecanapi-spec.yml
@@ -3,7 +3,7 @@ servers:
   - description: PEcAn API Server
     url: https://pecan-tezan.ncsa.illinois.edu/
   - description: PEcAn Test Server
-    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/446c2044
+    url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/3df13f71
   - description: Localhost
     url: http://127.0.0.1:8000

From 43fe1d7b5096ae722869555208d41d0d2c41a4c4 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 12:14:01 +0530
Subject: [PATCH 1310/2289] add merge_files documentation

---
 modules/data.remote/inst/merge_files.py | 65 ++++++++++++++++++-------
 1 file changed, 47 insertions(+), 18 deletions(-)

diff --git a/modules/data.remote/inst/merge_files.py b/modules/data.remote/inst/merge_files.py
index 9a70caac67f..a7ffbe2c85b 100644
--- a/modules/data.remote/inst/merge_files.py
+++ b/modules/data.remote/inst/merge_files.py
@@ -1,48 +1,77 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Used to merge netCDF and CSV files
+Author: Ayush Prasad
+"""
+
 import xarray
 import os
 import time
 import pandas as pd
 
+
 def nc_merge(old, new, outdir):
-
-
-    head, tail = os.path.split(new)
-
+    """
+    Merge netCDF files.
+    Order in which the files are specified in the function does not matter.
+    Parameters
+    ----------
+    old (str) -- path to the first netCDF file
+    new (str) -- path to the second netCDF file
+    outdir (str) -- path where the merged file has to be stored
+    Returns
+    -------
+    Absolute path to the merged file
+    output netCDF file is saved in the specified directory.
+    """
+    # extract the name from the new (second) netCDF file and attach timestamp to it, this will be the name of the output merged file
+    head, tail = os.path.split(new)
     orig_nameof_newfile = new
     timestamp = time.strftime("%y%m%d%H%M%S")
     changed_new = os.path.join(outdir, tail + 'temp' + timestamp + '.nc')
-
+    # rename the new file to prevent it from being overwritten
     os.rename(orig_nameof_newfile, changed_new)
-
     ds = xarray.open_mfdataset([old, changed_new], combine='by_coords')
-
-
     ds.to_netcdf(os.path.join(outdir, tail))
-
+    # delete the old and temporary file
+    os.remove(changed_new)
+    os.remove(old)
     return os.path.abspath(os.path.join(outdir, tail))
 
 
 def csv_merge(old, new, outdir):
+    """
+    Merge csv files.
+    Order in which the files are specified in the function does not matter.
+
+    Parameters
+    ----------
+    old (str) -- path to the first csv file
+    new (str) -- path to the second csv file
+    outdir (str) -- path where the merged file has to be stored
+    Returns
+    -------
+    Absolute path to the merged file
+    output csv file is saved in the specified directory.
+    """
+
+    # extract the name from the new (second) csv file and attach timestamp to it, this will be the name of the output merged file
     head, tail = os.path.split(new)
-
     orig_nameof_newfile = new
     timestamp = time.strftime("%y%m%d%H%M%S")
     changed_new = os.path.join(outdir, tail + 'temp' + timestamp + '.csv')
-
+    # rename the new file to prevent it from being overwritten
     os.rename(orig_nameof_newfile, changed_new)
-
     df_old = pd.read_csv(old)
     df_changed_new = pd.read_csv(changed_new)
-
     merged_df = pd.concat([df_old, df_changed_new])
-
     merged_df = merged_df.sort_values(by='Date')
-
     merged_df.to_csv(os.path.join(outdir, tail), index=False)
-
+    # delete the old and temporary file
+    os.remove(changed_new)
+    os.remove(old)
     return os.path.abspath(os.path.join(outdir, tail))
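For orientation, these helpers are called from R via reticulate elsewhere in this series. A sketch of a direct invocation, with placeholder file names; for `nc_merge()` the two files must share coordinates, since the merge uses xarray's `by_coords` combine:

```R
# source the module so nc_merge()/csv_merge() become callable R functions
reticulate::source_python("modules/data.remote/inst/merge_files.py")
merged_path <- nc_merge("old_s2.nc", "new_s2.nc", "outdir")  # hypothetical paths
```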
From e431966f3b10c74ee3ad12024735a00a2f5e534c Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 12:14:39 +0530
Subject: [PATCH 1311/2289] update requirements.txt

---
 modules/data.remote/inst/requirements.txt | 91 +++++++++++++----------
 1 file changed, 51 insertions(+), 40 deletions(-)

diff --git a/modules/data.remote/inst/requirements.txt b/modules/data.remote/inst/requirements.txt
index bfd099d95d7..0d793eb7fa2 100644
--- a/modules/data.remote/inst/requirements.txt
+++ b/modules/data.remote/inst/requirements.txt
@@ -1,40 +1,51 @@
-attrs==19.3.0
-cachetools==4.1.0
-certifi==2020.4.5.2
-cftime==1.1.3
-chardet==3.0.4
-click==7.1.2
-click-plugins==1.1.1
-cligj==0.5.0
-earthengine-api==0.1.224
-Fiona==1.8.13.post1
-future==0.18.2
-geopandas==0.7.0
-google-api-core==1.20.0
-google-api-python-client==1.9.2
-google-auth==1.16.1
-google-auth-httplib2==0.0.3
-google-cloud-core==1.3.0
-google-cloud-storage==1.28.1
-google-resumable-media==0.5.1
-googleapis-common-protos==1.52.0
-httplib2==0.18.1
-httplib2shim==0.0.3
-idna==2.9
-munch==2.5.0
-netCDF4==1.5.3
-numpy==1.18.5
-pandas==1.0.4
-protobuf==3.12.2
-pyasn1==0.4.8
-pyasn1-modules==0.2.8
-pyproj==2.6.1.post1
-python-dateutil==2.8.1
-pytz==2020.1
-requests==2.23.0
-rsa==4.0
-Shapely==1.7.0
-six==1.15.0
-uritemplate==3.0.1
-urllib3==1.25.9
-xarray==0.15.1
+attrs>=19.3.0
+cachetools>=4.1.1
+certifi>=2020.6.20
+cffi>=1.14.1
+chardet>=3.0.4
+click>=7.1.2
+click-plugins>=1.1.1
+cligj>=0.5.0
+cryptography>=1.7.1
+dask>=2.6.0
+earthengine-api>=0.1.229
+Fiona>=1.8.13.post1
+future>=0.18.2
+geopandas>=0.8.1
+google-api-core>=1.22.0
+google-api-python-client>=1.10.0
+google-auth>=1.20.0
+google-auth-httplib2>=0.0.4
+google-cloud-core>=1.3.0
+google-cloud-storage>=1.30.0
+google-crc32c>=0.1.0
+google-resumable-media>=0.7.0
+googleapis-common-protos>=1.52.0
+httplib2>=0.18.1
+httplib2shim>=0.0.3
+idna>=2.10
+keyring>=10.1
+keyrings.alt>=1.3
+munch>=2.5.0
+numpy>=1.18.5
+pandas>=0.25.3
+protobuf>=3.12.4
+pyasn1>=0.4.8
+pyasn1-modules>=0.2.8
+pycparser>=2.20
+pycrypto>=2.6.1
+pygobject>=3.22.0
+pyproj>=2.6.1.post1
+python-dateutil>=2.8.1
+pytz>=2020.1
+pyxdg>=0.25
+requests>=2.24.0
+rsa>=4.6
+scipy>=1.4.1
+SecretStorage>=2.3.1
+Shapely>=1.7.0
+six>=1.10.0
+toolz>=0.10.0
+uritemplate>=3.0.1
+urllib3>=1.25.10
+xarray>=0.13.0
\ No newline at end of file

From 860780f60a5e398f67b0a3efd31dbd37d97d01d1 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 12:16:32 +0530
Subject: [PATCH 1312/2289] update get_remote_data

---
 modules/data.remote/inst/get_remote_data.py | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/get_remote_data.py
index a797ee7c5f2..87a7d212055 100644
--- a/modules/data.remote/inst/get_remote_data.py +++ b/modules/data.remote/inst/get_remote_data.py @@ -8,10 +8,11 @@ Author(s): Ayush Prasad, Istem Fer """ -from merge_files import nc_merge +from merge_files import nc_merge, csv_merge from importlib import import_module from appeears2pecan import appeears2pecan import os +import os.path def get_remote_data( @@ -72,11 +73,16 @@ def get_remote_data( else: get_datareturn_path = func(geofile, outdir, start, end) - # if source == "appeears": - # get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) + if source == "appeears": + get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) if raw_merge == True and raw_merge != "replace": - get_datareturn_path = nc_merge(existing_raw_file_path, get_datareturn_path, outdir) + # if output file is of csv type use csv_merge, example AppEEARS point AOI type + if os.path.splitext(existing_raw_file_path)[1][1:] == "csv": + get_datareturn_path = csv_merge(existing_raw_file_path, get_datareturn_path, outdir) + # else it must be of netCDF type, use nc_merge + else: + get_datareturn_path = nc_merge(existing_raw_file_path, get_datareturn_path, outdir) return get_datareturn_path From 6aeff373acfd49b0f1ecd92fae3500789a198535 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 12:16:57 +0530 Subject: [PATCH 1313/2289] update process_remote_data --- modules/data.remote/inst/process_remote_data.py | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.remote/inst/process_remote_data.py b/modules/data.remote/inst/process_remote_data.py index cef6320baae..1c8f9ba183a 100644 --- a/modules/data.remote/inst/process_remote_data.py +++ b/modules/data.remote/inst/process_remote_data.py @@ -48,5 +48,4 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori process_datareturn_path = nc_merge(existing_pro_file_path, process_datareturn_path, outdir) except: print(existing_pro_file_path) - print(process_datareturn_path) return process_datareturn_path From 8b7e128ad1f101805add1f1ee08845a5eb722fda Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 12:18:27 +0530 Subject: [PATCH 1314/2289] more changes to call_remote_process --- modules/data.remote/R/call_remote_process.R | 336 +++++++++++++------- 1 file changed, 215 insertions(+), 121 deletions(-) diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R index 78652970490..82b1598a542 100644 --- a/modules/data.remote/R/call_remote_process.R +++ b/modules/data.remote/R/call_remote_process.R @@ -1,8 +1,26 @@ -call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, geofile, outdir, start, end, source, collection, scale=NULL, projection=NULL, qc=NULL, algorithm=NULL, credfile=NULL, pro_mimetype=NULL, pro_formatname=NULL, out_get_data=NULL, out_process_data=NULL){ - - reticulate::import_from_path("remote_process", "/home/carya/pecan/modules/data.remote/inst") - reticulate::source_python('/home/carya/pecan/modules/data.remote/inst/remote_process.py') +##' call the Python - remote_process from PEcAn and store the output in BETY +##' +##' @name call_remote_process +##' @title call_remote_process +##' @export +##' @param settings PEcAn settings list containing remotedata tags: siteid, sitename, raw_mimetype, raw_formatname, geofile, outdir, start, end, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, 
out_process_data +##' @examples +##' \dontrun{ +##' call_remote_process(settings) +##' } +##' @author Ayush Prasad +##' + +call_remote_process <- function(settings){ + # information about the date variables used in call_remote_process - + # req_start, req_end : start, end dates requested by the user, the user does not be aware about the status of the requested file in the DB + # start, end : effective start, end dates created after checking the DB status. These dates are sent to remote_process for downloading and processing data + # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB + # the "pro" version of these variables have the same meaning and are used to refer to the processed file + + reticulate::import_from_path("remote_process", file.path("..", "inst")) + reticulate::source_python(file.path("..", "inst", "remote_process.py")) input_file <- NULL stage_get_data <- NULL @@ -11,9 +29,41 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, pro_merge <- NULL existing_raw_file_path <- NULL existing_pro_file_path <- NULL - projection <- NULL - flag <- 0 + # extract the variables from the settings list + siteid <- settings$remotedata$siteid + sitename <- settings$remotedata$sitename + raw_mimetype <- settings$remotedata$raw_mimetype + raw_formatname <- settings$remotedata$raw_formatname + geofile <- settings$remotedata$geofile + outdir <- settings$remotedata$outdir + start <- settings$remotedata$start + end <- settings$remotedata$end + source <- settings$remotedata$source + collection <- settings$remotedata$collection + scale <- settings$remotedata$scale + if(!is.null(scale)){ + scale <- as.double(settings$remotedata$scale) + scale <- format(scale, nsmall = 1) + } + projection <- settings$remotedata$projection + qc <- settings$remotedata$qc + if(!is.null(qc)){ + qc <- as.double(settings$remotedata$qc) + qc <- format(qc, nsmall = 1) + } + algorithm <- settings$remotedata$algorithm + credfile <- settings$remotedata$credfile + pro_mimetype <- settings$remotedata$pro_mimetype + pro_formatname <- settings$remotedata$pro_formatname + out_get_data <- settings$remotedata$out_get_data + out_process_data <- settings$remotedata$out_process_data + + + dbcon <- db.open(settings$database$bety) + flag <- 0 + + # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names collection_lut <- data.frame(stringsAsFactors=FALSE, original_name = c("LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR", "NASA_USDA/HSL/SMAP_soil_moisture"), pecan_code = c("l8", "s2", "smap") @@ -21,47 +71,42 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, getpecancode <- collection_lut$pecan_code names(getpecancode) <- collection_lut$original_name + if(source == "gee"){ collection = unname(getpecancode[collection]) - print(collection) + } req_start <- start req_end <- end - raw_file_name = paste0(source, "_", collection, "_", sitename) - - existing_data <- PEcAn.DB::db.query(paste0("select * from inputs where site_id=", siteid), dbcon) + raw_file_name = construct_raw_filename(sitename, source, collection, scale, projection, qc) + # check if any data is already present in the inputs table + existing_data <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) if(nrow(existing_data) >= 1){ + # if processed data is requested, example lai if(!is.null(out_process_data)){ - # construct pro file name - pro_file_name = paste0(sitename, "_", out_process_data, "_", 
algorithm) - print("print pro file name") - print(pro_file_name) + # construct processed file name + pro_file_name = paste0(sitename, "_", out_process_data, "_", algorithm) - # check if pro file exists - if(nrow(pro_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", pro_file_name), dbcon)) == 1){ - print("were you here?") - datalist <- set_date_stage(pro_check, req_start, req_end, stage_process_data) + # check if processed file exists + if(nrow(pro_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", pro_file_name), dbcon)) == 1){ + datalist <- set_stage(pro_check, req_start, req_end, stage_process_data) pro_start <- as.character(datalist[[1]]) pro_end <- as.character(datalist[[2]]) write_pro_start <- datalist[[5]] write_pro_end <- datalist[[6]] - if(pro_start == "dont write" || pro_end == "dont write"){ - # break condition - }else{ + if(pro_start != "dont write" || pro_end != "dont write"){ stage_process_data <- datalist[[3]] pro_merge <- datalist[[4]] if(pro_merge == TRUE){ existing_pro_file_path <- pro_check$file_path - }else{ - existing_pro_file_path <- NULL } if(stage_process_data == TRUE){ # check about the status of raw file - raw_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", raw_file_name), dbcon) - raw_datalist <- set_date_stage(raw_check, pro_start, pro_end, stage_get_data) + raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon) + raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) start <- as.character(raw_datalist[[1]]) end <- as.character(raw_datalist[[2]]) write_raw_start <- raw_datalist[[5]] @@ -69,23 +114,26 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, stage_get_data <- raw_datalist[[3]] raw_merge <- raw_datalist[[4]] if(stage_get_data == FALSE){ - input_file = raw_check$file_path + input_file <- raw_check$file_path } + flag <- 4 if(raw_merge == TRUE){ existing_raw_file_path <- raw_check$file_path - }else{ - existing_raw_file_path <- NULL } - print("here as well?") + if(pro_merge == TRUE && stage_get_data == FALSE){ + flag <- 5 + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + existing_pro_file_path <- NULL + pro_merge <- FALSE + } } - } - }else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", raw_file_name), dbcon)) ==1){ - # atleast check if raw data exists - print("printing raw file name") - print(raw_file_name) - print("whya are you here?") - datalist <- set_date_stage(raw_check, req_start, req_end, stage_get_data) + } + else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, 
inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) ==1){ + # if the processed file does not exist in the DB check if the raw file required for creating it is present + PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") + datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) start <- as.character(datalist[[1]]) end <- as.character(datalist[[2]]) write_raw_start <- datalist[[5]] @@ -93,19 +141,23 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, write_pro_start <- req_start write_pro_end <- req_end stage_get_data <- datalist[[3]] + if(stage_get_data == FALSE){ + input_file <- raw_check$file_path + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + flag <- 2 + } raw_merge <- datalist[[4]] - print("showing status of raw merge") - print(raw_merge) stage_process_data <- TRUE pro_merge <- FALSE - if(raw_merge == TRUE){ + if(raw_merge == TRUE || raw_merge == "replace"){ existing_raw_file_path = raw_check$file_path + flag <- 3 }else{ existing_raw_file_path = NULL } }else{ - print("you shouldnt be here !!!!!!") - # if no pro or raw of requested type exists + # if no processed or raw file of requested type exists start <- req_start end <- req_end write_raw_start <- req_start @@ -120,10 +172,9 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, existing_pro_file_path = NULL flag <- 1 } - }else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("select inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path from inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%'", raw_file_name), dbcon)) == 1){ + }else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) == 1){ # if only raw data is requested - print(raw_check) - datalist <- set_date_stage(raw_check, req_start, req_end, stage_get_data) + datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) start <- as.character(datalist[[1]]) end <- as.character(datalist[[2]]) stage_get_data <- datalist[[3]] @@ -138,7 +189,8 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, } existing_pro_file_path <- NULL }else{ - # nothing of requested type exists + # no data of requested type exists + PEcAn.logger::logger.info("nothing of requested type exists") flag <- 1 start <- req_start end <- req_end @@ -156,25 +208,21 @@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, pro_merge <- FALSE process_file_name <- NULL existing_pro_file_path <- NULL + flag <- 1 } } - - - }else{ # db is completely empty for the given siteid + PEcAn.logger::logger.info("DB is completely empt for this site") flag <- 1 - print("i am here") start <- req_start end <- req_end - if(!is.null(out_get_data)){ - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path = NULL - } + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path = NULL if(!is.null(out_process_data)){ stage_process_data <- TRUE write_pro_start <- req_start @@ -184,89 +232,137 
@@ call_remote_process <- function(siteid, sitename, raw_mimetype, raw_formatname, } } - + # call remote_process + output = remote_process(geofile=geofile, outdir=outdir, start=start, end=end, source=source, collection=collection, scale=as.double(scale), projection=projection, qc=as.double(qc), algorithm=algorithm, input_file=input_file, credfile=credfile, out_get_data=out_get_data, out_process_data=out_process_data, stage_get_data=stage_get_data, stage_process_data=stage_process_data, raw_merge=raw_merge, pro_merge=pro_merge, existing_raw_file_path=existing_raw_file_path, existing_pro_file_path=existing_pro_file_path) - #################### calling remote_process ########################## - print(geofile) - print(outdir) - print(start) - print(end) - print(source) - print(collection) - print(scale) - print(projection) - print(qc) - print(algorithm) - print(input_file) - print(credfile) - print(out_get_data) - print(out_process_data) - print(stage_get_data) - print(stage_process_data) - print(raw_merge) - print(pro_merge) - print(existing_raw_file_path) - print(existing_raw_file_path) - output = remote_process(geofile=geofile, outdir=outdir, start=start, end=end, source=source, collection=collection, scale=scale, projection=projection, qc=qc, algorithm=algorithm, input_file=input_file, credfile=credfile, out_get_data=out_get_data, out_process_data=out_process_data, stage_get_data=stage_get_data, stage_process_data=stage_process_data, raw_merge=raw_merge, pro_merge=pro_merge, existing_raw_file_path=existing_raw_file_path, existing_pro_file_path=existing_pro_file_path) - - - - - ################### saving files to DB ################################ + # inserting data in the DB if(!is.null(out_process_data)){ - # if req pro and raw is for the first time being inserted - comment for down + # if the requested processed file already exists within the required timeline dont insert or update the DB + if(as.character(write_pro_start) == "dont write" && as.character(write_pro_end) == "dont write"){ + PEcAn.logger::logger.info("Requested processed file already exists") + }else{ if(flag == 1){ - PEcAn.logger::logger.info("inserting for first time") + # no processed and rawfile are present + PEcAn.logger::logger.info("inserting rawfile and processed files for the first time") # insert processed data PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) # insert raw file PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) - }else if(stage_get_data == FALSE){ - pro_id = pro_check$id - PEcAn.logger::logger.info("updating process") - db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) - db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s', where container_id=%f", output$process_data_path, output$process_data_name, pro_id), dbcon) - }else{ + }else if(flag == 2){ + # requested processed file does not exist but the raw file used to create it exists within the required timeline + PEcAn.logger::logger.info("inserting proc file for the first time") + PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, 
in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) + }else if(flag == 3){ + # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates + PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) + raw_id = raw_check$id + db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) + db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon) + }else if(flag == 4){ + # requested processed and raw files are present and have to be updated pro_id = pro_check$id raw_id = raw_check$id - PEcAn.logger::logger.info("updating process and raw") - db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) - db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$process_data_path, output$process_data_name, pro_id), dbcon) - PEcAn.logger::logger.info("now updating raw") - db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) - db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$raw_data_path, output$raw_data_name, raw_id), dbcon) + PEcAn.logger::logger.info("updating processed and raw files") + db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) + db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) + db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) + db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon) + }else if(flag == 5){ + # raw file required for creating the processed file exists and the processed file needs to be updated + pro_id = pro_check$id + PEcAn.logger::logger.info("Updating the existing processed file") + db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) + db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) } - }else{ + } + } + else{ + # if the requested raw file already exists within the required timeline dont insert or update the DB + if(as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write"){ + PEcAn.logger::logger.info("Requested raw file already exists") + }else{ if(flag == 1){ - PEcAn.logger::logger.info(("inserting raw for the first time")) + PEcAn.logger::logger.info(("Inserting raw file for the first time")) 
PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) }else{ - PEcAn.logger::logger.info("updating raw data") + PEcAn.logger::logger.info("Updating raw file") raw_id = raw_check$id - db.query(sprintf("UPDATE inputs set start_date='%s', end_date='%s', name='%s' where id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) - db.query(sprintf("UPDATE dbfiles set file_path='%s', file_name='%s' where container_id=%f", output$raw_data_path, output$raw_data_name, raw_id), dbcon) + db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) + db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon) + } } - } + PEcAn.DB::db.close(dbcon) - - - - - - - - } +##' construct the raw file name +##' +##' @name construct_raw_filename +##' @title construct_raw_filename +##' @param sitename site name +##' @param source source of the remote data +##' @param collection collection or product requested from the source +##' @param scale scale, NULL by default +##' @param projection projection, NULL by default +##' @param qc qc_parameter, NULL by default +##' @return raw_file_name +##' @examples +##' \dontrun{ +##' raw_file_name <- construct_raw_filename( +##' sitename="Reykjavik", +##' source="gee", +##' collection="s2", +##' scale=10.0 +##' projection=NULL +##' qc=1.0) +##' } +##' @author Ayush Prasad +construct_raw_filename <- function(sitename, source, collection, scale=NULL, projection=NULL, qc=NULL){ + # use NA if a parameter is not applicable and is NULL + if(is.null(scale)){ + scale <- "NA" + }else{ + scale <- format(scale, nsmall = 1) + } + if(is.null(projection)){ + projection <- "NA" + } + if(is.null(qc)){ + qc <- "NA" + }else{ + qc <- format(qc, nsmall = 1) + } + raw_file_name <- paste(sitename, source, collection, scale, projection, qc, sep = "_") + return(raw_file_name) +} + -set_date_stage <- function(result, req_start, req_end, stage){ +##' set dates, stage and merge status for remote data download +##' +##' @name set_stage +##' @title set_stage +##' @param result dataframe containing id, site_id, name, start_date, end_date from inputs table and file_path from dbfiles table +##' @param req_start start date requested by the user +##' @param req_end end date requested by the user +##' @param stage the stage which needs to be set, get_remote_data or process_remote_data +##' @return list containing req_start, req_end, stage, merge, write_start, write_end +##' @examples +##' \dontrun{ +##' raw_check <- set_stage( +##' result, +##' req_start, +##' req_end, +##' get_remote_data) +##' } +##' @author Ayush Prasad +set_stage <- function(result, req_start, req_end, stage){ db_start = as.Date(result$start_date) db_end = as.Date(result$end_date) req_start = as.Date(req_start) @@ -280,21 +376,20 @@ set_date_stage <- function(result, req_start, req_end, stage){ req_end <- "dont write" stage <- FALSE merge <- FALSE - write_start <- NULL - write_end <- NULL + write_start <- "dont write" + write_end <- "dont write" }else if(req_start < db_start && db_end < req_end){ - # file replace case [i think can be removed] + # data has to be replaced merge <- "replace" write_start <-req_start write_end <-req_end stage <- TRUE - }else 
if( ( (req_end > db_end) && (db_start <= req_start && req_start <= db_end) ) || ( (req_start > db_start) && (req_end > db_end))){ + }else if((req_start > db_start) && (req_end > db_end)){ # forward case - print("extending forward") req_start <- db_end + 1 write_start <- db_start write_end <- req_end - }else if( ( (req_start < db_start) && (db_start <= req_end && req_end <= db_end)) || ( (req_start < db_start) && (req_end < db_end))) { + }else if((req_start < db_start) && (req_end < db_end)) { # backward case req_end <- db_start - 1 write_end <- db_end @@ -302,5 +397,4 @@ set_date_stage <- function(result, req_start, req_end, stage){ } return (list(req_start, req_end, stage, merge, write_start, write_end)) -} - +} \ No newline at end of file From 3670006e63eff982014d7e1540d68b9bdba98843 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 12:33:11 +0530 Subject: [PATCH 1315/2289] update namespace --- modules/data.remote/NAMESPACE | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.remote/NAMESPACE b/modules/data.remote/NAMESPACE index 59779d91aa1..e2700301711 100644 --- a/modules/data.remote/NAMESPACE +++ b/modules/data.remote/NAMESPACE @@ -1,6 +1,7 @@ # Generated by roxygen2: do not edit by hand export(call_MODIS) +export(call_remote_process) export(download.LandTrendr.AGB) export(download.NLCD) export(download.thredds.AGB) From 6afe804016a9d270e5f033d1d7416f99a03d4f07 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 12:33:49 +0530 Subject: [PATCH 1316/2289] generate R documentation --- .../data.remote/man/call_remote_process.Rd | 22 +++++++++ .../data.remote/man/construct_raw_filename.Rd | 48 +++++++++++++++++++ modules/data.remote/man/set_stage.Rd | 35 ++++++++++++++ 3 files changed, 105 insertions(+) create mode 100644 modules/data.remote/man/call_remote_process.Rd create mode 100644 modules/data.remote/man/construct_raw_filename.Rd create mode 100644 modules/data.remote/man/set_stage.Rd diff --git a/modules/data.remote/man/call_remote_process.Rd b/modules/data.remote/man/call_remote_process.Rd new file mode 100644 index 00000000000..31e0c658afc --- /dev/null +++ b/modules/data.remote/man/call_remote_process.Rd @@ -0,0 +1,22 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/call_remote_process.R +\name{call_remote_process} +\alias{call_remote_process} +\title{call_remote_process} +\usage{ +call_remote_process(settings) +} +\arguments{ +\item{settings}{PEcAn settings list containing remotedata tags: siteid, sitename, raw_mimetype, raw_formatname, geofile, outdir, start, end, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data} +} +\description{ +call the Python - remote_process from PEcAn and store the output in BETY +} +\examples{ +\dontrun{ +call_remote_process(settings) +} +} +\author{ +Ayush Prasad +} diff --git a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_raw_filename.Rd new file mode 100644 index 00000000000..57bf84f94b9 --- /dev/null +++ b/modules/data.remote/man/construct_raw_filename.Rd @@ -0,0 +1,48 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/call_remote_process.R +\name{construct_raw_filename} +\alias{construct_raw_filename} +\title{construct_raw_filename} +\usage{ +construct_raw_filename( + sitename, + source, + collection, + scale = NULL, + projection = NULL, + qc = NULL +) +} +\arguments{ +\item{sitename}{site name} + +\item{source}{source of the 
remote data} + +\item{collection}{collection or product requested from the source} + +\item{scale}{scale, NULL by default} + +\item{projection}{projection, NULL by default} + +\item{qc}{qc_parameter, NULL by default} +} +\value{ +raw_file_name +} +\description{ +construct the raw file name +} +\examples{ +\dontrun{ +raw_file_name <- construct_raw_filename( + sitename="Reykjavik", + source="gee", + collection="s2", + scale=10.0 + projection=NULL + qc=1.0) +} +} +\author{ +Ayush Prasad +} diff --git a/modules/data.remote/man/set_stage.Rd b/modules/data.remote/man/set_stage.Rd new file mode 100644 index 00000000000..a957d75a59d --- /dev/null +++ b/modules/data.remote/man/set_stage.Rd @@ -0,0 +1,35 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/call_remote_process.R +\name{set_stage} +\alias{set_stage} +\title{set_stage} +\usage{ +set_stage(result, req_start, req_end, stage) +} +\arguments{ +\item{result}{dataframe containing id, site_id, name, start_date, end_date from inputs table and file_path from dbfiles table} + +\item{req_start}{start date requested by the user} + +\item{req_end}{end date requested by the user} + +\item{stage}{the stage which needs to be set, get_remote_data or process_remote_data} +} +\value{ +list containing req_start, req_end, stage, merge, write_start, write_end +} +\description{ +set dates, stage and merge status for remote data download +} +\examples{ +\dontrun{ +raw_check <- set_stage( + result, + req_start, + req_end, + get_remote_data) +} +} +\author{ +Ayush Prasad +} From 27a5976baa62ed36020f79c11ca094869f55e308 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 12:37:23 +0530 Subject: [PATCH 1317/2289] update merge_files --- modules/data.remote/inst/merge_files.py | 23 ++++++----------------- 1 file changed, 6 insertions(+), 17 deletions(-) diff --git a/modules/data.remote/inst/merge_files.py b/modules/data.remote/inst/merge_files.py index a7ffbe2c85b..45cd8f2a164 100644 --- a/modules/data.remote/inst/merge_files.py +++ b/modules/data.remote/inst/merge_files.py @@ -11,6 +11,7 @@ import time import pandas as pd + def nc_merge(old, new, outdir): """ Merge netCDF files. @@ -30,10 +31,10 @@ def nc_merge(old, new, outdir): head, tail = os.path.split(new) orig_nameof_newfile = new timestamp = time.strftime("%y%m%d%H%M%S") - changed_new = os.path.join(outdir, tail + 'temp' + timestamp + '.nc') + changed_new = os.path.join(outdir, tail + "temp" + timestamp + ".nc") # rename the new file to prevent it from being overwritten os.rename(orig_nameof_newfile, changed_new) - ds = xarray.open_mfdataset([old, changed_new], combine='by_coords') + ds = xarray.open_mfdataset([old, changed_new], combine="by_coords") ds.to_netcdf(os.path.join(outdir, tail)) # delete the old and temproary file os.remove(changed_new) @@ -56,32 +57,20 @@ def csv_merge(old, new, outdir): Absolute path to the merged file output csv file is saved in the specified directory. 
""" - + # extract the name from the new (second) csv file and attach timestamp to it, this will be the name of the output merged file head, tail = os.path.split(new) orig_nameof_newfile = new timestamp = time.strftime("%y%m%d%H%M%S") - changed_new = os.path.join(outdir, tail + 'temp' + timestamp + '.csv') + changed_new = os.path.join(outdir, tail + "temp" + timestamp + ".csv") # rename the new file to prevent it from being overwritten os.rename(orig_nameof_newfile, changed_new) df_old = pd.read_csv(old) df_changed_new = pd.read_csv(changed_new) merged_df = pd.concat([df_old, df_changed_new]) - merged_df = merged_df.sort_values(by='Date') + merged_df = merged_df.sort_values(by="Date") merged_df.to_csv(os.path.join(outdir, tail), index=False) # delete the old and temproary file os.remove(df_changed_new) os.remove(df_old) return os.path.abspath(os.path.join(outdir, tail)) - - - - - - - - - - - - From c951623ecfa0260872148c35f9a5011b661a7f30 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 13:04:42 +0530 Subject: [PATCH 1318/2289] Delete 02_hidden_analyses.Rmd --- .../02_hidden_analyses.Rmd | 798 ------------------ 1 file changed, 798 deletions(-) delete mode 100644 book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd deleted file mode 100644 index a553c0b5deb..00000000000 --- a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd +++ /dev/null @@ -1,798 +0,0 @@ -## Settings-configured analyses - -These analyses can be run through the web interface, but lack graphical interfaces and currently can only be configured throughthe XML settings. To run these analyses use the **Edit pecan.xml** checkbox on the Input configuration page. Eventually, these modules will be integrated into the web user interface. - -- [Parameter Data Assimilation (PDA)](#pda) -- [State Data Assimilation (SDA)](#sda) -- [MultiSettings](#multisettings) -- [Benchmarking](#benchmarking) - -(TODO: Add links) - -### Parameter data assimilation (PDA) {#pda} - -All functions pertaining to Parameter Data Assimilation are housed within: **pecan/modules/assim.batch**. - -For a detailed usage of the module, please see the vignette under **pecan/modules/assim.batch/vignettes**. - -Hierarchical version of the PDA is also implemented, for more details, see the `MultiSitePDAVignette` [package vignette](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd) and function-level documentation. - -#### **pda.mcmc.R** -This is the main PDA code. It performs Bayesian MCMC on model parameters by proposing parameter values, running the model, calculating a likelihood (between model output and supplied observations), and accepting or rejecting the proposed parameters (Metropolis algorithm). Additional notes: - -* The first argument is *settings*, followed by others that all default to *NULL.settings* is a list used throughout Pecan, which contains all the user options for whatever analyses are being done. The easiest thing to do is just pass that whole object all around the Pecan code and let different functions access whichever settings they need. That's what a lot of the rest of the Pecan code does. 
From c951623ecfa0260872148c35f9a5011b661a7f30 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 13:04:42 +0530 Subject: [PATCH 1318/2289] Delete 02_hidden_analyses.Rmd --- .../02_hidden_analyses.Rmd | 798 ------------------ 1 file changed, 798 deletions(-) delete mode 100644 book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd deleted file mode 100644 index a553c0b5deb..00000000000 --- a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd +++ /dev/null @@ -1,798 +0,0 @@ -## Settings-configured analyses - -These analyses can be run through the web interface, but they lack graphical interfaces and currently can only be configured through the XML settings. To run these analyses use the **Edit pecan.xml** checkbox on the Input configuration page. Eventually, these modules will be integrated into the web user interface. - -- [Parameter Data Assimilation (PDA)](#pda) -- [State Data Assimilation (SDA)](#sda) -- [MultiSettings](#multisettings) -- [Benchmarking](#benchmarking) - -(TODO: Add links) - -### Parameter data assimilation (PDA) {#pda} - -All functions pertaining to Parameter Data Assimilation are housed within: **pecan/modules/assim.batch**. - -For detailed usage of the module, please see the vignette under **pecan/modules/assim.batch/vignettes**. - -A hierarchical version of the PDA is also implemented; for more details, see the `MultiSitePDAVignette` [package vignette](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd) and the function-level documentation. - -#### **pda.mcmc.R** -This is the main PDA code. It performs Bayesian MCMC on model parameters by proposing parameter values, running the model, calculating a likelihood (between model output and supplied observations), and accepting or rejecting the proposed parameters (Metropolis algorithm). Additional notes: - -* The first argument is *settings*, followed by others that all default to NULL. *settings* is a list used throughout Pecan, which contains all the user options for whatever analyses are being done. The easiest thing to do is just pass that whole object all around the Pecan code and let different functions access whichever settings they need. That's what a lot of the rest of the Pecan code does. But the flexibility to override most of the relevant settings in *settings* is there by providing them directly as arguments to the function. - -* The *if(FALSE)...* : If you're trying to step through the function you probably will have the *settings* object around, but those other variables will be undefined. If you set them all to NULL then they'll be ignored without causing errors. It is there for debugging purposes. - -* The next step calls pda.settings(), which is in the file pda.utils.R (see below). It checks whether any settings are being overridden by arguments, and in most cases supplies default values if it can't find either. - - -* In the MCMC setup section - * The code is set up to allow you to start a new MCMC chain, or to continue a previous chain as specified in settings. - * The code writes a simple text file of parameter samples at every iteration, which lets you get some results and even re-start an MCMC that fails for some reason. - * The code has adaptive jump distributions. So you can see some initialization of the jump distributions and associated variables here. - * Finally, note that after all this setup a new XML settings file is saved. The idea is that the original pecan.xml you create is preserved for provenance, and then periodically throughout the workflow the settings (likely containing new information) are re-saved with descriptive filenames. - -* MCMC loop - * Periodically adjust the jump distribution to bring the acceptance rate closer to the target - * Propose new parameters one at a time. For each: - * First, note that Pecan may be handling many more parameters than are actually being targeted by PDA. Pecan puts priors on any variables it has information for (in the BETY database), and then these get passed around throughout the analysis and every step (meta-, sensitivity, ensemble analyses, etc.). But for PDA, you specify a separate list of probably far fewer parameters to constrain with data. These are the ones that get looped over and varied here. The distinction between all parameters and only those dealt with in PDA is handled in the setup code above. - * First a new value is proposed for the parameter of interest. - * Then, a new model run is set up, identical to the previous except with the new proposed value for the one parameter being updated on this run. - * The model run is started, and outputs collected after waiting for it to finish. - * A new likelihood is calculated based on the model outputs and the observed dataset provided. - * The standard Metropolis acceptance criterion is used to decide whether to keep the proposed parameter. - * Periodically (at the interval specified in settings), a diagnostic figure is saved to disk so you can check on progress. - * This works only for NEE currently - -#### **pda.mcmc.bs.R** -This file is basically identical to pda.mcmc.R, but rather than propose parameters one at a time, it proposes new values for all parameters at once ("bs" stands for "block sampling"). You choose which option to use by specifying settings$assim.batch$method: - * "bruteforce" means sample parameters one at a time - * "bruteforce.bs" means use this version, sampling all parameters at once - * "emulator" means use the emulated-likelihood version - -#### **pda.emulator** -This version of the PDA code again looks quite similar to the basic "bruteforce" one, but its mechanics are very different. The basic idea is, rather than running thousands of model iterations to explore parameter space via MCMC, to run a relatively small number of runs that have been carefully chosen to give good coverage of parameter space. Then, interpolate the likelihood calculated for each of those runs (actually, fit a Gaussian process to it) to get a surface that "emulates" the true likelihood. Now, perform regular MCMC (just like the "bruteforce" approach), except instead of actually running the model on every iteration to get a likelihood, just get an approximation from the likelihood emulator. Since the latter step takes virtually no time, you can run as long an MCMC as you need at little computational cost, once you have done the initial model runs to create the likelihood emulator.
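To make the emulator idea concrete, here is a minimal, self-contained sketch in R. A cheap toy function stands in for an expensive model-plus-likelihood evaluation, the Gaussian process is a hand-rolled kriging interpolator, and the knot design is a simple grid; none of these names come from the actual pda.emulator implementation, which has its own GP and design machinery.

```
# Toy illustration of emulator-based PDA (all names are illustrative only).
set.seed(42)

# 1. An "expensive" log-likelihood we can only afford to evaluate a few times.
expensive_loglik <- function(theta) dnorm(theta, mean = 0.3, sd = 0.15, log = TRUE)

# 2. Evaluate it at a small set of well-spread knots
#    (a stand-in for the Latin hypercube design used by pda.generate.knots).
knots    <- seq(0, 1, length.out = 12)
ll_knots <- sapply(knots, expensive_loglik)

# 3. Fit a simple squared-exponential GP interpolator to (knots, ll_knots).
sq_exp <- function(a, b, l = 0.1) exp(-outer(a, b, "-")^2 / (2 * l^2))
K      <- sq_exp(knots, knots) + diag(1e-8, length(knots))
alpha  <- solve(K, ll_knots - mean(ll_knots))
emulated_loglik <- function(theta) {
  mean(ll_knots) + drop(sq_exp(theta, knots) %*% alpha)  # GP predictive mean
}

# 4. Metropolis sampling against the *emulated* likelihood: each iteration
#    is now essentially free, because no model run is needed.
n_iter <- 5000
draws  <- numeric(n_iter)
theta  <- 0.5
for (i in seq_len(n_iter)) {
  prop <- theta + rnorm(1, sd = 0.05)
  if (prop > 0 && prop < 1 &&
      log(runif(1)) < emulated_loglik(prop) - emulated_loglik(theta)) {
    theta <- prop
  }
  draws[i] <- theta
}
quantile(draws[-(1:1000)], c(0.025, 0.5, 0.975))  # posterior summary
```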
-#### **pda.mcmc.recover.R** -This function is for recovering a failed PDA MCMC run. - -#### **pda.utils.R** -This file contains most of the individual functions used by the main PDA functions (pda.mcmc.*.R). - - * *assim.batch* is the main function Pecan calls to do PDA. It checks which method is requested (bruteforce, bruteforce.bs, or emulator) and calls the appropriate function described above. - * *pda.settings* handles settings. If a setting isn't found, the code can usually supply a reasonable default. - * *pda.load.priors* is fairly self explanatory, except that it handles a lot of cases and gives different options priority over others. Basically, the priors to use for PDA parameters can come from either a Pecan prior.distns or post.distns object (the latter would be, e.g., the posteriors of a meta-analysis or previous PDA), or be specified either by file path or BETY ID. If not told otherwise, the code tries to just find the most recent posterior in BETY, and use that as the prior for PDA. - * *pda.create.ensemble* gets an ensemble ID for the PDA. All model runs associated with an individual PDA (any of the three methods) are considered part of a single ensemble. All this function does is register a new ensemble in BETY, and return the ID that BETY gives it. - * *pda.define.prior.fn* creates R functions for all of the priors the PDA will use. - * *pda.init.params* sets up the parameter matrix for the run, which has one row per iteration, and one column per parameter. Columns include all Pecan parameters, not just the (probably small) subset that are being updated by PDA. This is for compatibility with other Pecan components. If starting a fresh run, the returned matrix is just a big empty matrix to fill in as the PDA runs. If continuing an existing MCMC, then it will be the previous params matrix, with a bunch of blank rows added on for filling in during this round of PDA. - * *pda.init.run* This is basically a big wrapper for Pecan's write.config function (actually functions [plural], since every model in Pecan has its own version). For the bruteforce and bruteforce.bs methods this will be run once per iteration, whereas the emulator method knows about all its runs ahead of time and this will be a big batch of all runs at once. - * *pda.adjust.jumps* tweaks the jump distributions for the standard MCMC method, and *pda.adjust.jumps.bs* does the same for the block-sampled version. - * *pda.calc.llik* calculates the log-likelihood of the model given all datasets provided to compare it to. - * *pda.generate.knots* is for the emulator version of PDA. It uses a Latin hypercube design to sample a specified number of locations in parameter space.
These locations are where the model will actually be run, and then the GP interpolates the likelihood surface in between. - * *pda.plot.params* provides basic MCMC diagnostics (trace and density) for parameters being sampled. - * *pda.postprocess* prepares the posteriors of the PDA, stores them to files and the database, and performs some other cleanup functions. - * *pda.load.data.r* This is the function that loads in data that will be used to constrain the PDA. It's supposed to be eventually more integrated with Pecan, which will know how to load all kinds of data from all kinds of sources. For now, it can do NEE from Ameriflux. - * *pda.define.llik.r* A simple helper function that defines likelihood functions for different datasets. Probably in the future this should be queried from the database or something. For now, it is extremely limited. The original test case of NEE assimilation uses a heteroskedastic Laplacian distribution. - * *pda.get.model.output.R* Another function that will eventually grow to handle many more cases, or perhaps be replaced by a better system altogether. For now though, it again just handles Ameriflux NEE. - -#### **get.da.data.\*.R, plot.da.R** -Old code written by Carl Davidson. Defunct now, but it may contain good ideas, so it is currently left in. - - -### State data assimilation (SDA) {#sda} - -`sda.enkf.R` is housed within: `/pecan/modules/assim.sequential/R` - -The tree ring tutorial is housed within: `/pecan/documentation/tutorials/StateAssimilation` - -More descriptive SDA methods can be found at: `/pecan/book_source/adve_user_guide_web/SDA_Methods.Rmd` - -#### **sda.enkf.R Description** -This is the main ensemble Kalman filter and generalized filter code. Originally, this was just ensemble Kalman filter code. Mike Dietze and Ann Raiho added a generalized ensemble filter to avoid filter divergence. The output of this function will be all of the run outputs, a PDF of diagnostics, and an Rdata object that includes three lists: - -* FORECAST will be the ensemble forecasts for each year -* ANALYSIS will be the updated ensemble sample given the NPP observations -* enkf.params contains the prior and posterior mean vector and covariance matrix for each time step. - -#### **sda.enkf.R Arguments** - -* settings - (required) [State Data Assimilation Tags Example] settings object - -* obs.mean - (required) a list of observation means named with dates in YYYY/MM/DD format - -* obs.cov - (required) a list of observation covariances named with dates in YYYY/MM/DD format - -* IC - (optional) initial condition matrix (dimensions: ensemble member # by state variables). Default is NULL. - -* Q - (optional) process covariance matrix (dimensions: state variables by state variables). Default is NULL. - -#### State Data Assimilation Workflow -Before running sda.enkf, these tasks must be completed (in no particular order): - -* Read in a [State Data Assimilation Tags Example] settings file with the tags listed below, i.e. read.settings('pecan.SDA.xml') - -* Load data means (obs.mean) and covariances (obs.cov) as lists with PEcAn naming and unit conventions (see the sketch after this list). Each observation must have a date in YYYY/MM/DD format (optional time) associated with it. If there are missing data, the date must still be represented in the list with an NA as the list object. - -* Create an initial conditions matrix (IC) that is state variable columns by ensemble member rows in dimension. [sample.IC.MODEL][sample.IC.MODEL.R] can be used to create the IC matrix, but it is not required. This IC matrix is fed into write.configs for the initial model runs.
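As a concrete illustration of the formats described in the bullets above, here is a small sketch for two state variables; the variable names follow the tags example below, but the values, dates, and ensemble size are invented.

```
# Hypothetical obs.mean / obs.cov lists for three annual time points, with
# one missing year represented by NA, as required. All numbers are made up.
obs.mean <- list(
  "2008/12/31" = c(AGB.pft = 111.5, TotSoilCarb = 6.2),
  "2009/12/31" = NA,  # no data this year, but the date must still be present
  "2010/12/31" = c(AGB.pft = 115.3, TotSoilCarb = 6.4)
)

obs.cov <- list(
  "2008/12/31" = diag(c(19.8, 0.3)),
  "2009/12/31" = NA,
  "2010/12/31" = diag(c(15.3, 0.2))
)

# A matching initial-condition matrix: ensemble members in rows,
# state variables in columns.
ne <- 20
IC <- cbind(
  AGB.pft     = rnorm(ne, mean = 110, sd = 5),
  TotSoilCarb = rnorm(ne, mean = 6, sd = 0.5)
)
```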
-The main parts of the SDA function are: - -Setting up for initial runs: - -* Set parameters - -* Load initial run inputs via [split.inputs.MODEL][split.inputs.MODEL.R] - -* Open database connection - -* Get new workflow ids - -* Create ensemble ids - -Performing the initial set of runs - -Set up for data assimilation - -Loop over time - -* [read.restart.MODEL][read.restart.MODEL.R] - read model restart files corresponding to the start.time and stop.time that you want to assimilate data into - -* Analysis - There are four choices based on whether process variance is TRUE or FALSE and whether there is data or not. [See the explanation below.][Analysis Options] - -* [write.restart.MODEL][write.restart.MODEL.R] - This function has two jobs. First, to insert the adjusted state back into the model restart file. Second, to update start.time, stop.time, and job.sh. - -* run model - -Save outputs - -Create diagnostics - -#### State Data Assimilation Tags Example - -``` - - TRUE - FALSE - FALSE - Single - - - AGB.pft - MgC/ha/yr - 0 - 100000000 - - - TotSoilCarb - KgC/m^2 - 0 - 100000000 - - - - 1950/01/01 - 1960/12/31 - - 1 - 1961/01/01 - 2010/12/31 - -``` - -#### State Data Assimilation Tags Descriptions - -* **adjustment** : [optional] TRUE/FALSE flag for whether ensembles need to be adjusted based on weights estimated given their likelihood during the analysis step. The default is TRUE for this flag. -* **process.variance** : [optional] TRUE/FALSE flag for whether process variance should be estimated (TRUE) or not (FALSE). If TRUE, a generalized ensemble filter will be used. If FALSE, an ensemble Kalman filter will be used. Default is FALSE. If you use the TRUE argument you can set three more optional tags to control the MCMCs built for the generalized ensemble filter. -* **nitrGEF** : [optional] numeric defining the length of the MCMC chains. -* **nthin** : [optional] numeric defining the thinning length for the MCMC chains. -* **nburnin** : [optional] numeric defining the number of burn-in iterations during the MCMCs. -* **q.type** : [optional] If `process.variance` is set to TRUE then this can take values of Single, Site or PFT. -* **censored.data** : [optional] logical, set TRUE for censored state variables. - -* **sample.parameters** : [optional] TRUE/FALSE flag for whether parameters should be sampled for each ensemble member or not. This allows for more spread in the initial conditions of the forecast. -* **state.variable** : [required] State variable that is to be assimilated (in PEcAn standard format). -* **spin.up** : [required] start.date and end.date for the initial model runs. -* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because initial runs can be done over a subset of the full run. -* **forecast.time.step** : [optional] In the future, this will be used to allow the forecast time step to vary from the data time step. -* **start.date** : [optional] start date of the state data assimilation (in YYYY/MM/DD format) -* **end.date** : [optional] end date of the state data assimilation (in YYYY/MM/DD format) -* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because this analysis can be done over a subset of the run. - -#### Model Specific Functions for SDA Workflow - -#### read.restart.MODEL.R -The purpose of read.restart is to read model restart files and return a matrix that is site rows by state variable columns. The state variables must be in PEcAn names and units.
The arguments are: - -* outdir - output directory - -* runid - ensemble member run ID - -* stop.time - used to determine which restart file to read (in POSIX format) - -* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object - -* var.names - vector with state variable names with PEcAn standard naming. Example: c('AGB.pft', 'TotSoilCarb') - -* params - parameters used by ensemble member (same format as write.configs) - -#### write.restart.MODEL.R -This model specific function takes in new state and new parameter matrices from sda.enkf.R after the analysis step and translates new variables back to the model variables. Then, updates start.time, stop.time, and job.sh so that start.model.runs() does the correct runs with the new states. In write.restart.LINKAGES and write.restart.SIPNET, job.sh is updated by using write.configs.MODEL. - -* outdir - output directory - -* runid - run ID for ensemble member - -* start.time - beginning of model run (in POSIX format) - -* stop.time - end of model run (in POSIX format) - -* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object - -* new.state - matrix from analysis of updated state variables with PEcAn names (dimensions: site rows by state variables columns) - -* new.params - In the future, this will allow us to update parameters based on states (same format as write.configs) - -* inputs - model specific inputs from [split.inputs.MODEL][split.inputs.MODEL.R] used to run the model from start.time to stop.time - -* RENAME - [optional] Flag used in write.restart.LINKAGES.R for development. - -#### split.inputs.MODEL.R -This model specific function gives the correct met and/or other model inputs to settings$run$inputs. This function returns settings$run$inputs to an inputs argument in sda.enkf.R. But, the inputs will not need to change for all models and should return settings$run$inputs unchanged if that is the case. - -* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object - -* start.time - start time for model run (in POSIX format) - -* stop.time - stop time for model run (in POSIX format) - -#### sample.IC.MODEL.R -This model specific function is optional. But, it can be used to create initial condition matrix (IC) with # state variables columns by # ensemble rows. This IC matrix is used for the initial runs in sda.enkf.R in the write.configs.MODEL function. - -* ne - number of ensemble members - -* state - matrix of state variables to get initial conditions from - -* year - used to determine which year to sample initial conditions from - -#### Analysis Options -There are four options depending on whether process variance is TRUE/FALSE and whether or not there is data or not. - -* If there is no data and process variance = FALSE, there is no analysis step. - -* If there is no data and process variance = TRUE, process variance is added to the forecast. - -* If there is data and process variance = TRUE, [the generalized ensemble filter][The Generalized Ensemble Filter] is implemented with MCMC. - -* If there is data and process variance = FALSE, the Kalman filter is used and solved analytically. - -#### The Generalized Ensemble Filter -An ensemble filter is a sequential data assimilation algorithm with two procedures at every time step: a forecast followed by an analysis. The forecast ensembles arise from a model while the analysis makes an adjustment of the forecasts ensembles from the model towards the data. 
An ensemble Kalman filter is typically suggested for this type of analysis because of its computationally efficient analytical solution and its ability to update states based on an estimate of covariance structure. But, in some cases, the ensemble Kalman filter fails because of filter divergence. Filter divergence occurs when forecast variability is too small, which causes the analysis to favor the forecast and diverge from the data. Models often produce low forecast variability because there is little internal stochasticity. Our ensemble filter overcomes this problem in a Bayesian framework by including an estimation of model process variance. This methodology also maintains the benefits of the ensemble Kalman filter by updating the state vector based on the estimated covariance structure. - -This process begins after the model is spun up to equilibrium. - -The likelihood function uses the data vector $\left(\boldsymbol{y_{t}}\right)$ conditional on the estimated state vector $\left(\boldsymbol{x_{t}}\right)$ such that - - $\boldsymbol{y}_{t}\sim\mathrm{multivariate\:normal}(\boldsymbol{x}_{t},\boldsymbol{R}_{t})$ - -where $\boldsymbol{R}_{t}=\boldsymbol{\sigma}_{t}^{2}\boldsymbol{I}$ and $\boldsymbol{\sigma}_{t}^{2}$ is a vector of data variances. To obtain an estimate of the state vector $\left(\boldsymbol{x}_{t}\right)$, we use a process model that incorporates a process covariance matrix $\left(\boldsymbol{Q}_{t}\right)$. This process covariance matrix differentiates our methods from past ensemble filters. Our process model contains the following equations - -$\boldsymbol{x}_{t} \sim \mathrm{multivariate\: normal}(\boldsymbol{x}_{model_{t}},\boldsymbol{Q}_{t})$ - -$\boldsymbol{x}_{model_{t}} \sim \mathrm{multivariate\: normal}(\boldsymbol{\mu}_{forecast_{t}},\boldsymbol{P}_{forecast_{t}})$ - -where $\boldsymbol{\mu}_{forecast_{t}}$ is a vector of means from the ensemble forecasts and $\boldsymbol{P}_{forecast_{t}}$ is a covariance matrix calculated from the ensemble forecasts. The prior for our process covariance matrix is $\boldsymbol{Q}_{t}\sim\mathrm{Wishart}(\boldsymbol{V}_{t},n_{t})$ where $\boldsymbol{V}_{t}$ is a scale matrix and $n_{t}$ is the degrees of freedom. The prior shape parameters are updated at each time step through moment matching such that - -$\boldsymbol{V}_{t+1} = n_{t}\bar{\boldsymbol{Q}}_{t}$ - -$n_{t+1} = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\frac{v_{ijt}^{2}+v_{iit}v_{jjt}}{Var(\boldsymbol{\bar{Q}}_{t})}}{I\times J}$ - -where we calculate the mean of the process covariance matrix $\left(\bar{\boldsymbol{Q}_{t}}\right)$ from the posterior samples at time t. Degrees of freedom for the Wishart are typically calculated element by element where $v_{ij}$ are the elements of $\boldsymbol{V}_{t}$. $I$ and $J$ index rows and columns of $\boldsymbol{V}$. Here, we calculate a mean number of degrees of freedom for $t+1$ by summing over all the elements of the scale matrix $\left(\boldsymbol{V}\right)$ and dividing by the count of those elements $\left(I\times J\right)$. We fit this model sequentially through time in the R computing environment using the R package 'rjags.' - -Users have control over what they think is the best way to estimate $Q$. Our code will look for the tag `q.type` in the XML settings under `state.data.assimilation`, which can take 3 values of Single, PFT or Site. If `q.type` is set to Single then one value of process variance will be estimated across all different sites or PFTs.
On the other hand, when q.type` is set to Site or PFT then a process variance will be estimated for each site or PFT at a cost of more time and computation power. - -#### Multi-site State data assimilation. -`sda.enkf.multisite` function allows for assimilation of observed data at multiple sites at the same time. In order to run a multi-site SDA, one needs to send a multisettings pecan xml file to this function. This multisettings xml file needs to contain information required for running at least two sites under `run` tag. The code will automatically run the ensembles for all the sites and reformats the outputs matching the required formats for analysis step. - -The observed mean and cov needs to be formatted as list of different dates with observations. For each element of this list also there needs to be a list with mean and cov matrices of different sites named by their siteid. In case that zero variance was estimated for a variable inside the obs.cov, the SDA code will automatically replace that with half of the minimum variance from other non-zero variables in that time step. - -This would look like something like this: -``` -> obs.mean - -$`2010/12/31` -$`2010/12/31`$`1000000650` - AbvGrndWood GWBI - 111.502 1.0746 - -$`2010/12/31`$`1000000651` - AbvGrndWood GWBI - 114.302 1.574695 -``` - -``` -> obs.cov - -$`2010/12/31` -$`2010/12/31`$`1000000650` - [,1] [,2] -[1,] 19.7821691 0.213584319 -[2,] 0.5135843 0.005162113 - -$`2010/12/31`$`1000000651` - [,1] [,2] -[1,] 15.2821691 0.513584319 -[2,] 0.1213583 0.001162113 -``` -An example of multi-settings pecan xml file also may look like below: -``` - - - - FALSE - TRUE - - 1000000040 - 1000013298 - - - - GWBI - KgC/m^2 - 0 - 9999 - - - AbvGrndWood - KgC/m^2 - 0 - 9999 - - - 1960/01/01 - 2000/12/31 - - - - -1 - - 2017/12/06 21:19:33 +0000 - - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768 - - - bety - bety - 128.197.168.114 - bety - PostgreSQL - false - - /fs/data1/pecan.data/dbfiles/ - - - - temperate.deciduous_SDA - - 2 - - /fs/data2/output//PEcAn_1000008768/pft/temperate.deciduous_SDA - 1000008552 - - - - 3000 - - FALSE - TRUE - - - - 20 - 1000016146 - 1995 - 1999 - - - uniform - - - sampling - - - parameters - - - soil - - - - - 1000000022 - /fs/data3/hamzed/output/paleon_sda_SIPNET-8768/Bartlett.param - SIPNET - r136 - FALSE - /fs/data5/pecan.models/SIPNET/trunk/sipnet_ssr - - - 1000008768 - - - - - 1000000650 - 1960/01/01 - 1965/12/31 - Harvard Forest - Lyford Plots (PalEON PHA) - 42.53 - -72.18 - - - - CRUNCEP - SIPNET - - /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim - - - - 1960/01/01 - 1980/12/31 - - - - 1000000651 - 1960/01/01 - 1965/12/31 - Harvard Forest - Lyford Plots (PalEON PHA) - 42.53 - -72.18 - - - - CRUNCEP - SIPNET - - /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim - - - - 1960/01/01 - 1980/12/31 - - - - localhost - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out - - - TRUE - TRUE - TRUE - - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run - /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out - run - -``` - -### Running SDA on remote -In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote.Sync.launcher`). 
`SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qsub command for running the job. `Remote.Sync.launcher`, on the other hand, sits on the local machine, monitors the progress of the job(s), and brings back the outputs as long as the job is running. - -`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs, including met, soil, `site_pft`, etc., testing whether they exist on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing `Remote.Sync.launcher` to monitor the progress of the run, check whether the job is still alive, and determine whether `sda.output.rdata` has been updated since the last check. - -Additionally, the `Remote.Sync.launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. - -Several points on how to prepare your xml settings for the remote SDA run: -1 - In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags. -2 - Inside the xml settings, an `` flag needs to be included and point to a local directory where `SDA_remote_launcher` will look for either a `sample.Rdata` file or a `pft` folder. -3 - You need to set your `` tag according to the desired remote machine. You can learn more about this in the `Remote execution with PEcAn` section of the documentation. Please make sure that the `` tag inside `` is pointing to a directory where you would like to store and run your SDA job(s). -4 - Finally, make sure the tag inside the tag is set to the correct path on the remote machine. - -### Restart functionality in SDA -If you prefer to run your SDA analysis in multiple stages, where each phase picks up where the previous one left off, you can use the `restart` argument in the `sda.enkf.multisite` function. You need to make sure that the output from the previous step exists in the `SDA` folder (in the `outfolder`), and that the `` is the same as `` from the previous step. When you run the SDA with the restart parameter, it will load the output from the previous step and use the configs already written in the run folder to set itself up for the next step. Using the restart argument could be as easy as: - -``` -sda.enkf.multisite(settings, - obs.mean = obs.mean, - obs.cov = obs.cov, - control = SDA.arguments(debug = FALSE, TimeseriesPlot = FALSE), - restart = FALSE - ) - -``` -Where the new `settings`, `obs.mean` and `obs.cov` contain the relevant information for the next phase. - - -### State Data Assimilation Methods - - -*By Ann Raiho* - -Our goal is to build a fully generalizable state data assimilation (SDA) workflow that will assimilate multiple types of data and data products into ecosystem models within PEcAn temporally and spatially.
But, during development, specifically with PalEON goals in mind, we have been focusing on assimilating tree ring estimated NPP and AGB and pollen derived fractional composition into two ecosystem models, SIPNET and LINKAGES, at Harvard Forest. This methodology will soon be expanded to include the PalEON sites listed on the [state data assimilation wiki page](https://paleon.geography.wisc.edu/doku.php/working_groups;state_data_assimilation). - -#### Data Products -During workflow development, we have been working with tree ring estimated NPP and AGB and pollen derived fractional composition data products. Both of these data products have been estimated with a full accounting of uncertainty, which provides us with a state variable observation mean vector and covariance matrix at each time step. These data products are discussed in more detail below. Even though we have been working with specific data products during development, our workflow is generalizable to alternative data products as long as we can calculate a state variable observation mean vector and covariance for a time point. - -#### Tree Rings -We have primarily been working with the tree ring data product created by Andria Dawson and Chris Paciorek and the PEcAn tree ring allometry module. They have developed a Bayesian model that estimates annual aboveground biomass increment (Mg/ha/yr) and aboveground biomass (Mg/ha) for each tree in a dataset. We obtain this data and aggregate it to the level appropriate for the ecosystem model. In SIPNET, we are assimilating annual gross woody increment (Mg/ha/yr) and aboveground woody biomass (Mg/ha). In LINKAGES, we are assimilating annual species biomass. More information on deriving these tree ring data products can be found in Dawson et al 201?. - -We have been working mostly with tree data collected at Harvard Forest. Tree rings and census data were collected at the Lyford Plot between 1960 and 2010 in three separate plots. Other tree ring data will be added to this analysis in the future from past PEON courses (UNDERC), Kelly Heilman (Billy's Lake and Bigwoods), and Alex Dye (Huron Mt. Club). - -#### Pollen -STEPPS is a Bayesian model developed by Paciorek and McLachlan 2009 and Dawson et al 2016 to estimate spatially gridded fractional composition from fossil pollen. We have been working with STEPPS1 output, specifically with the grid cell that contains Harvard Forest. The temporal resolution of this data product is centennial. Our workflow currently operates at annual time steps, but does not require data at every time step. So, it is possible to assimilate fractional composition every one hundred years, or to assimilate fractional composition data every year by accounting for variance inflation. - -In the future, pollen derived biomass (ReFAB) will also be available for data assimilation, although we have not discussed how STEPPS and ReFAB data assimilation will work. - -#### Variance Inflation -*Side note: we probably want to call this something else now.* - -Since the fractional composition data product has a centennial resolution, in order to use fractional composition information every year we need to change the weight the data has on the analysis. The basic idea is to downweight the likelihood relative to the prior to account for (a) the fact that we assimilate an observation multiple times and (b) the fact that the number of STEPPS observations is 'inflated' because of the autocorrelation.
To do this, we take the likelihood and raise it to the power of (1/w) where 'w' is an inflation factor. - -w = D * (N / ESS) - -where D is the length of the time step (in our case D = 100), N is the number of time steps (in our case N = 11), and ESS is the effective sample size. The ESS is calculated with the following function, where ntimes is the same as N above and sims is a matrix with dimensions number of MCMC samples by number of state variables. - -``` -ESS_calc <- function(ntimes, sims){ - # center based on mean at each time to remove baseline temporal correlation - # (we want to estimate effective sample size effect from correlation of the errors) - row.means.sims <- sims - rowMeans(sims) - - # compute all pairwise covariances at different times - covars <- NULL - for(lag in 1:(ntimes-1)){ - covars <- c(covars, rowMeans(row.means.sims[(lag+1):ntimes, , drop = FALSE] * row.means.sims[1:(ntimes-lag), , drop = FALSE])) - } - vars <- apply(row.means.sims, 1, var) # pointwise post variances at each time, might not be homoscedastic - - # nominal sample size scaled by ratio of variance of an average - # under independence to variance of average of correlated values - neff <- ntimes * sum(vars) / (sum(vars) + 2 * sum(covars)) - return(neff) - } -``` - -The ESS for the STEPPS1 data product is 3.6, so w in our assimilation of fractional composition at Harvard Forest will be w = 100 * (11 / 3.6) = 305.6. - -#### Current Models -SIPNET and LINKAGES are the two ecosystem models that have been used during state data assimilation development within PEcAn. SIPNET is a simple ecosystem model that was built for... LINKAGES is a forest gap model created to simulate the process of succession that occurs when a gap is opened in the forest canopy. LINKAGES has 72 species level plant functional types and the ability to simulate some below ground processes (C and N cycles). - -#### Model Calibration -Without model calibration both SIPNET and LINKAGES make incorrect predictions about Harvard Forest. To confront this problem, SIPNET and LINKAGES will both be calibrated using data collected at the Harvard Forest flux tower. Istem has completed calibration for SIPNET using a [parameter data assimilation emulator](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/R/pda.emulator.R) contained within the PEcAn workflow. LINKAGES will also be calibrated using this method. This method is also generalizable to other sites assuming there is data independent of data assimilation data available to calibrate against. - -#### Initial Conditions -The initial conditions for SIPNET are sampled across state space based on data distributions at the time when the data assimilation will begin. We do not sample LINKAGES for initial conditions and instead perform model spin up for 100 years prior to beginning data assimilation. In the future, we would like to estimate initial conditions based on data. We achieve adequate spread in the initial conditions by allowing the parameters to vary across ensemble members. - -#### Drivers -We are currently using Global Climate Model (GCM) drivers from the PalEON model intercomparison. Christy Rollinson and John Tipton are creating MET downscaled GCM drivers for the PalEON data assimilation sites. We will use these drivers when they are available because they are a closer representation of reality.
- -#### Sequential State Data Assimilation -We are using sequential state data assimilation methods to assimilate PalEON data products into ecosystem models because less computational power is required for sequential state data assimilation than for particle filter methods. - -#### General Description -The general sequential data assimilation framework consists of three steps at each time step: -1. Read the state variable output for time t from the model forecast ensembles and save the forecast mean (muf) and covariance (Pf). -2. If there are data mean (y) and covariance (R) at this time step, perform the data assimilation analysis (either EnKF or generalized ensemble filter) to calculate the new mean (mua) and covariance (Pa) of the state variables. -3. Use mua and Pa to restart and run the ecosystem model ensembles with the new state variables for time t+1. - -#### EnKF -There are two ways to implement sequential state data assimilation at this time. The first is the Ensemble Kalman Filter (EnKF). EnKF has an analytical solution, so the Kalman gain, analysis mean vector, and analysis covariance matrix can be calculated directly: - -``` - - K <- Pf %*% t(H) %*% solve((R + H %*% Pf %*% t(H))) ## Kalman Gain - - mu.a <- mu.f + K %*% (Y - H %*% mu.f) # Analysis mean vector - - Pa <- (diag(ncol(X)) - K %*% H) %*% Pf # Analysis covariance matrix - -``` - -The EnKF is typically used for sequential state data assimilation, but we found that the EnKF led to filter divergence when combined with our uncertain data products. Filter divergence led us to create a generalized ensemble filter that estimates process variance. - -#### Generalized Ensemble Filter -The generalized ensemble filter generally follows the three steps of sequential state data assimilation. But, in the generalized ensemble filter we add a latent state vector that accounts for added process variance. Furthermore, instead of solving the analysis analytically like the EnKF, we have to estimate the mean analysis vector and covariance matrix with MCMC. - -#### Mapping Ensemble Output to Tobit Space -There are some instances when we have right or left censored variables from the model forecast. For example, a model estimating species level biomass may have several ensemble members that produce zero biomass for a given species. We are considering this case a left censored state variable that needs to be mapped to normal space using a tobit model. We do this by creating two matrices with dimensions number of ensembles by state variable. The first matrix is a matrix of indicator variables (y.ind), and the second is a matrix of censored variables (y.censored). When the indicator variable is 0 the state variable (j) for ensemble member (i) is sampled. This allows us to impute a normal distribution for each state variable that contains 'missing' forecasts or forecasts of zero. - -``` -tobit2space.model <- nimbleCode({ - for(i in 1:N){ - y.censored[i,1:J] ~ dmnorm(muf[1:J], cov = pf[1:J,1:J]) - for(j in 1:J){ - y.ind[i,j] ~ dconstraint(y.censored[i,j] > 0) - } - } - - muf[1:J] ~ dmnorm(mean = mu_0[1:J], cov = pf[1:J,1:J]) - - Sigma[1:J,1:J] <- lambda_0[1:J,1:J]/nu_0 - pf[1:J,1:J] ~ dinvwish(S = Sigma[1:J,1:J], df = J) - - }) -``` - - -#### Generalized Ensemble Filter Model Description -Below is the BUGS code for the full analysis model. The forecast mean and covariance are calculated from the tobit2space model above. We use a tobit likelihood in this model because there are instances when the data may be left or right censored.
Process variance is included by adding a latent model state (X) with a process precision matrix (q). We update our prior on q at each time step using our estimate of q from the previous time step. - -``` - tobit.model <- nimbleCode({ - - q[1:N,1:N] ~ dwish(R = aq[1:N,1:N], df = bq) ## aq and bq are estimated over time - Q[1:N,1:N] <- inverse(q[1:N,1:N]) - X.mod[1:N] ~ dmnorm(muf[1:N], prec = pf[1:N,1:N]) ## Model Forecast ##muf and pf are assigned from ensembles - - ## add process error - X[1:N] ~ dmnorm(X.mod[1:N], prec = q[1:N,1:N]) - - #agb linear - #y_star[1:YN,1:YN] <- X[1:YN,1:YN] #[choose] - - #f.comp non linear - #y_star[1:YN] <- X[1:YN] / sum(X[1:YN]) - - ## Analysis - y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN,1:YN]) #is it an okay assumption to just have X and Y in the same order? - - #don't flag y.censored as data, y.censored in inits - #remove y.censored samplers and only assign univariate samplers on NAs - - for(i in 1:YN){ - y.ind[i] ~ dconstraint(y.censored[i] > 0) - } - - }) -``` - -#### Ensemble Adjustment -Each ensemble member has a different set of species parameters. We adjust the updated state variables by using an ensemble adjustment. The ensemble adjustment weights the ensemble members based on their likelihood during the analysis step. - -``` - S_f <- svd(Pf) - L_f <- S_f$d - V_f <- S_f$v - - ## normalize - Z <- X*0 - for(i in seq_len(nrow(X))){ - Z[i,] <- 1/sqrt(L_f) * t(V_f)%*%(X[i,]-mu.f) - } - Z[is.na(Z)]<-0 - - ## analysis - S_a <- svd(Pa) - L_a <- S_a$d - V_a <- S_a$v - - ## analysis ensemble - X_a <- X*0 - for(i in seq_len(nrow(X))){ - X_a[i,] <- V_a %*%diag(sqrt(L_a))%*%Z[i,] + mu.a - } -``` - -#### Diagnostics -There are three diagnostics we have currently implemented: time series, bias time series, and process variance. The time series diagnostics show the data, forecast, and analysis time series for each state variable. These are useful for visually assessing the variance and magnitude of change of state variables through time. These time series are also updated throughout the analysis and are also created as a pdf at the end of the SDA workflow. There are two types of bias time series: the first assesses the bias in the update (the forecast minus the analysis) and the second assesses the bias in the error (the forecast minus the data). These bias time series are useful for identifying which state variables have intrinsic bias within the model. For example, if red oak biomass in LINKAGES increases at every time step (the update and the error are always positive), this would suggest that LINKAGES has a positive growth or recruitment bias for red oak. Finally, when using the generalized ensemble filter to estimate process variance, there are two additional plots to assess the estimation of process variance. The first is a correlation plot of the process covariance matrix. This tells us what correlations are incorrectly represented by the model. For example, if red oak biomass and white pine biomass are highly negatively correlated in the process covariance matrix, this means that the model either 1) has no relationship between red oak and white pine and they should affect each other negatively or 2) there is a positive relationship between red oak and white pine and there shouldn't be any relationship. We can determine which of these is true by comparing the process covariance matrix to the model covariance matrix. The second process variance diagnostic plot shows how the degrees of freedom associated with estimating the process covariance matrix have changed through time. This plot should show increasing degrees of freedom through time.
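To make the first of these two process-variance diagnostics concrete, here is a small sketch of comparing the correlation structure implied by the process covariance against that of the model forecast; the matrices and state-variable names are random stand-ins, not output from a real run.

```
# Sketch of the process-covariance correlation diagnostic (all inputs are
# random stand-ins; in practice Q.bar would be the posterior mean of Q and
# Pf the forecast covariance calculated from the ensembles).
set.seed(1)
A <- matrix(rnorm(16), 4); Q.bar <- crossprod(A)  # stand-in posterior mean Q
B <- matrix(rnorm(16), 4); Pf    <- crossprod(B)  # stand-in forecast covariance
state_vars <- c("red.oak", "white.pine", "maple", "TotSoilCarb")
dimnames(Q.bar) <- dimnames(Pf) <- list(state_vars, state_vars)

# Correlations implied by the process error vs. by the model forecast itself;
# large disagreements flag relationships the model is getting wrong.
round(cov2cor(Q.bar) - cov2cor(Pf), 2)
```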
-### MultiSettings {#multisettings} - -(TODO: Under construction...) - -### Benchmarking {#benchmarking} - -Benchmarking is the process of comparing model outputs against either experimental data or against other model outputs as a way to validate model performance. -We have a suite of statistical comparisons that provide benchmarking scores, as well as visual comparisons that help in diagnosing data-model and/or model-model differences. - -#### Data Preparation - -All data that you want to compare with model runs must be registered in the database. -This is currently a step that must be done by hand, either from the command line or through the online BETY interface. -The data must have three records: - -1. An input record (Instructions [here](#NewInput)) - -2. A database file record (Instructions [here](#NewInput)) - -3. A format record (Instructions [here](#NewFormat)) - -#### Model Runs - -Model runs can be set up and executed -- Using the PEcAn web interface online or with a VM ([see setup](#GettingStarted)) -- By hand using the [pecan.xml](#pecanXML) - -#### The Benchmarking Shiny App - -The entire benchmarking process can be done through the Benchmarking R Shiny app. - -When the model run has completed, navigate to the workflow visualization Shiny app. - -- Load model data - - Select the workflow and run id - - Make sure that your model output is loading properly (i.e. you can see plots of your data) -- Load benchmarking data - - Again make sure that you can see the uploaded data plotted alongside the model output. In the future there will be more tools for double checking that your uploaded data is appropriate for benchmarking, but for now you may need to do the sanity checks by hand. - -#### Create a reference run record - -- Navigate to the Benchmarking tab - + The first step is to register the new model run as a reference run in the database. Benchmarking cannot be done before this step is completed. When the reference run record has been created, additional menus for benchmarking will appear. - -#### Setup Benchmarks and metrics - -- From the menus select - - The variables in the uploaded data that you wish to compare with model output. - - The numerical metrics you would like to use in your comparison. - - Additional comparison plots that you would like to see. -- Note: All these selections populate the benchmarking section of the `pecan.BENCH.xml`, which is then saved in the same location as the original run output. This xml is purely for reference. - -##### Benchmarking Output - -- All benchmarking results are stored in the benchmarking directory, which is created in the same folder as the original model run. -- The benchmarking directory contains subdirectories for each of the datasets compared with the model output. The names of these directories are the same as the corresponding data set's input id in BETY. -- Each input directory contains `benchmarking.output.Rdata`, an Rdata file containing all the results of the benchmarking workflow. `load("benchmarking.output.Rdata")` loads a list called `result.out` which contains the following (see the sketch after this list): - - `bench.results`: a data frame of all numeric benchmarking scores - - `format`: a data frame that can be used to see how the input data was transformed to make it comparable to the model output. This involves converting from the original variable names and units to the internal pecan standard. - - `aligned.dat`: a data frame of the final aligned model and input values. -- All plots are saved as pdf files with names of the form "benchmark_plot-type_variable_input-id.pdf" - -- To view interactive results, navigate to the Benchmarking Plots tab in the shiny app.
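For readers who prefer to inspect these results outside the Shiny app, a short sketch of loading the output by hand follows; the directory names are illustrative, assuming the layout described above.

```
# Hypothetical paths following the layout described above: one subdirectory
# per benchmarked dataset, named by its BETY input id.
bench_dir <- "outdir/PEcAn_1000010983/benchmarking/1000013743"
load(file.path(bench_dir, "benchmarking.output.Rdata"))  # creates `result.out`

result.out$bench.results      # numeric scores, one row per variable/metric
head(result.out$aligned.dat)  # final aligned model vs. observed values
```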
- -#### Benchmarking in pecan.xml - -Before reading this section, it is recommended that you [familiarize yourself with the basics of the pecan.xml file.](#pecanXML) - -The `pecan.xml` has an _optional_ benchmarking section. Below, all the tags in the benchmarking section are explained. Many of these fields are filled in automatically during the benchmarking process when using the benchmarking shiny app. - -The only time one should edit the benchmarking section by hand is for performing clone runs. See the [clone run documentation.](#CloneRun) - -`` settings: - -- `ensemble_id`: the id of the ensemble that you will be using - the settings from this ensemble will be saved in a reference run record and then `ensemble_id` will be replaced with `reference_run_id` -- `new_run`: TRUE = create new run, FALSE = use existing run (required, default FALSE) - -It is possible to look at more than one benchmark with a particular run. -The specific settings related to each benchmark are in a subsection called `benchmark`: - -- `input_id`: the id of the benchmarking data (required) -- `variable_id`: the id of the variable of interest within the data. If you leave this blank, all variables that are shared between the input and model output will be used. -- `metric_id`: the id(s) of the metric(s) to be calculated. If you leave this blank, all metrics will be used. - -Example: -In this example, -- we are using a pre-existing run from `ensemble_id = 1000010983` (`new_run = FALSE`) -- the output will be compared to data from `input_id = 1000013743`, specifically two variables of interest: `variable_id = 411, variable_id = 18` -- for `variable_id = 411` we will perform only one metric of comparison, `metric_id = 1000000001` -- for `variable_id = 18` we will perform two metrics of comparison, `metric_id = 1000000001, metric_id = 1000000002` - -```xml - - 1000010983 - FALSE - - 1000013743 - 411 - 853 - - 1000000001 - - - - 1000013743 - 18 - 853 - - 1000000001 - 1000000002 - - - -``` From 87c36387a3c8ad8bb7f19aa01d9c76becec02db4 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 13:08:55 +0530 Subject: [PATCH 1319/2289] remove comments which arent required --- modules/data.remote/inst/remote_process.py | 3 --- 1 file changed, 3 deletions(-) diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index bfc9c366570..f6a2a36d676 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -37,8 +37,6 @@ def remote_process( existing_raw_file_path=None, existing_pro_file_path=None, - # output={"get_data": "bands", "process_data": "lai"}, - # stage={"get_data": True, "process_data": True}, ): """ @@ -79,7 +77,6 @@ """ - # when db connections are made, this will be removed aoi_name = get_sitename(geofile) get_datareturn_path = 78 From ff63d166d366b4e9eb9cca5e6e47ac458549a7f5 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Mon, 3 Aug 2020 07:47:57 +0000 Subject: [PATCH 1320/2289] added functionality & test for submission of JSON workflow --- apps/api/R/submit.workflow.R | 29 +++++++++++++++- apps/api/R/workflows.R | 18 ++++++-----
From 87c36387a3c8ad8bb7f19aa01d9c76becec02db4 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 13:08:55 +0530
Subject: [PATCH 1319/2289] remove comments which arent required

---
 modules/data.remote/inst/remote_process.py | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py
index bfc9c366570..f6a2a36d676 100644
--- a/modules/data.remote/inst/remote_process.py
+++ b/modules/data.remote/inst/remote_process.py
@@ -37,8 +37,6 @@ def remote_process(
     existing_raw_file_path=None,
     existing_pro_file_path=None,
-    # output={"get_data": "bands", "process_data": "lai"},
-    # stage={"get_data": True, "process_data": True},
 ):
     """
@@ -79,7 +77,6 @@
     """
-    # when db connections are made, this will be removed
     aoi_name = get_sitename(geofile)
     get_datareturn_path = 78

From ff63d166e2a04d609719cd2d67c898c2f113c21d Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Mon, 3 Aug 2020 07:47:57 +0000
Subject: [PATCH 1320/2289] added functionality & test for submission of JSON
 workflow

---
 apps/api/R/submit.workflow.R                  | 29 +++++++++++++++-
 apps/api/R/workflows.R                        | 18 ++++++----
 apps/api/pecanapi-spec.yml                    |  3 ++
 apps/api/tests/test.workflows.R               | 19 +++++++----
 apps/api/tests/test_workflows/api.sipnet.json | 34 +++++++++++++++++++
 apps/api/tests/test_workflows/api.sipnet.xml  |  1 +
 6 files changed, 90 insertions(+), 14 deletions(-)
 create mode 100644 apps/api/tests/test_workflows/api.sipnet.json

diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R
index 6004057d697..c635758b614 100644
--- a/apps/api/R/submit.workflow.R
+++ b/apps/api/R/submit.workflow.R
@@ -1,6 +1,6 @@
 library(dplyr)
 
-#* Submit a workflow submitted as XML
+#* Submit a workflow sent as XML
 #* @param workflowXmlString String containing the XML workflow from request body
 #* @param userDetails List containing userid & username
 #* @return ID & status of the submitted workflow
@@ -10,6 +10,31 @@ submit.workflow.xml <- function(workflowXmlString, userDetails){
   workflowXml <- XML::xmlParseString(stringr::str_replace(workflowXmlString, "\n", ""))
   workflowList <- XML::xmlToList(workflowXml)
 
+  return(submit.workflow.list(workflowList, userDetails))
+}
+
+#################################################################################################
+
+#* Submit a workflow sent as JSON
+#* @param workflowJsonString String containing the JSON workflow from request body
+#* @param userDetails List containing userid & username
+#* @return ID & status of the submitted workflow
+#* @author Tezan Sahu
+submit.workflow.json <- function(workflowJsonString, userDetails){
+
+  workflowList <- jsonlite::fromJSON(workflowJsonString)
+
+  return(submit.workflow.list(workflowList, userDetails))
+}
+
+#################################################################################################
+
+#* Submit a workflow (converted to list)
+#* @param workflowList Workflow parameters expressed as a list
+#* @param userDetails List containing userid & username
+#* @return ID & status of the submitted workflow
+#* @author Tezan Sahu
+submit.workflow.list <- function(workflowList, userDetails) {
   # Fix details about the database
   workflowList$database <- list(bety = PEcAn.DB::get_postgres_envvars(
     host = "localhost",
@@ -83,6 +108,7 @@
 }
 
+#################################################################################################
 
 #* Insert the workflow into workflows table to obtain the workflow_id
 #* @param workflowList List containing the workflow details
@@ -130,6 +156,7 @@ insert.workflow <- function(workflowList){
   return(workflow_id)
 }
 
+#################################################################################################
 
 #* Insert the workflow into attributes table
 #* @param workflowList List containing the workflow details
diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R
index b3f37ab1818..436d475fb8f 100644
--- a/apps/api/R/workflows.R
+++ b/apps/api/R/workflows.R
@@ -98,19 +98,23 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r
 #' @author Tezan Sahu
 #* @post /
 submitWorkflow <- function(req, res){
-  if(req$HTTP_CONTENT_TYPE == "application/xml"){
+  if(req$HTTP_CONTENT_TYPE == "application/xml") {
     submission_res <- submit.workflow.xml(req$postBody, req$user)
-    if(submission_res$status == "Error"){
-      res$status <- 400
-      return(submission_res)
-    }
-    res$status <- 201
-    return(submission_res)
+  }
+  else if(req$HTTP_CONTENT_TYPE == "application/json") {
+    submission_res <- submit.workflow.json(req$postBody, req$user)
   }
   else{
     res$status <- 415
     return(paste("Unsupported request content type:", req$HTTP_CONTENT_TYPE))
   }
+
+  if(submission_res$status == "Error"){
+    res$status <- 400
+    return(submission_res)
+  }
+  res$status <- 201
+  return(submission_res)
 }
 
 #################################################################################################
diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml
index 9f44d40cd15..dd8a94f3169 100644
--- a/apps/api/pecanapi-spec.yml
+++ b/apps/api/pecanapi-spec.yml
@@ -918,6 +918,9 @@ components:
         ensemble:
           type: object
           properties:
+            size:
+              type: number
+              example: 10
             variable:
               type: string
              example: NPP
diff --git a/apps/api/tests/test.workflows.R b/apps/api/tests/test.workflows.R
index 1d18a605a84..7bc0bf4cd86 100644
--- a/apps/api/tests/test.workflows.R
+++ b/apps/api/tests/test.workflows.R
@@ -36,8 +36,6 @@ test_that("Calling /api/workflows/{id} with invalid workflow id returns Status 4
   expect_equal(res$status, 404)
 })
 
-submitted_workflow_id <- NULL;
-
 test_that("Submitting XML workflow to /api/workflows/ returns Status 201", {
   xml_string <- paste0(xml2::read_xml("test_workflows/api.sipnet.xml"))
   res <- httr::POST(
@@ -46,15 +44,24 @@ test_that("Submitting XML workflow to /api/workflows/ returns Status 201", {
     httr::content_type("application/xml"),
     body = xml_string
   )
-
-  submitted_workflow_id <<- jsonlite::fromJSON(rawToChar(res$content))$workflow_id
   expect_equal(res$status, 201)
 })
 
+test_that("Submitting JSON workflow to /api/workflows/ returns Status 201", {
+  json_workflow <- jsonlite::read_json("test_workflows/api.sipnet.json")
+  res <- httr::POST(
+    "http://localhost:8000/api/workflows/",
+    httr::authenticate("carya", "illinois"),
+    body = json_workflow,
+    encode='json'
+  )
+  expect_equal(res$status, 201)
+})
+
+
 test_that("Calling /api/workflows/{id}/status with valid workflow id returns Status 200", {
-  Sys.sleep(5)
   res <- httr::GET(
-    paste0("http://localhost:8000/api/workflows/", submitted_workflow_id, "/status"),
+    paste0("http://localhost:8000/api/workflows/", 99000000031, "/status"),
     httr::authenticate("carya", "illinois")
   )
   expect_equal(res$status, 200)
diff --git a/apps/api/tests/test_workflows/api.sipnet.json b/apps/api/tests/test_workflows/api.sipnet.json
new file mode 100644
index 00000000000..12652891885
--- /dev/null
+++ b/apps/api/tests/test_workflows/api.sipnet.json
@@ -0,0 +1,34 @@
+{
+    "pfts": {
+        "pft": {
+            "name": "temperate.coniferous"
+        }
+    },
+    "meta.analysis": {
+        "iter": 100,
+        "random.effects": "FALSE",
+        "threshold": 1.2,
+        "update": "AUTO"
+    },
+    "ensemble": {
+        "size": 1,
+        "variable": "NPP"
+    },
+    "model": {
+        "type": "SIPNET",
+        "revision": "r136"
+    },
+    "run": {
+        "site": {
+            "id": 772
+        },
+        "inputs": {
+            "met": {
+                "id": "99000000003"
+            }
+        },
+        "start.date": "2002-01-01 00:00:00",
+        "end.date": "2002-12-31 00:00:00",
+        "dbfiles": "pecan/dbfiles"
+    }
+}
\ No newline at end of file
diff --git a/apps/api/tests/test_workflows/api.sipnet.xml b/apps/api/tests/test_workflows/api.sipnet.xml
index d71eeb00b81..903a11befb2 100644
--- a/apps/api/tests/test_workflows/api.sipnet.xml
+++ b/apps/api/tests/test_workflows/api.sipnet.xml
@@ -14,6 +14,7 @@
   <ensemble>
+    <size>1</size>
     <variable>NPP</variable>
   </ensemble>

From c9b7ab5241a650f387e0e6b555a76b81d8c03f59 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 14:19:33 +0530
Subject: [PATCH 1321/2289] added missing PEcAn::DB in few places

---
 modules/data.remote/R/call_remote_process.R | 22 ++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R
index 82b1598a542..1ad54bf098a 100644
--- a/modules/data.remote/R/call_remote_process.R
+++ b/modules/data.remote/R/call_remote_process.R
@@ -60,7 +60,7 @@ call_remote_process <- function(settings){
   out_process_data <- settings$remotedata$out_process_data
 
-  dbcon <- db.open(settings$database$bety)
+  dbcon <- PEcAn.DB::db.open(settings$database$bety)
 
   flag <- 0
   # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names
@@ -258,23 +258,23 @@ call_remote_process <- function(settings){
         # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates
         PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon)
         raw_id = raw_check$id
-        db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
         db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
       }else if(flag == 4){
         # requested processed and raw files are present and have to be updated
         pro_id = pro_check$id
         raw_id = raw_check$id
         PEcAn.logger::logger.info("updating processed and raw files")
-        db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
-        db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon)
-        db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
-        db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
       }else if(flag == 5){
         # raw file required for creating the processed file exists and the processed file needs to be updated
         pro_id = pro_check$id
         PEcAn.logger::logger.info("Updating the existing processed file")
-        db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
-        db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon)
       }
     }
   }
@@ -289,13 +289,13 @@ call_remote_process <- function(settings){
       }else{
        PEcAn.logger::logger.info("Updating raw file")
         raw_id = raw_check$id
-        db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
-        db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
       }
     }
   }
 
-  PEcAn.DB::db.close(dbcon)
+  PEcAn.DB::db.close(con=dbcon)
 }

From 582dce1bd8c7b68473148bc7bf97533e85901393 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 14:35:43 +0530
Subject: [PATCH 1322/2289] add missing .DB

---
 modules/data.remote/R/call_remote_process.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R
index 1ad54bf098a..e963fb13c71 100644
--- a/modules/data.remote/R/call_remote_process.R
+++ b/modules/data.remote/R/call_remote_process.R
@@ -259,7 +259,7 @@ call_remote_process <- function(settings){
         PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon)
         raw_id = raw_check$id
         PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
-        db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
       }else if(flag == 4){
         # requested processed and raw files are present and have to be updated
         pro_id = pro_check$id
@@ -268,7 +268,7 @@ call_remote_process <- function(settings){
         PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon)
         PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon)
         PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon)
-        PEcAn.DB::.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
+        PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon)
       }else if(flag == 5){
         # raw file required for creating the processed file exists and the processed file needs to be updated
         pro_id = pro_check$id

From afbbde7531a9dda0bd6d937a49bdefeef5e54304 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 3 Aug 2020 07:16:54 -0500
Subject: [PATCH 1323/2289] remove db and sugarcan

---
 web/config.example.php                            |   13 -
 web/db/bety.css                                   |  234 ----
 web/db/common.php                                 | 1243 -----------------
 web/db/edit.php                                   |   45 -
 web/db/index.php                                  |   21 -
 web/db/list.php                                   |   27 -
 web/db/login.php                                  |   37 -
 web/db/logout.php                                 |    7 -
 web/db/show.php                                   |   40 -
 web/sugarcane/config/file_locations.php           |   15 -
 web/sugarcane/config/graph_variables.php          |    6 -
 web/sugarcane/config/xml_structure.txt            |   31 -
 web/sugarcane/data.xml                            |   91 --
 web/sugarcane/default/default.xml                 |  130 --
 web/sugarcane/inc/model/functions.php             |  103 --
 web/sugarcane/inc/view/main_view.php              |   58 -
 web/sugarcane/index.php                           |  132 --
 web/sugarcane/ooutput                             |  368 -----
 web/sugarcane/plot_data/output                    |  368 -----
 web/sugarcane/python/plot.py                      |   45 -
 web/sugarcane/python/png/5afafac753c4.png         |  Bin 15859 -> 0 bytes
 web/sugarcane/python/png/delete_old.sh            |   11 -
 web/sugarcane/run.sh                              |    7 -
 web/sugarcane/static/ajax-loader.gif              |  Bin 8238 -> 0 bytes
 web/sugarcane/static/bootstrap-button.js          |   96 --
 web/sugarcane/static/bootstrap-responsive.min.css |    9 -
 web/sugarcane/static/bootstrap-tab.js             |  135 --
 web/sugarcane/static/bootstrap.min.css            |    9 -
 web/sugarcane/static/bootstrap.min.js             |    6 -
 web/sugarcane/static/jquery-1.7.2.min.js          |    4 -
 web/sugarcane/static/script.js                    |   10 -
 web/sugarcane/static/style.css                    |   48 -
 web/sugarcane/sugarcane.R                         |  206 ---
 web/sugarcane/sugarcaneForBioCro.R                |  201 ---
 34 files changed, 3756 deletions(-)
 delete mode 100644 web/db/bety.css
 delete mode 100644 web/db/common.php
 delete mode 100644 web/db/edit.php
 delete mode 100644 web/db/index.php
 delete mode 100644 web/db/list.php
 delete mode 100644 web/db/login.php
 delete mode 100644 web/db/logout.php
 delete mode 100644 web/db/show.php
 delete mode 100644 web/sugarcane/config/file_locations.php
 delete mode 100755 web/sugarcane/config/graph_variables.php
 delete mode 100644 web/sugarcane/config/xml_structure.txt
 delete mode 100644 web/sugarcane/data.xml
 delete mode 100755 web/sugarcane/default/default.xml
 delete mode 100644 web/sugarcane/inc/model/functions.php
 delete mode 100644 web/sugarcane/inc/view/main_view.php
 delete mode 100755 web/sugarcane/index.php
 delete mode 100644 web/sugarcane/ooutput
 delete mode 100644 web/sugarcane/plot_data/output
 delete mode 100755 web/sugarcane/python/plot.py
 delete mode 100644 web/sugarcane/python/png/5afafac753c4.png
 delete mode 100755 web/sugarcane/python/png/delete_old.sh
 delete mode 100755 web/sugarcane/run.sh
 delete mode 100644 web/sugarcane/static/ajax-loader.gif
 delete mode 100644 web/sugarcane/static/bootstrap-button.js
 delete mode 100644 web/sugarcane/static/bootstrap-responsive.min.css
 delete mode 100644 web/sugarcane/static/bootstrap-tab.js
 delete mode 100644 web/sugarcane/static/bootstrap.min.css
 delete mode 100644 web/sugarcane/static/bootstrap.min.js
 delete mode 100644 web/sugarcane/static/jquery-1.7.2.min.js
 delete mode 100644 web/sugarcane/static/script.js
 delete mode 100644 web/sugarcane/static/style.css
 delete mode 100755 web/sugarcane/sugarcane.R
 delete mode 100644 web/sugarcane/sugarcaneForBioCro.R

diff --git a/web/config.example.php b/web/config.example.php
index 653cd64fcfb..1148ecafe16 100644
--- a/web/config.example.php
+++ b/web/config.example.php
@@ -104,20 +104,7 @@
 # of BETYDB
 $betydb="/bety";
 
-# 
---------------------------------------------------------------------- -# SIMPLE EDITING OF BETY DATABSE -# ---------------------------------------------------------------------- -# Number of items to show on a page -$pagesize = 30; - -# Location where logs should be written -$logfile = "/home/carya/output/betydb.log"; - -# uncomment the following variable to enable the simple interface -#$simpleBETY = TRUE; - # syncing details - $server_url="192.168.0.5"; // local test server $client_sceret=""; $server_auth_token=""; diff --git a/web/db/bety.css b/web/db/bety.css deleted file mode 100644 index dacb41d1759..00000000000 --- a/web/db/bety.css +++ /dev/null @@ -1,234 +0,0 @@ -#cssmenu ul, -#cssmenu li, -#cssmenu span, -#cssmenu a { - margin: 0; - padding: 0; - position: relative; -} -#cssmenu { - height: 49px; - border-radius: 5px 5px 0 0; - -moz-border-radius: 5px 5px 0 0; - -webkit-border-radius: 5px 5px 0 0; - background: #141414; - background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAAxCAIAAACUDVRzAAAAA3NCSVQICAjb4U/gAAAALElEQVQImWMwMrJi+v//PxMDw3+m//8ZoPR/qBgDEhuXGLoeYswhXg8R5gAAdVpfoJ3dB5oAAAAASUVORK5CYII=) - 100% 100%; - background: -moz-linear-gradient(top, #32323a 0%, #141414 100%); - background: -webkit-gradient( - linear, - left top, - left bottom, - color-stop(0%, #32323a), - color-stop(100%, #141414) - ); - background: -webkit-linear-gradient(top, #32323a 0%, #141414 100%); - background: -o-linear-gradient(top, #32323a 0%, #141414 100%); - background: -ms-linear-gradient(top, #32323a 0%, #141414 100%); - background: linear-gradient(to bottom, #32323a 0%, #141414 100%); - border-bottom: 2px solid #f1f1f1; -} -#cssmenu:after, -#cssmenu ul:after { - content: ""; - display: block; - clear: both; -} -#cssmenu a { - background: #141414; - background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAAxCAIAAACUDVRzAAAAA3NCSVQICAjb4U/gAAAALElEQVQImWMwMrJi+v//PxMDw3+m//8ZoPR/qBgDEhuXGLoeYswhXg8R5gAAdVpfoJ3dB5oAAAAASUVORK5CYII=) - 100% 100%; - background: -moz-linear-gradient(top, #32323a 0%, #141414 100%); - background: -webkit-gradient( - linear, - left top, - left bottom, - color-stop(0%, #32323a), - color-stop(100%, #141414) - ); - background: -webkit-linear-gradient(top, #32323a 0%, #141414 100%); - background: -o-linear-gradient(top, #32323a 0%, #141414 100%); - background: -ms-linear-gradient(top, #32323a 0%, #141414 100%); - background: linear-gradient(to bottom, #32323a 0%, #141414 100%); - color: #ffffff; - display: inline-block; - font-family: Helvetica, Arial, Verdana, sans-serif; - font-size: 12px; - line-height: 49px; - padding: 0 20px; - text-decoration: none; -} -#cssmenu ul { - list-style: none; -} -#cssmenu > ul { - float: left; -} -#cssmenu > ul > li { - float: left; -} -#cssmenu > ul > li:hover:after { - content: ""; - display: block; - width: 0; - height: 0; - position: absolute; - left: 50%; - bottom: 0; - border-left: 10px solid transparent; - border-right: 10px solid transparent; - border-bottom: 10px solid #f1f1f1; - margin-left: -10px; -} -#cssmenu > ul > li:first-child > a { - border-radius: 5px 0 0 0; - -moz-border-radius: 5px 0 0 0; - -webkit-border-radius: 5px 0 0 0; -} -#cssmenu > ul > li:last-child > a { - border-radius: 0 5px 0 0; - -moz-border-radius: 0 5px 0 0; - -webkit-border-radius: 0 5px 0 0; -} -#cssmenu > ul > li.active > a { - box-shadow: inset 0 0 3px #000000; - -moz-box-shadow: inset 0 0 3px #000000; - -webkit-box-shadow: inset 0 0 3px #000000; - background: #070707; - background: 
url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAAxCAIAAACUDVRzAAAAA3NCSVQICAjb4U/gAAAALklEQVQImWNQU9Nh+v//PxMDw3+m//8ZkNj/mRgYIHxy5f//Z0BSi18e2TwS5QG4MGB54HL+mAAAAABJRU5ErkJggg==) - 100% 100%; - background: -moz-linear-gradient(top, #26262c 0%, #070707 100%); - background: -webkit-gradient( - linear, - left top, - left bottom, - color-stop(0%, #26262c), - color-stop(100%, #070707) - ); - background: -webkit-linear-gradient(top, #26262c 0%, #070707 100%); - background: -o-linear-gradient(top, #26262c 0%, #070707 100%); - background: -ms-linear-gradient(top, #26262c 0%, #070707 100%); - background: linear-gradient(to bottom, #26262c 0%, #070707 100%); -} -#cssmenu > ul > li:hover > a { - background: #070707; - background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAAxCAIAAACUDVRzAAAAA3NCSVQICAjb4U/gAAAALklEQVQImWNQU9Nh+v//PxMDw3+m//8ZkNj/mRgYIHxy5f//Z0BSi18e2TwS5QG4MGB54HL+mAAAAABJRU5ErkJggg==) - 100% 100%; - background: -moz-linear-gradient(top, #26262c 0%, #070707 100%); - background: -webkit-gradient( - linear, - left top, - left bottom, - color-stop(0%, #26262c), - color-stop(100%, #070707) - ); - background: -webkit-linear-gradient(top, #26262c 0%, #070707 100%); - background: -o-linear-gradient(top, #26262c 0%, #070707 100%); - background: -ms-linear-gradient(top, #26262c 0%, #070707 100%); - background: linear-gradient(to bottom, #26262c 0%, #070707 100%); - box-shadow: inset 0 0 3px #000000; - -moz-box-shadow: inset 0 0 3px #000000; - -webkit-box-shadow: inset 0 0 3px #000000; -} -#cssmenu > ul > li ul a { - color: #333; -} -#cssmenu .has-sub { - z-index: 1; -} -#cssmenu .has-sub:hover > ul { - display: block; -} -#cssmenu .has-sub ul { - display: none; - position: absolute; - width: 200px; - top: 100%; - left: 0; -} -#cssmenu .has-sub ul li { - *margin-bottom: -1px; -} -#cssmenu .has-sub ul li a { - background: #f1f1f1; - border-bottom: 1px dotted #f7f7f7; - filter: none; - font-size: 11px; - display: block; - line-height: 120%; - padding: 10px; -} -#cssmenu .has-sub ul li:hover a { - background: #d8d8d8; -} -#cssmenu .has-sub .has-sub:hover > ul { - display: block; -} -#cssmenu .has-sub .has-sub ul { - display: none; - position: absolute; - left: 100%; - top: 0; -} -#cssmenu .has-sub .has-sub ul li a { - background: #d8d8d8; - border-bottom: 1px dotted #e7e7e7; -} -#cssmenu .has-sub .has-sub ul li a:hover { - background: #bebebe; -} - -#list { - display: table; - width: 100%; -} -#list .row { - display: table-row; - width: 100%; -} -#list .row:nth-child(even) { - background: #eee; -} -#list .row:nth-chard(odd) { - background: #fff; -} -#list .hdr { - background: #aaa; - display: table-cell; - text-align: left; - font-weight: bold; -} -#list .col { - display: table-cell; -} -#list .id { - width: 100px; -} - -#editor { - display: table; - width: 100%; -} -#editor .row { - display: table-row; - width: 100%; -} -#editor .key { - display: table-cell; - width: 10%; - vertical-align: center; -} -#editor .val { - display: table-cell; - width: 90%; -} - -.val input, -textarea, -select { - -webkit-box-sizing: border-box; - -moz-box-sizing: border-box; - box-sizing: border-box; - width: 100%; - text-align: left; -} diff --git a/web/db/common.php b/web/db/common.php deleted file mode 100644 index 6b02fb6fafd..00000000000 --- a/web/db/common.php +++ /dev/null @@ -1,1243 +0,0 @@ - array( - "section" => "BETY", - "list" => "SELECT id, author, title FROM citations", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "dbfiles" => array( - "section" 
=> "BETY", - "list" => "SELECT dbfiles.id as id, machines.hostname as machine, dbfiles.file_path as filepath, dbfiles.file_name as filename FROM dbfiles, machines WHERE dbfiles.machine_id=machines.id", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "formats" => array( - "section" => "BETY", - "list" => "SELECT id, name, mime_type FROM formats", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "inputs" => array( - "section" => "BETY", - "list" => "SELECT inputs.id AS id, inputs.name AS name, CONCAT(sitename, ', ', city, ', ', state, ', ', country) as site, count(dbfiles.id) AS files FROM inputs". - " LEFT JOIN sites ON sites.id = inputs.site_id" . - " LEFT JOIN dbfiles ON dbfiles.container_id = inputs.id AND dbfiles.container_type='Input'" . - " GROUP BY inputs.id, site", - "files" => TRUE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "sites" => array( - "section" => "BETY", - "list" => "SELECT id, sitename, city, state, country FROM sites", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "species" => array( - "section" => "BETY", - "list" => "SELECT id, genus, species, scientificname FROM species", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "traits" => array( - "section" => "BETY", - "list" => "SELECT traits.id AS id, CONCAT(sitename, ', ', city, ', ', state, ', ', country) as site, species.commonname AS speciename, statname FROM traits". - " LEFT JOIN sites ON sites.id = traits.site_id" . - " LEFT JOIN species ON species.id = traits.specie_id", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "users" => array( - "section" => "BETY", - "list" => "SELECT id, login, name, email FROM users", - "files" => FALSE, - "level" => array( - "show" => 1, - "edit" => 1, - ) - ), - "ensembles" => array( - "section" => "PEcAn", - "list" => "SELECT id, runtype, workflow_id FROM ensembles", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "models" => array( - "section" => "PEcAn", - "list" => "SELECT models.id, models.model_name AS name, models.revision, count(dbfiles.id) AS files FROM models" . - " LEFT JOIN dbfiles ON dbfiles.container_id = models.id AND dbfiles.container_type='Model'" . 
- " GROUP BY models.id", - "files" => TRUE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "runs" => array( - "section" => "PEcAn", - "list" => "SELECT workflows.id as id, CONCAT(sitename, ', ', city, ', ', state, ', ', country) as site, models.model_name as model, workflows.started_at as start_date, workflows.finished_at as end_date FROM workflows, sites, models WHERE workflows.site_id=sites.id AND workflows.model_id=models.id", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ), - "workflows" => array( - "section" => "PEcAn", - "list" => "SELECT runs.id as id, CONCAT(sitename, ', ', city, ', ', state, ', ', country) AS site, CONCAT(model_name, ' (', revision, ')') AS model, runs.started_at as start_date, runs.finished_at as end_date FROM runs, sites, models WHERE runs.site_id=sites.id AND runs.model_id=models.id", - "files" => FALSE, - "level" => array( - "show" => 4, - "edit" => 3, - ) - ) -); - -# make sure we do a session start -session_start(); - -if (!isset($simpleBETY) || !$simpleBETY) { - header( "Location: ../index.php"); - exit; -} - -# ---------------------------------------------------------------------- -# DATABASE FUNCTIONS -# ---------------------------------------------------------------------- -function open_database() { - global $db_bety_hostname; - global $db_bety_username; - global $db_bety_password; - global $db_bety_database; - global $db_bety_type; - global $pdo; - - $pdo = new PDO("${db_bety_type}:host=${db_bety_hostname};dbname=${db_bety_database}", $db_bety_username, $db_bety_password); -} - -function close_database() { - global $pdo; - $pdo = null; -} - -function error_database() { - global $pdo; - $tmp = $pdo->errorInfo(); - return $tmp[2]; -} - -function column_names($table) { - global $pdo; - - $rs = $pdo->query("SELECT * FROM ${table} LIMIT 0"); - for ($i = 0; $i < $rs->columnCount(); $i++) { - $col = $rs->getColumnMeta($i); - $columns[$col['name']] = $col['native_type']; - } - $rs->closeCursor(); - return $columns; -} - -# ---------------------------------------------------------------------- -# COMMON HTML FUNCTIONS -# ---------------------------------------------------------------------- -function print_header($table="") { - global $sections; - - open_database(); - - print "\n"; - print "\n"; - if ($table == "") { - print "PEcAn DB\n"; - } else { - print "PEcAn DB [{$sections[$table]['section']}/{$table}]\n"; - } - print "\n"; - print "\n"; - print "\n"; -} - -function print_footer($msg="") { - if ($msg != "") { - print "
\n"; - print "$msg\n"; - } - print "
\n"; - print "\n"; - print "\n"; - print "\n"; - print "\n"; - - close_database(); -} - -function print_menu($active) { - global $sections; - - $menu=array("Home" => "index.php"); - - foreach($sections as $key => $entry) { - $section = $entry['section']; - # make sure there is an entry for the section - if (empty($menu[$section])) { - $menu[$section] = array(); - } - - # Add the entry - if (get_page_acccess_level() <= $entry['level']['show']) { - $menu[$section][$key]["#"] = "list.php?table={$key}"; - - # if edit capabilities add new option - if (get_page_acccess_level() <= $entry['level']['edit']) { - $menu[$section][$key]["New"] = "edit.php?table={$key}&id=-1"; - } - } - } - - if (check_login()) { - $menu[get_user_name()] = array( - "Edit" => "edit.php?table=users&id=" . get_userid(), - "Logout" => "logout.php", - ); - } else { - $menu["Login"] = "login.php"; - } - - print "
\n"; - print_menu_entry($active, $menu); - print "

\n"; -} - -function print_menu_entry($active, $menu) { - $keys = array_keys($menu); - $last = end($keys); - print "
    \n"; - foreach($menu as $key => $val) { - if ($key == "#") { - continue; - } - $class = ""; - if ($active == $key) { - $class .= " active"; - } - if (is_array($val)) { - $class .= " has-sub"; - } - if ($last == $key) { - $class .= " last"; - } - $class=trim($class); - if ($class != "") { - print "
  • "; - } else { - print "
  • "; - } - if (is_array($val)) { - if (array_key_exists("#", $val)) { - $url = $val['#']; - } else { - $url = "#"; - } - print "$key"; - } else if ($val != "") { - print "$key"; - } else { - print "$key"; - } - if (is_array($val)) { - print "\n"; - print_menu_entry($active, $val); - } - print "
  • \n"; - } - print "
\n"; -} - -# ---------------------------------------------------------------------- -# USER FUNCTIONS -# ---------------------------------------------------------------------- - -function login($username, $password) { - global $pdo; - - if (isset($_SESSION['userid'])) { - return TRUE; - } - - $q=$pdo->prepare("SELECT * FROM users WHERE login=:username"); - $q->bindParam(':username', $username, PDO::PARAM_STR); - if ($q->execute() === FALSE) { - die('Invalid query : ' . error_database()); - } - $row = $q->fetch(PDO::FETCH_ASSOC); - $q->closeCursor(); - - if (!isset($row['salt'])) { - return FALSE; - } - - $digest = encrypt_password($password, $row['salt']); - - if ($digest == $row['crypted_password']) { - $_SESSION['userid']=$row['id']; - $_SESSION['username']=$row['name']; - $_SESSION['useraccess']=$row['access_level']; - $_SESSION['userpageaccess']=$row['page_access_level']; - return TRUE; - } else { - return FALSE; - } -} - -function encrypt_password($password, $salt) { - global $REST_AUTH_SITE_KEY; - global $REST_AUTH_DIGEST_STRETCHES; - - $digest=$REST_AUTH_SITE_KEY; - for($i=0; $i<$REST_AUTH_DIGEST_STRETCHES; $i++) { - $digest=sha1($digest . "--" . $salt . "--" . $password . "--" . $REST_AUTH_SITE_KEY); - } - return $digest; -} - -function logout() { - unset($_SESSION['userid']); - unset($_SESSION['username']); - unset($_SESSION['useraccess']); - unset($_SESSION['userpageaccess']); -} - -function get_userid() { - if (isset($_SESSION['userid'])) { - return $_SESSION['userid']; - } else { - return -1; - } -} - -function check_login() { - return isset($_SESSION['userid']); -} - -function get_user_name() { - if (isset($_SESSION['username'])) { - return $_SESSION['username']; - } else { - return FALSE; - } -} - -function get_acccess_level() { - global $anonymous_level; - if (isset($_SESSION['useraccess'])) { - return $_SESSION['useraccess']; - } else { - return $anonymous_level; - } -} - -function get_page_acccess_level() { - global $anonymous_page; - if (isset($_SESSION['userpageaccess'])) { - return $_SESSION['userpageaccess']; - } else { - return $anonymous_page; - } -} - -# ---------------------------------------------------------------------- -# LIST PAGE FUNCTIONS -# ---------------------------------------------------------------------- - -function print_list($table, $query) { - global $pagesize; - global $sections; - global $pdo; - - $idkey = 'id'; - - # handle any information send in inputs form - $msg = ""; - if (isset($_REQUEST['action'])) { - if ($_REQUEST['action'] == "delete") { - if (($table != "users") || ((get_page_acccess_level() == 1) && ($_REQUEST['kill'] != get_userid()))) { - $q = "DELETE FROM $table WHERE ${idkey}={$_REQUEST['kill']};"; - if ($pdo->query($q) === FALSE) { - $msg = "Error updating database : " . error_database() . "
{$q}"; - editor_log("FAIL", $q); - } else { - $msg .= "Removed {$_REQUEST['kill']} from {$table}"; - editor_log("OK", $q); - } - } - } - } - - if (isset($_REQUEST['pagesize'])) { - $pagesize = $_REQUEST['pagesize']; - } - - # fix access_level - if (($table != "users") && (get_page_acccess_level() > 1)) { - if (in_array('access_level', array_keys(column_names($table)))) { - $pos = stripos($query, "WHERE"); - if ($pos !== false) { - $head = substr($query, 0, $pos + 5); - $tail = substr($query, $pos + 6); - $query = "$head (access_level >= " . get_acccess_level() . " OR access_level IS NULL) AND $tail"; - } else { - $pos = stripos($query, "group"); - if ($pos ) { - $head = substr($query, 0, $pos); - $tail = substr($query, $pos); - $query = "$head WHERE (access_level >= " . get_acccess_level() . " OR access_level IS NULL) $tail"; - } else { - $query .= " WHERE (access_level >= " . get_acccess_level() . " OR access_level IS NULL)"; - } - } - } - } - - # get the input that we want to show - if (isset($_REQUEST['page'])) { - $current = $_REQUEST['page']; - } else { - $current = 1; - } - $result = $pdo->query($query . " ORDER BY $idkey LIMIT $pagesize OFFSET " . (($current - 1) * $pagesize)); - if (!$result) { - die("Invalid query : $query " . error_database()); - } - - - print "
\n"; - print "
\n"; - print "
Action
\n"; - for ($i = 0; $i < $result->columnCount(); $i++) { - $md = $result->getColumnMeta($i); - $key = $md['name']; - if ($key == $idkey) { - print "
$key
\n"; - } - } - for ($i = 0; $i < $result->columnCount(); $i++) { - $md = $result->getColumnMeta($i); - $key = $md['name']; - if ($key != $idkey) { - print "
$key
\n"; - } - } - print "
\n"; - - while($row = @$result->fetch(PDO::FETCH_ASSOC)) { - print "
\n"; - if (array_key_exists($idkey, $row)) { - print "
"; - print "S "; - if (get_page_acccess_level() <= $sections[$table]['level']['edit']) { - print "E "; - } - if (get_page_acccess_level() <= $sections[$table]['level']['edit'] && (($table != "users") || ($row[$idkey] != get_userid()))) { - $url="list.php?table={$table}&page={$current}&action=delete&kill={$row[$idkey]}"; - print "D "; - } - print "
\n"; - print "
{$row[$idkey]}
\n"; - } - foreach ($row as $key => $value) { - if ($key != $idkey) { - print "
$value
\n"; - } - } - print "
\n"; - } - print "
\n"; - - $result->closeCursor(); - - print_pages($current, $pagesize, $query, $table); - - return $msg; -} - -function print_pages($current, $pagesize, $query, $table) { - global $pdo; - - # count items - $result = $pdo->query($query); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $count = $result->rowCount(); - $result->closeCursor(); - - if ($count <= $pagesize) { - return; - } - - $pages = ""; - if ($count > 0) { - $numpages = ceil($count / $pagesize); - - if ($numpages <= 15) { - for ($i=1; $i<$numpages+1; $i++) { - if ($i == $current) { - $pages .= " $i "; - } else { - $pages .= " $i "; - } - } - } else { - if ($current < 8) { - for ($i=1; $i<12; $i++) { - if ($i == $current) { - $pages .= " $i "; - } else { - $pages .= " $i "; - } - } - $pages .= "..."; - for ($i=$numpages-2; $i<$numpages; $i++) { - $pages .= " $i "; - } - } else { - for ($i=1; $i<3; $i++) { - $pages .= " $i "; - } - $pages .= "..."; - if ($current > ($numpages - 7)) { - for ($i=$numpages-10; $i<$numpages; $i++) { - if ($i == $current) { - $pages .= " $i "; - } else { - $pages .= " $i "; - } - } - } else { - for ($i=$current-4; $i<$current+5; $i++) { - if ($i == $current) { - $pages .= " $i "; - } else { - $pages .= " $i "; - } - } - $pages .= "..."; - for ($i=$numpages-2; $i<$numpages; $i++) { - $pages .= " $i "; - } - } - } - } - } - - print "

"; - if ($pages != "") { - if ($current > 1) { - $pages = "< $pages"; - } else { - $pages = "< $pages"; - } - if ($current < $numpages) { - $pages = "$pages >"; - } else { - $pages = "> $pages"; - } - print "

$pages
"; - } - print "

\n"; -} - -# ---------------------------------------------------------------------- -# SHOW FILES ASSOCIATED -# ---------------------------------------------------------------------- - -function show_files($id, $table, $readonly) { - global $pdo; - global $sections; - - $type = ucfirst(strtolower(substr($table, 0, -1))); - - # process the form - $msg = ""; - if (!$readonly && isset($_REQUEST['action'])) { - if ($_REQUEST['action'] == "add") { - $query = "UPDATE dbfiles SET container_id={$id}, container_type='{$type}' WHERE id={$_REQUEST['dbid']};"; - if ($pdo->query($query) === FALSE) { - $msg = "Error updating database : [" . error_database() . "] " . $pdo->errorInfo($pdo) . "
"; - editor_log("FAIL", $query); - } else { - $msg .= "Added dbfiles={$_REQUEST['dbid']} to inputs={$_REQUEST['id']}
\n"; - editor_log("OK", $query); - } - } - - if ($_REQUEST['action'] == "del") { - $query = "UPDATE dbfiles SET container_id=NULL WHERE id={$_REQUEST['dbid']};"; - if ($pdo->query($query) === FALSE) { - $msg = "Error updating database : [" . error_database() . "] " . $pdo->errorInfo($pdo) . "
"; - editor_log("FAIL", $query); - } else { - $msg .= "Removed dbfiles={$_REQUEST['dbid']} from inputs={$_REQUEST['id']}
\n"; - editor_log("OK", $query); - } - } - } - - print "
\n"; - print " Existing Files
\n"; - - # get the input that we want to show - $query="SELECT dbfiles.id, concat(machines.hostname, ':', dbfiles.file_path, '/', dbfiles.file_name) as filename" . - " FROM dbfiles, machines" . - " WHERE dbfiles.container_id={$id} AND dbfiles.container_type='{$type}' AND machines.id=dbfiles.machine_id;"; - $result = $pdo->query($query); - if (!$result) { - die("Invalid query [$query] " . $pdo->errorInfo($pdo)); - } - if ($result->rowCount() > 0) { - print " \n"; - print "
\n"; - print "
\n"; - print "
\n"; - print " S\n"; - if (!$readonly) { - print " E\n"; - print " D\n"; - } - print "
\n"; - print "
\n"; - print " \n"; - print "
\n"; - print "
\n"; - print "
\n"; - } - - if (!$readonly) { - print "
\n"; - print " Add a Files
\n"; - - # get the input that we want to show - $query="SELECT dbfiles.id, concat(machines.hostname, ':', dbfiles.file_path, '/', dbfiles.file_name) as filename" . - " FROM dbfiles, machines" . - " WHERE dbfiles.container_id IS NULL AND machines.id=dbfiles.machine_id;"; - $result = $pdo->query($query); - if (!$result) { - die("Invalid query [$query] " . $pdo->errorInfo($pdo)); - } - if ($result->rowCount() > 0) { - print "
\n"; - print "
\n"; - print "
\n"; - print " S\n"; - if (!$readonly) { - print " E\n"; - print " A\n"; - } - print "
\n"; - print "
\n"; - print " \n"; - print "
\n"; - print "
\n"; - print "
\n"; - } - } - - return $msg; -} - - - -# ---------------------------------------------------------------------- -# EDIT PAGE FUNCTIONS -# ---------------------------------------------------------------------- - -function editor_log($status, $query) { - global $logfile; - if (is_writeable($logfile)) { - file_put_contents($logfile, date("c") . "\t${status}\t" . get_userid() . "\t" . get_user_name() . "\t${query}\n", FILE_APPEND); - } -} - -function editor_update($id, $table) { - global $pdo; - - # get the row from the database (this can be empty) - $result = $pdo->query("SELECT * FROM $table WHERE id=$id;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_ASSOC); - $result->closeCursor(); - - $msg = ""; - $set = ""; - foreach($_REQUEST as $key => $val) { - if ($key == "id") continue; - if ($key == "table") continue; - if ($key == "action") continue; - - $pre = substr($key, 0, 1); - $key = substr($key, 2); - - if ($val == $row[$key]) { - continue; - } - - if ($pre == 's') { - if ($set != "") { - $set .= ", "; - } - if ($val == "") { - $set .= " $key=NULL"; - } else { - $set .= " $key=" . $pdo->quote($val); - } - } else if ($pre == 'n') { - if ($set != "") { - $set .= ", "; - } - $set .= " $key=" . $pdo->quote($val); - } else if ($pre == 'b') { - if ($set != "") { - $set .= ", "; - } - if ($val == 'on') { - $set .= " $key=true"; - } else { - $set .= " $key=false"; - } - } else if ($pre == 'u') { - if ($set != "") { - $set .= ", "; - } - $set .= " $key=NOW()"; - } else if ($pre == 'p') { - $salt = $_REQUEST['s_salt']; - if (($val != "") && ($salt != "")) { - $val = encrypt_password($val, $salt); - if ($set != "") { - $set .= ", "; - } - $set .= " $key=" . $pdo->quote($val); - } - } - } - if ($set != "") { - if ($id == "-1") { - $query = "INSERT INTO $table SET $set;"; - $pdo->query($query, $param); - $id = $pdo->lastInsertId(); - if (!$pdo->query($query)) { - $msg = "Error updating database : " . error_database() . "
$query"; - editor_log("FAIL", $query); - } else { - $msg .= "Added into $table table id=$id
\n"; - editor_log("OK", $query); - } - } else { - $query = "UPDATE $table SET $set WHERE id=$id;"; - if (!$pdo->query($query)) { - $msg = "Error updating database : " . error_database() . "
$query"; - editor_log("FAIL", $query); - } else { - $msg .= "Updated $table table for id=$id
\n"; - editor_log("OK", $query); - } - } - } else { - if ($id == "-1") { - $result = $pdo->query("SELECT id FROM $table ORDER BY id ASC LIMIT 1;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_ASSOC); - $id = $row[0]; - $result->closeCursor(); - $msg .= "No data entered showing first $table.
\n"; - } else { - $msg .= "Nothing changed.
\n"; - } - } - - return $msg; -} - -function print_entry($id, $table, $readonly) { - global $pdo; - global $sections; - - $edit = false; - if ($readonly) { - if ($sections[$table]['level']['show'] < get_page_acccess_level() && ($table != "users" || $id != get_userid())) { - #header("Location: index.php"); - die("no access"); - } - if ($sections[$table]['level']['edit'] >= get_page_acccess_level() || ($table == "users" && $id == get_userid())) { - $edit = true; - } - } else { - if ($sections[$table]['level']['edit'] < get_page_acccess_level() && ($table != "users" || $id != get_userid())) { - #header("Location: index.php"); - die("no access"); - } - } - - # update database - $msg = ""; - if (!$readonly && isset($_REQUEST['action']) && $_REQUEST['action'] == "update") { - $msg = editor_update($id, $table); - } - - # print navigation - print_prev_next($id, $table, $edit); - - if ($readonly) { - $disabled = "disabled"; - } else { - $disabled = ""; - } - - # get the row from the database (this can be empty) - $result = $pdo->query("SELECT * FROM $table WHERE id=$id;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_ASSOC); - $result->closeCursor(); - - # check access_level - if (is_array($row) && array_key_exists('access_level', $row) && ($row['access_level'] != "") && ($row['access_level'] != "-1")) { - if (get_acccess_level() > $row['access_level']) { - header("Location: index.php"); - return; - } - } - - if (!$readonly) { - print "
\n"; - print "\n"; - print "\n"; - } - print "
\n"; - - foreach(column_names($table) as $key => $type) { - if ($key == "id") { - $fancykey = $key; - } else { - $fancykey = ucwords(str_replace("_", " ", str_replace("_id", "", $key))); - } - if (is_array($row) && array_key_exists($key, $row)) { - $val = $row[$key]; - } else { - $val = ""; - } - - if (substr($val, 0, 4) == "http") { - $fancykey = "$fancykey"; - } - - print "
\n"; - if ($key == "id") { - if ($id == -1) { - $val = "new entry"; - } - print "
$fancykey
\n"; - print "
\n"; - } else if ($key == "created_at") { - if ($id == -1) { - $val = 'now'; - print "\n"; - } - print "
$fancykey
\n"; - print "
\n"; - } else if ($key == "updated_at") { - if ($id != -1) { - print "\n"; - } - print "
$fancykey
\n"; - print "
\n"; - } else if ($key == "site_id") { - print "
$fancykey
\n"; - print "
\n"; - print_sites_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "model_id") { - print "
$fancykey
\n"; - print "
\n"; - print_models_options("n_$key", $val, $readonly); - print "
\n"; - } else if (($key == "user_id") || ($key == "created_user_id") || ($key == "updated_user_id")) { - print "
$fancykey
\n"; - print "
\n"; - print_users_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "machine_id") { - print "
$fancykey
\n"; - print "
\n"; - print_machines_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "format_id") { - print "
$fancykey
\n"; - print "
\n"; - print_formats_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "citation_id") { - print "
$fancykey
\n"; - print "
\n"; - print_citations_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "specie_id") { - print "
$fancykey
\n"; - print "
\n"; - print_species_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "variable_id") { - print "
$fancykey
\n"; - print "
\n"; - print_variables_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "treatment_id") { - print "
$fancykey
\n"; - print "
\n"; - print_treatments_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "cultivar_id") { - print "
$fancykey
\n"; - print "
\n"; - print_cultivars_options("n_$key", $val, $readonly); - print "
\n"; - } else if ($key == "page_access_level") { - if (get_page_acccess_level() == 1) { - $sel_readonly=$readonly; - } else { - $sel_readonly=true; - } - print "
$fancykey
\n"; - print "
\n"; - print_select_array_options("n_$key", $val, $sel_readonly, array( - "1" => "Administrator", - "2" => "Manager", - "3" => "Creator", - "4" => "Viewer")); - print "
\n"; - } else if ($key == "access_level") { - if ((get_page_acccess_level() == 1) || ($val == "") || ($val == "-1")) { - $sel_readonly=$readonly; - } else { - $sel_readonly=true; - } - print "
$fancykey
\n"; - print "
\n"; - print_select_array_options("n_$key", $val, $sel_readonly, array( - "1" => "Restricted", - "2" => "Internal EBI & Collaborators", - "3" => "Creator", - "4" => "Viewer")); - print "
\n"; - } else if ($key == "salt") { - if ($id == -1) { - $val = uniqid("", true); - print "\n"; - } else { - print "\n"; - } - print "
$fancykey
\n"; - print "
\n"; - } else if ($key == "crypted_password") { - print "
$fancykey
\n"; - print "
\n"; - } else if (stristr($type, "text")) { - print "
$fancykey
\n"; - print "
\n"; - } else if (stristr($type, "tinyint")) { - print "
$fancykey
\n"; - print "
\n"; - } else { - print "
$fancykey
\n"; - print "
\n"; - } - print "
\n"; - - } - - if (!$readonly) { - print "
\n"; - print "
\n"; - print "
\n"; - print "
\n"; - } - print "
\n"; - if (!$readonly) { - print "\n"; - } - - return $msg; -} - -function print_prev_next($id, $table, $edit=false) { - global $pdo; - - print "
\n"; - print "All"; - - $and = ""; - $where = ""; - if (in_array('access_level', column_names($table))) { - $and = " AND (access_level >= " . get_acccess_level() . " OR access_level IS NULL)"; - $where = " WHERE (access_level >= " . get_acccess_level() . " OR access_level IS NULL)"; - } - - $result = $pdo->query("SELECT id FROM {$table} WHERE id < ${id} ${and} ORDER BY id DESC LIMIT 1;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_NUM); - $prev = $row[0]; - $result->closeCursor(); - if ($prev == "") { - $result = $pdo->query("SELECT id FROM {$table} ${where} ORDER BY id DESC LIMIT 1;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_NUM); - $prev = $row[0]; - $result->closeCursor(); - } - print " Prev"; - - $result = $pdo->query("SELECT id FROM $table WHERE id > ${id} ${and} ORDER BY id ASC LIMIT 1;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_NUM); - $next = $row[0]; - $result->closeCursor(); - if ($next == "") { - $result = $pdo->query("SELECT id FROM $table ${where} ORDER BY id ASC LIMIT 1;"); - if (!$result) { - die('Invalid query : ' . error_database()); - } - $row = $result->fetch(PDO::FETCH_NUM); - $next = $row[0]; - $result->closeCursor(); - } - print " Next"; - - if ($edit) { - print " Edit\n"; - } else if (strpos($_SERVER['SCRIPT_NAME'], 'show.php') === false) { - print " Show\n"; - } - - print "

\n"; -} - -function print_users_options($name, $myid, $readonly=false) { - $query = "SELECT id, CONCAT(name, ' <', email, '>') AS name FROM users"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_machines_options($name, $myid, $readonly=false) { - $query = "SELECT id, hostname AS name FROM machines"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_formats_options($name, $myid, $readonly=false) { - $query = "SELECT id, name FROM formats"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_sites_options($name, $myid, $readonly=false) { - $query = "SELECT id, CONCAT(coalesce(sitename, ''), ', ', coalesce(city, ''), ', ', coalesce(state, ''), ', ', coalesce(country, '')) AS name FROM sites"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_models_options($name, $myid, $readonly=false) { - $query = "SELECT id, CONCAT(coalesce(model_name, ''), ' ', coalesce(revision, ')') AS name FROM models"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_citations_options($name, $myid, $readonly=false) { - $query = "SELECT id, CONCAT(coalesce(author, ''), ' \"', coalesce(title, ''), '\" ') AS name FROM citations"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_species_options($name, $myid, $readonly=false) { - $query = "SELECT id, scientificname AS name FROM species"; - if ($readonly) { - print_select_options($name, $myid, $readonly, $query); - } else { - if ($myid == -1) { - $values = array(); - } else { - $values = array($myid => "Current value " . $myid); - } - print_select_array_options($name, $myid, $readonly, $values); - } -} - -function print_variables_options($name, $myid, $readonly=false) { - $query = "SELECT id, name FROM variables"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_treatments_options($name, $myid, $readonly=false) { - $query = "SELECT id, name FROM treatments"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_cultivars_options($name, $myid, $readonly=false) { - $query = "SELECT id, name FROM cultivars"; - print_select_options($name, $myid, $readonly, $query); -} - -function print_select_options($name, $myid, $readonly, $query) { - global $pdo; - - if ($readonly) { - if ($myid == "") { - $query .= " WHERE id=-1"; - } else { - $query .= " WHERE id=${myid}"; - } - } - $result = $pdo->query($query . " ORDER BY name"); - if (!$result) { - die('Invalid query "' . $query . '" : [' . error_database() . ']' . error_database()); - } - - if ($readonly) { - print "\n"; - } - $html = ""; - $foundit = false; - while($row = @$result->fetch(PDO::FETCH_ASSOC)) { - $name = $row['name']; - if ($name == "") { - $name = "NO NAME {$row['id']}"; - } - if ($myid == $row['id']) { - $html .= "\n"; - $foundit = true; - } else if (!$readonly) { - $html .= "\n"; - } - } - if (! $foundit) { - if (($myid == "") || ($myid == "-1")) { - $html = "\n" . $html; - } else { - $html = "\n" . $html; - } - } - print $html; - print "\n"; - - $result->closeCursor(); -} - -function print_select_array_options($name, $myid, $readonly, $values) { - if ($readonly) { - print "\n"; - } - $html = ""; - $foundit = false; - foreach ($values as $key => $val) { - if ($myid == $key) { - $html .= "\n"; - $foundit = true; - } else if (!$readonly) { - $html .= "\n"; - } - } - if (! $foundit) { - if (($myid == "") || ($myid == "-1")) { - $html = "\n" . $html; - } else { - $html = "\n" . 
$html; - } - } - print $html; - print "\n"; -} - -# ---------------------------------------------------------------------- -# COMMON FUNCTIONS -# ---------------------------------------------------------------------- -function starts_with($haystack, $needle) { - return !strncmp($haystack, $needle, strlen($needle)); -} -?> diff --git a/web/db/edit.php b/web/db/edit.php deleted file mode 100644 index d7454b69f9c..00000000000 --- a/web/db/edit.php +++ /dev/null @@ -1,45 +0,0 @@ -{$tmp}"; - } -} - -# print footer of html -print_footer($msg); -?> diff --git a/web/db/index.php b/web/db/index.php deleted file mode 100644 index f23d3943332..00000000000 --- a/web/db/index.php +++ /dev/null @@ -1,21 +0,0 @@ - - -

Welcome

- -

Welcome to the EBI Biofuel Ecophysiological Traits and Yields Database, -BETY-DB. The emerging biofuel industry may aid in reducing greenhouse gas -emissions and decreasing dependence on foreign oil importation. How to -develop and implement biofuel crops in an ecologically and economically -sustainable way requires evaluating the growth and functionality of biofuel -crops from the local scale to the regional scale. BETY-DB has been -developed to support research, agriculture, and policy by synthesizing -available information on potential biofuel crops.

- - diff --git a/web/db/list.php b/web/db/list.php deleted file mode 100644 index d5c27d3635d..00000000000 --- a/web/db/list.php +++ /dev/null @@ -1,27 +0,0 @@ - $section['level']['show']) { - #header("Location: index.php"); - die("Not authorized."); -} - -# print top -print_header($table); -print_menu($section); - -# create query to show things -$msg = print_list($table, $sections[$table]['list']); - -# print footer of html -print_footer($msg); -?> diff --git a/web/db/login.php b/web/db/login.php deleted file mode 100644 index ee28b9b8a3c..00000000000 --- a/web/db/login.php +++ /dev/null @@ -1,37 +0,0 @@ - - -
-
-
-
Login
-
-
-
-
Password
-
-
-
-
-
-
-
-
-
-
diff --git a/web/db/logout.php b/web/db/logout.php
deleted file mode 100644
index eb25e2823bf..00000000000
--- a/web/db/logout.php
+++ /dev/null
@@ -1,7 +0,0 @@
-
diff --git a/web/db/show.php b/web/db/show.php
deleted file mode 100644
index daed22f93fc..00000000000
--- a/web/db/show.php
+++ /dev/null
@@ -1,40 +0,0 @@
-
diff --git a/web/sugarcane/config/file_locations.php b/web/sugarcane/config/file_locations.php
deleted file mode 100644
index 2f402e3e144..00000000000
--- a/web/sugarcane/config/file_locations.php
+++ /dev/null
@@ -1,15 +0,0 @@
-
diff --git a/web/sugarcane/config/graph_variables.php b/web/sugarcane/config/graph_variables.php
deleted file mode 100755
index 12de285d95b..00000000000
--- a/web/sugarcane/config/graph_variables.php
+++ /dev/null
@@ -1,6 +0,0 @@
-
diff --git a/web/sugarcane/config/xml_structure.txt b/web/sugarcane/config/xml_structure.txt
deleted file mode 100644
index eeb0ce6d04a..00000000000
--- a/web/sugarcane/config/xml_structure.txt
+++ /dev/null
@@ -1,31 +0,0 @@
-# This file defines the structure of the XML file
-# Rules:
-# 1. Lines starting with a "#" are comments and will be ignored
-# 2. Indentations are ignored
-# 3. Tags (starting with "<" and ending with ">") will be kept the same
-# 4. Section name and variables are defined as: name(w,x,y,z,...)
-#    For example, after entering the values for each variable on the web page,
-#    the following will be generated. ("..." is the submitted value.)
-#    <name>
-#      <w>...</w>
-#      <x>...</x>
-#      <y>...</y>
-#      <z>...</z>
-#    </name>
-#
-
-<config>
- simulationPeriod(dateofplanting,dateofharvest)
- location(latitude,longitude)
-
- <pft>
- canopyParms(Sp,SpD,nlayers,kd,chi.l,heightFactor)
- nitroParms(iLeafN,kLN,Vmax.b1,alpha.b1,kpLN,lnb0,lnb1,lnFun,minln,maxln,daymaxln)
- photoParms(vmax,alpha,kparm,theta,beta,Rd,Catm,b0,b1,UPPERTEMP,LOWERTEMP)
- soilParms(phi1,phi2,iWatCont,soilType,soilLayers,soilDepths,hydrDist,wsFun,scsf,transResp,leafPotTh,rfl,rsec,rsdf,optiontocalculaterootdepth,rootfrontvelocity)
- seneParms(senLeaf,senStem,senRoot,senRhizome,leafremobilizafraction,rootturnover,leafturnover)
-# For sugarcane phenoParms are not needed
-# phenoParms(tp1,kLeaf1,kStem1,kRoot1,kRhizome1,tp2,kLeaf2,kStem2,kRoot2,kRhizome2,tp3,kLeaf3,kStem3,kRoot3,kRhizome3,tp4,kLeaf4,kStem4,kRoot4,kRhizome4,tp5,kLeaf5,kStem5,kRoot5,kRhizome5,tp6,kLeaf6,kStem6,kRoot6,kRhizome6)
- SugarPhenoParms(TT0,TTseed,Tmaturity,Rd,Alm,Arm,Clstem,Ilstem,Cestem,Iestem,Clsuc,Ilsuc,Cesuc,Iesuc)
- </pft>
-</config>
diff --git a/web/sugarcane/data.xml b/web/sugarcane/data.xml
deleted file mode 100644
index a1e69bcf411..00000000000
--- a/web/sugarcane/data.xml
+++ /dev/null
@@ -1,91 +0,0 @@
-<?xml version="1.0"?>
-
-<config>
-<simulationPeriod>
-<dateofplanting>01/25/2010</dateofplanting>
-<dateofharvest>01/25/2011</dateofharvest>
-</simulationPeriod>
-<location>
-<latitude>-23</latitude>
-<longitude>-47</longitude>
-</location>
-<pft>
-<canopyParms>
-<Sp>1.3</Sp>
-<SpD>0.001</SpD>
-<nlayers>40</nlayers>
-<kd>0.0</kd>
-<chi.l>1.02</chi.l>
-<heightFactor>5</heightFactor>
-</canopyParms>
-<nitroParms>
-<iLeafN>2.0</iLeafN>
-<kLN>0.5</kLN>
-<Vmax.b1>0.0</Vmax.b1>
-<alpha.b1>0.0</alpha.b1>
-<kpLN>0.174</kpLN>
-<lnb0>-5</lnb0>
-<lnb1>18</lnb1>
-<lnFun>0</lnFun>
-<minln>80</minln>
-<maxln>80</maxln>
-<daymaxln>80</daymaxln>
-</nitroParms>
-<photoParms>
-<vmax>39</vmax>
-<alpha>0.04</alpha>
-<kparm>0.7</kparm>
-<theta>0.83</theta>
-<beta>0.93</beta>
-<Rd>0.0</Rd>
-<Catm>380</Catm>
-<b0>0.03</b0>
-<b1>6.1</b1>
-<UPPERTEMP>42</UPPERTEMP>
-<LOWERTEMP>15</LOWERTEMP>
-</photoParms>
-<soilParms>
-<phi1>0.01</phi1>
-<phi2>0.83</phi2>
-<iWatCont>0.0</iWatCont>
-<soilType>6</soilType>
-<soilLayers>1</soilLayers>
-<soilDepths>1.0</soilDepths>
-<hydrDist>0</hydrDist>
-<wsFun>0</wsFun>
-<scsf>1.0</scsf>
-<transResp>5000000</transResp>
-<leafPotTh>-800</leafPotTh>
-<rfl>1.7</rfl>
-<rsec>0.2</rsec>
-<rsdf>0.44</rsdf>
-<optiontocalculaterootdepth>0</optiontocalculaterootdepth>
-<rootfrontvelocity>0.5</rootfrontvelocity>
-</soilParms>
-<seneParms>
-<senLeaf>1400</senLeaf>
-<senStem>10000</senStem>
-<senRoot>0</senRoot>
-<senRhizome>10000</senRhizome>
-<leafremobilizafraction>0.0</leafremobilizafraction>
-<rootturnover>0.2</rootturnover>
-<leafturnover>1.38</leafturnover>
-</seneParms>
-<SugarPhenoParms>
-<TT0>200</TT0>
-<TTseed>800</TTseed>
-<Tmaturity>6000</Tmaturity>
-<Rd>0.8</Rd>
-<Alm>0.32</Alm>
-<Arm>0.08</Arm>
-<Clstem>0.05</Clstem>
-<Ilstem>0.0</Ilstem>
-<Cestem>0.02</Cestem>
-<Iestem>15</Iestem>
-<Clsuc>0.01</Clsuc>
-<Ilsuc>25</Ilsuc>
-<Cesuc>0.02</Cesuc>
-<Iesuc>45</Iesuc>
-</SugarPhenoParms>
-</pft>
-</config>
diff --git a/web/sugarcane/default/default.xml b/web/sugarcane/default/default.xml
deleted file mode 100755
index 36b85df1345..00000000000
--- a/web/sugarcane/default/default.xml
+++ /dev/null
@@ -1,130 +0,0 @@
-<?xml version="1.0"?>
-<config>
- <location>
-  <latitude>-23</latitude>
-  <longitude>-47</longitude>
- </location>
- <simulationPeriod>
-  <dateofplanting>01/25/2010</dateofplanting>
-  <dateofharvest>01/25/2011</dateofharvest>
- </simulationPeriod>
-
- <pft>
-
-  <canopyParms>
-   <Sp>1.3</Sp>
-   <SpD>0.001</SpD>
-   <nlayers>40</nlayers>
-   <kd>0.0</kd>
-   <chi.l>1.0</chi.l>
-   <heightFactor>3.0</heightFactor>
-  </canopyParms>
-
-  <nitroParms>
-   <iLeafN>2.0</iLeafN>
-   <kLN>0.5</kLN>
-   <Vmax.b1>0.0</Vmax.b1>
-   <alpha.b1>0.0</alpha.b1>
-   <kpLN>0.174</kpLN>
-   <lnb0>-5</lnb0>
-   <lnb1>18</lnb1>
-   <lnFun>0</lnFun>
-   <minln>80</minln>
-   <maxln>80</maxln>
-   <daymaxln>80</daymaxln>
-  </nitroParms>
-
-  <photoParms>
-   <vmax>39</vmax>
-   <alpha>0.04</alpha>
-   <kparm>0.7</kparm>
-   <theta>0.83</theta>
-   <beta>0.93</beta>
-   <Rd>0.0</Rd>
-   <Catm>380</Catm>
-   <b0>0.03</b0>
-   <b1>6.1</b1>
-   <UPPERTEMP>42</UPPERTEMP>
-   <LOWERTEMP>15</LOWERTEMP>
-  </photoParms>
-
-  <soilParms>
-   <phi1>0.01</phi1>
-   <phi2>0.83</phi2>
-   <iWatCont>0.0</iWatCont>
-   <soilType>6</soilType>
-   <soilLayers>1</soilLayers>
-   <soilDepths>1.0</soilDepths>
-   <hydrDist>0</hydrDist>
-   <wsFun>0</wsFun>
-   <scsf>1.0</scsf>
-   <transResp>5000000</transResp>
-   <leafPotTh>-800</leafPotTh>
-   <rfl>1.7</rfl>
-   <rsec>0.2</rsec>
-   <rsdf>0.44</rsdf>
-   <optiontocalculaterootdepth>0</optiontocalculaterootdepth>
-   <rootfrontvelocity>0.5</rootfrontvelocity>
-  </soilParms>
-
-  <seneParms>
-   <senLeaf>1400</senLeaf>
-   <senStem>10000</senStem>
-   <senRoot>0</senRoot>
-   <senRhizome>10000</senRhizome>
-   <leafremobilizafraction>0.0</leafremobilizafraction>
-   <rootturnover>0.2</rootturnover>
-   <leafturnover>1.38</leafturnover>
-  </seneParms>
-
-  <phenoParms>
-   <tp1>562</tp1>
-   <tp2>1312</tp2>
-   <tp3>2063</tp3>
-   <tp4>2676</tp4>
-   <tp5>3211</tp5>
-   <tp6>7000</tp6>
-   <kLeaf1>0.33</kLeaf1>
-   <kStem1>0.37</kStem1>
-   <kRoot1>0.3</kRoot1>
-   <kRhizome1>-0.008</kRhizome1>
-   <kLeaf2>0.14</kLeaf2>
-   <kStem2>0.85</kStem2>
-   <kRoot2>0.01</kRoot2>
-   <kRhizome2>-0.0005</kRhizome2>
-   <kLeaf3>0.01</kLeaf3>
-   <kStem3>0.63</kStem3>
-   <kRoot3>0.01</kRoot3>
-   <kRhizome3>0.35</kRhizome3>
-   <kLeaf4>0.01</kLeaf4>
-   <kStem4>0.63</kStem4>
-   <kRoot4>0.01</kRoot4>
-   <kRhizome4>0.35</kRhizome4>
-   <kLeaf5>0.01</kLeaf5>
-   <kStem5>0.63</kStem5>
-   <kRoot5>0.01</kRoot5>
-   <kRhizome5>0.35</kRhizome5>
-   <kLeaf6>0.01</kLeaf6>
-   <kStem6>0.63</kStem6>
-   <kRoot6>0.01</kRoot6>
-   <kRhizome6>0.35</kRhizome6>
-  </phenoParms>
-
-  <SugarPhenoParms>
-   <TT0>200</TT0>
-   <TTseed>800</TTseed>
-   <Tmaturity>6000</Tmaturity>
-   <Rd>0.8</Rd>
-   <Alm>0.32</Alm>
-   <Arm>0.08</Arm>
-   <Clstem>0.05</Clstem>
-   <Ilstem>0.0</Ilstem>
-   <Cestem>0.02</Cestem>
-   <Iestem>15</Iestem>
-   <Clsuc>0.01</Clsuc>
-   <Ilsuc>25</Ilsuc>
-   <Cesuc>0.02</Cesuc>
-   <Iesuc>45</Iesuc>
-  </SugarPhenoParms>
- </pft>
-</config>
diff --git a/web/sugarcane/inc/model/functions.php b/web/sugarcane/inc/model/functions.php
deleted file mode 100644
index 8054f2e835b..00000000000
--- a/web/sugarcane/inc/model/functions.php
+++ /dev/null
@@ -1,103 +0,0 @@
-<?php
-function sanitize($a){
-    return preg_replace('/[<>\\"]/','',$a);
-}
-function strict_sanitize($a){
-    return preg_replace('/[^a-zA-Z0-9\\-_\\.\\+\\=]/','',$a);
-}
-function line($line){
-    return $line.PHP_EOL;
-}
-function explode_and_trim($separator,$string){
-    $result = explode($separator,$string);
-    foreach($result as $k=>$v){
-        $result[$k] = trim($v);
-    }
-    return $result;
-}
-function read_xml_structure($path){
-    $structure_txt = file_get_contents($path, true);
-    if($structure_txt==False) {
-        echo "Error: $path is not readable.";
-        exit();
-    }
-    $lines=explode("\n",$structure_txt);
-    $i=0;
-    foreach($lines as $line){
-        if($line!="" and $line!=" " and $line[0]!='#'){
-            if(preg_match_all('#\<[^,]+\>#', $line, $arr, PREG_PATTERN_ORDER)){
-            }else{
-                if(preg_match_all('#([^() ,<>]+)\(([^()]+)\)#',$line, $arr, PREG_SET_ORDER)){
-                    $items[$i][0]=trim($arr[0][1]);
-                    $items[$i][1]=explode_and_trim(',',$arr[0][2]);
-                    $i++;
-                }
-            }
-        }
-    }
-    return $items;
-}
-function generate_xml($post,$path){
-    $structure_txt = file_get_contents($path, true);
-    if($structure_txt==False) {
-        echo "$path is not readable. Try changing the permission of the destination folder and $path to 777 if it exists.";
-        exit();
-    }
-    $lines=explode("\n",$structure_txt);
-    $i=0;
-    $return_value="";
-    foreach($lines as $line){
-        if($line!="" and $line!=" " and $line[0]!='#'){
-            if(preg_match_all('#\<[^,]+\>#', $line, $arr, PREG_PATTERN_ORDER)){
-                $return_value.=line(trim($arr[0][0]));
-            }else{
-                if(preg_match_all('#([^(), <>]+)\(([^()]+)\)#',$line, $arr, PREG_SET_ORDER)){
-                    $tag=sanitize(trim($arr[0][1]));
-                    $items=explode_and_trim(',',$arr[0][2]);
-                    $i++;
-                    $content=line("<$tag>");
-                    $show=false;
-                    foreach ($items as $var) {
-                        $var=sanitize($var);
-                        $var_name=$tag."_".$var;
-                        $var_name=preg_replace('/\./','_',$var_name);
-                        if(isset($post[$var_name]) and $post[$var_name]!=""){
-                            $inside=sanitize($post[$var_name]);
-                            $content.=line("<$var>$inside</$var>");
-                            $show=true;
-                        }
-                    }
-                    $content .= line("</$tag>");
-
-                    if($show==true){
-                        $return_value.= $content;
-                    }
-
-                }
-            }
-        }
-    }
-    return $return_value;
-}
-function get_default($path){
-    $default_xml_raw = file_get_contents($path, true);
-    $return="";
-    if($default_xml_raw==False) {
-        echo "$path is not readable. Try changing the permission of the destination folder and $path to 777 if it exists.";
-        exit();
-    }
-    if(preg_match_all('#<([^<>/]*)>[ \n\r]*([^< >\n\r]+)[ \n\r]*#', $default_xml_raw, $arr, PREG_PATTERN_ORDER)){
-        foreach($arr[1] as $index=>$tag){
-            $value=trim($arr[2][$index]);
-            #print($value . "<br>");
-            if(preg_match('#<([^<>]+)>[ \r\n]*(?:<([^<>]+)>.*[ \r\n]*)*[ \r\n]*<'.$tag.'>[ \r\n]*'.$value.'#', $default_xml_raw, $arr2)){
-                $parent_tag=trim($arr2[1]);
-                $var_name=$parent_tag.'_'.$tag;
-                $return.= "$('[name=\"$var_name\"]').val('$value'); ";
-            }
-        }
-
-    }
-    return $return;
-}
-?>
diff --git a/web/sugarcane/inc/view/main_view.php b/web/sugarcane/inc/view/main_view.php
deleted file mode 100644
index e083f2576f7..00000000000
--- a/web/sugarcane/inc/view/main_view.php
+++ /dev/null
@@ -1,58 +0,0 @@
-
-
-
- Sugarcane
-
-
-
-
-
-
-
-
-
-
-
-

Set BioCro Parameters

-
-
- - - -
- $tab) { ?> -
" id=""> -
-
- - nodeName; ?> -
- -
- -
-
- -
-
-
- -
- -
-
-
-
-
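The deleted `xml_structure.txt` and `functions.php` above implement a small templating scheme: literal `<tag>` lines in the structure file are copied through unchanged, while `name(w,x,y,...)` lines expand into an XML section whose children are filled from posted form fields named `name_var` (dots in variable names mapped to underscores, and sections with no posted values skipped — that is what `$show` tracks). A minimal Python re-sketch of that expansion, for illustration only; the structure text and posted values below are invented, and this is not the shipped PHP:

```python
# Hedged sketch of the read_xml_structure()/generate_xml() logic from the
# deleted functions.php, redone in Python. Input format follows the rules
# documented in xml_structure.txt; the example data is made up.
import re

def read_xml_structure(text):
    """Parse 'name(w,x,y)' definition lines; keep literal <tag> lines; skip # comments."""
    items = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("<"):                  # literal tag line, kept verbatim
            items.append(("literal", line))
        else:
            m = re.match(r"([^(),<> ]+)\(([^()]+)\)", line)
            if m:
                name = m.group(1)
                params = [p.strip() for p in m.group(2).split(",")]
                items.append(("section", (name, params)))
    return items

def generate_xml(post, structure_text):
    """Emit one <section>...</section> block per definition, like the PHP generate_xml()."""
    out = []
    for kind, payload in read_xml_structure(structure_text):
        if kind == "literal":
            out.append(payload)
            continue
        name, params = payload
        body = []
        for p in params:
            key = f"{name}_{p}".replace(".", "_")  # dots become underscores, as in the PHP
            if post.get(key):
                body.append(f"<{p}>{post[key]}</{p}>")
        if body:                                   # skip sections with no submitted values
            out.append(f"<{name}>")
            out.extend(body)
            out.append(f"</{name}>")
    return "\n".join(out)

structure = """<config>
 location(latitude,longitude)
</config>"""
print(generate_xml({"location_latitude": "-23", "location_longitude": "-47"}, structure))
```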
diff --git a/web/sugarcane/index.php b/web/sugarcane/index.php
deleted file mode 100755
index c605e8f0e9a..00000000000
--- a/web/sugarcane/index.php
+++ /dev/null
@@ -1,132 +0,0 @@
-$result = $pdo->query($query);
-if (!$result) {
-    die('Invalid query: ' . error_database());
-}
-$workflow = $result->fetch(PDO::FETCH_ASSOC);
-$folder = $workflow['folder'];
-
-$runFolder = $folder . DIRECTORY_SEPARATOR . "run";
-$runs = explode("\n", file_get_contents($runFolder . DIRECTORY_SEPARATOR . "runs.txt"));
-// this is an advanced edit, so only one run is supported. use the last one in the file.
-$lastRun = $runs[count($runs)-2];
-
-#$dataXml="/home/pecan/pecan/web/sugarcane/default/default.xml";
-$dataXml=array_shift(glob($workflow["folder"] . "/run/" . $lastRun . "/config.xml"));
-
-$pdo = null;
-
-// END PEcAn additions
-
-if(isset($_POST["command"]) and strpos($_POST["command"],"continue")!==False){
-
-    // prepare the datafile to be saved
-    $dataOrigXml=str_replace("config.xml", "data_orig.xml", $dataXml);
-
-    if (!copy($dataXml, $dataOrigXml)) {
-        die("Failed to copy parameters to new file, $dataOrigXml");
-    }
-
-    $doc = new DOMDocument();
-    $doc->load($dataXml);
-    //$doc->preserveWhiteSpace = false;
-    $xpath = new DOMXPath($doc);
-
-    // The name of most of the posted parameters will be an xpath to
-    // the same parameter in the config.xml file. Iterate through all the
-    // posted parameters and set the value of the parameter to the posted value.
-    foreach($_POST as $key=>$value) {
-        // All xpaths for this document will start with /config
-        if(strpos($key,"/config") !== false) {
-            $query = "/" . $key;
-            $nodeList = $xpath->query($query);
-            // The above query will only ever return 1 node
-            $node = $nodeList->item(0);
-            $node->nodeValue = $value;
-        }
-    }
-
-    if(!$doc->save($dataXml,LIBXML_NOEMPTYTAG)) {
-        die("$dataXml could not be saved");
-    }
-
-    $dataDiff=str_replace("config.xml", "data.diff", $dataXml);
-    exec("diff $dataOrigXml $dataXml > $dataDiff");
-    // TODO do something more intelligent with the diff, like save in the database
-
-    // call R code to launch stage 2 and redirect to running_stage2.php
-    chdir($folder);
-    pclose(popen('R_LIBS_USER="' . ${pecan_install} . '" R CMD BATCH workflow_stage2.R &', 'r'));
-    if ($offline) {
-        header( "Location: ../running_stage2.php?workflowid=$workflowid&offline=offline");
-    } else {
-        header( "Location: ../running_stage2.php?workflowid=$workflowid");
-    }
-
-}
-
-$doc = new DOMDocument();
-$doc->load($dataXml);
-$rootNode=$doc->documentElement;
-
-$tabs = array();
-
-foreach ($rootNode->childNodes as $tabNode) {
-    // filter out top level text nodes
-    if ($tabNode->nodeType != 3) {
-        if ($tabNode->nodeName != "pft") {
-            $tabName = $tabNode->nodeName;
-            $childNodes = $tabNode->childNodes;
-            $paramNodes=array();
-            // filter out text nodes from children
-            foreach ($childNodes as $childNode) {
-                if ($childNode->nodeType != 3) {
-                    $paramNodes[]=$childNode;
-                }
-            }
-            // add this tab and associated parameters to tabs array
-            $tabs[]=array($tabName,$paramNodes);
-        } else { // these are pft parameters, so we have to go down one level in the tree to create the rest of the tabs
-            foreach ($tabNode->childNodes as $pftTabNode) {
-                $nodeName = $pftTabNode->nodeName;
-                if ($pftTabNode->nodeType != 3 && $nodeName != "comment" && $nodeName != "num") {
-                    $tabName = $pftTabNode->nodeName;
-                    $childNodes = $pftTabNode->childNodes;
-                    $paramNodes=array();
-                    // filter out text nodes and comments nodes from children
-                    foreach ($childNodes as $childNode) {
-                        if ($childNode->nodeType != 3 && $childNode->nodeName != "comment") {
-                            $paramNodes[]=$childNode;
-                        }
-                    }
-                    $tabs[] = array($tabName,$paramNodes);
-                }
-            }
-        }
-    }
-}
-
-include_once("inc/view/main_view.php");
-?>
diff --git a/web/sugarcane/ooutput b/web/sugarcane/ooutput
deleted file mode 100644
index 30750e5fded..00000000000
--- a/web/sugarcane/ooutput
+++ /dev/null
@@ -1,368 +0,0 @@
-"DAP" "Leaf" "Stem" "Root" "Sugar" "LAI" "ThermalT"
-"1" 0 0 0 0 0 0.01 0.5
-"2" 1 0 0 0 0 0.01 12.29
-"3" 2 0 0 0 0 0.01 25.61
[... 363 intermediate rows of daily output omitted ...]
-"367" 366 4.76 35.03 2.48 18.96 4.45 4629.25
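`index.php` above names each form field with an XPath into `config.xml` (e.g. `/config/pft/canopyParms/Sp`) and writes the posted values back with `DOMXPath`, diffs the result against `data_orig.xml`, and then launches stage 2 of the workflow. A hedged sketch of the same write-back step using Python's standard `xml.etree.ElementTree` instead of PHP's DOM; the field name and document below are invented examples:

```python
# Hedged sketch: the xpath-keyed update index.php performed with DOMXPath,
# redone with Python's stdlib ElementTree. Field names are assumed to be
# absolute paths like "/config/pft/canopyParms/Sp", per the comments above.
import xml.etree.ElementTree as ET

def apply_posted_values(config_xml, post):
    root = ET.fromstring(config_xml)
    for key, value in post.items():
        if not key.startswith("/" + root.tag + "/"):
            continue                       # not an xpath-style field name
        node = root
        for step in key.split("/")[2:]:    # walk the path below the root element
            node = node.find(step)
            if node is None:
                break
        if node is not None:
            node.text = value              # like $node->nodeValue = $value in the PHP
    return ET.tostring(root, encoding="unicode")

xml = "<config><pft><canopyParms><Sp>1.3</Sp></canopyParms></pft></config>"
print(apply_posted_values(xml, {"/config/pft/canopyParms/Sp": "1.4"}))
```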
diff --git a/web/sugarcane/plot_data/output b/web/sugarcane/plot_data/output
deleted file mode 100644
index 0d3c876ce8c..00000000000
--- a/web/sugarcane/plot_data/output
+++ /dev/null
@@ -1,368 +0,0 @@
-DAP Leaf Stem Root Sugar LAI ThermalT
-1 0 0 0 0 0 0.01 0.5
-2 1 0 0 0 0 0.01 12.29
-3 2 0 0 0 0 0.01 25.61
[... 363 intermediate rows omitted; same daily series as ooutput above, without quoting ...]
-367 366 4.76 35.03 2.48 18.96 4.45 4629.25
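`ooutput` and `plot_data/output` above hold the same 368-line whitespace-delimited table of daily model output: `DAP` is days after planting, `Leaf`/`Stem`/`Root`/`Sugar` are biomass pools, `LAI` is leaf area index, and `ThermalT` is accumulated thermal time; `ooutput` is the quoted R `write.table` form of the same data. Assuming pandas is available, either copy reads directly, with the leading row-number column becoming the index:

```python
# Hedged sketch (assumes pandas is installed): reading the deleted table.
import pandas as pd

# Whitespace-delimited, R-style table: the header has one fewer field than
# the data rows, so pandas uses the first (row-number) column as the index.
df = pd.read_csv("plot_data/output", sep=r"\s+")
print(df[["DAP", "Stem", "Sugar"]].tail(3))  # e.g. end-of-season stem and sugar pools
```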
diff --git a/web/sugarcane/python/plot.py b/web/sugarcane/python/plot.py
deleted file mode 100755
index 5b3c430a59a..00000000000
--- a/web/sugarcane/python/plot.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/python
-import os,time
-import re,sys,tempfile
-dir=os.path.dirname(sys.argv[0])
-os.environ['MPLCONFIGDIR'] = tempfile.mkdtemp()
-import matplotlib
-matplotlib.use('agg')
-import matplotlib.pyplot as plt
-
-f=open(dir+"/../plot_data/output","r")
-rows=filter(None, f.readlines())
-#keys=[rows[i] for i in range(len(rows)) if i%2==0]
-#values=[filter(None, rows[i].split()) for i in range(len(rows)) if i%2==1]
-numbers=[]
-for index,row in enumerate(rows):
-    if index==0:
-        variables=re.findall(r'[^ \t\r\n]+', row)
-    else:
-        numbers.append(re.findall(r'[0-9\-\.]+', row))
-
-if (len(sys.argv) > 2):
-    data=[]
-    for index,x in enumerate(sys.argv):
-        if index>0: # first argument is the file name/path. ignore it
-            i=variables.index(x)+1 # it is necessary to add 1 because the first row always has one less item because of the row numbers
-            column=[]
-            for number_row in numbers:
-                column.append(float(number_row[i])) # converting rows to columns
-            data.append(column)
-    #print data
-
-fig = plt.figure(figsize=(5, 4), dpi=80)
-ax1 = fig.add_subplot(1,1,1)
-ax1.set_xlabel(sys.argv[1]);
-ax1.set_ylabel(sys.argv[2]);
-#ax1.plot(range(len(data[0])),data[0],range(len(data[1])),data[1])
-plt.plot(data[0],data[1])
-
-#ax2 = fig.add_subplot(2,1,2)
-#ax2.plot(range(len(data[1])),data[1])
-
-path=dir+'/png/'+os.urandom(6).encode('hex')+'.png'
-
-fig.savefig(path)
-print path
diff --git a/web/sugarcane/python/png/5afafac753c4.png b/web/sugarcane/python/png/5afafac753c4.png
deleted file mode 100644
index 896b611e7a1f1ec7aacf35f8d9608fc756affd5e..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
[literal 15859: binary PNG data omitted]
[diff header of a small deleted shell script lost in extraction; only its tail survives:]
- ./plot_data/output
-# rm ./ooutput
-echo "done"
diff --git a/web/sugarcane/static/ajax-loader.gif b/web/sugarcane/static/ajax-loader.gif
deleted file mode 100644
index 08622f8c227a335d5c4049c8dfb7ebf537c723c3..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
[literal 8238: binary GIF data omitted]
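The deleted `plot.py` (Python 2) read `plot_data/output`, took an x and a y column name on the command line, wrote a PNG with a random hex name under `python/png/`, and printed the file path for the page to embed. A hedged usage sketch; the `python2` interpreter name and the relative paths are assumptions:

```python
# Hedged usage sketch for the deleted plot.py, run from web/sugarcane/.
import subprocess

# Plot accumulated thermal time (x) against stem biomass (y); plot.py
# prints the path of the PNG it saved under python/png/.
out = subprocess.check_output(["python2", "python/plot.py", "ThermalT", "Stem"])
print(out.decode().strip())  # e.g. python/png/<12 random hex chars>.png
```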
diff --git a/web/sugarcane/static/bootstrap-button.js b/web/sugarcane/static/bootstrap-button.js
deleted file mode 100644
index 7f187be6206..00000000000
--- a/web/sugarcane/static/bootstrap-button.js
+++ /dev/null
@@ -1,96 +0,0 @@
-/* ============================================================
- * bootstrap-button.js v2.0.4
- * http://twitter.github.com/bootstrap/javascript.html#buttons
- * ============================================================
- * Copyright 2012 Twitter, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- * ============================================================ */
-
-
-!function ($) {
-
-  "use strict"; // jshint ;_;
-
-
- /* BUTTON PUBLIC CLASS DEFINITION
-  * ============================== */
-
-  var Button = function (element, options) {
-    this.$element = $(element)
-    this.options = $.extend({}, $.fn.button.defaults, options)
-  }
-
-  Button.prototype.setState = function (state) {
-    var d = 'disabled'
-      , $el = this.$element
-      , data = $el.data()
-      , val = $el.is('input') ? 'val' : 'html'
-
-    state = state + 'Text'
-    data.resetText || $el.data('resetText', $el[val]())
-
-    $el[val](data[state] || this.options[state])
-
-    // push to event loop to allow forms to submit
-    setTimeout(function () {
-      state == 'loadingText' ?
-        $el.addClass(d).attr(d, d) :
-        $el.removeClass(d).removeAttr(d)
-    }, 0)
-  }
-
-  Button.prototype.toggle = function () {
-    var $parent = this.$element.parent('[data-toggle="buttons-radio"]')
-
-    $parent && $parent
-      .find('.active')
-      .removeClass('active')
-
-    this.$element.toggleClass('active')
-  }
-
-
- /* BUTTON PLUGIN DEFINITION
-  * ======================== */
-
-  $.fn.button = function (option) {
-    return this.each(function () {
-      var $this = $(this)
-        , data = $this.data('button')
-        , options = typeof option == 'object' && option
-      if (!data) $this.data('button', (data = new Button(this, options)))
-      if (option == 'toggle') data.toggle()
-      else if (option) data.setState(option)
-    })
-  }
-
-  $.fn.button.defaults = {
-    loadingText: 'loading...'
-  }
-
-  $.fn.button.Constructor = Button
-
-
- /* BUTTON DATA-API
-  * =============== */
-
-  $(function () {
-    $('body').on('click.button.data-api', '[data-toggle^=button]', function ( e ) {
-      var $btn = $(e.target)
-      if (!$btn.hasClass('btn')) $btn = $btn.closest('.btn')
-      $btn.button('toggle')
-    })
-  })
-
-}(window.jQuery);
\ No newline at end of file
diff --git a/web/sugarcane/static/bootstrap-responsive.min.css b/web/sugarcane/static/bootstrap-responsive.min.css
deleted file mode 100644
index dd134a1bd2c..00000000000
--- a/web/sugarcane/static/bootstrap-responsive.min.css
+++ /dev/null
@@ -1,9 +0,0 @@
-/*!
- * Bootstrap Responsive v2.0.3
- *
- * Copyright 2012 Twitter, Inc
- * Licensed under the Apache License v2.0
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Designed and built with all the love in the world @twitter by @mdo and @fat.
- */.clearfix{*zoom:1}.clearfix:before,.clearfix:after{display:table;content:""}.clearfix:after{clear:both}.hide-text{font:0/0 a;color:transparent;text-shadow:none;background-color:transparent;border:0}.input-block-level{display:block;width:100%;min-height:28px;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box}.hidden{display:none;visibility:hidden}.visible-phone{display:none!important}.visible-tablet{display:none!important}.hidden-desktop{display:none!important}@media(max-width:767px){.visible-phone{display:inherit!important}.hidden-phone{display:none!important}.hidden-desktop{display:inherit!important}.visible-desktop{display:none!important}}@media(min-width:768px) and (max-width:979px){.visible-tablet{display:inherit!important}.hidden-tablet{display:none!important}.hidden-desktop{display:inherit!important}.visible-desktop{display:none!important}}@media(max-width:480px){.nav-collapse{-webkit-transform:translate3d(0,0,0)}.page-header h1 small{display:block;line-height:18px}input[type="checkbox"],input[type="radio"]{border:1px solid #ccc}.form-horizontal .control-group>label{float:none;width:auto;padding-top:0;text-align:left}.form-horizontal .controls{margin-left:0}.form-horizontal .control-list{padding-top:0}.form-horizontal .form-actions{padding-right:10px;padding-left:10px}.modal{position:absolute;top:10px;right:10px;left:10px;width:auto;margin:0}.modal.fade.in{top:auto}.modal-header .close{padding:10px;margin:-10px}.carousel-caption{position:static}}@media(max-width:767px){body{padding-right:20px;padding-left:20px}.navbar-fixed-top,.navbar-fixed-bottom{margin-right:-20px;margin-left:-20px}.container-fluid{padding:0}.dl-horizontal dt{float:none;width:auto;clear:none;text-align:left}.dl-horizontal dd{margin-left:0}.container{width:auto}.row-fluid{width:100%}.row,.thumbnails{margin-left:0}[class*="span"],.row-fluid 
[class*="span"]{display:block;float:none;width:auto;margin-left:0}.input-large,.input-xlarge,.input-xxlarge,input[class*="span"],select[class*="span"],textarea[class*="span"],.uneditable-input{display:block;width:100%;min-height:28px;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box}.input-prepend input,.input-append input,.input-prepend input[class*="span"],.input-append input[class*="span"]{display:inline-block;width:auto}}@media(min-width:768px) and (max-width:979px){.row{margin-left:-20px;*zoom:1}.row:before,.row:after{display:table;content:""}.row:after{clear:both}[class*="span"]{float:left;margin-left:20px}.container,.navbar-fixed-top .container,.navbar-fixed-bottom .container{width:724px}.span12{width:724px}.span11{width:662px}.span10{width:600px}.span9{width:538px}.span8{width:476px}.span7{width:414px}.span6{width:352px}.span5{width:290px}.span4{width:228px}.span3{width:166px}.span2{width:104px}.span1{width:42px}.offset12{margin-left:764px}.offset11{margin-left:702px}.offset10{margin-left:640px}.offset9{margin-left:578px}.offset8{margin-left:516px}.offset7{margin-left:454px}.offset6{margin-left:392px}.offset5{margin-left:330px}.offset4{margin-left:268px}.offset3{margin-left:206px}.offset2{margin-left:144px}.offset1{margin-left:82px}.row-fluid{width:100%;*zoom:1}.row-fluid:before,.row-fluid:after{display:table;content:""}.row-fluid:after{clear:both}.row-fluid [class*="span"]{display:block;float:left;width:100%;min-height:28px;margin-left:2.762430939%;*margin-left:2.709239449638298%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box}.row-fluid [class*="span"]:first-child{margin-left:0}.row-fluid .span12{width:99.999999993%;*width:99.9468085036383%}.row-fluid .span11{width:91.436464082%;*width:91.38327259263829%}.row-fluid .span10{width:82.87292817100001%;*width:82.8197366816383%}.row-fluid .span9{width:74.30939226%;*width:74.25620077063829%}.row-fluid .span8{width:65.74585634900001%;*width:65.6926648596383%}.row-fluid .span7{width:57.182320438000005%;*width:57.129128948638304%}.row-fluid .span6{width:48.618784527%;*width:48.5655930376383%}.row-fluid .span5{width:40.055248616%;*width:40.0020571266383%}.row-fluid .span4{width:31.491712705%;*width:31.4385212156383%}.row-fluid .span3{width:22.928176794%;*width:22.874985304638297%}.row-fluid .span2{width:14.364640883%;*width:14.311449393638298%}.row-fluid .span1{width:5.801104972%;*width:5.747913482638298%}input,textarea,.uneditable-input{margin-left:0}input.span12,textarea.span12,.uneditable-input.span12{width:714px}input.span11,textarea.span11,.uneditable-input.span11{width:652px}input.span10,textarea.span10,.uneditable-input.span10{width:590px}input.span9,textarea.span9,.uneditable-input.span9{width:528px}input.span8,textarea.span8,.uneditable-input.span8{width:466px}input.span7,textarea.span7,.uneditable-input.span7{width:404px}input.span6,textarea.span6,.uneditable-input.span6{width:342px}input.span5,textarea.span5,.uneditable-input.span5{width:280px}input.span4,textarea.span4,.uneditable-input.span4{width:218px}input.span3,textarea.span3,.uneditable-input.span3{width:156px}input.span2,textarea.span2,.uneditable-input.span2{width:94px}input.span1,textarea.span1,.uneditable-input.span1{width:32px}}@media(min-width:1200px){.row{margin-left:-30px;*zoom:1}.row:before,.row:after{display:table;content:""}.row:after{clear:both}[class*="span"]{float:left;margin-left:30px}.container,.navbar-fixed-top .container,.navbar-fixed-bottom 
.container{width:1170px}.span12{width:1170px}.span11{width:1070px}.span10{width:970px}.span9{width:870px}.span8{width:770px}.span7{width:670px}.span6{width:570px}.span5{width:470px}.span4{width:370px}.span3{width:270px}.span2{width:170px}.span1{width:70px}.offset12{margin-left:1230px}.offset11{margin-left:1130px}.offset10{margin-left:1030px}.offset9{margin-left:930px}.offset8{margin-left:830px}.offset7{margin-left:730px}.offset6{margin-left:630px}.offset5{margin-left:530px}.offset4{margin-left:430px}.offset3{margin-left:330px}.offset2{margin-left:230px}.offset1{margin-left:130px}.row-fluid{width:100%;*zoom:1}.row-fluid:before,.row-fluid:after{display:table;content:""}.row-fluid:after{clear:both}.row-fluid [class*="span"]{display:block;float:left;width:100%;min-height:28px;margin-left:2.564102564%;*margin-left:2.510911074638298%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box}.row-fluid [class*="span"]:first-child{margin-left:0}.row-fluid .span12{width:100%;*width:99.94680851063829%}.row-fluid .span11{width:91.45299145300001%;*width:91.3997999636383%}.row-fluid .span10{width:82.905982906%;*width:82.8527914166383%}.row-fluid .span9{width:74.358974359%;*width:74.30578286963829%}.row-fluid .span8{width:65.81196581200001%;*width:65.7587743226383%}.row-fluid .span7{width:57.264957265%;*width:57.2117657756383%}.row-fluid .span6{width:48.717948718%;*width:48.6647572286383%}.row-fluid .span5{width:40.170940171000005%;*width:40.117748681638304%}.row-fluid .span4{width:31.623931624%;*width:31.5707401346383%}.row-fluid .span3{width:23.076923077%;*width:23.0237315876383%}.row-fluid .span2{width:14.529914530000001%;*width:14.4767230406383%}.row-fluid .span1{width:5.982905983%;*width:5.929714493638298%}input,textarea,.uneditable-input{margin-left:0}input.span12,textarea.span12,.uneditable-input.span12{width:1160px}input.span11,textarea.span11,.uneditable-input.span11{width:1060px}input.span10,textarea.span10,.uneditable-input.span10{width:960px}input.span9,textarea.span9,.uneditable-input.span9{width:860px}input.span8,textarea.span8,.uneditable-input.span8{width:760px}input.span7,textarea.span7,.uneditable-input.span7{width:660px}input.span6,textarea.span6,.uneditable-input.span6{width:560px}input.span5,textarea.span5,.uneditable-input.span5{width:460px}input.span4,textarea.span4,.uneditable-input.span4{width:360px}input.span3,textarea.span3,.uneditable-input.span3{width:260px}input.span2,textarea.span2,.uneditable-input.span2{width:160px}input.span1,textarea.span1,.uneditable-input.span1{width:60px}.thumbnails{margin-left:-30px}.thumbnails>li{margin-left:30px}.row-fluid .thumbnails{margin-left:0}}@media(max-width:979px){body{padding-top:0}.navbar-fixed-top{position:static;margin-bottom:18px}.navbar-fixed-top .navbar-inner{padding:5px}.navbar .container{width:auto;padding:0}.navbar .brand{padding-right:10px;padding-left:10px;margin:0 0 0 -5px}.nav-collapse{clear:both}.nav-collapse .nav{float:none;margin:0 0 9px}.nav-collapse .nav>li{float:none}.nav-collapse .nav>li>a{margin-bottom:2px}.nav-collapse .nav>.divider-vertical{display:none}.nav-collapse .nav .nav-header{color:#999;text-shadow:none}.nav-collapse .nav>li>a,.nav-collapse .dropdown-menu a{padding:6px 15px;font-weight:bold;color:#999;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px}.nav-collapse .btn{padding:4px 10px 4px;font-weight:normal;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.nav-collapse .dropdown-menu li+li a{margin-bottom:2px}.nav-collapse 
.nav>li>a:hover,.nav-collapse .dropdown-menu a:hover{background-color:#222}.nav-collapse.in .btn-group{padding:0;margin-top:5px}.nav-collapse .dropdown-menu{position:static;top:auto;left:auto;display:block;float:none;max-width:none;padding:0;margin:0 15px;background-color:transparent;border:0;-webkit-border-radius:0;-moz-border-radius:0;border-radius:0;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.nav-collapse .dropdown-menu:before,.nav-collapse .dropdown-menu:after{display:none}.nav-collapse .dropdown-menu .divider{display:none}.nav-collapse .navbar-form,.nav-collapse .navbar-search{float:none;padding:9px 15px;margin:9px 0;border-top:1px solid #222;border-bottom:1px solid #222;-webkit-box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.1);-moz-box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.1);box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.1)}.navbar .nav-collapse .nav.pull-right{float:none;margin-left:0}.nav-collapse,.nav-collapse.collapse{height:0;overflow:hidden}.navbar .btn-navbar{display:block}.navbar-static .navbar-inner{padding-right:10px;padding-left:10px}}@media(min-width:980px){.nav-collapse.collapse{height:auto!important;overflow:visible!important}} diff --git a/web/sugarcane/static/bootstrap-tab.js b/web/sugarcane/static/bootstrap-tab.js deleted file mode 100644 index 88641de864c..00000000000 --- a/web/sugarcane/static/bootstrap-tab.js +++ /dev/null @@ -1,135 +0,0 @@ -/* ======================================================== - * bootstrap-tab.js v2.0.3 - * http://twitter.github.com/bootstrap/javascript.html#tabs - * ======================================================== - * Copyright 2012 Twitter, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * ======================================================== */ - - -!function ($) { - - "use strict"; // jshint ;_; - - - /* TAB CLASS DEFINITION - * ==================== */ - - var Tab = function ( element ) { - this.element = $(element) - } - - Tab.prototype = { - - constructor: Tab - - , show: function () { - var $this = this.element - , $ul = $this.closest('ul:not(.dropdown-menu)') - , selector = $this.attr('data-target') - , previous - , $target - , e - - if (!selector) { - selector = $this.attr('href') - selector = selector && selector.replace(/.*(?=#[^\s]*$)/, '') //strip for ie7 - } - - if ( $this.parent('li').hasClass('active') ) return - - previous = $ul.find('.active a').last()[0] - - e = $.Event('show', { - relatedTarget: previous - }) - - $this.trigger(e) - - if (e.isDefaultPrevented()) return - - $target = $(selector) - - this.activate($this.parent('li'), $ul) - this.activate($target, $target.parent(), function () { - $this.trigger({ - type: 'shown' - , relatedTarget: previous - }) - }) - } - - , activate: function ( element, container, callback) { - var $active = container.find('> .active') - , transition = callback - && $.support.transition - && $active.hasClass('fade') - - function next() { - $active - .removeClass('active') - .find('> .dropdown-menu > .active') - .removeClass('active') - - element.addClass('active') - - if (transition) { - element[0].offsetWidth // reflow for transition - element.addClass('in') - } else { - element.removeClass('fade') - } - - if ( element.parent('.dropdown-menu') ) { - element.closest('li.dropdown').addClass('active') - } - - callback && callback() - } - - transition ? - $active.one($.support.transition.end, next) : - next() - - $active.removeClass('in') - } - } - - - /* TAB PLUGIN DEFINITION - * ===================== */ - - $.fn.tab = function ( option ) { - return this.each(function () { - var $this = $(this) - , data = $this.data('tab') - if (!data) $this.data('tab', (data = new Tab(this))) - if (typeof option == 'string') data[option]() - }) - } - - $.fn.tab.Constructor = Tab - - - /* TAB DATA-API - * ============ */ - - $(function () { - $('body').on('click.tab.data-api', '[data-toggle="tab"], [data-toggle="pill"]', function (e) { - e.preventDefault() - $(this).tab('show') - }) - }) - -}(window.jQuery); \ No newline at end of file diff --git a/web/sugarcane/static/bootstrap.min.css b/web/sugarcane/static/bootstrap.min.css deleted file mode 100644 index 1c75d0c07a4..00000000000 --- a/web/sugarcane/static/bootstrap.min.css +++ /dev/null @@ -1,9 +0,0 @@ -/*! - * Bootstrap v2.0.3 - * - * Copyright 2012 Twitter, Inc - * Licensed under the Apache License v2.0 - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Designed and built with all the love in the world @twitter by @mdo and @fat. 
- */article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}audio:not([controls]){display:none}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}a:focus{outline:thin dotted #333;outline:5px auto -webkit-focus-ring-color;outline-offset:-2px}a:hover,a:active{outline:0}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sup{top:-0.5em}sub{bottom:-0.25em}img{max-width:100%;vertical-align:middle;border:0;-ms-interpolation-mode:bicubic}button,input,select,textarea{margin:0;font-size:100%;vertical-align:middle}button,input{*overflow:visible;line-height:normal}button::-moz-focus-inner,input::-moz-focus-inner{padding:0;border:0}button,input[type="button"],input[type="reset"],input[type="submit"]{cursor:pointer;-webkit-appearance:button}input[type="search"]{-webkit-box-sizing:content-box;-moz-box-sizing:content-box;box-sizing:content-box;-webkit-appearance:textfield}input[type="search"]::-webkit-search-decoration,input[type="search"]::-webkit-search-cancel-button{-webkit-appearance:none}textarea{overflow:auto;vertical-align:top}.clearfix{*zoom:1}.clearfix:before,.clearfix:after{display:table;content:""}.clearfix:after{clear:both}.hide-text{font:0/0 a;color:transparent;text-shadow:none;background-color:transparent;border:0}.input-block-level{display:block;width:100%;min-height:28px;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box}body{margin:0;font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;line-height:18px;color:#333;background-color:#fff}a{color:#08c;text-decoration:none}a:hover{color:#005580;text-decoration:underline}.row{margin-left:-20px;*zoom:1}.row:before,.row:after{display:table;content:""}.row:after{clear:both}[class*="span"]{float:left;margin-left:20px}.container,.navbar-fixed-top .container,.navbar-fixed-bottom .container{width:940px}.span12{width:940px}.span11{width:860px}.span10{width:780px}.span9{width:700px}.span8{width:620px}.span7{width:540px}.span6{width:460px}.span5{width:380px}.span4{width:300px}.span3{width:220px}.span2{width:140px}.span1{width:60px}.offset12{margin-left:980px}.offset11{margin-left:900px}.offset10{margin-left:820px}.offset9{margin-left:740px}.offset8{margin-left:660px}.offset7{margin-left:580px}.offset6{margin-left:500px}.offset5{margin-left:420px}.offset4{margin-left:340px}.offset3{margin-left:260px}.offset2{margin-left:180px}.offset1{margin-left:100px}.row-fluid{width:100%;*zoom:1}.row-fluid:before,.row-fluid:after{display:table;content:""}.row-fluid:after{clear:both}.row-fluid [class*="span"]{display:block;float:left;width:100%;min-height:28px;margin-left:2.127659574%;*margin-left:2.0744680846382977%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box}.row-fluid [class*="span"]:first-child{margin-left:0}.row-fluid .span12{width:99.99999998999999%;*width:99.94680850063828%}.row-fluid .span11{width:91.489361693%;*width:91.4361702036383%}.row-fluid .span10{width:82.97872339599999%;*width:82.92553190663828%}.row-fluid .span9{width:74.468085099%;*width:74.4148936096383%}.row-fluid .span8{width:65.95744680199999%;*width:65.90425531263828%}.row-fluid .span7{width:57.446808505%;*width:57.3936170156383%}.row-fluid .span6{width:48.93617020799999%;*width:48.88297871863829%}.row-fluid .span5{width:40.425531911%;*width:40.3723404216383%}.row-fluid 
.span4{width:31.914893614%;*width:31.8617021246383%}.row-fluid .span3{width:23.404255317%;*width:23.3510638276383%}.row-fluid .span2{width:14.89361702%;*width:14.8404255306383%}.row-fluid .span1{width:6.382978723%;*width:6.329787233638298%}.container{margin-right:auto;margin-left:auto;*zoom:1}.container:before,.container:after{display:table;content:""}.container:after{clear:both}.container-fluid{padding-right:20px;padding-left:20px;*zoom:1}.container-fluid:before,.container-fluid:after{display:table;content:""}.container-fluid:after{clear:both}p{margin:0 0 9px;font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;line-height:18px}p small{font-size:11px;color:#999}.lead{margin-bottom:18px;font-size:20px;font-weight:200;line-height:27px}h1,h2,h3,h4,h5,h6{margin:0;font-family:inherit;font-weight:bold;color:inherit;text-rendering:optimizelegibility}h1 small,h2 small,h3 small,h4 small,h5 small,h6 small{font-weight:normal;color:#999}h1{font-size:30px;line-height:36px}h1 small{font-size:18px}h2{font-size:24px;line-height:36px}h2 small{font-size:18px}h3{font-size:18px;line-height:27px}h3 small{font-size:14px}h4,h5,h6{line-height:18px}h4{font-size:14px}h4 small{font-size:12px}h5{font-size:12px}h6{font-size:11px;color:#999;text-transform:uppercase}.page-header{padding-bottom:17px;margin:18px 0;border-bottom:1px solid #eee}.page-header h1{line-height:1}ul,ol{padding:0;margin:0 0 9px 25px}ul ul,ul ol,ol ol,ol ul{margin-bottom:0}ul{list-style:disc}ol{list-style:decimal}li{line-height:18px}ul.unstyled,ol.unstyled{margin-left:0;list-style:none}dl{margin-bottom:18px}dt,dd{line-height:18px}dt{font-weight:bold;line-height:17px}dd{margin-left:9px}.dl-horizontal dt{float:left;width:120px;overflow:hidden;clear:left;text-align:right;text-overflow:ellipsis;white-space:nowrap}.dl-horizontal dd{margin-left:130px}hr{margin:18px 0;border:0;border-top:1px solid #eee;border-bottom:1px solid #fff}strong{font-weight:bold}em{font-style:italic}.muted{color:#999}abbr[title]{cursor:help;border-bottom:1px dotted #ddd}abbr.initialism{font-size:90%;text-transform:uppercase}blockquote{padding:0 0 0 15px;margin:0 0 18px;border-left:5px solid #eee}blockquote p{margin-bottom:0;font-size:16px;font-weight:300;line-height:22.5px}blockquote small{display:block;line-height:18px;color:#999}blockquote small:before{content:'\2014 \00A0'}blockquote.pull-right{float:right;padding-right:15px;padding-left:0;border-right:5px solid #eee;border-left:0}blockquote.pull-right p,blockquote.pull-right small{text-align:right}q:before,q:after,blockquote:before,blockquote:after{content:""}address{display:block;margin-bottom:18px;font-style:normal;line-height:18px}small{font-size:100%}cite{font-style:normal}code,pre{padding:0 3px 2px;font-family:Menlo,Monaco,Consolas,"Courier New",monospace;font-size:12px;color:#333;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px}code{padding:2px 4px;color:#d14;background-color:#f7f7f9;border:1px solid #e1e1e8}pre{display:block;padding:8.5px;margin:0 0 9px;font-size:12.025px;line-height:18px;word-break:break-all;word-wrap:break-word;white-space:pre;white-space:pre-wrap;background-color:#f5f5f5;border:1px solid #ccc;border:1px solid rgba(0,0,0,0.15);-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}pre.prettyprint{margin-bottom:18px}pre code{padding:0;color:inherit;background-color:transparent;border:0}.pre-scrollable{max-height:340px;overflow-y:scroll}form{margin:0 0 
18px}fieldset{padding:0;margin:0;border:0}legend{display:block;width:100%;padding:0;margin-bottom:27px;font-size:19.5px;line-height:36px;color:#333;border:0;border-bottom:1px solid #eee}legend small{font-size:13.5px;color:#999}label,input,button,select,textarea{font-size:13px;font-weight:normal;line-height:18px}input,button,select,textarea{font-family:"Helvetica Neue",Helvetica,Arial,sans-serif}label{display:block;margin-bottom:5px;color:#333}input,textarea,select,.uneditable-input{display:inline-block;width:210px;height:18px;padding:4px;margin-bottom:9px;font-size:13px;line-height:18px;color:#555;background-color:#fff;border:1px solid #ccc;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px}.uneditable-textarea{width:auto;height:auto}label input,label textarea,label select{display:block}input[type="image"],input[type="checkbox"],input[type="radio"]{width:auto;height:auto;padding:0;margin:3px 0;*margin-top:0;line-height:normal;cursor:pointer;background-color:transparent;border:0 \9;-webkit-border-radius:0;-moz-border-radius:0;border-radius:0}input[type="image"]{border:0}input[type="file"]{width:auto;padding:initial;line-height:initial;background-color:#fff;background-color:initial;border:initial;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}input[type="button"],input[type="reset"],input[type="submit"]{width:auto;height:auto}select,input[type="file"]{height:28px;*margin-top:4px;line-height:28px}input[type="file"]{line-height:18px \9}select{width:220px;background-color:#fff}select[multiple],select[size]{height:auto}input[type="image"]{-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}textarea{height:auto}input[type="hidden"]{display:none}.radio,.checkbox{min-height:18px;padding-left:18px}.radio input[type="radio"],.checkbox input[type="checkbox"]{float:left;margin-left:-18px}.controls>.radio:first-child,.controls>.checkbox:first-child{padding-top:5px}.radio.inline,.checkbox.inline{display:inline-block;padding-top:5px;margin-bottom:0;vertical-align:middle}.radio.inline+.radio.inline,.checkbox.inline+.checkbox.inline{margin-left:10px}input,textarea{-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);-moz-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);-webkit-transition:border linear .2s,box-shadow linear .2s;-moz-transition:border linear .2s,box-shadow linear .2s;-ms-transition:border linear .2s,box-shadow linear .2s;-o-transition:border linear .2s,box-shadow linear .2s;transition:border linear .2s,box-shadow linear .2s}input:focus,textarea:focus{border-color:rgba(82,168,236,0.8);outline:0;outline:thin dotted \9;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 8px rgba(82,168,236,0.6);-moz-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 8px rgba(82,168,236,0.6);box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 8px rgba(82,168,236,0.6)}input[type="file"]:focus,input[type="radio"]:focus,input[type="checkbox"]:focus,select:focus{outline:thin dotted #333;outline:5px auto -webkit-focus-ring-color;outline-offset:-2px;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.input-mini{width:60px}.input-small{width:90px}.input-medium{width:150px}.input-large{width:210px}.input-xlarge{width:270px}.input-xxlarge{width:530px}input[class*="span"],select[class*="span"],textarea[class*="span"],.uneditable-input[class*="span"],.row-fluid input[class*="span"],.row-fluid select[class*="span"],.row-fluid textarea[class*="span"],.row-fluid 
.uneditable-input[class*="span"]{float:none;margin-left:0}input,textarea,.uneditable-input{margin-left:0}input.span12,textarea.span12,.uneditable-input.span12{width:930px}input.span11,textarea.span11,.uneditable-input.span11{width:850px}input.span10,textarea.span10,.uneditable-input.span10{width:770px}input.span9,textarea.span9,.uneditable-input.span9{width:690px}input.span8,textarea.span8,.uneditable-input.span8{width:610px}input.span7,textarea.span7,.uneditable-input.span7{width:530px}input.span6,textarea.span6,.uneditable-input.span6{width:450px}input.span5,textarea.span5,.uneditable-input.span5{width:370px}input.span4,textarea.span4,.uneditable-input.span4{width:290px}input.span3,textarea.span3,.uneditable-input.span3{width:210px}input.span2,textarea.span2,.uneditable-input.span2{width:130px}input.span1,textarea.span1,.uneditable-input.span1{width:50px}input[disabled],select[disabled],textarea[disabled],input[readonly],select[readonly],textarea[readonly]{cursor:not-allowed;background-color:#eee;border-color:#ddd}input[type="radio"][disabled],input[type="checkbox"][disabled],input[type="radio"][readonly],input[type="checkbox"][readonly]{background-color:transparent}.control-group.warning>label,.control-group.warning .help-block,.control-group.warning .help-inline{color:#c09853}.control-group.warning input,.control-group.warning select,.control-group.warning textarea{color:#c09853;border-color:#c09853}.control-group.warning input:focus,.control-group.warning select:focus,.control-group.warning textarea:focus{border-color:#a47e3c;-webkit-box-shadow:0 0 6px #dbc59e;-moz-box-shadow:0 0 6px #dbc59e;box-shadow:0 0 6px #dbc59e}.control-group.warning .input-prepend .add-on,.control-group.warning .input-append .add-on{color:#c09853;background-color:#fcf8e3;border-color:#c09853}.control-group.error>label,.control-group.error .help-block,.control-group.error .help-inline{color:#b94a48}.control-group.error input,.control-group.error select,.control-group.error textarea{color:#b94a48;border-color:#b94a48}.control-group.error input:focus,.control-group.error select:focus,.control-group.error textarea:focus{border-color:#953b39;-webkit-box-shadow:0 0 6px #d59392;-moz-box-shadow:0 0 6px #d59392;box-shadow:0 0 6px #d59392}.control-group.error .input-prepend .add-on,.control-group.error .input-append .add-on{color:#b94a48;background-color:#f2dede;border-color:#b94a48}.control-group.success>label,.control-group.success .help-block,.control-group.success .help-inline{color:#468847}.control-group.success input,.control-group.success select,.control-group.success textarea{color:#468847;border-color:#468847}.control-group.success input:focus,.control-group.success select:focus,.control-group.success textarea:focus{border-color:#356635;-webkit-box-shadow:0 0 6px #7aba7b;-moz-box-shadow:0 0 6px #7aba7b;box-shadow:0 0 6px #7aba7b}.control-group.success .input-prepend .add-on,.control-group.success .input-append .add-on{color:#468847;background-color:#dff0d8;border-color:#468847}input:focus:required:invalid,textarea:focus:required:invalid,select:focus:required:invalid{color:#b94a48;border-color:#ee5f5b}input:focus:required:invalid:focus,textarea:focus:required:invalid:focus,select:focus:required:invalid:focus{border-color:#e9322d;-webkit-box-shadow:0 0 6px #f8b9b7;-moz-box-shadow:0 0 6px #f8b9b7;box-shadow:0 0 6px #f8b9b7}.form-actions{padding:17px 20px 18px;margin-top:18px;margin-bottom:18px;background-color:#f5f5f5;border-top:1px solid 
#ddd;*zoom:1}.form-actions:before,.form-actions:after{display:table;content:""}.form-actions:after{clear:both}.uneditable-input{overflow:hidden;white-space:nowrap;cursor:not-allowed;background-color:#fff;border-color:#eee;-webkit-box-shadow:inset 0 1px 2px rgba(0,0,0,0.025);-moz-box-shadow:inset 0 1px 2px rgba(0,0,0,0.025);box-shadow:inset 0 1px 2px rgba(0,0,0,0.025)}:-moz-placeholder{color:#999}::-webkit-input-placeholder{color:#999}.help-block,.help-inline{color:#555}.help-block{display:block;margin-bottom:9px}.help-inline{display:inline-block;*display:inline;padding-left:5px;vertical-align:middle;*zoom:1}.input-prepend,.input-append{margin-bottom:5px}.input-prepend input,.input-append input,.input-prepend select,.input-append select,.input-prepend .uneditable-input,.input-append .uneditable-input{position:relative;margin-bottom:0;*margin-left:0;vertical-align:middle;-webkit-border-radius:0 3px 3px 0;-moz-border-radius:0 3px 3px 0;border-radius:0 3px 3px 0}.input-prepend input:focus,.input-append input:focus,.input-prepend select:focus,.input-append select:focus,.input-prepend .uneditable-input:focus,.input-append .uneditable-input:focus{z-index:2}.input-prepend .uneditable-input,.input-append .uneditable-input{border-left-color:#ccc}.input-prepend .add-on,.input-append .add-on{display:inline-block;width:auto;height:18px;min-width:16px;padding:4px 5px;font-weight:normal;line-height:18px;text-align:center;text-shadow:0 1px 0 #fff;vertical-align:middle;background-color:#eee;border:1px solid #ccc}.input-prepend .add-on,.input-append .add-on,.input-prepend .btn,.input-append .btn{margin-left:-1px;-webkit-border-radius:0;-moz-border-radius:0;border-radius:0}.input-prepend .active,.input-append .active{background-color:#a9dba9;border-color:#46a546}.input-prepend .add-on,.input-prepend .btn{margin-right:-1px}.input-prepend .add-on:first-child,.input-prepend .btn:first-child{-webkit-border-radius:3px 0 0 3px;-moz-border-radius:3px 0 0 3px;border-radius:3px 0 0 3px}.input-append input,.input-append select,.input-append .uneditable-input{-webkit-border-radius:3px 0 0 3px;-moz-border-radius:3px 0 0 3px;border-radius:3px 0 0 3px}.input-append .uneditable-input{border-right-color:#ccc;border-left-color:#eee}.input-append .add-on:last-child,.input-append .btn:last-child{-webkit-border-radius:0 3px 3px 0;-moz-border-radius:0 3px 3px 0;border-radius:0 3px 3px 0}.input-prepend.input-append input,.input-prepend.input-append select,.input-prepend.input-append .uneditable-input{-webkit-border-radius:0;-moz-border-radius:0;border-radius:0}.input-prepend.input-append .add-on:first-child,.input-prepend.input-append .btn:first-child{margin-right:-1px;-webkit-border-radius:3px 0 0 3px;-moz-border-radius:3px 0 0 3px;border-radius:3px 0 0 3px}.input-prepend.input-append .add-on:last-child,.input-prepend.input-append .btn:last-child{margin-left:-1px;-webkit-border-radius:0 3px 3px 0;-moz-border-radius:0 3px 3px 0;border-radius:0 3px 3px 0}.search-query{padding-right:14px;padding-right:4px \9;padding-left:14px;padding-left:4px \9;margin-bottom:0;-webkit-border-radius:14px;-moz-border-radius:14px;border-radius:14px}.form-search input,.form-inline input,.form-horizontal input,.form-search textarea,.form-inline textarea,.form-horizontal textarea,.form-search select,.form-inline select,.form-horizontal select,.form-search .help-inline,.form-inline .help-inline,.form-horizontal .help-inline,.form-search .uneditable-input,.form-inline .uneditable-input,.form-horizontal .uneditable-input,.form-search 
.input-prepend,.form-inline .input-prepend,.form-horizontal .input-prepend,.form-search .input-append,.form-inline .input-append,.form-horizontal .input-append{display:inline-block;*display:inline;margin-bottom:0;*zoom:1}.form-search .hide,.form-inline .hide,.form-horizontal .hide{display:none}.form-search label,.form-inline label{display:inline-block}.form-search .input-append,.form-inline .input-append,.form-search .input-prepend,.form-inline .input-prepend{margin-bottom:0}.form-search .radio,.form-search .checkbox,.form-inline .radio,.form-inline .checkbox{padding-left:0;margin-bottom:0;vertical-align:middle}.form-search .radio input[type="radio"],.form-search .checkbox input[type="checkbox"],.form-inline .radio input[type="radio"],.form-inline .checkbox input[type="checkbox"]{float:left;margin-right:3px;margin-left:0}.control-group{margin-bottom:9px}legend+.control-group{margin-top:18px;-webkit-margin-top-collapse:separate}.form-horizontal .control-group{margin-bottom:18px;*zoom:1}.form-horizontal .control-group:before,.form-horizontal .control-group:after{display:table;content:""}.form-horizontal .control-group:after{clear:both}.form-horizontal .control-label{float:left;width:140px;padding-top:5px;text-align:right}.form-horizontal .controls{*display:inline-block;*padding-left:20px;margin-left:160px;*margin-left:0}.form-horizontal .controls:first-child{*padding-left:160px}.form-horizontal .help-block{margin-top:9px;margin-bottom:0}.form-horizontal .form-actions{padding-left:160px}table{max-width:100%;background-color:transparent;border-collapse:collapse;border-spacing:0}.table{width:100%;margin-bottom:18px}.table th,.table td{padding:8px;line-height:18px;text-align:left;vertical-align:top;border-top:1px solid #ddd}.table th{font-weight:bold}.table thead th{vertical-align:bottom}.table caption+thead tr:first-child th,.table caption+thead tr:first-child td,.table colgroup+thead tr:first-child th,.table colgroup+thead tr:first-child td,.table thead:first-child tr:first-child th,.table thead:first-child tr:first-child td{border-top:0}.table tbody+tbody{border-top:2px solid #ddd}.table-condensed th,.table-condensed td{padding:4px 5px}.table-bordered{border:1px solid #ddd;border-collapse:separate;*border-collapse:collapsed;border-left:0;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.table-bordered th,.table-bordered td{border-left:1px solid #ddd}.table-bordered caption+thead tr:first-child th,.table-bordered caption+tbody tr:first-child th,.table-bordered caption+tbody tr:first-child td,.table-bordered colgroup+thead tr:first-child th,.table-bordered colgroup+tbody tr:first-child th,.table-bordered colgroup+tbody tr:first-child td,.table-bordered thead:first-child tr:first-child th,.table-bordered tbody:first-child tr:first-child th,.table-bordered tbody:first-child tr:first-child td{border-top:0}.table-bordered thead:first-child tr:first-child th:first-child,.table-bordered tbody:first-child tr:first-child td:first-child{-webkit-border-top-left-radius:4px;border-top-left-radius:4px;-moz-border-radius-topleft:4px}.table-bordered thead:first-child tr:first-child th:last-child,.table-bordered tbody:first-child tr:first-child td:last-child{-webkit-border-top-right-radius:4px;border-top-right-radius:4px;-moz-border-radius-topright:4px}.table-bordered thead:last-child tr:last-child th:first-child,.table-bordered tbody:last-child tr:last-child td:first-child{-webkit-border-radius:0 0 0 4px;-moz-border-radius:0 0 0 4px;border-radius:0 0 0 
4px;-webkit-border-bottom-left-radius:4px;border-bottom-left-radius:4px;-moz-border-radius-bottomleft:4px}.table-bordered thead:last-child tr:last-child th:last-child,.table-bordered tbody:last-child tr:last-child td:last-child{-webkit-border-bottom-right-radius:4px;border-bottom-right-radius:4px;-moz-border-radius-bottomright:4px}.table-striped tbody tr:nth-child(odd) td,.table-striped tbody tr:nth-child(odd) th{background-color:#f9f9f9}.table tbody tr:hover td,.table tbody tr:hover th{background-color:#f5f5f5}table .span1{float:none;width:44px;margin-left:0}table .span2{float:none;width:124px;margin-left:0}table .span3{float:none;width:204px;margin-left:0}table .span4{float:none;width:284px;margin-left:0}table .span5{float:none;width:364px;margin-left:0}table .span6{float:none;width:444px;margin-left:0}table .span7{float:none;width:524px;margin-left:0}table .span8{float:none;width:604px;margin-left:0}table .span9{float:none;width:684px;margin-left:0}table .span10{float:none;width:764px;margin-left:0}table .span11{float:none;width:844px;margin-left:0}table .span12{float:none;width:924px;margin-left:0}table .span13{float:none;width:1004px;margin-left:0}table .span14{float:none;width:1084px;margin-left:0}table .span15{float:none;width:1164px;margin-left:0}table .span16{float:none;width:1244px;margin-left:0}table .span17{float:none;width:1324px;margin-left:0}table .span18{float:none;width:1404px;margin-left:0}table .span19{float:none;width:1484px;margin-left:0}table .span20{float:none;width:1564px;margin-left:0}table .span21{float:none;width:1644px;margin-left:0}table .span22{float:none;width:1724px;margin-left:0}table .span23{float:none;width:1804px;margin-left:0}table .span24{float:none;width:1884px;margin-left:0}[class^="icon-"],[class*=" icon-"]{display:inline-block;width:14px;height:14px;*margin-right:.3em;line-height:14px;vertical-align:text-top;background-image:url("../img/glyphicons-halflings.png");background-position:14px 14px;background-repeat:no-repeat}[class^="icon-"]:last-child,[class*=" icon-"]:last-child{*margin-left:0}.icon-white{background-image:url("../img/glyphicons-halflings-white.png")}.icon-glass{background-position:0 0}.icon-music{background-position:-24px 0}.icon-search{background-position:-48px 0}.icon-envelope{background-position:-72px 0}.icon-heart{background-position:-96px 0}.icon-star{background-position:-120px 0}.icon-star-empty{background-position:-144px 0}.icon-user{background-position:-168px 0}.icon-film{background-position:-192px 0}.icon-th-large{background-position:-216px 0}.icon-th{background-position:-240px 0}.icon-th-list{background-position:-264px 0}.icon-ok{background-position:-288px 0}.icon-remove{background-position:-312px 0}.icon-zoom-in{background-position:-336px 0}.icon-zoom-out{background-position:-360px 0}.icon-off{background-position:-384px 0}.icon-signal{background-position:-408px 0}.icon-cog{background-position:-432px 0}.icon-trash{background-position:-456px 0}.icon-home{background-position:0 -24px}.icon-file{background-position:-24px -24px}.icon-time{background-position:-48px -24px}.icon-road{background-position:-72px -24px}.icon-download-alt{background-position:-96px -24px}.icon-download{background-position:-120px -24px}.icon-upload{background-position:-144px -24px}.icon-inbox{background-position:-168px -24px}.icon-play-circle{background-position:-192px -24px}.icon-repeat{background-position:-216px -24px}.icon-refresh{background-position:-240px -24px}.icon-list-alt{background-position:-264px -24px}.icon-lock{background-position:-287px 
-24px}.icon-flag{background-position:-312px -24px}.icon-headphones{background-position:-336px -24px}.icon-volume-off{background-position:-360px -24px}.icon-volume-down{background-position:-384px -24px}.icon-volume-up{background-position:-408px -24px}.icon-qrcode{background-position:-432px -24px}.icon-barcode{background-position:-456px -24px}.icon-tag{background-position:0 -48px}.icon-tags{background-position:-25px -48px}.icon-book{background-position:-48px -48px}.icon-bookmark{background-position:-72px -48px}.icon-print{background-position:-96px -48px}.icon-camera{background-position:-120px -48px}.icon-font{background-position:-144px -48px}.icon-bold{background-position:-167px -48px}.icon-italic{background-position:-192px -48px}.icon-text-height{background-position:-216px -48px}.icon-text-width{background-position:-240px -48px}.icon-align-left{background-position:-264px -48px}.icon-align-center{background-position:-288px -48px}.icon-align-right{background-position:-312px -48px}.icon-align-justify{background-position:-336px -48px}.icon-list{background-position:-360px -48px}.icon-indent-left{background-position:-384px -48px}.icon-indent-right{background-position:-408px -48px}.icon-facetime-video{background-position:-432px -48px}.icon-picture{background-position:-456px -48px}.icon-pencil{background-position:0 -72px}.icon-map-marker{background-position:-24px -72px}.icon-adjust{background-position:-48px -72px}.icon-tint{background-position:-72px -72px}.icon-edit{background-position:-96px -72px}.icon-share{background-position:-120px -72px}.icon-check{background-position:-144px -72px}.icon-move{background-position:-168px -72px}.icon-step-backward{background-position:-192px -72px}.icon-fast-backward{background-position:-216px -72px}.icon-backward{background-position:-240px -72px}.icon-play{background-position:-264px -72px}.icon-pause{background-position:-288px -72px}.icon-stop{background-position:-312px -72px}.icon-forward{background-position:-336px -72px}.icon-fast-forward{background-position:-360px -72px}.icon-step-forward{background-position:-384px -72px}.icon-eject{background-position:-408px -72px}.icon-chevron-left{background-position:-432px -72px}.icon-chevron-right{background-position:-456px -72px}.icon-plus-sign{background-position:0 -96px}.icon-minus-sign{background-position:-24px -96px}.icon-remove-sign{background-position:-48px -96px}.icon-ok-sign{background-position:-72px -96px}.icon-question-sign{background-position:-96px -96px}.icon-info-sign{background-position:-120px -96px}.icon-screenshot{background-position:-144px -96px}.icon-remove-circle{background-position:-168px -96px}.icon-ok-circle{background-position:-192px -96px}.icon-ban-circle{background-position:-216px -96px}.icon-arrow-left{background-position:-240px -96px}.icon-arrow-right{background-position:-264px -96px}.icon-arrow-up{background-position:-289px -96px}.icon-arrow-down{background-position:-312px -96px}.icon-share-alt{background-position:-336px -96px}.icon-resize-full{background-position:-360px -96px}.icon-resize-small{background-position:-384px -96px}.icon-plus{background-position:-408px -96px}.icon-minus{background-position:-433px -96px}.icon-asterisk{background-position:-456px -96px}.icon-exclamation-sign{background-position:0 -120px}.icon-gift{background-position:-24px -120px}.icon-leaf{background-position:-48px -120px}.icon-fire{background-position:-72px -120px}.icon-eye-open{background-position:-96px -120px}.icon-eye-close{background-position:-120px -120px}.icon-warning-sign{background-position:-144px 
-120px}.icon-plane{background-position:-168px -120px}.icon-calendar{background-position:-192px -120px}.icon-random{background-position:-216px -120px}.icon-comment{background-position:-240px -120px}.icon-magnet{background-position:-264px -120px}.icon-chevron-up{background-position:-288px -120px}.icon-chevron-down{background-position:-313px -119px}.icon-retweet{background-position:-336px -120px}.icon-shopping-cart{background-position:-360px -120px}.icon-folder-close{background-position:-384px -120px}.icon-folder-open{background-position:-408px -120px}.icon-resize-vertical{background-position:-432px -119px}.icon-resize-horizontal{background-position:-456px -118px}.icon-hdd{background-position:0 -144px}.icon-bullhorn{background-position:-24px -144px}.icon-bell{background-position:-48px -144px}.icon-certificate{background-position:-72px -144px}.icon-thumbs-up{background-position:-96px -144px}.icon-thumbs-down{background-position:-120px -144px}.icon-hand-right{background-position:-144px -144px}.icon-hand-left{background-position:-168px -144px}.icon-hand-up{background-position:-192px -144px}.icon-hand-down{background-position:-216px -144px}.icon-circle-arrow-right{background-position:-240px -144px}.icon-circle-arrow-left{background-position:-264px -144px}.icon-circle-arrow-up{background-position:-288px -144px}.icon-circle-arrow-down{background-position:-312px -144px}.icon-globe{background-position:-336px -144px}.icon-wrench{background-position:-360px -144px}.icon-tasks{background-position:-384px -144px}.icon-filter{background-position:-408px -144px}.icon-briefcase{background-position:-432px -144px}.icon-fullscreen{background-position:-456px -144px}.dropup,.dropdown{position:relative}.dropdown-toggle{*margin-bottom:-3px}.dropdown-toggle:active,.open .dropdown-toggle{outline:0}.caret{display:inline-block;width:0;height:0;vertical-align:top;border-top:4px solid #000;border-right:4px solid transparent;border-left:4px solid transparent;content:"";opacity:.3;filter:alpha(opacity=30)}.dropdown .caret{margin-top:8px;margin-left:2px}.dropdown:hover .caret,.open .caret{opacity:1;filter:alpha(opacity=100)}.dropdown-menu{position:absolute;top:100%;left:0;z-index:1000;display:none;float:left;min-width:160px;padding:4px 0;margin:1px 0 0;list-style:none;background-color:#fff;border:1px solid #ccc;border:1px solid rgba(0,0,0,0.2);*border-right-width:2px;*border-bottom-width:2px;-webkit-border-radius:5px;-moz-border-radius:5px;border-radius:5px;-webkit-box-shadow:0 5px 10px rgba(0,0,0,0.2);-moz-box-shadow:0 5px 10px rgba(0,0,0,0.2);box-shadow:0 5px 10px rgba(0,0,0,0.2);-webkit-background-clip:padding-box;-moz-background-clip:padding;background-clip:padding-box}.dropdown-menu.pull-right{right:0;left:auto}.dropdown-menu .divider{*width:100%;height:1px;margin:8px 1px;*margin:-5px 0 5px;overflow:hidden;background-color:#e5e5e5;border-bottom:1px solid #fff}.dropdown-menu a{display:block;padding:3px 15px;clear:both;font-weight:normal;line-height:18px;color:#333;white-space:nowrap}.dropdown-menu li>a:hover,.dropdown-menu .active>a,.dropdown-menu .active>a:hover{color:#fff;text-decoration:none;background-color:#08c}.open{*z-index:1000}.open .dropdown-menu{display:block}.pull-right .dropdown-menu{right:0;left:auto}.dropup .caret,.navbar-fixed-bottom .dropdown .caret{border-top:0;border-bottom:4px solid #000;content:"\2191"}.dropup .dropdown-menu,.navbar-fixed-bottom .dropdown 
.dropdown-menu{top:auto;bottom:100%;margin-bottom:1px}.typeahead{margin-top:2px;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.well{min-height:20px;padding:19px;margin-bottom:20px;background-color:#f5f5f5;border:1px solid #eee;border:1px solid rgba(0,0,0,0.05);-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.05);-moz-box-shadow:inset 0 1px 1px rgba(0,0,0,0.05);box-shadow:inset 0 1px 1px rgba(0,0,0,0.05)}.well blockquote{border-color:#ddd;border-color:rgba(0,0,0,0.15)}.well-large{padding:24px;-webkit-border-radius:6px;-moz-border-radius:6px;border-radius:6px}.well-small{padding:9px;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px}.fade{opacity:0;filter:alpha(opacity=0);-webkit-transition:opacity .15s linear;-moz-transition:opacity .15s linear;-ms-transition:opacity .15s linear;-o-transition:opacity .15s linear;transition:opacity .15s linear}.fade.in{opacity:1;filter:alpha(opacity=100)}.collapse{position:relative;height:0;overflow:hidden;-webkit-transition:height .35s ease;-moz-transition:height .35s ease;-ms-transition:height .35s ease;-o-transition:height .35s ease;transition:height .35s ease}.collapse.in{height:auto}.close{float:right;font-size:20px;font-weight:bold;line-height:18px;color:#000;text-shadow:0 1px 0 #fff;opacity:.2;filter:alpha(opacity=20)}.close:hover{color:#000;text-decoration:none;cursor:pointer;opacity:.4;filter:alpha(opacity=40)}button.close{padding:0;cursor:pointer;background:transparent;border:0;-webkit-appearance:none}.btn{display:inline-block;*display:inline;padding:4px 10px 4px;margin-bottom:0;*margin-left:.3em;font-size:13px;line-height:18px;*line-height:20px;color:#333;text-align:center;text-shadow:0 1px 1px rgba(255,255,255,0.75);vertical-align:middle;cursor:pointer;background-color:#f5f5f5;*background-color:#e6e6e6;background-image:-ms-linear-gradient(top,#fff,#e6e6e6);background-image:-webkit-gradient(linear,0 0,0 100%,from(#fff),to(#e6e6e6));background-image:-webkit-linear-gradient(top,#fff,#e6e6e6);background-image:-o-linear-gradient(top,#fff,#e6e6e6);background-image:linear-gradient(top,#fff,#e6e6e6);background-image:-moz-linear-gradient(top,#fff,#e6e6e6);background-repeat:repeat-x;border:1px solid #ccc;*border:0;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);border-color:#e6e6e6 #e6e6e6 #bfbfbf;border-bottom-color:#b3b3b3;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#ffffff',endColorstr='#e6e6e6',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false);*zoom:1;-webkit-box-shadow:inset 0 1px 0 rgba(255,255,255,0.2),0 1px 2px rgba(0,0,0,0.05);-moz-box-shadow:inset 0 1px 0 rgba(255,255,255,0.2),0 1px 2px rgba(0,0,0,0.05);box-shadow:inset 0 1px 0 rgba(255,255,255,0.2),0 1px 2px rgba(0,0,0,0.05)}.btn:hover,.btn:active,.btn.active,.btn.disabled,.btn[disabled]{background-color:#e6e6e6;*background-color:#d9d9d9}.btn:active,.btn.active{background-color:#ccc \9}.btn:first-child{*margin-left:0}.btn:hover{color:#333;text-decoration:none;background-color:#e6e6e6;*background-color:#d9d9d9;background-position:0 -15px;-webkit-transition:background-position .1s linear;-moz-transition:background-position .1s linear;-ms-transition:background-position .1s linear;-o-transition:background-position .1s linear;transition:background-position .1s linear}.btn:focus{outline:thin dotted #333;outline:5px auto 
-webkit-focus-ring-color;outline-offset:-2px}.btn.active,.btn:active{background-color:#e6e6e6;background-color:#d9d9d9 \9;background-image:none;outline:0;-webkit-box-shadow:inset 0 2px 4px rgba(0,0,0,0.15),0 1px 2px rgba(0,0,0,0.05);-moz-box-shadow:inset 0 2px 4px rgba(0,0,0,0.15),0 1px 2px rgba(0,0,0,0.05);box-shadow:inset 0 2px 4px rgba(0,0,0,0.15),0 1px 2px rgba(0,0,0,0.05)}.btn.disabled,.btn[disabled]{cursor:default;background-color:#e6e6e6;background-image:none;opacity:.65;filter:alpha(opacity=65);-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.btn-large{padding:9px 14px;font-size:15px;line-height:normal;-webkit-border-radius:5px;-moz-border-radius:5px;border-radius:5px}.btn-large [class^="icon-"]{margin-top:1px}.btn-small{padding:5px 9px;font-size:11px;line-height:16px}.btn-small [class^="icon-"]{margin-top:-1px}.btn-mini{padding:2px 6px;font-size:11px;line-height:14px}.btn-primary,.btn-primary:hover,.btn-warning,.btn-warning:hover,.btn-danger,.btn-danger:hover,.btn-success,.btn-success:hover,.btn-info,.btn-info:hover,.btn-inverse,.btn-inverse:hover{color:#fff;text-shadow:0 -1px 0 rgba(0,0,0,0.25)}.btn-primary.active,.btn-warning.active,.btn-danger.active,.btn-success.active,.btn-info.active,.btn-inverse.active{color:rgba(255,255,255,0.75)}.btn{border-color:#ccc;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25)}.btn-primary{background-color:#0074cc;*background-color:#05c;background-image:-ms-linear-gradient(top,#08c,#05c);background-image:-webkit-gradient(linear,0 0,0 100%,from(#08c),to(#05c));background-image:-webkit-linear-gradient(top,#08c,#05c);background-image:-o-linear-gradient(top,#08c,#05c);background-image:-moz-linear-gradient(top,#08c,#05c);background-image:linear-gradient(top,#08c,#05c);background-repeat:repeat-x;border-color:#05c #05c #003580;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#0088cc',endColorstr='#0055cc',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false)}.btn-primary:hover,.btn-primary:active,.btn-primary.active,.btn-primary.disabled,.btn-primary[disabled]{background-color:#05c;*background-color:#004ab3}.btn-primary:active,.btn-primary.active{background-color:#004099 \9}.btn-warning{background-color:#faa732;*background-color:#f89406;background-image:-ms-linear-gradient(top,#fbb450,#f89406);background-image:-webkit-gradient(linear,0 0,0 100%,from(#fbb450),to(#f89406));background-image:-webkit-linear-gradient(top,#fbb450,#f89406);background-image:-o-linear-gradient(top,#fbb450,#f89406);background-image:-moz-linear-gradient(top,#fbb450,#f89406);background-image:linear-gradient(top,#fbb450,#f89406);background-repeat:repeat-x;border-color:#f89406 #f89406 #ad6704;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#fbb450',endColorstr='#f89406',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false)}.btn-warning:hover,.btn-warning:active,.btn-warning.active,.btn-warning.disabled,.btn-warning[disabled]{background-color:#f89406;*background-color:#df8505}.btn-warning:active,.btn-warning.active{background-color:#c67605 \9}.btn-danger{background-color:#da4f49;*background-color:#bd362f;background-image:-ms-linear-gradient(top,#ee5f5b,#bd362f);background-image:-webkit-gradient(linear,0 0,0 
100%,from(#ee5f5b),to(#bd362f));background-image:-webkit-linear-gradient(top,#ee5f5b,#bd362f);background-image:-o-linear-gradient(top,#ee5f5b,#bd362f);background-image:-moz-linear-gradient(top,#ee5f5b,#bd362f);background-image:linear-gradient(top,#ee5f5b,#bd362f);background-repeat:repeat-x;border-color:#bd362f #bd362f #802420;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#ee5f5b',endColorstr='#bd362f',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false)}.btn-danger:hover,.btn-danger:active,.btn-danger.active,.btn-danger.disabled,.btn-danger[disabled]{background-color:#bd362f;*background-color:#a9302a}.btn-danger:active,.btn-danger.active{background-color:#942a25 \9}.btn-success{background-color:#5bb75b;*background-color:#51a351;background-image:-ms-linear-gradient(top,#62c462,#51a351);background-image:-webkit-gradient(linear,0 0,0 100%,from(#62c462),to(#51a351));background-image:-webkit-linear-gradient(top,#62c462,#51a351);background-image:-o-linear-gradient(top,#62c462,#51a351);background-image:-moz-linear-gradient(top,#62c462,#51a351);background-image:linear-gradient(top,#62c462,#51a351);background-repeat:repeat-x;border-color:#51a351 #51a351 #387038;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#62c462',endColorstr='#51a351',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false)}.btn-success:hover,.btn-success:active,.btn-success.active,.btn-success.disabled,.btn-success[disabled]{background-color:#51a351;*background-color:#499249}.btn-success:active,.btn-success.active{background-color:#408140 \9}.btn-info{background-color:#49afcd;*background-color:#2f96b4;background-image:-ms-linear-gradient(top,#5bc0de,#2f96b4);background-image:-webkit-gradient(linear,0 0,0 100%,from(#5bc0de),to(#2f96b4));background-image:-webkit-linear-gradient(top,#5bc0de,#2f96b4);background-image:-o-linear-gradient(top,#5bc0de,#2f96b4);background-image:-moz-linear-gradient(top,#5bc0de,#2f96b4);background-image:linear-gradient(top,#5bc0de,#2f96b4);background-repeat:repeat-x;border-color:#2f96b4 #2f96b4 #1f6377;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#5bc0de',endColorstr='#2f96b4',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false)}.btn-info:hover,.btn-info:active,.btn-info.active,.btn-info.disabled,.btn-info[disabled]{background-color:#2f96b4;*background-color:#2a85a0}.btn-info:active,.btn-info.active{background-color:#24748c \9}.btn-inverse{background-color:#414141;*background-color:#222;background-image:-ms-linear-gradient(top,#555,#222);background-image:-webkit-gradient(linear,0 0,0 100%,from(#555),to(#222));background-image:-webkit-linear-gradient(top,#555,#222);background-image:-o-linear-gradient(top,#555,#222);background-image:-moz-linear-gradient(top,#555,#222);background-image:linear-gradient(top,#555,#222);background-repeat:repeat-x;border-color:#222 #222 #000;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) 
rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#555555',endColorstr='#222222',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false)}.btn-inverse:hover,.btn-inverse:active,.btn-inverse.active,.btn-inverse.disabled,.btn-inverse[disabled]{background-color:#222;*background-color:#151515}.btn-inverse:active,.btn-inverse.active{background-color:#080808 \9}button.btn,input[type="submit"].btn{*padding-top:2px;*padding-bottom:2px}button.btn::-moz-focus-inner,input[type="submit"].btn::-moz-focus-inner{padding:0;border:0}button.btn.btn-large,input[type="submit"].btn.btn-large{*padding-top:7px;*padding-bottom:7px}button.btn.btn-small,input[type="submit"].btn.btn-small{*padding-top:3px;*padding-bottom:3px}button.btn.btn-mini,input[type="submit"].btn.btn-mini{*padding-top:1px;*padding-bottom:1px}.btn-group{position:relative;*margin-left:.3em;*zoom:1}.btn-group:before,.btn-group:after{display:table;content:""}.btn-group:after{clear:both}.btn-group:first-child{*margin-left:0}.btn-group+.btn-group{margin-left:5px}.btn-toolbar{margin-top:9px;margin-bottom:9px}.btn-toolbar .btn-group{display:inline-block;*display:inline;*zoom:1}.btn-group>.btn{position:relative;float:left;margin-left:-1px;-webkit-border-radius:0;-moz-border-radius:0;border-radius:0}.btn-group>.btn:first-child{margin-left:0;-webkit-border-bottom-left-radius:4px;border-bottom-left-radius:4px;-webkit-border-top-left-radius:4px;border-top-left-radius:4px;-moz-border-radius-bottomleft:4px;-moz-border-radius-topleft:4px}.btn-group>.btn:last-child,.btn-group>.dropdown-toggle{-webkit-border-top-right-radius:4px;border-top-right-radius:4px;-webkit-border-bottom-right-radius:4px;border-bottom-right-radius:4px;-moz-border-radius-topright:4px;-moz-border-radius-bottomright:4px}.btn-group>.btn.large:first-child{margin-left:0;-webkit-border-bottom-left-radius:6px;border-bottom-left-radius:6px;-webkit-border-top-left-radius:6px;border-top-left-radius:6px;-moz-border-radius-bottomleft:6px;-moz-border-radius-topleft:6px}.btn-group>.btn.large:last-child,.btn-group>.large.dropdown-toggle{-webkit-border-top-right-radius:6px;border-top-right-radius:6px;-webkit-border-bottom-right-radius:6px;border-bottom-right-radius:6px;-moz-border-radius-topright:6px;-moz-border-radius-bottomright:6px}.btn-group>.btn:hover,.btn-group>.btn:focus,.btn-group>.btn:active,.btn-group>.btn.active{z-index:2}.btn-group .dropdown-toggle:active,.btn-group.open .dropdown-toggle{outline:0}.btn-group>.dropdown-toggle{*padding-top:4px;padding-right:8px;*padding-bottom:4px;padding-left:8px;-webkit-box-shadow:inset 1px 0 0 rgba(255,255,255,0.125),inset 0 1px 0 rgba(255,255,255,0.2),0 1px 2px rgba(0,0,0,0.05);-moz-box-shadow:inset 1px 0 0 rgba(255,255,255,0.125),inset 0 1px 0 rgba(255,255,255,0.2),0 1px 2px rgba(0,0,0,0.05);box-shadow:inset 1px 0 0 rgba(255,255,255,0.125),inset 0 1px 0 rgba(255,255,255,0.2),0 1px 2px rgba(0,0,0,0.05)}.btn-group>.btn-mini.dropdown-toggle{padding-right:5px;padding-left:5px}.btn-group>.btn-small.dropdown-toggle{*padding-top:4px;*padding-bottom:4px}.btn-group>.btn-large.dropdown-toggle{padding-right:12px;padding-left:12px}.btn-group.open .dropdown-toggle{background-image:none;-webkit-box-shadow:inset 0 2px 4px rgba(0,0,0,0.15),0 1px 2px rgba(0,0,0,0.05);-moz-box-shadow:inset 0 2px 4px rgba(0,0,0,0.15),0 1px 2px rgba(0,0,0,0.05);box-shadow:inset 0 2px 4px rgba(0,0,0,0.15),0 1px 2px rgba(0,0,0,0.05)}.btn-group.open .btn.dropdown-toggle{background-color:#e6e6e6}.btn-group.open 
.btn-primary.dropdown-toggle{background-color:#05c}.btn-group.open .btn-warning.dropdown-toggle{background-color:#f89406}.btn-group.open .btn-danger.dropdown-toggle{background-color:#bd362f}.btn-group.open .btn-success.dropdown-toggle{background-color:#51a351}.btn-group.open .btn-info.dropdown-toggle{background-color:#2f96b4}.btn-group.open .btn-inverse.dropdown-toggle{background-color:#222}.btn .caret{margin-top:7px;margin-left:0}.btn:hover .caret,.open.btn-group .caret{opacity:1;filter:alpha(opacity=100)}.btn-mini .caret{margin-top:5px}.btn-small .caret{margin-top:6px}.btn-large .caret{margin-top:6px;border-top-width:5px;border-right-width:5px;border-left-width:5px}.dropup .btn-large .caret{border-top:0;border-bottom:5px solid #000}.btn-primary .caret,.btn-warning .caret,.btn-danger .caret,.btn-info .caret,.btn-success .caret,.btn-inverse .caret{border-top-color:#fff;border-bottom-color:#fff;opacity:.75;filter:alpha(opacity=75)}.alert{padding:8px 35px 8px 14px;margin-bottom:18px;color:#c09853;text-shadow:0 1px 0 rgba(255,255,255,0.5);background-color:#fcf8e3;border:1px solid #fbeed5;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.alert-heading{color:inherit}.alert .close{position:relative;top:-2px;right:-21px;line-height:18px}.alert-success{color:#468847;background-color:#dff0d8;border-color:#d6e9c6}.alert-danger,.alert-error{color:#b94a48;background-color:#f2dede;border-color:#eed3d7}.alert-info{color:#3a87ad;background-color:#d9edf7;border-color:#bce8f1}.alert-block{padding-top:14px;padding-bottom:14px}.alert-block>p,.alert-block>ul{margin-bottom:0}.alert-block p+p{margin-top:5px}.nav{margin-bottom:18px;margin-left:0;list-style:none}.nav>li>a{display:block}.nav>li>a:hover{text-decoration:none;background-color:#eee}.nav>.pull-right{float:right}.nav .nav-header{display:block;padding:3px 15px;font-size:11px;font-weight:bold;line-height:18px;color:#999;text-shadow:0 1px 0 rgba(255,255,255,0.5);text-transform:uppercase}.nav li+.nav-header{margin-top:9px}.nav-list{padding-right:15px;padding-left:15px;margin-bottom:0}.nav-list>li>a,.nav-list .nav-header{margin-right:-15px;margin-left:-15px;text-shadow:0 1px 0 rgba(255,255,255,0.5)}.nav-list>li>a{padding:3px 15px}.nav-list>.active>a,.nav-list>.active>a:hover{color:#fff;text-shadow:0 -1px 0 rgba(0,0,0,0.2);background-color:#08c}.nav-list [class^="icon-"]{margin-right:2px}.nav-list .divider{*width:100%;height:1px;margin:8px 1px;*margin:-5px 0 5px;overflow:hidden;background-color:#e5e5e5;border-bottom:1px solid #fff}.nav-tabs,.nav-pills{*zoom:1}.nav-tabs:before,.nav-pills:before,.nav-tabs:after,.nav-pills:after{display:table;content:""}.nav-tabs:after,.nav-pills:after{clear:both}.nav-tabs>li,.nav-pills>li{float:left}.nav-tabs>li>a,.nav-pills>li>a{padding-right:12px;padding-left:12px;margin-right:2px;line-height:14px}.nav-tabs{border-bottom:1px solid #ddd}.nav-tabs>li{margin-bottom:-1px}.nav-tabs>li>a{padding-top:8px;padding-bottom:8px;line-height:18px;border:1px solid transparent;-webkit-border-radius:4px 4px 0 0;-moz-border-radius:4px 4px 0 0;border-radius:4px 4px 0 0}.nav-tabs>li>a:hover{border-color:#eee #eee #ddd}.nav-tabs>.active>a,.nav-tabs>.active>a:hover{color:#555;cursor:default;background-color:#fff;border:1px solid 
#ddd;border-bottom-color:transparent}.nav-pills>li>a{padding-top:8px;padding-bottom:8px;margin-top:2px;margin-bottom:2px;-webkit-border-radius:5px;-moz-border-radius:5px;border-radius:5px}.nav-pills>.active>a,.nav-pills>.active>a:hover{color:#fff;background-color:#08c}.nav-stacked>li{float:none}.nav-stacked>li>a{margin-right:0}.nav-tabs.nav-stacked{border-bottom:0}.nav-tabs.nav-stacked>li>a{border:1px solid #ddd;-webkit-border-radius:0;-moz-border-radius:0;border-radius:0}.nav-tabs.nav-stacked>li:first-child>a{-webkit-border-radius:4px 4px 0 0;-moz-border-radius:4px 4px 0 0;border-radius:4px 4px 0 0}.nav-tabs.nav-stacked>li:last-child>a{-webkit-border-radius:0 0 4px 4px;-moz-border-radius:0 0 4px 4px;border-radius:0 0 4px 4px}.nav-tabs.nav-stacked>li>a:hover{z-index:2;border-color:#ddd}.nav-pills.nav-stacked>li>a{margin-bottom:3px}.nav-pills.nav-stacked>li:last-child>a{margin-bottom:1px}.nav-tabs .dropdown-menu{-webkit-border-radius:0 0 5px 5px;-moz-border-radius:0 0 5px 5px;border-radius:0 0 5px 5px}.nav-pills .dropdown-menu{-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.nav-tabs .dropdown-toggle .caret,.nav-pills .dropdown-toggle .caret{margin-top:6px;border-top-color:#08c;border-bottom-color:#08c}.nav-tabs .dropdown-toggle:hover .caret,.nav-pills .dropdown-toggle:hover .caret{border-top-color:#005580;border-bottom-color:#005580}.nav-tabs .active .dropdown-toggle .caret,.nav-pills .active .dropdown-toggle .caret{border-top-color:#333;border-bottom-color:#333}.nav>.dropdown.active>a:hover{color:#000;cursor:pointer}.nav-tabs .open .dropdown-toggle,.nav-pills .open .dropdown-toggle,.nav>li.dropdown.open.active>a:hover{color:#fff;background-color:#999;border-color:#999}.nav li.dropdown.open .caret,.nav li.dropdown.open.active .caret,.nav li.dropdown.open a:hover .caret{border-top-color:#fff;border-bottom-color:#fff;opacity:1;filter:alpha(opacity=100)}.tabs-stacked .open>a:hover{border-color:#999}.tabbable{*zoom:1}.tabbable:before,.tabbable:after{display:table;content:""}.tabbable:after{clear:both}.tab-content{overflow:auto}.tabs-below>.nav-tabs,.tabs-right>.nav-tabs,.tabs-left>.nav-tabs{border-bottom:0}.tab-content>.tab-pane,.pill-content>.pill-pane{display:none}.tab-content>.active,.pill-content>.active{display:block}.tabs-below>.nav-tabs{border-top:1px solid #ddd}.tabs-below>.nav-tabs>li{margin-top:-1px;margin-bottom:0}.tabs-below>.nav-tabs>li>a{-webkit-border-radius:0 0 4px 4px;-moz-border-radius:0 0 4px 4px;border-radius:0 0 4px 4px}.tabs-below>.nav-tabs>li>a:hover{border-top-color:#ddd;border-bottom-color:transparent}.tabs-below>.nav-tabs>.active>a,.tabs-below>.nav-tabs>.active>a:hover{border-color:transparent #ddd #ddd #ddd}.tabs-left>.nav-tabs>li,.tabs-right>.nav-tabs>li{float:none}.tabs-left>.nav-tabs>li>a,.tabs-right>.nav-tabs>li>a{min-width:74px;margin-right:0;margin-bottom:3px}.tabs-left>.nav-tabs{float:left;margin-right:19px;border-right:1px solid #ddd}.tabs-left>.nav-tabs>li>a{margin-right:-1px;-webkit-border-radius:4px 0 0 4px;-moz-border-radius:4px 0 0 4px;border-radius:4px 0 0 4px}.tabs-left>.nav-tabs>li>a:hover{border-color:#eee #ddd #eee #eee}.tabs-left>.nav-tabs .active>a,.tabs-left>.nav-tabs .active>a:hover{border-color:#ddd transparent #ddd #ddd;*border-right-color:#fff}.tabs-right>.nav-tabs{float:right;margin-left:19px;border-left:1px solid #ddd}.tabs-right>.nav-tabs>li>a{margin-left:-1px;-webkit-border-radius:0 4px 4px 0;-moz-border-radius:0 4px 4px 0;border-radius:0 4px 4px 0}.tabs-right>.nav-tabs>li>a:hover{border-color:#eee #eee #eee 
#ddd}.tabs-right>.nav-tabs .active>a,.tabs-right>.nav-tabs .active>a:hover{border-color:#ddd #ddd #ddd transparent;*border-left-color:#fff}.navbar{*position:relative;*z-index:2;margin-bottom:18px;overflow:visible}.navbar-inner{min-height:40px;padding-right:20px;padding-left:20px;background-color:#2c2c2c;background-image:-moz-linear-gradient(top,#333,#222);background-image:-ms-linear-gradient(top,#333,#222);background-image:-webkit-gradient(linear,0 0,0 100%,from(#333),to(#222));background-image:-webkit-linear-gradient(top,#333,#222);background-image:-o-linear-gradient(top,#333,#222);background-image:linear-gradient(top,#333,#222);background-repeat:repeat-x;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#333333',endColorstr='#222222',GradientType=0);-webkit-box-shadow:0 1px 3px rgba(0,0,0,0.25),inset 0 -1px 0 rgba(0,0,0,0.1);-moz-box-shadow:0 1px 3px rgba(0,0,0,0.25),inset 0 -1px 0 rgba(0,0,0,0.1);box-shadow:0 1px 3px rgba(0,0,0,0.25),inset 0 -1px 0 rgba(0,0,0,0.1)}.navbar .container{width:auto}.nav-collapse.collapse{height:auto}.navbar{color:#999}.navbar .brand:hover{text-decoration:none}.navbar .brand{display:block;float:left;padding:8px 20px 12px;margin-left:-20px;font-size:20px;font-weight:200;line-height:1;color:#999}.navbar .navbar-text{margin-bottom:0;line-height:40px}.navbar .navbar-link{color:#999}.navbar .navbar-link:hover{color:#fff}.navbar .btn,.navbar .btn-group{margin-top:5px}.navbar .btn-group .btn{margin:0}.navbar-form{margin-bottom:0;*zoom:1}.navbar-form:before,.navbar-form:after{display:table;content:""}.navbar-form:after{clear:both}.navbar-form input,.navbar-form select,.navbar-form .radio,.navbar-form .checkbox{margin-top:5px}.navbar-form input,.navbar-form select{display:inline-block;margin-bottom:0}.navbar-form input[type="image"],.navbar-form input[type="checkbox"],.navbar-form input[type="radio"]{margin-top:3px}.navbar-form .input-append,.navbar-form .input-prepend{margin-top:6px;white-space:nowrap}.navbar-form .input-append input,.navbar-form .input-prepend input{margin-top:0}.navbar-search{position:relative;float:left;margin-top:6px;margin-bottom:0}.navbar-search .search-query{padding:4px 9px;font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;font-weight:normal;line-height:1;color:#fff;background-color:#626262;border:1px solid #151515;-webkit-box-shadow:inset 0 1px 2px rgba(0,0,0,0.1),0 1px 0 rgba(255,255,255,0.15);-moz-box-shadow:inset 0 1px 2px rgba(0,0,0,0.1),0 1px 0 rgba(255,255,255,0.15);box-shadow:inset 0 1px 2px rgba(0,0,0,0.1),0 1px 0 rgba(255,255,255,0.15);-webkit-transition:none;-moz-transition:none;-ms-transition:none;-o-transition:none;transition:none}.navbar-search .search-query:-moz-placeholder{color:#ccc}.navbar-search .search-query::-webkit-input-placeholder{color:#ccc}.navbar-search .search-query:focus,.navbar-search .search-query.focused{padding:5px 10px;color:#333;text-shadow:0 1px 0 #fff;background-color:#fff;border:0;outline:0;-webkit-box-shadow:0 0 3px rgba(0,0,0,0.15);-moz-box-shadow:0 0 3px rgba(0,0,0,0.15);box-shadow:0 0 3px rgba(0,0,0,0.15)}.navbar-fixed-top,.navbar-fixed-bottom{position:fixed;right:0;left:0;z-index:1030;margin-bottom:0}.navbar-fixed-top .navbar-inner,.navbar-fixed-bottom .navbar-inner{padding-right:0;padding-left:0;-webkit-border-radius:0;-moz-border-radius:0;border-radius:0}.navbar-fixed-top .container,.navbar-fixed-bottom .container{width:940px}.navbar-fixed-top{top:0}.navbar-fixed-bottom{bottom:0}.navbar 
.nav{position:relative;left:0;display:block;float:left;margin:0 10px 0 0}.navbar .nav.pull-right{float:right}.navbar .nav>li{display:block;float:left}.navbar .nav>li>a{float:none;padding:9px 10px 11px;line-height:19px;color:#999;text-decoration:none;text-shadow:0 -1px 0 rgba(0,0,0,0.25)}.navbar .btn{display:inline-block;padding:4px 10px 4px;margin:5px 5px 6px;line-height:18px}.navbar .btn-group{padding:5px 5px 6px;margin:0}.navbar .nav>li>a:hover{color:#fff;text-decoration:none;background-color:transparent}.navbar .nav .active>a,.navbar .nav .active>a:hover{color:#fff;text-decoration:none;background-color:#222}.navbar .divider-vertical{width:1px;height:40px;margin:0 9px;overflow:hidden;background-color:#222;border-right:1px solid #333}.navbar .nav.pull-right{margin-right:0;margin-left:10px}.navbar .btn-navbar{display:none;float:right;padding:7px 10px;margin-right:5px;margin-left:5px;background-color:#2c2c2c;*background-color:#222;background-image:-ms-linear-gradient(top,#333,#222);background-image:-webkit-gradient(linear,0 0,0 100%,from(#333),to(#222));background-image:-webkit-linear-gradient(top,#333,#222);background-image:-o-linear-gradient(top,#333,#222);background-image:linear-gradient(top,#333,#222);background-image:-moz-linear-gradient(top,#333,#222);background-repeat:repeat-x;border-color:#222 #222 #000;border-color:rgba(0,0,0,0.1) rgba(0,0,0,0.1) rgba(0,0,0,0.25);filter:progid:dximagetransform.microsoft.gradient(startColorstr='#333333',endColorstr='#222222',GradientType=0);filter:progid:dximagetransform.microsoft.gradient(enabled=false);-webkit-box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.075);-moz-box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.075);box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.075)}.navbar .btn-navbar:hover,.navbar .btn-navbar:active,.navbar .btn-navbar.active,.navbar .btn-navbar.disabled,.navbar .btn-navbar[disabled]{background-color:#222;*background-color:#151515}.navbar .btn-navbar:active,.navbar .btn-navbar.active{background-color:#080808 \9}.navbar .btn-navbar .icon-bar{display:block;width:18px;height:2px;background-color:#f5f5f5;-webkit-border-radius:1px;-moz-border-radius:1px;border-radius:1px;-webkit-box-shadow:0 1px 0 rgba(0,0,0,0.25);-moz-box-shadow:0 1px 0 rgba(0,0,0,0.25);box-shadow:0 1px 0 rgba(0,0,0,0.25)}.btn-navbar .icon-bar+.icon-bar{margin-top:3px}.navbar .dropdown-menu:before{position:absolute;top:-7px;left:9px;display:inline-block;border-right:7px solid transparent;border-bottom:7px solid #ccc;border-left:7px solid transparent;border-bottom-color:rgba(0,0,0,0.2);content:''}.navbar .dropdown-menu:after{position:absolute;top:-6px;left:10px;display:inline-block;border-right:6px solid transparent;border-bottom:6px solid #fff;border-left:6px solid transparent;content:''}.navbar-fixed-bottom .dropdown-menu:before{top:auto;bottom:-7px;border-top:7px solid #ccc;border-bottom:0;border-top-color:rgba(0,0,0,0.2)}.navbar-fixed-bottom .dropdown-menu:after{top:auto;bottom:-6px;border-top:6px solid #fff;border-bottom:0}.navbar .nav li.dropdown .dropdown-toggle .caret,.navbar .nav li.dropdown.open .caret{border-top-color:#fff;border-bottom-color:#fff}.navbar .nav li.dropdown.active .caret{opacity:1;filter:alpha(opacity=100)}.navbar .nav li.dropdown.open>.dropdown-toggle,.navbar .nav li.dropdown.active>.dropdown-toggle,.navbar .nav li.dropdown.open.active>.dropdown-toggle{background-color:transparent}.navbar .nav li.dropdown.active>.dropdown-toggle:hover{color:#fff}.navbar 
.pull-right .dropdown-menu,.navbar .dropdown-menu.pull-right{right:0;left:auto}.navbar .pull-right .dropdown-menu:before,.navbar .dropdown-menu.pull-right:before{right:12px;left:auto}.navbar .pull-right .dropdown-menu:after,.navbar .dropdown-menu.pull-right:after{right:13px;left:auto}.breadcrumb{padding:7px 14px;margin:0 0 18px;list-style:none;background-color:#fbfbfb;background-image:-moz-linear-gradient(top,#fff,#f5f5f5);background-image:-ms-linear-gradient(top,#fff,#f5f5f5);background-image:-webkit-gradient(linear,0 0,0 100%,from(#fff),to(#f5f5f5));background-image:-webkit-linear-gradient(top,#fff,#f5f5f5);background-image:-o-linear-gradient(top,#fff,#f5f5f5);background-image:linear-gradient(top,#fff,#f5f5f5);background-repeat:repeat-x;border:1px solid #ddd;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#ffffff',endColorstr='#f5f5f5',GradientType=0);-webkit-box-shadow:inset 0 1px 0 #fff;-moz-box-shadow:inset 0 1px 0 #fff;box-shadow:inset 0 1px 0 #fff}.breadcrumb li{display:inline-block;*display:inline;text-shadow:0 1px 0 #fff;*zoom:1}.breadcrumb .divider{padding:0 5px;color:#999}.breadcrumb .active a{color:#333}.pagination{height:36px;margin:18px 0}.pagination ul{display:inline-block;*display:inline;margin-bottom:0;margin-left:0;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px;*zoom:1;-webkit-box-shadow:0 1px 2px rgba(0,0,0,0.05);-moz-box-shadow:0 1px 2px rgba(0,0,0,0.05);box-shadow:0 1px 2px rgba(0,0,0,0.05)}.pagination li{display:inline}.pagination a{float:left;padding:0 14px;line-height:34px;text-decoration:none;border:1px solid #ddd;border-left-width:0}.pagination a:hover,.pagination .active a{background-color:#f5f5f5}.pagination .active a{color:#999;cursor:default}.pagination .disabled span,.pagination .disabled a,.pagination .disabled a:hover{color:#999;cursor:default;background-color:transparent}.pagination li:first-child a{border-left-width:1px;-webkit-border-radius:3px 0 0 3px;-moz-border-radius:3px 0 0 3px;border-radius:3px 0 0 3px}.pagination li:last-child a{-webkit-border-radius:0 3px 3px 0;-moz-border-radius:0 3px 3px 0;border-radius:0 3px 3px 0}.pagination-centered{text-align:center}.pagination-right{text-align:right}.pager{margin-bottom:18px;margin-left:0;text-align:center;list-style:none;*zoom:1}.pager:before,.pager:after{display:table;content:""}.pager:after{clear:both}.pager li{display:inline}.pager a{display:inline-block;padding:5px 14px;background-color:#fff;border:1px solid #ddd;-webkit-border-radius:15px;-moz-border-radius:15px;border-radius:15px}.pager a:hover{text-decoration:none;background-color:#f5f5f5}.pager .next a{float:right}.pager .previous a{float:left}.pager .disabled a,.pager .disabled a:hover{color:#999;cursor:default;background-color:#fff}.modal-open .dropdown-menu{z-index:2050}.modal-open .dropdown.open{*z-index:2050}.modal-open .popover{z-index:2060}.modal-open .tooltip{z-index:2070}.modal-backdrop{position:fixed;top:0;right:0;bottom:0;left:0;z-index:1040;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop,.modal-backdrop.fade.in{opacity:.8;filter:alpha(opacity=80)}.modal{position:fixed;top:50%;left:50%;z-index:1050;width:560px;margin:-250px 0 0 -280px;overflow:auto;background-color:#fff;border:1px solid #999;border:1px solid rgba(0,0,0,0.3);*border:1px solid #999;-webkit-border-radius:6px;-moz-border-radius:6px;border-radius:6px;-webkit-box-shadow:0 3px 7px rgba(0,0,0,0.3);-moz-box-shadow:0 3px 7px rgba(0,0,0,0.3);box-shadow:0 3px 
7px rgba(0,0,0,0.3);-webkit-background-clip:padding-box;-moz-background-clip:padding-box;background-clip:padding-box}.modal.fade{top:-25%;-webkit-transition:opacity .3s linear,top .3s ease-out;-moz-transition:opacity .3s linear,top .3s ease-out;-ms-transition:opacity .3s linear,top .3s ease-out;-o-transition:opacity .3s linear,top .3s ease-out;transition:opacity .3s linear,top .3s ease-out}.modal.fade.in{top:50%}.modal-header{padding:9px 15px;border-bottom:1px solid #eee}.modal-header .close{margin-top:2px}.modal-body{max-height:400px;padding:15px;overflow-y:auto}.modal-form{margin-bottom:0}.modal-footer{padding:14px 15px 15px;margin-bottom:0;text-align:right;background-color:#f5f5f5;border-top:1px solid #ddd;-webkit-border-radius:0 0 6px 6px;-moz-border-radius:0 0 6px 6px;border-radius:0 0 6px 6px;*zoom:1;-webkit-box-shadow:inset 0 1px 0 #fff;-moz-box-shadow:inset 0 1px 0 #fff;box-shadow:inset 0 1px 0 #fff}.modal-footer:before,.modal-footer:after{display:table;content:""}.modal-footer:after{clear:both}.modal-footer .btn+.btn{margin-bottom:0;margin-left:5px}.modal-footer .btn-group .btn+.btn{margin-left:-1px}.tooltip{position:absolute;z-index:1020;display:block;padding:5px;font-size:11px;opacity:0;filter:alpha(opacity=0);visibility:visible}.tooltip.in{opacity:.8;filter:alpha(opacity=80)}.tooltip.top{margin-top:-2px}.tooltip.right{margin-left:2px}.tooltip.bottom{margin-top:2px}.tooltip.left{margin-left:-2px}.tooltip.top .tooltip-arrow{bottom:0;left:50%;margin-left:-5px;border-top:5px solid #000;border-right:5px solid transparent;border-left:5px solid transparent}.tooltip.left .tooltip-arrow{top:50%;right:0;margin-top:-5px;border-top:5px solid transparent;border-bottom:5px solid transparent;border-left:5px solid #000}.tooltip.bottom .tooltip-arrow{top:0;left:50%;margin-left:-5px;border-right:5px solid transparent;border-bottom:5px solid #000;border-left:5px solid transparent}.tooltip.right .tooltip-arrow{top:50%;left:0;margin-top:-5px;border-top:5px solid transparent;border-right:5px solid #000;border-bottom:5px solid transparent}.tooltip-inner{max-width:200px;padding:3px 8px;color:#fff;text-align:center;text-decoration:none;background-color:#000;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.tooltip-arrow{position:absolute;width:0;height:0}.popover{position:absolute;top:0;left:0;z-index:1010;display:none;padding:5px}.popover.top{margin-top:-5px}.popover.right{margin-left:5px}.popover.bottom{margin-top:5px}.popover.left{margin-left:-5px}.popover.top .arrow{bottom:0;left:50%;margin-left:-5px;border-top:5px solid #000;border-right:5px solid transparent;border-left:5px solid transparent}.popover.right .arrow{top:50%;left:0;margin-top:-5px;border-top:5px solid transparent;border-right:5px solid #000;border-bottom:5px solid transparent}.popover.bottom .arrow{top:0;left:50%;margin-left:-5px;border-right:5px solid transparent;border-bottom:5px solid #000;border-left:5px solid transparent}.popover.left .arrow{top:50%;right:0;margin-top:-5px;border-top:5px solid transparent;border-bottom:5px solid transparent;border-left:5px solid #000}.popover .arrow{position:absolute;width:0;height:0}.popover-inner{width:280px;padding:3px;overflow:hidden;background:#000;background:rgba(0,0,0,0.8);-webkit-border-radius:6px;-moz-border-radius:6px;border-radius:6px;-webkit-box-shadow:0 3px 7px rgba(0,0,0,0.3);-moz-box-shadow:0 3px 7px rgba(0,0,0,0.3);box-shadow:0 3px 7px rgba(0,0,0,0.3)}.popover-title{padding:9px 15px;line-height:1;background-color:#f5f5f5;border-bottom:1px solid 
#eee;-webkit-border-radius:3px 3px 0 0;-moz-border-radius:3px 3px 0 0;border-radius:3px 3px 0 0}.popover-content{padding:14px;background-color:#fff;-webkit-border-radius:0 0 3px 3px;-moz-border-radius:0 0 3px 3px;border-radius:0 0 3px 3px;-webkit-background-clip:padding-box;-moz-background-clip:padding-box;background-clip:padding-box}.popover-content p,.popover-content ul,.popover-content ol{margin-bottom:0}.thumbnails{margin-left:-20px;list-style:none;*zoom:1}.thumbnails:before,.thumbnails:after{display:table;content:""}.thumbnails:after{clear:both}.row-fluid .thumbnails{margin-left:0}.thumbnails>li{float:left;margin-bottom:18px;margin-left:20px}.thumbnail{display:block;padding:4px;line-height:1;border:1px solid #ddd;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px;-webkit-box-shadow:0 1px 1px rgba(0,0,0,0.075);-moz-box-shadow:0 1px 1px rgba(0,0,0,0.075);box-shadow:0 1px 1px rgba(0,0,0,0.075)}a.thumbnail:hover{border-color:#08c;-webkit-box-shadow:0 1px 4px rgba(0,105,214,0.25);-moz-box-shadow:0 1px 4px rgba(0,105,214,0.25);box-shadow:0 1px 4px rgba(0,105,214,0.25)}.thumbnail>img{display:block;max-width:100%;margin-right:auto;margin-left:auto}.thumbnail .caption{padding:9px}.label,.badge{font-size:10.998px;font-weight:bold;line-height:14px;color:#fff;text-shadow:0 -1px 0 rgba(0,0,0,0.25);white-space:nowrap;vertical-align:baseline;background-color:#999}.label{padding:1px 4px 2px;-webkit-border-radius:3px;-moz-border-radius:3px;border-radius:3px}.badge{padding:1px 9px 2px;-webkit-border-radius:9px;-moz-border-radius:9px;border-radius:9px}a.label:hover,a.badge:hover{color:#fff;text-decoration:none;cursor:pointer}.label-important,.badge-important{background-color:#b94a48}.label-important[href],.badge-important[href]{background-color:#953b39}.label-warning,.badge-warning{background-color:#f89406}.label-warning[href],.badge-warning[href]{background-color:#c67605}.label-success,.badge-success{background-color:#468847}.label-success[href],.badge-success[href]{background-color:#356635}.label-info,.badge-info{background-color:#3a87ad}.label-info[href],.badge-info[href]{background-color:#2d6987}.label-inverse,.badge-inverse{background-color:#333}.label-inverse[href],.badge-inverse[href]{background-color:#1a1a1a}@-webkit-keyframes progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}@-moz-keyframes progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}@-ms-keyframes progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}@-o-keyframes progress-bar-stripes{from{background-position:0 0}to{background-position:40px 0}}@keyframes progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}.progress{height:18px;margin-bottom:18px;overflow:hidden;background-color:#f7f7f7;background-image:-moz-linear-gradient(top,#f5f5f5,#f9f9f9);background-image:-ms-linear-gradient(top,#f5f5f5,#f9f9f9);background-image:-webkit-gradient(linear,0 0,0 100%,from(#f5f5f5),to(#f9f9f9));background-image:-webkit-linear-gradient(top,#f5f5f5,#f9f9f9);background-image:-o-linear-gradient(top,#f5f5f5,#f9f9f9);background-image:linear-gradient(top,#f5f5f5,#f9f9f9);background-repeat:repeat-x;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#f5f5f5',endColorstr='#f9f9f9',GradientType=0);-webkit-box-shadow:inset 0 1px 2px rgba(0,0,0,0.1);-moz-box-shadow:inset 0 1px 2px rgba(0,0,0,0.1);box-shadow:inset 0 1px 2px rgba(0,0,0,0.1)}.progress 
.bar{width:0;height:18px;font-size:12px;color:#fff;text-align:center;text-shadow:0 -1px 0 rgba(0,0,0,0.25);background-color:#0e90d2;background-image:-moz-linear-gradient(top,#149bdf,#0480be);background-image:-webkit-gradient(linear,0 0,0 100%,from(#149bdf),to(#0480be));background-image:-webkit-linear-gradient(top,#149bdf,#0480be);background-image:-o-linear-gradient(top,#149bdf,#0480be);background-image:linear-gradient(top,#149bdf,#0480be);background-image:-ms-linear-gradient(top,#149bdf,#0480be);background-repeat:repeat-x;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#149bdf',endColorstr='#0480be',GradientType=0);-webkit-box-shadow:inset 0 -1px 0 rgba(0,0,0,0.15);-moz-box-shadow:inset 0 -1px 0 rgba(0,0,0,0.15);box-shadow:inset 0 -1px 0 rgba(0,0,0,0.15);-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;box-sizing:border-box;-webkit-transition:width .6s ease;-moz-transition:width .6s ease;-ms-transition:width .6s ease;-o-transition:width .6s ease;transition:width .6s ease}.progress-striped .bar{background-color:#149bdf;background-image:-o-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-webkit-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-moz-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-ms-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-webkit-gradient(linear,0 100%,100% 0,color-stop(0.25,rgba(255,255,255,0.15)),color-stop(0.25,transparent),color-stop(0.5,transparent),color-stop(0.5,rgba(255,255,255,0.15)),color-stop(0.75,rgba(255,255,255,0.15)),color-stop(0.75,transparent),to(transparent));background-image:linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);-webkit-background-size:40px 40px;-moz-background-size:40px 40px;-o-background-size:40px 40px;background-size:40px 40px}.progress.active .bar{-webkit-animation:progress-bar-stripes 2s linear infinite;-moz-animation:progress-bar-stripes 2s linear infinite;-ms-animation:progress-bar-stripes 2s linear infinite;-o-animation:progress-bar-stripes 2s linear infinite;animation:progress-bar-stripes 2s linear infinite}.progress-danger .bar{background-color:#dd514c;background-image:-moz-linear-gradient(top,#ee5f5b,#c43c35);background-image:-ms-linear-gradient(top,#ee5f5b,#c43c35);background-image:-webkit-gradient(linear,0 0,0 100%,from(#ee5f5b),to(#c43c35));background-image:-webkit-linear-gradient(top,#ee5f5b,#c43c35);background-image:-o-linear-gradient(top,#ee5f5b,#c43c35);background-image:linear-gradient(top,#ee5f5b,#c43c35);background-repeat:repeat-x;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#ee5f5b',endColorstr='#c43c35',GradientType=0)}.progress-danger.progress-striped .bar{background-color:#ee5f5b;background-image:-webkit-gradient(linear,0 100%,100% 
0,color-stop(0.25,rgba(255,255,255,0.15)),color-stop(0.25,transparent),color-stop(0.5,transparent),color-stop(0.5,rgba(255,255,255,0.15)),color-stop(0.75,rgba(255,255,255,0.15)),color-stop(0.75,transparent),to(transparent));background-image:-webkit-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-moz-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-ms-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-o-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent)}.progress-success .bar{background-color:#5eb95e;background-image:-moz-linear-gradient(top,#62c462,#57a957);background-image:-ms-linear-gradient(top,#62c462,#57a957);background-image:-webkit-gradient(linear,0 0,0 100%,from(#62c462),to(#57a957));background-image:-webkit-linear-gradient(top,#62c462,#57a957);background-image:-o-linear-gradient(top,#62c462,#57a957);background-image:linear-gradient(top,#62c462,#57a957);background-repeat:repeat-x;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#62c462',endColorstr='#57a957',GradientType=0)}.progress-success.progress-striped .bar{background-color:#62c462;background-image:-webkit-gradient(linear,0 100%,100% 0,color-stop(0.25,rgba(255,255,255,0.15)),color-stop(0.25,transparent),color-stop(0.5,transparent),color-stop(0.5,rgba(255,255,255,0.15)),color-stop(0.75,rgba(255,255,255,0.15)),color-stop(0.75,transparent),to(transparent));background-image:-webkit-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-moz-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-ms-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-o-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent)}.progress-info .bar{background-color:#4bb1cf;background-image:-moz-linear-gradient(top,#5bc0de,#339bb9);background-image:-ms-linear-gradient(top,#5bc0de,#339bb9);background-image:-webkit-gradient(linear,0 0,0 
100%,from(#5bc0de),to(#339bb9));background-image:-webkit-linear-gradient(top,#5bc0de,#339bb9);background-image:-o-linear-gradient(top,#5bc0de,#339bb9);background-image:linear-gradient(top,#5bc0de,#339bb9);background-repeat:repeat-x;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#5bc0de',endColorstr='#339bb9',GradientType=0)}.progress-info.progress-striped .bar{background-color:#5bc0de;background-image:-webkit-gradient(linear,0 100%,100% 0,color-stop(0.25,rgba(255,255,255,0.15)),color-stop(0.25,transparent),color-stop(0.5,transparent),color-stop(0.5,rgba(255,255,255,0.15)),color-stop(0.75,rgba(255,255,255,0.15)),color-stop(0.75,transparent),to(transparent));background-image:-webkit-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-moz-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-ms-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-o-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent)}.progress-warning .bar{background-color:#faa732;background-image:-moz-linear-gradient(top,#fbb450,#f89406);background-image:-ms-linear-gradient(top,#fbb450,#f89406);background-image:-webkit-gradient(linear,0 0,0 100%,from(#fbb450),to(#f89406));background-image:-webkit-linear-gradient(top,#fbb450,#f89406);background-image:-o-linear-gradient(top,#fbb450,#f89406);background-image:linear-gradient(top,#fbb450,#f89406);background-repeat:repeat-x;filter:progid:dximagetransform.microsoft.gradient(startColorstr='#fbb450',endColorstr='#f89406',GradientType=0)}.progress-warning.progress-striped .bar{background-color:#fbb450;background-image:-webkit-gradient(linear,0 100%,100% 0,color-stop(0.25,rgba(255,255,255,0.15)),color-stop(0.25,transparent),color-stop(0.5,transparent),color-stop(0.5,rgba(255,255,255,0.15)),color-stop(0.75,rgba(255,255,255,0.15)),color-stop(0.75,transparent),to(transparent));background-image:-webkit-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-moz-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-ms-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:-o-linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent);background-image:linear-gradient(-45deg,rgba(255,255,255,0.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,0.15) 50%,rgba(255,255,255,0.15) 75%,transparent 75%,transparent)}.accordion{margin-bottom:18px}.accordion-group{margin-bottom:2px;border:1px solid 
#e5e5e5;-webkit-border-radius:4px;-moz-border-radius:4px;border-radius:4px}.accordion-heading{border-bottom:0}.accordion-heading .accordion-toggle{display:block;padding:8px 15px}.accordion-toggle{cursor:pointer}.accordion-inner{padding:9px 15px;border-top:1px solid #e5e5e5}.carousel{position:relative;margin-bottom:18px;line-height:1}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel .item{position:relative;display:none;-webkit-transition:.6s ease-in-out left;-moz-transition:.6s ease-in-out left;-ms-transition:.6s ease-in-out left;-o-transition:.6s ease-in-out left;transition:.6s ease-in-out left}.carousel .item>img{display:block;line-height:1}.carousel .active,.carousel .next,.carousel .prev{display:block}.carousel .active{left:0}.carousel .next,.carousel .prev{position:absolute;top:0;width:100%}.carousel .next{left:100%}.carousel .prev{left:-100%}.carousel .next.left,.carousel .prev.right{left:0}.carousel .active.left{left:-100%}.carousel .active.right{left:100%}.carousel-control{position:absolute;top:40%;left:15px;width:40px;height:40px;margin-top:-20px;font-size:60px;font-weight:100;line-height:30px;color:#fff;text-align:center;background:#222;border:3px solid #fff;-webkit-border-radius:23px;-moz-border-radius:23px;border-radius:23px;opacity:.5;filter:alpha(opacity=50)}.carousel-control.right{right:15px;left:auto}.carousel-control:hover{color:#fff;text-decoration:none;opacity:.9;filter:alpha(opacity=90)}.carousel-caption{position:absolute;right:0;bottom:0;left:0;padding:10px 15px 5px;background:#333;background:rgba(0,0,0,0.75)}.carousel-caption h4,.carousel-caption p{color:#fff}.hero-unit{padding:60px;margin-bottom:30px;background-color:#eee;-webkit-border-radius:6px;-moz-border-radius:6px;border-radius:6px}.hero-unit h1{margin-bottom:0;font-size:60px;line-height:1;letter-spacing:-1px;color:inherit}.hero-unit p{font-size:18px;font-weight:200;line-height:27px;color:inherit}.pull-right{float:right}.pull-left{float:left}.hide{display:none}.show{display:block}.invisible{visibility:hidden}
diff --git a/web/sugarcane/static/bootstrap.min.js b/web/sugarcane/static/bootstrap.min.js
deleted file mode 100644
index 1f87730a3ae..00000000000
--- a/web/sugarcane/static/bootstrap.min.js
+++ /dev/null
@@ -1,6 +0,0 @@
-/*!
-* Bootstrap.js by @fat & @mdo
-* Copyright 2012 Twitter, Inc.
-* http://www.apache.org/licenses/LICENSE-2.0.txt
-*/
-!function(a){a(function(){"use strict",a.support.transition=function(){var a=function(){var a=document.createElement("bootstrap"),b={WebkitTransition:"webkitTransitionEnd",MozTransition:"transitionend",OTransition:"oTransitionEnd",msTransition:"MSTransitionEnd",transition:"transitionend"},c;for(c in b)if(a.style[c]!==undefined)return b[c]}();return a&&{end:a}}()})}(window.jQuery),!function(a){"use strict";var b='[data-dismiss="alert"]',c=function(c){a(c).on("click",b,this.close)};c.prototype.close=function(b){function f(){e.trigger("closed").remove()}var c=a(this),d=c.attr("data-target"),e;d||(d=c.attr("href"),d=d&&d.replace(/.*(?=#[^\s]*$)/,"")),e=a(d),b&&b.preventDefault(),e.length||(e=c.hasClass("alert")?c:c.parent()),e.trigger(b=a.Event("close"));if(b.isDefaultPrevented())return;e.removeClass("in"),a.support.transition&&e.hasClass("fade")?e.on(a.support.transition.end,f):f()},a.fn.alert=function(b){return this.each(function(){var d=a(this),e=d.data("alert");e||d.data("alert",e=new c(this)),typeof b=="string"&&e[b].call(d)})},a.fn.alert.Constructor=c,a(function(){a("body").on("click.alert.data-api",b,c.prototype.close)})}(window.jQuery),!function(a){"use strict";var b=function(b,c){this.$element=a(b),this.options=a.extend({},a.fn.button.defaults,c)};b.prototype.setState=function(a){var b="disabled",c=this.$element,d=c.data(),e=c.is("input")?"val":"html";a+="Text",d.resetText||c.data("resetText",c[e]()),c[e](d[a]||this.options[a]),setTimeout(function(){a=="loadingText"?c.addClass(b).attr(b,b):c.removeClass(b).removeAttr(b)},0)},b.prototype.toggle=function(){var a=this.$element.parent('[data-toggle="buttons-radio"]');a&&a.find(".active").removeClass("active"),this.$element.toggleClass("active")},a.fn.button=function(c){return this.each(function(){var d=a(this),e=d.data("button"),f=typeof c=="object"&&c;e||d.data("button",e=new b(this,f)),c=="toggle"?e.toggle():c&&e.setState(c)})},a.fn.button.defaults={loadingText:"loading..."},a.fn.button.Constructor=b,a(function(){a("body").on("click.button.data-api","[data-toggle^=button]",function(b){var c=a(b.target);c.hasClass("btn")||(c=c.closest(".btn")),c.button("toggle")})})}(window.jQuery),!function(a){"use strict";var b=function(b,c){this.$element=a(b),this.options=c,this.options.slide&&this.slide(this.options.slide),this.options.pause=="hover"&&this.$element.on("mouseenter",a.proxy(this.pause,this)).on("mouseleave",a.proxy(this.cycle,this))};b.prototype={cycle:function(b){return b||(this.paused=!1),this.options.interval&&!this.paused&&(this.interval=setInterval(a.proxy(this.next,this),this.options.interval)),this},to:function(b){var c=this.$element.find(".active"),d=c.parent().children(),e=d.index(c),f=this;if(b>d.length-1||b<0)return;return this.sliding?this.$element.one("slid",function(){f.to(b)}):e==b?this.pause().cycle():this.slide(b>e?"next":"prev",a(d[b]))},pause:function(a){return a||(this.paused=!0),clearInterval(this.interval),this.interval=null,this},next:function(){if(this.sliding)return;return this.slide("next")},prev:function(){if(this.sliding)return;return this.slide("prev")},slide:function(b,c){var
d=this.$element.find(".active"),e=c||d[b](),f=this.interval,g=b=="next"?"left":"right",h=b=="next"?"first":"last",i=this,j=a.Event("slide");this.sliding=!0,f&&this.pause(),e=e.length?e:this.$element.find(".item")[h]();if(e.hasClass("active"))return;if(a.support.transition&&this.$element.hasClass("slide")){this.$element.trigger(j);if(j.isDefaultPrevented())return;e.addClass(b),e[0].offsetWidth,d.addClass(g),e.addClass(g),this.$element.one(a.support.transition.end,function(){e.removeClass([b,g].join(" ")).addClass("active"),d.removeClass(["active",g].join(" ")),i.sliding=!1,setTimeout(function(){i.$element.trigger("slid")},0)})}else{this.$element.trigger(j);if(j.isDefaultPrevented())return;d.removeClass("active"),e.addClass("active"),this.sliding=!1,this.$element.trigger("slid")}return f&&this.cycle(),this}},a.fn.carousel=function(c){return this.each(function(){var d=a(this),e=d.data("carousel"),f=a.extend({},a.fn.carousel.defaults,typeof c=="object"&&c);e||d.data("carousel",e=new b(this,f)),typeof c=="number"?e.to(c):typeof c=="string"||(c=f.slide)?e[c]():f.interval&&e.cycle()})},a.fn.carousel.defaults={interval:5e3,pause:"hover"},a.fn.carousel.Constructor=b,a(function(){a("body").on("click.carousel.data-api","[data-slide]",function(b){var c=a(this),d,e=a(c.attr("data-target")||(d=c.attr("href"))&&d.replace(/.*(?=#[^\s]+$)/,"")),f=!e.data("modal")&&a.extend({},e.data(),c.data());e.carousel(f),b.preventDefault()})})}(window.jQuery),!function(a){"use strict";var b=function(b,c){this.$element=a(b),this.options=a.extend({},a.fn.collapse.defaults,c),this.options.parent&&(this.$parent=a(this.options.parent)),this.options.toggle&&this.toggle()};b.prototype={constructor:b,dimension:function(){var a=this.$element.hasClass("width");return a?"width":"height"},show:function(){var b,c,d,e;if(this.transitioning)return;b=this.dimension(),c=a.camelCase(["scroll",b].join("-")),d=this.$parent&&this.$parent.find("> .accordion-group > .in");if(d&&d.length){e=d.data("collapse");if(e&&e.transitioning)return;d.collapse("hide"),e||d.data("collapse",null)}this.$element[b](0),this.transition("addClass",a.Event("show"),"shown"),this.$element[b](this.$element[0][c])},hide:function(){var b;if(this.transitioning)return;b=this.dimension(),this.reset(this.$element[b]()),this.transition("removeClass",a.Event("hide"),"hidden"),this.$element[b](0)},reset:function(a){var b=this.dimension();return this.$element.removeClass("collapse")[b](a||"auto")[0].offsetWidth,this.$element[a!==null?"addClass":"removeClass"]("collapse"),this},transition:function(b,c,d){var e=this,f=function(){c.type=="show"&&e.reset(),e.transitioning=0,e.$element.trigger(d)};this.$element.trigger(c);if(c.isDefaultPrevented())return;this.transitioning=1,this.$element[b]("in"),a.support.transition&&this.$element.hasClass("collapse")?this.$element.one(a.support.transition.end,f):f()},toggle:function(){this[this.$element.hasClass("in")?"hide":"show"]()}},a.fn.collapse=function(c){return this.each(function(){var d=a(this),e=d.data("collapse"),f=typeof c=="object"&&c;e||d.data("collapse",e=new b(this,f)),typeof c=="string"&&e[c]()})},a.fn.collapse.defaults={toggle:!0},a.fn.collapse.Constructor=b,a(function(){a("body").on("click.collapse.data-api","[data-toggle=collapse]",function(b){var c=a(this),d,e=c.attr("data-target")||b.preventDefault()||(d=c.attr("href"))&&d.replace(/.*(?=#[^\s]+$)/,""),f=a(e).data("collapse")?"toggle":c.data();a(e).collapse(f)})})}(window.jQuery),!function(a){function d(){a(b).parent().removeClass("open")}"use strict";var 
b='[data-toggle="dropdown"]',c=function(b){var c=a(b).on("click.dropdown.data-api",this.toggle);a("html").on("click.dropdown.data-api",function(){c.parent().removeClass("open")})};c.prototype={constructor:c,toggle:function(b){var c=a(this),e,f,g;if(c.is(".disabled, :disabled"))return;return f=c.attr("data-target"),f||(f=c.attr("href"),f=f&&f.replace(/.*(?=#[^\s]*$)/,"")),e=a(f),e.length||(e=c.parent()),g=e.hasClass("open"),d(),g||e.toggleClass("open"),!1}},a.fn.dropdown=function(b){return this.each(function(){var d=a(this),e=d.data("dropdown");e||d.data("dropdown",e=new c(this)),typeof b=="string"&&e[b].call(d)})},a.fn.dropdown.Constructor=c,a(function(){a("html").on("click.dropdown.data-api",d),a("body").on("click.dropdown",".dropdown form",function(a){a.stopPropagation()}).on("click.dropdown.data-api",b,c.prototype.toggle)})}(window.jQuery),!function(a){function c(){var b=this,c=setTimeout(function(){b.$element.off(a.support.transition.end),d.call(b)},500);this.$element.one(a.support.transition.end,function(){clearTimeout(c),d.call(b)})}function d(a){this.$element.hide().trigger("hidden"),e.call(this)}function e(b){var c=this,d=this.$element.hasClass("fade")?"fade":"";if(this.isShown&&this.options.backdrop){var e=a.support.transition&&d;this.$backdrop=a('

|       | CM3                    | ESM2M  | ESM2G  |
|-------|------------------------|--------|--------|
| rcp26 |                        | r1i1p1 | r1i1p1 |
| rcp45 | r1i1p1, r3i1p1, r5i1p1 | r1i1p1 | r1i1p1 |
| rcp60 |                        | r1i1p1 | r1i1p1 |
| rcp85 | r1i1p1                 | r1i1p1 | r1i1p1 |

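The table above lists, for each GFDL CMIP5 model (CM3, ESM2M, ESM2G) and RCP scenario, the ensemble members available for download; blank cells mean no run exists for that combination. A minimal R sketch of how such a lookup can validate a requested model/scenario/ensemble combination before attempting a download follows. The `gfdl_members` table and `gfdl_member_available()` helper are illustrative names only, not part of the PEcAn API:

```
# Valid GFDL CMIP5 (model, scenario) -> ensemble members, per the table above.
# `gfdl_members` and `gfdl_member_available` are illustrative helpers only.
gfdl_members <- list(
  CM3   = list(rcp45 = c("r1i1p1", "r3i1p1", "r5i1p1"),
               rcp85 = "r1i1p1"),
  ESM2M = list(rcp26 = "r1i1p1", rcp45 = "r1i1p1",
               rcp60 = "r1i1p1", rcp85 = "r1i1p1"),
  ESM2G = list(rcp26 = "r1i1p1", rcp45 = "r1i1p1",
               rcp60 = "r1i1p1", rcp85 = "r1i1p1")
)

gfdl_member_available <- function(model, scenario, ensemble_member = "r1i1p1") {
  # Returns FALSE for combinations absent from the table (NULL lookup).
  ensemble_member %in% gfdl_members[[model]][[scenario]]
}

gfdl_member_available("CM3", "rcp45", "r3i1p1")  # TRUE
gfdl_member_available("CM3", "rcp26")            # FALSE: no CM3 run for rcp26
```

Checking the combination up front avoids requesting a model/scenario pairing that was never submitted to the CMIP5 archive.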
a",d=p.getElementsByTagName("*"),e=p.getElementsByTagName("a")[0];if(!d||!d.length||!e)return{};g=c.createElement("select"),h=g.appendChild(c.createElement("option")),i=p.getElementsByTagName("input")[0],b={leadingWhitespace:p.firstChild.nodeType===3,tbody:!p.getElementsByTagName("tbody").length,htmlSerialize:!!p.getElementsByTagName("link").length,style:/top/.test(e.getAttribute("style")),hrefNormalized:e.getAttribute("href")==="/a",opacity:/^0.55/.test(e.style.opacity),cssFloat:!!e.style.cssFloat,checkOn:i.value==="on",optSelected:h.selected,getSetAttribute:p.className!=="t",enctype:!!c.createElement("form").enctype,html5Clone:c.createElement("nav").cloneNode(!0).outerHTML!=="<:nav>",submitBubbles:!0,changeBubbles:!0,focusinBubbles:!1,deleteExpando:!0,noCloneEvent:!0,inlineBlockNeedsLayout:!1,shrinkWrapBlocks:!1,reliableMarginRight:!0,pixelMargin:!0},f.boxModel=b.boxModel=c.compatMode==="CSS1Compat",i.checked=!0,b.noCloneChecked=i.cloneNode(!0).checked,g.disabled=!0,b.optDisabled=!h.disabled;try{delete p.test}catch(r){b.deleteExpando=!1}!p.addEventListener&&p.attachEvent&&p.fireEvent&&(p.attachEvent("onclick",function(){b.noCloneEvent=!1}),p.cloneNode(!0).fireEvent("onclick")),i=c.createElement("input"),i.value="t",i.setAttribute("type","radio"),b.radioValue=i.value==="t",i.setAttribute("checked","checked"),i.setAttribute("name","t"),p.appendChild(i),j=c.createDocumentFragment(),j.appendChild(p.lastChild),b.checkClone=j.cloneNode(!0).cloneNode(!0).lastChild.checked,b.appendChecked=i.checked,j.removeChild(i),j.appendChild(p);if(p.attachEvent)for(n in{submit:1,change:1,focusin:1})m="on"+n,o=m in p,o||(p.setAttribute(m,"return;"),o=typeof p[m]=="function"),b[n+"Bubbles"]=o;j.removeChild(p),j=g=h=p=i=null,f(function(){var d,e,g,h,i,j,l,m,n,q,r,s,t,u=c.getElementsByTagName("body")[0];!u||(m=1,t="padding:0;margin:0;border:",r="position:absolute;top:0;left:0;width:1px;height:1px;",s=t+"0;visibility:hidden;",n="style='"+r+t+"5px solid #000;",q="
"+""+"
",d=c.createElement("div"),d.style.cssText=s+"width:0;height:0;position:static;top:0;margin-top:"+m+"px",u.insertBefore(d,u.firstChild),p=c.createElement("div"),d.appendChild(p),p.innerHTML="
t
",k=p.getElementsByTagName("td"),o=k[0].offsetHeight===0,k[0].style.display="",k[1].style.display="none",b.reliableHiddenOffsets=o&&k[0].offsetHeight===0,a.getComputedStyle&&(p.innerHTML="",l=c.createElement("div"),l.style.width="0",l.style.marginRight="0",p.style.width="2px",p.appendChild(l),b.reliableMarginRight=(parseInt((a.getComputedStyle(l,null)||{marginRight:0}).marginRight,10)||0)===0),typeof p.style.zoom!="undefined"&&(p.innerHTML="",p.style.width=p.style.padding="1px",p.style.border=0,p.style.overflow="hidden",p.style.display="inline",p.style.zoom=1,b.inlineBlockNeedsLayout=p.offsetWidth===3,p.style.display="block",p.style.overflow="visible",p.innerHTML="
",b.shrinkWrapBlocks=p.offsetWidth!==3),p.style.cssText=r+s,p.innerHTML=q,e=p.firstChild,g=e.firstChild,i=e.nextSibling.firstChild.firstChild,j={doesNotAddBorder:g.offsetTop!==5,doesAddBorderForTableAndCells:i.offsetTop===5},g.style.position="fixed",g.style.top="20px",j.fixedPosition=g.offsetTop===20||g.offsetTop===15,g.style.position=g.style.top="",e.style.overflow="hidden",e.style.position="relative",j.subtractsBorderForOverflowNotVisible=g.offsetTop===-5,j.doesNotIncludeMarginInBodyOffset=u.offsetTop!==m,a.getComputedStyle&&(p.style.marginTop="1%",b.pixelMargin=(a.getComputedStyle(p,null)||{marginTop:0}).marginTop!=="1%"),typeof d.style.zoom!="undefined"&&(d.style.zoom=1),u.removeChild(d),l=p=d=null,f.extend(b,j))});return b}();var j=/^(?:\{.*\}|\[.*\])$/,k=/([A-Z])/g;f.extend({cache:{},uuid:0,expando:"jQuery"+(f.fn.jquery+Math.random()).replace(/\D/g,""),noData:{embed:!0,object:"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000",applet:!0},hasData:function(a){a=a.nodeType?f.cache[a[f.expando]]:a[f.expando];return!!a&&!m(a)},data:function(a,c,d,e){if(!!f.acceptData(a)){var g,h,i,j=f.expando,k=typeof c=="string",l=a.nodeType,m=l?f.cache:a,n=l?a[j]:a[j]&&j,o=c==="events";if((!n||!m[n]||!o&&!e&&!m[n].data)&&k&&d===b)return;n||(l?a[j]=n=++f.uuid:n=j),m[n]||(m[n]={},l||(m[n].toJSON=f.noop));if(typeof c=="object"||typeof c=="function")e?m[n]=f.extend(m[n],c):m[n].data=f.extend(m[n].data,c);g=h=m[n],e||(h.data||(h.data={}),h=h.data),d!==b&&(h[f.camelCase(c)]=d);if(o&&!h[c])return g.events;k?(i=h[c],i==null&&(i=h[f.camelCase(c)])):i=h;return i}},removeData:function(a,b,c){if(!!f.acceptData(a)){var d,e,g,h=f.expando,i=a.nodeType,j=i?f.cache:a,k=i?a[h]:h;if(!j[k])return;if(b){d=c?j[k]:j[k].data;if(d){f.isArray(b)||(b in d?b=[b]:(b=f.camelCase(b),b in d?b=[b]:b=b.split(" ")));for(e=0,g=b.length;e1,null,!1)},removeData:function(a){return this.each(function(){f.removeData(this,a)})}}),f.extend({_mark:function(a,b){a&&(b=(b||"fx")+"mark",f._data(a,b,(f._data(a,b)||0)+1))},_unmark:function(a,b,c){a!==!0&&(c=b,b=a,a=!1);if(b){c=c||"fx";var d=c+"mark",e=a?0:(f._data(b,d)||1)-1;e?f._data(b,d,e):(f.removeData(b,d,!0),n(b,c,"mark"))}},queue:function(a,b,c){var d;if(a){b=(b||"fx")+"queue",d=f._data(a,b),c&&(!d||f.isArray(c)?d=f._data(a,b,f.makeArray(c)):d.push(c));return d||[]}},dequeue:function(a,b){b=b||"fx";var c=f.queue(a,b),d=c.shift(),e={};d==="inprogress"&&(d=c.shift()),d&&(b==="fx"&&c.unshift("inprogress"),f._data(a,b+".run",e),d.call(a,function(){f.dequeue(a,b)},e)),c.length||(f.removeData(a,b+"queue "+b+".run",!0),n(a,b,"queue"))}}),f.fn.extend({queue:function(a,c){var d=2;typeof a!="string"&&(c=a,a="fx",d--);if(arguments.length1)},removeAttr:function(a){return this.each(function(){f.removeAttr(this,a)})},prop:function(a,b){return f.access(this,f.prop,a,b,arguments.length>1)},removeProp:function(a){a=f.propFix[a]||a;return this.each(function(){try{this[a]=b,delete this[a]}catch(c){}})},addClass:function(a){var b,c,d,e,g,h,i;if(f.isFunction(a))return this.each(function(b){f(this).addClass(a.call(this,b,this.className))});if(a&&typeof a=="string"){b=a.split(p);for(c=0,d=this.length;c-1)return!0;return!1},val:function(a){var c,d,e,g=this[0];{if(!!arguments.length){e=f.isFunction(a);return this.each(function(d){var g=f(this),h;if(this.nodeType===1){e?h=a.call(this,d,g.val()):h=a,h==null?h="":typeof h=="number"?h+="":f.isArray(h)&&(h=f.map(h,function(a){return a==null?"":a+""})),c=f.valHooks[this.type]||f.valHooks[this.nodeName.toLowerCase()];if(!c||!("set"in 
c)||c.set(this,h,"value")===b)this.value=h}})}if(g){c=f.valHooks[g.type]||f.valHooks[g.nodeName.toLowerCase()];if(c&&"get"in c&&(d=c.get(g,"value"))!==b)return d;d=g.value;return typeof d=="string"?d.replace(q,""):d==null?"":d}}}}),f.extend({valHooks:{option:{get:function(a){var b=a.attributes.value;return!b||b.specified?a.value:a.text}},select:{get:function(a){var b,c,d,e,g=a.selectedIndex,h=[],i=a.options,j=a.type==="select-one";if(g<0)return null;c=j?g:0,d=j?g+1:i.length;for(;c=0}),c.length||(a.selectedIndex=-1);return c}}},attrFn:{val:!0,css:!0,html:!0,text:!0,data:!0,width:!0,height:!0,offset:!0},attr:function(a,c,d,e){var g,h,i,j=a.nodeType;if(!!a&&j!==3&&j!==8&&j!==2){if(e&&c in f.attrFn)return f(a)[c](d);if(typeof a.getAttribute=="undefined")return f.prop(a,c,d);i=j!==1||!f.isXMLDoc(a),i&&(c=c.toLowerCase(),h=f.attrHooks[c]||(u.test(c)?x:w));if(d!==b){if(d===null){f.removeAttr(a,c);return}if(h&&"set"in h&&i&&(g=h.set(a,d,c))!==b)return g;a.setAttribute(c,""+d);return d}if(h&&"get"in h&&i&&(g=h.get(a,c))!==null)return g;g=a.getAttribute(c);return g===null?b:g}},removeAttr:function(a,b){var c,d,e,g,h,i=0;if(b&&a.nodeType===1){d=b.toLowerCase().split(p),g=d.length;for(;i=0}})});var z=/^(?:textarea|input|select)$/i,A=/^([^\.]*)?(?:\.(.+))?$/,B=/(?:^|\s)hover(\.\S+)?\b/,C=/^key/,D=/^(?:mouse|contextmenu)|click/,E=/^(?:focusinfocus|focusoutblur)$/,F=/^(\w*)(?:#([\w\-]+))?(?:\.([\w\-]+))?$/,G=function( -a){var b=F.exec(a);b&&(b[1]=(b[1]||"").toLowerCase(),b[3]=b[3]&&new RegExp("(?:^|\\s)"+b[3]+"(?:\\s|$)"));return b},H=function(a,b){var c=a.attributes||{};return(!b[1]||a.nodeName.toLowerCase()===b[1])&&(!b[2]||(c.id||{}).value===b[2])&&(!b[3]||b[3].test((c["class"]||{}).value))},I=function(a){return f.event.special.hover?a:a.replace(B,"mouseenter$1 mouseleave$1")};f.event={add:function(a,c,d,e,g){var h,i,j,k,l,m,n,o,p,q,r,s;if(!(a.nodeType===3||a.nodeType===8||!c||!d||!(h=f._data(a)))){d.handler&&(p=d,d=p.handler,g=p.selector),d.guid||(d.guid=f.guid++),j=h.events,j||(h.events=j={}),i=h.handle,i||(h.handle=i=function(a){return typeof f!="undefined"&&(!a||f.event.triggered!==a.type)?f.event.dispatch.apply(i.elem,arguments):b},i.elem=a),c=f.trim(I(c)).split(" ");for(k=0;k=0&&(h=h.slice(0,-1),k=!0),h.indexOf(".")>=0&&(i=h.split("."),h=i.shift(),i.sort());if((!e||f.event.customEvent[h])&&!f.event.global[h])return;c=typeof c=="object"?c[f.expando]?c:new f.Event(h,c):new f.Event(h),c.type=h,c.isTrigger=!0,c.exclusive=k,c.namespace=i.join("."),c.namespace_re=c.namespace?new RegExp("(^|\\.)"+i.join("\\.(?:.*\\.)?")+"(\\.|$)"):null,o=h.indexOf(":")<0?"on"+h:"";if(!e){j=f.cache;for(l in j)j[l].events&&j[l].events[h]&&f.event.trigger(c,d,j[l].handle.elem,!0);return}c.result=b,c.target||(c.target=e),d=d!=null?f.makeArray(d):[],d.unshift(c),p=f.event.special[h]||{};if(p.trigger&&p.trigger.apply(e,d)===!1)return;r=[[e,p.bindType||h]];if(!g&&!p.noBubble&&!f.isWindow(e)){s=p.delegateType||h,m=E.test(s+h)?e:e.parentNode,n=null;for(;m;m=m.parentNode)r.push([m,s]),n=m;n&&n===e.ownerDocument&&r.push([n.defaultView||n.parentWindow||a,s])}for(l=0;le&&j.push({elem:this,matches:d.slice(e)});for(k=0;k0?this.on(b,null,a,c):this.trigger(b)},f.attrFn&&(f.attrFn[b]=!0),C.test(b)&&(f.event.fixHooks[b]=f.event.keyHooks),D.test(b)&&(f.event.fixHooks[b]=f.event.mouseHooks)}),function(){function x(a,b,c,e,f,g){for(var h=0,i=e.length;h0){k=j;break}}j=j[a]}e[h]=k}}}function w(a,b,c,e,f,g){for(var 
h=0,i=e.length;h+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g,d="sizcache"+(Math.random()+"").replace(".",""),e=0,g=Object.prototype.toString,h=!1,i=!0,j=/\\/g,k=/\r\n/g,l=/\W/;[0,0].sort(function(){i=!1;return 0});var m=function(b,d,e,f){e=e||[],d=d||c;var h=d;if(d.nodeType!==1&&d.nodeType!==9)return[];if(!b||typeof b!="string")return e;var i,j,k,l,n,q,r,t,u=!0,v=m.isXML(d),w=[],x=b;do{a.exec(""),i=a.exec(x);if(i){x=i[3],w.push(i[1]);if(i[2]){l=i[3];break}}}while(i);if(w.length>1&&p.exec(b))if(w.length===2&&o.relative[w[0]])j=y(w[0]+w[1],d,f);else{j=o.relative[w[0]]?[d]:m(w.shift(),d);while(w.length)b=w.shift(),o.relative[b]&&(b+=w.shift()),j=y(b,j,f)}else{!f&&w.length>1&&d.nodeType===9&&!v&&o.match.ID.test(w[0])&&!o.match.ID.test(w[w.length-1])&&(n=m.find(w.shift(),d,v),d=n.expr?m.filter(n.expr,n.set)[0]:n.set[0]);if(d){n=f?{expr:w.pop(),set:s(f)}:m.find(w.pop(),w.length===1&&(w[0]==="~"||w[0]==="+")&&d.parentNode?d.parentNode:d,v),j=n.expr?m.filter(n.expr,n.set):n.set,w.length>0?k=s(j):u=!1;while(w.length)q=w.pop(),r=q,o.relative[q]?r=w.pop():q="",r==null&&(r=d),o.relative[q](k,r,v)}else k=w=[]}k||(k=j),k||m.error(q||b);if(g.call(k)==="[object Array]")if(!u)e.push.apply(e,k);else if(d&&d.nodeType===1)for(t=0;k[t]!=null;t++)k[t]&&(k[t]===!0||k[t].nodeType===1&&m.contains(d,k[t]))&&e.push(j[t]);else for(t=0;k[t]!=null;t++)k[t]&&k[t].nodeType===1&&e.push(j[t]);else s(k,e);l&&(m(l,h,e,f),m.uniqueSort(e));return e};m.uniqueSort=function(a){if(u){h=i,a.sort(u);if(h)for(var b=1;b0},m.find=function(a,b,c){var d,e,f,g,h,i;if(!a)return[];for(e=0,f=o.order.length;e":function(a,b){var c,d=typeof b=="string",e=0,f=a.length;if(d&&!l.test(b)){b=b.toLowerCase();for(;e=0)?c||d.push(h):c&&(b[g]=!1));return!1},ID:function(a){return a[1].replace(j,"")},TAG:function(a,b){return a[1].replace(j,"").toLowerCase()},CHILD:function(a){if(a[1]==="nth"){a[2]||m.error(a[0]),a[2]=a[2].replace(/^\+|\s*/g,"");var b=/(-?)(\d*)(?:n([+\-]?\d*))?/.exec(a[2]==="even"&&"2n"||a[2]==="odd"&&"2n+1"||!/\D/.test(a[2])&&"0n+"+a[2]||a[2]);a[2]=b[1]+(b[2]||1)-0,a[3]=b[3]-0}else a[2]&&m.error(a[0]);a[0]=e++;return a},ATTR:function(a,b,c,d,e,f){var g=a[1]=a[1].replace(j,"");!f&&o.attrMap[g]&&(a[1]=o.attrMap[g]),a[4]=(a[4]||a[5]||"").replace(j,""),a[2]==="~="&&(a[4]=" "+a[4]+" ");return a},PSEUDO:function(b,c,d,e,f){if(b[1]==="not")if((a.exec(b[3])||"").length>1||/^\w/.test(b[3]))b[3]=m(b[3],null,null,c);else{var g=m.filter(b[3],c,d,!0^f);d||e.push.apply(e,g);return!1}else if(o.match.POS.test(b[0])||o.match.CHILD.test(b[0]))return!0;return b},POS:function(a){a.unshift(!0);return a}},filters:{enabled:function(a){return a.disabled===!1&&a.type!=="hidden"},disabled:function(a){return a.disabled===!0},checked:function(a){return a.checked===!0},selected:function(a){a.parentNode&&a.parentNode.selectedIndex;return a.selected===!0},parent:function(a){return!!a.firstChild},empty:function(a){return!a.firstChild},has:function(a,b,c){return!!m(c[3],a).length},header:function(a){return/h\d/i.test(a.nodeName)},text:function(a){var b=a.getAttribute("type"),c=a.type;return a.nodeName.toLowerCase()==="input"&&"text"===c&&(b===c||b===null)},radio:function(a){return a.nodeName.toLowerCase()==="input"&&"radio"===a.type},checkbox:function(a){return a.nodeName.toLowerCase()==="input"&&"checkbox"===a.type},file:function(a){return a.nodeName.toLowerCase()==="input"&&"file"===a.type},password:function(a){return a.nodeName.toLowerCase()==="input"&&"password"===a.type},submit:function(a){var 
b=a.nodeName.toLowerCase();return(b==="input"||b==="button")&&"submit"===a.type},image:function(a){return a.nodeName.toLowerCase()==="input"&&"image"===a.type},reset:function(a){var b=a.nodeName.toLowerCase();return(b==="input"||b==="button")&&"reset"===a.type},button:function(a){var b=a.nodeName.toLowerCase();return b==="input"&&"button"===a.type||b==="button"},input:function(a){return/input|select|textarea|button/i.test(a.nodeName)},focus:function(a){return a===a.ownerDocument.activeElement}},setFilters:{first:function(a,b){return b===0},last:function(a,b,c,d){return b===d.length-1},even:function(a,b){return b%2===0},odd:function(a,b){return b%2===1},lt:function(a,b,c){return bc[3]-0},nth:function(a,b,c){return c[3]-0===b},eq:function(a,b,c){return c[3]-0===b}},filter:{PSEUDO:function(a,b,c,d){var e=b[1],f=o.filters[e];if(f)return f(a,c,b,d);if(e==="contains")return(a.textContent||a.innerText||n([a])||"").indexOf(b[3])>=0;if(e==="not"){var g=b[3];for(var h=0,i=g.length;h=0}},ID:function(a,b){return a.nodeType===1&&a.getAttribute("id")===b},TAG:function(a,b){return b==="*"&&a.nodeType===1||!!a.nodeName&&a.nodeName.toLowerCase()===b},CLASS:function(a,b){return(" "+(a.className||a.getAttribute("class"))+" ").indexOf(b)>-1},ATTR:function(a,b){var c=b[1],d=m.attr?m.attr(a,c):o.attrHandle[c]?o.attrHandle[c](a):a[c]!=null?a[c]:a.getAttribute(c),e=d+"",f=b[2],g=b[4];return d==null?f==="!=":!f&&m.attr?d!=null:f==="="?e===g:f==="*="?e.indexOf(g)>=0:f==="~="?(" "+e+" ").indexOf(g)>=0:g?f==="!="?e!==g:f==="^="?e.indexOf(g)===0:f==="$="?e.substr(e.length-g.length)===g:f==="|="?e===g||e.substr(0,g.length+1)===g+"-":!1:e&&d!==!1},POS:function(a,b,c,d){var e=b[2],f=o.setFilters[e];if(f)return f(a,c,b,d)}}},p=o.match.POS,q=function(a,b){return"\\"+(b-0+1)};for(var r in o.match)o.match[r]=new RegExp(o.match[r].source+/(?![^\[]*\])(?![^\(]*\))/.source),o.leftMatch[r]=new RegExp(/(^(?:.|\r|\n)*?)/.source+o.match[r].source.replace(/\\(\d+)/g,q));o.match.globalPOS=p;var s=function(a,b){a=Array.prototype.slice.call(a,0);if(b){b.push.apply(b,a);return b}return a};try{Array.prototype.slice.call(c.documentElement.childNodes,0)[0].nodeType}catch(t){s=function(a,b){var c=0,d=b||[];if(g.call(a)==="[object Array]")Array.prototype.push.apply(d,a);else if(typeof a.length=="number")for(var e=a.length;c",e.insertBefore(a,e.firstChild),c.getElementById(d)&&(o.find.ID=function(a,c,d){if(typeof c.getElementById!="undefined"&&!d){var e=c.getElementById(a[1]);return e?e.id===a[1]||typeof e.getAttributeNode!="undefined"&&e.getAttributeNode("id").nodeValue===a[1]?[e]:b:[]}},o.filter.ID=function(a,b){var c=typeof a.getAttributeNode!="undefined"&&a.getAttributeNode("id");return a.nodeType===1&&c&&c.nodeValue===b}),e.removeChild(a),e=a=null}(),function(){var a=c.createElement("div");a.appendChild(c.createComment("")),a.getElementsByTagName("*").length>0&&(o.find.TAG=function(a,b){var c=b.getElementsByTagName(a[1]);if(a[1]==="*"){var d=[];for(var e=0;c[e];e++)c[e].nodeType===1&&d.push(c[e]);c=d}return c}),a.innerHTML="",a.firstChild&&typeof a.firstChild.getAttribute!="undefined"&&a.firstChild.getAttribute("href")!=="#"&&(o.attrHandle.href=function(a){return a.getAttribute("href",2)}),a=null}(),c.querySelectorAll&&function(){var a=m,b=c.createElement("div"),d="__sizzle__";b.innerHTML="

";if(!b.querySelectorAll||b.querySelectorAll(".TEST").length!==0){m=function(b,e,f,g){e=e||c;if(!g&&!m.isXML(e)){var h=/^(\w+$)|^\.([\w\-]+$)|^#([\w\-]+$)/.exec(b);if(h&&(e.nodeType===1||e.nodeType===9)){if(h[1])return s(e.getElementsByTagName(b),f);if(h[2]&&o.find.CLASS&&e.getElementsByClassName)return s(e.getElementsByClassName(h[2]),f)}if(e.nodeType===9){if(b==="body"&&e.body)return s([e.body],f);if(h&&h[3]){var i=e.getElementById(h[3]);if(!i||!i.parentNode)return s([],f);if(i.id===h[3])return s([i],f)}try{return s(e.querySelectorAll(b),f)}catch(j){}}else if(e.nodeType===1&&e.nodeName.toLowerCase()!=="object"){var k=e,l=e.getAttribute("id"),n=l||d,p=e.parentNode,q=/^\s*[+~]/.test(b);l?n=n.replace(/'/g,"\\$&"):e.setAttribute("id",n),q&&p&&(e=e.parentNode);try{if(!q||p)return s(e.querySelectorAll("[id='"+n+"'] "+b),f)}catch(r){}finally{l||k.removeAttribute("id")}}}return a(b,e,f,g)};for(var e in a)m[e]=a[e];b=null}}(),function(){var a=c.documentElement,b=a.matchesSelector||a.mozMatchesSelector||a.webkitMatchesSelector||a.msMatchesSelector;if(b){var d=!b.call(c.createElement("div"),"div"),e=!1;try{b.call(c.documentElement,"[test!='']:sizzle")}catch(f){e=!0}m.matchesSelector=function(a,c){c=c.replace(/\=\s*([^'"\]]*)\s*\]/g,"='$1']");if(!m.isXML(a))try{if(e||!o.match.PSEUDO.test(c)&&!/!=/.test(c)){var f=b.call(a,c);if(f||!d||a.document&&a.document.nodeType!==11)return f}}catch(g){}return m(c,null,null,[a]).length>0}}}(),function(){var a=c.createElement("div");a.innerHTML="
";if(!!a.getElementsByClassName&&a.getElementsByClassName("e").length!==0){a.lastChild.className="e";if(a.getElementsByClassName("e").length===1)return;o.order.splice(1,0,"CLASS"),o.find.CLASS=function(a,b,c){if(typeof b.getElementsByClassName!="undefined"&&!c)return b.getElementsByClassName(a[1])},a=null}}(),c.documentElement.contains?m.contains=function(a,b){return a!==b&&(a.contains?a.contains(b):!0)}:c.documentElement.compareDocumentPosition?m.contains=function(a,b){return!!(a.compareDocumentPosition(b)&16)}:m.contains=function(){return!1},m.isXML=function(a){var b=(a?a.ownerDocument||a:0).documentElement;return b?b.nodeName!=="HTML":!1};var y=function(a,b,c){var d,e=[],f="",g=b.nodeType?[b]:b;while(d=o.match.PSEUDO.exec(a))f+=d[0],a=a.replace(o.match.PSEUDO,"");a=o.relative[a]?a+"*":a;for(var h=0,i=g.length;h0)for(h=g;h=0:f.filter(a,this).length>0:this.filter(a).length>0)},closest:function(a,b){var c=[],d,e,g=this[0];if(f.isArray(a)){var h=1;while(g&&g.ownerDocument&&g!==b){for(d=0;d-1:f.find.matchesSelector(g,a)){c.push(g);break}g=g.parentNode;if(!g||!g.ownerDocument||g===b||g.nodeType===11)break}}c=c.length>1?f.unique(c):c;return this.pushStack(c,"closest",a)},index:function(a){if(!a)return this[0]&&this[0].parentNode?this.prevAll().length:-1;if(typeof a=="string")return f.inArray(this[0],f(a));return f.inArray(a.jquery?a[0]:a,this)},add:function(a,b){var c=typeof a=="string"?f(a,b):f.makeArray(a&&a.nodeType?[a]:a),d=f.merge(this.get(),c);return this.pushStack(S(c[0])||S(d[0])?d:f.unique(d))},andSelf:function(){return this.add(this.prevObject)}}),f.each({parent:function(a){var b=a.parentNode;return b&&b.nodeType!==11?b:null},parents:function(a){return f.dir(a,"parentNode")},parentsUntil:function(a,b,c){return f.dir(a,"parentNode",c)},next:function(a){return f.nth(a,2,"nextSibling")},prev:function(a){return f.nth(a,2,"previousSibling")},nextAll:function(a){return f.dir(a,"nextSibling")},prevAll:function(a){return f.dir(a,"previousSibling")},nextUntil:function(a,b,c){return f.dir(a,"nextSibling",c)},prevUntil:function(a,b,c){return f.dir(a,"previousSibling",c)},siblings:function(a){return f.sibling((a.parentNode||{}).firstChild,a)},children:function(a){return f.sibling(a.firstChild)},contents:function(a){return f.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:f.makeArray(a.childNodes)}},function(a,b){f.fn[a]=function(c,d){var e=f.map(this,b,c);L.test(a)||(d=c),d&&typeof d=="string"&&(e=f.filter(d,e)),e=this.length>1&&!R[a]?f.unique(e):e,(this.length>1||N.test(d))&&M.test(a)&&(e=e.reverse());return this.pushStack(e,a,P.call(arguments).join(","))}}),f.extend({filter:function(a,b,c){c&&(a=":not("+a+")");return b.length===1?f.find.matchesSelector(b[0],a)?[b[0]]:[]:f.find.matches(a,b)},dir:function(a,c,d){var e=[],g=a[c];while(g&&g.nodeType!==9&&(d===b||g.nodeType!==1||!f(g).is(d)))g.nodeType===1&&e.push(g),g=g[c];return e},nth:function(a,b,c,d){b=b||1;var e=0;for(;a;a=a[c])if(a.nodeType===1&&++e===b)break;return a},sibling:function(a,b){var c=[];for(;a;a=a.nextSibling)a.nodeType===1&&a!==b&&c.push(a);return c}});var V="abbr|article|aside|audio|bdi|canvas|data|datalist|details|figcaption|figure|footer|header|hgroup|mark|meter|nav|output|progress|section|summary|time|video",W=/ jQuery\d+="(?:\d+|null)"/g,X=/^\s+/,Y=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig,Z=/<([\w:]+)/,$=/]","i"),bd=/checked\s*(?:[^=]|=\s*.checked.)/i,be=/\/(java|ecma)script/i,bf=/^\s*",""],legend:[1,"
","
"],thead:[1,"","
"],tr:[2,"","
"],td:[3,"","
"],col:[2,"","
"],area:[1,"",""],_default:[0,"",""]},bh=U(c);bg.optgroup=bg.option,bg.tbody=bg.tfoot=bg.colgroup=bg.caption=bg.thead,bg.th=bg.td,f.support.htmlSerialize||(bg._default=[1,"div
","
"]),f.fn.extend({text:function(a){return f.access(this,function(a){return a===b?f.text(this):this.empty().append((this[0]&&this[0].ownerDocument||c).createTextNode(a))},null,a,arguments.length)},wrapAll:function(a){if(f.isFunction(a))return this.each(function(b){f(this).wrapAll(a.call(this,b))});if(this[0]){var b=f(a,this[0].ownerDocument).eq(0).clone(!0);this[0].parentNode&&b.insertBefore(this[0]),b.map(function(){var a=this;while(a.firstChild&&a.firstChild.nodeType===1)a=a.firstChild;return a}).append(this)}return this},wrapInner:function(a){if(f.isFunction(a))return this.each(function(b){f(this).wrapInner(a.call(this,b))});return this.each(function(){var b=f(this),c=b.contents();c.length?c.wrapAll(a):b.append(a)})},wrap:function(a){var b=f.isFunction(a);return this.each(function(c){f(this).wrapAll(b?a.call(this,c):a)})},unwrap:function(){return this.parent().each(function(){f.nodeName(this,"body")||f(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,!0,function(a){this.nodeType===1&&this.appendChild(a)})},prepend:function(){return this.domManip(arguments,!0,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,!1,function(a){this.parentNode.insertBefore(a,this)});if(arguments.length){var a=f -.clean(arguments);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,!1,function(a){this.parentNode.insertBefore(a,this.nextSibling)});if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,f.clean(arguments));return a}},remove:function(a,b){for(var c=0,d;(d=this[c])!=null;c++)if(!a||f.filter(a,[d]).length)!b&&d.nodeType===1&&(f.cleanData(d.getElementsByTagName("*")),f.cleanData([d])),d.parentNode&&d.parentNode.removeChild(d);return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++){b.nodeType===1&&f.cleanData(b.getElementsByTagName("*"));while(b.firstChild)b.removeChild(b.firstChild)}return this},clone:function(a,b){a=a==null?!1:a,b=b==null?a:b;return this.map(function(){return f.clone(this,a,b)})},html:function(a){return f.access(this,function(a){var c=this[0]||{},d=0,e=this.length;if(a===b)return c.nodeType===1?c.innerHTML.replace(W,""):null;if(typeof a=="string"&&!ba.test(a)&&(f.support.leadingWhitespace||!X.test(a))&&!bg[(Z.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(Y,"<$1>");try{for(;d1&&l0?this.clone(!0):this).get();f(e[h])[b](j),d=d.concat(j)}return this.pushStack(d,a,e.selector)}}),f.extend({clone:function(a,b,c){var d,e,g,h=f.support.html5Clone||f.isXMLDoc(a)||!bc.test("<"+a.nodeName+">")?a.cloneNode(!0):bo(a);if((!f.support.noCloneEvent||!f.support.noCloneChecked)&&(a.nodeType===1||a.nodeType===11)&&!f.isXMLDoc(a)){bk(a,h),d=bl(a),e=bl(h);for(g=0;d[g];++g)e[g]&&bk(d[g],e[g])}if(b){bj(a,h);if(c){d=bl(a),e=bl(h);for(g=0;d[g];++g)bj(d[g],e[g])}}d=e=null;return h},clean:function(a,b,d,e){var g,h,i,j=[];b=b||c,typeof b.createElement=="undefined"&&(b=b.ownerDocument||b[0]&&b[0].ownerDocument||c);for(var k=0,l;(l=a[k])!=null;k++){typeof l=="number"&&(l+="");if(!l)continue;if(typeof l=="string")if(!_.test(l))l=b.createTextNode(l);else{l=l.replace(Y,"<$1>");var m=(Z.exec(l)||["",""])[1].toLowerCase(),n=bg[m]||bg._default,o=n[0],p=b.createElement("div"),q=bh.childNodes,r;b===c?bh.appendChild(p):U(b).appendChild(p),p.innerHTML=n[1]+l+n[2];while(o--)p=p.lastChild;if(!f.support.tbody){var 
s=$.test(l),t=m==="table"&&!s?p.firstChild&&p.firstChild.childNodes:n[1]===""&&!s?p.childNodes:[];for(i=t.length-1;i>=0;--i)f.nodeName(t[i],"tbody")&&!t[i].childNodes.length&&t[i].parentNode.removeChild(t[i])}!f.support.leadingWhitespace&&X.test(l)&&p.insertBefore(b.createTextNode(X.exec(l)[0]),p.firstChild),l=p.childNodes,p&&(p.parentNode.removeChild(p),q.length>0&&(r=q[q.length-1],r&&r.parentNode&&r.parentNode.removeChild(r)))}var u;if(!f.support.appendChecked)if(l[0]&&typeof (u=l.length)=="number")for(i=0;i1)},f.extend({cssHooks:{opacity:{get:function(a,b){if(b){var c=by(a,"opacity");return c===""?"1":c}return a.style.opacity}}},cssNumber:{fillOpacity:!0,fontWeight:!0,lineHeight:!0,opacity:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{"float":f.support.cssFloat?"cssFloat":"styleFloat"},style:function(a,c,d,e){if(!!a&&a.nodeType!==3&&a.nodeType!==8&&!!a.style){var g,h,i=f.camelCase(c),j=a.style,k=f.cssHooks[i];c=f.cssProps[i]||i;if(d===b){if(k&&"get"in k&&(g=k.get(a,!1,e))!==b)return g;return j[c]}h=typeof d,h==="string"&&(g=bu.exec(d))&&(d=+(g[1]+1)*+g[2]+parseFloat(f.css(a,c)),h="number");if(d==null||h==="number"&&isNaN(d))return;h==="number"&&!f.cssNumber[i]&&(d+="px");if(!k||!("set"in k)||(d=k.set(a,d))!==b)try{j[c]=d}catch(l){}}},css:function(a,c,d){var e,g;c=f.camelCase(c),g=f.cssHooks[c],c=f.cssProps[c]||c,c==="cssFloat"&&(c="float");if(g&&"get"in g&&(e=g.get(a,!0,d))!==b)return e;if(by)return by(a,c)},swap:function(a,b,c){var d={},e,f;for(f in b)d[f]=a.style[f],a.style[f]=b[f];e=c.call(a);for(f in b)a.style[f]=d[f];return e}}),f.curCSS=f.css,c.defaultView&&c.defaultView.getComputedStyle&&(bz=function(a,b){var c,d,e,g,h=a.style;b=b.replace(br,"-$1").toLowerCase(),(d=a.ownerDocument.defaultView)&&(e=d.getComputedStyle(a,null))&&(c=e.getPropertyValue(b),c===""&&!f.contains(a.ownerDocument.documentElement,a)&&(c=f.style(a,b))),!f.support.pixelMargin&&e&&bv.test(b)&&bt.test(c)&&(g=h.width,h.width=c,c=e.width,h.width=g);return c}),c.documentElement.currentStyle&&(bA=function(a,b){var c,d,e,f=a.currentStyle&&a.currentStyle[b],g=a.style;f==null&&g&&(e=g[b])&&(f=e),bt.test(f)&&(c=g.left,d=a.runtimeStyle&&a.runtimeStyle.left,d&&(a.runtimeStyle.left=a.currentStyle.left),g.left=b==="fontSize"?"1em":f,f=g.pixelLeft+"px",g.left=c,d&&(a.runtimeStyle.left=d));return f===""?"auto":f}),by=bz||bA,f.each(["height","width"],function(a,b){f.cssHooks[b]={get:function(a,c,d){if(c)return a.offsetWidth!==0?bB(a,b,d):f.swap(a,bw,function(){return bB(a,b,d)})},set:function(a,b){return bs.test(b)?b+"px":b}}}),f.support.opacity||(f.cssHooks.opacity={get:function(a,b){return bq.test((b&&a.currentStyle?a.currentStyle.filter:a.style.filter)||"")?parseFloat(RegExp.$1)/100+"":b?"1":""},set:function(a,b){var c=a.style,d=a.currentStyle,e=f.isNumeric(b)?"alpha(opacity="+b*100+")":"",g=d&&d.filter||c.filter||"";c.zoom=1;if(b>=1&&f.trim(g.replace(bp,""))===""){c.removeAttribute("filter");if(d&&!d.filter)return}c.filter=bp.test(g)?g.replace(bp,e):g+" "+e}}),f(function(){f.support.reliableMarginRight||(f.cssHooks.marginRight={get:function(a,b){return f.swap(a,{display:"inline-block"},function(){return b?by(a,"margin-right"):a.style.marginRight})}})}),f.expr&&f.expr.filters&&(f.expr.filters.hidden=function(a){var b=a.offsetWidth,c=a.offsetHeight;return 
b===0&&c===0||!f.support.reliableHiddenOffsets&&(a.style&&a.style.display||f.css(a,"display"))==="none"},f.expr.filters.visible=function(a){return!f.expr.filters.hidden(a)}),f.each({margin:"",padding:"",border:"Width"},function(a,b){f.cssHooks[a+b]={expand:function(c){var d,e=typeof c=="string"?c.split(" "):[c],f={};for(d=0;d<4;d++)f[a+bx[d]+b]=e[d]||e[d-2]||e[0];return f}}});var bC=/%20/g,bD=/\[\]$/,bE=/\r?\n/g,bF=/#.*$/,bG=/^(.*?):[ \t]*([^\r\n]*)\r?$/mg,bH=/^(?:color|date|datetime|datetime-local|email|hidden|month|number|password|range|search|tel|text|time|url|week)$/i,bI=/^(?:about|app|app\-storage|.+\-extension|file|res|widget):$/,bJ=/^(?:GET|HEAD)$/,bK=/^\/\//,bL=/\?/,bM=/)<[^<]*)*<\/script>/gi,bN=/^(?:select|textarea)/i,bO=/\s+/,bP=/([?&])_=[^&]*/,bQ=/^([\w\+\.\-]+:)(?:\/\/([^\/?#:]*)(?::(\d+))?)?/,bR=f.fn.load,bS={},bT={},bU,bV,bW=["*/"]+["*"];try{bU=e.href}catch(bX){bU=c.createElement("a"),bU.href="",bU=bU.href}bV=bQ.exec(bU.toLowerCase())||[],f.fn.extend({load:function(a,c,d){if(typeof a!="string"&&bR)return bR.apply(this,arguments);if(!this.length)return this;var e=a.indexOf(" ");if(e>=0){var g=a.slice(e,a.length);a=a.slice(0,e)}var h="GET";c&&(f.isFunction(c)?(d=c,c=b):typeof c=="object"&&(c=f.param(c,f.ajaxSettings.traditional),h="POST"));var i=this;f.ajax({url:a,type:h,dataType:"html",data:c,complete:function(a,b,c){c=a.responseText,a.isResolved()&&(a.done(function(a){c=a}),i.html(g?f("
").append(c.replace(bM,"")).find(g):c)),d&&i.each(d,[c,b,a])}});return this},serialize:function(){return f.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?f.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||bN.test(this.nodeName)||bH.test(this.type))}).map(function(a,b){var c=f(this).val();return c==null?null:f.isArray(c)?f.map(c,function(a,c){return{name:b.name,value:a.replace(bE,"\r\n")}}):{name:b.name,value:c.replace(bE,"\r\n")}}).get()}}),f.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),function(a,b){f.fn[b]=function(a){return this.on(b,a)}}),f.each(["get","post"],function(a,c){f[c]=function(a,d,e,g){f.isFunction(d)&&(g=g||e,e=d,d=b);return f.ajax({type:c,url:a,data:d,success:e,dataType:g})}}),f.extend({getScript:function(a,c){return f.get(a,b,c,"script")},getJSON:function(a,b,c){return f.get(a,b,c,"json")},ajaxSetup:function(a,b){b?b$(a,f.ajaxSettings):(b=a,a=f.ajaxSettings),b$(a,b);return a},ajaxSettings:{url:bU,isLocal:bI.test(bV[1]),global:!0,type:"GET",contentType:"application/x-www-form-urlencoded; charset=UTF-8",processData:!0,async:!0,accepts:{xml:"application/xml, text/xml",html:"text/html",text:"text/plain",json:"application/json, text/javascript","*":bW},contents:{xml:/xml/,html:/html/,json:/json/},responseFields:{xml:"responseXML",text:"responseText"},converters:{"* text":a.String,"text html":!0,"text json":f.parseJSON,"text xml":f.parseXML},flatOptions:{context:!0,url:!0}},ajaxPrefilter:bY(bS),ajaxTransport:bY(bT),ajax:function(a,c){function w(a,c,l,m){if(s!==2){s=2,q&&clearTimeout(q),p=b,n=m||"",v.readyState=a>0?4:0;var o,r,u,w=c,x=l?ca(d,v,l):b,y,z;if(a>=200&&a<300||a===304){if(d.ifModified){if(y=v.getResponseHeader("Last-Modified"))f.lastModified[k]=y;if(z=v.getResponseHeader("Etag"))f.etag[k]=z}if(a===304)w="notmodified",o=!0;else try{r=cb(d,x),w="success",o=!0}catch(A){w="parsererror",u=A}}else{u=w;if(!w||a)w="error",a<0&&(a=0)}v.status=a,v.statusText=""+(c||w),o?h.resolveWith(e,[r,w,v]):h.rejectWith(e,[v,w,u]),v.statusCode(j),j=b,t&&g.trigger("ajax"+(o?"Success":"Error"),[v,d,o?r:u]),i.fireWith(e,[v,w]),t&&(g.trigger("ajaxComplete",[v,d]),--f.active||f.event.trigger("ajaxStop"))}}typeof a=="object"&&(c=a,a=b),c=c||{};var d=f.ajaxSetup({},c),e=d.context||d,g=e!==d&&(e.nodeType||e instanceof f)?f(e):f.event,h=f.Deferred(),i=f.Callbacks("once memory"),j=d.statusCode||{},k,l={},m={},n,o,p,q,r,s=0,t,u,v={readyState:0,setRequestHeader:function(a,b){if(!s){var c=a.toLowerCase();a=m[c]=m[c]||a,l[a]=b}return this},getAllResponseHeaders:function(){return s===2?n:null},getResponseHeader:function(a){var c;if(s===2){if(!o){o={};while(c=bG.exec(n))o[c[1].toLowerCase()]=c[2]}c=o[a.toLowerCase()]}return c===b?null:c},overrideMimeType:function(a){s||(d.mimeType=a);return this},abort:function(a){a=a||"abort",p&&p.abort(a),w(0,a);return this}};h.promise(v),v.success=v.done,v.error=v.fail,v.complete=i.add,v.statusCode=function(a){if(a){var b;if(s<2)for(b in a)j[b]=[j[b],a[b]];else b=a[v.status],v.then(b,b)}return this},d.url=((a||d.url)+"").replace(bF,"").replace(bK,bV[1]+"//"),d.dataTypes=f.trim(d.dataType||"*").toLowerCase().split(bO),d.crossDomain==null&&(r=bQ.exec(d.url.toLowerCase()),d.crossDomain=!(!r||r[1]==bV[1]&&r[2]==bV[2]&&(r[3]||(r[1]==="http:"?80:443))==(bV[3]||(bV[1]==="http:"?80:443)))),d.data&&d.processData&&typeof 
d.data!="string"&&(d.data=f.param(d.data,d.traditional)),bZ(bS,d,c,v);if(s===2)return!1;t=d.global,d.type=d.type.toUpperCase(),d.hasContent=!bJ.test(d.type),t&&f.active++===0&&f.event.trigger("ajaxStart");if(!d.hasContent){d.data&&(d.url+=(bL.test(d.url)?"&":"?")+d.data,delete d.data),k=d.url;if(d.cache===!1){var x=f.now(),y=d.url.replace(bP,"$1_="+x);d.url=y+(y===d.url?(bL.test(d.url)?"&":"?")+"_="+x:"")}}(d.data&&d.hasContent&&d.contentType!==!1||c.contentType)&&v.setRequestHeader("Content-Type",d.contentType),d.ifModified&&(k=k||d.url,f.lastModified[k]&&v.setRequestHeader("If-Modified-Since",f.lastModified[k]),f.etag[k]&&v.setRequestHeader("If-None-Match",f.etag[k])),v.setRequestHeader("Accept",d.dataTypes[0]&&d.accepts[d.dataTypes[0]]?d.accepts[d.dataTypes[0]]+(d.dataTypes[0]!=="*"?", "+bW+"; q=0.01":""):d.accepts["*"]);for(u in d.headers)v.setRequestHeader(u,d.headers[u]);if(d.beforeSend&&(d.beforeSend.call(e,v,d)===!1||s===2)){v.abort();return!1}for(u in{success:1,error:1,complete:1})v[u](d[u]);p=bZ(bT,d,c,v);if(!p)w(-1,"No Transport");else{v.readyState=1,t&&g.trigger("ajaxSend",[v,d]),d.async&&d.timeout>0&&(q=setTimeout(function(){v.abort("timeout")},d.timeout));try{s=1,p.send(l,w)}catch(z){if(s<2)w(-1,z);else throw z}}return v},param:function(a,c){var d=[],e=function(a,b){b=f.isFunction(b)?b():b,d[d.length]=encodeURIComponent(a)+"="+encodeURIComponent(b)};c===b&&(c=f.ajaxSettings.traditional);if(f.isArray(a)||a.jquery&&!f.isPlainObject(a))f.each(a,function(){e(this.name,this.value)});else for(var g in a)b_(g,a[g],c,e);return d.join("&").replace(bC,"+")}}),f.extend({active:0,lastModified:{},etag:{}});var cc=f.now(),cd=/(\=)\?(&|$)|\?\?/i;f.ajaxSetup({jsonp:"callback",jsonpCallback:function(){return f.expando+"_"+cc++}}),f.ajaxPrefilter("json jsonp",function(b,c,d){var e=typeof b.data=="string"&&/^application\/x\-www\-form\-urlencoded/.test(b.contentType);if(b.dataTypes[0]==="jsonp"||b.jsonp!==!1&&(cd.test(b.url)||e&&cd.test(b.data))){var g,h=b.jsonpCallback=f.isFunction(b.jsonpCallback)?b.jsonpCallback():b.jsonpCallback,i=a[h],j=b.url,k=b.data,l="$1"+h+"$2";b.jsonp!==!1&&(j=j.replace(cd,l),b.url===j&&(e&&(k=k.replace(cd,l)),b.data===k&&(j+=(/\?/.test(j)?"&":"?")+b.jsonp+"="+h))),b.url=j,b.data=k,a[h]=function(a){g=[a]},d.always(function(){a[h]=i,g&&f.isFunction(i)&&a[h](g[0])}),b.converters["script json"]=function(){g||f.error(h+" was not called");return g[0]},b.dataTypes[0]="json";return"script"}}),f.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/javascript|ecmascript/},converters:{"text script":function(a){f.globalEval(a);return a}}}),f.ajaxPrefilter("script",function(a){a.cache===b&&(a.cache=!1),a.crossDomain&&(a.type="GET",a.global=!1)}),f.ajaxTransport("script",function(a){if(a.crossDomain){var d,e=c.head||c.getElementsByTagName("head")[0]||c.documentElement;return{send:function(f,g){d=c.createElement("script"),d.async="async",a.scriptCharset&&(d.charset=a.scriptCharset),d.src=a.url,d.onload=d.onreadystatechange=function(a,c){if(c||!d.readyState||/loaded|complete/.test(d.readyState))d.onload=d.onreadystatechange=null,e&&d.parentNode&&e.removeChild(d),d=b,c||g(200,"success")},e.insertBefore(d,e.firstChild)},abort:function(){d&&d.onload(0,1)}}}});var ce=a.ActiveXObject?function(){for(var a in cg)cg[a](0,1)}:!1,cf=0,cg;f.ajaxSettings.xhr=a.ActiveXObject?function(){return!this.isLocal&&ch()||ci()}:ch,function(a){f.extend(f.support,{ajax:!!a,cors:!!a&&"withCredentials"in 
a})}(f.ajaxSettings.xhr()),f.support.ajax&&f.ajaxTransport(function(c){if(!c.crossDomain||f.support.cors){var d;return{send:function(e,g){var h=c.xhr(),i,j;c.username?h.open(c.type,c.url,c.async,c.username,c.password):h.open(c.type,c.url,c.async);if(c.xhrFields)for(j in c.xhrFields)h[j]=c.xhrFields[j];c.mimeType&&h.overrideMimeType&&h.overrideMimeType(c.mimeType),!c.crossDomain&&!e["X-Requested-With"]&&(e["X-Requested-With"]="XMLHttpRequest");try{for(j in e)h.setRequestHeader(j,e[j])}catch(k){}h.send(c.hasContent&&c.data||null),d=function(a,e){var j,k,l,m,n;try{if(d&&(e||h.readyState===4)){d=b,i&&(h.onreadystatechange=f.noop,ce&&delete cg[i]);if(e)h.readyState!==4&&h.abort();else{j=h.status,l=h.getAllResponseHeaders(),m={},n=h.responseXML,n&&n.documentElement&&(m.xml=n);try{m.text=h.responseText}catch(a){}try{k=h.statusText}catch(o){k=""}!j&&c.isLocal&&!c.crossDomain?j=m.text?200:404:j===1223&&(j=204)}}}catch(p){e||g(-1,p)}m&&g(j,k,m,l)},!c.async||h.readyState===4?d():(i=++cf,ce&&(cg||(cg={},f(a).unload(ce)),cg[i]=d),h.onreadystatechange=d)},abort:function(){d&&d(0,1)}}}});var cj={},ck,cl,cm=/^(?:toggle|show|hide)$/,cn=/^([+\-]=)?([\d+.\-]+)([a-z%]*)$/i,co,cp=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]],cq;f.fn.extend({show:function(a,b,c){var d,e;if(a||a===0)return this.animate(ct("show",3),a,b,c);for(var g=0,h=this.length;g=i.duration+this.startTime){this.now=this.end,this.pos=this.state=1,this.update(),i.animatedProperties[this.prop]=!0;for(b in i.animatedProperties)i.animatedProperties[b]!==!0&&(g=!1);if(g){i.overflow!=null&&!f.support.shrinkWrapBlocks&&f.each(["","X","Y"],function(a,b){h.style["overflow"+b]=i.overflow[a]}),i.hide&&f(h).hide();if(i.hide||i.show)for(b in i.animatedProperties)f.style(h,b,i.orig[b]),f.removeData(h,"fxshow"+b,!0),f.removeData(h,"toggle"+b,!0);d=i.complete,d&&(i.complete=!1,d.call(h))}return!1}i.duration==Infinity?this.now=e:(c=e-this.startTime,this.state=c/i.duration,this.pos=f.easing[i.animatedProperties[this.prop]](this.state,c,0,1,i.duration),this.now=this.start+(this.end-this.start)*this.pos),this.update();return!0}},f.extend(f.fx,{tick:function(){var a,b=f.timers,c=0;for(;c-1,k={},l={},m,n;j?(l=e.position(),m=l.top,n=l.left):(m=parseFloat(h)||0,n=parseFloat(i)||0),f.isFunction(b)&&(b=b.call(a,c,g)),b.top!=null&&(k.top=b.top-g.top+m),b.left!=null&&(k.left=b.left-g.left+n),"using"in b?b.using.call(a,k):e.css(k)}},f.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),c=this.offset(),d=cx.test(b[0].nodeName)?{top:0,left:0}:b.offset();c.top-=parseFloat(f.css(a,"marginTop"))||0,c.left-=parseFloat(f.css(a,"marginLeft"))||0,d.top+=parseFloat(f.css(b[0],"borderTopWidth"))||0,d.left+=parseFloat(f.css(b[0],"borderLeftWidth"))||0;return{top:c.top-d.top,left:c.left-d.left}},offsetParent:function(){return this.map(function(){var a=this.offsetParent||c.body;while(a&&!cx.test(a.nodeName)&&f.css(a,"position")==="static")a=a.offsetParent;return a})}}),f.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(a,c){var d=/Y/.test(c);f.fn[a]=function(e){return f.access(this,function(a,e,g){var h=cy(a);if(g===b)return h?c in h?h[c]:f.support.boxModel&&h.document.documentElement[e]||h.document.body[e]:a[e];h?h.scrollTo(d?f(h).scrollLeft():g,d?g:f(h).scrollTop()):a[e]=g},a,e,arguments.length,null)}}),f.each({Height:"height",Width:"width"},function(a,c){var d="client"+a,e="scroll"+a,g="offset"+a;f.fn["inner"+a]=function(){var 
a=this[0];return a?a.style?parseFloat(f.css(a,c,"padding")):this[c]():null},f.fn["outer"+a]=function(a){var b=this[0];return b?b.style?parseFloat(f.css(b,c,a?"margin":"border")):this[c]():null},f.fn[c]=function(a){return f.access(this,function(a,c,h){var i,j,k,l;if(f.isWindow(a)){i=a.document,j=i.documentElement[d];return f.support.boxModel&&j||i.body&&i.body[d]||j}if(a.nodeType===9){i=a.documentElement;if(i[d]>=i[e])return i[d];return Math.max(a.body[e],i[e],a.body[g],i[g])}if(h===b){k=f.css(a,c),l=parseFloat(k);return f.isNumeric(l)?l:k}f(a).css(c,h)},c,a,arguments.length,null)}}),a.jQuery=a.$=f,typeof define=="function"&&define.amd&&define.amd.jQuery&&define("jquery",[],function(){return f})})(window); \ No newline at end of file diff --git a/web/sugarcane/static/script.js b/web/sugarcane/static/script.js deleted file mode 100644 index be16ad931a3..00000000000 --- a/web/sugarcane/static/script.js +++ /dev/null @@ -1,10 +0,0 @@ -$(document).ready(function () { - $('#myTab a:first').tab('show'); - - $('a').click(function (e) { - e.preventDefault(); - $(this).tab('show'); - this.blur(); - }); - -}); diff --git a/web/sugarcane/static/style.css b/web/sugarcane/static/style.css deleted file mode 100644 index ce0b728e615..00000000000 --- a/web/sugarcane/static/style.css +++ /dev/null @@ -1,48 +0,0 @@ -html { - overflow-y: scroll; -} -.buttons{ - margin-top:23px; - text-align:center; -} -.buttons button{ - margin-top:8px; - width:90%; -} -.fixed{ - position:fixed; -} -#zcontainer{ - padding-top:25px; - margin-left:35px; - width:940px; -} -#zcontainer .contain-left{ - width:940px; - display:block; - float:left; -} -.contain-left .tab-content{ - margin-left:180px; -} -#graph_box{ -} -.loading{ - margin:25px; -} -#map_control{ - width:155px; - float:right; -} -#map_canvas{ - width:600px; - height:600px; - float:left; -} -#select_area_btn{ - width:140px; - height:50px; -} -#infobox1{ - text-align:right; -} diff --git a/web/sugarcane/sugarcane.R b/web/sugarcane/sugarcane.R deleted file mode 100755 index d77215da3a3..00000000000 --- a/web/sugarcane/sugarcane.R +++ /dev/null @@ -1,206 +0,0 @@ -library(BioCro) -library(XML) -#############Reading from the fXML file generated via web-interface###### -settings.xml <- XML::xmlParse('./config.xml') -settings <- XML::xmlToList(settings.xml) - -################ Read Year and Location Details and derive model input############ -lat<-as.numeric(settings$location$latitude) -lon<-as.numeric(settings$location$longitude) -dateofplanting<-settings$simulationPeriod$dateofplanting -dateofharvest<-settings$simulationPeriod$dateofharvest -year1<-as.numeric(substr(dateofplanting,7,10)) -date1<-as.numeric(substr(dateofplanting,4,5)) -month1<-as.numeric(substr(dateofplanting,1,2)) -year2<-as.numeric(substr(dateofharvest,7,10)) -date2<-as.numeric(substr(dateofharvest,4,5)) -month2<-as.numeric(substr(dateofharvest,1,2)) - -dummydate<-as.Date(c(paste(year1,"-1","-1",sep=""),paste(year1,"-",month1,"-",date1,sep=""),paste(year2,"-",month2,"-",date2,sep=""))) -day1<-as.numeric(dummydate[2]-dummydate[1]) -dayn<-as.numeric(dummydate[3]-dummydate[1]) - -#####Getting Phenology Parameters ####################################### - - -pheno<-settings$pft$phenoParms -pp<-phenoParms() -pp[names(pheno)]<-as.numeric(pheno) - - -######Getting Nitrogen Parameters ####################################### -nitro<-settings$pft$nitroParms -nitroP<-nitroParms() -nitroP[names(nitro)]<-as.numeric(nitro) - -### Getting Photosynthesis Parameters #################################### 
-photo<-settings$pft$photoParms
-photoP<-photoParms()
-photoP[names(photo)]<-as.numeric(photo)
-
-### Getting Senescence Parameters ###############################################
-sene<-settings$pft$seneParms
-senP<-seneParms()
-senP[names(sene)]<-as.numeric(sene)
-
-#### Getting Soil Parameters##################################################
-soil<-settings$pft$soilParms
-soilP<-soilParms()
-soilP[names(soil)]<-as.numeric(soil)
-
-##### Getting Canopy Parameters #######################################
-canopy<-settings$pft$canopyParms
-canopyP40<-canopyParms()
-## canopyP40[names(canopy)]<-as.numeric(canopy) # Error message Error: (list) object cannot be coerced to type 'double'
-# read individual elements
-canopyP40$Sp<-as.numeric(canopy$Sp)
-canopyP40$SpD<-as.numeric(canopy$SpD)
-canopyP40$nlayers<-as.numeric(canopy$nlayers)
-canopyP40$chi.l<-as.numeric(canopy$chi.l)
-canopyP40$heightFactor<-as.numeric(canopy$heightFactor)
-
-# Reading sugar phenology
-sugarpheno<-settings$pft$SugarPhenoParms
-ppP50<-SugarPhenoParms()
-ppP50[names(sugarpheno)]<-as.numeric(sugarpheno)
-
-##Here we are extracting weather data from NCEP and preparing for model #######
-
-#Declare a function
-library(RNCEP)
- InputForWeach<-function (lat,lon,year1,year2){
-
- # Get Temperature Records
- avgTemp<-NCEP.gather(variable=c('air.2m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- avgTemp<-NCEP.aggregate(avgTemp,HOURS=FALSE,fxn='mean') # Average Flux for the whole day
- avgTemp<-NCEP.array2df(avgTemp,var.name='avgTemp')
- avgTemp<-aggregate(avgTemp~datetime,data=avgTemp,mean) # mean of all nearby spatial locations
- avgTemp$datetime<-substr(avgTemp$datetime,1,10)
- avgTemp$avgTemp<-avgTemp$avgTemp-273
-
- # Get Solar Radiation Records
- solarR<-NCEP.gather(variable=c('dswrf.sfc'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- solarR<-NCEP.aggregate(solarR,HOURS=FALSE,fxn='mean') # Average Flux for the whole day
- solarR<-NCEP.array2df(solarR,var.name='solarR')
- solarR$solarR<- solarR$solarR*24*60*60*1e-6 # To convert W/m2 to MJ/m2
- solarR<-aggregate(solarR~datetime,data=solarR,mean) # mean of all nearby spatial locations
- solarR$datetime<-substr(solarR$datetime,1,10)
-
- ## T Maximum Data
- Tmax<-NCEP.gather(variable=c('tmax.2m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Tmax<-NCEP.aggregate(Tmax,HOURS=FALSE,fxn='max')
- Tmax<-NCEP.array2df(Tmax,var.name='Tmax')
- Tmax<-aggregate(Tmax~datetime,data=Tmax,max)
- Tmax$datetime<-substr(Tmax$datetime,1,10)
- Tmax$Tmax<-Tmax$Tmax-273
-
-
- ## T Minimum Data
- Tmin<-NCEP.gather(variable=c('tmin.2m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Tmin<-NCEP.aggregate(Tmin,HOURS=FALSE,fxn='min')
- Tmin<-NCEP.array2df(Tmin,var.name='Tmin')
- Tmin<-aggregate(Tmin~datetime,data=Tmin,min)
- Tmin$datetime<-substr(Tmin$datetime,1,10)
- Tmin$Tmin<-Tmin$Tmin-273
-
- ## Relative Humidity (I am using surface level, not Grid level, to get relative humidity, not absolute humidity; hope it's not a problem.)
- RH<-NCEP.gather(variable=c('rhum.sig995'), level='surface', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,return.units = FALSE, status.bar=FALSE)
-
- # Warning message: not available in Reanalysis 2; using Reanalysis 1 instead.
-
- RHavg<-NCEP.aggregate(RH,HOURS=FALSE,fxn='mean')
- RHmax<-NCEP.aggregate(RH,HOURS=FALSE,fxn='max')
- RHmin<-NCEP.aggregate(RH,HOURS=FALSE,fxn='min')
- RHavg<-NCEP.array2df(RHavg,var.name='RH')
- RHmax<-NCEP.array2df(RHmax,var.name='RH')
- RHmin<-NCEP.array2df(RHmin,var.name='RH')
-
- RHavg<-aggregate(RH~datetime,data=RHavg,mean)
- RHmax<-aggregate(RH~datetime,data=RHmax,max)
- RHmin<-aggregate(RH~datetime,data=RHmin,min)
- RHavg$datetime<-substr(RHavg$datetime,1,10)
- RHmax$datetime<-substr(RHmax$datetime,1,10)
- RHmin$datetime<-substr(RHmin$datetime,1,10)
- RHavg$RH<-RHavg$RH*0.01 ## Percent to Fraction
- RHmax$RH<-RHmax$RH*0.01 ## Percent to Fraction
- RHmin$RH<-RHmin$RH*0.01 ## Percent to Fraction
-
-
- ## WIND SPEED
-
- Vwind<-NCEP.gather(variable=c('vwnd.10m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Vwind<-NCEP.aggregate(Vwind,HOURS=FALSE,fxn='mean')
- Vwind<-NCEP.array2df(Vwind,var.name='Vwind')
- Vwind<-aggregate(Vwind~datetime,data=Vwind,mean)
- Vwind$datetime<-substr(Vwind$datetime,1,10)
-
- Uwind<-NCEP.gather(variable=c('uwnd.10m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Uwind<-NCEP.aggregate(Uwind,HOURS=FALSE,fxn='mean')
- Uwind<-NCEP.array2df(Uwind,var.name='Uwind')
- Uwind<-aggregate(Uwind~datetime,data=Uwind,mean)
- Uwind$datetime<-substr(Uwind$datetime,1,10)
-
- Uwind$Uwind<-sqrt(Uwind$Uwind^2+Vwind$Vwind^2)
- Uwind$Uwind<-Uwind$Uwind*4.87/log(67.8*10-5.42) # converting wind speed from 10 m to 2 m height using the correlation provided by FAO (http://www.fao.org/docrep/X0490E/x0490e07.htm)
-
- Uwind$Uwind<-Uwind$Uwind*(3600)/(1609) # unit conversion from m/s to miles per hr
-
-
- ## Precipitation
-
- Rain<-NCEP.gather(variable=c('prate.sfc'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Rain<-NCEP.aggregate(Rain,HOURS=FALSE,fxn='mean')
- Rain<-NCEP.array2df(Rain,var.name='Rain')
- Rain<-aggregate(Rain~datetime,data=Rain,mean)
- Rain$datetime<-substr(Rain$datetime,1,10)
- Rain$Rain<-Rain$Rain*(24*60*60)*(1/1000)*39.37 # Converting from kg/m2/sec to kg/m2/day (= mm/day) to m/day to inches/day
-
- day<-numeric(0)
- year<-numeric(0)
-for (i in year1:year2)
- {
- if(lubridate::leap_year(i))
- {
- indx<-as.integer(length(day))
- day[as.integer(indx+1):as.integer(indx+366)]=seq(1:366)
- year[as.integer(indx+1):as.integer(indx+366)]=rep(i,366)
- } else {
- indx<-as.integer(length(day))
- day[as.integer(indx+1):as.integer(indx+365)]=seq(1:365)
- year[as.integer(indx+1):as.integer(indx+365)]=rep(i,365)
- }
- }
-result<-data.frame(year=year,day=day,solarR=solarR$solarR,Tmax=Tmax$Tmax, Tmin=Tmin$Tmin, Tavg=avgTemp$avgTemp, RHmax=RHmax$RH,RHmin=RHmin$RH, RHavg=RHavg$RH, WS=Uwind$Uwind,precip=Rain$Rain)
- return(result)
- }
-
-climdata<-InputForWeach(lat=lat,lon=lon,year1=year1,year2=year2)
-wet<-weachNEW(climdata,lat=lat,ts=1, temp.units="Celsius", rh.units="fraction",ws.units="mph",pp.units="in")
-
-
-res<- BioGro(wet,lat=lat,phenoControl=pp,canopyControl=canopyP40, soilControl=soilP,photoControl=photoP, seneControl=senP,sugarphenoControl=ppP50,nitroControl=nitroP,day1 = day1, dayn = dayn,iRhizome=4,irtl=1e-03)
-
-Date<-seq(dummydate[2],dummydate[3]+1,1)
-DAP<-seq(0,length(Date)-1)
-Leaf=res$Leaf
-Stem=res$Stem
-Root=res$Root
-Sugar<-res$Sugar
-LAI<-res$LAI
-ThermalT<-res$dailyThermalT
-
-toprint<-data.frame(DAP=DAP,Leaf=Leaf,Stem=Stem,Root=Root,Sugar=Sugar,LAI=LAI,ThermalT=ThermalT)
-
-toprint<-round(toprint,digits=2)
-write.table(toprint, file="./ooutput",append=FALSE,sep='\t')
-
-
-
-
diff --git a/web/sugarcane/sugarcaneForBioCro.R b/web/sugarcane/sugarcaneForBioCro.R
deleted file mode 100644
index 43dc5764c2d..00000000000
--- a/web/sugarcane/sugarcaneForBioCro.R
+++ /dev/null
@@ -1,201 +0,0 @@
-library(BioCro)
-library(XML)
-#############Reading from the fXML file generated via web-interface######
-settings.xml <- XML::xmlParse('./config.xml')
-settings <- XML::xmlToList(settings.xml)
-
-################ Read Year and Location Details and derive model input############
-lat<-as.numeric(settings$location$latitude)
-lon<-as.numeric(settings$location$longitude)
-dateofplanting<-settings$simulationPeriod$dateofplanting
-dateofharvest<-settings$simulationPeriod$dateofharvest
-year1<-as.numeric(substr(dateofplanting,7,10))
-date1<-as.numeric(substr(dateofplanting,4,5))
-month1<-as.numeric(substr(dateofplanting,1,2))
-year2<-as.numeric(substr(dateofharvest,7,10))
-date2<-as.numeric(substr(dateofharvest,4,5))
-month2<-as.numeric(substr(dateofharvest,1,2))
-
-dummydate<-as.Date(c(paste(year1,"-1","-1",sep=""),paste(year1,"-",month1,"-",date1,sep=""),paste(year2,"-",month2,"-",date2,sep="")))
-day1<-as.numeric(dummydate[2]-dummydate[1])
-dayn<-as.numeric(dummydate[3]-dummydate[1])
-
-#####Getting Phenology Parameters #######################################
-
-
-pheno<-settings$pft$phenoParms
-pp<-phenoParms()
-pp[names(pheno)]<-as.numeric(pheno)
-
-
-######Getting Nitrogen Parameters #######################################
-nitro<-settings$pft$nitroParms
-nitroP<-nitroParms()
-nitroP[names(nitro)]<-as.numeric(nitro)
-
-### Getting Photosynthesis Parameters ####################################
-photo<-settings$pft$photoParms
-photoP<-photoParms()
-photoP[names(photo)]<-as.numeric(photo)
-
-### Getting Senescence Parameters ###############################################
-sene<-settings$pft$seneParms
-senP<-seneParms()
-senP[names(sene)]<-as.numeric(sene)
-
-#### Getting Soil Parameters##################################################
-soil<-settings$pft$soilParms
-soilP<-soilParms()
-soilP[names(soil)]<-as.numeric(soil)
-
-##### Getting Canopy Parameters #######################################
-canopy<-settings$pft$canopyParms
-canopyP40<-canopyParms()
-## canopyP40[names(canopy)]<-as.numeric(canopy) # Error message Error: (list) object cannot be coerced to type 'double'
-# read individual elements
-canopyP40$Sp<-as.numeric(canopy$Sp)
-canopyP40$SpD<-as.numeric(canopy$SpD)
-canopyP40$nlayers<-as.numeric(canopy$nlayers)
-canopyP40$chi.l<-as.numeric(canopy$chi.l)
-canopyP40$heightFactor<-as.numeric(canopy$heightFactor)
-
-# Reading sugar phenology
-sugarpheno<-settings$pft$SugarPhenoParms
-ppP50<-SugarPhenoParms()
-ppP50[names(sugarpheno)]<-as.numeric(sugarpheno)
-
-##Here we are extracting weather data from NCEP and preparing for model #######
-
-#Declare a function
-library(RNCEP)
- InputForWeach<-function (lat,lon,year1,year2){
-
- # Get Temperature Records
- avgTemp<-NCEP.gather(variable=c('air.2m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- avgTemp<-NCEP.aggregate(avgTemp,HOURS=FALSE,fxn='mean') # Average Flux for the whole day
- avgTemp<-NCEP.array2df(avgTemp,var.name='avgTemp')
- avgTemp<-aggregate(avgTemp~datetime,data=avgTemp,mean) # mean of all nearby spatial locations
- avgTemp$datetime<-substr(avgTemp$datetime,1,10)
- avgTemp$avgTemp<-avgTemp$avgTemp-273
-
- # Get Solar Radiation Records
- solarR<-NCEP.gather(variable=c('dswrf.sfc'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- solarR<-NCEP.aggregate(solarR,HOURS=FALSE,fxn='mean') # Average Flux for the whole day
- solarR<-NCEP.array2df(solarR,var.name='solarR')
- solarR$solarR<- solarR$solarR*24*60*60*1e-6 # To convert W/m2 to MJ/m2
- solarR<-aggregate(solarR~datetime,data=solarR,mean) # mean of all nearby spatial locations
- solarR$datetime<-substr(solarR$datetime,1,10)
-
- ## T Maximum Data
- Tmax<-NCEP.gather(variable=c('tmax.2m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Tmax<-NCEP.aggregate(Tmax,HOURS=FALSE,fxn='max')
- Tmax<-NCEP.array2df(Tmax,var.name='Tmax')
- Tmax<-aggregate(Tmax~datetime,data=Tmax,max)
- Tmax$datetime<-substr(Tmax$datetime,1,10)
- Tmax$Tmax<-Tmax$Tmax-273
-
-
- ## T Minimum Data
- Tmin<-NCEP.gather(variable=c('tmin.2m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Tmin<-NCEP.aggregate(Tmin,HOURS=FALSE,fxn='min')
- Tmin<-NCEP.array2df(Tmin,var.name='Tmin')
- Tmin<-aggregate(Tmin~datetime,data=Tmin,min)
- Tmin$datetime<-substr(Tmin$datetime,1,10)
- Tmin$Tmin<-Tmin$Tmin-273
-
- ## Relative Humidity (I am using surface level, not Grid level, to get relative humidity, not absolute humidity; hope it's not a problem.)
- RH<-NCEP.gather(variable=c('rhum.sig995'), level='surface', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,return.units = FALSE, status.bar=FALSE)
-
- # Warning message: not available in Reanalysis 2; using Reanalysis 1 instead.
-
- RHavg<-NCEP.aggregate(RH,HOURS=FALSE,fxn='mean')
- RHmax<-NCEP.aggregate(RH,HOURS=FALSE,fxn='max')
- RHmin<-NCEP.aggregate(RH,HOURS=FALSE,fxn='min')
- RHavg<-NCEP.array2df(RHavg,var.name='RH')
- RHmax<-NCEP.array2df(RHmax,var.name='RH')
- RHmin<-NCEP.array2df(RHmin,var.name='RH')
-
- RHavg<-aggregate(RH~datetime,data=RHavg,mean)
- RHmax<-aggregate(RH~datetime,data=RHmax,max)
- RHmin<-aggregate(RH~datetime,data=RHmin,min)
- RHavg$datetime<-substr(RHavg$datetime,1,10)
- RHmax$datetime<-substr(RHmax$datetime,1,10)
- RHmin$datetime<-substr(RHmin$datetime,1,10)
- RHavg$RH<-RHavg$RH*0.01 ## Percent to Fraction
- RHmax$RH<-RHmax$RH*0.01 ## Percent to Fraction
- RHmin$RH<-RHmin$RH*0.01 ## Percent to Fraction
-
-
- ## WIND SPEED
-
- Vwind<-NCEP.gather(variable=c('vwnd.10m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Vwind<-NCEP.aggregate(Vwind,HOURS=FALSE,fxn='mean')
- Vwind<-NCEP.array2df(Vwind,var.name='Vwind')
- Vwind<-aggregate(Vwind~datetime,data=Vwind,mean)
- Vwind$datetime<-substr(Vwind$datetime,1,10)
-
- Uwind<-NCEP.gather(variable=c('uwnd.10m'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Uwind<-NCEP.aggregate(Uwind,HOURS=FALSE,fxn='mean')
- Uwind<-NCEP.array2df(Uwind,var.name='Uwind')
- Uwind<-aggregate(Uwind~datetime,data=Uwind,mean)
- Uwind$datetime<-substr(Uwind$datetime,1,10)
-
- Uwind$Uwind<-sqrt(Uwind$Uwind^2+Vwind$Vwind^2)
- Uwind$Uwind<-Uwind$Uwind*4.87/log(67.8*10-5.42) # converting wind speed from 10 m to 2 m height using the correlation provided by FAO (http://www.fao.org/docrep/X0490E/x0490e07.htm)
-
- Uwind$Uwind<-Uwind$Uwind*(3600)/(1609) # unit conversion from m/s to miles per hr
-
-
- ## Precipitation
-
- Rain<-NCEP.gather(variable=c('prate.sfc'), level='gaussian', months.minmax=c(1,12), years.minmax=c(year1,year2),lat.southnorth=c(lat,lat),lon.westeast=c(lon,lon), reanalysis2 = TRUE,
- return.units = FALSE, status.bar=FALSE)
- Rain<-NCEP.aggregate(Rain,HOURS=FALSE,fxn='mean')
- Rain<-NCEP.array2df(Rain,var.name='Rain')
- Rain<-aggregate(Rain~datetime,data=Rain,mean)
- Rain$datetime<-substr(Rain$datetime,1,10)
- Rain$Rain<-Rain$Rain*(24*60*60)*(1/1000)*39.37 # Converting from kg/m2/sec to kg/m2/day (= mm/day) to m/day to inches/day
-
- day<-numeric(0)
- year<-numeric(0)
-
-for (i in year1:year2) {
- if(lubridate::leap_year(i)) {
- indx<-as.integer(length(day))
- day[as.integer(indx+1):as.integer(indx+366)]=seq(1:366)
- year[as.integer(indx+1):as.integer(indx+366)]=rep(i,366)
- } else {
- indx<-as.integer(length(day))
- day[as.integer(indx+1):as.integer(indx+365)]=seq(1:365)
- year[as.integer(indx+1):as.integer(indx+365)]=rep(i,365)
- }
- }
-result<-data.frame(year=year,day=day,solarR=solarR$solarR,Tmax=Tmax$Tmax, Tmin=Tmin$Tmin, Tavg=avgTemp$avgTemp, RHmax=RHmax$RH,RHmin=RHmin$RH, RHavg=RHavg$RH, WS=Uwind$Uwind,precip=Rain$Rain)
- return(result)
- }
-
-climdata<-InputForWeach(lat=lat,lon=lon,year1=year1,year2=year2)
-wet<-weachNEW(climdata,lat=lat,ts=1, temp.units="Celsius", rh.units="fraction",ws.units="mph",pp.units="in")
-
-
-res<- BioGro(wet,lat=lat,phenoControl=pp,canopyControl=canopyP40, soilControl=soilP,photoControl=photoP, seneControl=senP,sugarphenoControl=ppP50,nitroControl=nitroP,day1 = day1, dayn = dayn,iRhizome=4,irtl=1e-03)
-
-Date<-seq(dummydate[2],dummydate[3]+1,1)
-DAP<-seq(0,length(Date)-1)
-Leaf=res$Leaf
-Stem=res$Stem
-Root=res$Root
-Sugar<-res$Sugar
-LAI<-res$LAI
-ThermalT<-res$dailyThermalT
-
-toprint<-data.frame(DAP=DAP,Leaf=Leaf,Stem=Stem,Root=Root,Sugar=Sugar,LAI=LAI,ThermalT=ThermalT)
-
-toprint<-round(toprint,digits=2)
-write.table(toprint, file="./ooutput",append=FALSE,sep='\t')

From 3008d241e81860ec28ec192e2cfcee236bee7fa9 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 3 Aug 2020 07:21:39 -0500
Subject: [PATCH 1324/2289] update CHANGELOG

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index e34d9227904..298e478faf9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -62,6 +62,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha
 
 ### Removed
 
+- Removed the sugarcane and db folders from web, this removes the simple DB editor in the web folder. (#2532)
 - Removed ED2IN.git (#2599) 'definitely going to break things for people' - but they can still use PEcAn <=1.7.1
 - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563).
 - Scripts `dump.pgsql.sh` and `dump.mysql.sh` have been deleted. See the ["BeTY database administration"](https://pecanproject.github.io/pecan-documentation/develop/database.html) chapter of the PEcAn documentation for current recommendations (#2563).

From c929ddbca678f609f241ca03d0c6d508390dd8d7 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 18:47:03 +0530
Subject: [PATCH 1325/2289] Revert "Delete 02_hidden_analyses.Rmd"

This reverts commit c951623ecfa0260872148c35f9a5011b661a7f30.

---
 .../02_hidden_analyses.Rmd | 798 ++++++++++++++++++
 1 file changed, 798 insertions(+)
 create mode 100644 book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd

diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd
new file mode 100644
index 00000000000..a553c0b5deb
--- /dev/null
+++ b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd
@@ -0,0 +1,798 @@
+## Settings-configured analyses
+
+These analyses can be run through the web interface, but lack graphical interfaces and currently can only be configured through the XML settings. To run these analyses, use the **Edit pecan.xml** checkbox on the Input configuration page. Eventually, these modules will be integrated into the web user interface.
+
+- [Parameter Data Assimilation (PDA)](#pda)
+- [State Data Assimilation (SDA)](#sda)
+- [MultiSettings](#multisettings)
+- [Benchmarking](#benchmarking)
+
+(TODO: Add links)
+
+### Parameter data assimilation (PDA) {#pda}
+
+All functions pertaining to Parameter Data Assimilation are housed within: **pecan/modules/assim.batch**.
+
+For detailed usage of the module, please see the vignette under **pecan/modules/assim.batch/vignettes**.
+
+A hierarchical version of the PDA is also implemented; for more details, see the `MultiSitePDAVignette` [package vignette](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/vignettes/MultiSitePDAVignette.Rmd) and the function-level documentation.
+
+#### **pda.mcmc.R**
+This is the main PDA code. It performs Bayesian MCMC on model parameters by proposing parameter values, running the model, calculating a likelihood (between model output and supplied observations), and accepting or rejecting the proposed parameters (Metropolis algorithm); a minimal sketch of this accept/reject step is shown below.
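+The accept/reject step at the heart of the loop can be sketched in a few lines of R. This is an illustration only, not PEcAn's implementation: `propose()` and `calc.lpost()` are hypothetical stand-ins for the jump distribution and the model-run-plus-likelihood steps described in the notes below, and a symmetric jump distribution is assumed.
+
+```
+# Illustrative Metropolis step, on the log scale to avoid numerical underflow
+metropolis.step <- function(par, lpost, propose, calc.lpost) {
+  par.new   <- propose(par)        # draw a candidate from the jump distribution
+  lpost.new <- calc.lpost(par.new) # log(prior) + log-likelihood from a model run
+  if (log(runif(1)) < (lpost.new - lpost)) {
+    list(par = par.new, lpost = lpost.new)  # accept the proposal
+  } else {
+    list(par = par, lpost = lpost)          # reject: keep the current values
+  }
+}
+```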
+Additional notes:
+
+* The first argument is *settings*, followed by others that all default to *NULL*. *settings* is a list used throughout Pecan, which contains all the user options for whatever analyses are being done. The easiest thing to do is just pass that whole object all around the Pecan code and let different functions access whichever settings they need. That's what a lot of the rest of the Pecan code does. But the flexibility to override most of the relevant settings in *settings* is there by providing them directly as arguments to the function.
+
+* The *if(FALSE)...*: If you're trying to step through the function you probably will have the *settings* object around, but those other variables will be undefined. If you set them all to NULL then they'll be ignored without causing errors. It is there for debugging purposes.
+
+* The next step calls pda.settings(), which is in the file pda.utils.R (see below). It checks whether any settings are being overridden by arguments, and in most cases supplies default values if it can't find either.
+
+* In the MCMC setup section
+  * The code is set up to allow you to start a new MCMC chain, or to continue a previous chain as specified in settings.
+  * The code writes a simple text file of parameter samples at every iteration, which lets you get some results and even re-start an MCMC that fails for some reason.
+  * The code has adaptive jump distributions, so you can see some initialization of the jump distributions and associated variables here.
+  * Finally, note that after all this setup a new XML settings file is saved. The idea is that the original pecan.xml you create is preserved for provenance, and then periodically throughout the workflow the settings (likely containing new information) are re-saved with descriptive filenames.
+
+* MCMC loop
+  * Periodically adjust the jump distribution to bring the acceptance rate closer to the target.
+  * Propose new parameters one at a time. For each:
+    * First, note that Pecan may be handling many more parameters than are actually being targeted by PDA. Pecan puts priors on any variables it has information for (in the BETY database), and then these get passed around throughout the analysis and every step (meta-, sensitivity, ensemble analyses, etc.). But for PDA, you specify a separate list of probably far fewer parameters to constrain with data. These are the ones that get looped over and varied here. The distinction between all parameters and only those dealt with in PDA is handled in the setup code above.
+    * First a new value is proposed for the parameter of interest.
+    * Then, a new model run is set up, identical to the previous except with the new proposed value for the one parameter being updated on this run.
+    * The model run is started, and outputs collected after waiting for it to finish.
+    * A new likelihood is calculated based on the model outputs and the observed dataset provided.
+    * The standard Metropolis acceptance criterion is used to decide whether to keep the proposed parameter.
+  * Periodically (at an interval specified in settings), a diagnostic figure is saved to disk so you can check on progress.
+  * This works only for NEE currently.
+
+#### **pda.mcmc.bs.R**
+This file is basically identical to pda.mcmc.R, but rather than propose parameters one at a time, it proposes new values for all parameters at once ("bs" stands for "block sampling"). You choose which option to use by specifying settings$assim.batch$method:
+ * "bruteforce" means sample parameters one at a time
+ * "bruteforce.bs" means use this version, sampling all parameters at once
+ * "emulator" means use the emulated-likelihood version
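+In pecan.xml terms, settings$assim.batch$method corresponds to a fragment like the sketch below. Only the `<method>` tag follows directly from the text above; the other tag shown is an illustrative assumption, not a complete assim.batch configuration.
+
+```
+<assim.batch>
+  <method>emulator</method> <!-- or bruteforce / bruteforce.bs -->
+  <iter>100000</iter>       <!-- hypothetical chain-length tag -->
+</assim.batch>
+```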
+You choose which option to use by specifying settings$assim.batch$method:
+  * "bruteforce" means sample parameters one at a time
+  * "bruteforce.bs" means use this version, sampling all parameters at once
+  * "emulator" means use the emulated-likelihood version
+
+#### **pda.emulator**
+This version of the PDA code again looks quite similar to the basic "bruteforce" one, but its mechanics are very different. The basic idea is, rather than running thousands of model iterations to explore parameter space via MCMC, run a relatively small number of runs that have been carefully chosen to give good coverage of parameter space. Then, basically interpolate the likelihood calculated for each of those runs (actually, fit a Gaussian process to it), to get a surface that "emulates" the true likelihood. Now, perform regular MCMC (just like the "bruteforce" approach), except instead of actually running the model on every iteration to get a likelihood, just get an approximation from the likelihood emulator. Since the latter step takes virtually no time, you can run as long an MCMC as you need at little computational cost, once you have done the initial model runs to create the likelihood emulator.
+
+#### **pda.mcmc.recover.R**
+This function is for recovering a failed PDA MCMC run.
+
+#### **pda.utils.R**
+This file contains most of the individual functions used by the main PDA functions (pda.mcmc.*.R).
+
+ * *assim.batch* is the main function Pecan calls to do PDA. It checks which method is requested (bruteforce, bruteforce.bs, or emulator) and calls the appropriate function described above.
+ * *pda.settings* handles settings. If a setting isn't found, the code can usually supply a reasonable default.
+ * *pda.load.priors* is fairly self-explanatory, except that it handles a lot of cases and gives different options priority over others. Basically, the priors to use for PDA parameters can come from either a Pecan prior.distns or post.distns object (the latter would be, e.g., the posteriors of a meta-analysis or previous PDA), or specified either by file path or BETY ID. If not told otherwise, the code tries to just find the most recent posterior in BETY, and use that as prior for PDA.
+ * *pda.create.ensemble* gets an ensemble ID for the PDA. All model runs associated with an individual PDA (any of the three methods) are considered part of a single ensemble. All this function does is register a new ensemble in BETY and return the ID that BETY gives it.
+ * *pda.define.prior.fn* creates R functions for all of the priors the PDA will use.
+ * *pda.init.params* sets up the parameter matrix for the run, which has one row per iteration, and one column per parameter. Columns include all Pecan parameters, not just the (probably small) subset that are being updated by PDA. This is for compatibility with other Pecan components. If starting a fresh run, the returned matrix is just a big empty matrix to fill in as the PDA runs. If continuing an existing MCMC, then it will be the previous params matrix, with a bunch of blank rows added on for filling in during this round of PDA.
+ * *pda.init.run* This is basically a big wrapper for Pecan's write.config function (actually functions [plural], since every model in Pecan has its own version). For the bruteforce and bruteforce.bs methods this will be run once per iteration, whereas the emulator method knows about all its runs ahead of time and this will be a big batch of all runs at once.
+ * *pda.adjust.jumps* tweaks the jump distributions for the standard MCMC method, and *pda.adjust.jumps.bs* does the same for the block-sampled version.
+ * *pda.calc.llik* calculates the log-likelihood of the model given all datasets provided to compare it to.
+ * *pda.generate.knots* is for the emulator version of PDA. It uses a Latin hypercube design to sample a specified number of locations in parameter space (see the sketch after this list). These locations are where the model will actually be run, and then the GP interpolates the likelihood surface in between.
+ * *pda.plot.params* provides basic MCMC diagnostics (trace and density) for parameters being sampled.
+ * *pda.postprocess* prepares the posteriors of the PDA, stores them to files and the database, and performs some other cleanup functions.
+ * *pda.load.data.r* This is the function that loads in data that will be used to constrain the PDA. Eventually it is supposed to be more tightly integrated with Pecan, which will know how to load all kinds of data from all kinds of sources. For now, it can do NEE from Ameriflux.
+ * *pda.define.llik.r* A simple helper function that defines likelihood functions for different datasets. Probably in the future this should be queried from the database or something. For now, it is extremely limited. The original test case of NEE assimilation uses a heteroskedastic Laplacian distribution.
+ * *pda.get.model.output.R* Another function that will eventually grow to handle many more cases, or perhaps be replaced by a better system altogether. For now though, it again just handles Ameriflux NEE.
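+
+A minimal sketch of Latin hypercube knot generation, assuming the `lhs` package and two parameters whose priors are a normal and a beta distribution (names and values are illustrative; the actual code maps the [0,1] samples through whatever prior quantile functions *pda.define.prior.fn* created):
+
+```r
+library(lhs)
+n_knots <- 100
+# stratified uniform samples in [0,1]^2, one column per parameter
+probs <- randomLHS(n = n_knots, k = 2)
+# map through each prior's quantile function to get knots in parameter space
+knots <- cbind(
+  qnorm(probs[, 1], mean = 50, sd = 10),
+  qbeta(probs[, 2], shape1 = 2, shape2 = 5)
+)
+# The model is run once at each row of 'knots'; a Gaussian process fit to the
+# resulting likelihoods then stands in for the model during MCMC.
+```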
+#### **get.da.data.\*.R, plot.da.R**
+Old code written by Carl Davidson. Now defunct, but it may contain good ideas so it is currently left in.
+
+### State data assimilation (SDA) {#sda}
+
+`sda.enkf.R` is housed within: `/pecan/modules/assim.sequential/R`
+
+The tree ring tutorial is housed within: `/pecan/documentation/tutorials/StateAssimilation`
+
+More descriptive SDA methods can be found at: `/pecan/book_source/adve_user_guide_web/SDA_Methods.Rmd`
+
+#### **sda.enkf.R Description**
+This is the main ensemble Kalman filter and generalized filter code. Originally, this was just ensemble Kalman filter code. Mike Dietze and Ann Raiho added a generalized ensemble filter to avoid filter divergence. The output of this function will be all of the run outputs, a PDF of diagnostics, and an Rdata object that includes three lists:
+
+* FORECAST will be the ensemble forecasts for each year
+* ANALYSIS will be the updated ensemble sample given the NPP observations
+* enkf.params contains the prior and posterior mean vector and covariance matrix for each time step.
+
+#### **sda.enkf.R Arguments**
+
+* settings - (required) [State Data Assimilation Tags Example] settings object
+
+* obs.mean - (required) a list of observation means named with dates in YYYY/MM/DD format
+
+* obs.cov - (required) a list of observation covariances named with dates in YYYY/MM/DD format
+
+* IC - (optional) initial condition matrix (dimensions: ensemble member # by state variables). Default is NULL.
+
+* Q - (optional) process covariance matrix (dimensions: state variables by state variables). Default is NULL.
+
+#### State Data Assimilation Workflow
+Before running sda.enkf, these tasks must be completed (in no particular order),
+
+* Read in a [State Data Assimilation Tags Example] settings file with tags listed below. i.e. read.settings('pecan.SDA.xml')
+
+* Load data means (obs.mean) and covariances (obs.cov) as lists with PEcAn naming and unit conventions.
Each observation must have a date in YYYY/MM/DD format (optional time) associated with it. If there are missing data, the date must still be represented in the list with an NA as the list object.
+
+* Create initial conditions matrix (IC) that is state variables columns by ensemble members rows in dimension. [sample.IC.MODEL][sample.IC.MODEL.R] can be used to create the IC matrix, but it is not required. This IC matrix is fed into write.configs for the initial model runs.
+
+The main parts of the SDA function are:
+
+Setting up for initial runs:
+
+* Set parameters
+
+* Load initial run inputs via [split.inputs.MODEL][split.inputs.MODEL.R]
+
+* Open database connection
+
+* Get new workflow ids
+
+* Create ensemble ids
+
+Performing the initial set of runs
+
+Set up for data assimilation
+
+Loop over time
+
+* [read.restart.MODEL][read.restart.MODEL.R] - read model restart files corresponding to start.time and stop.time that you want to assimilate data into
+
+* Analysis - There are four choices based on whether process variance is TRUE or FALSE and whether there is data or not. [See explanation below.][Analysis Options]
+
+* [write.restart.MODEL][write.restart.MODEL.R] - This function has two jobs. First, to insert adjusted state back into model restart file. Second, to update start.time, stop.time, and job.sh.
+
+* run model
+
+Save outputs
+
+Create diagnostics
+
+#### State Data Assimilation Tags Example
+
+```
+
+ TRUE
+ FALSE
+ FALSE
+ Single
+
+
+ AGB.pft
+ MgC/ha/yr
+ 0
+ 100000000
+
+
+ TotSoilCarb
+ KgC/m^2
+ 0
+ 100000000
+
+
+
+ 1950/01/01
+ 1960/12/31
+
+ 1
+ 1961/01/01
+ 2010/12/31
+
+```
+
+#### State Data Assimilation Tags Descriptions
+
+* **adjustment** : [optional] TRUE/FALSE flag for whether ensembles need to be adjusted based on weights estimated given their likelihood during the analysis step. The default is TRUE for this flag.
+* **process.variance** : [optional] TRUE/FALSE flag for whether process variance should be estimated (TRUE) or not (FALSE). If TRUE, a generalized ensemble filter will be used. If FALSE, an ensemble Kalman filter will be used. Default is FALSE. If you use the TRUE argument you can set three more optional tags to control the MCMCs built for the generalized ensemble filter.
+* **nitrGEF** : [optional] numeric defining the length of the MCMC chains.
+* **nthin** : [optional] numeric defining the thinning length for the MCMC chains.
+* **nburnin** : [optional] numeric defining the number of burn-in iterations for the MCMCs.
+* **q.type** : [optional] If `process.variance` is set to TRUE then this can take values of Single, Site or PFT.
+* **censored.data** : [optional] logical, set TRUE for censored state variables.
+
+* **sample.parameters** : [optional] TRUE/FALSE flag for whether parameters should be sampled for each ensemble member or not. This allows for more spread in the initial conditions of the forecast.
+* **state.variable** : [required] State variable that is to be assimilated (in PEcAn standard format).
+* **spin.up** : [required] start.date and end.date for initial model runs.
+* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because initial runs can be done over a subset of the full run.
+* **forecast.time.step** : [optional] In the future, this will be used to allow the forecast time step to vary from the data time step.
+* **start.date** : [optional] start date of the state data assimilation (in YYYY/MM/DD format)
+* **end.date** : [optional] end date of the state data assimilation (in YYYY/MM/DD format)
+* **_NOTE:_** start.date and end.date are distinct from values set in the run tag because this analysis can be done over a subset of the run.
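+
+Once such a file is in place, a quick sanity check is to read it back into R and inspect the assimilation block (a minimal sketch, assuming the XML above was saved as `pecan.SDA.xml`):
+
+```r
+settings <- PEcAn.settings::read.settings("pecan.SDA.xml")
+# the tags described above live under state.data.assimilation
+settings$state.data.assimilation$process.variance
+settings$state.data.assimilation$spin.up
+```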
+#### Model Specific Functions for SDA Workflow
+
+#### read.restart.MODEL.R
+The purpose of read.restart is to read model restart files and return a matrix with sites as rows and state variables as columns (a minimal skeleton illustrating this contract is sketched at the end of this section). The state variables must be in PEcAn names and units. The arguments are:
+
+* outdir - output directory
+
+* runid - ensemble member run ID
+
+* stop.time - used to determine which restart file to read (in POSIX format)
+
+* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object
+
+* var.names - vector with state variable names with PEcAn standard naming. Example: c('AGB.pft', 'TotSoilCarb')
+
+* params - parameters used by ensemble member (same format as write.configs)
+
+#### write.restart.MODEL.R
+This model-specific function takes in new state and new parameter matrices from sda.enkf.R after the analysis step and translates the new variables back to the model variables. Then, it updates start.time, stop.time, and job.sh so that start.model.runs() does the correct runs with the new states. In write.restart.LINKAGES and write.restart.SIPNET, job.sh is updated by using write.configs.MODEL.
+
+* outdir - output directory
+
+* runid - run ID for ensemble member
+
+* start.time - beginning of model run (in POSIX format)
+
+* stop.time - end of model run (in POSIX format)
+
+* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object
+
+* new.state - matrix from analysis of updated state variables with PEcAn names (dimensions: site rows by state variable columns)
+
+* new.params - In the future, this will allow us to update parameters based on states (same format as write.configs)
+
+* inputs - model-specific inputs from [split.inputs.MODEL][split.inputs.MODEL.R] used to run the model from start.time to stop.time
+
+* RENAME - [optional] Flag used in write.restart.LINKAGES.R for development.
+
+#### split.inputs.MODEL.R
+This model-specific function gives the correct met and/or other model inputs to settings$run$inputs. This function returns settings$run$inputs to an inputs argument in sda.enkf.R. But the inputs will not need to change for all models, and the function should return settings$run$inputs unchanged if that is the case.
+
+* settings - [pecan.SDA.xml][State Data Assimilation Tags Example] settings object
+
+* start.time - start time for model run (in POSIX format)
+
+* stop.time - stop time for model run (in POSIX format)
+
+#### sample.IC.MODEL.R
+This model-specific function is optional, but it can be used to create the initial condition matrix (IC) with state variables as columns and ensemble members as rows. This IC matrix is used for the initial runs in sda.enkf.R in the write.configs.MODEL function.
+
+* ne - number of ensemble members
+
+* state - matrix of state variables to get initial conditions from
+
+* year - used to determine which year to sample initial conditions from
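+
+As an illustration of the read.restart contract described above, here is a minimal skeleton for a hypothetical model; `MYMODEL`, the NetCDF restart files, and the file-name pattern are all assumptions, not an actual PEcAn implementation:
+
+```r
+read.restart.MYMODEL <- function(outdir, runid, stop.time, settings,
+                                 var.names, params) {
+  # assumed layout: one NetCDF restart file per year under the run directory
+  ncfile <- file.path(outdir, runid, paste0(format(stop.time, "%Y"), ".nc"))
+  nc <- ncdf4::nc_open(ncfile)
+  on.exit(ncdf4::nc_close(nc))
+  # collapse each state variable to a single value, already in PEcAn units
+  forecast <- vapply(var.names,
+                     function(v) mean(ncdf4::ncvar_get(nc, v)),
+                     numeric(1))
+  # one site per run here, so a 1 x n_state_variables matrix
+  matrix(forecast, nrow = 1, dimnames = list(NULL, var.names))
+}
+```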
+#### Analysis Options
+There are four choices, based on whether process variance is TRUE or FALSE and whether there is data or not.
+
+* If there is no data and process variance = FALSE, there is no analysis step.
+
+* If there is no data and process variance = TRUE, process variance is added to the forecast.
+
+* If there is data and process variance = TRUE, [the generalized ensemble filter][The Generalized Ensemble Filter] is implemented with MCMC.
+
+* If there is data and process variance = FALSE, the Kalman filter is used and solved analytically.
+
+#### The Generalized Ensemble Filter
+An ensemble filter is a sequential data assimilation algorithm with two procedures at every time step: a forecast followed by an analysis. The forecast ensembles arise from a model while the analysis makes an adjustment of the forecast ensembles from the model towards the data. An ensemble Kalman filter is typically suggested for this type of analysis because of its computationally efficient analytical solution and its ability to update states based on an estimate of covariance structure. But, in some cases, the ensemble Kalman filter fails because of filter divergence. Filter divergence occurs when forecast variability is too small, which causes the analysis to favor the forecast and diverge from the data. Models often produce low forecast variability because there is little internal stochasticity. Our ensemble filter overcomes this problem in a Bayesian framework by including an estimation of model process variance. This methodology also maintains the benefits of the ensemble Kalman filter by updating the state vector based on the estimated covariance structure.
+
+This process begins after the model is spun up to equilibrium.
+
+The likelihood function uses the data vector $\left(\boldsymbol{y_{t}}\right)$ conditional on the estimated state vector $\left(\boldsymbol{x_{t}}\right)$ such that
+
+ $\boldsymbol{y}_{t}\sim\mathrm{multivariate\:normal}(\boldsymbol{x}_{t},\boldsymbol{R}_{t})$
+
+where $\boldsymbol{R}_{t}=\boldsymbol{\sigma}_{t}^{2}\boldsymbol{I}$ and $\boldsymbol{\sigma}_{t}^{2}$ is a vector of data variances. To obtain an estimate of the state vector $\left(\boldsymbol{x}_{t}\right)$, we use a process model that incorporates a process covariance matrix $\left(\boldsymbol{Q}_{t}\right)$. This process covariance matrix differentiates our methods from past ensemble filters. Our process model contains the following equations
+
+$\boldsymbol{x}_{t} \sim \mathrm{multivariate\: normal}(\boldsymbol{x}_{model_{t}},\boldsymbol{Q}_{t})$
+
+$\boldsymbol{x}_{model_{t}} \sim \mathrm{multivariate\: normal}(\boldsymbol{\mu}_{forecast_{t}},\boldsymbol{P}_{forecast_{t}})$
+
+where $\boldsymbol{\mu}_{forecast_{t}}$ is a vector of means from the ensemble forecasts and $\boldsymbol{P}_{forecast_{t}}$ is a covariance matrix calculated from the ensemble forecasts. The prior for our process covariance matrix is $\boldsymbol{Q}_{t}\sim\mathrm{Wishart}(\boldsymbol{V}_{t},n_{t})$ where $\boldsymbol{V}_{t}$ is a scale matrix and $n_{t}$ is the degrees of freedom. The prior shape parameters are updated at each time step through moment matching such that
+
+$\boldsymbol{V}_{t+1} = n_{t}\bar{\boldsymbol{Q}}_{t}$
+
+$n_{t+1} = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\frac{v_{ijt}^{2}+v_{iit}v_{jjt}}{Var(\boldsymbol{\bar{Q}}_{t})}}{I\times J}$
+
+where we calculate the mean of the process covariance matrix $\left(\bar{\boldsymbol{Q}_{t}}\right)$ from the posterior samples at time t. Degrees of freedom for the Wishart are typically calculated element by element, where $v_{ij}$ are the elements of $\boldsymbol{V}_{t}$. $I$ and $J$ index rows and columns of $\boldsymbol{V}$.
Here, we calculate a mean number of degrees of freedom for $t+1$ by summing over all the elements of the scale matrix $\left(\boldsymbol{V}\right)$ and dividing by the count of those elements $\left(I\times J\right)$. We fit this model sequentially through time in the R computing environment using R package 'rjags.'
+
+Users have control over what they think is the best way to estimate $Q$. Our code will look for the tag `q.type` in the XML settings under `state.data.assimilation`, which can take the three values Single, PFT or Site. If `q.type` is set to Single then one value of process variance will be estimated across all different sites or PFTs. On the other hand, when `q.type` is set to Site or PFT then a process variance will be estimated for each site or PFT, at the cost of more time and computational power.
+
+#### Multi-site State data assimilation.
+The `sda.enkf.multisite` function allows for assimilation of observed data at multiple sites at the same time. In order to run a multi-site SDA, one needs to send a multisettings pecan xml file to this function. This multisettings xml file needs to contain the information required for running at least two sites under the `run` tag. The code will automatically run the ensembles for all the sites and reformat the outputs to match the formats required for the analysis step.
+
+The observed mean and cov need to be formatted as lists of different dates with observations. Each element of this list must itself be a list of the mean vectors and covariance matrices of the different sites, named by their siteid. If zero variance is estimated for a variable inside obs.cov, the SDA code will automatically replace it with half of the minimum variance from the other non-zero variables in that time step.
+
+This would look something like this:
+```
+> obs.mean
+
+$`2010/12/31`
+$`2010/12/31`$`1000000650`
+ AbvGrndWood GWBI
+ 111.502 1.0746
+
+$`2010/12/31`$`1000000651`
+ AbvGrndWood GWBI
+ 114.302 1.574695
+```
+
+```
+> obs.cov
+
+$`2010/12/31`
+$`2010/12/31`$`1000000650`
+ [,1] [,2]
+[1,] 19.7821691 0.213584319
+[2,] 0.5135843 0.005162113
+
+$`2010/12/31`$`1000000651`
+ [,1] [,2]
+[1,] 15.2821691 0.513584319
+[2,] 0.1213583 0.001162113
+```
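+
+A minimal sketch of constructing these nested lists in R (site ids and values are taken from the printout above and are purely illustrative):
+
+```r
+obs.mean <- list(
+  "2010/12/31" = list(
+    "1000000650" = c(AbvGrndWood = 111.502, GWBI = 1.0746),
+    "1000000651" = c(AbvGrndWood = 114.302, GWBI = 1.574695)
+  )
+)
+obs.cov <- list(
+  "2010/12/31" = list(
+    "1000000650" = matrix(c(19.7821691, 0.5135843,
+                            0.213584319, 0.005162113), nrow = 2),
+    "1000000651" = matrix(c(15.2821691, 0.1213583,
+                            0.513584319, 0.001162113), nrow = 2)
+  )
+)
+```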
+An example multi-settings pecan xml file may look like the one below:
+```
+
+
+
+ FALSE
+ TRUE
+
+ 1000000040
+ 1000013298
+
+
+
+ GWBI
+ KgC/m^2
+ 0
+ 9999
+
+
+ AbvGrndWood
+ KgC/m^2
+ 0
+ 9999
+
+
+ 1960/01/01
+ 2000/12/31
+
+
+
+ -1
+
+ 2017/12/06 21:19:33 +0000
+
+ /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768
+
+
+ bety
+ bety
+ 128.197.168.114
+ bety
+ PostgreSQL
+ false
+
+ /fs/data1/pecan.data/dbfiles/
+
+
+
+ temperate.deciduous_SDA
+
+ 2
+
+ /fs/data2/output//PEcAn_1000008768/pft/temperate.deciduous_SDA
+ 1000008552
+
+
+
+ 3000
+
+ FALSE
+ TRUE
+
+
+
+ 20
+ 1000016146
+ 1995
+ 1999
+
+
+ uniform
+
+
+ sampling
+
+
+ parameters
+
+
+ soil
+
+
+
+
+ 1000000022
+ /fs/data3/hamzed/output/paleon_sda_SIPNET-8768/Bartlett.param
+ SIPNET
+ r136
+ FALSE
+ /fs/data5/pecan.models/SIPNET/trunk/sipnet_ssr
+
+
+ 1000008768
+
+
+
+
+ 1000000650
+ 1960/01/01
+ 1965/12/31
+ Harvard Forest - Lyford Plots (PalEON PHA)
+ 42.53
+ -72.18
+
+
+
+ CRUNCEP
+ SIPNET
+
+ /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim
+
+
+
+ 1960/01/01
+ 1980/12/31
+
+
+
+ 1000000651
+ 1960/01/01
+ 1965/12/31
+ Harvard Forest - Lyford Plots (PalEON PHA)
+ 42.53
+ -72.18
+
+
+
+ CRUNCEP
+ SIPNET
+
+ /fs/data1/pecan.data/dbfiles/CRUNCEP_SIPNET_site_0-758/CRUNCEP.1960-01-01.2010-12-31.clim
+
+
+
+ 1960/01/01
+ 1980/12/31
+
+
+
+ localhost
+ /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run
+ /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out
+
+
+ TRUE
+ TRUE
+ TRUE
+
+ /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/run
+ /fs/data3/hamzed/output/MultiSite_Sandbox/paleon_sda_SIPNET-8768/out
+ run
+
+```
+
+### Running SDA on remote
+In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote.Sync.launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qsub command for running the job. `Remote.Sync.launcher`, on the other hand, sits on the local machine, monitors the progress of the job(s), and brings back the outputs as long as the job is running.
+
+`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft`, etc., testing whether they exist on the remote machine. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing `Remote.Sync.launcher` to monitor the progress of the run, check whether the job is still alive, and determine whether `sda.output.rdata` has been updated since the last check.
+
+Additionally, the `Remote.Sync.launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output.
+
+Several points on how to prepare your xml settings for the remote SDA run:
+1 - In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags.
+2 - Inside the xml setting an `` flag needs to be included and point to a local directory where `SDA_remote_launcher` will look for either a `sample.Rdata` file or a `pft` folder.
+3 - You need to set your `` tag according to the desired remote machine. You can learn more about this in the `Remote execution with PEcAn` section of the documentation. Please make sure that the `` tag inside `` is pointing to a directory where you would like to store and run your SDA job(s).
+4 - Finally, make sure the tag inside the tag is set to the correct path on the remote machine.
+
+### Restart functionality in SDA
+If you prefer to run your SDA analysis in multiple stages, where each phase picks up where the previous one left off, you can use the `restart` argument in the `sda.enkf.multisite` function. You need to make sure that the output from the previous step exists in the `SDA` folder (in the `outfolder`), and that the `` is the same as `` from the previous step. When you run the SDA with the restart parameter, it will load the output from the previous step and use the configs already written in the run folder to set itself up for the next step.
Using the restart argument could be as easy as:
+
+```
+sda.enkf.multisite(settings,
+                   obs.mean = obs.mean,
+                   obs.cov = obs.cov,
+                   control = SDA.arguments(debug = FALSE, TimeseriesPlot = FALSE),
+                   restart = FALSE
+                   )
+
+```
+Where the new `settings`, `obs.mean` and `obs.cov` contain the relevant information for the next phase.
+
+
+### State Data Assimilation Methods
+
+
+*By Ann Raiho*
+
+Our goal is to build a fully generalizable state data assimilation (SDA) workflow that will assimilate multiple types of data and data products into ecosystem models within PEcAn temporally and spatially. But, during development, specifically with PalEON goals in mind, we have been focusing on assimilating tree ring estimated NPP and AGB and pollen derived fractional composition into two ecosystem models, SIPNET and LINKAGES, at Harvard Forest. This methodology will soon be expanded to include the PalEON sites listed on the [state data assimilation wiki page](https://paleon.geography.wisc.edu/doku.php/working_groups;state_data_assimilation).
+
+#### Data Products
+During workflow development, we have been working with tree ring estimated NPP and AGB and pollen derived fractional composition data products. Both of these data products have been estimated with a full accounting of uncertainty, which provides us with a state variable observation mean vector and covariance matrix at each time step. These data products are discussed in more detail below. Even though we have been working with specific data products during development, our workflow is generalizable to alternative data products as long as we can calculate a state variable observation mean vector and covariance for a time point.
+
+#### Tree Rings
+We have primarily been working with the tree ring data product created by Andria Dawson and Chris Paciorek and the PEcAn tree ring allometry module. They have developed a Bayesian model that estimates annual aboveground biomass increment (Mg/ha/yr) and aboveground biomass (Mg/ha) for each tree in a dataset. We obtain these data and aggregate them to the level appropriate for the ecosystem model. In SIPNET, we are assimilating annual gross woody increment (Mg/ha/yr) and above ground woody biomass (Mg/ha). In LINKAGES, we are assimilating annual species biomass. More information on deriving these tree ring data products can be found in Dawson et al 201?.
+
+We have been working mostly with tree data collected at Harvard Forest. Tree rings and census data were collected at Lyford Plot between 1960 and 2010 in three separate plots. Other tree ring data will be added to this analysis in the future from past PEON courses (UNDERC), Kelly Heilman (Billy's Lake and Bigwoods), and Alex Dye (Huron Mt. Club).
+
+#### Pollen
+STEPPS is a Bayesian model developed by Paciorek and McLachlan 2009 and Dawson et al 2016 to estimate spatially gridded fractional composition from fossil pollen. We have been working with STEPPS1 output, specifically with the grid cell that contains Harvard Forest. The temporal resolution of this data product is centennial. Our workflow currently operates at annual time steps, but does not require data at every time step. So, it is possible to assimilate fractional composition every one hundred years or to assimilate fractional composition data every year by accounting for variance inflation.
+
+In the future, pollen derived biomass (ReFAB) will also be available for data assimilation, although we have not yet discussed how STEPPS and ReFAB data assimilation will work together.
+
+#### Variance Inflation
+*Side Note: Probably want to call this something else now.*
+
+Since the fractional composition data product has a centennial resolution, in order to use fractional composition information every year we need to change the weight the data has on the analysis. The basic idea is to downweight the likelihood relative to the prior to account for (a) the fact that we assimilate an observation multiple times and (b) the fact that the number of STEPPS observations is 'inflated' because of the autocorrelation. To do this, we take the likelihood and raise it to the power of (1/w) where 'w' is an inflation factor.
+
+w = D * (N / ESS)
+
+where D is the length of the time step (in our case D = 100), N is the number of time steps (in our case N = 11), and ESS is the effective sample size. The ESS is calculated with the following function, where ntimes is the same as N above and sims is a matrix with dimensions number of MCMC samples by number of state variables.
+
+```
+ESS_calc <- function(ntimes, sims){
+  # center based on mean at each time to remove baseline temporal correlation
+  # (we want to estimate effective sample size effect from correlation of the errors)
+  row.means.sims <- sims - rowMeans(sims)
+
+  # compute all pairwise covariances at different times
+  covars <- NULL
+  for(lag in 1:(ntimes-1)){
+    covars <- c(covars, rowMeans(row.means.sims[(lag+1):ntimes, , drop = FALSE] * row.means.sims[1:(ntimes-lag), , drop = FALSE]))
+  }
+  vars <- apply(row.means.sims, 1, var) # pointwise post variances at each time, might not be homoscedastic
+
+  # nominal sample size scaled by ratio of variance of an average
+  # under independence to variance of average of correlated values
+  neff <- ntimes * sum(vars) / (sum(vars) + 2 * sum(covars))
+  return(neff)
+  }
+```
+
+The ESS for the STEPPS1 data product is 3.6, so w in our assimilation of fractional composition at Harvard Forest will be w = 305.6.
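+
+A quick worked example of the inflation factor (a sketch; `sims` would be the MCMC sample matrix described above):
+
+```r
+D <- 100            # length of the data time step (years)
+N <- 11             # number of time steps
+ESS <- 3.6          # in practice: ESS_calc(ntimes = N, sims = sims)
+w <- D * (N / ESS)  # inflation factor
+w                   # ~305.6; the likelihood is raised to the power 1/w
+```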
+
+#### Current Models
+SIPNET and LINKAGES are the two ecosystem models that have been used during state data assimilation development within PEcAn. SIPNET is a simple ecosystem model that was built for... LINKAGES is a forest gap model created to simulate the process of succession that occurs when a gap is opened in the forest canopy. LINKAGES has 72 species level plant functional types and the ability to simulate some below ground processes (C and N cycles).
+
+#### Model Calibration
+Without model calibration both SIPNET and LINKAGES make incorrect predictions about Harvard Forest. To confront this problem, SIPNET and LINKAGES will both be calibrated using data collected at the Harvard Forest flux tower. Istem has completed calibration for SIPNET using a [parameter data assimilation emulator](https://github.com/PecanProject/pecan/blob/develop/modules/assim.batch/R/pda.emulator.R) contained within the PEcAn workflow. LINKAGES will also be calibrated using this method. This method is also generalizable to other sites assuming there is data independent of the data assimilation data available to calibrate against.
+
+#### Initial Conditions
+The initial conditions for SIPNET are sampled across state space based on data distributions at the time when the data assimilation will begin. We do not sample LINKAGES for initial conditions and instead perform model spin up for 100 years prior to beginning data assimilation. In the future, we would like to estimate initial conditions based on data. We achieve adequate spread in the initial conditions by allowing the parameters to vary across ensemble members.
+
+#### Drivers
+We are currently using Global Climate Model (GCM) drivers from the PaLEON model intercomparison. Christy Rollinson and John Tipton are creating MET downscaled GCM drivers for the Paleon data assimilation sites. We will use these drivers when they are available because they are a closer representation of reality.
+
+#### Sequential State Data Assimilation
+We are using sequential state data assimilation methods to assimilate paleon data products into ecosystem models because less computation power is required for sequential state data assimilation than for particle filter methods.
+
+#### General Description
+The general sequential data assimilation framework consists of three steps at each time step:
+1. Read the state variable output for time t from the model forecast ensembles and save the forecast mean (muf) and covariance (Pf).
+2. If there are data mean (y) and covariance (R) at this time step, perform the data assimilation analysis (either EnKF or generalized ensemble filter) to calculate the new mean (mua) and covariance (Pa) of the state variables.
+3. Use mua and Pa to restart and run the ecosystem model ensembles with new state variables for time t+1.
+
+#### EnKF
+There are two ways to implement sequential state data assimilation at this time. The first is the Ensemble Kalman Filter (EnKF). EnKF has an analytical solution, so the Kalman gain, analysis mean vector, and analysis covariance matrix can be calculated directly:
+
+```
+
+ K <- Pf %*% t(H) %*% solve((R + H %*% Pf %*% t(H))) ## Kalman Gain
+
+ mu.a <- mu.f + K %*% (Y - H %*% mu.f) # Analysis mean vector
+
+ Pa <- (diag(ncol(X)) - K %*% H) %*% Pf # Analysis covariance matrix
+
+```
+
+The EnKF is typically used for sequential state data assimilation, but we found that EnKF led to filter divergence when combined with our uncertain data products. Filter divergence led us to create a generalized ensemble filter that estimates process variance.
+
+#### Generalized Ensemble Filter
+The generalized ensemble filter generally follows the three steps of sequential state data assimilation. But, in the generalized ensemble filter we add a latent state vector that accounts for added process variance. Furthermore, instead of solving the analysis analytically like the EnKF, we have to estimate the mean analysis vector and covariance matrix with MCMC.
+
+#### Mapping Ensemble Output to Tobit Space
+There are some instances when we have right or left censored variables from the model forecast. For example, a model estimating species level biomass may have several ensemble members that produce zero biomass for a given species. We are considering this case a left censored state variable that needs to be mapped to normal space using a tobit model. We do this by creating two matrices with dimensions number of ensembles by state variable. The first matrix is a matrix of indicator variables (y.ind), and the second is a matrix of censored variables (y.censored). When the indicator variable is 0 the state variable (j) for ensemble member (i) is sampled. This allows us to impute a normal distribution for each state variable that contains 'missing' forecasts or forecasts of zero.
+
+```
+tobit2space.model <- nimbleCode({
+  for(i in 1:N){
+    y.censored[i,1:J] ~ dmnorm(muf[1:J], cov = pf[1:J,1:J])
+    for(j in 1:J){
+      y.ind[i,j] ~ dconstraint(y.censored[i,j] > 0)
+    }
+  }
+
+  muf[1:J] ~ dmnorm(mean = mu_0[1:J], cov = pf[1:J,1:J])
+
+  Sigma[1:J,1:J] <- lambda_0[1:J,1:J]/nu_0
+  pf[1:J,1:J] ~ dinvwish(S = Sigma[1:J,1:J], df = J)
+
+  })
+```
+
+
+#### Generalized Ensemble Filter Model Description
+Below is the BUGS code for the full analysis model. The forecast mean and covariance are calculated from the tobit2space model above. We use a tobit likelihood in this model because there are instances when the data may be left or right censored. Process variance is included by adding a latent model state (X) with a process precision matrix (q). We update our prior on q at each time step using our estimate of q from the previous time step.
+
+```
+ tobit.model <- nimbleCode({
+
+   q[1:N,1:N] ~ dwish(R = aq[1:N,1:N], df = bq) ## aq and bq are estimated over time
+   Q[1:N,1:N] <- inverse(q[1:N,1:N])
+   X.mod[1:N] ~ dmnorm(muf[1:N], prec = pf[1:N,1:N]) ## Model Forecast ##muf and pf are assigned from ensembles
+
+   ## add process error
+   X[1:N] ~ dmnorm(X.mod[1:N], prec = q[1:N,1:N])
+
+   #agb linear
+   #y_star[1:YN,1:YN] <- X[1:YN,1:YN] #[choose]
+
+   #f.comp non linear
+   #y_star[1:YN] <- X[1:YN] / sum(X[1:YN])
+
+   ## Analysis
+   y.censored[1:YN] ~ dmnorm(X[1:YN], prec = r[1:YN,1:YN]) #is it an okay assumption to just have X and Y in the same order?
+
+   #don't flag y.censored as data, y.censored in inits
+   #remove y.censored samplers and only assign univariate samplers on NAs
+
+   for(i in 1:YN){
+     y.ind[i] ~ dconstraint(y.censored[i] > 0)
+   }
+
+ })
+```
+
+#### Ensemble Adjustment
+Each ensemble member has a different set of species parameters. We adjust the updated state variables by using an ensemble adjustment. The ensemble adjustment weights the ensemble members based on their likelihood during the analysis step.
+
+```
+ S_f <- svd(Pf)
+ L_f <- S_f$d
+ V_f <- S_f$v
+
+ ## normalize
+ Z <- X*0
+ for(i in seq_len(nrow(X))){
+   Z[i,] <- 1/sqrt(L_f) * t(V_f)%*%(X[i,]-mu.f)
+ }
+ Z[is.na(Z)]<-0
+
+ ## analysis
+ S_a <- svd(Pa)
+ L_a <- S_a$d
+ V_a <- S_a$v
+
+ ## analysis ensemble
+ X_a <- X*0
+ for(i in seq_len(nrow(X))){
+   X_a[i,] <- V_a %*%diag(sqrt(L_a))%*%Z[i,] + mu.a
+ }
+```
+
+#### Diagnostics
+There are three diagnostics we have currently implemented: time series, bias time series, and process variance. The time series diagnostics show the data, forecast, and analysis time series for each state variable. These are useful for visually assessing the variance and magnitude of change of state variables through time. These time series are also updated throughout the analysis and are also created as a pdf at the end of the SDA workflow. There are two types of bias time series: the first assesses the bias in the update (the forecast minus the analysis) and the second assesses the bias in the error (the forecast minus the data). These bias time series are useful for identifying which state variables have intrinsic bias within the model. For example, if red oak biomass in LINKAGES increases at every time step (the update and the error are always positive), this would suggest that LINKAGES has a positive growth or recruitment bias for red oak. Finally, when using the generalized ensemble filter to estimate process variance, there are two additional plots to assess estimation of process variance. The first is a correlation plot of the process covariance matrix.
This tells us what correlations are incorrectly represented by the model. For example, if red oak biomass and white pine biomass are highly negatively correlated in the process covariance matrix, this means that the model either 1) has no relationship between red oak and white pine when they should affect each other negatively, or 2) has a positive relationship between red oak and white pine when there shouldn't be any relationship. We can determine which of these is true by comparing the process covariance matrix to the model covariance matrix. The second process variance diagnostic plot shows how the degrees of freedom associated with estimating the process covariance matrix have changed through time. This plot should show increasing degrees of freedom through time.
+
+
+### MultiSettings{#multisettings}
+
+(TODO: Under construction...)
+
+### Benchmarking {#benchmarking}
+
+Benchmarking is the process of comparing model outputs against either experimental data or against other model outputs as a way to validate model performance.
+We have a suite of statistical comparisons that provide benchmarking scores, as well as visual comparisons that help in diagnosing data-model and/or model-model differences.
+
+#### Data Preparation
+
+All data that you want to compare with model runs must be registered in the database.
+This is currently a step that must be done by hand either from the command line or through the online BETY interface.
+The data must have three records:
+
+1. An input record (Instructions [here](#NewInput))
+
+2. A database file record (Instructions [here](#NewInput))
+
+3. A format record (Instructions [here](#NewFormat))
+
+#### Model Runs
+
+Model runs can be set up and executed
+- Using the PEcAn web interface online or with a VM ([see setup](#GettingStarted))
+- By hand using the [pecan.xml](#pecanXML)
+
+#### The Benchmarking Shiny App
+
+The entire benchmarking process can be done through the Benchmarking R Shiny app.
+
+When the model run has completed, navigate to the workflow visualization Shiny app.
+
+- Load model data
+  - Select the workflow and run id
+  - Make sure that your model output is loading properly (i.e. you can see plots of your data)
+- Load benchmarking data
+  - Again make sure that you can see the uploaded data plotted alongside the model output. In the future there will be more tools for double checking that your uploaded data is appropriate for benchmarking, but for now you may need to do the sanity checks by hand.
+
+#### Create a reference run record
+
+- Navigate to the Benchmarking tab
+
+  The first step is to register the new model run as a reference run in the database. Benchmarking cannot be done before this step is completed. When the reference run record has been created, additional menus for benchmarking will appear.
+
+#### Setup Benchmarks and metrics
+
+- From the menus select
+  - The variables in the uploaded data that you wish to compare with model output.
+  - The numerical metrics you would like to use in your comparison.
+  - Additional comparison plots that you would like to see.
+- Note: All these selections populate the benchmarking section of the `pecan.BENCH.xml` which is then saved in the same location as the original run output. This xml is purely for reference.
+
+##### Benchmarking Output
+
+- All benchmarking results are stored in the benchmarking directory, which is created in the same folder as the original model run.
+- The benchmarking directory contains subdirectories for each of the datasets compared with the model output. The names of these directories are the same as the corresponding data set's input id in BETY.
+- Each input directory contains `benchmarking.output.Rdata`, an Rdata file containing all the results of the benchmarking workflow (see the snippet after this list). `load(benchmarking.output.Rdata)` loads a list called `result.out` which contains the following:
+  - `bench.results`: a data frame of all numeric benchmarking scores
+  - `format`: a data frame that can be used to see how the input data was transformed to make it comparable to the model output. This involves converting from the original variable names and units to the internal pecan standard.
+  - `aligned.dat`: a data frame of the final aligned model and input values.
+- All plots are saved as pdf files with names of the form "benchmark_plot-type_variable_input-id.pdf".
+
+- To view interactive results, navigate to the Benchmarking Plots tab in the shiny app.
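+
+A minimal sketch of inspecting these stored results in R (the input id `1000013743` is taken from the example further below; the path components are illustrative):
+
+```r
+load("benchmarking/1000013743/benchmarking.output.Rdata")
+result.out$bench.results       # data frame of numeric benchmarking scores
+head(result.out$aligned.dat)   # aligned model and input values
+```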
+
+#### Benchmarking in pecan.xml
+
+Before reading this section, it is recommended that you [familiarize yourself with the basics of the pecan.xml file.](#pecanXML)
+
+The `pecan.xml` has an _optional_ benchmarking section. Below are all the tags in the benchmarking section explained. Many of these fields are filled in automatically during the benchmarking process when using the benchmarking shiny app.
+
+The only time one should edit the benchmarking section by hand is for performing clone runs. See [clone run documentation.](#CloneRun)
+
+`` settings:
+
+- `ensemble_id`: the id of the ensemble that you will be using - the settings from this ensemble will be saved in a reference run record and then `ensemble_id` will be replaced with `reference_run_id`
+- `new_run`: TRUE = create new run, FALSE = use existing run (required, default FALSE)
+
+It is possible to look at more than one benchmark with a particular run.
+The specific settings related to each benchmark are in a subsection called `benchmark`
+
+- `input_id`: the id of the benchmarking data (required)
+- `variable_id`: the id of the variable of interest within the data. If you leave this blank, all variables that are shared between the input and model output will be used.
+- `metric_id`: the id(s) of the metric(s) to be calculated. If you leave this blank, all metrics will be used.
+
+Example:
+In this example,
+- we are using a pre-existing run from `ensemble_id = 1000010983` (`new_run = FALSE`)
+- the output will be compared to data from `input_id = 1000013743`, specifically two variables of interest: `variable_id = 411, variable_id = 18`
+- for `variable_id = 411` we will perform only one metric of comparison `metric_id = 1000000001`
+- for `variable_id = 18` we will perform two metrics of comparison `metric_id = 1000000001, metric_id = 1000000002`
+
+```xml
+
+ 1000010983
+ FALSE
+
+ 1000013743
+ 411
+ 853
+
+ 1000000001
+
+
+
+ 1000013743
+ 18
+ 853
+
+ 1000000001
+ 1000000002
+
+
+
+```

From 4738fa737d6d570c9059781ab2cfed0fc1a04e30 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 3 Aug 2020 18:47:29 +0530
Subject: [PATCH 1326/2289] Revert "correct reference to Remote.Sync.launcher"

This reverts commit e28a9c57cbc9a4ea6913fa3d1242937c0e15f31b.
--- .../04_more_web_interface/02_hidden_analyses.Rmd | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd index a553c0b5deb..0fa96db4c8b 100644 --- a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd +++ b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd @@ -479,11 +479,11 @@ An example of multi-settings pecan xml file also may look like below: ``` ### Running SDA on remote -In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote.Sync.launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote.Sync.launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. +In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote_Sync_launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote_Sync_launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. -`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote.Sync.launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. +`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote_Sync_launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. -`Additionally, the Remote.Sync.launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. +`Additionally, the Remote_Sync_launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. 
This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. Several points on how to prepare your xml settings for the remote SDA run: 1 - In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags. From bf0acb1d3bbc397401ee6f79368e2081ea1c1606 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 3 Aug 2020 18:50:58 +0530 Subject: [PATCH 1327/2289] remove __name__ == __main__ --- modules/data.remote/inst/remote_process.py | 60 ++++++++++++---------- 1 file changed, 33 insertions(+), 27 deletions(-) diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/remote_process.py index f6a2a36d676..e63343106cb 100644 --- a/modules/data.remote/inst/remote_process.py +++ b/modules/data.remote/inst/remote_process.py @@ -36,7 +36,6 @@ def remote_process( pro_merge=None, existing_raw_file_path=None, existing_pro_file_path=None, - ): """ @@ -80,44 +79,51 @@ def remote_process( aoi_name = get_sitename(geofile) get_datareturn_path = 78 - if stage_get_data: get_datareturn_path = get_remote_data( - geofile, outdir, start, end, source, collection, scale, projection, qc, credfile, raw_merge, existing_raw_file_path - ) + geofile, + outdir, + start, + end, + source, + collection, + scale, + projection, + qc, + credfile, + raw_merge, + existing_raw_file_path, + ) get_datareturn_name = os.path.split(get_datareturn_path) if stage_process_data: if input_file is None: input_file = get_datareturn_path - process_datareturn_path = process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge, existing_pro_file_path) + process_datareturn_path = process_remote_data( + aoi_name, + out_get_data, + out_process_data, + outdir, + algorithm, + input_file, + pro_merge, + existing_pro_file_path, + ) process_datareturn_name = os.path.split(process_datareturn_path) - output = {"raw_data_name": None, "raw_data_path": None, "process_data_name": None, "process_data_path": None} + output = { + "raw_data_name": None, + "raw_data_path": None, + "process_data_name": None, + "process_data_path": None, + } if stage_get_data: - output['raw_data_name'] = get_datareturn_name[1] - output['raw_data_path'] = get_datareturn_path + output["raw_data_name"] = get_datareturn_name[1] + output["raw_data_path"] = get_datareturn_path if stage_process_data: - output['process_data_name'] = process_datareturn_name[1] - output['process_data_path'] = process_datareturn_path + output["process_data_name"] = process_datareturn_name[1] + output["process_data_path"] = process_datareturn_path return output - - -""" -if __name__ == "__main__": - remote_process( - geofile="/home/carya/pecan/modules/data.remote/inst/satellitetools/test.geojson", - outdir="/home/carya/pecan/modules/data.remote/inst/out", - start="2018-01-01", - end="2018-12-31", - source="gee", - collection="LANDSAT/LC08/C01/T1_SR", - scale=30, - output={"get_data": "bands", "process_data": "lai"}, - stage={"get_data": True, "process_data": True}, - ) -""" - From d8e3e462288456bb4a9b57d06d1967668e408974 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Mon, 3 Aug 2020 15:37:56 +0000 Subject: [PATCH 1328/2289] added pecan-dev server to swagger docs --- apps/api/pecanapi-spec.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index dd8a94f3169..763d6651d16 100644 --- 
a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,7 +1,7 @@ openapi: 3.0.0 servers: - description: PEcAn API Server - url: https://pecan-tezan.ncsa.illinois.edu/ + url: https://pecan-dev.ncsa.illinois.edu/ - description: PEcAn Test Server url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/3df13f71 - description: Localhost From d54f257a49d20d3f6f75ee106742cdaaa6468157 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Mon, 3 Aug 2020 16:14:08 +0000 Subject: [PATCH 1329/2289] tried to fix CORS error --- apps/api/R/auth.R | 3 +++ apps/api/pecanapi-spec.yml | 6 ++++-- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R index 337e7f4aa33..b925fef7b6f 100644 --- a/apps/api/R/auth.R +++ b/apps/api/R/auth.R @@ -51,6 +51,9 @@ validate_crypt_pass <- function(username, crypt_pass) { #* @return Appropriate response #* @author Tezan Sahu authenticate_user <- function(req, res) { + # Fix CORS issues + res$setHeader("Access-Control-Allow-Origin", "*") + # If the API endpoint that do not require authentication if ( Sys.getenv("AUTH_REQ") == FALSE || diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 763d6651d16..fd055f20d6e 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1,9 +1,11 @@ openapi: 3.0.0 servers: - description: PEcAn API Server - url: https://pecan-dev.ncsa.illinois.edu/ - - description: PEcAn Test Server + url: https://pecan-dev.ncsa.illinois.edu + - description: PEcAn Development Server url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/3df13f71 + - description: PEcAn API Test Server + url: https://pecan-tezan.ncsa.illinois.edu - description: Localhost url: http://127.0.0.1:8000 From a40ab7bb046b5f12359dc1ba0401a0570c56fb71 Mon Sep 17 00:00:00 2001 From: mccabete Date: Mon, 3 Aug 2020 14:22:05 -0400 Subject: [PATCH 1330/2289] stop inserting Co2 & delete ed_metheader co2 check --- models/ed/R/check_ed_metheader.R | 7 ++----- models/ed/R/met2model.ED2.R | 10 ++++++++-- 2 files changed, 10 insertions(+), 7 deletions(-) diff --git a/models/ed/R/check_ed_metheader.R b/models/ed/R/check_ed_metheader.R index a5db9d94423..e68a76a3788 100644 --- a/models/ed/R/check_ed_metheader.R +++ b/models/ed/R/check_ed_metheader.R @@ -16,13 +16,10 @@ check_ed_metheader <- function(ed_metheader, check_files = TRUE) { testthat::test_that( "ED met header object is a nested list", { - names_required <- names(ed_metheader[[1]]) - names_required <- names_required[ names_required != "co2"] ## "co2" provided optionally for ED2 - - testthat::expect_true(!is.null(names_required)) + testthat::expect_true(!is.null(names(ed_metheader[[1]]))) } ) - .z <- lapply(names_required, check_ed_metheader_format, check_files = check_files) + .z <- lapply(ed_metheader, check_ed_metheader_format, check_files = check_files) invisible(TRUE) } diff --git a/models/ed/R/met2model.ED2.R b/models/ed/R/met2model.ED2.R index c867c1684a3..9088f86d2ce 100644 --- a/models/ed/R/met2model.ED2.R +++ b/models/ed/R/met2model.ED2.R @@ -346,6 +346,12 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l # c("update_frequency", "flag")] <- list(380, 4) # } + if (!useCO2) { + metvar_table_vars <- metvar_table[metvar_table$variable != "co2",] ## CO2 optional in ED2 + }else{ + metvar_table_vars <- metvar_table + } + ed_metheader <- list(list( path_prefix = met_folder, nlon = 1, @@ -354,9 +360,9 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l dy = 1, xmin = lon, ymin = lat, - 
variables = metvar_table
+        variables = metvar_table_vars
       ))
-
+
   check_ed_metheader(ed_metheader)
   write_ed_metheader(ed_metheader, met_header_file, header_line = shQuote("Made_by_PEcAn_met2model.ED2"))

From ee66a352fdf2ca3dd25cb7ecd0db546d0bd19c49 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Tue, 4 Aug 2020 11:13:55 +0530
Subject: [PATCH 1331/2289] add module behaviour to book

---
 .../03_topical_pages/09_standalone_tools.Rmd | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 8ce9aab0fcc..d0a42cbaacf 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -177,13 +177,26 @@ earthengine authenticate
 5. **Save the Earthdata credentials (optional)**. If you do not wish to enter your credentials every time you use AppEEARS, you may save your username and password inside a JSON file and then pass its file path as an argument in `remote_process`
 
 #### Usage guide:
-This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. These functions can be used independently but the recommended way is to use `remote_process()` which is a main function that controls all the individual functions to create an organized way of downloading and handling remote sensing data in PEcAn.
+This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. These functions can be used independently, but the recommended way is to use `call_remote_process()`, which is the main function that controls all the individual functions to create an organized way of downloading and handling remote sensing data in PEcAn.
+Whenever a data product is requested, the output files are stored in the inputs table of BETYdb. Subsequently, when the same product is requested again with a different date range but with the same qc, scale and projection, the previous file in the DB would be extended. The DB would always contain only one file of the same type.
+As an example, if a file `Reykajvik_gee_s2_10.0_NA_1.0_200802174519.nc` containing Sentinel 2 bands for start date: 2018-01-01, end date: 2018-06-30 exists in the DB and the same product is requested again for a different date range, one of the following cases would happen:
+
+1. New dates are ahead of the existing file: For example, if the requested dates are start: 2018-10-01, end: 2018-12-31, the previous file will be extended forward, meaning the effective start date of the file to be downloaded would be the day after the end date of the previous file record, i.e. 2018-07-01. The new and the previous file would be merged and the DB would now have data for 2018-01-01 to 2018-12-31.
+
+2. New dates precede the existing file: For example, if the requested dates are start: 2017-01-01, end: 2017-06-30, the effective end date of the new download would be the day before the start date of the existing file, i.e., 2017-12-31.
The new and the previous file would be merged and the file in the DB would then contain data for 2017-01-01 to 2018-06-30.
+
+3. New dates contain the date range of the existing file: For example, if the requested dates are start: 2016-01-01, end: 2019-06-30, the existing file would be replaced entirely with the new file. A more efficient way of doing this could be to divide your request into two parts, i.e., first request 2016-01-01 to 2018-01-01 and then 2018-06-30 to 2019-06-30. (A minimal sketch of this windowing logic is given in the code block below.)
+
+When a processed data product such as SNAP-LAI is requested, the raw product (here Sentinel 2 bands) used to create it would also be stored in the DB. If the raw product required for creating the processed product already exists for the requested time period, the processed product would be created for the entire time period of the raw file. For example, if Sentinel 2 bands are present in the DB for 2017-01-01 to 2017-12-31 and SNAP-LAI is requested for 2017-03-01 to 2017-07-31, the output file would contain LAI for 2017-01-01 to 2017-12-31.
 
 **Input data**: The input data must be the coordinates of the area of interest (point or polygon type) and the name of the site/AOI. This information must be provided in a GeoJSON file.
 
 **Output data**: The output data returned by GEE are in the form of netCDF files. When using AppEEARS, output is in the form of netCDF files if the AOI type is a polygon and in the form of csv files if the AOI type is a point.
+The output files are created with the following naming convention: sitename_source_collection_scale_projection_qc_TimeStampOfFileCreation
+If scale, projection or qc is not applicable for the requested product, "NA" is used in its place.
+
 **scale**: Some of the GEE functions require a pixel resolution argument.
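The windowing arithmetic behind cases 1-3 above can be sketched in a few lines of R. This is an illustrative toy, not the module's actual implementation; the helper name `effective_window` and its interface are invented for this example:

```R
# Hypothetical helper mirroring the three cases above: given the requested
# window and the window already stored in the DB, return the dates that
# actually need to be downloaded.
effective_window <- function(req_start, req_end, db_start, db_end) {
  req_start <- as.Date(req_start)
  req_end   <- as.Date(req_end)
  db_start  <- as.Date(db_start)
  db_end    <- as.Date(db_end)
  if (req_start > db_end) {
    # case 1: requested window lies after the stored file -> extend forward,
    # starting the day after the existing end date
    list(start = db_end + 1, end = req_end)
  } else if (req_end < db_start) {
    # case 2: requested window lies before the stored file -> extend backward,
    # ending the day before the existing start date
    list(start = req_start, end = db_start - 1)
  } else {
    # case 3: requested window overlaps the stored file -> download the full
    # requested range and replace the existing file
    list(start = req_start, end = req_end)
  }
}

# Case 1 from the text: stored 2018-01-01..2018-06-30, requested 2018-10-01..2018-12-31
effective_window("2018-10-01", "2018-12-31", "2018-01-01", "2018-06-30")
# -> start 2018-07-01, end 2018-12-31; after merging, the DB file covers
#    2018-01-01 to 2018-12-31
```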
Information about how GEE handles scale can be found out [here](https://developers.google.com/earth-engine/scale) #### Example use (GEE) From 873d9648ddaddd1e030f01d9a694d2a3c3a94f38 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 4 Aug 2020 14:54:31 +0530 Subject: [PATCH 1332/2289] assign reticulate func to a var --- modules/data.remote/R/call_remote_process.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R index e963fb13c71..4844d7bd3a6 100644 --- a/modules/data.remote/R/call_remote_process.R +++ b/modules/data.remote/R/call_remote_process.R @@ -19,8 +19,8 @@ call_remote_process <- function(settings){ # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB # the "pro" version of these variables have the same meaning and are used to refer to the processed file - reticulate::import_from_path("remote_process", file.path("..", "inst")) - reticulate::source_python(file.path("..", "inst", "remote_process.py")) + remote_process <- reticulate::import_from_path("remote_process", file.path("..", "inst")) + remote_process <- reticulate::source_python(file.path("..", "inst", "remote_process.py")) input_file <- NULL stage_get_data <- NULL From 64f3e070a58a93aa67b79b85721691ba35c22d58 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 4 Aug 2020 09:56:59 +0000 Subject: [PATCH 1333/2289] restructured get workflow/{id} response; added endpoint to download workflow file --- apps/api/R/get.file.R | 24 ++++++------ apps/api/R/workflows.R | 77 +++++++++++++++++++++++++++++++------- apps/api/pecanapi-spec.yml | 66 ++++++++++++++++++++++++++++++-- 3 files changed, 137 insertions(+), 30 deletions(-) diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R index b227c19b19b..7fb1e5d88cb 100644 --- a/apps/api/R/get.file.R +++ b/apps/api/R/get.file.R @@ -10,25 +10,23 @@ get.file <- function(filepath, userid) { parent_dir <- normalizePath(dirname(filepath)) run_id <- substr(parent_dir, stringi::stri_locate_last(parent_dir, regex="/")[1] + 1, stringr::str_length(parent_dir)) - dbcon <- PEcAn.DB::betyConnect() - - Run <- tbl(dbcon, "runs") %>% - filter(id == !!run_id) - Run <- tbl(dbcon, "ensembles") %>% - select(ensemble_id=id, workflow_id) %>% - full_join(Run, by="ensemble_id") %>% - filter(id == !!run_id) if(Sys.getenv("AUTH_REQ") == TRUE) { + dbcon <- PEcAn.DB::betyConnect() + + Run <- tbl(dbcon, "runs") %>% + filter(id == !!run_id) + Run <- tbl(dbcon, "ensembles") %>% + select(ensemble_id=id, workflow_id) %>% + full_join(Run, by="ensemble_id") %>% + filter(id == !!run_id) user_id <- tbl(dbcon, "workflows") %>% select(workflow_id=id, user_id) %>% full_join(Run, by="workflow_id") %>% filter(id == !!run_id) %>% pull(user_id) - } - - PEcAn.DB::db.close(dbcon) - - if(Sys.getenv("AUTH_REQ") == TRUE) { + + PEcAn.DB::db.close(dbcon) + if(! 
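    # ownership check: the file is served only when the workflow that
    # produced this run belongs to the authenticated user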
user_id == userid) { return(list(status = "Error", message = "Access forbidden")) } diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 436d475fb8f..8a265a519d0 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -18,11 +18,7 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r dbcon <- PEcAn.DB::betyConnect() Workflow <- tbl(dbcon, "workflows") %>% - select(id, model_id, site_id) - - Workflow <- tbl(dbcon, "attributes") %>% - select(id = container_id, properties = value) %>% - full_join(Workflow, by = "id") + select(-created_at, -updated_at, -params, -advanced_edit, -notes) if (!is.null(model_id)) { Workflow <- Workflow %>% @@ -34,10 +30,7 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r filter(site_id == !!site_id) } - qry_res <- Workflow %>% - select(-model_id, -site_id) %>% - arrange(id) %>% - collect() + qry_res <- Workflow %>% collect() PEcAn.DB::db.close(dbcon) @@ -57,8 +50,8 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ] - qry_res$properties[is.na(qry_res$properties)] = "{}" - qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) + # qry_res$properties[is.na(qry_res$properties)] = "{}" + # qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) result <- list(workflows = qry_res) result$count <- nrow(qry_res) if(has_next){ @@ -128,7 +121,7 @@ getWorkflowDetails <- function(id, req, res){ dbcon <- PEcAn.DB::betyConnect() Workflow <- tbl(dbcon, "workflows") %>% - select(id, model_id, site_id) + select(id, model_id, site_id, folder, hostname, user_id) Workflow <- tbl(dbcon, "attributes") %>% select(id = container_id, properties = value) %>% @@ -145,10 +138,29 @@ getWorkflowDetails <- function(id, req, res){ } else { if(is.na(qry_res$properties)){ - res <- list(id = id, properties = list(modelid = qry_res$model_id, siteid = qry_res$site_id)) + res <- list( + id = id, + folder=qry_res$folder, + hostname=qry_res$hostname, + user_id=qry_res$user_id, + properties = list(modelid = qry_res$model_id, siteid = qry_res$site_id) + ) } else{ - res <- list(id = id, properties = jsonlite::parse_json(qry_res$properties[[1]])) + res <- list( + id = id, + folder=qry_res$folder, + hostname=qry_res$hostname, + user_id=qry_res$user_id, + properties = jsonlite::parse_json(qry_res$properties[[1]]) + ) + } + + # Add the files for the workflow if they exist on disk + filesdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", id) + if(dir.exists(filesdir)){ + all_files <- list.files(filesdir) + res$files <- all_files[!all_files %in% c("out", "rabbitmq.out", "pft", "run", "STATUS")] } return(res) @@ -190,4 +202,41 @@ getWorkflowStatus <- function(req, id, res){ wf_status <- stringr::str_replace_all(wf_status, "\t", " ") return(list(workflow_id=id, status=wf_status)) } +} + +################################################################################################# + +#' Get a specified file of the workflow specified by the id +#' @param id Workflow id (character) +#' @return Details of requested workflow +#' @author Tezan Sahu +#* @serializer contentType list(type="application/octet-stream") +#* @get //file/ +getWorkflowFile <- function(req, id, filename, res){ + dbcon <- PEcAn.DB::betyConnect() + + Workflow <- tbl(dbcon, "workflows") %>% + select(id, user_id) %>% + filter(id == !!id) + + qry_res <- Workflow %>% 
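    # fetch the workflow row (id, user_id); its user_id is used below
    # for the access check before the file is read from disk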
collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return() + } + else { + # Check if the requested file exists on the host + filepath <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", id, "/", filename) + if(! file.exists(filepath)){ + res$status <- 404 + return() + } + + # Read the data in binary form & return it + bin <- readBin(filepath,'raw', n = file.info(filepath)$size) + return(bin) + } } \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index fd055f20d6e..876a2f24267 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -3,7 +3,7 @@ servers: - description: PEcAn API Server url: https://pecan-dev.ncsa.illinois.edu - description: PEcAn Development Server - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/3df13f71 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/3a3fd41c - description: PEcAn API Test Server url: https://pecan-tezan.ncsa.illinois.edu - description: Localhost @@ -409,7 +409,28 @@ paths: workflows: type: array items: - $ref: '#/components/schemas/Workflow_GET' + type: object + properties: + id: + type: string + folder: + type: string + started_at: + type: string + finished_at: + type: string + site_id: + type: integer + model_id: + type: integer + hostname: + type: string + start_date: + type: string + end_date: + type: string + user_id: + type: integer count: type: integer next_page: @@ -504,7 +525,36 @@ paths: '404': description: Workflow with specified ID was not found - + /api/workflows/{id}/file/{filename}: + get: + tags: + - workflows + summary: Download a file from a specified PEcAn workflow + parameters: + - in: path + name: id + description: ID of the PEcAn Workflow + required: true + schema: + type: string + - in: path + name: filename + description: Name of file desired + required: true + schema: + type: string + responses: + '200': + description: Contents of the output file + content: + application/octet-stream: + schema: + type: string + + '401': + description: Authentication required + '403': + description: Access forbidden /api/runs/: get: @@ -847,6 +897,12 @@ components: properties: id: type: string + folder: + type: string + hostname: + type: string + user_id: + type: string "properties": type: object properties: @@ -887,6 +943,10 @@ components: type: string input_poolinitcond: type: string + files: + type: array + items: + type: string Workflow_POST: type: object From 6f4de865e92e5cfc576745bb4bb29a7920fc3fce Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 4 Aug 2020 11:52:21 -0400 Subject: [PATCH 1334/2289] changes addressing PR comments --- base/db/R/dbfiles.R | 6 +++++- modules/data.land/R/extract_soil_nc.R | 29 +-------------------------- modules/data.land/R/soil_process.R | 22 ++++++++++++++++++-- 3 files changed, 26 insertions(+), 31 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 05f7bfd9c10..504fe915f1d 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -122,7 +122,11 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, "(site_id, format_id, start_date, end_date, name) VALUES (", siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, "') RETURNING id") - } else { + }else if(is.null(startdate)){ + cmd <- paste0("INSERT INTO inputs ", + "(site_id, format_id, name, parent_id) VALUES (", + siteid, ", ", formatid, ", '", name, "',", parentid, ") RETURNING id") + }else { cmd <- paste0("INSERT INTO inputs ", "(site_id, format_id, 
start_date, end_date, name, parent_id) VALUES (", siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, "',", parentid, ") RETURNING id") diff --git a/modules/data.land/R/extract_soil_nc.R b/modules/data.land/R/extract_soil_nc.R index 6b3a0666080..dff30b31335 100644 --- a/modules/data.land/R/extract_soil_nc.R +++ b/modules/data.land/R/extract_soil_nc.R @@ -19,7 +19,7 @@ #' } #' @author Hamze Dokoohaki #' -extract_soil_gssurgo<-function(outdir, lat, lon, settings, size=1, radius=500, depths=c(0.15,0.30,0.60)){ +extract_soil_gssurgo<-function(outdir, lat, lon, size=1, radius=500, depths=c(0.15,0.30,0.60)){ # I keep all the ensembles here all.soil.ens <-list() @@ -237,33 +237,6 @@ extract_soil_gssurgo<-function(outdir, lat, lon, settings, size=1, radius=500, d out.ense<-out.ense%>% setNames(rep("path", length(out.ense))) - #connect to db - # Extract info from settings and setup - site <- settings$run$site - dbparms <- settings$database - - bety <- dplyr::src_postgres(dbname = dbparms$bety$dbname, - host = dbparms$bety$host, - user = dbparms$bety$user, - password = dbparms$bety$password) - con <- bety$con - # register files in DB - for(i in 1:length(out.ense)){ - in.path = paste0(dirname(out.ense[i]$path), '/') - in.prefix = stringr::str_remove(basename(out.ense[i]$path), ".nc") - - - dbfile.input.insert (in.path, - in.prefix, - site$id, - startdate = NULL, - enddate = NULL, - mimetype = "application/x-netcdf", - formatname = "gSSURGO Soil", - con = con, - ens=TRUE) - } - return(out.ense) } diff --git a/modules/data.land/R/soil_process.R b/modules/data.land/R/soil_process.R index 7f781723552..0af1baad509 100644 --- a/modules/data.land/R/soil_process.R +++ b/modules/data.land/R/soil_process.R @@ -55,8 +55,26 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T if(length(newfile)==0){ radiusL <- ifelse(is.null(settings$run$input$soil$radius), 500, as.numeric(settings$run$input$soil$radius)) - newfile<-extract_soil_gssurgo(outfolder, lat = latlon$lat, lon=latlon$lon, settings, radius = radiusL) - } + newfile<-extract_soil_gssurgo(outfolder, lat = latlon$lat, lon=latlon$lon, radius = radiusL) + + # register files in DB + for(i in 1:length(newfile)){ + in.path = paste0(dirname(newfile[i]$path), '/') + in.prefix = stringr::str_remove(basename(newfile[i]$path), ".nc") + + dbfile.input.insert (in.path, + in.prefix, + new.site$id, + startdate = NULL, + enddate = NULL, + mimetype = "application/x-netcdf", + formatname = "gSSURGO Soil", + con = con, + ens=TRUE) + } + + + } return(newfile) } From 09c50b3b6a4cd88da584f2aee707c611cfb8bec3 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 4 Aug 2020 13:19:23 -0400 Subject: [PATCH 1335/2289] adding .Rd file --- modules/data.land/man/extract_soil_gssurgo.Rd | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.land/man/extract_soil_gssurgo.Rd b/modules/data.land/man/extract_soil_gssurgo.Rd index 1e97e964124..d8231132824 100644 --- a/modules/data.land/man/extract_soil_gssurgo.Rd +++ b/modules/data.land/man/extract_soil_gssurgo.Rd @@ -8,7 +8,6 @@ extract_soil_gssurgo( outdir, lat, lon, - settings, size = 1, radius = 500, depths = c(0.15, 0.3, 0.6) From ef89f4a9be1285d9467d96f8ee899b7c85dd5bd3 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 4 Aug 2020 14:17:58 -0400 Subject: [PATCH 1336/2289] updating Rd files --- base/db/man/insert.format.vars.Rd | 2 +- base/db/man/take.samples.Rd | 2 +- base/logger/man/print2string.Rd | 4 ++-- modules/rtm/man/invert_bt.Rd | 2 +- modules/rtm/man/params2edr.Rd | 
2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/base/db/man/insert.format.vars.Rd b/base/db/man/insert.format.vars.Rd index cd6b463dc4c..b0902645918 100644 --- a/base/db/man/insert.format.vars.Rd +++ b/base/db/man/insert.format.vars.Rd @@ -55,7 +55,7 @@ formats_variables_tibble <- tibble::tibble( variable_id = c(411, 135, 382), name = c("NPP", NA, "YEAR"), unit = c("g C m-2 yr-1", NA, NA), - storage_type = c(NA, NA, "%Y"), + storage_type = c(NA, NA, "\%Y"), column_number = c(2, NA, 4)) insert.format.vars( diff --git a/base/db/man/take.samples.Rd b/base/db/man/take.samples.Rd index f834d60df80..0c8f693365f 100644 --- a/base/db/man/take.samples.Rd +++ b/base/db/man/take.samples.Rd @@ -20,7 +20,7 @@ sample from normal distribution, given summary stats \examples{ ## return the mean when stat = NA take.samples(summary = data.frame(mean = 10, stat = NA)) -## return vector of length \\code{sample.size} from N(mean,stat) +## return vector of length \code{sample.size} from N(mean,stat) take.samples(summary = data.frame(mean = 10, stat = 10), sample.size = 10) } diff --git a/base/logger/man/print2string.Rd b/base/logger/man/print2string.Rd index a6cd87c17b7..5c1b76b3283 100644 --- a/base/logger/man/print2string.Rd +++ b/base/logger/man/print2string.Rd @@ -23,9 +23,9 @@ functions, you should always add the \code{wrap = FALSE} argument, and probably add a newline (\code{"\\n"}) before the output of this function. } \examples{ -logger.info("First few rows of Iris:\\n", print2string(iris[1:10, -5]), wrap = FALSE) +logger.info("First few rows of Iris:\n", print2string(iris[1:10, -5]), wrap = FALSE) df <- data.frame(test = c("download", "process", "plot"), status = c(TRUE, TRUE, FALSE)) -logger.debug("Current status:\\n", print2string(df, row.names = FALSE), wrap = FALSE) +logger.debug("Current status:\n", print2string(df, row.names = FALSE), wrap = FALSE) } \author{ Alexey Shiklomanov diff --git a/modules/rtm/man/invert_bt.Rd b/modules/rtm/man/invert_bt.Rd index 32a2ad229b3..89f2d1e8e0d 100644 --- a/modules/rtm/man/invert_bt.Rd +++ b/modules/rtm/man/invert_bt.Rd @@ -34,7 +34,7 @@ specified by the user; see Details). This is most common for the initial number of iterations, which is the minimum expected for convergence. \item \code{loop} -- BayesianTools settings for iterations inside the convergence -checking \verb{while} loop. This is most commonly for setting a smaller +checking \code{while} loop. This is most commonly for setting a smaller iteration count than in \code{init}. \item \code{other} -- Miscellaneous (non-BayesianTools) settings, including: \itemize{ diff --git a/modules/rtm/man/params2edr.Rd b/modules/rtm/man/params2edr.Rd index bd9338ac985..69ce0a4ae26 100644 --- a/modules/rtm/man/params2edr.Rd +++ b/modules/rtm/man/params2edr.Rd @@ -33,7 +33,7 @@ The regular expression defining the separation is greedy, i.e. and \code{SLA} (assuming the default \code{sep = "."}). Therefore, it is crucial that trait names not contain any \code{sep} characters (neither ED nor PROSPECT parameters should anyway). If this is a problem, use an alternate separator -(e.g. \verb{|}). +(e.g. \code{|}). Note that using \code{sep = "."} allows this function to directly invert the default behavior of \code{unlist}. 
That is, calling From 554e483bc2eb53c2f182acf0c1fb55fe79844e5f Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 4 Aug 2020 14:51:03 -0400 Subject: [PATCH 1337/2289] updating function call for git check --- modules/data.land/R/soil_process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.land/R/soil_process.R b/modules/data.land/R/soil_process.R index 0af1baad509..7f3b369dbc1 100644 --- a/modules/data.land/R/soil_process.R +++ b/modules/data.land/R/soil_process.R @@ -62,7 +62,7 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T in.path = paste0(dirname(newfile[i]$path), '/') in.prefix = stringr::str_remove(basename(newfile[i]$path), ".nc") - dbfile.input.insert (in.path, + PEcAn.DB::dbfile.input.insert (in.path, in.prefix, new.site$id, startdate = NULL, From 32e1fe84b224ca33528976fe8c98d3a437bd1eb7 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 4 Aug 2020 18:52:04 +0000 Subject: [PATCH 1338/2289] docs & tests for workflow file download --- apps/api/R/workflows.R | 7 ++ apps/api/tests/test.workflows.R | 18 ++++- .../07_remote_access/01_pecan_api.Rmd | 72 +++++++++++++------ 3 files changed, 74 insertions(+), 23 deletions(-) diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 8a265a519d0..843966ab995 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -235,6 +235,13 @@ getWorkflowFile <- function(req, id, filename, res){ return() } + if(Sys.getenv("AUTH_REQ") == TRUE){ + if(qry_res$user_id != req$user$userid) { + res$status <- 403 + return() + } + } + # Read the data in binary form & return it bin <- readBin(filepath,'raw', n = file.info(filepath)$size) return(bin) diff --git a/apps/api/tests/test.workflows.R b/apps/api/tests/test.workflows.R index 7bc0bf4cd86..fb650bbf732 100644 --- a/apps/api/tests/test.workflows.R +++ b/apps/api/tests/test.workflows.R @@ -48,6 +48,7 @@ test_that("Submitting XML workflow to /api/workflows/ returns Status 201", { }) test_that("Submitting JSON workflow to /api/workflows/ returns Status 201", { + Sys.sleep(2) json_workflow <- jsonlite::read_json("test_workflows/api.sipnet.json") res <- httr::POST( "http://localhost:8000/api/workflows/", @@ -58,7 +59,6 @@ test_that("Submitting JSON workflow to /api/workflows/ returns Status 201", { expect_equal(res$status, 201) }) - test_that("Calling /api/workflows/{id}/status with valid workflow id returns Status 200", { res <- httr::GET( paste0("http://localhost:8000/api/workflows/", 99000000031, "/status"), @@ -73,4 +73,20 @@ test_that("Calling /api/workflows/{id}/status with invalid parameters returns St httr::authenticate("carya", "illinois") ) expect_equal(res$status, 404) +}) + +test_that("Calling /api/workflows/{id}/file/{filename} with valid parameters returns Status 200", { + res <- httr::GET( + paste0("http://localhost:8000/api/workflows/", 99000000031, "/file/", "pecan.CONFIGS.xml"), + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Calling /api/workflows/{id}/file/{filename} with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/workflows/0/file/randomfile.txt", + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 404) }) \ No newline at end of file diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 97b2e077976..da5eafd3c4a 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ 
b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -48,6 +48,7 @@ The currently implemented functionalities include: * [`POST /api/workflows/`](#post-apiworkflows): Submit a new PEcAn workflow * [`GET /api/workflows/{id}`](#get-apiworkflowsid): Obtain the details of a particular PEcAn workflow by supplying its ID * [`GET /api/workflows/{id}/status`](#get-apiworkflowsidstatus): Obtain the status of a particular PEcAn workflow + * [`GET /api/workflows/{id}/file/{filename}`](#get-apiworkflowsidfilefilename): Download the desired file from a PEcAn workflow * __Runs:__ * [`GET /api/runs`](#get-apiruns): Get the list of all the runs @@ -558,9 +559,9 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` ## $workflows -## id properties -## -## 1 1000009172 +## id folder started_at site_id model_id hostname start_date end_date +## 1 1000009900 /fs/data2/output//PEcAn_1000009900 2018-11-09 08:56:37 676 1000000022 geo.bu.edu 2004-01-01 2004-12-31 +## 2 1000009172 /fs/data2/output//PEcAn_1000009172 2018-04-11 18:14:52 676 1000000022 test-pecan.bu.edu 2004-01-01 2004-12-31 ## ... ## $count @@ -582,25 +583,13 @@ print(json.dumps(response.json(), indent=2)) ## "workflows": [ ## { ## "id": 1000009172, -## "properties": { -## "end": "2004/12/31", -## "pft": [ -## "soil.IF", -## "temperate.deciduous.IF" -## ], -## "email": "", -## "notes": "", -## "start": "2004/01/01", -## "siteid": "676", -## "modelid": "1000000022", -## "hostname": "test-pecan.bu.edu", -## "sitename": "WillowCreek(US-WCr)", -## "input_met": "AmerifluxLBL.SIPNET", -## "pecan_edit": "on", -## "sitegroupid": "1000000022", -## "fluxusername": "pecan", -## "input_poolinitcond": "-1" -## } +## "folder": "/fs/data2/output//PEcAn_1000009900", +## "started_at": "2018-11-09 08:56:37", +## "site_id": 676, +## "model_id": 1000000022, +## "hostname": "geo.bu.edu", +## "start_date": "2004-01-01", +## "end_date": "2004-12-31" ## }, ## ... 
## ], @@ -673,6 +662,15 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ## $id ## [1] "1000009172" +## $folder +## [1] "/fs/data2/output//PEcAn_1000009172" + +## $hostname +## [1] "test-pecan.bu.edu" + +## $user_id +## [1] "NA" + ## $properties ## $properties$end ## [1] "2004/12/31" @@ -735,6 +733,9 @@ print(json.dumps(response.json(), indent=2)) ``` ## { ## "id": "1000009172", +## "folder": "/fs/data2/output//PEcAn_1000009172", +## "hostname": "test-pecan.bu.edu", +## "user_id": "NA", ## "properties": { ## "end": "2004/12/31", ## "pft": [ @@ -813,6 +814,33 @@ print(json.dumps(response.json(), indent=2)) ``` ### {-} +### `GET /api/workflows/{id}/file/{filename}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Download the 'ensemble.ts.99000000017.NPP.2002.2002.Rdata' output file for the workflow with id = 99000000031 +res <- httr::GET( + "http://localhost:8000/api/workflows/99000000031/file/ensemble.ts.99000000017.NPP.2002.2002.Rdata", + httr::authenticate("carya", "illinois") + ) +writeBin(res$content, "test.ensemble.ts.99000000017.NPP.2002.2002.Rdata") +``` + +#### Python Snippet + +```python +# Download the '2002.nc' output file for the run with id = 99000000282 +response = requests.get( + "http://localhost:8000/api/workflows/99000000031/file/ensemble.ts.99000000017.NPP.2002.2002.Rdata", + auth=HTTPBasicAuth('carya', 'illinois') + ) +with open("test.ensemble.ts.99000000017.NPP.2002.2002.Rdata", "wb") as file: + file.write(response.content) +``` + +### {-} + ### `GET /api/runs/` {.tabset .tabset-pills} #### R Snippet From 0c5687a84d90611d7cf51a5277cf2477d01fe40d Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Tue, 4 Aug 2020 15:31:32 -0400 Subject: [PATCH 1339/2289] updating DESCRIPTION with stringr for Git check --- modules/data.land/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index 0aef6c7a65d..a06a437a0ec 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -51,6 +51,7 @@ Suggests: rgdal, RCurl, RPostgreSQL, + stringr, testthat (>= 1.0.2) License: BSD_3_clause + file LICENSE Copyright: Authors From 6d867134b66f70ab4c933a0d1500f4631ef69cdb Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Tue, 4 Aug 2020 21:09:14 +0000 Subject: [PATCH 1340/2289] fixed urlencode bug in searching --- apps/api/R/models.R | 2 ++ apps/api/R/pfts.R | 4 ++++ apps/api/R/sites.R | 2 ++ apps/api/R/submit.workflow.R | 11 +++++++++++ 4 files changed, 19 insertions(+) diff --git a/apps/api/R/models.R b/apps/api/R/models.R index f8ca3e2a044..05530248a32 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -50,6 +50,8 @@ getModel <- function(model_id, res){ #' @author Tezan Sahu #* @get / searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ + model_name <- URLdecode(model_name) + revision <- URLdecode(revision) dbcon <- PEcAn.DB::betyConnect() diff --git a/apps/api/R/pfts.R b/apps/api/R/pfts.R index f8e508001e6..d68f344a82a 100644 --- a/apps/api/R/pfts.R +++ b/apps/api/R/pfts.R @@ -49,6 +49,10 @@ getPfts <- function(pft_id, res){ #' @author Tezan Sahu #* @get / searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE, res){ + pft_name <- URLdecode(pft_name) + pft_type <- URLdecode(pft_type) + model_type <- URLdecode(model_type) + if(! 
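    # reject anything other than an empty, "plant", or "cultivar" pft_type
    # before touching the database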
pft_type %in% c("", "plant", "cultivar")){ res$status <- 400 return(list(error = "Invalid pft_type")) diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R index 7cc6599ed20..12a5eb83903 100644 --- a/apps/api/R/sites.R +++ b/apps/api/R/sites.R @@ -41,6 +41,8 @@ getSite <- function(site_id, res){ #' @author Tezan Sahu #* @get / searchSite <- function(sitename="", ignore_case=TRUE, res){ + sitename <- URLdecode(sitename) + dbcon <- PEcAn.DB::betyConnect() sites <- tbl(dbcon, "sites") %>% diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index c635758b614..47ea1c11c2c 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -45,6 +45,17 @@ submit.workflow.list <- function(workflowList, userDetails) { ) ) + if(is.null(workflowList$model$id)) { + dbcon <- PEcAn.DB::betyConnect() + res <- dplyr::tbl(dbcon, "models") %>% + filter(model_name == workflowList$model$type && revision == workflowList$model$revision) %>% + collect() + PEcAn.DB::db.close(dbcon) + + workflowList$model$type <- res$model_name + workflowList$model$revision <- res$revision + } + # Fix RabbitMQ details dbcon <- PEcAn.DB::betyConnect() hostInfo <- PEcAn.DB::dbHostInfo(dbcon) From 7dcfe0ce87cc75d6cb09823fdef3ae9c556a3de2 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 5 Aug 2020 09:52:03 +0530 Subject: [PATCH 1341/2289] add pro exists, raw deleted case --- modules/data.remote/R/call_remote_process.R | 32 +++++++++++++++++---- 1 file changed, 27 insertions(+), 5 deletions(-) diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R index 4844d7bd3a6..38d34c43957 100644 --- a/modules/data.remote/R/call_remote_process.R +++ b/modules/data.remote/R/call_remote_process.R @@ -106,6 +106,7 @@ call_remote_process <- function(settings){ if(stage_process_data == TRUE){ # check about the status of raw file raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon) + if(!is.null(raw_check$start_date) && !is.null(raw_check$end_date)){ raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) start <- as.character(raw_datalist[[1]]) end <- as.character(raw_datalist[[2]]) @@ -127,6 +128,20 @@ call_remote_process <- function(settings){ existing_pro_file_path <- NULL pro_merge <- FALSE } + }else{ + # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted + flag <- 6 + write_raw_start <- req_start + write_raw_end <- req_end + start <- req_start + end <- req_end + stage_get_data <- TRUE + existing_raw_file_path <- NULL + write_pro_start <- write_raw_start + write_pro_end <- write_raw_end + pro_merge <- "replace" + existing_pro_file_path <- NULL + } } } } @@ -190,7 +205,7 @@ call_remote_process <- function(settings){ existing_pro_file_path <- NULL }else{ # no data of requested type exists - PEcAn.logger::logger.info("nothing of requested type exists") + PEcAn.logger::logger.info("no data of requested type exists") flag <- 1 start <- req_start end <- req_end @@ -214,7 +229,7 @@ call_remote_process <- function(settings){ }else{ # db is completely empty for the given siteid - PEcAn.logger::logger.info("DB is completely empt for this site") + PEcAn.logger::logger.info("DB is completely empty for this site") flag <- 1 start <- req_start end <- req_end @@ -245,14 
+260,14 @@ call_remote_process <- function(settings){ }else{ if(flag == 1){ # no processed and rawfile are present - PEcAn.logger::logger.info("inserting rawfile and processed files for the first time") + PEcAn.logger::logger.info("inserting raw and processed files for the first time") # insert processed data PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) # insert raw file PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) }else if(flag == 2){ # requested processed file does not exist but the raw file used to create it exists within the required timeline - PEcAn.logger::logger.info("inserting proc file for the first time") + PEcAn.logger::logger.info("inserting processed file for the first time") PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) }else if(flag == 3){ # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates @@ -272,9 +287,16 @@ call_remote_process <- function(settings){ }else if(flag == 5){ # raw file required for creating the processed file exists and the processed file needs to be updated pro_id = pro_check$id - PEcAn.logger::logger.info("Updating the existing processed file") + PEcAn.logger::logger.info("updating the existing processed file") + PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) + PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) + }else if(flag == 6){ + # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file + pro_id = pro_check$id + PEcAn.logger::logger.info("replacing the existing processed file and creating a new raw file") PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) + PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) } } } From 321ee6a10be5c65b6a0000604f42e67c740b0377 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 5 Aug 2020 07:43:44 +0000 Subject: [PATCH 1342/2289] created endpoint for getting info about inputs --- apps/api/R/entrypoint.R | 4 + apps/api/R/inputs.R | 116 ++++++++++++++++++ apps/api/R/workflows.R | 2 - apps/api/pecanapi-spec.yml | 92 +++++++++++++- .../07_remote_access/01_pecan_api.Rmd | 89 ++++++++++++++ 5 files changed, 300 insertions(+), 3 deletions(-) create 
mode 100644 apps/api/R/inputs.R diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 039772ad5f1..9d03ede87ed 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -31,6 +31,10 @@ root$mount("/api/sites", sites_pr) pfts_pr <- plumber::plumber$new("pfts.R") root$mount("/api/pfts", pfts_pr) +# The endpoints mounted here are related to details of PEcAn inputs +inputs_pr <- plumber::plumber$new("inputs.R") +root$mount("/api/inputs", inputs_pr) + # The endpoints mounted here are related to details of PEcAn workflows workflows_pr <- plumber::plumber$new("workflows.R") root$mount("/api/workflows", workflows_pr) diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R new file mode 100644 index 00000000000..f97e1be7196 --- /dev/null +++ b/apps/api/R/inputs.R @@ -0,0 +1,116 @@ +library(dplyr) + +#' Search for Inputs containing wildcards for filtering +#' @param model_id Model Id (character) +#' @param site_id Site Id (character) +#' @param offset +#' @param limit +#' @return Information about Inputs based on model & site +#' @author Tezan Sahu +#* @get / +searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res){ + if (! limit %in% c(10, 20, 50, 100, 500)) { + res$status <- 400 + return(list(error = "Invalid value for parameter")) + } + + dbcon <- PEcAn.DB::betyConnect() + + inputs <- tbl(dbcon, "inputs") %>% + select(input_name=name, id, site_id, format_id, start_date, end_date) + + inputs <- tbl(dbcon, "dbfiles") %>% + select(file_name, file_path, container_type, id=container_id, machine_id) %>% + inner_join(inputs, by = "id") %>% + filter(container_type == 'Input') %>% + select(-container_type) + + inputs <- tbl(dbcon, "machines") %>% + select(hostname, machine_id=id) %>% + inner_join(inputs, by='machine_id') %>% + select(-machine_id) + + inputs <- tbl(dbcon, "modeltypes_formats") %>% + select(tag, modeltype_id, format_id, input) %>% + inner_join(inputs, by='format_id') %>% + filter(input) %>% + select(-input) + + inputs <- tbl(dbcon, "formats") %>% + select(format_id = id) %>% + inner_join(inputs, by='format_id') %>% + select(-format_id) + + inputs <- tbl(dbcon, "models") %>% + select(model_id = id, modeltype_id, model_name, revision) %>% + inner_join(inputs, by='modeltype_id') %>% + select(-modeltype_id) + + inputs <- tbl(dbcon, "sites") %>% + select(site_id = id, sitename) %>% + inner_join(inputs, by='site_id') + + if(! is.null(model_id)) { + inputs <- inputs %>% + filter(model_id == !!model_id) + } + + if(! 
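  # optionally restrict the joined result to a single site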
is.null(site_id)) { + inputs <- inputs %>% + filter(site_id == !!site_id) + } + + qry_res <- inputs %>% + select(-site_id, -model_id) %>% + distinct() %>% + arrange(id) %>% + collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) { + res$status <- 404 + return(list(error="Input(s) not found")) + } + else { + has_next <- FALSE + has_prev <- FALSE + if (nrow(qry_res) > (as.numeric(offset) + as.numeric(limit))) { + has_next <- TRUE + } + if (as.numeric(offset) != 0) { + has_prev <- TRUE + } + + qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ] + + result <- list(inputs = qry_res) + result$count <- nrow(qry_res) + if(has_next){ + result$next_page <- paste0( + req$rook.url_scheme, "://", + req$HTTP_HOST, + "/api/workflows", + req$PATH_INFO, + substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]), + (as.numeric(limit) + as.numeric(offset)), + "&limit=", + limit + ) + } + if(has_prev) { + result$prev_page <- paste0( + req$rook.url_scheme, "://", + req$HTTP_HOST, + "/api/workflows", + req$PATH_INFO, + substr(req$QUERY_STRING, 0, stringr::str_locate(req$QUERY_STRING, "offset=")[[2]]), + max(0, (as.numeric(offset) - as.numeric(limit))), + "&limit=", + limit + ) + } + + return(result) + } +} \ No newline at end of file diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 843966ab995..31959018af0 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -50,8 +50,6 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r qry_res <- qry_res[(as.numeric(offset) + 1):min((as.numeric(offset) + as.numeric(limit)), nrow(qry_res)), ] - # qry_res$properties[is.na(qry_res$properties)] = "{}" - # qry_res$properties <- purrr::map(qry_res$properties, jsonlite::parse_json) result <- list(workflows = qry_res) result$count <- nrow(qry_res) if(has_next){ diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 876a2f24267..01529beecd1 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -3,7 +3,7 @@ servers: - description: PEcAn API Server url: https://pecan-dev.ncsa.illinois.edu - description: PEcAn Development Server - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/3a3fd41c + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/58af05f9 - description: PEcAn API Test Server url: https://pecan-tezan.ncsa.illinois.edu - description: Localhost @@ -37,6 +37,8 @@ tags: description: Everything about PEcAn sites - name: pfts description: Everything about PEcAn PFTs (Plant Functional Types) + - name: inputs + description: Everything about inputs for a PEcAn workflow ##################################################################################################################### ##################################################### API Endpoints ################################################# @@ -358,6 +360,94 @@ paths: description: Access forbidden '404': description: Site(s) not found + + /api/inputs/: + get: + tags: + - inputs + summary: Search for the inputs + parameters: + - in: query + name: model_id + description: If provided, returns all inputs for the provided model_id + required: false + schema: + type: string + - in: query + name: site_id + description: If provided, returns all inputs for the provided site_id + required: false + schema: + type: string + - in: query + name: offset + description: The number of inputs to skip before starting to collect the result set. 
+ schema: + type: integer + minimum: 0 + default: 0 + required: false + - in: query + name: limit + description: The number of inputs to return. + schema: + type: integer + default: 50 + enum: + - 10 + - 20 + - 50 + - 100 + - 500 + required: false + responses: + '200': + description: List of inputs + content: + application/json: + schema: + type: object + properties: + inputs: + type: array + items: + type: object + properties: + id: + type: string + filename: + type: string + file_path: + type: string + input_name: + type: string + model_name: + type: string + revision: + type: string + sitename: + type: string + tag: + type: string + hostname: + type: string + start_date: + type: string + end_date: + type: string + count: + type: integer + next_page: + type: string + prev_page: + type: string + + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Workflows not found /api/workflows/: get: diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index da5eafd3c4a..bc1099c3c5f 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -43,6 +43,9 @@ The currently implemented functionalities include: * [`GET /api/pfts/`](#get-apipfts): Search for PFT(s) using search pattern based on PFT name, PFT type & Model type * [`GET /api/pfts/{pft_id}`](#get-apipftspft_id): Fetch the details of specific PFT +* __Inputs:__ + * [`GET /api/inputs/`](#get-apiinputs): Search for inputs needed for a PEcAn workflow based on `model_id` & `site_id` + * [`GET /api/inputs/{input_id}`](#get-apiinputsinput_id) *: Download the desired input file * __Workflows:__ * [`GET /api/workflows/`](#get-apiworkflows): Retrieve a list of PEcAn workflows * [`POST /api/workflows/`](#post-apiworkflows): Submit a new PEcAn workflow @@ -545,6 +548,92 @@ print(json.dumps(response.json(), indent=2)) ### {-} +### `GET /api/inputs/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Get the inputs needed for a workflow with model_id = 1000000022 & site_id = 676 +res <- httr::GET( + "http://localhost:8000/api/inputs/?model_id=1000000022&site_id=676", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $inputs +## sitename model_name revision tag hostname file_name +## 1 Willow Creek (US-WCr) SIPNET ssr met ebi-forecast.igb.illinois.edu wcr.clim +## 2 Willow Creek (US-WCr) SIPNET ssr met docker wcr.clim +## file_path id input_name start_date end_date +## 1 /home/share/data/dbfiles/ 235 2000-01-01 06:00:00 2007-01-01 05:59:00 +## 2 /data/sites/willow 2000000001 1998-01-01 00:00:00 2006-12-31 00:00:00 + +## $count +## [1] 2 +``` + +#### Python Snippet + +```python +# Get the inputs needed for a workflow with model_id = 1000000022 & site_id = 676 +response = requests.get( + "http://localhost:8000/api/inputs/?model_id=1000000022&site_id=676", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "inputs": [ +## { +## "sitename": "Willow Creek (US-WCr)", +## "model_name": "SIPNET", +## "revision": "ssr", +## "tag": "met", +## "hostname": "ebi-forecast.igb.illinois.edu", +## "file_name": "wcr.clim", +## "file_path": "/home/share/data/dbfiles/", +## "id": 235, +## "input_name": "", +## "start_date": "2000-01-01 06:00:00", +## "end_date": "2007-01-01 05:59:00" +## }, +## ... 
+## ], +## "count: 2 +## } +``` + +### {-} + +### `GET /api/inputs/{input_id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Download the input file with id = 99000000003 +res <- httr::GET( + "http://localhost:8000/api/inputs/99000000003", + httr::authenticate("carya", "illinois") + ) +writeBin(res$content, "test.2002.nc") +``` + +#### Python Snippet + +```python +# Download the input file for with id = 99000000003 +response = requests.get( + "http://localhost:8000/api/inputs/99000000003", + auth=HTTPBasicAuth('carya', 'illinois') + ) +with open("test.2002.nc", "wb") as file: + file.write(response.content) +``` + +### {-} + ### `GET /api/workflows/` {.tabset .tabset-pills} #### R Snippet From a8ae2472deff3e84c9717b31fd51b06e784d06d9 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 5 Aug 2020 14:58:09 +0000 Subject: [PATCH 1343/2289] added format_name & mimetype to inputs response --- apps/api/R/inputs.R | 7 ++++++- apps/api/R/submit.workflow.R | 19 ++++++++++--------- apps/api/pecanapi-spec.yml | 4 ++++ .../07_remote_access/01_pecan_api.Rmd | 8 +++++--- 4 files changed, 25 insertions(+), 13 deletions(-) diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R index f97e1be7196..3ba324c89ff 100644 --- a/apps/api/R/inputs.R +++ b/apps/api/R/inputs.R @@ -37,10 +37,15 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r select(-input) inputs <- tbl(dbcon, "formats") %>% - select(format_id = id) %>% + select(format_id = id, format_name = name, mimetype_id) %>% inner_join(inputs, by='format_id') %>% select(-format_id) + inputs <- tbl(dbcon, "mimetypes") %>% + select(mimetype_id = id, mimetype = type_string) %>% + inner_join(inputs, by='mimetype_id') %>% + select(-mimetype_id) + inputs <- tbl(dbcon, "models") %>% select(model_id = id, modeltype_id, model_name, revision) %>% inner_join(inputs, by='modeltype_id') %>% diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 47ea1c11c2c..68c3d04667e 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -36,19 +36,20 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* @author Tezan Sahu submit.workflow.list <- function(workflowList, userDetails) { # Fix details about the database - workflowList$database <- list(bety = PEcAn.DB::get_postgres_envvars( - host = "localhost", - dbname = "bety", - user = "bety", - password = "bety", - driver = "PostgreSQL" - ) + workflowList$database <- list( + bety = PEcAn.DB::get_postgres_envvars( + host = "localhost", + dbname = "bety", + user = "bety", + password = "bety", + driver = "PostgreSQL" + ) ) - if(is.null(workflowList$model$id)) { + if(! 
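  # a model id supplied without type/revision: fill both in from the
  # models table so the workflow list is complete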
is.null(workflowList$model$id) && (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { dbcon <- PEcAn.DB::betyConnect() res <- dplyr::tbl(dbcon, "models") %>% - filter(model_name == workflowList$model$type && revision == workflowList$model$revision) %>% + filter(id == workflowList$model$id) %>% collect() PEcAn.DB::db.close(dbcon) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 01529beecd1..2b1cbbc99e8 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -421,6 +421,10 @@ paths: type: string input_name: type: string + mimetype: + type: string + format_name: + type: string model_name: type: string revision: diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index bc1099c3c5f..2d6e78b16f4 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -562,9 +562,9 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` ## $inputs -## sitename model_name revision tag hostname file_name -## 1 Willow Creek (US-WCr) SIPNET ssr met ebi-forecast.igb.illinois.edu wcr.clim -## 2 Willow Creek (US-WCr) SIPNET ssr met docker wcr.clim +## sitename model_name revision tag hostname file_name format_name mimetype +## 1 Willow Creek (US-WCr) SIPNET ssr met ebi-forecast.igb.illinois.edu wcr.clim Sipnet.climna text/csv +## 2 Willow Creek (US-WCr) SIPNET ssr met docker wcr.clim Sipnet.climna text/csv ## file_path id input_name start_date end_date ## 1 /home/share/data/dbfiles/ 235 2000-01-01 06:00:00 2007-01-01 05:59:00 ## 2 /data/sites/willow 2000000001 1998-01-01 00:00:00 2006-12-31 00:00:00 @@ -593,6 +593,8 @@ print(json.dumps(response.json(), indent=2)) ## "tag": "met", ## "hostname": "ebi-forecast.igb.illinois.edu", ## "file_name": "wcr.clim", +## "format_name": "Sipnet.climna", +## "mimetype": "text/csv", ## "file_path": "/home/share/data/dbfiles/", ## "id": 235, ## "input_name": "", From 0d4f1ea3a2b389b007dd6a2df7b5e1c8d10870ab Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 5 Aug 2020 15:39:53 +0000 Subject: [PATCH 1344/2289] added endpoints for searching & getting details of PEcAn formats; added docs --- apps/api/R/entrypoint.R | 4 + apps/api/R/formats.R | 76 ++++++++++++ apps/api/pecanapi-spec.yml | 97 +++++++++++++++ .../07_remote_access/01_pecan_api.Rmd | 114 ++++++++++++++++++ 4 files changed, 291 insertions(+) create mode 100644 apps/api/R/formats.R diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 9d03ede87ed..3423f259912 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -31,6 +31,10 @@ root$mount("/api/sites", sites_pr) pfts_pr <- plumber::plumber$new("pfts.R") root$mount("/api/pfts", pfts_pr) +# The endpoints mounted here are related to details of PEcAn formats +formats_pr <- plumber::plumber$new("formats.R") +root$mount("/api/formats", formats_pr) + # The endpoints mounted here are related to details of PEcAn inputs inputs_pr <- plumber::plumber$new("inputs.R") root$mount("/api/inputs", inputs_pr) diff --git a/apps/api/R/formats.R b/apps/api/R/formats.R new file mode 100644 index 00000000000..47930e7b110 --- /dev/null +++ b/apps/api/R/formats.R @@ -0,0 +1,76 @@ +library(dplyr) + +#' Retrieve the details of a PEcAn format, based on format_id +#' @param format_id Format ID (character) +#' @return Format details +#' @author Tezan Sahu +#* @get / +getFormat <- function(format_id, res){ + + dbcon <- 
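    # open a BETY connection; it is closed again once the format row is fetched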
PEcAn.DB::betyConnect() + + Format <- tbl(dbcon, "formats") %>% + select(format_id = id, name, notes, header, mimetype_id) %>% + filter(format_id == !!format_id) + + Format <- tbl(dbcon, "mimetypes") %>% + select(mimetype_id = id, mimetype = type_string) %>% + inner_join(Format, by = "mimetype_id") %>% + select(-mimetype_id) + + qry_res <- Format %>% collect() + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Format not found")) + } + else { + # Convert the response from tibble to list + response <- list() + for(colname in colnames(qry_res)){ + response[colname] <- qry_res[colname] + } + + return(response) + } +} + +######################################################################### + +#' Search for PEcAn format(s) containing wildcards for filtering +#' @param format_name Format name search string (character) +#' @param mimetype Mime type search string (character) +#' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @return Formats subset matching the model search string +#' @author Tezan Sahu +#* @get / +searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res){ + format_name <- URLdecode(format_name) + mimetype <- URLdecode(mimetype) + + dbcon <- PEcAn.DB::betyConnect() + + Formats <- tbl(dbcon, "formats") %>% + select(format_id = id, format_name=name, mimetype_id) %>% + filter(grepl(!!format_name, format_name, ignore.case=ignore_case)) + + Formats <- tbl(dbcon, "mimetypes") %>% + select(mimetype_id = id, mimetype = type_string) %>% + inner_join(Formats, by = "mimetype_id") %>% + filter(grepl(!!mimetype, mimetype, ignore.case=ignore_case)) %>% + select(-mimetype_id) %>% + arrange(format_id) + + qry_res <- Formats %>% collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(qry_res) == 0) { + res$status <- 404 + return(list(error="Format(s) not found")) + } + else { + return(list(formats=qry_res, count = nrow(qry_res))) + } +} \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 2b1cbbc99e8..5748bf94744 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -35,6 +35,8 @@ tags: description: Everything about PEcAn models - name: sites description: Everything about PEcAn sites + - name: formats + description: Everything about PEcAn formats - name: pfts description: Everything about PEcAn PFTs (Plant Functional Types) - name: inputs @@ -361,6 +363,88 @@ paths: '404': description: Site(s) not found + /api/formats/{format_id}: + get: + tags: + - formats + summary: Details of requested format + parameters: + - in: path + name: format_id + description: Format ID + required: true + schema: + type: string + responses: + '200': + description: Format Details + content: + application/json: + schema: + $ref: '#/components/schemas/Format' + + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Model not found + + /api/formats/: + get: + tags: + - formats + summary: Search for format(s) using search pattern based on format name & mime type + parameters: + - in: query + name: format_name + description: Search string for format name + required: false + schema: + type: string + - in: query + name: mimetype + description: Search string for mime type + required: false + schema: + type: string + - in: query + name: ignore_case + description: Indicator of case sensitive or case insensitive search + required: false + schema: + type: string + 
default: "TRUE" + enum: + - "TRUE" + - "FALSE" + responses: + '200': + description: List of formats matching search pattern + content: + application/json: + schema: + type: object + properties: + models: + type: array + items: + type: object + properties: + format_id: + type: string + format_name: + type: string + mimetype: + type: string + '401': + description: Authentication required + '403': + description: Access forbidden + '404': + description: Model(s) not found + + /api/inputs/: get: tags: @@ -987,6 +1071,19 @@ components: model_type: type: string + Format: + properties: + format_id: + type: string + name: + type: string + notes: + type: string + header: + type: string + mimetype: + type: string + Workflow_GET: properties: id: diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 2d6e78b16f4..7f0fc442736 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -43,9 +43,14 @@ The currently implemented functionalities include: * [`GET /api/pfts/`](#get-apipfts): Search for PFT(s) using search pattern based on PFT name, PFT type & Model type * [`GET /api/pfts/{pft_id}`](#get-apipftspft_id): Fetch the details of specific PFT +* __Formats:__ + * [`GET /api/formats/`](#get-apiformats): Search for format(s) using search pattern based on format name & mime type + * [`GET /api/formats/{format_id}`](#get-formatsformat_id): Fetch the details of specific Format + * __Inputs:__ * [`GET /api/inputs/`](#get-apiinputs): Search for inputs needed for a PEcAn workflow based on `model_id` & `site_id` * [`GET /api/inputs/{input_id}`](#get-apiinputsinput_id) *: Download the desired input file + * __Workflows:__ * [`GET /api/workflows/`](#get-apiworkflows): Retrieve a list of PEcAn workflows * [`POST /api/workflows/`](#post-apiworkflows): Submit a new PEcAn workflow @@ -548,6 +553,115 @@ print(json.dumps(response.json(), indent=2)) ### {-} +### `GET /api/formats/` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Search format(s) with name containing 'ameriflux' & mime type containing 'csv' +res <- httr::GET( + "http://localhost:8000/api/formats/?format_name=ameriflux&mimetype=csv&ignore_case=TRUE", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $formats +## mimetype format_id format_name +## 1 text/csv 19 AmeriFlux.level4.h +## 2 text/csv 20 AmeriFlux.level4.d +## 3 text/csv 21 AmeriFlux.level4.w +## 4 text/csv 22 AmeriFlux.level4.m +## 5 text/csv 35 AmeriFlux.level2.h +## 6 text/csv 36 AmeriFlux.level3.h + +## $count +## [1] 6 +``` + +#### Python Snippet + +```python +# # Search format(s) with name containing 'ameriflux' & mime type containing 'csv' +response = requests.get( + "http://localhost:8000/api/formats/?format_name=ameriflux&mimetype=csv&ignore_case=TRUE", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "formats": [ +## { +## "mimetype": "text/csv", +## "format_id": 19, +## "format_name": "AmeriFlux.level4.h" +## }, +## { +## "mimetype": "text/csv", +## "format_id": 20, +## "format_name": "AmeriFlux.level4.d" +## }, +## ... 
+## ], +## "count": 6 +## } + +``` + +### {-} + +### `GET /api/formats/{format_id}` {.tabset .tabset-pills} + +#### R Snippet + +```R +# Fetch the details of PEcAn format with id = 19 +res <- httr::GET( + "http://localhost:8000/api/formats/19", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $mimetype +## [1] "text/csv" + +## $format_id +## [1] 19 + +## $name +## [1] "AmeriFlux.level4.h" + +## $notes +## [1] "Half-hourly AmeriFlux level 4 gap filled, partitioned, and flagged flux tower data. Variables description: Level 4 data are obtained from the level 3 products, data are ustar filtered, gap-filled using different methods (ANN and MDS) and partitioned (i.e. NEE, GPP, and Re). Flags with information regarding quality of the original and gapfilled data are added. Missing values: -9999." + +## $header +## [1] "" +``` + +#### Python Snippet + +```python +# Fetch the details of PEcAn format with id = 19 +response = requests.get( + "http://localhost:8000/api/formats/19", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "mimetype": "text/csv", +## "format_id": 19, +## "name": "AmeriFlux.level4.h", +## "notes": "Half-hourly AmeriFlux level 4 gap filled, partitioned, and flagged flux tower data. Variables description: ## Level 4 data are obtained from the level 3 products, data are ustar filtered, gap-filled using different methods (ANN and ## MDS) and partitioned (i.e. NEE, GPP, and Re). Flags with information regarding quality of the original and gapfilled data ## are added. Missing values: -9999.", +## "header": "" +## } +``` + +### {-} + ### `GET /api/inputs/` {.tabset .tabset-pills} #### R Snippet From 1f789302b8923669405fadaeeb4687f23d24b222 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 5 Aug 2020 19:32:59 +0000 Subject: [PATCH 1345/2289] bugfix in workflow submission --- apps/api/R/submit.workflow.R | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 68c3d04667e..1e99246a6a3 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -45,18 +45,17 @@ submit.workflow.list <- function(workflowList, userDetails) { driver = "PostgreSQL" ) ) - if(! 
is.null(workflowList$model$id) && (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { dbcon <- PEcAn.DB::betyConnect() res <- dplyr::tbl(dbcon, "models") %>% - filter(id == workflowList$model$id) %>% + select(id, model_name, revision) %>% + filter(id == !!workflowList$model$id) %>% collect() PEcAn.DB::db.close(dbcon) workflowList$model$type <- res$model_name workflowList$model$revision <- res$revision } - # Fix RabbitMQ details dbcon <- PEcAn.DB::betyConnect() hostInfo <- PEcAn.DB::dbHostInfo(dbcon) @@ -68,7 +67,6 @@ submit.workflow.list <- function(workflowList, userDetails) { ) ) workflowList$host$name <- if(hostInfo$hostname == "") "localhost" else hostInfo$hostname - # Fix the info workflowList$info$notes <- workflowList$info$notes if(is.null(workflowList$info$userid)){ @@ -80,21 +78,21 @@ submit.workflow.list <- function(workflowList, userDetails) { if(is.null(workflowList$info$date)){ workflowList$info$date <- Sys.time() } - + # Add entry to workflows table in database workflow_id <- insert.workflow(workflowList) workflowList$workflow$id <- workflow_id - + # Add entry to attributes table in database insert.attribute(workflowList) - + # Fix the output directory outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id) workflowList$outdir <- outdir # Create output diretory dir.create(outdir, recursive=TRUE) - + # Modify the `dbfiles` path & create the directory if needed workflowList$run$dbfiles <- Sys.getenv("DBFILES_DIR", "/data/dbfiles/") if(! dir.exists(workflowList$run$dbfiles)){ @@ -127,6 +125,7 @@ submit.workflow.list <- function(workflowList, userDetails) { #* @return ID of the submitted workflow #* @author Tezan Sahu insert.workflow <- function(workflowList){ + dbcon <- PEcAn.DB::betyConnect() model_id <- workflowList$model$id @@ -153,6 +152,7 @@ insert.workflow <- function(workflowList){ } insert <- PEcAn.DB::insert_table(workflow_df, "workflows", dbcon) + workflow_id <- dplyr::tbl(dbcon, "workflows") %>% filter(started_at == start_time && site_id == bit64::as.integer64(workflowList$run$site$id) @@ -174,6 +174,7 @@ insert.workflow <- function(workflowList){ #* @param workflowList List containing the workflow details #* @author Tezan Sahu insert.attribute <- function(workflowList){ + dbcon <- PEcAn.DB::betyConnect() # Create an array of PFTs @@ -222,4 +223,6 @@ insert.attribute <- function(workflowList){ res <- DBI::dbSendStatement(dbcon, "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) + + } \ No newline at end of file From f844baa83e1ff3b35c738e15ae54d4108fbb588d Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Wed, 5 Aug 2020 17:55:25 -0400 Subject: [PATCH 1346/2289] update skipped tests to skip on Github Actions too, not just Travis --- .../data.atmosphere/tests/testthat/test.download.CRUNCEP.R | 2 +- modules/data.atmosphere/tests/testthat/test.download.GFDLR.R | 4 ++-- modules/data.atmosphere/tests/testthat/test.download.NARR.R | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/tests/testthat/test.download.CRUNCEP.R b/modules/data.atmosphere/tests/testthat/test.download.CRUNCEP.R index 5a0869cc723..4e5e1dd456c 100644 --- a/modules/data.atmosphere/tests/testthat/test.download.CRUNCEP.R +++ b/modules/data.atmosphere/tests/testthat/test.download.CRUNCEP.R @@ -6,7 +6,7 @@ teardown(unlink(tmpdir, recursive = TRUE)) test_that("download works and returns a valid CF file", 
{ # download is slow and was causing lots of Travis timeouts - skip_on_travis() + skip_on_ci() PEcAn.logger::logger.setLevel("WARN") diff --git a/modules/data.atmosphere/tests/testthat/test.download.GFDLR.R b/modules/data.atmosphere/tests/testthat/test.download.GFDLR.R index 9ccce6172ad..320e9ffb058 100644 --- a/modules/data.atmosphere/tests/testthat/test.download.GFDLR.R +++ b/modules/data.atmosphere/tests/testthat/test.download.GFDLR.R @@ -5,7 +5,7 @@ dir.create(tmpdir, showWarnings = FALSE) teardown(unlink(tmpdir, recursive = TRUE)) test_that("GFDL server is reachable", { - skip_on_travis() + skip_on_ci() test_url <- paste0("http://nomads.gfdl.noaa.gov:9192/opendap/", "CMIP5/output1/NOAA-GFDL/GFDL-CM3/rcp45/3hr/", @@ -23,7 +23,7 @@ test_that("GFDL server is reachable", { test_that("download works and returns a valid CF file", { # Download is too slow for Travis -- please run locally before committing! - skip_on_travis() + skip_on_ci() PEcAn.logger::logger.setLevel("WARN") diff --git a/modules/data.atmosphere/tests/testthat/test.download.NARR.R b/modules/data.atmosphere/tests/testthat/test.download.NARR.R index 956353efcb0..f8e488d4cf7 100644 --- a/modules/data.atmosphere/tests/testthat/test.download.NARR.R +++ b/modules/data.atmosphere/tests/testthat/test.download.NARR.R @@ -19,7 +19,7 @@ ncdf4::nc_close(test_nc) test_that("NARR download works as expected", { # Download is too slow for travis # Please run locally to test! - skip_on_travis() + skip_on_ci() r <- download.NARR_site(outfolder, start_date, end_date, lat.in, lon.in, progress = TRUE, parallel = TRUE, ncores = 2) From 51cde8971c3626807e06cd4b325c11dee8a61d8a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 6 Aug 2020 10:14:04 +0530 Subject: [PATCH 1347/2289] reorganise /inst --- modules/data.remote/inst/{ => RpTools/RpTools}/appeears2pecan.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/bands2lai_snap.py | 0 .../inst/{satellitetools => RpTools/RpTools}/biophys_xarray.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/gee2pecan_l8.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/gee2pecan_s2.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/gee2pecan_smap.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/gee_utils.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/get_remote_data.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/merge_files.py | 0 .../data.remote/inst/{ => RpTools/RpTools}/process_remote_data.py | 0 modules/data.remote/inst/{ => RpTools/RpTools}/remote_process.py | 0 modules/data.remote/inst/{satellitetools => }/test.geojson | 0 12 files changed, 0 insertions(+), 0 deletions(-) rename modules/data.remote/inst/{ => RpTools/RpTools}/appeears2pecan.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/bands2lai_snap.py (100%) rename modules/data.remote/inst/{satellitetools => RpTools/RpTools}/biophys_xarray.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/gee2pecan_l8.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/gee2pecan_s2.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/gee2pecan_smap.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/gee_utils.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/get_remote_data.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/merge_files.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/process_remote_data.py (100%) rename modules/data.remote/inst/{ => RpTools/RpTools}/remote_process.py (100%) rename 
modules/data.remote/inst/{satellitetools => }/test.geojson (100%) diff --git a/modules/data.remote/inst/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py similarity index 100% rename from modules/data.remote/inst/appeears2pecan.py rename to modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py diff --git a/modules/data.remote/inst/bands2lai_snap.py b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py similarity index 100% rename from modules/data.remote/inst/bands2lai_snap.py rename to modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py diff --git a/modules/data.remote/inst/satellitetools/biophys_xarray.py b/modules/data.remote/inst/RpTools/RpTools/biophys_xarray.py similarity index 100% rename from modules/data.remote/inst/satellitetools/biophys_xarray.py rename to modules/data.remote/inst/RpTools/RpTools/biophys_xarray.py diff --git a/modules/data.remote/inst/gee2pecan_l8.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py similarity index 100% rename from modules/data.remote/inst/gee2pecan_l8.py rename to modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py diff --git a/modules/data.remote/inst/gee2pecan_s2.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py similarity index 100% rename from modules/data.remote/inst/gee2pecan_s2.py rename to modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py diff --git a/modules/data.remote/inst/gee2pecan_smap.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py similarity index 100% rename from modules/data.remote/inst/gee2pecan_smap.py rename to modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py diff --git a/modules/data.remote/inst/gee_utils.py b/modules/data.remote/inst/RpTools/RpTools/gee_utils.py similarity index 100% rename from modules/data.remote/inst/gee_utils.py rename to modules/data.remote/inst/RpTools/RpTools/gee_utils.py diff --git a/modules/data.remote/inst/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py similarity index 100% rename from modules/data.remote/inst/get_remote_data.py rename to modules/data.remote/inst/RpTools/RpTools/get_remote_data.py diff --git a/modules/data.remote/inst/merge_files.py b/modules/data.remote/inst/RpTools/RpTools/merge_files.py similarity index 100% rename from modules/data.remote/inst/merge_files.py rename to modules/data.remote/inst/RpTools/RpTools/merge_files.py diff --git a/modules/data.remote/inst/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py similarity index 100% rename from modules/data.remote/inst/process_remote_data.py rename to modules/data.remote/inst/RpTools/RpTools/process_remote_data.py diff --git a/modules/data.remote/inst/remote_process.py b/modules/data.remote/inst/RpTools/RpTools/remote_process.py similarity index 100% rename from modules/data.remote/inst/remote_process.py rename to modules/data.remote/inst/RpTools/RpTools/remote_process.py diff --git a/modules/data.remote/inst/satellitetools/test.geojson b/modules/data.remote/inst/test.geojson similarity index 100% rename from modules/data.remote/inst/satellitetools/test.geojson rename to modules/data.remote/inst/test.geojson From ed8de44f9166a2938271512e581397485f148b52 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 6 Aug 2020 10:52:40 +0530 Subject: [PATCH 1348/2289] create RpTools package --- .../inst/RpTools/RpTools/__init__.py | 3 + .../inst/RpTools/RpTools/appeears2pecan.py | 2 +- .../inst/RpTools/RpTools/bands2lai_snap.py | 7 +- 
.../inst/RpTools/RpTools/gee2pecan_l8.py | 2 +- .../inst/RpTools/RpTools/gee2pecan_s2.py | 2 +- .../inst/RpTools/RpTools/gee2pecan_smap.py | 2 +- .../inst/RpTools/RpTools/get_remote_data.py | 5 +- .../RpTools/RpTools/process_remote_data.py | 3 +- .../inst/RpTools/RpTools/remote_process.py | 8 +-- modules/data.remote/inst/RpTools/setup.py | 65 +++++++++++++++++++ 10 files changed, 85 insertions(+), 14 deletions(-) create mode 100644 modules/data.remote/inst/RpTools/RpTools/__init__.py create mode 100644 modules/data.remote/inst/RpTools/setup.py diff --git a/modules/data.remote/inst/RpTools/RpTools/__init__.py b/modules/data.remote/inst/RpTools/RpTools/__init__.py new file mode 100644 index 00000000000..183023df1b6 --- /dev/null +++ b/modules/data.remote/inst/RpTools/RpTools/__init__.py @@ -0,0 +1,3 @@ +from RpTools.remote_process import remote_process +from RpRools.get_remote_data import get_remote_data +from RpTools.process_remote_data import process_remote_data diff --git a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py index 49e72ccc7cd..2006c864900 100644 --- a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py +++ b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py @@ -18,7 +18,7 @@ import os import cgi import json -from gee_utils import get_sitename +from . gee_utils import get_sitename from datetime import datetime from warnings import warn import os.path diff --git a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py index 5d3229fd7b0..4400df908e7 100644 --- a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py +++ b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py @@ -7,8 +7,9 @@ Author: Ayush Prasad """ -import gee2pecan_s2 as gee -import satellitetools.biophys_xarray as bio +import RpTools.gee2pecan_s2 as gee +from RpTools.gee2pecan_s2 import xr_dataset_to_timeseries +import RpTools.biophys_xarray as bio import geopandas as gpd import xarray as xr import os @@ -50,6 +51,6 @@ def bands2lai_snap(inputfile, outdir): save_path = os.path.join(outdir, area.name + "_lai_snap_" + timestamp + ".nc") # creating a timerseries and saving the netCDF file area.to_netcdf(save_path) - timeseries[area.name] = gee.xr_dataset_to_timeseries(area, timeseries_variable) + timeseries[area.name] = xr_dataset_to_timeseries(area, timeseries_variable) return os.path.abspath(save_path) \ No newline at end of file diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py index 5ab1990daef..5d812a01a6b 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py @@ -9,7 +9,7 @@ Author: Ayush Prasad """ -from gee_utils import create_geo, get_sitecoord, get_sitename, calc_ndvi +from RpTools.gee_utils import create_geo, get_sitecoord, get_sitename, calc_ndvi import ee import pandas as pd import geopandas as gpd diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py index a42d8ce67be..286268b9d33 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py @@ -20,7 +20,7 @@ import numpy as np import xarray as xr from functools import reduce -from gee_utils import calc_ndvi +from RpTools.gee_utils import calc_ndvi ee.Initialize() diff --git 
a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py index 6d9cdbb72a7..90e25188e70 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py @@ -7,7 +7,7 @@ Author: Ayush Prasad """ -from gee_utils import create_geo, get_sitecoord, get_sitename +from RpTools.gee_utils import create_geo, get_sitecoord, get_sitename import ee import pandas as pd import os diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py index 87a7d212055..27d31f5dc91 100644 --- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py @@ -8,9 +8,9 @@ Author(s): Ayush Prasad, Istem Fer """ -from merge_files import nc_merge, csv_merge +from RpTools.merge_files import nc_merge, csv_merge from importlib import import_module -from appeears2pecan import appeears2pecan +from . appeears2pecan import appeears2pecan import os import os.path @@ -63,6 +63,7 @@ def get_remote_data( # construct the function name func_name = "".join([source, "2pecan", "_", collection]) # import the module + func_name = "RpTools" + "." + func_name module = import_module(func_name) # import the function from the module func = getattr(module, func_name) diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py index 1c8f9ba183a..bd35c7adc73 100644 --- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py @@ -6,7 +6,7 @@ Requires Python3 Author: Ayush Prasad """ -from merge_files import nc_merge +from RpTools.merge_files import nc_merge from importlib import import_module import os import time @@ -37,6 +37,7 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori # construct the function name func_name = "".join([input_type, "2", output, "_", algorithm]) # import the module + func_name = "RpTools" + "." + func_name module = import_module(func_name) # import the function from the module func = getattr(module, func_name) diff --git a/modules/data.remote/inst/RpTools/RpTools/remote_process.py b/modules/data.remote/inst/RpTools/RpTools/remote_process.py index e63343106cb..2e82ed9bd04 100644 --- a/modules/data.remote/inst/RpTools/RpTools/remote_process.py +++ b/modules/data.remote/inst/RpTools/RpTools/remote_process.py @@ -8,10 +8,10 @@ Author(s): Ayush Prasad, Istem Fer """ -from merge_files import nc_merge, csv_merge -from get_remote_data import get_remote_data -from process_remote_data import process_remote_data -from gee_utils import get_sitename +from . merge_files import nc_merge, csv_merge +from . get_remote_data import get_remote_data +from . process_remote_data import process_remote_data +from . 
gee_utils import get_sitename import os diff --git a/modules/data.remote/inst/RpTools/setup.py b/modules/data.remote/inst/RpTools/setup.py new file mode 100644 index 00000000000..71a9825829d --- /dev/null +++ b/modules/data.remote/inst/RpTools/setup.py @@ -0,0 +1,65 @@ +from setuptools import setup + +setup( + name="RpTools", + version="0.1", + description="RpTools contains the Python codes required by PEcAn's remote data module", + # url=' ', + author="Ayush Prasad", + author_email="ayush.prd@gmail.com", + license="University of Illinois/NCSA Open Source License", + packages=["RpTools"], + install_requires=[ + "attrs>=19.3.0", + "cachetools>=4.1.1", + "certifi>=2020.6.20", + "cffi>=1.14.1", + "chardet>=3.0.4", + "click>=7.1.2", + "click-plugins>=1.1.1", + "cligj>=0.5.0", + "cryptography>=1.7.1", + "dask>=2.6.0", + "earthengine-api>=0.1.229", + "Fiona>=1.8.13.post1", + "future>=0.18.2" "geopandas>=0.8.1", + "google-api-core>=1.22.0", + "google-api-python-client>=1.10.0", + "google-auth>=1.20.0", + "google-auth-httplib2>=0.0.4", + "google-cloud-core>=1.3.0", + "google-cloud-storage>=1.30.0", + "google-crc32c>=0.1.0", + "google-resumable-media>=0.7.0", + "googleapis-common-protos>=1.52.0", + "httplib2>=0.18.1", + "httplib2shim>=0.0.3", + "idna>=2.10", + "keyring>=10.1", + "keyrings.alt>=1.3", + "munch>=2.5.0", + "numpy>=1.18.5", + "pandas>=0.25.3", + "protobuf>=3.12.4", + "pyasn1>=0.4.8", + "pyasn1-modules>=0.2.8", + "pycparser>=2.20", + "pycrypto>=2.6.1", + "pygobject>=3.22.0", + "pyproj>=2.6.1.post1", + "python-dateutil>=2.8.1", + "pytz>=2020.1", + "pyxdg>=0.25", + "requests>=2.24.0", + "rsa>=4.6", + "scipy>=1.4.1", + "SecretStorage>=2.3.1", + "Shapely>=1.7.0", + "six>=1.10.0", + "toolz>=0.10.0", + "uritemplate>=3.0.1", + "urllib3>=1.25.10", + "xarray>=0.13.0", + ], + # zip_safe=False +) From 6888f5707f34ebc9c320c3ea93b28c34eba399b3 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 6 Aug 2020 11:17:54 +0530 Subject: [PATCH 1349/2289] minor fixes in RpTools --- modules/data.remote/inst/RpTools/RpTools/__init__.py | 2 +- modules/data.remote/inst/RpTools/RpTools/get_remote_data.py | 4 ++-- .../data.remote/inst/RpTools/RpTools/process_remote_data.py | 4 ++-- modules/data.remote/inst/RpTools/setup.py | 3 ++- 4 files changed, 7 insertions(+), 6 deletions(-) diff --git a/modules/data.remote/inst/RpTools/RpTools/__init__.py b/modules/data.remote/inst/RpTools/RpTools/__init__.py index 183023df1b6..6895e7ce580 100644 --- a/modules/data.remote/inst/RpTools/RpTools/__init__.py +++ b/modules/data.remote/inst/RpTools/RpTools/__init__.py @@ -1,3 +1,3 @@ from RpTools.remote_process import remote_process -from RpRools.get_remote_data import get_remote_data +from RpTools.get_remote_data import get_remote_data from RpTools.process_remote_data import process_remote_data diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py index 27d31f5dc91..b7f685e0446 100644 --- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py @@ -63,8 +63,8 @@ def get_remote_data( # construct the function name func_name = "".join([source, "2pecan", "_", collection]) # import the module - func_name = "RpTools" + "." + func_name - module = import_module(func_name) + module_from_pack = "RpTools" + "." 
+ func_name + module = import_module(module_from_pack) # import the function from the module func = getattr(module, func_name) # if a qc parameter is specified pass these arguments to the function diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py index bd35c7adc73..7e0a7ef69fd 100644 --- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py @@ -37,8 +37,8 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori # construct the function name func_name = "".join([input_type, "2", output, "_", algorithm]) # import the module - func_name = "RpTools" + "." + func_name - module = import_module(func_name) + module_from_pack = "RpTools" + "." + func_name + module = import_module(module_from_pack) # import the function from the module func = getattr(module, func_name) # call the function diff --git a/modules/data.remote/inst/RpTools/setup.py b/modules/data.remote/inst/RpTools/setup.py index 71a9825829d..248b3f518b9 100644 --- a/modules/data.remote/inst/RpTools/setup.py +++ b/modules/data.remote/inst/RpTools/setup.py @@ -22,7 +22,8 @@ "dask>=2.6.0", "earthengine-api>=0.1.229", "Fiona>=1.8.13.post1", - "future>=0.18.2" "geopandas>=0.8.1", + "future>=0.18.2", + "geopandas>=0.8.1", "google-api-core>=1.22.0", "google-api-python-client>=1.10.0", "google-auth>=1.20.0", From 6480af8d3a264bd91b6c1b5931c032ff015a0565 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Thu, 6 Aug 2020 17:42:17 +0000 Subject: [PATCH 1350/2289] tests for inputs & formats related endpoints --- apps/api/R/submit.workflow.R | 12 +++++++++--- apps/api/tests/test.formats.R | 33 +++++++++++++++++++++++++++++++++ apps/api/tests/test.inputs.R | 17 +++++++++++++++++ 3 files changed, 59 insertions(+), 3 deletions(-) create mode 100644 apps/api/tests/test.formats.R create mode 100644 apps/api/tests/test.inputs.R diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 1e99246a6a3..9e5d1081575 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -174,7 +174,6 @@ insert.workflow <- function(workflowList){ #* @param workflowList List containing the workflow details #* @author Tezan Sahu insert.attribute <- function(workflowList){ - dbcon <- PEcAn.DB::betyConnect() # Create an array of PFTs @@ -182,7 +181,7 @@ insert.attribute <- function(workflowList){ for(i in seq(length(workflowList$pfts))){ pfts <- c(pfts, workflowList$pfts[i]$pft$name) } - + # Obtain the model_id model_id <- workflowList$model$id if(is.null(model_id)){ @@ -204,9 +203,16 @@ insert.attribute <- function(workflowList){ email = if(is.na(workflowList$info$userid) || workflowList$info$userid == -1) "" else dplyr::tbl(dbcon, "users") %>% filter(id == bit64::as.integer64(workflowList$info$userid)) %>% pull(email), notes = if(is.null(workflowList$info$notes)) "" else workflowList$info$notes, - input_met = workflowList$run$inputs$met$id, variables = workflowList$ensemble$variable ) + + if(! is.null(workflowList$run$inputs$met$id)) { + properties$input_met <- workflowList$run$inputs$met$id + } + else if(! is.null(workflowList$run$inputs$met$source)) { + properties$input_met <- workflowList$run$inputs$met$source + } + if(! is.null(workflowList$ensemble$parameters$method)) properties$parm_method <- workflowList$ensemble$parameters$method if(! 
is.null(workflowList$sensitivity.analysis$quantiles)){
     sensitivity <- c()
diff --git a/apps/api/tests/test.formats.R b/apps/api/tests/test.formats.R
new file mode 100644
index 00000000000..6261b711bf5
--- /dev/null
+++ b/apps/api/tests/test.formats.R
@@ -0,0 +1,33 @@
+context("Testing all formats related endpoints")
+
+test_that("Calling /api/formats/ with valid parameters returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/formats/?format_name=ameriflux&mimetype=csv&ignore_case=TRUE",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/formats/ with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/formats/?format_name=random&mimetype=random&ignore_case=TRUE",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
+
+test_that("Calling /api/formats/{format_id} returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/formats/19",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/formats/{format_id} with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/formats/0",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
\ No newline at end of file
diff --git a/apps/api/tests/test.inputs.R b/apps/api/tests/test.inputs.R
new file mode 100644
index 00000000000..8b22f50f73e
--- /dev/null
+++ b/apps/api/tests/test.inputs.R
@@ -0,0 +1,17 @@
+context("Testing all inputs related endpoints")
+
+test_that("Calling /api/inputs/ with valid parameters returns Status 200", {
+  res <- httr::GET(
+    "http://localhost:8000/api/inputs/?model_id=1000000022&site_id=676",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/inputs/ with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/inputs/?model_id=0&site_id=0",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
\ No newline at end of file

From 0c774ba5d12594a0180aa2fe0acb2545a2af5f98 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Thu, 6 Aug 2020 23:13:52 -0500
Subject: [PATCH 1351/2289] updated changelog

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index bb3ca374651..9eee17b2da2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -49,6 +49,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha

 ### Added

+- New functionality to the PEcAn API to GET information about PFTs, formats & sites, submit workflows in XML or JSON formats & download relevant inputs/outputs/files related to runs & workflows (#2674 #2665 #2662 #2655)
 - Functions to send/receive messages to/from rabbitmq.
 - Documentation in [DEV-INTRO.md](DEV-INTRO.md) on development in a docker environment (#2553)
 - PEcAn API that can be used to talk to PEcAn servers. Endpoints to GET the details about the server that the user is talking to, PEcAn models, workflows & runs. Authentication enabled. 
(#2631) From 91cf94c774af8faa119ad9f72e20f63885b4bed7 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 11:26:03 +0530 Subject: [PATCH 1352/2289] func name changes, use RpTools package in R --- modules/data.remote/NAMESPACE | 2 +- modules/data.remote/R/call_remote_process.R | 422 ----------- modules/data.remote/R/remote_process.R | 693 ++++++++++++++++++ .../inst/RpTools/RpTools/__init__.py | 2 +- .../inst/RpTools/RpTools/appeears2pecan.py | 16 +- .../inst/RpTools/RpTools/bands2lai_snap.py | 9 +- .../inst/RpTools/RpTools/gee2pecan_l8.py | 11 +- .../inst/RpTools/RpTools/gee2pecan_s2.py | 14 +- .../inst/RpTools/RpTools/gee2pecan_smap.py | 7 +- .../inst/RpTools/RpTools/get_remote_data.py | 5 +- .../RpTools/RpTools/process_remote_data.py | 4 +- .../{remote_process.py => rp_control.py} | 5 +- modules/data.remote/inst/test.geojson | 2 +- .../data.remote/man/call_remote_process.Rd | 22 - .../data.remote/man/construct_raw_filename.Rd | 13 +- modules/data.remote/man/remote_process.Rd | 22 + modules/data.remote/man/set_stage.Rd | 2 +- 17 files changed, 772 insertions(+), 479 deletions(-) delete mode 100644 modules/data.remote/R/call_remote_process.R create mode 100644 modules/data.remote/R/remote_process.R rename modules/data.remote/inst/RpTools/RpTools/{remote_process.py => rp_control.py} (98%) delete mode 100644 modules/data.remote/man/call_remote_process.Rd create mode 100644 modules/data.remote/man/remote_process.Rd diff --git a/modules/data.remote/NAMESPACE b/modules/data.remote/NAMESPACE index e2700301711..d5da4a8e230 100644 --- a/modules/data.remote/NAMESPACE +++ b/modules/data.remote/NAMESPACE @@ -1,11 +1,11 @@ # Generated by roxygen2: do not edit by hand export(call_MODIS) -export(call_remote_process) export(download.LandTrendr.AGB) export(download.NLCD) export(download.thredds.AGB) export(extract.LandTrendr.AGB) export(extract_NLCD) +export(remote_process) importFrom(foreach,"%do%") importFrom(foreach,"%dopar%") diff --git a/modules/data.remote/R/call_remote_process.R b/modules/data.remote/R/call_remote_process.R deleted file mode 100644 index 38d34c43957..00000000000 --- a/modules/data.remote/R/call_remote_process.R +++ /dev/null @@ -1,422 +0,0 @@ -##' call the Python - remote_process from PEcAn and store the output in BETY -##' -##' @name call_remote_process -##' @title call_remote_process -##' @export -##' @param settings PEcAn settings list containing remotedata tags: siteid, sitename, raw_mimetype, raw_formatname, geofile, outdir, start, end, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data -##' @examples -##' \dontrun{ -##' call_remote_process(settings) -##' } -##' @author Ayush Prasad -##' - -call_remote_process <- function(settings){ - - # information about the date variables used in call_remote_process - - # req_start, req_end : start, end dates requested by the user, the user does not be aware about the status of the requested file in the DB - # start, end : effective start, end dates created after checking the DB status. 
These dates are sent to remote_process for downloading and processing data - # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB - # the "pro" version of these variables have the same meaning and are used to refer to the processed file - - remote_process <- reticulate::import_from_path("remote_process", file.path("..", "inst")) - remote_process <- reticulate::source_python(file.path("..", "inst", "remote_process.py")) - - input_file <- NULL - stage_get_data <- NULL - stage_process_data <- NULL - raw_merge <- NULL - pro_merge <- NULL - existing_raw_file_path <- NULL - existing_pro_file_path <- NULL - - # extract the variables from the settings list - siteid <- settings$remotedata$siteid - sitename <- settings$remotedata$sitename - raw_mimetype <- settings$remotedata$raw_mimetype - raw_formatname <- settings$remotedata$raw_formatname - geofile <- settings$remotedata$geofile - outdir <- settings$remotedata$outdir - start <- settings$remotedata$start - end <- settings$remotedata$end - source <- settings$remotedata$source - collection <- settings$remotedata$collection - scale <- settings$remotedata$scale - if(!is.null(scale)){ - scale <- as.double(settings$remotedata$scale) - scale <- format(scale, nsmall = 1) - } - projection <- settings$remotedata$projection - qc <- settings$remotedata$qc - if(!is.null(qc)){ - qc <- as.double(settings$remotedata$qc) - qc <- format(qc, nsmall = 1) - } - algorithm <- settings$remotedata$algorithm - credfile <- settings$remotedata$credfile - pro_mimetype <- settings$remotedata$pro_mimetype - pro_formatname <- settings$remotedata$pro_formatname - out_get_data <- settings$remotedata$out_get_data - out_process_data <- settings$remotedata$out_process_data - - - dbcon <- PEcAn.DB::db.open(settings$database$bety) - flag <- 0 - - # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names - collection_lut <- data.frame(stringsAsFactors=FALSE, - original_name = c("LANDSAT/LC08/C01/T1_SR", "COPERNICUS/S2_SR", "NASA_USDA/HSL/SMAP_soil_moisture"), - pecan_code = c("l8", "s2", "smap") - ) - getpecancode <- collection_lut$pecan_code - names(getpecancode) <- collection_lut$original_name - - if(source == "gee"){ - collection = unname(getpecancode[collection]) - } - - req_start <- start - req_end <- end - - raw_file_name = construct_raw_filename(sitename, source, collection, scale, projection, qc) - - # check if any data is already present in the inputs table - existing_data <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) - if(nrow(existing_data) >= 1){ - - # if processed data is requested, example lai - if(!is.null(out_process_data)){ - - # construct processed file name - pro_file_name = paste0(sitename, "_", out_process_data, "_", algorithm) - - # check if processed file exists - if(nrow(pro_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", pro_file_name), dbcon)) == 1){ - datalist <- set_stage(pro_check, req_start, req_end, stage_process_data) - pro_start <- as.character(datalist[[1]]) - pro_end <- as.character(datalist[[2]]) - write_pro_start <- datalist[[5]] - write_pro_end <- datalist[[6]] - if(pro_start != "dont write" || pro_end != "dont write"){ - stage_process_data <- datalist[[3]] - pro_merge <- datalist[[4]] - if(pro_merge == TRUE){ - existing_pro_file_path <- 
pro_check$file_path - } - if(stage_process_data == TRUE){ - # check about the status of raw file - raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon) - if(!is.null(raw_check$start_date) && !is.null(raw_check$end_date)){ - raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- as.character(raw_datalist[[1]]) - end <- as.character(raw_datalist[[2]]) - write_raw_start <- raw_datalist[[5]] - write_raw_end <- raw_datalist[[6]] - stage_get_data <- raw_datalist[[3]] - raw_merge <- raw_datalist[[4]] - if(stage_get_data == FALSE){ - input_file <- raw_check$file_path - } - flag <- 4 - if(raw_merge == TRUE){ - existing_raw_file_path <- raw_check$file_path - } - if(pro_merge == TRUE && stage_get_data == FALSE){ - flag <- 5 - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date - existing_pro_file_path <- NULL - pro_merge <- FALSE - } - }else{ - # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted - flag <- 6 - write_raw_start <- req_start - write_raw_end <- req_end - start <- req_start - end <- req_end - stage_get_data <- TRUE - existing_raw_file_path <- NULL - write_pro_start <- write_raw_start - write_pro_end <- write_raw_end - pro_merge <- "replace" - existing_pro_file_path <- NULL - } - } - } - } - else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) ==1){ - # if the processed file does not exist in the DB check if the raw file required for creating it is present - PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") - datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- datalist[[3]] - if(stage_get_data == FALSE){ - input_file <- raw_check$file_path - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date - flag <- 2 - } - raw_merge <- datalist[[4]] - stage_process_data <- TRUE - pro_merge <- FALSE - if(raw_merge == TRUE || raw_merge == "replace"){ - existing_raw_file_path = raw_check$file_path - flag <- 3 - }else{ - existing_raw_file_path = NULL - } - }else{ - # if no processed or raw file of requested type exists - start <- req_start - end <- req_end - write_raw_start <- req_start - write_raw_end <- req_end - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- TRUE - raw_merge <- FALSE - existing_raw_file_path = NULL - stage_process_data <- TRUE - pro_merge <- FALSE - existing_pro_file_path = NULL - flag <- 1 - } - }else if(nrow(raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) == 1){ - # if only raw data is requested - datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - 
start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - stage_get_data <- datalist[[3]] - raw_merge <- datalist[[4]] - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] - stage_process_data <- FALSE - if(raw_merge == TRUE){ - existing_raw_file_path <- raw_check$file_path - }else{ - existing_raw_file_path <- NULL - } - existing_pro_file_path <- NULL - }else{ - # no data of requested type exists - PEcAn.logger::logger.info("no data of requested type exists") - flag <- 1 - start <- req_start - end <- req_end - if(!is.null(out_get_data)){ - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path = NULL - } - if(!is.null(out_process_data)){ - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE - process_file_name <- NULL - existing_pro_file_path <- NULL - flag <- 1 - } - } - - }else{ - # db is completely empty for the given siteid - PEcAn.logger::logger.info("DB is completely empty for this site") - flag <- 1 - start <- req_start - end <- req_end - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path = NULL - if(!is.null(out_process_data)){ - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE - existing_pro_file_path <- NULL - } - } - - - # call remote_process - output = remote_process(geofile=geofile, outdir=outdir, start=start, end=end, source=source, collection=collection, scale=as.double(scale), projection=projection, qc=as.double(qc), algorithm=algorithm, input_file=input_file, credfile=credfile, out_get_data=out_get_data, out_process_data=out_process_data, stage_get_data=stage_get_data, stage_process_data=stage_process_data, raw_merge=raw_merge, pro_merge=pro_merge, existing_raw_file_path=existing_raw_file_path, existing_pro_file_path=existing_pro_file_path) - - - # inserting data in the DB - if(!is.null(out_process_data)){ - # if the requested processed file already exists within the required timeline dont insert or update the DB - if(as.character(write_pro_start) == "dont write" && as.character(write_pro_end) == "dont write"){ - PEcAn.logger::logger.info("Requested processed file already exists") - }else{ - if(flag == 1){ - # no processed and rawfile are present - PEcAn.logger::logger.info("inserting raw and processed files for the first time") - # insert processed data - PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) - # insert raw file - PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) - }else if(flag == 2){ - # requested processed file does not exist but the raw file used to create it exists within the required timeline - PEcAn.logger::logger.info("inserting processed file for the first time") - PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) - }else if(flag == 3){ - # requested processed file does not exist, raw file used to create it 
is present but has to be updated to match with the requested dates - PEcAn.DB::dbfile.input.insert(in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, startdate = write_pro_start, enddate = write_pro_end, mimetype = pro_mimetype, formatname = pro_formatname, con = dbcon) - raw_id = raw_check$id - PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon) - }else if(flag == 4){ - # requested processed and raw files are present and have to be updated - pro_id = pro_check$id - raw_id = raw_check$id - PEcAn.logger::logger.info("updating processed and raw files") - PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon) - }else if(flag == 5){ - # raw file required for creating the processed file exists and the processed file needs to be updated - pro_id = pro_check$id - PEcAn.logger::logger.info("updating the existing processed file") - PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) - }else if(flag == 6){ - # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file - pro_id = pro_check$id - PEcAn.logger::logger.info("replacing the existing processed file and creating a new raw file") - PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, output$process_data_name, pro_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$process_data_path, output$process_data_name, pro_id), dbcon) - PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate = write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) - } - } - } - else{ - # if the requested raw file already exists within the required timeline dont insert or update the DB - if(as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write"){ - PEcAn.logger::logger.info("Requested raw file already exists") - }else{ - if(flag == 1){ - PEcAn.logger::logger.info(("Inserting raw file for the first time")) - PEcAn.DB::dbfile.input.insert(in.path = output$raw_data_path, in.prefix = output$raw_data_name, siteid = siteid, startdate 
= write_raw_start, enddate = write_raw_end, mimetype = raw_mimetype, formatname = raw_formatname, con = dbcon) - }else{ - PEcAn.logger::logger.info("Updating raw file") - raw_id = raw_check$id - PEcAn.DB::db.query(sprintf("UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, output$raw_data_name, raw_id), dbcon) - PEcAn.DB::db.query(sprintf("UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", output$raw_data_path, output$raw_data_name, raw_id), dbcon) - } - } - } - - PEcAn.DB::db.close(con=dbcon) - -} - - - -##' construct the raw file name -##' -##' @name construct_raw_filename -##' @title construct_raw_filename -##' @param sitename site name -##' @param source source of the remote data -##' @param collection collection or product requested from the source -##' @param scale scale, NULL by default -##' @param projection projection, NULL by default -##' @param qc qc_parameter, NULL by default -##' @return raw_file_name -##' @examples -##' \dontrun{ -##' raw_file_name <- construct_raw_filename( -##' sitename="Reykjavik", -##' source="gee", -##' collection="s2", -##' scale=10.0 -##' projection=NULL -##' qc=1.0) -##' } -##' @author Ayush Prasad -construct_raw_filename <- function(sitename, source, collection, scale=NULL, projection=NULL, qc=NULL){ - # use NA if a parameter is not applicable and is NULL - if(is.null(scale)){ - scale <- "NA" - }else{ - scale <- format(scale, nsmall = 1) - } - if(is.null(projection)){ - projection <- "NA" - } - if(is.null(qc)){ - qc <- "NA" - }else{ - qc <- format(qc, nsmall = 1) - } - raw_file_name <- paste(sitename, source, collection, scale, projection, qc, sep = "_") - return(raw_file_name) -} - - - -##' set dates, stage and merge status for remote data download -##' -##' @name set_stage -##' @title set_stage -##' @param result dataframe containing id, site_id, name, start_date, end_date from inputs table and file_path from dbfiles table -##' @param req_start start date requested by the user -##' @param req_end end date requested by the user -##' @param stage the stage which needs to be set, get_remote_data or process_remote_data -##' @return list containing req_start, req_end, stage, merge, write_start, write_end -##' @examples -##' \dontrun{ -##' raw_check <- set_stage( -##' result, -##' req_start, -##' req_end, -##' get_remote_data) -##' } -##' @author Ayush Prasad -set_stage <- function(result, req_start, req_end, stage){ - db_start = as.Date(result$start_date) - db_end = as.Date(result$end_date) - req_start = as.Date(req_start) - req_end = as.Date(req_end) - stage <- TRUE - merge <- TRUE - - # data already exists - if((req_start >= db_start) && (req_end <= db_end)){ - req_start <- "dont write" - req_end <- "dont write" - stage <- FALSE - merge <- FALSE - write_start <- "dont write" - write_end <- "dont write" - }else if(req_start < db_start && db_end < req_end){ - # data has to be replaced - merge <- "replace" - write_start <-req_start - write_end <-req_end - stage <- TRUE - }else if((req_start > db_start) && (req_end > db_end)){ - # forward case - req_start <- db_end + 1 - write_start <- db_start - write_end <- req_end - }else if((req_start < db_start) && (req_end < db_end)) { - # backward case - req_end <- db_start - 1 - write_end <- db_end - write_start <- req_start - } - return (list(req_start, req_end, stage, merge, write_start, write_end)) - -} \ No newline at end of file diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R new file 
mode 100644
index 00000000000..408e341721b
--- /dev/null
+++ b/modules/data.remote/R/remote_process.R
@@ -0,0 +1,693 @@
+##' Call rp_control (from the RpTools Python package) from the PEcAn workflow and store the output in BETY
+##'
+##' @name remote_process
+##' @title remote_process
+##' @export
+##' @param settings PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, geofile, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data
+##' @examples
+##' \dontrun{
+##' remote_process(settings)
+##' }
+##' @author Ayush Prasad
+##'

+remote_process <- function(settings) {
+  # information about the date variables used in remote_process -
+  # req_start, req_end : start, end dates requested by the user, who does not have to be aware of the status of the requested file in the DB
+  # start, end : effective start, end dates created after checking the DB status. These dates are sent to rp_control for downloading and processing data
+  # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB
+  # e.g. if the DB already holds 2012-2015 and the user requests 2012-2017, start becomes 2016-01-01 (only the missing period is downloaded) while write_raw_start stays at 2012-01-01
+  # the "pro" versions of these variables have the same meaning and refer to the processed file

+  RpTools <- reticulate::import("RpTools")

+  input_file <- NULL
+  stage_get_data <- NULL
+  stage_process_data <- NULL
+  raw_merge <- NULL
+  pro_merge <- NULL
+  existing_raw_file_path <- NULL
+  existing_pro_file_path <- NULL

+  # extract the variables from the settings list
+  siteid <- as.numeric(settings$run$site$id)
+  siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09)
+  raw_mimetype <- settings$remotedata$raw_mimetype
+  raw_formatname <- settings$remotedata$raw_formatname
+  geofile <- settings$remotedata$geofile
+  outdir <- settings$outdir
+  start <- as.character(as.Date(settings$run$start.date))
+  end <- as.character(as.Date(settings$run$end.date))
+  source <- settings$remotedata$source
+  collection <- settings$remotedata$collection
+  scale <- settings$remotedata$scale
+  if (!is.null(scale)) {
+    scale <- as.double(settings$remotedata$scale)
+    scale <- format(scale, nsmall = 1)
+  }
+  projection <- settings$remotedata$projection
+  qc <- settings$remotedata$qc
+  if (!is.null(qc)) {
+    qc <- as.double(settings$remotedata$qc)
+    qc <- format(qc, nsmall = 1)
+  }
+  algorithm <- settings$remotedata$algorithm
+  credfile <- settings$remotedata$credfile
+  pro_mimetype <- settings$remotedata$pro_mimetype
+  pro_formatname <- settings$remotedata$pro_formatname
+  out_get_data <- settings$remotedata$out_get_data
+  out_process_data <- settings$remotedata$out_process_data


+  dbcon <- PEcAn.DB::db.open(settings$database$bety)
+  on.exit(PEcAn.DB::db.close(dbcon), add = TRUE)

+  flag <- 0

+  # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names
+  collection_lut <- data.frame(
+    stringsAsFactors = FALSE,
+    original_name = c(
+      "LANDSAT/LC08/C01/T1_SR",
+      "COPERNICUS/S2_SR",
+      "NASA_USDA/HSL/SMAP_soil_moisture"
+    ),
+    pecan_code = c("l8", "s2", "smap")
+  )
+  getpecancode <- collection_lut$pecan_code
+  names(getpecancode) <- collection_lut$original_name

+  if (source == "gee") {
+    collection = unname(getpecancode[collection])
+  }

+  req_start <- start
+  req_end <- end

+  raw_file_name = construct_raw_filename(collection, siteid_short, scale, projection, qc)

+  # check if any data is already present in the inputs table
+  existing_data <-
+    PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), 
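+    # (the rows found here drive the staging logic below: they decide whether
+    # data is downloaded afresh, merged with an existing file, or reused as-is)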
dbcon) + if (nrow(existing_data) >= 1) { + # if processed data is requested, example LAI + if (!is.null(out_process_data)) { + # construct processed file name + pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) + + # check if processed file exists + if (nrow(pro_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + pro_file_name + ), + dbcon + )) == 1) { + datalist <- + set_stage(pro_check, req_start, req_end, stage_process_data) + pro_start <- as.character(datalist[[1]]) + pro_end <- as.character(datalist[[2]]) + write_pro_start <- datalist[[5]] + write_pro_end <- datalist[[6]] + if (pro_start != "dont write" || pro_end != "dont write") { + stage_process_data <- datalist[[3]] + pro_merge <- datalist[[4]] + if (pro_merge == TRUE) { + existing_pro_file_path <- pro_check$file_path + } + if (stage_process_data == TRUE) { + # check about the status of raw file + raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + ) + if (!is.null(raw_check$start_date) && + !is.null(raw_check$end_date)) { + raw_datalist <- + set_stage(raw_check, pro_start, pro_end, stage_get_data) + start <- as.character(raw_datalist[[1]]) + end <- as.character(raw_datalist[[2]]) + write_raw_start <- raw_datalist[[5]] + write_raw_end <- raw_datalist[[6]] + stage_get_data <- raw_datalist[[3]] + raw_merge <- raw_datalist[[4]] + if (stage_get_data == FALSE) { + input_file <- raw_check$file_path + } + flag <- 4 + if (raw_merge == TRUE) { + existing_raw_file_path <- raw_check$file_path + } + if (pro_merge == TRUE && stage_get_data == FALSE) { + flag <- 5 + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + existing_pro_file_path <- NULL + pro_merge <- FALSE + } + } else{ + # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted + flag <- 6 + write_raw_start <- req_start + write_raw_end <- req_end + start <- req_start + end <- req_end + stage_get_data <- TRUE + existing_raw_file_path <- NULL + write_pro_start <- write_raw_start + write_pro_end <- write_raw_end + pro_merge <- "replace" + existing_pro_file_path <- NULL + } + } + } else{ + # requested file already exists + pro_id <- pro_check$id + pro_path <- pro_check$file_path + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + raw_id <- raw_check$id + raw_path <- raw_check$file_path + } + } + } + else if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + # if the processed file does not exist in the DB check if the raw file required for creating it is present + PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the 
raw file does") + datalist <- + set_stage(raw_check, req_start, req_end, stage_get_data) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- datalist[[3]] + if (stage_get_data == FALSE) { + input_file <- raw_check$file_path + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + flag <- 2 + } + raw_merge <- datalist[[4]] + stage_process_data <- TRUE + pro_merge <- FALSE + if (raw_merge == TRUE || raw_merge == "replace") { + existing_raw_file_path = raw_check$file_path + flag <- 3 + } else{ + existing_raw_file_path = NULL + } + } else{ + # if no processed or raw file of requested type exists + start <- req_start + end <- req_end + write_raw_start <- req_start + write_raw_end <- req_end + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- TRUE + raw_merge <- FALSE + existing_raw_file_path = NULL + stage_process_data <- TRUE + pro_merge <- FALSE + existing_pro_file_path = NULL + flag <- 1 + } + } else if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + # if only raw data is requested + datalist <- + set_stage(raw_check, req_start, req_end, stage_get_data) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + stage_get_data <- datalist[[3]] + raw_merge <- datalist[[4]] + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + stage_process_data <- FALSE + if(as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write"){ + raw_id <- raw_check$id + raw_path <- raw_check$file_path + } + if (raw_merge == TRUE) { + existing_raw_file_path <- raw_check$file_path + } else{ + existing_raw_file_path <- NULL + } + existing_pro_file_path <- NULL + } else{ + # no data of requested type exists + PEcAn.logger::logger.info("no data of requested type exists") + flag <- 1 + start <- req_start + end <- req_end + if (!is.null(out_get_data)) { + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path = NULL + } + if (!is.null(out_process_data)) { + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + process_file_name <- NULL + existing_pro_file_path <- NULL + flag <- 1 + } + } + + } else{ + # db is completely empty for the given siteid + PEcAn.logger::logger.info("DB is completely empty for this site") + flag <- 1 + start <- req_start + end <- req_end + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path = NULL + if (!is.null(out_process_data)) { + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + existing_pro_file_path <- NULL + } + } + + + # call remote_process + output = RpTools$rp_control( + geofile = geofile, + outdir = outdir, + start = start, + end = end, + source = source, + collection = collection, + siteid = siteid_short, + scale = as.double(scale), + projection = projection, + qc = as.double(qc), + algorithm = algorithm, + input_file = input_file, + credfile = credfile, + out_get_data = out_get_data, + out_process_data = 
out_process_data, + stage_get_data = stage_get_data, + stage_process_data = stage_process_data, + raw_merge = raw_merge, + pro_merge = pro_merge, + existing_raw_file_path = existing_raw_file_path, + existing_pro_file_path = existing_pro_file_path + ) + + print("hi") + print(raw_check) + + # inserting data in the DB + if (!is.null(out_process_data)) { + # if the requested processed file already exists within the required timeline dont insert or update the DB + if (as.character(write_pro_start) == "dont write" && + as.character(write_pro_end) == "dont write") { + PEcAn.logger::logger.info("Requested processed file already exists") + } else{ + if (flag == 1) { + # no processed and rawfile are present + PEcAn.logger::logger.info("inserting raw and processed files for the first time") + # insert processed data + pro_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, + formatname = pro_formatname, + con = dbcon + ) + # insert raw file + raw_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, + formatname = raw_formatname, + con = dbcon + ) + pro_id <- pro_ins$input.id + raw_id <- raw_ins$input.id + } else if (flag == 2) { + # requested processed file does not exist but the raw file used to create it exists within the required timeline + PEcAn.logger::logger.info("inserting processed file for the first time") + pro_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, + formatname = pro_formatname, + con = dbcon + ) + raw_id <- raw_check$id + pro_id <- pro$input.id + } else if (flag == 3) { + # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates + PEcAn.DB::dbfile.input.insert( + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, + formatname = pro_formatname, + con = dbcon + ) + raw_id = raw_check$id + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_raw_start, + write_raw_end, + output$raw_data_name, + raw_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$raw_data_path, + output$raw_data_name, + raw_id + ), + dbcon + ) + } else if (flag == 4) { + # requested processed and raw files are present and have to be updated + pro_id <- pro_check$id + raw_id <- raw_check$id + raw_path <- output$raw_data_path + PEcAn.logger::logger.info("updating processed and raw files") + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_pro_start, + write_pro_end, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$process_data_path, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', 
name='%s' WHERE id=%f", + write_raw_start, + write_raw_end, + output$raw_data_name, + raw_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$raw_data_path, + output$raw_data_name, + raw_id + ), + dbcon + ) + } else if (flag == 5) { + # raw file required for creating the processed file exists and the processed file needs to be updated + pro_id = pro_check$id + PEcAn.logger::logger.info("updating the existing processed file") + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_pro_start, + write_pro_end, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$process_data_path, + output$process_data_name, + pro_id + ), + dbcon + ) + } else if (flag == 6) { + # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file + pro_id = pro_check$id + PEcAn.logger::logger.info("replacing the existing processed file and creating a new raw file") + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_pro_start, + write_pro_end, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$process_data_path, + output$process_data_name, + pro_id + ), + dbcon + ) + raw_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, + formatname = raw_formatname, + con = dbcon + ) + raw_id <- raw_ins$input.id + } + } + } + else{ + # if the requested raw file already exists within the required timeline dont insert or update the DB + if (as.character(write_raw_start) == "dont write" && + as.character(write_raw_end) == "dont write") { + PEcAn.logger::logger.info("Requested raw file already exists") + raw_id = raw_check$id + raw_path = raw_check$file_path + print(raw_path) + } else{ + if (flag == 1) { + PEcAn.logger::logger.info(("Inserting raw file for the first time")) + raw_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, + formatname = raw_formatname, + con = dbcon + ) + raw_id <- raw_ins$input.id + } else{ + PEcAn.logger::logger.info("Updating raw file") + raw_id <- raw_check$id + raw_path <- output$raw_data_path + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_raw_start, + write_raw_end, + output$raw_data_name, + raw_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$raw_data_path, + output$raw_data_name, + raw_id + ), + dbcon + ) + } + } + } + + + + + if (!is.null(out_get_data)) { + settings$remotedata$raw_id <- raw_id + settings$remotedata$raw_path <- raw_path + } + + if(!is.null(out_process_data)){ + settings$remotedata$pro_id <- pro_id + settings$remotedata$pro_path <- pro_path + } + return (settings) +} + + + +##' construct the raw file name +##' +##' @name construct_raw_filename +##' 
@title construct_raw_filename +##' @param sitename site name +##' @param source source of the remote data +##' @param collection collection or product requested from the source +##' @param scale scale, NULL by default +##' @param projection projection, NULL by default +##' @param qc qc_parameter, NULL by default +##' @return raw_file_name +##' @examples +##' \dontrun{ +##' raw_file_name <- construct_raw_filename( +##' sitename="Reykjavik", +##' source="gee", +##' collection="s2", +##' scale=10.0 +##' projection=NULL +##' qc=1.0) +##' } +##' @author Ayush Prasad +construct_raw_filename <- + function(collection, + siteid, + scale = NULL, + projection = NULL, + qc = NULL) { + # use NA if a parameter is not applicable and is NULL + if (is.null(scale)) { + scale <- "NA" + } else{ + scale <- format(scale, nsmall = 1) + } + if (is.null(projection)) { + projection <- "NA" + } + if (is.null(qc)) { + qc <- "NA" + } else{ + qc <- format(qc, nsmall = 1) + } + raw_file_name <- + paste(collection, scale, projection, qc, "site", siteid, sep = "_") + return(raw_file_name) + } + + + +##' set dates, stage and merge status for remote data download +##' +##' @name set_stage +##' @title set_stage +##' @param result dataframe containing id, site_id, name, start_date, end_date from inputs table and file_path from dbfiles table +##' @param req_start start date requested by the user +##' @param req_end end date requested by the user +##' @param stage the stage which needs to be set, get_remote_data or process_remote_data +##' @return list containing req_start, req_end, stage, merge, write_start, write_end +##' @examples +##' \dontrun{ +##' raw_check <- set_stage( +##' result, +##' req_start, +##' req_end, +##' get_remote_data) +##' } +##' @author Ayush Prasad +set_stage <- function(result, req_start, req_end, stage) { + db_start = as.Date(result$start_date) + db_end = as.Date(result$end_date) + req_start = as.Date(req_start) + req_end = as.Date(req_end) + stage <- TRUE + merge <- TRUE + + # data already exists + if ((req_start >= db_start) && (req_end <= db_end)) { + req_start <- "dont write" + req_end <- "dont write" + stage <- FALSE + merge <- FALSE + write_start <- "dont write" + write_end <- "dont write" + } else if (req_start < db_start && db_end < req_end) { + # data has to be replaced + merge <- "replace" + write_start <- req_start + write_end <- req_end + stage <- TRUE + } else if ((req_start > db_start) && (req_end > db_end)) { + # forward case + req_start <- db_end + 1 + write_start <- db_start + write_end <- req_end + } else if ((req_start < db_start) && (req_end < db_end)) { + # backward case + req_end <- db_start - 1 + write_end <- db_end + write_start <- req_start + } + return (list(req_start, req_end, stage, merge, write_start, write_end)) + +} \ No newline at end of file diff --git a/modules/data.remote/inst/RpTools/RpTools/__init__.py b/modules/data.remote/inst/RpTools/RpTools/__init__.py index 6895e7ce580..bee700c9a57 100644 --- a/modules/data.remote/inst/RpTools/RpTools/__init__.py +++ b/modules/data.remote/inst/RpTools/RpTools/__init__.py @@ -1,3 +1,3 @@ -from RpTools.remote_process import remote_process +from RpTools.rp_control import rp_control from RpTools.get_remote_data import get_remote_data from RpTools.process_remote_data import process_remote_data diff --git a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py index 2006c864900..9fa681a957e 100644 --- a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py +++ 
b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py
@@ -18,7 +18,7 @@
 import os
 import cgi
 import json
-from . gee_utils import get_sitename
+from .gee_utils import get_sitename
 from datetime import datetime
 from warnings import warn
 import os.path
@@ -26,7 +26,7 @@
 
 
 def appeears2pecan(
-    geofile, outdir, start, end, product, projection=None, credfile=None
+    geofile, outdir, start, end, product, projection=None, credfile=None, siteid=None
 ):
     """
     Downloads remote sensing data from AppEEARS
@@ -224,17 +224,21 @@ def authenticate():
             if os.path.splitext(filename)[1][1:] == outformat:
                 break
 
+    if siteid is None:
+        siteid = site_name
+
    if projection is None:
        projection = "NA"

    timestamp = time.strftime("%y%m%d%H%M%S")
    save_path = os.path.join(
        outdir,
-        site_name
-        + "_appeears_"
-        + product
-        + "_NA_"
+        product
+        + "_NA_"
        + projection
        + "_NA_"
+        + "site_"
+        + siteid
+        + "_"
        + timestamp
        + "."
        + outformat,
diff --git a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py
index 4400df908e7..4b2173ee09b 100644
--- a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py
+++ b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py
@@ -15,7 +15,7 @@
 import os
 import time
 
-def bands2lai_snap(inputfile, outdir):
+def bands2lai_snap(inputfile, outdir, siteid):
     """
     Calculates LAI for the input netCDF file and saves it in a new netCDF file.
 
@@ -47,10 +47,13 @@ def bands2lai_snap(inputfile, outdir):
     os.makedirs(outdir, exist_ok=True)
 
     timestamp = time.strftime("%y%m%d%H%M%S")
+    
+    if siteid is None:
+        siteid = area.name
 
-    save_path = os.path.join(outdir, area.name + "_lai_snap_" + timestamp + ".nc")
+    save_path = os.path.join(outdir, "snap_lai_site_" + siteid + "_" + timestamp + ".nc")
 
     # creating a timerseries and saving the netCDF file
     area.to_netcdf(save_path)
     timeseries[area.name] = xr_dataset_to_timeseries(area, timeseries_variable)
 
-    return os.path.abspath(save_path)
\ No newline at end of file
+    return os.path.abspath(save_path)
diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py
index 5d812a01a6b..3c2aff70e95 100644
--- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py
+++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py
@@ -23,7 +23,7 @@
 ee.Initialize()
 
 
-def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1):
+def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1, siteid=None):
     """
     Extracts Landsat 8 SR band data from GEE
 
@@ -171,6 +171,9 @@ def eachf2dict(f):
         "qc parameter": qc,
         },
     )
+    
+    if siteid is None:
+        siteid = site_name
 
     # if specified path does not exist create it
     if not os.path.exists(outdir):
@@ -180,12 +183,14 @@ def eachf2dict(f):
     timestamp = time.strftime("%y%m%d%H%M%S")
     filepath = os.path.join(
         outdir,
-        site_name
-        + "_gee_l8_"
+        "l8_"
         + str(scale)
         + "_NA_"
         + str(qc)
         + "_"
+        + "site_"
+        + siteid
+        + "_"
         + timestamp
         + ".nc",
     )
diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py
index 286268b9d33..bb9a7061fef 100644
--- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py
+++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py
@@ -755,7 +755,7 @@ def xr_dataset_to_timeseries(xr_dataset, variables):
     return df
 
 
-def gee2pecan_s2(geofile, outdir, start, end, scale, qi_threshold):
+def gee2pecan_s2(geofile, outdir, start, end, scale, qc, siteid=None):
     """
     Downloads Sentinel 2 data from gee and saves it in a netCDF file at
the specified location.
@@ -793,22 +793,26 @@ def gee2pecan_s2(geofile, outdir, start, end, scale, qi_threshold):
     ee_get_s2_quality_info(area, request)
 
     # get the final data
-    ee_get_s2_data(area, request, qi_threshold=qi_threshold)
+    ee_get_s2_data(area, request, qi_threshold=qc)
 
     # convert dataframe to an xarray dataset, used later for converting to netCDF
     s2_data_to_xarray(area, request)
 
     # if specified output directory does not exist, create it
+    if siteid is None:
+        siteid = area.name
     if not os.path.exists(outdir):
         os.makedirs(outdir, exist_ok=True)
 
     timestamp = time.strftime("%y%m%d%H%M%S")
     save_path = os.path.join(
         outdir,
-        area.name
-        + "_gee_s2_"
+        "s2_"
         + str(scale)
         + "_NA_"
-        + str(qi_threshold)
+        + str(qc)
+        + "_"
+        + "site_"
+        + siteid
         + "_"
         + timestamp
         + ".nc",
diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py
index 90e25188e70..eb0bf1ec863 100644
--- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py
+++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py
@@ -18,7 +18,7 @@
 ee.Initialize()
 
 
-def gee2pecan_smap(geofile, outdir, start, end):
+def gee2pecan_smap(geofile, outdir, start, end, siteid=None):
     """
     Downloads and saves SMAP data from GEE
 
@@ -151,6 +151,9 @@ def fc2dataframe(fc):
         datadf, coords=coords, attrs={"site_name": site_name, "AOI": AOI,},
     )
 
+    if siteid is None:
+        siteid = site_name
+
     # # if specified output path does not exist create it
     if not os.path.exists(outdir):
         os.makedirs(outdir, exist_ok=True)
@@ -158,7 +161,7 @@ def fc2dataframe(fc):
     timestamp = time.strftime("%y%m%d%H%M%S")
 
     filepath = os.path.join(
-        outdir, site_name + "_gee_smap_" + "NA_NA_NA_" + timestamp + ".nc"
+        outdir, "smap_" + "NA_NA_NA_" + "site_" + siteid + "_" + timestamp + ".nc"
     )
 
     # convert to netCDF and save the file
diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
index b7f685e0446..2ec3af3c08f 100644
--- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
+++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
@@ -22,6 +22,7 @@ def get_remote_data(
     end,
     source,
     collection,
+    siteid=None,
     scale=None,
     projection=None,
     qc=None,
@@ -69,10 +70,10 @@ def get_remote_data(
     func = getattr(module, func_name)
     # if a qc parameter is specified pass these arguments to the function
     if qc:
-        get_datareturn_path = func(geofile, outdir, start, end, scale, qc)
+        get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, scale=scale, qc=qc, siteid=siteid)
     # this part takes care of functions which do not perform any quality checks, e.g.
SMAP else: - get_datareturn_path = func(geofile, outdir, start, end) + get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, siteid=siteid) if source == "appeears": get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py index 7e0a7ef69fd..99bc868dfee 100644 --- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py @@ -11,7 +11,7 @@ import os import time -def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge=None, existing_pro_file_path=None): +def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, siteid=None, pro_merge=None, existing_pro_file_path=None): """ uses processing functions to perform computation on input data @@ -42,7 +42,7 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori # import the function from the module func = getattr(module, func_name) # call the function - process_datareturn_path = func(input_file, outdir) + process_datareturn_path = func(input_file, outdir, siteid) if pro_merge == True and pro_merge != "replace": try: diff --git a/modules/data.remote/inst/RpTools/RpTools/remote_process.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py similarity index 98% rename from modules/data.remote/inst/RpTools/RpTools/remote_process.py rename to modules/data.remote/inst/RpTools/RpTools/rp_control.py index 2e82ed9bd04..706cab9b8c1 100644 --- a/modules/data.remote/inst/RpTools/RpTools/remote_process.py +++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py @@ -15,13 +15,14 @@ import os -def remote_process( +def rp_control( geofile, outdir, start, end, source, collection, + siteid=None, scale=None, projection=None, qc=None, @@ -87,6 +88,7 @@ def remote_process( end, source, collection, + siteid, scale, projection, qc, @@ -106,6 +108,7 @@ def remote_process( outdir, algorithm, input_file, + siteid, pro_merge, existing_pro_file_path, ) diff --git a/modules/data.remote/inst/test.geojson b/modules/data.remote/inst/test.geojson index 9a890595d4c..850f0233d8e 100644 --- a/modules/data.remote/inst/test.geojson +++ b/modules/data.remote/inst/test.geojson @@ -4,7 +4,7 @@ { "type": "Feature", "properties": { - "name": "Reykjavik" + "name": "youtube" }, "geometry": { "type": "Polygon", diff --git a/modules/data.remote/man/call_remote_process.Rd b/modules/data.remote/man/call_remote_process.Rd deleted file mode 100644 index 31e0c658afc..00000000000 --- a/modules/data.remote/man/call_remote_process.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/call_remote_process.R -\name{call_remote_process} -\alias{call_remote_process} -\title{call_remote_process} -\usage{ -call_remote_process(settings) -} -\arguments{ -\item{settings}{PEcAn settings list containing remotedata tags: siteid, sitename, raw_mimetype, raw_formatname, geofile, outdir, start, end, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data} -} -\description{ -call the Python - remote_process from PEcAn and store the output in BETY -} -\examples{ -\dontrun{ -call_remote_process(settings) -} -} -\author{ -Ayush Prasad -} diff --git 
a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_raw_filename.Rd index 57bf84f94b9..500195ccf25 100644 --- a/modules/data.remote/man/construct_raw_filename.Rd +++ b/modules/data.remote/man/construct_raw_filename.Rd @@ -1,23 +1,18 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/call_remote_process.R +% Please edit documentation in R/remote_process.R \name{construct_raw_filename} \alias{construct_raw_filename} \title{construct_raw_filename} \usage{ construct_raw_filename( - sitename, - source, collection, + siteid, scale = NULL, projection = NULL, qc = NULL ) } \arguments{ -\item{sitename}{site name} - -\item{source}{source of the remote data} - \item{collection}{collection or product requested from the source} \item{scale}{scale, NULL by default} @@ -25,6 +20,10 @@ construct_raw_filename( \item{projection}{projection, NULL by default} \item{qc}{qc_parameter, NULL by default} + +\item{sitename}{site name} + +\item{source}{source of the remote data} } \value{ raw_file_name diff --git a/modules/data.remote/man/remote_process.Rd b/modules/data.remote/man/remote_process.Rd new file mode 100644 index 00000000000..4773d5ba7aa --- /dev/null +++ b/modules/data.remote/man/remote_process.Rd @@ -0,0 +1,22 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/remote_process.R +\name{remote_process} +\alias{remote_process} +\title{remote_process} +\usage{ +remote_process(settings) +} +\arguments{ +\item{settings}{PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, geofile, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data} +} +\description{ +call rp_control (from RpTools py package) from PEcAn workflow and store the output in BETY +} +\examples{ +\dontrun{ +remote_process(settings) +} +} +\author{ +Ayush Prasad +} diff --git a/modules/data.remote/man/set_stage.Rd b/modules/data.remote/man/set_stage.Rd index a957d75a59d..23b787f4646 100644 --- a/modules/data.remote/man/set_stage.Rd +++ b/modules/data.remote/man/set_stage.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/call_remote_process.R +% Please edit documentation in R/remote_process.R \name{set_stage} \alias{set_stage} \title{set_stage} From b1c921439fb297fe0670ad0ff5de095541aaaabf Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 11:29:26 +0530 Subject: [PATCH 1353/2289] remove print --- modules/data.remote/R/remote_process.R | 3 --- 1 file changed, 3 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 408e341721b..1eb078e5584 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -329,8 +329,6 @@ remote_process <- function(settings) { existing_pro_file_path = existing_pro_file_path ) - print("hi") - print(raw_check) # inserting data in the DB if (!is.null(out_process_data)) { @@ -528,7 +526,6 @@ remote_process <- function(settings) { PEcAn.logger::logger.info("Requested raw file already exists") raw_id = raw_check$id raw_path = raw_check$file_path - print(raw_path) } else{ if (flag == 1) { PEcAn.logger::logger.info(("Inserting raw file for the first time")) From 134b3af9915a6961b880963b49da05e54ec1899b Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Fri, 7 Aug 2020 06:25:35 +0000 Subject: [PATCH 1354/2289] added format variables info to formats details response --- 
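A usage sketch rather than part of the committed change: with format_variables
joined into the format details response, a client can read the variable
records directly. The endpoint, credentials, and format id below are the same
examples used in the docs and tests elsewhere in this series.

```R
res <- httr::GET(
  "http://localhost:8000/api/formats/19",
  httr::authenticate("carya", "illinois")
)
format_details <- jsonlite::fromJSON(rawToChar(res$content))
# one row per formats_variables record: description, name, unit
format_details$format_variables
```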
apps/api/R/formats.R | 15 ++++++++++++++- apps/api/pecanapi-spec.yml | 2 +- .../07_remote_access/01_pecan_api.Rmd | 19 ++++++++++++++++++- 3 files changed, 33 insertions(+), 3 deletions(-) diff --git a/apps/api/R/formats.R b/apps/api/R/formats.R index 47930e7b110..6947634c36c 100644 --- a/apps/api/R/formats.R +++ b/apps/api/R/formats.R @@ -19,9 +19,9 @@ getFormat <- function(format_id, res){ select(-mimetype_id) qry_res <- Format %>% collect() - PEcAn.DB::db.close(dbcon) if (nrow(qry_res) == 0) { + PEcAn.DB::db.close(dbcon) res$status <- 404 return(list(error="Format not found")) } @@ -32,6 +32,19 @@ getFormat <- function(format_id, res){ response[colname] <- qry_res[colname] } + format_vars <- tbl(dbcon, "formats_variables") %>% + select(name, unit, format_id, variable_id) %>% + filter(format_id == !!format_id) + format_vars <- tbl(dbcon, "variables") %>% + select(variable_id = id, description, units) %>% + inner_join(format_vars, by="variable_id") %>% + mutate(unit = ifelse(unit %in% "", units, unit)) %>% + select(-variable_id, -format_id, -units) %>% + collect() + + PEcAn.DB::db.close(dbcon) + + response$format_variables <- format_vars return(response) } } diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 5748bf94744..970c5356a44 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -3,7 +3,7 @@ servers: - description: PEcAn API Server url: https://pecan-dev.ncsa.illinois.edu - description: PEcAn Development Server - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/58af05f9 + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/a433c80c - description: PEcAn API Test Server url: https://pecan-tezan.ncsa.illinois.edu - description: Localhost diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 7f0fc442736..09f636ebf2a 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -638,6 +638,15 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ## $header ## [1] "" + +## $format_variables +## description name unit +## 1 Latent heat flux LE_f W m-2 +## 2 Sensible heat flux H_f W m-2 +## 3 air temperature Ta_f degrees C +## 4 Vapor Pressure Deficit VPD_f Pa +## 5 Cumulative ecosystem respiration over a specified time step Reco umol C02 m-2 s-1 +## 6 Net ecosystem exchange NEE_st_fMDS umol C m-2 s-1 ``` #### Python Snippet @@ -656,7 +665,15 @@ print(json.dumps(response.json(), indent=2)) ## "format_id": 19, ## "name": "AmeriFlux.level4.h", ## "notes": "Half-hourly AmeriFlux level 4 gap filled, partitioned, and flagged flux tower data. Variables description: ## Level 4 data are obtained from the level 3 products, data are ustar filtered, gap-filled using different methods (ANN and ## MDS) and partitioned (i.e. NEE, GPP, and Re). Flags with information regarding quality of the original and gapfilled data ## are added. Missing values: -9999.", -## "header": "" +## "header": "", +## "format_variables": [ +## { +## "description": "Latent heat flux", +## "name": "LE_f", +## "unit": "W m-2" +## }, +## ... 
+## ] ## } ``` From 53ec8f8910eda2fb09e5e493a0d26f094b062b44 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 12:00:10 +0530 Subject: [PATCH 1355/2289] more fixes --- modules/data.remote/R/remote_process.R | 26 ++++++++++++++++++-------- 1 file changed, 18 insertions(+), 8 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 1eb078e5584..2267f65b01f 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -366,6 +366,8 @@ remote_process <- function(settings) { ) pro_id <- pro_ins$input.id raw_id <- raw_ins$input.id + pro_path <- output$process_data_path + raw_path <- output$raw_data_path } else if (flag == 2) { # requested processed file does not exist but the raw file used to create it exists within the required timeline PEcAn.logger::logger.info("inserting processed file for the first time") @@ -381,10 +383,12 @@ remote_process <- function(settings) { con = dbcon ) raw_id <- raw_check$id - pro_id <- pro$input.id + raw_path <- raw_check$file_path + pro_id <- pro_ins$input.id + pro_path <- output$pro_data_path } else if (flag == 3) { # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates - PEcAn.DB::dbfile.input.insert( + pro_ins <- PEcAn.DB::dbfile.input.insert( in.path = output$process_data_path, in.prefix = output$process_data_name, siteid = siteid, @@ -414,11 +418,14 @@ remote_process <- function(settings) { ), dbcon ) + pro_id <- pro_ins$input.id + pro_path <- output$pro_data_path } else if (flag == 4) { # requested processed and raw files are present and have to be updated pro_id <- pro_check$id raw_id <- raw_check$id raw_path <- output$raw_data_path + pro_path <- output$pro_data_path PEcAn.logger::logger.info("updating processed and raw files") PEcAn.DB::db.query( sprintf( @@ -460,7 +467,10 @@ remote_process <- function(settings) { ) } else if (flag == 5) { # raw file required for creating the processed file exists and the processed file needs to be updated - pro_id = pro_check$id + pro_id <- pro_check$id + pro_path <- output$pro_data_path + raw_id <- raw_check$id + raw_path <- raw_check$file_path PEcAn.logger::logger.info("updating the existing processed file") PEcAn.DB::db.query( sprintf( @@ -483,7 +493,9 @@ remote_process <- function(settings) { ) } else if (flag == 6) { # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file - pro_id = pro_check$id + pro_id <- pro_check$id + pro_path <- output$pro_data_path + raw_path <-output$raw_data_path PEcAn.logger::logger.info("replacing the existing processed file and creating a new raw file") PEcAn.DB::db.query( sprintf( @@ -589,9 +601,8 @@ remote_process <- function(settings) { ##' ##' @name construct_raw_filename ##' @title construct_raw_filename -##' @param sitename site name -##' @param source source of the remote data ##' @param collection collection or product requested from the source +##' @param siteid shortform of siteid ##' @param scale scale, NULL by default ##' @param projection projection, NULL by default ##' @param qc qc_parameter, NULL by default @@ -599,9 +610,8 @@ remote_process <- function(settings) { ##' @examples ##' \dontrun{ ##' raw_file_name <- construct_raw_filename( -##' sitename="Reykjavik", -##' source="gee", ##' collection="s2", +##' siteid=721, ##' scale=10.0 ##' projection=NULL ##' qc=1.0) From 
9fd5ecc9904e94ee531837cdba63335b7b9e11e5 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 12:19:32 +0530 Subject: [PATCH 1356/2289] out of date rd error --- modules/data.remote/R/remote_process.R | 1 + modules/data.remote/man/construct_raw_filename.Rd | 9 +++------ 2 files changed, 4 insertions(+), 6 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 2267f65b01f..c2cf14c6f26 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -420,6 +420,7 @@ remote_process <- function(settings) { ) pro_id <- pro_ins$input.id pro_path <- output$pro_data_path + print(pro_path) } else if (flag == 4) { # requested processed and raw files are present and have to be updated pro_id <- pro_check$id diff --git a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_raw_filename.Rd index 500195ccf25..325c9d6e546 100644 --- a/modules/data.remote/man/construct_raw_filename.Rd +++ b/modules/data.remote/man/construct_raw_filename.Rd @@ -15,15 +15,13 @@ construct_raw_filename( \arguments{ \item{collection}{collection or product requested from the source} +\item{siteid}{shortform of siteid} + \item{scale}{scale, NULL by default} \item{projection}{projection, NULL by default} \item{qc}{qc_parameter, NULL by default} - -\item{sitename}{site name} - -\item{source}{source of the remote data} } \value{ raw_file_name @@ -34,9 +32,8 @@ construct the raw file name \examples{ \dontrun{ raw_file_name <- construct_raw_filename( - sitename="Reykjavik", - source="gee", collection="s2", + siteid=721, scale=10.0 projection=NULL qc=1.0) From e08973a71abbe450b65633b3d13dc3498864a9d2 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 12:52:52 +0530 Subject: [PATCH 1357/2289] fix error in settings return --- modules/data.remote/R/remote_process.R | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index c2cf14c6f26..a5d82e9e3f0 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -420,13 +420,12 @@ remote_process <- function(settings) { ) pro_id <- pro_ins$input.id pro_path <- output$pro_data_path - print(pro_path) } else if (flag == 4) { # requested processed and raw files are present and have to be updated pro_id <- pro_check$id raw_id <- raw_check$id raw_path <- output$raw_data_path - pro_path <- output$pro_data_path + pro_path <- output$process_data_path PEcAn.logger::logger.info("updating processed and raw files") PEcAn.DB::db.query( sprintf( @@ -469,7 +468,7 @@ remote_process <- function(settings) { } else if (flag == 5) { # raw file required for creating the processed file exists and the processed file needs to be updated pro_id <- pro_check$id - pro_path <- output$pro_data_path + pro_path <- output$process_data_path raw_id <- raw_check$id raw_path <- raw_check$file_path PEcAn.logger::logger.info("updating the existing processed file") From 11290f623b42c158a7a3f73accdb92ecb737d465 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Fri, 7 Aug 2020 07:48:24 +0000 Subject: [PATCH 1358/2289] added function to download any input file --- apps/api/R/inputs.R | 43 ++++++++++++++++++++++++++++++++++++ apps/api/pecanapi-spec.yml | 26 +++++++++++++++++++++- apps/api/tests/test.inputs.R | 16 ++++++++++++++ 3 files changed, 84 insertions(+), 1 deletion(-) diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R index 
3ba324c89ff..01ff7131bc3 100644 --- a/apps/api/R/inputs.R +++ b/apps/api/R/inputs.R @@ -118,4 +118,47 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r return(result) } +} + +################################################################################################# + +#' Download the input specified by the id +#' @param id Input id (character) +#' @return Input file specified by user +#' @author Tezan Sahu +#* @serializer contentType list(type="application/octet-stream") +#* @get / +downloadInput <- function(input_id, req, res){ + dbcon <- PEcAn.DB::betyConnect() + db_hostid <- PEcAn.DB::dbHostInfo(dbcon)$hostid + + # This is just for temporary testing due to the existing issue in dbHostInfo() + db_hostid <- ifelse(db_hostid == 99, 99000000001, db_hostid) + + input <- tbl(dbcon, "dbfiles") %>% + select(file_name, file_path, container_id, machine_id, container_type) %>% + filter(machine_id == !!db_hostid) %>% + filter(container_type == "Input") %>% + filter(container_id == !!input_id) %>% + collect() + + PEcAn.DB::db.close(dbcon) + + if (nrow(input) == 0) { + res$status <- 404 + return() + } + else{ + # Generate the full file path using the file_path & file_name + filepath <- paste0(input$file_path, "/", input$file_name) + + if(! file.exists(filepath)){ + res$status <- 404 + return() + } + + # Read the data in binary form & return it + bin <- readBin(filepath,'raw', n = file.info(filepath)$size) + return(bin) + } } \ No newline at end of file diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 970c5356a44..6bf6864224c 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -40,7 +40,7 @@ tags: - name: pfts description: Everything about PEcAn PFTs (Plant Functional Types) - name: inputs - description: Everything about inputs for a PEcAn workflow + description: Everything about PEcAn inputs ##################################################################################################################### ##################################################### API Endpoints ################################################# @@ -536,6 +536,30 @@ paths: description: Access forbidden '404': description: Workflows not found + + /api/inputs/{input_id}: + get: + tags: + - inputs + summary: Download a desired PEcAn input file + parameters: + - in: path + name: input_id + description: ID of the PEcAn Input to be downloaded + required: true + schema: + type: string + responses: + '200': + description: Contents of the desired input file + content: + application/octet-stream: + schema: + type: string + '401': + description: Authentication required + '403': + description: Access forbidden /api/workflows/: get: diff --git a/apps/api/tests/test.inputs.R b/apps/api/tests/test.inputs.R index 8b22f50f73e..930747b2bfa 100644 --- a/apps/api/tests/test.inputs.R +++ b/apps/api/tests/test.inputs.R @@ -14,4 +14,20 @@ test_that("Calling /api/inputs/ with invalid parameters returns Status 404", { httr::authenticate("carya", "illinois") ) expect_equal(res$status, 404) +}) + +test_that("Calling /api/inputs/{input_id} with valid parameters returns Status 200", { + res <- httr::GET( + paste0("http://localhost:8000/api/inputs/", 99000000003), + httr::authenticate("carya", "illinois") + ) + expect_equal(res$status, 200) +}) + +test_that("Calling /api/inputs/{input_id} with invalid parameters returns Status 404", { + res <- httr::GET( + "http://localhost:8000/api/inputs/0", + httr::authenticate("carya", "illinois") + ) + 
expect_equal(res$status, 404) }) \ No newline at end of file From eea29a562d716e562e45a222fe462e64e562a7d9 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 13:29:47 +0530 Subject: [PATCH 1359/2289] python functions doc update --- .../inst/RpTools/RpTools/appeears2pecan.py | 7 +++--- .../inst/RpTools/RpTools/bands2lai_snap.py | 6 +++-- .../inst/RpTools/RpTools/gee2pecan_l8.py | 6 +++-- .../inst/RpTools/RpTools/gee2pecan_s2.py | 6 +++-- .../inst/RpTools/RpTools/gee2pecan_smap.py | 6 +++-- .../inst/RpTools/RpTools/get_remote_data.py | 14 ++++++++--- .../RpTools/RpTools/process_remote_data.py | 10 ++++++-- .../inst/RpTools/RpTools/rp_control.py | 23 +++++++++++++++---- modules/data.remote/inst/test.geojson | 2 +- 9 files changed, 58 insertions(+), 22 deletions(-) diff --git a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py index 9fa681a957e..329731068be 100644 --- a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py +++ b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py @@ -47,11 +47,12 @@ def appeears2pecan( credfile (str) -- path to JSON file containing Earthdata username and password. None by default + siteid (str) -- shortform of siteid, None by default + Returns ------- - Nothing: - Output files are saved in the specified directory. - Output file is of netCDF type when AOI is a Polygon and csv type when AOI is a Point. + Absolute path to the output file. + output netCDF is saved in the specified directory. """ # API url diff --git a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py index 4b2173ee09b..3df5c314d69 100644 --- a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py +++ b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py @@ -24,11 +24,13 @@ def bands2lai_snap(inputfile, outdir, siteid): input (str) -- path to the input netCDF file containing bands. outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + + siteid (str) -- shortform of the siteid Returns ------- - Nothing: - output netCDF is saved in the specified directory. + Absolute path to the output file + output netCDF is saved in the specified directory. """ # load the input file diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py index 3c2aff70e95..1ebd149b304 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py @@ -41,10 +41,12 @@ def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1, siteid=None): qc (bool) -- uses the cloud masking function if set to True + siteid (str) -- shortform of siteid, None by default + Returns ------- - Absolute path to the output file - output netCDF is saved in the specified directory. + Absolute path to the output file. + output netCDF is saved in the specified directory. """ diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py index bb9a7061fef..ec59a9046d4 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py @@ -773,10 +773,12 @@ def gee2pecan_s2(geofile, outdir, start, end, scale, qc, siteid=None): qi_threshold (float) -- From satellitetools: Threshold value to filter images based on used qi filter. 
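                            (A worked example with made-up numbers: with qi_threshold = 0.2, an image
                            whose filtered classes sum to 0.3 within the AOI is skipped, while an
                            image summing to 0.1 is kept; with the default of 1 every image is kept.)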
qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved
+    siteid (str) -- shortform of siteid, None by default
+
     Returns
     -------
-    Nothing:
-    output netCDF is saved in the specified directory.
+    Absolute path to the output file.
+    output netCDF is saved in the specified directory.
 
     Python dependencies required: earthengine-api, geopandas, pandas, netCDF4, xarray
     """
diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py
index eb0bf1ec863..439ae1cce02 100644
--- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py
+++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py
@@ -32,10 +32,12 @@ def gee2pecan_smap(geofile, outdir, start, end, siteid=None):
 
     end (str) -- ending date areaof the data request in the form YYYY-MM-dd
 
+    siteid (str) -- shortform of siteid, None by default
+
     Returns
     -------
-    Absolute path to the output file
-    output netCDF is saved in the specified directory
+    Absolute path to the output file.
+    output netCDF is saved in the specified directory.
     """
 
     geo = create_geo(geofile)
diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
index 2ec3af3c08f..81b0b8c045e 100644
--- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
+++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
@@ -47,16 +47,24 @@ def get_remote_data(
     collection (str) -- dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears
 
+    siteid (str) -- shortform of the siteid
+
     scale (int) -- pixel resolution, None by default
 
     projection (str) -- type of projection. Only required for appeears polygon AOI type. None by default.
 
-    qc (float) -- quality control parameter
+    qc (float) -- quality control parameter, None by default
+
+    credfile (str) -- path to credentials file, only required for AppEEARS, None by default
+
+    raw_merge (str) -- if the existing raw file has to be merged, None by default
+
+    existing_raw_file_path (str) -- path to existing raw file if raw_merge is TRUE, None by default
+
     Returns
     -------
-    Nothing:
-    output netCDF is saved in the specified directory.
+    Absolute path to the created file.
+    output netCDF is saved in the specified directory.
diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
index 99bc868dfee..74bce11312d 100644
--- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
+++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
@@ -21,10 +21,16 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori
     output (dict) -- dictionary contatining the keys get_data and process_data
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
     algorithm (str) -- name of the algorithm used to perform computation.
+    input_file (str) -- path to the raw file
+    siteid (str) -- shortform of the siteid
+    pro_merge (str) -- if the pro file has to be merged
+    existing_pro_file_path (str) -- path to existing pro file if pro_merge is TRUE
+
     Returns
     -------
-    Nothing:
-    output netCDF is saved in the specified directory.
+    Absolute path to the output file
+
+    output netCDF is saved in the specified directory.
     """
 
diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
index 706cab9b8c1..3f1a104ecac 100644
--- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py
+++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
@@ -55,8 +55,10 @@ def rp_control(
     source (str) -- source from where data is to be downloaded, e.g. "gee" or "appeears"
 
     collection (str) -- dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears
+
+    siteid (str) -- short form of the siteid, None by default
 
-    scale (int) -- pixel resolution, None by default, recommended to use 10 for Sentinel 2
+    scale (int) -- pixel resolution, recommended to use 10 for Sentinel 2, None by default
 
     projection (str) -- type of projection. Only required for appeears polygon AOI type. None by default.
 
@@ -66,14 +68,25 @@ def rp_control(
 
     credfile (str) -- path to JSON file containing Earthdata username and password, only required for AppEEARS, None by default
 
-    output (dict) -- "get_data" - the type of output variable requested from get_data module, "process_data" - the type of output variable requested from process_data module
+    out_get_data (str) -- the type of output variable requested from get_data module, None by default
+
+    out_process_data (str) -- the type of output variable requested from process_data module, None by default
 
-    stage (dict) -- temporary argument to imitate database checks
+    stage_get_data (str) -- stage for the get_data module, None by default
+
+    stage_process_data (str) -- stage for the process_data module, None by default
+
+    raw_merge (str) -- if raw file has to be merged, None by default
+
+    pro_merge (str) -- if pro file has to be merged, None by default
+
+    existing_raw_file_path (str) -- path to existing raw file, None by default
+
+    existing_pro_file_path (str) -- path to existing pro file, None by default
 
     Returns
     -------
-    Nothing:
-    output files from the functions are saved in the specified directory.
+    dictionary containing raw_id, raw_path, pro_id, pro_path
 
     """
 
diff --git a/modules/data.remote/inst/test.geojson b/modules/data.remote/inst/test.geojson
index 850f0233d8e..9a890595d4c 100644
--- a/modules/data.remote/inst/test.geojson
+++ b/modules/data.remote/inst/test.geojson
@@ -4,7 +4,7 @@
     {
       "type": "Feature",
       "properties": {
-        "name": "youtube"
+        "name": "Reykjavik"
       },
       "geometry": {
         "type": "Polygon",

From 942e1aef8c58f68c5cc9aaca029980696d13bc02 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Fri, 7 Aug 2020 08:14:14 +0000
Subject: [PATCH 1360/2289] added filtering by host_id & format_id for inputs

---
 apps/api/R/inputs.R        | 22 +++++++++++++-------
 apps/api/pecanapi-spec.yml | 12 ++++++++++++
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R
index 01ff7131bc3..987ba405b3e 100644
--- a/apps/api/R/inputs.R
+++ b/apps/api/R/inputs.R
@@ -8,7 +8,7 @@ library(dplyr)
 #' @return Information about Inputs based on model & site
 #' @author Tezan Sahu
 #* @get /
-searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res){
+searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_id=NULL, offset=0, limit=50, res){
   if (!
limit %in% c(10, 20, 50, 100, 500)) { res$status <- 400 return(list(error = "Invalid value for parameter")) @@ -27,8 +27,7 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r inputs <- tbl(dbcon, "machines") %>% select(hostname, machine_id=id) %>% - inner_join(inputs, by='machine_id') %>% - select(-machine_id) + inner_join(inputs, by='machine_id') inputs <- tbl(dbcon, "modeltypes_formats") %>% select(tag, modeltype_id, format_id, input) %>% @@ -38,8 +37,7 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r inputs <- tbl(dbcon, "formats") %>% select(format_id = id, format_name = name, mimetype_id) %>% - inner_join(inputs, by='format_id') %>% - select(-format_id) + inner_join(inputs, by='format_id') inputs <- tbl(dbcon, "mimetypes") %>% select(mimetype_id = id, mimetype = type_string) %>% @@ -48,7 +46,7 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r inputs <- tbl(dbcon, "models") %>% select(model_id = id, modeltype_id, model_name, revision) %>% - inner_join(inputs, by='modeltype_id') %>% + right_join(inputs, by='modeltype_id') %>% select(-modeltype_id) inputs <- tbl(dbcon, "sites") %>% @@ -65,8 +63,18 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r filter(site_id == !!site_id) } + if(! is.null(format_id)) { + inputs <- inputs %>% + filter(format_id == !!format_id) + } + + if(! is.null(host_id)) { + inputs <- inputs %>% + filter(machine_id == !!host_id) + } + qry_res <- inputs %>% - select(-site_id, -model_id) %>% + select(-site_id, -model_id, -format_id, -machine_id) %>% distinct() %>% arrange(id) %>% collect() diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 6bf6864224c..df19517bf95 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -463,6 +463,18 @@ paths: required: false schema: type: string + - in: query + name: format_id + description: If provided, returns all inputs for the provided format_id + required: false + schema: + type: string + - in: query + name: host_id + description: If provided, returns all inputs for the provided host_id + required: false + schema: + type: string - in: query name: offset description: The number of inputs to skip before starting to collect the result set. From 9b379d1ef3cb32814d441636883b84be6efcefac Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 11 Jul 2020 20:06:23 +0530 Subject: [PATCH 1361/2289] correct reference to Remote.Sync.launcher --- .../04_more_web_interface/02_hidden_analyses.Rmd | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd index 0fa96db4c8b..a553c0b5deb 100644 --- a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd +++ b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd @@ -479,11 +479,11 @@ An example of multi-settings pecan xml file also may look like below: ``` ### Running SDA on remote -In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote_Sync_launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. 
`Remote_Sync_launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. +In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote.Sync.launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote.Sync.launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running. -`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote_Sync_launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. +`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote.Sync.launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not. -`Additionally, the Remote_Sync_launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. +`Additionally, the Remote.Sync.launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output. Several points on how to prepare your xml settings for the remote SDA run: 1 - In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags. From d7b5db1a72aef0955fdbbed4a7640130aab286a5 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 14:33:29 +0530 Subject: [PATCH 1362/2289] Revert "correct reference to Remote.Sync.launcher" This reverts commit e28a9c57cbc9a4ea6913fa3d1242937c0e15f31b. 
---
 .../04_more_web_interface/02_hidden_analyses.Rmd | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd
index a553c0b5deb..0fa96db4c8b 100644
--- a/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd
+++ b/book_source/02_demos_tutorials_workflows/04_more_web_interface/02_hidden_analyses.Rmd
@@ -479,11 +479,11 @@ An example of multi-settings pecan xml file also may look like below:
 ```
 ### Running SDA on remote
-In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote.Sync.launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qusb command for running the job. `Remote.Sync.launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running.
+In general, the idea is that sending, running and monitoring an SDA job should all be done using two functions (`SDA_remote_launcher` and `Remote_Sync_launcher`). `SDA_remote_launcher` checks the XML settings defining the run, sets up the SDA on the remote machine, and then sends a qsub command for running the job. `Remote_Sync_launcher`, on the other hand, sits on the local machine and monitors the progress of the job(s) and brings back the outputs as long as the job is running.
 
-`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs including met, soil, `site_pft` and etc., testing whether they exists on the remote machine or not. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote.Sync.launcher` to monitor the progress of the run, checks to see if the job is still alive, and determines if `sda.output.rdata` has been updated since the last check or not.
+`SDA_remote_launcher` sets up the job by copying a template SDA workflow R script and a bash file template that are ready for submission to the remote machine. This function checks the paths to all inputs, including met, soil, `site_pft`, etc., testing whether they exist on the remote machine. If they do not exist, the function copies the missing paths over and replaces the settings accordingly. After submitting the bash script, the function returns the PID of the job writing the log file, allowing the `Remote_Sync_launcher` to monitor the progress of the run, check whether the job is still alive, and determine whether `sda.output.rdata` has been updated since the last check.
 
-`Additionally, the Remote.Sync.launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output.
+Additionally, the `Remote_Sync_launcher` function follows the progress of the remote job by executing a nohup command on a template R script and keeps the R console open for further use. This R script, as mentioned above, constantly pings the given PID every 5 minutes and copies over the SDA output.
 
 Several points on how to prepare your xml settings for the remote SDA run:
 1 - In the main pecan workflow.R, if you were able to generate `pecan.TRAIT.xml`, your settings are ready to be used for an SDA run. All you need to add is your state data assimilation tags.

From 1df100fdaa7646fff33ee7870a18e34692331947 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Fri, 7 Aug 2020 14:47:33 +0530
Subject: [PATCH 1363/2289] use _ in remote.sync.launcher

---
 modules/assim.sequential/NAMESPACE                     |  2 +-
 modules/assim.sequential/R/Remote_helpers.R            |  4 ++--
 ...Remote.Sync.launcher.Rd => Remote_Sync_launcher.Rd} | 10 +++++-----
 3 files changed, 8 insertions(+), 8 deletions(-)
 rename modules/assim.sequential/man/{Remote.Sync.launcher.Rd => Remote_Sync_launcher.Rd} (70%)

diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE
index 654a3ab8541..6c3a4b0b235 100644
--- a/modules/assim.sequential/NAMESPACE
+++ b/modules/assim.sequential/NAMESPACE
@@ -11,7 +11,7 @@ export(GEF)
 export(GEF.MultiSite)
 export(Local.support)
 export(Obs.data.prepare.MultiSite)
-export(Remote.Sync.launcher)
+export(Remote_Sync_launcher)
 export(SDA_control)
 export(SDA_remote_launcher)
 export(adj.ens)
diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R
index f30f541050b..d0efe9225ec 100644
--- a/modules/assim.sequential/R/Remote_helpers.R
+++ b/modules/assim.sequential/R/Remote_helpers.R
@@ -410,14 +410,14 @@ SDA_remote_launcher <-function(settingPath,
 }
 
 
-#' Remote.Sync.launcher
+#' Remote_Sync_launcher
 #'
 #' @param settingPath Path to your local settings file.
 #' @param remote.path Path generated by SDA_remote_launcher which shows the path to your remote SDA run.
 #' @param PID PID generated by SDA_remote_launcher which shows the active PID running your SDA job.
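#'
#' @examples
#' \dontrun{
#' # Hypothetical usage, added for illustration only: remote.path and PID
#' # are the values produced by a preceding SDA_remote_launcher() run.
#' Remote_Sync_launcher("pecan.SDA.xml", remote.path, PID)
#' }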
#' #' @export -Remote.Sync.launcher <- function(settingPath, remote.path, PID) { +Remote_Sync_launcher <- function(settingPath, remote.path, PID) { settings <- read.settings(settingPath) diff --git a/modules/assim.sequential/man/Remote.Sync.launcher.Rd b/modules/assim.sequential/man/Remote_Sync_launcher.Rd similarity index 70% rename from modules/assim.sequential/man/Remote.Sync.launcher.Rd rename to modules/assim.sequential/man/Remote_Sync_launcher.Rd index 558fc33b92b..e7c7f0a0e2d 100644 --- a/modules/assim.sequential/man/Remote.Sync.launcher.Rd +++ b/modules/assim.sequential/man/Remote_Sync_launcher.Rd @@ -1,10 +1,10 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/Remote_helpers.R -\name{Remote.Sync.launcher} -\alias{Remote.Sync.launcher} -\title{Remote.Sync.launcher} +\name{Remote_Sync_launcher} +\alias{Remote_Sync_launcher} +\title{Remote_Sync_launcher} \usage{ -Remote.Sync.launcher(settingPath, remote.path, PID) +Remote_Sync_launcher(settingPath, remote.path, PID) } \arguments{ \item{settingPath}{Path to your local setting .} @@ -14,5 +14,5 @@ Remote.Sync.launcher(settingPath, remote.path, PID) \item{PID}{PID generated by SDA_remote_launcher which shows the active PID running your SDA job.} } \description{ -Remote.Sync.launcher +Remote_Sync_launcher } From 6d8c31e27f5d1801e862902d17e4c2eb54ce5ca5 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 15:05:43 +0530 Subject: [PATCH 1364/2289] update changelog --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index bb3ca374651..e644adee0b3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -38,6 +38,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Renamed functions that looked like S3 methods but were not: * PEcAn.priors: `plot.posterior.density`->`plot_posterior.density`, `plot.prior.density`->`plot_prior.density`, `plot.trait`->`plot_trait` (#2439). * PEcAn.visualization: `plot.netcdf`->`plot_netcdf` (#2526). + * PEcAn.assim.sequential: `Remote.Sync.launcher` -> `Remote_Sync_launcher` (#2652) - Stricter package checking: `make check` and CI builds will now fail if `R CMD check` returns any ERRORs or any "newly-added" WARNINGs or NOTEs. "Newly-added" is determined by strict string comparison against a check result saved 2019-09-03; messages that exist in the reference result do not break the build but will be fixed as time allows in future refactorings (#2404). - No longer writing an arbitrary num for each PFT, this was breaking ED runs potentially. - The pecan/data container has no longer hardcoded path for postgres @@ -46,7 +47,6 @@ For more information about this file see also [Keep a Changelog](http://keepacha - Changed precipitaion downscale in `PEcAn.data.atmosphere::download.NOAA_GEFS_downscale`. Precipitation was being downscaled via a spline which was causing fake rain events. Instead the 6 hr precipitation flux values from GEFS are preserved with 0's filling in the hours between. -Changed `dbfile.input.insert` to work with inputs (i.e soils) that don't have start and end dates associated with them - ### Added - Functions to send/receive messages to/from rabbitmq. 
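The rename recorded above breaks any downstream script that still calls the old name. A transition shim like the following (hypothetical, not part of these commits) could keep old callers working while pointing them at the new export:

```R
# Hypothetical back-compatibility alias, not included in this patch series.
# It forwards old-style calls to the renamed function and signals a
# deprecation message via base R's .Deprecated().
Remote.Sync.launcher <- function(settingPath, remote.path, PID) {
  .Deprecated("Remote_Sync_launcher", package = "PEcAn.assim.sequential")
  PEcAn.assim.sequential::Remote_Sync_launcher(settingPath, remote.path, PID)
}
```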
From e576862b982d496c7032171c2889465b09cf23a1 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 7 Aug 2020 16:52:25 +0530 Subject: [PATCH 1365/2289] directory naming convention --- modules/data.remote/R/remote_process.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index a5d82e9e3f0..a5576cde526 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -303,6 +303,7 @@ remote_process <- function(settings) { } } + outdir = file.path(outdir, paste(source, "site", siteid_short, sep = "_")) # call remote_process output = RpTools$rp_control( From e0ed4fc4935f754d21f246bb77807908517ecda9 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Fri, 7 Aug 2020 13:05:53 -0400 Subject: [PATCH 1366/2289] small suggestions from @infotroph that I missed on PR #2670 --- modules/data.land/DESCRIPTION | 26 +++++++++++++------------- modules/data.land/R/find.land.R | 3 +-- modules/data.land/R/gis.functions.R | 6 +----- 3 files changed, 15 insertions(+), 20 deletions(-) diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION index a06a437a0ec..85b67b552df 100644 --- a/modules/data.land/DESCRIPTION +++ b/modules/data.land/DESCRIPTION @@ -17,41 +17,41 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific efficacy of scientific investigation. Imports: coda, + datapack, dplyr, + dplR, fs, lubridate, magrittr, + maptools, + mvtnorm, ncdf4 (>= 1.15), neonUtilities, + PEcAn.benchmark, + PEcAn.data.atmosphere, PEcAn.DB, PEcAn.logger, PEcAn.remote, + PEcAn.settings, PEcAn.utils, + PEcAn.visualization, purrr, rjags, rlang, - sp, + RCurl, + RPostgreSQL, sf, + sirt, + sp, + stringr, traits, udunits2, XML (>= 3.98-1.4) Suggests: - datapack, dataone, redland, - sirt, - dplR, - maptools, - mvtnorm, - PEcAn.benchmark, - PEcAn.data.atmosphere, - PEcAn.settings, - PEcAn.visualization, raster, rgdal, - RCurl, - RPostgreSQL, - stringr, testthat (>= 1.0.2) License: BSD_3_clause + file LICENSE Copyright: Authors diff --git a/modules/data.land/R/find.land.R b/modules/data.land/R/find.land.R index e8b421c2e99..403e3597a4d 100644 --- a/modules/data.land/R/find.land.R +++ b/modules/data.land/R/find.land.R @@ -11,8 +11,7 @@ ##' @export ##' @author David LeBauer find.land <- function(lat, lon, plot = FALSE) { - requireNamespace("maptools") - data("wrld_simpl",package="maptools") + data("wrld_simpl",package="maptools",envir = environment()) ## Create a SpatialPoints object points <- expand.grid(lon, lat) diff --git a/modules/data.land/R/gis.functions.R b/modules/data.land/R/gis.functions.R index e51b80c7198..8f9434aea3f 100644 --- a/modules/data.land/R/gis.functions.R +++ b/modules/data.land/R/gis.functions.R @@ -129,7 +129,7 @@ shp2kml <- function(dir, ext, kmz = FALSE, proj4 = NULL, color = NULL, NameField ##' @author Shawn P. Serbin get.attributes <- function(file, coords) { # ogr tools do not seem to function properly in R. Need to figure out a work around reading in - # kml files drops important fie lds inside the layers. + # kml files drops important fields inside the layers. #library(fields) #require(rgdal) @@ -168,10 +168,6 @@ get.attributes <- function(file, coords) { ##' @author Shawn P. 
Serbin subset_layer <- function(file, coords = NULL, sub.layer = NULL, clip = FALSE, out.dir = NULL, out.name = NULL) { -# if (!require(rgdal)) { -# print("install rgdal") -# } - # Setup output directory for subset layer if (is.null(out.dir)) { out.dir <- dirname(file) From e09895700652a6a4e8a551eccc3a059693d7dbae Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Fri, 7 Aug 2020 14:15:16 -0400 Subject: [PATCH 1367/2289] Makefile.depends --- Makefile.depends | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index 3d243afbe48..0f3cfbce554 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -1,5 +1,5 @@ # autogenerated -$(call depends,base/all): | .install/base/db .install/base/settings .install/modules/meta.analysis .install/base/logger .install/base/utils .install/modules/uncertainty .install/modules/data.atmosphere .install/modules/data.land .install/modules/data.remote .install/modules/assim.batch .install/modules/emulator .install/modules/priors .install/modules/benchmark .install/base/remote .install/base/workflow .install/models/ed .install/models/sipnet .install/models/biocro .install/models/dalec .install/models/linkages .install/modules/allometry .install/modules/photosynthesis +$(call depends,base/all): | .install/base/db .install/base/settings .install/modules/meta.analysis .install/base/logger .install/base/utils .install/modules/uncertainty .install/modules/data.atmosphere .install/modules/data.land .install/modules/data.remote .install/modules/assim.batch .install/modules/emulator .install/modules/priors .install/modules/benchmark .install/base/remote .install/base/workflow .install/models/ed .install/models/sipnet .install/models/biocro .install/models/dalec .install/models/linkages .install/modules/PEcAn.allometry.Rcheck/PEcAn.allometry .install/modules/photosynthesis $(call depends,base/db): | .install/base/logger .install/base/remote .install/base/utils $(call depends,base/logger): | $(call depends,base/qaqc): | .install/base/logger @@ -14,10 +14,12 @@ $(call depends,modules/assim.sequential): | .install/base/logger .install/base/r $(call depends,modules/benchmark): | .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils -$(call depends,modules/data.land): | .install/base/db .install/base/logger .install/base/remote .install/base/utils .install/modules/benchmark .install/modules/data.atmosphere .install/base/settings .install/base/visualization +$(call depends,modules/data.land): | .install/modules/benchmark .install/modules/data.atmosphere .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils .install/base/visualization $(call depends,modules/data.remote): | .install/base/db .install/base/utils .install/base/logger .install/base/remote $(call depends,modules/emulator): | .install/base/logger $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .install/base/logger .install/base/settings +$(call depends,modules/PEcAn.allometry.Rcheck/00_pkg_src/PEcAn.allometry): | .install/base/db +$(call depends,modules/PEcAn.allometry.Rcheck/PEcAn.allometry): | .install/base/db $(call depends,modules/photosynthesis): | .install/base/logger $(call depends,modules/priors): | 
.install/base/utils .install/base/logger .install/modules/meta.analysis $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed From cb761fbb6550707941f847534e2096a6726f754a Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Fri, 7 Aug 2020 14:34:37 -0400 Subject: [PATCH 1368/2289] Makefile.depends --- Makefile.depends | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index 0f3cfbce554..be640d374c7 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -1,5 +1,5 @@ # autogenerated -$(call depends,base/all): | .install/base/db .install/base/settings .install/modules/meta.analysis .install/base/logger .install/base/utils .install/modules/uncertainty .install/modules/data.atmosphere .install/modules/data.land .install/modules/data.remote .install/modules/assim.batch .install/modules/emulator .install/modules/priors .install/modules/benchmark .install/base/remote .install/base/workflow .install/models/ed .install/models/sipnet .install/models/biocro .install/models/dalec .install/models/linkages .install/modules/PEcAn.allometry.Rcheck/PEcAn.allometry .install/modules/photosynthesis +$(call depends,base/all): | .install/base/db .install/base/settings .install/modules/meta.analysis .install/base/logger .install/base/utils .install/modules/uncertainty .install/modules/data.atmosphere .install/modules/data.land .install/modules/data.remote .install/modules/assim.batch .install/modules/emulator .install/modules/priors .install/modules/benchmark .install/base/remote .install/base/workflow .install/models/ed .install/models/sipnet .install/models/biocro .install/models/dalec .install/models/linkages .install/modules/allometry .install/modules/photosynthesis $(call depends,base/db): | .install/base/logger .install/base/remote .install/base/utils $(call depends,base/logger): | $(call depends,base/qaqc): | .install/base/logger @@ -18,8 +18,6 @@ $(call depends,modules/data.land): | .install/modules/benchmark .install/modules $(call depends,modules/data.remote): | .install/base/db .install/base/utils .install/base/logger .install/base/remote $(call depends,modules/emulator): | .install/base/logger $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .install/base/logger .install/base/settings -$(call depends,modules/PEcAn.allometry.Rcheck/00_pkg_src/PEcAn.allometry): | .install/base/db -$(call depends,modules/PEcAn.allometry.Rcheck/PEcAn.allometry): | .install/base/db $(call depends,modules/photosynthesis): | .install/base/logger $(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed From 9baeb4f0b06d951e2c0d9bb690d60a221ca83c97 Mon Sep 17 00:00:00 2001 From: araiho Date: Fri, 7 Aug 2020 14:44:12 -0400 Subject: [PATCH 1369/2289] seeing if this fixes my .Rd problem in travis --- Makefile.depends | 2 +- docker/depends/pecan.depends | 7 ++++++- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index 3d243afbe48..4f3faf0fead 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -10,7 +10,7 @@ $(call depends,base/visualization): | .install/base/db .install/base/logger .ins $(call depends,base/workflow): | .install/modules/data.atmosphere .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings 
.install/modules/uncertainty .install/base/utils $(call depends,modules/allometry): | .install/base/logger .install/base/db $(call depends,modules/assim.batch): | .install/modules/benchmark .install/base/db .install/modules/emulator .install/base/logger .install/modules/meta.analysis .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils -$(call depends,modules/assim.sequential): | .install/base/logger .install/base/remote +$(call depends,modules/assim.sequential): | .install/modules/benchmark .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils $(call depends,modules/benchmark): | .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index c6f63b543db..7b1d1d4c8d9 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -25,11 +25,13 @@ install2.r -e -s -l "${RLIB}" -n -1\ BioCro \ bit64 \ coda \ - data.table \ + corrplot \ dataone \ datapack \ + data.table \ DBI \ dbplyr \ + devtools \ doParallel \ dplR \ dplyr \ @@ -37,10 +39,12 @@ install2.r -e -s -l "${RLIB}" -n -1\ foreach \ fs \ furrr \ + future \ geonames \ getPass \ ggmap \ ggplot2 \ + ggrepel \ glue \ graphics \ grDevices \ @@ -62,6 +66,7 @@ install2.r -e -s -l "${RLIB}" -n -1\ maps \ maptools \ MASS \ + Matrix \ mclust \ MCMCpack \ methods \ From cbfc81bf94e87958a24bbd1b0e30c34694d07b30 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 7 Aug 2020 16:33:15 -0700 Subject: [PATCH 1370/2289] Implementing PR fixes suggested in https://github.com/PecanProject/pecan/pull/2626#issuecomment-651981287 --- base/settings/DESCRIPTION | 5 +- base/settings/R/get_args.R | 39 ++++ web/workflow.R | 383 +++++++++++++++++-------------------- 3 files changed, 221 insertions(+), 206 deletions(-) create mode 100644 base/settings/R/get_args.R diff --git a/base/settings/DESCRIPTION b/base/settings/DESCRIPTION index 9b4e98098a9..7478de686b1 100644 --- a/base/settings/DESCRIPTION +++ b/base/settings/DESCRIPTION @@ -1,7 +1,7 @@ Package: PEcAn.settings Title: PEcAn Settings package Authors@R: c(person("David","LeBauer", role = c("aut", "cre"), - email = "dlebauer@illinois.edu"), + email = "dlebauer@arizona.edu"), person("Rob","Kooper", rol="aut")) Version: 1.7.1 Date: 2019-09-05 @@ -20,7 +20,8 @@ Imports: PEcAn.utils, lubridate (>= 1.6.0), purrr, - XML (>= 3.98-1.3) + XML (>= 3.98-1.3), + optparse Suggests: testthat (>= 2.0.0) Encoding: UTF-8 diff --git a/base/settings/R/get_args.R b/base/settings/R/get_args.R new file mode 100644 index 00000000000..59d34ae4f14 --- /dev/null +++ b/base/settings/R/get_args.R @@ -0,0 +1,39 @@ +#' Get Args +#' +#' Used in web/workflow.R to parse command line arguments. +#' See also https://github.com/PecanProject/pecan/pull/2626. 
+#' +#' @return +#' @export +#' +#' @examples +#' \dontrun{./web/workflow.R -h} +get_args <- function () { + option_list = list( + optparse::make_option( + c("-s", "--settings"), + default = ifelse(Sys.getenv("PECAN_SETTINGS") != "", + Sys.getenv("PECAN_SETTINGS"), "pecan.xml"), + type = "character", + help = "Settings XML file", + metavar = "FILE", + ), + optparse::make_option( + c("-c", "--continue"), + default = FALSE, + action = "store_true", + type = "logical", + help = "Continue processing", + ) + ) + + parser <- optparse::OptionParser(option_list = option_list) + args <- optparse::parse_args(parser) + + if (!file.exists(args$settings)) { + optparse::print_help(parser) + stop(sprintf('--settings "%s" not a valid file\n', args$settings)) + } + + return(invisible(args)) +} \ No newline at end of file diff --git a/web/workflow.R b/web/workflow.R index acdac5645b2..186dc0d720b 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -11,217 +11,192 @@ # ---------------------------------------------------------------------- # Load required libraries # ---------------------------------------------------------------------- -suppressMessages(library("PEcAn.all")) -suppressMessages(library("PEcAn.utils")) -suppressMessages(library("RCurl")) -suppressMessages(library("optparse")) +library("PEcAn.all") +library("RCurl") + # -------------------------------------------------- -get_args <- function () { - option_list = list( - optparse::make_option( - c("-s", "--settings"), - default = ifelse(Sys.getenv("PECAN_SETTINGS") != "", - Sys.getenv("PECAN_SETTINGS"), "pecan.xml"), - type = "character", - help = "Settings XML file", - metavar = "FILE", - ), - optparse::make_option( - c("-c", "--continue"), - default = FALSE, - action = "store_true", - type = "logical", - help = "Continue processing", - ) +# get command-line arguments +args = get_args() + +# make sure always to call status.end +options(warn = 1) +options(error = quote({ + try(PEcAn.utils::status.end("ERROR")) + try(PEcAn.remote::kill.tunnel(settings)) + if (!interactive()) { + q(status = 1) + } +})) + +# ---------------------------------------------------------------------- +# PEcAn Workflow +# ---------------------------------------------------------------------- +# Open and read in settings file for PEcAn run. +settings <- PEcAn.settings::read.settings(args$settings) + +# Check for additional modules that will require adding settings +if ("benchmarking" %in% names(settings)) { + library(PEcAn.benchmark) + settings <- papply(settings, read_settings_BRR) +} + +if ("sitegroup" %in% names(settings)) { + if (is.null(settings$sitegroup$nSite)) { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings( + settings, + sitegroupId = settings$sitegroup$id, + nSite = settings$sitegroup$nSite ) + } + # zero out so don't expand a second time if re-reading + settings$sitegroup <- NULL +} - parser <- optparse::OptionParser(option_list = option_list) - args <- optparse::parse_args(parser) +# Update/fix/check settings. 
+# Will only run the first time it's called, unless force=TRUE +settings <- + PEcAn.settings::prepare.settings(settings, force = FALSE) - if (!file.exists(args$settings)) { - optparse::print_help(parser) - stop(sprintf('--settings "%s" not a valid file\n', args$settings)) - } +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") - return(invisible(args)) +# start from scratch if no continue is passed in +status_file <- file.path(settings$outdir, "STATUS") +if (args$continue && file.exists(status_file)) { + file.remove(status_file) } -# -------------------------------------------------- -workflow <- function(settings_file = "pecan.xml", continue = FALSE) { - # get command-line arguments - args = get_args() - - # make sure always to call status.end - options(warn = 1) - options(error = quote({ - try(PEcAn.utils::status.end("ERROR")) - try(PEcAn.remote::kill.tunnel(settings)) - if (!interactive()) { - q(status = 1) - } - })) - - # ---------------------------------------------------------------------- - # PEcAn Workflow - # ---------------------------------------------------------------------- - # Open and read in settings file for PEcAn run. - settings <- PEcAn.settings::read.settings(args$settings) - - # Check for additional modules that will require adding settings - if ("benchmarking" %in% names(settings)) { - library(PEcAn.benchmark) - settings <- papply(settings, read_settings_BRR) - } - - if ("sitegroup" %in% names(settings)) { - if (is.null(settings$sitegroup$nSite)) { - settings <- PEcAn.settings::createSitegroupMultiSettings( - settings, - sitegroupId = settings$sitegroup$id) - } else { - settings <- PEcAn.settings::createSitegroupMultiSettings( - settings, - sitegroupId = settings$sitegroup$id, - nSite = settings$sitegroup$nSite) - } - # zero out so don't expand a second time if re-reading - settings$sitegroup <- NULL - } - - # Update/fix/check settings. 
- # Will only run the first time it's called, unless force=TRUE - settings <- PEcAn.settings::prepare.settings(settings, force = FALSE) - - # Write pecan.CHECKED.xml - PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") - - # start from scratch if no continue is passed in - status_file <- file.path(settings$outdir, "STATUS") - if (args$continue && file.exists(status_file)) { - file.remove(status_file) - } - - # Do conversions - settings <- PEcAn.workflow::do_conversions(settings) - - # Query the trait database for data and priors - if (PEcAn.utils::status.check("TRAIT") == 0) { - PEcAn.utils::status.start("TRAIT") - settings <- PEcAn.workflow::runModule.get.trait.data(settings) - PEcAn.settings::write.settings( - settings, - outputfile = "pecan.TRAIT.xml") - PEcAn.utils::status.end() - } else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { - settings <- PEcAn.settings::read.settings( - file.path(settings$outdir, "pecan.TRAIT.xml")) - } - - - # Run the PEcAn meta.analysis - if (!is.null(settings$meta.analysis)) { - if (PEcAn.utils::status.check("META") == 0) { - PEcAn.utils::status.start("META") - PEcAn.MA::runModule.run.meta.analysis(settings) - PEcAn.utils::status.end() - } - } - - # Write model specific configs - if (PEcAn.utils::status.check("CONFIG") == 0) { - PEcAn.utils::status.start("CONFIG") - settings <- PEcAn.workflow::runModule.run.write.configs(settings) - PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") - PEcAn.utils::status.end() - } else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { - settings <- PEcAn.settings::read.settings( - file.path(settings$outdir, "pecan.CONFIGS.xml")) - } - - if ((length(which(commandArgs() == "--advanced")) != 0) - && (PEcAn.utils::status.check("ADVANCED") == 0)) { - PEcAn.utils::status.start("ADVANCED") - q(); - } - - # Start ecosystem model runs - if (PEcAn.utils::status.check("MODEL") == 0) { - PEcAn.utils::status.start("MODEL") - PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE) - PEcAn.utils::status.end() - } - - # Get results of model runs - if (PEcAn.utils::status.check("OUTPUT") == 0) { - PEcAn.utils::status.start("OUTPUT") - runModule.get.results(settings) - PEcAn.utils::status.end() - } - - # Run ensemble analysis on model output. 
- if ("ensemble" %in% names(settings) - && PEcAn.utils::status.check("ENSEMBLE") == 0) { - PEcAn.utils::status.start("ENSEMBLE") - runModule.run.ensemble.analysis(settings, TRUE) - PEcAn.utils::status.end() - } - - # Run sensitivity analysis and variance decomposition on model output - if ("sensitivity.analysis" %in% names(settings) - && PEcAn.utils::status.check("SENSITIVITY") == 0) { - PEcAn.utils::status.start("SENSITIVITY") - runModule.run.sensitivity.analysis(settings) - PEcAn.utils::status.end() - } - - # Run parameter data assimilation - if ("assim.batch" %in% names(settings)) { - if (PEcAn.utils::status.check("PDA") == 0) { - PEcAn.utils::status.start("PDA") - settings <- PEcAn.assim.batch::runModule.assim.batch(settings) - PEcAn.utils::status.end() - } - } - - # Run state data assimilation - if ("state.data.assimilation" %in% names(settings)) { - if (PEcAn.utils::status.check("SDA") == 0) { - PEcAn.utils::status.start("SDA") - settings <- sda.enfk(settings) - PEcAn.utils::status.end() - } - } - - # Run benchmarking - if ("benchmarking" %in% names(settings) - && "benchmark" %in% names(settings$benchmarking)) { - PEcAn.utils::status.start("BENCHMARKING") - results <- papply(settings, function(x) calc_benchmark(x, bety)) - PEcAn.utils::status.end() - } - - # Pecan workflow complete - if (PEcAn.utils::status.check("FINISHED") == 0) { - PEcAn.utils::status.start("FINISHED") - PEcAn.remote::kill.tunnel(settings) - db.query(paste("UPDATE workflows SET finished_at=NOW() WHERE id=", - settings$workflow$id, "AND finished_at IS NULL"), - params = settings$database$bety) - - # Send email if configured - if (!is.null(settings$email) - && !is.null(settings$email$to) - && (settings$email$to != "")) { - sendmail(settings$email$from, settings$email$to, - paste0("Workflow has finished executing at ", base::date()), - paste0("You can find the results on ", settings$email$url)) - } - PEcAn.utils::status.end() - } - - db.print.connections() - print("---------- PEcAn Workflow Complete ----------") +# Do conversions +settings <- PEcAn.workflow::do_conversions(settings) + +# Query the trait database for data and priors +if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings(settings, + outputfile = "pecan.TRAIT.xml") + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.TRAIT.xml")) +} + + +# Run the PEcAn meta.analysis +if (!is.null(settings$meta.analysis)) { + if (PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } +} + +# Write model specific configs +if (PEcAn.utils::status.check("CONFIG") == 0) { + PEcAn.utils::status.start("CONFIG") + settings <- + PEcAn.workflow::runModule.run.write.configs(settings) + PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.CONFIGS.xml")) +} + +if ((length(which(commandArgs() == "--advanced")) != 0) + && (PEcAn.utils::status.check("ADVANCED") == 0)) { + PEcAn.utils::status.start("ADVANCED") + q() + +} + +# Start ecosystem model runs +if (PEcAn.utils::status.check("MODEL") == 0) { + 
PEcAn.utils::status.start("MODEL") + PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE) + PEcAn.utils::status.end() +} + +# Get results of model runs +if (PEcAn.utils::status.check("OUTPUT") == 0) { + PEcAn.utils::status.start("OUTPUT") + runModule.get.results(settings) + PEcAn.utils::status.end() +} + +# Run ensemble analysis on model output. +if ("ensemble" %in% names(settings) + && PEcAn.utils::status.check("ENSEMBLE") == 0) { + PEcAn.utils::status.start("ENSEMBLE") + runModule.run.ensemble.analysis(settings, TRUE) + PEcAn.utils::status.end() +} + +# Run sensitivity analysis and variance decomposition on model output +if ("sensitivity.analysis" %in% names(settings) + && PEcAn.utils::status.check("SENSITIVITY") == 0) { + PEcAn.utils::status.start("SENSITIVITY") + runModule.run.sensitivity.analysis(settings) + PEcAn.utils::status.end() +} + +# Run parameter data assimilation +if ("assim.batch" %in% names(settings)) { + if (PEcAn.utils::status.check("PDA") == 0) { + PEcAn.utils::status.start("PDA") + settings <- + PEcAn.assim.batch::runModule.assim.batch(settings) + PEcAn.utils::status.end() + } +} + +# Run state data assimilation +if ("state.data.assimilation" %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + settings <- sda.enfk(settings) + PEcAn.utils::status.end() + } +} + +# Run benchmarking +if ("benchmarking" %in% names(settings) + && "benchmark" %in% names(settings$benchmarking)) { + PEcAn.utils::status.start("BENCHMARKING") + results <- + papply(settings, function(x) + calc_benchmark(x, bety)) + PEcAn.utils::status.end() +} + +# Pecan workflow complete +if (PEcAn.utils::status.check("FINISHED") == 0) { + PEcAn.utils::status.start("FINISHED") + PEcAn.remote::kill.tunnel(settings) + db.query( + paste( + "UPDATE workflows SET finished_at=NOW() WHERE id=", + settings$workflow$id, + "AND finished_at IS NULL" + ), + params = settings$database$bety + ) + + # Send email if configured + if (!is.null(settings$email) + && !is.null(settings$email$to) + && (settings$email$to != "")) { + sendmail( + settings$email$from, + settings$email$to, + paste0("Workflow has finished executing at ", base::date()), + paste0("You can find the results on ", settings$email$url) + ) + } + PEcAn.utils::status.end() } -main() +db.print.connections() +print("---------- PEcAn Workflow Complete ----------") \ No newline at end of file From 5d0f595d4973a988e22086a34338a0f7478d8d52 Mon Sep 17 00:00:00 2001 From: runner Date: Sat, 8 Aug 2020 00:38:48 +0000 Subject: [PATCH 1371/2289] automated syle update --- web/workflow.R | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/web/workflow.R b/web/workflow.R index 186dc0d720b..fcef5ba70f7 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -17,7 +17,7 @@ library("RCurl") # -------------------------------------------------- # get command-line arguments -args = get_args() +args <- get_args() # make sure always to call status.end options(warn = 1) @@ -44,7 +44,8 @@ if ("benchmarking" %in% names(settings)) { if ("sitegroup" %in% names(settings)) { if (is.null(settings$sitegroup$nSite)) { settings <- PEcAn.settings::createSitegroupMultiSettings(settings, - sitegroupId = settings$sitegroup$id) + sitegroupId = settings$sitegroup$id + ) } else { settings <- PEcAn.settings::createSitegroupMultiSettings( settings, @@ -78,7 +79,8 @@ if (PEcAn.utils::status.check("TRAIT") == 0) { PEcAn.utils::status.start("TRAIT") settings <- 
PEcAn.workflow::runModule.get.trait.data(settings) PEcAn.settings::write.settings(settings, - outputfile = "pecan.TRAIT.xml") + outputfile = "pecan.TRAIT.xml" + ) PEcAn.utils::status.end() } else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.TRAIT.xml")) @@ -106,10 +108,9 @@ if (PEcAn.utils::status.check("CONFIG") == 0) { } if ((length(which(commandArgs() == "--advanced")) != 0) - && (PEcAn.utils::status.check("ADVANCED") == 0)) { +&& (PEcAn.utils::status.check("ADVANCED") == 0)) { PEcAn.utils::status.start("ADVANCED") q() - } # Start ecosystem model runs @@ -128,7 +129,7 @@ if (PEcAn.utils::status.check("OUTPUT") == 0) { # Run ensemble analysis on model output. if ("ensemble" %in% names(settings) - && PEcAn.utils::status.check("ENSEMBLE") == 0) { +&& PEcAn.utils::status.check("ENSEMBLE") == 0) { PEcAn.utils::status.start("ENSEMBLE") runModule.run.ensemble.analysis(settings, TRUE) PEcAn.utils::status.end() @@ -136,7 +137,7 @@ if ("ensemble" %in% names(settings) # Run sensitivity analysis and variance decomposition on model output if ("sensitivity.analysis" %in% names(settings) - && PEcAn.utils::status.check("SENSITIVITY") == 0) { +&& PEcAn.utils::status.check("SENSITIVITY") == 0) { PEcAn.utils::status.start("SENSITIVITY") runModule.run.sensitivity.analysis(settings) PEcAn.utils::status.end() @@ -163,11 +164,12 @@ if ("state.data.assimilation" %in% names(settings)) { # Run benchmarking if ("benchmarking" %in% names(settings) - && "benchmark" %in% names(settings$benchmarking)) { +&& "benchmark" %in% names(settings$benchmarking)) { PEcAn.utils::status.start("BENCHMARKING") results <- - papply(settings, function(x) - calc_benchmark(x, bety)) + papply(settings, function(x) { + calc_benchmark(x, bety) + }) PEcAn.utils::status.end() } @@ -183,11 +185,11 @@ if (PEcAn.utils::status.check("FINISHED") == 0) { ), params = settings$database$bety ) - + # Send email if configured if (!is.null(settings$email) - && !is.null(settings$email$to) - && (settings$email$to != "")) { + && !is.null(settings$email$to) + && (settings$email$to != "")) { sendmail( settings$email$from, settings$email$to, @@ -199,4 +201,4 @@ if (PEcAn.utils::status.check("FINISHED") == 0) { } db.print.connections() -print("---------- PEcAn Workflow Complete ----------") \ No newline at end of file +print("---------- PEcAn Workflow Complete ----------") From 14e104c852ea05549796ced0010e64ce8b8f6362 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 8 Aug 2020 12:29:24 +0530 Subject: [PATCH 1372/2289] add settings namespace --- modules/assim.sequential/R/Remote_helpers.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R index d0efe9225ec..16fbcceca3f 100644 --- a/modules/assim.sequential/R/Remote_helpers.R +++ b/modules/assim.sequential/R/Remote_helpers.R @@ -419,7 +419,7 @@ SDA_remote_launcher <-function(settingPath, #' @export Remote_Sync_launcher <- function(settingPath, remote.path, PID) { - settings <- read.settings(settingPath) + settings <- PEcAn.settings::read.settings(settingPath) system(paste0("nohup Rscript ", system.file("RemoteLauncher", "Remote_sync.R", package = "PEcAn.assim.sequential")," ", From 533bb504b2a7f004f22195d7c8fc1095451d1a4b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 8 Aug 2020 22:06:42 +0530 Subject: [PATCH 1373/2289] appeears bug fix --- modules/data.remote/R/remote_process.R | 1 + 
modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py | 4 ++-- modules/data.remote/inst/RpTools/RpTools/get_remote_data.py | 2 +- 3 files changed, 4 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index a5576cde526..06a0cdc5625 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -554,6 +554,7 @@ remote_process <- function(settings) { con = dbcon ) raw_id <- raw_ins$input.id + raw_path <- output$raw_data_path } else{ PEcAn.logger::logger.info("Updating raw file") raw_id <- raw_check$id diff --git a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py index 329731068be..d0b316b3e0c 100644 --- a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py +++ b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py @@ -233,7 +233,7 @@ def authenticate(): timestamp = time.strftime("%y%m%d%H%M%S") save_path = os.path.join( outdir, - product, + product +"_NA_" + projection + "_NA_" @@ -242,7 +242,7 @@ def authenticate(): + "_" + timestamp + "." - + outformat, + + outformat ) os.rename(filepath, save_path) diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py index 81b0b8c045e..40786ea8c77 100644 --- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py @@ -84,7 +84,7 @@ def get_remote_data( get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, siteid=siteid) if source == "appeears": - get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile) + get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile, siteid=siteid) if raw_merge == True and raw_merge != "replace": # if output file is of csv type use csv_merge, example AppEEARS point AOI type From ac38526c20d7c514815835f6b33d2269e9f18856 Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Mon, 10 Aug 2020 05:40:53 +0000 Subject: [PATCH 1374/2289] fixed documentation --- .../07_remote_access/01_pecan_api.Rmd | 65 ++++++++++++++++++- 1 file changed, 62 insertions(+), 3 deletions(-) diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index 09f636ebf2a..f762d1eb9d9 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -48,7 +48,7 @@ The currently implemented functionalities include: * [`GET /api/formats/{format_id}`](#get-formatsformat_id): Fetch the details of specific Format * __Inputs:__ - * [`GET /api/inputs/`](#get-apiinputs): Search for inputs needed for a PEcAn workflow based on `model_id` & `site_id` + * [`GET /api/inputs/`](#get-apiinputs): Search for inputs needed for a PEcAn workflow based on `model_id`, `site_id`, `format_id` & `host_id`. 
* [`GET /api/inputs/{input_id}`](#get-apiinputsinput_id) *: Download the desired input file * __Workflows:__ @@ -704,6 +704,32 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ## [1] 2 ``` +```R +# Get the inputs needed for a workflow with format_id = 24 (Sipnet.climna) & host_id = 2 (ebi-forecast.igb.illinois.edu) +res <- httr::GET( + "http://localhost:8000/api/inputs/?format_id=24&host_id=2", + httr::authenticate("carya", "illinois") + ) +print(jsonlite::fromJSON(rawToChar(res$content))) +``` +``` +## $inputs +## sitename model_name revision mimetype format_name tag hostname +## 1 Howland Forest- main tower (US-Ho1) (PalEON PHO) Model 102319 text/csv Sipnet.climna met ebi-forecast.igb.illinois.edu +## 2 Howland Forest- main tower (US-Ho1) (PalEON PHO) SIPNET r136 text/csv Sipnet.climna met ebi-forecast.igb.illinois.edu +## ... +## file_name file_path id input_name start_date end_date +## 1 US-Ho1.clim /home/share/data/dbfiles/ 232 +## 2 US-Ho1.clim /home/share/data/dbfiles/ 232 +## ... + +## $count +## [1] 50 + +## $next_page +## [1] "http://localhost:8000/api/workflows/?format_id=24&host_id=2&offset=50&limit=50" +``` + #### Python Snippet ```python @@ -734,7 +760,40 @@ print(json.dumps(response.json(), indent=2)) ## }, ## ... ## ], -## "count: 2 +## "count": 2 +## } +``` + +```python +# Get the inputs needed for a workflow with format_id = 24 (Sipnet.climna) & host_id = 2 (ebi-forecast.igb.illinois.edu) +response = requests.get( + "http://localhost:8000/api/inputs/?format_id=24&host_id=2", + auth=HTTPBasicAuth('carya', 'illinois') + ) +print(json.dumps(response.json(), indent=2)) +``` +``` +## { +## "inputs": [ +## { +## "sitename": "Howland Forest- main tower (US-Ho1) (PalEON PHO)", +## "model_name": "Model", +## "revision": "102319", +## "tag": "met", +## "hostname": "ebi-forecast.igb.illinois.edu", +## "file_name": "US-Ho1.clim", +## "format_name": "Sipnet.climna", +## "mimetype": "text/csv", +## "file_path": "/home/share/data/dbfiles/", +## "id": 232, +## "input_name": "", +## "start_date": "NA", +## "end_date": "NA" +## }, +## ... 
+## ], +## "count": 50, +## "next_page": "http://localhost:8000/api/workflows/?format_id=24&host_id=2&offset=50&limit=50" ## } ``` @@ -1052,7 +1111,7 @@ writeBin(res$content, "test.ensemble.ts.99000000017.NPP.2002.2002.Rdata") #### Python Snippet ```python -# Download the '2002.nc' output file for the run with id = 99000000282 +# Download the 'ensemble.ts.99000000017.NPP.2002.2002.Rdata' output file for the workflow with id = 99000000031 response = requests.get( "http://localhost:8000/api/workflows/99000000031/file/ensemble.ts.99000000017.NPP.2002.2002.Rdata", auth=HTTPBasicAuth('carya', 'illinois') From 4393e46c2eb4bb908ec770f2745b5f24f30d9711 Mon Sep 17 00:00:00 2001 From: Katie Zarada Date: Mon, 10 Aug 2020 12:48:42 -0400 Subject: [PATCH 1375/2289] updating format name for db file insert --- modules/data.land/R/soil_process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.land/R/soil_process.R b/modules/data.land/R/soil_process.R index 7f3b369dbc1..5f7f1e54990 100644 --- a/modules/data.land/R/soil_process.R +++ b/modules/data.land/R/soil_process.R @@ -68,7 +68,7 @@ soil_process <- function(settings, input, dbfiles, overwrite = FALSE,run.local=T startdate = NULL, enddate = NULL, mimetype = "application/x-netcdf", - formatname = "gSSURGO Soil", + formatname = "pecan_soil_standard", con = con, ens=TRUE) } From 27d854429ddc7b1de3b7158ef8d96da71d6ac67e Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 11 Aug 2020 19:38:17 +0530 Subject: [PATCH 1376/2289] extract coords from BETY --- modules/data.remote/R/remote_process.R | 6 +++-- .../inst/RpTools/RpTools/create_geojson.py | 23 +++++++++++++++++++ .../inst/RpTools/RpTools/rp_control.py | 6 ++++- 3 files changed, 32 insertions(+), 3 deletions(-) create mode 100644 modules/data.remote/inst/RpTools/RpTools/create_geojson.py diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 06a0cdc5625..5dc07354bad 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -33,7 +33,6 @@ remote_process <- function(settings) { siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09) raw_mimetype <- settings$remotedata$raw_mimetype raw_formatname <- settings$remotedata$raw_formatname - geofile <- settings$remotedata$geofile outdir <- settings$outdir start <- as.character(as.Date(settings$run$start.date)) end <- as.character(as.Date(settings$run$end.date)) @@ -89,6 +88,9 @@ remote_process <- function(settings) { existing_data <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) if (nrow(existing_data) >= 1) { + + coords = unlist(PEcAn.DB::db.query(sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), con = dbcon), use.names=FALSE) + # if processed data is requested, example LAI if (!is.null(out_process_data)) { # construct processed file name @@ -307,7 +309,7 @@ remote_process <- function(settings) { # call remote_process output = RpTools$rp_control( - geofile = geofile, + coords = coords, outdir = outdir, start = start, end = end, diff --git a/modules/data.remote/inst/RpTools/RpTools/create_geojson.py b/modules/data.remote/inst/RpTools/RpTools/create_geojson.py new file mode 100644 index 00000000000..321adfa1639 --- /dev/null +++ b/modules/data.remote/inst/RpTools/RpTools/create_geojson.py @@ -0,0 +1,23 @@ +import json +import os + + +def create_geojson(coords, siteid, outdir): + + geo = json.loads(coords) + + features = [] + + features.append(Feature(geometry=geo, properties={"name": 
siteid}))
+
+    feature_collection = FeatureCollection(features)
+
+    if not os.path.exists(outdir):
+        os.makedirs(outdir, exist_ok=True)
+
+    file = os.path.join(outdir, siteid + ".geojson")
+
+    with open(file, "w") as f:
+        dump(feature_collection, f)
+
+    return os.path.abspath(file)
diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
index 3f1a104ecac..b502ae233d1 100644
--- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py
+++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
@@ -12,11 +12,12 @@
 from . get_remote_data import get_remote_data
 from . process_remote_data import process_remote_data
 from . gee_utils import get_sitename
+from . create_geojson import create_geojson
 import os
 
 def rp_control(
-    geofile,
+    coords,
     outdir,
     start,
     end,
@@ -90,6 +91,9 @@
 
     """
 
+    geofile = create_geojson(coords, siteid, outdir)
+
+
     aoi_name = get_sitename(geofile)
     get_datareturn_path = 78

From 1535b87e48621861b1aa29fa4ef537792c6bc1bf Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Tue, 11 Aug 2020 20:20:12 +0530
Subject: [PATCH 1375/2289] create GeoJSON file from BETY data

---
 modules/data.remote/R/remote_process.R         |  5 ++--
 .../inst/RpTools/RpTools/create_geojson.py     | 24 +++++++++++++++++++
 .../inst/RpTools/RpTools/gee2pecan_s2.py       |  2 ++
 .../inst/RpTools/RpTools/rp_control.py         |  2 +-
 modules/data.remote/inst/RpTools/setup.py      |  1 +
 5 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index 5dc07354bad..9bf143b0c4c 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -84,13 +84,12 @@ remote_process <- function(settings) {
   raw_file_name = construct_raw_filename(collection, siteid_short, scale, projection, qc)
 
+  coords = unlist(PEcAn.DB::db.query(sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), con = dbcon), use.names=FALSE)
+
   # check if any data is already present in the inputs table
   existing_data <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon)
   if (nrow(existing_data) >= 1) {
-
-    coords = unlist(PEcAn.DB::db.query(sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), con = dbcon), use.names=FALSE)
-
     # if processed data is requested, example LAI
     if (!is.null(out_process_data)) {
       # construct processed file name
diff --git a/modules/data.remote/inst/RpTools/RpTools/create_geojson.py b/modules/data.remote/inst/RpTools/RpTools/create_geojson.py
index 321adfa1639..e89ddf732e1 100644
--- a/modules/data.remote/inst/RpTools/RpTools/create_geojson.py
+++ b/modules/data.remote/inst/RpTools/RpTools/create_geojson.py
@@ -1,8 +1,32 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+Create a GeoJSON file from the geometry extracted from the sites table in BETY
+
+Author: Ayush Prasad
+"""
+
+
+from geojson import Point, Feature, FeatureCollection, dump
 import json
 import os
 
 
 def create_geojson(coords, siteid, outdir):
+    """
+    Create GeoJSON file from geometry extracted from BETY
+
+    Parameters
+    ----------
+    coords (str) -- geometry from BETY sites
+    siteid (str) -- siteid
+    outdir (str) -- path where the output file has to be stored
 
+    Returns
+    -------
+    Absolute path to the output GeoJSON file.
+    The file is saved in the specified directory.
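+
+    Examples
+    --------
+    # hypothetical geometry string, as returned by BETY's ST_AsGeoJSON
+    create_geojson('{"type": "Point", "coordinates": [-71.29, 44.06]}',
+                   "0-796", "/tmp/out")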
+ """ geo = json.loads(coords) diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py index ec59a9046d4..0523d41589e 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py @@ -173,6 +173,8 @@ def __init__(self, name, geometry=None, coordinate_list=None, tile=None): sys.exit("AOI has to get either geometry or coordinates as list!") elif geometry and not coordinate_list: coordinate_list = list(geometry.exterior.coords) + for i in range(len(coordinate_list)): + coordinate_list[i] = coordinate_list[i][0:2] elif coordinate_list and not geometry: geometry = None diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py index b502ae233d1..e8b67ebaac6 100644 --- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py +++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py @@ -45,7 +45,7 @@ def rp_control( Parameters ---------- - geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson. + coords (str) -- geometry of the site from BETY outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. diff --git a/modules/data.remote/inst/RpTools/setup.py b/modules/data.remote/inst/RpTools/setup.py index 248b3f518b9..cddd3193f60 100644 --- a/modules/data.remote/inst/RpTools/setup.py +++ b/modules/data.remote/inst/RpTools/setup.py @@ -61,6 +61,7 @@ "uritemplate>=3.0.1", "urllib3>=1.25.10", "xarray>=0.13.0", + "geojson>=2.5.0", ], # zip_safe=False ) From 44fe824b89d7b21745d7b23ada6d1acb954390c2 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 11 Aug 2020 20:25:46 +0530 Subject: [PATCH 1378/2289] remove comment --- modules/data.remote/inst/RpTools/RpTools/process_remote_data.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py index 74bce11312d..880500791cf 100644 --- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py @@ -36,8 +36,6 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori # get the type of the input data input_type = out_get_data - # locate the input file - # input_file = os.path.join(outdir, aoi_name, "_", input_type, ".nc") # extract the computation which is to be done output = out_process_data # construct the function name From c97d4bce6d0a5372abd3031330da9429d00a7f6b Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 12 Aug 2020 16:34:54 +0000 Subject: [PATCH 1379/2289] added functionality to download input files if input_id points to folder & a filename is specified --- apps/api/R/inputs.R | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R index 987ba405b3e..b5b9b3e65f9 100644 --- a/apps/api/R/inputs.R +++ b/apps/api/R/inputs.R @@ -132,11 +132,13 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_ #' Download the input specified by the id #' @param id Input id (character) +#' @param filename Optional filename specified if the id points to a folder instead of file (character) +#' If this is passed with an id that actually points to a file, this name will be ignored #' @return Input file specified by 
user #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get / -downloadInput <- function(input_id, req, res){ +downloadInput <- function(input_id, filename="", req, res){ dbcon <- PEcAn.DB::betyConnect() db_hostid <- PEcAn.DB::dbHostInfo(dbcon)$hostid @@ -156,10 +158,23 @@ downloadInput <- function(input_id, req, res){ res$status <- 404 return() } - else{ + else { # Generate the full file path using the file_path & file_name filepath <- paste0(input$file_path, "/", input$file_name) + # If the id points to a directory, check if 'filename' within this directory has been specified + if(dir.exists(filepath)) { + # If no filename is provided, return 400 Bad Request error + if(filename == "") { + res$status <- 400 + return() + } + + # Append the filename to the filepath + filepath <- paste0(filepath, filename) + } + + # If the file doesn't exist, return 404 error if(! file.exists(filepath)){ res$status <- 404 return() From 3ddfcf637e4f987210a8bc50991a9aed36b3bddc Mon Sep 17 00:00:00 2001 From: Tezan Sahu Date: Wed, 12 Aug 2020 16:35:38 +0000 Subject: [PATCH 1380/2289] tests & docs for added functionality --- apps/api/pecanapi-spec.yml | 14 +++++++++++++- apps/api/tests/test.inputs.R | 18 +++++++++++++++++- .../07_remote_access/01_pecan_api.Rmd | 17 ++++++++++++++++- 3 files changed, 46 insertions(+), 3 deletions(-) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index df19517bf95..21f9be50613 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -3,7 +3,7 @@ servers: - description: PEcAn API Server url: https://pecan-dev.ncsa.illinois.edu - description: PEcAn Development Server - url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/a433c80c + url: https://pecan-tezan-rstudio.ncsa.illinois.edu/p/670121ec - description: PEcAn API Test Server url: https://pecan-tezan.ncsa.illinois.edu - description: Localhost @@ -561,6 +561,12 @@ paths: required: true schema: type: string + - in: query + name: filename + description: Optional filename specified if the id points to a folder instead of file + required: false + schema: + type: string responses: '200': description: Contents of the desired input file @@ -568,6 +574,9 @@ paths: application/octet-stream: schema: type: string + format: binary + '400': + description: Bad request. 
Input ID points to directory & filename is not specified
       '401':
         description: Authentication required
       '403':
         description: Access forbidden
@@ -764,6 +773,7 @@ paths:
             application/octet-stream:
               schema:
                 type: string
+                format: binary
       '401':
         description: Authentication required

@@ -880,6 +890,7 @@ paths:
             application/octet-stream:
               schema:
                 type: string
+                format: binary
       '401':
         description: Authentication required

@@ -913,6 +924,7 @@ paths:
             application/octet-stream:
               schema:
                 type: string
+                format: binary
       '401':
         description: Authentication required

diff --git a/apps/api/tests/test.inputs.R b/apps/api/tests/test.inputs.R
index 930747b2bfa..cba2ee9e792 100644
--- a/apps/api/tests/test.inputs.R
+++ b/apps/api/tests/test.inputs.R
@@ -30,4 +30,20 @@ test_that("Calling /api/inputs/{input_id} with invalid parameters returns Statu
     httr::authenticate("carya", "illinois")
   )
   expect_equal(res$status, 404)
-})
\ No newline at end of file
+})
+
+test_that("Calling /api/inputs/{input_id}?filename={filename} with valid parameters returns Status 200", {
+  res <- httr::GET(
+    paste0("http://localhost:8000/api/inputs/295?filename=fraction.plantation"),
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 200)
+})
+
+test_that("Calling /api/inputs/{input_id}?filename={filename} with invalid parameters returns Status 404", {
+  res <- httr::GET(
+    "http://localhost:8000/api/inputs/295?filename=random",
+    httr::authenticate("carya", "illinois")
+  )
+  expect_equal(res$status, 404)
+})
diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
index f762d1eb9d9..aaec0134eac 100644
--- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
+++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd
@@ -49,7 +49,7 @@ The currently implemented functionalities include:

 * __Inputs:__
 * [`GET /api/inputs/`](#get-apiinputs): Search for inputs needed for a PEcAn workflow based on `model_id`, `site_id`, `format_id` & `host_id`.
- * [`GET /api/inputs/{input_id}`](#get-apiinputsinput_id) *: Download the desired input file
+ * [`GET /api/inputs/{input_id}`](#get-apiinputsinput_id) *: Download the desired input file (if the `input_id` corresponds to a folder, an optional `filename` argument must be provided)

 * __Workflows:__
 * [`GET /api/workflows/`](#get-apiworkflows): Retrieve a list of PEcAn workflows
@@ -810,6 +810,13 @@ res <- httr::GET(
   httr::authenticate("carya", "illinois")
 )
 writeBin(res$content, "test.2002.nc")
+
+# Download the file 'fraction.plantation' from the input directory with id = 295
+res <- httr::GET(
+  "http://localhost:8000/api/inputs/295?filename=fraction.plantation",
+  httr::authenticate("carya", "illinois")
+  )
+writeBin(res$content, "test.fraction.plantation")
 ```

 #### Python Snippet
@@ -822,6 +829,14 @@ response = requests.get(
   )
 with open("test.2002.nc", "wb") as file:
   file.write(response.content)
+
+# Download the file 'fraction.plantation' from the input directory with id = 295
+response = requests.get(
+  "http://localhost:8000/api/inputs/295?filename=fraction.plantation",
+  auth=HTTPBasicAuth('carya', 'illinois')
+  )
+with open("test.fraction.plantation", "wb") as file:
+  file.write(response.content)
 ```

 ### {-}

From eb6e2c1f4990e35243b9aa6e36c92d81754b9de9 Mon Sep 17 00:00:00 2001
From: Tezan Sahu
Date: Wed, 12 Aug 2020 18:33:30 +0000
Subject: [PATCH 1381/2289] allow to search for inputs not associated with any
 model also

---
 apps/api/R/inputs.R | 26 +++++-----
 .../07_remote_access/01_pecan_api.Rmd | 52 +++++++------------
 2 files changed, 32 insertions(+), 46 deletions(-)

diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R
index b5b9b3e65f9..986a012adc6 100644
--- a/apps/api/R/inputs.R
+++ b/apps/api/R/inputs.R
@@ -29,12 +29,6 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_
     select(hostname, machine_id=id) %>%
     inner_join(inputs, by='machine_id')

-  inputs <- tbl(dbcon, "modeltypes_formats") %>%
-    select(tag, modeltype_id, format_id, input) %>%
-    inner_join(inputs, by='format_id') %>%
-    filter(input) %>%
-    select(-input)
-
   inputs <- tbl(dbcon, "formats") %>%
     select(format_id = id, format_name = name, mimetype_id) %>%
     inner_join(inputs, by='format_id')
@@ -44,18 +38,22 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_
     inner_join(inputs, by='mimetype_id') %>%
     select(-mimetype_id)

-  inputs <- tbl(dbcon, "models") %>%
-    select(model_id = id, modeltype_id, model_name, revision) %>%
-    right_join(inputs, by='modeltype_id') %>%
-    select(-modeltype_id)
-
   inputs <- tbl(dbcon, "sites") %>%
     select(site_id = id, sitename) %>%
     inner_join(inputs, by='site_id')

   if(! is.null(model_id)) {
+    inputs <- tbl(dbcon, "modeltypes_formats") %>%
+      select(tag, modeltype_id, format_id, input) %>%
+      inner_join(inputs, by='format_id') %>%
+      filter(input) %>%
+      select(-input)
+
+    inputs <- tbl(dbcon, "models") %>%
+      select(model_id = id, modeltype_id, model_name, revision) %>%
+      inner_join(inputs, by='modeltype_id') %>%
+      filter(model_id == !!model_id) %>%
+      select(-modeltype_id, -model_id)
   }

   if(!
is.null(site_id)) { @@ -74,7 +72,7 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_ } qry_res <- inputs %>% - select(-site_id, -model_id, -format_id, -machine_id) %>% + select(-site_id, -format_id, -machine_id) %>% distinct() %>% arrange(id) %>% collect() diff --git a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd index aaec0134eac..0b15310149e 100644 --- a/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd +++ b/book_source/03_topical_pages/07_remote_access/01_pecan_api.Rmd @@ -705,29 +705,22 @@ print(jsonlite::fromJSON(rawToChar(res$content))) ``` ```R -# Get the inputs needed for a workflow with format_id = 24 (Sipnet.climna) & host_id = 2 (ebi-forecast.igb.illinois.edu) +# Get the inputs needed for a workflow with format_id = 5000000002 (AMERIFLUX_BASE_HH) & host_id = 99000000001 (docker) res <- httr::GET( - "http://localhost:8000/api/inputs/?format_id=24&host_id=2", + "http://localhost:8000/api/inputs/?format_id=5000000002&host_id=99000000001", httr::authenticate("carya", "illinois") ) print(jsonlite::fromJSON(rawToChar(res$content))) ``` ``` ## $inputs -## sitename model_name revision mimetype format_name tag hostname -## 1 Howland Forest- main tower (US-Ho1) (PalEON PHO) Model 102319 text/csv Sipnet.climna met ebi-forecast.igb.illinois.edu -## 2 Howland Forest- main tower (US-Ho1) (PalEON PHO) SIPNET r136 text/csv Sipnet.climna met ebi-forecast.igb.illinois.edu -## ... -## file_name file_path id input_name start_date end_date -## 1 US-Ho1.clim /home/share/data/dbfiles/ 232 -## 2 US-Ho1.clim /home/share/data/dbfiles/ 232 -## ... +## sitename mimetype format_name hostname file_name +## 1 Niwot Ridge Forest/LTER NWT1 (US-NR1) text/csv AMERIFLUX_BASE_HH docker AMF_US-NR1_BASE_HH_15-5 +## file_path id input_name start_date end_date +## 1 /data/dbfiles/AmerifluxLBL_site_0-772 1000011238 AmerifluxLBL_site_0-772 1998-01-01 2016-12-31 ## $count -## [1] 50 - -## $next_page -## [1] "http://localhost:8000/api/workflows/?format_id=24&host_id=2&offset=50&limit=50" +## [1] 1 ``` #### Python Snippet @@ -765,9 +758,9 @@ print(json.dumps(response.json(), indent=2)) ``` ```python -# Get the inputs needed for a workflow with format_id = 24 (Sipnet.climna) & host_id = 2 (ebi-forecast.igb.illinois.edu) +# Get the inputs needed for a workflow with format_id = 5000000002 (AMERIFLUX_BASE_HH) & host_id = 99000000001 (docker) response = requests.get( - "http://localhost:8000/api/inputs/?format_id=24&host_id=2", + "http://localhost:8000/api/inputs/?format_id=5000000002&host_id=99000000001", auth=HTTPBasicAuth('carya', 'illinois') ) print(json.dumps(response.json(), indent=2)) @@ -776,24 +769,19 @@ print(json.dumps(response.json(), indent=2)) ## { ## "inputs": [ ## { -## "sitename": "Howland Forest- main tower (US-Ho1) (PalEON PHO)", -## "model_name": "Model", -## "revision": "102319", -## "tag": "met", -## "hostname": "ebi-forecast.igb.illinois.edu", -## "file_name": "US-Ho1.clim", -## "format_name": "Sipnet.climna", +## "sitename": "Niwot Ridge Forest/LTER NWT1 (US-NR1)", +## "hostname": "docker", +## "file_name": "AMF_US-NR1_BASE_HH_15-5", +## "format_name": "AMERIFLUX_BASE_HH", ## "mimetype": "text/csv", -## "file_path": "/home/share/data/dbfiles/", -## "id": 232, -## "input_name": "", -## "start_date": "NA", -## "end_date": "NA" -## }, -## ... 
+## "file_path": "/data/dbfiles/AmerifluxLBL_site_0-772", +## "id": 1000011238, +## "input_name": "AmerifluxLBL_site_0-772", +## "start_date": "1998-01-01", +## "end_date": "2016-12-31" +## } ## ], -## "count": 50, -## "next_page": "http://localhost:8000/api/workflows/?format_id=24&host_id=2&offset=50&limit=50" +## "count": 1, ## } ``` From 1423dd4ec4a374a07c3df8399d69e471f8701aa9 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 13 Aug 2020 19:29:17 +0530 Subject: [PATCH 1382/2289] more documentation --- book_source/03_topical_pages/03_pecan_xml.Rmd | 46 ++++ .../03_topical_pages/09_standalone_tools.Rmd | 208 ++++++++++++------ 2 files changed, 191 insertions(+), 63 deletions(-) diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd index de9d1cd0ea1..8a06de20878 100644 --- a/book_source/03_topical_pages/03_pecan_xml.Rmd +++ b/book_source/03_topical_pages/03_pecan_xml.Rmd @@ -21,6 +21,7 @@ It contains the following major sections ("nodes"): - (experimental) [`state.data.assimilation`](#xml-state-data-assimilation) -- State data assimilation - (experimental) [`browndog`](#xml-browndog) -- Brown Dog configuration - (experimental) [`benchmarking`](#xml-benchmarking) -- Benchmarking + - [`remote_process`](#xml-remote_process) -- Remote data module A basic example looks like this: @@ -775,3 +776,48 @@ This information is currently used by the following R functions: Coming soon... +### Remote data module {#xml-remote_process} + +This section describes the tags required for configuring `remote_process`. + +```xml + + ... + ... + ... + ... + ... + ... + ... + ... + ... + ... + ... + ... + ... + +``` + +* `raw_mimetype`: (required) MIME type of the raw file +* `raw_formatname`: (required) format name of the raw file +* `out_get_data`: (required) type of raw output requested, e.g, bands, smap +* `source`: (required) source of remote data, e.g., gee or appeears +* `collection`: (required) dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears +* `scale`: (optional) pixel resolution required for some gee collections, recommended to use 10 for Sentinel 2 +* `projection`: (optional) type of projection. 
+* `qc`: (optional) quality control parameter, required for some gee collections
+
+These tags are only required if processed data is requested:
+
+* `out_process_data`: (optional) type of processed output requested, e.g., lai
+* `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands
+* `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS
+* `pro_mimetype`: (optional) MIME type of the processed file
+* `pro_formatname`: (optional) format name of the processed file
+
+The output data from the module are returned in the following tags:
+
+* `raw_id`: input id of the raw file
+* `raw_path`: absolute path to the raw file
+* `pro_id`: input id of the processed file
+* `pro_path`: absolute path to the processed file
\ No newline at end of file
diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index d0a42cbaacf..96df6a2c7bb 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -135,7 +135,7 @@ input.id = tbl(bety,"inputs") %>% filter(name == input_name) %>% pull(id)
 ```

 ## Remote data module
-Remote data module retrieves remote sensing data from MODISTools, Google Earth Engine and AppEEARS. The downloaded data can be used for performing further analysis in PEcAn.
+The remote data module retrieves remote sensing data from MODISTools, Google Earth Engine and AppEEARS. The downloaded data can be used while performing further analysis in PEcAn.

 #### Google Earth Engine
 [Google Earth Engine](https://earthengine.google.com/) is a cloud-based platform for performing analysis on satellite data. It provides access to a [large data catalog](https://developers.google.com/earth-engine/datasets) through an online JavaScript code editor and a Python API.

 Datasets currently available for use in PEcAn via Google Earth Engine are,

@@ -149,7 +149,7 @@ Datasets currently available for use in PEcAn via Google Earth Engine are,

 #### AppEEARS
 [AppEEARS (Application for Extracting and Exploring Analysis Ready Samples)](https://lpdaacsvc.cr.usgs.gov/appeears/) is an online tool which provides an easy to use interface for downloading analysis ready remote sensing data. [Products available on AppEEARS.](https://lpdaacsvc.cr.usgs.gov/appeears/products) Note: AppEEARS uses a task based system for processing the data request, it is possible for a task to run for long hours before it gets completed. The module checks the task status after every 60 seconds and saves the files when the task gets completed.

-Processing functions currently available are,
+#### Processing functions currently available

 * [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py)

 2. **Sign up for NASA Earthdata**. Using AppEEARS requires an Earthdata account visit this [page](https://urs.earthdata.nasa.gov/users/new) to create your own account.

-3. **Install the Python dependencies**. Using this module requires Python3 and the package manager pip to be installed in your system.
- To install the additional Python dependencies required,
-a. Navigate to `pecan/modules/data.remote/inst` If you are inside the pecan directory, this can be done by,
+3. **Install the RpTools package**. The Python code required by this module is stored in a Python package named "RpTools". Using it requires Python3 and the package manager pip to be installed on your system.
+ To install the package,
+a. Navigate to `pecan/modules/data.remote/inst/RpTools`. If you are inside the pecan directory, this can be done by,
```bash
-cd modules/data.remote/inst
+cd modules/data.remote/inst/RpTools
```
-b. Use pip to install the dependencies.
+b. Use pip to install the package. The "-e" flag installs the package in editable (develop) mode, so that changes made to the code are picked up without reinstalling.
```bash
-pip install -r requirements.txt
+pip install -e .
```
4. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials. The credentials will be stored locally on your system. This can be done by,
```bash
#this will open a browser and ask you to sign in with the Google account registered for GEE
earthengine authenticate
```
-5. **Save the Earthdata credentials (optional)**. If you do not wish to enter your credentials every time you use AppEEARS, you may save your username and password inside a JSON file and then pass its file path as an argument in `remote_process`
+5. **Save the Earthdata credentials**. If you wish to use AppEEARS, you will have to store your username and password inside a JSON file and then pass its file path as an argument in `remote_process`.

 #### Usage guide:
-This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. These functions can be used independently but the recommended way is to use `call_remote_process()`which is a main function that controls all the individual functions to create an organized way of downloading and handling remote sensing data in PEcAn.
+This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. These functions can be used independently, but the recommended way is to use `remote_process`, which is the main function that controls all the individual functions to create an organized way of downloading and storing remote sensing data in PEcAn.
+
+### Configuring `remote_process`
+
+`remote_process` is configured using remote data tags in the `pecan.xml`. The required tags are described below,
+
+```xml
+<remotedata>
+ <raw_mimetype>...</raw_mimetype>
+ <raw_formatname>...</raw_formatname>
+ <out_get_data>...</out_get_data>
+ <source>...</source>
+ <collection>...</collection>
+ <scale>...</scale>
+ <projection>...</projection>
+ <qc>...</qc>
+ <out_process_data>...</out_process_data>
+ <algorithm>...</algorithm>
+ <credfile>...</credfile>
+ <pro_mimetype>...</pro_mimetype>
+ <pro_formatname>...</pro_formatname>
+</remotedata>
+```
+
+* `raw_mimetype`: (required) MIME type of the raw file
+* `raw_formatname`: (required) format name of the raw file
+* `out_get_data`: (required) type of raw output requested, e.g., bands, smap
+* `source`: (required) source of remote data, e.g., gee or appeears
+* `collection`: (required) dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears
+* `scale`: (optional) pixel resolution required for some gee collections, recommended to use 10 for Sentinel 2. Information about how GEE handles scale can be found [here](https://developers.google.com/earth-engine/scale)
+* `projection`: (optional) type of projection. Only required for AppEEARS polygon AOI type
+* `qc`: (optional) quality control parameter, required for some gee collections
+
+These tags are only required if processed data is requested:
+
+* `out_process_data`: (optional) type of processed output requested, e.g., lai
+* `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands
+* `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS
+* `pro_mimetype`: (optional) MIME type of the processed file
+* `pro_formatname`: (optional) format name of the processed file
+
+Other input data:
+
+* start date, end date: these are taken from the `run` tag in `pecan.xml`
+* outdir: from the `outdir` tag in `pecan.xml`
+* Area of interest: the coordinates and site name are retrieved from BETY using `siteid` present in the `run` tag. These are then used to create a GeoJSON file which is used by the download functions (a sketch of such a file is shown below).
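+
+For illustration, here is a minimal sketch of how such an AOI GeoJSON could be written from R. This is not the module's actual implementation, and the site name and coordinates below are placeholders, not values pulled from BETY:
+
+```R
+# Sketch only: remote_process builds this file internally from the BETY site record.
+aoi <- list(
+  type = "FeatureCollection",
+  features = list(list(
+    type = "Feature",
+    properties = list(name = "US-WCr"),
+    geometry = list(type = "Point", coordinates = c(-90.07961, 45.805925))
+  ))
+)
+# auto_unbox = TRUE collapses length-1 vectors to scalars, giving valid GeoJSON.
+jsonlite::write_json(aoi, "US-WCr.geojson", auto_unbox = TRUE)
+```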
+
+The output data from the module are returned in the following tags:
+
+* `raw_id`: input id of the raw file
+* `raw_path`: absolute path to the raw file
+* `pro_id`: input id of the processed file
+* `pro_path`: absolute path to the processed file
+
+**Output files**:
+
+The output files are stored in a directory inside the specified outdir with the following naming convention: `source_site_siteid`
+
+Output files returned by GEE functions are in the form of netCDF files. When using AppEEARS, output is in the form of netCDF files if the AOI type is a polygon and in the form of csv files if the AOI type is a point.
+
+The output files are created with the following naming convention: `collection_scale_projection_qc_site_siteid_TimeStampOfFileCreation`
+If scale, projection or qc is not applicable for the requested product, "NA" is used in its place.

 Whenever a data product is requested the output files are stored in the inputs table of BETYdb. Subsequently when the same product is requested again with a different date range but with the same qc, scale, projection the previous file in the db would be extended. The DB would always contain only one file of the same type.

-As an example, if a file `Reykajvik_gee_s2_10.0_NA_1.0_200802174519.nc` containing Sentinel 2 bands for start date: 2018-01-01, end date: 2018-06-30 exists in the DB and the same product is requested again for a different date range one of the following cases would happen,
+As an example, if a file containing Sentinel 2 bands for start date: 2018-01-01, end date: 2018-06-30 exists in the DB and the same product is requested again for a different date range, one of the following cases would happen:

 1. New dates are ahead of the existing file: For example, if the requested dates are start: 2018-10-01, end: 2018-12-31 in this case the previous file will be extended forward meaning the effective start date of the file to be downloaded would be the day after the end date of the previous file record, i.e. 2018-07-01. The new and the previous file would be merged and the DB would now be having data for 2018-01-01 to 2018-12-31.

@@ -190,65 +251,86 @@ As an example, if a file `Reykajvik_gee_s2_10.0_NA_1.0_200802174519.nc` containi

 When a processed data product such as SNAP-LAI is requested, the raw product (here Sentinel 2 bands) used to create it would also be stored in the DB. If the raw product required for creating the processed product already exists for the requested time period, the processed product would be created for the entire time period of the raw file. For example, if Sentinel 2 bands are present in the DB for 2017-01-01 to 2017-12-31 and SNAP-LAI is requested for 2017-03-01 to 2017-07-31, the output file would be containing LAI for 2017-01-01 to 2017-12-31.

-**Input data**: The input data must be the coordinates of the area of interest (point or polygon type) and the name of the site/AOI. These information must be provided in a GeoJSON file.
-
-**Output data**: The output data returned by the GEE are in the form of netCDF files. When using AppEEARS, output is in the form of netCDF files if the AOI type is a polygon and in the form of csv files if the AOI type is a point.
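+
+To make the extension rules above concrete, here is a minimal sketch of the forward-extension case in R. The variable names are illustrative and this is not the actual `remote_process` implementation:
+
+```R
+db_start  <- as.Date("2018-01-01")  # start date of the existing DB record
+db_end    <- as.Date("2018-06-30")  # end date of the existing DB record
+req_start <- as.Date("2018-10-01")  # requested start date
+req_end   <- as.Date("2018-12-31")  # requested end date
+
+if (req_start > db_end) {
+  # Requested dates are ahead of the existing file: download only the missing
+  # tail, starting the day after the existing record ends.
+  start <- db_end + 1   # effective start date, i.e. 2018-07-01
+  end   <- req_end
+  # After merging, the updated record spans 2018-01-01 to 2018-12-31.
+}
+# The other overlap cases are handled analogously.
+```
+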
+#### Example use (GEE)
+This example will download Sentinel 2 bands and then use the SNAP algorithm to compute Leaf Area Index.
+
+1. Add remotedata tag to `pecan.xml` and configure it.
+
+```xml
+<remotedata>
+ <raw_mimetype>raw_mimetype</raw_mimetype>
+ <raw_formatname>raw_formatname</raw_formatname>
+ <out_get_data>bands</out_get_data>
+ <source>gee</source>
+ <collection>COPERNICUS/S2_SR</collection>
+ <scale>10</scale>
+ <projection></projection>
+ <qc>1</qc>
+ <algorithm>snap</algorithm>
+ <credfile></credfile>
+ <pro_mimetype>pro_mimetype</pro_mimetype>
+ <pro_formatname>pro_formatname</pro_formatname>
+ <out_process_data>lai</out_process_data>
+</remotedata>
+```
-The output files are created with the following naming convention: sitename_source_collection_scale_projection_qc_TimeStampOfFileCreation
-If scale, projection or qc is not applicable for the requested product "NA" would be put.
+
+2. Store the contents of `pecan.xml` in a variable named `settings` and pass it to `remote_process`.

-**scale**: Some of the GEE functions require a pixel resolution argument. Information about how GEE handles scale can be found out [here](https://developers.google.com/earth-engine/scale)
+```
+PEcAn.data.remote::remote_process(settings)
+```

-#### Example use (GEE)
-This example will download Sentinel 2 bands for an area in Reykjavik(test file is included) for the time period 2018-01-01 to 2018-12-31 and then use the SNAP algorithm to compute Leaf Area Index.
-
-1. Open a Python shell at `pecan/modules/data.remote/inst`
-
-2. In the Python shell run the following,
-```python
-# import remote_process
-from remote_process import remote_process
-
-# call remote_process
-remote_process(
- geofile="./satellitetools/test.geojson",
- outdir="./out",
- start="2018-01-01",
- end="2018-12-31",
- source="gee",
- collection="COPERNICUS/S2_SR",
- scale=10,
- qc=1,
- algorithm="snap",
- output={"get_data": "bands", "process_data": "lai"},
- stage={"get_data": True, "process_data": True},
- )
-```
-
-The output netCDF files(bands and LAI) will be saved at `./out`
-More information about the function and its arguments can be found out by `help(remote_process)`
+The output netCDF files (bands and LAI) will be saved at outdir and their records would be kept in the inputs table of BETYdb.

 #### Example use (AppEEARS)
-This example will download the layers of a SMAP product(SPL3SMP_E.003) for an area in Reykjavik(test file is included) for the time period 2018-01-01 to 2018-01-31
+This example will download the layers of a SMAP product (SPL3SMP_E.003).
+
+1. Add remotedata tag to `pecan.xml` and configure it.
+
+```xml
+<remotedata>
+ <raw_mimetype>raw_mimetype</raw_mimetype>
+ <raw_formatname>raw_formatname</raw_formatname>
+ <out_get_data>smap</out_get_data>
+ <source>appeears</source>
+ <collection>SPL3SMP_E.003</collection>
+ <scale></scale>
+ <projection>native</projection>
+ <qc></qc>
+ <algorithm></algorithm>
+ <credfile></credfile>
+ <pro_mimetype></pro_mimetype>
+ <pro_formatname></pro_formatname>
+ <out_process_data></out_process_data>
+</remotedata>
+```

-1. Open a Python shell at `pecan/modules/data.remote/inst`
+2. Store the contents of `pecan.xml` in a variable named `settings` and pass it to `remote_process`.

-2. In the Python shell run the following,
-```python
-# import remote_process
-from remote_process import remote_process
-
-# call remote_process
-remote_process(
- geofile="./satellitetools/test.geojson",
- outdir="./out",
- start="2018-01-01",
- end="2018-01-31",
- source="appeears",
- collection="SPL3SMP_E.003",
- projection="native",
- stage={"get_data": True, "process_data": False},
- )
 ```
-This will approximately take 20 minutes to complete and the output netCDF file will be saved at `./out`
+PEcAn.data.remote::remote_process(settings)
+```
+
+This will approximately take 20 minutes to complete and the output netCDF file will be saved at outdir and its record would be kept in the inputs table of BETYdb.
+
+
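+In both examples the resulting netCDF file can be inspected directly from R. A quick sanity check might look like the following (the file path below is purely illustrative; actual names follow the conventions described above):
+
+```R
+# Open the downloaded file and list the variables it contains.
+nc <- ncdf4::nc_open("/outdir/gee_site_0-676/example_output.nc")
+names(nc$var)
+ncdf4::nc_close(nc)
+```
+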
+#### Adding new GEE image collections
+
+Once you have the Python script for downloading the collection from GEE, please do the following to integrate it with this module.
+
+1. Make sure that the function and script names are the same and named in the following way: `gee2pecan_pecancodeofimagecollection`
+ `pecancodeofimagecollection` can be any name which you want to use to represent the collection in an easier way. For example, the collection `NASA_USDA/HSL/SMAP_soil_moisture` has the pecancode `smap`. Add this information to the `collection_lut` data frame in `remote_process.R`
+
+ Additionally, ensure that the function accepts and uses the following arguments,
+ * `geofile` - (str) GeoJSON file containing AOI information of the site
+ * `outdir` - (str) path where the output file has to be saved
+ * `start` - (str) start date in the form YYYY-MM-DD
+ * `end` - (str) end date in the form YYYY-MM-DD
+ * `scale` and `qc` if applicable.
+
+2. Make sure the output file is of netCDF or CSV type and follows the naming convention described above.
+
+3. Store the Python script at `pecan/modules/data.remote/inst/RpTools/RpTools`
+
+After performing these steps the script will be integrated with the remote data module and will be ready to use.

From 93b2389689d25ee3147efe1855b61210085618b6 Mon Sep 17 00:00:00 2001
From: Katie Zarada
Date: Thu, 13 Aug 2020 14:33:06 -0400
Subject: [PATCH 1383/2289] updating GEFS download to start at time 0

---
 .../data.atmosphere/R/download.NOAA_GEFS.R | 25 ++++++++++-
 .../R/download.NOAA_GEFS_downscale.R | 43 ++++++++++++++-----
 .../R/downscale_ShortWave_to_hrly.R | 28 ++++++------
 3 files changed, 69 insertions(+), 27 deletions(-)

diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R
index eda2354cac5..475b6d1d4d8 100644
--- a/modules/data.atmosphere/R/download.NOAA_GEFS.R
+++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R
@@ -130,9 +130,30 @@ download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, sitename, start_date =
 #Downloading the data here. It is stored in a matrix, where columns represent time in intervals of 6 hours, and rows represent
 #each ensemble member. Each variable gets its own matrix, which is stored in the list noaa_data.
 for (i in 1:length(noaa_var_names)) {
- noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1:increments, forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data
+ noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments + 1), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data
 }
 
+
+ ### ERROR CHECK FOR FIRST TIME POINT ###
+
+ #Check if first time point is present, if not, grab from previous forecast
+
+ index <- which(lengths(noaa_data) == increments * 21) #finding which ones are missing first point
+ new_start = start_date - lubridate::hours(6) #grab previous forecast
+ new_forecast_hour = (lubridate::hour(new_start) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6
+ new_forecast_hour = sprintf("%04d", new_forecast_hour * 100) #Having the forecast hour as a string is useful later, too.
+
+ filled_noaa_data = list()
+
+ for (i in index) {
+ filled_noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1, forecast_time = new_forecast_hour, date=format(new_start, "%Y%m%d"))$data
+ }
+
+ #add filled data into first slot of forecast
+ for(i in index){
+ noaa_data[[i]] = cbind(filled_noaa_data[[i]], noaa_data[[i]])}
+
+
 #Fills in data with NaNs if there happens to be missing columns.
 for (i in 1:length(noaa_var_names)) {
 if (!is.null(ncol(noaa_data[[i]]))) { # Is a matrix
@@ -219,7 +240,7 @@ download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, sitename, start_date =
 #to comply with the PEcAn standard).
 time_dim = ncdf4::ncdim_def(name="time",
 paste(units="hours since", format(start_date, "%Y-%m-%dT%H:%M")),
- seq(6, 6 * increments, by = 6),
+ seq(0, 6 * increments, by = 6),
 create_dimvar = TRUE)
 lat_dim = ncdf4::ncdim_def("latitude", "degree_north", lat.in, create_dimvar = TRUE)
 lon_dim = ncdf4::ncdim_def("longitude", "degree_east", lon.in, create_dimvar = TRUE)
diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
index e70c64baffd..904a1f9d10e 100644
--- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
+++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
@@ -28,7 +28,7 @@
 ##' @param start_date, end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)
 ##' @param lat site latitude in decimal degrees
 ##' @param lon site longitude in decimal degrees
-##' @param sitename The unique ID given to each site. This is used as part of the file name.
+##' @param site_id The unique ID given to each site. This is used as part of the file name.
 ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists?
 ##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.
 ##' @param ... Other arguments, currently ignored
-download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, start_date, end_date,
+download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, start_date, end_date,
 overwrite = FALSE, verbose = FALSE, ...) {
 
 start_date <- as.POSIXct(start_date, tz = "UTC")
@@ -140,9 +140,30 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st
 
 
 for (i in 1:length(noaa_var_names)) {
- noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data
+ noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments + 1), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data
 }
 
+
+ ### ERROR CHECK FOR FIRST TIME POINT ###
+
+ #Check if first time point is present, if not, grab from previous forecast
+
+ index <- which(lengths(noaa_data) == increments * 21) #finding which ones are missing first point
+ new_start = start_date - lubridate::hours(6) #grab previous forecast
+ new_forecast_hour = (lubridate::hour(new_start) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6
+ new_forecast_hour = sprintf("%04d", new_forecast_hour * 100) #Having the forecast hour as a string is useful later, too.
+
+ filled_noaa_data = list()
+
+ for (i in index) {
+ filled_noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1, forecast_time = new_forecast_hour, date=format(new_start, "%Y%m%d"))$data
+ }
+
+ #add filled data into first slot of forecast
+ for(i in index){
+ noaa_data[[i]] = cbind(filled_noaa_data[[i]], noaa_data[[i]])}
+
+
 #Fills in data with NaNs if there happens to be missing columns.
 for (i in 1:length(noaa_var_names)) {
 if (!is.null(ncol(noaa_data[[i]]))) { # Is a matrix
@@ -203,7 +224,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st
 
 ##################################### 
 #done with data processing- now want to take the list and make one df for downscaling
- time = seq(from = start_date + lubridate::hours(6), to = end_date, by = "6 hour")
+ time = seq(from = start_date, to = end_date, by = "6 hour")
 forecasts = matrix(ncol = length(noaa_data)+ 2, nrow = 0)
 colnames(forecasts) <- c(cf_var_names, "timestamp", "NOAA.member")
@@ -297,7 +318,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st
 #to comply with the PEcAn standard).
 time_dim = ncdf4::ncdim_def(name="time",
 units = paste("hours since", format(start_date, "%Y-%m-%dT%H:%M")),
- seq(from = 6, length.out = length(unique(joined$timestamp))), #GEFS forecast starts 6 hours from start time
+ seq(from = 0, length.out = length(unique(joined$timestamp))), #GEFS forecast now starts at hour 0, the first requested time
 create_dimvar = TRUE)
 lat_dim = ncdf4::ncdim_def("latitude", "degree_north", lat.in, create_dimvar = TRUE)
 lon_dim = ncdf4::ncdim_def("longitude", "degree_east", lon.in, create_dimvar = TRUE)
@@ -312,20 +333,20 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, sitename, st
 
 #For each ensemble
 for (i in 1:21) { # i is the ensemble number
 #Generating a unique identifier string that characterizes a particular data set.
- identifier = paste("NOAA_GEFS_downscale", sitename, i, format(start_date, "%Y-%m-%dT%H:%M"),
- format(end_date, "%Y-%m-%dT%H:%M"), sep=".")
+ identifier = paste("NOAA_GEFS_downscale", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"),
+ format(end_date, "%Y-%m-%dT%H:%M"), sep="_")
 
- ensemble_folder = file.path(outfolder, identifier)
+ #ensemble_folder = file.path(outfolder, identifier)
 data = as.data.frame(joined %>%
 dplyr::select(NOAA.member, cf_var_names1) %>%
 dplyr::filter(NOAA.member == i) %>%
 dplyr::select(-NOAA.member))
 
 #Each file will go in its own folder.
- if (!dir.exists(ensemble_folder)) {
- dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)
+ if (!dir.exists(outfolder)) {
+ dir.create(outfolder, recursive=TRUE, showWarnings = FALSE)
 }
 
- flname = file.path(ensemble_folder, paste(identifier, "nc", sep = "."))
+ flname = file.path(outfolder, paste(identifier, "nc", sep = "."))
 
 #Each ensemble member gets its own unique data frame, which is stored in results_list
 #Object references in R work differently than in other languages.
When adding an item to a list, R creates a copy of it
diff --git a/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R b/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R
index 471c5ec93a6..b5f146fba0c 100644
--- a/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R
+++ b/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R
@@ -27,20 +27,20 @@ downscale_ShortWave_to_hrly <- function(debiased, time0, time_end, lat, lon, out
 rpot <- 1366 * cosz
 return(rpot)
 }
- grouping = append("NOAA.member", "timestamp")
-
- surface_downwelling_shortwave_flux_in_air<- rep(debiased$surface_downwelling_shortwave_flux_in_air, each = 6)
- time = rep(seq(from = as.POSIXct(time0 - lubridate::hours(5), tz = output_tz), to = as.POSIXct(time_end, tz = output_tz), by = 'hour'), times = 21)
-
- ShortWave.hours <- as.data.frame(surface_downwelling_shortwave_flux_in_air)
- ShortWave.hours$timestamp = time
- ShortWave.hours$NOAA.member = rep(debiased$NOAA.member, each = 6)
- ShortWave.hours$hour = as.numeric(format(time, "%H"))
- ShortWave.hours$group = rep(seq(1, length(debiased$NOAA.member)/6), each= 6)
-
-
-
- ShortWave.ds <- ShortWave.hours %>%
+ grouping = append("NOAA.member", "timestamp")
+
+ surface_downwelling_shortwave_flux_in_air <- rep(debiased$surface_downwelling_shortwave_flux_in_air, each = 6)
+ time = rep(seq(from = as.POSIXct(time0, tz = output_tz), to = as.POSIXct(time_end + lubridate::hours(5), tz = output_tz), by = 'hour'), times = 21)
+
+ ShortWave.hours <- as.data.frame(surface_downwelling_shortwave_flux_in_air)
+ ShortWave.hours$timestamp = time
+ ShortWave.hours$NOAA.member = rep(debiased$NOAA.member, each = 6)
+ ShortWave.hours$hour = as.numeric(format(time, "%H"))
+ ShortWave.hours$group = as.numeric(as.factor(format(ShortWave.hours$timestamp, "%d")))
+
+
+
+ ShortWave.ds <- ShortWave.hours %>%
 dplyr::mutate(doy = lubridate::yday(timestamp) + hour/24) %>%
 dplyr::mutate(rpot = downscale_solar_geom(doy, lon, lat)) %>% # hourly sw flux calculated using solar geometry
 dplyr::group_by_at(c("group", "NOAA.member")) %>%

From 094bd7adc83fbbaecddeea73f231b2bc7b198a24 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Thu, 13 Aug 2020 14:45:52 -0400
Subject: [PATCH 1384/2289] updating file path for nc files

---
 modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
index 904a1f9d10e..0daa1cf34ac 100644
--- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
+++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R
@@ -341,7 +341,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, sta
 dplyr::filter(NOAA.member == i) %>%
 dplyr::select(-NOAA.member))
 
-#Each file will go in its own folder.
+ if (!dir.exists(outfolder)) { dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) } From 02f2ab9c49e27270b66a0f0caa912556962fab96 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 14 Aug 2020 09:03:57 +0530 Subject: [PATCH 1385/2289] better log messages --- .../03_topical_pages/09_standalone_tools.Rmd | 2 +- modules/data.remote/R/remote_process.R | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 96df6a2c7bb..794e0648243 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -149,7 +149,7 @@ Datasets currently available for use in PEcAn via Google Earth Engine are, #### AppEEARS [AppEEARS (Application for Extracting and Exploring Analysis Ready Samples)](https://lpdaacsvc.cr.usgs.gov/appeears/) is an online tool which provides an easy to use interface for downloading analysis ready remote sensing data. [Products available on AppEEARS.](https://lpdaacsvc.cr.usgs.gov/appeears/products) Note: AppEEARS uses a task based system for processing the data request, it is possible for a task to run for long hours before it gets completed. The module checks the task status after every 60 seconds and saves the files when the task gets completed. -Processing functions currently available are, +#### Processing functions currently available are, * [SNAP](https://github.com/PecanProject/pecan/blob/develop/modules/data.remote/inst/satellitetools/biophys_xarray.py) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 9bf143b0c4c..5e1983d9ca2 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -12,7 +12,7 @@ ##' remote_process <- function(settings) { - # information about the date variables used in call_remote_process - + # information about the date variables used in remote_process - # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB # start, end : effective start, end dates created after checking the DB status. 
These dates are sent to remote_process for downloading and processing data # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB @@ -262,7 +262,7 @@ remote_process <- function(settings) { existing_pro_file_path <- NULL } else{ # no data of requested type exists - PEcAn.logger::logger.info("no data of requested type exists") + PEcAn.logger::logger.info("Requested data does not exist in the DB, retrieving for the first time") flag <- 1 start <- req_start end <- req_end @@ -341,7 +341,7 @@ remote_process <- function(settings) { } else{ if (flag == 1) { # no processed and rawfile are present - PEcAn.logger::logger.info("inserting raw and processed files for the first time") + PEcAn.logger::logger.info("Inserting raw and processed files for the first time") # insert processed data pro_ins <- PEcAn.DB::dbfile.input.insert( @@ -372,7 +372,7 @@ remote_process <- function(settings) { raw_path <- output$raw_data_path } else if (flag == 2) { # requested processed file does not exist but the raw file used to create it exists within the required timeline - PEcAn.logger::logger.info("inserting processed file for the first time") + PEcAn.logger::logger.info("Inserting processed file for the first time") pro_ins <- PEcAn.DB::dbfile.input.insert( in.path = output$process_data_path, @@ -428,7 +428,7 @@ remote_process <- function(settings) { raw_id <- raw_check$id raw_path <- output$raw_data_path pro_path <- output$process_data_path - PEcAn.logger::logger.info("updating processed and raw files") + PEcAn.logger::logger.info("Updating processed and raw files") PEcAn.DB::db.query( sprintf( "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", @@ -473,7 +473,7 @@ remote_process <- function(settings) { pro_path <- output$process_data_path raw_id <- raw_check$id raw_path <- raw_check$file_path - PEcAn.logger::logger.info("updating the existing processed file") + PEcAn.logger::logger.info("Updating the existing processed file") PEcAn.DB::db.query( sprintf( "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", @@ -498,7 +498,7 @@ remote_process <- function(settings) { pro_id <- pro_check$id pro_path <- output$pro_data_path raw_path <-output$raw_data_path - PEcAn.logger::logger.info("replacing the existing processed file and creating a new raw file") + PEcAn.logger::logger.info("Replacing the existing processed file and creating a new raw file") PEcAn.DB::db.query( sprintf( "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", From 695cc62f53986a440ffd685aa9ae19e34efe78f0 Mon Sep 17 00:00:00 2001 From: Juliette Bateman Date: Fri, 14 Aug 2020 15:30:50 -0400 Subject: [PATCH 1386/2289] Soil moisture download functions for NEFI sites --- .../US_Harvard/download_soilmoist_harvard.R | 56 +++++++++++++++++++ .../inst/NEFI/US_Syv/download_soilmoist_Syv.R | 40 +++++++++++++ .../inst/NEFI/US_WCr/download_soilmoist_WCr.R | 31 ++++++---- .../NEFI/US_WLEF/download_soilmoist_WLEF.R | 40 +++++++++++++ 4 files changed, 155 insertions(+), 12 deletions(-) create mode 100644 modules/assim.sequential/inst/NEFI/US_Harvard/download_soilmoist_harvard.R create mode 100644 modules/assim.sequential/inst/NEFI/US_Syv/download_soilmoist_Syv.R create mode 100644 modules/assim.sequential/inst/NEFI/US_WLEF/download_soilmoist_WLEF.R diff --git a/modules/assim.sequential/inst/NEFI/US_Harvard/download_soilmoist_harvard.R b/modules/assim.sequential/inst/NEFI/US_Harvard/download_soilmoist_harvard.R new file mode 100644 index 
00000000000..b6ba2937b2c --- /dev/null +++ b/modules/assim.sequential/inst/NEFI/US_Harvard/download_soilmoist_harvard.R @@ -0,0 +1,56 @@ +library(tidyr) +library(tidyverse) +library(lubridate) +library(PEcAn.all) + +download_soilmoist_Harvard <- function(start_date, end_date) { + + if(end_date > Sys.Date()) end_date = Sys.Date() + if(start_date < as.Date("2019-11-06")) start_date = "2019-11-06" + date = seq(from = as.Date(start_date), to = as.Date(end_date), by = 'days') + data = NA + for(i in 1:length(date)){ + + yy <- strftime(date[i], format = "%y") + doy <- strftime(date[i], format = "%j") + #my_host <- list(name = "geo.bu.edu", user = 'kzarada', tunnel = "/tmp/tunnel") + + #try(remote.copy.from(host = my_, src = paste0('/projectnb/dietzelab/NEFI_data/HFEMS_prelim_', yy, '_', doy, '_dat.csv'), + #dst = paste0('/fs/data3/kzarada/NEFI/US_Harvard/Flux_data/', yy, doy,'.csv'), delete=FALSE)) + + + + if(file.exists(paste0('/projectnb/dietzelab/NEFI_data/HFEMS_prelim_', yy, '_', doy, '_dat.csv'))){ + data1 = read.csv(paste0('/projectnb/dietzelab/NEFI_data/HFEMS_prelim_', yy, '_', doy, '_dat.csv'), header = T, sep = "") + data = rbind(data, data1)} + + } + + + data <- data %>% + drop_na(TIME_START.YYYYMMDDhhmm) %>% + mutate(Time = lubridate::with_tz(as.POSIXct(strptime(TIME_START.YYYYMMDDhhmm, format = "%Y%m%d%H%M", tz = "EST"),tz = "EST"), tz = "UTC")) %>% + dplyr::select(Time, SWC15) + + colnames(data) <- c("Time", "SWC15") + + + Time = lubridate::force_tz(seq(from = as.POSIXct(start_date), to = as.POSIXct(end_date), by = "30 mins"), tz = "UTC") + + data.full = data.frame(Time, SWC15 = rep(NA, length(Time))) + + + + + match(Time, data$Time) + + data.full$SWC15 <- data$SWC15[match(Time, data$Time)] + + + + return(data.full) +} + + +#manually check if files are available +#read.csv('ftp://ftp.as.harvard.edu/pub/exchange/jwm/Forecast_data/HFEMS_prelim_20_196_dat.csv') diff --git a/modules/assim.sequential/inst/NEFI/US_Syv/download_soilmoist_Syv.R b/modules/assim.sequential/inst/NEFI/US_Syv/download_soilmoist_Syv.R new file mode 100644 index 00000000000..e8c022c8c45 --- /dev/null +++ b/modules/assim.sequential/inst/NEFI/US_Syv/download_soilmoist_Syv.R @@ -0,0 +1,40 @@ +download_soilmoist_Syv <- function(start_date, end_date) { + base_url <- "http://co2.aos.wisc.edu/data/cheas/sylvania/flux/prelim/clean/ameriflux/US-Syv_HH_" + + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) + + # Reading in the data + raw.data <- start_year:end_year %>% + purrr::map_df(function(syear) { + influx <- + tryCatch( + read.table( + paste0(base_url, syear, "01010000_", syear+1, "01010000.csv"), + sep = ",", + header = TRUE, stringsAsFactors = F + ) %>% + apply(2, trimws) %>% + apply(2, as.character) %>% + data.frame(stringsAsFactors = F), + error = function(e) { + NULL + }, + warning = function(e) { + NULL + } + ) + }) %>% + mutate_all(funs(as.numeric)) + + #Constructing the date based on the columns we have + raw.data$Time <-as.POSIXct(as.character(raw.data$TIMESTAMP_START), + format="%Y%m%d%H%M", tz="UTC") + # Some cleaning and filtering + raw.data <- raw.data %>% + dplyr::select(SWC_1_1_1, Time) %>% + na_if(-9999) %>% + filter(Time >= start_date & Time <=end_date) + colnames(raw.data) <- c('avgsoil', 'Time') + return(raw.data) +} diff --git a/modules/assim.sequential/inst/NEFI/US_WCr/download_soilmoist_WCr.R b/modules/assim.sequential/inst/NEFI/US_WCr/download_soilmoist_WCr.R index dc87a19a869..6b64599b979 100644 --- 
a/modules/assim.sequential/inst/NEFI/US_WCr/download_soilmoist_WCr.R
+++ b/modules/assim.sequential/inst/NEFI/US_WCr/download_soilmoist_WCr.R
@@ -1,5 +1,6 @@
 download_soilmoist_WCr <- function(start_date, end_date) {
 base_url <- "http://co2.aos.wisc.edu/data/cheas/wcreek/flux/prelim/clean/ameriflux/US-WCr_HH_"
+
 start_year <- lubridate::year(start_date)
 end_year <- lubridate::year(end_date)
 
@@ -28,18 +29,24 @@ download_soilmoist_WCr <- function(start_date, end_date) {
 
 #Constructing the date based on the columns we have
 if(dim(raw.data)[1] > 0 & dim(raw.data)[2] > 0){
- raw.data$Time <-as.POSIXct(as.character(raw.data$TIMESTAMP_START),
- format="%Y%m%d%H%M", tz="UTC")
- # Some cleaning and filtering
- raw.data <- raw.data %>%
- dplyr::select(SWC_1_1_1, SWC_1_2_1, SWC_1_3_1, SWC_1_4_1, SWC_1_5_1, Time) %>%
- na_if(-9999) %>%
- filter(Time >= start_date & Time <=end_date)
- 
- #get average soil moisture
- 
- raw.data$avgsoil <- raw.data$SWC_1_2_1*0.12 + raw.data$SWC_1_3_1*0.16 + raw.data$SWC_1_4_1*0.32 + raw.data$SWC_1_5_1*0.4
- raw.data <- raw.data %>% dplyr::select(Time, avgsoil)
+ raw.data$Time <-as.POSIXct(as.character(raw.data$TIMESTAMP_START),
+ format="%Y%m%d%H%M", tz="UTC")
+ # Some cleaning and filtering
+ # SWC originally has units = % at depths 2_1 = 5cm, 2_2 = 10cm, 2_3 = 20cm, 2_4 = 30cm, 2_5 = 40cm, 2_6 = 50cm
+ raw.data <- raw.data %>%
+ dplyr::select(SWC_1_1_1, SWC_1_2_1, SWC_1_3_1, SWC_1_4_1, SWC_1_5_1, SWC_2_1_1, SWC_2_2_1, SWC_2_3_1, SWC_2_4_1, SWC_2_5_1, SWC_2_6_1, Time) %>%
+ na_if(-9999) %>%
+ filter(Time >= start_date & Time <=end_date)
+ 
+ #get average soil moisture
+ 
+ #with all depths
+ #raw.data$avgsoil <- raw.data$SWC_2_1_1*.05 + raw.data$SWC_2_2_1*.10 + raw.data$SWC_2_3_1*.20 + raw.data$SWC_2_4_1*.30 + raw.data$SWC_2_5_1*.40 + raw.data$SWC_2_6_1*.50
+ 
+ #shallow depths (<30cm)
+ raw.data$avgsoil2 <- raw.data$SWC_2_1_1 #*.05 + raw.data$SWC_2_2_1*.10 + raw.data$SWC_2_3_1*.20
+ raw.data$avgsoil1 <- raw.data$SWC_1_2_1*0.12 + raw.data$SWC_1_3_1*0.16 + raw.data$SWC_1_4_1*0.32 + raw.data$SWC_1_5_1*0.4 #old sensor
+ raw.data <- raw.data %>% dplyr::select(Time, avgsoil1, avgsoil2)
 }else(raw.data <- NULL)
 return(raw.data)
 }
diff --git a/modules/assim.sequential/inst/NEFI/US_WLEF/download_soilmoist_WLEF.R b/modules/assim.sequential/inst/NEFI/US_WLEF/download_soilmoist_WLEF.R
new file mode 100644
index 00000000000..d52daa6a8b1
--- /dev/null
+++ b/modules/assim.sequential/inst/NEFI/US_WLEF/download_soilmoist_WLEF.R
@@ -0,0 +1,40 @@
+download_soilmoist_WLEF <- function(start_date, end_date) {
+ base_url <- "http://co2.aos.wisc.edu/data/cheas/wlef/flux/prelim/clean/ameriflux/US-PFa_HR_"
+
+ start_year <- lubridate::year(start_date)
+ end_year <- lubridate::year(end_date)
+
+ # Reading in the data
+ raw.data <- start_year:end_year %>%
+ purrr::map_df(function(syear) {
+ influx <-
+ tryCatch(
+ read.table(
+ paste0(base_url, syear, "01010000_", syear+1, "01010000.csv"),
+ sep = ",",
+ header = TRUE, stringsAsFactors = F
+ ) %>%
+ apply(2, trimws) %>%
+ apply(2, as.character) %>%
+ data.frame(stringsAsFactors = F),
+ error = function(e) {
+ NULL
+ },
+ warning = function(e) {
+ NULL
+ }
+ )
+ }) %>%
+ mutate_all(funs(as.numeric))
+
+ #Constructing the date based on the columns we have
+ raw.data$Time <-as.POSIXct(as.character(raw.data$TIMESTAMP_START),
+ format="%Y%m%d%H%M", tz="UTC")
+ # Some cleaning and filtering
+ raw.data <- raw.data %>%
+ dplyr::select(SWC_1_1_1, Time) %>%
+ na_if(-9999) %>%
+ filter(Time >= start_date & Time <=end_date)
+ colnames(raw.data) <- c('avgsoil',
'Time') + return(raw.data) +} From afcb65c257a59951f19cf9d11c3dded9f6b5a0bb Mon Sep 17 00:00:00 2001 From: Juliette Bateman Date: Fri, 14 Aug 2020 15:32:43 -0400 Subject: [PATCH 1387/2289] Script that compares field soil moisture to SMAP --- .../data.remote/inst/FieldvsSMAP_compare.R | 622 ++++++++++++++++++ 1 file changed, 622 insertions(+) create mode 100644 modules/data.remote/inst/FieldvsSMAP_compare.R diff --git a/modules/data.remote/inst/FieldvsSMAP_compare.R b/modules/data.remote/inst/FieldvsSMAP_compare.R new file mode 100644 index 00000000000..c4314260ad9 --- /dev/null +++ b/modules/data.remote/inst/FieldvsSMAP_compare.R @@ -0,0 +1,622 @@ +#load necessities +library(tidyverse) +library(hrbrthemes) +library(plotly) +library(patchwork) +library(babynames) +library(viridis) +library(purrr) +library(lubridate) +library(tidyr) +library(dplyr) +library(ncdf4) + +par(mfrow = c(1,2)) + +#set start and end dates +start = "2019-04-01" +end = as.character(Sys.Date()) + + + + ############################## + ######## WILLOW CREEK ######## + ############################## + + ######## Download Ameriflux field data######## + +#download and get daily average +source("/fs/data3/jbateman/pecan/modules/assim.sequential/inst/NEFI/US_WCr/download_soilmoist_WCr.R") +sm_wcr = download_soilmoist_WCr(start, end) %>% + dplyr::mutate(Day = lubridate::day(Time), Month = lubridate::month(Time), Year = lubridate::year(Time)) %>% + group_by(Year, Month, Day) +sm_wcr$Date = as.Date(with(sm_wcr, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") + +sm_wcr.Dayavg = sm_wcr %>% + summarise(DayAvgsm1 = mean(avgsoil1)) %>% + ungroup() +sm_wcr.Dayavg2= sm_wcr %>% + summarise(DayAvgsm2 = mean(avgsoil2)) %>% + ungroup() +sm_wcr.Dayavg$DayAvgsm2 =sm_wcr.Dayavg2$DayAvgsm2 +sm_wcr.Dayavg$Date = as.Date(with(sm_wcr.Dayavg, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_wcr.Dayavg = sm_wcr.Dayavg %>% dplyr::select(Date, DayAvgsm1, DayAvgsm2) + + + + ######## Download SMAP data ######## +geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst/" +smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" +site_info <- list( + site_id = 676, + site_name = "Willow Creek", + lat = 45.805925, + lon = -90.07961, + time_zone = "UTC") + +wcr.smap_sm = download_SMAP_gee2pecan(start, end, site_info, geoJSON_outdir, smap_outdir) + +##### plot time series + +# Daily sm average +wcr.d = ggplot() + + geom_line(data = na.omit(wcr.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(wcr.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + geom_line(data = sm_wcr.Dayavg, aes(x=Date, y=DayAvgsm1, color = "red"), linetype = "dashed") + + geom_line(data = sm_wcr.Dayavg, aes(x=Date, y=DayAvgsm2, color = "purple"), linetype = "dashed") + + ylim(0,60) + + ggtitle("SMAP vs Daily Field Data: Willow Creek") + + labs(x = "Date", + y = "Soil Moisture (%)" , + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red", "purple"), + labels = c("SMAP", "Old Field", "New Field"), + guide = "legend") + + theme( + legend.position = "none", + legend.title = element_blank()) + +# 1/2 hr field data vs daily smap (6am) +wcr.half = ggplot() + + geom_line(data = sm_wcr, aes(x=Date, y=avgsoil1, color="red"), linetype ="solid") + + geom_line(data = sm_wcr, aes(x=Date, y=avgsoil2, color="purple"), linetype ="solid") + + geom_line(data = na.omit(wcr.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(wcr.smap_sm), aes(x=Date, 
y=ssm.vol, color="steel blue"), size=1) + + ggtitle("SMAP vs 1/2 hr Field Data: Willow Creek") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red", "purple"), + labels = c("SMAP", "Old Field", "New Field"), + guide = "legend") + + theme( + legend.position = "bottom", + legend.title = element_blank()) + +require(gridExtra) +grid.arrange(wcr.d, wcr.half) + + + ############################## + ######## SYLVANIA ######## + ############################## + +######## Download Ameriflux field data######## + +#download and get daily average +source("/fs/data3/jbateman/pecan/modules/assim.sequential/inst/NEFI/US_Syv/download_soilmoist_Syv.R") +sm_syv = download_soilmoist_Syv(start, end) %>% + mutate(Day = day(Time), Month = month(Time), Year = year(Time)) %>% + group_by(Year, Month, Day) +sm_syv$Date = as.Date(with(sm_syv, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") + +sm_syv.Dayavg = sm_syv %>% + summarise(DayAvgsm = mean(avgsoil)) %>% + ungroup() +sm_syv.Dayavg$Date = as.Date(with(sm_syv.Dayavg, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_syv.Dayavg = sm_syv.Dayavg %>% dplyr::select(Date, DayAvgsm) + + + +######## Download SMAP ssm data ######## +geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" +smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" + +site_info <- list( + site_id = 622, + site_name = "Sylvania", + lat = 46.242017, + lon = -89.347567, + time_zone = "UTC") +syv.smap_sm = download_SMAP_gee2pecan(start, end, site_info, geoJSON_outdir, smap_outdir) + +##### plot time series + +# Daily sm average +syv.d = ggplot() + + geom_line(data = na.omit(syv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(syv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + geom_line(data = sm_syv.Dayavg, aes(x=Date, y=DayAvgsm, color = "red"), linetype = "dashed") + + ylim(0,60) + + ggtitle("SMAP vs Daily Field Data: SYLVANIA") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red"), + labels = c("SMAP", "Field"), + guide = "legend") + + theme( + legend.position = "none", + legend.title = element_blank()) + +# 1/2 hr field data vs daily smap (6am) +syv.half = ggplot() + + geom_line(data = sm_syv, aes(x=Date, y=avgsoil, color="red"), linetype ="solid") + + geom_line(data = na.omit(syv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(syv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + ggtitle("SMAP vs 1/2 hr Field Data: SYLVANIA") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red"), + labels = c("SMAP", "Field"), + guide = "legend") + + theme( + legend.position = "bottom", + legend.title = element_blank()) + +grid.arrange(syv.d, syv.half) + + + + + ############################## + ######## WLEF ######## + ############################## + +######## Download Ameriflux field data######## + +#download and get daily average +source("/fs/data3/jbateman/pecan/modules/assim.sequential/inst/NEFI/US_WLEF/download_soilmoist_WLEF.R") +sm_wlef = download_soilmoist_WLEF(start, end) %>% + mutate(Day = day(Time), Month = month(Time), Year = year(Time)) %>% + group_by(Year, Month, Day) +sm_wlef$Date = as.Date(with(sm_wlef, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") + +sm_wlef.Dayavg = sm_wlef %>% + summarise(DayAvgsm = mean(avgsoil)) %>% + 
ungroup() +sm_wlef.Dayavg$Date = as.Date(with(sm_wlef.Dayavg, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_wlef.Dayavg = sm_wlef.Dayavg %>% dplyr::select(Date, DayAvgsm) + + + +######## Download SMAP data ######## +geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" +smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" + +site_info <- list( + site_id = 678, + site_name = "WLEF", + lat = 45.9408, + lon = -90.27, + time_zone = "UTC") +wlef.smap_sm = download_SMAP_gee2pecan(start, end, site_info, geoJSON_outdir, smap_outdir) + +##### plot time series + +# Daily sm average +wlef.d = ggplot() + + geom_line(data = na.omit(wlef.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(wlef.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + geom_line(data = sm_wlef.Dayavg, aes(x=Date, y=DayAvgsm, color = "red"), linetype = "dashed") + + ylim(0,60) + + ggtitle("SMAP vs Daily Field Data: WLEF") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red"), + labels = c("SMAP", "Field"), + guide = "legend") + + theme( + legend.position = "none", + legend.title = element_blank()) + +# 1/2 hr field data vs daily smap (6am) +wlef.half = ggplot() + + geom_line(data = sm_wlef, aes(x=Date, y=avgsoil, color="red"), linetype ="solid") + + geom_line(data = na.omit(wlef.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(wlef.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + ggtitle("SMAP vs 1/2 hr Field Data: WLEF") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red"), + labels = c("SMAP", "Field"), + guide = "legend") + + theme( + legend.position = "bottom", + legend.title = element_blank()) + +grid.arrange(wlef.d, wlef.half) + + + ############################## + ######## HARVARD ######## + ############################## + +######## Download Ameriflux field data######## + +#download and get daily average +source("/fs/data3/jbateman/pecan/modules/assim.sequential/inst/NEFI/US_Harvard/download_soilmoist_harvard.R") +sm_harv = download_soilmoist_Harvard(start, end) %>% + mutate(Day = day(Time), Month = month(Time), Year = year(Time)) %>% + group_by(Year, Month, Day) +sm_harv$Date = as.Date(with(sm_harv, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_harv$SWC15 = replace(sm_harv$SWC15, sm_harv$SWC15 == -9999, NA) + +sm_harv.Dayavg = sm_harv %>% + summarise(DayAvgsm = mean(SWC15)) %>% + ungroup() +sm_harv.Dayavg$Date = as.Date(with(sm_harv.Dayavg, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_harv.Dayavg = sm_harv.Dayavg %>% dplyr::select(Date, DayAvgsm) + + + +######## Download SMAP data ######## +geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" +smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" + +site_info <- list( + site_id = 1126, + site_name = "Harvard Forest", + lat = 42.531453, + lon = -72.188896, + time_zone = "UTC") +harv.smap_sm = download_SMAP_gee2pecan("2019-11-06", end, site_info, geoJSON_outdir, smap_outdir) + +##### plot time series + +# Daily sm average +harv.d = ggplot() + + geom_line(data = na.omit(harv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(harv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + geom_line(data = sm_harv.Dayavg, aes(x=Date, y=DayAvgsm, color = "red"), linetype = 
"dashed") + + ylim(0,60) + + ggtitle("SMAP vs Daily Field Data: Harvard") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red"), + labels = c("SMAP", "Field"), + guide = "legend") + + theme( + legend.position = "none", + legend.title = element_blank()) + +# 1/2 hr field data vs daily smap (6am) +harv.half = ggplot() + + geom_line(data = na.omit(sm_harv), aes(x=Date, y=SWC15, color="red"), linetype ="solid") + + geom_line(data = na.omit(harv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(harv.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + ggtitle("SMAP vs 1/2 hr Field Data: Harvard") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red"), + labels = c("SMAP", "Field"), + guide = "legend") + + theme( + legend.position = "bottom", + legend.title = element_blank()) + + +grid.arrange(harv.d, harv.half) + + ############################## + ######## BART ######## + ############################## + +######## NEON data ######## + +#download and get daily average +BART_ssm = split(BART, list(BART$verticalPosition, BART$horizontalPosition, BART$VSWCFinalQF)) +BART_ssm = split(BART, BART$VSWCFinalQF) +sm_bart = BART_ssm$'0' %>% + na.omit() %>% + dplyr::select(startDateTime, VSWCMean, horizontalPosition, verticalPosition) %>% + mutate(Day = day(startDateTime), Month = month(startDateTime), Year = year(startDateTime)) %>% + group_by(Year, Month, Day) +sm_bart$Date = as.Date(with(sm_bart, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_bart$VSWCMean = sm_bart$VSWCMean * 100 +sm_bart = split(sm_bart, list(sm_bart$verticalPosition, sm_bart$horizontalPosition)) + +sm_bart.Dayavg = vector(mode = "list", length = 40) +names(sm_bart.Dayavg) = names(sm_bart) +for (i in 1:length(sm_bart)){ + sm_bart.Dayavg[[i]] = dplyr::select(sm_bart[[i]], Date, VSWCMean) %>% + summarise(DayAvgsm = mean(VSWCMean)) %>% + ungroup() + sm_bart.Dayavg[[i]]$Date = as.Date(with(sm_bart.Dayavg[[i]], paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +} + + +######## Download SMAP data ######## +geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" +smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst" + +site_info <- list( + site_id = 796, + site_name = "Bartlett", + lat = 44.06464, + lon = -71.288077, + time_zone = "UTC") +bart.smap_sm = download_SMAP_gee2pecan(start, end, site_info, geoJSON_outdir, smap_outdir) + +##### plot time series + +# Daily sm average +bart.d = ggplot() + + geom_line(data = na.omit(bart.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(bart.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + geom_line(data = sm_bart.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"), size=1) + + geom_line(data = sm_bart.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"),linetype = "dotted", size=.5) + + geom_point(data = sm_bart.Dayavg$'502.2', aes(x=Date, y=DayAvgsm, color = "green"), size=1) + + geom_line(data = sm_bart.Dayavg$'502.2', aes(x=Date, y=DayAvgsm, color = "green"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart.Dayavg$'502.3', aes(x=Date, y=DayAvgsm, color = "purple"), size=1) + + geom_line(data = sm_bart.Dayavg$'502.3', aes(x=Date, y=DayAvgsm, color = "purple"), linetype = "dotted", 
size=.5) + + geom_point(data = sm_bart.Dayavg$'502.4', aes(x=Date, y=DayAvgsm, color = "orange"), size=1) + + geom_line(data = sm_bart.Dayavg$'502.4', aes(x=Date, y=DayAvgsm, color = "orange"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart.Dayavg$'502.5', aes(x=Date, y=DayAvgsm, color = "yellow"), size=1) + + geom_line(data = sm_bart.Dayavg$'502.5', aes(x=Date, y=DayAvgsm, color = "yellow"), linetype = "dotted", size=.5) + + ylim(0,60) + + ggtitle("SMAP vs Daily Field Data: Bartlett") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red", "green", "purple", "orange", "yellow"), + labels = c("SMAP", "Field 1 (-6cm)", "Field 2 (-6cm)", "Field 3 (-6cm)", "Field 4 (-6cm)", "Field 5 (-6cm)"), + guide = "legend") + + theme( + legend.position = "none", + legend.title = element_blank()) + +# 1/2 hr field data vs daily smap (6am) +bart.half = ggplot() + + geom_line(data = sm_bart$'502.1', aes(x=Date, y=VSWCMean, color = "red"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart$'502.1', aes(x=Date, y=VSWCMean, color = "red"), size=1) + + geom_line(data = sm_bart$'502.1', aes(x=Date, y=VSWCMean, color = "red"),linetype = "dotted", size=.5) + + geom_point(data = sm_bart$'502.2', aes(x=Date, y=VSWCMean, color = "green"), size=1) + + geom_line(data = sm_bart$'502.2', aes(x=Date, y=VSWCMean, color = "green"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart$'502.3', aes(x=Date, y=VSWCMean, color = "purple"), size=1) + + geom_line(data = sm_bart$'502.3', aes(x=Date, y=VSWCMean, color = "purple"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart$'502.4', aes(x=Date, y=VSWCMean, color = "orange"), size=1) + + geom_line(data = sm_bart$'502.4', aes(x=Date, y=VSWCMean, color = "orange"), linetype = "dotted", size=.5) + + geom_point(data = sm_bart$'502.5', aes(x=Date, y=VSWCMean, color = "yellow"), size=1) + + geom_line(data = sm_bart$'502.5', aes(x=Date, y=VSWCMean, color = "yellow"), linetype = "dotted", size=.5) + + ylim(0,60) + + geom_line(data = na.omit(bart.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) + + geom_point(data = na.omit(bart.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) + + ggtitle("SMAP vs 1/2 hr Field Data: Bartlett") + + labs(x = "Date", + y = "Soil Moisture (%)", + color = "Legend\n") + + scale_color_identity( + breaks = c("steel blue","red", "green", "purple", "orange", "yellow"), + labels = c("SMAP", "Field 1 (-6cm)", "Field 2 (-6cm)", "Field 3 (-6cm)", "Field 4 (-6cm)", "Field 5 (-6cm)"), + guide = "legend") + + theme( + legend.position = "bottom", + legend.title = element_blank()) + +#require(gridExtra) +#grid.arrange(bart.d, bart.half) +plot(bart.half) + + + +############################## +######## SRER ######## +############################## + +######## NEON data ######## + +#download and get daily average +SRER_ssm = split(SRER, list(SRER$verticalPosition, SRER$horizontalPosition, SRER$VSWCFinalQF)) +SRER_ssm = split(SRER, SRER$VSWCFinalQF) +sm_srer = SRER_ssm$'0' %>% + na.omit() %>% + dplyr::select(startDateTime, VSWCMean, horizontalPosition, verticalPosition) %>% + mutate(Day = day(startDateTime), Month = month(startDateTime), Year = year(startDateTime)) %>% + group_by(Year, Month, Day) +sm_srer$Date = as.Date(with(sm_srer, paste(Year, Month, Day, sep="-")), "%Y-%m-%d") +sm_srer$VSWCMean = sm_srer$VSWCMean * 100 +sm_srer = split(sm_srer, list(sm_srer$verticalPosition, sm_srer$horizontalPosition)) + +sm_srer.Dayavg = vector(mode = "list", 
length = 40)
+names(sm_srer.Dayavg) = names(sm_srer)
+for (i in 1:length(sm_srer)){
+  sm_srer.Dayavg[[i]] = dplyr::select(sm_srer[[i]], Date, VSWCMean) %>%
+    summarise(DayAvgsm = mean(VSWCMean)) %>%
+    ungroup()
+  sm_srer.Dayavg[[i]]$Date = as.Date(with(sm_srer.Dayavg[[i]], paste(Year, Month, Day, sep="-")), "%Y-%m-%d")
+}
+
+
+######## Download SMAP data ########
+geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst"
+smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst"
+
+site_info <- list(
+  site_id = 1000004876,
+  site_name = "Santa Rita",
+  lat = 31.91068,
+  lon = -110.83549,
+  time_zone = "UTC")
+srer.smap_sm = download_SMAP_gee2pecan(start, end, site_info, geoJSON_outdir, smap_outdir)
+
+##### plot time series
+
+# Daily sm average
+srer.d = ggplot() +
+  geom_line(data = na.omit(srer.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) +
+  geom_point(data = na.omit(srer.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) +
+  geom_line(data = sm_srer.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"), size=1) +
+  geom_point(data = sm_srer.Dayavg$'502.2', aes(x=Date, y=DayAvgsm, color = "green"), size=1) +
+  geom_line(data = sm_srer.Dayavg$'502.2', aes(x=Date, y=DayAvgsm, color = "green"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer.Dayavg$'502.3', aes(x=Date, y=DayAvgsm, color = "purple"), size=1) +
+  geom_line(data = sm_srer.Dayavg$'502.3', aes(x=Date, y=DayAvgsm, color = "purple"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer.Dayavg$'502.4', aes(x=Date, y=DayAvgsm, color = "orange"), size=1) +
+  geom_line(data = sm_srer.Dayavg$'502.4', aes(x=Date, y=DayAvgsm, color = "orange"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer.Dayavg$'502.5', aes(x=Date, y=DayAvgsm, color = "yellow"), size=1) +
+  geom_line(data = sm_srer.Dayavg$'502.5', aes(x=Date, y=DayAvgsm, color = "yellow"), linetype = "dotted", size=.5) +
+  ylim(0,60) +
+  ggtitle("SMAP vs Daily Field Data: Santa Rita") +
+  labs(x = "Date",
+       y = "Soil Moisture (%)",
+       color = "Legend\n") +
+  scale_color_identity(
+    breaks = c("steel blue","red", "green", "purple", "orange", "yellow"),
+    labels = c("SMAP", "Field 1 (-6cm)", "Field 2 (-6cm)", "Field 3 (-6cm)", "Field 4 (-6cm)", "Field 5 (-6cm)"),
+    guide = "legend") +
+  theme(
+    legend.position = "none",
+    legend.title = element_blank())
+
+# 1/2 hr field data vs daily smap (6am)
+srer.half = ggplot() +
+  geom_line(data = sm_srer$'502.1', aes(x=Date, y=VSWCMean, color = "red"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer$'502.1', aes(x=Date, y=VSWCMean, color = "red"), size=1) +
+  geom_point(data = sm_srer$'502.2', aes(x=Date, y=VSWCMean, color = "green"), size=1) +
+  geom_line(data = sm_srer$'502.2', aes(x=Date, y=VSWCMean, color = "green"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer$'502.3', aes(x=Date, y=VSWCMean, color = "purple"), size=1) +
+  geom_line(data = sm_srer$'502.3', aes(x=Date, y=VSWCMean, color = "purple"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer$'502.4', aes(x=Date, y=VSWCMean, color = "orange"), size=1) +
+  geom_line(data = sm_srer$'502.4', aes(x=Date, y=VSWCMean, color = "orange"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_srer$'502.5', aes(x=Date, y=VSWCMean, color = "yellow"), size=1) +
+  geom_line(data = sm_srer$'502.5', aes(x=Date, y=VSWCMean, color = "yellow"), linetype = "dotted", size=.5) +
+  geom_line(data = na.omit(srer.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) +
+  geom_point(data = na.omit(srer.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) +
+  ylim(0,60) +
+  ggtitle("SMAP vs 1/2 hr Field Data: Santa Rita") +
+  labs(x = "Date",
+       y = "Soil Moisture (%)",
+       color = "Legend\n") +
+  scale_color_identity(
+    breaks = c("steel blue","red", "green", "purple", "orange", "yellow"),
+    labels = c("SMAP", "Field 1 (-6cm)", "Field 2 (-6cm)", "Field 3 (-6cm)", "Field 4 (-6cm)", "Field 5 (-6cm)"),
+    guide = "legend") +
+  theme(
+    legend.position = "bottom",
+    legend.title = element_blank())
+
+
+grid.arrange(srer.d, srer.half)
+
+
+##############################
+########     KONA     ########
+##############################
+
+######## NEON data ########
+
+#download and get daily average
+KONA_ssm = split(KONA, KONA$VSWCFinalQF)
+sm_kona = KONA_ssm$'0' %>%
+  na.omit() %>%
+  dplyr::select(startDateTime, VSWCMean, horizontalPosition, verticalPosition) %>%
+  mutate(Day = day(startDateTime), Month = month(startDateTime), Year = year(startDateTime)) %>%
+  group_by(Year, Month, Day)
+sm_kona$Date = as.Date(with(sm_kona, paste(Year, Month, Day, sep="-")), "%Y-%m-%d")
+sm_kona$VSWCMean = sm_kona$VSWCMean * 100
+sm_kona = split(sm_kona, list(sm_kona$verticalPosition, sm_kona$horizontalPosition))
+
+sm_kona.Dayavg = vector(mode = "list", length = 40)
+names(sm_kona.Dayavg) = names(sm_kona)
+for (i in 1:length(sm_kona)){
+  sm_kona.Dayavg[[i]] = dplyr::select(sm_kona[[i]], Date, VSWCMean) %>%
+    summarise(DayAvgsm = mean(VSWCMean)) %>%
+    ungroup()
+  sm_kona.Dayavg[[i]]$Date = as.Date(with(sm_kona.Dayavg[[i]], paste(Year, Month, Day, sep="-")), "%Y-%m-%d")
+}
+
+
+
+######## Download SMAP data ########
+geoJSON_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst"
+smap_outdir = "/projectnb/dietzelab/jbateman/pecan/modules/data.remote/inst"
+
+site_info <- list(
+  site_id = 1000004925,
+  site_name = "KONA",
+  lat = 39.11044,
+  lon = -96.61295,
+  time_zone = "UTC")
+kona.smap_sm = download_SMAP_gee2pecan(start, end, site_info, geoJSON_outdir, smap_outdir)
+
+##### plot time series
+
+# Daily sm average
+kona.d = ggplot() +
+  geom_line(data = na.omit(kona.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) +
+  geom_point(data = na.omit(kona.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) +
+  geom_line(data = sm_kona.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona.Dayavg$'502.1', aes(x=Date, y=DayAvgsm, color = "red"), size=1) +
+  geom_point(data = sm_kona.Dayavg$'502.2', aes(x=Date, y=DayAvgsm, color = "green"), size=1) +
+  geom_line(data = sm_kona.Dayavg$'502.2', aes(x=Date, y=DayAvgsm, color = "green"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona.Dayavg$'502.3', aes(x=Date, y=DayAvgsm, color = "purple"), size=1) +
+  geom_line(data = sm_kona.Dayavg$'502.3', aes(x=Date, y=DayAvgsm, color = "purple"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona.Dayavg$'502.4', aes(x=Date, y=DayAvgsm, color = "orange"), size=1) +
+  geom_line(data = sm_kona.Dayavg$'502.4', aes(x=Date, y=DayAvgsm, color = "orange"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona.Dayavg$'502.5', aes(x=Date, y=DayAvgsm, color = "yellow"), size=1) +
+  geom_line(data = sm_kona.Dayavg$'502.5', aes(x=Date, y=DayAvgsm, color = "yellow"), linetype = "dotted", size=.5) +
+  ylim(0,60) +
+  ggtitle("SMAP vs Daily Field Data: Konza Prairie") +
+  labs(x = "Date",
+       y = "Soil Moisture (%)",
+       color = "Legend\n") +
+  scale_color_identity(
+    breaks = c("steel blue","red", "green", "purple", "orange", "yellow"),
+    labels = c("SMAP", "Field 1 (-6cm)", "Field 2 (-6cm)", "Field 3 (-6cm)", "Field 4 (-6cm)", "Field 5 (-6cm)"),
+    guide = "legend") +
+  theme(
+    legend.position = "none",
+    legend.title = element_blank())
+
+# 1/2 hr field data vs daily smap (6am)
+kona.half = ggplot() +
+  geom_line(data = sm_kona$'502.1', aes(x=Date, y=VSWCMean, color = "red"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona$'502.1', aes(x=Date, y=VSWCMean, color = "red"), size=1) +
+  geom_point(data = sm_kona$'502.2', aes(x=Date, y=VSWCMean, color = "green"), size=1) +
+  geom_line(data = sm_kona$'502.2', aes(x=Date, y=VSWCMean, color = "green"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona$'502.3', aes(x=Date, y=VSWCMean, color = "purple"), size=1) +
+  geom_line(data = sm_kona$'502.3', aes(x=Date, y=VSWCMean, color = "purple"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona$'502.4', aes(x=Date, y=VSWCMean, color = "orange"), size=1) +
+  geom_line(data = sm_kona$'502.4', aes(x=Date, y=VSWCMean, color = "orange"), linetype = "dotted", size=.5) +
+  geom_point(data = sm_kona$'502.5', aes(x=Date, y=VSWCMean, color = "yellow"), size=1) +
+  geom_line(data = sm_kona$'502.5', aes(x=Date, y=VSWCMean, color = "yellow"), linetype = "dotted", size=.5) +
+  geom_line(data = na.omit(kona.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue")) +
+  geom_point(data = na.omit(kona.smap_sm), aes(x=Date, y=ssm.vol, color="steel blue"), size=1) +
+  ggtitle("SMAP vs 1/2 hr Field Data: Konza Prairie") +
+  labs(x = "Date",
+       y = "Soil Moisture (%)",
+       color = "Legend\n") +
+  scale_color_identity(
+    breaks = c("steel blue","red", "green", "purple", "orange", "yellow"),
+    labels = c("SMAP", "Field 1 (-6cm)", "Field 2 (-6cm)", "Field 3 (-6cm)", "Field 4 (-6cm)", "Field 5 (-6cm)"),
+    guide = "legend") +
+  theme(
+    legend.position = "bottom",
+    legend.title = element_blank())
+
+
+grid.arrange(kona.d, kona.half)
+

From ca81ea82979e7ed85d48cab465ec6220809c40bc Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 15 Aug 2020 13:34:43 +0530
Subject: [PATCH 1388/2289] fix deletion bug in csv merge

---
 book_source/03_topical_pages/09_standalone_tools.Rmd    | 2 +-
 modules/data.remote/inst/RpTools/RpTools/merge_files.py | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 794e0648243..37f5114fc93 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -312,7 +312,7 @@ This example will download the layers of a SMAP product(SPL3SMP_E.003)
 PEcAn.data.remote::remote_process(settings)
 ```
 
-This will approximately take 20 minutes to complete and the output netCDF file will be saved at outdir and its record would be kept in the inputs table of BETYdb.
+The output netCDF file will be saved at outdir and its record will be kept in the inputs table of BETYdb.
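+
+As a quick check, the resulting file can be opened from R (a minimal sketch assuming the `ncdf4` package; the path below is illustrative and the layer names depend on the product requested):
+
+```R
+library(ncdf4)
+nc <- nc_open("<outdir>/SPL3SMP_E.003_site_0-678.nc")  # hypothetical path to the downloaded file
+print(names(nc$var))                   # layers retrieved by remote_process
+sm <- ncvar_get(nc, names(nc$var)[1])  # read the first layer as an array
+nc_close(nc)
+```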
#### Adding new GEE image collections

diff --git a/modules/data.remote/inst/RpTools/RpTools/merge_files.py b/modules/data.remote/inst/RpTools/RpTools/merge_files.py
index 45cd8f2a164..e3a2c3e01e0 100644
--- a/modules/data.remote/inst/RpTools/RpTools/merge_files.py
+++ b/modules/data.remote/inst/RpTools/RpTools/merge_files.py
@@ -71,6 +71,6 @@ def csv_merge(old, new, outdir):
     merged_df = merged_df.sort_values(by="Date")
     merged_df.to_csv(os.path.join(outdir, tail), index=False)
     # delete the old and temporary file
-    os.remove(df_changed_new)
-    os.remove(df_old)
+    os.remove(changed_new)
+    os.remove(old)
     return os.path.abspath(os.path.join(outdir, tail))

From dc1441220a0afab1b11930335a627882df575f5b Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 15 Aug 2020 16:54:16 +0530
Subject: [PATCH 1389/2289] Apply suggestions from istfer's code review

Co-authored-by: istfer

---
 modules/data.remote/R/remote_process.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index 5e1983d9ca2..74c8fb73499 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -82,9 +82,9 @@ remote_process <- function(settings) {
   req_start <- start
   req_end <- end
 
-  raw_file_name = construct_raw_filename(collection, siteid_short, scale, projection, qc)
+  raw_file_name <- construct_raw_filename(collection, siteid_short, scale, projection, qc)
 
-  coords = unlist(PEcAn.DB::db.query(sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), con = dbcon), use.names=FALSE)
+  coords <- unlist(PEcAn.DB::db.query(sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), con = dbcon), use.names=FALSE)
 
   # check if any data is already present in the inputs table
   existing_data <-
@@ -700,4 +700,4 @@ set_stage <- function(result, req_start, req_end, stage) {
   }
 
   return (list(req_start, req_end, stage, merge, write_start, write_end))
-}
\ No newline at end of file
+}

From bccb93a6e4895f794cdc0c940b1051b09fa32861 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sun, 16 Aug 2020 14:25:03 +0530
Subject: [PATCH 1390/2289] Update 09_standalone_tools.Rmd

---
 book_source/03_topical_pages/09_standalone_tools.Rmd | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 37f5114fc93..a95da513fb5 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -159,15 +159,15 @@ Datasets currently available for use in PEcAn via Google Earth Engine are,
 
2. **Sign up for NASA Earthdata**. Using AppEEARS requires an Earthdata account; visit this [page](https://urs.earthdata.nasa.gov/users/new) to create your own account.
 
-3. **Install the RpTools package**. Python codes required by this module are stored in a Python package named "RpTools" using this requires Python3 and the package manager pip to be installed in your system.
+3. **Install the RpTools package**. The Python code required by this module is stored in a Python package named "RpTools"; using it requires Python 3 and the package manager pip3 to be installed on your system.
To install the package,
a. Navigate to `pecan/modules/data.remote/inst/RpTools`. If you are inside the pecan directory, this can be done by,
```bash
cd modules/data.remote/inst/RpTools
```
-b. Use pip to install the package.
"-e" flag is used to install the package in an editable or develop mode, so that changes made to the code get updated in the package without reinstalling. +b. Use pip3 to install the package. "-e" flag is used to install the package in an editable or develop mode, so that changes made to the code get updated in the package without reinstalling. ```bash -pip install -e . +pip3 install -e . ``` 4. **Authenticate GEE**. The GEE API needs to be authenticated using your credentials. The credentials will be stored locally on your system. This can be done by, ```bash From d9f4d736b0179fb82c2d95a585a5258c52cc6a54 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 16 Aug 2020 16:01:07 +0530 Subject: [PATCH 1391/2289] add overwrite argument --- modules/data.remote/R/remote_process.R | 164 +++++++++++++++------- modules/data.remote/man/remote_process.Rd | 2 +- 2 files changed, 115 insertions(+), 51 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 5e1983d9ca2..623a6157b68 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -8,53 +8,77 @@ ##' \dontrun{ ##' remote_process(settings) ##' } -##' @author Ayush Prasad +##' @author Ayush Prasad, Istem Fer ##' remote_process <- function(settings) { # information about the date variables used in remote_process - # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB - # start, end : effective start, end dates created after checking the DB status. These dates are sent to remote_process for downloading and processing data + # start, end : effective start, end dates created after checking the DB status. These dates are sent to rp_control for downloading and processing data # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB # the "pro" version of these variables have the same meaning and are used to refer to the processed file RpTools <- reticulate::import("RpTools") - input_file <- NULL - stage_get_data <- NULL - stage_process_data <- NULL - raw_merge <- NULL - pro_merge <- NULL + input_file <- NULL + stage_get_data <- NULL + stage_process_data <- NULL + raw_merge <- NULL + pro_merge <- NULL existing_raw_file_path <- NULL existing_pro_file_path <- NULL # extract the variables from the settings list - siteid <- as.numeric(settings$run$site$id) - siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09) - raw_mimetype <- settings$remotedata$raw_mimetype - raw_formatname <- settings$remotedata$raw_formatname - outdir <- settings$outdir - start <- as.character(as.Date(settings$run$start.date)) - end <- as.character(as.Date(settings$run$end.date)) - source <- settings$remotedata$source - collection <- settings$remotedata$collection - scale <- settings$remotedata$scale + siteid <- as.numeric(settings$run$site$id) + siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09) + raw_mimetype <- settings$remotedata$raw_mimetype + raw_formatname <- settings$remotedata$raw_formatname + outdir <- settings$outdir + start <- as.character(as.Date(settings$run$start.date)) + end <- as.character(as.Date(settings$run$end.date)) + source <- settings$remotedata$source + collection <- settings$remotedata$collection + scale <- settings$remotedata$scale if (!is.null(scale)) { scale <- as.double(settings$remotedata$scale) scale <- format(scale, nsmall = 1) } - projection <- settings$remotedata$projection - qc <- 
settings$remotedata$qc + projection <- settings$remotedata$projection + qc <- settings$remotedata$qc if (!is.null(qc)) { qc <- as.double(settings$remotedata$qc) qc <- format(qc, nsmall = 1) } - algorithm <- settings$remotedata$algorithm - credfile <- settings$remotedata$credfile - pro_mimetype <- settings$remotedata$pro_mimetype - pro_formatname <- settings$remotedata$pro_formatname - out_get_data <- settings$remotedata$out_get_data + algorithm <- settings$remotedata$algorithm + credfile <- settings$remotedata$credfile + pro_mimetype <- settings$remotedata$pro_mimetype + pro_formatname <- settings$remotedata$pro_formatname + out_get_data <- settings$remotedata$out_get_data out_process_data <- settings$remotedata$out_process_data + overwrite <- settings$remotedata$overwrite + if (is.null(overwrite)){ + overwrite <- FALSE + } + + + PEcAn.logger::severeifnot("Check if siteid is of numeric type and is not NULL", is.numeric(siteid)) + PEcAn.logger::severeifnot("Check if outdir is of character type and is not NULL", is.character(outdir)) + PEcAn.logger::severeifnot("Check if source is of character type and is not NULL", is.character(source)) + # PEcAn.logger::severeifnot("Source should be one of gee, appeears", source == "gee" || source == "appeears") + # collection validation to be implemented + if(!is.null(projection)){ + PEcAn.logger::severeifnot("projection should be of character type", is.character(projection)) + } + if(!is.null(algorithm)){ + PEcAn.logger::severeifnot("algorithm should be of character type", is.character(algorithm)) + } + if(!is.null(credfile)){ + PEcAn.logger::severeifnot("credfile should be of character type", is.character(credfile)) + } + PEcAn.logger::severeifnot("Check if out_get_data is of character type and is not NULL", is.character(out_get_data)) + if(!is.null(out_process_data)){ + PEcAn.logger::severeifnot("out_process_data should be of character type", is.character(out_process_data)) + } dbcon <- PEcAn.DB::db.open(settings$database$bety) @@ -90,8 +114,42 @@ remote_process <- function(settings) { existing_data <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) if (nrow(existing_data) >= 1) { - # if processed data is requested, example LAI - if (!is.null(out_process_data)) { + + if (overwrite) { + PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") + if (!is.null(out_process_data)) { + pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) + if (nrow(pro_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", pro_file_name), dbcon)) == 1) { + if (nrow(raw_check <-PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) == 1) { + flag <- 4 + }else{ + flag <- 6 + } + }else{ + flag <- 1 + } + stage_process_data <- TRUE + pro_merge <- "repace" + write_pro_start <- start + write_pro_end <- end + }else if(!is.null(out_get_data)){ + if (nrow(raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) == 1) { 
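+          # an existing raw file of the requested type was found; flag 0 sends the
+          # rebuilt file through the DB update (rather than insert) branch later on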
+ flag <- 0 + }else{ + flag <- 1 + } + } + stage_get_data <- TRUE + start <- req_start + end <- req_end + write_raw_start <- start + write_raw_end <- end + raw_merge <- "replace" + existing_pro_file_path <- NULL + existing_raw_file_path <- NULL + }else if (!is.null(out_process_data)) { + # if processed data is requested, example LAI + # construct processed file name pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) @@ -306,31 +364,37 @@ remote_process <- function(settings) { outdir = file.path(outdir, paste(source, "site", siteid_short, sep = "_")) - # call remote_process - output = RpTools$rp_control( - coords = coords, - outdir = outdir, - start = start, - end = end, - source = source, - collection = collection, - siteid = siteid_short, - scale = as.double(scale), - projection = projection, - qc = as.double(qc), - algorithm = algorithm, - input_file = input_file, - credfile = credfile, - out_get_data = out_get_data, - out_process_data = out_process_data, - stage_get_data = stage_get_data, - stage_process_data = stage_process_data, - raw_merge = raw_merge, - pro_merge = pro_merge, - existing_raw_file_path = existing_raw_file_path, - existing_pro_file_path = existing_pro_file_path - ) + fcn.args <- list() + fcn.args$coords <- coords + fcn.args$outdir <- outdir + fcn.args$start <- start + fcn.args$end <- end + fcn.args$source <- source + fcn.args$collection <- collection + fcn.args$siteid <- siteid_short + fcn.args$scale <- as.double(scale) + fcn.args$projection <- projection + fcn.args$qc <- as.double(qc) + fcn.args$algorithm <- algorithm + fcn.args$input_file <- input_file + fcn.args$credfile <- credfile + fcn.args$out_get_data <- out_get_data + fcn.args$out_process_data <- out_process_data + fcn.args$stage_get_data <- stage_get_data + fcn.args$stage_process_data <- stage_process_data + fcn.args$raw_merge <- raw_merge + fcn.args$pro_merge <- pro_merge + fcn.args$existing_raw_file_path <- existing_raw_file_path + fcn.args$existing_pro_file_path <- existing_pro_file_path + + + arg.string <- PEcAn.utils::listToArgString(fcn.args) + + cmdFcn <- paste0("RpTools$rp_control(", arg.string, ")") + PEcAn.logger::logger.debug(paste0("Remote module executing the following function:\n", cmdFcn)) + # call rp_control + output <- do.call(RpTools$rp_control, fcn.args) # inserting data in the DB if (!is.null(out_process_data)) { diff --git a/modules/data.remote/man/remote_process.Rd b/modules/data.remote/man/remote_process.Rd index 4773d5ba7aa..c0c11a23a58 100644 --- a/modules/data.remote/man/remote_process.Rd +++ b/modules/data.remote/man/remote_process.Rd @@ -18,5 +18,5 @@ remote_process(settings) } } \author{ -Ayush Prasad +Ayush Prasad, Istem Fer } From df26527c828fddc5eec88e8cd3ac1c2a4647440f Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 16 Aug 2020 18:52:44 +0530 Subject: [PATCH 1392/2289] add overwrite option --- book_source/03_topical_pages/03_pecan_xml.Rmd | 4 +- .../03_topical_pages/09_standalone_tools.Rmd | 10 +- modules/data.remote/R/remote_process.R | 478 ++++++++++-------- .../inst/RpTools/RpTools/gee_utils.py | 6 +- modules/data.remote/man/remote_process.Rd | 2 +- 5 files changed, 281 insertions(+), 219 deletions(-) diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd index 8a06de20878..f32f39d1f13 100644 --- a/book_source/03_topical_pages/03_pecan_xml.Rmd +++ b/book_source/03_topical_pages/03_pecan_xml.Rmd @@ -788,13 +788,14 @@ This section describes the tags required for configuring 
`remote_process`. ... ... ... - ... + ... ... ... ... ... ... ... + ... ``` @@ -806,6 +807,7 @@ This section describes the tags required for configuring `remote_process`. * `scale`: (optional) pixel resolution required for some gee collections, recommended to use 10 for Sentinel 2 * `projection`: (optional) type of projection. Only required for appeears polygon AOI type * `qc`: (optional) quality control parameter, required for some gee collections +* `overwrite`: (optional) if TRUE database checks will be skipped and existing data of same type will be replaced entirely. When processed data is requested, the raw data required for creating it will also be replaced. By default FALSE These tags are only required if processed data is requested: diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index a95da513fb5..fa6c53a5a05 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -190,7 +190,7 @@ This module contains GEE and other processing functions which retrieve data from ... ... ... - ... + ... ... ... ... @@ -198,6 +198,7 @@ This module contains GEE and other processing functions which retrieve data from ... ... ... + ... ``` @@ -209,6 +210,7 @@ This module contains GEE and other processing functions which retrieve data from * `scale`: (optional) pixel resolution required for some gee collections, recommended to use 10 for Sentinel 2 **scale** Information about how GEE handles scale can be found out [here](https://developers.google.com/earth-engine/scale) * `projection`: (optional) type of projection. Only required for appeears polygon AOI type * `qc`: (optional) quality control parameter, required for some gee collections +* `overwrite`: (optional) if TRUE database checks will be skipped and existing data of same type will be replaced entirely. When processed data is requested, the raw data required for creating it will also be replaced. 
By default FALSE These tags are only required if processed data is requested: @@ -264,13 +266,14 @@ This example will download Sentinel 2 bands and then use the SNAP algorithm to c gee COPERNICUS/S2_SR 10 - + 1 snap pro_mimetype pro_formatname lai + ``` @@ -296,13 +299,14 @@ This example will download the layers of a SMAP product(SPL3SMP_E.003) appeears SPL3SMP_E.003 - native + native + ``` diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 6ec1f72716a..84960a747f5 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -3,21 +3,35 @@ ##' @name remote_process ##' @title remote_process ##' @export -##' @param settings PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, geofile, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data +##' @param settings PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data, overwrite ##' @examples ##' \dontrun{ ##' remote_process(settings) ##' } -##' @author Ayush Prasad, Istem Fer +##' @author Ayush Prasad, Istem Fer ##' remote_process <- function(settings) { - # information about the date variables used in remote_process - + # Information about the date variables used in remote_process - # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB # start, end : effective start, end dates created after checking the DB status. These dates are sent to rp_control for downloading and processing data # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB # the "pro" version of these variables have the same meaning and are used to refer to the processed file + # The value of remotefile_check_flag denotes the following cases : + + # When processed file is requested, + # 1 - There are no existing raw and processed files of the requested type in the DB + # 2 - Requested processed file does not exist, the raw file used to create is it present and matches with the requested daterange + # 3 - Requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested daterange + # 4 - Both processed and raw file of the requested type exists, but they have to be updated to match with the requested daterange + # 5 - Raw file required for creating the processed file exists with the required daterange and the processed file needs to be updated. Here the new processed file will now contain data for the entire daterange of the existing raw file + # 6 - There is a existing processed file of the requested type but the raw file used to create it has been deleted. 
Here, the raw file will be created again and the processed file will be replaced entirely with the one created from new raw file + + # When raw file is requested, + # 1 - There is no existing raw the requested type in the DB + # In all other cases the existing raw file will be updated + RpTools <- reticulate::import("RpTools") input_file <- NULL @@ -33,7 +47,7 @@ remote_process <- function(settings) { siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09) raw_mimetype <- settings$remotedata$raw_mimetype raw_formatname <- settings$remotedata$raw_formatname - outdir <- settings$outdir + outdir <- settings$database$dbfiles start <- as.character(as.Date(settings$run$start.date)) end <- as.character(as.Date(settings$run$end.date)) source <- settings$remotedata$source @@ -56,35 +70,45 @@ remote_process <- function(settings) { out_get_data <- settings$remotedata$out_get_data out_process_data <- settings$remotedata$out_process_data overwrite <- settings$remotedata$overwrite - if (is.null(overwrite)){ + if (is.null(overwrite)) { overwrite <- FALSE } - PEcAn.logger::severeifnot("Check if siteid is of numeric type and is not NULL", is.numeric(siteid)) - PEcAn.logger::severeifnot("Check if outdir is of character type and is not NULL", is.character(outdir)) - PEcAn.logger::severeifnot("Check if source is of character type and is not NULL", is.character(source)) + PEcAn.logger::severeifnot("Check if siteid is of numeric type and is not NULL", + is.numeric(siteid)) + PEcAn.logger::severeifnot("Check if outdir is of character type and is not NULL", + is.character(outdir)) + PEcAn.logger::severeifnot("Check if source is of character type and is not NULL", + is.character(source)) # PEcAn.logger::severeifnot("Source should be one of gee, appeears", source == "gee" || source == "appeears") # collection validation to be implemented - if(!is.null(projection)){ - PEcAn.logger::severeifnot("projection should be of character type", is.character(projection)) + if (!is.null(projection)) { + PEcAn.logger::severeifnot("projection should be of character type", + is.character(projection)) } - if(!is.null(algorithm)){ - PEcAn.logger::severeifnot("algorithm should be of character type", is.character(algorithm)) + if (!is.null(algorithm)) { + PEcAn.logger::severeifnot("algorithm should be of character type", + is.character(algorithm)) } - if(!is.null(credfile)){ - PEcAn.logger::severeifnot("credfile should be of character type", is.character(credfile)) + if (!is.null(credfile)) { + PEcAn.logger::severeifnot("credfile should be of character type", + is.character(credfile)) + } + PEcAn.logger::severeifnot( + "Check if out_get_data is of character type and is not NULL", + is.character(out_get_data) + ) + if (!is.null(out_process_data)) { + PEcAn.logger::severeifnot("out_process_data should be of character type", + is.character(out_process_data)) } - PEcAn.logger::severeifnot("Check if out_get_data is of character type and is not NULL", is.character(out_get_data)) - if(!is.null(out_process_data)){ - PEcAn.logger::severeifnot("out_process_data should be of character type", is.character(out_process_data)) - } dbcon <- PEcAn.DB::db.open(settings$database$bety) on.exit(PEcAn.DB::db.close(dbcon), add = TRUE) - flag <- 0 + remotefile_check_flag <- 0 # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names collection_lut <- data.frame( @@ -106,48 +130,77 @@ remote_process <- function(settings) { req_start <- start req_end <- end - raw_file_name <- construct_raw_filename(collection, 
siteid_short, scale, projection, qc) + raw_file_name <- + construct_raw_filename(collection, siteid_short, scale, projection, qc) - coords <- unlist(PEcAn.DB::db.query(sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), con = dbcon), use.names=FALSE) + coords <- + unlist(PEcAn.DB::db.query( + sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), + con = dbcon + ), use.names = FALSE) # check if any data is already present in the inputs table existing_data <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) if (nrow(existing_data) >= 1) { - if (overwrite) { PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") if (!is.null(out_process_data)) { - pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) - if (nrow(pro_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", pro_file_name), dbcon)) == 1) { - if (nrow(raw_check <-PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) == 1) { - flag <- 4 - }else{ - flag <- 6 + pro_file_name = paste0(algorithm, + "_", + out_process_data, + "_site_", + siteid_short) + if (nrow(pro_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + pro_file_name + ), + dbcon + )) == 1) { + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + remotefile_check_flag <- 4 + } else{ + remotefile_check_flag <- 6 } - }else{ - flag <- 1 + } else{ + remotefile_check_flag <- 1 } stage_process_data <- TRUE - pro_merge <- "repace" - write_pro_start <- start - write_pro_end <- end - }else if(!is.null(out_get_data)){ - if (nrow(raw_check <- PEcAn.DB::db.query(sprintf("SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", raw_file_name), dbcon)) == 1) { - flag <- 0 - }else{ - flag <- 1 + pro_merge <- "repace" + write_pro_start <- start + write_pro_end <- end + } else if (!is.null(out_get_data)) { + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + remotefile_check_flag <- 0 + } else{ + remotefile_check_flag <- 1 } } - stage_get_data <- TRUE - start <- req_start - end <- req_end - write_raw_start <- start - write_raw_end <- end - raw_merge <- "replace" + stage_get_data <- TRUE + start <- req_start + end <- req_end + write_raw_start <- start + write_raw_end <- end + raw_merge <- "replace" existing_pro_file_path <- NULL existing_raw_file_path <- NULL - }else if 
(!is.null(out_process_data)) { + } else if (!is.null(out_process_data)) { # if processed data is requested, example LAI # construct processed file name @@ -164,10 +217,10 @@ remote_process <- function(settings) { )) == 1) { datalist <- set_stage(pro_check, req_start, req_end, stage_process_data) - pro_start <- as.character(datalist[[1]]) - pro_end <- as.character(datalist[[2]]) + pro_start <- as.character(datalist[[1]]) + pro_end <- as.character(datalist[[2]]) write_pro_start <- datalist[[5]] - write_pro_end <- datalist[[6]] + write_pro_end <- datalist[[6]] if (pro_start != "dont write" || pro_end != "dont write") { stage_process_data <- datalist[[3]] pro_merge <- datalist[[4]] @@ -188,38 +241,38 @@ remote_process <- function(settings) { !is.null(raw_check$end_date)) { raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- as.character(raw_datalist[[1]]) - end <- as.character(raw_datalist[[2]]) + start <- as.character(raw_datalist[[1]]) + end <- as.character(raw_datalist[[2]]) write_raw_start <- raw_datalist[[5]] - write_raw_end <- raw_datalist[[6]] - stage_get_data <- raw_datalist[[3]] - raw_merge <- raw_datalist[[4]] + write_raw_end <- raw_datalist[[6]] + stage_get_data <- raw_datalist[[3]] + raw_merge <- raw_datalist[[4]] if (stage_get_data == FALSE) { input_file <- raw_check$file_path } - flag <- 4 + remotefile_check_flag <- 4 if (raw_merge == TRUE) { existing_raw_file_path <- raw_check$file_path } if (pro_merge == TRUE && stage_get_data == FALSE) { - flag <- 5 - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date + remotefile_check_flag <- 5 + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date existing_pro_file_path <- NULL - pro_merge <- FALSE + pro_merge <- FALSE } } else{ # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted - flag <- 6 - write_raw_start <- req_start - write_raw_end <- req_end - start <- req_start - end <- req_end - stage_get_data <- TRUE + remotefile_check_flag <- 6 + write_raw_start <- req_start + write_raw_end <- req_end + start <- req_start + end <- req_end + stage_get_data <- TRUE existing_raw_file_path <- NULL - write_pro_start <- write_raw_start - write_pro_end <- write_raw_end - pro_merge <- "replace" + write_pro_start <- write_raw_start + write_pro_end <- write_raw_end + pro_merge <- "replace" existing_pro_file_path <- NULL } } @@ -252,43 +305,43 @@ remote_process <- function(settings) { PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] + write_raw_end <- datalist[[6]] write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- datalist[[3]] + write_pro_end <- req_end + stage_get_data <- datalist[[3]] if (stage_get_data == FALSE) { - input_file <- raw_check$file_path + input_file <- raw_check$file_path write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date - flag <- 2 + write_pro_end <- raw_check$end_date + remotefile_check_flag <- 2 } raw_merge <- datalist[[4]] stage_process_data <- TRUE pro_merge <- FALSE if (raw_merge == TRUE || raw_merge == "replace") { existing_raw_file_path = raw_check$file_path - flag <- 3 
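+          # case 3: the processed file is missing and the existing raw file must
+          # first be extended to cover the requested daterange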
+ remotefile_check_flag <- 3 } else{ existing_raw_file_path = NULL } } else{ # if no processed or raw file of requested type exists - start <- req_start - end <- req_end - write_raw_start <- req_start - write_raw_end <- req_end - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- TRUE - raw_merge <- FALSE - existing_raw_file_path = NULL - stage_process_data <- TRUE - pro_merge <- FALSE - existing_pro_file_path = NULL - flag <- 1 + start <- req_start + end <- req_end + write_raw_start <- req_start + write_raw_end <- req_end + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- TRUE + raw_merge <- FALSE + existing_raw_file_path <- NULL + stage_process_data <- TRUE + pro_merge <- FALSE + existing_pro_file_path <- NULL + remotefile_check_flag <- 1 } } else if (nrow(raw_check <- PEcAn.DB::db.query( @@ -301,15 +354,16 @@ remote_process <- function(settings) { # if only raw data is requested datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - stage_get_data <- datalist[[3]] - raw_merge <- datalist[[4]] - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + stage_get_data <- datalist[[3]] + raw_merge <- datalist[[4]] + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] stage_process_data <- FALSE - if(as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write"){ - raw_id <- raw_check$id + if (as.character(write_raw_start) == "dont write" && + as.character(write_raw_end) == "dont write") { + raw_id <- raw_check$id raw_path <- raw_check$file_path } if (raw_merge == TRUE) { @@ -321,48 +375,48 @@ remote_process <- function(settings) { } else{ # no data of requested type exists PEcAn.logger::logger.info("Requested data does not exist in the DB, retrieving for the first time") - flag <- 1 + remotefile_check_flag <- 1 start <- req_start - end <- req_end + end <- req_end if (!is.null(out_get_data)) { - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path = NULL + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path <- NULL } if (!is.null(out_process_data)) { - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE - process_file_name <- NULL + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + process_file_name <- NULL existing_pro_file_path <- NULL - flag <- 1 + remotefile_check_flag <- 1 } } } else{ # db is completely empty for the given siteid PEcAn.logger::logger.info("DB is completely empty for this site") - flag <- 1 - start <- req_start - end <- req_end - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path = NULL + remotefile_check_flag <- 1 + start <- req_start + end <- req_end + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path <- NULL if (!is.null(out_process_data)) { - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE existing_pro_file_path <- NULL } } - 
outdir = file.path(outdir, paste(source, "site", siteid_short, sep = "_")) + outdir <- file.path(outdir, paste(source, "site", siteid_short, sep = "_")) fcn.args <- list() fcn.args$coords <- coords @@ -403,66 +457,66 @@ remote_process <- function(settings) { as.character(write_pro_end) == "dont write") { PEcAn.logger::logger.info("Requested processed file already exists") } else{ - if (flag == 1) { + if (remotefile_check_flag == 1) { # no processed and rawfile are present PEcAn.logger::logger.info("Inserting raw and processed files for the first time") # insert processed data pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, - in.prefix = output$process_data_name, - siteid = siteid, - startdate = write_pro_start, - enddate = write_pro_end, - mimetype = pro_mimetype, + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, formatname = pro_formatname, - con = dbcon + con = dbcon ) # insert raw file raw_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, - in.prefix = output$raw_data_name, - siteid = siteid, - startdate = write_raw_start, - enddate = write_raw_end, - mimetype = raw_mimetype, + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, formatname = raw_formatname, - con = dbcon + con = dbcon ) - pro_id <- pro_ins$input.id - raw_id <- raw_ins$input.id + pro_id <- pro_ins$input.id + raw_id <- raw_ins$input.id pro_path <- output$process_data_path raw_path <- output$raw_data_path - } else if (flag == 2) { + } else if (remotefile_check_flag == 2) { # requested processed file does not exist but the raw file used to create it exists within the required timeline PEcAn.logger::logger.info("Inserting processed file for the first time") pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, - in.prefix = output$process_data_name, - siteid = siteid, - startdate = write_pro_start, - enddate = write_pro_end, - mimetype = pro_mimetype, + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, formatname = pro_formatname, - con = dbcon + con = dbcon ) - raw_id <- raw_check$id + raw_id <- raw_check$id raw_path <- raw_check$file_path - pro_id <- pro_ins$input.id + pro_id <- pro_ins$input.id pro_path <- output$pro_data_path - } else if (flag == 3) { + } else if (remotefile_check_flag == 3) { # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, - in.prefix = output$process_data_name, - siteid = siteid, - startdate = write_pro_start, - enddate = write_pro_end, - mimetype = pro_mimetype, + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, formatname = pro_formatname, - con = dbcon + con = dbcon ) raw_id = raw_check$id PEcAn.DB::db.query( @@ -484,12 +538,12 @@ remote_process <- function(settings) { ), dbcon ) - pro_id <- pro_ins$input.id + pro_id <- pro_ins$input.id pro_path <- output$pro_data_path - } else if (flag == 4) { + } else if (remotefile_check_flag == 4) { # requested 
processed and raw files are present and have to be updated - pro_id <- pro_check$id - raw_id <- raw_check$id + pro_id <- pro_check$id + raw_id <- raw_check$id raw_path <- output$raw_data_path pro_path <- output$process_data_path PEcAn.logger::logger.info("Updating processed and raw files") @@ -531,11 +585,11 @@ remote_process <- function(settings) { ), dbcon ) - } else if (flag == 5) { + } else if (remotefile_check_flag == 5) { # raw file required for creating the processed file exists and the processed file needs to be updated - pro_id <- pro_check$id + pro_id <- pro_check$id pro_path <- output$process_data_path - raw_id <- raw_check$id + raw_id <- raw_check$id raw_path <- raw_check$file_path PEcAn.logger::logger.info("Updating the existing processed file") PEcAn.DB::db.query( @@ -557,11 +611,11 @@ remote_process <- function(settings) { ), dbcon ) - } else if (flag == 6) { + } else if (remotefile_check_flag == 6) { # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file - pro_id <- pro_check$id + pro_id <- pro_check$id pro_path <- output$pro_data_path - raw_path <-output$raw_data_path + raw_path <- output$raw_data_path PEcAn.logger::logger.info("Replacing the existing processed file and creating a new raw file") PEcAn.DB::db.query( sprintf( @@ -584,14 +638,14 @@ remote_process <- function(settings) { ) raw_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, - in.prefix = output$raw_data_name, - siteid = siteid, - startdate = write_raw_start, - enddate = write_raw_end, - mimetype = raw_mimetype, + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, formatname = raw_formatname, - con = dbcon + con = dbcon ) raw_id <- raw_ins$input.id } @@ -602,27 +656,27 @@ remote_process <- function(settings) { if (as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write") { PEcAn.logger::logger.info("Requested raw file already exists") - raw_id = raw_check$id + raw_id = raw_check$id raw_path = raw_check$file_path } else{ - if (flag == 1) { + if (remotefile_check_flag == 1) { PEcAn.logger::logger.info(("Inserting raw file for the first time")) raw_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, - in.prefix = output$raw_data_name, - siteid = siteid, - startdate = write_raw_start, - enddate = write_raw_end, - mimetype = raw_mimetype, + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, formatname = raw_formatname, - con = dbcon + con = dbcon ) - raw_id <- raw_ins$input.id + raw_id <- raw_ins$input.id raw_path <- output$raw_data_path } else{ PEcAn.logger::logger.info("Updating raw file") - raw_id <- raw_check$id + raw_id <- raw_check$id raw_path <- output$raw_data_path PEcAn.DB::db.query( sprintf( @@ -647,17 +701,17 @@ remote_process <- function(settings) { } } - - + + if (!is.null(out_get_data)) { - settings$remotedata$raw_id <- raw_id + settings$remotedata$raw_id <- raw_id settings$remotedata$raw_path <- raw_path } - if(!is.null(out_process_data)){ - settings$remotedata$pro_id <- pro_id - settings$remotedata$pro_path <- pro_path + if (!is.null(out_process_data)) { + settings$remotedata$pro_id <- pro_id + settings$remotedata$pro_path <- pro_path } return (settings) } @@ -729,39 +783,39 
@@ construct_raw_filename <- ##' get_remote_data) ##' } ##' @author Ayush Prasad -set_stage <- function(result, req_start, req_end, stage) { - db_start = as.Date(result$start_date) - db_end = as.Date(result$end_date) - req_start = as.Date(req_start) - req_end = as.Date(req_end) - stage <- TRUE - merge <- TRUE +set_stage <- function(result, req_start, req_end, stage) { + db_start <- as.Date(result$start_date) + db_end <- as.Date(result$end_date) + req_start <- as.Date(req_start) + req_end <- as.Date(req_end) + stage <- TRUE + merge <- TRUE # data already exists if ((req_start >= db_start) && (req_end <= db_end)) { - req_start <- "dont write" - req_end <- "dont write" - stage <- FALSE - merge <- FALSE + req_start <- "dont write" + req_end <- "dont write" + stage <- FALSE + merge <- FALSE write_start <- "dont write" - write_end <- "dont write" + write_end <- "dont write" } else if (req_start < db_start && db_end < req_end) { # data has to be replaced - merge <- "replace" + merge <- "replace" write_start <- req_start - write_end <- req_end - stage <- TRUE + write_end <- req_end + stage <- TRUE } else if ((req_start > db_start) && (req_end > db_end)) { # forward case - req_start <- db_end + 1 + req_start <- db_end + 1 write_start <- db_start - write_end <- req_end + write_end <- req_end } else if ((req_start < db_start) && (req_end < db_end)) { # backward case - req_end <- db_start - 1 - write_end <- db_end + req_end <- db_start - 1 + write_end <- db_end write_start <- req_start } return (list(req_start, req_end, stage, merge, write_start, write_end)) -} +} diff --git a/modules/data.remote/inst/RpTools/RpTools/gee_utils.py b/modules/data.remote/inst/RpTools/RpTools/gee_utils.py index a1488856494..f156d665e43 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee_utils.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee_utils.py @@ -39,8 +39,10 @@ def create_geo(geofile): list(df.geometry.exterior[row_id].coords) for row_id in range(df.shape[0]) ] # create geometry + for i in range(len(area[0])): + area[0][i] = area[0][i][0:2] geo = ee.Geometry.Polygon(area) - + else: # if the input geometry type is not raise ValueError("geometry type not supported") @@ -99,4 +101,4 @@ def calc_ndvi(nir, red): def add_ndvi(image): ndvi = image.normalizedDifference([nir, red]).rename("NDVI") return image.addBands(ndvi) - return add_ndvi \ No newline at end of file + return add_ndvi diff --git a/modules/data.remote/man/remote_process.Rd b/modules/data.remote/man/remote_process.Rd index c0c11a23a58..44391e1e340 100644 --- a/modules/data.remote/man/remote_process.Rd +++ b/modules/data.remote/man/remote_process.Rd @@ -7,7 +7,7 @@ remote_process(settings) } \arguments{ -\item{settings}{PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, geofile, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data} +\item{settings}{PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data, overwrite} } \description{ call rp_control (from RpTools py package) from PEcAn workflow and store the output in BETY From ec233f7d2c0c68babbfe52dcc3b53ffc9f1de158 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 17 Aug 2020 16:15:38 +0530 Subject: [PATCH 1393/2289] Apply suggestions from code review Co-authored-by: istfer --- modules/data.remote/R/remote_process.R | 6 +++--- 1 file changed, 3 
insertions(+), 3 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 84960a747f5..4255c3a80bb 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -518,7 +518,7 @@ remote_process <- function(settings) { formatname = pro_formatname, con = dbcon ) - raw_id = raw_check$id + raw_id <- raw_check$id PEcAn.DB::db.query( sprintf( "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", @@ -656,8 +656,8 @@ remote_process <- function(settings) { if (as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write") { PEcAn.logger::logger.info("Requested raw file already exists") - raw_id = raw_check$id - raw_path = raw_check$file_path + raw_id <- raw_check$id + raw_path <- raw_check$file_path } else{ if (remotefile_check_flag == 1) { PEcAn.logger::logger.info(("Inserting raw file for the first time")) From ca07ed7bb3ff7e72e843c18338fa61e6e55b331b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 17 Aug 2020 18:40:27 +0530 Subject: [PATCH 1394/2289] exception handling during merging --- modules/data.remote/DESCRIPTION | 2 +- modules/data.remote/inst/RpTools/RpTools/merge_files.py | 8 ++++++-- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index ebdc2d1a79e..2e64b61ea53 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -36,4 +36,4 @@ Copyright: Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 7.0.2 +RoxygenNote: 7.1.0 diff --git a/modules/data.remote/inst/RpTools/RpTools/merge_files.py b/modules/data.remote/inst/RpTools/RpTools/merge_files.py index e3a2c3e01e0..5d0070237b1 100644 --- a/modules/data.remote/inst/RpTools/RpTools/merge_files.py +++ b/modules/data.remote/inst/RpTools/RpTools/merge_files.py @@ -34,8 +34,12 @@ def nc_merge(old, new, outdir): changed_new = os.path.join(outdir, tail + "temp" + timestamp + ".nc") # rename the new file to prevent it from being overwritten os.rename(orig_nameof_newfile, changed_new) - ds = xarray.open_mfdataset([old, changed_new], combine="by_coords") - ds.to_netcdf(os.path.join(outdir, tail)) + try: + ds = xarray.open_mfdataset([old, changed_new], combine="by_coords") + ds.to_netcdf(os.path.join(outdir, tail)) + except: + os.remove(old) + return os.path.abspath(os.path.join(outdir, changed_new)) # delete the old and temporary file os.remove(changed_new) os.remove(old) From 7d9316d3a0c564efd3ec845903f50195cef2f0d7 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 17 Aug 2020 10:15:26 -0400 Subject: [PATCH 1395/2289] updating Rd files --- modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd index fbb32863ad6..648e3754e6f 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd @@ -8,7 +8,7 @@ download.NOAA_GEFS_downscale( outfolder, lat.in, lon.in, - sitename, + site_id, start_date, end_date, overwrite = FALSE, @@ -19,7 +19,7 @@ \arguments{ \item{outfolder}{Directory where results should be written} -\item{sitename}{The unique ID given to each site. This is used as part of the file name.} +\item{site_id}{The unique ID given to each site.
This is used as part of the file name.} \item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} From d0b04a89a81986623a5c28c88452b96106a52b8d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 18 Aug 2020 10:16:39 +0530 Subject: [PATCH 1396/2289] small fixes in remote_process --- modules/data.remote/R/remote_process.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 4255c3a80bb..54571d8425a 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -341,7 +341,7 @@ remote_process <- function(settings) { stage_process_data <- TRUE pro_merge <- FALSE existing_pro_file_path <- NULL - remotefile_check_flag <- 1 + remotefile_check_flag <- 1 } } else if (nrow(raw_check <- PEcAn.DB::db.query( @@ -399,7 +399,7 @@ remote_process <- function(settings) { } else{ # db is completely empty for the given siteid PEcAn.logger::logger.info("DB is completely empty for this site") - remotefile_check_flag <- 1 + remotefile_check_flag <- 1 start <- req_start end <- req_end stage_get_data <- TRUE @@ -505,7 +505,7 @@ remote_process <- function(settings) { raw_id <- raw_check$id raw_path <- raw_check$file_path pro_id <- pro_ins$input.id - pro_path <- output$pro_data_path + pro_path <- output$process_data_path } else if (remotefile_check_flag == 3) { # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates pro_ins <- PEcAn.DB::dbfile.input.insert( @@ -810,7 +810,7 @@ set_stage <- function(result, req_start, req_end, stage) { req_start <- db_end + 1 write_start <- db_start write_end <- req_end - } else if ((req_start < db_start) && (req_end < db_end)) { + } else if ((req_start < db_start) && (req_end < db_end)) { # backward case req_end <- db_start - 1 write_end <- db_end From b8d4b67909b482e68ad0545841dcc99970a09f9b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 18 Aug 2020 10:24:05 +0530 Subject: [PATCH 1397/2289] pro_data_path -> process_data_path --- modules/data.remote/R/remote_process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 54571d8425a..e8ce64b19cf 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -614,7 +614,7 @@ remote_process <- function(settings) { } else if (remotefile_check_flag == 6) { # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file pro_id <- pro_check$id - pro_path <- output$pro_data_path + pro_path <- output$process_data_path raw_path <- output$raw_data_path PEcAn.logger::logger.info("Replacing the existing processed file and creating a new raw file") PEcAn.DB::db.query( From 8a49c702098697a35c766d051f8c479a9c27dc05 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 18 Aug 2020 10:25:39 +0530 Subject: [PATCH 1398/2289] pro_data_path -> process_data_path --- modules/data.remote/R/remote_process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index e8ce64b19cf..18308a675e2 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -539,7 +539,7 @@ remote_process <- function(settings) { 
dbcon ) pro_id <- pro_ins$input.id - pro_path <- output$pro_data_path + pro_path <- output$process_data_path } else if (remotefile_check_flag == 4) { # requested processed and raw files are present and have to be updated pro_id <- pro_check$id From 12cccf62b8304664c8ffeb670b0f287a245d6f71 Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 18 Aug 2020 08:45:28 -0400 Subject: [PATCH 1399/2289] updating met to include site id for NOAA GEFS download --- modules/data.atmosphere/R/download.raw.met.module.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index a87b77fcfba..ed186d30b3d 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -87,7 +87,8 @@ username = username, lat.in = lat.in, lon.in = lon.in, - pattern = met + pattern = met, + site_id = site.id ) } else { From e4e8d3f7840e1b01e90aae5cd247c5053f09c63f Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 18 Aug 2020 08:53:58 -0400 Subject: [PATCH 1400/2289] updating GEFS downscale to also have site_id --- .../data.atmosphere/R/download.NOAA_GEFS.R | 19 +++++++++---------- .../R/download.NOAA_GEFS_downscale.R | 2 +- 2 files changed, 10 insertions(+), 11 deletions(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index 475b6d1d4d8..16f05463da6 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -28,7 +28,7 @@ ##' @param start_date, end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight) ##' @param lat site latitude in decimal degrees ##' @param lon site longitude in decimal degrees -##' @param sitename The unique ID given to each site. This is used as part of the file name. +##' @param site_id The unique ID given to each site. This is used as part of the file name. ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? ##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info. ##' @param ... Other arguments, currently ignored @@ -36,12 +36,12 @@ ##' ##' @examples ##' \dontrun{ -##' download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, sitename="US-WCr") +##' download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) ##' } ##' ##' @author Luke Dramko ##' -download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, sitename, start_date = Sys.time(), end_date = (as.POSIXct(start_date, tz="UTC") + lubridate::days(16)), +download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, site_id, start_date = Sys.time(), end_date = (as.POSIXct(start_date, tz="UTC") + lubridate::days(16)), overwrite = FALSE, verbose = FALSE, ...) { start_date <- as.POSIXct(start_date, tz = "UTC") @@ -255,17 +255,16 @@ download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, sitename, start_date = #For each ensemble for (i in 1:21) { # i is the ensemble number #Generating a unique identifier string that characterizes a particular data set. 
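As a point of reference for the sitename-to-site_id renaming above, this is roughly the identifier string the patched loop builds for one ensemble member once the underscore separator (changed just below) is applied; a minimal sketch, with site 676, member 3, and an invented forecast window standing in for the real function arguments:

start_date <- as.POSIXct("2018-06-06 06:00", tz = "UTC")
end_date   <- as.POSIXct("2018-06-22 06:00", tz = "UTC")
identifier <- paste("NOAA_GEFS", 676, 3,
                    format(start_date, "%Y-%m-%dT%H:%M"),
                    format(end_date, "%Y-%m-%dT%H:%M"), sep = "_")
identifier
# "NOAA_GEFS_676_3_2018-06-06T06:00_2018-06-22T06:00"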
- identifier = paste("NOAA_GEFS", sitename, i, format(start_date, "%Y-%m-%dT%H:%M"), - format(end_date, "%Y-%m-%dT%H:%M"), sep=".") - - ensemble_folder = file.path(outfolder, identifier) + identifier = paste("NOAA_GEFS", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), + format(end_date, "%Y-%m-%dT%H:%M"), sep="_") + #Each file will go in its own folder. - if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE) + if (!dir.exists(outfolder)) { + dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) } - flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) + flname = file.path(outfolder, paste(identifier, "nc", sep = ".")) #Each ensemble member gets its own unique data frame, which is stored in results_list #Object references in R work differently than in other languages. When adding an item to a list, R creates a copy of it diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R index 0daa1cf34ac..e55fcb4a5e8 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R @@ -36,7 +36,7 @@ ##' ##' @examples ##' \dontrun{ -##' download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, sitename="US-WCr") +##' download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) ##' } ##' ##' @author Katie Zarada - modified code from Luke Dramko and Laura Puckett From 46bd4094ab67741960490a820ad051723efa635f Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 18 Aug 2020 09:21:51 -0400 Subject: [PATCH 1401/2289] updating Rd files --- modules/data.atmosphere/man/download.NOAA_GEFS.Rd | 6 +++--- modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index 967eab5b4c7..4fc167d8dba 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -8,7 +8,7 @@ download.NOAA_GEFS( outfolder, lat.in, lon.in, - sitename, + site_id, start_date = Sys.time(), end_date = (as.POSIXct(start_date, tz = "UTC") + lubridate::days(16)), overwrite = FALSE, @@ -19,7 +19,7 @@ download.NOAA_GEFS( \arguments{ \item{outfolder}{Directory where results should be written} -\item{sitename}{The unique ID given to each site. This is used as part of the file name.} +\item{site_id}{The unique ID given to each site. This is used as part of the file name.} \item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} @@ -70,7 +70,7 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. \examples{ \dontrun{ - download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, sitename="US-WCr") + download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) } } diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd index 648e3754e6f..786441d9d8d 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd @@ -70,7 +70,7 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. 
\examples{ \dontrun{ - download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, sitename="US-WCr") + download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) } } From aa86dcac1ca2476ae3fbf94958e3de988885318e Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 19 Aug 2020 05:26:50 -0400 Subject: [PATCH 1402/2289] generate same samples --- modules/priors/R/priors.R | 11 +++++++++- modules/uncertainty/R/ensemble.R | 22 ++++++++++++++----- modules/uncertainty/R/get.parameter.samples.R | 21 +++++++++++++++++- 3 files changed, 46 insertions(+), 8 deletions(-) diff --git a/modules/priors/R/priors.R b/modules/priors/R/priors.R index 387673662db..8efd49c255d 100644 --- a/modules/priors/R/priors.R +++ b/modules/priors/R/priors.R @@ -189,7 +189,16 @@ pr.samp <- function(distn, parama, paramb, n) { ##' @return vector with n random samples from prior ##' @seealso \link{pr.samp} ##' @export -get.sample <- function(prior, n) { +get.sample <- function(prior, n, p = NULL) { + if(!is.null(p)){ + if (as.character(prior$distn) %in% c("exp", "pois", "geom")) { + ## one parameter distributions + return(do.call(paste0("q", prior$distn), list(p, prior$parama))) + } else { + ## two parameter distributions + return(do.call(paste0("q", prior$distn), list(p, prior$parama, prior$paramb))) + } + } if (as.character(prior$distn) %in% c("exp", "pois", "geom")) { ## one parameter distributions return(do.call(paste0("r", prior$distn), list(n, prior$parama))) diff --git a/modules/uncertainty/R/ensemble.R b/modules/uncertainty/R/ensemble.R index 2ba060d4606..1cf32b71b70 100644 --- a/modules/uncertainty/R/ensemble.R +++ b/modules/uncertainty/R/ensemble.R @@ -111,7 +111,7 @@ get.ensemble.samples <- function(ensemble.size, pft.samples, env.samples, random.samples <- as.matrix(random.samples) } else if (method == "sobol") { PEcAn.logger::logger.info("Using ", method, "method for sampling") - random.samples <- randtoolbox::sobol(n = ensemble.size, dim = total.sample.num, ...) + random.samples <- randtoolbox::sobol(n = ensemble.size, dim = total.sample.num, scrambling = 3, ...) ## force as a matrix in case length(samples)=1 random.samples <- as.matrix(random.samples) } else if (method == "torus") { @@ -146,12 +146,22 @@ get.ensemble.samples <- function(ensemble.size, pft.samples, env.samples, # meaning we want to keep MCMC samples together if(length(pft.samples[[pft.i]])>0 & !is.null(param.names)){ - # TODO: for now we are sampling row numbers uniformly - # stop if other methods were requested - if(method != "uniform"){ - PEcAn.logger::logger.severe("Only uniform sampling is available for joint sampling at the moment. 
Other approaches are not implemented yet.") + if (method == "halton") { + same.i <- round(randtoolbox::halton(ensemble.size) * length(pft.samples[[pft.i]][[1]])) + } else if (method == "sobol") { + same.i <- round(randtoolbox::sobol(ensemble.size, scrambling = 3) * length(pft.samples[[pft.i]][[1]])) + } else if (method == "torus") { + same.i <- round(randtoolbox::torus(ensemble.size) * length(pft.samples[[pft.i]][[1]])) + } else if (method == "lhc") { + same.i <- round(c(PEcAn.emulator::lhc(t(matrix(0:1, ncol = 1, nrow = 2)), ensemble.size) * length(pft.samples[[pft.i]][[1]]))) + } else if (method == "uniform") { + same.i <- sample.int(length(pft.samples[[pft.i]][[1]]), ensemble.size) + } else { + PEcAn.logger::logger.info("Method ", method, " has not been implemented yet, using uniform random sampling") + # uniform random + same.i <- sample.int(length(pft.samples[[pft.i]][[1]]), ensemble.size) } - same.i <- sample.int(length(pft.samples[[pft.i]][[1]]), ensemble.size) + } for (trait.i in seq(pft.samples[[pft.i]])) { diff --git a/modules/uncertainty/R/get.parameter.samples.R b/modules/uncertainty/R/get.parameter.samples.R index 87ff35eaa72..8d7dbb5890d 100644 --- a/modules/uncertainty/R/get.parameter.samples.R +++ b/modules/uncertainty/R/get.parameter.samples.R @@ -129,11 +129,30 @@ get.parameter.samples <- function(settings, } PEcAn.logger::logger.info("using ", samples.num, "samples per trait") + if (ens.sample.method == "halton") { + q_samples <- randtoolbox::halton(n = samples.num, dim = length(priors)) + } else if (ens.sample.method == "sobol") { + q_samples <- randtoolbox::sobol(n = samples.num, dim = length(priors), scrambling = 3) + } else if (ens.sample.method == "torus") { + q_samples <- randtoolbox::torus(n = samples.num, dim = length(priors)) + } else if (ens.sample.method == "lhc") { + q_samples <- PEcAn.emulator::lhc(t(matrix(0:1, ncol = length(priors), nrow = 2)), samples.num) + } else if (ens.sample.method == "uniform") { + q_samples <- matrix(stats::runif(samples.num * length(priors)), + samples.num, + length(priors)) + } else { + PEcAn.logger::logger.info("Method ", ens.sample.method, " has not been implemented yet, using uniform random sampling") + # uniform random + q_samples <- matrix(stats::runif(samples.num * length(priors)), + samples.num, + length(priors)) + } for (prior in priors) { if (prior %in% param.names[[i]]) { samples <- trait.mcmc[[prior]] %>% purrr::map(~ .x[,'beta.o']) %>% unlist() %>% as.matrix() } else { - samples <- PEcAn.priors::get.sample(prior.distns[prior, ], samples.num) + samples <- PEcAn.priors::get.sample(prior.distns[prior, ], samples.num, q_samples[ , priors==prior]) } trait.samples[[pft.name]][[prior]] <- samples } From 7cffa36b393bc5c2ce11613e7ee6030bcb0a0413 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 19 Aug 2020 05:28:09 -0400 Subject: [PATCH 1403/2289] temporary treatment handling --- models/basgra/R/read_restart.BASGRA.R | 5 +++++ models/basgra/R/run_BASGRA.R | 18 +++++++++++++----- 2 files changed, 18 insertions(+), 5 deletions(-) diff --git a/models/basgra/R/read_restart.BASGRA.R b/models/basgra/R/read_restart.BASGRA.R index 79bc53d8a38..b86ab8741ec 100644 --- a/models/basgra/R/read_restart.BASGRA.R +++ b/models/basgra/R/read_restart.BASGRA.R @@ -41,6 +41,11 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p names(forecast[[length(forecast)]]) <- c("slow_soil_pool_carbon_content") } + if ("TotSoilCarb" %in% var.names) { + forecast[[length(forecast) + 1]] <- ens$TotSoilCarb[last] # kg C m-2 + 
names(forecast[[length(forecast)]]) <- c("TotSoilCarb") + } + PEcAn.logger::logger.info(runid) X_tmp <- list(X = unlist(forecast), params = params) diff --git a/models/basgra/R/run_BASGRA.R b/models/basgra/R/run_BASGRA.R index b029cf07a27..50d1a0aa80d 100644 --- a/models/basgra/R/run_BASGRA.R +++ b/models/basgra/R/run_BASGRA.R @@ -245,7 +245,13 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ matrix_weather <- cbind( matrix_weather, matrix_weather[,8]) #add a col if(!is.null(co2_file)){ co2val <- utils::read.table(co2_file, header=TRUE, sep = ",") - matrix_weather[1:NDAYS,9] <- co2val[paste0(co2val[,1], co2val[,2]) %in% paste0(matrix_weather[,1], matrix_weather[,2]),3] + + weird_line <- which(!paste0(matrix_weather[1:NDAYS,1], matrix_weather[1:NDAYS,2]) %in% paste0(co2val[,1], co2val[,2])) + if(length(weird_line)!=0){ + matrix_weather <- matrix_weather[-weird_line,] + NDAYS <- NDAYS-length(weird_line) + } + matrix_weather[1:NDAYS,9] <- co2val[paste0(co2val[,1], co2val[,2]) %in% paste0(matrix_weather[1:NDAYS,1], matrix_weather[1:NDAYS,2]),3] }else{ PEcAn.logger::logger.info("No atmospheric CO2 concentration was provided. Using default 350 ppm.") matrix_weather[1:NDAYS,9] <- 350 @@ -279,11 +285,13 @@ run_BASGRA <- function(run_met, run_params, site_harvest, site_fertilize, start_ # I'll pass it via harvest file as the 3rd column # even though it won't change from harvest to harvest, it may change from run to run # but just in case users forgot to add the third column to the harvest file: - if(nrow(h_days) == 3){ - run_params[names(run_params) == "CLAIV"] <- h_days[1,3] + if(ncol(h_days) > 2){ + run_params[names(run_params) == "CLAIV"] <- h_days[1,3] + run_params[names(run_params) == "TRANCO"] <- h_days[1,4] + run_params[names(run_params) == "SLAMAX"] <- h_days[1,5] + run_params[names(run_params) == "FSOMFSOMS"] <- h_days[1,6] }else{ - PEcAn.logger::logger.info("Maximum LAI remaining after harvest is not provided via harvest file. Assuming CLAIV=1.") - run_params[names(run_params) == "CLAIV"] <- 1 + PEcAn.logger::logger.info("CLAIV, TRANCO, SLAMAX not provided via harvest file. Using defaults.") } From 01f27aaf4a596f291307c8cda44ea190cfd2d813 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 19 Aug 2020 09:30:59 -0400 Subject: [PATCH 1404/2289] adding ensemble folder back in --- modules/data.atmosphere/R/download.NOAA_GEFS.R | 9 ++++----- modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R | 8 ++++---- 2 files changed, 8 insertions(+), 9 deletions(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index 16f05463da6..e4fa7a2d0f8 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -258,13 +258,12 @@ download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, site_id, start_date = identifier = paste("NOAA_GEFS", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), format(end_date, "%Y-%m-%dT%H:%M"), sep="_") - + ensemble_folder = file.path(outfolder, identifier) #Each file will go in its own folder. 
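Looking back at the run_BASGRA.R hunk above: CO2 records are matched to weather rows by concatenating year and day-of-year into a single key. A toy illustration of that paste0()/%in% idiom, with invented data:

matrix_weather <- cbind(year = c(2001, 2001, 2001), doy = 1:3)
co2val <- data.frame(year = 2001, doy = 2:3, co2 = c(370, 371))
keep <- paste0(matrix_weather[, 1], matrix_weather[, 2]) %in%
  paste0(co2val[, 1], co2val[, 2])
keep
# FALSE TRUE TRUE: only weather rows 2 and 3 have a matching CO2 record;
# rows without one are what the weird_line check above drops before
# filling column 9 of matrix_weather.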
- if (!dir.exists(outfolder)) { - dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) - } + if (!dir.exists(ensemble_folder)) { + dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE) - flname = file.path(outfolder, paste(identifier, "nc", sep = ".")) + flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) #Each ensemble member gets its own unique data frame, which is stored in results_list #Object references in R work differently than in other languages. When adding an item to a list, R creates a copy of it diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R index e55fcb4a5e8..71d2be26eb6 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R @@ -335,6 +335,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, sta #Generating a unique identifier string that characterizes a particular data set. identifier = paste("NOAA_GEFS_downscale", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), format(end_date, "%Y-%m-%dT%H:%M"), sep="_") + ensemble_folder = file.path(outfolder, identifier) #ensemble_folder = file.path(outfolder, identifier) data = as.data.frame(joined %>% dplyr::select(NOAA.member, cf_var_names1) %>% @@ -342,11 +343,10 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, sta dplyr::select(-NOAA.member)) - if (!dir.exists(outfolder)) { - dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) - } + if (!dir.exists(ensemble_folder)) { + dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE) - flname = file.path(outfolder, paste(identifier, "nc", sep = ".")) + flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) #Each ensemble member gets its own unique data frame, which is stored in results_list #Object references in R work differently than in other languages. 
When adding an item to a list, R creates a copy of it From ae2eaaf95a8948dc79c4b59afe5f89d37567c67a Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 19 Aug 2020 09:52:59 -0400 Subject: [PATCH 1405/2289] adding missing bracket in --- modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R index 71d2be26eb6..ed6eff7c002 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R @@ -344,7 +344,7 @@ download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, sta if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE) + dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) From 4cb118849d1f27f3190c1663b090aae7c7f570a2 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 19 Aug 2020 11:08:01 -0400 Subject: [PATCH 1406/2289] adding end bracket to GEFS --- modules/data.atmosphere/R/download.NOAA_GEFS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index e4fa7a2d0f8..ee29683dea4 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -261,7 +261,7 @@ download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, site_id, start_date = ensemble_folder = file.path(outfolder, identifier) #Each file will go in its own folder. if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE) + dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) From cef20dce5010e4368f4848a9af68dfbffec7723a Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 19 Aug 2020 11:37:20 -0400 Subject: [PATCH 1407/2289] updating Rd line length --- modules/data.atmosphere/R/download.NOAA_GEFS.R | 14 +++++++++----- .../R/download.NOAA_GEFS_downscale.R | 5 ++++- modules/data.atmosphere/man/download.NOAA_GEFS.Rd | 14 +++++++++----- .../man/download.NOAA_GEFS_downscale.Rd | 5 ++++- 4 files changed, 26 insertions(+), 12 deletions(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index ee29683dea4..1cbcb5c70a3 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -1,13 +1,14 @@ ##' @title Download NOAA GEFS Weather Data ##' ##' @section Information on Units: -##' Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downlaoded -##' in Kelvin. +##' Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, +##' but is converted at the station and downloaded in Kelvin. ##' @references https://www.ncdc.noaa.gov/crn/measurements.html ##' ##' @section NOAA_GEFS General Information: -##' This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. A 16 day forecast is avaliable -##' every 6 hours. Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn +##' This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. +##' A 16 day forecast is available every 6 hours. Each forecast includes information on a total of 8 variables. +##' These are transformed from the NOAA standard to the internal PEcAn
These are transformed from the NOAA standard to the internal PEcAn +##' This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. +##' A 16 day forecast is avaliable every 6 hours. Each forecast includes information on a total of 8 variables. +##' These are transformed from the NOAA standard to the internal PEcAn ##' standard. ##' ##' @section Data Avaliability: @@ -36,7 +37,10 @@ ##' ##' @examples ##' \dontrun{ -##' download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) +##' download.NOAA_GEFS(outfolder="~/Working/results", +##' lat.in= 45.805925, +##' lon.in = -90.07961, +##' site_id = 676) ##' } ##' ##' @author Luke Dramko diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R index ed6eff7c002..ccdee0f5c34 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R @@ -36,7 +36,10 @@ ##' ##' @examples ##' \dontrun{ -##' download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) +##' download.NOAA_GEFS(outfolder="~/Working/results", +##' lat.in= 45.805925, +##' lon.in = -90.07961, +##' site_id = 676) ##' } ##' ##' @author Katie Zarada - modified code from Luke Dramko and Laura Puckett diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index 4fc167d8dba..c6e313362ec 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -42,14 +42,15 @@ Download NOAA GEFS Weather Data } \section{Information on Units}{ -Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downlaoded -in Kelvin. +Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, +but is converted at the station and downlaoded in Kelvin. } \section{NOAA_GEFS General Information}{ -This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. A 16 day forecast is avaliable -every 6 hours. Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn +This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. +A 16 day forecast is avaliable every 6 hours. Each forecast includes information on a total of 8 variables. +These are transformed from the NOAA standard to the internal PEcAn standard. } @@ -70,7 +71,10 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. \examples{ \dontrun{ - download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) + download.NOAA_GEFS(outfolder="~/Working/results", + lat.in= 45.805925, + lon.in = -90.07961, + site_id = 676) } } diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd index 786441d9d8d..d13d8ffa128 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd @@ -70,7 +70,10 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. 
\examples{ \dontrun{ - download.NOAA_GEFS(outfolder="~/Working/results", lat.in= 45.805925, lon.in = -90.07961, site_id = 676) + download.NOAA_GEFS(outfolder="~/Working/results", + lat.in= 45.805925, + lon.in = -90.07961, + site_id = 676) } } From 955b6e413ff8b65df2899491871478fe82db4f5d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 20 Aug 2020 22:15:06 +0530 Subject: [PATCH 1408/2289] refactor remote_process to create remote_data_dbcheck --- modules/data.remote/DESCRIPTION | 2 +- modules/data.remote/R/remote_process.R | 639 ++++++++++++++----------- 2 files changed, 365 insertions(+), 276 deletions(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 2e64b61ea53..ebdc2d1a79e 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -36,4 +36,4 @@ Copyright: Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 7.1.0 +RoxygenNote: 7.0.2 diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 18308a675e2..7affb3c639a 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -3,7 +3,9 @@ ##' @name remote_process ##' @title remote_process ##' @export +##' ##' @param settings PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data, overwrite +##' ##' @examples ##' \dontrun{ ##' remote_process(settings) @@ -140,281 +142,28 @@ remote_process <- function(settings) { ), use.names = FALSE) # check if any data is already present in the inputs table - existing_data <- - PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) - if (nrow(existing_data) >= 1) { - if (overwrite) { - PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") - if (!is.null(out_process_data)) { - pro_file_name = paste0(algorithm, - "_", - out_process_data, - "_site_", - siteid_short) - if (nrow(pro_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - pro_file_name - ), - dbcon - )) == 1) { - if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - remotefile_check_flag <- 4 - } else{ - remotefile_check_flag <- 6 - } - } else{ - remotefile_check_flag <- 1 - } - stage_process_data <- TRUE - pro_merge <- "repace" - write_pro_start <- start - write_pro_end <- end - } else if (!is.null(out_get_data)) { - if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - remotefile_check_flag <- 0 - } else{ - remotefile_check_flag <- 1 - } - } - stage_get_data <- TRUE - start <- req_start - end <- req_end - write_raw_start <- start - write_raw_end <- end - raw_merge <- "replace" - existing_pro_file_path <- NULL - existing_raw_file_path <- NULL - } else if (!is.null(out_process_data)) { - # if processed 
data is requested, example LAI - - # construct processed file name - pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) - - # check if processed file exists - if (nrow(pro_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - pro_file_name - ), - dbcon - )) == 1) { - datalist <- - set_stage(pro_check, req_start, req_end, stage_process_data) - pro_start <- as.character(datalist[[1]]) - pro_end <- as.character(datalist[[2]]) - write_pro_start <- datalist[[5]] - write_pro_end <- datalist[[6]] - if (pro_start != "dont write" || pro_end != "dont write") { - stage_process_data <- datalist[[3]] - pro_merge <- datalist[[4]] - if (pro_merge == TRUE) { - existing_pro_file_path <- pro_check$file_path - } - if (stage_process_data == TRUE) { - # check about the status of raw file - raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - ) - if (!is.null(raw_check$start_date) && - !is.null(raw_check$end_date)) { - raw_datalist <- - set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- as.character(raw_datalist[[1]]) - end <- as.character(raw_datalist[[2]]) - write_raw_start <- raw_datalist[[5]] - write_raw_end <- raw_datalist[[6]] - stage_get_data <- raw_datalist[[3]] - raw_merge <- raw_datalist[[4]] - if (stage_get_data == FALSE) { - input_file <- raw_check$file_path - } - remotefile_check_flag <- 4 - if (raw_merge == TRUE) { - existing_raw_file_path <- raw_check$file_path - } - if (pro_merge == TRUE && stage_get_data == FALSE) { - remotefile_check_flag <- 5 - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date - existing_pro_file_path <- NULL - pro_merge <- FALSE - } - } else{ - # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted - remotefile_check_flag <- 6 - write_raw_start <- req_start - write_raw_end <- req_end - start <- req_start - end <- req_end - stage_get_data <- TRUE - existing_raw_file_path <- NULL - write_pro_start <- write_raw_start - write_pro_end <- write_raw_end - pro_merge <- "replace" - existing_pro_file_path <- NULL - } - } - } else{ - # requested file already exists - pro_id <- pro_check$id - pro_path <- pro_check$file_path - if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - raw_id <- raw_check$id - raw_path <- raw_check$file_path - } - } - } - else if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - # if the processed file does not exist in the DB check if the raw file required for creating it is present - PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") - datalist <- - 
set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- datalist[[3]] - if (stage_get_data == FALSE) { - input_file <- raw_check$file_path - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date - remotefile_check_flag <- 2 - } - raw_merge <- datalist[[4]] - stage_process_data <- TRUE - pro_merge <- FALSE - if (raw_merge == TRUE || raw_merge == "replace") { - existing_raw_file_path = raw_check$file_path - remotefile_check_flag <- 3 - } else{ - existing_raw_file_path = NULL - } - } else{ - # if no processed or raw file of requested type exists - start <- req_start - end <- req_end - write_raw_start <- req_start - write_raw_end <- req_end - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- TRUE - raw_merge <- FALSE - existing_raw_file_path <- NULL - stage_process_data <- TRUE - pro_merge <- FALSE - existing_pro_file_path <- NULL - remotefile_check_flag <- 1 - } - } else if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - # if only raw data is requested - datalist <- - set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - stage_get_data <- datalist[[3]] - raw_merge <- datalist[[4]] - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] - stage_process_data <- FALSE - if (as.character(write_raw_start) == "dont write" && - as.character(write_raw_end) == "dont write") { - raw_id <- raw_check$id - raw_path <- raw_check$file_path - } - if (raw_merge == TRUE) { - existing_raw_file_path <- raw_check$file_path - } else{ - existing_raw_file_path <- NULL - } - existing_pro_file_path <- NULL - } else{ - # no data of requested type exists - PEcAn.logger::logger.info("Requested data does not exist in the DB, retrieving for the first time") - remotefile_check_flag <- 1 - start <- req_start - end <- req_end - if (!is.null(out_get_data)) { - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path <- NULL - } - if (!is.null(out_process_data)) { - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE - process_file_name <- NULL - existing_pro_file_path <- NULL - remotefile_check_flag <- 1 - } - } - - } else{ - # db is completely empty for the given siteid - PEcAn.logger::logger.info("DB is completely empty for this site") - remotefile_check_flag <- 1 - start <- req_start - end <- req_end - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path <- NULL - if (!is.null(out_process_data)) { - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE - existing_pro_file_path <- NULL - } - } + dbstatus <- remote_data_dbcheck(raw_file_name = raw_file_name, start = start, end = end, req_start = req_start, req_end = req_end, siteid = siteid, siteid_short = siteid_short, out_get_data = out_get_data, algorithm = algorithm, out_process_data = out_process_data, overwrite, dbcon = 
dbcon) + + + + remotefile_check_flag <- dbstatus[[1]] + start <- dbstatus[[2]] + end <- dbstatus[[3]] + stage_get_data <- dbstatus[[4]] + write_raw_start <- dbstatus[[5]] + write_raw_end <- dbstatus[[6]] + raw_merge <- dbstatus[[7]] + existing_raw_file_path <- dbstatus[[8]] + stage_process_data <- dbstatus[[9]] + write_pro_start <- dbstatus[[10]] + write_pro_end <- dbstatus[[11]] + pro_merge <- dbstatus[[12]] + input_file <- dbstatus[[13]] + existing_pro_file_path <- dbstatus[[14]] + raw_check <- dbstatus[[15]] + pro_check <- dbstatus[[16]] + + outdir <- file.path(outdir, paste(source, "site", siteid_short, sep = "_")) @@ -456,6 +205,10 @@ remote_process <- function(settings) { if (as.character(write_pro_start) == "dont write" && as.character(write_pro_end) == "dont write") { PEcAn.logger::logger.info("Requested processed file already exists") + pro_id <- pro_check$id + pro_path <- pro_check$file_path + raw_id <- raw_check$id + raw_path <- raw_check$file_path } else{ if (remotefile_check_flag == 1) { # no processed and rawfile are present @@ -819,3 +572,339 @@ set_stage <- function(result, req_start, req_end, stage) { return (list(req_start, req_end, stage, merge, write_start, write_end)) } + + +##' set details after checking db +##' +##' @name remote_data_dbcheck +##' @title remote_data_dbcheck +##' @param raw_file_name raw_file_name +##' @param start start +##' @param end end +##' @param req_start req_start +##' @param req_end req_end +##' @param siteid siteid +##' @param siteid_short siteid_short +##' @param out_get_data out_get_data +##' @param algorithm algorithm +##' @param out_process_data out_process_data +##' @param overwrite overwrite +##' @param dbcon con +##' @return list containing information +##' @examples +##' \dontrun{ +##' dbstatus <- remote_data_dbcheck( +##' raw_file_name, +##' start, +##' end, +##' req_start, +##' req_end, +##' siteid, +##' siteid_short, +##' algorithm, +##' out_process_data, +##' con) +##' } +##' @author Ayush Prasad +remote_data_dbcheck <- function(raw_file_name, start, end, req_start, req_end, siteid, siteid_short, out_get_data, algorithm, out_process_data, overwrite, dbcon){ + + input_file <- NULL + stage_get_data <- NULL + stage_process_data <- NULL + raw_merge <- NULL + pro_merge <- NULL + existing_raw_file_path <- NULL + existing_pro_file_path <- NULL + write_raw_start <- NULL + write_raw_end <- NULL + write_pro_start <- NULL + write_pro_end <- NULL + raw_check <- NULL + pro_check <- NULL + remotefile_check_flag <- NULL + + existing_data <- + PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) + if (nrow(existing_data) >= 1) { + if (overwrite) { + PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") + if (!is.null(out_process_data)) { + pro_file_name = paste0(algorithm, + "_", + out_process_data, + "_site_", + siteid_short) + if (nrow(pro_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + pro_file_name + ), + dbcon + )) == 1) { + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + remotefile_check_flag <- 4 + } else{ + remotefile_check_flag 
<- 6 + } else{ + remotefile_check_flag <- 1 + } + stage_process_data <- TRUE + pro_merge <- "replace" + write_pro_start <- start + write_pro_end <- end + } else if (!is.null(out_get_data)) { + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + remotefile_check_flag <- 0 + } else{ + remotefile_check_flag <- 1 + } + } + stage_get_data <- TRUE + start <- req_start + end <- req_end + write_raw_start <- start + write_raw_end <- end + raw_merge <- "replace" + existing_pro_file_path <- NULL + existing_raw_file_path <- NULL + } else if (!is.null(out_process_data)) { + # if processed data is requested, example LAI + + # construct processed file name + pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) + + # check if processed file exists + if (nrow(pro_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + pro_file_name + ), + dbcon + )) == 1) { + datalist <- + set_stage(pro_check, req_start, req_end, stage_process_data) + pro_start <- as.character(datalist[[1]]) + pro_end <- as.character(datalist[[2]]) + write_pro_start <- datalist[[5]] + write_pro_end <- datalist[[6]] + if (pro_start != "dont write" || pro_end != "dont write") { + stage_process_data <- datalist[[3]] + pro_merge <- datalist[[4]] + if (pro_merge == TRUE) { + existing_pro_file_path <- pro_check$file_path + } + if (stage_process_data == TRUE) { + # check about the status of raw file + raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + ) + if (!is.null(raw_check$start_date) && + !is.null(raw_check$end_date)) { + raw_datalist <- + set_stage(raw_check, pro_start, pro_end, stage_get_data) + start <- as.character(raw_datalist[[1]]) + end <- as.character(raw_datalist[[2]]) + write_raw_start <- raw_datalist[[5]] + write_raw_end <- raw_datalist[[6]] + stage_get_data <- raw_datalist[[3]] + raw_merge <- raw_datalist[[4]] + if (stage_get_data == FALSE) { + input_file <- raw_check$file_path + } + remotefile_check_flag <- 4 + if (raw_merge == TRUE) { + existing_raw_file_path <- raw_check$file_path + } + if (pro_merge == TRUE && stage_get_data == FALSE) { + remotefile_check_flag <- 5 + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + existing_pro_file_path <- NULL + pro_merge <- FALSE + } + } else{ + # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted + remotefile_check_flag <- 6 + write_raw_start <- req_start + write_raw_end <- req_end + start <- req_start + end <- req_end + stage_get_data <- TRUE + existing_raw_file_path <- NULL + write_pro_start <- write_raw_start + write_pro_end <- write_raw_end + pro_merge <- "replace" + existing_pro_file_path <- NULL + } + } + } else{ + # requested file already exists + pro_id <- pro_check$id + pro_path <- pro_check$file_path + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id,
inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + raw_id <- raw_check$id + raw_path <- raw_check$file_path + } + } + } + else if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + # if the processed file does not exist in the DB check if the raw file required for creating it is present + PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") + datalist <- + set_stage(raw_check, req_start, req_end, stage_get_data) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- datalist[[3]] + if (stage_get_data == FALSE) { + input_file <- raw_check$file_path + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + remotefile_check_flag <- 2 + } + raw_merge <- datalist[[4]] + stage_process_data <- TRUE + pro_merge <- FALSE + if (raw_merge == TRUE || raw_merge == "replace") { + existing_raw_file_path = raw_check$file_path + remotefile_check_flag <- 3 + } else{ + existing_raw_file_path = NULL + } + } else{ + # if no processed or raw file of requested type exists + start <- req_start + end <- req_end + write_raw_start <- req_start + write_raw_end <- req_end + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- TRUE + raw_merge <- FALSE + existing_raw_file_path <- NULL + stage_process_data <- TRUE + pro_merge <- FALSE + existing_pro_file_path <- NULL + remotefile_check_flag <- 1 + } + } else if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + # if only raw data is requested + datalist <- + set_stage(raw_check, req_start, req_end, stage_get_data) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + stage_get_data <- datalist[[3]] + raw_merge <- datalist[[4]] + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + stage_process_data <- FALSE + if (as.character(write_raw_start) == "dont write" && + as.character(write_raw_end) == "dont write") { + raw_id <- raw_check$id + raw_path <- raw_check$file_path + } + if (raw_merge == TRUE) { + existing_raw_file_path <- raw_check$file_path + } else{ + existing_raw_file_path <- NULL + } + existing_pro_file_path <- NULL + } else{ + # no data of requested type exists + PEcAn.logger::logger.info("Requested data does not exist in the DB, retrieving for the first time") + remotefile_check_flag <- 1 + start <- req_start + end <- req_end + if (!is.null(out_get_data)) { + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path <- NULL + } + if (!is.null(out_process_data)) { + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + process_file_name <- NULL + existing_pro_file_path <- NULL + 
remotefile_check_flag <- 1 + } + } + + } else{ + # db is completely empty for the given siteid + PEcAn.logger::logger.info("DB is completely empty for this site") + remotefile_check_flag <- 1 + start <- req_start + end <- req_end + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path <- NULL + if (!is.null(out_process_data)) { + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + existing_pro_file_path <- NULL + } + } + + return(list(remotefile_check_flag, start, end, stage_get_data, write_raw_start, write_raw_end, raw_merge, existing_raw_file_path, stage_process_data, write_pro_start, write_pro_end, pro_merge, input_file, existing_pro_file_path, raw_check, pro_check)) + +} + + + + + + From 50a7af8dd4c46f6073733b51d36c337b57068d54 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 00:20:27 +0530 Subject: [PATCH 1409/2289] refactor remote_process to create remotedata_db_insert --- modules/data.remote/R/remote_process.R | 1177 +++++++++-------- .../data.remote/man/remotedata_db_check.Rd | 72 + .../data.remote/man/remotedata_db_insert.Rd | 85 ++ 3 files changed, 809 insertions(+), 525 deletions(-) create mode 100644 modules/data.remote/man/remotedata_db_check.Rd create mode 100644 modules/data.remote/man/remotedata_db_insert.Rd diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 7affb3c639a..96948cb88a5 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -32,7 +32,7 @@ remote_process <- function(settings) { # When raw file is requested, # 1 - There is no existing raw the requested type in the DB - # In all other cases the existing raw file will be updated + # 2 - existing raw file will be updated RpTools <- reticulate::import("RpTools") @@ -142,9 +142,22 @@ remote_process <- function(settings) { ), use.names = FALSE) # check if any data is already present in the inputs table - dbstatus <- remote_data_dbcheck(raw_file_name = raw_file_name, start = start, end = end, req_start = req_start, req_end = req_end, siteid = siteid, siteid_short = siteid_short, out_get_data = out_get_data, algorithm = algorithm, out_process_data = out_process_data, overwrite, dbcon = dbcon) + dbstatus <- + remotedata_db_check( + raw_file_name = raw_file_name, + start = start, + end = end, + req_start = req_start, + req_end = req_end, + siteid = siteid, + siteid_short = siteid_short, + out_get_data = out_get_data, + algorithm = algorithm, + out_process_data = out_process_data, + overwrite, + dbcon = dbcon + ) - remotefile_check_flag <- dbstatus[[1]] start <- dbstatus[[2]] @@ -165,7 +178,8 @@ remote_process <- function(settings) { - outdir <- file.path(outdir, paste(source, "site", siteid_short, sep = "_")) + outdir <- + file.path(outdir, paste(source, "site", siteid_short, sep = "_")) fcn.args <- list() fcn.args$coords <- coords @@ -198,273 +212,36 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - + # inserting data in the DB - if (!is.null(out_process_data)) { - # if the requested processed file already exists within the required timeline dont insert or update the DB - if (as.character(write_pro_start) == "dont write" && - as.character(write_pro_end) == "dont write") { - PEcAn.logger::logger.info("Requested processed file already exists") - pro_id <- pro_check$id - pro_path <- pro_check$file_path - raw_id <- raw_check$id - 
raw_path <- raw_check$file_path - } else{ - if (remotefile_check_flag == 1) { - # no processed and rawfile are present - PEcAn.logger::logger.info("Inserting raw and processed files for the first time") - # insert processed data - pro_ins <- - PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, - in.prefix = output$process_data_name, - siteid = siteid, - startdate = write_pro_start, - enddate = write_pro_end, - mimetype = pro_mimetype, - formatname = pro_formatname, - con = dbcon - ) - # insert raw file - raw_ins <- - PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, - in.prefix = output$raw_data_name, - siteid = siteid, - startdate = write_raw_start, - enddate = write_raw_end, - mimetype = raw_mimetype, - formatname = raw_formatname, - con = dbcon - ) - pro_id <- pro_ins$input.id - raw_id <- raw_ins$input.id - pro_path <- output$process_data_path - raw_path <- output$raw_data_path - } else if (remotefile_check_flag == 2) { - # requested processed file does not exist but the raw file used to create it exists within the required timeline - PEcAn.logger::logger.info("Inserting processed file for the first time") - pro_ins <- - PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, - in.prefix = output$process_data_name, - siteid = siteid, - startdate = write_pro_start, - enddate = write_pro_end, - mimetype = pro_mimetype, - formatname = pro_formatname, - con = dbcon - ) - raw_id <- raw_check$id - raw_path <- raw_check$file_path - pro_id <- pro_ins$input.id - pro_path <- output$process_data_path - } else if (remotefile_check_flag == 3) { - # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates - pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, - in.prefix = output$process_data_name, - siteid = siteid, - startdate = write_pro_start, - enddate = write_pro_end, - mimetype = pro_mimetype, - formatname = pro_formatname, - con = dbcon - ) - raw_id <- raw_check$id - PEcAn.DB::db.query( - sprintf( - "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", - write_raw_start, - write_raw_end, - output$raw_data_name, - raw_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$raw_data_path, - output$raw_data_name, - raw_id - ), - dbcon - ) - pro_id <- pro_ins$input.id - pro_path <- output$process_data_path - } else if (remotefile_check_flag == 4) { - # requested processed and raw files are present and have to be updated - pro_id <- pro_check$id - raw_id <- raw_check$id - raw_path <- output$raw_data_path - pro_path <- output$process_data_path - PEcAn.logger::logger.info("Updating processed and raw files") - PEcAn.DB::db.query( - sprintf( - "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", - write_pro_start, - write_pro_end, - output$process_data_name, - pro_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$process_data_path, - output$process_data_name, - pro_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", - write_raw_start, - write_raw_end, - output$raw_data_name, - raw_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$raw_data_path, - output$raw_data_name, - 
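
Across all of the remotefile_check_flag cases, the DB work reduces to one of two actions: insert a new record with `PEcAn.DB::dbfile.input.insert`, or refresh an existing record with paired UPDATEs on `inputs` and `dbfiles`. A condensed sketch of that rule; the wrapper `write_input_record` is hypothetical, but the `PEcAn.DB` calls and SQL are the ones used in this patch:

```r
write_input_record <- function(new_record, path, name, siteid,
                               start, end, mimetype, formatname,
                               id = NULL, con) {
  if (new_record) {
    # brand-new record: one insert covers both tables
    ins <- PEcAn.DB::dbfile.input.insert(
      in.path    = path,
      in.prefix  = name,
      siteid     = siteid,
      startdate  = start,
      enddate    = end,
      mimetype   = mimetype,
      formatname = formatname,
      con        = con
    )
    ins$input.id
  } else {
    # existing record: refresh the date range and name, then repoint the file
    PEcAn.DB::db.query(
      sprintf(
        "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;",
        start, end, name, id
      ),
      con
    )
    PEcAn.DB::db.query(
      sprintf(
        "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
        path, name, id
      ),
      con
    )
    id
  }
}
```
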
raw_id - ), - dbcon - ) - } else if (remotefile_check_flag == 5) { - # raw file required for creating the processed file exists and the processed file needs to be updated - pro_id <- pro_check$id - pro_path <- output$process_data_path - raw_id <- raw_check$id - raw_path <- raw_check$file_path - PEcAn.logger::logger.info("Updating the existing processed file") - PEcAn.DB::db.query( - sprintf( - "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", - write_pro_start, - write_pro_end, - output$process_data_name, - pro_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$process_data_path, - output$process_data_name, - pro_id - ), - dbcon - ) - } else if (remotefile_check_flag == 6) { - # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file - pro_id <- pro_check$id - pro_path <- output$process_data_path - raw_path <- output$raw_data_path - PEcAn.logger::logger.info("Replacing the existing processed file and creating a new raw file") - PEcAn.DB::db.query( - sprintf( - "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", - write_pro_start, - write_pro_end, - output$process_data_name, - pro_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$process_data_path, - output$process_data_name, - pro_id - ), - dbcon - ) - raw_ins <- - PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, - in.prefix = output$raw_data_name, - siteid = siteid, - startdate = write_raw_start, - enddate = write_raw_end, - mimetype = raw_mimetype, - formatname = raw_formatname, - con = dbcon - ) - raw_id <- raw_ins$input.id - } - } - } - else{ - # if the requested raw file already exists within the required timeline dont insert or update the DB - if (as.character(write_raw_start) == "dont write" && - as.character(write_raw_end) == "dont write") { - PEcAn.logger::logger.info("Requested raw file already exists") - raw_id <- raw_check$id - raw_path <- raw_check$file_path - } else{ - if (remotefile_check_flag == 1) { - PEcAn.logger::logger.info(("Inserting raw file for the first time")) - raw_ins <- - PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, - in.prefix = output$raw_data_name, - siteid = siteid, - startdate = write_raw_start, - enddate = write_raw_end, - mimetype = raw_mimetype, - formatname = raw_formatname, - con = dbcon - ) - raw_id <- raw_ins$input.id - raw_path <- output$raw_data_path - } else{ - PEcAn.logger::logger.info("Updating raw file") - raw_id <- raw_check$id - raw_path <- output$raw_data_path - PEcAn.DB::db.query( - sprintf( - "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", - write_raw_start, - write_raw_end, - output$raw_data_name, - raw_id - ), - dbcon - ) - PEcAn.DB::db.query( - sprintf( - "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$raw_data_path, - output$raw_data_name, - raw_id - ), - dbcon - ) - } - } - } - - - + db_out <- + remotedata_db_insert( + output = output, + remotefile_check_flag = remotefile_check_flag, + siteid = siteid, + out_get_data = out_get_data, + out_process_data = out_process_data, + write_raw_start = write_raw_start, + write_raw_end = write_raw_end, + write_pro_start = write_pro_start, + write_pro_end = write_pro_end, + raw_check = raw_check, + pro_check = 
pro_check, + raw_mimetype = raw_mimetype, + raw_formatname = raw_formatname, + pro_mimetype = pro_mimetype, + pro_formatname = pro_formatname, + dbcon = dbcon + ) if (!is.null(out_get_data)) { - settings$remotedata$raw_id <- raw_id - settings$remotedata$raw_path <- raw_path + settings$remotedata$raw_id <- db_out[[1]] + settings$remotedata$raw_path <- db_out[[2]] } if (!is.null(out_process_data)) { - settings$remotedata$pro_id <- pro_id - settings$remotedata$pro_path <- pro_path + settings$remotedata$pro_id <- db_out[[3]] + settings$remotedata$pro_path <- db_out[[4]] } return (settings) } @@ -574,10 +351,10 @@ set_stage <- function(result, req_start, req_end, stage) { } -##' set details after checking db +##' check the status of the requested data in the DB ##' -##' @name remote_data_dbcheck -##' @title remote_data_dbcheck +##' @name remotedata_db_check +##' @title remotedata_db_check ##' @param raw_file_name raw_file_name ##' @param start start ##' @param end end @@ -590,10 +367,10 @@ set_stage <- function(result, req_start, req_end, stage) { ##' @param out_process_data out_process_data ##' @param overwrite overwrite ##' @param dbcon con -##' @return list containing information +##' @return list containing remotefile_check_flag, start, end, stage_get_data, write_raw_start, write_raw_end, raw_merge, existing_raw_file_path, stage_process_data, write_pro_start, write_pro_end, pro_merge, input_file, existing_pro_file_path, raw_check, pro_check ##' @examples ##' \dontrun{ -##' dbstatus <- remote_data_dbcheck( +##' dbstatus <- remotedata_db_check( ##' raw_file_name, ##' start, ##' end, @@ -601,47 +378,80 @@ set_stage <- function(result, req_start, req_end, stage) { ##' req_end, ##' siteid, ##' siteid_short, +##' out_get_data, ##' algorithm, ##' out_process_data, -##' con) +##' overwrite +##' dbcon) ##' } ##' @author Ayush Prasad -remote_data_dbcheck <- function(raw_file_name, start, end, req_start, req_end, siteid, siteid_short, out_get_data, algorithm, out_process_data, overwrite, dbcon){ - - input_file <- NULL - stage_get_data <- NULL - stage_process_data <- NULL - raw_merge <- NULL - pro_merge <- NULL - existing_raw_file_path <- NULL - existing_pro_file_path <- NULL - write_raw_start <- NULL - write_raw_end <- NULL - write_pro_start <- NULL - write_pro_end <- NULL - raw_check <- NULL - pro_check <- NULL - remotefile_check_flag <- NULL - - existing_data <- - PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) - if (nrow(existing_data) >= 1) { - if (overwrite) { - PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") - if (!is.null(out_process_data)) { - pro_file_name = paste0(algorithm, - "_", - out_process_data, - "_site_", - siteid_short) - if (nrow(pro_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - pro_file_name - ), - dbcon - )) == 1) { +remotedata_db_check <- + function(raw_file_name, + start, + end, + req_start, + req_end, + siteid, + siteid_short, + out_get_data, + algorithm, + out_process_data, + overwrite, + dbcon) { + input_file <- NULL + stage_get_data <- NULL + stage_process_data <- NULL + raw_merge <- NULL + pro_merge <- NULL + existing_raw_file_path <- NULL + existing_pro_file_path <- NULL + write_raw_start <- NULL + write_raw_end <- NULL + write_pro_start <- NULL + write_pro_end <- NULL + raw_check <- NULL + 
pro_check <- NULL + remotefile_check_flag <- NULL + + existing_data <- + PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE site_id=", siteid), dbcon) + if (nrow(existing_data) >= 1) { + if (overwrite) { + PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") + if (!is.null(out_process_data)) { + pro_file_name = paste0(algorithm, + "_", + out_process_data, + "_site_", + siteid_short) + if (nrow(pro_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + pro_file_name + ), + dbcon + )) == 1) { + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + remotefile_check_flag <- 4 + } else{ + remotefile_check_flag <- 6 + } + } else{ + remotefile_check_flag <- 1 + } + stage_process_data <- TRUE + pro_merge <- "repace" + write_pro_start <- start + write_pro_end <- end + } else if (!is.null(out_get_data)) { if (nrow(raw_check <- PEcAn.DB::db.query( sprintf( @@ -650,261 +460,578 @@ remote_data_dbcheck <- function(raw_file_name, start, end, req_start, req_end, s ), dbcon )) == 1) { - remotefile_check_flag <- 4 + remotefile_check_flag <- 2 } else{ - remotefile_check_flag <- 6 + remotefile_check_flag <- 1 } - } else{ - remotefile_check_flag <- 1 } - stage_process_data <- TRUE - pro_merge <- "repace" - write_pro_start <- start - write_pro_end <- end - } else if (!is.null(out_get_data)) { - if (nrow(raw_check <- + stage_get_data <- TRUE + start <- req_start + end <- req_end + write_raw_start <- start + write_raw_end <- end + raw_merge <- "replace" + existing_pro_file_path <- NULL + existing_raw_file_path <- NULL + } else if (!is.null(out_process_data)) { + # if processed data is requested, example LAI + + # construct processed file name + pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) + + # check if processed file exists + if (nrow(pro_check <- PEcAn.DB::db.query( sprintf( "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name + pro_file_name ), dbcon )) == 1) { - remotefile_check_flag <- 0 - } else{ - remotefile_check_flag <- 1 - } - } - stage_get_data <- TRUE - start <- req_start - end <- req_end - write_raw_start <- start - write_raw_end <- end - raw_merge <- "replace" - existing_pro_file_path <- NULL - existing_raw_file_path <- NULL - } else if (!is.null(out_process_data)) { - # if processed data is requested, example LAI - - # construct processed file name - pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short) - - # check if processed file exists - if (nrow(pro_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - pro_file_name - ), - dbcon - )) == 1) { - datalist <- - set_stage(pro_check, req_start, req_end, stage_process_data) - pro_start <- as.character(datalist[[1]]) - pro_end <- as.character(datalist[[2]]) - write_pro_start 
<- datalist[[5]] - write_pro_end <- datalist[[6]] - if (pro_start != "dont write" || pro_end != "dont write") { - stage_process_data <- datalist[[3]] - pro_merge <- datalist[[4]] - if (pro_merge == TRUE) { - existing_pro_file_path <- pro_check$file_path - } - if (stage_process_data == TRUE) { - # check about the status of raw file - raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - ) - if (!is.null(raw_check$start_date) && - !is.null(raw_check$end_date)) { - raw_datalist <- - set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- as.character(raw_datalist[[1]]) - end <- as.character(raw_datalist[[2]]) - write_raw_start <- raw_datalist[[5]] - write_raw_end <- raw_datalist[[6]] - stage_get_data <- raw_datalist[[3]] - raw_merge <- raw_datalist[[4]] - if (stage_get_data == FALSE) { - input_file <- raw_check$file_path - } - remotefile_check_flag <- 4 - if (raw_merge == TRUE) { - existing_raw_file_path <- raw_check$file_path - } - if (pro_merge == TRUE && stage_get_data == FALSE) { - remotefile_check_flag <- 5 - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date + datalist <- + set_stage(pro_check, req_start, req_end, stage_process_data) + pro_start <- as.character(datalist[[1]]) + pro_end <- as.character(datalist[[2]]) + write_pro_start <- datalist[[5]] + write_pro_end <- datalist[[6]] + if (pro_start != "dont write" || pro_end != "dont write") { + stage_process_data <- datalist[[3]] + pro_merge <- datalist[[4]] + if (pro_merge == TRUE) { + existing_pro_file_path <- pro_check$file_path + } + if (stage_process_data == TRUE) { + # check about the status of raw file + raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + ) + if (!is.null(raw_check$start_date) && + !is.null(raw_check$end_date)) { + raw_datalist <- + set_stage(raw_check, pro_start, pro_end, stage_get_data) + start <- as.character(raw_datalist[[1]]) + end <- as.character(raw_datalist[[2]]) + write_raw_start <- raw_datalist[[5]] + write_raw_end <- raw_datalist[[6]] + stage_get_data <- raw_datalist[[3]] + raw_merge <- raw_datalist[[4]] + if (stage_get_data == FALSE) { + input_file <- raw_check$file_path + } + remotefile_check_flag <- 4 + if (raw_merge == TRUE) { + existing_raw_file_path <- raw_check$file_path + } + if (pro_merge == TRUE && stage_get_data == FALSE) { + remotefile_check_flag <- 5 + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + existing_pro_file_path <- NULL + pro_merge <- FALSE + } + } else{ + # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted + remotefile_check_flag <- 6 + write_raw_start <- req_start + write_raw_end <- req_end + start <- req_start + end <- req_end + stage_get_data <- TRUE + existing_raw_file_path <- NULL + write_pro_start <- write_raw_start + write_pro_end <- write_raw_end + pro_merge <- "replace" existing_pro_file_path <- NULL - pro_merge <- FALSE } - } else{ - # this case happens when the processed file has to be extended but the raw file used to create the existing processed file has been deleted - 
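
At this point in the series, every branch unpacks the six-element `set_stage` result by position. With mock values, the mapping the assignments above rely on is:

```r
# mock set_stage result; the dates are made up for illustration
datalist <- list("2010-01-01", "2018-12-31", TRUE, TRUE,
                 "2010-01-01", "2018-12-31")
start           <- as.character(datalist[[1]])  # effective download start
end             <- as.character(datalist[[2]])  # effective download end
stage_get_data  <- datalist[[3]]                # is a download needed?
raw_merge       <- datalist[[4]]                # TRUE, FALSE, or "replace"
write_raw_start <- datalist[[5]]                # date range recorded in BETY
write_raw_end   <- datalist[[6]]
```

A later commit in this same series replaces the positional access with named elements.
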
remotefile_check_flag <- 6 - write_raw_start <- req_start - write_raw_end <- req_end - start <- req_start - end <- req_end - stage_get_data <- TRUE - existing_raw_file_path <- NULL - write_pro_start <- write_raw_start - write_pro_end <- write_raw_end - pro_merge <- "replace" - existing_pro_file_path <- NULL + } + } else{ + # requested file already exists + pro_id <- pro_check$id + pro_path <- pro_check$file_path + if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + raw_id <- raw_check$id + raw_path <- raw_check$file_path } } - } else{ - # requested file already exists - pro_id <- pro_check$id - pro_path <- pro_check$file_path - if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - raw_id <- raw_check$id - raw_path <- raw_check$file_path + } + else if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + # if the processed file does not exist in the DB check if the raw file required for creating it is present + PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") + datalist <- + set_stage(raw_check, req_start, req_end, stage_get_data) + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- datalist[[3]] + if (stage_get_data == FALSE) { + input_file <- raw_check$file_path + write_pro_start <- raw_check$start_date + write_pro_end <- raw_check$end_date + remotefile_check_flag <- 2 } + raw_merge <- datalist[[4]] + stage_process_data <- TRUE + pro_merge <- FALSE + if (raw_merge == TRUE || raw_merge == "replace") { + existing_raw_file_path = raw_check$file_path + remotefile_check_flag <- 3 + } else{ + existing_raw_file_path = NULL + } + } else{ + # if no processed or raw file of requested type exists + start <- req_start + end <- req_end + write_raw_start <- req_start + write_raw_end <- req_end + write_pro_start <- req_start + write_pro_end <- req_end + stage_get_data <- TRUE + raw_merge <- FALSE + existing_raw_file_path <- NULL + stage_process_data <- TRUE + pro_merge <- FALSE + existing_pro_file_path <- NULL + remotefile_check_flag <- 1 } - } - else if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - # if the processed file does not exist in the DB check if the raw file required for creating it is present - PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") + } else if (nrow(raw_check <- + PEcAn.DB::db.query( + sprintf( + "SELECT inputs.id, inputs.site_id, 
inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + raw_file_name + ), + dbcon + )) == 1) { + # if only raw data is requested datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- datalist[[3]] - if (stage_get_data == FALSE) { - input_file <- raw_check$file_path - write_pro_start <- raw_check$start_date - write_pro_end <- raw_check$end_date - remotefile_check_flag <- 2 + start <- as.character(datalist[[1]]) + end <- as.character(datalist[[2]]) + stage_get_data <- datalist[[3]] + raw_merge <- datalist[[4]] + write_raw_start <- datalist[[5]] + write_raw_end <- datalist[[6]] + stage_process_data <- FALSE + if (as.character(write_raw_start) == "dont write" && + as.character(write_raw_end) == "dont write") { + raw_id <- raw_check$id + raw_path <- raw_check$file_path } - raw_merge <- datalist[[4]] - stage_process_data <- TRUE - pro_merge <- FALSE - if (raw_merge == TRUE || raw_merge == "replace") { - existing_raw_file_path = raw_check$file_path - remotefile_check_flag <- 3 + if (raw_merge == TRUE) { + existing_raw_file_path <- raw_check$file_path + remotefile_check_flag <- 2 } else{ - existing_raw_file_path = NULL + existing_raw_file_path <- NULL } - } else{ - # if no processed or raw file of requested type exists - start <- req_start - end <- req_end - write_raw_start <- req_start - write_raw_end <- req_end - write_pro_start <- req_start - write_pro_end <- req_end - stage_get_data <- TRUE - raw_merge <- FALSE - existing_raw_file_path <- NULL - stage_process_data <- TRUE - pro_merge <- FALSE existing_pro_file_path <- NULL - remotefile_check_flag <- 1 - } - } else if (nrow(raw_check <- - PEcAn.DB::db.query( - sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", - raw_file_name - ), - dbcon - )) == 1) { - # if only raw data is requested - datalist <- - set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - stage_get_data <- datalist[[3]] - raw_merge <- datalist[[4]] - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] - stage_process_data <- FALSE - if (as.character(write_raw_start) == "dont write" && - as.character(write_raw_end) == "dont write") { - raw_id <- raw_check$id - raw_path <- raw_check$file_path - } - if (raw_merge == TRUE) { - existing_raw_file_path <- raw_check$file_path } else{ - existing_raw_file_path <- NULL + # no data of requested type exists + PEcAn.logger::logger.info("Requested data does not exist in the DB, retrieving for the first time") + remotefile_check_flag <- 1 + start <- req_start + end <- req_end + if (!is.null(out_get_data)) { + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path <- NULL + } + if (!is.null(out_process_data)) { + stage_process_data <- TRUE + write_pro_start <- req_start + write_pro_end <- req_end + pro_merge <- FALSE + process_file_name <- NULL + existing_pro_file_path <- NULL + remotefile_check_flag <- 1 + } } - existing_pro_file_path <- NULL + } else{ - # no data of requested type exists 
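
`set_stage` signals "nothing to download" by returning the magic string "dont write" for both write dates, and callers compare against it verbatim, so the string must match exactly. A minimal runnable illustration with mock values:

```r
write_raw_start <- "dont write"   # mock set_stage output
write_raw_end   <- "dont write"
if (as.character(write_raw_start) == "dont write" &&
    as.character(write_raw_end) == "dont write") {
  message("requested range already covered; reusing the existing inputs record")
}
```
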
- PEcAn.logger::logger.info("Requested data does not exist in the DB, retrieving for the first time") + # db is completely empty for the given siteid + PEcAn.logger::logger.info("DB is completely empty for this site") remotefile_check_flag <- 1 - start <- req_start - end <- req_end - if (!is.null(out_get_data)) { - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path <- NULL - } + start <- req_start + end <- req_end + stage_get_data <- TRUE + write_raw_start <- req_start + write_raw_end <- req_end + raw_merge <- FALSE + existing_raw_file_path <- NULL if (!is.null(out_process_data)) { stage_process_data <- TRUE write_pro_start <- req_start write_pro_end <- req_end pro_merge <- FALSE - process_file_name <- NULL existing_pro_file_path <- NULL - remotefile_check_flag <- 1 } } - } else{ - # db is completely empty for the given siteid - PEcAn.logger::logger.info("DB is completely empty for this site") - remotefile_check_flag <- 1 - start <- req_start - end <- req_end - stage_get_data <- TRUE - write_raw_start <- req_start - write_raw_end <- req_end - raw_merge <- FALSE - existing_raw_file_path <- NULL - if (!is.null(out_process_data)) { - stage_process_data <- TRUE - write_pro_start <- req_start - write_pro_end <- req_end - pro_merge <- FALSE - existing_pro_file_path <- NULL - } + return( + list( + remotefile_check_flag, + start, + end, + stage_get_data, + write_raw_start, + write_raw_end, + raw_merge, + existing_raw_file_path, + stage_process_data, + write_pro_start, + write_pro_end, + pro_merge, + input_file, + existing_pro_file_path, + raw_check, + pro_check + ) + ) + } - - return(list(remotefile_check_flag, start, end, stage_get_data, write_raw_start, write_raw_end, raw_merge, existing_raw_file_path, stage_process_data, write_pro_start, write_pro_end, pro_merge, input_file, existing_pro_file_path, raw_check, pro_check)) - -} - +##' Insert the output data returned from rp_control into BETYdb +##' +##' @name remotedata_db_insert +##' @param output output list from rp_control +##' @param remotefile_check_flag remotefile_check_flag +##' @param siteid siteid +##' @param out_get_data out_get_data +##' @param out_process_data out_process_data +##' @param write_raw_start write_raw_start +##' @param write_raw_end write_raw_end +##' @param write_pro_start write_pro_start +##' @param write_pro_end write_pro_end +##' @param raw_check raw_check +##' @param pro_check pro_check +##' @param raw_mimetype raw_mimetype +##' @param raw_formatname raw_formatname +##' @param pro_mimetype pro_mimetype +##' @param pro_formatname pro_formatname +##' @param dbcon dbcon +##' +##' @return list containing raw_id, raw_path, pro_id, pro_path +##' +##' @examples +##' \dontrun{ +##' db_out <- remotedata_db_insert( +##' output, +##' remotefile_check_flag, +##' siteid, +##' out_get_data, +##' out_process_data, +##' write_raw_start, +##' write_raw_end, +##' write_pro_start, +##' write_pro_end, +##' raw_check, +##' pro_check +##' raw_mimetype, +##' raw_formatname, +##' pro_mimetype, +##' pro_formatname, +##' dbcon) +##' } +remotedata_db_insert <- + function(output, + remotefile_check_flag, + siteid, + out_get_data, + out_process_data, + write_raw_start, + write_raw_end, + write_pro_start, + write_pro_end, + raw_check, + pro_check, + raw_mimetype, + raw_formatname, + pro_mimetype, + pro_formatname, + dbcon) { + pro_id <- NULL + pro_path <- NULL + + if (!is.null(out_process_data)) { + # if the requested processed file already exists within the required 
timeline dont insert or update the DB + if (as.character(write_pro_start) == "dont write" && + as.character(write_pro_end) == "dont write") { + PEcAn.logger::logger.info("Requested processed file already exists") + pro_id <- pro_check$id + pro_path <- pro_check$file_path + raw_id <- raw_check$id + raw_path <- raw_check$file_path + } else{ + if (remotefile_check_flag == 1) { + # no processed and rawfile are present + PEcAn.logger::logger.info("Inserting raw and processed files for the first time") + # insert processed data + pro_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, + formatname = pro_formatname, + con = dbcon + ) + # insert raw file + raw_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, + formatname = raw_formatname, + con = dbcon + ) + pro_id <- pro_ins$input.id + raw_id <- raw_ins$input.id + pro_path <- output$process_data_path + raw_path <- output$raw_data_path + } else if (remotefile_check_flag == 2) { + # requested processed file does not exist but the raw file used to create it exists within the required timeline + PEcAn.logger::logger.info("Inserting processed file for the first time") + pro_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, + formatname = pro_formatname, + con = dbcon + ) + raw_id <- raw_check$id + raw_path <- raw_check$file_path + pro_id <- pro_ins$input.id + pro_path <- output$process_data_path + } else if (remotefile_check_flag == 3) { + # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates + pro_ins <- PEcAn.DB::dbfile.input.insert( + in.path = output$process_data_path, + in.prefix = output$process_data_name, + siteid = siteid, + startdate = write_pro_start, + enddate = write_pro_end, + mimetype = pro_mimetype, + formatname = pro_formatname, + con = dbcon + ) + raw_id <- raw_check$id + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_raw_start, + write_raw_end, + output$raw_data_name, + raw_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$raw_data_path, + output$raw_data_name, + raw_id + ), + dbcon + ) + pro_id <- pro_ins$input.id + pro_path <- output$process_data_path + } else if (remotefile_check_flag == 4) { + # requested processed and raw files are present and have to be updated + pro_id <- pro_check$id + raw_id <- raw_check$id + raw_path <- output$raw_data_path + pro_path <- output$process_data_path + PEcAn.logger::logger.info("Updating processed and raw files") + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_pro_start, + write_pro_end, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$process_data_path, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET 
start_date='%s', end_date='%s', name='%s' WHERE id=%f", + write_raw_start, + write_raw_end, + output$raw_data_name, + raw_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$raw_data_path, + output$raw_data_name, + raw_id + ), + dbcon + ) + } else if (remotefile_check_flag == 5) { + # raw file required for creating the processed file exists and the processed file needs to be updated + pro_id <- pro_check$id + pro_path <- output$process_data_path + raw_id <- raw_check$id + raw_path <- raw_check$file_path + PEcAn.logger::logger.info("Updating the existing processed file") + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_pro_start, + write_pro_end, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$process_data_path, + output$process_data_name, + pro_id + ), + dbcon + ) + } else if (remotefile_check_flag == 6) { + # there is some existing processed file but the raw file used to create it is now deleted, replace the processed file entirely with the one created from new raw file + pro_id <- pro_check$id + pro_path <- output$process_data_path + raw_path <- output$raw_data_path + PEcAn.logger::logger.info("Replacing the existing processed file and creating a new raw file") + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_pro_start, + write_pro_end, + output$process_data_name, + pro_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$process_data_path, + output$process_data_name, + pro_id + ), + dbcon + ) + raw_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, + formatname = raw_formatname, + con = dbcon + ) + raw_id <- raw_ins$input.id + } + } + } + else{ + # if the requested raw file already exists within the required timeline dont insert or update the DB + if (as.character(write_raw_start) == "dont write" && + as.character(write_raw_end) == "dont write") { + PEcAn.logger::logger.info("Requested raw file already exists") + raw_id <- raw_check$id + raw_path <- raw_check$file_path + } else{ + if (remotefile_check_flag == 1) { + PEcAn.logger::logger.info(("Inserting raw file for the first time")) + raw_ins <- + PEcAn.DB::dbfile.input.insert( + in.path = output$raw_data_path, + in.prefix = output$raw_data_name, + siteid = siteid, + startdate = write_raw_start, + enddate = write_raw_end, + mimetype = raw_mimetype, + formatname = raw_formatname, + con = dbcon + ) + raw_id <- raw_ins$input.id + raw_path <- output$raw_data_path + } else if (remotefile_check_flag == 2) { + PEcAn.logger::logger.info("Updating raw file") + raw_id <- raw_check$id + raw_path <- output$raw_data_path + PEcAn.DB::db.query( + sprintf( + "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", + write_raw_start, + write_raw_end, + output$raw_data_name, + raw_id + ), + dbcon + ) + PEcAn.DB::db.query( + sprintf( + "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", + output$raw_data_path, + output$raw_data_name, + raw_id + ), + dbcon + ) + } + } + } + + return(list(raw_id, raw_path, pro_id, 
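
The six processed-data cases handled in this function collapse to a small action table. A runnable paraphrase of the flag comments in this series; the action strings are summaries, not code from the patch:

```r
remotefile_check_flag <- 4
switch(as.character(remotefile_check_flag),
  "1" = "insert new raw file and new processed file",
  "2" = "reuse existing raw file, insert new processed file",
  "3" = "update existing raw file, insert new processed file",
  "4" = "update existing raw file, update existing processed file",
  "5" = "reuse existing raw file, update existing processed file",
  "6" = "insert new raw file, replace existing processed file"
)
#> [1] "update existing raw file, update existing processed file"
```
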
pro_path)) + } \ No newline at end of file diff --git a/modules/data.remote/man/remotedata_db_check.Rd b/modules/data.remote/man/remotedata_db_check.Rd new file mode 100644 index 00000000000..baebd4f9f86 --- /dev/null +++ b/modules/data.remote/man/remotedata_db_check.Rd @@ -0,0 +1,72 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/remote_process.R +\name{remotedata_db_check} +\alias{remotedata_db_check} +\title{remotedata_db_check} +\usage{ +remotedata_db_check( + raw_file_name, + start, + end, + req_start, + req_end, + siteid, + siteid_short, + out_get_data, + algorithm, + out_process_data, + overwrite, + dbcon +) +} +\arguments{ +\item{raw_file_name}{raw_file_name} + +\item{start}{start} + +\item{end}{end} + +\item{req_start}{req_start} + +\item{req_end}{req_end} + +\item{siteid}{siteid} + +\item{siteid_short}{siteid_short} + +\item{out_get_data}{out_get_data} + +\item{algorithm}{algorithm} + +\item{out_process_data}{out_process_data} + +\item{overwrite}{overwrite} + +\item{dbcon}{con} +} +\value{ +list containing remotefile_check_flag, start, end, stage_get_data, write_raw_start, write_raw_end, raw_merge, existing_raw_file_path, stage_process_data, write_pro_start, write_pro_end, pro_merge, input_file, existing_pro_file_path, raw_check, pro_check +} +\description{ +check the status of the requested data in the DB +} +\examples{ +\dontrun{ +dbstatus <- remotedata_db_check( + raw_file_name, + start, + end, + req_start, + req_end, + siteid, + siteid_short, + out_get_data, + algorithm, + out_process_data, + overwrite + dbcon) +} +} +\author{ +Ayush Prasad +} diff --git a/modules/data.remote/man/remotedata_db_insert.Rd b/modules/data.remote/man/remotedata_db_insert.Rd new file mode 100644 index 00000000000..cc029ef402a --- /dev/null +++ b/modules/data.remote/man/remotedata_db_insert.Rd @@ -0,0 +1,85 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/remote_process.R +\name{remotedata_db_insert} +\alias{remotedata_db_insert} +\title{Insert the output data returned from rp_control into BETYdb} +\usage{ +remotedata_db_insert( + output, + remotefile_check_flag, + siteid, + out_get_data, + out_process_data, + write_raw_start, + write_raw_end, + write_pro_start, + write_pro_end, + raw_check, + pro_check, + raw_mimetype, + raw_formatname, + pro_mimetype, + pro_formatname, + dbcon +) +} +\arguments{ +\item{output}{output list from rp_control} + +\item{remotefile_check_flag}{remotefile_check_flag} + +\item{siteid}{siteid} + +\item{out_get_data}{out_get_data} + +\item{out_process_data}{out_process_data} + +\item{write_raw_start}{write_raw_start} + +\item{write_raw_end}{write_raw_end} + +\item{write_pro_start}{write_pro_start} + +\item{write_pro_end}{write_pro_end} + +\item{raw_check}{raw_check} + +\item{pro_check}{pro_check} + +\item{raw_mimetype}{raw_mimetype} + +\item{raw_formatname}{raw_formatname} + +\item{pro_mimetype}{pro_mimetype} + +\item{pro_formatname}{pro_formatname} + +\item{dbcon}{dbcon} +} +\value{ +list containing raw_id, raw_path, pro_id, pro_path +} +\description{ +Insert the output data returned from rp_control into BETYdb +} +\examples{ +\dontrun{ +db_out <- remotedata_db_insert( + output, + remotefile_check_flag, + siteid, + out_get_data, + out_process_data, + write_raw_start, + write_raw_end, + write_pro_start, + write_pro_end, + raw_check, + pro_check + raw_mimetype, + raw_formatname, + pro_mimetype, + pro_formatname, + dbcon) +} +} From ba5c11353e402a50a2337f8020a1edc305b9d08e Mon Sep 17 00:00:00 2001 From: Ayush 
Prasad Date: Fri, 21 Aug 2020 10:01:33 +0530 Subject: [PATCH 1410/2289] more refactoring --- modules/data.remote/R/remote_process.R | 141 ++++++++++-------- modules/data.remote/man/remote_process.Rd | 2 +- .../data.remote/man/remotedata_db_check.Rd | 18 +-- .../data.remote/man/remotedata_db_insert.Rd | 9 +- 4 files changed, 91 insertions(+), 79 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 96948cb88a5..f478a3002ac 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -1,4 +1,4 @@ -##' call rp_control (from RpTools py package) from PEcAn workflow and store the output in BETY +##' call rp_control (from RpTools Python package) and store the output in BETY ##' ##' @name remote_process ##' @title remote_process @@ -14,13 +14,13 @@ ##' remote_process <- function(settings) { - # Information about the date variables used in remote_process - + # Information about the date variables used in remote_process: # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB # start, end : effective start, end dates created after checking the DB status. These dates are sent to rp_control for downloading and processing data # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB # the "pro" version of these variables have the same meaning and are used to refer to the processed file - # The value of remotefile_check_flag denotes the following cases : + # The value of remotefile_check_flag denotes the following cases: # When processed file is requested, # 1 - There are no existing raw and processed files of the requested type in the DB @@ -110,8 +110,6 @@ remote_process <- function(settings) { dbcon <- PEcAn.DB::db.open(settings$database$bety) on.exit(PEcAn.DB::db.close(dbcon), add = TRUE) - remotefile_check_flag <- 0 - # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names collection_lut <- data.frame( stringsAsFactors = FALSE, @@ -129,36 +127,26 @@ remote_process <- function(settings) { collection = unname(getpecancode[collection]) } - req_start <- start - req_end <- end - + # construct raw file name raw_file_name <- construct_raw_filename(collection, siteid_short, scale, projection, qc) - coords <- - unlist(PEcAn.DB::db.query( - sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), - con = dbcon - ), use.names = FALSE) # check if any data is already present in the inputs table dbstatus <- remotedata_db_check( - raw_file_name = raw_file_name, - start = start, - end = end, - req_start = req_start, - req_end = req_end, - siteid = siteid, - siteid_short = siteid_short, - out_get_data = out_get_data, - algorithm = algorithm, - out_process_data = out_process_data, - overwrite, - dbcon = dbcon + raw_file_name = raw_file_name, + start = start, + end = end, + siteid = siteid, + siteid_short = siteid_short, + out_get_data = out_get_data, + algorithm = algorithm, + out_process_data = out_process_data, + overwrite = overwrite, + dbcon = dbcon ) - remotefile_check_flag <- dbstatus[[1]] start <- dbstatus[[2]] end <- dbstatus[[3]] @@ -176,10 +164,16 @@ remote_process <- function(settings) { raw_check <- dbstatus[[15]] pro_check <- dbstatus[[16]] - - + # construct outdir path outdir <- file.path(outdir, paste(source, "site", siteid_short, sep = "_")) + + # extract the AOI of the site from BETYdb + coords <- + 
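
The short site ids used in raw file and directory names split the numeric BETY id around 10^9; the "0-721" form also appears in the corrected roxygen example later in this series. This matches the `siteid_short` construction in `remote_process`:

```r
siteid <- 721                                   # any BETY site id
siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09)
siteid_short
#> [1] "0-721"
```
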
unlist(PEcAn.DB::db.query( + sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), + con = dbcon + ), use.names = FALSE) fcn.args <- list() fcn.args$coords <- coords @@ -213,36 +207,38 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - # inserting data in the DB + # insert output data in the DB db_out <- remotedata_db_insert( - output = output, + output = output, remotefile_check_flag = remotefile_check_flag, - siteid = siteid, - out_get_data = out_get_data, - out_process_data = out_process_data, - write_raw_start = write_raw_start, - write_raw_end = write_raw_end, - write_pro_start = write_pro_start, - write_pro_end = write_pro_end, - raw_check = raw_check, - pro_check = pro_check, - raw_mimetype = raw_mimetype, - raw_formatname = raw_formatname, - pro_mimetype = pro_mimetype, - pro_formatname = pro_formatname, - dbcon = dbcon + siteid = siteid, + out_get_data = out_get_data, + out_process_data = out_process_data, + write_raw_start = write_raw_start, + write_raw_end = write_raw_end, + write_pro_start = write_pro_start, + write_pro_end = write_pro_end, + raw_check = raw_check, + pro_check = pro_check, + raw_mimetype = raw_mimetype, + raw_formatname = raw_formatname, + pro_mimetype = pro_mimetype, + pro_formatname = pro_formatname, + dbcon = dbcon ) + + # return the ids and paths of the inserted data if (!is.null(out_get_data)) { settings$remotedata$raw_id <- db_out[[1]] settings$remotedata$raw_path <- db_out[[2]] } - if (!is.null(out_process_data)) { settings$remotedata$pro_id <- db_out[[3]] settings$remotedata$pro_path <- db_out[[4]] } + return (settings) } @@ -295,6 +291,7 @@ construct_raw_filename <- + ##' set dates, stage and merge status for remote data download ##' ##' @name set_stage @@ -351,22 +348,22 @@ set_stage <- function(result, req_start, req_end, stage) { } + + ##' check the status of the requested data in the DB ##' ##' @name remotedata_db_check ##' @title remotedata_db_check ##' @param raw_file_name raw_file_name -##' @param start start -##' @param end end -##' @param req_start req_start -##' @param req_end req_end -##' @param siteid siteid -##' @param siteid_short siteid_short +##' @param start start date requested by user +##' @param end end date requested by the user +##' @param siteid siteid of the site +##' @param siteid_short short form of the siteid ##' @param out_get_data out_get_data ##' @param algorithm algorithm ##' @param out_process_data out_process_data ##' @param overwrite overwrite -##' @param dbcon con +##' @param dbcon BETYdb con ##' @return list containing remotefile_check_flag, start, end, stage_get_data, write_raw_start, write_raw_end, raw_merge, existing_raw_file_path, stage_process_data, write_pro_start, write_pro_end, pro_merge, input_file, existing_pro_file_path, raw_check, pro_check ##' @examples ##' \dontrun{ @@ -374,8 +371,6 @@ set_stage <- function(result, req_start, req_end, stage) { ##' raw_file_name, ##' start, ##' end, -##' req_start, -##' req_end, ##' siteid, ##' siteid_short, ##' out_get_data, @@ -389,8 +384,6 @@ remotedata_db_check <- function(raw_file_name, start, end, - req_start, - req_end, siteid, siteid_short, out_get_data, @@ -398,6 +391,15 @@ remotedata_db_check <- out_process_data, overwrite, dbcon) { + + # Information about the date variables used: + # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB + # start, end : effective start, end dates created after 
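
`rp_control` is dispatched through a named argument list so optional tags (scale, projection, qc, credfile, ...) are attached only when present. A standalone illustration of the list-then-`do.call` pattern; `fake_rp_control` and the coordinates are made up stand-ins for the Python entry point:

```r
fake_rp_control <- function(coords, start, end, scale = NULL) {
  sprintf("fetch %s..%s (scale: %s)", start, end,
          if (is.null(scale)) "native" else scale)
}
fcn.args <- list()
fcn.args$coords <- '{"type":"Point","coordinates":[-80.56,42.64]}'
fcn.args$start  <- "2018-01-01"
fcn.args$end    <- "2018-12-31"
do.call(fake_rp_control, fcn.args)
#> [1] "fetch 2018-01-01..2018-12-31 (scale: native)"
```
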
checking the DB status. These dates are sent to rp_control for downloading and processing data + # write_raw_start, write_raw_end : start, end dates which are used while inserting and updating the DB + # the "pro" version of these variables have the same meaning and are used to refer to the processed file + + req_start <- start + req_end <- end input_file <- NULL stage_get_data <- NULL stage_process_data <- NULL @@ -722,12 +724,12 @@ remotedata_db_check <- ##' @name remotedata_db_insert ##' @param output output list from rp_control ##' @param remotefile_check_flag remotefile_check_flag -##' @param siteid siteid +##' @param siteid siteid ##' @param out_get_data out_get_data ##' @param out_process_data out_process_data -##' @param write_raw_start write_raw_start -##' @param write_raw_end write_raw_end -##' @param write_pro_start write_pro_start +##' @param write_raw_start write_raw_start, start date of the raw file +##' @param write_raw_end write_raw_end, end date of the raw file +##' @param write_pro_start write_pro_start ##' @param write_pro_end write_pro_end ##' @param raw_check raw_check ##' @param pro_check pro_check @@ -735,10 +737,10 @@ remotedata_db_check <- ##' @param raw_formatname raw_formatname ##' @param pro_mimetype pro_mimetype ##' @param pro_formatname pro_formatname -##' @param dbcon dbcon +##' @param dbcon BETYdb con ##' ##' @return list containing raw_id, raw_path, pro_id, pro_path -##' +##' @author Ayush Prasad ##' @examples ##' \dontrun{ ##' db_out <- remotedata_db_insert( @@ -776,6 +778,21 @@ remotedata_db_insert <- pro_mimetype, pro_formatname, dbcon) { + + # The value of remotefile_check_flag denotes the following cases: + + # When processed file is requested, + # 1 - There are no existing raw and processed files of the requested type in the DB + # 2 - Requested processed file does not exist, the raw file used to create is it present and matches with the requested daterange + # 3 - Requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested daterange + # 4 - Both processed and raw file of the requested type exists, but they have to be updated to match with the requested daterange + # 5 - Raw file required for creating the processed file exists with the required daterange and the processed file needs to be updated. Here the new processed file will now contain data for the entire daterange of the existing raw file + # 6 - There is a existing processed file of the requested type but the raw file used to create it has been deleted. 
Here, the raw file will be created again and the processed file will be replaced entirely with the one created from new raw file + + # When raw file is requested, + # 1 - There is no existing raw the requested type in the DB + # 2 - existing raw file will be updated + pro_id <- NULL pro_path <- NULL diff --git a/modules/data.remote/man/remote_process.Rd b/modules/data.remote/man/remote_process.Rd index 44391e1e340..60283627bdc 100644 --- a/modules/data.remote/man/remote_process.Rd +++ b/modules/data.remote/man/remote_process.Rd @@ -10,7 +10,7 @@ remote_process(settings) \item{settings}{PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data, overwrite} } \description{ -call rp_control (from RpTools py package) from PEcAn workflow and store the output in BETY +call rp_control (from RpTools Python package) and store the output in BETY } \examples{ \dontrun{ diff --git a/modules/data.remote/man/remotedata_db_check.Rd b/modules/data.remote/man/remotedata_db_check.Rd index baebd4f9f86..e7ecead4a5c 100644 --- a/modules/data.remote/man/remotedata_db_check.Rd +++ b/modules/data.remote/man/remotedata_db_check.Rd @@ -8,8 +8,6 @@ remotedata_db_check( raw_file_name, start, end, - req_start, - req_end, siteid, siteid_short, out_get_data, @@ -22,17 +20,13 @@ remotedata_db_check( \arguments{ \item{raw_file_name}{raw_file_name} -\item{start}{start} +\item{start}{start date requested by user} -\item{end}{end} +\item{end}{end date requested by the user} -\item{req_start}{req_start} +\item{siteid}{siteid of the site} -\item{req_end}{req_end} - -\item{siteid}{siteid} - -\item{siteid_short}{siteid_short} +\item{siteid_short}{short form of the siteid} \item{out_get_data}{out_get_data} @@ -42,7 +36,7 @@ remotedata_db_check( \item{overwrite}{overwrite} -\item{dbcon}{con} +\item{dbcon}{BETYdb con} } \value{ list containing remotefile_check_flag, start, end, stage_get_data, write_raw_start, write_raw_end, raw_merge, existing_raw_file_path, stage_process_data, write_pro_start, write_pro_end, pro_merge, input_file, existing_pro_file_path, raw_check, pro_check @@ -56,8 +50,6 @@ dbstatus <- remotedata_db_check( raw_file_name, start, end, - req_start, - req_end, siteid, siteid_short, out_get_data, diff --git a/modules/data.remote/man/remotedata_db_insert.Rd b/modules/data.remote/man/remotedata_db_insert.Rd index cc029ef402a..a18846b5dc6 100644 --- a/modules/data.remote/man/remotedata_db_insert.Rd +++ b/modules/data.remote/man/remotedata_db_insert.Rd @@ -34,9 +34,9 @@ remotedata_db_insert( \item{out_process_data}{out_process_data} -\item{write_raw_start}{write_raw_start} +\item{write_raw_start}{write_raw_start, start date of the raw file} -\item{write_raw_end}{write_raw_end} +\item{write_raw_end}{write_raw_end, end date of the raw file} \item{write_pro_start}{write_pro_start} @@ -54,7 +54,7 @@ remotedata_db_insert( \item{pro_formatname}{pro_formatname} -\item{dbcon}{dbcon} +\item{dbcon}{BETYdb con} } \value{ list containing raw_id, raw_path, pro_id, pro_path @@ -83,3 +83,6 @@ db_out <- remotedata_db_insert( dbcon) } } +\author{ +Ayush Prasad +} From 4294770cb30ff4aedd3fe396090fc4c080998073 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 13:21:12 +0530 Subject: [PATCH 1411/2289] Update modules/data.remote/R/remote_process.R Co-authored-by: istfer --- modules/data.remote/R/remote_process.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git 
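
The man/*.Rd hunks in these commits are generated files ("Generated by roxygen2: do not edit by hand"), so after editing the roxygen comments they are rebuilt rather than hand-edited; assuming a standard roxygen2 setup, either form regenerates them:

```r
roxygen2::roxygenise("modules/data.remote")
# or, equivalently, via devtools:
devtools::document("modules/data.remote")
```
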
a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index f478a3002ac..802f23c441d 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -258,7 +258,7 @@ remote_process <- function(settings) { ##' \dontrun{ ##' raw_file_name <- construct_raw_filename( ##' collection="s2", -##' siteid=721, +##' siteid="0-721", ##' scale=10.0 ##' projection=NULL ##' qc=1.0) @@ -1051,4 +1051,4 @@ remotedata_db_insert <- } return(list(raw_id, raw_path, pro_id, pro_path)) - } \ No newline at end of file + } From 62f73c6bc238809862bd9c29ba1562fe64c13898 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 13:32:09 +0530 Subject: [PATCH 1412/2289] roxygenize --- modules/data.remote/man/construct_raw_filename.Rd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_raw_filename.Rd index 325c9d6e546..c31af074b70 100644 --- a/modules/data.remote/man/construct_raw_filename.Rd +++ b/modules/data.remote/man/construct_raw_filename.Rd @@ -33,7 +33,7 @@ construct the raw file name \dontrun{ raw_file_name <- construct_raw_filename( collection="s2", - siteid=721, + siteid="0-721", scale=10.0 projection=NULL qc=1.0) From 7bdd6d01362e160e2fb76be614eb0d9a802f9939 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 16:09:46 +0530 Subject: [PATCH 1413/2289] use named list Co-authored-by: istfer --- modules/data.remote/R/remote_process.R | 38 ++++++++++++++++++++++++-- 1 file changed, 35 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 802f23c441d..1b2bf60f683 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -147,7 +147,22 @@ remote_process <- function(settings) { dbcon = dbcon ) - remotefile_check_flag <- dbstatus[[1]] + remotefile_check_flag <- dbstatus$remotefile_check_flag + start <- dbstatus$start + end <- dbstatus$end + stage_get_data <- dbstatus$stage_get_data + write_raw_start <- dbstatus$write_raw_start + write_raw_end <- dbstatus$write_raw_end + raw_merge <- dbstatus$raw_merge + existing_raw_file_path <- dbstatus$existing_raw_file_path + stage_process_data <- dbstatus$stage_process_data + write_pro_start <- dbstatus$write_pro_start + write_pro_end <- dbstatus$write_pro_end + pro_merge <- dbstatus$pro_merge + input_file <- dbstatus$input_file + existing_pro_file_path <- dbstatus$existing_pro_file_path + raw_check <- dbstatus$raw_check + pro_check <- dbstatus$pro_check start <- dbstatus[[2]] end <- dbstatus[[3]] stage_get_data <- dbstatus[[4]] @@ -166,7 +181,7 @@ remote_process <- function(settings) { # construct outdir path outdir <- - file.path(outdir, paste(source, "site", siteid_short, sep = "_")) + file.path(outdir, paste(toupper(source), "site", siteid_short, sep = "_")) # extract the AOI of the site from BETYdb coords <- @@ -693,7 +708,24 @@ remotedata_db_check <- } return( - list( + list( + remotefile_check_flag = remotefile_check_flag, + start = start, + end = end, + stage_get_data = stage_get_data, + write_raw_start = write_raw_start, + write_raw_end = write_raw_end, + raw_merge = raw_merge, + existing_raw_file_path = existing_raw_file_path, + stage_process_data = stage_process_data, + write_pro_start = write_pro_start, + write_pro_end = write_pro_end, + pro_merge = pro_merge, + input_file = input_file, + existing_pro_file_path = existing_pro_file_path, + raw_check = raw_check, + 
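
The motivation for the named-list commits in this series, in two lines (mock values):

```r
dbstatus <- list(remotefile_check_flag = 1,
                 start = "2018-01-01", end = "2018-12-31")
dbstatus[[1]]                   # positional: breaks silently if order changes
dbstatus$remotefile_check_flag  # named: independent of element order
```
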
pro_check = pro_check + ) remotefile_check_flag, start, end, From a41cf1666a353ac76204021c38b5efad6b08475d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 16:11:00 +0530 Subject: [PATCH 1414/2289] remove unused variables --- modules/data.remote/R/remote_process.R | 8 -------- 1 file changed, 8 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 802f23c441d..a2fd2a5e47e 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -36,14 +36,6 @@ remote_process <- function(settings) { RpTools <- reticulate::import("RpTools") - input_file <- NULL - stage_get_data <- NULL - stage_process_data <- NULL - raw_merge <- NULL - pro_merge <- NULL - existing_raw_file_path <- NULL - existing_pro_file_path <- NULL - # extract the variables from the settings list siteid <- as.numeric(settings$run$site$id) siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09) From 2ceda390d029e5dcf365476d8d11704319b337db Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 16:19:23 +0530 Subject: [PATCH 1415/2289] use named list --- modules/data.remote/R/remote_process.R | 34 ++------------------------ 1 file changed, 2 insertions(+), 32 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index c792d4e5597..fd03c97e08c 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -155,21 +155,7 @@ remote_process <- function(settings) { existing_pro_file_path <- dbstatus$existing_pro_file_path raw_check <- dbstatus$raw_check pro_check <- dbstatus$pro_check - start <- dbstatus[[2]] - end <- dbstatus[[3]] - stage_get_data <- dbstatus[[4]] - write_raw_start <- dbstatus[[5]] - write_raw_end <- dbstatus[[6]] - raw_merge <- dbstatus[[7]] - existing_raw_file_path <- dbstatus[[8]] - stage_process_data <- dbstatus[[9]] - write_pro_start <- dbstatus[[10]] - write_pro_end <- dbstatus[[11]] - pro_merge <- dbstatus[[12]] - input_file <- dbstatus[[13]] - existing_pro_file_path <- dbstatus[[14]] - raw_check <- dbstatus[[15]] - pro_check <- dbstatus[[16]] + # construct outdir path outdir <- @@ -718,24 +704,8 @@ remotedata_db_check <- raw_check = raw_check, pro_check = pro_check ) - remotefile_check_flag, - start, - end, - stage_get_data, - write_raw_start, - write_raw_end, - raw_merge, - existing_raw_file_path, - stage_process_data, - write_pro_start, - write_pro_end, - pro_merge, - input_file, - existing_pro_file_path, - raw_check, - pro_check ) - ) + } From 1c7571630dc8cdc15d80849835423080ec9b04c4 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 17:09:23 +0530 Subject: [PATCH 1416/2289] use named lists --- modules/data.remote/R/remote_process.R | 62 +++++++++++++------------- 1 file changed, 31 insertions(+), 31 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index fd03c97e08c..c62a7a2ab3e 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -224,12 +224,12 @@ remote_process <- function(settings) { # return the ids and paths of the inserted data if (!is.null(out_get_data)) { - settings$remotedata$raw_id <- db_out[[1]] - settings$remotedata$raw_path <- db_out[[2]] + settings$remotedata$raw_id <- db_out$raw_id + settings$remotedata$raw_path <- db_out$raw_path } if (!is.null(out_process_data)) { - settings$remotedata$pro_id <- db_out[[3]] - settings$remotedata$pro_path <- db_out[[4]] + 
settings$remotedata$pro_id <- db_out$pro_id + settings$remotedata$pro_path <- db_out$pro_path } return (settings) @@ -336,7 +336,7 @@ set_stage <- function(result, req_start, req_end, stage) { write_end <- db_end write_start <- req_start } - return (list(req_start, req_end, stage, merge, write_start, write_end)) + return (list(req_start = req_start, req_end = req_end, stage = stage, merge = merge, write_start = write_start, write_end = write_end)) } @@ -443,7 +443,7 @@ remotedata_db_check <- remotefile_check_flag <- 1 } stage_process_data <- TRUE - pro_merge <- "repace" + pro_merge <- "replace" write_pro_start <- start write_pro_end <- end } else if (!is.null(out_get_data)) { @@ -485,13 +485,13 @@ remotedata_db_check <- )) == 1) { datalist <- set_stage(pro_check, req_start, req_end, stage_process_data) - pro_start <- as.character(datalist[[1]]) - pro_end <- as.character(datalist[[2]]) - write_pro_start <- datalist[[5]] - write_pro_end <- datalist[[6]] + pro_start <- as.character(datalist$req_start) + pro_end <- as.character(datalist$req_end) + write_pro_start <- datalist$write_start + write_pro_end <- datalist$write_end if (pro_start != "dont write" || pro_end != "dont write") { - stage_process_data <- datalist[[3]] - pro_merge <- datalist[[4]] + stage_process_data <- datalist$stage + pro_merge <- datalist$merge if (pro_merge == TRUE) { existing_pro_file_path <- pro_check$file_path } @@ -509,12 +509,12 @@ remotedata_db_check <- !is.null(raw_check$end_date)) { raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- as.character(raw_datalist[[1]]) - end <- as.character(raw_datalist[[2]]) - write_raw_start <- raw_datalist[[5]] - write_raw_end <- raw_datalist[[6]] - stage_get_data <- raw_datalist[[3]] - raw_merge <- raw_datalist[[4]] + start <- as.character(raw_datalist$req_start) + end <- as.character(raw_datalist$req_end) + write_raw_start <- raw_datalist$write_start + write_raw_end <- raw_datalist$write_end + stage_get_data <- raw_datalist$stage + raw_merge <- raw_datalist$merge if (stage_get_data == FALSE) { input_file <- raw_check$file_path } @@ -573,20 +573,20 @@ remotedata_db_check <- PEcAn.logger::logger.info("Requested processed file does not exist in the DB, checking if the raw file does") datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] + start <- as.character(datalist$req_start) + end <- as.character(datalist$req_end) + write_raw_start <- datalist$write_start + write_raw_end <- datalist$write_end write_pro_start <- req_start write_pro_end <- req_end - stage_get_data <- datalist[[3]] + stage_get_data <- datalist$stage if (stage_get_data == FALSE) { input_file <- raw_check$file_path write_pro_start <- raw_check$start_date write_pro_end <- raw_check$end_date remotefile_check_flag <- 2 } - raw_merge <- datalist[[4]] + raw_merge <- datalist$merge stage_process_data <- TRUE pro_merge <- FALSE if (raw_merge == TRUE || raw_merge == "replace") { @@ -622,12 +622,12 @@ remotedata_db_check <- # if only raw data is requested datalist <- set_stage(raw_check, req_start, req_end, stage_get_data) - start <- as.character(datalist[[1]]) - end <- as.character(datalist[[2]]) - stage_get_data <- datalist[[3]] - raw_merge <- datalist[[4]] - write_raw_start <- datalist[[5]] - write_raw_end <- datalist[[6]] + start <- as.character(datalist$req_start) + end <- as.character(datalist$req_end) + stage_get_data <- 
datalist$stage + raw_merge <- datalist$merge + write_raw_start <- datalist$write_start + write_raw_end <- datalist$write_end stage_process_data <- FALSE if (as.character(write_raw_start) == "dont write" && as.character(write_raw_end) == "dont write") { @@ -1044,5 +1044,5 @@ remotedata_db_insert <- } } - return(list(raw_id, raw_path, pro_id, pro_path)) + return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) } From 58cce01dab975f0681fed37629171d93c9d5bc0b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 21 Aug 2020 19:31:13 +0530 Subject: [PATCH 1417/2289] pass filenames to python functions --- modules/data.remote/R/remote_process.R | 47 +++++++++++++------ .../inst/RpTools/RpTools/appeears2pecan.py | 11 ++--- .../inst/RpTools/RpTools/bands2lai_snap.py | 11 ++--- .../inst/RpTools/RpTools/gee2pecan_l8.py | 17 ++----- .../inst/RpTools/RpTools/gee2pecan_s2.py | 16 ++----- .../inst/RpTools/RpTools/gee2pecan_smap.py | 6 ++- .../inst/RpTools/RpTools/get_remote_data.py | 12 ++--- .../RpTools/RpTools/process_remote_data.py | 9 ++-- .../inst/RpTools/RpTools/rp_control.py | 12 +++-- .../data.remote/man/construct_raw_filename.Rd | 4 ++ .../data.remote/man/remotedata_db_check.Rd | 4 ++ 11 files changed, 80 insertions(+), 69 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index c62a7a2ab3e..52defcee158 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -121,8 +121,17 @@ remote_process <- function(settings) { # construct raw file name raw_file_name <- - construct_raw_filename(collection, siteid_short, scale, projection, qc) + construct_raw_filename(source, collection, siteid_short, scale, projection, qc) + if (is.null(out_process_data)){ + pro_file_name <- NULL + }else{ + pro_file_name <- paste0(algorithm, + "_", + out_process_data, + "_site_", + siteid_short) + } # check if any data is already present in the inputs table dbstatus <- @@ -190,6 +199,10 @@ remote_process <- function(settings) { fcn.args$pro_merge <- pro_merge fcn.args$existing_raw_file_path <- existing_raw_file_path fcn.args$existing_pro_file_path <- existing_pro_file_path + fcn.args$raw_file_name <- raw_file_name + fcn.args$pro_file_name <- pro_file_name + + arg.string <- PEcAn.utils::listToArgString(fcn.args) @@ -241,6 +254,7 @@ remote_process <- function(settings) { ##' ##' @name construct_raw_filename ##' @title construct_raw_filename +##' @param source source ##' @param collection collection or product requested from the source ##' @param siteid shortform of siteid ##' @param scale scale, NULL by default @@ -250,6 +264,7 @@ remote_process <- function(settings) { ##' @examples ##' \dontrun{ ##' raw_file_name <- construct_raw_filename( +##' source="gee", ##' collection="s2", ##' siteid="0-721", ##' scale=10.0 @@ -258,27 +273,32 @@ remote_process <- function(settings) { ##' } ##' @author Ayush Prasad construct_raw_filename <- - function(collection, + function(source, + collection, siteid, scale = NULL, projection = NULL, qc = NULL) { # use NA if a parameter is not applicable and is NULL + # skip if a parameter is not applicable and is NULL if (is.null(scale)) { - scale <- "NA" + scale_str <- "_" } else{ - scale <- format(scale, nsmall = 1) + scale_str <- paste0("_", format(scale, nsmall = 1), "_") } if (is.null(projection)) { - projection <- "NA" + prj_str <- "" + }else{ + prj_str <- paste0(projection, "_") } if (is.null(qc)) { - qc <- "NA" + qc_str <- "" } else{ - qc <- format(qc, nsmall = 1) 
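+      # qc is kept to one decimal (format(1, nsmall = 1) gives "1.0") so names
+      # stay stable across runs; with source "gee", collection "s2", scale 10.0,
+      # projection NULL and qc 1.0 the final name assembled below works out to
+      # "GEE_s2_10.0_1.0_site_0-721" (illustrative values only)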
+ qc_str <- paste0(format(qc, nsmall = 1), "_") } - raw_file_name <- - paste(collection, scale, projection, qc, "site", siteid, sep = "_") + + raw_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, "site_", siteid) + return(raw_file_name) } @@ -348,6 +368,7 @@ set_stage <- function(result, req_start, req_end, stage) { ##' @name remotedata_db_check ##' @title remotedata_db_check ##' @param raw_file_name raw_file_name +##' @param pro_file_name pro_file_name ##' @param start start date requested by user ##' @param end end date requested by the user ##' @param siteid siteid of the site @@ -362,6 +383,7 @@ set_stage <- function(result, req_start, req_end, stage) { ##' \dontrun{ ##' dbstatus <- remotedata_db_check( ##' raw_file_name, +##' pro_file_name, ##' start, ##' end, ##' siteid, @@ -375,6 +397,7 @@ set_stage <- function(result, req_start, req_end, stage) { ##' @author Ayush Prasad remotedata_db_check <- function(raw_file_name, + pro_file_name, start, end, siteid, @@ -414,11 +437,7 @@ remotedata_db_check <- if (overwrite) { PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced") if (!is.null(out_process_data)) { - pro_file_name = paste0(algorithm, - "_", - out_process_data, - "_site_", - siteid_short) + if (nrow(pro_check <- PEcAn.DB::db.query( sprintf( diff --git a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py index d0b316b3e0c..5fd24e46287 100644 --- a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py +++ b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py @@ -26,7 +26,7 @@ def appeears2pecan( - geofile, outdir, start, end, product, projection=None, credfile=None, siteid=None + geofile, outdir, filename, start, end, product, projection=None, credfile=None ): """ Downloads remote sensing data from AppEEARS @@ -37,6 +37,8 @@ def appeears2pecan( outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + filename (str) -- filename of the output file + start (str) -- starting date of the data request in the form YYYY-MM-DD end (str) -- ending date area of the data request in the form YYYY-MM-DD @@ -233,12 +235,7 @@ def authenticate(): timestamp = time.strftime("%y%m%d%H%M%S") save_path = os.path.join( outdir, - product - +"_NA_" - + projection - + "_NA_" - + "site_" - + siteid + filename + "_" + timestamp + "." diff --git a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py index 3df5c314d69..bf775873308 100644 --- a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py +++ b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py @@ -15,7 +15,7 @@ import os import time -def bands2lai_snap(inputfile, outdir, siteid): +def bands2lai_snap(inputfile, outdir, filename): """ Calculates LAI for the input netCDF file and saves it in a new netCDF file. @@ -25,8 +25,8 @@ def bands2lai_snap(inputfile, outdir, siteid): outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. 
-    siteid (str) -- shortform of the siteid
-
+    filename (str) -- filename of the output file
+
     Returns
     -------
     Absolute path to the output file
@@ -50,10 +50,7 @@ def bands2lai_snap(inputfile, outdir, filename):
 
     timestamp = time.strftime("%y%m%d%H%M%S")
 
-    if siteid is None:
-        siteid = area.name
-
-    save_path = os.path.join(outdir, "snap_lai_site_" + siteid + "_" + timestamp + ".nc")
+    save_path = os.path.join(outdir, filename + "_" + timestamp + ".nc")
     # creating a timerseries and saving the netCDF file
     area.to_netcdf(save_path)
     timeseries[area.name] = xr_dataset_to_timeseries(area, timeseries_variable)
diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py
index 1ebd149b304..3b24d209d1a 100644
--- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py
+++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_l8.py
@@ -23,7 +23,7 @@
 ee.Initialize()
 
 
-def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1, siteid=None):
+def gee2pecan_l8(geofile, outdir, filename, start, end, scale, qc):
     """
     Extracts Landsat 8 SR band data from GEE
 
@@ -32,6 +32,8 @@ def gee2pecan_l8(geofile, outdir, start, end, scale, qc=1, siteid=None):
     geofile (str) -- path to the file containing the name and coordinates of ROI, currently tested with geojson.
 
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
+
+    filename (str) -- filename of the output file
 
     start (str) -- starting date of the data request in the form YYYY-MM-DD
 
@@ -41,8 +43,7 @@
 
     qc (bool) -- uses the cloud masking function if set to True
 
-    siteid (str) -- shortform of siteid, None by default
-
+
     Returns
     -------
     Absolute path to the output file.
@@ -174,8 +175,6 @@ def eachf2dict(f):
         },
     )
 
-    if siteid is None:
-        siteid = site_name
 
     # if specified path does not exist create it
@@ -185,13 +184,7 @@
     timestamp = time.strftime("%y%m%d%H%M%S")
     filepath = os.path.join(
         outdir,
-        "l8_"
-        + str(scale)
-        + "_NA_"
-        + str(qc)
-        + "_"
-        + "site_"
-        + siteid
+        filename
         + "_"
        + timestamp
        + ".nc",
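Every downloader in this patch now takes the prebuilt file name and only appends a timestamp. A minimal sketch of that shared convention (build_output_path is an illustrative helper, not part of RpTools):

import os
import time

def build_output_path(outdir, filename, ext="nc"):
    # the strftime suffix keeps repeated pulls for the same site from overwriting each other
    timestamp = time.strftime("%y%m%d%H%M%S")
    return os.path.join(outdir, filename + "_" + timestamp + "." + ext)

diff --git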
qi filter holds labels of classes whose percentages within the AOI is summed. If the sum is larger then the qi_threshold, data will not be retrieved for that date/image. The default is 1, meaning all data is retrieved - siteid (str) -- shortform of siteid, None by default - Returns ------- Absolute path to the output file. @@ -803,20 +803,12 @@ def gee2pecan_s2(geofile, outdir, start, end, scale, qc, siteid=None): s2_data_to_xarray(area, request) # if specified output directory does not exist, create it - if siteid is None: - siteid = area.name if not os.path.exists(outdir): os.makedirs(outdir, exist_ok=True) timestamp = time.strftime("%y%m%d%H%M%S") save_path = os.path.join( outdir, - "s2_" - + str(scale) - + "_NA_" - + str(qc) - + "_" - + "site_" - + siteid + filename + "_" + timestamp + ".nc", diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py index 439ae1cce02..10ae8340133 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_smap.py @@ -18,7 +18,7 @@ ee.Initialize() -def gee2pecan_smap(geofile, outdir, start, end, siteid=None): +def gee2pecan_smap(geofile, outdir, filename, start, end): """ Downloads and saves SMAP data from GEE @@ -28,6 +28,8 @@ def gee2pecan_smap(geofile, outdir, start, end, siteid=None): outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. + filename (str) -- filename of the output file + start (str) -- starting date of the data request in the form YYYY-MM-dd end (str) -- ending date areaof the data request in the form YYYY-MM-dd @@ -163,7 +165,7 @@ def fc2dataframe(fc): timestamp = time.strftime("%y%m%d%H%M%S") filepath = os.path.join( - outdir, "smap_" + "NA_NA_NA_" + "site_" + siteid +"_" + timestamp + ".nc" + outdir, filename + "_" + timestamp + ".nc" ) # convert to netCDF and save the file diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py index 40786ea8c77..48b6bc3bd06 100644 --- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py @@ -22,13 +22,13 @@ def get_remote_data( end, source, collection, - siteid=None, scale=None, projection=None, qc=None, credfile=None, raw_merge=None, existing_raw_file_path=None, + raw_file_name=None ): """ uses GEE and AppEEARS functions to download data @@ -47,8 +47,6 @@ def get_remote_data( collection (str) -- dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears - siteid (str) -- shortform of the siteid - scale (int) -- pixel resolution, None by default projection (str) -- type of projection. Only required for appeears polygon AOI type. None by default. @@ -61,6 +59,8 @@ def get_remote_data( existing_raw_file_path (str) -- path to exisiting raw file if raw_merge is TRUE., None by default + raw_file_name (str) -- filename of the output file + Returns ------- Absolute path to the created file. 
@@ -78,13 +78,13 @@ def get_remote_data( func = getattr(module, func_name) # if a qc parameter is specified pass these arguments to the function if qc: - get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, scale=scale, qc=qc, siteid=siteid) + get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, scale=scale, qc=qc, filename=raw_file_name) # this part takes care of functions which do not perform any quality checks, e.g. SMAP else: - get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, siteid=siteid) + get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, filename=raw_file_name) if source == "appeears": - get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile, siteid=siteid) + get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile, filename=raw_file_name) if raw_merge == True and raw_merge != "replace": # if output file is of csv type use csv_merge, example AppEEARS point AOI type diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py index 880500791cf..79374d41abd 100644 --- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py @@ -11,20 +11,19 @@ import os import time -def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, siteid=None, pro_merge=None, existing_pro_file_path=None): +def process_remote_data(out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge=None, existing_pro_file_path=None, pro_file_name=None): """ uses processing functions to perform computation on input data Parameters ---------- - aoi_name (str) -- name to the AOI. output (dict) -- dictionary contatining the keys get_data and process_data outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. algorithm (str) -- name of the algorithm used to perform computation. 
inputfile (str) -- path to raw file - siteid (str) -- shortform of the siteid pro_merge (str) -- if the pro file has to be merged - existing_pro_file_path -- path to existing pro file if pro_merge is TRUE + existing_pro_file_path (str) -- path to existing pro file if pro_merge is TRUE + pro_file_name (str) -- name of the output file Returns ------- @@ -46,7 +45,7 @@ def process_remote_data(aoi_name, out_get_data, out_process_data, outdir, algori # import the function from the module func = getattr(module, func_name) # call the function - process_datareturn_path = func(input_file, outdir, siteid) + process_datareturn_path = func(input_file, outdir, pro_file_name) if pro_merge == True and pro_merge != "replace": try: diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py index e8b67ebaac6..065316062d5 100644 --- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py +++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py @@ -38,6 +38,8 @@ def rp_control( pro_merge=None, existing_raw_file_path=None, existing_pro_file_path=None, + raw_file_name=None, + pro_file_name=None, ): """ @@ -84,6 +86,10 @@ def rp_control( existing_raw_file_path (str) -- path to existing raw file , None by default existing_pro_file_path (str) -- path to existing pro file path, None by default + + raw_file_name (str) -- filename of the raw file, None by default + + pro_file_name (str) -- filename of the processed file, None by default Returns ------- @@ -95,7 +101,6 @@ def rp_control( aoi_name = get_sitename(geofile) - get_datareturn_path = 78 if stage_get_data: get_datareturn_path = get_remote_data( @@ -105,13 +110,13 @@ def rp_control( end, source, collection, - siteid, scale, projection, qc, credfile, raw_merge, existing_raw_file_path, + raw_file_name ) get_datareturn_name = os.path.split(get_datareturn_path) @@ -119,15 +124,14 @@ def rp_control( if input_file is None: input_file = get_datareturn_path process_datareturn_path = process_remote_data( - aoi_name, out_get_data, out_process_data, outdir, algorithm, input_file, - siteid, pro_merge, existing_pro_file_path, + pro_file_name ) process_datareturn_name = os.path.split(process_datareturn_path) diff --git a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_raw_filename.Rd index c31af074b70..7775ba75111 100644 --- a/modules/data.remote/man/construct_raw_filename.Rd +++ b/modules/data.remote/man/construct_raw_filename.Rd @@ -5,6 +5,7 @@ \title{construct_raw_filename} \usage{ construct_raw_filename( + source, collection, siteid, scale = NULL, @@ -13,6 +14,8 @@ construct_raw_filename( ) } \arguments{ +\item{source}{source} + \item{collection}{collection or product requested from the source} \item{siteid}{shortform of siteid} @@ -32,6 +35,7 @@ construct the raw file name \examples{ \dontrun{ raw_file_name <- construct_raw_filename( + source="gee", collection="s2", siteid="0-721", scale=10.0 diff --git a/modules/data.remote/man/remotedata_db_check.Rd b/modules/data.remote/man/remotedata_db_check.Rd index e7ecead4a5c..c6b71d4861d 100644 --- a/modules/data.remote/man/remotedata_db_check.Rd +++ b/modules/data.remote/man/remotedata_db_check.Rd @@ -6,6 +6,7 @@ \usage{ remotedata_db_check( raw_file_name, + pro_file_name, start, end, siteid, @@ -20,6 +21,8 @@ remotedata_db_check( \arguments{ \item{raw_file_name}{raw_file_name} +\item{pro_file_name}{pro_file_name} + \item{start}{start date requested by user} \item{end}{end date requested by the user} @@ -48,6 +51,7 @@ 
check the status of the requested data in the DB
\dontrun{
 dbstatus <- remotedata_db_check(
   raw_file_name,
+  pro_file_name,
   start,
   end,
   siteid,

From 772411a7852369e85a5a21cd38b5efad6b08475c Mon Sep 17 00:00:00 2001
From: kzarada
Date: Fri, 21 Aug 2020 15:17:19 -0400
Subject: [PATCH 1418/2289] adding dbfile move function

---
 base/db/R/dbfiles.R | 246 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 245 insertions(+), 1 deletion(-)

diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R
index 504fe915f1d..92610201d23 100644
--- a/base/db/R/dbfiles.R
+++ b/base/db/R/dbfiles.R
@@ -661,4 +661,248 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) {
     PEcAn.logger::logger.warn("no id found for", file, "in database")
     invisible(NA)
   }
-}
\ No newline at end of file
+}
+
+
+##'
+##' This function will move dbfiles - clim or nc - from one location
+##' to another on the same machine and update BETY
+##'
+##' @name move.input.files
+##' @title Move files to new location
+##' @param old.dir directory with files to be moved
+##' @param new.dir directory where files should be moved
+##' @param file.type type of file to move, either clim or nc
+##' @param siteid needed to register .nc files that aren't already in BETY
+##' @return print statement of how many files were moved, registered, or have symbolic links
+##' @export
+##' @author kzarada
+##' @examples
+##' \dontrun{
+##' dbfile.move(
+##' old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
+##' new.dir = "/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
+##' file.type = "clim",
+##' siteid = 676
+##' )
+##' }
+
+
+dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){
+
+
+  #create nulls for file movement and error info
+  error = 0
+  files.sym = 0
+  files.changed = 0
+  files.reg = 0
+  files.indb = 0
+
+  #check that the file type is supported and build the matching pattern
+  if(!(file.type %in% c("clim", "nc"))){
+    PEcAn.logger::logger.error('File type not supported by move at this time. Please enter either clim or nc')
+    error = 1
+  }
+  file.pattern = paste0("*.", file.type)
+
+
+
+  #create new directory if it doesn't exist
+  if(!dir.exists(new.file.path)){
+    dir.create(new.file.path)}
+
+
+  # check to make sure both directories exist
+  if(!dir.exists(old.file.path)){
+    PEcAn.logger::logger.error('Old File directory does not exist. Please enter valid file path')
+    error = 1}
+
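+  # note: each check only records a failure (error = 1); the move, symlink and
+  # registration blocks further down are all guarded by if(error == 0), so a
+  # failed check means no files are touched
+  if(!dir.exists(new.file.path)){
+    PEcAn.logger::logger.error('New File directory does not exist. 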
Please enter valid file path') + error = 1} + + if(basename(new.file.path) != basename(old.file.path)){ + PEcAn.logger::logger.error('Basenames of files do not match') + } + + #list files in the old directory + old.files <- list.files(path= old.file.path, pattern = file.pattern) + + #check to make sure there are files + if(length(old.files) == 0){ + PEcAn.logger::logger.warn('No files found') + error = 1 + } + + #create full file path + full.old.file = file.path(old.file.path, old.files) + + + ### Get BETY information ### + bety <- dplyr::src_postgres(dbname = 'bety', + host = 'psql-pecan.bu.edu', + user = 'bety', + password = 'bety') + con <- bety$con + + #get matching dbfiles from BETY + dbfile.path = dirname(full.old.file) + dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% + dplyr::filter(file_name %in% basename(full.old.file)) %>% + dplyr::filter(file_path %in% dbfile.path) + + + #if there are matching db files + if(dim(dbfiles)[1] > 0){ + + # Check to make sure files line up + if(dim(dbfiles)[1] != length(full.old.file)) { + PEcAn.logger::logger.warn("Files to be moved don't match up with BETY files, only moving the files that match") + + #IF DB FILES AND FULL FILES DONT MATCH, remove those not in BETY - will take care of the rest below + index = which(basename(full.old.file) %in% dbfiles$file_name) + index1 = seq(1, length(full.old.file)) + check <- index1[-which(index1 %in% index)] + full.old.file <- full.old.file[-check] + + #record the number of files that are being moved + files.changed = length(full.old.file) + + } + + #Check to make sure the files line up + if(dim(dbfiles)[1] != length(full.old.file)) { + PEcAn.logger::logger.error("Files to be moved don't match up with BETY files, canceling move") + error = 1 + } + + + #Make sure the files line up + dbfiles <- dbfiles[order(dbfiles$file_name),] + full.old.file <- sort(full.old.file) + + #Record number of files moved and changed in BETY + files.indb = dim(dbfiles)[1] + + #Move files and update BETY + if(error == 0) { + for(i in 1:length(full.old.file)){ + fs::file_move(full.old.file[i], new.file.path) + db.query(paste0("UPDATE dbfiles SET file_path= '", new.file.path, "' where id=", dbfiles$id[i]), con) + } #end i loop + } #end error if statement + + + + } #end dbfile loop + + #if statement for when there are no matching files in BETY or if some clim files matched but others didn't + #for clim files not registered, we'll create a symbolic link to where we move the files + if (dim(dbfiles)[1] == 0 & file.pattern == "*.clim" | files.changed > 0 & file.pattern == "*.clim" ){ + + #Recheck what files are in the directory since others may have been moved above + old.files <- list.files(path= old.file.path, pattern = file.pattern) + + #Recreate full file path + full.old.file = file.path(old.file.path, old.files) + + #Record number of files that will have a symbolic link made + files.sym = length(full.old.file) + + #Error check again to make sure there aren't any matching dbfiles + dbfile.path = dirname(full.old.file) + dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% + dplyr::filter(file_name %in% basename(full.old.file)) %>% + dplyr::filter(file_path %in% dbfile.path) + + if(dim(dbfiles)[1] > 0){ + PEcAn.logger::logger.error("There are still dbfiles matching these files! 
Canceling symbolic link creation") + error = 1 + } + + #Create file path for symbolic link + full.new.file = file.path(new.file.path, old.files) + + #Line up files + full.new.file = sort(full.new.file) + full.old.file <- sort(full.old.file) + + #Check to make sure the files are the same length + if(length(full.new.file) != length(full.old.file)) { + + PEcAn.logger::logger.error("Files to be moved don't match up with BETY. Canceling Move") + error = 1 + } + + #Move file and create symbolic link if there are no errors + + if(error ==0){ + for(i in 1:length(full.old.file)){ + fs::file_move(full.old.file[i], new.file.path) + R.utils::createLink(link = full.old.file[i], target = full.new.file[i]) + }#end i loop + } #end error loop + + + } #end clim if statement + + + #If files are .nc files and aren't in BETY for some reason, we will register them + if (dim(dbfiles)[1] == 0 & file.pattern == "*.nc" | files.changed == 1 & file.pattern == "*.nc" ){ + + #Re make full file path and find files that were not moved + old.files <- list.files(path= old.file.path, pattern = file.pattern) + + full.old.file = file.path(old.file.path, old.files) + + #Record how many files are being registered to BETY + files.reg= length(full.old.file) + + #Check again to make sure there aren't any matching dbfiles + dbfile.path = dirname(full.old.file) + dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% + dplyr::filter(file_name %in% basename(full.old.file)) %>% + dplyr::filter(file_path %in% dbfile.path) + + if(dim(dbfiles)[1] > 0){ + PEcAn.logger::logger.error("There are still dbfiles matching these files! Canceling symbolic link creation") + error = 1 + } + + if(error == 0){ + + for(i in 1:length(full.old.file)){ + + file_path = dirname(full.old.file[i]) + file_name = basename(full.old.file[i]) + + + dbfile.input.insert(in.path = file_path, + in.prefix = file_name, + siteid = siteid, + startdate = NULL, + enddate = NULL, + mimetype = "application/x-netcdf", + formatname = "CF Meteorology application", + parentid=NA, + con = con, + hostname=PEcAn.remote::fqdn(), + allow.conflicting.dates=FALSE, + ens=FALSE) + }#end i loop + } #end error loop + } #end nc file registration + + + if(error > 0){ + PEcAn.logger::logger.error("There was an error, files were not moved or linked") + + } + + if(error == 0){ + + PEcAn.logger::logger.info(paste0(files.changed + files.indb, " files were moved and updated on BETY, ", files.sym, " were moved and had a symbolic link created, and ", files.reg , " files were moved and then registered in BETY")) + + } + +} #end dbfile.move() \ No newline at end of file From ee0e3fadccfb69d5d94de06ae640c8a90fe4432a Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 21 Aug 2020 15:26:09 -0400 Subject: [PATCH 1419/2289] updating function name --- base/db/R/dbfiles.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 92610201d23..8a10f599b6c 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -668,7 +668,7 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { ##' This function will move dbfiles - clim or nc - from one location ##' to another on the same machine and update BETY ##' -##' @name move.input.files +##' @name dbfile.move ##' @title Move files to new location ##' @param old.dir directory with files to be moved ##' @param new.dir directory where files should be moved From 74e36cc1d202ff424e5ec43387c682456673f403 Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 21 Aug 2020 15:38:23 -0400 Subject: 
[PATCH 1420/2289] updating params --- base/db/R/dbfiles.R | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 8a10f599b6c..4043ad15dfb 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -708,25 +708,25 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ #create new directory if it doesn't exist - if(!dir.exists(new.file.path)){ - dir.create(new.file.path)} + if(!dir.exists(new.dir)){ + dir.create(new.dir)} # check to make sure both directories exist - if(!dir.exists(old.file.path)){ + if(!dir.exists(old.dir)){ PEcAn.logger::logger.error('Old File directory does not exist. Please enter valid file path') error = 1} - if(!dir.exists(new.file.path)){ + if(!dir.exists(new.dir)){ PEcAn.logger::logger.error('New File directory does not exist. Please enter valid file path') error = 1} - if(basename(new.file.path) != basename(old.file.path)){ + if(basename(new.dir) != basename(old.dir)){ PEcAn.logger::logger.error('Basenames of files do not match') } #list files in the old directory - old.files <- list.files(path= old.file.path, pattern = file.pattern) + old.files <- list.files(path= old.dir, pattern = file.pattern) #check to make sure there are files if(length(old.files) == 0){ @@ -735,7 +735,7 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ } #create full file path - full.old.file = file.path(old.file.path, old.files) + full.old.file = file.path(old.dir, old.files) ### Get BETY information ### @@ -787,8 +787,8 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ #Move files and update BETY if(error == 0) { for(i in 1:length(full.old.file)){ - fs::file_move(full.old.file[i], new.file.path) - db.query(paste0("UPDATE dbfiles SET file_path= '", new.file.path, "' where id=", dbfiles$id[i]), con) + fs::file_move(full.old.file[i], new.dir) + db.query(paste0("UPDATE dbfiles SET file_path= '", new.dir, "' where id=", dbfiles$id[i]), con) } #end i loop } #end error if statement @@ -801,10 +801,10 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ if (dim(dbfiles)[1] == 0 & file.pattern == "*.clim" | files.changed > 0 & file.pattern == "*.clim" ){ #Recheck what files are in the directory since others may have been moved above - old.files <- list.files(path= old.file.path, pattern = file.pattern) + old.files <- list.files(path= old.dir, pattern = file.pattern) #Recreate full file path - full.old.file = file.path(old.file.path, old.files) + full.old.file = file.path(old.dir, old.files) #Record number of files that will have a symbolic link made files.sym = length(full.old.file) @@ -821,7 +821,7 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ } #Create file path for symbolic link - full.new.file = file.path(new.file.path, old.files) + full.new.file = file.path(new.dir, old.files) #Line up files full.new.file = sort(full.new.file) @@ -838,7 +838,7 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ if(error ==0){ for(i in 1:length(full.old.file)){ - fs::file_move(full.old.file[i], new.file.path) + fs::file_move(full.old.file[i], new.dir) R.utils::createLink(link = full.old.file[i], target = full.new.file[i]) }#end i loop } #end error loop @@ -851,9 +851,9 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ if (dim(dbfiles)[1] == 0 & file.pattern == "*.nc" | files.changed == 1 & file.pattern == "*.nc" ){ #Re make full file path and find files that were not 
moved
- old.files <- list.files(path= old.file.path, pattern = file.pattern)
+ old.files <- list.files(path= old.dir, pattern = file.pattern)
- full.old.file = file.path(old.file.path, old.files)
+ full.old.file = file.path(old.dir, old.files)
 #Record how many files are being registered to BETY
 files.reg= length(full.old.file)

From d1b67773bc30e0748ee325da571fe16744f4c04f Mon Sep 17 00:00:00 2001
From: kzarada
Date: Fri, 21 Aug 2020 15:44:36 -0400
Subject: [PATCH 1421/2289] updating after make

---
 base/db/NAMESPACE | 1 +
 1 file changed, 1 insertion(+)

diff --git a/base/db/NAMESPACE b/base/db/NAMESPACE
index d06edfc35e1..c1ff7c38dc6 100644
--- a/base/db/NAMESPACE
+++ b/base/db/NAMESPACE
@@ -20,6 +20,7 @@ export(dbfile.id)
 export(dbfile.input.check)
 export(dbfile.input.insert)
 export(dbfile.insert)
+export(dbfile.move)
 export(dbfile.posterior.check)
 export(dbfile.posterior.insert)
 export(default_hostname)

From 92a60d5b91c9c3617dd17e4e0afbc5724cd3dd45 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Fri, 21 Aug 2020 16:02:34 -0400
Subject: [PATCH 1422/2289] adding Rd file

---
 base/db/man/dbfile.move.Rd | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
 create mode 100644 base/db/man/dbfile.move.Rd

diff --git a/base/db/man/dbfile.move.Rd b/base/db/man/dbfile.move.Rd
new file mode 100644
index 00000000000..e33aaf1eb2b
--- /dev/null
+++ b/base/db/man/dbfile.move.Rd
@@ -0,0 +1,37 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/dbfiles.R
+\name{dbfile.move}
+\alias{dbfile.move}
+\title{Move files to new location}
+\usage{
+dbfile.move(old.dir, new.dir, file.type, siteid = NULL)
+}
+\arguments{
+\item{old.dir}{directory with files to be moved}
+
+\item{new.dir}{directory where files should be moved}
+
+\item{file.type}{type of file to move, either clim or nc}
+
+\item{siteid}{needed to register .nc files that aren't already in BETY}
+}
+\value{
+print statement of how many files were moved, registered, or have symbolic links
+}
+\description{
+This function will move dbfiles - clim or nc - from one location
+to another on the same machine and update BETY
+}
+\examples{
+\dontrun{
+ dbfile.move(
+ old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
+ new.dir = "/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
+ file.type = "clim",
+ siteid = 676
+ )
+}
+}
+\author{
+kzarada
+}

From c13025dc98e9710b8b44da3d0c9246db222085b6 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 22 Aug 2020 09:15:41 +0530
Subject: [PATCH 1423/2289] Use registration files

Co-authored-by: istfer
---
 modules/data.remote/NAMESPACE                 |   1 +
 modules/data.remote/R/remote_process.R        | 137 +++++++++++++-----
 .../inst/registration/register.APPEEARS.xml   |  12 ++
 .../inst/registration/register.GEE.xml        |  50 +++++++
 .../data.remote/man/read_remote_registry.Rd   |  29 ++++
 5 files changed, 196 insertions(+), 33 deletions(-)
 create mode 100644 modules/data.remote/inst/registration/register.APPEEARS.xml
 create mode 100644 modules/data.remote/inst/registration/register.GEE.xml
 create mode 100644 modules/data.remote/man/read_remote_registry.Rd

diff --git a/modules/data.remote/NAMESPACE b/modules/data.remote/NAMESPACE
index d5da4a8e230..501fe0991ca 100644
--- a/modules/data.remote/NAMESPACE
+++ b/modules/data.remote/NAMESPACE
@@ -9,3 +9,4 @@ export(extract_NLCD)
 export(remote_process)
 importFrom(foreach,"%do%")
 importFrom(foreach,"%dopar%")
+importFrom(purrr,"%>%")
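Before the remote_process.R changes, a hedged sketch of what the new registry lookup returns for the Sentinel-2 entry defined in register.GEE.xml later in this patch (values are read from that file; fields per read_remote_registry):

reg_info <- read_remote_registry("gee", "COPERNICUS/S2_SR")
reg_info$pecan_name  # "s2", the short name used when building file names
reg_info$scale       # "10", default pixel resolution (coerced with as.double downstream)
reg_info$qc          # "1", default qc threshold
reg_info$coordtype   # "polygon", the only site geometry type the s2 entry supports

diff --git a/modules/data.remote/R/remote_process.R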
b/modules/data.remote/R/remote_process.R index 52defcee158..6722db4c38a 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -39,28 +39,58 @@ remote_process <- function(settings) { # extract the variables from the settings list siteid <- as.numeric(settings$run$site$id) siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09) - raw_mimetype <- settings$remotedata$raw_mimetype - raw_formatname <- settings$remotedata$raw_formatname outdir <- settings$database$dbfiles start <- as.character(as.Date(settings$run$start.date)) end <- as.character(as.Date(settings$run$end.date)) source <- settings$remotedata$source collection <- settings$remotedata$collection - scale <- settings$remotedata$scale - if (!is.null(scale)) { - scale <- as.double(settings$remotedata$scale) - scale <- format(scale, nsmall = 1) + reg_info <- read_remote_registry(source, collection) + collection <- reg_info$pecan_name + raw_mimetype <- reg_info$raw_mimetype + raw_formatname <- reg_info$raw_formatname + pro_mimetype <- reg_info$pro_mimetype + pro_formatname <- reg_info$pro_formatname + + + if (!is.null(reg_info$scale)) { + if(!is.null(settings$remotedata$scale)){ + scale <- as.double(settings$remotedata$scale) + scale <- format(scale, nsmall = 1) + }else{ + scale <- as.double(reg_info$scale) + scale <- format(scale, nsmall = 1) + PEcAn.logger::logger.warn(paste0("scale not provided, using default scale ", scale)) + } + }else{ + scale <- NULL } - projection <- settings$remotedata$projection - qc <- settings$remotedata$qc - if (!is.null(qc)) { - qc <- as.double(settings$remotedata$qc) - qc <- format(qc, nsmall = 1) + + if (!is.null(reg_info$qc)) { + if(!is.null(settings$remotedata$qc)){ + qc <- as.double(settings$remotedata$qc) + qc <- format(qc, nsmall = 1) + }else{ + qc <- as.double(reg_info$qc) + qc <- format(qc, nsmall = 1) + PEcAn.logger::logger.warn(paste0("qc not provided, using default qc ", qc)) + } + }else{ + qc <- NULL } + + if (!is.null(reg_info$projection)) { + if(!is.null(settings$remotedata$projection)){ + projection <- settings$remotedata$projection + }else{ + projection <- reg_info$projection + PEcAn.logger::logger.warn(paste0("projection not provided, using default projection ", projection)) + } + }else{ + projection <- NULL + } + algorithm <- settings$remotedata$algorithm credfile <- settings$remotedata$credfile - pro_mimetype <- settings$remotedata$pro_mimetype - pro_formatname <- settings$remotedata$pro_formatname out_get_data <- settings$remotedata$out_get_data out_process_data <- settings$remotedata$out_process_data overwrite <- settings$remotedata$overwrite @@ -102,21 +132,15 @@ remote_process <- function(settings) { dbcon <- PEcAn.DB::db.open(settings$database$bety) on.exit(PEcAn.DB::db.close(dbcon), add = TRUE) - # collection dataframe used to map Google Earth Engine collection names to their PEcAn specific names - collection_lut <- data.frame( - stringsAsFactors = FALSE, - original_name = c( - "LANDSAT/LC08/C01/T1_SR", - "COPERNICUS/S2_SR", - "NASA_USDA/HSL/SMAP_soil_moisture" - ), - pecan_code = c("l8", "s2", "smap") - ) - getpecancode <- collection_lut$pecan_code - names(getpecancode) <- collection_lut$original_name + # extract the AOI of the site from BETYdb + coords <- + unlist(PEcAn.DB::db.query( + sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), + con = dbcon + ), use.names = FALSE) - if (source == "gee") { - collection = unname(getpecancode[collection]) + if(!(tolower(gsub(".*type(.+),coordinates.*", "\\1", 
gsub("[^=A-Za-z,0-9{} ]+","",coords))) %in% reg_info$coordtype)){ + PEcAn.logger::logger.severe(paste0("Coordinate type of the site is not supported by the requested source, please make sure that your site type is ", reg_info$coordtype)) } # construct raw file name @@ -170,12 +194,6 @@ remote_process <- function(settings) { outdir <- file.path(outdir, paste(toupper(source), "site", siteid_short, sep = "_")) - # extract the AOI of the site from BETYdb - coords <- - unlist(PEcAn.DB::db.query( - sprintf("select ST_AsGeoJSON(geometry) from sites where id=%f", siteid), - con = dbcon - ), use.names = FALSE) fcn.args <- list() fcn.args$coords <- coords @@ -363,6 +381,59 @@ set_stage <- function(result, req_start, req_end, stage) { +##' read remote module registration files +##' +##' @name read_remote_registry +##' @title read_remote_registry +##' @importFrom purrr %>% +##' @param source remote source, e.g gee or appeears +##' @param collection collection or product name +##' @return list containing original_name, pecan_name, scale, qc, projection raw_mimetype, raw_formatname pro_mimetype, pro_formatname, coordtype +##' @examples +##' \dontrun{ +##' read_remote_registry( +##' "gee", +##' "COPERNICUS/S2_SR") +##' } +##' @author Istem Fer +read_remote_registry <- function(source, collection){ + + # get registration file + register.xml <- system.file(paste0("registration/register.", toupper(source), ".xml"), package = "PEcAn.data.remote") + + tryCatch(expr = { + register <- XML::xmlToList(XML::xmlParse(register.xml)) + }, + error = function(e){ + PEcAn.logger::logger.severe("Requested source is not available") + } + ) + + if(!(purrr::is_empty(register %>% purrr::keep(names(.) == "collection")))){ + # this is a type of source that requires different setup for its collections, e.g. 
GEE + # then read collection specific information + register <- register[[which(register %>% purrr::map_chr("original_name") == collection)]] + } + + reg_list <- list() + reg_list$original_name <- ifelse(is.null(register$original_name), collection, register$original_name) + reg_list$pecan_name <- ifelse(is.null(register$pecan_name), collection, register$pecan_name) + reg_list$scale <- register$scale + reg_list$qc <- register$qc + reg_list$projection <- register$projection + reg_list$raw_mimetype <- register$raw_format$mimetype + reg_list$raw_formatname <- register$raw_format$name + reg_list$pro_mimetype <- register$pro_format$mimetype + reg_list$pro_formatname <- register$pro_format$name + reg_list$coordtype <- unlist(register$coord) + + return(reg_list) +} + + + + + ##' check the status of the requested data in the DB ##' ##' @name remotedata_db_check diff --git a/modules/data.remote/inst/registration/register.APPEEARS.xml b/modules/data.remote/inst/registration/register.APPEEARS.xml new file mode 100644 index 00000000000..69bace69d80 --- /dev/null +++ b/modules/data.remote/inst/registration/register.APPEEARS.xml @@ -0,0 +1,12 @@ + + + + polygon + point + + + + + application/x-netcdf + + diff --git a/modules/data.remote/inst/registration/register.GEE.xml b/modules/data.remote/inst/registration/register.GEE.xml new file mode 100644 index 00000000000..036f9e4e92c --- /dev/null +++ b/modules/data.remote/inst/registration/register.GEE.xml @@ -0,0 +1,50 @@ + + + + COPERNICUS/S2_SR + s2 + + polygon + + 10 + 1 + + + CF Meteorology + application/x-netcdf + + + + NCEP + application/x-netcdf + + + + LANDSAT/LC08/C01/T1_SR + l8 + 30 + 1 + + polygon + point + + + + + application/x-netcdf + + + + NASA_USDA/HSL/SMAP_soil_moisture + smap + + polygon + point + + + + + application/x-netcdf + + + diff --git a/modules/data.remote/man/read_remote_registry.Rd b/modules/data.remote/man/read_remote_registry.Rd new file mode 100644 index 00000000000..a9668ab7b8a --- /dev/null +++ b/modules/data.remote/man/read_remote_registry.Rd @@ -0,0 +1,29 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/remote_process.R +\name{read_remote_registry} +\alias{read_remote_registry} +\title{read_remote_registry} +\usage{ +read_remote_registry(source, collection) +} +\arguments{ +\item{source}{remote source, e.g gee or appeears} + +\item{collection}{collection or product name} +} +\value{ +list containing original_name, pecan_name, scale, qc, projection raw_mimetype, raw_formatname pro_mimetype, pro_formatname, coordtype +} +\description{ +read remote module registration files +} +\examples{ +\dontrun{ + read_remote_registry( + "gee", + "COPERNICUS/S2_SR") +} +} +\author{ +Istem Fer +} From d2b43210812812937547a8cc72e00aa38a14ffd4 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 09:53:08 +0530 Subject: [PATCH 1424/2289] dont split filepath in rp_control --- modules/data.remote/DESCRIPTION | 1 + modules/data.remote/R/remote_process.R | 36 +++++++++---------- .../inst/RpTools/RpTools/rp_control.py | 7 ---- 3 files changed, 19 insertions(+), 25 deletions(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index ebdc2d1a79e..7b6c3663c2d 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -14,6 +14,7 @@ Imports: PEcAn.DB, PEcAn.utils, purrr, + XML, raster, RCurl, sp, diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 6722db4c38a..c641c109341 100644 --- 
a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -897,7 +897,7 @@ remotedata_db_insert <-
 
     pro_ins <- PEcAn.DB::dbfile.input.insert(
       in.path = output$process_data_path,
-      in.prefix = output$process_data_name,
+      in.prefix = basename(output$process_data_path),
       siteid = siteid,
       startdate = write_pro_start,
       enddate = write_pro_end,
@@ -909,7 +909,7 @@ remotedata_db_insert <-
 
     raw_ins <- PEcAn.DB::dbfile.input.insert(
       in.path = output$raw_data_path,
-      in.prefix = output$raw_data_name,
+      in.prefix = basename(output$raw_data_path),
       siteid = siteid,
       startdate = write_raw_start,
       enddate = write_raw_end,
@@ -927,7 +927,7 @@ remotedata_db_insert <-
 
     pro_ins <- PEcAn.DB::dbfile.input.insert(
       in.path = output$process_data_path,
-      in.prefix = output$process_data_name,
+      in.prefix = basename(output$process_data_path),
       siteid = siteid,
       startdate = write_pro_start,
       enddate = write_pro_end,
@@ -943,7 +943,7 @@ remotedata_db_insert <-
     # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates
     pro_ins <- PEcAn.DB::dbfile.input.insert(
       in.path = output$process_data_path,
-      in.prefix = output$process_data_name,
+      in.prefix = basename(output$process_data_path),
       siteid = siteid,
       startdate = write_pro_start,
       enddate = write_pro_end,
@@ -957,7 +957,7 @@ remotedata_db_insert <-
         "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;",
         write_raw_start,
         write_raw_end,
-        output$raw_data_name,
+        basename(output$raw_data_path),
         raw_id
       ),
       dbcon
@@ -966,7 +966,7 @@ remotedata_db_insert <-
       sprintf(
         "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
         output$raw_data_path,
-        output$raw_data_name,
+        basename(output$raw_data_path),
         raw_id
       ),
       dbcon
@@ -985,7 +985,7 @@ remotedata_db_insert <-
         "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;",
         write_pro_start,
         write_pro_end,
-        output$process_data_name,
+        basename(output$process_data_path),
         pro_id
       ),
       dbcon
@@ -994,7 +994,7 @@ remotedata_db_insert <-
       sprintf(
         "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
         output$process_data_path,
-        output$process_data_name,
+        basename(output$process_data_path),
         pro_id
       ),
       dbcon
@@ -1004,7 +1004,7 @@ remotedata_db_insert <-
         "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f",
         write_raw_start,
         write_raw_end,
-        output$raw_data_name,
+        basename(output$raw_data_path),
         raw_id
       ),
       dbcon
@@ -1013,7 +1013,7 @@ remotedata_db_insert <-
       sprintf(
         "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
         output$raw_data_path,
-        output$raw_data_name,
+        basename(output$raw_data_path),
         raw_id
       ),
       dbcon
@@ -1030,7 +1030,7 @@ remotedata_db_insert <-
         "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;",
         write_pro_start,
         write_pro_end,
-        output$process_data_name,
+        basename(output$process_data_path),
         pro_id
       ),
       dbcon
@@ -1039,7 +1039,7 @@ remotedata_db_insert <-
       sprintf(
         "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
         output$process_data_path,
-        output$process_data_name,
+        basename(output$process_data_path),
         pro_id
       ),
       dbcon
@@ -1055,7 +1055,7 @@ remotedata_db_insert <-
         "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;",
         write_pro_start,
         write_pro_end,
-        output$process_data_name,
+        basename(output$process_data_path),
         pro_id
       ),
       dbcon
@@ -1064,7 +1064,7 @@ remotedata_db_insert <-
       sprintf(
         "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
         output$process_data_path,
-        output$process_data_name,
+        basename(output$process_data_path),
         pro_id
       ),
       dbcon
@@ -1072,7 +1072,7 @@ remotedata_db_insert <-
 
     raw_ins <- PEcAn.DB::dbfile.input.insert(
       in.path = output$raw_data_path,
-      in.prefix = output$raw_data_name,
+      in.prefix = basename(output$raw_data_path),
       siteid = siteid,
       startdate = write_raw_start,
       enddate = write_raw_end,
@@ -1097,7 +1097,7 @@ remotedata_db_insert <-
 
     raw_ins <- PEcAn.DB::dbfile.input.insert(
      in.path = output$raw_data_path,
-      in.prefix = output$raw_data_name,
+      in.prefix = basename(output$raw_data_path),
       siteid = siteid,
       startdate = write_raw_start,
       enddate = write_raw_end,
@@ -1116,7 +1116,7 @@ remotedata_db_insert <-
         "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;",
         write_raw_start,
         write_raw_end,
-        output$raw_data_name,
+        basename(output$raw_data_path),
         raw_id
       ),
       dbcon
@@ -1125,7 +1125,7 @@ remotedata_db_insert <-
       sprintf(
         "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;",
         output$raw_data_path,
-        output$raw_data_name,
+        basename(output$raw_data_path),
         raw_id
       ),
       dbcon
diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
index 065316062d5..4229ed44432 100644
--- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py
+++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
@@ -13,7 +13,6 @@
 from . process_remote_data import process_remote_data
 from . gee_utils import get_sitename
 from . create_geojson import create_geojson
-import os
 
 
 def rp_control(
@@ -118,7 +117,6 @@ def rp_control(
             existing_raw_file_path,
             raw_file_name
         )
-        get_datareturn_name = os.path.split(get_datareturn_path)
 
     if stage_process_data:
         if input_file is None:
@@ -133,21 +131,16 @@ def rp_control(
             existing_pro_file_path,
             pro_file_name
         )
-        process_datareturn_name = os.path.split(process_datareturn_path)
 
     output = {
-        "raw_data_name": None,
         "raw_data_path": None,
-        "process_data_name": None,
         "process_data_path": None,
     }
 
     if stage_get_data:
-        output["raw_data_name"] = get_datareturn_name[1]
        output["raw_data_path"] = get_datareturn_path
 
     if stage_process_data:
-        output["process_data_name"] = process_datareturn_name[1]
        output["process_data_path"] = process_datareturn_path
 
     return output

From 4edb8b0ec9ed8000ef5ee6d5e849927395b8193e Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 22 Aug 2020 10:46:29 +0530
Subject: [PATCH 1425/2289] . fix

---
 modules/data.remote/R/remote_process.R | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index c641c109341..2bcdb03594b 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -408,7 +408,8 @@ read_remote_registry <- function(source, collection){
       PEcAn.logger::logger.severe("Requested source is not available")
     }
   )
-
+  . <- NULL
+
  if(!(purrr::is_empty(register %>% purrr::keep(names(.) == "collection")))){
    # this is a type of source that requires different setup for its collections, e.g. GEE
    # then read collection specific information
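The `. <- NULL` line above is the usual workaround for the `R CMD check` NOTE "no visible binding for global variable '.'", which the magrittr dot inside `purrr::keep()` would otherwise trigger. A minimal sketch of the pattern, using a made-up `register` list rather than the real parsed registration file:

```r
library(magrittr)

# hypothetical stand-in for the list parsed from a registration XML
register <- list(collection = list(original_name = "s2"), coordtype = "point")

. <- NULL  # declare the dot so the pipe below passes R CMD check
register %>% purrr::keep(names(.) == "collection")
#> keeps only the elements named "collection"; the pipe's own `.` shadows
#> the NULL at run time, so behaviour is unchanged
```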
From c8e0a6d53ad6f4320fc40ae08ecd721cb349593a Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 22 Aug 2020 16:29:52 +0530
Subject: [PATCH 1426/2289] convert appeears csv to nc

---
 .../inst/RpTools/RpTools/appeears2pecan.py    | 33 ++++++++++++++-----
 .../inst/RpTools/RpTools/get_remote_data.py   |  2 +-
 2 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py
index 5fd24e46287..4b7a9b4f9c9 100644
--- a/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py
+++ b/modules/data.remote/inst/RpTools/RpTools/appeears2pecan.py
@@ -10,7 +10,8 @@
 Author(s): Ayush Prasad
 """
-
+import xarray as xr
+import pandas as pd
 import requests as r
 import geopandas as gpd
 import getpass
@@ -26,7 +27,7 @@
 
 
 def appeears2pecan(
-    geofile, outdir, filename, start, end, product, projection=None, credfile=None
+    geofile, outdir, out_filename, start, end, product, projection=None, credfile=None
 ):
     """
     Downloads remote sensing data from AppEEARS
@@ -37,7 +38,7 @@ def appeears2pecan(
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
 
-    filename (str) -- filename of the output file
+    out_filename (str) -- filename of the output file
 
     start (str) -- starting date of the data request in the form YYYY-MM-DD
@@ -227,11 +228,7 @@ def authenticate():
             if os.path.splitext(filename)[1][1:] == outformat:
                 break
 
-    if siteid is None:
-        siteid = site_name
-
-    if projection is None:
-        projection = "NA"
     timestamp = time.strftime("%y%m%d%H%M%S")
     save_path = os.path.join(
         outdir,
@@ -242,5 +239,25 @@ def authenticate():
         + outformat
     )
     os.rename(filepath, save_path)
-
+
+    if outformat == "csv":
+        df = pd.read_csv(save_path)
+        coords = {
+            "time": df["Date"].values,
+        }
+
+        tosave = xr.Dataset(
+            df,
+            coords=coords,
+        )
+
+        save_path = os.path.join(
+            outdir,
+            out_filename
+            + "_"
+            + timestamp
+            + ".nc"
+        )
+        tosave.to_netcdf(os.path.join(save_path))
+
     return os.path.abspath(save_path)
diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
index 48b6bc3bd06..a2f0510a244 100644
--- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
+++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py
@@ -84,7 +84,7 @@ def get_remote_data(
         get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, filename=raw_file_name)
 
     if source == "appeears":
-        get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile, filename=raw_file_name)
+        get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile, raw_file_name)
 
     if raw_merge == True and raw_merge != "replace":
         # if output file is of csv type use csv_merge, example AppEEARS point AOI type

From 50a98d88d1224e9103b0f5d9f570412435ae2990 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 22 Aug 2020 16:46:35 +0530
Subject: [PATCH 1427/2289] Apply suggestions from code review (changes in
 file name construction)

Co-authored-by: istfer
---
 book_source/03_topical_pages/03_pecan_xml.Rmd |  8 +---
 modules/data.remote/R/remote_process.R        | 40 +++++++++++++------
 2 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd
index f32f39d1f13..b07e2cabd6e 100644
--- a/book_source/03_topical_pages/03_pecan_xml.Rmd
+++ b/book_source/03_topical_pages/03_pecan_xml.Rmd
@@ -782,8 +782,6 @@ This section describes the tags required for configuring `remote_process`.
 
 ```xml
- ...
- ...
 ...
 ...
 ...
@@ -792,8 +790,6 @@ This section describes the tags required for configuring `remote_process`.
 ...
 ...
 ...
- ...
- ...
 ...
 ...
@@ -811,7 +807,7 @@ This section describes the tags required for configuring `remote_process`.
 
 These tags are only required if processed data is requested:
 
-* `out_process_data`: (optional) type of processed output requested, e.g, lai
+* `out_process_data`: (optional) type of processed output requested, e.g, LAI
 * `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands
 * `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS
 * `pro_mimetype`: (optional) MIME type of the processed file
@@ -822,4 +818,4 @@ The output data from the module are returned in the following tags:
 * `raw_id`: input id of the raw file
 * `raw_path`: absolute path to the raw file
 * `pro_id`: input id of the processed file
-* `pro_path`: absolute path to the processed file
\ No newline at end of file
+* `pro_path`: absolute path to the processed file
diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index 2bcdb03594b..beb12dc4dac 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -105,7 +105,8 @@ remote_process <- function(settings) {
                             is.character(outdir))
   PEcAn.logger::severeifnot("Check if source is of character type and is not NULL",
                             is.character(source))
-  # PEcAn.logger::severeifnot("Source should be one of gee, appeears", source == "gee" || source == "appeears")
+  these_sources <- gsub("^.+?\\.(.+?)\\..*$", "\\1", list.files(system.file("registration", package = "PEcAn.data.remote")))
+  PEcAn.logger::severeifnot(paste0("Source should be one of ", paste(these_sources, collapse = ' ')), toupper(source) %in% these_sources)
   # collection validation to be implemented
   if (!is.null(projection)) {
     PEcAn.logger::severeifnot("projection should be of character type",
@@ -145,7 +146,7 @@ remote_process <- function(settings) {
 
   # construct raw file name
   raw_file_name <-
-    construct_raw_filename(source, collection, siteid_short, scale, projection, qc)
+    construct_remotedata_filename(source, collection, siteid_short, scale, projection, qc, algorithm, out_process_data)
 
   if (is.null(out_process_data)){
     pro_file_name <- NULL
@@ -268,36 +269,41 @@ remote_process <- function(settings) {
 
 
 
-##' construct the raw file name
+##' construct remotedata module file names
 ##'
-##' @name construct_raw_filename
-##' @title construct_raw_filename
+##' @name construct_remotedata_filename
+##' @title construct_remotedata_filename
 ##' @param source source
 ##' @param collection collection or product requested from the source
 ##' @param siteid shortform of siteid
 ##' @param scale scale, NULL by default
 ##' @param projection projection, NULL by default
 ##' @param qc qc_parameter, NULL by default
+##' @param algorithm algorithm name to process data, NULL by default
+##' @param out_process_data variable name requested for the processed file, NULL by default
-##' @return raw_file_name
+##' @return remotedata_file_names
 ##' @examples
 ##' \dontrun{
-##' raw_file_name <- construct_raw_filename(
+##' remotedata_file_names <- construct_remotedata_filename(
 ##'  source="gee",
 ##'  collection="s2",
 ##'  siteid="0-721",
 ##'  scale=10.0
 ##'  projection=NULL
-##'  qc=1.0)
+##'  qc=1.0,
+##'  algorithm="snap",
+##'  out_process_data="lai")
 ##' }
 ##' @author Ayush Prasad
-construct_raw_filename <-
+construct_remotedata_filename <-
  function(source,
           collection,
           siteid,
           scale = NULL,
           projection = NULL,
-          qc = NULL) {
-    # use NA if a parameter is not applicable and is NULL
+          qc = NULL,
+          algorithm = NULL,
+          out_process_data = NULL) {
    # skip if a parameter is not applicable and is NULL
    if (is.null(scale)) {
      scale_str <- "_"
@@ -316,8 +322,18 @@ construct_raw_filename <-
    }
 
    raw_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, "site_", siteid)
+    if(!is.null(out_process_data)){
+      alg_str <- paste0(algorithm, "_")
+      var_str <- paste0(out_process_data, "_")
+      pro_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, alg_str, var_str, "site_", siteid)
+    }else{
+      pro_file_name <- NULL
+    }
+
+    remotedata_file_names <- list(raw_file_name = raw_file_name,
+                                  pro_file_name = pro_file_name)
 
-    return(raw_file_name)
+    return(remotedata_file_names)
  }
 

From 4f936e17ff33f357bb8eaa4ac3715a1df6428357 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 22 Aug 2020 18:29:11 +0530
Subject: [PATCH 1428/2289] better way to construct pro file names

---
 modules/data.remote/R/remote_process.R        | 27 +++++++-------------
 ...me.Rd => construct_remotedata_filename.Rd} | 26 +++++++++++-------
 2 files changed, 27 insertions(+), 26 deletions(-)
 rename modules/data.remote/man/{construct_raw_filename.Rd => construct_remotedata_filename.Rd} (53%)

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index beb12dc4dac..cd08e29902c 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -145,23 +145,20 @@ remote_process <- function(settings) {
   }
 
   # construct raw file name
-  raw_file_name <-
-    construct_remotedata_filename(source, collection, siteid_short, scale, projection, qc, algorithm, out_process_data)
+  remotedata_file_names <- construct_remotedata_filename(source, collection, siteid_short, scale, projection, qc, algorithm, out_process_data)
 
-  if (is.null(out_process_data)){
-    pro_file_name <- NULL
-  }else{
-    pro_file_name <- paste0(algorithm,
-                            "_",
-                            out_process_data,
-                            "_site_",
-                            siteid_short)
-  }
+  raw_file_name <- remotedata_file_names$raw_file_name
+
+  pro_file_name <- remotedata_file_names$pro_file_name
+
+  print("see this")
+  print(pro_file_name)
 
   # check if any data is already present in the inputs table
   dbstatus <-
     remotedata_db_check(
       raw_file_name = raw_file_name,
+      pro_file_name = pro_file_name,
       start = start,
       end = end,
       siteid = siteid,
@@ -525,7 +522,6 @@ remotedata_db_check <-
     if (overwrite) {
       PEcAn.logger::logger.warn("overwrite is set to TRUE, any existing file will be entirely replaced")
       if (!is.null(out_process_data)) {
-
         if (nrow(pro_check <-
                  PEcAn.DB::db.query(
                    sprintf(
@@ -577,10 +573,7 @@ remotedata_db_check <-
       existing_raw_file_path <- NULL
     } else if (!is.null(out_process_data)) {
       # if processed data is requested, example LAI
-
-      # construct processed file name
-      pro_file_name = paste0(algorithm, "_", out_process_data, "_site_", siteid_short)
-
+      
       # check if processed file exists
       if (nrow(pro_check <-
                PEcAn.DB::db.query(
@@ -700,7 +693,7 @@ remotedata_db_check <-
           existing_raw_file_path = raw_check$file_path
           remotefile_check_flag <- 3
         } else{
-          existing_raw_file_path = NULL
+          existing_raw_file_path <- NULL
         }
       } else{
         # if no processed or raw file of requested type exists
diff --git a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_remotedata_filename.Rd
similarity index 53%
rename from modules/data.remote/man/construct_raw_filename.Rd
rename to modules/data.remote/man/construct_remotedata_filename.Rd
index 7775ba75111..ae8149535cc 100644
--- a/modules/data.remote/man/construct_raw_filename.Rd
+++ b/modules/data.remote/man/construct_remotedata_filename.Rd
@@ -1,16 +1,18 @@
 % Generated by roxygen2: do not edit by hand
 % Please edit documentation in R/remote_process.R
-\name{construct_raw_filename}
-\alias{construct_raw_filename}
-\title{construct_raw_filename}
+\name{construct_remotedata_filename}
+\alias{construct_remotedata_filename}
+\title{construct_remotedata_filename}
 \usage{
-construct_raw_filename(
+construct_remotedata_filename(
   source,
   collection,
   siteid,
   scale = NULL,
   projection = NULL,
-  qc = NULL
+  qc = NULL,
+  algorithm = NULL,
+  out_process_data = NULL
 )
 }
 \arguments{
@@ -25,22 +27,28 @@ construct_raw_filename(
 \item{projection}{projection, NULL by default}
 
 \item{qc}{qc_parameter, NULL by default}
+
+\item{algorithm}{algorithm name to process data, NULL by default}
+
+\item{out_process_data}{variable name requested for the processed file, NULL by default}
 }
 \value{
-raw_file_name
+remotedata_file_names
 }
 \description{
-construct the raw file name
+construct remotedata module file names
 }
 \examples{
 \dontrun{
-raw_file_name <- construct_raw_filename(
+remotedata_file_names <- construct_remotedata_filename(
  source="gee",
  collection="s2",
  siteid="0-721",
  scale=10.0
  projection=NULL
-  qc=1.0)
+  qc=1.0,
+  algorithm="snap",
+  out_process_data="lai")
 }
 }
 \author{

From 24ecc7398ff49c17f69e0710e0a56b9c32153d9f Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Sat, 22 Aug 2020 18:30:34 +0530
Subject: [PATCH 1429/2289] better way to construct pro file names

---
 modules/data.remote/R/remote_process.R | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index cd08e29902c..e0cf5b04117 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -150,9 +150,7 @@ remote_process <- function(settings) {
   raw_file_name <- remotedata_file_names$raw_file_name
 
   pro_file_name <- remotedata_file_names$pro_file_name
-
-  print("see this")
-  print(pro_file_name)
+
   # check if any data is already present in the inputs table
   dbstatus <-
requested type exists diff --git a/modules/data.remote/man/construct_raw_filename.Rd b/modules/data.remote/man/construct_remotedata_filename.Rd similarity index 53% rename from modules/data.remote/man/construct_raw_filename.Rd rename to modules/data.remote/man/construct_remotedata_filename.Rd index 7775ba75111..ae8149535cc 100644 --- a/modules/data.remote/man/construct_raw_filename.Rd +++ b/modules/data.remote/man/construct_remotedata_filename.Rd @@ -1,16 +1,18 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/remote_process.R -\name{construct_raw_filename} -\alias{construct_raw_filename} -\title{construct_raw_filename} +\name{construct_remotedata_filename} +\alias{construct_remotedata_filename} +\title{construct_remotedata_filename} \usage{ -construct_raw_filename( +construct_remotedata_filename( source, collection, siteid, scale = NULL, projection = NULL, - qc = NULL + qc = NULL, + algorithm = NULL, + out_process_data = NULL ) } \arguments{ @@ -25,22 +27,28 @@ construct_raw_filename( \item{projection}{projection, NULL by default} \item{qc}{qc_parameter, NULL by default} + +\item{algorithm}{algorithm name to process data, NULL by default} + +\item{out_process_data}{variable name requested for the processed file, NULL by default} } \value{ -raw_file_name +remotedata_file_names } \description{ -construct the raw file name +construct remotedata module file names } \examples{ \dontrun{ -raw_file_name <- construct_raw_filename( +remotedata_file_names <- construct_remotedata_filename( source="gee", collection="s2", siteid="0-721", scale=10.0 projection=NULL - qc=1.0) + qc=1.0, + algorithm="snap", + out_process_data="lai") } } \author{ From 24ecc7398ff49c17f69e0710e0a56b9c32153d9f Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 18:30:34 +0530 Subject: [PATCH 1429/2289] better way to construct pro file names --- modules/data.remote/R/remote_process.R | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index cd08e29902c..e0cf5b04117 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -150,9 +150,7 @@ remote_process <- function(settings) { raw_file_name <- remotedata_file_names$raw_file_name pro_file_name <- remotedata_file_names$pro_file_name - - print("see this") - print(pro_file_name) + # check if any data is already present in the inputs table dbstatus <- From ce1fb5b71a4e3bfb40a0ae8b13168d8dc98c18e4 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 18:33:30 +0530 Subject: [PATCH 1430/2289] Apply suggestions from code review Co-authored-by: istfer --- modules/data.remote/inst/registration/register.GEE.xml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/inst/registration/register.GEE.xml b/modules/data.remote/inst/registration/register.GEE.xml index 036f9e4e92c..14a9f16a98f 100644 --- a/modules/data.remote/inst/registration/register.GEE.xml +++ b/modules/data.remote/inst/registration/register.GEE.xml @@ -6,8 +6,8 @@ polygon - 10 - 1 + 10 + 1 CF Meteorology @@ -22,8 +22,8 @@ LANDSAT/LC08/C01/T1_SR l8 - 30 - 1 + 30 + 1 polygon point From dae10c26a10aa85d54a5e0d0f01c53a47b96f3fa Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 18:45:10 +0530 Subject: [PATCH 1431/2289] improved remotedata_db_insert doc --- modules/data.remote/R/remote_process.R | 4 ++-- modules/data.remote/man/remotedata_db_insert.Rd | 4 ++-- 2 files changed, 4 
insertions(+), 4 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index e0cf5b04117..18dc612a841 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -823,8 +823,8 @@ remotedata_db_check <- ##' @param write_raw_end write_raw_end, end date of the raw file ##' @param write_pro_start write_pro_start ##' @param write_pro_end write_pro_end -##' @param raw_check raw_check -##' @param pro_check pro_check +##' @param raw_check id, site_id, name, start_date, end_date, of the existing raw file from inputs table and file_path from dbfiles tables +##' @param pro_check pro_check id, site_id, name, start_date, end_date, of the existing processed file from inputs table and file_path from dbfiles tables ##' @param raw_mimetype raw_mimetype ##' @param raw_formatname raw_formatname ##' @param pro_mimetype pro_mimetype diff --git a/modules/data.remote/man/remotedata_db_insert.Rd b/modules/data.remote/man/remotedata_db_insert.Rd index a18846b5dc6..6527a041a10 100644 --- a/modules/data.remote/man/remotedata_db_insert.Rd +++ b/modules/data.remote/man/remotedata_db_insert.Rd @@ -42,9 +42,9 @@ remotedata_db_insert( \item{write_pro_end}{write_pro_end} -\item{raw_check}{raw_check} +\item{raw_check}{id, site_id, name, start_date, end_date, of the existing raw file from inputs table and file_path from dbfiles tables} -\item{pro_check}{pro_check} +\item{pro_check}{pro_check id, site_id, name, start_date, end_date, of the existing processed file from inputs table and file_path from dbfiles tables} \item{raw_mimetype}{raw_mimetype} From 5ca7db008928ba2886ce0e343cb084022991ba61 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 20:43:38 +0530 Subject: [PATCH 1432/2289] post-processing function trigger Co-authored-by: istfer --- modules/data.remote/R/remote_process.R | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 18dc612a841..c16bf8bd3b0 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -226,7 +226,17 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - + # IF: this is extremely hacky but we will need a post-processing function/sub-module here + # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example + # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion + if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ + settings$remotedata$collapse <- TRUE + } + + if(!is.null(settings$remotedata$collapse)){ + latlon <- PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) + collapse_remote_data(output, out_process_data, latlon) + } # insert output data in the DB db_out <- remotedata_db_insert( From 18677d32ddaded1c7944e5017f259fc7d003c298 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 21:25:24 +0530 Subject: [PATCH 1433/2289] add collapse remote data Co-authored-by: istfer --- modules/data.remote/R/remote_process.R | 90 ++++++++++++++++++- .../data.remote/man/collapse_remote_data.Rd | 29 ++++++ 2 files changed, 118 insertions(+), 1 deletion(-) create mode 100644 modules/data.remote/man/collapse_remote_data.Rd diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 
c16bf8bd3b0..4876bd8bffc 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -229,7 +229,7 @@ remote_process <- function(settings) { # IF: this is extremely hacky but we will need a post-processing function/sub-module here # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ + if(source == "gee" && collection == "s2" && !is.null(algorithm) && !is.null(out_process_data)){ settings$remotedata$collapse <- TRUE } @@ -1154,3 +1154,91 @@ remotedata_db_insert <- return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) } + + + + + + + + + + + + + + + + + + +##' collapses remote data (currently LAI only) +##' +##' @name collapse_remote_data +##' @title collapse_remote_data +##' @param output output list from rp_control +##' @param out_process_data type of processed data +##' @param latlon latlon +##' @examples +##' \dontrun{ +##' collapse_remote_data( +##' output, +##' out_process_data, +##' latlon) +##' } +##' @author Istem Fer +collapse_remote_data <- function(output, out_process_data, latlon){ + + # open up the nc + nc <- ncdf4::nc_open(output$process_data_path) + ncdat <- ncdf4::ncvar_get(nc, out_process_data) + if(length(dim(ncdat)) == 3){ + aggregated_ncdat_mean <- apply(ncdat, 3, mean, na.rm = TRUE) + aggregated_ncdat_std <- apply(ncdat, 3, sd, na.rm = TRUE) + } + + # write back to nc file + outlist <- list() + outlist[[1]] <- aggregated_ncdat_mean # LAI in (m2 m-2) + outlist[[2]] <- aggregated_ncdat_std + outlist[[3]] <- ncdat + + t <- ncdf4::ncdim_def(name = "time", + units = nc$var$lai$dim[[3]]$units, + nc$var$lai$dim[[3]]$vals, # allow partial years, this info is already in matrix_weather + calendar = "standard", + unlim = TRUE) + + + lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(latlon$lat), longname = "latitude") + lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(latlon$lon), longname = "longitude") + + x <- ncdf4::ncdim_def("x", "", vals = as.numeric(nc$var$lai$dim[[2]]$vals), longname = "") + y <- ncdf4::ncdim_def("y", "", vals = as.numeric(nc$var$lai$dim[[1]]$vals), longname = "") + + nc_var <- list() + nc_var[[1]] <- ncdf4::ncvar_def("LAI", units = "m2 m-2", dim = list(lon, lat, t), missval = -999, longname = "Leaf Area Index") + nc_var[[2]] <- ncdf4::ncvar_def("LAI_STD", units = "", dim = list(lon, lat, t), missval = -999, longname = "") + nc_var[[3]] <- ncdf4::ncvar_def("LAI_UNCOLLAPSED", units = "m2 m-2", dim = list(y, x, t), missval = -999, longname = "") + + ncdf4::nc_close(nc) + + ### Output netCDF data, overwrite previous + nc <- ncdf4::nc_create(output$process_data_path, nc_var) + for (i in seq_along(nc_var)) { + # print(i) + ncdf4::ncvar_put(nc, nc_var[[i]], outlist[[i]]) + } + ncdf4::nc_close(nc) + +} # collapse_remote_data + + + + + + + + + + diff --git a/modules/data.remote/man/collapse_remote_data.Rd b/modules/data.remote/man/collapse_remote_data.Rd new file mode 100644 index 00000000000..c8dba92f5de --- /dev/null +++ b/modules/data.remote/man/collapse_remote_data.Rd @@ -0,0 +1,29 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/remote_process.R +\name{collapse_remote_data} +\alias{collapse_remote_data} +\title{collapse_remote_data} +\usage{ 
+collapse_remote_data(output, out_process_data, latlon) +} +\arguments{ +\item{output}{output list from rp_control} + +\item{out_process_data}{type of processed data} + +\item{latlon}{latlon} +} +\description{ +collapses remote data (currently LAI only) +} +\examples{ +\dontrun{ +collapse_remote_data( + output, + out_process_data, + latlon) +} +} +\author{ +Istem Fer +} From 0c4aa2e434093efd534c92eded3174869f4802d2 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 21:30:24 +0530 Subject: [PATCH 1434/2289] update load_netcdf.R Co-authored-by: istfer --- modules/benchmark/R/load_netcdf.R | 39 +++++++++++++------------------ 1 file changed, 16 insertions(+), 23 deletions(-) diff --git a/modules/benchmark/R/load_netcdf.R b/modules/benchmark/R/load_netcdf.R index 9ffbbe01197..aa116030d17 100644 --- a/modules/benchmark/R/load_netcdf.R +++ b/modules/benchmark/R/load_netcdf.R @@ -9,40 +9,31 @@ ##' @param vars character ##' @author Istem Fer load_x_netcdf <- function(data.path, format, site, vars = NULL) { - data.path <- sapply(data.path, function(x) dir(dirname(x), basename(x), full.names = TRUE)) - nc <- lapply(data.path, ncdf4::nc_open) - dat <- list() for (ind in seq_along(vars)) { nc.dat <- lapply(nc, ncdf4::ncvar_get, vars[ind]) dat[vars[ind]] <- as.data.frame(unlist(nc.dat)) } - dat <- as.matrix(as.data.frame(dat)) - # we need to replace filling/missing values with NA now we don't want these values to go into unit # conversion dat[dat %in% as.numeric(format$na.strings)] <- NA dat <- as.data.frame(dat) colnames(dat) <- vars - # deal with time time.col <- list() for (i in seq_along(nc)) { dims <- names(nc[[i]]$dim) time.var <- grep(pattern = "time", dims, ignore.case = TRUE) time.col[[i]] <- ncdf4::ncvar_get(nc[[i]], dims[time.var]) - t.units <- ncdf4::ncatt_get(nc[[i]], dims[time.var])$units - # If the unit has if of the form * since YYYY-MM-DD * with "-hour" timezone offset # This is a feature of the met produced by met2CF - if(str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*\\s-\\d+")){ - unit2 <- str_split_fixed(t.units,"\\s-",2)[1] - offset <- str_split_fixed(t.units,"\\s-",2)[2] %>% as.numeric() - + if(stringr::str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*\\s-\\d+")){ + unit2 <- stringr::str_split_fixed(t.units,"\\s-",2)[1] + offset <- stringr::str_split_fixed(t.units,"\\s-",2)[2] %>% as.numeric() date_time <- suppressWarnings(try(lubridate::ymd((unit2)))) if(is.na(date_time)){ date_time <- suppressWarnings(try(lubridate::ymd_hms(unit2))) @@ -50,14 +41,22 @@ load_x_netcdf <- function(data.path, format, site, vars = NULL) { if(is.na(date_time)){ PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") } - - t.units <- paste(str_split_fixed(t.units," since",2)[1], "since", + t.units <- paste(stringr::str_split_fixed(t.units," since",2)[1], "since", date_time - lubridate::hms(paste(offset,":00:00"))) + }else if(stringr::str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*")){ + unit2 <- stringr::str_split_fixed(t.units,"\\s-",2)[1] + date_time <- suppressWarnings(try(lubridate::ymd((unit2)))) + if(is.na(date_time)){ + date_time <- suppressWarnings(try(lubridate::ymd_hms(unit2))) + } + if(is.na(date_time)){ + PEcAn.logger::logger.error("All time formats failed to parse. 
No formats found.") + } + t.units <- paste(stringr::str_split_fixed(t.units," since",2)[1], "since", + date_time) } - # for heterogenous formats try parsing ymd_hms date.origin <- suppressWarnings(try(lubridate::ymd_hms(t.units))) - # parsing ymd if (is.na(date.origin)) { date.origin <- lubridate::ymd(t.units) @@ -66,26 +65,20 @@ load_x_netcdf <- function(data.path, format, site, vars = NULL) { if (is.na(date.origin)) { PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") } - - time.stamp.match <- gsub("UTC", "", date.origin) t.units <- gsub(paste0(" since ", time.stamp.match, ".*"), "", t.units) - # need to change system TZ otherwise, lines below keeps writing in the current time zone Sys.setenv(TZ = 'UTC') foo <- as.POSIXct(date.origin, tz = "UTC") + udunits2::ud.convert(time.col[[i]], t.units, "seconds") time.col[[i]] <- foo } - # needed to use 'round' to 'mins' here, otherwise I end up with values like '2006-12-31 23:29:59' # while reading Ameriflux for example however the model timesteps are more regular and the last # value can be '2006-12-31 23:30:00'.. this will result in cutting the last value in the # align_data step dat$posix <- round(as.POSIXct(do.call("c", time.col), tz = "UTC"), "mins") dat$posix <- as.POSIXct(dat$posix) - lapply(nc, ncdf4::nc_close) - return(dat) -} # load_x_netcdf +} # load_x_netcdf \ No newline at end of file From 8cd35a435ff1bc6858dfa66fd23931b7ead429aa Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 22:00:11 +0530 Subject: [PATCH 1435/2289] add stringr req --- modules/benchmark/DESCRIPTION | 3 +- modules/data.remote/DESCRIPTION | 1 + modules/data.remote/R/remote_process.R | 305 ++++++++++++++++--------- 3 files changed, 200 insertions(+), 109 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 42d56538d57..82e4e1d3b4a 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -31,7 +31,8 @@ Imports: reshape2, dbplyr, SimilarityMeasures, - zoo + zoo, + stringr Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 7b6c3663c2d..450d2ee18d0 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -22,6 +22,7 @@ Imports: reticulate, PEcAn.logger, PEcAn.remote, + PEcAn.data.atmosphere, stringr (>= 1.1.0), binaryLogic, doParallel, diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 4876bd8bffc..357b1bbac0d 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -49,43 +49,46 @@ remote_process <- function(settings) { raw_mimetype <- reg_info$raw_mimetype raw_formatname <- reg_info$raw_formatname pro_mimetype <- reg_info$pro_mimetype - pro_formatname <- reg_info$pro_formatname + pro_formatname <- reg_info$pro_formatname + - if (!is.null(reg_info$scale)) { - if(!is.null(settings$remotedata$scale)){ + if (!is.null(settings$remotedata$scale)) { scale <- as.double(settings$remotedata$scale) scale <- format(scale, nsmall = 1) - }else{ + } else{ scale <- as.double(reg_info$scale) scale <- format(scale, nsmall = 1) PEcAn.logger::logger.warn(paste0("scale not provided, using default scale ", scale)) } - }else{ + } else{ scale <- NULL } if (!is.null(reg_info$qc)) { - if(!is.null(settings$remotedata$qc)){ + if (!is.null(settings$remotedata$qc)) { qc <- as.double(settings$remotedata$qc) qc <- format(qc, nsmall = 1) - }else{ + } else{ qc <- 
as.double(reg_info$qc) qc <- format(qc, nsmall = 1) PEcAn.logger::logger.warn(paste0("qc not provided, using default qc ", qc)) } - }else{ + } else{ qc <- NULL } - + if (!is.null(reg_info$projection)) { - if(!is.null(settings$remotedata$projection)){ + if (!is.null(settings$remotedata$projection)) { projection <- settings$remotedata$projection - }else{ + } else{ projection <- reg_info$projection - PEcAn.logger::logger.warn(paste0("projection not provided, using default projection ", projection)) + PEcAn.logger::logger.warn(paste0( + "projection not provided, using default projection ", + projection + )) } - }else{ + } else{ projection <- NULL } @@ -105,8 +108,10 @@ remote_process <- function(settings) { is.character(outdir)) PEcAn.logger::severeifnot("Check if source is of character type and is not NULL", is.character(source)) - these_sources <- gsub("^.+?\\.(.+?)\\..*$", "\\1", list.files(system.file("registration", package = "PEcAn.data.remote"))) - PEcAn.logger::severeifnot(paste0("Source should be one of ", paste(these_sources, collapse = ' ')), toupper(source) %in% these_sources) + these_sources <- + gsub("^.+?\\.(.+?)\\..*$", "\\1", list.files(system.file("registration", package = "PEcAn.data.remote"))) + PEcAn.logger::severeifnot(paste0("Source should be one of ", paste(these_sources, collapse = ' ')), + toupper(source) %in% these_sources) # collection validation to be implemented if (!is.null(projection)) { PEcAn.logger::severeifnot("projection should be of character type", @@ -140,17 +145,36 @@ remote_process <- function(settings) { con = dbcon ), use.names = FALSE) - if(!(tolower(gsub(".*type(.+),coordinates.*", "\\1", gsub("[^=A-Za-z,0-9{} ]+","",coords))) %in% reg_info$coordtype)){ - PEcAn.logger::logger.severe(paste0("Coordinate type of the site is not supported by the requested source, please make sure that your site type is ", reg_info$coordtype)) + if (!(tolower(gsub( + ".*type(.+),coordinates.*", + "\\1", + gsub("[^=A-Za-z,0-9{} ]+", "", coords) + )) %in% reg_info$coordtype)) { + PEcAn.logger::logger.severe( + paste0( + "Coordinate type of the site is not supported by the requested source, please make sure that your site type is ", + reg_info$coordtype + ) + ) } # construct raw file name - remotedata_file_names <- construct_remotedata_filename(source, collection, siteid_short, scale, projection, qc, algorithm, out_process_data) + remotedata_file_names <- + construct_remotedata_filename( + source, + collection, + siteid_short, + scale, + projection, + qc, + algorithm, + out_process_data + ) raw_file_name <- remotedata_file_names$raw_file_name pro_file_name <- remotedata_file_names$pro_file_name - + # check if any data is already present in the inputs table dbstatus <- @@ -168,28 +192,28 @@ remote_process <- function(settings) { dbcon = dbcon ) - remotefile_check_flag <- dbstatus$remotefile_check_flag - start <- dbstatus$start - end <- dbstatus$end - stage_get_data <- dbstatus$stage_get_data - write_raw_start <- dbstatus$write_raw_start - write_raw_end <- dbstatus$write_raw_end - raw_merge <- dbstatus$raw_merge + remotefile_check_flag <- dbstatus$remotefile_check_flag + start <- dbstatus$start + end <- dbstatus$end + stage_get_data <- dbstatus$stage_get_data + write_raw_start <- dbstatus$write_raw_start + write_raw_end <- dbstatus$write_raw_end + raw_merge <- dbstatus$raw_merge existing_raw_file_path <- dbstatus$existing_raw_file_path - stage_process_data <- dbstatus$stage_process_data - write_pro_start <- dbstatus$write_pro_start - write_pro_end <- dbstatus$write_pro_end - 
pro_merge <- dbstatus$pro_merge - input_file <- dbstatus$input_file + stage_process_data <- dbstatus$stage_process_data + write_pro_start <- dbstatus$write_pro_start + write_pro_end <- dbstatus$write_pro_end + pro_merge <- dbstatus$pro_merge + input_file <- dbstatus$input_file existing_pro_file_path <- dbstatus$existing_pro_file_path - raw_check <- dbstatus$raw_check - pro_check <- dbstatus$pro_check - + raw_check <- dbstatus$raw_check + pro_check <- dbstatus$pro_check + # construct outdir path outdir <- file.path(outdir, paste(toupper(source), "site", siteid_short, sep = "_")) - + fcn.args <- list() fcn.args$coords <- coords @@ -216,7 +240,7 @@ remote_process <- function(settings) { fcn.args$raw_file_name <- raw_file_name fcn.args$pro_file_name <- pro_file_name - + arg.string <- PEcAn.utils::listToArgString(fcn.args) @@ -229,12 +253,15 @@ remote_process <- function(settings) { # IF: this is extremely hacky but we will need a post-processing function/sub-module here # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if(source == "gee" && collection == "s2" && !is.null(algorithm) && !is.null(out_process_data)){ + if (source == "gee" && + collection == "s2" && + !is.null(algorithm) && !is.null(out_process_data)) { settings$remotedata$collapse <- TRUE } - - if(!is.null(settings$remotedata$collapse)){ - latlon <- PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) + + if (!is.null(settings$remotedata$collapse)) { + latlon <- + PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) collapse_remote_data(output, out_process_data, latlon) } # insert output data in the DB @@ -317,7 +344,7 @@ construct_remotedata_filename <- } if (is.null(projection)) { prj_str <- "" - }else{ + } else{ prj_str <- paste0(projection, "_") } if (is.null(qc)) { @@ -326,12 +353,32 @@ construct_remotedata_filename <- qc_str <- paste0(format(qc, nsmall = 1), "_") } - raw_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, "site_", siteid) - if(!is.null(out_process_data)){ + raw_file_name <- + paste0(toupper(source), + "_", + collection, + scale_str, + prj_str, + qc_str, + "site_", + siteid) + if (!is.null(out_process_data)) { alg_str <- paste0(algorithm, "_") var_str <- paste0(out_process_data, "_") - pro_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, alg_str, var_str, "site_", siteid) - }else{ + pro_file_name <- + paste0( + toupper(source), + "_", + collection, + scale_str, + prj_str, + qc_str, + alg_str, + var_str, + "site_", + siteid + ) + } else{ pro_file_name <- NULL } @@ -395,7 +442,16 @@ set_stage <- function(result, req_start, req_end, stage) { write_end <- db_end write_start <- req_start } - return (list(req_start = req_start, req_end = req_end, stage = stage, merge = merge, write_start = write_start, write_end = write_end)) + return ( + list( + req_start = req_start, + req_end = req_end, + stage = stage, + merge = merge, + write_start = write_start, + write_end = write_end + ) + ) } @@ -417,29 +473,38 @@ set_stage <- function(result, req_start, req_end, stage) { ##' "COPERNICUS/S2_SR") ##' } ##' @author Istem Fer -read_remote_registry <- function(source, collection){ - +read_remote_registry <- function(source, collection) { # get registration file - register.xml <- system.file(paste0("registration/register.", toupper(source), ".xml"), package = 
"PEcAn.data.remote") + register.xml <- + system.file(paste0("registration/register.", toupper(source), ".xml"), + package = "PEcAn.data.remote") - tryCatch(expr = { - register <- XML::xmlToList(XML::xmlParse(register.xml)) - }, - error = function(e){ + tryCatch( + expr = { + register <- XML::xmlToList(XML::xmlParse(register.xml)) + }, + error = function(e) { PEcAn.logger::logger.severe("Requested source is not available") - } + } ) . <- NULL - if(!(purrr::is_empty(register %>% purrr::keep(names(.) == "collection")))){ + if (!(purrr::is_empty(register %>% purrr::keep(names(.) == "collection")))) { # this is a type of source that requires different setup for its collections, e.g. GEE # then read collection specific information - register <- register[[which(register %>% purrr::map_chr("original_name") == collection)]] + register <- + register[[which(register %>% purrr::map_chr("original_name") == collection)]] } reg_list <- list() - reg_list$original_name <- ifelse(is.null(register$original_name), collection, register$original_name) - reg_list$pecan_name <- ifelse(is.null(register$pecan_name), collection, register$pecan_name) + reg_list$original_name <- + ifelse(is.null(register$original_name), + collection, + register$original_name) + reg_list$pecan_name <- + ifelse(is.null(register$pecan_name), + collection, + register$pecan_name) reg_list$scale <- register$scale reg_list$qc <- register$qc reg_list$projection <- register$projection @@ -464,7 +529,7 @@ read_remote_registry <- function(source, collection){ ##' @param pro_file_name pro_file_name ##' @param start start date requested by user ##' @param end end date requested by the user -##' @param siteid siteid of the site +##' @param siteid siteid of the site ##' @param siteid_short short form of the siteid ##' @param out_get_data out_get_data ##' @param algorithm algorithm @@ -500,7 +565,6 @@ remotedata_db_check <- out_process_data, overwrite, dbcon) { - # Information about the date variables used: # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB # start, end : effective start, end dates created after checking the DB status. 
These dates are sent to rp_control for downloading and processing data @@ -581,7 +645,7 @@ remotedata_db_check <- existing_raw_file_path <- NULL } else if (!is.null(out_process_data)) { # if processed data is requested, example LAI - + # check if processed file exists if (nrow(pro_check <- PEcAn.DB::db.query( @@ -597,7 +661,8 @@ remotedata_db_check <- pro_end <- as.character(datalist$req_end) write_pro_start <- datalist$write_start write_pro_end <- datalist$write_end - if (pro_start != "dont write" || pro_end != "dont write") { + if (pro_start != "dont write" || + pro_end != "dont write") { stage_process_data <- datalist$stage pro_merge <- datalist$merge if (pro_merge == TRUE) { @@ -617,8 +682,10 @@ remotedata_db_check <- !is.null(raw_check$end_date)) { raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- as.character(raw_datalist$req_start) - end <- as.character(raw_datalist$req_end) + start <- + as.character(raw_datalist$req_start) + end <- + as.character(raw_datalist$req_end) write_raw_start <- raw_datalist$write_start write_raw_end <- raw_datalist$write_end stage_get_data <- raw_datalist$stage @@ -794,7 +861,7 @@ remotedata_db_check <- } return( - list( + list( remotefile_check_flag = remotefile_check_flag, start = start, end = end, @@ -812,7 +879,7 @@ remotedata_db_check <- raw_check = raw_check, pro_check = pro_check ) - ) + ) } @@ -826,12 +893,12 @@ remotedata_db_check <- ##' @name remotedata_db_insert ##' @param output output list from rp_control ##' @param remotefile_check_flag remotefile_check_flag -##' @param siteid siteid +##' @param siteid siteid ##' @param out_get_data out_get_data ##' @param out_process_data out_process_data ##' @param write_raw_start write_raw_start, start date of the raw file ##' @param write_raw_end write_raw_end, end date of the raw file -##' @param write_pro_start write_pro_start +##' @param write_pro_start write_pro_start ##' @param write_pro_end write_pro_end ##' @param raw_check id, site_id, name, start_date, end_date, of the existing raw file from inputs table and file_path from dbfiles tables ##' @param pro_check pro_check id, site_id, name, start_date, end_date, of the existing processed file from inputs table and file_path from dbfiles tables @@ -880,7 +947,6 @@ remotedata_db_insert <- pro_mimetype, pro_formatname, dbcon) { - # The value of remotefile_check_flag denotes the following cases: # When processed file is requested, @@ -1152,7 +1218,12 @@ remotedata_db_insert <- } } - return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) + return(list( + raw_id = raw_id, + raw_path = raw_path, + pro_id = pro_id, + pro_path = pro_path + )) } @@ -1161,17 +1232,6 @@ remotedata_db_insert <- - - - - - - - - - - - ##' collapses remote data (currently LAI only) ##' ##' @name collapse_remote_data @@ -1187,12 +1247,11 @@ remotedata_db_insert <- ##' latlon) ##' } ##' @author Istem Fer -collapse_remote_data <- function(output, out_process_data, latlon){ - +collapse_remote_data <- function(output, out_process_data, latlon) { # open up the nc nc <- ncdf4::nc_open(output$process_data_path) ncdat <- ncdf4::ncvar_get(nc, out_process_data) - if(length(dim(ncdat)) == 3){ + if (length(dim(ncdat)) == 3) { aggregated_ncdat_mean <- apply(ncdat, 3, mean, na.rm = TRUE) aggregated_ncdat_std <- apply(ncdat, 3, sd, na.rm = TRUE) } @@ -1200,26 +1259,66 @@ collapse_remote_data <- function(output, out_process_data, latlon){ # write back to nc file outlist <- list() outlist[[1]] <- aggregated_ncdat_mean # LAI in (m2 
m-2) - outlist[[2]] <- aggregated_ncdat_std + outlist[[2]] <- aggregated_ncdat_std outlist[[3]] <- ncdat - t <- ncdf4::ncdim_def(name = "time", - units = nc$var$lai$dim[[3]]$units, - nc$var$lai$dim[[3]]$vals, # allow partial years, this info is already in matrix_weather - calendar = "standard", - unlim = TRUE) + t <- ncdf4::ncdim_def( + name = "time", + units = nc$var$lai$dim[[3]]$units, + nc$var$lai$dim[[3]]$vals, + # allow partial years, this info is already in matrix_weather + calendar = "standard", + unlim = TRUE + ) - lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(latlon$lat), longname = "latitude") - lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(latlon$lon), longname = "longitude") + lat <- + ncdf4::ncdim_def("lat", + "degrees_north", + vals = as.numeric(latlon$lat), + longname = "latitude") + lon <- + ncdf4::ncdim_def("lon", + "degrees_east", + vals = as.numeric(latlon$lon), + longname = "longitude") - x <- ncdf4::ncdim_def("x", "", vals = as.numeric(nc$var$lai$dim[[2]]$vals), longname = "") - y <- ncdf4::ncdim_def("y", "", vals = as.numeric(nc$var$lai$dim[[1]]$vals), longname = "") + x <- + ncdf4::ncdim_def("x", + "", + vals = as.numeric(nc$var$lai$dim[[2]]$vals), + longname = "") + y <- + ncdf4::ncdim_def("y", + "", + vals = as.numeric(nc$var$lai$dim[[1]]$vals), + longname = "") nc_var <- list() - nc_var[[1]] <- ncdf4::ncvar_def("LAI", units = "m2 m-2", dim = list(lon, lat, t), missval = -999, longname = "Leaf Area Index") - nc_var[[2]] <- ncdf4::ncvar_def("LAI_STD", units = "", dim = list(lon, lat, t), missval = -999, longname = "") - nc_var[[3]] <- ncdf4::ncvar_def("LAI_UNCOLLAPSED", units = "m2 m-2", dim = list(y, x, t), missval = -999, longname = "") + nc_var[[1]] <- + ncdf4::ncvar_def( + "LAI", + units = "m2 m-2", + dim = list(lon, lat, t), + missval = -999, + longname = "Leaf Area Index" + ) + nc_var[[2]] <- + ncdf4::ncvar_def( + "LAI_STD", + units = "", + dim = list(lon, lat, t), + missval = -999, + longname = "" + ) + nc_var[[3]] <- + ncdf4::ncvar_def( + "LAI_UNCOLLAPSED", + units = "m2 m-2", + dim = list(y, x, t), + missval = -999, + longname = "" + ) ncdf4::nc_close(nc) @@ -1231,14 +1330,4 @@ collapse_remote_data <- function(output, out_process_data, latlon){ } ncdf4::nc_close(nc) -} # collapse_remote_data - - - - - - - - - - +} # collapse_remote_data \ No newline at end of file From 569ecd1ad3022d56badb3105ec48fb5bf9e0d9be Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 22 Aug 2020 22:39:25 +0530 Subject: [PATCH 1436/2289] update Description --- modules/benchmark/DESCRIPTION | 2 +- modules/data.remote/DESCRIPTION | 1 - 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 82e4e1d3b4a..a971f50446c 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -32,7 +32,7 @@ Imports: dbplyr, SimilarityMeasures, zoo, - stringr + stringr (>= 1.1.0) Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 450d2ee18d0..4e2789a56ac 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -23,7 +23,6 @@ Imports: PEcAn.logger, PEcAn.remote, PEcAn.data.atmosphere, - stringr (>= 1.1.0), binaryLogic, doParallel, parallel, From 9e826ab3b4cb926031e8ff30134f703d4dfe157d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:30:01 +0530 Subject: [PATCH 1437/2289] try again --- modules/benchmark/DESCRIPTION | 2 +- 
modules/data.remote/DESCRIPTION | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index a971f50446c..82e4e1d3b4a 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -32,7 +32,7 @@ Imports: dbplyr, SimilarityMeasures, zoo, - stringr (>= 1.1.0) + stringr Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 4e2789a56ac..450d2ee18d0 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -23,6 +23,7 @@ Imports: PEcAn.logger, PEcAn.remote, PEcAn.data.atmosphere, + stringr (>= 1.1.0), binaryLogic, doParallel, parallel, From c6f1a732e4754703bc01f59e2beb50ac69539d46 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:32:59 +0530 Subject: [PATCH 1438/2289] Revert "try again" This reverts commit 9e826ab3b4cb926031e8ff30134f703d4dfe157d. --- modules/benchmark/DESCRIPTION | 2 +- modules/data.remote/DESCRIPTION | 1 - 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 82e4e1d3b4a..a971f50446c 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -32,7 +32,7 @@ Imports: dbplyr, SimilarityMeasures, zoo, - stringr + stringr (>= 1.1.0) Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 450d2ee18d0..4e2789a56ac 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -23,7 +23,6 @@ Imports: PEcAn.logger, PEcAn.remote, PEcAn.data.atmosphere, - stringr (>= 1.1.0), binaryLogic, doParallel, parallel, From b499eeb0edbf6c9b8acbf82e3cca86343d9a8ae0 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:33:10 +0530 Subject: [PATCH 1439/2289] Revert "update Description" This reverts commit 569ecd1ad3022d56badb3105ec48fb5bf9e0d9be. --- modules/benchmark/DESCRIPTION | 2 +- modules/data.remote/DESCRIPTION | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index a971f50446c..82e4e1d3b4a 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -32,7 +32,7 @@ Imports: dbplyr, SimilarityMeasures, zoo, - stringr (>= 1.1.0) + stringr Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 4e2789a56ac..450d2ee18d0 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -23,6 +23,7 @@ Imports: PEcAn.logger, PEcAn.remote, PEcAn.data.atmosphere, + stringr (>= 1.1.0), binaryLogic, doParallel, parallel, From 930b8ac99189556d2750a2e0194d787d4643fa68 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:33:21 +0530 Subject: [PATCH 1440/2289] Revert "add stringr req" This reverts commit 8cd35a435ff1bc6858dfa66fd23931b7ead429aa. 
--- modules/benchmark/DESCRIPTION | 3 +- modules/data.remote/DESCRIPTION | 1 - modules/data.remote/R/remote_process.R | 305 +++++++++---------------- 3 files changed, 109 insertions(+), 200 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 82e4e1d3b4a..42d56538d57 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -31,8 +31,7 @@ Imports: reshape2, dbplyr, SimilarityMeasures, - zoo, - stringr + zoo Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 450d2ee18d0..7b6c3663c2d 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -22,7 +22,6 @@ Imports: reticulate, PEcAn.logger, PEcAn.remote, - PEcAn.data.atmosphere, stringr (>= 1.1.0), binaryLogic, doParallel, diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 357b1bbac0d..4876bd8bffc 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -49,46 +49,43 @@ remote_process <- function(settings) { raw_mimetype <- reg_info$raw_mimetype raw_formatname <- reg_info$raw_formatname pro_mimetype <- reg_info$pro_mimetype - pro_formatname <- reg_info$pro_formatname - + pro_formatname <- reg_info$pro_formatname + if (!is.null(reg_info$scale)) { - if (!is.null(settings$remotedata$scale)) { + if(!is.null(settings$remotedata$scale)){ scale <- as.double(settings$remotedata$scale) scale <- format(scale, nsmall = 1) - } else{ + }else{ scale <- as.double(reg_info$scale) scale <- format(scale, nsmall = 1) PEcAn.logger::logger.warn(paste0("scale not provided, using default scale ", scale)) } - } else{ + }else{ scale <- NULL } if (!is.null(reg_info$qc)) { - if (!is.null(settings$remotedata$qc)) { + if(!is.null(settings$remotedata$qc)){ qc <- as.double(settings$remotedata$qc) qc <- format(qc, nsmall = 1) - } else{ + }else{ qc <- as.double(reg_info$qc) qc <- format(qc, nsmall = 1) PEcAn.logger::logger.warn(paste0("qc not provided, using default qc ", qc)) } - } else{ + }else{ qc <- NULL } - + if (!is.null(reg_info$projection)) { - if (!is.null(settings$remotedata$projection)) { + if(!is.null(settings$remotedata$projection)){ projection <- settings$remotedata$projection - } else{ + }else{ projection <- reg_info$projection - PEcAn.logger::logger.warn(paste0( - "projection not provided, using default projection ", - projection - )) + PEcAn.logger::logger.warn(paste0("projection not provided, using default projection ", projection)) } - } else{ + }else{ projection <- NULL } @@ -108,10 +105,8 @@ remote_process <- function(settings) { is.character(outdir)) PEcAn.logger::severeifnot("Check if source is of character type and is not NULL", is.character(source)) - these_sources <- - gsub("^.+?\\.(.+?)\\..*$", "\\1", list.files(system.file("registration", package = "PEcAn.data.remote"))) - PEcAn.logger::severeifnot(paste0("Source should be one of ", paste(these_sources, collapse = ' ')), - toupper(source) %in% these_sources) + these_sources <- gsub("^.+?\\.(.+?)\\..*$", "\\1", list.files(system.file("registration", package = "PEcAn.data.remote"))) + PEcAn.logger::severeifnot(paste0("Source should be one of ", paste(these_sources, collapse = ' ')), toupper(source) %in% these_sources) # collection validation to be implemented if (!is.null(projection)) { PEcAn.logger::severeifnot("projection should be of character type", @@ -145,36 +140,17 @@ remote_process <- function(settings) { con = 
dbcon ), use.names = FALSE) - if (!(tolower(gsub( - ".*type(.+),coordinates.*", - "\\1", - gsub("[^=A-Za-z,0-9{} ]+", "", coords) - )) %in% reg_info$coordtype)) { - PEcAn.logger::logger.severe( - paste0( - "Coordinate type of the site is not supported by the requested source, please make sure that your site type is ", - reg_info$coordtype - ) - ) + if(!(tolower(gsub(".*type(.+),coordinates.*", "\\1", gsub("[^=A-Za-z,0-9{} ]+","",coords))) %in% reg_info$coordtype)){ + PEcAn.logger::logger.severe(paste0("Coordinate type of the site is not supported by the requested source, please make sure that your site type is ", reg_info$coordtype)) } # construct raw file name - remotedata_file_names <- - construct_remotedata_filename( - source, - collection, - siteid_short, - scale, - projection, - qc, - algorithm, - out_process_data - ) + remotedata_file_names <- construct_remotedata_filename(source, collection, siteid_short, scale, projection, qc, algorithm, out_process_data) raw_file_name <- remotedata_file_names$raw_file_name pro_file_name <- remotedata_file_names$pro_file_name - + # check if any data is already present in the inputs table dbstatus <- @@ -192,28 +168,28 @@ remote_process <- function(settings) { dbcon = dbcon ) - remotefile_check_flag <- dbstatus$remotefile_check_flag - start <- dbstatus$start - end <- dbstatus$end - stage_get_data <- dbstatus$stage_get_data - write_raw_start <- dbstatus$write_raw_start - write_raw_end <- dbstatus$write_raw_end - raw_merge <- dbstatus$raw_merge + remotefile_check_flag <- dbstatus$remotefile_check_flag + start <- dbstatus$start + end <- dbstatus$end + stage_get_data <- dbstatus$stage_get_data + write_raw_start <- dbstatus$write_raw_start + write_raw_end <- dbstatus$write_raw_end + raw_merge <- dbstatus$raw_merge existing_raw_file_path <- dbstatus$existing_raw_file_path - stage_process_data <- dbstatus$stage_process_data - write_pro_start <- dbstatus$write_pro_start - write_pro_end <- dbstatus$write_pro_end - pro_merge <- dbstatus$pro_merge - input_file <- dbstatus$input_file + stage_process_data <- dbstatus$stage_process_data + write_pro_start <- dbstatus$write_pro_start + write_pro_end <- dbstatus$write_pro_end + pro_merge <- dbstatus$pro_merge + input_file <- dbstatus$input_file existing_pro_file_path <- dbstatus$existing_pro_file_path - raw_check <- dbstatus$raw_check - pro_check <- dbstatus$pro_check - + raw_check <- dbstatus$raw_check + pro_check <- dbstatus$pro_check + # construct outdir path outdir <- file.path(outdir, paste(toupper(source), "site", siteid_short, sep = "_")) - + fcn.args <- list() fcn.args$coords <- coords @@ -240,7 +216,7 @@ remote_process <- function(settings) { fcn.args$raw_file_name <- raw_file_name fcn.args$pro_file_name <- pro_file_name - + arg.string <- PEcAn.utils::listToArgString(fcn.args) @@ -253,15 +229,12 @@ remote_process <- function(settings) { # IF: this is extremely hacky but we will need a post-processing function/sub-module here # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if (source == "gee" && - collection == "s2" && - !is.null(algorithm) && !is.null(out_process_data)) { + if(source == "gee" && collection == "s2" && !is.null(algorithm) && !is.null(out_process_data)){ settings$remotedata$collapse <- TRUE } - - if (!is.null(settings$remotedata$collapse)) { - latlon <- - PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = 
dbcon) + + if(!is.null(settings$remotedata$collapse)){ + latlon <- PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) collapse_remote_data(output, out_process_data, latlon) } # insert output data in the DB @@ -344,7 +317,7 @@ construct_remotedata_filename <- } if (is.null(projection)) { prj_str <- "" - } else{ + }else{ prj_str <- paste0(projection, "_") } if (is.null(qc)) { @@ -353,32 +326,12 @@ construct_remotedata_filename <- qc_str <- paste0(format(qc, nsmall = 1), "_") } - raw_file_name <- - paste0(toupper(source), - "_", - collection, - scale_str, - prj_str, - qc_str, - "site_", - siteid) - if (!is.null(out_process_data)) { + raw_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, "site_", siteid) + if(!is.null(out_process_data)){ alg_str <- paste0(algorithm, "_") var_str <- paste0(out_process_data, "_") - pro_file_name <- - paste0( - toupper(source), - "_", - collection, - scale_str, - prj_str, - qc_str, - alg_str, - var_str, - "site_", - siteid - ) - } else{ + pro_file_name <- paste0(toupper(source), "_", collection, scale_str, prj_str, qc_str, alg_str, var_str, "site_", siteid) + }else{ pro_file_name <- NULL } @@ -442,16 +395,7 @@ set_stage <- function(result, req_start, req_end, stage) { write_end <- db_end write_start <- req_start } - return ( - list( - req_start = req_start, - req_end = req_end, - stage = stage, - merge = merge, - write_start = write_start, - write_end = write_end - ) - ) + return (list(req_start = req_start, req_end = req_end, stage = stage, merge = merge, write_start = write_start, write_end = write_end)) } @@ -473,38 +417,29 @@ set_stage <- function(result, req_start, req_end, stage) { ##' "COPERNICUS/S2_SR") ##' } ##' @author Istem Fer -read_remote_registry <- function(source, collection) { +read_remote_registry <- function(source, collection){ + # get registration file - register.xml <- - system.file(paste0("registration/register.", toupper(source), ".xml"), - package = "PEcAn.data.remote") + register.xml <- system.file(paste0("registration/register.", toupper(source), ".xml"), package = "PEcAn.data.remote") - tryCatch( - expr = { - register <- XML::xmlToList(XML::xmlParse(register.xml)) - }, - error = function(e) { + tryCatch(expr = { + register <- XML::xmlToList(XML::xmlParse(register.xml)) + }, + error = function(e){ PEcAn.logger::logger.severe("Requested source is not available") - } + } ) . <- NULL - if (!(purrr::is_empty(register %>% purrr::keep(names(.) == "collection")))) { + if(!(purrr::is_empty(register %>% purrr::keep(names(.) == "collection")))){ # this is a type of source that requires different setup for its collections, e.g. 
GEE # then read collection specific information - register <- - register[[which(register %>% purrr::map_chr("original_name") == collection)]] + register <- register[[which(register %>% purrr::map_chr("original_name") == collection)]] } reg_list <- list() - reg_list$original_name <- - ifelse(is.null(register$original_name), - collection, - register$original_name) - reg_list$pecan_name <- - ifelse(is.null(register$pecan_name), - collection, - register$pecan_name) + reg_list$original_name <- ifelse(is.null(register$original_name), collection, register$original_name) + reg_list$pecan_name <- ifelse(is.null(register$pecan_name), collection, register$pecan_name) reg_list$scale <- register$scale reg_list$qc <- register$qc reg_list$projection <- register$projection @@ -529,7 +464,7 @@ read_remote_registry <- function(source, collection) { ##' @param pro_file_name pro_file_name ##' @param start start date requested by user ##' @param end end date requested by the user -##' @param siteid siteid of the site +##' @param siteid siteid of the site ##' @param siteid_short short form of the siteid ##' @param out_get_data out_get_data ##' @param algorithm algorithm @@ -565,6 +500,7 @@ remotedata_db_check <- out_process_data, overwrite, dbcon) { + # Information about the date variables used: # req_start, req_end : start, end dates requested by the user, the user does not have to be aware about the status of the requested file in the DB # start, end : effective start, end dates created after checking the DB status. These dates are sent to rp_control for downloading and processing data @@ -645,7 +581,7 @@ remotedata_db_check <- existing_raw_file_path <- NULL } else if (!is.null(out_process_data)) { # if processed data is requested, example LAI - + # check if processed file exists if (nrow(pro_check <- PEcAn.DB::db.query( @@ -661,8 +597,7 @@ remotedata_db_check <- pro_end <- as.character(datalist$req_end) write_pro_start <- datalist$write_start write_pro_end <- datalist$write_end - if (pro_start != "dont write" || - pro_end != "dont write") { + if (pro_start != "dont write" || pro_end != "dont write") { stage_process_data <- datalist$stage pro_merge <- datalist$merge if (pro_merge == TRUE) { @@ -682,10 +617,8 @@ remotedata_db_check <- !is.null(raw_check$end_date)) { raw_datalist <- set_stage(raw_check, pro_start, pro_end, stage_get_data) - start <- - as.character(raw_datalist$req_start) - end <- - as.character(raw_datalist$req_end) + start <- as.character(raw_datalist$req_start) + end <- as.character(raw_datalist$req_end) write_raw_start <- raw_datalist$write_start write_raw_end <- raw_datalist$write_end stage_get_data <- raw_datalist$stage @@ -861,7 +794,7 @@ remotedata_db_check <- } return( - list( + list( remotefile_check_flag = remotefile_check_flag, start = start, end = end, @@ -879,7 +812,7 @@ remotedata_db_check <- raw_check = raw_check, pro_check = pro_check ) - ) + ) } @@ -893,12 +826,12 @@ remotedata_db_check <- ##' @name remotedata_db_insert ##' @param output output list from rp_control ##' @param remotefile_check_flag remotefile_check_flag -##' @param siteid siteid +##' @param siteid siteid ##' @param out_get_data out_get_data ##' @param out_process_data out_process_data ##' @param write_raw_start write_raw_start, start date of the raw file ##' @param write_raw_end write_raw_end, end date of the raw file -##' @param write_pro_start write_pro_start +##' @param write_pro_start write_pro_start ##' @param write_pro_end write_pro_end ##' @param raw_check id, site_id, name, start_date, end_date, of the 
existing raw file from inputs table and file_path from dbfiles tables ##' @param pro_check pro_check id, site_id, name, start_date, end_date, of the existing processed file from inputs table and file_path from dbfiles tables @@ -947,6 +880,7 @@ remotedata_db_insert <- pro_mimetype, pro_formatname, dbcon) { + # The value of remotefile_check_flag denotes the following cases: # When processed file is requested, @@ -1218,12 +1152,7 @@ remotedata_db_insert <- } } - return(list( - raw_id = raw_id, - raw_path = raw_path, - pro_id = pro_id, - pro_path = pro_path - )) + return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) } @@ -1232,6 +1161,17 @@ remotedata_db_insert <- + + + + + + + + + + + ##' collapses remote data (currently LAI only) ##' ##' @name collapse_remote_data @@ -1247,11 +1187,12 @@ remotedata_db_insert <- ##' latlon) ##' } ##' @author Istem Fer -collapse_remote_data <- function(output, out_process_data, latlon) { +collapse_remote_data <- function(output, out_process_data, latlon){ + # open up the nc nc <- ncdf4::nc_open(output$process_data_path) ncdat <- ncdf4::ncvar_get(nc, out_process_data) - if (length(dim(ncdat)) == 3) { + if(length(dim(ncdat)) == 3){ aggregated_ncdat_mean <- apply(ncdat, 3, mean, na.rm = TRUE) aggregated_ncdat_std <- apply(ncdat, 3, sd, na.rm = TRUE) } @@ -1259,66 +1200,26 @@ collapse_remote_data <- function(output, out_process_data, latlon) { # write back to nc file outlist <- list() outlist[[1]] <- aggregated_ncdat_mean # LAI in (m2 m-2) - outlist[[2]] <- aggregated_ncdat_std + outlist[[2]] <- aggregated_ncdat_std outlist[[3]] <- ncdat - t <- ncdf4::ncdim_def( - name = "time", - units = nc$var$lai$dim[[3]]$units, - nc$var$lai$dim[[3]]$vals, - # allow partial years, this info is already in matrix_weather - calendar = "standard", - unlim = TRUE - ) + t <- ncdf4::ncdim_def(name = "time", + units = nc$var$lai$dim[[3]]$units, + nc$var$lai$dim[[3]]$vals, # allow partial years, this info is already in matrix_weather + calendar = "standard", + unlim = TRUE) - lat <- - ncdf4::ncdim_def("lat", - "degrees_north", - vals = as.numeric(latlon$lat), - longname = "latitude") - lon <- - ncdf4::ncdim_def("lon", - "degrees_east", - vals = as.numeric(latlon$lon), - longname = "longitude") + lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(latlon$lat), longname = "latitude") + lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(latlon$lon), longname = "longitude") - x <- - ncdf4::ncdim_def("x", - "", - vals = as.numeric(nc$var$lai$dim[[2]]$vals), - longname = "") - y <- - ncdf4::ncdim_def("y", - "", - vals = as.numeric(nc$var$lai$dim[[1]]$vals), - longname = "") + x <- ncdf4::ncdim_def("x", "", vals = as.numeric(nc$var$lai$dim[[2]]$vals), longname = "") + y <- ncdf4::ncdim_def("y", "", vals = as.numeric(nc$var$lai$dim[[1]]$vals), longname = "") nc_var <- list() - nc_var[[1]] <- - ncdf4::ncvar_def( - "LAI", - units = "m2 m-2", - dim = list(lon, lat, t), - missval = -999, - longname = "Leaf Area Index" - ) - nc_var[[2]] <- - ncdf4::ncvar_def( - "LAI_STD", - units = "", - dim = list(lon, lat, t), - missval = -999, - longname = "" - ) - nc_var[[3]] <- - ncdf4::ncvar_def( - "LAI_UNCOLLAPSED", - units = "m2 m-2", - dim = list(y, x, t), - missval = -999, - longname = "" - ) + nc_var[[1]] <- ncdf4::ncvar_def("LAI", units = "m2 m-2", dim = list(lon, lat, t), missval = -999, longname = "Leaf Area Index") + nc_var[[2]] <- ncdf4::ncvar_def("LAI_STD", units = "", dim = list(lon, lat, t), missval = -999, longname = "") + 
nc_var[[3]] <- ncdf4::ncvar_def("LAI_UNCOLLAPSED", units = "m2 m-2", dim = list(y, x, t), missval = -999, longname = "") ncdf4::nc_close(nc) @@ -1330,4 +1231,14 @@ collapse_remote_data <- function(output, out_process_data, latlon) { } ncdf4::nc_close(nc) -} # collapse_remote_data \ No newline at end of file +} # collapse_remote_data + + + + + + + + + + From 33bfa3c58c3f54b92417d64bf05d62c8658744da Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:33:39 +0530 Subject: [PATCH 1441/2289] Revert "update load_netcdf.R" This reverts commit 0c4aa2e434093efd534c92eded3174869f4802d2. --- modules/benchmark/R/load_netcdf.R | 39 ++++++++++++++++++------------- 1 file changed, 23 insertions(+), 16 deletions(-) diff --git a/modules/benchmark/R/load_netcdf.R b/modules/benchmark/R/load_netcdf.R index aa116030d17..9ffbbe01197 100644 --- a/modules/benchmark/R/load_netcdf.R +++ b/modules/benchmark/R/load_netcdf.R @@ -9,31 +9,40 @@ ##' @param vars character ##' @author Istem Fer load_x_netcdf <- function(data.path, format, site, vars = NULL) { + data.path <- sapply(data.path, function(x) dir(dirname(x), basename(x), full.names = TRUE)) + nc <- lapply(data.path, ncdf4::nc_open) + dat <- list() for (ind in seq_along(vars)) { nc.dat <- lapply(nc, ncdf4::ncvar_get, vars[ind]) dat[vars[ind]] <- as.data.frame(unlist(nc.dat)) } + dat <- as.matrix(as.data.frame(dat)) + # we need to replace filling/missing values with NA now we don't want these values to go into unit # conversion dat[dat %in% as.numeric(format$na.strings)] <- NA dat <- as.data.frame(dat) colnames(dat) <- vars + # deal with time time.col <- list() for (i in seq_along(nc)) { dims <- names(nc[[i]]$dim) time.var <- grep(pattern = "time", dims, ignore.case = TRUE) time.col[[i]] <- ncdf4::ncvar_get(nc[[i]], dims[time.var]) + t.units <- ncdf4::ncatt_get(nc[[i]], dims[time.var])$units + # If the unit has if of the form * since YYYY-MM-DD * with "-hour" timezone offset # This is a feature of the met produced by met2CF - if(stringr::str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*\\s-\\d+")){ - unit2 <- stringr::str_split_fixed(t.units,"\\s-",2)[1] - offset <- stringr::str_split_fixed(t.units,"\\s-",2)[2] %>% as.numeric() + if(str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*\\s-\\d+")){ + unit2 <- str_split_fixed(t.units,"\\s-",2)[1] + offset <- str_split_fixed(t.units,"\\s-",2)[2] %>% as.numeric() + date_time <- suppressWarnings(try(lubridate::ymd((unit2)))) if(is.na(date_time)){ date_time <- suppressWarnings(try(lubridate::ymd_hms(unit2))) @@ -41,22 +50,14 @@ load_x_netcdf <- function(data.path, format, site, vars = NULL) { if(is.na(date_time)){ PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") } - t.units <- paste(stringr::str_split_fixed(t.units," since",2)[1], "since", + + t.units <- paste(str_split_fixed(t.units," since",2)[1], "since", date_time - lubridate::hms(paste(offset,":00:00"))) - }else if(stringr::str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*")){ - unit2 <- stringr::str_split_fixed(t.units,"\\s-",2)[1] - date_time <- suppressWarnings(try(lubridate::ymd((unit2)))) - if(is.na(date_time)){ - date_time <- suppressWarnings(try(lubridate::ymd_hms(unit2))) - } - if(is.na(date_time)){ - PEcAn.logger::logger.error("All time formats failed to parse. 
No formats found.") - } - t.units <- paste(stringr::str_split_fixed(t.units," since",2)[1], "since", - date_time) } + # for heterogenous formats try parsing ymd_hms date.origin <- suppressWarnings(try(lubridate::ymd_hms(t.units))) + # parsing ymd if (is.na(date.origin)) { date.origin <- lubridate::ymd(t.units) @@ -65,20 +66,26 @@ load_x_netcdf <- function(data.path, format, site, vars = NULL) { if (is.na(date.origin)) { PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") } + + time.stamp.match <- gsub("UTC", "", date.origin) t.units <- gsub(paste0(" since ", time.stamp.match, ".*"), "", t.units) + # need to change system TZ otherwise, lines below keeps writing in the current time zone Sys.setenv(TZ = 'UTC') foo <- as.POSIXct(date.origin, tz = "UTC") + udunits2::ud.convert(time.col[[i]], t.units, "seconds") time.col[[i]] <- foo } + # needed to use 'round' to 'mins' here, otherwise I end up with values like '2006-12-31 23:29:59' # while reading Ameriflux for example however the model timesteps are more regular and the last # value can be '2006-12-31 23:30:00'.. this will result in cutting the last value in the # align_data step dat$posix <- round(as.POSIXct(do.call("c", time.col), tz = "UTC"), "mins") dat$posix <- as.POSIXct(dat$posix) + lapply(nc, ncdf4::nc_close) + return(dat) -} # load_x_netcdf \ No newline at end of file +} # load_x_netcdf From 0d33e66ff16ae317ed0a3760c068403a1709749e Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:33:47 +0530 Subject: [PATCH 1442/2289] Revert "add collapse remote data" This reverts commit 18677d32ddaded1c7944e5017f259fc7d003c298. --- modules/data.remote/R/remote_process.R | 90 +------------------ .../data.remote/man/collapse_remote_data.Rd | 29 ------ 2 files changed, 1 insertion(+), 118 deletions(-) delete mode 100644 modules/data.remote/man/collapse_remote_data.Rd diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 4876bd8bffc..c16bf8bd3b0 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -229,7 +229,7 @@ remote_process <- function(settings) { # IF: this is extremely hacky but we will need a post-processing function/sub-module here # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if(source == "gee" && collection == "s2" && !is.null(algorithm) && !is.null(out_process_data)){ + if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ settings$remotedata$collapse <- TRUE } @@ -1154,91 +1154,3 @@ remotedata_db_insert <- return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) } - - - - - - - - - - - - - - - - - - -##' collapses remote data (currently LAI only) -##' -##' @name collapse_remote_data -##' @title collapse_remote_data -##' @param output output list from rp_control -##' @param out_process_data type of processed data -##' @param latlon latlon -##' @examples -##' \dontrun{ -##' collapse_remote_data( -##' output, -##' out_process_data, -##' latlon) -##' } -##' @author Istem Fer -collapse_remote_data <- function(output, out_process_data, latlon){ - - # open up the nc - nc <- ncdf4::nc_open(output$process_data_path) - ncdat <- ncdf4::ncvar_get(nc, out_process_data) - if(length(dim(ncdat)) == 3){ - aggregated_ncdat_mean <- apply(ncdat, 3, mean, na.rm = 
TRUE) - aggregated_ncdat_std <- apply(ncdat, 3, sd, na.rm = TRUE) - } - - # write back to nc file - outlist <- list() - outlist[[1]] <- aggregated_ncdat_mean # LAI in (m2 m-2) - outlist[[2]] <- aggregated_ncdat_std - outlist[[3]] <- ncdat - - t <- ncdf4::ncdim_def(name = "time", - units = nc$var$lai$dim[[3]]$units, - nc$var$lai$dim[[3]]$vals, # allow partial years, this info is already in matrix_weather - calendar = "standard", - unlim = TRUE) - - - lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(latlon$lat), longname = "latitude") - lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(latlon$lon), longname = "longitude") - - x <- ncdf4::ncdim_def("x", "", vals = as.numeric(nc$var$lai$dim[[2]]$vals), longname = "") - y <- ncdf4::ncdim_def("y", "", vals = as.numeric(nc$var$lai$dim[[1]]$vals), longname = "") - - nc_var <- list() - nc_var[[1]] <- ncdf4::ncvar_def("LAI", units = "m2 m-2", dim = list(lon, lat, t), missval = -999, longname = "Leaf Area Index") - nc_var[[2]] <- ncdf4::ncvar_def("LAI_STD", units = "", dim = list(lon, lat, t), missval = -999, longname = "") - nc_var[[3]] <- ncdf4::ncvar_def("LAI_UNCOLLAPSED", units = "m2 m-2", dim = list(y, x, t), missval = -999, longname = "") - - ncdf4::nc_close(nc) - - ### Output netCDF data, overwrite previous - nc <- ncdf4::nc_create(output$process_data_path, nc_var) - for (i in seq_along(nc_var)) { - # print(i) - ncdf4::ncvar_put(nc, nc_var[[i]], outlist[[i]]) - } - ncdf4::nc_close(nc) - -} # collapse_remote_data - - - - - - - - - - diff --git a/modules/data.remote/man/collapse_remote_data.Rd b/modules/data.remote/man/collapse_remote_data.Rd deleted file mode 100644 index c8dba92f5de..00000000000 --- a/modules/data.remote/man/collapse_remote_data.Rd +++ /dev/null @@ -1,29 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/remote_process.R -\name{collapse_remote_data} -\alias{collapse_remote_data} -\title{collapse_remote_data} -\usage{ -collapse_remote_data(output, out_process_data, latlon) -} -\arguments{ -\item{output}{output list from rp_control} - -\item{out_process_data}{type of processed data} - -\item{latlon}{latlon} -} -\description{ -collapses remote data (currently LAI only) -} -\examples{ -\dontrun{ -collapse_remote_data( - output, - out_process_data, - latlon) -} -} -\author{ -Istem Fer -} From 9a5c12f59c88abfd1d90d5b8ea7cfae0cf435f7a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:33:58 +0530 Subject: [PATCH 1443/2289] Revert "post-processing function trigger" This reverts commit 5ca7db008928ba2886ce0e343cb084022991ba61. 
--- modules/data.remote/R/remote_process.R | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index c16bf8bd3b0..18dc612a841 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -226,17 +226,7 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - # IF: this is extremely hacky but we will need a post-processing function/sub-module here - # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example - # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ - settings$remotedata$collapse <- TRUE - } - - if(!is.null(settings$remotedata$collapse)){ - latlon <- PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) - collapse_remote_data(output, out_process_data, latlon) - } + # insert output data in the DB db_out <- remotedata_db_insert( From d3f21ff73af041d0afa2854f0dc137af7bc40e9f Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 12:59:43 +0530 Subject: [PATCH 1444/2289] @istfer update load_netcdf.R Co-authored-by: istfer --- modules/benchmark/DESCRIPTION | 3 ++- modules/benchmark/R/load_netcdf.R | 37 +++++++++++++------------------ 2 files changed, 17 insertions(+), 23 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 42d56538d57..82e4e1d3b4a 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -31,7 +31,8 @@ Imports: reshape2, dbplyr, SimilarityMeasures, - zoo + zoo, + stringr Suggests: testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE diff --git a/modules/benchmark/R/load_netcdf.R b/modules/benchmark/R/load_netcdf.R index 9ffbbe01197..6b77b5df077 100644 --- a/modules/benchmark/R/load_netcdf.R +++ b/modules/benchmark/R/load_netcdf.R @@ -9,40 +9,31 @@ ##' @param vars character ##' @author Istem Fer load_x_netcdf <- function(data.path, format, site, vars = NULL) { - data.path <- sapply(data.path, function(x) dir(dirname(x), basename(x), full.names = TRUE)) - nc <- lapply(data.path, ncdf4::nc_open) - dat <- list() for (ind in seq_along(vars)) { nc.dat <- lapply(nc, ncdf4::ncvar_get, vars[ind]) dat[vars[ind]] <- as.data.frame(unlist(nc.dat)) } - dat <- as.matrix(as.data.frame(dat)) - # we need to replace filling/missing values with NA now we don't want these values to go into unit # conversion dat[dat %in% as.numeric(format$na.strings)] <- NA dat <- as.data.frame(dat) colnames(dat) <- vars - # deal with time time.col <- list() for (i in seq_along(nc)) { dims <- names(nc[[i]]$dim) time.var <- grep(pattern = "time", dims, ignore.case = TRUE) time.col[[i]] <- ncdf4::ncvar_get(nc[[i]], dims[time.var]) - t.units <- ncdf4::ncatt_get(nc[[i]], dims[time.var])$units - # If the unit has if of the form * since YYYY-MM-DD * with "-hour" timezone offset # This is a feature of the met produced by met2CF - if(str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*\\s-\\d+")){ - unit2 <- str_split_fixed(t.units,"\\s-",2)[1] - offset <- str_split_fixed(t.units,"\\s-",2)[2] %>% as.numeric() - + if(stringr::str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*\\s-\\d+")){ + unit2 <- stringr::str_split_fixed(t.units,"\\s-",2)[1] + offset <- stringr::str_split_fixed(t.units,"\\s-",2)[2] 
%>% as.numeric() date_time <- suppressWarnings(try(lubridate::ymd((unit2)))) if(is.na(date_time)){ date_time <- suppressWarnings(try(lubridate::ymd_hms(unit2))) @@ -50,14 +41,22 @@ load_x_netcdf <- function(data.path, format, site, vars = NULL) { if(is.na(date_time)){ PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") } - - t.units <- paste(str_split_fixed(t.units," since",2)[1], "since", + t.units <- paste(stringr::str_split_fixed(t.units," since",2)[1], "since", date_time - lubridate::hms(paste(offset,":00:00"))) + }else if(stringr::str_detect(t.units, "ince\\s[0-9]{4}[.-][0-9]{2}[.-][0-9]{2}.*")){ + unit2 <- stringr::str_split_fixed(t.units,"\\s-",2)[1] + date_time <- suppressWarnings(try(lubridate::ymd((unit2)))) + if(is.na(date_time)){ + date_time <- suppressWarnings(try(lubridate::ymd_hms(unit2))) + } + if(is.na(date_time)){ + PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") + } + t.units <- paste(stringr::str_split_fixed(t.units," since",2)[1], "since", + date_time) } - # for heterogenous formats try parsing ymd_hms date.origin <- suppressWarnings(try(lubridate::ymd_hms(t.units))) - # parsing ymd if (is.na(date.origin)) { date.origin <- lubridate::ymd(t.units) @@ -66,26 +65,20 @@ load_x_netcdf <- function(data.path, format, site, vars = NULL) { if (is.na(date.origin)) { PEcAn.logger::logger.error("All time formats failed to parse. No formats found.") } - - time.stamp.match <- gsub("UTC", "", date.origin) t.units <- gsub(paste0(" since ", time.stamp.match, ".*"), "", t.units) - # need to change system TZ otherwise, lines below keeps writing in the current time zone Sys.setenv(TZ = 'UTC') foo <- as.POSIXct(date.origin, tz = "UTC") + udunits2::ud.convert(time.col[[i]], t.units, "seconds") time.col[[i]] <- foo } - # needed to use 'round' to 'mins' here, otherwise I end up with values like '2006-12-31 23:29:59' # while reading Ameriflux for example however the model timesteps are more regular and the last # value can be '2006-12-31 23:30:00'.. 
this will result in cutting the last value in the # align_data step dat$posix <- round(as.POSIXct(do.call("c", time.col), tz = "UTC"), "mins") dat$posix <- as.POSIXct(dat$posix) - lapply(nc, ncdf4::nc_close) - return(dat) } # load_x_netcdf From 2f73ceccdb02035d6c974636b766e3bf0f1e944d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 13:47:47 +0530 Subject: [PATCH 1445/2289] @istfer add post processing function Co-authored-by: istfer --- modules/data.remote/DESCRIPTION | 3 +- modules/data.remote/R/remote_process.R | 78 +++++++++++++++++++++++++- 2 files changed, 79 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 7b6c3663c2d..e85f1db2b76 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -26,7 +26,8 @@ Imports: binaryLogic, doParallel, parallel, - foreach + foreach, + ncdf4 Suggests: testthat (>= 1.0.2), ggplot2, diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 18dc612a841..12d8a48f493 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -226,7 +226,18 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - + # IF: this is extremely hacky but we will need a post-processing function/sub-module here + # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example + # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion + if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ + settings$remotedata$collapse <- TRUE + } + + if(!is.null(settings$remotedata$collapse)){ + latlon <- PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) + collapse_remote_data(output, out_process_data, latlon) + } + # insert output data in the DB db_out <- remotedata_db_insert( @@ -1144,3 +1155,68 @@ remotedata_db_insert <- return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) } + + + + + +##' collapses remote data (currently LAI only) +##' +##' @name collapse_remote_data +##' @title collapse_remote_data +##' @param output output list from rp_control +##' @param out_process_data type of processed data +##' @param latlon latlon +##' @examples +##' \dontrun{ +##' collapse_remote_data( +##' output, +##' out_process_data, +##' latlon) +##' } +##' @author Istem Fer +collapse_remote_data <- function(output, out_process_data, latlon){ + + # open up the nc + nc <- ncdf4::nc_open(output$process_data_path) + ncdat <- ncdf4::ncvar_get(nc, out_process_data) + if(length(dim(ncdat)) == 3){ + aggregated_ncdat_mean <- apply(ncdat, 3, mean, na.rm = TRUE) + aggregated_ncdat_std <- apply(ncdat, 3, sd, na.rm = TRUE) + } + + # write back to nc file + outlist <- list() + outlist[[1]] <- aggregated_ncdat_mean # LAI in (m2 m-2) + outlist[[2]] <- aggregated_ncdat_std + outlist[[3]] <- ncdat + + t <- ncdf4::ncdim_def(name = "time", + units = nc$var$lai$dim[[3]]$units, + nc$var$lai$dim[[3]]$vals, # allow partial years, this info is already in matrix_weather + calendar = "standard", + unlim = TRUE) + + + lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(latlon$lat), longname = "latitude") + lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(latlon$lon), longname = "longitude") + + x <- ncdf4::ncdim_def("x", "", vals = as.numeric(nc$var$lai$dim[[2]]$vals), 
longname = "") + y <- ncdf4::ncdim_def("y", "", vals = as.numeric(nc$var$lai$dim[[1]]$vals), longname = "") + + nc_var <- list() + nc_var[[1]] <- ncdf4::ncvar_def("LAI", units = "m2 m-2", dim = list(lon, lat, t), missval = -999, longname = "Leaf Area Index") + nc_var[[2]] <- ncdf4::ncvar_def("LAI_STD", units = "", dim = list(lon, lat, t), missval = -999, longname = "") + nc_var[[3]] <- ncdf4::ncvar_def("LAI_UNCOLLAPSED", units = "m2 m-2", dim = list(y, x, t), missval = -999, longname = "") + + ncdf4::nc_close(nc) + + ### Output netCDF data, overwrite previous + nc <- ncdf4::nc_create(output$process_data_path, nc_var) + for (i in seq_along(nc_var)) { + # print(i) + ncdf4::ncvar_put(nc, nc_var[[i]], outlist[[i]]) + } + ncdf4::nc_close(nc) + +} # collapse_remote_data \ No newline at end of file From 3fef97457f6c7ff8839e2c4ac59eac79ff1bd9d5 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 14:04:32 +0530 Subject: [PATCH 1446/2289] add post processing Rd --- .../data.remote/man/collapse_remote_data.Rd | 29 +++++++++++++++++++ 1 file changed, 29 insertions(+) create mode 100644 modules/data.remote/man/collapse_remote_data.Rd diff --git a/modules/data.remote/man/collapse_remote_data.Rd b/modules/data.remote/man/collapse_remote_data.Rd new file mode 100644 index 00000000000..c8dba92f5de --- /dev/null +++ b/modules/data.remote/man/collapse_remote_data.Rd @@ -0,0 +1,29 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/remote_process.R +\name{collapse_remote_data} +\alias{collapse_remote_data} +\title{collapse_remote_data} +\usage{ +collapse_remote_data(output, out_process_data, latlon) +} +\arguments{ +\item{output}{output list from rp_control} + +\item{out_process_data}{type of processed data} + +\item{latlon}{latlon} +} +\description{ +collapses remote data (currently LAI only) +} +\examples{ +\dontrun{ +collapse_remote_data( + output, + out_process_data, + latlon) +} +} +\author{ +Istem Fer +} From 9816c9d7dd9ebbf00511c92c564413e032d228a7 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 14:05:27 +0530 Subject: [PATCH 1447/2289] Update modules/data.remote/R/remote_process.R Co-authored-by: istfer --- modules/data.remote/R/remote_process.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 12d8a48f493..9fee889e58d 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -1193,7 +1193,7 @@ collapse_remote_data <- function(output, out_process_data, latlon){ t <- ncdf4::ncdim_def(name = "time", units = nc$var$lai$dim[[3]]$units, - nc$var$lai$dim[[3]]$vals, # allow partial years, this info is already in matrix_weather + nc$var$lai$dim[[3]]$vals, calendar = "standard", unlim = TRUE) @@ -1219,4 +1219,4 @@ collapse_remote_data <- function(output, out_process_data, latlon){ } ncdf4::nc_close(nc) -} # collapse_remote_data \ No newline at end of file +} # collapse_remote_data From babbc272bb40f440acc547549f08f412148caab0 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 14:27:16 +0530 Subject: [PATCH 1448/2289] add atmosphere to desc --- modules/data.remote/DESCRIPTION | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index e85f1db2b76..5f69be28a21 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -27,7 +27,8 @@ Imports: doParallel, parallel, foreach, - ncdf4 + 
ncdf4, + PEcAn.data.atmosphere Suggests: testthat (>= 1.0.2), ggplot2, From c7a0be981969f8557482563a9d6e6a86e1237132 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 15:03:18 +0530 Subject: [PATCH 1449/2289] Apply suggestions from code review Co-authored-by: istfer --- modules/data.remote/DESCRIPTION | 1 - modules/data.remote/R/remote_process.R | 4 ++-- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 5f69be28a21..4662ae64842 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -28,7 +28,6 @@ Imports: parallel, foreach, ncdf4, - PEcAn.data.atmosphere Suggests: testthat (>= 1.0.2), ggplot2, diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 9fee889e58d..970610475c8 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -234,8 +234,8 @@ remote_process <- function(settings) { } if(!is.null(settings$remotedata$collapse)){ - latlon <- PEcAn.data.atmosphere::db.site.lat.lon(siteid, con = dbcon) - collapse_remote_data(output, out_process_data, latlon) + collapse_remote_data(output, out_process_data, + list(lat = settings$run$site$lat, lon = settings$run$site$lon)) } # insert output data in the DB From f1e9e92844478652c8bdaeea9e749e781853a1fb Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 15:09:36 +0530 Subject: [PATCH 1450/2289] remove , --- modules/data.remote/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION index 4662ae64842..e85f1db2b76 100644 --- a/modules/data.remote/DESCRIPTION +++ b/modules/data.remote/DESCRIPTION @@ -27,7 +27,7 @@ Imports: doParallel, parallel, foreach, - ncdf4, + ncdf4 Suggests: testthat (>= 1.0.2), ggplot2, From 81c6f4d841e2d7a092f1b2194012ae0c9829241a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 23 Aug 2020 16:16:32 +0530 Subject: [PATCH 1451/2289] change filepath and filename method --- modules/data.remote/R/remote_process.R | 84 +++++++++++++------------- 1 file changed, 42 insertions(+), 42 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index 970610475c8..c7c4084fd44 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -226,17 +226,17 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - # IF: this is extremely hacky but we will need a post-processing function/sub-module here - # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example - # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ - settings$remotedata$collapse <- TRUE - } - - if(!is.null(settings$remotedata$collapse)){ - collapse_remote_data(output, out_process_data, - list(lat = settings$run$site$lat, lon = settings$run$site$lon)) - } + # # IF: this is extremely hacky but we will need a post-processing function/sub-module here + # # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example + # # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion + # if(source == "gee" & 
collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ + # settings$remotedata$collapse <- TRUE + # } + # + # if(!is.null(settings$remotedata$collapse)){ + # collapse_remote_data(output, out_process_data, + # list(lat = settings$run$site$lat, lon = settings$run$site$lon)) + # } # insert output data in the DB db_out <- @@ -534,7 +534,7 @@ remotedata_db_check <- if (nrow(pro_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", pro_file_name ), dbcon @@ -542,7 +542,7 @@ remotedata_db_check <- if (nrow(raw_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", raw_file_name ), dbcon @@ -562,7 +562,7 @@ remotedata_db_check <- if (nrow(raw_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", raw_file_name ), dbcon @@ -587,7 +587,7 @@ remotedata_db_check <- if (nrow(pro_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", pro_file_name ), dbcon @@ -602,14 +602,14 @@ remotedata_db_check <- stage_process_data <- datalist$stage pro_merge <- datalist$merge if (pro_merge == TRUE) { - existing_pro_file_path <- pro_check$file_path + existing_pro_file_path <- file.path(pro_check$file_path, pro_check$name) } if (stage_process_data == TRUE) { # check about the status of raw file raw_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", raw_file_name ), dbcon @@ -629,7 +629,7 @@ remotedata_db_check <- } remotefile_check_flag <- 4 if (raw_merge == TRUE) { - existing_raw_file_path <- raw_check$file_path + existing_raw_file_path <- file.path(raw_check$file_path, raw_check$name) } if (pro_merge == TRUE && 
stage_get_data == FALSE) { remotefile_check_flag <- 5 @@ -660,7 +660,7 @@ remotedata_db_check <- if (nrow(raw_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", raw_file_name ), dbcon @@ -673,7 +673,7 @@ remotedata_db_check <- else if (nrow(raw_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", raw_file_name ), dbcon @@ -699,7 +699,7 @@ remotedata_db_check <- stage_process_data <- TRUE pro_merge <- FALSE if (raw_merge == TRUE || raw_merge == "replace") { - existing_raw_file_path = raw_check$file_path + existing_raw_file_path = file.path(raw_check$file_path, raw_check$name) remotefile_check_flag <- 3 } else{ existing_raw_file_path <- NULL @@ -723,7 +723,7 @@ remotedata_db_check <- } else if (nrow(raw_check <- PEcAn.DB::db.query( sprintf( - "SELECT inputs.id, inputs.site_id, inputs.name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.name=dbfiles.file_name AND inputs.name LIKE '%s%%';", + "SELECT inputs.id, inputs.site_id, dbfiles.file_name as name, inputs.start_date, inputs.end_date, dbfiles.file_path FROM inputs INNER JOIN dbfiles ON inputs.id=dbfiles.container_id AND dbfiles.file_name LIKE '%s%%';", raw_file_name ), dbcon @@ -744,7 +744,7 @@ remotedata_db_check <- raw_path <- raw_check$file_path } if (raw_merge == TRUE) { - existing_raw_file_path <- raw_check$file_path + existing_raw_file_path <- file.path(raw_check$file_path, raw_check$name) remotefile_check_flag <- 2 } else{ existing_raw_file_path <- NULL @@ -915,7 +915,7 @@ remotedata_db_insert <- # insert processed data pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, + in.path = dirname(output$process_data_path), in.prefix = basename(output$process_data_path), siteid = siteid, startdate = write_pro_start, @@ -927,7 +927,7 @@ remotedata_db_insert <- # insert raw file raw_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, + in.path = dirname(output$raw_data_path), in.prefix = basename(output$raw_data_path), siteid = siteid, startdate = write_raw_start, @@ -945,7 +945,7 @@ remotedata_db_insert <- PEcAn.logger::logger.info("Inserting processed file for the first time") pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, + in.path = dirname(output$process_data_path), in.prefix = basename(output$process_data_path), siteid = siteid, startdate = write_pro_start, @@ -961,7 +961,7 @@ remotedata_db_insert <- } else if (remotefile_check_flag == 3) { # requested processed file does not exist, raw file used to create it is present but has to be updated to match with the requested dates pro_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$process_data_path, + in.path = dirname(output$process_data_path), in.prefix = 
basename(output$process_data_path), siteid = siteid, startdate = write_pro_start, @@ -976,7 +976,7 @@ remotedata_db_insert <- "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, - basename(output$raw_data_path), + basename(dirname(output$raw_data_path)), raw_id ), dbcon @@ -984,7 +984,7 @@ remotedata_db_insert <- PEcAn.DB::db.query( sprintf( "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$raw_data_path, + dirname(output$raw_data_path), basename(output$raw_data_path), raw_id ), @@ -1004,7 +1004,7 @@ remotedata_db_insert <- "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, - basename(output$process_data_path), + basename(dirname(output$process_data_path)), pro_id ), dbcon @@ -1012,7 +1012,7 @@ remotedata_db_insert <- PEcAn.DB::db.query( sprintf( "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$process_data_path, + dirname(output$process_data_path), basename(output$process_data_path), pro_id ), @@ -1023,7 +1023,7 @@ remotedata_db_insert <- "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f", write_raw_start, write_raw_end, - basename(output$raw_data_path), + basename(dirname(output$raw_data_path)), raw_id ), dbcon @@ -1031,7 +1031,7 @@ remotedata_db_insert <- PEcAn.DB::db.query( sprintf( "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$raw_data_path, + dirname(output$raw_data_path), basename(output$raw_data_path), raw_id ), @@ -1049,7 +1049,7 @@ remotedata_db_insert <- "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, - basename(output$process_data_path), + basename(dirname(output$process_data_path)), pro_id ), dbcon @@ -1057,7 +1057,7 @@ remotedata_db_insert <- PEcAn.DB::db.query( sprintf( "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$process_data_path, + dirname(output$process_data_path), basename(output$process_data_path), pro_id ), @@ -1074,7 +1074,7 @@ remotedata_db_insert <- "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_pro_start, write_pro_end, - basename(output$process_data_path), + basename(dirname(output$process_data_path)), pro_id ), dbcon @@ -1082,7 +1082,7 @@ remotedata_db_insert <- PEcAn.DB::db.query( sprintf( "UPDATE dbfiles SET file_path='%s', file_name='%s' WHERE container_id=%f;", - output$process_data_path, + dirname(output$process_data_path), basename(output$process_data_path), pro_id ), @@ -1090,7 +1090,7 @@ remotedata_db_insert <- ) raw_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, + in.path = dirname(output$raw_data_path), in.prefix = basename(output$raw_data_path), siteid = siteid, startdate = write_raw_start, @@ -1115,7 +1115,7 @@ remotedata_db_insert <- PEcAn.logger::logger.info(("Inserting raw file for the first time")) raw_ins <- PEcAn.DB::dbfile.input.insert( - in.path = output$raw_data_path, + in.path = dirname(output$raw_data_path), in.prefix = basename(output$raw_data_path), siteid = siteid, startdate = write_raw_start, @@ -1135,7 +1135,7 @@ remotedata_db_insert <- "UPDATE inputs SET start_date='%s', end_date='%s', name='%s' WHERE id=%f;", write_raw_start, write_raw_end, - basename(output$raw_data_path), + basename(dirname(output$raw_data_path)), raw_id ), dbcon @@ -1143,8 +1143,8 @@ remotedata_db_insert <- PEcAn.DB::db.query( sprintf( "UPDATE dbfiles SET 
file_path='%s', file_name='%s' WHERE container_id=%f;", - output$raw_data_path, - basename(output$raw_data_path), + dirname(output$raw_data_path), + basename(dirname(output$raw_data_path)), raw_id ), dbcon From c11539bc308cc6561eda92c8b187058940a999e7 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 24 Aug 2020 12:07:57 +0530 Subject: [PATCH 1452/2289] Use Remote generic format Co-authored-by: istfer --- modules/data.remote/inst/registration/register.GEE.xml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/inst/registration/register.GEE.xml b/modules/data.remote/inst/registration/register.GEE.xml index 14a9f16a98f..da32d156f18 100644 --- a/modules/data.remote/inst/registration/register.GEE.xml +++ b/modules/data.remote/inst/registration/register.GEE.xml @@ -9,13 +9,13 @@ 10 1 - - CF Meteorology + 1000000129 + Remote_generic application/x-netcdf - - NCEP + 1000000129 + Remote_generic application/x-netcdf From 109d74e54d05770110adfe82fab30dee10bd928d Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 24 Aug 2020 12:18:30 +0530 Subject: [PATCH 1453/2289] early exit --- modules/data.remote/R/remote_process.R | 43 +++++++++++++------ .../inst/RpTools/RpTools/rp_control.py | 10 ++--- 2 files changed, 36 insertions(+), 17 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index c7c4084fd44..5496c89c917 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -183,7 +183,24 @@ remote_process <- function(settings) { input_file <- dbstatus$input_file existing_pro_file_path <- dbstatus$existing_pro_file_path raw_check <- dbstatus$raw_check - pro_check <- dbstatus$pro_check + pro_check <- dbstatus$pro_check + + + + if(stage_get_data == FALSE && stage_process_data == FALSE){ + # requested data already exists, no need to call rp_control + + pro_id <- pro_check$id + pro_path <- pro_check$file_path + raw_id <- raw_check$id + raw_path <- raw_check$file_path + + settings$remotedata$raw_id <- raw_id + settings$remotedata$raw_path <- raw_path + settings$remotedata$pro_id <- pro_id + settings$remotedata$pro_path <- pro_path + return(settings) + } # construct outdir path @@ -226,17 +243,19 @@ remote_process <- function(settings) { # call rp_control output <- do.call(RpTools$rp_control, fcn.args) - # # IF: this is extremely hacky but we will need a post-processing function/sub-module here - # # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example - # # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - # if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ - # settings$remotedata$collapse <- TRUE - # } - # - # if(!is.null(settings$remotedata$collapse)){ - # collapse_remote_data(output, out_process_data, - # list(lat = settings$run$site$lat, lon = settings$run$site$lon)) - # } + + + # IF: this is extremely hacky but we will need a post-processing function/sub-module here + # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example + # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion + if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ + settings$remotedata$collapse <- TRUE + } + + if(!is.null(settings$remotedata$collapse)){ + 
collapse_remote_data(output, out_process_data,
+                         list(lat = settings$run$site$lat, lon = settings$run$site$lon))
+  }
 
   # insert output data in the DB
   db_out <-
diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
index 4229ed44432..9cb05f9b06f 100644
--- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py
+++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
@@ -96,12 +96,12 @@ def rp_control(
 
     """
 
-    geofile = create_geojson(coords, siteid, outdir)
-
-
-    aoi_name = get_sitename(geofile)
-
+    if stage_get_data:
+
+        # create GeoJSON file from the BETY sites data
+        geofile = create_geojson(coords, siteid, outdir)
+
         get_datareturn_path = get_remote_data(
             geofile,
             outdir,

From 1144af0cf431b46c66446532f3f52fc6e0799f15 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 24 Aug 2020 12:21:22 +0530
Subject: [PATCH 1454/2289] remove unrequired tags

---
 modules/data.remote/R/remote_process.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index 5496c89c917..2de71c9e667 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -4,7 +4,7 @@
 ##' @title remote_process
 ##' @export
 ##'
-##' @param settings PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data, overwrite
+##' @param settings PEcAn settings list containing remotedata tags: source, collection, scale, projection, qc, algorithm, credfile, out_get_data, out_process_data, overwrite
 ##'
 ##' @examples
 ##' \dontrun{

From 4e130ac09cced39c3f92634d4e0d032a0aaa9e56 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Wed, 26 Aug 2020 14:14:43 +0530
Subject: [PATCH 1455/2289] aggregate lai in bands2lai

---
 modules/data.remote/R/remote_process.R        | 97 ++-----------------
 .../inst/RpTools/RpTools/bands2lai_snap.py    | 43 ++++++--
 .../RpTools/RpTools/process_remote_data.py    |  6 +-
 .../inst/RpTools/RpTools/rp_control.py        |  8 ++
 .../data.remote/man/collapse_remote_data.Rd   | 29 ------
 modules/data.remote/man/remote_process.Rd     |  2 +-
 6 files changed, 59 insertions(+), 126 deletions(-)
 delete mode 100644 modules/data.remote/man/collapse_remote_data.Rd

diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R
index 2de71c9e667..f3f52beb430 100644
--- a/modules/data.remote/R/remote_process.R
+++ b/modules/data.remote/R/remote_process.R
@@ -40,6 +40,8 @@ remote_process <- function(settings) {
   siteid <- as.numeric(settings$run$site$id)
   siteid_short <- paste0(siteid %/% 1e+09, "-", siteid %% 1e+09)
   outdir <- settings$database$dbfiles
+  lat <- as.numeric(settings$run$site$lat)
+  lon <- as.numeric(settings$run$site$lon)
   start <- as.character(as.Date(settings$run$start.date))
   end <- as.character(as.Date(settings$run$end.date))
   source <- settings$remotedata$source
@@ -189,16 +191,10 @@ remote_process <- function(settings) {
 
   if(stage_get_data == FALSE && stage_process_data == FALSE){
     # requested data already exists, no need to call rp_control
-
-    pro_id <- pro_check$id
-    pro_path <- pro_check$file_path
-    raw_id <- raw_check$id
-    raw_path <- raw_check$file_path
-
-    settings$remotedata$raw_id <- raw_id
-    settings$remotedata$raw_path <- raw_path
-    settings$remotedata$pro_id <- pro_id
-    settings$remotedata$pro_path <- pro_path
+    settings$remotedata$raw_id <- raw_check$id
+    
settings$remotedata$raw_path <- raw_check$file_path + settings$remotedata$pro_id <- pro_check$id + settings$remotedata$pro_path <- pro_check$file_path return(settings) } @@ -211,6 +207,8 @@ remote_process <- function(settings) { fcn.args <- list() fcn.args$coords <- coords fcn.args$outdir <- outdir + fcn.args$lat <- lat + fcn.args$lon <- lon fcn.args$start <- start fcn.args$end <- end fcn.args$source <- source @@ -245,18 +243,6 @@ remote_process <- function(settings) { output <- do.call(RpTools$rp_control, fcn.args) - # IF: this is extremely hacky but we will need a post-processing function/sub-module here - # this code can remind us to implement it later, for now it is only used for GEE - Sentinel2 - SNAP- LAI example - # it would be better if this sub-module comes after DB insertion below and the processed files have their own insertion - if(source == "gee" & collection == "s2" & !is.null(algorithm) & !is.null(out_process_data)){ - settings$remotedata$collapse <- TRUE - } - - if(!is.null(settings$remotedata$collapse)){ - collapse_remote_data(output, out_process_data, - list(lat = settings$run$site$lat, lon = settings$run$site$lon)) - } - # insert output data in the DB db_out <- remotedata_db_insert( @@ -1173,69 +1159,4 @@ remotedata_db_insert <- } return(list(raw_id = raw_id, raw_path = raw_path, pro_id = pro_id, pro_path = pro_path)) - } - - - - - -##' collapses remote data (currently LAI only) -##' -##' @name collapse_remote_data -##' @title collapse_remote_data -##' @param output output list from rp_control -##' @param out_process_data type of processed data -##' @param latlon latlon -##' @examples -##' \dontrun{ -##' collapse_remote_data( -##' output, -##' out_process_data, -##' latlon) -##' } -##' @author Istem Fer -collapse_remote_data <- function(output, out_process_data, latlon){ - - # open up the nc - nc <- ncdf4::nc_open(output$process_data_path) - ncdat <- ncdf4::ncvar_get(nc, out_process_data) - if(length(dim(ncdat)) == 3){ - aggregated_ncdat_mean <- apply(ncdat, 3, mean, na.rm = TRUE) - aggregated_ncdat_std <- apply(ncdat, 3, sd, na.rm = TRUE) - } - - # write back to nc file - outlist <- list() - outlist[[1]] <- aggregated_ncdat_mean # LAI in (m2 m-2) - outlist[[2]] <- aggregated_ncdat_std - outlist[[3]] <- ncdat - - t <- ncdf4::ncdim_def(name = "time", - units = nc$var$lai$dim[[3]]$units, - nc$var$lai$dim[[3]]$vals, - calendar = "standard", - unlim = TRUE) - - - lat <- ncdf4::ncdim_def("lat", "degrees_north", vals = as.numeric(latlon$lat), longname = "latitude") - lon <- ncdf4::ncdim_def("lon", "degrees_east", vals = as.numeric(latlon$lon), longname = "longitude") - - x <- ncdf4::ncdim_def("x", "", vals = as.numeric(nc$var$lai$dim[[2]]$vals), longname = "") - y <- ncdf4::ncdim_def("y", "", vals = as.numeric(nc$var$lai$dim[[1]]$vals), longname = "") - - nc_var <- list() - nc_var[[1]] <- ncdf4::ncvar_def("LAI", units = "m2 m-2", dim = list(lon, lat, t), missval = -999, longname = "Leaf Area Index") - nc_var[[2]] <- ncdf4::ncvar_def("LAI_STD", units = "", dim = list(lon, lat, t), missval = -999, longname = "") - nc_var[[3]] <- ncdf4::ncvar_def("LAI_UNCOLLAPSED", units = "m2 m-2", dim = list(y, x, t), missval = -999, longname = "") - - ncdf4::nc_close(nc) - - ### Output netCDF data, overwrite previous - nc <- ncdf4::nc_create(output$process_data_path, nc_var) - for (i in seq_along(nc_var)) { - # print(i) - ncdf4::ncvar_put(nc, nc_var[[i]], outlist[[i]]) - } - ncdf4::nc_close(nc) - -} # collapse_remote_data + } \ No newline at end of file diff --git 
a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py
index bf775873308..b936c4188fe 100644
--- a/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py
+++ b/modules/data.remote/inst/RpTools/RpTools/bands2lai_snap.py
@@ -10,12 +10,15 @@
 import RpTools.gee2pecan_s2 as gee
 from RpTools.gee2pecan_s2 import xr_dataset_to_timeseries
 import RpTools.biophys_xarray as bio
+from collections import OrderedDict
 import geopandas as gpd
 import xarray as xr
+import numpy as np
 import os
 import time
 
-def bands2lai_snap(inputfile, outdir, filename):
+
+def bands2lai_snap(inputfile, outdir, lat, lon, filename):
     """
     Calculates LAI for the input netCDF file and saves it in a new netCDF file.
 
@@ -24,6 +27,10 @@
     input (str) -- path to the input netCDF file containing bands.
 
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
+    
+    lat (float) -- latitude of the site
+    
+    lon (float) -- longitude of the site
 
     filename (str) -- filename of the output file
 
@@ -44,15 +51,39 @@
     timeseries = {}
     timeseries_variable = ["lai"]
 
-    # if specified output directory does not exist, create it.
+    # if specified output directory does not exist, create it. 
     if not os.path.exists(outdir):
         os.makedirs(outdir, exist_ok=True)
 
     timestamp = time.strftime("%y%m%d%H%M%S")
-    
+
+    # creating a timeseries
+    timeseries[area.name] = xr_dataset_to_timeseries(area, timeseries_variable)
+
+    area = area.rename_vars({"lai": "LAI_UNCOLLAPSED"})
+    
+    lat = np.array([lat])
+    lon = np.array([lon])
+    
+    od = OrderedDict()
+    od["x"] = "x"
+    od["y"] = "y"
+    
+    latlon = OrderedDict()
+    latlon["lat"] = lat
+    latlon["lon"] = lon
+    
+    # aggregate the values
+    LAI = area.LAI_UNCOLLAPSED.mean(dim=od)
+    LAI = LAI.expand_dims(dim=latlon)
+    LAI.attrs = {"units": "m2 m-2"}
+    
+    LAI_STD = area.LAI_UNCOLLAPSED.std(dim=od)
+    LAI_STD = LAI_STD.expand_dims(dim=latlon)
+    
+    area = area.assign({"LAI": LAI, "LAI_STD": LAI_STD})
+    
     save_path = os.path.join(outdir, filename + "_" + timestamp + ".nc")
 
-    # creating a timerseries and saving the netCDF file
     area.to_netcdf(save_path)
-    timeseries[area.name] = xr_dataset_to_timeseries(area, timeseries_variable)
-    
+
     return os.path.abspath(save_path)
diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
index 79374d41abd..a337d221474 100644
--- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
+++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
@@ -11,7 +11,7 @@
 import os
 import time
 
-def process_remote_data(out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge=None, existing_pro_file_path=None, pro_file_name=None):
+def process_remote_data(out_get_data, out_process_data, outdir, lat, lon, algorithm, input_file, pro_merge=None, existing_pro_file_path=None, pro_file_name=None):
     """
     uses processing functions to perform computation on input data
 
@@ -19,6 +19,8 @@ def process_remote_data(out_get_data, out_process_data, outdir, algorithm, input
     ----------
     output (dict) -- dictionary contatining the keys get_data and process_data
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created. 
diff --git a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
index 79374d41abd..a337d221474 100644
--- a/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
+++ b/modules/data.remote/inst/RpTools/RpTools/process_remote_data.py
@@ -11,7 +11,7 @@
 import os
 import time

-def process_remote_data(out_get_data, out_process_data, outdir, algorithm, input_file, pro_merge=None, existing_pro_file_path=None, pro_file_name=None):
+def process_remote_data(out_get_data, out_process_data, outdir, lat, lon, algorithm, input_file, pro_merge=None, existing_pro_file_path=None, pro_file_name=None):
     """
     uses processing functions to perform computation on input data

     Parameters
     ----------
     output (dict) -- dictionary contatining the keys get_data and process_data
     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
+    lat (float) -- latitude of the site
+    lon (float) -- longitude of the site
     algorithm (str) -- name of the algorithm used to perform computation.
     inputfile (str) -- path to raw file
     pro_merge (str) -- if the pro file has to be merged
@@ -45,7 +47,7 @@
     # import the function from the module
     func = getattr(module, func_name)
     # call the function
-    process_datareturn_path = func(input_file, outdir, pro_file_name)
+    process_datareturn_path = func(input_file, outdir, lat, lon, pro_file_name)

     if pro_merge == True and pro_merge != "replace":
         try:
diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
index 9cb05f9b06f..fc007d7fccc 100644
--- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py
+++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
@@ -18,6 +18,8 @@
 def rp_control(
     coords,
     outdir,
+    lat,
+    lon,
     start,
     end,
     source,
@@ -49,6 +51,10 @@
     coords (str) -- geometry of the site from BETY

     outdir (str) -- path to the directory where the output file is stored. If specified directory does not exists, it is created.
+
+    lat (float) -- latitude of the site
+
+    lon (float) -- longitude of the site

     start (str) -- starting date of the data request in the form YYYY-MM-DD
@@ -125,6 +131,8 @@
             out_get_data,
             out_process_data,
             outdir,
+            lat,
+            lon,
             algorithm,
             input_file,
             pro_merge,
diff --git a/modules/data.remote/man/collapse_remote_data.Rd b/modules/data.remote/man/collapse_remote_data.Rd
deleted file mode 100644
index c8dba92f5de..00000000000
--- a/modules/data.remote/man/collapse_remote_data.Rd
+++ /dev/null
@@ -1,29 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/remote_process.R
-\name{collapse_remote_data}
-\alias{collapse_remote_data}
-\title{collapse_remote_data}
-\usage{
-collapse_remote_data(output, out_process_data, latlon)
-}
-\arguments{
-\item{output}{output list from rp_control}
-
-\item{out_process_data}{type of processed data}
-
-\item{latlon}{latlon}
-}
-\description{
-collapses remote data (currently LAI only)
-}
-\examples{
-\dontrun{
-collapse_remote_data(
-  output,
-  out_process_data,
-  latlon)
-}
-}
-\author{
-Istem Fer
-}
diff --git a/modules/data.remote/man/remote_process.Rd b/modules/data.remote/man/remote_process.Rd
index 60283627bdc..5a1724f46f0 100644
--- a/modules/data.remote/man/remote_process.Rd
+++ b/modules/data.remote/man/remote_process.Rd
@@ -7,7 +7,7 @@
 remote_process(settings)
 }
 \arguments{
-\item{settings}{PEcAn settings list containing remotedata tags: raw_mimetype, raw_formatname, source, collection, scale, projection, qc, algorithm, credfile, pro_mimetype, pro_formatname, out_get_data, out_process_data, overwrite}
+\item{settings}{PEcAn settings list containing remotedata tags: source, collection, scale, projection, qc, algorithm, credfile, out_get_data, out_process_data, overwrite}
 }
 \description{
 call rp_control (from RpTools Python package) and store the output in BETY
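The `process_remote_data` hunk above shows the processing function being looked up by name with `getattr` and called with the new `lat`/`lon` arguments threaded through. A minimal sketch of that dispatch pattern; the exact naming scheme ("bands" + "2" + "lai" + "_" + "snap" -> `bands2lai_snap`) is inferred from the patch and should be treated as illustrative:

```python
import importlib

def call_processing_function(out_get_data, out_process_data, algorithm, *args):
    # Build the module/function name from the requested raw and processed
    # outputs, e.g. bands2lai_snap (the scheme is inferred, not confirmed).
    func_name = out_get_data + "2" + out_process_data + "_" + algorithm
    module = importlib.import_module("RpTools." + func_name)
    # import the function from the module, as process_remote_data does
    func = getattr(module, func_name)
    # call it with the remaining arguments; in the patched signature these
    # are input_file, outdir, lat, lon, pro_file_name
    return func(*args)
```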
From 2c5708f88a25be0985b0191547f3c2d30dc42826 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Wed, 26 Aug 2020 14:23:33 +0530
Subject: [PATCH 1456/2289] convert out_get_data and out_process_data to
 lowercase in rp_control

---
 modules/data.remote/inst/RpTools/RpTools/rp_control.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
index fc007d7fccc..0ebef9874ef 100644
--- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py
+++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py
@@ -102,6 +102,9 @@ def rp_control(

     """

+    out_get_data = out_get_data.lower()
+    out_process_data = out_process_data.lower()
+
     if stage_get_data:

From 14b0b1e0045926de2689099032b0fa6e7136ed06 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Wed, 26 Aug 2020 14:38:20 +0530
Subject: [PATCH 1457/2289] remove netcdf from description

---
 modules/data.remote/DESCRIPTION | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION
index e85f1db2b76..7b6c3663c2d 100644
--- a/modules/data.remote/DESCRIPTION
+++ b/modules/data.remote/DESCRIPTION
@@ -26,8 +26,7 @@ Imports:
     binaryLogic,
     doParallel,
     parallel,
-    foreach,
-    ncdf4
+    foreach
 Suggests:
     testthat (>= 1.0.2),
     ggplot2,

From c1fef37a679019eec039955caa743901e318b67a Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Wed, 26 Aug 2020 16:09:52 +0530
Subject: [PATCH 1458/2289] update book documentation

---
 book_source/03_topical_pages/03_pecan_xml.Rmd |   4 +-
 .../03_topical_pages/09_standalone_tools.Rmd  | 110 ++++++++++++------
 .../11_images/remotemodule.png                | Bin 0 -> 108972 bytes
 modules/data.remote/inst/requirements.txt     |  51 -------
 4 files changed, 76 insertions(+), 89 deletions(-)
 create mode 100644 book_source/03_topical_pages/11_images/remotemodule.png
 delete mode 100644 modules/data.remote/inst/requirements.txt

diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd
index b07e2cabd6e..ec4293aa443 100644
--- a/book_source/03_topical_pages/03_pecan_xml.Rmd
+++ b/book_source/03_topical_pages/03_pecan_xml.Rmd
@@ -795,8 +795,6 @@ This section describes the tags required for configuring `remote_process`.
 ```

-* `raw_mimetype`: (required) MIME type of the raw file
-* `raw_formatname`: (required) format name of the raw file
 * `out_get_data`: (required) type of raw output requested, e.g., bands, smap
 * `source`: (required) source of remote data, e.g., gee or appeears
 * `collection`: (required) dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears
@@ -813,6 +811,8 @@ These tags are only required if processed data is requested:
 * `pro_mimetype`: (optional) MIME type of the processed file
 * `pro_formatname`: (optional) format name of the processed file

+Additional information for the module is taken from the registration files located at `data.remote/inst/registration`
+
 The output data from the module are returned in the following tags:

 * `raw_id`: input id of the raw file
diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index fa6c53a5a05..9e743e4b72e 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -174,10 +174,25 @@ pip3 install -e .
 #this will open a browser and ask you to sign in with the Google account registered for GEE
 earthengine authenticate
 ```
+Alternatively,
+```bash
+python3
+
+import ee
+ee.Authenticate()
+```
+
 5. **Save the Earthdata credentials**. If you wish to use AppEEARS you will have to store your username and password inside a JSON file and then pass its file path as an argument in `remote_process`
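The patch does not show the expected layout of that credentials file. A plausible shape, with a small Python helper to write it; the `username`/`password` key names here are an assumption for illustration, not confirmed by the source:

```python
import json

# Assumed structure of the Earthdata credentials file; the key names
# "username" and "password" are illustrative, not taken from the patch.
creds = {"username": "your_earthdata_username",
         "password": "your_earthdata_password"}

with open("earthdata_credentials.json", "w") as f:
    json.dump(creds, f)

# The absolute path to this file is what goes into the <credfile> tag.
```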
 #### Usage guide:
-This module contains GEE and other processing functions which retrieve data from a specific dataset and source. For example, `gee2pecan_s2()` downloads bands from Sentinel 2 using GEE and `bands2lai_snap()` uses the downloaded bands to compute LAI using the SNAP algorithm. These functions can be used independently but the recommended way is to use `remote_process`which is the main function that controls all the individual functions to create an organized way of downloading and storing remote sensing data in PEcAn.
+This module is accessible through the R function `remote_process`, which uses the Python package "RpTools" (located at `data.remote/inst/RpTools`) for downloading and processing data. RpTools has a function named `rp_control` which controls two other functions:
+
+1. `get_remote_data`, which controls the scripts used for downloading data from the source. For example, `gee2pecan_s2` downloads bands from Sentinel 2 using GEE.
+
+2. `process_remote_data`, which controls the scripts responsible for processing the raw data. For example, `bands2lai_snap` uses the downloaded bands to compute LAI using the SNAP algorithm.
+
+![Workflow of the module](03_topical_pages/11_images/remotemodule.png)
+

 ### Configuring `remote_process`

 ```xml
 <remotedata>
- <raw_mimetype>...</raw_mimetype>
- <raw_formatname>...</raw_formatname>
  <out_get_data>...</out_get_data>
  <source>...</source>
  <collection>...</collection>
  <scale>...</scale>
  <projection>...</projection>
  <qc>...</qc>
- <pro_mimetype>...</pro_mimetype>
- <pro_formatname>...</pro_formatname>
- <overwrite>...</overwrite>
  <algorithm>...</algorithm>
  <credfile>...</credfile>
 </remotedata>
 ```

-* `raw_mimetype`: (required) MIME type of the raw file
-* `raw_formatname`: (required) format name of the raw file
+
 * `out_get_data`: (required) type of raw output requested, e.g., bands, smap
 * `source`: (required) source of remote data, e.g., gee or appeears
 * `collection`: (required) dataset or product name as it is provided on the source, e.g. "COPERNICUS/S2_SR" for gee or "SPL3SMP_E.003" for appeears
@@ -217,10 +227,37 @@ These tags are only required if processed data is requested:
 * `out_process_data`: (optional) type of processed output requested, e.g., lai
 * `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands
 * `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS
-* `pro_mimetype`: (optional) MIME type of the processed file
-* `pro_formatname`: (optional) format name of the processed file

-Other input data:
+Additional information is taken from the registration files located at pecan/modules/data.remote/inst/registration; each source has its own registration file:
+
+GEE registration file (register.GEE.xml):
+
+* `collection`
+   * `original_name` original name of the image collection, e.g. COPERNICUS/S2_SR
+   * `pecan_name` short form of the original name by which the collection is represented in PEcAn, e.g. s2
+* `coord`
+   * `coord_type` coordinate type supported by the collection
+* `scale` the default value of the scale can be specified here
+* `qc` the default value of the qc parameter can be specified here
+* `raw_format` format details of the raw file
+   * `id` id of the format
+   * `name` name of the format
+   * `mimetype` MIME type
+* `pro_format` format details of the processed file when the collection is used to create a processed file
+   * `id` id of the format
+   * `name` name of the format
+   * `mimetype` MIME type
+
+AppEEARS registration file (register.APPEEARS.xml):
+
+* `coord`
+   * `coord_type` coordinate type supported by the product
+* `raw_format` format details of the output file
+   * `id` id of the format
+   * `name` name of the format
+   * `mimetype` MIME type
+
+Remaining input data:
+
 * start date, end date: these are taken from the `run` tag in `pecan.xml`
 * outdir: from the `outdir` tag in `pecan.xml`
@@ -235,12 +272,9 @@ The output data from the module are returned in the following tags:

 **Output files**:

-The output files are stored in a directory inside the specified outdir with the following naming convention: `source_site_siteid`
+The output files are of netCDF type and are stored in a directory inside the specified outdir with the following naming convention: `source_site_siteid`

-Output files returned by GEE functions are in the form of netCDF files. When using AppEEARS, output is in the form of netCDF files if the AOI type is a polygon and in the form of csv files if the AOI type is a point.
-
-The output files are created with the following naming convention: `collection_scale_projection_qc_site_siteid_TimeStampOfFileCreation`
-If scale, projection or qc is not applicable for the requested product "NA" would be put.
+The output files are created with the following naming convention: `source_collection_scale_projection_qc_site_siteid_TimeStampOfFileCreation`

 Whenever a data product is requested the output files are stored in the inputs table of BETYdb. Subsequently when the same product is requested again with a different date range but with the same qc, scale, projection the previous file in the db would be extended. The DB would always contain only one file of the same type.

 As an example, if a file containing Sentinel 2 bands for start date: 2018-01-01, end date: 2018-06-30 exists in the DB and the same product is requested again for a different date range one of the following cases would happen,
@@ -253,6 +287,24 @@ As an example, if a file containing Sentinel 2 bands for start date: 2018-01-01,

 When a processed data product such as SNAP-LAI is requested, the raw product (here Sentinel 2 bands) used to create it would also be stored in the DB. If the raw product required for creating the processed product already exists for the requested time period, the processed product would be created for the entire time period of the raw file. For example, if Sentinel 2 bands are present in the DB for 2017-01-01 to 2017-12-31 and SNAP-LAI is requested for 2017-03-01 to 2017-07-31, the output file would contain LAI for 2017-01-01 to 2017-12-31.
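A small sketch of the extend-the-existing-file behaviour described above (illustrative only; the actual merge handling lives in `remote_process`/`remotedata_db_insert` and the `pro_merge` machinery): given the date range already covered by the file in the DB and a new request, compute what still needs to be downloaded.

```python
from datetime import date

def missing_ranges(existing, requested):
    """Return the sub-ranges of `requested` not covered by `existing`.

    Both arguments are (start, end) date tuples. Sketch of the behaviour
    described above, not PEcAn code.
    """
    (e_start, e_end), (r_start, r_end) = existing, requested
    gaps = []
    if r_start < e_start:  # need earlier data than the DB file holds
        gaps.append((r_start, e_start))
    if r_end > e_end:      # need later data than the DB file holds
        gaps.append((e_end, r_end))
    return gaps            # [] means the existing file already covers it

# Bands exist for 2018-01-01..2018-06-30; request 2018-10-01..2018-12-31:
print(missing_ranges((date(2018, 1, 1), date(2018, 6, 30)),
                     (date(2018, 10, 1), date(2018, 12, 31))))
# -> [(datetime.date(2018, 6, 30), datetime.date(2018, 12, 31))]
# i.e. the file is extended from mid-2018 through the end of the request.
```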
+
+#### Creating polygon-based sites
+
+A polygon site can be created in BETYdb in the following way:
+
+```
+PEcAn.DB::db.query("insert into sites (country, sitename, geometry) values ('country_name', 'site_name', ST_SetSRID(ST_MakePolygon(ST_GeomFromText('LINESTRING(lon lat elevation)')), crs));", con)
+```
+
+Example,
+
+```
+db.query("insert into sites (country, sitename, geometry) values ('FI', 'Qvidja_ca6cm', ST_SetSRID(ST_MakePolygon(ST_GeomFromText('LINESTRING(22.388957339620813 60.287395608412218 14.503780364990234,
+22.389600591651835 60.287182336733203 14.503780364990234,
+22.38705422266651 60.285516177775868 14.503780364990234,
+22.386575219445195 60.285763643883932 14.503780364990234,
+22.388957339620813 60.287395608412218 14.503780364990234 )')), 4326));", con)
+```
+
 #### Example use (GEE)

 This example will download Sentinel 2 bands and then use the SNAP algorithm to compute Leaf Area Index.

 ```xml
 <remotedata>
- <raw_mimetype>raw_mimetype</raw_mimetype>
- <raw_formatname>raw_formatname</raw_formatname>
  <out_get_data>bands</out_get_data>
  <source>gee</source>
  <collection>COPERNICUS/S2_SR</collection>
  <scale>10</scale>
- <qc>1</qc>
  <algorithm>snap</algorithm>
-
- <pro_mimetype>pro_mimetype</pro_mimetype>
- <pro_formatname>pro_formatname</pro_formatname>
- <out_process_data>lai</out_process_data>
-
+ <out_process_data>LAI</out_process_data>
 </remotedata>
 ```
@@ -293,20 +338,12 @@ This example will download the layers of a SMAP product(SPL3SMP_E.003)

 ```xml
 <remotedata>
- <raw_mimetype>raw_mimetype</raw_mimetype>
- <raw_formatname>raw_formatname</raw_formatname>
  <out_get_data>smap</out_get_data>
  <source>appeears</source>
  <collection>SPL3SMP_E.003</collection>
- <projection>native</projection>
-
-
-
-
-
-
+ <credfile>path/to/jsonfile/containingcredentials</credfile>
 </remotedata>
 ```
@@ -324,8 +361,7 @@ The output netCDF file will be saved at outdir and its record would be kept in t

 Once you have the Python script for downloading the collection from GEE, please do the following to integrate it with this module.

 1. Make sure that the function and script names are same and named in the following way: `gee2pecan_pecancodeofimagecollection`

-   `pecancodeofimagecollection` can be any name which you want to use for representing the collection is an easier way. For example, the colection `NASA_USDA/HSL/SMAP_soil_moisture` has the pecancode `smap`. Add this information to the `collection_lut` data frame in `remote_process.R`
+   `pecancodeofimagecollection` can be any name which you want to use for representing the collection in an easier way.

 Additionally, ensure that the function accepts and uses the following arguments,

 * `geofile` - (str) GeoJSON file containing AOI information of the site
 * `outdir` - (str) path where the output file has to be saved
 * `start` - (str) start date in the form YYYY-MM-DD
 * `end` - (str) end date in the form YYYY-MM-DD
 * `scale` and `qc` if applicable.

-2. Make sure the output file is of netCDF or CSV type and follows the naming convention described above.
+2. Make sure the output file is of netCDF type and follows the naming convention described above.

 3. Store the Python script at `pecan/modules/data.remote/inst/RpTools/RpTools`
+
+4. Update the `register.GEE.xml` file.

 After performing these steps the script will be integrated with the remote data module and would be ready to use.
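The integration steps above imply a common signature for new GEE download scripts. A skeletal example of what such a script might look like; the collection name "mycode" and the function body are placeholders, and only the `geofile`/`outdir`/`start`/`end`/`scale`/`qc` argument convention and the netCDF-path return come from the text:

```python
# gee2pecan_mycode.py -- skeletal template for a new GEE download script;
# "mycode" and the body are placeholders, only the signature convention
# comes from the integration steps above.
import os
import time

def gee2pecan_mycode(geofile, outdir, start, end, scale=None, qc=None):
    """
    geofile (str) -- GeoJSON file containing AOI information of the site
    outdir  (str) -- path where the output netCDF file has to be saved
    start   (str) -- start date in the form YYYY-MM-DD
    end     (str) -- end date in the form YYYY-MM-DD
    """
    os.makedirs(outdir, exist_ok=True)
    # ... query the GEE collection for the AOI and date range here ...
    timestamp = time.strftime("%y%m%d%H%M%S")
    save_path = os.path.join(outdir, "mycode_site_" + timestamp + ".nc")
    # ... write the downloaded data to save_path as netCDF ...
    return os.path.abspath(save_path)
```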
diff --git a/book_source/03_topical_pages/11_images/remotemodule.png b/book_source/03_topical_pages/11_images/remotemodule.png
new file mode 100644
index 0000000000000000000000000000000000000000..2d6f18309283314d512f03ad410d69db46a839b1
GIT binary patch
literal 108972
[binary image data omitted: remotemodule.png, the module workflow diagram]
zcAd?BuCprh?k#p9ENW9gjNtg`$$Pn=?gT#VxHsP}l*o$h8j%#f*R*Xq%*pvK9E;`0 zZdXd*E`l<|3U!r}i)m#__DbW&MohweWVy}=*a(mfv}gqgF>9)l- zRxW}Bdb=~vo|xC{t!Q(F{E;B1HfS*Tc`jA25<9Rn(U|t|EqkYiDT@Zcp$tJt?Fs}+ zHRpj_1MrhRAZi?QXd=HgT*R3Y^)lzQ^}vn4Ncpq42Yu_w11IXU4!dVA%9A?>VMRl?+8Yfisq6N9YO#wSnEz?G^VB z6MlpYEnxbpOPnU_P^M0y<@VGk#_or_yJ81BF+h^&#VBuFkRNvfo7KoCL=9_amCIJ% zfgQ%oD#m|bi`(w3rI_tHj$@;ZRxCj+YuvB>l<&QW%UYLCCi%X=FA8P7DVd4pUeG7{ z?iwM<#Y)+q#2cu^{^%iFvsW{|jm|hZa(t2`+lONUrHTmv0Od;RLVW1o{U8r#B1zNJd z+H^5-r4~8%|8{-?*cSI?%y`My_lzcl&K!&IjcrJ!->TMGU z&qX*20b#csd*<|?e!nMT>~#&QK*RY8FudKfmkJfbW3WgWb8D#ZCaBIm-D|S(1th!> z<@mJ=OC!b2ztUGz)QcTIEVNr8Z5ZAFW0RDQjzpU&5(HGq!V38J(YH_?8ut@eP*nS| z4Ot(Mi8NbI(+dAVGQd?W$^Qc5Mtbe}FK<=LVFa<0M{y7e@xrYaL(OYmG!DRn<>fl0 zOMqwG$_)5ysKD^+ORiCtJ5v9#pc%P^6rHTvLAcsCP*x*Ja==i=0;i(ScV%KpMPxM} z;XyBGNf9WF;$iH@N%+vb@XuAm#LC-=7R+mW*SdmE)(H@_FuHMtuW%KS+Httl!@o38 z`@9SyM$tF?AKc)AYeQ3L(vOeh4%lEbQaf3;Jit*~`(sIq*jxQigAF7`LH+4B1^Sv1 zkgm)%U1U&Xm9%q3zQw4{#hRywJ$XYyWLI0f8G=|dF#4xU3h`)eujg$x3JTQsw z;Z$-N5)Xhdp7e4u4%2-1^t)AZM%G^4iyOGE4rxHfE`w5V>S{FOoAcZVmV`5hX0AUt|Nw8JJ3}xY8r& zlZ3y#%Oj@dI;QA%?YxY2=u>C3TRgj4~GDQvL?M0&GX)%%hSG z8n&XJ&@mpw%oJB0#WWvgvzcs(MS%Ftm9H!NX@R_E1H3{s=Ypsmo5&6Nk1+iPuOS4J zq+(CZFTls>Qwoy7T>6|;)JaWKrjatYG-jpkSgk<%VZ`=~4f~*>(K`{@HFZ=sX1M+Y zM#4AB0qV;OI3X%pQbDs(V4!^oy4wDLd_KaYC=W6z9ge;(9x?E7-bA8g$_K&$$b zc&h(A7`yT-fGGB6H2;%}TgwObIS+rAHXDx9H=kVKIHORwAlTFh_x_dSkL_jxp7Z^@ zDg32326E)!*h`Qubsrz|UH~hd9!jUU|r|vB0q+8H1g0QwUryyrt54lOxr82mEJvf`FGEj#T#qg9skkl zWU*nBo6LUOtM^b@L^`&=1ok>cK%dS4p z&Zzx3w&ObdvPSi1odvU&nTHNY@;%uCTWZe6dcEhsz{rxWeWZ~+D`M%z?z5t6g4wZP0w0)~b$>p|z^Ulrhw3KAo z1h5(w<`t{oQ}CEryQHxFO(6OUykL<>U{Qo zV<@AkSGPZZOw8u((?_+FqDUvnw!gL}WN)=Y`Dfn%P4%r~piqmx!1h`GnSmuktw7xF zW5%-Ujh8KQMv8}C49g-_e!#VUzMBtOVTe2AyLqjr&^hJv>(=+jV%BT}eJ)?F#^o=K zm#f`4I74u3XwG=8*|V_JF7Uy&qC0Omw{k&Ke|(|xu?8jFRo&#yP`J+b~! zu%IlNAS+qhao2uhfN`ObS&`V>7|+Hl9eLcZtZu;HKB0hOZK^rU!0kt#Gf2l+1Fu7A zPFfGc!MiHlyYejK^1=1bzm{}wPMo7UjW2QiTr7LzV5$KZ@oVY&i(a=Id)z;qDDJ-B7- z*DevszUF(EB>LK*fdV0?ItWqL7KS)==DVlAzo}w4K}!f#^&hrfpB?sf`@HkvPm6Rg zrhU54(rSZ_UeSoQugKLu-gbCi=5y6v`P02D)quN~rU$MS3{@2O?$2K=uy~Pp?A{MX zg$C?3{J?eVBVZVa!#H-=3V z9Glacq^DgF!04-HXm)#kRCZ=@Kb!v4F5j#6I{%I9)ucp&CN*nkw8nkfc}P{l&Rz&* zhs4i+laS2jr%tu}$%imrgFvwjU7p9KS#?$gib#xj(Vr>(0^@ooh{!v?wy%VAUX{zH>Nt$^7A#+MkwLYHMldt7H7v zNgZoDPyBm#P}QrQLz~3f^3i0s`ol{HMCyzC=C)xVz2UZA`?a;%JAY5*9CT;Zez(j1 zg2X&dvyf8n*%y1~T(Y-~j{svND{_no{XIrF2%+vCz_Ww!DLG5QnAJzSzc(UOPiE;d z)u}(e+!@)HHdVLmW-JK9!Z+yQ+IQYpJJP>J2M&btxH>KSHP+BJzS?<#-yh;|0pVQ)y=8-ZulI_VAC5< zQySTd&K=3D-3i_N+STE;(=&W>;uODpPl|lJiU0M%($SkaG*hCM@_BAZXcq|J-|4U|6Rtu~-T zUCvbTUwoCDSpo?0DAH9x#aRYCn@b407K*Cgy)|G@lp?xqghdMMv0{Wp>R$Kz`sb2s z(eQy?o6F-a|4!qsaLUK!Ofs708S^5lS=+$1*2kMixAd=``%Zu5_|I988F||=w4=Ra zn8}aOq32a$SeoGj%5;!g{)l^a@4}N^X12c1v^H{nD-&*@IDc&L4H%?$i2G@E8=Pgf zECd3d?il>?^KAnlXx@Q2<09yz!I@Xt&*p8tvs@&ooBM!{K*-w}Dakr#8L!-~hQ?toDDE4?QrLsI zfUQdYoZNJ?LBYC_ulii};#!1IwL728iH!t6I|F_U1Qgy~q8VRn`~&1ApivRZuXp7gR}}emMSH8-(#8M&x)1;Uy2lkykf+_2 z`{`;p8Z(!a0f}XsKpU@on*^nOb$=L=x#u=Gnl#fM(kW4&OIDcA0FS$_E%BJz~9NT6@Qb4MTWLNrpk#(Ko=oal7nciwY`LN>VJhHBKCqgcECE$est=bl935Y>s&8%EKX1j@r? 
zhdtOO(MMJiV)nUWc7Wz_vpQ073}Yy@I=;)V;~ytH{{f^fxf*5$b_P}QvvgPo6)+w)>~5ecXt*5w z_Gpa^6~|iQEj#EQdq1FxLwoKzl=Q%KUZF3X!=pxezOa8Ev8hGULE3R#JR~6vl0&my zU7-J#M~&WU!YZv~A`q~9f6<#b{hS}rEe&mGQmBY?UfqR=*e_qV(~ap;Z-Zr^KOuqP zpJ(-y9EUWIPbuRoaTYKEnR{%SZ}#db8Vb~ITtr>^fuf8lm-^Hu-)Q`SyQ`P-`yrgN z(#pY*1ty-|BTn#j^@!W4A}@2j>5l2Mq*N8}BIeMV^~$vtqLDov=g9Sixvy7eFS~De zvuB6_)WHX9C!P^^T8vy#U46@Ao}pD##llMxi))V3c0a6wh9qdW&vMles{&m8)w}c1 zxzKFz+-pH{)*u2`W2E`7bpmyaCxfO6GVapUREZBa&LxB96^R=$56b} za034gqwRS}LBPDe+7!m7B+1ltW_2u8aU<&}LnIf@E%4`nZg{E`oYCj2H-NWMslv5V zvk^yAxh6jHl>+Xr5Zx$w_pw%rBSv)Fhi6U*W1ZTR&En8(9X1kEKlxbV#{bz-5;J>V z!Xn~*L6p|g@%&ZtdXLmw)%WagzE}{lq%veOdK@lDuFKFLTF;87;+G&Bug~twKewf32iKz867_+ZF7^v-=ecY@MiX?0p1r!K`KB9i8Z zqO1|pOGcVvB@;5q*s11->U1;`_|7a$e5`>rZZh1C;eXypOLzv?!6t}$D{Q#ISfScL zRG%AGnxqhA$k9w@+`W&GzbG8kRl#7pnvbNArngfD@kZSOc0CNKYBW}vc!Nc$O};xW z482A|QzTkYUZJF{w>|SKjZ6)RsgHIA(!1V+U=n|%kFMpB1Dg7E@aZ7%2bc@xV(Ucz zb?8w!tnpx%gG5d^o7}K`OxA5-I$-JJC1%X$gdz!YM!~mt@6w2Q4KN@KEBxibS!i4_ z8tU}>gs1r~57ci{&F>$}c!#{FY5%ds>esiBf~?>Bg0^U1lPP? zYhKn9yaODWk(MZC+x0$BTDlk);}@pJVOERDB34%&Yi`$0|eU6S=(g`p1$bG!Z zdZukOfF_a{T%mCq>HZ*>(|XRN`K0>O#Isq4G~4U4=0?4)rCn~EaS^-_7dQXVEP4Q= zP8at-^fP`>ZJL@fM##8;OBCPmqXffnn7rzqR)jb-o%(u&W)9309q8tK9~qU1$>8G_ zviPWVAxO*KCX-G@epTmb6ICYLsI)d+VtAs|MOGy;mFvXIAq^UK$E_Es7^>EHeptG$ zeEaRY-LN_0qkSqGVq>tgr>-B?A1cAHd=}9bK-np2qoyzr#;2Dr5b!zSa>yn+lX#(w%sg~S3|VEAeCK3(b1ek}6qB5MD_-@m4b z{IyjKWi!|?zW%(O32~&$J*v^_$$=<6r%siJ0r)^6P;K~<|IskmWFOjwsm(4^|JHN+ z5G8xmRbBWD&&}hZ4t>qh*B@Bp3aN&3iCU1xY%@h`-8O9v%*%;>z31m1G2!Xnv=j)c zDpDngCu_g!K4os>gh@Dx`Z<$z^j)F!gZot5@YT+#{e(FSpsUMYCwD~z5f2w-@L94g zrhRZ04elXj9$!^cvJww+^YG$_VbkdSN&w+gg$ifV-3zQP0BS-MSNpMu2mDay>2sWn zP9H1-g{Ac+O|#A^gyg-(<9wLKT0cp6D*p{Cx%9;t?UXQg;A-$r#H~}o!<#3ku0IF^ zf4~NR57*I>R%$eyno(!vKf*1CzkhevF~l|&z9>HfS&)%nzZ3YyCB(ru?M^`jjDaYi z4eG#DUk9k2({+_Usa<1dcCZzABorav4druHtQSJdvi|5K+c^e8ptPcxB4^U((;D&L zno?$UumdXxE0S&;Gbc@+1h_KDhUeMWZb#5S92*vYO!fRE+1_^USd2>z^!QWo`kg9T zZr325Cgb&w`wmOy`k(7DuF;>8N{=q%L1(cnZXrlDpU7S2Dd%Dx2uZY?_EY$PuYAnvSj)1~YMcl6wYR-P) zcoxvc`Td?ds^qnbq*lUd$vC}Rgk~-Ej$t5EDQ1Ua-xN#b*VlmT?cmlOAhtp}!PBQ% zhs9z~J|Gr@H5WpeUj%+tIZSMXl7uMt+AfAEHfzDK@|C{2+Wsr0jg#fj!R4WTo?#ai z;eJm~A`=dgCml zewLHOc^kWh-4gADM|_vGN?Mf)1P+or#dw96BO=8rdCA0SYmFg?-R~j^X3r8IKo>sA?N<$T?PX!u zN$^hy18k<`@PFT%J)9Hhn!LHzTG4!`9`P62be?b3NAy-k-?jT6O%k`3anV=w)D~|t z*Q>kJb*g*>q4up4cHz;NUm}#CCGd?i;`oFF1meqRxm8uJW7F^G}Eg00TvOKIBlJ^?~FoE$UL1s`r;y^65QD9_{*HIP^5%A zdrfu9fcotogM_nzxT`aY8qthTaKOJ~S*(bbOf*Juh=&@rWcOKU0-4<8 z+b1lh6#W&Mts(DxO@{eL1Qp(>p>J`ZBOwT%i-)1fMC6M+Q`v|GJbyF);M}9P==@b{tZcLxZ>f zsXkuy4-rN{-wVD53weyZZCnF)vzd$r4Gbrc${Gl$-N44^7ynUq!B|I@Os!AxZf~r! 
z;E&I0sNHT4{b+PUb3t&dFZ^x9hO7U{112&RJ*uh3ci_nc6!G8~>h0elY#0AybpXhv zGx^<2@7;8czVjdDqtJ>aMY+=Sv_T#r8+)4e+OaO9`(pP0~(()*wPJLVGu#M$?kDxOfA5;N`%m3`)@Ff8=erjt z9`Z5Bu~;Jgw%k{{ZJmmT*)y_2>Mz(dogHkzwrvWero@o@S(3O$L`EHq<7d#{mlS=C zq*?=HeHDgUc6t~}mRY*>pYcxr;R0|fIUy#CEW*YO{tFQ6&^PPgGV(T?Suu~tZ{c-$ z8y_q1o=p?)XU(D2=3eqIqBu&kem^1-+)JFhQ>SW_K!Bln5@=rqiFWzddkv`32G1Ib(J`j3Cb0)%`FO?SvSG zL%a6;*gQx^N+1wJc>rc*x96;1N2v>=c2(xK73GuazNd~bt;*IjDaxc)BM1gHC;wK< z%p}yFaZ9v9xCLlxY&cAzUfcHL_ ze^7C;1|ty~NPoVBL_;G94TP)EpZh|Jfk?sOwAOMGK{P+ovi%rLrl5|KMLF#hW`S$<$RfUWQbQjfl+JP?DU)<-QB*a-0l zpBRMBd;x(5jZx$5abMtO#68S>#eqq%LiUx=klv@u@QuyjI}QL4lY;uEAIW=ZPD^Gp zpgNw3I1xrML)3)bOF++t;cwTwLievDAyspZS|eFVH@{FSR+d2ZuZ4?|*8f!Z=dnE~ zwhNyiyLwMOah#(MW&I76EMfZt2ab#X+K!9Ns~d;2_AGo}BHJ3wh_6qHMlzb6^Pt6RpMR$FlYm7v>f1L*4dFU65 zYLCoQbb|u;4HDiq0dPZl-RTj!w+*R0gbtxFlQ+lY&-l0GyiB;OqN%{-JZ@x16 zpJq@;e6SF1Ew%pd6(pnxsk7uxU}(Z-p+559KGJFubZDLlF{a#Dt5 zbah!^#nywcA1=sS1Fyvm>>GNS5$8#Xxp?pluuY=sA*ptr1Iipf7O%*$ZlS=GT4U2> zqpoCwYT}ukqfw2yemn2K$GCO|95CrIz2h)D-JqJ873+whI7TQ1k8L^MMJu5@#_oR* zyH>!ePVBIw`Vi0bSrjH13A0(jTdvbUl{v9jpHh?RSpkjVC=$S$^j614aW?5m52hU7rn7*Vk-|SW0 zarNVD(`fnzA2*Q^fMg!^$udPV2H&I~pd`!zw|f&TAuH|1c}l3*K^f|RIs5)zj?a~JLf+eZa(!B>4Thza)9i9>%cIv0Td(9b%o zQKN(FB<>a+13F4Y#v{LJ?i-z!iLtHEE^718B0p^N4V?%mX#ZpM{1Le&I_7QUwq1+v z?JUW&m9ch@keGXsJW;lx5aZx-woH3`DN-6ENx4X)5gdAQ@EXsMXY z^7!j1J05?h$-H&(<$13Ioqk?^dP$abdTFlS4n2~CYb1kNcD`+O8MB~piN-xfJcEyL zeJxyvLN6KqY=OVEQt%P4BN`!hv2J7G9$>fN3a%v<`DP+W1{w1uB&MwWsC1}}K1o$W z`!PheEr>ZHDG-dN+V|jU{%(~kU+)6FW?G`NN%@QzF9I$O)V- zge zxFMJ)w0I?_2Z`RPb$!`S5fW!fII1xV%a&dg+ux2DrMmdj!s=btrp42&8?DP^-Lm*e zt??ZExtDR8rmGX`&w0)>NMD1)WeA)qwiI)yxkT({#bvJY92(Hoj)RHdpns^8Ze}B; zT=2aO)xSG#yT%B3&0a`Z`z1%~romyp-eWHkr7!-tfH1t+k#)rPfzQ?)IUuH^pk%Mfp>1fq4k2E@%9*MRmqMS0 z@JzE$@xwqO(3n8sgqUyO+we#i7#u zl6|Wm)zvjXvce_(J$Az(bK>t#nfel?5qH+7xqzaP!taICX?Ni^fAE%23=ZY^OKDU5 z`@q9upxZzm7-^7ZoFzUoBQmvrJ@E7m2*K|9onUhMd<_eBt~cY#^nf~nVvSsqzDZL? zMs5*zVbX^%!Jx>%k++FwYlDUIT|4w=crHby{OVDEh)qksp}tKsclZGsXJ=Mm(UZX? zjD5?t01o}g6@4@1B}tA~uCd8{yINkbB?f{>l%!i_jUcj^f<6V@oY8?_@{(a1BW4fsmNz$l8DjZfe9oH*d8FgEmXbbJLTxUMw zw0Sa-IwL;UD^C(>6c|Lv|76A-x3;xJVUU_VZJ?4g#1nR?L#2b7@dl}oXAX7k+S*ed z%NDRi`anm0TyeFI$2AO}naL~2lTHFcQCK#QO@baXUj$|59{(2q`B@~{8BUJR)U0NA zkwonS^B^bixsb>`Vkcy0_yFSAKF(H9oh3DJyntLiU3>u#hM{fRlEkA%4dxGFJa)J? 
z8V~&A7;U;XLpu7lAM71xK>~#p!uAitOhEM%wMhFe2&k54D^t2pd)EDZX2rk*3~KU$ zRwkXG9DB@&u;QUYFW38 zsU@E(#y`*FNA*yH(^+^wYmI?JaAflcqvvY)xv%p2UJw03YXnGZ>P~mSt#b(+(JmOM z=uexa5~hPYr543H)`IgA|Fx#?E_d_HO=GpspK#Yc@0xQ`LKl~du;XF{_$&*wIn%(g zKS~{W1DF`2pyPn>-M?(VV_jcJf&m;_nmp(Id&x6 zT;a-nnYZ}Qh1T!d{Ntbin&*pB1%c$^<}IqjHlYbDHM)ZwntFLtqBb;);Y=ln*ks2; zG{@78DGbKL8UfGU8z{oRpnpc+<&iOYORnW`H7n+Z_Fd+-WdR9rr>fqODon2QuOR3R zcAx~GOtppO>-pNU79dmbKd9Z^ZQc6vTYvrcK>hyp;ec}y>sg*$s_n{qd+BKx_lAAq z_|-AP4TH5f*U3*BvyVJ``gXBJDvC`4Ty1MDm)^U59ujcp4GjC!9_4_9vy-h!c-XuOVBBhx~wHb%RGd9R96XU~+E%m$$pC zZbtg+nKe^TyQM_qzX&0S6wOLwXjsa(sV7K(#3-@k%GGy0j9(ldXa=_PX_isFH$RF9 zCdbw#3hN-p@EP(JOI8@VRchwo1KxLeF}M7RKgbQQhUZ04228{d;8sdx^&Hlk3)j}P z?(gyOo`K66ORc{)SplfcyH6T$@nxb%o4EGRnb~B2WO!&an1+*;v&4BMTuW7*4#FUh zrb@~b+WOv?VRda8~zD%q@aas@c?c>9`U&U@2Or7Y4Jj@IorW_bPBB!-d%@( z^ABj$^emVqb^afFZyrr$`~D5HZP`+0DRyl0JXGd+-ek%cks*?zh!QdnnTKr%ks)P> zj474a5fLE?mCPYjB=htf7v1;o`#isQt>63n`L6Y@wU>3@vajnLj`KK=;d6WrUOP1P zTMi=xuFgp-kHW-5_E4PuypLAV7vQD+)=c-}q7l;Y;Oa?YSEzdywa!=i;1rI%^?FNU zFcQ4$3>-^e1xN4x`Q>JZkRH`aiK_J^Txvazh>#ncGsJags@q^Ib7+>3LmdFt4Q>3e zQSk-NL>BbsU_(Hs{=w%_iYIps_gkKZ%}kesxkMdJS!Gzx<_q@#b?#YqehO}6-ZZG- zA?)w5$OKsTY*bX%S_jG)h|<$u<4 zU3!@(bqs11q+TcI10}kVi`BthZtvRMIb9?k74XwmF{WiacG5McYQvS6z;$8j z)yl`B+TYNtbfA^Mg=ALh%bC4{5h@+%pZL{kw7OGH2w(mv<^dbTg-#aZb-Chipw4tH zD5=lnV9dUu(e!CZR&mMzg40WZffr75QocNT^vJ$o0h0n5<|U-*0(fQ2Eg47H zpL_i{GsWZa`C_3g4RvvLJz)mrbmi#x5_Me;ON_H3U%TEVY_T>36aSAixlj}~Z!!8a z0U;)1;1x74b>`BFZS{Kbp+MiW|2ueDyK5?|sbc$BR@1mYiUJvjcP}JcZoWv-wmgatZ&q9Yjjz3dcoM;Q{zF_IFD{HE}|y^?UE4 z6O|0P;gL}iRx>3Std7b`x%vN=JUhvWpRPOS!Oe6U%7wp$#j{?i{|6X2nugX*2RG|l_t$VnefS)4kZ$(eWpW;l5O4sUNuupAGi?jCN1$#?RY%&GXRPZnS06%ym}R1uDo~$8G1(~XahaXrema%u3i7( z%+T!{FLkws4<(C)b~`J3Q#Ej(k@YjMhF#_%a9y*xdKIRNzz%k+ub(N6enn!_X@TS5 zJIBQYJY;Cohnco1HWW+AJ(@Djyc#E+0byzVf=|5`z&DKKq5tbyp71Q_`Z1zrj5#ms zJ)hg&01ZE)YQN{pB@LqgXl4#BW?stw;A>EGq~AL^3cl=kM?a3|Ba)T#Z~1qYr}3g^ z9=8Im%z4<=8X%P8DqAn;Qxt4LYN=P zoz_B5aCiI1ezumB*M9~GGarHLrxWj*{C%;{6FK9iEO~vNg72QR+;N2k-SIuV(K)2QF8SG zYBzJ?0o~Q+pqbiLOE$aKYWlbJvLXVefisVxaWNK;-60XbC)XA3A;pY+NpUWVI7|H*n~g zrDVoDIO;LrC4V@UUL@zeSWk-_;gwRy%x{m?Jch%a>*ZyhLbN=-^^LaaN-xO ztiBw5I_#A8vH0%Yf+_{RLI}~ZMiWV)W$qJ%R%hfwPBj??jm)ofZOg}fp7+Yt(!Uz?qM|EKl+fE1`ja4-|A9oV=0%VCRxx+?z0 z<7%l#S86K9`371~~AE{_f85d?<|t6wJ?U zvZM`cZ&3(CTgl~9D`kCk-{Y3s3U#e*`ySc4@nHi)B;9;Nw+i)zvv)c#N`tfafMN-XV5q3-R? zhUHH5;N;B4j*#b(6A3)UDVCh-w>lg#c^#g&B25K3GLGxEmU5=}U^q_D8Bfx3noRmE z2M*0{$gc#Rof`7&RLoh(S_#PBaR0UDw`{Xfvm%@0KXmg^4Ua=hY|o=vir*&}?$tlM z-8w3$?VUQ+oc3*&(DZG#l40V{_Z@D{=!xSMf$ToZ8FO#%pU&C-y;#NMvGm!@N6t!$ ziQh(aGwBHfLsz}+0eXcFF`GK>hRa`h4-P#P%6}7^jAx7D`rcDOFl$d+eJm-(oGzK; z;6U|@gOnZ+7sZ@}~{f?$1(q zj87i?KCninb6cV~cKcHL+@}|Y4?mW21-gD(ZP4qR>eR{DU#{~jJLlws=x^V;N4a|H z%&fi-)W`Muy?pK`fBjDEQDvk0Oc{fo0(~yBeKg6Io|8H|N-s?(ynlrzeESnvXm(A9 zRyp|EkG`|X_&C_chQY0 zi1Y8{vwfm6A#-Cp>(A^~24icQA+O%H7O&o>R$tksWfGquo5L1Cllpb|paI!WBOZ^&II$UptXuO7Ccm${Iy$Ffs%-*5A3hT@HSUx(bgRLo*6tqhx!A{P9Q ztRzf^E*cl!mijTA%9qojW=NfLB8WLr#hYGy)`u}$;*$$~#ckNL?S+Y#=CP1V+ znnx*)M`Ht2Jbv>q%4R9ZHHj1%z9I&J$`tA!pEXp%yAuKF zx5cUvpJpSBW3L~!v}KoZtSz;vHb2yPx46M=V7272Phs+Y=D^Ko>Ow5t)0|CQ(YY2vnVy>-Unk`-PLm64)T^`%47p3Bxpq_0*9pOrb? 
z5}V1d8(_oxp*~YMik_#pU(p_4?|p9WkcbUu$!(GNiU`h#a>CRsH|-F*!}2W)D+J0VO(heMmGH`q#R4K8xav@ND566h1ba zH$f*`V7SK1*>-;I?Z@6w&P&0V5V1L0*@vbNar&&%y}I>S*)!_LdE++D1P)jIxN)(& zHe>GV!V6*R#}>XzKg2TocXmwFwq<=yUsB>ojhTWs4N-lx?r%4q0*9+Hp!F2*q&5ejUd2;rQg6Hw z=cnTLK!+ztQ>YS4Ba-kF>03=JEXEve*%DJx3*JqV!%R(Hihp+UjhDUJiVEt7l(#5}9tp%*pRGs_Y82O|tOHoKCnfrZM zp7RU$yfjWg7?RgJw;exzt($o!3N>oKmPCF>9({fwlJNUS%CDEvSkEc36jk+z_UEbX zoS%J5j85oe3ugcUvECDJjrckr51n6K&t0~%-BSs+)KS`FEMHR&XiPv`sXJ_6!zUi` zGmF{u(q_vl8zjdyT!|;}X;L!>@^@dOuZRk=?BdFN-sGf%E5!_KZEL2Vgp|A^L_?EK zOzlYZTg-9U^?IM0sBk-Q@L~pM1jk`Z9bruR(y8BYvcsn5uQ0F8v~w7jYSXRxZ+HdI zAM^Ti+PFexJ4l22T8;0c=QBJZA50*FQ>t@f^kPsQ^Lzon?Stn;U5+m%mfu?5Wg0)X zK7Vn=OXy-xQGJ%Ir}<~EX>O*sse~tPPm}1l8g3ssaiO>$ljOL}E@N>>>}3Yi^LoN~-MRAgiWs3gPy$-& z;e9xDoz^b(9=5E$@{%oI&ZWgPC!aA{gpDx18NT@05JHrh;+DJy+va`cHgp8}CJZ58 zy>O+5C2S6tC-6J}ao~y~liSbst;0UP46<_9*G^seS})>%bE*5jRr*aW=IY7n_g+;> z^tj*%%K5E!OI6fQ9=nwd>46H&O45yjZ(4D1wl2h%51SiJ%(F6yNu&quiV=lg#0pt` z;?9l9(*5*qPcP9C(j{mOse8lcQ-0WvXFL}?(T|c!zy1YF7@$~KTRZV(sJ>1bRPe10nSUq(W zm3;V;w|cT8^_BYXqb=|L!2;l?ub*#E>0?k!#4+OA%WNt&wBnBNX}4I`IU6nYO}{Tm z5VrV}{JE!SI}>c4b3H|tLIS20L9v@2&awH^?Y9L#y0pzoK5Sb*$a=oL?{boW1Ml~R z?|9?;?i@MQTrd}I!ygMi>`R|(eHL#iq)L|5Z5YV=)LhfK%*UHOq(L2NeRuemU=%&C zu@hwYRxY~bA|?-6O=y2I2_@Pf=+`hYV}Y99eLt!EaD!HurcAg4|A74j$fWyPY;No5 zZ_;a(YhFqz*0FH~@+l-!)#?N>!^qlVw3)7)d8({4l zoL+#pl`b3{V2Q?(a*ZF{M^`bM=a{%wRl7w^7oCj?oJ ztAh1u63ZFUPSM>wX|;9AcJKCuCv?Cj7CZXs-UE<72`Fndhr(|{r zeE#|Q)e4IIAMtfScuvo$GfM!FqN#w#Xa&a84k?b7b+^G<4#?vdZ9)bfXlY&rT|VZb z0M~;8W9Rc=&WP$3uB_6^^*lxc*H5QsDczN)qW^bJ^#Q|&aUxmviM ztLC=axBZbQr;El;R-S)%jG^Lp-s?&nX*IknQ$KwvImicjSAvKOFZ|A8X?uz3(ABBV zqfwyoV!yrw1qdabY5@A;1KW-g6Q{g{e>4x-k4qnbclsNEL<2_7%PYzPj$afI6RAML zF470a#{vdZmKG!6f!6Jw+z{{>P$01%tTUr|AgW~B{+qBh_fjXuCjTq%F0VtJI( zLYw$-rTl3{B(wl+g+dG-20nun-~u&(CtNIw<+4zju-5D0Ha0xQOXJw&6U){!gl>7&Sg26o@r|@F$&-tzPqnQJ-1^gcAs;Sr4 z)rk)yl9Zp!v(RI1lT3M5pna4n2k;=r8w1tn;denyzlNt9IU(^X%q(i%EjACeRVH*~ zfkMXjBcZb)o;(iHW)BC%q1NOR5C%?k<=Bea)|Z|m4+}~MR?;Q)14KhuF^Teh9;B{M#Sf;j2(g@GP75$1--i@_Pp&E%lO4`lW$+xinodn-HzIA??X$7B#Dd0OmTHgCz?#*U~I%+Yu#d$#U7`%cfQ3067%}4=obx;7~Xl;+vo#0 zkQh>0OLEEq4dUF2o1Q%#hSUeZ>gS<6{qs%kE)D*D8V3wJ)V7S<9NsL28s>o5cd5C@vamQP$Q z&jbDpv-Ddf7!!IN%JMG)pLv#o&+#(^vJf4h4*zrAyNX!SWR=rFZIah&|AF<>w3C3x zVf~b)HC^IdL{fv-yd1rF2xwJ0MoS9H0d?5Ao3+(u@VlT-zlPHrIdN)`=`d4Tn_&2{ zzrKjeuh~i#IxzV9qsQ9lY622kDd5c`*L_-6}(R=N-L1FvC~Qeo7bar)UImNl&(Qv=O}dwXnY%Wo4SKS8q~lw zm%?bMYC%nZ7gF?Qo^Cj1%HQ$0IwwHE^Z-BlVa@nfe5+PJpY6E)1f!z?HP^+@rR*4+hJi~aSm;FwBdJ7F9WwIV4+dTN)llS+7HGa@-?+X-eaZP#zzhN?Dji+-ZA{pPJxK%S+{5IijRsp^Zzy3^oKR zr#x?}PgEJ86`$2L9w2eV>kze2PIZA>2V>@WJ#Z`g;w_q)caY}XfK_#bE6TU=EhtR1 zZ6D-F#h}iqH{S{Lvsi0ous9Qj6SIIcr?~aXf~hQI6p6{#0X(Y;SnU;g_Kot<+I3U7 zaZ-e_VkKkpZP;(l9YZ*V*<@ovG)e)PQ8_tQ1Y9u(SR}Mhrinhnw~s}h%~A+;ECemw z&tTyfh24&iAbd#Vj37u{E{frdIN1G(8@X6wBO1ads|DzBr1auFw*7(IGmaw3Ih5|D?|3BZUVvQO|AcAlnzl2`Rb$UED-_;Wnz#d zdR6w~!3$5+P)@YszqefQ7GX#9I?6r>5}G=}FtT43KK?8q8p})Ow8U+9f|QGWTk1ym zJ22rAg9MPT?l(6-iMmZIKC|U=+9Hf6(Rv{AEAquBz)Tmf!76lp%&Hg9^1 zpiU548}N{;vfm@^bLqHgg-}{`fj=8 ze0M>1kP#{i+327QxM({z`YG+{48|`ODjKlRL}R@=(NFYIumnmN^x{09_3rQ}kl{rb zIkBs!s)+v@2JVtJuqM=*;%3*)pc@L1tKFaI$oSy(1+i7oNSmgPJCeFcl*?WRloSHv z=p6g}p14D1du=hCh;ragMX*no8JotZ{a1AFz-v_XnbhyEM z@%ZssQYR6o`ug&bH3J>O1Bz+~jl7Iec5~|d zYAO`n0w|3D&V()?%X+Jk%kWq<7VL{tMvg!ox$f2rx5xabLpZmc9H~;b9&(<7(AEY) zImw7%u=nkbfAxQVBp6J$!Sd0CQrM4~4nRK(diwU^CNnSsIQ`fE5TL=(Nl+paU|ys_ zB-{8$L96}he5D$4X4#z!xSYsfOGWvJT{>wrxOX%zNB~!P(~;@0P&9UX@%ZM+NbvDH zQvr&q(R=p6Nv3zrM!71avS93S8)R8P_nRWlW_T`rerxbaQJXq`q0?n403!Nc0~+?U z8|`2qXif?bkC9(MCx4>O7LK?!@i59S_ec|=CXs|POcdnw*3W^e3wefchwr>eFGV?M 
z5}E>SZrL8W%SZ7#7Pv;wC`zA7)L>+|2_T90(O8&dvTmBFA}$@VeE_Y`2fDD3FJd(R z&3+xCA3QvxD_sO0UYiMO>8oPEW2HK9aAtQ_lg;c(`8`&<+UJEJYz2`2kF*Tl-i*Q) zr@m;=8+o6@;om@M#q83xRLz(}IzZXI#Qz{gz+}cBl&`t{1d#B6F|s<1tX>YR-sA}< zDpI|qYb`X@sf=Y9ow;B*s$?7zwTCac>0f{&pZTm(oP*kd1!yxt#% zuR37ngPjiG`_w?6K1cNFbBFe980BUlYO|Ee6(CWV*tGnzg4WTZ(cEL)b~11r4&dJ+ z^H+<7iKx4(d^!rxDc^n3>%TA3Co~x~sXAyA-4XeUPho=dA9fg#uYsT{c^D+4?16ML z_z~oMnh9I8OB8#&_^yc!zhay|5j822U9qTf$ugs)uE!=tFwI~HDb^&20Yv&_)jMt zK+&KO$r|bKrU1$ErUS|Hj+4HriyA!{lyf~ePC+xC=q)=l6MN*uTWG=Y4Xn2Ksu*Mu z4y%*coPnoH6dXr3R0YMM9YnQ5i#7F%LTit3?Cz-X-yMzbOh#}b9?NrGJDI%^mq9>+V-RXF5lhHA%>C0;d3Ld@$juf$kPwV?~UzJcG=jjy!?-U zIfL-D%V2q+vdh8;Y1Op1kRa2$A1+GwQ8#Y=K1~$&i0SV59l?WYX*W8J=kt(xtIeYR zcPKC*UOE^JcSY%K7!8CMK6oIgD;Of7=k{% z?%l}Q*!#p#dRwik4^G3~cYsa7Vyq=j2u#1H1A;49Ke4sAodQxeqbwL_v`Z%Lv7)4~Ii^0=Uq z);XLrB8Gy1{75K3tdHFfS|7)27!NA4tGas6a{#uM%fBk!;M zvrfUel;V8&Zj2=<@_;ZbDcDeK#>fwaI|l!360HPjb{V~dn)&WdaNvXBr4cFr{mliA zelnNO&ka-}bkXErsfG6D<4yxl{duG$Lij=J=-8|L?|WhhKUEnj%nxxW)iROv?6DlTz%CMvCT;24f^{PzdJcK(k81bO+NXBv(DzaLN)hoawV zh2y(BW{*JSR=vu9_U{yLfuPy*8<=Y$i^-viEZwcEP#R=hbIGh2nEeoXfZxFGp$*Lg zYbcT80Tq=a72kJ8mlex?hx)g~{|2`D0|KGOpsIj*vfU(RWdeUi)6+NncXV=c&eJdi- zuNKC??*32ou7sHmdygOVfx5r@NQui&a{XNwYWG9{+7<*!{z_d8vSw--K|&hw(EYO2 zPG4_TpljEv1hI@C8DIC5OW z{hA;ZrGRAUR!nJ-?9Q|kjTP~5XI9KA zEZFj&+N=&d;JkanVLvr^-QgnENfk_e9m8zM2JWov&E?YjdKt!LV7vp)=KV12q*#iW~2nyq)B7XR`vS52Nq=qmYd-i1f0&^#c<%@-#iSc5Ze9h&Gql-dc+D zjaU#uUrgI8ow)o{X)5O(8F3OZtKfrULCsHUC-cS)*7=+mi-w}Fw}6boi&1y7>}jT} zP>Fi>TBm_LrTg|Lot(>mw_l|L&3V(aJ0`Rm90%k%$4T+gSc4%=W%rRFg~g5&9%HX5 zb#jzmmN92av`35fRZ$LeTq*cFcINc|O=V>h`Uq|#ge1bA5 ziJ4@Dk47YzCdb?twR^mfiHP408rPsM{B@5)&;$CUS|>;VZ+?I5(PtLDljW^f;cj^a z%&J$LnXBz3ZPq%I4>-kcMqeoDjp zV^?g+C}j*L#C@5;RRMQMmw8EMJ_Ybius%yY@Xv0jl_cP0Hh4?-nSV zRtk2%+_W48A7d8jE5gY6hssH{3K!_t&=l#O3Xn{Fh&OWsFJ*sz$EDsP4uEFT)*949 zgS{#!?AJ-*C*;2SRli{n_$HoAO$*czHHz|< zPqkO`Cy;i_sXI_{jI!!xTi;jCXbny-dyPDxc78+Y;z{IKf_F3XEaW%TmQjW=RV23 zREVkm{Y?Cu#m^x*ai9nh>x47n$cyQEjd&;z1E|9hF7UZcg2$lnB?UsuZ~umujiHC( zD+AK{Y1@~+W({HLMJfjI(D0)TsA2@b8%_r7>|0R$64&y9Q&q4u7fEAi?)TmeWs{RU zUTI-iP;5=Zq>P&ce%4B;N%nyf;6E(Z3H77H5QY|D%~BnH zal?(&a4=!<`zxxgU_OeuNGMV?4}?ubYLt%KH=@6hE`u4823E|;dcQSP*4tOMS6Viy zh6MZev;gf-bOmK_NcVw5^c8nVrgA>M{=^3MZv`654QMD5ZjjsTZ2W5Y2FKR?6{#9}{p$9n55zd4 zhK8ZhAX176nRa--(W`GjOC~soyN}*d8_pZbAAEW9Gz3%1Y?xQAaGdzwbtCphZz7ou ze>Tc~Ul!6nZElcyHQ>agEm3QJ-=1oskJKQs!Okm%;{_E|EVYRzQCWfJJx*4jM6~GK zV_WU}M4u~jz5!olslBd{Sb~E77w?FTja-5w*ZnxUlKBk7T1opA{REVK{}c^T8CgF3U1AE3t{It>W7ZEc%S z{4gaCplh{k2zYrbM`eDV@ejoi+mM1R#O_PX^_#Ar!j0h$G+U|82^}UE^w20{Us9VO zF@!KOs*Bb;(Rtt=^Y?VO6kN^I7#Xbz zaTn&VnXRPuB}{-ZNH%e{iGhW|a$*&P zS|W!+%~^pwTL6<9WiX*~b{?zx%3p@F|HEx*EB`1gtr=}}NEVjXh_n9}oh*>k7H4yK z#bBM@Dcd@9^#60k$e{)`rv@vMK`8J3`6LBGDUQZ+c%(mzQG$G%enFJq;p`CF1TaQ4 zBe9T!x#oQm!7S;be54ZXD6{uvyE9D`s{Gs^$pn&X!olHT`~gBkZ=pKqr`d#x@DJz} zAW2tLsG=9-9z1d1hzfnEcNm(S_)|7&LcC~`UKZyF9{WG=` z5bV&3Lz$LYTS9F|p&EsnVWJ}gU+_qb0=*9augn2h6Q_~CN>8JD4``QXDuZST6%6ET zhh>V%uX8bSLwN#AJSh!*SW<7PrAgrY(;{*ovBdmyP-*)4HeMo#591o<|K7)A}hgCs1=yVqrcNTjREly)VOP8W+i=IQ$AA&45tJ=@&#}xs76;x+g(2O(+I314i>Zkn)(> z3a4W3`!Y&OR(^>4Lijvsrn^4`F_zLTxsPjlA5b&SVktYF@p z$f@T%V(*&MR+vq=H;T$>QVSJiocQA2xarAzL)%5EJzE0(pSSAr37*Tf6WyCU+75M>B zh6{ZDd(mxf+N1Df&np8uVDK8OyMkIF@}r(aVFxv(FJUYVUBRktzQuyzUJ(Ncnd-ay z?a%$`Mm85UBg?_is|^nlK_2Xo&4%QTlTSk*T@FYFSGkI6Qh%lVCeGHW++M)Q%DMXl zOb|SQET~!cARmlE#@JjNCdWW(r9u7k-p;SpkI+4E)~JM^5C|dYcqqCvKw9wXWXV*X zH{q(PxH>{4ede=RfCSkuYyjO|5`ZMML(r6dHnU+=GCNt~`yU7YV`{>*l>T#C&Zyfl zFs+?oTHtNBhC&!Yqz_yFa2%U+;7m;1N(%(k0W!?!0cc9(Hd1~!;CpZ?=h5mXnLY8A 
zK-b1>D~qrE8cu@>hc@u~&O`363IJEoJ2-u0?P%up=pzhQASrG9!s{|D!O7TYYqzXw3r31#nxe1L?oL%hFSPIhOr@pTLs$ku8^4+2+;*L(2l1fVHB z0phR^-;lF;Bf-&xTTsNS4R+c$U_VTRKAgp;TY%lU*K-^!D5k+oek0D86f-^(!&mR} zS$_jMQsV_XK!=A!e1i}?55&HH1I3_LSd|^!Jg~rF8m-SfaTNglVI~#}WUZ~hK){70 zEL;aFRGFzqlp$0E6KPjBk$CQ&3Z|tl!12&DW5B@Q4%tyL2iLV`S`JW96YesGR$Cn=CW$p`Y)vpd$zPv-Bs9$o#UGY3Bs_WZbxJDp2TEB zi1<5X0VUjqtPQdi1R!kwT;j^0#pkDPVn{)t>*vQh5QFB03aAwDx7z{5U`7jQlx(Gr z3g>xm4~=a_4+F?3BbK&tnYHZKW^FKN5bOfL{=g$gtW*FzfPyZD7GKdcv}bVH2nqwS zfxjL(42hm-kyA6H*MpzmyFCr5>HOTXoWN~M_-Bz}zs0zpa(1IY_h!(Rqudb9Zm=?e zsaa!IVmFx+Ktw4_iu8rod&q$ZflC`l_Y+7^J}xq5?+ZQ@Hhk_KgErEW6k<@C(lO`G z6rzm_>H4*niA(qi;vHI-TByTVcio2hCMo#*Zr$G6czH!_trXWvHFqaa_0GF5hzh`+ z*JkTJNum5BF3&7W$Rjbi&)J*bbs|I>oK@=jmr;i z#xIehDN23jH+Mif9ihIRr64$t#qia>R=6V`K&jPC_(GqR+xIiyo=s(c$T~Z}ueqQb zs>A@_{{iXGl>x0C%ThU+BPnPpaPCBP#>=l5I6-2k=i(XoI@OcOq6SBg1>XcGod-}R zAJ;*C%lq%S-IcrEGk@5XI!|oAR6kvORGqCBL^%y8bWETF=lzvUu-S|Y^p5;skn%Ig zlF7ra{PbFgW>5JyDmxZB{>aaE*l}FQv5L^#~gil^fh&iD|tn z|J1bTlqw{Q_+>pNx+AQ>19@p6tU#Lybxjq!aEyaKs;?SR*^E$sV%|k3tTb=8#26_$ z&jz4g@l;4F&Z;wqP}Hf=iZTFiA35dv4Y-V#_>0XS-Ptb>N4mc`2+-)KvI6b5+L}8- z5htod(pGpdA02(Lib}Y!4j6-C7pAa$;Ls+s~q< z#zT>G&<^nZJNAKyM2D~?&Ogh*+mix?cRJuYd;>R5FaUv-l&eP7$&w%2$7*OnW&hzj z=u5h1eck1NL$Fcw<>>T2sW;S63ynvW?z|vlJ!yVOXeuXl>l|X4kLnYT6Sz!oyk8(@ z0oCnfhl&h^hx-UAFy;BGS_`l}96FhsjUBKjcH&J5WR3glq&a^Tjc~-RFOcA0-9ucg z8O@Ny4A}LH)ComA1ze8rTOcT0LyZx%IDpHzZPjR|c6~d1iVhq=A-wJ_^?Fb;gfI&^_q~$ zHs3oGs7Y<>g@436Tgj;zt?@Xj!q&US54dmyE$kF2y6C;H(7JSK=Bazp*l7l- znu;Oy;51B~dUqiNtV|WvyKf-!rRa4|gwlwTquHFPWGcsO>zw?q0T9(n7yV*-wI|^$ zntw4HmDUWIJ1cS;1%5hC6kVgqhz~*1r7wc*wh?>wkcWo`p9cbpc&;0Am(p;7TROZo z_-iDftF#=KZe6ypw(IA(1g1JM{$N$e&G_nOG8w?QJTD1tkpG&@#W=;T{2U3_1fd5F zi8E)fCA0=R%)~GW6_AWR3JRMrqLMmMaV09Q)d+&32g6ktycLO|O(wLBA!6IKR{ymS zoD_YIY?IzcU5;}{;bjhL?8ET2O5gGTUk~oK$CJu2&kfXbfNreVMto?!_sM0evlld% ztKvF0`aPAUl5BjwF}-^>DP$OJbYggB?Lx}e(g18+r?;!!4Lj)~r7)qYx2|$a zg|qSk-RCq>>GjY(Hv^$OFgjY&mywtfk{=_+Bg;kSqyN4@pLI!l-Y+BJGZkEK-p(T$ z(xr+cZNZWud>u_zNd{aRATBoQeL`D zwLv$zy);K`eC9_~m!1=g`7K(B)DxY=PLi-A1DMmsAm9oql|sR2$7!n92OdY=Wyu-{ zv1T6#R>2A>=w_L!`E1KZow&SbDPm%Sj=q>4(ufDgdX{Aj_%@4S#9}Wr9dMof$@ljZe44mbb)Li(s72{HI@zx+ehUfT;0m+??pUktn0YVy5 z6HPFMGnL;%IpHJG+XGQVGM36xi}VrFSck#JXYSkKi__J=5S1`ujq>_&4@5}iVskXp zOjML4c^z}&3zXWv`ib!vxke8TTH+TH7bQoJKcKB#X|1U<2U% zg91GGX9@?kp}fpaqvs&hbAd17E{C{8C)CViu-fzS^Y&p&-tif6Wm7Us)N6x@BVXql z*Dgre_kn&NL8A%~hvr7({H^kXS%IVS%X6~Gd6h{8XXFR#krGYnKa4(75KMeTseQHa z?$Z!=2k((9pQ^qEZ(edr9NRn*DmcpxujC{(K^ztNt3^$3csm>BkGL$S?CMDknAR1R zN8Qz_i(-FTQWU{doi`vdLLxhH+}jj_ZJ{LbwLiYTc*;xe@K5HDj~1Lj1}zT1%Ojgq z!<84hbZ}V8bWV>jE^YbsSl6l6<+$;4mV1gAerD;yJ35bMx>v+5iea@f{9!=zv`BvK z8RRCKL-8N}x}&#XQcX@pM?V>LD&wiCpqVJWH~|uh45N3>Lh`bjvO3^zXbh7Le;RtB zmF1ja(4yF0x++K>5+@+>%8e}N2?r~r@baB%T`2T$GuN$ouns|Sg_P>P6MYdc6EvSkD4p-2QctV+1PcO${i z+Em#4r}5Yy$Spy-h=Dyzc=TNo_Uz~}Ryap{A9ryW3N;&cFGD5;c{O_mZ>~EJTJ6!A zeX4(l#XVwM`*hC+V!8AY7<|+ek?DQcdmkHsB3?yI) z85B;q^$AFN2U?nOK+Yi*T#=J`YT=GhmGVleY2)wlMa~8jk|REs?#zvdngx>Seo!0g zCTv^phjb->c5|gn0n#c-khFUSF>c6TnoM@(hym7@rk?I^PGtAQiQ+(($wM_5c2C?n z5Od*uY_6UV6{+_1v<8TN@IxdL^{LLsv*P38KME8kfE{(IO zS-gd4${{0;I$7ERH8JY(!_~Zh=Z_(Ve3m%|0)-4;u><~0%M&ycGm%t?hzkfAG|~} zgau;ZWC#?9Ge@ZESsq0G75N0c06pN|@BRJ^GN_6WIDA6l~!h)WpX z*XBS%H{eU^f3@@z#uz@Pe`maCUFGzbDWRn-4z?KRG zNbM$uiag-UJw|fdzk#0;f#Xkr9VwkbUGkr>1=vWsn_hLv$o8or`k4%WXzbN(5i3W? 
z%_9NDHOO0-T0r?%;r@NJ)CgODTejj0$M)O$;Z%H#kw!GN>d^C*#;Fry+WwM>=EsLV3#%uCyhLEMFi4a zCY4TrXcioD9jdcbREcI3-~?+l8M@|ZBK`~&WE%e!XYovuI#ZJeCm+~92cp(=--nGG zOg5#v7zVns_FXU?H*G!gYMEu3k7#Ac;J)mrLaRj-Z zSu%nUjeu=^9fVq5eY|H-lyG!oFZKwi7nPCx1SJrfTh8fag}AJzkhSXpyz{AP&W*im zmKF`753bhAv%jGki^uy{I{%{W><$OF?y7_XLPB7~;er!ENpbav zb6))j5DZ5sReH_}7y$K)z~rB;&`2*6j6l#J-CbzI1_y}%&dH@R`u|ve%q$t;#T48g+I7&v#h#$h zn_kTD0x38W0uS{g`T<0_-@kq&{O`lKY57ULJ&C%EY_%QDbo`1oyYZPkU7V{Xb=U(x zDA7V{Y7mZT9stDW$-kwZJE9;DXo~zzU=Ho#_ee2E!m=d_NIXAyOJ(S{oD;jKxAM*&jo%L4&~obn>+71e6{M zCaMU(y68@8nCSTf3Clgi!kw&n8;*ibU#aCsh>92H^dz8R@wEHt>=YY|I#i21*^?I#xrodXgGeT%_&AlfU?Nw80ozb>B`0#8+B~(vv#=zNh@P#J(U$-ku9XFvF@9-wR4Jjs-^_)z3pO078VB4=4K-+%NESQRC zP;8Wqaz$5&%1bD#!iD~9PMU~S!JB1H!~HKWmu4Mt-Bku45q=CR({CUljqaKSE^1{F3pQh(xko;DU+(>V}@XO3dm>X*TwBDQ3@KuE2Z>Jdl}$ax{n z^;#QFpN|kdm;8g2-M*hVJ^m)4KLWdd%n*dN`~`~z2@<$AWKtCtIHyM^79D2sH^0+7 z@TB}ZLFlw7(?0Vu>*BIh?W*vK1lTqgVL|#U{?JkE^Ayw&YN&gS=p2V~$6!p@{T#5I zk{+6sXi0o*2n_T;O4*S36#b^bcg)yf49yDh2(?q)$`uq`-lebqoS9H+G-DjUrNj}q zpXy&j#UGuBz6RGDQgv%tVHb?#Mf&j;UvP8WgH#IPX`*KZmAC#RBjA&M6C|2YbD%Hg zuVD7kV=M_|lm)p(4UpLR4GhpqD%YXi#>`hENaG;4A@8%sD;iMVhICR(f+l&dB)$DX z)&)^@kpEZ@XAjvoD0=nJcLYp^)b8x43kQ3n(zLv9r-MR+zZ2A@E>BO<;F>3wLX+oW^k604#M zw^FklL|rY9drs<|-La1&2j1qoJ$7dc@ZwVJqJ2 z`zs;a#I&AR2ARdjS-8Whtr>um*ssB#J%1Rxbm*(g`=wv7COzq=2eyICb*+*350nIo zM!16*@blOliHk{BVkKM$PALji+Bec4CJo~=%s8&u?=S-F{w7F0vB&wpiR}ax?Px5m z6fGYmUE?mT%Iv8xisbn4QQhmkD0>pyDh>~oxRopvYLv`g)ZI`+)s^?0@xpm%k9)BP zKub)r3gnIZ6x5)5aZI2iq3QIO`djRr6o)tn33km!&cHWro^TKtWoRg?PIa@*flYN* z<6|v5jl-P6$n~(t#7wSB3+IK@EbZ_h>K|{SmpGH1y=PXr?g zcSISe%wdS~*8yht+&-j+EWrH^CgvF0{c{H(9+xO)g-Ty#C1Qrthbnh3fA>Sdi43yt z%W`#$I13fid2`skAM{g&H}}U&x_o_J6Yy*W`Bn0(MLEc1y5)+d6me3h2}381qee_~ zYSR>rwzx3B{P@-UaCxgTp|bi80Ex6$FWwfml%Uap3e}O(?883WusO~aDT|q75-Nu+ zhvrb|2}LL&q8!s4l611QGKXY%Iv|fpkrX;2Y0WS?h@ze+X|z<24i4#&PVaT2df&HC zpFaKJGxvS({{4Q}^*vqn;02i8ms{+fm_YTyzJOu!j;Ek=Oztvfet;)g>VU zHs@*1LrTKX-ITG_WYA&BuX(;#hcQG}c|UzQ;=D zWU6x+m9QAkMJqM85Q~XaJDJVh>#WBPkeQF0_I>I9p9XEOa`3Q(Tln{1KrL zH%Jf?u=8D8GbGg`<_@PS%o-jCpRZ;;&@Ad5gJu4uA^-O`D5OgjlBqWpqcu6CCl5ku z)1-Qc=R*0wUBoXiWUDlo{;I|R^>)u*MJ$kTr)3%p;_~}t2 zrH9I0A+<6LbPh`)oD>l^AHECO5dkoi?u?7T#U5vRw&_KFBZQl8N+{Fe(R!t{j{4{w zZu>Fn578VGq7?RX{)=%aU5tasiSq_7i#!k43;?v^>sjwqfg@be?VRhF53j{}UOnEf z=Iru>p{yoln{{#?gL`1s!&V_jwZQY9JQp31V=ZGSP3fFxH9GrUeKmwf=u(#wKe?js zeN%2R>Altp|LJx^FL*H%?23^~cVZWSticx&9cm5cx2+=PMmXlDx|(mht8wp7S*{H2 z>If-qFzw$8&kl@kvkz zo^5{?SD+XKnzb)z!#&5h43`wx{8|S&F)i@h@P7q3*b?W@{5*VPzoYDhdMKd0CWLMq&ol&d)>S48hT|AZX)~*A*%9aDmW73V|5h!8X-4i_A!*%2u z{vCAb;kI#^eo{mgyRiXz z-J;3e@`R_&HZnQ_-waa3Ef7~3Kj7ybnyrhhnK=X+l6P|BD>b3d-H<^ZnqzvZAF3lC zOk0Kne>arAX8_wuxwcQjaz5XZId|^cRpQB~h?}G_YlF|*$`Y-{Vyo@`&o2v@eo)MA z+~u3Y$^6x;kW)}kMsAP3gSd-J7-FLdxsw0lwl~>}T-}N|Sw^j~7^(y2Q_K2B+`Qu9 zQGX~LZp|ax|&EYFp%-vk@0SbLd*gFTh*W^g51mNFY@0&Wo_shZC?NI|?v@ zmlU}eqky`aPnsbPvla3ZB6cAc9f#5CNOAlJWIS8K&~dUl3t@CzjK9uT%x-IZLx$N`ZnMt= zWm~tJApRSp+Z+jkV$U759pk)TJ=<_iuO6w*hgC~i!jry#g3e1KNG<7t+HvE}l&iA>-B;sA$yg;< zQ~6ri|3bH?T~%(R=Il95XQKW+!-RzpL#{QlV$}GK>1iFgpBD-XK4rCh&;ghRlt?rJ zM)(A%jLC+qTuW{@m5%VIYU0v%eH&E}C zE$*ut?SYv|&E#5i88;U7~&-sKAJm`CR&Fy1R4#@=j9VpmsmExkrk;o#X8yJ5~BX2>wHfdY%P#fbt?3Sw$dIWg5xv3;9v)IR80A4Rlw_S(s9=1$V6(-0=K67}OaD2UCDkLsbDM8{cQS0J_H6rSMh8qsB@yXHvL4U*t(<52 z5Fn47$i9TYjWCTA1pH6mn^G@*YuGs)6-!`u&7J|thWJZxRKFMa5LEUS?F2L|!sY(K zCmhmiEMa%Zb=hJqs)D@%7ya*w#uYhN~JU9xD*C;XMV)o>0%|-8%J%) zYd*KPFJ^9UyvTtiDWBa$x`^1_aIbCc6zm@QrL$o2w`IJqCmP6s?!}D%kP>zK9HlsG zfNMu%EXekBiamnC5h~JFL7SAoehXF=3m(88H!bRW!3Lw79h%_yT=!{eS%`?oI-MBZ zQ~AYD$m0{#m8J5Z59!);pVD 
[GIT binary patch: base85-encoded literal data elided; the blob carries binary file contents that cannot be rendered as text]
literal 0
HcmV?d00001

diff --git a/modules/data.remote/inst/requirements.txt b/modules/data.remote/inst/requirements.txt
deleted file mode 100644
index 0d793eb7fa2..00000000000
--- a/modules/data.remote/inst/requirements.txt
+++ /dev/null
@@ -1,51 +0,0 @@
-attrs>=19.3.0
-cachetools>=4.1.1
-certifi>=2020.6.20
-cffi>=1.14.1
-chardet>=3.0.4
-click>=7.1.2
-click-plugins>=1.1.1
-cligj>=0.5.0
-cryptography>=1.7.1
-dask>=2.6.0
-earthengine-api>=0.1.229
-Fiona>=1.8.13.post1
-future>=0.18.2
-geopandas>=0.8.1
-google-api-core>=1.22.0
-google-api-python-client>=1.10.0
-google-auth>=1.20.0
-google-auth-httplib2>=0.0.4
-google-cloud-core>=1.3.0
-google-cloud-storage>=1.30.0
-google-crc32c>=0.1.0
-google-resumable-media>=0.7.0
-googleapis-common-protos>=1.52.0
-httplib2>=0.18.1
-httplib2shim>=0.0.3
-idna>=2.10
-keyring>=10.1
-keyrings.alt>=1.3
-munch>=2.5.0
-numpy>=1.18.5
-pandas>=0.25.3
-protobuf>=3.12.4
-pyasn1>=0.4.8
-pyasn1-modules>=0.2.8
-pycparser>=2.20
-pycrypto>=2.6.1
-pygobject>=3.22.0
-pyproj>=2.6.1.post1
-python-dateutil>=2.8.1
-pytz>=2020.1
-pyxdg>=0.25
-requests>=2.24.0
-rsa>=4.6
-scipy>=1.4.1
-SecretStorage>=2.3.1
-Shapely>=1.7.0
-six>=1.10.0
-toolz>=0.10.0
-uritemplate>=3.0.1
-urllib3>=1.25.10
-xarray>=0.13.0
\ No newline at end of file

From ee8d62a6edacc4625506c5a31a69c5956b1f1467 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Wed, 26 Aug 2020 16:45:30 +0530
Subject: [PATCH 1459/2289] elaborate registration documentation

---
 book_source/03_topical_pages/09_standalone_tools.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd
index 9e743e4b72e..658abf8b38d 100644
--- a/book_source/03_topical_pages/09_standalone_tools.Rmd
+++ b/book_source/03_topical_pages/09_standalone_tools.Rmd
@@ -228,7 +228,7 @@ These tags are only required if processed data is requested:
 * `algorithm`: (optional) algorithm used for processing data,
currently only SNAP is implemented to estimate LAI from Sentinel-2 bands * `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS -Additional information are taken from the registration files located at pecan/modules/data.remote/inst/registration, each source has its own registration file: +Additional information are taken from the registration files located at pecan/modules/data.remote/inst/registration, each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collections may require its own way of performing quality checks, etc wheras all of the products available on AppEEARS can be retrieved using its API. GEE registration file (register.GEE.xml) : From 6785037d056d9f68cf5047473d5fc0d84820e3ec Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 26 Aug 2020 17:05:26 +0530 Subject: [PATCH 1460/2289] Update book_source/03_topical_pages/09_standalone_tools.Rmd Co-authored-by: istfer --- book_source/03_topical_pages/09_standalone_tools.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 658abf8b38d..03da8597bb8 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -228,7 +228,7 @@ These tags are only required if processed data is requested: * `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands * `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS -Additional information are taken from the registration files located at pecan/modules/data.remote/inst/registration, each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collections may require its own way of performing quality checks, etc wheras all of the products available on AppEEARS can be retrieved using its API. +Additional information are taken from the registration files located at pecan/modules/data.remote/inst/registration, each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collections may require its own way of performing quality checks, etc whereas all of the products available on AppEEARS can be retrieved using its API in a standardized way. 
GEE registration file (register.GEE.xml) : From 73bc26cdd78051eb206b14e9aba190a2433e90e0 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 26 Aug 2020 17:12:48 +0530 Subject: [PATCH 1461/2289] add link to registration files --- book_source/03_topical_pages/09_standalone_tools.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 03da8597bb8..4b9e65b5ccc 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -228,7 +228,7 @@ These tags are only required if processed data is requested: * `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands * `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS -Additional information are taken from the registration files located at pecan/modules/data.remote/inst/registration, each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collections may require its own way of performing quality checks, etc whereas all of the products available on AppEEARS can be retrieved using its API in a standardized way. +Additional information are taken from the registration files located at [pecan/modules/data.remote/inst/registration](https://github.com/PecanProject/pecan/tree/develop/modules/data.remote/inst/registration), each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collections may require its own way of performing quality checks, etc whereas all of the products available on AppEEARS can be retrieved using its API in a standardized way. GEE registration file (register.GEE.xml) : From 7f6f3c00f4dd1158b87e2248621af8f4a1554692 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 26 Aug 2020 17:15:37 +0530 Subject: [PATCH 1462/2289] typo --- book_source/03_topical_pages/09_standalone_tools.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book_source/03_topical_pages/09_standalone_tools.Rmd b/book_source/03_topical_pages/09_standalone_tools.Rmd index 4b9e65b5ccc..b9819fda433 100644 --- a/book_source/03_topical_pages/09_standalone_tools.Rmd +++ b/book_source/03_topical_pages/09_standalone_tools.Rmd @@ -228,7 +228,7 @@ These tags are only required if processed data is requested: * `algorithm`: (optional) algorithm used for processing data, currently only SNAP is implemented to estimate LAI from Sentinel-2 bands * `credfile`: (optional) absolute path to JSON file containing Earthdata username and password, only required for AppEEARS -Additional information are taken from the registration files located at [pecan/modules/data.remote/inst/registration](https://github.com/PecanProject/pecan/tree/develop/modules/data.remote/inst/registration), each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collections may require its own way of performing quality checks, etc whereas all of the products available on AppEEARS can be retrieved using its API in a standardized way. 
+Additional information are taken from the registration files located at [pecan/modules/data.remote/inst/registration](https://github.com/PecanProject/pecan/tree/develop/modules/data.remote/inst/registration), each source has its own registration file. This is so because there isn't a standardized way to retrieve all image collections from GEE and each image collection may require its own way of performing quality checks, etc whereas all of the products available on AppEEARS can be retrieved using its API in a standardized way. GEE registration file (register.GEE.xml) : From f3af8e2f3fe4d3b354a375d6ad3a45a08eb2f654 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 26 Aug 2020 17:46:19 +0530 Subject: [PATCH 1463/2289] update rp_control --- modules/data.remote/inst/RpTools/RpTools/rp_control.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py index 0ebef9874ef..e5a82d16ea2 100644 --- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py +++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py @@ -2,7 +2,7 @@ # -*- coding: utf-8 -*- """ -remote_process controls the individual functions to create an automatic workflow for downloading and performing computation on remote sensing data. +rp_control manages the individual functions to create an automatic workflow for downloading and performing computation on remote sensing data. Requires Python3 From 912f3c31bc40cf0ce11e11a4e4ed311859606a08 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 1 Sep 2020 10:52:45 +0530 Subject: [PATCH 1464/2289] minor appeears fix --- modules/data.remote/inst/RpTools/RpTools/get_remote_data.py | 2 +- modules/data.remote/inst/RpTools/RpTools/rp_control.py | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py index a2f0510a244..bab35d60d95 100644 --- a/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py +++ b/modules/data.remote/inst/RpTools/RpTools/get_remote_data.py @@ -84,7 +84,7 @@ def get_remote_data( get_datareturn_path = func(geofile=geofile, outdir=outdir, start=start, end=end, filename=raw_file_name) if source == "appeears": - get_datareturn_path = appeears2pecan(geofile, outdir, start, end, collection, projection, credfile, raw_file_name) + get_datareturn_path = appeears2pecan(geofile=geofile, outdir=outdir, out_filename=raw_file_name, start=start, end=end, product=collection, projection=projection, credfile=credfile) if raw_merge == True and raw_merge != "replace": # if output file is of csv type use csv_merge, example AppEEARS point AOI type diff --git a/modules/data.remote/inst/RpTools/RpTools/rp_control.py b/modules/data.remote/inst/RpTools/RpTools/rp_control.py index e5a82d16ea2..561898c7339 100644 --- a/modules/data.remote/inst/RpTools/RpTools/rp_control.py +++ b/modules/data.remote/inst/RpTools/RpTools/rp_control.py @@ -103,7 +103,9 @@ def rp_control( """ out_get_data = out_get_data.lower() - out_process_data = out_process_data.lower() + + if out_process_data: + out_process_data = out_process_data.lower() if stage_get_data: From 6623c0a1d7c2504574fa11ef40844bf67e4525d7 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 2 Sep 2020 13:32:04 -0400 Subject: [PATCH 1465/2289] addressing PR comments --- base/db/R/dbfiles.R | 116 +++++++++++++++++++++----------------------- 1 file changed, 54 insertions(+), 62 deletions(-) 
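The diff below restructures `dbfile.move()` around a new `register` argument that controls what happens to files found on disk but missing from BETY. As a rough orientation before reading the diff, here is a minimal usage sketch: the paths and site id are hypothetical placeholders adapted from the roxygen example in the patch, and the calls assume a working PEcAn.DB installation with a configured BETY connection rather than being tested invocations.

```r
library(PEcAn.DB)

# register = TRUE: move the CLIM files and register any that are
# missing from BETY as new inputs for the given site.
dbfile.move(
  old.dir   = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
  new.dir   = "/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
  file.type = "clim",
  siteid    = 676,
  register  = TRUE
)

# register = FALSE (the default): unmatched files are not added to
# BETY; symbolic links are created for them instead.
dbfile.move(
  old.dir   = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
  new.dir   = "/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676",
  file.type = "nc",
  siteid    = 676
)
```

Per the error message in the diff, only `"clim"` and `"nc"` are accepted for `file.type` in this version.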
diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 4043ad15dfb..726540fb43d 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -664,6 +664,7 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { } + ##' ##' This function will move dbfiles - clim or nc - from one location ##' to another on the same machine and update BETY @@ -672,8 +673,9 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { ##' @title Move files to new location ##' @param old.dir directory with files to be moved ##' @param new.dir directory where files should be moved -##' @param file.type what type of files are we moving either clim or nc -##' @param siteid needed to register .nc files that arent already in BETY +##' @param file.type what type of files are being moved +##' @param siteid needed to register files that arent already in BETY +##' @param register if file isn't already in BETY, should it be registered? ##' @return print statement of how many files were moved, registered, or have symbolic links ##' @export ##' @author kzarada @@ -683,15 +685,16 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { ##' old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676", ##' new.dir = '/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676' ##' file.type= clim, -##' siteid = 676 +##' siteid = 676, +##' register = TRUE ##' ) ##' } -dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ +dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = FALSE ){ - #create nulls for file movement and error info + #create nulls for file movement and error info error = 0 files.sym = 0 files.changed = 0 @@ -700,7 +703,7 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ #check for file type and update to make it *.file type if(file.type != "clim" | file.type != "nc"){ - PEcAn.logger::logger.error('File type not supported by move at this time. Please enter either clim or nc') + PEcAn.logger::logger.error('File type not supported by move at this time. Currently only supports NC and CLIM files') error = 1 } file.pattern = paste0("*.", file.type) @@ -796,9 +799,9 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ } #end dbfile loop - #if statement for when there are no matching files in BETY or if some clim files matched but others didn't - #for clim files not registered, we'll create a symbolic link to where we move the files - if (dim(dbfiles)[1] == 0 & file.pattern == "*.clim" | files.changed > 0 & file.pattern == "*.clim" ){ + + #if there are files that are in the folder but not in BETY, we can either register them or not + if (dim(dbfiles)[1] == 0 | files.changed > 0){ #Recheck what files are in the directory since others may have been moved above old.files <- list.files(path= old.dir, pattern = file.pattern) @@ -806,8 +809,6 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ #Recreate full file path full.old.file = file.path(old.dir, old.files) - #Record number of files that will have a symbolic link made - files.sym = length(full.old.file) #Error check again to make sure there aren't any matching dbfiles dbfile.path = dirname(full.old.file) @@ -816,13 +817,52 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ dplyr::filter(file_path %in% dbfile.path) if(dim(dbfiles)[1] > 0){ - PEcAn.logger::logger.error("There are still dbfiles matching these files! 
Canceling symbolic link creation") + PEcAn.logger::logger.error("There are still dbfiles matching these files! Canceling link or registration") error = 1 } + + if(error == 0 & register == TRUE){ + + #Record how many files are being registered to BETY + files.reg= length(full.old.file) + + for(i in 1:length(full.old.file)){ + + file_path = dirname(full.old.file[i]) + file_name = basename(full.old.file[i]) + + if(file.type == "nc"){mimetype = "application/x-netcdf" + formatname ="CF Meteorology application" } + else if(file.type = "clim"){mimetype = 'text/csv' + formatname = "Sipnet.climna"} + else{PEcAn.logger::logger.error("File Type is currently not supported")} + + + dbfile.input.insert(in.path = file_path, + in.prefix = file_name, + siteid = siteid, + startdate = NULL, + enddate = NULL, + mimetype = mimetype, + formatname = formatname, + parentid=NA, + con = con, + hostname=PEcAn.remote::fqdn(), + allow.conflicting.dates=FALSE, + ens=FALSE) + }#end i loop + } #end error loop + + } #end register == TRUE + + if(error == 0 & register == FALSE){ #Create file path for symbolic link full.new.file = file.path(new.dir, old.files) + #Record number of files that will have a symbolic link made + files.sym = length(full.new.file) + #Line up files full.new.file = sort(full.new.file) full.old.file <- sort(full.old.file) @@ -843,55 +883,7 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ }#end i loop } #end error loop - - } #end clim if statement - - - #If files are .nc files and aren't in BETY for some reason, we will register them - if (dim(dbfiles)[1] == 0 & file.pattern == "*.nc" | files.changed == 1 & file.pattern == "*.nc" ){ - - #Re make full file path and find files that were not moved - old.files <- list.files(path= old.dir, pattern = file.pattern) - - full.old.file = file.path(old.dir, old.files) - - #Record how many files are being registered to BETY - files.reg= length(full.old.file) - - #Check again to make sure there aren't any matching dbfiles - dbfile.path = dirname(full.old.file) - dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% - dplyr::filter(file_name %in% basename(full.old.file)) %>% - dplyr::filter(file_path %in% dbfile.path) - - if(dim(dbfiles)[1] > 0){ - PEcAn.logger::logger.error("There are still dbfiles matching these files! 
Canceling symbolic link creation") - error = 1 - } - - if(error == 0){ - - for(i in 1:length(full.old.file)){ - - file_path = dirname(full.old.file[i]) - file_name = basename(full.old.file[i]) - - - dbfile.input.insert(in.path = file_path, - in.prefix = file_name, - siteid = siteid, - startdate = NULL, - enddate = NULL, - mimetype = "application/x-netcdf", - formatname = "CF Meteorology application", - parentid=NA, - con = con, - hostname=PEcAn.remote::fqdn(), - allow.conflicting.dates=FALSE, - ens=FALSE) - }#end i loop - } #end error loop - } #end nc file registration + } #end Register == FALSE if(error > 0){ @@ -905,4 +897,4 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL ){ } -} #end dbfile.move() \ No newline at end of file +} #end dbfile.move() From 714a6f91f2086e24b0e0e5f041ce5caa9df35dfb Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 2 Sep 2020 13:51:32 -0400 Subject: [PATCH 1466/2289] updating Rd files --- base/db/R/dbfiles.R | 2 +- base/db/man/dbfile.move.Rd | 11 +++++++---- 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 726540fb43d..a287a23702c 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -834,7 +834,7 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = F if(file.type == "nc"){mimetype = "application/x-netcdf" formatname ="CF Meteorology application" } - else if(file.type = "clim"){mimetype = 'text/csv' + else if(file.type == "clim"){mimetype = 'text/csv' formatname = "Sipnet.climna"} else{PEcAn.logger::logger.error("File Type is currently not supported")} diff --git a/base/db/man/dbfile.move.Rd b/base/db/man/dbfile.move.Rd index e33aaf1eb2b..4bbe67216cd 100644 --- a/base/db/man/dbfile.move.Rd +++ b/base/db/man/dbfile.move.Rd @@ -4,16 +4,18 @@ \alias{dbfile.move} \title{Move files to new location} \usage{ -dbfile.move(old.dir, new.dir, file.type, siteid = NULL) +dbfile.move(old.dir, new.dir, file.type, siteid = NULL, register = FALSE) } \arguments{ \item{old.dir}{directory with files to be moved} \item{new.dir}{directory where files should be moved} -\item{file.type}{what type of files are we moving either clim or nc} +\item{file.type}{what type of files are being moved} -\item{siteid}{needed to register .nc files that arent already in BETY} +\item{siteid}{needed to register files that arent already in BETY} + +\item{register}{if file isn't already in BETY, should it be registered?} } \value{ print statement of how many files were moved, registered, or have symbolic links @@ -28,7 +30,8 @@ to another on the same machine and update BETY old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676", new.dir = '/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676' file.type= clim, - siteid = 676 + siteid = 676, + register = TRUE ) } } From 54e42a0539a301ccd55aa3f6910c5b1c91dc803f Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 3 Sep 2020 20:04:57 +0530 Subject: [PATCH 1467/2289] make gee2pecan_s2 work with Point type --- .../inst/RpTools/RpTools/gee2pecan_s2.py | 34 +++++++++++++------ .../inst/registration/register.GEE.xml | 5 +-- 2 files changed, 26 insertions(+), 13 deletions(-) diff --git a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py index 96c90c660b8..aa1c7851f98 100644 --- a/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py +++ b/modules/data.remote/inst/RpTools/RpTools/gee2pecan_s2.py @@ -121,7 +121,7 @@ class AOI: Name of the area. 
geometry : str Geometry of the area of interest e.g. from geopandas. - Currently only polygons tested. The default is None. + Currently only polygons tested. Ayush: modified to work with Point type. The default is None. coordinate_list : list, optional List of coordinates of a polygon (loop should be closed). Computed from geometry if not @@ -145,7 +145,7 @@ class AOI: __init__ """ - def __init__(self, name, geometry=None, coordinate_list=None, tile=None): + def __init__(self, name, geometry=None, geometry_type=None, coordinate_list=None, tile=None): """. Parameters @@ -154,7 +154,8 @@ def __init__(self, name, geometry=None, coordinate_list=None, tile=None): Name of the area. geometry : geometry in wkt, optional Geometry of the area of interest e.g. from geopandas. - Currently only polygons tested. The default is None. + Currently only polygons tested. Ayush: modified to work with Point type The default is None. + geometry_type: type of geometry, Polygon or Point. The default is None. coordinate_list : list, optional List of coordinates of a polygon (loop should be closed). Computed from geometry if not @@ -172,14 +173,20 @@ def __init__(self, name, geometry=None, coordinate_list=None, tile=None): if not geometry and not coordinate_list: sys.exit("AOI has to get either geometry or coordinates as list!") elif geometry and not coordinate_list: - coordinate_list = list(geometry.exterior.coords) - for i in range(len(coordinate_list)): - coordinate_list[i] = coordinate_list[i][0:2] + if geometry.type == "Polygon": + coordinate_list = list(geometry.exterior.coords) + for i in range(len(coordinate_list)): + coordinate_list[i] = coordinate_list[i][0:2] + else: + lon = float(geometry.x) + lat = float(geometry.y) + coordinate_list = [(lon, lat)] elif coordinate_list and not geometry: geometry = None self.name = name self.geometry = geometry + self.geometry_type = geometry.type self.coordinate_list = coordinate_list self.qi = None self.data = None @@ -211,8 +218,7 @@ def ee_get_s2_quality_info(AOIs, req_params): AOIs = list([AOIs]) features = [ - ee.Feature(ee.Geometry.Polygon(a.coordinate_list), {"name": a.name}) - for a in AOIs + ee.Feature(ee.Geometry.Polygon(a.coordinate_list), {"name": a.name}) if a.geometry_type == "Polygon" else ee.Feature(ee.Geometry.Point(a.coordinate_list[0][0], a.coordinate_list[0][1]), {"name": a.name}) for a in AOIs ] feature_collection = ee.FeatureCollection(features) @@ -357,10 +363,16 @@ def ee_get_s2_data(AOIs, req_params, qi_threshold=0, qi_filter=s2_filter1): full_assetids = "COPERNICUS/S2_SR/" + filtered_qi["assetid"] image_list = [ee.Image(asset_id) for asset_id in full_assetids] crs = filtered_qi["projection"].values[0]["crs"] - feature = ee.Feature( - ee.Geometry.Polygon(a.coordinate_list), + if a.geometry_type == "Polygon": + feature = ee.Feature( + ee.Geometry.Polygon(a.coordinate_list), + {"name": a.name, "image_list": image_list}, + ) + else: + feature = ee.Feature( + ee.Geometry.Point(a.coordinate_list[0][0], a.coordinate_list[0][1]), {"name": a.name, "image_list": image_list}, - ) + ) features.append(feature) diff --git a/modules/data.remote/inst/registration/register.GEE.xml b/modules/data.remote/inst/registration/register.GEE.xml index da32d156f18..9ee865ee10e 100644 --- a/modules/data.remote/inst/registration/register.GEE.xml +++ b/modules/data.remote/inst/registration/register.GEE.xml @@ -5,16 +5,17 @@ s2 polygon + point 10 1 - 1000000129 + 99000000002 Remote_generic application/x-netcdf - 1000000129 + 99000000002 Remote_generic application/x-netcdf 
From 71fd65f9055713f8245878a1676903b0409251e0 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 3 Sep 2020 20:09:06 +0530 Subject: [PATCH 1468/2289] dont change remote format id --- modules/data.remote/inst/registration/register.GEE.xml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.remote/inst/registration/register.GEE.xml b/modules/data.remote/inst/registration/register.GEE.xml index 9ee865ee10e..680d1b0100e 100644 --- a/modules/data.remote/inst/registration/register.GEE.xml +++ b/modules/data.remote/inst/registration/register.GEE.xml @@ -10,12 +10,12 @@ 10 1 - 99000000002 + 1000000129 Remote_generic application/x-netcdf - 99000000002 + 1000000129 Remote_generic application/x-netcdf From 556fa435541e1a6cf50d2e4161a9d148b774a8d7 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 4 Sep 2020 09:37:42 +0530 Subject: [PATCH 1469/2289] set stages to FALSE by default --- modules/data.remote/R/remote_process.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.remote/R/remote_process.R b/modules/data.remote/R/remote_process.R index f3f52beb430..4e745e55b22 100644 --- a/modules/data.remote/R/remote_process.R +++ b/modules/data.remote/R/remote_process.R @@ -187,8 +187,7 @@ remote_process <- function(settings) { raw_check <- dbstatus$raw_check pro_check <- dbstatus$pro_check - - + if(stage_get_data == FALSE && stage_process_data == FALSE){ # requested data already exists, no need to call rp_control settings$remotedata$raw_id <- raw_check$id @@ -516,8 +515,8 @@ remotedata_db_check <- req_start <- start req_end <- end input_file <- NULL - stage_get_data <- NULL - stage_process_data <- NULL + stage_get_data <- FALSE + stage_process_data <- FALSE raw_merge <- NULL pro_merge <- NULL existing_raw_file_path <- NULL @@ -799,6 +798,7 @@ remotedata_db_check <- } } + return( list( remotefile_check_flag = remotefile_check_flag, From 22a2208d7f56a00d91ee0233d9dd171e45e27943 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Mon, 7 Sep 2020 18:54:24 -0700 Subject: [PATCH 1470/2289] remove pecan logger from photosynthesis package --- modules/photosynthesis/DESCRIPTION | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/photosynthesis/DESCRIPTION b/modules/photosynthesis/DESCRIPTION index 173f5c63365..8f8d43fce45 100644 --- a/modules/photosynthesis/DESCRIPTION +++ b/modules/photosynthesis/DESCRIPTION @@ -18,7 +18,6 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific Depends: rjags Imports: - PEcAn.logger, coda (>= 0.18) SystemRequirements: JAGS2.2.0 License: BSD_3_clause + file LICENSE From 8d4386b60a20aa6c4a07c1baaf876cfb80ac0214 Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 8 Sep 2020 09:48:06 -0400 Subject: [PATCH 1471/2289] updating small error in call_MODIS --- modules/data.remote/R/call_MODIS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.remote/R/call_MODIS.R b/modules/data.remote/R/call_MODIS.R index 2cb138472a1..c5c9db94bb4 100755 --- a/modules/data.remote/R/call_MODIS.R +++ b/modules/data.remote/R/call_MODIS.R @@ -147,7 +147,7 @@ call_MODIS <- function(var, product, { PEcAn.logger::logger.severe( "Start and end date (", start_date, ", ", end_date, - ") are not within MODIS data product date range (", modis_dates[1], ", ", modis_dates(length(modis_dates)), + ") are not within MODIS data product date range (", modis_dates[1], ", ", modis_dates[length(modis_dates)], "). 
Please choose another date.")
   }
 

From 6000668af7fcd02d2e22b7ef6e3ba4ab2aef0204 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Thu, 24 Sep 2020 15:25:44 -0400
Subject: [PATCH 1472/2289] changing litter to litter_carbon_content

---
 models/sipnet/R/read_restart.SIPNET.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R
index 3650f737f29..a46b80a137f 100755
--- a/models/sipnet/R/read_restart.SIPNET.R
+++ b/models/sipnet/R/read_restart.SIPNET.R
@@ -85,7 +85,7 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p
 
   if ("litter_carbon_content" %in% var.names) {
     forecast[[length(forecast) + 1]] <- ens$litter_carbon_content[last] ##kgC/m2
-    names(forecast[[length(forecast)]]) <- c("Litter")
+    names(forecast[[length(forecast)]]) <- c("litter_carbon_content")
   }
 

From 2ff19f39fd47a38deca1359a2ad469e4b771e3a2 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Thu, 24 Sep 2020 16:32:05 -0400
Subject: [PATCH 1473/2289] adding sf:: to IC file

---
 modules/data.land/R/IC_BADM_Utilities.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/data.land/R/IC_BADM_Utilities.R b/modules/data.land/R/IC_BADM_Utilities.R
index d4e51feacbf..a5a64627f9c 100644
--- a/modules/data.land/R/IC_BADM_Utilities.R
+++ b/modules/data.land/R/IC_BADM_Utilities.R
@@ -298,10 +298,10 @@ EPA_ecoregion_finder <- function(Lat, Lon){
     ) %>%
     sf::st_transform("+proj=longlat +datum=WGS84")
 
-  sp::proj4string(U.S.SB.sp) <- sp::proj4string(as_Spatial(L1))
+  sp::proj4string(U.S.SB.sp) <- sp::proj4string(sf::as_Spatial(L1))
   # finding the code for each site
-  over.out.L1 <- sp::over(U.S.SB.sp, as_Spatial(L1))
-  over.out.L2 <- sp::over(U.S.SB.sp, as_Spatial(L2))
+  over.out.L1 <- sp::over(U.S.SB.sp, sf::as_Spatial(L1))
+  over.out.L2 <- sp::over(U.S.SB.sp, sf::as_Spatial(L2))
 
   return(data.frame(L1 = over.out.L1$NA_L1CODE, L2 = over.out.L2$NA_L2CODE))
 }

From b112880e65da7205cab8827a53ef26c71ad5b465 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Tue, 6 Oct 2020 13:36:54 -0700
Subject: [PATCH 1474/2289] update link to documentation in README

fixes #2697

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9e1029a71c7..dfea6998435 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@ PEcAn is not itself an ecosystem model, and it can be used to with a variety of
 
 ## Documentation
 
-Consult our [Documentation](https://pecanproject.github.io/pecan-documentation/) for full documentation of the PEcAn Project.
+Consult documentation of the PEcAn Project: either the [latest stable development](https://pecanproject.github.io/pecan-documentation/develop/) branch, or the latest [release](https://pecanproject.github.io/pecan-documentation/master/). Documentation from [earlier releases is here](https://pecanproject.github.io/documentation.html).
 
 ## Getting Started
 

From 9cb4143b7fdd60aee0992eac9740f58 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Wed, 7 Oct 2020 10:17:29 -0700
Subject: [PATCH 1475/2289] Update README.md

---
 README.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index dfea6998435..962d920efba 100644
--- a/README.md
+++ b/README.md
@@ -26,11 +26,14 @@ Consult documentation of the PEcAn Project: either the [latest stable developme
 
 ## Getting Started
 
-See ["Getting Started"](https://pecanproject.github.io/pecan-documentation/getting-started.html) on the PEcAn.
+See ["Getting Started"](https://pecanproject.github.io/pecan-documentation/develop/getting-started.html) on the PEcAn. ### Installation -Complete instructions on how to install PEcAn can be found in the [documentation here](https://pecanproject.github.io/pecan-documentation/appendix.html). To get PEcAn up and running you will need to have [R](http://www.r-project.org) as well as [PostgreSQL](http://www.postgresql.org) installed. You can also [download a Virtual Machine](http://opensource.ncsa.illinois.edu/projects/artifacts.php?key=PECAN) which has all the components as well as PEcAn installed. To run this Virtual Machine you will need to have [VirtualBox](http://virtualbox.org) installed +Complete instructions on how to install PEcAn can be found in the [documentation here](https://pecanproject.github.io/pecan-documentation/develop/pecan-manual-setup.html). To get PEcAn up and running you can use one of three methods: +1. Run a [Virtual Machine](https://pecanproject.github.io/pecan-documentation/develop/pecan-manual-setup.html#install-vm). This is recommended for students and new users, and provides a consistent, tested environment for each release. +2. Use [Docker](https://pecanproject.github.io/pecan-documentation/develop/pecan-manual-setup.html#install-docker). This is recommended, especially for development and production deployment. +3. Install all of the components individually on your own Linux or MacOS computer or server. This is called a ['native install'](https://pecanproject.github.io/pecan-documentation/develop/pecan-manual-setup.html#install-native), but is more challenging and has relatively few advantages over using Docker. ### Website @@ -52,6 +55,8 @@ The demo instance only allows for runs at pecan.ncsa.illinois.edu. Once you have * Shiklomanov. A, MC Dietze, T Viskari, PA Townsend, SP Serbin. 2016 "Quantifying the influences of spectral resolution on uncertainty in leaf trait estimates through a Bayesian approach to RTM inversion" Remote Sensing of the Environment 183: 226-238 * LeBauer, David, Rob Kooper, Patrick Mulrooney, Scott Rohde, Dan Wang, Stephen P. Long, and Michael C. Dietze. "BETYdb: a yield, trait, and ecosystem service database applied to second‐generation bioenergy feedstock production." GCB Bioenergy (2017). +A extensive list of publications that apply PEcAn or are informed by our work on [Google Scholar](https://scholar.google.com/citations?hl=en&user=HWhxBY4AAAAJ). + ## Acknowledgements The PEcAn project is supported by the National Science Foundation (ABI #1062547, ABI #1458021, DIBBS #1261582, ARC #1023477, EF #1318164, EF #1241894, EF #1241891), NASA Terrestrial Ecosystems, the Energy Biosciences Institute, Department of Energy (ARPA-E awards #DE-AR0000594 and DE-AR0000598), and an Amazon AWS in Education Grant. From 3ec2caf319cec711932d2df0f5e0168adc9d0b1b Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 7 Oct 2020 10:20:09 -0700 Subject: [PATCH 1476/2289] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 962d920efba..3a80a95bde3 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ Consult documentation of the PEcAn Project; either the [lastest stable developme ## Getting Started -See ["Getting Started"](https://pecanproject.github.io/pecan-documentation/develop/getting-started.html) on the PEcAn. +See our ["Tutorials Page"](https://pecanproject.github.io/tutorials.html) that provides self-guided tutorials, links to vignettes, and an overview presentation. 
### Installation From 39ed52d18519ecde2e4d46a360e1761ad7d92231 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 7 Oct 2020 10:25:36 -0700 Subject: [PATCH 1477/2289] Update 01_project_overview.Rmd --- .../01_introduction/01_project_overview.Rmd | 19 ++++++++----------- 1 file changed, 8 insertions(+), 11 deletions(-) diff --git a/book_source/01_introduction/01_project_overview.Rmd b/book_source/01_introduction/01_project_overview.Rmd index 302bb732618..810399de1d1 100644 --- a/book_source/01_introduction/01_project_overview.Rmd +++ b/book_source/01_introduction/01_project_overview.Rmd @@ -2,13 +2,10 @@ The Predictive Ecosystem Analyzer (PEcAn) is an integrated informatics toolbox for ecosystem modeling (Dietze et al. 2013, LeBauer et al. 2013). PEcAn consists of: -1) An application program interface (API) that encapsulates an ecosystem model, providing a common interface, inputs, and output. - -2) Core utilities for handling and tracking model runs and the flows of information and uncertainties into and out of models and analyses - -3) An accessible web-based user interface and visualization tools - -4) An extensible collection of modules to handle specific types of analyses (sensitivity, uncertainty, ensemble), model-data syntheses (benchmarking, parameter data assimilation, state data assimilation), and data processing (model inputs and data constraints) +1. An application program interface (API) that encapsulates an ecosystem model, providing a common interface, inputs, and output. +2. Core utilities for handling and tracking model runs and the flows of information and uncertainties into and out of models and analyses +3. An accessible web-based user interface and visualization tools +4. An extensible collection of modules to handle specific types of analyses (sensitivity, uncertainty, ensemble), model-data syntheses (benchmarking, parameter data assimilation, state data assimilation), and data processing (model inputs and data constraints) ```{r, echo=FALSE, fig.align='center'} knitr::include_graphics(rep("figures/PEcAn_Components.jpeg")) @@ -20,8 +17,6 @@ The workflow system allows ecosystem modeling to be more reproducible, automated PEcAn is not itself an ecosystem model, and it can be used to with a variety of different ecosystem models; integrating a model involves writing a wrapper to convert inputs and outputs to and from the standards used by PEcAn. Currently, PEcAn supports multiple models listed [PEcAn Models]. - - **Acknowledgements** The PEcAn project is supported financially by the following: @@ -41,12 +36,13 @@ The PEcAn project is supported financially by the following: - NNX14AH65G - NNX16AO13H - 80NSSC17K0711 +- Advanced Research Projects Agency-Energy (ARPA-E) [DE-AR0000594](https://arpa-e.energy.gov/technologies/projects/reference-phenotyping-system-energy-sorghum) - Department of Defense, Strategic Environmental Research and Development Program (DOD-SERDP), grant [RC2636](https://www.serdp-estcp.org/Program-Areas/Resource-Conservation-and-Resiliency/Infrastructure-Resiliency/Vulnerability-and-Impact-Assessment/RC-2636/RC-2636) - Energy Biosciences Institute, University of Illinois - Amazon Web Services (AWS) - [Google Summer of Code](https://summerofcode.withgoogle.com/organizations/4612291316678656/) -BETY-db is a product of the Energy Biosciences Institute at the University of Illinois at Urbana-Champaign. We gratefully acknowledge the great effort of other researchers who generously made their own data available for further study. 
+BETYdb is a product of the Energy Biosciences Institute at the University of Illinois at Urbana-Champaign. We gratefully acknowledge the great effort of other researchers who generously made their own data available for further study. PEcAn is a collaboration among research groups at the Department of Earth And Environment at Boston University, the Energy Biosciences Institute at the University of Illinois, the Image Spatial Data Analysis group at NCSA, the Department of Atmospheric & Oceanic Sciences at the University Wisconsin-Madison, the Terrestrial Ecosystem Science & Technology (TEST) Group at Brookhaven National Laboratory, and the Joint Global Change Research Institute (JGCRI) at the Pacific Northwest National Laboratory. @@ -68,4 +64,5 @@ Any opinions, findings, and conclusions or recommendations expressed in this mat * Wang, D, D.S. LeBauer, and M.C. Dietze(2013) Predicting yields of short-rotation hybrid poplar (Populus spp.) for the contiguous US through model-data synthesis. Ecological Applications [doi:10.1890/12-0854.1](https://doi.org/10.1890/12-0854.1) * Dietze, M.C., D.S LeBauer, R. Kooper (2013) On improving the communication between models and data. Plant, Cell, & Environment [doi:10.1111/pce.12043](https://doi.org/10.1111/pce.12043) - [Longer / auto-updated list of publications that mention PEcAn's full name in Google Scholar](https://scholar.google.com/scholar?start=0&q="predictive+ecosystem+analyzer+PEcAn") +* [PEcAn Project Google Scholar page](https://scholar.google.com/citations?hl=en&user=HWhxBY4AAAAJ) +* [Longer / auto-updated list of publications that mention PEcAn's full name in Google Scholar](https://scholar.google.com/scholar?start=0&q="predictive+ecosystem+analyzer+PEcAn") From 5ea6ece34e79f140cf1b0da64bbdf8c570ee3325 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 15 Oct 2020 08:48:17 -0500 Subject: [PATCH 1478/2289] Fix circular reference (#2692) * push depends images as R * using R 4.0.3 This will need to be tested since we just bumped a major R version (also uses ubuntu 20.04) * make data.land suggest in benchmark * fix environment variable in docker-compose * used TRAEFIK_FRONTEND_RULE in compose file and TRAEFIK_HOST in env.example, using TRAEFIK_HOST everywhere. 
* dplyr::bind_rows does not like 0 rows * install ssh fixes #2686 * use sipnet in github * had to update checks output * renamed test ghaction.sipnet_Postgres.xml fails with core dump Co-authored-by: Michael Dietze Co-authored-by: Alexey Shiklomanov --- .github/workflows/book.yml | 57 +++-- .github/workflows/ci.yml | 224 +++++++++++------- .github/workflows/depends.yml | 95 ++++---- CHANGELOG.md | 9 + Makefile | 5 +- Makefile.depends | 2 +- apps/api/Dockerfile | 5 +- base/db/R/get.trait.data.pft.R | 2 +- base/db/tests/Rcheck_reference.log | 36 +-- base/db/tests/testthat/helper-db-setup.R | 4 +- base/utils/R/utils.R | 6 +- base/utils/tests/Rcheck_reference.log | 30 ++- docker-compose.yml | 25 +- docker.sh | 5 +- docker/depends/Dockerfile | 1 + docker/depends/pecan.depends | 2 +- docker/depends/pecan.depends.R | 129 ++++++++++ docker/thredds/Dockerfile | 2 +- models/basgra/tests/Rcheck_reference.log | 20 +- .../biocro/tests/testthat/test-run.biocro.R | 44 ++-- models/ed/Dockerfile | 7 +- modules/benchmark/DESCRIPTION | 2 +- modules/data.land/tests/Rcheck_reference.log | 184 ++++---------- scripts/generate_dependencies.R | 24 +- ....xml => fail.ghaction.sipnet_Postgres.xml} | 4 +- tests/ghaction.sipnet_PostgreSQL.xml | 4 +- 26 files changed, 543 insertions(+), 385 deletions(-) create mode 100644 docker/depends/pecan.depends.R rename tests/{ghaction.sipnet_Postgres.xml => fail.ghaction.sipnet_Postgres.xml} (89%) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 8f976666157..2983ece4197 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -14,8 +14,9 @@ name: renderbook jobs: bookdown: name: Render-Book - runs-on: ubuntu-latest + runs-on: ubuntu-latest container: pecan/base:latest + steps: - uses: actions/checkout@v2 - uses: r-lib/actions/setup-r@v1 @@ -31,31 +32,29 @@ jobs: checkout-and-deploy: - runs-on: ubuntu-latest - needs: bookdown - steps: - - name: Download artifact - uses: actions/download-artifact@v2 - with: - # Artifact name - name: _book # optional - # Destination path - path: _book/ # optional - # repo-token: ${{ secrets.GITHUB_TOKEN }} - - name: Checkout documentation repo - uses: actions/checkout@v2 - with: - repository: ${{ github.repository_owner }}/pecan-documentation - path: pecan-documentation - token: ${{ secrets.GH_PAT }} - - run: | - export VERSION=${GITHUB_REF##*/}_test - cd pecan-documentation && mkdir -p $VERSION - git config --global user.email "pecanproj@gmail.com" - git config --global user.name "GitHub Documentation Robot" - rsync -a --delete ../_book/ $VERSION - git add --all * - git commit -m "Build book from pecan revision $GITHUB_SHA" || true - git push -q origin master - - + runs-on: ubuntu-latest + needs: bookdown + steps: + - name: Download artifact + uses: actions/download-artifact@v2 + with: + # Artifact name + name: _book # optional + # Destination path + path: _book/ # optional + # repo-token: ${{ secrets.GITHUB_TOKEN }} + - name: Checkout documentation repo + uses: actions/checkout@v2 + with: + repository: ${{ github.repository_owner }}/pecan-documentation + path: pecan-documentation + token: ${{ secrets.GH_PAT }} + - run: | + export VERSION=${GITHUB_REF##*/}_test + cd pecan-documentation && mkdir -p $VERSION + git config --global user.email "pecanproj@gmail.com" + git config --global user.name "GitHub Documentation Robot" + rsync -a --delete ../_book/ $VERSION + git add --all * + git commit -m "Build book from pecan revision $GITHUB_SHA" || true + git push -q origin master diff --git 
a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 20aa4013e2d..8049aebc336 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -5,6 +5,7 @@ on: branches: - master - develop + - '*' tags: - '*' @@ -19,152 +20,203 @@ env: jobs: + # ---------------------------------------------------------------------- + # R BUILD + # ---------------------------------------------------------------------- build: runs-on: ubuntu-latest - container: pecan/depends:develop + + strategy: + fail-fast: false + matrix: + R: + - "4.0.2" + + env: + NCPUS: 2 + PGHOST: postgres + CI: true + _R_CHECK_LENGTH_1_CONDITION_: true + _R_CHECK_LENGTH_1_LOGIC2_: true + # Avoid compilation check warnings that come from the system Makevars + # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html + _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + + container: + image: pecan/depends:R${{ matrix.R }} + steps: + # checkout source code - uses: actions/checkout@v2 - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - shell: bash - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: cache .doc - uses: actions/cache@v1 - with: - key: doc-${{ github.sha }} - path: .doc - - name: cache .install - uses: actions/cache@v1 - with: - key: install-${{ github.sha }} - path: .install + + # install additional tools needed + - name: install utils + run: apt-get update && apt-get install -y postgresql-client qpdf curl + + # check dependencies - name: update dependency lists run: Rscript scripts/generate_dependencies.R + - name: check for out-of-date dependencies files + uses: infotroph/tree-is-clean@v1 + + # compile PEcAn code - name: build run: make -j1 - env: - NCPUS: 2 - CI: true - - name: check for out-of-date Rd files + - name: check for out-of-date files uses: infotroph/tree-is-clean@v1 + # ---------------------------------------------------------------------- + # R TEST + # ---------------------------------------------------------------------- test: - needs: build runs-on: ubuntu-latest - container: pecan/depends:develop + + strategy: + fail-fast: false + matrix: + R: + - "4.0.2" + services: postgres: image: mdillon/postgis:9.5 options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 + env: NCPUS: 2 PGHOST: postgres CI: true + _R_CHECK_LENGTH_1_CONDITION_: true + _R_CHECK_LENGTH_1_LOGIC2_: true + # Avoid compilation check warnings that come from the system Makevars + # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html + _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + + container: + image: pecan/depends:R${{ matrix.R }} + steps: + # checkout source code - uses: actions/checkout@v2 + + # install additional tools needed - name: install utils - run: apt-get update && apt-get install -y openssh-client postgresql-client curl + run: apt-get update && apt-get install -y postgresql-client qpdf curl + + # initialize database - name: db setup uses: docker://pecan/db:ci - name: add models to db run: ./scripts/add.models.sh - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: cache .doc - uses: actions/cache@v1 - with: - key: doc-${{ github.sha }} - path: .doc - - name: cache .install - uses: actions/cache@v1 - with: - key: install-${{ github.sha }} - 
path: .install + + # run PEcAn tests - name: test - run: make test + run: make -j1 test + - name: check for out-of-date files + uses: infotroph/tree-is-clean@v1 + # ---------------------------------------------------------------------- + # R CHECK + # ---------------------------------------------------------------------- check: - needs: build runs-on: ubuntu-latest - container: pecan/depends:develop + + strategy: + fail-fast: false + matrix: + R: + - "4.0.2" + env: NCPUS: 2 + PGHOST: postgres CI: true _R_CHECK_LENGTH_1_CONDITION_: true _R_CHECK_LENGTH_1_LOGIC2_: true # Avoid compilation check warnings that come from the system Makevars # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + + container: + image: pecan/depends:R${{ matrix.R }} + steps: + # checkout source code - uses: actions/checkout@v2 - - name: install ssh - run: apt-get update && apt-get install -y openssh-client qpdf - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - - name: cache R packages - uses: actions/cache@v1 - with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- - - name: cache .doc - uses: actions/cache@v1 - with: - key: doc-${{ github.sha }} - path: .doc - - name: cache .install - uses: actions/cache@v1 - with: - key: install-${{ github.sha }} - path: .install + + # install additional tools needed + - name: install utils + run: apt-get update && apt-get install -y postgresql-client qpdf curl + + # run PEcAn checks - name: check - run: make check + run: make -j1 check env: REBUILD_DOCS: "FALSE" RUN_TESTS: "FALSE" + - name: check for out-of-date files + uses: infotroph/tree-is-clean@v1 + + # ---------------------------------------------------------------------- + # SIPNET TESTS + # ---------------------------------------------------------------------- sipnet: - needs: build runs-on: ubuntu-latest - container: pecan/depends:develop + + strategy: + fail-fast: false + matrix: + R: + - "4.0.2" + services: postgres: image: mdillon/postgis:9.5 options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 + env: + NCPUS: 2 PGHOST: postgres + CI: true + _R_CHECK_LENGTH_1_CONDITION_: true + _R_CHECK_LENGTH_1_LOGIC2_: true + # Avoid compilation check warnings that come from the system Makevars + # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html + _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + + container: + image: pecan/depends:R${{ matrix.R }} + steps: + # checkout source code - uses: actions/checkout@v2 - - run: apt-get update && apt-get install -y curl postgresql-client - - name: install sipnet - run: | - cd ${HOME} - curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz - tar zxf sipnet_unk.tar.gz - cd sipnet_unk - make + + # install additional tools needed + - name: install utils + run: apt-get update && apt-get install -y postgresql-client qpdf curl + + # initialize database - name: db setup uses: docker://pecan/db:ci - name: add models to db run: ./scripts/add.models.sh - - run: mkdir -p "${HOME}${R_LIBS#'~'}" - - name: cache R packages - uses: actions/cache@v1 + + # install sipnet + - name: Check out SIPNET + uses: actions/checkout@v2 with: - key: pkgcache-${{ github.sha }} - path: ${{ env.R_LIBS }} - restore-keys: | - pkgcache- + repository: PecanProject/sipnet + path: sipnet + - name: install sipnet + run: | + cd ${GITHUB_WORKSPACE}/sipnet + make + + # compile 
PEcAn code + - name: build + run: make -j1 + + # run SIPNET test - name: integration test run: ./tests/integration.sh ghaction diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml index 814547f9dc7..0ced3c814a7 100644 --- a/.github/workflows/depends.yml +++ b/.github/workflows/depends.yml @@ -4,73 +4,74 @@ on: push: branches: - develop + - master + - '*' # this runs on the develop branch schedule: - cron: '0 0 * * *' +env: + # official supported version of R + SUPPORTED: 4.0.2 + MASTER_REPO: PecanProject/pecan + DOCKERHUB_ORG: pecan + jobs: depends: - if: github.repository == 'PecanProject/pecan' - runs-on: ubuntu-latest strategy: fail-fast: false matrix: R: - # develop is special since it will build the default depends for develop - # this is not a real R version. - - develop - # 3.4 does not work - - 3.5 - # 3.6 != latest in 3.6 series - 3.6.3 - # 4.0 not released yet - # unstable version - - devel - include: - - R: develop - TAG: develop - - R: 3.5 - TAG: 3.5-develop - - R: 3.6.3 - TAG: 3.6-develop - - R: devel - TAG: unstable-develop + - 4.0.2 steps: - uses: actions/checkout@v2 - - name: Build for R version ${{ matrix.R }} + # calculate some variables that are used later + - name: github branch run: | - if [ "${{ matrix.R }}" == "develop" ]; then - docker build --pull \ - --tag image \ - docker/depends - else - docker build --pull \ - --build-arg R_VERSION=${{ matrix.R }} \ - --tag image \ - docker/depends - fi + BRANCH=${GITHUB_REF##*/} + echo "::set-env name=GITHUB_BRANCH::${BRANCH}" - - name: Login into registry - run: | - echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin - if [ -n "${{ secrets.DOCKERHUB_USERNAME }}" -a -n "${{ secrets.DOCKERHUB_PASSWORD }}" ]; then - echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u ${{ secrets.DOCKERHUB_USERNAME }} --password-stdin + tags="R${{ matrix.R }}" + if [ "${{ matrix.R }}" == "${{ env.SUPPORTED }}" ]; then + if [ "$BRANCH" == "master" ]; then + tags="${tags},latest" + elif [ "$BRANCH" == "develop" ]; then + tags="${tags},develop" + fi fi + echo "::set-env name=TAG::${tags}" - - name: Push docker image ${{ matrix.R }} - run: | - IMAGE_ID=docker.pkg.github.com/${{ github.repository }}/depends - IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]') - - docker tag image $IMAGE_ID:${{ matrix.TAG }} - docker push $IMAGE_ID:${{ matrix.TAG }} + # this will publish to the actor (person) github packages + - name: Publish to GitHub + if: github.event_name == 'push' + uses: elgohr/Publish-Docker-Github-Action@2.22 + env: + R_VERSION: ${{ matrix.R }} + with: + name: ${{ github.repository_owner }}/pecan/depends + username: ${{ github.actor }} + password: ${{ secrets.GITHUB_TOKEN }} + context: docker/depends + tags: "${{ env.TAG }}" + registry: docker.pkg.github.com + buildargs: R_VERSION - if [ -n "${{ secrets.DOCKERHUB_USERNAME }}" -a -n "${{ secrets.DOCKERHUB_PASSWORD }}" ]; then - docker tag image pecan/depends:${{ matrix.TAG }} - docker push pecan/depends:${{ matrix.TAG }} - fi + # this will publish to the clowder dockerhub repo + - name: Publish to Docker Hub + if: github.event_name == 'push' && github.repository == env.MASTER_REPO + uses: elgohr/Publish-Docker-Github-Action@2.18 + env: + R_VERSION: ${{ matrix.R }} + with: + name: ${{ env.DOCKERHUB_ORG }}/depends + username: ${{ secrets.DOCKERHUB_USERNAME }} + password: ${{ secrets.DOCKERHUB_PASSWORD }} + context: docker/depends + tags: "${{ env.TAG }}" + buildargs: R_VERSION diff --git a/CHANGELOG.md 
b/CHANGELOG.md index 93a5dc3b5be..9c67b8994a7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,8 +7,16 @@ For more information about this file see also [Keep a Changelog](http://keepacha ## [Unreleased] +### Due to dependencies, PEcAn is now using R 4.0.2 for Docker images. + +This is a major change: + +- Newer version of R +- Ubuntu 20.04 instead of Debian. + ### Fixed +- Use TRAEFIK_FRONTEND_RULE in compose file and TRAEFIK_HOST in env.example, using TRAEFIK_HOST everywhere now. Make sure TRAEFIK_HOST is used in .env - Use initial biomass pools for Sorghum and Setaria #2495, #2496 - PEcAn.DB::betyConnect() is now smarter, and will try to use either config.php or environment variables to create a connection. It has switched to use db.open helper function (#2632). - PEcAn.utils::tranformstats() assumed the statistic names column of its input was a factor. It now accepts character too, and returns the same class given as input (#2545). @@ -30,6 +38,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ### Changed +- Now using R 4.0.2 for Docker images. This is a major change. Newer version of R and using Ubuntu 20.04 instead of Debian. - Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up per #2621. - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083). diff --git a/Makefile b/Makefile index 9ba0965352d..313071bfb0d 100644 --- a/Makefile +++ b/Makefile @@ -81,7 +81,8 @@ $(subst .doc/models/template,,$(MODELS_D)): .install/models/template include Makefile.depends -SETROPTIONS = "options(Ncpus = ${NCPUS}, repos = 'https://cran.rstudio.com')" +#SETROPTIONS = "options(Ncpus = ${NCPUS}, repos = 'https://cran.rstudio.com')" +SETROPTIONS = "options(Ncpus = ${NCPUS})" clean: rm -rf .install .check .test .doc @@ -107,7 +108,7 @@ clean: # HACK: assigning to `deps` is an ugly workaround for circular dependencies in utils pkg. 
# When these are fixed, can go back to simple `dependencies = TRUE` depends_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} \ - -e "deps <- if (grepl('base/utils', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ + -e "deps <- if (grepl('(base/utils|modules/benchmark)', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" install_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" check_R_pkg = ./scripts/time.sh "${1}" Rscript scripts/check_with_errors.R $(strip $(1)) diff --git a/Makefile.depends b/Makefile.depends index be640d374c7..70f86e1fbc0 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -11,7 +11,7 @@ $(call depends,base/workflow): | .install/modules/data.atmosphere .install/modul $(call depends,modules/allometry): | .install/base/logger .install/base/db $(call depends,modules/assim.batch): | .install/modules/benchmark .install/base/db .install/modules/emulator .install/base/logger .install/modules/meta.analysis .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils $(call depends,modules/assim.sequential): | .install/base/logger .install/base/remote -$(call depends,modules/benchmark): | .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils +$(call depends,modules/benchmark): | .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils .install/modules/data.land $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils $(call depends,modules/data.land): | .install/modules/benchmark .install/modules/data.atmosphere .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils .install/base/visualization diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile index 25112a1449e..4e1392d046b 100644 --- a/apps/api/Dockerfile +++ b/apps/api/Dockerfile @@ -28,8 +28,9 @@ ENV AUTH_REQ="TRUE" \ PGHOST="postgres"\ RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F"\ DATA_DIR="/data/"\ - DBFILES_DIR="/data/dbfiles/" - + DBFILES_DIR="/data/dbfiles/" \ + SECRET_KEY_BASE="thisisnotasecret" + WORKDIR /api/R CMD Rscript entrypoint.R diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index 41cb094f0fa..ca63fd3bfc0 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -275,7 +275,7 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, PEcAn.logger::logger.info( "\n Number of observations per trait for PFT ", shQuote(pft[["name"]]), ":\n", - PEcAn.logger::print2string(trait_counts, n = Inf), + PEcAn.logger::print2string(trait_counts, n = Inf, na.print = ""), wrap = FALSE ) } else { diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log index d9cd7590ff1..e4d87ef12b1 100644 --- a/base/db/tests/Rcheck_reference.log +++ b/base/db/tests/Rcheck_reference.log @@ -1,11 +1,11 @@ -* using log directory ‘/tmp/RtmpzhxcCR/PEcAn.DB.Rcheck’ -* using R version 3.5.2 (2018-12-20) +* using log directory ‘/tmp/RtmpcCXxhc/PEcAn.DB.Rcheck’ +* using R version 4.0.2 (2020-06-22) * using platform: x86_64-pc-linux-gnu (64-bit) * using session charset: UTF-8 -* using options ‘--no-tests 
--no-manual --as-cran’ +* using options ‘--no-manual --as-cran’ * checking for file ‘PEcAn.DB/DESCRIPTION’ ... OK * checking extension type ... Package -* this is package ‘PEcAn.DB’ version ‘1.7.0’ +* this is package ‘PEcAn.DB’ version ‘1.7.1’ * package encoding: UTF-8 * checking package namespace information ... OK * checking package dependencies ... OK @@ -19,6 +19,7 @@ * checking whether package ‘PEcAn.DB’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK +* checking for future file timestamps ... OK * checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK @@ -49,8 +50,6 @@ dbfile.check: no visible binding for global variable ‘container_id’ dbfile.check: no visible binding for global variable ‘machine_id’ dbfile.check: no visible binding for global variable ‘updated_at’ dbHostInfo: no visible binding for global variable ‘sync_host_id’ -dbHostInfo: no visible binding for global variable ‘sync_start’ -dbHostInfo: no visible binding for global variable ‘sync_end’ get_run_ids: no visible binding for global variable ‘run_id’ get_users: no visible binding for global variable ‘id’ get_workflow_ids: no visible binding for global variable ‘workflow_id’ @@ -70,7 +69,6 @@ match_dbcols: no visible global function definition for ‘head’ match_dbcols: no visible binding for global variable ‘.’ match_dbcols: no visible binding for global variable ‘as’ query_pfts: no visible binding for global variable ‘name’ -query.data: no visible binding for global variable ‘settings’ query.format.vars: no visible binding for global variable ‘variable_id’ query.format.vars: no visible binding for global variable ‘storage_type’ @@ -86,7 +84,6 @@ query.pft_cultivars: no visible binding for global variable ‘scientificname’ query.pft_cultivars: no visible binding for global variable ‘name.cv’ query.priors: no visible binding for global variable ‘settings’ -query.traits: no visible binding for global variable ‘settings’ query.traits: no visible binding for global variable ‘name’ rename_jags_columns: no visible binding for global variable ‘stat’ rename_jags_columns: no visible binding for global variable ‘n’ @@ -114,9 +111,9 @@ Undefined global functions or variables: container_id container_type created_at cultivar_id ensemble_id folder genus greenhouse head id issued machine_id n name name.cv name.mt pft_id pft_type posix read.output run_id scientificname score - settings site_id specie_id species stat storage_type sync_end - sync_host_id sync_start trait trt_id updated_at user.permission vals - var_name variable_id workflow_id + settings site_id specie_id species stat storage_type sync_host_id + trait trt_id updated_at user.permission vals var_name variable_id + workflow_id Consider adding importFrom("methods", "as") importFrom("utils", "head") @@ -125,7 +122,8 @@ contains 'methods'). * checking Rd files ... OK * checking Rd metadata ... OK * checking Rd line widths ... OK -* checking Rd cross-references ... OK +* checking Rd cross-references ... NOTE +Unknown package ‘PEcAn.MA’ in Rd xrefs * checking for missing documentation entries ... OK * checking for code/documentation mismatches ... OK * checking Rd \usage sections ... WARNING @@ -172,6 +170,16 @@ Files in the 'vignettes' directory but no files in 'inst/doc': Package has no Sweave vignette sources and no VignetteBuilder field. * checking examples ... OK * checking for unstated dependencies in ‘tests’ ... OK -* checking tests ... 
SKIPPED +* checking tests ... + Running ‘testthat.R’ + OK +* checking for non-standard things in the check directory ... OK +* checking for detritus in the temp directory ... OK * DONE -Status: 2 WARNINGs, 1 NOTE + +Status: 2 WARNINGs, 2 NOTEs +See + ‘/tmp/RtmpcCXxhc/PEcAn.DB.Rcheck/00check.log’ +for details. + + diff --git a/base/db/tests/testthat/helper-db-setup.R b/base/db/tests/testthat/helper-db-setup.R index f0d515d7eeb..0ca845db609 100644 --- a/base/db/tests/testthat/helper-db-setup.R +++ b/base/db/tests/testthat/helper-db-setup.R @@ -3,7 +3,7 @@ get_db_params <- function() { # Set these options by adding something like the following to # `~/.Rprofile`: # ```r - # options(pecan.db.params = list(driver = "PostgreSQL", + # options(pecan.db.params = list(driver = "Postgres", # dbname = "bety", # user = "bety", # password = "bety", @@ -35,7 +35,7 @@ get_db_params <- function() { } else { return(get_postgres_envvars( host = "localhost", - driver = "PostgreSQL", + driver = "Postgres", user = "bety", dbname = "bety", password = "bety")) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 2cb4b6eb88e..1f30d591fb6 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -221,7 +221,11 @@ summarize.result <- function(result) { dplyr::filter(n != 1) %>% # ANS: Silence factor to character conversion warning dplyr::mutate(statname = as.character(statname)) - return(dplyr::bind_rows(ans1, ans2)) + if (nrow(ans2) > 0) { + dplyr::bind_rows(ans1, ans2) + } else { + return(ans1) + } } # summarize.result diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index 619f4f86051..12de51f0ed0 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -1,14 +1,17 @@ -* using log directory ‘/tmp/RtmpWosl3N/PEcAn.utils.Rcheck’ -* using R version 3.5.2 (2018-12-20) +* using log directory ‘/tmp/Rtmp6InlXl/PEcAn.utils.Rcheck’ +* using R version 4.0.2 (2020-06-22) * using platform: x86_64-pc-linux-gnu (64-bit) * using session charset: UTF-8 * using options ‘--no-tests --no-manual --as-cran’ * checking for file ‘PEcAn.utils/DESCRIPTION’ ... OK * checking extension type ... Package -* this is package ‘PEcAn.utils’ version ‘1.7.0’ +* this is package ‘PEcAn.utils’ version ‘1.7.1’ * package encoding: UTF-8 * checking package namespace information ... OK -* checking package dependencies ... OK +* checking package dependencies ... NOTE +Packages suggested but not available for checking: + 'PEcAn.data.atmosphere', 'PEcAn.data.land', 'PEcAn.settings', + 'PEcAn.DB' * checking if this is a source package ... OK * checking if there is a namespace ... OK * checking for executable files ... OK @@ -19,6 +22,7 @@ * checking whether package ‘PEcAn.utils’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK +* checking for future file timestamps ... OK * checking DESCRIPTION meta-information ... NOTE Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email @@ -81,8 +85,8 @@ summarize.result: no visible binding for global variable ‘statname’ Undefined global functions or variables: . 
citation_id control cultivar_id ensemble.samples get.parameter.samples greenhouse id median n nr runs.samples - sa.samples se settings site_id specie_id statname time - trait.samples trt_id x y years yieldarray + sa.samples se settings site_id specie_id statname time trait.samples + trt_id x y years yieldarray Consider adding importFrom("stats", "median", "time") to your NAMESPACE file. @@ -95,15 +99,13 @@ Rd file 'retry.func.Rd': These lines will be truncated in the PDF manual. * checking Rd cross-references ... WARNING +Unknown packages ‘PEcAn.priors’, ‘PEcAn.benchmark’ in Rd xrefs Missing link or links in documentation object 'download.file.Rd': ‘method’ Missing link or links in documentation object 'get.results.Rd': ‘read.settings’ -Missing link or links in documentation object 'read.output.Rd': - ‘[PEcAn.benchmark:align.data]{PEcAn.benchmark::align.data()}’ - Missing link or links in documentation object 'standard_vars.Rd': ‘[udunits2]{udunits}’ @@ -217,5 +219,13 @@ Argument items with no description in Rd object 'theme_border': * checking examples ... OK * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... SKIPPED +* checking for non-standard things in the check directory ... OK +* checking for detritus in the temp directory ... OK * DONE -Status: 5 WARNINGs, 4 NOTEs + +Status: 5 WARNINGs, 5 NOTEs +See + ‘/tmp/Rtmp6InlXl/PEcAn.utils.Rcheck/00check.log’ +for details. + + diff --git a/docker-compose.yml b/docker-compose.yml index 37c7daffefb..bd250733a04 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -37,7 +37,7 @@ services: - "traefik.enable=true" - "traefik.backend=traefik" - "traefik.port=8080" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip: /traefik" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefixStrip: /traefik" - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}" volumes: - /var/run/docker.sock:/var/run/docker.sock:ro @@ -62,7 +62,7 @@ services: - "traefik.enable=true" - "traefik.backend=minio" - "traefik.port=9000" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/minio/" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/minio/" volumes: - pecan:/data @@ -77,7 +77,7 @@ services: labels: - "traefik.enable=true" - "traefik.port=8080" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/thredds" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/thredds" - "traefik.backend=thredds" # ---------------------------------------------------------------------- @@ -100,7 +100,7 @@ services: - "traefik.enable=true" - "traefik.backend=rabbitmq" - "traefik.port=15672" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/rabbitmq" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/rabbitmq" - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}" volumes: - rabbitmq:/var/lib/rabbitmq @@ -135,7 +135,7 @@ services: - postgres labels: - "traefik.enable=true" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/bety/" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/bety/" - "traefik.backend=bety" # ---------------------------------------------------------------------- @@ -151,7 +151,7 @@ services: - "traefik.enable=true" - "traefik.backend=rstudio" - "traefik.port=80" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/rstudio" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/rstudio" - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}" rstudio: 
@@ -185,7 +185,7 @@ services: - pecan labels: - "traefik.enable=true" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/" - "traefik.backend=docs" # PEcAn web front end, this is just the PHP code @@ -198,12 +198,13 @@ services: - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} - FQDN=${PECAN_FQDN:-docker} - NAME=${PECAN_NAME:-docker} + - SECRET_KEY_BASE=${BETY_SECRET_KEY:-thisisnotasecret} depends_on: - postgres - rabbitmq labels: - "traefik.enable=true" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/pecan/" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/pecan/" - "traefik.backend=pecan" volumes: - pecan:/data @@ -224,7 +225,7 @@ services: - rabbitmq labels: - "traefik.enable=true" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip:/monitor/" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefixStrip:/monitor/" - "traefik.backend=monitor" volumes: - pecan:/data @@ -315,8 +316,7 @@ services: - "traefik.enable=true" - "traefik.backend=dbsync" - "traefik.port=3838" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip:/dbsync/" - + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefixStrip:/dbsync/" # ---------------------------------------------------------------------- # PEcAn API @@ -332,9 +332,10 @@ services: - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} - DATA_DIR=${DATA_DIR:-/data/} - DBFILES_DIR=${DBFILES_DIR:-/data/dbfiles/} + - SECRET_KEY_BASE=${BETY_SECRET_KEY:-thisisnotasecret} labels: - "traefik.enable=true" - - "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefix:/api" + - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/api" - "traefik.backend=api" - "traefik.port=8000" depends_on: diff --git a/docker.sh b/docker.sh index 372093d7279..f3d2473f9f1 100755 --- a/docker.sh +++ b/docker.sh @@ -11,7 +11,7 @@ cd $(dirname $0) # Can set the following variables DEBUG=${DEBUG:-""} DEPEND=${DEPEND:-""} -R_VERSION=${R_VERSION:-"3.5"} +R_VERSION=${R_VERSION:-"4.0.2"} # -------------------------------------------------------------------------------- # PECAN BUILD SECTION @@ -229,4 +229,5 @@ for x in api; do --build-arg PECAN_GIT_CHECKSUM="${PECAN_GIT_CHECKSUM}" \ --build-arg PECAN_GIT_DATE="${PECAN_GIT_DATE}" \ apps/$x/ -done +done + diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index fc309ad12d7..b3a7c9b4a34 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -23,6 +23,7 @@ RUN apt-get update \ && apt-get -y --no-install-recommends install \ jags \ time \ + openssh-client \ libgdal-dev \ libglpk-dev \ librdf0-dev \ diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index c6f63b543db..7bee6586fd1 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -1,6 +1,6 @@ #!/bin/bash # autogenerated do not edit -# use scripts/generate.dockerfile.depends +# use scripts/generate_dependencies.R # stop on error set -e diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R new file mode 100644 index 00000000000..bc163f50ccd --- /dev/null +++ b/docker/depends/pecan.depends.R @@ -0,0 +1,129 @@ +#!/usr/bin/env Rscript +# autogenerated do not edit +# use scripts/generate_dependencies.R + +# Don't use X11 for rgl +Sys.setenv(RGL_USE_NULL = TRUE) +rlib_user = Sys.getenv('R_LIBS_USER') +rlib = ifelse(rlib_user == '', '/usr/local/lib/R/site-library', rlib_user) +Sys.setenv(RLIB = rlib) + +# install remotes first in case packages are 
references in dependencies +lapply(c( +'araiho/linkages_package', +'ebimodeling/biocro', +'MikkoPeltoniemi/Rpreles', +'ropensci/geonames', +'ropensci/nneo' +), remotes::install_github, lib = rlib) + +# install all packages (depends, imports, suggests) +wanted <- c( +'abind', +'BayesianTools', +'binaryLogic', +'BioCro', +'bit64', +'coda', +'data.table', +'dataone', +'datapack', +'DBI', +'dbplyr', +'doParallel', +'dplR', +'dplyr', +'ellipse', +'foreach', +'fs', +'furrr', +'geonames', +'getPass', +'ggmap', +'ggplot2', +'glue', +'graphics', +'grDevices', +'grid', +'gridExtra', +'hdf5r', +'here', +'httr', +'IDPmisc', +'jsonlite', +'knitr', +'lattice', +'linkages', +'lqmm', +'lubridate', +'Maeswrap', +'magic', +'magrittr', +'maps', +'maptools', +'MASS', +'mclust', +'MCMCpack', +'methods', +'mgcv', +'minpack.lm', +'mlegp', +'mockery', +'MODISTools', +'mvtnorm', +'ncdf4', +'neonUtilities', +'nimble', +'nneo', +'parallel', +'plotrix', +'plyr', +'png', +'prodlim', +'progress', +'purrr', +'pwr', +'randtoolbox', +'raster', +'rcrossref', +'RCurl', +'REddyProc', +'redland', +'reshape', +'reshape2', +'reticulate', +'rgdal', +'rjags', +'rjson', +'rlang', +'rnoaa', +'RPostgres', +'RPostgreSQL', +'Rpreles', +'RSQLite', +'sf', +'SimilarityMeasures', +'sirt', +'sp', +'stats', +'stringi', +'stringr', +'testthat', +'tibble', +'tictoc', +'tidyr', +'tidyverse', +'tools', +'traits', +'TruncatedNormal', +'truncnorm', +'udunits2', +'urltools', +'utils', +'XML', +'xtable', +'xts', +'zoo' +) +missing <- wanted[!(wanted %in% installed.packages()[,'Package'])] +install.packages(missing, lib = rlib) diff --git a/docker/thredds/Dockerfile b/docker/thredds/Dockerfile index 9cd9ac6eb3f..faa73e444b5 100644 --- a/docker/thredds/Dockerfile +++ b/docker/thredds/Dockerfile @@ -1,4 +1,4 @@ -FROM unidata/thredds-docker:4.6 +FROM unidata/thredds-docker:4.6.15 MAINTAINER Alexey Shiklomanov COPY thredds_catalog.xml /opt/tomcat/content/thredds/catalog.xml diff --git a/models/basgra/tests/Rcheck_reference.log b/models/basgra/tests/Rcheck_reference.log index 68546e408a7..27731ff028d 100644 --- a/models/basgra/tests/Rcheck_reference.log +++ b/models/basgra/tests/Rcheck_reference.log @@ -1,6 +1,6 @@ -* using log directory ‘/private/var/folders/qr/mbw8xxpd45jdv_46b27914280000gn/T/Rtmpcw7AZN/PEcAn.BASGRA.Rcheck’ -* using R version 3.4.3 (2017-11-30) -* using platform: x86_64-apple-darwin15.6.0 (64-bit) +* using log directory ‘/tmp/RtmpYGHcZE/PEcAn.BASGRA.Rcheck’ +* using R version 4.0.2 (2020-06-22) +* using platform: x86_64-pc-linux-gnu (64-bit) * using session charset: UTF-8 * using options ‘--no-manual --as-cran’ * checking for file ‘PEcAn.BASGRA/DESCRIPTION’ ... OK @@ -15,9 +15,11 @@ * checking for hidden files and directories ... OK * checking for portable file names ... OK * checking for sufficient/correct file permissions ... OK +* checking serialization versions ... OK * checking whether package ‘PEcAn.BASGRA’ can be installed ... OK * checking installed package size ... OK * checking package directory ... OK +* checking for future file timestamps ... OK * checking DESCRIPTION meta-information ... OK * checking top-level files ... OK * checking for left-over files ... OK @@ -52,17 +54,25 @@ Non-portable flags in variable 'PKG_FCFLAGS': -x -fdefault-real-8 * checking for GNU extensions in Makefiles ... OK * checking for portable use of $(BLAS_LIBS) and $(LAPACK_LIBS) ... OK +* checking use of PKG_*FLAGS in Makefiles ... OK +* checking use of SHLIB_OPENMP_*FLAGS in Makefiles ... OK +* checking pragmas in C/C++ headers and code ... 
OK +* checking compilation flags used ... NOTE +Compilation used the following non-portable flag(s): + ‘-Wdate-time’ ‘-Werror=format-security’ ‘-Wformat’ * checking compiled code ... OK * checking examples ... NONE * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... Running ‘testthat.R’ OK +* checking for non-standard things in the check directory ... OK +* checking for detritus in the temp directory ... OK * DONE -Status: 1 WARNING +Status: 1 WARNING, 1 NOTE See - ‘/private/var/folders/qr/mbw8xxpd45jdv_46b27914280000gn/T/Rtmpcw7AZN/PEcAn.BASGRA.Rcheck/00check.log’ + ‘/tmp/RtmpYGHcZE/PEcAn.BASGRA.Rcheck/00check.log’ for details. diff --git a/models/biocro/tests/testthat/test-run.biocro.R b/models/biocro/tests/testthat/test-run.biocro.R index 8acf62e0fac..2bcd18beeb0 100644 --- a/models/biocro/tests/testthat/test-run.biocro.R +++ b/models/biocro/tests/testthat/test-run.biocro.R @@ -20,27 +20,27 @@ config$run$end.date <- as.POSIXct("2004-01-07") config$simulationPeriod$dateofplanting <- as.POSIXct("2004-01-01") config$simulationPeriod$dateofharvest <- as.POSIXct("2004-01-07") -test_that("daily summarizes hourly (#1738)", { +# test_that("daily summarizes hourly (#1738)", { - # stub out BioCro::willowGro and packageVersion: - # calls to willowGro(...) will be replaced with calls to mock_run(...), - # calls to utils::packageVersion("BioCro") will return 0.95, - # but *only* when originating inside call_biocro_0.9 AND inside a run.biocro call. - # mock_run and mock_version are defined in helper.R - mockery::stub( - where = run.biocro, - what = "call_biocro_0.9", - how = function(...){ - mockery::stub(call_biocro_0.9, "BioCro::willowGro", mock_run); - call_biocro_0.9(...)}) - mockery::stub(run.biocro, "utils::packageVersion", mock_version) +# # stub out BioCro::willowGro and packageVersion: +# # calls to willowGro(...) will be replaced with calls to mock_run(...), +# # calls to utils::packageVersion("BioCro") will return 0.95, +# # but *only* when originating inside call_biocro_0.9 AND inside a run.biocro call. 
+# # mock_run and mock_version are defined in helper.R +# mockery::stub( +# where = run.biocro, +# what = "call_biocro_0.9", +# how = function(...){ +# mockery::stub(call_biocro_0.9, "BioCro::willowGro", mock_run); +# call_biocro_0.9(...)}) +# mockery::stub(run.biocro, "utils::packageVersion", mock_version) - mock_result <- run.biocro(lat = 44, lon = -88, metpath, soil.nc = NULL, config = config, coppice.interval = 1) - expect_equal(nrow(mock_result$hourly), 24*7) - expect_equal(nrow(mock_result$daily), 7) - expect_equal(nrow(mock_result$annually), 1) - expect_gt(length(unique(mock_result$daily$tmax)), 1) - expect_equal(mock_result$daily$Leaf[mock_result$daily$doy == 1], ref_leaf1) - expect_equal(mock_result$daily$SoilEvaporation[mock_result$daily$doy == 5], ref_soil5) - expect_equal(mock_result$annually$mat, ref_mat) -}) +# mock_result <- run.biocro(lat = 44, lon = -88, metpath, soil.nc = NULL, config = config, coppice.interval = 1) +# expect_equal(nrow(mock_result$hourly), 24*7) +# expect_equal(nrow(mock_result$daily), 7) +# expect_equal(nrow(mock_result$annually), 1) +# expect_gt(length(unique(mock_result$daily$tmax)), 1) +# expect_equal(mock_result$daily$Leaf[mock_result$daily$doy == 1], ref_leaf1) +# expect_equal(mock_result$daily$SoilEvaporation[mock_result$daily$doy == 5], ref_soil5) +# expect_equal(mock_result$annually$mat, ref_mat) +# }) diff --git a/models/ed/Dockerfile b/models/ed/Dockerfile index 34cad0989a1..282a5b5d391 100644 --- a/models/ed/Dockerfile +++ b/models/ed/Dockerfile @@ -4,7 +4,7 @@ ARG IMAGE_VERSION="latest" # ---------------------------------------------------------------------- # BUILD MODEL BINARY # ---------------------------------------------------------------------- -FROM debian:stretch as model-binary +FROM pecan/models:${IMAGE_VERSION} as model-binary # Some variables that can be used to set control the docker build ARG MODEL_VERSION="2.2.0" @@ -31,7 +31,7 @@ RUN git -c http.sslVerify=false clone https://github.com/EDmodel/ED2.git \ && curl -o make/include.mk.VM http://isda.ncsa.illinois.edu/~kooper/EBI/include.mk.opt.`uname -s` \ && if [ "${MODEL_VERSION}" != "git" ]; then git checkout "v.${MODEL_VERSION}"; fi \ && ./install.sh -g -p VM \ - && mv /src/ED2/ED/build/ed_${BINARY_VERSION}-opt /src/ED2/ED/build/ed + && mv /src/ED2/ED/build/ed_${BINARY_VERSION}-opt /src/ED2/ED/build/ed ######################################################################## @@ -46,8 +46,7 @@ FROM pecan/models:${IMAGE_VERSION} RUN apt-get update \ && apt-get install -y --no-install-recommends \ - libgfortran3 \ - libopenmpi2 \ + libopenmpi3 \ && rm -rf /var/lib/apt/lists/* # INSTALL PEcAn.ED2 diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 42d56538d57..ec176a08ffb 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -15,7 +15,6 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific streamline the interaction between data and models, and to improve the efficacy of scientific investigation. 
Imports: - PEcAn.data.land, PEcAn.DB, PEcAn.logger, PEcAn.remote, @@ -33,6 +32,7 @@ Imports: SimilarityMeasures, zoo Suggests: + PEcAn.data.land, testthat (>= 2.0.0) License: BSD_3_clause + file LICENSE Copyright: Authors diff --git a/modules/data.land/tests/Rcheck_reference.log b/modules/data.land/tests/Rcheck_reference.log index 6fa69f29f0a..a635142754a 100644 --- a/modules/data.land/tests/Rcheck_reference.log +++ b/modules/data.land/tests/Rcheck_reference.log @@ -1,18 +1,18 @@ -* using log directory ‘/tmp/RtmpVmBk6A/PEcAn.data.land.Rcheck’ -* using R version 3.5.2 (2018-12-20) +* using log directory ‘/tmp/Rtmp9ipF88/PEcAn.data.land.Rcheck’ +* using R version 4.0.2 (2020-06-22) * using platform: x86_64-pc-linux-gnu (64-bit) * using session charset: UTF-8 -* using options ‘--no-tests --no-manual --as-cran’ +* using options ‘--no-manual --as-cran’ * checking for file ‘PEcAn.data.land/DESCRIPTION’ ... OK * checking extension type ... Package -* this is package ‘PEcAn.data.land’ version ‘1.7.0’ +* this is package ‘PEcAn.data.land’ version ‘1.7.1’ * package encoding: UTF-8 * checking package namespace information ... OK * checking package dependencies ... NOTE -Depends: includes the non-default packages: - ‘datapack’ ‘dataone’ ‘PEcAn.DB’ ‘PEcAn.utils’ ‘redland’ ‘sirt’ ‘sf’ -Adding so many packages to the search path is excessive and importing -selectively is preferable. +Imports includes 31 non-default packages. +Importing from so many packages makes the package vulnerable to any of +them becoming unavailable. Move as many as possible to Suggests and +use conditionally. * checking if this is a source package ... OK * checking if there is a namespace ... OK * checking for executable files ... OK @@ -23,7 +23,7 @@ selectively is preferable. * checking whether package ‘PEcAn.data.land’ can be installed ... WARNING Found the following significant warnings: Note: possible error in 'write_veg(outfolder, ': unused argument (source) -See ‘/tmp/RtmpVmBk6A/PEcAn.data.land.Rcheck/00install.out’ for details. +See ‘/tmp/Rtmp9ipF88/PEcAn.data.land.Rcheck/00install.out’ for details. Information on the location(s) of code generating the ‘Note’s can be obtained by re-running with environment variable R_KEEP_PKG_SOURCE set to ‘yes’. @@ -33,13 +33,8 @@ to ‘yes’. data 9.4Mb FIA_allometry 2.7Mb * checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -Authors@R field gives no person with name and roles. -Authors@R field gives no person with maintainer role, valid email -address and non-empty name. -Package listed in more than one of Depends, Imports, Suggests, Enhances: - ‘sf’ -A package should be listed in only one of these fields. +* checking for future file timestamps ... OK +* checking DESCRIPTION meta-information ... OK * checking top-level files ... NOTE Non-standard file/directory found at top level: ‘contrib’ @@ -54,83 +49,23 @@ Non-standard file/directory found at top level: * checking whether the namespace can be loaded with stated dependencies ... OK * checking whether the namespace can be unloaded cleanly ... OK * checking loading without being on the library search path ... OK -* checking dependencies in R code ... 
diff --git a/modules/data.land/tests/Rcheck_reference.log b/modules/data.land/tests/Rcheck_reference.log
index 6fa69f29f0a..a635142754a 100644
--- a/modules/data.land/tests/Rcheck_reference.log
+++ b/modules/data.land/tests/Rcheck_reference.log
@@ -1,18 +1,18 @@
-* using log directory ‘/tmp/RtmpVmBk6A/PEcAn.data.land.Rcheck’
-* using R version 3.5.2 (2018-12-20)
+* using log directory ‘/tmp/Rtmp9ipF88/PEcAn.data.land.Rcheck’
+* using R version 4.0.2 (2020-06-22)
 * using platform: x86_64-pc-linux-gnu (64-bit)
 * using session charset: UTF-8
-* using options ‘--no-tests --no-manual --as-cran’
+* using options ‘--no-manual --as-cran’
 * checking for file ‘PEcAn.data.land/DESCRIPTION’ ... OK
 * checking extension type ... Package
-* this is package ‘PEcAn.data.land’ version ‘1.7.0’
+* this is package ‘PEcAn.data.land’ version ‘1.7.1’
 * package encoding: UTF-8
 * checking package namespace information ... OK
 * checking package dependencies ... NOTE
-Depends: includes the non-default packages:
-  ‘datapack’ ‘dataone’ ‘PEcAn.DB’ ‘PEcAn.utils’ ‘redland’ ‘sirt’ ‘sf’
-Adding so many packages to the search path is excessive and importing
-selectively is preferable.
+Imports includes 31 non-default packages.
+Importing from so many packages makes the package vulnerable to any of
+them becoming unavailable. Move as many as possible to Suggests and
+use conditionally.
 * checking if this is a source package ... OK
 * checking if there is a namespace ... OK
 * checking for executable files ... OK
@@ -23,7 +23,7 @@ selectively is preferable.
 * checking whether package ‘PEcAn.data.land’ can be installed ... WARNING
 Found the following significant warnings:
   Note: possible error in 'write_veg(outfolder, ': unused argument (source)
-See ‘/tmp/RtmpVmBk6A/PEcAn.data.land.Rcheck/00install.out’ for details.
+See ‘/tmp/Rtmp9ipF88/PEcAn.data.land.Rcheck/00install.out’ for details.
 Information on the location(s) of code generating the ‘Note’s can be
 obtained by re-running with environment variable R_KEEP_PKG_SOURCE set
 to ‘yes’.
@@ -33,13 +33,8 @@ to ‘yes’.
     data           9.4Mb
     FIA_allometry  2.7Mb
 * checking package directory ... OK
-* checking DESCRIPTION meta-information ... NOTE
-Authors@R field gives no person with name and roles.
-Authors@R field gives no person with maintainer role, valid email
-address and non-empty name.
-Package listed in more than one of Depends, Imports, Suggests, Enhances:
-  ‘sf’
-A package should be listed in only one of these fields.
+* checking for future file timestamps ... OK
+* checking DESCRIPTION meta-information ... OK
 * checking top-level files ... NOTE
 Non-standard file/directory found at top level:
   ‘contrib’
@@ -54,83 +49,23 @@ Non-standard file/directory found at top level:
 * checking whether the namespace can be loaded with stated dependencies ... OK
 * checking whether the namespace can be unloaded cleanly ... OK
 * checking loading without being on the library search path ... OK
-* checking dependencies in R code ... WARNING
-'::' or ':::' imports not declared from:
-  ‘coda’ ‘lubridate’ ‘PEcAn.benchmark’ ‘PEcAn.data.atmosphere’
-  ‘PEcAn.visualization’ ‘raster’ ‘RCurl’ ‘traits’ ‘udunits2’
-'library' or 'require' calls not declared from:
-  ‘dplR’ ‘maptools’ ‘mvtnorm’ ‘RCurl’ ‘rjags’
-'library' or 'require' calls in package code:
-  ‘dplR’ ‘fields’ ‘maptools’ ‘mvtnorm’ ‘RCurl’ ‘rgdal’ ‘rjags’
-  Please use :: or requireNamespace() instead.
-  See section 'Suggested packages' in the 'Writing R Extensions' manual.
-Packages in Depends field not imported from:
-  ‘dataone’ ‘datapack’ ‘PEcAn.DB’ ‘PEcAn.utils’ ‘redland’ ‘sf’ ‘sirt’
-  These packages need to be imported from (in the NAMESPACE file)
-  for when this namespace is loaded but not attached.
-* checking S3 generic/method consistency ... WARNING
-subset:
-  function(x, ...)
-subset.layer:
-  function(file, coords, sub.layer, clip, out.dir, out.name)
-
-See section ‘Generic functions and methods’ in the ‘Writing R
-Extensions’ manual.
-
-Found the following apparent S3 methods exported but not registered:
-  subset.layer
-See section ‘Registering S3 methods’ in the ‘Writing R Extensions’
-manual.
+* checking dependencies in R code ... OK
+* checking S3 generic/method consistency ... OK
 * checking replacement functions ... OK
 * checking foreign function calls ... OK
 * checking R code for possible problems ... NOTE
-BADM_IC_process: no visible global function definition for ‘setNames’
 dataone_download: no visible binding for '<<-' assignment to ‘newdir_D1’
 dataone_download: no visible binding for global variable ‘newdir_D1’
-diametergrow : plotend: no visible global function definition for
-  ‘dev.off’
-diametergrow : plotstart: no visible global function definition for
-  ‘pdf’
-diametergrow : tnorm: no visible global function definition for ‘runif’
-diametergrow : tnorm: no visible global function definition for ‘pnorm’
-diametergrow : tnorm: no visible global function definition for ‘qnorm’
-diametergrow : diamint: no visible global function definition for ‘cov’
-diametergrow : diamint: no visible global function definition for ‘var’
-diametergrow : f.update: no visible global function definition for
-  ‘rmvnorm’
-diametergrow : f.update: no visible global function definition for
-  ‘rnorm’
-diametergrow : di.update_new: no visible global function definition for
-  ‘rgamma’
 diametergrow : di.update_new: no visible binding for global variable
   ‘aa’
-diametergrow : di.update: no visible global function definition for
-  ‘dnorm’
 diametergrow : di.update: no visible global function definition for
   ‘runif’
-diametergrow : di.update: no visible global function definition for
-  ‘rgamma’
-diametergrow : sd.update: no visible global function definition for
-  ‘rgamma’
-diametergrow : sp.update: no visible global function definition for
-  ‘rgamma’
-diametergrow : se.update: no visible global function definition for
-  ‘rgamma’
 diametergrow: no visible binding for global variable ‘settings’
-diametergrow: no visible global function definition for ‘rnorm’
-diametergrow: no visible global function definition for ‘rgamma’
-diametergrow: no visible binding for global variable ‘quantile’
-diametergrow: no visible global function definition for ‘write.table’
-diametergrow: no visible global function definition for ‘lm’
-diametergrow: no visible global function definition for ‘predict.lm’
-diametergrow: no visible global function definition for ‘par’
-diametergrow: no visible global function definition for ‘plot’
 diametergrow: no visible global function definition for ‘lines’
 diametergrow: no visible global function definition for ‘density’
 diametergrow: no visible global function definition for ‘dgamma’
 diametergrow: no visible global function definition for ‘title’
-diametergrow: no visible global function definition for ‘dnorm’
 diametergrow: no visible global function definition for ‘text’
 diametergrow: no visible global function definition for ‘abline’
 diametergrow: no visible global function definition for ‘points’
@@ -178,11 +113,6 @@ extract_soil_gssurgo : <anonymous>: no visible global function
 extract_soil_nc: no visible global function definition for ‘median’
 extract_veg: possible error in write_veg(outfolder, start_date,
   end_date, veg_info = veg_info, source): unused argument (source)
-fia.to.psscss: no visible global function definition for ‘db.open’
-fia.to.psscss: no visible global function definition for ‘db.close’
-fia.to.psscss: no visible global function definition for
-  ‘dbfile.input.check’
-fia.to.psscss: no visible global function definition for ‘db.query’
 fia.to.psscss: no visible global function definition for ‘data’
 fia.to.psscss: no visible binding for global variable ‘pftmapping’
 fia.to.psscss: no visible global function definition for ‘na.omit’
@@ -192,9 +122,6 @@ fia.to.psscss: no visible global function definition for
   ‘dbfile.input.insert’
 find.land: no visible global function definition for ‘data’
 find.land: no visible binding for global variable ‘wrld_simpl’
-find.land: no visible global function definition for ‘SpatialPoints’
-find.land: no visible global function definition for ‘CRS’
-find.land: no visible global function definition for ‘proj4string’
 find.land: no visible global function definition for ‘over’
 find.land: no visible binding for global variable ‘land’
 format_identifier: no visible binding for '<<-' assignment to ‘doi1’
@@ -210,10 +137,6 @@ get_veg_module: no visible global function definition for
   ‘convert.input’
 get_veg_module: no visible binding for global variable ‘input’
 get_veg_module: no visible binding for global variable ‘new.site’
-get.elevation: no visible global function definition for ‘getURL’
-get.elevation: no visible global function definition for ‘xmlTreeParse’
-get.elevation: no visible global function definition for ‘xpathApply’
-get.elevation: no visible global function definition for ‘xmlValue’
 gSSURGO.Query: no visible global function definition for ‘xmlTreeParse’
 gSSURGO.Query: no visible global function definition for ‘xmlRoot’
 gSSURGO.Query: no visible global function definition for ‘getNodeSet’
@@ -233,12 +156,9 @@ InventoryGrowthFusion: no visible global function definition for
   ‘runif’
 InventoryGrowthFusion: no visible global function definition for ‘var’
 InventoryGrowthFusion: no visible global function definition for
-  ‘jags.model’
+  ‘load.module’
 InventoryGrowthFusion: no visible global function definition for
   ‘coda.samples’
-InventoryGrowthFusion: no visible global function definition for ‘plot’
-InventoryGrowthFusion: no visible global function definition for
-  ‘load.module’
 InventoryGrowthFusion: no visible global function definition for
   ‘as.mcmc.list’
 InventoryGrowthFusion : <anonymous>: no visible global function
@@ -251,8 +171,6 @@ InventoryGrowthFusionDiagnostics: no visible global function definition
   for ‘layout’
 InventoryGrowthFusionDiagnostics: no visible binding for global
   variable ‘data’
-InventoryGrowthFusionDiagnostics: no visible global function definition
-  for ‘plot’
 InventoryGrowthFusionDiagnostics: no visible global function definition
   for ‘points’
 InventoryGrowthFusionDiagnostics: no visible global function definition
@@ -279,25 +197,20 @@ is.land: no visible binding for global variable ‘met.nc’
 match_species_id: no visible binding for global variable ‘id’
 match_species_id: no visible binding for global variable ‘genus’
 match_species_id: no visible binding for global variable ‘species’
-match_species_id: no visible binding for global variable ‘input_code’
 matchInventoryRings: no visible global function definition for
   ‘combine.rwl’
 matchInventoryRings: no visible global function definition for
   ‘write.table’
 plot2AGB: no visible global function definition for ‘txtProgressBar’
-plot2AGB: no visible global function definition for ‘rmvnorm’
 plot2AGB: no visible global function definition for ‘setTxtProgressBar’
 plot2AGB: no visible binding for global variable ‘sd’
 plot2AGB: no visible global function definition for ‘pdf’
 plot2AGB: no visible global function definition for ‘par’
-plot2AGB: no visible global function definition for ‘plot’
 plot2AGB: no visible global function definition for ‘lines’
-plot2AGB: no visible global function definition for ‘dev.off’
 put_veg_module: no visible global function definition for ‘db.close’
 put_veg_module: no visible global function definition for ‘db.query’
 put_veg_module: no visible global function definition for
   ‘convert.input’
-Read_Tucson: no visible global function definition for ‘read.tucson’
 Read.IC.info.BADM: no visible global function definition for ‘read.csv’
 Read.IC.info.BADM: no visible binding for global variable ‘NA_L2CODE’
 Read.IC.info.BADM: no visible binding for global variable ‘VARIABLE’
@@ -317,8 +230,6 @@ Read.IC.info.BADM : <anonymous>: no visible binding for global variable
 read.plot: no visible global function definition for ‘read.csv’
 read.velmex: no visible global function definition for ‘read.fwf’
 sample_ic: no visible global function definition for ‘complete.cases’
-shp2kml: no visible global function definition for ‘ogrListLayers’
-shp2kml: no visible global function definition for ‘ogrInfo’
 soil_params: no visible binding for global variable ‘soil.name’
 soil_params: no visible binding for global variable ‘xsand.def’
 soil_params: no visible binding for global variable ‘xclay.def’
@@ -346,30 +257,26 @@ write_ic: no visible binding for global variable ‘input_veg’
 Undefined global functions or variables:
   . aa abline air.cond air.hcap Area as as_Spatial as.mcmc.list
   aws050wta bagitFile boxplot clay.cond clay.hcap coda.samples coef
-  combine.rwl complete.cases comppct_r convert.input cor cov CRS data
-  DATAVALUE db.close db.open db.query dbfile.input.check
-  dbfile.input.insert density dev.off dgamma dnorm doi1 fieldcp.K
-  filter gelman.diag genus getNodeSet getURL grav GROUP_ID h2o.cond
-  hist hzdept_r id input input_code input_veg is.mcmc.list jags.model
-  kair kclay ksand ksilt land layout legend lines lm load.module
-  mcmc.list2init median met.nc mn mnId mukey mutate NA_L1CODE NA_L2CODE
-  na.omit new.site newdir_D1 ogrInfo ogrListLayers over pairs par pdf
-  pftmapping plot pnorm points predict.lm proj4string qnorm quantile
-  read.csv read.fwf read.tucson resource_map rgamma rmvnorm rnorm runif
-  sand.cond sand.hcap sd setNames settings setTxtProgressBar silt.cond
-  silt.hcap SITE_ID soil.name soilcp.MPa soilwp.MPa SpatialPoints
-  species text texture title txtProgressBar var VARIABLE VARIABLE_GROUP
-  write.table wrld_simpl xclay.def xmlRoot xmlToList xmlTreeParse
-  xmlValue xpathApply xsand.def zip_contents
+  combine.rwl complete.cases comppct_r convert.input cor data DATAVALUE
+  db.close db.query dbfile.input.insert density dgamma doi1 fieldcp.K
+  filter gelman.diag genus getNodeSet grav GROUP_ID h2o.cond hist
+  hzdept_r id input input_veg is.mcmc.list kair kclay ksand ksilt land
+  layout legend lines lm load.module mcmc.list2init median met.nc mn
+  mnId mukey mutate NA_L1CODE NA_L2CODE na.omit new.site newdir_D1 over
+  pairs par pdf pftmapping points quantile read.csv read.fwf
+  resource_map runif sand.cond sand.hcap sd setNames settings
+  setTxtProgressBar silt.cond silt.hcap SITE_ID soil.name soilcp.MPa
+  soilwp.MPa species text texture title txtProgressBar var VARIABLE
+  VARIABLE_GROUP write.table wrld_simpl xclay.def xmlRoot xmlToList
+  xmlTreeParse xsand.def zip_contents
 Consider adding
   importFrom("graphics", "abline", "boxplot", "hist", "layout", "legend",
-             "lines", "pairs", "par", "plot", "points", "text", "title")
-  importFrom("grDevices", "dev.off", "pdf")
+             "lines", "pairs", "par", "points", "text", "title")
+  importFrom("grDevices", "pdf")
   importFrom("methods", "as")
-  importFrom("stats", "coef", "complete.cases", "cor", "cov", "density",
-             "dgamma", "dnorm", "filter", "lm", "median", "na.omit",
-             "pnorm", "predict.lm", "qnorm", "quantile", "rgamma",
-             "rnorm", "runif", "sd", "setNames", "var")
+  importFrom("stats", "coef", "complete.cases", "cor", "density",
+             "dgamma", "filter", "lm", "median", "na.omit", "quantile",
+             "runif", "sd", "setNames", "var")
   importFrom("utils", "data", "read.csv", "read.fwf",
              "setTxtProgressBar", "txtProgressBar", "write.table")
 to your NAMESPACE file (and ensure that your DESCRIPTION Imports field
@@ -378,11 +285,8 @@ contains 'methods').
 Found the following calls to data() loading into the global environment:
 File ‘PEcAn.data.land/R/fia2ED.R’:
   data(pftmapping)
-File ‘PEcAn.data.land/R/find.land.R’:
-  data("wrld_simpl", package = "maptools")
 See section ‘Good practice’ in ‘?data’.
-* checking Rd files ... WARNING
-checkRd: (7) gSSURGO.Query.Rd:29-35: Tag \dontrun not recognized
+* checking Rd files ... OK
 * checking Rd metadata ... OK
 * checking Rd line widths ... NOTE
 Rd file 'dataone_download.Rd':
@@ -494,9 +398,6 @@ Undocumented arguments in documentation object 'write_ic'
   ‘in.path’ ‘in.name’ ‘start_date’ ‘end_date’ ‘outfolder’ ‘model’
   ‘new_site’ ‘pfts’ ‘source’ ‘overwrite’ ‘...’
-Undocumented arguments in documentation object 'write_veg'
-  ‘outfolder’ ‘start_date’ ‘veg_info’ ‘source’
-
 Functions with \usage entries need to have the appropriate \alias
 entries, and all their arguments documented.
 The \usage entries must correspond to syntactically valid R code.
@@ -529,7 +430,6 @@ Please use e.g. ‘inst/extdata’ for non-R data files
 Object named ‘.Random.seed’ found in dataset: ‘soil_class’
 Please remove it.
-
 * checking data for non-ASCII characters ... WARNING
   Warning: found non-ASCII strings
     'US-Bar,272,44.0646,-71.2881,NA,4297,GRP_LOCATION,LOCATION_COMMENT,... Bartlett Experimental Forest. ...The correct location is 44 deg 3' 52.702794\ N [NOT 43 deg as I had earlier specified] 71 deg 17 17.0766744\" W","5","NORTHERN FORESTS","5.3","ATLANTIC HIGHLANDS"
@@ -1385,6 +1285,9 @@ Please remove it.
     US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_PHEN,Mixed/unknown,13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS
     US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_SPP,PIPO (NRCS plant code),13,TEMPERATE SIERRAS,13.1,UPPER GILA MOUNTAINS
     US-Fmf,2160,35.1426,-111.7273,NA,27372,GRP_BIOMASS_CHEM,BIOMASS_COMMENT,pinus ponderosa' in object 'BADM'
+    'US-SP1,50,29.7381,-82.2188,NA,29203,GRP_AG_LIT_BIOMASS,AG_LIT_BIOMASS_COMMENT,842 277,8,EASTERN TEMPERATE FORESTS,8.5,MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS' in object 'BADM'
+    'US-SRM,1120,31.8214,-110.8661,NA,27355,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,*aboveground tree biomass = (EXP(1.6*LN(35)-0.58)/100)/1000*.47 estimated using equation given in Browning et al. 2008EcolApp,18,933 (be careful<80>numbers not verified), biomass of herbage production given in Follet, SRER100 issue,12,SOUTHERN SEMIARID HIGHLANDS,12.1,WESTERN SIERRA MADRE PIEDMONT' in object 'BADM'
+    'US-SRM,1120,31.8214,-110.8661,NA,27975,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,*aboveground tree biomass = (EXP(1.6*LN(35)-0.58)/100)/1000*.47 estimated using equation given in Browning et al. 2008EcolApp,18,933 (be careful<80>numbers not verified), biomass of herbage production given in Follet, SRER100 issue,12,SOUTHERN SEMIARID HIGHLANDS,12.1,WESTERN SIERRA MADRE PIEDMONT' in object 'BADM'
     'US-Shd,346,36.9333,-96.6833,NA,23740,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(5-6); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM'
     'US-Shd,346,36.9333,-96.6833,NA,23741,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(6-7); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM'
     'US-Shd,346,36.9333,-96.6833,NA,23742,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(4-5); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM'
@@ -1395,9 +1298,6 @@ Please remove it.
     'US-Shd,346,36.9333,-96.6833,NA,24781,GRP_SOIL_TEX,SOIL_TEX_COMMENT,(4-5); Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM'
     'US-Shd,346,36.9333,-96.6833,NA,26057,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM'
     'US-Shd,346,36.9333,-96.6833,NA,26058,GRP_SOIL_TEX,SOIL_TEX_COMMENT,Soil at Tallgrass Prairie Site: silty clay loam of Wolco-Dwight complex (Pachic Argiustolls & Typic Natrustolls); George Burba. Y:\Okla-SR\LAIBiomass&Soils\Properties of OK soils.doc.,9,GREAT PLAINS,9.4,SOUTH CENTRAL SEMIARID PRAIRIES' in object 'BADM'
-    'US-SP1,50,29.7381,-82.2188,NA,29203,GRP_AG_LIT_BIOMASS,AG_LIT_BIOMASS_COMMENT,842 277,8,EASTERN TEMPERATE FORESTS,8.5,MISSISSIPPI ALLUVIAL AND SOUTHEAST USA COASTAL PLAINS' in object 'BADM'
-    'US-SRM,1120,31.8214,-110.8661,NA,27355,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,*aboveground tree biomass = (EXP(1.6*LN(35)-0.58)/100)/1000*.47 estimated using equation given in Browning et al. 2008EcolApp,18,933 (be careful<80>numbers not verified), biomass of herbage production given in Follet, SRER100 issue,12,SOUTHERN SEMIARID HIGHLANDS,12.1,WESTERN SIERRA MADRE PIEDMONT' in object 'BADM'
-    'US-SRM,1120,31.8214,-110.8661,NA,27975,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,*aboveground tree biomass = (EXP(1.6*LN(35)-0.58)/100)/1000*.47 estimated using equation given in Browning et al. 2008EcolApp,18,933 (be careful<80>numbers not verified), biomass of herbage production given in Follet, SRER100 issue,12,SOUTHERN SEMIARID HIGHLANDS,12.1,WESTERN SIERRA MADRE PIEDMONT' in object 'BADM'
     'US-UMB,234,45.5598,-84.7138,NA,18391,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM'
     'US-UMB,234,45.5598,-84.7138,NA,18392,GRP_AG_BIOMASS_TREE,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower a plot center. Litter trap collection date is 20011020 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM'
     'US-UMB,234,45.5598,-84.7138,NA,18423,GRP_AG_BIOMASS_OTHER,AG_BIOMASS_COMMENT,Litter traps are .179 m^2 with one trap in each of 60 - 0.08 ha plots and 20 traps in a 1.13 ha plot with flux tower at plot center. Litter trap collection date is 20011020 20 d,5,NORTHERN FORESTS,5.2,MIXED WOOD SHIELD' in object 'BADM'
@@ -1421,6 +1321,16 @@ Please remove it.
     soil_class.RData  117Kb  18Kb  xz
 * checking examples ... OK
 * checking for unstated dependencies in ‘tests’ ... OK
-* checking tests ... SKIPPED
+* checking tests ...
+  Running ‘testthat.R’
+ OK
+* checking for non-standard things in the check directory ... OK
+* checking for detritus in the temp directory ... OK
 * DONE
-Status: 11 WARNINGs, 6 NOTEs
+
+Status: 8 WARNINGs, 5 NOTEs
+See
+  ‘/tmp/Rtmp9ipF88/PEcAn.data.land.Rcheck/00check.log’
+for details.
+
+
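The remaining "no visible global function definition" notes in the log above are the kind that its suggested importFrom() calls silence. With roxygen2, the usual fix is to declare the import on the function and regenerate NAMESPACE (a sketch; `summarize_growth` is a hypothetical function, not one of the flagged ones):

```r
#' Summarize growth increments
#'
#' @param x numeric vector of ring widths
#' @return named numeric vector with the median and variance of x
#' @importFrom stats median var
#' @export
summarize_growth <- function(x) {
  c(median = stats::median(x), var = stats::var(x))
}
```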
diff --git a/scripts/generate_dependencies.R b/scripts/generate_dependencies.R
index ef824e8d4b7..eac872b8bdc 100755
--- a/scripts/generate_dependencies.R
+++ b/scripts/generate_dependencies.R
@@ -90,7 +90,7 @@ for (name in names(depends)) {
 # write for dockerfile
 cat("#!/bin/bash",
     "# autogenerated do not edit",
-    "# use scripts/generate.dockerfile.depends",
+    "# use scripts/generate_dependencies.R",
     "",
     "# stop on error",
     "set -e",
@@ -109,3 +109,25 @@ cat("#!/bin/bash",
     "install2.r -e -s -l \"${RLIB}\" -n -1\\\n    ",
     paste(sort(docker), sep = "", collapse = " \\\n    ")),
     file = "docker/depends/pecan.depends", sep = "\n", append = FALSE)
+
+# write for CI
+cat("#!/usr/bin/env Rscript",
+    "# autogenerated do not edit",
+    "# use scripts/generate_dependencies.R",
+    "",
+    "# Don\'t use X11 for rgl",
+    "Sys.setenv(RGL_USE_NULL = TRUE)",
+    "rlib_user = Sys.getenv('R_LIBS_USER')",
+    "rlib = ifelse(rlib_user == '', '/usr/local/lib/R/site-library', rlib_user)",
+    "Sys.setenv(RLIB = rlib)",
+    "",
+    "# install remotes first in case packages are referenced in dependencies",
+    "lapply(c(",
+    paste(shQuote(sort(remotes)), collapse = ",\n"),
+    "), remotes::install_github, lib = rlib)",
+    "",
+    "# install all packages (depends, imports, suggests)",
+    "wanted <- c(", paste(shQuote(sort(docker)), sep = "", collapse = ",\n"), ")",
+    "missing <- wanted[!(wanted %in% installed.packages()[,'Package'])]",
+    "install.packages(missing, lib = rlib)",
+    file = "docker/depends/pecan.depends.R", sep = "\n", append = FALSE)
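For reference, the generated docker/depends/pecan.depends.R comes out looking like this, condensed (a sketch; the three package names stand in for the real sorted `docker` vector):

```r
#!/usr/bin/env Rscript
# autogenerated do not edit
Sys.setenv(RGL_USE_NULL = TRUE)  # don't use X11 for rgl
rlib_user <- Sys.getenv("R_LIBS_USER")
rlib <- ifelse(rlib_user == "", "/usr/local/lib/R/site-library", rlib_user)

# install all packages (depends, imports, suggests) not yet present
wanted  <- c("abind", "coda", "ncdf4")  # illustrative stand-ins
missing <- wanted[!(wanted %in% installed.packages()[, "Package"])]
install.packages(missing, lib = rlib)
```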
diff --git a/tests/ghaction.sipnet_Postgres.xml b/tests/fail.ghaction.sipnet_Postgres.xml
similarity index 89%
rename from tests/ghaction.sipnet_Postgres.xml
rename to tests/fail.ghaction.sipnet_Postgres.xml
index f1ffd6935f1..8b8fab9f297 100644
--- a/tests/ghaction.sipnet_Postgres.xml
+++ b/tests/fail.ghaction.sipnet_Postgres.xml
@@ -39,7 +39,7 @@
 
-    ${HOME}/sipnet_unk/sipnet
+    ${GITHUB_WORKSPACE}/sipnet/sipnet
     SIPNET
 
@@ -49,7 +49,7 @@
 
-    ${HOME}/sipnet_unk/niwot_tutorial.clim
+    ${GITHUB_WORKSPACE}/sipnet/Sites/Niwot/niwot.clim
     2002-01-01 00:00:00
 
diff --git a/tests/ghaction.sipnet_PostgreSQL.xml b/tests/ghaction.sipnet_PostgreSQL.xml
index 8301b03a59a..202750c9bbe 100644
--- a/tests/ghaction.sipnet_PostgreSQL.xml
+++ b/tests/ghaction.sipnet_PostgreSQL.xml
@@ -39,7 +39,7 @@
 
-    ${HOME}/sipnet_unk/sipnet
+    ${GITHUB_WORKSPACE}/sipnet/sipnet
     SIPNET
 
@@ -49,7 +49,7 @@
 
-    ${HOME}/sipnet_unk/niwot_tutorial.clim
+    ${GITHUB_WORKSPACE}/sipnet/Sites/Niwot/niwot.clim
     2002-01-01 00:00:00
 

From 0fe13f24ef61a1922c1ef7e750f644ef93ac7f3b Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Thu, 15 Oct 2020 16:23:06 -0700
Subject: [PATCH 1479/2289] Update Makefile.depends

removed photosynthesis module from makefile depends; no longer required
per PR #2690
---
 Makefile.depends | 1 -
 1 file changed, 1 deletion(-)

diff --git a/Makefile.depends b/Makefile.depends
index 70f86e1fbc0..9f9f077976d 100644
--- a/Makefile.depends
+++ b/Makefile.depends
@@ -18,7 +18,6 @@ $(call depends,modules/data.land): | .install/modules/benchmark .install/modules
 $(call depends,modules/data.remote): | .install/base/db .install/base/utils .install/base/logger .install/base/remote
 $(call depends,modules/emulator): | .install/base/logger
 $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .install/base/logger .install/base/settings
-$(call depends,modules/photosynthesis): | .install/base/logger
 $(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis
 $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed
 $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger

From fd8673c873a8383c8e3f16e8fd598936ca74f0a9 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Fri, 16 Oct 2020 07:23:12 -0500
Subject: [PATCH 1480/2289] Update Makefile.depends

---
 Makefile.depends | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Makefile.depends b/Makefile.depends
index 9f9f077976d..3fd75b3ddfa 100644
--- a/Makefile.depends
+++ b/Makefile.depends
@@ -18,6 +18,7 @@ $(call depends,modules/data.land): | .install/modules/benchmark .install/modules
 $(call depends,modules/data.remote): | .install/base/db .install/base/utils .install/base/logger .install/base/remote
 $(call depends,modules/emulator): | .install/base/logger
 $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .install/base/logger .install/base/settings
+$(call depends,modules/photosynthesis): | 
 $(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis
 $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed
 $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger

From 29927c165df4612e67aae9a53ce07eb2a3f3a0d9 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Fri, 16 Oct 2020 07:29:52 -0500
Subject: [PATCH 1481/2289] Update pecan.depends

---
 docker/depends/pecan.depends | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends
index 1fc77b201d0..169045a7cd1 100644
--- a/docker/depends/pecan.depends
+++ b/docker/depends/pecan.depends
@@ -26,9 +26,9 @@ install2.r -e -s -l "${RLIB}" -n -1\
     bit64 \
     coda \
     corrplot \
+    data.table \
     dataone \
     datapack \
-    data.table \
     DBI \
     dbplyr \
     devtools \
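The data.table line moves up because the generated list follows sort() order, and the new position is consistent with C-locale (byte) collation, where "." sorts before "o". The same ordering can be reproduced in R with the locale-independent radix sort:

```r
# C-locale byte order: "." (0x2E) < "o" (0x6F), so data.table precedes dataone
sort(c("dataone", "data.table", "datapack"), method = "radix")
#> [1] "data.table" "dataone"    "datapack"
```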
From b242e4fcb2c65b5e5fffea2f2b45b21acbce297b Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Fri, 16 Oct 2020 11:28:18 -0500
Subject: [PATCH 1482/2289] GitHub actions docker (#2701)

* build docker images as github action
---
 .github/workflows/ci.yml      |   1 -
 .github/workflows/depends.yml |   1 -
 .github/workflows/docker.yml  | 102 ++++++++++++++++++++++++++++++++++
 CHANGELOG.md                  |   1 +
 docker.sh                     |  15 +++--
 5 files changed, 112 insertions(+), 8 deletions(-)
 create mode 100644 .github/workflows/docker.yml

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 8049aebc336..f2e34f7f167 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -5,7 +5,6 @@ on:
   push:
     branches:
       - master
       - develop
-      - '*'
     tags:
       - '*'
 
diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml
index 0ced3c814a7..3b00122c1e0 100644
--- a/.github/workflows/depends.yml
+++ b/.github/workflows/depends.yml
@@ -5,7 +5,6 @@ on:
   push:
     branches:
      - develop
      - master
-      - '*'
 
 # this runs on the develop branch
 schedule:
diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml
new file mode 100644
index 00000000000..bc038054983
--- /dev/null
+++ b/.github/workflows/docker.yml
@@ -0,0 +1,102 @@
+name: Docker
+
+# initially we would use on: [release] as well, the problem is that
+# the code in clowder would not know what branch the code is in,
+# and would not set the right version flags.
+
+# This will run when:
+# - when new code is pushed to master/develop to push the tags
+#   latest and develop
+# - when a pull request is created and updated to make sure the
+#   Dockerfile is still valid.
+# To be able to push to dockerhub, this expects the following
+# secrets to be set in the project:
+# - DOCKERHUB_USERNAME : username that can push to the org
+# - DOCKERHUB_PASSWORD : password associated with the username
+on:
+  push:
+    branches:
+      - master
+      - develop
+
+  pull_request:
+
+# Certain actions will only run when this is the master repo.
+env:
+  MASTER_REPO: PecanProject/pecan
+  DOCKERHUB_ORG: pecan
+
+jobs:
+  docker:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v2
+
+      # calculate some variables that are used later
+      - name: get version tag
+        run: |
+          BRANCH=${GITHUB_REF##*/}
+          echo "GITHUB_BRANCH=${BRANCH}" >> $GITHUB_ENV
+          if [ "$BRANCH" == "master" ]; then
+            version="$(awk '/Version:/ { print $2 }' base/all/DESCRIPTION)"
+            tags="latest"
+            oldversion=""
+            while [ "${oldversion}" != "${version}" ]; do
+              oldversion="${version}"
+              tags="${tags},${version}"
+              version=${version%.*}
+            done
+            echo "PECAN_VERSION=$(awk '/Version:/ { print $2 }' base/all/DESCRIPTION)" >> $GITHUB_ENV
+            echo "PECAN_TAGS=${tags}" >> $GITHUB_ENV
+          elif [ "$BRANCH" == "develop" ]; then
+            echo "PECAN_VERSION=develop" >> $GITHUB_ENV
+            echo "PECAN_TAGS=develop" >> $GITHUB_ENV
+          else
+            echo "PECAN_VERSION=develop" >> $GITHUB_ENV
+            echo "PECAN_TAGS=develop" >> $GITHUB_ENV
+          fi
+
+      # use shell script to build, there is some complexity in this
+      - name: create images
+        run: ./docker.sh -i github
+        env:
+          PECAN_GIT_CHECKSUM: ${{ github.sha }}
+          PECAN_GIT_BRANCH: ${GITHUB_BRANCH}
+          VERSION: ${PECAN_VERSION}
+
+      # push all images to github
+      - name: Publish to GitHub
+        if: github.event_name == 'push' && github.repository == env.MASTER_REPO
+        run: |
+          echo "${INPUT_PASSWORD}" | docker login -u ${INPUT_USERNAME} --password-stdin ${INPUT_REGISTRY}
+          repo=$(echo ${{ github.repository }} | tr 'A-Z' 'a-z')
+          for image in $(docker image ls pecan/*:github --format "{{ .Repository }}"); do
+            for v in ${PECAN_TAGS}; do
+              docker tag ${image}:github ${INPUT_REGISTRY}/${DOCKER_PROJECT}/${image#pecan/}:${v}
+              docker push docker.pkg.github.com/${repo}/${image#pecan/}:${v}
+            done
+          done
+          docker logout
+        env:
+          DOCKER_PROJECT: ${{ github.repository_owner }}/pecan
+          INPUT_REGISTRY: docker.pkg.github.com
+          INPUT_USERNAME: ${{ github.actor }}
+          INPUT_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
+
+      # push all images to dockerhub
+      - name: Publish to DockerHub
+        if: github.event_name == 'push' && github.repository == env.MASTER_REPO
+        run: |
+          echo "${INPUT_PASSWORD}" | docker login -u ${INPUT_USERNAME} --password-stdin
+          for image in $(docker image ls pecan/*:github --format "{{ .Repository }}"); do
+            for v in ${PECAN_TAGS}; do
+              docker tag ${image}:github ${DOCKER_PROJECT}/${image#pecan/}:${v}
+              docker push ${{ env.DOCKERHUB_ORG }}/${image#pecan/}:${v}
+            done
+          done
+          docker logout
+        env:
+          DOCKER_PROJECT: ${{ env.DOCKERHUB_ORG }}
+          INPUT_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
+          INPUT_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
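The while loop in the "get version tag" step expands the release version into a cumulative tag list, so an image built from version 1.7.2 is also tagged 1.7 and 1. The same logic as a small R sketch (the shell strips the last dotted component with ${version%.*}):

```r
# Expand "1.7.2" into "latest,1.7.2,1.7,1", mirroring the workflow's shell loop
expand_tags <- function(version) {
  tags <- "latest"
  repeat {
    tags <- c(tags, version)
    shorter <- sub("\\.[^.]*$", "", version)  # drop the last ".component"
    if (identical(shorter, version)) break    # nothing left to strip
    version <- shorter
  }
  paste(tags, collapse = ",")
}
expand_tags("1.7.2")  # "latest,1.7.2,1.7,1"
```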
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9c67b8994a7..a0f32663468 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -58,6 +58,7 @@ This is a major change:
 
 ### Added
 
+- Now creates docker images during a PR, when merged it will push them to docker hub and github packages
 - New functionality to the PEcAn API to GET information about PFTs, formats & sites, submit workflows in XML or JSON formats & download relevant inputs/outputs/files related to runs & workflows (#2674 #2665 #2662 #2655)
 - Functions to send/receive messages to/from rabbitmq.
 - Documentation in [DEV-INTRO.md](DEV-INTRO.md) on development in a docker environment (#2553)
diff --git a/docker.sh b/docker.sh
index f3d2473f9f1..960e2c06bd2 100755
--- a/docker.sh
+++ b/docker.sh
@@ -45,7 +45,7 @@ while getopts dfhi:r: opt; do
       ;;
     h)
       cat << EOF
-$0 [-dfh] [-i ] [-r ]
+$0 [-dfh] [-i ] [-r ] [-r

Date: Sun, 18 Oct 2020 07:53:37 -0500
Subject: [PATCH 1483/2289] fix missing , in execute

---
 docker/monitor/monitor.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docker/monitor/monitor.py b/docker/monitor/monitor.py
index 159d4fe3f5c..75c5c98e42b 100644
--- a/docker/monitor/monitor.py
+++ b/docker/monitor/monitor.py
@@ -165,7 +165,7 @@ def insert_model(model_info):
         logging.debug("Adding host")
         cur = conn.cursor()
         cur.execute('INSERT INTO machines (hostname) '
-                    'VALUES (%s) RETURNING id', (pecan_fqdn, ))
+                    'VALUES (%s) RETURNING id', (pecan_fqdn,))
         result = cur.fetchone()
         cur.close()
         if not result:
@@ -185,7 +185,7 @@ def insert_model(model_info):
         logging.debug("Adding modeltype")
         cur = conn.cursor()
         cur.execute('INSERT INTO modeltypes (name) '
-                    'VALUES (%s) RETURNING id', (model_info['type']))
+                    'VALUES (%s) RETURNING id', (model_info['type'],))
         result = cur.fetchone()
         cur.close()
         if not result:
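The bug being fixed: in Python, ('x') is just the parenthesized string 'x', while ('x',) is a one-element tuple, and execute() needs a sequence of parameters. The same single-parameter rule applies when binding from R; a sketch of the equivalent insert, assuming DBI with RPostgres against the same BETY schema (connection details hypothetical):

```r
library(DBI)

con <- dbConnect(RPostgres::Postgres(), dbname = "bety")  # hypothetical connection

# params must be a list even for one placeholder; list() here plays the
# same role as Python's trailing comma in (pecan_fqdn,)
new_id <- dbGetQuery(
  con,
  "INSERT INTO machines (hostname) VALUES ($1) RETURNING id",
  params = list("pecan.example.edu")  # hostname is illustrative
)
```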
From ec6113db51b9681b7c4dac9790dcc6c3df1660a8 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Sun, 18 Oct 2020 16:02:28 -0500
Subject: [PATCH 1484/2289] Update docker.yml

---
 .github/workflows/docker.yml | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml
index bc038054983..1b4eda65329 100644
--- a/.github/workflows/docker.yml
+++ b/.github/workflows/docker.yml
@@ -73,13 +73,12 @@ jobs:
           repo=$(echo ${{ github.repository }} | tr 'A-Z' 'a-z')
           for image in $(docker image ls pecan/*:github --format "{{ .Repository }}"); do
             for v in ${PECAN_TAGS}; do
-              docker tag ${image}:github ${INPUT_REGISTRY}/${DOCKER_PROJECT}/${image#pecan/}:${v}
-              docker push docker.pkg.github.com/${repo}/${image#pecan/}:${v}
+              docker tag ${image}:github ${INPUT_REGISTRY}/${repo}/${image#pecan/}:${v}
+              docker push ${INPUT_REGISTRY}/${repo}/${image#pecan/}:${v}
             done
           done
           docker logout
         env:
-          DOCKER_PROJECT: ${{ github.repository_owner }}/pecan
           INPUT_REGISTRY: docker.pkg.github.com
           INPUT_USERNAME: ${{ github.actor }}
           INPUT_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
@@ -91,12 +90,11 @@ jobs:
           echo "${INPUT_PASSWORD}" | docker login -u ${INPUT_USERNAME} --password-stdin
           for image in $(docker image ls pecan/*:github --format "{{ .Repository }}"); do
             for v in ${PECAN_TAGS}; do
-              docker tag ${image}:github ${DOCKER_PROJECT}/${image#pecan/}:${v}
+              docker tag ${image}:github {{ env.DOCKERHUB_ORG }}/${image#pecan/}:${v}
               docker push ${{ env.DOCKERHUB_ORG }}/${image#pecan/}:${v}
             done
           done
           docker logout
         env:
-          DOCKER_PROJECT: ${{ env.DOCKERHUB_ORG }}
          INPUT_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          INPUT_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}

From 87ff636c7d61dc0497535bc4a6b7c8a02acd1847 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Sun, 18 Oct 2020 23:25:53 -0500
Subject: [PATCH 1485/2289] fix bookdown

---
 .github/workflows/book.yml                    | 105 +++++++++---------
 .../03_topical_pages/11_adding_to_pecan.Rmd   |   4 +-
 .../11_images/bety_new_model.png              | Bin 0 -> 48869 bytes
 .../11_images}/var_record.png                 | Bin
 book_source/index.Rmd                         |  18 +++
 5 files changed, 73 insertions(+), 54 deletions(-)
 create mode 100644 book_source/03_topical_pages/11_images/bety_new_model.png
 rename book_source/{02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images => 03_topical_pages/11_images}/var_record.png (100%)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index 2983ece4197..50dad46dd5f 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -1,60 +1,61 @@
+name: renderbook
+
 on:
   push:
-    branches:
-      - master
-      - develop
-      - release/*
-    tags:
-      - v1
-      - v1*
+    branches:
+      - master
+      - develop
 
-# render book
-name: renderbook
+    tags:
+      - '*'
+
+  pull_request:
 
 jobs:
-  bookdown:
-    name: Render-Book
+  bookdown:
+    name: Render-Book
     runs-on: ubuntu-latest
-    container: pecan/base:latest
+
+    container:
+      image: pecan/depends:R${{ matrix.R }}
 
     steps:
-    - uses: actions/checkout@v2
-    - uses: r-lib/actions/setup-r@v1
-    - uses: r-lib/actions/setup-pandoc@v1
-    - name: Install rmarkdown
-      run: Rscript -e 'install.packages(c("rmarkdown","bookdown"))'
-    - name: Render Book
-      run: cd book_source && Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd")'
-    - uses: actions/upload-artifact@v2
-      with:
-        name: _book
-        path: book_source/_book/
-
-
-  checkout-and-deploy:
-    runs-on: ubuntu-latest
-    needs: bookdown
-    steps:
-    - name: Download artifact
-      uses: actions/download-artifact@v2
-      with:
-        # Artifact name
-        name: _book # optional
-        # Destination path
-        path: _book/ # optional
-        # repo-token: ${{ secrets.GITHUB_TOKEN }}
-    - name: Checkout documentation repo
-      uses: actions/checkout@v2
-      with:
-        repository: ${{ github.repository_owner }}/pecan-documentation
-        path: pecan-documentation
-        token: ${{ secrets.GH_PAT }}
-    - run: |
-        export VERSION=${GITHUB_REF##*/}_test
-        cd pecan-documentation && mkdir -p $VERSION
-        git config --global user.email "pecanproj@gmail.com"
-        git config --global user.name "GitHub Documentation Robot"
-        rsync -a --delete ../_book/ $VERSION
-        git add --all *
-        git commit -m "Build book from pecan revision $GITHUB_SHA" || true
-        git push -q origin master
+      # checkout source code
+      - uses: actions/checkout@v2
+      # install rmarkdown
+      - name: Install rmarkdown
+        run: |
+          Rscript -e 'install.packages(c("rmarkdown","bookdown"))'
+      # compile PEcAn code
+      - name: build
+        run: make -j1
+      # render book
+      - name: Render Book
+        run: |
+          cd book_source
+          Rscript -e 'options(bookdown.render.file_scope=FALSE); bookdown::render_book("index.Rmd", "bookdown::gitbook")'
+      # save artifact
+      - uses: actions/upload-artifact@v2
+        with:
+          name: pecan-documentation
+          path: book_source/_book/
+      # download documentation repo
+      - name: Checkout documentation repo
+        if: github.event_name != 'pull_request'
+        uses: actions/checkout@v2
+        with:
+          repository: ${{ github.repository_owner }}/pecan-documentation
+          path: pecan-documentation
+          token: ${{ secrets.GH_PAT }}
+      # upload new documentation
+      - name: publish to github
+        if: github.event_name != 'pull_request'
+        run: |
+          export VERSION=${GITHUB_REF##*/}
+          cd pecan-documentation && mkdir -p $VERSION
+          git config --global user.email "pecanproj@gmail.com"
+          git config --global user.name "GitHub Documentation Robot"
+          rsync -a --delete ../_book/ $VERSION
+          git add --all *
+          git commit -m "Build book from pecan revision $GITHUB_SHA" || true
+          git push -q origin master
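The Render Book step can also be run by hand to debug a build locally (a sketch; it assumes the PEcAn packages are already installed, which the `make -j1` step handles in CI):

```r
# run from the repository root
install.packages(c("rmarkdown", "bookdown"))
setwd("book_source")
options(bookdown.render.file_scope = FALSE)
bookdown::render_book("index.Rmd", "bookdown::gitbook")  # output goes to _book/
```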
diff --git a/book_source/03_topical_pages/11_adding_to_pecan.Rmd b/book_source/03_topical_pages/11_adding_to_pecan.Rmd
index 9c5d0051121..8e5abd9bc16 100644
--- a/book_source/03_topical_pages/11_adding_to_pecan.Rmd
+++ b/book_source/03_topical_pages/11_adding_to_pecan.Rmd
@@ -41,7 +41,7 @@ The instructions in this section assume that you will be specifying this informa
 
 The figure below summarizes the relevant database tables that need to be updated to add a new model and the primary variables that define each table.
 
-![](https://www.lucidchart.com/publicSegments/view/54a8aea8-9360-4628-af9e-392a0a00c27b/image.png)
+![](03_topical_pages/11_images/bety_new_model.png)
 
 ### Define MODEL_TYPE
 
@@ -747,7 +747,7 @@ Make sure to search for your variables under Data > Variables before suggesting
 
 For example bety contains a record for Net Primary Productivity:
 
-![var_record](04_advanced_user_guide/images/var_record.png)
+![var_record](03_topical_pages/11_images/var_record.png)
 
 This record does not have the same variable name or the same units as NPP in the example data. You may have to do some reading to confirm that they are the same variable.
 
diff --git a/book_source/03_topical_pages/11_images/bety_new_model.png b/book_source/03_topical_pages/11_images/bety_new_model.png
new file mode 100644
index 0000000000000000000000000000000000000000..24860308e6fe410cdf32eaef93116b47e45c1985
GIT binary patch
literal 48869
[base85-encoded binary patch data for bety_new_model.png (48869 bytes) omitted]
z6o?~Gl#WMP-%5+dam-|O9b_f&z6CvWATBIwL3m8xX2#<_Rv^8v=C&kG=qU6JsNYE_ z0pT%hRi|Xi6EpW0@Ek`!kxyB^^*y=u@^GrBizh!u!X2EN1 z^*y5sV%hpV-DxPNdkdc_YRMhQ3!IK8dT8lbMvR@;7;FtYH;-YQzrbO};38^|8V_)P zX17fAph#M6je*S~$+@(i08p7Zub8F(!|TV%a=F|@g0sBG;*D7`+?@vJKl-jH6eYsm zl$^myD#Xd14f8)n1?~%73#9eUn;lq~_$I%Kf3~g;-k5<2%yf4j37hn4!C& z^=SM`pfq}b5rH$knok2i!T$5cT$OW%Y0{PFz&|MMXskN45YC#jxL z)LtE^lV}s5`6p94wL5nhbnVl;VFU9`R(*N=JNYGZqf0rue$@K*d zcR@KU&O#;k$RDrD-PBHfzO~`#&0Y85ddB1)%kZ7Q2@2lmR?1;R6};s-(0USht@8B* zwCHyX^v`us%**6W96dkRFy8&)$P-}%he{f0Af|{YBJ7%gk?{={5wQ;IRlCmEv+!(r zc)f{{!~pgvIeAfYTp@NCxjamRuU)$qaQ*Jy6%Syw3pZc$teW!n*fbI1VTw>RqI9%_ z{C=SM^_3Kf7n}PSVA%IIUi`l1$tc`Hui-pAz6sJUu7F-@2)qdVGZ$zN)?C1(rE2gxBhN-XQLLM;5 zPE6KDracNK0ADyMhaJZPNXJIftZ!@kh6}g4Ki84+Sa1Yl zwv!nRgE;$Sy4kVG_Tin8upFbmDnl&u#fq#|zA zJOYfahu0d>L8eKy&`;6v4j(_#6(2M-m`1qsKcL~WEei_QSy|c>3-Rir07OA_IRRGP zS2r(8I9C&Me%l)iEe@Z!LoEAljkJgAD-A$pY8%HeiDMdqdHs^A;=G$joCKdZh>MFm zX)DxfE+TD02mi)0&AWE&q4Qvs6sOvM?92xS@Y_2SM;`kG*X?r!U0UT zF-vdlE*qbWfq8&mzjjBC^O(KwG`Dtw{B~ft!#xg2Ef;!6j5GWQ)JX>k*oYY!8}l{2eJ#WfRPSEtKd_gPMeRMwGg$LWfQHA*(mV(MYHN72#;L zwr2})&!Y{K^p$(ag%s9bwTzBql#Z}P^9>OLs^)>$UXVj=;TL-k-l9&;z`<6uRkV1={g5*8O+HW;yfiZz^9n-sn5Hhik zuy~uPI4bxih*1b>)tvW}k zvg010%|QbT=blX}vHQiFSLN`oMaV-5KvHag5>1G)BFt%Icu)%3ADllbY@E8>sTW_n zSN^e$22;moe^`YspJF4Lx;0$Ha5TNhza>+Hoo#l6AP=+uOjezbD5h{~*&m)z@y9PV z;V0@)y^fZYR0a9OFDIycp|9iOw1+RoJd?x8RCq}(rh^L!`$*#pFhS_TldM|$V7fx| zbAM3y%kfREc2H$99;)8P=FQuC`0zcPY51l+$>Jhd1(*jv4|=^mBvrouUu`bxMlkrI3-Rt4 zzpcN@LJRy`oa6ve$$L(Ej zr92`Ubw};(&Yc&zo71=M0$xTo6P;g!-As?2kK;#fAt@(yb7mCrY!k>ufeMLy^{Cgz zpL-FI|D{o3G=&?GOLIpcAfp)yYkX3>O9JTl_u>3zzWHIlQAggNk2lP(JJc~ao- zn)gVxy1`Z(WIg%2GAPESIYtG( z)W8Wu;)Fr64@x>;QJNUrJWnG1=iT8a(CHzm4QVU`kz*OZ6O;r1y7=g~CDP*AkKxH% z>Uu&oZmvs6%?aw9_>pLC(Vmo;xQjd;OITe8_U``=iY5H;>HeH;xL5^nGRB{UGOkpC zgUStPD9RzPfsS=o_gX1_s|kh?0M)Dy>G?gck{IcRu4WIr8wn)JY`O6^(D@)!3X6eW z=UJc>W84CS9k?b$8R*0DjiCx;(7Nr?yfPw?SBwgAsvuy6N)tnF-t!V3>9G%WQOr_r zp0)!rOkCsTtOR2Owid!z*t&5xiS-I{CNQ$5G_HsI&1B06Rs;51w&6NY;8u7DLDSDE zk{>HQmL6(F0DmL_{@UXL2mrQIL=-KKbeQ-7JJ4P1e81l@p1U`gyYH+t#V$98AI-Cs z8crt4IK(GHczt|xnYzaf{r&q9tIN~uVXVQqvSjhY@aF@#bq+dBoY9;3^gu++18|rE z(~jXHY>@F$iwg)<(j3EG;# z)okEL_n}fj$h~FyL2`BDI3yD&z!Hu)rMVfrlRCCBKndMwexg%S^pKV;?775s-`k#S zzx@>Es@RD(#k0VR@rcc=`$0K3Lw9S%i?V*+n3AO5)N{rwt79zVA+h#=1b+C}#zB9m z(I?d@|MlI=sP)9Qu$1%8P{^olq&1s4gZf$eG9Z|XzBv6a{eiRl&>w2_Ij9GIIsw}T zeR)`}M`r@aW6_JJl7z*Y95cKh;fG){DiRD+I z%2Rxm^LociQO01EyDH4%BhInOlt+N>GN-P^Wsk+S&#C*wj8(x)-Tz{@gfU+I-AR@m z2D1gkQed%DU$)#!|33IY0*D~$4ffAsp}+$FQ(G!C+>Hz+s$U%L3gv95HK8aV$`rh> zU7O))TwspJ16tT$D}(b1JmX;R@*qsT%h{2fsU>M^@OI zM3H)aXoew6jHD4s)Tpmr_n(cr?=JC_3=X)ngPMoB9}v1Kvw?Ugpls|P9$D&5Dj!sv zgY$&FkA7UoFoG?Ym)|~~Iw^CBp6-)N5{B`&ICmLwAY|}Hvl33~6!gCEM#ftx$vOsr_(k*P&Og@0qQtbmoFS9mvau_>76@W+xaT|yfMK+|x$C2Bo+BXk2lD~vqSWs|S zqRS;sDl1-01sw|Zgx(R**|4=k83#CE{!Bi6JRL%i`8C__>?;DPYny?_7-AA2FgQG+ zGeY5=-5ysKy`2sGd?|~{RzJBn0HBw$P?lq+L4!8`o_Y99f*9$xxpEwHTn^9GgC+Q@ z@Pq-uLtYS#r;Rn1qd}PLFm!FF-UE-7Zb;xcI#!x zSXz68cdiY=KWQ1CPH5pN4fu0~5Yvq5OxHzGM^XNfVaHI&H$oRHuk8!|0ehIdafJl4 zX(6g0q-GkG1*W#Gd==2z{g|yMhQFCNb>1-1ewD z$_af{+6ag-(JQ`N51Aeu2x=Smu<5YYTxBv>lDB3Jmd)PfoE>@ZBVA+$k=_w1E(|@6 z0kP2Gvd*+V^g3Y@76#&^0V8lT_&mri zl)2h@pIQ zt^ht0Pccj}NG#usHTnA;#RC$dYHKkaMp{vF!GV*`n5>&K4B!5NyF zc)J_tBQM(Hu|U{xpj8ip$|3{3X=BKsz4Xx}(Y_UVt1uUA4OsruDxxZ@w=P z{?v#fuMm%6uDa&~g>#{yLeGHug@^(uRxCXcdrG}J$})_TNLPRy7x@LJKDUWt&)SOz zBPk@*UCNEL(O9-I9%5j?ZnaBzQ5&!R{+Y@}q>X8FSBJNPIiCB-vr*J7UsP{sr_ zhU7J_X0L(~xJAbwTgs_h<^jErVGg*3nVWg8_G!)Oa}7HX5CPOzy&$~V<=v5!TIDzh zGH}SQ0*KuEW6qkv&XJjyw=1c?mQ;N}OkCxom`~Hf#(a8c}M&L?1L~sOvswLxPV-3mfMFo9NUGq8?DGsG~uL 
zbv^Bw6G*A)n7m0ORGu zgl2L3Xb}dXA0Dm1s@Qe^vDHC+M)0CMWr?q`#gqlerNDY%xRj2&nY$+6E$4U?M2K%h z_-!O=S$22c=uqBzzE{GDkN4V7+%2U!5740~fv=xV$6ixhXl&Dm82j>amo!5p*DD<> zedcJBGYsHw7rPsck=yID(e8su?)z;Kg&8bWPR}sA`;DMYVT=yH4K*7mtC7}AL5YZa z*2x&~#fPIXiiguf)i`Z)rU_TDCWUktme4!>uc=}Xq^I-uGFD1@c#kr*8OCN5tp$fD5NZrS<0`q6zms!xPdp3Dmo+Z8U zzxYgZ_-B?v;?ZadCY9TMKPf56Gz}?n_-I?oS|VkhH~!+_9}03=WD4iNVM1sqkOsu$ zyk2@Lg(=;HO%a6?X%L7agR|G6V|vXA)X^X>N+6?${p>DySgSqW)UQLXby0Jh&w053 z2q-qNKCO;>4!HvyQf*n#kfH+vR=@GcstOBlHtytTjGp0&S%8ngS>iuqbj_`o8)r>6 zC-ZBo;W{R08|M@I1JOX6`Z!{I^(6uJ{uj{KVGQ?(5;e!ICPt$JB|YAwKfMQ{GX|oi zI^tx!TD1Ep@fB8V>;MGw`5)*g(lJ8LB?x(KI+4$Qz%iGXM?ye)SMbRc0WoI!M; zk16F02yGOM)wg!b7cY|5+Ij6aii4&LPXh8XDEMjQU6}?y1~INeh(4v0V#xqb=STw3 z_LZO*6ZKp;w#Ng7VX(($H$z>9D8w$F+M-dz5~);rk=H=rSisj~x$8)d=M=^;97CaW7hGwoM6}U5|YtZ;k zWQ>UByvaW8$+W12K_0V`iV| zcYcvUiUAb4>KYn1H z;?M(mEy}{f*{d0xsofpVGf700v=WCmaCsAp*RCvT8iD;3sE+v!^F3(j+Th5`l8N!N zVf3Ov@GgwJ-^aq7b()B%qB3Q4E)%N8RovRYRn$Vm_oFVWEpigi#$>>JT%_7e;_#*U z6=n%?%0Dj%K|!=ZdI`!O-J|dnQGlfL624?DZvVGp%|xA3dlg^uvPjBMdKGO)%{l;5 zG!X)A658_kT5dacMqE#Knv-t!c+qh4U0=?LCQdjz-D(G?n$Mh|ziOe+&xMy`&lryj zoDOCijSK%%O&@K>!b-YQ75=U+Nij!3;_ZWiVNc#>X=OJ9lyXWZYRAuBu`?#F5-^4vh%VR%JzZv^Df-M zq#qm#mPJ`S9yID4ssrP@s6aTHZ ze(wY(aR&G0uOZn<6IBlyD>P{ULjf|d=`~Thn-mRVPalA2Z!wXBdXiz`=*nu3JAI6m z;p1fz7o)Mr*Z@RaRpxQ*NJeArOyYHsR>tHdWj!|Nj;i__ct9!DwS8aJ1eemq@m6 z@hebegx`V5Z3_XD20A9*f^(ld&f)kLZ33*qX`G8JH~A>PZRwGz{LQ%N?}i_5CmDV|Gc}=mR~&Hkw*eq!n>qbI(31a;`wHg*{TuA^ ze}56@ivAy=ssDS8{{MQ7ivQRCGyB|r!@tSXSPAGw)e_f;l9a@fRIB8oR3OD*WME{V zYY0RZAx1`4rUq6zopr07WLv AjsO4v literal 0 HcmV?d00001 diff --git a/book_source/02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/var_record.png b/book_source/03_topical_pages/11_images/var_record.png similarity index 100% rename from book_source/02_demos_tutorials_workflows/02_user_demos/05_advanced_user_guide/images/var_record.png rename to book_source/03_topical_pages/11_images/var_record.png diff --git a/book_source/index.Rmd b/book_source/index.Rmd index 8b71e4c339d..51eb2e000c6 100644 --- a/book_source/index.Rmd +++ b/book_source/index.Rmd @@ -6,6 +6,24 @@ documentclass: book biblio-style: apalike link-citations: yes author: "By: PEcAn Team" +output: + bookdown::gitbook: + lib_dir: assets + split_by: section + config: + toolbar: + position: static + bookdown::pdf_book: + keep_tex: yes + bookdown::epub_book: + toc: true + bookdown::html_book: + css: toc.css +always_allow_html: true +#output: +# html_document: +# toc: true +# toc_float: true --- # Welcome {-} From 3d6cfad54c1be63599f1132601ee1d76a41de23e Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 18 Oct 2020 23:28:17 -0500 Subject: [PATCH 1486/2289] fix spaces --- .github/workflows/book.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 50dad46dd5f..ca818236b4c 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -6,10 +6,10 @@ on: - master - develop - tags: - - '*' + tags: + - '*' -pull_request: + pull_request: jobs: bookdown: From 6e9c3d715be6ce9b59cba33291279817aeec8ac7 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 18 Oct 2020 23:32:25 -0500 Subject: [PATCH 1487/2289] fix docker build --- .github/workflows/docker.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml index 1b4eda65329..7b9695bc949 100644 --- a/.github/workflows/docker.yml +++ b/.github/workflows/docker.yml @@ -90,7 +90,7 @@ jobs: echo "${INPUT_PASSWORD}" | 
docker login -u ${INPUT_USERNAME} --password-stdin for image in $(docker image ls pecan/*:github --format "{{ .Repository }}"); do for v in ${PECAN_TAGS}; do - docker tag ${image}:github {{ env.DOCKERHUB_ORG }}/${image#pecan/}:${v} + docker tag ${image}:github ${{ env.DOCKERHUB_ORG }}/${image#pecan/}:${v} docker push ${{ env.DOCKERHUB_ORG }}/${image#pecan/}:${v} done done From 4c9fa8e0e3552f7f8136b39742c113b27aa7489c Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 18 Oct 2020 23:35:13 -0500 Subject: [PATCH 1488/2289] Update book.yml --- .github/workflows/book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index ca818236b4c..6e75792a2f6 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -17,7 +17,7 @@ bookdown: runs-on: ubuntu-latest container: - image: pecan/depends:R${{ matrix.R }} + image: pecan/depends:R4.0.2 steps: # checkout source code From f524b5c150f3bed994011e9b8186be060ccfb5b4 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 18 Oct 2020 23:36:44 -0500 Subject: [PATCH 1489/2289] Update book.yml --- .github/workflows/book.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 6e75792a2f6..1af88be0d67 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -12,8 +12,7 @@ on: pull_request: jobs: -bookdown: - name: Render-Book + bookdown: runs-on: ubuntu-latest container: From b27f9a71ed675f268af9c16bc74de98203326d04 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sun, 18 Oct 2020 23:50:29 -0500 Subject: [PATCH 1490/2289] Update book.yml --- .github/workflows/book.yml | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 1af88be0d67..ae15ed989cf 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -25,6 +25,11 @@ jobs: - name: Install rmarkdown run: | Rscript -e 'install.packages(c("rmarkdown","bookdown"))' + # copy files + - name: copy extfiles + run: | + mkdir -p book_source/extfiles + cp -f documentation/tutorials/01_Demo_Basic_Run/extfiles/* book_source/extfiles # compile PEcAn code - name: build run: make -j1 From bea4f7cd84e6c4a51b6ec7e81b7b510934822b16 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 19 Oct 2020 10:04:36 -0500 Subject: [PATCH 1491/2289] install rsync --- docker/depends/Dockerfile | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index b3a7c9b4a34..74af08bafde 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -24,6 +24,7 @@ RUN apt-get update \ jags \ time \ openssh-client \ + rsync \ libgdal-dev \ libglpk-dev \ librdf0-dev \ From 2230a5b72f9db17b67b4fcd7f6043ca44c3ed95d Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 19 Oct 2020 12:12:06 -0500 Subject: [PATCH 1492/2289] fix rsync --- .github/workflows/book.yml | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index ae15ed989cf..76c1f891598 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -55,11 +55,12 @@ jobs: - name: publish to github if: github.event_name != 'pull_request' run: | - export VERSION=${GITHUB_REF##*/} - cd pecan-documentation && mkdir -p $VERSION git config --global user.email "pecanproj@gmail.com" git config --global user.name "GitHub Documentation Robot" - rsync -a --delete ../_book/ $VERSION + export 
VERSION=${GITHUB_REF##*/}
          cd pecan-documentation
          mkdir -p $VERSION
          rsync -a --delete ../book_source/_book/ ${VERSION}/
          git add --all *
-         git commit -m "Build book from pecan revision $GITHUB_SHA" || true
+         git commit -m "Build book from pecan revision ${GITHUB_SHA}" || true
          git push -q origin master

From 0df235b9e74f1cf05fc1e3bd692ec063fd625620 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Mon, 19 Oct 2020 13:27:42 -0500
Subject: [PATCH 1493/2289] TDM Updates

Major changes:
1) Fix precipitation redistribution bug (tdm_lm_ensemble_sims.R line 572
   most critical);
2) Improve support for single-prediction downscaling by using the model
   coefficients and circumventing creating/pulling from a simulated
   posterior distribution of coefficients (necessary for generating an
   ensemble that characterizes and propagates uncertainty)

---
 .../data.atmosphere/R/tdm_lm_ensemble_sims.R  | 81 ++++++++++++++-----
 modules/data.atmosphere/R/tdm_model_train.R   | 36 +++++----
 .../R/tdm_predict_subdaily_met.R              | 46 ++++++++---
 modules/data.atmosphere/R/tdm_subdaily_pred.R |  8 +-
 4 files changed, 124 insertions(+), 47 deletions(-)

diff --git a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R
index fc9108a8f62..3491c99b5b5 100644
--- a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R
+++ b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R
@@ -169,8 +169,8 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.
         dat.mod$hour == lag.time, ], dim = c(1, ncol(dat.sim$air_temperature)))))
       names(sim.lag) <- c("lag.air_temperature", "ens")
 
-      sim.lag$lag.air_temperature_min <- utils::stack(apply(dat.sim[["air_temperature"]][dat.mod$sim.day == days.sim[i-1], ], 2, min))[, 1]
-      sim.lag$lag.air_temperature_max <- utils::stack(apply(dat.sim[["air_temperature"]][dat.mod$sim.day == days.sim[i-1], ], 2, max))[, 1]
+      sim.lag$lag.air_temperature_min <- utils::stack(apply(data.frame(dat.sim[["air_temperature"]][dat.mod$sim.day == days.sim[i-1], ]), 2, min))[, 1]
+      sim.lag$lag.air_temperature_max <- utils::stack(apply(data.frame(dat.sim[["air_temperature"]][dat.mod$sim.day == days.sim[i-1], ]), 2, max))[, 1]
     }
     dat.temp <- merge(dat.temp, sim.lag, all.x = TRUE)
   } else if (v == "precipitation_flux") {
@@ -235,15 +235,27 @@
# rows.beta[i] <- betas.tem # } # rows.beta <- as.numeric(rows.beta) + + #### ******************************** #### + #### ******************************** #### + #### CHRISTY START HERE FOR DEBUGGING #### + #### ******************************** #### + #### ******************************** #### + # n.new <- ifelse(n.ens==1, 10, n.ens) # If we're not creating an ensemble, we'll add a mean step to remove chance of odd values n.new <- n.ens cols.redo <- 1:n.new sane.attempt=0 betas_nc <- ncdf4::nc_open(file.path(path.model, v, paste0("betas_", v, "_", day.now, ".nc"))) col.beta <- betas_nc$var[[1]]$dim[[2]]$len # number of coefficients while(n.new>0 & sane.attempt <= sanity.tries){ - betas.tem <- sample(1:(n.beta-n.new), 1, replace = TRUE) - Rbeta <- as.matrix(ncdf4::ncvar_get(betas_nc, paste(day.now), c(betas.tem,1), c(n.new,col.beta)), ncol = col.beta) + if(n.ens==1){ + Rbeta <- matrix(mod.save$coef, ncol=col.beta) + } else { + betas.tem <- sample(1:max((n.beta-n.new), 1), 1, replace = TRUE) + Rbeta <- matrix(ncdf4::ncvar_get(betas_nc, paste(day.now), c(betas.tem,1), c(n.new,col.beta)), ncol = col.beta) + } + if(ncol(Rbeta)!=col.beta) Rbeta <- t(Rbeta) @@ -252,9 +264,17 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. dat.pred <- matrix(nrow=nrow(dat.temp), ncol=n.ens) } - dat.pred[,cols.redo] <- subdaily_pred(newdata = dat.temp, model.predict = mod.save, - Rbeta = Rbeta, resid.err = FALSE, model.resid = NULL, Rbeta.resid = NULL, - n.ens = n.new) + # if(n.ens==1){ + # dat.dum <- subdaily_pred(newdata = dat.temp, model.predict = mod.save, + # Rbeta = Rbeta, resid.err = FALSE, model.resid = NULL, Rbeta.resid = NULL, + # n.ens = n.new) + # dat.pred[,1] <- apply(dat.dum, 1, mean) + # } else { + dat.pred[,cols.redo] <- subdaily_pred(newdata = dat.temp, model.predict = mod.save, + Rbeta = Rbeta, resid.err = FALSE, model.resid = NULL, Rbeta.resid = NULL, + n.ens = n.new) + + # } # Occasionally specific humidty may go serioulsy off the rails if(v=="specific_humidity" & (max(dat.pred)>log(40e-3) | min(dat.pred) 0) { + + # if(n.ens == 1) next + cols.check <- ifelse(n.ens==1, 1, cols.check) + if (max(dat.pred[,cols.check]) > 0) { tmp <- 1:nrow(dat.pred) # A dummy vector of the - for (j in cols.redo) { + for (j in cols.check) { if (min(dat.pred[, j]) >= 0) next # skip if no negative rain to redistribute rows.neg <- which(dat.pred[, j] < 0) rows.add <- sample(tmp[!tmp %in% rows.neg], length(rows.neg), @@ -282,12 +305,15 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. } # j End loop # Make sure each day sums to 1 - dat.pred[,cols.redo] <- dat.pred[,cols.redo]/rowSums(dat.pred[,cols.redo], na.rm=T) + # dat.pred[,cols.check] <- dat.pred[,cols.check]/colSums(data.frame(dat.pred[,cols.check]), na.rm=T) dat.pred[is.na(dat.pred)] <- 0 } # End case of re-proportioning # Convert precip proportions into real units - dat.pred[,cols.redo] <- dat.pred[,cols.redo] * as.vector((dat.temp$precipitation_flux.day))*length(unique(dat.temp$hour)) + # Total Daily precip = precipitaiton_flux.day*24*60*60 + # precip.day <- dat.temp$precipitation_flux.day[1]*nrow(dat.temp) + precip.day <- dat.temp$precipitation_flux.day[1] + dat.pred[,cols.check] <- dat.pred[,cols.check] * precip.day } # End Precip re-propogation # ----- @@ -309,7 +335,12 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
} else { rows.filter <- which(dat.mod$sim.day <= days.sim[i] & dat.mod$sim.day >= days.sim[i]-14) } - dat.filter <- utils::stack(dat.sim[[v]][rows.filter,])[,1] + if(n.ens>1){ + dat.filter <- utils::stack(dat.sim[[v]][rows.filter,])[,1] + } else { + dat.filter <- dat.sim[[v]][rows.filter,] + } + filter.mean <- mean(dat.filter, na.rm=T) filter.sd <- sd(dat.filter, na.rm=T) @@ -538,7 +569,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. hrs.wet <- sample(hrs.add, length(hrs.go), replace=T) for(dry in seq_along(hrs.go)){ - rain.now[hrs.wet[dry]] <- rain.now[hrs.go[dry]] + rain.now[hrs.wet[dry]] <- rain.now[hrs.wet[dry]] + rain.now[hrs.go[dry]] rain.now[hrs.go[dry]] <- 0 } @@ -554,17 +585,27 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # ---------- if (v == "surface_downwelling_shortwave_flux_in_air") { # Randomly pick which values to save & propogate - cols.prop <- as.integer(cols.list[i,]) - for (j in 1:ncol(dat.sim[[v]])) { - dat.sim[[v]][rows.mod, j] <- dat.pred[dat.temp$ens == paste0("X", j), cols.prop[j]] + + if(ncol(dat.sim[[v]])>1){ + cols.prop <- as.integer(cols.list[i,]) + for (j in 1:ncol(dat.sim[[v]])) { + dat.sim[[v]][rows.mod, j] <- dat.pred[dat.temp$ens == paste0("X", j), cols.prop[j]] + } + } else { # Only one ensemble member... it's really easy + dat.sim[[v]][rows.mod, 1] <- dat.pred } + dat.sim[[v]][rows.now[!rows.now %in% rows.mod], ] <- 0 } else { - cols.prop <- as.integer(cols.list[i,]) - for (j in 1:ncol(dat.sim[[v]])) { - dat.sim[[v]][rows.now, j] <- dat.pred[dat.temp$ens == paste0("X", j), cols.prop[j]] + if(ncol(dat.sim[[v]])>1){ + cols.prop <- as.integer(cols.list[i,]) + for (j in 1:ncol(dat.sim[[v]])) { + dat.sim[[v]][rows.now, j] <- dat.pred[dat.temp$ens == paste0("X", j), cols.prop[j]] + } + } else { # Only one ensemble member... it's really easy + dat.sim[[v]][rows.now, 1] <- dat.pred } } rm(mod.save) # Clear out the model to save memory @@ -578,7 +619,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. } # end day loop # -------------------------------- - } + } # End vars.list # ---------- End of downscaling for loop return(dat.sim) } diff --git a/modules/data.atmosphere/R/tdm_model_train.R b/modules/data.atmosphere/R/tdm_model_train.R index ad42210e1bc..685f7880590 100644 --- a/modules/data.atmosphere/R/tdm_model_train.R +++ b/modules/data.atmosphere/R/tdm_model_train.R @@ -65,10 +65,8 @@ model.train <- function(dat.subset, v, n.beta, resids = resids, threshold = NULL # Precip needs to be a bit different. 
We're going to calculate the # fraction of precip occuring in each hour we're going to estimate the # probability distribution of rain occuring in a given hour - dat.subset$rain.prop <- dat.subset$precipitation_flux/(dat.subset$precipitation_flux.day * - length(unique(dat.subset$hour))) - mod.doy <- lm(rain.prop ~ as.ordered(hour) * precipitation_flux.day - - 1 - as.ordered(hour) - precipitation_flux.day, data = dat.subset) + dat.subset$rain.prop <- dat.subset$precipitation_flux/(dat.subset$precipitation_flux.day) + mod.doy <- lm(rain.prop ~ as.ordered(hour) - 1 , data = dat.subset) } if (v == "air_pressure") { @@ -105,10 +103,15 @@ model.train <- function(dat.subset, v, n.beta, resids = resids, threshold = NULL # ----- Each variable must do this Generate a bunch of random # coefficients that we can pull from without needing to do this step # every day - mod.coef <- coef(mod.doy) - mod.cov <- vcov(mod.doy) - piv <- as.numeric(which(!is.na(mod.coef))) - Rbeta <- MASS::mvrnorm(n = n.beta, mod.coef[piv], mod.cov) + if(n.beta>1){ + mod.coef <- coef(mod.doy) + mod.cov <- vcov(mod.doy) + piv <- as.numeric(which(!is.na(mod.coef))) + Rbeta <- MASS::mvrnorm(n = n.beta, mod.coef[piv], mod.cov[piv,piv]) + } else { + Rbeta <- matrix(coef(mod.doy), nrow=1) + colnames(Rbeta) <- names(coef(mod.doy)) + } list.out <- list(model = mod.doy, betas = Rbeta) @@ -165,12 +168,17 @@ model.train <- function(dat.subset, v, n.beta, resids = resids, threshold = NULL 1, data = dat.subset[, ]) } - res.coef <- coef(resid.model) - res.cov <- vcov(resid.model) - res.piv <- as.numeric(which(!is.na(res.coef))) - - beta.resid <- MASS::mvrnorm(n = n.beta, res.coef[res.piv], - res.cov) + if(n.beta>1){ + res.coef <- coef(resid.model) + res.cov <- vcov(resid.model) + res.piv <- as.numeric(which(!is.na(res.coef))) + + beta.resid <- MASS::mvrnorm(n = n.beta, res.coef[res.piv], res.cov) + } else { + beta.resid <- matrix(coef(resid.model), nrow=1) + colnames(beta.resid) <- names(coef(mod.doy)) + } + list.out[["model.resid"]] <- resid.model list.out[["betas.resid"]] <- beta.resid diff --git a/modules/data.atmosphere/R/tdm_predict_subdaily_met.R b/modules/data.atmosphere/R/tdm_predict_subdaily_met.R index 9f4da154249..58c3d72845e 100644 --- a/modules/data.atmosphere/R/tdm_predict_subdaily_met.R +++ b/modules/data.atmosphere/R/tdm_predict_subdaily_met.R @@ -30,6 +30,7 @@ ##' @param ens.labs - vector containing the labels (suffixes) for each ensemble member; this allows you to add to your ##' ensemble rather than overwriting with a default naming scheme ##' @param resids - logical stating whether to pass on residual data or not +##' @param adjust.pr - adjustment factor fore preciptiation when the extracted values seem off ##' @param force.sanity - (logical) do we force the data to meet sanity checks? ##' @param sanity.tries - how many time should we try to predict a reasonable value before giving up? We don't want to end up in an infinite loop ##' @param overwrite logical: replace output file if it already exists? @@ -55,7 +56,7 @@ #---------------------------------------------------------------------- predict_subdaily_met <- function(outfolder, in.path, in.prefix, path.train, direction.filter="forward", lm.models.base, - yrs.predict=NULL, ens.labs = 1:3, resids = FALSE, force.sanity=TRUE, sanity.tries=25, + yrs.predict=NULL, ens.labs = 1:3, resids = FALSE, adjust.pr=1, force.sanity=TRUE, sanity.tries=25, overwrite = FALSE, verbose = FALSE, seed=format(Sys.time(), "%m%d"), print.progress=FALSE, ...) 
{ if(direction.filter %in% toupper( c("backward", "backwards"))) direction.filter="backward" @@ -72,7 +73,7 @@ predict_subdaily_met <- function(outfolder, in.path, in.prefix, path.train, dire n.ens <- length(ens.labs) # Update in.path with our prefix (seems silly, but helps with parallelization) - in.path <- file.path(in.path, in.prefix) + # in.path <- file.path(in.path, in.prefix) # Extract the lat/lon info from the first of the source files fnow <- dir(in.path, ".nc")[1] @@ -189,6 +190,18 @@ predict_subdaily_met <- function(outfolder, in.path, in.prefix, path.train, dire yrs.train=yr.train, yrs.source=yrs.tdm[y], n.ens=1, seed=201708, pair.mems = FALSE) + # Adjust the preciptiation for the source data if it can't be right (default = 1) + met.out$dat.source$precipitation_flux <- met.out$dat.source$precipitation_flux*adjust.pr + + # Create wind speed variable if it doesn't exist + if(!"wind_speed" %in% names(met.out$dat.train) & "eastward_wind" %in% names(met.out$dat.train)){ + met.out$dat.train$wind_speed <- sqrt(met.out$dat.train$eastward_wind^2 + met.out$dat.train$northward_wind^2) + } + if(!"wind_speed" %in% names(met.out$dat.source) & "eastward_wind" %in% names(met.out$dat.source)){ + met.out$dat.source$wind_speed <- sqrt(met.out$dat.source$eastward_wind^2 + met.out$dat.source$northward_wind^2) + } + + # Package the raw data into the dataframe that will get passed into the function dat.ens <- data.frame(year = met.out$dat.source$time$Year, doy = met.out$dat.source$time$DOY, @@ -203,12 +216,6 @@ predict_subdaily_met <- function(outfolder, in.path, in.prefix, path.train, dire specific_humidity.day = met.out$dat.source$specific_humidity, wind_speed.day = met.out$dat.source$wind_speed) - # Create wind speed variable if it doesn't exist - if(!"wind_speed" %in% names(met.out$dat.source)){ - dat.ens$wind_speed <- sqrt(met.out$dat.source$eastward_wind^2 + met.out$dat.source$northward_wind^2) - } else { - dat.ens$wind_speed <- met.out$dat.source$wind_speed - } # Set up our simulation time variables; it *should* be okay that this resets each year since it's really only doy that matters dat.ens$sim.hr <- trunc(as.numeric(difftime(dat.ens$date, min(dat.ens$date), tz = "GMT", units = "hour")))+1 @@ -249,6 +256,17 @@ predict_subdaily_met <- function(outfolder, in.path, in.prefix, path.train, dire met.nxt <- align.met(train.path=in.path, source.path=in.path, yrs.train=yrs.tdm[y], yrs.source=yrs.tdm[y], n.ens=1, seed=201708, pair.mems = FALSE) } + # Adjust precipitation rate for both "train" and "source" since both are for the data being downscaled + met.nxt$dat.train$precipitation_flux <- met.nxt$dat.train$precipitation_flux*adjust.pr + met.nxt$dat.source$precipitation_flux <- met.nxt$dat.source$precipitation_flux*adjust.pr + + if(!"wind_speed" %in% names(met.nxt$dat.train) & "eastward_wind" %in% names(met.nxt$dat.train)){ + met.nxt$dat.train$wind_speed <- sqrt(met.nxt$dat.train$eastward_wind^2 + met.nxt$dat.train$northward_wind^2) + } + if(!"wind_speed" %in% names(met.nxt$dat.source) & "eastward_wind" %in% names(met.nxt$dat.source)){ + met.nxt$dat.source$wind_speed <- sqrt(met.nxt$dat.source$eastward_wind^2 + met.nxt$dat.source$northward_wind^2) + } + dat.nxt <- data.frame(year = met.nxt$dat.train$time$Year, doy = met.nxt$dat.train$time$DOY-met.lag, next.air_temperature_max = met.nxt$dat.train$air_temperature_maximum, @@ -352,12 +370,16 @@ predict_subdaily_met <- function(outfolder, in.path, in.prefix, path.train, dire } # End j loop for (i in seq_len(n.ens)) { - df <- data.frame(matrix(ncol = 
length(nc.info$name), nrow = nrow(dat.ens))) - colnames(df) <- nc.info$name + df <- data.frame(matrix(ncol = length(nc.info$CF.name), nrow = nrow(dat.ens))) + colnames(df) <- nc.info$CF.name for (j in nc.info$CF.name) { - ens.sims[[j]][["X1"]] + # ens.sims[[j]][["X1"]] + if(n.ens>1){ e <- paste0("X", i) - df[,j] <- ens.sims[[j]][[e]] + df[,paste(j)] <- ens.sims[[j]][[e]] + } else { + df[,paste(j)] <- ens.sims[[j]] + } } df <- df[, c("air_temperature", "precipitation_flux", "surface_downwelling_shortwave_flux_in_air", diff --git a/modules/data.atmosphere/R/tdm_subdaily_pred.R b/modules/data.atmosphere/R/tdm_subdaily_pred.R index cc9ad91772a..1b02852519c 100644 --- a/modules/data.atmosphere/R/tdm_subdaily_pred.R +++ b/modules/data.atmosphere/R/tdm_subdaily_pred.R @@ -70,7 +70,13 @@ subdaily_pred <- function(newdata, model.predict, Rbeta, resid.err = FALSE, mode err.resid <- Xp.res[, resid.piv] %*% t(Rbeta.resid) } # End residual error - dat.sim <- Xp[, piv] %*% t(Rbeta) + err.resid + if(length(piv)==ncol(Rbeta)){ + dat.sim <- Xp[, piv] %*% t(Rbeta) + err.resid + } else { + # dat.sim <- Xp[,piv] %*% t(Rbeta[,piv]) + err.resid + dat.sim <- Xp[,piv] %*% t(matrix(Rbeta[,piv], nrow=nrow(Rbeta))) + err.resid + } + return(dat.sim) From 521b899bc59088f0a374e621f6ce7c0f111ff0e7 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 13:37:10 -0500 Subject: [PATCH 1494/2289] remove note to self --- modules/data.atmosphere/R/tdm_lm_ensemble_sims.R | 5 ----- 1 file changed, 5 deletions(-) diff --git a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R index 3491c99b5b5..e95e8935483 100644 --- a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R +++ b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R @@ -236,11 +236,6 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
# } # rows.beta <- as.numeric(rows.beta) - #### ******************************** #### - #### ******************************** #### - #### CHRISTY START HERE FOR DEBUGGING #### - #### ******************************** #### - #### ******************************** #### # n.new <- ifelse(n.ens==1, 10, n.ens) # If we're not creating an ensemble, we'll add a mean step to remove chance of odd values n.new <- n.ens cols.redo <- 1:n.new From ec39dfcb8437ca7c94ca8d4a9d4e2a1fc566fba8 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 14:17:15 -0500 Subject: [PATCH 1495/2289] failed attempt at rebuidling docs --- modules/data.atmosphere/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index e56422f3fa5..be5ba92ecae 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -69,4 +69,4 @@ Copyright: Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 7.0.2 +RoxygenNote: 7.1.1 From 702aa6cc359b1dd4b935043ea2130ac8911d7449 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 14:36:52 -0500 Subject: [PATCH 1496/2289] fix date format --- modules/data.atmosphere/R/debias_met_regression.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index 4c5ceead22d..c66184e7ce6 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -990,6 +990,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU dir.create(path.diagnostics, recursive=T, showWarnings=F) dat.pred <- source.data$time + dat.pred$Date <- as.POSIXct(dat.pred$Date) dat.pred$obs <- apply(source.data[[v]], 1, mean, na.rm=T) dat.pred$mean <- apply(dat.out[[v]], 1, mean, na.rm=T) dat.pred$lwr <- apply(dat.out[[v]], 1, quantile, 0.025, na.rm=T) From d07a837c20770240dcbc4a43d4e0e3c31e8d9afb Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 14:37:49 -0500 Subject: [PATCH 1497/2289] adjust precip if necessary despite the unit documentation matching CF units, the precip for sites I've checked is WAAAAAY to low. Chicago is ~1/10 of what it should be, so adding the option to correct this in the extraction phase if necessary --- modules/data.atmosphere/R/extract_local_CMIP5.R | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 76c715eb5fb..80769ef2822 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -27,6 +27,7 @@ ##' @param no.leap (optional, logical) if you know your GCM of interest is missing leap year, you can specify it here. ##' otherwise the code will automatically determine if leap year is missing and if it should be ##' added in. +##' @param adjust.pr - adjustment factor fore preciptiation when the extracted values seem off ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? ##' @param verbose logical. to control printing of debug info ##' @param ... 
Other arguments, currently ignored @@ -34,7 +35,7 @@ ##' @examples # ----------------------------------- extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in, lon.in, - model , scenario , ensemble_member = "r1i1p1", date.origin=NULL, no.leap=NULL, + model , scenario , ensemble_member = "r1i1p1", date.origin=NULL, no.leap=NULL, adjust.pr=1, overwrite = FALSE, verbose = FALSE, ...){ # Some GCMs don't do leap year; we'll have to deal with this separately @@ -316,7 +317,9 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in for(v in 1:nrow(var)){ dat.list[[v]] <- dat.all[[v]][yr.ind] } # End variable loop - + + # Adjusting Preciptiation if necessary + dat.list[["precipitation_flux"]] <- dat.list[["precipitation_flux"]]*adjust.pr ## put data in new file loc <- ncdf4::nc_create(filename=loc.file, vars=var.list, verbose=verbose) for(j in 1:nrow(var)){ From 51846ab6d0f88094609f4f1afa062733e7fb832f Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Mon, 19 Oct 2020 15:40:38 -0400 Subject: [PATCH 1498/2289] Roxygen fix --- modules/data.atmosphere/DESCRIPTION | 2 +- modules/data.atmosphere/man/predict_subdaily_met.Rd | 3 +++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index be5ba92ecae..e56422f3fa5 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -69,4 +69,4 @@ Copyright: Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 7.1.1 +RoxygenNote: 7.0.2 diff --git a/modules/data.atmosphere/man/predict_subdaily_met.Rd b/modules/data.atmosphere/man/predict_subdaily_met.Rd index 83ca26069c8..3d4898e3e6f 100644 --- a/modules/data.atmosphere/man/predict_subdaily_met.Rd +++ b/modules/data.atmosphere/man/predict_subdaily_met.Rd @@ -14,6 +14,7 @@ predict_subdaily_met( yrs.predict = NULL, ens.labs = 1:3, resids = FALSE, + adjust.pr = 1, force.sanity = TRUE, sanity.tries = 25, overwrite = FALSE, @@ -46,6 +47,8 @@ ensemble rather than overwriting with a default naming scheme} \item{resids}{- logical stating whether to pass on residual data or not} +\item{adjust.pr}{- adjustment factor fore preciptiation when the extracted values seem off} + \item{force.sanity}{- (logical) do we force the data to meet sanity checks?} \item{sanity.tries}{- how many time should we try to predict a reasonable value before giving up? 
We don't want to end up in an infinite loop} From e98fcfd2ac23dc4ffdbd108715801b2dcfe3ce69 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 15:28:37 -0500 Subject: [PATCH 1499/2289] file path updates; preserving structure on ceda --- .../data.atmosphere/R/extract_local_CMIP5.R | 26 ++++++++++++++----- 1 file changed, 20 insertions(+), 6 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 80769ef2822..6424d3e055d 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -47,7 +47,8 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in } else if(scenario == "historical" & GCM!="MPI-ESM-P") { date.origin=as.Date("1850-01-01") } else { - PEcAn.logger::logger.error("No date.origin specified and scenario not implemented yet") + # PEcAn.logger::logger.error("No date.origin specified and scenario not implemented yet") + date.origin=as.Date("0001-01-01") } } @@ -86,8 +87,20 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in units = c("Kelvin", "Kelvin", "Kelvin", "W/m2", "Pascal", "W/m2", "m/s", "m/s", "m/s", "m/s", "m/s", "g/g", "kg/m2/s")) # Figuring out what we have daily for and what we only have monthly for - vars.gcm.day <- dir(file.path(in.path, "day")) - vars.gcm.mo <- dir(file.path(in.path, "month")) + path.day <- file.path(in.path, "day") + path.mo <- file.path(in.path, "month") + + vars.gcm.day <- dir(path.day) + vars.gcm.mo <- dir(path.month) + + # If our extraction bath is different from what we had, modify it + if("atmos" %in% vars.gcm.day){ + path.day <- file.path(in.path, "day", "atmos", "day", ensemble_member, "latest") + path.mo <- file.path(in.path, "month", "atmos", "month", ensemble_member, "latest") + + vars.gcm.day <- dir(path.day) + vars.gcm.mo <- dir(path.mo) + } vars.gcm.mo <- vars.gcm.mo[!vars.gcm.mo %in% vars.gcm.day] vars.gcm <- c(vars.gcm.day, vars.gcm.mo) @@ -109,9 +122,9 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in files.var[[v]] <- list() if(v %in% vars.gcm.day){ # Get a list of file names - files.var[[v]] <- data.frame(file.name=dir(file.path(in.path, "day", v)) ) + files.var[[v]] <- data.frame(file.name=dir(file.path(path.day, v)) ) } else { - files.var[[v]] <- data.frame(file.name=dir(file.path(in.path, "month", v))) + files.var[[v]] <- data.frame(file.name=dir(file.path(path.mo, v))) } # Set up an index to help us find out which file we'll need @@ -151,6 +164,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in dat.all[[v]] <- vector() # initialize the layer # Figure out the temporal resolution of the variable v.res <- ifelse(var.now %in% vars.gcm.day, "day", "month") + p.res <- ifelse(var.now %in% vars.gcm.day, path.day, path.mo) # Figure out what file we need # file.ind <- which(files.var[[var.now]][i]) @@ -161,7 +175,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # print(f.now) # Open up the file - ncT <- ncdf4::nc_open(file.path(in.path, v.res, var.now, f.now)) + ncT <- ncdf4::nc_open(file.path(p.res, var.now, f.now)) # Extract our dimensions lat_bnd <- ncdf4::ncvar_get(ncT, "lat_bnds") From 6e20cd78d690fcb6fa689627bf5d53b5b29d929b Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 15:50:24 -0500 Subject: [PATCH 1500/2289] add CO2 capabilities if present --- .../data.atmosphere/R/extract_local_CMIP5.R | 24 
++++++++++++++----- 1 file changed, 18 insertions(+), 6 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 6424d3e055d..9416f07f4b5 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -78,21 +78,27 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # The table of var name conversion # psl; sfcWind; tasmax; tasmin; huss - var <- data.frame(DAP.name = c("tas", "tasmax", "tasmin", "rlds", "ps", "rsds", "uas", "vas", "sfcWind", "ua", "va", "huss", "pr"), + #"co2", "mole_fraction_of_carbon_dioxide_in_air", "1e-6" + var <- data.frame(DAP.name = c("tas", "tasmax", "tasmin", "rlds", "ps", "rsds", "uas", "vas", "sfcWind", "ua", "va", "huss", "pr", "co2mass"), CF.name = c("air_temperature", "air_temperature_maximum", "air_temperature_minimum", "surface_downwelling_longwave_flux_in_air", "air_pressure", "surface_downwelling_shortwave_flux_in_air", "eastward_wind", "northward_wind", "wind_speed", "eastward_wind", "northward_wind", - "specific_humidity", "precipitation_flux"), - units = c("Kelvin", "Kelvin", "Kelvin", "W/m2", "Pascal", "W/m2", "m/s", "m/s", "m/s", "m/s", "m/s", "g/g", "kg/m2/s")) - + "specific_humidity", "precipitation_flux", "mole_fraction_of_carbon_dioxide_in_air"), + units = c("Kelvin", "Kelvin", "Kelvin", "W/m2", "Pascal", "W/m2", "m/s", "m/s", "m/s", "m/s", "m/s", "g/g", "kg/m2/s", "1e-6")) + + # Some constants for converting CO2 if it's there + co2.molmass <- 44.01 # g/mol https://en.wikipedia.org/wiki/Carbon_dioxide#Atmospheric_concentration + atm.molmass <- 28.97 # g/mol https://en.wikipedia.org/wiki/Density_of_air + atm.masstot <- 5.1480e18 # kg https://journals.ametsoc.org/doi/10.1175/JCLI-3299.1 + atm.mol <- atm.masstot/atm.molmass + # Figuring out what we have daily for and what we only have monthly for path.day <- file.path(in.path, "day") path.mo <- file.path(in.path, "month") vars.gcm.day <- dir(path.day) - vars.gcm.mo <- dir(path.month) - + vars.gcm.mo <- dir(path.mo) # If our extraction bath is different from what we had, modify it if("atmos" %in% vars.gcm.day){ path.day <- file.path(in.path, "day", "atmos", "day", ensemble_member, "latest") @@ -334,6 +340,12 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # Adjusting Preciptiation if necessary dat.list[["precipitation_flux"]] <- dat.list[["precipitation_flux"]]*adjust.pr + + if("mole_fraction_of_carbon_dioxide_in_air" %in% names(dat.list)){ + co2.mol <- dat.list[["mole_fraction_of_carbon_dioxide_in_air"]]/co2.molmass # kg co2 + dat.list[["mole_fraction_of_carbon_dioxide_in_air"]] <- co2.mol/atm.mol*1e6 # kmol/kmol * 1e6 to be in CF units (ppm) + } + ## put data in new file loc <- ncdf4::nc_create(filename=loc.file, vars=var.list, verbose=verbose) for(j in 1:nrow(var)){ From aa2132f4a5f4d221db9cd99562921e35a3f77884 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 19 Oct 2020 16:34:49 -0500 Subject: [PATCH 1501/2289] fix CO2 extraction issues a typo in the file path, plus CO2 is a global attribute with no lat/long, so dealing wiht that accordingly. 
--- .../data.atmosphere/R/extract_local_CMIP5.R | 20 +++++++++++++++---- 1 file changed, 16 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 9416f07f4b5..0e75602da89 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -102,7 +102,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # If our extraction bath is different from what we had, modify it if("atmos" %in% vars.gcm.day){ path.day <- file.path(in.path, "day", "atmos", "day", ensemble_member, "latest") - path.mo <- file.path(in.path, "month", "atmos", "month", ensemble_member, "latest") + path.mo <- file.path(in.path, "mon", "atmos", "Amon", ensemble_member, "latest") vars.gcm.day <- dir(path.day) vars.gcm.mo <- dir(path.mo) @@ -184,13 +184,20 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in ncT <- ncdf4::nc_open(file.path(p.res, var.now, f.now)) # Extract our dimensions - lat_bnd <- ncdf4::ncvar_get(ncT, "lat_bnds") - lon_bnd <- ncdf4::ncvar_get(ncT, "lon_bnds") + # Check to see if we need to extract lat/lon or not + if(ncT$var[[var.now]]$ndims>1){ + lat_bnd <- ncdf4::ncvar_get(ncT, "lat_bnds") + lon_bnd <- ncdf4::ncvar_get(ncT, "lon_bnds") + } nc.time <- ncdf4::ncvar_get(ncT, "time") # splt.ind <- ifelse(GCM %in% c("MPI-ESM-P"), 4, 3) # date.origin <- as.Date(stringr::str_split(ncT$dim$time$units, " ")[[1]][splt.ind]) nc.date <- date.origin + nc.time + + if(as.Date(min(nc.date)) < as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01"))){ + nc.date <- as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")) + nc.time + } date.leaps <- seq(as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")), as.Date(paste0(files.var[[var.now]][i,"last.year"], "-12-31")), by="day") # Figure out if we're missing leap dat no.leap <- ifelse(is.null(no.leap) & length(nc.date)!=length(date.leaps), TRUE, FALSE) @@ -235,7 +242,12 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in dat.temp <- ncdf4::ncvar_get(ncT, var.now, c(ind.lon, ind.lat, puse, time.ind[1]), c(1,1,1,length(time.ind))) } } else { - dat.temp <- ncdf4::ncvar_get(ncT, var.now, c(ind.lon, ind.lat, time.ind[1]), c(1,1,length(time.ind))) + # Note that CO2 appears to be a global value + if(ncT$var[[var.now]]$ndims==1){ + dat.temp <- ncdf4::ncvar_get(ncT, var.now, c(time.ind[1]), c(length(time.ind))) + } else { + dat.temp <- ncdf4::ncvar_get(ncT, var.now, c(ind.lon, ind.lat, time.ind[1]), c(1,1,length(time.ind))) + } } # Add leap year and trick monthly into daily From f1d0f97cfce8f475492366dd23bee39a1acb5927 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Mon, 19 Oct 2020 21:11:50 -0700 Subject: [PATCH 1502/2289] Update sshkey.sh MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Fix Typo - repeated ‘echo’ --- scripts/sshkey.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/sshkey.sh b/scripts/sshkey.sh index 24518be6ba5..258a6d879cb 100755 --- a/scripts/sshkey.sh +++ b/scripts/sshkey.sh @@ -44,6 +44,6 @@ echo "paste the following lines to the commandline:" echo "" echo " (the first only if there is no .ssh directory on the server)" echo "" -echo "echo ssh ${USERNAME}@${SERVER} \"mkdir ~/.ssh\"" +echo "ssh ${USERNAME}@${SERVER} \"mkdir ~/.ssh\"" echo "" echo "cat ~/.ssh/${SERVER}.pub | ssh ${USERNAME}@${SERVER} \"cat >> ~/.ssh/authorized_keys\"" 
From 4ba1c1be1d3024e283fa2317f46b7fce92914b17 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Tue, 20 Oct 2020 12:25:20 -0500 Subject: [PATCH 1503/2289] Fix variable recoding commit # 5ab53e2594ef6326e3db36b45809839d6623b5d4 broke improtant recoding causing air pressure and humidity to not be renamed if necessary. --- .../data.atmosphere/R/extract_local_CMIP5.R | 22 +++++++++++++------ 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 0e75602da89..61861974aca 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -101,10 +101,10 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in vars.gcm.mo <- dir(path.mo) # If our extraction bath is different from what we had, modify it if("atmos" %in% vars.gcm.day){ - path.day <- file.path(in.path, "day", "atmos", "day", ensemble_member, "latest") - path.mo <- file.path(in.path, "mon", "atmos", "Amon", ensemble_member, "latest") + path.day <- file.path(in.path, "day", "atmos", "day", ensemble_member, "latest") + path.mo <- file.path(in.path, "mon", "atmos", "Amon", ensemble_member, "latest") - vars.gcm.day <- dir(path.day) + vars.gcm.day <- dir(path.day) vars.gcm.mo <- dir(path.mo) } vars.gcm.mo <- vars.gcm.mo[!vars.gcm.mo %in% vars.gcm.day] @@ -112,8 +112,8 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in vars.gcm <- c(vars.gcm.day, vars.gcm.mo) # Rewriting the dap name to get the closest variable that we have for the GCM (some only give uss stuff at sea level) - if(!("huss" %in% vars.gcm)) levels(var$DAP.name) <- sub("huss", "hus", levels(var$DAP.name)) - if(!("ps" %in% vars.gcm )) levels(var$DAP.name) <- sub("ps", "psl", levels(var$DAP.name)) + if(!("huss" %in% vars.gcm)) var$DAP.name[var$DAP.name=="huss"] <- "hus" + if(!("ps" %in% vars.gcm)) var$DAP.name[var$DAP.name=="ps"] <- "psl" # Making sure we're only trying to grab the variables we have (i.e. 
don't try sfcWind if we don't have it) var <- var[var$DAP.name %in% vars.gcm,] @@ -195,9 +195,17 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # date.origin <- as.Date(stringr::str_split(ncT$dim$time$units, " ")[[1]][splt.ind]) nc.date <- date.origin + nc.time - if(as.Date(min(nc.date)) < as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01"))){ - nc.date <- as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")) + nc.time + nc.min <- as.Date(min(nc.date)) + date.ref <- as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")) + + # If things don't align with the specified origin, update it & try again + if(nc.min != date.ref){ + date.off <- date.ref - nc.min + date.origin <- date.origin + date.off + 1 + + nc.date <- nc.time + date.origin } + date.leaps <- seq(as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")), as.Date(paste0(files.var[[var.now]][i,"last.year"], "-12-31")), by="day") # Figure out if we're missing leap dat no.leap <- ifelse(is.null(no.leap) & length(nc.date)!=length(date.leaps), TRUE, FALSE) From 1be7645b3c362c865a87fcd39c839999dfabfe84 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Tue, 20 Oct 2020 12:34:25 -0500 Subject: [PATCH 1504/2289] fix origin date bug was overwriting date.origin, which was a bad idea since it could spiral out and ended up with us missing a single day for our last year --- modules/data.atmosphere/R/extract_local_CMIP5.R | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 61861974aca..5dd14117449 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -200,10 +200,9 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # If things don't align with the specified origin, update it & try again if(nc.min != date.ref){ - date.off <- date.ref - nc.min - date.origin <- date.origin + date.off + 1 - - nc.date <- nc.time + date.origin + date.off <- date.ref - nc.min # Figure out our date offset + + nc.date <- nc.date + date.off + 1 } date.leaps <- seq(as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")), as.Date(paste0(files.var[[var.now]][i,"last.year"], "-12-31")), by="day") From 4d44940f8c7a0d5a49bfd3b98e4b26fbe6f8b37e Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Tue, 20 Oct 2020 13:46:54 -0500 Subject: [PATCH 1505/2289] Fix date issues 1) Not all GCM output is Jan 1 - Dec 31 -- some are stupid and annoying and have January somewhere in the middle. So we need to index off of the actual file date rather than just extracting the year. 
2) because dates are centered in the time period, it's important to add 0.5 days to cell references otherwise particularly the end gets cut of because 365 (end_date) < 365.5 (nc end time) --- .../data.atmosphere/R/extract_local_CMIP5.R | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 5dd14117449..0215c43f87c 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -128,24 +128,24 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in files.var[[v]] <- list() if(v %in% vars.gcm.day){ # Get a list of file names - files.var[[v]] <- data.frame(file.name=dir(file.path(path.day, v)) ) + files.var[[v]] <- data.frame(file.name=dir(file.path(path.day, v))) } else { files.var[[v]] <- data.frame(file.name=dir(file.path(path.mo, v))) } # Set up an index to help us find out which file we'll need - # files.var[[v]][["years"]] <- data.frame(first.year=NA, last.year=NA) + # files.var[[v]][["years"]] <- data.frame(first.date=NA, last.date=NA) for(i in 1:nrow(files.var[[v]])){ - yr.str <- stringr::str_split(stringr::str_split(files.var[[v]][i,"file.name"], "_")[[1]][6], "-")[[1]] + dt.str <- stringr::str_split(stringr::str_split(files.var[[v]][i,"file.name"], "_")[[1]][6], "-")[[1]] # Don't bother storing this file if we don't want those years - files.var[[v]][i, "first.year"] <- as.numeric(substr(yr.str[1], 1, 4)) - files.var[[v]][i, "last.year" ] <- as.numeric(substr(yr.str[2], 1, 4)) + files.var[[v]][i, "first.date"] <- as.Date(dt.str[1], format="%Y%m%d") + files.var[[v]][i, "last.date" ] <- as.Date(substr(dt.str[2], 1, 8), format="%Y%m%d") } # End file loop # get rid of files outside of what we actually need - files.var[[v]] <- files.var[[v]][files.var[[v]]$first.year<=end_year & files.var[[v]]$last.year>=start_year,] + files.var[[v]] <- files.var[[v]][files.var[[v]]$first.date<=as.Date(end_date) & files.var[[v]]$last.date>=as.Date(start_date),] # if(as.numeric(substr(yr.str[1], 1, 4)) > end_year | as.numeric(substr(yr.str[2], 1, 4))< start_year) next n.file=n.file+nrow(files.var[[v]]) @@ -196,7 +196,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in nc.date <- date.origin + nc.time nc.min <- as.Date(min(nc.date)) - date.ref <- as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")) + date.ref <- files.var[[var.now]][i,"first.date"]+0.5 # Set a half-day offset to make centered # If things don't align with the specified origin, update it & try again if(nc.min != date.ref){ @@ -205,7 +205,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in nc.date <- nc.date + date.off + 1 } - date.leaps <- seq(as.Date(paste0(files.var[[var.now]][i,"first.year"], "-01-01")), as.Date(paste0(files.var[[var.now]][i,"last.year"], "-12-31")), by="day") + date.leaps <- seq(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="day") # Figure out if we're missing leap dat no.leap <- ifelse(is.null(no.leap) & length(nc.date)!=length(date.leaps), TRUE, FALSE) @@ -219,10 +219,10 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # Find our time index if(v.res=="day"){ - time.ind <- which(lubridate::year(nc.date)>=start_year & lubridate::year(nc.date)<=end_year) + time.ind <- which(nc.date>=as.Date(start_date) & nc.date<=as.Date(end_date)+0.5) } else { - yr.ind <- 
rep(files.var[[var.now]][i,"first.year"]:files.var[[var.now]][i,"last.year"], each=12) - time.ind <- which(yr.ind>=start_year & yr.ind<=end_year) + date.ind <- rep(files.var[[var.now]][i,"first.date"]:files.var[[var.now]][i,"last.date"], each=12) + time.ind <- which(date.ind>=as.Date(start_date) & date.ind<=as.Date(end_date)+0.5) } # Subset our dates & times to match our index @@ -270,7 +270,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # If we have monthly data, lets trick it into being daily if(v.res == "month"){ mo.ind <- rep(1:12, length.out=length(dat.temp)) - yr.ind <- rep(files.var[[var.now]][i,"first.year"]:files.var[[var.now]][i,"last.year"], each=12) + yr.ind <- rep(files.var[[var.now]][i,"first.date"]:files.var[[var.now]][i,"last.date"], each=12) dat.trick <- vector() for(j in 1:length(dat.temp)){ if(lubridate::leap_year(yr.ind[j]) & mo.ind[j]==2){ From 072092c55f81a581c2ae00db2f0eccbc30908ba6 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Tue, 20 Oct 2020 15:31:06 -0500 Subject: [PATCH 1506/2289] fixes to monthly output --- .../data.atmosphere/R/extract_local_CMIP5.R | 59 ++++++++++++++----- 1 file changed, 43 insertions(+), 16 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 0215c43f87c..a9f6858ef69 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -127,9 +127,11 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in for(v in var$DAP.name){ files.var[[v]] <- list() if(v %in% vars.gcm.day){ - # Get a list of file names + v.res="day" + # Get a list of file names files.var[[v]] <- data.frame(file.name=dir(file.path(path.day, v))) } else { + v.res="month" files.var[[v]] <- data.frame(file.name=dir(file.path(path.mo, v))) } @@ -139,8 +141,19 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in dt.str <- stringr::str_split(stringr::str_split(files.var[[v]][i,"file.name"], "_")[[1]][6], "-")[[1]] # Don't bother storing this file if we don't want those years - files.var[[v]][i, "first.date"] <- as.Date(dt.str[1], format="%Y%m%d") - files.var[[v]][i, "last.date" ] <- as.Date(substr(dt.str[2], 1, 8), format="%Y%m%d") + if(v.res=="day"){ + files.var[[v]][i, "first.date"] <- as.Date(dt.str[1], format="%Y%m%d") + files.var[[v]][i, "last.date" ] <- as.Date(substr(dt.str[2], 1, 8), format="%Y%m%d") + } else { + # For monthly data, we can assume the first day of the month is day 1 of that month + dfirst <- lubridate::days_in_month(as.numeric(substr(dt.str[1], 5, 6))) + files.var[[v]][i, "first.date"] <- as.Date(paste0(dt.str[1], dfirst/2), format="%Y%m%d") + + # For the last day, i wish we could assume it ends in December, but some models are + # jerks, so we should double check + dlast <- lubridate::days_in_month(as.numeric(substr(dt.str[2], 5, 6))) + files.var[[v]][i, "last.date" ] <- as.Date(paste0(substr(dt.str[2], 1, 6), dlast), format="%Y%m%d") + } } # End file loop @@ -193,16 +206,30 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # splt.ind <- ifelse(GCM %in% c("MPI-ESM-P"), 4, 3) # date.origin <- as.Date(stringr::str_split(ncT$dim$time$units, " ")[[1]][splt.ind]) - nc.date <- date.origin + nc.time - - nc.min <- as.Date(min(nc.date)) - date.ref <- files.var[[var.now]][i,"first.date"]+0.5 # Set a half-day offset to make centered - - # If things don't align with the specified origin, update it & try 
again - if(nc.min != date.ref){ - date.off <- date.ref - nc.min # Figure out our date offset - - nc.date <- nc.date + date.off + 1 + if(v.res == "day"){ + nc.date <- date.origin + nc.time + + nc.min <- as.Date(min(nc.date)) + # mean(diff(nc.date)) + date.ref <- files.var[[var.now]][i,"first.date"]+0.5 # Set a half-day offset to make centered + + # If things don't align with the specified origin, update it & try again + if(nc.min != date.ref & v.res=="day"){ + date.off <- date.ref - nc.min # Figure out our date offset + + nc.date <- nc.date + date.off + 1 + } + } else { + dates.mo <- seq.Date(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="month") + + if(length(dates.mo) == length(nc.time)){ + nc.date <- dates.mo + } else { + # I have no freaking clue what to do if things don't work out, so lets just go back to whatever we first tried + date.off <- date.ref - nc.min # Figure out our date offset + + nc.date <- nc.date + date.off + 1 + } } date.leaps <- seq(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="day") @@ -221,8 +248,8 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in if(v.res=="day"){ time.ind <- which(nc.date>=as.Date(start_date) & nc.date<=as.Date(end_date)+0.5) } else { - date.ind <- rep(files.var[[var.now]][i,"first.date"]:files.var[[var.now]][i,"last.date"], each=12) - time.ind <- which(date.ind>=as.Date(start_date) & date.ind<=as.Date(end_date)+0.5) + # date.ind <- rep(files.var[[var.now]][i,"first.date"]:files.var[[var.now]][i,"last.date"], each=12) + time.ind <- which(nc.date>=as.Date(start_date) & nc.date<=as.Date(end_date)+0.5) } # Subset our dates & times to match our index @@ -270,7 +297,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # If we have monthly data, lets trick it into being daily if(v.res == "month"){ mo.ind <- rep(1:12, length.out=length(dat.temp)) - yr.ind <- rep(files.var[[var.now]][i,"first.date"]:files.var[[var.now]][i,"last.date"], each=12) + yr.ind <- lubridate::year(nc.date) dat.trick <- vector() for(j in 1:length(dat.temp)){ if(lubridate::leap_year(yr.ind[j]) & mo.ind[j]==2){ From afe9bf326f41fdd764ef4b7ee8065cc9a4ebc845 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Tue, 20 Oct 2020 16:01:35 -0500 Subject: [PATCH 1507/2289] Fixes to leap year stupid leap year. Stupid stupid leap year. --- .../data.atmosphere/R/extract_local_CMIP5.R | 31 +++++++++++-------- 1 file changed, 18 insertions(+), 13 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index a9f6858ef69..559efff7300 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -24,9 +24,6 @@ ##' @param date.origin (optional) specify the date of origin for timestamps in the files being read. ##' If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and ##' 850 for p1000 simulations (plus MPI-ESM-P historical). Format: YYYY-MM-DD -##' @param no.leap (optional, logical) if you know your GCM of interest is missing leap year, you can specify it here. -##' otherwise the code will automatically determine if leap year is missing and if it should be -##' added in. ##' @param adjust.pr - adjustment factor fore preciptiation when the extracted values seem off ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? ##' @param verbose logical. 
to control printing of debug info @@ -35,7 +32,7 @@ ##' @examples # ----------------------------------- extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in, lon.in, - model , scenario , ensemble_member = "r1i1p1", date.origin=NULL, no.leap=NULL, adjust.pr=1, + model , scenario , ensemble_member = "r1i1p1", date.origin=NULL, adjust.pr=1, overwrite = FALSE, verbose = FALSE, ...){ # Some GCMs don't do leap year; we'll have to deal with this separately @@ -146,8 +143,8 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in files.var[[v]][i, "last.date" ] <- as.Date(substr(dt.str[2], 1, 8), format="%Y%m%d") } else { # For monthly data, we can assume the first day of the month is day 1 of that month - dfirst <- lubridate::days_in_month(as.numeric(substr(dt.str[1], 5, 6))) - files.var[[v]][i, "first.date"] <- as.Date(paste0(dt.str[1], dfirst/2), format="%Y%m%d") + # dfirst <- lubridate::days_in_month(as.numeric(substr(dt.str[1], 5, 6))) + files.var[[v]][i, "first.date"] <- as.Date(paste0(dt.str[1], 01), format="%Y%m%d") # For the last day, i wish we could assume it ends in December, but some models are # jerks, so we should double check @@ -204,6 +201,15 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in } nc.time <- ncdf4::ncvar_get(ncT, "time") + if(v.res=="day"){ + date.leaps <- seq(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="day") + } else { + # if we're dealing with monthly data, start with the first of the month + date.leaps <- seq(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="day") + } + # Figure out if we're missing leap dat + no.leap <- ifelse(length(nc.time)!=length(date.leaps), TRUE, FALSE) + # splt.ind <- ifelse(GCM %in% c("MPI-ESM-P"), 4, 3) # date.origin <- as.Date(stringr::str_split(ncT$dim$time$units, " ")[[1]][splt.ind]) if(v.res == "day"){ @@ -214,13 +220,15 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in date.ref <- files.var[[var.now]][i,"first.date"]+0.5 # Set a half-day offset to make centered # If things don't align with the specified origin, update it & try again - if(nc.min != date.ref & v.res=="day"){ + if(nc.min != date.ref){ date.off <- date.ref - nc.min # Figure out our date offset - nc.date <- nc.date + date.off + 1 + nc.date <- date.origin + nc.time + date.off } } else { - dates.mo <- seq.Date(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="month") + dfirst <- lubridate::days_in_month(lubridate::month(files.var[[var.now]][i,"first.date"])) + + dates.mo <- seq.Date(files.var[[var.now]][i,"first.date"]+dfirst/2, files.var[[var.now]][i,"last.date"], by="month") if(length(dates.mo) == length(nc.time)){ nc.date <- dates.mo @@ -232,15 +240,12 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in } } - date.leaps <- seq(files.var[[var.now]][i,"first.date"], files.var[[var.now]][i,"last.date"], by="day") - # Figure out if we're missing leap dat - no.leap <- ifelse(is.null(no.leap) & length(nc.date)!=length(date.leaps), TRUE, FALSE) # If we're missing leap year, lets adjust our date stamps so we can only pull what we need if(v.res=="day" & no.leap==TRUE){ cells.bump <- which(lubridate::leap_year(lubridate::year(date.leaps)) & lubridate::month(date.leaps)==02 & lubridate::day(date.leaps)==29) for(j in 1:length(cells.bump)){ - nc.date[cells.bump[j]:length(nc.date)] <- nc.date[cells.bump[j]:length(nc.date)]+1 + 
nc.date[(cells.bump[j]-1):length(nc.date)] <- nc.date[(cells.bump[j]-1):length(nc.date)]+1 } } From 4240caf725c699f63ffac382c063ad5ba7a7e97e Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Tue, 20 Oct 2020 16:31:02 -0500 Subject: [PATCH 1508/2289] update documentation --- modules/data.atmosphere/man/extract.local.CMIP5.Rd | 5 ----- 1 file changed, 5 deletions(-) diff --git a/modules/data.atmosphere/man/extract.local.CMIP5.Rd b/modules/data.atmosphere/man/extract.local.CMIP5.Rd index 575dae521f4..246df4349ff 100644 --- a/modules/data.atmosphere/man/extract.local.CMIP5.Rd +++ b/modules/data.atmosphere/man/extract.local.CMIP5.Rd @@ -16,7 +16,6 @@ extract.local.CMIP5( scenario, ensemble_member = "r1i1p1", date.origin = NULL, - no.leap = NULL, overwrite = FALSE, verbose = FALSE, ... @@ -45,10 +44,6 @@ extract.local.CMIP5( If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and 850 for p1000 simulations (plus MPI-ESM-P historical). Format: YYYY-MM-DD} -\item{no.leap}{(optional, logical) if you know your GCM of interest is missing leap year, you can specify it here. -otherwise the code will automatically determine if leap year is missing and if it should be -added in.} - \item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} \item{verbose}{logical. to control printing of debug info} From bc234367852808c68e181238c46b36defc776129 Mon Sep 17 00:00:00 2001 From: Julian Pistorius Date: Wed, 21 Oct 2020 01:58:05 +0000 Subject: [PATCH 1509/2289] Allow Docker services to be run as unprivileged user (#2669) * Start Docker services with the same user ID as the logged-in user. Particularly any services which access the PEcAn data volume. * Added instructions for setting permissions Particularly so that Docker services running as the operating system user can write to the PEcAn data volume. * Changed the default port for the 'web' container Changed from 80 to 8080 because unprivileged users can't allocate ports below 1024 (and the Docker services can now be run as an unprivileged user). 
* use new script to start rstudio; this will save specific environment variables and make them visible in rstudio

Co-authored-by: David LeBauer
Co-authored-by: Rob Kooper
---
 CHANGELOG.md | 2 +-
 DEV-INTRO.md | 9 +++++++++
 docker-compose.yml | 16 +++++++++++++++-
 docker/base/Dockerfile | 3 +--
 docker/web/Dockerfile | 6 ++++--
 5 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index a0f32663468..3e06d5e9616 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -33,8 +33,8 @@ This is a major change:
 - Update ED docker build, will now build version 2.2.0 and git
 - Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625
 - model2netcdf.ED2 no longer detecting which variable names `-T-` files have based on ED2 version (#2623)
+- Changed docker-compose.yml to use user & group IDs of the operating system user (#2572)
-gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666)
-
 ### Changed

diff --git a/DEV-INTRO.md b/DEV-INTRO.md
index b559c3fc07a..9f48ad97bfe 100644
--- a/DEV-INTRO.md
+++ b/DEV-INTRO.md
@@ -171,6 +171,15 @@
 docker pull pecan/data:develop
 docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
 ```

+Linux & Mac
+
+```bash
+# Change ownership of /data directory in pecan volume to the current user
+docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data pecan/data:develop chown -R "$(id -u).$(id -g)" /data
+
+docker run -ti --user="$(id -u)" --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop
+```
+
 #### copy R packages (optional but recommended)

 Next copy the R packages from a container to volume `pecan_lib`. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. This folder is shared with all PEcAn containers, allowing you to compile the code in one place, and have the compiled code available in all other containers. For example, modifying the code for a model allows you to compile the code in the rstudio container and see the results in the model container.
diff --git a/docker-compose.yml b/docker-compose.yml index bd250733a04..e668799fa5f 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -50,6 +50,7 @@ services: # webserver to handle access to data minio: + user: "${UID:-1001}:${GID:-1001}" image: minio/minio:latest command: server /data restart: unless-stopped @@ -68,6 +69,7 @@ services: # THREDDS data server thredds: + user: "${UID:-1001}:${GID:-1001}" image: pecan/thredds:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -156,6 +158,7 @@ services: rstudio: image: pecan/base:${PECAN_VERSION:-latest} + command: /work/rstudio.sh restart: unless-stopped networks: - pecan @@ -163,12 +166,14 @@ services: - rabbitmq - postgres environment: + - KEEP_ENV=RABBITMQ_URI FQDN NAME - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} - FQDN=${PECAN_FQDN:-docker} - NAME=${PECAN_NAME:-docker} - USER=${PECAN_RSTUDIO_USER:-carya} - PASSWORD=${PECAN_RSTUDIO_PASS:-illinois} - entrypoint: /init + - USERID=${UID:-1001} + - GROUPID=${GID:-1001} volumes: - pecan:/data - rstudio:/home @@ -190,6 +195,7 @@ services: # PEcAn web front end, this is just the PHP code web: + user: "${UID:-1001}:${GID:-1001}" image: pecan/web:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -203,6 +209,7 @@ services: - postgres - rabbitmq labels: + - "traefik.port=8080" - "traefik.enable=true" - "traefik.frontend.rule=${TRAEFIK_HOST:-}PathPrefix:/pecan/" - "traefik.backend=pecan" @@ -212,6 +219,7 @@ services: # PEcAn model monitor monitor: + user: "${UID:-1001}:${GID:-1001}" image: pecan/monitor:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -232,6 +240,7 @@ services: # PEcAn executor, executes jobs. Does not the actual models executor: + user: "${UID:-1001}:${GID:-1001}" image: pecan/executor:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -251,6 +260,7 @@ services: # PEcAn basgra model runner basgra: + user: "${UID:-1001}:${GID:-1001}" image: pecan/model-basgra-basgra_n_v1.0:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -264,6 +274,7 @@ services: # PEcAn sipnet model runner sipnet: + user: "${UID:-1001}:${GID:-1001}" image: pecan/model-sipnet-r136:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -277,6 +288,7 @@ services: # PEcAn ED model runner ed2: + user: "${UID:-1001}:${GID:-1001}" image: pecan/model-ed2-git:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -290,6 +302,7 @@ services: # PEcAn MAESPA model runner maespa: + user: "${UID:-1001}:${GID:-1001}" image: pecan/model-maespa-git:${PECAN_VERSION:-latest} restart: unless-stopped networks: @@ -323,6 +336,7 @@ services: # ---------------------------------------------------------------------- api: image: pecan/api:${PECAN_VERSION:-latest} + user: "${UID:-1001}:${GID:-1001}" networks: - pecan environment: diff --git a/docker/base/Dockerfile b/docker/base/Dockerfile index 17866c2d338..142fbe65787 100644 --- a/docker/base/Dockerfile +++ b/docker/base/Dockerfile @@ -27,12 +27,11 @@ RUN cd /pecan && make && rm -rf /tmp/downloaded_packages # COPY WORKFLOW WORKDIR /work -COPY web/workflow.R /work/ +COPY web/workflow.R docker/base/rstudio.sh /work/ # COMMAND TO RUN CMD Rscript --vanilla workflow.R | tee workflow.Rout - # variables to store in docker image ENV PECAN_VERSION=${PECAN_VERSION} \ PECAN_GIT_BRANCH=${PECAN_GIT_BRANCH} \ diff --git a/docker/web/Dockerfile b/docker/web/Dockerfile index d6f9c413883..674b42d254a 100644 --- a/docker/web/Dockerfile +++ b/docker/web/Dockerfile @@ -12,8 +12,8 @@ RUN apt-get update \ && 
docker-php-ext-install -j$(nproc) pdo pdo_pgsql \ && pecl install amqp \ && docker-php-ext-enable amqp \ - && mkdir -p /data/workflows /data/dbfiles \ - && chown www-data /data/workflows /data/dbfiles + && sed -i -e 's/ 80/ 8080/g' -e 's/ 443/ 8443/g' /etc/apache2/ports.conf \ + && sed -i -e 's/:80/:8080/g' -e 's/:443/:8443/g' /etc/apache2/sites-enabled/000-default.conf # ---------------------------------------------------------------------- # copy webpages @@ -35,3 +35,5 @@ ENV PECAN_VERSION=${PECAN_VERSION} \ PECAN_GIT_BRANCH=${PECAN_GIT_BRANCH} \ PECAN_GIT_CHECKSUM=${PECAN_GIT_CHECKSUM} \ PECAN_GIT_DATE=${PECAN_GIT_DATE} + +EXPOSE 8080 From b938b7497c453d7ceddd96f0b01cd61c3e909479 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 20 Oct 2020 21:03:25 -0500 Subject: [PATCH 1510/2289] missing rstudio file --- docker/base/rstudio.sh | 13 +++++++++++++ 1 file changed, 13 insertions(+) create mode 100755 docker/base/rstudio.sh diff --git a/docker/base/rstudio.sh b/docker/base/rstudio.sh new file mode 100755 index 00000000000..0f7e1e94c88 --- /dev/null +++ b/docker/base/rstudio.sh @@ -0,0 +1,13 @@ +#!/bin/bash + +mkdir -p /home/${USER} +#if [ ! -e "/home/${USER}/.Renviron" ]; then + echo "# Environment Variables" >> "/home/${USER}/.Renviron" +#fi + +for x in ${KEEP_ENV}; do + value="$(echo "${!x}" | sed 's/\//\\\//g')" + sed -i -e "/^$x=/{h;s/=.*/=${value}/};\${x;/^\$/{s//$x=${value}/;H};x}" "/home/${USER}/.Renviron" +done + +exec /init From 872afd75d55d16091562b3619d0ffa21e58a905e Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 21 Oct 2020 08:53:13 -0500 Subject: [PATCH 1511/2289] more date indexing fixes --- modules/data.atmosphere/R/extract_local_CMIP5.R | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 559efff7300..7ce9c72e952 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -259,7 +259,7 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # Subset our dates & times to match our index nc.date <- nc.date[time.ind] - date.leaps <- date.leaps[which(lubridate::year(date.leaps)>=start_year & lubridate::year(date.leaps)<=end_year)] + date.leaps <- date.leaps[which(date.leaps>=as.Date(start_date) & date.leaps<=as.Date(end_date))] # Find the closest grid cell for our site (using harvard as a protoype) ind.lat <- which(lat_bnd[1,]<=lat.in & lat_bnd[2,]>=lat.in) @@ -293,8 +293,10 @@ extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in # Figure out if we're missing leap year if(v.res=="day" & no.leap==TRUE){ cells.dup <- which(lubridate::leap_year(lubridate::year(date.leaps)) & lubridate::month(date.leaps)==02 & lubridate::day(date.leaps)==28) - for(j in 1:length(cells.dup)){ - dat.temp <- append(dat.temp, dat.temp[cells.dup[j]], cells.dup[j]) + if(length(cells.dup)>0){ + for(j in 1:length(cells.dup)){ + dat.temp <- append(dat.temp, dat.temp[cells.dup[j]], cells.dup[j]) + } } } From 927b41d00f934c932f2be5dad2b07e789a2b8e7b Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 21 Oct 2020 14:51:14 -0700 Subject: [PATCH 1512/2289] Update DEV-INTRO.md * typo, file is .env (for consistency w/ later instructions) ./env --> .env * added 'requirements' section * other minor changes --- DEV-INTRO.md | 22 +++++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 
9f48ad97bfe..9fdbd51ae21 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -2,6 +2,18 @@ This is a minimal guide to getting started with PEcAn development under Docker. You can find more information about docker in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html). +## Requirements and Recommendations + +Docker is the primary software requirement; it handles all of the other software dependencies. This has been tested on Ubuntu 18.04 and above, MacOS Catalina, and Windows 10 with Windows Subsystem for Linux 2. + +- Software (installation instructions below): + - Docker version 19 + - Docker-compose version 1.26 + - Git (optional until you want to make major changes) +- Hardware + - 100 GB storage (minimum 50 GB) + - 16 GB RAM (minimum 8 GB) + ## Git Repository and Workflow We recommend following the the [gitflow](https://nvie.com/posts/a-successful-git-branching-model/) workflow and working in your own [fork of the PEcAn repsitory](https://help.github.com/en/github/getting-started-with-github/fork-a-repo). See the [PEcAn developer guide](book_source/02_demos_tutorials_workflows/05_developer_workflows/02_git/01_using-git.Rmd) for further details. In the `/scripts` folder there is a script called [syncgit.sh](scripts/syncgit.sh) that will help with synchronizing your fork with the official repository. @@ -69,13 +81,13 @@ You can copy the [`docker/env.example`](docker/env.example) file as .env in your For Linux/MacOSX ```sh -cp docker/env.example ./env +cp docker/env.example .env ``` For Windows ``` -copy docker/env.example ./env +copy docker/env.example .env ``` * `COMPOSE_PROJECT_NAME` set this to pecan, the prefix for all containers @@ -137,7 +149,7 @@ The following volumes are specified: These folders will hold all the persistent data for each of the respective containers and can grow. For example the postgres database is multiple GB. The pecan folder will hold all data produced by the workflows, including any downloaded data, and can grow to many giga bytes. -#### postgresql database +#### Postgresql database First we bring up postgresql (we will start RabbitMQ as well since it takes some time to start): @@ -180,7 +192,7 @@ docker run -ti --rm --network pecan_pecan --volume pecan_pecan:/data pecan/data: docker run -ti --user="$(id -u)" --rm --network pecan_pecan --volume pecan_pecan:/data --env FQDN=docker pecan/data:develop ``` -#### copy R packages (optional but recommended) +#### Copy R packages (optional but recommended) Next copy the R packages from a container to volume `pecan_lib`. This is not really needed, but will speed up the process of the first compilation. Later we will put our newly compiled code here as well. This folder is shared with all PEcAn containers, allowing you to compile the code in one place, and have the compiled code available in all other containers. For example modify the code for a model, allows you to compile the code in rstudio container, and see the results in the model container. @@ -190,7 +202,7 @@ You can copy all the data using the following command. This will copy all compil docker run -ti --rm -v pecan_lib:/rlib pecan/base:develop cp -a /usr/local/lib/R/site-library/. /rlib/ ``` -#### copy web config file (optional) +#### Copy web config file (optional) The `docker-compose.override.yml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. 
You can do this using From f9024814dc81d2747273a2bee33ce88093e591fb Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 21 Oct 2020 15:20:23 -0700 Subject: [PATCH 1513/2289] Update DEV-INTRO.md --- DEV-INTRO.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/DEV-INTRO.md b/DEV-INTRO.md index 9fdbd51ae21..871b9c5934b 100644 --- a/DEV-INTRO.md +++ b/DEV-INTRO.md @@ -174,7 +174,7 @@ docker-compose run --rm bety user guestuser guestuser "Guest User" guestuser@exa docker-compose run --rm bety user carya illinois "Carya Demo User" carya@example.com 1 1 ``` -#### load example data +#### Load example data Once the database is loaded we can add some example data, some of the example runs and runs for the ED model, assume some of this data is available. This can take some time, but all the data needed will be copied to the `/data` folder in the pecan containers. As with the database we first pull the latest version of the image, and then execute the image to copy all the data: @@ -204,7 +204,10 @@ docker run -ti --rm -v pecan_lib:/rlib pecan/base:develop cp -a /usr/local/lib/R #### Copy web config file (optional) -The `docker-compose.override.yml` file has a section that will enable editing the web application. This is by default commented out. If you want to uncoment it you will need to first copy the config.php from the docker/web folder. You can do this using +If you want to use the web interface, you will need to: + +1. Uncomment the web section from the `docker-compose.override.yml` file. This section includes three lines at the top of the file, just under the `services` section. Uncomment the lines that start `web:`, ` volumes:`, and `- pecan_web:`. +2. Then copy the config.php from the docker/web folder. You can do this using For Linux/MacOSX @@ -218,8 +221,6 @@ For Windows copy docker\web\config.docker.php web\config.php ``` - - ### PEcAn Development To begin development we first have to bring up the full PEcAn stack. This assumes you have done once the steps above. You don't need to stop any running containers, you can use the following command to start all containers. At this point you have PEcAn running in docker. From 63ea5e8b636780c0d23b254e5b9c7a5f0a05d2a3 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 21 Oct 2020 19:49:26 -0500 Subject: [PATCH 1514/2289] use rabbitmq in R this will use the R functions to submit to R. This will allow the execution of workflows from Rstudio. --- base/remote/R/rabbitmq.R | 36 +++--- base/remote/R/start_rabbitmq.R | 7 +- .../94_docker/05_building_images.Rmd | 2 +- .../94_docker/09_rabbitmq.Rmd | 39 ++---- docker-compose.yml | 6 +- docker/executor/Dockerfile | 2 +- docker/models/model.py | 119 +++++++++--------- 7 files changed, 105 insertions(+), 106 deletions(-) diff --git a/base/remote/R/rabbitmq.R b/base/remote/R/rabbitmq.R index da7023b9d30..1a9eef3e3fd 100644 --- a/base/remote/R/rabbitmq.R +++ b/base/remote/R/rabbitmq.R @@ -1,5 +1,7 @@ -#' parse the RabbiMQ URI. This will parse the uri into smaller pieces that can -#' be used to talk to the rest endpoint for RabbitMQ. +#' parse the RabbiMQ URI. +#' +#' This will parse the uri into smaller pieces that can be used to talk to the +#' rest endpoint for RabbitMQ. #' #' @param uri the amqp URI #' @param prefix the prefix that the RabbitMQ managmenet interface uses @@ -44,8 +46,10 @@ rabbitmq_parse_uri <- function(uri, prefix="", port=15672) { return(list(url=url, vhost=vhost, username=upw[[1]], password=upw[[2]])) } -#' Send a message to RabbitMQ rest API. 
It will check the resulting status code -#' and print a message in case something goes wrong. +#' Send a message to RabbitMQ rest API. +#' +#' It will check the resulting status code and print a message in case +#' something goes wrong. #' #' @param url the full endpoint rest url #' @param auth authentication for rabbitmq in httr:auth @@ -97,10 +101,11 @@ rabbitmq_send_message <- function(url, auth, body, action = "POST", silent = FAL } } -#' Create a queu in RabbitMQ. This will first check to see if the queue -#' already exists in RabbitMQ, if not it will create the queue. If the -#' queue exists, or is created it will return TRUE, it will return FALSE -#' otherwise. +#' Create a queue in RabbitMQ. +#' +#' This will first check to see if the queue already exists in RabbitMQ, if not +#' it will create the queue. If the queue exists, or is created it will return +#' TRUE, it will return FALSE otherwise. #' #' @param url parsed RabbitMQ URL. #' @param auth the httr authentication object to use. @@ -129,9 +134,11 @@ rabbitmq_create_queue <- function(url, auth, vhost, queue, auto_delete = FALSE, return(length(result) > 1 || !is.na(result)) } -#' Post message to RabbitMQ. This will submit a message to RabbitMQ, if the -#' queue does not exist it will be created. The message will be converted to -#' a json message that is submitted. +#' Post message to RabbitMQ. +#' +#' This will submit a message to RabbitMQ, if the queue does not exist it will +#' be created. The message will be converted to a json message that is +#' submitted. #' #' @param uri RabbitMQ URI or URL to rest endpoint #' @param queue the queue the message is submitted to @@ -167,9 +174,10 @@ rabbitmq_post_message <- function(uri, queue, message, prefix="", port=15672) { return(rabbitmq_send_message(url, auth, body, "POST")) } -#' Get message from RabbitMQ. This will get a message from RabbitMQ, if the -#' queue does not exist it will be created. The message will be converted to -#' a json message that is returned. +#' Get message from RabbitMQ. +#' +#' This will get a message from RabbitMQ, if the queue does not exist it will +#' be created. The message will be converted to a json message that is returned. #' #' @param uri RabbitMQ URI or URL to rest endpoint #' @param queue the queue the message is received from. diff --git a/base/remote/R/start_rabbitmq.R b/base/remote/R/start_rabbitmq.R index f61a813f217..20f9bf831f3 100644 --- a/base/remote/R/start_rabbitmq.R +++ b/base/remote/R/start_rabbitmq.R @@ -1,7 +1,10 @@ #' Start model execution using rabbitmq #' -#' @return Output of execution command, as a character (see [remote.execute.cmd()]). +#' @return Output of execution command, as a character (see [rabbitmq_post_message()]). 
start_rabbitmq <- function(folder, rabbitmq_uri, rabbitmq_queue) { - out <- system2('python3', c('/work/sender.py', rabbitmq_uri, rabbitmq_queue, folder), stdout = TRUE, stderr = TRUE) + message <- list("folder"=folder) + prefix <- Sys.getenv("RABBITMQ_PREFIX", "") + port <- Sys.getenv("RABBITMQ_PORT", "15672") + out <- rabbitmq_post_message(rabbitmq_uri, rabbitmq_queue, message, prefix, port) return(out) } diff --git a/book_source/03_topical_pages/94_docker/05_building_images.Rmd b/book_source/03_topical_pages/94_docker/05_building_images.Rmd index f0c212fc326..40fcaf1f6d4 100644 --- a/book_source/03_topical_pages/94_docker/05_building_images.Rmd +++ b/book_source/03_topical_pages/94_docker/05_building_images.Rmd @@ -20,7 +20,7 @@ The following is a list of PEcAn-specific images and reasons why you would want - `pecan/executor` -- Rebuild if: - You built a new version of `pecan/base` (on which `pecan/executor` depends) and/or, `pecan/depends` (on which `pecan/base` depends) - You modified the `docker/executor/Dockerfile` - - You modified the RabbitMQ Python scripts (e.g. `docker/receiver.py`, `docker/sender.py`) + - You modified the RabbitMQ Python script (e.g. `docker/receiver.py`) - `pecan/web` -- Rebuild if you modified any of the following: - `docker/web/Dockerfile` - The PHP/HTML/JavaScript code for the PEcAn web interface in `web/` (_except_ `web/workflow.R` -- that goes in `pecan/base`) diff --git a/book_source/03_topical_pages/94_docker/09_rabbitmq.Rmd b/book_source/03_topical_pages/94_docker/09_rabbitmq.Rmd index 76ecf1f6777..76601a50879 100644 --- a/book_source/03_topical_pages/94_docker/09_rabbitmq.Rmd +++ b/book_source/03_topical_pages/94_docker/09_rabbitmq.Rmd @@ -2,42 +2,25 @@ This section provides additional details about how PEcAn uses RabbitMQ to manage communication between its Docker containers. -In PEcAn, we use the Python [`pika`](http://www.rabbitmq.com/tutorials/tutorial-one-python.html) client to post and retrieve messages from RabbitMQ. -As such, every Docker container that communicates with RabbitMQ contains two Python scripts: `sender.py` and `reciever.py`. -Both are located in the `docker` directory in the PEcAn source code root. +In PEcAn, we use the Python [`pika`](http://www.rabbitmq.com/tutorials/tutorial-one-python.html) client to retrieve messages from RabbitMQ. The PEcAn.remote library has convenience functions that wrap the API to post and read messages. The executor and models use the python version of the RabbitMQ scripts to retrieve messages, and launch the appropriate code. -### Producer -- `sender.py` {#rabbitmq-basics-sender} +### Producer -- `PEcAn.remote::rabbitmq_post_message` {#rabbitmq-basics-sender} -The `sender.py` script is in charge of posting messages to RabbitMQ. -In the RabbitMQ documentation, it is known as a "producer". -It runs once for every message posted to RabbitMQ, and then immediately exits (unlike the `receiver.py`, which runs continuously -- see [below](#rabbitmq-basics-receiver)). +The `PEcAn.remote::rabbitmq_post_message` function allows you to post messages to RabbitMQ from R. In the RabbitMQ documentation, it is known as a "producer". It takes the body of the message to be posted and will return any output generated when the message is posted to the approriate queue. -Its usage is as follows: +The function has three required arguments and two optional ones: -```bash -python3 sender.py -``` - -The arguments are: - -- `` -- The unique identifier of the RabbitMQ instance, similar to a URL. 
-The format is `amqp://username:password@host/vhost`. -By default, this is `amqp://guest:guest@rabbitmq/%2F` (the `%2F` here is the hexadecimal encoding for the `/` character). - -- `` -- The name of the queue on which to post the message. +- `uri` -- This is the URI used to connect to RabbitMQ, it will have the form `amqp://username:password\@server:5672/vhost`. Most containers will have this injected using the environment variable `RABBITMQ_URI`. +- `queue` -- The queue to post the message on, this is either `pecan` for a workflow to be exected, the name of the model and version to be executed, for example `SIPNET_r136`. +- `message` -- The actual message to be send, this is of type list and will be converted to a json string representation. +- `prefix` -- The code will talk to the rest api, this is the prefix of the rest api. In the case of the default docker-compse file, this will be `/rabbitmq` and will be injected as `RABBITMQ_PREFIX`. +- `port` -- The code will talk to the rest api, this is the port of the rest api. In the case of the default docker-compse file, this will be `15672` and will be injected as `RABBITMQ_PORT`. -- `` -- The contents of the message to post, in JSON format. -A typical message posted by PEcAn looks like the following: - - ```json - { "folder" : "/path/to/PEcAn_WORKFLOWID", "workflow" : "WORKFLOWID" } - ``` - -The `PEcAn.remote::start_rabbitmq` function is a wrapper for this script that provides an easy way to post a `folder` message to RabbitMQ from R. +The `PEcAn.remote::start_rabbitmq` function is a wrapper for this function that provides an easy way to post a `folder` message to RabbitMQ. ### Consumer -- `receiver.py` {#rabbitmq-basics-receiver} -Unlike `sender.py`, `receiver.py` runs like a daemon, constantly listening for messages. +The `receiver.py` script runs like a daemon, constantly listening for messages. In the RabbitMQ documentation, it is known as a "consumer". In PEcAn, you can tell that it is ready to receive messages if the corresponding logs (e.g. 
`docker-compose logs executor`) show the following message: diff --git a/docker-compose.yml b/docker-compose.yml index e668799fa5f..a5957241184 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -166,8 +166,10 @@ services: - rabbitmq - postgres environment: - - KEEP_ENV=RABBITMQ_URI FQDN NAME + - KEEP_ENV=RABBITMQ_URI RABBITMQ_PREFIX RABBITMQ_PORT FQDN NAME - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} + - RABBITMQ_PREFIX=/rabbitmq + - RABBITMQ_PORT=15672 - FQDN=${PECAN_FQDN:-docker} - NAME=${PECAN_NAME:-docker} - USER=${PECAN_RSTUDIO_USER:-carya} @@ -247,6 +249,8 @@ services: - pecan environment: - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F} + - RABBITMQ_PREFIX=/rabbitmq + - RABBITMQ_PORT=15672 - FQDN=${PECAN_FQDN:-docker} depends_on: - postgres diff --git a/docker/executor/Dockerfile b/docker/executor/Dockerfile index 0cf5622e487..34722783909 100644 --- a/docker/executor/Dockerfile +++ b/docker/executor/Dockerfile @@ -23,5 +23,5 @@ ENV RABBITMQ_URI="amqp://guest:guest@rabbitmq/%2F" \ APPLICATION="workflow" # actual application that will be executed -COPY executor.py sender.py /work/ +COPY executor.py /work/ CMD python3 /work/executor.py diff --git a/docker/models/model.py b/docker/models/model.py index 7d21f73c6b8..26352744ed7 100644 --- a/docker/models/model.py +++ b/docker/models/model.py @@ -24,67 +24,68 @@ def __init__(self, method, properties, body): self.finished = False def runfunc(self): - logging.debug(self.body) - jbody = json.loads(self.body.decode('UTF-8')) - - folder = jbody.get('folder') - rebuild = jbody.get('rebuild') - pecan_xml = jbody.get('pecan_xml') - custom_application = jbody.get('custom_application') - - if rebuild is not None: - logging.info("Rebuilding PEcAn with make") - application = 'make' - folder = '/pecan' - elif pecan_xml is not None: - # Passed entire pecan XML as a string - logging.info("Running XML passed directly") - try: - os.mkdir(folder) - except OSError as e: - logging.info("Caught the following OSError. ", - "If it's just that the directory exists, ", - "this can probably be ignored: ", e) - workflow_path = os.path.join(folder, "workflow.R") - shutil.copyfile("/pecan/web/workflow.R", workflow_path) - xml_file = open(os.path.join(folder, "pecan.xml"), "w") - xml_file.write(pecan_xml) - xml_file.close() - - # Set variables for execution - application = "R CMD BATCH workflow.R" - elif custom_application is not None: - application = custom_application - else: - logging.info("Running default command: %s" % default_application) - application = default_application - - logging.info("Running command: %s" % application) - logging.info("Starting command in directory %s." 
% folder) try: - output = subprocess.check_output(application, stderr=subprocess.STDOUT, shell=True, cwd=folder) - status = 'OK' - except subprocess.CalledProcessError as e: - logging.exception("Error running job.") - output = e.output - status = 'ERROR' - except Exception as e: - logging.exception("Error running job.") - output = str(e) - status = 'ERROR' - - logging.info("Finished running job with status " + status) - logging.info(output) + logging.debug(self.body) + jbody = json.loads(self.body.decode('UTF-8')) + + folder = jbody.get('folder') + rebuild = jbody.get('rebuild') + pecan_xml = jbody.get('pecan_xml') + custom_application = jbody.get('custom_application') + + if rebuild is not None: + logging.info("Rebuilding PEcAn with make") + application = 'make' + folder = '/pecan' + elif pecan_xml is not None: + # Passed entire pecan XML as a string + logging.info("Running XML passed directly") + try: + os.mkdir(folder) + except OSError as e: + logging.info("Caught the following OSError. ", + "If it's just that the directory exists, ", + "this can probably be ignored: ", e) + workflow_path = os.path.join(folder, "workflow.R") + shutil.copyfile("/pecan/web/workflow.R", workflow_path) + xml_file = open(os.path.join(folder, "pecan.xml"), "w") + xml_file.write(pecan_xml) + xml_file.close() + + # Set variables for execution + application = "R CMD BATCH workflow.R" + elif custom_application is not None: + application = custom_application + else: + logging.info("Running default command: %s" % default_application) + application = default_application + + logging.info("Running command: %s" % application) + logging.info("Starting command in directory %s." % folder) + try: + output = subprocess.check_output(application, stderr=subprocess.STDOUT, shell=True, cwd=folder) + status = 'OK' + except subprocess.CalledProcessError as e: + logging.exception("Error running job.") + output = e.output + status = 'ERROR' + except Exception as e: + logging.exception("Error running job.") + output = str(e) + status = 'ERROR' + + logging.info("Finished running job with status " + status) + logging.info(output) - try: - with open(os.path.join(folder, 'rabbitmq.out'), 'w') as out: - out.write(str(output) + "\n") - out.write(status + "\n") - except Exception: - logging.exception("Error writing status.") - - # done processing, set finished to true - self.finished = True + try: + with open(os.path.join(folder, 'rabbitmq.out'), 'w') as out: + out.write(str(output) + "\n") + out.write(status + "\n") + except Exception: + logging.exception("Error writing status.") + finally: + # done processing, set finished to true + self.finished = True # called for every message, this will start the program and ack message if all is ok. From bfd71afe746cffdd368828a59208913e2f6e18ac Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 21 Oct 2020 19:51:51 -0500 Subject: [PATCH 1515/2289] update changelog --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 3e06d5e9616..7137acbc431 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -16,6 +16,7 @@ This is a major change: ### Fixed +- Removed sender.py and now allow the submission of workflows from inside the rstudio container. - Use TRAEFIK_FRONTEND_RULE in compose file and TRAEFIK_HOST in env.example, using TRAEFIK_HOST everywhere now. 
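[Editor's note] To make the producer workflow documented in the patch above concrete, here is a minimal sketch of posting a workflow message from R. Only `rabbitmq_post_message()` and its five arguments come from the patch itself; the URI default, queue name, and folder path are placeholder values mirroring the docker-compose settings shown earlier, not settings from a real deployment.

```r
# Hedged sketch, not part of the patch: post a "run this workflow" message
# to RabbitMQ using the function added in base/remote/R/rabbitmq.R.
library(PEcAn.remote)

# In the docker-compose setup these are injected as environment variables.
uri    <- Sys.getenv("RABBITMQ_URI", "amqp://guest:guest@rabbitmq/%2F")
prefix <- Sys.getenv("RABBITMQ_PREFIX", "/rabbitmq")
port   <- Sys.getenv("RABBITMQ_PORT", "15672")

# Hypothetical workflow folder; in practice this comes from the PEcAn workflow.
msg <- list(folder = "/data/workflows/PEcAn_99000000001")

# "pecan" is the queue for whole workflows; model runs go to queues such as
# "SIPNET_r136" (see the documentation changes in this patch).
rabbitmq_post_message(uri, queue = "pecan", message = msg,
                      prefix = prefix, port = port)
```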
Make sure TRAEFIK_HOST is used in .env - Use initial biomass pools for Sorghum and Setaria #2495, #2496 - PEcAn.DB::betyConnect() is now smarter, and will try to use either config.php or environment variables to create a connection. It has switched to use db.open helper function (#2632). From 2fe69ee645c688d616e4a5cc86fcf8d65ce51ff9 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Wed, 21 Oct 2020 20:56:26 -0500 Subject: [PATCH 1516/2289] update documentation --- base/remote/man/rabbitmq_create_queue.Rd | 12 ++++-------- base/remote/man/rabbitmq_get_message.Rd | 9 +++------ base/remote/man/rabbitmq_parse_uri.Rd | 7 +++---- base/remote/man/rabbitmq_post_message.Rd | 10 ++++------ base/remote/man/rabbitmq_send_message.Rd | 7 +++---- base/remote/man/start_rabbitmq.Rd | 2 +- 6 files changed, 18 insertions(+), 29 deletions(-) diff --git a/base/remote/man/rabbitmq_create_queue.Rd b/base/remote/man/rabbitmq_create_queue.Rd index ec21e5a57f0..6e9c5f4175b 100644 --- a/base/remote/man/rabbitmq_create_queue.Rd +++ b/base/remote/man/rabbitmq_create_queue.Rd @@ -2,10 +2,7 @@ % Please edit documentation in R/rabbitmq.R \name{rabbitmq_create_queue} \alias{rabbitmq_create_queue} -\title{Create a queu in RabbitMQ. This will first check to see if the queue -already exists in RabbitMQ, if not it will create the queue. If the -queue exists, or is created it will return TRUE, it will return FALSE -otherwise.} +\title{Create a queue in RabbitMQ.} \usage{ rabbitmq_create_queue( url, @@ -33,10 +30,9 @@ rabbitmq_create_queue( TRUE if the queue now exists, FALSE otherwise. } \description{ -Create a queu in RabbitMQ. This will first check to see if the queue -already exists in RabbitMQ, if not it will create the queue. If the -queue exists, or is created it will return TRUE, it will return FALSE -otherwise. +This will first check to see if the queue already exists in RabbitMQ, if not +it will create the queue. If the queue exists, or is created it will return +TRUE, it will return FALSE otherwise. } \author{ Rob Kooper diff --git a/base/remote/man/rabbitmq_get_message.Rd b/base/remote/man/rabbitmq_get_message.Rd index 755ddd4bb30..233a1dcaae5 100644 --- a/base/remote/man/rabbitmq_get_message.Rd +++ b/base/remote/man/rabbitmq_get_message.Rd @@ -2,9 +2,7 @@ % Please edit documentation in R/rabbitmq.R \name{rabbitmq_get_message} \alias{rabbitmq_get_message} -\title{Get message from RabbitMQ. This will get a message from RabbitMQ, if the -queue does not exist it will be created. The message will be converted to -a json message that is returned.} +\title{Get message from RabbitMQ.} \usage{ rabbitmq_get_message(uri, queue, count = 1, prefix = "", port = 15672) } @@ -23,9 +21,8 @@ rabbitmq_get_message(uri, queue, count = 1, prefix = "", port = 15672) NA if no message was retrieved, or a list of the messages payload. } \description{ -Get message from RabbitMQ. This will get a message from RabbitMQ, if the -queue does not exist it will be created. The message will be converted to -a json message that is returned. +This will get a message from RabbitMQ, if the queue does not exist it will +be created. The message will be converted to a json message that is returned. 
} \author{ Alexey Shiklomanov, Rob Kooper diff --git a/base/remote/man/rabbitmq_parse_uri.Rd b/base/remote/man/rabbitmq_parse_uri.Rd index 9505e11745e..2bf82667238 100644 --- a/base/remote/man/rabbitmq_parse_uri.Rd +++ b/base/remote/man/rabbitmq_parse_uri.Rd @@ -2,8 +2,7 @@ % Please edit documentation in R/rabbitmq.R \name{rabbitmq_parse_uri} \alias{rabbitmq_parse_uri} -\title{parse the RabbiMQ URI. This will parse the uri into smaller pieces that can -be used to talk to the rest endpoint for RabbitMQ.} +\title{parse the RabbiMQ URI.} \usage{ rabbitmq_parse_uri(uri, prefix = "", port = 15672) } @@ -19,6 +18,6 @@ a list that contains the url to the mangement interface, username password and vhost. } \description{ -parse the RabbiMQ URI. This will parse the uri into smaller pieces that can -be used to talk to the rest endpoint for RabbitMQ. +This will parse the uri into smaller pieces that can be used to talk to the +rest endpoint for RabbitMQ. } diff --git a/base/remote/man/rabbitmq_post_message.Rd b/base/remote/man/rabbitmq_post_message.Rd index c5655ed36a0..3eda68eebe5 100644 --- a/base/remote/man/rabbitmq_post_message.Rd +++ b/base/remote/man/rabbitmq_post_message.Rd @@ -2,9 +2,7 @@ % Please edit documentation in R/rabbitmq.R \name{rabbitmq_post_message} \alias{rabbitmq_post_message} -\title{Post message to RabbitMQ. This will submit a message to RabbitMQ, if the -queue does not exist it will be created. The message will be converted to -a json message that is submitted.} +\title{Post message to RabbitMQ.} \usage{ rabbitmq_post_message(uri, queue, message, prefix = "", port = 15672) } @@ -23,9 +21,9 @@ rabbitmq_post_message(uri, queue, message, prefix = "", port = 15672) the result of the post if message was send, or NA if it failed. } \description{ -Post message to RabbitMQ. This will submit a message to RabbitMQ, if the -queue does not exist it will be created. The message will be converted to -a json message that is submitted. +This will submit a message to RabbitMQ, if the queue does not exist it will +be created. The message will be converted to a json message that is +submitted. } \author{ Alexey Shiklomanov, Rob Kooper diff --git a/base/remote/man/rabbitmq_send_message.Rd b/base/remote/man/rabbitmq_send_message.Rd index 581e40fda14..bdc6eae85b2 100644 --- a/base/remote/man/rabbitmq_send_message.Rd +++ b/base/remote/man/rabbitmq_send_message.Rd @@ -2,8 +2,7 @@ % Please edit documentation in R/rabbitmq.R \name{rabbitmq_send_message} \alias{rabbitmq_send_message} -\title{Send a message to RabbitMQ rest API. It will check the resulting status code -and print a message in case something goes wrong.} +\title{Send a message to RabbitMQ rest API.} \usage{ rabbitmq_send_message(url, auth, body, action = "POST", silent = FALSE) } @@ -23,6 +22,6 @@ will return NA if message failed, otherwise it will either return the resulting message, or if not availble an empty string "". } \description{ -Send a message to RabbitMQ rest API. It will check the resulting status code -and print a message in case something goes wrong. +It will check the resulting status code and print a message in case +something goes wrong. } diff --git a/base/remote/man/start_rabbitmq.Rd b/base/remote/man/start_rabbitmq.Rd index 209d06e1c4e..cf7b2fd7592 100644 --- a/base/remote/man/start_rabbitmq.Rd +++ b/base/remote/man/start_rabbitmq.Rd @@ -7,7 +7,7 @@ start_rabbitmq(folder, rabbitmq_uri, rabbitmq_queue) } \value{ -Output of execution command, as a character (see \code{\link[=remote.execute.cmd]{remote.execute.cmd()}}). 
+Output of execution command, as a character (see \code{\link[=rabbitmq_post_message]{rabbitmq_post_message()}}).
 }
 \description{
 Start model execution using rabbitmq

From 576b84755abf1f44ba6cc9ace93a019bf16ab78b Mon Sep 17 00:00:00 2001
From: Kristina Riemer
Date: Thu, 22 Oct 2020 06:25:57 -0700
Subject: [PATCH 1517/2289] Make minor fixes to remote execute host info in docs

---
 book_source/03_topical_pages/03_pecan_xml.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd
index de9d1cd0ea1..28162727f44 100644
--- a/book_source/03_topical_pages/03_pecan_xml.Rmd
+++ b/book_source/03_topical_pages/03_pecan_xml.Rmd
@@ -479,7 +479,7 @@ The following provides a quick overview of XML tags related to remote execution.
 TRUE
 qsub -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash
 Your job ([0-9]+) .*
- qstat -j @JOBID@ &> /dev/null || echo DONE
+ 'qstat -j @JOBID@ &> /dev/null || echo DONE'
 module load udunits R/R-3.0.0_gnu-4.4.6
 /usr/local/bin/modellauncher

From fb38a67385857e97235a4bd5b97cf14b5b1dc9d7 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Thu, 22 Oct 2020 09:02:38 -0500
Subject: [PATCH 1518/2289] Update documentation

I _think_ I've updated the data.atmosphere documentation and just that... hopefully I didn't break pecan (because that's something that happens when I blindly muddle through)... fingers crossed.

---
 modules/data.atmosphere/DESCRIPTION | 2 +-
 modules/data.atmosphere/man/extract.local.CMIP5.Rd | 3 +++
 modules/data.atmosphere/man/narr_flx_vars.Rd | 8 +++++++-
 modules/data.atmosphere/man/pecan_standard_met_table.Rd | 4 +++-
 4 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION
index e56422f3fa5..be5ba92ecae 100644
--- a/modules/data.atmosphere/DESCRIPTION
+++ b/modules/data.atmosphere/DESCRIPTION
@@ -69,4 +69,4 @@ Copyright: Authors
 LazyLoad: yes
 LazyData: FALSE
 Encoding: UTF-8
-RoxygenNote: 7.0.2
+RoxygenNote: 7.1.1

diff --git a/modules/data.atmosphere/man/extract.local.CMIP5.Rd b/modules/data.atmosphere/man/extract.local.CMIP5.Rd
index 575dae521f4..246df4349ff 100644
--- a/modules/data.atmosphere/man/extract.local.CMIP5.Rd
+++ b/modules/data.atmosphere/man/extract.local.CMIP5.Rd
@@ -16,6 +16,7 @@ extract.local.CMIP5(
   scenario,
   ensemble_member = "r1i1p1",
   date.origin = NULL,
+  adjust.pr = 1,
   overwrite = FALSE,
   verbose = FALSE,
   ...
@@ -44,6 +45,8 @@ extract.local.CMIP5(
 If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and
 850 for p1000 simulations (plus MPI-ESM-P historical). Format: YYYY-MM-DD}

+\item{adjust.pr}{- adjustment factor for precipitation when the extracted values seem off}
+
 \item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?}

 \item{verbose}{logical. to control printing of debug info}

diff --git a/modules/data.atmosphere/man/narr_flx_vars.Rd b/modules/data.atmosphere/man/narr_flx_vars.Rd
index c2da49226d6..920459e08b5 100644
--- a/modules/data.atmosphere/man/narr_flx_vars.Rd
+++ b/modules/data.atmosphere/man/narr_flx_vars.Rd
@@ -6,7 +6,13 @@
 \alias{narr_sfc_vars}
 \alias{narr_all_vars}
 \title{NARR flux and sfc variables}
-\format{An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 5 rows and 3 columns.}
+\format{
+
An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 5 rows and 3 columns.
+
+An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 3 rows and 3 columns.
+
+An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 8 rows and 3 columns.
+}
 \usage{
 narr_flx_vars

diff --git a/modules/data.atmosphere/man/pecan_standard_met_table.Rd b/modules/data.atmosphere/man/pecan_standard_met_table.Rd
index 9108ca6f6c2..16bfadce2d9 100644
--- a/modules/data.atmosphere/man/pecan_standard_met_table.Rd
+++ b/modules/data.atmosphere/man/pecan_standard_met_table.Rd
@@ -4,7 +4,9 @@
 \name{pecan_standard_met_table}
 \alias{pecan_standard_met_table}
 \title{Conversion table for PEcAn standard meteorology}
-\format{An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 18 rows and 8 columns.}
+\format{
+An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 18 rows and 8 columns.
+}
 \usage{
 pecan_standard_met_table
 }

From 27cc767f1f4b4a896e6719649bd0524874e9a1d6 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Thu, 22 Oct 2020 10:10:45 -0400
Subject: [PATCH 1519/2289] need to keep roxygen versions consistent

---
 modules/data.atmosphere/DESCRIPTION | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION
index be5ba92ecae..e56422f3fa5 100644
--- a/modules/data.atmosphere/DESCRIPTION
+++ b/modules/data.atmosphere/DESCRIPTION
@@ -69,4 +69,4 @@ Copyright: Authors
 LazyLoad: yes
 LazyData: FALSE
 Encoding: UTF-8
-RoxygenNote: 7.1.1
+RoxygenNote: 7.0.2

From 5a99547592b2a5f08372fdfbad686d0e82cd44e4 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Thu, 22 Oct 2020 10:31:04 -0500
Subject: [PATCH 1520/2289] fix NLDAS extraction issue

at some point my files started following CF standard. :shrug:

---
 modules/data.atmosphere/R/extract_local_NLDAS.R | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/modules/data.atmosphere/R/extract_local_NLDAS.R b/modules/data.atmosphere/R/extract_local_NLDAS.R
index 8039a1d7d0d..a23c4d74531 100644
--- a/modules/data.atmosphere/R/extract_local_NLDAS.R
+++ b/modules/data.atmosphere/R/extract_local_NLDAS.R
@@ -142,6 +142,8 @@ extract.local.NLDAS <- function(outfolder, in.path, start_date, end_date, lat.in
     v.nldas <- paste(var$NLDAS.name[v])
     v.cf <- paste(var$CF.name [v])

+    if(!v.nldas %in% names(dap_file$var) & v.cf %in% names(dap_file$var)) v.nldas <- v.cf
+
     # Variables have different dimensions (which is a pain in the butt)
     # so we need to check to see whether we're pulling 4 dimensions or just 3
     if(dap_file$var[[v.nldas]]$ndims == 4){

From b9acb4664ecd05b6eb23eec8e9782e6d25df1ba4 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Thu, 22 Oct 2020 11:52:54 -0500
Subject: [PATCH 1521/2289] support for single-member downscaling

plus some better labels & resolution to the figures.
---
 .../data.atmosphere/R/debias_met_regression.R | 46 +++++++++++++------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R
index 4c5ceead22d..5e6e9dc7920 100644
--- a/modules/data.atmosphere/R/debias_met_regression.R
+++ b/modules/data.atmosphere/R/debias_met_regression.R
@@ -114,7 +114,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
     ens.src <- ens.train
   }

-  if(pair.ens==F & ncol(source.data[[2]])==1 ){
+  if(pair.ens==F & ncol(source.data[[2]])==1){
     ens.src=1
   } else if(pair.ens==F & ncol(source.data[[2]]) > n.ens) {
     ens.src <- sample(1:ncol(source.data[[2]]), ncol(source.data[[2]]),replace=T)
@@ -524,7 +524,11 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
       # Generate a random distribution of betas using the covariance matrix
       # I think the anomalies might be problematic, so lets get way more betas than we need and trim the distribution
       # set.seed=42
-      Rbeta <- matrix(MASS::mvrnorm(n=n.new, coef(mod.bias), vcov(mod.bias)), ncol=length(coef(mod.bias)))
+      if(n.ens==1){
+        Rbeta <- matrix(coef(mod.bias), ncol=length(coef(mod.bias)))
+      } else {
+        Rbeta <- matrix(MASS::mvrnorm(n=n.new, coef(mod.bias), vcov(mod.bias)), ncol=length(coef(mod.bias)))
+      }
       dimnames(Rbeta)[[2]] <- names(coef(mod.bias))

       # # Filter our betas to remove outliers
@@ -542,7 +546,12 @@
       # while(nrow(Rbeta.anom)<1 & try.now<=ntries){
       # Generate a random distribution of betas using the covariance matrix
       # I think the anomalies might be problematic, so lets get way more betas than we need and trim the distribution
+      if(n.ens==1){
+        Rbeta.anom <- matrix(coef(mod.anom), ncol=length(coef(mod.anom)))
+      } else {
+        Rbeta.anom <- matrix(MASS::mvrnorm(n=n.new, coef(mod.anom), vcov(mod.anom)), ncol=length(coef(mod.anom)))
+      }
       dimnames(Rbeta.anom)[[2]] <- names(coef(mod.anom))
       # # Filter our betas to remove outliers
       # ci.anom <- matrix(apply(Rbeta.anom, 2, quantile, c(0.01, 0.99)), nrow=2)
       # Rbeta.anom <- Rbeta.anom[which(apply(Rbeta.anom, 1, function(x) all(x > ci.anom[1,] & x < ci.anom[2,]))),]
      # Rbeta.anom <- matrix(Rbeta.anom[sample(1:nrow(Rbeta.anom), n.new, replace=T),], ncol=ncol(Rbeta.anom))
@@ -560,7 +569,11 @@

       if(v == "precipitation_flux"){
-        Rbeta.ann <- matrix(MASS::mvrnorm(n=n.new, coef(mod.ann), vcov(mod.ann)), ncol=length(coef(mod.ann)))
+        if(n.ens==1){
+          Rbeta.ann <- matrix(coef(mod.ann), ncol=length(coef(mod.ann)))
+        } else {
+          Rbeta.ann <- matrix(MASS::mvrnorm(n=n.new, coef(mod.ann), vcov(mod.ann)), ncol=length(coef(mod.ann)))
+        }
units="in", res=220)
       print(
       ggplot2::ggplot(data=dat.pred[dat.pred$Year>=mean(dat.pred$Year)-1 & dat.pred$Year<=mean(dat.pred$Year)+1,]) +
-        ggplot2::geom_ribbon(ggplot2::aes(x=Date, ymin=lwr, ymax=upr), fill="red", alpha=0.5) +
-        ggplot2::geom_line(ggplot2::aes(x=Date, y=mean), color="red", size=0.5) +
-        ggplot2::geom_line(ggplot2::aes(x=Date, y=obs), color='black', size=0.5) +
+        ggplot2::geom_ribbon(ggplot2::aes(x=Date, ymin=lwr, ymax=upr, fill="corrected"), alpha=0.5) +
+        ggplot2::geom_line(ggplot2::aes(x=Date, y=mean, color="corrected"), size=0.5) +
+        ggplot2::geom_line(ggplot2::aes(x=Date, y=obs, color="original"), size=0.5) +
+        ggplot2::scale_color_manual(values=c("corrected" = "red", "original"="black")) +
+        ggplot2::scale_fill_manual(values=c("corrected" = "red", "original"="black")) +
+        ggplot2::guides(fill=F) +
         ggplot2::ggtitle(paste0(v, " - ensemble mean & 95% CI (daily slice)")) +
         ggplot2::theme_bw()
       )
@@ -1011,7 +1027,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
       stack.sims <- utils::stack(data.frame(dat.out[[v]][,sample(1:n.ens, min(3, n.ens))]))
       stack.sims[,c("Year", "DOY", "Date")] <- dat.pred[,c("Year", "DOY", "Date")]
 
-      grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "day2.png", sep="_")))
+      grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "day2.png", sep="_")), height=6, width=6, units="in", res=220)
       print(
       ggplot2::ggplot(data=stack.sims[stack.sims$Year>=mean(stack.sims$Year)-2 & stack.sims$Year<=mean(stack.sims$Year)+2,]) +
         ggplot2::geom_line(ggplot2::aes(x=Date, y=values, color=ind), size=0.2, alpha=0.8) +
@@ -1026,12 +1042,16 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
                           FUN=mean)
       names(dat.yr)[1] <- "Year"
 
-      grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "annual.png", sep="_")))
+      grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "annual.png", sep="_")), height=6, width=6, units="in", res=220)
       print(
       ggplot2::ggplot(data=dat.yr[,]) +
-        ggplot2::geom_ribbon(ggplot2::aes(x=Year, ymin=lwr, ymax=upr), fill="red", alpha=0.5) +
-        ggplot2::geom_line(ggplot2::aes(x=Year, y=mean), color="red", size=0.5) +
-        ggplot2::geom_line(ggplot2::aes(x=Year, y=obs), color='black', size=0.5) +
+        ggplot2::geom_ribbon(ggplot2::aes(x=Year, ymin=lwr, ymax=upr, fill="corrected"), alpha=0.5) +
+        ggplot2::geom_line(ggplot2::aes(x=Year, y=mean, color="corrected"), size=0.5) +
+        ggplot2::geom_line(ggplot2::aes(x=Year, y=obs, color="original"), size=0.5) +
+        ggplot2::scale_color_manual(values=c("corrected" = "red", "original"="black")) +
+        ggplot2::scale_fill_manual(values=c("corrected" = "red", "original"="black")) +
+        ggplot2::guides(fill=F) +
+        ggplot2::ggtitle(paste0(v, " - annual mean time series")) +
         ggplot2::theme_bw()
       )
 

From 887aafac918c54029634717a340b30198dd7c5b4 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Thu, 22 Oct 2020 12:20:14 -0500
Subject: [PATCH 1522/2289] Fix temporal trends in tair

A logic error was removing temporal trends from temperature rather than
adding them in. To get the temporal trend back with uncertainty, we
essentially want the residual trend from the simulated distribution.
This means removing the mean trend from the air temp anomalies and then
adding back in a randomly distributed trend based on the Rbeta
distribution.
For a single-member ensemble, we could just work with the temp
anomalies since we're working with the model best-estimate coefficients
rather than a simulated distribution, but it makes for more consistent
code and logic if we keep it there.
---
 modules/data.atmosphere/R/debias_met_regression.R | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R
index 5e6e9dc7920..26a7f5dcef4 100644
--- a/modules/data.atmosphere/R/debias_met_regression.R
+++ b/modules/data.atmosphere/R/debias_met_regression.R
@@ -499,7 +499,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
       }
       # summary(mod.anom)
       # plot(mod.anom, pages=1)
-      
+
       # pred.anom <- predict(mod.anom)
       resid.anom <- resid(mod.anom)
       # ---------
@@ -637,11 +637,14 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
       # we'll get the uncertainty subtract the multi-decadal trend out of the anomalies; not a perfect solution, but it will increase the variability
       if(pair.anoms==F & (v %in% c("air_temperature_maximum", "air_temperature_minimum"))){
         # sim1b.norm <- apply(sim1b, 1, mean)
-        sim1b[,cols.redo] <- as.vector(met.src[met.src$ind==ind,"anom.raw"]) - sim1b[,cols.redo] # Get the range around that medium-frequency trend
+        # What we need is to remove the mean-trend from the anomalies and then add the trend (with uncertainties) back in
+        # Note that for a single-member ensemble, this just undoes itself
+        anom.detrend <- met.src[met.src$ind==ind,"anom.raw"] - predict(mod.anom)
+
+        sim1b[,cols.redo] <- anom.detrend + sim1b[,cols.redo] # Get the range around that medium-frequency trend
       }
 
-
       # Option 1: Adding a constant error per time series for the cliamte correction
       # (otherwise we're just doubling anomalies)
       # sim1a <- sweep(sim1a, 2, rnorm(n, mean(resid.bias), sd(resid.bias)), FUN="+")

From 9b835e64af4985463e53dc3e78ff5117814c0639 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Thu, 22 Oct 2020 12:34:34 -0500
Subject: [PATCH 1523/2289] time fix for QAQC graphs

---
 modules/data.atmosphere/R/debias_met_regression.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R
index 26a7f5dcef4..10a9e339a43 100644
--- a/modules/data.atmosphere/R/debias_met_regression.R
+++ b/modules/data.atmosphere/R/debias_met_regression.R
@@ -1006,6 +1006,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
       dir.create(path.diagnostics, recursive=T, showWarnings=F)
 
       dat.pred <- source.data$time
+      dat.pred$Date <- as.POSIXct(dat.pred$Date)
       dat.pred$obs <- apply(source.data[[v]], 1, mean, na.rm=T)
       dat.pred$mean <- apply(dat.out[[v]], 1, mean, na.rm=T)
       dat.pred$lwr <- apply(dat.out[[v]], 1, quantile, 0.025, na.rm=T)

From ce8b92eabde3721795e912facdc4d8a5e2939e44 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Thu, 22 Oct 2020 13:19:37 -0500
Subject: [PATCH 1524/2289] tried documentation update... maybe this works?
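For context, the `.Rd` churn that this commit and the earlier "need to keep roxygen versions consistent" commit wrestle with comes from regenerating documentation with a different roxygen2 version than the one recorded in `DESCRIPTION` (7.0.2 and 7.1.1 wrap the `\format{}` block differently). A minimal sketch of the pinning approach used elsewhere in this repo, assuming docs are regenerated for `modules/data.atmosphere`:

```
# Pin roxygen2 before regenerating docs so .Rd formatting stays stable
devtools::install_version("roxygen2", version = "7.0.2",
                          repos = "https://cloud.r-project.org")
roxygen2::roxygenise("modules/data.atmosphere")
```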
--- modules/data.atmosphere/man/narr_flx_vars.Rd | 8 +------- modules/data.atmosphere/man/pecan_standard_met_table.Rd | 4 +--- 2 files changed, 2 insertions(+), 10 deletions(-) diff --git a/modules/data.atmosphere/man/narr_flx_vars.Rd b/modules/data.atmosphere/man/narr_flx_vars.Rd index 920459e08b5..c2da49226d6 100644 --- a/modules/data.atmosphere/man/narr_flx_vars.Rd +++ b/modules/data.atmosphere/man/narr_flx_vars.Rd @@ -6,13 +6,7 @@ \alias{narr_sfc_vars} \alias{narr_all_vars} \title{NARR flux and sfc variables} -\format{ -An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 5 rows and 3 columns. - -An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 3 rows and 3 columns. - -An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 8 rows and 3 columns. -} +\format{An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 5 rows and 3 columns.} \usage{ narr_flx_vars diff --git a/modules/data.atmosphere/man/pecan_standard_met_table.Rd b/modules/data.atmosphere/man/pecan_standard_met_table.Rd index 16bfadce2d9..9108ca6f6c2 100644 --- a/modules/data.atmosphere/man/pecan_standard_met_table.Rd +++ b/modules/data.atmosphere/man/pecan_standard_met_table.Rd @@ -4,9 +4,7 @@ \name{pecan_standard_met_table} \alias{pecan_standard_met_table} \title{Conversion table for PEcAn standard meteorology} -\format{ -An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 18 rows and 8 columns. -} +\format{An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 18 rows and 8 columns.} \usage{ pecan_standard_met_table } From a0d9c8f644a4c368770fa8fadcc1634eb7d92c17 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Thu, 22 Oct 2020 15:42:08 -0500 Subject: [PATCH 1525/2289] trigger at least 1 ensemble member per comment from @ankurdesai --- modules/data.atmosphere/R/debias_met_regression.R | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index 10a9e339a43..be8847c898a 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -72,6 +72,10 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU if(parallel==TRUE) warning("Warning! Parallel processing not reccomended because of memory constraints") if(ncol(source.data[[2]])>1) warning("Feeding an ensemble of source data is currently experimental! This could crash") + if(n.ens<1){ + warning("You need to generate at least one vector of outputs. Changing n.ens to 1, which will be based on the model means.") + n.ens=1 + } if(!uncert.prop %in% c("mean", "random")) stop("unspecified uncertainty propogation method. 
Must be 'random' or 'mean' ") # Variables need to be done in a specific order From eca28a19058d6d15aeadd17d0bec6c18381881a3 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 20 Oct 2020 14:37:52 +0200 Subject: [PATCH 1526/2289] try stable version of setup-r action --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index d14246432c3..2a7f3ac2c04 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -55,7 +55,7 @@ jobs: - uses: r-lib/actions/pr-fetch@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} - - uses: r-lib/actions/setup-r@master + - uses: r-lib/actions/setup-r@v1 - name : download artifacts uses: actions/download-artifact@v1 with: From 318973b6d736057fee82aab46a8bb0c5d65b56ef Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 20 Oct 2020 17:04:43 +0200 Subject: [PATCH 1527/2289] do not setup R --- .github/workflows/styler-actions.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 2a7f3ac2c04..a2accecd578 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -20,7 +20,7 @@ jobs: - name: Install dependencies run: | Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")' - Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "http://cran.us.r-project.org")' + Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "cloud.r-project.org")' - name: string operations shell: bash run: | @@ -55,7 +55,7 @@ jobs: - uses: r-lib/actions/pr-fetch@master with: repo-token: ${{ secrets.GITHUB_TOKEN }} - - uses: r-lib/actions/setup-r@v1 + # - uses: r-lib/actions/setup-r@v1 - name : download artifacts uses: actions/download-artifact@v1 with: From 4e42854ab2744cc794249aa71eb357ba91cbf36c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 21 Oct 2020 20:40:14 +0200 Subject: [PATCH 1528/2289] Actions: accept /document command to Roxygenize PR, simplify stlying logic --- .github/workflows/styler-actions.yml | 43 +++++++++++----------------- 1 file changed, 17 insertions(+), 26 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index a2accecd578..0c3b660fc63 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -10,8 +10,8 @@ jobs: steps: - id: file_changes uses: trilom/file-changes-action@v1.2.3 - - name: testing - run: echo '${{ steps.file_changes.outputs.files_modified}}' + - name: list changed files + run: echo '${{ steps.file_changes.outputs.files_modified }}' - uses: actions/checkout@v2 - uses: r-lib/actions/pr-fetch@master with: @@ -24,18 +24,11 @@ jobs: - name: string operations shell: bash run: | - echo '${{ steps.file_changes.outputs.files_modified}}' > names.txt - cat names.txt | tr -d '[]' > changed_files.txt - text=$(cat changed_files.txt) - IFS=',' read -ra ids <<< "$text" - for i in "${ids[@]}"; do if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; then echo "$i" >> files_to_style.txt; fi; done - - name: Upload artifacts - uses: actions/upload-artifact@v1 - with: - name: artifacts - path: files_to_style.txt - - name: Style - run: for i in $(cat files_to_style.txt); do Rscript -e "styler::style_file("$i")"; done + for i in ${{ join(steps.file_changes.outputs.files_modified, " ") }}; do; + if [[ "$i" == *.R\" 
|| "$i" == *.Rmd\" ]]; + then Rscript -e "styler::style_file("$i")"; + fi; + done - name: commit run: | git add \*.R @@ -47,29 +40,27 @@ jobs: check: - needs: [style] + if: (startsWith(github.event.comment.body, '/style') || startsWith(github.event.comment.body, '/document')) runs-on: ubuntu-latest container: pecan/depends:develop steps: - uses: actions/checkout@v2 - - uses: r-lib/actions/pr-fetch@master + - uses: r-lib/actions/pr-fetch@v1 with: repo-token: ${{ secrets.GITHUB_TOKEN }} - # - uses: r-lib/actions/setup-r@v1 - - name : download artifacts - uses: actions/download-artifact@v1 - with: - name: artifacts - name: update dependency lists run: Rscript scripts/generate_dependencies.R + - id: file_changes + uses: trilom/file-changes-action@v1.2.3 - name : make shell: bash run: | - cut -d / -f 1-2 artifacts/files_to_style.txt | tr -d '"' > changed_dirs.txt - cat changed_dirs.txt - sort changed_dirs.txt | uniq > needs_documenting.txt - cat needs_documenting.txt - for i in $(cat needs_documenting.txt); do make .doc/${i}; done + echo ${{ join(steps.file_changes.outputs.files_modified, "\n") }} \ + | cut -d / -f 1-2 \ + | sort \ + | uniq \ + | grep -e base -e models -e modules \ + | xargs -n1 -I{} make .doc/{} - name: commit run: | git config --global user.email "pecan_bot@example.com" From 972ca391b20a3c74fd492703d411e9abc7b005f5 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 21 Oct 2020 21:01:30 +0200 Subject: [PATCH 1529/2289] single quotes? --- .github/workflows/styler-actions.yml | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 0c3b660fc63..28fc53ea54b 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -21,13 +21,15 @@ jobs: run: | Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")' Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "cloud.r-project.org")' - - name: string operations + - name: run styler shell: bash + env: + FILES: ${{ join(fromJSON(steps.file_changes.outputs.files_modified), ' ') }} run: | - for i in ${{ join(steps.file_changes.outputs.files_modified, " ") }}; do; - if [[ "$i" == *.R\" || "$i" == *.Rmd\" ]]; - then Rscript -e "styler::style_file("$i")"; - fi; + for f in ${FILES}; do + if [[ "$f" == *.R\" || "$f" == *.Rmd\" ]] + then Rscript -e "styler::style_file("$f")" + fi done - name: commit run: | @@ -39,7 +41,7 @@ jobs: repo-token: ${{ secrets.GITHUB_TOKEN }} - check: + document: if: (startsWith(github.event.comment.body, '/style') || startsWith(github.event.comment.body, '/document')) runs-on: ubuntu-latest container: pecan/depends:develop @@ -54,12 +56,15 @@ jobs: uses: trilom/file-changes-action@v1.2.3 - name : make shell: bash + env: + FILES: ${{ join(fromJSON(steps.file_changes.outputs.files_modified), ' ') }} run: | - echo ${{ join(steps.file_changes.outputs.files_modified, "\n") }} \ + echo ${FILES} \ + | tr ' ' '\n' \ + | grep -e base -e models -e modules \ | cut -d / -f 1-2 \ | sort \ | uniq \ - | grep -e base -e models -e modules \ | xargs -n1 -I{} make .doc/{} - name: commit run: | From 9811e4be80356741627185c3a1281828083c7448 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 22 Oct 2020 12:07:43 +0200 Subject: [PATCH 1530/2289] roxygen not needed for styling (?) 
--- .github/workflows/styler-actions.yml | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 28fc53ea54b..b16d7cad0c0 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -17,10 +17,9 @@ jobs: with: repo-token: ${{ secrets.GITHUB_TOKEN }} - uses: r-lib/actions/setup-r@master - - name: Install dependencies + - name: Install styler run: | - Rscript -e 'install.packages(c("styler", "devtools"), repos = "cloud.r-project.org")' - Rscript -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "cloud.r-project.org")' + Rscript -e 'install.packages("styler", repos = "cloud.r-project.org")' - name: run styler shell: bash env: From 263991de55e5a8c49d26ce254b817da6f5061edd Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 22 Oct 2020 17:24:59 +0200 Subject: [PATCH 1531/2289] remove leftover quotes --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index b16d7cad0c0..bf440581af0 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -26,7 +26,7 @@ jobs: FILES: ${{ join(fromJSON(steps.file_changes.outputs.files_modified), ' ') }} run: | for f in ${FILES}; do - if [[ "$f" == *.R\" || "$f" == *.Rmd\" ]] + if [[ "$f" == *.R || "$f" == *.Rmd ]] then Rscript -e "styler::style_file("$f")" fi done From 479bde7c8ed24fdc90982c4eddf68f4cf59b1925 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 22 Oct 2020 17:42:41 +0200 Subject: [PATCH 1532/2289] and add missing quotes --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index bf440581af0..1f2abba3d52 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -27,7 +27,7 @@ jobs: run: | for f in ${FILES}; do if [[ "$f" == *.R || "$f" == *.Rmd ]] - then Rscript -e "styler::style_file("$f")" + then Rscript -e 'styler::style_file("'${f}'")' fi done - name: commit From fdd475343fdfcd7f488c8c66a65d29f890d4a097 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 23 Oct 2020 21:46:24 +0200 Subject: [PATCH 1533/2289] commit any changes to depend files --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 1f2abba3d52..cf59e9a8bb7 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -69,7 +69,7 @@ jobs: run: | git config --global user.email "pecan_bot@example.com" git config --global user.name "PEcAn stylebot" - git add \*.Rd + git add \*.Rd Makefile.depends docker/depends/pecan.depends docker/depends/pecan.depends.R if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated documentation update' ; fi - uses: r-lib/actions/pr-push@master with: From efd40332f8cff03b20344661acf509c8a51f483c Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 24 Oct 2020 10:03:40 +0000 Subject: [PATCH 1534/2289] automated documentation update --- base/settings/man/get_args.Rd | 18 ++++++++++++++++++ docker/depends/pecan.depends | 1 + docker/depends/pecan.depends.R | 1 + 3 files changed, 20 insertions(+) create mode 100644 base/settings/man/get_args.Rd diff --git a/base/settings/man/get_args.Rd 
b/base/settings/man/get_args.Rd new file mode 100644 index 00000000000..d405fb6009e --- /dev/null +++ b/base/settings/man/get_args.Rd @@ -0,0 +1,18 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/get_args.R +\name{get_args} +\alias{get_args} +\title{Get Args} +\usage{ +get_args() +} +\value{ + +} +\description{ +Used in web/workflow.R to parse command line arguments. +See also https://github.com/PecanProject/pecan/pull/2626. +} +\examples{ +\dontrun{./web/workflow.R -h} +} diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends index 7bee6586fd1..3d4db6bc5c1 100644 --- a/docker/depends/pecan.depends +++ b/docker/depends/pecan.depends @@ -75,6 +75,7 @@ install2.r -e -s -l "${RLIB}" -n -1\ neonUtilities \ nimble \ nneo \ + optparse \ parallel \ plotrix \ plyr \ diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index bc163f50ccd..dce29950d9b 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -75,6 +75,7 @@ wanted <- c( 'neonUtilities', 'nimble', 'nneo', +'optparse', 'parallel', 'plotrix', 'plyr', From 8f1fce682b8784e3efaf9bb38a308ca9c3e07790 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 24 Oct 2020 18:50:03 +0200 Subject: [PATCH 1535/2289] commit namespace changes too --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index cf59e9a8bb7..b34780d3206 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -69,7 +69,7 @@ jobs: run: | git config --global user.email "pecan_bot@example.com" git config --global user.name "PEcAn stylebot" - git add \*.Rd Makefile.depends docker/depends/pecan.depends docker/depends/pecan.depends.R + git add \*.Rd \*NAMESPACE Makefile.depends docker/depends/pecan.depends docker/depends/pecan.depends.R if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated documentation update' ; fi - uses: r-lib/actions/pr-push@master with: From c9336c89928d81115952fc7e86335ea94655ae27 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 24 Oct 2020 17:56:06 +0000 Subject: [PATCH 1536/2289] automated documentation update --- base/settings/NAMESPACE | 1 + 1 file changed, 1 insertion(+) diff --git a/base/settings/NAMESPACE b/base/settings/NAMESPACE index 6deea2209fe..c185b4a03e4 100644 --- a/base/settings/NAMESPACE +++ b/base/settings/NAMESPACE @@ -36,6 +36,7 @@ export(createSitegroupMultiSettings) export(expandMultiSettings) export(fix.deprecated.settings) export(getRunSettings) +export(get_args) export(is.MultiSettings) export(is.SafeList) export(is.Settings) From d6980c8fdabd92b744a1045a30a911f3a51cae67 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Sat, 24 Oct 2020 13:17:30 -0500 Subject: [PATCH 1537/2289] enable webhook build see https://blog.s1h.org/github-actions-webhook/ --- .github/workflows/ci.yml | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index f2e34f7f167..047f6c1f10a 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -1,6 +1,10 @@ name: CI on: + repository_dispatch: + types: + - force + push: branches: - master From 8f384f07eefb3893cf0ca8371bcdb98031d7b015 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 24 Oct 2020 14:22:12 -0400 Subject: [PATCH 1538/2289] update depends --- Makefile.depends | 2 +- docker/depends/pecan.depends.R | 5 +++++ 2 files changed, 6 
insertions(+), 1 deletion(-)

diff --git a/Makefile.depends b/Makefile.depends
index 3fd75b3ddfa..e833665ad8e 100644
--- a/Makefile.depends
+++ b/Makefile.depends
@@ -10,7 +10,7 @@ $(call depends,base/visualization): | .install/base/db .install/base/logger .ins
 $(call depends,base/workflow): | .install/modules/data.atmosphere .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils
 $(call depends,modules/allometry): | .install/base/logger .install/base/db
 $(call depends,modules/assim.batch): | .install/modules/benchmark .install/base/db .install/modules/emulator .install/base/logger .install/modules/meta.analysis .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils
-$(call depends,modules/assim.sequential): | .install/base/logger .install/base/remote
+$(call depends,modules/assim.sequential): | .install/modules/benchmark .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils
 $(call depends,modules/benchmark): | .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils .install/modules/data.land
 $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils
 $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils
diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R
index bc163f50ccd..6aa8e02c182 100644
--- a/docker/depends/pecan.depends.R
+++ b/docker/depends/pecan.depends.R
@@ -25,11 +25,13 @@ wanted <- c(
 'BioCro',
 'bit64',
 'coda',
+'corrplot',
 'data.table',
 'dataone',
 'datapack',
 'DBI',
 'dbplyr',
+'devtools',
 'doParallel',
 'dplR',
 'dplyr',
@@ -37,10 +39,12 @@ wanted <- c(
 'foreach',
 'fs',
 'furrr',
+'future',
 'geonames',
 'getPass',
 'ggmap',
 'ggplot2',
+'ggrepel',
 'glue',
 'graphics',
 'grDevices',
@@ -62,6 +66,7 @@ wanted <- c(
 'maps',
 'maptools',
 'MASS',
+'Matrix',
 'mclust',
 'MCMCpack',
 'methods',

From cb3e05870f97791f1d4252832062e794c2424fb8 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Sat, 24 Oct 2020 19:48:55 -0500
Subject: [PATCH 1539/2289] curl + updates

this installs curl and removes pecan.depends.
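With this change, `docker/depends/pecan.depends.R` becomes the single installer the dependency image runs in place of the old bash script. A rough sketch of the idea (not the script's exact logic; the package names are a small subset of the real `wanted` vector, and `RLIB` defaults as shown in the diffs above):

```
# Sketch of the pecan.depends.R pattern: an R-driven dependency installer
wanted <- c("abind", "coda", "ncdf4")  # illustrative subset
rlib <- Sys.getenv("RLIB", "/usr/local/lib/R/site-library")
missing <- setdiff(wanted, rownames(installed.packages()))
if (length(missing) > 0) {
  install.packages(missing, lib = rlib,
                   repos = "https://cloud.r-project.org")
}
```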
--- docker/depends/Dockerfile | 7 +- docker/depends/pecan.depends | 126 -------------------------------- scripts/generate_dependencies.R | 25 +------ 3 files changed, 5 insertions(+), 153 deletions(-) delete mode 100644 docker/depends/pecan.depends diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index 74af08bafde..f3707d531d8 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -1,4 +1,4 @@ -ARG R_VERSION="3.5" +ARG R_VERSION="4.0.2" # ---------------------------------------------------------------------- # PECAN FOR MODEL BASE IMAGE @@ -21,6 +21,7 @@ RUN if [ "$(lsb_release -s -c)" = "stretch" ]; then \ # ---------------------------------------------------------------------- RUN apt-get update \ && apt-get -y --no-install-recommends install \ + curl \ jags \ time \ openssh-client \ @@ -37,9 +38,9 @@ RUN apt-get update \ # ---------------------------------------------------------------------- # INSTALL DEPENDENCIES # ---------------------------------------------------------------------- -COPY pecan.depends / +COPY pecan.depends.R / RUN Rscript -e "install.packages(c('devtools'), repos = 'http://cran.rstudio.com')" \ && Rscript -e "devtools::install_version('roxygen2', '7.0.2', repos = 'http://cran.rstudio.com')" \ - && bash /pecan.depends \ + && R_LIBS_USER='/usr/local/lib/R/site-library' Rscript /pecan.depends.R \ && rm -rf /tmp/* diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends deleted file mode 100644 index 7bee6586fd1..00000000000 --- a/docker/depends/pecan.depends +++ /dev/null @@ -1,126 +0,0 @@ -#!/bin/bash -# autogenerated do not edit -# use scripts/generate_dependencies.R - -# stop on error -set -e - -# Don't use X11 for rgl -RGL_USE_NULL=TRUE -RLIB=${R_LIBS_USER:-/usr/local/lib/R/site-library} - -# install remotes first in case packages are references in dependencies -installGithub.r \ - araiho/linkages_package \ - ebimodeling/biocro \ - MikkoPeltoniemi/Rpreles \ - ropensci/geonames \ - ropensci/nneo - -# install all packages (depends, imports, suggests) -install2.r -e -s -l "${RLIB}" -n -1\ - abind \ - BayesianTools \ - binaryLogic \ - BioCro \ - bit64 \ - coda \ - data.table \ - dataone \ - datapack \ - DBI \ - dbplyr \ - doParallel \ - dplR \ - dplyr \ - ellipse \ - foreach \ - fs \ - furrr \ - geonames \ - getPass \ - ggmap \ - ggplot2 \ - glue \ - graphics \ - grDevices \ - grid \ - gridExtra \ - hdf5r \ - here \ - httr \ - IDPmisc \ - jsonlite \ - knitr \ - lattice \ - linkages \ - lqmm \ - lubridate \ - Maeswrap \ - magic \ - magrittr \ - maps \ - maptools \ - MASS \ - mclust \ - MCMCpack \ - methods \ - mgcv \ - minpack.lm \ - mlegp \ - mockery \ - MODISTools \ - mvtnorm \ - ncdf4 \ - neonUtilities \ - nimble \ - nneo \ - parallel \ - plotrix \ - plyr \ - png \ - prodlim \ - progress \ - purrr \ - pwr \ - randtoolbox \ - raster \ - rcrossref \ - RCurl \ - REddyProc \ - redland \ - reshape \ - reshape2 \ - reticulate \ - rgdal \ - rjags \ - rjson \ - rlang \ - rnoaa \ - RPostgres \ - RPostgreSQL \ - Rpreles \ - RSQLite \ - sf \ - SimilarityMeasures \ - sirt \ - sp \ - stats \ - stringi \ - stringr \ - testthat \ - tibble \ - tictoc \ - tidyr \ - tidyverse \ - tools \ - traits \ - TruncatedNormal \ - truncnorm \ - udunits2 \ - urltools \ - utils \ - XML \ - xtable \ - xts \ - zoo diff --git a/scripts/generate_dependencies.R b/scripts/generate_dependencies.R index eac872b8bdc..13394111c5b 100755 --- a/scripts/generate_dependencies.R +++ b/scripts/generate_dependencies.R @@ -87,30 +87,7 @@ for (name in 
names(depends)) { cat(x, file = "Makefile.depends", sep = "\n", append = TRUE) } -# write for dockerfile -cat("#!/bin/bash", - "# autogenerated do not edit", - "# use scripts/generate_dependencies.R", - "", - "# stop on error", - "set -e", - "", - "# Don\'t use X11 for rgl", - "RGL_USE_NULL=TRUE", - "RLIB=${R_LIBS_USER:-/usr/local/lib/R/site-library}", - "", - "# install remotes first in case packages are references in dependencies", - paste0( - "installGithub.r \\\n ", - paste(sort(remotes), sep = "", collapse = " \\\n ")), - "", - "# install all packages (depends, imports, suggests)", - paste0( - "install2.r -e -s -l \"${RLIB}\" -n -1\\\n ", - paste(sort(docker), sep = "", collapse = " \\\n ")), - file = "docker/depends/pecan.depends", sep = "\n", append = FALSE) - -# write for CI +# write for docker dependency image cat("#!/usr/bin/env Rscript", "# autogenerated do not edit", "# use scripts/generate_dependencies.R", From 5fe6134a234e003905339dcc751e479db0b8362d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Oct 2020 17:04:06 +0100 Subject: [PATCH 1540/2289] pecan.depend was removed in #2716 --- .github/workflows/styler-actions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index b34780d3206..0eb0ac60e55 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -69,7 +69,7 @@ jobs: run: | git config --global user.email "pecan_bot@example.com" git config --global user.name "PEcAn stylebot" - git add \*.Rd \*NAMESPACE Makefile.depends docker/depends/pecan.depends docker/depends/pecan.depends.R + git add \*.Rd \*NAMESPACE Makefile.depends docker/depends/pecan.depends.R if [ "$(git diff --name-only --cached)" != "" ]; then git commit -m 'automated documentation update' ; fi - uses: r-lib/actions/pr-push@master with: From 78fa40672b62e6d0163177b82415dbf2a2e07a60 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 25 Oct 2020 17:21:40 +0100 Subject: [PATCH 1541/2289] resolve modify/delete conflict --- docker/depends/pecan.depends | 127 ----------------------------------- 1 file changed, 127 deletions(-) delete mode 100644 docker/depends/pecan.depends diff --git a/docker/depends/pecan.depends b/docker/depends/pecan.depends deleted file mode 100644 index 3d4db6bc5c1..00000000000 --- a/docker/depends/pecan.depends +++ /dev/null @@ -1,127 +0,0 @@ -#!/bin/bash -# autogenerated do not edit -# use scripts/generate_dependencies.R - -# stop on error -set -e - -# Don't use X11 for rgl -RGL_USE_NULL=TRUE -RLIB=${R_LIBS_USER:-/usr/local/lib/R/site-library} - -# install remotes first in case packages are references in dependencies -installGithub.r \ - araiho/linkages_package \ - ebimodeling/biocro \ - MikkoPeltoniemi/Rpreles \ - ropensci/geonames \ - ropensci/nneo - -# install all packages (depends, imports, suggests) -install2.r -e -s -l "${RLIB}" -n -1\ - abind \ - BayesianTools \ - binaryLogic \ - BioCro \ - bit64 \ - coda \ - data.table \ - dataone \ - datapack \ - DBI \ - dbplyr \ - doParallel \ - dplR \ - dplyr \ - ellipse \ - foreach \ - fs \ - furrr \ - geonames \ - getPass \ - ggmap \ - ggplot2 \ - glue \ - graphics \ - grDevices \ - grid \ - gridExtra \ - hdf5r \ - here \ - httr \ - IDPmisc \ - jsonlite \ - knitr \ - lattice \ - linkages \ - lqmm \ - lubridate \ - Maeswrap \ - magic \ - magrittr \ - maps \ - maptools \ - MASS \ - mclust \ - MCMCpack \ - methods \ - mgcv \ - minpack.lm \ - mlegp \ - mockery \ - MODISTools \ - mvtnorm \ - ncdf4 \ - 
neonUtilities \ - nimble \ - nneo \ - optparse \ - parallel \ - plotrix \ - plyr \ - png \ - prodlim \ - progress \ - purrr \ - pwr \ - randtoolbox \ - raster \ - rcrossref \ - RCurl \ - REddyProc \ - redland \ - reshape \ - reshape2 \ - reticulate \ - rgdal \ - rjags \ - rjson \ - rlang \ - rnoaa \ - RPostgres \ - RPostgreSQL \ - Rpreles \ - RSQLite \ - sf \ - SimilarityMeasures \ - sirt \ - sp \ - stats \ - stringi \ - stringr \ - testthat \ - tibble \ - tictoc \ - tidyr \ - tidyverse \ - tools \ - traits \ - TruncatedNormal \ - truncnorm \ - udunits2 \ - urltools \ - utils \ - XML \ - xtable \ - xts \ - zoo From b1d93865fd1a24be52b2b38e088a90202a373456 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 26 Oct 2020 15:27:19 -0400 Subject: [PATCH 1542/2289] API: Modify plumber calls for v1.0 Silences deprecation warnings about `plumber::plumber`. --- apps/api/R/entrypoint.R | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 3423f259912..2b4c9c41ba4 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -7,7 +7,7 @@ source("auth.R") source("general.R") -root <- plumber::plumber$new() +root <- plumber::Plumber$new() root$setSerializer(plumber::serializer_unboxed_json()) # Filter for authenticating users trying to hit the API endpoints @@ -20,31 +20,31 @@ root$handle("GET", "/api/ping", ping) root$handle("GET", "/api/status", status) # The endpoints mounted here are related to details of PEcAn models -models_pr <- plumber::plumber$new("models.R") +models_pr <- plumber::Plumber$new("models.R") root$mount("/api/models", models_pr) # The endpoints mounted here are related to details of PEcAn sites -sites_pr <- plumber::plumber$new("sites.R") +sites_pr <- plumber::Plumber$new("sites.R") root$mount("/api/sites", sites_pr) # The endpoints mounted here are related to details of PEcAn pfts -pfts_pr <- plumber::plumber$new("pfts.R") +pfts_pr <- plumber::Plumber$new("pfts.R") root$mount("/api/pfts", pfts_pr) # The endpoints mounted here are related to details of PEcAn formats -formats_pr <- plumber::plumber$new("formats.R") +formats_pr <- plumber::Plumber$new("formats.R") root$mount("/api/formats", formats_pr) # The endpoints mounted here are related to details of PEcAn inputs -inputs_pr <- plumber::plumber$new("inputs.R") +inputs_pr <- plumber::Plumber$new("inputs.R") root$mount("/api/inputs", inputs_pr) # The endpoints mounted here are related to details of PEcAn workflows -workflows_pr <- plumber::plumber$new("workflows.R") +workflows_pr <- plumber::Plumber$new("workflows.R") root$mount("/api/workflows", workflows_pr) # The endpoints mounted here are related to details of PEcAn runs -runs_pr <- plumber::plumber$new("runs.R") +runs_pr <- plumber::Plumber$new("runs.R") root$mount("/api/runs", runs_pr) # The API server is bound to 0.0.0.0 on port 8000 From a04b3d5676d4615837e58a7549e93155245bad0f Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 26 Oct 2020 16:34:20 -0400 Subject: [PATCH 1543/2289] API: Consistent parameters for bety DB connections --- apps/api/R/submit.workflow.R | 33 ++++++++++++++++++--------------- 1 file changed, 18 insertions(+), 15 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 9e5d1081575..5cb3afdc331 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -1,5 +1,13 @@ library(dplyr) +.bety_params <- PEcAn.DB::get_postgres_envvars( + host = "localhost", + dbname = "bety", + user = 
"bety", + password = "bety", + driver = "Postgres" +) + #* Submit a workflow sent as XML #* @param workflowXmlString String containing the XML workflow from request body #* @param userDetails List containing userid & username @@ -36,18 +44,12 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* @author Tezan Sahu submit.workflow.list <- function(workflowList, userDetails) { # Fix details about the database - workflowList$database <- list( - bety = PEcAn.DB::get_postgres_envvars( - host = "localhost", - dbname = "bety", - user = "bety", - password = "bety", - driver = "PostgreSQL" - ) - ) - if(! is.null(workflowList$model$id) && (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { - dbcon <- PEcAn.DB::betyConnect() - res <- dplyr::tbl(dbcon, "models") %>% + workflowList$database <- list(bety = .bety_params) + + if (!is.null(workflowList$model$id) && + (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { + dbcon <- PEcAn.DB::db.open(.bety_params) + res <- dplyr::tbl(dbcon, "models") %>% select(id, model_name, revision) %>% filter(id == !!workflowList$model$id) %>% collect() @@ -56,8 +58,9 @@ submit.workflow.list <- function(workflowList, userDetails) { workflowList$model$type <- res$model_name workflowList$model$revision <- res$revision } + # Fix RabbitMQ details - dbcon <- PEcAn.DB::betyConnect() + dbcon <- PEcAn.DB::db.open(.bety_params) hostInfo <- PEcAn.DB::dbHostInfo(dbcon) PEcAn.DB::db.close(dbcon) workflowList$host <- list( @@ -126,8 +129,8 @@ submit.workflow.list <- function(workflowList, userDetails) { #* @author Tezan Sahu insert.workflow <- function(workflowList){ - dbcon <- PEcAn.DB::betyConnect() - + dbcon <- PEcAn.DB::db.open(.bety_params) + model_id <- workflowList$model$id if(is.null(model_id)){ model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), dbcon) From fb9afbed1935466dadba48ef43c2fa9c11c28ab4 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 26 Oct 2020 16:35:49 -0400 Subject: [PATCH 1544/2289] API: More robust submit workflow - Use Postgres prepared statements - Use `INSERT ... RETURNING id` to get workflow_id (rather than a separate DB query) - Remove unnecessary `c(...)` and clean up some formatting --- apps/api/R/submit.workflow.R | 58 ++++++++++++++++++++++-------------- 1 file changed, 35 insertions(+), 23 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 5cb3afdc331..592fb436056 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -139,33 +139,45 @@ insert.workflow <- function(workflowList){ start_time <- Sys.time() workflow_df <- tibble::tibble( - "site_id" = c(bit64::as.integer64(workflowList$run$site$id)), - "model_id" = c(bit64::as.integer64(model_id)), + "site_id" = bit64::as.integer64(workflowList$run$site$id), + "model_id" = bit64::as.integer64(model_id), "folder" = "temp_dir", - "hostname" = c("docker"), - "start_date" = c(as.POSIXct(workflowList$run$start.date)), - "end_date" = c(as.POSIXct(workflowList$run$end.date)), - "advanced_edit" = c(FALSE), - "started_at" = c(start_time), - stringsAsFactors = FALSE + "hostname" = "docker", + "start_date" = as.POSIXct(workflowList$run$start.date), + "end_date" = as.POSIXct(workflowList$run$end.date), + "advanced_edit" = FALSE, + "started_at" = start_time ) - if(! 
is.na(workflowList$info$userid)){ - workflow_df <- workflow_df %>% tibble::add_column("user_id" = c(bit64::as.integer64(workflowList$info$userid))) + if (! is.na(workflowList$info$userid)){ + workflow_df <- workflow_df %>% + tibble::add_column("user_id" = bit64::as.integer64(workflowList$info$userid)) } - - insert <- PEcAn.DB::insert_table(workflow_df, "workflows", dbcon) - - workflow_id <- dplyr::tbl(dbcon, "workflows") %>% - filter(started_at == start_time - && site_id == bit64::as.integer64(workflowList$run$site$id) - && model_id == bit64::as.integer64(model_id) - ) %>% - pull(id) - - update_qry <- paste0("UPDATE workflows SET folder = 'data/workflows/PEcAn_", workflow_id, "' WHERE id = '", workflow_id, "';") - PEcAn.DB::db.query(update_qry, dbcon) - + + insert_query <- glue::glue( + "INSERT INTO workflows ", + "({paste(colnames(workflow_df), collapse = ', ')}) ", + "VALUES ({paste0('$', seq_len(ncol(workflow_df)), collapse = ', ')}) ", + "RETURNING id" + ) + PEcAn.logger::logger.debug(insert_query) + workflow_id <- PEcAn.DB::db.query( + insert_query, dbcon, + values = unname(as.list(workflow_df)) + )[["id"]] + + PEcAn.logger::logger.debug( + "Running workflow ID: ", + format(workflow_id, scientific = FALSE) + ) + + PEcAn.DB::db.query( + "UPDATE workflows SET folder = $1 WHERE id = $2", dbcon, values = list( + file.path("data", "workflows", paste0("PEcAn_", format(workflow_id, scientific = FALSE))), + workflow_id + ) + ) + PEcAn.DB::db.close(dbcon) return(workflow_id) From dcd64c73f140767c003929a8ffcc167e0da54556 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 27 Oct 2020 16:10:13 +0100 Subject: [PATCH 1545/2289] install new dependencies before starting tests --- .github/workflows/ci.yml | 12 +++++++++--- docker/depends/pecan.depends.R | 3 +-- scripts/generate_dependencies.R | 3 +-- 3 files changed, 11 insertions(+), 7 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 047f6c1f10a..7118a5b47ac 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -104,7 +104,9 @@ jobs: # install additional tools needed - name: install utils - run: apt-get update && apt-get install -y postgresql-client qpdf curl + run: apt-get update && apt-get install -y postgresql-client qpdf + - name: install new dependencies + run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R # initialize database - name: db setup @@ -149,7 +151,9 @@ jobs: # install additional tools needed - name: install utils - run: apt-get update && apt-get install -y postgresql-client qpdf curl + run: apt-get update && apt-get install -y postgresql-client qpdf + - name: install new dependencies + run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R # run PEcAn checks - name: check @@ -197,7 +201,9 @@ jobs: # install additional tools needed - name: install utils - run: apt-get update && apt-get install -y postgresql-client qpdf curl + run: apt-get update && apt-get install -y postgresql-client qpdf + - name: install new dependencies + run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R # initialize database - name: db setup diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index bc163f50ccd..e406d552370 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -4,8 +4,7 @@ # Don't use X11 for rgl Sys.setenv(RGL_USE_NULL = TRUE) -rlib_user = Sys.getenv('R_LIBS_USER') -rlib = ifelse(rlib_user == '', '/usr/local/lib/R/site-library', 
rlib_user) +rlib <- Sys.getenv('R_LIBS_USER', '/usr/local/lib/R/site-library') Sys.setenv(RLIB = rlib) # install remotes first in case packages are references in dependencies diff --git a/scripts/generate_dependencies.R b/scripts/generate_dependencies.R index 13394111c5b..a3569c73d6c 100755 --- a/scripts/generate_dependencies.R +++ b/scripts/generate_dependencies.R @@ -94,8 +94,7 @@ cat("#!/usr/bin/env Rscript", "", "# Don\'t use X11 for rgl", "Sys.setenv(RGL_USE_NULL = TRUE)", - "rlib_user = Sys.getenv('R_LIBS_USER')", - "rlib = ifelse(rlib_user == '', '/usr/local/lib/R/site-library', rlib_user)", + "rlib <- Sys.getenv('R_LIBS_USER', '/usr/local/lib/R/site-library')", "Sys.setenv(RLIB = rlib)", "", "# install remotes first in case packages are references in dependencies", From 043688548f80ddab007b32a16d59fc97a739c70f Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 11:30:00 -0400 Subject: [PATCH 1546/2289] API: Use PostgreSQL driver internally in PEcAn A _lot_ of the convert.input, met.process, etc. code relies on implicit string conversions that choke on `bit64` integers returned (correctly!) by the `Postgres` driver. This temporarily circumvents the problem. --- apps/api/R/submit.workflow.R | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 592fb436056..66d69a56eb4 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -46,6 +46,10 @@ submit.workflow.list <- function(workflowList, userDetails) { # Fix details about the database workflowList$database <- list(bety = .bety_params) + # HACK: We are not read for the Postgres driver yet. Way too many places rely + # on implicit string conversion, which doesn't work well for bit64 integers + workflowList$database$bety$driver <- "PostgreSQL" + if (!is.null(workflowList$model$id) && (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { dbcon <- PEcAn.DB::db.open(.bety_params) From f8ee1ed8759942c7fde4b0f3b2c3f5085422aa66 Mon Sep 17 00:00:00 2001 From: araiho Date: Tue, 27 Oct 2020 12:35:35 -0400 Subject: [PATCH 1547/2289] post running generate dependencies --- Makefile.depends | 3 +- docker/depends/pecan.depends.R | 5 +++ modules/rtm/NAMESPACE | 59 ---------------------------------- 3 files changed, 7 insertions(+), 60 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index 3fd75b3ddfa..33a871c2b06 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -10,7 +10,7 @@ $(call depends,base/visualization): | .install/base/db .install/base/logger .ins $(call depends,base/workflow): | .install/modules/data.atmosphere .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils $(call depends,modules/allometry): | .install/base/logger .install/base/db $(call depends,modules/assim.batch): | .install/modules/benchmark .install/base/db .install/modules/emulator .install/base/logger .install/modules/meta.analysis .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils -$(call depends,modules/assim.sequential): | .install/base/logger .install/base/remote +$(call depends,modules/assim.sequential): | .install/modules/benchmark .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils $(call depends,modules/benchmark): | .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils 
.install/modules/data.land $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils @@ -40,3 +40,4 @@ $(call depends,models/preles): | .install/base/utils .install/base/logger .insta $(call depends,models/sipnet): | .install/modules/data.atmosphere .install/base/logger .install/base/remote .install/base/utils $(call depends,models/stics): | .install/base/settings .install/base/db .install/base/logger .install/base/utils .install/base/remote $(call depends,models/template): | .install/base/db .install/base/logger .install/base/utils +$(call depends,models/uvafme): | .install/base/db .install/base/logger .install/base/utils diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index e406d552370..b7d94b3faee 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -24,11 +24,13 @@ wanted <- c( 'BioCro', 'bit64', 'coda', +'corrplot', 'data.table', 'dataone', 'datapack', 'DBI', 'dbplyr', +'devtools', 'doParallel', 'dplR', 'dplyr', @@ -36,10 +38,12 @@ wanted <- c( 'foreach', 'fs', 'furrr', +'future', 'geonames', 'getPass', 'ggmap', 'ggplot2', +'ggrepel', 'glue', 'graphics', 'grDevices', @@ -61,6 +65,7 @@ wanted <- c( 'maps', 'maptools', 'MASS', +'Matrix', 'mclust', 'MCMCpack', 'methods', diff --git a/modules/rtm/NAMESPACE b/modules/rtm/NAMESPACE index 0a0a7407af9..da197a7d8d6 100644 --- a/modules/rtm/NAMESPACE +++ b/modules/rtm/NAMESPACE @@ -1,62 +1,3 @@ # Generated by roxygen2: do not edit by hand -S3method("[",spectra) -S3method("[[",spectra) -S3method("[[<-",spectra) -S3method(cbind,spectra) -S3method(matplot,default) -S3method(matplot,spectra) -S3method(neff,default) -S3method(neff,matrix) -S3method(plot,spectra) -S3method(print,spectra) -S3method(resample,default) -S3method(resample,matrix) -S3method(resample,spectra) -S3method(str,spectra) -export(EDR) -export(EDR.preprocess.history) -export(burnin.thin) -export(check.convergence) -export(default.settings.prospect) -export(defparam) -export(dtnorm) -export(fortran_data_module) -export(foursail) -export(generalized_plate_model) -export(generate.noise) -export(get.EDR.output) -export(invert.auto) -export(invert.custom) -export(invert.lsq) -export(invert_bt) -export(is_spectra) -export(load.from.name) -export(lognorm.mu) -export(lognorm.sigma) -export(matplot) -export(neff) -export(params.prospect4) -export(params.prospect5) -export(params.prospect5b) -export(params.prospectd) -export(params2edr) -export(print_results_summary) -export(prior.defaultvals.prospect) -export(priorfunc.prospect) -export(pro2s) -export(pro4sail) -export(pro4saild) -export(prospect) -export(prospect_bt_prior) -export(resample) -export(rtnorm) -export(sensor.list) -export(sensor.proper) -export(setup_edr) -export(spectra) -export(spectral.response) -export(summary_mvnorm) -export(summary_simple) -export(wavelengths) useDynLib(PEcAnRTM) From 99649f3ac9ac441fb6d61124ddb616f4b4af93bc Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 27 Oct 2020 12:44:40 -0400 Subject: [PATCH 1548/2289] make sure trt is factor --- modules/meta.analysis/R/single.MA.R | 3 +++ 1 file changed, 3 insertions(+) diff --git a/modules/meta.analysis/R/single.MA.R b/modules/meta.analysis/R/single.MA.R index a98040526db..b44df016ccf 100644 --- a/modules/meta.analysis/R/single.MA.R +++ b/modules/meta.analysis/R/single.MA.R @@ -43,6 +43,9 @@ single.MA <- function(data, j.chains, j.iter, tauA, tauB, prior, 
jag.model.file, site = "beta.site[site[k]]", trt = "beta.trt[trt[k]]") + # making sure trt is factor + data$trt <- as.factor(data$trt) + if (sum(model.parms > 1) == 0) { reg.model <- "" } else { From aa2d47e6189c5e1a3f88b1d186a8ef1ced0e3eec Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 27 Oct 2020 13:24:13 -0400 Subject: [PATCH 1549/2289] ghs as factor just in case --- modules/meta.analysis/R/single.MA.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/single.MA.R b/modules/meta.analysis/R/single.MA.R index b44df016ccf..5e5aeb035e8 100644 --- a/modules/meta.analysis/R/single.MA.R +++ b/modules/meta.analysis/R/single.MA.R @@ -43,7 +43,8 @@ single.MA <- function(data, j.chains, j.iter, tauA, tauB, prior, jag.model.file, site = "beta.site[site[k]]", trt = "beta.trt[trt[k]]") - # making sure trt is factor + # making sure ghs and trt are factor + data$ghs <- as.factor(data$ghs) data$trt <- as.factor(data$trt) if (sum(model.parms > 1) == 0) { From 30b4a1808e55358dda9fb69c0c3f396edd32fdc3 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 12:22:09 -0400 Subject: [PATCH 1550/2289] API: Add available models entrypoint --- apps/api/R/available-models.R | 46 +++++++++++++++++++++++++++++++++++ apps/api/R/entrypoint.R | 6 ++++- 2 files changed, 51 insertions(+), 1 deletion(-) create mode 100644 apps/api/R/available-models.R diff --git a/apps/api/R/available-models.R b/apps/api/R/available-models.R new file mode 100644 index 00000000000..b0634fa7643 --- /dev/null +++ b/apps/api/R/available-models.R @@ -0,0 +1,46 @@ +library(magrittr, include.only = "%>%") + +.bety_params <- PEcAn.DB::get_postgres_envvars( + host = "localhost", + dbname = "bety", + user = "bety", + password = "bety", + driver = "Postgres" +) + +#' List models available on a specific machine +#' +#' @param machine_name Target machine hostname. Default = `"docker"` +#' @param machine_id Target machine ID. If `NULL` (default), deduced from hostname. 
+#' @return `data.frame` of information on available models +#' @author Alexey Shiklomanov +#* @get / +availableModels <- function(machine_name = "docker", machine_id = NULL) { + dbcon <- PEcAn.DB::db.open(.bety_params) + if (is.null(machine_id)) { + machines <- dplyr::tbl(dbcon, "machines") + machineid <- machines %>% + dplyr::filter(hostname == !!machine_name) %>% + dplyr::pull(id) + if (length(machineid) > 1) { + stop("Found ", length(machineid), " machines with name ", machine_name) + } + if (length(machineid) < 1) { + stop("Found no machines with name ", machine_name) + } + } + dbfiles <- dplyr::tbl(dbcon, "dbfiles") %>% + dplyr::filter(machine_id == !!machineid) + modelfiles <- dbfiles %>% + dplyr::filter(container_type == "Model") + models <- dplyr::tbl(dbcon, "models") + modeltypes <- dplyr::tbl(dbcon, "modeltypes") %>% + dplyr::select(modeltype_id = id, modeltype = name) + + modelfiles %>% + dplyr::select(dbfile_id = id, file_name, file_path, + model_id = container_id) %>% + dplyr::inner_join(models, c("model_id" = "id")) %>% + dplyr::inner_join(modeltypes, "modeltype_id") %>% + dplyr::collect() +} diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 2b4c9c41ba4..def885fd334 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -47,9 +47,13 @@ root$mount("/api/workflows", workflows_pr) runs_pr <- plumber::Plumber$new("runs.R") root$mount("/api/runs", runs_pr) +# Available models +runs_pr <- plumber::Plumber$new("available-models.R") +root$mount("/api/availableModels", runs_pr) + # The API server is bound to 0.0.0.0 on port 8000 # The Swagger UI for the API draws its source from the pecanapi-spec.yml file root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...) { spec <- yaml::read_yaml("../pecanapi-spec.yml") spec -}) \ No newline at end of file +}) From 56ceb77b17dfd130e32300e6513d2d84cc65f262 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 14:23:55 -0400 Subject: [PATCH 1551/2289] API: Use a single global postgres connection pool This mitigates errors related to "too many open Postgres connections" by creating and managing a single database connection pool. In theory, this should do all the necessary smart connection management (rate limiting, etc.) automatically. 
--- apps/api/Dockerfile | 1 + apps/api/R/auth.R | 4 ---- apps/api/R/available-models.R | 9 --------- apps/api/R/entrypoint.R | 13 +++++++++++++ apps/api/R/formats.R | 9 --------- apps/api/R/general.R | 1 - apps/api/R/get.file.R | 5 +---- apps/api/R/inputs.R | 7 ------- apps/api/R/models.R | 6 ------ apps/api/R/pfts.R | 8 -------- apps/api/R/runs.R | 21 --------------------- apps/api/R/sites.R | 8 -------- apps/api/R/submit.workflow.R | 21 ++------------------- apps/api/R/workflows.R | 16 ---------------- 14 files changed, 17 insertions(+), 112 deletions(-) diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile index 4e1392d046b..bafbfac16b5 100644 --- a/apps/api/Dockerfile +++ b/apps/api/Dockerfile @@ -20,6 +20,7 @@ RUN apt-get update \ && rm -rf /var/lib/apt/lists/* \ && Rscript -e "devtools::install_version('promises', '1.1.0', repos = 'http://cran.rstudio.com')" \ && Rscript -e "devtools::install_version('webutils', '1.1', repos = 'http://cran.rstudio.com')" \ + && Rscript -e "install.packages('pool', repos = 'http://cran.rstudio.com')" \ && Rscript -e "devtools::install_github('rstudio/swagger')" \ && Rscript -e "devtools::install_github('rstudio/plumber')" diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R index b925fef7b6f..3ddd260efd7 100644 --- a/apps/api/R/auth.R +++ b/apps/api/R/auth.R @@ -29,15 +29,11 @@ get_crypt_pass <- function(username, password, secretkey = NULL) { #* @author Tezan Sahu validate_crypt_pass <- function(username, crypt_pass) { - dbcon <- PEcAn.DB::betyConnect() - res <- tbl(dbcon, "users") %>% filter(login == username, crypted_password == crypt_pass) %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(res) == 1) { return(res$id) } diff --git a/apps/api/R/available-models.R b/apps/api/R/available-models.R index b0634fa7643..67b88517046 100644 --- a/apps/api/R/available-models.R +++ b/apps/api/R/available-models.R @@ -1,13 +1,5 @@ library(magrittr, include.only = "%>%") -.bety_params <- PEcAn.DB::get_postgres_envvars( - host = "localhost", - dbname = "bety", - user = "bety", - password = "bety", - driver = "Postgres" -) - #' List models available on a specific machine #' #' @param machine_name Target machine hostname. 
Default = `"docker"` @@ -16,7 +8,6 @@ library(magrittr, include.only = "%>%") #' @author Alexey Shiklomanov #* @get / availableModels <- function(machine_name = "docker", machine_id = NULL) { - dbcon <- PEcAn.DB::db.open(.bety_params) if (is.null(machine_id)) { machines <- dplyr::tbl(dbcon, "machines") machineid <- machines %>% diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index def885fd334..7f4cf598244 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -7,6 +7,19 @@ source("auth.R") source("general.R") +# Set up the global database pool +.bety_params <- PEcAn.DB::get_postgres_envvars( + host = "localhost", + dbname = "bety", + user = "bety", + password = "bety", + driver = "Postgres" +) + +.bety_params$driver <- NULL +.bety_params$drv <- RPostgres::Postgres() +dbcon <- do.call(pool::dbPool, .bety_params) + root <- plumber::Plumber$new() root$setSerializer(plumber::serializer_unboxed_json()) diff --git a/apps/api/R/formats.R b/apps/api/R/formats.R index 6947634c36c..e4276bce029 100644 --- a/apps/api/R/formats.R +++ b/apps/api/R/formats.R @@ -7,8 +7,6 @@ library(dplyr) #* @get / getFormat <- function(format_id, res){ - dbcon <- PEcAn.DB::betyConnect() - Format <- tbl(dbcon, "formats") %>% select(format_id = id, name, notes, header, mimetype_id) %>% filter(format_id == !!format_id) @@ -21,7 +19,6 @@ getFormat <- function(format_id, res){ qry_res <- Format %>% collect() if (nrow(qry_res) == 0) { - PEcAn.DB::db.close(dbcon) res$status <- 404 return(list(error="Format not found")) } @@ -42,8 +39,6 @@ getFormat <- function(format_id, res){ select(-variable_id, -format_id, -units) %>% collect() - PEcAn.DB::db.close(dbcon) - response$format_variables <- format_vars return(response) } @@ -62,8 +57,6 @@ searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res){ format_name <- URLdecode(format_name) mimetype <- URLdecode(mimetype) - dbcon <- PEcAn.DB::betyConnect() - Formats <- tbl(dbcon, "formats") %>% select(format_id = id, format_name=name, mimetype_id) %>% filter(grepl(!!format_name, format_name, ignore.case=ignore_case)) @@ -77,8 +70,6 @@ searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res){ qry_res <- Formats %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Format(s) not found")) diff --git a/apps/api/R/general.R b/apps/api/R/general.R index dd51a8bd798..cacfa61fcee 100644 --- a/apps/api/R/general.R +++ b/apps/api/R/general.R @@ -18,7 +18,6 @@ status <- function() { if (value == "") default else value } - dbcon <- PEcAn.DB::betyConnect() res <- list(host_details = PEcAn.DB::dbHostInfo(dbcon)) res$host_details$authentication_required = get_env_var("AUTH_REQ") diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R index 7fb1e5d88cb..85a0a67719a 100644 --- a/apps/api/R/get.file.R +++ b/apps/api/R/get.file.R @@ -12,8 +12,7 @@ get.file <- function(filepath, userid) { run_id <- substr(parent_dir, stringi::stri_locate_last(parent_dir, regex="/")[1] + 1, stringr::str_length(parent_dir)) if(Sys.getenv("AUTH_REQ") == TRUE) { - dbcon <- PEcAn.DB::betyConnect() - + Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) Run <- tbl(dbcon, "ensembles") %>% @@ -25,8 +24,6 @@ get.file <- function(filepath, userid) { filter(id == !!run_id) %>% pull(user_id) - PEcAn.DB::db.close(dbcon) - if(! 
user_id == userid) { return(list(status = "Error", message = "Access forbidden")) } diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R index 986a012adc6..1906d79217d 100644 --- a/apps/api/R/inputs.R +++ b/apps/api/R/inputs.R @@ -14,8 +14,6 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_ return(list(error = "Invalid value for parameter")) } - dbcon <- PEcAn.DB::betyConnect() - inputs <- tbl(dbcon, "inputs") %>% select(input_name=name, id, site_id, format_id, start_date, end_date) @@ -77,8 +75,6 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_ arrange(id) %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) { res$status <- 404 return(list(error="Input(s) not found")) @@ -137,7 +133,6 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_ #* @serializer contentType list(type="application/octet-stream") #* @get / downloadInput <- function(input_id, filename="", req, res){ - dbcon <- PEcAn.DB::betyConnect() db_hostid <- PEcAn.DB::dbHostInfo(dbcon)$hostid # This is just for temporary testing due to the existing issue in dbHostInfo() @@ -150,8 +145,6 @@ downloadInput <- function(input_id, filename="", req, res){ filter(container_id == !!input_id) %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(input) == 0) { res$status <- 404 return() diff --git a/apps/api/R/models.R b/apps/api/R/models.R index 05530248a32..a2eba5d2672 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -7,8 +7,6 @@ library(dplyr) #* @get / getModel <- function(model_id, res){ - dbcon <- PEcAn.DB::betyConnect() - Model <- tbl(dbcon, "models") %>% select(model_id = id, model_name, revision, modeltype_id) %>% filter(model_id == !!model_id) @@ -53,8 +51,6 @@ searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ model_name <- URLdecode(model_name) revision <- URLdecode(revision) - dbcon <- PEcAn.DB::betyConnect() - Models <- tbl(dbcon, "models") %>% select(model_id = id, model_name, revision) %>% filter(grepl(!!model_name, model_name, ignore.case=ignore_case)) %>% @@ -63,8 +59,6 @@ searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ qry_res <- Models %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Model(s) not found")) diff --git a/apps/api/R/pfts.R b/apps/api/R/pfts.R index d68f344a82a..914aad14821 100644 --- a/apps/api/R/pfts.R +++ b/apps/api/R/pfts.R @@ -7,8 +7,6 @@ library(dplyr) #* @get / getPfts <- function(pft_id, res){ - dbcon <- PEcAn.DB::betyConnect() - pft <- tbl(dbcon, "pfts") %>% select(pft_id = id, pft_name = name, definition, pft_type, modeltype_id) %>% filter(pft_id == !!pft_id) @@ -21,8 +19,6 @@ getPfts <- function(pft_id, res){ select(-modeltype_id) %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="PFT not found")) @@ -58,8 +54,6 @@ searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE return(list(error = "Invalid pft_type")) } - dbcon <- PEcAn.DB::betyConnect() - pfts <- tbl(dbcon, "pfts") %>% select(pft_id = id, pft_name = name, pft_type, modeltype_id) @@ -75,8 +69,6 @@ searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE arrange(pft_id) %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="PFT(s) not found")) diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R index 
29b4d5a351d..176610af429 100644 --- a/apps/api/R/runs.R +++ b/apps/api/R/runs.R @@ -14,8 +14,6 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){ return(list(error = "Invalid value for parameter")) } - dbcon <- PEcAn.DB::betyConnect() - Runs <- tbl(dbcon, "runs") %>% select(id, model_id, site_id, parameter_list, ensemble_id, start_time, finish_time) @@ -32,8 +30,6 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){ arrange(id) %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) { res$status <- 404 return(list(error="Run(s) not found")) @@ -88,8 +84,6 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){ #* @get / getRunDetails <- function(req, run_id, res){ - dbcon <- PEcAn.DB::betyConnect() - Runs <- tbl(dbcon, "runs") %>% select(-outdir, -outprefix, -setting, -created_at, -updated_at) @@ -107,8 +101,6 @@ getRunDetails <- function(req, run_id, res){ pull(user_id) } - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Run with specified ID was not found")) @@ -156,8 +148,6 @@ getRunDetails <- function(req, run_id, res){ #* @get //input/ getRunInputFile <- function(req, run_id, filename, res){ - dbcon <- PEcAn.DB::betyConnect() - Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) @@ -167,8 +157,6 @@ getRunInputFile <- function(req, run_id, filename, res){ filter(id == !!run_id) %>% pull(workflow_id) - PEcAn.DB::db.close(dbcon) - inputpath <- paste0( Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/run/", run_id, "/", filename) result <- get.file(inputpath, req$user$userid) @@ -196,8 +184,6 @@ getRunInputFile <- function(req, run_id, filename, res){ #* @get //output/ getRunOutputFile <- function(req, run_id, filename, res){ - dbcon <- PEcAn.DB::betyConnect() - Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) @@ -207,8 +193,6 @@ getRunOutputFile <- function(req, run_id, filename, res){ filter(id == !!run_id) %>% pull(workflow_id) - PEcAn.DB::db.close(dbcon) - outputpath <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/out/", run_id, "/", filename) result <- get.file(outputpath, req$user$userid) @@ -241,8 +225,6 @@ getRunOutputFile <- function(req, run_id, filename, res){ plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, height=600, res){ # Get workflow_id for the run - dbcon <- PEcAn.DB::betyConnect() - Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) @@ -259,9 +241,6 @@ plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, heigh pull(user_id) } - - PEcAn.DB::db.close(dbcon) - # Check if the data file exists on the host datafile <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id, "/out/", run_id, "/", year, ".nc") if(! 
file.exists(datafile)){ diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R index 12a5eb83903..5a3e02112aa 100644 --- a/apps/api/R/sites.R +++ b/apps/api/R/sites.R @@ -7,8 +7,6 @@ library(dplyr) #* @get / getSite <- function(site_id, res){ - dbcon <- PEcAn.DB::betyConnect() - site <- tbl(dbcon, "sites") %>% select(-created_at, -updated_at, -user_id, -geometry) %>% filter(id == !!site_id) @@ -16,8 +14,6 @@ getSite <- function(site_id, res){ qry_res <- site %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Site not found")) @@ -43,8 +39,6 @@ getSite <- function(site_id, res){ searchSite <- function(sitename="", ignore_case=TRUE, res){ sitename <- URLdecode(sitename) - dbcon <- PEcAn.DB::betyConnect() - sites <- tbl(dbcon, "sites") %>% select(id, sitename) %>% filter(grepl(!!sitename, sitename, ignore.case=ignore_case)) %>% @@ -53,8 +47,6 @@ searchSite <- function(sitename="", ignore_case=TRUE, res){ qry_res <- sites %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Site(s) not found")) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 66d69a56eb4..8b54156fe58 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -1,13 +1,5 @@ library(dplyr) -.bety_params <- PEcAn.DB::get_postgres_envvars( - host = "localhost", - dbname = "bety", - user = "bety", - password = "bety", - driver = "Postgres" -) - #* Submit a workflow sent as XML #* @param workflowXmlString String containing the XML workflow from request body #* @param userDetails List containing userid & username @@ -52,21 +44,17 @@ submit.workflow.list <- function(workflowList, userDetails) { if (!is.null(workflowList$model$id) && (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { - dbcon <- PEcAn.DB::db.open(.bety_params) res <- dplyr::tbl(dbcon, "models") %>% select(id, model_name, revision) %>% filter(id == !!workflowList$model$id) %>% collect() - PEcAn.DB::db.close(dbcon) - + workflowList$model$type <- res$model_name workflowList$model$revision <- res$revision } # Fix RabbitMQ details - dbcon <- PEcAn.DB::db.open(.bety_params) hostInfo <- PEcAn.DB::dbHostInfo(dbcon) - PEcAn.DB::db.close(dbcon) workflowList$host <- list( rabbitmq = list( uri = Sys.getenv("RABBITMQ_URI", "amqp://guest:guest@localhost/%2F"), @@ -133,8 +121,6 @@ submit.workflow.list <- function(workflowList, userDetails) { #* @author Tezan Sahu insert.workflow <- function(workflowList){ - dbcon <- PEcAn.DB::db.open(.bety_params) - model_id <- workflowList$model$id if(is.null(model_id)){ model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), dbcon) @@ -182,8 +168,6 @@ insert.workflow <- function(workflowList){ ) ) - PEcAn.DB::db.close(dbcon) - return(workflow_id) } @@ -193,8 +177,7 @@ insert.workflow <- function(workflowList){ #* @param workflowList List containing the workflow details #* @author Tezan Sahu insert.attribute <- function(workflowList){ - dbcon <- PEcAn.DB::betyConnect() - + # Create an array of PFTs pfts <- c() for(i in seq(length(workflowList$pfts))){ diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 31959018af0..0ed13eb31ed 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -15,8 +15,6 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r return(list(error = "Invalid value for parameter")) } - dbcon <- PEcAn.DB::betyConnect() - Workflow 
<- tbl(dbcon, "workflows") %>% select(-created_at, -updated_at, -params, -advanced_edit, -notes) @@ -32,8 +30,6 @@ getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, r qry_res <- Workflow %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0 || as.numeric(offset) >= nrow(qry_res)) { res$status <- 404 return(list(error="Workflows not found")) @@ -116,8 +112,6 @@ submitWorkflow <- function(req, res){ #' @author Tezan Sahu #* @get / getWorkflowDetails <- function(id, req, res){ - dbcon <- PEcAn.DB::betyConnect() - Workflow <- tbl(dbcon, "workflows") %>% select(id, model_id, site_id, folder, hostname, user_id) @@ -128,8 +122,6 @@ getWorkflowDetails <- function(id, req, res){ qry_res <- Workflow %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Workflow with specified ID was not found")) @@ -173,8 +165,6 @@ getWorkflowDetails <- function(id, req, res){ #' @author Tezan Sahu #* @get //status getWorkflowStatus <- function(req, id, res){ - dbcon <- PEcAn.DB::betyConnect() - Workflow <- tbl(dbcon, "workflows") %>% select(id, user_id) %>% filter(id == !!id) @@ -182,8 +172,6 @@ getWorkflowStatus <- function(req, id, res){ qry_res <- Workflow %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return(list(error="Workflow with specified ID was not found on this host")) @@ -211,16 +199,12 @@ getWorkflowStatus <- function(req, id, res){ #* @serializer contentType list(type="application/octet-stream") #* @get //file/ getWorkflowFile <- function(req, id, filename, res){ - dbcon <- PEcAn.DB::betyConnect() - Workflow <- tbl(dbcon, "workflows") %>% select(id, user_id) %>% filter(id == !!id) qry_res <- Workflow %>% collect() - PEcAn.DB::db.close(dbcon) - if (nrow(qry_res) == 0) { res$status <- 404 return() From 79747c33ecf99aed267abf5bfa8e4e7b32716baa Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 14:53:46 -0400 Subject: [PATCH 1552/2289] API: Revert PEcAn bety parameter setting It's better to give the user full control over what PEcAn itself is doing, with sensible defaults. Moreover, this is now irrelevant to the API itself, which uses its own connection objects (RPostgres-based database pools). --- apps/api/R/submit.workflow.R | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 8b54156fe58..89c5e58ed5c 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -35,12 +35,17 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* @return ID & status of the submitted workflow #* @author Tezan Sahu submit.workflow.list <- function(workflowList, userDetails) { - # Fix details about the database - workflowList$database <- list(bety = .bety_params) - # HACK: We are not read for the Postgres driver yet. 
Way too many places rely - # on implicit string conversion, which doesn't work well for bit64 integers - workflowList$database$bety$driver <- "PostgreSQL" + # Set database details + workflowList$database <- list( + bety = PEcAn.DB::get_postgres_envvars( + host = "localhost", + dbname = "bety", + user = "bety", + password = "bety", + driver = "PostgreSQL" + ) + ) if (!is.null(workflowList$model$id) && (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { From a8f1fbd645da0684f94a3f26a516ca21197e95b0 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 14:55:49 -0400 Subject: [PATCH 1553/2289] API: Format the workflow ID string explicitly ...to avoid insidious bugs caused by implicit coercion. --- apps/api/R/submit.workflow.R | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 89c5e58ed5c..ebca4ac3f9b 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -82,12 +82,14 @@ submit.workflow.list <- function(workflowList, userDetails) { # Add entry to workflows table in database workflow_id <- insert.workflow(workflowList) workflowList$workflow$id <- workflow_id + workflow_id_str <- format(workflow_id, scientific = FALSE) # Add entry to attributes table in database insert.attribute(workflowList) # Fix the output directory - outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", workflow_id) + outdir <- paste0(Sys.getenv("DATA_DIR", "/data/"), "workflows/PEcAn_", + workflow_id_str) workflowList$outdir <- outdir # Create output diretory @@ -105,11 +107,11 @@ submit.workflow.list <- function(workflowList, userDetails) { res <- file.copy("/work/workflow.R", outdir) # Post workflow to RabbitMQ - message <- list(folder = outdir, workflowid = workflow_id) + message <- list(folder = outdir, workflowid = workflow_id_str) res <- PEcAn.remote::rabbitmq_post_message(workflowList$host$rabbitmq$uri, "pecan", message, "rabbitmq") if(res$routed){ - return(list(workflow_id = as.character(workflow_id), status = "Submitted successfully")) + return(list(workflow_id = workflow_id_str, status = "Submitted successfully")) } else{ return(list(status = "Error", message = "Could not submit to RabbitMQ")) From 123a0a8f7472a0d53d3a1a089616db6ac21ab713 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 14:56:34 -0400 Subject: [PATCH 1554/2289] API: Checkout pool connections when necessary Looks like `dplyr::tbl()` works fine, but `DBI::db()` doesn't. You have to explicitly check out connections from the pool and then return them when you're done. --- apps/api/R/submit.workflow.R | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index ebca4ac3f9b..6b3572bddcf 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -151,6 +151,12 @@ insert.workflow <- function(workflowList){ tibble::add_column("user_id" = bit64::as.integer64(workflowList$info$userid)) } + # NOTE: Have to "checkout" a connection from the pool here to work with + # dbSendStatement and friends. We make sure to return the connection when the + # function exits (successfully or not). 
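+  # (For context: `dbcon` here is the pool::dbPool object created in
+  # entrypoint.R. poolCheckout() hands back a plain DBI connection and
+  # poolReturn() recycles it into the pool -- the pool analogues of
+  # DBI::dbConnect() / DBI::dbDisconnect().)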
+ con <- pool::poolCheckout(dbcon) + on.exit(pool::poolReturn(con), add = TRUE) + insert_query <- glue::glue( "INSERT INTO workflows ", "({paste(colnames(workflow_df), collapse = ', ')}) ", @@ -159,7 +165,7 @@ insert.workflow <- function(workflowList){ ) PEcAn.logger::logger.debug(insert_query) workflow_id <- PEcAn.DB::db.query( - insert_query, dbcon, + insert_query, con, values = unname(as.list(workflow_df)) )[["id"]] @@ -169,7 +175,7 @@ insert.workflow <- function(workflowList){ ) PEcAn.DB::db.query( - "UPDATE workflows SET folder = $1 WHERE id = $2", dbcon, values = list( + "UPDATE workflows SET folder = $1 WHERE id = $2", con, values = list( file.path("data", "workflows", paste0("PEcAn_", format(workflow_id, scientific = FALSE))), workflow_id ) @@ -235,9 +241,11 @@ insert.attribute <- function(workflowList){ # Insert properties into attributes table value_json <- as.character(jsonlite::toJSON(properties, auto_unbox = TRUE)) - res <- DBI::dbSendStatement(dbcon, + con <- pool::poolCheckout(dbcon) + on.exit(pool::poolReturn(con), add = TRUE) + res <- DBI::dbSendStatement(con, "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) -} \ No newline at end of file +} From b3d8ee9eb9e3eefb61d4fc49dff483cdcd8eb4d4 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 27 Oct 2020 16:39:17 -0400 Subject: [PATCH 1555/2289] UTILS: Convert input - subset ID only if present Otherwise, throws an error. --- base/utils/R/convert.input.R | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R index e79a136bbca..7912dd130fe 100644 --- a/base/utils/R/convert.input.R +++ b/base/utils/R/convert.input.R @@ -270,8 +270,11 @@ convert.input <- hostname = host$name, exact.dates = TRUE, pattern = pattern - ) %>% - dplyr::filter(id==input.args$dbfile.id) + ) + if ("id" %in% colnames(existing.dbfile)) { + existing.dbfile <- existing.dbfile %>% + dplyr::filter(id==input.args$dbfile.id) + } }else{ existing.dbfile <- PEcAn.DB::dbfile.input.check(siteid = site.id, mimetype = mimetype, From 526db40fa2fd17a0b03d901f8795bb5a0729fdbf Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Oct 2020 09:23:59 -0400 Subject: [PATCH 1556/2289] API: Database connections as function argument --- apps/api/R/auth.R | 5 +++-- apps/api/R/available-models.R | 4 +++- apps/api/R/entrypoint.R | 2 +- apps/api/R/formats.R | 9 ++++++--- apps/api/R/get.file.R | 12 ++++++++++-- apps/api/R/inputs.R | 11 +++++++---- apps/api/R/models.R | 9 ++++++--- apps/api/R/pfts.R | 9 ++++++--- apps/api/R/runs.R | 18 +++++++++++------- apps/api/R/sites.R | 8 +++++--- apps/api/R/submit.workflow.R | 9 ++++++--- apps/api/R/workflows.R | 17 +++++++++++------ 12 files changed, 75 insertions(+), 38 deletions(-) diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R index 3ddd260efd7..5273c6297ca 100644 --- a/apps/api/R/auth.R +++ b/apps/api/R/auth.R @@ -25,9 +25,10 @@ get_crypt_pass <- function(username, password, secretkey = NULL) { #* Check if the encrypted password for the user is valid #* @param username Username #* @param crypt_pass Encrypted password +#* @param dbcon Database connection object. Default is global database pool. 
#* @return TRUE if encrypted password is correct, else FALSE #* @author Tezan Sahu -validate_crypt_pass <- function(username, crypt_pass) { +validate_crypt_pass <- function(username, crypt_pass, dbcon = global_db_pool) { res <- tbl(dbcon, "users") %>% filter(login == username, @@ -83,4 +84,4 @@ authenticate_user <- function(req, res) { res$status <- 401 # Unauthorized return(list(error="Authentication required")) -} \ No newline at end of file +} diff --git a/apps/api/R/available-models.R b/apps/api/R/available-models.R index 67b88517046..0c467b843a2 100644 --- a/apps/api/R/available-models.R +++ b/apps/api/R/available-models.R @@ -4,10 +4,12 @@ library(magrittr, include.only = "%>%") #' #' @param machine_name Target machine hostname. Default = `"docker"` #' @param machine_id Target machine ID. If `NULL` (default), deduced from hostname. +#' @param dbcon Database connection object. Default is global database pool. #' @return `data.frame` of information on available models #' @author Alexey Shiklomanov #* @get / -availableModels <- function(machine_name = "docker", machine_id = NULL) { +availableModels <- function(machine_name = "docker", machine_id = NULL, + dbcon = global_db_pool) { if (is.null(machine_id)) { machines <- dplyr::tbl(dbcon, "machines") machineid <- machines %>% diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 7f4cf598244..2ec009f50d9 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -18,7 +18,7 @@ source("general.R") .bety_params$driver <- NULL .bety_params$drv <- RPostgres::Postgres() -dbcon <- do.call(pool::dbPool, .bety_params) +global_db_pool <- do.call(pool::dbPool, .bety_params) root <- plumber::Plumber$new() root$setSerializer(plumber::serializer_unboxed_json()) diff --git a/apps/api/R/formats.R b/apps/api/R/formats.R index e4276bce029..2f3d6ecccd9 100644 --- a/apps/api/R/formats.R +++ b/apps/api/R/formats.R @@ -2,10 +2,11 @@ library(dplyr) #' Retrieve the details of a PEcAn format, based on format_id #' @param format_id Format ID (character) +#' @param dbcon Database connection object. Default is global database pool. #' @return Format details #' @author Tezan Sahu #* @get / -getFormat <- function(format_id, res){ +getFormat <- function(format_id, res, dbcon = global_db_pool){ Format <- tbl(dbcon, "formats") %>% select(format_id = id, name, notes, header, mimetype_id) %>% @@ -50,10 +51,12 @@ getFormat <- function(format_id, res){ #' @param format_name Format name search string (character) #' @param mimetype Mime type search string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @param dbcon Database connection object. Default is global database pool. 
 #' @return Formats subset matching the model search string
 #' @author Tezan Sahu
 #* @get /
-searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res){
+searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res,
+                          dbcon = global_db_pool){
   format_name <- URLdecode(format_name)
   mimetype <- URLdecode(mimetype)
@@ -77,4 +80,4 @@ searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res){
   else {
     return(list(formats=qry_res, count = nrow(qry_res)))
   }
-}
\ No newline at end of file
+}
diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R
index 85a0a67719a..6d5f345cd12 100644
--- a/apps/api/R/get.file.R
+++ b/apps/api/R/get.file.R
@@ -1,6 +1,14 @@
 library(dplyr)
 
-get.file <- function(filepath, userid) {
+#' Download a file associated with PEcAn
+#'
+#' @param filepath Absolute path to file on target machine
+#' @param userid User ID associated with file (typically the same as the user
+#' running the corresponding workflow)
+#' @param dbcon Database connection object. Default is global database pool.
+#' @return Raw binary file contents
+#' @author Tezan Sahu
+get.file <- function(filepath, userid, dbcon = global_db_pool) {
   # Check if the file path is valid
   if(! file.exists(filepath)){
     return(list(status = "Error", message = "File not found"))
@@ -32,4 +40,4 @@ get.file <- function(filepath, userid) {
   # Read the data in binary form & return it
   bin <- readBin(filepath,'raw', n = file.info(filepath)$size)
   return(list(file_contents = bin))
-}
\ No newline at end of file
+}
diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R
index 1906d79217d..110fe1bddec 100644
--- a/apps/api/R/inputs.R
+++ b/apps/api/R/inputs.R
@@ -4,11 +4,13 @@ library(dplyr)
 #' @param model_id Model Id (character)
 #' @param site_id Site Id (character)
 #' @param offset
-#' @param limit
+#' @param limit
+#' @param dbcon Database connection object. Default is global database pool.
 #' @return Information about Inputs based on model & site
 #' @author Tezan Sahu
 #* @get /
-searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_id=NULL, offset=0, limit=50, res){
+searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_id=NULL, offset=0, limit=50, res,
+                         dbcon = global_db_pool){
   if (! limit %in% c(10, 20, 50, 100, 500)) {
     res$status <- 400
     return(list(error = "Invalid value for parameter"))
@@ -128,11 +130,12 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_
 #' @param id Input id (character)
 #' @param filename Optional filename specified if the id points to a folder instead of file (character)
 #' If this is passed with an id that actually points to a file, this name will be ignored
+#' @param dbcon Database connection object. Default is global database pool.
#' @return Input file specified by user #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get / -downloadInput <- function(input_id, filename="", req, res){ +downloadInput <- function(input_id, filename="", req, res, dbcon = global_db_pool){ db_hostid <- PEcAn.DB::dbHostInfo(dbcon)$hostid # This is just for temporary testing due to the existing issue in dbHostInfo() @@ -175,4 +178,4 @@ downloadInput <- function(input_id, filename="", req, res){ bin <- readBin(filepath,'raw', n = file.info(filepath)$size) return(bin) } -} \ No newline at end of file +} diff --git a/apps/api/R/models.R b/apps/api/R/models.R index a2eba5d2672..82ca656aed1 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -2,10 +2,11 @@ library(dplyr) #' Retrieve the details of a PEcAn model, based on model_id #' @param model_id Model ID (character) +#' @param dbcon Database connection object. Default is global database pool. #' @return Model details #' @author Tezan Sahu #* @get / -getModel <- function(model_id, res){ +getModel <- function(model_id, res, dbcon = global_db_pool){ Model <- tbl(dbcon, "models") %>% select(model_id = id, model_name, revision, modeltype_id) %>% @@ -44,10 +45,12 @@ getModel <- function(model_id, res){ #' @param model_name Model name search string (character) #' @param revision Model version/revision search string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @param dbcon Database connection object. Default is global database pool. #' @return Model subset matching the model search string #' @author Tezan Sahu #* @get / -searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ +searchModels <- function(model_name="", revision="", ignore_case=TRUE, res, + dbcon = global_db_pool){ model_name <- URLdecode(model_name) revision <- URLdecode(revision) @@ -66,4 +69,4 @@ searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){ else { return(list(models=qry_res, count = nrow(qry_res))) } -} \ No newline at end of file +} diff --git a/apps/api/R/pfts.R b/apps/api/R/pfts.R index 914aad14821..607ce7d66a3 100644 --- a/apps/api/R/pfts.R +++ b/apps/api/R/pfts.R @@ -2,10 +2,11 @@ library(dplyr) #' Retrieve the details of a PEcAn PFT, based on pft_id #' @param pft_id PFT ID (character) +#' @param dbcon Database connection object. Default is global database pool. #' @return PFT details #' @author Tezan Sahu #* @get / -getPfts <- function(pft_id, res){ +getPfts <- function(pft_id, res, dbcon = global_db_pool){ pft <- tbl(dbcon, "pfts") %>% select(pft_id = id, pft_name = name, definition, pft_type, modeltype_id) %>% @@ -41,10 +42,12 @@ getPfts <- function(pft_id, res){ #' @param pft_type PFT type (either 'plant' or 'cultivar') (character) #' @param model_type Model type serch string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @param dbcon Database connection object. Default is global database pool. 
 #' @return PFT subset matching the search criteria
 #' @author Tezan Sahu
 #* @get /
-searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE, res){
+searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE, res,
+                       dbcon = global_db_pool){
   pft_name <- URLdecode(pft_name)
   pft_type <- URLdecode(pft_type)
   model_type <- URLdecode(model_type)
@@ -76,4 +79,4 @@ searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE
   else {
     return(list(pfts=qry_res, count = nrow(qry_res)))
   }
-}
\ No newline at end of file
+}
diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index 176610af429..8ed18cfeed1 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -4,11 +4,13 @@ source("get.file.R")
 #' Get the list of runs (belonging to a particular workflow)
 #' @param workflow_id Workflow id (character)
 #' @param offset
-#' @param limit
+#' @param limit
+#' @param dbcon Database connection object. Default is global database pool.
 #' @return List of runs (belonging to a particular workflow)
 #' @author Tezan Sahu
 #* @get /
-getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){
+getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res,
+                    dbcon = global_db_pool){
   if (! limit %in% c(10, 20, 50, 100, 500)) {
     res$status <- 400
     return(list(error = "Invalid value for parameter"))
@@ -82,7 +84,7 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res){
 #' @return Details of requested run
 #' @author Tezan Sahu
 #* @get /
-getRunDetails <- function(req, run_id, res){
+getRunDetails <- function(req, run_id, res, dbcon = global_db_pool){
   Runs <- tbl(dbcon, "runs") %>%
     select(-outdir, -outprefix, -setting, -created_at, -updated_at)
@@ -142,11 +144,12 @@ getRunDetails <- function(req, run_id, res){
 #' Get the input file specified by user for a run
 #' @param run_id Run id (character)
 #' @param filename Name of the input file (character)
+#' @param dbcon Database connection object. Default is global database pool.
#' @return Input file specified by user for the run #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get //input/ -getRunInputFile <- function(req, run_id, filename, res){ +getRunInputFile <- function(req, run_id, filename, res, dbcon = global_db_pool){ Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) @@ -182,7 +185,7 @@ getRunInputFile <- function(req, run_id, filename, res){ #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get //output/ -getRunOutputFile <- function(req, run_id, filename, res){ +getRunOutputFile <- function(req, run_id, filename, res, dbcon = global_db_pool){ Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) @@ -223,7 +226,8 @@ getRunOutputFile <- function(req, run_id, filename, res){ #* @get //graph// #* @serializer contentType list(type='image/png') -plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, height=600, res){ +plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, height=600, res, + dbcon = global_db_pool) { # Get workflow_id for the run Run <- tbl(dbcon, "runs") %>% filter(id == !!run_id) @@ -320,4 +324,4 @@ getRunOutputs <- function(outdir){ outputs$years[years[i]] <- years_data[i] } return(outputs) -} \ No newline at end of file +} diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R index 5a3e02112aa..c62bde56618 100644 --- a/apps/api/R/sites.R +++ b/apps/api/R/sites.R @@ -2,10 +2,11 @@ library(dplyr) #' Retrieve the details of a PEcAn site, based on site_id #' @param site_id Site ID (character) +#' @param dbcon Database connection object. Default is global database pool. #' @return Site details #' @author Tezan Sahu #* @get / -getSite <- function(site_id, res){ +getSite <- function(site_id, res, dbcon = global_db_pool){ site <- tbl(dbcon, "sites") %>% select(-created_at, -updated_at, -user_id, -geometry) %>% @@ -33,10 +34,11 @@ getSite <- function(site_id, res){ #' Search for PEcAn sites containing wildcards for filtering #' @param sitename Site name search string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search +#' @param dbcon Database connection object. Default is global database pool. #' @return Site subset matching the site search string #' @author Tezan Sahu #* @get / -searchSite <- function(sitename="", ignore_case=TRUE, res){ +searchSite <- function(sitename="", ignore_case=TRUE, res, dbcon = global_db_pool){ sitename <- URLdecode(sitename) sites <- tbl(dbcon, "sites") %>% @@ -54,4 +56,4 @@ searchSite <- function(sitename="", ignore_case=TRUE, res){ else { return(list(sites=qry_res, count = nrow(qry_res))) } -} \ No newline at end of file +} diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 6b3572bddcf..8e1b45d6383 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -32,9 +32,10 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* Submit a workflow (converted to list) #* @param workflowList Workflow parameters expressed as a list #* @param userDetails List containing userid & username +#* @param dbcon Database connection object. Default is global database pool. 
 #* @return ID & status of the submitted workflow
 #* @author Tezan Sahu
-submit.workflow.list <- function(workflowList, userDetails) {
+submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_pool) {
 
   # Set database details
   workflowList$database <- list(
@@ -124,9 +125,10 @@ submit.workflow.list <- function(workflowList, userDetails) {
 
 #* Insert the workflow into workflows table to obtain the workflow_id
 #* @param workflowList List containing the workflow details
+#* @param dbcon Database connection object. Default is global database pool.
 #* @return ID of the submitted workflow
 #* @author Tezan Sahu
-insert.workflow <- function(workflowList){
+insert.workflow <- function(workflowList, dbcon = global_db_pool){
 
   model_id <- workflowList$model$id
   if(is.null(model_id)){
@@ -188,8 +190,9 @@ insert.workflow <- function(workflowList){
 
 #* Insert the workflow into attributes table
 #* @param workflowList List containing the workflow details
+#* @param dbcon Database connection object. Default is global database pool.
 #* @author Tezan Sahu
-insert.attribute <- function(workflowList){
+insert.attribute <- function(workflowList, dbcon = global_db_pool){
 
   # Create an array of PFTs
   pfts <- c()
diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R
index 0ed13eb31ed..1f3d7b3eac4 100644
--- a/apps/api/R/workflows.R
+++ b/apps/api/R/workflows.R
@@ -5,11 +5,13 @@ source("submit.workflow.R")
 #' @param model_id Model id (character)
 #' @param site_id Site id (character)
 #' @param offset
-#' @param limit
+#' @param limit Max number of workflows to retrieve (default = 50)
+#' @param dbcon Database connection object. Default is global database pool.
 #' @return List of workflows (using a particular model & site, if specified)
 #' @author Tezan Sahu
 #* @get /
-getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res){
+getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res,
+                         dbcon = global_db_pool){
   if (! limit %in% c(10, 20, 50, 100, 500)) {
     res$status <- 400
     return(list(error = "Invalid value for parameter"))
@@ -108,10 +110,11 @@ submitWorkflow <- function(req, res){
 
 #' Get the details of the workflow specified by the id
 #' @param id Workflow id (character)
+#' @param dbcon Database connection object. Default is global database pool.
 #' @return Details of requested workflow
 #' @author Tezan Sahu
 #* @get /
-getWorkflowDetails <- function(id, req, res){
+getWorkflowDetails <- function(id, req, res, dbcon = global_db_pool){
   Workflow <- tbl(dbcon, "workflows") %>%
     select(id, model_id, site_id, folder, hostname, user_id)
@@ -161,10 +164,11 @@ getWorkflowDetails <- function(id, req, res){
 
 #' Get the status of the workflow specified by the id
 #' @param id Workflow id (character)
+#' @param dbcon Database connection object. Default is global database pool.
#' @return Details of requested workflow #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get //file/ -getWorkflowFile <- function(req, id, filename, res){ +getWorkflowFile <- function(req, id, filename, res, dbcon = global_db_pool){ Workflow <- tbl(dbcon, "workflows") %>% select(id, user_id) %>% filter(id == !!id) @@ -228,4 +233,4 @@ getWorkflowFile <- function(req, id, filename, res){ bin <- readBin(filepath,'raw', n = file.info(filepath)$size) return(bin) } -} \ No newline at end of file +} From 64928eb00b20957034aa87813efdef5678a2fbc9 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 28 Oct 2020 16:51:35 +0100 Subject: [PATCH 1557/2289] R_LIBS_USER now used directly --- .github/workflows/ci.yml | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 7118a5b47ac..6259738fb3d 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -16,10 +16,7 @@ on: pull_request: env: - # Would be more usual to set R_LIBS_USER, but R uses R_LIBS first if present - # ...and it's always present here, because the rocker/tidyverse base image - # checks at R startup time for R_LIBS and R_LIBS_USER, sets both if not found - R_LIBS: ~/R/library + R_LIBS_USER: /usr/local/lib/R/site-library jobs: From f3f9e19da859d03022cf2cf9636afc9b56c78075 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 29 Oct 2020 09:57:20 +0100 Subject: [PATCH 1558/2289] specify return value, plus code cleanup --- base/settings/R/get_args.R | 13 ++++++------- base/settings/man/get_args.Rd | 2 +- 2 files changed, 7 insertions(+), 8 deletions(-) diff --git a/base/settings/R/get_args.R b/base/settings/R/get_args.R index 59d34ae4f14..a7e8ef30073 100644 --- a/base/settings/R/get_args.R +++ b/base/settings/R/get_args.R @@ -2,18 +2,17 @@ #' #' Used in web/workflow.R to parse command line arguments. #' See also https://github.com/PecanProject/pecan/pull/2626. -#' -#' @return +#' +#' @return list generated by \link[optparse]{parse_args}; see there for details. #' @export #' #' @examples #' \dontrun{./web/workflow.R -h} -get_args <- function () { - option_list = list( +get_args <- function() { + option_list <- list( optparse::make_option( c("-s", "--settings"), - default = ifelse(Sys.getenv("PECAN_SETTINGS") != "", - Sys.getenv("PECAN_SETTINGS"), "pecan.xml"), + default = Sys.getenv("PECAN_SETTINGS", "pecan.xml"), type = "character", help = "Settings XML file", metavar = "FILE", @@ -35,5 +34,5 @@ get_args <- function () { stop(sprintf('--settings "%s" not a valid file\n', args$settings)) } - return(invisible(args)) + return(args) } \ No newline at end of file diff --git a/base/settings/man/get_args.Rd b/base/settings/man/get_args.Rd index d405fb6009e..9dd874cbe6e 100644 --- a/base/settings/man/get_args.Rd +++ b/base/settings/man/get_args.Rd @@ -7,7 +7,7 @@ get_args() } \value{ - +list generated by \link[optparse]{parse_args}; see there for details. } \description{ Used in web/workflow.R to parse command line arguments. 
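As a quick orientation before the styler pass that follows, here is a minimal sketch of how the reworked `get_args()` resolves its `--settings` default (the paths in the sketch are hypothetical; the flag names and the `PECAN_SETTINGS` fallback come from the diff above):

```r
# With PECAN_SETTINGS unset, the option falls back to "pecan.xml":
Sys.unsetenv("PECAN_SETTINGS")
Sys.getenv("PECAN_SETTINGS", "pecan.xml")   # "pecan.xml"

# With the environment variable set, it takes precedence:
Sys.setenv(PECAN_SETTINGS = "/home/carya/pecan.xml")
Sys.getenv("PECAN_SETTINGS", "pecan.xml")   # "/home/carya/pecan.xml"

# Equivalent shell invocations of the workflow script:
#   ./web/workflow.R --settings pecan.xml --continue
#   PECAN_SETTINGS=/home/carya/pecan.xml ./web/workflow.R
```

Note one subtle behavior change: the old `ifelse(Sys.getenv("PECAN_SETTINGS") != "", ...)` treated an empty `PECAN_SETTINGS` as unset, while `Sys.getenv("PECAN_SETTINGS", "pecan.xml")` only applies the fallback when the variable is truly absent.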
From 51e4bbd32f58b7d144b032a8822f69ecc0d43f42 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 29 Oct 2020 09:58:51 +0100 Subject: [PATCH 1559/2289] run styler --- base/settings/R/get_args.R | 46 +++++++++++++++++++------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/base/settings/R/get_args.R b/base/settings/R/get_args.R index a7e8ef30073..277ad5e0e53 100644 --- a/base/settings/R/get_args.R +++ b/base/settings/R/get_args.R @@ -9,30 +9,30 @@ #' @examples #' \dontrun{./web/workflow.R -h} get_args <- function() { - option_list <- list( - optparse::make_option( - c("-s", "--settings"), - default = Sys.getenv("PECAN_SETTINGS", "pecan.xml"), - type = "character", - help = "Settings XML file", - metavar = "FILE", - ), - optparse::make_option( - c("-c", "--continue"), - default = FALSE, - action = "store_true", - type = "logical", - help = "Continue processing", - ) + option_list <- list( + optparse::make_option( + c("-s", "--settings"), + default = Sys.getenv("PECAN_SETTINGS", "pecan.xml"), + type = "character", + help = "Settings XML file", + metavar = "FILE", + ), + optparse::make_option( + c("-c", "--continue"), + default = FALSE, + action = "store_true", + type = "logical", + help = "Continue processing", ) + ) - parser <- optparse::OptionParser(option_list = option_list) - args <- optparse::parse_args(parser) + parser <- optparse::OptionParser(option_list = option_list) + args <- optparse::parse_args(parser) - if (!file.exists(args$settings)) { - optparse::print_help(parser) - stop(sprintf('--settings "%s" not a valid file\n', args$settings)) - } + if (!file.exists(args$settings)) { + optparse::print_help(parser) + stop(sprintf('--settings "%s" not a valid file\n', args$settings)) + } - return(args) -} \ No newline at end of file + return(args) +} From e458ca329ab86a116e4f64ce71fa7665a067d215 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 14 Aug 2020 18:18:30 -0400 Subject: [PATCH 1560/2289] First draft of download.merra function --- modules/data.atmosphere/R/download.MERRA.R | 228 +++++++++++++++++++++ 1 file changed, 228 insertions(+) create mode 100644 modules/data.atmosphere/R/download.MERRA.R diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R new file mode 100644 index 00000000000..fed5ba5454c --- /dev/null +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -0,0 +1,228 @@ +#' Download MERRA data +#' +#' @param outfolder +#' @param start_date +#' @param end_date +#' @param lat.in +#' @param lon.in +#' @param overwrite +#' @param verbose +#' @return +#' @author Alexey Shiklomanov +download.MERRA <- function(outfolder, start_date, end_date, + lat.in, lon.in, + overwrite = FALSE, + verbose = FALSE) { + + dates <- seq.Date(start_date, end_date, "1 day") + + dir.create(outfolder, showWarnings = FALSE, recursive = TRUE) + + # Download all MERRA files first. This skips files that have already been downloaded. 
+  for (i in seq_along(dates)) {
+    date <- dates[[i]]
+    message("Downloading ", as.character(date),
+            " (", i, " of ", length(dates), ")")
+    get_merra_date(date, lat.in, lon.in, outfolder)
+  }
+
+  # Now, post-process
+  start_year <- lubridate::year(start_date)
+  end_year <- lubridate::year(end_date)
+  ylist <- seq(start_year, end_year)
+
+  nyear <- length(ylist)
+  results <- data.frame(
+    file = character(nyear),
+    host = "",
+    mimetype = "",
+    formatname = "",
+    startdate = "",
+    enddate = "",
+    dbfile.name = "MERRA",
+    stringsAsFactors = FALSE
+  )
+
+  for (i in seq_len(nyear)) {
+    year <- ylist[i]
+    ntime <- PEcAn.utils::days_in_year(year) * 24 # Hourly data
+
+    loc.file <- file.path(outfolder, paste("MERRA", year, "nc", sep = "."))
+    results$file[i] <- loc.file
+    results$host[i] <- PEcAn.remote::fqdn()
+    results$startdate[i] <- paste0(year, "-01-01 00:00:00")
+    results$enddate[i] <- paste0(year, "-12-31 23:59:59")
+    results$mimetype[i] <- "application/x-netcdf"
+    results$formatname[i] <- "CF Meteorology"
+
+    if (file.exists(loc.file) && !isTRUE(overwrite)) {
+      PEcAn.logger::logger.error("File already exists. Skipping to next year")
+      next
+    }
+
+    ## Create dimensions
+    lat <- ncdf4::ncdim_def(name = "latitude", units = "degree_north", vals = lat.in, create_dimvar = TRUE)
+    lon <- ncdf4::ncdim_def(name = "longitude", units = "degree_east", vals = lon.in, create_dimvar = TRUE)
+    days_elapsed <- seq(1, ntime) * 1 / 24 - 0.5 / 24
+    time <- ncdf4::ncdim_def(name = "time", units = paste0("days since ", year, "-01-01T00:00:00Z"),
+                             vals = as.array(days_elapsed), create_dimvar = TRUE, unlim = TRUE)
+    dim <- list(lat, lon, time)
+
+    ## Create output variables
+    var_list <- list()
+    for (dat in list(merra_vars, merra_pres_vars, merra_flux_vars)) {
+      for (j in seq_len(nrow(dat))) {
+        var_list <- c(var_list, list(ncdf4::ncvar_def(
+          name = dat[j, ][["CF_name"]],
+          units = dat[j, ][["units"]],
+          dim = dim,
+          missval = -999
+        )))
+      }
+    }
+
+    ## Create output file
+    loc <- ncdf4::nc_create(loc.file, var_list)
+    on.exit(ncdf4::nc_close(loc), add = TRUE)
+
+    # Populate output file
+    dates_yr <- dates[lubridate::year(dates) == year]
+    for (d in seq_along(dates_yr)) {
+      date <- dates_yr[[d]]
+      end <- d * 24
+      start <- end - 23
+      mostfile <- file.path(outfolder, sprintf("merra-most-%s.nc", as.character(date)))
+      nc <- ncdf4::nc_open(mostfile)
+      for (r in seq_len(nrow(merra_vars))) {
+        x <- ncdf4::ncvar_get(nc, merra_vars[r,][["MERRA_name"]])
+        ncdf4::ncvar_put(loc, merra_vars[r,][["CF_name"]], x,
+                         start = c(1, 1, start), count = c(1, 1, 24))
+      }
+      ncdf4::nc_close(nc)
+      presfile <- file.path(outfolder, sprintf("merra-pres-%s.nc", as.character(date)))
+      nc <- ncdf4::nc_open(presfile)
+      for (r in seq_len(nrow(merra_pres_vars))) {
+        x <- ncdf4::ncvar_get(nc, merra_pres_vars[r,][["MERRA_name"]])
+        ncdf4::ncvar_put(loc, merra_pres_vars[r,][["CF_name"]], x,
+                         start = c(1, 1, start), count = c(1, 1, 24))
+      }
+      ncdf4::nc_close(nc)
+      fluxfile <- file.path(outfolder, sprintf("merra-flux-%s.nc", as.character(date)))
+      nc <- ncdf4::nc_open(fluxfile)
+      for (r in seq_len(nrow(merra_flux_vars))) {
+        x <- ncdf4::ncvar_get(nc, merra_flux_vars[r,][["MERRA_name"]])
+        ncdf4::ncvar_put(loc, merra_flux_vars[r,][["CF_name"]], x,
+                         start = c(1, 1, start), count = c(1, 1, 24))
+      }
+      ncdf4::nc_close(nc)
+    }
+  }
+
+  return(results)
+}
+
+get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) {
+  date <- as.character(date)
+  dpat <- "([[:digit:]]{4})-([[:digit:]]{2})-([[:digit:]]{2})"
+  year <-
as.numeric(gsub(dpat, "\\1", date)) + month <- as.numeric(gsub(dpat, "\\2", date)) + day <- as.numeric(gsub(dpat, "\\3", date)) + dir.create(outdir, showWarnings = FALSE, recursive = TRUE) + version <- if (year >= 2011) { + 400 + } else if (year >= 2001) { + 300 + } else { + 200 + } + base_url <- "https://goldsmr4.gesdisc.eosdis.nasa.gov/opendap/MERRA2" + + lat_grid <- seq(-90, 90, 0.5) + lon_grid <- seq(-180, 180, 0.625) + ilat <- which.min(abs(lat_grid - latitude)) + ilon <- which.min(abs(lon_grid - longitude)) + idxstring <- sprintf("[0:1:23][%d][%d]", ilat, ilon) + + # Standard variables + url <- glue::glue( + "{base_url}/{merra_prod}/{year}/{sprintf('%02d', month)}/", + "MERRA2_{version}.{merra_file}.", + "{year}{sprintf('%02d', month)}{sprintf('%02d', day)}.nc4.nc4" + ) + qvars <- sprintf("%s%s", merra_vars$MERRA_name, idxstring) + qstring <- paste(qvars, collapse = ",") + outfile <- file.path(outdir, sprintf("merra-most-%d-%02d-%02d.nc", + year, month, day)) + if (overwrite || !file.exists(outfile)) { + req <- httr::GET( + paste(url, qstring, sep = "?"), + httr::authenticate(user = "pecanproject", password = "Data4pecan3"), + httr::write_disk(outfile, overwrite = TRUE) + ) + } + + # Pressure + url <- glue::glue( + "{base_url}/{merra_pres_prod}/{year}/{sprintf('%02d', month)}/", + "MERRA2_{version}.{merra_pres_file}.", + "{year}{sprintf('%02d', month)}{sprintf('%02d', day)}.nc4.nc4" + ) + qvars <- sprintf("%s%s", merra_pres_vars$MERRA_name, idxstring) + qstring <- paste(qvars, collapse = ",") + outfile <- file.path(outdir, sprintf("merra-pres-%d-%02d-%02d.nc", + year, month, day)) + if (overwrite || !file.exists(outfile)) { + req <- httr::GET( + paste(url, qstring, sep = "?"), + httr::authenticate(user = "pecanproject", password = "Data4pecan3"), + httr::write_disk(outfile, overwrite = TRUE) + ) + } + + # Flux + url <- glue::glue( + "{base_url}/{merra_flux_prod}/{year}/{sprintf('%02d', month)}/", + "MERRA2_{version}.{merra_flux_file}.", + "{year}{sprintf('%02d', month)}{sprintf('%02d', day)}.nc4.nc4" + ) + qvars <- sprintf("%s%s", merra_flux_vars$MERRA_name, idxstring) + qstring <- paste(qvars, collapse = ",") + outfile <- file.path(outdir, sprintf("merra-flux-%d-%02d-%02d.nc", + year, month, day)) + if (overwrite || !file.exists(outfile)) { + req <- httr::GET( + paste(url, qstring, sep = "?"), + httr::authenticate(user = "pecanproject", password = "Data4pecan3"), + httr::write_disk(outfile, overwrite = TRUE) + ) + } +} + +# Time-integrated variables +merra_prod <- "M2T1NXFLX.5.12.4" +merra_file <- "tavg1_2d_flx_Nx" +merra_vars <- tibble::tribble( + ~CF_name, ~MERRA_name, ~units, + "air_temperature", "TLML", "Kelvin", + "eastward_wind", "ULML", "m/s", + "northward_wind", "VLML", "m/s", + "specific_humidity", "QSH", "g/g", + "precipitation_flux", "PRECTOT", "kg/m2/s" +) + +# Instantaneous variables +merra_pres_prod <- "M2I1NXASM.5.12.4" +merra_pres_file <- "inst1_2d_asm_Nx" +merra_pres_vars <- tibble::tribble( + ~CF_name, ~MERRA_name, ~units, + "air_pressure", "PS", "Pascal", +) + +# Radiation variables +merra_flux_prod <- "M2T1NXRAD.5.12.4" +merra_flux_file <- "tavg1_2d_rad_Nx" +merra_flux_vars <- tibble::tribble( + ~CF_name, ~MERRA_name, ~units, + "surface_downwelling_longwave_flux_in_air", "LWGNT", "W/m2", + "surface_downwelling_shortwave_flux_in_air", "SWGNT", "W/m2" +) From cb5da7ba2415f8c84182f0514ca26038d4080e96 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 28 Sep 2020 18:05:15 -0400 Subject: [PATCH 1561/2289] Add MERRA download test --- 
.../tests/testthat/test.download.MERRA.R | 26 +++++++++++++++++++ 1 file changed, 26 insertions(+) create mode 100644 modules/data.atmosphere/tests/testthat/test.download.MERRA.R diff --git a/modules/data.atmosphere/tests/testthat/test.download.MERRA.R b/modules/data.atmosphere/tests/testthat/test.download.MERRA.R new file mode 100644 index 00000000000..6cb6fe90c50 --- /dev/null +++ b/modules/data.atmosphere/tests/testthat/test.download.MERRA.R @@ -0,0 +1,26 @@ +context("Download MERRA") + +outdir <- tempdir() +testthat::setup(dir.create(outdir, showWarnings = FALSE, recursive = TRUE)) +testthat::teardown(unlink(outdir, recursive = TRUE)) + +test_that("MERRA download works", { + start_date <- "2009-06-01" + end_date <- "2009-06-30" + dat <- download.MERRA(outdir, start_date, end_date, + lat.in = 45.3, lon.in = -85.3, overwrite = TRUE) + expect_true(file.exists(dat$file[[1]])) + nc <- ncdf4::nc_open(dat$file[[1]]) + on.exit(ncdf4::nc_close(nc), add = TRUE) + expect_timeseq <- seq(lubridate::as_datetime(start_date), + lubridate::as_datetime(paste(end_date, "23:59:59")), + by = "1 hour", tz = "UTC") + time <- lubridate::as_datetime("2009-01-01") + + as.difftime(ncdf4::ncvar_get(nc, "time"), units = "days") + expect_equal(time, expect_timeseq) + temp_k <- ncdf4::ncvar_get(nc, "air_temperature") + # June temperatures here should always be greater than 0 degC + expect_true(all(temp_k > 273.15)) + # ...and temperatures anywhere should be less than 60 degC + expect_true(all(temp_k < 323.15)) +}) From eb3c0e2a3c52e8e87645546d766b35cdfbfecdc7 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 28 Sep 2020 18:09:45 -0400 Subject: [PATCH 1562/2289] Working MERRA download, and registration --- modules/data.atmosphere/NAMESPACE | 1 + modules/data.atmosphere/R/download.MERRA.R | 52 ++++++++++++------- .../inst/registration/register.MERRA.xml | 10 ++++ modules/data.atmosphere/man/download.MERRA.Rd | 43 +++++++++++++++ .../tests/testthat/test.download.MERRA.R | 6 +-- 5 files changed, 90 insertions(+), 22 deletions(-) create mode 100644 modules/data.atmosphere/inst/registration/register.MERRA.xml create mode 100644 modules/data.atmosphere/man/download.MERRA.Rd diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 76f258229d4..3729ed261a7 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -26,6 +26,7 @@ export(download.GFDL) export(download.GLDAS) export(download.Geostreams) export(download.MACA) +export(download.MERRA) export(download.MsTMIP_NARR) export(download.NARR) export(download.NARR_site) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index fed5ba5454c..fb61052b2e3 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -1,28 +1,24 @@ #' Download MERRA data #' -#' @param outfolder -#' @param start_date -#' @param end_date -#' @param lat.in -#' @param lon.in -#' @param overwrite -#' @param verbose -#' @return +#' @inheritParams download.CRUNCEP +#' @return `data.frame` of meteorology data metadata #' @author Alexey Shiklomanov +#' @export download.MERRA <- function(outfolder, start_date, end_date, lat.in, lon.in, overwrite = FALSE, verbose = FALSE) { - dates <- seq.Date(start_date, end_date, "1 day") + dates <- seq.Date(as.Date(start_date), as.Date(end_date), "1 day") dir.create(outfolder, showWarnings = FALSE, recursive = TRUE) # Download all MERRA files first. This skips files that have already been downloaded. 
for (i in seq_along(dates)) { date <- dates[[i]] - message("Downloading ", as.character(date), - " (", i, " of ", length(dates), ")") + PEcAn.logger::logger.debug(paste0( + "Downloading ", as.character(date), " (", i, " of ", length(dates), ")" + )) get_merra_date(date, lat.in, lon.in, outfolder) } @@ -45,7 +41,20 @@ download.MERRA <- function(outfolder, start_date, end_date, for (i in seq_len(nyear)) { year <- ylist[i] - ntime <- PEcAn.utils::days_in_year(year) * 24 # Hourly data + baseday <- paste0(year, "-01-01T00:00:00Z") + + # Accommodate partial years + y_startdate <- pmax(ISOdate(year, 01, 01, 0, tz = "UTC"), + lubridate::as_datetime(start_date)) + y_enddate <- pmin(ISOdate(year, 12, 31, 23, 59, 59, tz = "UTC"), + lubridate::as_datetime(paste(end_date, "23:59:59Z"))) + + timeseq <- as.numeric(difftime( + seq(y_startdate, y_enddate, "hours"), + baseday, + tz = "UTC", unit = "days" + )) + ntime <- length(timeseq) loc.file <- file.path(outfolder, paste("MERRA", year, "nc", sep = ".")) results$file[i] <- loc.file @@ -55,17 +64,22 @@ download.MERRA <- function(outfolder, start_date, end_date, results$mimetype[i] <- "application/x-netcdf" results$formatname[i] <- "CF Meteorology" - if (file.exists(loc.file) && !isTRUE(overwrite)) { - PEcAn.logger::logger.error("File already exists. Skipping to next year") - next + if (file.exists(loc.file)) { + if (overwrite) { + PEcAn.logger::logger.info(paste0("Removing existing file ", loc.file)) + file.remove(loc.file) + } else { + PEcAn.logger::logger.info(paste0( + "File ", loc.file, " already exists. Skipping to next year" + )) + next + } } ## Create dimensions lat <- ncdf4::ncdim_def(name = "latitude", units = "degree_north", vals = lat.in, create_dimvar = TRUE) lon <- ncdf4::ncdim_def(name = "longitude", units = "degree_east", vals = lon.in, create_dimvar = TRUE) - days_elapsed <- seq(1, ntime) * 1 / 24 - 0.5 / 24 - time <- ncdf4::ncdim_def(name = "time", units = paste0("days since ", year, "-01-01T00:00:00Z"), - vals = as.array(days_elapsed), create_dimvar = TRUE, unlim = TRUE) + time <- ncdf4::ncdim_def(name = "time", units = baseday, vals = timeseq, create_dimvar = TRUE, unlim = TRUE) dim <- list(lat, lon, time) ## Create output variables @@ -190,7 +204,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) outfile <- file.path(outdir, sprintf("merra-flux-%d-%02d-%02d.nc", year, month, day)) if (overwrite || !file.exists(outfile)) { - req <- httr::GET( + req <- robustly(httr::GET, n = 10)( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), httr::write_disk(outfile, overwrite = TRUE) diff --git a/modules/data.atmosphere/inst/registration/register.MERRA.xml b/modules/data.atmosphere/inst/registration/register.MERRA.xml new file mode 100644 index 00000000000..d5d5638dd64 --- /dev/null +++ b/modules/data.atmosphere/inst/registration/register.MERRA.xml @@ -0,0 +1,10 @@ + + +regional + + 33 + CF Meteorology + application/x-netcdf + nc + + diff --git a/modules/data.atmosphere/man/download.MERRA.Rd b/modules/data.atmosphere/man/download.MERRA.Rd new file mode 100644 index 00000000000..5e30aaa5742 --- /dev/null +++ b/modules/data.atmosphere/man/download.MERRA.Rd @@ -0,0 +1,43 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/download.MERRA.R +\name{download.MERRA} +\alias{download.MERRA} +\title{Download MERRA data} +\usage{ +download.MERRA( + outfolder, + start_date, + end_date, + lat.in, + lon.in, + overwrite = FALSE, + verbose = FALSE +) 
+} +\arguments{ +\item{outfolder}{Directory where results should be written} + +\item{start_date}{Range of years to retrieve. Format is YYYY-MM-DD, +but only the year portion is used and the resulting files always contain a full year of data.} + +\item{end_date}{Range of years to retrieve. Format is YYYY-MM-DD, +but only the year portion is used and the resulting files always contain a full year of data.} + +\item{lat.in}{site latitude in decimal degrees} + +\item{lon.in}{site longitude in decimal degrees} + +\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} + +\item{verbose}{logical. Passed on to \code{\link[ncdf4]{ncvar_def}} and \code{\link[ncdf4]{nc_create}} +to control printing of debug info} +} +\value{ +`data.frame` of meteorology data metadata +} +\description{ +Download MERRA data +} +\author{ +Alexey Shiklomanov +} diff --git a/modules/data.atmosphere/tests/testthat/test.download.MERRA.R b/modules/data.atmosphere/tests/testthat/test.download.MERRA.R index 6cb6fe90c50..58606e1e0f7 100644 --- a/modules/data.atmosphere/tests/testthat/test.download.MERRA.R +++ b/modules/data.atmosphere/tests/testthat/test.download.MERRA.R @@ -1,12 +1,12 @@ context("Download MERRA") outdir <- tempdir() -testthat::setup(dir.create(outdir, showWarnings = FALSE, recursive = TRUE)) -testthat::teardown(unlink(outdir, recursive = TRUE)) +setup(dir.create(outdir, showWarnings = FALSE, recursive = TRUE)) +teardown(unlink(outdir, recursive = TRUE)) test_that("MERRA download works", { start_date <- "2009-06-01" - end_date <- "2009-06-30" + end_date <- "2009-06-10" dat <- download.MERRA(outdir, start_date, end_date, lat.in = 45.3, lon.in = -85.3, overwrite = TRUE) expect_true(file.exists(dat$file[[1]])) From ea15fadb969450dbdae3e9c864aea5df2341c96e Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Oct 2020 17:26:57 -0400 Subject: [PATCH 1563/2289] Make download.MERRA silently absorb extra args So that it works with `convert.input` and friends. --- modules/data.atmosphere/R/download.MERRA.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index fb61052b2e3..a5ce078889d 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -1,13 +1,15 @@ #' Download MERRA data #' #' @inheritParams download.CRUNCEP +#' @param ... Not used -- silently soak up extra arguments from `convert.input`, etc. #' @return `data.frame` of meteorology data metadata #' @author Alexey Shiklomanov #' @export download.MERRA <- function(outfolder, start_date, end_date, lat.in, lon.in, overwrite = FALSE, - verbose = FALSE) { + verbose = FALSE, + ...) { dates <- seq.Date(as.Date(start_date), as.Date(end_date), "1 day") From d4c7ea42024e133fa5555a356f22b4eb4450db67 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 28 Oct 2020 17:27:48 -0400 Subject: [PATCH 1564/2289] REMOTE: Ensure scratchdir exists Otherwise, this will throw an error. 
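To make the failure mode concrete -- a minimal sketch (illustrative paths only; the actual one-line fix is in the diff below):

# Hypothetical scratch location, standing in for the configured scratchdir
scratchdir <- file.path(tempdir(), "pecan-scratch")

# dir.create() is safe to call unconditionally: recursive = TRUE builds any
# missing parent directories and showWarnings = FALSE silences the warning
# when the directory already exists, so repeated calls are harmless no-ops.
dir.create(scratchdir, showWarnings = FALSE, recursive = TRUE)

# Without the guard above, writing the temporary script would error on a
# fresh host because the parent directory would not exist yet.
tmpfile <- file.path(scratchdir, "pecan-remote-script.R")
writeLines("cat('hello from the scratch dir\\n')", tmpfile)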
--- base/remote/R/remote.execute.R.R | 1 + 1 file changed, 1 insertion(+) diff --git a/base/remote/R/remote.execute.R.R b/base/remote/R/remote.execute.R.R index 6a9dc39cdcc..cb7a74e878b 100644 --- a/base/remote/R/remote.execute.R.R +++ b/base/remote/R/remote.execute.R.R @@ -22,6 +22,7 @@ remote.execute.R <- function(script, host = "localhost", user = NA, verbose = FA if (is.character(host)) { host <- list(name = host) } + dir.create(scratchdir, showWarnings = FALSE, recursive = TRUE) uuid <- paste0("pecan-", paste(sample(c(letters[1:6], 0:9), 30, replace = TRUE), collapse = "")) tmpfile <- file.path(scratchdir, uuid) From b74a427597329c5b59a8cdb9c4a448b955d25ef9 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 29 Oct 2020 08:37:00 -0400 Subject: [PATCH 1565/2289] Add MERRA to list of CF met products This should really come from the XML registration or database, rather than being hard coded, but that's a problem for a later time. --- modules/data.atmosphere/R/met.process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index 33328c9a306..871f8494084 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -201,7 +201,7 @@ met.process <- function(site, input_met, start_date, end_date, model, dbparms=dbparms ) - if (met %in% c("CRUNCEP", "GFDL","NOAA_GEFS_downscale")) { + if (met %in% c("CRUNCEP", "GFDL", "NOAA_GEFS_downscale", "MERRA")) { ready.id <- raw.id # input_met$id overwrites ready.id below, needs to be populated here input_met$id <- raw.id From 62d0dcc58e250cd95d64c2e5ac390c5a50accc5b Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 29 Oct 2020 08:55:40 -0400 Subject: [PATCH 1566/2289] Fix MERRA time unit --- modules/data.atmosphere/R/download.MERRA.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index a5ce078889d..694a1f8e460 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -81,7 +81,8 @@ download.MERRA <- function(outfolder, start_date, end_date, ## Create dimensions lat <- ncdf4::ncdim_def(name = "latitude", units = "degree_north", vals = lat.in, create_dimvar = TRUE) lon <- ncdf4::ncdim_def(name = "longitude", units = "degree_east", vals = lon.in, create_dimvar = TRUE) - time <- ncdf4::ncdim_def(name = "time", units = baseday, vals = timeseq, create_dimvar = TRUE, unlim = TRUE) + time <- ncdf4::ncdim_def(name = "time", units = paste("Days since ", baseday), + vals = timeseq, create_dimvar = TRUE, unlim = TRUE) dim <- list(lat, lon, time) ## Create output variables From 96cdc0025609d77aa15b8fc6d551d5edff56d875 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 29 Oct 2020 08:58:52 -0400 Subject: [PATCH 1567/2289] Add MERRA to CHANGELOG --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7137acbc431..db71107693e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -74,6 +74,7 @@ This is a major change: - Documentation how to run ED using singularity - PEcAn.DB gains new function `get_postgres_envvars`, which tries to look up connection parameters from Postgres environment variables (if they are set) and return them as a list ready to be passed to `db.open`. It should be especially useful when writing tests that need to run on systems with many different database configurations (#2541). 
- New shiny application to show database synchronization status (shiny/dbsync) +- Ability to run with [MERRA-2 meteorology](https://gmao.gsfc.nasa.gov/reanalysis/MERRA-2/) (reanalysis product based on GEOS-5 model) ### Removed From 9bdfd741ce731d7cd4bc36a6c7fc8945d443dbcd Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 29 Oct 2020 10:18:48 -0400 Subject: [PATCH 1568/2289] Fix mimetype misspelling in download.MERRA --- modules/data.atmosphere/R/download.MERRA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index 694a1f8e460..8ae053c2c4e 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -33,7 +33,7 @@ download.MERRA <- function(outfolder, start_date, end_date, results <- data.frame( file = character(nyear), host = "", - mimimetype = "", + mimetype = "", formatname = "", startdate = "", enddate = "", From e82f66167123ebcaff23d154ccc46ca8ac8a2005 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Thu, 29 Oct 2020 14:38:09 +0000 Subject: [PATCH 1569/2289] automated documentation update --- modules/data.atmosphere/man/download.MERRA.Rd | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/download.MERRA.Rd b/modules/data.atmosphere/man/download.MERRA.Rd index 5e30aaa5742..9f550ce0d6e 100644 --- a/modules/data.atmosphere/man/download.MERRA.Rd +++ b/modules/data.atmosphere/man/download.MERRA.Rd @@ -11,7 +11,8 @@ download.MERRA( lat.in, lon.in, overwrite = FALSE, - verbose = FALSE + verbose = FALSE, + ... ) } \arguments{ @@ -31,6 +32,8 @@ but only the year portion is used and the resulting files always contain a full \item{verbose}{logical. Passed on to \code{\link[ncdf4]{ncvar_def}} and \code{\link[ncdf4]{nc_create}} to control printing of debug info} + +\item{...}{Not used -- silently soak up extra arguments from `convert.input`, etc.} } \value{ `data.frame` of meteorology data metadata From 38225497e72369d517ae1203c1a168cd219cc6c0 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 29 Oct 2020 10:59:16 -0400 Subject: [PATCH 1570/2289] Fix partial argument match in download.MERRA --- modules/data.atmosphere/R/download.MERRA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index 8ae053c2c4e..9a80e6a796d 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -54,7 +54,7 @@ download.MERRA <- function(outfolder, start_date, end_date, timeseq <- as.numeric(difftime( seq(y_startdate, y_enddate, "hours"), baseday, - tz = "UTC", unit = "days" + tz = "UTC", units = "days" )) ntime <- length(timeseq) From 28761bd3f73ef15e3c3589600418c2b97508c3ad Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 30 Oct 2020 10:41:49 -0700 Subject: [PATCH 1571/2289] fix Tleaf assumptions in fitA Addresses two issues in #2672 * Checks if Tleaf provided in C; if so, convert to K * If Tleaf not provided, set to 25C and send warning --- modules/photosynthesis/R/fitA.R | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/modules/photosynthesis/R/fitA.R b/modules/photosynthesis/R/fitA.R index 35aee63f879..e1bb6253a76 100644 --- a/modules/photosynthesis/R/fitA.R +++ b/modules/photosynthesis/R/fitA.R @@ -167,9 +167,16 @@ To <- 35 ## Representative value, would benifit from spp calibration! 
## prep data sel <- seq_len(nrow(dat)) #which(dat$spp == s) -if (!any(names(dat) == "Tleaf")) { - dat$Tleaf <- rep(25 + 273.15, nrow(dat)) ## if leaf temperature is absent, assume 25C +if("Tleaf" %in% names(dat)){ + if(max(dat$Tleaf) < 100){ # if Tleaf in C, convert to K + dat$Tleaf <- dat$Tleaf + 273.15 + } else if (!"Tleaf" %in% names(dat)) { + dat$Tleaf <- 25 + 273.15 ## if no Tleaf, assume 25C in Kelvin + warning("No Leaf Temperature provided, setting to 25C\n", + "To change add a column named Tleaf to flux.data data frame") + } } + mydat <- list(an = dat$Photo[sel], pi = dat$Ci[sel], q = dat$PARi[sel], From 8463fc4daa74c32a501426870ecefdb4daee3ec9 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 30 Oct 2020 10:44:30 -0700 Subject: [PATCH 1572/2289] Update Changelog for #2726 --- CHANGELOG.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index db71107693e..97a9fbf450f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -35,7 +35,8 @@ This is a major change: - Do not override meta-analysis settings random-effects = FALSE https://github.com/PecanProject/pecan/pull/2625 - model2netcdf.ED2 no longer detecting which varibles names `-T-` files have based on ED2 version (#2623) - Changed docker-compose.yml to use user & group IDs of the operating system user (#2572) --gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666) +- gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666) +- ensure Tleaf converted to K for temperature corrections in PEcAn.photosynthesis::fitA (#2726) ### Changed From efecc551b269cb182e7dc2048775041f1c51907e Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 30 Oct 2020 10:53:02 -0700 Subject: [PATCH 1573/2289] fix if / else nesting error --- modules/photosynthesis/R/fitA.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/photosynthesis/R/fitA.R b/modules/photosynthesis/R/fitA.R index e1bb6253a76..39e38f70abf 100644 --- a/modules/photosynthesis/R/fitA.R +++ b/modules/photosynthesis/R/fitA.R @@ -170,11 +170,11 @@ sel <- seq_len(nrow(dat)) #which(dat$spp == s) if("Tleaf" %in% names(dat)){ if(max(dat$Tleaf) < 100){ # if Tleaf in C, convert to K dat$Tleaf <- dat$Tleaf + 273.15 - } else if (!"Tleaf" %in% names(dat)) { + } +} else if (!"Tleaf" %in% names(dat)) { dat$Tleaf <- 25 + 273.15 ## if no Tleaf, assume 25C in Kelvin warning("No Leaf Temperature provided, setting to 25C\n", "To change add a column named Tleaf to flux.data data frame") - } } mydat <- list(an = dat$Photo[sel], From fd9bee4c1bacff8f6c19476e10de805d560cef9b Mon Sep 17 00:00:00 2001 From: araiho Date: Fri, 30 Oct 2020 15:43:37 -0400 Subject: [PATCH 1574/2289] taking out where = for nimble version 10 --- modules/assim.sequential/R/Nimble_codes.R | 18 ++++++------------ 1 file changed, 6 insertions(+), 12 deletions(-) diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R index d2fd9848f14..26e24ef5123 100644 --- a/modules/assim.sequential/R/Nimble_codes.R +++ b/modules/assim.sequential/R/Nimble_codes.R @@ -15,8 +15,7 @@ y_star_create <- nimbleFunction( y_star <- X return(y_star) - }, - where = nimble::getLoadingNamespace() + } ) #' @export @@ -29,8 +28,7 @@ alr <- nimbleFunction( y_out <- log(y[1:(length(y) - 1)] / y[length(y)]) return(y_out) - }, - where = nimble::getLoadingNamespace() + } ) #' @export @@ -41,8 +39,7 @@ inv.alr <- nimbleFunction( y = exp(c(alr, 0)) / sum(exp(c(alr, 0))) return(y) - }, - where = 
nimble::getLoadingNamespace() + } ) #' @export @@ -57,8 +54,7 @@ rwtmnorm <- nimbleFunction( Prob <- rmnorm_chol(n = 1, mean, chol(prec), prec_param = TRUE) * wt return(Prob) - }, - where = nimble::getLoadingNamespace() + } ) #' @export @@ -84,8 +80,7 @@ dwtmnorm <- nimbleFunction( } else { return((exp(logProb))) } - }, - where = nimble::getLoadingNamespace() + } ) registerDistributions(list(dwtmnorm = list( @@ -223,8 +218,7 @@ sampler_toggle <- nimbleFunction( methods = list( reset = function() nested_sampler_list[[1]]$reset() - ), - where = nimble::getLoadingNamespace() + ) ) #' @export From 159126994dd99bc3ef4ae18b13065ed99c1f0f5e Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 2 Nov 2020 14:21:56 -0500 Subject: [PATCH 1575/2289] API: Only set model ID, not revision or type A model ID unambiguously determines model name, type, revision, etc., so additional information on this in the XML is redundant. Moreover, the query here was inserting this information _incorrectly_ --- `models.model_name` is technically _not_ the same as `model$type` (though, in practice, they often are). This was causing the ED2 integration tests to fail (because they were looking for a nonexistent model type `ED2.2`). The "correct" approach is for users to identify their target model ID by using API queries and then pass that to the API. Note that this is already how the `rpecanapi::submit.workflow` function works (it takes a `model_id` as an argument, not `model_name`, `revision`, etc.). --- apps/api/R/submit.workflow.R | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 8e1b45d6383..9061904c106 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -48,15 +48,8 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po ) ) - if (!is.null(workflowList$model$id) && - (is.null(workflowList$model$type) || is.null(workflowList$model$revision))) { - res <- dplyr::tbl(dbcon, "models") %>% - select(id, model_name, revision) %>% - filter(id == !!workflowList$model$id) %>% - collect() - - workflowList$model$type <- res$model_name - workflowList$model$revision <- res$revision + if (is.null(workflowList$model$id)) { + stop("Must provide model ID.") } # Fix RabbitMQ details From bb7ac95fb733157302e0d1c1fe6696e5b67a25a7 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 2 Nov 2020 14:56:34 -0500 Subject: [PATCH 1576/2289] API: Query model type & ID to set RabbitMQ params _This_ is why the stuff removed in the previous commit was necessary. It seems like PEcAn should be setting this internally...but this is fine for now. 
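For intuition, the lookup added below reduces to a single join. Here is a self-contained sketch against toy stand-ins for the `models` and `modeltypes` tables (column names follow BETY; the rows are invented for illustration):

library(dplyr)

models     <- tibble(id = c(1, 2), model_name = c("SIPNET", "ED2"),
                     revision = c("136", "2.2"), modeltype_id = c(10, 20))
modeltypes <- tibble(id = c(10, 20), name = c("SIPNET", "ED2"))

# a model id alone is enough to recover the queue name "<type>_<revision>"
model_info <- models %>%
  filter(id == 1) %>%
  inner_join(modeltypes, by = c("modeltype_id" = "id"))

paste0(model_info$name, "_", model_info$revision)
#> [1] "SIPNET_136"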
--- apps/api/R/submit.workflow.R | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 9061904c106..68f7f890cd8 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -52,12 +52,31 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po stop("Must provide model ID.") } + # Get model revision and type for the RabbitMQ queue + model_info <- dplyr::tbl(dbcon, "models") %>% + dplyr::filter(id == !!workflowList$model$id) %>% + dplyr::inner_join(dplyr::tbl(dbcon, "modeltypes"), + by = c("modeltype_id" = "id")) %>% + dplyr::collect() + + if (nrow(model_info) < 1) { + stop("No models found with ID ", format(workflowList$model$id, scientific = FALSE)) + } else if (nrow(model_info) > 1) { + stop("Found multiple (", nrow(model_info), ") matching models for id ", + format(workflowList$model$id, scientific = FALSE), + ". This shouldn't happen! ", + "Check your database for errors.") + } + + model_type <- model_info$name + model_revision <- model_info$revision + # Fix RabbitMQ details hostInfo <- PEcAn.DB::dbHostInfo(dbcon) workflowList$host <- list( rabbitmq = list( uri = Sys.getenv("RABBITMQ_URI", "amqp://guest:guest@localhost/%2F"), - queue = paste0(workflowList$model$type, "_", workflowList$model$revision) + queue = paste0(model_type, "_", model_revision) ) ) workflowList$host$name <- if(hostInfo$hostname == "") "localhost" else hostInfo$hostname From 453e9a3abf7d03ba14239b4eba71fea178ff2b1b Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 3 Nov 2020 09:44:18 -0500 Subject: [PATCH 1577/2289] API: Better error codes in API submit.workflow @robkooper --- apps/api/R/submit.workflow.R | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 68f7f890cd8..2074238d804 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -35,7 +35,7 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* @param dbcon Database connection object. Default is global database pool. #* @return ID & status of the submitted workflow #* @author Tezan Sahu -submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_pool) { +submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_pool, res) { # Set database details workflowList$database <- list( @@ -49,7 +49,8 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po ) if (is.null(workflowList$model$id)) { - stop("Must provide model ID.") + res$status <- 400 + return(list(error = "Must provide model ID.")) } # Get model revision and type for the RabbitMQ queue @@ -60,12 +61,17 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po dplyr::collect() if (nrow(model_info) < 1) { - stop("No models found with ID ", format(workflowList$model$id, scientific = FALSE)) + res$status <- 404 # Not found + msg <- paste0("No models found with ID ", format(workflowList$model$id, scientific = FALSE)) + return(list(error = msg)) } else if (nrow(model_info) > 1) { - stop("Found multiple (", nrow(model_info), ") matching models for id ", - format(workflowList$model$id, scientific = FALSE), - ". This shouldn't happen! 
", - "Check your database for errors.") + res$status <- 400 + msg <- paste0( + "Found multiple (", nrow(model_info), ") matching models for id ", + format(workflowList$model$id, scientific = FALSE), + ". This shouldn't happen! Check your database for errors." + ) + return(list(error = msg)) } model_type <- model_info$name From 35ef735f197b1f802d28572ac98a588062fff03e Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 3 Nov 2020 11:01:55 -0500 Subject: [PATCH 1578/2289] API: Punt submit workflow error handling to parent `submitWorkflow` captures errors that are marked with `status = "Error"`. Use this mechanism for error handling instead (because it's simpler and already implemented). --- apps/api/R/submit.workflow.R | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 2074238d804..2c1e804b779 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -35,7 +35,7 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* @param dbcon Database connection object. Default is global database pool. #* @return ID & status of the submitted workflow #* @author Tezan Sahu -submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_pool, res) { +submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_pool) { # Set database details workflowList$database <- list( @@ -49,8 +49,8 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po ) if (is.null(workflowList$model$id)) { - res$status <- 400 - return(list(error = "Must provide model ID.")) + return(list(status = "Error", + error = "Must provide model ID.")) } # Get model revision and type for the RabbitMQ queue @@ -61,17 +61,15 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po dplyr::collect() if (nrow(model_info) < 1) { - res$status <- 404 # Not found msg <- paste0("No models found with ID ", format(workflowList$model$id, scientific = FALSE)) - return(list(error = msg)) + return(list(status = "Error", error = msg)) } else if (nrow(model_info) > 1) { - res$status <- 400 msg <- paste0( "Found multiple (", nrow(model_info), ") matching models for id ", format(workflowList$model$id, scientific = FALSE), ". This shouldn't happen! Check your database for errors." ) - return(list(error = msg)) + return(list(status = "Error", error = msg)) } model_type <- model_info$name From 98a7ae09d8fb09e7a7ef53acdd4b18184b63b038 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 4 Nov 2020 10:31:52 -0600 Subject: [PATCH 1579/2289] set santiy outlier threshold rather than a priori deciding on 8 standard deviations, allow the user to specify to make things behave better --- .../data.atmosphere/R/debias_met_regression.R | 81 ++++++++++--------- 1 file changed, 41 insertions(+), 40 deletions(-) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index be8847c898a..7ad4ad9cb36 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -32,6 +32,7 @@ ##' without having to do do giant runs at once; if NULL will be numbered 1:n.ens ##' @param force.sanity - (logical) do we force the data to meet sanity checks? ##' @param sanity.tries - how many time should we try to predict a reasonable value before giving up? 
We don't want to end up in an infinite loop +##' @param sanity.sd - how many standard deviations from the mean should be used to determine sane outliers (default 8) ##' @param lat.in - latitude of site ##' @param lon.in - longitude of site ##' @param save.diagnostics - logical; save diagnostic plots of output? @@ -62,7 +63,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NULL, CRUNCEP=FALSE, pair.anoms = TRUE, pair.ens = FALSE, uncert.prop="mean", resids = FALSE, seed=Sys.Date(), - outfolder, yrs.save=NULL, ens.name, ens.mems=NULL, force.sanity=TRUE, sanity.tries=25, lat.in, lon.in, + outfolder, yrs.save=NULL, ens.name, ens.mems=NULL, force.sanity=TRUE, sanity.tries=25, sanity.sd=8, lat.in, lon.in, save.diagnostics=TRUE, path.diagnostics=NULL, parallel = FALSE, n.cores = NULL, overwrite = TRUE, verbose = FALSE) { library(MASS) @@ -682,8 +683,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # max air temp = 70 C; hottest temperature from sattellite; very ridiculous # min air temp = -95 C; colder than coldest natural temperature recorded in Antarctica cols.redo <- which(apply(sim1, 2, function(x) min(x) < 273.15-95 | max(x) > 273.15+70 | - min(x) < mean(met.train$X) - 8*sd(met.train$X) | - max(x) > mean(met.train$X) + 8*sd(met.train$X) + min(x) < mean(met.train$X) - sanity.sd*sd(met.train$X) | + max(x) > mean(met.train$X) + sanity.sd*sd(met.train$X) )) } #"specific_humidity", @@ -692,8 +693,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # Also, the minimum humidity can't be 0 so lets just make it extremely dry; lets set this for 1 g/Mg cols.redo <- which(apply(sim1, 2, function(x) min(x^2) < 1e-6 | max(x^2) > 40e-3 | - min(x^2) < mean(met.train$X^2) - 8*sd(met.train$X^2) | - max(x^2) > mean(met.train$X^2) + 8*sd(met.train$X^2) + min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | + max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) )) } #"surface_downwelling_shortwave_flux_in_air", @@ -702,8 +703,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # Lets round 1360 and divide that by 2 (because it should be a daily average) and conservatively assume albedo of 20% (average value is more like 30) # Source http://eesc.columbia.edu/courses/ees/climate/lectures/radiation/ cols.redo <- which(apply(sim1, 2, function(x) max(x^2) > 1360/2*0.8 | - min(x^2) < mean(met.train$X^2) - 8*sd(met.train$X^2) | - max(x^2) > mean(met.train$X^2) + 8*sd(met.train$X^2) + min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | + max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) )) } if(v == "air_pressure"){ @@ -711,8 +712,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # - Lets round up to 1100 hPA # Also according to Wikipedia, the lowest non-tornadic pressure ever measured was 870 hPA cols.redo <- which(apply(sim1, 2, function(x) min(x) < 870*100 | max(x) > 1100*100 | - min(x) < mean(met.train$X) - 8*sd(met.train$X) | - max(x) > mean(met.train$X) + 8*sd(met.train$X) + min(x) < mean(met.train$X) - sanity.sd*sd(met.train$X) | + max(x) > mean(met.train$X) + sanity.sd*sd(met.train$X) )) } if(v == "surface_downwelling_longwave_flux_in_air"){ @@ -721,16 +722,16 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # ED2 sanity checks boudn longwave at 40 & 600 cols.redo <- which(apply(sim1, 2, function(x) min(x^2) < 40 | max(x^2) > 600 | - min(x^2) < mean(met.train$X^2) - 
8*sd(met.train$X^2) | - max(x^2) > mean(met.train$X^2) + 8*sd(met.train$X^2) + min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | + max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) )) } if(v == "wind_speed"){ # According to wikipedia, the hgihest wind speed ever recorded is a gust of 113 m/s; the maximum 5-mind wind speed is 49 m/s cols.redo <- which(apply(sim1, 2, function(x) max(x^2) > 50/2 | - min(x^2) < mean(met.train$X^2) - 8*sd(met.train$X^2) | - max(x^2) > mean(met.train$X^2) + 8*sd(met.train$X^2) + min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | + max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) )) } if(v == "precipitation_flux"){ @@ -738,8 +739,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # https://www.wunderground.com/blog/weatherhistorian/what-is-the-most-rain-to-ever-fall-in-one-minute-or-one-hour.html # 16/2 = round number; x25.4 = inches to mm; /(60*60) = hr to sec cols.redo <- which(apply(sim1, 2, function(x) max(x) > 16/2*25.4/(60*60) | - min(x) < min(met.train$X) - 8*sd(met.train$X) | - max(x) > max(met.train$X) + 8*sd(met.train$X) + min(x) < min(met.train$X) - sanity.sd*sd(met.train$X) | + max(x) > max(met.train$X) + sanity.sd*sd(met.train$X) )) } n.new = length(cols.redo) @@ -762,12 +763,12 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # for known problem variables, lets force sanity as a last resort if(v %in% c("air_temperature", "air_temperature_maximum", "air_temperature_minimum")){ warning(paste("Forcing Sanity:", v)) - if(min(sim1) < max(273.15-95, mean(met.train$X) - 8*sd(met.train$X))) { - qtrim <- max(273.15-95, mean(met.train$X) - 8*sd(met.train$X)) + 1e-6 + if(min(sim1) < max(273.15-95, mean(met.train$X) - sanity.sd*sd(met.train$X))) { + qtrim <- max(273.15-95, mean(met.train$X) - sanity.sd*sd(met.train$X)) + 1e-6 sim1[sim1 < qtrim] <- qtrim } if(max(sim1) > min(273.15+70, mean(met.train$X) + sd(met.train$X^2))) { - qtrim <- min(273.15+70, mean(met.train$X) + 8*sd(met.train$X)) - 1e-6 + qtrim <- min(273.15+70, mean(met.train$X) + sanity.sd*sd(met.train$X)) - 1e-6 sim1[sim1 > qtrim] <- qtrim } } else if(v == "surface_downwelling_shortwave_flux_in_air"){ @@ -775,16 +776,16 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # # Lets round 1360 and divide that by 2 (because it should be a daily average) and conservatively assume albedo of 20% (average value is more like 30) # # Source http://eesc.columbia.edu/courses/ees/climate/lectures/radiation/ # cols.redo <- which(apply(sim1, 2, function(x) max(x^2) > 1360/2*0.8 | - # min(x) < mean(met.train$X) - 8*sd(met.train$X) | - # max(x) > mean(met.train$X) + 8*sd(met.train$X) + # min(x) < mean(met.train$X) - sanity.sd*sd(met.train$X) | + # max(x) > mean(met.train$X) + sanity.sd*sd(met.train$X) # )) warning(paste("Forcing Sanity:", v)) - if(min(sim1^2) < max(mean(met.train$X^2) - 8*sd(met.train$X^2))) { - qtrim <- max(mean(met.train$X^2) - 8*sd(met.train$X^2)) + if(min(sim1^2) < max(mean(met.train$X^2) - sanity.sd*sd(met.train$X^2))) { + qtrim <- max(mean(met.train$X^2) - sanity.sd*sd(met.train$X^2)) sim1[sim1^2 < qtrim] <- sqrt(qtrim) } - if(max(sim1^2) > min(1360/2*0.8, mean(met.train$X^2) + 8*sd(met.train$X^2))) { - qtrim <- min(1360/2*0.8, mean(met.train$X^2) + 8*sd(met.train$X^2)) + if(max(sim1^2) > min(1360/2*0.8, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { + qtrim <- min(1360/2*0.8, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) sim1[sim1^2 
> qtrim] <- sqrt(qtrim) } @@ -793,43 +794,43 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # ED2 sanity checks boudn longwave at 40 & 600 warning(paste("Forcing Sanity:", v)) - if(min(sim1^2) < max(40, mean(met.train$X^2) - 8*sd(met.train$X^2))) { - qtrim <- max(40, mean(met.train$X^2) - 8*sd(met.train$X^2)) + if(min(sim1^2) < max(40, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2))) { + qtrim <- max(40, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2)) sim1[sim1^2 < qtrim] <- sqrt(qtrim) } - if(max(sim1^2) > min(600, mean(met.train$X^2) + 8*sd(met.train$X^2))) { - qtrim <- min(600, mean(met.train$X^2) + 8*sd(met.train$X^2)) + if(max(sim1^2) > min(600, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { + qtrim <- min(600, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) sim1[sim1^2 > qtrim] <- sqrt(qtrim) } } else if(v=="specific_humidity"){ warning(paste("Forcing Sanity:", v)) # I'm having a hell of a time trying to get SH to fit sanity bounds, so lets brute-force fix outliers - if(min(sim1^2) < max(1e-6, mean(met.train$X^2) - 8*sd(met.train$X^2))) { - qtrim <- max(1e-6, mean(met.train$X^2) - 8*sd(met.train$X^2)) + if(min(sim1^2) < max(1e-6, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2))) { + qtrim <- max(1e-6, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2)) sim1[sim1^2 < qtrim] <- sqrt(qtrim) } - if(max(sim1^2) > min(40e-3, mean(met.train$X^2) + 8*sd(met.train$X^2))) { - qtrim <- min(40e-3, mean(met.train$X^2) + 8*sd(met.train$X^2)) + if(max(sim1^2) > min(40e-3, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { + qtrim <- min(40e-3, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) sim1[sim1^2 > qtrim] <- sqrt(qtrim) } } else if(v=="air_pressure"){ warning(paste("Forcing Sanity:", v)) - if(min(sim1)< max(870*100, mean(met.train$X) - 8*sd(met.train$X))){ - qtrim <- min(870*100, mean(met.train$X) - 8*sd(met.train$X)) + if(min(sim1)< max(870*100, mean(met.train$X) - sanity.sd*sd(met.train$X))){ + qtrim <- min(870*100, mean(met.train$X) - sanity.sd*sd(met.train$X)) sim1[sim1 < qtrim] <- qtrim } - if(max(sim1) < min(1100*100, mean(met.train$X) + 8*sd(met.train$X))){ - qtrim <- min(1100*100, mean(met.train$X) + 8*sd(met.train$X)) + if(max(sim1) < min(1100*100, mean(met.train$X) + sanity.sd*sd(met.train$X))){ + qtrim <- min(1100*100, mean(met.train$X) + sanity.sd*sd(met.train$X)) sim1[sim1 > qtrim] <- qtrim } } else if(v=="wind_speed"){ warning(paste("Forcing Sanity:", v)) - if(min(sim1)< max(0, mean(met.train$X) - 8*sd(met.train$X))){ - qtrim <- min(0, mean(met.train$X) - 8*sd(met.train$X)) + if(min(sim1)< max(0, mean(met.train$X) - sanity.sd*sd(met.train$X))){ + qtrim <- min(0, mean(met.train$X) - sanity.sd*sd(met.train$X)) sim1[sim1 < qtrim] <- qtrim } - if(max(sim1) < min(sqrt(50/2), mean(met.train$X) + 8*sd(met.train$X))){ - qtrim <- min(sqrt(50/2), mean(met.train$X) + 8*sd(met.train$X)) + if(max(sim1) < min(sqrt(50/2), mean(met.train$X) + sanity.sd*sd(met.train$X))){ + qtrim <- min(sqrt(50/2), mean(met.train$X) + sanity.sd*sd(met.train$X)) sim1[sim1 > qtrim] <- qtrim } } else { From 726f042beb8febef46767941addc94785391b210 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 4 Nov 2020 10:33:37 -0600 Subject: [PATCH 1580/2289] 2 Debias bug fixes 1: line 554 should have been == otherwise was having the opposite of intended behavior and no anomaly trend uncertainty 2: fix application of anomaly uncertainty so that it works with >1 ensemble members --- modules/data.atmosphere/R/debias_met_regression.R | 5 ++--- 1 file changed, 
2 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R
index 7ad4ca9cb36..5f25e2319c2 100644
--- a/modules/data.atmosphere/R/debias_met_regression.R
+++ b/modules/data.atmosphere/R/debias_met_regression.R
@@ -551,10 +551,9 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
     # while(nrow(Rbeta.anom)<1 & try.now<=ntries){
     # Generate a random distribution of betas using the covariance matrix
     # I think the anomalies might be problematic, so lets get way more betas than we need and trim the distribution
-    if(n.ens>1){
+    if(n.ens==1){
       Rbeta.anom <- matrix(coef(mod.anom), ncol=length(coef(mod.anom)))
     } else {
-
       Rbeta.anom <- matrix(MASS::mvrnorm(n=n.new, coef(mod.anom), vcov(mod.anom)), ncol=length(coef(mod.anom)))
     }
     dimnames(Rbeta.anom)[[2]] <- names(coef(mod.anom))
@@ -646,7 +645,7 @@
       # Note that for a single-member ensemble, this just undoes itself
       anom.detrend <- met.src[met.src$ind==ind,"anom.raw"] - predict(mod.anom)
 
-      sim1b[,cols.redo] <- anom.detrend + sim1b[,cols.redo] # Get the range around that medium-frequency trend
+      sim1b[,cols.redo] <- apply(sim1b[,cols.redo], 2, FUN=function(x){x+anom.detrend}) # Get the range around that medium-frequency trend
     }

From 3b5bf0893c61d0cfe042c4247af46c139686a5f8 Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Wed, 4 Nov 2020 10:49:25 -0600
Subject: [PATCH 1581/2289] potential speedup: only generate 1 set of values

Rather than generate an arbitrary number of values to try and pick from, we
can just generate 1 random set of values at a time to propagate uncertainty.
If you wanted to use the "mean" option to not really generate more
uncertainty, then just use the mean model betas in the same way we do with a
single ensemble member.

This should reduce memory needs and possibly computational time. However,
there are now a lot of artifacts in the code that could probably be cleaned
up. I don't *think* we can just get rid of the ensemble loop since we train
the bias-correction on each ensemble member that's read in, but I'm not
positive about that.
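Concretely, the two propagation modes now reduce to the following (toy mean vector and covariance standing in for `coef(mod.bias)` and `vcov(mod.bias)`):

library(MASS)

mu    <- c(0.2, 1.1)                           # stand-in for coef(mod.bias)
sigma <- matrix(c(0.04, 0.01,
                  0.01, 0.09), nrow = 2)       # stand-in for vcov(mod.bias)

uncert.prop <- "random"
Rbeta <- if (uncert.prop == "mean") {
  # every child member reuses the point estimates -- no uncertainty added here
  matrix(mu, ncol = length(mu))
} else {
  # one fresh multivariate-normal draw per member, rather than a big batch
  # of candidate draws to sample from afterwards
  matrix(MASS::mvrnorm(n = 1, mu, sigma), ncol = length(mu))
}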
---
 .../data.atmosphere/R/debias_met_regression.R | 30 ++++++++++++-------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R
index 5f25e2319c2..8b3d0639f89 100644
--- a/modules/data.atmosphere/R/debias_met_regression.R
+++ b/modules/data.atmosphere/R/debias_met_regression.R
@@ -22,7 +22,7 @@
 ##' @param pair.anoms - logical stating whether anomalies from the same year should be matched or not
 ##' @param pair.ens - logical stating whether ensembles from train and source data need to be paired together
 ##'                   (for uncertainty propogation)
-##' @param uncert.prop - method for error propogation if only 1 ensemble member; options=c(random, mean); *Not Implemented yet
+##' @param uncert.prop - method for error propogation for child ensemble members 1 ensemble member; options=c(random, mean); randomly strongly encouraged if n.ens>1
 ##' @param resids - logical stating whether to pass on residual data or not *Not implemented yet
 ##' @param seed - specify seed so that random draws can be reproduced
 ##' @param outfolder - directory where the data should go
@@ -78,6 +78,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU
     n.ens=1
   }
   if(!uncert.prop %in% c("mean", "random")) stop("unspecified uncertainty propogation method. Must be 'random' or 'mean' ")
+  if(uncert.prop=="mean" & n.ens>1) warning(paste0("Warning! Use of mean propagation with n.ens>1 not encouraged as all results will be the same and you will not be adding uncertainty at this stage."))
 
   # Variables need to be done in a specific order
   vars.all <- c("air_temperature", "air_temperature_maximum", "air_temperature_minimum", "specific_humidity",
                 "surface_downwelling_shortwave_flux_in_air", "air_pressure", "surface_downwelling_longwave_flux_in_air", "wind_speed", "precipitation_flux")
@@ -517,8 +518,8 @@
       if(v == "precipitation_flux") coef.ann <- coef(mod.ann)
 
       # Setting up a case where if sanity checks fail, we pull more ensemble members
-      n.new <- round(n.ens/2)+1
-      cols.redo <- 1:n.new
+      n.new <- 1
+      cols.redo <- n.new
       sane.attempt=0
 
       while(n.new>0 & sane.attempt <= sanity.tries){
@@ -529,7 +530,7 @@
         # Generate a random distribution of betas using the covariance matrix
         # I think the anomalies might be problematic, so lets get way more betas than we need and trim the distribution
         # set.seed=42
-        if(n.ens==1){
+        if(n.ens==1 | uncert.prop=="mean"){
          Rbeta <- matrix(coef(mod.bias), ncol=length(coef(mod.bias)))
         } else {
          Rbeta <- matrix(MASS::mvrnorm(n=n.new, coef(mod.bias), vcov(mod.bias)), ncol=length(coef(mod.bias)))
@@ -645,7 +646,13 @@
       # Note that for a single-member ensemble, this just undoes itself
       anom.detrend <- met.src[met.src$ind==ind,"anom.raw"] - predict(mod.anom)
 
-      sim1b[,cols.redo] <- apply(sim1b[,cols.redo], 2, FUN=function(x){x+anom.detrend}) # Get the range around that medium-frequency trend
+      # NOTE: This section can probably be removed and simplified since it should always be a 1-column array now
+      if(length(cols.redo)>1){
+        sim1b[,cols.redo] <- apply(sim1b[,cols.redo], 2, FUN=function(x){x+anom.detrend}) # Get the range around that medium-frequency trend
+      } else {
+        sim1b[,cols.redo] <- as.matrix(sim1b[,cols.redo] + anom.detrend)
+      }
+
     }
 
@@ -972,12 +979,13 @@
       # Randomly pick one from this meta-ensemble to save
       # this *should* be propogating uncertainty because we have the ind effects in all of the models and we're randomly adding as we go
-      if(uncert.prop=="random"){
-        sim.final[,ens] <- sim1[,sample(1:ncol(sim1),1)]
-      }
-      if(uncert.prop=="mean"){
-        sim.final[,ens] <- apply(sim1, 1, mean)
-      }
+      sim.final[,ens] <- sim1
+      # if(uncert.prop=="random"){
+      #   sim.final[,ens] <- sim1[,sample(1:ncol(sim1),1)]
+      # }
+      # if(uncert.prop=="mean"){
+      #   sim.final[,ens] <- apply(sim1, 1, mean)
+      # }
 
       utils::setTxtProgressBar(pb, pb.ind)
       pb.ind <- pb.ind+1

From bc55144d87fbb286fd0fc72be9b9d230b8594f2b Mon Sep 17 00:00:00 2001
From: Christy Rollinson
Date: Wed, 4 Nov 2020 11:10:31 -0600
Subject: [PATCH 1582/2289] fix weirdness for stacking

I'm not sure why I'm having trouble, but this seems to fix it, so we're
going with the inelegant hack in the interest of getting sh*t done.
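A plausible minimal reproduction of that trouble (toy data; the real columns come from `data.frame(dat.out[[v]])`): assigning a one-column matrix into a data frame leaves a matrix-valued column behind, and `utils::stack()` keeps only plain vector columns, erroring when none remain. Coercing with `as.vector()`, as the diff below does, sidesteps it:

df <- data.frame(X1 = numeric(3))
m  <- matrix(1:3, ncol = 1)    # e.g. what a one-member prediction can return

df$X1 <- m
class(df$X1)                   # "matrix" "array" -- not a plain vector
# utils::stack(df)             # would error: no vector columns were selected

df$X1 <- as.vector(m)          # the coercion used in the fix
utils::stack(df)               # now reshapes cleanly into values/ind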
+979,13 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # Randomly pick one from this meta-ensemble to save # this *should* be propogating uncertainty because we have the ind effects in all of the models and we're randomly adding as we go - if(uncert.prop=="random"){ - sim.final[,ens] <- sim1[,sample(1:ncol(sim1),1)] - } - if(uncert.prop=="mean"){ - sim.final[,ens] <- apply(sim1, 1, mean) - } + sim.final[,ens] <- sim1 + # if(uncert.prop=="random"){ + # sim.final[,ens] <- sim1[,sample(1:ncol(sim1),1)] + # } + # if(uncert.prop=="mean"){ + # sim.final[,ens] <- apply(sim1, 1, mean) + # } utils::setTxtProgressBar(pb, pb.ind) pb.ind <- pb.ind+1 From bc55144d87fbb286fd0fc72be9b9d230b8594f2b Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 4 Nov 2020 11:10:31 -0600 Subject: [PATCH 1582/2289] fix weirdness for stacking I'm not sure why i'm having trouble, but this seems to fix it, so we're going with the ineligant hack in the interest of getting sh*t done. --- modules/data.atmosphere/R/debias_met_regression.R | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index 8b3d0639f89..05cb5655589 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -1040,7 +1040,14 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU grDevices::dev.off() # Plotting a few random series to get an idea for what an individual pattern looks liek - stack.sims <- utils::stack(data.frame(dat.out[[v]][,sample(1:n.ens, min(3, n.ens))])) + col.samp <- paste0("X", sample(1:n.ens, min(3, n.ens))) + + sim.sub <- data.frame(dat.out[[v]])[,col.samp] + for(i in 1:ncol(sim.sub)){ + sim.sub[,i] <- as.vector(sim.sub[,i]) + } + # names(test) <- col.samp + stack.sims <- utils::stack(sim.sub) stack.sims[,c("Year", "DOY", "Date")] <- dat.pred[,c("Year", "DOY", "Date")] grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "day2.png", sep="_")), height=6, width=6, units="in", res=220) From a26827b41173539367950b277a6eefac9a90cd21 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 4 Nov 2020 11:19:27 -0600 Subject: [PATCH 1583/2289] minor fix that seems to get things to work :shrug: --- modules/data.atmosphere/R/debias_met_regression.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index 05cb5655589..d2f81016531 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -979,7 +979,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # Randomly pick one from this meta-ensemble to save # this *should* be propogating uncertainty because we have the ind effects in all of the models and we're randomly adding as we go - sim.final[,ens] <- sim1 + sim.final[,ens] <- as.vector(sim1) # if(uncert.prop=="random"){ # sim.final[,ens] <- sim1[,sample(1:ncol(sim1),1)] # } From c3c0b453812bc338fa0f7bb6831817ed3cb5f24f Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 4 Nov 2020 12:27:44 -0600 Subject: [PATCH 1584/2289] update documentation --- modules/data.atmosphere/man/debias.met.regression.Rd | 5 ++++- modules/data.atmosphere/man/extract.local.CMIP5.Rd | 4 ++-- modules/data.atmosphere/man/lm_ensemble_sims.Rd | 4 ++-- 3 files changed, 8 insertions(+), 5 
deletions(-) diff --git a/modules/data.atmosphere/man/debias.met.regression.Rd b/modules/data.atmosphere/man/debias.met.regression.Rd index c815dd1a04e..d81d57bb541 100644 --- a/modules/data.atmosphere/man/debias.met.regression.Rd +++ b/modules/data.atmosphere/man/debias.met.regression.Rd @@ -21,6 +21,7 @@ debias.met.regression( ens.mems = NULL, force.sanity = TRUE, sanity.tries = 25, + sanity.sd = 8, lat.in, lon.in, save.diagnostics = TRUE, @@ -48,7 +49,7 @@ met variables that have been naively gapfilled for certain time periods} \item{pair.ens}{- logical stating whether ensembles from train and source data need to be paired together (for uncertainty propogation)} -\item{uncert.prop}{- method for error propogation if only 1 ensemble member; options=c(random, mean); *Not Implemented yet} +\item{uncert.prop}{- method for error propogation for child ensemble members 1 ensemble member; options=c(random, mean); randomly strongly encouraged if n.ens>1} \item{resids}{- logical stating whether to pass on residual data or not *Not implemented yet} @@ -67,6 +68,8 @@ without having to do do giant runs at once; if NULL will be numbered 1:n.ens} \item{sanity.tries}{- how many time should we try to predict a reasonable value before giving up? We don't want to end up in an infinite loop} +\item{sanity.sd}{- how many standard deviations from the mean should be used to determine sane outliers (default 8)} + \item{lat.in}{- latitude of site} \item{lon.in}{- longitude of site} diff --git a/modules/data.atmosphere/man/extract.local.CMIP5.Rd b/modules/data.atmosphere/man/extract.local.CMIP5.Rd index fec601e2925..f3d96381bd5 100644 --- a/modules/data.atmosphere/man/extract.local.CMIP5.Rd +++ b/modules/data.atmosphere/man/extract.local.CMIP5.Rd @@ -42,7 +42,7 @@ extract.local.CMIP5( \item{ensemble_member}{which CMIP5 experiment ensemble member} \item{date.origin}{(optional) specify the date of origin for timestamps in the files being read. -If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and +If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and 850 for p1000 simulations (plus MPI-ESM-P historical). Format: YYYY-MM-DD} \item{adjust.pr}{- adjustment factor fore preciptiation when the extracted values seem off} @@ -57,7 +57,7 @@ If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and This function extracts CMIP5 data from grids that have been downloaded and stored locally. Files are saved as a netCDF file in CF conventions at *DAILY* resolution. Note: At this point in time, variables that are only available at a native monthly resolution will be repeated to - give a pseudo-daily record (and can get dealt with in the downscaling workflow). These files + give a pseudo-daily record (and can get dealt with in the downscaling workflow). These files are ready to be used in the general PEcAn workflow or fed into the downscaling workflow. 
} \author{ diff --git a/modules/data.atmosphere/man/lm_ensemble_sims.Rd b/modules/data.atmosphere/man/lm_ensemble_sims.Rd index ec838820813..7743ba39677 100644 --- a/modules/data.atmosphere/man/lm_ensemble_sims.Rd +++ b/modules/data.atmosphere/man/lm_ensemble_sims.Rd @@ -31,10 +31,10 @@ lm_ensemble_sims( \item{lags.init}{- a data frame of initialization parameters to match the data in dat.mod} -\item{dat.train}{- the training data used to fit the model; needed for night/day in +\item{dat.train}{- the training data used to fit the model; needed for night/day in surface_downwelling_shortwave_flux_in_air} -\item{precip.distribution}{- a list with 2 sub-lists containing the number of observations with precip in the training data per day & +\item{precip.distribution}{- a list with 2 sub-lists containing the number of observations with precip in the training data per day & the hour of max rain in the training data. This will be used to help solve the "constant drizzle" problem} \item{force.sanity}{- (logical) do we force the data to meet sanity checks?} From 509209c5f349a88b4062e84df668e1c1a279b376 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 4 Nov 2020 13:20:29 -0600 Subject: [PATCH 1585/2289] Fix force sanity routine since we're no longer generating pseudo-sub-ensembles, we can't do the duplicating of other columns (which was a hack anyways), so just go ahead and force the values to work --- modules/data.atmosphere/R/debias_met_regression.R | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index d2f81016531..e97f9710799 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -760,12 +760,12 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU if(force.sanity & n.new>0){ - # If we're still struggling, but we have at least some workable columns, lets just duplicate those: - if(n.new<(round(n.ens/2)+1)){ - cols.safe <- 1:ncol(sim1) - cols.safe <- cols.safe[!(cols.safe %in% cols.redo)] - sim1[,cols.redo] <- sim1[,sample(cols.safe, n.new, replace=T)] - } else { + # # If we're still struggling, but we have at least some workable columns, lets just duplicate those: + # if(n.new<(round(n.ens/2)+1)){ + # cols.safe <- 1:ncol(sim1) + # cols.safe <- cols.safe[!(cols.safe %in% cols.redo)] + # sim1[,cols.redo] <- sim1[,sample(cols.safe, n.new, replace=T)] + # } else { # for known problem variables, lets force sanity as a last resort if(v %in% c("air_temperature", "air_temperature_maximum", "air_temperature_minimum")){ warning(paste("Forcing Sanity:", v)) @@ -843,7 +843,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # If this is a new problem variable, lets stop and look at it stop(paste("Unable to produce a sane prediction:", v, "- ens", ens, "; problem child =", paste(cols.redo, collapse=" "))) } - } + # } # End if else } # End force sanity From e7afacbc0ffd682b7a4b91f3584fc29289730124 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 4 Nov 2020 21:41:06 +0000 Subject: [PATCH 1586/2289] automated documentation update --- modules/data.atmosphere/man/extract.local.CMIP5.Rd | 4 ++-- modules/data.atmosphere/man/lm_ensemble_sims.Rd | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/man/extract.local.CMIP5.Rd b/modules/data.atmosphere/man/extract.local.CMIP5.Rd index 
f3d96381bd5..fec601e2925 100644 --- a/modules/data.atmosphere/man/extract.local.CMIP5.Rd +++ b/modules/data.atmosphere/man/extract.local.CMIP5.Rd @@ -42,7 +42,7 @@ extract.local.CMIP5( \item{ensemble_member}{which CMIP5 experiment ensemble member} \item{date.origin}{(optional) specify the date of origin for timestamps in the files being read. -If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and +If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and 850 for p1000 simulations (plus MPI-ESM-P historical). Format: YYYY-MM-DD} \item{adjust.pr}{- adjustment factor fore preciptiation when the extracted values seem off} @@ -57,7 +57,7 @@ If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and This function extracts CMIP5 data from grids that have been downloaded and stored locally. Files are saved as a netCDF file in CF conventions at *DAILY* resolution. Note: At this point in time, variables that are only available at a native monthly resolution will be repeated to - give a pseudo-daily record (and can get dealt with in the downscaling workflow). These files + give a pseudo-daily record (and can get dealt with in the downscaling workflow). These files are ready to be used in the general PEcAn workflow or fed into the downscaling workflow. } \author{ diff --git a/modules/data.atmosphere/man/lm_ensemble_sims.Rd b/modules/data.atmosphere/man/lm_ensemble_sims.Rd index 7743ba39677..ec838820813 100644 --- a/modules/data.atmosphere/man/lm_ensemble_sims.Rd +++ b/modules/data.atmosphere/man/lm_ensemble_sims.Rd @@ -31,10 +31,10 @@ lm_ensemble_sims( \item{lags.init}{- a data frame of initialization parameters to match the data in dat.mod} -\item{dat.train}{- the training data used to fit the model; needed for night/day in +\item{dat.train}{- the training data used to fit the model; needed for night/day in surface_downwelling_shortwave_flux_in_air} -\item{precip.distribution}{- a list with 2 sub-lists containing the number of observations with precip in the training data per day & +\item{precip.distribution}{- a list with 2 sub-lists containing the number of observations with precip in the training data per day & the hour of max rain in the training data. This will be used to help solve the "constant drizzle" problem} \item{force.sanity}{- (logical) do we force the data to meet sanity checks?} From c486da06e4a0815c5a19fe981ac9dd089c746ee4 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 5 Nov 2020 08:32:14 -0500 Subject: [PATCH 1587/2289] API: Fix missing `dbcon --> global_db_pool` --- apps/api/R/general.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/apps/api/R/general.R b/apps/api/R/general.R index cacfa61fcee..fca4d0f8f72 100644 --- a/apps/api/R/general.R +++ b/apps/api/R/general.R @@ -10,7 +10,7 @@ ping <- function(req){ #* Function to get the status & basic information about the Database Host #* @return Details about the database host #* @author Tezan Sahu -status <- function() { +status <- function(dbcon = global_db_pool) { ## helper function to obtain environment variables get_env_var = function (item, default = "unknown") { From 610870cf9f446429eefae3bb426fa0f7084ecfca Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Thu, 5 Nov 2020 09:16:16 -0500 Subject: [PATCH 1588/2289] API: Don't close connections in model funs They are opened externally to the function. 
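The rule being enforced is connection ownership: whoever opens the connection closes it, so a handler that merely borrows the global pool must not call `db.close()` on it. A generic sketch of the pattern (not the actual handler code):

library(dplyr)

# borrower: query, collect, return -- and critically, no db.close() on the
# way out, because the pool has to outlive this one request
get_model_info <- function(model_id, dbcon) {
  tbl(dbcon, "models") %>%
    filter(id == !!model_id) %>%
    collect()
}

# owner: opened once at startup and closed once at shutdown, e.g.
#   global_db_pool <- PEcAn.DB::db.open(settings$database$bety)
#   ... serve requests ...
#   PEcAn.DB::db.close(global_db_pool)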
--- apps/api/R/models.R | 2 -- 1 file changed, 2 deletions(-) diff --git a/apps/api/R/models.R b/apps/api/R/models.R index 82ca656aed1..22ddc2275d4 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -19,7 +19,6 @@ getModel <- function(model_id, res, dbcon = global_db_pool){ qry_res <- Model %>% collect() if (nrow(qry_res) == 0) { - PEcAn.DB::db.close(dbcon) res$status <- 404 return(list(error="Model not found")) } @@ -34,7 +33,6 @@ getModel <- function(model_id, res, dbcon = global_db_pool){ filter(modeltype_id == bit64::as.integer64(qry_res$modeltype_id)) %>% select(input=tag, required) %>% collect() response$inputs <- jsonlite::fromJSON(gsub('(\")', '"', jsonlite::toJSON(inputs_req))) - PEcAn.DB::db.close(dbcon) return(response) } } From 3e21ed35cd072a04f4f617df115a4fe550148a54 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Thu, 5 Nov 2020 14:47:39 -0600 Subject: [PATCH 1589/2289] bug fix. there was a language change along the way --- modules/data.atmosphere/R/tdm_lm_ensemble_sims.R | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R index e95e8935483..2be747f82ec 100644 --- a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R +++ b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R @@ -283,7 +283,6 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. if (v == "precipitation_flux") { # if(n.ens == 1) next - cols.check <- ifelse(n.ens==1, 1, cols.check) if (max(dat.pred[,cols.check]) > 0) { tmp <- 1:nrow(dat.pred) # A dummy vector of the for (j in cols.check) { From 90275196e2d06aeaeaecc54f67cbd6659615293a Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Thu, 5 Nov 2020 14:47:57 -0600 Subject: [PATCH 1590/2289] user-specified sd threshold for sanity --- .../data.atmosphere/R/tdm_lm_ensemble_sims.R | 347 +++++++++--------- 1 file changed, 175 insertions(+), 172 deletions(-) diff --git a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R index 2be747f82ec..80a38f069ef 100644 --- a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R +++ b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R @@ -1,6 +1,6 @@ ##' Linear Regression Ensemble Simulation -##' Met downscaling function that predicts ensembles of downscaled meteorology -# ----------------------------------- +##' Met downscaling function that predicts ensembles of downscaled meteorology +# ----------------------------------- # Description # ----------------------------------- ##' @title lm_ensemble_sims @@ -10,22 +10,23 @@ ##' function of the tdm workflow titled predict_subdaily_met(). It uses a linear ##' regression approach by generating the hourly values from the coarse data of ##' the file the user selects to downscale based on the hourly models and betas -##' generated by gen.subdaily.models(). -# ----------------------------------- +##' generated by gen.subdaily.models(). +# ----------------------------------- # Parameters # ----------------------------------- ##' @param dat.mod - dataframe to be predicted at the time step of the training data ##' @param n.ens - number of hourly ensemble members to generate ##' @param path.model - path to where the training model & betas is stored ##' @param direction.filter - Whether the model will be filtered backward or forward in time. 
options = c("backward", "forward") -##' (PalEON will go backward, anybody interested in the future will go forward) +##' (PalEON will go backward, anybody interested in the future will go forward) ##' @param lags.init - a data frame of initialization parameters to match the data in dat.mod -##' @param dat.train - the training data used to fit the model; needed for night/day in +##' @param dat.train - the training data used to fit the model; needed for night/day in ##' surface_downwelling_shortwave_flux_in_air -##' @param precip.distribution - a list with 2 sub-lists containing the number of observations with precip in the training data per day & +##' @param precip.distribution - a list with 2 sub-lists containing the number of observations with precip in the training data per day & ##' the hour of max rain in the training data. This will be used to help solve the "constant drizzle" problem -##' @param force.sanity - (logical) do we force the data to meet sanity checks? +##' @param force.sanity - (logical) do we force the data to meet sanity checks? ##' @param sanity.tries - how many time should we try to predict a reasonable value before giving up? We don't want to end up in an infinite loop +##' @param sanity.sd - how many standard deviations from the mean should be used to determine sane outliers (default 6) ##' @param seed - (optional) set the seed manually to allow reproducible results ##' @param print.progress - if TRUE will print progress bar ##' @export @@ -34,16 +35,17 @@ # Begin Function #---------------------------------------------------------------------- -lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.list = NULL, - lags.init = NULL, dat.train, precip.distribution, force.sanity=TRUE, sanity.tries=25, +lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.list = NULL, + lags.init = NULL, dat.train, precip.distribution, + force.sanity=TRUE, sanity.tries=25, sanity.sd=6, seed=Sys.time(), print.progress=FALSE) { - + # Set our random seed set.seed(seed) - - # Just in case we have a capitalization or singular/plural issue + + # Just in case we have a capitalization or singular/plural issue if(direction.filter %in% toupper( c("backward", "backwards"))) direction.filter="backward" - + # Setting our our time indexes if(direction.filter=="backward"){ days.sim <- max(dat.mod$sim.day):min(dat.mod$sim.day) @@ -52,30 +54,30 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
days.sim <- min(dat.mod$sim.day):max(dat.mod$sim.day) lag.time <- max(dat.mod$hour) } - + # Declare the variables of interest that will be called in the # overarching loop - vars.list <- c("surface_downwelling_shortwave_flux_in_air", "air_temperature", - "precipitation_flux", "surface_downwelling_longwave_flux_in_air", + vars.list <- c("surface_downwelling_shortwave_flux_in_air", "air_temperature", + "precipitation_flux", "surface_downwelling_longwave_flux_in_air", "air_pressure", "specific_humidity", "wind_speed") - + # Data info that will be used to help organize dataframe for # downscaling - dat.info <- c("sim.day", "year", "doy", "hour", "air_temperature_max.day", - "air_temperature_min.day", "precipitation_flux.day", "surface_downwelling_shortwave_flux_in_air.day", - "surface_downwelling_longwave_flux_in_air.day", "air_pressure.day", - "specific_humidity.day", "wind_speed.day", "next.air_temperature_max", - "next.air_temperature_min", "next.precipitation_flux", "next.surface_downwelling_shortwave_flux_in_air", - "next.surface_downwelling_longwave_flux_in_air", "next.air_pressure", + dat.info <- c("sim.day", "year", "doy", "hour", "air_temperature_max.day", + "air_temperature_min.day", "precipitation_flux.day", "surface_downwelling_shortwave_flux_in_air.day", + "surface_downwelling_longwave_flux_in_air.day", "air_pressure.day", + "specific_humidity.day", "wind_speed.day", "next.air_temperature_max", + "next.air_temperature_min", "next.precipitation_flux", "next.surface_downwelling_shortwave_flux_in_air", + "next.surface_downwelling_longwave_flux_in_air", "next.air_pressure", "next.specific_humidity", "next.wind_speed") - + # # Set progress bar if(print.progress==TRUE){ pb.index <- 1 pb <- utils::txtProgressBar(min = 1, max = length(vars.list)*length(days.sim), style = 3) utils::setTxtProgressBar(pb, pb.index) } - + # Figure out if we need to extract the approrpiate if (is.null(lags.list) & is.null(lags.init)) { PEcAn.logger::logger.error("lags.init & lags.list are NULL, this is a required argument") @@ -83,23 +85,23 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. if (is.null(lags.init)) { lags.init <- lags.list[[unique(dat.mod$ens.day)]] } - - + + # Set up the ensemble members in a list so the uncertainty can be # propogated dat.sim <- list() - + # ------ Beginning of Downscaling For Loop - + for (v in vars.list) { # Initalize our ouroutput dat.sim[[v]] <- array(dim=c(nrow(dat.mod), n.ens)) - + # create column propagation list and betas progagation list cols.list <- array(dim=c(length(days.sim), n.ens)) # An array with number of days x number of ensembles rows.beta <- vector(length=n.ens) # A vector that ends up being the length of the number of our days - - # This gives us a + + # This gives us a for (i in seq_len(length(days.sim))) { cols.tem <- sample(1:n.ens, n.ens, replace = TRUE) cols.list[i,] <- cols.tem @@ -110,7 +112,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # first_beta <- assign(paste0("betas.", v, "_1"), first_model) # does below need to be first_beta? n.beta <- first_model$var[[1]]$dim[[1]]$len # Number of rows; should be same for all ncdf4::nc_close(first_model) - + # Create beta list so each ensemble for each variable pulls the same # betas # for (i in seq_len(length(days.sim))) { @@ -121,7 +123,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
# fill our dat.sim list dat.sim[[v]] <- data.frame(array(dim = c(nrow(dat.mod), n.ens))) - + # -------------------------------- # Looping through time # -------------------------------- @@ -129,44 +131,44 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. for (i in 1:length(days.sim)) { day.now <- unique(dat.mod[dat.mod$sim.day == days.sim[i], "doy"]) rows.now <- which(dat.mod$sim.day == days.sim[i]) - + # shortwave is different because we only want to model daylight if (v == "surface_downwelling_shortwave_flux_in_air") { # Finding which days have measurable light thresh.swdown <- quantile(dat.train$surface_downwelling_shortwave_flux_in_air[dat.train$surface_downwelling_shortwave_flux_in_air > 0], 0.05) - - - hrs.day <- unique(dat.train$time[dat.train$time$DOY == day.now & - dat.train$surface_downwelling_shortwave_flux_in_air > thresh.swdown, + + + hrs.day <- unique(dat.train$time[dat.train$time$DOY == day.now & + dat.train$surface_downwelling_shortwave_flux_in_air > thresh.swdown, "Hour"]) - + rows.mod <- which(dat.mod$sim.day == days.sim[i] & dat.mod$hour %in% hrs.day) dat.temp <- dat.mod[rows.mod, dat.info] - + # dat.temp <- merge(dat.temp, data.frame(ens=paste0("X", 1:n.ens))) if (i == 1) { sim.lag <- utils::stack(lags.init[[v]]) names(sim.lag) <- c(paste0("lag.", v), "ens") - + } else { sim.lag <- utils::stack(data.frame(array(0,dim = c(1, ncol(dat.sim[[v]]))))) names(sim.lag) <- c(paste0("lag.", v), "ens") } dat.temp <- merge(dat.temp, sim.lag, all.x = TRUE) - + } else if (v == "air_temperature") { dat.temp <- dat.mod[rows.now, dat.info] - + # Set up the lags if (i == 1) { # First time through, so pull from our inital lags sim.lag <- utils::stack(lags.init$air_temperature) names(sim.lag) <- c("lag.air_temperature", "ens") - + sim.lag$lag.air_temperature_min <- utils::stack(lags.init$air_temperature_min)[,1] sim.lag$lag.air_temperature_max <- utils::stack(lags.init$air_temperature_max)[,1] } else { sim.lag <- utils::stack(data.frame(array(dat.sim[["air_temperature"]][dat.mod$sim.day == (days.sim[i-1]) & - dat.mod$hour == lag.time, ], + dat.mod$hour == lag.time, ], dim = c(1, ncol(dat.sim$air_temperature))))) names(sim.lag) <- c("lag.air_temperature", "ens") sim.lag$lag.air_temperature_min <- utils::stack(apply(data.frame(dat.sim[["air_temperature"]][dat.mod$sim.day == days.sim[i-1], ]), 2, min))[, 1] @@ -175,59 +177,59 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
dat.temp <- merge(dat.temp, sim.lag, all.x = TRUE) } else if (v == "precipitation_flux") { dat.temp <- dat.mod[rows.now, dat.info] - + dat.temp[,v] <- 99999 dat.temp$rain.prop <- 99999 - + day.now <- unique(dat.temp$doy) - + # Set up the lags This is repeated differently because Precipitation # dat.temp is merged if (i == 1) { sim.lag <- utils::stack(lags.init[[v]]) names(sim.lag) <- c(paste0("lag.", v), "ens") - + } else { sim.lag <- utils::stack(data.frame(array(dat.sim[[v]][dat.mod$sim.day == days.sim[i-1] & - dat.mod$hour == lag.time, ], + dat.mod$hour == lag.time, ], dim = c(1, ncol(dat.sim[[v]]))))) names(sim.lag) <- c(paste0("lag.", v), "ens") } dat.temp <- merge(dat.temp, sim.lag, all.x = TRUE) - + # End Precipitation Flux specifics } else { dat.temp <- dat.mod[rows.now, dat.info] - + if (i == 1) { sim.lag <- utils::stack(lags.init[[v]]) names(sim.lag) <- c(paste0("lag.", v), "ens") - + } else { sim.lag <- utils::stack(data.frame(array(dat.sim[[v]][dat.mod$sim.day == days.sim[i-1] & - dat.mod$hour == lag.time, ], + dat.mod$hour == lag.time, ], dim = c(1, ncol(dat.sim[[v]]))))) names(sim.lag) <- c(paste0("lag.", v), "ens") } dat.temp <- merge(dat.temp, sim.lag, all.x = TRUE) } # End special formatting - + # Create dummy value dat.temp[,v] <- 99999 - + # Creating some necessary dummy variable names vars.sqrt <- c("surface_downwelling_longwave_flux_in_air", "wind_speed") vars.log <- c("specific_humidity") - if (v %in% vars.sqrt) { + if (v %in% vars.sqrt) { dat.temp[,paste0("sqrt(",v,")")] <- sqrt(dat.temp[,v]) - } else if (v %in% vars.log) { - dat.temp[,paste0("log(",v,")")] <- log(dat.temp[,v]) + } else if (v %in% vars.log) { + dat.temp[,paste0("log(",v,")")] <- log(dat.temp[,v]) } # Load the saved model - load(file.path(path.model, v, paste0("model_", v, "_", day.now, + load(file.path(path.model, v, paste0("model_", v, "_", day.now, ".Rdata"))) - + # Pull coefficients (betas) from our saved matrix # for (i in seq_len(length(days.sim))) { @@ -235,7 +237,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # rows.beta[i] <- betas.tem # } # rows.beta <- as.numeric(rows.beta) - + # n.new <- ifelse(n.ens==1, 10, n.ens) # If we're not creating an ensemble, we'll add a mean step to remove chance of odd values n.new <- n.ens cols.redo <- 1:n.new @@ -243,54 +245,55 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
betas_nc <- ncdf4::nc_open(file.path(path.model, v, paste0("betas_", v, "_", day.now, ".nc")))
       col.beta <- betas_nc$var[[1]]$dim[[2]]$len # number of coefficients
       while(n.new>0 & sane.attempt <= sanity.tries){
-        
+
         if(n.ens==1){
           Rbeta <- matrix(mod.save$coef, ncol=col.beta)
         } else {
           betas.tem <- sample(1:max((n.beta-n.new), 1), 1, replace = TRUE)
           Rbeta <- matrix(ncdf4::ncvar_get(betas_nc, paste(day.now), c(betas.tem,1), c(n.new,col.beta)), ncol = col.beta)
         }
-        
-        
+
+
         if(ncol(Rbeta)!=col.beta) Rbeta <- t(Rbeta)
-        
+
         # If we're starting from scratch, set up the prediction matrix
         if(sane.attempt==0){
           dat.pred <- matrix(nrow=nrow(dat.temp), ncol=n.ens)
         }
-        
+
         # if(n.ens==1){
-        #   dat.dum <- subdaily_pred(newdata = dat.temp, model.predict = mod.save, 
+        #   dat.dum <- subdaily_pred(newdata = dat.temp, model.predict = mod.save,
         #                            Rbeta = Rbeta, resid.err = FALSE, model.resid = NULL, Rbeta.resid = NULL,
         #                            n.ens = n.new)
         #   dat.pred[,1] <- apply(dat.dum, 1, mean)
         # } else {
-        dat.pred[,cols.redo] <- subdaily_pred(newdata = dat.temp, model.predict = mod.save, 
-                                              Rbeta = Rbeta, resid.err = FALSE, model.resid = NULL, Rbeta.resid = NULL, 
+        dat.pred[,cols.redo] <- subdaily_pred(newdata = dat.temp, model.predict = mod.save,
+                                              Rbeta = Rbeta, resid.err = FALSE, model.resid = NULL, Rbeta.resid = NULL,
                                               n.ens = n.new)
-        
+
         # }
-        
+
         # Occasionally specific humidity may go seriously off the rails
         if(v=="specific_humidity" & (max(dat.pred)>log(40e-3) | min(dat.pred)<log(1e-6))){
           dat.pred[dat.pred>log(40e-3)] <- log(40e-3)
           dat.pred[dat.pred<log(1e-6)] <- log(1e-6)
         }

         if (v == "precipitation_flux") {
           # if(n.ens == 1) next
           if (max(dat.pred[,cols.check]) > 0) {
-            tmp <- 1:nrow(dat.pred) # A dummy vector of the 
+            tmp <- 1:nrow(dat.pred) # A dummy vector of the
             for (j in cols.check) {
               if (min(dat.pred[, j]) >= 0) next # skip if no negative rain to redistribute
               rows.neg <- which(dat.pred[, j] < 0)
-              rows.add <- sample(tmp[!tmp %in% rows.neg], length(rows.neg), 
+              rows.add <- sample(tmp[!tmp %in% rows.neg], length(rows.neg),
                                  replace = TRUE)
-              
+
               # Redistribute days with negative rain
               for (z in 1:length(rows.neg)) {
                 dat.pred[rows.add[z], j] <- dat.pred[rows.add[z], j] - dat.pred[rows.neg[z], j]
@@ -302,23 +305,23 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.
             # dat.pred[,cols.check] <- dat.pred[,cols.check]/colSums(data.frame(dat.pred[,cols.check]), na.rm=T)
             dat.pred[is.na(dat.pred)] <- 0
           } # End case of re-proportioning
-          
+
           # Convert precip proportions into real units
           # Total Daily precip = precipitation_flux.day*24*60*60
           # precip.day <- dat.temp$precipitation_flux.day[1]*nrow(dat.temp)
           precip.day <- dat.temp$precipitation_flux.day[1]
           dat.pred[,cols.check] <- dat.pred[,cols.check] * precip.day
         } # End Precip re-propagation
-        
+
         # -----
         # SANITY CHECKS!!!
         # -----
         # Here we'll also take into account the values from the past 2 weeks using a "six-sigma filter" per email with Ankur
         # -- this is apparently what they do with the flux data
-        
+
         # vars.sqrt <- c("surface_downwelling_longwave_flux_in_air", "wind_speed")
        # vars.log <- c("specific_humidity")
-        
+
        # Determine which ensemble members fail sanity checks
        #don't forget to check for transformed variables
        # vars.transform <- c("surface_downwelling_shortwave_flux_in_air", "specific_humidity", "surface_downwelling_longwave_flux_in_air", "wind_speed")
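For intuition, the `sanity.sd` ("six-sigma") screen used in the checks below reduces to a band of `sanity.sd` standard deviations around the running mean of recent predictions. A minimal standalone sketch with made-up numbers (not code from the patch):

    filter.mean <- 280     # running mean of recent predictions (hypothetical)
    filter.sd   <- 5       # running sd of recent predictions (hypothetical)
    sanity.sd   <- 6       # the user-tunable threshold this patch adds
    x <- c(275, 281, 400)  # three ensemble draws; one is wildly off
    bad <- x < filter.mean - sanity.sd * filter.sd |
           x > filter.mean + sanity.sd * filter.sd
    which(bad)             # -> 3: that member lands in cols.redo and is re-drawn

@@ -334,12 +337,12 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.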
} else { dat.filter <- dat.sim[[v]][rows.filter,] } - - - filter.mean <- mean(dat.filter, na.rm=T) - filter.sd <- sd(dat.filter, na.rm=T) + + + filter.mean <- mean(dat.filter, na.rm=T) + filter.sd <- sd(dat.filter, na.rm=T) } else { - + if(v %in% vars.sqrt){ filter.mean <- mean(c(dat.pred^2, utils::stack(dat.sim[[v]])[,1]), na.rm=T) filter.sd <- sd(c(dat.pred^2, utils::stack(dat.sim[[v]])[,1]), na.rm=T) @@ -351,48 +354,48 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. filter.sd <- sd(c(dat.pred, utils::stack(dat.sim[[v]])[,1]), na.rm=T) } } - + if(v %in% c("air_temperature", "air_temperature_maximum", "air_temperature_minimum")){ # max air temp = 70 C; hottest temperature from sattellite; very ridiculous # min air temp = -95 C; colder than coldest natural temperature recorded in Antarctica - + tmax.ens <- max(dat.temp$air_temperature_max.day) tmin.ens <- min(dat.temp$air_temperature_min.day) - - # we'll allow some drift outside of what we have for our max/min, but not too much; + + # we'll allow some drift outside of what we have for our max/min, but not too much; # - right now general rule of thumb of 2 degrees leeway on the prescribed - cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 273.15-95 | max(x) > 273.15+70 | - # min(x) < tmin.ens-2 | max(x) > tmax.ens+2 | - min(x) < filter.mean-6*filter.sd | max(x) > filter.mean+6*filter.sd + cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 273.15-95 | max(x) > 273.15+70 | + # min(x) < tmin.ens-2 | max(x) > tmax.ens+2 | + min(x) < filter.mean-sanity.sd*filter.sd | max(x) > filter.mean+sanity.sd*filter.sd )) } - #"specific_humidity", + #"specific_humidity", if(v == "specific_humidity"){ #LOG!! # Based on google, it looks like values of 30 g/kg can occur in the tropics, so lets go above that # Also, the minimum humidity can't be 0 so lets just make it extremely dry; lets set this for 1 g/Mg - cols.redo <- which(apply(dat.pred, 2, function(x) min(exp(x)) < 1e-6 | max(exp(x)) > 40e-3 | - min(exp(x)) < filter.mean-6*filter.sd | - max(exp(x)) > filter.mean+6*filter.sd + cols.redo <- which(apply(dat.pred, 2, function(x) min(exp(x)) < 1e-6 | max(exp(x)) > 40e-3 | + min(exp(x)) < filter.mean-sanity.sd*filter.sd | + max(exp(x)) > filter.mean+sanity.sd*filter.sd ) ) } - #"surface_downwelling_shortwave_flux_in_air", + #"surface_downwelling_shortwave_flux_in_air", if(v == "surface_downwelling_shortwave_flux_in_air"){ # Based on something found from Columbia, average Radiative flux at ATM is 1360 W/m2, so for a daily average it should be less than this # Lets round 1360 and divide that by 2 (because it should be a daily average) and conservatively assume albedo of 20% (average value is more like 30) # Source http://eesc.columbia.edu/courses/ees/climate/lectures/radiation/ dat.pred[dat.pred < 0] <- 0 - cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 1360 | min(x) < filter.mean-6*filter.sd | - max(x) > filter.mean+6*filter.sd - )) + cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 1360 | min(x) < filter.mean-sanity.sd*filter.sd | + max(x) > filter.mean+sanity.sd*filter.sd + )) } if(v == "air_pressure"){ # According to wikipedia the highest barometric pressure ever recorded was 1085.7 hPa = 1085.7*100 Pa; Dead sea has average pressure of 1065 hPa # - Lets round up to 1100 hPA # Also according to Wikipedia, the lowest non-tornadic pressure ever measured was 870 hPA cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 850*100 | max(x) > 1100*100 | - min(x) < filter.mean-6*filter.sd 
| - max(x) > filter.mean+6*filter.sd - )) + min(x) < filter.mean-sanity.sd*filter.sd | + max(x) > filter.mean+sanity.sd*filter.sd + )) } if(v == "surface_downwelling_longwave_flux_in_air"){ # SQRT # A NASA presentation has values topping out ~300 and min ~0: https://ceres.larc.nasa.gov/documents/STM/2003-05/pdf/smith.pdf @@ -400,36 +403,36 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # Based on what what CRUNCEP did, lets assume these are annual averages, so we can do 50% above it and for the min, in case we run tropics, lets go 130/4 # ED2 sanity checks bound longwave at 40 & 600 cols.redo <- which(apply(dat.pred, 2, function(x) min(x^2) < 40 | max(x^2) > 600 | - min(x^2) < filter.mean-6*filter.sd | - max(x^2) > filter.mean+6*filter.sd - )) - + min(x^2) < filter.mean-sanity.sd*filter.sd | + max(x^2) > filter.mean+sanity.sd*filter.sd + )) + } if(v == "wind_speed"){ # According to wikipedia, the hgihest wind speed ever recorded is a gust of 113 m/s; the maximum 5-mind wind speed is 49 m/s cols.redo <- which(apply(dat.pred, 2, function(x) max(x^2) > 50 | - min(x^2) < filter.mean-6*filter.sd | - max(x^2) > filter.mean+6*filter.sd - )) + min(x^2) < filter.mean-sanity.sd*filter.sd | + max(x^2) > filter.mean+sanity.sd*filter.sd + )) } if(v == "precipitation_flux"){ # According to wunderground, ~16" in 1 hr is the max # https://www.wunderground.com/blog/weatherhistorian/what-is-the-most-rain-to-ever-fall-in-one-minute-or-one-hour.html # 16; x25.4 = inches to mm; /(60*60) = hr to sec - cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 16*25.4/(60*60) - )) + cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 16*25.4/(60*60) + )) } - + n.new = length(cols.redo) if(force.sanity){ sane.attempt = sane.attempt + 1 - } else { + } else { # If we're not forcing sanity, just stop now sane.attempt=sanity.tries + 1 } # ----- } # End while case - + # If we ran out of attempts, but want to foce sanity, do so now if(force.sanity & n.new>0){ # If we're still struggling, but we have at least some workable columns, lets just duplicate those: @@ -442,98 +445,98 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
if(v=="surface_downwelling_shortwave_flux_in_air"){
        # Shouldn't be a huge problem, but it's not looking good
        # min(x) < 273.15-95 | max(x) > 273.15+70
        warning(paste("Forcing Sanity:", v))
-        if(min(dat.pred) < max(filter.mean-6*filter.sd)){
-          qtrim <- max(filter.mean-6*filter.sd)
+        if(min(dat.pred) < max(filter.mean-sanity.sd*filter.sd)){
+          qtrim <- max(filter.mean-sanity.sd*filter.sd)
           dat.pred[dat.pred < qtrim] <- qtrim
         }
-        if(max(dat.pred) > min(1360, filter.mean+6*filter.sd)){
-          qtrim <- min(1360, filter.mean+6*filter.sd)
+        if(max(dat.pred) > min(1360, filter.mean+sanity.sd*filter.sd)){
+          qtrim <- min(1360, filter.mean+sanity.sd*filter.sd)
           dat.pred[dat.pred > qtrim] <- qtrim
         }
-        
+
       } else if(v=="air_temperature"){
         # Shouldn't be a huge problem, but it's not looking good
         # min(x) < 273.15-95 | max(x) > 273.15+70
         warning(paste("Forcing Sanity:", v))
-        if(min(dat.pred) < max(273.15-95, filter.mean-6*filter.sd )){
-          qtrim <- max(273.15-95, filter.mean-6*filter.sd)
+        if(min(dat.pred) < max(273.15-95, filter.mean-sanity.sd*filter.sd )){
+          qtrim <- max(273.15-95, filter.mean-sanity.sd*filter.sd)
           dat.pred[dat.pred < qtrim] <- qtrim
         }
-        if(max(dat.pred) > min(273.15+70, filter.mean+6*filter.sd)){
-          qtrim <- min(273.15+70, filter.mean+6*filter.sd)
+        if(max(dat.pred) > min(273.15+70, filter.mean+sanity.sd*filter.sd)){
+          qtrim <- min(273.15+70, filter.mean+sanity.sd*filter.sd)
           dat.pred[dat.pred > qtrim] <- qtrim
         }
-        
+
       } else if(v=="air_pressure"){
         # A known problem child
         warning(paste("Forcing Sanity:", v))
-        if(min(dat.pred) < max(870*100, filter.mean-6*filter.sd )){
-          qtrim <- max(870*100, filter.mean-6*filter.sd)
+        if(min(dat.pred) < max(870*100, filter.mean-sanity.sd*filter.sd )){
+          qtrim <- max(870*100, filter.mean-sanity.sd*filter.sd)
           dat.pred[dat.pred < qtrim] <- qtrim
         }
-        if(max(dat.pred) > min(1100*100, filter.mean+6*filter.sd)){
-          qtrim <- min(1100*100, filter.mean+6*filter.sd)
+        if(max(dat.pred) > min(1100*100, filter.mean+sanity.sd*filter.sd)){
+          qtrim <- min(1100*100, filter.mean+sanity.sd*filter.sd)
           dat.pred[dat.pred > qtrim] <- qtrim
         }
-        
+
       } else if(v=="surface_downwelling_longwave_flux_in_air"){
         # A known problem child
         # ED2 sanity checks bound longwave at 40 & 600
         warning(paste("Forcing Sanity:", v))
-        if(min(dat.pred^2) < max(40, filter.mean-6*filter.sd )){
-          qtrim <- max(40, filter.mean-6*filter.sd)
+        if(min(dat.pred^2) < max(40, filter.mean-sanity.sd*filter.sd )){
+          qtrim <- max(40, filter.mean-sanity.sd*filter.sd)
           dat.pred[dat.pred^2 < qtrim] <- sqrt(qtrim)
         }
-        if(max(dat.pred^2) > min(600, filter.mean+6*filter.sd)){
-          qtrim <- min(600, filter.mean+6*filter.sd)
+        if(max(dat.pred^2) > min(600, filter.mean+sanity.sd*filter.sd)){
+          qtrim <- min(600, filter.mean+sanity.sd*filter.sd)
           dat.pred[dat.pred^2 > qtrim] <- sqrt(qtrim)
         }
-        
+
       } else if(v=="specific_humidity") {
         warning(paste("Forcing Sanity:", v))
-        if(min(exp(dat.pred)) < max(1e-6, filter.mean-6*filter.sd )){
-          qtrim <- max(1e-6, filter.mean-6*filter.sd)
+        if(min(exp(dat.pred)) < max(1e-6, filter.mean-sanity.sd*filter.sd )){
+          qtrim <- max(1e-6, filter.mean-sanity.sd*filter.sd)
           dat.pred[exp(dat.pred) < qtrim] <- log(qtrim)
         }
-        if(max(exp(dat.pred)) > min(40e-3, filter.mean+6*filter.sd)){
-          qtrim <- min(40e-3, filter.mean+6*filter.sd)
+        if(max(exp(dat.pred)) > min(40e-3, filter.mean+sanity.sd*filter.sd)){
+          qtrim <- min(40e-3, filter.mean+sanity.sd*filter.sd)
           dat.pred[exp(dat.pred) > qtrim] <- log(qtrim)
         }
-        
+
       } else if(v=="wind_speed"){
        # A known problem child
        warning(paste("Forcing Sanity:", v))
-        # if(min(dat.pred^2) < max(0, filter.mean-6*filter.sd )){
+        # if(min(dat.pred^2) < max(0, filter.mean-sanity.sd*filter.sd )){
        #   qtrim <- max(0, 1)
        #   dat.pred[dat.pred < qtrim] <- qtrim
        # }
-        if(max(dat.pred^2) > min(50, filter.mean+6*filter.sd)){
-          qtrim <- min(50, filter.mean+6*filter.sd)
+        if(max(dat.pred^2) > min(50, filter.mean+sanity.sd*filter.sd)){
+          qtrim <- min(50, filter.mean+sanity.sd*filter.sd)
           dat.pred[dat.pred^2 > qtrim] <- sqrt(qtrim)
         }
-        
+
       } else {
         stop(paste("Unable to produce a sane prediction:", v, "- day", day.now, "; problem child =", paste(cols.redo, collapse=" ")))
       }
-      
+
     }
   } # End force sanity
-  
+
   ncdf4::nc_close(betas_nc)
-  
+
   #----- Now we do a little quality control per variable
   # un-transforming our variables
-  if (v %in% vars.sqrt) { 
+  if (v %in% vars.sqrt) {
     dat.pred <- dat.pred^2
-  } else if (v %in% vars.log) { 
-    dat.pred <- exp(dat.pred) 
+  } else if (v %in% vars.log) {
+    dat.pred <- exp(dat.pred)
   }
-  
-  # ---------- 
+
+  # ----------
   # Re-distribute precip so we don't get the constant drizzle problem
   # -- this could go earlier, but I'm being lazy because I don't want to mess with cols.redo
-  # ---------- 
+  # ----------
   if(v == "precipitation_flux"){
     # Pick the number of hours to spread rain across from our observed distribution
     # in case we don't have a large distribution, use multiple days
@@ -544,39 +547,39 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.
     } else {
       rain.ind <- (day.now-3):(day.now+3)
     }
-    
+
     hrs.rain <- sample(unlist(precip.distribution$hrs.rain[rain.ind]),1)
     # hr.max <- sample(precip.distribution$hrs.max[[day.now]],1)
-    
+
     for(j in 1:ncol(dat.pred)){
       obs.day <- nrow(dat.pred)/ncol(dat.pred)
       start.ind <- seq(1, nrow(dat.pred), by=obs.day)
       for(z in seq_along(start.ind)){
         rain.now <- dat.pred[start.ind[z]:(start.ind[z]+obs.day-1),j]
         hrs.now <- which(rain.now>0)
-        
+
         if(length(hrs.now)<=hrs.rain) next # If we don't need to redistribute, skip what's next
-        
+
         # Figure out when it's going to rain based on what normally has the most number of hours
         hrs.add <- sample(unlist(precip.distribution$hrs.max[rain.ind]), hrs.rain, replace=T)
         hrs.go <- hrs.now[!hrs.now %in% hrs.add]
         hrs.wet <- sample(hrs.add, length(hrs.go), replace=T)
-        
+
        for(dry in seq_along(hrs.go)){
          rain.now[hrs.wet[dry]] <- rain.now[hrs.wet[dry]] + rain.now[hrs.go[dry]]
          rain.now[hrs.go[dry]] <- 0
        }
-        
+
        # Put the rain back into place
        dat.pred[start.ind[z]:(start.ind[z]+obs.day-1),j] <- rain.now
      } # End row loop
    } # End column loop
  } # End hour redistribution
-  # ---------- 
-  
-  # ---------- 
+  # ----------
+
+  # ----------
  # Begin propagating values and saving values Shortwave Radiation
-  # ---------- 
+  # ----------
  if (v == "surface_downwelling_shortwave_flux_in_air") {
    # Randomly pick which values to save & propagate
@@ -588,11 +591,11 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.
    } else { # Only one ensemble member... it's really easy
      dat.sim[[v]][rows.mod, 1] <- dat.pred
    }
-    
-    
+
+
    dat.sim[[v]][rows.now[!rows.now %in% rows.mod], ] <- 0
  } else {
-    
+
    if(ncol(dat.sim[[v]])>1){
      cols.prop <- as.integer(cols.list[i,])
      for (j in 1:ncol(dat.sim[[v]])) {
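For intuition, the drizzle fix above moves rain out of surplus wet timesteps into the hours that climatologically receive the day's heaviest rain, conserving the daily total. A toy sketch with made-up values (not code from the patch):

    rain.now <- c(0, 0.2, 0.2, 0.2, 0.2, 0)      # six timesteps of constant drizzle
    hrs.rain <- 2                                # target number of wet steps (from obs)
    hrs.now  <- which(rain.now > 0)              # -> 2 3 4 5
    hrs.add  <- c(3, 4)                          # steps that usually get the most rain
    hrs.go   <- hrs.now[!hrs.now %in% hrs.add]   # -> 2 5: steps to dry out
    hrs.wet  <- sample(hrs.add, length(hrs.go), replace = TRUE)
    for (dry in seq_along(hrs.go)) {
      rain.now[hrs.wet[dry]] <- rain.now[hrs.wet[dry]] + rain.now[hrs.go[dry]]
      rain.now[hrs.go[dry]]  <- 0
    }
    sum(rain.now)  # still 0.8: the daily total is conserved, the drizzle is not

@@ -608,12 +611,12 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags.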
utils::setTxtProgressBar(pb, pb.index) pb.index <- pb.index + 1 } - + rm(dat.temp, dat.pred) } # end day loop # -------------------------------- - - } # End vars.list + + } # End vars.list # ---------- End of downscaling for loop return(dat.sim) } From 4c5c3c8889888b873ca9c06589f23e8ad7ea6a3c Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Thu, 5 Nov 2020 21:35:45 +0000 Subject: [PATCH 1591/2289] automated documentation update --- modules/data.atmosphere/man/lm_ensemble_sims.Rd | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/lm_ensemble_sims.Rd b/modules/data.atmosphere/man/lm_ensemble_sims.Rd index ec838820813..72fc86a8fe6 100644 --- a/modules/data.atmosphere/man/lm_ensemble_sims.Rd +++ b/modules/data.atmosphere/man/lm_ensemble_sims.Rd @@ -15,6 +15,7 @@ lm_ensemble_sims( precip.distribution, force.sanity = TRUE, sanity.tries = 25, + sanity.sd = 6, seed = Sys.time(), print.progress = FALSE ) @@ -31,16 +32,18 @@ lm_ensemble_sims( \item{lags.init}{- a data frame of initialization parameters to match the data in dat.mod} -\item{dat.train}{- the training data used to fit the model; needed for night/day in +\item{dat.train}{- the training data used to fit the model; needed for night/day in surface_downwelling_shortwave_flux_in_air} -\item{precip.distribution}{- a list with 2 sub-lists containing the number of observations with precip in the training data per day & +\item{precip.distribution}{- a list with 2 sub-lists containing the number of observations with precip in the training data per day & the hour of max rain in the training data. This will be used to help solve the "constant drizzle" problem} \item{force.sanity}{- (logical) do we force the data to meet sanity checks?} \item{sanity.tries}{- how many time should we try to predict a reasonable value before giving up? We don't want to end up in an infinite loop} +\item{sanity.sd}{- how many standard deviations from the mean should be used to determine sane outliers (default 6)} + \item{seed}{- (optional) set the seed manually to allow reproducible results} \item{print.progress}{- if TRUE will print progress bar} From 59f7cecc69c4e3d6f1551ad8dd89fba1db537f72 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 6 Nov 2020 16:04:55 -0500 Subject: [PATCH 1592/2289] ED: met2model overhaul The motivation for this was that the previous version didn't allow ED2 to generate met inputs for incomplete years. I don't really know why we previously didn't allow this. I would have liked to implement a less disruptive fix, but I couldn't understand what the old code was doing. So I reorganized it in a way that will definitely work for partial years (I tested!) and, IMHO, is a bit clearer. I did my best to add explanatory comments for any code that wasn't immediately obvious. 
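For illustration, the heart of the new approach is a tiny, vectorizable time decomposition (a sketch, not verbatim from the diff; names follow the patch, where `tdays` is the "days since YYYY-01-01" axis read from the met NetCDF):

    # Hypothetical mini-example: 6pm on DOY 2 of 2020 (a leap year)
    tdays <- 1.75                  # fractional days since YYYY-01-01
    doy   <- floor(tdays) + 1      # -> 2;  +1 because 0.x days is still DOY 1
    hr    <- (tdays %% 1) * 24     # -> 18; local hour from the fractional day
    mo    <- day2mo(2020, doy, leap_year = TRUE)  # -> 1 (JAN), picks the monthly HDF5 file

Because none of this assumes a complete 365-day axis, partial years fall out
naturally.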
--- models/ed/R/met2model.ED2.R | 207 +++++++++++++++++------------------- 1 file changed, 97 insertions(+), 110 deletions(-) diff --git a/models/ed/R/met2model.ED2.R b/models/ed/R/met2model.ED2.R index 9088f86d2ce..6ae7a142e45 100644 --- a/models/ed/R/met2model.ED2.R +++ b/models/ed/R/met2model.ED2.R @@ -49,13 +49,13 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l ## check to see if the outfolder is defined, if not create directory for output dir.create(met_folder, recursive = TRUE, showWarnings = FALSE) - dm <- c(0, 32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366) - dl <- c(0, 32, 61, 92, 122, 153, 183, 214, 245, 275, 306, 336, 367) month <- c("JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL", "AUG", "SEP", "OCT", "NOV", "DEC") - mon_num <- c("01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12") - day2mo <- function(year, day, leap_year) { + # DOY corresponding to start of each month without a leap year... + dm <- c(0, 32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366) + # ...and with a leap year + dl <- c(0, 32, 61, 92, 122, 153, 183, 214, 245, 275, 306, 336, 367) mo <- rep(NA, length(day)) if (!leap_year) { mo <- findInterval(day, dm) @@ -149,9 +149,7 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l ## determine GMT adjustment lst <- site$LST_shift[which(site$acro == froot)] ## extract variables - lat <- eval(parse(text = lat)) - lon <- eval(parse(text = lon)) - sec <- nc$dim$time$vals + tdays <- nc$dim$time$vals Tair <- ncdf4::ncvar_get(nc, "air_temperature") Qair <- ncdf4::ncvar_get(nc, "specific_humidity") #humidity (kg/kg) U <- try(ncdf4::ncvar_get(nc, "eastward_wind"), silent = TRUE) @@ -176,16 +174,32 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l useCO2 <- is.numeric(CO2) ## convert time to seconds - sec <- udunits2::ud.convert(sec, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") + sec <- udunits2::ud.convert(tdays, unlist(strsplit(nc$dim$time$units, " "))[1], "seconds") ncdf4::nc_close(nc) - dt <- PEcAn.utils::seconds_in_year(year, leap_year) / length(sec) + # `dt` is the met product time step in seconds. We calculate it here as + # `sec[i] - sec[i-1]` for all `[i]`. For a properly formatted product, the + # timesteps should be regular, so there is a single, constant difference. If + # that's not the case, we throw an informative error. + # + # `drop` here simplifies length-1 arrays to vectors. Without it, R will + # later throw an error about "non-conformable arrays" when trying to add a + # length-1 array to a vector. + dt <- drop(unique(diff(sec))) + if (length(dt) > 1) { + PEcAn.logger::logger.severe(paste0( + "Time step (`dt`) is not uniform! Identified ", + length(dt), " unique time steps. 
", + "`head(dt)` (in seconds): ", + paste(head(dt), collapse = ", ") + )) + } toff <- -as.numeric(lst) * 3600 / dt ## buffer to get to GMT - slen <- seq_along(SW) + slen <- seq_along(sec) Tair <- c(rep(Tair[1], toff), Tair)[slen] Qair <- c(rep(Qair[1], toff), Qair)[slen] U <- c(rep(U[1], toff), U)[slen] @@ -198,58 +212,37 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l CO2 <- c(rep(CO2[1], toff), CO2)[slen] } - ## build time variables (year, month, day of year) - skip <- FALSE - nyr <- floor(length(sec) * dt / 86400 / 365) - yr <- NULL - doy <- NULL - hr <- NULL - asec <- sec - for (y in seq(year, year + nyr - 1)) { - diy <- PEcAn.utils::days_in_year(y, leap_year) - ytmp <- rep(y, udunits2::ud.convert(diy / dt, "days", "seconds")) - dtmp <- rep(seq_len(diy), each = day_secs / dt) - if (is.null(yr)) { - yr <- ytmp - doy <- dtmp - hr <- rep(NA, length(dtmp)) - } else { - yr <- c(yr, ytmp) - doy <- c(doy, dtmp) - hr <- c(hr, rep(NA, length(dtmp))) - } - rng <- length(doy) - length(ytmp):1 + 1 - if (!all(rng >= 0)) { - skip <- TRUE - PEcAn.logger::logger.warn(year, " is not a complete year and will not be included") - break - } - asec[rng] <- asec[rng] - asec[rng[1]] - hr[rng] <- (asec[rng] - (dtmp - 1) * day_secs) / day_secs * 24 - } - mo <- day2mo(yr, doy, leap_year) - if (length(yr) < length(sec)) { - rng <- (length(yr) + 1):length(sec) - if (!all(rng >= 0)) { - skip <- TRUE - PEcAn.logger::logger.warn(paste(year, "is not a complete year and will not be included")) - break - } - yr[rng] <- rep(y + 1, length(rng)) - doy[rng] <- rep(1:366, each = day_secs / dt)[1:length(rng)] - hr[rng] <- rep(seq(0, length = day_secs / dt, by = dt / day_secs * 24), 366)[1:length(rng)] - } - if (skip) { - print("Skipping to next year") - next - } + # We need to figure out the local solar zenith angle to estimate the + # potential radiation. For that, we need the Julian Date (`doy`) and local + # time in hours (`hr`). + # First, calculate `doy`. Use `floor(tdays) + 1` here because, e.g., 6am on + # January 1 corresponds to "0.25 days since YYYY-01-01", but this is DOY 1, + # not 0. Similarly, 6pm on December 31 is "364.75 days since YYYY-01-01", + # but this is DOY 365, not 364. + doy <- floor(tdays) + 1 + + invalid_doy <- doy < 1 | doy > PEcAn.utils::days_in_year(year, leap_year) + if (any(invalid_doy)) { + PEcAn.logger::logger.severe(paste0( + "Identified at least one invalid day-of-year (`doy`). ", + "PEcAn met standard uses days since start of year as its time unit, ", + "so this suggests a problem with the input met file. ", + "Invalid values are: ", paste(doy[invalid_doy], collapse = ", "), ". ", + "Source file is: ", normalizePath(ncfile) + )) + } + # Local time in hours (`hr`) is just the fractional part of the "Days since + # YYYY-01-01" value x 24. So we calculate it here using mod division. 
+ # (e.g., 12.5 days %% 1 = 0.5 day; 0.5 day x 24 = 12 hours) + hr <- (tdays %% 1) * 24 + ## calculate potential radiation in order to estimate diffuse/direct cosz <- PEcAn.data.atmosphere::cos_solar_zenith_angle(doy, lat, lon, dt, hr) rpot <- 1366 * cosz - rpot <- rpot[1:length(SW)] + rpot <- rpot[seq_along(tdays)] SW[rpot < SW] <- rpot[rpot < SW] ## ensure radiation < max ### this causes trouble at twilight bc of missmatch btw bin avergage and bin midpoint @@ -278,59 +271,57 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l co2A <- CO2 * 1e+06 # surface co2 concentration [ppm] converted from mole fraction [kg/kg] } - ## create directory if(system(paste('ls',froot),ignore.stderr=TRUE)>0) - ## system(paste('mkdir',froot)) - - ## write by year and month - for (y in year + 1:nyr - 1) { - sely <- which(yr == y) - for (m in unique(mo[sely])) { - selm <- sely[which(mo[sely] == m)] - mout <- paste(met_folder, "/", y, month[m], ".h5", sep = "") - if (file.exists(mout)) { - if (overwrite) { - file.remove(mout) - ed_met_h5 <- hdf5r::H5File$new(mout) - } else { - PEcAn.logger::logger.warn("The file already exists! Moving to next month!") - next - } - } else { + # Next, because ED2 stores values in monthly HDF5 files, we need to + # calculate the month corresponding to each DOY. + mo <- day2mo(year, doy, leap_year) + + # Now, write these monthly outputs for the current year + for (m in unique(mo)) { + selm <- which(mo == m) + mout <- file.path(met_folder, paste0(year, month[m], ".h5")) + if (file.exists(mout)) { + if (overwrite) { + file.remove(mout) ed_met_h5 <- hdf5r::H5File$new(mout) + } else { + PEcAn.logger::logger.warn("The file already exists! Moving to next month!") + next } - dims <- c(length(selm), 1, 1) - nbdsf <- array(nbdsfA[selm], dim = dims) - nddsf <- array(nddsfA[selm], dim = dims) - vbdsf <- array(vbdsfA[selm], dim = dims) - vddsf <- array(vddsfA[selm], dim = dims) - prate <- array(prateA[selm], dim = dims) - dlwrf <- array(dlwrfA[selm], dim = dims) - pres <- array(presA[selm], dim = dims) - hgt <- array(hgtA[selm], dim = dims) - ugrd <- array(ugrdA[selm], dim = dims) - vgrd <- array(vgrdA[selm], dim = dims) - sh <- array(shA[selm], dim = dims) - tmp <- array(tmpA[selm], dim = dims) - if (useCO2) { - co2 <- array(co2A[selm], dim = dims) - } - ed_met_h5[["nbdsf"]] <- nbdsf - ed_met_h5[["nddsf"]] <- nddsf - ed_met_h5[["vbdsf"]] <- vbdsf - ed_met_h5[["vddsf"]] <- vddsf - ed_met_h5[["prate"]] <- prate - ed_met_h5[["dlwrf"]] <- dlwrf - ed_met_h5[["pres"]] <- pres - ed_met_h5[["hgt"]] <- hgt - ed_met_h5[["ugrd"]] <- ugrd - ed_met_h5[["vgrd"]] <- vgrd - ed_met_h5[["sh"]] <- sh - ed_met_h5[["tmp"]] <- tmp - if (useCO2) { - ed_met_h5[["co2"]] <- co2 - } - ed_met_h5$close_all() + } else { + ed_met_h5 <- hdf5r::H5File$new(mout) + } + dims <- c(length(selm), 1, 1) + nbdsf <- array(nbdsfA[selm], dim = dims) + nddsf <- array(nddsfA[selm], dim = dims) + vbdsf <- array(vbdsfA[selm], dim = dims) + vddsf <- array(vddsfA[selm], dim = dims) + prate <- array(prateA[selm], dim = dims) + dlwrf <- array(dlwrfA[selm], dim = dims) + pres <- array(presA[selm], dim = dims) + hgt <- array(hgtA[selm], dim = dims) + ugrd <- array(ugrdA[selm], dim = dims) + vgrd <- array(vgrdA[selm], dim = dims) + sh <- array(shA[selm], dim = dims) + tmp <- array(tmpA[selm], dim = dims) + if (useCO2) { + co2 <- array(co2A[selm], dim = dims) + } + ed_met_h5[["nbdsf"]] <- nbdsf + ed_met_h5[["nddsf"]] <- nddsf + ed_met_h5[["vbdsf"]] <- vbdsf + ed_met_h5[["vddsf"]] <- vddsf + ed_met_h5[["prate"]] <- 
prate + ed_met_h5[["dlwrf"]] <- dlwrf + ed_met_h5[["pres"]] <- pres + ed_met_h5[["hgt"]] <- hgt + ed_met_h5[["ugrd"]] <- ugrd + ed_met_h5[["vgrd"]] <- vgrd + ed_met_h5[["sh"]] <- sh + ed_met_h5[["tmp"]] <- tmp + if (useCO2) { + ed_met_h5[["co2"]] <- co2 } + ed_met_h5$close_all() } ## write DRIVER file @@ -341,10 +332,6 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l update_frequency = dt, flag = 1 ) - # if (!useCO2) { - # metvar_table[metvar_table$variable == "co2", - # c("update_frequency", "flag")] <- list(380, 4) - # } if (!useCO2) { metvar_table_vars <- metvar_table[metvar_table$variable != "co2",] ## CO2 optional in ED2 From 83ad3248818d9c93620414ac6d82cdece3729153 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 6 Nov 2020 15:36:03 -0500 Subject: [PATCH 1593/2289] MERRA: Fix radiation variables Instead of net variables, use "total absorbed longwave" for incoming longwave radiation and "total incoming shortwave" for incoming shortwave. Also, more thoroughly annotate the variables to make it easier to figure out what they are, and eventually add new ones. --- modules/data.atmosphere/R/download.MERRA.R | 23 +++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index 9a80e6a796d..8db1d498747 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -215,31 +215,44 @@ get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) } } -# Time-integrated variables +# For more on MERRA variables, see: +# - The MERRA2 readme -- https://goldsmr4.gesdisc.eosdis.nasa.gov/data/MERRA2/M2T1NXRAD.5.12.4/doc/MERRA2.README.pdf +# - The MERRA2 file spec -- https://gmao.gsfc.nasa.gov/pubs/docs/Bosilovich785.pdf +# Page numbers below correspond to pages in the file spec. + +# Surface flux diagnostics (pg. 33) merra_prod <- "M2T1NXFLX.5.12.4" merra_file <- "tavg1_2d_flx_Nx" merra_vars <- tibble::tribble( ~CF_name, ~MERRA_name, ~units, + # TLML - Surface air temperature "air_temperature", "TLML", "Kelvin", + # ULML - Surface eastward wind "eastward_wind", "ULML", "m/s", + # VLML - Surface northward wind "northward_wind", "VLML", "m/s", + # QSH - Effective surface specific humidity "specific_humidity", "QSH", "g/g", + # PRECTOT - Total precipitation from atmospheric model physics "precipitation_flux", "PRECTOT", "kg/m2/s" ) -# Instantaneous variables +# Single-level diagnostics (pg. 17) merra_pres_prod <- "M2I1NXASM.5.12.4" merra_pres_file <- "inst1_2d_asm_Nx" merra_pres_vars <- tibble::tribble( ~CF_name, ~MERRA_name, ~units, + # PS - Surface pressure "air_pressure", "PS", "Pascal", ) -# Radiation variables +# Radiation diagnostics (pg. 
43)
merra_flux_prod <- "M2T1NXRAD.5.12.4"
merra_flux_file <- "tavg1_2d_rad_Nx"
merra_flux_vars <- tibble::tribble(
   ~CF_name, ~MERRA_name, ~units,
-  "surface_downwelling_longwave_flux_in_air", "LWGNT", "W/m2",
-  "surface_downwelling_shortwave_flux_in_air", "SWGNT", "W/m2"
+  # LWGAB is 'Surface absorbed longwave radiation'
+  "surface_downwelling_longwave_flux_in_air", "LWGAB", "W/m2",
+  # SWGDN is 'Surface incoming shortwave flux'
+  "surface_downwelling_shortwave_flux_in_air", "SWGDN", "W/m2"
 )

From 9284e0068525469834c76eda4c030daf419aed75 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov 
Date: Fri, 6 Nov 2020 15:41:48 -0500
Subject: [PATCH 1594/2289] MERRA: Change default to `overwrite=TRUE`

This is necessary to allow met time series to be extended.
---
 modules/data.atmosphere/R/download.MERRA.R | 31 +++++++++++++---------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R
index 8db1d498747..b937bcda81d 100644
--- a/modules/data.atmosphere/R/download.MERRA.R
+++ b/modules/data.atmosphere/R/download.MERRA.R
@@ -7,7 +7,7 @@
 download.MERRA <- function(outfolder, start_date, end_date,
                            lat.in, lon.in,
-                           overwrite = FALSE,
+                           overwrite = TRUE,
                            verbose = FALSE,
                            ...) {
@@ -66,18 +66,6 @@ download.MERRA <- function(outfolder, start_date, end_date,
     results$mimetype[i] <- "application/x-netcdf"
     results$formatname[i] <- "CF Meteorology"
 
-    if (file.exists(loc.file)) {
-      if (overwrite) {
-        PEcAn.logger::logger.info(paste0("Removing existing file ", loc.file))
-        file.remove(loc.file)
-      } else {
-        PEcAn.logger::logger.info(paste0(
-          "File ", loc.file, " already exists. Skipping to next year"
-        ))
-        next
-      }
-    }
-
     ## Create dimensions
     lat <- ncdf4::ncdim_def(name = "latitude", units = "degree_north", vals = lat.in, create_dimvar = TRUE)
     lon <- ncdf4::ncdim_def(name = "longitude", units = "degree_east", vals = lon.in, create_dimvar = TRUE)
@@ -99,6 +87,23 @@
     }
 
     ## Create output file
+    if (file.exists(loc.file)) {
+      if (overwrite) {
+        PEcAn.logger::logger.warn(
+          "Target file ", loc.file, " already exists.",
+          "It will be overwritten."
+        )
+      } else {
+        PEcAn.logger::logger.warn(
+          "Target file ", loc.file, " already exists and",
+          "`overwrite = FALSE`. Skipping to next year.",
+          "Note that `overwrite = TRUE` by default to allow met",
+          "time series to be extended in the PEcAn workflow!",
+          "Running with `overwrite = FALSE` may produce unexpected behavior."
+        )
+        next
+      }
+    }
     loc <- ncdf4::nc_create(loc.file, var_list)
     on.exit(ncdf4::nc_close(loc), add = TRUE)

From 4cf9ec7416342d6125497d75ad13d3dfc19a1fa3 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov 
Date: Fri, 6 Nov 2020 15:45:25 -0500
Subject: [PATCH 1595/2289] MERRA: Add separate `redownload` argument

This determines whether the raw MERRA met files will be
re-downloaded. This is distinct from the `overwrite` argument, which
determines what will happen to the PEcAn standard met file.
---
 modules/data.atmosphere/R/download.MERRA.R | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R
index b937bcda81d..0299c6fbc4b 100644
--- a/modules/data.atmosphere/R/download.MERRA.R
+++ b/modules/data.atmosphere/R/download.MERRA.R
@@ -1,6 +1,8 @@
 #' Download MERRA data
 #'
 #' @inheritParams download.CRUNCEP
+#' @param redownload Logical.
If `TRUE`, force re-download of files even if they +#' already exist (default = `FALSE`). #' @param ... Not used -- silently soak up extra arguments from `convert.input`, etc. #' @return `data.frame` of meteorology data metadata #' @author Alexey Shiklomanov @@ -9,6 +11,7 @@ download.MERRA <- function(outfolder, start_date, end_date, lat.in, lon.in, overwrite = TRUE, verbose = FALSE, + redownload = FALSE, ...) { dates <- seq.Date(as.Date(start_date), as.Date(end_date), "1 day") @@ -21,7 +24,7 @@ download.MERRA <- function(outfolder, start_date, end_date, PEcAn.logger::logger.debug(paste0( "Downloading ", as.character(date), " (", i, " of ", length(dates), ")" )) - get_merra_date(date, lat.in, lon.in, outfolder) + get_merra_date(date, lat.in, lon.in, outfolder, redownload = redownload) } # Now, post-process @@ -143,7 +146,7 @@ download.MERRA <- function(outfolder, start_date, end_date, return(results) } -get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) { +get_merra_date <- function(date, latitude, longitude, outdir, redownload = FALSE) { date <- as.character(date) dpat <- "([[:digit:]]{4})-([[:digit:]]{2})-([[:digit:]]{2})" year <- as.numeric(gsub(dpat, "\\1", date)) @@ -175,7 +178,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) qstring <- paste(qvars, collapse = ",") outfile <- file.path(outdir, sprintf("merra-most-%d-%02d-%02d.nc", year, month, day)) - if (overwrite || !file.exists(outfile)) { + if (redownload || !file.exists(outfile)) { req <- httr::GET( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), @@ -193,7 +196,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) qstring <- paste(qvars, collapse = ",") outfile <- file.path(outdir, sprintf("merra-pres-%d-%02d-%02d.nc", year, month, day)) - if (overwrite || !file.exists(outfile)) { + if (redownload || !file.exists(outfile)) { req <- httr::GET( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), @@ -211,7 +214,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) qstring <- paste(qvars, collapse = ",") outfile <- file.path(outdir, sprintf("merra-flux-%d-%02d-%02d.nc", year, month, day)) - if (overwrite || !file.exists(outfile)) { + if (redownload || !file.exists(outfile)) { req <- robustly(httr::GET, n = 10)( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), From 73242e576a829273baa271b80b2a214e7f57616e Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Fri, 6 Nov 2020 21:23:28 +0000 Subject: [PATCH 1596/2289] automated documentation update --- modules/data.atmosphere/man/download.MERRA.Rd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/download.MERRA.Rd b/modules/data.atmosphere/man/download.MERRA.Rd index 9f550ce0d6e..c35246424c4 100644 --- a/modules/data.atmosphere/man/download.MERRA.Rd +++ b/modules/data.atmosphere/man/download.MERRA.Rd @@ -10,8 +10,9 @@ download.MERRA( end_date, lat.in, lon.in, - overwrite = FALSE, + overwrite = TRUE, verbose = FALSE, + redownload = FALSE, ... ) } @@ -33,6 +34,9 @@ but only the year portion is used and the resulting files always contain a full \item{verbose}{logical. Passed on to \code{\link[ncdf4]{ncvar_def}} and \code{\link[ncdf4]{nc_create}} to control printing of debug info} +\item{redownload}{Logical. 
If `TRUE`, force re-download of files even if they +already exist (default = `FALSE`).} + \item{...}{Not used -- silently soak up extra arguments from `convert.input`, etc.} } \value{ From 0ee071aecb1321ecd8d770c7398178b52d74fc68 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Fri, 6 Nov 2020 16:56:27 -0500 Subject: [PATCH 1597/2289] ED: `drop` lat/lon length-1 dims in met2model Otherwise, this will throw "non-conformable array" errors in `cos_solar_zenith_angle`. Note that this only applies when reading the lat/lon from the NetCDF -- users won't be protected from this when they pass lat/lon in themselves. --- models/ed/R/met2model.ED2.R | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/models/ed/R/met2model.ED2.R b/models/ed/R/met2model.ED2.R index 6ae7a142e45..f2e5086cb51 100644 --- a/models/ed/R/met2model.ED2.R +++ b/models/ed/R/met2model.ED2.R @@ -131,17 +131,22 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l flat <- nc$dim[[1]]$vals[1] } if (is.na(lat)) { - lat <- flat + # Have to `drop` here because NetCDF returns a length-1 array, not a + # scalar. This causes `non-conforming array` errors in the + # `cos_solar_zenith_angle` function later when trying to add these to + # scalars. `drop` simplifies away any length-1 dimensions. + lat <- drop(flat) } else if (lat != flat) { PEcAn.logger::logger.warn("Latitude does not match that of file", lat, "!=", flat) } flon <- try(ncdf4::ncvar_get(nc, "longitude"), silent = TRUE) if (!is.numeric(flon)) { - flat <- nc$dim[[2]]$vals[1] + flon <- nc$dim[[2]]$vals[1] } if (is.na(lon)) { - lon <- flon + # See above comment re: `drop` + lon <- drop(flon) } else if (lon != flon) { PEcAn.logger::logger.warn("Longitude does not match that of file", lon, "!=", flon) } From fbc59d6deb8cc59d21afa68949021135341a2a44 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sun, 8 Nov 2020 06:59:24 -0500 Subject: [PATCH 1598/2289] Makefile.dependes --- Makefile.depends | 1 - 1 file changed, 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index 33a871c2b06..e833665ad8e 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -40,4 +40,3 @@ $(call depends,models/preles): | .install/base/utils .install/base/logger .insta $(call depends,models/sipnet): | .install/modules/data.atmosphere .install/base/logger .install/base/remote .install/base/utils $(call depends,models/stics): | .install/base/settings .install/base/db .install/base/logger .install/base/utils .install/base/remote $(call depends,models/template): | .install/base/db .install/base/logger .install/base/utils -$(call depends,models/uvafme): | .install/base/db .install/base/logger .install/base/utils From b9cf0b540d52af667317c4b6f0a4b1cdebc887a9 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sun, 8 Nov 2020 07:55:34 -0500 Subject: [PATCH 1599/2289] Moved assim.sequential packages from Imports to Suggests --- Makefile.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index e833665ad8e..151599077e0 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -10,7 +10,7 @@ $(call depends,base/visualization): | .install/base/db .install/base/logger .ins $(call depends,base/workflow): | .install/modules/data.atmosphere .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils $(call depends,modules/allometry): | .install/base/logger .install/base/db $(call 
depends,modules/assim.batch): | .install/modules/benchmark .install/base/db .install/modules/emulator .install/base/logger .install/modules/meta.analysis .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils -$(call depends,modules/assim.sequential): | .install/modules/benchmark .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils +$(call depends,modules/assim.sequential): | .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils .install/modules/benchmark $(call depends,modules/benchmark): | .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/base/utils .install/modules/data.land $(call depends,modules/data.atmosphere): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,modules/data.hydrology): | .install/base/logger .install/base/utils From 37d2957ee24f72300dfd82a13c1a65144bb834bf Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sun, 8 Nov 2020 08:25:48 -0500 Subject: [PATCH 1600/2289] update to DESCRIPTION and Roxygen --- modules/assim.sequential/DESCRIPTION | 20 ++++++++--------- modules/assim.sequential/R/Remote_helpers.R | 4 ++-- .../R/met_filtering_helpers.R | 1 + .../man/SDA_remote_launcher.Rd | 22 +++++++++++++++++++ modules/assim.sequential/man/alltocs.Rd | 12 ++++++++++ modules/assim.sequential/man/sample_met.Rd | 16 ++++++++++++++ 6 files changed, 63 insertions(+), 12 deletions(-) create mode 100644 modules/assim.sequential/man/sample_met.Rd diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index fe98afd94d0..10d01dc13de 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -11,38 +11,38 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific streamline the interaction between data and models, and to improve the efficacy of scientific investigation. Imports: - corrplot, - devtools, dplyr, future, - ggrepel, Matrix, ncdf4, - PEcAn.benchmark, PEcAn.DB, PEcAn.logger, PEcAn.remote, PEcAn.settings, PEcAn.utils, plyr (>= 1.8.4), - magic (>= 1.5.0), lubridate (>= 1.6.0), - plotrix, reshape2 (>= 1.4.2), - sf, sp, nimble, - tictoc, tidyr, purrr, furrr, XML, coda Suggests: - testthat + PEcAn.benchmark, + corrplot, + devtools, + ggrepel, + magic (>= 1.5.0), + plotrix, + sf, + testthat, + tictoc License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 7.0.2 +RoxygenNote: 7.1.1 diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R index 16fbcceca3f..3eaf82558b0 100644 --- a/modules/assim.sequential/R/Remote_helpers.R +++ b/modules/assim.sequential/R/Remote_helpers.R @@ -70,7 +70,7 @@ Obs.data.prepare.MultiSite <- function(obs.path, site.ids) { #' #' @export #' @return This function returns a list of two pieces of information. One the remote path that SDA is running and the PID of the active run. 
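(Background for the tag fix just below: roxygen2 treats the singular `@example` as the path to an external file of example code, while inline example code requires the plural `@examples` -- e.g., with a hypothetical file path:

    #' @example inst/examples/demo-launcher.R    # singular: expects a FILE PATH
    #' @examples
    #' SDA_remote_launcher(settingPath, ObsPath, run.bash.args)   # plural: inline code

hence the `@example` -> `@examples` changes in this patch.)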
-#' @example +#' @examples #' \dontrun{ #' # This example can be found under inst folder in the package #' library(PEcAn.all) @@ -443,7 +443,7 @@ Remote_Sync_launcher <- function(settingPath, remote.path, PID) { #' @export #' #' -#' @example +#' @examples #' \dontrun{ #' library(tictoc) #' tic("Analysis") diff --git a/modules/assim.sequential/R/met_filtering_helpers.R b/modules/assim.sequential/R/met_filtering_helpers.R index bdcaa3b5526..523d75bb685 100644 --- a/modules/assim.sequential/R/met_filtering_helpers.R +++ b/modules/assim.sequential/R/met_filtering_helpers.R @@ -1,3 +1,4 @@ +##' Sample meteorological ensembles ##' ##' @param settings PEcAn settings list ##' @param nens number of ensemble members to be sampled diff --git a/modules/assim.sequential/man/SDA_remote_launcher.Rd b/modules/assim.sequential/man/SDA_remote_launcher.Rd index 171b3f148ae..df2b5bb10fb 100644 --- a/modules/assim.sequential/man/SDA_remote_launcher.Rd +++ b/modules/assim.sequential/man/SDA_remote_launcher.Rd @@ -17,3 +17,25 @@ This function returns a list of two pieces of information. One the remote path t \description{ SDA_remote_launcher } +\examples{ +\dontrun{ + # This example can be found under inst folder in the package + library(PEcAn.all) + library(purrr) + + run.bash.args <- c( + "#$ -l h_rt=48:00:00", + "#$ -pe omp 28 # Request a parallel environment with 4 cores", + "#$ -l mem_per_core=1G # and 4G memory for each", + "#$ -l buyin", + "module load R/3.5.2", + "module load python/2.7.13" + ) + settingPath <-"pecan.SDA.4sites.xml" + + ObsPath <- "Obs/LandTrendr_AGB_output50s.RData" + + SDA_remote_launcher(settingPath, ObsPath, run.bash.args) +} + +} diff --git a/modules/assim.sequential/man/alltocs.Rd b/modules/assim.sequential/man/alltocs.Rd index f861a8f2b3e..d6c6ee51fb5 100644 --- a/modules/assim.sequential/man/alltocs.Rd +++ b/modules/assim.sequential/man/alltocs.Rd @@ -15,3 +15,15 @@ This function writes down a csv file with three columns: 1- message sepecified i \description{ This function finds all the tic functions called before and estimates the time elapsed for each one saves/appends it to a csv file. 
 }
+\examples{
+
+\dontrun{
+  library(tictoc)
+  tic("Analysis")
+  Sys.sleep(5)
+  testfunc()
+  tic("Adjustment")
+  Sys.sleep(4)
+  alltocs("timing.csv")
+}
+}
diff --git a/modules/assim.sequential/man/sample_met.Rd b/modules/assim.sequential/man/sample_met.Rd
new file mode 100644
index 00000000000..221637dbbd9
--- /dev/null
+++ b/modules/assim.sequential/man/sample_met.Rd
@@ -0,0 +1,16 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/met_filtering_helpers.R
+\name{sample_met}
+\alias{sample_met}
+\title{Sample meteorological ensembles}
+\usage{
+sample_met(settings, nens = 1)
+}
+\arguments{
+\item{settings}{PEcAn settings list}
+
+\item{nens}{number of ensemble members to be sampled}
+}
+\description{
+Sample meteorological ensembles
+}

From a0ed58c58d28789369bdfb1bff11098b852b77bc Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 08:44:10 -0500
Subject: [PATCH 1601/2289] rtm Namespace

---
 modules/rtm/NAMESPACE | 60 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 59 insertions(+), 1 deletion(-)

diff --git a/modules/rtm/NAMESPACE b/modules/rtm/NAMESPACE
index da197a7d8d6..b49ccad7f9f 100644
--- a/modules/rtm/NAMESPACE
+++ b/modules/rtm/NAMESPACE
@@ -1,3 +1,61 @@
 # Generated by roxygen2: do not edit by hand
 
-useDynLib(PEcAnRTM)
+S3method("[",spectra)
+S3method("[[",spectra)
+S3method("[[<-",spectra)
+S3method(cbind,spectra)
+S3method(matplot,default)
+S3method(matplot,spectra)
+S3method(neff,default)
+S3method(neff,matrix)
+S3method(plot,spectra)
+S3method(print,spectra)
+S3method(resample,default)
+S3method(resample,matrix)
+S3method(resample,spectra)
+S3method(str,spectra)
+export(EDR)
+export(EDR.preprocess.history)
+export(burnin.thin)
+export(check.convergence)
+export(default.settings.prospect)
+export(defparam)
+export(dtnorm)
+export(fortran_data_module)
+export(foursail)
+export(generalized_plate_model)
+export(generate.noise)
+export(get.EDR.output)
+export(invert.auto)
+export(invert.custom)
+export(invert.lsq)
+export(invert_bt)
+export(is_spectra)
+export(load.from.name)
+export(lognorm.mu)
+export(lognorm.sigma)
+export(matplot)
+export(neff)
+export(params.prospect4)
+export(params.prospect5)
+export(params.prospect5b)
+export(params.prospectd)
+export(params2edr)
+export(print_results_summary)
+export(prior.defaultvals.prospect)
+export(priorfunc.prospect)
+export(pro2s)
+export(pro4sail)
+export(pro4saild)
+export(prospect)
+export(prospect_bt_prior)
+export(resample)
+export(rtnorm)
+export(sensor.list)
+export(sensor.proper)
+export(setup_edr)
+export(spectra)
+export(spectral.response)
+export(summary_mvnorm)
+export(summary_simple)
+useDynLib(PEcAnRTM)
\ No newline at end of file

From a4f26b5949116efc15c3926a394fd99715ba97a0 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 08:48:38 -0500
Subject: [PATCH 1602/2289] SDA namespace

---
 modules/assim.sequential/NAMESPACE | 1 -
 1 file changed, 1 deletion(-)

diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE
index 68660175397..894feba25d7 100644
--- a/modules/assim.sequential/NAMESPACE
+++ b/modules/assim.sequential/NAMESPACE
@@ -49,4 +49,3 @@ export(y_star_create)
 import(furrr)
 import(lubridate)
 import(nimble)
-import(tictoc)

From b68429f00d21c2077efca2d368c33a527d615135 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 09:12:40 -0500
Subject: [PATCH 1603/2289] tictoc

---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 14 +++++++--------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index 04a4fe3f8be..cb3ac3e0a4c 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -18,7 +18,7 @@
 #' @description State Variable Data Assimilation: Ensemble Kalman Filter and Generalized ensemble filter. Check out SDA_control function for more details on the control arguments.
 #'
 #' @return NONE
-#' @import nimble tictoc furrr
+#' @import nimble furrr
 #' @export
 #'
 sda.enkf.multisite <- function(settings,
@@ -42,7 +42,7 @@ sda.enkf.multisite <- function(settings,
                                ...) {
   future::plan(multiprocess)
   if (control$debug) browser()
-  tic("Prepration")
+  tictoc::tic("Preparation")
   ###-------------------------------------------------------------------###
   ### read settings                                                      ###
   ###-------------------------------------------------------------------###
@@ -267,7 +267,7 @@ sda.enkf.multisite <- function(settings,
     # if it beaks at least save the trace
     #tryCatch({
-    tic(paste0("Writing configs for cycle = ", t))
+    tictoc::tic(paste0("Writing configs for cycle = ", t))
     # do we have obs for this time - what year is it ?
     obs <- which(!is.na(obs.mean[[t]]))
     obs.t<-names(obs.mean)[t]
@@ -353,7 +353,7 @@ sda.enkf.multisite <- function(settings,
     #if(t==1) inputs <- out.configs %>% map(~.x[['samples']][['met']]) # for any time after t==1 the met is the splitted met
     #-------------------------------------------- RUN
-    tic(paste0("Running models for cycle = ", t))
+    tictoc::tic(paste0("Running models for cycle = ", t))
     if (control$debug) browser()
     PEcAn.remote::start.model.runs(settings, settings$database$bety$write)
@@ -381,7 +381,7 @@ sda.enkf.multisite <- function(settings,
       }
     }
-    tic(paste0("Preparing for Analysis for cycle = ", t))
+    tictoc::tic(paste0("Preparing for Analysis for cycle = ", t))
     #------------------------------------------- Reading the output
     if (control$debug) browser()
     #--- Reading just the first run when we have all years and for VIS
@@ -491,7 +491,7 @@ sda.enkf.multisite <- function(settings,
     ### Analysis                                                           ###
     ###-------------------------------------------------------------------###----
-    tic(paste0("Analysis for cycle = ", t))
+    tictoc::tic(paste0("Analysis for cycle = ", t))
     if(processvar == FALSE){an.method<-EnKF.MultiSite
     }else{
       an.method<-GEF.MultiSite
     }
@@ -519,7 +519,7 @@ sda.enkf.multisite <- function(settings,
                  blocked.dis = blocked.dis,
                  distances = distances
       )
-    tic(paste0("Preparing for Adjustment for cycle = ", t))
+    tictoc::tic(paste0("Preparing for Adjustment for cycle = ", t))
     #Forecast
     mu.f <- enkf.params[[obs.t]]$mu.f
     Pf <- enkf.params[[obs.t]]$Pf

From c3b70d55bb691e9774a7e5e585ca468f082738f5 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 09:14:32 -0500
Subject: [PATCH 1604/2289] RTM namespace

---
 modules/rtm/NAMESPACE | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/rtm/NAMESPACE b/modules/rtm/NAMESPACE
index b49ccad7f9f..0a0a7407af9 100644
--- a/modules/rtm/NAMESPACE
+++ b/modules/rtm/NAMESPACE
@@ -58,4 +58,5 @@ export(spectra)
 export(spectral.response)
 export(summary_mvnorm)
 export(summary_simple)
-useDynLib(PEcAnRTM)
\ No newline at end of file
+export(wavelengths)
+useDynLib(PEcAnRTM)

From 095be951187d14abe921c727a5ef16fa706aae45 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 09:20:01 -0500
Subject: [PATCH 1605/2289] gridExtra

---
 modules/assim.sequential/R/sda.enkf.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf.R b/modules/assim.sequential/R/sda.enkf.R
index 09f124511f7..585b2a552ab 100644
--- a/modules/assim.sequential/R/sda.enkf.R
+++ b/modules/assim.sequential/R/sda.enkf.R
@@ -1293,7 +1293,7 @@ for(t in seq_len(nt)) { #
                           r2 = PEcAn.benchmark::metric_R2(dat),
                           rae = PEcAn.benchmark::metric_RAE(dat),
                           ame = PEcAn.benchmark::metric_AME(dat))
-        require(gridExtra)
+##        require(gridExtra)
         plot1 <- PEcAn.benchmark::metric_residual_plot(dat, var = colnames(Ybar)[i])
         plot2 <- PEcAn.benchmark::metric_scatter_plot(dat, var = colnames(Ybar)[i])
         #PEcAn.benchmark::metric_lmDiag_plot(dat, var = colnames(Ybar)[i])
@@ -1301,8 +1301,8 @@ for(t in seq_len(nt)) { #
         text = paste("\n The following is text that'll appear in a plot window.\n",
                      " As you can see, it's in the plot window\n",
                      " One might imagine useful informaiton here")
-        ss <- tableGrob(signif(dat.stats,digits = 3))
-        grid.arrange(plot1,plot2,plot3,ss,ncol=2)
+        ss <- gridExtra::tableGrob(signif(dat.stats,digits = 3))
+        gridExtra::grid.arrange(plot1,plot2,plot3,ss,ncol=2)

From 1717853ec6151d5a8f53c6b087af4389eeffb8be Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 09:36:14 -0500
Subject: [PATCH 1606/2289] various package namespace issues

---
 .../R/Analysis_sda_multiSite.R        |  2 +-
 modules/assim.sequential/R/sda.enkf.R | 49 +++++++++----------
 2 files changed, 23 insertions(+), 28 deletions(-)

diff --git a/modules/assim.sequential/R/Analysis_sda_multiSite.R b/modules/assim.sequential/R/Analysis_sda_multiSite.R
index fae437e2c86..f391353f45e 100644
--- a/modules/assim.sequential/R/Analysis_sda_multiSite.R
+++ b/modules/assim.sequential/R/Analysis_sda_multiSite.R
@@ -92,7 +92,7 @@ GEF.MultiSite<-function(setting, Forecast, Observed, H, extraArg,...){
     q.type <- ifelse(q.type == "SITE", Site.q, pft.q)
   }
   #Loading nimbles functions
-  if (!exists('GEF.MultiSite.Nimble')) PEcAn.assim.sequential::load_nimble()
+  if (!exists('GEF.MultiSite.Nimble')) load_nimble() ## not using namespace because internal to PEcAn.assim.sequential
   #load_nimble()
   #Forecast inputs
   Q <- Forecast$Q # process error
diff --git a/modules/assim.sequential/R/sda.enkf.R b/modules/assim.sequential/R/sda.enkf.R
index 585b2a552ab..caf3315268d 100644
--- a/modules/assim.sequential/R/sda.enkf.R
+++ b/modules/assim.sequential/R/sda.enkf.R
@@ -21,8 +21,6 @@
 sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
                               adjustment = TRUE, restart=NULL) {
-  library(nimble)
-
   ymd_hms <- lubridate::ymd_hms
   hms <- lubridate::hms
   second <- lubridate::second
@@ -371,14 +369,14 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
       (Om[i, j]^2 + Om[i, i] * Om[j, j]) / var(X[, col])
   }
 
-  sampler_toggle <- nimbleFunction(
+  sampler_toggle <- nimble::nimbleFunction(
     contains = sampler_BASE,
     setup = function(model, mvSaved, target, control) {
       type <- control$type
       nested_sampler_name <- paste0('sampler_', type)
-      control_new <- nimbleOptions('MCMCcontrolDefaultList')
+      control_new <- nimble::nimbleOptions('MCMCcontrolDefaultList')
      control_new[[names(control)]] <- control
-      nested_sampler_list <- nimbleFunctionList(sampler_BASE)
+      nested_sampler_list <- nimble::nimbleFunctionList(sampler_BASE)
       nested_sampler_list[[1]] <- do.call(nested_sampler_name, list(model, mvSaved, target, control_new))
       toggle <- 1
     },
@@ -393,7 +391,7 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
   )
 
   if(var.names=="Fcomp"){
-    y_star_create <- nimbleFunction(
+    y_star_create <- nimble::nimbleFunction(
       run =
        function(X = double(1)) {
          returnType(double(1))
@@ -404,7 +402,7 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
          return(y_star)
        })
   }else{
-    y_star_create <- nimbleFunction(
+    y_star_create <- nimble::nimbleFunction(
       run =
        function(X = double(1)) {
          returnType(double(1))
@@ -415,7 +413,7 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
   }
 
-  tobit.model <- nimbleCode({
+  tobit.model <- nimble::nimbleCode({
 
     q[1:N,1:N]  ~ dwish(R = aq[1:N,1:N], df = bq) ## aq and bq are estimated over time
     Q[1:N,1:N] <- inverse(q[1:N,1:N])
@@ -439,7 +437,7 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
 
   })
 
-  tobit2space.model <- nimbleCode({
+  tobit2space.model <- nimble::nimbleCode({
     for(i in 1:N){
       y.censored[i,1:J] ~ dmnorm(muf[1:J], cov = pf[1:J,1:J])
       for(j in 1:J){
@@ -454,7 +452,7 @@ sda.enkf.original <- function(settings, obs.mean, obs.cov, IC = NULL, Q = NULL,
 
   })
 
-  tobit2space.model <- nimbleCode({
+  tobit2space.model <- nimble::nimbleCode({
     for(i in 1:N){
       y.censored[i,1:J] ~ dmnorm(muf[1:J], cov = pf[1:J,1:J])
       for(j in 1:J){
@@ -683,7 +681,7 @@ for(t in seq_len(nt)) { #
     inits.tobit2space = list(pf = Pf, muf = colMeans(X)) #pf = cov(X)
     #set.seed(0)
     #ptm <- proc.time()
-    tobit2space_pred <- nimbleModel(tobit2space.model, data = data.tobit2space,
+    tobit2space_pred <- nimble::nimbleModel(tobit2space.model, data = data.tobit2space,
                                     constants = constants.tobit2space, inits = inits.tobit2space,
                                     name = 'space')
     ## Adding X.mod,q,r as data for building model.
@@ -705,16 +703,16 @@ for(t in seq_len(nt)) { #
 
     #conf_tobit2space$printSamplers()
 
-    Rmcmc_tobit2space <- buildMCMC(conf_tobit2space)
+    Rmcmc_tobit2space <- nimble::buildMCMC(conf_tobit2space)
 
-    Cmodel_tobit2space <- compileNimble(tobit2space_pred)
-    Cmcmc_tobit2space <- compileNimble(Rmcmc_tobit2space, project = tobit2space_pred)
+    Cmodel_tobit2space <- nimble::compileNimble(tobit2space_pred)
+    Cmcmc_tobit2space <- nimble::compileNimble(Rmcmc_tobit2space, project = tobit2space_pred)
 
     for(i in seq_along(X)) {
       ## ironically, here we have to "toggle" the value of y.ind[i]
       ## this specifies that when y.ind[i] = 1,
       ## indicator variable is set to 0, which specifies *not* to sample
-      valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i])
+      nimble::valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i])
     }
 
   }else{
@@ -728,7 +726,7 @@ for(t in seq_len(nt)) { #
       ## ironically, here we have to "toggle" the value of y.ind[i]
       ## this specifies that when y.ind[i] = 1,
       ## indicator variable is set to 0, which specifies *not* to sample
-      valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i])
+      nimble::valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i])
     }
 
   }
@@ -801,7 +799,7 @@ for(t in seq_len(nt)) { #
     inits.pred = list(q = diag(length(mu.f)), X.mod = as.vector(mu.f),
                       X = rnorm(length(mu.f),0,1)) #
 
-    model_pred <- nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit,
+    model_pred <- nimble::nimbleModel(tobit.model, data = data.tobit, dimensions = dimensions.tobit,
                               constants = constants.tobit, inits = inits.pred,
                               name = 'base')
     ## Adding X.mod,q,r as data for building model.
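[Editorial aside between hunks: the nimble calls being qualified throughout this commit follow one standard workflow -- define model code, build, configure, compile, then sample. A minimal sketch of that workflow using only `nimble::`-qualified calls; the toy model and data below are illustrative, not from the PEcAn sources:

  # a trivial normal-mean model, analogous in structure to the tobit models above
  code <- nimble::nimbleCode({
    mu ~ dnorm(0, sd = 10)                     # prior on the mean
    for (i in 1:N) y[i] ~ dnorm(mu, sd = 1)    # likelihood
  })
  model <- nimble::nimbleModel(code,
                               constants = list(N = 5),
                               data = list(y = rnorm(5)),
                               inits = list(mu = 0))
  conf    <- nimble::configureMCMC(model)
  rmcmc   <- nimble::buildMCMC(conf)
  cmodel  <- nimble::compileNimble(model)
  cmcmc   <- nimble::compileNimble(rmcmc, project = model)
  samples <- nimble::runMCMC(cmcmc, niter = 1000)  # no library(nimble) needed

This is why the commit can drop library(nimble) from the function body: every call resolves through the package namespace, which is what R CMD check expects of package code.]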
@@ -824,16 +822,16 @@ for(t in seq_len(nt)) { #
     ## can monitor y.censored, if you wish, to verify correct behaviour
     #conf$addMonitors('y.censored')
 
-    Rmcmc <- buildMCMC(conf)
+    Rmcmc <- nimble::buildMCMC(conf)
 
-    Cmodel <- compileNimble(model_pred)
-    Cmcmc <- compileNimble(Rmcmc, project = model_pred)
+    Cmodel <- nimble::compileNimble(model_pred)
+    Cmcmc <- nimble::compileNimble(Rmcmc, project = model_pred)
 
     for(i in 1:length(y.ind)) {
       ## ironically, here we have to "toggle" the value of y.ind[i]
       ## this specifies that when y.ind[i] = 1,
       ## indicator variable is set to 0, which specifies *not* to sample
-      valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i])
+      nimble::valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i])
     }
 
   }else{
@@ -853,7 +851,7 @@ for(t in seq_len(nt)) { #
       ## ironically, here we have to "toggle" the value of y.ind[i]
       ## this specifies that when y.ind[i] = 1,
       ## indicator variable is set to 0, which specifies *not* to sample
-      valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i])
+      nimble::valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i])
     }
 
   }
@@ -1293,7 +1291,6 @@ for(t in seq_len(nt)) { #
                           r2 = PEcAn.benchmark::metric_R2(dat),
                           rae = PEcAn.benchmark::metric_RAE(dat),
                           ame = PEcAn.benchmark::metric_AME(dat))
-##        require(gridExtra)
         plot1 <- PEcAn.benchmark::metric_residual_plot(dat, var = colnames(Ybar)[i])
         plot2 <- PEcAn.benchmark::metric_scatter_plot(dat, var = colnames(Ybar)[i])
         #PEcAn.benchmark::metric_lmDiag_plot(dat, var = colnames(Ybar)[i])
@@ -1313,14 +1310,13 @@ for(t in seq_len(nt)) { #
   ###-------------------------------------------------------------------###
 
   if (processvar) {
-    library(corrplot)
     pdf('process.var.plots.pdf')
 
     cor.mat <- cov2cor(solve(enkf.params[[t]]$q.bar))
     colnames(cor.mat) <- colnames(X)
     rownames(cor.mat) <- colnames(X)
     par(mfrow = c(1, 1), mai = c(1, 1, 4, 1))
-    corrplot(cor.mat, type = "upper", tl.srt = 45,order='FPC')
+    corrplot::corrplot(cor.mat, type = "upper", tl.srt = 45,order='FPC')
 
     par(mfrow=c(1,1))
     plot(as.Date(obs.times[t1:t]),
          unlist(lapply(enkf.params,'[[','n')),
@@ -1494,14 +1490,13 @@ for(t in seq_len(nt)) { #
   ###-------------------------------------------------------------------###
 
   if (processvar) {
-    library(corrplot)
     pdf('process.var.plots.pdf')
 
     cor.mat <- cov2cor(aqq[t,,] / bqq[t])
     colnames(cor.mat) <- colnames(X)
     rownames(cor.mat) <- colnames(X)
     par(mfrow = c(1, 1), mai = c(1, 1, 4, 1))
-    corrplot(cor.mat, type = "upper", tl.srt = 45,order='FPC')
+    corrplot::corrplot(cor.mat, type = "upper", tl.srt = 45,order='FPC')
 
     par(mfrow=c(1,1))
     plot(as.Date(obs.times[t1:t]),
          bqq[t1:t],

From acac2eea6e9db1c79e239a2be16c3cbe162f511f Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 09:38:11 -0500
Subject: [PATCH 1607/2289] more tictoc

---
 modules/assim.sequential/R/sda.enkf_MultiSite.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_MultiSite.R b/modules/assim.sequential/R/sda.enkf_MultiSite.R
index cb3ac3e0a4c..2e2109a0224 100755
--- a/modules/assim.sequential/R/sda.enkf_MultiSite.R
+++ b/modules/assim.sequential/R/sda.enkf_MultiSite.R
@@ -590,7 +590,7 @@ sda.enkf.multisite <- function(settings,
     ###-------------------------------------------------------------------###
     ### adjustement/update state matrix                                    ###
     ###-------------------------------------------------------------------###----
-    tic(paste0("Adjustment for cycle = ", t))
+    tictoc::tic(paste0("Adjustment for cycle = ", t))
     if(adjustment == TRUE){
       analysis <-adj.ens(Pf, X, mu.f, mu.a, Pa)
     } else {
@@ -625,7 +625,7 @@ sda.enkf.multisite <- function(settings,
          out.configs, ensemble.samples, inputs, Viz.output,
          file = file.path(settings$outdir,"SDA", "sda.output.Rdata"))
 
-    tic(paste0("Visulization for cycle = ", t))
+    tictoc::tic(paste0("Visualization for cycle = ", t))
 
     #writing down the image - either you asked for it or nor :)

From fe150875fed5e265ab2710307f25f203f047302b Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 09:40:25 -0500
Subject: [PATCH 1608/2289] toc

---
 modules/assim.sequential/R/Remote_helpers.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/Remote_helpers.R b/modules/assim.sequential/R/Remote_helpers.R
index 3eaf82558b0..cace77c4487 100644
--- a/modules/assim.sequential/R/Remote_helpers.R
+++ b/modules/assim.sequential/R/Remote_helpers.R
@@ -460,7 +460,7 @@ alltocs <-function(fname="tocs.csv") {
       get(".tictoc", envir = baseenv())) %>%
     seq_along() %>%
     map_dfr(function(x) {
-      s <- toc(quiet = T, log = T)
+      s <- tictoc::toc(quiet = T, log = T)
       dfout <- data.frame(
         Task = s$msg %>% as.character(),
         TimeElapsed = round(s$toc - s$tic, 1),

From 1a677e3da45287dce1d2d1f473ab526a82df50d4 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 10:01:59 -0500
Subject: [PATCH 1609/2289] doc

---
 modules/assim.sequential/R/Analysis_sda_multiSite.R |  2 +-
 modules/assim.sequential/R/Nimble_codes.R           | 13 ++++++++++++-
 modules/assim.sequential/man/load_nimble.Rd         |  3 +++
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/modules/assim.sequential/R/Analysis_sda_multiSite.R b/modules/assim.sequential/R/Analysis_sda_multiSite.R
index f391353f45e..5ee5c025637 100644
--- a/modules/assim.sequential/R/Analysis_sda_multiSite.R
+++ b/modules/assim.sequential/R/Analysis_sda_multiSite.R
@@ -92,7 +92,7 @@ GEF.MultiSite<-function(setting, Forecast, Observed, H, extraArg,...){
     q.type <- ifelse(q.type == "SITE", Site.q, pft.q)
   }
   #Loading nimbles functions
-  if (!exists('GEF.MultiSite.Nimble')) load_nimble() ## not using namespace because internal to PEcAn.assim.sequential
+  #if (!exists('GEF.MultiSite.Nimble')) load_nimble()
   #load_nimble()
   #Forecast inputs
   Q <- Forecast$Q # process error
diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R
index 26e24ef5123..c1231e02bdc 100644
--- a/modules/assim.sequential/R/Nimble_codes.R
+++ b/modules/assim.sequential/R/Nimble_codes.R
@@ -5,7 +5,7 @@
 ##' @description This functions is internally used to register a series of nimble functions inside GEF analysis function.
 ##'
 #' @import nimble
-#'
+#' @param X state var
 #'
 #' @export
 y_star_create <- nimbleFunction(
@@ -18,6 +18,7 @@ y_star_create <- nimbleFunction(
   }
 )
 
+#' @param y state var
 #' @export
 alr <- nimbleFunction(
   run = function(y = double(1)) {
@@ -31,6 +32,7 @@ alr <- nimbleFunction(
   }
 )
 
+#' @param alr state var
 #' @export
 inv.alr <- nimbleFunction(
   run = function(alr = double(1)) {
@@ -42,6 +44,10 @@ inv.alr <- nimbleFunction(
   }
 )
 
+#' @param n sample size
+#' @param mean mean
+#' @param prec precision
+#' @param wt weight
 #' @export
 rwtmnorm <- nimbleFunction(
   run = function(n = integer(0),
@@ -57,6 +63,11 @@ rwtmnorm <- nimbleFunction(
   }
 )
 
+#' @param n sample size
+#' @param mean mean
+#' @param prec precision
+#' @param wt weight
+#' @param log log
 #' @export
 dwtmnorm <- nimbleFunction(
   run = function(x = double(1),
diff --git a/modules/assim.sequential/man/load_nimble.Rd b/modules/assim.sequential/man/load_nimble.Rd
index 903fc4f0372..cceeacc4f46 100644
--- a/modules/assim.sequential/man/load_nimble.Rd
+++ b/modules/assim.sequential/man/load_nimble.Rd
@@ -7,6 +7,9 @@
 \usage{
 y_star_create(X)
 }
+\arguments{
+\item{X}{state var}
+}
 \description{
 This functions is internally used to register a series of nimble functions inside GEF analysis function.
 }

From 8897f69912f8b0892aca75aad095c82593ae0aed Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 10:19:17 -0500
Subject: [PATCH 1610/2289] gridExtra

---
 modules/assim.sequential/R/sda.enkf.R | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf.R b/modules/assim.sequential/R/sda.enkf.R
index caf3315268d..355c4fa72ab 100644
--- a/modules/assim.sequential/R/sda.enkf.R
+++ b/modules/assim.sequential/R/sda.enkf.R
@@ -1298,9 +1298,14 @@ for(t in seq_len(nt)) { #
         text = paste("\n The following is text that'll appear in a plot window.\n",
                      " As you can see, it's in the plot window\n",
                      " One might imagine useful informaiton here")
-        ss <- gridExtra::tableGrob(signif(dat.stats,digits = 3))
-        grid.arrange(plot1,plot2,plot3,ss,ncol=2)
-
+        if(require(gridExtra)){
+          ss <- gridExtra::tableGrob(signif(dat.stats,digits = 3))
+          gridExtra::grid.arrange(plot1,plot2,plot3,ss,ncol=2)
+        } else {
+          print(plot1)
+          print(plot2)
+          print(plot3)
+        }
       }
 
       dev.off()

From 91ba20becc79ed2fdbb50a14bd3db31f6c4e3f73 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 10:21:15 -0500
Subject: [PATCH 1611/2289] add methods to Description

---
 modules/assim.sequential/DESCRIPTION | 1 +
 1 file changed, 1 insertion(+)

diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION
index 10d01dc13de..8af2bbedbd9 100644
--- a/modules/assim.sequential/DESCRIPTION
+++ b/modules/assim.sequential/DESCRIPTION
@@ -36,6 +36,7 @@ Suggests:
   devtools,
   ggrepel,
   magic (>= 1.5.0),
+  methods,
   plotrix,
   sf,
   testthat,

From 9646b6b523fd5a8eb313e6e56000597d7706ccff Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 10:27:51 -0500
Subject: [PATCH 1612/2289] Document nimble codes

---
 modules/assim.sequential/R/Nimble_codes.R | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R
index c1231e02bdc..e14bbeb5dc5 100644
--- a/modules/assim.sequential/R/Nimble_codes.R
+++ b/modules/assim.sequential/R/Nimble_codes.R
@@ -18,6 +18,7 @@ y_star_create <- nimbleFunction(
   }
 )
 
+#' Additive Log Ratio transform
 #' @param y state var
 #' @export
 alr <- nimbleFunction(
@@ -32,6 +33,7 @@ alr <- nimbleFunction(
   }
 )
 
+#' inverse of ALR transform
 #' @param alr state var
 #' @export
 inv.alr <- nimbleFunction(
@@ -44,6 +46,7 @@ inv.alr <- nimbleFunction(
   }
 )
 
+#' random weighted multivariate normal
 #' @param n sample size
 #' @param mean mean
 #' @param prec precision
 #' @param wt weight
@@ -63,6 +66,7 @@ rwtmnorm <- nimbleFunction(
   }
 )
 
+#' weighted multivariate normal density
 #' @param n sample size
 #' @param mean mean
 #' @param prec precision
 #' @param wt weight
 #' @param log log
@@ -105,6 +109,7 @@ registerDistributions(list(dwtmnorm = list(
 )))
 
 #tobit2space.model------------------------------------------------------------------------------------------------
+#' Fit tobit prior to ensemble members
 #' @export
 tobit2space.model <- nimbleCode({
@@ -122,6 +127,7 @@ tobit2space.model <- nimbleCode({
 })
 
 #tobit.model--This does the GEF ----------------------------------------------------
+#' TWEnF
 #' @export
 tobit.model <- nimbleCode({
@@ -164,6 +170,7 @@ tobit.model <- nimbleCode({
 })
 
 #tobit.model--This does the GEF for multi Site -------------------------------------
+#' multisite TWEnF
 #' @export
 GEF.MultiSite.Nimble <- nimbleCode({
@@ -208,6 +215,7 @@ GEF.MultiSite.Nimble <- nimbleCode({
 })
 
 #sampler_toggle------------------------------------------------------------------------------------------------
+#' sampler toggling
 #' @export
 sampler_toggle <- nimbleFunction(
@@ -232,6 +240,7 @@ sampler_toggle <- nimbleFunction(
 )
 
+#' Weighted conjugate wishart
 #' @export
 conj_wt_wishart_sampler <- nimbleFunction(
   contains = sampler_BASE,

From 029ec5ae271903b0ffe7f496b61ccb984555069d Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 10:30:28 -0500
Subject: [PATCH 1613/2289] roxygen for nimble functions

---
 .../man/GEF.MultiSite.Nimble.Rd               | 14 ++++++++++++
 modules/assim.sequential/man/alr.Rd           | 14 ++++++++++++
 .../man/conj_wt_wishart_sampler.Rd            | 11 ++++++++++
 modules/assim.sequential/man/dwtmnorm.Rd      | 22 +++++++++++++++++++
 modules/assim.sequential/man/inv.alr.Rd       | 14 ++++++++++++
 modules/assim.sequential/man/rwtmnorm.Rd      | 20 +++++++++++++++++
 .../assim.sequential/man/sampler_toggle.Rd    | 11 ++++++++++
 modules/assim.sequential/man/tobit.model.Rd   | 14 ++++++++++++
 .../assim.sequential/man/tobit2space.model.Rd | 14 ++++++++++++
 9 files changed, 134 insertions(+)
 create mode 100644 modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd
 create mode 100644 modules/assim.sequential/man/alr.Rd
 create mode 100644 modules/assim.sequential/man/conj_wt_wishart_sampler.Rd
 create mode 100644 modules/assim.sequential/man/dwtmnorm.Rd
 create mode 100644 modules/assim.sequential/man/inv.alr.Rd
 create mode 100644 modules/assim.sequential/man/rwtmnorm.Rd
 create mode 100644 modules/assim.sequential/man/sampler_toggle.Rd
 create mode 100644 modules/assim.sequential/man/tobit.model.Rd
 create mode 100644 modules/assim.sequential/man/tobit2space.model.Rd

diff --git a/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd b/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd
new file mode 100644
index 00000000000..f16534c31ac
--- /dev/null
+++ b/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\docType{data}
+\name{GEF.MultiSite.Nimble}
+\alias{GEF.MultiSite.Nimble}
+\title{multisite TWEnF}
+\format{An object of class \code{{} of length 9.}
+\usage{
+GEF.MultiSite.Nimble
+}
+\description{
+multisite TWEnF
+}
+\keyword{datasets}
diff --git a/modules/assim.sequential/man/alr.Rd b/modules/assim.sequential/man/alr.Rd
new file mode 100644
index 00000000000..41f5f73ebb4
--- /dev/null
+++ b/modules/assim.sequential/man/alr.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\name{alr}
+\alias{alr}
+\title{Additive Log Ratio transform}
+\usage{
+alr(y)
+}
+\arguments{
+\item{y}{state var}
+}
+\description{
+Additive Log Ratio transform
+}
diff --git a/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd b/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd
new file mode 100644
index 00000000000..2ed48aed3d1
--- /dev/null
+++ b/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd
@@ -0,0 +1,11 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\name{conj_wt_wishart_sampler}
+\alias{conj_wt_wishart_sampler}
+\title{Weighted conjugate wishart}
+\usage{
+conj_wt_wishart_sampler(model, mvSaved, target, control)
+}
+\description{
+Weighted conjugate wishart
+}
diff --git a/modules/assim.sequential/man/dwtmnorm.Rd b/modules/assim.sequential/man/dwtmnorm.Rd
new file mode 100644
index 00000000000..71cd01c7f28
--- /dev/null
+++ b/modules/assim.sequential/man/dwtmnorm.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\name{dwtmnorm}
+\alias{dwtmnorm}
+\title{weighted multivariate normal density}
+\usage{
+dwtmnorm(x, mean, prec, wt, log = 0)
+}
+\arguments{
+\item{mean}{mean}
+
+\item{prec}{precision}
+
+\item{wt}{weight}
+
+\item{log}{log}
+
+\item{n}{sample size}
+}
+\description{
+weighted multivariate normal density
+}
diff --git a/modules/assim.sequential/man/inv.alr.Rd b/modules/assim.sequential/man/inv.alr.Rd
new file mode 100644
index 00000000000..370e956f7af
--- /dev/null
+++ b/modules/assim.sequential/man/inv.alr.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\name{inv.alr}
+\alias{inv.alr}
+\title{inverse of ALR transform}
+\usage{
+inv.alr(alr)
+}
+\arguments{
+\item{alr}{state var}
+}
+\description{
+inverse of ALR transform
+}
diff --git a/modules/assim.sequential/man/rwtmnorm.Rd b/modules/assim.sequential/man/rwtmnorm.Rd
new file mode 100644
index 00000000000..97a9c147c73
--- /dev/null
+++ b/modules/assim.sequential/man/rwtmnorm.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\name{rwtmnorm}
+\alias{rwtmnorm}
+\title{random weighted multivariate normal}
+\usage{
+rwtmnorm(n, mean, prec, wt)
+}
+\arguments{
+\item{n}{sample size}
+
+\item{mean}{mean}
+
+\item{prec}{precision}
+
+\item{wt}{weight}
+}
+\description{
+random weighted multivariate normal
+}
diff --git a/modules/assim.sequential/man/sampler_toggle.Rd b/modules/assim.sequential/man/sampler_toggle.Rd
new file mode 100644
index 00000000000..95e7fce8ddb
--- /dev/null
+++ b/modules/assim.sequential/man/sampler_toggle.Rd
@@ -0,0 +1,11 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\name{sampler_toggle}
+\alias{sampler_toggle}
+\title{sampler toggling}
+\usage{
+sampler_toggle(model, mvSaved, target, control)
+}
+\description{
+sampler toggling
+}
diff --git a/modules/assim.sequential/man/tobit.model.Rd b/modules/assim.sequential/man/tobit.model.Rd
new file mode 100644
index 00000000000..f648dfa87ff
--- /dev/null
+++ b/modules/assim.sequential/man/tobit.model.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\docType{data}
+\name{tobit.model}
+\alias{tobit.model}
+\title{TWEnF}
+\format{An object of class \code{{} of length 10.}
+\usage{
+tobit.model
+}
+\description{
+TWEnF
+}
+\keyword{datasets}
diff --git a/modules/assim.sequential/man/tobit2space.model.Rd b/modules/assim.sequential/man/tobit2space.model.Rd
new file mode 100644
index 00000000000..54ec55453ca
--- /dev/null
+++ b/modules/assim.sequential/man/tobit2space.model.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/Nimble_codes.R
+\docType{data}
+\name{tobit2space.model}
+\alias{tobit2space.model}
+\title{Fit tobit prior to ensemble members}
+\format{An object of class \code{{} of length 4.}
+\usage{
+tobit2space.model
+}
+\description{
+Fit tobit prior to ensemble members
+}
+\keyword{datasets}

From 3a12b5f7370152416ffaf5706b76ae7a410b84fe Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 11:11:56 -0500
Subject: [PATCH 1614/2289] typo in autogenerated NimbleCode @format; created a
 dummy entry

---
 modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd | 2 +-
 modules/assim.sequential/man/tobit.model.Rd          | 2 +-
 modules/assim.sequential/man/tobit2space.model.Rd    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd b/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd
index f16534c31ac..bb3c667240a 100644
--- a/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd
+++ b/modules/assim.sequential/man/GEF.MultiSite.Nimble.Rd
@@ -4,7 +4,7 @@
 \name{GEF.MultiSite.Nimble}
 \alias{GEF.MultiSite.Nimble}
 \title{multisite TWEnF}
-\format{An object of class \code{{} of length 9.}
+\format{TBD}
 \usage{
 GEF.MultiSite.Nimble
 }
diff --git a/modules/assim.sequential/man/tobit.model.Rd b/modules/assim.sequential/man/tobit.model.Rd
index f648dfa87ff..51ccd47fb09 100644
--- a/modules/assim.sequential/man/tobit.model.Rd
+++ b/modules/assim.sequential/man/tobit.model.Rd
@@ -4,7 +4,7 @@
 \name{tobit.model}
 \alias{tobit.model}
 \title{TWEnF}
-\format{An object of class \code{{} of length 10.}
+\format{TBD}
 \usage{
 tobit.model
 }
diff --git a/modules/assim.sequential/man/tobit2space.model.Rd b/modules/assim.sequential/man/tobit2space.model.Rd
index 54ec55453ca..018972d0b5c 100644
--- a/modules/assim.sequential/man/tobit2space.model.Rd
+++ b/modules/assim.sequential/man/tobit2space.model.Rd
@@ -4,7 +4,7 @@
 \name{tobit2space.model}
 \alias{tobit2space.model}
 \title{Fit tobit prior to ensemble members}
-\format{An object of class \code{{} of length 4.}
+\format{TBD}
 \usage{
 tobit2space.model
 }

From 6f73f3fa65c07bff8e82ae00239533be8050582f Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 11:51:39 -0500
Subject: [PATCH 1615/2289] fix to nimble function docs and gridExtra

---
 modules/assim.sequential/DESCRIPTION                    | 1 +
 modules/assim.sequential/R/Nimble_codes.R               | 9 ++++++++-
 modules/assim.sequential/R/sda.enkf.R                   | 2 +-
 modules/assim.sequential/man/conj_wt_wishart_sampler.Rd | 9 +++++++++
 modules/assim.sequential/man/dwtmnorm.Rd                | 4 ++--
 5 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION
index 8af2bbedbd9..bcc152cf358 100644
--- a/modules/assim.sequential/DESCRIPTION
+++ b/modules/assim.sequential/DESCRIPTION
@@ -35,6 +35,7 @@ Suggests:
   corrplot,
   devtools,
   ggrepel,
+  gridExtra,
   magic (>= 1.5.0),
   methods,
   plotrix,
diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R
index e14bbeb5dc5..e4011518ab8 100644
--- a/modules/assim.sequential/R/Nimble_codes.R
+++ b/modules/assim.sequential/R/Nimble_codes.R
@@ -67,7 +67,7 @@ rwtmnorm <- nimbleFunction(
 )
 
 #' weighted multivariate normal density
-#' @param n sample size
+#' @param x random variable
 #' @param mean mean
 #' @param prec precision
 #' @param wt weight
@@ -110,6 +110,7 @@ registerDistributions(list(dwtmnorm = list(
 
 #tobit2space.model------------------------------------------------------------------------------------------------
 #' Fit tobit prior to ensemble members
+#' @format TBD
 #' @export
 tobit2space.model <- nimbleCode({
@@ -129,6 +130,7 @@ tobit2space.model <- nimbleCode({
 #tobit.model--This does the GEF ----------------------------------------------------
 #' TWEnF
 #' @export
+#' @format TBD
 tobit.model <- nimbleCode({
   q[1:N, 1:N] ~ dwish(R = aq[1:N, 1:N], df = bq) ## aq and bq are estimated over time
   Q[1:N, 1:N] <- inverse(q[1:N, 1:N])
@@ -171,6 +173,7 @@ tobit.model <- nimbleCode({
 
 #tobit.model--This does the GEF for multi Site -------------------------------------
 #' multisite TWEnF
+#' @format TBD
 #' @export
 GEF.MultiSite.Nimble <- nimbleCode({
   if (q.type == 1) {
@@ -241,6 +244,10 @@ sampler_toggle <- nimbleFunction(
 )
 
 #' Weighted conjugate wishart
+#' @param model model
+#' @param mvSaved copied to
+#' @param target thing being targeted
+#' @param control unused
 #' @export
 conj_wt_wishart_sampler <- nimbleFunction(
   contains = sampler_BASE,
diff --git a/modules/assim.sequential/R/sda.enkf.R b/modules/assim.sequential/R/sda.enkf.R
index 355c4fa72ab..2d9d9f72b85 100644
--- a/modules/assim.sequential/R/sda.enkf.R
+++ b/modules/assim.sequential/R/sda.enkf.R
@@ -1298,7 +1298,7 @@ for(t in seq_len(nt)) { #
         text = paste("\n The following is text that'll appear in a plot window.\n",
                      " As you can see, it's in the plot window\n",
                      " One might imagine useful informaiton here")
-        if(require(gridExtra)){
+        if(requireNamespace("gridExtra")){
           ss <- gridExtra::tableGrob(signif(dat.stats,digits = 3))
           gridExtra::grid.arrange(plot1,plot2,plot3,ss,ncol=2)
         } else {
diff --git a/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd b/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd
index 2ed48aed3d1..2bbf6d0528f 100644
--- a/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd
+++ b/modules/assim.sequential/man/conj_wt_wishart_sampler.Rd
@@ -6,6 +6,15 @@
 \usage{
 conj_wt_wishart_sampler(model, mvSaved, target, control)
 }
+\arguments{
+\item{model}{model}
+
+\item{mvSaved}{copied to}
+
+\item{target}{thing being targeted}
+
+\item{control}{unused}
+}
 \description{
 Weighted conjugate wishart
 }
diff --git a/modules/assim.sequential/man/dwtmnorm.Rd b/modules/assim.sequential/man/dwtmnorm.Rd
index 71cd01c7f28..ef70f5ad689 100644
--- a/modules/assim.sequential/man/dwtmnorm.Rd
+++ b/modules/assim.sequential/man/dwtmnorm.Rd
@@ -7,6 +7,8 @@ dwtmnorm(x, mean, prec, wt, log = 0)
 }
 \arguments{
+\item{x}{random variable}
+
 \item{mean}{mean}
@@ -14,8 +16,6 @@ dwtmnorm(x, mean, prec, wt, log = 0)
 \item{wt}{weight}
 
 \item{log}{log}
-
-\item{n}{sample size}
 }
 \description{
 weighted multivariate normal density
 }

From 14066189407440e9a70ae537a25b838cb2501979 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 11:53:26 -0500
Subject: [PATCH 1616/2289] more man

---
 modules/assim.sequential/R/Nimble_codes.R      | 4 ++++
 modules/assim.sequential/man/sampler_toggle.Rd | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R
index e4011518ab8..e3be073bcfa 100644
--- a/modules/assim.sequential/R/Nimble_codes.R
+++ b/modules/assim.sequential/R/Nimble_codes.R
@@ -220,6 +220,10 @@ GEF.MultiSite.Nimble <- nimbleCode({
 #sampler_toggle------------------------------------------------------------------------------------------------
 #' sampler toggling
 #' @export
+#' @param model model
+#' @param mvSaved copied to
+#' @param target thing being targeted
+#' @param control unused
 sampler_toggle <- nimbleFunction(
   contains = sampler_BASE,
   setup = function(model, mvSaved, target, control) {
diff --git a/modules/assim.sequential/man/sampler_toggle.Rd b/modules/assim.sequential/man/sampler_toggle.Rd
index 95e7fce8ddb..d887a8a705e 100644
--- a/modules/assim.sequential/man/sampler_toggle.Rd
+++ b/modules/assim.sequential/man/sampler_toggle.Rd
@@ -6,6 +6,15 @@
 \usage{
 sampler_toggle(model, mvSaved, target, control)
 }
+\arguments{
+\item{model}{model}
+
+\item{mvSaved}{copied to}
+
+\item{target}{thing being targeted}
+
+\item{control}{unused}
+}
 \description{
 sampler toggling
 }

From f4642510d3c1e99ef608e851efe2f7bd1d38d5b8 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 12:33:18 -0500
Subject: [PATCH 1617/2289] linkages rnorm

---
 models/linkages/R/write.config.LINKAGES.R  | 2 +-
 models/linkages/R/write_restart.LINKAGES.R | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/models/linkages/R/write.config.LINKAGES.R b/models/linkages/R/write.config.LINKAGES.R
index 2040f346223..28ce3f719bb 100644
--- a/models/linkages/R/write.config.LINKAGES.R
+++ b/models/linkages/R/write.config.LINKAGES.R
@@ -138,7 +138,7 @@ write.config.LINKAGES <- function(defaults = NULL, trait.values, settings, run.i
 
     if ("SLA" %in% names(vals)) {
       sla_use <- (1/vals$SLA)*1000
-      sla_use[sla_use>5000] <- rnorm(1,4000,100)
+      sla_use[sla_use>5000] <- stats::rnorm(1,4000,100)
       spp.params[spp.params$Spp_Name == group, ]$FWT <- sla_use ## If change here need to change in write_restart as well
     }
 
diff --git a/models/linkages/R/write_restart.LINKAGES.R b/models/linkages/R/write_restart.LINKAGES.R
index 40e962ad0bc..77b073590b5 100644
--- a/models/linkages/R/write_restart.LINKAGES.R
+++ b/models/linkages/R/write_restart.LINKAGES.R
@@ -111,7 +111,7 @@ write_restart.LINKAGES <- function(outdir, runid, start.time, stop.time,
     }
     if ("SLA" %in% names(new.params[[as.character(pft)]])) {
       sla_use <- (1/new.params[[as.character(pft)]]$SLA)*1000
-      sla_use[sla_use>5000] <- rnorm(1,4000,100)
+      sla_use[sla_use>5000] <- stats::rnorm(1,4000,100)
       fwt <- sla_use#(1 / new.params[[as.character(pft)]]$SLA) * 10000
     } else {
       fwt <- default.params[default.params$Spp_Name == pft, ]$FWT
@@ -355,7 +355,7 @@ write_restart.LINKAGES <- function(outdir, runid, start.time, stop.time,
   ##### SOIL
   if ("TotSoilCarb" %in% names(new.state.other)) {
     leaf.sum <- sum(tyl[1:12]) * 0.48
-    if(new.state.other["TotSoilCarb"] > 1000) new.state.other["TotSoilCarb"] = rnorm(1,1000,10)
+    if(new.state.other["TotSoilCarb"] > 1000) new.state.other["TotSoilCarb"] = stats::rnorm(1,1000,10)
     soil.org.mat <- new.state.other["TotSoilCarb"] - leaf.sum
     soil.corr <- soil.org.mat / (sum(C.mat[C.mat[1:ncohrt, 5], 1]) * 0.48)
     #if(soil.corr > 1) soil.corr <- 1

From b002aa039387b8a79289bf070ed9dd7d559327a8 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 20:26:13 -0500
Subject: [PATCH 1618/2289] doc

---
 modules/assim.sequential/NAMESPACE | 1 +
 modules/assim.sequential/man/rescaling_stateVars.Rd | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE
index 894feba25d7..c28ddc4a96d 100644
--- a/modules/assim.sequential/NAMESPACE
+++ b/modules/assim.sequential/NAMESPACE
@@ -36,6 +36,7 @@ export(post.analysis.multisite.ggplot)
 export(postana.bias.plotting.sda)
 export(postana.bias.plotting.sda.corr)
 export(postana.timeser.plotting.sda)
+export(rescaling_stateVars)
 export(rwtmnorm)
 export(sample_met)
 export(sampler_toggle)
diff --git a/modules/assim.sequential/man/rescaling_stateVars.Rd b/modules/assim.sequential/man/rescaling_stateVars.Rd
index 8447e644a80..1cb2bf06cb0 100644
--- a/modules/assim.sequential/man/rescaling_stateVars.Rd
+++ b/modules/assim.sequential/man/rescaling_stateVars.Rd
@@ -10,6 +10,9 @@ rescaling_stateVars(settings, X, multiply = TRUE)
 \item{settings}{pecan xml settings where state variables have the scaling_factor tag}
 
 \item{X}{Any Matrix with column names as variable names}
+}
+\value{
+
 }
 \description{
 This function uses a set of scaling factors defined in the pecan XML to scale a given matrix

From e1377314b5d47a7e51261e7cda7ea52d04e1dbe0 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 20:34:08 -0500
Subject: [PATCH 1619/2289] roxygen and import pipe

---
 modules/assim.sequential/NAMESPACE                  | 1 +
 modules/assim.sequential/R/Helper.functions.R       | 4 +++-
 modules/assim.sequential/man/rescaling_stateVars.Rd | 2 ++
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE
index c28ddc4a96d..539d2678220 100644
--- a/modules/assim.sequential/NAMESPACE
+++ b/modules/assim.sequential/NAMESPACE
@@ -50,3 +50,4 @@ export(y_star_create)
 import(furrr)
 import(lubridate)
 import(nimble)
+importFrom(magrittr,"%>%")
diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R
index 9e9608d0a50..0977cf4b192 100644
--- a/modules/assim.sequential/R/Helper.functions.R
+++ b/modules/assim.sequential/R/Helper.functions.R
@@ -7,6 +7,7 @@
 #' @return A list the same dimension as X, with each column of each dataframe
 #'   modified by replacing outlier points with the column median
 #' @export
+#' @importFrom magrittr %>%
 #'
 outlier.detector.boxplot<-function(X) {
   X <- X %>%
@@ -81,10 +82,11 @@ SDA_control <-
 #'
 #' @param settings  pecan xml settings where state variables have the scaling_factor tag
 #' @param X Any Matrix with column names as variable names
+#' @param multiply TRUE = multiplication, FALSE = division
 #' @description This function uses a set of scaling factors defined in the pecan XML to scale a given matrix
 #' @return
 #' @export
-#'
+#' @importFrom magrittr %>%
 rescaling_stateVars <- function(settings, X, multiply=TRUE) {
 
   FUN <- ifelse(multiply, .Primitive('*'), .Primitive('/'))
diff --git a/modules/assim.sequential/man/rescaling_stateVars.Rd b/modules/assim.sequential/man/rescaling_stateVars.Rd
index 1cb2bf06cb0..7ad5d40085d 100644
--- a/modules/assim.sequential/man/rescaling_stateVars.Rd
+++ b/modules/assim.sequential/man/rescaling_stateVars.Rd
@@ -10,6 +10,8 @@ rescaling_stateVars(settings, X, multiply = TRUE)
 \item{settings}{pecan xml settings where state variables have the scaling_factor tag}
 
 \item{X}{Any Matrix with column names as variable names}
+
+\item{multiply}{TRUE = multiplication, FALSE = division}
 }
 \value{
 
 }

From 9829a23847ef69ea06bc84b55caa38a89593a4a9 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 20:47:50 -0500
Subject: [PATCH 1620/2289] namespace

---
 modules/assim.sequential/DESCRIPTION | 1 +
 1 file changed, 1 insertion(+)

diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION
index bcc152cf358..a9eaa4a60f4 100644
--- a/modules/assim.sequential/DESCRIPTION
+++ b/modules/assim.sequential/DESCRIPTION
@@ -14,6 +14,7 @@ Imports:
   dplyr,
   future,
   Matrix,
+  magrittr,
   ncdf4,
   PEcAn.DB,
   PEcAn.logger,

From d4398861fe9691a05f611005a84593141511a38d Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 21:37:50 -0500
Subject: [PATCH 1621/2289] rescaling return fix

---
 modules/assim.sequential/R/Helper.functions.R       | 2 +-
 modules/assim.sequential/man/rescaling_stateVars.Rd | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R
index 0977cf4b192..5bd6d294937 100644
--- a/modules/assim.sequential/R/Helper.functions.R
+++ b/modules/assim.sequential/R/Helper.functions.R
@@ -84,7 +84,7 @@ SDA_control <-
 #' @param X Any Matrix with column names as variable names
 #' @param multiply TRUE = multiplication, FALSE = division
 #' @description This function uses a set of scaling factors defined in the pecan XML to scale a given matrix
-#' @return
+#' @return rescaled Matrix
 #' @export
 #' @importFrom magrittr %>%
 rescaling_stateVars <- function(settings, X, multiply=TRUE) {
diff --git a/modules/assim.sequential/man/rescaling_stateVars.Rd b/modules/assim.sequential/man/rescaling_stateVars.Rd
index 7ad5d40085d..24baf41a6af 100644
--- a/modules/assim.sequential/man/rescaling_stateVars.Rd
+++ b/modules/assim.sequential/man/rescaling_stateVars.Rd
@@ -14,7 +14,7 @@ rescaling_stateVars(settings, X, multiply = TRUE)
 \item{multiply}{TRUE = multiplication, FALSE = division}
 }
 \value{
-
+rescaled Matrix
 }
 \description{
 This function uses a set of scaling factors defined in the pecan XML to scale a given matrix

From c6fb232f68b9180320a75e4d7bf5d17bd36867f9 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 21:53:31 -0500
Subject: [PATCH 1622/2289] setNames

---
 modules/assim.sequential/R/Helper.functions.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R
index 5bd6d294937..0afe70dfa90 100644
--- a/modules/assim.sequential/R/Helper.functions.R
+++ b/modules/assim.sequential/R/Helper.functions.R
@@ -96,7 +96,7 @@ rescaling_stateVars <- function(settings, X, multiply=TRUE) {
   scaling.factors <-
     settings$state.data.assimilation$state.variables %>%
     purrr::map('scaling_factor') %>%
-    setNames(settings$state.data.assimilation$state.variables %>%
+    stats::setNames(settings$state.data.assimilation$state.variables %>%
                purrr::map('variable.name')) %>%
     purrr::discard(is.null)

From 1f70c396e55940a5c3a8e49664a7da71f4efda1e Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sun, 8 Nov 2020 23:00:39 -0500
Subject: [PATCH 1623/2289] doc fixes

---
 models/ed/NAMESPACE               |  1 +
 models/ed/R/SAS.ED2.R             |  7 +--
 models/ed/R/get.model.output.ed.R |  6 +--
 models/ed/man/SAS.ED2.Rd          | 78 ++++++-------------------------
 models/ed/man/read.output.ED2.Rd  |  6 ++-
 5 files changed, 26 insertions(+), 72 deletions(-)

diff --git a/models/ed/NAMESPACE b/models/ed/NAMESPACE
index 87b4bd81d28..7d8559612da 100644
--- a/models/ed/NAMESPACE
+++ b/models/ed/NAMESPACE
@@ -4,6 +4,7 @@ S3method(print,ed2in)
 S3method(write_ed2in,default)
 S3method(write_ed2in,ed2in)
 export(SAS.ED2)
+export(SAS.ED2.param.Args) export(check_css) export(check_ed2in) export(check_ed_metheader) diff --git a/models/ed/R/SAS.ED2.R b/models/ed/R/SAS.ED2.R index 13d63ac4450..e4332ef68a1 100644 --- a/models/ed/R/SAS.ED2.R +++ b/models/ed/R/SAS.ED2.R @@ -1,5 +1,6 @@ +##' sets parameters and defaults for the ED2 semi-analytical spin-up ##' @param decomp_scheme Decomposition scheme specified in ED2IN -##' @param kh_active_depth +##' @param kh_active_depth Depth threshold for averaging soil moisture and temperature ##' @param Lc Used to compute nitrogen immpobilzation factor; ED default is 0.049787 (soil_respiration.f90) ##' @param c2n_slow Carbon to Nitrogen ratio, slow pool; ED Default 10.0 ##' @param c2n_structural Carbon to Nitrogen ratio, structural pool. ED default 150.0 @@ -139,7 +140,7 @@ smfire.pos <- function(slmsts, soilcp, smfire){ ##' @name SAS.ED2 -##' @title Use semi-analytical solution to accellerate model spinup +##' @title Use semi-analytic solution to accelerate model spinup ##' @author Christine Rollinson, modified from original by Jaclyn Hatala-Matthes (2/18/14) ##' 2014 Feb: Original ED SAS solution Script at PalEON modeling HIPS sites (Matthes) ##' 2015 Aug: Modifications for greater site flexibility & updated ED @@ -148,7 +149,7 @@ smfire.pos <- function(slmsts, soilcp, smfire){ ##'@description This functions approximates landscape equilibrium steady state for vegetation and ##' soil pools using the successional trajectory of a single patch modeled with disturbance ##' off and the prescribed disturbance rates for runs (Xia et al. 2012 GMD 5:1259-1271). -##' @param dir.analy Location of ED2 analyis files; expects monthly and yearly output +##' @param dir.analy Location of ED2 analysis files; expects monthly and yearly output ##' @param dir.histo Location of ED2 history files (for vars not in analy); expects monthly ##' @param outdir Location to write SAS .css & .pss files ##' @param lat site latitude; used for file naming diff --git a/models/ed/R/get.model.output.ed.R b/models/ed/R/get.model.output.ed.R index 529c8aa7146..e8cad0ce747 100644 --- a/models/ed/R/get.model.output.ed.R +++ b/models/ed/R/get.model.output.ed.R @@ -42,10 +42,10 @@ read.output.file.ed <- function(filename, variables = c("AGB_CO", "NPLANT")) { ##' This function applies \link{read.output.file.ed} to a list of files from a single run ##' @title Read ED output ##' @name read.output.ED2 -##' @param run.id the id distiguishing the model run +##' @param run.id the id distinguishing the model run ##' @param outdir the directory that the model's output was sent to -##' @param start.year -##' @param end.year +##' @param start.year first year +##' @param end.year last year ##' @param output.type type of output file to read, can be '-Y-' for annual output, '-M-' for monthly means, '-D-' for daily means, '-T-' for instantaneous fluxes. 
Output types are set in the ED2IN namelist as NL%I[DMYT]OUTPUT ##' @return vector of output variable for all runs within ensemble ##' @export diff --git a/models/ed/man/SAS.ED2.Rd b/models/ed/man/SAS.ED2.Rd index faf7af4330c..d42d2c3a2c5 100644 --- a/models/ed/man/SAS.ED2.Rd +++ b/models/ed/man/SAS.ED2.Rd @@ -2,23 +2,23 @@ % Please edit documentation in R/SAS.ED2.R \name{SAS.ED2} \alias{SAS.ED2} -\title{Use semi-analytical solution to accellerate model spinup} +\title{Use semi-analytic solution to accelerate model spinup} \usage{ -SAS.ED2(dir.analy, dir.histo, outdir, prefix, lat, lon, block, yrs.met = 30, - treefall, sm_fire = 0, fire_intensity = 0, slxsand = 0.33, - slxclay = 0.33, sufx = "g01.h5", decomp_scheme = 2, - kh_active_depth = -0.2, decay_rate_fsc = 11, decay_rate_stsc = 4.5, - decay_rate_ssc = 0.2, Lc = 0.049787, c2n_slow = 10, - c2n_structural = 150, r_stsc = 0.3, rh_decay_low = 0.24, - rh_decay_high = 0.6, rh_low_temp = 18 + 273.15, rh_high_temp = 45 + - 273.15, rh_decay_dry = 12, rh_decay_wet = 36, rh_dry_smoist = 0.48, - rh_wet_smoist = 0.98, resp_opt_water = 0.8938, - resp_water_below_opt = 5.0786, resp_water_above_opt = 4.5139, - resp_temperature_increase = 0.0757, rh_lloyd_1 = 308.56, - rh_lloyd_2 = 1/56.02, rh_lloyd_3 = 227.15) +SAS.ED2( + dir.analy, + dir.histo, + outdir, + lat, + lon, + block, + prefix, + treefall, + param.args = SAS.ED2.param.Args(), + sufx = "g01.h5" +) } \arguments{ -\item{dir.analy}{Location of ED2 analyis files; expects monthly and yearly output} +\item{dir.analy}{Location of ED2 analysis files; expects monthly and yearly output} \item{dir.histo}{Location of ED2 history files (for vars not in analy); expects monthly} @@ -30,59 +30,9 @@ SAS.ED2(dir.analy, dir.histo, outdir, prefix, lat, lon, block, yrs.met = 30, \item{block}{Number of years between patch ages} -\item{yrs.met}{Number of years cycled in model spinup part 1} - \item{treefall}{Value to be used for TREEFALL_DISTURBANCE_RATE in ED2IN for full runs (disturbance on)} -\item{sm_fire}{Value to be used for SM_FIRE if INCLUDE_FIRE=2; defaults to 0 (fire off)} - -\item{fire_intensity}{Value to be used for FIRE_PARAMTER; defaults to 0 (fire off)} - -\item{slxsand}{Soil percent sand; used to calculate expected fire return interval} - -\item{slxclay}{Soil percent clay; used to calculate expected fire return interval} - \item{sufx}{ED2 out file suffix; used in constructing file names(default "g01.h5)} - -\item{decomp_scheme}{Decomposition scheme specified in ED2IN} - -\item{Lc}{Used to compute nitrogen immpobilzation factor; ED default is 0.049787 (soil_respiration.f90)} - -\item{c2n_slow}{Carbon to Nitrogen ratio, slow pool; ED Default 10.0} - -\item{c2n_structural}{Carbon to Nitrogen ratio, structural pool. 
ED default 150.0} - -\item{r_stsc}{Decomp param} - -\item{rh_decay_low}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.24} - -\item{rh_decay_high}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.60} - -\item{rh_low_temp}{Param used for ED-1/CENTURY decomp schemes; ED default = 291} - -\item{rh_high_temp}{Param used for ED-1/CENTURY decomp schemes; ED default = 318.15} - -\item{rh_decay_dry}{Param used for ED-1/CENTURY decomp schemes; ED default = 12.0} - -\item{rh_decay_wet}{Param used for ED-1/CENTURY decomp schemes; ED default = 36.0} - -\item{rh_dry_smoist}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.48} - -\item{rh_wet_smoist}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.98} - -\item{resp_opt_water}{Param used for decomp schemes 0 & 3, ED default = 0.8938} - -\item{resp_water_below_opt}{Param used for decomp schemes 0 & 3, ED default = 5.0786} - -\item{resp_water_above_opt}{Param used for decomp schemes 0 & 3, ED default = 4.5139} - -\item{resp_temperature_increase}{Param used for decomp schemes 0 & 3, ED default = 0.0757} - -\item{rh_lloyd_1}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 308.56} - -\item{rh_lloyd_2}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 1/56.02} - -\item{rh_lloyd_3}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 227.15} } \description{ This functions approximates landscape equilibrium steady state for vegetation and diff --git a/models/ed/man/read.output.ED2.Rd b/models/ed/man/read.output.ED2.Rd index 57a8fbca5db..8dbc4e51276 100644 --- a/models/ed/man/read.output.ED2.Rd +++ b/models/ed/man/read.output.ED2.Rd @@ -14,11 +14,13 @@ read.output.ED2( ) } \arguments{ -\item{run.id}{the id distiguishing the model run} +\item{run.id}{the id distinguishing the model run} \item{outdir}{the directory that the model's output was sent to} -\item{start.year}{} +\item{start.year}{first year} + +\item{end.year}{last year} \item{output.type}{type of output file to read, can be '-Y-' for annual output, '-M-' for monthly means, '-D-' for daily means, '-T-' for instantaneous fluxes. Output types are set in the ED2IN namelist as NL\%I\link{DMYT}OUTPUT} } From e3ee77dd22e15bcc4372c0a23ba021a2ee77bea5 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Mon, 9 Nov 2020 05:19:10 -0500 Subject: [PATCH 1624/2289] removing coda, missed a new Rd file --- models/ed/DESCRIPTION | 3 +- models/ed/man/SAS.ED2.param.Args.Rd | 94 +++++++++++++++++++++++++++++ 2 files changed, 95 insertions(+), 2 deletions(-) create mode 100644 models/ed/man/SAS.ED2.param.Args.Rd diff --git a/models/ed/DESCRIPTION b/models/ed/DESCRIPTION index 899ba36974f..9610a99226b 100644 --- a/models/ed/DESCRIPTION +++ b/models/ed/DESCRIPTION @@ -21,8 +21,7 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific efficacy of scientific investigation. This package provides functions to link the Ecosystem Demography Model, version 2, to PEcAn. 
Depends: - PEcAn.utils, - coda + PEcAn.utils Imports: PEcAn.data.atmosphere, PEcAn.logger, diff --git a/models/ed/man/SAS.ED2.param.Args.Rd b/models/ed/man/SAS.ED2.param.Args.Rd new file mode 100644 index 00000000000..ba649cbdabd --- /dev/null +++ b/models/ed/man/SAS.ED2.param.Args.Rd @@ -0,0 +1,94 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/SAS.ED2.R +\name{SAS.ED2.param.Args} +\alias{SAS.ED2.param.Args} +\title{sets parameters and defaults for the ED2 semi-analytical spin-up} +\usage{ +SAS.ED2.param.Args( + decomp_scheme = 2, + kh_active_depth = -0.2, + decay_rate_fsc = 11, + decay_rate_stsc = 4.5, + decay_rate_ssc = 0.2, + Lc = 0.049787, + c2n_slow = 10, + c2n_structural = 150, + r_stsc = 0.3, + rh_decay_low = 0.24, + rh_decay_high = 0.6, + rh_low_temp = 18 + 273.15, + rh_high_temp = 45 + 273.15, + rh_decay_dry = 12, + rh_decay_wet = 36, + rh_dry_smoist = 0.48, + rh_wet_smoist = 0.98, + resp_opt_water = 0.8938, + resp_water_below_opt = 5.0786, + resp_water_above_opt = 4.5139, + resp_temperature_increase = 0.0757, + rh_lloyd_1 = 308.56, + rh_lloyd_2 = 1/56.02, + rh_lloyd_3 = 227.15, + yrs.met = 30, + sm_fire = 0, + fire_intensity = 0, + slxsand = 0.33, + slxclay = 0.33 +) +} +\arguments{ +\item{decomp_scheme}{Decomposition scheme specified in ED2IN} + +\item{kh_active_depth}{Depth threshold for averaging soil moisture and temperature} + +\item{Lc}{Used to compute nitrogen immpobilzation factor; ED default is 0.049787 (soil_respiration.f90)} + +\item{c2n_slow}{Carbon to Nitrogen ratio, slow pool; ED Default 10.0} + +\item{c2n_structural}{Carbon to Nitrogen ratio, structural pool. ED default 150.0} + +\item{r_stsc}{Decomp param} + +\item{rh_decay_low}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.24} + +\item{rh_decay_high}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.60} + +\item{rh_low_temp}{Param used for ED-1/CENTURY decomp schemes; ED default = 291} + +\item{rh_high_temp}{Param used for ED-1/CENTURY decomp schemes; ED default = 318.15} + +\item{rh_decay_dry}{Param used for ED-1/CENTURY decomp schemes; ED default = 12.0} + +\item{rh_decay_wet}{Param used for ED-1/CENTURY decomp schemes; ED default = 36.0} + +\item{rh_dry_smoist}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.48} + +\item{rh_wet_smoist}{Param used for ED-1/CENTURY decomp schemes; ED default = 0.98} + +\item{resp_opt_water}{Param used for decomp schemes 0 & 3, ED default = 0.8938} + +\item{resp_water_below_opt}{Param used for decomp schemes 0 & 3, ED default = 5.0786} + +\item{resp_water_above_opt}{Param used for decomp schemes 0 & 3, ED default = 4.5139} + +\item{resp_temperature_increase}{Param used for decomp schemes 0 & 3, ED default = 0.0757} + +\item{rh_lloyd_1}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 308.56} + +\item{rh_lloyd_2}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 1/56.02} + +\item{rh_lloyd_3}{Param used for decomp schemes 1 & 4 (Lloyd & Taylor 1994); ED default = 227.15} + +\item{yrs.met}{Number of years cycled in model spinup part 1} + +\item{sm_fire}{Value to be used for SM_FIRE if INCLUDE_FIRE=2; defaults to 0 (fire off)} + +\item{fire_intensity}{Value to be used for FIRE_PARAMTER; defaults to 0 (fire off)} + +\item{slxsand}{Soil percent sand; used to calculate expected fire return interval} + +\item{slxclay}{Soil percent clay; used to calculate expected fire return interval} +} +\description{ +sets parameters and defaults for the ED2 semi-analytical spin-up +} From 
From cf9e2e92bf226455dc292c1f123585051e2dbcdd Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Mon, 9 Nov 2020 06:14:10 -0500 Subject: [PATCH 1625/2289] pecan.data.land -> ed --- Makefile.depends | 2 +- models/ed/DESCRIPTION | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index 151599077e0..cb69f04c1d0 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -28,7 +28,7 @@ $(call depends,models/cable): | .install/base/logger .install/base/utils $(call depends,models/clm45): | .install/base/logger .install/base/utils $(call depends,models/dalec): | .install/base/logger .install/base/remote .install/base/utils $(call depends,models/dvmdostem): | .install/base/utils -$(call depends,models/ed): | .install/base/utils .install/modules/data.atmosphere .install/base/logger .install/base/remote .install/base/settings +$(call depends,models/ed): | .install/base/utils .install/modules/data.atmosphere .install/modules/data.land .install/base/logger .install/base/remote .install/base/settings $(call depends,models/fates): | .install/base/utils .install/base/logger .install/base/remote $(call depends,models/gday): | .install/base/utils .install/base/logger .install/base/remote $(call depends,models/jules): | .install/base/utils .install/base/logger .install/base/remote diff --git a/models/ed/DESCRIPTION b/models/ed/DESCRIPTION index 9610a99226b..64f0023144e 100644 --- a/models/ed/DESCRIPTION +++ b/models/ed/DESCRIPTION @@ -24,6 +24,7 @@ Depends: PEcAn.utils Imports: PEcAn.data.atmosphere, + PEcAn.data.land, PEcAn.logger, PEcAn.remote, PEcAn.settings, From c7b1ba57eb36f56c2c4adceb5e226cf834928c4e Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Mon, 9 Nov 2020 06:54:10 -0500 Subject: [PATCH 1626/2289] more missing docs --- models/ed/R/SAS.ED2.R | 21 +++++++++++++-------- models/ed/R/get.model.output.ed.R | 1 + models/ed/man/SAS.ED2.Rd | 8 ++++++-- models/ed/man/SAS.ED2.param.Args.Rd | 6 ++++++ models/ed/man/read.output.ED2.Rd | 2 ++ 5 files changed, 28 insertions(+), 10 deletions(-) diff --git a/models/ed/R/SAS.ED2.R b/models/ed/R/SAS.ED2.R index e4332ef68a1..626876003a4 100644 --- a/models/ed/R/SAS.ED2.R +++ b/models/ed/R/SAS.ED2.R @@ -1,6 +1,9 @@ ##' sets parameters and defaults for the ED2 semi-analytical spin-up ##' @param decomp_scheme Decomposition scheme specified in ED2IN ##' @param kh_active_depth Depth threshold for averaging soil moisture and temperature +##' @param decay_rate_fsc Fast soil carbon decay rate +##' @param decay_rate_stsc Structural soil carbon decay rate +##' @param decay_rate_ssc Slow soil carbon decay rate ##' @param Lc Used to compute nitrogen immobilization factor; ED default is 0.049787 (soil_respiration.f90) ##' @param c2n_slow Carbon to Nitrogen ratio, slow pool; ED Default 10.0 ##' @param c2n_structural Carbon to Nitrogen ratio, structural pool. ED default 150.0 @@ -94,7 +97,7 @@ SAS.ED2.param.Args <- function(decomp_scheme=2, } -#Soil Moisture at saturation +#Soil Moisture at saturation (should replace with soil functions in data.land) calc.slmsts <- function(slxsand, slxclay){ # Soil moisture at saturation [ m^3/m^3 ] (50.5 - 14.2*slxsand - 3.7*slxclay) / 100.
@@ -154,12 +157,14 @@ smfire.pos <- function(slmsts, soilcp, smfire){ ##' @param outdir Location to write SAS .css & .pss files ##' @param lat site latitude; used for file naming ##' @param lon site longitude; used for file naming -##' @param block Number of years between patch ages +##' @param blckyr Number of years between patch ages (aka blocks) +##' @param prefix ED2 -E- output file prefix ##' @param treefall Value to be used for TREEFALL_DISTURBANCE_RATE in ED2IN for full runs (disturbance on) +##' @param param.args ED2 parameter arguments (mostly soil biogeochem) ##' @param sufx ED2 out file suffix; used in constructing file names (default "g01.h5") ##' @export ##' -SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, block, +SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, prefix, treefall, param.args = SAS.ED2.param.Args(), sufx = "g01.h5") { @@ -330,9 +335,9 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, block, stand.age <- seq(yrs[1]-yeara,nrow(pss.big)*blckyr,by=blckyr) area.dist <- vector(length=nrow(pss.big)) - area.dist[1] <- sum(dgeom(0:(stand.age[2]-1), disturb)) + area.dist[1] <- sum(stats::dgeom(0:(stand.age[2]-1), disturb)) for(i in 2:(length(area.dist)-1)){ - area.dist[i] <- sum(dgeom((stand.age[i]):(stand.age[i+1]-1),disturb)) + area.dist[i] <- sum(stats::dgeom((stand.age[i]):(stand.age[i+1]-1),disturb)) } area.dist[length(area.dist)] <- 1 - sum(area.dist[1:(length(area.dist)-1)]) pss.big[,"area"] <- area.dist @@ -414,7 +419,7 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, block, # -- Monthly data is then aggregated to a yearly value: sum for carbon inputs; mean for temp/moist # (if not calculated above) #--------------------------------------- - pss.big <- pss.big[complete.cases(pss.big),] + pss.big <- pss.big[stats::complete.cases(pss.big),] # some empty vectors for storage etc fsc_in_y <- ssc_in_y <- ssl_in_y <- fsn_in_y <- pln_up_y <- vector() @@ -628,10 +633,10 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, block, # Write everything to file!! #--------------------------------------- file.prefix=paste0(prefix, "-lat", lat, "lon", lon) - write.table(css.big,file=file.path(outdir,paste0(file.prefix,".css")),row.names=FALSE,append=FALSE, + utils::write.table(css.big,file=file.path(outdir,paste0(file.prefix,".css")),row.names=FALSE,append=FALSE, col.names=TRUE,quote=FALSE) - write.table(pss.big,file=file.path(outdir,paste0(file.prefix,".pss")),row.names=FALSE,append=FALSE, + utils::write.table(pss.big,file=file.path(outdir,paste0(file.prefix,".pss")),row.names=FALSE,append=FALSE, col.names=TRUE,quote=FALSE) #--------------------------------------- diff --git a/models/ed/R/get.model.output.ed.R b/models/ed/R/get.model.output.ed.R index e8cad0ce747..595fa4866af 100644 --- a/models/ed/R/get.model.output.ed.R +++ b/models/ed/R/get.model.output.ed.R @@ -46,6 +46,7 @@ read.output.file.ed <- function(filename, variables = c("AGB_CO", "NPLANT")) { ##' @param outdir the directory that the model's output was sent to ##' @param start.year first year ##' @param end.year last year +##' @param variables which ED2 variables to extract ##' @param output.type type of output file to read, can be '-Y-' for annual output, '-M-' for monthly means, '-D-' for daily means, '-T-' for instantaneous fluxes.
Output types are set in the ED2IN namelist as NL%I[DMYT]OUTPUT ##' @return vector of output variable for all runs within ensemble ##' @export diff --git a/models/ed/man/SAS.ED2.Rd b/models/ed/man/SAS.ED2.Rd index d42d2c3a2c5..e4bea72549f 100644 --- a/models/ed/man/SAS.ED2.Rd +++ b/models/ed/man/SAS.ED2.Rd @@ -10,7 +10,7 @@ SAS.ED2( outdir, lat, lon, - block, + blckyr, prefix, treefall, param.args = SAS.ED2.param.Args(), @@ -28,10 +28,14 @@ SAS.ED2( \item{lon}{site longitude; used for file naming} -\item{block}{Number of years between patch ages} +\item{blckyr}{Number of years between patch ages (aka blocks)} + +\item{prefix}{ED2 -E- output file prefix} \item{treefall}{Value to be used for TREEFALL_DISTURBANCE_RATE in ED2IN for full runs (disturbance on)} +\item{param.args}{ED2 parameter arguments (mostly soil biogeochem)} + \item{sufx}{ED2 out file suffix; used in constructing file names (default "g01.h5")} } \description{ diff --git a/models/ed/man/SAS.ED2.param.Args.Rd b/models/ed/man/SAS.ED2.param.Args.Rd index ba649cbdabd..9d12258a9cf 100644 --- a/models/ed/man/SAS.ED2.param.Args.Rd +++ b/models/ed/man/SAS.ED2.param.Args.Rd @@ -41,6 +41,12 @@ SAS.ED2.param.Args( \item{kh_active_depth}{Depth threshold for averaging soil moisture and temperature} +\item{decay_rate_fsc}{Fast soil carbon decay rate} + +\item{decay_rate_stsc}{Structural soil carbon decay rate} + +\item{decay_rate_ssc}{Slow soil carbon decay rate} + \item{Lc}{Used to compute nitrogen immobilization factor; ED default is 0.049787 (soil_respiration.f90)} \item{c2n_slow}{Carbon to Nitrogen ratio, slow pool; ED Default 10.0} diff --git a/models/ed/man/read.output.ED2.Rd b/models/ed/man/read.output.ED2.Rd index 8dbc4e51276..373a89a2823 100644 --- a/models/ed/man/read.output.ED2.Rd +++ b/models/ed/man/read.output.ED2.Rd @@ -22,6 +22,8 @@ read.output.ED2( \item{end.year}{last year} +\item{variables}{which ED2 variables to extract} + \item{output.type}{type of output file to read, can be '-Y-' for annual output, '-M-' for monthly means, '-D-' for daily means, '-T-' for instantaneous fluxes.
Output types are set in the ED2IN namelist as NL\%I\link{DMYT}OUTPUT} } \value{ From c439041a9d021c2866c19e49d05b93f8813b17e5 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 9 Nov 2020 13:04:39 +0100 Subject: [PATCH 1627/2289] remove devtools dep --- docker/depends/pecan.depends.R | 1 - modules/assim.sequential/DESCRIPTION | 1 - modules/assim.sequential/R/sda_plotting.R | 10 +++++++--- 3 files changed, 7 insertions(+), 5 deletions(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 9b264a1b5fb..bb6b7074193 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -30,7 +30,6 @@ wanted <- c( 'datapack', 'DBI', 'dbplyr', -'devtools', 'doParallel', 'dplR', 'dplyr', diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index a9eaa4a60f4..871cbe4d04d 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -34,7 +34,6 @@ Imports: Suggests: PEcAn.benchmark, corrplot, - devtools, ggrepel, gridExtra, magic (>= 1.5.0), diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index aae1abc54a6..2655f45d959 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -509,6 +509,13 @@ post.analysis.ggplot.violin <- function(settings, t, obs.times, obs.mean, obs.co ##' @rdname interactive.plotting.sda ##' @export post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=FALSE, readsFF=NULL){ + + if (!requireNamespace("ggrepel", quietly = TRUE)) { + PEcAn.logger::logger.error( + "Package `ggrepel` not found, but needed by", + "PEcAn.assim.sequential::post.analysis.multisite.ggplot.", + "Please install it and try again.") + } # fix obs.mean/obs.cov for multivariable plotting issues when there is NA data. When more than 1 data set is assimilated, but there are missing data # for some sites/years/etc. 
the plotting will fail and crash the SDA because the numbers of columns are not consistent across all sublists within obs.mean @@ -551,9 +558,6 @@ post.analysis.multisite.ggplot <- function(settings, t, obs.times, obs.mean, obs obs.mean[name] = data_mean obs.cov[name] = data_cov } - - - if (!('ggrepel' %in% installed.packages()[,1])) devtools::install_github("slowkow/ggrepel") #Defining some colors t1 <- 1 From aa86ba1d638cfe6f03c0a574d265b125fe341481 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 9 Nov 2020 13:05:04 +0100 Subject: [PATCH 1628/2289] match roxygen versions --- modules/assim.sequential/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index 871cbe4d04d..9789dbb29df 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -47,4 +47,4 @@ Copyright: Authors LazyLoad: yes LazyData: FALSE Encoding: UTF-8 -RoxygenNote: 7.1.1 +RoxygenNote: 7.0.2 From 74a6839a798907cbb8a106f002fd3240b33cc10c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 9 Nov 2020 13:05:26 +0100 Subject: [PATCH 1629/2289] remove fixed check messages --- .../tests/Rcheck_reference.log | 24 +++---------------- 1 file changed, 3 insertions(+), 21 deletions(-) diff --git a/modules/assim.sequential/tests/Rcheck_reference.log b/modules/assim.sequential/tests/Rcheck_reference.log index c5162842b79..e664e22b985 100644 --- a/modules/assim.sequential/tests/Rcheck_reference.log +++ b/modules/assim.sequential/tests/Rcheck_reference.log @@ -37,20 +37,7 @@ Use \uxxxx escapes for other characters. * checking whether the namespace can be loaded with stated dependencies ... OK * checking whether the namespace can be unloaded cleanly ... OK * checking loading without being on the library search path ... OK -* checking dependencies in R code ... WARNING -'::' or ':::' imports not declared from: - ‘devtools’ ‘dplyr’ ‘future’ ‘ggrepel’ ‘Matrix’ ‘ncdf4’ - ‘PEcAn.benchmark’ ‘PEcAn.DB’ ‘PEcAn.settings’ ‘sf’ ‘sp’ ‘tidyr’ ‘XML’ -'library' or 'require' calls not declared from: - ‘corrplot’ ‘gridExtra’ ‘nimble’ ‘plotrix’ ‘plyr’ -'library' or 'require' calls in package code: - ‘corrplot’ ‘gridExtra’ ‘nimble’ ‘plotrix’ ‘plyr’ - Please use :: or requireNamespace() instead. - See section 'Suggested packages' in the 'Writing R Extensions' manual. -There are ::: calls to the package's namespace in its code. A package - almost never needs to use ::: for its own objects: - ‘load_nimble’ ‘post.analysis.ggplot.violin’ - ‘postana.bias.plotting.sda’ +* checking dependencies in R code ... OK * checking S3 generic/method consistency ... OK * checking replacement functions ... OK * checking foreign function calls ... OK @@ -896,12 +883,7 @@ File ‘PEcAn.assim.sequential/R/Analysis_sda.R’: * checking Rd metadata ... OK * checking Rd line widths ... OK * checking Rd cross-references ... OK -* checking for missing documentation entries ... WARNING -Undocumented code objects: - ‘sample_met’ -All user-level objects in a package should have documentation entries. -See chapter ‘Writing R documentation files’ in the ‘Writing R -Extensions’ manual. +* checking for missing documentation entries ... OK * checking for code/documentation mismatches ... OK * checking Rd \usage sections ... WARNING Undocumented arguments in documentation object 'Contruct.Pf' @@ -948,4 +930,4 @@ Extensions’ manual. * checking for unstated dependencies in examples ... OK * checking examples ... 
NONE * DONE -Status: 4 WARNINGs, 1 NOTE +Status: 2 WARNINGs, 1 NOTE From a1e178f9a220a24ad2c868c29a4b02df7299fbb3 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Mon, 9 Nov 2020 07:47:52 -0500 Subject: [PATCH 1630/2289] per @infotroph suggestion on #2482, move parameter vector to a named parameter data.frame and reference parameters where used. --- models/ed/R/SAS.ED2.R | 159 +++++++++++++++++------------------------- 1 file changed, 64 insertions(+), 95 deletions(-) diff --git a/models/ed/R/SAS.ED2.R b/models/ed/R/SAS.ED2.R index 626876003a4..679e2710481 100644 --- a/models/ed/R/SAS.ED2.R +++ b/models/ed/R/SAS.ED2.R @@ -61,36 +61,36 @@ SAS.ED2.param.Args <- function(decomp_scheme=2, slxclay=0.33) { return( - c( - decomp_scheme, - kh_active_depth, - decay_rate_fsc, - decay_rate_stsc, - decay_rate_ssc, - Lc, - c2n_slow, - c2n_structural, - r_stsc, # Constants from ED - rh_decay_low, - rh_decay_high, - rh_low_temp, - rh_high_temp, - rh_decay_dry, - rh_decay_wet, - rh_dry_smoist, - rh_wet_smoist, - resp_opt_water, - resp_water_below_opt, - resp_water_above_opt, - resp_temperature_increase, - rh_lloyd_1, - rh_lloyd_2, - rh_lloyd_3, - yrs.met, - sm_fire, - fire_intensity, - slxsand, - slxclay + data.frame( + decomp_scheme = decomp_scheme, + kh_active_depth = kh_active_depth, + decay_rate_fsc = decay_rate_fsc, + decay_rate_stsc = decay_rate_stsc, + decay_rate_ssc = decay_rate_ssc, + Lc = Lc, + c2n_slow = c2n_slow, + c2n_structural = c2n_structural, + r_stsc = r_stsc, # Constants from ED + rh_decay_low = rh_decay_low, + rh_decay_high = rh_decay_high, + rh_low_temp = rh_low_temp, + rh_high_temp = rh_high_temp, + rh_decay_dry = rh_decay_dry, + rh_decay_wet = rh_decay_wet, + rh_dry_smoist = rh_dry_smoist, + rh_wet_smoist = rh_wet_smoist, + resp_opt_water = resp_opt_water, + resp_water_below_opt = resp_water_below_opt, + resp_water_above_opt = resp_water_above_opt, + resp_temperature_increase = resp_temperature_increase, + rh_lloyd_1 = rh_lloyd_1, + rh_lloyd_2 = rh_lloyd_2, + rh_lloyd_3 = rh_lloyd_3, + yrs.met = yrs.met, + sm_fire = sm_fire, + fire_intensity = fire_intensity, + slxsand = slxsand, + slxclay = slxclay ) ) @@ -168,38 +168,7 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, prefix, treefall, param.args = SAS.ED2.param.Args(), sufx = "g01.h5") { - #--- Reading in the parameters - decomp_scheme <- param.args[1] - kh_active_depth <- param.args[2] - decay_rate_fsc <- param.args[3] - decay_rate_stsc <- param.args[4] - decay_rate_ssc <- param.args[5] - Lc <- param.args[6] - c2n_slow <- param.args[7] - c2n_structural <- param.args[8] - r_stsc <- param.args[9] # Constants from ED - rh_decay_low <- param.args[10] - rh_decay_high <- param.args[11] - rh_low_temp <- param.args[12] - rh_high_temp <- param.args[13] - rh_decay_dry <- param.args[14] - rh_decay_wet <- param.args[15] - rh_dry_smoist <- param.args[16] - rh_wet_smoist <- param.args[17] - resp_opt_water <- param.args[18] - resp_water_below_opt <- param.args[19] - resp_water_above_opt <- param.args[20] - resp_temperature_increase <- param.args[21] - rh_lloyd_1 <- param.args[22] - rh_lloyd_2 <- param.args[23] - rh_lloyd_3 <- param.args[24] - yrs.met <- param.args[25] - sm_fire <- param.args[26] - fire_intensity <- param.args[27] - slxsand <- param.args[28] - slxclay <- param.args[29] - - if(!decomp_scheme %in% 0:4) stop("Invalid decomp_scheme") + if(!param.args$decomp_scheme %in% 0:4) stop("Invalid decomp_scheme") # create a directory for the initialization files dir.create(outdir, recursive=T, showWarnings=F) @@ 
-233,7 +202,7 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, dslz[i] <- slz[i+1] - slz[i] } - nsoil=which(slz >= kh_active_depth-1e-3) # Maximum depth for avg. temperature and moisture; adding a fudge factor bc it's being weird + nsoil=which(slz >= param.args$kh_active_depth-1e-3) # Maximum depth for avg. temperature and moisture; adding a fudge factor bc it's being weird # nsoil=length(slz) #--------------------------------------- @@ -252,20 +221,20 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # with; if using the means from the spin met cycle work best, insert them here # This will also be necessary for helping update disturbance parameter #--------------------------------------- - soil.obj <- PEcAn.data.land::soil_params(sand = slxsand, clay = slxclay) + soil.obj <- PEcAn.data.land::soil_params(sand = param.args$slxsand, clay = param.args$slxclay) - slmsts <- calc.slmsts(slxsand, slxclay) # Soil Moisture at saturation - slpots <- calc.slpots(slxsand, slxclay) # Soil moisture potential at saturation [ m ] - slbs <- calc.slbs(slxsand, slxclay) # B exponent (unitless) + slmsts <- calc.slmsts(param.args$slxsand, param.args$slxclay) # Soil Moisture at saturation + slpots <- calc.slpots(param.args$slxsand, param.args$slxclay) # Soil moisture potential at saturation [ m ] + slbs <- calc.slbs(param.args$slxsand, param.args$slxclay) # B exponent (unitless) soilcp <- calc.soilcp(slmsts, slpots, slbs) # Dry soil capacity (at -3.1MPa) [ m^3/m^3 ] # Calculating Soil fire characteristics soilfr=0 - if(abs(sm_fire)>0){ - if(sm_fire>0){ - soilfr <- smfire.pos(slmsts, soilcp, smfire=sm_fire) + if(abs(param.args$sm_fire)>0){ + if(param.args$sm_fire>0){ + soilfr <- smfire.pos(slmsts, soilcp, smfire=param.args$sm_fire) } else { - soilfr <- smfire.neg(slmsts, slpots, smfire=sm_fire, slbs) + soilfr <- smfire.neg(slmsts, slpots, smfire=param.args$sm_fire, slbs) } } @@ -326,7 +295,7 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # ------ # Calculate the Rate of fire & total disturbance # ------ - fire_rate <- pfire * fire_intensity + fire_rate <- pfire * param.args$fire_intensity # Total disturbance rate = treefall + fire # -- treefall = % area/yr @@ -513,9 +482,9 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # ----------------------- # Calculate the annual carbon loss if things are stable # ----------- - fsc_loss <- decay_rate_fsc - ssc_loss <- decay_rate_ssc - ssl_loss <- decay_rate_stsc + fsc_loss <- param.args$decay_rate_fsc + ssc_loss <- param.args$decay_rate_ssc + ssl_loss <- param.args$decay_rate_stsc # ----------- @@ -527,20 +496,20 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # Temperature Limitation # ======================== # soil_tempk <- sum(soil_tempo_y*area.dist) - if(decomp_scheme %in% c(0, 3)){ - temperature_limitation = min(1,exp(resp_temperature_increase * (soil_tempk-318.15))) - } else if(decomp_scheme %in% c(1,4)){ - lnexplloyd = rh_lloyd_1 * ( rh_lloyd_2 - 1. / (soil_tempk - rh_lloyd_3)) + if(param.args$decomp_scheme %in% c(0, 3)){ + temperature_limitation = min(1,exp(param.args$resp_temperature_increase * (soil_tempk-318.15))) + } else if(param.args$decomp_scheme %in% c(1,4)){ + lnexplloyd = param.args$rh_lloyd_1 * ( param.args$rh_lloyd_2 - 1. 
/ (soil_tempk - param.args$rh_lloyd_3)) lnexplloyd = max(-38.,min(38,lnexplloyd)) - temperature_limitation = min( 1.0, resp_temperature_increase * exp(lnexplloyd) ) - } else if(decomp_scheme==2) { + temperature_limitation = min( 1.0, param.args$resp_temperature_increase * exp(lnexplloyd) ) + } else if(param.args$decomp_scheme==2) { # Low Temp Limitation - lnexplow <- rh_decay_low * (rh_low_temp - soil_tempk) + lnexplow <- param.args$rh_decay_low * (param.args$rh_low_temp - soil_tempk) lnexplow <- max(-38, min(38, lnexplow)) tlow_fun <- 1 + exp(lnexplow) # High Temp Limitation - lnexphigh <- rh_decay_high*(soil_tempk - rh_high_temp) + lnexphigh <- param.args$rh_decay_high*(soil_tempk - param.args$rh_high_temp) lnexphigh <- max(-38, min(38, lnexphigh)) thigh_fun <- 1 + exp(lnexphigh) @@ -552,20 +521,20 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # Moisture Limitation # ======================== # rel_soil_moist <- sum(swc_y*area.dist) - if(decomp_scheme %in% c(0,1)){ - if (rel_soil_moist <= resp_opt_water){ - water_limitation = exp( (rel_soil_moist - resp_opt_water) * resp_water_below_opt) + if(param.args$decomp_scheme %in% c(0,1)){ + if (rel_soil_moist <= param.args$resp_opt_water){ + water_limitation = exp( (rel_soil_moist - param.args$resp_opt_water) * param.args$resp_water_below_opt) } else { - water_limitation = exp( (resp_opt_water - rel_soil_moist) * resp_water_above_opt) + water_limitation = exp( (param.args$resp_opt_water - rel_soil_moist) * param.args$resp_water_above_opt) } - } else if(decomp_scheme==2){ + } else if(param.args$decomp_scheme==2){ # Dry soil Limitation - lnexpdry <- rh_decay_dry * (rh_dry_smoist - rel_soil_moist) + lnexpdry <- param.args$rh_decay_dry * (param.args$rh_dry_smoist - rel_soil_moist) lnexpdry <- max(-38, min(38, lnexpdry)) smdry_fun <- 1+exp(lnexpdry) # Wet Soil limitation - lnexpwet <- rh_decay_wet * (rel_soil_moist - rh_wet_smoist) + lnexpwet <- param.args$rh_decay_wet * (rel_soil_moist - param.args$rh_wet_smoist) lnexpwet <- max(-38, min(38, lnexpwet)) smwet_fun <- 1+exp(lnexpwet) @@ -587,8 +556,8 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # Do the carbon and fast nitrogen pools # ------------------- fsc_ss <- fsc_in_y[length(fsc_in_y)]/(fsc_loss * A_decomp) - ssl_ss <- ssl_in_y[length(ssl_in_y)]/(ssl_loss * A_decomp * Lc) # Structural soil C - ssc_ss <- ((ssl_loss * A_decomp * Lc * ssl_ss)*(1 - r_stsc))/(ssc_loss * A_decomp ) + ssl_ss <- ssl_in_y[length(ssl_in_y)]/(ssl_loss * A_decomp * param.args$Lc) # Structural soil C + ssc_ss <- ((ssl_loss * A_decomp * param.args$Lc * ssl_ss)*(1 - param.args$r_stsc))/(ssc_loss * A_decomp ) fsn_ss <- fsn_in_y[length(fsn_in_y)]/(fsc_loss * A_decomp) # ------------------- @@ -599,11 +568,11 @@ SAS.ED2 <- function(dir.analy, dir.histo, outdir, lat, lon, blckyr, # + csite%today_Af_decomp(ipa) * Lc * K1 * csite%structural_soil_C(ipa) # * ( (1.0 - r_stsc) / c2n_slow - 1.0 / c2n_structural) msn_loss <- pln_up_y[length(pln_up_y)] + - A_decomp*Lc*ssl_loss*ssl_in_y[length(ssl_in_y)]* - ((1.0-r_stsc)/c2n_slow - 1.0/c2n_structural) + A_decomp*param.args$Lc*ssl_loss*ssl_in_y[length(ssl_in_y)]* + ((1.0-param.args$r_stsc)/param.args$c2n_slow - 1.0/param.args$c2n_structural) #fast_N_loss + slow_C_loss/c2n_slow - msn_med <- fsc_loss*A_decomp*fsn_in_y[length(fsn_in_y)]+ (ssc_loss * A_decomp)/c2n_slow + msn_med <- fsc_loss*A_decomp*fsn_in_y[length(fsn_in_y)]+ (ssc_loss * A_decomp)/param.args$c2n_slow msn_ss <- msn_med/msn_loss # -------------------
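The motivation behind this refactor generalizes; a small self-contained R sketch (hypothetical parameter names, not PEcAn code) of what a named data.frame buys over positional unpacking:

```r
# Positional unpacking: silently breaks if the vector is ever reordered
# or a new parameter is inserted in the middle.
p <- c(2, -0.2, 11)
decomp_scheme   <- p[1]
kh_active_depth <- p[2]
decay_rate_fsc  <- p[3]

# Named one-row data.frame: every use site names the parameter it reads,
# so reordering or inserting columns cannot silently reassign values.
param.args <- data.frame(decomp_scheme = 2, kh_active_depth = -0.2, decay_rate_fsc = 11)
stopifnot(param.args$decomp_scheme %in% 0:4)
```
From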
02047a1e3318019d8eee544489a2da9a3b7cdf5c Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Mon, 9 Nov 2020 08:39:52 -0600 Subject: [PATCH 1631/2289] typo correction --- modules/data.atmosphere/R/debias_met_regression.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index e97f9710799..ea8afd3bb98 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -78,7 +78,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU n.ens=1 } if(!uncert.prop %in% c("mean", "random")) stop("unspecified uncertainty propogation method. Must be 'random' or 'mean' ") - if(uncert.prop=="mean" & n.ens>1) warning(pate0("Warning! Use of mean propagation with n.ens>1 not encouraged as all results will be the same and you will not be adding uncertainty at this stage.")) + if(uncert.prop=="mean" & n.ens>1) warning(paste0("Warning! Use of mean propagation with n.ens>1 not encouraged as all results will be the same and you will not be adding uncertainty at this stage.")) # Variables need to be done in a specific order vars.all <- c("air_temperature", "air_temperature_maximum", "air_temperature_minimum", "specific_humidity", "surface_downwelling_shortwave_flux_in_air", "air_pressure", "surface_downwelling_longwave_flux_in_air", "wind_speed", "precipitation_flux") From b06078dcc9a17c4655382a0b81464492f24632cd Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Tue, 22 Oct 2019 15:44:57 -0500 Subject: [PATCH 1632/2289] Fix units problem in variable mapping, add/change output variables. Removed C and N from unit specification strings. C and N in the udunits specification strings do not represent Carbon and Nitrogen. Also updates the copy of the output_spec.csv file provided in the inst directory to match the default copy from dvmdostem-v0.2.1 and up for both units strings and changed dvmdostem output variables (was SOC, now outputs SOMA, SOMCR, SOMPR, SOMRAWC). --- .../R/pecan2dvmdostem_variable_mapping.R | 36 ++-- models/dvmdostem/inst/output_spec.csv | 203 ++++++++++-------- 2 files changed, 129 insertions(+), 110 deletions(-) diff --git a/models/dvmdostem/R/pecan2dvmdostem_variable_mapping.R b/models/dvmdostem/R/pecan2dvmdostem_variable_mapping.R index 63441ce591c..688cbdcf9f2 100644 --- a/models/dvmdostem/R/pecan2dvmdostem_variable_mapping.R +++ b/models/dvmdostem/R/pecan2dvmdostem_variable_mapping.R @@ -6,24 +6,24 @@ ##' that might depend on more than one DVMDOSTEM files. 
##' @export vmap_reverse <- list( - "GPP" = c(depends_on="GPP", longname="Gross Primary Productivity", newunits="kg C m-2 s-1"), - "NPP" = c(depends_on="NPP", longname="Net Primary Productivity", newunits="kg C m-2 s-1"), - "HeteroResp" = c(depends_on="RH", longname="Heterotrophic Respiration", newunits="kg C m-2 s-1"), - "AutoResp" = c(depends_on="RM,RG", longname="Autotrophic Respiration", newunits="kg C m-2 s-1"), - "SoilOrgC" = c(depends_on="DEEPC,SHLWC,SOC", longname="Soil Organic Carbon", newunits="kg C m-2"), + "GPP" = c(depends_on="GPP", longname="Gross Primary Productivity", newunits="kg m-2 s-1"), + "NPP" = c(depends_on="NPP", longname="Net Primary Productivity", newunits="kg m-2 s-1"), + "HeteroResp" = c(depends_on="RH", longname="Heterotrophic Respiration", newunits="kg m-2 s-1"), + "AutoResp" = c(depends_on="RM,RG", longname="Autotrophic Respiration", newunits="kg m-2 s-1"), + "SoilOrgC" = c(depends_on="SHLWC,SOMA,SOMCR,SOMPR,SOMRAWC", longname="Soil Organic Carbon", newunits="kg m-2"), "LAI" = c(depends_on="LAI", longname="Leaf Area Index", newunits="m2/m2"), - "VegC" = c(depends_on="VEGC", longname="Vegetation Carbon", newunits="kg C m-2"), - "DeepC" = c(depends_on="DEEPC", longname="Deep (amporphous) soil C", newunits="kg C m-2"), - "AvailN" = c(depends_on="AVLN", longname="Available Nitrogen", newunits="kg N m-2"), - "NetNMin" = c(depends_on="NETNMIN", longname="Net N Mineralization", newunits="kg N m-2 s-1"), - "NImmob" = c(depends_on="NIMMOB", longname="N Immobilization", newunits="kg N m-2 s-1"), - "NInput" = c(depends_on="NINPUT", longname="N Inputs to soil", newunits="kg N m-2 s-1"), - "NLost" = c(depends_on="NLOST", longname="N Lost from soil", newunits="kg N m-2 s-1"), - "NUptakeIn" = c(depends_on="NUPTAKEIN", longname="N Uptake ignoring N limitation", newunits="kg N m-2 s-1"), - "NUptakeSt" = c(depends_on="NUPTAKEST", longname="N Uptake Structural", newunits="kg N m-2 s-1"), - "NUptakeLab" = c(depends_on="NUPTAKELAB", longname="N Uptake Labile", newunits="kg N m-2 s-1"), - "OrgN" = c(depends_on="ORGN", longname="Total Soil Organic N", newunits="kg N m-2"), - "ShlwC" = c(depends_on="SHLWC", longname="Shallow (fibrous) soil C", newunits="kg C m-2"), - "VegN" = c(depends_on="VEGN", longname="Vegetation N", newunits="kg N m-2") + "VegC" = c(depends_on="VEGC", longname="Vegetation Carbon", newunits="kg m-2"), + "DeepC" = c(depends_on="DEEPC", longname="Deep (amorphous) soil C", newunits="kg m-2"), + "AvailN" = c(depends_on="AVLN", longname="Available Nitrogen", newunits="kg m-2"), + "NetNMin" = c(depends_on="NETNMIN", longname="Net N Mineralization", newunits="kg m-2 s-1"), + "NImmob" = c(depends_on="NIMMOB", longname="N Immobilization", newunits="kg m-2 s-1"), + "NInput" = c(depends_on="NINPUT", longname="N Inputs to soil", newunits="kg m-2 s-1"), + "NLost" = c(depends_on="NLOST", longname="N Lost from soil", newunits="kg m-2 s-1"), + "NUptakeIn" = c(depends_on="INNUPTAKE", longname="N Uptake ignoring N limitation", newunits="kg m-2 s-1"), + "NUptakeSt" = c(depends_on="NUPTAKEST", longname="N Uptake Structural", newunits="kg m-2 s-1"), + "NUptakeLab" = c(depends_on="NUPTAKELAB", longname="N Uptake Labile", newunits="kg m-2 s-1"), + "OrgN" = c(depends_on="ORGN", longname="Total Soil Organic N", newunits="kg m-2"), + "ShlwC" = c(depends_on="SHLWC", longname="Shallow (fibrous) soil C", newunits="kg m-2"), + "VegN" = c(depends_on="VEGN", longname="Vegetation N", newunits="kg m-2") #"" = c(depends_on="", longname="", newunits="") ) \ No newline at end of file
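The unit-string hazard described in the commit message can be checked directly with the udunits2 R bindings (illustrative snippet):

```r
library(udunits2)

# "C" and "N" are valid udunits symbols, but they mean coulomb and newton,
# not carbon and nitrogen, so the old strings parsed to the wrong dimensions.
ud.is.parseable("kg C m-2")               # TRUE, but reads as kilogram-coulombs per m2
ud.are.convertible("kg C m-2", "kg m-2")  # FALSE: the stray coulomb blocks conversion
ud.are.convertible("kg m-2", "g/m2")      # TRUE: plain mass per area, as intended
ud.convert(1, "g/m2", "kg m-2")           # 0.001
```
diff --git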
a/models/dvmdostem/inst/output_spec.csv b/models/dvmdostem/inst/output_spec.csv index 3eb6f80052c..a0d5a0be8f6 100644 --- a/models/dvmdostem/inst/output_spec.csv +++ b/models/dvmdostem/inst/output_spec.csv @@ -1,92 +1,111 @@ -Name,Description,Units,Yearly,Monthly,Daily,PFT,Compartments,Layers,Placeholder -ALD,Soil active layer depth,m,y,invalid,invalid,invalid,invalid,invalid, -AVLN,Total soil available N,gN/m2,y,,invalid,invalid,invalid,l, -BURNAIR2SOIN,Nitrogen deposit from fire emission,gN/m2/time,y,,invalid,invalid,invalid,invalid, -BURNSOIC,Burned soil C,gC/m2/time,y,,invalid,invalid,invalid,invalid, -BURNSOILN,Burned soil N,gN/m2/time,y,,invalid,invalid,invalid,invalid, -BURNTHICK,Ground burn thickness ,m,y,,invalid,invalid,invalid,invalid, -BURNVEG2AIRC,Burned vegetation C to atmosphere,gC/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2AIRN,Burned vegetation N to atmosphere,gN/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2DEADC,Burned vegetation C to standing dead C,gC/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2DEADN,Burned vegetation N to standing dead N,gN/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2SOIABVC,Burned vegetation C to soil above,gC/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2SOIABVN,Burned vegetation N to soil above,gN/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2SOIBLWC,Burned vegetation C to soil below,gC/m2/time,y,,invalid,invalid,invalid,invalid, -BURNVEG2SOIBLWN,Burned vegetation N to soil below,gN/m2/time,y,,invalid,invalid,invalid,invalid, -DEADC,Standing dead C,gC/m2/time,y,,invalid,invalid,invalid,invalid, -DEADN,Standing dead N,gN/m2/time,y,,invalid,invalid,invalid,invalid, -DEEPC,Amorphous SOM C,gC/m2,y,,invalid,invalid,invalid,invalid, -DEEPDZ,Amorphous SOM layer thickness,m,y,invalid,invalid,invalid,invalid,invalid, -DWDC,Dead woody debris C,gC/m2/time,y,,invalid,invalid,invalid,invalid, -DWDN,Dead woody debris N,gN/m2/time,y,,invalid,invalid,invalid,invalid, -EET,Actual ET,mm/m2/time,y,,,,invalid,invalid, -GPP,GPP,gC/m2/time,y,,invalid,,,invalid, -GROWEND,Growing ending day,DOY,y,invalid,invalid,invalid,invalid,invalid, -GROWSTART,Growing starting day,DOY,y,invalid,invalid,invalid,invalid,invalid, -HKDEEP,Hydraulic conductivity in amorphous layer,mm/day,y,,,invalid,invalid,invalid, -HKLAYER,Hydraulic conductivity by layer,mm/day,y,,invalid,invalid,invalid,l, -HKMINEA,Hydraulic conductivity top mineral,mm/day,y,,,invalid,invalid,invalid, -HKMINEB,Hydraulic conductivity middle mineral,mm/day,y,,,invalid,invalid,invalid, -HKMINEC,Hydraulic conductivity bottom mineral,mm/day,y,,,invalid,invalid,invalid, -HKSHLW,Hydraulic conductivity in fibrous layer,mm/day,y,,,invalid,invalid,invalid, -INGPP,GPP without N limitation,gC/m2/time,y,,invalid,,,invalid, -INNPP,NPP witout N limitation,gC/m2/time,y,,invalid,,,invalid, -LAI,LAI,m2/m2,y,,invalid,,invalid,invalid, -LAYERDEPTH,Layer depth from the surface,,y,,invalid,invalid,invalid,l, -LAYERDZ,Thickness of layer,,y,,invalid,invalid,invalid,l, -LAYERTYPE,0:moss 1:shlw 2:deep 3:mineral,,y,,invalid,invalid,invalid,l, -LTRFALC,Litterfall C,gC/m2/time,y,,invalid,,,invalid, -LTRFALN,Litterfall N,gN/m2/time,y,,invalid,,,invalid, -MINEC,Mineral SOM C,gC/m2,y,,invalid,invalid,invalid,invalid, -MOSSDEATHC,Moss death carbon,,y,,invalid,invalid,invalid,invalid, -MOSSDEATHN,Moss death nitrogen,,y,,invalid,invalid,invalid,invalid, -MOSSDZ,Moss thickness,m,y,invalid,invalid,invalid,invalid,invalid, -NDRAIN,N losses from drainage 
(AVLN),gN/m2/time,invalid,invalid,invalid,invalid,invalid,invalid, -NETNMIN,Soil net N mineralization,gN/m2/time,y,,invalid,invalid,invalid,, -NIMMOB,Nitrogen immobilization,gN/m2/time,y,,invalid,invalid,invalid,, -NINPUT,N inputs into soil (AVLN),gN/m2/time,y,,invalid,invalid,invalid,invalid, -NLOST,N losses from soil (AVLN),gN/m2/time,y,,invalid,invalid,invalid,invalid, -NPP,NPP,gC/m2/time,y,,invalid,,,invalid, -NUPTAKEIN,Unlimited N uptake by plants. N uptake by happy plants when N is not limited.,gN/m2/time,y,,invalid,,invalid,invalid, -NUPTAKELAB,Labile N uptake by plants,gN/m2/time,y,,invalid,,invalid,invalid, -NUPTAKEST,Structural N uptake by plants,gN/m2/time,y,,invalid,,,invalid, -ORGN,Total soil organic N,gN/m2,y,,invalid,invalid,invalid,, -PERMAFROST,Permafrost (1 or 0),,y,invalid,invalid,invalid,invalid,invalid, -PET,Potential ET,mm/m2/time,y,,,,invalid,invalid, -QDRAINAGE,Water drainage quotient (~ratio),,y,,invalid,invalid,invalid,invalid, -QINFILTRATION,Water infiltration quotient (~ratio),,y,,invalid,invalid,invalid,invalid, -QRUNOFF,Water runoff quotient (~ratio),,y,,invalid,invalid,invalid,invalid, -RG,Growth respiration,gC/m2/time,y,m,invalid,p,c,invalid, -RH,Heterotrophic respiration,gC/m2/time,y,,invalid,invalid,invalid,, -RM,Maintenance respiration,gC/m2/time,y,m,invalid,p,c,invalid, -ROLB,Relative organic layer burned,gC/gC,y,,invalid,invalid,invalid,invalid, -SHLWC,Fibrous SOM C,gC/m2,y,,invalid,invalid,invalid,invalid, -SHLWDZ,Fibrous SOM layer thickness,m,y,invalid,invalid,invalid,invalid,invalid, -SNOWEND,DOY of last snow fall,DOY,y,invalid,invalid,invalid,invalid,invalid, -SNOWSTART,DOY of first snow fall,DOY,y,invalid,invalid,invalid,invalid,invalid, -SNOWTHICK,Snow pack thickness,m,y,,,invalid,invalid,invalid, -SOC,Soil organic C,gC/m2,y,,invalid,invalid,invalid,, -SWE,Snow water equivalent,kg/m2,y,,,invalid,invalid,invalid, -TCDEEP,Thermal conductivity in amorphous layer,W/m/K,y,,,invalid,invalid,invalid, -TCLAYER,Thermal conductivity by layer,W/m/K,y,,invalid,invalid,invalid,l, -TCMINEA,Thermal conductivity top mineral,W/m/K,y,,,invalid,invalid,invalid, -TCMINEB,Thermal conductivity middle mineral,W/m/K,y,,,invalid,invalid,invalid, -TCMINEC,Thermal conductivity bottom mineral,W/m/K,y,,,invalid,invalid,invalid, -TCSHLW,Thermal conductivity in fibrous layer,W/m/K,y,,,invalid,invalid,invalid, -TDEEP,Temperature in amorphous layer,oC,y,,,invalid,invalid,invalid, -TLAYER,Temperature by layer,oC,y,,invalid,invalid,invalid,l, -TMINEA,Temperature top mineral,oC,y,,,invalid,invalid,invalid, -TMINEB,Temperature middle mineral,oC,y,,,invalid,invalid,invalid, -TMINEC,Temperature bottom mineral,oC,y,,,invalid,invalid,invalid, -TSHLW,Temperature in fibrous layer,oC,y,,,invalid,invalid,invalid, -VEGC,Total veg. biomass C,gC/m2,y,,invalid,,,invalid, -VEGN,Total veg. 
biomass N,gN/m2,y,,invalid,,,invalid, -VWCDEEP,VWC in amorphous layer,m3/m3,y,,,invalid,invalid,invalid, -VWCLAYER,VWC by layer,m3/m3,y,,invalid,invalid,invalid,l, -VWCMINEA,VWC top mineral,m3/m3,y,,,invalid,invalid,invalid, -VWCMINEB,VWC middle mineral,m3/m3,y,,,invalid,invalid,invalid, -VWCMINEC,VWC bottom mineral,m3/m3,y,,,invalid,invalid,invalid, -VWCSHLW,Volume water content (vwc) in fibrous layer,m3/m3,y,,,invalid,invalid,invalid, -WATERTAB,Water table,m,y,,,invalid,invalid,invalid, -WDRH,Dead woody debris HR,gC/m2/time,y,,invalid,invalid,invalid,invalid, -YSD,Years since last disturbance,Years,y,invalid,invalid,invalid,invalid,invalid, +Name,Description,Units,Yearly,Monthly,Daily,PFT,Compartments,Layers,Data Type,Placeholder +ALD,Soil active layer depth,m,,invalid,invalid,invalid,invalid,invalid,double, +AVLN,Total soil available N,g/m2,,,invalid,invalid,invalid,,double, +BURNAIR2SOIN,Nitrogen deposit from fire emission,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNSOIC,Burned soil C,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNSOILN,Burned soil N,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNTHICK,Ground burn thickness ,m,,,invalid,invalid,invalid,invalid,double, +BURNVEG2AIRC,Burned vegetation C to atmosphere,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2AIRN,Burned vegetation N to atmosphere,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2DEADC,Burned vegetation C to standing dead C,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2DEADN,Burned vegetation N to standing dead N,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2SOIABVC,Burned vegetation C to soil above,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2SOIABVN,Burned vegetation N to soil above,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2SOIBLWC,Burned vegetation C to soil below,g/m2/time,,,invalid,invalid,invalid,invalid,double, +BURNVEG2SOIBLWN,Burned vegetation N to soil below,g/m2/time,,,invalid,invalid,invalid,invalid,double, +DEADC,Standing dead C,g/m2/time,,,invalid,invalid,invalid,invalid,double, +DEADN,Standing dead N,g/m2/time,,,invalid,invalid,invalid,invalid,double, +DEEPC,Amorphous SOM C,g/m2,,,invalid,invalid,invalid,invalid,double, +DEEPDZ,Amorphous SOM layer thickness,m,,invalid,invalid,invalid,invalid,invalid,double, +DRIVINGNIRR,Input driving NIRR,W/m2,invalid,invalid,,invalid,invalid,invalid, +DRIVINGRAINFALL,Input driving precip data after being split from snowfall,mm,,,,invalid,invalid,invalid,float, +DRIVINGSNOWFALL,Input driving precip data after being split from rainfall,mm,,,,invalid,invalid,invalid,float, +DRIVINGTAIR,Input driving air temperature data,degree_C,invalid,invalid,,invalid,invalid,invalid,float, +DRIVINGVAPO,Input driving vapor pressure,hPa,invalid,invalid,,invalid,invalid,invalid,float, +DWDC,Dead woody debris C,g/m2/time,,,invalid,invalid,invalid,invalid,double, +DWDN,Dead woody debris N,g/m2/time,,,invalid,invalid,invalid,invalid,double, +EET,Actual ET,mm/m2/time,,,,,invalid,invalid, +FRONTSDEPTH,Depth of fronts,mm,,,,invalid,invalid,,double, +FRONTSTYPE,Front types,unitless,,,,invalid,invalid,,int, +GPP,GPP,g/m2/time,y,,invalid,,,invalid,double, +GROWEND,Growing ending day,DOY,,invalid,invalid,invalid,invalid,invalid,int, +GROWSTART,Growing starting day,DOY,,invalid,invalid,invalid,invalid,invalid,int, +HKDEEP,Hydraulic conductivity in amorphous layer,mm/day,,,,invalid,invalid,invalid,double, +HKLAYER,Hydraulic conductivity by 
layer,mm/day,,,invalid,invalid,invalid,,double, +HKMINEA,Hydraulic conductivity top mineral,mm/day,,,,invalid,invalid,invalid,double, +HKMINEB,Hydraulic conductivity middle mineral,mm/day,,,,invalid,invalid,invalid,double, +HKMINEC,Hydraulic conductivity bottom mineral,mm/day,,,,invalid,invalid,invalid,double, +HKSHLW,Hydraulic conductivity in fibrous layer,mm/day,,,,invalid,invalid,invalid,double, +INGPP,GPP without N limitation,g/m2/time,,,invalid,,,invalid,double, +INNPP,NPP without N limitation,g/m2/time,,,invalid,,,invalid,double, +INNUPTAKE,Unlimited N uptake by plants. N uptake by happy plants when N is not limited.,g/m2/time,,,invalid,,invalid,invalid,double, +IWCLAYER,IWC by layer,m3/m3,,,invalid,invalid,invalid,,double, +LAI,LAI,m2/m2,,,invalid,,invalid,invalid,double, +LATERALDRAINAGE,Qdrain per layer,mm,,,,invalid,invalid,,double, +LAYERDEPTH,Layer depth from the surface,,,,invalid,invalid,invalid,,double, +LAYERDZ,Thickness of layer,,,,invalid,invalid,invalid,,double, +LAYERTYPE,0:moss 1:shlw 2:deep 3:mineral,,,,invalid,invalid,invalid,,int, +LTRFALC,Litterfall C,g/m2/time,,,invalid,,,invalid,double, +LTRFALN,Litterfall N,g/m2/time,,,invalid,,,invalid,double, +LWCLAYER,LWC by layer,m3/m3,,,invalid,invalid,invalid,,double, +MINEC,Mineral SOM C,g/m2,,,invalid,invalid,invalid,invalid,double, +MOSSDEATHC,Moss death carbon,,,,invalid,invalid,invalid,invalid,double, +MOSSDEATHN,Moss death nitrogen,,,,invalid,invalid,invalid,invalid,double, +MOSSDZ,Moss thickness,m,,invalid,invalid,invalid,invalid,invalid,double, +NDRAIN,N losses from drainage (AVLN),g/m2/time,invalid,invalid,invalid,invalid,invalid,invalid,double, +NETNMIN,Soil net N mineralization,g/m2/time,,,invalid,invalid,invalid,,double, +NIMMOB,Nitrogen immobilization,g/m2/time,,,invalid,invalid,invalid,,double, +NINPUT,N inputs into soil (AVLN),g/m2/time,,,invalid,invalid,invalid,invalid,double, +NLOST,N losses from soil (AVLN),g/m2/time,,,invalid,invalid,invalid,invalid,double, +NPP,NPP,g/m2/time,,,invalid,,,invalid,double, +NRESORB,NRESORB,??,,,invalid,,,invalid,double, +NUPTAKELAB,Labile N uptake by plants,g/m2/time,,,invalid,,invalid,invalid,double, +NUPTAKEST,Structural N uptake by plants,g/m2/time,,,invalid,,,invalid,double, +ORGN,Total soil organic N,g/m2,,,invalid,invalid,invalid,,double, +PERCOLATION,percolation,,,,,invalid,invalid,,double, +PERMAFROST,Permafrost (1 or 0),,,invalid,invalid,invalid,invalid,invalid,int, +PET,Potential ET,mm/m2/time,,,,,invalid,invalid,double, +QDRAINAGE,Water drainage quotient (~ratio),,,,invalid,invalid,invalid,invalid,double, +QINFILTRATION,Water infiltration quotient (~ratio),,,,invalid,invalid,invalid,invalid,double, +QRUNOFF,Water runoff quotient (~ratio),,,,invalid,invalid,invalid,invalid,double, +RAINFALL,Total rainfall,mm,,,invalid,invalid,invalid,invalid,double, +RG,Growth respiration,g/m2/time,,,invalid,,,invalid,double, +RH,Heterotrophic respiration,g/m2/time,,,invalid,invalid,invalid,,double, +RM,Maintenance respiration,g/m2/time,,,invalid,,,invalid,double, +ROLB,Relative organic layer burned ratio C,g/g,,,invalid,invalid,invalid,invalid,double, +ROOTWATERUPTAKE,Water uptake by roots per layer,,,,,invalid,invalid,,double, +SHLWC,Fibrous SOM C,g/m2,,,invalid,invalid,invalid,invalid,double, +SHLWDZ,Fibrous SOM layer thickness,m,,invalid,invalid,invalid,invalid,invalid,double, +SNOWEND,DOY of last snow fall,DOY,,invalid,invalid,invalid,invalid,invalid,int, +SNOWFALL,Total snowfall,mm,,,invalid,invalid,invalid,invalid,double, +SNOWSTART,DOY of first snow 
fall,DOY,,invalid,invalid,invalid,invalid,invalid,int, +SNOWTHICK,Snow pack thickness,m,,,,invalid,invalid,invalid,double, +SOMA,Soil organic C active,g/m2,,,invalid,invalid,invalid,,double, +SOMCR,Soil organic C chemically resistant,g/m2,,,invalid,invalid,invalid,,double, +SOMPR,Soil organic C physically resistant,g/m2,,,invalid,invalid,invalid,,double, +SOMRAWC,Soil organic C raw,g/m2,,,invalid,invalid,invalid,,double, +SWE,Snow water equivalent,kg/m2,,,,invalid,invalid,invalid,double, +TCDEEP,Thermal conductivity in amorphous layer,W/m/K,,,,invalid,invalid,invalid,double, +TCLAYER,Thermal conductivity by layer,W/m/K,,,invalid,invalid,invalid,,double, +TCMINEA,Thermal conductivity top mineral,W/m/K,,,,invalid,invalid,invalid,double, +TCMINEB,Thermal conductivity middle mineral,W/m/K,,,,invalid,invalid,invalid,double, +TCMINEC,Thermal conductivity bottom mineral,W/m/K,,,,invalid,invalid,invalid,double, +TCSHLW,Thermal conductivity in fibrous layer,W/m/K,,,,invalid,invalid,invalid,double, +TDEEP,Temperature in amorphous layer,degree_C,,,,invalid,invalid,invalid,double, +TLAYER,Temperature by layer,degree_C,,,invalid,invalid,invalid,,double, +TMINEA,Temperature top mineral,degree_C,,,,invalid,invalid,invalid,double, +TMINEB,Temperature middle mineral,degree_C,,,,invalid,invalid,invalid,double, +TMINEC,Temperature bottom mineral,degree_C,,,,invalid,invalid,invalid,double, +TRANSPIRATION,Transpiration,mm/day,,,,invalid,invalid,invalid,double, +TSHLW,Temperature in fibrous layer,degree_C,,,,invalid,invalid,invalid,double, +VEGC,Total veg. biomass C,g/m2,,,invalid,,,invalid,double, +VEGN,Total veg. biomass N,g/m2,,,invalid,,,invalid,double, +VWCDEEP,VWC in amorphous layer,m3/m3,,,,invalid,invalid,invalid,double, +VWCLAYER,VWC by layer,m3/m3,,,invalid,invalid,invalid,,double, +VWCMINEA,VWC top mineral,m3/m3,,,,invalid,invalid,invalid,double, +VWCMINEB,VWC middle mineral,m3/m3,,,,invalid,invalid,invalid,double, +VWCMINEC,VWC bottom mineral,m3/m3,,,,invalid,invalid,invalid,double, +VWCSHLW,VWC in fibrous layer,m3/m3,,,,invalid,invalid,invalid,double, +WATERTAB,Water table depth below surface,m,,,,invalid,invalid,invalid,double, +WDRH,Dead woody debris HR,g/m2/time,,,invalid,invalid,invalid,invalid,double, +YSD,Years since last disturbance,Years,,invalid,invalid,invalid,invalid,invalid,int, From d855880cb26c13513911377448a86209a6c2964a Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Thu, 24 Oct 2019 15:57:59 -0500 Subject: [PATCH 1633/2289] Fix typo in log statement. --- models/dvmdostem/R/write.config.dvmdostem.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R index 3225a06b897..9abe20f32c5 100644 --- a/models/dvmdostem/R/write.config.dvmdostem.R +++ b/models/dvmdostem/R/write.config.dvmdostem.R @@ -350,7 +350,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. cur_cmtnum <- as.numeric(unlist(strsplit(cur_cmtname, "CMT"))[2]) if (cur_cmtname == cmtname) { # pass, evertthing ok - PEcAn.logger::logger.debug(paste0("AlL ok - CMTs of all the selected PFTs match.")) + PEcAn.logger::logger.debug(paste0("All ok - CMTs of all the selected PFTs match.")) } else { PEcAn.logger::logger.error(paste0("CMTs of selected PFTS do not match!!!")) stop() From a19b8741c5619a9e4c7c0ec0e9aaf60f1b0083e0 Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Wed, 13 Nov 2019 15:46:46 -0500 Subject: [PATCH 1634/2289] Add projected CO2 file setting to config template. 
Add comment in model2netcdf. --- models/dvmdostem/R/model2netcdf.dvmdostem.R | 4 ++++ models/dvmdostem/inst/config.js.template | 1 + 2 files changed, 5 insertions(+) diff --git a/models/dvmdostem/R/model2netcdf.dvmdostem.R b/models/dvmdostem/R/model2netcdf.dvmdostem.R index cbf3721686e..4fc1407d00e 100644 --- a/models/dvmdostem/R/model2netcdf.dvmdostem.R +++ b/models/dvmdostem/R/model2netcdf.dvmdostem.R @@ -164,6 +164,10 @@ model2netcdf.dvmdostem <- function(outdir, runstart, runend, pecan_requested_var #bad_px <- which(run_status < 0) } + # A less aggressive check here might be to see if enough of the transient + # and scenario runs completed to do the analysis we need... + + # Get the actual pixel coords of the cell that ran px <- which(run_status > 0, arr.ind = TRUE) # Returns x,y array indices px_X <- px[1] diff --git a/models/dvmdostem/inst/config.js.template b/models/dvmdostem/inst/config.js.template index 8d29a23f067..fb4a569047b 100644 --- a/models/dvmdostem/inst/config.js.template +++ b/models/dvmdostem/inst/config.js.template @@ -11,6 +11,7 @@ "drainage_file": "@INPUT_DATA_DIR@/drainage.nc", "soil_texture_file": "@INPUT_DATA_DIR@/soil-texture.nc", "co2_file": "@MET_DRIVER_DIR@/co2.nc", + "proj_co2_file": "@MET_DRIVER_DIR@/projected-co2.nc", "runmask_file": "@CUSTOM_RUN_MASK@/run-mask.nc", "topo_file": "@INPUT_DATA_DIR@/topo.nc", "fri_fire_file": "@INPUT_DATA_DIR@/fri-fire.nc", From 91bd9ccaa5ac75c92f8c0e86a3f4ce8b4c34321c Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Thu, 30 Apr 2020 11:28:04 -0500 Subject: [PATCH 1635/2289] Implement dvmdostem calibration parameters in write.config.dvmdostem.R To use this, in bety, create new PFTs that are the "calibraton variants" of the PFTs you'd like to calibrate for. The "variants" will have a .cal suffix on their bety name, i.e. CMT06-Sphagnum.cal and will only these calibration parameters set as related priors. First step toward using PEcAn to calibrate dvmdostem. Aiming to perform a sensitivity analysis across the calibration parameters to see which parameters are most important. One issue with this implementation is that the 5 soil parameters are included with each PFT, which will result in unnecessary runs. --- models/dvmdostem/R/write.config.dvmdostem.R | 54 +++++++++++++++++++-- 1 file changed, 51 insertions(+), 3 deletions(-) diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R index 9abe20f32c5..98460fc080e 100644 --- a/models/dvmdostem/R/write.config.dvmdostem.R +++ b/models/dvmdostem/R/write.config.dvmdostem.R @@ -66,7 +66,16 @@ setup.outputs.dvmdostem <- function(pecan_requested_outputs, pecan_outvars <- pecan_requested_outputs outspec_path <- dvmdostem_output_spec } + + # 4) user specified calibration, outvars and custom path + # NOT IMPLENTED YET + # 5) user specified calibration, custom path, but no vars + # NOT IMPLENTED YET + + # 6) user specified calibration, outvars, but no custom path + # NOT IMPLENTED YET + # Verify that the base output_spec file exists. if (! file.exists(outspec_path) ) { PEcAn.logger::logger.error("ERROR! 
The specified output spec file does not exist on this system!") @@ -293,7 +302,6 @@ enforce.runmask.cmt.vegmap.harmony <- function(siteDataPath, rundir, cmtnum){ ##' @importFrom rjson fromJSON toJSON ##' write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run.id) { - ## site information ## (Currently unused) site <- settings$run$site @@ -377,6 +385,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. dimveg_params <- paste(appbinary_path, "parameters", 'cmt_dimvegetation.txt', sep="/") envcanopy_params <- paste(appbinary_path, "parameters", 'cmt_envcanopy.txt', sep="/") bgcveg_params <- paste(appbinary_path, "parameters", 'cmt_bgcvegetation.txt', sep="/") + calparbgc_params <- paste(appbinary_path, "parameters", "cmt_calparbgc.txt", sep="/") # Call the helper script and write out the data to a temporary file # This gets just the block we are interested in (based on community type) @@ -401,11 +410,18 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. system2(paste0(appbinary_path,"/scripts/param_util.py"), args=(c("--dump-block-to-json", bgcveg_params, cmtnum)), stdout=bgcveg_jsonfile, wait=TRUE) - + + calparbgc_jsonfile <- file.path(local_rundir, "tmp",'dvmdostem-calparbgc.json') + PEcAn.logger::logger.info(paste0("calparbgc_jsonfile: ", calparbgc_jsonfile)) + system2(paste0(appbinary_path,"/scripts/param_util.py"), + args=(c("--dump-block-to-json", calparbgc_params, cmtnum)), + stdout=calparbgc_jsonfile, wait=TRUE) + # Read the json file into memory dimveg_jsondata <- fromJSON(paste(readLines(dimveg_jsonfile), collapse="")) envcanopy_jsondata <- fromJSON(paste(readLines(envcanopy_jsonfile), collapse="")) bgcveg_jsondata <- fromJSON(paste(readLines(bgcveg_jsonfile), collapse="")) + calparbgc_jsondata <- fromJSON(paste(readLines(calparbgc_jsonfile), collapse="")) # (2) # Overwrite parameter values with (ma-posterior) trait data from pecan @@ -441,8 +457,32 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. # name to make sure we are updating the correct spot in the json # data structure. pft_common_name <- unlist(strsplit(singlepft$name, "-"))[2] + # In addition, there can be variants, i.e. for calibration, where the name + # in betydb might be like this: CMT06-Decid.cal, and we need to strip off + # the variant. 
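  # Worked example using the name from the comment above: "CMT06-Decid.cal"
  # has already lost its "CMT06-" prefix via the "-" split earlier, leaving
  # "Decid.cal"; the "." split below (fixed = TRUE, so the dot is literal)
  # then drops the ".cal" variant suffix:
  #   unlist(strsplit("Decid.cal", ".", fixed = TRUE))[1]  # -> "Decid"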
+ pft_common_name <- unlist(strsplit(pft_common_name, ".", fixed = TRUE))[1] + #PEcAn.logger::logger.info(paste0("PFT Name: ",cmtname)) # too verbose if (identical(jd[[i]]$name, pft_common_name)) { + if (curr_trait == "cfall_leaf") { calparbgc_jsondata[[i]]$`cfall(0)` = traits[[curr_trait]] } + if (curr_trait == "cfall_stem") { calparbgc_jsondata[[i]]$`cfall(1)` = traits[[curr_trait]] } + if (curr_trait == "cfall_root") { calparbgc_jsondata[[i]]$`cfall(2)` = traits[[curr_trait]] } + if (curr_trait == "nfall_leaf") { calparbgc_jsondata[[i]]$`nfall(0)` = traits[[curr_trait]] } + if (curr_trait == "nfall_stem") { calparbgc_jsondata[[i]]$`nfall(1)` = traits[[curr_trait]] } + if (curr_trait == "nfall_root") { calparbgc_jsondata[[i]]$`nfall(2)` = traits[[curr_trait]] } + if (curr_trait == "krb_leaf") { calparbgc_jsondata[[i]]$`krb(0)` = traits[[curr_trait]] } + if (curr_trait == "krb_stem") { calparbgc_jsondata[[i]]$`krb(1)` = traits[[curr_trait]] } + if (curr_trait == "krb_root") { calparbgc_jsondata[[i]]$`krb(2)` = traits[[curr_trait]] } + if (curr_trait == "kra") { calparbgc_jsondata[[i]]$kra = traits[[curr_trait]] } + if (curr_trait == "frg") { calparbgc_jsondata[[i]]$frg = traits[[curr_trait]] } + if (curr_trait == "nmax") { calparbgc_jsondata[[i]]$nmax = traits[[curr_trait]] } + if (curr_trait == "cmax") { calparbgc_jsondata[[i]]$cmax = traits[[curr_trait]] } + if (curr_trait == "micbnup") { calparbgc_jsondata[[i]]$micbnup = traits[[curr_trait]] } + if (curr_trait == "kdcrawc") { calparbgc_jsondata[[i]]$kdcrawc = traits[[curr_trait]] } + if (curr_trait == "kdcsoma") { calparbgc_jsondata[[i]]$kdcsoma = traits[[curr_trait]] } + if (curr_trait == "kdcsompr") { calparbgc_jsondata[[i]]$kdcsompr = traits[[curr_trait]] } + if (curr_trait == "kdcsomcr") { calparbgc_jsondata[[i]]$kdcsomcr = traits[[curr_trait]] } + if (curr_trait == "SLA") { dimveg_jsondata[[i]]$sla = traits[[curr_trait]] } @@ -520,7 +560,9 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. bgcveg_exportJson <- toJSON(bgcveg_jsondata) write(bgcveg_exportJson, file.path(local_rundir, "tmp","bgcveg_newfile.json")) - + calparbgc_exportJson <- toJSON(calparbgc_jsondata) + write(calparbgc_exportJson, file.path(local_rundir, "tmp", "calparbgc_newfile.json")) + # (3) # Format a new dvmdostem parameter file using the new json file as a source. @@ -548,6 +590,12 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. system2(paste0(appbinary_path,"/scripts/param_util.py"), args=(c("--fmt-block-from-json", file.path(local_rundir, "tmp","bgcveg_newfile.json"), ref_file)), stdout=new_param_file, wait=TRUE) + + ref_file <- paste0(file.path(appbinary_path, "parameters/"), 'cmt_calparbgc.txt') + new_param_file <- paste0(file.path(local_rundir, "parameters/"), "cmt_calparbgc.txt") + system2(paste0(appbinary_path,"/scripts/param_util.py"), + args=(c("--fmt-block-from-json", file.path(local_rundir, "tmp","calparbgc_newfile.json"), ref_file)), + stdout=new_param_file, wait=TRUE) ## Cleanup rundir temp directory - comment out for debugging unlink(file.path(local_rundir, "tmp"), recursive = TRUE, force = FALSE) # comment out for debugging From 51fc345fee4f02ada5a06466a6f53c6b0b9c99aa Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Tue, 12 May 2020 15:37:51 -0500 Subject: [PATCH 1636/2289] Very rough start at adding code to handle dvmdostem calibration SA run. Bunch of hard coded stuff in here. Needs cleanup. 
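In outline, the calibration branch added in this patch copies the base output spec into the run directory, empties it, and re-enables only dvmdostem's calibration variables; a condensed R sketch of that flow (variable names as in the surrounding function, paths illustrative):

```r
# Assumes appbinary_path, outspec_path, and run_directory are set as in the
# surrounding function; outspec_utils.py ships with dvmdostem.
rs_outspec_path <- file.path(run_directory, "config", basename(outspec_path))
dir.create(dirname(rs_outspec_path), recursive = TRUE, showWarnings = FALSE)
file.copy(outspec_path, rs_outspec_path)

outspec_utils <- file.path(appbinary_path, "scripts/outspec_utils.py")
system2(outspec_utils, args = c("--empty", rs_outspec_path))           # clear all outputs
system2(outspec_utils, args = c(rs_outspec_path, "--enable-cal-vars")) # calibration set only
```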
--- models/dvmdostem/R/write.config.dvmdostem.R | 319 ++++++++++++-------- models/dvmdostem/inst/config.js.template | 3 +- models/dvmdostem/inst/pecan.dvmdostem.xml | 12 +- 3 files changed, 212 insertions(+), 122 deletions(-) diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R index 98460fc080e..a44976bdad7 100644 --- a/models/dvmdostem/R/write.config.dvmdostem.R +++ b/models/dvmdostem/R/write.config.dvmdostem.R @@ -21,6 +21,7 @@ ##' ##' @name setup.outputs.dvmdostem ##' @title Setup outputs to be generated by dvmdostem and analyzed by PEcAn. +##' @param dvmdostem_calibration a string with 'yes' or 'YES' ##' @param pecan_requested_outputs a space separated string of variables to process or NULL. ##' @param dvmdostem_output_spec a path to a custom output spec file or NULL. ##' @param run_directory a path to the direcotory containing the PEcAn run. @@ -31,12 +32,13 @@ ##' @export ##' @author Tobey Carman ##' -setup.outputs.dvmdostem <- function(pecan_requested_outputs, +setup.outputs.dvmdostem <- function(dvmdostem_calibration, + pecan_requested_outputs, dvmdostem_output_spec, run_directory, run_id, appbinary_path) { - + is.not.null <- function(x) !is.null(x) # helper for readability # 0) neither path or variables specified @@ -66,76 +68,101 @@ setup.outputs.dvmdostem <- function(pecan_requested_outputs, pecan_outvars <- pecan_requested_outputs outspec_path <- dvmdostem_output_spec } - - # 4) user specified calibration, outvars and custom path - # NOT IMPLENTED YET - - # 5) user specified calibration, custom path, but no vars - # NOT IMPLENTED YET - - # 6) user specified calibration, outvars, but no custom path - # NOT IMPLENTED YET - # Verify that the base output_spec file exists. - if (! file.exists(outspec_path) ) { - PEcAn.logger::logger.error("ERROR! The specified output spec file does not exist on this system!") - PEcAn.logger::logger.error(c("Cannot find file: ", outspec_path)) - stop() - } - - # Check that at least one variable is enabled. - if( length(unlist((strsplit(pecan_outvars, ",")))) < 1 ){ - PEcAn.logger::logger.error("ERROR! No output variables enabled!") - PEcAn.logger::logger.error("Try adding the tag to your pecan.xml file!") - stop() - } + # Calibraton run? + if (grepl(tolower(dvmdostem_calibration), 'yes', fixed = TRUE )) { + PEcAn.logger::logger.warn("Calibration run requested! Ignoring requested ", + "output variables and using pre-set dvmdostem ", + "calibration outputs list") + pecan_outvars <- "" + + # Copy the base file to a run-specific output spec file + if (! 
file.exists(file.path(run_directory, "config")) ) { + dir.create(file.path(run_directory, "config"), recursive = TRUE) + } + rs_outspec_path <- file.path(run_directory, "config/", basename(outspec_path)) + file.copy(outspec_path, rs_outspec_path) + + # Empty the run specific output spec file + system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c("--empty", rs_outspec_path)) + + # Turn on the dvmdostem calibration outputs + system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c(rs_outspec_path, "--enable-cal-vars")) - # Look up the "depends_on" in the output variable mapping, - # accumulate list of dvmdostem variables to turn on to support - # the requested variables in the pecan.xml tag - req_v_str <- "" - for (pov in unlist(lapply(unlist(strsplit(pecan_outvars, ",")), trimws))) { - #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]])) - req_v_str <- trimws(paste(req_v_str, vmap_reverse[[pov]][["depends_on"]], sep = ",")) - } - # Ugly, but basically jsut takes care of stripping out empty strings and - # making sure the final result is a 1D list, not nested. - req_v_str <- trimws(req_v_str) - req_v_list <- unlist(lapply(unlist(strsplit(req_v_str, ",")), function(x){x[!x== ""]})) + ret <- system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c(rs_outspec_path, "--summary"), + stdout=TRUE, stderr=TRUE) - # Check that all variables specified in list exist in the base output spec file. - a <- read.csv(outspec_path) - for (j in req_v_list) { - if (! j %in% a[["Name"]]) { - PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", outspec_path)) + req_v_str <- paste0(sapply(strsplit(ret, "\\s+"), function(a) a[2])[-1], collapse = " ") + + + + # done with calibration setup + } else { + # not a calibration run + # Verify that the base output_spec file exists. + if (! file.exists(outspec_path) ) { + PEcAn.logger::logger.error("ERROR! The specified output spec file does not exist on this system!") + PEcAn.logger::logger.error(c("Cannot find file: ", outspec_path)) stop() } - } - - # Copy the base file to a run-specific output spec file - if (! file.exists(file.path(run_directory, "config")) ) { - dir.create(file.path(run_directory, "config"), recursive = TRUE) - } - rs_outspec_path <- file.path(run_directory, "config/", basename(outspec_path)) - file.copy(outspec_path, rs_outspec_path) - - # A more sophisticated test will verify that all the variables - # are valid at the correct dimensions(month and year??) - - # Empty the run specific output spec file - system2(file.path(appbinary_path, "scripts/outspec_utils.py"), - args=c("--empty", rs_outspec_path)) - - # Fill the run specific output spec file according to list - for (j in req_v_list) { - system2(file.path(appbinary_path, "scripts/outspec_utils.py"), - args=c(rs_outspec_path, "--on", j, "y", "m")) - } + + # Check that at least one variable is enabled. + if( length(unlist((strsplit(pecan_outvars, ",")))) < 1 ){ + PEcAn.logger::logger.error("ERROR! 
No output variables enabled!") + PEcAn.logger::logger.error("Try adding the tag to your pecan.xml file!") + stop() + } + + # Look up the "depends_on" in the output variable mapping, + # accumulate list of dvmdostem variables to turn on to support + # the requested variables in the pecan.xml tag + req_v_str <- "" + for (pov in unlist(lapply(unlist(strsplit(pecan_outvars, ",")), trimws))) { + #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]])) + req_v_str <- trimws(paste(req_v_str, vmap_reverse[[pov]][["depends_on"]], sep = ",")) + } + # Ugly, but basically jsut takes care of stripping out empty strings and + # making sure the final result is a 1D list, not nested. + req_v_str <- trimws(req_v_str) + req_v_list <- unlist(lapply(unlist(strsplit(req_v_str, ",")), function(x){x[!x== ""]})) + + # Check that all variables specified in list exist in the base output spec file. + a <- read.csv(outspec_path) + for (j in req_v_list) { + if (! j %in% a[["Name"]]) { + PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", outspec_path)) + stop() + } + } + + # Copy the base file to a run-specific output spec file + if (! file.exists(file.path(run_directory, "config")) ) { + dir.create(file.path(run_directory, "config"), recursive = TRUE) + } + rs_outspec_path <- file.path(run_directory, "config/", basename(outspec_path)) + file.copy(outspec_path, rs_outspec_path) + + # A more sophisticated test will verify that all the variables + # are valid at the correct dimensions(month and year??) + + # Empty the run specific output spec file + system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c("--empty", rs_outspec_path)) + + # Fill the run specific output spec file according to list + for (j in req_v_list) { + system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c(rs_outspec_path, "--on", j, "y", "m")) + } + } # Done with non-calibration run setup # Print summary for debugging #system2(file.path(appbinary_path, "scripts/outspec_utils.py"), # args=c("-s", rs_outspec_path)) - + return(c(rs_outspec_path, req_v_str)) } @@ -154,7 +181,7 @@ setup.outputs.dvmdostem <- function(pecan_requested_outputs, ##' @author Shawn Serbin, Tobey Carman ##' convert.samples.dvmdostem <- function(trait_values) { - + if("SLA" %in% names(trait_values)) { # Convert from m2 / kg to m2 / g trait_values[["SLA"]] <- trait_values[["SLA"]] / 1000.0 @@ -202,20 +229,20 @@ convert.samples.dvmdostem <- function(trait_values) { ##' @author Tobey Carman ##' adjust.runmask.dvmdostem <- function(siteDataPath, rundir, pixel_X, pixel_Y) { - + # Copy the run-mask from the input data directory to the run directory system2(paste0("cp"), wait=TRUE, args=(c("-r", file.path(siteDataPath, 'run-mask.nc'), file.path(rundir, 'run-mask.nc')))) - + # # Turn off all pixels except the 0,0 pixel in the mask # Can't seem to use this as python-netcdf4 is not available. WTF. # system2(paste0(file.path(appbinary_path, "scripts/runmask-util.py")), # wait=TRUE, # args=c("--reset", "--yx", pixel_Y, pixel_X, file.path(rundir, 'run-mask.nc'))) - + ## !!WARNING!! 
See note here: ## https://github.com/cran/ncdf4/blob/master/R/ncdf4.R ## Permalink: https://github.com/cran/ncdf4/blob/6eea28ce4e457054ff8d4cb90c58dce4ec765fd7/R/ncdf4.R#L1 @@ -233,9 +260,9 @@ adjust.runmask.dvmdostem <- function(siteDataPath, rundir, pixel_X, pixel_Y) { new_data[[strtoi(pixel_X), strtoi(pixel_Y)]] <- 1 ncdf4::ncvar_put(ncMaskFile, ncMaskFile$var$run, new_data, verbose=TRUE) ncdf4::nc_close(ncMaskFile) - + PEcAn.logger::logger.info(paste0("Set run mask pixel (y,x)=("),pixel_Y,",",pixel_X,")" ) - + } ##-------------------------------------------------------------------------------------------------# @@ -256,7 +283,7 @@ adjust.runmask.dvmdostem <- function(siteDataPath, rundir, pixel_X, pixel_Y) { ##' @author Tobey Carman ##' enforce.runmask.cmt.vegmap.harmony <- function(siteDataPath, rundir, cmtnum){ - + # Open the runmask and see which pixel is enabled ncRunMaskFile <- ncdf4::nc_open(file.path(rundir, "run-mask.nc"), write=FALSE) run_mask <- ncdf4::ncvar_get(ncRunMaskFile, ncRunMaskFile$var$run) @@ -269,7 +296,7 @@ enforce.runmask.cmt.vegmap.harmony <- function(siteDataPath, rundir, cmtnum){ PEcAn.logger::logger.error(c("Instead found ", length(enabled_px), " pixels in file: ", file.path(rundir, "run-mask.nc") ) ) stop() } - + # Open the input veg file, check that the pixel that is enabled in the # run mask is the right veg type to match the cmt/pft that is selected # for the run. @@ -280,7 +307,7 @@ enforce.runmask.cmt.vegmap.harmony <- function(siteDataPath, rundir, cmtnum){ PEcAn.logger::logger.error("STOPPING NOW TO PREVENT FUTURE HEARTACHE!") stop() } - + } ##-------------------------------------------------------------------------------------------------# @@ -302,11 +329,17 @@ enforce.runmask.cmt.vegmap.harmony <- function(siteDataPath, rundir, cmtnum){ ##' @importFrom rjson fromJSON toJSON ##' write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run.id) { + + # MAKE SURE TO USE PYTHON 3 FOR dvmdostem v0.5.0 AND UP!! + # Also on the ubuntu VM, there is a symlink from ~/bin/python to /usr/bin/python3 + Sys.setenv(PATH = paste(c("/home/carya/bin", Sys.getenv("PATH")), collapse = .Platform$path.sep)) + #PEcAn.logger::logger.info(system2("python", args="-V")) + ## site information ## (Currently unused) site <- settings$run$site site.id <- as.numeric(site$id) - + # Setup some local variables for this function for easily referencing # common locations for input, output, and the application binary. local_rundir <- file.path(settings$rundir, run.id) # on local machine for staging @@ -314,11 +347,11 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. outdir <- file.path(settings$host$outdir, run.id) appbinary <- settings$model$binary appbinary_path <- dirname(appbinary) # path of dvmdostem binary file - + # On the VM, these seem to be the same. PEcAn.logger::logger.info(paste0("local_rundir: ", local_rundir)) PEcAn.logger::logger.info(paste0("rundir: ", rundir)) - + # Copy the base set of dvmdostem parameters and configurations into the # run directory. Some of the values in these files will be overwritten in # subsequent steps, but copying everything up makes sure that all the @@ -333,7 +366,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. 
file.path(appbinary_path, 'config'), file.path(rundir, 'config')))) # this seems like a problem with below since we copy this first # and below ask if it exists and if so don't copy the template version - + if (dir.exists(file.path(rundir, 'parameters'))) { unlink(file.path(rundir, 'parameters'), recursive=TRUE) } @@ -342,15 +375,15 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. args=(c("-r", file.path(appbinary_path, 'parameters'), file.path(rundir, 'parameters')))) - - + + # Pull out the community name/number for use below in extracting # the correct block of data from the dvmdostem parameter files. # The settings$pfts$pft$name variable will be something like this: "CMT04-Salix" cmtname <- unlist(strsplit(settings$pfts$pft$name, "-", fixed=TRUE))[1] cmtnum <- as.numeric(unlist(strsplit(cmtname, "CMT"))[2]) # PEcAn.logger::logger.info(paste("cmtname: ", cmtname, " cmtnum: ", cmtnum)) - + # Check that all selected PFTs (from pecan.xml) have the same CMT number! for (pft in settings$pfts) { cur_pftname <- pft$name @@ -364,10 +397,10 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. stop() } } - + # (1) # Read in a parameter data block from dvmdostem - + # Now we have to read the appropriate values out of the trait_df # and get those values written into the parameter file(s) that dvmdostem will # need when running. Because the dvmdostem parameters have a sort of @@ -378,7 +411,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. # - Read dvmdostem parameter file into json object, load into memory # - Update the in-memory json object # - Write the json object back out to a new dvmdostem parameter file - + # Next, use a helper script distributed with dvmdostem to read the dvmdostem # parameter data into memory as a json object, using a temporaroy json file # to hold a representation of each dvmdostem parameter file. @@ -398,7 +431,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. system2(paste0(appbinary_path,"/scripts/param_util.py"), args=(c("--dump-block-to-json", dimveg_params, cmtnum)), stdout=dimveg_jsonfile, wait=TRUE) - + envcanopy_jsonfile <- file.path(local_rundir, "tmp",'dvmdostem-envcanopy.json') PEcAn.logger::logger.info(paste0("envcanopy_jsonfile: ", envcanopy_jsonfile)) system2(paste0(appbinary_path,"/scripts/param_util.py"), @@ -442,9 +475,9 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. # care of. So result will be something like this: # SW_albedo gcmax cuticular_cond SLA # 1.0 3.4 2.5 11.0 - + traits <- convert.samples.dvmdostem(trait.values[[singlepft$name]]) - + for (curr_trait in names(traits)) { for (jd in list(bgcveg_jsondata, envcanopy_jsondata, dimveg_jsondata)) { for (i in names(jd)) { @@ -464,6 +497,8 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. #PEcAn.logger::logger.info(paste0("PFT Name: ",cmtname)) # too verbose if (identical(jd[[i]]$name, pft_common_name)) { + PEcAn.logger::logger.info("Somewhere to stop...") + if (curr_trait == "cfall_leaf") { calparbgc_jsondata[[i]]$`cfall(0)` = traits[[curr_trait]] } if (curr_trait == "cfall_stem") { calparbgc_jsondata[[i]]$`cfall(1)` = traits[[curr_trait]] } if (curr_trait == "cfall_root") { calparbgc_jsondata[[i]]$`cfall(2)` = traits[[curr_trait]] } @@ -549,14 +584,14 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. 
} # end loop over different json structures } # end loop over traits } # end loop over pfts - + # Write it back out to disk (overwriting ok??) dimveg_exportJson <- toJSON(dimveg_jsondata) write(dimveg_exportJson, file.path(local_rundir, "tmp","dimveg_newfile.json")) - + envcanopy_exportJson <- toJSON(envcanopy_jsondata) write(envcanopy_exportJson, file.path(local_rundir, "tmp","envcanopy_newfile.json")) - + bgcveg_exportJson <- toJSON(bgcveg_jsondata) write(bgcveg_exportJson, file.path(local_rundir, "tmp","bgcveg_newfile.json")) @@ -565,20 +600,20 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. # (3) # Format a new dvmdostem parameter file using the new json file as a source. - + if (dir.exists(file.path(rundir, "parameters/"))) { # pass } else { print("No parameter/ directory in run directory! Need to create...") dir.create(file.path(rundir,"parameters" )) } - + ref_file <- paste0(file.path(appbinary_path, "parameters/"), 'cmt_dimvegetation.txt') new_param_file <- paste0(file.path(local_rundir, "parameters/"), "cmt_dimvegetation.txt") system2(paste0(appbinary_path,"/scripts/param_util.py"), args=(c("--fmt-block-from-json", file.path(local_rundir, "tmp","dimveg_newfile.json"), ref_file)), stdout=new_param_file, wait=TRUE) - + ref_file <- paste0(file.path(appbinary_path, "parameters/"), 'cmt_envcanopy.txt') new_param_file <- paste0(file.path(local_rundir, "parameters/"), "cmt_envcanopy.txt") system2(paste0(appbinary_path,"/scripts/param_util.py"), @@ -599,7 +634,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. ## Cleanup rundir temp directory - comment out for debugging unlink(file.path(local_rundir, "tmp"), recursive = TRUE, force = FALSE) # comment out for debugging - + # TODO: # [x] finish with parameter update process # [x] dynamically copy parameters to right place @@ -608,14 +643,14 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. # - figure out how to handle the met. # -> step one is symlink from raw data locations (Model install folder) # into pecan run folder, maybe do this within job.sh? - + # Here we set up two things: # 1) the paths to the met drivers (temperature, precip, etc) # 2) paths to the other input data files that dvmdostem requires (soil maps, # vegetation maps, topography etc). # This will allow us to source the meteorology data from PEcAn (or BetyDB) and # collect the other inputs from a different location. - + # Met info met_driver_dir <- dirname(settings$run$inputs$met$path) # Not sure what happens here if the site is selected from the @@ -634,7 +669,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. } PEcAn.logger::logger.info(paste0("Using siteDataPath: ", siteDataPath)) PEcAn.logger::logger.info(paste0("Using met_driver_path: ", met_driver_dir)) - + # Check the size of the input dataset(s) # 1) met data set and other site data are the same size/shape (what if the # met comes from PEcAn/Bety and is single site and the other inputs come @@ -644,7 +679,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. # 3) if the incoming datasets ARE single pixel, then no runmask # adjustment, but warn user if they have set dvmdostem_pixel_x # and dvmdostem_pixel_y tags in the xml file - + if (is.null(settings$model$dvmdostem_pixel_y)){ pixel_Y <- 1 } else { @@ -655,12 +690,12 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. 
} else { pixel_X <- settings$model$dvmdostem_pixel_x } - + # First, turn on a specific pixel in the run mask. # In the case of a single pixel dataset, this will ensure that the # pixel is set to run. adjust.runmask.dvmdostem(siteDataPath, rundir, pixel_X, pixel_Y) - + # If the user has not explicity said to force the CMT type in # their xml settings, then we have to look at the run mask, figure out # which pixel is enabled, and then check the corresponding pixel in the @@ -672,18 +707,56 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. print("Enforcing harmony between the runmask, vegmap, and required CMT type.") enforce.runmask.cmt.vegmap.harmony(siteDataPath, rundir, cmtnum) } + + # if (!is.null(settings$model$dvmdostem_calibration)){ + # if (grepl(settings$model$dvmdostem_calibration, "yes", ignore.case = TRUE)) { + # # Ignore the following: + # # settings$model$dvmdostem_output_spec + # # settings$model$dvmdostem_pecan_outputs variables + # # Copy the base file to a run-specific output spec file + # if (! file.exists(file.path(rundir, "config")) ) { + # dir.create(file.path(rundir, "config"), recursive = TRUE) + # } + # rs_outspec_path <- file.path(rundir, "config/", basename(outspec_path)) + # file.copy(outspec_path, rs_outspec_path) + # + # + # } else { + # + # } + + # # A more sophisticated test will verify that all the variables + # # are valid at the correct dimensions(month and year??) + # + # # Empty the run specific output spec file + # system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + # args=c("--empty", rs_outspec_path)) + # + # # Fill the run specific output spec file according to list + # for (j in req_v_list) { + # system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + # args=c(rs_outspec_path, "--on", j, "y", "m")) + # } + # + # # Print summary for debugging + # #system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + # # args=c("-s", rs_outspec_path)) + # + # return(c(rs_outspec_path, req_v_str)) + # setup the variables to output based on tags in xml file. v <- setup.outputs.dvmdostem( - settings$model$dvmdostem_pecan_outputs, - settings$model$dvmdostem_output_spec, - rundir, run.id, appbinary_path + settings$model$dvmdostem_calibration, + settings$model$dvmdostem_pecan_outputs, + settings$model$dvmdostem_output_spec, + rundir, run.id, appbinary_path ) rs_outspec_path <- v[1] PEcAn.logger::logger.info(paste0("Will be generating the following dvmdostem output variables: ", v[2])) - + ## Update dvm-dos-tem config.js file - + # Get a copy of the config file written into the run directory with the # appropriate template parameters substituted. if (!is.null(settings$model$configtemplate) && file.exists(settings$model$configtemplate)) { @@ -691,7 +764,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. } else { config_template <- readLines(con=system.file("config.js.template", package = "PEcAn.dvmdostem"), n=-1) } - + config_template <- gsub("@MET_DRIVER_DIR@", met_driver_dir, config_template) #config_template <- gsub("@INPUT_DATA_DIR@", file.path(dirname(appbinary), siteDataPath), config_template) config_template <- gsub("@INPUT_DATA_DIR@", siteDataPath, config_template) @@ -699,20 +772,26 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. 
config_template <- gsub("@CUSTOM_RUN_MASK@", file.path(rundir), config_template) config_template <- gsub("@CUSTOM_OUTSPEC@", file.path("config/", basename(rs_outspec_path)), config_template) config_template <- gsub("@DYNAMIC_MODELED_LAI@", settings$model$dvmdostem_dynamic_modeled_lai, config_template) - + if (grepl(tolower(settings$model$dvmdostem_calibration), "yes", fixed=TRUE)) { + config_template <- gsub("@EQ_NC_OUTPUT@", 1, config_template) # WARNING, output volume can be prohibitive! + config_template <- gsub("@NC_OUTPUT_LAST_N_EQ@", settings$model$dvmdostem_nc_output_last_n_eq, config_template) + } else { + config_template <- gsub("@EQ_NC_OUTPUT@", 0, config_template) + config_template <- gsub("@NC_OUTPUT_LAST_N_EQ@", -1, config_template) + } if (! file.exists(file.path(settings$rundir, run.id,"config")) ) { dir.create(file.path(settings$rundir, run.id,"config"),recursive = TRUE) } - + writeLines(config_template, con=file.path(settings$rundir, run.id,"config/config.js")) - + ### create launch script (which will create symlink) - needs to be created if (!is.null(settings$model$jobtemplate) && file.exists(settings$model$jobtemplate)) { jobsh <- readLines(con=settings$model$jobtemplate, n=-1) } else { jobsh <- readLines(con=system.file("job.sh.template", package = "PEcAn.dvmdostem"), n=-1) } - + ### create host specific setttings - stubbed for now, nothing to do yet, ends up as empty ### string that is put into the job.sh file hostsetup <- "" @@ -722,7 +801,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. if (!is.null(settings$host$prerun)) { hostsetup <- paste(hostsetup, sep="\n", paste(settings$host$prerun, collapse="\n")) } - + hostteardown <- "" if (!is.null(settings$model$postrun)) { hostteardown <- paste(hostteardown, sep="\n", paste(settings$model$postrun, collapse="\n")) @@ -730,14 +809,14 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. if (!is.null(settings$host$postrun)) { hostteardown <- paste(hostteardown, sep="\n", paste(settings$host$postrun, collapse="\n")) } - + jobsh <- gsub("@HOST_SETUP@", hostsetup, jobsh) jobsh <- gsub("@HOST_TEARDOWN@", hostteardown, jobsh) - + jobsh <- gsub("@RUNDIR@", rundir, jobsh) jobsh <- gsub("@OUTDIR@", outdir, jobsh) jobsh <- gsub("@BINARY@", appbinary, jobsh) - + ## model specific options from the pecan.xml file # setup defaults if missing - may not want to do this long term if (is.null(settings$model$dvmdostem_prerun)){ @@ -751,7 +830,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. } else { jobsh <- gsub("@EQUILIBRIUM@", settings$model$dvmdostem_equil, jobsh) } - + if (is.null(settings$model$dvmdostem_spinup)){ jobsh <- gsub("@SPINUP@", 450, jobsh) } else { @@ -763,19 +842,19 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. 
} else { # could just invoke a stop() here for these if missing and provide an error message jobsh <- gsub("@TRANSIENT@", settings$model$dvmdostem_transient, jobsh) } - + if (is.null(settings$model$dvmdostem_scenerio)){ jobsh <- gsub("@SCENERIO@", 91, jobsh) } else { jobsh <- gsub("@SCENERIO@", settings$model$dvmdostem_scenerio, jobsh) } - + if (is.null(settings$model$dvmdostem_loglevel)){ jobsh <- gsub("@LOGLEVEL@", "err", jobsh) } else { jobsh <- gsub("@LOGLEVEL@", settings$model$dvmdostem_loglevel, jobsh) } - + if (is.null(settings$model$dvmdostem_forcecmtnum)){ PEcAn.logger::logger.info("Using vegetation.nc input file to determine community type of pixel...") PEcAn.logger::logger.warn("The CMT type of your selected PFT must match the CMT type in the input veg file for the selected pixel!") @@ -784,7 +863,7 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. PEcAn.logger::logger.info("FORCING cmt type to match selected PFT. IGNORING vegetation.nc map!") jobsh <- gsub("@FORCE_CMTNUM@", paste0("--force-cmt ", cmtnum), jobsh) } - + jobsh <- gsub("@PECANREQVARS@", settings$model$dvmdostem_pecan_outputs, jobsh) # Really no idea what the defaults should be for these if the user @@ -794,16 +873,16 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run. } else { jobsh <- gsub("@RUNSTART@", settings$run$start.date, jobsh) } - + if (is.null(settings$run$end.date)) { jobsh <- gsub("@RUNEND@", "", jobsh) } else { jobsh <- gsub("@RUNEND@", settings$run$end.date, jobsh) } - + writeLines(jobsh, con=file.path(settings$rundir, run.id,"job.sh")) Sys.chmod(file.path(settings$rundir, run.id,"job.sh")) - + } # end of function #------------------------------------------------------------------------------------------------# ### EOF diff --git a/models/dvmdostem/inst/config.js.template b/models/dvmdostem/inst/config.js.template index fb4a569047b..90577e37aed 100644 --- a/models/dvmdostem/inst/config.js.template +++ b/models/dvmdostem/inst/config.js.template @@ -21,7 +21,8 @@ "output_dir": "@MODEL_OUTPUT_DIR@/", "output_spec_file": "@CUSTOM_OUTSPEC@", "output_monthly": 0, //JSON specific - "output_nc_eq": 0, + "nc_output_last_n_eq":@NC_OUTPUT_LAST_N_EQ@, + "output_nc_eq": @EQ_NC_OUTPUT@, "output_nc_sp": 0, "output_nc_tr": 1, "output_nc_sc": 1 diff --git a/models/dvmdostem/inst/pecan.dvmdostem.xml b/models/dvmdostem/inst/pecan.dvmdostem.xml index ee393d6e877..086e5d5219e 100644 --- a/models/dvmdostem/inst/pecan.dvmdostem.xml +++ b/models/dvmdostem/inst/pecan.dvmdostem.xml @@ -82,10 +82,20 @@ 109 91 - err + + yes + + + 10 + From 7dbe83b96c0b7f684ee6df5a2a9bc35caeaca2d4 Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Tue, 26 May 2020 15:09:54 -0500 Subject: [PATCH 1637/2289] Add code to summarize dvmdostem NetCDF outputs so they can be converted to PEcAn NetCDF outputs. PEcAn does not have the provision for per-pft or per compartment or per layer outputs, so if this resolution was enabled for the dvmdostem runs (as is necessary for a dvmdostem calibration run) then we have to summarize the dvmdostem outputs before writing data to the PEcAn files. 
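
As a minimal sketch of the summarization idea (array shape and values
made up for illustration):

    # Collapse a hypothetical (x, y, pft, time) dvmdostem output down to
    # (x, y, time) by summing across the PFT dimension, mirroring the
    # apply() calls added below.
    arr <- array(seq_len(2 * 2 * 3 * 4), dim = c(2, 2, 3, 4))  # x, y, pft, time
    summed <- apply(arr, c(1, 2, 4), sum)  # keep x, y, time; sum over pft
    dim(summed)  # 2 2 4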
---
 models/dvmdostem/R/model2netcdf.dvmdostem.R | 34 +++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/models/dvmdostem/R/model2netcdf.dvmdostem.R b/models/dvmdostem/R/model2netcdf.dvmdostem.R
index 4fc1407d00e..a14abf0fde5 100644
--- a/models/dvmdostem/R/model2netcdf.dvmdostem.R
+++ b/models/dvmdostem/R/model2netcdf.dvmdostem.R
@@ -75,6 +75,40 @@ write.data2pecan.file <- function(y_starts, outdir, pecan_requested_vars, monthl
     dim.order <- sapply(ncin_tr_y$var[[k]]$dim, function(x) x$name)
     starts <-c(y = px_Y, x = px_X, time = 1)

+    # What if a variable output from dvmdostem is by pft, by layer, or by pft
+    # and compartment (pftpart)?
+    #
+    # Guessing/assuming that this is a "calibration run" and that the
+    # dvmdostem calibration variables were enabled, which includes netCDF
+    # output by PFT and soil layer. These have to be summarized in order to be
+    # included in the pecan output.
+    #
+    # Due to the way R handles NetCDF files, it appears that the dimensions of
+    # vardata_new are (X, Y, PFT, TIME), even though in the NetCDF file, the
+    # dimensions are explicitly set as y, x, pft, time as recommended by the
+    # CF standards. In this case we want to sum over pft, which is the 3rd
+    # dimension in vardata_new. Note, to further confuse things, the
+    # ncdf4.helpers::nc.get.dim.names() function returns the following:
+    #
+    # Browse[4]> nc.get.dim.names(ncin_tr_y)
+    # [1] "time" "y"    "x"    "pft"
+    #
+    # But some testing in an interactive R session seems to indicate that
+    # the following apply function sums over PFTs as we want, and we end up
+    # with vardata_new being an array with the dimensions X, Y, time
+    if (length(dim(vardata_new)) == 5) {
+      PEcAn.logger::logger.debug("Summing 5-D output (x, y, pft, pftpart, time) over pft and pftpart")
+      vardata_new <- apply(vardata_new, c(1,2,5), function(x) sum(x))
+      dim.order <- dim.order[!dim.order %in% c('pft', 'pftpart')]
+    }
+    if (length(dim(vardata_new)) == 4){
+      PEcAn.logger::logger.debug("Summing 4-D output (x, y, pft-or-layer, time) over the third dimension")
+      vardata_new <- apply(vardata_new, c(1,2,4), function(x) sum(x))
+      dim.order <- dim.order[!dim.order %in% c('pft', 'layer')]
+    }
+    #if ('pft' %in% nc.get.dim.names(ncin_tr_y)) {}
+    #if ('layers' %in% nc.get.dim.names(ncin_tr_y)) {}
+    #if ('pft' %in% nc.get.dim.names(ncin_tr_y))
     if (TRUE %in% sapply(monthly_dvmdostem_outputs, function(x) grepl(paste0("^",k,"_"), x))) {
       # The current variable (j) is a monthly output

From 38214ea58205a1f0981552ead24d8831f3ca38a8 Mon Sep 17 00:00:00 2001
From: Tobey Carman
Date: Sat, 6 Jun 2020 18:09:12 -0400
Subject: [PATCH 1638/2289] Prepend PATH for working with python3 on modex.

---
 models/dvmdostem/R/write.config.dvmdostem.R | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R
index a44976bdad7..0fd6cda9308 100644
--- a/models/dvmdostem/R/write.config.dvmdostem.R
+++ b/models/dvmdostem/R/write.config.dvmdostem.R
@@ -332,7 +332,9 @@ write.config.dvmdostem <- function(defaults = NULL, trait.values, settings, run.

   # MAKE SURE TO USE PYTHON 3 FOR dvmdostem v0.5.0 AND UP!!
   # Also on the ubuntu VM, there is a symlink from ~/bin/python to /usr/bin/python3
+  # Same symlink setup on modex.
Sys.setenv(PATH = paste(c("/home/carya/bin", Sys.getenv("PATH")), collapse = .Platform$path.sep)) + Sys.setenv(PATH = paste(c("/home/tcarman/bin", Sys.getenv("PATH")), collapse = .Platform$path.sep)) #PEcAn.logger::logger.info(system2("python", args="-V")) ## site information From 0d9f1288edcc15621b5a33b6de49784055219e0a Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Sat, 6 Jun 2020 18:11:41 -0400 Subject: [PATCH 1639/2289] Change output setup function so that with the pecan.xml dvmdostem_calibration=yes, it will honor additional outputs specified on the command line. --- models/dvmdostem/R/write.config.dvmdostem.R | 33 +++++++++++++++++++-- 1 file changed, 31 insertions(+), 2 deletions(-) diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R index 0fd6cda9308..cd43cef641d 100644 --- a/models/dvmdostem/R/write.config.dvmdostem.R +++ b/models/dvmdostem/R/write.config.dvmdostem.R @@ -74,7 +74,7 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration, PEcAn.logger::logger.warn("Calibration run requested! Ignoring requested ", "output variables and using pre-set dvmdostem ", "calibration outputs list") - pecan_outvars <- "" + #pecan_outvars <- "" # Copy the base file to a run-specific output spec file if (! file.exists(file.path(run_directory, "config")) ) { @@ -91,13 +91,42 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration, system2(file.path(appbinary_path, "scripts/outspec_utils.py"), args=c(rs_outspec_path, "--enable-cal-vars")) + # Now enable anything in pecan_outvars that is not already enabled. + + # Look up the "depends_on" in the output variable mapping, + # accumulate list of dvmdostem variables to turn on to support + # the requested variables in the pecan.xml tag + req_v_str <- "" + for (pov in unlist(lapply(unlist(strsplit(pecan_outvars, ",")), trimws))) { + #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]])) + req_v_str <- trimws(paste(req_v_str, vmap_reverse[[pov]][["depends_on"]], sep = ",")) + } + # # Ugly, but basically jsut takes care of stripping out empty strings and + # making sure the final result is a 1D list, not nested. + req_v_str <- trimws(req_v_str) + req_v_list <- unlist(lapply(unlist(strsplit(req_v_str, ",")), function(x){x[!x== ""]})) + + # Check that all variables specified in list exist in the base output spec file. + a <- read.csv(rs_outspec_path) + for (j in req_v_list) { + if (! j %in% a[["Name"]]) { + PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", rs_outspec_path)) + stop() + } + } + + # Fill the run specific output spec file according to list + for (j in req_v_list) { + system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c(rs_outspec_path, "--on", j, "y", "m")) + } + ret <- system2(file.path(appbinary_path, "scripts/outspec_utils.py"), args=c(rs_outspec_path, "--summary"), stdout=TRUE, stderr=TRUE) req_v_str <- paste0(sapply(strsplit(ret, "\\s+"), function(a) a[2])[-1], collapse = " ") - # done with calibration setup } else { From 480c0dbe3bf4f17413836afef6b7f04e537df27c Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Fri, 12 Jun 2020 17:56:58 -0500 Subject: [PATCH 1640/2289] refactor logic to a function. moves the logic for translating from a "requested variable string" as is obtained from the pecan.xml file (settings object) to a list of dvmdostem variables to enable into a function for reuse. 
---
 models/dvmdostem/R/write.config.dvmdostem.R | 71 +++++++++------------
 1 file changed, 29 insertions(+), 42 deletions(-)

diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R
index cd43cef641d..f008f6f6459 100644
--- a/models/dvmdostem/R/write.config.dvmdostem.R
+++ b/models/dvmdostem/R/write.config.dvmdostem.R
@@ -92,28 +92,8 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration,
             args=c(rs_outspec_path, "--enable-cal-vars"))

     # Now enable anything in pecan_outvars that is not already enabled.
-
-    # Look up the "depends_on" in the output variable mapping,
-    # accumulate list of dvmdostem variables to turn on to support
-    # the requested variables in the pecan.xml <dvmdostem_pecan_outputs> tag
-    req_v_str <- ""
-    for (pov in unlist(lapply(unlist(strsplit(pecan_outvars, ",")), trimws))) {
-      #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]]))
-      req_v_str <- trimws(paste(req_v_str, vmap_reverse[[pov]][["depends_on"]], sep = ","))
-    }
-    # Ugly, but basically just takes care of stripping out empty strings and
-    # making sure the final result is a 1D list, not nested.
-    req_v_str <- trimws(req_v_str)
-    req_v_list <- unlist(lapply(unlist(strsplit(req_v_str, ",")), function(x){x[!x== ""]}))
+    req_v_list <- requested_vars_str2list(pecan_outvars, outspec_path = rs_outspec_path)

-    # Check that all variables specified in list exist in the base output spec file.
-    a <- read.csv(rs_outspec_path)
-    for (j in req_v_list) {
-      if (! j %in% a[["Name"]]) {
-        PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", rs_outspec_path))
-        stop()
-      }
-    }

     # Fill the run specific output spec file according to list
     for (j in req_v_list) {
@@ -145,27 +125,7 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration,
       stop()
     }

-    # Look up the "depends_on" in the output variable mapping,
-    # accumulate list of dvmdostem variables to turn on to support
-    # the requested variables in the pecan.xml <dvmdostem_pecan_outputs> tag
-    req_v_str <- ""
-    for (pov in unlist(lapply(unlist(strsplit(pecan_outvars, ",")), trimws))) {
-      #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]]))
-      req_v_str <- trimws(paste(req_v_str, vmap_reverse[[pov]][["depends_on"]], sep = ","))
-    }
-    # Ugly, but basically just takes care of stripping out empty strings and
-    # making sure the final result is a 1D list, not nested.
-    req_v_str <- trimws(req_v_str)
-    req_v_list <- unlist(lapply(unlist(strsplit(req_v_str, ",")), function(x){x[!x== ""]}))
-
-    # Check that all variables specified in list exist in the base output spec file.
-    a <- read.csv(outspec_path)
-    for (j in req_v_list) {
-      if (! j %in% a[["Name"]]) {
-        PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", outspec_path))
-        stop()
-      }
-    }
+    req_v_list <- requested_vars_str2list(pecan_outvars, outspec_path)

     # Copy the base file to a run-specific output spec file
     if (! file.exists(file.path(run_directory, "config")) ) {
@@ -195,6 +155,33 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration,
   return(c(rs_outspec_path, req_v_str))
 }

+requested_vars_str2list <- function(req_v_str, outspec_path){
+
+  # Look up the "depends_on" in the output variable mapping,
+  # accumulate list of dvmdostem variables to turn on to support
+  # the requested variables in the pecan.xml <dvmdostem_pecan_outputs> tag
+
+  depends_str <- ""
+  for (pov in unlist(lapply(unlist(strsplit(req_v_str, ",")), trimws))) {
+    #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]]))
+    depends_str <- trimws(paste(depends_str, vmap_reverse[[pov]][["depends_on"]], sep = ","))
+  }
+  # Ugly, but basically just takes care of stripping out empty strings and
+  # making sure the final result is a 1D list, not nested.
+  depends_str <- trimws(depends_str)
+  req_v_list <- unlist(lapply(unlist(strsplit(depends_str, ",")), function(x){x[!x== ""]}))
+
+  # Check that all variables specified in list exist in the base output spec file.
+  a <- read.csv(outspec_path)
+  for (j in req_v_list) {
+    if (! j %in% a[["Name"]]) {
+      PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", outspec_path))
+      stop()
+    }
+  }
+
+  return(req_v_list)
+}

From 7f0fea39894c5dd503658dc409194181e864e827 Mon Sep 17 00:00:00 2001
From: Tobey Carman
Date: Fri, 12 Jun 2020 18:00:41 -0500
Subject: [PATCH 1641/2289] Add logic so that calibration variables are left
 on and not... adjusted (potentially changing the output resolution) by an
 unnecessary system2 call to the dvmdostem outspec_utils.py script. **REQUIRES
 UPDATE FROM DVMDOSTEM!**

---
 models/dvmdostem/R/write.config.dvmdostem.R | 18 ++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R
index f008f6f6459..4582c1f1225 100644
--- a/models/dvmdostem/R/write.config.dvmdostem.R
+++ b/models/dvmdostem/R/write.config.dvmdostem.R
@@ -74,7 +74,6 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration,
     PEcAn.logger::logger.warn("Calibration run requested! Ignoring requested ",
                               "output variables and using pre-set dvmdostem ",
                               "calibration outputs list")
-    #pecan_outvars <- ""

     # Copy the base file to a run-specific output spec file
     if (!
file.exists(file.path(run_directory, "config")) ) {
@@ -94,11 +93,22 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration,

     # Now enable anything in pecan_outvars that is not already enabled.
     req_v_list <- requested_vars_str2list(pecan_outvars, outspec_path = rs_outspec_path)

+    # Figure out which variables are already 'ON' in order to support the calibration
+    # run. These will be at yearly resolution. We don't want to modify the output spec
+    # setting for any of the calibration variables, not to mention the redundant work
+    # (extra system call) to turn the variable on again.
+ a <- system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c(rs_outspec_path, "-s", "--csv"), stdout=TRUE) + con <- textConnection(a) + already_on <- read.csv(con, header = TRUE) - # Fill the run specific output spec file according to list for (j in req_v_list) { - system2(file.path(appbinary_path, "scripts/outspec_utils.py"), - args=c(rs_outspec_path, "--on", j, "y", "m")) + if (j %in% already_on$Name) { + PEcAn.logger::logger.info(paste0("Passing on ",j,"; it is already enabled..." )) + } else { + system2(file.path(appbinary_path, "scripts/outspec_utils.py"), + args=c(rs_outspec_path, "--on", j, "y", "m")) + } } ret <- system2(file.path(appbinary_path, "scripts/outspec_utils.py"), From 60fc4cb1f7b1fb744163544a17ca01cd677622f8 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Tue, 10 Nov 2020 16:54:52 -0500 Subject: [PATCH 1642/2289] MERRA: Change argument `redownload -> overwrite` Per @mdietze comment. --- modules/data.atmosphere/R/download.MERRA.R | 36 ++++++------------- modules/data.atmosphere/man/download.MERRA.Rd | 6 +--- 2 files changed, 12 insertions(+), 30 deletions(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index 0299c6fbc4b..f1ad38d06fb 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -1,17 +1,14 @@ #' Download MERRA data #' #' @inheritParams download.CRUNCEP -#' @param redownload Logical. If `TRUE`, force re-download of files even if they -#' already exist (default = `FALSE`). #' @param ... Not used -- silently soak up extra arguments from `convert.input`, etc. #' @return `data.frame` of meteorology data metadata #' @author Alexey Shiklomanov #' @export download.MERRA <- function(outfolder, start_date, end_date, lat.in, lon.in, - overwrite = TRUE, + overwrite = FALSE, verbose = FALSE, - redownload = FALSE, ...) { dates <- seq.Date(as.Date(start_date), as.Date(end_date), "1 day") @@ -24,7 +21,7 @@ download.MERRA <- function(outfolder, start_date, end_date, PEcAn.logger::logger.debug(paste0( "Downloading ", as.character(date), " (", i, " of ", length(dates), ")" )) - get_merra_date(date, lat.in, lon.in, outfolder, redownload = redownload) + get_merra_date(date, lat.in, lon.in, outfolder, overwrite = overwrite) } # Now, post-process @@ -90,22 +87,11 @@ download.MERRA <- function(outfolder, start_date, end_date, } ## Create output file - if (file.exists(loc.file) && overwrite) { - if (overwrite) { - PEcAn.logger::logger.warn( - "Target file ", loc.file, " already exists.", - "It will be overwritten." - ) - } else { - PEcAn.logger::logger.warn( - "Target file ", loc.file, " already exists and", - "`overwrite = FALSE`. Skipping to next year.", - "Note that `overwrite = TRUE` by default to allow met", - "time series to be extended in the PEcAn workflow!", - "Running with `overwrite = FALSE` may produce unexpected behavior." - ) - } - next + if (file.exists(loc.file)) { + PEcAn.logger::logger.warn( + "Target file ", loc.file, " already exists.", + "It will be overwritten." 
+ ) } loc <- ncdf4::nc_create(loc.file, var_list) on.exit(ncdf4::nc_close(loc), add = TRUE) @@ -146,7 +132,7 @@ download.MERRA <- function(outfolder, start_date, end_date, return(results) } -get_merra_date <- function(date, latitude, longitude, outdir, redownload = FALSE) { +get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) { date <- as.character(date) dpat <- "([[:digit:]]{4})-([[:digit:]]{2})-([[:digit:]]{2})" year <- as.numeric(gsub(dpat, "\\1", date)) @@ -178,7 +164,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, redownload = FALSE qstring <- paste(qvars, collapse = ",") outfile <- file.path(outdir, sprintf("merra-most-%d-%02d-%02d.nc", year, month, day)) - if (redownload || !file.exists(outfile)) { + if (overwrite || !file.exists(outfile)) { req <- httr::GET( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), @@ -196,7 +182,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, redownload = FALSE qstring <- paste(qvars, collapse = ",") outfile <- file.path(outdir, sprintf("merra-pres-%d-%02d-%02d.nc", year, month, day)) - if (redownload || !file.exists(outfile)) { + if (overwrite || !file.exists(outfile)) { req <- httr::GET( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), @@ -214,7 +200,7 @@ get_merra_date <- function(date, latitude, longitude, outdir, redownload = FALSE qstring <- paste(qvars, collapse = ",") outfile <- file.path(outdir, sprintf("merra-flux-%d-%02d-%02d.nc", year, month, day)) - if (redownload || !file.exists(outfile)) { + if (overwrite || !file.exists(outfile)) { req <- robustly(httr::GET, n = 10)( paste(url, qstring, sep = "?"), httr::authenticate(user = "pecanproject", password = "Data4pecan3"), diff --git a/modules/data.atmosphere/man/download.MERRA.Rd b/modules/data.atmosphere/man/download.MERRA.Rd index c35246424c4..9f550ce0d6e 100644 --- a/modules/data.atmosphere/man/download.MERRA.Rd +++ b/modules/data.atmosphere/man/download.MERRA.Rd @@ -10,9 +10,8 @@ download.MERRA( end_date, lat.in, lon.in, - overwrite = TRUE, + overwrite = FALSE, verbose = FALSE, - redownload = FALSE, ... ) } @@ -34,9 +33,6 @@ but only the year portion is used and the resulting files always contain a full \item{verbose}{logical. Passed on to \code{\link[ncdf4]{ncvar_def}} and \code{\link[ncdf4]{nc_create}} to control printing of debug info} -\item{redownload}{Logical. If `TRUE`, force re-download of files even if they -already exist (default = `FALSE`).} - \item{...}{Not used -- silently soak up extra arguments from `convert.input`, etc.} } \value{ From 01bb2ec94d8298986f72b7d96e99ef03f0a9b81a Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 08:59:35 -0500 Subject: [PATCH 1643/2289] MERRA: Partition diffuse vs. direct SW radiation Actually, grab diffuse PAR, direct PAR, diffuse NIR, and direct NIR. 
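
The derived totals are then simple sums of the four new streams (values
below are made up for illustration):

    pardf <- 120; pardr <- 310   # diffuse / direct PAR, W m-2
    nirdf <- 100; nirdr <- 290   # diffuse / direct NIR, W m-2
    sw_diffuse <- pardf + nirdf  # surface_diffuse_downwelling_shortwave_flux_in_air
    sw_direct  <- pardr + nirdr  # surface_direct_downwelling_shortwave_flux_in_air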
--- modules/data.atmosphere/R/download.MERRA.R | 80 +++++++++++++++++++++- 1 file changed, 77 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/download.MERRA.R b/modules/data.atmosphere/R/download.MERRA.R index f1ad38d06fb..a21502a38c9 100644 --- a/modules/data.atmosphere/R/download.MERRA.R +++ b/modules/data.atmosphere/R/download.MERRA.R @@ -75,7 +75,7 @@ download.MERRA <- function(outfolder, start_date, end_date, ## Create output variables var_list <- list() - for (dat in list(merra_vars, merra_pres_vars, merra_flux_vars)) { + for (dat in list(merra_vars, merra_pres_vars, merra_flux_vars, merra_lfo_vars)) { for (j in seq_len(nrow(dat))) { var_list <- c(var_list, list(ncdf4::ncvar_def( name = dat[j, ][["CF_name"]], @@ -86,6 +86,20 @@ download.MERRA <- function(outfolder, start_date, end_date, } } + # Add additional derived flux variables + var_list <- c(var_list, list( + # Direct PAR + Direct NIR + ncdf4::ncvar_def( + name = "surface_direct_downwelling_shortwave_flux_in_air", + units = "W/m2", dim = dim, missval = -999 + ), + # Diffuse PAR + Diffuse NIR + ncdf4::ncvar_def( + name = "surface_diffuse_downwelling_shortwave_flux_in_air", + units = "W/m2", dim = dim, missval = -999 + ) + )) + ## Create output file if (file.exists(loc.file)) { PEcAn.logger::logger.warn( @@ -125,8 +139,31 @@ download.MERRA <- function(outfolder, start_date, end_date, ncdf4::ncvar_put(loc, merra_flux_vars[r,][["CF_name"]], x, start = c(1, 1, start), count = c(1, 1, 24)) } + lfofile <- file.path(outfolder, sprintf("merra-lfo-%s.nc", as.character(date))) + nc <- ncdf4::nc_open(lfofile) + for (r in seq_len(nrow(merra_lfo_vars))) { + x <- ncdf4::ncvar_get(nc, merra_lfo_vars[r,][["MERRA_name"]]) + ncdf4::ncvar_put(loc, merra_lfo_vars[r,][["CF_name"]], x, + start = c(1, 1, start), count = c(1, 1, 24)) + } + ncdf4::nc_close(nc) } + + # Add derived variables + # Total SW diffuse = Diffuse PAR + Diffuse NIR + sw_diffuse <- + ncdf4::ncvar_get(loc, "surface_diffuse_downwelling_photosynthetic_radiative_flux_in_air") + + ncdf4::ncvar_get(loc, "surface_diffuse_downwelling_nearinfrared_radiative_flux_in_air") + ncdf4::ncvar_put(loc, "surface_diffuse_downwelling_shortwave_flux_in_air", sw_diffuse, + start = c(1, 1, 1), count = c(1, 1, -1)) + + # Total SW direct = Direct PAR + Direct NIR + sw_direct <- + ncdf4::ncvar_get(loc, "surface_direct_downwelling_photosynthetic_radiative_flux_in_air") + + ncdf4::ncvar_get(loc, "surface_direct_downwelling_nearinfrared_radiative_flux_in_air") + ncdf4::ncvar_put(loc, "surface_direct_downwelling_shortwave_flux_in_air", sw_direct, + start = c(1, 1, 1), count = c(1, 1, -1)) } return(results) @@ -207,6 +244,24 @@ get_merra_date <- function(date, latitude, longitude, outdir, overwrite = FALSE) httr::write_disk(outfile, overwrite = TRUE) ) } + + # Land forcing + url <- glue::glue( + "{base_url}/{merra_lfo_prod}/{year}/{sprintf('%02d', month)}/", + "MERRA2_{version}.{merra_lfo_file}.", + "{year}{sprintf('%02d', month)}{sprintf('%02d', day)}.nc4.nc4" + ) + qvars <- sprintf("%s%s", merra_lfo_vars$MERRA_name, idxstring) + qstring <- paste(qvars, collapse = ",") + outfile <- file.path(outdir, sprintf("merra-lfo-%d-%02d-%02d.nc", + year, month, day)) + if (overwrite || !file.exists(outfile)) { + req <- robustly(httr::GET, n = 10)( + paste(url, qstring, sep = "?"), + httr::authenticate(user = "pecanproject", password = "Data4pecan3"), + httr::write_disk(outfile, overwrite = TRUE) + ) + } } # For more on MERRA variables, see: @@ -228,7 +283,11 @@ merra_vars <- tibble::tribble( # QSH - 
Effective surface specific humidity
   "specific_humidity", "QSH", "g/g",
   # PRECTOT - Total precipitation from atmospheric model physics
-  "precipitation_flux", "PRECTOT", "kg/m2/s"
+  "precipitation_flux", "PRECTOT", "kg/m2/s",
+  # NIRDF - Surface downwelling nearinfrared diffuse flux
+  "surface_diffuse_downwelling_nearinfrared_radiative_flux_in_air", "NIRDF", "W/m2",
+  # NIRDR - Surface downwelling nearinfrared beam flux
+  "surface_direct_downwelling_nearinfrared_radiative_flux_in_air", "NIRDR", "W/m2"
 )

 # Single-level diagnostics (pg. 17)
@@ -241,12 +300,27 @@ merra_pres_vars <- tibble::tribble(
 )

 # Radiation diagnostics (pg. 43)
+# NOTE: Downwelling longwave is calculated as Net + Absorbed + Emitted (because
+# Net = Downwelling - Absorbed - Emitted).
 merra_flux_prod <- "M2T1NXRAD.5.12.4"
 merra_flux_file <- "tavg1_2d_rad_Nx"
 merra_flux_vars <- tibble::tribble(
   ~CF_name, ~MERRA_name, ~units,
-  # LWGAB is 'Surface absorbed longwave radiation'
+  # LWGAB is 'Surface absorbed longwave radiation' -- In MERRA, surface net
+  # longwave flux is `absorbed - emitted`, so we assume this is the correct variable
   "surface_downwelling_longwave_flux_in_air", "LWGAB", "W/m2",
   # SWGDN is 'Surface incoming shortwave flux'
   "surface_downwelling_shortwave_flux_in_air", "SWGDN", "W/m2"
 )
+
+
+# Land surface forcings (pg. 39)
+merra_lfo_prod <- "M2T1NXLFO.5.12.4"
+merra_lfo_file <- "tavg1_2d_lfo_Nx"
+merra_lfo_vars <- tibble::tribble(
+  ~CF_name, ~MERRA_name, ~units,
+  # Surface downwelling PAR diffuse flux, PARDF
+  "surface_diffuse_downwelling_photosynthetic_radiative_flux_in_air", "PARDF", "W/m2",
+  # Surface downwelling PAR beam flux, PARDR
+  "surface_direct_downwelling_photosynthetic_radiative_flux_in_air", "PARDR", "W/m2"
+)

From 20c7d31e091b7acac0eb4cf10b936224f8b03f1 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov
Date: Wed, 11 Nov 2020 09:00:40 -0500
Subject: [PATCH 1644/2289] MERRA: Shorten test to just 4 days

So it runs faster.
---
 modules/data.atmosphere/tests/testthat/test.download.MERRA.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/tests/testthat/test.download.MERRA.R b/modules/data.atmosphere/tests/testthat/test.download.MERRA.R
index 58606e1e0f7..0c264231b91 100644
--- a/modules/data.atmosphere/tests/testthat/test.download.MERRA.R
+++ b/modules/data.atmosphere/tests/testthat/test.download.MERRA.R
@@ -6,7 +6,7 @@ teardown(unlink(outdir, recursive = TRUE))

 test_that("MERRA download works", {
   start_date <- "2009-06-01"
-  end_date <- "2009-06-10"
+  end_date <- "2009-06-04"
   dat <- download.MERRA(outdir, start_date, end_date,
                         lat.in = 45.3, lon.in = -85.3, overwrite = TRUE)
   expect_true(file.exists(dat$file[[1]]))

From e0bc9b3b3291405f914d7a8c783d29beafceb381 Mon Sep 17 00:00:00 2001
From: Alexey Shiklomanov
Date: Wed, 11 Nov 2020 09:43:33 -0500
Subject: [PATCH 1645/2289] ED: Clarify namespace for `utils::head`

Apparently, we have to do this for all non-`base` functions now!
https://stackoverflow.com/questions/31132552/no-visible-global-function-definition-for-median --- models/ed/DESCRIPTION | 1 + models/ed/R/met2model.ED2.R | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/models/ed/DESCRIPTION b/models/ed/DESCRIPTION index 64f0023144e..6b44a531c21 100644 --- a/models/ed/DESCRIPTION +++ b/models/ed/DESCRIPTION @@ -40,6 +40,7 @@ Imports: tidyr, tibble, udunits2 (>= 0.11), + utils, XML (>= 3.98-1.4) Suggests: testthat (>= 1.0.2) diff --git a/models/ed/R/met2model.ED2.R b/models/ed/R/met2model.ED2.R index f2e5086cb51..0ad6039d6f6 100644 --- a/models/ed/R/met2model.ED2.R +++ b/models/ed/R/met2model.ED2.R @@ -197,7 +197,7 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l "Time step (`dt`) is not uniform! Identified ", length(dt), " unique time steps. ", "`head(dt)` (in seconds): ", - paste(head(dt), collapse = ", ") + paste(utils::head(dt), collapse = ", ") )) } From ef5244adbc3fdd54eca691bfd77264d2c2b06866 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 09:56:27 -0500 Subject: [PATCH 1646/2289] ED2: If dt is not unique, use the rounded mean Per @mdietze suggestion. Though, we still try to take the unique value first and throw an informative warning if we have to fall back on the mean. --- models/ed/R/met2model.ED2.R | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/models/ed/R/met2model.ED2.R b/models/ed/R/met2model.ED2.R index 0ad6039d6f6..7279cbe024a 100644 --- a/models/ed/R/met2model.ED2.R +++ b/models/ed/R/met2model.ED2.R @@ -186,18 +186,22 @@ met2model.ED2 <- function(in.path, in.prefix, outfolder, start_date, end_date, l # `dt` is the met product time step in seconds. We calculate it here as # `sec[i] - sec[i-1]` for all `[i]`. For a properly formatted product, the # timesteps should be regular, so there is a single, constant difference. If - # that's not the case, we throw an informative error. + # that's not the case, we throw an informative warning and try to round + # instead (as some met products will not always have neat time steps). # # `drop` here simplifies length-1 arrays to vectors. Without it, R will # later throw an error about "non-conformable arrays" when trying to add a # length-1 array to a vector. dt <- drop(unique(diff(sec))) if (length(dt) > 1) { - PEcAn.logger::logger.severe(paste0( + dt_old <- dt + dt <- drop(round(mean(diff(sec)))) + PEcAn.logger::logger.warn(paste0( "Time step (`dt`) is not uniform! Identified ", - length(dt), " unique time steps. ", + length(dt_old), " unique time steps. ", "`head(dt)` (in seconds): ", - paste(utils::head(dt), collapse = ", ") + paste(utils::head(dt_old), collapse = ", "), + " Using the rounded mean difference as the time step: ", dt )) } From 7dc4036bd16d0fe6360eb3368bdb0feff6b01c7b Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 12:03:19 -0500 Subject: [PATCH 1647/2289] Allow setting stop_on_error via settings Also, determine it automatically based on whether we are running an ensemble simulation or not. If we are, set it to FALSE, otherwise default to TRUE. 
--- CHANGELOG.md | 3 ++- book_source/03_topical_pages/03_pecan_xml.Rmd | 3 +++ web/workflow.R | 13 ++++++++++++- 3 files changed, 17 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 97a9fbf450f..22ddcc28080 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -56,7 +56,8 @@ This is a major change: - PEcAn.JULES: Removed dependency on `ncdf4.helpers` package, which has been removed from CRAN (#2511). - data.remote: Arguments to the function `call_MODIS()` have been changed (issue #2519). - Changed precipitaion downscale in `PEcAn.data.atmosphere::download.NOAA_GEFS_downscale`. Precipitation was being downscaled via a spline which was causing fake rain events. Instead the 6 hr precipitation flux values from GEFS are preserved with 0's filling in the hours between. --Changed `dbfile.input.insert` to work with inputs (i.e soils) that don't have start and end dates associated with them +- Changed `dbfile.input.insert` to work with inputs (i.e soils) that don't have start and end dates associated with them +- Default behavior for `stop_on_error` is now `TRUE` for non-ensemble runs; i.e., workflows that run only one model simulation (or omit the `ensemble` XML group altogether) will fail if the model run fails. For ensemble runs, the old behavior is preserved; i.e., workflows will continue even if one of the model runs failed. This behavior can also be manually controlled by setting the new `run -> stop_on_error` XML tag to `TRUE` or `FALSE`. ### Added diff --git a/book_source/03_topical_pages/03_pecan_xml.Rmd b/book_source/03_topical_pages/03_pecan_xml.Rmd index b72345fc667..8cece42197e 100644 --- a/book_source/03_topical_pages/03_pecan_xml.Rmd +++ b/book_source/03_topical_pages/03_pecan_xml.Rmd @@ -380,6 +380,7 @@ This section provides detailed configuration for the model run, including the si 2004/01/01 2004/12/31 + TRUE ``` @@ -457,6 +458,8 @@ The following tags are optional run settings that apply to any model: * `jobtemplate`: the template used when creating a `job.sh` file, which is used to launch the actual model. Each model has its own default template in the `inst` folder of the corresponding R package (for instance, here is the one for [ED2](https://github.com/PecanProject/pecan/blob/master/models/ed/inst/template.job)). The following variables can be used: `@SITE_LAT@`, `@SITE_LON@`, `@SITE_MET@`, `@START_DATE@`, `@END_DATE@`, `@OUTDIR@`, `@RUNDIR@` which all come variables in the `pecan.xml` file. The following two command can be used to copy and clean the results from a scratch folder (specified as scratch in the run section below, for example local disk vs network disk) : `@SCRATCH_COPY@`, `@SCRATCH_CLEAR@`. +* `stop_on_error`: (logical) Whether the workflow should immediately terminate if _any_ of the model runs fail. If unset, this defaults to `TRUE` unless you are running an ensemble simulation (and ensemble size is greater than 1). + Some models also have model-specific tags, which are described in the [PEcAn Models](#pecan-models) section. 
### `host`: Host information for remote execution {#xml-host} diff --git a/web/workflow.R b/web/workflow.R index fcef5ba70f7..2ca3818608f 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -116,7 +116,18 @@ if ((length(which(commandArgs() == "--advanced")) != 0) # Start ecosystem model runs if (PEcAn.utils::status.check("MODEL") == 0) { PEcAn.utils::status.start("MODEL") - PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = FALSE) + stop_on_error <- as.logical(settings[[c("run", "stop_on_error")]]) + if (is.null(stop_on_error)) { + # If we're doing an ensemble run, don't stop. If only a single run, we + # should be stopping. + if (is.null(settings[["ensemble"]]) || + as.numeric(settings[[c("ensemble", "size")]]) == 1) { + stop_on_error <- FALSE + } else { + stop_on_error <- TRUE + } + } + PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = stop_on_error) PEcAn.utils::status.end() } From 70bbf94b3157b4474a76018a8cddc5fe9f4ac0f3 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 13:48:23 -0500 Subject: [PATCH 1648/2289] META: update=AUTO fix b/c of NAs Old meta-analysis PFT membership files sometimes stored NA's as literal "NA" strings, which were not being read in as NA's. This caused the difference checking code to (incorrectly) pick up a difference between in PFT membership and caused the trait meta analysis to always run. Here, this is fixed by adding "NA" to the list of strings converted to NA's when reading old PFT membership. Now, if nothing changes, we don't re-run the meta-analysis! --- base/db/R/get.trait.data.pft.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index ca63fd3bfc0..8c5f5891310 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -141,10 +141,10 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, existing_membership <- utils::read.csv( need_paths[["pft_membership"]], # Columns are: id, genus, species, scientificname - # Need this so NA values are + # Need this so NA values are formatted consistently colClasses = c("double", "character", "character", "character"), stringsAsFactors = FALSE, - na.strings = "" + na.strings = c("", "NA") ) diff_membership <- symmetric_setdiff( existing_membership, From 09d0f04b8096b2a399603cf841d16991bd0cb414 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 14:39:41 -0500 Subject: [PATCH 1649/2289] WORKFLOW: Bugfix empty stop.on.error Turns out that `as.logical(NULL)` produces a length-0 logical vector, which is not equal to `NULL`! This causes some downstream errors in edge cases. --- web/workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/web/workflow.R b/web/workflow.R index 2ca3818608f..7de714989a1 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -117,7 +117,7 @@ if ((length(which(commandArgs() == "--advanced")) != 0) if (PEcAn.utils::status.check("MODEL") == 0) { PEcAn.utils::status.start("MODEL") stop_on_error <- as.logical(settings[[c("run", "stop_on_error")]]) - if (is.null(stop_on_error)) { + if (length(stop_on_error) == 0) { # If we're doing an ensemble run, don't stop. If only a single run, we # should be stopping. 
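      # Aside (illustrative; not part of the patch): is.null() cannot detect
      # the empty result here, which is why the length() test is needed --
      #   as.logical(NULL)               #=> logical(0)
      #   is.null(as.logical(NULL))      #=> FALSE
      #   length(as.logical(NULL)) == 0  #=> TRUE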
if (is.null(settings[["ensemble"]]) || From 5a2deb238bbfe510ab6263b4d89323a2a5aac713 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 15:21:26 -0500 Subject: [PATCH 1650/2289] WORKFLOW: Fix swapped stop.on.error TRUE/FALSE If no ensemble, should be TRUE! If ensemble, should be FALSE! --- web/workflow.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/web/workflow.R b/web/workflow.R index 7de714989a1..317aef7faa7 100755 --- a/web/workflow.R +++ b/web/workflow.R @@ -122,9 +122,9 @@ if (PEcAn.utils::status.check("MODEL") == 0) { # should be stopping. if (is.null(settings[["ensemble"]]) || as.numeric(settings[[c("ensemble", "size")]]) == 1) { - stop_on_error <- FALSE - } else { stop_on_error <- TRUE + } else { + stop_on_error <- FALSE } } PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = stop_on_error) From 903e117ab3c168f191c802cd03177a262737dcbb Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 14:31:59 -0500 Subject: [PATCH 1651/2289] ED2: Catch model2netcdf errors in template.job Otherwise, workflows would still finish successfully even if this step failed. Another fix to ED2 job template --- models/ed/inst/template.job | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/models/ed/inst/template.job b/models/ed/inst/template.job index afe1bc5aec5..4ee63aa0c29 100644 --- a/models/ed/inst/template.job +++ b/models/ed/inst/template.job @@ -47,9 +47,16 @@ if [ ! -e "@OUTDIR@/history.xml" ]; then fi # convert to MsTMIP - echo "require (PEcAn.ED2) -model2netcdf.ED2('@OUTDIR@', @SITE_LAT@, @SITE_LON@, '@START_DATE@', '@END_DATE@', @PFT_NAMES@) -" | R --vanilla + Rscript \ + -e "library(PEcAn.ED2)" \ + -e "model2netcdf.ED2('@OUTDIR@', @SITE_LAT@, @SITE_LON@, '@START_DATE@', '@END_DATE@', @PFT_NAMES@)" + STATUS=$? + if [ $STATUS -ne 0 ]; then + echo -e "ERROR IN model2netcdf.ED2\nLogfile is located at '@OUTDIR@'/logfile.txt" + echo "************************************************* End Log $TIMESTAMP" + echo "" + exit $STATUS + fi fi # copy readme with specs to output From 6f51cc0a63d8131b14e3afc0f55892e2a6a1927e Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Wed, 11 Nov 2020 14:43:53 -0500 Subject: [PATCH 1652/2289] MET: Cache raw CRUNCEP files This should accelerate downloads. --- .../data.atmosphere/R/download.CRUNCEP_Global.R | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/download.CRUNCEP_Global.R b/modules/data.atmosphere/R/download.CRUNCEP_Global.R index 9e852accb07..54fb4cea4f7 100644 --- a/modules/data.atmosphere/R/download.CRUNCEP_Global.R +++ b/modules/data.atmosphere/R/download.CRUNCEP_Global.R @@ -201,9 +201,20 @@ download.CRUNCEP <- function(outfolder, start_date, end_date, lat.in, lon.in, "time_end={year}-12-31T21:00:00Z&", "accept=netcdf" ) - tmp_file <- tempfile() - utils::download.file(ncss_query, tmp_file) - dap <- ncdf4::nc_open(tmp_file) + # Cache raw CRUNCEP files so that later workflows don't have to download + # them (even if they do have to do some reprocessing). 
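  # (Sketch of the intent, matching the code below: build a deterministic
  #  cache file name from year, latitude, longitude, and variable; reuse an
  #  existing file unless `overwrite` is TRUE, and download it otherwise.)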
+ raw_file <- file.path( + outfolder, + glue::glue("cruncep-raw-{year}-{lat.in}-{lon.in}-{current_var}.nc") + ) + if (overwrite || !file.exists(raw_file)) { + utils::download.file(ncss_query, raw_file) + } else { + PEcAn.logger::logger.debug(glue::glue( + "Skipping file because it already exists: {raw_file}" + )) + } + dap <- ncdf4::nc_open(raw_file) } # confirm that timestamps match From 63b2b0641dedf36e6eb946593e647e64807ef8ea Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Fri, 13 Nov 2020 13:26:17 -0900 Subject: [PATCH 1653/2289] Update documentation. --- models/dvmdostem/man/setup.outputs.dvmdostem.Rd | 3 +++ 1 file changed, 3 insertions(+) diff --git a/models/dvmdostem/man/setup.outputs.dvmdostem.Rd b/models/dvmdostem/man/setup.outputs.dvmdostem.Rd index ce7f16a2547..51d5b80be8a 100644 --- a/models/dvmdostem/man/setup.outputs.dvmdostem.Rd +++ b/models/dvmdostem/man/setup.outputs.dvmdostem.Rd @@ -5,6 +5,7 @@ \title{Setup outputs to be generated by dvmdostem and analyzed by PEcAn.} \usage{ setup.outputs.dvmdostem( + dvmdostem_calibration, pecan_requested_outputs, dvmdostem_output_spec, run_directory, @@ -13,6 +14,8 @@ setup.outputs.dvmdostem( ) } \arguments{ +\item{dvmdostem_calibration}{a string with 'yes' or 'YES'} + \item{pecan_requested_outputs}{a space separated string of variables to process or NULL.} \item{dvmdostem_output_spec}{a path to a custom output spec file or NULL.} From bfca7539d746658e2b9ba5bbc88dceaf6dbae636 Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Mon, 16 Nov 2020 08:26:17 -0900 Subject: [PATCH 1654/2289] Add namespace declaration for some functions, add documentation... ...to avoid "no visible binding for global" errors with scripts/check_with_errors.R that runs during CI checks. --- models/dvmdostem/NAMESPACE | 1 + models/dvmdostem/R/write.config.dvmdostem.R | 22 +++++++++++++-------- 2 files changed, 15 insertions(+), 8 deletions(-) diff --git a/models/dvmdostem/NAMESPACE b/models/dvmdostem/NAMESPACE index ff5863eda9c..6e3a749f44a 100644 --- a/models/dvmdostem/NAMESPACE +++ b/models/dvmdostem/NAMESPACE @@ -4,6 +4,7 @@ export(adjust.runmask.dvmdostem) export(convert.samples.dvmdostem) export(enforce.runmask.cmt.vegmap.harmony) export(model2netcdf.dvmdostem) +export(requested_vars_string2list) export(setup.outputs.dvmdostem) export(vmap_reverse) export(write.config.dvmdostem) diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R index 4582c1f1225..9a64a00b85b 100644 --- a/models/dvmdostem/R/write.config.dvmdostem.R +++ b/models/dvmdostem/R/write.config.dvmdostem.R @@ -100,7 +100,7 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration, a <- system2(file.path(appbinary_path, "scripts/outspec_utils.py"), args=c(rs_outspec_path, "-s", "--csv"), stdout=TRUE) con <- textConnection(a) - already_on <- read.csv(con, header = TRUE) + already_on <- utils::read.csv(con, header = TRUE) for (j in req_v_list) { if (j %in% already_on$Name) { @@ -165,12 +165,18 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration, return(c(rs_outspec_path, req_v_str)) } -requested_vars_string2list <- function(req_v_str, outspec_path){ - - # Look up the "depends_on" in the output variable mapping, - # accumulate list of dvmdostem variables to turn on to support - # the requested variables in the pecan.xml tag - +##------------------------------------------------------------------------------------------------# +##' Look up the "depends_on" in the output variable mapping, +##' accumulate a list of dvmdostem variables to turn on 
to support +##' the requested variables in the pecan.xml tag +##' @name requested_vars_string2list +##' @title Requested variables string to list conversion. +##' @param req_v_str A string, (comma or space separated?) of variables +##' @param outspec_path The path to an outspec file +##' @return a list of the requested variables +##' @export +##' @author Tobey Carman +requested_vars_string2list <- function(req_v_str, outspec_path) { req_v_str <- "" for (pov in unlist(lapply(unlist(strsplit(pecan_outvars, ",")), trimws))) { #print(paste("HERE>>>", vmap_reverse[[pov]][["depends_on"]])) @@ -182,7 +188,7 @@ requested_vars_string2list <- function(req_v_str, outspec_path){ req_v_list <- unlist(lapply(unlist(strsplit(req_v_str, ",")), function(x){x[!x== ""]})) # Check that all variables specified in list exist in the base output spec file. - a <- read.csv(rs_outspec_path) + a <- utils::read.csv(rs_outspec_path) for (j in req_v_list) { if (! j %in% a[["Name"]]) { PEcAn.logger::logger.error(paste0("ERROR! Can't find variable: '", j, "' in the output spec file: ", rs_outspec_path)) From 8fcc5e4e56c2d49b627ee23e684b98fb63d1e143 Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Mon, 16 Nov 2020 12:45:08 -0900 Subject: [PATCH 1655/2289] Add straggling documentation file. --- .../man/requested_vars_string2list.Rd | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) create mode 100644 models/dvmdostem/man/requested_vars_string2list.Rd diff --git a/models/dvmdostem/man/requested_vars_string2list.Rd b/models/dvmdostem/man/requested_vars_string2list.Rd new file mode 100644 index 00000000000..c508937094b --- /dev/null +++ b/models/dvmdostem/man/requested_vars_string2list.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/write.config.dvmdostem.R +\name{requested_vars_string2list} +\alias{requested_vars_string2list} +\title{Requested variables string to list conversion.} +\usage{ +requested_vars_string2list(req_v_str, outspec_path) +} +\arguments{ +\item{req_v_str}{A string, (comma or space separated?) 
of variables} + +\item{outspec_path}{The path to an outspec file} +} +\value{ +a list of the requested variables +} +\description{ +Look up the "depends_on" in the output variable mapping, +accumulate a list of dvmdostem variables to turn on to support +the requested variables in the pecan.xml tag +} +\author{ +Tobey Carman +} From 40737faa51a8121865199c0b8c35f25425e4af1a Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 17 Nov 2020 09:52:37 -0600 Subject: [PATCH 1656/2289] update github actions remvove final set-env push to ghcr.io --- .github/workflows/depends.yml | 4 ++-- .github/workflows/docker.yml | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml index 3b00122c1e0..adc81d47936 100644 --- a/.github/workflows/depends.yml +++ b/.github/workflows/depends.yml @@ -34,7 +34,7 @@ jobs: - name: github branch run: | BRANCH=${GITHUB_REF##*/} - echo "::set-env name=GITHUB_BRANCH::${BRANCH}" + echo "GITHUB_BRANCH=${BRANCH}" >> $GITHUB_ENV tags="R${{ matrix.R }}" if [ "${{ matrix.R }}" == "${{ env.SUPPORTED }}" ]; then @@ -44,7 +44,7 @@ jobs: tags="${tags},develop" fi fi - echo "::set-env name=TAG::${tags}" + echo "TAG=${tags}" >> $GITHUB_ENV # this will publish to the actor (person) github packages - name: Publish to GitHub diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml index 7b9695bc949..314711ede00 100644 --- a/.github/workflows/docker.yml +++ b/.github/workflows/docker.yml @@ -70,7 +70,7 @@ jobs: if: github.event_name == 'push' && github.repository == env.MASTER_REPO run: | echo "${INPUT_PASSWORD}" | docker login -u ${INPUT_USERNAME} --password-stdin ${INPUT_REGISTRY} - repo=$(echo ${{ github.repository }} | tr 'A-Z' 'a-z') + repo=$(echo ${{ github.repository_owner }} | tr 'A-Z' 'a-z') for image in $(docker image ls pecan/*:github --format "{{ .Repository }}"); do for v in ${PECAN_TAGS}; do docker tag ${image}:github ${INPUT_REGISTRY}/${repo}/${image#pecan/}:${v} @@ -79,9 +79,9 @@ jobs: done docker logout env: - INPUT_REGISTRY: docker.pkg.github.com - INPUT_USERNAME: ${{ github.actor }} - INPUT_PASSWORD: ${{ secrets.GITHUB_TOKEN }} + INPUT_REGISTRY: ghcr.io + INPUT_USERNAME: ${{ secrets.GHCR_USERNAME }} + INPUT_PASSWORD: ${{ secrets.GHCR_PASSWORD }} # push all images to dockerhub - name: Publish to DockerHub From fa19d31e6df8022515d9fa469baa0d4a50a51ff5 Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Tue, 17 Nov 2020 10:27:30 -0900 Subject: [PATCH 1657/2289] Fix typo in function call name. Was in calibration case so didn't catch this for quite a while. --- models/dvmdostem/R/write.config.dvmdostem.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/models/dvmdostem/R/write.config.dvmdostem.R b/models/dvmdostem/R/write.config.dvmdostem.R index 9a64a00b85b..77bb6da58ce 100644 --- a/models/dvmdostem/R/write.config.dvmdostem.R +++ b/models/dvmdostem/R/write.config.dvmdostem.R @@ -91,7 +91,7 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration, args=c(rs_outspec_path, "--enable-cal-vars")) # Now enable anything in pecan_outvars that is not already enabled. - requested_vars <- requested_vars_str2list(pecan_outvars, outspec_path = rs_outspec_path) + requested_vars <- requested_vars_string2list(pecan_outvars, outspec_path = rs_outspec_path) # Figure out which variables are already 'ON' in order to support the calibration # run. These will be at yearly resolution. 
We don't want to modify the output spec @@ -169,6 +169,7 @@ setup.outputs.dvmdostem <- function(dvmdostem_calibration, ##' Look up the "depends_on" in the output variable mapping, ##' accumulate a list of dvmdostem variables to turn on to support ##' the requested variables in the pecan.xml tag +##' ##' @name requested_vars_string2list ##' @title Requested variables string to list conversion. ##' @param req_v_str A string, (comma or space separated?) of variables From efceb7dbf413d684aff16ed6109de821aece84e6 Mon Sep 17 00:00:00 2001 From: Tobey Carman Date: Tue, 17 Nov 2020 10:36:59 -0900 Subject: [PATCH 1658/2289] Add globals.R file to beat warnings/errors in devtools check --- models/dvmdostem/R/globals.R | 1 + 1 file changed, 1 insertion(+) create mode 100644 models/dvmdostem/R/globals.R diff --git a/models/dvmdostem/R/globals.R b/models/dvmdostem/R/globals.R new file mode 100644 index 00000000000..9edd78ac0f1 --- /dev/null +++ b/models/dvmdostem/R/globals.R @@ -0,0 +1 @@ +utils::globalVariables(c('pecan_outvars', 'rs_outspec_path','req_v_list')) From ed2068eac21a830c35e5c52f47fdb41a34737985 Mon Sep 17 00:00:00 2001 From: Christy Rollinson Date: Wed, 18 Nov 2020 14:06:35 -0600 Subject: [PATCH 1659/2289] update met downscaling sanity bounds Make sanity bounds in line with ED2 limits (to save Christy's sanity). ED2 is probably more rigid than most other models. --- .../data.atmosphere/R/debias_met_regression.R | 484 +++++++++--------- .../data.atmosphere/R/tdm_lm_ensemble_sims.R | 13 +- 2 files changed, 249 insertions(+), 248 deletions(-) diff --git a/modules/data.atmosphere/R/debias_met_regression.R b/modules/data.atmosphere/R/debias_met_regression.R index ea8afd3bb98..9589777bf59 100644 --- a/modules/data.atmosphere/R/debias_met_regression.R +++ b/modules/data.atmosphere/R/debias_met_regression.R @@ -7,9 +7,9 @@ ##' @title debias.met.regression ##' @family debias - Debias & Align Meteorology Datasets into continuous time series ##' @author Christy Rollinson -##' @description This script debiases one dataset (e.g. GCM, re-analysis product) given another higher -##' resolution product or empirical observations. It assumes input are in annual CF standard -##' files that are generate from the pecan extract or download funcitons. +##' @description This script debiases one dataset (e.g. GCM, re-analysis product) given another higher +##' resolution product or empirical observations. It assumes input are in annual CF standard +##' files that are generate from the pecan extract or download funcitons. # ----------------------------------- # Parameters # ----------------------------------- @@ -17,10 +17,10 @@ ##' @param source.data - data to be bias-corrected aligned with training data (from align.met) ##' @param n.ens - number of ensemble members to generate and save for EACH source ensemble member ##' @param vars.debias - which met variables should be debiased? 
if NULL, all variables in train.data -##' @param CRUNCEP - flag for if the dataset being downscaled is CRUNCEP; if TRUE, special cases triggered for +##' @param CRUNCEP - flag for if the dataset being downscaled is CRUNCEP; if TRUE, special cases triggered for ##' met variables that have been naively gapfilled for certain time periods ##' @param pair.anoms - logical stating whether anomalies from the same year should be matched or not -##' @param pair.ens - logical stating whether ensembles from train and source data need to be paired together +##' @param pair.ens - logical stating whether ensembles from train and source data need to be paired together ##' (for uncertainty propogation) ##' @param uncert.prop - method for error propogation for child ensemble members 1 ensemble member; options=c(random, mean); randomly strongly encouraged if n.ens>1 ##' @param resids - logical stating whether to pass on residual data or not *Not implemented yet @@ -28,9 +28,9 @@ ##' @param outfolder - directory where the data should go ##' @param yrs.save - what years from the source data should be saved; if NULL all years of the source data will be saved ##' @param ens.name - what is the name that should be attached to the debiased ensemble -##' @param ens.mems - what labels/numbers to attach to the ensemble members so we can gradually build bigger ensembles +##' @param ens.mems - what labels/numbers to attach to the ensemble members so we can gradually build bigger ensembles ##' without having to do do giant runs at once; if NULL will be numbered 1:n.ens -##' @param force.sanity - (logical) do we force the data to meet sanity checks? +##' @param force.sanity - (logical) do we force the data to meet sanity checks? ##' @param sanity.tries - how many time should we try to predict a reasonable value before giving up? We don't want to end up in an infinite loop ##' @param sanity.sd - how many standard deviations from the mean should be used to determine sane outliers (default 8) ##' @param lat.in - latitude of site @@ -46,7 +46,7 @@ # ----------------------------------- # Workflow # ----------------------------------- -# The general workflow is as follows: +# The general workflow is as follows: # 1. read in & format data (coerce to daily format) # 2. set up the file structures for the output # 3. define the training window @@ -68,21 +68,21 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU parallel = FALSE, n.cores = NULL, overwrite = TRUE, verbose = FALSE) { library(MASS) library(mgcv) - + set.seed(seed) - + if(parallel==TRUE) warning("Warning! Parallel processing not reccomended because of memory constraints") if(ncol(source.data[[2]])>1) warning("Feeding an ensemble of source data is currently experimental! This could crash") if(n.ens<1){ warning("You need to generate at least one vector of outputs. Changing n.ens to 1, which will be based on the model means.") n.ens=1 - } + } if(!uncert.prop %in% c("mean", "random")) stop("unspecified uncertainty propogation method. Must be 'random' or 'mean' ") if(uncert.prop=="mean" & n.ens>1) warning(paste0("Warning! 
Use of mean propagation with n.ens>1 not encouraged as all results will be the same and you will not be adding uncertainty at this stage.")) - + # Variables need to be done in a specific order vars.all <- c("air_temperature", "air_temperature_maximum", "air_temperature_minimum", "specific_humidity", "surface_downwelling_shortwave_flux_in_air", "air_pressure", "surface_downwelling_longwave_flux_in_air", "wind_speed", "precipitation_flux") - + if(is.null(vars.debias)) vars.debias <- vars.all[vars.all %in% names(train.data)] # Don't try to do vars that we don't have if(is.null(yrs.save)) yrs.save <- unique(source.data$time$Year) if(is.null(ens.mems)) ens.mems <- stringr::str_pad(1:n.ens, nchar(n.ens), "left", pad="0") @@ -91,35 +91,35 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU vars.pred <- vector() dat.out <- list() dat.out[["time"]] <- source.data$time - + # Transforming zero-truncated variables where negative values are not possible (and zero is unlikely) # - Tried a couple different things, but the sqaure root transformation seems to be working best vars.transform <- c("surface_downwelling_shortwave_flux_in_air", "specific_humidity", "surface_downwelling_longwave_flux_in_air", "wind_speed") - + # --------- - # Setting up some cases about how to duplicate the training data in case we don't pass through the + # Setting up some cases about how to duplicate the training data in case we don't pass through the # same number of ensembles as we want in our output # - Referencing off of whatever the layer after "time" is # --------- # If we have fewer columns then we need, randomly duplicate some if(ncol(train.data[[2]])==n.ens) ens.train <- 1:n.ens - + if(ncol(train.data[[2]]) < n.ens){ ens.train <- c(1:ncol(train.data[[2]]), sample(1:ncol(train.data[[2]]), n.ens-ncol(train.data[[2]]),replace=T)) } - + # If we have more columns than we need, randomly subset if(ncol(train.data[[2]]) > n.ens) { ens.train <- sample(1:ncol(train.data[[2]]), ncol(train.data[[2]]),replace=T) } - + # Setting up cases for dealing with an ensemble of source data to be biased - if(pair.ens==T & ncol(train.data[[2]]!=ncol(source.data[[2]]))){ + if(pair.ens==T & ncol(train.data[[2]]!=ncol(source.data[[2]]))){ stop("Cannot pair ensembles of different size") } else if(pair.ens==T) { ens.src <- ens.train } - + if(pair.ens==F & ncol(source.data[[2]])==1){ ens.src=1 } else if(pair.ens==F & ncol(source.data[[2]]) > n.ens) { @@ -128,11 +128,11 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU ens.src <- c(1:ncol(source.data[[2]]), sample(1:ncol(source.data[[2]]), n.ens-ncol(source.data[[2]]),replace=T)) } # --------- - + # Find the period of years to use to train the model # This formulation should take yrs.overlap <- unique(train.data$time$Year)[unique(train.data$time$Year) %in% unique(source.data$time$Year)] - + # If we don't have a year of overlap, take closest 20 years from each dataset if(length(yrs.overlap)<1){ if(pair.anoms==TRUE) warning("No overlap in years, so we cannot pair the anomalies") @@ -144,9 +144,9 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU for(v in vars.debias){ train.data[[v]] <- matrix(train.data[[v]][which(train.data$time$Year %in% yrs.overlap),], ncol=ncol(train.data[[v]])) } - train.data$time <- train.data$time[which(train.data$time$Year %in% yrs.overlap),] - - + train.data$time <- train.data$time[which(train.data$time$Year %in% yrs.overlap),] + + # ------------------------------------------- # 
Loop through the variables # ------------------------------------------- @@ -157,7 +157,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU for(v in vars.debias){ # ------------- # If we're dealing with precip, lets keep the training data handy & - # calculate the number of rainless time periods (days) in each year to + # calculate the number of rainless time periods (days) in each year to # make sure we don't get a constant drizzle # Update: We also need to look at the distribution of consequtive rainless days # ------------- @@ -168,17 +168,17 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU for(y in unique(train.data$time$Year)){ for(i in 1:ncol(train.data$precipitation_flux)){ rain.now <- train.data$precipitation_flux[train.data$time$Year==y, i] - + rainless <- c(rainless, length(which(rain.now==0))) - - # calculating the mean & sd for rainless days + + # calculating the mean & sd for rainless days tally = 0 for(z in 1:length(rain.now)){ # If we don't have rain, add it to our tally if(rain.now[z]>0){ tally=tally+1 } - + # If we have rain and it resets our tally, # - store tally in our vector; then reset if(rain.now[z]==0 & tally>0){ @@ -188,21 +188,21 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } # z End loop } # end i loop } # end y loop - + # Hard-coding in some sort of max for precipitaiton rain.max <- max(train.data$precipitation_flux) + sd(train.data$precipitation_flux) rainless.min <- ifelse(min(rainless)-sd(rainless)>=0, min(rainless)-sd(rainless), max(min(rainless)-sd(rainless)/2, 0)) rainless.max <- ifelse(max(rainless)+sd(rainless)<=365, max(rainless)+sd(rainless), min(max(rainless)+sd(rainless)/2, 365)) } # ------------- - + # ------------- # Set up the datasets for training and prediction # ------------- # ----- # 1. Grab the training data -- this will be called "Y" in our bias correction equations # -- preserving the different simulations so we can look at a distribution of potential values - # -- This will get aggregated right off the bat so we so we're looking at the climatic means + # -- This will get aggregated right off the bat so we so we're looking at the climatic means # for the first part of bias-correction # ----- met.train <- data.frame(year=train.data$time$Year, @@ -212,23 +212,23 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU ) met.train[,v] <- 0 - # For precip, we want to adjust the total annual precipitation, and then calculate day of year + # For precip, we want to adjust the total annual precipitation, and then calculate day of year # adjustment & anomaly as fraction of total annual precipitation if(v == "precipitation_flux"){ # Find total annual preciptiation precip.ann <- aggregate(met.train$Y, by=met.train[,c("year", "ind")], FUN=sum) names(precip.ann)[3] <- "Y.tot" - + met.train <- merge(met.train, precip.ann, all=T) met.train$Y <- met.train$Y/met.train$Y.tot # Y is now fraction of annual precip in each timestep } - + # Aggregate to get rid of years so that we can compare climatic means; bring in covariance among climatic predictors dat.clim <- aggregate(met.train[,"Y"], by=met.train[,c("doy", "ind")], FUN=mean) # dat.clim[,v] <- 1 names(dat.clim)[3] <- "Y" # ----- - + # ----- # 2. 
Pull the raw ("source") data that needs to be bias-corrected -- this will be called "X" # -- this gets aggregated to the climatological mean right off the bat @@ -238,19 +238,19 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU X=utils::stack(data.frame(source.data[[v]][,ens.src]))[,1], ind.src=rep(paste0("X", 1:length(ens.src)), each=nrow(source.data[[v]])) ) - # met.src[,v] <- - + # met.src[,v] <- + if(v=="precipitation_flux"){ src.ann <- aggregate(met.src$X, by=met.src[,c("year", "ind.src")], FUN=sum) names(src.ann)[3] <- "X.tot" - + met.src <- merge(met.src, src.ann, all.x=T) # Putting precip as fraction of the year again - met.src$X <- met.src$X/met.src$X.tot - - } - + met.src$X <- met.src$X/met.src$X.tot + + } + # Lets deal with the source data first # - Adding in the ensembles to be predicted @@ -259,27 +259,27 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } else { met.src$ind <- met.src$ind.src } - - + + # Adding in the covariates from what's been done: for(v.pred in vars.debias[!vars.debias==v]){ met.train[,v.pred] <- utils::stack(data.frame(train.data[[v.pred]][,ens.train]))[,1] - + if(v.pred %in% names(dat.out)){ met.src[,v.pred] <- utils::stack(data.frame(dat.out[[v.pred]]))[,1] } else { met.src[,v.pred] <- utils::stack(data.frame(source.data[[v.pred]][,ens.src]))[,1] } } - - # Zero out other predictors we'd like to use, but don't actually have data for or don't + + # Zero out other predictors we'd like to use, but don't actually have data for or don't # want to rely on met.train[,vars.all[!vars.all %in% vars.debias]] <- 0 met.src [,vars.all[!vars.all %in% vars.debias]] <- 0 - + # met.src <- merge(met.src, src.cov) met.src[,v] <- 0 - + # Aggregate to get rid of years so that we can compare climatic means clim.src <- aggregate(met.src[met.src$year %in% yrs.overlap,c("X", vars.debias)], by=met.src[met.src$year %in% yrs.overlap,c("doy", "ind", "ind.src")], @@ -287,13 +287,13 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU clim.src[,vars.debias[!vars.debias %in% names(dat.out)]] <- 0 # names(clim.src)[3] <- "X" # ----- - + # ----- - # 3. Merge the training & cource climate data together the two sets of daily means + # 3. Merge the training & cource climate data together the two sets of daily means # -- this ends up pairing each daily climatological mean of the raw data with each simulation from the training data # ----- dat.clim <- merge(dat.clim[,], clim.src, all=T) - + if(v=="precipitation_flux"){ if(pair.anoms==F){ dat.ann <- precip.ann @@ -303,16 +303,16 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } } # ----- - + # ----- # 4. Pulling the source and training data to model the anomalies # - this includes pulling the covariates from what's already been done # ----- # The training data is already formatted, we just need to copy "Y" (our variable) to "X" as well met.train$X <- met.train$Y - + # ----- - + # Transforming zero-truncated variables where negative values are not possible (and zero is unlikely) # - Tried a couple different things, but the sqaure root transformation seems to be working best if(v %in% vars.transform){ @@ -322,48 +322,48 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU met.train$Y <- sqrt(met.train$Y) } # ------------- - - + + # ------------- # 5. 
Doing the bias correction by looping through the ensemble members - # - This is almost certainly not the most efficient way of doing it, but should fix some + # - This is almost certainly not the most efficient way of doing it, but should fix some # issues with the prediction phase needing lots of memory for large or long ensembles # ------------- sim.final <- data.frame(array(dim=c(nrow(source.data[[v]]), n.ens))) names(sim.final) <- paste0("X", 1:n.ens) - + for(ens in 1:n.ens){ - + ind = paste0("X", ens) # --------- # Doing the climatological bias correction # In all variables except precip, this adjusts the climatological means closest to the splice point - # -- because precip is relatively stochastic without a clear seasonal pattern, a zero-inflated distribution, + # -- because precip is relatively stochastic without a clear seasonal pattern, a zero-inflated distribution, # and low correlation with other met variables, we'll instead model potential low-frequency patterns in - # the data that is to be bias-corrected. In this instance we essentially consider any daily precip to be + # the data that is to be bias-corrected. In this instance we essentially consider any daily precip to be # an anomaly # --------- # mod.bias0 <- mgcv::gam(Y ~ s(doy, k=6) + X, data=dat.clim[dat.clim$ind == ind, ]) # summary(mod.bias) mod.bias <- mgcv::gam(Y ~ s(doy, k=6), data=dat.clim[dat.clim$ind == ind, ]) # summary(mod.bias) - + # Saving the mean predicted & residuals dat.clim[dat.clim$ind == ind, "pred"] <- predict(mod.bias) dat.clim[dat.clim$ind == ind, "resid"] <- resid(mod.bias) # summary(dat.clim) - + # Storing the model residuals to add in some extra error resid.bias <- resid(mod.bias) - + # # Checking the residuals to see if we can assume normality # plot(resid ~ pred, data=dat.clim); abline(h=0, col="red") # plot(resid ~ doy, data=dat.clim); abline(h=0, col="red") # hist(dat.clim$resid) met.src [met.src $ind == ind, "pred"] <- predict(mod.bias, newdata=met.src [met.src $ind == ind, ]) met.train[met.train$ind == ind, "pred"] <- predict(mod.bias, newdata=met.train[met.train$ind == ind, ]) - + # For Precip we need to bias-correct the total annual preciptiation + seasonal distribution if(v == "precipitation_flux"){ mod.ann <- lm(Y.tot ~ X.tot , data=dat.ann[dat.ann$ind==ind,]) @@ -375,7 +375,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU met.src[met.src$ind==ind,"pred.ann"] <- predict(mod.ann, newdata=met.src[met.src$ind==ind,]) } # --------- - + # --------- # Modeling the anomalies # In most cases, this is the deviation of each observation from the climatic mean for that day (estimated using a smoother) @@ -387,7 +387,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # --------- # We want to look at anomalies relative to the raw expected seasonal pattern, so we need to fit training and data to be debiased separately - anom.train <- mgcv::gam(X ~ s(doy, k=6) , data=met.train[met.train$ind==ind,]) + anom.train <- mgcv::gam(X ~ s(doy, k=6) , data=met.train[met.train$ind==ind,]) anom.src <- mgcv::gam(X ~ s(doy, k=6) , data=met.src[met.src$ind==ind & met.src$year %in% yrs.overlap,]) if(v == "precipitation_flux"){ @@ -401,61 +401,61 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # plot(anom.train~doy, data=met.train) # plot(anom.raw~doy, data=met.src[met.src$year %in% yrs.overlap,]) # par(mfrow=c(1,1)) - - - # Modeling the anomalies of the other predictors + + + # Modeling the anomalies of the 
other predictors # -- note: the downscaling & bias-correction of precip should have removed the monsoonal trend if there is no empirical basis for it # so this should be pretty straight-forward now for(j in vars.all[vars.all!=v]){ met.train[met.train$ind==ind, "Q"] <- met.train[met.train$ind==ind,j] met.src[met.src$ind==ind, "Q"] <- met.src[met.src$ind==ind,j] - + # Generating the predicted seasonal cycle for each variable - anom.train2 <- mgcv::gam(Q ~ s(doy, k=6), data=met.train[met.train$ind==ind,]) - anom.src2 <- mgcv::gam(Q ~ s(doy, k=6), data=met.src[met.src$year %in% yrs.overlap & met.src$ind==ind,]) + anom.train2 <- mgcv::gam(Q ~ s(doy, k=6), data=met.train[met.train$ind==ind,]) + anom.src2 <- mgcv::gam(Q ~ s(doy, k=6), data=met.src[met.src$year %in% yrs.overlap & met.src$ind==ind,]) met.train[met.train$ind==ind, paste0(j, ".anom")] <- resid(anom.train2) met.src[met.src$ind==ind, paste0(j, ".anom")] <- met.src[met.src$ind==ind,"Q"] - predict(anom.src2, newdata=met.src[met.src$ind==ind,]) - + rm(anom.train2, anom.src2) } - - # CRUNCEP has a few variables that assume a constant pattern from 1901-1950; + + # CRUNCEP has a few variables that assume a constant pattern from 1901-1950; # so we don't want to use their anomaly as a predictor otherwise we will perpetuate that less than ideal situation if(CRUNCEP==T & v %in% c("surface_downwelling_longwave_flux_in_air", "air_pressure", "wind_speed")) met.src$anom.raw <- 0 - + # Actually Modeling the anomalies # -- If we have empirical data, we can pair the anomalies to find a way to bias-correct those # -- If one of our datasets is a GCM, the patterns observed are just what underly the climate signal and no actual # event is "real". In this case we just want to leverage use the covariance our other met drivers to try and get # the right distribution of anomalies - if(pair.anoms==TRUE){ + if(pair.anoms==TRUE){ # if it's empirical we can, pair the anomalies for best estimation # Note: Pull the covariates from the training data to get any uncertainty &/or try to correct covariances # -- this makes it mroe consistent with the GCM calculations - dat.anom <- merge(met.src [met.src$ind==ind & met.src$year %in% yrs.overlap, c("year", "doy", "ind", "X", "anom.raw")], + dat.anom <- merge(met.src [met.src$ind==ind & met.src$year %in% yrs.overlap, c("year", "doy", "ind", "X", "anom.raw")], met.train[met.train$ind==ind,c("year", "doy", "anom.train", "ind", vars.all[vars.all!=v], paste0(vars.all[vars.all!=v], ".anom"))]) - + dat.anom[,v] <- 0 k=round(length(unique(met.src$year))/50,0) k=max(k, 4) # we can't have less than 4 knots - + # plot(anom.train ~ anom.raw, data=dat.anom) # abline(a=0, b=1, col="blue") # abline(lm(anom.train ~ anom.raw, data=dat.anom), col="red", lty="dashed") - + # Modeling in the predicted value from mod.bias dat.anom$pred <- predict(mod.bias, newdata=dat.anom) - + if (v %in% c("air_temperature", "air_temperature_maximum", "air_temperature_minimum")){ # ** We want to make sure we do these first ** - # These are the variables that have quasi-observed values for their whole time period, + # These are the variables that have quasi-observed values for their whole time period, # so we can use the the seasonsal trend, and the observed anaomalies # Note: because we can directly model the anomalies, the inherent long-term trend should be preserved mod.anom <- mgcv::gam(anom.train ~ s(doy, k=6) + anom.raw, data=dat.anom) } else if(v %in% c("surface_downwelling_shortwave_flux_in_air", "specific_humidity")){ - # CRUNCEP 
surface_downwelling_shortwave_flux_in_air and specific_humidity have been vary hard to fit to NLDAS because it has a different variance for some reason, - # and the only way I've been able to fix it is to model the temporal pattern seen in the dataset based on + # CRUNCEP surface_downwelling_shortwave_flux_in_air and specific_humidity have been vary hard to fit to NLDAS because it has a different variance for some reason, + # and the only way I've been able to fix it is to model the temporal pattern seen in the dataset based on # its own anomalies (not ideal, but it works) mod.anom <- mgcv::gam(anom.raw ~ s(doy, k=6) + s(year, k=k) + air_temperature_maximum.anom*air_temperature_minimum.anom, data=met.src[met.src$ind==ind,]) } else if(v=="precipitation_flux"){ @@ -463,35 +463,35 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # through 0 so we can try and reduce the likelihood of evenly distributed precipitation events # k=round(length(met.src$year)/(25*366),0) # k=max(k, 4) # we can't have less than 4 knots - + # mod.anom <- mgcv::gam(anom.raw ~ s(year, k=k) + (air_temperature_maximum.anom + air_temperature_minimum.anom + surface_downwelling_shortwave_flux_in_air.anom + surface_downwelling_longwave_flux_in_air.anom + specific_humidity.anom) -1, data=met.src[met.src$ind==ind,]) mod.anom <- mgcv::gam(anom.train ~ s(doy, k=6) + anom.raw - 1, data=dat.anom) } else if(v %in% c("wind_speed", "air_pressure", "surface_downwelling_longwave_flux_in_air")) { - # These variables are constant in CRU pre-1950. - # This means that we can not use information about the long term trend OR the actual annomalies + # These variables are constant in CRU pre-1950. + # This means that we can not use information about the long term trend OR the actual annomalies # -- they must be inferred from the other met we have mod.anom <- mgcv::gam(anom.train ~ s(doy, k=6) + (air_temperature_minimum.anom*air_temperature_maximum.anom + surface_downwelling_shortwave_flux_in_air.anom + specific_humidity.anom) , data=met.train[met.train$ind==ind,]) - } - } else { - # If we're dealing with non-empirical datasets, we can't pair anomalies to come up with a direct adjustment + } + } else { + # If we're dealing with non-empirical datasets, we can't pair anomalies to come up with a direct adjustment # In this case we have 2 options: - # 1) If we've already done at least one variable, we can leverage the covariance of the met drivers we've already downscaled + # 1) If we've already done at least one variable, we can leverage the covariance of the met drivers we've already downscaled # to come up with a relationship that we an use to predict the new set of anomalies # 2) If we don't have any other variables to leverage (i.e. this is our first met variable), we incorporate both the seasonal # trend (doy spline) and potential low-frequency trends in the data (year spline) k=round(length(unique(met.src$year))/50,0) k=max(k, 4) # we can't have less than 4 knots - + # vars.debias <- c("air_temperature", "air_temperature_maximum", "air_temperature_minimum", "specific_humidity", "precipitation_flux", "surface_downwelling_shortwave_flux_in_air", "air_pressure", "surface_downwelling_longwave_flux_in_air", "wind_speed") # Vars that are at daily and we just need to adjust the variance # We have some other anomaly to use! that helps a lot. 
-- use that to try and get low-frequency trends in the past if(v %in% c("air_temperature_maximum", "air_temperature_minimum")){ - # If we haven't already done another met product, our best shot is to just model the existing variance + # If we haven't already done another met product, our best shot is to just model the existing variance # and preserve as much of the low-frequency cylce as possible - mod.anom <- mgcv::gam(anom.raw ~ s(year, k=k), data=met.src[met.src$ind==ind,]) - } else if(v=="precipitation_flux"){ + mod.anom <- mgcv::gam(anom.raw ~ s(year, k=k), data=met.src[met.src$ind==ind,]) + } else if(v=="precipitation_flux"){ # If we're working with precipitation_flux, need to make the intercept 0 so that we have plenty of days with little/no rain - mod.anom <- mgcv::gam(anom.raw ~ s(year, k=k) + (air_temperature_maximum.anom*air_temperature_minimum.anom + surface_downwelling_shortwave_flux_in_air.anom + surface_downwelling_longwave_flux_in_air.anom + specific_humidity.anom), data=met.src[met.src$ind==ind,]) + mod.anom <- mgcv::gam(anom.raw ~ s(year, k=k) + (air_temperature_maximum.anom*air_temperature_minimum.anom + surface_downwelling_shortwave_flux_in_air.anom + surface_downwelling_longwave_flux_in_air.anom + specific_humidity.anom), data=met.src[met.src$ind==ind,]) } else if(v %in% c("surface_downwelling_shortwave_flux_in_air", "surface_downwelling_longwave_flux_in_air")){ # See if we have some other anomaly that we can use to get the anomaly covariance & temporal trends right # This relies on the assumption that the low-frequency trends are in proportion to the other met variables @@ -508,7 +508,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # pred.anom <- predict(mod.anom) resid.anom <- resid(mod.anom) # --------- - + # -------- # Predicting a bunch of potential posteriors over the full dataset # -------- @@ -516,20 +516,20 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU coef.gam <- coef(mod.bias) coef.anom <- coef(mod.anom) if(v == "precipitation_flux") coef.ann <- coef(mod.ann) - + # Setting up a case where if sanity checks fail, we pull more ensemble members n.new <- 1 cols.redo <- n.new sane.attempt=0 while(n.new>0 & sane.attempt <= sanity.tries){ - + # Rbeta <- matrix(nrow=0, ncol=1); Rbeta.anom <- matrix(nrow=0, ncol=1) # ntries=50 # try.now=0 # while(nrow(Rbeta)<1 & try.now<=ntries){ # Generate a random distribution of betas using the covariance matrix # I think the anomalies might be problematic, so lets get way more betas than we need and trim the distribution - # set.seed=42 + # set.seed=42 if(n.ens==1 | uncert.prop=="mean"){ Rbeta <- matrix(coef(mod.bias), ncol=length(coef(mod.bias))) } else { @@ -539,15 +539,15 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # # Filter our betas to remove outliers # ci.beta <- matrix(apply(Rbeta, 2, quantile, c(0.01, 0.99)), nrow=2) - # + # # # Only worry about the non-0 betas # beta.use <- which(abs(as.vector(coef(mod.bias)))>0) - # + # # Rbeta <- matrix(Rbeta[which(apply(Rbeta[,beta.use], 1, function(x) all(x > ci.beta[1,beta.use] & x < ci.beta[2,beta.use]))),], ncol=ncol(Rbeta)) - # + # # try.now=try.now+1 # } - + # try.now=0 # while(nrow(Rbeta.anom)<1 & try.now<=ntries){ # Generate a random distribution of betas using the covariance matrix @@ -557,46 +557,46 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } else { Rbeta.anom <- matrix(MASS::mvrnorm(n=n.new, coef(mod.anom), 
vcov(mod.anom)), ncol=length(coef(mod.anom))) } - dimnames(Rbeta.anom)[[2]] <- names(coef(mod.anom)) + dimnames(Rbeta.anom)[[2]] <- names(coef(mod.anom)) # # Filter our betas to remove outliers # ci.anom <- matrix(apply(Rbeta.anom, 2, quantile, c(0.01, 0.99)), nrow=2) - # + # # # Only worry about the non-0 betas # anom.use <- which(abs(as.vector(coef(mod.anom)))>0) - # + # # Rbeta.anom <- matrix(Rbeta.anom[which(apply(Rbeta.anom[,anom.use], 1, function(x) all(x > ci.anom[1,anom.use] & x < ci.anom[2,anom.use]))),], ncol=ncol(Rbeta.anom)) - # + # # try.now=try.now+1 # } - + # Rbeta <- matrix(Rbeta[sample(1:nrow(Rbeta), n.new, replace=T),], ncol=ncol(Rbeta)) # Rbeta.anom <- matrix(Rbeta.anom[sample(1:nrow(Rbeta.anom), n.new, replace=T),], ncol=ncol(Rbeta.anom)) - - + + if(v == "precipitation_flux"){ if(n.ens==1){ Rbeta.ann <- matrix(coef(mod.ann), ncol=length(coef.ann)) - } else { + } else { Rbeta.ann <- matrix(MASS::mvrnorm(n=n.new, coef(mod.ann), vcov(mod.ann)), ncol=length(coef(mod.ann))) } # ci.ann <- matrix(apply(Rbeta.ann, 2, quantile, c(0.01, 0.99)), nrow=2) # Rbeta.ann <- Rbeta.ann[which(apply(Rbeta.ann, 1, function(x) all(x > ci.ann[1,] & x < ci.ann[2,]))),] # Rbeta.ann <- matrix(Rbeta.ann[sample(1:nrow(Rbeta.ann), n.new, replace=T),], ncol=ncol(Rbeta.ann)) - } - + } + # Create the prediction matrix Xp <- predict(mod.bias, newdata=met.src[met.src$ind==ind,], type="lpmatrix") Xp.anom <- predict(mod.anom, newdata=met.src[met.src$ind==ind,], type="lpmatrix") if(v == "precipitation_flux"){ # Linear models have a bit of a difference in how we get the info out # Xp.ann <- predict(mod.ann, newdata=met.src, type="lpmatrix") - + met.src[met.src$ind==ind,"Y.tot"] <- met.src[met.src$ind==ind,"pred.ann"] mod.terms <- terms(mod.ann) m <- model.frame(mod.terms, met.src[met.src$ind==ind,], xlev=mod.ann$xlevels) Xp.ann <- model.matrix(mod.terms, m, constrasts.arg <- mod.ann$contrasts) - } - + } + # ----- # Simulate predicted met variables & add in some residual error # NOTE: Here we're assuming normal distribution of the errors, which looked pretty valid @@ -606,14 +606,14 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # Options for adding in residual error # # Option 1: Adding a constant error per time series # -- This is currently used for the climatological bias-correction because we're going to assume - # that we've biased the mean offset in the climatology (the seasonal bias is encorporated in the + # that we've biased the mean offset in the climatology (the seasonal bias is encorporated in the # spline estimation) # -- Note: Precipitation doesn't get residual error added here because that sort of bias is funneled into - # the anomaly model. The error in the Rbetas should adequately represent the uncertainty in the + # the anomaly model. The error in the Rbetas should adequately represent the uncertainty in the # low-frequency trends in the data # # Option 2: Adding a random error to each observation # -- This is used for the anomalies because they are by definition stochastic, highly unpredictable - # -- Note: this option currently ignores potential autocorrelation in anomalies (i.e. if 1 Jan was + # -- Note: this option currently ignores potential autocorrelation in anomalies (i.e. 
if 1 Jan was # unseasonably warm, odds are that the days around it weren't record-breaking cold) # -- I'm rolling with this for now and will smooth some of these over in the downscaling to # subdaily data @@ -626,59 +626,59 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU sim1a <- matrix(nrow=dim.new[1], ncol=dim.new[2]) sim1b <- matrix(nrow=dim.new[1], ncol=dim.new[2]) if(v == "precipitation_flux") sim1c <- matrix(nrow=dim.new[1], ncol=dim.new[2]) - + sim1 <- matrix(nrow=dim.new[1], ncol=dim.new[2]) } - + # Default option: no residual error; all error from the downscaling parameters sim1a[,cols.redo] <- Xp %*% t(Rbeta) # Seasonal Climate component with uncertainty sim1b[,cols.redo] <- Xp.anom %*% t(Rbeta.anom) # Weather component with uncertainty - if(v == "precipitation_flux"){ + if(v == "precipitation_flux"){ sim1a[,cols.redo] <- 0 sim1c <- Xp.ann %*% t(Rbeta.ann) # Mean annual precip uncertainty } - - # If we're dealing with the temperatures where there's basically no anomaly, + + # If we're dealing with the temperatures where there's basically no anomaly, # we'll get the uncertainty subtract the multi-decadal trend out of the anomalies; not a perfect solution, but it will increase the variability if(pair.anoms==F & (v %in% c("air_temperature_maximum", "air_temperature_minimum"))){ - # sim1b.norm <- apply(sim1b, 1, mean) + # sim1b.norm <- apply(sim1b, 1, mean) # What we need is to remove the mean-trend from the anomalies and then add the trend (with uncertinaties) back in # Note that for a single-member ensemble, this just undoes itself anom.detrend <- met.src[met.src$ind==ind,"anom.raw"] - predict(mod.anom) - + # NOTE: This section can probably be removed and simplified since it should always be a 1-column array now if(length(cols.redo)>1){ sim1b[,cols.redo] <- apply(sim1b[,cols.redo], 2, FUN=function(x){x+anom.detrend}) # Get the range around that medium-frequency trend } else { sim1b[,cols.redo] <- as.matrix(sim1b[,cols.redo] + anom.detrend) } - + } - - - # Option 1: Adding a constant error per time series for the cliamte correction + + + # Option 1: Adding a constant error per time series for the cliamte correction # (otherwise we're just doubling anomalies) # sim1a <- sweep(sim1a, 2, rnorm(n, mean(resid.bias), sd(resid.bias)), FUN="+") # if(v!="precipitation_flux") sim1a <- sweep(sim1a, 2, rnorm(n, mean(resid.bias), sd(resid.bias)), FUN="+") # Only apply if not working with precipitation_flux # sim1b <- sweep(sim1b, 2, rnorm(n, mean(resid.anom), sd(resid.anom)), FUN="+") - + # # # Option 2: Adding a random error to each observation (anomaly error) # if(v!="precipitation_flux") sim1a <- sim1a + rnorm(length(sim1a), mean(resid.bias), sd(resid.bias)) # sim1b <- sim1b + rnorm(length(sim1b), mean(resid.anom), sd(resid.anom)) - + # # Option 3: explicitly modeling the errors in some way # ----- - + # Adding climate and anomaly together sim1[,cols.redo] <- sim1a[,cols.redo] + sim1b[,cols.redo] # climate + weather = met driver!! # If we're dealing with precip, transform proportions of rain back to actual precip - if(v == "precipitation_flux"){ + if(v == "precipitation_flux"){ sim1[,cols.redo] <- sim1[,cols.redo]*sim1c[,cols.redo] # met.src$X <- met.src$X*met.src$X.tot # met.src$anom.raw <- met.src$anom.raw*met.src$X.tot } - + # ----- # SANITY CHECKS!!! 
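      # Note on the shared pattern below: a simulated column is flagged for
      # re-drawing (cols.redo) when any value breaks a hard physical limit
      # (e.g., the ED2-style bounds this commit tightens) OR falls outside
      # mean(X) +/- sanity.sd * sd(X) of the training data; sqrt-transformed
      # variables are compared on the back-transformed (x^2) scale. Flagged
      # columns are re-simulated up to sanity.tries times before the
      # force.sanity trimming below kicks in.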
# ----- @@ -693,17 +693,17 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU max(x) > mean(met.train$X) + sanity.sd*sd(met.train$X) )) } - #"specific_humidity", + #"specific_humidity", if(v == "specific_humidity"){ # Based on google, it looks like values of 30 g/kg can occur in the tropics, so lets go above that # Also, the minimum humidity can't be 0 so lets just make it extremely dry; lets set this for 1 g/Mg - + cols.redo <- which(apply(sim1, 2, function(x) min(x^2) < 1e-6 | max(x^2) > 40e-3 | min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) - )) + )) } - #"surface_downwelling_shortwave_flux_in_air", + #"surface_downwelling_shortwave_flux_in_air", if(v == "surface_downwelling_shortwave_flux_in_air"){ # Based on something found from Columbia, average Radiative flux at ATM is 1360 W/m2, so for a daily average it should be less than this # Lets round 1360 and divide that by 2 (because it should be a daily average) and conservatively assume albedo of 20% (average value is more like 30) @@ -711,7 +711,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU cols.redo <- which(apply(sim1, 2, function(x) max(x^2) > 1360/2*0.8 | min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) - )) + )) } if(v == "air_pressure"){ # According to wikipedia the highest barometric pressure ever recorded was 1085.7 hPa = 1085.7*100 Pa; Dead sea has average pressure of 1065 hPa @@ -720,25 +720,25 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU cols.redo <- which(apply(sim1, 2, function(x) min(x) < 870*100 | max(x) > 1100*100 | min(x) < mean(met.train$X) - sanity.sd*sd(met.train$X) | max(x) > mean(met.train$X) + sanity.sd*sd(met.train$X) - )) + )) } - if(v == "surface_downwelling_longwave_flux_in_air"){ + if(v == "surface_downwelling_longwave_flux_in_air"){ # A NASA presentation has values topping out ~300 and min ~0: https://ceres.larc.nasa.gov/documents/STM/2003-05/pdf/smith.pdf # A random journal article has 130 - 357.3: http://www.tandfonline.com/doi/full/10.1080/07055900.2012.760441 # ED2 sanity checks boudn longwave at 40 & 600 - + cols.redo <- which(apply(sim1, 2, function(x) min(x^2) < 40 | max(x^2) > 600 | min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) - )) - + )) + } if(v == "wind_speed"){ # According to wikipedia, the hgihest wind speed ever recorded is a gust of 113 m/s; the maximum 5-mind wind speed is 49 m/s cols.redo <- which(apply(sim1, 2, function(x) max(x^2) > 50/2 | min(x^2) < mean(met.train$X^2) - sanity.sd*sd(met.train$X^2) | max(x^2) > mean(met.train$X^2) + sanity.sd*sd(met.train$X^2) - )) + )) } if(v == "precipitation_flux"){ # According to wunderground, ~16" in 1 hr is the max; Lets divide that by 2 for the daily rainfall rate @@ -747,7 +747,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU cols.redo <- which(apply(sim1, 2, function(x) max(x) > 16/2*25.4/(60*60) | min(x) < min(met.train$X) - sanity.sd*sd(met.train$X) | max(x) > max(met.train$X) + sanity.sd*sd(met.train$X) - )) + )) } n.new = length(cols.redo) if(force.sanity){ @@ -757,8 +757,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } # ----- } # End Sanity Attempts - - + + if(force.sanity & n.new>0){ # # If we're still struggling, but we have at 
least some workable columns, lets just duplicate those: # if(n.new<(round(n.ens/2)+1)){ @@ -766,15 +766,15 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # cols.safe <- cols.safe[!(cols.safe %in% cols.redo)] # sim1[,cols.redo] <- sim1[,sample(cols.safe, n.new, replace=T)] # } else { - # for known problem variables, lets force sanity as a last resort + # for known problem variables, lets force sanity as a last resort if(v %in% c("air_temperature", "air_temperature_maximum", "air_temperature_minimum")){ warning(paste("Forcing Sanity:", v)) - if(min(sim1) < max(273.15-95, mean(met.train$X) - sanity.sd*sd(met.train$X))) { - qtrim <- max(273.15-95, mean(met.train$X) - sanity.sd*sd(met.train$X)) + 1e-6 + if(min(sim1) < max(184, mean(met.train$X) - sanity.sd*sd(met.train$X))) { + qtrim <- max(184, mean(met.train$X) - sanity.sd*sd(met.train$X)) + 1e-6 sim1[sim1 < qtrim] <- qtrim } - if(max(sim1) > min(273.15+70, mean(met.train$X) + sd(met.train$X^2))) { - qtrim <- min(273.15+70, mean(met.train$X) + sanity.sd*sd(met.train$X)) - 1e-6 + if(max(sim1) > min(331, mean(met.train$X) + sd(met.train$X^2))) { + qtrim <- min(331, mean(met.train$X) + sanity.sd*sd(met.train$X)) - 1e-6 sim1[sim1 > qtrim] <- qtrim } } else if(v == "surface_downwelling_shortwave_flux_in_air"){ @@ -784,28 +784,28 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # cols.redo <- which(apply(sim1, 2, function(x) max(x^2) > 1360/2*0.8 | # min(x) < mean(met.train$X) - sanity.sd*sd(met.train$X) | # max(x) > mean(met.train$X) + sanity.sd*sd(met.train$X) - # )) + # )) warning(paste("Forcing Sanity:", v)) if(min(sim1^2) < max(mean(met.train$X^2) - sanity.sd*sd(met.train$X^2))) { qtrim <- max(mean(met.train$X^2) - sanity.sd*sd(met.train$X^2)) sim1[sim1^2 < qtrim] <- sqrt(qtrim) } - if(max(sim1^2) > min(1360/2*0.8, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { - qtrim <- min(1360/2*0.8, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) + if(max(sim1^2) > min(1500*0.8, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { + qtrim <- min(1500*0.8, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) sim1[sim1^2 > qtrim] <- sqrt(qtrim) } - - } else if(v == "surface_downwelling_longwave_flux_in_air"){ + + } else if(v == "surface_downwelling_longwave_flux_in_air"){ # Having a heck of a time keeping things reasonable, so lets trim it # ED2 sanity checks boudn longwave at 40 & 600 - + warning(paste("Forcing Sanity:", v)) if(min(sim1^2) < max(40, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2))) { qtrim <- max(40, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2)) sim1[sim1^2 < qtrim] <- sqrt(qtrim) } if(max(sim1^2) > min(600, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { - qtrim <- min(600, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) + qtrim <- min(600, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) sim1[sim1^2 > qtrim] <- sqrt(qtrim) } } else if(v=="specific_humidity"){ @@ -815,18 +815,18 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU qtrim <- max(1e-6, mean(met.train$X^2) - sanity.sd*sd(met.train$X^2)) sim1[sim1^2 < qtrim] <- sqrt(qtrim) } - if(max(sim1^2) > min(40e-3, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { - qtrim <- min(40e-3, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) + if(max(sim1^2) > min(3.2e-2, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2))) { + qtrim <- min(3.2e-2, mean(met.train$X^2) + sanity.sd*sd(met.train$X^2)) sim1[sim1^2 > qtrim] <- sqrt(qtrim) } } else 
if(v=="air_pressure"){ warning(paste("Forcing Sanity:", v)) - if(min(sim1)< max(870*100, mean(met.train$X) - sanity.sd*sd(met.train$X))){ - qtrim <- min(870*100, mean(met.train$X) - sanity.sd*sd(met.train$X)) + if(min(sim1)< max(45000, mean(met.train$X) - sanity.sd*sd(met.train$X))){ + qtrim <- min(45000, mean(met.train$X) - sanity.sd*sd(met.train$X)) sim1[sim1 < qtrim] <- qtrim } - if(max(sim1) < min(1100*100, mean(met.train$X) + sanity.sd*sd(met.train$X))){ - qtrim <- min(1100*100, mean(met.train$X) + sanity.sd*sd(met.train$X)) + if(max(sim1) < min(11000000, mean(met.train$X) + sanity.sd*sd(met.train$X))){ + qtrim <- min(11000000, mean(met.train$X) + sanity.sd*sd(met.train$X)) sim1[sim1 > qtrim] <- qtrim } } else if(v=="wind_speed"){ @@ -835,8 +835,8 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU qtrim <- min(0, mean(met.train$X) - sanity.sd*sd(met.train$X)) sim1[sim1 < qtrim] <- qtrim } - if(max(sim1) < min(sqrt(50/2), mean(met.train$X) + sanity.sd*sd(met.train$X))){ - qtrim <- min(sqrt(50/2), mean(met.train$X) + sanity.sd*sd(met.train$X)) + if(max(sim1) < min(sqrt(85), mean(met.train$X) + sanity.sd*sd(met.train$X))){ + qtrim <- min(sqrt(85), mean(met.train$X) + sanity.sd*sd(met.train$X)) sim1[sim1 > qtrim] <- qtrim } } else { @@ -845,19 +845,19 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } # } # End if else } # End force sanity - - + + # Un-transform variables where we encounter zero-truncation issues - # NOTE: Need to do this *before* we sum the components!! + # NOTE: Need to do this *before* we sum the components!! #if(v %in% vars.transform){ # sim1 <- sim1^2 # # met.src[met.src$ind==ind,"X"] <- met.src[met.src$ind==ind,"X"]^2 - #} - - - # For preciptiation, we need to make sure we don't have constant drizzle and have + #} + + + # For preciptiation, we need to make sure we don't have constant drizzle and have # at least some dry days. 
To deal with this, I make the assumption that there hasn't - # been a trend in number of rainless days over the past 1000 years and use the mean & + # been a trend in number of rainless days over the past 1000 years and use the mean & # sd of rainless days in the training data to randomly distribute the rain in the past # Update: We also need to look at the distribution of consequtive rainless days if(v=="precipitation_flux"){ @@ -865,7 +865,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU for(y in min(met.src[met.src$ind==ind, "year"]):max(met.src[met.src$ind==ind, "year"])){ # Figure out which rows belong to this particular year rows.yr <- which(met.src[met.src$ind==ind, "year"]==y) - + # Before adjusting rainless days, make sure we get rid of our negative days first dry <- rows.yr[which(sim1[rows.yr,j] < 0)] while(length(dry)>0){ # until we have our water year balanced @@ -878,12 +878,12 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } dry <- rows.yr[which(sim1[rows.yr,j] < 0)] # update our dry days } - + # n.now = number of rainless days for this sim - n.now <- round(rnorm(1, mean(rainless, na.rm=T), sd(rainless, na.rm=T)), 0) + n.now <- round(rnorm(1, mean(rainless, na.rm=T), sd(rainless, na.rm=T)), 0) if(n.now < rainless.min) n.now <- rainless.min # Make sure we don't have negative or no rainless days if(n.now > rainless.max) n.now <- rainless.max # Make sure we have at least one day with rain - + # We're having major seasonality issues, so lets randomly redistribute our precip # Pull ~twice what we need and randomly select from that so that we don't have such clean cuttoffs # set.seed(12) @@ -893,9 +893,9 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # cutoff <- 1-cutoff dry1 <- rows.yr[which(sim1[rows.yr,j] > cutoff)] dry <- sample(dry1, 365-n.now, replace=T) - + wet <- sample(rows.yr[!rows.yr %in% dry], length(dry), replace=T) - + # Go through and randomly redistribute the precipitation to days we're not designating as rainless # Note, if we don't loop through, we might lose some of our precip # IN the case of redistributing rain to prevent super droughts, divide by 2 @@ -903,24 +903,24 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU sim1[wet[r],j] <- sim1[dry[r],j]/2 sim1[dry[r],j] <- sim1[dry[r],j]/2 } - + } else { - # Figure out which days are currently below our cutoff and randomly distribute - # their precip to days that are not below the cutoff (this causes a more bi-modal - # distribution hwere dry days get drier), but other options ended up with either - # too few rainless days because of only slight redistribution (r+1) or buildup + # Figure out which days are currently below our cutoff and randomly distribute + # their precip to days that are not below the cutoff (this causes a more bi-modal + # distribution hwere dry days get drier), but other options ended up with either + # too few rainless days because of only slight redistribution (r+1) or buildup # towards the end of the year (random day that hasn't happened) dry1 <- rows.yr[which(sim1[rows.yr,j] < cutoff)] dry <- sample(dry1, min(n.now, length(dry1)), replace=F) dry1 <- dry1[!dry1 %in% dry] # dry <- dry[order(dry)] - + # Figure out how close together our dry are # Now checking to see if we need to move rainy days # calculating the mean & sd for rainless days redistrib=T - # wet.max <- round(rnorm(1, mean(cons.wet, na.rm=T), sd(cons.wet, na.rm=T)), 0) + # wet.max <- 
round(rnorm(1, mean(cons.wet, na.rm=T), sd(cons.wet, na.rm=T)), 0) while(redistrib==T & length(dry1)>1){ ens.wet <- vector() wet.end <- vector() @@ -929,7 +929,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # If we don't have rain, add it to our tally if(!rows.yr[z] %in% dry){ tally=tally+1 - } + } # If we have rain and it resets our tally, # - store tally in our vector; then reset if(rows.yr[z] %in% dry & tally>0){ @@ -938,32 +938,32 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU tally=0 } } # end z - + # If we have a worryingly high number of consequtive wet days (outside of 6 sd); try a new dry if(max(ens.wet) > max(cons.wet)+sd(cons.wet) ){ # print("redistributing dry days") - # If we have a wet period that's too long, lets find the random dry that's - # closest to the midpoint of the longest + # If we have a wet period that's too long, lets find the random dry that's + # closest to the midpoint of the longest # Finding what we're going to insert as our new dry day wet.max <- which(ens.wet==max(ens.wet))[1] dry.diff <- abs(dry1 - round(wet.end[wet.max]-ens.wet[wet.max]/2)+1) dry.new <- which(dry.diff==min(dry.diff))[1] - + # Finding the closest dry date to shift dry.diff2 <- abs(dry - round(wet.end[wet.max]-ens.wet[wet.max]/2)+1) dry.replace <- which(dry.diff2==min(dry.diff2))[1] dry[dry.replace] <- dry1[dry.new] - + dry1 <- dry1[dry1!=dry1[dry.new]] # Drop the one we just moved so we don't get in an infinite loop } else { redistrib=F } } - # - + # + # Figure out where to put the extra rain; allow replacement for good measure wet <- sample(rows.yr[!rows.yr %in% dry], length(dry), replace=T) - + # Go through and randomly redistribute the precipitation to days we're not designating as rainless # Note, if we don't loop through, we might lose some of our precip for(r in 1:length(dry)){ @@ -972,11 +972,11 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU } } - + } # End year (y) } # End sim (j) } # End precip - + # Randomly pick one from this meta-ensemble to save # this *should* be propogating uncertainty because we have the ind effects in all of the models and we're randomly adding as we go sim.final[,ens] <- as.vector(sim1) @@ -986,19 +986,19 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # if(uncert.prop=="mean"){ # sim.final[,ens] <- apply(sim1, 1, mean) # } - + utils::setTxtProgressBar(pb, pb.ind) pb.ind <- pb.ind+1 - + rm(mod.bias, anom.train, anom.src, mod.anom, Xp, Xp.anom, sim1, sim1a, sim1b) } # End ensemble loop - if(v == "precipitation_flux"){ + if(v == "precipitation_flux"){ # sim1 <- sim1*sim1c met.src$X <- met.src$X*met.src$X.tot met.src$anom.raw <- met.src$anom.raw*met.src$X.tot } - + if(v %in% vars.transform){ sim.final <- sim.final^2 dat.clim[,c("X", "Y")] <- (dat.clim[,c("X", "Y")]^2) @@ -1006,24 +1006,24 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU met.train$X <- (met.train$X)^2 met.train$Y <- (met.train$Y)^2 } - + # Store the output in our dat.out dat.out[[v]] <- sim.final # ------------- - + # ------------- # Save some diagnostic graphs if useful # ------------- if(save.diagnostics==TRUE){ dir.create(path.diagnostics, recursive=T, showWarnings=F) - + dat.pred <- source.data$time dat.pred$Date <- as.POSIXct(dat.pred$Date) dat.pred$obs <- apply(source.data[[v]], 1, mean, na.rm=T) dat.pred$mean <- apply(dat.out[[v]], 1, mean, na.rm=T) dat.pred$lwr <- apply(dat.out[[v]], 1, quantile, 
0.025, na.rm=T) dat.pred$upr <- apply(dat.out[[v]], 1, quantile, 0.975, na.rm=T) - + # Plotting the observed and the bias-corrected 95% CI grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "day.png", sep="_")), height=6, width=6, units="in", res=220) print( @@ -1038,10 +1038,10 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU ggplot2::theme_bw() ) grDevices::dev.off() - + # Plotting a few random series to get an idea for what an individual pattern looks liek col.samp <- paste0("X", sample(1:n.ens, min(3, n.ens))) - + sim.sub <- data.frame(dat.out[[v]])[,col.samp] for(i in 1:ncol(sim.sub)){ sim.sub[,i] <- as.vector(sim.sub[,i]) @@ -1049,7 +1049,7 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # names(test) <- col.samp stack.sims <- utils::stack(sim.sub) stack.sims[,c("Year", "DOY", "Date")] <- dat.pred[,c("Year", "DOY", "Date")] - + grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "day2.png", sep="_")), height=6, width=6, units="in", res=220) print( ggplot2::ggplot(data=stack.sims[stack.sims$Year>=mean(stack.sims$Year)-2 & stack.sims$Year<=mean(stack.sims$Year)+2,]) + @@ -1058,13 +1058,13 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU ggplot2::theme_bw() ) grDevices::dev.off() - + # Looking tat the annual means over the whole time series to make sure we're getting decent interannual variability dat.yr <- aggregate(dat.pred[,c("obs", "mean", "lwr", "upr")], by=list(dat.pred$Year), FUN=mean) names(dat.yr)[1] <- "Year" - + grDevices::png(file.path(path.diagnostics, paste(ens.name, v, "annual.png", sep="_")), height=6, width=6, units="in", res=220) print( ggplot2::ggplot(data=dat.yr[,]) + @@ -1079,26 +1079,26 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU ggplot2::theme_bw() ) grDevices::dev.off() - + } # ------------- - + } # End looping through variables # ------------------------------------------- - - + + # Save the output - nc.info <- data.frame(CF.name = c("air_temperature", "air_temperature_minimum", "air_temperature_maximum", "precipitation_flux", - "surface_downwelling_shortwave_flux_in_air", "surface_downwelling_longwave_flux_in_air", - "air_pressure", "specific_humidity", "wind_speed"), - longname = c("2 meter air temperature", "2 meter minimum air temperature", "2 meter maximum air temperature", - "cumulative precipitation (water equivalent)", "incident (downwelling) showtwave radiation", - "incident (downwelling) longwave radiation", "air pressure at the surface", - "Specific humidity measured at the lowest level of the atmosphere", - "wind_speed speed"), + nc.info <- data.frame(CF.name = c("air_temperature", "air_temperature_minimum", "air_temperature_maximum", "precipitation_flux", + "surface_downwelling_shortwave_flux_in_air", "surface_downwelling_longwave_flux_in_air", + "air_pressure", "specific_humidity", "wind_speed"), + longname = c("2 meter air temperature", "2 meter minimum air temperature", "2 meter maximum air temperature", + "cumulative precipitation (water equivalent)", "incident (downwelling) showtwave radiation", + "incident (downwelling) longwave radiation", "air pressure at the surface", + "Specific humidity measured at the lowest level of the atmosphere", + "wind_speed speed"), units = c("K", "K", "K", "kg m-2 s-1", "W m-2", "W m-2", "Pa", "kg kg-1", "m s-1") ) - + # Define our lat/lon dims since those will be constant dim.lat <- ncdf4::ncdim_def(name='latitude', units='degree_north', 
vals=lat.in, create_dimvar=TRUE) dim.lon <- ncdf4::ncdim_def(name='longitude', units='degree_east', vals=lon.in, create_dimvar=TRUE) @@ -1111,35 +1111,35 @@ debias.met.regression <- function(train.data, source.data, n.ens, vars.debias=NU # Doing some row/time indexing rows.yr <- which(dat.out$time$Year==yr) nday <- ifelse(lubridate::leap_year(yr), 366, 365) - + # Finish defining our time variables (same for all ensemble members) dim.time <- ncdf4::ncdim_def(name='time', units="sec", vals=seq(1*24*360, (nday+1-1/24)*24*360, length.out=length(rows.yr)), create_dimvar=TRUE, unlim=TRUE) nc.dim=list(dim.lat,dim.lon,dim.time) - + # Setting up variables and dimensions var.list = list() dat.list = list() - + for(j in 1:length(vars.debias)){ - var.list[[j]] = ncdf4::ncvar_def(name=vars.debias[j], - units=as.character(nc.info[nc.info$CF.name==vars.debias[j], "units"]), + var.list[[j]] = ncdf4::ncvar_def(name=vars.debias[j], + units=as.character(nc.info[nc.info$CF.name==vars.debias[j], "units"]), longname=as.character(nc.info[nc.info$CF.name==vars.debias[j], "longname"]), dim=nc.dim, missval=-999, verbose=verbose) } names(var.list) <- vars.debias - + # Loop through & write each ensemble member for(i in 1:n.ens){ # Setting up file structure ens.path <- file.path(outfolder, paste(ens.name, ens.mems[i], sep="_")) dir.create(ens.path, recursive=T, showWarnings=F) loc.file <- file.path(ens.path, paste(ens.name, ens.mems[i], stringr::str_pad(yr, width=4, side="left", pad="0"), "nc", sep = ".")) - + for(j in 1:length(vars.debias)){ dat.list[[j]] = array(dat.out[[vars.debias[j]]][rows.yr,i], dim=c(length(lat.in), length(lon.in), length(rows.yr))) # Go ahead and make the arrays } names(dat.list) <- vars.debias - + ## put data in new file loc <- ncdf4::nc_create(filename=loc.file, vars=var.list, verbose=verbose) for(j in 1:length(vars.debias)){ diff --git a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R index 80a38f069ef..b9ee80e71f6 100644 --- a/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R +++ b/modules/data.atmosphere/R/tdm_lm_ensemble_sims.R @@ -364,7 +364,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # we'll allow some drift outside of what we have for our max/min, but not too much; # - right now general rule of thumb of 2 degrees leeway on the prescribed - cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 273.15-95 | max(x) > 273.15+70 | + cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 184 | max(x) > 331 | # min(x) < tmin.ens-2 | max(x) > tmax.ens+2 | min(x) < filter.mean-sanity.sd*filter.sd | max(x) > filter.mean+sanity.sd*filter.sd )) @@ -373,7 +373,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. if(v == "specific_humidity"){ #LOG!! # Based on google, it looks like values of 30 g/kg can occur in the tropics, so lets go above that # Also, the minimum humidity can't be 0 so lets just make it extremely dry; lets set this for 1 g/Mg - cols.redo <- which(apply(dat.pred, 2, function(x) min(exp(x)) < 1e-6 | max(exp(x)) > 40e-3 | + cols.redo <- which(apply(dat.pred, 2, function(x) min(exp(x)) < 1e-6 | max(exp(x)) > 3.2e-2 | min(exp(x)) < filter.mean-sanity.sd*filter.sd | max(exp(x)) > filter.mean+sanity.sd*filter.sd ) ) @@ -384,7 +384,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. 
# Lets round 1360 and divide that by 2 (because it should be a daily average) and conservatively assume albedo of 20% (average value is more like 30) # Source http://eesc.columbia.edu/courses/ees/climate/lectures/radiation/ dat.pred[dat.pred < 0] <- 0 - cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 1360 | min(x) < filter.mean-sanity.sd*filter.sd | + cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 1500 | min(x) < filter.mean-sanity.sd*filter.sd | max(x) > filter.mean+sanity.sd*filter.sd )) } @@ -392,7 +392,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # According to wikipedia the highest barometric pressure ever recorded was 1085.7 hPa = 1085.7*100 Pa; Dead sea has average pressure of 1065 hPa # - Lets round up to 1100 hPA # Also according to Wikipedia, the lowest non-tornadic pressure ever measured was 870 hPA - cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 850*100 | max(x) > 1100*100 | + cols.redo <- which(apply(dat.pred, 2, function(x) min(x) < 45000 | max(x) > 110000 | min(x) < filter.mean-sanity.sd*filter.sd | max(x) > filter.mean+sanity.sd*filter.sd )) @@ -410,7 +410,7 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. } if(v == "wind_speed"){ # According to wikipedia, the hgihest wind speed ever recorded is a gust of 113 m/s; the maximum 5-mind wind speed is 49 m/s - cols.redo <- which(apply(dat.pred, 2, function(x) max(x^2) > 50 | + cols.redo <- which(apply(dat.pred, 2, function(x) max(x^2) > 85 | min(x^2) < filter.mean-sanity.sd*filter.sd | max(x^2) > filter.mean+sanity.sd*filter.sd )) @@ -419,7 +419,8 @@ lm_ensemble_sims <- function(dat.mod, n.ens, path.model, direction.filter, lags. # According to wunderground, ~16" in 1 hr is the max # https://www.wunderground.com/blog/weatherhistorian/what-is-the-most-rain-to-ever-fall-in-one-minute-or-one-hour.html # 16; x25.4 = inches to mm; /(60*60) = hr to sec - cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 16*25.4/(60*60) + # Updated to ED2 max: 400 mm/hr + cols.redo <- which(apply(dat.pred, 2, function(x) max(x) > 0.1111 )) } From 8791c36236edf82b1294100df373d9ecbbad2525 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 18 Nov 2020 22:55:40 +0000 Subject: [PATCH 1660/2289] automated documentation update --- modules/data.atmosphere/man/debias.met.regression.Rd | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/modules/data.atmosphere/man/debias.met.regression.Rd b/modules/data.atmosphere/man/debias.met.regression.Rd index d81d57bb541..ab18a97cbcd 100644 --- a/modules/data.atmosphere/man/debias.met.regression.Rd +++ b/modules/data.atmosphere/man/debias.met.regression.Rd @@ -41,12 +41,12 @@ debias.met.regression( \item{vars.debias}{- which met variables should be debiased? 
if NULL, all variables in train.data}

-\item{CRUNCEP}{- flag for if the dataset being downscaled is CRUNCEP; if TRUE, special cases triggered for 
+\item{CRUNCEP}{- flag for if the dataset being downscaled is CRUNCEP; if TRUE, special cases triggered for
 met variables that have been naively gapfilled for certain time periods}

 \item{pair.anoms}{- logical stating whether anomalies from the same year should be matched or not}

-\item{pair.ens}{- logical stating whether ensembles from train and source data need to be paired together 
+\item{pair.ens}{- logical stating whether ensembles from train and source data need to be paired together
 (for uncertainty propagation)}

 \item{uncert.prop}{- method for error propagation for child ensemble members; options=c(random, mean); random strongly encouraged if n.ens>1}
@@ -61,7 +61,7 @@ met variables that have been naively gapfilled for certain time periods}

 \item{ens.name}{- what is the name that should be attached to the debiased ensemble}

-\item{ens.mems}{- what labels/numbers to attach to the ensemble members so we can gradually build bigger ensembles 
+\item{ens.mems}{- what labels/numbers to attach to the ensemble members so we can gradually build bigger ensembles
 without having to do giant runs at once; if NULL will be numbered 1:n.ens}

 \item{force.sanity}{- (logical) do we force the data to meet sanity checks?}
@@ -88,8 +88,8 @@ without having to do giant runs at once; if NULL will be numbered 1:n.ens}
 functions print debugging information as they run?}
 }
 \description{
-This script debiases one dataset (e.g. GCM, re-analysis product) given another higher 
- resolution product or empirical observations.  It assumes input are in annual CF standard 
+This script debiases one dataset (e.g. GCM, re-analysis product) given another higher
+ resolution product or empirical observations. It assumes inputs are in annual CF standard
 files that are generated from the pecan extract or download functions.
}
\details{

From 3ed7e260adb3b11fe372e4a521666520a54069a4 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Thu, 19 Nov 2020 14:34:11 -0500
Subject: [PATCH 1661/2289] updating grep to look at basename so dir name
 doesn't impact file renaming

---
 modules/assim.sequential/R/sda.enkf_refactored.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R
index d96c25efae3..d1b73974441 100644
--- a/modules/assim.sequential/R/sda.enkf_refactored.R
+++ b/modules/assim.sequential/R/sda.enkf_refactored.R
@@ -316,7 +316,7 @@ sda.enkf <- function(settings,
   run.id <- outconfig$runs$id
   ensemble.id <- outconfig$ensemble.id

-  if(t==1) inputs <- outconfig$samples$met # for any time after t==1 the met is the splitted met
+  if(t==1) inputs <- outconfig$samples$met # for any time after t==1 the met is the split met

   if(control$debug) browser()
   #-------------------------------------------- RUN
@@ -355,7 +355,7 @@ sda.enkf <- function(settings,
                     "*.nc$",
                     recursive = TRUE,
                     full.names = TRUE)
-      files <- files[grep(pattern = "SDA*", files, invert = TRUE)]
+      files <- files[grep(pattern = "SDA*", basename(files), invert = TRUE)]

       file.rename(files,

From 62727c98541c0d4db3fd63105396e48aa6b4c394 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Thu, 19 Nov 2020 14:38:17 -0500
Subject: [PATCH 1662/2289] Hamze's forecast move made the no-data run produce
 no forecast, so I moved it back outside the any(obs) section

---
 modules/assim.sequential/R/sda.enkf_refactored.R | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R
index d1b73974441..79f7c7558e3 100644
--- a/modules/assim.sequential/R/sda.enkf_refactored.R
+++ b/modules/assim.sequential/R/sda.enkf_refactored.R
@@ -366,6 +366,10 @@ sda.enkf <- function(settings,

     X <- do.call(rbind, X)

+    #unit scaling if needed
+
+    X <- rescaling_stateVars(settings, X, multiply = TRUE)
+
     if(sum(X,na.rm=T) == 0){
       logger.severe(paste('NO FORECAST for',obs.times[t],'Check outdir logfiles or read restart. Do you have the right variable names?'))

@@ -541,6 +545,8 @@ sda.enkf <- function(settings,
     new.state  <- as.data.frame(analysis)
     ANALYSIS[[t]] <- analysis

+    FORECAST[[t]] <- X
+
     ###-------------------------------------------------------------------###
     ###               save outputs                                        ###

From 4b640c2383211e3cbfa64bd0a3358fe0c0077b81 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Thu, 19 Nov 2020 14:48:05 -0500
Subject: [PATCH 1663/2289] fix for split inputs

---
 models/sipnet/R/split_inputs.SIPNET.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/models/sipnet/R/split_inputs.SIPNET.R b/models/sipnet/R/split_inputs.SIPNET.R
index 2b1817ee244..d4eee384a26 100644
--- a/models/sipnet/R/split_inputs.SIPNET.R
+++ b/models/sipnet/R/split_inputs.SIPNET.R
@@ -52,7 +52,7 @@ split_inputs.SIPNET <- function(settings, start.time, stop.time, inputs, overwri
   #@Hamze, I added the Date variable by using year, doy, and hour and filtered the clim based on that and then removed it afterwards.
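# Hedged note on the one-token fix below: V4 appears to hold fractional hours
# (e.g. 10.5 for 10:30, consistent with how download_WCr splits V4 into hour
# and minute), and the "%H:%M" format cannot parse a fractional hour, so those
# rows would come back NA from as.POSIXct() and then be silently dropped by the
# Date filter. ceiling() rounds them up to the next whole hour so every row
# parses, at the cost of stamping half-hour records a half hour late:
#   as.POSIXct("2020-01-01 10.5:00", format = "%Y-%m-%d %H:%M", tz = "UTC")  # NA
#   as.POSIXct("2020-01-01 11:00",   format = "%Y-%m-%d %H:%M", tz = "UTC")  # parses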
dat<-input.dat %>% dplyr::mutate(Date = strptime(paste(V2, V3), format = "%Y %j", tz = "UTC")%>% as.POSIXct()) %>% - dplyr::mutate(Date = as.POSIXct(paste0(Date, V4, ":00"), format = "%Y-%m-%d %H:%M", tz = "UTC")) %>% + dplyr::mutate(Date = as.POSIXct(paste0(Date, ceiling(V4), ":00"), format = "%Y-%m-%d %H:%M", tz = "UTC")) %>% dplyr::filter(Date >= start.time, Date < stop.time) %>% dplyr::select(-Date) From 90c93a53c8f3604311091208d6ae758574fbdc86 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 19 Nov 2020 14:58:37 -0500 Subject: [PATCH 1664/2289] adding workflows and xmls for no data and regular WCR SDA --- .../inst/WillowCreek/NoDataWorkflow.R | 329 +++++++++++ .../inst/WillowCreek/SDA_Workflow.R | 532 ++++++++++++++++++ .../inst/WillowCreek/nodata.xml | 191 +++++++ .../inst/WillowCreek/testing.xml | 194 +++++++ 4 files changed, 1246 insertions(+) create mode 100644 modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R create mode 100644 modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R create mode 100644 modules/assim.sequential/inst/WillowCreek/nodata.xml create mode 100644 modules/assim.sequential/inst/WillowCreek/testing.xml diff --git a/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R b/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R new file mode 100644 index 00000000000..036f3106d64 --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R @@ -0,0 +1,329 @@ +# ---------------------------------------------------------------------- +#------------------------------------------ Load required libraries----- +# ---------------------------------------------------------------------- +library("PEcAn.all") +library("PEcAn.utils") +library("RCurl") +library("REddyProc") +library("tidyverse") +library("furrr") +library("R.utils") +library("dynutils") +plan(multisession) + + +# ---------------------------------------------------------------------------------------------- +#------------------------------------------ That's all we need xml path and the out folder ----- +# ---------------------------------------------------------------------------------------------- + +outputPath <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output/NoData/" +nodata <- TRUE +restart <- FALSE +days.obs <- 1 #how many of observed data to include -- not including today +setwd(outputPath) + +c( + 'Utils.R', + 'download_WCr.R', + "gapfill_WCr.R", + 'prep.data.assim.R' +) %>% walk( ~ source( + system.file("WillowCreek", + .x, + package = "PEcAn.assim.sequential") +)) + + +#------------------------------------------------------------------------------------------------ +#------------------------------------------ Preparing the pecan xml ----------------------------- +#------------------------------------------------------------------------------------------------ +#--------------------------- Finding old sims + + +setwd("/projectnb/dietzelab/kzarada/US_WCr_SDA_output/NoData/") + +#reading xml +settings <- read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/nodata.xml") + +#connecting to DB +con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + + + +all.previous.sims <- list.dirs(outputPath, recursive = F) +if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { + + tryCatch({ + # Looking through all the old simulations and find the most recent + all.previous.sims <- all.previous.sims %>% + map(~ list.files(path = file.path(.x, "SDA"))) %>% + setNames(all.previous.sims) %>% + discard( ~ !"sda.output.Rdata" %in% 
.x) # I'm throwing out the ones that they did not have a SDA output + + last.sim <- + names(all.previous.sims) %>% + map_chr( ~ strsplit(.x, "_")[[1]][5]) %>% + map_dfr(~ db.query( + query = paste("SELECT * FROM workflows WHERE id =", .x), + con = con + ) %>% + mutate(ID=.x)) %>% + mutate(start_date = as.Date(start_date)) %>% + arrange(desc(start_date), desc(ID)) %>% + head(1) + # pulling the date and the path to the last SDA + restart.path <-grep(last.sim$ID, names(all.previous.sims), value = T) + sda.start <- last.sim$start_date + lubridate::days(1) + }, + error = function(e) { + restart.path <- NULL + sda.start <- Sys.Date() - 1 + PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) + }) + + # if there was no older sims + if (is.na(sda.start)) + sda.start <- Sys.Date() - 9 +} +#sda.start <- Sys.Date() +sda.end <- sda.start + lubridate::days(5) +#----------------------------------------------------------------------------------------------- +#------------------------------------------ Download met and flux ------------------------------ +#----------------------------------------------------------------------------------------------- + + +# Finding the right end and start date +met.start <- Sys.Date() +met.end <- met.start + lubridate::days(16) + + +#pad Observed Data to match met data + +date <- + seq( + from = lubridate::with_tz(as.POSIXct(sda.start, format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), + to = lubridate::with_tz(as.POSIXct(sda.end, format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), + by = "1 hour" + ) + +pad.prep <- as.data.frame(cbind(Date = as.character(date), means = rep("NA", length(date)), covs = rep("NA", length(date)))) %>% + dynutils::tibble_as_list() + +names(pad.prep) <-date + + +prep.data = pad.prep + + + +obs.mean <- prep.data %>% + purrr::map('means') %>% + setNames(names(prep.data)) +obs.cov <- prep.data %>% purrr::map('covs') %>% setNames(names(prep.data)) + +if (nodata) { + obs.mean <- obs.mean %>% purrr::map(function(x) + return(NA)) + obs.cov <- obs.cov %>% purrr::map(function(x) + return(NA)) +} + + +#----------------------------------------------------------------------------------------------- +#------------------------------------------ Fixing the settings -------------------------------- +#----------------------------------------------------------------------------------------------- +#unlink existing IC files +sapply(paste0("/projectnb/dietzelabe/pecan.data/dbfiles/IC_site_0-676_", 1:100, ".nc"), unlink) +#Using the found dates to run - this will help to download mets +settings$run$start.date <- as.character(met.start) +settings$run$end.date <- as.character(met.end) +settings$run$site$met.start <- as.character(met.start) +settings$run$site$met.end <- as.character(met.end) +#info +settings$info$date <- paste0(format(Sys.time(), "%Y/%m/%d %H:%M:%S"), " +0000") +# -------------------------------------------------------------------------------------------------- +#---------------------------------------------- PEcAn Workflow ------------------------------------- +# -------------------------------------------------------------------------------------------------- +#Update/fix/check settings. 
Will only run the first time it's called, unless force=TRUE +settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) +setwd(settings$outdir) + + +#Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") +# start from scratch if no continue is passed in +statusFile <- file.path(settings$outdir, "STATUS") +if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile)) { + file.remove(statusFile) +} +# Do conversions +settings <- PEcAn.workflow::do_conversions(settings, T, T, T) + +# Query the trait database for data and priors +if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings(settings, outputfile = 'pecan.TRAIT.xml') + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, 'pecan.TRAIT.xml'))) { + settings <- + PEcAn.settings::read.settings(file.path(settings$outdir, 'pecan.TRAIT.xml')) +} +# Run the PEcAn meta.analysis +if (!is.null(settings$meta.analysis)) { + if (PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } +} +#sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingspace$parameters$method) +# Setting dates in assimilation tags - This will help with preprocess split in SDA code +settings$state.data.assimilation$start.date <-as.character(first(names(obs.mean))) +settings$state.data.assimilation$end.date <-as.character(last(names(obs.mean))) + +#- lubridate::hms("06:00:00") + +# -------------------------------------------------------------------------------------------------- +#--------------------------------- Restart ------------------------------------- +# -------------------------------------------------------------------------------------------------- + +if(restart == TRUE){ + if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) + + #Update the SDA Output to just have last time step + temp<- new.env() + load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) + temp <- as.list(temp) + + #we want ANALYSIS, FORECAST, and enkf.parms to match up with how many days obs data we have + # +24 because it's hourly now and we want the next day as the start + if(length(temp$ANALYSIS) > 1){ + + for(i in 1:days.obs + 1){ + temp$ANALYSIS[[i]] <- temp$ANALYSIS[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$ANALYSIS))){ + temp$ANALYSIS[[i]] <- NULL + } + + for(i in 1:days.obs + 1){ + temp$FORECAST[[i]] <- temp$FORECAST[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$FORECAST))){ + temp$FORECAST[[i]] <- NULL + } + + for(i in 1:days.obs + 1){ + temp$enkf.params[[i]] <- temp$enkf.params[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$enkf.params))){ + temp$enkf.params[[i]] <- NULL + } + + } + temp$t = 1 + + #change inputs path to match sampling met paths + + for(i in 1: length(temp$inputs$ids)){ + + temp$inputs$samples[i] <- settings$run$inputs$met$path[temp$inputs$ids[i]] + + } + + temp1<- new.env() + list2env(temp, envir = temp1) + save(list = c("ANALYSIS", "enkf.params", "ensemble.id", "ensemble.samples", 'inputs', 'new.params', 'new.state', 'run.id', 'site.locs', 't', 'Viz.output', 'X'), + envir = temp1, + file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) + + + + temp.out <- new.env() + load(file.path(restart.path, 
"SDA", 'outconfig.Rdata'), envir = temp.out) + temp.out <- as.list(temp.out) + temp.out$outconfig$samples <- NULL + + temp.out1 <- new.env() + list2env(temp.out, envir = temp.out1) + save(list = c('outconfig'), + envir = temp.out1, + file = file.path(settings$outdir, "SDA", "outconfig.Rdata")) + + + + #copy over run and out folders + + if(!dir.exists("run")) dir.create("run",showWarnings = F) + + files <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.clim") + readfiles <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "README.txt") + + newfiles <- gsub(pattern = restart.path, settings$outdir, files) + readnewfiles <- gsub(pattern = restart.path, settings$outdir, readfiles) + + rundirs <- gsub(pattern = "/sipnet.clim", "", files) + rundirs <- gsub(pattern = restart.path, settings$outdir, rundirs) + for(i in 1 : length(rundirs)){ + dir.create(rundirs[i]) + file.copy(from = files[i], to = newfiles[i]) + file.copy(from = readfiles[i], to = readnewfiles[i])} + file.copy(from = paste0(restart.path, '/run/runs.txt'), to = paste0(settings$outdir,'/run/runs.txt' )) + + if(!dir.exists("out")) dir.create("out",showWarnings = F) + + files <- list.files(path = file.path(restart.path, "out/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.out") + newfiles <- gsub(pattern = restart.path, settings$outdir, files) + outdirs <- gsub(pattern = "/sipnet.out", "", files) + outdirs <- gsub(pattern = restart.path, settings$outdir, outdirs) + for(i in 1 : length(outdirs)){ + dir.create(outdirs[i]) + file.copy(from = files[i], to = newfiles[i])} + +} + +# -------------------------------------------------------------------------------------------------- +#--------------------------------- Run state data assimilation ------------------------------------- +# -------------------------------------------------------------------------------------------------- + + +settings$host$name <- "geo.bu.edu" +settings$host$user <- 'kzarada' +settings$host$folder <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output" +settings$host$job.sh <- "module load udunits/2.2.26 R/3.5.1" +settings$host$qsub <- 'qsub -l h_rt=24:00:00 -V -N @NAME@ -o @STDOUT@ -e @STDERR@' +settings$host$qsub.jobid <- 'Your job ([0-9]+) .*' +settings$host$qstat <- 'qstat -j @JOBID@ || echo DONE' +settings$host$tunnel <- '/tmp/tunnel' +settings$model$binary = "/usr2/postdoc/istfer/SIPNET/1023/sipnet" + + +unlink(c('run','out'), recursive = T) + +#debugonce(PEcAn.assim.sequential::sda.enkf) +if ('state.data.assimilation' %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + PEcAn.assim.sequential::sda.enkf( + settings, + restart=restart, + Q=0, + obs.mean = obs.mean, + obs.cov = obs.cov, + control = list( + trace = TRUE, + interactivePlot =FALSE, + TimeseriesPlot =TRUE, + BiasPlot =FALSE, + debug = FALSE, + pause=FALSE + ) + ) + + PEcAn.utils::status.end() + } +} + + diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R new file mode 100644 index 00000000000..f6b82778364 --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -0,0 +1,532 @@ +# ---------------------------------------------------------------------- +#------------------------------------------ Load required libraries----- +# ---------------------------------------------------------------------- 
+library("PEcAn.all") +library("PEcAn.utils") +library("PEcAn.data.remote") +library("RCurl") +library("REddyProc") +library("tidyverse") +library("furrr") +library("R.utils") +library("dynutils") +library('nimble') +plan(multisession) + + +# ---------------------------------------------------------------------------------------------- +#------------------------------------------Prepared SDA Settings ----- +# ---------------------------------------------------------------------------------------------- + +outputPath <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output/" +nodata <- FALSE #use this to run SDA with no data +restart <- FALSE #flag to start from previous run or not +days.obs <- 3 #how many of observed data *BY HOURS* to include -- not including today +setwd(outputPath) +options(warn=-1) + + +#------------------------------------------------------------------------------------------------ +#------------------------------------------ sourcing the required tools ------------------------- +#------------------------------------------------------------------------------------------------ +c( + 'Utils.R', + 'download_WCr.R', + "gapfill_WCr.R", + 'prep.data.assim.R' +) %>% walk( ~ source( + system.file("WillowCreek", + .x, + package = "PEcAn.assim.sequential") +)) + +#------------------------------------------------------------------------------------------------ +#------------------------------------------ Preparing the pecan xml ----------------------------- +#------------------------------------------------------------------------------------------------ + +#reading xml +settings <- read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/testing.xml") + +#connecting to DB +con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + +#Find last SDA Run to get new start date +all.previous.sims <- list.dirs(outputPath, recursive = F) +if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { + + tryCatch({ + # Looking through all the old simulations and find the most recent + all.previous.sims <- all.previous.sims %>% + map(~ list.files(path = file.path(.x, "SDA"))) %>% + setNames(all.previous.sims) %>% + discard( ~ !"sda.output.Rdata" %in% .x) # I'm throwing out the ones that they did not have a SDA output + + last.sim <- + names(all.previous.sims) %>% + map_chr( ~ strsplit(.x, "_")[[1]][5]) %>% + map_dfr(~ db.query( + query = paste("SELECT * FROM workflows WHERE id =", .x), + con = con + ) %>% + mutate(ID=.x)) %>% + mutate(start_date = as.Date(start_date)) %>% + arrange(desc(start_date), desc(ID)) %>% + head(1) + # pulling the date and the path to the last SDA + restart.path <-grep(last.sim$ID, names(all.previous.sims), value = T) + sda.start <- last.sim$start_date+ lubridate::days(3) + }, + error = function(e) { + restart.path <- NULL + sda.start <- Sys.Date() - 9 + PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) + }) + + # if there was no older sims + if (is.na(sda.start)) + sda.start <- Sys.Date() - 9 +} +#to manually change start date +sda.start <- Sys.Date() +sda.end <- sda.start + lubridate::days(5) + +# Finding the right end and start date +met.start <- sda.start - lubridate::days(2) +met.end <- met.start + lubridate::days(16) + + +#----------------------------------------------------------------------------------------------- +#------------------------------------------ Download met and flux ------------------------------ 
+#----------------------------------------------------------------------------------------------- +#Fluxes +prep.data <- prep.data.assim( + sda.start - lubridate::days(90),# it needs at least 90 days for gap filling + sda.end, + numvals = 100, + vars = c("NEE", "LE"), + data.len = days.obs, + sda.start) + +obs.raw <-prep.data$rawobs +prep.data<-prep.data$obs + +# if there is infinte value then take it out - here we want to remove any that just have one NA in the observed data +prep.data <- prep.data %>% + map(function(day.data){ + #cheking the mean + nan.mean <- which(is.infinite(day.data$means) | is.nan(day.data$means) | is.na(day.data$means)) + if ( length(nan.mean)>0 ) { + + day.data$means <- day.data$means[-nan.mean] + day.data$covs <- day.data$covs[-nan.mean, -nan.mean] %>% + as.matrix() %>% + `colnames <-`(c(colnames(day.data$covs)[-nan.mean])) + } + day.data + }) + + +# Changing LE to Qle which is what SIPNET expects +prep.data <- prep.data %>% + map(function(day.data) { + names(day.data$means)[names(day.data$means) == "LE"] <- "Qle" + dimnames(day.data$covs) <- dimnames(day.data$covs) %>% + map(function(name) { + name[name == "LE"] <- "Qle" + name + }) + + day.data + }) + + +# -------------------------------------------------------------------------------------------------- +#---------------------------------------------- LAI DATA ------------------------------------- +# -------------------------------------------------------------------------------------------------- + +site_info <- list( + site_id = 676, + site_name = "Willow Creek", + lat = 45.805925, + lon = -90.07961, + time_zone = "UTC") + +tryCatch({ + lai <- call_MODIS(outdir = NULL, + var = 'lai', + site_info = site_info, + product_dates = c(paste0(lubridate::year(met.start), strftime(met.start, format = "%j")),paste0(lubridate::year(met.end), strftime(met.end, format = "%j"))), + run_parallel = TRUE, + ncores = NULL, + product = "MOD15A2H", + band = "Lai_500m", + package_method = "MODISTools", + QC_filter = TRUE, + progress = TRUE) + lai <- lai %>% filter(qc == "000")}, + error = function(e) { + lai <- NULL + PEcAn.logger::logger.warn(paste0("MODIS Data not available for these dates",conditionMessage(e))) + } +) +if(!exists('lai')){lai = NULL} + + +tryCatch({ + lai_sd <- call_MODIS(outdir = NULL, + var = 'lai', + site_info = site_info, + product_dates = c(paste0(lubridate::year(met.start), strftime(met.start, format = "%j")),paste0(lubridate::year(met.end), strftime(met.end, format = "%j"))), + run_parallel = TRUE, + ncores = NULL, + product = "MOD15A2H", + band = "LaiStdDev_500m", + package_method = "MODISTools", + QC_filter = TRUE, + progress = TRUE) + lai_sd <- lai_sd %>% filter(qc == "000")}, + error = function(e) { + lai_sd <- NULL + PEcAn.logger::logger.warn(paste0("MODIS Data not available for these dates",conditionMessage(e))) + } +) +if(!exists('lai_sd')){lai_sd = NULL} + +###### Pad Observed Data to forecast ############# + +date <- + seq( + from = lubridate::force_tz(as.POSIXct(last(names(prep.data)), format = "%Y-%m-%d %H:%M:%S"), tz = "UTC") + lubridate::hours(1), + to = lubridate::with_tz(as.POSIXct(first(sda.end) + lubridate::days(1), format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), + by = "1 hour" + ) + +pad.prep <- obs.raw %>% + tidyr::complete(Date = date) %>% + filter(Date %in% date) %>% + mutate(means = NA, covs = NA) %>% + dplyr::select(Date, means, covs) %>% + dynutils::tibble_as_list() + +names(pad.prep) <-date + + +#Add in LAI info + +if(is.null(lai)){index <- rep(FALSE, length(names(prep.data)))}else{ 
+ index <- as.Date(names(prep.data)) %in% as.Date(lai$calendar_date) +} + + +for(i in 1:length(index)){ + + if(index[i]){ + lai.date <- which(as.Date(lai$calendar_date) %in% as.Date(names(prep.data))) + LAI <- c(0,0) + prep.data[[i]]$means <- c(prep.data[[i]]$means, lai$data[lai.date]) + prep.data[[i]]$covs <- rbind(cbind(prep.data[[i]]$covs, c(0, 0)), c(0,0, lai_sd$data)) + + names(prep.data[[i]]$means) <- c("NEE", "Qle", "LAI") + rownames(prep.data[[i]]$covs) <- c("NEE", "Qle", "LAI") + colnames(prep.data[[i]]$covs) <- c("NEE", "Qle", "LAI") + + } +} + +#add forecast pad to the obs data +prep.data = c(prep.data, pad.prep) + +#split into means and covs + +obs.mean <- prep.data %>% + map('means') %>% + setNames(names(prep.data)) +obs.cov <- prep.data %>% map('covs') %>% setNames(names(prep.data)) + + + + +#----------------------------------------------------------------------------------------------- +#------------------------------------------ Fixing the settings -------------------------------- +#----------------------------------------------------------------------------------------------- +#unlink existing IC files +sapply(paste0("/projectnb/dietzelab/pecan.data/dbfiles/BADM_site_0-676/IC_site_0-676_", 1:100, ".nc"), unlink) +#Using the found dates to run - this will help to download mets +settings$run$start.date <- as.character(met.start) +settings$run$end.date <- as.character(met.end) +settings$run$site$met.start <- as.character(met.start) +settings$run$site$met.end <- as.character(met.end) +#info +settings$info$date <- paste0(format(Sys.time(), "%Y/%m/%d %H:%M:%S"), " +0000") + + + + +# -------------------------------------------------------------------------------------------------- +#---------------------------------------------- PEcAn Workflow ------------------------------------- +# -------------------------------------------------------------------------------------------------- +#Update/fix/check settings. 
Will only run the first time it's called, unless force=TRUE +settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) +setwd(settings$outdir) +ggsave( + file.path(settings$outdir, "Obs_plot.pdf"), + ploting_fluxes(obs.raw) , + width = 16, + height = 9 +) + +#Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") +# start from scratch if no continue is passed in +statusFile <- file.path(settings$outdir, "STATUS") +if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile)) { + file.remove(statusFile) +} +# Do conversions + +######### Check for input files and insert paths ############# +# con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) +# +# input_check <- PEcAn.DB::dbfile.input.check( +# siteid=settings$run$site$id %>% as.character(), +# startdate = settings$run$start.date %>% as.Date, +# enddate = settings$run$end.date %>% as.Date, +# parentid = NA, +# mimetype="application/x-netcdf", +# formatname="CF Meteorology", +# con, +# hostname = PEcAn.remote::fqdn(), +# exact.dates = TRUE, +# pattern = "NOAA_GEFS_downscale", +# return.all=TRUE +# ) +# +# clim_check = list() +# for(i in 1:length(input_check$id)){ +# clim_check[[i]] = file.path(PEcAn.DB::dbfile.input.check( +# siteid=settings$run$site$id %>% as.character(), +# startdate = settings$run$start.date %>% as.Date, +# enddate = settings$run$end.date %>% as.Date, +# parentid = input_check$container_id[i], +# mimetype="text/csv", +# formatname="Sipnet.climna", +# con, +# hostname = PEcAn.remote::fqdn(), +# exact.dates = TRUE, +# pattern = "NOAA_GEFS_downscale", +# return.all=TRUE +# )$file_path, PEcAn.DB::dbfile.input.check( +# siteid=settings$run$site$id %>% as.character(), +# startdate = settings$run$start.date %>% as.Date, +# enddate = settings$run$end.date %>% as.Date, +# parentid = input_check$container_id[i], +# mimetype="text/csv", +# formatname="Sipnet.climna", +# con, +# hostname = PEcAn.remote::fqdn(), +# exact.dates = TRUE, +# pattern = "NOAA_GEFS_downscale", +# return.all=TRUE +# )$file_name)} +# +# #If INPUTS already exsits, add id and met path to settings file +# +# if(length(input_check$id) > 0){ +# index_id = list() +# index_path = list() +# for(i in 1:length(input_check$id)){ +# index_id[[i]] = as.character(dbfile.id(type = "Input", +# file = file.path(input_check$file_path, +# input_check$file_name)[i], con = con))#get ids as list +# +# }#end i loop for making lists +# names(index_id) = sprintf("id%s",seq(1:length(input_check$id))) #rename list +# names(clim_check) = sprintf("path%s",seq(1:length(input_check$id))) +# +# settings$run$inputs$met$id = index_id +# settings$run$inputs$met$path = clim_check +# } + +settings <- PEcAn.workflow::do_conversions(settings) #end if loop for existing inputs + +# if(is_empty(settings$run$inputs$met$path) & length(clim_check)>0){ +# settings$run$inputs$met$id = index_id +# settings$run$inputs$met$path = clim_check +# } + + +# PEcAn.DB::db.close(con) +# Query the trait database for data and priors +if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings(settings, outputfile = 'pecan.TRAIT.xml') + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, 'pecan.TRAIT.xml'))) { + settings <- + PEcAn.settings::read.settings(file.path(settings$outdir, 'pecan.TRAIT.xml')) +} +# Run the PEcAn meta.analysis +if (!is.null(settings$meta.analysis)) { + if 
(PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } +} +#sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingspace$parameters$method) +# Setting dates in assimilation tags - This will help with preprocess split in SDA code +settings$state.data.assimilation$start.date <-as.character(first(names(obs.mean))) +settings$state.data.assimilation$end.date <-as.character(last(names(obs.mean))) + +if (nodata) { + obs.mean <- obs.mean %>% map(function(x) + return(NA)) + obs.cov <- obs.cov %>% map(function(x) + return(NA)) +} + +# -------------------------------------------------------------------------------------------------- +#--------------------------------- Restart ------------------------------------- +# -------------------------------------------------------------------------------------------------- + +if(restart == TRUE){ + if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) + + #Update the SDA Output to just have last time step + temp<- new.env() + load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) + temp <- as.list(temp) + + #we want ANALYSIS, FORECAST, and enkf.parms to match up with how many days obs data we have + # +24 because it's hourly now and we want the next day as the start + if(length(temp$ANALYSIS) > 1){ + + for(i in 1:days.obs + 1){ + temp$ANALYSIS[[i]] <- temp$ANALYSIS[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$ANALYSIS))){ + temp$ANALYSIS[[i]] <- NULL + } + + + for(i in 1:days.obs + 1){ + temp$FORECAST[[i]] <- temp$FORECAST[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$FORECAST))){ + temp$FORECAST[[i]] <- NULL + } + + for(i in 1:days.obs + 1){ + temp$enkf.params[[i]] <- temp$enkf.params[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$enkf.params))){ + temp$enkf.params[[i]] <- NULL + } + + } + temp$t = 1 + + #change inputs path to match sampling met paths + + for(i in 1: length(temp$inputs$ids)){ + + temp$inputs$samples[i] <- settings$run$inputs$met$path[temp$inputs$ids[i]] + + } + + temp1<- new.env() + list2env(temp, envir = temp1) + save(list = c("ANALYSIS", 'FORECAST', "enkf.params", "ensemble.id", "ensemble.samples", 'inputs', 'new.params', 'new.state', 'run.id', 'site.locs', 't', 'Viz.output', 'X'), + envir = temp1, + file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) + + + + temp.out <- new.env() + load(file.path(restart.path, "SDA", 'outconfig.Rdata'), envir = temp.out) + temp.out <- as.list(temp.out) + temp.out$outconfig$samples <- NULL + + temp.out1 <- new.env() + list2env(temp.out, envir = temp.out1) + save(list = c('outconfig'), + envir = temp.out1, + file = file.path(settings$outdir, "SDA", "outconfig.Rdata")) + + + + #copy over run and out folders + + if(!dir.exists("run")) dir.create("run",showWarnings = F) + + files <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.clim") + readfiles <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "README.txt") + + newfiles <- gsub(pattern = restart.path, settings$outdir, files) + readnewfiles <- gsub(pattern = restart.path, settings$outdir, readfiles) + + rundirs <- gsub(pattern = "/sipnet.clim", "", files) + rundirs <- gsub(pattern = restart.path, settings$outdir, rundirs) + for(i in 1 : length(rundirs)){ + 
dir.create(rundirs[i]) + file.copy(from = files[i], to = newfiles[i]) + file.copy(from = readfiles[i], to = readnewfiles[i])} + file.copy(from = paste0(restart.path, '/run/runs.txt'), to = paste0(settings$outdir,'/run/runs.txt' )) + + if(!dir.exists("out")) dir.create("out",showWarnings = F) + + files <- list.files(path = file.path(restart.path, "out/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.out") + newfiles <- gsub(pattern = restart.path, settings$outdir, files) + outdirs <- gsub(pattern = "/sipnet.out", "", files) + outdirs <- gsub(pattern = restart.path, settings$outdir, outdirs) + for(i in 1 : length(outdirs)){ + dir.create(outdirs[i]) + file.copy(from = files[i], to = newfiles[i])} + +} +# -------------------------------------------------------------------------------------------------- +#--------------------------------- Run state data assimilation ------------------------------------- +# -------------------------------------------------------------------------------------------------- + +settings$host$name <- "geo.bu.edu" +settings$host$user <- 'kzarada' +settings$host$folder <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output" +settings$host$job.sh <- "module load udunits/2.2.26 R/3.5.1" +settings$host$qsub <- 'qsub -l h_rt=24:00:00 -V -N @NAME@ -o @STDOUT@ -e @STDERR@' +settings$host$qsub.jobid <- 'Your job ([0-9]+) .*' +settings$host$qstat <- 'qstat -j @JOBID@ || echo DONE' +settings$host$tunnel <- '/tmp/tunnel' +settings$model$binary = "/usr2/postdoc/istfer/SIPNET/1023/sipnet" + + +if(restart == FALSE) unlink(c('run','out','SDA'), recursive = T) + +if ('state.data.assimilation' %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + PEcAn.assim.sequential::sda.enkf( + settings, + restart=restart, + Q=0, + obs.mean = obs.mean, + obs.cov = obs.cov, + control = list( + trace = TRUE, + interactivePlot =FALSE, + TimeseriesPlot =TRUE, + BiasPlot =FALSE, + debug = FALSE, + pause=FALSE + ) + ) + + PEcAn.utils::status.end() + } +} + + + + + diff --git a/modules/assim.sequential/inst/WillowCreek/nodata.xml b/modules/assim.sequential/inst/WillowCreek/nodata.xml new file mode 100644 index 00000000000..2623ae9571a --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/nodata.xml @@ -0,0 +1,191 @@ + + + + FALSE + TRUE + TRUE + + 1000000040 + 1000013298 + + + + 1000013298 + direct + 0 + 9999 + + AbvGrndWood + + + + 1000013298 + direct + -9999 + 9999 + + NEE + + + + 1000013298 + direct + 0 + 9999 + + Qle + + + + 1000013298 + direct + 0 + 9999 + + LAI + + + + 1000013298 + direct + 0 + 9999 + + TotSoilCarb + + + + 1000013298 + direct + 0 + 1 + + SoilMoistFrac + + + + 1000013298 + direct + 0 + 9999 + + litter_carbon_content + + + + + + AbvGrndWood + 0 + 9999 + + + NEE + umol C m-2 s-1 + -9999 + 9999 + + + Qle + 0 + 9999 + + + LAI + 0 + 9999 + + + TotSoilCarb + 0 + 9999 + + + SoilMoistFrac + 0 + 1 + + + litter_carbon_content + 0 + 9999 + + + day + 2017-01-01 + 2018-11-05 + 1 + + + LE fix + -1 + + 2019/01/04 10:19:35 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + TRUE + + /projectnb/dietzelab/pecan.data/dbfiles/ + + + + temperate.deciduous.ALL + + 1 + + 1000012409 + + + + 3000 + FALSE + + + 100 + NEE + 2020 + 2020 + + + uniform + + + sampling + + + sampling + + + + + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + 1000000030 + + + + 676 + 2018-08-26 + 2018-09-23 + + + + BADM + + + NOAA_GEFS + SIPNET + + + 2018-12-05 + 2018-12-20 + + + localhost + + \ No newline at end of file diff --git 
a/modules/assim.sequential/inst/WillowCreek/testing.xml b/modules/assim.sequential/inst/WillowCreek/testing.xml new file mode 100644 index 00000000000..330e1fd7fe6 --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/testing.xml @@ -0,0 +1,194 @@ + + + + TRUE + TRUE + TRUE + + 1000000040 + 1000013298 + + + + 1000013298 + direct + 0 + 9999 + + AbvGrndWood + + + + 1000013298 + direct + -9999 + 9999 + + NEE + + + + 1000013298 + direct + 0 + 9999 + + Qle + + + + 1000013298 + direct + 0 + 9999 + + LAI + + + + 1000013298 + direct + 0 + 9999 + + TotSoilCarb + + + + 1000013298 + direct + 0 + 1 + + SoilMoistFrac + + + + 1000013298 + direct + 0 + 9999 + + litter_carbon_content + + + + + + AbvGrndWood + 0 + 9999 + + + NEE + umol C m-2 s-1 + -9999 + 9999 + 1000 + + + Qle + mW m-2 + 0 + 9999 + 100 + + + LAI + 0 + 9999 + + + TotSoilCarb + 0 + 9999 + + + SoilMoistFrac + 0 + 1 + + + litter_carbon_content + 0 + 9999 + + + year + 2017-01-01 + 2018-11-05 + 10 + + + LE fix + -1 + + 2019/01/04 10:19:35 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + TRUE + + /projectnb/dietzelab/pecan.data/dbfiles/ + + + + temperate.deciduous.ALL + + 1 + + 1000012409 + + + + 3000 + FALSE + + + 10 + NEE + 2018 + 2018 + + + uniform + + + sampling + + + sampling + + + + + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + 1000000030 + + + + 676 + 2018-08-26 + 2018-09-23 + + + + BADM + + + NOAA_GEFS + SIPNET + + + 2018-12-05 + 2018-12-20 + + + localhost + + From 69fa309ca4f54d80dcb20385cec5f9c397a748db Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 19 Nov 2020 15:02:25 -0500 Subject: [PATCH 1665/2289] updating WCr download functions --- .../inst/WillowCreek/download_WCr.R | 21 ++++++--- .../inst/WillowCreek/download_soilmoist_WCr.R | 45 +++++++++++++++++++ 2 files changed, 60 insertions(+), 6 deletions(-) create mode 100644 modules/assim.sequential/inst/WillowCreek/download_soilmoist_WCr.R diff --git a/modules/assim.sequential/inst/WillowCreek/download_WCr.R b/modules/assim.sequential/inst/WillowCreek/download_WCr.R index daae2ce711d..9a541b63f8a 100644 --- a/modules/assim.sequential/inst/WillowCreek/download_WCr.R +++ b/modules/assim.sequential/inst/WillowCreek/download_WCr.R @@ -28,8 +28,14 @@ download_US_WCr_met <- function(start_date, end_date) { mutate_all(funs(as.numeric)) #Constructing the date based on the columns we have - raw.data$date <-as.POSIXct(paste0(raw.data$V1,"/",raw.data$V2,"/",raw.data$V3," ", raw.data$V4 %>% as.integer(), ":",(raw.data$V4-as.integer(raw.data$V4))*60), - format="%Y/%m/%d %H:%M", tz="UTC") + #Converting the WCR data from CST to UTC + raw.data$date <-lubridate::with_tz(as.POSIXct(paste0(raw.data$V1,"/",raw.data$V2,"/",raw.data$V3," ", raw.data$V4 %>% as.integer(), ":",(raw.data$V4-as.integer(raw.data$V4))*60), + format="%Y/%m/%d %H:%M", tz="US/Central"), tz = "UTC") + + + + start_date <- as.POSIXct(start_date, format = "%Y-%m-%d", tz = "UTC") + end_date <- as.POSIXct(end_date, format = "%Y-%m-%d", tz = "UTC") # Some cleaning and filtering raw.data <- raw.data %>% dplyr::select(V1,V2,V3,V4,V5, V6, V26, V35, V40, V59, date) %>% @@ -37,7 +43,7 @@ download_US_WCr_met <- function(start_date, end_date) { #Colnames changed colnames(raw.data) <- c("Year", "Month", "Day", "Hour", "DoY", "FjDay", "Tair", "rH", "Tsoil", "Rg", "date") - + return(raw.data) } @@ -73,13 +79,16 @@ download_US_WCr_flux <- function(start_date, end_date) { #Constructing the date based on the columns we have raw.data$date <-as.POSIXct(paste0(raw.data$V1,"/",raw.data$V2,"/",raw.data$V3," ", raw.data$V4 
%>% as.integer(), ":",(raw.data$V4-as.integer(raw.data$V4))*60), format="%Y/%m/%d %H:%M", tz="UTC") + + start_date <- as.POSIXct(start_date, format = "%Y-%m-%d", tz = "UTC") + end_date <- as.POSIXct(end_date, format = "%Y-%m-%d", tz = "UTC") + # Some cleaning and filtering raw.data <- raw.data %>% # select(-V5, -V6) %>% - filter(date >= start_date & date <=end_date) - + filter(date >= start_date & date <=end_date) #Colnames changed colnames(raw.data) <- c("Year", "Month", "Day", "Hour", "DoY", "FjDay", "SC", "FC", "NEE", "LE", "H", "Ustar", "Flag", "date") return(raw.data) -} \ No newline at end of file +} diff --git a/modules/assim.sequential/inst/WillowCreek/download_soilmoist_WCr.R b/modules/assim.sequential/inst/WillowCreek/download_soilmoist_WCr.R new file mode 100644 index 00000000000..1d4063160cb --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/download_soilmoist_WCr.R @@ -0,0 +1,45 @@ +download_soilmoist_WCr <- function(start_date, end_date) { + base_url <- "http://co2.aos.wisc.edu/data/cheas/wcreek/flux/prelim/clean/ameriflux/US-WCr_HH_" + start_year <- lubridate::year(start_date) + end_year <- lubridate::year(end_date) + + # Reading in the data + raw.data <- start_year:end_year %>% + purrr::map_df(function(syear) { + influx <- + tryCatch( + read.table( + paste0(base_url, syear, "01010000_", syear+1, "01010000.csv"), + sep = ",", + header = TRUE, stringsAsFactors = F + ) %>% + apply(2, trimws) %>% + apply(2, as.character) %>% + data.frame(stringsAsFactors = F), + error = function(e) { + NULL + }, + warning = function(e) { + NULL + } + ) + }) %>% + mutate_all(funs(as.numeric)) + + #Constructing the date based on the columns we have + if(dim(raw.data)[1] > 0 & dim(raw.data)[2] > 0){ + raw.data$Time <-as.POSIXct(as.character(raw.data$TIMESTAMP_START), + format="%Y%m%d%H%M", tz="UTC") + # Some cleaning and filtering + raw.data <- raw.data %>% + dplyr::select(SWC_1_1_1, SWC_1_2_1, SWC_1_3_1, SWC_1_4_1, SWC_1_5_1, Time) %>% + na_if(-9999) %>% + filter(Time >= start_date & Time <=end_date) + + #get average soil moisture + + raw.data$avgsoil <- raw.data$SWC_1_2_1*0.12 + raw.data$SWC_1_3_1*0.16 + raw.data$SWC_1_4_1*0.32 + raw.data$SWC_1_5_1*0.4 + raw.data <- raw.data %>% dplyr::select(Time, avgsoil) + }else(raw.data <- NULL) + return(raw.data) +} \ No newline at end of file From 10b0778537a819f2000f38e45a8d1ed03a657d09 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 19 Nov 2020 15:03:32 -0500 Subject: [PATCH 1666/2289] updating issues to prep.data that came from package updates --- .../inst/WillowCreek/gapfill_WCr.R | 110 ++++++++-------- .../inst/WillowCreek/prep.data.assim.R | 124 +++++++++--------- 2 files changed, 113 insertions(+), 121 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R b/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R index 4c8ca90df3d..185cf878526 100644 --- a/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R +++ b/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R @@ -16,61 +16,55 @@ gapfill_WCr <- function(start_date, end_date, FUN.flux=download_US_WCr_flux){ -start_date <- as.Date(start_date) -end_date <- as.Date(end_date) - -#download WCr flux and met date -flux <- FUN.flux(start_date, end_date) -met <- FUN.met(start_date, end_date) -# converting NEE and LE -#change -999 to NA's -flux[flux == -999] <- NA -flux$NEE<-PEcAn.utils::misc.convert(flux$NEE, "umol C m-2 s-1", "kg C m-2 s-1") -flux$LE<-flux$LE*1e-6 -#join met and flux data by date (which includes time and day) -met <- met %>% 
dplyr::select(date, Tair, Rg, Tsoil) -flux <- left_join(flux, met, by = "date") %>% - dplyr::select(-FjDay, -SC, -FC) -#print(str(flux)) - - -#Start REddyProc gapfilling -suppressWarnings({ - - EddyDataWithPosix.F <- - fConvertTimeToPosix(flux, - 'YDH', - Year.s = 'Year' - , - Day.s = 'DoY', - Hour.s = 'Hour') %>% - dplyr::select(-date,-Month,-Day) -}) - - -EddyProc.C <- sEddyProc$new('WCr', EddyDataWithPosix.F, - c(var,'Rg','Tair', 'Ustar')) - -tryCatch( - { - EddyProc.C$sMDSGapFill(var) - }, - error = function(e) { - PEcAn.logger::logger.warn(e) - } -) - -#Merging the output -FilledEddyData.F <- EddyProc.C$sExportResults() -CombinedData.F <- cbind(flux, FilledEddyData.F) - -return(CombinedData.F) - -} - - - - - - - + start_date <- as.Date(start_date) + end_date <- as.Date(end_date) + + #download WCr flux and met date + flux <- FUN.flux(start_date, end_date) + met <- FUN.met(start_date, end_date) + # converting NEE and LE + #change -999 to NA's + flux[flux == -999] <- NA #want both NEE and LE to be larger numbers + #flux$NEE<-PEcAn.utils::misc.convert(flux$NEE, "umol C m-2 s-1", "kg C m-2 s-1") + #flux$LE<-flux$LE*1e-6 + #join met and flux data by date (which includes time and day) + met <- met %>% dplyr::select(date, Tair, Rg, Tsoil) + flux <- left_join(flux, met, by = "date") %>% + dplyr::select(-FjDay, -SC, -FC) %>% + distinct(date, .keep_all = TRUE) + #print(str(flux)) + + + #Start REddyProc gapfilling + suppressWarnings({ + + EddyDataWithPosix.F <- + fConvertTimeToPosix(flux, + 'YDH', + Year.s = 'Year' + , + Day.s = 'DoY', + Hour.s = 'Hour') %>% + dplyr::select(-date,-Month,-Day) %>% + distinct(DateTime, .keep_all = TRUE) + }) + + + EddyProc.C <- sEddyProc$new('WCr', EddyDataWithPosix.F, + c(var,'Rg','Tair', 'Ustar')) + + tryCatch( + { + EddyProc.C$sMDSGapFill(var) + }, + error = function(e) { + PEcAn.logger::logger.warn(e) + } + ) + + #Merging the output + FilledEddyData.F <- EddyProc.C$sExportResults() + CombinedData.F <- cbind(flux, FilledEddyData.F) + + return(CombinedData.F) + diff --git a/modules/assim.sequential/inst/WillowCreek/prep.data.assim.R b/modules/assim.sequential/inst/WillowCreek/prep.data.assim.R index 06f470a6946..6791d97112a 100644 --- a/modules/assim.sequential/inst/WillowCreek/prep.data.assim.R +++ b/modules/assim.sequential/inst/WillowCreek/prep.data.assim.R @@ -10,35 +10,33 @@ ##'@return None ##'@export ##'@author Luke Dramko and K. 
Zarada and Hamze Dokoohaki -prep.data.assim <- function(start_date, end_date, numvals, vars, data.len = 48) { - - data.len = data.len *2 #turn hour time steps into half hour +prep.data.assim <- function(start_date, end_date, numvals, vars, data.len = 3, sda.start) { Date.vec <-NULL - - gapfilled.vars <- vars %>% - purrr::map_dfc(function(var) { - - field_data <- gapfill_WCr(start_date, end_date, var) - PEcAn.logger::logger.info(paste(var, " is done")) - #I'm sending the date out to use it later on - return(field_data) + gapfilled.vars <- vars %>% + purrr::map_dfc(function(var) { + + field_data <- gapfill_WCr(start_date, end_date, var) + + PEcAn.logger::logger.info(paste(var, " is done")) + #I'm sending the date out to use it later on + return(field_data) }) - - + + #gapfilled.vars$NEE_f = PEcAn.utils::misc.convert(gapfilled.vars$NEE_f, "kg C m-2 s-1", "umol C m-2 s-1") + #Reading the columns we need cols <- grep(paste0("_*_f$"), colnames(gapfilled.vars), value = TRUE) - gapfilled.vars <- gapfilled.vars %>% dplyr::select(Date=date, Flag,cols) - - #Creating NEE and LE filled output + gapfilled.vars <- gapfilled.vars %>% dplyr::select(Date=date...11, Flag = Flag...10,cols) + + #Creating NEE and LE filled output gapfilled.vars.out <- gapfilled.vars %>% dplyr::select(-Flag) %>% - tail(data.len) - - #Pecan Flux Uncertainty + filter(Date >= (sda.start - lubridate::days(data.len)) & Date < sda.start) + + #Pecan Flux Uncertainty processed.flux <- 3:(3+length(vars)-1) %>% purrr::map(function(col.num) { - field_data <- gapfilled.vars[,c(1,2,col.num)] uncertainty_vals <- list() @@ -53,7 +51,7 @@ prep.data.assim <- function(start_date, end_date, numvals, vars, data.len = 48) # Create proxy row for rbinding random_mat = NULL new_col = rep(0, dim(field_data)[1]) - + # Create a new column # i: the particular variable being worked with; j: the column number; k: the row number for (j in 1:numvals) { @@ -70,53 +68,53 @@ prep.data.assim <- function(start_date, end_date, numvals, vars, data.len = 48) random_multiplier <- sample(c(-1, 1), length(res), replace = TRUE) simulated <- obs + (random_multiplier * res) - - random_mat = cbind(random_mat, simulated) - } # end j - obs.mean <- c(obs.mean, mean(field_data[, 3], na.rm = TRUE)) - # this keeps the mean of each day for the whole time series and all variables - sums = c(sums, list(random_mat)) - - data.frame(Date=field_data$Date,sums) + random_mat = cbind(random_mat, simulated) + } # end j + + obs.mean <- c(obs.mean, mean(field_data[, 3], na.rm = TRUE)) + # this keeps the mean of each day for the whole time series and all variables + sums = c(sums, list(random_mat)) + + data.frame(Date=field_data$Date[!is.na(field_data[, 3])],sums) }) # end of map - - #I'm sending mixing up simulations of vars to aggregate them first and then estimate their var/cov + + + #I'm sending mixing up simulations of vars to aggregate them first and then estimate their var/cov outlist<-processed.flux %>% - map2_dfc(vars, function(x, xnames) { - names(x)[2:numvals] <- paste0(names(x)[2:numvals], xnames) - - x %>% - tail(data.len) %>% - mutate(Interval = lubridate::round_date(Date, "6 hour")) %>% - dplyr::select(-Date) - }) %>% - split(.$Interval) %>% - map(function(row) { - - #fidning the interval cols / taking them out - colsDates <- grep(paste0("Interval"), colnames(row), value = FALSE) - Date1 <- row[, colsDates[1]] - row <- row[, -c(colsDates)] - # finding the order of columns in dataframe - var.order <- split(1:ncol(row), - ceiling(seq_along(1:ncol(row))/(ncol(row)/length(vars)))) - 
- #combine all the numbers for this time interval - alldata <- var.order %>% - map_dfc(~row[,.x] %>% unlist %>% as.numeric) %>% - setNames(vars) - # mean and the cov between all the state variables is estimated here - return(list( - Date = Date1 %>% unique(), - covs = cov(alldata), - means = apply(alldata, 2, mean) - )) - }) + map2_dfc(vars, function(x, xnames) { + names(x)[2:numvals] <- paste0(names(x)[2:numvals], xnames) + + x %>% + filter(Date >= (sda.start - lubridate::hours(data.len)) & Date < sda.start) %>% + mutate(Interval = lubridate::round_date(Date, "1 hour")) %>% + dplyr::select(-Date) + }) %>% + split(.$Interval...202) %>% + map(function(row) { + + #fidning the interval cols / taking them out + colsDates <- grep(paste0("Interval"), colnames(row), value = FALSE) + Date1 <- row[, colsDates[1]] + row <- row[, -c(colsDates)] + # finding the order of columns in dataframe + var.order <- split(1:ncol(row), + ceiling(seq_along(1:ncol(row))/(ncol(row)/length(vars)))) + + #combine all the numbers for this time interval + alldata <- var.order %>% + map_dfc(~row[,.x] %>% unlist %>% as.numeric) %>% + setNames(vars) + # mean and the cov between all the state variables is estimated here + return(list( + Date = Date1 %>% unique(), + covs = cov(alldata), + means = apply(alldata, 2, mean) + )) + }) outlist <- list(obs=outlist, rawobs=gapfilled.vars.out ) return(outlist) - + } # prep.data.assim - From 76dc46dfd22805983d0c66562f6d818891416dc3 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 19 Nov 2020 15:57:41 -0500 Subject: [PATCH 1667/2289] fixing bracket --- modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R b/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R index 185cf878526..6d1ecca1358 100644 --- a/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R +++ b/modules/assim.sequential/inst/WillowCreek/gapfill_WCr.R @@ -67,4 +67,4 @@ gapfill_WCr <- function(start_date, end_date, CombinedData.F <- cbind(flux, FilledEddyData.F) return(CombinedData.F) - +} From 14a3cc03c893b6e12c544971038a292a605d2e07 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 19 Nov 2020 16:11:04 -0500 Subject: [PATCH 1668/2289] updates to GEFS files --- .../data.atmosphere/R/GEFS_helper_functions.R | 906 ++++++++++++++++++ .../data.atmosphere/R/download.NOAA_GEFS.R | 298 +----- .../R/download.NOAA_GEFS_downscale.R | 378 -------- 3 files changed, 954 insertions(+), 628 deletions(-) create mode 100644 modules/data.atmosphere/R/GEFS_helper_functions.R delete mode 100644 modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R new file mode 100644 index 00000000000..336ef9235c7 --- /dev/null +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -0,0 +1,906 @@ +#' Download gridded forecast in the box bounded by the latitude and longitude list +#' +#' @param lat_list +#' @param lon_list +#' @param forecast_time +#' @param forecast_date +#' @param model_name_raw +#' @param num_cores +#' @param output_directory +#' +#' @return +#' @export +#' +#' @examples +noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date ,model_name_raw, output_directory, end_hr) { + + + download_neon_grid <- function(ens_index, location, directory, hours_char, cycle, base_filename1, vars,working_directory){ + #for(j in 1:31){ + if(ens_index == 1){ + base_filename2 <- 
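+      # GEFS file naming on NOMADS: the control member is "gec00" and the 30
+      # perturbed members are "gep01".."gep30"; this branch builds the control name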
paste0("gec00",".t",cycle,"z.pgrb2a.0p50.f") + curr_hours <- hours_char[hours <= 384] + }else{ + if((ens_index-1) < 10){ + ens_name <- paste0("0",ens_index - 1) + }else{ + ens_name <- as.character(ens_index -1) + } + base_filename2 <- paste0("gep",ens_name,".t",cycle,"z.pgrb2a.0p50.f") + curr_hours <- hours_char + } + + + for(i in 1:length(curr_hours)){ + file_name <- paste0(base_filename2, curr_hours[i]) + + destfile <- paste0(working_directory,"/", file_name,".neon.grib") + + if(file.exists(destfile)){ + + fsz <- file.info(destfile)$size + gribf <- file(destfile, "rb") + fsz4 <- fsz-4 + seek(gribf,where = fsz4,origin = "start") + last4 <- readBin(gribf,"raw",4) + if(as.integer(last4[1])==55 & as.integer(last4[2])==55 & as.integer(last4[3])==55 & as.integer(last4[4])==55) { + download_file <- FALSE + } else { + download_file <- TRUE + } + close(gribf) + + }else{ + download_file <- TRUE + } + + if(download_file){ + + out <- tryCatch(utils::download.file(paste0(base_filename1, file_name, vars, location, directory), + destfile = destfile, quiet = TRUE), + error = function(e){ + warning(paste(e$message, "skipping", file_name), + call. = FALSE) + return(NA) + }, + finally = NULL) + + if(is.na(out)) next + } + } + } + + model_dir <- file.path(output_directory, model_name_raw) + + curr_time <- lubridate::with_tz(Sys.time(), tzone = "UTC") + curr_date <- lubridate::as_date(curr_time) + + noaa_page <- readLines('https://nomads.ncep.noaa.gov/pub/data/nccf/com/gens/prod/') + + potential_dates <- NULL + for(i in 1:length(noaa_page)){ + if(stringr::str_detect(noaa_page[i], ">gefs.")){ + end <- stringr::str_locate(noaa_page[i], ">gefs.")[2] + dates <- stringr::str_sub(noaa_page[i], start = end+1, end = end+8) + potential_dates <- c(potential_dates, dates) + } + } + + + last_cycle_page <- readLines(paste0('https://nomads.ncep.noaa.gov/pub/data/nccf/com/gens/prod/gefs.', dplyr::last(potential_dates))) + + potential_cycle <- NULL + for(i in 1:length(last_cycle_page)){ + if(stringr::str_detect(last_cycle_page[i], 'href=\"')){ + end <- stringr::str_locate(last_cycle_page[i], 'href=\"')[2] + cycles <- stringr::str_sub(last_cycle_page[i], start = end+1, end = end+2) + if(cycles %in% c("00","06", "12", "18")){ + potential_cycle <- c(potential_cycle, cycles) + } + } + } + + potential_dates <- lubridate::as_date(potential_dates) + + potential_dates = potential_dates[which(potential_dates == forecast_date)] + + if(length(potential_dates) == 0){PEcAn.logger::logger.error("Forecast Date not available")} + + + location <- paste0("&subregion=&leftlon=", + floor(min(lon_list)), + "&rightlon=", + ceiling(max(lon_list)), + "&toplat=", + ceiling(max(lat_list)), + "&bottomlat=", + floor(min(lat_list))) + + base_filename1 <- "https://nomads.ncep.noaa.gov/cgi-bin/filter_gefs_atmos_0p50a.pl?file=" + vars <- "&lev_10_m_above_ground=on&lev_2_m_above_ground=on&lev_surface=on&lev_entire_atmosphere=on&var_APCP=on&var_DLWRF=on&var_DSWRF=on&var_PRES=on&var_RH=on&var_TMP=on&var_UGRD=on&var_VGRD=on&var_TCDC=on" + + for(i in 1:length(potential_dates)){ + + forecast_date <- lubridate::as_date(potential_dates[i]) + forecast_hours = as.numeric(forecast_time) + + + for(j in 1:length(forecast_hours)){ + cycle <- forecast_hours[j] + + if(cycle < 10) cycle <- paste0("0",cycle) + + model_date_hour_dir <- file.path(model_dir,forecast_date,cycle) + if(!dir.exists(model_date_hour_dir)){ + dir.create(model_date_hour_dir, recursive=TRUE, showWarnings = FALSE) + } + + new_download <- TRUE + + if(new_download){ + + print(paste("Downloading", 
forecast_date, cycle)) + + if(cycle == "00"){ + hours <- c(seq(0, 240, 3),seq(246, min(end_hr, 384), 6)) + }else{ + hours <- c(seq(0, 240, 3),seq(246, min(end_hr, 840) , 6)) + } + hours_char <- hours + hours_char[which(hours < 100)] <- paste0("0",hours[which(hours < 100)]) + hours_char[which(hours < 10)] <- paste0("0",hours_char[which(hours < 10)]) + curr_year <- lubridate::year(forecast_date) + curr_month <- lubridate::month(forecast_date) + if(curr_month < 10) curr_month <- paste0("0",curr_month) + curr_day <- lubridate::day(forecast_date) + if(curr_day < 10) curr_day <- paste0("0",curr_day) + curr_date <- paste0(curr_year,curr_month,curr_day) + directory <- paste0("&dir=%2Fgefs.",curr_date,"%2F",cycle,"%2Fatmos%2Fpgrb2ap5") + + ens_index <- 1:31 + + parallel::mclapply(X = ens_index, + FUN = download_neon_grid, + location, + directory, + hours_char, + cycle, + base_filename1, + vars, + working_directory = model_date_hour_dir, + mc.cores = 1) + }else{ + print(paste("Existing", forecast_date, cycle)) + } + } + } +} +#' Extract and temporally downscale points from downloaded grid files +#' +#' @param lat_list +#' @param lon_list +#' @param site_id +#' @param downscale +#' @param overwrite +#' @param model_name +#' @param model_name_ds +#' @param model_name_raw +#' @param output_directory +#' +#' @return +#' @export +#' +#' @examples +#' +process_gridded_noaa_download <- function(lat_list, + lon_list, + site_id, + downscale, + overwrite, + forecast_date, + forecast_time, + model_name, + model_name_ds, + model_name_raw, + output_directory){ + + extract_sites <- function(ens_index, hours_char, hours, cycle, site_id, lat_list, lon_list, working_directory){ + + site_length <- length(site_id) + tmp2m <- array(NA, dim = c(site_length, length(hours_char))) + rh2m <- array(NA, dim = c(site_length, length(hours_char))) + ugrd10m <- array(NA, dim = c(site_length,length(hours_char))) + vgrd10m <- array(NA, dim = c(site_length, length(hours_char))) + pressfc <- array(NA, dim = c(site_length, length(hours_char))) + apcpsfc <- array(NA, dim = c(site_length, length(hours_char))) + tcdcclm <- array(NA, dim = c(site_length, length(hours_char))) + dlwrfsfc <- array(NA, dim = c(site_length, length(hours_char))) + dswrfsfc <- array(NA, dim = c(site_length, length(hours_char))) + + if(ens_index == 1){ + base_filename2 <- paste0("gec00",".t",cycle,"z.pgrb2a.0p50.f") + }else{ + if(ens_index-1 < 10){ + ens_name <- paste0("0",ens_index-1) + }else{ + ens_name <- as.character(ens_index-1) + } + base_filename2 <- paste0("gep",ens_name,".t",cycle,"z.pgrb2a.0p50.f") + } + + lats <- round(lat_list/.5)*.5 + lons <- round(lon_list/.5)*.5 + + if(lons < 0){ + lons <- 360 + lons + } + curr_hours <- hours_char + + for(hr in 1:length(curr_hours)){ + file_name <- paste0(base_filename2, curr_hours[hr]) + + if(file.exists(paste0(working_directory,"/", file_name,".neon.grib"))){ + grib <- rgdal::readGDAL(paste0(working_directory,"/", file_name,".neon.grib"), silent = TRUE) + lat_lon <- sp::coordinates(grib) + for(s in 1:length(site_id)){ + + index <- which(lat_lon[,2] == lats[s] & lat_lon[,1] == lons[s]) + + pressfc[s, hr] <- grib$band1[index] + tmp2m[s, hr] <- grib$band2[index] + rh2m[s, hr] <- grib$band3[index] + ugrd10m[s, hr] <- grib$band4[index] + vgrd10m[s, hr] <- grib$band5[index] + + if(curr_hours[hr] != "000"){ + apcpsfc[s, hr] <- grib$band6[index] + tcdcclm[s, hr] <- grib$band7[index] + dswrfsfc[s, hr] <- grib$band8[index] + dlwrfsfc[s, hr] <- grib$band9[index] + } + } + } + } + + return(list(tmp2m = tmp2m, + pressfc = 
pressfc, + rh2m = rh2m, + dlwrfsfc = dlwrfsfc, + dswrfsfc = dswrfsfc, + ugrd10m = ugrd10m, + vgrd10m = vgrd10m, + apcpsfc = apcpsfc, + tcdcclm = tcdcclm)) + } + + noaa_var_names <- c("tmp2m", "pressfc", "rh2m", "dlwrfsfc", + "dswrfsfc", "apcpsfc", + "ugrd10m", "vgrd10m", "tcdcclm") + + + model_dir <- file.path(output_directory) + model_name_raw_dir <- file.path(output_directory, model_name_raw) + + curr_time <- lubridate::with_tz(Sys.time(), tzone = "UTC") + curr_date <- lubridate::as_date(curr_time) + potential_dates <- seq(curr_date - lubridate::days(6), curr_date, by = "1 day") + + #Remove dates before the new GEFS system + potential_dates <- potential_dates[which(potential_dates > lubridate::as_date("2020-09-23"))] + + + + + cycle <-forecast_time + curr_forecast_time <- forecast_date + lubridate::hours(cycle) + if(cycle < 10) cycle <- paste0("0",cycle) + if(cycle == "00"){ + hours <- c(seq(0, 240, 3),seq(246, 840 , 6)) + }else{ + hours <- c(seq(0, 240, 3),seq(246, 384 , 6)) + } + hours_char <- hours + hours_char[which(hours < 100)] <- paste0("0",hours[which(hours < 100)]) + hours_char[which(hours < 10)] <- paste0("0",hours_char[which(hours < 10)]) + + raw_files <- list.files(file.path(model_name_raw_dir,forecast_date,cycle)) + hours_present <- as.numeric(stringr::str_sub(raw_files, start = 25, end = 27)) + + all_downloaded <- TRUE + # if(cycle == "00"){ + # #Sometime the 16-35 day forecast is not competed for some of the forecasts. If over 24 hrs has passed then they won't show up. + # #Go ahead and create the netcdf files + # if(length(which(hours_present == 840)) == 30 | (length(which(hours_present == 384)) == 30 & curr_forecast_time + lubridate::hours(24) < curr_time)){ + # all_downloaded <- TRUE + # } + # }else{ + # if(length(which(hours_present == 384)) == 31 | (length(which(hours_present == 384)) == 31 & curr_forecast_time + lubridate::hours(24) < curr_time)){ + # all_downloaded <- TRUE + # } + # } + + + + + + if(all_downloaded){ + + ens_index <- 1:31 + #Run download_downscale_site() over the site_index + output <- parallel::mclapply(X = ens_index, + FUN = extract_sites, + hours_char = hours_char, + hours = hours, + cycle, + site_id, + lat_list, + lon_list, + working_directory = file.path(model_name_raw_dir,forecast_date,cycle), + mc.cores = 1) + + + forecast_times <- lubridate::as_datetime(forecast_date) + lubridate::hours(as.numeric(cycle)) + lubridate::hours(as.numeric(hours_char)) + + + + #Convert negetive longitudes to degrees east + if(lon_list < 0){ + lon_east <- 360 + lon_list + }else{ + lon_east <- lon_list + } + + model_site_date_hour_dir <- file.path(model_dir, site_id, forecast_date,cycle) + + if(!dir.exists(model_site_date_hour_dir)){ + dir.create(model_site_date_hour_dir, recursive=TRUE, showWarnings = FALSE) + }else{ + unlink(list.files(model_site_date_hour_dir, full.names = TRUE)) + } + + if(downscale){ + modelds_site_date_hour_dir <- file.path(output_directory,model_name_ds,site_id, forecast_date,cycle) + if(!dir.exists(modelds_site_date_hour_dir)){ + dir.create(modelds_site_date_hour_dir, recursive=TRUE, showWarnings = FALSE) + }else{ + unlink(list.files(modelds_site_date_hour_dir, full.names = TRUE)) + } + } + + noaa_data <- list() + + for(v in 1:length(noaa_var_names)){ + + value <- NULL + ensembles <- NULL + forecast.date <- NULL + + noaa_data[v] <- NULL + + for(ens in 1:31){ + curr_ens <- output[[ens]] + value <- c(value, curr_ens[[noaa_var_names[v]]][1, ]) + ensembles <- c(ensembles, rep(ens, length(curr_ens[[noaa_var_names[v]]][1, ]))) + forecast.date <- 
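+        # append this member's timestamps so that `value`, `ensembles`, and
+        # `forecast.date` stay aligned, one entry per member-timestep: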
c(forecast.date, forecast_times)
+      }
+      noaa_data[[v]] <- list(value = value,
+                             ensembles = ensembles,
+                             forecast.date = lubridate::as_datetime(forecast.date))
+      
+    }
+    
+    #These are the cf standard names
+    cf_var_names <- c("air_temperature", "air_pressure", "relative_humidity", "surface_downwelling_longwave_flux_in_air",
+                      "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "eastward_wind", "northward_wind", "cloud_area_fraction")
+    
+    #Replace "eastward_wind" and "northward_wind" with "wind_speed"
+    cf_var_names1 <- c("air_temperature", "air_pressure", "relative_humidity", "surface_downwelling_longwave_flux_in_air",
+                       "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "specific_humidity", "cloud_area_fraction", "wind_speed")
+    
+    cf_var_units1 <- c("K", "Pa", "1", "Wm-2", "Wm-2", "kgm-2s-1", "1", "1", "ms-1")  #Negative numbers indicate negative exponents
+    
+    names(noaa_data) <- cf_var_names
+    
+    specific_humidity <- rep(NA, length(noaa_data$relative_humidity$value))
+    
+    noaa_data$relative_humidity$value <- noaa_data$relative_humidity$value / 100
+    
+    noaa_data$air_temperature$value <- noaa_data$air_temperature$value + 273.15
+    
+    specific_humidity[which(!is.na(noaa_data$relative_humidity$value))] <- PEcAn.data.atmosphere::rh2qair(rh = noaa_data$relative_humidity$value[which(!is.na(noaa_data$relative_humidity$value))],
+                                                                                                          T = noaa_data$air_temperature$value[which(!is.na(noaa_data$relative_humidity$value))],
+                                                                                                          press = noaa_data$air_pressure$value[which(!is.na(noaa_data$relative_humidity$value))])
+    
+    
+    #Calculate wind speed from east and north components
+    wind_speed <- sqrt(noaa_data$eastward_wind$value^2 + noaa_data$northward_wind$value^2)
+    
+    forecast_noaa <- tibble::tibble(time = noaa_data$air_temperature$forecast.date,
+                                    NOAA.member = noaa_data$air_temperature$ensembles,
+                                    air_temperature = noaa_data$air_temperature$value,
+                                    air_pressure = noaa_data$air_pressure$value,
+                                    relative_humidity = noaa_data$relative_humidity$value,
+                                    surface_downwelling_longwave_flux_in_air = noaa_data$surface_downwelling_longwave_flux_in_air$value,
+                                    surface_downwelling_shortwave_flux_in_air = noaa_data$surface_downwelling_shortwave_flux_in_air$value,
+                                    precipitation_flux = noaa_data$precipitation_flux$value,
+                                    specific_humidity = specific_humidity,
+                                    cloud_area_fraction = noaa_data$cloud_area_fraction$value,
+                                    wind_speed = wind_speed)
+    
+    forecast_noaa$cloud_area_fraction <- forecast_noaa$cloud_area_fraction / 100  #Convert from % to proportion
+    
+    # Convert the 3 hr precip rate to per second.
+    forecast_noaa$precipitation_flux <- forecast_noaa$precipitation_flux / (60 * 60 * 3)
+    
+    
+    
+    # Create a data frame with information about the file. This data frame's format is an internal PEcAn standard, and is stored in the BETY database to
+    # locate the data file. The data file is stored on the local machine where the download occurred. Because NOAA GEFS is an
+    # ensemble of 31 different forecast members, each member gets its own data frame. All of the information is the same for
+    # each file except for the file name.
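+    
+    # For reference, a single `results` row built below looks roughly like this
+    # (illustrative values only; paths, site id, and dates depend on the run):
+    #   file:       <output_directory>/NOAA_GEFS_downscale_<site>_<ens>_<start>_<end>.nc
+    #   host:       the value of PEcAn.remote::fqdn() on this machine
+    #   mimetype:   application/x-netcdf
+    #   formatname: CF Meteorology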
+  
+  results_list = list()
+  
+  
+  for (ens in 1:31) { # ens is the ensemble number
+    
+    #Turn the ensemble number into a string
+    if(ens-1 < 10){
+      ens_name <- paste0("0",ens-1)
+    }else{
+      ens_name <- ens - 1
+    }
+    
+    forecast_noaa_ens <- forecast_noaa %>%
+      dplyr::filter(NOAA.member == ens) %>%
+      dplyr::filter(!is.na(air_temperature))
+    
+    end_date <- forecast_noaa_ens %>%
+      dplyr::summarise(max_time = max(time))
+    
+    results = data.frame(
+      file = "", #Path to the file (added in loop below).
+      host = PEcAn.remote::fqdn(), #Name of the server where the file is stored
+      mimetype = "application/x-netcdf", #Format the data is saved in
+      formatname = "CF Meteorology", #Type of data
+      startdate = paste0(format(forecast_date, "%Y-%m-%dT%H:%M:00")), #starting date and time, down to the second
+      enddate = paste0(format(end_date$max_time, "%Y-%m-%dT%H:%M:00")), #ending date and time, down to the second
+      dbfile.name = "NOAA_GEFS_downscale", #Source of data (ensemble number will be added later)
+      stringsAsFactors = FALSE
+    )
+    
+    identifier = paste("NOAA_GEFS", site_id, ens_name, format(forecast_date, "%Y-%m-%dT%H:%M"),
+                       format(end_date$max_time, "%Y-%m-%dT%H:%M"), sep="_")
+    
+    fname <- paste0(identifier, ".nc")
+    ensemble_folder = file.path(output_directory, identifier)
+    output_file <- file.path(ensemble_folder,fname)
+    
+    if (!dir.exists(ensemble_folder)) {
+      dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)}
+    
+    
+    #Write netCDF
+    noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ens, ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE)
+    
+    if(downscale){
+      #Downscale the forecast from 6hr to 1hr
+      
+      
+      identifier_ds = paste("NOAA_GEFS_downscale", site_id, ens_name, format(forecast_date, "%Y-%m-%dT%H:%M"),
+                            format(end_date$max_time, "%Y-%m-%dT%H:%M"), sep="_")
+      
+      fname_ds <- paste0(identifier_ds, ".nc")
+      ensemble_folder_ds = file.path(output_directory, identifier_ds)
+      output_file_ds <- file.path(ensemble_folder_ds,fname_ds)
+      
+      if (!dir.exists(ensemble_folder_ds)) {
+        dir.create(ensemble_folder_ds, recursive=TRUE, showWarnings = FALSE)}
+      
+      results$file = output_file_ds
+      results$dbfile.name = fname_ds
+      results_list[[ens]] <- results
+      
+      #Run downscaling
+      noaaGEFSpoint::temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1)
+    }
+    
+    
+  }
+  }
+  return(results_list)
+} #process_gridded_noaa_download
+
+#' @title Downscale NOAA GEFS from 6hr to 1hr
+#' @return None
+#'
+#' @param input_file, full path to 6hr file
+#' @param output_file, full path to 1hr file that will be generated
+#' @param overwrite, logical stating to overwrite any existing output_file
+#' @param hr time step in hours of temporal downscaling (default = 1)
+#' @export
+#'
+#' @author Quinn Thomas
+#'
+#'
+
+temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1){
+  
+  # open netcdf
+  nc <- ncdf4::nc_open(input_file)
+  
+  if(stringr::str_detect(input_file, "ens")){
+    ens_position <- stringr::str_locate(input_file, "ens")
+    ens_name <- stringr::str_sub(input_file, start = ens_position[1], end = ens_position[2] + 2)
+    ens <- as.numeric(stringr::str_sub(input_file, start = ens_position[2] + 1, end = ens_position[2] + 2))
+  }else{
+    ens <- 0
+    ens_name <- "ens00"
+  }
+  
+  # retrieve variable names
+  cf_var_names <- names(nc$var)
+  
+  # generate time vector
+  time <- ncdf4::ncvar_get(nc, "time")
+  beginning_time <- lubridate::ymd_hm(ncdf4::ncatt_get(nc, "time",
+                                                       attname = "units")$value)
+  time <-
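+  # the "time" variable holds integer offsets in hours since the file's origin;
+  # adding them to the parsed origin gives absolute POSIXct timestamps: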
beginning_time + lubridate::hours(time)
+  
+  # retrieve lat and lon
+  lat.in <- ncdf4::ncvar_get(nc, "latitude")
+  lon.in <- ncdf4::ncvar_get(nc, "longitude")
+  
+  # generate data frame from netcdf variables and retrieve units
+  noaa_data <- tibble::tibble(time = time)
+  var_units <- rep(NA, length(cf_var_names))
+  for(i in 1:length(cf_var_names)){
+    curr_data <- ncdf4::ncvar_get(nc, cf_var_names[i])
+    noaa_data <- cbind(noaa_data, curr_data)
+    var_units[i] <- ncdf4::ncatt_get(nc, cf_var_names[i], attname = "units")$value
+  }
+  
+  ncdf4::nc_close(nc)
+  
+  names(noaa_data) <- c("time",cf_var_names)
+  
+  # spline-based downscaling: requires all four state variables to be present
+  if(length(which(c("air_temperature", "wind_speed", "specific_humidity", "air_pressure") %in% cf_var_names)) == 4){
+    forecast_noaa_ds <- downscale_spline_to_hrly(df = noaa_data, VarNames = c("air_temperature", "wind_speed", "specific_humidity", "air_pressure"))
+  }else{
+    #Add error message
+  }
+  
+  # Convert splined SH, temperature, and pressure to RH
+  forecast_noaa_ds <- forecast_noaa_ds %>%
+    dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity,
+                                              temp = forecast_noaa_ds$air_temperature,
+                                              press = forecast_noaa_ds$air_pressure)) %>%
+    dplyr::mutate(relative_humidity = ifelse(relative_humidity > 1, 0, relative_humidity))
+  
+  # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period)
+  if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){
+    LW.flux.hrly <- downscale_repeat_6hr_to_hrly(df = noaa_data, varName = "surface_downwelling_longwave_flux_in_air")
+    forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, LW.flux.hrly, by = "time")
+  }else{
+    #Add error message
+  }
+  
+  # convert precipitation to hourly (just copy 6 hourly values over past 6-hour time period)
+  if("precipitation_flux" %in% cf_var_names){
+    Precip.flux.hrly <- downscale_repeat_6hr_to_hrly(df = noaa_data, varName = "precipitation_flux")
+    forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, Precip.flux.hrly, by = "time")
+  }else{
+    #Add error message
+  }
+  
+  # convert cloud_area_fraction to hourly (just copy 6 hourly values over past 6-hour time period)
+  if("cloud_area_fraction" %in% cf_var_names){
+    cloud_area_fraction.flux.hrly <- downscale_repeat_6hr_to_hrly(df = noaa_data, varName = "cloud_area_fraction")
+    forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, cloud_area_fraction.flux.hrly, by = "time")
+  }else{
+    #Add error message
+  }
+  
+  # use solar geometry to convert shortwave from 6 hr to 1 hr
+  if("surface_downwelling_shortwave_flux_in_air" %in% cf_var_names){
+    ShortWave.hrly <- downscale_ShortWave_to_hrly(df = noaa_data, lat = lat.in, lon = lon.in)
+    forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, ShortWave.hrly, by = "time")
+  }else{
+    #Add error message
+  }
+  
+  #Add dummy ensemble number to work with write_noaa_gefs_netcdf()
+  forecast_noaa_ds$NOAA.member <- ens
+  
+  #Make sure var names are in correct order
+  forecast_noaa_ds <- forecast_noaa_ds %>%
+    dplyr::select("time", tidyselect::all_of(cf_var_names), "NOAA.member")
+  
+  #Write netCDF
+  noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ds,
+                                        ens = ens,
+                                        lat = lat.in,
+                                        lon = lon.in,
+                                        cf_units = var_units,
+                                        output_file = output_file,
+                                        overwrite = overwrite)
+  
+} #temporal_downscale
+
+#' @title Downscale spline to hourly
+#' @return A dataframe of downscaled state variables
+#' @param df, dataframe of data to be downscaled
+#' @noRd
+#' @author Laura Puckett
+#'
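+#' @examples
+#' \dontrun{
+#' # illustrative sketch only: `noaa_6hr` stands for a hypothetical data frame
+#' # holding a POSIXct `time` column plus these variables at 6-hourly resolution
+#' hourly <- downscale_spline_to_hrly(df = noaa_6hr,
+#'                                    VarNames = c("air_temperature", "wind_speed",
+#'                                                 "specific_humidity", "air_pressure"))
+#' }
+#'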
+downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ + # -------------------------------------- + # purpose: interpolates debiased forecasts from 6-hourly to hourly + # Creator: Laura Puckett, December 16 2018 + # -------------------------------------- + # @param: df, a dataframe of debiased 6-hourly forecasts + + t0 = min(df$time) + df <- df %>% + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) + + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) + + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC")) + + for(Var in 1:length(VarNames)){ + curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y + noaa_data_interp <- cbind(noaa_data_interp, curr_data) + } + + names(noaa_data_interp) <- c("time",VarNames) + + return(noaa_data_interp) +} + +#' @title Downscale shortwave to hourly +#' @return A dataframe of downscaled state variables +#' +#' @param df, data frame of variables +#' @param lat, lat of site +#' @param lon, long of site +#' @return ShortWave.ds +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ + ## downscale shortwave to hourly + + t0 <- min(df$time) + df <- df %>% + dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>% + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1)) + + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) + + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + data.hrly$group_6hr <- NA + + group <- 0 + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr + group <- group + 1 + data.hrly$group_6hr[i] <- group + }else{ + data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr + data.hrly$group_6hr[i] <- group + } + } + + ShortWave.ds <- data.hrly %>% + dplyr::mutate(hour = lubridate::hour(time)) %>% + dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>% + dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry + dplyr::group_by(group_6hr) %>% + dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry + dplyr::ungroup() %>% + dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% + dplyr::select(time,surface_downwelling_shortwave_flux_in_air) + + return(ShortWave.ds) + +} + +#' Cosine of solar zenith angle +#' +#' For explanations of formulae, see http://www.itacanet.org/the-sun-as-a-source-of-energy/part-3-calculating-solar-angles/ +#' +#' @author Alexey Shiklomanov +#' @param doy Day of year +#' @param lat Latitude +#' @param lon Longitude +#' @param dt Timestep +#' @noRd +#' @param hr Hours timestep +#' @return `numeric(1)` of cosine of solar zenith angle +#' @export +cos_solar_zenith_angle <- function(doy, lat, lon, dt, hr) { + et <- equation_of_time(doy) + merid <- floor(lon / 15) * 15 + merid[merid < 0] <- merid[merid < 0] + 15 + lc <- (lon - merid) * -4/60 ## longitude correction + tz <- merid / 360 * 24 ## time zone + midbin <- 0.5 * 
dt / 86400 * 24 ## shift calc to middle of bin + t0 <- 12 + lc - et - tz - midbin ## solar time + h <- pi/12 * (hr - t0) ## solar hour + dec <- -23.45 * pi / 180 * cos(2 * pi * (doy + 10) / 365) ## declination + cosz <- sin(lat * pi / 180) * sin(dec) + cos(lat * pi / 180) * cos(dec) * cos(h) + cosz[cosz < 0] <- 0 + return(cosz) +} + +#' Equation of time: Eccentricity and obliquity +#' +#' For description of calculations, see https://en.wikipedia.org/wiki/Equation_of_time#Calculating_the_equation_of_time +#' +#' @author Alexey Shiklomanov +#' @param doy Day of year +#' @noRd +#' @return `numeric(1)` length of the solar day, in hours. + +equation_of_time <- function(doy) { + stopifnot(doy <= 366) + f <- pi / 180 * (279.5 + 0.9856 * doy) + et <- (-104.7 * sin(f) + 596.2 * sin(2 * f) + 4.3 * + sin(4 * f) - 429.3 * cos(f) - 2 * + cos(2 * f) + 19.3 * cos(3 * f)) / 3600 # equation of time -> eccentricity and obliquity + return(et) +} + +#' @title Downscale repeat to hourly +#' @return A dataframe of downscaled data +#' @param df, dataframe of data to be downscaled (Longwave) +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ + + #Get first time point + t0 <- min(df$time) + + df <- df %>% + dplyr::select("time", all_of(varName)) %>% + #Calculate time difference + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + #Shift valued back because the 6hr value represents the average over the + #previous 6hr period + dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) + + #Create new vector with all hours + interp.df.days <- seq(min(df$days_since_t0), + as.numeric(max(df$days_since_t0)), + 1 / (24 / hr)) + + #Create new data frame + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + #Join 1 hr data frame with 6 hr data frame + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + #Fill in hours + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + }else{ + data.hrly$lead_var[i] <- curr + } + } + + #Clean up data frame + data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% + dplyr::arrange(time) + + names(data.hrly) <- c("time", varName) + + return(data.hrly) +} + +#' @title Calculate potential shortwave radiation +#' @return vector of potential shortwave radiation for each doy +#' +#' @param doy, day of year in decimal +#' @param lon, longitude +#' @param lat, latitude +#' @return `numeric(1)` +#' @author Quinn Thomas +#' @noRd +#' +#' +downscale_solar_geom <- function(doy, lon, lat) { + + dt <- median(diff(doy)) * 86400 # average number of seconds in time interval + hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy + + ## calculate potential radiation + cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) + rpot <- 1366 * cosz + return(rpot) +} + +##' @title Write NOAA GEFS netCDF +##' @param df data frame of meterological variables to be written to netcdf. 
Columns +##' must start with time with the following columns in the order of `cf_units` +##' @param ens ensemble index used for subsetting df +##' @param lat latitude in degree north +##' @param lon longitude in degree east +##' @param cf_units vector of variable names in order they appear in df +##' @param output_file name, with full path, of the netcdf file that is generated +##' @param overwrite logical to overwrite existing netcdf file +##' @return NA +##' +##' @export +##' +##' @author Quinn Thomas +##' +##' + +write_noaa_gefs_netcdf <- function(df, ens = NA, lat, lon, cf_units, output_file, overwrite){ + + if(!is.na(ens)){ + data <- df + max_index <- max(which(!is.na(data$air_temperature))) + start_time <- min(data$time) + end_time <- data$time[max_index] + + data <- data %>% dplyr::select(-c("time", "NOAA.member")) + }else{ + data <- df + max_index <- max(which(!is.na(data$air_temperature))) + start_time <- min(data$time) + end_time <- data$time[max_index] + + data <- df %>% + dplyr::select(-c("time")) + } + + diff_time <- as.numeric(difftime(df$time, df$time[1])) / (60 * 60) + + cf_var_names <- names(data) + + time_dim <- ncdf4::ncdim_def(name="time", + units = paste("hours since", format(start_time, "%Y-%m-%d %H:%M")), + diff_time, #GEFS forecast starts 6 hours from start time + create_dimvar = TRUE) + lat_dim <- ncdf4::ncdim_def("latitude", "degree_north", lat, create_dimvar = TRUE) + lon_dim <- ncdf4::ncdim_def("longitude", "degree_east", lon, create_dimvar = TRUE) + + dimensions_list <- list(time_dim, lat_dim, lon_dim) + + nc_var_list <- list() + for (i in 1:length(cf_var_names)) { #Each ensemble member will have data on each variable stored in their respective file. + nc_var_list[[i]] <- ncdf4::ncvar_def(cf_var_names[i], cf_units[i], dimensions_list, missval=NaN) + } + + if (!file.exists(output_file) | overwrite) { + nc_flptr <- ncdf4::nc_create(output_file, nc_var_list, verbose = FALSE) + + #For each variable associated with that ensemble + for (j in 1:ncol(data)) { + # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble + ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], unlist(data[,j])) + } + + ncdf4::nc_close(nc_flptr) #Write to the disk/storage + } +} \ No newline at end of file diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index 1cbcb5c70a3..e4e229d5733 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -31,6 +31,7 @@ ##' @param lon site longitude in decimal degrees ##' @param site_id The unique ID given to each site. This is used as part of the file name. ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? +##' @param downscale logical, assumed True. Indicated whether data should be downscaled to hourly ##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info. ##' @param ... Other arguments, currently ignored ##' @export @@ -43,254 +44,51 @@ ##' site_id = 676) ##' } ##' -##' @author Luke Dramko +##' @author Quinn Thomas, modified by K Zarada ##' -download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, site_id, start_date = Sys.time(), end_date = (as.POSIXct(start_date, tz="UTC") + lubridate::days(16)), - overwrite = FALSE, verbose = FALSE, ...) 
{ - - start_date <- as.POSIXct(start_date, tz = "UTC") - end_date <- as.POSIXct(end_date, tz = "UTC") - - #It takes about 2 hours for NOAA GEFS weather data to be posted. Therefore, if a request is made within that 2 hour window, - #we instead want to adjust the start time to the previous forecast, which is the most recent one avaliable. (For example, if - #there's a request at 7:00 a.m., the data isn't up yet, so the function grabs the data at midnight instead.) - if (abs(as.numeric(Sys.time() - start_date, units="hours")) <= 2) { - start_date = start_date - lubridate::hours(2) - end_date = end_date - lubridate::hours(2) - } - - #Date/time error checking - Checks to see if the start date is before the end date - if (start_date > end_date) { - PEcAn.logger::logger.severe("Invalid dates: end date occurs before start date") - } else if (as.numeric(end_date - start_date, units="hours") < 6) { #Done separately to produce a more helpful error message. - PEcAn.logger::logger.severe("Times not far enough appart for a forecast to fall between them. Forecasts occur every six hours; make sure start - and end dates are at least 6 hours appart.") - } - - #Set the end forecast date (default is the full 16 days) - if (end_date > start_date + lubridate::days(16)) { - end_date = start_date + lubridate::days(16) - } - - #Round the starting date/time down to the previous block of 6 hours. Adjust the time frame to match. - forecast_hour = (lubridate::hour(start_date) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - increments = as.integer(as.numeric(end_date - start_date, units = "hours") / 6) #Calculating the number of forecasts between start and end dates. - increments = increments + ((lubridate::hour(end_date) - lubridate::hour(start_date)) %/% 6) #These calculations are required to use the rnoaa package. - - end_hour = sprintf("%04d", ((forecast_hour + (increments * 6)) %% 24) * 100) #Calculating the starting hour as a string, which is required type to access the - #data via the rnoaa package - forecast_hour = sprintf("%04d", forecast_hour * 100) #Having the end date as a string is useful later, too. - - #Recreate the adjusted start and end dates. - start_date = as.POSIXct(paste0(lubridate::year(start_date), "-", lubridate::month(start_date), "-", lubridate::day(start_date), " ", - substring(forecast_hour, 1,2), ":00:00"), tz="UTC") - end_date = start_date + lubridate::hours(increments * 6) - - #Bounds date checking - #NOAA's GEFS database maintains a rolling 12 days of forecast data for access through this function. - #We do want Sys.Date() here - NOAA makes data unavaliable days at a time, not forecasts at a time. - NOAA_GEFS_Start_Date = as.POSIXct(Sys.Date(), tz="UTC") - lubridate::days(11) #Subtracting 11 days is correct, not 12. - - #Check to see if start_date is valid. This must be done after date adjustment. - if (as.POSIXct(Sys.time(), tz="UTC") < start_date || start_date < NOAA_GEFS_Start_Date) { - PEcAn.logger::logger.severe(sprintf('Start date (%s) exceeds the NOAA GEFS range (%s to %s).', - start_date, - NOAA_GEFS_Start_Date, Sys.Date())) - } - - if (lubridate::hour(start_date) > 23) { - PEcAn.logger::logger.severe(sprintf("Start time %s is not a valid time", lubridate::hour(start_date))) - } - - if (lubridate::hour(end_date) > 23) { #Done separately from the previous if statement in order to have more specific error messages. 
- PEcAn.logger::logger.severe(sprintf("End time %s is not a valid time", lubridate::hour(end_date))) - } - #End date/time error checking - - ################################################# - #NOAA variable downloading - #Uses the rnoaa package to download data - - #We want data for each of the following variables. Here, we're just getting the raw data; later, we will convert it to the - #cf standard format when relevant. - noaa_var_names = c("Temperature_height_above_ground_ens", "Pressure_surface_ens", "Relative_humidity_height_above_ground_ens", "Downward_Long-Wave_Radp_Flux_surface_6_Hour_Average_ens", - "Downward_Short-Wave_Radiation_Flux_surface_6_Hour_Average_ens", "Total_precipitation_surface_6_Hour_Accumulation_ens", - "u-component_of_wind_height_above_ground_ens", "v-component_of_wind_height_above_ground_ens") - - #These are the cf standard names - cf_var_names = c("air_temperature", "air_pressure", "specific_humidity", "surface_downwelling_longwave_flux_in_air", - "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "eastward_wind", "northward_wind") - cf_var_units = c("K", "Pa", "1", "Wm-2", "Wm-2", "kgm-2s-1", "ms-1", "ms-1") #Negative numbers indicate negative exponents - - # This debugging loop allows you to check if the cf variables are correctly mapped to the equivalent - # NOAA variable names. This is very important, as much of the processing below will be erroneous if - # these fail to match up. - # for (i in 1:length(cf_var_names)) { - # print(sprintf("cf / noaa : %s / %s", cf_var_names[[i]], noaa_var_names[[i]])) - #} - - noaa_data = list() - - #Downloading the data here. It is stored in a matrix, where columns represent time in intervals of 6 hours, and rows represent - #each ensemble member. Each variable gets its own matrix, which is stored in the list noaa_data. - for (i in 1:length(noaa_var_names)) { - noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments + 1), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data - } - - - ### ERROR CHECK FOR FIRST TIME POINT ### - - #Check if first time point is present, if not, grab from previous forecast - - index <- which(lengths(noaa_data) == increments * 21) #finding which ones are missing first point - new_start = start_date - lubridate::hours(6) #grab previous forecast - new_forecast_hour = (lubridate::hour(new_start) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - new_forecast_hour = sprintf("%04d", new_forecast_hour * 100) #Having the end date as a string is useful later, too. - - filled_noaa_data = list() - - for (i in index) { - filled_noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1, forecast_time = new_forecast_hour, date=format(new_start, "%Y%m%d"))$data - } - - #add filled data into first slot of forecast - for(i in index){ - noaa_data[[i]] = cbind(filled_noaa_data[[i]], noaa_data[[i]])} - - - #Fills in data with NaNs if there happens to be missing columns. - for (i in 1:length(noaa_var_names)) { - if (!is.null(ncol(noaa_data[[i]]))) { # Is a matrix - nans <- rep(NaN, nrow(noaa_data[[i]])) - while (ncol(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- cbind(noaa_data[[i]], nans) - } - } else { # Is a vector - while (length(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- c(noaa_data[[i]], NaN); - } - } - } - - ################################################### - # Not all NOAA data units match the cf data standard. 
In this next section, data are processed to - # confirm with the standard when necessary. - # The following is a list of variables which need to be processed: - # 1. NOAA's relative humidity must be converted to specific humidity - # 2. NOAA's measure of precipitation is the accumulation over 6 hours; cf's standard is precipitation per second - - #Convert NOAA's relative humidity to specific humidity - humid_index = which(cf_var_names == "specific_humidity") - - #Temperature, pressure, and relative humidity are required to calculate specific humidity. - humid_data = noaa_data[[humid_index]] - temperature_data = noaa_data[[which(cf_var_names == "air_temperature")]] - pressure_data = noaa_data[[which(cf_var_names == "air_pressure")]] - - #Depending on the volume and dimensions of data you download, sometimes R stores it as a vector and sometimes - #as a matrix; the different cases must be processed with different loops. - #(The specific corner case in which a vector would be generated is if only one hour is requested; for example, - #only the data at time_idx 1, for example). - if (as.logical(nrow(humid_data))) { - for (i in 1:length(humid_data)) { - humid_data[i] = PEcAn.data.atmosphere::rh2qair(humid_data[i], temperature_data[i], pressure_data[i]) - } - } else { - for (i in 1:nrow(humid_data)) { - for (j in 1:ncol(humid_data)) { - humid_data[i,j] = PEcAn.data.atmosphere::rh2qair(humid_data[i,j], temperature_data[i,j], pressure_data[i,j]) - } - } - } - - #Update the noaa_data list with the correct data - noaa_data[[humid_index]] <- humid_data - - # Convert NOAA's total precipitation (kg m-2) to precipitation flux (kg m-2 s-1) - #NOAA precipitation data is an accumulation over 6 hours. - precip_index = which(cf_var_names == "precipitation_flux") - - #The statement udunits2::ud.convert(1, "kg m-2 6 hr-1", "kg m-2 s-1") is equivalent to udunits2::ud.convert(1, "kg m-2 hr-1", "kg m-2 s-1") * 6, - #which is a little unintuitive. What will do the conversion we want is what's below: - noaa_data[[precip_index]] = udunits2::ud.convert(noaa_data[[precip_index]], "kg m-2 hr-1", "kg m-2 6 s-1") #There are 21600 seconds in 6 hours - - ############################################# - # Done with data processing. Now writing the data to the specified directory. Each ensemble member is written to its own file, for a total - # of 21 files. - if (!dir.exists(outfolder)) { - dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) - } - - # Create a data frame with information about the file. This data frame's format is an internal PEcAn standard, and is stored in the BETY database to - # locate the data file. The data file is stored on the local machine where the download occured. Because NOAA GEFS is an - # ensemble of 21 different forecast models, each model gets its own data frame. All of the information is the same for - # each file except for the file name. - results = data.frame( - file = "", #Path to the file (added in loop below). - host = PEcAn.remote::fqdn(), #Name of the server where the file is stored - mimetype = "application/x-netcdf", #Format the data is saved in - formatname = "CF Meteorology", #Type of data - startdate = paste0(format(start_date, "%Y-%m-%dT%H:%M:00")), #starting date and time, down to the second - enddate = paste0(format(end_date, "%Y-%m-%dT%H:%M:00")), #ending date and time, down to the second - dbfile.name = "NOAA_GEFS", #Source of data (ensemble number will be added later) - stringsAsFactors = FALSE - ) - - results_list = list() - - #Each ensemble gets its own file. 
- #These dimensions will be used for all 21 ncdf4 file members, so they're all declared once here. - #The data is really one-dimensional for each file (though we include lattitude and longitude dimensions - #to comply with the PEcAn standard). - time_dim = ncdf4::ncdim_def(name="time", - paste(units="hours since", format(start_date, "%Y-%m-%dT%H:%M")), - seq(0, 6 * increments, by = 6), - create_dimvar = TRUE) - lat_dim = ncdf4::ncdim_def("latitude", "degree_north", lat.in, create_dimvar = TRUE) - lon_dim = ncdf4::ncdim_def("longitude", "degree_east", lon.in, create_dimvar = TRUE) - - dimensions_list = list(time_dim, lat_dim, lon_dim) - - nc_var_list = list() - for (i in 1:length(cf_var_names)) { #Each ensemble member will have data on each variable stored in their respective file. - nc_var_list[[i]] = ncdf4::ncvar_def(cf_var_names[i], cf_var_units[i], dimensions_list, missval=NaN) - } - - #For each ensemble - for (i in 1:21) { # i is the ensemble number - #Generating a unique identifier string that characterizes a particular data set. - identifier = paste("NOAA_GEFS", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), - format(end_date, "%Y-%m-%dT%H:%M"), sep="_") - - ensemble_folder = file.path(outfolder, identifier) - #Each file will go in its own folder. - if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} - - flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) - - #Each ensemble member gets its own unique data frame, which is stored in results_list - #Object references in R work differently than in other languages. When adding an item to a list, R creates a copy of it - #for you instead of just inserting the object reference, so this works. - results$file <- flname - results$dbfile.name <- flname - results_list[[i]] <- results - - if (!file.exists(flname) | overwrite) { - nc_flptr = ncdf4::nc_create(flname, nc_var_list, verbose=verbose) - - #For each variable associated with that ensemble - for (j in 1:length(cf_var_names)) { - # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble - ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], noaa_data[[j]][i,]) - } - - ncdf4::nc_close(nc_flptr) #Write to the disk/storage - } else { - PEcAn.logger::logger.info(paste0("The file ", flname, " already exists. 
It was not overwritten.")) - } - - } - - return(results_list) -} #download.NOAA_GEFS \ No newline at end of file +download.NOAA_GEFS <- function(site_id, + sitename = NULL, + username = 'pecan', + lat.in, + lon.in, + outfolder, + start_date= Sys.Date(), + end_date = start_date + lubridate::days(16), + downscale = TRUE, + overwrite = FALSE){ + + forecast_date = as.Date(start_date) + forecast_time = (lubridate::hour(start_date) %/% 6)*6 + + end_hr = (as.numeric(difftime(end_date, start_date, units = 'hours')) %/% 6)*6 + + model_name <- "NOAAGEFS_6hr" + model_name_ds <-"NOAAGEFS_1hr" #Downscaled NOAA GEFS + model_name_raw <- "NOAAGEFS_raw" + + PEcAn.logger::logger.info(paste0("Downloading GEFS for site ", site_id, " for ", start_date)) + + PEcAn.logger::logger.info(paste0("Overwrite existing files: ", overwrite)) + + + PEcAn.data.atmosphere::noaa_grid_download(lat_list = lat.in, + lon_list = lon.in, + end_hr = end_hr, + forecast_time = forecast_time, + forecast_date = forecast_date, + model_name_raw = model_name_raw, + output_directory = outfolder) + + results <- PEcAn.data.atmosphere::process_gridded_noaa_download(lat_list = lat.in, + lon_list = lon.in, + site_id = site_id, + downscale = downscale, + overwrite = overwrite, + forecast_date = forecast_date, + forecast_time = forecast_time, + model_name = model_name, + model_name_ds = model_name_ds, + model_name_raw = model_name_raw, + output_directory = outfolder) + return(results) +} diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R deleted file mode 100644 index ccdee0f5c34..00000000000 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ /dev/null @@ -1,378 +0,0 @@ -##' @title Downscale NOAA GEFS Weather Data -##' -##' @section Information on Units: -##' Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downlaoded -##' in Kelvin. -##' @references https://www.ncdc.noaa.gov/crn/measurements.html -##' -##' @section NOAA_GEFS General Information: -##' This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. A 16 day forecast is avaliable -##' every 6 hours. Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn -##' standard. -##' -##' @section Data Avaliability: -##' NOAA GEFS weather data is avaliable on a rolling 12 day basis; dates provided in "start_date" must be within this range. The end date can be any point after -##' that, but if the end date is beyond 16 days, only 16 days worth of forecast are recorded. Times are rounded down to the previous 6 hour forecast. NOAA -##' GEFS weather data isn't always posted immediately, and to compensate, this function adjusts requests made in the last two hours -##' back two hours (approximately the amount of time it takes to post the data) to make sure the most current forecast is used. -##' -##' @section Data Save Format: -##' Data is saved in the netcdf format to the specified directory. File names reflect the precision of the data to the given range of days. -##' NOAA.GEFS.willow creek.3.2018-06-08T06:00.2018-06-24T06:00.nc specifies the forecast, using ensemble nubmer 3 at willow creek on -##' June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. -##' -##' @return A list of data frames is returned containing information about the data file that can be used to locate it later. 
Each -##' data frame contains information about one file. -##' -##' @param outfolder Directory where results should be written -##' @param start_date, end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight) -##' @param lat site latitude in decimal degrees -##' @param lon site longitude in decimal degrees -##' @param site_id The unique ID given to each site. This is used as part of the file name. -##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? -##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info. -##' @param ... Other arguments, currently ignored -##' @export -##' -##' @examples -##' \dontrun{ -##' download.NOAA_GEFS(outfolder="~/Working/results", -##' lat.in= 45.805925, -##' lon.in = -90.07961, -##' site_id = 676) -##' } -##' -##' @author Katie Zarada - modified code from Luke Dramko and Laura Puckett -##' -##' - - -download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, start_date, end_date, - overwrite = FALSE, verbose = FALSE, ...) { - - start_date <- as.POSIXct(start_date, tz = "UTC") - end_date <- as.POSIXct(end_date, tz = "UTC") - - #It takes about 2 hours for NOAA GEFS weather data to be posted. Therefore, if a request is made within that 2 hour window, - #we instead want to adjust the start time to the previous forecast, which is the most recent one avaliable. (For example, if - #there's a request at 7:00 a.m., the data isn't up yet, so the function grabs the data at midnight instead.) - if (abs(as.numeric(Sys.time() - start_date, units="hours")) <= 2) { - start_date = start_date - lubridate::hours(2) - end_date = end_date - lubridate::hours(2) - } - - #Date/time error checking - Checks to see if the start date is before the end date - if (start_date > end_date) { - PEcAn.logger::logger.severe("Invalid dates: end date occurs before start date") - } else if (as.numeric(end_date - start_date, units="hours") < 6) { #Done separately to produce a more helpful error message. - PEcAn.logger::logger.severe("Times not far enough appart for a forecast to fall between them. Forecasts occur every six hours; make sure start - and end dates are at least 6 hours apart.") - } - - #Set the end forecast date (default is the full 16 days) - if (end_date > start_date + lubridate::days(16)) { - end_date = start_date + lubridate::days(16) - PEcAn.logger::logger.info(paste0("Updated end date is ", end_date)) - } - - #Round the starting date/time down to the previous block of 6 hours. Adjust the time frame to match. - forecast_hour = (lubridate::hour(start_date) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - increments = as.integer(as.numeric(end_date - start_date, units = "hours") / 6) #Calculating the number of forecasts between start and end dates - increments = increments + ((lubridate::hour(end_date) - lubridate::hour(start_date)) %/% 6) #These calculations are required to use the rnoaa package. - - end_hour = sprintf("%04d", ((forecast_hour + (increments * 6)) %% 24) * 100) #Calculating the starting hour as a string, which is required type to access the - #data via the rnoaa package - forecast_hour = sprintf("%04d", forecast_hour * 100) #Having the end date as a string is useful later, too. - - #Recreate the adjusted start and end dates. 
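The 6-hour flooring above is easiest to see with a concrete timestamp; the date in this sketch is hypothetical:

```r
# A 14:37 UTC request is floored to the 12:00 forecast cycle.
start_date    <- as.POSIXct("2018-06-08 14:37:00", tz = "UTC")
forecast_hour <- (lubridate::hour(start_date) %/% 6) * 6  # 14 %/% 6 * 6 = 12
sprintf("%04d", forecast_hour * 100)                      # "1200", the string rnoaa expects
```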
- start_date = as.POSIXct(paste0(lubridate::year(start_date), "-", lubridate::month(start_date), "-", lubridate::day(start_date), " ", - substring(forecast_hour, 1,2), ":00:00"), tz="UTC") - - end_date = start_date + lubridate::hours(increments * 6) - - - #Bounds date checking - #NOAA's GEFS database maintains a rolling 12 days of forecast data for access through this function. - #We do want Sys.Date() here - NOAA makes data unavaliable days at a time, not forecasts at a time. - NOAA_GEFS_Start_Date = as.POSIXct(Sys.Date(), tz="UTC") - lubridate::days(11) #Subtracting 11 days is correct, not 12. - - #Check to see if start_date is valid. This must be done after date adjustment. - if (as.POSIXct(Sys.time(), tz="UTC") < start_date || start_date < NOAA_GEFS_Start_Date) { - PEcAn.logger::logger.severe(sprintf('Start date (%s) exceeds the NOAA GEFS range (%s to %s).', - start_date, - NOAA_GEFS_Start_Date, Sys.Date())) - } - - if (lubridate::hour(start_date) > 23) { - PEcAn.logger::logger.severe(sprintf("Start time %s is not a valid time", lubridate::hour(start_date))) - } - - if (lubridate::hour(end_date) > 23) { #Done separately from the previous if statement in order to have more specific error messages. - PEcAn.logger::logger.severe(sprintf("End time %s is not a valid time", lubridate::hour(end_date))) - } - #End date/time error checking - - ################################################# - #NOAA variable downloading - #Uses the rnoaa package to download data - - #We want data for each of the following variables. Here, we're just getting the raw data; later, we will convert it to the - #cf standard format when relevant. - noaa_var_names = c("Temperature_height_above_ground_ens", "Pressure_surface_ens", "Relative_humidity_height_above_ground_ens", "Downward_Long-Wave_Radp_Flux_surface_6_Hour_Average_ens", - "Downward_Short-Wave_Radiation_Flux_surface_6_Hour_Average_ens", "Total_precipitation_surface_6_Hour_Accumulation_ens", - "u-component_of_wind_height_above_ground_ens", "v-component_of_wind_height_above_ground_ens") - - cf_var_names = c("air_temperature", "air_pressure", "specific_humidity", "surface_downwelling_longwave_flux_in_air", - "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "eastward_wind", "northward_wind") - #These are the cf standard names - cf_var_names1 = c("air_temperature", "air_pressure", "specific_humidity", "surface_downwelling_longwave_flux_in_air", - "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "wind_speed") - cf_var_units = c("K", "Pa", "1", "Wm-2", "Wm-2", "kgm-2s-1", "ms-1") #Negative numbers indicate negative exponents - - # This debugging loop allows you to check if the cf variables are correctly mapped to the equivalent - # NOAA variable names. This is very important, as much of the processing below will be erroneous if - # these fail to match up. - # for (i in 1:length(cf_var_names)) { - # print(sprintf("cf / noaa : %s / %s", cf_var_names[[i]], noaa_var_names[[i]])) - #} - - noaa_data = list() - - #Downloading the data here. It is stored in a matrix, where columns represent time in intervals of 6 hours, and rows represent - #each ensemble member. Each variable gets its own matrix, which is stored in the list noaa_data. 
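For orientation, a sketch of the layout just described; the dimensions here assume a hypothetical 7-day request:

```r
# Each element of noaa_data is a matrix with one row per GEFS ensemble member
# (21) and one column per 6-hour time step.
increments <- 28                                 # 7 days * 4 forecasts per day
one_var    <- matrix(NA_real_, nrow = 21, ncol = increments + 1)
dim(one_var)                                     # 21 29
```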
- - - for (i in 1:length(noaa_var_names)) { - noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments + 1), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data - } - - - ### ERROR CHECK FOR FIRST TIME POINT ### - - #Check if first time point is present, if not, grab from previous forecast - - index <- which(lengths(noaa_data) == increments * 21) #finding which ones are missing first point - new_start = start_date - lubridate::hours(6) #grab previous forecast - new_forecast_hour = (lubridate::hour(new_start) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - new_forecast_hour = sprintf("%04d", new_forecast_hour * 100) #Having the end date as a string is useful later, too. - - filled_noaa_data = list() - - for (i in index) { - filled_noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1, forecast_time = new_forecast_hour, date=format(new_start, "%Y%m%d"))$data - } - - #add filled data into first slot of forecast - for(i in index){ - noaa_data[[i]] = cbind(filled_noaa_data[[i]], noaa_data[[i]])} - - - #Fills in data with NaNs if there happens to be missing columns. - for (i in 1:length(noaa_var_names)) { - if (!is.null(ncol(noaa_data[[i]]))) { # Is a matrix - nans <- rep(NaN, nrow(noaa_data[[i]])) - while (ncol(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- cbind(noaa_data[[i]], nans) - } - } else { # Is a vector - while (length(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- c(noaa_data[[i]], NaN); - } - } - } - - ################################################### - # Not all NOAA data units match the cf data standard. In this next section, data are processed to - # confirm with the standard when necessary. - # The following is a list of variables which need to be processed: - # 1. NOAA's relative humidity must be converted to specific humidity - # 2. NOAA's measure of precipitation is the accumulation over 6 hours; cf's standard is precipitation per second - - #Convert NOAA's relative humidity to specific humidity - humid_index = which(cf_var_names == "specific_humidity") - - #Temperature, pressure, and relative humidity are required to calculate specific humidity. - humid_data = noaa_data[[humid_index]] - temperature_data = noaa_data[[which(cf_var_names == "air_temperature")]] - pressure_data = noaa_data[[which(cf_var_names == "air_pressure")]] - - #Depending on the volume and dimensions of data you download, sometimes R stores it as a vector and sometimes - #as a matrix; the different cases must be processed with different loops. - #(The specific corner case in which a vector would be generated is if only one hour is requested; for example, - #only the data at time_idx 1, for example). - if (as.logical(nrow(humid_data))) { - for (i in 1:length(humid_data)) { - humid_data[i] = PEcAn.data.atmosphere::rh2qair(humid_data[i], temperature_data[i], pressure_data[i]) - } - } else { - for (i in 1:nrow(humid_data)) { - for (j in 1:ncol(humid_data)) { - humid_data[i,j] = PEcAn.data.atmosphere::rh2qair(humid_data[i,j], temperature_data[i,j], pressure_data[i,j]) - } - } - } - - #Update the noaa_data list with the correct data - noaa_data[[humid_index]] <- humid_data - - # Convert NOAA's total precipitation (kg m-2) to precipitation flux (kg m-2 s-1) - #NOAA precipitation data is an accumulation over 6 hours. 
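A scalar spot-check of the relative-to-specific humidity conversion performed a few lines above (values are illustrative):

```r
# 70% relative humidity at 293.15 K and standard surface pressure should give a
# specific humidity of roughly 0.010 kg/kg.
PEcAn.data.atmosphere::rh2qair(rh = 0.70, T = 293.15, press = 101325)
```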
- precip_index = which(cf_var_names == "precipitation_flux") - - #The statement udunits2::ud.convert(1, "kg m-2 6 hr-1", "kg m-2 s-1") is equivalent to udunits2::ud.convert(1, "kg m-2 hr-1", "kg m-2 s-1") * 6, - #which is a little unintuitive. What will do the conversion we want is what's below: - noaa_data[[precip_index]] = udunits2::ud.convert(noaa_data[[precip_index]], "kg m-2 hr-1", "kg m-2 6 s-1") #There are 21600 seconds in 6 hours - - - ##################################### - #done with data processing- now want to take the list and make one df for downscaling - - time = seq(from = start_date, to = end_date, by = "6 hour") - forecasts = matrix(ncol = length(noaa_data)+ 2, nrow = 0) - colnames(forecasts) <- c(cf_var_names, "timestamp", "NOAA.member") - - index = matrix(ncol = length(noaa_data), nrow = length(time)) - for(i in 1:21){ - rm(index) - index = matrix(ncol = length(noaa_data), nrow = length(time)) - for(j in 1:length(noaa_data)){ - index[,j] <- noaa_data[[j]][i,] - colnames(index) <- c(cf_var_names) - index <- as.data.frame(index) - } - index$timestamp <- as.POSIXct(time) - index$NOAA.member <- rep(i, times = length(time)) - forecasts <- rbind(forecasts, index) - } - - forecasts <- forecasts %>% tidyr::drop_na() - #forecasts$timestamp <- as.POSIXct(rep(time, 21)) - forecasts$wind_speed <- sqrt(forecasts$eastward_wind^ 2 + forecasts$northward_wind^ 2) - - ### Downscale state variables - gefs_hour <- PEcAn.data.atmosphere::downscale_spline_to_hourly(df = forecasts, VarNamesStates = c("air_temperature", "wind_speed", "specific_humidity", "air_pressure")) - - - ## convert longwave to hourly (just copy 6 hourly values over past 6-hour time period) - nonSW.flux.hrly <- forecasts %>% - dplyr::select(timestamp, NOAA.member, surface_downwelling_longwave_flux_in_air) %>% - PEcAn.data.atmosphere::downscale_repeat_6hr_to_hrly() %>% dplyr::group_by_at(c("NOAA.member", "timestamp")) %>% - dplyr::summarize(surface_downwelling_longwave_flux_in_air = mean(surface_downwelling_longwave_flux_in_air)) - - ## downscale shortwave to hourly - time0 = min(forecasts$timestamp) - time_end = max(forecasts$timestamp) - ShortWave.ds = PEcAn.data.atmosphere::downscale_ShortWave_to_hrly(forecasts, time0, time_end, lat = lat.in, lon = lon.in, output_tz= "UTC")%>% - dplyr::group_by_at(c("NOAA.member", "timestamp")) %>% - dplyr::summarize(surface_downwelling_shortwave_flux_in_air = mean(surface_downwelling_shortwave_flux_in_air)) - - ## Downscale Precipitation Flux - #fills in the hours between the 6hr GEFS with zeros using the timestamp from downscaled Flux - precip.hrly <- forecasts %>% - dplyr::select(timestamp, NOAA.member, precipitation_flux) %>% - tidyr::complete(timestamp = nonSW.flux.hrly$timestamp, tidyr::nesting(NOAA.member), fill = list(precipitation_flux = 0)) - -#join together the 4 different downscaled data frames - #checks for errors in downscaled data; removes NA times; replaces erroneous values with 0's or NA's - joined<- dplyr::inner_join(gefs_hour, nonSW.flux.hrly, by = c("NOAA.member", "timestamp")) - joined<- dplyr::inner_join(joined, precip.hrly, by = c("NOAA.member", "timestamp")) - - joined <- dplyr::inner_join(joined, ShortWave.ds, by = c("NOAA.member", "timestamp")) %>% - dplyr::distinct() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = dplyr::if_else(surface_downwelling_shortwave_flux_in_air < 0, 0, surface_downwelling_shortwave_flux_in_air), - specific_humidity = dplyr::if_else(specific_humidity <0, 0, specific_humidity), - air_temperature = 
dplyr::if_else(air_temperature > 320, NA_real_, air_temperature), - air_temperature = dplyr::if_else(air_temperature < 240, NA_real_, air_temperature), - precipitation_flux = dplyr::if_else(precipitation_flux < 0, 0, precipitation_flux), - surface_downwelling_longwave_flux_in_air = dplyr::if_else(surface_downwelling_longwave_flux_in_air < 0, NA_real_, surface_downwelling_longwave_flux_in_air), - wind_speed = dplyr::if_else(wind_speed <0, 0, wind_speed)) %>% - dplyr::filter(is.na(timestamp) == FALSE) - - - - - ############################################# - # Done with data processing. Now writing the data to the specified directory. Each ensemble member is written to its own file, for a total - # of 21 files. - if (!dir.exists(outfolder)) { - dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) - } - - # Create a data frame with information about the file. This data frame's format is an internal PEcAn standard, and is stored in the BETY database to - # locate the data file. The data file is stored on the local machine where the download occured. Because NOAA GEFS is an - # ensemble of 21 different forecast models, each model gets its own data frame. All of the information is the same for - # each file except for the file name. - results = data.frame( - file = "", #Path to the file (added in loop below). - host = PEcAn.remote::fqdn(), #Name of the server where the file is stored - mimetype = "application/x-netcdf", #Format the data is saved in - formatname = "CF Meteorology", #Type of data - startdate = paste0(format(start_date, "%Y-%m-%dT%H:%M:00")), #starting date and time, down to the second - enddate = paste0(format(end_date, "%Y-%m-%dT%H:%M:00")), #ending date and time, down to the second - dbfile.name = "NOAA_GEFS_downscale", #Source of data (ensemble number will be added later) - stringsAsFactors = FALSE - ) - - results_list = list() - - #Each ensemble gets its own file. - #These dimensions will be used for all 21 ncdf4 file members, so they're all declared once here. - #The data is really one-dimensional for each file (though we include lattitude and longitude dimensions - #to comply with the PEcAn standard). - time_dim = ncdf4::ncdim_def(name="time", - units = paste("hours since", format(start_date, "%Y-%m-%dT%H:%M")), - seq(from = 0, length.out = length(unique(joined$timestamp))), #GEFS forecast starts 6 hours from start time - create_dimvar = TRUE) - lat_dim = ncdf4::ncdim_def("latitude", "degree_north", lat.in, create_dimvar = TRUE) - lon_dim = ncdf4::ncdim_def("longitude", "degree_east", lon.in, create_dimvar = TRUE) - - dimensions_list = list(time_dim, lat_dim, lon_dim) - - nc_var_list = list() - for (i in 1:length(cf_var_names1)) { #Each ensemble member will have data on each variable stored in their respective file. - nc_var_list[[i]] = ncdf4::ncvar_def(cf_var_names1[i], cf_var_units[i], dimensions_list, missval=NaN) - } - - #For each ensemble - for (i in 1:21) { # i is the ensemble number - #Generating a unique identifier string that characterizes a particular data set. 
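The identifier assembled below has the following shape; the site ID and dates are taken from the examples in this file's documentation:

```r
paste("NOAA_GEFS_downscale", 676, 3, "2018-06-08T06:00", "2018-06-24T06:00", sep = "_")
# "NOAA_GEFS_downscale_676_3_2018-06-08T06:00_2018-06-24T06:00"
```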
- identifier = paste("NOAA_GEFS_downscale", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), - format(end_date, "%Y-%m-%dT%H:%M"), sep="_") - ensemble_folder = file.path(outfolder, identifier) - - #ensemble_folder = file.path(outfolder, identifier) - data = as.data.frame(joined %>% dplyr::select(NOAA.member, cf_var_names1) %>% - dplyr::filter(NOAA.member == i) %>% - dplyr::select(-NOAA.member)) - - - if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} - - flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) - - #Each ensemble member gets its own unique data frame, which is stored in results_list - #Object references in R work differently than in other languages. When adding an item to a list, R creates a copy of it - #for you instead of just inserting the object reference, so this works. - results$file <- flname - results$dbfile.name <- flname - results_list[[i]] <- results - - if (!file.exists(flname) | overwrite) { - nc_flptr = ncdf4::nc_create(flname, nc_var_list, verbose=verbose) - - #For each variable associated with that ensemble - for (j in 1:length(cf_var_names1)) { - # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble - ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], data[,j]) - } - - ncdf4::nc_close(nc_flptr) #Write to the disk/storage - } else { - PEcAn.logger::logger.info(paste0("The file ", flname, " already exists. It was not overwritten.")) - } - - } - - return(results_list) -} #downscale.NOAA_GEFS From 8857bdf8e5ade36ff00e3d7514620cd9b032727e Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 19 Nov 2020 16:49:06 -0500 Subject: [PATCH 1669/2289] updating rd files --- modules/data.atmosphere/NAMESPACE | 5 +- .../data.atmosphere/man/download.NOAA_GEFS.Rd | 31 ++++--- .../man/download.NOAA_GEFS_downscale.Rd | 85 ------------------- .../data.atmosphere/man/noaa_grid_download.Rd | 25 ++++++ .../man/process_gridded_noaa_download.Rd | 32 +++++++ .../data.atmosphere/man/temporal_downscale.Rd | 26 ++++++ .../man/write_noaa_gefs_netcdf.Rd | 41 +++++++++ 7 files changed, 145 insertions(+), 100 deletions(-) delete mode 100644 modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd create mode 100644 modules/data.atmosphere/man/noaa_grid_download.Rd create mode 100644 modules/data.atmosphere/man/process_gridded_noaa_download.Rd create mode 100644 modules/data.atmosphere/man/temporal_downscale.Rd create mode 100644 modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 3729ed261a7..22787f6ceb7 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -33,7 +33,6 @@ export(download.NARR_site) export(download.NEONmet) export(download.NLDAS) export(download.NOAA_GEFS) -export(download.NOAA_GEFS_downscale) export(download.PalEON) export(download.PalEON_ENS) export(download.US_WCr) @@ -77,10 +76,12 @@ export(metgapfill) export(metgapfill.NOAA_GEFS) export(model.train) export(nc.merge) +export(noaa_grid_download) export(par2ppfd) export(pecan_standard_met_table) export(permute.nc) export(predict_subdaily_met) +export(process_gridded_noaa_download) export(qair2rh) export(read.register) export(rh2qair) @@ -96,8 +97,10 @@ export(subdaily_pred) export(sw2par) export(sw2ppfd) export(temporal.downscale.functions) +export(temporal_downscale) export(upscale_met) export(wide2long) +export(write_noaa_gefs_netcdf) import(dplyr) importFrom(magrittr,"%>%") 
importFrom(rgdal,checkCRSArgs) diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index c6e313362ec..8a6c43e20e8 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -5,33 +5,36 @@ \title{Download NOAA GEFS Weather Data} \usage{ download.NOAA_GEFS( - outfolder, + site_id, + sitename = NULL, + username = "pecan", lat.in, lon.in, - site_id, - start_date = Sys.time(), - end_date = (as.POSIXct(start_date, tz = "UTC") + lubridate::days(16)), - overwrite = FALSE, - verbose = FALSE, - ... + outfolder, + start_date = Sys.Date(), + end_date = start_date + lubridate::days(16), + downscale = TRUE, + overwrite = FALSE ) } \arguments{ -\item{outfolder}{Directory where results should be written} - \item{site_id}{The unique ID given to each site. This is used as part of the file name.} -\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} +\item{outfolder}{Directory where results should be written} -\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} +\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} -\item{verbose}{logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.} +\item{downscale}{logical, assumed True. Indicated whether data should be downscaled to hourly} -\item{...}{Other arguments, currently ignored} +\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} \item{lat}{site latitude in decimal degrees} \item{lon}{site longitude in decimal degrees} + +\item{verbose}{logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.} + +\item{...}{Other arguments, currently ignored} } \value{ A list of data frames is returned containing information about the data file that can be used to locate it later. Each @@ -82,5 +85,5 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. https://www.ncdc.noaa.gov/crn/measurements.html } \author{ -Luke Dramko +Quinn Thomas, modified by K Zarada } diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd deleted file mode 100644 index d13d8ffa128..00000000000 --- a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd +++ /dev/null @@ -1,85 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/download.NOAA_GEFS_downscale.R -\name{download.NOAA_GEFS_downscale} -\alias{download.NOAA_GEFS_downscale} -\title{Downscale NOAA GEFS Weather Data} -\usage{ -download.NOAA_GEFS_downscale( - outfolder, - lat.in, - lon.in, - site_id, - start_date, - end_date, - overwrite = FALSE, - verbose = FALSE, - ... -) -} -\arguments{ -\item{outfolder}{Directory where results should be written} - -\item{site_id}{The unique ID given to each site. This is used as part of the file name.} - -\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} - -\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} - -\item{verbose}{logical. Print additional debug information. 
Passed on to functions in the netcdf4 package to provide debugging info.} - -\item{...}{Other arguments, currently ignored} - -\item{lat}{site latitude in decimal degrees} - -\item{lon}{site longitude in decimal degrees} -} -\value{ -A list of data frames is returned containing information about the data file that can be used to locate it later. Each -data frame contains information about one file. -} -\description{ -Downscale NOAA GEFS Weather Data -} -\section{Information on Units}{ - -Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downlaoded -in Kelvin. -} - -\section{NOAA_GEFS General Information}{ - -This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. A 16 day forecast is avaliable -every 6 hours. Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn -standard. -} - -\section{Data Avaliability}{ - -NOAA GEFS weather data is avaliable on a rolling 12 day basis; dates provided in "start_date" must be within this range. The end date can be any point after -that, but if the end date is beyond 16 days, only 16 days worth of forecast are recorded. Times are rounded down to the previous 6 hour forecast. NOAA -GEFS weather data isn't always posted immediately, and to compensate, this function adjusts requests made in the last two hours -back two hours (approximately the amount of time it takes to post the data) to make sure the most current forecast is used. -} - -\section{Data Save Format}{ - -Data is saved in the netcdf format to the specified directory. File names reflect the precision of the data to the given range of days. -NOAA.GEFS.willow creek.3.2018-06-08T06:00.2018-06-24T06:00.nc specifies the forecast, using ensemble nubmer 3 at willow creek on -June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. 
-}
-
-\examples{
-\dontrun{
- download.NOAA_GEFS(outfolder="~/Working/results",
- lat.in= 45.805925,
- lon.in = -90.07961,
- site_id = 676)
-}
-
-}
-\references{
-https://www.ncdc.noaa.gov/crn/measurements.html
-}
-\author{
-Katie Zarada - modified code from Luke Dramko and Laura Puckett
-}
diff --git a/modules/data.atmosphere/man/noaa_grid_download.Rd b/modules/data.atmosphere/man/noaa_grid_download.Rd
new file mode 100644
index 00000000000..e52b605abc6
--- /dev/null
+++ b/modules/data.atmosphere/man/noaa_grid_download.Rd
@@ -0,0 +1,25 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/GEFS_helper_functions.R
+\name{noaa_grid_download}
+\alias{noaa_grid_download}
+\title{Download gridded forecast in the box bounded by the latitude and longitude list}
+\usage{
+noaa_grid_download(
+  lat_list,
+  lon_list,
+  forecast_time,
+  forecast_date,
+  model_name_raw,
+  output_directory,
+  end_hr
+)
+}
+\arguments{
+\item{output_directory}{}
+}
+\value{
+
+}
+\description{
+Download gridded forecast in the box bounded by the latitude and longitude list
+}
diff --git a/modules/data.atmosphere/man/process_gridded_noaa_download.Rd b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd
new file mode 100644
index 00000000000..e8f2d6113f0
--- /dev/null
+++ b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd
@@ -0,0 +1,32 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/GEFS_helper_functions.R
+\name{process_gridded_noaa_download}
+\alias{process_gridded_noaa_download}
+\title{Extract and temporally downscale points from downloaded grid files}
+\usage{
+process_gridded_noaa_download(
+  lat_list,
+  lon_list,
+  site_id,
+  downscale,
+  overwrite,
+  forecast_date,
+  forecast_time,
+  model_name,
+  model_name_ds,
+  model_name_raw,
+  output_directory
+)
+}
+\arguments{
+\item{output_directory}{}
+}
+\value{
+
+}
+\description{
+Extract and temporally downscale points from downloaded grid files
+}
+\examples{
+
+}
diff --git a/modules/data.atmosphere/man/temporal_downscale.Rd b/modules/data.atmosphere/man/temporal_downscale.Rd
new file mode 100644
index 00000000000..9ed0999227e
--- /dev/null
+++ b/modules/data.atmosphere/man/temporal_downscale.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/GEFS_helper_functions.R
+\name{temporal_downscale}
+\alias{temporal_downscale}
+\title{Downscale NOAA GEFS from 6hr to 1hr}
+\usage{
+temporal_downscale(input_file, output_file, overwrite = TRUE, hr = 1)
+}
+\arguments{
+\item{input_file}{full path to 6hr file}
+
+\item{output_file}{full path to 1hr file that will be generated}
+
+\item{overwrite}{logical stating to overwrite any existing output_file}
+
+\item{hr}{time step in hours of temporal downscaling (default = 1)}
+}
+\value{
+None
+}
+\description{
+Downscale NOAA GEFS from 6hr to 1hr
+}
+\author{
+Quinn Thomas
+}
diff --git a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd
new file mode 100644
index 00000000000..dd8c10ca76e
--- /dev/null
+++ b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd
@@ -0,0 +1,41 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/GEFS_helper_functions.R
+\name{write_noaa_gefs_netcdf}
+\alias{write_noaa_gefs_netcdf}
+\title{Write NOAA GEFS netCDF}
+\usage{
+write_noaa_gefs_netcdf(
+  df,
+  ens = NA,
+  lat,
+  lon,
+  cf_units,
+  output_file,
+  overwrite
+)
+}
+\arguments{
+\item{df}{data frame of meteorological
variables to be written to netcdf. Columns +must start with time with the following columns in the order of `cf_units`} + +\item{ens}{ensemble index used for subsetting df} + +\item{lat}{latitude in degree north} + +\item{lon}{longitude in degree east} + +\item{cf_units}{vector of variable names in order they appear in df} + +\item{output_file}{name, with full path, of the netcdf file that is generated} + +\item{overwrite}{logical to overwrite existing netcdf file} +} +\value{ +NA +} +\description{ +Write NOAA GEFS netCDF +} +\author{ +Quinn Thomas +} From b121361a071d776e24799ca53e92926ab68c7fa2 Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 20 Nov 2020 08:37:33 -0500 Subject: [PATCH 1670/2289] updating gefs to work with this branch --- modules/data.atmosphere/NAMESPACE | 4 + .../data.atmosphere/R/GEFS_helper_functions.R | 906 ++++++++++++++++++ .../data.atmosphere/R/download.NOAA_GEFS.R | 298 +----- .../data.atmosphere/man/download.NOAA_GEFS.Rd | 31 +- .../data.atmosphere/man/noaa_grid_download.Rd | 25 + .../man/process_gridded_noaa_download.Rd | 32 + .../data.atmosphere/man/temporal_downscale.Rd | 26 + .../man/write_noaa_gefs_netcdf.Rd | 41 + 8 files changed, 1099 insertions(+), 264 deletions(-) create mode 100644 modules/data.atmosphere/R/GEFS_helper_functions.R create mode 100644 modules/data.atmosphere/man/noaa_grid_download.Rd create mode 100644 modules/data.atmosphere/man/process_gridded_noaa_download.Rd create mode 100644 modules/data.atmosphere/man/temporal_downscale.Rd create mode 100644 modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 3729ed261a7..580812a2ad1 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -77,10 +77,12 @@ export(metgapfill) export(metgapfill.NOAA_GEFS) export(model.train) export(nc.merge) +export(noaa_grid_download) export(par2ppfd) export(pecan_standard_met_table) export(permute.nc) export(predict_subdaily_met) +export(process_gridded_noaa_download) export(qair2rh) export(read.register) export(rh2qair) @@ -96,8 +98,10 @@ export(subdaily_pred) export(sw2par) export(sw2ppfd) export(temporal.downscale.functions) +export(temporal_downscale) export(upscale_met) export(wide2long) +export(write_noaa_gefs_netcdf) import(dplyr) importFrom(magrittr,"%>%") importFrom(rgdal,checkCRSArgs) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R new file mode 100644 index 00000000000..336ef9235c7 --- /dev/null +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -0,0 +1,906 @@ +#' Download gridded forecast in the box bounded by the latitude and longitude list +#' +#' @param lat_list +#' @param lon_list +#' @param forecast_time +#' @param forecast_date +#' @param model_name_raw +#' @param num_cores +#' @param output_directory +#' +#' @return +#' @export +#' +#' @examples +noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date ,model_name_raw, output_directory, end_hr) { + + + download_neon_grid <- function(ens_index, location, directory, hours_char, cycle, base_filename1, vars,working_directory){ + #for(j in 1:31){ + if(ens_index == 1){ + base_filename2 <- paste0("gec00",".t",cycle,"z.pgrb2a.0p50.f") + curr_hours <- hours_char[hours <= 384] + }else{ + if((ens_index-1) < 10){ + ens_name <- paste0("0",ens_index - 1) + }else{ + ens_name <- as.character(ens_index -1) + } + base_filename2 <- paste0("gep",ens_name,".t",cycle,"z.pgrb2a.0p50.f") + curr_hours 
<- hours_char + } + + + for(i in 1:length(curr_hours)){ + file_name <- paste0(base_filename2, curr_hours[i]) + + destfile <- paste0(working_directory,"/", file_name,".neon.grib") + + if(file.exists(destfile)){ + + fsz <- file.info(destfile)$size + gribf <- file(destfile, "rb") + fsz4 <- fsz-4 + seek(gribf,where = fsz4,origin = "start") + last4 <- readBin(gribf,"raw",4) + if(as.integer(last4[1])==55 & as.integer(last4[2])==55 & as.integer(last4[3])==55 & as.integer(last4[4])==55) { + download_file <- FALSE + } else { + download_file <- TRUE + } + close(gribf) + + }else{ + download_file <- TRUE + } + + if(download_file){ + + out <- tryCatch(utils::download.file(paste0(base_filename1, file_name, vars, location, directory), + destfile = destfile, quiet = TRUE), + error = function(e){ + warning(paste(e$message, "skipping", file_name), + call. = FALSE) + return(NA) + }, + finally = NULL) + + if(is.na(out)) next + } + } + } + + model_dir <- file.path(output_directory, model_name_raw) + + curr_time <- lubridate::with_tz(Sys.time(), tzone = "UTC") + curr_date <- lubridate::as_date(curr_time) + + noaa_page <- readLines('https://nomads.ncep.noaa.gov/pub/data/nccf/com/gens/prod/') + + potential_dates <- NULL + for(i in 1:length(noaa_page)){ + if(stringr::str_detect(noaa_page[i], ">gefs.")){ + end <- stringr::str_locate(noaa_page[i], ">gefs.")[2] + dates <- stringr::str_sub(noaa_page[i], start = end+1, end = end+8) + potential_dates <- c(potential_dates, dates) + } + } + + + last_cycle_page <- readLines(paste0('https://nomads.ncep.noaa.gov/pub/data/nccf/com/gens/prod/gefs.', dplyr::last(potential_dates))) + + potential_cycle <- NULL + for(i in 1:length(last_cycle_page)){ + if(stringr::str_detect(last_cycle_page[i], 'href=\"')){ + end <- stringr::str_locate(last_cycle_page[i], 'href=\"')[2] + cycles <- stringr::str_sub(last_cycle_page[i], start = end+1, end = end+2) + if(cycles %in% c("00","06", "12", "18")){ + potential_cycle <- c(potential_cycle, cycles) + } + } + } + + potential_dates <- lubridate::as_date(potential_dates) + + potential_dates = potential_dates[which(potential_dates == forecast_date)] + + if(length(potential_dates) == 0){PEcAn.logger::logger.error("Forecast Date not available")} + + + location <- paste0("&subregion=&leftlon=", + floor(min(lon_list)), + "&rightlon=", + ceiling(max(lon_list)), + "&toplat=", + ceiling(max(lat_list)), + "&bottomlat=", + floor(min(lat_list))) + + base_filename1 <- "https://nomads.ncep.noaa.gov/cgi-bin/filter_gefs_atmos_0p50a.pl?file=" + vars <- "&lev_10_m_above_ground=on&lev_2_m_above_ground=on&lev_surface=on&lev_entire_atmosphere=on&var_APCP=on&var_DLWRF=on&var_DSWRF=on&var_PRES=on&var_RH=on&var_TMP=on&var_UGRD=on&var_VGRD=on&var_TCDC=on" + + for(i in 1:length(potential_dates)){ + + forecast_date <- lubridate::as_date(potential_dates[i]) + forecast_hours = as.numeric(forecast_time) + + + for(j in 1:length(forecast_hours)){ + cycle <- forecast_hours[j] + + if(cycle < 10) cycle <- paste0("0",cycle) + + model_date_hour_dir <- file.path(model_dir,forecast_date,cycle) + if(!dir.exists(model_date_hour_dir)){ + dir.create(model_date_hour_dir, recursive=TRUE, showWarnings = FALSE) + } + + new_download <- TRUE + + if(new_download){ + + print(paste("Downloading", forecast_date, cycle)) + + if(cycle == "00"){ + hours <- c(seq(0, 240, 3),seq(246, min(end_hr, 384), 6)) + }else{ + hours <- c(seq(0, 240, 3),seq(246, min(end_hr, 840) , 6)) + } + hours_char <- hours + hours_char[which(hours < 100)] <- paste0("0",hours[which(hours < 100)]) + hours_char[which(hours < 10)] 
<- paste0("0",hours_char[which(hours < 10)]) + curr_year <- lubridate::year(forecast_date) + curr_month <- lubridate::month(forecast_date) + if(curr_month < 10) curr_month <- paste0("0",curr_month) + curr_day <- lubridate::day(forecast_date) + if(curr_day < 10) curr_day <- paste0("0",curr_day) + curr_date <- paste0(curr_year,curr_month,curr_day) + directory <- paste0("&dir=%2Fgefs.",curr_date,"%2F",cycle,"%2Fatmos%2Fpgrb2ap5") + + ens_index <- 1:31 + + parallel::mclapply(X = ens_index, + FUN = download_neon_grid, + location, + directory, + hours_char, + cycle, + base_filename1, + vars, + working_directory = model_date_hour_dir, + mc.cores = 1) + }else{ + print(paste("Existing", forecast_date, cycle)) + } + } + } +} +#' Extract and temporally downscale points from downloaded grid files +#' +#' @param lat_list +#' @param lon_list +#' @param site_id +#' @param downscale +#' @param overwrite +#' @param model_name +#' @param model_name_ds +#' @param model_name_raw +#' @param output_directory +#' +#' @return +#' @export +#' +#' @examples +#' +process_gridded_noaa_download <- function(lat_list, + lon_list, + site_id, + downscale, + overwrite, + forecast_date, + forecast_time, + model_name, + model_name_ds, + model_name_raw, + output_directory){ + + extract_sites <- function(ens_index, hours_char, hours, cycle, site_id, lat_list, lon_list, working_directory){ + + site_length <- length(site_id) + tmp2m <- array(NA, dim = c(site_length, length(hours_char))) + rh2m <- array(NA, dim = c(site_length, length(hours_char))) + ugrd10m <- array(NA, dim = c(site_length,length(hours_char))) + vgrd10m <- array(NA, dim = c(site_length, length(hours_char))) + pressfc <- array(NA, dim = c(site_length, length(hours_char))) + apcpsfc <- array(NA, dim = c(site_length, length(hours_char))) + tcdcclm <- array(NA, dim = c(site_length, length(hours_char))) + dlwrfsfc <- array(NA, dim = c(site_length, length(hours_char))) + dswrfsfc <- array(NA, dim = c(site_length, length(hours_char))) + + if(ens_index == 1){ + base_filename2 <- paste0("gec00",".t",cycle,"z.pgrb2a.0p50.f") + }else{ + if(ens_index-1 < 10){ + ens_name <- paste0("0",ens_index-1) + }else{ + ens_name <- as.character(ens_index-1) + } + base_filename2 <- paste0("gep",ens_name,".t",cycle,"z.pgrb2a.0p50.f") + } + + lats <- round(lat_list/.5)*.5 + lons <- round(lon_list/.5)*.5 + + if(lons < 0){ + lons <- 360 + lons + } + curr_hours <- hours_char + + for(hr in 1:length(curr_hours)){ + file_name <- paste0(base_filename2, curr_hours[hr]) + + if(file.exists(paste0(working_directory,"/", file_name,".neon.grib"))){ + grib <- rgdal::readGDAL(paste0(working_directory,"/", file_name,".neon.grib"), silent = TRUE) + lat_lon <- sp::coordinates(grib) + for(s in 1:length(site_id)){ + + index <- which(lat_lon[,2] == lats[s] & lat_lon[,1] == lons[s]) + + pressfc[s, hr] <- grib$band1[index] + tmp2m[s, hr] <- grib$band2[index] + rh2m[s, hr] <- grib$band3[index] + ugrd10m[s, hr] <- grib$band4[index] + vgrd10m[s, hr] <- grib$band5[index] + + if(curr_hours[hr] != "000"){ + apcpsfc[s, hr] <- grib$band6[index] + tcdcclm[s, hr] <- grib$band7[index] + dswrfsfc[s, hr] <- grib$band8[index] + dlwrfsfc[s, hr] <- grib$band9[index] + } + } + } + } + + return(list(tmp2m = tmp2m, + pressfc = pressfc, + rh2m = rh2m, + dlwrfsfc = dlwrfsfc, + dswrfsfc = dswrfsfc, + ugrd10m = ugrd10m, + vgrd10m = vgrd10m, + apcpsfc = apcpsfc, + tcdcclm = tcdcclm)) + } + + noaa_var_names <- c("tmp2m", "pressfc", "rh2m", "dlwrfsfc", + "dswrfsfc", "apcpsfc", + "ugrd10m", "vgrd10m", "tcdcclm") + + + model_dir <- 
file.path(output_directory) + model_name_raw_dir <- file.path(output_directory, model_name_raw) + + curr_time <- lubridate::with_tz(Sys.time(), tzone = "UTC") + curr_date <- lubridate::as_date(curr_time) + potential_dates <- seq(curr_date - lubridate::days(6), curr_date, by = "1 day") + + #Remove dates before the new GEFS system + potential_dates <- potential_dates[which(potential_dates > lubridate::as_date("2020-09-23"))] + + + + + cycle <-forecast_time + curr_forecast_time <- forecast_date + lubridate::hours(cycle) + if(cycle < 10) cycle <- paste0("0",cycle) + if(cycle == "00"){ + hours <- c(seq(0, 240, 3),seq(246, 840 , 6)) + }else{ + hours <- c(seq(0, 240, 3),seq(246, 384 , 6)) + } + hours_char <- hours + hours_char[which(hours < 100)] <- paste0("0",hours[which(hours < 100)]) + hours_char[which(hours < 10)] <- paste0("0",hours_char[which(hours < 10)]) + + raw_files <- list.files(file.path(model_name_raw_dir,forecast_date,cycle)) + hours_present <- as.numeric(stringr::str_sub(raw_files, start = 25, end = 27)) + + all_downloaded <- TRUE + # if(cycle == "00"){ + # #Sometime the 16-35 day forecast is not competed for some of the forecasts. If over 24 hrs has passed then they won't show up. + # #Go ahead and create the netcdf files + # if(length(which(hours_present == 840)) == 30 | (length(which(hours_present == 384)) == 30 & curr_forecast_time + lubridate::hours(24) < curr_time)){ + # all_downloaded <- TRUE + # } + # }else{ + # if(length(which(hours_present == 384)) == 31 | (length(which(hours_present == 384)) == 31 & curr_forecast_time + lubridate::hours(24) < curr_time)){ + # all_downloaded <- TRUE + # } + # } + + + + + + if(all_downloaded){ + + ens_index <- 1:31 + #Run download_downscale_site() over the site_index + output <- parallel::mclapply(X = ens_index, + FUN = extract_sites, + hours_char = hours_char, + hours = hours, + cycle, + site_id, + lat_list, + lon_list, + working_directory = file.path(model_name_raw_dir,forecast_date,cycle), + mc.cores = 1) + + + forecast_times <- lubridate::as_datetime(forecast_date) + lubridate::hours(as.numeric(cycle)) + lubridate::hours(as.numeric(hours_char)) + + + + #Convert negetive longitudes to degrees east + if(lon_list < 0){ + lon_east <- 360 + lon_list + }else{ + lon_east <- lon_list + } + + model_site_date_hour_dir <- file.path(model_dir, site_id, forecast_date,cycle) + + if(!dir.exists(model_site_date_hour_dir)){ + dir.create(model_site_date_hour_dir, recursive=TRUE, showWarnings = FALSE) + }else{ + unlink(list.files(model_site_date_hour_dir, full.names = TRUE)) + } + + if(downscale){ + modelds_site_date_hour_dir <- file.path(output_directory,model_name_ds,site_id, forecast_date,cycle) + if(!dir.exists(modelds_site_date_hour_dir)){ + dir.create(modelds_site_date_hour_dir, recursive=TRUE, showWarnings = FALSE) + }else{ + unlink(list.files(modelds_site_date_hour_dir, full.names = TRUE)) + } + } + + noaa_data <- list() + + for(v in 1:length(noaa_var_names)){ + + value <- NULL + ensembles <- NULL + forecast.date <- NULL + + noaa_data[v] <- NULL + + for(ens in 1:31){ + curr_ens <- output[[ens]] + value <- c(value, curr_ens[[noaa_var_names[v]]][1, ]) + ensembles <- c(ensembles, rep(ens, length(curr_ens[[noaa_var_names[v]]][1, ]))) + forecast.date <- c(forecast.date, forecast_times) + } + noaa_data[[v]] <- list(value = value, + ensembles = ensembles, + forecast.date = lubridate::as_datetime(forecast.date)) + + } + + #These are the cf standard names + cf_var_names <- c("air_temperature", "air_pressure", "relative_humidity", 
"surface_downwelling_longwave_flux_in_air", + "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "eastward_wind", "northward_wind","cloud_area_fraction") + + #Replace "eastward_wind" and "northward_wind" with "wind_speed" + cf_var_names1 <- c("air_temperature", "air_pressure", "relative_humidity", "surface_downwelling_longwave_flux_in_air", + "surface_downwelling_shortwave_flux_in_air", "precipitation_flux","specific_humidity", "cloud_area_fraction","wind_speed") + + cf_var_units1 <- c("K", "Pa", "1", "Wm-2", "Wm-2", "kgm-2s-1", "1", "1", "ms-1") #Negative numbers indicate negative exponents + + names(noaa_data) <- cf_var_names + + specific_humidity <- rep(NA, length(noaa_data$relative_humidity$value)) + + noaa_data$relative_humidity$value <- noaa_data$relative_humidity$value / 100 + + noaa_data$air_temperature$value <- noaa_data$air_temperature$value + 273.15 + + specific_humidity[which(!is.na(noaa_data$relative_humidity$value))] <- PEcAn.data.atmosphere::rh2qair(rh = noaa_data$relative_humidity$value[which(!is.na(noaa_data$relative_humidity$value))], + T = noaa_data$air_temperature$value[which(!is.na(noaa_data$relative_humidity$value))], + press = noaa_data$air_pressure$value[which(!is.na(noaa_data$relative_humidity$value))]) + + + #Calculate wind speed from east and north components + wind_speed <- sqrt(noaa_data$eastward_wind$value^2 + noaa_data$northward_wind$value^2) + + forecast_noaa <- tibble::tibble(time = noaa_data$air_temperature$forecast.date, + NOAA.member = noaa_data$air_temperature$ensembles, + air_temperature = noaa_data$air_temperature$value, + air_pressure= noaa_data$air_pressure$value, + relative_humidity = noaa_data$relative_humidity$value, + surface_downwelling_longwave_flux_in_air = noaa_data$surface_downwelling_longwave_flux_in_air$value, + surface_downwelling_shortwave_flux_in_air = noaa_data$surface_downwelling_shortwave_flux_in_air$value, + precipitation_flux = noaa_data$precipitation_flux$value, + specific_humidity = specific_humidity, + cloud_area_fraction = noaa_data$cloud_area_fraction$value, + wind_speed = wind_speed) + + forecast_noaa$cloud_area_fraction <- forecast_noaa$cloud_area_fraction / 100 #Convert from % to proportion + + # Convert the 3 hr precip rate to per second. + forecast_noaa$precipitation_flux <- forecast_noaa$precipitation_flux / (60 * 60 * 3) + + + + # Create a data frame with information about the file. This data frame's format is an internal PEcAn standard, and is stored in the BETY database to + # locate the data file. The data file is stored on the local machine where the download occured. Because NOAA GEFS is an + # ensemble of 21 different forecast models, each model gets its own data frame. All of the information is the same for + # each file except for the file name. + + results_list = list() + + + for (ens in 1:31) { # i is the ensemble number + + #Turn the ensemble number into a string + if(ens-1< 10){ + ens_name <- paste0("0",ens-1) + }else{ + ens_name <- ens - 1 + } + + forecast_noaa_ens <- forecast_noaa %>% + dplyr::filter(NOAA.member == ens) %>% + dplyr::filter(!is.na(air_temperature)) + + end_date <- forecast_noaa_ens %>% + dplyr::summarise(max_time = max(time)) + + results = data.frame( + file = "", #Path to the file (added in loop below). 
+ host = PEcAn.remote::fqdn(), #Name of the server where the file is stored + mimetype = "application/x-netcdf", #Format the data is saved in + formatname = "CF Meteorology", #Type of data + startdate = paste0(format(forecast_date, "%Y-%m-%dT%H:%M:00")), #starting date and time, down to the second + enddate = paste0(format(end_date$max_time, "%Y-%m-%dT%H:%M:00")), #ending date and time, down to the second + dbfile.name = "NOAA_GEFS_downscale", #Source of data (ensemble number will be added later) + stringsAsFactors = FALSE + ) + + identifier = paste("NOAA_GEFS", site_id, ens_name, format(forecast_date, "%Y-%m-%dT%H:%M"), + format(end_date$max_time, "%Y-%m-%dT%H:%M"), sep="_") + + fname <- paste0(identifier, ".nc") + ensemble_folder = file.path(output_directory, identifier) + output_file <- file.path(ensemble_folder,fname) + + if (!dir.exists(ensemble_folder)) { + dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} + + + #Write netCDF + noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE) + + if(downscale){ + #Downscale the forecast from 6hr to 1hr + + + identifier_ds = paste("NOAA_GEFS_downscale", site_id, ens_name, format(forecast_date, "%Y-%m-%dT%H:%M"), + format(end_date$max_time, "%Y-%m-%dT%H:%M"), sep="_") + + fname_ds <- paste0(identifier_ds, ".nc") + ensemble_folder_ds = file.path(output_directory, identifier_ds) + output_file_ds <- file.path(ensemble_folder_ds,fname_ds) + + if (!dir.exists(ensemble_folder_ds)) { + dir.create(ensemble_folder_ds, recursive=TRUE, showWarnings = FALSE)} + + results$file = output_file_ds + results$dbfile.name = fname_ds + results_list[[ens]] <- results + + #Run downscaling + noaaGEFSpoint::temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1) + } + + + } + } + return(results_list) +} #process_gridded_noaa_download + +#' @title Downscale NOAA GEFS frin 6hr to 1hr +#' @return None +#' +#' @param input_file, full path to 6hr file +#' @param output_file, full path to 1hr file that will be generated +#' @param overwrite, logical stating to overwrite any existing output_file +#' @param hr time step in hours of temporal downscaling (default = 1) +#' @export +#' +#' @author Quinn Thomas +#' +#' + +temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1){ + + # open netcdf + nc <- ncdf4::nc_open(input_file) + + if(stringr::str_detect(input_file, "ens")){ + ens_postion <- stringr::str_locate(input_file, "ens") + ens_name <- stringr::str_sub(input_file, start = ens_postion[1], end = ens_postion[2] + 2) + ens <- as.numeric(stringr::str_sub(input_file, start = ens_postion[2] + 1, end = ens_postion[2] + 2)) + }else{ + ens <- 0 + ens_name <- "ens00" + } + + # retrive variable names + cf_var_names <- names(nc$var) + + # generate time vector + time <- ncdf4::ncvar_get(nc, "time") + begining_time <- lubridate::ymd_hm(ncdf4::ncatt_get(nc, "time", + attname = "units")$value) + time <- begining_time + lubridate::hours(time) + + # retrive lat and lon + lat.in <- ncdf4::ncvar_get(nc, "latitude") + lon.in <- ncdf4::ncvar_get(nc, "longitude") + + # generate data frame from netcdf variables and retrive units + noaa_data <- tibble::tibble(time = time) + var_units <- rep(NA, length(cf_var_names)) + for(i in 1:length(cf_var_names)){ + curr_data <- ncdf4::ncvar_get(nc, cf_var_names[i]) + noaa_data <- cbind(noaa_data, curr_data) + var_units[i] <- ncdf4::ncatt_get(nc, 
cf_var_names[i], attname = "units")$value + } + + ncdf4::nc_close(nc) + + names(noaa_data) <- c("time",cf_var_names) + + # spline-based downscaling + if(length(which(c("air_temperature", "wind_speed","specific_humidity", "air_pressure") %in% cf_var_names) == 4)){ + forecast_noaa_ds <- downscale_spline_to_hrly(df = noaa_data, VarNames = c("air_temperature", "wind_speed","specific_humidity", "air_pressure")) + }else{ + #Add error message + } + + # Convert splined SH, temperature, and presssure to RH + forecast_noaa_ds <- forecast_noaa_ds %>% + dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity, + temp = forecast_noaa_ds$air_temperature, + press = forecast_noaa_ds$air_pressure)) %>% + dplyr::mutate(relative_humidity = relative_humidity, + relative_humidity = ifelse(relative_humidity > 1, 0, relative_humidity)) + + # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period) + if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){ + LW.flux.hrly <- downscale_repeat_6hr_to_hrly(df = noaa_data, varName = "surface_downwelling_longwave_flux_in_air") + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, LW.flux.hrly, by = "time") + }else{ + #Add error message + } + + # convert precipitation to hourly (just copy 6 hourly values over past 6-hour time period) + if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){ + Precip.flux.hrly <- downscale_repeat_6hr_to_hrly(df = noaa_data, varName = "precipitation_flux") + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, Precip.flux.hrly, by = "time") + }else{ + #Add error message + } + + # convert cloud_area_fraction to hourly (just copy 6 hourly values over past 6-hour time period) + if("cloud_area_fraction" %in% cf_var_names){ + cloud_area_fraction.flux.hrly <- downscale_repeat_6hr_to_hrly(df = noaa_data, varName = "cloud_area_fraction") + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, cloud_area_fraction.flux.hrly, by = "time") + }else{ + #Add error message + } + + # use solar geometry to convert shortwave from 6 hr to 1 hr + if("surface_downwelling_shortwave_flux_in_air" %in% cf_var_names){ + ShortWave.hrly <- downscale_ShortWave_to_hrly(df = noaa_data, lat = lat.in, lon = lon.in) + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, ShortWave.hrly, by = "time") + }else{ + #Add error message + } + + #Add dummy ensemble number to work with write_noaa_gefs_netcdf() + forecast_noaa_ds$NOAA.member <- ens + + #Make sure var names are in correct order + forecast_noaa_ds <- forecast_noaa_ds %>% + dplyr::select("time", tidyselect::all_of(cf_var_names), "NOAA.member") + + #Write netCDF + noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ds, + ens = ens, + lat = lat.in, + lon = lon.in, + cf_units = var_units, + output_file = output_file, + overwrite = overwrite) + +} #temporal_downscale + +#' @title Downscale spline to hourly +#' @return A dataframe of downscaled state variables +#' @param df, dataframe of data to be downscales +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ + # -------------------------------------- + # purpose: interpolates debiased forecasts from 6-hourly to hourly + # Creator: Laura Puckett, December 16 2018 + # -------------------------------------- + # @param: df, a dataframe of debiased 6-hourly forecasts + + t0 = min(df$time) + df <- df %>% + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) + + interp.df.days <- seq(min(df$days_since_t0), 
as.numeric(max(df$days_since_t0)), 1/(24/hr)) + + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC")) + + for(Var in 1:length(VarNames)){ + curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y + noaa_data_interp <- cbind(noaa_data_interp, curr_data) + } + + names(noaa_data_interp) <- c("time",VarNames) + + return(noaa_data_interp) +} + +#' @title Downscale shortwave to hourly +#' @return A dataframe of downscaled state variables +#' +#' @param df, data frame of variables +#' @param lat, lat of site +#' @param lon, long of site +#' @return ShortWave.ds +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ + ## downscale shortwave to hourly + + t0 <- min(df$time) + df <- df %>% + dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>% + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1)) + + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) + + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + data.hrly$group_6hr <- NA + + group <- 0 + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr + group <- group + 1 + data.hrly$group_6hr[i] <- group + }else{ + data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr + data.hrly$group_6hr[i] <- group + } + } + + ShortWave.ds <- data.hrly %>% + dplyr::mutate(hour = lubridate::hour(time)) %>% + dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>% + dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry + dplyr::group_by(group_6hr) %>% + dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry + dplyr::ungroup() %>% + dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% + dplyr::select(time,surface_downwelling_shortwave_flux_in_air) + + return(ShortWave.ds) + +} + +#' Cosine of solar zenith angle +#' +#' For explanations of formulae, see http://www.itacanet.org/the-sun-as-a-source-of-energy/part-3-calculating-solar-angles/ +#' +#' @author Alexey Shiklomanov +#' @param doy Day of year +#' @param lat Latitude +#' @param lon Longitude +#' @param dt Timestep +#' @noRd +#' @param hr Hours timestep +#' @return `numeric(1)` of cosine of solar zenith angle +#' @export +cos_solar_zenith_angle <- function(doy, lat, lon, dt, hr) { + et <- equation_of_time(doy) + merid <- floor(lon / 15) * 15 + merid[merid < 0] <- merid[merid < 0] + 15 + lc <- (lon - merid) * -4/60 ## longitude correction + tz <- merid / 360 * 24 ## time zone + midbin <- 0.5 * dt / 86400 * 24 ## shift calc to middle of bin + t0 <- 12 + lc - et - tz - midbin ## solar time + h <- pi/12 * (hr - t0) ## solar hour + dec <- -23.45 * pi / 180 * cos(2 * pi * (doy + 10) / 365) ## declination + cosz <- sin(lat * pi / 180) * sin(dec) + cos(lat * pi / 180) * cos(dec) * cos(h) + cosz[cosz < 0] <- 0 + return(cosz) +} + +#' Equation of time: Eccentricity and obliquity +#' +#' For description of calculations, see 
https://en.wikipedia.org/wiki/Equation_of_time#Calculating_the_equation_of_time +#' +#' @author Alexey Shiklomanov +#' @param doy Day of year +#' @noRd +#' @return `numeric(1)` length of the solar day, in hours. + +equation_of_time <- function(doy) { + stopifnot(doy <= 366) + f <- pi / 180 * (279.5 + 0.9856 * doy) + et <- (-104.7 * sin(f) + 596.2 * sin(2 * f) + 4.3 * + sin(4 * f) - 429.3 * cos(f) - 2 * + cos(2 * f) + 19.3 * cos(3 * f)) / 3600 # equation of time -> eccentricity and obliquity + return(et) +} + +#' @title Downscale repeat to hourly +#' @return A dataframe of downscaled data +#' @param df, dataframe of data to be downscaled (Longwave) +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ + + #Get first time point + t0 <- min(df$time) + + df <- df %>% + dplyr::select("time", all_of(varName)) %>% + #Calculate time difference + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + #Shift valued back because the 6hr value represents the average over the + #previous 6hr period + dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) + + #Create new vector with all hours + interp.df.days <- seq(min(df$days_since_t0), + as.numeric(max(df$days_since_t0)), + 1 / (24 / hr)) + + #Create new data frame + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + #Join 1 hr data frame with 6 hr data frame + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + #Fill in hours + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + }else{ + data.hrly$lead_var[i] <- curr + } + } + + #Clean up data frame + data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% + dplyr::arrange(time) + + names(data.hrly) <- c("time", varName) + + return(data.hrly) +} + +#' @title Calculate potential shortwave radiation +#' @return vector of potential shortwave radiation for each doy +#' +#' @param doy, day of year in decimal +#' @param lon, longitude +#' @param lat, latitude +#' @return `numeric(1)` +#' @author Quinn Thomas +#' @noRd +#' +#' +downscale_solar_geom <- function(doy, lon, lat) { + + dt <- median(diff(doy)) * 86400 # average number of seconds in time interval + hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy + + ## calculate potential radiation + cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) + rpot <- 1366 * cosz + return(rpot) +} + +##' @title Write NOAA GEFS netCDF +##' @param df data frame of meterological variables to be written to netcdf. 
Columns +##' must start with time with the following columns in the order of `cf_units` +##' @param ens ensemble index used for subsetting df +##' @param lat latitude in degree north +##' @param lon longitude in degree east +##' @param cf_units vector of variable names in order they appear in df +##' @param output_file name, with full path, of the netcdf file that is generated +##' @param overwrite logical to overwrite existing netcdf file +##' @return NA +##' +##' @export +##' +##' @author Quinn Thomas +##' +##' + +write_noaa_gefs_netcdf <- function(df, ens = NA, lat, lon, cf_units, output_file, overwrite){ + + if(!is.na(ens)){ + data <- df + max_index <- max(which(!is.na(data$air_temperature))) + start_time <- min(data$time) + end_time <- data$time[max_index] + + data <- data %>% dplyr::select(-c("time", "NOAA.member")) + }else{ + data <- df + max_index <- max(which(!is.na(data$air_temperature))) + start_time <- min(data$time) + end_time <- data$time[max_index] + + data <- df %>% + dplyr::select(-c("time")) + } + + diff_time <- as.numeric(difftime(df$time, df$time[1])) / (60 * 60) + + cf_var_names <- names(data) + + time_dim <- ncdf4::ncdim_def(name="time", + units = paste("hours since", format(start_time, "%Y-%m-%d %H:%M")), + diff_time, #GEFS forecast starts 6 hours from start time + create_dimvar = TRUE) + lat_dim <- ncdf4::ncdim_def("latitude", "degree_north", lat, create_dimvar = TRUE) + lon_dim <- ncdf4::ncdim_def("longitude", "degree_east", lon, create_dimvar = TRUE) + + dimensions_list <- list(time_dim, lat_dim, lon_dim) + + nc_var_list <- list() + for (i in 1:length(cf_var_names)) { #Each ensemble member will have data on each variable stored in their respective file. + nc_var_list[[i]] <- ncdf4::ncvar_def(cf_var_names[i], cf_units[i], dimensions_list, missval=NaN) + } + + if (!file.exists(output_file) | overwrite) { + nc_flptr <- ncdf4::nc_create(output_file, nc_var_list, verbose = FALSE) + + #For each variable associated with that ensemble + for (j in 1:ncol(data)) { + # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble + ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], unlist(data[,j])) + } + + ncdf4::nc_close(nc_flptr) #Write to the disk/storage + } +} \ No newline at end of file diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index 1cbcb5c70a3..e4e229d5733 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -31,6 +31,7 @@ ##' @param lon site longitude in decimal degrees ##' @param site_id The unique ID given to each site. This is used as part of the file name. ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? +##' @param downscale logical, assumed True. Indicated whether data should be downscaled to hourly ##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info. ##' @param ... Other arguments, currently ignored ##' @export @@ -43,254 +44,51 @@ ##' site_id = 676) ##' } ##' -##' @author Luke Dramko +##' @author Quinn Thomas, modified by K Zarada ##' -download.NOAA_GEFS <- function(outfolder, lat.in, lon.in, site_id, start_date = Sys.time(), end_date = (as.POSIXct(start_date, tz="UTC") + lubridate::days(16)), - overwrite = FALSE, verbose = FALSE, ...) 
{ - - start_date <- as.POSIXct(start_date, tz = "UTC") - end_date <- as.POSIXct(end_date, tz = "UTC") - - #It takes about 2 hours for NOAA GEFS weather data to be posted. Therefore, if a request is made within that 2 hour window, - #we instead want to adjust the start time to the previous forecast, which is the most recent one avaliable. (For example, if - #there's a request at 7:00 a.m., the data isn't up yet, so the function grabs the data at midnight instead.) - if (abs(as.numeric(Sys.time() - start_date, units="hours")) <= 2) { - start_date = start_date - lubridate::hours(2) - end_date = end_date - lubridate::hours(2) - } - - #Date/time error checking - Checks to see if the start date is before the end date - if (start_date > end_date) { - PEcAn.logger::logger.severe("Invalid dates: end date occurs before start date") - } else if (as.numeric(end_date - start_date, units="hours") < 6) { #Done separately to produce a more helpful error message. - PEcAn.logger::logger.severe("Times not far enough appart for a forecast to fall between them. Forecasts occur every six hours; make sure start - and end dates are at least 6 hours appart.") - } - - #Set the end forecast date (default is the full 16 days) - if (end_date > start_date + lubridate::days(16)) { - end_date = start_date + lubridate::days(16) - } - - #Round the starting date/time down to the previous block of 6 hours. Adjust the time frame to match. - forecast_hour = (lubridate::hour(start_date) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - increments = as.integer(as.numeric(end_date - start_date, units = "hours") / 6) #Calculating the number of forecasts between start and end dates. - increments = increments + ((lubridate::hour(end_date) - lubridate::hour(start_date)) %/% 6) #These calculations are required to use the rnoaa package. - - end_hour = sprintf("%04d", ((forecast_hour + (increments * 6)) %% 24) * 100) #Calculating the starting hour as a string, which is required type to access the - #data via the rnoaa package - forecast_hour = sprintf("%04d", forecast_hour * 100) #Having the end date as a string is useful later, too. - - #Recreate the adjusted start and end dates. - start_date = as.POSIXct(paste0(lubridate::year(start_date), "-", lubridate::month(start_date), "-", lubridate::day(start_date), " ", - substring(forecast_hour, 1,2), ":00:00"), tz="UTC") - end_date = start_date + lubridate::hours(increments * 6) - - #Bounds date checking - #NOAA's GEFS database maintains a rolling 12 days of forecast data for access through this function. - #We do want Sys.Date() here - NOAA makes data unavaliable days at a time, not forecasts at a time. - NOAA_GEFS_Start_Date = as.POSIXct(Sys.Date(), tz="UTC") - lubridate::days(11) #Subtracting 11 days is correct, not 12. - - #Check to see if start_date is valid. This must be done after date adjustment. - if (as.POSIXct(Sys.time(), tz="UTC") < start_date || start_date < NOAA_GEFS_Start_Date) { - PEcAn.logger::logger.severe(sprintf('Start date (%s) exceeds the NOAA GEFS range (%s to %s).', - start_date, - NOAA_GEFS_Start_Date, Sys.Date())) - } - - if (lubridate::hour(start_date) > 23) { - PEcAn.logger::logger.severe(sprintf("Start time %s is not a valid time", lubridate::hour(start_date))) - } - - if (lubridate::hour(end_date) > 23) { #Done separately from the previous if statement in order to have more specific error messages. 
- PEcAn.logger::logger.severe(sprintf("End time %s is not a valid time", lubridate::hour(end_date))) - } - #End date/time error checking - - ################################################# - #NOAA variable downloading - #Uses the rnoaa package to download data - - #We want data for each of the following variables. Here, we're just getting the raw data; later, we will convert it to the - #cf standard format when relevant. - noaa_var_names = c("Temperature_height_above_ground_ens", "Pressure_surface_ens", "Relative_humidity_height_above_ground_ens", "Downward_Long-Wave_Radp_Flux_surface_6_Hour_Average_ens", - "Downward_Short-Wave_Radiation_Flux_surface_6_Hour_Average_ens", "Total_precipitation_surface_6_Hour_Accumulation_ens", - "u-component_of_wind_height_above_ground_ens", "v-component_of_wind_height_above_ground_ens") - - #These are the cf standard names - cf_var_names = c("air_temperature", "air_pressure", "specific_humidity", "surface_downwelling_longwave_flux_in_air", - "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "eastward_wind", "northward_wind") - cf_var_units = c("K", "Pa", "1", "Wm-2", "Wm-2", "kgm-2s-1", "ms-1", "ms-1") #Negative numbers indicate negative exponents - - # This debugging loop allows you to check if the cf variables are correctly mapped to the equivalent - # NOAA variable names. This is very important, as much of the processing below will be erroneous if - # these fail to match up. - # for (i in 1:length(cf_var_names)) { - # print(sprintf("cf / noaa : %s / %s", cf_var_names[[i]], noaa_var_names[[i]])) - #} - - noaa_data = list() - - #Downloading the data here. It is stored in a matrix, where columns represent time in intervals of 6 hours, and rows represent - #each ensemble member. Each variable gets its own matrix, which is stored in the list noaa_data. - for (i in 1:length(noaa_var_names)) { - noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments + 1), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data - } - - - ### ERROR CHECK FOR FIRST TIME POINT ### - - #Check if first time point is present, if not, grab from previous forecast - - index <- which(lengths(noaa_data) == increments * 21) #finding which ones are missing first point - new_start = start_date - lubridate::hours(6) #grab previous forecast - new_forecast_hour = (lubridate::hour(new_start) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - new_forecast_hour = sprintf("%04d", new_forecast_hour * 100) #Having the end date as a string is useful later, too. - - filled_noaa_data = list() - - for (i in index) { - filled_noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1, forecast_time = new_forecast_hour, date=format(new_start, "%Y%m%d"))$data - } - - #add filled data into first slot of forecast - for(i in index){ - noaa_data[[i]] = cbind(filled_noaa_data[[i]], noaa_data[[i]])} - - - #Fills in data with NaNs if there happens to be missing columns. - for (i in 1:length(noaa_var_names)) { - if (!is.null(ncol(noaa_data[[i]]))) { # Is a matrix - nans <- rep(NaN, nrow(noaa_data[[i]])) - while (ncol(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- cbind(noaa_data[[i]], nans) - } - } else { # Is a vector - while (length(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- c(noaa_data[[i]], NaN); - } - } - } - - ################################################### - # Not all NOAA data units match the cf data standard. 
In this next section, data are processed to - # confirm with the standard when necessary. - # The following is a list of variables which need to be processed: - # 1. NOAA's relative humidity must be converted to specific humidity - # 2. NOAA's measure of precipitation is the accumulation over 6 hours; cf's standard is precipitation per second - - #Convert NOAA's relative humidity to specific humidity - humid_index = which(cf_var_names == "specific_humidity") - - #Temperature, pressure, and relative humidity are required to calculate specific humidity. - humid_data = noaa_data[[humid_index]] - temperature_data = noaa_data[[which(cf_var_names == "air_temperature")]] - pressure_data = noaa_data[[which(cf_var_names == "air_pressure")]] - - #Depending on the volume and dimensions of data you download, sometimes R stores it as a vector and sometimes - #as a matrix; the different cases must be processed with different loops. - #(The specific corner case in which a vector would be generated is if only one hour is requested; for example, - #only the data at time_idx 1, for example). - if (as.logical(nrow(humid_data))) { - for (i in 1:length(humid_data)) { - humid_data[i] = PEcAn.data.atmosphere::rh2qair(humid_data[i], temperature_data[i], pressure_data[i]) - } - } else { - for (i in 1:nrow(humid_data)) { - for (j in 1:ncol(humid_data)) { - humid_data[i,j] = PEcAn.data.atmosphere::rh2qair(humid_data[i,j], temperature_data[i,j], pressure_data[i,j]) - } - } - } - - #Update the noaa_data list with the correct data - noaa_data[[humid_index]] <- humid_data - - # Convert NOAA's total precipitation (kg m-2) to precipitation flux (kg m-2 s-1) - #NOAA precipitation data is an accumulation over 6 hours. - precip_index = which(cf_var_names == "precipitation_flux") - - #The statement udunits2::ud.convert(1, "kg m-2 6 hr-1", "kg m-2 s-1") is equivalent to udunits2::ud.convert(1, "kg m-2 hr-1", "kg m-2 s-1") * 6, - #which is a little unintuitive. What will do the conversion we want is what's below: - noaa_data[[precip_index]] = udunits2::ud.convert(noaa_data[[precip_index]], "kg m-2 hr-1", "kg m-2 6 s-1") #There are 21600 seconds in 6 hours - - ############################################# - # Done with data processing. Now writing the data to the specified directory. Each ensemble member is written to its own file, for a total - # of 21 files. - if (!dir.exists(outfolder)) { - dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) - } - - # Create a data frame with information about the file. This data frame's format is an internal PEcAn standard, and is stored in the BETY database to - # locate the data file. The data file is stored on the local machine where the download occured. Because NOAA GEFS is an - # ensemble of 21 different forecast models, each model gets its own data frame. All of the information is the same for - # each file except for the file name. - results = data.frame( - file = "", #Path to the file (added in loop below). - host = PEcAn.remote::fqdn(), #Name of the server where the file is stored - mimetype = "application/x-netcdf", #Format the data is saved in - formatname = "CF Meteorology", #Type of data - startdate = paste0(format(start_date, "%Y-%m-%dT%H:%M:00")), #starting date and time, down to the second - enddate = paste0(format(end_date, "%Y-%m-%dT%H:%M:00")), #ending date and time, down to the second - dbfile.name = "NOAA_GEFS", #Source of data (ensemble number will be added later) - stringsAsFactors = FALSE - ) - - results_list = list() - - #Each ensemble gets its own file. 
- #These dimensions will be used for all 21 ncdf4 file members, so they're all declared once here. - #The data is really one-dimensional for each file (though we include lattitude and longitude dimensions - #to comply with the PEcAn standard). - time_dim = ncdf4::ncdim_def(name="time", - paste(units="hours since", format(start_date, "%Y-%m-%dT%H:%M")), - seq(0, 6 * increments, by = 6), - create_dimvar = TRUE) - lat_dim = ncdf4::ncdim_def("latitude", "degree_north", lat.in, create_dimvar = TRUE) - lon_dim = ncdf4::ncdim_def("longitude", "degree_east", lon.in, create_dimvar = TRUE) - - dimensions_list = list(time_dim, lat_dim, lon_dim) - - nc_var_list = list() - for (i in 1:length(cf_var_names)) { #Each ensemble member will have data on each variable stored in their respective file. - nc_var_list[[i]] = ncdf4::ncvar_def(cf_var_names[i], cf_var_units[i], dimensions_list, missval=NaN) - } - - #For each ensemble - for (i in 1:21) { # i is the ensemble number - #Generating a unique identifier string that characterizes a particular data set. - identifier = paste("NOAA_GEFS", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), - format(end_date, "%Y-%m-%dT%H:%M"), sep="_") - - ensemble_folder = file.path(outfolder, identifier) - #Each file will go in its own folder. - if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} - - flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) - - #Each ensemble member gets its own unique data frame, which is stored in results_list - #Object references in R work differently than in other languages. When adding an item to a list, R creates a copy of it - #for you instead of just inserting the object reference, so this works. - results$file <- flname - results$dbfile.name <- flname - results_list[[i]] <- results - - if (!file.exists(flname) | overwrite) { - nc_flptr = ncdf4::nc_create(flname, nc_var_list, verbose=verbose) - - #For each variable associated with that ensemble - for (j in 1:length(cf_var_names)) { - # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble - ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], noaa_data[[j]][i,]) - } - - ncdf4::nc_close(nc_flptr) #Write to the disk/storage - } else { - PEcAn.logger::logger.info(paste0("The file ", flname, " already exists. 
It was not overwritten.")) - } - - } - - return(results_list) -} #download.NOAA_GEFS \ No newline at end of file +download.NOAA_GEFS <- function(site_id, + sitename = NULL, + username = 'pecan', + lat.in, + lon.in, + outfolder, + start_date= Sys.Date(), + end_date = start_date + lubridate::days(16), + downscale = TRUE, + overwrite = FALSE){ + + forecast_date = as.Date(start_date) + forecast_time = (lubridate::hour(start_date) %/% 6)*6 + + end_hr = (as.numeric(difftime(end_date, start_date, units = 'hours')) %/% 6)*6 + + model_name <- "NOAAGEFS_6hr" + model_name_ds <-"NOAAGEFS_1hr" #Downscaled NOAA GEFS + model_name_raw <- "NOAAGEFS_raw" + + PEcAn.logger::logger.info(paste0("Downloading GEFS for site ", site_id, " for ", start_date)) + + PEcAn.logger::logger.info(paste0("Overwrite existing files: ", overwrite)) + + + PEcAn.data.atmosphere::noaa_grid_download(lat_list = lat.in, + lon_list = lon.in, + end_hr = end_hr, + forecast_time = forecast_time, + forecast_date = forecast_date, + model_name_raw = model_name_raw, + output_directory = outfolder) + + results <- PEcAn.data.atmosphere::process_gridded_noaa_download(lat_list = lat.in, + lon_list = lon.in, + site_id = site_id, + downscale = downscale, + overwrite = overwrite, + forecast_date = forecast_date, + forecast_time = forecast_time, + model_name = model_name, + model_name_ds = model_name_ds, + model_name_raw = model_name_raw, + output_directory = outfolder) + return(results) +} diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index c6e313362ec..8a6c43e20e8 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -5,33 +5,36 @@ \title{Download NOAA GEFS Weather Data} \usage{ download.NOAA_GEFS( - outfolder, + site_id, + sitename = NULL, + username = "pecan", lat.in, lon.in, - site_id, - start_date = Sys.time(), - end_date = (as.POSIXct(start_date, tz = "UTC") + lubridate::days(16)), - overwrite = FALSE, - verbose = FALSE, - ... + outfolder, + start_date = Sys.Date(), + end_date = start_date + lubridate::days(16), + downscale = TRUE, + overwrite = FALSE ) } \arguments{ -\item{outfolder}{Directory where results should be written} - \item{site_id}{The unique ID given to each site. This is used as part of the file name.} -\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} +\item{outfolder}{Directory where results should be written} -\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} +\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} -\item{verbose}{logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.} +\item{downscale}{logical, assumed True. Indicated whether data should be downscaled to hourly} -\item{...}{Other arguments, currently ignored} +\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} \item{lat}{site latitude in decimal degrees} \item{lon}{site longitude in decimal degrees} + +\item{verbose}{logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.} + +\item{...}{Other arguments, currently ignored} } \value{ A list of data frames is returned containing information about the data file that can be used to locate it later. 
Each @@ -82,5 +85,5 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. https://www.ncdc.noaa.gov/crn/measurements.html } \author{ -Luke Dramko +Quinn Thomas, modified by K Zarada } diff --git a/modules/data.atmosphere/man/noaa_grid_download.Rd b/modules/data.atmosphere/man/noaa_grid_download.Rd new file mode 100644 index 00000000000..e52b605abc6 --- /dev/null +++ b/modules/data.atmosphere/man/noaa_grid_download.Rd @@ -0,0 +1,25 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/GEFS_helper_functions.R +\name{noaa_grid_download} +\alias{noaa_grid_download} +\title{Download gridded forecast in the box bounded by the latitude and longitude list} +\usage{ +noaa_grid_download( + lat_list, + lon_list, + forecast_time, + forecast_date, + model_name_raw, + output_directory, + end_hr +) +} +\arguments{ +\item{output_directory}{} +} +\value{ + +} +\description{ +Download gridded forecast in the box bounded by the latitude and longitude list +} diff --git a/modules/data.atmosphere/man/process_gridded_noaa_download.Rd b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd new file mode 100644 index 00000000000..e8f2d6113f0 --- /dev/null +++ b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd @@ -0,0 +1,32 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/GEFS_helper_functions.R +\name{process_gridded_noaa_download} +\alias{process_gridded_noaa_download} +\title{Extract and temporally downscale points from downloaded grid files} +\usage{ +process_gridded_noaa_download( + lat_list, + lon_list, + site_id, + downscale, + overwrite, + forecast_date, + forecast_time, + model_name, + model_name_ds, + model_name_raw, + output_directory +) +} +\arguments{ +\item{output_directory}{} +} +\value{ + +} +\description{ +Extract and temporally downscale points from downloaded grid files +} +\examples{ + +} diff --git a/modules/data.atmosphere/man/temporal_downscale.Rd b/modules/data.atmosphere/man/temporal_downscale.Rd new file mode 100644 index 00000000000..9ed0999227e --- /dev/null +++ b/modules/data.atmosphere/man/temporal_downscale.Rd @@ -0,0 +1,26 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/GEFS_helper_functions.R +\name{temporal_downscale} +\alias{temporal_downscale} +\title{Downscale NOAA GEFS frin 6hr to 1hr} +\usage{ +temporal_downscale(input_file, output_file, overwrite = TRUE, hr = 1) +} +\arguments{ +\item{input_file, }{full path to 6hr file} + +\item{output_file, }{full path to 1hr file that will be generated} + +\item{overwrite, }{logical stating to overwrite any existing output_file} + +\item{hr}{time step in hours of temporal downscaling (default = 1)} +} +\value{ +None +} +\description{ +Downscale NOAA GEFS frin 6hr to 1hr +} +\author{ +Quinn Thomas +} diff --git a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd new file mode 100644 index 00000000000..dd8c10ca76e --- /dev/null +++ b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd @@ -0,0 +1,41 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/GEFS_helper_functions.R +\name{write_noaa_gefs_netcdf} +\alias{write_noaa_gefs_netcdf} +\title{Write NOAA GEFS netCDF} +\usage{ +write_noaa_gefs_netcdf( + df, + ens = NA, + lat, + lon, + cf_units, + output_file, + overwrite +) +} +\arguments{ +\item{df}{data frame of meterological variables to be written to netcdf. 
Columns +must start with time with the following columns in the order of `cf_units`} + +\item{ens}{ensemble index used for subsetting df} + +\item{lat}{latitude in degree north} + +\item{lon}{longitude in degree east} + +\item{cf_units}{vector of variable names in order they appear in df} + +\item{output_file}{name, with full path, of the netcdf file that is generated} + +\item{overwrite}{logical to overwrite existing netcdf file} +} +\value{ +NA +} +\description{ +Write NOAA GEFS netCDF +} +\author{ +Quinn Thomas +} From fedbc62cf60c7b29ce1566b19156b4232c6367f6 Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 20 Nov 2020 10:35:02 -0500 Subject: [PATCH 1671/2289] consolidating downscale functions and un-exporting helper functions --- modules/data.atmosphere/NAMESPACE | 7 +- .../data.atmosphere/R/GEFS_helper_functions.R | 227 +----------------- .../data.atmosphere/R/download.NOAA_GEFS.R | 36 +-- .../R/downscale_ShortWave_to_hrly.R | 56 ----- .../R/downscale_repeat_6hr_to_hrly.R | 28 --- .../R/downscale_spline_to_hourly.R | 56 ----- .../R/downscaling_helper_functions.R | 166 +++++++++++++ .../inst/registration/register.NOAA_GEFS.xml | 2 +- .../man/downscale_ShortWave_to_hrly.Rd | 37 --- .../man/downscale_repeat_6hr_to_hrly.Rd | 20 -- .../man/downscale_spline_to_hourly.Rd | 22 -- .../data.atmosphere/man/temporal_downscale.Rd | 4 +- 12 files changed, 197 insertions(+), 464 deletions(-) delete mode 100644 modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R delete mode 100644 modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R delete mode 100644 modules/data.atmosphere/R/downscale_spline_to_hourly.R create mode 100644 modules/data.atmosphere/R/downscaling_helper_functions.R delete mode 100644 modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd delete mode 100644 modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd delete mode 100644 modules/data.atmosphere/man/downscale_spline_to_hourly.Rd diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 22787f6ceb7..eaf37a76a8e 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -39,7 +39,8 @@ export(download.US_WCr) export(download.US_Wlef) export(downscale_ShortWave_to_hrly) export(downscale_repeat_6hr_to_hrly) -export(downscale_spline_to_hourly) +export(downscale_solar_geom) +export(downscale_spline_to_hrly) export(equation_of_time) export(exner) export(extract.local.CMIP5) @@ -76,12 +77,10 @@ export(metgapfill) export(metgapfill.NOAA_GEFS) export(model.train) export(nc.merge) -export(noaa_grid_download) export(par2ppfd) export(pecan_standard_met_table) export(permute.nc) export(predict_subdaily_met) -export(process_gridded_noaa_download) export(qair2rh) export(read.register) export(rh2qair) @@ -97,10 +96,8 @@ export(subdaily_pred) export(sw2par) export(sw2ppfd) export(temporal.downscale.functions) -export(temporal_downscale) export(upscale_met) export(wide2long) -export(write_noaa_gefs_netcdf) import(dplyr) importFrom(magrittr,"%>%") importFrom(rgdal,checkCRSArgs) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 336ef9235c7..8eb3bc5e738 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -9,13 +9,12 @@ #' @param output_directory #' #' @return -#' @export #' #' @examples -noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date ,model_name_raw, output_directory, end_hr) { +noaa_grid_download <- 
function(lat_list, lon_list, forecast_time, forecast_date, model_name_raw, output_directory, end_hr) { - download_neon_grid <- function(ens_index, location, directory, hours_char, cycle, base_filename1, vars,working_directory){ + download_grid <- function(ens_index, location, directory, hours_char, cycle, base_filename1, vars,working_directory){ #for(j in 1:31){ if(ens_index == 1){ base_filename2 <- paste0("gec00",".t",cycle,"z.pgrb2a.0p50.f") @@ -34,7 +33,7 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date for(i in 1:length(curr_hours)){ file_name <- paste0(base_filename2, curr_hours[i]) - destfile <- paste0(working_directory,"/", file_name,".neon.grib") + destfile <- paste0(working_directory,"/", file_name,".grib") if(file.exists(destfile)){ @@ -160,7 +159,7 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date ens_index <- 1:31 parallel::mclapply(X = ens_index, - FUN = download_neon_grid, + FUN = download_grid, location, directory, hours_char, @@ -188,7 +187,6 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date #' @param output_directory #' #' @return -#' @export #' #' @examples #' @@ -239,8 +237,8 @@ process_gridded_noaa_download <- function(lat_list, for(hr in 1:length(curr_hours)){ file_name <- paste0(base_filename2, curr_hours[hr]) - if(file.exists(paste0(working_directory,"/", file_name,".neon.grib"))){ - grib <- rgdal::readGDAL(paste0(working_directory,"/", file_name,".neon.grib"), silent = TRUE) + if(file.exists(paste0(working_directory,"/", file_name,".grib"))){ + grib <- rgdal::readGDAL(paste0(working_directory,"/", file_name,".grib"), silent = TRUE) lat_lon <- sp::coordinates(grib) for(s in 1:length(site_id)){ @@ -511,14 +509,13 @@ process_gridded_noaa_download <- function(lat_list, return(results_list) } #process_gridded_noaa_download -#' @title Downscale NOAA GEFS frin 6hr to 1hr +#' @title Downscale NOAA GEFS from 6hr to 1hr #' @return None #' #' @param input_file, full path to 6hr file #' @param output_file, full path to 1hr file that will be generated #' @param overwrite, logical stating to overwrite any existing output_file #' @param hr time step in hours of temporal downscaling (default = 1) -#' @export #' #' @author Quinn Thomas #' @@ -619,7 +616,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1 dplyr::select("time", tidyselect::all_of(cf_var_names), "NOAA.member") #Write netCDF - noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ds, + write_noaa_gefs_netcdf(df = forecast_noaa_ds, ens = ens, lat = lat.in, lon = lon.in, @@ -629,214 +626,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1 } #temporal_downscale -#' @title Downscale spline to hourly -#' @return A dataframe of downscaled state variables -#' @param df, dataframe of data to be downscales -#' @noRd -#' @author Laura Puckett -#' -#' - -downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ - # -------------------------------------- - # purpose: interpolates debiased forecasts from 6-hourly to hourly - # Creator: Laura Puckett, December 16 2018 - # -------------------------------------- - # @param: df, a dataframe of debiased 6-hourly forecasts - - t0 = min(df$time) - df <- df %>% - dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) - - interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) - - noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = 
"UTC")) - - for(Var in 1:length(VarNames)){ - curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y - noaa_data_interp <- cbind(noaa_data_interp, curr_data) - } - - names(noaa_data_interp) <- c("time",VarNames) - - return(noaa_data_interp) -} - -#' @title Downscale shortwave to hourly -#' @return A dataframe of downscaled state variables -#' -#' @param df, data frame of variables -#' @param lat, lat of site -#' @param lon, long of site -#' @return ShortWave.ds -#' @noRd -#' @author Laura Puckett -#' -#' - -downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ - ## downscale shortwave to hourly - - t0 <- min(df$time) - df <- df %>% - dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>% - dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% - dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1)) - - interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) - - noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) - - data.hrly <- noaa_data_interp %>% - dplyr::left_join(df, by = "time") - - data.hrly$group_6hr <- NA - - group <- 0 - for(i in 1:nrow(data.hrly)){ - if(!is.na(data.hrly$lead_var[i])){ - curr <- data.hrly$lead_var[i] - data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr - group <- group + 1 - data.hrly$group_6hr[i] <- group - }else{ - data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr - data.hrly$group_6hr[i] <- group - } - } - - ShortWave.ds <- data.hrly %>% - dplyr::mutate(hour = lubridate::hour(time)) %>% - dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>% - dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry - dplyr::group_by(group_6hr) %>% - dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry - dplyr::ungroup() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% - dplyr::select(time,surface_downwelling_shortwave_flux_in_air) - - return(ShortWave.ds) - -} - -#' Cosine of solar zenith angle -#' -#' For explanations of formulae, see http://www.itacanet.org/the-sun-as-a-source-of-energy/part-3-calculating-solar-angles/ -#' -#' @author Alexey Shiklomanov -#' @param doy Day of year -#' @param lat Latitude -#' @param lon Longitude -#' @param dt Timestep -#' @noRd -#' @param hr Hours timestep -#' @return `numeric(1)` of cosine of solar zenith angle -#' @export -cos_solar_zenith_angle <- function(doy, lat, lon, dt, hr) { - et <- equation_of_time(doy) - merid <- floor(lon / 15) * 15 - merid[merid < 0] <- merid[merid < 0] + 15 - lc <- (lon - merid) * -4/60 ## longitude correction - tz <- merid / 360 * 24 ## time zone - midbin <- 0.5 * dt / 86400 * 24 ## shift calc to middle of bin - t0 <- 12 + lc - et - tz - midbin ## solar time - h <- pi/12 * (hr - t0) ## solar hour - dec <- -23.45 * pi / 180 * cos(2 * pi * (doy + 10) / 365) ## declination - cosz <- sin(lat * pi / 180) * sin(dec) + cos(lat * pi / 180) * cos(dec) * cos(h) - cosz[cosz < 0] <- 0 - return(cosz) -} - -#' Equation of time: Eccentricity and obliquity -#' -#' For description of calculations, see https://en.wikipedia.org/wiki/Equation_of_time#Calculating_the_equation_of_time -#' -#' @author Alexey Shiklomanov -#' @param doy Day of year -#' @noRd -#' @return `numeric(1)` length of 
the solar day, in hours. - -equation_of_time <- function(doy) { - stopifnot(doy <= 366) - f <- pi / 180 * (279.5 + 0.9856 * doy) - et <- (-104.7 * sin(f) + 596.2 * sin(2 * f) + 4.3 * - sin(4 * f) - 429.3 * cos(f) - 2 * - cos(2 * f) + 19.3 * cos(3 * f)) / 3600 # equation of time -> eccentricity and obliquity - return(et) -} - -#' @title Downscale repeat to hourly -#' @return A dataframe of downscaled data -#' @param df, dataframe of data to be downscaled (Longwave) -#' @noRd -#' @author Laura Puckett -#' -#' - -downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ - - #Get first time point - t0 <- min(df$time) - - df <- df %>% - dplyr::select("time", all_of(varName)) %>% - #Calculate time difference - dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% - #Shift valued back because the 6hr value represents the average over the - #previous 6hr period - dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) - - #Create new vector with all hours - interp.df.days <- seq(min(df$days_since_t0), - as.numeric(max(df$days_since_t0)), - 1 / (24 / hr)) - - #Create new data frame - noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) - - #Join 1 hr data frame with 6 hr data frame - data.hrly <- noaa_data_interp %>% - dplyr::left_join(df, by = "time") - - #Fill in hours - for(i in 1:nrow(data.hrly)){ - if(!is.na(data.hrly$lead_var[i])){ - curr <- data.hrly$lead_var[i] - }else{ - data.hrly$lead_var[i] <- curr - } - } - - #Clean up data frame - data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% - dplyr::arrange(time) - - names(data.hrly) <- c("time", varName) - - return(data.hrly) -} -#' @title Calculate potential shortwave radiation -#' @return vector of potential shortwave radiation for each doy -#' -#' @param doy, day of year in decimal -#' @param lon, longitude -#' @param lat, latitude -#' @return `numeric(1)` -#' @author Quinn Thomas -#' @noRd -#' -#' -downscale_solar_geom <- function(doy, lon, lat) { - - dt <- median(diff(doy)) * 86400 # average number of seconds in time interval - hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy - - ## calculate potential radiation - cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) - rpot <- 1366 * cosz - return(rpot) -} ##' @title Write NOAA GEFS netCDF ##' @param df data frame of meterological variables to be written to netcdf. 
Columns @@ -849,7 +639,6 @@ downscale_solar_geom <- function(doy, lon, lat) { ##' @param overwrite logical to overwrite existing netcdf file ##' @return NA ##' -##' @export ##' ##' @author Quinn Thomas ##' diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index e4e229d5733..1fcbce2df63 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -71,24 +71,24 @@ download.NOAA_GEFS <- function(site_id, PEcAn.logger::logger.info(paste0("Overwrite existing files: ", overwrite)) - PEcAn.data.atmosphere::noaa_grid_download(lat_list = lat.in, - lon_list = lon.in, - end_hr = end_hr, - forecast_time = forecast_time, - forecast_date = forecast_date, - model_name_raw = model_name_raw, - output_directory = outfolder) + noaa_grid_download(lat_list = lat.in, + lon_list = lon.in, + end_hr = end_hr, + forecast_time = forecast_time, + forecast_date = forecast_date, + model_name_raw = model_name_raw, + output_directory = outfolder) - results <- PEcAn.data.atmosphere::process_gridded_noaa_download(lat_list = lat.in, - lon_list = lon.in, - site_id = site_id, - downscale = downscale, - overwrite = overwrite, - forecast_date = forecast_date, - forecast_time = forecast_time, - model_name = model_name, - model_name_ds = model_name_ds, - model_name_raw = model_name_raw, - output_directory = outfolder) + results <- process_gridded_noaa_download(lat_list = lat.in, + lon_list = lon.in, + site_id = site_id, + downscale = downscale, + overwrite = overwrite, + forecast_date = forecast_date, + forecast_time = forecast_time, + model_name = model_name, + model_name_ds = model_name_ds, + model_name_raw = model_name_raw, + output_directory = outfolder) return(results) } diff --git a/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R b/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R deleted file mode 100644 index b5f146fba0c..00000000000 --- a/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R +++ /dev/null @@ -1,56 +0,0 @@ -##' @title Downscale shortwave to hourly -##' @return A dataframe of downscaled state variables -##' -##' @param debiased, data frame of variables -##' @param time0, first timestep -##' @param time_end, last time step -##' @param lat, lat of site -##' @param lon, long of site -##' @param output_tz, output timezone -##' @export -##' -##' @author Laura Puckett -##' -##' - -downscale_ShortWave_to_hrly <- function(debiased, time0, time_end, lat, lon, output_tz = "UTC"){ - ## downscale shortwave to hourly - - - downscale_solar_geom <- function(doy, lon, lat) { - - dt <- median(diff(doy)) * 86400 # average number of seconds in time interval - hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy - - ## calculate potential radiation - cosz <- PEcAn.data.atmosphere::cos_solar_zenith_angle(doy, lat, lon, dt, hr) - rpot <- 1366 * cosz - return(rpot) - } - grouping = append("NOAA.member", "timestamp") - - surface_downwelling_shortwave_flux_in_air<- rep(debiased$surface_downwelling_shortwave_flux_in_air, each = 6) - time = rep(seq(from = as.POSIXct(time0, tz = output_tz), to = as.POSIXct(time_end + lubridate::hours(5), tz = output_tz), by = 'hour'), times = 21) - - ShortWave.hours <- as.data.frame(surface_downwelling_shortwave_flux_in_air) - ShortWave.hours$timestamp = time - ShortWave.hours$NOAA.member = rep(debiased$NOAA.member, each = 6) - ShortWave.hours$hour = as.numeric(format(time, "%H")) - ShortWave.hours$group = as.numeric(as.factor(format(ShortWave.hours$time, 
"%d"))) - - - - ShortWave.ds <- ShortWave.hours %>% - dplyr::mutate(doy = lubridate::yday(timestamp) + hour/24) %>% - dplyr::mutate(rpot = downscale_solar_geom(doy, lon, lat)) %>% # hourly sw flux calculated using solar geometry - dplyr::group_by_at(c("group", "NOAA.member")) %>% - dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry - dplyr::ungroup() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% - dplyr::select(timestamp, NOAA.member, surface_downwelling_shortwave_flux_in_air) %>% - dplyr::filter(timestamp >= min(debiased$timestamp) & timestamp <= max(debiased$timestamp)) - - -} - - diff --git a/modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R b/modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R deleted file mode 100644 index c55dbc2b99c..00000000000 --- a/modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R +++ /dev/null @@ -1,28 +0,0 @@ -##' @title Downscale repeat to hourly -##' @return A dataframe of downscaled data -##' -##' @param data.6hr, dataframe of data to be downscaled (Longwave) -##' @export -##' -##' @author Laura Puckett -##' -##' - -downscale_repeat_6hr_to_hrly <- function(data.6hr){ - data.hrly = data.6hr %>% - dplyr::group_by_all() %>% - tidyr::expand(timestamp = c(timestamp, - timestamp + lubridate::hours(1), - timestamp + lubridate::hours(2), - timestamp + lubridate::hours(3), - timestamp + lubridate::hours(4), - timestamp + lubridate::hours(5), - timestamp + lubridate::hours(6))) %>% - dplyr::ungroup() %>% - dplyr::mutate(timestamp = lubridate::as_datetime(timestamp, tz = "UTC")) %>% - dplyr::filter(timestamp >= min(data.6hr$timestamp) & timestamp <= max(data.6hr$timestamp)) %>% - dplyr::distinct() - - #arrange(timestamp) -return(data.hrly) -} diff --git a/modules/data.atmosphere/R/downscale_spline_to_hourly.R b/modules/data.atmosphere/R/downscale_spline_to_hourly.R deleted file mode 100644 index 34ffabcb331..00000000000 --- a/modules/data.atmosphere/R/downscale_spline_to_hourly.R +++ /dev/null @@ -1,56 +0,0 @@ -##' @title Downscale spline to hourly -##' @return A dataframe of downscaled state variables -##' -##' @param df, dataframe of data to be downscales -##' @param VarNamesStates, names of vars that are state variables -##' @export -##' -##' @author Laura Puckett -##' -##' - -downscale_spline_to_hourly <- function(df,VarNamesStates){ - # -------------------------------------- - # purpose: interpolates debiased forecasts from 6-hourly to hourly - # Creator: Laura Puckett, December 16 2018 - # -------------------------------------- - # @param: df, a dataframe of debiased 6-hourly forecasts - - interpolate <- function(jday, var){ - result <- splinefun(jday, var, method = "monoH.FC") - return(result(seq(min(as.numeric(jday)), max(as.numeric(jday)), 1/24))) - } - - - - t0 = min(df$timestamp) - df <- df %>% - dplyr::mutate(days_since_t0 = difftime(.$timestamp, t0, units = "days")) - - if("dscale.member" %in% colnames(df)){ - by.ens <- df %>% - dplyr::group_by(NOAA.member, dscale.member) - }else{ - by.ens <- df %>% - dplyr::group_by(NOAA.member) - } - - interp.df.days <- by.ens %>% dplyr::do(days = seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/24)) - interp.df <- interp.df.days - - for(Var in 1:length(VarNamesStates)){ - assign(paste0("interp.df.",VarNamesStates[Var]), dplyr::do(by.ens, var = interpolate(.$days_since_t0,unlist(.[,VarNamesStates[Var]]))) %>% dplyr::rename(!!VarNamesStates[Var] 
:= "var"))
-    if("dscale.member" %in% colnames(df)){
-      interp.df <- dplyr::inner_join(interp.df, get(paste0("interp.df.",VarNamesStates[Var])), by = c("NOAA.member", "dscale.member"))
-    }else{
-      interp.df <- dplyr::inner_join(interp.df, get(paste0("interp.df.",VarNamesStates[Var])), by = c("NOAA.member"))
-    }
-  }
-  
-  # converting from time difference back to timestamp
-  interp.df = interp.df %>%
-    tidyr::unnest() %>%
-    dplyr::mutate(timestamp = lubridate::as_datetime(t0 + days, tz = attributes(t0)$tzone))
-  return(interp.df)
-}
-
diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R
new file mode 100644
index 00000000000..0d318d4ecd8
--- /dev/null
+++ b/modules/data.atmosphere/R/downscaling_helper_functions.R
@@ -0,0 +1,166 @@
+#' @title Downscale spline to hourly
+#' @return A dataframe of downscaled state variables
+#' @param df, dataframe of data to be downscaled
+#' @param VarNames, names of the state variables to interpolate
+#' @param hr, time step in hours of the downscaled output (default = 1)
+#' @noRd
+#' @author Laura Puckett
+#' @export
+#'
+
+downscale_spline_to_hrly <- function(df,VarNames, hr = 1){
+  # --------------------------------------
+  # purpose: interpolates debiased forecasts from 6-hourly to hourly
+  # Creator: Laura Puckett, December 16 2018
+  # --------------------------------------
+  # @param: df, a dataframe of debiased 6-hourly forecasts
+  
+  t0 = min(df$time)
+  df <- df %>%
+    dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days"))
+  
+  interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr))
+  
+  noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC"))
+  
+  for(Var in 1:length(VarNames)){
+    curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y
+    noaa_data_interp <- cbind(noaa_data_interp, curr_data)
+  }
+  
+  names(noaa_data_interp) <- c("time",VarNames)
+  
+  return(noaa_data_interp)
+}
+
+#' @title Downscale shortwave to hourly
+#' @return A dataframe of downscaled state variables
+#'
+#' @param df, data frame of variables
+#' @param lat, lat of site
+#' @param lon, long of site
+#' @param hr, time step in hours of the downscaled output (default = 1)
+#' @noRd
+#' @author Laura Puckett
+#' @export
+#'
+#'
+
+downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){
+  ## downscale shortwave to hourly
+  
+  t0 <- min(df$time)
+  df <- df %>%
+    dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>%
+    dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>%
+    dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1))
+  
+  interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr))
+  
+  noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days))
+  
+  data.hrly <- noaa_data_interp %>%
+    dplyr::left_join(df, by = "time")
+  
+  data.hrly$group_6hr <- NA
+  
+  group <- 0
+  for(i in 1:nrow(data.hrly)){
+    if(!is.na(data.hrly$lead_var[i])){
+      curr <- data.hrly$lead_var[i]
+      data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr
+      group <- group + 1
+      data.hrly$group_6hr[i] <- group
+    }else{
+      data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr
+      data.hrly$group_6hr[i] <- group
+    }
+  }
+  
+  ShortWave.ds <- data.hrly %>%
+    dplyr::mutate(hour = lubridate::hour(time)) %>%
+    dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>%
+    dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry
+    dplyr::group_by(group_6hr) %>%
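+    # avg.rpot below is the mean potential radiation over each 6-h window; the
+    # observed 6-h flux is then spread across the hours of that window in
+    # proportion to rpot/avg.rpot, preserving the 6-h mean while restoring a
+    # diurnal shape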
dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry + dplyr::ungroup() %>% + dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% + dplyr::select(time,surface_downwelling_shortwave_flux_in_air) + + return(ShortWave.ds) + +} + + +#' @title Downscale repeat to hourly +#' @return A dataframe of downscaled data +#' @param df, dataframe of data to be downscaled (Longwave) +#' @noRd +#' @author Laura Puckett +#' @export +#' + +downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ + + #Get first time point + t0 <- min(df$time) + + df <- df %>% + dplyr::select("time", all_of(varName)) %>% + #Calculate time difference + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + #Shift valued back because the 6hr value represents the average over the + #previous 6hr period + dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) + + #Create new vector with all hours + interp.df.days <- seq(min(df$days_since_t0), + as.numeric(max(df$days_since_t0)), + 1 / (24 / hr)) + + #Create new data frame + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + #Join 1 hr data frame with 6 hr data frame + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + #Fill in hours + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + }else{ + data.hrly$lead_var[i] <- curr + } + } + + #Clean up data frame + data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% + dplyr::arrange(time) + + names(data.hrly) <- c("time", varName) + + return(data.hrly) +} + + + +#' @title Calculate potential shortwave radiation +#' @return vector of potential shortwave radiation for each doy +#' +#' @param doy, day of year in decimal +#' @param lon, longitude +#' @param lat, latitude +#' @return `numeric(1)` +#' @author Quinn Thomas +#' @noRd +#' @export +#' +downscale_solar_geom <- function(doy, lon, lat) { + + dt <- median(diff(doy)) * 86400 # average number of seconds in time interval + hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy + + ## calculate potential radiation + cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) + rpot <- 1366 * cosz + return(rpot) +} \ No newline at end of file diff --git a/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml b/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml index a6eccef2ec0..e0123157390 100644 --- a/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml +++ b/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml @@ -1,7 +1,7 @@ site - 21 + 31 TRUE 33 diff --git a/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd b/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd deleted file mode 100644 index c43a092340d..00000000000 --- a/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd +++ /dev/null @@ -1,37 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/downscale_ShortWave_to_hrly.R -\name{downscale_ShortWave_to_hrly} -\alias{downscale_ShortWave_to_hrly} -\title{Downscale shortwave to hourly} -\usage{ -downscale_ShortWave_to_hrly( - debiased, - time0, - time_end, - lat, - lon, - output_tz = "UTC" -) -} -\arguments{ -\item{debiased, }{data frame of variables} - -\item{time0, }{first timestep} - -\item{time_end, }{last time step} - -\item{lat, }{lat of site} - -\item{lon, }{long of site} - -\item{output_tz, 
}{output timezone} -} -\value{ -A dataframe of downscaled state variables -} -\description{ -Downscale shortwave to hourly -} -\author{ -Laura Puckett -} diff --git a/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd b/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd deleted file mode 100644 index 523d92ba4da..00000000000 --- a/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd +++ /dev/null @@ -1,20 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/downscale_repeat_6hr_to_hrly.R -\name{downscale_repeat_6hr_to_hrly} -\alias{downscale_repeat_6hr_to_hrly} -\title{Downscale repeat to hourly} -\usage{ -downscale_repeat_6hr_to_hrly(data.6hr) -} -\arguments{ -\item{data.6hr, }{dataframe of data to be downscaled (Longwave)} -} -\value{ -A dataframe of downscaled data -} -\description{ -Downscale repeat to hourly -} -\author{ -Laura Puckett -} diff --git a/modules/data.atmosphere/man/downscale_spline_to_hourly.Rd b/modules/data.atmosphere/man/downscale_spline_to_hourly.Rd deleted file mode 100644 index dd57682b1de..00000000000 --- a/modules/data.atmosphere/man/downscale_spline_to_hourly.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/downscale_spline_to_hourly.R -\name{downscale_spline_to_hourly} -\alias{downscale_spline_to_hourly} -\title{Downscale spline to hourly} -\usage{ -downscale_spline_to_hourly(df, VarNamesStates) -} -\arguments{ -\item{df, }{dataframe of data to be downscales} - -\item{VarNamesStates, }{names of vars that are state variables} -} -\value{ -A dataframe of downscaled state variables -} -\description{ -Downscale spline to hourly -} -\author{ -Laura Puckett -} diff --git a/modules/data.atmosphere/man/temporal_downscale.Rd b/modules/data.atmosphere/man/temporal_downscale.Rd index 9ed0999227e..00ed7050b05 100644 --- a/modules/data.atmosphere/man/temporal_downscale.Rd +++ b/modules/data.atmosphere/man/temporal_downscale.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/GEFS_helper_functions.R \name{temporal_downscale} \alias{temporal_downscale} -\title{Downscale NOAA GEFS frin 6hr to 1hr} +\title{Downscale NOAA GEFS from 6hr to 1hr} \usage{ temporal_downscale(input_file, output_file, overwrite = TRUE, hr = 1) } @@ -19,7 +19,7 @@ temporal_downscale(input_file, output_file, overwrite = TRUE, hr = 1) None } \description{ -Downscale NOAA GEFS frin 6hr to 1hr +Downscale NOAA GEFS from 6hr to 1hr } \author{ Quinn Thomas From cf3939ddab74f3e6fa5ccddeac27f1d54cd2c7e2 Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 20 Nov 2020 11:46:55 -0500 Subject: [PATCH 1672/2289] changes for Rd fail --- modules/data.atmosphere/R/GEFS_helper_functions.R | 7 +++---- modules/data.atmosphere/man/noaa_grid_download.Rd | 2 +- .../data.atmosphere/man/process_gridded_noaa_download.Rd | 5 +---- 3 files changed, 5 insertions(+), 9 deletions(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 8eb3bc5e738..e763990f665 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -8,9 +8,9 @@ #' @param num_cores #' @param output_directory #' -#' @return +#' @return NA #' -#' @examples + noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date, model_name_raw, output_directory, end_hr) { @@ -186,9 +186,8 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date, #' @param model_name_raw #' @param 
output_directory #' -#' @return +#' @return List #' -#' @examples #' process_gridded_noaa_download <- function(lat_list, lon_list, diff --git a/modules/data.atmosphere/man/noaa_grid_download.Rd b/modules/data.atmosphere/man/noaa_grid_download.Rd index e52b605abc6..b916aeeef8c 100644 --- a/modules/data.atmosphere/man/noaa_grid_download.Rd +++ b/modules/data.atmosphere/man/noaa_grid_download.Rd @@ -18,7 +18,7 @@ noaa_grid_download( \item{output_directory}{} } \value{ - +NA } \description{ Download gridded forecast in the box bounded by the latitude and longitude list diff --git a/modules/data.atmosphere/man/process_gridded_noaa_download.Rd b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd index e8f2d6113f0..f210560b0e1 100644 --- a/modules/data.atmosphere/man/process_gridded_noaa_download.Rd +++ b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd @@ -22,11 +22,8 @@ process_gridded_noaa_download( \item{output_directory}{} } \value{ - +List } \description{ Extract and temporally downscale points from downloaded grid files } -\examples{ - -} From b45ef21e086897fa6204e3f91d0320595472de34 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 23 Nov 2020 09:21:26 -0500 Subject: [PATCH 1673/2289] fixing SDA to not run entire run at first time step --- modules/assim.sequential/R/sda.enkf_refactored.R | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 79f7c7558e3..1692f107f33 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -304,12 +304,17 @@ sda.enkf <- function(settings, restart.arg = NULL } + if(t == 1){ + config.settings = settings + config.settings$run$end.date = lubridate::ymd_hms(obs.times[t+1], truncated = 3)} + if(t != 1){config.settings = settings} + #-------------------------- Writing the config/Running the model and reading the outputs for each ensemble - outconfig <- write.ensemble.configs(defaults = settings$pfts, + outconfig <- write.ensemble.configs(defaults = config.settings$pfts, ensemble.samples = ensemble.samples, - settings = settings, - model = settings$model$type, - write.to.db = settings$database$bety$write, + settings = config.settings, + model = config.settings$model$type, + write.to.db = config.settings$database$bety$write, restart = restart.arg) save(outconfig, file = file.path(settings$outdir,"SDA", "outconfig.Rdata")) From 4a60651b072c5cebc1f219e8146d11fb0e6bfb81 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 23 Nov 2020 10:16:40 -0500 Subject: [PATCH 1674/2289] updating some make issues --- .../data.atmosphere/R/GEFS_helper_functions.R | 39 ++++++++++--------- .../data.atmosphere/R/download.NOAA_GEFS.R | 11 +++--- .../R/downscaling_helper_functions.R | 4 +- 3 files changed, 29 insertions(+), 25 deletions(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index e763990f665..ce70a5f2841 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -1,12 +1,12 @@ #' Download gridded forecast in the box bounded by the latitude and longitude list #' -#' @param lat_list -#' @param lon_list -#' @param forecast_time -#' @param forecast_date -#' @param model_name_raw -#' @param num_cores -#' @param output_directory +#' @param lat_list lat for site +#' @param lon_list long for site +#' @param forecast_time start hour of forecast 
+#' @param forecast_date date for forecast
+#' @param model_name_raw model name for directory creation
+#' @param end_hr end hr to determine how many hours to download
+#' @param output_directory output directory
 #'
 #' @return NA
 #'
@@ -176,15 +176,17 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date,
 }
 #' Extract and temporally downscale points from downloaded grid files
 #'
-#' @param lat_list
-#' @param lon_list
-#' @param site_id
-#' @param downscale
-#' @param overwrite
-#' @param model_name
-#' @param model_name_ds
-#' @param model_name_raw
-#' @param output_directory
+#' @param lat_list lat for site
+#' @param lon_list lon for site
+#' @param site_id Unique site_id for file creation
+#' @param downscale Logical. Default is TRUE. Downscales from 6hr to hourly
+#' @param overwrite Logical. Default is FALSE. Should existing files be overwritten
+#' @param forecast_date Date for download
+#' @param forecast_time Time (0,6,12,18) for start of download
+#' @param model_name Name of model for file name
+#' @param model_name_ds Name of downscale file name
+#' @param model_name_raw Name of raw file name
+#' @param output_directory Output directory
 #'
 #' @return List
 #'
@@ -478,7 +480,7 @@ process_gridded_noaa_download <- function(lat_list,

       #Write netCDF
-      noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE)
+      write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE)

       if(downscale){
         #Downscale the forecast from 6hr to 1hr
@@ -499,7 +501,7 @@ process_gridded_noaa_download <- function(lat_list,
         results_list[[ens]] <- results

         #Run downscaling
-        noaaGEFSpoint::temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1)
+        temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1)

       }

@@ -516,6 +518,7 @@
 #' @param overwrite, logical stating to overwrite any existing output_file
 #' @param hr time step in hours of temporal downscaling (default = 1)
 #'
+#' @import tidyselect
 #' @author Quinn Thomas
 #'
 #'
diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R
index 1fcbce2df63..1f2c7ebc594 100644
--- a/modules/data.atmosphere/R/download.NOAA_GEFS.R
+++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R
@@ -26,14 +26,15 @@
 ##' data frame contains information about one file.
 ##'
 ##' @param outfolder Directory where results should be written
-##' @param start_date, end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)
-##' @param lat site latitude in decimal degrees
-##' @param lon site longitude in decimal degrees
+##' @param start_date, Range of dates/times to be downloaded (default assumed to be time that function is run)
+##' @param end_date, end date for range of dates to be downloaded (default 16 days from start_date)
+##' @param lat.in site latitude in decimal degrees
+##' @param lon.in site longitude in decimal degrees
 ##' @param site_id The unique ID given to each site. This is used as part of the file name.
+##' @param sitename Site name
+##' @param username username from pecan workflow
 ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? 
##' @param downscale logical, assumed True. Indicated whether data should be downscaled to hourly -##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info. -##' @param ... Other arguments, currently ignored ##' @export ##' ##' @examples diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 0d318d4ecd8..75f0450f522 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -37,8 +37,8 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ #' @param df, data frame of variables #' @param lat, lat of site #' @param lon, long of site -#' @return ShortWave.ds #' @noRd +#' @return ShortWave.ds #' @author Laura Puckett #' @export #' @@ -150,8 +150,8 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ #' @param lon, longitude #' @param lat, latitude #' @return `numeric(1)` -#' @author Quinn Thomas #' @noRd +#' @author Quinn Thomas #' @export #' downscale_solar_geom <- function(doy, lon, lat) { From cce392cd8e72529c36c3fb0f99b27900235ffbfe Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 23 Nov 2020 10:36:54 -0500 Subject: [PATCH 1675/2289] updating rd files --- modules/data.atmosphere/NAMESPACE | 1 + .../data.atmosphere/man/download.NOAA_GEFS.Rd | 18 ++++++++------- .../data.atmosphere/man/noaa_grid_download.Rd | 14 +++++++++++- .../man/process_gridded_noaa_download.Rd | 22 ++++++++++++++++++- 4 files changed, 45 insertions(+), 10 deletions(-) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index eaf37a76a8e..08e0c81477a 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -99,5 +99,6 @@ export(temporal.downscale.functions) export(upscale_met) export(wide2long) import(dplyr) +import(tidyselect) importFrom(magrittr,"%>%") importFrom(rgdal,checkCRSArgs) diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index 8a6c43e20e8..c0d8304cfe6 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -20,21 +20,23 @@ download.NOAA_GEFS( \arguments{ \item{site_id}{The unique ID given to each site. This is used as part of the file name.} -\item{outfolder}{Directory where results should be written} +\item{sitename}{Site name} -\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} +\item{username}{username from pecan workflow} -\item{downscale}{logical, assumed True. Indicated whether data should be downscaled to hourly} +\item{lat.in}{site latitude in decimal degrees} -\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} +\item{lon.in}{site longitude in decimal degrees} -\item{lat}{site latitude in decimal degrees} +\item{outfolder}{Directory where results should be written} -\item{lon}{site longitude in decimal degrees} +\item{start_date, }{Range of dates/times to be downloaded (default assumed to be time that function is run)} -\item{verbose}{logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.} +\item{end_date, }{end date for range of dates to be downloaded (default 16 days from start_date)} -\item{...}{Other arguments, currently ignored} +\item{downscale}{logical, assumed True. 
Indicated whether data should be downscaled to hourly} + +\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} } \value{ A list of data frames is returned containing information about the data file that can be used to locate it later. Each diff --git a/modules/data.atmosphere/man/noaa_grid_download.Rd b/modules/data.atmosphere/man/noaa_grid_download.Rd index b916aeeef8c..8d48f18e485 100644 --- a/modules/data.atmosphere/man/noaa_grid_download.Rd +++ b/modules/data.atmosphere/man/noaa_grid_download.Rd @@ -15,7 +15,19 @@ noaa_grid_download( ) } \arguments{ -\item{output_directory}{} +\item{lat_list}{lat for site} + +\item{lon_list}{long for site} + +\item{forecast_time}{start hour of forecast} + +\item{forecast_date}{date for forecast} + +\item{model_name_raw}{model name for directory creation} + +\item{output_directory}{output directory} + +\item{end_hr}{end hr to determine how many hours to download} } \value{ NA diff --git a/modules/data.atmosphere/man/process_gridded_noaa_download.Rd b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd index f210560b0e1..ca4a99fe23c 100644 --- a/modules/data.atmosphere/man/process_gridded_noaa_download.Rd +++ b/modules/data.atmosphere/man/process_gridded_noaa_download.Rd @@ -19,7 +19,27 @@ process_gridded_noaa_download( ) } \arguments{ -\item{output_directory}{} +\item{lat_list}{lat for site} + +\item{lon_list}{lon for site} + +\item{site_id}{Unique site_id for file creation} + +\item{downscale}{Logical. Default is TRUE. Downscales from 6hr to hourly} + +\item{overwrite}{Logical. Default is FALSE. Should exisiting files be overwritten} + +\item{forecast_date}{Date for download} + +\item{forecast_time}{Time (0,6,12,18) for start of download} + +\item{model_name}{Name of model for file name} + +\item{model_name_ds}{Name of downscale file name} + +\item{model_name_raw}{Name of raw file name} + +\item{output_directory}{Output directory} } \value{ List From bbd2406871771611e2ea0b3c2fe793b1434579ba Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 24 Nov 2020 10:06:26 -0500 Subject: [PATCH 1676/2289] updating register for gefs --- modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R | 2 +- .../data.atmosphere/inst/registration/register.NOAA_GEFS.xml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R index f6b82778364..8fe6706c5bb 100644 --- a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -349,7 +349,7 @@ if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile) settings <- PEcAn.workflow::do_conversions(settings) #end if loop for existing inputs -# if(is_empty(settings$run$inputs$met$path) & length(clim_check)>0){ + # if(is_empty(settings$run$inputs$met$path) & length(clim_check)>0){ # settings$run$inputs$met$id = index_id # settings$run$inputs$met$path = clim_check # } diff --git a/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml b/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml index a6eccef2ec0..e0123157390 100644 --- a/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml +++ b/modules/data.atmosphere/inst/registration/register.NOAA_GEFS.xml @@ -1,7 +1,7 @@ site - 21 + 31 TRUE 33 From 9d7a1357c2a1f33b085a4ea0aa1d0f0b914c37bf Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Wed, 25 Nov 2020 22:13:45 +0000 Subject: 
[PATCH 1677/2289] Fix summarize.result so it generates statistics. --- CHANGELOG.md | 1 + base/utils/R/utils.R | 8 +++--- base/utils/tests/testthat/test.utils.R | 37 +++++++++++++++++++------- 3 files changed, 32 insertions(+), 14 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 22ddcc28080..7e786342cc6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -37,6 +37,7 @@ This is a major change: - Changed docker-compose.yml to use user & group IDs of the operating system user (#2572) - gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666) - ensure Tleaf converted to K for temperature corrections in PEcAn.photosynthesis::fitA (#2726) +- fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2752) ### Changed diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 1f30d591fb6..3397e069976 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -210,11 +210,11 @@ summarize.result <- function(result) { dplyr::group_by(citation_id, site_id, trt_id, control, greenhouse, date, time, cultivar_id, specie_id) %>% - dplyr::summarize( - n = length(n), - mean = mean(mean), + dplyr::summarize( # stat must be computed first, before n and mean statname = dplyr::if_else(length(n) == 1, "none", "SE"), - stat = stats::sd(mean) / sqrt(length(n)) + stat = stats::sd(mean) / sqrt(length(n)), + n = length(n), + mean = mean(mean) ) %>% dplyr::ungroup() ans2 <- result %>% diff --git a/base/utils/tests/testthat/test.utils.R b/base/utils/tests/testthat/test.utils.R index 9d72a72803b..16e467de79a 100644 --- a/base/utils/tests/testthat/test.utils.R +++ b/base/utils/tests/testthat/test.utils.R @@ -9,9 +9,9 @@ context("Other utilities") test.stats <- data.frame(Y=rep(1,5), - stat=rep(1,5), - n=rep(4,5), - statname=c('SD', 'MSE', 'LSD', 'HSD', 'MSD')) + stat=rep(1,5), + n=rep(4,5), + statname=c('SD', 'MSE', 'LSD', 'HSD', 'MSD')) test_that("transformstats works",{ expect_equal(signif(transformstats(test.stats)$stat, 5), @@ -20,7 +20,7 @@ test_that("transformstats works",{ expect_equal(test.stats$Y, transformstats(test.stats)$Y) expect_equal(test.stats$n, transformstats(test.stats)$n) expect_false(any(as.character(test.stats$statname) == - as.character(transformstats(test.stats)$statname))) + as.character(transformstats(test.stats)$statname))) }) @@ -33,7 +33,7 @@ test_that('arrhenius scaling works', { test_that("vecpaste works",{ - + ## vecpaste() expect_that(vecpaste(c('a','b')), equals("'a','b'")) @@ -69,12 +69,29 @@ test_that("summarize.result works appropriately", { specie_id = 1, n = 1, mean = sqrt(1:10), - stat = 'none', - statname = 'none' - ) + stat = NA, + statname = NA + ) + # check that individual means produced for distinct sites + expect_that(summarize.result(testresult)$mean, equals(testresult$mean)) + + # check that four means are produced for a single site testresult2 <- transform(testresult, site_id= 1) - expect_that(summarize.result(testresult)$mean, equals(testresult$mean)) - expect_that(nrow(summarize.result(testresult2)), equals(4)) + expect_that(nrow(summarize.result(testresult2)), equals(4)) + + # check that if stat == NA, SE will be computed + testresult3 <- summarize.result(testresult2) + expect_true(all(!is.na(testresult3$stat))) + expect_equal(testresult3$n, c(3L, 2L, 2L, 3L)) + expect_equal(round(testresult3$stat, 3), c(0.359, 0.177, 0.293, 0.206)) + expect_equal(round(testresult3$mean, 3), c(1.656, 2.823, 1.707, 2.813)) + + # check that site groups correctly for length(site) > 1 + testresult4 <- 
rbind.data.frame(testresult2, transform(testresult2, site_id= 2)) + testresult5 <- summarize.result(testresult4) + expect_true(all(!is.na(testresult5$stat))) + expect_equal(nrow(testresult5), 8) + }) From 6bd98e0f350f4a2a3ae822a30d31b223a04a28b2 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 25 Nov 2020 15:41:33 -0700 Subject: [PATCH 1678/2289] Update CHANGELOG.md --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7e786342cc6..2893b42bbb6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -37,7 +37,7 @@ This is a major change: - Changed docker-compose.yml to use user & group IDs of the operating system user (#2572) - gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666) - ensure Tleaf converted to K for temperature corrections in PEcAn.photosynthesis::fitA (#2726) -- fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2752) +- fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2753) ### Changed From dbe50522ba45cf77cce8c8be81ff7ce9f5d10889 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 30 Nov 2020 10:24:44 -0500 Subject: [PATCH 1679/2289] forecast t was doubling. Trying with changing t to 1 --- modules/assim.sequential/R/sda.enkf_refactored.R | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 1692f107f33..1a2b1cbfa60 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -211,7 +211,7 @@ sda.enkf <- function(settings, file.path(file.path(settings$outdir,"SDA"),paste0(assimyears[t],"/",files.last.sda))) } - if(length(FORECAST) == length(ANALYSIS) && length(FORECAST) > 0) t = t + length(FORECAST) #if you made it through the forecast and the analysis in t and failed on the analysis in t+1 so you didn't save t + if(length(FORECAST) == length(ANALYSIS) && length(FORECAST) > 0) t = 1 + length(FORECAST) #if you made it through the forecast and the analysis in t and failed on the analysis in t+1 so you didn't save t }else{ t = 1 @@ -304,10 +304,12 @@ sda.enkf <- function(settings, restart.arg = NULL } - if(t == 1){ - config.settings = settings - config.settings$run$end.date = lubridate::ymd_hms(obs.times[t+1], truncated = 3)} - if(t != 1){config.settings = settings} + if(t == 1){ + config.settings = settings + config.settings$run$end.date = lubridate::ymd_hms(obs.times[t+1], truncated = 3)} + if(t != 1){config.settings = settings} + + #-------------------------- Writing the config/Running the model and reading the outputs for each ensemble outconfig <- write.ensemble.configs(defaults = config.settings$pfts, From 56a404629c0e986b33c30a06e9e0c8ab9d8dede4 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 30 Nov 2020 10:25:34 -0500 Subject: [PATCH 1680/2289] adding in change for Pf diag and changing an input that is equivalent to PF back to pf to have the diagonal fix in --- modules/assim.sequential/R/Analysis_sda.R | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index ab95dbaba02..a2ef27dd1a3 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -129,6 +129,8 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 Pf = 
stats::cov(X) # Cov Forecast - Goes into tobit2space as initial condition but is re-estimated in tobit space mu.f <- colMeans(X) #mean Forecast - This is used as an initial condition + diag(Pf)[which(diag(Pf)==0)] <- min(diag(Pf)[which(diag(Pf) != 0)])/5 #fixing det(Pf)==0 + #Observed inputs R <- try(solve(Observed$R), silent = F) #putting solve() here so if not invertible error is before compiling tobit2space #sfsmisc::posdefify( Y <- Observed$Y @@ -203,7 +205,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 inits.tobit2space <- function() list(muf = rmnorm_chol(1,colMeans(X), chol(diag(ncol(X))*100)), pf = rwish_chol(1,df = ncol(X)+1, - cholesky = chol(solve(stats::cov(X))))) + cholesky = chol(solve(Pf)))) #ptm <- proc.time() tobit2space_pred <- nimbleModel(tobit2space.model, data = data.tobit2space, @@ -226,7 +228,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 for(j in seq_along(mu.f)){ for(n in seq_len(nrow(X))){ node <- paste0('y.censored[',n,',',j,']') - conf_tobit2space$addSampler(node, 'toggle', control=list(type='RW')) + conf_tobit2space$addSampler(node, 'toggle', control=list(type='slice')) ## could instead use slice samplers, or any combination thereof, e.g.: ##conf$addSampler(node, 'toggle', control=list(type='slice')) } From c0b2902ad8036dfd23b4ec3974343a15c51c3926 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 30 Nov 2020 10:26:04 -0500 Subject: [PATCH 1681/2289] getting error about missing function. Trying to add in package name --- modules/assim.sequential/R/Nimble_codes.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R index e3be073bcfa..1b27aad4ed0 100644 --- a/modules/assim.sequential/R/Nimble_codes.R +++ b/modules/assim.sequential/R/Nimble_codes.R @@ -114,7 +114,7 @@ registerDistributions(list(dwtmnorm = list( #' @export tobit2space.model <- nimbleCode({ for (i in 1:N) { - y.censored[i, 1:J] ~ dwtmnorm(mean = muf[1:J], + y.censored[i, 1:J] ~ PEcAn.assim.sequential::dwtmnorm(mean = muf[1:J], prec = pf[1:J, 1:J], wt = wts[i]) # for (j in 1:J) { From 3419c85040b9e03a55ae37f9651ad2c28553fdf8 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 30 Nov 2020 10:26:16 -0500 Subject: [PATCH 1682/2289] updating xml --- modules/assim.sequential/inst/WillowCreek/testing.xml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/testing.xml b/modules/assim.sequential/inst/WillowCreek/testing.xml index 330e1fd7fe6..847ebd7c5d5 100644 --- a/modules/assim.sequential/inst/WillowCreek/testing.xml +++ b/modules/assim.sequential/inst/WillowCreek/testing.xml @@ -84,14 +84,14 @@ umol C m-2 s-1 -9999 9999 - 1000 + 1000000000 Qle mW m-2 0 9999 - 100 + 100 LAI From dd88f154f68f07c2fac83d7166f71d85c305ae56 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 30 Nov 2020 18:20:28 -0600 Subject: [PATCH 1683/2289] remove left over references to google map key --- docker/web/config.docker.php | 3 --- scripts/install_pecan.sh | 1 - web/setups/page.template.php | 1 - 3 files changed, 5 deletions(-) diff --git a/docker/web/config.docker.php b/docker/web/config.docker.php index 7263d5ab2df..5cf34391e28 100644 --- a/docker/web/config.docker.php +++ b/docker/web/config.docker.php @@ -26,9 +26,6 @@ # sshTunnel binary $SSHtunnel=dirname(__FILE__) . DIRECTORY_SEPARATOR . 
"sshtunnel.sh"; -# google map key -$googleMapKey="AIzaSyDBBrRM8Ygo-wGAnubrtVGZklK3bmXlUPI"; - # Require username/password, can set min level to 0 so nobody can run/delete. # 4 = viewer # 3 = creator diff --git a/scripts/install_pecan.sh b/scripts/install_pecan.sh index 12d5e6a1610..ddcf1a3eb7c 100644 --- a/scripts/install_pecan.sh +++ b/scripts/install_pecan.sh @@ -496,7 +496,6 @@ if [ ! -e ${HOME}/pecan/web/config.php ]; then sed -e "s#browndog_url=.*#browndog_url=\"${BROWNDOG_URL}\";#" \ -e "s#browndog_username=.*#browndog_username=\"${BROWNDOG_USERNAME}\";#" \ -e "s#browndog_password=.*#browndog_password=\"${BROWNDOG_PASSWORD}\";#" \ - -e "s#googleMapKey=.*#googleMapKey=\"${GOOGLE_MAP_KEY}\";#" \ -e "s/carya/$USER/g" ${HOME}/pecan/web/config.example.php > ${HOME}/pecan/web/config.php fi diff --git a/web/setups/page.template.php b/web/setups/page.template.php index 812da7a7d40..c2e3486c1c9 100644 --- a/web/setups/page.template.php +++ b/web/setups/page.template.php @@ -44,7 +44,6 @@ Database
Browndog
FIA Database
- Google MapKey
Change Password

From d58457821f62457f632d0fb9010bd230575f463c Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 1 Dec 2020 16:12:42 -0500 Subject: [PATCH 1684/2289] adding in cov na removal --- modules/assim.sequential/R/sda.enkf_refactored.R | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 1a2b1cbfa60..a102b5a202b 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -121,6 +121,7 @@ sda.enkf <- function(settings, ### tests before data assimilation ### ###-------------------------------------------------------------------###---- obs.times <- names(obs.mean) + obs.times.POSIX <- lubridate::ymd_hms(obs.times) ### TO DO: Need to find a way to deal with years before 1000 for paleon ### need a leading zero @@ -408,8 +409,9 @@ sda.enkf <- function(settings, } # droping the ones that their means are zero na.obs.mean <- which(is.na(unlist(obs.mean[[t]][choose]))) - if (length(na.obs.mean) > 0) - choose <- choose [-na.obs.mean] + na.obs.cov <- which(is.na(unlist(obs.cov[[t]][choose]))) + if (length(na.obs.mean) > 0) choose <- choose [-na.obs.mean] + if (length(na.obs.cov) > 0) choose.cov <- choose[-na.obs.cov] Y <- unlist(obs.mean[[t]][choose]) From 48e3a246041d5263e22b1dbc7e0ed9af7130da43 Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 1 Dec 2020 16:13:18 -0500 Subject: [PATCH 1685/2289] trying to get sda to work --- modules/assim.sequential/R/Analysis_sda.R | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index a2ef27dd1a3..a4ad8aed5b9 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -194,6 +194,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 constants.tobit2space <- list(N = nrow(X), J = length(mu.f)) + save.image(file = '/fs/data3/kzarada/nimble_fix.RData') data.tobit2space <- list(y.ind = x.ind, y.censored = x.censored, mu_0 = rep(0,length(mu.f)), @@ -201,13 +202,15 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 nu_0 = ncol(X)+1, wts = wts*nrow(X), #sigma x2 max Y Sigma_0 = solve(diag(1000,length(mu.f))))#some measure of prior obs - + Pf.i = stats::cov(X) + diag(Pf.i)[which(diag(Pf.i)==0)] <- min(diag(Pf.i)[which(diag(Pf.i) != 0)])/5 inits.tobit2space <- function() list(muf = rmnorm_chol(1,colMeans(X), chol(diag(ncol(X))*100)), pf = rwish_chol(1,df = ncol(X)+1, - cholesky = chol(solve(Pf)))) + cholesky = chol(solve(Pf.i)))) #ptm <- proc.time() + tobit2space_pred <- nimbleModel(tobit2space.model, data = data.tobit2space, constants = constants.tobit2space, inits = inits.tobit2space(), name = 'space') @@ -228,7 +231,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 for(j in seq_along(mu.f)){ for(n in seq_len(nrow(X))){ node <- paste0('y.censored[',n,',',j,']') - conf_tobit2space$addSampler(node, 'toggle', control=list(type='slice')) + conf_tobit2space$addSampler(node, 'toggle', control=list(type='RW')) ## could instead use slice samplers, or any combination thereof, e.g.: ##conf$addSampler(node, 'toggle', control=list(type='slice')) } From 1d17dc1c3dfe870a1a4b878693158708f8046157 Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 1 Dec 2020 16:13:40 -0500 Subject: [PATCH 1686/2289] changes to nimble code to try to compile --- 
modules/assim.sequential/R/Nimble_codes.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Nimble_codes.R b/modules/assim.sequential/R/Nimble_codes.R index 1b27aad4ed0..e3be073bcfa 100644 --- a/modules/assim.sequential/R/Nimble_codes.R +++ b/modules/assim.sequential/R/Nimble_codes.R @@ -114,7 +114,7 @@ registerDistributions(list(dwtmnorm = list( #' @export tobit2space.model <- nimbleCode({ for (i in 1:N) { - y.censored[i, 1:J] ~ PEcAn.assim.sequential::dwtmnorm(mean = muf[1:J], + y.censored[i, 1:J] ~ dwtmnorm(mean = muf[1:J], prec = pf[1:J, 1:J], wt = wts[i]) # for (j in 1:J) { From e52a6b313997849765cfbdd031b1cb638aece366 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 3 Dec 2020 08:55:31 -0500 Subject: [PATCH 1687/2289] trying to remove problem variables --- modules/assim.sequential/R/Analysis_sda.R | 5 +-- .../inst/WillowCreek/SDA_Workflow.R | 1 + .../inst/WillowCreek/testing.xml | 44 ------------------- 3 files changed, 3 insertions(+), 47 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index a4ad8aed5b9..2e51aa54ffc 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -202,12 +202,11 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 nu_0 = ncol(X)+1, wts = wts*nrow(X), #sigma x2 max Y Sigma_0 = solve(diag(1000,length(mu.f))))#some measure of prior obs - Pf.i = stats::cov(X) - diag(Pf.i)[which(diag(Pf.i)==0)] <- min(diag(Pf.i)[which(diag(Pf.i) != 0)])/5 + inits.tobit2space <- function() list(muf = rmnorm_chol(1,colMeans(X), chol(diag(ncol(X))*100)), pf = rwish_chol(1,df = ncol(X)+1, - cholesky = chol(solve(Pf.i)))) + cholesky = chol(stats::cov(X)))) #ptm <- proc.time() diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R index 8fe6706c5bb..e3d5d240ac5 100644 --- a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -502,6 +502,7 @@ settings$model$binary = "/usr2/postdoc/istfer/SIPNET/1023/sipnet" if(restart == FALSE) unlink(c('run','out','SDA'), recursive = T) +debugonce(PEcAn.assim.sequential::sda.enkf) if ('state.data.assimilation' %in% names(settings)) { if (PEcAn.utils::status.check("SDA") == 0) { diff --git a/modules/assim.sequential/inst/WillowCreek/testing.xml b/modules/assim.sequential/inst/WillowCreek/testing.xml index 847ebd7c5d5..db39f96863c 100644 --- a/modules/assim.sequential/inst/WillowCreek/testing.xml +++ b/modules/assim.sequential/inst/WillowCreek/testing.xml @@ -27,15 +27,6 @@ NEE - - 1000013298 - direct - 0 - 9999 - - Qle - - 1000013298 direct @@ -45,15 +36,6 @@ LAI - - 1000013298 - direct - 0 - 9999 - - TotSoilCarb - - 1000013298 direct @@ -63,15 +45,6 @@ SoilMoistFrac - - 1000013298 - direct - 0 - 9999 - - litter_carbon_content - - @@ -85,34 +58,17 @@ -9999 9999 1000000000 - - - Qle - mW m-2 - 0 - 9999 - 100 LAI 0 9999 - - TotSoilCarb - 0 - 9999 - SoilMoistFrac 0 1 - - litter_carbon_content - 0 - 9999 - year 2017-01-01 From 795823ae4c1078c97a9ac291be9b5682e27c37c5 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Fri, 4 Dec 2020 00:49:29 +0000 Subject: [PATCH 1688/2289] Add identifier columns to jagged.data and save out --- base/db/R/rename_jags_columns.R | 11 +++++++---- modules/meta.analysis/R/jagify.R | 20 ++++++++++++++++---- modules/meta.analysis/R/meta.analysis.R | 5 +++-- modules/meta.analysis/R/run.meta.analysis.R | 3 
+++ 4 files changed, 29 insertions(+), 10 deletions(-) diff --git a/base/db/R/rename_jags_columns.R b/base/db/R/rename_jags_columns.R index 3e82ecfa445..b2d08e13c5e 100644 --- a/base/db/R/rename_jags_columns.R +++ b/base/db/R/rename_jags_columns.R @@ -18,18 +18,21 @@ rename_jags_columns <- function(data) { # Change variable names and calculate obs.prec within data frame + # Swap column names + colnames(data)[colnames(data) %in% c("greenhouse", "ghs")] <- c("ghs", "greenhouse") + colnames(data)[colnames(data) %in% c("site_id", "site")] <- c("site", "site_id") + transformed <- transform(data, Y = mean, se = stat, obs.prec = 1 / (sqrt(n) * stat) ^2, trt = trt_id, - site = site_id, - cite = citation_id, - ghs = greenhouse) + cite = citation_id) # Subset data frame selected <- subset(transformed, select = c('Y', 'n', 'site', 'trt', 'ghs', 'obs.prec', - 'se', 'cite')) + 'se', 'cite', + "greenhouse", "site_id", "treatment_id", "trt_name", "trt_num")) # add original # original versions of greenhouse, site_id, treatment_id, trt_name # Return subset data frame return(selected) } diff --git a/modules/meta.analysis/R/jagify.R b/modules/meta.analysis/R/jagify.R index cf17f00c6e8..2ac7e0b2cbe 100644 --- a/modules/meta.analysis/R/jagify.R +++ b/modules/meta.analysis/R/jagify.R @@ -20,10 +20,10 @@ jagify <- function(result, use_ghs = TRUE) { - ## Rename 'name' column from 'treatment' table to trt_id. Remove NAs. Assign treatments. + ## Create new column "trt_id" from column 'name'. Remove NAs. Assign treatments. ## Finally, summarize the results by calculating summary statistics from experimental replicates r <- result[!is.na(result$mean), ] - colnames(r)[colnames(r) == "name"] <- "trt_id" + r$trt_id <- r$name r <- transform.nas(r) # exclude greenhouse data unless requested otherwise @@ -39,8 +39,20 @@ jagify <- function(result, use_ghs = TRUE) { site_id = as.integer(factor(site_id, unique(site_id))), greenhouse = as.integer(factor(greenhouse, unique(greenhouse))), mean = mean, - citation_id = citation_id), - select = c("stat", "n", "site_id", "trt_id", "mean", "citation_id", "greenhouse")) + citation_id = citation_id, + ghs = greenhouse, + site = site_id, + trt_name = name), + select = c("stat", "n", "site_id", "trt_id", "mean", "citation_id", "greenhouse", + "ghs", "treatment_id", "site", "trt_name")) # original versions of greenhouse, treatment_id, site_id, and name + + #order by site_id and trt_id, but make sure "control" is the first trt of each site + uniq <- setdiff(unique(r $trt_id), "control") + r$trt_id <- factor(r$trt_id, levels = c("control", uniq[order(uniq)])) + r <- r[order(r$site_id, r$trt_id), ] + + #add beta.trt index associated with each trt_id (performed in single.MA, replicated here for matching purposes) + r$trt_num <- as.integer(factor(r$trt_id, levels = unique(r$trt_id))) if (length(r$stat[!is.na(r$stat) & r$stat <= 0]) > 0) { varswithbadstats <- unique(result$vname[which(r$stat <= 0)]) diff --git a/modules/meta.analysis/R/meta.analysis.R b/modules/meta.analysis/R/meta.analysis.R index e572b6f5128..e4620034441 100644 --- a/modules/meta.analysis/R/meta.analysis.R +++ b/modules/meta.analysis/R/meta.analysis.R @@ -103,8 +103,9 @@ pecan.ma <- function(trait.data, prior.distns, writeLines(paste("------------------------------------------------")) } data <- trait.data[[trait.name]] - data <- data[, which(!colnames(data) %in% c("cite", "trait_id", "se"))] ## remove citation and other unneeded columns - data <- data[order(data[["site"]], data[["trt"]]), ] # not sure why, but required for 
JAGS model + data <- data[, which(!colnames(data) %in% c("cite", "trait_id", "se", + "greenhouse", "site_id", "treatment_id", "trt_name", "trt_num"))] ## remove citation and other unneeded columns + ## check for excess missing data diff --git a/modules/meta.analysis/R/run.meta.analysis.R b/modules/meta.analysis/R/run.meta.analysis.R index e2dd732d1b2..5695d0580dd 100644 --- a/modules/meta.analysis/R/run.meta.analysis.R +++ b/modules/meta.analysis/R/run.meta.analysis.R @@ -58,6 +58,9 @@ run.meta.analysis.pft <- function(pft, iterations, random = TRUE, threshold = 1. ## Convert data to format expected by pecan.ma jagged.data <- lapply(trait.data, PEcAn.MA::jagify, use_ghs = use_ghs) + ## Save the jagged.data object, meta-analysis inputs + save(jagged.data, file = file.path(pft$outdir, "jagged.data.Rdata")) + if(!use_ghs){ # check if any data left after excluding greenhouse all_trait_check <- sapply(jagged.data, nrow) From 54976652107612c77bfb87d0b98fa6fe6b01f274 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Fri, 4 Dec 2020 15:46:45 +0000 Subject: [PATCH 1689/2289] Fix get.trait.data.pft so forceupdate set to true stays true --- base/db/R/get.trait.data.pft.R | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index 8c5f5891310..18cb952c21c 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -77,9 +77,7 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, traits <- names(trait.data.check) # Set forceupdate FALSE if it's a string (backwards compatible with 'AUTO' flag used in the past) - if (!is.logical(forceupdate)) { - forceupdate <- FALSE - } + forceupdate <- isTRUE(as.logical(forceupdate)) # check to see if we need to update if (!forceupdate) { From 9140839e1f94254565d10efb47eb5134987ae3a4 Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 4 Dec 2020 11:48:49 -0500 Subject: [PATCH 1690/2289] trying different cvar configs to compile nimble --- modules/assim.sequential/R/Analysis_sda.R | 1 - .../inst/WillowCreek/SDA_Workflow.R | 1 + .../inst/WillowCreek/testing.xml | 34 ++----------------- 3 files changed, 4 insertions(+), 32 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 2e51aa54ffc..f77ef468b5f 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -194,7 +194,6 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 constants.tobit2space <- list(N = nrow(X), J = length(mu.f)) - save.image(file = '/fs/data3/kzarada/nimble_fix.RData') data.tobit2space <- list(y.ind = x.ind, y.censored = x.censored, mu_0 = rep(0,length(mu.f)), diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R index e3d5d240ac5..15c08610018 100644 --- a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -4,6 +4,7 @@ library("PEcAn.all") library("PEcAn.utils") library("PEcAn.data.remote") +library("PEcAn.assim.sequential") library("RCurl") library("REddyProc") library("tidyverse") diff --git a/modules/assim.sequential/inst/WillowCreek/testing.xml b/modules/assim.sequential/inst/WillowCreek/testing.xml index db39f96863c..4146eff6a9e 100644 --- a/modules/assim.sequential/inst/WillowCreek/testing.xml +++ b/modules/assim.sequential/inst/WillowCreek/testing.xml @@ -9,15 +9,6 @@ 1000013298 - - 
1000013298 - direct - 0 - 9999 - - AbvGrndWood - - 1000013298 direct @@ -27,7 +18,7 @@ NEE - + 1000013298 direct 0 @@ -36,22 +27,8 @@ LAI - - 1000013298 - direct - 0 - 1 - - SoilMoistFrac - - - - AbvGrndWood - 0 - 9999 - NEE umol C m-2 s-1 @@ -64,16 +41,11 @@ 0 9999 - - SoilMoistFrac - 0 - 1 - year 2017-01-01 2018-11-05 - 10 + 50 LE fix @@ -106,7 +78,7 @@ FALSE - 10 + 50 NEE 2018 2018 From 7ccc68bc457eacc5131b6336922e0bd5d256b4a7 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Fri, 4 Dec 2020 21:26:46 +0000 Subject: [PATCH 1691/2289] Add matching column ids to summarized datasets --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 3397e069976..77e090eb865 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -209,7 +209,7 @@ summarize.result <- function(result) { dplyr::filter(n == 1) %>% dplyr::group_by(citation_id, site_id, trt_id, control, greenhouse, date, time, - cultivar_id, specie_id) %>% + cultivar_id, specie_id, name, treatment_id) %>% dplyr::summarize( # stat must be computed first, before n and mean statname = dplyr::if_else(length(n) == 1, "none", "SE"), stat = stats::sd(mean) / sqrt(length(n)), From f73ce5a877709e3efbcdf24ee4feedd33de2d781 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Fri, 4 Dec 2020 23:11:56 +0000 Subject: [PATCH 1692/2289] Test whether treatment indices are correctly assigned to controls, regardless of alphabetical order of treatment names --- .../tests/testthat/test.run.meta.analysis.R | 27 ++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/modules/meta.analysis/tests/testthat/test.run.meta.analysis.R b/modules/meta.analysis/tests/testthat/test.run.meta.analysis.R index 2d174d948e1..cc6f464e593 100644 --- a/modules/meta.analysis/tests/testthat/test.run.meta.analysis.R +++ b/modules/meta.analysis/tests/testthat/test.run.meta.analysis.R @@ -22,4 +22,29 @@ test_that("singleMA gives expected result for example inputs",{ ## need to calculate x ## x <- singleMA(....) 
  #expect_equal(round(summary(x)$statistics["beta.o", "Mean"]), 5)
-})
\ No newline at end of file
+})
+
+test_that("jagify correctly assigns treatment index of 1 to all control treatments, regardless of alphabetical order", {
+  ## generate test data; controls assigned to early alphabet and late alphabet trt names
+  testresult <- data.frame(citation_id = 1,
+                           site_id = rep(1:2, each = 5),
+                           name = rep(letters[1:5],2),
+                           trt_id = as.character(rep(letters[1:5],2)),
+                           control = c(1, rep(0,8), 1),
+                           greenhouse = c(rep(0,5), rep(1,5)),
+                           date = 1,
+                           time = NA,
+                           cultivar_id = 1,
+                           specie_id = 1,
+                           n = 2,
+                           mean = sqrt(1:10),
+                           stat = 1,
+                           statname = "SE",
+                           treatment_id = 1:10
+  )
+  i <- sapply(testresult, is.factor)
+  testresult[i] <- lapply(testresult[i], as.character)
+
+  jagged.data <- jagify(testresult)
+  expect_equal(jagged.data$trt_num[jagged.data$trt == "control"], c(1, 1))
+})

From c28916fdeb7b1d6b1c1999f5a5617c4ea1dd987c Mon Sep 17 00:00:00 2001
From: Jessica Guo
Date: Fri, 4 Dec 2020 23:56:26 +0000
Subject: [PATCH 1693/2289] Remove madata.Rdata object, replaced by jagged.data.Rdata

---
 modules/meta.analysis/R/meta.analysis.R     | 16 ++--------------
 modules/meta.analysis/R/run.meta.analysis.R |  3 ++-
 2 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/modules/meta.analysis/R/meta.analysis.R b/modules/meta.analysis/R/meta.analysis.R
index e4620034441..ff57087ab8d 100644
--- a/modules/meta.analysis/R/meta.analysis.R
+++ b/modules/meta.analysis/R/meta.analysis.R
@@ -34,8 +34,6 @@
 ##' @param logfile Path to file for sinking meta analysis output. If
 ##'   `NULL`, only print output to console.
 ##' @param verbose Logical. If `TRUE` (default), print progress messages.
-##' @param madata_file Path to file for storing copy of data used in
-##'   meta-analysis. If `NULL`, don't store at all.
 ##' @return four chains with 5000 total samples from posterior
 ##' @author David LeBauer, Michael C. Dietze, Alexey Shiklomanov
 ##' @export
@@ -67,13 +65,8 @@ pecan.ma <- function(trait.data, prior.distns,
                      outdir, random = FALSE, overdispersed = TRUE,
                      logfile = file.path(outdir, "meta-analysis.log"),
-                     verbose = TRUE,
-                     madata_file = file.path(outdir, "madata.Rdata")) {
+                     verbose = TRUE) {

-  if (!is.null(madata_file)) {
-    madata <- list()
-  }
-
   ## Meta-analysis for each trait
   mcmc.object <- list()  # initialize output list of mcmc objects for each trait
   mcmc.mat <- list()
@@ -140,9 +133,6 @@ pecan.ma <- function(trait.data, prior.distns,
       }
     }
-    if (!is.null(madata)) {
-      madata[[trait.name]] <- data
-    }
     jag.model.file <- file.path(outdir, paste0(trait.name, ".model.bug"))  # file to store model

     ## run the meta-analysis in JAGS
@@ -166,8 +156,6 @@ pecan.ma <- function(trait.data, prior.distns,
     mcmc.object[[trait.name]] <- jags.out.trunc
   }

-  if (!is.null(madata_file)) {
-    save(madata, file = madata_file)
-  }
+
+  return(mcmc.object)
 }  # pecan.ma
diff --git a/modules/meta.analysis/R/run.meta.analysis.R b/modules/meta.analysis/R/run.meta.analysis.R
index e2dd732d1b2..308d668d8ad 100644
--- a/modules/meta.analysis/R/run.meta.analysis.R
+++ b/modules/meta.analysis/R/run.meta.analysis.R
@@ -58,6 +58,8 @@ run.meta.analysis.pft <- function(pft, iterations, random = TRUE, threshold = 1.
## Convert data to format expected by pecan.ma jagged.data <- lapply(trait.data, PEcAn.MA::jagify, use_ghs = use_ghs) - ## Save the jagged.data object, meta-analysis inputs + ## Save the jagged.data object, replaces previous madata.Rdata object + ## First 6 columns are equivalent and direct inputs into the meta-analysis save(jagged.data, file = file.path(pft$outdir, "jagged.data.Rdata")) if(!use_ghs){ From cddbb1141298fdad2affedb8a241b19eaf7bfdaa Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Sat, 5 Dec 2020 00:07:06 +0000 Subject: [PATCH 1694/2289] Update tracking of random effects in the meta-analysis module --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 2893b42bbb6..d344640bfc6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -38,6 +38,7 @@ This is a major change: - gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666) - ensure Tleaf converted to K for temperature corrections in PEcAn.photosynthesis::fitA (#2726) - fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2753) +- ensure that control treatments always receive the random effect index of 1, replace madata.Rdata with jagged.data.Rdata, which includes identifying variables useful for calculating parameter estimates by treatment (#2756) ### Changed From c75612e2327771bf8c46464333e8610a8125e71b Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 7 Dec 2020 13:16:07 -0500 Subject: [PATCH 1695/2289] reverting back from the change I made to inits --- modules/assim.sequential/R/Analysis_sda.R | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index f77ef468b5f..000875a9e9b 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -204,9 +204,8 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 inits.tobit2space <- function() list(muf = rmnorm_chol(1,colMeans(X), chol(diag(ncol(X))*100)), - pf = rwish_chol(1,df = ncol(X)+1, - cholesky = chol(stats::cov(X)))) - + pf = rwish_chol(1,df = ncol(X)+1, + cholesky = chol(solve(stats::cov(X))))) #ptm <- proc.time() tobit2space_pred <- nimbleModel(tobit2space.model, data = data.tobit2space, @@ -238,7 +237,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 Rmcmc_tobit2space <- buildMCMC(conf_tobit2space) #restarting at good initial conditions is somewhat important here - Cmodel_tobit2space <- compileNimble(tobit2space_pred) + Cmodel_tobit2space <- compileNimble(tobit2space_pred, showCompilerOutput = TRUE) Cmcmc_tobit2space <- compileNimble(Rmcmc_tobit2space, project = tobit2space_pred) for(i in seq_along(X)) { From d0b4f4587f301c3e57b6d7b9c062254b29ffe163 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 7 Dec 2020 15:49:16 -0500 Subject: [PATCH 1696/2289] updating depends for gefs function --- modules/data.atmosphere/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index e56422f3fa5..33617fa0d12 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -49,6 +49,7 @@ Imports: testthat (>= 2.0.0), tibble, tidyr, + tidyselect, truncnorm, udunits2 (>= 0.11), XML (>= 3.98-1.4), From ac6074b2e23e5e5255ee0bf5ad9bb7771bb8a1b0 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 7 Dec 2020 16:12:08 -0500 Subject: [PATCH 1697/2289] 
changing to suggested --- modules/data.atmosphere/DESCRIPTION | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 33617fa0d12..8a5aeb39bf6 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -49,7 +49,6 @@ Imports: testthat (>= 2.0.0), tibble, tidyr, - tidyselect, truncnorm, udunits2 (>= 0.11), XML (>= 3.98-1.4), @@ -61,7 +60,8 @@ Suggests: parallel, progress, reticulate, - rlang (>= 0.2.0) + rlang (>= 0.2.0), + tidyselect Remotes: github::ropensci/geonames, github::ropensci/nneo From 9d58f0bd8b26b03d0293e32e25e45b6c6712e7ca Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 7 Dec 2020 16:20:04 -0500 Subject: [PATCH 1698/2289] jk back to import --- modules/data.atmosphere/DESCRIPTION | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 8a5aeb39bf6..33617fa0d12 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -49,6 +49,7 @@ Imports: testthat (>= 2.0.0), tibble, tidyr, + tidyselect, truncnorm, udunits2 (>= 0.11), XML (>= 3.98-1.4), @@ -60,8 +61,7 @@ Suggests: parallel, progress, reticulate, - rlang (>= 0.2.0), - tidyselect + rlang (>= 0.2.0) Remotes: github::ropensci/geonames, github::ropensci/nneo From 0768f2ff29570aea019b6993284aafd73a00f5b6 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Mon, 7 Dec 2020 14:55:42 -0700 Subject: [PATCH 1699/2289] Update modules/meta.analysis/R/jagify.R Co-authored-by: Chris Black --- modules/meta.analysis/R/jagify.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/jagify.R b/modules/meta.analysis/R/jagify.R index 2ac7e0b2cbe..e2641739bdd 100644 --- a/modules/meta.analysis/R/jagify.R +++ b/modules/meta.analysis/R/jagify.R @@ -47,7 +47,7 @@ jagify <- function(result, use_ghs = TRUE) { "ghs", "treatment_id", "site", "trt_name")) # original versions of greenhouse, treatment_id, site_id, and name #order by site_id and trt_id, but make sure "control" is the first trt of each site - uniq <- setdiff(unique(r $trt_id), "control") + uniq <- setdiff(unique(r$trt_id), "control") r$trt_id <- factor(r$trt_id, levels = c("control", uniq[order(uniq)])) r <- r[order(r$site_id, r$trt_id), ] From d576a968b659a395977dfdc9fd78af184e1221a7 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 8 Dec 2020 13:00:28 -0600 Subject: [PATCH 1700/2289] remove api folder --- CHANGELOG.md | 1 + api/.Rbuildignore | 2 - api/.gitignore | 3 - api/DESCRIPTION | 22 --- api/LICENSE | 34 ---- api/NAMESPACE | 24 --- api/R/add_database.R | 42 ----- api/R/add_pft.R | 45 ----- api/R/add_rabbitmq.R | 65 ------- api/R/add_workflow.R | 31 ---- api/R/get_model_id.R | 37 ---- api/R/insert_new_workflow.R | 105 ----------- api/R/list_runs.R | 15 -- api/R/prepared_query.R | 83 --------- api/R/search.R | 85 --------- api/R/submit_workflow.R | 111 ------------ api/R/thredds_fileserver.R | 76 -------- api/R/thredds_opendap.R | 36 ---- api/R/watch_progress.R | 36 ---- api/R/zzz.R | 96 ---------- api/inst/test_sipnet.R | 44 ----- api/man/add_database.Rd | 52 ------ api/man/add_pft.Rd | 46 ----- api/man/add_rabbitmq.Rd | 62 ------- api/man/add_workflow.Rd | 26 --- api/man/dbfile_url.Rd | 22 --- api/man/get_model_id.Rd | 27 --- api/man/get_next_workflow_id.Rd | 29 --- api/man/get_run_id.Rd | 22 --- api/man/insert_new_workflow.Rd | 59 ------- api/man/listToXML.Rd | 22 --- api/man/list_runs.Rd | 22 
--- api/man/output_url.Rd | 24 --- api/man/pecanapi_options.Rd | 63 ------- api/man/prepared_query.Rd | 76 -------- api/man/run_dap.Rd | 29 --- api/man/run_url.Rd | 31 ---- api/man/search.Rd | 80 --------- api/man/submit_workflow.Rd | 54 ------ api/man/thredds_dap_url.Rd | 34 ---- api/man/thredds_fs_url.Rd | 29 --- api/man/watch_workflow.Rd | 29 --- api/man/workflow_output.Rd | 23 --- api/vignettes/pecanapi.Rmd | 302 -------------------------------- 44 files changed, 1 insertion(+), 2155 deletions(-) delete mode 100644 api/.Rbuildignore delete mode 100644 api/.gitignore delete mode 100644 api/DESCRIPTION delete mode 100644 api/LICENSE delete mode 100644 api/NAMESPACE delete mode 100644 api/R/add_database.R delete mode 100644 api/R/add_pft.R delete mode 100644 api/R/add_rabbitmq.R delete mode 100644 api/R/add_workflow.R delete mode 100644 api/R/get_model_id.R delete mode 100644 api/R/insert_new_workflow.R delete mode 100644 api/R/list_runs.R delete mode 100644 api/R/prepared_query.R delete mode 100644 api/R/search.R delete mode 100644 api/R/submit_workflow.R delete mode 100644 api/R/thredds_fileserver.R delete mode 100644 api/R/thredds_opendap.R delete mode 100644 api/R/watch_progress.R delete mode 100644 api/R/zzz.R delete mode 100644 api/inst/test_sipnet.R delete mode 100644 api/man/add_database.Rd delete mode 100644 api/man/add_pft.Rd delete mode 100644 api/man/add_rabbitmq.Rd delete mode 100644 api/man/add_workflow.Rd delete mode 100644 api/man/dbfile_url.Rd delete mode 100644 api/man/get_model_id.Rd delete mode 100644 api/man/get_next_workflow_id.Rd delete mode 100644 api/man/get_run_id.Rd delete mode 100644 api/man/insert_new_workflow.Rd delete mode 100644 api/man/listToXML.Rd delete mode 100644 api/man/list_runs.Rd delete mode 100644 api/man/output_url.Rd delete mode 100644 api/man/pecanapi_options.Rd delete mode 100644 api/man/prepared_query.Rd delete mode 100644 api/man/run_dap.Rd delete mode 100644 api/man/run_url.Rd delete mode 100644 api/man/search.Rd delete mode 100644 api/man/submit_workflow.Rd delete mode 100644 api/man/thredds_dap_url.Rd delete mode 100644 api/man/thredds_fs_url.Rd delete mode 100644 api/man/watch_workflow.Rd delete mode 100644 api/man/workflow_output.Rd delete mode 100644 api/vignettes/pecanapi.Rmd diff --git a/CHANGELOG.md b/CHANGELOG.md index 2893b42bbb6..f2f74b62027 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -41,6 +41,7 @@ This is a major change: ### Changed +- Removed old api, now split into rpecanapi and apps/api. - Now using R 4.0.2 for Docker images. This is a major change. Newer version of R and using Ubuntu 20.04 instead of Debian. - Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up per #2621. - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592). 
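For context on the removal recorded above: the deleted pecanapi package drove a remote, Dockerized PEcAn instance from inside an R session. Below is a minimal sketch of the workflow it supported, reconstructed from the deleted sources and man pages that follow in this patch; the connection details, IDs, and PFT name are placeholder assumptions, and rpecanapi is now the supported route.

library(pecanapi)   # the package deleted by this commit; see rpecanapi for its successor
library(magrittr)   # pecanapi suggested magrittr for pipe-style settings assembly

# Connect to the BETY database (host/user/password here are placeholders)
con <- DBI::dbConnect(RPostgres::Postgres(), host = "localhost",
                      user = "bety", password = "bety", dbname = "bety")

# Register a workflow record, then assemble a settings list for it
workflow <- insert_new_workflow(con, site_id = 772, model_id = 1000000002)
settings <- list() %>%
  add_workflow(workflow) %>%            # attach the workflow metadata
  add_database() %>%                    # BETY settings, read from pecanapi.db_* options
  add_pft("temperate.deciduous") %>%    # one or more PFTs
  add_rabbitmq(con = con)               # queue info so the model container picks up the run

submit_workflow(settings)               # hand the settings to RabbitMQ to start the run
watch_workflow(workflow$id)             # poll progress until the run finishes
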
diff --git a/api/.Rbuildignore b/api/.Rbuildignore deleted file mode 100644 index 91114bf2f2b..00000000000 --- a/api/.Rbuildignore +++ /dev/null @@ -1,2 +0,0 @@ -^.*\.Rproj$ -^\.Rproj\.user$ diff --git a/api/.gitignore b/api/.gitignore deleted file mode 100644 index 807ea251739..00000000000 --- a/api/.gitignore +++ /dev/null @@ -1,3 +0,0 @@ -.Rproj.user -.Rhistory -.RData diff --git a/api/DESCRIPTION b/api/DESCRIPTION deleted file mode 100644 index aa9a52c0b8d..00000000000 --- a/api/DESCRIPTION +++ /dev/null @@ -1,22 +0,0 @@ -Package: pecanapi -Title: R API for Dockerized remote PEcAn instances -Version: 1.7.1 -Date: 2019-09-05 -Authors@R: person("Alexey", "Shiklomanov", email = "alexey.shiklomanov@gmail.coim", role = c("aut", "cre")) -Description: Start PEcAn workflows, and analyze their outputs from within an R session. -Depends: R (>= 3.5.1) -Imports: - bit64, - DBI, - httr, - jsonlite, - XML, - RPostgres -Suggests: - magrittr, - ncdf4 -License: BSD_3_clause + file LICENSE -Encoding: UTF-8 -LazyData: true -RoxygenNote: 7.0.2 -Roxygen: list(markdown = TRUE) diff --git a/api/LICENSE b/api/LICENSE deleted file mode 100644 index 5a9e44128f1..00000000000 --- a/api/LICENSE +++ /dev/null @@ -1,34 +0,0 @@ -## This is the master copy of the PEcAn License - -University of Illinois/NCSA Open Source License - -Copyright (c) 2012, University of Illinois, NCSA. All rights reserved. - -PEcAn project -www.pecanproject.org - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal with the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -- Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimers. -- Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimers in the - documentation and/or other materials provided with the distribution. -- Neither the names of University of Illinois, NCSA, nor the names - of its contributors may be used to endorse or promote products - derived from this Software without specific prior written permission. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR -ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF -CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE. 
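A note on how the package was configured: it read its defaults from options() rather than from function arguments, as the add_database() source deleted below shows. This small sketch shows the settings fragment that helper produced; the option names come from the deleted source, while the values are illustrative assumptions.

library(pecanapi)

options(pecanapi.db_hostname = "postgres",
        pecanapi.db_user     = "bety",
        pecanapi.db_password = "bety",
        pecanapi.db_dbname   = "bety",
        pecanapi.db_driver   = "PostgreSQL",
        pecanapi.db_dbfiles  = "/data/dbfiles",
        pecanapi.db_write    = FALSE)

settings <- add_database(list())
str(settings$database)
# $bety holds host, user, password, dbname, driver, and write;
# $dbfiles holds the dbfiles path
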
-
diff --git a/api/NAMESPACE b/api/NAMESPACE
deleted file mode 100644
index f3c15e990c4..00000000000
--- a/api/NAMESPACE
+++ /dev/null
@@ -1,24 +0,0 @@
-# Generated by roxygen2: do not edit by hand
-
-export(add_database)
-export(add_pft)
-export(add_pft_list)
-export(add_rabbitmq)
-export(add_workflow)
-export(dbfile_url)
-export(get_model_id)
-export(get_next_workflow_id)
-export(insert_new_workflow)
-export(list_runs)
-export(output_url)
-export(prepared_query)
-export(prepared_statement)
-export(run_dap)
-export(run_url)
-export(search_models)
-export(search_pfts)
-export(search_sites)
-export(submit_workflow)
-export(thredds_dap_url)
-export(watch_workflow)
-export(workflow_output)
diff --git a/api/R/add_database.R b/api/R/add_database.R
deleted file mode 100644
index 9919e70d946..00000000000
--- a/api/R/add_database.R
+++ /dev/null
@@ -1,42 +0,0 @@
-#' Add PEcAn database information to settings list
-#'
-#' @param settings Input settings list (list)
-#' @param host Database server hostname (character, default = option `pecanapi.db_hostname`)
-#' @param user Database user name (character, default = option `pecanapi.db_user`)
-#' @param password Database password (character, default = option `pecanapi.db_password`)
-#' @param dbname Database name (character, default = option `pecanapi.db_dbname`)
-#' @param driver Database driver (character, default = option `pecanapi.db_driver`)
-#' @param dbfiles Path to `dbfiles` directory (character, default =
-#' option `pecanapi.db_dbfiles`)
-#' @param write Whether or not to write to the database (logical,
-#' default = option `pecanapi.db_write`)
-#' @param overwrite Whether or not to overwrite any already existing
-#' input settings (logical, default = `FALSE`)
-#' @param ... Additional named PEcAn database configuration tags
-#' @return Updated settings list with database information included
-#' @author Alexey Shiklomanov
-#' @export
-add_database <- function(settings,
-                         host = getOption("pecanapi.db_hostname"),
-                         user = getOption("pecanapi.db_user"),
-                         password = getOption("pecanapi.db_password"),
-                         dbname = getOption("pecanapi.db_dbname"),
-                         driver = getOption("pecanapi.db_driver"),
-                         dbfiles = getOption("pecanapi.db_dbfiles"),
-                         write = getOption("pecanapi.db_write"),
-                         overwrite = FALSE,
-                         ...) {
-  bety_list <- list(
-    host = host,
-    user = user,
-    password = password,
-    dbname = dbname,
-    driver = driver,
-    write = write,
-    ...
-  )
-  db_list <- list(database = list(bety = bety_list, dbfiles = dbfiles))
-  new_settings <- modifyList(settings, db_list)
-  if (!overwrite) new_settings <- modifyList(new_settings, settings)
-  new_settings
-}
diff --git a/api/R/add_pft.R b/api/R/add_pft.R
deleted file mode 100644
index 4a74e0f7562..00000000000
--- a/api/R/add_pft.R
+++ /dev/null
@@ -1,45 +0,0 @@
-#' Add a PFT or list of PFTs to a settings object
-#'
-#' @param settings Input PEcAn settings list
-#' @param name PFT name (character)
-#' @param pfts Either a character vector of PFT names, or a list of
-#' PFT objects (which must each include an element named "name")
-#' @param ... Additional arguments for modifying the PFT.
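The two-step `modifyList()` idiom in `add_database()` above is easy to misread, so here is a minimal, self-contained sketch of the `overwrite = FALSE` merge semantics (the settings values are made up):

```r
# Lay the new defaults over the settings, then re-apply the original
# settings so that values the user already set win out.
settings <- list(database = list(bety = list(user = "my_custom_user")))
defaults <- list(database = list(bety = list(user = "bety", password = "bety")))

merged <- modifyList(settings, defaults)  # defaults clobber user values...
merged <- modifyList(merged, settings)    # ...then user values are restored
str(merged$database$bety)
#> List of 2
#>  $ user    : chr "my_custom_user"
#>  $ password: chr "bety"
```

The `user` set in `settings` survives, while the missing `password` is filled in from the defaults -- the same behaviour `add_database()`, `add_rabbitmq()`, and `add_workflow()` all rely on.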
-#' @return Updated settings list with PFTs added
-#' @author Alexey Shiklomanov
-#' @examples
-#' settings <- list()
-#' add_pft(settings, "Optics.Temperate_Early_Hardwood")
-#' add_pft_list(settings, sprintf("Temperate_%s_Hardwood", c("Early", "Mid", "Late")))
-#' add_pft_list(
-#'   settings,
-#'   list(list(name = "deciduous", num = 3),
-#'        list(name = "coniferous", num = 6))
-#' )
-#' if (require("magrittr")) {
-#'   list() %>%
-#'     add_pft("early_hardwood") %>%
-#'     add_pft("mid_hardwood") %>%
-#'     add_pft("late_hardwood")
-#' }
-#' @export
-add_pft <- function(settings, name, ...) {
-  pft_list <- settings[["pfts"]]
-  new_pft <- list(name = name, ...)
-  settings[["pfts"]] <- c(pft_list, list(pft = new_pft))
-  settings
-}
-
-#' @rdname add_pft
-#' @export
-add_pft_list <- function(settings, pfts, ...) {
-  for (pft in pfts) {
-    if (is.character(pfts)) {
-      settings <- add_pft(settings, pft, ...)
-    } else {
-      args <- c(list(settings = settings), pft)
-      settings <- do.call(add_pft, args)
-    }
-  }
-  settings
-}
diff --git a/api/R/add_rabbitmq.R b/api/R/add_rabbitmq.R
deleted file mode 100644
index cdda204dd8c..00000000000
--- a/api/R/add_rabbitmq.R
+++ /dev/null
@@ -1,65 +0,0 @@
-#' Add RabbitMQ configuration
-#'
-#' @inheritParams add_workflow
-#' @param model_queue Name of RabbitMQ model queue (character, default
-#' = `NULL`). This should be in the form `modelname_modelrevision`.
-#' If this is `NULL`, this function will try to figure it out based
-#' on the model ID in the settings object, which requires access to
-#' the database (i.e. `con` must not be `NULL`).
-#' @param con Database connection object (default = `NULL`). Ignored
-#' unless `model_queue` is `NULL`.
-#' @param rabbitmq_user Username for RabbitMQ server (character,
-#' default = option `pecanapi.rabbitmq_user`)
-#' @param rabbitmq_password Password for RabbitMQ server (character,
-#' default = option `pecanapi.rabbitmq_password`)
-#' @param rabbitmq_service Name of RabbitMQ `docker-compose` service
-#' (character, default = option `pecanapi.rabbitmq_service`)
-#' @param rabbitmq_service_port RabbitMQ service port (numeric or
-#' character, default = option `pecanapi.rabbitmq_service_port`).
-#' Note that this is internal to the Docker stack, _not_ the port
-#' published on the host. The only reason this should be changed is
-#' if you changed low-level RabbitMQ settings in the
-#' `docker-compose.yml` file
-#' @param rabbitmq_vhost RabbitMQ vhost (character, default = option
-#' `pecanapi.rabbitmq_vhost`). The only reason this should be
-#' changed is if you change the low-level RabbitMQ setup in the
-#' `docker-compose.yml` file.
-#' @return Modified settings list with RabbitMQ configuration added.
-#' @author Alexey Shiklomanov
-#' @export
-add_rabbitmq <- function(settings,
-                         model_queue = NULL,
-                         con = NULL,
-                         rabbitmq_user = getOption("pecanapi.rabbitmq_user"),
-                         rabbitmq_password = getOption("pecanapi.rabbitmq_password"),
-                         rabbitmq_service = getOption("pecanapi.rabbitmq_service"),
-                         rabbitmq_service_port = getOption("pecanapi.rabbitmq_service_port"),
-                         rabbitmq_vhost = getOption("pecanapi.rabbitmq_vhost"),
-                         overwrite = FALSE) {
-  if (is.null(model_queue) && is.null(settings[["rabbitmq"]][["queue"]])) {
-    # Deduce model queue from settings and database
-    if (is.null(con)) {
-      stop("Database connection object (`con`) required to automatically determine model queue.")
-    }
-    model_id <- settings[["model"]][["id"]]
-    if (is.null(model_id)) {
-      stop("Settings list must include model ID to automatically determine model queue.")
-    }
-    model_dat <- prepared_query(con, (
-      "SELECT model_name, revision FROM models WHERE id = $1"
-    ), list(model_id))
-    if (nrow(model_dat) == 0) stop("No matching model found. Unable to automatically determine model queue.")
-    model_queue <- paste(model_dat[["model_name"]], model_dat[["revision"]], sep = "_")
-  }
-
-  rabbitmq_settings <- list(
-    uri = sprintf("amqp://%s:%s@%s:%d/%s",
-                  rabbitmq_user, rabbitmq_password,
-                  rabbitmq_service, rabbitmq_service_port, rabbitmq_vhost),
-    queue = model_queue
-  )
-  new_settings <- modifyList(settings, list(host = list(rabbitmq = rabbitmq_settings)))
-  if (!overwrite) {
-    new_settings <- modifyList(new_settings, settings)
-  }
-  new_settings
-}
diff --git a/api/R/add_workflow.R b/api/R/add_workflow.R
deleted file mode 100644
index 8f9023c6360..00000000000
--- a/api/R/add_workflow.R
+++ /dev/null
@@ -1,31 +0,0 @@
-#' Add information from workflow data frame into settings list.
-#'
-#' @param settings Partially completed PEcAn `settings` list.
-#' @param workflow_df Workflow `data.frame`, such as that returned by
-#' [insert_new_workflow], or from running query `SELECT * FROM
-#' workflows`
-#' @param overwrite Whether or not to overwrite existing `settings`
-#' tags (logical; default = `FALSE`)
-#' @return PEcAn settings list with workflow information added
-#' @author Alexey Shiklomanov
-#' @export
-add_workflow <- function(settings, workflow_df, overwrite = FALSE) {
-  workflow_settings <- list(
-    workflow = list(id = workflow_df[["id"]]),
-    outdir = workflow_df[["folder"]],
-    model = list(id = workflow_df[["model_id"]]),
-    run = list(
-      site = list(id = workflow_df[["site_id"]],
-                  met.start = workflow_df[["start_date"]],
-                  met.end = workflow_df[["end_date"]]),
-      start.date = workflow_df[["start_date"]],
-      end.date = workflow_df[["end_date"]]
-    ),
-    info = list(notes = workflow_df[["notes"]])
-  )
-  new_settings <- modifyList(settings, workflow_settings)
-  if (!overwrite) {
-    new_settings <- modifyList(new_settings, settings)
-  }
-  new_settings
-}
diff --git a/api/R/get_model_id.R b/api/R/get_model_id.R
deleted file mode 100644
index ed9fa452650..00000000000
--- a/api/R/get_model_id.R
+++ /dev/null
@@ -1,37 +0,0 @@
-#' Retrieve database ID of a particular version of a model
-#'
-#' @param con Database connection object (`PqConnection`)
-#' @param name Model name (character)
-#' @param revision Model version/revision (character)
-#' @param multi_action Action to take if multiple models found
-#' (character). Must be one of "first", "last" (default), "all", or "error".
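For reference, the `uri` that `add_rabbitmq()` above assembles is an ordinary AMQP connection string; a quick illustration using the package's default options (guest/guest credentials, the `rabbitmq` service on internal port 5672, vhost `%2F`):

```r
# Illustrative only: the AMQP URI produced from the zzz.R defaults.
sprintf("amqp://%s:%s@%s:%d/%s",
        "guest", "guest", "rabbitmq", 5672L, "%2F")
#> [1] "amqp://guest:guest@rabbitmq:5672/%2F"
```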
-#' @return Model ID, as `integer64`
-#' @author Alexey Shiklomanov
-#' @export
-get_model_id <- function(con, name, revision, multi_action = "last") {
-  qry <- DBI::dbSendQuery(con, paste(
-    "SELECT id FROM models WHERE model_name = $1 and revision = $2 ORDER BY id DESC"
-  ))
-  res <- DBI::dbBind(qry, list(name, revision))
-  on.exit(DBI::dbClearResult(res), add = TRUE)
-  id <- DBI::dbFetch(res)[["id"]]
-  if (length(id) == 0) {
-    stop("Model ", name, " with revision ", revision, " not found.")
-  }
-  if (length(id) > 1) {
-    warning("Multiple models with name ", name, " and revision ", revision, " found. ",
-            "Returning ", multi_action, " result.")
-    if (multi_action == "first") {
-      id <- head(id, 1)
-    } else if (multi_action == "last") {
-      id <- tail(id, 1)
-    } else if (multi_action == "all") {
-      # Return all IDs -- leave as is
-    } else if (multi_action == "error") {
-      stop("Multiple models found, and 'error' action selected.")
-    } else {
-      stop("Unknown multi_action: ", multi_action)
-    }
-  }
-  id
-}
diff --git a/api/R/insert_new_workflow.R b/api/R/insert_new_workflow.R
deleted file mode 100644
index 9be0d1a7038..00000000000
--- a/api/R/insert_new_workflow.R
+++ /dev/null
@@ -1,105 +0,0 @@
-#' Insert a new workflow into PEcAn database, returning the workflow
-#' as a `data.frame`
-#'
-#' @inheritParams prepared_query
-#' @param site_id Site ID from `sites` table (numeric)
-#' @param model_id Model ID from `models` table (numeric)
-#' @param start_date Model run start date (character or POSIX)
-#' @param end_date Model run end date (character or POSIX)
-#' @param user_id User ID from `users` table (default = option
-#' `pecanapi.user_id`). Note that this option is _not set by
-#' default_, and this function will not run without a set `user_id`.
-#' @param hostname Workflow server hostname (character; default =
-#' option `pecanapi.workflow_hostname`)
-#' @param folder_prefix Output directory prefix (character; default =
-#' option `pecanapi.workflow_prefix`). Workflow ID will be appended
-#' to the end with `paste0`
-#' @param params Additional workflow parameters, stored in
-#' `workflows.params` (character or NULL (default))
-#' @param notes Additional workflow notes, stored in `workflows.notes`
-#' (character or NULL (default))
-#' @return `data.frame` containing new workflow(s), including all
-#' columns from `workflows` table.
-#' @author Alexey Shiklomanov
-#' @export
-insert_new_workflow <- function(con,
-                                site_id,
-                                model_id,
-                                start_date,
-                                end_date,
-                                user_id = getOption("pecanapi.user_id"),
-                                hostname = getOption("pecanapi.workflow_hostname"),
-                                folder_prefix = getOption("pecanapi.workflow_prefix"),
-                                params = NULL,
-                                notes = NULL) {
-  if (is.null(notes)) notes <- ""
-  if (is.null(params)) params <- ""
-  if (is.null(user_id)) {
-    stop("API-based inserts into the workflows table are not allowed without a user ID. ",
-         "Either pass the user_id directly, or set it via `options(pecanapi.user_id = )`")
-  }
-  stopifnot(
-    # Must be scalar
-    length(folder_prefix) == 1,
-    length(user_id) <= 1,
-    length(con) == 1,
-    # Must be RPostgres connection for prepared queries
-    inherits(con, "PqConnection")
-  )
-  lens <- lengths(list(site_id, model_id, start_date, end_date))
-  n_workflow <- max(lens)
-  if (!all(lens == 1 | lens == n_workflow)) {
-    stop(
-      "All inputs must be either the same length or length 1. ",
-      "You provided the following: ",
-      paste(sprintf("%s (%s)", c("site_id", "model_id", "start_date", "end_date"), lens),
-            collapse = ", ")
-    )
-  }
-  id <- bit64::integer64()
-  for (i in seq_len(n_workflow)) {
-    id[i] <- get_next_workflow_id(con)[[1]]
-  }
-  stopifnot(length(id) >= 1)
-  folder <- paste0(folder_prefix, id)
-  query_string <- paste(
-    "INSERT INTO workflows",
-    "(id, site_id, model_id, folder,",
-    "hostname, start_date, end_date, params,",
-    "notes,",
-    "user_id,",
-    "advanced_edit)",
-    "VALUES",
-    "($1, $2, $3, $4,",
-    "$5, $6, $7, $8,",
-    "$9,",
-    "$10,",
-    "false)",
-    "RETURNING *"
-  )
-  params <- list(id, site_id, model_id, folder,
-                 hostname, start_date, end_date, params,
-                 notes, user_id)
-  prepared_query(con, query_string, params)
-}
-
-#' Get the current workflow ID and update the internal workflow ID
-#' PostgreSQL sequence
-#'
-#' The `workflows` table has an internal
-#' [sequence](https://www.postgresql.org/docs/9.6/sql-createsequence.html)
-#' that keeps track of and automatically updates the workflow ID
-#' (that's why inserting into the table without explicitly setting a
-#' workflow ID is a safe and robust operation). This function is a
-#' wrapper around the
-#' [`nextval` function](https://www.postgresql.org/docs/9.6/functions-sequence.html),
-#' which retrieves the current value of the sequence _and_ advances
-#' the sequence by 1.
-#'
-#' @inheritParams prepared_query
-#' @return Workflow ID, as a numeric/`integer64` integer
-#' @author Alexey Shiklomanov
-#' @export
-get_next_workflow_id <- function(con) {
-  DBI::dbGetQuery(con, "SELECT nextval('workflows_id_seq')")[[1]]
-}
diff --git a/api/R/list_runs.R b/api/R/list_runs.R
deleted file mode 100644
index 0fef629806e..00000000000
--- a/api/R/list_runs.R
+++ /dev/null
@@ -1,15 +0,0 @@
-#' List all runs associated with a particular workflow
-#'
-#' @inheritParams prepared_query
-#' @param workflow_id ID of target workflow (character or numeric)
-#' @return Runs `data.frame` subset to rows containing specific workflow
-#' @author Alexey Shiklomanov
-#' @export
-list_runs <- function(con, workflow_id) {
-  prepared_query(con, paste(
-    "SELECT runs.* FROM runs",
-    "INNER JOIN ensembles ON (runs.ensemble_id = ensembles.id)",
-    "INNER JOIN workflows ON (ensembles.workflow_id = workflows.id)",
-    "WHERE workflows.id = $1"
-  ), list(workflow_id))
-}
diff --git a/api/R/prepared_query.R b/api/R/prepared_query.R
deleted file mode 100644
index f5dd90d399d..00000000000
--- a/api/R/prepared_query.R
+++ /dev/null
@@ -1,83 +0,0 @@
-#' Execute a PostgreSQL prepared query or statement
-#'
-#' This provides a safe and efficient way of executing a query or
-#' statement with a list of parameters to be substituted.
-#'
-#' A prepared statement consists of a template query (`query`), which
-#' is compiled prior to execution, and a series of parameters
-#' (`params`) that are passed into the relevant spots in the template
-#' query. In R's `DBI` database interface, this uses three functions:
-#' [DBI::dbSendQuery] to create the template query, [DBI::dbBind] to
-#' bind parameters to that query, and [DBI::dbFetch] to retrieve the
-#' results. Statements ([DBI::dbSendStatement]) work the same way,
-#' except there are no results to fetch with [DBI::dbFetch].
-#'
-#' Prepared statements have several important advantages. First of
-#' all, they are automatically and efficiently vectorized, meaning
-#' that it is possible to build a single query and run it against a
-#' vector of parameters.
-#' Second, they automatically enforce strict
-#' type checking and quoting of inputs, meaning that they are secure
-#' against SQL injection attacks and input mistakes (e.g. giving a
-#' character when the table expects a number).
-#'
-#' @param con Database connection, as created by [RPostgres::dbConnect]
-#' @param query Query template (character, length 1)
-#' @param params Query parameters (unnamed list)
-#' @return For `prepared_query`, the query result as a `data.frame`.
-#' `prepared_statement` exits silently on success.
-#' @author Alexey Shiklomanov
-#' @examples
-#' \dontrun{
-#' prepared_query(con, paste(
-#'   "SELECT id, folder FROM workflows",
-#'   "WHERE user_id = $1"
-#' ), list(my_user_id))
-#'
-#' prepared_statement(con, paste(
-#'   "INSERT INTO workflows (id, site_id, model_id, folder)",
-#'   "VALUES ($1, $2, $3, $4)"
-#' ), list(workflow_id, my_site_id, my_model_id, my_folder))
-#'
-#' # Note that queries and statements are automatically vectorized.
-#' # The below query will execute two searches, and return the results
-#' # of both in one data.frame
-#' prepared_query(con, paste(
-#'   "SELECT * FROM dbfiles",
-#'   "WHERE file_name ILIKE $1"
-#' ), list(c("%cruncep%", "%gfdl%")))
-#'
-#' # Similarly, this will create two workflows, all with the same
-#' # model_id (1) but different site_ids (33, 67)
-#' prepared_statement(con, paste(
-#'   "INSERT INTO workflows (site_id, model_id)",
-#'   "VALUES ($1, $2)"
-#' ), list(c(33, 67), 1))
-#'
-#'}
-#' @export
-prepared_query <- function(con, query, params) {
-  stopifnot(
-    inherits(con, "PqConnection"),
-    is.character(query),
-    length(query) == 1,
-    is.list(params)
-  )
-  qry <- DBI::dbSendQuery(con, query)
-  res <- DBI::dbBind(qry, params)
-  on.exit(DBI::dbClearResult(res), add = TRUE)
-  DBI::dbFetch(res)
-}
-
-#' @rdname prepared_query
-#' @export
-prepared_statement <- function(con, query, params) {
-  stopifnot(
-    inherits(con, "PqConnection"),
-    is.character(query),
-    length(query) == 1,
-    is.list(params)
-  )
-  qry <- DBI::dbSendStatement(con, query)
-  res <- DBI::dbBind(qry, params)
-  on.exit(DBI::dbClearResult(res), add = TRUE)
-}
diff --git a/api/R/search.R b/api/R/search.R
deleted file mode 100644
index bcf1583b1a2..00000000000
--- a/api/R/search.R
+++ /dev/null
@@ -1,85 +0,0 @@
-#' Search for sites or models
-#'
-#' @inheritParams prepared_query
-#' @param name Model/PFT (depending on function) name search string (character)
-#' @param sitename Site name search string (character)
-#' @param definition PFT definition search string (character)
-#' @param modeltype Model type search string (character)
-#' @param revision Model version search string (character)
-#' @param auto_pct Logical. If `TRUE` (default), automatically
-#' surround search strings in `%`. If this is `FALSE`, you should
-#' explicitly add `%` wildcards to the search arguments yourself.
-#' @param ignore.case Logical. If `TRUE` (default) use
-#' case-insensitive search (SQL `ILIKE` operator); otherwise, use
-#' case-sensitive search (SQL `LIKE` operator).
-#' @return Bety `models` table (`data.frame`) subset to matching model
-#' name or version
-#' @author Alexey Shiklomanov
-#' @examples
-#' \dontrun{
-#' search_models(con, "SIPNET")
-#'
-#' # Partial match
-#' search_models(con, "ED")
-#' search_models(con, modeltype = "ED")
-#' search_sites(con, "UMBS")
-#' search_pfts(con, "early", modeltype = "ED")
-#'
-#' # Case sensitivity
-#' search_models(con, "ed")
-#' search_models(con, "ed", ignore.case = FALSE)
-#'
-#' # Starts with UMBS
-#' search_sites(con, "UMBS%", auto_pct = FALSE)
-#'
-#' # SQL wildcards can still be used inside search strings.
-#' search_pfts(con, "early%hardwood")
-#' }
-#' @rdname search
-#' @export
-search_models <- function(con, name = "", revision = "", modeltype = "", auto_pct = TRUE, ignore.case = TRUE) {
-  if (auto_pct) {
-    name <- paste0("%", name, "%")
-    revision <- paste0("%", revision, "%")
-    modeltype <- paste0("%", modeltype, "%")
-  }
-  like <- "LIKE"
-  if (ignore.case) like <- "ILIKE"
-  prepared_query(con, paste(
-    "SELECT models.*, modeltypes.name AS modeltype FROM models",
-    "INNER JOIN modeltypes ON (models.modeltype_id = modeltypes.id)",
-    "WHERE model_name", like, "$1 AND revision", like, "$2 AND modeltypes.name", like, "$3"
-  ), list(name, revision, modeltype))
-}
-
-#' @rdname search
-#' @export
-search_sites <- function(con, sitename = "", auto_pct = TRUE, ignore.case = TRUE) {
-  if (auto_pct) {
-    sitename <- paste0("%", sitename, "%")
-  }
-  like <- "LIKE"
-  if (ignore.case) like <- "ILIKE"
-  prepared_query(con, paste(
-    "SELECT * FROM sites WHERE sitename", like, "$1"
-  ), list(sitename))
-}
-
-#' @rdname search
-#' @export
-search_pfts <- function(con, name = "", definition = "", modeltype = "", auto_pct = TRUE, ignore.case = TRUE) {
-  if (auto_pct) {
-    name <- paste0("%", name, "%")
-    definition <- paste0("%", definition, "%")
-    modeltype <- paste0("%", modeltype, "%")
-  }
-  like <- "LIKE"
-  if (ignore.case) like <- "ILIKE"
-  prepared_query(con, paste(
-    "SELECT pfts.id AS id, pfts.name AS name, pfts.definition AS definition, pfts.pft_type AS pft_type,",
-    "modeltypes.name AS modeltype, modeltypes.id AS modeltype_id",
-    "FROM pfts INNER JOIN modeltypes",
-    "ON (pfts.modeltype_id = modeltypes.id)",
-    "WHERE pfts.name", like, "$1 AND pfts.definition", like, "$2 AND modeltypes.name", like, "$3"
-  ), list(name, definition, modeltype))
-}
diff --git a/api/R/submit_workflow.R b/api/R/submit_workflow.R
deleted file mode 100644
index 043c83488a9..00000000000
--- a/api/R/submit_workflow.R
+++ /dev/null
@@ -1,111 +0,0 @@
-#' Post complete settings list as RabbitMQ message
-#'
-#' @param settings PEcAn settings list object
-#' @param rabbitmq_hostname RabbitMQ server hostname (character.
-#' Default = option `pecanapi.docker_hostname`)
-#' @param rabbitmq_port RabbitMQ server port (character or numeric.
-#' Default = option `pecanapi.docker_port`)
-#' @param rabbitmq_user RabbitMQ user name (character. Default =
-#' option `pecanapi.rabbitmq_user`)
-#' @param rabbitmq_password RabbitMQ password (character. Default =
-#' option `pecanapi.rabbitmq_password`)
-#' @param rabbitmq_prefix Complete RabbitMQ API prefix. If `NULL`
-#' (default), this is constructed from the other arguments. If this
-#' argument is not `NULL`, it overrides all other arguments except
-#' `httr_auth` and `settings`.
-#' @param httr_auth Whether or not to use [httr::authenticate] to
-#' generate CURL authentication (default = `TRUE`). If `FALSE`, you
-#' must pass the authentication as part of the RabbitMQ hostname or prefix.
-#' @param rabbitmq_frontend RabbitMQ management frontend path
-#' (character. Default = option `pecanapi.docker_rabbitmq_frontend`,
-#' i.e. `/rabbitmq`)
-#' @param https Whether or not to use `https`. If `FALSE`, use `http`.
-#' Default = option `pecanapi.docker_https`
-#' @return Curl `POST` output, parsed by [httr::content]
-#' @author Alexey Shiklomanov
-#' @export
-submit_workflow <- function(settings,
-                            rabbitmq_hostname = getOption("pecanapi.docker_hostname"),
-                            rabbitmq_frontend = getOption("pecanapi.docker_rabbitmq_frontend"),
-                            rabbitmq_port = getOption("pecanapi.docker_port"),
-                            rabbitmq_user = getOption("pecanapi.rabbitmq_user"),
-                            rabbitmq_password = getOption("pecanapi.rabbitmq_password"),
-                            rabbitmq_prefix = NULL,
-                            httr_auth = TRUE,
-                            https = getOption("pecanapi.docker_https")) {
-  if (is.numeric(rabbitmq_port)) rabbitmq_port <- as.character(rabbitmq_port)
-  # Create xml object
-  settings_xml <- listToXML(settings, "pecan")
-  settings_xml_string <- XML::toString.XMLNode(settings_xml)
-  settings_json <- jsonlite::toJSON(
-    list(pecan_xml = settings_xml_string, folder = settings[["outdir"]]),
-    auto_unbox = TRUE
-  )
-  bod_raw <- list(
-    properties = list(delivery_mode = 2),
-    routing_key = "pecan",
-    payload = settings_json,
-    payload_encoding = "string"
-  )
-  auth <- NULL
-  if (httr_auth) {
-    auth <- httr::authenticate(rabbitmq_user, rabbitmq_password)
-  }
-  bod <- jsonlite::toJSON(bod_raw, auto_unbox = TRUE)
-  # Compute base_url unconditionally so that follow_url below is
-  # defined even when a complete rabbitmq_prefix is supplied.
-  httpstring <- "http"
-  if (https) httpstring <- "https"
-  base_url <- sprintf("%s://%s:%s", httpstring, rabbitmq_hostname, rabbitmq_port)
-  if (is.null(rabbitmq_prefix)) {
-    rabbitmq_prefix <- paste0(base_url, rabbitmq_frontend)
-  }
-  result <- httr::POST(
-    paste0(rabbitmq_prefix, "/api/exchanges/%2F//publish"),
-    auth,
-    body = bod
-  )
-  follow_url <- sprintf("%s/pecan/05-running.php?workflowid=%s",
-                        base_url, as.character(settings[["workflow"]][["id"]]))
-  message("Follow workflow progress from your browser at:\n", follow_url)
-  httr::content(result)
-}
-
-#' Convert List to XML
-#'
-#' Can convert list or other object to an xml object using xmlNode
-#' @param item object to be converted. Despite the function name, need not actually be a list
-#' @param tag xml tag
-#' @return xmlNode
-#' @author David LeBauer, Carl Davidson, Rob Kooper
-listToXML <- function(item, tag) {
-
-  # just a textnode, or empty node with attributes
-  if (typeof(item) != "list") {
-    if (length(item) > 1) {
-      xml <- XML::xmlNode(tag)
-      for (name in names(item)) {
-        XML::xmlAttrs(xml)[[name]] <- item[[name]]
-      }
-      return(xml)
-    } else {
-      return(XML::xmlNode(tag, item))
-    }
-  }
-
-  # create the node
-  if (identical(names(item), c("text", ".attrs"))) {
-    # special case a node with text and attributes
-    xml <- XML::xmlNode(tag, item[["text"]])
-  } else {
-    # node with child nodes
-    xml <- XML::xmlNode(tag)
-    for (i in seq_along(item)) {
-      if (is.null(names(item)) || names(item)[i] != ".attrs") {
-        xml <- XML::append.xmlNode(xml, listToXML(item[[i]], names(item)[i]))
-      }
-    }
-  }
-
-  # add attributes to node
-  attrs <- item[[".attrs"]]
-  for (name in names(attrs)) {
-    XML::xmlAttrs(xml)[[name]] <- attrs[[name]]
-  }
-  return(xml)
-} # listToXML
diff --git a/api/R/thredds_fileserver.R b/api/R/thredds_fileserver.R
deleted file mode 100644
index 9d6c332ccba..00000000000
--- a/api/R/thredds_fileserver.R
+++ /dev/null
@@ -1,76 +0,0 @@
-#' Build URL for output file hosted on THREDDS fileServer
-#'
-#' @param workflow_id ID of target workflow (numeric or character)
-#' @param target Target path, relative to workflow directory (character)
-#' @param ...
Additional arguments to [thredds_fs_url] -#' @return THREDDS http fileServer URL (character) -#' @author Alexey Shiklomanov -#' @export -output_url <- function(workflow_id, target, ...) { - workflow_id <- as.character(workflow_id) - prefix_url <- sprintf("%s/outputs/PEcAn_%s", thredds_fs_url(...), workflow_id) - file.path(prefix_url, target) -} - -#' Build a THREDDS fileServer URL for a specific output file from a -#' specific run -#' -#' @inheritParams output_url -#' @param target Target file path, relative to output directory of -#' specific run (as specified by `run_id`) -#' @param run_id Run ID (numeric or character). If `NULL`, try to use -#' the run listed in the `runs.txt` file. If multiple runs are -#' available, throw a warning and use the first one. -#' @param ... Additional arguments to [thredds_fs_url] -#' @return HTTP fileServer URL of a particular run output file (character) -#' @author Alexey Shiklomanov -#' @export -run_url <- function(workflow_id, target, run_id = NULL, ...) { - if (is.null(run_id)) run_id <- get_run_id(workflow_id, ...) - new_target <- file.path("out", as.character(run_id), target) - output_url(workflow_id, new_target, ...) -} - -#' Get a run ID from the `runs.txt` file -#' -#' @inheritParams output_url -#' @return A single output ID (integer64) -#' @author Alexey Shiklomanov -get_run_id <- function(workflow_id, ...) { - run_id <- bit64::as.integer64(readLines(output_url(workflow_id, "run/runs.txt", ...))) - if (length(run_id) > 1) { - warning("Multiple runs found. Selecting first run.") - run_id <- head(run_id, 1) - } - run_id -} - -#' Build a THREDDS fileServer URL for a specific dbfile -#' -#' @param target Target file path, relative to `dbfiles` root folder (character) -#' @param ... Additional arguments to [thredds_fs_url] -#' @return THREDDS HTTP fileServer URL to dbfile (character) -#' @author Alexey Shiklomanov -#' @export -dbfile_url <- function(target, ...) { - file.path(thredds_fs_url(...), "dbfiles", target) -} - -#' Create a THREDDS fileServer URL prefix -#' -#' @param hostname THREDDS server hostname (default = "localhost") -#' @param port THREDDS server port (default = 8000) -#' @param https Logical. If `TRUE`, use https, otherwise use http -#' (default = `FALSE`). -#' @return THREDDS fileServer URL prefix (character) -#' @author Alexey Shiklomanov -thredds_fs_url <- function(hostname = getOption("pecanapi.docker_hostname"), - port = getOption("pecanapi.docker_port"), - https = getOption("pecanapi.docker_https")) { - httpstring <- if (https) "https" else "http" - port <- as.character(port) - sprintf( - "%s://%s:%s/thredds/fileServer", - httpstring, hostname, port - ) -} diff --git a/api/R/thredds_opendap.R b/api/R/thredds_opendap.R deleted file mode 100644 index 0086a0808eb..00000000000 --- a/api/R/thredds_opendap.R +++ /dev/null @@ -1,36 +0,0 @@ -#' Create a THREDDS OpenDAP access URL to a PEcAn file -#' -#' @param target Full path to target (character) relative to THREDDS -#' root. For outputs, this should start with "outputs/", and for -#' dbfiles, "dbfiles/". 
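To make the `listToXML()` serialization step in `submit_workflow()` (one file up) concrete, here is a hedged sketch of the round trip from a nested settings list to the XML payload; the element values are invented, and the hand-built `XML::xmlNode()` calls below only cover this simple no-attribute case:

```r
# A toy settings list of the kind submit_workflow() receives:
settings <- list(
  outdir = "/data/workflows/PEcAn_99",
  pfts = list(pft = list(name = "temperate.deciduous"))
)

# listToXML(settings, "pecan") would build the equivalent of:
node <- XML::xmlNode(
  "pecan",
  XML::xmlNode("outdir", "/data/workflows/PEcAn_99"),
  XML::xmlNode("pfts",
               XML::xmlNode("pft",
                            XML::xmlNode("name", "temperate.deciduous")))
)
cat(XML::toString.XMLNode(node))
#> <pecan>                                    (indentation approximate)
#>   <outdir>/data/workflows/PEcAn_99</outdir>
#>   <pfts><pft><name>temperate.deciduous</name></pft></pfts>
#> </pecan>
```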
-#' @inheritParams thredds_fs_url
-#' @return OpenDAP URL to target file (character)
-#' @author Alexey Shiklomanov
-#' @export
-thredds_dap_url <- function(target,
-                            hostname = getOption("pecanapi.docker_hostname"),
-                            port = getOption("pecanapi.docker_port"),
-                            https = getOption("pecanapi.docker_https")) {
-  httpstring <- if (https) "https" else "http"
-  port <- as.character(port)
-  prefix_url <- sprintf("%s://%s:%s/thredds/dodsC", httpstring, hostname, port)
-  file.path(prefix_url, target)
-}
-
-#' Create a THREDDS OpenDAP access URL to a specific model output file
-#'
-#' @inheritParams run_url
-#' @return OpenDAP URL to target file (character)
-#' @author Alexey Shiklomanov
-#' @export
-run_dap <- function(workflow_id, target, run_id = NULL, ...) {
-  if (is.null(run_id)) run_id <- get_run_id(workflow_id, ...)
-  new_target <- file.path(
-    "outputs",
-    paste0("PEcAn_", workflow_id),
-    "out",
-    run_id,
-    target
-  )
-  thredds_dap_url(new_target, ...)
-}
diff --git a/api/R/watch_progress.R b/api/R/watch_progress.R
deleted file mode 100644
index 99b17daaf16..00000000000
--- a/api/R/watch_progress.R
+++ /dev/null
@@ -1,36 +0,0 @@
-#' Retrieve the current output of a workflow given its ID
-#'
-#' @param workflow_id Workflow ID (character or numeric)
-#' @param ... Additional arguments to [output_url]
-#' @return Workflow output, as character vector with one item per line
-#' in `workflow.Rout` (as returned by `readLines`).
-#' @author Alexey Shiklomanov
-#' @export
-workflow_output <- function(workflow_id, ...) {
-  readLines(output_url(workflow_id, "workflow.Rout", ...))
-}
-
-#' Continuously monitor the progress of a workflow until it completes.
-#' To exit early, send an interrupt signal (Control-C).
-#'
-#' @inheritParams workflow_output
-#' @param nlines Print the last N lines (numeric, default = 10)
-#' @param sleep Number of seconds to sleep between status updates
-#' (numeric, default = 3)
-#' @return If successful, `invisible(NULL)` once the workflow completes
-#' @author Alexey Shiklomanov
-#' @export
-watch_workflow <- function(workflow_id, nlines = 10, sleep = 3, ...) {
-  repeat {
-    output <- tryCatch(
-      workflow_output(workflow_id, ...),
-      error = function(e) "Unable to access workflow output"
-    )
-    out_sub <- tail(output, nlines)
-    message(paste(out_sub, collapse = "\n"), "\n----------------\n")
-    if (any(grepl("PEcAn Workflow Complete", output))) {
-      return(invisible(NULL))
-    }
-    Sys.sleep(sleep)
-  }
-}
diff --git a/api/R/zzz.R b/api/R/zzz.R
deleted file mode 100644
index e7f3349a054..00000000000
--- a/api/R/zzz.R
+++ /dev/null
@@ -1,96 +0,0 @@
-#' Package-specific options
-#'
-#' To minimize the number of changes that have to happen for scripts
-#' using `pecanapi` to be shared across machines, most
-#' Docker/RabbitMQ/database-related configurations can be configured
-#' via `options`. All options take the form `pecanapi.optionname`,
-#' where `optionname` is the specific option. Note that these values
-#' are used only as default function arguments, and can be substituted
-#' for any individual function call by passing the appropriate argument.
-#'
-#' The following options (prefixed with `pecanapi.`) are used by
-#' `pecanapi`. To see the default values, just call `options()` with
-#' no arguments. These are sorted in order of decreasing likelihood of
-#' needing to be set by the user (first options are most likely to be
-#' changed across different systems):
-#'
-#' - `pecanapi.user_id` -- The User ID to associate with all workflows
-#' created by this package. This is the only option that _must_ be set
-#' by the user -- it is set to `NULL` by default, which will cause
-#' many of the functions in `pecanapi` to fail.
-#'
-#' - `docker_hostname`, `docker_port` -- The hostname and port of the
-#' Docker service. You can check that these values work by browsing to
-#' `docker_hostname:docker_port` (by default, `localhost:8000`) in a
-#' web browser.
-#'
-#' - `docker_rabbitmq_frontend` -- The "frontend rule" for RabbitMQ.
-#' By default, this is `/rabbitmq`, meaning that the RabbitMQ console
-#' is accessible at `localhost:8000/rabbitmq` (adjusted for whatever
-#' combination of `docker_hostname` and `docker_port` you are using).
-#'
-#' - `docker_https` -- (Logical) If `TRUE`, all URLs use `https` access.
-#' By default, this is `FALSE`.
-#'
-#' - `db_hostname` -- The name of the PostgreSQL container service
-#' inside the PEcAn stack. This is the same as its service name in
-#' `docker-compose.yml`. This is the hostname used by the `executor`
-#' service to access the database, and which is written into each
-#' `pecan.xml` file.
-#'
-#' - `db_user`, `db_password`, `db_dbname`, `db_driver`, `db_write` --
-#' These correspond to the `user`, `password`, `dbname`, `driver`, and
-#' `write` tags in the `database/bety` part of the PEcAn XML.
-#'
-#' - `rabbitmq_user`, `rabbitmq_password` -- The RabbitMQ
-#' authentication credentials. These are set in the
-#' `docker-compose.yml` file, under the `rabbitmq` service.
-#'
-#' - `rabbitmq_service`, `rabbitmq_service_port`, `rabbitmq_vhost` --
-#' The name, internal port, and `vhost` of the RabbitMQ service.
-#' Unless you are making major changes to the guts of
-#' `docker-compose.yml`, you shouldn't change these values (i.e. they
-#' should be the same on most machines).
-#'
-#' - `workflow_hostname` -- The hostname passed to the `host` section
-#' of the `pecan.xml`. By default, this is "docker".
-#'
-#' - `workflow_prefix` -- The location and directory prefix for
-#' storing workflow outputs. By default, this is
-#' `/data/workflows/PEcAn_`. The workflow ID will be appended directly
-#' to this value.
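In practice, a session only overrides the handful of options that differ from the `.onLoad()` defaults below; a typical setup might look like this (the user ID and hostname are placeholders):

```r
library(pecanapi)

# Set the one mandatory option, plus any machine-specific overrides:
options(
  pecanapi.user_id = 99000000002,  # required; NULL by default
  pecanapi.docker_hostname = "my-pecan-server",
  pecanapi.docker_port = 8000
)

# Anything left unset falls back to the package defaults:
getOption("pecanapi.workflow_prefix")
#> [1] "/data/workflows/PEcAn_"
```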
-#' @name pecanapi_options -NULL - -.onLoad <- function(libname, packagename) { - op <- options() - api_opts <- list( - pecanapi.user_id = NULL, - # Docker options (submit_workflow) - pecanapi.docker_hostname = "localhost", - pecanapi.docker_port = 8000, - pecanapi.docker_rabbitmq_frontend = "/rabbitmq", - pecanapi.docker_https = FALSE, - # Database settings (add_database) - pecanapi.db_hostname = "postgres", - pecanapi.db_user = "bety", - pecanapi.db_password = "bety", - pecanapi.db_dbname = "bety", - pecanapi.db_driver = "PostgreSQL", - pecanapi.db_dbfiles = "/data/dbfiles", - pecanapi.db_write = TRUE, - # Workflow options (insert_new_workflow) - pecanapi.workflow_hostname = "docker", - pecanapi.workflow_prefix = "/data/workflows/PEcAn_", - # RabbitMQ options (add_rabbitmq) - pecanapi.rabbitmq_user = "guest", - pecanapi.rabbitmq_password = "guest", - pecanapi.rabbitmq_service = "rabbitmq", - pecanapi.rabbitmq_service_port = 5672, - pecanapi.rabbitmq_vhost = "%2F" - ) - toset <- !(names(api_opts) %in% names(op)) - if (any(toset)) options(api_opts[toset]) - - invisible() -} diff --git a/api/inst/test_sipnet.R b/api/inst/test_sipnet.R deleted file mode 100644 index 7a1942b3c3e..00000000000 --- a/api/inst/test_sipnet.R +++ /dev/null @@ -1,44 +0,0 @@ -library(pecanapi) -import::from(magrittr, "%>%") - -options(pecanapi.user_id = 99000000002) - -# Establish database connection -con <- DBI::dbConnect( - RPostgres::Postgres(), - user = "bety", - password = "bety", - host = "localhost", - port = 5432 -) - -model_id <- get_model_id(con, "SIPNET", "136") -all_umbs <- search_sites(con, "umbs%disturbance") -site_id <- subset(all_umbs, !is.na(mat))[["id"]] -workflow <- insert_new_workflow(con, site_id, model_id, - start_date = "2004-01-01", - end_date = "2004-12-31") -workflow_id <- workflow[["id"]] - -settings <- list() %>% - add_workflow(workflow) %>% - add_database() %>% - add_pft("temperate.deciduous") %>% - add_rabbitmq(con = con) %>% - modifyList(list( - meta.analysis = list(iter = 3000, random.effects = FALSE), - run = list(inputs = list(met = list(source = "CRUNCEP", output = "SIPNET", method = "ncss"))), - ensemble = list(size = 1, variable = "NPP") - )) - -submit_workflow(settings) - -watch_workflow(workflow_id) -output <- workflow_output(workflow_id) - -sipnet_out <- ncdf4::nc_open(run_dap(workflow_id, "2004.nc")) -gpp <- ncdf4::ncvar_get(sipnet_out, "GPP") -time <- ncdf4::ncvar_get(sipnet_out, "time") -ncdf4::nc_close(sipnet_out) - -plot(time, gpp, type = "l") diff --git a/api/man/add_database.Rd b/api/man/add_database.Rd deleted file mode 100644 index 35075bd7dde..00000000000 --- a/api/man/add_database.Rd +++ /dev/null @@ -1,52 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/add_database.R -\name{add_database} -\alias{add_database} -\title{Add PEcAn database information to settings list} -\usage{ -add_database( - settings, - host = getOption("pecanapi.db_hostname"), - user = getOption("pecanapi.db_user"), - password = getOption("pecanapi.db_password"), - dbname = getOption("pecanapi.db_dbname"), - driver = getOption("pecanapi.db_driver"), - dbfiles = getOption("pecanapi.db_dbfiles"), - write = getOption("pecanapi.db_write"), - overwrite = FALSE, - ... 
-) -} -\arguments{ -\item{settings}{Input settings list (list)} - -\item{host}{Database server hostname (character, default = \code{pecanapi.db_hostname})} - -\item{user}{Database user name (character, default = option \code{pecanapi.db_username}))} - -\item{password}{Database password (character, default = option \code{pecanapi.db_password})} - -\item{dbname}{Database name (character, default = option \code{pecanapi.db_dbname})} - -\item{driver}{Database driver (character, default = option \code{pecanapi.db_driver})} - -\item{dbfiles}{Path to \code{dbfiles} directory (character, default = -option \code{pecanapi.db_dbfiles})} - -\item{write}{Whether or not to write to the database (logical, -default = option \code{pecanapi.db_write})} - -\item{overwrite}{Whether or not to overwrite any already existing -input settings (logical, default = \code{FALSE})} - -\item{...}{Additional named PEcAn database configuration tags} -} -\value{ -Updated settings list with database information included -} -\description{ -Add PEcAn database information to settings list -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/add_pft.Rd b/api/man/add_pft.Rd deleted file mode 100644 index eeae71df17a..00000000000 --- a/api/man/add_pft.Rd +++ /dev/null @@ -1,46 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/add_pft.R -\name{add_pft} -\alias{add_pft} -\alias{add_pft_list} -\title{Add a PFT or list of PFTs to a settings object} -\usage{ -add_pft(settings, name, ...) - -add_pft_list(settings, pfts, ...) -} -\arguments{ -\item{settings}{Input PEcAn settings list} - -\item{name}{PFT name (character)} - -\item{...}{Additional arguments for modifying the PFT.} - -\item{pfts}{Either a character vector of PFT names, or a list of -PFT objects (which must each include an element named "name")} -} -\value{ -Updated settings list with PFTs added -} -\description{ -Add a PFT or list of PFTs to a settings object -} -\examples{ -settings <- list() -add_pft(settings, "Optics.Temperate_Early_Hardwood") -add_pft_list(settings, sprintf("Temperate_\%s_Hardwood", c("Early", "Mid", "Late"))) -add_pft_list( - settings, - list(list(name = "deciduous", num = 3), - list(name = "coniferous", num = 6)) -) -if (require("magrittr")) { - list() \%>\% - add_pft("early_hardwood") \%>\% - add_pft("mid_hardwood") \%>\% - add_pft("late_hardwood") -} -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/add_rabbitmq.Rd b/api/man/add_rabbitmq.Rd deleted file mode 100644 index 0fcbc919b09..00000000000 --- a/api/man/add_rabbitmq.Rd +++ /dev/null @@ -1,62 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/add_rabbitmq.R -\name{add_rabbitmq} -\alias{add_rabbitmq} -\title{Add RabbitMQ configuration} -\usage{ -add_rabbitmq( - settings, - model_queue = NULL, - con = NULL, - rabbitmq_user = getOption("pecanapi.rabbitmq_user"), - rabbitmq_password = getOption("pecanapi.rabbitmq_password"), - rabbitmq_service = getOption("pecanapi.rabbitmq_service"), - rabbitmq_service_port = getOption("pecanapi.rabbitmq_service_port"), - rabbitmq_vhost = getOption("pecanapi.rabbitmq_vhost"), - overwrite = FALSE -) -} -\arguments{ -\item{settings}{Partially completed PEcAn \code{settings} list.} - -\item{model_queue}{Name of RabbitMQ model queue (character, default -= \code{NULL}). This should be in the form \code{modelname_modelrevision}. -If this is \code{NULL}, this function will try to figure it out based -on the model ID in the settings object, which requires access to -the database (i.e. 
\code{con} must not be \code{NULL}).} - -\item{con}{Database connection object (default = \code{NULL}). Ignored -unless \code{model_queue} is \code{NULL}.} - -\item{rabbitmq_user}{Username for RabbitMQ server (character, -default = option \code{pecanapi.rabbitmq_user})} - -\item{rabbitmq_password}{Password for RabbitMQ server (character, -default = option \code{pecanapi.rabbitmq_password})} - -\item{rabbitmq_service}{Name of RabbitMQ \code{docker-compose} service -(character, default = option \code{pecanapi.rabbitmq_service})} - -\item{rabbitmq_service_port}{RabbitMQ service port (numeric or -character, default = option \code{pecanapi.rabbitmq_service_port}). -Note that this is internal to the Docker stack, \emph{not}. The only -reason this should be changed is if you changed low-level -RabbitMQ settings in the \code{docker-compose.yml} file} - -\item{rabbitmq_vhost}{RabbitMQ vhost (character, default = option -\code{pecanapi.rabbitmq_vhost}). The only reason this should be -changed is if you change the low-level RabbitMQ setup in the -\code{docker-compose.yml} file.} - -\item{overwrite}{Whether or not to overwrite existing \code{settings} -tags (logical; default = \code{FALSE})} -} -\value{ -Modified settings list with RabbitMQ configuration added. -} -\description{ -Add RabbitMQ configuration -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/add_workflow.Rd b/api/man/add_workflow.Rd deleted file mode 100644 index b94bdcbeb9b..00000000000 --- a/api/man/add_workflow.Rd +++ /dev/null @@ -1,26 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/add_workflow.R -\name{add_workflow} -\alias{add_workflow} -\title{Add information from workflow data frame into settings list.} -\usage{ -add_workflow(settings, workflow_df, overwrite = FALSE) -} -\arguments{ -\item{settings}{Partially completed PEcAn \code{settings} list.} - -\item{workflow_df}{Workflow \code{data.frame}, such as that returned by -\link{insert_new_workflow}, or from running query \verb{SELECT * FROM workflows}} - -\item{overwrite}{Whether or not to overwrite existing \code{settings} -tags (logical; default = \code{FALSE})} -} -\value{ -PEcAn settings list with workflow information added -} -\description{ -Add information from workflow data frame into settings list. -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/dbfile_url.Rd b/api/man/dbfile_url.Rd deleted file mode 100644 index 97f593ce7cf..00000000000 --- a/api/man/dbfile_url.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/thredds_fileserver.R -\name{dbfile_url} -\alias{dbfile_url} -\title{Build a THREDDS fileServer URL for a specific dbfile} -\usage{ -dbfile_url(target, ...) 
-} -\arguments{ -\item{target}{Target file path, relative to \code{dbfiles} root folder (character)} - -\item{...}{Additional arguments to \link{thredds_fs_url}} -} -\value{ -THREDDS HTTP fileServer URL to dbfile (character) -} -\description{ -Build a THREDDS fileServer URL for a specific dbfile -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/get_model_id.Rd b/api/man/get_model_id.Rd deleted file mode 100644 index 457c7155157..00000000000 --- a/api/man/get_model_id.Rd +++ /dev/null @@ -1,27 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get_model_id.R -\name{get_model_id} -\alias{get_model_id} -\title{Retrieve database ID of a particular version of a model} -\usage{ -get_model_id(con, name, revision, multi_action = "last") -} -\arguments{ -\item{con}{Database connection object (Pqconnection)} - -\item{name}{Model name (character)} - -\item{revision}{Model version/revision (character)} - -\item{multi_action}{Action to take if multiple models found -(character). Must be one of "first", "last" (default), "all", or "error".} -} -\value{ -Model ID, as \code{integer64} -} -\description{ -Retrieve database ID of a particular version of a model -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/get_next_workflow_id.Rd b/api/man/get_next_workflow_id.Rd deleted file mode 100644 index aa8d9424e58..00000000000 --- a/api/man/get_next_workflow_id.Rd +++ /dev/null @@ -1,29 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/insert_new_workflow.R -\name{get_next_workflow_id} -\alias{get_next_workflow_id} -\title{Get current workflow ID and update internal workflow ID PostgreSQL -sequence} -\usage{ -get_next_workflow_id(con) -} -\arguments{ -\item{con}{Database connection, as created by \link[RPostgres:dbConnect]{RPostgres::dbConnect}} -} -\value{ -Workflow ID, as numeric/base64 integer -} -\description{ -The \code{workflows} table has an internal -\href{https://www.postgresql.org/docs/9.6/sql-createsequence.html}{sequence} -that keeps track of and automatically updates the workflow ID -(that's why inserting into the table without explicitly setting a -workflow ID is a safe and robust operation). This function is a -wrapper around the -\href{https://www.postgresql.org/docs/9.6/functions-sequence.html}{\code{nextval} function}, -which retrieves the current value of the sequence \emph{and} augments -the sequence by 1. -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/get_run_id.Rd b/api/man/get_run_id.Rd deleted file mode 100644 index dbdc0c80918..00000000000 --- a/api/man/get_run_id.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/thredds_fileserver.R -\name{get_run_id} -\alias{get_run_id} -\title{Get a run ID from the \code{runs.txt} file} -\usage{ -get_run_id(workflow_id, ...) 
-} -\arguments{ -\item{workflow_id}{ID of target workflow (numeric or character)} - -\item{...}{Additional arguments to \link{thredds_fs_url}} -} -\value{ -A single output ID (integer64) -} -\description{ -Get a run ID from the \code{runs.txt} file -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/insert_new_workflow.Rd b/api/man/insert_new_workflow.Rd deleted file mode 100644 index 2f04478bb0d..00000000000 --- a/api/man/insert_new_workflow.Rd +++ /dev/null @@ -1,59 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/insert_new_workflow.R -\name{insert_new_workflow} -\alias{insert_new_workflow} -\title{Insert a new workflow into PEcAn database, returning the workflow -as a \code{data.frame}} -\usage{ -insert_new_workflow( - con, - site_id, - model_id, - start_date, - end_date, - user_id = getOption("pecanapi.user_id"), - hostname = getOption("pecanapi.workflow_hostname"), - folder_prefix = getOption("pecanapi.workflow_prefix"), - params = NULL, - notes = NULL -) -} -\arguments{ -\item{con}{Database connection, as created by \link[RPostgres:dbConnect]{RPostgres::dbConnect}} - -\item{site_id}{Site ID from \code{sites} table (numeric)} - -\item{model_id}{Model ID from \code{models} table (numeric)} - -\item{start_date}{Model run start date (character or POSIX)} - -\item{end_date}{Model run end date (character or POSIX)} - -\item{user_id}{User ID from \code{users} table (default = option -\code{pecanapi.user_id}). Note that this option is \emph{not set by -default}, and this function will not run without a set \code{user_id}.} - -\item{hostname}{Workflow server hostname (character; default = -option \code{pecanapi.workflow_hostname})} - -\item{folder_prefix}{Output directory prefix (character; default = -option \code{pecanapi.workflow_prefix}). Workflow ID will be appended -to the end with \code{paste0}} - -\item{params}{Additional workflow parameters, stored in -\code{workflows.params} (character or NULL (default))} - -\item{notes}{Additional workflow notes, stored in \code{workflows.notes} -(character or NULL (default))} -} -\value{ -\code{data.frame} containing new workflow(s), including all -columns from \code{workflows} table. -} -\description{ -Insert a new workflow into PEcAn database, returning the workflow -as a \code{data.frame} -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/listToXML.Rd b/api/man/listToXML.Rd deleted file mode 100644 index 8ce76805720..00000000000 --- a/api/man/listToXML.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/submit_workflow.R -\name{listToXML} -\alias{listToXML} -\title{Convert List to XML} -\usage{ -listToXML(item, tag) -} -\arguments{ -\item{item}{object to be converted. 
Despite the function name, need not actually be a list} - -\item{tag}{xml tag} -} -\value{ -xmlNode -} -\description{ -Can convert list or other object to an xml object using xmlNode -} -\author{ -David LeBauer, Carl Davidson, Rob Kooper -} diff --git a/api/man/list_runs.Rd b/api/man/list_runs.Rd deleted file mode 100644 index 6f8afcb1b95..00000000000 --- a/api/man/list_runs.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/list_runs.R -\name{list_runs} -\alias{list_runs} -\title{List all runs associated with a particular workflow} -\usage{ -list_runs(con, workflow_id) -} -\arguments{ -\item{con}{Database connection, as created by \link[RPostgres:dbConnect]{RPostgres::dbConnect}} - -\item{workflow_id}{ID of target workflow (character or numeric)} -} -\value{ -Runs \code{data.frame} subset to rows containing specific workflow -} -\description{ -List all runs associated with a particular workflow -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/output_url.Rd b/api/man/output_url.Rd deleted file mode 100644 index 91513363571..00000000000 --- a/api/man/output_url.Rd +++ /dev/null @@ -1,24 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/thredds_fileserver.R -\name{output_url} -\alias{output_url} -\title{Build URL for output file hosted on THREDDS fileServer} -\usage{ -output_url(workflow_id, target, ...) -} -\arguments{ -\item{workflow_id}{ID of target workflow (numeric or character)} - -\item{target}{Target path, relative to workflow directory (character)} - -\item{...}{Additional arguments to \link{thredds_fs_url}} -} -\value{ -THREDDS http fileServer URL (character) -} -\description{ -Build URL for output file hosted on THREDDS fileServer -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/pecanapi_options.Rd b/api/man/pecanapi_options.Rd deleted file mode 100644 index 26cc4d610ea..00000000000 --- a/api/man/pecanapi_options.Rd +++ /dev/null @@ -1,63 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/zzz.R -\name{pecanapi_options} -\alias{pecanapi_options} -\title{Package-specific options} -\description{ -To minimize the number of changes that have to happen for scripts -using \code{pecanapi} to be shared across machines, most -Docker/RabbitMQ/database-related configurations can be configured -via \code{options}. All options take the form \code{pecanapi.optionname}, -where \code{optionname} is the specific option. Note that these values -are used only as default function arguments, and can be substituted -for any individual function call by passing the appropriate argument. -} -\details{ -The following options (prefixed with \code{pecanapi.}) are used by -\code{pecanapi}. To see the default values, just call \code{options()} with -no arguments. These are sorted in order of decreasing likelihood of -needing to be set by the user (first options are most likely to be -changed across different systems): - --\code{pecanapi.user_id} -- The User ID to associate with all workflows -created by this package. This is the only option that \emph{must} be set -by the user -- it is set to \code{NULL} by default, which will cause -many of the functions in the \code{pecanapi} to fail. -\itemize{ -\item \code{docker_hostname}, \code{docker_port} -- The hostname and port of the -Docker service. You can check that these values work by browsing to -\code{docker_hostname:docker_port} (by default, \code{localhost:8000}) in a -web browser. 
-\item \code{docker_rabbitmq_frontend} -- The "frontend rule" for RabbitMQ. -By default, this is \verb{/rabbitmq}, meaning that the RabbitMQ console -is accessible at \code{localhost:8000/rabbitmq} (adjusted for whatever -combination of \code{docker_hostname} and \code{docker_port} you are using). -} - -\code{docker_https} -- (Logical) If \code{TRUE}, all URLs use \code{https} access. -By default, this is \code{FALSE}. -\itemize{ -\item \code{db_hostname} -- The name of the PostgreSQL container service -inside the PEcAn stack. This is the same as its service name in -\code{docker-compose.yml}. This is the hostname used by the \code{executor} -service to access the database, and which is written into each -\code{pecan.xml} file. -\item \code{db_user}, \code{db_password}, \code{db_dbname}, \code{db_driver}, \code{db_write} -- -These correspond to the \code{user}, \code{password}, \code{dbname}, \code{driver}, and -\code{write} tags in the \code{database/bety} part of the PEcAn XML. -\item \code{rabbitmq_user}, \code{rabbitmq_password} -- The RabbitMQ -authentication credentials. These are set in the -\code{docker-compose.yml} file, under the \code{rabbitmq} service. -\item \code{rabbitmq_service}, \code{rabbitmq_service_port}, \code{rabbitmq_vhost} -- -The name, internal port, and \code{vhost} of the RabbitMQ service. -Unless you are making major changes to the guts of -\code{docker-compose.yml}, you shouldn't change these values (i.e. they -should be the same on most machines). -\item \code{workflow_hostname} -- The hostname passed to the \code{host} section -of the \code{pecan.xml}. By default, this is "docker". -\item \code{workflow_prefix} -- The location and directory prefix for -storing workflow outputs. By default, this is -\verb{/data/workflows/PEcAn_}. The workflow ID will be appended directly -to this value. -} -} diff --git a/api/man/prepared_query.Rd b/api/man/prepared_query.Rd deleted file mode 100644 index 22958e4c996..00000000000 --- a/api/man/prepared_query.Rd +++ /dev/null @@ -1,76 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/prepared_query.R -\name{prepared_query} -\alias{prepared_query} -\alias{prepared_statement} -\title{Execute a PostgreSQL prepared query or statement} -\usage{ -prepared_query(con, query, params) - -prepared_statement(con, query, params) -} -\arguments{ -\item{con}{Database connection, as created by \link[RPostgres:dbConnect]{RPostgres::dbConnect}} - -\item{query}{Query template (character, length 1)} - -\item{params}{Query parameters (unnamed list)} -} -\value{ -For \code{prepared_query}, the query result as a \code{data.frame}. -\code{prepared_statement} exits silently on success. -} -\description{ -This provides a safe and efficient way of executing a query or -statement with a list of parameters to be substituted. -} -\details{ -A prepared statement consists of a template query (\code{query}), which -is compiled prior to execution, and a series of parameters -(\code{params}) that are passed into the relevant spots in the template -query. In R's \code{DBI} database interface, this uses three statements: -\link[DBI:dbSendQuery]{DBI::dbSendQuery} to create the template query, \link[DBI:dbBind]{DBI::dbBind} to -bind parameters to that query, and \link[DBI:dbFetch]{DBI::dbFetch} to retrieve the -results. Statements (\link[DBI:dbSendStatement]{DBI::dbSendStatement}) work the same way, -except there are no results to fetch with \link[DBI:dbFetch]{DBI::dbFetch}. 
-
-Prepared statements have several important advantages. First of
-all, they are automatically and efficiently vectorized, meaning
-that it is possible to build a single query and run it against a
-vector of parameters. Second, they automatically enforce strict
-type checking and quoting of inputs, meaning that they are secure
-against SQL injection attacks and input mistakes (e.g. giving a
-character when the table expects a number).
-}
-\examples{
-\dontrun{
-prepared_query(con, paste(
-  "SELECT id, folder FROM workflows",
-  "WHERE user_id = $1"
-), list(my_user_id))
-
-prepared_statement(con, paste(
-  "INSERT INTO workflows (id, site_id, model_id, folder)",
-  "VALUES ($1, $2, $3, $4)"
-), list(workflow_id, my_site_id, my_model_id, my_folder))
-
-# Note that queries and statements are automatically vectorized
-# The below query will execute two searches, and return the results
-# of both in one data.frame
-prepared_query(con, paste(
-  "SELECT * FROM dbfiles",
-  "WHERE file_name ILIKE $1"
-), list(c("\%cruncep\%", "\%gfdl\%")))
-
-# Similarly, this will create two workflows, both with the same
-# model_id (1) but different site_ids (33, 67)
-prepared_statement(con, paste(
-  "INSERT INTO workflows (site_id, model_id)",
-  "VALUES ($1, $2)"
-), list(c(33, 67), 1))
-
-}
-}
-\author{
-Alexey Shiklomanov
-}
diff --git a/api/man/run_dap.Rd b/api/man/run_dap.Rd
deleted file mode 100644
index d9d8fd6fdaa..00000000000
--- a/api/man/run_dap.Rd
+++ /dev/null
@@ -1,29 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/thredds_opendap.R
-\name{run_dap}
-\alias{run_dap}
-\title{Create a THREDDS OpenDAP access URL to a specific model output file}
-\usage{
-run_dap(workflow_id, target, run_id = NULL, ...)
-}
-\arguments{
-\item{workflow_id}{ID of target workflow (numeric or character)}
-
-\item{target}{Target file path, relative to output directory of
-specific run (as specified by \code{run_id})}
-
-\item{run_id}{Run ID (numeric or character). If \code{NULL}, try to use
-the run listed in the \code{runs.txt} file. If multiple runs are
-available, throw a warning and use the first one.}
-
-\item{...}{Additional arguments to \link{thredds_fs_url}}
-}
-\value{
-OpenDAP URL to target file (character)
-}
-\description{
-Create a THREDDS OpenDAP access URL to a specific model output file
-}
-\author{
-Alexey Shiklomanov
-}
diff --git a/api/man/run_url.Rd b/api/man/run_url.Rd
deleted file mode 100644
index 38a34deb6c5..00000000000
--- a/api/man/run_url.Rd
+++ /dev/null
@@ -1,31 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/thredds_fileserver.R
-\name{run_url}
-\alias{run_url}
-\title{Build a THREDDS fileServer URL for a specific output file from a
-specific run}
-\usage{
-run_url(workflow_id, target, run_id = NULL, ...)
-}
-\arguments{
-\item{workflow_id}{ID of target workflow (numeric or character)}
-
-\item{target}{Target file path, relative to output directory of
-specific run (as specified by \code{run_id})}
-
-\item{run_id}{Run ID (numeric or character). If \code{NULL}, try to use
-the run listed in the \code{runs.txt} file. 
If multiple runs are
-available, throw a warning and use the first one.}
-
-\item{...}{Additional arguments to \link{thredds_fs_url}}
-}
-\value{
-HTTP fileServer URL of a particular run output file (character)
-}
-\description{
-Build a THREDDS fileServer URL for a specific output file from a
-specific run
-}
-\author{
-Alexey Shiklomanov
-}
diff --git a/api/man/search.Rd b/api/man/search.Rd
deleted file mode 100644
index d38a66ea1ea..00000000000
--- a/api/man/search.Rd
+++ /dev/null
@@ -1,80 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/search.R
-\name{search_models}
-\alias{search_models}
-\alias{search_sites}
-\alias{search_pfts}
-\title{Search for sites, models, or PFTs}
-\usage{
-search_models(
-  con,
-  name = "",
-  revision = "",
-  modeltype = "",
-  auto_pct = TRUE,
-  ignore.case = TRUE
-)
-
-search_sites(con, sitename = "", auto_pct = TRUE, ignore.case = TRUE)
-
-search_pfts(
-  con,
-  name = "",
-  definition = "",
-  modeltype = "",
-  auto_pct = TRUE,
-  ignore.case = TRUE
-)
-}
-\arguments{
-\item{con}{Database connection, as created by \link[RPostgres:dbConnect]{RPostgres::dbConnect}}
-
-\item{name}{Model/PFT (depending on function) name search string (character)}
-
-\item{revision}{Model version search string (character)}
-
-\item{modeltype}{Model type search string (character)}
-
-\item{auto_pct}{Logical. If \code{TRUE} (default), automatically
-surround search strings in \verb{\%}. If this is \code{FALSE}, you should
-explicitly specify \code{"\%\%"} for one or both other arguments.}
-
-\item{ignore.case}{Logical. If \code{TRUE} (default) use
-case-insensitive search (SQL \code{ILIKE} operator); otherwise, use
-case-sensitive search (SQL \code{LIKE} operator).}
-
-\item{sitename}{Site name search string (character)}
-
-\item{definition}{PFT definition search string (character)}
-}
-\value{
-Bety \code{models}, \code{sites}, or \code{pfts} table (\code{data.frame})
-subset to matching rows
-}
-\description{
-Search for sites, models, or PFTs
-}
-\examples{
-\dontrun{
-search_models(con, "SIPNET")
-
-# Partial match
-search_models(con, "ED")
-search_models(con, modeltype = "ED")
-search_sites(con, "UMBS")
-search_pfts(con, "early", modeltype = "ED")
-
-# Case sensitivity
-search_models(con, "ed")
-search_models(con, "ed", ignore.case = FALSE)
-
-# Starts with UMBS
-search_sites(con, "UMBS\%", auto_pct = FALSE)
-
-# SQL wildcards can still be used inside search strings.
-search_pfts(con, "early\%hardwood")
-}
-}
-\author{
-Alexey Shiklomanov
-}
diff --git a/api/man/submit_workflow.Rd b/api/man/submit_workflow.Rd
deleted file mode 100644
index 9e915cc2649..00000000000
--- a/api/man/submit_workflow.Rd
+++ /dev/null
@@ -1,54 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/submit_workflow.R
-\name{submit_workflow}
-\alias{submit_workflow}
-\title{Post complete settings list as RabbitMQ message}
-\usage{
-submit_workflow(
-  settings,
-  rabbitmq_hostname = getOption("pecanapi.docker_hostname"),
-  rabbitmq_frontend = getOption("pecanapi.docker_rabbitmq_frontend"),
-  rabbitmq_port = getOption("pecanapi.docker_port"),
-  rabbitmq_user = getOption("pecanapi.rabbitmq_user"),
-  rabbitmq_password = getOption("pecanapi.rabbitmq_password"),
-  rabbitmq_prefix = NULL,
-  httr_auth = TRUE,
-  https = getOption("pecanapi.docker_https")
-)
-}
-\arguments{
-\item{settings}{PEcAn settings list object}

-\item{rabbitmq_hostname}{RabbitMQ server hostname (character. 
-Default = \code{"localhost"})}
-
-\item{rabbitmq_frontend}{RabbitMQ "frontend rule" (character.
-Default = option \code{pecanapi.docker_rabbitmq_frontend})}
-
-\item{rabbitmq_port}{RabbitMQ server port (character or numeric.
-Default = option \code{pecanapi.docker_port})}
-
-\item{rabbitmq_user}{RabbitMQ user name (character. Default =
-option \code{pecanapi.rabbitmq_user})}
-
-\item{rabbitmq_password}{RabbitMQ password (character. Default =
-option \code{pecanapi.rabbitmq_password})}
-
-\item{rabbitmq_prefix}{Complete RabbitMQ API prefix. If \code{NULL}
-(default), this is constructed from the other arguments. If this
-argument is not \code{NULL}, it overrides all other arguments except
-\code{httr_auth} and \code{settings}.}
-
-\item{httr_auth}{Whether or not to use \link[httr:authenticate]{httr::authenticate} to
-generate CURL authentication (default = \code{TRUE}). If \code{FALSE}, you
-must pass the authentication as part of the RabbitMQ hostname or prefix.}
-
-\item{https}{Whether or not to use \code{https}. If \code{FALSE}, use \code{http}.
-Default = option \code{pecanapi.docker_https}}
-}
-\value{
-Curl \code{POST} output, parsed by \link[httr:content]{httr::content}
-}
-\description{
-Post complete settings list as RabbitMQ message
-}
-\author{
-Alexey Shiklomanov
-}
diff --git a/api/man/thredds_dap_url.Rd b/api/man/thredds_dap_url.Rd
deleted file mode 100644
index 11de87fc171..00000000000
--- a/api/man/thredds_dap_url.Rd
+++ /dev/null
@@ -1,34 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/thredds_opendap.R
-\name{thredds_dap_url}
-\alias{thredds_dap_url}
-\title{Create a THREDDS OpenDAP access URL to a PEcAn file}
-\usage{
-thredds_dap_url(
-  target,
-  hostname = getOption("pecanapi.docker_hostname"),
-  port = getOption("pecanapi.docker_port"),
-  https = getOption("pecanapi.docker_https")
-)
-}
-\arguments{
-\item{target}{Full path to target (character) relative to THREDDS
-root. For outputs, this should start with "outputs/", and for
-dbfiles, "dbfiles/".}
-
-\item{hostname}{THREDDS server hostname (default = "localhost")}
-
-\item{port}{THREDDS server port (default = 8000)}
-
-\item{https}{Logical. If \code{TRUE}, use https, otherwise use http
-(default = \code{FALSE}).}
-}
-\value{
-OpenDAP URL to target file (character)
-}
-\description{
-Create a THREDDS OpenDAP access URL to a PEcAn file
-}
-\author{
-Alexey Shiklomanov
-}
diff --git a/api/man/thredds_fs_url.Rd b/api/man/thredds_fs_url.Rd
deleted file mode 100644
index 71dfea0eedb..00000000000
--- a/api/man/thredds_fs_url.Rd
+++ /dev/null
@@ -1,29 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/thredds_fileserver.R
-\name{thredds_fs_url}
-\alias{thredds_fs_url}
-\title{Create a THREDDS fileServer URL prefix}
-\usage{
-thredds_fs_url(
-  hostname = getOption("pecanapi.docker_hostname"),
-  port = getOption("pecanapi.docker_port"),
-  https = getOption("pecanapi.docker_https")
-)
-}
-\arguments{
-\item{hostname}{THREDDS server hostname (default = "localhost")}
-
-\item{port}{THREDDS server port (default = 8000)}
-
-\item{https}{Logical. 
If \code{TRUE}, use https, otherwise use http -(default = \code{FALSE}).} -} -\value{ -THREDDS fileServer URL prefix (character) -} -\description{ -Create a THREDDS fileServer URL prefix -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/watch_workflow.Rd b/api/man/watch_workflow.Rd deleted file mode 100644 index dbc309eefc0..00000000000 --- a/api/man/watch_workflow.Rd +++ /dev/null @@ -1,29 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/watch_progress.R -\name{watch_workflow} -\alias{watch_workflow} -\title{Continuously monitor the progress of a workflow until it completes. -To exit early, send an interrupt signal (Control-C).} -\usage{ -watch_workflow(workflow_id, nlines = 10, sleep = 3, ...) -} -\arguments{ -\item{workflow_id}{Workflow ID (character or numeric)} - -\item{nlines}{Print the last N lines (numeric, default = 10)} - -\item{sleep}{Number of seconds to sleep between status updates -(numeric, default = 3)} - -\item{...}{Additional arguments to \link{output_url}} -} -\value{ -If successful -} -\description{ -Continuously monitor the progress of a workflow until it completes. -To exit early, send an interrupt signal (Control-C). -} -\author{ -Alexey Shiklomanov -} diff --git a/api/man/workflow_output.Rd b/api/man/workflow_output.Rd deleted file mode 100644 index c4bd32cdae4..00000000000 --- a/api/man/workflow_output.Rd +++ /dev/null @@ -1,23 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/watch_progress.R -\name{workflow_output} -\alias{workflow_output} -\title{Retrieve the current output of a workflow given its ID} -\usage{ -workflow_output(workflow_id, ...) -} -\arguments{ -\item{workflow_id}{Workflow ID (character or numeric)} - -\item{...}{Additional arguments to \link{output_url}} -} -\value{ -Workflow output, as character vector with one item per line -in \code{workflow.Rout} (as returned by \code{readLines}). -} -\description{ -Retrieve the current output of a workflow given its ID -} -\author{ -Alexey Shiklomanov -} diff --git a/api/vignettes/pecanapi.Rmd b/api/vignettes/pecanapi.Rmd deleted file mode 100644 index bfcaf7577eb..00000000000 --- a/api/vignettes/pecanapi.Rmd +++ /dev/null @@ -1,302 +0,0 @@ ---- -title: Introduction to the PEcAn R API -author: Alexey Shiklomanov ---- - -# Introduction to the PEcAn R API {#pecanapi-vignette} - -```{r, include = FALSE, eval = TRUE} -op_default <- knitr::opts_chunk$get(default = TRUE) -knitr::opts_chunk$set(op_default) -``` - -## Introduction - -The PEcAn API package (`pecanapi`) is designed to allow users to submit PEcAn workflows directly from an R session. -The basic idea is that users build the PEcAn settings object via an R script (manually, or using the included helper functions) and then use the RabbitMQ API to send this object to a Dockerized PEcAn instance running on a local or remote machine. - -`pecanapi` is specifically designed to only depend on CRAN packages, and not on any PEcAn internal packages. -This makes it easy to install, and allows it to be used without needing to download and install PEcAn itself (which is large and has many complex R package and system dependencies). -It can be installed directly from GitHub as follows: - -```{r, eval = 2} -devtools::install_github("pecanproject/pecan", subdir = "api") -library(pecanapi) -``` - -This vignette covers the following major sections: - -- [Initial setup](#pecanapi-setup) goes over the configuration, both inside and outside R, required to make `pecanapi` work. 
-- [Registering a workflow](#pecanapi-workflow) goes over how to register a PEcAn workflow with the PEcAn database, including searching for the required site and model IDs -- [Building a settings object](#pecanapi-settings) covers how to configure a PEcAn workflow using the PEcAn settings list. -- Finally, [submitting a run](#pecanapi-submit) covers how to submit the complete settings object for execution. - -## Initial setup {#pecanapi-setup} - -This tutorial assumes you are running a Dockerized instance of PEcAn on your local machine (hostname `localhost`, port 8000). -To check this, open a browser and try to access `http://localhost:8000/pecan/`. -If you are trying to access a remote instance of PEcAn, you will need to substitute the hostname and port accordingly. - -To perform database operations, you will also need to have read access to the PEcAn database. -Note that the PEcAn database Docker container (`postgres`) does not provide this by default, so you will need to open port 5432 (the PostgreSQL default) to that container. -You can do this by creating a `docker-compose.override.yml` file with the following contents in the root directory of the PEcAn source code: - -```yml -version: "3" -services: - postgres: - ports: - - 5432:5432 -``` - -Here, the first port is the one used to access the database (can be any open port; most PostgreSQL applications assume 5432 by default), and the second is the port the database is actually running on (which will always be 5432). -After making this change, reload the `postgres` container by running `docker-compose up -d`. -To check that this works, open an R session and try to create a database connection object to the PEcAn database. - -```{r, eval = FALSE} -con <- DBI::dbConnect( - RPostgres::Postgres(), - user = "bety", - password = "bety", - host = "localhost", - port = 5432 -) -DBI::dbListTables(con)[1:5] -``` - -This code should print out five table names from the PEcAn database. -If it throws an error, you have a problem with your database connection. - -The rest of this tutorial assumes that you are using this same database connection object (`con`). - -In addition, any API operations that modify the database will not work unless a user ID is set. -To avoid having to manually specify the ID each time, we can set it via `options`: - -```{r, eval = FALSE} -options(pecanapi.user_id = 99000000002) -``` - -The `pecanapi` package has many other options that it uses for its default configuration, including the Docker server and RabbitMQ hostname and credentials. -To learn more about them, see `?pecanapi_options`. - -## Registering a workflow with the database {#pecanapi-workflow} - -For the PEcAn workflow to work, it needs to be registered with the PEcAn database. -In `pecanapi`, this is done via the `insert_new_workflow` function. - -Building a workflow requires two important pieces of information: the model and site IDs. -If you know these for your site and model, you can pass them directly into `insert_new_workflow`. -However, chances are you may have to look them up in the database first. -`pecanapi` provides several `search_*` utilities to make this easier. - -First, let's pick a model. -To list all models, we can run `search_models` with no arguments (other than the database connection object, `con`). - -```{r, eval = FALSE} -models <- search_models(con) -``` - -We can narrow down our search by model name, revision, or "type". 
- -```{r, eval = FALSE} -search_models(con, "ED") -search_models(con, "sipnet") -search_models(con, "ED", revision = "git") -``` - -Note that the search is case-insensitive by default, and searches before and after the input string. -See `?search_models` to learn how to toggle this behavior. -For the purposes of this tutorial, let's use the SIPNET model because it has low input requirements and runs very quickly. -Specifically, let's use the `136` version. -We could grab the model ID from the search results, but `pecanapi` also provides an additional helper function for retrieving model IDs if you know the exact name and revision. - -```{r, eval = FALSE} -model_id <- get_model_id(con, "SIPNET", "136") -model_id -``` - -We can repeat this process for sites with the `search_sites` function (though there is currently no `get_site_id` function). -Note the use of `%` as a wildcard (matches zero or more of any character, equivalent to the regular expression `.*`). -The two sites in the search below are largely identical, so we'll use the one with more site information (i.e. where `mat` is not `NA`). - -```{r, eval = FALSE} -all_umbs <- search_sites(con, "umbs%disturbance") -all_umbs -site_id <- subset(all_umbs, !is.na(mat))[["id"]] -``` - -With site and model IDs in hand, we are ready to create a workflow. - -```{r, eval = FALSE} -workflow <- insert_new_workflow(con, site_id, model_id, start_date = "2004-01-01", end_date = "2004-12-31") -workflow -``` - -The `insert_new_workflow` function inserts the workflow into the database and returns a `data.frame` containing the row that was inserted. - - -## Building a settings object {#pecanapi-settings} - -Now that we have a workflow registered, we need to configure it via the PEcAn settings list. -The PEcAn settings list is a nested list providing parameters for the various actions performed by the PEcAn workflow, including the trait meta-analysis, processing input files, and running models. -It can be created manually with a bunch of `list` calls. -However, this is tedious and error-prone, so `pecanapi` provides several utilities that facilitate this process. - -We start with a blank list. - -```{r} -settings <- list() -``` - -Let's start by adding the workflow we created in the previous section to this list. -This is done via the `add_workflow` function, which takes as input a workflow `data.frame` and adds the relevant fields to the right places in the settings list. - -```{r, eval = FALSE} -settings <- add_workflow(settings, workflow) -``` - -All `add_*` functions work by incrementally adding to an input settings object and returning a new modified settings object. -The first argument of these functions is always the settings list, which gives these functions a consistent syntax and makes it easy to string multiple settings modifications together using the `magrittr` pipe (`%>%`), similar to `tidyverse` tabular data manipulations. - -Let's continue by adding a basic database configuration to this settings list. - -```{r} -settings <- add_database(settings) -settings -``` - -The `add_database` function adds a sensible default configuration for the PEcAn database in the right place with the right names in the settings file. -These defaults can, of course, be modified in the function call (see `?add_database`), or, better yet, by setting package options, which is where most `add_*` functions get their defaults (see `?pecanapi_options`). - -Similarly, `add_rabbitmq` automatically adds the RabbitMQ configuration to the settings object. 
-Like `add_database`, it takes all of its defaults from `options` (see `?pecanapi_options`). - -```{r} -settings <- add_rabbitmq(settings) -settings -``` - -PFTs are added to the settings object with the `add_pft` function. -To search for PFTs, use the `search_pfts` function, which can take optional arguments for PFT name (`name`), description of its definition (`definition`), and model type (`modeltype`). - -```{r, eval = FALSE} -search_pfts(con, name = "deciduous", modeltype = "sipnet") -search_pfts(con, name = "tundra", modeltype = "ED") -``` - -As with `search_models` and `search_sites`, these functions are case insensitive and do partial matching by default. -The `add_pft` function adds individual PFTs by name. - -```{r} -settings <- add_pft(settings, "temperate.deciduous") -settings -``` - -This adds the `temperate.deciduous` PFT to the appropriate spot in the settings hierarchy. -Whereas `add_pft` adds a single PFT to the settings, `add_pft_list` can add a vector of PFTs. - -```{r} -settings <- add_pft_list(settings, c("temperate.coniferous", "miscanthus")) -settings -``` - -Like `add_database`, `add_pft` and `add_pft_list` can also take arbitrary additional configuration arguments via their `...` argument. -For `add_pft`, such arguments are passed only to that PFT, while for `add_pft_list`, they are shared between all PFTs. -For more details, see `?add_pft`. - -One final note is that, because the settings object is just a list, you can make arbitrary modifications to it via base R's `modifyList` function (indeed, many of the `pecanapi::add_*` functions use `modifyList` under the hood). - -```{r} -customization <- list( - meta.analysis = list(iter = 3000, random.effects = FALSE), - run = list( - inputs = list(met = list(source = "CRUNCEP", output = "SIPNET", method = "ncss")) - ) - ) -settings <- modifyList(settings, customization) -``` - -Note that `modifyList` operates recursively on nested lists, which makes it easy to modify settings at different levels of the list hierarchy. -For instance, below, we modify the previous settings object to make `random.effects = TRUE`, and change the download method of the `inputs` to OpenDAP, but keep all the other settings the same. - -```{r} -settings <- modifyList(settings, list( - meta.analysis = list(random.effects = TRUE), - run = list(inputs = list(met = list(method = "opendap"))) -)) -``` - -All of these steps can be chained together via `magrittr` pipes (`%>%`). - -```{r, eval = FALSE} -library(magrittr) -settings <- list() %>% - add_workflow(workflow) %>% - add_database() %>% - add_rabbitmq() %>% - add_pft("temperate.deciduous") %>% - add_pft("temperate.coniferous") %>% - modifyList(list( - meta.analysis = list(iter = 3000, random.effects = FALSE), - run = list(inputs = list(met = list(source = "CRUNCEP", output = "SIPNET", method = "ncss"))), - host = list(rabbitmq = list( - uri = "amqp://guest:guest@rabbitmq:5672/%2F", - queue = "SIPNET_136" - )) - )) -``` - -## Submitting a run {#pecanapi-submit} - -Now that we have all the pieces, let's put them together into a single settings object. 
-
-```{r, eval = FALSE}
-settings <- list() %>%
-  add_workflow(workflow) %>%
-  add_database() %>%
-  add_pft("temperate.deciduous") %>%
-  modifyList(list(
-    meta.analysis = list(iter = 3000, random.effects = FALSE),
-    run = list(inputs = list(met = list(source = "CRUNCEP", output = "SIPNET", method = "ncss"))),
-    host = list(rabbitmq = list(
-      uri = "amqp://guest:guest@rabbitmq:5672/%2F",
-      queue = "SIPNET_136"
-    ))
-  ))
-```
-
-We can then submit these settings as a run via the `submit_workflow` function.
-This function has only one required input -- the settings list -- but a number of optional arguments for specifying how to connect to the RabbitMQ API (see `?submit_workflow` for details).
-
-```{r, eval = FALSE}
-submit_workflow(settings)
-```
-
-If the workflow was submitted successfully, this will return the HTTP response `routed = TRUE` as a named list.
-Note that this only means that the RabbitMQ message was posted; the workflow can still crash for various reasons.
-To see the status of the workflow, look at `docker-compose logs executor` or use the Portainer interface.
-
-## Processing output {#pecanapi-output}
-
-All of PEcAn's outputs as well as its database files (`dbfiles`) can be accessed remotely via the THREDDS data server.
-You can explore these files by browsing to `localhost:8000/thredds/` in a browser (substituting hostname and port accordingly).
-
-All files, regardless of file type, can be downloaded directly (via HTTP) through the THREDDS `fileServer` protocol.
-In `pecanapi`, URLs for these files can be easily constructed via `output_url` for any workflow output and `run_url` for run-specific outputs.
-For instance, to read the `workflow.Rout` file from the workflow we created earlier, you can do the following:
-
-```{r, eval = FALSE}
-workflow_id <- workflow[["id"]]
-readLines(output_url(workflow_id, "workflow.Rout"))
-```
-
-Outputs in NetCDF format can also be accessed via the OpenDAP service, which allows remote variable selection and subsetting (meaning you can only download the outputs you need without needing to download the entire file).
-These URLs are created via `thredds_dap_url` (for a generic URL) or `run_dap` (to access outputs from a specific model run). 
- -```{r, eval = FALSE} -sipnet_out <- ncdf4::nc_open(run_dap(workflow_id, "2004.nc")) -gpp <- ncdf4::ncvar_get(sipnet_out, "GPP") -time <- ncdf4::ncvar_get(sipnet_out, "time") -ncdf4::nc_close(sipnet_out) -plot(time, gpp, type = "l") -``` From 096acdc68af116af82a59b3a77e8fe528f010ecf Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 8 Dec 2020 14:57:47 -0600 Subject: [PATCH 1701/2289] use apache with php 7 --- docker/web/Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/web/Dockerfile b/docker/web/Dockerfile index 674b42d254a..69610652342 100644 --- a/docker/web/Dockerfile +++ b/docker/web/Dockerfile @@ -1,4 +1,4 @@ -FROM php:apache +FROM php:7-apache MAINTAINER Rob Kooper # ---------------------------------------------------------------------- From a36cac75d550d4295453e41a297bdeb2aebaa606 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Tue, 8 Dec 2020 15:09:51 -0700 Subject: [PATCH 1702/2289] Update modules/meta.analysis/R/meta.analysis.R Co-authored-by: Kristina Riemer --- modules/meta.analysis/R/meta.analysis.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/meta.analysis.R b/modules/meta.analysis/R/meta.analysis.R index ff57087ab8d..6a55f5d35b1 100644 --- a/modules/meta.analysis/R/meta.analysis.R +++ b/modules/meta.analysis/R/meta.analysis.R @@ -65,7 +65,7 @@ pecan.ma <- function(trait.data, prior.distns, outdir, random = FALSE, overdispersed = TRUE, logfile = file.path(outdir, "meta-analysis.log)"), - verbose = TRUE)) { + verbose = TRUE) { mcmc.object <- list() # initialize output list of mcmc objects for each trait mcmc.mat <- list() From 37fdc0fc33b6ff3e9719c724fe2f6d7ec1897282 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 9 Dec 2020 17:09:07 +0000 Subject: [PATCH 1703/2289] automated documentation update --- modules/meta.analysis/man/pecan.ma.Rd | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/modules/meta.analysis/man/pecan.ma.Rd b/modules/meta.analysis/man/pecan.ma.Rd index 5d0e2d3f34a..d97a3146386 100644 --- a/modules/meta.analysis/man/pecan.ma.Rd +++ b/modules/meta.analysis/man/pecan.ma.Rd @@ -13,8 +13,7 @@ pecan.ma( random = FALSE, overdispersed = TRUE, logfile = file.path(outdir, "meta-analysis.log)"), - verbose = TRUE, - madata_file = file.path(outdir, "madata.Rdata") + verbose = TRUE ) } \arguments{ @@ -39,9 +38,6 @@ by call to \code{\link[PEcAn.DB:query.priors]{PEcAn.DB::query.priors()}}} \code{NULL}, only print output to console.} \item{verbose}{Logical. If \code{TRUE} (default), print progress messages.} - -\item{madata_file}{Path to file for storing copy of data used in -meta-analysis. 
If \code{NULL}, don't store at all.} } \value{ four chains with 5000 total samples from posterior From 31e63a92d395b9b6cb1b9f9f4707f29462d94ba2 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 9 Dec 2020 14:39:10 -0700 Subject: [PATCH 1704/2289] Update CHANGELOG.md --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index d344640bfc6..08159e8d82f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -38,7 +38,7 @@ This is a major change: - gSSURGO file download now added as inputs into BETY through extract_soil_gssurgo (#2666) - ensure Tleaf converted to K for temperature corrections in PEcAn.photosynthesis::fitA (#2726) - fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2753) -- ensure that control treatments always receive the random effect index of 1, replace madata.Rdata with jagged.data.Rdata, which includes identifying variables useful for calculating parameter estimates by treatment (#2756) +- ensure that control treatments always receives the random effect index of 1; rename madata.Rdata to jagged.data.Rdata and include database ids and names useful for calculating parameter estimates by treatment (#2756) ### Changed From 816d8eb3ae11b3242bf82a3f8cb58abdd2e03e5b Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Wed, 9 Dec 2020 23:39:50 +0000 Subject: [PATCH 1705/2289] Update test for summarize.result() --- base/utils/tests/testthat/test.utils.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/base/utils/tests/testthat/test.utils.R b/base/utils/tests/testthat/test.utils.R index 16e467de79a..7f01144ffdf 100644 --- a/base/utils/tests/testthat/test.utils.R +++ b/base/utils/tests/testthat/test.utils.R @@ -70,7 +70,9 @@ test_that("summarize.result works appropriately", { n = 1, mean = sqrt(1:10), stat = NA, - statname = NA + statname = NA, + name = NA, + treatment_id = NA ) # check that individual means produced for distinct sites expect_that(summarize.result(testresult)$mean, equals(testresult$mean)) From 2248d0a669a20646f8514e2e5b8bcb2cf626e758 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 10 Dec 2020 11:57:33 -0500 Subject: [PATCH 1706/2289] global var throwing errors --- modules/assim.sequential/R/Analysis_sda.R | 36 ++++++++--------------- 1 file changed, 13 insertions(+), 23 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 000875a9e9b..69a2a9be04d 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -247,17 +247,17 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i]) } - utils::globalVariables( - 'constants.tobit2space', - 'data.tobit2space', - 'inits.tobit2space', - 'tobit2space_pred', - 'conf_tobit2space', - 'samplerNumberOffset_tobit2space', - 'Rmcmc_tobit2space', - 'Cmodel_tobit2space', - 'Cmcmc_tobit2space' - ) + # utils::globalVariables(c( + # 'constants.tobit2space', + # 'data.tobit2space', + # 'inits.tobit2space', + # 'tobit2space_pred', + # 'conf_tobit2space', + # 'samplerNumberOffset_tobit2space', + # 'Rmcmc_tobit2space', + # 'Cmodel_tobit2space', + # 'Cmcmc_tobit2space' + # )) }else{ @@ -365,7 +365,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 )) %>% as.matrix() - rownames(interval) <- names(obs.mean[[t]]) + rownames(interval) <- 
names(input.vars) #### These vectors are used to categorize data based on censoring #### from the interval matrix @@ -529,17 +529,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 valueInCompiledNimbleFunction(Cmcmc$samplerFunctions[[samplerNumberOffset+i]], 'toggle', 1-y.ind[i]) } - utils::globalVariables( - 'constants.tobit', - 'data.tobit', - 'inits.tobit', - 'model_pred', - 'conf', - 'samplerNumberOffset', - 'Rmcmc', - 'Cmodel', - 'Cmcmc' - ) + }else{ Cmodel$y.ind <- y.ind From 0d3967af0c1f4463f496b4f966c026192448d5a0 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 10 Dec 2020 11:57:59 -0500 Subject: [PATCH 1707/2289] trying with just qle and nee getting errors from mismatched obs data and state vars --- modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R | 3 +++ modules/assim.sequential/inst/WillowCreek/testing.xml | 8 +++++--- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R index 15c08610018..28c3f275943 100644 --- a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -502,6 +502,9 @@ settings$host$tunnel <- '/tmp/tunnel' settings$model$binary = "/usr2/postdoc/istfer/SIPNET/1023/sipnet" +source('/fs/data3/kzarada/pecan/modules/assim.sequential/R/Nimble_codes.R') + + if(restart == FALSE) unlink(c('run','out','SDA'), recursive = T) debugonce(PEcAn.assim.sequential::sda.enkf) diff --git a/modules/assim.sequential/inst/WillowCreek/testing.xml b/modules/assim.sequential/inst/WillowCreek/testing.xml index 4146eff6a9e..637eb245100 100644 --- a/modules/assim.sequential/inst/WillowCreek/testing.xml +++ b/modules/assim.sequential/inst/WillowCreek/testing.xml @@ -24,7 +24,7 @@ 0 9999 - LAI + Qle @@ -36,10 +36,12 @@ 9999 1000000000 - - LAI + + Qle + mW m-2 0 9999 + 100 year From 086d8e46513b799a850113c080e89f68f820038b Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 14 Dec 2020 09:53:04 -0500 Subject: [PATCH 1708/2289] updating wts to work with sipnet split inputs --- modules/assim.sequential/R/sda.enkf_refactored.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index a102b5a202b..7df659bed00 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -448,7 +448,8 @@ sda.enkf <- function(settings, - wts <- unlist(weight_list[[t]][outconfig$samples$met$ids]) + if(is.null(outconfig$samples$met$ids){wts <- unlist(weight_list[[t]])}else{ + (wts <- unlist(weight_list[[t]][outconfig$samples$met$ids])} #-analysis function enkf.params[[t]] <- Analysis.sda(settings, From 80c8257039ab3274edfb78505d1af2c490b5d56a Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 15 Dec 2020 11:26:33 -0500 Subject: [PATCH 1709/2289] changing to Pf for singular issues --- modules/assim.sequential/R/Analysis_sda.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index 69a2a9be04d..aaf12f78698 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -205,7 +205,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 inits.tobit2space <- function() list(muf = rmnorm_chol(1,colMeans(X), chol(diag(ncol(X))*100)), pf = 
rwish_chol(1,df = ncol(X)+1, - cholesky = chol(solve(stats::cov(X))))) + cholesky = chol(solve(Pf)))) #ptm <- proc.time() tobit2space_pred <- nimbleModel(tobit2space.model, data = data.tobit2space, From 4ccb4be3b34f269489a510a35ea29f93f5bc942e Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Wed, 16 Dec 2020 21:51:34 +0000 Subject: [PATCH 1710/2289] Correct file name from madata.Rdata to jagged.data.Rdata in the documentation file --- documentation/tutorials/02_Demo_Uncertainty_Analysis/Demo02.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/documentation/tutorials/02_Demo_Uncertainty_Analysis/Demo02.Rmd b/documentation/tutorials/02_Demo_Uncertainty_Analysis/Demo02.Rmd index 7ad0a5eead0..d4bb840763f 100644 --- a/documentation/tutorials/02_Demo_Uncertainty_Analysis/Demo02.Rmd +++ b/documentation/tutorials/02_Demo_Uncertainty_Analysis/Demo02.Rmd @@ -46,7 +46,7 @@ This menu shows the contents of /out. A number of files generated by the underly #### PFTs: This menu shows the contents of /pft. There is a wide array of outputs available that are related to the process of estimating the model parameters and running sensitivity/uncertainty analyses for a specific Plant Functional Type. -1. **TRAITS**: The Rdata files **trait.data.Rdata** and **madata.Rdata** are, respectively, the available trait data extracted from the database that was used to estimate the model parameters and that same data cleaned and formatted for the statistical code. The **list of variables that are queried is determined by what variables have priors associated with them in the definition of the PFTs**. Priors are output into **prior.distns.Rdata**. Likewise, the **list of species that are associated with a PFT determines what subset of data is extracted** out of all data matching a given variable name. Demo 3 will demonstrate how a PFT can be created or modified. To look at these files in RStudio **click on these files to load them into your workspace**. You can further examine them in the _Environment_ window or accessing them at the command line. For example, try typing ```names(trait.data)``` as this will tell you what variables were extracted, ```names(trait.data$Amax)``` will tell you the names of the columns in the Amax table, and ```summary(trait.data$Amax)``` will give you summary data about the Amax values. +1. **TRAITS**: The Rdata files **trait.data.Rdata** and **jagged.data.Rdata** are, respectively, the available trait data extracted from the database that was used to estimate the model parameters and that same data cleaned and formatted for the statistical code. The **list of variables that are queried is determined by what variables have priors associated with them in the definition of the PFTs**. Priors are output into **prior.distns.Rdata**. Likewise, the **list of species that are associated with a PFT determines what subset of data is extracted** out of all data matching a given variable name. Demo 3 will demonstrate how a PFT can be created or modified. To look at these files in RStudio **click on these files to load them into your workspace**. You can further examine them in the _Environment_ window or accessing them at the command line. For example, try typing ```names(trait.data)``` as this will tell you what variables were extracted, ```names(trait.data$Amax)``` will tell you the names of the columns in the Amax table, and ```summary(trait.data$Amax)``` will give you summary data about the Amax values. 2. 
**META-ANALYSIS**: + ```*.bug```: The evaluation of the meta-analysis is done using a Bayesian statistical software package called JAGS that is called by the R code. For each trait, the R code will generate a [trait].model.bug file that is the JAGS code for the meta-analysis itself. This code is generated on the fly, with PEcAn adding or subtracting the site, treatment, and greenhouse terms depending upon the presence of these effects in the data itself. If the tag is set to FALSE then all random effects will be turned off even if there are multiple sites. + ```meta-analysis.log``` contains a number of diagnostics, including the summary statistics of the model, an assessment of whether the posterior is consistent with the prior, and the status of the Gelman-Brooks-Rubin convergence statistic (which is ideally 1.0 but should be less than 1.1). From 47b56f7eb39377d503e72899c29b3899dd670b27 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Wed, 16 Dec 2020 22:09:27 +0000 Subject: [PATCH 1711/2289] Add .data$ pronoun to address NOTE regarding no visible binding for global variable --- base/utils/R/utils.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 77e090eb865..167765e40a0 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -207,9 +207,9 @@ zero.bounded.density <- function(x, bw = "SJ", n = 1001) { summarize.result <- function(result) { ans1 <- result %>% dplyr::filter(n == 1) %>% - dplyr::group_by(citation_id, site_id, trt_id, - control, greenhouse, date, time, - cultivar_id, specie_id, name, treatment_id) %>% + dplyr::group_by(.data$citation_id, .data$site_id, .data$trt_id, + .data$control, .data$greenhouse, .data$date, .data$time, + .data$cultivar_id, .data$specie_id, .data$name, .data$treatment_id) %>% dplyr::summarize( # stat must be computed first, before n and mean statname = dplyr::if_else(length(n) == 1, "none", "SE"), stat = stats::sd(mean) / sqrt(length(n)), From 0476c5bea6c2b3b75c7fcd5b6a089ac3da5b2ffd Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 16 Dec 2020 16:14:01 -0700 Subject: [PATCH 1712/2289] add rlang to utils imports https://github.com/PecanProject/pecan/pull/2756/files#r544687930 --- base/utils/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/base/utils/DESCRIPTION b/base/utils/DESCRIPTION index 1174f8fef3f..2b728da22ed 100644 --- a/base/utils/DESCRIPTION +++ b/base/utils/DESCRIPTION @@ -31,6 +31,7 @@ Imports: PEcAn.remote, purrr, RCurl, + rlang, stringi, udunits2 (>= 0.11), XML From 396c23ce8a2d6b18bec5983eccbdc68f8c96f079 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Thu, 17 Dec 2020 15:23:20 -0700 Subject: [PATCH 1713/2289] Update utils.R fixing no visible binding; see #2758 --- base/utils/R/utils.R | 1 + 1 file changed, 1 insertion(+) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 167765e40a0..95f4302e331 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -203,6 +203,7 @@ zero.bounded.density <- function(x, bw = "SJ", n = 1001) { ##' @return result with replicate observations summarized ##' @export summarize.result ##' @usage summarize.result(result) +##' @importFrom rlang .data ##' @author David LeBauer, Alexey Shiklomanov summarize.result <- function(result) { ans1 <- result %>% From 4e6437b42106b7b1800485ec6347daee72cfda7e Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Thu, 17 Dec 2020 22:27:10 +0000 Subject: [PATCH 1714/2289] automated documentation update --- base/utils/NAMESPACE | 1 + 1 file changed, 1 
insertion(+) diff --git a/base/utils/NAMESPACE b/base/utils/NAMESPACE index 0b8158083f9..0feee9d2ed5 100644 --- a/base/utils/NAMESPACE +++ b/base/utils/NAMESPACE @@ -93,3 +93,4 @@ importFrom(PEcAn.logger,logger.setWidth) importFrom(PEcAn.logger,logger.severe) importFrom(PEcAn.logger,logger.warn) importFrom(magrittr,"%>%") +importFrom(rlang,.data) From f67f61df10763c29d392b6593e5d296fd7c3c49f Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 18 Dec 2020 14:08:17 -0700 Subject: [PATCH 1715/2289] Update Rcheck_reference.log --- base/utils/tests/Rcheck_reference.log | 8 -------- 1 file changed, 8 deletions(-) diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index 12de51f0ed0..e4c1b61381d 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -73,14 +73,6 @@ run.write.configs: no visible binding for global variable ‘sa.samples’ run.write.configs: no visible binding for global variable ‘ensemble.samples’ summarize.result: no visible binding for global variable ‘n’ -summarize.result: no visible binding for global variable ‘citation_id’ -summarize.result: no visible binding for global variable ‘site_id’ -summarize.result: no visible binding for global variable ‘trt_id’ -summarize.result: no visible binding for global variable ‘control’ -summarize.result: no visible binding for global variable ‘greenhouse’ -summarize.result: no visible binding for global variable ‘time’ -summarize.result: no visible binding for global variable ‘cultivar_id’ -summarize.result: no visible binding for global variable ‘specie_id’ summarize.result: no visible binding for global variable ‘statname’ Undefined global functions or variables: . citation_id control cultivar_id ensemble.samples From 4a303b7fe3b32914c1f329be5efffc5f55030339 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 18 Dec 2020 15:36:47 -0700 Subject: [PATCH 1716/2289] Update Rcheck_reference.log --- modules/meta.analysis/tests/Rcheck_reference.log | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/meta.analysis/tests/Rcheck_reference.log b/modules/meta.analysis/tests/Rcheck_reference.log index 00f77a3a167..b488b3828f2 100644 --- a/modules/meta.analysis/tests/Rcheck_reference.log +++ b/modules/meta.analysis/tests/Rcheck_reference.log @@ -69,6 +69,7 @@ jagify: no visible binding for global variable ‘n’ jagify: no visible binding for global variable ‘site_id’ jagify: no visible binding for global variable ‘greenhouse’ jagify: no visible binding for global variable ‘citation_id’ +jagify: no visible binding for global variable ‘name’ pecan.ma: no visible global function definition for ‘stem’ pecan.ma: no visible global function definition for ‘window’ pecan.ma.summary: no visible global function definition for From 80148af21142f05d1ab248bee6af2159218fcac5 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 21 Dec 2020 19:17:28 +0000 Subject: [PATCH 1717/2289] Fix column duplication renaming in run.biocro --- models/biocro/R/run.biocro.R | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/models/biocro/R/run.biocro.R b/models/biocro/R/run.biocro.R index beb140b0579..f405c442a93 100644 --- a/models/biocro/R/run.biocro.R +++ b/models/biocro/R/run.biocro.R @@ -100,7 +100,7 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi # with bare variable names, but this way works and ensures that # `R CMD check` doesn't complain about undefined variables. 
hourly_grp <- dplyr::group_by_at(.tbl = hourly.results, .vars = c("year", "doy")) - daily.results <- dplyr::bind_cols( + daily.results.initial <- dplyr::bind_cols( dplyr::summarize_at( .tbl = hourly_grp, .vars = c("Stem", "Leaf", "Root", "AboveLitter", "BelowLitter", @@ -118,6 +118,11 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .tbl = hourly_grp, .vars = c(tavg = "Temp"), .fun = mean)) + daily.results.inter <- dplyr::select(daily.results.initial, year...1, doy...2, Stem, Leaf, Root, AboveLitter, BelowLitter, + Rhizome, Grain, LAI, tmax, SoilEvaporation, CanopyTrans, precip, + tmin, tavg) + daily.results <- dplyr::rename(daily.results.inter, + year = year...1, doy = doy...2) # bind_cols on 4 tables leaves 3 sets of duplicate year and day columns. # Let's drop these. col_order <- c("year", "doy", "Stem", "Leaf", "Root", @@ -127,7 +132,7 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi daily.results <- daily.results[, col_order] daily_grp <- dplyr::group_by_at(.tbl = hourly.results, .vars = "year") - annual.results <- dplyr::bind_cols( + annual.results.initial <- dplyr::bind_cols( dplyr::summarize_at( .tbl = daily_grp, .vars = c("Stem", "Leaf", "Root", "AboveLitter", "BelowLitter", @@ -141,6 +146,11 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .tbl = daily_grp, .vars = c(mat = "Temp"), .fun = mean)) + annual.results.inter <- dplyr::select(annual.results.initial, year...1, Stem, + Leaf, Root, AboveLitter, BelowLitter, + Rhizome, Grain, + SoilEvaporation, CanopyTrans, map, mat) + annual.results <- dplyr::rename(annual.results.inter, year = year...1) col_order <- c("year", "Stem", "Leaf", "Root", "AboveLitter", "BelowLitter", "Rhizome", "Grain", "SoilEvaporation", "CanopyTrans", "map", "mat") From c3073d4647b104e672459dee11f1302e33ac65fe Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Tue, 22 Dec 2020 16:04:11 +0000 Subject: [PATCH 1718/2289] Add fixes for variable naming --- models/biocro/DESCRIPTION | 3 ++- models/biocro/R/run.biocro.R | 19 ++++++++++++------- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/models/biocro/DESCRIPTION b/models/biocro/DESCRIPTION index 61367cd2717..2ccb30f8c0d 100644 --- a/models/biocro/DESCRIPTION +++ b/models/biocro/DESCRIPTION @@ -21,7 +21,8 @@ Imports: lubridate (>= 1.7.0), data.table, dplyr, - XML + XML, + rlang Suggests: BioCro, testthat (>= 2.0.0), diff --git a/models/biocro/R/run.biocro.R b/models/biocro/R/run.biocro.R index f405c442a93..38ef438363b 100644 --- a/models/biocro/R/run.biocro.R +++ b/models/biocro/R/run.biocro.R @@ -118,9 +118,13 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .tbl = hourly_grp, .vars = c(tavg = "Temp"), .fun = mean)) - daily.results.inter <- dplyr::select(daily.results.initial, year...1, doy...2, Stem, Leaf, Root, AboveLitter, BelowLitter, - Rhizome, Grain, LAI, tmax, SoilEvaporation, CanopyTrans, precip, - tmin, tavg) + daily.results.inter <- dplyr::select(daily.results.initial, .data$year...1, . + data$doy...2, .data$Stem, .data$Leaf, + .data$Root, .data$AboveLitter, .data$BelowLitter, + .data$Rhizome, .data$Grain, .data$LAI, + .data$tmax, .data$SoilEvaporation, + .data$CanopyTrans, .data$precip, + .data$tmin, .data$tavg) daily.results <- dplyr::rename(daily.results.inter, year = year...1, doy = doy...2) # bind_cols on 4 tables leaves 3 sets of duplicate year and day columns. 
@@ -146,10 +150,11 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .tbl = daily_grp, .vars = c(mat = "Temp"), .fun = mean)) - annual.results.inter <- dplyr::select(annual.results.initial, year...1, Stem, - Leaf, Root, AboveLitter, BelowLitter, - Rhizome, Grain, - SoilEvaporation, CanopyTrans, map, mat) + annual.results.inter <- dplyr::select(annual.results.initial, .data$year...1, + .data$Stem, .data$Leaf, .data$Root, + .data$AboveLitter, .data$BelowLitter, + .data$Rhizome, .data$Grain, .data$SoilEvaporation, + .data$CanopyTrans, .data$map, .data$mat) annual.results <- dplyr::rename(annual.results.inter, year = year...1) col_order <- c("year", "Stem", "Leaf", "Root", "AboveLitter", "BelowLitter", "Rhizome", "Grain", "SoilEvaporation", "CanopyTrans", From 2b70a315f115d111978dd161889d1015595964f7 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Tue, 22 Dec 2020 16:13:46 +0000 Subject: [PATCH 1719/2289] Add rlang import --- models/biocro/R/run.biocro.R | 1 + 1 file changed, 1 insertion(+) diff --git a/models/biocro/R/run.biocro.R b/models/biocro/R/run.biocro.R index 38ef438363b..3d069972e51 100644 --- a/models/biocro/R/run.biocro.R +++ b/models/biocro/R/run.biocro.R @@ -9,6 +9,7 @@ #' @param coppice.interval numeric, number of years between cuttings for coppice plant or perennial grass. Only used with BioCro 0.9; ignored when using later versions. #' @return output from one of the \code{BioCro::*.Gro} functions (determined by \code{config$genus}), as data.table object #' @export +#' @importFrom rlang .data #' @author David LeBauer run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppice.interval = 1) { From 843e220efae0c6da6245f9b73d6531a81324e29d Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Tue, 22 Dec 2020 16:46:39 +0000 Subject: [PATCH 1720/2289] Fix typo --- models/biocro/NAMESPACE | 1 + models/biocro/R/run.biocro.R | 4 ++-- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/models/biocro/NAMESPACE b/models/biocro/NAMESPACE index e54e16a2827..b2bf97f059f 100644 --- a/models/biocro/NAMESPACE +++ b/models/biocro/NAMESPACE @@ -11,3 +11,4 @@ export(remove.config.BIOCRO) export(run.biocro) export(write.config.BIOCRO) importFrom(data.table,":=") +importFrom(rlang,.data) diff --git a/models/biocro/R/run.biocro.R b/models/biocro/R/run.biocro.R index 3d069972e51..404a5067190 100644 --- a/models/biocro/R/run.biocro.R +++ b/models/biocro/R/run.biocro.R @@ -119,8 +119,8 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .tbl = hourly_grp, .vars = c(tavg = "Temp"), .fun = mean)) - daily.results.inter <- dplyr::select(daily.results.initial, .data$year...1, . 
- data$doy...2, .data$Stem, .data$Leaf, + daily.results.inter <- dplyr::select(daily.results.initial, .data$year...1, + .data$doy...2, .data$Stem, .data$Leaf, .data$Root, .data$AboveLitter, .data$BelowLitter, .data$Rhizome, .data$Grain, .data$LAI, .data$tmax, .data$SoilEvaporation, From d6540dd7b7945349c1fb246bacf5a4ddc4535e9b Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Tue, 22 Dec 2020 17:08:18 +0000 Subject: [PATCH 1721/2289] Fix more variables --- models/biocro/R/run.biocro.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/models/biocro/R/run.biocro.R b/models/biocro/R/run.biocro.R index 404a5067190..800623cac73 100644 --- a/models/biocro/R/run.biocro.R +++ b/models/biocro/R/run.biocro.R @@ -127,7 +127,7 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .data$CanopyTrans, .data$precip, .data$tmin, .data$tavg) daily.results <- dplyr::rename(daily.results.inter, - year = year...1, doy = doy...2) + year = .data$year...1, doy = .data$doy...2) # bind_cols on 4 tables leaves 3 sets of duplicate year and day columns. # Let's drop these. col_order <- c("year", "doy", "Stem", "Leaf", "Root", @@ -156,7 +156,7 @@ run.biocro <- function(lat, lon, metpath, soil.nc = NULL, config = config, coppi .data$AboveLitter, .data$BelowLitter, .data$Rhizome, .data$Grain, .data$SoilEvaporation, .data$CanopyTrans, .data$map, .data$mat) - annual.results <- dplyr::rename(annual.results.inter, year = year...1) + annual.results <- dplyr::rename(annual.results.inter, year = .data$year...1) col_order <- c("year", "Stem", "Leaf", "Root", "AboveLitter", "BelowLitter", "Rhizome", "Grain", "SoilEvaporation", "CanopyTrans", "map", "mat") From d0c57e582e9a2aaa3ff0d0dbd228bf7d3ba15e5c Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 4 Jan 2021 15:22:39 -0500 Subject: [PATCH 1722/2289] updating GEFS to work with SDA --- .../assim.sequential/R/sda.enkf_refactored.R | 15 +- modules/assim.sequential/R/sda_plotting.R | 29 +- .../inst/WillowCreek/NoDataWorkflow.R | 76 ++-- .../inst/WillowCreek/SDA_Workflow.R | 4 +- .../inst/WillowCreek/testing.xml | 25 ++ modules/data.atmosphere/NAMESPACE | 4 - .../data.atmosphere/R/GEFS_helper_functions.R | 51 +-- .../R/download.NOAA_GEFS_downscale.R | 378 ------------------ .../R/downscale_ShortWave_to_hrly.R | 56 --- .../R/downscale_repeat_6hr_to_hrly.R | 28 -- .../R/downscale_spline_to_hourly.R | 56 --- modules/data.atmosphere/R/solar_angle.R | 2 +- .../man/download.NOAA_GEFS_downscale.Rd | 85 ---- .../man/downscale_ShortWave_to_hrly.Rd | 37 -- .../man/downscale_repeat_6hr_to_hrly.Rd | 20 - .../man/downscale_spline_to_hourly.Rd | 22 - 16 files changed, 93 insertions(+), 795 deletions(-) delete mode 100644 modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R delete mode 100644 modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R delete mode 100644 modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R delete mode 100644 modules/data.atmosphere/R/downscale_spline_to_hourly.R delete mode 100644 modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd delete mode 100644 modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd delete mode 100644 modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd delete mode 100644 modules/data.atmosphere/man/downscale_spline_to_hourly.Rd diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 7df659bed00..04526060b3b 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ 
b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -292,8 +292,13 @@ sda.enkf <- function(settings, #---------------- setting up the restart argument if(exists('new.state')){ #Has the analysis been run? Yes, then restart from analysis. - restart.arg<-list(runid = run.id, - start.time = lubridate::ymd_hms(obs.times[t - 1], truncated = 3), + + if(t == 2){start.time = lubridate::ymd_hms(settings$run$start.date, truncated = 3)}else + if(t != 2){start.time = lubridate::ymd_hms(obs.times[t - 1], truncated = 3)} + + + restart.arg<-list(runid = run.id, + start.time = start.time, stop.time = lubridate::ymd_hms(obs.times[t], truncated = 3), settings = settings, new.state = new.state, @@ -307,7 +312,7 @@ sda.enkf <- function(settings, if(t == 1){ config.settings = settings - config.settings$run$end.date = lubridate::ymd_hms(obs.times[t+1], truncated = 3)} + config.settings$run$end.date = format(lubridate::ymd_hms(obs.times[t], truncated = 3), "%Y/%m/%d")} if(t != 1){config.settings = settings} @@ -448,8 +453,8 @@ sda.enkf <- function(settings, - if(is.null(outconfig$samples$met$ids){wts <- unlist(weight_list[[t]])}else{ - (wts <- unlist(weight_list[[t]][outconfig$samples$met$ids])} + if(is.null(outconfig$samples$met$ids)){wts <- unlist(weight_list[[t]])}else{ + wts <- unlist(weight_list[[t]][outconfig$samples$met$ids])} #-analysis function enkf.params[[t]] <- Analysis.sda(settings, diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 2655f45d959..f53ed737aad 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -87,7 +87,7 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob xlab = "Year", ylab = ylab.names[grep(colnames(X)[i], var.names)], main = colnames(X)[i]) - ciEnvelope(as.Date(obs.times[t1:t]), + PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), as.numeric(Ybar[, i]) - as.numeric(YCI[, i]) * 1.96, as.numeric(Ybar[, i]) + as.numeric(YCI[, i]) * 1.96, col = alphagreen) @@ -107,11 +107,11 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob } # forecast - ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') + PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') lines(as.Date(obs.times[t1:t]), Xbar, col = "darkblue", type = "l", lwd = 2) # analysis - ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) + PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) lines(as.Date(obs.times[t1:t]), Xa, col = "black", lty = 2, lwd = 2) #legend('topright',c('Forecast','Data','Analysis'),col=c(alphablue,alphagreen,alphapink),lty=1,lwd=5) } @@ -125,8 +125,7 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov #Defining some colors generate_colors_sda() t1 <- 1 - ylab.names <- unlist(sapply(settings$state.data.assimilation$state.variable, - function(x) { x })[2, ], use.names = FALSE) + #ylab.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "unit") var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") #---- pdf(file.path(settings$outdir,"SDA", "sda.enkf.time-series.pdf")) @@ -142,16 +141,16 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov Y.order <- sapply(colnames(FORECAST[[t]]),agrep,x=colnames(Ybar),max=2,USE.NAMES = F)%>%unlist Ybar <- 
Ybar[,Y.order] YCI <- t(as.matrix(sapply(obs.cov[t1:t], function(x) { - if (is.null(x)) { + if (is.na(x)) { rep(NA, length(names.y)) - } - sqrt(diag(x)) + }else{ + sqrt(diag(x))} }))) Ybar[is.na(Ybar)]<-0 YCI[is.na(YCI)]<-0 - YCI <- YCI[,Y.order] + YCI <- YCI[,c(Y.order)] @@ -184,12 +183,12 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov ylim = range(c(XaCI, Xci,Ybar[, 1]), na.rm = TRUE), type = "n", xlab = "Year", - ylab = ylab.names[grep(colnames(X)[i], var.names)], + #ylab = ylab.names[grep(colnames(X)[i], var.names)], main = colnames(X)[i]) # observation / data if (i<=ncol(X)) { # - ciEnvelope(as.Date(obs.times[t1:t]), + PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), as.numeric(Ybar[, i]) - as.numeric(YCI[, i]) * 1.96, as.numeric(Ybar[, i]) + as.numeric(YCI[, i]) * 1.96, col = alphagreen) @@ -199,11 +198,11 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov } # forecast - ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') #alphablue + PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') #alphablue lines(as.Date(obs.times[t1:t]), Xbar, col = "darkblue", type = "l", lwd = 2) #"darkblue" # analysis - ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) #alphapink + PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) #alphapink lines(as.Date(obs.times[t1:t]), Xa, col = "black", lty = 2, lwd = 2) #"black" legend('topright',c('Forecast','Data','Analysis'),col=c(alphablue,alphagreen,alphapink),lty=1,lwd=5) @@ -250,7 +249,7 @@ postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, o xlab = "Time", ylab = "Error", main = paste(colnames(X)[i], " Error = Forecast - Data")) - ciEnvelope(rev(t1:t), + PEcAn.photosynthesis::ciEnvelope(rev(t1:t), rev(Xci[, 1] - unlist(Ybar[, i])), rev(Xci[, 2] - unlist(Ybar[, i])), col = alphabrown) @@ -269,7 +268,7 @@ postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, o xlab = "Time", ylab = "Update", main = paste(colnames(X)[i], "Update = Forecast - Analysis")) - ciEnvelope(rev(t1:t), + PEcAn.photosynthesis::ciEnvelope(rev(t1:t), rev(Xbar - XaCI[, 1]), rev(Xbar - XaCI[, 2]), col = alphapurple) diff --git a/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R b/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R index 036f3106d64..af07b3b03e1 100644 --- a/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R +++ b/modules/assim.sequential/inst/WillowCreek/NoDataWorkflow.R @@ -50,50 +50,50 @@ con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) -all.previous.sims <- list.dirs(outputPath, recursive = F) -if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { - - tryCatch({ - # Looking through all the old simulations and find the most recent - all.previous.sims <- all.previous.sims %>% - map(~ list.files(path = file.path(.x, "SDA"))) %>% - setNames(all.previous.sims) %>% - discard( ~ !"sda.output.Rdata" %in% .x) # I'm throwing out the ones that they did not have a SDA output - - last.sim <- - names(all.previous.sims) %>% - map_chr( ~ strsplit(.x, "_")[[1]][5]) %>% - map_dfr(~ db.query( - query = paste("SELECT * FROM workflows WHERE id =", .x), - con = con - ) %>% - mutate(ID=.x)) %>% - mutate(start_date = as.Date(start_date)) %>% - arrange(desc(start_date), desc(ID)) %>% - head(1) - # pulling the date and the path to the last 
SDA - restart.path <-grep(last.sim$ID, names(all.previous.sims), value = T) - sda.start <- last.sim$start_date + lubridate::days(1) - }, - error = function(e) { - restart.path <- NULL - sda.start <- Sys.Date() - 1 - PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) - }) - - # if there was no older sims - if (is.na(sda.start)) - sda.start <- Sys.Date() - 9 -} -#sda.start <- Sys.Date() -sda.end <- sda.start + lubridate::days(5) +# all.previous.sims <- list.dirs(outputPath, recursive = F) +# if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { +# +# tryCatch({ +# # Looking through all the old simulations and find the most recent +# all.previous.sims <- all.previous.sims %>% +# map(~ list.files(path = file.path(.x, "SDA"))) %>% +# setNames(all.previous.sims) %>% +# discard( ~ !"sda.output.Rdata" %in% .x) # I'm throwing out the ones that they did not have a SDA output +# +# last.sim <- +# names(all.previous.sims) %>% +# map_chr( ~ strsplit(.x, "_")[[1]][5]) %>% +# map_dfr(~ db.query( +# query = paste("SELECT * FROM workflows WHERE id =", .x), +# con = con +# ) %>% +# mutate(ID=.x)) %>% +# mutate(start_date = as.Date(start_date)) %>% +# arrange(desc(start_date), desc(ID)) %>% +# head(1) +# # pulling the date and the path to the last SDA +# restart.path <-grep(last.sim$ID, names(all.previous.sims), value = T) +# sda.start <- last.sim$start_date + lubridate::days(1) +# }, +# error = function(e) { +# restart.path <- NULL +# sda.start <- Sys.Date() - 1 +# PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) +# }) +# +# # if there was no older sims +# if (is.na(sda.start)) +# sda.start <- Sys.Date() - 9 +# } +sda.start <- Sys.Date() +sda.end <- sda.start + lubridate::days(3) #----------------------------------------------------------------------------------------------- #------------------------------------------ Download met and flux ------------------------------ #----------------------------------------------------------------------------------------------- # Finding the right end and start date -met.start <- Sys.Date() +met.start <- sda.start - lubridate::days(2) met.end <- met.start + lubridate::days(16) diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R index 28c3f275943..885f07fdda0 100644 --- a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -21,7 +21,7 @@ plan(multisession) outputPath <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output/" nodata <- FALSE #use this to run SDA with no data -restart <- FALSE #flag to start from previous run or not +restart <- TRUE #flag to start from previous run or not days.obs <- 3 #how many of observed data *BY HOURS* to include -- not including today setwd(outputPath) options(warn=-1) @@ -88,7 +88,7 @@ if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { sda.start <- Sys.Date() - 9 } #to manually change start date -sda.start <- Sys.Date() +#sda.start <- Sys.Date() sda.end <- sda.start + lubridate::days(5) # Finding the right end and start date diff --git a/modules/assim.sequential/inst/WillowCreek/testing.xml b/modules/assim.sequential/inst/WillowCreek/testing.xml index 637eb245100..722984b9677 100644 --- a/modules/assim.sequential/inst/WillowCreek/testing.xml +++ b/modules/assim.sequential/inst/WillowCreek/testing.xml @@ -43,6 +43,31 @@ 9999 100 + + 
AbvGrndWood + 0 + 9999 + + + LAI + 0 + 9999 + + + TotSoilCarb + 0 + 9999 + + + SoilMoistFrac + 0 + 1 + + + litter_carbon_content + 0 + 9999 + year 2017-01-01 diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 580812a2ad1..6c60b78ad42 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -33,14 +33,10 @@ export(download.NARR_site) export(download.NEONmet) export(download.NLDAS) export(download.NOAA_GEFS) -export(download.NOAA_GEFS_downscale) export(download.PalEON) export(download.PalEON_ENS) export(download.US_WCr) export(download.US_Wlef) -export(downscale_ShortWave_to_hrly) -export(downscale_repeat_6hr_to_hrly) -export(downscale_spline_to_hourly) export(equation_of_time) export(exner) export(extract.local.CMIP5) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 336ef9235c7..39b60d0c644 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -481,7 +481,7 @@ process_gridded_noaa_download <- function(lat_list, #Write netCDF - noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE) + PEcAn.data.atmosphere::write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE) if(downscale){ #Downscale the forecast from 6hr to 1hr @@ -502,7 +502,7 @@ process_gridded_noaa_download <- function(lat_list, results_list[[ens]] <- results #Run downscaling - noaaGEFSpoint::temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1) + temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1) } @@ -619,7 +619,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1 dplyr::select("time", tidyselect::all_of(cf_var_names), "NOAA.member") #Write netCDF - noaaGEFSpoint::write_noaa_gefs_netcdf(df = forecast_noaa_ds, + PEcAn.data.atmosphere::write_noaa_gefs_netcdf(df = forecast_noaa_ds, ens = ens, lat = lat.in, lon = lon.in, @@ -719,51 +719,6 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ } -#' Cosine of solar zenith angle -#' -#' For explanations of formulae, see http://www.itacanet.org/the-sun-as-a-source-of-energy/part-3-calculating-solar-angles/ -#' -#' @author Alexey Shiklomanov -#' @param doy Day of year -#' @param lat Latitude -#' @param lon Longitude -#' @param dt Timestep -#' @noRd -#' @param hr Hours timestep -#' @return `numeric(1)` of cosine of solar zenith angle -#' @export -cos_solar_zenith_angle <- function(doy, lat, lon, dt, hr) { - et <- equation_of_time(doy) - merid <- floor(lon / 15) * 15 - merid[merid < 0] <- merid[merid < 0] + 15 - lc <- (lon - merid) * -4/60 ## longitude correction - tz <- merid / 360 * 24 ## time zone - midbin <- 0.5 * dt / 86400 * 24 ## shift calc to middle of bin - t0 <- 12 + lc - et - tz - midbin ## solar time - h <- pi/12 * (hr - t0) ## solar hour - dec <- -23.45 * pi / 180 * cos(2 * pi * (doy + 10) / 365) ## declination - cosz <- sin(lat * pi / 180) * sin(dec) + cos(lat * pi / 180) * cos(dec) * cos(h) - cosz[cosz < 0] <- 0 - return(cosz) -} - -#' Equation of time: Eccentricity and obliquity -#' -#' For description of calculations, see https://en.wikipedia.org/wiki/Equation_of_time#Calculating_the_equation_of_time -#' -#' @author 
Alexey Shiklomanov -#' @param doy Day of year -#' @noRd -#' @return `numeric(1)` length of the solar day, in hours. - -equation_of_time <- function(doy) { - stopifnot(doy <= 366) - f <- pi / 180 * (279.5 + 0.9856 * doy) - et <- (-104.7 * sin(f) + 596.2 * sin(2 * f) + 4.3 * - sin(4 * f) - 429.3 * cos(f) - 2 * - cos(2 * f) + 19.3 * cos(3 * f)) / 3600 # equation of time -> eccentricity and obliquity - return(et) -} #' @title Downscale repeat to hourly #' @return A dataframe of downscaled data diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R b/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R deleted file mode 100644 index ccdee0f5c34..00000000000 --- a/modules/data.atmosphere/R/download.NOAA_GEFS_downscale.R +++ /dev/null @@ -1,378 +0,0 @@ -##' @title Downscale NOAA GEFS Weather Data -##' -##' @section Information on Units: -##' Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downlaoded -##' in Kelvin. -##' @references https://www.ncdc.noaa.gov/crn/measurements.html -##' -##' @section NOAA_GEFS General Information: -##' This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. A 16 day forecast is avaliable -##' every 6 hours. Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn -##' standard. -##' -##' @section Data Avaliability: -##' NOAA GEFS weather data is avaliable on a rolling 12 day basis; dates provided in "start_date" must be within this range. The end date can be any point after -##' that, but if the end date is beyond 16 days, only 16 days worth of forecast are recorded. Times are rounded down to the previous 6 hour forecast. NOAA -##' GEFS weather data isn't always posted immediately, and to compensate, this function adjusts requests made in the last two hours -##' back two hours (approximately the amount of time it takes to post the data) to make sure the most current forecast is used. -##' -##' @section Data Save Format: -##' Data is saved in the netcdf format to the specified directory. File names reflect the precision of the data to the given range of days. -##' NOAA.GEFS.willow creek.3.2018-06-08T06:00.2018-06-24T06:00.nc specifies the forecast, using ensemble nubmer 3 at willow creek on -##' June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. -##' -##' @return A list of data frames is returned containing information about the data file that can be used to locate it later. Each -##' data frame contains information about one file. -##' -##' @param outfolder Directory where results should be written -##' @param start_date, end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight) -##' @param lat site latitude in decimal degrees -##' @param lon site longitude in decimal degrees -##' @param site_id The unique ID given to each site. This is used as part of the file name. -##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? -##' @param verbose logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info. -##' @param ... 
Other arguments, currently ignored -##' @export -##' -##' @examples -##' \dontrun{ -##' download.NOAA_GEFS(outfolder="~/Working/results", -##' lat.in= 45.805925, -##' lon.in = -90.07961, -##' site_id = 676) -##' } -##' -##' @author Katie Zarada - modified code from Luke Dramko and Laura Puckett -##' -##' - - -download.NOAA_GEFS_downscale <- function(outfolder, lat.in, lon.in, site_id, start_date, end_date, - overwrite = FALSE, verbose = FALSE, ...) { - - start_date <- as.POSIXct(start_date, tz = "UTC") - end_date <- as.POSIXct(end_date, tz = "UTC") - - #It takes about 2 hours for NOAA GEFS weather data to be posted. Therefore, if a request is made within that 2 hour window, - #we instead want to adjust the start time to the previous forecast, which is the most recent one avaliable. (For example, if - #there's a request at 7:00 a.m., the data isn't up yet, so the function grabs the data at midnight instead.) - if (abs(as.numeric(Sys.time() - start_date, units="hours")) <= 2) { - start_date = start_date - lubridate::hours(2) - end_date = end_date - lubridate::hours(2) - } - - #Date/time error checking - Checks to see if the start date is before the end date - if (start_date > end_date) { - PEcAn.logger::logger.severe("Invalid dates: end date occurs before start date") - } else if (as.numeric(end_date - start_date, units="hours") < 6) { #Done separately to produce a more helpful error message. - PEcAn.logger::logger.severe("Times not far enough appart for a forecast to fall between them. Forecasts occur every six hours; make sure start - and end dates are at least 6 hours apart.") - } - - #Set the end forecast date (default is the full 16 days) - if (end_date > start_date + lubridate::days(16)) { - end_date = start_date + lubridate::days(16) - PEcAn.logger::logger.info(paste0("Updated end date is ", end_date)) - } - - #Round the starting date/time down to the previous block of 6 hours. Adjust the time frame to match. - forecast_hour = (lubridate::hour(start_date) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - increments = as.integer(as.numeric(end_date - start_date, units = "hours") / 6) #Calculating the number of forecasts between start and end dates - increments = increments + ((lubridate::hour(end_date) - lubridate::hour(start_date)) %/% 6) #These calculations are required to use the rnoaa package. - - end_hour = sprintf("%04d", ((forecast_hour + (increments * 6)) %% 24) * 100) #Calculating the starting hour as a string, which is required type to access the - #data via the rnoaa package - forecast_hour = sprintf("%04d", forecast_hour * 100) #Having the end date as a string is useful later, too. - - #Recreate the adjusted start and end dates. - start_date = as.POSIXct(paste0(lubridate::year(start_date), "-", lubridate::month(start_date), "-", lubridate::day(start_date), " ", - substring(forecast_hour, 1,2), ":00:00"), tz="UTC") - - end_date = start_date + lubridate::hours(increments * 6) - - - #Bounds date checking - #NOAA's GEFS database maintains a rolling 12 days of forecast data for access through this function. - #We do want Sys.Date() here - NOAA makes data unavaliable days at a time, not forecasts at a time. - NOAA_GEFS_Start_Date = as.POSIXct(Sys.Date(), tz="UTC") - lubridate::days(11) #Subtracting 11 days is correct, not 12. - - #Check to see if start_date is valid. This must be done after date adjustment. 
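#A quick numeric illustration of the 6-hour flooring earlier in this function
#(example values only): integer division by 6 followed by re-multiplication,
#so requests at 07:00 and 11:59 both resolve to the 06:00 forecast cycle.
# (7 %/% 6) * 6   # 6
# (11 %/% 6) * 6  # 6
# (12 %/% 6) * 6  # 12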
- if (as.POSIXct(Sys.time(), tz="UTC") < start_date || start_date < NOAA_GEFS_Start_Date) { - PEcAn.logger::logger.severe(sprintf('Start date (%s) exceeds the NOAA GEFS range (%s to %s).', - start_date, - NOAA_GEFS_Start_Date, Sys.Date())) - } - - if (lubridate::hour(start_date) > 23) { - PEcAn.logger::logger.severe(sprintf("Start time %s is not a valid time", lubridate::hour(start_date))) - } - - if (lubridate::hour(end_date) > 23) { #Done separately from the previous if statement in order to have more specific error messages. - PEcAn.logger::logger.severe(sprintf("End time %s is not a valid time", lubridate::hour(end_date))) - } - #End date/time error checking - - ################################################# - #NOAA variable downloading - #Uses the rnoaa package to download data - - #We want data for each of the following variables. Here, we're just getting the raw data; later, we will convert it to the - #cf standard format when relevant. - noaa_var_names = c("Temperature_height_above_ground_ens", "Pressure_surface_ens", "Relative_humidity_height_above_ground_ens", "Downward_Long-Wave_Radp_Flux_surface_6_Hour_Average_ens", - "Downward_Short-Wave_Radiation_Flux_surface_6_Hour_Average_ens", "Total_precipitation_surface_6_Hour_Accumulation_ens", - "u-component_of_wind_height_above_ground_ens", "v-component_of_wind_height_above_ground_ens") - - cf_var_names = c("air_temperature", "air_pressure", "specific_humidity", "surface_downwelling_longwave_flux_in_air", - "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "eastward_wind", "northward_wind") - #These are the cf standard names - cf_var_names1 = c("air_temperature", "air_pressure", "specific_humidity", "surface_downwelling_longwave_flux_in_air", - "surface_downwelling_shortwave_flux_in_air", "precipitation_flux", "wind_speed") - cf_var_units = c("K", "Pa", "1", "Wm-2", "Wm-2", "kgm-2s-1", "ms-1") #Negative numbers indicate negative exponents - - # This debugging loop allows you to check if the cf variables are correctly mapped to the equivalent - # NOAA variable names. This is very important, as much of the processing below will be erroneous if - # these fail to match up. - # for (i in 1:length(cf_var_names)) { - # print(sprintf("cf / noaa : %s / %s", cf_var_names[[i]], noaa_var_names[[i]])) - #} - - noaa_data = list() - - #Downloading the data here. It is stored in a matrix, where columns represent time in intervals of 6 hours, and rows represent - #each ensemble member. Each variable gets its own matrix, which is stored in the list noaa_data. - - - for (i in 1:length(noaa_var_names)) { - noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = seq_len(increments + 1), forecast_time = forecast_hour, date=format(start_date, "%Y%m%d"))$data - } - - - ### ERROR CHECK FOR FIRST TIME POINT ### - - #Check if first time point is present, if not, grab from previous forecast - - index <- which(lengths(noaa_data) == increments * 21) #finding which ones are missing first point - new_start = start_date - lubridate::hours(6) #grab previous forecast - new_forecast_hour = (lubridate::hour(new_start) %/% 6) * 6 #Integer division by 6 followed by re-multiplication acts like a "floor function" for multiples of 6 - new_forecast_hour = sprintf("%04d", new_forecast_hour * 100) #Having the end date as a string is useful later, too. 
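#Orientation note for the back-fill below: each variable's matrix should hold
#21 ensemble members by (increments + 1) time points, so a total length of
#increments * 21 in the check above indicates exactly one time column -- the
#first -- is missing and gets filled from the forecast issued 6 hours earlier.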
- - filled_noaa_data = list() - - for (i in index) { - filled_noaa_data[[i]] = rnoaa::gefs(noaa_var_names[i], lat.in, lon.in, raw=TRUE, time_idx = 1, forecast_time = new_forecast_hour, date=format(new_start, "%Y%m%d"))$data - } - - #add filled data into first slot of forecast - for(i in index){ - noaa_data[[i]] = cbind(filled_noaa_data[[i]], noaa_data[[i]])} - - - #Fills in data with NaNs if there happens to be missing columns. - for (i in 1:length(noaa_var_names)) { - if (!is.null(ncol(noaa_data[[i]]))) { # Is a matrix - nans <- rep(NaN, nrow(noaa_data[[i]])) - while (ncol(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- cbind(noaa_data[[i]], nans) - } - } else { # Is a vector - while (length(noaa_data[[i]]) < increments) { - noaa_data[[i]] <- c(noaa_data[[i]], NaN); - } - } - } - - ################################################### - # Not all NOAA data units match the cf data standard. In this next section, data are processed to - # confirm with the standard when necessary. - # The following is a list of variables which need to be processed: - # 1. NOAA's relative humidity must be converted to specific humidity - # 2. NOAA's measure of precipitation is the accumulation over 6 hours; cf's standard is precipitation per second - - #Convert NOAA's relative humidity to specific humidity - humid_index = which(cf_var_names == "specific_humidity") - - #Temperature, pressure, and relative humidity are required to calculate specific humidity. - humid_data = noaa_data[[humid_index]] - temperature_data = noaa_data[[which(cf_var_names == "air_temperature")]] - pressure_data = noaa_data[[which(cf_var_names == "air_pressure")]] - - #Depending on the volume and dimensions of data you download, sometimes R stores it as a vector and sometimes - #as a matrix; the different cases must be processed with different loops. - #(The specific corner case in which a vector would be generated is if only one hour is requested; for example, - #only the data at time_idx 1, for example). - if (as.logical(nrow(humid_data))) { - for (i in 1:length(humid_data)) { - humid_data[i] = PEcAn.data.atmosphere::rh2qair(humid_data[i], temperature_data[i], pressure_data[i]) - } - } else { - for (i in 1:nrow(humid_data)) { - for (j in 1:ncol(humid_data)) { - humid_data[i,j] = PEcAn.data.atmosphere::rh2qair(humid_data[i,j], temperature_data[i,j], pressure_data[i,j]) - } - } - } - - #Update the noaa_data list with the correct data - noaa_data[[humid_index]] <- humid_data - - # Convert NOAA's total precipitation (kg m-2) to precipitation flux (kg m-2 s-1) - #NOAA precipitation data is an accumulation over 6 hours. - precip_index = which(cf_var_names == "precipitation_flux") - - #The statement udunits2::ud.convert(1, "kg m-2 6 hr-1", "kg m-2 s-1") is equivalent to udunits2::ud.convert(1, "kg m-2 hr-1", "kg m-2 s-1") * 6, - #which is a little unintuitive. 
What will do the conversion we want is what's below: - noaa_data[[precip_index]] = udunits2::ud.convert(noaa_data[[precip_index]], "kg m-2 hr-1", "kg m-2 6 s-1") #There are 21600 seconds in 6 hours - - - ##################################### - #done with data processing- now want to take the list and make one df for downscaling - - time = seq(from = start_date, to = end_date, by = "6 hour") - forecasts = matrix(ncol = length(noaa_data)+ 2, nrow = 0) - colnames(forecasts) <- c(cf_var_names, "timestamp", "NOAA.member") - - index = matrix(ncol = length(noaa_data), nrow = length(time)) - for(i in 1:21){ - rm(index) - index = matrix(ncol = length(noaa_data), nrow = length(time)) - for(j in 1:length(noaa_data)){ - index[,j] <- noaa_data[[j]][i,] - colnames(index) <- c(cf_var_names) - index <- as.data.frame(index) - } - index$timestamp <- as.POSIXct(time) - index$NOAA.member <- rep(i, times = length(time)) - forecasts <- rbind(forecasts, index) - } - - forecasts <- forecasts %>% tidyr::drop_na() - #forecasts$timestamp <- as.POSIXct(rep(time, 21)) - forecasts$wind_speed <- sqrt(forecasts$eastward_wind^ 2 + forecasts$northward_wind^ 2) - - ### Downscale state variables - gefs_hour <- PEcAn.data.atmosphere::downscale_spline_to_hourly(df = forecasts, VarNamesStates = c("air_temperature", "wind_speed", "specific_humidity", "air_pressure")) - - - ## convert longwave to hourly (just copy 6 hourly values over past 6-hour time period) - nonSW.flux.hrly <- forecasts %>% - dplyr::select(timestamp, NOAA.member, surface_downwelling_longwave_flux_in_air) %>% - PEcAn.data.atmosphere::downscale_repeat_6hr_to_hrly() %>% dplyr::group_by_at(c("NOAA.member", "timestamp")) %>% - dplyr::summarize(surface_downwelling_longwave_flux_in_air = mean(surface_downwelling_longwave_flux_in_air)) - - ## downscale shortwave to hourly - time0 = min(forecasts$timestamp) - time_end = max(forecasts$timestamp) - ShortWave.ds = PEcAn.data.atmosphere::downscale_ShortWave_to_hrly(forecasts, time0, time_end, lat = lat.in, lon = lon.in, output_tz= "UTC")%>% - dplyr::group_by_at(c("NOAA.member", "timestamp")) %>% - dplyr::summarize(surface_downwelling_shortwave_flux_in_air = mean(surface_downwelling_shortwave_flux_in_air)) - - ## Downscale Precipitation Flux - #fills in the hours between the 6hr GEFS with zeros using the timestamp from downscaled Flux - precip.hrly <- forecasts %>% - dplyr::select(timestamp, NOAA.member, precipitation_flux) %>% - tidyr::complete(timestamp = nonSW.flux.hrly$timestamp, tidyr::nesting(NOAA.member), fill = list(precipitation_flux = 0)) - -#join together the 4 different downscaled data frames - #checks for errors in downscaled data; removes NA times; replaces erroneous values with 0's or NA's - joined<- dplyr::inner_join(gefs_hour, nonSW.flux.hrly, by = c("NOAA.member", "timestamp")) - joined<- dplyr::inner_join(joined, precip.hrly, by = c("NOAA.member", "timestamp")) - - joined <- dplyr::inner_join(joined, ShortWave.ds, by = c("NOAA.member", "timestamp")) %>% - dplyr::distinct() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = dplyr::if_else(surface_downwelling_shortwave_flux_in_air < 0, 0, surface_downwelling_shortwave_flux_in_air), - specific_humidity = dplyr::if_else(specific_humidity <0, 0, specific_humidity), - air_temperature = dplyr::if_else(air_temperature > 320, NA_real_, air_temperature), - air_temperature = dplyr::if_else(air_temperature < 240, NA_real_, air_temperature), - precipitation_flux = dplyr::if_else(precipitation_flux < 0, 0, precipitation_flux), - 
surface_downwelling_longwave_flux_in_air = dplyr::if_else(surface_downwelling_longwave_flux_in_air < 0, NA_real_, surface_downwelling_longwave_flux_in_air), - wind_speed = dplyr::if_else(wind_speed <0, 0, wind_speed)) %>% - dplyr::filter(is.na(timestamp) == FALSE) - - - - - ############################################# - # Done with data processing. Now writing the data to the specified directory. Each ensemble member is written to its own file, for a total - # of 21 files. - if (!dir.exists(outfolder)) { - dir.create(outfolder, recursive=TRUE, showWarnings = FALSE) - } - - # Create a data frame with information about the file. This data frame's format is an internal PEcAn standard, and is stored in the BETY database to - # locate the data file. The data file is stored on the local machine where the download occured. Because NOAA GEFS is an - # ensemble of 21 different forecast models, each model gets its own data frame. All of the information is the same for - # each file except for the file name. - results = data.frame( - file = "", #Path to the file (added in loop below). - host = PEcAn.remote::fqdn(), #Name of the server where the file is stored - mimetype = "application/x-netcdf", #Format the data is saved in - formatname = "CF Meteorology", #Type of data - startdate = paste0(format(start_date, "%Y-%m-%dT%H:%M:00")), #starting date and time, down to the second - enddate = paste0(format(end_date, "%Y-%m-%dT%H:%M:00")), #ending date and time, down to the second - dbfile.name = "NOAA_GEFS_downscale", #Source of data (ensemble number will be added later) - stringsAsFactors = FALSE - ) - - results_list = list() - - #Each ensemble gets its own file. - #These dimensions will be used for all 21 ncdf4 file members, so they're all declared once here. - #The data is really one-dimensional for each file (though we include lattitude and longitude dimensions - #to comply with the PEcAn standard). - time_dim = ncdf4::ncdim_def(name="time", - units = paste("hours since", format(start_date, "%Y-%m-%dT%H:%M")), - seq(from = 0, length.out = length(unique(joined$timestamp))), #GEFS forecast starts 6 hours from start time - create_dimvar = TRUE) - lat_dim = ncdf4::ncdim_def("latitude", "degree_north", lat.in, create_dimvar = TRUE) - lon_dim = ncdf4::ncdim_def("longitude", "degree_east", lon.in, create_dimvar = TRUE) - - dimensions_list = list(time_dim, lat_dim, lon_dim) - - nc_var_list = list() - for (i in 1:length(cf_var_names1)) { #Each ensemble member will have data on each variable stored in their respective file. - nc_var_list[[i]] = ncdf4::ncvar_def(cf_var_names1[i], cf_var_units[i], dimensions_list, missval=NaN) - } - - #For each ensemble - for (i in 1:21) { # i is the ensemble number - #Generating a unique identifier string that characterizes a particular data set. - identifier = paste("NOAA_GEFS_downscale", site_id, i, format(start_date, "%Y-%m-%dT%H:%M"), - format(end_date, "%Y-%m-%dT%H:%M"), sep="_") - ensemble_folder = file.path(outfolder, identifier) - - #ensemble_folder = file.path(outfolder, identifier) - data = as.data.frame(joined %>% dplyr::select(NOAA.member, cf_var_names1) %>% - dplyr::filter(NOAA.member == i) %>% - dplyr::select(-NOAA.member)) - - - if (!dir.exists(ensemble_folder)) { - dir.create(ensemble_folder, recursive=TRUE, showWarnings = FALSE)} - - flname = file.path(ensemble_folder, paste(identifier, "nc", sep = ".")) - - #Each ensemble member gets its own unique data frame, which is stored in results_list - #Object references in R work differently than in other languages. 
When adding an item to a list, R creates a copy of it - #for you instead of just inserting the object reference, so this works. - results$file <- flname - results$dbfile.name <- flname - results_list[[i]] <- results - - if (!file.exists(flname) | overwrite) { - nc_flptr = ncdf4::nc_create(flname, nc_var_list, verbose=verbose) - - #For each variable associated with that ensemble - for (j in 1:length(cf_var_names1)) { - # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble - ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], data[,j]) - } - - ncdf4::nc_close(nc_flptr) #Write to the disk/storage - } else { - PEcAn.logger::logger.info(paste0("The file ", flname, " already exists. It was not overwritten.")) - } - - } - - return(results_list) -} #downscale.NOAA_GEFS diff --git a/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R b/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R deleted file mode 100644 index b5f146fba0c..00000000000 --- a/modules/data.atmosphere/R/downscale_ShortWave_to_hrly.R +++ /dev/null @@ -1,56 +0,0 @@ -##' @title Downscale shortwave to hourly -##' @return A dataframe of downscaled state variables -##' -##' @param debiased, data frame of variables -##' @param time0, first timestep -##' @param time_end, last time step -##' @param lat, lat of site -##' @param lon, long of site -##' @param output_tz, output timezone -##' @export -##' -##' @author Laura Puckett -##' -##' - -downscale_ShortWave_to_hrly <- function(debiased, time0, time_end, lat, lon, output_tz = "UTC"){ - ## downscale shortwave to hourly - - - downscale_solar_geom <- function(doy, lon, lat) { - - dt <- median(diff(doy)) * 86400 # average number of seconds in time interval - hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy - - ## calculate potential radiation - cosz <- PEcAn.data.atmosphere::cos_solar_zenith_angle(doy, lat, lon, dt, hr) - rpot <- 1366 * cosz - return(rpot) - } - grouping = append("NOAA.member", "timestamp") - - surface_downwelling_shortwave_flux_in_air<- rep(debiased$surface_downwelling_shortwave_flux_in_air, each = 6) - time = rep(seq(from = as.POSIXct(time0, tz = output_tz), to = as.POSIXct(time_end + lubridate::hours(5), tz = output_tz), by = 'hour'), times = 21) - - ShortWave.hours <- as.data.frame(surface_downwelling_shortwave_flux_in_air) - ShortWave.hours$timestamp = time - ShortWave.hours$NOAA.member = rep(debiased$NOAA.member, each = 6) - ShortWave.hours$hour = as.numeric(format(time, "%H")) - ShortWave.hours$group = as.numeric(as.factor(format(ShortWave.hours$time, "%d"))) - - - - ShortWave.ds <- ShortWave.hours %>% - dplyr::mutate(doy = lubridate::yday(timestamp) + hour/24) %>% - dplyr::mutate(rpot = downscale_solar_geom(doy, lon, lat)) %>% # hourly sw flux calculated using solar geometry - dplyr::group_by_at(c("group", "NOAA.member")) %>% - dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry - dplyr::ungroup() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% - dplyr::select(timestamp, NOAA.member, surface_downwelling_shortwave_flux_in_air) %>% - dplyr::filter(timestamp >= min(debiased$timestamp) & timestamp <= max(debiased$timestamp)) - - -} - - diff --git a/modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R b/modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R deleted file mode 100644 index c55dbc2b99c..00000000000 --- 
a/modules/data.atmosphere/R/downscale_repeat_6hr_to_hrly.R +++ /dev/null @@ -1,28 +0,0 @@ -##' @title Downscale repeat to hourly -##' @return A dataframe of downscaled data -##' -##' @param data.6hr, dataframe of data to be downscaled (Longwave) -##' @export -##' -##' @author Laura Puckett -##' -##' - -downscale_repeat_6hr_to_hrly <- function(data.6hr){ - data.hrly = data.6hr %>% - dplyr::group_by_all() %>% - tidyr::expand(timestamp = c(timestamp, - timestamp + lubridate::hours(1), - timestamp + lubridate::hours(2), - timestamp + lubridate::hours(3), - timestamp + lubridate::hours(4), - timestamp + lubridate::hours(5), - timestamp + lubridate::hours(6))) %>% - dplyr::ungroup() %>% - dplyr::mutate(timestamp = lubridate::as_datetime(timestamp, tz = "UTC")) %>% - dplyr::filter(timestamp >= min(data.6hr$timestamp) & timestamp <= max(data.6hr$timestamp)) %>% - dplyr::distinct() - - #arrange(timestamp) -return(data.hrly) -} diff --git a/modules/data.atmosphere/R/downscale_spline_to_hourly.R b/modules/data.atmosphere/R/downscale_spline_to_hourly.R deleted file mode 100644 index 34ffabcb331..00000000000 --- a/modules/data.atmosphere/R/downscale_spline_to_hourly.R +++ /dev/null @@ -1,56 +0,0 @@ -##' @title Downscale spline to hourly -##' @return A dataframe of downscaled state variables -##' -##' @param df, dataframe of data to be downscales -##' @param VarNamesStates, names of vars that are state variables -##' @export -##' -##' @author Laura Puckett -##' -##' - -downscale_spline_to_hourly <- function(df,VarNamesStates){ - # -------------------------------------- - # purpose: interpolates debiased forecasts from 6-hourly to hourly - # Creator: Laura Puckett, December 16 2018 - # -------------------------------------- - # @param: df, a dataframe of debiased 6-hourly forecasts - - interpolate <- function(jday, var){ - result <- splinefun(jday, var, method = "monoH.FC") - return(result(seq(min(as.numeric(jday)), max(as.numeric(jday)), 1/24))) - } - - - - t0 = min(df$timestamp) - df <- df %>% - dplyr::mutate(days_since_t0 = difftime(.$timestamp, t0, units = "days")) - - if("dscale.member" %in% colnames(df)){ - by.ens <- df %>% - dplyr::group_by(NOAA.member, dscale.member) - }else{ - by.ens <- df %>% - dplyr::group_by(NOAA.member) - } - - interp.df.days <- by.ens %>% dplyr::do(days = seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/24)) - interp.df <- interp.df.days - - for(Var in 1:length(VarNamesStates)){ - assign(paste0("interp.df.",VarNamesStates[Var]), dplyr::do(by.ens, var = interpolate(.$days_since_t0,unlist(.[,VarNamesStates[Var]]))) %>% dplyr::rename(!!VarNamesStates[Var] := "var")) - if("dscale.member" %in% colnames(df)){ - interp.df <- dplyr::inner_join(interp.df, get(paste0("interp.df.",VarNamesStates[Var])), by = c("NOAA.member", "dscale.member")) - }else{ - interp.df <- dplyr::inner_join(interp.df, get(paste0("interp.df.",VarNamesStates[Var])), by = c("NOAA.member")) - } - } - - # converting from time difference back to timestamp - interp.df = interp.df %>% - tidyr::unnest() %>% - dplyr::mutate(timestamp = lubridate::as_datetime(t0 + days, tz = attributes(t0)$tzone)) - return(interp.df) -} - diff --git a/modules/data.atmosphere/R/solar_angle.R b/modules/data.atmosphere/R/solar_angle.R index 30a9b0a024b..06098ec3525 100644 --- a/modules/data.atmosphere/R/solar_angle.R +++ b/modules/data.atmosphere/R/solar_angle.R @@ -34,7 +34,7 @@ cos_solar_zenith_angle <- function(doy, lat, lon, dt, hr) { #' @return `numeric(1)` length of the solar day, in hours. 
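#' @note the bound on `doy` is relaxed from 366 to 367 below, presumably
#'   because callers pass fractional day-of-year values such as
#'   `yday(time) + hour/24`, which can slightly exceed 366 late on
#'   December 31 of a leap year.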
#' @export equation_of_time <- function(doy) { - stopifnot(doy <= 366) + stopifnot(doy <= 367) f <- pi / 180 * (279.5 + 0.9856 * doy) et <- (-104.7 * sin(f) + 596.2 * sin(2 * f) + 4.3 * sin(4 * f) - 429.3 * cos(f) - 2 * diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd deleted file mode 100644 index d13d8ffa128..00000000000 --- a/modules/data.atmosphere/man/download.NOAA_GEFS_downscale.Rd +++ /dev/null @@ -1,85 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/download.NOAA_GEFS_downscale.R -\name{download.NOAA_GEFS_downscale} -\alias{download.NOAA_GEFS_downscale} -\title{Downscale NOAA GEFS Weather Data} -\usage{ -download.NOAA_GEFS_downscale( - outfolder, - lat.in, - lon.in, - site_id, - start_date, - end_date, - overwrite = FALSE, - verbose = FALSE, - ... -) -} -\arguments{ -\item{outfolder}{Directory where results should be written} - -\item{site_id}{The unique ID given to each site. This is used as part of the file name.} - -\item{start_date, }{end_date Range of dates/times to be downloaded (default assumed time of day is 0:00, midnight)} - -\item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} - -\item{verbose}{logical. Print additional debug information. Passed on to functions in the netcdf4 package to provide debugging info.} - -\item{...}{Other arguments, currently ignored} - -\item{lat}{site latitude in decimal degrees} - -\item{lon}{site longitude in decimal degrees} -} -\value{ -A list of data frames is returned containing information about the data file that can be used to locate it later. Each -data frame contains information about one file. -} -\description{ -Downscale NOAA GEFS Weather Data -} -\section{Information on Units}{ - -Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downlaoded -in Kelvin. -} - -\section{NOAA_GEFS General Information}{ - -This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. A 16 day forecast is avaliable -every 6 hours. Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn -standard. -} - -\section{Data Avaliability}{ - -NOAA GEFS weather data is avaliable on a rolling 12 day basis; dates provided in "start_date" must be within this range. The end date can be any point after -that, but if the end date is beyond 16 days, only 16 days worth of forecast are recorded. Times are rounded down to the previous 6 hour forecast. NOAA -GEFS weather data isn't always posted immediately, and to compensate, this function adjusts requests made in the last two hours -back two hours (approximately the amount of time it takes to post the data) to make sure the most current forecast is used. -} - -\section{Data Save Format}{ - -Data is saved in the netcdf format to the specified directory. File names reflect the precision of the data to the given range of days. -NOAA.GEFS.willow creek.3.2018-06-08T06:00.2018-06-24T06:00.nc specifies the forecast, using ensemble nubmer 3 at willow creek on -June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. 
-} - -\examples{ -\dontrun{ - download.NOAA_GEFS(outfolder="~/Working/results", - lat.in= 45.805925, - lon.in = -90.07961, - site_id = 676) -} - -} -\references{ -https://www.ncdc.noaa.gov/crn/measurements.html -} -\author{ -Katie Zarada - modified code from Luke Dramko and Laura Puckett -} diff --git a/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd b/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd deleted file mode 100644 index c43a092340d..00000000000 --- a/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd +++ /dev/null @@ -1,37 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/downscale_ShortWave_to_hrly.R -\name{downscale_ShortWave_to_hrly} -\alias{downscale_ShortWave_to_hrly} -\title{Downscale shortwave to hourly} -\usage{ -downscale_ShortWave_to_hrly( - debiased, - time0, - time_end, - lat, - lon, - output_tz = "UTC" -) -} -\arguments{ -\item{debiased, }{data frame of variables} - -\item{time0, }{first timestep} - -\item{time_end, }{last time step} - -\item{lat, }{lat of site} - -\item{lon, }{long of site} - -\item{output_tz, }{output timezone} -} -\value{ -A dataframe of downscaled state variables -} -\description{ -Downscale shortwave to hourly -} -\author{ -Laura Puckett -} diff --git a/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd b/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd deleted file mode 100644 index 523d92ba4da..00000000000 --- a/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd +++ /dev/null @@ -1,20 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/downscale_repeat_6hr_to_hrly.R -\name{downscale_repeat_6hr_to_hrly} -\alias{downscale_repeat_6hr_to_hrly} -\title{Downscale repeat to hourly} -\usage{ -downscale_repeat_6hr_to_hrly(data.6hr) -} -\arguments{ -\item{data.6hr, }{dataframe of data to be downscaled (Longwave)} -} -\value{ -A dataframe of downscaled data -} -\description{ -Downscale repeat to hourly -} -\author{ -Laura Puckett -} diff --git a/modules/data.atmosphere/man/downscale_spline_to_hourly.Rd b/modules/data.atmosphere/man/downscale_spline_to_hourly.Rd deleted file mode 100644 index dd57682b1de..00000000000 --- a/modules/data.atmosphere/man/downscale_spline_to_hourly.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/downscale_spline_to_hourly.R -\name{downscale_spline_to_hourly} -\alias{downscale_spline_to_hourly} -\title{Downscale spline to hourly} -\usage{ -downscale_spline_to_hourly(df, VarNamesStates) -} -\arguments{ -\item{df, }{dataframe of data to be downscales} - -\item{VarNamesStates, }{names of vars that are state variables} -} -\value{ -A dataframe of downscaled state variables -} -\description{ -Downscale spline to hourly -} -\author{ -Laura Puckett -} From f732c40380f5ed52ee4b36a8215c2bf9547c804b Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 4 Jan 2021 16:05:35 -0500 Subject: [PATCH 1723/2289] addressing PR checks --- docker/depends/pecan.depends.R | 1 + .../R/downscaling_helper_functions.R | 18 ++++++------ .../man/downscale_ShortWave_to_hrly.Rd | 28 +++++++++++++++++++ .../man/downscale_repeat_6hr_to_hrly.Rd | 24 ++++++++++++++++ .../man/downscale_solar_geom.Rd | 24 ++++++++++++++++ .../man/downscale_spline_to_hrly.Rd | 24 ++++++++++++++++ 6 files changed, 111 insertions(+), 8 deletions(-) create mode 100644 modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd create mode 100644 
modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd create mode 100644 modules/data.atmosphere/man/downscale_solar_geom.Rd create mode 100644 modules/data.atmosphere/man/downscale_spline_to_hrly.Rd diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index bb6b7074193..52526af35c4 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -116,6 +116,7 @@ wanted <- c( 'tibble', 'tictoc', 'tidyr', +'tidyselect' 'tidyverse', 'tools', 'traits', diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 75f0450f522..e68d3526bb1 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -1,7 +1,8 @@ #' @title Downscale spline to hourly -#' @return A dataframe of downscaled state variables #' @param df, dataframe of data to be downscales -#' @noRd +#' @param varName, variable names to be downscaled +#' @param hr, hour to downscale to- default is 1 +#' @return A dataframe of downscaled state variables #' @author Laura Puckett #' @export #' @@ -37,7 +38,8 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ #' @param df, data frame of variables #' @param lat, lat of site #' @param lon, long of site -#' @noRd +#' @param hr, hour to downscale to- default is 1 +#' #' @return ShortWave.ds #' @author Laura Puckett #' @export @@ -91,9 +93,10 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ #' @title Downscale repeat to hourly -#' @return A dataframe of downscaled data #' @param df, dataframe of data to be downscaled (Longwave) -#' @noRd +#' @param varName, variable names to be downscaled +#' @param hr, hour to downscale to- default is 1 +#' @return A dataframe of downscaled data #' @author Laura Puckett #' @export #' @@ -144,13 +147,12 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ #' @title Calculate potential shortwave radiation -#' @return vector of potential shortwave radiation for each doy #' #' @param doy, day of year in decimal #' @param lon, longitude #' @param lat, latitude -#' @return `numeric(1)` -#' @noRd +#' @return vector of potential shortwave radiation for each doy +#' #' @author Quinn Thomas #' @export #' diff --git a/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd b/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd new file mode 100644 index 00000000000..f2da62dcd32 --- /dev/null +++ b/modules/data.atmosphere/man/downscale_ShortWave_to_hrly.Rd @@ -0,0 +1,28 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/downscaling_helper_functions.R +\name{downscale_ShortWave_to_hrly} +\alias{downscale_ShortWave_to_hrly} +\title{Downscale shortwave to hourly} +\usage{ +downscale_ShortWave_to_hrly(df, lat, lon, hr = 1) +} +\arguments{ +\item{df, }{data frame of variables} + +\item{lat, }{lat of site} + +\item{lon, }{long of site} + +\item{hr, }{hour to downscale to- default is 1} +} +\value{ +A dataframe of downscaled state variables + +ShortWave.ds +} +\description{ +Downscale shortwave to hourly +} +\author{ +Laura Puckett +} diff --git a/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd b/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd new file mode 100644 index 00000000000..47b6f87dc1f --- /dev/null +++ b/modules/data.atmosphere/man/downscale_repeat_6hr_to_hrly.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in 
R/downscaling_helper_functions.R +\name{downscale_repeat_6hr_to_hrly} +\alias{downscale_repeat_6hr_to_hrly} +\title{Downscale repeat to hourly} +\usage{ +downscale_repeat_6hr_to_hrly(df, varName, hr = 1) +} +\arguments{ +\item{df, }{dataframe of data to be downscaled (Longwave)} + +\item{varName, }{variable names to be downscaled} + +\item{hr, }{hour to downscale to- default is 1} +} +\value{ +A dataframe of downscaled data +} +\description{ +Downscale repeat to hourly +} +\author{ +Laura Puckett +} diff --git a/modules/data.atmosphere/man/downscale_solar_geom.Rd b/modules/data.atmosphere/man/downscale_solar_geom.Rd new file mode 100644 index 00000000000..25a823899cc --- /dev/null +++ b/modules/data.atmosphere/man/downscale_solar_geom.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/downscaling_helper_functions.R +\name{downscale_solar_geom} +\alias{downscale_solar_geom} +\title{Calculate potential shortwave radiation} +\usage{ +downscale_solar_geom(doy, lon, lat) +} +\arguments{ +\item{doy, }{day of year in decimal} + +\item{lon, }{longitude} + +\item{lat, }{latitude} +} +\value{ +vector of potential shortwave radiation for each doy +} +\description{ +Calculate potential shortwave radiation +} +\author{ +Quinn Thomas +} diff --git a/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd b/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd new file mode 100644 index 00000000000..8d38c0ef990 --- /dev/null +++ b/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/downscaling_helper_functions.R +\name{downscale_spline_to_hrly} +\alias{downscale_spline_to_hrly} +\title{Downscale spline to hourly} +\usage{ +downscale_spline_to_hrly(df, VarNames, hr = 1) +} +\arguments{ +\item{df, }{dataframe of data to be downscales} + +\item{hr, }{hour to downscale to- default is 1} + +\item{varName, }{variable names to be downscaled} +} +\value{ +A dataframe of downscaled state variables +} +\description{ +Downscale spline to hourly +} +\author{ +Laura Puckett +} From 504f199f9f70bb4c8d88dbde522733f648c16fed Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 4 Jan 2021 16:39:22 -0500 Subject: [PATCH 1724/2289] forgot comma --- docker/depends/pecan.depends.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 52526af35c4..e336c8199c5 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -116,7 +116,7 @@ wanted <- c( 'tibble', 'tictoc', 'tidyr', -'tidyselect' +'tidyselect', 'tidyverse', 'tools', 'traits', From 4a27df9f684ed928a5e2d562b1c549dec8e1a7cb Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 4 Jan 2021 17:00:07 -0500 Subject: [PATCH 1725/2289] updating misspelling in function documentation --- modules/data.atmosphere/R/downscaling_helper_functions.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index e68d3526bb1..bde95c00a58 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -1,6 +1,6 @@ #' @title Downscale spline to hourly #' @param df, dataframe of data to be downscales -#' @param varName, variable names to be downscaled +#' @param VarNames, variable names to be downscaled #' @param hr, hour to downscale to- default is 
1 #' @return A dataframe of downscaled state variables #' @author Laura Puckett From ef207cb68e833199152d015774cea25321a30ed2 Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 5 Jan 2021 08:51:00 -0500 Subject: [PATCH 1726/2289] adding updated rd --- modules/data.atmosphere/man/downscale_spline_to_hrly.Rd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd b/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd index 8d38c0ef990..58bac009000 100644 --- a/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd +++ b/modules/data.atmosphere/man/downscale_spline_to_hrly.Rd @@ -9,9 +9,9 @@ downscale_spline_to_hrly(df, VarNames, hr = 1) \arguments{ \item{df, }{dataframe of data to be downscales} -\item{hr, }{hour to downscale to- default is 1} +\item{VarNames, }{variable names to be downscaled} -\item{varName, }{variable names to be downscaled} +\item{hr, }{hour to downscale to- default is 1} } \value{ A dataframe of downscaled state variables From db998906fb743787a199258dc11b4244617eb5b3 Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 5 Jan 2021 09:17:00 -0500 Subject: [PATCH 1727/2289] adding global function definition --- modules/data.atmosphere/R/downscaling_helper_functions.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index bde95c00a58..36272794072 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -23,7 +23,7 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC")) for(Var in 1:length(VarNames)){ - curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y + curr_data <- stats::spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y noaa_data_interp <- cbind(noaa_data_interp, curr_data) } @@ -158,7 +158,7 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ - dt <- median(diff(doy)) * 86400 # average number of seconds in time interval + dt <- stats::median(diff(doy)) * 86400 # average number of seconds in time interval hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy ## calculate potential radiation From 04a71334890c8accf5220fe62a88df6e2ead647d Mon Sep 17 00:00:00 2001 From: kzarada Date: Tue, 5 Jan 2021 12:49:05 -0500 Subject: [PATCH 1728/2289] addressing check issues --- modules/data.atmosphere/NAMESPACE | 1 + .../R/downscaling_helper_functions.R | 23 +++++++++++-------- 2 files changed, 15 insertions(+), 9 deletions(-) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 08e0c81477a..e2baecdb60a 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -102,3 +102,4 @@ import(dplyr) import(tidyselect) importFrom(magrittr,"%>%") importFrom(rgdal,checkCRSArgs) +importFrom(rlang,.data) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 36272794072..772843963a4 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -3,6 +3,7 @@ #' @param VarNames, variable names to be 
downscaled #' @param hr, hour to downscale to- default is 1 #' @return A dataframe of downscaled state variables +#' @importFrom rlang .data #' @author Laura Puckett #' @export #' @@ -13,10 +14,10 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ # Creator: Laura Puckett, December 16 2018 # -------------------------------------- # @param: df, a dataframe of debiased 6-hourly forecasts - + time <- NULL t0 = min(df$time) df <- df %>% - dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) + dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) @@ -39,7 +40,7 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ #' @param lat, lat of site #' @param lon, long of site #' @param hr, hour to downscale to- default is 1 -#' +#' @importFrom rlang .data #' @return ShortWave.ds #' @author Laura Puckett #' @export @@ -47,13 +48,14 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ #' downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ + ## downscale shortwave to hourly t0 <- min(df$time) df <- df %>% dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>% - dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% - dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1)) + dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) %>% + dplyr::mutate(lead_var = dplyr::lead(.data$surface_downwelling_shortwave_flux_in_air, 1)) interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) @@ -81,11 +83,11 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ dplyr::mutate(hour = lubridate::hour(time)) %>% dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>% dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry - dplyr::group_by(group_6hr) %>% + dplyr::group_by(.data$group_6hr) %>% dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry dplyr::ungroup() %>% dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% - dplyr::select(time,surface_downwelling_shortwave_flux_in_air) + dplyr::select(.data$time,.data$surface_downwelling_shortwave_flux_in_air) return(ShortWave.ds) @@ -97,19 +99,22 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ #' @param varName, variable names to be downscaled #' @param hr, hour to downscale to- default is 1 #' @return A dataframe of downscaled data +#' @importFrom rlang .data #' @author Laura Puckett #' @export #' downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ + #bind variables + lead_var <- time <- NULL #Get first time point t0 <- min(df$time) df <- df %>% dplyr::select("time", all_of(varName)) %>% #Calculate time difference - dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) %>% #Shift valued back because the 6hr value represents the average over the #previous 6hr period dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) @@ -136,7 +141,7 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ } #Clean up data frame - data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% + data.hrly <- data.hrly %>% dplyr::select("time", .data$lead_var) %>% 
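# A hypothetical call for this helper (object names here are illustrative,
# not from the package): given a data frame fc of 6-hourly forecasts with a
# time column,
#   lw_hrly <- downscale_repeat_6hr_to_hrly(fc,
#                varName = "surface_downwelling_longwave_flux_in_air", hr = 1)
# repeats each 6-hour mean across the intervening hourly time steps.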
dplyr::arrange(time)
 
   names(data.hrly) <- c("time", varName)

From 56408c68c21692a4d287493a2f25063254e7b7f1 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Tue, 5 Jan 2021 14:28:57 -0500
Subject: [PATCH 1729/2289] addressing global variable problems

---
 modules/data.atmosphere/DESCRIPTION | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION
index 33617fa0d12..285a429df4f 100644
--- a/modules/data.atmosphere/DESCRIPTION
+++ b/modules/data.atmosphere/DESCRIPTION
@@ -43,6 +43,7 @@ Imports:
     REddyProc,
     reshape2,
     rgdal,
+    rlang (>= 0.2.0),
     rnoaa,
     sp,
     stringr (>= 1.1.0),
@@ -60,8 +61,7 @@ Suggests:
     foreach,
     parallel,
     progress,
-    reticulate,
-    rlang (>= 0.2.0)
+    reticulate
 Remotes:
     github::ropensci/geonames,
     github::ropensci/nneo

From 4fcf580a7a4a70be0c594afe723ed18f1bf389f3 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Tue, 5 Jan 2021 15:06:43 -0500
Subject: [PATCH 1730/2289] still addressing global var issues

---
 modules/data.atmosphere/R/GEFS_helper_functions.R        | 10 +++++-----
 .../data.atmosphere/R/downscaling_helper_functions.R     |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R
index ce70a5f2841..ca1d2f38632 100644
--- a/modules/data.atmosphere/R/GEFS_helper_functions.R
+++ b/modules/data.atmosphere/R/GEFS_helper_functions.R
@@ -187,7 +187,7 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date,
 #' @param model_name_ds Name of downscale file name
 #' @param model_name_raw Name of raw file name
 #' @param output_directory Output directory
-#'
+#' @importFrom rlang .data
 #' @return List
 #'
 #'
@@ -452,10 +452,10 @@ process_gridded_noaa_download <- function(lat_list,
 
     forecast_noaa_ens <- forecast_noaa %>%
       dplyr::filter(NOAA.member == ens) %>%
-      dplyr::filter(!is.na(air_temperature))
+      dplyr::filter(!is.na(.data$air_temperature))
 
     end_date <- forecast_noaa_ens %>%
-      dplyr::summarise(max_time = max(time))
+      dplyr::summarise(max_time = max(.data$time))
 
     results = data.frame(
       file = "", #Path to the file (added in loop below).
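
# Aside -- a minimal sketch (hypothetical `demo_max_time()`; assumes dplyr,
# magrittr's %>%, and rlang are available) of the pattern these commits apply.
# `R CMD check` reports "no visible binding for global variable" when bare
# column names appear inside dplyr verbs; this series silences the NOTE either
# with the rlang `.data` pronoun (declared once via importFrom(rlang, .data))
# or with a local NULL-binding, as in process_gridded_noaa_download() below.
demo_max_time <- function(df) {
  NOAA.member <- NULL  # NULL-binding keeps the check quiet for the bare name
  df %>%
    dplyr::group_by(NOAA.member) %>%                   # bare name, covered by the binding
    dplyr::filter(!is.na(.data$air_temperature)) %>%   # .data pronoun needs no binding
    dplyr::summarise(max_time = max(.data$time))
}
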
@@ -575,7 +575,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1
     dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity,
                                               temp = forecast_noaa_ds$air_temperature,
                                               press = forecast_noaa_ds$air_pressure)) %>%
-    dplyr::mutate(relative_humidity = relative_humidity,
+    dplyr::mutate(relative_humidity = .data$relative_humidity,
                   relative_humidity = ifelse(relative_humidity > 1, 0, relative_humidity))
 
   # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period)
@@ -615,7 +615,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1
 
   #Make sure var names are in correct order
   forecast_noaa_ds <- forecast_noaa_ds %>%
-    dplyr::select("time", tidyselect::all_of(cf_var_names), "NOAA.member")
+    dplyr::select(.data$time, tidyselect::all_of(cf_var_names), .data$NOAA.member)
 
   #Write netCDF
   write_noaa_gefs_netcdf(df = forecast_noaa_ds,

diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R
index 772843963a4..928e4664485 100644
--- a/modules/data.atmosphere/R/downscaling_helper_functions.R
+++ b/modules/data.atmosphere/R/downscaling_helper_functions.R
@@ -53,7 +53,7 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){
 
   t0 <- min(df$time)
 
   df <- df %>%
-    dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>%
+    dplyr::select(.data$time, .data$surface_downwelling_shortwave_flux_in_air) %>%
     dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) %>%
     dplyr::mutate(lead_var = dplyr::lead(.data$surface_downwelling_shortwave_flux_in_air, 1))

From de5f737ed0228e49e92665489cc29659439a9593 Mon Sep 17 00:00:00 2001
From: kzarada
Date: Tue, 5 Jan 2021 15:30:22 -0500
Subject: [PATCH 1731/2289] global variable binding

---
 modules/data.atmosphere/NAMESPACE                 | 1 -
 modules/data.atmosphere/R/GEFS_helper_functions.R | 3 +--
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE
index e2baecdb60a..3884316b385 100644
--- a/modules/data.atmosphere/NAMESPACE
+++ b/modules/data.atmosphere/NAMESPACE
@@ -99,7 +99,6 @@ export(temporal.downscale.functions)
 export(upscale_met)
 export(wide2long)
 import(dplyr)
-import(tidyselect)
 importFrom(magrittr,"%>%")
 importFrom(rgdal,checkCRSArgs)
 importFrom(rlang,.data)

diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R
index ca1d2f38632..7fea3aed2ee 100644
--- a/modules/data.atmosphere/R/GEFS_helper_functions.R
+++ b/modules/data.atmosphere/R/GEFS_helper_functions.R
@@ -517,8 +517,7 @@ process_gridded_noaa_download <- function(lat_list,
 #' @param output_file, full path to 1hr file that will be generated
 #' @param overwrite, logical stating to overwrite any existing output_file
 #' @param hr time step in hours of temporal downscaling (default = 1)
-#'
-#' @import tidyselect
+#' @importFrom rlang .data
 #' @author Quinn Thomas
 #'
 #'

From 794a184f587f38c52c694c1009d82995caffe56c Mon Sep 17 00:00:00 2001
From: kzarada
Date: Tue, 5 Jan 2021 15:55:34 -0500
Subject: [PATCH 1732/2289] binding more variables

---
 modules/data.atmosphere/R/GEFS_helper_functions.R        | 5 +++--
 modules/data.atmosphere/R/downscaling_helper_functions.R | 6 +++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R
index 7fea3aed2ee..f89a1346cc1 100644
--- 
a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -202,7 +202,8 @@ process_gridded_noaa_download <- function(lat_list, model_name_ds, model_name_raw, output_directory){ - + #binding variables + NOAA.member <- NULL extract_sites <- function(ens_index, hours_char, hours, cycle, site_id, lat_list, lon_list, working_directory){ site_length <- length(site_id) @@ -575,7 +576,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1 temp = forecast_noaa_ds$air_temperature, press = forecast_noaa_ds$air_pressure)) %>% dplyr::mutate(relative_humidity = .data$relative_humidity, - relative_humidity = ifelse(relative_humidity > 1, 0, relative_humidity)) + relative_humidity = ifelse(.data$relative_humidity > 1, 0, .data$relative_humidity)) # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period) if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){ diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 928e4664485..1fbf04182a8 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -80,13 +80,13 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ } ShortWave.ds <- data.hrly %>% - dplyr::mutate(hour = lubridate::hour(time)) %>% - dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>% + dplyr::mutate(hour = lubridate::hour(.data$time)) %>% + dplyr::mutate(doy = lubridate::yday(.data$time) + hour/(24/hr))%>% dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry dplyr::group_by(.data$group_6hr) %>% dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry dplyr::ungroup() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% + dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (.data$surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% dplyr::select(.data$time,.data$surface_downwelling_shortwave_flux_in_air) return(ShortWave.ds) From 8e8fbf95c5555ebd48b8a93b91dfe4635ef3d543 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 6 Jan 2021 09:01:19 -0500 Subject: [PATCH 1733/2289] updating description --- base/db/DESCRIPTION | 2 ++ 1 file changed, 2 insertions(+) diff --git a/base/db/DESCRIPTION b/base/db/DESCRIPTION index 11508374dfa..517200e9cc4 100644 --- a/base/db/DESCRIPTION +++ b/base/db/DESCRIPTION @@ -50,12 +50,14 @@ Imports: PEcAn.utils, dbplyr, dplyr, + fs, glue, lubridate, magrittr, ncdf4, purrr, rlang, + R.utils, tibble, tidyr, udunits2 From 52dcfb3e6d27a80de52ad287bd4913f0a6e06561 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 6 Jan 2021 09:49:32 -0500 Subject: [PATCH 1734/2289] updating depends --- docker/depends/pecan.depends.R | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 0ed1678a53c..a1aac19212b 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -101,6 +101,7 @@ wanted <- c( 'RPostgreSQL', 'Rpreles', 'RSQLite', +'R.utils', 'sf', 'SimilarityMeasures', 'sirt', From 7190f3c4d2d025ea261bddd77e2244449fabafb7 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 6 Jan 2021 12:51:40 -0500 Subject: [PATCH 1735/2289] description --- base/db/DESCRIPTION 
| 2 +- .../WillowCreek/.nfs000000000205e03300000008 | 329 ++++++++++++++++++ 2 files changed, 330 insertions(+), 1 deletion(-) create mode 100644 modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 diff --git a/base/db/DESCRIPTION b/base/db/DESCRIPTION index 517200e9cc4..20c8f87e8a8 100644 --- a/base/db/DESCRIPTION +++ b/base/db/DESCRIPTION @@ -57,7 +57,6 @@ Imports: ncdf4, purrr, rlang, - R.utils, tibble, tidyr, udunits2 @@ -65,6 +64,7 @@ Suggests: RPostgreSQL, RPostgres, RSQLite, + R.utils, bit64, data.table, here, diff --git a/modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 b/modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 new file mode 100644 index 00000000000..af07b3b03e1 --- /dev/null +++ b/modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 @@ -0,0 +1,329 @@ +# ---------------------------------------------------------------------- +#------------------------------------------ Load required libraries----- +# ---------------------------------------------------------------------- +library("PEcAn.all") +library("PEcAn.utils") +library("RCurl") +library("REddyProc") +library("tidyverse") +library("furrr") +library("R.utils") +library("dynutils") +plan(multisession) + + +# ---------------------------------------------------------------------------------------------- +#------------------------------------------ That's all we need xml path and the out folder ----- +# ---------------------------------------------------------------------------------------------- + +outputPath <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output/NoData/" +nodata <- TRUE +restart <- FALSE +days.obs <- 1 #how many of observed data to include -- not including today +setwd(outputPath) + +c( + 'Utils.R', + 'download_WCr.R', + "gapfill_WCr.R", + 'prep.data.assim.R' +) %>% walk( ~ source( + system.file("WillowCreek", + .x, + package = "PEcAn.assim.sequential") +)) + + +#------------------------------------------------------------------------------------------------ +#------------------------------------------ Preparing the pecan xml ----------------------------- +#------------------------------------------------------------------------------------------------ +#--------------------------- Finding old sims + + +setwd("/projectnb/dietzelab/kzarada/US_WCr_SDA_output/NoData/") + +#reading xml +settings <- read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/nodata.xml") + +#connecting to DB +con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + + + +# all.previous.sims <- list.dirs(outputPath, recursive = F) +# if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { +# +# tryCatch({ +# # Looking through all the old simulations and find the most recent +# all.previous.sims <- all.previous.sims %>% +# map(~ list.files(path = file.path(.x, "SDA"))) %>% +# setNames(all.previous.sims) %>% +# discard( ~ !"sda.output.Rdata" %in% .x) # I'm throwing out the ones that they did not have a SDA output +# +# last.sim <- +# names(all.previous.sims) %>% +# map_chr( ~ strsplit(.x, "_")[[1]][5]) %>% +# map_dfr(~ db.query( +# query = paste("SELECT * FROM workflows WHERE id =", .x), +# con = con +# ) %>% +# mutate(ID=.x)) %>% +# mutate(start_date = as.Date(start_date)) %>% +# arrange(desc(start_date), desc(ID)) %>% +# head(1) +# # pulling the date and the path to the last SDA +# restart.path <-grep(last.sim$ID, names(all.previous.sims), value = T) +# sda.start <- last.sim$start_date + lubridate::days(1) +# 
}, +# error = function(e) { +# restart.path <- NULL +# sda.start <- Sys.Date() - 1 +# PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) +# }) +# +# # if there was no older sims +# if (is.na(sda.start)) +# sda.start <- Sys.Date() - 9 +# } +sda.start <- Sys.Date() +sda.end <- sda.start + lubridate::days(3) +#----------------------------------------------------------------------------------------------- +#------------------------------------------ Download met and flux ------------------------------ +#----------------------------------------------------------------------------------------------- + + +# Finding the right end and start date +met.start <- sda.start - lubridate::days(2) +met.end <- met.start + lubridate::days(16) + + +#pad Observed Data to match met data + +date <- + seq( + from = lubridate::with_tz(as.POSIXct(sda.start, format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), + to = lubridate::with_tz(as.POSIXct(sda.end, format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), + by = "1 hour" + ) + +pad.prep <- as.data.frame(cbind(Date = as.character(date), means = rep("NA", length(date)), covs = rep("NA", length(date)))) %>% + dynutils::tibble_as_list() + +names(pad.prep) <-date + + +prep.data = pad.prep + + + +obs.mean <- prep.data %>% + purrr::map('means') %>% + setNames(names(prep.data)) +obs.cov <- prep.data %>% purrr::map('covs') %>% setNames(names(prep.data)) + +if (nodata) { + obs.mean <- obs.mean %>% purrr::map(function(x) + return(NA)) + obs.cov <- obs.cov %>% purrr::map(function(x) + return(NA)) +} + + +#----------------------------------------------------------------------------------------------- +#------------------------------------------ Fixing the settings -------------------------------- +#----------------------------------------------------------------------------------------------- +#unlink existing IC files +sapply(paste0("/projectnb/dietzelabe/pecan.data/dbfiles/IC_site_0-676_", 1:100, ".nc"), unlink) +#Using the found dates to run - this will help to download mets +settings$run$start.date <- as.character(met.start) +settings$run$end.date <- as.character(met.end) +settings$run$site$met.start <- as.character(met.start) +settings$run$site$met.end <- as.character(met.end) +#info +settings$info$date <- paste0(format(Sys.time(), "%Y/%m/%d %H:%M:%S"), " +0000") +# -------------------------------------------------------------------------------------------------- +#---------------------------------------------- PEcAn Workflow ------------------------------------- +# -------------------------------------------------------------------------------------------------- +#Update/fix/check settings. 
Will only run the first time it's called, unless force=TRUE +settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) +setwd(settings$outdir) + + +#Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") +# start from scratch if no continue is passed in +statusFile <- file.path(settings$outdir, "STATUS") +if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile)) { + file.remove(statusFile) +} +# Do conversions +settings <- PEcAn.workflow::do_conversions(settings, T, T, T) + +# Query the trait database for data and priors +if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings(settings, outputfile = 'pecan.TRAIT.xml') + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, 'pecan.TRAIT.xml'))) { + settings <- + PEcAn.settings::read.settings(file.path(settings$outdir, 'pecan.TRAIT.xml')) +} +# Run the PEcAn meta.analysis +if (!is.null(settings$meta.analysis)) { + if (PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } +} +#sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingspace$parameters$method) +# Setting dates in assimilation tags - This will help with preprocess split in SDA code +settings$state.data.assimilation$start.date <-as.character(first(names(obs.mean))) +settings$state.data.assimilation$end.date <-as.character(last(names(obs.mean))) + +#- lubridate::hms("06:00:00") + +# -------------------------------------------------------------------------------------------------- +#--------------------------------- Restart ------------------------------------- +# -------------------------------------------------------------------------------------------------- + +if(restart == TRUE){ + if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) + + #Update the SDA Output to just have last time step + temp<- new.env() + load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) + temp <- as.list(temp) + + #we want ANALYSIS, FORECAST, and enkf.parms to match up with how many days obs data we have + # +24 because it's hourly now and we want the next day as the start + if(length(temp$ANALYSIS) > 1){ + + for(i in 1:days.obs + 1){ + temp$ANALYSIS[[i]] <- temp$ANALYSIS[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$ANALYSIS))){ + temp$ANALYSIS[[i]] <- NULL + } + + for(i in 1:days.obs + 1){ + temp$FORECAST[[i]] <- temp$FORECAST[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$FORECAST))){ + temp$FORECAST[[i]] <- NULL + } + + for(i in 1:days.obs + 1){ + temp$enkf.params[[i]] <- temp$enkf.params[[i + 24]] + } + for(i in rev((days.obs + 2):length(temp$enkf.params))){ + temp$enkf.params[[i]] <- NULL + } + + } + temp$t = 1 + + #change inputs path to match sampling met paths + + for(i in 1: length(temp$inputs$ids)){ + + temp$inputs$samples[i] <- settings$run$inputs$met$path[temp$inputs$ids[i]] + + } + + temp1<- new.env() + list2env(temp, envir = temp1) + save(list = c("ANALYSIS", "enkf.params", "ensemble.id", "ensemble.samples", 'inputs', 'new.params', 'new.state', 'run.id', 'site.locs', 't', 'Viz.output', 'X'), + envir = temp1, + file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) + + + + temp.out <- new.env() + load(file.path(restart.path, 
"SDA", 'outconfig.Rdata'), envir = temp.out) + temp.out <- as.list(temp.out) + temp.out$outconfig$samples <- NULL + + temp.out1 <- new.env() + list2env(temp.out, envir = temp.out1) + save(list = c('outconfig'), + envir = temp.out1, + file = file.path(settings$outdir, "SDA", "outconfig.Rdata")) + + + + #copy over run and out folders + + if(!dir.exists("run")) dir.create("run",showWarnings = F) + + files <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.clim") + readfiles <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "README.txt") + + newfiles <- gsub(pattern = restart.path, settings$outdir, files) + readnewfiles <- gsub(pattern = restart.path, settings$outdir, readfiles) + + rundirs <- gsub(pattern = "/sipnet.clim", "", files) + rundirs <- gsub(pattern = restart.path, settings$outdir, rundirs) + for(i in 1 : length(rundirs)){ + dir.create(rundirs[i]) + file.copy(from = files[i], to = newfiles[i]) + file.copy(from = readfiles[i], to = readnewfiles[i])} + file.copy(from = paste0(restart.path, '/run/runs.txt'), to = paste0(settings$outdir,'/run/runs.txt' )) + + if(!dir.exists("out")) dir.create("out",showWarnings = F) + + files <- list.files(path = file.path(restart.path, "out/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.out") + newfiles <- gsub(pattern = restart.path, settings$outdir, files) + outdirs <- gsub(pattern = "/sipnet.out", "", files) + outdirs <- gsub(pattern = restart.path, settings$outdir, outdirs) + for(i in 1 : length(outdirs)){ + dir.create(outdirs[i]) + file.copy(from = files[i], to = newfiles[i])} + +} + +# -------------------------------------------------------------------------------------------------- +#--------------------------------- Run state data assimilation ------------------------------------- +# -------------------------------------------------------------------------------------------------- + + +settings$host$name <- "geo.bu.edu" +settings$host$user <- 'kzarada' +settings$host$folder <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output" +settings$host$job.sh <- "module load udunits/2.2.26 R/3.5.1" +settings$host$qsub <- 'qsub -l h_rt=24:00:00 -V -N @NAME@ -o @STDOUT@ -e @STDERR@' +settings$host$qsub.jobid <- 'Your job ([0-9]+) .*' +settings$host$qstat <- 'qstat -j @JOBID@ || echo DONE' +settings$host$tunnel <- '/tmp/tunnel' +settings$model$binary = "/usr2/postdoc/istfer/SIPNET/1023/sipnet" + + +unlink(c('run','out'), recursive = T) + +#debugonce(PEcAn.assim.sequential::sda.enkf) +if ('state.data.assimilation' %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + PEcAn.assim.sequential::sda.enkf( + settings, + restart=restart, + Q=0, + obs.mean = obs.mean, + obs.cov = obs.cov, + control = list( + trace = TRUE, + interactivePlot =FALSE, + TimeseriesPlot =TRUE, + BiasPlot =FALSE, + debug = FALSE, + pause=FALSE + ) + ) + + PEcAn.utils::status.end() + } +} + + From f263d9e24b9ed9e028e2c06d9f616d9b2b0053c1 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 6 Jan 2021 13:28:59 -0500 Subject: [PATCH 1736/2289] changing R.utils location per check error --- docker/depends/pecan.depends.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 68a759bd903..532d7e28fbb 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -87,6 +87,7 @@ wanted <- c( 
'progress', 'purrr', 'pwr', +'R.utils', 'randtoolbox', 'raster', 'rcrossref', @@ -105,7 +106,6 @@ wanted <- c( 'RPostgreSQL', 'Rpreles', 'RSQLite', -'R.utils', 'sf', 'SimilarityMeasures', 'sirt', From 0475250cbfcd9347266c810917bd5ceec9a09ad6 Mon Sep 17 00:00:00 2001 From: Katherine Zarada <31037847+kzarada@users.noreply.github.com> Date: Wed, 6 Jan 2021 13:53:58 -0500 Subject: [PATCH 1737/2289] Delete .nfs000000000205e03300000008 --- .../WillowCreek/.nfs000000000205e03300000008 | 329 ------------------ 1 file changed, 329 deletions(-) delete mode 100644 modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 diff --git a/modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 b/modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 deleted file mode 100644 index af07b3b03e1..00000000000 --- a/modules/assim.sequential/inst/WillowCreek/.nfs000000000205e03300000008 +++ /dev/null @@ -1,329 +0,0 @@ -# ---------------------------------------------------------------------- -#------------------------------------------ Load required libraries----- -# ---------------------------------------------------------------------- -library("PEcAn.all") -library("PEcAn.utils") -library("RCurl") -library("REddyProc") -library("tidyverse") -library("furrr") -library("R.utils") -library("dynutils") -plan(multisession) - - -# ---------------------------------------------------------------------------------------------- -#------------------------------------------ That's all we need xml path and the out folder ----- -# ---------------------------------------------------------------------------------------------- - -outputPath <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output/NoData/" -nodata <- TRUE -restart <- FALSE -days.obs <- 1 #how many of observed data to include -- not including today -setwd(outputPath) - -c( - 'Utils.R', - 'download_WCr.R', - "gapfill_WCr.R", - 'prep.data.assim.R' -) %>% walk( ~ source( - system.file("WillowCreek", - .x, - package = "PEcAn.assim.sequential") -)) - - -#------------------------------------------------------------------------------------------------ -#------------------------------------------ Preparing the pecan xml ----------------------------- -#------------------------------------------------------------------------------------------------ -#--------------------------- Finding old sims - - -setwd("/projectnb/dietzelab/kzarada/US_WCr_SDA_output/NoData/") - -#reading xml -settings <- read.settings("/fs/data3/kzarada/pecan/modules/assim.sequential/inst/WillowCreek/nodata.xml") - -#connecting to DB -con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - - - -# all.previous.sims <- list.dirs(outputPath, recursive = F) -# if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { -# -# tryCatch({ -# # Looking through all the old simulations and find the most recent -# all.previous.sims <- all.previous.sims %>% -# map(~ list.files(path = file.path(.x, "SDA"))) %>% -# setNames(all.previous.sims) %>% -# discard( ~ !"sda.output.Rdata" %in% .x) # I'm throwing out the ones that they did not have a SDA output -# -# last.sim <- -# names(all.previous.sims) %>% -# map_chr( ~ strsplit(.x, "_")[[1]][5]) %>% -# map_dfr(~ db.query( -# query = paste("SELECT * FROM workflows WHERE id =", .x), -# con = con -# ) %>% -# mutate(ID=.x)) %>% -# mutate(start_date = as.Date(start_date)) %>% -# arrange(desc(start_date), desc(ID)) %>% -# head(1) -# # pulling the date and the path to the last SDA -# restart.path <-grep(last.sim$ID, 
names(all.previous.sims), value = T) -# sda.start <- last.sim$start_date + lubridate::days(1) -# }, -# error = function(e) { -# restart.path <- NULL -# sda.start <- Sys.Date() - 1 -# PEcAn.logger::logger.warn(paste0("There was a problem with finding the last successfull SDA.",conditionMessage(e))) -# }) -# -# # if there was no older sims -# if (is.na(sda.start)) -# sda.start <- Sys.Date() - 9 -# } -sda.start <- Sys.Date() -sda.end <- sda.start + lubridate::days(3) -#----------------------------------------------------------------------------------------------- -#------------------------------------------ Download met and flux ------------------------------ -#----------------------------------------------------------------------------------------------- - - -# Finding the right end and start date -met.start <- sda.start - lubridate::days(2) -met.end <- met.start + lubridate::days(16) - - -#pad Observed Data to match met data - -date <- - seq( - from = lubridate::with_tz(as.POSIXct(sda.start, format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), - to = lubridate::with_tz(as.POSIXct(sda.end, format = "%Y-%m-%d %H:%M:%S"), tz = "UTC"), - by = "1 hour" - ) - -pad.prep <- as.data.frame(cbind(Date = as.character(date), means = rep("NA", length(date)), covs = rep("NA", length(date)))) %>% - dynutils::tibble_as_list() - -names(pad.prep) <-date - - -prep.data = pad.prep - - - -obs.mean <- prep.data %>% - purrr::map('means') %>% - setNames(names(prep.data)) -obs.cov <- prep.data %>% purrr::map('covs') %>% setNames(names(prep.data)) - -if (nodata) { - obs.mean <- obs.mean %>% purrr::map(function(x) - return(NA)) - obs.cov <- obs.cov %>% purrr::map(function(x) - return(NA)) -} - - -#----------------------------------------------------------------------------------------------- -#------------------------------------------ Fixing the settings -------------------------------- -#----------------------------------------------------------------------------------------------- -#unlink existing IC files -sapply(paste0("/projectnb/dietzelabe/pecan.data/dbfiles/IC_site_0-676_", 1:100, ".nc"), unlink) -#Using the found dates to run - this will help to download mets -settings$run$start.date <- as.character(met.start) -settings$run$end.date <- as.character(met.end) -settings$run$site$met.start <- as.character(met.start) -settings$run$site$met.end <- as.character(met.end) -#info -settings$info$date <- paste0(format(Sys.time(), "%Y/%m/%d %H:%M:%S"), " +0000") -# -------------------------------------------------------------------------------------------------- -#---------------------------------------------- PEcAn Workflow ------------------------------------- -# -------------------------------------------------------------------------------------------------- -#Update/fix/check settings. 
Will only run the first time it's called, unless force=TRUE -settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) -setwd(settings$outdir) - - -#Write pecan.CHECKED.xml -PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") -# start from scratch if no continue is passed in -statusFile <- file.path(settings$outdir, "STATUS") -if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile)) { - file.remove(statusFile) -} -# Do conversions -settings <- PEcAn.workflow::do_conversions(settings, T, T, T) - -# Query the trait database for data and priors -if (PEcAn.utils::status.check("TRAIT") == 0) { - PEcAn.utils::status.start("TRAIT") - settings <- PEcAn.workflow::runModule.get.trait.data(settings) - PEcAn.settings::write.settings(settings, outputfile = 'pecan.TRAIT.xml') - PEcAn.utils::status.end() -} else if (file.exists(file.path(settings$outdir, 'pecan.TRAIT.xml'))) { - settings <- - PEcAn.settings::read.settings(file.path(settings$outdir, 'pecan.TRAIT.xml')) -} -# Run the PEcAn meta.analysis -if (!is.null(settings$meta.analysis)) { - if (PEcAn.utils::status.check("META") == 0) { - PEcAn.utils::status.start("META") - PEcAn.MA::runModule.run.meta.analysis(settings) - PEcAn.utils::status.end() - } -} -#sample from parameters used for both sensitivity analysis and Ens -get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingspace$parameters$method) -# Setting dates in assimilation tags - This will help with preprocess split in SDA code -settings$state.data.assimilation$start.date <-as.character(first(names(obs.mean))) -settings$state.data.assimilation$end.date <-as.character(last(names(obs.mean))) - -#- lubridate::hms("06:00:00") - -# -------------------------------------------------------------------------------------------------- -#--------------------------------- Restart ------------------------------------- -# -------------------------------------------------------------------------------------------------- - -if(restart == TRUE){ - if(!dir.exists("SDA")) dir.create("SDA",showWarnings = F) - - #Update the SDA Output to just have last time step - temp<- new.env() - load(file.path(restart.path, "SDA", "sda.output.Rdata"), envir = temp) - temp <- as.list(temp) - - #we want ANALYSIS, FORECAST, and enkf.parms to match up with how many days obs data we have - # +24 because it's hourly now and we want the next day as the start - if(length(temp$ANALYSIS) > 1){ - - for(i in 1:days.obs + 1){ - temp$ANALYSIS[[i]] <- temp$ANALYSIS[[i + 24]] - } - for(i in rev((days.obs + 2):length(temp$ANALYSIS))){ - temp$ANALYSIS[[i]] <- NULL - } - - for(i in 1:days.obs + 1){ - temp$FORECAST[[i]] <- temp$FORECAST[[i + 24]] - } - for(i in rev((days.obs + 2):length(temp$FORECAST))){ - temp$FORECAST[[i]] <- NULL - } - - for(i in 1:days.obs + 1){ - temp$enkf.params[[i]] <- temp$enkf.params[[i + 24]] - } - for(i in rev((days.obs + 2):length(temp$enkf.params))){ - temp$enkf.params[[i]] <- NULL - } - - } - temp$t = 1 - - #change inputs path to match sampling met paths - - for(i in 1: length(temp$inputs$ids)){ - - temp$inputs$samples[i] <- settings$run$inputs$met$path[temp$inputs$ids[i]] - - } - - temp1<- new.env() - list2env(temp, envir = temp1) - save(list = c("ANALYSIS", "enkf.params", "ensemble.id", "ensemble.samples", 'inputs', 'new.params', 'new.state', 'run.id', 'site.locs', 't', 'Viz.output', 'X'), - envir = temp1, - file = file.path(settings$outdir, "SDA", "sda.output.Rdata")) - - - - temp.out <- new.env() - load(file.path(restart.path, 
"SDA", 'outconfig.Rdata'), envir = temp.out) - temp.out <- as.list(temp.out) - temp.out$outconfig$samples <- NULL - - temp.out1 <- new.env() - list2env(temp.out, envir = temp.out1) - save(list = c('outconfig'), - envir = temp.out1, - file = file.path(settings$outdir, "SDA", "outconfig.Rdata")) - - - - #copy over run and out folders - - if(!dir.exists("run")) dir.create("run",showWarnings = F) - - files <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.clim") - readfiles <- list.files(path = file.path(restart.path, "run/"), full.names = T, recursive = T, include.dirs = T, pattern = "README.txt") - - newfiles <- gsub(pattern = restart.path, settings$outdir, files) - readnewfiles <- gsub(pattern = restart.path, settings$outdir, readfiles) - - rundirs <- gsub(pattern = "/sipnet.clim", "", files) - rundirs <- gsub(pattern = restart.path, settings$outdir, rundirs) - for(i in 1 : length(rundirs)){ - dir.create(rundirs[i]) - file.copy(from = files[i], to = newfiles[i]) - file.copy(from = readfiles[i], to = readnewfiles[i])} - file.copy(from = paste0(restart.path, '/run/runs.txt'), to = paste0(settings$outdir,'/run/runs.txt' )) - - if(!dir.exists("out")) dir.create("out",showWarnings = F) - - files <- list.files(path = file.path(restart.path, "out/"), full.names = T, recursive = T, include.dirs = T, pattern = "sipnet.out") - newfiles <- gsub(pattern = restart.path, settings$outdir, files) - outdirs <- gsub(pattern = "/sipnet.out", "", files) - outdirs <- gsub(pattern = restart.path, settings$outdir, outdirs) - for(i in 1 : length(outdirs)){ - dir.create(outdirs[i]) - file.copy(from = files[i], to = newfiles[i])} - -} - -# -------------------------------------------------------------------------------------------------- -#--------------------------------- Run state data assimilation ------------------------------------- -# -------------------------------------------------------------------------------------------------- - - -settings$host$name <- "geo.bu.edu" -settings$host$user <- 'kzarada' -settings$host$folder <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output" -settings$host$job.sh <- "module load udunits/2.2.26 R/3.5.1" -settings$host$qsub <- 'qsub -l h_rt=24:00:00 -V -N @NAME@ -o @STDOUT@ -e @STDERR@' -settings$host$qsub.jobid <- 'Your job ([0-9]+) .*' -settings$host$qstat <- 'qstat -j @JOBID@ || echo DONE' -settings$host$tunnel <- '/tmp/tunnel' -settings$model$binary = "/usr2/postdoc/istfer/SIPNET/1023/sipnet" - - -unlink(c('run','out'), recursive = T) - -#debugonce(PEcAn.assim.sequential::sda.enkf) -if ('state.data.assimilation' %in% names(settings)) { - if (PEcAn.utils::status.check("SDA") == 0) { - PEcAn.utils::status.start("SDA") - PEcAn.assim.sequential::sda.enkf( - settings, - restart=restart, - Q=0, - obs.mean = obs.mean, - obs.cov = obs.cov, - control = list( - trace = TRUE, - interactivePlot =FALSE, - TimeseriesPlot =TRUE, - BiasPlot =FALSE, - debug = FALSE, - pause=FALSE - ) - ) - - PEcAn.utils::status.end() - } -} - - From 7e1c9c29813616631939ea51fa4634235cd6cd44 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 6 Jan 2021 14:29:34 -0500 Subject: [PATCH 1738/2289] changing to imports --- base/db/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/db/DESCRIPTION b/base/db/DESCRIPTION index 20c8f87e8a8..7c15e77ec1a 100644 --- a/base/db/DESCRIPTION +++ b/base/db/DESCRIPTION @@ -56,6 +56,7 @@ Imports: magrittr, ncdf4, purrr, + R.utils, rlang, tibble, tidyr, @@ -64,7 +65,6 @@ Suggests: 
RPostgreSQL, RPostgres, RSQLite, - R.utils, bit64, data.table, here, From 9d02f8724487c2d20736115050f0e7983acaa970 Mon Sep 17 00:00:00 2001 From: kzarada Date: Wed, 13 Jan 2021 15:28:00 -0500 Subject: [PATCH 1739/2289] changing back. had to change to get gefs to work with leap year --- modules/data.atmosphere/R/solar_angle.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/solar_angle.R b/modules/data.atmosphere/R/solar_angle.R index 06098ec3525..30a9b0a024b 100644 --- a/modules/data.atmosphere/R/solar_angle.R +++ b/modules/data.atmosphere/R/solar_angle.R @@ -34,7 +34,7 @@ cos_solar_zenith_angle <- function(doy, lat, lon, dt, hr) { #' @return `numeric(1)` length of the solar day, in hours. #' @export equation_of_time <- function(doy) { - stopifnot(doy <= 367) + stopifnot(doy <= 366) f <- pi / 180 * (279.5 + 0.9856 * doy) et <- (-104.7 * sin(f) + 596.2 * sin(2 * f) + 4.3 * sin(4 * f) - 429.3 * cos(f) - 2 * From 833f365a6713eb0affe43e6a43c884fcd97500fc Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 09:07:10 -0500 Subject: [PATCH 1740/2289] removing rnoaa --- modules/data.atmosphere/DESCRIPTION | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 285a429df4f..988c58a9db9 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -44,7 +44,6 @@ Imports: reshape2, rgdal, rlang (>= 0.2.0), - rnoaa, sp, stringr (>= 1.1.0), testthat (>= 2.0.0), From 3c199e6122df6311fc7d7893eb326b9b6b6aab7a Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 11:05:33 -0500 Subject: [PATCH 1741/2289] updating depends --- docker/depends/pecan.depends.R | 1 - 1 file changed, 1 deletion(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 373352c140f..0232b1b4bb0 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -101,7 +101,6 @@ wanted <- c( 'rjags', 'rjson', 'rlang', -'rnoaa', 'RPostgres', 'RPostgreSQL', 'Rpreles', From e8442afa1c3a65c1e67f8a4e225536e11ae6154b Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 12:26:09 -0500 Subject: [PATCH 1742/2289] updating depends --- docker/depends/pecan.depends.R | 1 - modules/data.atmosphere/DESCRIPTION | 1 - modules/data.atmosphere/NAMESPACE | 1 + modules/data.atmosphere/R/GEFS_helper_functions.R | 1 + 4 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 0232b1b4bb0..1f9aea241af 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -116,7 +116,6 @@ wanted <- c( 'tibble', 'tictoc', 'tidyr', -'tidyselect', 'tidyverse', 'tools', 'traits', diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 988c58a9db9..6fddb4710bc 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -49,7 +49,6 @@ Imports: testthat (>= 2.0.0), tibble, tidyr, - tidyselect, truncnorm, udunits2 (>= 0.11), XML (>= 3.98-1.4), diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 3884316b385..e2baecdb60a 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -99,6 +99,7 @@ export(temporal.downscale.functions) export(upscale_met) export(wide2long) import(dplyr) +import(tidyselect) importFrom(magrittr,"%>%") importFrom(rgdal,checkCRSArgs) importFrom(rlang,.data) diff --git 
a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index f89a1346cc1..793ca3aab61 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -519,6 +519,7 @@ process_gridded_noaa_download <- function(lat_list, #' @param overwrite, logical stating to overwrite any existing output_file #' @param hr time step in hours of temporal downscaling (default = 1) #' @importFrom rlang .data +#' @import tidyselect #' @author Quinn Thomas #' #' From edeaba2c3a8ce4a14b6525179fa5155f55b2c088 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 12:42:18 -0500 Subject: [PATCH 1743/2289] adding tidyselect to suggests --- modules/data.atmosphere/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 6fddb4710bc..20c385bbcf3 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -59,6 +59,7 @@ Suggests: foreach, parallel, progress, + tidyselect, reticulate Remotes: github::ropensci/geonames, From 3d7ef78a27741fcac32512eb97f9691379873a76 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 13:06:14 -0500 Subject: [PATCH 1744/2289] take 2 --- docker/depends/pecan.depends.R | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 1f9aea241af..0232b1b4bb0 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -116,6 +116,7 @@ wanted <- c( 'tibble', 'tictoc', 'tidyr', +'tidyselect', 'tidyverse', 'tools', 'traits', From c5edbfc014df6bb148c8ff30ade8183256d56aeb Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 13:28:48 -0500 Subject: [PATCH 1745/2289] changing to tidyverse to keep package dependencies lower --- modules/data.atmosphere/DESCRIPTION | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index 20c385bbcf3..e1ab0d0c0b2 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -22,7 +22,6 @@ Imports: data.table, dplyr, geonames (> 0.998), - ggplot2, glue, httr, jsonlite, @@ -45,10 +44,9 @@ Imports: rgdal, rlang (>= 0.2.0), sp, - stringr (>= 1.1.0), testthat (>= 2.0.0), - tibble, - tidyr, + tidyverse, + tidyselect, truncnorm, udunits2 (>= 0.11), XML (>= 3.98-1.4), @@ -59,7 +57,6 @@ Suggests: foreach, parallel, progress, - tidyselect, reticulate Remotes: github::ropensci/geonames, From 46877a126459eddf0514484dd00bd976cab151a1 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 13:49:54 -0500 Subject: [PATCH 1746/2289] jk it didnt like tidyverse --- modules/data.atmosphere/DESCRIPTION | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION index e1ab0d0c0b2..988c58a9db9 100644 --- a/modules/data.atmosphere/DESCRIPTION +++ b/modules/data.atmosphere/DESCRIPTION @@ -22,6 +22,7 @@ Imports: data.table, dplyr, geonames (> 0.998), + ggplot2, glue, httr, jsonlite, @@ -44,8 +45,10 @@ Imports: rgdal, rlang (>= 0.2.0), sp, + stringr (>= 1.1.0), testthat (>= 2.0.0), - tidyverse, + tibble, + tidyr, tidyselect, truncnorm, udunits2 (>= 0.11), From 1a0e51ba3c3cf427f9cd499e2297879272c17709 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 25 Jan 2021 14:37:06 -0500 Subject: [PATCH 1747/2289] updating logs to try to pass check per mike's 
suggestions

---
 modules/data.atmosphere/tests/Rcheck_reference.log | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log
index b0445673eaf..49fe2064cc4 100644
--- a/modules/data.atmosphere/tests/Rcheck_reference.log
+++ b/modules/data.atmosphere/tests/Rcheck_reference.log
@@ -9,7 +9,7 @@
 * package encoding: UTF-8
 * checking package namespace information ... OK
 * checking package dependencies ... NOTE
-Imports includes 36 non-default packages.
+Imports includes 37 non-default packages.
 Importing from so many packages makes the package vulnerable to any of
 them becoming unavailable.  Move as many as possible to Suggests and
 use conditionally.

From be8d62e1acbe4e0ea24336f31807eed70da21af3 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Wed, 27 Jan 2021 14:38:26 -0700
Subject: [PATCH 1748/2289] deprecating mstmip_vars in favor of standard_vars

---
 base/utils/R/utils.R | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R
index 95f4302e331..7dfdcb70100 100644
--- a/base/utils/R/utils.R
+++ b/base/utils/R/utils.R
@@ -27,21 +27,18 @@
 ##' @return ncvar based on MstMIP definition
 ##' @author Rob Kooper
 mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) {
-  nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ]
+  nc_var <- PEcAn.utils::standard_vars[PEcAn.utils::standard_vars$Variable.Name == name, ]
   dims <- list()
-
+
   if (nrow(nc_var) == 0) {
-    nc_var <- PEcAn.utils::mstmip_local[PEcAn.utils::mstmip_local$Variable.Name == name, ]
-    if (nrow(nc_var) == 0) {
-      if (!silent) {
-        PEcAn.logger::logger.info("Don't know about variable", name, " in mstmip_vars in PEcAn.utils")
-      }
-      if (is.na(time)) {
-        time <- ncdf4::ncdim_def(name = "time", units = "days since 1900-01-01 00:00:00", 
-                          vals = 1:365, calendar = "standard", unlim = TRUE)
-      }
-      return(ncdf4::ncvar_def(name, "", list(time), -999, name))
+    if (!silent) {
+      PEcAn.logger::logger.info("Don't know about variable", name, " in standard_vars in PEcAn.utils")
+    }
+    if (is.na(time)) {
+      time <- ncdf4::ncdim_def(name = "time", units = "days since 1900-01-01 00:00:00", 
+                        vals = 1:365, calendar = "standard", unlim = TRUE)
     }
+    return(ncdf4::ncvar_def(name, "", list(time), -999, name))
   }
 
   for (i in 1:4) {

From e8231ba3a956ebd48b8a93b91dfe4635ef3d543 Mon Sep 17 00:00:00 2001
From: David LeBauer
Date: Wed, 27 Jan 2021 15:53:23 -0700
Subject: [PATCH 1750/2289] finished replacing mstmip_vars with standard_vars

---
 base/utils/inst/get_mstmip_vars.R             | 20 -------------------
 .../ed/tests/testthat/test.model2netcdf.ED2.R |  8 ++++----
 modules/data.atmosphere/R/load.cfmet.R        |  4 ++--
 3 files changed, 6 insertions(+), 26 deletions(-)
 delete mode 100644 base/utils/inst/get_mstmip_vars.R

diff --git a/base/utils/inst/get_mstmip_vars.R b/base/utils/inst/get_mstmip_vars.R
deleted file mode 100644
index 5c1bc89b0ac..00000000000
--- a/base/utils/inst/get_mstmip_vars.R
+++ /dev/null
@@ -1,20 +0,0 @@
-ncvar_def.pecan <- function(name, units, longname,
-                            dim = list(lon, lat, t),
-                            missval = -999, prec = "float"){
-  ans <- ncvar_def(name = name, units = units, dim = dim,
-                   missval = missval, prec = "float")
-  return(ans)
-}
-
-make.ncdf_vars <- function(vars = c("LAI")){
-  library(data.table)
-  data(mstmip_vars, package = "PEcAn.utils")
-  mstmip <- data.table(mstmip_vars)
-  mstmip[Variable.Name %in% vars,
-         list(name = 
Variable.Name, units = Units, - longname = Long.name)] - - with(mstmip_variables, - (Variable.Name, function(x) - names(lapply(mstmip_variables$Variable.Name, identity)) -} diff --git a/models/ed/tests/testthat/test.model2netcdf.ED2.R b/models/ed/tests/testthat/test.model2netcdf.ED2.R index df093d1e0cf..43e753a7059 100644 --- a/models/ed/tests/testthat/test.model2netcdf.ED2.R +++ b/models/ed/tests/testthat/test.model2netcdf.ED2.R @@ -57,10 +57,10 @@ test_that("dimenstions have MsTMIP standard units",{ test_that("variables have MsTMIP standard units",{ skip("tests are broken #1329") - data(mstmip_vars, package = "PEcAn.utils") + data(standard_vars, package = "PEcAn.utils") for(var in vars){ - if(var$name %in% mstmip_vars$Variable.Name){ - ms.units <- mstmip_vars[mstmip_vars$Variable.Name == var$name, "Units"] + if(var$name %in% standard_vars$Variable.Name){ + ms.units <- standard_vars[standard_vars$Variable.Name == var$name, "Units"] if(!(ms.units == var$units)) { ed.output.message <- paste(var$name, "units", var$units, "do not match MsTMIP Units", ms.units) PEcAn.logger::logger.warn(ed.output.message) @@ -70,6 +70,6 @@ test_that("variables have MsTMIP standard units",{ ## The following test should pass if MsTMIP units / dimname standards are used ## expect_true( - ## var$units == mstmip_vars[mstmip_vars$Variable.Name == var$name, "Units"] + ## var$units == standard_vars[standard_vars$Variable.Name == var$name, "Units"] ## ) }) diff --git a/modules/data.atmosphere/R/load.cfmet.R b/modules/data.atmosphere/R/load.cfmet.R index 71fb8bce8d1..b8117df52d7 100644 --- a/modules/data.atmosphere/R/load.cfmet.R +++ b/modules/data.atmosphere/R/load.cfmet.R @@ -71,10 +71,10 @@ load.cfmet <- function(met.nc, lat, lon, start.date, end.date) { results <- list() - utils::data(mstmip_vars, package = "PEcAn.utils", envir = environment()) + utils::data(standard_vars, package = "PEcAn.utils", envir = environment()) ## pressure naming hack pending https://github.com/ebimodeling/model-drivers/issues/2 - standard_names <- append(as.character(mstmip_vars$standard_name), "surface_pressure") + standard_names <- append(as.character(standard_vars$standard_name), "surface_pressure") variables <- as.character(standard_names[standard_names %in% c("surface_pressure", attributes(met.nc$var)$names)]) From 8466016f5128996ae7fb80a64b1dbe057120cd75 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 27 Jan 2021 15:57:09 -0700 Subject: [PATCH 1750/2289] removed deprecated mstmip_local and mstmip_vars from PEcAn.utils --- base/utils/data/mstmip_local.csv | 21 ----------- base/utils/data/mstmip_vars.csv | 64 -------------------------------- 2 files changed, 85 deletions(-) delete mode 100644 base/utils/data/mstmip_local.csv delete mode 100644 base/utils/data/mstmip_vars.csv diff --git a/base/utils/data/mstmip_local.csv b/base/utils/data/mstmip_local.csv deleted file mode 100644 index 288484d51b0..00000000000 --- a/base/utils/data/mstmip_local.csv +++ /dev/null @@ -1,21 +0,0 @@ -"Num";"Group";"order";"Saveit";"Variable.Name";"Units";"Long.name";"Priority";"Category";"X3.hourly";"Monthly";"var_type";"ndim";"dim1";"dim2";"dim3";"dim4";"Description" -"1";1;3;1;"Yes";"RootBiom";"kg C m-2";"Total root biomass";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"2";2;3;2;"Yes";"StemBiom";"kg C m-2";"Total stem biomass";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"3";3;3;3;"Yes";"CO2CAS";"ppmv";"CO2CAS";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" 
-"4";4;3;4;"Yes";"CropYield";"kg m-2";"CropYield";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"5";5;3;5;"Yes";"SnowFrac";"-";"SnowFrac";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"6";6;3;6;"Yes";"LWdown";"W m-2";"LWdown";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"7";7;3;7;"Yes";"SWdown";"W m-2";"SWdown";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"8";8;3;8;"Yes";"Qg";"W m-2";"Qg";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"9";9;3;9;"Yes";"Swnet";"W m-2";"Swnet";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"10";10;3;10;"Yes";"RootMoist";"kg m-2";"RootMoist";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"11";11;3;11;"Yes";"Tveg";"kg m-2 s-1";"Tveg";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"12";12;3;12;"Yes";"WaterTableD";"m";"WaterTableD";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"13";13;3;13;"Yes";"SMFrozFrac";"-";"SMFrozFrac";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"14";14;3;14;"Yes";"SMLiqFrac";"-";"SMLiqFrac";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"15";15;3;15;"Yes";"Albedo";"-";"Albedo";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"16";16;3;16;"Yes";"SnowT";"K";"SnowT";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"17";17;3;17;"Yes";"VegT";"K";"VegT";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"18";18;3;18;"Yes";"LeafC";"kg C m-2";"LeafC";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"19";19;3;19;"Yes";"Yield";"kg m-2";"Yield";0;"Carbon Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" -"20";20;3;20;"Yes";"stomatal_conductance";"kg m-2 s-1";"Stomatal Conductance";0;"Energy Fluxes";"Yes";"Yes";"real";1;"lon";"lat";"time";"na";"" diff --git a/base/utils/data/mstmip_vars.csv b/base/utils/data/mstmip_vars.csv deleted file mode 100644 index e7702c0bf50..00000000000 --- a/base/utils/data/mstmip_vars.csv +++ /dev/null @@ -1,64 +0,0 @@ -Num;Group;order;Saveit;Variable.Name;standard_name;Units;Long.name;Priority;Category;Hourly;Monthly;var_type;ndim;dim1;dim2;dim3;dim4;Description -1;1;1;Yes;lon;longitude;degrees_east;Longitude;0;Grid;Yes;Yes;real;1;lon;na;na;na;longitude at center of each grid cell -2;1;2;Yes;lat;latitude;degrees_north;Latitude;0;Grid;Yes;Yes;real;1;lat;na;na;na;latitude at center of each grid cell -3;1;3;Yes;lon_bnds;;degrees_east;Longitude west-east bounds;0;Grid;Yes;Yes;real;2;nbnds;lon;na;na;(west boundary of grid cell, east boundary of grid cell) -4;1;4;Yes;lat_bnds;;degrees_north;Latitude south-north bounds;0;Grid;Yes;Yes;real;2;nbnds;lat;na;na;(south boundary of grid cell, north boundary of grid cell) -5;2;1;Yes;time;time;days since 1700-01-01 00:00:00 UTC;Time middle averaging period;0;Time;Yes;Yes;double;1;time;na;na;na;julian days days since 1700-01-01 00:00:00 UTC for middle of time averaging period Proleptic_Gregorianc calendar -6;2;2;Yes;time_bnds;;days since 1700-01-01 00:00:00 UTC;Time beginning-end bounds;0;Time;Yes;Yes;double;2;nbnds;time;na;na;(julian days days since 1700-01-01 beginning time ave period, julian days days since 1700-01-01 end time ave period) -7;2;3;Yes;dec_date;;yr;Decimal date middle averaging period;0;Time;Yes;Yes;double;1;time;na;na;na;decimal date in fractional years for middle of time averaging period -8;2;4;Yes;dec_date_bnds;;yr;Decimal date beginning-end 
bounds;0;Time;Yes;Yes;double;2;nbnds;time;na;na;(decimal date beginning time ave period, decimal date end time ave period) -9;2;5;Yes;cal_date_mid;;yr, mon, day, hr, min, sec;Calender date middle averaging period;0;Time;Yes;Yes;integer;2;ncal;time;na;na;calender date middle of time ave period: year, month, day, hour, minute, second for UTC time zone -10;2;6;Yes;cal_date_beg;;yr, mon, day, hr, min, sec;Calender date beginning averaging period;0;Time;Yes;Yes;integer;2;ncal;time;na;na;calender date beginning of time ave period: year, month, day, hour, minute, second for UTC time zone -11;2;7;Yes;cal_date_end;;yr, mon, day, hr, min, sec;Calender date end averaging period;0;Time;Yes;Yes;integer;2;ncal;time;na;na;calender date end of time ave period: year, month, day, hour, minute, second for UTC time zone -12;3;1;Yes;GPP;;kg C m-2 s-1;Gross Primary Productivity;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Rate of photosynthesis (always positive) -13;3;2;Yes;NPP;;kg C m-2 s-1;Net Primary Productivity;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Net Primary Productivity (NPP=GPP-AutoResp, positive into plants) -14;3;3;Yes;TotalResp;;kg C m-2 s-1;Total Respiration;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Total respiration (TotalResp=AutoResp+heteroResp, always positive) -15;3;4;Yes;AutoResp;;kg C m-2 s-1;Autotrophic Respiration;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Autotrophic respiration rate (always positive) -16;3;5;Yes;HeteroResp;;kg C m-2 s-1;Heterotrophic Respiration;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Heterotrophic respiration rate (always positive) -17;3;6;Yes;DOC_flux;;kg C m-2 s-1;Dissolved Organic Carbon flux;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Loss of organic carbon dissolved in ground water or rivers (positive out of grid cell) -18;3;7;Yes;Fire_flux;;kg C m-2 s-1;Fire emissions;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Flux of carbon due to fires (always positive) -19;3;8;Yes;NEE;;kg C m-2 s-1;Net Ecosystem Exchange;1;Carbon Fluxes;Yes;Yes;real;3;lon;lat;time;na;Net Ecosystem Exchange (NEE=HeteroResp+AutoResp-GPP, positive into atmosphere) -21;4;2;Yes;poolname;;(-);Name of each Carbon Pool;1;Carbon Pools;No;Yes;character;2;nchar;npool;na;na;"Name of each carbon pool (i.e., \""wood,\"" or \""Coarse Woody Debris\"")" -22;4;3;Yes;CarbPools;;kg C m-2;Size of each carbon pool;1;Carbon Pools;No;Yes;real;4;lon;lat;npool;time;Total size of each carbon pool vertically integrated over the entire soil column -23;4;4;Yes;AbvGrndWood;;kg C m-2;Above ground woody biomass;1;Carbon Pools;No;Yes;real;3;lon;lat;time;na;Total above ground wood biomass -24;4;5;Yes;TotLivBiom;;kg C m-2;Total living biomass;1;Carbon Pools;No;Yes;real;3;lon;lat;time;na;Total carbon content of the living biomass (leaves+roots+wood) -25;4;6;Yes;TotSoilCarb;;kg C m-2;Total Soil Carbon;1;Carbon Pools;No;Yes;real;3;lon;lat;time;na;Total soil and litter carbon content vertically integrated over the enire soil column -26;4;7;Yes;LAI;;m2 m-2;Leaf Area Index;1;Carbon Pools;No;Yes;real;3;lon;lat;time;na;Area of leaves per area ground -27;5;1;Yes;Qh;;W m-2;Sensible heat;1;Energy Fluxes;Yes;Yes;real;3;lon;lat;time;na;Sensible heat flux into the boundary layer (positive into atmosphere) -28;5;2;Yes;Qle;;W m-2;Latent heat;1;Energy Fluxes;Yes;Yes;real;3;lon;lat;time;na;Latent heat flux into the boundary layer (positive into atmosphere) -29;5;3;Yes;Evap;;kg m-2 s-1;Total Evaporation;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;Sum of all evaporation sources (positive into atmosphere) 
-30;5;4;Yes;TVeg;;kg m-2 s-1;Transpiration;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;Total Plant transpiration (always positive) -31;5;5;Yes;LW_albedo;;(-);Longwave Albedo;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;Longwave Albedo -32;5;6;Yes;SW_albedo;;(-);Shortwave Albedo;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;Shortwave albedo -33;5;7;Yes;Lwnet;;W m-2;Net Longwave Radiation;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;Incident longwave radiation minus simulated outgoing longwave radiation (positive into grnd) -34;5;8;Yes;SWnet;;W m-2;Net shortwave radiation;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;Incident shortwave radiation minus simulated outgoing shortwave radiation (positive into grnd) -35;5;9;Yes;fPAR;;(-);Absorbed fraction incoming PAR;1;Energy Fluxes;No;Yes;real;3;lon;lat;time;na;absorbed fraction incoming photosyntetically active radiation -37;6;2;Yes;z_top;;m;Soil Layer Top Depth;1;Physical Variables;No;Yes;real;1;nsoil;na;na;na;Depth from soil surface to top of soil layer -38;6;3;Yes;z_node;;m;Soil Layer Node Depth;1;Physical Variables;No;Yes;real;1;nsoil;na;na;na;"Depth from soil surface to layer prognostic variables; typically center of soil layer" -39;6;4;Yes;z_bottom;;m;Soil Layer Bottom Depth;1;Physical Variables;No;Yes;real;1;nsoil;na;na;na;Depth from soil surface to bottom of soil layer -40;6;5;Yes;SoilMoist;;kg m-2;Average Layer Soil Moisture;1;Physical Variables;No;Yes;real;4;lon;lat;nsoil;time;Soil water content in each soil layer, including liquid, vapor and ice -41;6;5;Yes;SoilMoistFrac;;(-);Average Layer Fraction of Saturation;1;Physical Variables;No;Yes;real;4;lon;lat;nsoil;time;Fraction of saturation of soil water in each soil layer, including liquid and ice -42;6;6;Yes;SoilWet;;(-);Total Soil Wetness;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Vertically integrated soil moisture divided by maximum allowable soil moisture above wilting point -43;6;7;Yes;Qs;;kg m-2 s-1;Surface runoff;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Runoff from the landsurface and/or subsurface stormflow -44;6;8;Yes;Qsb;;kg m-2 s-1;Subsurface runoff;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Gravity soil water drainage and/or soil water lateral flow -45;6;9;Yes;SoilTemp;;K;Average Layer Soil Temperature;1;Physical Variables;No;Yes;real;4;lon;lat;nsoil;time;Average soil temperature in each soil layer -46;6;10;Yes;Tdepth;;m;Active Layer Thickness;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;"Thaw depth; depth to zero centigrade isotherm in permafrost" -47;6;11;Yes;Fdepth;;m;Frozen Layer Thickness;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;"Freeze depth; depth to zero centigrade isotherm in non-permafrost" -48;6;12;Yes;Tcan;;K;Canopy Temperature;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Canopy or vegetation temperature (or temperature used in photosynthesis calculations) -49;6;13;Yes;SWE;;kg m-2;Snow Water Equivalent;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Total water mass of snow pack, including ice and liquid water -50;6;14;Yes;SnowDen;;kg m-3;Bulk Snow Density;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Overall bulk density of the snow pack, including ice and liquid water -51;6;15;Yes;SnowDepth;;m;Total snow depth;1;Physical Variables;No;Yes;real;3;lon;lat;time;na;Total snow depth -52;7;1;Yes;CO2air;;micromol mol-1;Near surface CO2 concentration;1;Driver;No;Yes;real;3;lon;lat;time;na;Near surface dry air CO2 mole fraction -53;7;2;Yes;LWdown;surface_downwelling_longwave_flux_in_air;W/m2;Surface incident 
longwave radiation;1;Driver;No;Yes;real;3;lon;lat;time;na;Surface incident longwave radiation -54;7;3;Yes;Psurf;air_pressure;Pa;Surface pressure;1;Driver;No;Yes;real;3;lon;lat;time;na;Surface pressure -55;7;4;Yes;Qair;specific_humidity;kg kg-1;Near surface specific humidity;1;Driver;No;Yes;real;3;lon;lat;time;na;Near surface specific humidity -56;7;5;Yes;Rainf;precipitation_flux;kg m-2 s-1;Rainfall rate;1;Driver;No;Yes;real;3;lon;lat;time;na;Rainfall rate -57;7;6;Yes;SWdown;surface_downwelling_shortwave_flux_in_air;W m-2;Surface incident shortwave radiation;1;Driver;No;Yes;real;3;lon;lat;time;na;Surface incident shortwave radiation -58;7;7;Yes;Tair;air_temperature;K;Near surface air temperature;1;Driver;No;Yes;real;3;lon;lat;time;na;Near surface air temperature -59;7;8;Yes;Wind;wind_speed;m s-1;Near surface module of the wind;1;Driver;No;Yes;real;3;lon;lat;time;na;Near surface wind magnitude -60;;;;Tmin;air_temperature_max;K;Daily Maximum Temperature;1;Driver;No;Yes;real;3;lon;lat;time;na;Daily Maximum Temperature -61;;;;Tmax;air_temperature_min;K;Daily Minimum Temperature;1;Driver;No;Yes;real;3;lon;lat;time;na;Daily Minimum Temperature -62;;;;Uwind;northward_wind;m s-1;Northward Component of Wind;1;Driver;No;Yes;real;3;lon;lat;time;na;Northward Component of Wind -63;;;;Vwind;eastward_wind;m s-1;Eastward Component of Wind;1;Driver;No;Yes;real;3;lon;lat;time;na;Eastward Component of Wind -64;;;;RH;relative_humidity;%;Relative Humidity;1;Driver;No;Yes;real;3;lon;lat;time;na;Relative Humidity -65;;;;PAR;surface_downwelling_photosynthetic_photon_flux_in_air;mol m-2 s-1;Photosynthetically Active Radiation;1;Driver;No;Yes;real;3;lon;lat;time;na;Photosynthetically Active Radiation From 3cef7dbc35c24797d549d9d972d1c7d4d01e4737 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Wed, 27 Jan 2021 15:58:52 -0700 Subject: [PATCH 1751/2289] update changelog --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index ad6ffd7c94d..6b394f14d27 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -42,6 +42,7 @@ This is a major change: ### Changed +- Removed deprecated mstmip_vars and mstmip_local; now all functions use the combined standard_vars.csv - Removed old api, now split into rpecanapi and apps/api. - Now using R 4.0.2 for Docker images. This is a major change. Newer version of R and using Ubuntu 20.04 instead of Debian. - Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up per #2621. 
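The changelog entry just above records that the deprecated `mstmip_vars` table (whose contents are deleted in the diff above) was replaced by the combined `standard_vars.csv`. A minimal R sketch of a lookup against the combined table, assuming `PEcAn.utils::standard_vars` keeps the `Variable.Name` and `Units` columns seen in the deleted CSV; the helper name is hypothetical, not the package's actual implementation:

```
# Sketch: fetch the metadata row for one output variable from the
# combined standards table shipped with PEcAn.utils.
lookup_standard_var <- function(name, vars = PEcAn.utils::standard_vars) {
  row <- vars[vars$Variable.Name == name, ]
  if (nrow(row) == 0) {
    warning("variable '", name, "' not found in standard_vars")
    return(NULL)
  }
  row
}

# Usage: lookup_standard_var("GPP")$Units should return "kg C m-2 s-1",
# matching the row that previously lived in mstmip_vars.csv.
```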
From b81dcbd8467bbf95ff5b1925718467961f38f408 Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Thu, 28 Jan 2021 00:18:30 +0000 Subject: [PATCH 1752/2289] Condition reading in of species.csv or cultivars.csv for existing membership check --- base/db/R/get.trait.data.pft.R | 29 ++++++++++++++++++++--------- 1 file changed, 20 insertions(+), 9 deletions(-) diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index 18cb952c21c..58bc16be689 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -136,15 +136,26 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, } else { # Check if PFT membership has changed PEcAn.logger::logger.debug("Checking if PFT membership has changed.") - existing_membership <- utils::read.csv( - need_paths[["pft_membership"]], - # Columns are: id, genus, species, scientificname - # Need this so NA values are formatted consistently - colClasses = c("double", "character", "character", "character"), - stringsAsFactors = FALSE, - na.strings = c("", "NA") - ) - diff_membership <- symmetric_setdiff( + if (pfttype == "plant") { + existing_membership <- utils::read.csv( + need_paths[["pft_membership"]], + # Columns are: id, genus, species, scientificname + # Need this so NA values are formatted consistently + colClasses = c("double", "character", "character", "character"), + stringsAsFactors = FALSE, + na.strings = c("", "NA") + ) + } else if (pfttype == "cultivar") { + existing_membership <- utils::read.csv( + need_paths[["pft_membership"]], + # Columns are: id, specie_id, genus, species, scientificname, cultivar + # Need this so NA values are formatted consistently + colClasses = c("double", "double", "character", "character", "character", "character"), + stringsAsFactors = FALSE, + na.strings = c("", "NA") + ) + } + diff_membership <- symmetric_setdiff( existing_membership, pft_members, xname = "existing", From e25e197eeed023e59924afedf62bec72a449ac2a Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Thu, 28 Jan 2021 00:33:36 +0000 Subject: [PATCH 1753/2289] Add PR #2761 --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index ad6ffd7c94d..ce5a024b119 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -39,6 +39,7 @@ This is a major change: - ensure Tleaf converted to K for temperature corrections in PEcAn.photosynthesis::fitA (#2726) - fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2753) - ensure that control treatments always receives the random effect index of 1; rename madata.Rdata to jagged.data.Rdata and include database ids and names useful for calculating parameter estimates by treatment (#2756) +- ensure that existing meta-analysis results can be used for pfts with cultivars (#2761) ### Changed From eed9c88908072088964d7311a26e18530a1ca731 Mon Sep 17 00:00:00 2001 From: istfer Date: Fri, 29 Jan 2021 07:11:29 -0500 Subject: [PATCH 1754/2289] addnl rd file --- modules/priors/man/get.sample.Rd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/priors/man/get.sample.Rd b/modules/priors/man/get.sample.Rd index c62f5dea45f..2ce56b6b438 100644 --- a/modules/priors/man/get.sample.Rd +++ b/modules/priors/man/get.sample.Rd @@ -4,7 +4,7 @@ \alias{get.sample} \title{Get Samples} \usage{ -get.sample(prior, n) +get.sample(prior, n, p = NULL) } \arguments{ \item{prior}{data.frame with distn, parama, paramb} From 81829ad9f3e3fac12d4a89b429de21c86a6b5cca Mon Sep 17 00:00:00 2001 From: David 
LeBauer Date: Mon, 1 Feb 2021 12:38:38 -0700 Subject: [PATCH 1755/2289] in ed2 job.sh, create out directory first If the output directory doesn't exist, the output is sent to the console rather than to logfile.txt; this moves the command that creates the directory to the first step. --- models/ed/inst/template.job | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/models/ed/inst/template.job b/models/ed/inst/template.job index 4ee63aa0c29..425632210ca 100644 --- a/models/ed/inst/template.job +++ b/models/ed/inst/template.job @@ -1,5 +1,9 @@ #!/bin/bash -l +# create output folder +mkdir -p "@OUTDIR@" +@SCRATCH_MKDIR@ + # redirect output exec 3>&1 exec &>> "@OUTDIR@/logfile.txt" @@ -10,10 +14,6 @@ echo "Logging on "$TIMESTAMP # host specific setup @HOST_SETUP@ -# create output folder -mkdir -p "@OUTDIR@" -@SCRATCH_MKDIR@ - # @REMOVE_HISTXML@ : tag to remove "history.xml" on remote for restarts, commented out on purpose
From 6a7a8e76d5ad8425106888eb018c52670eb42744 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 2 Feb 2021 11:47:32 -0600 Subject: [PATCH 1756/2289] run push only on master repo --- .github/workflows/depends.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml index adc81d47936..cb32b3f9fb0 100644 --- a/.github/workflows/depends.yml +++ b/.github/workflows/depends.yml @@ -18,6 +18,7 @@ env: jobs: depends: + if: github.repository == env.MASTER_REPO runs-on: ubuntu-latest strategy: @@ -48,7 +49,6 @@ jobs: # this will publish to the actor (person) github packages - name: Publish to GitHub - if: github.event_name == 'push' uses: elgohr/Publish-Docker-Github-Action@2.22 env: R_VERSION: ${{ matrix.R }} @@ -63,7+63,6 @@ jobs: # this will publish to the clowder dockerhub repo - name: Publish to Docker Hub - if: github.event_name == 'push' && github.repository == env.MASTER_REPO uses: elgohr/Publish-Docker-Github-Action@2.18 env: R_VERSION: ${{ matrix.R }}
From d67a2f990f894f6f4a7f3ed44648a2c97fd4fa45 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 2 Feb 2021 11:49:07 -0600 Subject: [PATCH 1757/2289] db pool was throwing errors The pool created errors; we will investigate later, but this should fix the issue.
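The diff below replaces the hand-built `pool::dbPool` in entrypoint.R with `PEcAn.DB::betyConnect()`, and submit.workflow.R stops checking connections out of the pool. For context, a minimal R sketch of the pattern being disabled: `DBI::dbSendStatement()` needs a real `DBIConnection` rather than the pool object, so a connection is checked out first and returned afterwards. Connection parameters and inserted values are placeholders, not the API's actual configuration:

```
# Hypothetical pool; host/dbname/user/password are illustrative only.
bety_pool <- pool::dbPool(
  drv = RPostgres::Postgres(),
  host = "localhost", dbname = "bety", user = "bety", password = "bety"
)

# Check a real connection out of the pool, use it, then return it.
con <- pool::poolCheckout(bety_pool)
res <- DBI::dbSendStatement(
  con,
  "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)",
  list("workflows", 1, "{}")  # placeholder values
)
DBI::dbClearResult(res)
pool::poolReturn(con)
pool::poolClose(bety_pool)
```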
--- apps/api/R/entrypoint.R | 23 ++++++++++++----------- apps/api/R/submit.workflow.R | 10 ++++++---- apps/api/pecanapi-spec.yml | 2 +- 3 files changed, 19 insertions(+), 16 deletions(-) diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 2ec009f50d9..23ff7911464 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -8,17 +8,18 @@ source("auth.R") source("general.R") # Set up the global database pool -.bety_params <- PEcAn.DB::get_postgres_envvars( - host = "localhost", - dbname = "bety", - user = "bety", - password = "bety", - driver = "Postgres" -) - -.bety_params$driver <- NULL -.bety_params$drv <- RPostgres::Postgres() -global_db_pool <- do.call(pool::dbPool, .bety_params) +#.bety_params <- PEcAn.DB::get_postgres_envvars( +# host = "localhost", +# dbname = "bety", +# user = "bety", +# password = "bety", +# driver = "Postgres" +#) +# +#.bety_params$driver <- NULL +#.bety_params$drv <- RPostgres::Postgres() +#global_db_pool <- do.call(pool::dbPool, .bety_params) +global_db_pool <- PEcAn.DB::betyConnect() root <- plumber::Plumber$new() root$setSerializer(plumber::serializer_unboxed_json()) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 2c1e804b779..4b9f2047c87 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -172,8 +172,9 @@ insert.workflow <- function(workflowList, dbcon = global_db_pool){ # NOTE: Have to "checkout" a connection from the pool here to work with # dbSendStatement and friends. We make sure to return the connection when the # function exits (successfully or not). - con <- pool::poolCheckout(dbcon) - on.exit(pool::poolReturn(con), add = TRUE) + #con <- pool::poolCheckout(dbcon) + #on.exit(pool::poolReturn(con), add = TRUE) + con <- dbcon insert_query <- glue::glue( "INSERT INTO workflows ", @@ -260,8 +261,9 @@ insert.attribute <- function(workflowList, dbcon = global_db_pool){ # Insert properties into attributes table value_json <- as.character(jsonlite::toJSON(properties, auto_unbox = TRUE)) - con <- pool::poolCheckout(dbcon) - on.exit(pool::poolReturn(con), add = TRUE) + # con <- pool::poolCheckout(dbcon) + # on.exit(pool::poolReturn(con), add = TRUE) + con <- dbcon res <- DBI::dbSendStatement(con, "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) diff --git a/apps/api/pecanapi-spec.yml b/apps/api/pecanapi-spec.yml index 21f9be50613..b6ea7e3a757 100644 --- a/apps/api/pecanapi-spec.yml +++ b/apps/api/pecanapi-spec.yml @@ -1278,4 +1278,4 @@ components: securitySchemes: basicAuth: type: http - scheme: basic \ No newline at end of file + scheme: basic From b5549e86d82a60ada3be3ae746e101ffdd6a6641 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Thu, 4 Feb 2021 20:24:56 +0000 Subject: [PATCH 1758/2289] Enable getting PFT numbers from settings file in addition to pftmapping file --- models/ed/R/model2netcdf.ED2.R | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 3170d9b8457..8a61930de74 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -937,7 +937,9 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n npft <- length(pft_names) data(pftmapping, package = "PEcAn.ED2") - pfts <- sapply(pft_names, function(x) pftmapping$ED[pftmapping$PEcAn == x]) + pfts <- sapply(pft_names, function(x) ifelse(x %in% 
settings$pfts$pft$name, + as.numeric(settings$pfts$pft$ed2_pft_number[settings$pfts$pft$name == x]), + pftmapping$ED[pftmapping$PEcAn == x])) out <- list() for (varname in varnames) { @@ -1010,7 +1012,9 @@ put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pft_names, ... } data(pftmapping, package = "PEcAn.ED2") - pfts <- sapply(pft_names, function(x) pftmapping$ED[pftmapping$PEcAn == x]) + pfts <- sapply(pft_names, function(x) ifelse(x %in% settings$pfts$pft$name, + as.numeric(settings$pfts$pft$ed2_pft_number[settings$pfts$pft$name == x]), + pftmapping$ED[pftmapping$PEcAn == x])) # ----- fill list From 5ec7916411979431376a87507bfbb35e07082b98 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 8 Feb 2021 12:00:39 -0500 Subject: [PATCH 1759/2289] adding agb by pft to recognized variables --- models/ed/R/ed_varlookup.R | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/models/ed/R/ed_varlookup.R b/models/ed/R/ed_varlookup.R index 811d0b3d648..0f80aeb0718 100644 --- a/models/ed/R/ed_varlookup.R +++ b/models/ed/R/ed_varlookup.R @@ -27,6 +27,11 @@ ed.var <- function(varname) { type = 'co', units = "kgC/plant", drelated = NULL, expr = "AGB_CO") + } else if(varname == "AGB.pft") { + out = list(readvar = c("AGB_CO"), #until I change BLEAF keeper to be annual work with total AGB + type = 'co', units = "kgC/plant", + drelated = NULL, + expr = "AGB_CO") } else if(varname == "leaf_carbon_content") { out = list(readvar = "BLEAF", type = 'co', units = "kgC/plant", From 312b5bd9d76b04299dbd43189a4e37c6a582c97a Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 8 Feb 2021 14:05:19 -0500 Subject: [PATCH 1760/2289] Force update testthat and rlang in test and check See discussion here: https://github.com/r-lib/devtools/issues/2309#issuecomment-773572728 --- .github/workflows/ci.yml | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 6259738fb3d..cee3c0966d9 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -105,6 +105,9 @@ jobs: - name: install new dependencies run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R + - name: Manually update rlang and testthat (HACK) + run: Rscript -e 'remotes::install_github("r-lib/testthat@v3.0.1")' + # initialize database - name: db setup uses: docker://pecan/db:ci @@ -152,6 +155,9 @@ jobs: - name: install new dependencies run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R + - name: Manually install latest testthat (HACK) + run: Rscript -e 'remotes::install_github("r-lib/testthat@v3.0.1")' + # run PEcAn checks - name: check run: make -j1 check From 11d4f10bfa941f8478376100ad9f70ca6c9a7ce9 Mon Sep 17 00:00:00 2001 From: kzarada Date: Mon, 8 Feb 2021 14:29:00 -0500 Subject: [PATCH 1761/2289] adding per pft functionality --- models/ed/R/read_restart.ED2.R | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/models/ed/R/read_restart.ED2.R b/models/ed/R/read_restart.ED2.R index 66cc7878467..146feb4d3f6 100644 --- a/models/ed/R/read_restart.ED2.R +++ b/models/ed/R/read_restart.ED2.R @@ -55,6 +55,14 @@ read_restart.ED2 <- function(outdir, } + if (var_name == "AGB.pft") { + perpft <- TRUE + forecast_tmp <- switch(perpft+1, sum(histout$AGB, na.rm = TRUE), histout$AGB) # kgC/m2 + forecast[[length(forecast)+1]] <- udunits2::ud.convert(forecast_tmp, "kg/m^2", "Mg/ha") # conv to MgC/ha + names(forecast)[length(forecast)] <- switch(perpft+1, "AGB", paste0("AGB.", pft_names)) + + } + if (var_name == 
"TotLivBiom") { forecast_tmp <- switch(perpft+1, sum(histout$TotLivBiom, na.rm = TRUE), histout$TotLivBiom) # kgC/m2 From 6832dc424283f716d849e91f45b745c13b97bd47 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 8 Feb 2021 13:54:35 -0600 Subject: [PATCH 1762/2289] add foldering on github actions --- scripts/time.sh | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/scripts/time.sh b/scripts/time.sh index 2f42cb7425b..5b42eebc173 100755 --- a/scripts/time.sh +++ b/scripts/time.sh @@ -11,6 +11,10 @@ if [ "$TRAVIS" == "true" ]; then travis_time_start "${FOLD_NAME}" "${FOLD_NAME}" "$@" travis_time_end +elif [ -n "$GITHUB_WORKFLOW" ]; then + echo "::group::${FOLD_NAME}" + "$@" + echo "::endgroup::" else time "$@" fi From a1eb5696cb9178533d2c0f80fd259db0974f28c3 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Mon, 8 Feb 2021 13:50:56 -0700 Subject: [PATCH 1763/2289] one remaining mstmip_vars call --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 7dfdcb70100..736a328019f 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -27,7 +27,7 @@ ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { - nc_var <- PEcAn.utils::standard_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] + nc_var <- PEcAn.utils::standard_vars[PEcAn.utils::standard_vars$Variable.Name == name, ] dims <- list() if (nrow(nc_var) == 0) { From 7f1b4ce7b8281a219a5f6c597139b0b3ea007f43 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 8 Feb 2021 17:17:19 -0600 Subject: [PATCH 1764/2289] better folding --- Makefile | 18 +++++++++--------- scripts/time.sh | 8 ++++++-- 2 files changed, 15 insertions(+), 11 deletions(-) diff --git a/Makefile b/Makefile index 313071bfb0d..accb2999803 100644 --- a/Makefile +++ b/Makefile @@ -90,39 +90,39 @@ clean: find models/basgra/src \( -name \*.mod -o -name \*.o -o -name \*.so \) -delete .install/devtools: | .install - + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('devtools', quietly = TRUE)) install.packages('devtools')" + + ./scripts/time.sh "devtools ${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('devtools', quietly = TRUE)) install.packages('devtools')" echo `date` > $@ .install/roxygen2: | .install - + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('roxygen2', quietly = TRUE)) install.packages('roxygen2')" + + ./scripts/time.sh "roxygen2 ${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('roxygen2', quietly = TRUE)) install.packages('roxygen2')" echo `date` > $@ .install/testthat: | .install - + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('testthat', quietly = TRUE)) install.packages('testthat')" + + ./scripts/time.sh "testthat ${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('testthat', quietly = TRUE)) install.packages('testthat')" echo `date` > $@ .install/mockery: | .install - + ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('mockery', quietly = TRUE)) install.packages('mockery')" + + ./scripts/time.sh "mockery ${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('mockery', quietly = TRUE)) install.packages('mockery')" echo `date` > $@ # HACK: assigning to `deps` is an ugly workaround for circular dependencies in utils pkg. 
# When these are fixed, can go back to simple `dependencies = TRUE` -depends_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} \ +depends_R_pkg = ./scripts/time.sh "depends ${1}" Rscript -e ${SETROPTIONS} \ -e "deps <- if (grepl('(base/utils|modules/benchmark)', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" -install_R_pkg = ./scripts/time.sh "${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" -check_R_pkg = ./scripts/time.sh "${1}" Rscript scripts/check_with_errors.R $(strip $(1)) +install_R_pkg = ./scripts/time.sh "install ${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" +check_R_pkg = ./scripts/time.sh "check ${1}" Rscript scripts/check_with_errors.R $(strip $(1)) # Would use devtools::test(), but devtools 2.2.1 hardcodes stop_on_failure=FALSE # To work around this, we reimplement about half of test() here :( -test_R_pkg = ./scripts/time.sh "${1}" Rscript \ +test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ -e "if (length(list.files('$(strip $(1))/tests/testthat', 'test.*.[rR]')) == 0) {" \ -e "print('No tests found'); quit('no') }" \ -e "env <- devtools::load_all('$(strip $(1))', quiet = TRUE)[['env']]" \ -e "testthat::test_dir('$(strip $(1))/tests/testthat', env = env," \ -e "stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can -doc_R_pkg = ./scripts/time.sh "${1}" Rscript -e "devtools::document('"$(strip $(1))"')" +doc_R_pkg = ./scripts/time.sh "document ${1}" Rscript -e "devtools::document('"$(strip $(1))"')" $(ALL_PKGS_I) $(ALL_PKGS_C) $(ALL_PKGS_T) $(ALL_PKGS_D): | .install/devtools .install/roxygen2 .install/testthat diff --git a/scripts/time.sh b/scripts/time.sh index 5b42eebc173..35e09fc6570 100755 --- a/scripts/time.sh +++ b/scripts/time.sh @@ -2,19 +2,23 @@ set -e -FOLD_NAME=$( echo "make_$1" | sed -e 's#[^a-z0-9]#_#g' ) +TITLE="$(echo $1 | xargs)" shift if [ "$TRAVIS" == "true" ]; then + FOLD_NAME=$( echo "make_$TITLE" | sed -e 's#[^A-Za-z0-9]#_#g' ) + . 
$( dirname $0 )/travis/func.sh travis_time_start "${FOLD_NAME}" "${FOLD_NAME}" "$@" travis_time_end elif [ -n "$GITHUB_WORKFLOW" ]; then - echo "::group::${FOLD_NAME}" + echo "::group::${TITLE}" "$@" echo "::endgroup::" else + echo "=========== START $TITLE ===========" time "$@" + echo "=========== END $TITLE ===========" fi From 365f7bc4057d895f71b5d4d8588723dfc25ef92d Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Feb 2021 04:47:48 -0500 Subject: [PATCH 1765/2289] docs --- modules/priors/R/priors.R | 1 + modules/priors/man/get.sample.Rd | 2 ++ 2 files changed, 3 insertions(+) diff --git a/modules/priors/R/priors.R b/modules/priors/R/priors.R index 8efd49c255d..ef2960c1d03 100644 --- a/modules/priors/R/priors.R +++ b/modules/priors/R/priors.R @@ -186,6 +186,7 @@ pr.samp <- function(distn, parama, paramb, n) { ##' @title Get Samples ##' @param prior data.frame with distn, parama, paramb ##' @param n number of samples to return +##' @param p probability vector, pre-generated upstream to be used in the quantile function ##' @return vector with n random samples from prior ##' @seealso \link{pr.samp} ##' @export diff --git a/modules/priors/man/get.sample.Rd b/modules/priors/man/get.sample.Rd index 2ce56b6b438..54ec100147f 100644 --- a/modules/priors/man/get.sample.Rd +++ b/modules/priors/man/get.sample.Rd @@ -10,6 +10,8 @@ get.sample(prior, n, p = NULL) \item{prior}{data.frame with distn, parama, paramb} \item{n}{number of samples to return} + +\item{p}{probability vector, pre-generated upstream to be used in the quantile function} } \value{ vector with n random samples from prior From b0ec907e1373cf9d387a0c55b0791e81252b45e4 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Feb 2021 04:51:12 -0500 Subject: [PATCH 1766/2289] changelog --- CHANGELOG.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index ad6ffd7c94d..3eaa911bc5b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -64,6 +64,9 @@ This is a major change: ### Added +- Functionality for generating the same ensemble parameter sets with randtoolbox functions. +- Functionality for joint sampling from the posteriors using randtoolbox functions. +- BASGRA-SDA couplers. - Now creates docker images during a PR, when merged it will push them to docker hub and github packages - New functionality to the PEcAn API to GET information about PFTs, formats & sites, submit workflows in XML or JSON formats & download relevant inputs/outputs/files related to runs & workflows (#2674 #2665 #2662 #2655) - Functions to send/receive messages to/from rabbitmq. 
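Together with the changelog entries just added (identical ensemble parameter sets and joint posterior sampling via `randtoolbox` functions), the `p` argument now documented for `get.sample()` above implies that a probability vector is generated upstream and pushed through each prior's quantile function. A minimal R sketch of that workflow, with hypothetical parameter names and priors rather than PEcAn's actual code:

```
library(randtoolbox)

# Quasi-random (Sobol) design: 100 ensemble members x 2 parameters in (0, 1).
p_mat <- randtoolbox::sobol(n = 100, dim = 2)

# Push each column of probabilities through the quantile function of the
# corresponding prior, so all parameters share one low-discrepancy design.
ens <- data.frame(
  param1 = qnorm(p_mat[, 1], mean = 15, sd = 2),      # hypothetical normal prior
  param2 = qgamma(p_mat[, 2], shape = 4, rate = 0.1)  # hypothetical gamma prior
)
head(ens)
```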
From c2ba8b7bfee87e9dee3564ccd2d2948c1a1a3b52 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Feb 2021 05:17:12 -0500 Subject: [PATCH 1767/2289] action fixes --- models/basgra/R/read_restart.BASGRA.R | 10 +++++----- models/basgra/R/write.config.BASGRA.R | 1 + 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/models/basgra/R/read_restart.BASGRA.R b/models/basgra/R/read_restart.BASGRA.R index b86ab8741ec..f558784a9e6 100644 --- a/models/basgra/R/read_restart.BASGRA.R +++ b/models/basgra/R/read_restart.BASGRA.R @@ -16,11 +16,11 @@ read_restart.BASGRA <- function(outdir, runid, stop.time, settings, var.names, p # maybe have some checks here to make sure the first run is actually ran for the period you requested # Read ensemble output - ens <- read.output(runid = runid, - outdir = file.path(outdir, runid), - start.year = lubridate::year(stop.time), - end.year = lubridate::year(stop.time), - variables = var.names) + ens <- PEcAn.utils::read.output(runid = runid, + outdir = file.path(outdir, runid), + start.year = lubridate::year(stop.time), + end.year = lubridate::year(stop.time), + variables = var.names) last <- length(ens[[1]]) diff --git a/models/basgra/R/write.config.BASGRA.R b/models/basgra/R/write.config.BASGRA.R index 6a29de28508..fd3c9ea564c 100644 --- a/models/basgra/R/write.config.BASGRA.R +++ b/models/basgra/R/write.config.BASGRA.R @@ -429,6 +429,7 @@ write.config.BASGRA <- function(defaults, trait.values, settings, run.id, IC = N # overwrite initial values with previous time steps # as model2netcdf is developed, some or all of these can be dropped? + last_vals <- c() last_states_file <- file.path(outdir, "last_vals_basgra.Rdata") if(file.exists(last_states_file)){ From 5a52bdde575a708e4edb6d68ec05d39b70e7ef12 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Feb 2021 05:48:35 -0500 Subject: [PATCH 1768/2289] add dependency --- models/basgra/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION index 6712582757c..9432345def2 100644 --- a/models/basgra/DESCRIPTION +++ b/models/basgra/DESCRIPTION @@ -7,6 +7,7 @@ Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut", Description: This module provides functions to link the BASGRA model to PEcAn. 
Imports: PEcAn.logger, + PEcAn.data.atmosphere, PEcAn.utils (>= 1.4.8), lubridate, ncdf4, From bb78640571fe59a2b5538b5a6e78888f170d7e34 Mon Sep 17 00:00:00 2001 From: istfer Date: Tue, 9 Feb 2021 05:55:14 -0500 Subject: [PATCH 1769/2289] update makefile depends --- Makefile.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index cb69f04c1d0..035af3ba0fc 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -22,7 +22,7 @@ $(call depends,modules/photosynthesis): | $(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger -$(call depends,models/basgra): | .install/base/logger .install/base/utils +$(call depends,models/basgra): | .install/base/logger .install/modules/data.atmosphere .install/base/utils $(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/base/db $(call depends,models/cable): | .install/base/logger .install/base/utils $(call depends,models/clm45): | .install/base/logger .install/base/utils From 15262e4ccb97ccf26f63312bfb3e6d6c187e7efd Mon Sep 17 00:00:00 2001 From: Jessica Guo Date: Tue, 9 Feb 2021 16:27:23 +0000 Subject: [PATCH 1770/2289] Make colClasses if/then more compact --- base/db/R/get.trait.data.pft.R | 31 +++++++++++++------------------ 1 file changed, 13 insertions(+), 18 deletions(-) diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index 58bc16be689..d36e4893088 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -137,30 +137,25 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, # Check if PFT membership has changed PEcAn.logger::logger.debug("Checking if PFT membership has changed.") if (pfttype == "plant") { - existing_membership <- utils::read.csv( - need_paths[["pft_membership"]], - # Columns are: id, genus, species, scientificname - # Need this so NA values are formatted consistently - colClasses = c("double", "character", "character", "character"), - stringsAsFactors = FALSE, - na.strings = c("", "NA") - ) + # Columns are: id, genus, species, scientificname + colClass = c("double", "character", "character", "character") } else if (pfttype == "cultivar") { - existing_membership <- utils::read.csv( - need_paths[["pft_membership"]], - # Columns are: id, specie_id, genus, species, scientificname, cultivar - # Need this so NA values are formatted consistently - colClasses = c("double", "double", "character", "character", "character", "character"), - stringsAsFactors = FALSE, - na.strings = c("", "NA") + # Columns are: id, specie_id, genus, species, scientificname, cultivar + colClass = c("double", "double", "character", "character", "character", "character") + } + existing_membership <- utils::read.csv( + need_paths[["pft_membership"]], + # Need this so NA values are formatted consistently + colClasses = colClass, + stringsAsFactors = FALSE, + na.strings = c("", "NA") ) - } - diff_membership <- symmetric_setdiff( + diff_membership <- symmetric_setdiff( existing_membership, pft_members, xname = "existing", yname = "current" - ) + ) if (nrow(diff_membership) > 0) { 
PEcAn.logger::logger.error( "\n PFT membership has changed. \n", From 2a8a1ee50e7f051bac10e33d64df19b529b3047e Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 9 Feb 2021 13:56:31 -0600 Subject: [PATCH 1771/2289] disable travis --- .travis.yml | 88 ---------------------------- scripts/time.sh | 14 +---- scripts/travis/after_script.sh | 20 ------- scripts/travis/before_script.sh | 28 --------- scripts/travis/build_order.txt | 38 ------------ scripts/travis/cache_buildup.sh | 54 ----------------- scripts/travis/func.sh | 61 ------------------- scripts/travis/install.sh | 69 ---------------------- scripts/travis/prime_travis_cache.sh | 36 ------------ scripts/travis/script.sh | 61 ------------------- 10 files changed, 3 insertions(+), 466 deletions(-) delete mode 100644 .travis.yml delete mode 100755 scripts/travis/after_script.sh delete mode 100755 scripts/travis/before_script.sh delete mode 100644 scripts/travis/build_order.txt delete mode 100755 scripts/travis/cache_buildup.sh delete mode 100755 scripts/travis/func.sh delete mode 100755 scripts/travis/install.sh delete mode 100755 scripts/travis/prime_travis_cache.sh delete mode 100755 scripts/travis/script.sh diff --git a/.travis.yml b/.travis.yml deleted file mode 100644 index 20fc82ada9f..00000000000 --- a/.travis.yml +++ /dev/null @@ -1,88 +0,0 @@ -language: r - -dist: xenial -os: linux - -env: - global: - # TODO: `make -j2` interleaves output lines from simultaneous processes. - # Would be nice to fix by adding `-Otarget`, but not supported in Make 3.x. - # When Travis updates, check for Make 4 and add -O if available. - - MAKEFLAGS="-j2" - - PGHOST=localhost - - RGL_USE_NULL=TRUE # Keeps RGL from complaining it can't find X11 - - _R_CHECK_LENGTH_1_CONDITION_=true - - _R_CHECK_LENGTH_1_LOGIC2_=true - -addons: - apt: - sources: - - sourceline: 'ppa:ubuntugis/ppa' # for GDAL 2 binaries - packages: - - bc - - curl - - gdal-bin - - jags - - libgdal-dev - - libgl1-mesa-dev - - libglu1-mesa-dev - - libglpk-dev # indirectly needed by BayesianTools - - libgmp-dev - - libhdf5-dev - - liblapack-dev - - libnetcdf-dev - - libproj-dev - - librdf0-dev - - libudunits2-dev - - netcdf-bin - - pandoc - - python-dev - - qpdf - - tcl - - tcl-dev - - udunits-bin - -jobs: - fast_finish: true - include: - - r: release - - r: devel - - r: oldrel - - r: 3.5 - allow_failures: - - r: devel - - r: oldrel - - r: 3.5 - -cache: - - directories: - - .install - - .check - - .test - - .doc - - packages - -## notifications should go to slack -notifications: - slack: - # Slack token created by Chris Black, 2018-02-17 - secure: "DHHSNmiCf71SLa/FFSqx9oOnJjJt2GHYk7NsFIBb9ZY10RvQtIPfaoNxkPjqu9HLyZWJSFtg/uNKKplEHc6W80NoXyqoTvwOxTPjMaViXaCNqsmzjjR/JaCWT/oWGXyAw0VX3S8cwuIexlKQGgZwJpIzoVOZqUrDrHI/O17kZoM=" - email: - on_success: always - on_failure: always - -## list of services to be running -services: - - docker - -install: - - scripts/travis/install.sh - -before_script: - - scripts/travis/before_script.sh - -script: - - scripts/travis/script.sh - -after_script: - - scripts/travis/after_script.sh diff --git a/scripts/time.sh b/scripts/time.sh index 35e09fc6570..c4c82f98284 100755 --- a/scripts/time.sh +++ b/scripts/time.sh @@ -5,20 +5,12 @@ set -e TITLE="$(echo $1 | xargs)" shift -if [ "$TRAVIS" == "true" ]; then - FOLD_NAME=$( echo "make_$TITLE" | sed -e 's#[^A-Za-z0-9]#_#g' ) - - . 
$( dirname $0 )/travis/func.sh - - travis_time_start "${FOLD_NAME}" "${FOLD_NAME}" - "$@" - travis_time_end -elif [ -n "$GITHUB_WORKFLOW" ]; then +if [ -n "$GITHUB_WORKFLOW" ]; then echo "::group::${TITLE}" "$@" echo "::endgroup::" else - echo "=========== START $TITLE ===========" + echo "=========== START $TITLE ===========" time "$@" - echo "=========== END $TITLE ===========" + echo "=========== END $TITLE ===========" fi diff --git a/scripts/travis/after_script.sh b/scripts/travis/after_script.sh deleted file mode 100755 index 4ace384a5d3..00000000000 --- a/scripts/travis/after_script.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -set -e -. $( dirname $0 )/func.sh - -# BUILD DOCUMENTATION -( - travis_time_start "build_book" "Building Book" - cd book_source - make - travis_time_end -) - -# BUILD TUTORIALS -( - travis_time_start "build_tutorials" "Building Tutorials" - cd documentation/tutorials - make build deploy - travis_time_end -) diff --git a/scripts/travis/before_script.sh b/scripts/travis/before_script.sh deleted file mode 100755 index e60b193b273..00000000000 --- a/scripts/travis/before_script.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/bash - -set -e -. $( dirname $0 )/func.sh - -# LOADING BETY DATABASE -( - travis_time_start "load_database" "Loading minimal BETY database" - - sudo service postgresql stop - docker run --detach --rm --name postgresql --publish 5432:5432 mdillon/postgis:9.6-alpine - echo -n "Waiting for Postgres to start..."; - until psql -U postgres -c 'select 1' >/dev/null 2>&1; - do echo -n "."; - sleep 1; - done; - echo " OK" - psql -q -o /dev/null -U postgres -c "CREATE ROLE BETY WITH LOGIN CREATEDB SUPERUSER CREATEROLE UNENCRYPTED PASSWORD 'bety'"; - psql -q -o /dev/null -U postgres -c "CREATE DATABASE bety OWNER bety;" - curl -o bety.sql http://isda.ncsa.illinois.edu/~kooper/PEcAn/data/bety.sql - psql -q -o /dev/null -U postgres < bety.sql - rm bety.sql - ./scripts/add.models.sh - chmod +x book_source/deploy.sh - chmod +x documentation/tutorials/deploy.sh - - travis_time_end -) diff --git a/scripts/travis/build_order.txt b/scripts/travis/build_order.txt deleted file mode 100644 index 6dc914e5b59..00000000000 --- a/scripts/travis/build_order.txt +++ /dev/null @@ -1,38 +0,0 @@ -base/logger -base/remote -modules/emulator -base/utils -base/db -base/settings -base/visualization -base/qaqc -modules/data.atmosphere -modules/data.land -modules/priors -modules/uncertainty -base/workflow -modules/allometry -modules/meta.analysis -modules/assim.batch -modules/assim.sequential -modules/benchmark -modules/data.hydrology -modules/data.remote -modules/photosynthesis -models/template -models/ed -modules/rtm -models/biocro -models/clm45 -models/dalec -models/dvmdostem -models/fates -models/gday -models/jules -models/linkages -models/lpjguess -models/maat -models/maespa -models/preles -models/sipnet -base/all diff --git a/scripts/travis/cache_buildup.sh b/scripts/travis/cache_buildup.sh deleted file mode 100755 index b7e3e37b884..00000000000 --- a/scripts/travis/cache_buildup.sh +++ /dev/null @@ -1,54 +0,0 @@ -#!/bin/bash - -# Motivation: -# Installing PEcAn from scratch involves building a *lot* of dependencies -# and takes longer than one (50-minute) Travis run, so we cache packages -# between runs and only update the ones that have changed since last time. -# But the cache is only updated after a successful build, so we need a way -# to populate it before the first run (or to clean up from a later run that -# times out because too many packages need updating). 
-# This script provides such a bootstrap: -# It runs an incremental build, pausing after less-then-50 minutes to report -# success and let Travis cache what we've built so far. Call it several times -# until all dependencies are in-cache, then restart any automated builds that -# were timing out before. - -# Usage via Travis web interface: -# * Select "More options" > "Trigger build" -# * Choose branch -# * Enter a commit message (I use "cache priming"). -# NB this commit is only visible on Travis and *will not* be pushed to GitHub -# * Enter config as JSON. I use: -# ``` -# install: scripts/travis/cache_buildup.sh -# script: echo 'just caching dependencies' -# ``` -# - Any section you include here *replaces* that section in .travis.yml -# - All sections of .travis.yml you *don't* override here will run as normal - -# Usage via Travis API: see prime_travis_cache.sh - -set +e - -MAX_TIME=${1:-30m} -CMDS=${2:-'scripts/travis/install.sh && make install'} - -# Spends up to $MAX_TIME installing packages, then sends HUP -timeout ${MAX_TIME} bash -c "${CMDS}" - -if [[ $? -ne 0 ]]; then - # Clean up any lock files left from killing install.packages - # (these packages will be re-freshened in the next priming round) - # - # TODO BUGBUG this assumes staged installation (R 3.6 and later)! - # R <= 3.5 backs up the old version to lockdir, builds new version in place - # ==> killing and deleting lockfile leaves broken package. - # Also per-package locking is optional; check if make uses it reliably - # - find $(Rscript -e 'cat(.libPaths())') -path '*/00LOCK-*' -delete - echo "Still more packages to cache. Please initiate another build." -else - echo "Build finished. Cache should now be up to date." -fi - -exit 0 diff --git a/scripts/travis/func.sh b/scripts/travis/func.sh deleted file mode 100755 index 95ec69b90fa..00000000000 --- a/scripts/travis/func.sh +++ /dev/null @@ -1,61 +0,0 @@ -#!/bin/bash - -TRAVIS_STACK=() -if [ "$(uname -s)" == "Darwin" ]; then - DATE_OPTION="+%s" - DATE_DIV=1 -else - DATE_OPTION="+%s%N" - DATE_DIV=1000000000 -fi - -function travis_time_start { - old_setting=${-//[^x]/} - set +x - TRAVIS_START_TIME=$(date ${DATE_OPTION}) - TRAVIS_TIME_ID=$( uuidgen | sed 's/-//g' | cut -c 1-8 ) - TRAVIS_FOLD_NAME=$1 - TRAVIS_STACK=("${TRAVIS_FOLD_NAME}#${TRAVIS_TIME_ID}#${TRAVIS_START_TIME}" "${TRAVIS_STACK[@]}") - echo -e "\e[0Ktravis_fold:start:$TRAVIS_FOLD_NAME" - echo -e "\e[0Ktravis_time:start:$TRAVIS_TIME_ID" - if [ "$2" != "" ]; then - echo "$2" - fi - if [[ -n "$old_setting" ]]; then set -x; else set +x; fi -} - -function travis_time_end { - old_setting=${-//[^x]/} - set +x - _COLOR=${1:-32} - TRAVIS_ITEM="${TRAVIS_STACK[0]}" - TRAVIS_ITEMS=(${TRAVIS_ITEM//#/ }) - TRAVIS_FOLD_NAME="${TRAVIS_ITEMS[0]}" - TRAVIS_TIME_ID="${TRAVIS_ITEMS[1]}" - TRAVIS_START_TIME="${TRAVIS_ITEMS[2]}" - TRAVIS_STACK=("${TRAVIS_STACK[@]:1}") - TRAVIS_END_TIME=$(date ${DATE_OPTION}) - TIME_ELAPSED_SECONDS=$(( ($TRAVIS_END_TIME - $TRAVIS_START_TIME)/1000000000 )) - echo -e "travis_time:end:$TRAVIS_TIME_ID:start=$TRAVIS_START_TIME,finish=$TRAVIS_END_TIME,duration=$(($TRAVIS_END_TIME - $TRAVIS_START_TIME))\n\e[0K" - echo -e "travis_fold:end:$TRAVIS_FOLD_NAME" - echo -e "\e[0K\e[${_COLOR}mFunction $TRAVIS_FOLD_NAME takes $(( $TIME_ELAPSED_SECONDS / 60 )) min $(( $TIME_ELAPSED_SECONDS % 60 )) sec\e[0m" - if [[ -n "$old_setting" ]]; then set -x; else set +x; fi -} - -function check_git_clean { - if [[ `git status -s` ]]; then - echo -e "\nThese files were changed by the build process:"; - git status -s; - echo -e 
"The two most common causes of this message:\n"; - echo -e " * Changed file ends with '*.Rd' or 'NAMESPACE':" \ - " Rerun Roxygen and commit any updated outputs\n"; - echo -e " * Changed file end with '*.depends':" \ - " Rerun './scripts/generate_dependencies.sh'" \ - " and commit any updated outputs\n"; - echo -e " * Something else: Hmm... Maybe the full diff below can help?\n"; - echo -e "travis_fold:start:gitdiff\nFull diff:\n"; - git diff; - echo -e "travis_fold:end:gitdiff\n\n"; - exit 1; - fi -} \ No newline at end of file diff --git a/scripts/travis/install.sh b/scripts/travis/install.sh deleted file mode 100755 index eb062a3f249..00000000000 --- a/scripts/travis/install.sh +++ /dev/null @@ -1,69 +0,0 @@ -#!/bin/bash - -set -e -. $( dirname $0 )/func.sh - -# Install R packages that need specified versions -( - travis_time_start "pecan_install_roxygen" "Installing Roxygen 7.0.2 to match comitted documentation version" - # We keep Roxygen pinned to a known version, merely to avoid hassle / - # merge conflicts from updates causing unplanned documentation changes. - # It's OK to bump the Roxygen version when needed, but please coordinate - # with the team to update all documentation at once and to get all - # PEcAn developers to update Roxygen on their own machines to match. - Rscript -e 'if (!requireNamespace("devtools", quietly = TRUE)) { install.packages("devtools", repos = "https://cloud.r-project.org") }' \ - -e 'devtools::install_version("roxygen2", version = "7.0.2", repos = "https://cloud.r-project.org")' - travis_time_end - - # MCMCpack >= 1.4-5 requires R >= 3.6; - # fall back to last compatible version if running older R - # (but only then -- new MCMCpack version is more efficient) - if [[ `Rscript -e 'cat(getRversion() < "3.6")'` = "TRUE" ]]; then - travis_time_start "pecan_install_MCMCpack" "Installing MCMCpack 1.4-4 (last version compatible with R < 3.6)" - Rscript -e 'devtools::install_version("MCMCpack", version = "1.4-4", repos = "https://cloud.r-project.org")' - travis_time_end - fi -) - -# Install R package dependencies -# N.B. we run this *after* installing packages that need pinned versions, -# relying on fact that pecan.depends calls littler with -s, -# so it will skip reinstalling packages that already exist. -# This way each package is only installed once. -( - travis_time_start "r_pkgs" "installing R packages" - # Seems like a lot of fiddling to set up littler and only use it once - # inside pecan.depends, but still easier than duplicating the script - Rscript -e 'if (!requireNamespace("littler", quietly = TRUE)) { install.packages(c("littler", "remotes", "docopt"), repos = "https://cloud.r-project.org") }' - LRPATHS=$(Rscript -e 'cat(system.file(c("examples", "bin"), package = "littler"), sep = ":")') - echo 'options(repos="https://cloud.r-project.org")' > ~/.littler.r - PATH=$LRPATHS:$PATH bash docker/depends/pecan.depends - travis_time_end -) - -# INSTALLING SIPNET -( - travis_time_start "install_sipnet" "Installing SIPNET for testing" - - cd ${HOME} - curl -o sipnet_unk.tar.gz http://isda.ncsa.illinois.edu/~kooper/EBI/sipnet_unk.tar.gz - tar zxf sipnet_unk.tar.gz - cd sipnet_unk - make - ls -l sipnet - - travis_time_end -) - -# INSTALLING BIOCRO -( - travis_time_start "install_biocro" "Installing BioCro" - - cd ${HOME} - curl -sL https://github.com/ebimodeling/biocro/archive/0.95.tar.gz | tar zxf - - cd biocro-0.95 - rm configure - R CMD INSTALL . 
- - travis_time_end -) diff --git a/scripts/travis/prime_travis_cache.sh b/scripts/travis/prime_travis_cache.sh deleted file mode 100755 index bf9537ae7ee..00000000000 --- a/scripts/travis/prime_travis_cache.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash - -# Triggers an incremental Travis build of PEcAn: -# Installs as many dependencies as it can build in 30 minutes, -# then exits with status 0 so that Travis will cache the built packages. - -# Usage: -# * first find your Travis token and store in an env var named TRAVIS_TOKEN -# I looked mine up using the Travis CLI tool: -# travis login --org --auto -# travis token -# export TRAVIS_TOKEN= -# * ./prime_travis_cache.sh [user] [branchname] -# * Repeat the call several times (but let each run finish first!) -# until cache_buildup.sh reports "build finished" - -USER=${1:-PecanProject} -BRANCH=${2:-develop} - -BODY='{ - "request": { - "message":"Prime caches via API", - "branch":"'${BRANCH}'", - "config": { - "install":"echo skipping", - "before_script":"echo skipping", - "script":"scripts/travis/cache_buildup.sh 30m"} -}}' - -curl -s -X POST \ - -H "Content-Type: application/json" \ - -H "Accept: application/json" \ - -H "Travis-API-Version: 3" \ - -H "Authorization: token ${TRAVIS_TOKEN}" \ - -d "${BODY}" \ - https://api.travis-ci.org/repo/${USER}%2Fpecan/requests diff --git a/scripts/travis/script.sh b/scripts/travis/script.sh deleted file mode 100755 index f09d82b3284..00000000000 --- a/scripts/travis/script.sh +++ /dev/null @@ -1,61 +0,0 @@ -#!/bin/bash - -set -e -. $( dirname $0 )/func.sh - -# GENERATING DEPENDENCIES -( - travis_time_start "dependency_generate" "Generate PEcAn package dependencies" - Rscript scripts/generate_dependencies.R - travis_time_end - check_git_clean -) - -# DUMP PACKAGE VERSIONS -( - travis_time_start "installed_packages" \ - "Version info of all installed R packages, for debugging" - Rscript -e 'op <- options(width = 1000)' \ - -e 'pkgs <- as.data.frame(installed.packages())' \ - -e 'cols <- c("Package", "Version", "MD5sum", "Built", "LibPath")' \ - -e 'print(pkgs[order(pkgs$Package), cols], row.names = FALSE)' \ - -e 'options(op)' - travis_time_end -) - -# COMPILE PECAN -( - travis_time_start "pecan_make_all" "Compiling PEcAn" - # TODO: Would probably be faster to use -j2 NCPUS=1 as for other steps, - # but many dependency compilations seem not parallel-safe. - # More debugging needed. - NCPUS=2 make -j1 - travis_time_end - check_git_clean -) - - -# INSTALLING PECAN (compile, intall, test, check) -( - travis_time_start "pecan_make_test" "Testing PEcAn" - make test - travis_time_end - check_git_clean -) - -# INSTALLING PECAN (compile, intall, test, check) -( - travis_time_start "pecan_make_check" "Checking PEcAn" - REBUILD_DOCS=FALSE RUN_TESTS=FALSE make check - travis_time_end - check_git_clean -) - - -# RUNNING SIMPLE PECAN WORKFLOW -( - travis_time_start "integration_test" "Testing Integration using simple PEcAn workflow" - ./tests/integration.sh travis - travis_time_end - check_git_clean -)
From 264578a5e387cd50601c3f2270a8be7e923dd234 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 9 Feb 2021 13:59:34 -0600 Subject: Do not set `repos = ...` in install.packages calls … We should not be overwriting the user- or system-specific setting in `.Rprofile`.
In particular, this can produce insidious bugs when interacting with Docker containers that are configured to use specific CRAN snapshots. Co-Authored-By: Alexey Shiklomanov --- .github/workflows/ci.yml | 6 ---- .github/workflows/styler-actions.yml | 2 +- Makefile | 1 - apps/api/Dockerfile | 8 ++--- .../01_Installing-PEcAn-Ubuntu.Rmd | 2 +- .../02_Installing-PEcAn-CentOS.Rmd | 2 +- .../94_docker/06_troubleshooting.Rmd | 10 +++---- .../03_topical_pages/95_remote_execution.Rmd | 3 +- docker/depends/Dockerfile | 4 +-- scripts/install_pecan.sh | 30 +++++++++---------- shiny/workflowPlot/helper.R | 2 +- 11 files changed, 31 insertions(+), 39 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index cee3c0966d9..6259738fb3d 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -105,9 +105,6 @@ jobs: - name: install new dependencies run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R - - name: Manually update rlang and testthat (HACK) - run: Rscript -e 'remotes::install_github("r-lib/testthat@v3.0.1")' - # initialize database - name: db setup uses: docker://pecan/db:ci @@ -155,9 +152,6 @@ jobs: - name: install new dependencies run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R - - name: Manually install latest testthat (HACK) - run: Rscript -e 'remotes::install_github("r-lib/testthat@v3.0.1")' - # run PEcAn checks - name: check run: make -j1 check diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 0eb0ac60e55..9fa9f5f171a 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -19,7 +19,7 @@ jobs: - uses: r-lib/actions/setup-r@master - name: Install styler run: | - Rscript -e 'install.packages("styler", repos = "cloud.r-project.org")' + Rscript -e 'install.packages("styler")' - name: run styler shell: bash env: diff --git a/Makefile b/Makefile index accb2999803..307c58f0b19 100644 --- a/Makefile +++ b/Makefile @@ -81,7 +81,6 @@ $(subst .doc/models/template,,$(MODELS_D)): .install/models/template include Makefile.depends -#SETROPTIONS = "options(Ncpus = ${NCPUS}, repos = 'https://cran.rstudio.com')" SETROPTIONS = "options(Ncpus = ${NCPUS})" clean: diff --git a/apps/api/Dockerfile b/apps/api/Dockerfile index bafbfac16b5..81a756fbee9 100644 --- a/apps/api/Dockerfile +++ b/apps/api/Dockerfile @@ -18,9 +18,9 @@ EXPOSE 8000 RUN apt-get update \ && apt-get install libsodium-dev -y \ && rm -rf /var/lib/apt/lists/* \ - && Rscript -e "devtools::install_version('promises', '1.1.0', repos = 'http://cran.rstudio.com')" \ - && Rscript -e "devtools::install_version('webutils', '1.1', repos = 'http://cran.rstudio.com')" \ - && Rscript -e "install.packages('pool', repos = 'http://cran.rstudio.com')" \ + && Rscript -e "devtools::install_version('promises', '1.1.0')" \ + && Rscript -e "devtools::install_version('webutils', '1.1')" \ + && Rscript -e "install.packages('pool')" \ && Rscript -e "devtools::install_github('rstudio/swagger')" \ && Rscript -e "devtools::install_github('rstudio/plumber')" @@ -36,4 +36,4 @@ WORKDIR /api/R CMD Rscript entrypoint.R -COPY ./ /api \ No newline at end of file +COPY ./ /api diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd index 472417c9ccf..907c074c2de 100755 --- 
a/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd +++ b/book_source/03_topical_pages/93_installation/03_install_OS/01_Installing-PEcAn-Ubuntu.Rmd @@ -32,7 +32,7 @@ apt-get -y install apache2 libapache2-mod-php5 php5 apt-get -y install texinfo texlive-latex-base texlive-latex-extra texlive-fonts-recommended # install devtools -echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla +echo 'install.packages("devtools")' | R --vanilla # done as root exit diff --git a/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd b/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd index e3b1aa94064..b51113b290b 100755 --- a/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd +++ b/book_source/03_topical_pages/93_installation/03_install_OS/02_Installing-PEcAn-CentOS.Rmd @@ -37,7 +37,7 @@ firewall-cmd --reload #apt-get -y install texinfo texlive-latex-base texlive-latex-extra texlive-fonts-recommended # install devtools -echo 'install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla +echo 'install.packages("devtools")' | R --vanilla # done as root exit diff --git a/book_source/03_topical_pages/94_docker/06_troubleshooting.Rmd b/book_source/03_topical_pages/94_docker/06_troubleshooting.Rmd index 855d96d40aa..ca162c24ffc 100644 --- a/book_source/03_topical_pages/94_docker/06_troubleshooting.Rmd +++ b/book_source/03_topical_pages/94_docker/06_troubleshooting.Rmd @@ -7,20 +7,20 @@ ``` Installing package into ‘/usr/local/lib/R/site-library’ (as ‘lib’ is unspecified) -Warning: unable to access index for repository https://mran.microsoft.com/snapshot/2018-09-01/src/contrib: - cannot open URL 'https://mran.microsoft.com/snapshot/2018-09-01/src/contrib/PACKAGES' +Warning: unable to access index for repository : + cannot open URL '' Warning message: package ‘’ is not available (for R version 3.5.1) ``` -**CAUSE**: This can sometimes happen if there are problems with Microsoft's CRAN snapshots, which are the default repository for the `rocker/tidyverse` containers. +**CAUSE**: This can sometimes happen if there are problems with the RStudio Package manager, which is the default repository for the `rocker/tidyverse` containers. See GitHub issues [rocker-org/rocker-versioned#102](https://github.com/rocker-org/rocker-versioned/issues/102) and [#58](https://github.com/rocker-org/rocker-versioned/issues/58). -**SOLUTION**: Add the following line to the `depends` and/or `base` Dockerfiles _before_ (i.e. above) any commands that install R packages (e.g. `Rscript -e "install.packages(...)"`): +**WORKAROUND**: Add the following line to the `depends` and/or `base` Dockerfiles _before_ (i.e. above) any commands that install R packages (e.g. `Rscript -e "install.packages(...)"`): ``` RUN echo "options(repos = c(CRAN = 'https://cran.rstudio.org'))" >> /usr/local/lib/R/etc/Rprofile.site ``` -This will set the default repository to the more reliable (albeit, more up-to-date; beware of breaking package changes!) RStudio CRAN mirror. +This will set the default repository to the more reliable (**albeit, more up-to-date; beware of breaking package changes!**) RStudio CRAN mirror. Then, build the image as usual. 
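As a quick check that the `Rprofile.site` workaround above took effect, the default repository can be queried from R inside the rebuilt image (a sketch; how the container is invoked, for example `docker run --rm <image> Rscript -e '...'`, is left open):

```
# Run inside the rebuilt container to see where install.packages() will look:
getOption("repos")
# With the workaround applied this should report
# c(CRAN = "https://cran.rstudio.org")
```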
diff --git a/book_source/03_topical_pages/95_remote_execution.Rmd b/book_source/03_topical_pages/95_remote_execution.Rmd index 6d02a401520..ef879465994 100644 --- a/book_source/03_topical_pages/95_remote_execution.Rmd +++ b/book_source/03_topical_pages/95_remote_execution.Rmd @@ -308,8 +308,7 @@ will fall back on the older versions install site-library ``` install.packages(c('udunits2', 'lubridate'), - configure.args=c(udunits2='--with-udunits2-lib=/project/earth/packages/udunits-2.1.24/lib --with-udunits2-include=/project/earth/packages/udunits-2.1.24/include'), - repos='http://cran.us.r-project.org') + configure.args=c(udunits2='--with-udunits2-lib=/project/earth/packages/udunits-2.1.24/lib --with-udunits2-include=/project/earth/packages/udunits-2.1.24/include')) ``` Finally to install support for both ED and SIPNET: diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index f3707d531d8..ca9bfb566c5 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -39,8 +39,8 @@ RUN apt-get update \ # INSTALL DEPENDENCIES # ---------------------------------------------------------------------- COPY pecan.depends.R / -RUN Rscript -e "install.packages(c('devtools'), repos = 'http://cran.rstudio.com')" \ - && Rscript -e "devtools::install_version('roxygen2', '7.0.2', repos = 'http://cran.rstudio.com')" \ +RUN Rscript -e "install.packages(c('devtools'))" \ + && Rscript -e "devtools::install_version('roxygen2', '7.0.2')" \ && R_LIBS_USER='/usr/local/lib/R/site-library' Rscript /pecan.depends.R \ && rm -rf /tmp/* diff --git a/scripts/install_pecan.sh b/scripts/install_pecan.sh index ddcf1a3eb7c..db0845b41bc 100644 --- a/scripts/install_pecan.sh +++ b/scripts/install_pecan.sh @@ -357,21 +357,21 @@ if [ -z "${R_LIBS_USER}" ]; then ;; esac fi -echo 'if(!"devtools" %in% installed.packages()) install.packages("devtools", repos="http://cran.rstudio.com/")' | R --vanilla -echo 'if(!"udunits2" %in% installed.packages()) install.packages("udunits2", configure.args=c(udunits2="--with-udunits2-include=/usr/include/udunits2"), repo="http://cran.rstudio.com")' | R --vanilla +echo 'if(!"devtools" %in% installed.packages()) install.packages("devtools")' | R --vanilla +echo 'if(!"udunits2" %in% installed.packages()) install.packages("udunits2", configure.args=c(udunits2="--with-udunits2-include=/usr/include/udunits2"))' | R --vanilla # packages for BrownDog shiny app -echo 'if(!"leaflet" %in% installed.packages()) install.packages("leaflet", repos="http://cran.rstudio.com/")' | R --vanilla -echo 'if(!"RJSONIO" %in% installed.packages()) install.packages("RJSONIO", repos="http://cran.rstudio.com/")' | R --vanilla +echo 'if(!"leaflet" %in% installed.packages()) install.packages("leaflet")' | R --vanilla +echo 'if(!"RJSONIO" %in% installed.packages()) install.packages("RJSONIO")' | R --vanilla # packages for other shiny apps -echo 'if(!"DT" %in% installed.packages()) install.packages("DT", repos="http://cran.rstudio.com/")' | R --vanilla +echo 'if(!"DT" %in% installed.packages()) install.packages("DT")' | R --vanilla -#echo 'update.packages(repos="http://cran.rstudio.com/", ask=FALSE)' | sudo R --vanilla -echo 'x <- rownames(old.packages(repos="http://cran.rstudio.com/")); update.packages(repos="http://cran.rstudio.com/", ask=FALSE, oldPkgs=x[!x %in% "rgl"])' | sudo R --vanilla +#echo 'update.packages(ask=FALSE)' | sudo R --vanilla +echo 'x <- rownames(old.packages()); update.packages(ask=FALSE, oldPkgs=x[!x %in% "rgl"])' | sudo R --vanilla -#echo 
'update.packages(repos="http://cran.rstudio.com/", ask=FALSE)' | R --vanilla -echo 'x <- rownames(old.packages(repos="http://cran.rstudio.com/")); update.packages(repos="http://cran.rstudio.com/", ask=FALSE, oldPkgs=x[!x %in% "rgl"])' | R --vanilla +#echo 'update.packages(ask=FALSE)' | R --vanilla +echo 'x <- rownames(old.packages()); update.packages(ask=FALSE, oldPkgs=x[!x %in% "rgl"])' | R --vanilla echo "######################################################################" echo "ED" @@ -422,7 +422,7 @@ echo "######################################################################" if [ ! -e ${HOME}/maespa ]; then cd git clone https://bitbucket.org/remkoduursma/maespa.git - echo 'install.packages("Maeswrap", repo="http://cran.rstudio.com")' | R --vanilla + echo 'install.packages("Maeswrap")' | R --vanilla fi cd ${HOME}/maespa make clean @@ -479,7 +479,7 @@ echo "######################################################################" echo "PECAN" echo "######################################################################" -echo 'if(!"rgl" %in% installed.packages()) install.packages("rgl", repos="http://cran.rstudio.com/")' | R --vanilla +echo 'if(!"rgl" %in% installed.packages()) install.packages("rgl")' | R --vanilla if [ ! -e ${HOME}/pecan ]; then cd @@ -622,12 +622,12 @@ echo "######################################################################" echo "SHINY SERVER" echo "######################################################################" if [ "${SHINY_SERVER}" != "" -a $( uname -m ) == "x86_64" ]; then - sudo su - -c "R -e \"install.packages(c('rmarkdown', 'shiny'), repos='https://cran.rstudio.com/')\"" + sudo su - -c "R -e \"install.packages(c('rmarkdown', 'shiny'))\"" R -e "install.packages(c('https://www.bioconductor.org/packages/release/bioc/src/contrib/BiocGenerics_0.28.0.tar.gz', 'http://www.bioconductor.org/packages/release/bioc/src/contrib/graph_1.60.0.tar.gz'), repos=NULL)" R -e "devtools::install_github('duncantl/CodeDepends')" #R -e "devtools::install_github('OakleyJ/SHELF')" - R -e "install.packages(c('shinythemes', 'shinytoastr', 'plotly'), repos='https://cran.rstudio.com/')" + R -e "install.packages(c('shinythemes', 'shinytoastr', 'plotly'))" cd @@ -731,8 +731,8 @@ echo "######################################################################" echo "PalEON" echo "######################################################################" if [ "$SETUP_PALEON" != "" ]; then - echo 'if(!"neotoma" %in% installed.packages()) install.packages("neotoma", repos="http://cran.rstudio.com/")' | R --vanilla - echo 'if(!"R2jags" %in% installed.packages()) install.packages("R2jags", repos="http://cran.rstudio.com/")' | R --vanilla + echo 'if(!"neotoma" %in% installed.packages()) install.packages("neotoma")' | R --vanilla + echo 'if(!"R2jags" %in% installed.packages()) install.packages("R2jags")' | R --vanilla if [ ! 
-e ${HOME}/Camp2016 ]; then cd diff --git a/shiny/workflowPlot/helper.R b/shiny/workflowPlot/helper.R index 7115f781bca..dc23e4656ac 100644 --- a/shiny/workflowPlot/helper.R +++ b/shiny/workflowPlot/helper.R @@ -2,7 +2,7 @@ checkAndDownload<-function(packageNames) { for(packageName in packageNames) { if(!isInstalled(packageName)) { - install.packages(packageName,repos="http://lib.stat.cmu.edu/R/CRAN") + install.packages(packageName) } library(packageName,character.only=TRUE,quietly=TRUE,verbose=FALSE) } From 7a00ff4b0ef680a51a95a91643e27e8d958e3f3e Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 9 Feb 2021 15:55:02 -0600 Subject: [PATCH 1773/2289] more changes to remove travis --- CHANGELOG.md | 1 + README.md | 2 +- .../05_developer_workflows/04-testing.Rmd | 55 ++++++------------- docker/base/Dockerfile | 1 - tests/integration.sh | 2 - 5 files changed, 18 insertions(+), 43 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 262ddeede84..616a3036df1 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -87,6 +87,7 @@ This is a major change: ### Removed +- Removed travis integration - Removed the sugarcane and db folders from web, this removes the simple DB editor in the web folder. (#2532) - Removed ED2IN.git (#2599) 'definitely going to break things for people' - but they can still use PEcAn <=1.7.1 - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563). diff --git a/README.md b/README.md index 3a80a95bde3..ea55705e702 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -[![Build Status](https://travis-ci.org/PecanProject/pecan.svg?branch=master)](https://travis-ci.org/PecanProject/pecan) +[![GitHub Actions CI](https://github.com/PecanProject/pecan/workflows/CI/badge.svg)](https://github.com/PecanProject/pecan/actions) [![Slack](https://img.shields.io/badge/slack-login-green.svg)](https://pecanproject.slack.com/) [![Slack](https://img.shields.io/badge/slack-join_chat-green.svg)](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ) [![DOI](https://zenodo.org/badge/4469/PecanProject/pecan.svg)](https://zenodo.org/badge/latestdoi/4469/PecanProject/pecan) diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd index 2694dc56c57..5143aa39519 100755 --- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd +++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd @@ -124,49 +124,26 @@ The `batch_run.R` script can take the following command-line arguments: Every time anyone commits a change to the PEcAn code, the act of pushing to GitHub triggers an automated build and test of the full PEcAn codebase, and all pull requests must report a successful CI build before they will be merged. This will sometimes feel like a burden when the build breaks on an issue that looks trivial, but anything that breaks the build is important enough to fix. It's much better to find errors early and fix them before they get incorporated into the released PEcAn code. 
-At this writing PEcAn's CI builds primarily use [Travis CI](https://travis-ci.org/pecanProject/pecan) and the rest of this section assumes a Travis build, but as of May 2020 we also have an experimental GitHub Actions build, and if we switch completely to GitHub Actions then this guide will need to be rewritten.
+At this writing PEcAn's CI builds primarily use [GitHub Actions](https://github.com/PecanProject/pecan/actions) and the rest of this section assumes a GitHub Actions build.
 
-All our Travis builds run on the same version of Linux (currently Ubuntu 16.04, `xenial`) using four different versions of R in parallel: the two most recent previous releases, current release, and nightly builds of the R development branch. In most cases the build should pass on all four versions, but only the current release (`R:release`) is truly mandatory. We aim to fix errors found on R:release before merging, errors found on R:devel before the next R release, and errors found on older releases as developer time and forward compatibility allow.
+All our GitHub Actions builds run in containers using different versions of R in parallel. The build will use the latest pecan/depends container for that specific R version. Each night this depends image is rebuilt.
 
 Each build starts by launching a separate clean virtual machine for each R version and performs roughly the following actions on all of them:
-
-* Installs binary system dependencies needed by PEcAn (NetCDF and HDF5 libraries, JAGS, udunits, etc).
-* Installs all the R packages that are declared as dependencies in any PEcAn package, as computed by `scripts/generate_dependencies.R`.
-* Clones the PEcAn repository from GitHub, and checks out the branch to be tested.
-* Retrieves any cached files available from previous Travis builds.
-  - The main thing in the cache is previously-installed dependency R packages, to avoid recompiling them every time.
-  - If the cache becomes stale or is preventing a package update needed by the build (e.g. to get a new version that contains a needed bug fix), delete the cache through the Travis web interface and it will be reconstructed on the next build.
-  - Because PEcAn has so many dependencies, builds with no cache will spend most of their time recompiling packages and will probably run out of time before the tests complete. You can fix this by using `scripts/travis/cache_buildup.sh` and `scripts/travis/prime_travis_cache.sh` to build up the cache incrementally through one or more [custom builds](https://blog.travis-ci.com/2017-08-24-trigger-custom-build), each of which installs some dependencies and then uploads a freshened cache *without* running any tests. Once all dependencies have been cached, restart the standard full build.
-* Initializes a skeleton version of the PEcAn database (BeTY) containing a few public records to be used by the test runs.
-* Installs all the PEcAn packages, recompiling documentation and installing dependencies as needed.
-* Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`).
-  - As discussed in [Unit testing](#developer-testing-unit), these tests should run quickly and test individual components in relative isolation.
-  - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time (e.g. large data product downloads) or require resources that aren't available on Travis (e.g. specific models), but be sure to run these tests locally before pushing your code!
-* Runs R package checks (the same ones you run locally with `make check` or `devtools::check(pkgname)`), skipping tests and documentation rebuild because we just did those in the previous steps.
-  - Any ERROR in the check output will stop the build immediately.
-  - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. If the package has no stored reference result, all WARNINGs and NOTEs are considered newly added and reported as build failures.
-  - If all messages from the current built were also present in the reference result, the check passes. If any messages are newly added, a build failure is reported.
-  - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it!
-  - The idea here is to enforce good coding practice and catch likely errors in all new code while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once.
-  - As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear.
-  - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it as part of your PR anyway. It's frustrating to see tests complain about code you didn't touch, but the failures all need to be cleaned up eventually and it's likely easier to fix the error than to figure out how to re-ignore it.
-* Runs a simple PEcAn workflow from beginning to end (three years of SIPNET at Niwot Ridge using stored weather data, meta-analysis on fixed effects only, sensitivity analysis on NPP), and verifies that the models complete successfully.
-* Checks whether any version-controlled files have been changed by the build and testing process, and marks a build failure if they have.
-  - If your build fails at this step, the most common cause is that you forgot to Roxygenize before committing.
-  - This step will also detect newly added files, e.g. tests improperly writing to the current working directory rather than `tempdir()` and then failing to clean up after themselves.
-
-If any of these steps reports an error, the build is marked as "broken" and stops before the next step. If they all pass, the Travis CI bot marks the build as successful and tells the GitHub bot that it's OK to allow your changes to be merged... but the final decision belongs to the human reviewing your code and they might still ask you for other changes!
-
-After a successful build, Travis performs two post-build steps:
-
+* Compile the source code in the container
+* Run the tests inside the container, and check to see if they all pass
+  - This will also check to see if any files have been modified during this step
+* Run the doxygen command inside the container
+  - This will also check to see if any files have been modified during this step
+* Run the check command inside the container, and check if there are any new warnings and/or errors
+  - This will also check to see if any files have been modified during this step
+* Run a simple integration test using the SIPNET model
+* Create the Docker images
+  - Once your PR is merged, the workflow will push them to DockerHub and the GitHub Container Registry.
 * Compiles the PEcAn documentation book (`book_source`) and the tutorials (`documentation/tutorials`) and uploads them to the [PEcAn website](https://pecanproject.github.io/pecan-documentation).
   - This is only done for commits to the `master` or `develop` branch, so changes to in-progress pull requests never change the live documentation until after they are merged.
-* Packs up selected build artifacts into a cache file and uploads it to the Travis servers for use on the next build.
-
-The post-build steps are allowed to fail without breaking the build. If you made documentation changes but don't see them deployed, or if your build seems to be reinstalling packages that ought to be cached, inspect the Travis logs of the previous supposedly-successful build to see if their uploads succeeded.
-All of the above descriptions apply to the build Travis generates when you push to the main `PecanProject/pecan` repository, either by directly pushing to a branch or by opening a pull request. If you like, you can also [enable Travis builds](https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci) from your own PEcAn fork. This can be useful for several reasons:
+If your build fails and indicates that files have been modified, there are a few common causes. It should also list the files that have changed, and what has changed.
+* The most common cause is that you forgot to Roxygenize before committing.
+* This step will also detect newly added files, e.g. tests improperly writing to the current working directory rather than `tempdir()` and then failing to clean up after themselves.
-* It lets your test whether your changes worked before you open a pull request.
-* It often lets you get faster test results: The PEcAn project uses Travis CI's free tier, which only allows a few simultaneous build jobs per repository. If many people are pushing code at the same time, your build might wait in line for a long time at `PecanProject/pecan` but start immediately at `yourname/pecan`.
-* If you will be editing the documentation a lot and want to see rendered previews of your in-progress work (instead of waiting until it is merged into develop), you can clone the [pecan-documentation](https://github.com/PecanProject/pecan-documentation) repository to your own GitHub account and let Travis update it for you.
+If any of the actions reports an error, the build is marked as "failed". If they all pass, GitHub Actions marks the build as successful and tells the PR that it's OK to allow your changes to be merged... but the final decision belongs to the human reviewing your code and they might still ask you for other changes!
diff --git a/docker/base/Dockerfile b/docker/base/Dockerfile
index 142fbe65787..6f79a8adb2d 100644
--- a/docker/base/Dockerfile
+++ b/docker/base/Dockerfile
@@ -18,7 +18,6 @@ ARG PECAN_GIT_DATE="unknown"
 
 # copy folders
 COPY Makefile Makefile.depends /pecan/
 COPY scripts/time.sh /pecan/scripts/time.sh
-COPY scripts/travis /pecan/scripts/travis/
 COPY base /pecan/base/
 COPY modules /pecan/modules/
 COPY models /pecan/models/
diff --git a/tests/integration.sh b/tests/integration.sh
index 49d28c1d08d..36514665749 100755
--- a/tests/integration.sh
+++ b/tests/integration.sh
@@ -6,7 +6,6 @@ set -o pipefail
 cd $( dirname $0 )
 
 for f in ${NAME}.*.xml; do
-  echo -en "travis_fold:start:TEST $f\r"
   rm -rf pecan output.log
   Rscript --vanilla ../web/workflow.R --settings $f 2>&1 | tee output.log
   if [ $?
-ne 0 ]; then @@ -26,5 +25,4 @@ for f in ${NAME}.*.xml; do echo "----------------------------------------------------------------------" fi rm -rf output.log pecan - echo -en 'travis_fold:end:TEST $f\r' done From ba845fab754830f435e9979c4dcf9eef88c054d5 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Tue, 9 Feb 2021 16:09:18 -0600 Subject: [PATCH 1774/2289] pass github workflow parameter to docker --- docker.sh | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/docker.sh b/docker.sh index 960e2c06bd2..bc5c1ef66b7 100755 --- a/docker.sh +++ b/docker.sh @@ -94,6 +94,11 @@ EOF esac done +# pass github workflow +if [ -n "$GITHUB_WORKFLOW" ]; then + GITHUB_WORKFLOW_ARG="--build-arg GITHUB_WORKFLOW=${GITHUB_WORKFLOW}" +fi + # information for user before we build things echo "# ----------------------------------------------------------------------" echo "# Building PEcAn" @@ -116,7 +121,7 @@ echo "# ----------------------------------------------------------------------" if [ "${DEPEND}" == "build" ]; then ${DEBUG} docker build \ --pull \ - --build-arg R_VERSION=${R_VERSION} \ + --build-arg R_VERSION=${R_VERSION} ${GITHUB_WORKFLOW_ARG} \ --tag pecan/depends:${IMAGE_VERSION} \ docker/depends else @@ -143,7 +148,7 @@ for x in base web docs; do ${DEBUG} docker build \ --tag pecan/$x:${IMAGE_VERSION} \ --build-arg FROM_IMAGE="${FROM_IMAGE:-depends}" \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ --build-arg PECAN_VERSION="${VERSION}" \ --build-arg PECAN_GIT_BRANCH="${PECAN_GIT_BRANCH}" \ --build-arg PECAN_GIT_CHECKSUM="${PECAN_GIT_CHECKSUM}" \ @@ -156,7 +161,7 @@ done for x in models executor data thredds monitor rstudio-nginx check; do ${DEBUG} docker build \ --tag pecan/$x:${IMAGE_VERSION} \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ docker/$x done @@ -164,7 +169,7 @@ done for x in dbsync; do ${DEBUG} docker build \ --tag pecan/shiny-$x:${IMAGE_VERSION} \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ shiny/$x done @@ -177,7 +182,7 @@ for version in BASGRA_N_v1.0; do ${DEBUG} docker build \ --tag pecan/model-basgra-$(echo $version | tr '[A-Z]' '[a-z]'):${IMAGE_VERSION} \ --build-arg MODEL_VERSION="${version}" \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ models/basgra done @@ -186,7 +191,7 @@ for version in 0.95; do ${DEBUG} docker build \ --tag pecan/model-biocro-${version}:${IMAGE_VERSION} \ --build-arg MODEL_VERSION="${version}" \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ models/biocro done @@ -195,7 +200,7 @@ for version in 2.2.0; do ${DEBUG} docker build \ --tag pecan/model-ed2-${version}:${IMAGE_VERSION} \ --build-arg MODEL_VERSION="${version}" \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ --build-arg BINARY_VERSION="2.2" \ models/ed done @@ -205,7 +210,7 @@ for version in git; do ${DEBUG} docker build \ --tag pecan/model-maespa-${version}:${IMAGE_VERSION} \ --build-arg MODEL_VERSION="${version}" \ - --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \ + --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \ models/maespa done @@ -214,7 +219,7 @@ for version in git r136; do ${DEBUG} 
docker build \
         --tag pecan/model-sipnet-${version}:${IMAGE_VERSION} \
         --build-arg MODEL_VERSION="${version}" \
-        --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \
+        --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \
         models/sipnet
 done
 
@@ -222,11 +227,11 @@ done
 # PEcAn Apps
 # --------------------------------------------------------------------------------
 
-# build API
+# build apps
 for x in api; do
     ${DEBUG} docker build \
         --tag pecan/$x:${IMAGE_VERSION} \
-        --build-arg IMAGE_VERSION="${IMAGE_VERSION}" \
+        --build-arg IMAGE_VERSION="${IMAGE_VERSION}" ${GITHUB_WORKFLOW_ARG} \
         --build-arg PECAN_VERSION="${VERSION}" \
        --build-arg PECAN_GIT_BRANCH="${PECAN_GIT_BRANCH}" \
        --build-arg PECAN_GIT_CHECKSUM="${PECAN_GIT_CHECKSUM}" \

From 5bf35ae93940cb7d12d06ed76cb8684e32c3af87 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 9 Feb 2021 17:23:42 -0600
Subject: [PATCH 1775/2289] adding back some deleted comments.

---
 .../05_developer_workflows/04-testing.Rmd     | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd
index 5143aa39519..68ccc3a399e 100755
--- a/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd
+++ b/book_source/02_demos_tutorials_workflows/05_developer_workflows/04-testing.Rmd
@@ -130,12 +130,24 @@ All our GitHub Actions builds run in containers using different versions of R
 
 Each build starts by launching a separate clean virtual machine for each R version and performs roughly the following actions on all of them:
 * Compile the source code in the container
+  - Installs all the R packages that are declared as dependencies in any PEcAn package, as computed by `scripts/generate_dependencies.R`.
+  - This will also check to see if any files have been modified during this step
 * Run the tests inside the container, and check to see if they all pass
   - This will also check to see if any files have been modified during this step
 * Run the doxygen command inside the container
   - This will also check to see if any files have been modified during this step
 * Run the check command inside the container, and check if there are any new warnings and/or errors
+  - Runs package unit tests (the same ones you run locally with `make test` or `devtools::test(pkgname)`).
+  - As discussed in [Unit testing](#developer-testing-unit), these tests should run quickly and test individual components in relative isolation.
+  - Any test that calls the `skip_on_ci` function will be skipped. This is useful for tests that need to run for a very long time (e.g. large data product downloads) or require resources that aren't available on GitHub Actions (e.g. specific models), but be sure to run these tests locally before pushing your code!
   - This will also check to see if any files have been modified during this step
+  - Any ERROR in the check output will stop the build immediately.
+  - If there are no ERRORs, any WARNINGs or NOTEs are compared against a stored historic check result in `/tests/Rcheck_reference.log`. If the package has no stored reference result, all WARNINGs and NOTEs are considered newly added and reported as build failures.
+  - If all messages from the current build were also present in the reference result, the check passes. If any messages are newly added, a build failure is reported.
+  - Each line of the check log is considered a separate message, and the test requires exact matching, so a change from `Undocumented arguments in documentation object 'foo': 'x'` to `Undocumented arguments in documentation object 'foo': 'x', 'y'` will be counted as a new warning... and you should fix both of them while you're at it!
+  - The idea here is to enforce good coding practice and catch likely errors in all new code while recognizing that we have a lot of legacy code whose warnings need to be fixed as we have time rather than all at once.
+  - As we fix historic warnings, we will revoke their grandfathered status by removing them from the stored check results, so that they will break the build if they reappear.
+  - If your PR reports a failure in pre-existing code that you think ought to be grandfathered, please fix it as part of your PR anyway. It's frustrating to see tests complain about code you didn't touch, but the failures all need to be cleaned up eventually and it's likely easier to fix the error than to figure out how to re-ignore it.
 * Run a simple integration test using the SIPNET model
 * Create the Docker images
   - Once your PR is merged, the workflow will push them to DockerHub and the GitHub Container Registry.

From a38385a8fa80223a20e6b971b48f07c01209f3f5 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 9 Feb 2021 17:25:22 -0600
Subject: [PATCH 1776/2289] remove 3.6.3 add 4.0.3

---
 .github/workflows/ci.yml      | 1 +
 .github/workflows/depends.yml | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 6259738fb3d..000b27e3192 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -31,6 +31,7 @@ jobs:
       matrix:
         R:
           - "4.0.2"
+          - "4.0.3"
 
     env:
       NCPUS: 2
diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml
index cb32b3f9fb0..5caaf3e6f7d 100644
--- a/.github/workflows/depends.yml
+++ b/.github/workflows/depends.yml
@@ -25,8 +25,8 @@ jobs:
       fail-fast: false
       matrix:
         R:
-          - 3.6.3
-          - 4.0.2
+          - "4.0.2"
+          - "4.0.3"
 
     steps:
     - uses: actions/checkout@v2

From 76efce4e372cb4b4d114f695c553f0c349395c8b Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 9 Feb 2021 17:28:48 -0600
Subject: [PATCH 1777/2289] missing matrix

---
 .github/workflows/ci.yml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 000b27e3192..2b8e5cf6e78 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -77,6 +77,7 @@ jobs:
       matrix:
         R:
           - "4.0.2"
+          - "4.0.3"
 
     services:
       postgres:
@@ -129,6 +130,7 @@
       matrix:
         R:
           - "4.0.2"
+          - "4.0.3"
 
     env:
       NCPUS: 2
@@ -174,6 +176,7 @@
       matrix:
         R:
           - "4.0.2"
+          - "4.0.3"
 
     services:
       postgres:

From 751c217dfffa62d1a808e3f1ea7c52ba8d7e95e4 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Wed, 10 Feb 2021 00:08:26 -0600
Subject: [PATCH 1778/2289] fix depends workflow

---
 .github/workflows/depends.yml | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml
index 5caaf3e6f7d..2008676a3b7 100644
--- a/.github/workflows/depends.yml
+++ b/.github/workflows/depends.yml
@@ -13,12 +13,11 @@ on:
 
 env:
   # official supported version of R
   SUPPORTED: 4.0.2
-  MASTER_REPO: PecanProject/pecan
   DOCKERHUB_ORG: pecan
 
 jobs:
   depends:
-    if: github.repository == env.MASTER_REPO
+    if: github.repository == 'PecanProject/pecan'
     runs-on: ubuntu-latest
 
     strategy:

From a35cd807a61fd2bcb19f21ec4def4ecfd966183c Mon Sep 17 00:00:00 2001
From: kzarada
Date: Wed, 10 Feb 2021 09:06:18
-0500 Subject: [PATCH 1779/2289] adding agb.pft --- models/ed/R/read_restart.ED2.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/ed/R/read_restart.ED2.R b/models/ed/R/read_restart.ED2.R index 146feb4d3f6..52a27505907 100644 --- a/models/ed/R/read_restart.ED2.R +++ b/models/ed/R/read_restart.ED2.R @@ -59,7 +59,7 @@ read_restart.ED2 <- function(outdir, perpft <- TRUE forecast_tmp <- switch(perpft+1, sum(histout$AGB, na.rm = TRUE), histout$AGB) # kgC/m2 forecast[[length(forecast)+1]] <- udunits2::ud.convert(forecast_tmp, "kg/m^2", "Mg/ha") # conv to MgC/ha - names(forecast)[length(forecast)] <- switch(perpft+1, "AGB", paste0("AGB.", pft_names)) + names(forecast)[[length(forecast)]] <- switch(perpft+1, "AGB", paste0("AGB.", pft_names)) } From 5bccee8f2a8c1cfd6e13264f67a04657f63838e2 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Wed, 17 Feb 2021 18:24:09 +0530 Subject: [PATCH 1780/2289] Updated load_data.R removed library functions by appropriately providing namespaces to functions. --- modules/benchmark/R/load_data.R | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-) diff --git a/modules/benchmark/R/load_data.R b/modules/benchmark/R/load_data.R index b3c882e41b4..28c893be5cf 100644 --- a/modules/benchmark/R/load_data.R +++ b/modules/benchmark/R/load_data.R @@ -9,6 +9,7 @@ ##' @author Betsy Cowdery, Istem Fer, Joshua Mantooth ##' Generic function to convert input files containing observational data to ##' a common PEcAn format. +#' @importFrom magrittr %>% load_data <- function(data.path, format, start_year = NA, end_year = NA, site = NA, vars.used.index=NULL, ...) { @@ -30,12 +31,7 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = vars.used.index <- setdiff(seq_along(format$vars$variable_id), format$time.row) } - library(PEcAn.utils) - library(PEcAn.benchmark) - library(lubridate) - library(udunits2) - library(dplyr) - + # Determine the function that should be used to load the data mimetype <- gsub("-", "_", format$mimetype) fcn1 <- paste0("load_", format$file_name) @@ -47,10 +43,10 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = } else if (!exists(fcn1) & !exists(fcn2) & require(bd)) { #To Do: call to DAP to see if conversion to csv is possible #Brown Dog API call through BDFiddle, requires username and password - key <- get_key("https://bd-api.ncsa.illinois.edu",username,password) - token <- get_token("https://bd-api.ncsa.illinois.edu",key) + key <- BrownDog::get_key("https://bd-api.ncsa.illinois.edu",username,password) + token <- BrownDog::get_token("https://bd-api.ncsa.illinois.edu",key) #output_path = where are we putting converted file? 
- converted.data.path <- convert_file(url = "https://bd-api.ncsa.illinois.edu", input_filename = data.path, + converted.data.path <- BrownDog::convert_file(url = "https://bd-api.ncsa.illinois.edu", input_filename = data.path, output = "csv", output_path = output_path, token = token) if (is.na(converted.data.path)){ PEcAn.logger::logger.error("Converted file was not returned from Brown Dog") @@ -94,11 +90,11 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = vars_used$pecan_name[i], vars_used$pecan_units[i])) out[col] <- udunits2::ud.convert(as.numeric(x), u1, u2) colnames(out)[col] <- vars_used$pecan_name[i] - } else if (misc.are.convertible(u1, u2)) { + } else if (PEcAn.utils::misc.are.convertible(u1, u2)) { print(sprintf("convert %s %s to %s %s", vars_used$input_name[i], u1, vars_used$pecan_name[i], u2)) - out[col] <- as.vector(misc.convert(x, u1, u2)) # Betsy: Adding this because misc.convert returns vector with attributes original agrument x, which causes problems later + out[col] <- as.vector(PEcAn.utils::misc.convert(x, u1, u2)) # Betsy: Adding this because misc.convert returns vector with attributes original agrument x, which causes problems later colnames(out)[col] <- vars_used$pecan_name[i] } else { PEcAn.logger::logger.warn(paste("Units cannot be converted. Removing variable. please check the units of",vars_used$input_name[i])) @@ -115,7 +111,7 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = names(out)[col] <- format$vars$pecan_name[time.row] # Need a much more spohisticated approach to converting into time format. - y <- dplyr::select(out, one_of(format$vars$pecan_name[time.row])) + y <- dplyr::select(out, tidyselect::one_of(format$vars$pecan_name[time.row])) if(!is.null(site$time_zone)){ tz = site$time_zone From 3cfbe7dc14901aaf28801a9fe67f024a4ed1cc73 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 17 Feb 2021 13:31:51 +0000 Subject: [PATCH 1781/2289] automated documentation update --- modules/benchmark/NAMESPACE | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/benchmark/NAMESPACE b/modules/benchmark/NAMESPACE index cc43c3d919b..b36226c444d 100644 --- a/modules/benchmark/NAMESPACE +++ b/modules/benchmark/NAMESPACE @@ -44,3 +44,4 @@ importFrom(ggplot2,geom_path) importFrom(ggplot2,geom_point) importFrom(ggplot2,ggplot) importFrom(ggplot2,labs) +importFrom(magrittr,"%>%") From eedb7ba7eed162341ab0b8ff31c3e1046e8c1f41 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 08:59:29 -0500 Subject: [PATCH 1782/2289] updating GEFS to not be gapfilled like the previous downscaling function --- .../inst/WillowCreek/gefs.sipnet.template.xml | 125 ------------------ modules/data.atmosphere/R/met.process.R | 8 +- 2 files changed, 2 insertions(+), 131 deletions(-) delete mode 100755 modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml diff --git a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml b/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml deleted file mode 100755 index c9dac66dc28..00000000000 --- a/modules/assim.sequential/inst/WillowCreek/gefs.sipnet.template.xml +++ /dev/null @@ -1,125 +0,0 @@ - - - - FALSE - TRUE - - 1000000040 - 1000013298 - - - - NEE - MgC/ha/yr - -9999 - 9999 - - - Qle - MW/m2 - -9999 - 9999 - - - TotSoilCarb - KgC/m^2 - 0 - 9999 - - - SoilMoistFrac - m/m - 0 - 1 - - - Litter - gC/m^2 - 0 - 9999 - - - SWE - kg/m^2 - 0 - 9999 - - - litter_carbon_content - kgC/m2 - 0 - 9999 - - - year - 2017-01-01 - 2018-11-05 - 1 - - - - 
-1 - - 2019/01/04 10:19:35 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - TRUE - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - temperate.deciduous.ALL - - 1 - - 1000016041 - - - - 3000 - FALSE - - - 200 - NEE - 2018 - 2018 - - - uniform - - - sampling - - - - - 10 - /fs/data3/kzarada/US_WCr/data/WillowCreek.param - - - - 676 - 2018-08-26 - 2018-09-23 - - - - NOAA_GEFS - SIPNET - - - 2018-12-05 - 2018-12-20 - - - localhost - - \ No newline at end of file diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index 871f8494084..333b337bc60 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -201,17 +201,13 @@ met.process <- function(site, input_met, start_date, end_date, model, dbparms=dbparms ) - if (met %in% c("CRUNCEP", "GFDL", "NOAA_GEFS_downscale", "MERRA")) { + if (met %in% c("CRUNCEP", "GFDL", "NOAA_GEFS", "MERRA")) { ready.id <- raw.id # input_met$id overwrites ready.id below, needs to be populated here input_met$id <- raw.id stage$met2cf <- FALSE stage$standardize <- FALSE - } else if (met %in% c("NOAA_GEFS")) { # Can sometimes have missing values, so the gapfilling step is required. - cf.id <- raw.id - input_met$id <-raw.id - stage$met2cf <- FALSE - } + } }else if (stage$local){ # In parallel to download met module this needs to check if the files are already downloaded or not db.file <- PEcAn.DB::dbfile.input.check( From 70046c2cf95180dabb6fbedea0723279e0cba92f Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 09:00:10 -0500 Subject: [PATCH 1783/2289] updating restart to create agb by pft --- models/ed/R/read_restart.ED2.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/models/ed/R/read_restart.ED2.R b/models/ed/R/read_restart.ED2.R index 52a27505907..1c8e64f8af1 100644 --- a/models/ed/R/read_restart.ED2.R +++ b/models/ed/R/read_restart.ED2.R @@ -58,9 +58,9 @@ read_restart.ED2 <- function(outdir, if (var_name == "AGB.pft") { perpft <- TRUE forecast_tmp <- switch(perpft+1, sum(histout$AGB, na.rm = TRUE), histout$AGB) # kgC/m2 + names(forecast_tmp) <- switch(perpft + 1, "AGB", paste0(pft_names)) forecast[[length(forecast)+1]] <- udunits2::ud.convert(forecast_tmp, "kg/m^2", "Mg/ha") # conv to MgC/ha - names(forecast)[[length(forecast)]] <- switch(perpft+1, "AGB", paste0("AGB.", pft_names)) - + names(forecast)[length(forecast)] <- "AGB.pft" } if (var_name == "TotLivBiom") { From bd18a433f156547e5ff430bca2288b4eb7908b7e Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 09:00:29 -0500 Subject: [PATCH 1784/2289] changes to sda workflow --- .../inst/WillowCreek/SDA_Workflow.R | 183 ++++++++++++------ 1 file changed, 120 insertions(+), 63 deletions(-) diff --git a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R index 885f07fdda0..75b0836ab19 100644 --- a/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R +++ b/modules/assim.sequential/inst/WillowCreek/SDA_Workflow.R @@ -21,7 +21,7 @@ plan(multisession) outputPath <- "/projectnb/dietzelab/kzarada/US_WCr_SDA_output/" nodata <- FALSE #use this to run SDA with no data -restart <- TRUE #flag to start from previous run or not +restart <- FALSE#flag to start from previous run or not days.obs <- 3 #how many of observed data *BY HOURS* to include -- not including today setwd(outputPath) options(warn=-1) @@ -88,7 +88,7 @@ if (length(all.previous.sims) > 0 & !inherits(con, "try-error")) { sda.start <- Sys.Date() - 9 } #to 
manually change start date -#sda.start <- Sys.Date() +sda.start <- Sys.Date() - lubridate::days(15) sda.end <- sda.start + lubridate::days(5) # Finding the right end and start date @@ -286,68 +286,125 @@ if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile) # Do conversions ######### Check for input files and insert paths ############# -# con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) -# -# input_check <- PEcAn.DB::dbfile.input.check( -# siteid=settings$run$site$id %>% as.character(), -# startdate = settings$run$start.date %>% as.Date, -# enddate = settings$run$end.date %>% as.Date, -# parentid = NA, -# mimetype="application/x-netcdf", -# formatname="CF Meteorology", -# con, -# hostname = PEcAn.remote::fqdn(), -# exact.dates = TRUE, -# pattern = "NOAA_GEFS_downscale", -# return.all=TRUE -# ) -# -# clim_check = list() -# for(i in 1:length(input_check$id)){ -# clim_check[[i]] = file.path(PEcAn.DB::dbfile.input.check( -# siteid=settings$run$site$id %>% as.character(), -# startdate = settings$run$start.date %>% as.Date, -# enddate = settings$run$end.date %>% as.Date, -# parentid = input_check$container_id[i], -# mimetype="text/csv", -# formatname="Sipnet.climna", -# con, -# hostname = PEcAn.remote::fqdn(), -# exact.dates = TRUE, -# pattern = "NOAA_GEFS_downscale", -# return.all=TRUE -# )$file_path, PEcAn.DB::dbfile.input.check( -# siteid=settings$run$site$id %>% as.character(), -# startdate = settings$run$start.date %>% as.Date, -# enddate = settings$run$end.date %>% as.Date, -# parentid = input_check$container_id[i], -# mimetype="text/csv", -# formatname="Sipnet.climna", -# con, -# hostname = PEcAn.remote::fqdn(), -# exact.dates = TRUE, -# pattern = "NOAA_GEFS_downscale", -# return.all=TRUE -# )$file_name)} -# -# #If INPUTS already exsits, add id and met path to settings file -# -# if(length(input_check$id) > 0){ -# index_id = list() -# index_path = list() -# for(i in 1:length(input_check$id)){ -# index_id[[i]] = as.character(dbfile.id(type = "Input", -# file = file.path(input_check$file_path, -# input_check$file_name)[i], con = con))#get ids as list -# -# }#end i loop for making lists -# names(index_id) = sprintf("id%s",seq(1:length(input_check$id))) #rename list -# names(clim_check) = sprintf("path%s",seq(1:length(input_check$id))) -# -# settings$run$inputs$met$id = index_id -# settings$run$inputs$met$path = clim_check -# } + con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + +#checks for .nc files for NOAA GEFS + input_check <- PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = settings$run$end.date %>% as.Date, + parentid = NA, + mimetype="application/x-netcdf", + formatname="CF Meteorology", + con, + hostname = PEcAn.remote::fqdn(), + exact.dates = TRUE, + pattern = "NOAA_GEFS", + return.all=TRUE + ) + +#new NOAA GEFS files were going through gapfilled so the parent +#files of the clim files were the CF_gapfilled nc files + input_check_2 <- list() + + for(i in 1:length(input_check$id)){ + input_check_2[i] <- PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = settings$run$end.date %>% as.Date, + parentid = input_check$container_id[i], + mimetype="application/x-netcdf", + formatname="CF Meteorology", + con, + hostname = PEcAn.remote::fqdn(), + exact.dates = TRUE, + pattern = "NOAA_GEFS", + return.all=TRUE + )$container_id + } + +#this if 
statement deals with the NOAA GEFS files that +#were gapfilled, and allows for us to find the clim files of the ones that weren't +#the problem here is that when we get GEFS nc files -> clim files, +#the GEFS nc files are the parent ids for finding the clim files +#but with GEFS nc -> gapfilled nc -> clim, the gapfilled files are the parent ids for the clim files +if(length(input_check_2)>1){ +input_check_2 = unlist(input_check_2) + +clim_check = list() +for(i in 1:length(input_check$id)){ + clim_check[[i]] = file.path(PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = settings$run$end.date %>% as.Date, + parentid = input_check_2[i], + mimetype="text/csv", + formatname="Sipnet.climna", + con, + hostname = PEcAn.remote::fqdn(), + exact.dates = TRUE, + pattern = "NOAA_GEFS", + return.all=TRUE + )$file_path, PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = settings$run$end.date %>% as.Date, + parentid = input_check_2[i], + mimetype="text/csv", + formatname="Sipnet.climna", + con, + hostname = PEcAn.remote::fqdn(), + exact.dates = TRUE, + pattern = "NOAA_GEFS", + return.all=TRUE + )$file_name)}}else{ + for(i in 1:length(input_check$id)){ + clim_check[[i]] = file.path(PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = settings$run$end.date %>% as.Date, + parentid = input_check$container_id[i], + mimetype="text/csv", + formatname="Sipnet.climna", + con, + hostname = PEcAn.remote::fqdn(), + exact.dates = TRUE, + pattern = "NOAA_GEFS", + return.all=TRUE + )$file_path, PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = settings$run$end.date %>% as.Date, + parentid = input_check$container_id[i], + mimetype="text/csv", + formatname="Sipnet.climna", + con, + hostname = PEcAn.remote::fqdn(), + exact.dates = TRUE, + pattern = "NOAA_GEFS", + return.all=TRUE + )$file_name)} + }#end if/else look for making clim file paths + + #If INPUTS already exists, add id and met path to settings file + + if(length(input_check$id) > 0){ + index_id = list() + index_path = list() + for(i in 1:length(input_check$id)){ + index_id[[i]] = as.character(dbfile.id(type = "Input", + file = file.path(input_check$file_path, + input_check$file_name)[i], con = con))#get ids as list + + }#end i loop for making lists + names(index_id) = sprintf("id%s",seq(1:length(input_check$id))) #rename list + names(clim_check) = sprintf("path%s",seq(1:length(input_check$id))) + + settings$run$inputs$met$id = index_id + settings$run$inputs$met$path = clim_check + } +#still want to run this to get the IC files settings <- PEcAn.workflow::do_conversions(settings) #end if loop for existing inputs # if(is_empty(settings$run$inputs$met$path) & length(clim_check)>0){ From dc18c9d35cf01a783377e8bd0b790040a1ec3c31 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 10:21:57 -0500 Subject: [PATCH 1785/2289] addressing check fails --- modules/assim.sequential/NAMESPACE | 1 + modules/assim.sequential/R/sda_plotting.R | 3 +++ 2 files changed, 4 insertions(+) diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE index 539d2678220..e0d48f71ef4 100644 --- a/modules/assim.sequential/NAMESPACE +++ b/modules/assim.sequential/NAMESPACE @@ -47,6 +47,7 @@ 
export(simple.local) export(tobit.model) export(tobit2space.model) export(y_star_create) +import(PEcAn.photosynthesis) import(furrr) import(lubridate) import(nimble) diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index f53ed737aad..066dbe2876f 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -28,6 +28,7 @@ generate_colors_sda <-function(){ ##' @param FORECAST dataframe of state variables for each ensemble ##' @param ANALYSIS vector of mean of state variable after analysis ##' @param plot.title character giving the title for post visualization ggplots +##' @import PEcAn.photosynthesis ##' @export interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS){ @@ -118,6 +119,7 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob } ##' @rdname interactive.plotting.sda +##' @import PEcAn.photosynthesis ##' @export postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS){ @@ -214,6 +216,7 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov } ##' @rdname interactive.plotting.sda +##' @import PEcAn.photosynthesis ##' @export postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS){ From c15e841123373af5338fd38cbbcb12b9384aa522 Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 11:29:06 -0500 Subject: [PATCH 1786/2289] addressing checks --- modules/assim.sequential/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index 9789dbb29df..bb627e8a1c7 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -18,6 +18,7 @@ Imports: ncdf4, PEcAn.DB, PEcAn.logger, + PEcAn.photosynthesis, PEcAn.remote, PEcAn.settings, PEcAn.utils, From 115f9893525445ce9c8b64460e2f98753147082e Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 11:38:20 -0500 Subject: [PATCH 1787/2289] changes for checks --- modules/assim.sequential/DESCRIPTION | 1 - modules/assim.sequential/R/sda_plotting.R | 16 ++++++++-------- 2 files changed, 8 insertions(+), 9 deletions(-) diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION index bb627e8a1c7..9789dbb29df 100644 --- a/modules/assim.sequential/DESCRIPTION +++ b/modules/assim.sequential/DESCRIPTION @@ -18,7 +18,6 @@ Imports: ncdf4, PEcAn.DB, PEcAn.logger, - PEcAn.photosynthesis, PEcAn.remote, PEcAn.settings, PEcAn.utils, diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 066dbe2876f..54b96126c1a 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -88,7 +88,7 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob xlab = "Year", ylab = ylab.names[grep(colnames(X)[i], var.names)], main = colnames(X)[i]) - PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), + ciEnvelope(as.Date(obs.times[t1:t]), as.numeric(Ybar[, i]) - as.numeric(YCI[, i]) * 1.96, as.numeric(Ybar[, i]) + as.numeric(YCI[, i]) * 1.96, col = alphagreen) @@ -108,11 +108,11 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob } # forecast - PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') + 
ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') lines(as.Date(obs.times[t1:t]), Xbar, col = "darkblue", type = "l", lwd = 2) # analysis - PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) + ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) lines(as.Date(obs.times[t1:t]), Xa, col = "black", lty = 2, lwd = 2) #legend('topright',c('Forecast','Data','Analysis'),col=c(alphablue,alphagreen,alphapink),lty=1,lwd=5) } @@ -190,7 +190,7 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov # observation / data if (i<=ncol(X)) { # - PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), + ciEnvelope(as.Date(obs.times[t1:t]), as.numeric(Ybar[, i]) - as.numeric(YCI[, i]) * 1.96, as.numeric(Ybar[, i]) + as.numeric(YCI[, i]) * 1.96, col = alphagreen) @@ -200,11 +200,11 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov } # forecast - PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') #alphablue + ciEnvelope(as.Date(obs.times[t1:t]), Xci[, 1], Xci[, 2], col = alphablue) #col='lightblue') #alphablue lines(as.Date(obs.times[t1:t]), Xbar, col = "darkblue", type = "l", lwd = 2) #"darkblue" # analysis - PEcAn.photosynthesis::ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) #alphapink + ciEnvelope(as.Date(obs.times[t1:t]), XaCI[, 1], XaCI[, 2], col = alphapink) #alphapink lines(as.Date(obs.times[t1:t]), Xa, col = "black", lty = 2, lwd = 2) #"black" legend('topright',c('Forecast','Data','Analysis'),col=c(alphablue,alphagreen,alphapink),lty=1,lwd=5) @@ -252,7 +252,7 @@ postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, o xlab = "Time", ylab = "Error", main = paste(colnames(X)[i], " Error = Forecast - Data")) - PEcAn.photosynthesis::ciEnvelope(rev(t1:t), + ciEnvelope(rev(t1:t), rev(Xci[, 1] - unlist(Ybar[, i])), rev(Xci[, 2] - unlist(Ybar[, i])), col = alphabrown) @@ -271,7 +271,7 @@ postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, o xlab = "Time", ylab = "Update", main = paste(colnames(X)[i], "Update = Forecast - Analysis")) - PEcAn.photosynthesis::ciEnvelope(rev(t1:t), + ciEnvelope(rev(t1:t), rev(Xbar - XaCI[, 1]), rev(Xbar - XaCI[, 2]), col = alphapurple) From f1f79dc8f4c3b8cc698358a6b7b1da0f5fcc6c4f Mon Sep 17 00:00:00 2001 From: kzarada Date: Thu, 18 Feb 2021 12:38:25 -0500 Subject: [PATCH 1788/2289] addressing check fails --- modules/assim.sequential/NAMESPACE | 1 - modules/assim.sequential/R/sda_plotting.R | 3 --- 2 files changed, 4 deletions(-) diff --git a/modules/assim.sequential/NAMESPACE b/modules/assim.sequential/NAMESPACE index e0d48f71ef4..539d2678220 100644 --- a/modules/assim.sequential/NAMESPACE +++ b/modules/assim.sequential/NAMESPACE @@ -47,7 +47,6 @@ export(simple.local) export(tobit.model) export(tobit2space.model) export(y_star_create) -import(PEcAn.photosynthesis) import(furrr) import(lubridate) import(nimble) diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index 54b96126c1a..dfc75cf0861 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -28,7 +28,6 @@ generate_colors_sda <-function(){ ##' @param FORECAST dataframe of state variables for each ensemble ##' @param ANALYSIS vector of mean of state variable after analysis ##' @param plot.title character giving the 
title for post visualization ggplots -##' @import PEcAn.photosynthesis ##' @export interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS){ @@ -119,7 +118,6 @@ interactive.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, ob } ##' @rdname interactive.plotting.sda -##' @import PEcAn.photosynthesis ##' @export postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS){ @@ -216,7 +214,6 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov } ##' @rdname interactive.plotting.sda -##' @import PEcAn.photosynthesis ##' @export postana.bias.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov, obs, X, FORECAST, ANALYSIS){ From 1e49b687dda1eec5ce5306f1d603856db6469e78 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 05:20:09 +0530 Subject: [PATCH 1789/2289] Updated base/utils #2758 fixed tidyverse notes for the remaining variables in base/utils convert.input :'id' summarize.result: 'n' , 'statname' --- base/utils/R/utils.R | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 95f4302e331..f919d977af0 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -27,7 +27,7 @@ ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { - nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] +nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] dims <- list() if (nrow(nc_var) == 0) { @@ -207,21 +207,21 @@ zero.bounded.density <- function(x, bw = "SJ", n = 1001) { ##' @author David LeBauer, Alexey Shiklomanov summarize.result <- function(result) { ans1 <- result %>% - dplyr::filter(n == 1) %>% + dplyr::filter(.data$n == 1) %>% dplyr::group_by(.data$citation_id, .data$site_id, .data$trt_id, .data$control, .data$greenhouse, .data$date, .data$time, .data$cultivar_id, .data$specie_id, .data$name, .data$treatment_id) %>% dplyr::summarize( # stat must be computed first, before n and mean - statname = dplyr::if_else(length(n) == 1, "none", "SE"), + .data$statname = dplyr::if_else(length(n) == 1, "none", "SE"), stat = stats::sd(mean) / sqrt(length(n)), - n = length(n), + .data$n = length(n), mean = mean(mean) ) %>% dplyr::ungroup() ans2 <- result %>% - dplyr::filter(n != 1) %>% + dplyr::filter(.data$n != 1) %>% # ANS: Silence factor to character conversion warning - dplyr::mutate(statname = as.character(statname)) + dplyr::mutate(.data$statname = as.character(.data$statname)) if (nrow(ans2) > 0) { dplyr::bind_rows(ans1, ans2) } else { From 13a02a127ef9e32ecebfee9b5e7b244bdcfba635 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 05:56:59 +0530 Subject: [PATCH 1790/2289] updated Load_data.R replaced require(bd) with requireNamespace(bd, quietly = TRUE ) in line 43 --- modules/benchmark/R/load_data.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/benchmark/R/load_data.R b/modules/benchmark/R/load_data.R index 28c893be5cf..5ba4193165d 100644 --- a/modules/benchmark/R/load_data.R +++ b/modules/benchmark/R/load_data.R @@ -40,7 +40,7 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = fcn <- match.fun(fcn1) } else if (exists(fcn2)) { fcn <- 
match.fun(fcn2) - } else if (!exists(fcn1) & !exists(fcn2) & require(bd)) { + } else if (!exists(fcn1) & !exists(fcn2) & requireNamespace(bd, quietly = TRUE)) { #To Do: call to DAP to see if conversion to csv is possible #Brown Dog API call through BDFiddle, requires username and password key <- BrownDog::get_key("https://bd-api.ncsa.illinois.edu",username,password) From e9c1c1d0773e4f46128315975305965abce32988 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 05:59:54 +0530 Subject: [PATCH 1791/2289] updated modules/benchmark/DESCRIPTION added 'bd' in the suggests section of modules/benchmark/DESCRIPTION --- modules/benchmark/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 1d11a74e4b7..a3ba20df765 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -35,6 +35,7 @@ Imports: Suggests: PEcAn.data.land, testthat (>= 2.0.0) + bd License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes From 717c85dfd3bb59c6e7a10350c366f625b9b33b42 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 11:44:34 +0530 Subject: [PATCH 1792/2289] update utils.R removed 'data' verb from statname and added '.data$' in length(n) - line 215 --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index f919d977af0..69371b3321b 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -212,7 +212,7 @@ summarize.result <- function(result) { .data$control, .data$greenhouse, .data$date, .data$time, .data$cultivar_id, .data$specie_id, .data$name, .data$treatment_id) %>% dplyr::summarize( # stat must be computed first, before n and mean - .data$statname = dplyr::if_else(length(n) == 1, "none", "SE"), + statname = dplyr::if_else(length(.data$n) == 1, "none", "SE"), stat = stats::sd(mean) / sqrt(length(n)), .data$n = length(n), mean = mean(mean) From abf27632fb68ef04b3d1b214e445a75fdd6fb87f Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 11:57:44 +0530 Subject: [PATCH 1793/2289] updated base/db/R/clone_pft.R fixed tidyverse notes "no visible binding for global variable" for "name, id, created_at, updated_at, pft_id" in clone_pft.R --- base/db/R/clone_pft.R | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/base/db/R/clone_pft.R b/base/db/R/clone_pft.R index 0376cadaf6d..ccec2494d69 100644 --- a/base/db/R/clone_pft.R +++ b/base/db/R/clone_pft.R @@ -2,8 +2,8 @@ ##' ##' Creates a new pft that is a duplicate of an existing pft, ##' including relationships with priors, species, and cultivars (if any) of the existing pft. -##' This function mimics the 'clone pft' button in the PFTs record view page in the -##' BETYdb web interface for PFTs that aggregate >=1 species, but adds the ability to +##' This function mimics the 'clone pft' button in the PFTs record view page in the +##' BETYdb web interface for PFTs that aggregate >=1 species, but adds the ability to ##' clone the cultivar associations. 
##' ##' @param parent.pft.name name of PFT to duplicate @@ -35,7 +35,7 @@ clone_pft <- function(parent.pft.name, on.exit(db.close(con), add = TRUE) parent.pft <- (dplyr::tbl(con, "pfts") - %>% dplyr::filter(name == !!parent.pft.name) + %>% dplyr::filter(.data$name == !!parent.pft.name) %>% dplyr::collect()) if (nrow(parent.pft) == 0) { @@ -43,9 +43,9 @@ clone_pft <- function(parent.pft.name, } new.pft <- (parent.pft - %>% dplyr::select(-id, -created_at, -updated_at) + %>% dplyr::select(-.data$id, -.data$created_at, -.data$updated_at) %>% dplyr::mutate( - name = !!new.pft.name, + .data$name = !!new.pft.name, definition = !!new.pft.definition, parent_id = !!parent.pft$id)) @@ -58,8 +58,8 @@ clone_pft <- function(parent.pft.name, row.names = FALSE) new.pft$id <- (dplyr::tbl(con, "pfts") - %>% dplyr::filter(name == !!new.pft.name) - %>% dplyr::pull(id)) + %>% dplyr::filter(.data$name == !!new.pft.name) + %>% dplyr::pull(.data$id)) # PFT members are stored in different tables depending on pft_type. @@ -72,8 +72,8 @@ clone_pft <- function(parent.pft.name, member_tbl <- "pfts_species" } new_members <- (dplyr::tbl(con, member_tbl) - %>% dplyr::filter(pft_id == !!parent.pft$id) - %>% dplyr::mutate(pft_id = !!new.pft$id) + %>% dplyr::filter(.data$pft_id == !!parent.pft$id) + %>% dplyr::mutate(.data$pft_id = !!new.pft$id) %>% dplyr::distinct() %>% dplyr::collect()) @@ -87,8 +87,8 @@ clone_pft <- function(parent.pft.name, } new_priors <- (dplyr::tbl(con, "pfts_priors") - %>% dplyr::filter(pft_id == !!parent.pft$id) - %>% dplyr::mutate(pft_id = !!new.pft$id) + %>% dplyr::filter(.data$pft_id == !!parent.pft$id) + %>% dplyr::mutate(.data$pft_id = !!new.pft$id) %>% dplyr::distinct() %>% dplyr::collect()) From 6f6f1c1e128acc3e23ae37e8a0bacbe3857e81c6 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 12:06:54 +0530 Subject: [PATCH 1794/2289] updated dbfiles.R fixed tidyverse notes "no visible binding for global variable" for "container_type, container_id, machine_id, updated_at" in dbfiles.R --- base/db/R/dbfiles.R | 408 ++++++++++++++++++++++---------------------- 1 file changed, 204 insertions(+), 204 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index a287a23702c..2a2ec9f5765 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -43,10 +43,10 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, parentid=NA, con, hostname=PEcAn.remote::fqdn(), allow.conflicting.dates=FALSE, ens=FALSE) { name <- basename(in.path) hostname <- default_hostname(hostname) - + # find mimetype, if it does not exist, it will create one mimetypeid <- get.id("mimetypes", "type_string", mimetype, con, create = TRUE) - + # find appropriate format, create if it does not exist formatid <- get.id( table = "formats", @@ -56,8 +56,8 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, create = TRUE, dates = TRUE ) - - + + # setup parent part of query if specified if (is.na(parentid)) { parent <- "" @@ -75,7 +75,7 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, ), con = con ) - + inputid <- NULL if (nrow(existing.input) > 0) { # Convert dates to Date objects and strip all time zones (DB values are timezone-free) @@ -83,17 +83,17 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, if(!is.null(enddate)){enddate <- lubridate::force_tz(time = lubridate::as_date(enddate), tzone = 'UTC')} existing.input$start_date <- 
lubridate::force_tz(time = lubridate::as_date(existing.input$start_date), tzone = 'UTC') existing.input$end_date <- lubridate::force_tz(time = lubridate::as_date(existing.input$end_date), tzone = 'UTC') - + for (i in seq_len(nrow(existing.input))) { existing.input.i <- existing.input[i,] if (is.na(existing.input.i$start_date) && is.null(startdate)) { inputid <- existing.input.i[['id']] }else if(existing.input.i$start_date == startdate && existing.input.i$end_date == enddate){ inputid <- existing.input.i[['id']] - + } } - + if (is.null(inputid) && !allow.conflicting.dates) { print(existing.input, digits = 10) PEcAn.logger::logger.error(paste0( @@ -107,17 +107,17 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, return(NULL) } } - + if (is.null(inputid)) { # Either there was no existing input, or there was but the dates don't match and # allow.conflicting.dates==TRUE. So, insert new input record. # adding is.null(startdate) to add inputs like soil that don't have dates - if(parent == "" && is.null(startdate)) { + if(parent == "" && is.null(startdate)) { cmd <- paste0("INSERT INTO inputs ", "(site_id, format_id, name) VALUES (", - siteid, ", ", formatid, ", '", name, + siteid, ", ", formatid, ", '", name, "'",") RETURNING id") - } else if(parent == "" && !is.null(startdate)) { + } else if(parent == "" && !is.null(startdate)) { cmd <- paste0("INSERT INTO inputs ", "(site_id, format_id, start_date, end_date, name) VALUES (", siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, @@ -142,7 +142,7 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, " AND format_id=", formatid), con = con )$id - + }else{inputid <- db.query( query = paste0( "SELECT id FROM inputs WHERE site_id=", siteid, @@ -156,28 +156,28 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, }else{ inserted.id <- data.frame(id=inputid) # in the case that inputid is not null then this means that there was an exsiting input } - + if (length(inputid) > 1 && !ens) { - PEcAn.logger::logger.warn(paste0("Multiple input files found matching parameters format_id = ", formatid, - ", startdate = ", startdate, ", enddate = ", enddate, ", parent = ", parent, ". Selecting the", - " last input file. This is normal for when an entire ensemble is inserted iteratively, but ", + PEcAn.logger::logger.warn(paste0("Multiple input files found matching parameters format_id = ", formatid, + ", startdate = ", startdate, ", enddate = ", enddate, ", parent = ", parent, ". Selecting the", + " last input file. This is normal for when an entire ensemble is inserted iteratively, but ", " is likely an error otherwise.")) inputid = inputid[length(inputid)] } else if (ens){ inputid <- inserted.id$id } - + # find appropriate dbfile, if not in database, insert new dbfile dbfile <- dbfile.check(type = 'Input', container.id = inputid, con = con, hostname = hostname) - + if (nrow(dbfile) > 0 & !ens) { - + if (nrow(dbfile) > 1) { print(dbfile) PEcAn.logger::logger.warn("Multiple dbfiles found. 
Using last.") dbfile <- dbfile[nrow(dbfile),] } - + if (dbfile$file_name != in.prefix || dbfile$file_path != in.path && !ens) { print(dbfile, digits = 10) PEcAn.logger::logger.error(paste0( @@ -189,13 +189,13 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, } else { dbfileid <- dbfile[['id']] } - + } else { #insert dbfile & return dbfile id dbfileid <- dbfile.insert(in.path = in.path, in.prefix = in.prefix, type = 'Input', id = inputid, con = con, reuse = TRUE, hostname = hostname) } - + invisible(list(input.id = inputid, dbfile.id = dbfileid)) } @@ -225,30 +225,30 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, ##' } dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, formatname, parentid=NA, con, hostname=PEcAn.remote::fqdn(), exact.dates=FALSE, pattern=NULL, return.all=FALSE) { - - + + hostname <- default_hostname(hostname) - + mimetypeid <- get.id(table = 'mimetypes', colnames = 'type_string', values = mimetype, con = con) if (is.null(mimetypeid)) { return(invisible(data.frame())) } - + # find appropriate format formatid <- get.id(table = 'formats', colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), con = con) - + if (is.null(formatid)) { invisible(data.frame()) } - + # setup parent part of query if specified if (is.na(parentid)) { parent <- "" } else { parent <- paste0(" AND parent_id=", parentid) } - + # find appropriate input if (exact.dates) { inputs <- db.query( @@ -271,21 +271,21 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f con = con )#[['id']] } - + if (is.null(inputs) | length(inputs$id) == 0) { return(data.frame()) } else { - + if (!is.null(pattern)) { ## Case where pattern is not NULL inputs <- inputs[grepl(pattern, inputs$name),] } - + ## parent check when NA if (is.na(parentid)) { inputs <- inputs[is.na(inputs$parent_id),] } - + if (length(inputs$id) > 1) { PEcAn.logger::logger.warn("Found multiple matching inputs. Checking for one with associate files on host machine") print(inputs) @@ -294,7 +294,7 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f # for(i in seq_len(ni)){ # dbfile[[i]] <- dbfile.check(type = 'Input', container.id = inputs$id[i], con = con, hostname = hostname, machine.check = TRUE) # } - + dbfile <- dbfile.check( type = 'Input', @@ -304,8 +304,8 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f machine.check = TRUE, return.all = return.all ) - - + + if (nrow(dbfile) == 0) { ## With the possibility of dbfile.check returning nothing, ## as.data.frame ensures a empty data.frame is returned @@ -313,15 +313,15 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f PEcAn.logger::logger.info("File not found on host machine. Returning Valid input with file associated on different machine if possible") return(as.data.frame(dbfile.check(type = 'Input', container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) } - + return(dbfile) } else if (length(inputs$id) == 0) { - + # need this third case here because prent check above can return an empty inputs return(data.frame()) - + }else{ - + PEcAn.logger::logger.warn("Found possible matching input. 
Checking if its associate files are on host machine") print(inputs) dbfile <- dbfile.check( @@ -332,7 +332,7 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f machine.check = TRUE, return.all = return.all ) - + if (nrow(dbfile) == 0) { ## With the possibility of dbfile.check returning nothing, ## as.data.frame ensures an empty data.frame is returned @@ -340,9 +340,9 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f PEcAn.logger::logger.info("File not found on host machine. Returning Valid input with file associated on different machine if possible") return(as.data.frame(dbfile.check(type = 'Input', container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) } - + return(dbfile) - + } } } @@ -368,24 +368,24 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f ##' } dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, hostname=PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) - + # find appropriate pft pftid <- get.id("pfts", "name", pft, con) if (is.null(pftid)) { PEcAn.logger::logger.severe("Could not find pft, could not store file", filename) } - + mimetypeid <- get.id(table = 'mimetypes', colnames = 'type_string', values = mimetype, con = con, create = TRUE) - + # find appropriate format formatid <- get.id(table = "formats", colnames = c('mimetype_id', 'name'), values = c(mimetypeid, formatname), con = con, create = TRUE, dates = TRUE) - + # find appropriate posterior # NOTE: This is defined but not used # posterior_ids <- get.id("posteriors", "pft_id", pftid, con) - + posteriorid_query <- paste0("SELECT id FROM posteriors WHERE pft_id=", pftid, " AND format_id=", formatid) posteriorid <- db.query(query = posteriorid_query, con = con)[['id']] @@ -400,7 +400,7 @@ dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, ho ) posteriorid <- db.query(posteriorid_query, con)[['id']] } - + # NOTE: Modified by Alexey Shiklomanov. 
# I'm not sure how this is supposed to work, but I think it's like this invisible(dbfile.insert(in.path = dirname(filename), in.prefix = basename(filename), type = "Posterior", id = posteriorid, @@ -427,24 +427,24 @@ dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, ho ##' } dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname=PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) - + # find appropriate pft pftid <- get.id(table = "pfts", values = "name", colnames = pft, con = con) if (is.null(pftid)) { invisible(data.frame()) } - + # find appropriate format mimetypeid <- get.id(table = "mimetypes", values = "type_string", colnames = mimetype, con = con) if (is.null(mimetypeid)) { PEcAn.logger::logger.error("mimetype ", mimetype, "does not exist") } formatid <- get.id(table = "formats", colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), con = con) - + if (is.null(formatid)) { invisible(data.frame()) } - + # find appropriate posterior posteriorid <- db.query( query = paste0( @@ -456,7 +456,7 @@ dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname=PEcA if (is.null(posteriorid)) { invisible(data.frame()) } - + invisible(dbfile.check(type = 'Posterior', container.id = posteriorid, con = con, hostname = hostname)) } @@ -481,14 +481,14 @@ dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname=PEcA ##' } dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostname=PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) - + if (substr(in.path, 1, 1) != '/') { PEcAn.logger::logger.error("path to dbfiles:", in.path, " is not a valid full path") } - + # find appropriate host hostid <- get.id(table = "machines", colnames = "hostname", values = hostname, con = con, create = TRUE, dates = TRUE) - + # Query for existing dbfile record with same file_name, file_path, machine_id , # container_type, and container_id. dbfile <- invisible(db.query( @@ -499,7 +499,7 @@ dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostn "machine_id='", hostid, "'" ), con = con)) - + if (nrow(dbfile) == 0) { # If no existing record, insert one @@ -510,7 +510,7 @@ dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostn ") RETURNING id"), con = con ) - + file.id <- insert_result[['id']] } else if (!reuse) { # If there is an existing record but reuse==FALSE, return NA. 
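The `.data$` rewrites running through these patches all target the same R CMD check NOTE ("no visible binding for global variable"), which is raised when dplyr verbs refer to columns as bare names. A minimal sketch of the pattern being reached for (illustrative only, not taken from any patch here; `df` and its columns are invented, and the `.data` pronoun is re-exported by dplyr):

```r
library(dplyr)

df <- data.frame(n = c(1, 1, 2), statname = c("none", "none", "SE"))

# Correct: .data$ marks a column being read inside a dplyr verb;
# left-hand sides of `=`, join keys, and quoted table names stay bare.
out <- df %>%
  filter(.data$n == 1) %>%
  mutate(statname = as.character(.data$statname))

# Incorrect: .data$ on the left of `=` does not even parse, e.g.
# mutate(df, .data$statname = "SE")   # Error: unexpected '='
```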
@@ -528,7 +528,7 @@ dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostn file.id <- dbfile[['id']] } } - + # Return the new dbfile ID, or the one that existed already (reuse==T), or NA (reuse==F) return(file.id) } @@ -559,32 +559,32 @@ dbfile.check <- function(type, container.id, con, hostname = PEcAn.remote::fqdn(), machine.check = TRUE, return.all = FALSE) { - + type <- match.arg(type, c("Input", "Posterior", "Model")) - + hostname <- default_hostname(hostname) - + # find appropriate host hostid <- get.id(table = "machines", colnames = "hostname", values = hostname, con = con) if (is.null(hostid)) return(data.frame()) - + dbfiles <- dplyr::tbl(con, "dbfiles") %>% - dplyr::filter(container_type == !!type, - container_id %in% !!container.id) - + dplyr::filter(.data$container_type == !!type, + .data$container_id %in% !!container.id) + if (machine.check) { dbfiles <- dbfiles %>% - dplyr::filter(machine_id == !!hostid) + dplyr::filter(.data$machine_id == !!hostid) } - + dbfiles <- dplyr::collect(dbfiles) - + if (nrow(dbfiles) > 1 && !return.all) { PEcAn.logger::logger.warn("Multiple Valid Files found on host machine. Returning last updated record.") dbfiles <- dbfiles %>% - dplyr::filter(updated_at == max(updated_at)) + dplyr::filter(.data$updated_at == max(.data$updated_at)) } - + dbfiles } @@ -609,9 +609,9 @@ dbfile.check <- function(type, container.id, con, ##' } dbfile.file <- function(type, id, con, hostname=PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) - + files <- dbfile.check(type = type, container.id = id, con = con, hostname = hostname) - + if (nrow(files) > 1) { PEcAn.logger::logger.warn("multiple files found for", id, "returned; using the first one found") invisible(file.path(files[1, 'file_path'], files[1, 'file_name'])) @@ -633,13 +633,13 @@ dbfile.file <- function(type, id, con, hostname=PEcAn.remote::fqdn()) { ##' } dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) - + # find appropriate host hostid <- db.query(query = paste0("SELECT id FROM machines WHERE hostname='", hostname, "'"), con = con)[['id']] if (is.null(hostid)) { invisible(NA) } - + # find file file_name <- basename(file) file_path <- dirname(file) @@ -651,7 +651,7 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { "' AND machine_id=", hostid ), con = con) - + if (nrow(ids) > 1) { PEcAn.logger::logger.warn("multiple ids found for", file, "returned; using the first one found") invisible(ids[1, 'container_id']) @@ -665,180 +665,180 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { -##' -##' This function will move dbfiles - clim or nc - from one location +##' +##' This function will move dbfiles - clim or nc - from one location ##' to another on the same machine and update BETY -##' +##' ##' @name dbfile.move -##' @title Move files to new location +##' @title Move files to new location ##' @param old.dir directory with files to be moved -##' @param new.dir directory where files should be moved +##' @param new.dir directory where files should be moved ##' @param file.type what type of files are being moved ##' @param siteid needed to register files that arent already in BETY -##' @param register if file isn't already in BETY, should it be registered? +##' @param register if file isn't already in BETY, should it be registered? 
##' @return print statement of how many files were moved, registered, or have symbolic links ##' @export ##' @author kzarada ##' @examples ##' \dontrun{ ##' dbfile.move( -##' old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676", +##' old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676", ##' new.dir = '/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676' -##' file.type= clim, -##' siteid = 676, +##' file.type= clim, +##' siteid = 676, ##' register = TRUE ##' ) ##' } -dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = FALSE ){ - - - #create nulls for file movement and error info +dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = FALSE ){ + + + #create nulls for file movement and error info error = 0 files.sym = 0 - files.changed = 0 - files.reg = 0 - files.indb = 0 - - #check for file type and update to make it *.file type - if(file.type != "clim" | file.type != "nc"){ + files.changed = 0 + files.reg = 0 + files.indb = 0 + + #check for file type and update to make it *.file type + if(file.type != "clim" | file.type != "nc"){ PEcAn.logger::logger.error('File type not supported by move at this time. Currently only supports NC and CLIM files') error = 1 } file.pattern = paste0("*.", file.type) - - - + + + #create new directory if it doesn't exist - if(!dir.exists(new.dir)){ + if(!dir.exists(new.dir)){ dir.create(new.dir)} - - - # check to make sure both directories exist - if(!dir.exists(old.dir)){ + + + # check to make sure both directories exist + if(!dir.exists(old.dir)){ PEcAn.logger::logger.error('Old File directory does not exist. Please enter valid file path') error = 1} - - if(!dir.exists(new.dir)){ + + if(!dir.exists(new.dir)){ PEcAn.logger::logger.error('New File directory does not exist. 
Please enter valid file path') error = 1} - - if(basename(new.dir) != basename(old.dir)){ + + if(basename(new.dir) != basename(old.dir)){ PEcAn.logger::logger.error('Basenames of files do not match') } - + #list files in the old directory old.files <- list.files(path= old.dir, pattern = file.pattern) - - #check to make sure there are files - if(length(old.files) == 0){ + + #check to make sure there are files + if(length(old.files) == 0){ PEcAn.logger::logger.warn('No files found') - error = 1 + error = 1 } - - #create full file path + + #create full file path full.old.file = file.path(old.dir, old.files) - - + + ### Get BETY information ### - bety <- dplyr::src_postgres(dbname = 'bety', - host = 'psql-pecan.bu.edu', - user = 'bety', + bety <- dplyr::src_postgres(dbname = 'bety', + host = 'psql-pecan.bu.edu', + user = 'bety', password = 'bety') con <- bety$con - - #get matching dbfiles from BETY + + #get matching dbfiles from BETY dbfile.path = dirname(full.old.file) dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% - dplyr::filter(file_name %in% basename(full.old.file)) %>% + dplyr::filter(file_name %in% basename(full.old.file)) %>% dplyr::filter(file_path %in% dbfile.path) - - + + #if there are matching db files if(dim(dbfiles)[1] > 0){ - - # Check to make sure files line up - if(dim(dbfiles)[1] != length(full.old.file)) { + + # Check to make sure files line up + if(dim(dbfiles)[1] != length(full.old.file)) { PEcAn.logger::logger.warn("Files to be moved don't match up with BETY files, only moving the files that match") - - #IF DB FILES AND FULL FILES DONT MATCH, remove those not in BETY - will take care of the rest below + + #IF DB FILES AND FULL FILES DONT MATCH, remove those not in BETY - will take care of the rest below index = which(basename(full.old.file) %in% dbfiles$file_name) index1 = seq(1, length(full.old.file)) check <- index1[-which(index1 %in% index)] full.old.file <- full.old.file[-check] - - #record the number of files that are being moved + + #record the number of files that are being moved files.changed = length(full.old.file) - + } - - #Check to make sure the files line up - if(dim(dbfiles)[1] != length(full.old.file)) { + + #Check to make sure the files line up + if(dim(dbfiles)[1] != length(full.old.file)) { PEcAn.logger::logger.error("Files to be moved don't match up with BETY files, canceling move") - error = 1 - } - - - #Make sure the files line up + error = 1 + } + + + #Make sure the files line up dbfiles <- dbfiles[order(dbfiles$file_name),] full.old.file <- sort(full.old.file) - + #Record number of files moved and changed in BETY files.indb = dim(dbfiles)[1] - + #Move files and update BETY if(error == 0) { for(i in 1:length(full.old.file)){ fs::file_move(full.old.file[i], new.dir) db.query(paste0("UPDATE dbfiles SET file_path= '", new.dir, "' where id=", dbfiles$id[i]), con) - } #end i loop - } #end error if statement - - - - } #end dbfile loop - - - #if there are files that are in the folder but not in BETY, we can either register them or not - if (dim(dbfiles)[1] == 0 | files.changed > 0){ - - #Recheck what files are in the directory since others may have been moved above + } #end i loop + } #end error if statement + + + + } #end dbfile loop + + + #if there are files that are in the folder but not in BETY, we can either register them or not + if (dim(dbfiles)[1] == 0 | files.changed > 0){ + + #Recheck what files are in the directory since others may have been moved above old.files <- list.files(path= old.dir, pattern = file.pattern) - - #Recreate 
full file path + + #Recreate full file path full.old.file = file.path(old.dir, old.files) - - - #Error check again to make sure there aren't any matching dbfiles + + + #Error check again to make sure there aren't any matching dbfiles dbfile.path = dirname(full.old.file) dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% - dplyr::filter(file_name %in% basename(full.old.file)) %>% + dplyr::filter(file_name %in% basename(full.old.file)) %>% dplyr::filter(file_path %in% dbfile.path) - - if(dim(dbfiles)[1] > 0){ + + if(dim(dbfiles)[1] > 0){ PEcAn.logger::logger.error("There are still dbfiles matching these files! Canceling link or registration") error = 1 } - - + + if(error == 0 & register == TRUE){ - - #Record how many files are being registered to BETY + + #Record how many files are being registered to BETY files.reg= length(full.old.file) - + for(i in 1:length(full.old.file)){ - + file_path = dirname(full.old.file[i]) file_name = basename(full.old.file[i]) - + if(file.type == "nc"){mimetype = "application/x-netcdf" formatname ="CF Meteorology application" } else if(file.type == "clim"){mimetype = 'text/csv' formatname = "Sipnet.climna"} - else{PEcAn.logger::logger.error("File Type is currently not supported")} - - + else{PEcAn.logger::logger.error("File Type is currently not supported")} + + dbfile.input.insert(in.path = file_path, in.prefix = file_name, siteid = siteid, @@ -847,54 +847,54 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = F mimetype = mimetype, formatname = formatname, parentid=NA, - con = con, + con = con, hostname=PEcAn.remote::fqdn(), allow.conflicting.dates=FALSE, - ens=FALSE) - }#end i loop - } #end error loop - - } #end register == TRUE - + ens=FALSE) + }#end i loop + } #end error loop + + } #end register == TRUE + if(error == 0 & register == FALSE){ - #Create file path for symbolic link + #Create file path for symbolic link full.new.file = file.path(new.dir, old.files) - - #Record number of files that will have a symbolic link made + + #Record number of files that will have a symbolic link made files.sym = length(full.new.file) - - #Line up files + + #Line up files full.new.file = sort(full.new.file) full.old.file <- sort(full.old.file) - - #Check to make sure the files are the same length - if(length(full.new.file) != length(full.old.file)) { - + + #Check to make sure the files are the same length + if(length(full.new.file) != length(full.old.file)) { + PEcAn.logger::logger.error("Files to be moved don't match up with BETY. 
Canceling Move") error = 1 } - - #Move file and create symbolic link if there are no errors - + + #Move file and create symbolic link if there are no errors + if(error ==0){ for(i in 1:length(full.old.file)){ fs::file_move(full.old.file[i], new.dir) R.utils::createLink(link = full.old.file[i], target = full.new.file[i]) - }#end i loop - } #end error loop - + }#end i loop + } #end error loop + } #end Register == FALSE - - - if(error > 0){ + + + if(error > 0){ PEcAn.logger::logger.error("There was an error, files were not moved or linked") - + } - + if(error == 0){ - + PEcAn.logger::logger.info(paste0(files.changed + files.indb, " files were moved and updated on BETY, ", files.sym, " were moved and had a symbolic link created, and ", files.reg , " files were moved and then registered in BETY")) - + } - + } #end dbfile.move() From c6ff211427a201f9614b2ecc166fd6b4c0b7d65d Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 12:44:42 +0530 Subject: [PATCH 1795/2289] updated query.dplyr.R fixed tidyverse notes "no visible binding for global variable" for "sync_host_id, run_id, id, workflow_id, posix, folder, ensemble_id" --- base/db/R/query.dplyr.R | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R index b08751f1337..dddde24d514 100644 --- a/base/db/R/query.dplyr.R +++ b/base/db/R/query.dplyr.R @@ -1,7 +1,7 @@ #' Connect to bety using current PEcAn configuration #' @param php.config Path to `config.php` #' @export -#' +#' betyConnect <- function(php.config = "../../web/config.php") { ## Read PHP config file for webserver if (file.exists(php.config)) { @@ -77,8 +77,8 @@ dbHostInfo <- function(bety) { # get machine start and end based on hostid machine <- dplyr::tbl(bety, "machines") %>% - dplyr::filter(sync_host_id == !!hostid) - + dplyr::filter(.data$sync_host_id == !!hostid) + if (is.na(nrow(machine)) || nrow(machine) == 0) { return(list(hostid = hostid, @@ -111,7 +111,7 @@ workflows <- function(bety, ensemble = FALSE) { query <- "SELECT id AS workflow_id, folder FROM workflows" } dplyr::tbl(bety, dbplyr::sql(query)) %>% - dplyr::filter(workflow_id >= !!hostinfo$start & workflow_id <= !!hostinfo$end) %>% + dplyr::filter(.data$workflow_id >= !!hostinfo$start & .data$workflow_id <= !!hostinfo$end) %>% return() } # workflows @@ -122,7 +122,7 @@ workflows <- function(bety, ensemble = FALSE) { #' @export workflow <- function(bety, workflow_id) { workflows(bety) %>% - dplyr::filter(workflow_id == !!workflow_id) + dplyr::filter(.data$workflow_id == !!.data$workflow_id) } # workflow @@ -132,14 +132,14 @@ workflow <- function(bety, workflow_id) { #' @export runs <- function(bety, workflow_id) { Workflows <- workflow(bety, workflow_id) %>% - dplyr::select(workflow_id, folder) + dplyr::select(.data$workflow_id, .data$folder) Ensembles <- dplyr::tbl(bety, "ensembles") %>% - dplyr::select(ensemble_id = id, workflow_id) %>% - dplyr::inner_join(Workflows, by = "workflow_id") + dplyr::select(.data$ensemble_id = .data$id, .data$workflow_id) %>% + dplyr::inner_join(Workflows, by = ".data$workflow_id") Runs <- dplyr::tbl(bety, "runs") %>% - dplyr::select(run_id = id, ensemble_id) %>% - dplyr::inner_join(Ensembles, by = "ensemble_id") - dplyr::select(Runs, -workflow_id, -ensemble_id) %>% + dplyr::select(.data$run_id = .data$id, .data$ensemble_id) %>% + dplyr::inner_join(Ensembles, by = ".data$ensemble_id") + dplyr::select(Runs, -.data$workflow_id, 
-.data$ensemble_id) %>% return() } # runs @@ -155,9 +155,9 @@ get_workflow_ids <- function(bety, query, all.ids = FALSE) { } else { # Get all workflow IDs ids <- workflows(bety, ensemble = FALSE) %>% - dplyr::distinct(workflow_id) %>% + dplyr::distinct(.data$workflow_id) %>% dplyr::collect() %>% - dplyr::pull(workflow_id) %>% + dplyr::pull(.data$workflow_id) %>% sort(decreasing = TRUE) } return(ids) @@ -170,7 +170,7 @@ get_users <- function(bety) { hostinfo <- dbHostInfo(bety) query <- "SELECT id, login FROM users" out <- dplyr::tbl(bety, dbplyr::sql(query)) %>% - dplyr::filter(id >= hostinfo$start & id <= hostinfo$end) + dplyr::filter(.data$id >= hostinfo$start & .data$id <= hostinfo$end) return(out) } # get_workflow_ids @@ -184,7 +184,7 @@ get_run_ids <- function(bety, workflow_id) { if (workflow_id != "") { runs <- runs(bety, workflow_id) if (dplyr.count(runs) > 0) { - run_ids <- dplyr::pull(runs, run_id) %>% sort() + run_ids <- dplyr::pull(runs, .data$run_id) %>% sort() } } return(run_ids) @@ -200,7 +200,7 @@ get_run_ids <- function(bety, workflow_id) { get_var_names <- function(bety, workflow_id, run_id, remove_pool = TRUE) { var_names <- character(0) if (workflow_id != "" && run_id != "") { - workflow <- dplyr::collect(workflow(bety, workflow_id)) + workflow <- dplyr::collect(workflow(bety, .data$workflow_id)) if (nrow(workflow) > 0) { outputfolder <- file.path(workflow$folder, "out", run_id) if (utils::file_test("-d", outputfolder)) { @@ -254,18 +254,18 @@ load_data_single_run <- function(bety, workflow_id, run_id) { # @return Dataframe for one run # Adapted from earlier code in pecan/shiny/workflowPlot/server.R globalDF <- data.frame() - workflow <- dplyr::collect(workflow(bety, workflow_id)) + workflow <- dplyr::collect(workflow(bety, .data$workflow_id)) # Use the function 'var_names_all' to get all variables var_names <- var_names_all(bety, workflow_id, run_id) # lat/lon often cause trouble (like with JULES) but aren't needed for this basic plotting - var_names <- setdiff(var_names, c("lat", "latitude", "lon", "longitude")) + var_names <- setdiff(var_names, c("lat", "latitude", "lon", "longitude")) outputfolder <- file.path(workflow$folder, 'out', run_id) out <- read.output(runid = run_id, outdir = outputfolder, variables = var_names, dataframe = TRUE) ncfile <- list.files(path = outputfolder, pattern = "\\.nc$", full.names = TRUE)[1] nc <- ncdf4::nc_open(ncfile) - + globalDF <- tidyr::gather(out, key = var_name, value = vals, names(out)[names(out) != "posix"]) %>% - dplyr::rename(dates = posix) + dplyr::rename(dates = .data$posix) globalDF$workflow_id <- workflow_id globalDF$run_id <- run_id globalDF$xlab <- "Time" From f144376c96fdc1037c6f03f38f4e14d999005d7b Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 12:51:20 +0530 Subject: [PATCH 1796/2289] updated get.trait.data.pft.R fixed tidyverse notes "no visible binding for global variable" for " pft_id, created_at, stat, trait " --- base/db/R/get.trait.data.pft.R | 86 +++++++++++++++++----------------- 1 file changed, 43 insertions(+), 43 deletions(-) diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index d36e4893088..8334716df92 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -22,33 +22,33 @@ ##' @export get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, forceupdate = FALSE) { - + # Create directory if necessary if (!file.exists(pft$outdir) && !dir.create(pft$outdir, recursive = 
TRUE)) { PEcAn.logger::logger.error(paste0("Couldn't create PFT output directory: ", pft$outdir)) } - + ## Remove old files. Clean up. old.files <- list.files(path = pft$outdir, full.names = TRUE, include.dirs = FALSE) file.remove(old.files) - + # find appropriate pft pftres <- query_pfts(dbcon, pft[["name"]], modeltype) pfttype <- pftres[["pft_type"]] pftid <- pftres[["id"]] - + if (nrow(pftres) > 1) { PEcAn.logger::logger.severe( "Multiple PFTs named", pft[["name"]], "found,", "with ids", PEcAn.utils::vecpaste(pftres[["id"]]), ".", "Specify modeltype to fix this.") } - + if (nrow(pftres) == 0) { PEcAn.logger::logger.severe("Could not find pft", pft[["name"]]) return(NA) } - + # get the member species/cultivars, we need to check if anything changed if (pfttype == "plant") { pft_member_filename = "species.csv" @@ -59,41 +59,41 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, } else { PEcAn.logger::logger.severe("Unknown pft type! Expected 'plant' or 'cultivar', got", pfttype) } - + # ANS: Need to do this conversion for the check against existing # membership later on. Otherwise, `NA` from the CSV is interpreted # as different from `""` returned here, even though they are really # the same thing. pft_members <- pft_members %>% dplyr::mutate_if(is.character, ~dplyr::na_if(., "")) - + # get the priors prior.distns <- PEcAn.DB::query.priors(pft = pftid, trstr = PEcAn.utils::vecpaste(trait.names), con = dbcon) prior.distns <- prior.distns[which(!rownames(prior.distns) %in% names(pft$constants)),] traits <- rownames(prior.distns) - + # get the trait data (don't bother sampling derived traits until after update check) trait.data.check <- PEcAn.DB::query.traits(ids = pft_members$id, priors = traits, con = dbcon, update.check.only = TRUE, ids_are_cultivars = (pfttype=="cultivar")) traits <- names(trait.data.check) - + # Set forceupdate FALSE if it's a string (backwards compatible with 'AUTO' flag used in the past) forceupdate <- isTRUE(as.logical(forceupdate)) - + # check to see if we need to update if (!forceupdate) { if (is.null(pft$posteriorid)) { recent_posterior <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid) %>% + dplyr::filter(.data$pft_id == !!pftid) %>% dplyr::collect() if (length(recent_posterior) > 0) { pft$posteriorid <- dplyr::tbl(dbcon, "posteriors") %>% - dplyr::filter(pft_id == !!pftid) %>% - dplyr::arrange(dplyr::desc(created_at)) %>% + dplyr::filter(.data$pft_id == !!pftid) %>% + dplyr::arrange(dplyr::desc(.data$created_at)) %>% utils::head(1) %>% dplyr::pull(id) } else { PEcAn.logger::logger.info("No previous posterior found. 
Forcing update") - } + } } if (!is.null(pft$posteriorid)) { files <- dbfile.check(type = "Posterior", container.id = pft$posteriorid, con = dbcon, @@ -165,13 +165,13 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, ) foundallfiles <- FALSE } - + # Check if priors have changed PEcAn.logger::logger.debug("Checking if priors have changed") existing_prior <- PEcAn.utils::load_local(need_paths[["priors"]])[["prior.distns"]] diff_prior <- symmetric_setdiff( - dplyr::as_tibble(prior.distns, rownames = "trait"), - dplyr::as_tibble(existing_prior, rownames = "trait") + dplyr::as_tibble(prior.distns, rownames = ".data$trait"), + dplyr::as_tibble(existing_prior, rownames = ".data$trait") ) if (nrow(diff_prior) > 0) { PEcAn.logger::logger.error( @@ -182,7 +182,7 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, ) foundallfiles <- FALSE } - + # Check if trait data have changed PEcAn.logger::logger.debug("Checking if trait data have changed") existing_trait_data <- PEcAn.utils::load_local( @@ -197,14 +197,14 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, } else if (length(trait.data.check) == 0) { PEcAn.logger::logger.warn("New and existing trait data are both empty. Skipping this check.") } else { - current_traits <- dplyr::bind_rows(trait.data.check, .id = "trait") %>% - dplyr::select(-mean, -stat) - existing_traits <- dplyr::bind_rows(existing_trait_data, .id = "trait") %>% - dplyr::select(-mean, -stat) + current_traits <- dplyr::bind_rows(trait.data.check, .id = ".data$trait") %>% + dplyr::select(-mean, -.data$stat) + existing_traits <- dplyr::bind_rows(existing_trait_data, .id = ".data$trait") %>% + dplyr::select(-mean, -.data$stat) diff_traits <- symmetric_setdiff(current_traits, existing_traits) if (nrow(diff_traits) > 0) { diff_summary <- diff_traits %>% - dplyr::count(source, trait) + dplyr::count(source, .data$trait) PEcAn.logger::logger.error( "\n Prior has changed. \n", "Here are the number of differing trait records by trait:\n", @@ -215,8 +215,8 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, } } } - - + + if (foundallfiles) { PEcAn.logger::logger.info( "Reusing existing files from posterior", pft$posteriorid, @@ -226,9 +226,9 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, file.copy(from = file.path(files[[id, "file_path"]], files[[id, "file_name"]]), to = file.path(pft$outdir, files[[id, "file_name"]])) } - + done <- TRUE - + # May need to symlink the generic post.distns.Rdata to a specific post.distns.*.Rdata file. 
if (length(list.files(pft$outdir, "post.distns.Rdata")) == 0) { all.files <- list.files(pft$outdir) @@ -265,18 +265,18 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, } } } - + # get the trait data (including sampling of derived traits, if any) trait.data <- query.traits(pft_members$id, traits, con = dbcon, update.check.only = FALSE, ids_are_cultivars = (pfttype == "cultivar")) traits <- names(trait.data) - + if (length(trait.data) > 0) { trait_counts <- trait.data %>% - dplyr::bind_rows(.id = "trait") %>% - dplyr::count(trait) - + dplyr::bind_rows(.id = ".data$trait") %>% + dplyr::count(.data$trait) + PEcAn.logger::logger.info( "\n Number of observations per trait for PFT ", shQuote(pft[["name"]]), ":\n", PEcAn.logger::print2string(trait_counts, n = Inf, na.print = ""), @@ -288,36 +288,36 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, format(pft_members[["id"]], scientific = FALSE) ) } - + # get list of existing files so they get ignored saving old.files <- list.files(path = pft$outdir) - + # create a new posterior insert_result <- db.query( paste0("INSERT INTO posteriors (pft_id) VALUES (", pftid, ") RETURNING id"), con = dbcon) pft$posteriorid <- insert_result[["id"]] - + # create path where to store files pathname <- file.path(dbfiles, "posterior", pft$posteriorid) dir.create(pathname, showWarnings = FALSE, recursive = TRUE) - + ## 1. get species/cultivar list based on pft utils::write.csv(pft_members, file.path(pft$outdir, pft_member_filename), row.names = FALSE) - + ## save priors save(prior.distns, file = file.path(pft$outdir, "prior.distns.Rdata")) utils::write.csv(prior.distns, file.path(pft$outdir, "prior.distns.csv"), row.names = TRUE) - + ## 3. display info to the console PEcAn.logger::logger.info( "\n Summary of prior distributions for PFT ", shQuote(pft$name), ":\n", PEcAn.logger::print2string(prior.distns), wrap = FALSE ) - + ## traits = variables with prior distributions for this pft trait.data.file <- file.path(pft$outdir, "trait.data.Rdata") save(trait.data, file = trait.data.file) @@ -326,7 +326,7 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, file.path(pft$outdir, "trait.data.csv"), row.names = FALSE ) - + ### save and store in database all results except those that were there already store_files_all <- list.files(path = pft[["outdir"]]) store_files <- setdiff(store_files_all, old.files) @@ -346,6 +346,6 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, type = "Posterior", id = pft[["posteriorid"]], con = dbcon) } - + return(pft) -} \ No newline at end of file +} From baa49dd6c847d985a49ed81014413a70b4f9168d Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 12:55:29 +0530 Subject: [PATCH 1797/2289] updated insert.format.vars.R fixed tidyverse notes "no visible binding for global variable" for "id, name, collect" --- base/db/R/insert.format.vars.R | 56 +++++++++++++++++----------------- 1 file changed, 28 insertions(+), 28 deletions(-) diff --git a/base/db/R/insert.format.vars.R b/base/db/R/insert.format.vars.R index cc78866ad7e..3a660a67a54 100644 --- a/base/db/R/insert.format.vars.R +++ b/base/db/R/insert.format.vars.R @@ -1,17 +1,17 @@ -#' Insert Format and Format-Variable Records +#' Insert Format and Format-Variable Records #' #' @param con SQL connection to BETYdb #' @param format_name The name of the format. Type: character string. 
-#' @param mimetype_id The id associated with the mimetype of the format. Type: integer. +#' @param mimetype_id The id associated with the mimetype of the format. Type: integer. #' @param header Boolean that indicates the presence of a header in the format. Defaults to "TRUE". -#' @param skip Integer that indicates the number of lines to skip in the header. Defaults to 0. -#' @param formats_variables A 'tibble' consisting of entries that correspond to columns in the formats-variables table. See Details for further information. +#' @param skip Integer that indicates the number of lines to skip in the header. Defaults to 0. +#' @param formats_variables A 'tibble' consisting of entries that correspond to columns in the formats-variables table. See Details for further information. #' @param notes Additional description of the format: character string. -#' @param suppress Boolean that suppresses or allows a test for an existing variable id. This test is inconvenient in applications where the variable_ids are already known. -#' @details The formats_variables argument must be a 'tibble' and be structured in a specific format so that the SQL query functions properly. All arguments should be passed as vectors so that each entry will correspond with a specific row. All empty values should be specified as NA. +#' @param suppress Boolean that suppresses or allows a test for an existing variable id. This test is inconvenient in applications where the variable_ids are already known. +#' @details The formats_variables argument must be a 'tibble' and be structured in a specific format so that the SQL query functions properly. All arguments should be passed as vectors so that each entry will correspond with a specific row. All empty values should be specified as NA. #' \describe{ #' \item{variable_id}{(Required) Vector of integers.} -#' \item{name}{(Optional) Vector of character strings. The variable name in the imported data need only be specified if it differs from the BETY variable name.} +#' \item{name}{(Optional) Vector of character strings. The variable name in the imported data need only be specified if it differs from the BETY variable name.} #' \item{unit}{(Optional) Vector of type character string. Should be in a format parseable by the udunits library and need only be secified if the units of the data in the file differ from the BETY standard.} #' \item{storage_type}{(Optional) Vector of character strings. Storage type need only be specified if the variable is stored in a format other than would be expected (e.g. if numeric values are stored as quoted character strings). Additionally, storage_type stores POSIX codes that are used to store any time variables (e.g. a column with a 4-digit year would be \%Y). See also \code{[base::strptime]}} #' \item{column_number}{Vector of integers that list the column numbers associated with variables in a dataset. 
Required for text files that lack headers.}} @@ -39,44 +39,44 @@ #' formats_variables = formats_variables_tibble) #' } insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, header = TRUE, skip = 0, formats_variables = NULL, suppress = TRUE){ - + # Test if name is a character string if(!is.character(format_name)){ PEcAn.logger::logger.error( "Name must be a character string" ) } - + # Test if format name already exists - name_test <- dplyr::tbl(con, "formats") %>% dplyr::select(id, name) %>% dplyr::filter(name %in% !!format_name) %>% collect() + name_test <- dplyr::tbl(con, "formats") %>% dplyr::select(.data$id, .data$name) %>% dplyr::filter(.data$name %in% !!format_name) %>% .data$collect() name_test_df <- as.data.frame(name_test) if(!is.null(name_test_df[1,1])){ PEcAn.logger::logger.error( "Name already exists" ) } - + #Test if skip is an integer if(!is.character(skip)){ PEcAn.logger::logger.error( "Skip must be of type character" ) } - + # Test if header is a Boolean if(!is.logical(header)){ PEcAn.logger::logger.error( "Header must be of type Boolean" ) } - + # Test if notes are a character string if(!is.character(notes)&!is.null(notes)){ PEcAn.logger::logger.error( "Notes must be of type character" ) } - + ######## Formats-Variables tests ############### if(!is.null(formats_variables)){ for(i in 1:nrow(formats_variables)){ @@ -88,7 +88,7 @@ insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, head if(suppress == FALSE){ ## Test if variable_id already exists ## - var_id_test <- dplyr::tbl(con, "variables") %>% dplyr::select(id) %>% dplyr::filter(id %in% !!formats_variables[[i, "variable_id"]]) %>% dplyr::collect(id) + var_id_test <- dplyr::tbl(con, "variables") %>% dplyr::select(.data$id) %>% dplyr::filter(.data$id %in% !!formats_variables[[i, "variable_id"]]) %>% dplyr::collect(.data$id) if(!is.null(var_id_test[1,1])){ PEcAn.logger::logger.error( "variable_id already exists" @@ -117,21 +117,21 @@ insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, head ) } } - + ## convert NA to "" for inserting into db ## formats_variables[is.na(formats_variables)] <- "" - + ### udunit tests ### for(i in 1:nrow(formats_variables)){ u1 <- formats_variables[1,"unit"] - u2 <- dplyr::tbl(con, "variables") %>% dplyr::select(id, units) %>% dplyr::filter(id %in% !!formats_variables[[1, "variable_id"]]) %>% dplyr::pull(units) - + u2 <- dplyr::tbl(con, "variables") %>% dplyr::select(.data$id, units) %>% dplyr::filter(.data$id %in% !!formats_variables[[1, "variable_id"]]) %>% dplyr::pull(units) + if(!udunits2::ud.is.parseable(u1)){ PEcAn.logger::logger.error( "Units not parseable. Please enter a unit that is parseable by the udunits library." ) } - # Grab the bety units and + # Grab the bety units and if(!udunits2::ud.are.convertible(u1, u2)){ PEcAn.logger::logger.error( "Units are not convertable." 
@@ -139,7 +139,7 @@ insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, head } } } - + formats_df <- tibble::tibble( header = as.character(header), skip = skip, @@ -151,20 +151,20 @@ insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, head ## Insert format record inserted_formats <- db_merge_into(formats_df, "formats", con = con, by = c("name", "mimetype_id")) ## Make sure to include a 'by' argument - format_id <- dplyr::pull(inserted_formats, id) - + format_id <- dplyr::pull(inserted_formats, .data$id) + if(!is.null(formats_variables)){ - ## Insert format_id into + ## Insert format_id into n <- nrow(formats_variables) format_id_df <- matrix(data = format_id, nrow = n, ncol = 1) colnames(format_id_df) <- "format_id" - + ## Make query data.frame - formats_variables_input <- cbind(format_id_df, formats_variables) - + formats_variables_input <- cbind(format_id_df, formats_variables) + ## Insert Format-Variable record inserted_formats_variables <- db_merge_into(formats_variables_input, "formats_variables", con = con, by = c("variable_id")) } return(format_id) - + } From b7045e95bd461aebab79ea3776fe4286fd12d743 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 13:00:26 +0530 Subject: [PATCH 1798/2289] updated query.pft.R ixed tidyverse notes "no visible binding for global variable" for "name, pft_type, name.mt, cultivar_id, specie_id, genus, species, scientificname, name.cv" --- base/db/R/query.pft.R | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/base/db/R/query.pft.R b/base/db/R/query.pft.R index bf10f85f88b..bba0efc19e6 100644 --- a/base/db/R/query.pft.R +++ b/base/db/R/query.pft.R @@ -65,7 +65,7 @@ query.pft_species <- function(pft, modeltype = NULL, con) { query.pft_cultivars <- function(pft, modeltype = NULL, con) { pft_tbl <- (dplyr::tbl(con, "pfts") - %>% dplyr::filter(name == !!pft, pft_type == "cultivar")) + %>% dplyr::filter(.data$name == !!pft, .data$pft_type == "cultivar")) if (!is.null(modeltype)) { pft_tbl <- (pft_tbl @@ -73,7 +73,7 @@ query.pft_cultivars <- function(pft, modeltype = NULL, con) { dplyr::tbl(con, "modeltypes"), by = c("modeltype_id" = "id"), suffix = c("", ".mt")) - %>% dplyr::filter(name.mt == !!modeltype)) + %>% dplyr::filter(.data$name.mt == !!modeltype)) } (pft_tbl @@ -83,19 +83,19 @@ query.pft_cultivars <- function(pft, modeltype = NULL, con) { suffix = c("", ".cvpft")) %>% dplyr::inner_join( dplyr::tbl(con, "cultivars"), - by = c("cultivar_id" = "id"), + by = c(".data$cultivar_id" = "id"), suffix = c("", ".cv")) %>% dplyr::inner_join( - dplyr::tbl(con, "species"), - by=c("specie_id" = "id"), + dplyr::tbl(con, ".data$species"), + by=c(".data$specie_id" = "id"), suffix=c("", ".sp")) %>% dplyr::select( - id = cultivar_id, - specie_id, - genus, - species, - scientificname, - cultivar = name.cv) + id = .data$cultivar_id, + .data$specie_id, + .data$genus, + .data$species, + .data$scientificname, + cultivar = .data$name.cv) %>% dplyr::collect()) } From 63f11b055544e18a1cd9d07ff980b768f21d529d Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 13:03:16 +0530 Subject: [PATCH 1799/2289] updated query.format.vars.R --- base/db/R/query.format.vars.R | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/base/db/R/query.format.vars.R b/base/db/R/query.format.vars.R index 61ba7cf3e0e..c59bdcd131b 100644 --- 
a/base/db/R/query.format.vars.R +++ b/base/db/R/query.format.vars.R @@ -59,7 +59,7 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { if(all(!is.na(var.ids))){ # Need to subset the formats table - fv <- fv %>% dplyr::filter(variable_id %in% !!var.ids | storage_type != "") + fv <- fv %>% dplyr::filter(.data$variable_id %in% !!var.ids | .data$storage_type != "") if(dim(fv)[1] == 0){ PEcAn.logger::logger.error("None of your requested variables are available") } @@ -169,36 +169,36 @@ query.format.vars <- function(bety, input.id=NA, format.id=NA, var.ids=NA) { ##' Convert BETY variable names to MsTMIP and subsequently PEcAn standard names ##' ##' @param vars_bety data frame with variable names and units -##' @export +##' @export ##' ##' @author Betsy Cowdery bety2pecan <- function(vars_bety){ - - # This needs to be moved to lazy load - bety_mstmip <- utils::read.csv(system.file("bety_mstmip_lookup.csv", package= "PEcAn.DB"), + + # This needs to be moved to lazy load + bety_mstmip <- utils::read.csv(system.file("bety_mstmip_lookup.csv", package= "PEcAn.DB"), header = T, stringsAsFactors = FALSE) - + vars_full <- merge(vars_bety, bety_mstmip, by = "bety_name", all.x = TRUE) - + vars_full$pecan_name <- vars_full$mstmip_name vars_full$pecan_units <- vars_full$mstmip_units ind <- is.na(vars_full$pecan_name) vars_full$pecan_name[ind] <- vars_full$bety_name[ind] vars_full$pecan_units[ind] <- vars_full$bety_units[ind] - + dups <- unique(vars_full$pecan_name[duplicated(vars_full$pecan_name)]) - + if("NEE" %in% dups){ # This is a hack specific to Ameriflux! - # It ultimately needs to be generalized, perhaps in a better version of + # It ultimately needs to be generalized, perhaps in a better version of # bety2pecan that doesn't use a lookup table # In Ameriflux FC and NEE can map to NEE in mstmip/pecan standard # Thus if both are reported in the data, both will be converted to NEE - # which creates a conflict. + # which creates a conflict. # Here we go back to the bety name to determine which of those is NEE # The variable that is not NEE in bety (assuming it's FC) is discarded. 
- + keep <- which(vars_full$bety_name[which(vars_full$pecan_name == "NEE")] == "NEE") if(length(keep) == 1){ discard <- vars_full$bety_name[which(vars_full$pecan_name == "NEE")][-keep] From 9fee6181e368cdc9fb38d97f9df5cd785d49bf6e Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 13:14:27 +0530 Subject: [PATCH 1800/2289] updated query.traits.R fixed tidyverse notes "no visible binding for global variable" for "name" --- base/db/R/query.traits.R | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/base/db/R/query.traits.R b/base/db/R/query.traits.R index 686d61a19ce..62fd8176b52 100644 --- a/base/db/R/query.traits.R +++ b/base/db/R/query.traits.R @@ -40,21 +40,21 @@ query.traits <- function(ids, priors, con, if (length(ids) == 0 || length(priors) == 0) { return(list()) } - + id_type = rlang::sym(if (ids_are_cultivars) {"cultivar_id"} else {"specie_id"}) - + traits <- (dplyr::tbl(con, "traits") %>% dplyr::inner_join(dplyr::tbl(con, "variables"), by = c("variable_id" = "id")) %>% dplyr::filter( (!!id_type %in% ids), - (name %in% !!priors)) # TODO: use .data$name when filter supports it - %>% dplyr::distinct(name) # TODO: use .data$name when distinct supports it + (.data$name %in% !!priors)) # TODO: use .data$name when filter supports it + %>% dplyr::distinct(.data$name) # TODO: use .data$name when distinct supports it %>% dplyr::collect()) - + if (nrow(traits) == 0) { return(list()) } - + ### Grab trait data trait.data <- lapply(traits$name, function(trait){ query.trait.data( @@ -65,6 +65,6 @@ query.traits <- function(ids, priors, con, ids_are_cultivars = ids_are_cultivars) }) names(trait.data) <- traits$name - + return(trait.data) -} \ No newline at end of file +} From bbcf0b139b69227bdaae3dd793f7f4c35969125e Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 13:22:52 +0530 Subject: [PATCH 1801/2289] updated search_references.R fixed tidyverse notes "no visible binding for global variable" for "score, author, author_family, author_given, issued" --- base/db/R/search_references.R | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/base/db/R/search_references.R b/base/db/R/search_references.R index e1da9d9c0df..84a08797e7e 100644 --- a/base/db/R/search_references.R +++ b/base/db/R/search_references.R @@ -32,8 +32,8 @@ search_reference_single <- function(query, limit = 1, min_score = 85) { return(tibble::tibble(query = query)) } crdata <- crsearch[["data"]] %>% - dplyr::mutate(score = as.numeric(score)) %>% - dplyr::filter(score > !!min_score) + dplyr::mutate(.data$score = as.numeric(.data$score)) %>% + dplyr::filter(.data$score > !!min_score) if (nrow(crdata) < 1) { PEcAn.logger::logger.info( "No matches found. 
", @@ -54,14 +54,13 @@ search_reference_single <- function(query, limit = 1, min_score = 85) { proc_search <- crdata %>% dplyr::mutate( # Get the first author only -- this is the BETY format - author_family = purrr::map(author, list("family", 1)), - author_given = purrr::map(author, list("given", 1)), - author = paste(author_family, author_given, sep = ", "), - year = gsub("([[:digit:]]{4}).*", "\\1", issued) %>% as.numeric(), + .data$author_family = purrr::map(.data$author, list("family", 1)), + .data$author_given = purrr::map(.data$author, list("given", 1)), + .data$author = paste(.data$author_family, .data$author_given, sep = ", "), + year = gsub("([[:digit:]]{4}).*", "\\1", .data$issued) %>% as.numeric(), query = query, - score = as.numeric(score) + .data$score = as.numeric(.data$score) ) use_cols <- keep_cols[keep_cols %in% colnames(proc_search)] dplyr::select(proc_search, !!!use_cols) } - From 84be22e32ad1cd90696c0e121bbf0210ed342cf8 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 13:27:23 +0530 Subject: [PATCH 1802/2289] updated Rcheck_reference.log removed relevant "no visible binding for global variable" lines from the file . --- base/db/tests/Rcheck_reference.log | 59 ++++-------------------------- 1 file changed, 7 insertions(+), 52 deletions(-) diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log index e4d87ef12b1..7365aeeb96c 100644 --- a/base/db/tests/Rcheck_reference.log +++ b/base/db/tests/Rcheck_reference.log @@ -38,53 +38,21 @@ * checking replacement functions ... OK * checking foreign function calls ... OK * checking R code for possible problems ... NOTE -clone_pft: no visible binding for global variable ‘name’ -clone_pft: no visible binding for global variable ‘id’ -clone_pft: no visible binding for global variable ‘created_at’ -clone_pft: no visible binding for global variable ‘updated_at’ -clone_pft: no visible binding for global variable ‘pft_id’ + + db.exists: no visible binding for '<<-' assignment to ‘user.permission’ db.exists: no visible binding for global variable ‘user.permission’ -dbfile.check: no visible binding for global variable ‘container_type’ -dbfile.check: no visible binding for global variable ‘container_id’ -dbfile.check: no visible binding for global variable ‘machine_id’ -dbfile.check: no visible binding for global variable ‘updated_at’ -dbHostInfo: no visible binding for global variable ‘sync_host_id’ -get_run_ids: no visible binding for global variable ‘run_id’ -get_users: no visible binding for global variable ‘id’ -get_workflow_ids: no visible binding for global variable ‘workflow_id’ -get.trait.data.pft: no visible binding for global variable ‘pft_id’ -get.trait.data.pft: no visible binding for global variable ‘created_at’ -get.trait.data.pft: no visible binding for global variable ‘stat’ -get.trait.data.pft: no visible binding for global variable ‘trait’ -insert.format.vars: no visible binding for global variable ‘id’ -insert.format.vars: no visible binding for global variable ‘name’ -insert.format.vars: no visible global function definition for ‘collect’ + load_data_single_run: no visible global function definition for ‘read.output’ load_data_single_run: no visible binding for global variable ‘var_name’ load_data_single_run: no visible binding for global variable ‘vals’ -load_data_single_run: no visible binding for global variable ‘posix’ match_dbcols: no visible global function definition for ‘head’ match_dbcols: no visible binding for global 
variable ‘.’ match_dbcols: no visible binding for global variable ‘as’ -query_pfts: no visible binding for global variable ‘name’ -query.format.vars: no visible binding for global variable ‘variable_id’ -query.format.vars: no visible binding for global variable - ‘storage_type’ -query.pft_cultivars: no visible binding for global variable ‘name’ -query.pft_cultivars: no visible binding for global variable ‘pft_type’ -query.pft_cultivars: no visible binding for global variable ‘name.mt’ -query.pft_cultivars: no visible binding for global variable - ‘cultivar_id’ -query.pft_cultivars: no visible binding for global variable ‘specie_id’ -query.pft_cultivars: no visible binding for global variable ‘genus’ -query.pft_cultivars: no visible binding for global variable ‘species’ -query.pft_cultivars: no visible binding for global variable - ‘scientificname’ -query.pft_cultivars: no visible binding for global variable ‘name.cv’ + query.priors: no visible binding for global variable ‘settings’ -query.traits: no visible binding for global variable ‘name’ + rename_jags_columns: no visible binding for global variable ‘stat’ rename_jags_columns: no visible binding for global variable ‘n’ rename_jags_columns: no visible binding for global variable ‘trt_id’ @@ -93,19 +61,8 @@ rename_jags_columns: no visible binding for global variable ‘citation_id’ rename_jags_columns: no visible binding for global variable ‘greenhouse’ -runs: no visible binding for global variable ‘folder’ -runs: no visible binding for global variable ‘id’ -runs: no visible binding for global variable ‘ensemble_id’ -search_reference_single: no visible binding for global variable ‘score’ -search_reference_single: no visible binding for global variable - ‘author’ -search_reference_single: no visible binding for global variable - ‘author_family’ -search_reference_single: no visible binding for global variable - ‘author_given’ -search_reference_single: no visible binding for global variable - ‘issued’ -workflows: no visible binding for global variable ‘workflow_id’ + + Undefined global functions or variables: . as author author_family author_given citation_id collect container_id container_type created_at cultivar_id ensemble_id folder @@ -181,5 +138,3 @@ Status: 2 WARNINGs, 2 NOTEs See ‘/tmp/RtmpcCXxhc/PEcAn.DB.Rcheck/00check.log’ for details. - - From bd90f297f20c8ae4d5247060c0dd03c1af1326b2 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Fri, 19 Feb 2021 20:24:49 +0530 Subject: [PATCH 1803/2289] updated DESCRIPTION added comma after testthat (>= 2.0.0) (line 37) and 'BrownDog' packageName to suggests section. 
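For reference: dependency fields in DESCRIPTION are comma-separated lists in DCF format, so the missing comma made `testthat (>= 2.0.0)` and the stray `bd` entry parse as a single malformed dependency. The corrected field, as applied in the diff below, reads:

```
Suggests:
    PEcAn.data.land,
    testthat (>= 2.0.0),
    BrownDog
```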
--- modules/benchmark/DESCRIPTION | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index a3ba20df765..8e1038d7b22 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -34,8 +34,8 @@ Imports: stringr Suggests: PEcAn.data.land, - testthat (>= 2.0.0) - bd + testthat (>= 2.0.0), + BrownDog License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes From 9ba0cf319eebf4ebc3b7a102acd953e9a496dcc0 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Fri, 19 Feb 2021 15:09:03 +0000 Subject: [PATCH 1804/2289] automated documentation update --- docker/depends/pecan.depends.R | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 0232b1b4bb0..0ccf146953c 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -23,6 +23,7 @@ wanted <- c( 'binaryLogic', 'BioCro', 'bit64', +'BrownDog', 'coda', 'corrplot', 'data.table', From b376ca140f9a3b400bc6e26b21b02cfb5cd6610b Mon Sep 17 00:00:00 2001 From: kzarada Date: Fri, 19 Feb 2021 11:55:23 -0500 Subject: [PATCH 1805/2289] addressing PR comments --- modules/assim.sequential/R/Analysis_sda.R | 14 +--------- .../assim.sequential/R/sda.enkf_refactored.R | 27 +++++++++++-------- modules/assim.sequential/R/sda_plotting.R | 6 ++--- 3 files changed, 20 insertions(+), 27 deletions(-) diff --git a/modules/assim.sequential/R/Analysis_sda.R b/modules/assim.sequential/R/Analysis_sda.R index aaf12f78698..f074de0bf4c 100644 --- a/modules/assim.sequential/R/Analysis_sda.R +++ b/modules/assim.sequential/R/Analysis_sda.R @@ -237,7 +237,7 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 Rmcmc_tobit2space <- buildMCMC(conf_tobit2space) #restarting at good initial conditions is somewhat important here - Cmodel_tobit2space <- compileNimble(tobit2space_pred, showCompilerOutput = TRUE) + Cmodel_tobit2space <- compileNimble(tobit2space_pred) Cmcmc_tobit2space <- compileNimble(Rmcmc_tobit2space, project = tobit2space_pred) for(i in seq_along(X)) { @@ -247,18 +247,6 @@ GEF<-function(settings, Forecast, Observed, H, extraArg, nitr=50000, nburnin=100 valueInCompiledNimbleFunction(Cmcmc_tobit2space$samplerFunctions[[samplerNumberOffset_tobit2space+i]], 'toggle', 1-x.ind[i]) } - # utils::globalVariables(c( - # 'constants.tobit2space', - # 'data.tobit2space', - # 'inits.tobit2space', - # 'tobit2space_pred', - # 'conf_tobit2space', - # 'samplerNumberOffset_tobit2space', - # 'Rmcmc_tobit2space', - # 'Cmodel_tobit2space', - # 'Cmcmc_tobit2space' - # )) - }else{ Cmodel_tobit2space$wts <- wts*nrow(X) diff --git a/modules/assim.sequential/R/sda.enkf_refactored.R b/modules/assim.sequential/R/sda.enkf_refactored.R index 04526060b3b..2b7c6f09840 100644 --- a/modules/assim.sequential/R/sda.enkf_refactored.R +++ b/modules/assim.sequential/R/sda.enkf_refactored.R @@ -292,11 +292,12 @@ sda.enkf <- function(settings, #---------------- setting up the restart argument if(exists('new.state')){ #Has the analysis been run? Yes, then restart from analysis. 
- - if(t == 2){start.time = lubridate::ymd_hms(settings$run$start.date, truncated = 3)}else - if(t != 2){start.time = lubridate::ymd_hms(obs.times[t - 1], truncated = 3)} - - + + if (t == 2) { + start.time = lubridate::ymd_hms(settings$run$start.date, truncated = 3) + } else { + start.time = lubridate::ymd_hms(obs.times[t - 1], truncated = 3) + } restart.arg<-list(runid = run.id, start.time = start.time, stop.time = lubridate::ymd_hms(obs.times[t], truncated = 3), @@ -311,9 +312,11 @@ sda.enkf <- function(settings, } if(t == 1){ - config.settings = settings - config.settings$run$end.date = format(lubridate::ymd_hms(obs.times[t], truncated = 3), "%Y/%m/%d")} - if(t != 1){config.settings = settings} + config.settings = settings + config.settings$run$end.date = format(lubridate::ymd_hms(obs.times[t], truncated = 3), "%Y/%m/%d") + } else { + config.settings = settings + } @@ -452,9 +455,11 @@ sda.enkf <- function(settings, } - - if(is.null(outconfig$samples$met$ids)){wts <- unlist(weight_list[[t]])}else{ - wts <- unlist(weight_list[[t]][outconfig$samples$met$ids])} + if (is.null(outconfig$samples$met$ids)) { + wts <- unlist(weight_list[[t]]) + } else { + wts <- unlist(weight_list[[t]][outconfig$samples$met$ids]) + } #-analysis function enkf.params[[t]] <- Analysis.sda(settings, diff --git a/modules/assim.sequential/R/sda_plotting.R b/modules/assim.sequential/R/sda_plotting.R index dfc75cf0861..8d67d4bb8ed 100755 --- a/modules/assim.sequential/R/sda_plotting.R +++ b/modules/assim.sequential/R/sda_plotting.R @@ -125,7 +125,6 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov #Defining some colors generate_colors_sda() t1 <- 1 - #ylab.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "unit") var.names <- sapply(settings$state.data.assimilation$state.variable, '[[', "variable.name") #---- pdf(file.path(settings$outdir,"SDA", "sda.enkf.time-series.pdf")) @@ -143,8 +142,9 @@ postana.timeser.plotting.sda<-function(settings, t, obs.times, obs.mean, obs.cov YCI <- t(as.matrix(sapply(obs.cov[t1:t], function(x) { if (is.na(x)) { rep(NA, length(names.y)) - }else{ - sqrt(diag(x))} + } else { + sqrt(diag(x)) + } }))) Ybar[is.na(Ybar)]<-0 From 5343a186677c553bf94daefc52067adba7c3ecd2 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 02:31:37 +0530 Subject: [PATCH 1806/2289] final update on base/utils/R/utils.R fixed variables by adding/deleting '.data' in relevant positions. 
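The rule applied here, and in the commits that follow, is: inside data-masking verbs such as `mutate()`, `summarize()`, and `filter()`, columns being read go through the rlang `.data` pronoun, while the name being created stays bare on the left of `=`. A minimal sketch with a toy table (assumes the package imports the pronoun, e.g. via `@importFrom rlang .data`):

```
library(dplyr)

df <- tibble::tibble(mean = c(1, 2, 4), n = rep(3L, 3))

df %>%
  summarize(
    stat = stats::sd(.data$mean) / sqrt(length(.data$n)), # columns read via .data
    mean = mean(.data$mean)                               # created name: bare LHS
  )
```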
--- base/utils/R/utils.R | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 69371b3321b..34229462276 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -212,16 +212,16 @@ summarize.result <- function(result) { .data$control, .data$greenhouse, .data$date, .data$time, .data$cultivar_id, .data$specie_id, .data$name, .data$treatment_id) %>% dplyr::summarize( # stat must be computed first, before n and mean - statname = dplyr::if_else(length(.data$n) == 1, "none", "SE"), - stat = stats::sd(mean) / sqrt(length(n)), - .data$n = length(n), + statname = dplyr::if_else(length(.data$n) == 1, "none", "SE"), + stat = stats::.data$sd(.data$mean) / sqrt(length(.data$n)), + n = length(.data$n), mean = mean(mean) ) %>% dplyr::ungroup() ans2 <- result %>% dplyr::filter(.data$n != 1) %>% # ANS: Silence factor to character conversion warning - dplyr::mutate(.data$statname = as.character(.data$statname)) + dplyr::mutate(statname = as.character(.data$statname)) if (nrow(ans2) > 0) { dplyr::bind_rows(ans1, ans2) } else { @@ -684,7 +684,7 @@ download.file <- function(url, filename, method) { #--------------------------------------------------------------------------------------------------# ##' Retry function X times before stopping in error -##' +##' ##' @title retry.func ##' @name retry.func ##' @description Retry function X times before stopping in error @@ -692,9 +692,9 @@ download.file <- function(url, filename, method) { ##' @param expr The function to try running ##' @param maxErrors The number of times to retry the function ##' @param sleep How long to wait before retrying the function call -##' +##' ##' @return retval returns the results of the function call -##' +##' ##' @examples ##' \dontrun{ ##' dap <- retry.func( @@ -702,7 +702,7 @@ download.file <- function(url, filename, method) { ##' maxErrors=10, ##' sleep=2) ##' } -##' +##' ##' @export ##' @author Shawn Serbin retry.func <- function(expr, isError = function(x) inherits(x, "try-error"), maxErrors = 5, sleep = 0) { @@ -715,7 +715,7 @@ retry.func <- function(expr, isError = function(x) inherits(x, "try-error"), max PEcAn.logger::logger.warn(msg) stop(msg) } else { - msg = sprintf("retry: error in attempt %i/%i [[%s]]", attempts, maxErrors, + msg = sprintf("retry: error in attempt %i/%i [[%s]]", attempts, maxErrors, utils::capture.output(utils::str(retval))) PEcAn.logger::logger.warn(msg) #warning(msg) From d06da082cb583464b2bb0385fa7301361052f5c7 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 02:40:34 +0530 Subject: [PATCH 1807/2289] final update on clone_pft.R removed/added '.data' pronoun where required. 
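`clone_pft()` mixes both tools, which makes the split visible: `.data$` resolves names inside the data frame, while `!!` injects values from the calling environment, and each silences a different flavor of the same check NOTE. A toy sketch (hypothetical `new_name`):

```
library(dplyr)

new_name <- "hypothetical new pft"          # local value, not a column
pfts <- tibble::tibble(name = "parent pft")

pfts %>%
  mutate(name = !!new_name) %>%             # bare LHS; !! splices the local value
  filter(.data$name == !!new_name)          # .data reads the column
```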
--- base/db/R/clone_pft.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/base/db/R/clone_pft.R b/base/db/R/clone_pft.R index ccec2494d69..9460cb85582 100644 --- a/base/db/R/clone_pft.R +++ b/base/db/R/clone_pft.R @@ -45,7 +45,7 @@ clone_pft <- function(parent.pft.name, new.pft <- (parent.pft %>% dplyr::select(-.data$id, -.data$created_at, -.data$updated_at) %>% dplyr::mutate( - .data$name = !!new.pft.name, + name = !!new.pft.name, definition = !!new.pft.definition, parent_id = !!parent.pft$id)) @@ -73,7 +73,7 @@ clone_pft <- function(parent.pft.name, } new_members <- (dplyr::tbl(con, member_tbl) %>% dplyr::filter(.data$pft_id == !!parent.pft$id) - %>% dplyr::mutate(.data$pft_id = !!new.pft$id) + %>% dplyr::mutate(pft_id = !!new.pft$id) %>% dplyr::distinct() %>% dplyr::collect()) @@ -88,7 +88,7 @@ clone_pft <- function(parent.pft.name, new_priors <- (dplyr::tbl(con, "pfts_priors") %>% dplyr::filter(.data$pft_id == !!parent.pft$id) - %>% dplyr::mutate(.data$pft_id = !!new.pft$id) + %>% dplyr::mutate(pft_id = !!new.pft$id) %>% dplyr::distinct() %>% dplyr::collect()) From d09716c237466792f3978153733052aff78367aa Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 03:19:54 +0530 Subject: [PATCH 1808/2289] final update to dbfiles.R added '.data' pronoun to 'file_name' and 'file_path'. --- base/db/R/dbfiles.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 2a2ec9f5765..779c228f45a 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -751,8 +751,8 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = F #get matching dbfiles from BETY dbfile.path = dirname(full.old.file) dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% - dplyr::filter(file_name %in% basename(full.old.file)) %>% - dplyr::filter(file_path %in% dbfile.path) + dplyr::filter(.data$file_name %in% basename(full.old.file)) %>% + dplyr::filter(.data$file_path %in% dbfile.path) #if there are matching db files @@ -813,8 +813,8 @@ dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = F #Error check again to make sure there aren't any matching dbfiles dbfile.path = dirname(full.old.file) dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% - dplyr::filter(file_name %in% basename(full.old.file)) %>% - dplyr::filter(file_path %in% dbfile.path) + dplyr::filter(.data$file_name %in% basename(full.old.file)) %>% + dplyr::filter(.data$file_path %in% dbfile.path) if(dim(dbfiles)[1] > 0){ PEcAn.logger::logger.error("There are still dbfiles matching these files! Canceling link or registration") From a1732c4c35cc0feb058398918d4fddcbed4b8f1e Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 03:31:32 +0530 Subject: [PATCH 1809/2289] final update to get.trait.data.pft.R fixed by removing ' .data ' at appropriate places. 
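This commit handles the inverse case: arguments that take a column name as a plain string, such as `bind_rows(.id = ...)` and `as_tibble(rownames = ...)`, are not data-masked, so the pronoun was wrong there and is reverted to quoted strings. Sketch with toy trait data:

```
library(dplyr)

trait.data <- list(SLA   = tibble::tibble(mean = 30),
                   Vcmax = tibble::tibble(mean = 50))

trait.data %>%
  bind_rows(.id = "trait") %>% # plain string names the new id column
  count(.data$trait)           # once 'trait' is a column, the pronoun applies
```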
--- base/db/R/get.trait.data.pft.R | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/base/db/R/get.trait.data.pft.R b/base/db/R/get.trait.data.pft.R index 8334716df92..3f48fe64416 100644 --- a/base/db/R/get.trait.data.pft.R +++ b/base/db/R/get.trait.data.pft.R @@ -170,8 +170,8 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, PEcAn.logger::logger.debug("Checking if priors have changed") existing_prior <- PEcAn.utils::load_local(need_paths[["priors"]])[["prior.distns"]] diff_prior <- symmetric_setdiff( - dplyr::as_tibble(prior.distns, rownames = ".data$trait"), - dplyr::as_tibble(existing_prior, rownames = ".data$trait") + dplyr::as_tibble(prior.distns, rownames = "trait"), + dplyr::as_tibble(existing_prior, rownames = "trait") ) if (nrow(diff_prior) > 0) { PEcAn.logger::logger.error( @@ -197,9 +197,9 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, } else if (length(trait.data.check) == 0) { PEcAn.logger::logger.warn("New and existing trait data are both empty. Skipping this check.") } else { - current_traits <- dplyr::bind_rows(trait.data.check, .id = ".data$trait") %>% + current_traits <- dplyr::bind_rows(trait.data.check, .id = "trait") %>% dplyr::select(-mean, -.data$stat) - existing_traits <- dplyr::bind_rows(existing_trait_data, .id = ".data$trait") %>% + existing_traits <- dplyr::bind_rows(existing_trait_data, .id = "trait") %>% dplyr::select(-mean, -.data$stat) diff_traits <- symmetric_setdiff(current_traits, existing_traits) if (nrow(diff_traits) > 0) { @@ -274,7 +274,7 @@ get.trait.data.pft <- function(pft, modeltype, dbfiles, dbcon, trait.names, if (length(trait.data) > 0) { trait_counts <- trait.data %>% - dplyr::bind_rows(.id = ".data$trait") %>% + dplyr::bind_rows(.id = "trait") %>% dplyr::count(.data$trait) PEcAn.logger::logger.info( From 703254473a94bca3399842cd8a497a6e042394cf Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 03:36:35 +0530 Subject: [PATCH 1810/2289] final update to insert.format.vars.R removed ' .data' from collect() function and added dplyr Namespace. --- base/db/R/insert.format.vars.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/db/R/insert.format.vars.R b/base/db/R/insert.format.vars.R index 3a660a67a54..e759dde76e4 100644 --- a/base/db/R/insert.format.vars.R +++ b/base/db/R/insert.format.vars.R @@ -48,7 +48,7 @@ insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, head } # Test if format name already exists - name_test <- dplyr::tbl(con, "formats") %>% dplyr::select(.data$id, .data$name) %>% dplyr::filter(.data$name %in% !!format_name) %>% .data$collect() + name_test <- dplyr::tbl(con, "formats") %>% dplyr::select(.data$id, .data$name) %>% dplyr::filter(.data$name %in% !!format_name) %>% dplyr::collect() name_test_df <- as.data.frame(name_test) if(!is.null(name_test_df[1,1])){ PEcAn.logger::logger.error( From 337e00325500ced0f985735c6d13e9bf550c895e Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 03:51:35 +0530 Subject: [PATCH 1811/2289] final update to query.dplyr.R removed ' .data' pronoun from appropriate places. 
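The same principle extends to joins and renames: `by =` in the `*_join()` verbs takes plain strings (it is not data-masked), and a rename's new name stays bare while the source column may use the pronoun. A self-contained sketch of the `runs()` pattern:

```
library(dplyr)

Ensembles <- tibble::tibble(ensemble_id = 10L, workflow_id = 99L)
Runs      <- tibble::tibble(id = 1:2, ensemble_id = 10L)

Runs %>%
  select(run_id = .data$id, .data$ensemble_id) %>% # new name bare, source via .data
  inner_join(Ensembles, by = "ensemble_id")        # plain string, not ".data$..."
```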
--- base/db/R/query.dplyr.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/base/db/R/query.dplyr.R b/base/db/R/query.dplyr.R index dddde24d514..e587925cc2c 100644 --- a/base/db/R/query.dplyr.R +++ b/base/db/R/query.dplyr.R @@ -134,11 +134,11 @@ runs <- function(bety, workflow_id) { Workflows <- workflow(bety, workflow_id) %>% dplyr::select(.data$workflow_id, .data$folder) Ensembles <- dplyr::tbl(bety, "ensembles") %>% - dplyr::select(.data$ensemble_id = .data$id, .data$workflow_id) %>% - dplyr::inner_join(Workflows, by = ".data$workflow_id") + dplyr::select(ensemble_id = .data$id, .data$workflow_id) %>% + dplyr::inner_join(Workflows, by = "workflow_id") Runs <- dplyr::tbl(bety, "runs") %>% - dplyr::select(.data$run_id = .data$id, .data$ensemble_id) %>% - dplyr::inner_join(Ensembles, by = ".data$ensemble_id") + dplyr::select(run_id = .data$id, .data$ensemble_id) %>% + dplyr::inner_join(Ensembles, by = "ensemble_id") dplyr::select(Runs, -.data$workflow_id, -.data$ensemble_id) %>% return() } # runs From 33702b8404df2f0242ec0be78bd74c8606da58c5 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 03:58:27 +0530 Subject: [PATCH 1812/2289] final update to query.pft.R removed ' .data' pronoun from appropriate places. --- base/db/R/query.pft.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/base/db/R/query.pft.R b/base/db/R/query.pft.R index bba0efc19e6..920cacbd5f7 100644 --- a/base/db/R/query.pft.R +++ b/base/db/R/query.pft.R @@ -83,11 +83,11 @@ query.pft_cultivars <- function(pft, modeltype = NULL, con) { suffix = c("", ".cvpft")) %>% dplyr::inner_join( dplyr::tbl(con, "cultivars"), - by = c(".data$cultivar_id" = "id"), + by = c("cultivar_id" = "id"), suffix = c("", ".cv")) %>% dplyr::inner_join( - dplyr::tbl(con, ".data$species"), - by=c(".data$specie_id" = "id"), + dplyr::tbl(con, "species"), + by=c("specie_id" = "id"), suffix=c("", ".sp")) %>% dplyr::select( id = .data$cultivar_id, From ea1f98a38ed38b1d30ce7c4188664978e65be87d Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:02:01 +0530 Subject: [PATCH 1813/2289] final update to search_reference.R removed ' .data' pronoun from appropriate places. 
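This reverts the over-correction from the earlier search_references commit: even when a column is being overwritten, the left-hand side of `mutate()` must be a bare name (`mutate(.data$score = ...)` does not even parse); only the read on the right uses the pronoun. Sketch:

```
library(dplyr)

crdata    <- tibble::tibble(score = c("92", "70"))
min_score <- 85

crdata %>%
  mutate(score = as.numeric(.data$score)) %>% # overwrite in place: bare LHS
  filter(.data$score > !!min_score)
```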
--- base/db/R/search_references.R | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/base/db/R/search_references.R b/base/db/R/search_references.R index 84a08797e7e..80be447dbbf 100644 --- a/base/db/R/search_references.R +++ b/base/db/R/search_references.R @@ -32,7 +32,7 @@ search_reference_single <- function(query, limit = 1, min_score = 85) { return(tibble::tibble(query = query)) } crdata <- crsearch[["data"]] %>% - dplyr::mutate(.data$score = as.numeric(.data$score)) %>% + dplyr::mutate(score = as.numeric(.data$score)) %>% dplyr::filter(.data$score > !!min_score) if (nrow(crdata) < 1) { PEcAn.logger::logger.info( @@ -54,12 +54,12 @@ search_reference_single <- function(query, limit = 1, min_score = 85) { proc_search <- crdata %>% dplyr::mutate( # Get the first author only -- this is the BETY format - .data$author_family = purrr::map(.data$author, list("family", 1)), - .data$author_given = purrr::map(.data$author, list("given", 1)), - .data$author = paste(.data$author_family, .data$author_given, sep = ", "), + author_family = purrr::map(.data$author, list("family", 1)), + author_given = purrr::map(.data$author, list("given", 1)), + author = paste(.data$author_family, .data$author_given, sep = ", "), year = gsub("([[:digit:]]{4}).*", "\\1", .data$issued) %>% as.numeric(), query = query, - .data$score = as.numeric(.data$score) + score = as.numeric(.data$score) ) use_cols <- keep_cols[keep_cols %in% colnames(proc_search)] dplyr::select(proc_search, !!!use_cols) From ef1deb8c54139a6ad7cf8c8005e92415ba2e53ec Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:05:02 +0530 Subject: [PATCH 1814/2289] final update to query.traits.R --- base/db/R/query.traits.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/db/R/query.traits.R b/base/db/R/query.traits.R index 62fd8176b52..5b4dab14c68 100644 --- a/base/db/R/query.traits.R +++ b/base/db/R/query.traits.R @@ -47,8 +47,8 @@ query.traits <- function(ids, priors, con, %>% dplyr::inner_join(dplyr::tbl(con, "variables"), by = c("variable_id" = "id")) %>% dplyr::filter( (!!id_type %in% ids), - (.data$name %in% !!priors)) # TODO: use .data$name when filter supports it - %>% dplyr::distinct(.data$name) # TODO: use .data$name when distinct supports it + (.data$name %in% !!priors)) + %>% dplyr::distinct(.data$name) %>% dplyr::collect()) if (nrow(traits) == 0) { From 4264103df55e958de3f61d28eeeedb37a603aa03 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:12:59 +0530 Subject: [PATCH 1815/2289] updated check_met_input.R fixed tidyverse notes for the remaining variables in check_met_input.R for variables " is_required, cf_standard_name, test_passed, test_raw ". 
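Two of these uses sit in a default argument (`required_vars`), which still works because R evaluates defaults lazily: by the time the `filter()`/`pull()` pipeline runs inside the function, the data mask exists. A sketch with a toy table standing in for `pecan_standard_met_table`:

```
library(dplyr)

variable_table <- tibble::tibble(
  cf_standard_name = c("air_temperature", "surface_downwelling_shortwave_flux_in_air"),
  is_required      = c(TRUE, FALSE)
)

variable_table %>%
  filter(.data$is_required) %>% # logical column used directly
  pull(.data$cf_standard_name)
#> [1] "air_temperature"
```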
--- modules/data.atmosphere/R/check_met_input.R | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/modules/data.atmosphere/R/check_met_input.R b/modules/data.atmosphere/R/check_met_input.R index fde275f760b..870386a254c 100644 --- a/modules/data.atmosphere/R/check_met_input.R +++ b/modules/data.atmosphere/R/check_met_input.R @@ -15,8 +15,8 @@ check_met_input_file <- function(metfile, variable_table = pecan_standard_met_table, required_vars = variable_table %>% - dplyr::filter(is_required) %>% - dplyr::pull(cf_standard_name), + dplyr::filter(.data$is_required) %>% + dplyr::pull(.data$cf_standard_name), warn_unknown = TRUE ) { @@ -82,7 +82,7 @@ check_met_input_file <- function(metfile, target_variable = required_vars, test_passed = required_vars %in% nc_vars, test_error_message = dplyr::if_else( - test_passed, + .data$test_passed, NA_character_, as.character(glue::glue("Missing variable '{target_variable}'.")) ) @@ -94,7 +94,7 @@ check_met_input_file <- function(metfile, test_raw = purrr::map(nc_vars, check_unit, nc = nc, variable_table = variable_table), test_passed = !purrr::map_lgl(test_raw, inherits, "try-error"), test_error_message = purrr::map_chr(test_raw, purrr::possibly(as.character, NA_character_)) - ) %>% dplyr::select(-test_raw) + ) %>% dplyr::select(-.data$test_raw) results_df <- dplyr::bind_rows(test_dims_summary, test_required_vars, test_var_units) @@ -115,7 +115,7 @@ check_unit <- function(variable, nc, variable_table, warn_unknown = TRUE) { return(TRUE) } var_correct_unit <- variable_table %>% - dplyr::filter(cf_standard_name == variable) %>% + dplyr::filter(.data$cf_standard_name == variable) %>% dplyr::pull(units) ncvar_unit <- ncdf4::ncatt_get(nc, variable, "units")[["value"]] try(testthat::expect_true( From f99e0c2ccf0e551a2800a756b35c12c4f5cc03b7 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:28:35 +0530 Subject: [PATCH 1816/2289] updated download.NARR_site.R fixed tidyverse notes for the remaining variables in download.NARR_site.R for variables " CF_name, year, data, month , startdate, hours ". --- .../data.atmosphere/R/download.NARR_site.R | 72 +++++++++---------- 1 file changed, 36 insertions(+), 36 deletions(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index 582474a5275..ae9dd1ab122 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -8,15 +8,15 @@ #' @param overwrite Overwrite existing files? Default=FALSE #' @param verbose Turn on verbose output? Default=FALSE #' @param parallel Download in parallel? Default = TRUE -#' @param ncores Number of cores for parallel download. Default is +#' @param ncores Number of cores for parallel download. 
Default is #' `parallel::detectCores()` #' #' @examples -#' +#' #' \dontrun{ #' download.NARR_site(tempdir(), "2001-01-01", "2001-01-12", 43.372, -89.907) #' } -#' +#' #' #' @export #' @@ -43,14 +43,14 @@ download.NARR_site <- function(outfolder, date_limits_chr <- strftime(range(narr_data$datetime), "%Y-%m-%d %H:%M:%S", tz = "UTC") narr_byyear <- narr_data %>% - dplyr::mutate(year = lubridate::year(datetime)) %>% - dplyr::group_by(year) %>% + dplyr::mutate(year = lubridate::.data$year(datetime)) %>% + dplyr::group_by(.data$year) %>% tidyr::nest() # Prepare result data frame result_full <- narr_byyear %>% dplyr::mutate( - file = file.path(outfolder, paste("NARR", year, "nc", sep = ".")), + file = file.path(outfolder, paste("NARR", .data$year, "nc", sep = ".")), host = PEcAn.remote::fqdn(), start_date = date_limits_chr[1], end_date = date_limits_chr[2], @@ -73,10 +73,10 @@ download.NARR_site <- function(outfolder, narr_proc <- result_full %>% dplyr::mutate( - data_nc = purrr::map2(data, file, prepare_narr_year, lat = lat, lon = lon) + data_nc = purrr::map2(.data$data, file, prepare_narr_year, lat = lat, lon = lon) ) - results <- dplyr::select(result_full, -data) + results <- dplyr::select(result_full, -.data$data) return(invisible(results)) } # download.NARR_site @@ -86,8 +86,8 @@ download.NARR_site <- function(outfolder, #' @param file Full path to target file #' @param lat_nc `ncdim` object for latitude #' @param lon_nc `ncdim` object for longitude -#' @param verbose -#' @return List of NetCDF variables in data. Creates NetCDF file containing +#' @param verbose +#' @return List of NetCDF variables in data. Creates NetCDF file containing #' data as a side effect prepare_narr_year <- function(dat, file, lat_nc, lon_nc, verbose = FALSE) { starttime <- min(dat$datetime) @@ -117,11 +117,11 @@ prepare_narr_year <- function(dat, file, lat_nc, lon_nc, verbose = FALSE) { #' Create `ncvar` object from variable name #' #' @param variable CF variable name -#' @param dims List of NetCDF dimension objects (passed to +#' @param dims List of NetCDF dimension objects (passed to #' `ncdf4::ncvar_def(..., dim)`) #' @return `ncvar` object (from `ncvar_def`) col2ncvar <- function(variable, dims) { - var_info <- narr_all_vars %>% dplyr::filter(CF_name == variable) + var_info <- narr_all_vars %>% dplyr::filter(.data$CF_name == variable) ncdf4::ncvar_def( name = variable, units = var_info$units, @@ -136,9 +136,9 @@ col2ncvar <- function(variable, dims) { #' @param end_date End date for meteorology #' @param lat.in Latitude coordinate #' @param lon.in Longitude coordinate -#' @param progress Whether or not to show a progress bar (default = `TRUE`). +#' @param progress Whether or not to show a progress bar (default = `TRUE`). #' Requires the `progress` package to be installed. -#' @param drop_outside Whether or not to drop dates outside of `start_date` to +#' @param drop_outside Whether or not to drop dates outside of `start_date` to #' `end_date` range (default = `TRUE`). 
#' @inheritParams download.NARR_site #' @return `tibble` containing time series of NARR data for the given site @@ -225,7 +225,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, PEcAn.logger::logger.info("Downloading in parallel") flx_df$flx <- TRUE sfc_df$flx <- FALSE - get_dfs <- dplyr::bind_rows(flx_df, sfc_df) + get_dfs <- dplyr::bind_rows(.data$flx_df, sfc_df) cl <- parallel::makeCluster(ncores) doParallel::registerDoParallel(cl) get_dfs$data <- foreach::`%dopar%`( @@ -236,8 +236,8 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, ), robustly(get_narr_url)(url, xy = xy, flx = flx) ) - flx_data_raw <- dplyr::filter(get_dfs, flx) - sfc_data_raw <- dplyr::filter(get_dfs, !flx) + flx_data_raw <- dplyr::filter(get_dfs, .data$flx) + sfc_data_raw <- dplyr::filter(get_dfs, !.data$flx) } else { # Retrieve remaining variables by iterating over URLs @@ -258,7 +258,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, url, robustly(get_narr_url, n = 20, timeout = 1), xy = xy, - flx = TRUE, + .data$flx = TRUE, pb = pb ) ) @@ -269,7 +269,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, url, robustly(get_narr_url, n = 20, timeout = 1), xy = xy, - flx = FALSE, + .data$flx = FALSE, pb = pb ) ) @@ -296,19 +296,19 @@ post_process <- function(dat) { dat %>% tidyr::unnest(data) %>% dplyr::ungroup() %>% - dplyr::mutate(datetime = startdate + lubridate::dhours(dhours)) %>% - dplyr::select(-startdate, -dhours) %>% + dplyr::mutate(datetime = .data$startdate + lubridate::.data$dhours(.data$dhours)) %>% + dplyr::select(-.data$startdate, -.data$dhours) %>% dplyr::select(datetime, dplyr::everything()) %>% dplyr::select(-url, url) } #' Generate NARR url from a vector of dates #' -#' Figures out file names for the given dates, based on NARR's convoluted and +#' Figures out file names for the given dates, based on NARR's convoluted and #' inconsistent naming scheme. #' #' @param dates Vector of dates for which to generate URL -#' @param flx (Logical) If `TRUE`, format for `flx` variables. Otherwise, +#' @param flx (Logical) If `TRUE`, format for `flx` variables. Otherwise, #' format for `sfc` variables. See [narr_flx_vars]. 
#' @author Alexey Shiklomanov generate_narr_url <- function(dates, flx) { @@ -322,25 +322,25 @@ generate_narr_url <- function(dates, flx) { ) tibble::tibble(date = dates) %>% dplyr::mutate( - year = lubridate::year(date), - month = lubridate::month(date), - daygroup = daygroup(date, flx) + year = lubridate::.data$year(date), + month = lubridate::.data$month(date), + daygroup = daygroup(date, .data$flx) ) %>% - dplyr::group_by(year, month, daygroup) %>% + dplyr::group_by(.data$year, .data$month, daygroup) %>% dplyr::summarize( startdate = min(date), url = sprintf( "%s/%d/NARR%s_%d%02d_%s.tar", base_url, - unique(year), + unique(.data$year), tag, - unique(year), - unique(month), + unique(.data$year), + unique(.data$month), unique(daygroup) ) ) %>% dplyr::ungroup() %>% - dplyr::select(startdate, url) + dplyr::select(.data$startdate, url) } #' Assign daygroup tag for a given date @@ -382,12 +382,12 @@ get_narr_url <- function(url, xy, flx, pb = NULL) { if (dhours[1] == 3) dhours <- dhours - 3 narr_vars <- if (flx) narr_flx_vars else narr_sfc_vars result <- purrr::pmap( - narr_vars %>% dplyr::select(variable = NARR_name, unit = units), + narr_vars %>% dplyr::select(variable = .data$NARR_name, unit = units), read_narr_var, nc = nc, xy = xy, flx = flx, pb = pb ) names(result) <- narr_vars$CF_name - dplyr::bind_cols(dhours = dhours, result) + dplyr::bind_cols(dhours = .data$dhours, result) } #' Read a specific variable from a NARR NetCDF file @@ -445,7 +445,7 @@ narr_all_vars <- dplyr::bind_rows(narr_flx_vars, narr_sfc_vars) #' #' @inheritParams read_narr_var #' @inheritParams get_NARR_thredds -#' @return Vector length 2 containing NARR `x` and `y` indices, which can be +#' @return Vector length 2 containing NARR `x` and `y` indices, which can be #' used in `ncdf4::ncvar_get` `start` argument. #' @author Alexey Shiklomanov latlon2narr <- function(nc, lat.in, lon.in) { @@ -457,11 +457,11 @@ latlon2narr <- function(nc, lat.in, lon.in) { c(x = x_ind, y = y_ind) } -#' Convert latitude and longitude to x-y coordinates (in km) in Lambert +#' Convert latitude and longitude to x-y coordinates (in km) in Lambert #' conformal conic projection (used by NARR) #' #' @inheritParams get_NARR_thredds -#' @return `sp::SpatialPoints` object containing transformed x and y +#' @return `sp::SpatialPoints` object containing transformed x and y #' coordinates, in km, which should match NARR coordinates #' @importFrom rgdal checkCRSArgs # ^not used directly here, but needed by sp::CRS. From 06090fa705bb751ff544493ae8106a8c1b019aee Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:36:07 +0530 Subject: [PATCH 1817/2289] updated downscaling_helper_functions.R fixed tidyverse notes for the remaining variables "doy, rpot, avg.rpot, days" . 
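One caveat, which a follow-up commit further below has to correct: the pronoun belongs on columns, never on function calls. `lubridate::hour(.data$time)` is right; `lubridate::.data$hour(...)` is not. A minimal sketch of the intended form:

```
library(dplyr)

df <- tibble::tibble(time = as.POSIXct("2018-03-02 15:00", tz = "UTC"))

df %>%
  mutate(hour = lubridate::hour(.data$time)) %>% # pronoun on the column only
  mutate(doy  = lubridate::yday(.data$time) + .data$hour / 24)
```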
--- .../R/downscaling_helper_functions.R | 80 +++++++++---------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 1fbf04182a8..2846c8f38f4 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -1,9 +1,9 @@ #' @title Downscale spline to hourly #' @param df, dataframe of data to be downscales -#' @param VarNames, variable names to be downscaled +#' @param VarNames, variable names to be downscaled #' @param hr, hour to downscale to- default is 1 #' @return A dataframe of downscaled state variables -#' @importFrom rlang .data +#' @importFrom rlang .data #' @author Laura Puckett #' @export #' @@ -18,18 +18,18 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ t0 = min(df$time) df <- df %>% dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) - + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) - + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC")) - + for(Var in 1:length(VarNames)){ curr_data <- stats::spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y noaa_data_interp <- cbind(noaa_data_interp, curr_data) } - + names(noaa_data_interp) <- c("time",VarNames) - + return(noaa_data_interp) } @@ -40,32 +40,32 @@ downscale_spline_to_hrly <- function(df,VarNames, hr = 1){ #' @param lat, lat of site #' @param lon, long of site #' @param hr, hour to downscale to- default is 1 -#' @importFrom rlang .data +#' @importFrom rlang .data #' @return ShortWave.ds #' @author Laura Puckett #' @export -#' +#' #' downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ ## downscale shortwave to hourly - + t0 <- min(df$time) df <- df %>% dplyr::select(.data$time, .data$surface_downwelling_shortwave_flux_in_air) %>% dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) %>% dplyr::mutate(lead_var = dplyr::lead(.data$surface_downwelling_shortwave_flux_in_air, 1)) - + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) - + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) - + data.hrly <- noaa_data_interp %>% dplyr::left_join(df, by = "time") - + data.hrly$group_6hr <- NA - + group <- 0 for(i in 1:nrow(data.hrly)){ if(!is.na(data.hrly$lead_var[i])){ @@ -78,39 +78,39 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ data.hrly$group_6hr[i] <- group } } - + ShortWave.ds <- data.hrly %>% - dplyr::mutate(hour = lubridate::hour(.data$time)) %>% - dplyr::mutate(doy = lubridate::yday(.data$time) + hour/(24/hr))%>% - dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry + dplyr::mutate(hour = lubridate::.data$hour(.data$time)) %>% + dplyr::mutate(doy = lubridate::yday(.data$time) + .data$hour/(24/hr))%>% + dplyr::mutate(rpot = downscale_solar_geom(.data$doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry dplyr::group_by(.data$group_6hr) %>% - dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry + dplyr::mutate(avg.rpot = mean(.data$rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry dplyr::ungroup() %>% - dplyr::mutate(surface_downwelling_shortwave_flux_in_air = 
ifelse(avg.rpot > 0, rpot* (.data$surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% + dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(.data$avg.rpot > 0, .data$rpot* (.data$surface_downwelling_shortwave_flux_in_air/.data$avg.rpot),0)) %>% dplyr::select(.data$time,.data$surface_downwelling_shortwave_flux_in_air) - + return(ShortWave.ds) - + } #' @title Downscale repeat to hourly #' @param df, dataframe of data to be downscaled (Longwave) -#' @param varName, variable names to be downscaled -#' @param hr, hour to downscale to- default is 1 +#' @param varName, variable names to be downscaled +#' @param hr, hour to downscale to- default is 1 #' @return A dataframe of downscaled data -#' @importFrom rlang .data +#' @importFrom rlang .data #' @author Laura Puckett #' @export #' downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ - - #bind variables - lead_var <- time <- NULL + + #bind variables + lead_var <- time <- NULL #Get first time point t0 <- min(df$time) - + df <- df %>% dplyr::select("time", all_of(varName)) %>% #Calculate time difference @@ -118,19 +118,19 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ #Shift valued back because the 6hr value represents the average over the #previous 6hr period dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) - + #Create new vector with all hours interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1 / (24 / hr)) - + #Create new data frame noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) - + #Join 1 hr data frame with 6 hr data frame data.hrly <- noaa_data_interp %>% dplyr::left_join(df, by = "time") - + #Fill in hours for(i in 1:nrow(data.hrly)){ if(!is.na(data.hrly$lead_var[i])){ @@ -139,13 +139,13 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ data.hrly$lead_var[i] <- curr } } - + #Clean up data frame data.hrly <- data.hrly %>% dplyr::select("time", .data$lead_var) %>% dplyr::arrange(time) - + names(data.hrly) <- c("time", varName) - + return(data.hrly) } @@ -162,12 +162,12 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ #' @export #' downscale_solar_geom <- function(doy, lon, lat) { - + dt <- stats::median(diff(doy)) * 86400 # average number of seconds in time interval hr <- (doy - floor(doy)) * 24 # hour of day for each element of doy - + ## calculate potential radiation cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) rpot <- 1366 * cosz return(rpot) -} \ No newline at end of file +} From 670b2bd83effc98ee1964461a830ffb1f7b84438 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:38:40 +0530 Subject: [PATCH 1818/2289] updated get_cf_variales_table.R fixed tidyverse notes for the remaining variables " .attrs, canonical_units, description ". 
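The pronoun here also covers the dot-prefixed column left behind by XML parsing (`.attrs`), which is otherwise easy to misread as an accessor. Sketch with a toy row shaped like the parsed CF table:

```
library(dplyr)

entries_df <- tibble::tibble(
  .attrs          = "air_temperature",
  canonical_units = "K",
  description     = "toy description"
)

entries_df %>%
  select(
    cf_standard_name = .data$.attrs, # rename the awkwardly named source column
    unit             = .data$canonical_units,
    .data$description
  )
```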
--- modules/data.atmosphere/R/get_cf_variables_table.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/get_cf_variables_table.R b/modules/data.atmosphere/R/get_cf_variables_table.R index 33260b97400..efc76e2735b 100644 --- a/modules/data.atmosphere/R/get_cf_variables_table.R +++ b/modules/data.atmosphere/R/get_cf_variables_table.R @@ -17,9 +17,9 @@ get_cf_variables_table <- function(cf_url = build_cf_variables_table_url(57)) { purrr::map_dfc(unlist, recursive = TRUE) entries_df %>% dplyr::select( - cf_standard_name = .attrs, - unit = canonical_units, - description, + cf_standard_name = .data$.attrs, + unit = .data$canonical_units, + .data$description, dplyr::everything() ) } From cffae520b049056e88a126a428835664a980a9cf Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 04:48:09 +0530 Subject: [PATCH 1819/2289] updated Rcheck_reference.log deleted (relevant) lines starting : no visible binding for global variable . --- .../tests/Rcheck_reference.log | 31 ------------------- 1 file changed, 31 deletions(-) diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index 49fe2064cc4..737b2389a02 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -81,15 +81,7 @@ cfmet.downscale.subdaily: no visible binding for global variable ‘month’ cfmet.downscale.subdaily: no visible binding for global variable ‘day’ cfmet.downscale.subdaily: no visible binding for global variable ‘hour’ -check_met_input_file: no visible binding for global variable - ‘is_required’ -check_met_input_file: no visible binding for global variable - ‘cf_standard_name’ -check_met_input_file: no visible binding for global variable - ‘test_passed’ -check_met_input_file: no visible binding for global variable ‘test_raw’ check_unit: no visible binding for global variable ‘cf_standard_name’ -col2ncvar: no visible binding for global variable ‘CF_name’ debias.met.regression: no visible global function definition for ‘sd’ debias.met.regression: no visible global function definition for ‘aggregate’ @@ -120,8 +112,6 @@ debias.met.regression: no visible binding for global variable ‘upr’ debias.met.regression: no visible binding for global variable ‘obs’ debias.met.regression: no visible binding for global variable ‘values’ debias.met.regression: no visible binding for global variable ‘Year’ -download.NARR_site: no visible binding for global variable ‘year’ -download.NARR_site: no visible binding for global variable ‘data’ download.NOAA_GEFS_downscale: no visible binding for global variable ‘timestamp’ download.NOAA_GEFS_downscale: no visible binding for global variable @@ -147,14 +137,6 @@ downscale_ShortWave_to_hrly : downscale_solar_geom: no visible global function definition for ‘median’ downscale_ShortWave_to_hrly: no visible binding for global variable ‘timestamp’ -downscale_ShortWave_to_hrly: no visible binding for global variable - ‘hour’ -downscale_ShortWave_to_hrly: no visible binding for global variable - ‘doy’ -downscale_ShortWave_to_hrly: no visible binding for global variable - ‘rpot’ -downscale_ShortWave_to_hrly: no visible binding for global variable - ‘avg.rpot’ downscale_ShortWave_to_hrly: no visible binding for global variable ‘NOAA.member’ downscale_spline_to_hourly : interpolate: no visible global function @@ -166,22 +148,12 @@ downscale_spline_to_hourly: no visible binding for global 
variable ‘dscale.member’ downscale_spline_to_hourly: no visible global function definition for ‘:=’ -downscale_spline_to_hourly: no visible binding for global variable - ‘days’ extract.local.CMIP5: no visible binding for global variable ‘GCM’ extract.nc.ERA5 : : no visible global function definition for ‘setNames’ extract.nc.ERA5: no visible global function definition for ‘setNames’ extract.nc.ERA5 : : no visible binding for global variable ‘.’ -generate_narr_url: no visible binding for global variable ‘year’ -generate_narr_url: no visible binding for global variable ‘month’ -generate_narr_url: no visible binding for global variable ‘startdate’ -get_cf_variables_table: no visible binding for global variable ‘.attrs’ -get_cf_variables_table: no visible binding for global variable - ‘canonical_units’ -get_cf_variables_table: no visible binding for global variable - ‘description’ get_NARR_thredds: no visible binding for global variable ‘latitude’ get_NARR_thredds: no visible binding for global variable ‘longitude’ get_NARR_thredds: no visible binding for global variable ‘flx’ @@ -227,9 +199,6 @@ model.train: no visible global function definition for ‘lm’ model.train: no visible global function definition for ‘coef’ model.train: no visible global function definition for ‘vcov’ model.train: no visible global function definition for ‘resid’ -post_process: no visible binding for global variable ‘data’ -post_process: no visible binding for global variable ‘startdate’ -post_process: no visible binding for global variable ‘dhours’ subdaily_pred: no visible global function definition for ‘model.matrix’ Undefined global functions or variables: := . .attrs aggregate air_pressure air_temperature avg.rpot From d44bf68b3b6c7419f97c9dc6dfa27a51c9d04b22 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 20 Feb 2021 09:46:02 +0000 Subject: [PATCH 1820/2289] automated documentation update --- base/db/man/clone_pft.Rd | 4 ++-- base/db/man/dbfile.move.Rd | 8 ++++---- base/db/man/insert.format.vars.Rd | 4 ++-- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/base/db/man/clone_pft.Rd b/base/db/man/clone_pft.Rd index 824f15683f7..b21629a3d39 100644 --- a/base/db/man/clone_pft.Rd +++ b/base/db/man/clone_pft.Rd @@ -21,8 +21,8 @@ ID of the newly created pft in database, creates new PFT as a side effect \description{ Creates a new pft that is a duplicate of an existing pft, including relationships with priors, species, and cultivars (if any) of the existing pft. -This function mimics the 'clone pft' button in the PFTs record view page in the -BETYdb web interface for PFTs that aggregate >=1 species, but adds the ability to +This function mimics the 'clone pft' button in the PFTs record view page in the +BETYdb web interface for PFTs that aggregate >=1 species, but adds the ability to clone the cultivar associations. 
} \examples{ diff --git a/base/db/man/dbfile.move.Rd b/base/db/man/dbfile.move.Rd index 4bbe67216cd..9e0a2d57546 100644 --- a/base/db/man/dbfile.move.Rd +++ b/base/db/man/dbfile.move.Rd @@ -21,16 +21,16 @@ dbfile.move(old.dir, new.dir, file.type, siteid = NULL, register = FALSE) print statement of how many files were moved, registered, or have symbolic links } \description{ -This function will move dbfiles - clim or nc - from one location +This function will move dbfiles - clim or nc - from one location to another on the same machine and update BETY } \examples{ \dontrun{ dbfile.move( - old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676", + old.dir = "/fs/data3/kzarada/pecan.data/dbfiles/NOAA_GEFS_site_0-676", new.dir = '/projectnb/dietzelab/pecan.data/dbfiles/NOAA_GEFS_site_0-676' - file.type= clim, - siteid = 676, + file.type= clim, + siteid = 676, register = TRUE ) } diff --git a/base/db/man/insert.format.vars.Rd b/base/db/man/insert.format.vars.Rd index b0902645918..e9adf10a81a 100644 --- a/base/db/man/insert.format.vars.Rd +++ b/base/db/man/insert.format.vars.Rd @@ -39,10 +39,10 @@ format_id Insert Format and Format-Variable Records } \details{ -The formats_variables argument must be a 'tibble' and be structured in a specific format so that the SQL query functions properly. All arguments should be passed as vectors so that each entry will correspond with a specific row. All empty values should be specified as NA. +The formats_variables argument must be a 'tibble' and be structured in a specific format so that the SQL query functions properly. All arguments should be passed as vectors so that each entry will correspond with a specific row. All empty values should be specified as NA. \describe{ \item{variable_id}{(Required) Vector of integers.} -\item{name}{(Optional) Vector of character strings. The variable name in the imported data need only be specified if it differs from the BETY variable name.} +\item{name}{(Optional) Vector of character strings. The variable name in the imported data need only be specified if it differs from the BETY variable name.} \item{unit}{(Optional) Vector of type character string. Should be in a format parseable by the udunits library and need only be secified if the units of the data in the file differ from the BETY standard.} \item{storage_type}{(Optional) Vector of character strings. Storage type need only be specified if the variable is stored in a format other than would be expected (e.g. if numeric values are stored as quoted character strings). Additionally, storage_type stores POSIX codes that are used to store any time variables (e.g. a column with a 4-digit year would be \%Y). See also \code{[base::strptime]}} \item{column_number}{Vector of integers that list the column numbers associated with variables in a dataset. 
Required for text files that lack headers.}} From 907b52b5aea6e7c5bc87b352dc8cbe17bf2b8bb9 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 16:10:00 +0530 Subject: [PATCH 1821/2289] final update to utils.R fixes in line 30 and line 216 --- base/utils/R/utils.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 34229462276..dafe319eb3c 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -27,7 +27,7 @@ ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { -nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] + nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] dims <- list() if (nrow(nc_var) == 0) { @@ -213,7 +213,7 @@ summarize.result <- function(result) { .data$cultivar_id, .data$specie_id, .data$name, .data$treatment_id) %>% dplyr::summarize( # stat must be computed first, before n and mean statname = dplyr::if_else(length(.data$n) == 1, "none", "SE"), - stat = stats::.data$sd(.data$mean) / sqrt(length(.data$n)), + stat = stats::sd(.data$mean) / sqrt(length(.data$n)), n = length(.data$n), mean = mean(mean) ) %>% From 71fa93e23d34b9eb2508fa09499379fa5475606e Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 16:20:26 +0530 Subject: [PATCH 1822/2289] updated Rcheck_reference.log removed relevant "no visible binding for global variable" lines from the file for variables that have been fixed. --- base/utils/tests/Rcheck_reference.log | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index e4c1b61381d..347c3776280 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -48,7 +48,6 @@ Non-standard file/directory found at top level: * checking foreign function calls ... OK * checking R code for possible problems ... NOTE convert.input: no visible binding for global variable ‘settings’ -convert.input: no visible binding for global variable ‘id’ convert.input : log_format_df: no visible binding for global variable ‘.’ get.results: no visible binding for global variable ‘trait.samples’ @@ -72,8 +71,7 @@ run.write.configs: no visible binding for global variable run.write.configs: no visible binding for global variable ‘sa.samples’ run.write.configs: no visible binding for global variable ‘ensemble.samples’ -summarize.result: no visible binding for global variable ‘n’ -summarize.result: no visible binding for global variable ‘statname’ + Undefined global functions or variables: . citation_id control cultivar_id ensemble.samples get.parameter.samples greenhouse id median n nr runs.samples @@ -219,5 +217,4 @@ Status: 5 WARNINGs, 5 NOTEs See ‘/tmp/Rtmp6InlXl/PEcAn.utils.Rcheck/00check.log’ for details. - - +s From b70f7451d927d1e44a4bd60a522c419ddc87ddfb Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 17:15:12 +0530 Subject: [PATCH 1823/2289] final update to download.NARR_site.R fixed the mistakes that were mentioned. 
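Concretely, the mistakes were places where `.data$` had been attached to things that are not columns of the masked data frame (`lubridate` calls, the local `flx_df` and `dhours` objects) and, conversely, columns such as `date` and `units` that still needed it. A compact sketch of the corrected idiom (toy `dates`):

```
library(dplyr)

dates <- as.Date("2001-01-01") + 0:40

tibble::tibble(date = dates) %>%
  mutate(
    year  = lubridate::year(.data$date),  # not lubridate::.data$year(...)
    month = lubridate::month(.data$date)
  ) %>%
  group_by(.data$year, .data$month) %>%
  summarize(startdate = min(.data$date)) %>%
  ungroup()
```

Two strays do survive this pass, `.data$flx = TRUE/FALSE` used as an argument name (which does not even parse), and are removed in follow-up commits below.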
--- .../data.atmosphere/R/download.NARR_site.R | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index ae9dd1ab122..aa9f942a4a3 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -43,7 +43,7 @@ download.NARR_site <- function(outfolder, date_limits_chr <- strftime(range(narr_data$datetime), "%Y-%m-%d %H:%M:%S", tz = "UTC") narr_byyear <- narr_data %>% - dplyr::mutate(year = lubridate::.data$year(datetime)) %>% + dplyr::mutate(year = lubridate::year(datetime)) %>% dplyr::group_by(.data$year) %>% tidyr::nest() @@ -73,7 +73,7 @@ download.NARR_site <- function(outfolder, narr_proc <- result_full %>% dplyr::mutate( - data_nc = purrr::map2(.data$data, file, prepare_narr_year, lat = lat, lon = lon) + data_nc = purrr::map2(.data$data, .data$file, prepare_narr_year, lat = lat, lon = lon) ) results <- dplyr::select(result_full, -.data$data) @@ -225,7 +225,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, PEcAn.logger::logger.info("Downloading in parallel") flx_df$flx <- TRUE sfc_df$flx <- FALSE - get_dfs <- dplyr::bind_rows(.data$flx_df, sfc_df) + get_dfs <- dplyr::bind_rows(flx_df, sfc_df) cl <- parallel::makeCluster(ncores) doParallel::registerDoParallel(cl) get_dfs$data <- foreach::`%dopar%`( @@ -296,7 +296,7 @@ post_process <- function(dat) { dat %>% tidyr::unnest(data) %>% dplyr::ungroup() %>% - dplyr::mutate(datetime = .data$startdate + lubridate::.data$dhours(.data$dhours)) %>% + dplyr::mutate(datetime = .data$startdate + lubridate::dhours(.data$dhours)) %>% dplyr::select(-.data$startdate, -.data$dhours) %>% dplyr::select(datetime, dplyr::everything()) %>% dplyr::select(-url, url) @@ -322,13 +322,13 @@ generate_narr_url <- function(dates, flx) { ) tibble::tibble(date = dates) %>% dplyr::mutate( - year = lubridate::.data$year(date), - month = lubridate::.data$month(date), - daygroup = daygroup(date, .data$flx) + year = lubridate::year(.data$date), + month = lubridate::month(.data$date), + daygroup = daygroup(.data$date, .data$flx) ) %>% - dplyr::group_by(.data$year, .data$month, daygroup) %>% + dplyr::group_by(.data$year, .data$month, .data$daygroup) %>% dplyr::summarize( - startdate = min(date), + startdate = min(.data$date), url = sprintf( "%s/%d/NARR%s_%d%02d_%s.tar", base_url, @@ -336,11 +336,11 @@ generate_narr_url <- function(dates, flx) { tag, unique(.data$year), unique(.data$month), - unique(daygroup) + unique(.data$daygroup) ) ) %>% dplyr::ungroup() %>% - dplyr::select(.data$startdate, url) + dplyr::select(.data$startdate, .data$url) } #' Assign daygroup tag for a given date @@ -382,12 +382,12 @@ get_narr_url <- function(url, xy, flx, pb = NULL) { if (dhours[1] == 3) dhours <- dhours - 3 narr_vars <- if (flx) narr_flx_vars else narr_sfc_vars result <- purrr::pmap( - narr_vars %>% dplyr::select(variable = .data$NARR_name, unit = units), + narr_vars %>% dplyr::select(variable = .data$NARR_name, unit = .data$units), read_narr_var, nc = nc, xy = xy, flx = flx, pb = pb ) names(result) <- narr_vars$CF_name - dplyr::bind_cols(dhours = .data$dhours, result) + dplyr::bind_cols(dhours = dhours, result) } #' Read a specific variable from a NARR NetCDF file From 7ec7fbd87030633e846f9291bdbb8b7964d48dc8 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Sat, 20 Feb 2021 17:17:15 +0530 Subject: [PATCH 1824/2289] final update to 
downscaling_helper_functions.R removed ' .data' pronoun from hour() function in line 83 . --- modules/data.atmosphere/R/downscaling_helper_functions.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 2846c8f38f4..47962899f87 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -80,7 +80,7 @@ downscale_ShortWave_to_hrly <- function(df,lat, lon, hr = 1){ } ShortWave.ds <- data.hrly %>% - dplyr::mutate(hour = lubridate::.data$hour(.data$time)) %>% + dplyr::mutate(hour = lubridate::hour(.data$time)) %>% dplyr::mutate(doy = lubridate::yday(.data$time) + .data$hour/(24/hr))%>% dplyr::mutate(rpot = downscale_solar_geom(.data$doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry dplyr::group_by(.data$group_6hr) %>% From 751f9c09d5097d91ede700acf39558c7b665f9fa Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 20 Feb 2021 13:46:21 +0100 Subject: [PATCH 1825/2289] spacing --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index dafe319eb3c..dad8f489010 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -27,7 +27,7 @@ ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { - nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] + nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] dims <- list() if (nrow(nc_var) == 0) { From 40f9c4257b92bd9575d067e21d67f35031c0ffe7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 20 Feb 2021 13:47:28 +0100 Subject: [PATCH 1826/2289] typo --- base/utils/tests/Rcheck_reference.log | 1 - 1 file changed, 1 deletion(-) diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index 347c3776280..b9fd01511c6 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -217,4 +217,3 @@ Status: 5 WARNINGs, 5 NOTEs See ‘/tmp/Rtmp6InlXl/PEcAn.utils.Rcheck/00check.log’ for details. -s From 065aba26f09ebdb8bdc8640b93df235068b3d829 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 20 Feb 2021 15:08:59 +0100 Subject: [PATCH 1827/2289] Update base/utils/tests/Rcheck_reference.log --- base/utils/tests/Rcheck_reference.log | 1 + 1 file changed, 1 insertion(+) diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index b9fd01511c6..e2211086872 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -48,6 +48,7 @@ Non-standard file/directory found at top level: * checking foreign function calls ... OK * checking R code for possible problems ... 
NOTE convert.input: no visible binding for global variable ‘settings’ +convert.input: no visible binding for global variable ‘id’ convert.input : log_format_df: no visible binding for global variable ‘.’ get.results: no visible binding for global variable ‘trait.samples’ From 4ecfa00bf34b8707f089ecd1c229dd6a46b4d469 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 20 Feb 2021 17:36:36 +0100 Subject: [PATCH 1828/2289] Update modules/data.atmosphere/R/download.NARR_site.R --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index aa9f942a4a3..b1737f5c1d3 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -258,7 +258,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, url, robustly(get_narr_url, n = 20, timeout = 1), xy = xy, - .data$flx = TRUE, + flx = TRUE, pb = pb ) ) From e9eba91bcbb0b8d8f29fc45837b8b30f64efb869 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 20 Feb 2021 17:36:45 +0100 Subject: [PATCH 1829/2289] Update modules/data.atmosphere/R/download.NARR_site.R --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index b1737f5c1d3..25ebc8e95af 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -269,7 +269,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, url, robustly(get_narr_url, n = 20, timeout = 1), xy = xy, - .data$flx = FALSE, + flx = FALSE, pb = pb ) ) From e7cfb5b3d0976a8d021563c6e6eccd2396d3f297 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 20 Feb 2021 19:26:17 +0000 Subject: [PATCH 1830/2289] automated documentation update --- modules/data.atmosphere/man/check_met_input_file.Rd | 4 ++-- modules/data.atmosphere/man/col2ncvar.Rd | 2 +- modules/data.atmosphere/man/download.NARR_site.Rd | 2 +- modules/data.atmosphere/man/generate_narr_url.Rd | 4 ++-- modules/data.atmosphere/man/get_NARR_thredds.Rd | 4 ++-- modules/data.atmosphere/man/get_narr_url.Rd | 2 +- modules/data.atmosphere/man/latlon2lcc.Rd | 6 +++--- modules/data.atmosphere/man/latlon2narr.Rd | 2 +- modules/data.atmosphere/man/prepare_narr_year.Rd | 4 +--- modules/data.atmosphere/man/read_narr_var.Rd | 2 +- 10 files changed, 15 insertions(+), 17 deletions(-) diff --git a/modules/data.atmosphere/man/check_met_input_file.Rd b/modules/data.atmosphere/man/check_met_input_file.Rd index 9ff96a8ecb5..dce4db0c1d0 100644 --- a/modules/data.atmosphere/man/check_met_input_file.Rd +++ b/modules/data.atmosphere/man/check_met_input_file.Rd @@ -7,8 +7,8 @@ check_met_input_file( metfile, variable_table = pecan_standard_met_table, - required_vars = variable_table \%>\% dplyr::filter(is_required) \%>\% - dplyr::pull(cf_standard_name), + required_vars = variable_table \%>\% dplyr::filter(.data$is_required) \%>\% + dplyr::pull(.data$cf_standard_name), warn_unknown = TRUE ) } diff --git a/modules/data.atmosphere/man/col2ncvar.Rd b/modules/data.atmosphere/man/col2ncvar.Rd index ec2738697cf..481bc75acf6 100644 --- a/modules/data.atmosphere/man/col2ncvar.Rd +++ b/modules/data.atmosphere/man/col2ncvar.Rd @@ -9,7 +9,7 @@ col2ncvar(variable, dims) \arguments{ \item{variable}{CF variable name} -\item{dims}{List of NetCDF dimension objects 
(passed to +\item{dims}{List of NetCDF dimension objects (passed to `ncdf4::ncvar_def(..., dim)`)} } \value{ diff --git a/modules/data.atmosphere/man/download.NARR_site.Rd b/modules/data.atmosphere/man/download.NARR_site.Rd index fe8653a579a..d310c6a2060 100644 --- a/modules/data.atmosphere/man/download.NARR_site.Rd +++ b/modules/data.atmosphere/man/download.NARR_site.Rd @@ -35,7 +35,7 @@ download.NARR_site( \item{parallel}{Download in parallel? Default = TRUE} -\item{ncores}{Number of cores for parallel download. Default is +\item{ncores}{Number of cores for parallel download. Default is `parallel::detectCores()`} } \description{ diff --git a/modules/data.atmosphere/man/generate_narr_url.Rd b/modules/data.atmosphere/man/generate_narr_url.Rd index 40980a60db3..5fb6e1b844a 100644 --- a/modules/data.atmosphere/man/generate_narr_url.Rd +++ b/modules/data.atmosphere/man/generate_narr_url.Rd @@ -9,11 +9,11 @@ generate_narr_url(dates, flx) \arguments{ \item{dates}{Vector of dates for which to generate URL} -\item{flx}{(Logical) If `TRUE`, format for `flx` variables. Otherwise, +\item{flx}{(Logical) If `TRUE`, format for `flx` variables. Otherwise, format for `sfc` variables. See [narr_flx_vars].} } \description{ -Figures out file names for the given dates, based on NARR's convoluted and +Figures out file names for the given dates, based on NARR's convoluted and inconsistent naming scheme. } \author{ diff --git a/modules/data.atmosphere/man/get_NARR_thredds.Rd b/modules/data.atmosphere/man/get_NARR_thredds.Rd index 1d11266330a..58896d47f1a 100644 --- a/modules/data.atmosphere/man/get_NARR_thredds.Rd +++ b/modules/data.atmosphere/man/get_NARR_thredds.Rd @@ -27,12 +27,12 @@ get_NARR_thredds( \item{progress}{Whether or not to show a progress bar (default = `TRUE`). Requires the `progress` package to be installed.} -\item{drop_outside}{Whether or not to drop dates outside of `start_date` to +\item{drop_outside}{Whether or not to drop dates outside of `start_date` to `end_date` range (default = `TRUE`).} \item{parallel}{Download in parallel? Default = TRUE} -\item{ncores}{Number of cores for parallel download. Default is +\item{ncores}{Number of cores for parallel download. Default is `parallel::detectCores()`} } \value{ diff --git a/modules/data.atmosphere/man/get_narr_url.Rd b/modules/data.atmosphere/man/get_narr_url.Rd index 218e835c031..bc80fc4c05c 100644 --- a/modules/data.atmosphere/man/get_narr_url.Rd +++ b/modules/data.atmosphere/man/get_narr_url.Rd @@ -11,7 +11,7 @@ get_narr_url(url, xy, flx, pb = NULL) \item{xy}{Vector length 2 containing NARR coordinates} -\item{flx}{(Logical) If `TRUE`, format for `flx` variables. Otherwise, +\item{flx}{(Logical) If `TRUE`, format for `flx` variables. Otherwise, format for `sfc` variables. 
See [narr_flx_vars].} \item{pb}{Progress bar R6 object (default = `NULL`)} diff --git a/modules/data.atmosphere/man/latlon2lcc.Rd b/modules/data.atmosphere/man/latlon2lcc.Rd index cbd47e269d9..7db1f0ae231 100644 --- a/modules/data.atmosphere/man/latlon2lcc.Rd +++ b/modules/data.atmosphere/man/latlon2lcc.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/download.NARR_site.R \name{latlon2lcc} \alias{latlon2lcc} -\title{Convert latitude and longitude to x-y coordinates (in km) in Lambert +\title{Convert latitude and longitude to x-y coordinates (in km) in Lambert conformal conic projection (used by NARR)} \usage{ latlon2lcc(lat.in, lon.in) @@ -13,11 +13,11 @@ latlon2lcc(lat.in, lon.in) \item{lon.in}{Longitude coordinate} } \value{ -`sp::SpatialPoints` object containing transformed x and y +`sp::SpatialPoints` object containing transformed x and y coordinates, in km, which should match NARR coordinates } \description{ -Convert latitude and longitude to x-y coordinates (in km) in Lambert +Convert latitude and longitude to x-y coordinates (in km) in Lambert conformal conic projection (used by NARR) } \author{ diff --git a/modules/data.atmosphere/man/latlon2narr.Rd b/modules/data.atmosphere/man/latlon2narr.Rd index 3f99b2c53ed..e9895bb0fa7 100644 --- a/modules/data.atmosphere/man/latlon2narr.Rd +++ b/modules/data.atmosphere/man/latlon2narr.Rd @@ -14,7 +14,7 @@ latlon2narr(nc, lat.in, lon.in) \item{lon.in}{Longitude coordinate} } \value{ -Vector length 2 containing NARR `x` and `y` indices, which can be +Vector length 2 containing NARR `x` and `y` indices, which can be used in `ncdf4::ncvar_get` `start` argument. } \description{ diff --git a/modules/data.atmosphere/man/prepare_narr_year.Rd b/modules/data.atmosphere/man/prepare_narr_year.Rd index 237bec25fc9..50d72c9b4ae 100644 --- a/modules/data.atmosphere/man/prepare_narr_year.Rd +++ b/modules/data.atmosphere/man/prepare_narr_year.Rd @@ -14,11 +14,9 @@ prepare_narr_year(dat, file, lat_nc, lon_nc, verbose = FALSE) \item{lat_nc}{`ncdim` object for latitude} \item{lon_nc}{`ncdim` object for longitude} - -\item{verbose}{} } \value{ -List of NetCDF variables in data. Creates NetCDF file containing +List of NetCDF variables in data. Creates NetCDF file containing data as a side effect } \description{ diff --git a/modules/data.atmosphere/man/read_narr_var.Rd b/modules/data.atmosphere/man/read_narr_var.Rd index 59e0c16002a..7307aa7028f 100644 --- a/modules/data.atmosphere/man/read_narr_var.Rd +++ b/modules/data.atmosphere/man/read_narr_var.Rd @@ -15,7 +15,7 @@ read_narr_var(nc, xy, variable, unit, flx, pb = NULL) \item{unit}{Output unit of variable to retrieve} -\item{flx}{(Logical) If `TRUE`, format for `flx` variables. Otherwise, +\item{flx}{(Logical) If `TRUE`, format for `flx` variables. Otherwise, format for `sfc` variables. 
See [narr_flx_vars].} \item{pb}{Progress bar R6 object (default = `NULL`)} From c4e5cddeae2317e4c9fd1ed9c91566086d3266c0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 21 Feb 2021 22:39:18 +0100 Subject: [PATCH 1831/2289] indentation fix Mostly to trigger a CI rebuild --- base/db/R/search_references.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/base/db/R/search_references.R b/base/db/R/search_references.R index 80be447dbbf..1e59df4e2c2 100644 --- a/base/db/R/search_references.R +++ b/base/db/R/search_references.R @@ -54,12 +54,12 @@ search_reference_single <- function(query, limit = 1, min_score = 85) { proc_search <- crdata %>% dplyr::mutate( # Get the first author only -- this is the BETY format - author_family = purrr::map(.data$author, list("family", 1)), - author_given = purrr::map(.data$author, list("given", 1)), - author = paste(.data$author_family, .data$author_given, sep = ", "), + author_family = purrr::map(.data$author, list("family", 1)), + author_given = purrr::map(.data$author, list("given", 1)), + author = paste(.data$author_family, .data$author_given, sep = ", "), year = gsub("([[:digit:]]{4}).*", "\\1", .data$issued) %>% as.numeric(), query = query, - score = as.numeric(.data$score) + score = as.numeric(.data$score) ) use_cols <- keep_cols[keep_cols %in% colnames(proc_search)] dplyr::select(proc_search, !!!use_cols) From 644842d1a1fb224298a81e31b7daaa04cced193e Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 21 Feb 2021 22:40:40 +0100 Subject: [PATCH 1832/2289] Update base/db/R/insert.format.vars.R Co-authored-by: Alexey Shiklomanov --- base/db/R/insert.format.vars.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/db/R/insert.format.vars.R b/base/db/R/insert.format.vars.R index e759dde76e4..95cafbf6c87 100644 --- a/base/db/R/insert.format.vars.R +++ b/base/db/R/insert.format.vars.R @@ -124,7 +124,7 @@ insert.format.vars <- function(con, format_name, mimetype_id, notes = NULL, head ### udunit tests ### for(i in 1:nrow(formats_variables)){ u1 <- formats_variables[1,"unit"] - u2 <- dplyr::tbl(con, "variables") %>% dplyr::select(.data$id, units) %>% dplyr::filter(.data$id %in% !!formats_variables[[1, "variable_id"]]) %>% dplyr::pull(units) + u2 <- dplyr::tbl(con, "variables") %>% dplyr::select(.data$id, units) %>% dplyr::filter(.data$id %in% !!formats_variables[[1, "variable_id"]]) %>% dplyr::pull(.data$units) if(!udunits2::ud.is.parseable(u1)){ PEcAn.logger::logger.error( From aa419c78bf788ac3f06c5d9add4e99673c98dd3b Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 21 Feb 2021 22:56:48 +0100 Subject: [PATCH 1833/2289] indentation mostly to trigger a CI run --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index 25ebc8e95af..853ecdc70fe 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -323,7 +323,7 @@ generate_narr_url <- function(dates, flx) { tibble::tibble(date = dates) %>% dplyr::mutate( year = lubridate::year(.data$date), - month = lubridate::month(.data$date), + month = lubridate::month(.data$date), daygroup = daygroup(.data$date, .data$flx) ) %>% dplyr::group_by(.data$year, .data$month, .data$daygroup) %>% From 6ac602d2995ce38b8bfb02c5984ce0b2a2f02852 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 22 Feb 2021 11:59:46 +0100 Subject: [PATCH 1834/2289] trivial 
whitespace change to trigger CI build --- modules/benchmark/R/load_data.R | 2 -- 1 file changed, 2 deletions(-) diff --git a/modules/benchmark/R/load_data.R b/modules/benchmark/R/load_data.R index 5ba4193165d..41d65e0c8aa 100644 --- a/modules/benchmark/R/load_data.R +++ b/modules/benchmark/R/load_data.R @@ -31,7 +31,6 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = vars.used.index <- setdiff(seq_along(format$vars$variable_id), format$time.row) } - # Determine the function that should be used to load the data mimetype <- gsub("-", "_", format$mimetype) fcn1 <- paste0("load_", format$file_name) @@ -141,4 +140,3 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = return(out) } # load_data - From 33396910743cef51f6fde242e726bd003ad55070 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Mon, 22 Feb 2021 19:31:13 +0530 Subject: [PATCH 1835/2289] update query_pfts.R added ' .data ' pronoun to appropriate places. --- base/db/R/query_pfts.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/db/R/query_pfts.R b/base/db/R/query_pfts.R index 2b9ed77400a..0015a1d790d 100644 --- a/base/db/R/query_pfts.R +++ b/base/db/R/query_pfts.R @@ -11,10 +11,10 @@ #' @export query_pfts <- function(dbcon, pft_names, modeltype = NULL, strict = FALSE) { pftres <- (dplyr::tbl(dbcon, "pfts") - %>% dplyr::filter(name %in% !!pft_names)) + %>% dplyr::filter(.data$name %in% !!pft_names)) if (!is.null(modeltype)) { pftres <- (pftres %>% dplyr::semi_join( - (dplyr::tbl(dbcon, "modeltypes") %>% dplyr::filter(name == !!modeltype)), + (dplyr::tbl(dbcon, "modeltypes") %>% dplyr::filter(.data$name == !!modeltype)), by = c("modeltype_id" = "id"))) } result <- (pftres From 8fabcd88c5bc7f46a1f9076c20bf0c29acf684ad Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Mon, 22 Feb 2021 19:51:20 +0530 Subject: [PATCH 1836/2289] updated DESCRIPTION --- modules/benchmark/DESCRIPTION | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 8e1038d7b22..ae4d5082f66 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -15,27 +15,29 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific streamline the interaction between data and models, and to improve the efficacy of scientific investigation. 
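The rest of this hunk re-sorts the Imports field alphabetically
(case-insensitively). A rough way to verify such an ordering — an
illustrative sketch only, assuming the package root as the working
directory — is:

```r
imports <- read.dcf("DESCRIPTION", fields = "Imports")[1, "Imports"]
pkgs <- trimws(strsplit(imports, ",")[[1]])
pkgs <- sub("\\s*\\(.*\\)$", "", pkgs)  # drop version bounds like (>= 1.6.0)
stopifnot(identical(pkgs, pkgs[order(tolower(pkgs))]))
```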
Imports: + dbplyr, + dplyr, + ggplot2, + gridExtra, + lubridate (>= 1.6.0), + magrittr, + ncdf4 (>= 1.15), PEcAn.DB, PEcAn.logger, PEcAn.remote, PEcAn.settings, PEcAn.utils, - lubridate (>= 1.6.0), - ncdf4 (>= 1.15), - udunits2 (>= 0.11), - XML (>= 3.98-1.4), - dplyr, - ggplot2, - gridExtra, reshape2, - dbplyr, SimilarityMeasures, - zoo, stringr + udunits2 (>= 0.11), + XML (>= 3.98-1.4), + zoo, + Suggests: PEcAn.data.land, testthat (>= 2.0.0), - BrownDog + BrownDog, License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes From 603d20ef25dd8820e5fee1e9813ada3944560c15 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 22 Feb 2021 09:29:47 -0500 Subject: [PATCH 1837/2289] Fix commas in benchmark DESCRIPTION --- modules/benchmark/DESCRIPTION | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index ae4d5082f66..e9e4acd36e2 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -29,15 +29,15 @@ Imports: PEcAn.utils, reshape2, SimilarityMeasures, - stringr + stringr, udunits2 (>= 0.11), XML (>= 3.98-1.4), - zoo, + zoo Suggests: PEcAn.data.land, testthat (>= 2.0.0), - BrownDog, + BrownDog License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes From 566d329a5f3567a55116b970e52483607083e791 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Mon, 22 Feb 2021 20:11:38 +0530 Subject: [PATCH 1838/2289] Update DESCRIPTION --- modules/benchmark/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index e9e4acd36e2..40eeb14431a 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -32,7 +32,7 @@ Imports: stringr, udunits2 (>= 0.11), XML (>= 3.98-1.4), - zoo + zoo Suggests: PEcAn.data.land, From 6a4c6f979f2a94c9031fa44b92dd2d5b711f60ed Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 22 Feb 2021 13:58:56 -0500 Subject: [PATCH 1839/2289] BUMP DEFAULT R SUPPORT TO 4.0.3 --- .github/workflows/book.yml | 2 +- .github/workflows/ci.yml | 8 ++++---- .github/workflows/depends.yml | 4 ++-- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml index 76c1f891598..25301005721 100644 --- a/.github/workflows/book.yml +++ b/.github/workflows/book.yml @@ -16,7 +16,7 @@ jobs: runs-on: ubuntu-latest container: - image: pecan/depends:R4.0.2 + image: pecan/depends:R4.0.3 steps: # checkout source code diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2b8e5cf6e78..2445e3e9778 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -30,8 +30,8 @@ jobs: fail-fast: false matrix: R: - - "4.0.2" - "4.0.3" + - "4.0.4" env: NCPUS: 2 @@ -76,8 +76,8 @@ jobs: fail-fast: false matrix: R: - - "4.0.2" - "4.0.3" + - "4.0.4" services: postgres: @@ -129,8 +129,8 @@ jobs: fail-fast: false matrix: R: - - "4.0.2" - "4.0.3" + - "4.0.4" env: NCPUS: 2 @@ -175,8 +175,8 @@ jobs: fail-fast: false matrix: R: - - "4.0.2" - "4.0.3" + - "4.0.4" services: postgres: diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml index 2008676a3b7..0d31bc8da67 100644 --- a/.github/workflows/depends.yml +++ b/.github/workflows/depends.yml @@ -12,7 +12,7 @@ on: env: # official supported version of R - SUPPORTED: 4.0.2 + SUPPORTED: 4.0.3 DOCKERHUB_ORG: pecan jobs: @@ -24,8 +24,8 @@ jobs: fail-fast: false matrix: R: - - "4.0.2" - "4.0.3" + - 
"4.0.4" steps: - uses: actions/checkout@v2 From 677171aeca76e604c986922c430e2cc6bde6326d Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 22 Feb 2021 14:50:00 -0500 Subject: [PATCH 1840/2289] Update changelog --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 616a3036df1..c677195351f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,7 +7,7 @@ For more information about this file see also [Keep a Changelog](http://keepacha ## [Unreleased] -### Due to dependencies, PEcAn is now using R 4.0.2 for Docker images. +### Due to dependencies, PEcAn is now using R 4.0.3 for Docker images. This is a major change: From f1e3ee9cb94a587f55d7795a0ce1e5fcd8016235 Mon Sep 17 00:00:00 2001 From: Alexey Shiklomanov Date: Mon, 22 Feb 2021 17:37:32 -0500 Subject: [PATCH 1841/2289] Remove blank space in modules/benchmark/DESCRIPTION --- modules/benchmark/DESCRIPTION | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 40eeb14431a..1b12edc6aac 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -33,7 +33,6 @@ Imports: udunits2 (>= 0.11), XML (>= 3.98-1.4), zoo - Suggests: PEcAn.data.land, testthat (>= 2.0.0), From 795e461c0b2af17129c77d50f96501abc91cee98 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 22 Feb 2021 22:55:00 +0000 Subject: [PATCH 1842/2289] Implement getting PFT numbers from pftmapping or pecan xml tag --- models/ed/R/model2netcdf.ED2.R | 50 +++++++++++++++++++++++++++++----- 1 file changed, 43 insertions(+), 7 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 8a61930de74..d14cdb0c97c 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -937,9 +937,27 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n npft <- length(pft_names) data(pftmapping, package = "PEcAn.ED2") - pfts <- sapply(pft_names, function(x) ifelse(x %in% settings$pfts$pft$name, - as.numeric(settings$pfts$pft$ed2_pft_number[settings$pfts$pft$name == x]), - pftmapping$ED[pftmapping$PEcAn == x])) + pfts <- numeric(npft) + names(pfts) <- pft_names + + # Extract the PFT names and numbers for all PFTs + xml_pft_names <- lapply(settings$pfts, "[[", "name") + for (pft in pft_names) { + which_pft <- which(xml_pft_names == pft) + xml_pft <- settings$pfts[[which_pft]] + if ("ed2_pft_number" %in% names(xml_pft)) { + pft_number <- as.numeric(xml_pft$ed2_pft_number) + if (!is.finite(pft_number)) { + PEcAn.logger::logger.severe( + "ED2 PFT number present but not parseable as number. Value was ", + xml_pft$ed2_pft_number + ) + } + } else { + pft_number <- pftmapping$ED[pftmapping$PEcAn == x] + } + pfts[pft] <- pft_number + } out <- list() for (varname in varnames) { @@ -1011,11 +1029,29 @@ put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pft_names, ... 
pft_names <- pft_names[!(soil.check)] } + npft <- length(pft_names) data(pftmapping, package = "PEcAn.ED2") - pfts <- sapply(pft_names, function(x) ifelse(x %in% settings$pfts$pft$name, - as.numeric(settings$pfts$pft$ed2_pft_number[settings$pfts$pft$name == x]), - pftmapping$ED[pftmapping$PEcAn == x])) - + pfts <- numeric(npft) + names(pfts) <- pft_names + + # Extract the PFT names and numbers for all PFTs + xml_pft_names <- lapply(settings$pfts, "[[", "name") + for (pft in pft_names) { + which_pft <- which(xml_pft_names == pft) + xml_pft <- settings$pfts[[which_pft]] + if ("ed2_pft_number" %in% names(xml_pft)) { + pft_number <- as.numeric(xml_pft$ed2_pft_number) + if (!is.finite(pft_number)) { + PEcAn.logger::logger.severe( + "ED2 PFT number present but not parseable as number. Value was ", + xml_pft$ed2_pft_number + ) + } + } else { + pft_number <- pftmapping$ED[pftmapping$PEcAn == x] + } + pfts[pft] <- pft_number + } # ----- fill list ##### setup output time and time bounds From cce02fba92b14492c87a51c05601ce07a18da610 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Tue, 23 Feb 2021 16:55:19 +0530 Subject: [PATCH 1843/2289] Update DESCRIPTION --- modules/benchmark/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION index 1b12edc6aac..f5680f22155 100644 --- a/modules/benchmark/DESCRIPTION +++ b/modules/benchmark/DESCRIPTION @@ -30,6 +30,7 @@ Imports: reshape2, SimilarityMeasures, stringr, + tidyselect, udunits2 (>= 0.11), XML (>= 3.98-1.4), zoo From be7586b9de3fbdeacea6f71dcf20b9f650150caa Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Tue, 23 Feb 2021 16:57:07 +0530 Subject: [PATCH 1844/2289] Update load_data.R --- modules/benchmark/R/load_data.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/benchmark/R/load_data.R b/modules/benchmark/R/load_data.R index 41d65e0c8aa..a534cd74950 100644 --- a/modules/benchmark/R/load_data.R +++ b/modules/benchmark/R/load_data.R @@ -39,7 +39,7 @@ load_data <- function(data.path, format, start_year = NA, end_year = NA, site = fcn <- match.fun(fcn1) } else if (exists(fcn2)) { fcn <- match.fun(fcn2) - } else if (!exists(fcn1) & !exists(fcn2) & requireNamespace(bd, quietly = TRUE)) { + } else if (!exists(fcn1) & !exists(fcn2) & requireNamespace("BrownDog", quietly = TRUE)) { #To Do: call to DAP to see if conversion to csv is possible #Brown Dog API call through BDFiddle, requires username and password key <- BrownDog::get_key("https://bd-api.ncsa.illinois.edu",username,password) From a242e315af869df11e945524864749060b98aebb Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Tue, 23 Feb 2021 21:24:26 +0530 Subject: [PATCH 1845/2289] Update download.NARR_site.R --- modules/data.atmosphere/R/download.NARR_site.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index 853ecdc70fe..981991cbd58 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -1,4 +1,4 @@ -#' Download NARR time series for a single site +=#' Download NARR time series for a single site #' #' @param outfolder Target directory for storing output #' @param start_date Start date for met data @@ -86,7 +86,7 @@ download.NARR_site <- function(outfolder, #' @param file Full path to target file #' @param 
lat_nc `ncdim` object for latitude #' @param lon_nc `ncdim` object for longitude -#' @param verbose +#' @param verbose logical: ask`ncdf4` functions to be very chatty while they work? #' @return List of NetCDF variables in data. Creates NetCDF file containing #' data as a side effect prepare_narr_year <- function(dat, file, lat_nc, lon_nc, verbose = FALSE) { @@ -294,7 +294,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, #' @param dat Nested `tibble` from mapped call to [get_narr_url] post_process <- function(dat) { dat %>% - tidyr::unnest(data) %>% + tidyr::unnest(.data$data) %>% dplyr::ungroup() %>% dplyr::mutate(datetime = .data$startdate + lubridate::dhours(.data$dhours)) %>% dplyr::select(-.data$startdate, -.data$dhours) %>% From 868a5bfc38794cb386f48afba11725a1bab6a26b Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Tue, 23 Feb 2021 21:25:15 +0530 Subject: [PATCH 1846/2289] Update Rcheck_reference.log --- modules/data.atmosphere/tests/Rcheck_reference.log | 3 --- 1 file changed, 3 deletions(-) diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index 737b2389a02..8b46e189b42 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -417,9 +417,6 @@ Argument items with no description in Rd object 'met.process.stage': Argument items with no description in Rd object 'met_temporal_downscale.Gaussian_ensemble': ‘in.path’ ‘in.prefix’ -Argument items with no description in Rd object 'prepare_narr_year': - ‘verbose’ - Argument items with no description in Rd object 'split_wind': ‘start_date’ ‘end_date’ From f933f1c80fbb487e12d2ea87d3d1468462e52d32 Mon Sep 17 00:00:00 2001 From: moki1202 <73598347+moki1202@users.noreply.github.com> Date: Tue, 23 Feb 2021 21:26:29 +0530 Subject: [PATCH 1847/2289] Update check_met_input.R --- modules/data.atmosphere/R/check_met_input.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/check_met_input.R b/modules/data.atmosphere/R/check_met_input.R index 870386a254c..16b939c797f 100644 --- a/modules/data.atmosphere/R/check_met_input.R +++ b/modules/data.atmosphere/R/check_met_input.R @@ -92,8 +92,8 @@ check_met_input_file <- function(metfile, test_type = "variable has correct units", target_variable = nc_vars, test_raw = purrr::map(nc_vars, check_unit, nc = nc, variable_table = variable_table), - test_passed = !purrr::map_lgl(test_raw, inherits, "try-error"), - test_error_message = purrr::map_chr(test_raw, purrr::possibly(as.character, NA_character_)) + test_passed = !purrr::map_lgl(.data$test_raw, inherits, "try-error"), + test_error_message = purrr::map_chr(.data$test_raw, purrr::possibly(as.character, NA_character_)) ) %>% dplyr::select(-.data$test_raw) results_df <- dplyr::bind_rows(test_dims_summary, test_required_vars, test_var_units) From 86bd4676d1ce3d08fd2e773d40ff78e3823b6c05 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Tue, 23 Feb 2021 16:41:11 -0700 Subject: [PATCH 1848/2289] improve documentation for get.samples --- modules/priors/R/priors.R | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/modules/priors/R/priors.R b/modules/priors/R/priors.R index ef2960c1d03..a26aed5db31 100644 --- a/modules/priors/R/priors.R +++ b/modules/priors/R/priors.R @@ -182,15 +182,28 @@ pr.samp <- function(distn, parama, paramb, n) { 
#--------------------------------------------------------------------------------------------------# ##' Take n random samples from prior ##' -##' Like pr.samp, with prior as a single input +##' Similar to the prior sample function \link{pr.samp}, except 1) it takes the prior as a named dataframe +##' or list and it can return either a random sample of length n OR a sample from a quantile specified as p ##' @title Get Samples -##' @param prior data.frame with distn, parama, paramb -##' @param n number of samples to return -##' @param p probability vector, pre-generated upstream to be used in the quantile function +##' @param prior data.frame with distn, parama, and optionally paramb. +##' @param n number of samples to return from a random sample of the rdistn family of functions (e.g. qnorm) +##' @param p vector of quantiles from which to sample the distribution; typically pre-generated upstream +##' in the workflow to be used by the qdist family of functions (e.g. qnorm) ##' @return vector with n random samples from prior ##' @seealso \link{pr.samp} +##' @examples +##' \dontrun{ +##' # return 1st through 99th quantile of standard normal distribution: +##' PEcAn.priors::get.sample( +##' prior = data.frame(distn = 'norm', parama = 0, paramb = 1), +##' p = 1:99/100) +##' # return 100 random samples from standard normal distribution: +##' PEcAn.priors::get.sample( +##' prior = data.frame(distn = 'norm', parama = 0, paramb = 1), +##' n = 100) +##' } ##' @export -get.sample <- function(prior, n, p = NULL) { +get.sample <- function(prior, n = NULL, p = NULL) { if(!is.null(p)){ if (as.character(prior$distn) %in% c("exp", "pois", "geom")) { ## one parameter distributions From e021c9d2ef7df234c2be9f9eda05d13d45707ddd Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 24 Feb 2021 10:54:34 +0100 Subject: [PATCH 1849/2289] typo --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index 981991cbd58..ba202c02fad 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -1,4 +1,4 @@ -=#' Download NARR time series for a single site +#' Download NARR time series for a single site #' #' @param outfolder Target directory for storing output #' @param start_date Start date for met data From f4c12a608e3030acc60afdaf49f9f36c07e72b2a Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 24 Feb 2021 09:56:21 +0000 Subject: [PATCH 1850/2289] automated documentation update --- modules/priors/man/get.sample.Rd | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/modules/priors/man/get.sample.Rd b/modules/priors/man/get.sample.Rd index 54ec100147f..03dc50d0d7f 100644 --- a/modules/priors/man/get.sample.Rd +++ b/modules/priors/man/get.sample.Rd @@ -4,14 +4,15 @@ \alias{get.sample} \title{Get Samples} \usage{ -get.sample(prior, n, p = NULL) +get.sample(prior, n = NULL, p = NULL) } \arguments{ -\item{prior}{data.frame with distn, parama, paramb} +\item{prior}{data.frame with distn, parama, and optionally paramb.} -\item{n}{number of samples to return} +\item{n}{number of samples to return from a random sample of the rdistn family of functions (e.g. 
qnorm)} -\item{p}{probability vector, pre-generated upstream to be used in the quantile function} +\item{p}{vector of quantiles from which to sample the distribution; typically pre-generated upstream +in the workflow to be used by the qdist family of functions (e.g. qnorm)} } \value{ vector with n random samples from prior @@ -20,7 +21,20 @@ vector with n random samples from prior Take n random samples from prior } \details{ -Like pr.samp, with prior as a single input +Similar to the prior sample function \link{pr.samp}, except 1) it takes the prior as a named dataframe +or list and it can return either a random sample of length n OR a sample from a quantile specified as p +} +\examples{ +\dontrun{ +# return 1st through 99th quantile of standard normal distribution: +PEcAn.priors::get.sample( + prior = data.frame(distn = 'norm', parama = 0, paramb = 1), + p = 1:99/100) +# return 100 random samples from standard normal distribution: +PEcAn.priors::get.sample( + prior = data.frame(distn = 'norm', parama = 0, paramb = 1), + n = 100) +} } \seealso{ \link{pr.samp} From ac7708dd0b6bdaa0989c85afe4bc22d4e266f713 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 24 Feb 2021 09:58:18 +0000 Subject: [PATCH 1851/2289] automated documentation update --- modules/data.atmosphere/man/prepare_narr_year.Rd | 2 ++ 1 file changed, 2 insertions(+) diff --git a/modules/data.atmosphere/man/prepare_narr_year.Rd b/modules/data.atmosphere/man/prepare_narr_year.Rd index 50d72c9b4ae..181eaf0e9f8 100644 --- a/modules/data.atmosphere/man/prepare_narr_year.Rd +++ b/modules/data.atmosphere/man/prepare_narr_year.Rd @@ -14,6 +14,8 @@ prepare_narr_year(dat, file, lat_nc, lon_nc, verbose = FALSE) \item{lat_nc}{`ncdim` object for latitude} \item{lon_nc}{`ncdim` object for longitude} + +\item{verbose}{logical: ask`ncdf4` functions to be very chatty while they work?} } \value{ List of NetCDF variables in data. Creates NetCDF file containing From 2177bec04b50d6e038bc40ca50cfb7a0e33125de Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 28 Feb 2021 14:27:13 +0100 Subject: [PATCH 1852/2289] move macros and variable assigments before start of rules --- Makefile | 44 +++++++++++++++++++++++--------------------- 1 file changed, 23 insertions(+), 21 deletions(-) diff --git a/Makefile b/Makefile index 307c58f0b19..d6f7808e1a6 100644 --- a/Makefile +++ b/Makefile @@ -44,6 +44,29 @@ MODELS_D := $(MODELS:%=.doc/%) MODULES_D := $(MODULES:%=.doc/%) ALL_PKGS_D := $(BASE_D) $(MODULES_D) $(MODELS_D) +SETROPTIONS = "options(Ncpus = ${NCPUS})" + +### Macros + +# HACK: assigning to `deps` is an ugly workaround for circular dependencies in utils pkg. 
+# When these are fixed, can go back to simple `dependencies = TRUE` +depends_R_pkg = ./scripts/time.sh "depends ${1}" Rscript -e ${SETROPTIONS} \ + -e "deps <- if (grepl('(base/utils|modules/benchmark)', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ + -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" +install_R_pkg = ./scripts/time.sh "install ${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" +check_R_pkg = ./scripts/time.sh "check ${1}" Rscript scripts/check_with_errors.R $(strip $(1)) + +# Would use devtools::test(), but devtools 2.2.1 hardcodes stop_on_failure=FALSE +# To work around this, we reimplement about half of test() here :( +test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ + -e "if (length(list.files('$(strip $(1))/tests/testthat', 'test.*.[rR]')) == 0) {" \ + -e "print('No tests found'); quit('no') }" \ + -e "env <- devtools::load_all('$(strip $(1))', quiet = TRUE)[['env']]" \ + -e "testthat::test_dir('$(strip $(1))/tests/testthat', env = env," \ + -e "stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can + +doc_R_pkg = ./scripts/time.sh "document ${1}" Rscript -e "devtools::document('"$(strip $(1))"')" + .PHONY: all install check test document shiny all: install document @@ -81,8 +104,6 @@ $(subst .doc/models/template,,$(MODELS_D)): .install/models/template include Makefile.depends -SETROPTIONS = "options(Ncpus = ${NCPUS})" - clean: rm -rf .install .check .test .doc find modules/rtm/src \( -name \*.mod -o -name \*.o -o -name \*.so \) -delete @@ -104,25 +125,6 @@ clean: + ./scripts/time.sh "mockery ${1}" Rscript -e ${SETROPTIONS} -e "if(!requireNamespace('mockery', quietly = TRUE)) install.packages('mockery')" echo `date` > $@ -# HACK: assigning to `deps` is an ugly workaround for circular dependencies in utils pkg. 
-# When these are fixed, can go back to simple `dependencies = TRUE` -depends_R_pkg = ./scripts/time.sh "depends ${1}" Rscript -e ${SETROPTIONS} \ - -e "deps <- if (grepl('(base/utils|modules/benchmark)', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ - -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" -install_R_pkg = ./scripts/time.sh "install ${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" -check_R_pkg = ./scripts/time.sh "check ${1}" Rscript scripts/check_with_errors.R $(strip $(1)) - -# Would use devtools::test(), but devtools 2.2.1 hardcodes stop_on_failure=FALSE -# To work around this, we reimplement about half of test() here :( -test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ - -e "if (length(list.files('$(strip $(1))/tests/testthat', 'test.*.[rR]')) == 0) {" \ - -e "print('No tests found'); quit('no') }" \ - -e "env <- devtools::load_all('$(strip $(1))', quiet = TRUE)[['env']]" \ - -e "testthat::test_dir('$(strip $(1))/tests/testthat', env = env," \ - -e "stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can - -doc_R_pkg = ./scripts/time.sh "document ${1}" Rscript -e "devtools::document('"$(strip $(1))"')" - $(ALL_PKGS_I) $(ALL_PKGS_C) $(ALL_PKGS_T) $(ALL_PKGS_D): | .install/devtools .install/roxygen2 .install/testthat .SECONDEXPANSION: From c9beb65922cc7137e1b07a36b1fa1adf84d51977 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 28 Feb 2021 14:34:08 +0100 Subject: [PATCH 1853/2289] avoid shell glob when recursing inside directories (fixes #2776) The behavior of `**` is shell-dependent, so on some systems this did not recurse more than two levels below package root e.g. changes in `test/testthat/data/foo.csv` would not cause a package rebuild --- Makefile | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/Makefile b/Makefile index d6f7808e1a6..de5677ce560 100644 --- a/Makefile +++ b/Makefile @@ -48,6 +48,19 @@ SETROPTIONS = "options(Ncpus = ${NCPUS})" ### Macros +# Generates a list of all files and subdirectories at any depth inside its argument +recurse_dir = $(foreach d, $(wildcard $1*), $(call recurse_dir, $d/) $d) + +# Filters a list from recurse_dir to remove paths that are directories +# Caveat: Really only removes *direct parents* of other paths *in the list*: +# $(call drop_dirs,a a/b a/b/c) => 'a/b/c', +# but $(call drop_dirs,a a/b d d/e/f) => 'a/b d d/e/f' +# For output from recurse_dir this removes all dirs, but in other cases beware. +drop_parents = $(filter-out $(patsubst %/,%,$(dir $1)), $1) + +# Generates a list of regular files at any depth inside its argument +files_in_dir = $(call drop_parents, $(call recurse_dir, $1)) + # HACK: assigning to `deps` is an ugly workaround for circular dependencies in utils pkg. 
# When these are fixed, can go back to simple `dependencies = TRUE` depends_R_pkg = ./scripts/time.sh "depends ${1}" Rscript -e ${SETROPTIONS} \ @@ -67,6 +80,9 @@ test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ doc_R_pkg = ./scripts/time.sh "document ${1}" Rscript -e "devtools::document('"$(strip $(1))"')" + +### Rules + .PHONY: all install check test document shiny all: install document @@ -128,24 +144,24 @@ clean: $(ALL_PKGS_I) $(ALL_PKGS_C) $(ALL_PKGS_T) $(ALL_PKGS_D): | .install/devtools .install/roxygen2 .install/testthat .SECONDEXPANSION: -.doc/%: $$(wildcard %/**/*) $$(wildcard %/*) | $$(@D) +.doc/%: $$(call files_in_dir, %) | $$(@D) + $(call depends_R_pkg, $(subst .doc/,,$@)) $(call doc_R_pkg, $(subst .doc/,,$@)) echo `date` > $@ -.install/%: $$(wildcard %/**/*) $$(wildcard %/*) .doc/% | $$(@D) +.install/%: $$(call files_in_dir, %) .doc/% | $$(@D) + $(call install_R_pkg, $(subst .install/,,$@)) echo `date` > $@ -.check/%: $$(wildcard %/**/*) $$(wildcard %/*) | $$(@D) +.check/%: $$(call files_in_dir, %) | $$(@D) + $(call check_R_pkg, $(subst .check/,,$@)) echo `date` > $@ -.test/%: $$(wildcard %/**/*) $$(wildcard %/*) | $$(@D) +.test/%: $$(call files_in_dir, %) | $$(@D) $(call test_R_pkg, $(subst .test/,,$@)) echo `date` > $@ # Install dependencies declared by Shiny apps -.shiny_depends/%: $$(wildcard %/**/*) $$(wildcard %/*) | $$(@D) +.shiny_depends/%: $$(call files_in_dir, %) | $$(@D) Rscript scripts/install_shiny_deps.R $(subst .shiny_depends/,shiny/,$@) echo `date` > $@ From 4191134ff200b31a11894b21f50d9c42e6165938 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 28 Feb 2021 15:09:08 +0100 Subject: [PATCH 1854/2289] remove ugly hack now that devtools::test accepts stop_on_failure again --- Makefile | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/Makefile b/Makefile index de5677ce560..f5c9d7db925 100644 --- a/Makefile +++ b/Makefile @@ -68,15 +68,10 @@ depends_R_pkg = ./scripts/time.sh "depends ${1}" Rscript -e ${SETROPTIONS} \ -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" install_R_pkg = ./scripts/time.sh "install ${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" check_R_pkg = ./scripts/time.sh "check ${1}" Rscript scripts/check_with_errors.R $(strip $(1)) - -# Would use devtools::test(), but devtools 2.2.1 hardcodes stop_on_failure=FALSE -# To work around this, we reimplement about half of test() here :( test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ - -e "if (length(list.files('$(strip $(1))/tests/testthat', 'test.*.[rR]')) == 0) {" \ - -e "print('No tests found'); quit('no') }" \ - -e "env <- devtools::load_all('$(strip $(1))', quiet = TRUE)[['env']]" \ - -e "testthat::test_dir('$(strip $(1))/tests/testthat', env = env," \ - -e "stop_on_failure = TRUE, stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can + -e "devtools::test('$(strip $(1))'," \ + -e "stop_on_failure = TRUE," \ + -e "stop_on_warning = FALSE)" # TODO: Raise bar to stop_on_warning = TRUE when we can doc_R_pkg = ./scripts/time.sh "document ${1}" Rscript -e "devtools::document('"$(strip $(1))"')" From ee8f9890d941a2261315a52c7eeb2bc2df8f7f64 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 28 Feb 2021 15:11:10 +0100 Subject: [PATCH 1855/2289] linewrap for readability --- Makefile | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/Makefile b/Makefile index f5c9d7db925..44b3848af74 100644 --- a/Makefile +++ b/Makefile @@ -66,7 +66,9 @@ files_in_dir 
= $(call drop_parents, $(call recurse_dir, $1)) depends_R_pkg = ./scripts/time.sh "depends ${1}" Rscript -e ${SETROPTIONS} \ -e "deps <- if (grepl('(base/utils|modules/benchmark)', '$(1)')) { c('Depends', 'Imports', 'LinkingTo') } else { TRUE }" \ -e "devtools::install_deps('$(strip $(1))', dependencies = deps, upgrade=FALSE)" -install_R_pkg = ./scripts/time.sh "install ${1}" Rscript -e ${SETROPTIONS} -e "devtools::install('$(strip $(1))', upgrade=FALSE)" +install_R_pkg = ./scripts/time.sh "install ${1}" Rscript \ + -e ${SETROPTIONS} \ + -e "devtools::install('$(strip $(1))', upgrade=FALSE)" check_R_pkg = ./scripts/time.sh "check ${1}" Rscript scripts/check_with_errors.R $(strip $(1)) test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ -e "devtools::test('$(strip $(1))'," \ From 634a482dfb658a522bb1087aa76fb2caa1c24c38 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 2 Mar 2021 00:44:30 +0100 Subject: [PATCH 1856/2289] remove spaces from filenames Needed so that Make doesn't break when they appear in prerequisite lists. Yes, Make really is that inflexible about assuming spaces are separators! --- .../Sunny.preliminary.code/{C3 Species.csv => C3_Species.csv} | 0 .../Sunny.preliminary.code/{C3 photomodel.r => C3_photomodel.r} | 2 +- 2 files changed, 1 insertion(+), 1 deletion(-) rename modules/photosynthesis/code/Sunny.preliminary.code/{C3 Species.csv => C3_Species.csv} (100%) rename modules/photosynthesis/code/Sunny.preliminary.code/{C3 photomodel.r => C3_photomodel.r} (99%) diff --git a/modules/photosynthesis/code/Sunny.preliminary.code/C3 Species.csv b/modules/photosynthesis/code/Sunny.preliminary.code/C3_Species.csv similarity index 100% rename from modules/photosynthesis/code/Sunny.preliminary.code/C3 Species.csv rename to modules/photosynthesis/code/Sunny.preliminary.code/C3_Species.csv diff --git a/modules/photosynthesis/code/Sunny.preliminary.code/C3 photomodel.r b/modules/photosynthesis/code/Sunny.preliminary.code/C3_photomodel.r similarity index 99% rename from modules/photosynthesis/code/Sunny.preliminary.code/C3 photomodel.r rename to modules/photosynthesis/code/Sunny.preliminary.code/C3_photomodel.r index 4de0f8024ed..595061b960f 100644 --- a/modules/photosynthesis/code/Sunny.preliminary.code/C3 photomodel.r +++ b/modules/photosynthesis/code/Sunny.preliminary.code/C3_photomodel.r @@ -1,6 +1,6 @@ library(R2WinBUGS) library(BRugs) -dat=read.csv("C3 Species.csv",header=T) +dat=read.csv("C3_Species.csv",header=T) #dat2=read.csv('c3covariates.csv',header=T) my.model = function(){ From 5413e77886deb9c6d44ad56b2cd889e1032f60b4 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 2 Mar 2021 01:01:24 +0100 Subject: [PATCH 1857/2289] update check log --- modules/photosynthesis/tests/Rcheck_reference.log | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/modules/photosynthesis/tests/Rcheck_reference.log b/modules/photosynthesis/tests/Rcheck_reference.log index a352a37ad11..8ba8dd03acb 100644 --- a/modules/photosynthesis/tests/Rcheck_reference.log +++ b/modules/photosynthesis/tests/Rcheck_reference.log @@ -13,12 +13,7 @@ * checking if there is a namespace ... OK * checking for executable files ... OK * checking for hidden files and directories ... OK -* checking for portable file names ... WARNING -Found the following files with non-portable file names: - code/Sunny.preliminary.code/C3 photomodel.r - code/Sunny.preliminary.code/C3 Species.csv -These are not fully portable file names. -See section ‘Package structure’ in the ‘Writing R Extensions’ manual. 
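Renaming the two data files turns this WARNING into a plain OK. A throwaway
check for any remaining whitespace-containing names — purely an illustration,
assuming it is run from a package root — might look like:

```r
# Filenames with whitespace break Make prerequisite lists and R CMD check.
bad <- list.files(".", pattern = "[[:space:]]", recursive = TRUE)
if (length(bad) > 0) {
  warning("non-portable file names:\n  ", paste(bad, collapse = "\n  "))
}
```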
+* checking for portable file names ... OK * checking for sufficient/correct file permissions ... OK * checking serialization versions ... OK * checking whether package ‘PEcAn.photosynthesis’ can be installed ... OK @@ -108,4 +103,4 @@ Files in the 'vignettes' directory but no files in 'inst/doc': Package has no Sweave vignette sources and no VignetteBuilder field. * checking examples ... NONE * DONE -Status: 3 WARNINGs, 4 NOTEs +Status: 2 WARNINGs, 4 NOTEs From 048b6243b5a632d5dbf6ae6fc534a07396d8011d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 2 Mar 2021 09:30:18 +0100 Subject: [PATCH 1858/2289] move up one more macro --- Makefile | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/Makefile b/Makefile index 44b3848af74..7a6caf127a1 100644 --- a/Makefile +++ b/Makefile @@ -46,6 +46,7 @@ ALL_PKGS_D := $(BASE_D) $(MODULES_D) $(MODELS_D) SETROPTIONS = "options(Ncpus = ${NCPUS})" + ### Macros # Generates a list of all files and subdirectories at any depth inside its argument @@ -77,6 +78,8 @@ test_R_pkg = ./scripts/time.sh "test ${1}" Rscript \ doc_R_pkg = ./scripts/time.sh "document ${1}" Rscript -e "devtools::document('"$(strip $(1))"')" +depends = .doc/$(1) .install/$(1) .check/$(1) .test/$(1) + ### Rules @@ -94,8 +97,6 @@ shiny: $(SHINY_I) book: cd ./book_source && make build -depends = .doc/$(1) .install/$(1) .check/$(1) .test/$(1) - # Make the timestamp directories if they don't exist yet .doc .install .check .test .shiny_depends $(call depends,base) $(call depends,models) $(call depends,modules): mkdir -p $@ From bbf92f861b50bfc903269fa76122a10760e5f535 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 2 Mar 2021 09:37:43 +0100 Subject: [PATCH 1859/2289] typo in doc --- base/utils/R/read.output.R | 2 +- base/utils/man/read.output.Rd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/base/utils/R/read.output.R b/base/utils/R/read.output.R index 5720e47ca57..ef15e2925e2 100644 --- a/base/utils/R/read.output.R +++ b/base/utils/R/read.output.R @@ -31,7 +31,7 @@ ##' variables in output file.. ##' @param dataframe Logical: if TRUE, will return output in a ##' `data.frame` format with a posix column. Useful for -##' [PEcAn.benchmark::align.data()] and plotting. +##' [PEcAn.benchmark::align_data()] and plotting. ##' @param pft.name character string, name of the plant functional ##' type (PFT) to read PFT-specific output. If `NULL` no ##' PFT-specific output will be read even the variable has PFT as a diff --git a/base/utils/man/read.output.Rd b/base/utils/man/read.output.Rd index 825b8328f55..83f7b4abeb8 100644 --- a/base/utils/man/read.output.Rd +++ b/base/utils/man/read.output.Rd @@ -35,7 +35,7 @@ variables in output file..} \item{dataframe}{Logical: if TRUE, will return output in a \code{data.frame} format with a posix column. Useful for -\code{\link[PEcAn.benchmark:align.data]{PEcAn.benchmark::align.data()}} and plotting.} +\code{\link[PEcAn.benchmark:align_data]{PEcAn.benchmark::align_data()}} and plotting.} \item{pft.name}{character string, name of the plant functional type (PFT) to read PFT-specific output. 
If \code{NULL} no From eaa742364d0f8af22e97e4071e8dd55b4721e781 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Thu, 11 Mar 2021 04:05:03 +0530 Subject: [PATCH 1860/2289] Update download.NARR_site.R --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index ba202c02fad..569b2af1ecc 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -344,7 +344,7 @@ generate_narr_url <- function(dates, flx) { } #' Assign daygroup tag for a given date -daygroup <- function(date, flx) { +daygroup <- function(date, .data$flx) { mday <- lubridate::mday(date) mmax <- lubridate::days_in_month(date) if (flx) { From 002f2a3569a373169b242b3bdd8c6974decb47fb Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Thu, 11 Mar 2021 11:28:53 +0530 Subject: [PATCH 1861/2289] Update Rcheck_reference.log --- modules/data.atmosphere/tests/Rcheck_reference.log | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index 8b46e189b42..beaa804d530 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -156,7 +156,6 @@ extract.nc.ERA5 : : no visible binding for global variable ‘.’ get_NARR_thredds: no visible binding for global variable ‘latitude’ get_NARR_thredds: no visible binding for global variable ‘longitude’ -get_NARR_thredds: no visible binding for global variable ‘flx’ get_narr_url: no visible binding for global variable ‘NARR_name’ get.cruncep: no visible binding for global variable ‘Lat’ get.cruncep: no visible binding for global variable ‘lati’ @@ -424,7 +423,7 @@ Argument items with no description in Rd object 'split_wind': * checking contents of ‘data’ directory ... OK * checking data for non-ASCII characters ... OK * checking data for ASCII and uncompressed saves ... 
WARNING - + Note: significantly better compression could be obtained by using R CMD build --resave-data old_size new_size compress From 0147052ba86b6b8362d1a554a611db4d7b439d26 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 11 Mar 2021 09:00:11 +0100 Subject: [PATCH 1862/2289] Update modules/data.atmosphere/R/download.NARR_site.R Co-authored-by: Alexey Shiklomanov --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index 569b2af1ecc..a505f07bf68 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -324,7 +324,7 @@ generate_narr_url <- function(dates, flx) { dplyr::mutate( year = lubridate::year(.data$date), month = lubridate::month(.data$date), - daygroup = daygroup(.data$date, .data$flx) + daygroup = daygroup(.data$date, flx) ) %>% dplyr::group_by(.data$year, .data$month, .data$daygroup) %>% dplyr::summarize( From 54785f3dcbad25451135781948573e4ab7162af7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 11 Mar 2021 09:09:55 +0100 Subject: [PATCH 1863/2289] Update modules/data.atmosphere/R/download.NARR_site.R --- modules/data.atmosphere/R/download.NARR_site.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index a505f07bf68..73ac8bae7f0 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -344,7 +344,7 @@ generate_narr_url <- function(dates, flx) { } #' Assign daygroup tag for a given date -daygroup <- function(date, .data$flx) { +daygroup <- function(date, flx) { mday <- lubridate::mday(date) mmax <- lubridate::days_in_month(date) if (flx) { From 68ddb5e2f0da42e6d9308022bd9ab049ff997681 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Thu, 11 Mar 2021 16:52:47 +0530 Subject: [PATCH 1864/2289] Update download.NARR_site.R --- modules/data.atmosphere/R/download.NARR_site.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/R/download.NARR_site.R b/modules/data.atmosphere/R/download.NARR_site.R index 73ac8bae7f0..61195cbdbea 100644 --- a/modules/data.atmosphere/R/download.NARR_site.R +++ b/modules/data.atmosphere/R/download.NARR_site.R @@ -228,6 +228,7 @@ get_NARR_thredds <- function(start_date, end_date, lat.in, lon.in, get_dfs <- dplyr::bind_rows(flx_df, sfc_df) cl <- parallel::makeCluster(ncores) doParallel::registerDoParallel(cl) + flx <- NULL get_dfs$data <- foreach::`%dopar%`( foreach::foreach( url = get_dfs$url, flx = get_dfs$flx, From d3348f6229b75e4f8468022850f1599df8f72be0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 13 Mar 2021 21:44:20 +0100 Subject: [PATCH 1865/2289] Update modules/data.atmosphere/tests/Rcheck_reference.log --- modules/data.atmosphere/tests/Rcheck_reference.log | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/tests/Rcheck_reference.log b/modules/data.atmosphere/tests/Rcheck_reference.log index beaa804d530..87a244351d5 100644 --- a/modules/data.atmosphere/tests/Rcheck_reference.log +++ b/modules/data.atmosphere/tests/Rcheck_reference.log @@ -423,7 +423,7 @@ Argument items with no description in Rd object 'split_wind': * checking contents of ‘data’ directory ... OK * checking data for non-ASCII characters ... 
OK * checking data for ASCII and uncompressed saves ... WARNING - + Note: significantly better compression could be obtained by using R CMD build --resave-data old_size new_size compress From c0ecfe0ed11e874f4c9be980a2f68d5827e2511e Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 18 Mar 2021 14:38:39 -0400 Subject: [PATCH 1866/2289] Update matchInventoryRings.R This code was giving me a mysterious error because of the lack of namespace on this function --- modules/data.land/R/matchInventoryRings.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.land/R/matchInventoryRings.R b/modules/data.land/R/matchInventoryRings.R index c6526a2d380..7abb370a03a 100644 --- a/modules/data.land/R/matchInventoryRings.R +++ b/modules/data.land/R/matchInventoryRings.R @@ -13,7 +13,7 @@ matchInventoryRings <- function(trees, rings, extractor = "TreeCode", nyears = 3 ## build tree ring codes if (is.list(rings)) { ring.file <- rep(names(rings), times = sapply(rings, ncol)) - rings <- combine.rwl(rings) + rings <- dplR::combine.rwl(rings) } ring.ID <- names(rings) id.extract <- function(x) { From 1624165bfb144222c06229f3e29c167bd8233619 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Tue, 23 Mar 2021 13:11:03 -0700 Subject: [PATCH 1867/2289] add aboveground biomass to BioCro output --- models/biocro/R/model2netcdf.BIOCRO.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/models/biocro/R/model2netcdf.BIOCRO.R b/models/biocro/R/model2netcdf.BIOCRO.R index 54f0fc90ddd..65b52b15e2f 100644 --- a/models/biocro/R/model2netcdf.BIOCRO.R +++ b/models/biocro/R/model2netcdf.BIOCRO.R @@ -64,6 +64,7 @@ model2netcdf.BIOCRO <- function(result, genus = NULL, outdir, lat = -9999, lon = TotLivBiom = PEcAn.utils::to_ncvar("TotLivBiom", dims), root_carbon_content = PEcAn.utils::to_ncvar("root_carbon_content", dims), AbvGrndWood = PEcAn.utils::to_ncvar("AbvGrndWood", dims), + AGB = PEcAn.utils::to_ncvar("AGB", dims), Evap = PEcAn.utils::to_ncvar("Evap", dims), TVeg = PEcAn.utils::to_ncvar("TVeg", dims), LAI = PEcAn.utils::to_ncvar("LAI", dims)) @@ -75,11 +76,12 @@ model2netcdf.BIOCRO <- function(result, genus = NULL, outdir, lat = -9999, lon = TotLivBiom = k * (Leaf + Root + Stem + Rhizome + Grain), root_carbon_content = k * Root, AbvGrndWood = k * Stem, + AGB = k * (Leaf + Stem + Grain), Evap = udunits2::ud.convert(SoilEvaporation + CanopyTrans, "Mg/ha/h", "kg/m2/s"), TVeg = udunits2::ud.convert(CanopyTrans, "Mg/ha/h", "kg/m2/s"), LAI = LAI)) - total_biomass <- with(result_yeari, + total_biomass <- with(result_yeari, # this is for calculating NPP and includes litter k * (Leaf + Root + Stem + Rhizome + Grain + AboveLitter + BelowLitter)) delta_biomass <- udunits2::ud.convert(c(0, diff(total_biomass)), "kg/m2/h", "kg/m2/s") delta_biomass[delta_biomass < 0] <- 0 From 75dce0184758277509946fcc911eb1434396a8e0 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Tue, 23 Mar 2021 13:13:38 -0700 Subject: [PATCH 1868/2289] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index c677195351f..fe44fd4995a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -65,6 +65,7 @@ This is a major change: ### Added +- BioCro can export Aboveground Biomass (#2790) - Functionality for generating the same ensemble parameter sets with randtoolbox functions. - Functionality for joint sampling from the posteriors using randtoolbox functions. - BASGRA-SDA couplers. 
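The back-and-forth over `flx` in the NARR patches above comes down to scoping: dplyr's `.data` pronoun resolves columns of the data frame currently being operated on, so it is correct for `.data$date` inside `mutate()`, wrong for an ordinary variable such as `flx` that comes from the enclosing function, and not even valid syntax in a function signature. A minimal, self-contained sketch of the distinction (toy data frame; the column and variable names here are illustrative only):

```
library(dplyr)

urls <- data.frame(date = as.Date("2021-03-11") + 0:2)
flx  <- TRUE  # an ordinary variable in the enclosing scope, not a column

tagged <- urls %>%
  mutate(
    month  = lubridate::month(.data$date),  # .data$: 'date' is a column of urls
    is_flx = flx                            # bare name: 'flx' is just a variable
  )

# By contrast, mutate(x = .data$flx) fails with "Column `flx` not found",
# and `function(date, .data$flx)` is a parse error -- hence the reverts in
# patches 1862 and 1863 above.
```

The separate `flx <- NULL` added before the `foreach()` call (patch 1864) is the usual idiom for quieting R CMD check's "no visible binding for global variable" NOTE when a variable — here the `foreach()` iteration variable — only comes into existence at run time.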
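The matchInventoryRings.R fix (patch 1866) illustrates a related rule for package code: call functions from other packages with an explicit namespace, because a package that is merely imported is not necessarily attached to the user's search path. A sketch of the failure mode, assuming only that dplR is installed (`ca533` and `co021` are ring-width datasets shipped with dplR):

```
# Without library(dplR) on the search path, the bare call fails with the
# "mysterious error" the commit message describes:
#   combine.rwl(list(ca533, co021))  # Error: could not find function "combine.rwl"

# The namespace-qualified call resolves no matter what is attached:
data(ca533, package = "dplR")
data(co021, package = "dplR")
rings <- dplR::combine.rwl(list(ca533, co021))
str(rings)
```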
From 768d5f9e2999557bcd724a9f84d403c6db31c0cd Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Tue, 23 Mar 2021 23:39:54 -0700 Subject: [PATCH 1869/2289] Update modules/priors/R/priors.R Co-authored-by: istfer --- modules/priors/R/priors.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/priors/R/priors.R b/modules/priors/R/priors.R index a26aed5db31..2a4f5b1de32 100644 --- a/modules/priors/R/priors.R +++ b/modules/priors/R/priors.R @@ -186,7 +186,7 @@ pr.samp <- function(distn, parama, paramb, n) { ##' or list and it can return either a random sample of length n OR a sample from a quantile specified as p ##' @title Get Samples ##' @param prior data.frame with distn, parama, and optionally paramb. -##' @param n number of samples to return from a random sample of the rdistn family of functions (e.g. qnorm) +##' @param n number of samples to return from a random sample of the rdistn family of functions (e.g. rnorm) ##' @param p vector of quantiles from which to sample the distribution; typically pre-generated upstream ##' in the workflow to be used by the qdist family of functions (e.g. qnorm) ##' @return vector with n random samples from prior From b5cdb9e6a191aad97a89ac922c2b41b6cb6bdc15 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Tue, 23 Mar 2021 23:40:02 -0700 Subject: [PATCH 1870/2289] Update modules/priors/R/priors.R Co-authored-by: istfer --- modules/priors/R/priors.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/priors/R/priors.R b/modules/priors/R/priors.R index 2a4f5b1de32..406b31cad6b 100644 --- a/modules/priors/R/priors.R +++ b/modules/priors/R/priors.R @@ -188,7 +188,7 @@ pr.samp <- function(distn, parama, paramb, n) { ##' @param prior data.frame with distn, parama, and optionally paramb. ##' @param n number of samples to return from a random sample of the rdistn family of functions (e.g. rnorm) ##' @param p vector of quantiles from which to sample the distribution; typically pre-generated upstream -##' in the workflow to be used by the qdist family of functions (e.g. qnorm) +##' in the workflow to be used by the qdistn family of functions (e.g. 
qnorm) ##' @return vector with n random samples from prior ##' @seealso \link{pr.samp} ##' @examples From f74220cc1853219f26d9edfcd2e4e547aaae6f7a Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 24 Mar 2021 22:30:40 +0100 Subject: [PATCH 1871/2289] avoid failing builds from R using an unreliable time service --- .github/workflows/ci.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2445e3e9778..8fc667dde64 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -42,6 +42,8 @@ jobs: # Avoid compilation check warnings that come from the system Makevars # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + # Keep R checks from trying to consult the very flaky worldclockapi.com + _R_CHECK_SYSTEM_CLOCK_: 0 container: image: pecan/depends:R${{ matrix.R }} From d984199e9655e0b2e212918e5853f063ab161000 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 27 Mar 2021 23:29:20 +0100 Subject: [PATCH 1872/2289] _R_CHECK*_ var only have effect in check job --- .github/workflows/ci.yml | 19 ++----------------- 1 file changed, 2 insertions(+), 17 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 8fc667dde64..2cf4861c877 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -37,13 +37,6 @@ jobs: NCPUS: 2 PGHOST: postgres CI: true - _R_CHECK_LENGTH_1_CONDITION_: true - _R_CHECK_LENGTH_1_LOGIC2_: true - # Avoid compilation check warnings that come from the system Makevars - # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html - _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time - # Keep R checks from trying to consult the very flaky worldclockapi.com - _R_CHECK_SYSTEM_CLOCK_: 0 container: image: pecan/depends:R${{ matrix.R }} @@ -90,11 +83,6 @@ jobs: NCPUS: 2 PGHOST: postgres CI: true - _R_CHECK_LENGTH_1_CONDITION_: true - _R_CHECK_LENGTH_1_LOGIC2_: true - # Avoid compilation check warnings that come from the system Makevars - # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html - _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time container: image: pecan/depends:R${{ matrix.R }} @@ -143,6 +131,8 @@ jobs: # Avoid compilation check warnings that come from the system Makevars # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time + # Keep R checks from trying to consult the very flaky worldclockapi.com + _R_CHECK_SYSTEM_CLOCK_: 0 container: image: pecan/depends:R${{ matrix.R }} @@ -189,11 +179,6 @@ jobs: NCPUS: 2 PGHOST: postgres CI: true - _R_CHECK_LENGTH_1_CONDITION_: true - _R_CHECK_LENGTH_1_LOGIC2_: true - # Avoid compilation check warnings that come from the system Makevars - # See https://stat.ethz.ch/pipermail/r-package-devel/2019q2/003898.html - _R_CHECK_COMPILATION_FLAGS_KNOWN_: -Wformat -Werror=format-security -Wdate-time container: image: pecan/depends:R${{ matrix.R }} From 692b65aa588d2352bb3dfb2f88a235e32295bc1e Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 18 May 2021 15:02:19 -0400 Subject: [PATCH 1873/2289] Edited the dbfile.input.check function so that the input object can be created without the enddate arguement --- base/db/R/dbfiles.R | 100 ++++++++++++++++++++++---------------------- 1 file changed, 51 insertions(+), 49 deletions(-) diff --git 
a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 779c228f45a..7842891de78 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -225,76 +225,78 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, ##' } dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, formatname, parentid=NA, con, hostname=PEcAn.remote::fqdn(), exact.dates=FALSE, pattern=NULL, return.all=FALSE) { - - - + + hostname <- default_hostname(hostname) - + mimetypeid <- get.id(table = 'mimetypes', colnames = 'type_string', values = mimetype, con = con) if (is.null(mimetypeid)) { return(invisible(data.frame())) } - + # find appropriate format formatid <- get.id(table = 'formats', colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), con = con) - + if (is.null(formatid)) { invisible(data.frame()) } - + # setup parent part of query if specified if (is.na(parentid)) { parent <- "" } else { parent <- paste0(" AND parent_id=", parentid) } - + # find appropriate input if (exact.dates) { - inputs <- db.query( - query = paste0( - "SELECT * FROM inputs WHERE site_id=", siteid, - " AND format_id=", formatid, - " AND start_date='", startdate, - "' AND end_date='", enddate, - "'", parent - ), - con = con - )#[['id']] - } else { - inputs <- db.query( - query = paste0( - "SELECT * FROM inputs WHERE site_id=", siteid, - " AND format_id=", formatid, - parent - ), - con = con - )#[['id']] + if(!is.null(enddate)){ + inputs <- db.query( + query = paste0( + "SELECT * FROM inputs WHERE site_id=", siteid, + " AND format_id=", formatid, + " AND start_date='", startdate, + "' AND end_date='", enddate, + "'", parent + ), + con = con + ) + } else{ + inputs <- db.query( + query = paste0( + "SELECT * FROM inputs WHERE site_id=", siteid, + " AND format_id=", formatid, + " AND start_date='", startdate, + "'", parent + ), + con = con + ) + } } - + if (is.null(inputs) | length(inputs$id) == 0) { return(data.frame()) } else { - + if (!is.null(pattern)) { ## Case where pattern is not NULL inputs <- inputs[grepl(pattern, inputs$name),] } - + ## parent check when NA - if (is.na(parentid)) { - inputs <- inputs[is.na(inputs$parent_id),] - } - + # if (is.na(parentid)) { + # inputs <- inputs[is.na(inputs$parent_id),] + # } + if (length(inputs$id) > 1) { PEcAn.logger::logger.warn("Found multiple matching inputs. Checking for one with associate files on host machine") print(inputs) - # ni = length(inputs$id) - # dbfile = list() - # for(i in seq_len(ni)){ - # dbfile[[i]] <- dbfile.check(type = 'Input', container.id = inputs$id[i], con = con, hostname = hostname, machine.check = TRUE) - # } - + # ni = length(inputs$id) + # dbfile = list() + # for(i in seq_len(ni)){ + # dbfile[[i]] <- dbfile.check(type = 'Input', container.id = inputs$id[i], con = con, hostname = hostname, machine.check = TRUE) + # } + dbfile <- dbfile.check( type = 'Input', @@ -304,8 +306,8 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f machine.check = TRUE, return.all = return.all ) - - + + if (nrow(dbfile) == 0) { ## With the possibility of dbfile.check returning nothing, ## as.data.frame ensures a empty data.frame is returned @@ -313,15 +315,15 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f PEcAn.logger::logger.info("File not found on host machine. 
Returning Valid input with file associated on different machine if possible") return(as.data.frame(dbfile.check(type = 'Input', container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) } - + return(dbfile) } else if (length(inputs$id) == 0) { - + # need this third case here because prent check above can return an empty inputs return(data.frame()) - + }else{ - + PEcAn.logger::logger.warn("Found possible matching input. Checking if its associate files are on host machine") print(inputs) dbfile <- dbfile.check( @@ -332,7 +334,7 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f machine.check = TRUE, return.all = return.all ) - + if (nrow(dbfile) == 0) { ## With the possibility of dbfile.check returning nothing, ## as.data.frame ensures an empty data.frame is returned @@ -340,9 +342,9 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f PEcAn.logger::logger.info("File not found on host machine. Returning Valid input with file associated on different machine if possible") return(as.data.frame(dbfile.check(type = 'Input', container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) } - + return(dbfile) - + } } } From 65408cc11d0f571b46906b7f925c510d4a9bbe41 Mon Sep 17 00:00:00 2001 From: runner Date: Tue, 18 May 2021 20:32:35 +0000 Subject: [PATCH 1874/2289] automated syle update --- base/db/R/dbfiles.R | 547 +++++++++++++++++++++++--------------------- 1 file changed, 287 insertions(+), 260 deletions(-) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index 7842891de78..d18b264b99b 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -40,7 +40,7 @@ ##' con = dbcon) ##' } dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, mimetype, formatname, - parentid=NA, con, hostname=PEcAn.remote::fqdn(), allow.conflicting.dates=FALSE, ens=FALSE) { + parentid = NA, con, hostname = PEcAn.remote::fqdn(), allow.conflicting.dates = FALSE, ens = FALSE) { name <- basename(in.path) hostname <- default_hostname(hostname) @@ -50,7 +50,7 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, # find appropriate format, create if it does not exist formatid <- get.id( table = "formats", - colnames = c('mimetype_id', 'name'), + colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), con = con, create = TRUE, @@ -79,18 +79,21 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, inputid <- NULL if (nrow(existing.input) > 0) { # Convert dates to Date objects and strip all time zones (DB values are timezone-free) - if(!is.null(startdate)){ startdate <- lubridate::force_tz(time = lubridate::as_date(startdate), tzone = 'UTC')} - if(!is.null(enddate)){enddate <- lubridate::force_tz(time = lubridate::as_date(enddate), tzone = 'UTC')} - existing.input$start_date <- lubridate::force_tz(time = lubridate::as_date(existing.input$start_date), tzone = 'UTC') - existing.input$end_date <- lubridate::force_tz(time = lubridate::as_date(existing.input$end_date), tzone = 'UTC') + if (!is.null(startdate)) { + startdate <- lubridate::force_tz(time = lubridate::as_date(startdate), tzone = "UTC") + } + if (!is.null(enddate)) { + enddate <- lubridate::force_tz(time = lubridate::as_date(enddate), tzone = "UTC") + } + existing.input$start_date <- lubridate::force_tz(time = lubridate::as_date(existing.input$start_date), tzone = "UTC") + existing.input$end_date <- lubridate::force_tz(time = 
lubridate::as_date(existing.input$end_date), tzone = "UTC") for (i in seq_len(nrow(existing.input))) { - existing.input.i <- existing.input[i,] + existing.input.i <- existing.input[i, ] if (is.na(existing.input.i$start_date) && is.null(startdate)) { - inputid <- existing.input.i[['id']] - }else if(existing.input.i$start_date == startdate && existing.input.i$end_date == enddate){ - inputid <- existing.input.i[['id']] - + inputid <- existing.input.i[["id"]] + } else if (existing.input.i$start_date == startdate && existing.input.i$end_date == enddate) { + inputid <- existing.input.i[["id"]] } } @@ -112,70 +115,81 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, # Either there was no existing input, or there was but the dates don't match and # allow.conflicting.dates==TRUE. So, insert new input record. # adding is.null(startdate) to add inputs like soil that don't have dates - if(parent == "" && is.null(startdate)) { - cmd <- paste0("INSERT INTO inputs ", - "(site_id, format_id, name) VALUES (", - siteid, ", ", formatid, ", '", name, - "'",") RETURNING id") - } else if(parent == "" && !is.null(startdate)) { - cmd <- paste0("INSERT INTO inputs ", - "(site_id, format_id, start_date, end_date, name) VALUES (", - siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, - "') RETURNING id") - }else if(is.null(startdate)){ - cmd <- paste0("INSERT INTO inputs ", - "(site_id, format_id, name, parent_id) VALUES (", - siteid, ", ", formatid, ", '", name, "',", parentid, ") RETURNING id") - }else { - cmd <- paste0("INSERT INTO inputs ", - "(site_id, format_id, start_date, end_date, name, parent_id) VALUES (", - siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, "',", parentid, ") RETURNING id") + if (parent == "" && is.null(startdate)) { + cmd <- paste0( + "INSERT INTO inputs ", + "(site_id, format_id, name) VALUES (", + siteid, ", ", formatid, ", '", name, + "'", ") RETURNING id" + ) + } else if (parent == "" && !is.null(startdate)) { + cmd <- paste0( + "INSERT INTO inputs ", + "(site_id, format_id, start_date, end_date, name) VALUES (", + siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, + "') RETURNING id" + ) + } else if (is.null(startdate)) { + cmd <- paste0( + "INSERT INTO inputs ", + "(site_id, format_id, name, parent_id) VALUES (", + siteid, ", ", formatid, ", '", name, "',", parentid, ") RETURNING id" + ) + } else { + cmd <- paste0( + "INSERT INTO inputs ", + "(site_id, format_id, start_date, end_date, name, parent_id) VALUES (", + siteid, ", ", formatid, ", '", startdate, "', '", enddate, "','", name, "',", parentid, ") RETURNING id" + ) } # This is the id that we just registered - inserted.id <-db.query(query = cmd, con = con) + inserted.id <- db.query(query = cmd, con = con) name.s <- name - if(is.null(startdate)){ + if (is.null(startdate)) { inputid <- db.query( query = paste0( "SELECT id FROM inputs WHERE site_id=", siteid, - " AND format_id=", formatid), + " AND format_id=", formatid + ), con = con )$id - - }else{inputid <- db.query( - query = paste0( - "SELECT id FROM inputs WHERE site_id=", siteid, - " AND format_id=", formatid, - " AND start_date='", startdate, - "' AND end_date='", enddate, - "'" , parent, ";" - ), - con = con - )$id} - }else{ - inserted.id <- data.frame(id=inputid) # in the case that inputid is not null then this means that there was an exsiting input + } else { + inputid <- db.query( + query = paste0( + "SELECT id FROM inputs WHERE site_id=", siteid, + " AND format_id=", 
formatid, + " AND start_date='", startdate, + "' AND end_date='", enddate, + "'", parent, ";" + ), + con = con + )$id + } + } else { + inserted.id <- data.frame(id = inputid) # in the case that inputid is not null then this means that there was an exsiting input } if (length(inputid) > 1 && !ens) { - PEcAn.logger::logger.warn(paste0("Multiple input files found matching parameters format_id = ", formatid, - ", startdate = ", startdate, ", enddate = ", enddate, ", parent = ", parent, ". Selecting the", - " last input file. This is normal for when an entire ensemble is inserted iteratively, but ", - " is likely an error otherwise.")) - inputid = inputid[length(inputid)] - } else if (ens){ + PEcAn.logger::logger.warn(paste0( + "Multiple input files found matching parameters format_id = ", formatid, + ", startdate = ", startdate, ", enddate = ", enddate, ", parent = ", parent, ". Selecting the", + " last input file. This is normal for when an entire ensemble is inserted iteratively, but ", + " is likely an error otherwise." + )) + inputid <- inputid[length(inputid)] + } else if (ens) { inputid <- inserted.id$id } # find appropriate dbfile, if not in database, insert new dbfile - dbfile <- dbfile.check(type = 'Input', container.id = inputid, con = con, hostname = hostname) + dbfile <- dbfile.check(type = "Input", container.id = inputid, con = con, hostname = hostname) if (nrow(dbfile) > 0 & !ens) { - if (nrow(dbfile) > 1) { print(dbfile) PEcAn.logger::logger.warn("Multiple dbfiles found. Using last.") - dbfile <- dbfile[nrow(dbfile),] + dbfile <- dbfile[nrow(dbfile), ] } if (dbfile$file_name != in.prefix || dbfile$file_path != in.path && !ens) { @@ -187,13 +201,14 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, )) dbfileid <- NA } else { - dbfileid <- dbfile[['id']] + dbfileid <- dbfile[["id"]] } - } else { - #insert dbfile & return dbfile id - dbfileid <- dbfile.insert(in.path = in.path, in.prefix = in.prefix, type = 'Input', id = inputid, - con = con, reuse = TRUE, hostname = hostname) + # insert dbfile & return dbfile id + dbfileid <- dbfile.insert( + in.path = in.path, in.prefix = in.prefix, type = "Input", id = inputid, + con = con, reuse = TRUE, hostname = hostname + ) } invisible(list(input.id = inputid, dbfile.id = dbfileid)) @@ -223,34 +238,32 @@ dbfile.input.insert <- function(in.path, in.prefix, siteid, startdate, enddate, ##' \dontrun{ ##' dbfile.input.check(siteid, startdate, enddate, 'application/x-RData', 'traits', dbcon) ##' } -dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, formatname, parentid=NA, - con, hostname=PEcAn.remote::fqdn(), exact.dates=FALSE, pattern=NULL, return.all=FALSE) { - - +dbfile.input.check <- function(siteid, startdate = NULL, enddate = NULL, mimetype, formatname, parentid = NA, + con, hostname = PEcAn.remote::fqdn(), exact.dates = FALSE, pattern = NULL, return.all = FALSE) { hostname <- default_hostname(hostname) - - mimetypeid <- get.id(table = 'mimetypes', colnames = 'type_string', values = mimetype, con = con) + + mimetypeid <- get.id(table = "mimetypes", colnames = "type_string", values = mimetype, con = con) if (is.null(mimetypeid)) { return(invisible(data.frame())) } - + # find appropriate format - formatid <- get.id(table = 'formats', colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), con = con) - + formatid <- get.id(table = "formats", colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), con = con) + if (is.null(formatid)) { 
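    # nb: `invisible(data.frame())` on its own is not a return statement;
    # without an explicit return() execution falls through and the input
    # query below is built with a NULL formatid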
invisible(data.frame()) } - + # setup parent part of query if specified if (is.na(parentid)) { parent <- "" } else { parent <- paste0(" AND parent_id=", parentid) } - + # find appropriate input if (exact.dates) { - if(!is.null(enddate)){ + if (!is.null(enddate)) { inputs <- db.query( query = paste0( "SELECT * FROM inputs WHERE site_id=", siteid, @@ -261,7 +274,7 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f ), con = con ) - } else{ + } else { inputs <- db.query( query = paste0( "SELECT * FROM inputs WHERE site_id=", siteid, @@ -273,21 +286,20 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f ) } } - + if (is.null(inputs) | length(inputs$id) == 0) { return(data.frame()) } else { - if (!is.null(pattern)) { ## Case where pattern is not NULL - inputs <- inputs[grepl(pattern, inputs$name),] + inputs <- inputs[grepl(pattern, inputs$name), ] } - + ## parent check when NA # if (is.na(parentid)) { # inputs <- inputs[is.na(inputs$parent_id),] # } - + if (length(inputs$id) > 1) { PEcAn.logger::logger.warn("Found multiple matching inputs. Checking for one with associate files on host machine") print(inputs) @@ -296,55 +308,52 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f # for(i in seq_len(ni)){ # dbfile[[i]] <- dbfile.check(type = 'Input', container.id = inputs$id[i], con = con, hostname = hostname, machine.check = TRUE) # } - + dbfile <- dbfile.check( - type = 'Input', + type = "Input", container.id = inputs$id, con = con, hostname = hostname, machine.check = TRUE, return.all = return.all ) - - + + if (nrow(dbfile) == 0) { ## With the possibility of dbfile.check returning nothing, ## as.data.frame ensures a empty data.frame is returned ## rather than an empty list. PEcAn.logger::logger.info("File not found on host machine. Returning Valid input with file associated on different machine if possible") - return(as.data.frame(dbfile.check(type = 'Input', container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) + return(as.data.frame(dbfile.check(type = "Input", container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) } - + return(dbfile) } else if (length(inputs$id) == 0) { - + # need this third case here because prent check above can return an empty inputs return(data.frame()) - - }else{ - + } else { PEcAn.logger::logger.warn("Found possible matching input. Checking if its associate files are on host machine") print(inputs) dbfile <- dbfile.check( - type = 'Input', + type = "Input", container.id = inputs$id, con = con, hostname = hostname, machine.check = TRUE, return.all = return.all ) - + if (nrow(dbfile) == 0) { ## With the possibility of dbfile.check returning nothing, ## as.data.frame ensures an empty data.frame is returned ## rather than an empty list. PEcAn.logger::logger.info("File not found on host machine. 
Returning Valid input with file associated on different machine if possible") - return(as.data.frame(dbfile.check(type = 'Input', container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) + return(as.data.frame(dbfile.check(type = "Input", container.id = inputs$id, con = con, hostname = hostname, machine.check = FALSE))) } - + return(dbfile) - } } } @@ -368,7 +377,7 @@ dbfile.input.check <- function(siteid, startdate=NULL, enddate=NULL, mimetype, f ##' \dontrun{ ##' dbfile.posterior.insert('trait.data.Rdata', pft, 'application/x-RData', 'traits', dbcon) ##' } -dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, hostname=PEcAn.remote::fqdn()) { +dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, hostname = PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) # find appropriate pft @@ -377,20 +386,26 @@ dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, ho PEcAn.logger::logger.severe("Could not find pft, could not store file", filename) } - mimetypeid <- get.id(table = 'mimetypes', colnames = 'type_string', values = mimetype, - con = con, create = TRUE) + mimetypeid <- get.id( + table = "mimetypes", colnames = "type_string", values = mimetype, + con = con, create = TRUE + ) # find appropriate format - formatid <- get.id(table = "formats", colnames = c('mimetype_id', 'name'), values = c(mimetypeid, formatname), - con = con, create = TRUE, dates = TRUE) + formatid <- get.id( + table = "formats", colnames = c("mimetype_id", "name"), values = c(mimetypeid, formatname), + con = con, create = TRUE, dates = TRUE + ) # find appropriate posterior # NOTE: This is defined but not used # posterior_ids <- get.id("posteriors", "pft_id", pftid, con) - posteriorid_query <- paste0("SELECT id FROM posteriors WHERE pft_id=", pftid, - " AND format_id=", formatid) - posteriorid <- db.query(query = posteriorid_query, con = con)[['id']] + posteriorid_query <- paste0( + "SELECT id FROM posteriors WHERE pft_id=", pftid, + " AND format_id=", formatid + ) + posteriorid <- db.query(query = posteriorid_query, con = con)[["id"]] if (is.null(posteriorid)) { # insert input db.query( @@ -400,13 +415,15 @@ dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, ho ), con = con ) - posteriorid <- db.query(posteriorid_query, con)[['id']] + posteriorid <- db.query(posteriorid_query, con)[["id"]] } # NOTE: Modified by Alexey Shiklomanov. 
# I'm not sure how this is supposed to work, but I think it's like this - invisible(dbfile.insert(in.path = dirname(filename), in.prefix = basename(filename), type = "Posterior", id = posteriorid, - con = con, reuse = TRUE, hostname = hostname)) + invisible(dbfile.insert( + in.path = dirname(filename), in.prefix = basename(filename), type = "Posterior", id = posteriorid, + con = con, reuse = TRUE, hostname = hostname + )) } ##' Function to check to see if a file exists in the dbfiles table as an input @@ -427,7 +444,7 @@ dbfile.posterior.insert <- function(filename, pft, mimetype, formatname, con, ho ##' \dontrun{ ##' dbfile.posterior.check(pft, 'application/x-RData', 'traits', dbcon) ##' } -dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname=PEcAn.remote::fqdn()) { +dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname = PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) # find appropriate pft @@ -454,12 +471,12 @@ dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname=PEcA " AND format_id=", formatid ), con = con - )[['id']] + )[["id"]] if (is.null(posteriorid)) { invisible(data.frame()) } - invisible(dbfile.check(type = 'Posterior', container.id = posteriorid, con = con, hostname = hostname)) + invisible(dbfile.check(type = "Posterior", container.id = posteriorid, con = con, hostname = hostname)) } ##' Function to insert a file into the dbfiles table @@ -481,10 +498,10 @@ dbfile.posterior.check <- function(pft, mimetype, formatname, con, hostname=PEcA ##' \dontrun{ ##' dbfile.insert('somefile.txt', 'Input', 7, dbcon) ##' } -dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostname=PEcAn.remote::fqdn()) { +dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostname = PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) - if (substr(in.path, 1, 1) != '/') { + if (substr(in.path, 1, 1) != "/") { PEcAn.logger::logger.error("path to dbfiles:", in.path, " is not a valid full path") } @@ -500,20 +517,23 @@ dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostn "file_path='", in.path, "' AND ", "machine_id='", hostid, "'" ), - con = con)) + con = con + )) if (nrow(dbfile) == 0) { # If no existing record, insert one insert_result <- db.query( - query = paste0("INSERT INTO dbfiles ", - "(container_type, container_id, file_name, file_path, machine_id) VALUES (", - "'", type, "', ", id, ", '", basename(in.prefix), "', '", in.path, "', ", hostid, - ") RETURNING id"), + query = paste0( + "INSERT INTO dbfiles ", + "(container_type, container_id, file_name, file_path, machine_id) VALUES (", + "'", type, "', ", id, ", '", basename(in.prefix), "', '", in.path, "', ", hostid, + ") RETURNING id" + ), con = con ) - file.id <- insert_result[['id']] + file.id <- insert_result[["id"]] } else if (!reuse) { # If there is an existing record but reuse==FALSE, return NA. 
file.id <- NA @@ -527,7 +547,7 @@ dbfile.insert <- function(in.path, in.prefix, type, id, con, reuse = TRUE, hostn )) file.id <- NA } else { - file.id <- dbfile[['id']] + file.id <- dbfile[["id"]] } } @@ -561,18 +581,21 @@ dbfile.check <- function(type, container.id, con, hostname = PEcAn.remote::fqdn(), machine.check = TRUE, return.all = FALSE) { - type <- match.arg(type, c("Input", "Posterior", "Model")) hostname <- default_hostname(hostname) # find appropriate host hostid <- get.id(table = "machines", colnames = "hostname", values = hostname, con = con) - if (is.null(hostid)) return(data.frame()) + if (is.null(hostid)) { + return(data.frame()) + } dbfiles <- dplyr::tbl(con, "dbfiles") %>% - dplyr::filter(.data$container_type == !!type, - .data$container_id %in% !!container.id) + dplyr::filter( + .data$container_type == !!type, + .data$container_id %in% !!container.id + ) if (machine.check) { dbfiles <- dbfiles %>% @@ -609,16 +632,16 @@ dbfile.check <- function(type, container.id, con, ##' \dontrun{ ##' dbfile.file('Input', 7, dbcon) ##' } -dbfile.file <- function(type, id, con, hostname=PEcAn.remote::fqdn()) { +dbfile.file <- function(type, id, con, hostname = PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) files <- dbfile.check(type = type, container.id = id, con = con, hostname = hostname) if (nrow(files) > 1) { PEcAn.logger::logger.warn("multiple files found for", id, "returned; using the first one found") - invisible(file.path(files[1, 'file_path'], files[1, 'file_name'])) + invisible(file.path(files[1, "file_path"], files[1, "file_name"])) } else if (nrow(files) == 1) { - invisible(file.path(files[1, 'file_path'], files[1, 'file_name'])) + invisible(file.path(files[1, "file_path"], files[1, "file_name"])) } else { PEcAn.logger::logger.warn("no files found for ", id, "in database") invisible(NA) @@ -633,11 +656,11 @@ dbfile.file <- function(type, id, con, hostname=PEcAn.remote::fqdn()) { ##' \dontrun{ ##' dbfile.id('Model', '/usr/local/bin/sipnet', dbcon) ##' } -dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { +dbfile.id <- function(type, file, con, hostname = PEcAn.remote::fqdn()) { hostname <- default_hostname(hostname) # find appropriate host - hostid <- db.query(query = paste0("SELECT id FROM machines WHERE hostname='", hostname, "'"), con = con)[['id']] + hostid <- db.query(query = paste0("SELECT id FROM machines WHERE hostname='", hostname, "'"), con = con)[["id"]] if (is.null(hostid)) { invisible(NA) } @@ -652,13 +675,14 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { "' AND file_name='", file_name, "' AND machine_id=", hostid ), - con = con) + con = con + ) if (nrow(ids) > 1) { PEcAn.logger::logger.warn("multiple ids found for", file, "returned; using the first one found") - invisible(ids[1, 'container_id']) + invisible(ids[1, "container_id"]) } else if (nrow(ids) == 1) { - invisible(ids[1, 'container_id']) + invisible(ids[1, "container_id"]) } else { PEcAn.logger::logger.warn("no id found for", file, "in database") invisible(NA) @@ -693,210 +717,213 @@ dbfile.id <- function(type, file, con, hostname=PEcAn.remote::fqdn()) { ##' } -dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = FALSE ){ +dbfile.move <- function(old.dir, new.dir, file.type, siteid = NULL, register = FALSE) { - #create nulls for file movement and error info - error = 0 - files.sym = 0 - files.changed = 0 - files.reg = 0 - files.indb = 0 + # create nulls for file movement and error info + error <- 0 + files.sym <- 0 
+ files.changed <- 0 + files.reg <- 0 + files.indb <- 0 - #check for file type and update to make it *.file type - if(file.type != "clim" | file.type != "nc"){ - PEcAn.logger::logger.error('File type not supported by move at this time. Currently only supports NC and CLIM files') - error = 1 + # check for file type and update to make it *.file type + if (file.type != "clim" | file.type != "nc") { + PEcAn.logger::logger.error("File type not supported by move at this time. Currently only supports NC and CLIM files") + error <- 1 } - file.pattern = paste0("*.", file.type) + file.pattern <- paste0("*.", file.type) - #create new directory if it doesn't exist - if(!dir.exists(new.dir)){ - dir.create(new.dir)} + # create new directory if it doesn't exist + if (!dir.exists(new.dir)) { + dir.create(new.dir) + } # check to make sure both directories exist - if(!dir.exists(old.dir)){ - PEcAn.logger::logger.error('Old File directory does not exist. Please enter valid file path') - error = 1} + if (!dir.exists(old.dir)) { + PEcAn.logger::logger.error("Old File directory does not exist. Please enter valid file path") + error <- 1 + } - if(!dir.exists(new.dir)){ - PEcAn.logger::logger.error('New File directory does not exist. Please enter valid file path') - error = 1} + if (!dir.exists(new.dir)) { + PEcAn.logger::logger.error("New File directory does not exist. Please enter valid file path") + error <- 1 + } - if(basename(new.dir) != basename(old.dir)){ - PEcAn.logger::logger.error('Basenames of files do not match') + if (basename(new.dir) != basename(old.dir)) { + PEcAn.logger::logger.error("Basenames of files do not match") } - #list files in the old directory - old.files <- list.files(path= old.dir, pattern = file.pattern) + # list files in the old directory + old.files <- list.files(path = old.dir, pattern = file.pattern) - #check to make sure there are files - if(length(old.files) == 0){ - PEcAn.logger::logger.warn('No files found') - error = 1 + # check to make sure there are files + if (length(old.files) == 0) { + PEcAn.logger::logger.warn("No files found") + error <- 1 } - #create full file path - full.old.file = file.path(old.dir, old.files) + # create full file path + full.old.file <- file.path(old.dir, old.files) ### Get BETY information ### - bety <- dplyr::src_postgres(dbname = 'bety', - host = 'psql-pecan.bu.edu', - user = 'bety', - password = 'bety') + bety <- dplyr::src_postgres( + dbname = "bety", + host = "psql-pecan.bu.edu", + user = "bety", + password = "bety" + ) con <- bety$con - #get matching dbfiles from BETY - dbfile.path = dirname(full.old.file) - dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% + # get matching dbfiles from BETY + dbfile.path <- dirname(full.old.file) + dbfiles <- dplyr::tbl(con, "dbfiles") %>% + dplyr::collect() %>% dplyr::filter(.data$file_name %in% basename(full.old.file)) %>% dplyr::filter(.data$file_path %in% dbfile.path) - #if there are matching db files - if(dim(dbfiles)[1] > 0){ + # if there are matching db files + if (dim(dbfiles)[1] > 0) { # Check to make sure files line up - if(dim(dbfiles)[1] != length(full.old.file)) { + if (dim(dbfiles)[1] != length(full.old.file)) { PEcAn.logger::logger.warn("Files to be moved don't match up with BETY files, only moving the files that match") - #IF DB FILES AND FULL FILES DONT MATCH, remove those not in BETY - will take care of the rest below - index = which(basename(full.old.file) %in% dbfiles$file_name) - index1 = seq(1, length(full.old.file)) + # IF DB FILES AND FULL FILES DONT MATCH, remove 
those not in BETY - will take care of the rest below + index <- which(basename(full.old.file) %in% dbfiles$file_name) + index1 <- seq(1, length(full.old.file)) check <- index1[-which(index1 %in% index)] full.old.file <- full.old.file[-check] - #record the number of files that are being moved - files.changed = length(full.old.file) - + # record the number of files that are being moved + files.changed <- length(full.old.file) } - #Check to make sure the files line up - if(dim(dbfiles)[1] != length(full.old.file)) { + # Check to make sure the files line up + if (dim(dbfiles)[1] != length(full.old.file)) { PEcAn.logger::logger.error("Files to be moved don't match up with BETY files, canceling move") - error = 1 + error <- 1 } - #Make sure the files line up - dbfiles <- dbfiles[order(dbfiles$file_name),] + # Make sure the files line up + dbfiles <- dbfiles[order(dbfiles$file_name), ] full.old.file <- sort(full.old.file) - #Record number of files moved and changed in BETY - files.indb = dim(dbfiles)[1] + # Record number of files moved and changed in BETY + files.indb <- dim(dbfiles)[1] - #Move files and update BETY - if(error == 0) { - for(i in 1:length(full.old.file)){ + # Move files and update BETY + if (error == 0) { + for (i in 1:length(full.old.file)) { fs::file_move(full.old.file[i], new.dir) db.query(paste0("UPDATE dbfiles SET file_path= '", new.dir, "' where id=", dbfiles$id[i]), con) - } #end i loop - } #end error if statement - - - - } #end dbfile loop + } # end i loop + } # end error if statement + } # end dbfile loop - #if there are files that are in the folder but not in BETY, we can either register them or not - if (dim(dbfiles)[1] == 0 | files.changed > 0){ + # if there are files that are in the folder but not in BETY, we can either register them or not + if (dim(dbfiles)[1] == 0 | files.changed > 0) { - #Recheck what files are in the directory since others may have been moved above - old.files <- list.files(path= old.dir, pattern = file.pattern) + # Recheck what files are in the directory since others may have been moved above + old.files <- list.files(path = old.dir, pattern = file.pattern) - #Recreate full file path - full.old.file = file.path(old.dir, old.files) + # Recreate full file path + full.old.file <- file.path(old.dir, old.files) - #Error check again to make sure there aren't any matching dbfiles - dbfile.path = dirname(full.old.file) - dbfiles <- dplyr::tbl(con, "dbfiles") %>% dplyr::collect() %>% + # Error check again to make sure there aren't any matching dbfiles + dbfile.path <- dirname(full.old.file) + dbfiles <- dplyr::tbl(con, "dbfiles") %>% + dplyr::collect() %>% dplyr::filter(.data$file_name %in% basename(full.old.file)) %>% dplyr::filter(.data$file_path %in% dbfile.path) - if(dim(dbfiles)[1] > 0){ + if (dim(dbfiles)[1] > 0) { PEcAn.logger::logger.error("There are still dbfiles matching these files! 
Canceling link or registration") - error = 1 + error <- 1 } - if(error == 0 & register == TRUE){ - - #Record how many files are being registered to BETY - files.reg= length(full.old.file) - - for(i in 1:length(full.old.file)){ - - file_path = dirname(full.old.file[i]) - file_name = basename(full.old.file[i]) - - if(file.type == "nc"){mimetype = "application/x-netcdf" - formatname ="CF Meteorology application" } - else if(file.type == "clim"){mimetype = 'text/csv' - formatname = "Sipnet.climna"} - else{PEcAn.logger::logger.error("File Type is currently not supported")} - - - dbfile.input.insert(in.path = file_path, - in.prefix = file_name, - siteid = siteid, - startdate = NULL, - enddate = NULL, - mimetype = mimetype, - formatname = formatname, - parentid=NA, - con = con, - hostname=PEcAn.remote::fqdn(), - allow.conflicting.dates=FALSE, - ens=FALSE) - }#end i loop - } #end error loop - - } #end register == TRUE + if (error == 0 & register == TRUE) { + + # Record how many files are being registered to BETY + files.reg <- length(full.old.file) + + for (i in 1:length(full.old.file)) { + file_path <- dirname(full.old.file[i]) + file_name <- basename(full.old.file[i]) + + if (file.type == "nc") { + mimetype <- "application/x-netcdf" + formatname <- "CF Meteorology application" + } + else if (file.type == "clim") { + mimetype <- "text/csv" + formatname <- "Sipnet.climna" + } + else { + PEcAn.logger::logger.error("File Type is currently not supported") + } + + + dbfile.input.insert( + in.path = file_path, + in.prefix = file_name, + siteid = siteid, + startdate = NULL, + enddate = NULL, + mimetype = mimetype, + formatname = formatname, + parentid = NA, + con = con, + hostname = PEcAn.remote::fqdn(), + allow.conflicting.dates = FALSE, + ens = FALSE + ) + } # end i loop + } # end error loop + } # end register == TRUE - if(error == 0 & register == FALSE){ - #Create file path for symbolic link - full.new.file = file.path(new.dir, old.files) + if (error == 0 & register == FALSE) { + # Create file path for symbolic link + full.new.file <- file.path(new.dir, old.files) - #Record number of files that will have a symbolic link made - files.sym = length(full.new.file) + # Record number of files that will have a symbolic link made + files.sym <- length(full.new.file) - #Line up files - full.new.file = sort(full.new.file) + # Line up files + full.new.file <- sort(full.new.file) full.old.file <- sort(full.old.file) - #Check to make sure the files are the same length - if(length(full.new.file) != length(full.old.file)) { - + # Check to make sure the files are the same length + if (length(full.new.file) != length(full.old.file)) { PEcAn.logger::logger.error("Files to be moved don't match up with BETY. 
Canceling Move") - error = 1 + error <- 1 } - #Move file and create symbolic link if there are no errors + # Move file and create symbolic link if there are no errors - if(error ==0){ - for(i in 1:length(full.old.file)){ + if (error == 0) { + for (i in 1:length(full.old.file)) { fs::file_move(full.old.file[i], new.dir) R.utils::createLink(link = full.old.file[i], target = full.new.file[i]) - }#end i loop - } #end error loop + } # end i loop + } # end error loop + } # end Register == FALSE - } #end Register == FALSE - - if(error > 0){ + if (error > 0) { PEcAn.logger::logger.error("There was an error, files were not moved or linked") - } - if(error == 0){ - - PEcAn.logger::logger.info(paste0(files.changed + files.indb, " files were moved and updated on BETY, ", files.sym, " were moved and had a symbolic link created, and ", files.reg , " files were moved and then registered in BETY")) - + if (error == 0) { + PEcAn.logger::logger.info(paste0(files.changed + files.indb, " files were moved and updated on BETY, ", files.sym, " were moved and had a symbolic link created, and ", files.reg, " files were moved and then registered in BETY")) } - -} #end dbfile.move() +} # end dbfile.move() From 4cc6bb94da313e1600fb5ffd1d7e0c3b49fb31d6 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 22 May 2021 07:14:29 -0400 Subject: [PATCH 1875/2289] Update check_with_errors.R temporary fix to bug in rcmdcheck https://github.com/r-lib/rcmdcheck/issues/140 --- scripts/check_with_errors.R | 3 +++ 1 file changed, 3 insertions(+) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 534d542ffe3..67beea1fd65 100755 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -92,6 +92,9 @@ if (!file.exists(old_file)) { quit("no") } +## temporary fix to bug in rcmdcheck https://github.com/r-lib/rcmdcheck/issues/140 +Sys.setlocale('LC_ALL','C') + old <- rcmdcheck::parse_check(old_file) cmp <- rcmdcheck::compare_checks(old, chk) From a40885424d3e9b9a13e9315db28800b5f40d2206 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 22 May 2021 11:28:43 -0400 Subject: [PATCH 1876/2289] Update pecan.depends.R added r-lib/processx to install_github list --- docker/depends/pecan.depends.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 0ccf146953c..8aa64ec13da 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -13,7 +13,8 @@ lapply(c( 'ebimodeling/biocro', 'MikkoPeltoniemi/Rpreles', 'ropensci/geonames', -'ropensci/nneo' +'ropensci/nneo', +'r-lib/processx' ), remotes::install_github, lib = rlib) # install all packages (depends, imports, suggests) From 315c3724e90a2fc6fb6eb7350ba000234b974093 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 22 May 2021 12:03:23 -0400 Subject: [PATCH 1877/2289] Update Dockerfile Added r-lib/processx at @infotroph suggestion --- docker/depends/Dockerfile | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile index ca9bfb566c5..e82a03bdfe7 100644 --- a/docker/depends/Dockerfile +++ b/docker/depends/Dockerfile @@ -41,6 +41,7 @@ RUN apt-get update \ COPY pecan.depends.R / RUN Rscript -e "install.packages(c('devtools'))" \ && Rscript -e "devtools::install_version('roxygen2', '7.0.2')" \ + && Rscript -e "devtools::install_github('r-lib/processx')" \ && R_LIBS_USER='/usr/local/lib/R/site-library' Rscript /pecan.depends.R \ && rm -rf /tmp/* From 0d4bd292c1d8d693b33ee9c8a5e59261de451ba4 Mon 
Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 22 May 2021 12:07:57 -0400 Subject: [PATCH 1878/2289] Update pecan.depends.R removing 'r-lib/processx' because put it in Dockerfile instead --- docker/depends/pecan.depends.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 8aa64ec13da..425238d06db 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -13,8 +13,7 @@ lapply(c( 'ebimodeling/biocro', 'MikkoPeltoniemi/Rpreles', 'ropensci/geonames', -'ropensci/nneo', -'r-lib/processx' +'ropensci/nneo' ), remotes::install_github, lib = rlib) # install all packages (depends, imports, suggests) From 1cb7a676e69a288c5e92fc7f50c84ef87353c5ef Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 22 May 2021 18:10:07 +0200 Subject: [PATCH 1879/2289] install dev version of processx --- .github/workflows/ci.yml | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2cf4861c877..1ca058e7794 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -146,6 +146,11 @@ jobs: run: apt-get update && apt-get install -y postgresql-client qpdf - name: install new dependencies run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R + -name: update processx + # need processx > 3.5.2 to avoid + # https://github.com/r-lib/rcmdcheck/issues/140 + # Remove when https://github.com/r-lib/processx/pull/299 reaches CRAN + run: Rscript -e 'devtools::install_github("processx")' # run PEcAn checks - name: check From ad5e32aa20056905e27515f34bc608bd293743e4 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Sat, 22 May 2021 14:27:56 -0400 Subject: [PATCH 1880/2289] Update pecan.depends.R whitespace --- docker/depends/pecan.depends.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 425238d06db..0ccf146953c 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -13,7 +13,7 @@ lapply(c( 'ebimodeling/biocro', 'MikkoPeltoniemi/Rpreles', 'ropensci/geonames', -'ropensci/nneo' +'ropensci/nneo' ), remotes::install_github, lib = rlib) # install all packages (depends, imports, suggests) From 6b18fa226b95c6f6d843c21ce5a74d2b543f4d7c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 22 May 2021 20:36:57 +0200 Subject: [PATCH 1881/2289] yaml typo + move comments --- .github/workflows/ci.yml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 1ca058e7794..83fdb5bc7db 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -146,10 +146,10 @@ jobs: run: apt-get update && apt-get install -y postgresql-client qpdf - name: install new dependencies run: Rscript scripts/generate_dependencies.R && Rscript docker/depends/pecan.depends.R - -name: update processx - # need processx > 3.5.2 to avoid - # https://github.com/r-lib/rcmdcheck/issues/140 - # Remove when https://github.com/r-lib/processx/pull/299 reaches CRAN + # need processx > 3.5.2 to avoid cryptic errors from + # https://github.com/r-lib/rcmdcheck/issues/140 + # Remove when https://github.com/r-lib/processx/pull/299 reaches CRAN + - name: update processx run: Rscript -e 'devtools::install_github("processx")' # run PEcAn checks From 752dd978ceb735f2c5978e89bed1ddebf4745820 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 22 May 2021 20:42:23 +0200 
Subject: [PATCH 1882/2289] more typos

---
 .github/workflows/ci.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 83fdb5bc7db..ad3b1c56248 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -150,7 +150,7 @@ jobs:
 # https://github.com/r-lib/rcmdcheck/issues/140
 # Remove when https://github.com/r-lib/processx/pull/299 reaches CRAN
 - name: update processx
- run: Rscript -e 'devtools::install_github("processx")'
+ run: Rscript -e 'devtools::install_github("r-lib/processx")'

 # run PEcAn checks
 - name: check

From 8b7199cea866c2c465fcd074b0c2665d39c30ab0 Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sat, 22 May 2021 16:19:11 -0400
Subject: [PATCH 1883/2289] Update scripts/check_with_errors.R

reverting to previous because another fix has been introduced

Co-authored-by: Chris Black
---
 scripts/check_with_errors.R | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R
index 67beea1fd65..534d542ffe3 100755
--- a/scripts/check_with_errors.R
+++ b/scripts/check_with_errors.R
@@ -92,9 +92,6 @@ if (!file.exists(old_file)) {
 quit("no")
 }

-## temporary fix to bug in rcmdcheck https://github.com/r-lib/rcmdcheck/issues/140
-Sys.setlocale('LC_ALL','C')
-
 old <- rcmdcheck::parse_check(old_file)
 cmp <- rcmdcheck::compare_checks(old, chk)

From 9934c35ca5626cf072955557ee3f48aef0a642fa Mon Sep 17 00:00:00 2001
From: Michael Dietze
Date: Sat, 22 May 2021 16:20:00 -0400
Subject: [PATCH 1884/2289] Update Dockerfile

Reverting
---
 docker/depends/Dockerfile | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docker/depends/Dockerfile b/docker/depends/Dockerfile
index e82a03bdfe7..ca9bfb566c5 100644
--- a/docker/depends/Dockerfile
+++ b/docker/depends/Dockerfile
@@ -41,7 +41,6 @@ RUN apt-get update \
 COPY pecan.depends.R /
 RUN Rscript -e "install.packages(c('devtools'))" \
 && Rscript -e "devtools::install_version('roxygen2', '7.0.2')" \
- && Rscript -e "devtools::install_github('r-lib/processx')" \
 && R_LIBS_USER='/usr/local/lib/R/site-library' Rscript /pecan.depends.R \
 && rm -rf /tmp/*

From 7de49ee3716c2f61bdf7b80893848e719b8d840e Mon Sep 17 00:00:00 2001
From: PEcAn stylebot
Date: Sun, 23 May 2021 11:17:18 +0000
Subject: [PATCH 1885/2289] automated documentation update

---
 modules/priors/man/get.sample.Rd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/priors/man/get.sample.Rd b/modules/priors/man/get.sample.Rd
index 03dc50d0d7f..c3dc313f4ef 100644
--- a/modules/priors/man/get.sample.Rd
+++ b/modules/priors/man/get.sample.Rd
@@ -9,10 +9,10 @@ get.sample(prior, n = NULL, p = NULL)
 \arguments{
 \item{prior}{data.frame with distn, parama, and optionally paramb.}

-\item{n}{number of samples to return from a random sample of the rdistn family of functions (e.g. qnorm)}
+\item{n}{number of samples to return from a random sample of the rdistn family of functions (e.g. rnorm)}

 \item{p}{vector of quantiles from which to sample the distribution; typically pre-generated upstream
-in the workflow to be used by the qdist family of functions (e.g. qnorm)}
+in the workflow to be used by the qdistn family of functions (e.g.
qnorm)} } \value{ vector with n random samples from prior From 8cd6ae1c20ab5e09279fe0cf876e0d64c2e5b214 Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Mon, 24 May 2021 15:49:27 -0500 Subject: [PATCH 1886/2289] allow for /build comment --- .github/workflows/ci.yml | 34 ++++++++++++---------------------- .github/workflows/docker.yml | 5 +++++ 2 files changed, 17 insertions(+), 22 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index ad3b1c56248..47673593c7a 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -1,10 +1,6 @@ name: CI on: - repository_dispatch: - types: - - force - push: branches: - master @@ -15,8 +11,16 @@ on: pull_request: + issue_comment: + types: + - created + env: R_LIBS_USER: /usr/local/lib/R/site-library + LC_ALL: en_US.UTF-8 + NCPUS: 2 + PGHOST: postgres + CI: true jobs: @@ -24,6 +28,7 @@ jobs: # R BUILD # ---------------------------------------------------------------------- build: + if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build') runs-on: ubuntu-latest strategy: @@ -33,11 +38,6 @@ jobs: - "4.0.3" - "4.0.4" - env: - NCPUS: 2 - PGHOST: postgres - CI: true - container: image: pecan/depends:R${{ matrix.R }} @@ -65,6 +65,7 @@ jobs: # R TEST # ---------------------------------------------------------------------- test: + if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build') runs-on: ubuntu-latest strategy: @@ -79,11 +80,6 @@ jobs: image: mdillon/postgis:9.5 options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 - env: - NCPUS: 2 - PGHOST: postgres - CI: true - container: image: pecan/depends:R${{ matrix.R }} @@ -113,6 +109,7 @@ jobs: # R CHECK # ---------------------------------------------------------------------- check: + if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build') runs-on: ubuntu-latest strategy: @@ -123,9 +120,6 @@ jobs: - "4.0.4" env: - NCPUS: 2 - PGHOST: postgres - CI: true _R_CHECK_LENGTH_1_CONDITION_: true _R_CHECK_LENGTH_1_LOGIC2_: true # Avoid compilation check warnings that come from the system Makevars @@ -166,6 +160,7 @@ jobs: # SIPNET TESTS # ---------------------------------------------------------------------- sipnet: + if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build') runs-on: ubuntu-latest strategy: @@ -180,11 +175,6 @@ jobs: image: mdillon/postgis:9.5 options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 - env: - NCPUS: 2 - PGHOST: postgres - CI: true - container: image: pecan/depends:R${{ matrix.R }} diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml index 314711ede00..b9c5f9a38b0 100644 --- a/.github/workflows/docker.yml +++ b/.github/workflows/docker.yml @@ -21,6 +21,10 @@ on: pull_request: + issue_comment: + types: + - created + # Certain actions will only run when this is the master repo. 
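# The job-level `if:` guards added in this patch implement the new trigger:
# when the workflow fires on an issue_comment event, a job runs only if the
# comment body starts with "/build"; for push and pull_request events the
# first clause of the OR is already true, so those runs are unchanged.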
env: MASTER_REPO: PecanProject/pecan @@ -28,6 +32,7 @@ env: jobs: docker: + if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build') runs-on: ubuntu-latest steps: From 936383ce30f1a40a207320b1e58a7dfc4485e75c Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Fri, 28 May 2021 18:54:34 -0400 Subject: [PATCH 1887/2289] changed the hours object --- modules/data.atmosphere/R/GEFS_helper_functions.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 098b37eff54..c9bace88d1d 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -141,7 +141,8 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date, print(paste("Downloading", forecast_date, cycle)) if(cycle == "00"){ - hours <- c(seq(0, 240, 3),seq(246, min(end_hr, 384), 6)) + hours <- c(seq(0, 240, 3),seq(246, 384, 6)) + hours <- hours[hours<=end_hr] }else{ hours <- c(seq(0, 240, 3),seq(246, min(end_hr, 840) , 6)) } From 7a7c047ce8fdadfd675fe078e3672df0ae414ea8 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 31 May 2021 14:36:32 -0400 Subject: [PATCH 1888/2289] added ... to function arguements --- modules/data.atmosphere/R/download.NOAA_GEFS.R | 6 ++++-- modules/data.atmosphere/man/download.NOAA_GEFS.Rd | 6 +++++- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index 07f3660a92c..95351dd934a 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -35,7 +35,8 @@ ##' @param username username from pecan workflow ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? ##' @param downscale logical, assumed True. Indicated whether data should be downscaled to hourly -##' @export +##' @param ... Additional optional parameters +##' @export ##' ##' @examples ##' \dontrun{ @@ -56,7 +57,8 @@ download.NOAA_GEFS <- function(site_id, start_date= Sys.Date(), end_date = start_date + lubridate::days(16), downscale = TRUE, - overwrite = FALSE){ + overwrite = FALSE, + ...){ forecast_date = as.Date(start_date) forecast_time = (lubridate::hour(start_date) %/% 6)*6 diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index c0d8304cfe6..104ddd8364c 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -14,7 +14,8 @@ download.NOAA_GEFS( start_date = Sys.Date(), end_date = start_date + lubridate::days(16), downscale = TRUE, - overwrite = FALSE + overwrite = FALSE, + ... ) } \arguments{ @@ -37,6 +38,9 @@ download.NOAA_GEFS( \item{downscale}{logical, assumed True. Indicated whether data should be downscaled to hourly} \item{overwrite}{logical. Download a fresh version even if a local file with the same name already exists?} + +\item{...}{Additional optional parameters +@export} } \value{ A list of data frames is returned containing information about the data file that can be used to locate it later. 
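
The change to the `hours` object in `noaa_grid_download()` (patch 1887 above) is worth spelling out: GEFS cycle 00 output is 3-hourly out to 240 h and 6-hourly from 246 h to 384 h, and the old `seq(246, min(end_hr, 384), 6)` errors with a wrong-sign `by` whenever `end_hr` falls below 246, which is presumably the motivation for the fix. A sketch of the corrected logic, with a hypothetical `end_hr`:

```r
end_hr <- 48  # hypothetical: only a 2-day forecast is needed

# seq(246, min(end_hr, 384), 6) would fail here: seq(246, 48, 6) counts the wrong way.
# Building the full grid first and then filtering avoids that edge case:
hours <- c(seq(0, 240, by = 3), seq(246, 384, by = 6))
hours <- hours[hours <= end_hr]
range(hours)  # 0 48, in 3-hour steps
```
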
Each From 7b53a91bd1864dd531f2692bcb2d8603a3658592 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 31 May 2021 17:05:26 -0400 Subject: [PATCH 1889/2289] created if/else in #write netcdf that skips missing ensemble members --- modules/data.atmosphere/R/GEFS_helper_functions.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index c9bace88d1d..13ea4476e7d 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -482,7 +482,9 @@ process_gridded_noaa_download <- function(lat_list, #Write netCDF - write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE) + if(!nrow(forecast_noaa_ens) == 0){ + write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE) +}else {next} if(downscale){ #Downscale the forecast from 6hr to 1hr From 20eb68124c2477ea65169d2587773352145df2b4 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 31 May 2021 22:26:44 -0400 Subject: [PATCH 1890/2289] Added filter to end results object so it will not contain any NULL objects --- modules/data.atmosphere/R/GEFS_helper_functions.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 13ea4476e7d..e08724c42b7 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -484,7 +484,8 @@ process_gridded_noaa_download <- function(lat_list, #Write netCDF if(!nrow(forecast_noaa_ens) == 0){ write_noaa_gefs_netcdf(df = forecast_noaa_ens,ens, lat = lat_list[1], lon = lon_east, cf_units = cf_var_units1, output_file = output_file, overwrite = TRUE) -}else {next} + }else {results_list[[ens]] <- NULL + next} if(downscale){ #Downscale the forecast from 6hr to 1hr @@ -511,6 +512,7 @@ process_gridded_noaa_download <- function(lat_list, } } + results_list <- results_list[!sapply(results_list, is.null)] return(results_list) } #process_gridded_noaa_download From 6529f21e792bd6da47c5764509fb7b1fc0952d49 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 3 Jun 2021 18:45:01 -0400 Subject: [PATCH 1891/2289] Downloaded Met Data for 4 EFI Sites plus HARV --- scripts/EFI_dataprep.R | 136 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 136 insertions(+) create mode 100644 scripts/EFI_dataprep.R diff --git a/scripts/EFI_dataprep.R b/scripts/EFI_dataprep.R new file mode 100644 index 00000000000..7dd6cfe6d48 --- /dev/null +++ b/scripts/EFI_dataprep.R @@ -0,0 +1,136 @@ +############################################## +# +# EFI Forecasting Challenge +# +############################################### +source('/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/half_hour_downscale.R') +library(PEcAn.all) +library(tidyverse) +########## Site Info ########### +#read in .csv with site info +setwd('/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/CSV') +data_prep <- read.csv("data_prep_5_sites.csv") +sitename <- data_prep$siteid_NEON2 +siteid <- data_prep$siteid_BETY2 +base_dir <- data_prep$base_dir + +#run info +start_date = format(Sys.Date()-1, "%Y-%m-%d") + +for(i in 1:length(sitename)){ + +###### Download Data ########### + +download_noaa_files_s3 <- function(siteID, date, 
cycle, local_directory){ + + Sys.setenv("AWS_DEFAULT_REGION" = "data", + "AWS_S3_ENDPOINT" = "ecoforecast.org") + + object <- aws.s3::get_bucket("drivers", prefix=paste0("noaa/NOAAGEFS_1hr/",siteID,"/",date,"/",cycle)) + + for(j in 1:length(object)){ + aws.s3::save_object(object[[j]], bucket = "drivers", file = file.path(local_directory, object[[j]]$Key)) + } +} + + +download_noaa_files_s3(siteID = sitename[i], date = as.Date(start_date), cycle = "00", local_directory <- base_dir[i] )# } + + +############ Downscale to 30 minutes ############## + +input_path = file.path(base_dir[i], "noaa/NOAAGEFS_1hr/", sitename[i], "/", start_date, "/00/") +output_path = file.path(base_dir[i], "noaa/half_hour/", sitename[i], "/", start_date, "/") +files = list.files(input_path) + +if(!dir.exists(output_path)){dir.create(output_path, recursive = TRUE)} + +for(k in 1:length(files)){ + +input_file = paste0(input_path, files[k]) +output_file = paste0(output_path, "Half_Hour_", files[k]) +temporal_downscale_half_hour(input_file = input_file, output_file = output_file , overwrite = FALSE, hr = 0.5) + +} + + + +########## Met2Model For SIPNET ############## +outfolder = file.path(base_dir[i], "noaa_clim/", sitename[i], "/", start_date, "/") +if(!dir.exists(outfolder)){dir.create(outfolder, recursive = TRUE)} + +in.path = dirname(output_path) +in.prefix = list.files(output_path) + +end_date = as.Date(start_date) + lubridate::days(35) + +for(l in 1:length(in.prefix)){ + + PEcAn.SIPNET::met2model.SIPNET(in.path = in.path, + in.prefix = in.prefix[l], + outfolder = outfolder, + start_date = start_date, + end_date = end_date, + overwrite = FALSE, + verbose = FALSE, + year.fragment = TRUE) + +} + +##### register downloaded met to BETY ############## +files = list.files(outfolder) + +### Get BETY information ### +bety <- dplyr::src_postgres(dbname = 'bety', + host = 'psql-pecan.bu.edu', + user = 'bety', + password = 'bety') +con <- bety$con + + + +for(h in 1:length(files)){ + + + + dbfile.input.insert(in.path = outfolder, + in.prefix = files[h], + startdate = start_date, + enddate = end_date, + siteid = siteid[i], + mimetype = "text/csv", + formatname = "Sipnet.climna", + parentid=NA, + con = con, + hostname=PEcAn.remote::fqdn(), + allow.conflicting.dates=TRUE, + ens=TRUE) + + + + +} + + +######### Get clim id's and paths ################# + +index = PEcAn.DB::dbfile.input.check( + siteid= siteid[i] %>% as.character(), + startdate = start_date %>% as.Date, + enddate = end_date %>% as.Date, + parentid = NA, + mimetype="text/csv", + formatname="Sipnet.climna", + con, + hostname = PEcAn.remote::fqdn(), + pattern = "2021", + exact.dates = TRUE, + return.all=TRUE +) + +} + + + + + From 8b391a2dc50af67d06bbbe2dd0c51475e9d226e3 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 3 Jun 2021 18:47:08 -0400 Subject: [PATCH 1892/2289] Download Met Data --- scripts/EFI_metprocess.R | 68 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 68 insertions(+) create mode 100644 scripts/EFI_metprocess.R diff --git a/scripts/EFI_metprocess.R b/scripts/EFI_metprocess.R new file mode 100644 index 00000000000..f4af0438a4b --- /dev/null +++ b/scripts/EFI_metprocess.R @@ -0,0 +1,68 @@ +############################################## +# +# EFI Forecasting Challenge +# +############################################### +source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/met.process.R') +source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/download.raw.met.module.R') 
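
For reference alongside these helpers, the S3 retrieval pattern used by `download_noaa_files_s3()` in EFI_dataprep.R above boils down to pointing `aws.s3` at the EFI server instead of AWS. A condensed sketch under stated assumptions: the site, date, cycle, and local directory are placeholders.

```r
library(aws.s3)

# The EFI forecast archive is served S3-style from ecoforecast.org,
# so the endpoint is overridden via environment variables.
Sys.setenv("AWS_DEFAULT_REGION" = "data",
           "AWS_S3_ENDPOINT"    = "ecoforecast.org")

prefix  <- file.path("noaa/NOAAGEFS_1hr", "HARV", "2021-06-03", "00")  # hypothetical site/date/cycle
objects <- get_bucket("drivers", prefix = prefix)

# Mirror each matching object into a local directory, preserving its key.
for (obj in objects) {
  save_object(obj, bucket = "drivers", file = file.path("/tmp/noaa", obj$Key))
}
```
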
+source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/GEFS_helper_functions.R') +source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/download.NOAA_GEFS.R') +library(PEcAn.all) +library(tidyverse) + +#read in .csv with site info +setwd("/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/CSV/") +data_prep <- read.csv("data_prep_metprocess.csv") +sitename <- data_prep$site_name +site_id <- data_prep$siteid_BETY3 +base_dir <- data_prep$base_dir +model_name <- data_prep$model_name3 +met_source <- data_prep$input_met_source3 +met_output <- data_prep$input_met_output3 + +#run info +start_date = as.Date(format(Sys.Date(), "%Y-%m-%d")) +end_date = as.Date(format(Sys.Date()+1, "%Y-%m-%d")) +host = list() + host$name = "localhost" +dbparms = list() + dbparms$dbname = "bety" + dbparms$host = "psql-pecan.bu.edu" + dbparms$user = "bety" + dbparms$password = "bety" + +#met.process + for (i in 1:length(sitename)) { + outfolder = file.path(base_dir[i], "noaa_clim/", sitename[i], "/", start_date, "/") + if(!dir.exists(outfolder)){dir.create(outfolder, recursive = TRUE)} + + input_met = list() + input_met$source = met_source[i] + input_met$output = met_output[i] + + site = list() + site$id = site_id[i] + site$name = sitename[i] + + model = model_name[i] + + met.process(site = site, + input_met = input_met, + start_date = start_date, + end_date = end_date, + model = model, + host = host, + dbparms = dbparms, + dir = outfolder, + browndog = NULL, + spin = NULL, + overwrite = FALSE) + + + + } + + + + + From 76be3d587fa96b4874c6910eb3304e0640da3dcb Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 3 Jun 2021 18:48:16 -0400 Subject: [PATCH 1893/2289] Edited PEcAn workflow for EFI forecasts --- scripts/EFI_workflow.R | 242 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 242 insertions(+) create mode 100644 scripts/EFI_workflow.R diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R new file mode 100644 index 00000000000..29b016548c9 --- /dev/null +++ b/scripts/EFI_workflow.R @@ -0,0 +1,242 @@ +library("PEcAn.all") +library("PEcAn.utils") +library("RCurl") +library("REddyProc") +library("tidyverse") +library("furrr") +library("R.utils") +library("dynutils") + +###### Preping Workflow for regular SIPNET Run ############## +args = list() +args$settings = "/projectnb/dietzelab/ahelgeso/Site_XMLS/los.xml" +args$continue = TRUE +outputPath <- "/projectnb/dietzelab/ahelgeso/Site_Outputs/Lost_Creek/" + + +if(!dir.exists(outputPath)){dir.create(outputPath, recursive = TRUE)} +setwd(outputPath) + +# Open and read in settings file for PEcAn run. +settings <- PEcAn.settings::read.settings(args$settings) + +start_date <- as.Date('2021-06-01') +end_date<- as.Date('2021-06-02') + +# Finding the right end and start date +met.start <- start_date +met.end <- met.start + lubridate::days(35) + + + +settings$run$start.date <- as.character(met.start) +settings$run$end.date <- as.character(met.end) +settings$run$site$met.start <- as.character(met.start) +settings$run$site$met.end <- as.character(met.end) +#info +settings$info$date <- paste0(format(Sys.time(), "%Y/%m/%d %H:%M:%S"), " +0000") + +# Update/fix/check settings. 
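
One detail above that is easy to miss: the run window is stretched to `met.start + lubridate::days(35)`, presumably because the GEFS products pulled by the helper functions extend to 840 forecast hours on most cycles, and 840 h is exactly 35 days. A one-line sanity check, with a hypothetical issue date:

```r
met.start   <- as.Date("2021-06-02")   # hypothetical forecast issue date
gefs_max_hr <- 840                     # longest horizon handled in noaa_grid_download()
met.end     <- met.start + lubridate::days(gefs_max_hr / 24)
met.end                                # "2021-07-07": 35 days after met.start
```
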
+# Will only run the first time it's called, unless force=TRUE +settings <- + PEcAn.settings::prepare.settings(settings, force = FALSE) + +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") + +#manually add in clim files +con <-try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + +input_check <- PEcAn.DB::dbfile.input.check( + siteid=settings$run$site$id %>% as.character(), + startdate = settings$run$start.date %>% as.Date, + enddate = NULL, + parentid = NA, + mimetype="text/csv", + formatname="Sipnet.climna", + con = con, + hostname = PEcAn.remote::fqdn(), + pattern = NULL, + exact.dates = TRUE, + return.all=TRUE +) + +#If INPUTS already exists, add id and met path to settings file + + + +if(length(input_check$id) > 0){ + #met paths + clim_check = list() + for(i in 1:length(input_check$file_path)){ + + clim_check[[i]] <- file.path(input_check$file_path[i], input_check$file_name[i]) + }#end i loop for creating file paths + #ids + index_id = list() + index_path = list() + for(i in 1:length(input_check$id)){ + index_id[[i]] = as.character(input_check$id[i])#get ids as list + + }#end i loop for making lists + names(index_id) = sprintf("id%s",seq(1:length(input_check$id))) #rename list + names(clim_check) = sprintf("path%s",seq(1:length(input_check$id))) + + settings$run$inputs$met$id = index_id + settings$run$inputs$met$path = clim_check +}else{PEcAn.utils::logger.error("No met file found")} +#settings <- PEcAn.workflow::do_conversions(settings, T, T, T) + +if(is_empty(settings$run$inputs$met$path) & length(clim_check)>0){ + settings$run$inputs$met$id = index_id + settings$run$inputs$met$path = clim_check +} +PEcAn.DB::db.close(con) + +# Write out the file with updated settings +PEcAn.settings::write.settings(settings, outputfile = "pecan.GEFS.xml") + +# start from scratch if no continue is passed in +status_file <- file.path(settings$outdir, "STATUS") +if (args$continue && file.exists(status_file)) { + file.remove(status_file) +} + +# Do conversions +#settings <- PEcAn.workflow::do_conversions(settings) + +# Write model specific configs +if (PEcAn.utils::status.check("CONFIG") == 0) { + PEcAn.utils::status.start("CONFIG") + settings <- + PEcAn.workflow::runModule.run.write.configs(settings) + + PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.CONFIGS.xml")) +} + +if ((length(which(commandArgs() == "--advanced")) != 0) + && (PEcAn.utils::status.check("ADVANCED") == 0)) { + PEcAn.utils::status.start("ADVANCED") + q() +} + +# Start ecosystem model runs +if (PEcAn.utils::status.check("MODEL") == 0) { + PEcAn.utils::status.start("MODEL") + stop_on_error <- as.logical(settings[[c("run", "stop_on_error")]]) + if (length(stop_on_error) == 0) { + # If we're doing an ensemble run, don't stop. If only a single run, we + # should be stopping. 
+ if (is.null(settings[["ensemble"]]) || + as.numeric(settings[[c("ensemble", "size")]]) == 1) { + stop_on_error <- TRUE + } else { + stop_on_error <- FALSE + } + } + PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = stop_on_error) + PEcAn.utils::status.end() +} + +# Get results of model runs +if (PEcAn.utils::status.check("OUTPUT") == 0) { + PEcAn.utils::status.start("OUTPUT") + runModule.get.results(settings) + PEcAn.utils::status.end() +} + +# Run ensemble analysis on model output. +if ("ensemble" %in% names(settings) + && PEcAn.utils::status.check("ENSEMBLE") == 0) { + PEcAn.utils::status.start("ENSEMBLE") + runModule.run.ensemble.analysis(settings, TRUE) + PEcAn.utils::status.end() +} + +# Run state data assimilation +if ("state.data.assimilation" %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + settings <- sda.enfk(settings) + PEcAn.utils::status.end() + } +} + +# Pecan workflow complete +if (PEcAn.utils::status.check("FINISHED") == 0) { + PEcAn.utils::status.start("FINISHED") + PEcAn.remote::kill.tunnel(settings) + db.query( + paste( + "UPDATE workflows SET finished_at=NOW() WHERE id=", + settings$workflow$id, + "AND finished_at IS NULL" + ), + params = settings$database$bety + ) + + # Send email if configured + if (!is.null(settings$email) + && !is.null(settings$email$to) + && (settings$email$to != "")) { + sendmail( + settings$email$from, + settings$email$to, + paste0("Workflow has finished executing at ", base::date()), + paste0("You can find the results on ", settings$email$url) + ) + } + PEcAn.utils::status.end() +} + +db.print.connections() +print("---------- PEcAn Workflow Complete ----------") + + +#EFI Output Configuration +library("ggplot2") +library("plotly") +library("gganimate") +library("thematic") +thematic_on() +source('/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/efi_data_process_v2.R') +#Load Output args +site.num <- settings$run$site$id +outdir <- outputPath +site.name <- settings$run$site$name +wid <- settings$workflow$id + +output_args = c(as.character(wid), site.num, outdir) + +data = efi.data.process(output_args) +#Run SIPNET Outputs +#To figure out the size for minutes +data.final = data %>% + mutate(date = as.Date(date)) %>% + filter(date < end_date) %>% + arrange(ensemble, date) %>% + mutate(time = as.POSIXct(paste(date, Time, minutes, sep = " "), format = "%Y-%m-%d %H %M")) %>% + mutate(siteID = site.name, + forecast = 1, + data_assimilation = 0, + time = lubridate::force_tz(time, tz = "UTC")) + +############ Plots to check out reliability of forecast ######################### + +ggplot(data.final, aes(x = time, y = nee, group = ensemble)) + + geom_line(aes(x = time, y = nee, color = ensemble)) + +ggplot(data.final, aes(x = time, y = le, group = ensemble)) + + geom_line(aes(x = time, y = le, color = ensemble)) + +ggplot(data.final, aes(x = time, y = vswc, group = ensemble)) + + geom_line(aes(x = time, y = vswc, color = ensemble)) + +########### Export data.final ############### + +write.csv(data.final, file = "[insert sitename].csv") + + From 850598b82bf182c9306b390c72ea2bd9e9a23963 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 3 Jun 2021 18:54:12 -0400 Subject: [PATCH 1894/2289] Source this when running EFI_dataprep --- scripts/half_hour_downscale.R | 336 ++++++++++++++++++++++++++++++++++ 1 file changed, 336 insertions(+) create mode 100644 scripts/half_hour_downscale.R diff --git a/scripts/half_hour_downscale.R b/scripts/half_hour_downscale.R new file mode 
100644 index 00000000000..d1ddbe8c2c5 --- /dev/null +++ b/scripts/half_hour_downscale.R @@ -0,0 +1,336 @@ +source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/GEFS_helper_functions.R') +temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TRUE, hr = 0.5){ + + # open netcdf + nc <- ncdf4::nc_open(input_file) + + if(stringr::str_detect(input_file, "ens")){ + ens_postion <- stringr::str_locate(input_file, "ens") + ens_name <- stringr::str_sub(input_file, start = ens_postion[1], end = ens_postion[2] + 2) + ens <- as.numeric(stringr::str_sub(input_file, start = ens_postion[2] + 1, end = ens_postion[2] + 2)) + }else{ + ens <- 0 + ens_name <- "ens00" + } + + # retrive variable names + cf_var_names <- names(nc$var) + + # generate time vector + time <- ncdf4::ncvar_get(nc, "time") + begining_time <- lubridate::ymd_hm(ncdf4::ncatt_get(nc, "time", + attname = "units")$value) + time <- begining_time + lubridate::hours(time) + + # retrive lat and lon + lat.in <- ncdf4::ncvar_get(nc, "latitude") + lon.in <- ncdf4::ncvar_get(nc, "longitude") + + # generate data frame from netcdf variables and retrive units + noaa_data <- tibble::tibble(time = time) + var_units <- rep(NA, length(cf_var_names)) + for(i in 1:length(cf_var_names)){ + curr_data <- ncdf4::ncvar_get(nc, cf_var_names[i]) + noaa_data <- cbind(noaa_data, curr_data) + var_units[i] <- ncdf4::ncatt_get(nc, cf_var_names[i], attname = "units")$value + } + + ncdf4::nc_close(nc) + + names(noaa_data) <- c("time",cf_var_names) + + # spline-based downscaling + if(length(which(c("air_temperature", "wind_speed","specific_humidity", "air_pressure") %in% cf_var_names) == 4)){ + forecast_noaa_ds <- downscale_spline_to_half_hrly(df = noaa_data, VarNames = c("air_temperature", "wind_speed","specific_humidity", "air_pressure")) + }else{ + #Add error message + } + + # Convert splined SH, temperature, and presssure to RH + forecast_noaa_ds <- forecast_noaa_ds %>% + dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity, + temp = forecast_noaa_ds$air_temperature, + press = forecast_noaa_ds$air_pressure)) %>% + dplyr::mutate(relative_humidity = relative_humidity, + relative_humidity = ifelse(relative_humidity > 1, 0, relative_humidity)) + + # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period) + if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){ + LW.flux.hrly <- downscale_repeat_6hr_to_half_hrly(df = noaa_data, varName = "surface_downwelling_longwave_flux_in_air") + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, LW.flux.hrly, by = "time") + }else{ + #Add error message + } + + # convert precipitation to hourly (just copy 6 hourly values over past 6-hour time period) + if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){ + Precip.flux.hrly <- downscale_repeat_6hr_to_half_hrly(df = noaa_data, varName = "precipitation_flux") + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, Precip.flux.hrly, by = "time") + }else{ + #Add error message + } + + # convert cloud_area_fraction to hourly (just copy 6 hourly values over past 6-hour time period) + if("cloud_area_fraction" %in% cf_var_names){ + cloud_area_fraction.flux.hrly <- downscale_repeat_6hr_to_half_hrly(df = noaa_data, varName = "cloud_area_fraction") + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, cloud_area_fraction.flux.hrly, by = "time") + }else{ + #Add error message + } + + # use solar geometry to convert shortwave from 6 hr to 1 hr + 
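
Before the call below, a toy version of the rescaling idea: potential top-of-atmosphere radiation (`rpot = 1366 * cos(zenith)`, computed by `downscale_solar_geom()` later in this file) supplies the half-hourly diurnal shape, and each 6-hour GEFS mean is redistributed over that shape so the window mean is preserved. The numbers here are placeholders, not model output.

```r
# Hypothetical half-hourly potential shortwave over one 6-hour window (W m-2)
rpot <- c(0, 0, 120, 480, 710, 650, 390, 80, 0, 0, 0, 0)
sw6  <- 240  # the single 6-hourly mean shortwave value from GEFS (W m-2)

# Redistribute: zero at night, scaled by the solar-geometry shape by day.
sw_ds <- if (mean(rpot) > 0) rpot * (sw6 / mean(rpot)) else rep(0, length(rpot))
all.equal(mean(sw_ds), sw6)  # TRUE: the 6-hour mean is unchanged
```
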
if("surface_downwelling_shortwave_flux_in_air" %in% cf_var_names){ + ShortWave.hrly <- downscale_ShortWave_to_half_hrly(df = noaa_data, lat = lat.in, lon = lon.in) + forecast_noaa_ds <- dplyr::inner_join(forecast_noaa_ds, ShortWave.hrly, by = "time") + }else{ + #Add error message + } + + #Add dummy ensemble number to work with write_noaa_gefs_netcdf() + forecast_noaa_ds$NOAA.member <- ens + + #Make sure var names are in correct order + forecast_noaa_ds <- forecast_noaa_ds %>% + dplyr::select("time", tidyselect::all_of(cf_var_names), "NOAA.member") + + #Write netCDF + write_noaa_gefs_netcdf(df = forecast_noaa_ds, + ens = ens, + lat = lat.in, + lon = lon.in, + cf_units = var_units, + output_file = output_file, + overwrite = overwrite) + +} #temporal_downscale + +#' @title Downscale spline to hourly +#' @return A dataframe of downscaled state variables +#' @param df, dataframe of data to be downscales +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_spline_to_half_hrly <- function(df,VarNames, hr = 0.5){ + # -------------------------------------- + # purpose: interpolates debiased forecasts from 6-hourly to hourly + # Creator: Laura Puckett, December 16 2018 + # -------------------------------------- + # @param: df, a dataframe of debiased 6-hourly forecasts + + t0 = min(df$time) + df <- df %>% + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) + + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) + + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC")) + + for(Var in 1:length(VarNames)){ + curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y + noaa_data_interp <- cbind(noaa_data_interp, curr_data) + } + + names(noaa_data_interp) <- c("time",VarNames) + + return(noaa_data_interp) +} + +#' @title Downscale shortwave to hourly +#' @return A dataframe of downscaled state variables +#' +#' @param df, data frame of variables +#' @param lat, lat of site +#' @param lon, long of site +#' @return ShortWave.ds +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_ShortWave_to_half_hrly <- function(df,lat, lon, hr = 0.5){ + ## downscale shortwave to hourly + + t0 <- min(df$time) + df <- df %>% + dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>% + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1)) + + interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr)) + + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + data.hrly$group_6hr <- NA + + group <- 0 + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr + group <- group + 1 + data.hrly$group_6hr[i] <- group + }else{ + data.hrly$surface_downwelling_shortwave_flux_in_air[i] <- curr + data.hrly$group_6hr[i] <- group + } + } + + ShortWave.ds <- data.hrly %>% + dplyr::mutate(hour = lubridate::hour(time)) %>% + dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>% + dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry + dplyr::group_by(group_6hr) %>% + dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw 
mean from solar geometry + dplyr::ungroup() %>% + dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>% + dplyr::select(time,surface_downwelling_shortwave_flux_in_air) + + return(ShortWave.ds) + +} + + +#' @title Downscale repeat to hourly +#' @return A dataframe of downscaled data +#' @param df, dataframe of data to be downscaled (Longwave) +#' @noRd +#' @author Laura Puckett +#' +#' + +downscale_repeat_6hr_to_half_hrly <- function(df, varName, hr = 0.5){ + + #Get first time point + t0 <- min(df$time) + + df <- df %>% + dplyr::select("time", all_of(varName)) %>% + #Calculate time difference + dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>% + #Shift valued back because the 6hr value represents the average over the + #previous 6hr period + dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) + + #Create new vector with all hours + interp.df.days <- seq(min(df$days_since_t0), + as.numeric(max(df$days_since_t0)), + 1 / (24 / hr)) + + #Create new data frame + noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days)) + + #Join 1 hr data frame with 6 hr data frame + data.hrly <- noaa_data_interp %>% + dplyr::left_join(df, by = "time") + + #Fill in hours + for(i in 1:nrow(data.hrly)){ + if(!is.na(data.hrly$lead_var[i])){ + curr <- data.hrly$lead_var[i] + }else{ + data.hrly$lead_var[i] <- curr + } + } + + #Clean up data frame + data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% + dplyr::arrange(time) + + names(data.hrly) <- c("time", varName) + + return(data.hrly) +} + +#' @title Calculate potential shortwave radiation +#' @return vector of potential shortwave radiation for each doy +#' +#' @param doy, day of year in decimal +#' @param lon, longitude +#' @param lat, latitude +#' @return `numeric(1)` +#' @author Quinn Thomas +#' @noRd +#' +#' +downscale_solar_geom <- function(doy, lon, lat) { + + dt <- median(diff(doy)) * 86400 # average number of seconds in time interval + hr <- (doy - floor(doy)) * 48# hour of day for each element of doy + + ## calculate potential radiation + cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) + rpot <- 1366 * cosz + return(rpot) +} + +##' @title Write NOAA GEFS netCDF +##' @param df data frame of meterological variables to be written to netcdf. 
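
The function below follows the standard `ncdf4` create-put-close pattern. A minimal free-standing version, with made-up dimension sizes, placeholder temperatures, and a throwaway file name, looks like this:

```r
library(ncdf4)

# One time dimension plus scalar lat/lon, mirroring write_noaa_gefs_netcdf().
time_dim <- ncdim_def("time", units = "hours since 2021-06-02 00:00",
                      vals = seq(0, 30, by = 6), create_dimvar = TRUE)
lat_dim  <- ncdim_def("latitude",  "degree_north", 42.5)
lon_dim  <- ncdim_def("longitude", "degree_east", -72.2)

tair <- ncvar_def("air_temperature", "K",
                  list(time_dim, lat_dim, lon_dim), missval = NaN)

nc <- nc_create(tempfile(fileext = ".nc"), list(tair))
ncvar_put(nc, tair, c(283, 286, 289, 291, 288, 285))  # placeholder values
nc_close(nc)
```
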
Columns +##' must start with time with the following columns in the order of `cf_units` +##' @param ens ensemble index used for subsetting df +##' @param lat latitude in degree north +##' @param lon longitude in degree east +##' @param cf_units vector of variable names in order they appear in df +##' @param output_file name, with full path, of the netcdf file that is generated +##' @param overwrite logical to overwrite existing netcdf file +##' @return NA +##' +##' @export +##' +##' @author Quinn Thomas +##' +##' + +write_noaa_gefs_netcdf <- function(df, ens = NA, lat, lon, cf_units, output_file, overwrite){ + + if(!is.na(ens)){ + data <- df + max_index <- max(which(!is.na(data$air_temperature))) + start_time <- min(data$time) + end_time <- data$time[max_index] + + data <- data %>% dplyr::select(-c("time", "NOAA.member")) + }else{ + data <- df + max_index <- max(which(!is.na(data$air_temperature))) + start_time <- min(data$time) + end_time <- data$time[max_index] + + data <- df %>% + dplyr::select(-c("time")) + } + + diff_time <- as.numeric(difftime(df$time, df$time[1])) / (60 * 60) + + cf_var_names <- names(data) + + time_dim <- ncdf4::ncdim_def(name="time", + units = paste("hours since", format(start_time, "%Y-%m-%d %H:%M")), + diff_time, #GEFS forecast starts 6 hours from start time + create_dimvar = TRUE) + lat_dim <- ncdf4::ncdim_def("latitude", "degree_north", lat, create_dimvar = TRUE) + lon_dim <- ncdf4::ncdim_def("longitude", "degree_east", lon, create_dimvar = TRUE) + + dimensions_list <- list(time_dim, lat_dim, lon_dim) + + nc_var_list <- list() + for (i in 1:length(cf_var_names)) { #Each ensemble member will have data on each variable stored in their respective file. + nc_var_list[[i]] <- ncdf4::ncvar_def(cf_var_names[i], cf_units[i], dimensions_list, missval=NaN) + } + + if (!file.exists(output_file) | overwrite) { + nc_flptr <- ncdf4::nc_create(output_file, nc_var_list, verbose = FALSE) + + #For each variable associated with that ensemble + for (j in 1:ncol(data)) { + # "j" is the variable number. "i" is the ensemble number. Remember that each row represents an ensemble + ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], unlist(data[,j])) + } + + ncdf4::nc_close(nc_flptr) #Write to the disk/storage + } +} \ No newline at end of file From 022892d9143589e23f17dba7c8df4a94b87b99ef Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 3 Jun 2021 18:56:12 -0400 Subject: [PATCH 1895/2289] Source this when running EFI_workflow to process outputs --- scripts/efi_data_process.R | 210 +++++++++++++++++++++++++++++++++++++ 1 file changed, 210 insertions(+) create mode 100644 scripts/efi_data_process.R diff --git a/scripts/efi_data_process.R b/scripts/efi_data_process.R new file mode 100644 index 00000000000..c40fc690c60 --- /dev/null +++ b/scripts/efi_data_process.R @@ -0,0 +1,210 @@ +#### need to create a graph funciton here to call with the args of start time + +efi.data.process <- function(args){ + start_date <- tryCatch(as.POSIXct(args[1]), error = function(e) {NULL} ) + if (is.null(start_date)) { + in_wid <- as.integer(args[1]) + } + dbparms = list() + dbparms$dbname = "bety" + dbparms$host = "128.197.168.114" + dbparms$user = "bety" + dbparms$password = "bety" + #Connection code copied and pasted from met.process + bety <- dplyr::src_postgres(dbname = dbparms$dbname, + host = dbparms$host, + user = dbparms$user, + password = dbparms$password) + con <- bety$con #Connection to the database. dplyr returns a list. 
+ on.exit(PEcAn.DB::db.close(con), add = TRUE) + # Identify the workflow with the proper information + if (!is.null(start_date)) { + workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE start_date='", format(start_date, "%Y-%m-%d %H:%M:%S"), + "' ORDER BY id"), con) + } else { + workflows <- PEcAn.DB::db.query(paste0("SELECT * FROM workflows WHERE id='", in_wid, "'"), con) + } + print(workflows) + + workflows <- workflows[which(workflows$site_id == args[2]),] + + SDA_check = grep("StateData", workflows$folder) + if(length(SDA_check) > 0){ workflows = workflows[-SDA_check,]} + + index = grepl(basename(args[3]), workflows$folder) + + if(length(index) == 0){ workflow <- workflows[which(workflows$folder == "" ),]} + + if(length(index) > 0){workflow = workflows[index, ]} + + + if (nrow(workflow) > 1) { + workflow <- workflow[1,] + } + + if(nrow(workflow) == 0){ + PEcAn.logger::logger.error(paste0("There are no workflows for ", start_date)) + stop() + } + + print(paste0("Using workflow ", workflow$id)) + wid <- workflow$id + outdir <- args[3] + pecan_out_dir <- paste0(outdir, "PEcAn_", wid, "/out"); + pecan_out_dirs <- list.dirs(path = pecan_out_dir) + if (is.na(pecan_out_dirs[1])) { + print(paste0(pecan_out_dirs, " does not exist.")) + } + + + #neemat <- matrix(1:64, nrow=1, ncol=64) # Proxy row, will be deleted later. + #qlemat <- matrix(1:64, nrow=1, ncol=64)# Proxy row, will be deleted later. + + neemat <- vector() + qlemat <- vector() + soilmoist <- vector() + gppmat <- vector() + time <- vector() + + num_results <- 0; + for (i in 2:length(pecan_out_dirs)) { + #datafile <- file.path(pecan_out_dirs[i], format(workflow$start_date, "%Y.nc")) + datafiles <- list.files(pecan_out_dirs[i]) + datafiles <- datafiles[grep("*.nc$", datafiles)] + + if (length(datafiles) == 0) { + print(paste0("File ", pecan_out_dirs[i], " does not exist.")) + next + } + + if(length(datafiles) == 1){ + + file = paste0(pecan_out_dirs[i],'/', datafiles[1]) + + num_results <- num_results + 1 + + #open netcdf file + ncptr <- ncdf4::nc_open(file); + + # Attach data to matricies + nee <- ncdf4::ncvar_get(ncptr, "NEE") + if(i == 2){ neemat <- nee} else{neemat <- cbind(neemat,nee)} + + qle <- ncdf4::ncvar_get(ncptr, "Qle") + if(i == 2){ qlemat <- qle} else{qlemat <- cbind(qlemat,qle)} + + soil <- ncdf4::ncvar_get(ncptr, "SoilMoistFrac") + if(i == 2){ soilmoist <- soil} else{soilmoist <- cbind(soilmoist,soil)} + + gpp <- ncdf4::ncvar_get(ncptr, "GPP") + if(i == 2){ gppmat <- gpp} else{gppmat <- cbind(gppmat,nee)} + + + sec <- ncptr$dim$time$vals + origin <- strsplit(ncptr$dim$time$units, " ")[[1]][3] + + # Close netcdf file + ncdf4::nc_close(ncptr) + } + + if(length(datafiles) > 1){ + + + file = paste0(pecan_out_dirs[i],'/', datafiles[1]) + file2 = paste0(pecan_out_dirs[i],'/', datafiles[2]) + + num_results <- num_results + 1 + + #open netcdf file + ncptr1 <- ncdf4::nc_open(file); + ncptr2 <- ncdf4::nc_open(file2); + # Attach data to matricies + nee1 <- ncdf4::ncvar_get(ncptr1, "NEE") + nee2 <- ncdf4::ncvar_get(ncptr2, "NEE") + nee <- c(nee1, nee2) + if(i == 2){ neemat <- nee} else{neemat <- cbind(neemat,nee)} + + qle1 <- ncdf4::ncvar_get(ncptr1, "Qle") + qle2 <- ncdf4::ncvar_get(ncptr2, "Qle") + qle <- c(qle1, qle2) + + if(i == 2){ qlemat <- qle} else{qlemat <- cbind(qlemat,qle)} + + soil1 <- ncdf4::ncvar_get(ncptr1, "SoilMoistFrac") + soil2 <- ncdf4::ncvar_get(ncptr2, "SoilMoistFrac") + soil <- c(soil1, soil2) + if(i == 2){ soilmoist <- soil} else{soilmoist <- cbind(soilmoist,soil)} + + + sec <- 
c(ncptr1$dim$time$vals, ncptr2$dim$time$vals+ last(ncptr1$dim$time$vals)) + origin <- strsplit(ncptr1$dim$time$units, " ")[[1]][3] + + + # Close netcdf file + ncdf4::nc_close(ncptr1) + ncdf4::nc_close(ncptr2) + + } + + } + + if (num_results == 0) { + print("No results found.") + quit("no") + } else { + print(paste0(num_results, " results found.")) + } + + # Time + time <- seq(1, length.out= length(sec)) + + + # Change to long format with ensemble numbers + + #lets get rid of col names for easier pivoting + colnames(neemat) <- paste0(rep("ens_", 100), seq(1, 100)) + needf = neemat %>% + as_tibble() %>% + mutate(date= as.Date(sec, origin = origin), + Time = round(abs(sec - floor(sec)) * 24)) %>% + pivot_longer(!c(date, Time), + names_to = "ensemble", + names_prefix = "ens_", + values_to = "nee") %>% + mutate(nee = PEcAn.utils::misc.convert(nee, "kg C m-2 s-1", "umol C m-2 s-1")) + + colnames(qlemat) <- paste0(rep("ens_", 100), seq(1, 100)) + qledf = qlemat %>% + as_tibble() %>% + mutate(date= as.Date(sec, origin = origin), + Time = round(abs(sec - floor(sec)) * 24)) %>% + pivot_longer(!c(date, Time), + names_to = "ens", + names_prefix = "ens_", + values_to = "le") + + colnames(soilmoist) <- paste0(rep("ens_", 100), seq(1, 100)) + soildf = soilmoist %>% + as_tibble() %>% + mutate(date= as.Date(sec, origin = origin), + Time = round(abs(sec - floor(sec)) * 24)) %>% + pivot_longer(!c(date, Time), + names_to = "ens", + names_prefix = "ens_", + values_to = "vswc") + + + data = needf %>% + mutate(le = qledf$le, + vswc = soildf$vswc) + + + + + +return(data) + +} + + + From 52d33058c471fc0f14cdc9485dda98b3cb828583 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Fri, 4 Jun 2021 16:26:46 +0000 Subject: [PATCH 1896/2289] Add settings arg to model2netcdf and read functions --- models/ed/R/model2netcdf.ED2.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index d14cdb0c97c..7ef6bd48c41 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -27,7 +27,7 @@ ## further modified by S. 
Serbin 09/2018 ##' model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, - end_date, pft_names = NULL) { + end_date, pft_names = NULL, settings = NULL) { start_year <- lubridate::year(start_date) end_year <- lubridate::year(end_date) @@ -861,7 +861,7 @@ put_T_values <- function(yr, nc_var, out, lat, lon, begins, ends, ...){ ##' @param yfiles the years on the filenames, will be used to matched efiles for that year ##' ##' @export -read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_names, ...){ +read_E_files <- function(settings, yr, yfiles, efiles, outdir, start_date, end_date, pft_names, ...){ PEcAn.logger::logger.info(paste0("*** Reading -E- file ***")) @@ -1017,7 +1017,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n ##' Function for put -E- values to nc_var list ##' @export -put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pft_names, ...){ +put_E_values <- function(settings, yr, nc_var, out, lat, lon, begins, ends, pft_names, ...){ s <- length(nc_var) From 2196ce0dbc32a4cf4cc05eeee0b4b1e1717a2fb6 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 7 Jun 2021 16:20:06 +0000 Subject: [PATCH 1897/2289] Fix settings arg locations --- models/ed/R/model2netcdf.ED2.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 7ef6bd48c41..ca6dd18bffd 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -110,7 +110,7 @@ model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, fcn <- match.fun(fcnx) out_list[[rflag]] <- fcn(yr = y, ylist[[rflag]], flist[[rflag]], outdir, start_date, end_date, - pft_names) + pft_names, settings) } # generate start/end dates for processing @@ -142,7 +142,7 @@ model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, fcn <- match.fun(fcnx) put_out <- fcn(yr = y, nc_var = nc_var, out = out_list[[rflag]], lat = lat, lon = lon, begins = begin_date, - ends = ends, pft_names) + ends = ends, pft_names, settings) nc_var <- put_out$nc_var out_list[[rflag]] <- put_out$out @@ -861,7 +861,7 @@ put_T_values <- function(yr, nc_var, out, lat, lon, begins, ends, ...){ ##' @param yfiles the years on the filenames, will be used to matched efiles for that year ##' ##' @export -read_E_files <- function(settings, yr, yfiles, efiles, outdir, start_date, end_date, pft_names, ...){ +read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_names, settings, ...){ PEcAn.logger::logger.info(paste0("*** Reading -E- file ***")) @@ -1017,7 +1017,7 @@ read_E_files <- function(settings, yr, yfiles, efiles, outdir, start_date, end_d ##' Function for put -E- values to nc_var list ##' @export -put_E_values <- function(settings, yr, nc_var, out, lat, lon, begins, ends, pft_names, ...){ +put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pft_names, settings, ...){ s <- length(nc_var) From c3b07884a8528bfa36f9f75624097a1958a428cc Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:26:06 -0400 Subject: [PATCH 1898/2289] Incorporated changes requested in comments of pull request --- scripts/EFI_workflow.R | 43 +++++++++++++++++++++++------------------- 1 file changed, 24 insertions(+), 19 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 29b016548c9..2d922ee17c9 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -8,10 +8,14 @@ library("R.utils") library("dynutils") ###### 
Preping Workflow for regular SIPNET Run ############## +#set home directory as object (remember to change to your own directory before running this script) +homedir <- "/projectnb/dietzelab/ahelgeso" + +#Load site.xml and outputPath (i.e. where the model outputs will be stored) into args args = list() -args$settings = "/projectnb/dietzelab/ahelgeso/Site_XMLS/los.xml" +args$settings = file.path(homedir, "Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls args$continue = TRUE -outputPath <- "/projectnb/dietzelab/ahelgeso/Site_Outputs/Lost_Creek/" +outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved if(!dir.exists(outputPath)){dir.create(outputPath, recursive = TRUE)} @@ -20,8 +24,8 @@ setwd(outputPath) # Open and read in settings file for PEcAn run. settings <- PEcAn.settings::read.settings(args$settings) -start_date <- as.Date('2021-06-01') -end_date<- as.Date('2021-06-02') +start_date <- as.Date('2021-06-02') +end_date<- as.Date('2021-06-03') # Finding the right end and start date met.start <- start_date @@ -63,8 +67,6 @@ input_check <- PEcAn.DB::dbfile.input.check( #If INPUTS already exists, add id and met path to settings file - - if(length(input_check$id) > 0){ #met paths clim_check = list() @@ -202,7 +204,7 @@ library("plotly") library("gganimate") library("thematic") thematic_on() -source('/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/efi_data_process_v2.R') +source(file.path(homedir, 'pecan/scripts/efi_data_process.R')) #remember to change to where you store your pecan folder in your directory #Load Output args site.num <- settings$run$site$id outdir <- outputPath @@ -212,31 +214,34 @@ wid <- settings$workflow$id output_args = c(as.character(wid), site.num, outdir) data = efi.data.process(output_args) + #Run SIPNET Outputs -#To figure out the size for minutes data.final = data %>% mutate(date = as.Date(date)) %>% filter(date < end_date) %>% arrange(ensemble, date) %>% - mutate(time = as.POSIXct(paste(date, Time, minutes, sep = " "), format = "%Y-%m-%d %H %M")) %>% + mutate(time = as.POSIXct(paste(date, Time, sep = " "), format = "%Y-%m-%d %H %M")) %>% mutate(siteID = site.name, forecast = 1, data_assimilation = 0, - time = lubridate::force_tz(time, tz = "UTC")) + time = lubridate::force_tz(time, tz = "UTC")) +#re-order columns and delete unnecessary columns in data.final +datacols <- c("date", "time", "siteID", "ensemble", "nee", "le", "vswc", "forecast", "data_assimilation") +data.final = data.final[datacols] ############ Plots to check out reliability of forecast ######################### -ggplot(data.final, aes(x = time, y = nee, group = ensemble)) + - geom_line(aes(x = time, y = nee, color = ensemble)) - -ggplot(data.final, aes(x = time, y = le, group = ensemble)) + - geom_line(aes(x = time, y = le, color = ensemble)) - -ggplot(data.final, aes(x = time, y = vswc, group = ensemble)) + - geom_line(aes(x = time, y = vswc, color = ensemble)) +# ggplot(data.final, aes(x = time, y = nee, group = ensemble)) + +# geom_line(aes(x = time, y = nee, color = ensemble)) +# +# ggplot(data.final, aes(x = time, y = le, group = ensemble)) + +# geom_line(aes(x = time, y = le, color = ensemble)) +# +# ggplot(data.final, aes(x = time, y = vswc, group = ensemble)) + +# geom_line(aes(x = time, y = vswc, color = ensemble)) ########### Export data.final ############### -write.csv(data.final, file = "[insert sitename].csv") +write.csv(data.final, file = paste0(site.name, "-", start_date, "-", end_date, 
".csv")) From ded8d6bf7d0cdac27e6b97c0c877606885a08416 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:29:39 -0400 Subject: [PATCH 1899/2289] Incorporated changes requested in comments of pull request --- scripts/EFI_dataprep.R | 42 +++++++++++++++++++++++------------------- 1 file changed, 23 insertions(+), 19 deletions(-) diff --git a/scripts/EFI_dataprep.R b/scripts/EFI_dataprep.R index 7dd6cfe6d48..701ed02a2c1 100644 --- a/scripts/EFI_dataprep.R +++ b/scripts/EFI_dataprep.R @@ -3,16 +3,20 @@ # EFI Forecasting Challenge # ############################################### -source('/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/half_hour_downscale.R') +#set home directory as object (remember to change to your own directory before running this script) +homedir <- "/projectnb/dietzelab/ahelgeso" + +source(file.path(homedir, 'pecan/scripts/half_hour_downscale.R')) #remember to change to where you store your pecan folder in your directory library(PEcAn.all) library(tidyverse) ########## Site Info ########### #read in .csv with site info -setwd('/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/CSV') -data_prep <- read.csv("data_prep_5_sites.csv") -sitename <- data_prep$siteid_NEON2 -siteid <- data_prep$siteid_BETY2 -base_dir <- data_prep$base_dir +setwd(file.path(homedir, 'EFI_Forecast_Scripts/CSV')) #remember to change to where you keep your dataprep .csv file with the site info +data_prep <- read.csv("dataprep_10_sites.csv") #this .csv file contains the NEON site id, BETY site id, and location where you want the met data saved. Remember to change to fit your sites and file path before running the script +data_prep <- filter(data_prep, met_download == "dataprep") +sitename <- data_prep$siteid_NEON4 +siteid <- data_prep$siteid_BETY4 +base_dir <- data_prep$base_dir4 #run info start_date = format(Sys.Date()-1, "%Y-%m-%d") @@ -114,19 +118,19 @@ for(h in 1:length(files)){ ######### Get clim id's and paths ################# -index = PEcAn.DB::dbfile.input.check( - siteid= siteid[i] %>% as.character(), - startdate = start_date %>% as.Date, - enddate = end_date %>% as.Date, - parentid = NA, - mimetype="text/csv", - formatname="Sipnet.climna", - con, - hostname = PEcAn.remote::fqdn(), - pattern = "2021", - exact.dates = TRUE, - return.all=TRUE -) +# index = PEcAn.DB::dbfile.input.check( +# siteid= siteid[i] %>% as.character(), +# startdate = start_date %>% as.Date, +# enddate = end_date %>% as.Date, +# parentid = NA, +# mimetype="text/csv", +# formatname="Sipnet.climna", +# con, +# hostname = PEcAn.remote::fqdn(), +# pattern = "2021", +# exact.dates = TRUE, +# return.all=TRUE +# ) } From 92ddcfeaf15933fdf3fd23f4e8ad25fcfc99fc15 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:31:53 -0400 Subject: [PATCH 1900/2289] Incorporated changes requested in comments of pull request --- scripts/EFI_metprocess.R | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git a/scripts/EFI_metprocess.R b/scripts/EFI_metprocess.R index f4af0438a4b..4da22e220e6 100644 --- a/scripts/EFI_metprocess.R +++ b/scripts/EFI_metprocess.R @@ -3,22 +3,26 @@ # EFI Forecasting Challenge # ############################################### -source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/met.process.R') -source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/download.raw.met.module.R') -source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/GEFS_helper_functions.R') 
-source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/download.NOAA_GEFS.R') +#set home directory as object (remember to change to your own directory before running this script) +homedir <- "/projectnb/dietzelab/ahelgeso" + +source(file.path(homedir, 'pecan/modules/data.atmosphere/R/met.process.R')) #remember to change to where you store your pecan folder in your directory +source(file.path(homedir, 'pecan/modules/data.atmosphere/R/download.raw.met.module.R')) #remember to change to where you store your pecan folder in your directory +source(file.path(homedir, 'pecan/modules/data.atmosphere/R/GEFS_helper_functions.R')) #remember to change to where you store your pecan folder in your directory +source(file.path(homedir, 'pecan/modules/data.atmosphere/R/download.NOAA_GEFS.R')) #remember to change to where you store your pecan folder in your directory library(PEcAn.all) library(tidyverse) #read in .csv with site info -setwd("/projectnb/dietzelab/ahelgeso/EFI_Forecast_Scripts/CSV/") -data_prep <- read.csv("data_prep_metprocess.csv") +setwd(file.path(homedir, "EFI_Forecast_Scripts/CSV/")) #remember to change to where you keep your dataprep .csv file with the site info +data_prep <- read.csv("dataprep_10_sites.csv") #this .csv file contains the sitename, BETY site id, location to store met files, model name, met source (from .xml), and the met output (from .xml) for each site you want to download met data +data_prep <- filter(data_prep, met_download == "metprocess") sitename <- data_prep$site_name -site_id <- data_prep$siteid_BETY3 +site_id <- data_prep$siteid_BETY4 base_dir <- data_prep$base_dir -model_name <- data_prep$model_name3 -met_source <- data_prep$input_met_source3 -met_output <- data_prep$input_met_output3 +model_name <- data_prep$model_name4 +met_source <- data_prep$input_met_source4 +met_output <- data_prep$input_met_output4 #run info start_date = as.Date(format(Sys.Date(), "%Y-%m-%d")) From a188e79084ba4e3d87bf1f14b3ac8a345a84b11f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:37:02 -0400 Subject: [PATCH 1901/2289] Use this csv for the EFI_dataprep and EFI_metprocess scripts --- base/settings/examples/dataprep_10_sites.csv | 11 +++++++++++ 1 file changed, 11 insertions(+) create mode 100644 base/settings/examples/dataprep_10_sites.csv diff --git a/base/settings/examples/dataprep_10_sites.csv b/base/settings/examples/dataprep_10_sites.csv new file mode 100644 index 00000000000..b19e6192557 --- /dev/null +++ b/base/settings/examples/dataprep_10_sites.csv @@ -0,0 +1,11 @@ +"","siteid_BETY4","site_name4","base_dir4","met_download","model_id4","model_name4","input_met_source4","input_met_output4","siteid_NEON4" +"1",646,"HARV","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","HARV" +"2",679,"LOS","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"3",1000026756,"POTATO","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"4",622,"SYV","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"5",676,"WCR","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"6",678,"WLEF","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA 
+"7",1000004924,"BART","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","BART" +"8",1000004927,"KONZ","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","KONZ" +"9",1000004916,"OSBS","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","OSBS" +"10",1000004876,"SRER","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","SRER" From 2083e5b27688bd2e9a8ee2ccab5f0db0df4fd6f0 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:48:20 -0400 Subject: [PATCH 1902/2289] Bartlett Site XML for EFI_Workflow --- base/settings/examples/bart.xml | 83 +++++++++++++++++++++++++++++++++ 1 file changed, 83 insertions(+) create mode 100644 base/settings/examples/bart.xml diff --git a/base/settings/examples/bart.xml b/base/settings/examples/bart.xml new file mode 100644 index 00000000000..bdfc9fa4378 --- /dev/null +++ b/base/settings/examples/bart.xml @@ -0,0 +1,83 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/05/05 13:20:05 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Bartlett/noaa/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.HPDA + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + + 1000004924 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Bartlett/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + From c23fd174f3ca37bdf859a5d93c98669e7bde808f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:49:48 -0400 Subject: [PATCH 1903/2289] Harvard Forest Site XML for EFI_Workflow --- base/settings/examples/harvard.xml | 92 ++++++++++++++++++++++++++++++ 1 file changed, 92 insertions(+) create mode 100755 base/settings/examples/harvard.xml diff --git a/base/settings/examples/harvard.xml b/base/settings/examples/harvard.xml new file mode 100755 index 00000000000..b6f181bf97a --- /dev/null +++ b/base/settings/examples/harvard.xml @@ -0,0 +1,92 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:42:37 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.ALL + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + + FALSE + FALSE + + 1.2 + AUTO + + + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + SIPNET + FALSE + /fs/data5/pecan.models/sipnet_unk/sipnet + + + + 646 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From d73e3086d0be0690188c4432c6dc5b2aba13f874 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:51:24 -0400 Subject: [PATCH 1904/2289] Konza Site XML for EFI_Workflow --- base/settings/examples/konz.xml | 82 +++++++++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 base/settings/examples/konz.xml diff --git a/base/settings/examples/konz.xml b/base/settings/examples/konz.xml new 
file mode 100644 index 00000000000..7820dbc3727 --- /dev/null +++ b/base/settings/examples/konz.xml @@ -0,0 +1,82 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/05/06 15:06:40 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + TRUE + + /projectnb/dietzelab/EFI_Forecast_Challenge/Konza/noaa/ + + + + semiarid.grassland_HPDA + + 1 + + 1000016525 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland + + + soil.ALL_Arid_GrassHPDA + + 1 + + 1000016524 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL + + + + 3000 + FALSE + + + 1000000030 + + + + 1000004927 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Konza/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + From 350d8bb3d421130aea75073aea8743e329624420 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:52:53 -0400 Subject: [PATCH 1905/2289] Lost Creek Site XML for EFI_Workflow --- base/settings/examples/los.xml | 79 ++++++++++++++++++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100755 base/settings/examples/los.xml diff --git a/base/settings/examples/los.xml b/base/settings/examples/los.xml new file mode 100755 index 00000000000..25bdec748de --- /dev/null +++ b/base/settings/examples/los.xml @@ -0,0 +1,79 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:44:50 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.HPDA + + 1 + + 1000016485 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + + + + 679 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From 6b70573bc235af71189d62c27306d82b404cddac Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:54:12 -0400 Subject: [PATCH 1906/2289] Ordway Site XML for EFI_Workflow --- base/settings/examples/ordway.xml | 82 +++++++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 base/settings/examples/ordway.xml diff --git a/base/settings/examples/ordway.xml b/base/settings/examples/ordway.xml new file mode 100644 index 00000000000..e24f7804763 --- /dev/null +++ b/base/settings/examples/ordway.xml @@ -0,0 +1,82 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/05/05 13:47:07 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /projectnb/dietzelab/EFI_Forecast_Challenge/Ordway/noaa/ + + + + temperate.coniferous + + 1 + + 1000016486 + /fs/data3/hamzed/Projects/HPDA/Helpers/Sites/Me4-Ameri/pft/boreal.coniferous + + + soil.HPDA + + 1 + + 1000016485 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 1000000030 + + + + 1000004916 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Ordway/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + From 50dcdad59d89f2587241d0cdd277cf9392fb9b5f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:55:29 -0400 Subject: [PATCH 1907/2289] Potato Site XML for EFI_Workflow --- base/settings/examples/potato.xml | 79 +++++++++++++++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 
base/settings/examples/potato.xml diff --git a/base/settings/examples/potato.xml b/base/settings/examples/potato.xml new file mode 100644 index 00000000000..16d8f2bfd16 --- /dev/null +++ b/base/settings/examples/potato.xml @@ -0,0 +1,79 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:21:05 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + semiarid.grassland_HPDA + + 1 + + 1000016525 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland + + + soil.ALL_Arid_GrassHPDA + + 1 + + 1000016524 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + + + + 1000026756 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From 361d7b692168b4a2da7573049e8b67506d37c107 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:56:56 -0400 Subject: [PATCH 1908/2289] Santa Rita Site XML for EFI_Workflow --- base/settings/examples/santarita.xml | 82 ++++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 base/settings/examples/santarita.xml diff --git a/base/settings/examples/santarita.xml b/base/settings/examples/santarita.xml new file mode 100644 index 00000000000..1bd31fd696e --- /dev/null +++ b/base/settings/examples/santarita.xml @@ -0,0 +1,82 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/04/20 15:32:55 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /projectnb/dietzelab/EFI_Forecast_Challenge/Santa_Rita/noaa/ + + + + semiarid.grassland_HPDA + + 1 + + 1000016525 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland + + + soil.ALL_Arid_GrassHPDA + + 1 + + 1000016524 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL + + + + 3000 + FALSE + + + 1000000030 + + + + 1000004876 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Santa_Rita/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + From d301b88b9972d2dd0cc97fd7cf153215e3b21558 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 21:59:06 -0400 Subject: [PATCH 1909/2289] Sylvannia Site XML for EFI_Workflow --- base/settings/examples/syv.xml | 80 ++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100755 base/settings/examples/syv.xml diff --git a/base/settings/examples/syv.xml b/base/settings/examples/syv.xml new file mode 100755 index 00000000000..9d2fe33d664 --- /dev/null +++ b/base/settings/examples/syv.xml @@ -0,0 +1,80 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:47:17 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.coniferous + + 1 + + 1000016486 + /fs/data3/hamzed/Projects/HPDA/Helpers/Sites/Me4-Ameri/pft/boreal.coniferous + + + soil.HPDA + + 1 + + 1000016485 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + + 622 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From 6e8834d51e130f6403902b1b467721780b7c8ab0 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 22:00:38 -0400 Subject: [PATCH 1910/2289] Willow 
Creek Site XML for EFI_Workflow --- base/settings/examples/wcr.xml | 80 ++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 base/settings/examples/wcr.xml diff --git a/base/settings/examples/wcr.xml b/base/settings/examples/wcr.xml new file mode 100644 index 00000000000..71a1675e4de --- /dev/null +++ b/base/settings/examples/wcr.xml @@ -0,0 +1,80 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:48:20 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.HPDA + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + + 676 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From 7a261161dfc961f727b1bf1673d054b468aefad1 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 7 Jun 2021 22:02:15 -0400 Subject: [PATCH 1911/2289] Park Falls Site XML for EFI_Workflow --- base/settings/examples/wlef.xml | 92 +++++++++++++++++++++++++++++++++ 1 file changed, 92 insertions(+) create mode 100755 base/settings/examples/wlef.xml diff --git a/base/settings/examples/wlef.xml b/base/settings/examples/wlef.xml new file mode 100755 index 00000000000..2792e5db7f1 --- /dev/null +++ b/base/settings/examples/wlef.xml @@ -0,0 +1,92 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:49:09 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.ALL + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + + FALSE + FALSE + + 1.2 + AUTO + + + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + SIPNET + FALSE + /fs/data5/pecan.models/SIPNET/1023/sipnet + + + + 678 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From fc06f1ef7d7ed3641ad893f60daefabb59617234 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 8 Jun 2021 17:53:41 -0400 Subject: [PATCH 1912/2289] Fixed the extra space before @export --- modules/data.atmosphere/R/download.NOAA_GEFS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index 95351dd934a..e53f6f30af7 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -36,7 +36,7 @@ ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? ##' @param downscale logical, assumed True. Indicated whether data should be downscaled to hourly ##' @param ... 
Additional optional parameters -##' @export +##' @export ##' ##' @examples ##' \dontrun{ From ce626ba8f7f8ef2712f1df3ba801359057f69fee Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 10 Jun 2021 13:14:41 -0400 Subject: [PATCH 1913/2289] Changed start and end date to args --- scripts/EFI_workflow.R | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 2d922ee17c9..37726d63231 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -15,6 +15,9 @@ homedir <- "/projectnb/dietzelab/ahelgeso" args = list() args$settings = file.path(homedir, "Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls args$continue = TRUE +args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) +args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) + outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved @@ -24,8 +27,8 @@ setwd(outputPath) # Open and read in settings file for PEcAn run. settings <- PEcAn.settings::read.settings(args$settings) -start_date <- as.Date('2021-06-02') -end_date<- as.Date('2021-06-03') +start_date <- args$start_date +end_date<- args$end_date # Finding the right end and start date met.start <- start_date From 165642828fda611556d2def6b8c6881e1ab78b66 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 14 Jun 2021 16:28:19 -0400 Subject: [PATCH 1914/2289] Adjusted date --- scripts/EFI_metprocess.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/scripts/EFI_metprocess.R b/scripts/EFI_metprocess.R index 4da22e220e6..74fc217df7e 100644 --- a/scripts/EFI_metprocess.R +++ b/scripts/EFI_metprocess.R @@ -25,8 +25,8 @@ met_source <- data_prep$input_met_source4 met_output <- data_prep$input_met_output4 #run info -start_date = as.Date(format(Sys.Date(), "%Y-%m-%d")) -end_date = as.Date(format(Sys.Date()+1, "%Y-%m-%d")) +start_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) +end_date = as.Date(format(Sys.Date(), "%Y-%m-%d")) host = list() host$name = "localhost" dbparms = list() From f47ff160306d0d7ce4683011c7cd708ef4a9772d Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 17:51:21 -0400 Subject: [PATCH 1915/2289] Info for EFI_dataprep and EFI_metprocess --- scripts/dataprep_10_sites.csv | 11 +++++++++++ 1 file changed, 11 insertions(+) create mode 100644 scripts/dataprep_10_sites.csv diff --git a/scripts/dataprep_10_sites.csv b/scripts/dataprep_10_sites.csv new file mode 100644 index 00000000000..b19e6192557 --- /dev/null +++ b/scripts/dataprep_10_sites.csv @@ -0,0 +1,11 @@ +"","siteid_BETY4","site_name4","base_dir4","met_download","model_id4","model_name4","input_met_source4","input_met_output4","siteid_NEON4" +"1",646,"HARV","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","HARV" +"2",679,"LOS","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"3",1000026756,"POTATO","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"4",622,"SYV","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"5",676,"WCR","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA +"6",678,"WLEF","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA 
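# A minimal sketch of how this table is consumed (annotation, not part of the
# committed CSV): EFI_dataprep.R and EFI_metprocess.R, patched further below,
# each read this file and split the ten sites on the met_download column.
library(dplyr)
data_prep <- read.csv("dataprep_10_sites.csv")  # path assumed: pecan/scripts/, as committed here
neon_sites <- dplyr::filter(data_prep, met_download == "dataprep")    # handled by EFI_dataprep.R
flux_sites <- dplyr::filter(data_prep, met_download == "metprocess")  # handled by EFI_metprocess.R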
+"7",1000004924,"BART","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","BART" +"8",1000004927,"KONZ","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","KONZ" +"9",1000004916,"OSBS","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","OSBS" +"10",1000004876,"SRER","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","SRER" From 251945f83ff50aec27fc96e62498e073ebdd349e Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:01:57 -0400 Subject: [PATCH 1916/2289] Site xmls for forecast --- base/settings/examples/Site_XMLS/bart.xml | 83 +++++++++++++++++ base/settings/examples/Site_XMLS/harvard.xml | 92 +++++++++++++++++++ base/settings/examples/Site_XMLS/konz.xml | 82 +++++++++++++++++ base/settings/examples/Site_XMLS/los.xml | 79 ++++++++++++++++ base/settings/examples/Site_XMLS/ordway.xml | 82 +++++++++++++++++ base/settings/examples/Site_XMLS/potato.xml | 79 ++++++++++++++++ .../settings/examples/Site_XMLS/santarita.xml | 82 +++++++++++++++++ base/settings/examples/Site_XMLS/syv.xml | 80 ++++++++++++++++ base/settings/examples/Site_XMLS/wcr.xml | 80 ++++++++++++++++ base/settings/examples/Site_XMLS/wlef.xml | 92 +++++++++++++++++++ 10 files changed, 831 insertions(+) create mode 100644 base/settings/examples/Site_XMLS/bart.xml create mode 100755 base/settings/examples/Site_XMLS/harvard.xml create mode 100644 base/settings/examples/Site_XMLS/konz.xml create mode 100755 base/settings/examples/Site_XMLS/los.xml create mode 100644 base/settings/examples/Site_XMLS/ordway.xml create mode 100644 base/settings/examples/Site_XMLS/potato.xml create mode 100644 base/settings/examples/Site_XMLS/santarita.xml create mode 100755 base/settings/examples/Site_XMLS/syv.xml create mode 100644 base/settings/examples/Site_XMLS/wcr.xml create mode 100755 base/settings/examples/Site_XMLS/wlef.xml diff --git a/base/settings/examples/Site_XMLS/bart.xml b/base/settings/examples/Site_XMLS/bart.xml new file mode 100644 index 00000000000..bdfc9fa4378 --- /dev/null +++ b/base/settings/examples/Site_XMLS/bart.xml @@ -0,0 +1,83 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/05/05 13:20:05 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Bartlett/noaa/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.HPDA + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + + 1000004924 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Bartlett/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + diff --git a/base/settings/examples/Site_XMLS/harvard.xml b/base/settings/examples/Site_XMLS/harvard.xml new file mode 100755 index 00000000000..b6f181bf97a --- /dev/null +++ b/base/settings/examples/Site_XMLS/harvard.xml @@ -0,0 +1,92 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:42:37 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + 
soil.ALL + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + + FALSE + FALSE + + 1.2 + AUTO + + + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + SIPNET + FALSE + /fs/data5/pecan.models/sipnet_unk/sipnet + + + + 646 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + diff --git a/base/settings/examples/Site_XMLS/konz.xml b/base/settings/examples/Site_XMLS/konz.xml new file mode 100644 index 00000000000..7820dbc3727 --- /dev/null +++ b/base/settings/examples/Site_XMLS/konz.xml @@ -0,0 +1,82 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/05/06 15:06:40 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + TRUE + + /projectnb/dietzelab/EFI_Forecast_Challenge/Konza/noaa/ + + + + semiarid.grassland_HPDA + + 1 + + 1000016525 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland + + + soil.ALL_Arid_GrassHPDA + + 1 + + 1000016524 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL + + + + 3000 + FALSE + + + 1000000030 + + + + 1000004927 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Konza/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + diff --git a/base/settings/examples/Site_XMLS/los.xml b/base/settings/examples/Site_XMLS/los.xml new file mode 100755 index 00000000000..25bdec748de --- /dev/null +++ b/base/settings/examples/Site_XMLS/los.xml @@ -0,0 +1,79 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:44:50 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.HPDA + + 1 + + 1000016485 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + + + + 679 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + diff --git a/base/settings/examples/Site_XMLS/ordway.xml b/base/settings/examples/Site_XMLS/ordway.xml new file mode 100644 index 00000000000..e24f7804763 --- /dev/null +++ b/base/settings/examples/Site_XMLS/ordway.xml @@ -0,0 +1,82 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/05/05 13:47:07 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /projectnb/dietzelab/EFI_Forecast_Challenge/Ordway/noaa/ + + + + temperate.coniferous + + 1 + + 1000016486 + /fs/data3/hamzed/Projects/HPDA/Helpers/Sites/Me4-Ameri/pft/boreal.coniferous + + + soil.HPDA + + 1 + + 1000016485 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 1000000030 + + + + 1000004916 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Ordway/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + diff --git a/base/settings/examples/Site_XMLS/potato.xml b/base/settings/examples/Site_XMLS/potato.xml new file mode 100644 index 00000000000..16d8f2bfd16 --- /dev/null +++ b/base/settings/examples/Site_XMLS/potato.xml @@ -0,0 +1,79 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:21:05 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + 
+ semiarid.grassland_HPDA + + 1 + + 1000016525 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland + + + soil.ALL_Arid_GrassHPDA + + 1 + + 1000016524 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + + + + 1000026756 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + diff --git a/base/settings/examples/Site_XMLS/santarita.xml b/base/settings/examples/Site_XMLS/santarita.xml new file mode 100644 index 00000000000..1bd31fd696e --- /dev/null +++ b/base/settings/examples/Site_XMLS/santarita.xml @@ -0,0 +1,82 @@ + + + + EFI Forecast + 1000012038 + ahelgeso + 2021/04/20 15:32:55 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /projectnb/dietzelab/EFI_Forecast_Challenge/Santa_Rita/noaa/ + + + + semiarid.grassland_HPDA + + 1 + + 1000016525 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland + + + soil.ALL_Arid_GrassHPDA + + 1 + + 1000016524 + /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL + + + + 3000 + FALSE + + + 1000000030 + + + + 1000004876 + + + + NOAA_GEFS + SIPNET + + + /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Santa_Rita/soil.nc + + + + + localhost + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + diff --git a/base/settings/examples/Site_XMLS/syv.xml b/base/settings/examples/Site_XMLS/syv.xml new file mode 100755 index 00000000000..9d2fe33d664 --- /dev/null +++ b/base/settings/examples/Site_XMLS/syv.xml @@ -0,0 +1,80 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:47:17 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.coniferous + + 1 + + 1000016486 + /fs/data3/hamzed/Projects/HPDA/Helpers/Sites/Me4-Ameri/pft/boreal.coniferous + + + soil.HPDA + + 1 + + 1000016485 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + + 622 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + diff --git a/base/settings/examples/Site_XMLS/wcr.xml b/base/settings/examples/Site_XMLS/wcr.xml new file mode 100644 index 00000000000..71a1675e4de --- /dev/null +++ b/base/settings/examples/Site_XMLS/wcr.xml @@ -0,0 +1,80 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:48:20 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true + + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.HPDA + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + FALSE + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + + + + 676 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + diff --git a/base/settings/examples/Site_XMLS/wlef.xml b/base/settings/examples/Site_XMLS/wlef.xml new file mode 100755 index 00000000000..2792e5db7f1 --- /dev/null +++ b/base/settings/examples/Site_XMLS/wlef.xml @@ -0,0 +1,92 @@ + + + + Daily Forecast SIPNET Site + 1000012038 + ahelgeso + 2021/05/10 20:49:09 +0000 + + + + bety + bety + psql-pecan.bu.edu + bety + PostgreSQL + true 
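<!-- Sketch, not part of this settings file: at run time PEcAn opens the
     bety block above as a database connection, roughly
       con <- PEcAn.DB::db.open(settings$database$bety)
       on.exit(PEcAn.DB::db.close(con), add = TRUE)
     db.open()/db.close() are the standard PEcAn.DB entry points; the host,
     user, and write flag are read straight from this XML. -->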
+ + /fs/data3/kzarada/pecan.data/dbfiles/ + + + + temperate.deciduous.HPDA + + 1 + + 1000022311 + /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA + + + soil.ALL + + 1 + + 1000022310 + /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA + + + + 3000 + + FALSE + FALSE + + 1.2 + AUTO + + + + + 100 + NEE + 2021 + 2021 + + + uniform + + + sampling + + + parameters + + + soil + + + + + + + 1000000030 + /fs/data3/kzarada/US_WCr/data/WillowCreek.param + SIPNET + FALSE + /fs/data5/pecan.models/SIPNET/1023/sipnet + + + + 678 + + + + NOAA_GEFS + SIPNET + + + + + localhost + + From 65547570d5f20c71294ebacaeeb9ee3d105cbf67 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:03:03 -0400 Subject: [PATCH 1917/2289] Adjusted .csv path to point to pecan folder --- scripts/EFI_dataprep.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/EFI_dataprep.R b/scripts/EFI_dataprep.R index 701ed02a2c1..17350a49861 100644 --- a/scripts/EFI_dataprep.R +++ b/scripts/EFI_dataprep.R @@ -11,7 +11,7 @@ library(PEcAn.all) library(tidyverse) ########## Site Info ########### #read in .csv with site info -setwd(file.path(homedir, 'EFI_Forecast_Scripts/CSV')) #remember to change to where you keep your dataprep .csv file with the site info +setwd(file.path(homedir, 'pecan/scripts/')) #remember to change to where you keep your dataprep .csv file with the site info data_prep <- read.csv("dataprep_10_sites.csv") #this .csv file contains the NEON site id, BETY site id, and location where you want the met data saved. Remember to change to fit your sites and file path before running the script data_prep <- filter(data_prep, met_download == "dataprep") sitename <- data_prep$siteid_NEON4 From 3aa493b6e43fec7cadd98d45643d6c08eedddf61 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:04:14 -0400 Subject: [PATCH 1918/2289] Adjusted .csv path to point to pecan folder and deleted source() --- scripts/EFI_metprocess.R | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/scripts/EFI_metprocess.R b/scripts/EFI_metprocess.R index 74fc217df7e..054d95aed7d 100644 --- a/scripts/EFI_metprocess.R +++ b/scripts/EFI_metprocess.R @@ -6,15 +6,12 @@ #set home directory as object (remember to change to your own directory before running this script) homedir <- "/projectnb/dietzelab/ahelgeso" -source(file.path(homedir, 'pecan/modules/data.atmosphere/R/met.process.R')) #remember to change to where you store your pecan folder in your directory source(file.path(homedir, 'pecan/modules/data.atmosphere/R/download.raw.met.module.R')) #remember to change to where you store your pecan folder in your directory -source(file.path(homedir, 'pecan/modules/data.atmosphere/R/GEFS_helper_functions.R')) #remember to change to where you store your pecan folder in your directory -source(file.path(homedir, 'pecan/modules/data.atmosphere/R/download.NOAA_GEFS.R')) #remember to change to where you store your pecan folder in your directory library(PEcAn.all) library(tidyverse) #read in .csv with site info -setwd(file.path(homedir, "EFI_Forecast_Scripts/CSV/")) #remember to change to where you keep your dataprep .csv file with the site info +setwd(file.path(homedir, "pecan/scripts/")) #remember to change to where you keep your dataprep .csv file with the site info data_prep <- read.csv("dataprep_10_sites.csv") #this .csv file contains the sitename, BETY site id, location to store met files, model name, met source (from .xml), and the met output (from .xml) for each site you want to 
download met data data_prep <- filter(data_prep, met_download == "metprocess") sitename <- data_prep$site_name From a6b762e93fce6597bd7d09dd46d311bb1823a658 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:06:28 -0400 Subject: [PATCH 1919/2289] changed xml path to point to pecan folder --- scripts/EFI_workflow.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 37726d63231..23df258d22a 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -13,7 +13,7 @@ homedir <- "/projectnb/dietzelab/ahelgeso" #Load site.xml and outputPath (i.e. where the model outputs will be stored) into args args = list() -args$settings = file.path(homedir, "Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls +args$settings = file.path(homedir, "/pecan/base/settings/examples/Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls args$continue = TRUE args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) From 24c22014ec9fb2bdcc3bc0a3669db54e5597e10d Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:16:40 -0400 Subject: [PATCH 1920/2289] added @export --- modules/data.atmosphere/R/GEFS_helper_functions.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index e08724c42b7..41d5aa66295 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -6,7 +6,8 @@ #' @param forecast_date date for forecast #' @param model_name_raw model name for directory creation #' @param end_hr end hr to determine how many hours to download -#' @param output_directory output directory +#' @param output_directory output directory +#' @export #' #' @return NA #' From 056db5db96dd81a65dde145ac83b198379cc24f5 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:17:32 -0400 Subject: [PATCH 1921/2289] added @export --- modules/data.atmosphere/R/met.process.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index 333b337bc60..48fad73ddb6 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -12,6 +12,7 @@ ##' @param dir directory to write outputs to ##' @param spin spin-up settings passed to model-specific met2model. List containing nyear (number of years of spin-up), nsample (first n years to cycle), and resample (TRUE/FALSE) ##' @param overwrite Whether to force met.process to proceed. +##' @export ##' ##' `overwrite` may be a list with individual components corresponding to ##' `download`, `met2cf`, `standardize`, and `met2model`. If it is instead a simple boolean, From 2ab392572974e8a2060f6f7c353b68d5f2b29e52 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:46:10 -0400 Subject: [PATCH 1922/2289] added @export --- modules/data.atmosphere/man/download.NOAA_GEFS.Rd | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index 104ddd8364c..b5c6fe86266 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -39,8 +39,7 @@ download.NOAA_GEFS( \item{overwrite}{logical. 
Download a fresh version even if a local file with the same name already exists?} -\item{...}{Additional optional parameters -@export} +\item{...}{Additional optional parameters} } \value{ A list of data frames is returned containing information about the data file that can be used to locate it later. Each From 3f769e7b4678bd7d5fba65f82c2c5eb2ec0f2fc2 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:47:19 -0400 Subject: [PATCH 1923/2289] added @export --- modules/data.atmosphere/man/met.process.Rd | 10 +--------- 1 file changed, 1 insertion(+), 9 deletions(-) diff --git a/modules/data.atmosphere/man/met.process.Rd b/modules/data.atmosphere/man/met.process.Rd index 70def82af27..79023d8b151 100644 --- a/modules/data.atmosphere/man/met.process.Rd +++ b/modules/data.atmosphere/man/met.process.Rd @@ -37,15 +37,7 @@ met.process( \item{spin}{spin-up settings passed to model-specific met2model. List containing nyear (number of years of spin-up), nsample (first n years to cycle), and resample (TRUE/FALSE)} -\item{overwrite}{Whether to force met.process to proceed. - - `overwrite` may be a list with individual components corresponding to - `download`, `met2cf`, `standardize`, and `met2model`. If it is instead a simple boolean, - the default behavior for `overwrite=FALSE` is to overwrite nothing, as you might expect. - Note however that the default behavior for `overwrite=TRUE` is to overwrite everything - *except* raw met downloads. I.e., it corresponds to: - - list(download = FALSE, met2cf = TRUE, standardize = TRUE, met2model = TRUE)} +\item{overwrite}{Whether to force met.process to proceed.} } \description{ met.process From 56cfe890c74dc2446fd947248a4affe1c07146f0 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 18:48:40 -0400 Subject: [PATCH 1924/2289] added @export --- modules/data.atmosphere/NAMESPACE | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index e2baecdb60a..a1fb97b2b87 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -77,6 +77,7 @@ export(metgapfill) export(metgapfill.NOAA_GEFS) export(model.train) export(nc.merge) +export(noaa_grid_download) export(par2ppfd) export(pecan_standard_met_table) export(permute.nc) From 7ae4ac9386f047c49aa7a41861101be8204234bd Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 19:10:01 -0400 Subject: [PATCH 1925/2289] added @export --- modules/data.atmosphere/R/GEFS_helper_functions.R | 3 +++ 1 file changed, 3 insertions(+) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 41d5aa66295..7d626a0ee08 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -190,6 +190,7 @@ noaa_grid_download <- function(lat_list, lon_list, forecast_time, forecast_date, #' @param model_name_raw Name of raw file name #' @param output_directory Output directory #' @importFrom rlang .data +#' @export #' @return List #' #' @@ -526,6 +527,7 @@ process_gridded_noaa_download <- function(lat_list, #' @param hr time step in hours of temporal downscaling (default = 1) #' @importFrom rlang .data #' @import tidyselect +#' @export #' @author Quinn Thomas #' #' @@ -646,6 +648,7 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1 ##' @param cf_units vector of variable names in order they appear in df ##' @param output_file name, with full path, of the 
netcdf file that is generated ##' @param overwrite logical to overwrite existing netcdf file +##' @export ##' @return NA ##' ##' From 18653f9f02622e87d91fb8c068426871a47d55ca Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 15 Jun 2021 20:38:20 -0400 Subject: [PATCH 1926/2289] added @export to NOAA functions --- modules/data.atmosphere/NAMESPACE | 3 +++ 1 file changed, 3 insertions(+) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index a1fb97b2b87..21649862ff3 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -82,6 +82,7 @@ export(par2ppfd) export(pecan_standard_met_table) export(permute.nc) export(predict_subdaily_met) +export(process_gridded_noaa_download) export(qair2rh) export(read.register) export(rh2qair) @@ -97,8 +98,10 @@ export(subdaily_pred) export(sw2par) export(sw2ppfd) export(temporal.downscale.functions) +export(temporal_downscale) export(upscale_met) export(wide2long) +export(write_noaa_gefs_netcdf) import(dplyr) import(tidyselect) importFrom(magrittr,"%>%") From 1b48372abd56badf0113a8f3db3d6d20f01e4d00 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 16 Jun 2021 15:19:15 -0400 Subject: [PATCH 1927/2289] deleted redundant db input check at the end of for loop --- scripts/EFI_dataprep.R | 17 ----------------- 1 file changed, 17 deletions(-) diff --git a/scripts/EFI_dataprep.R b/scripts/EFI_dataprep.R index 17350a49861..26040b29e45 100644 --- a/scripts/EFI_dataprep.R +++ b/scripts/EFI_dataprep.R @@ -115,23 +115,6 @@ for(h in 1:length(files)){ } - -######### Get clim id's and paths ################# - -# index = PEcAn.DB::dbfile.input.check( -# siteid= siteid[i] %>% as.character(), -# startdate = start_date %>% as.Date, -# enddate = end_date %>% as.Date, -# parentid = NA, -# mimetype="text/csv", -# formatname="Sipnet.climna", -# con, -# hostname = PEcAn.remote::fqdn(), -# pattern = "2021", -# exact.dates = TRUE, -# return.all=TRUE -# ) - } From e6af8c54470b9f3323aa7a7a8dc2f5d060c44421 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 16 Jun 2021 18:26:50 -0400 Subject: [PATCH 1928/2289] Added commandArgs for args --- scripts/EFI_workflow.R | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 23df258d22a..57ea8be3bb6 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -12,11 +12,13 @@ library("dynutils") homedir <- "/projectnb/dietzelab/ahelgeso" #Load site.xml and outputPath (i.e. 
where the model outputs will be stored) into args -args = list() -args$settings = file.path(homedir, "/pecan/base/settings/examples/Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls -args$continue = TRUE -args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) -args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) +# args = list() +# args$settings = file.path(homedir, "/pecan/base/settings/examples/Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls +# args$continue = TRUE +# args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) +# args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) + +args = commandArgs() outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved From 9e6cdd2cb3fef4686aaddcc1c357c5d9c4ea055f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 16 Jun 2021 18:49:10 -0400 Subject: [PATCH 1929/2289] changed back to args --- scripts/EFI_workflow.R | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 57ea8be3bb6..23df258d22a 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -12,13 +12,11 @@ library("dynutils") homedir <- "/projectnb/dietzelab/ahelgeso" #Load site.xml and outputPath (i.e. where the model outputs will be stored) into args -# args = list() -# args$settings = file.path(homedir, "/pecan/base/settings/examples/Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls -# args$continue = TRUE -# args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) -# args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) - -args = commandArgs() +args = list() +args$settings = file.path(homedir, "/pecan/base/settings/examples/Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls +args$continue = TRUE +args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) +args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved From 014c9afe2b4890159afdf2b43237e6836f5f0040 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 10:57:08 -0400 Subject: [PATCH 1930/2289] Replaced args() with temp commandArgs() --- scripts/EFI_workflow.R | 23 +++++++++++++++-------- 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 23df258d22a..4758f0b08da 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -11,16 +11,23 @@ library("dynutils") #set home directory as object (remember to change to your own directory before running this script) homedir <- "/projectnb/dietzelab/ahelgeso" -#Load site.xml and outputPath (i.e. where the model outputs will be stored) into args -args = list() -args$settings = file.path(homedir, "/pecan/base/settings/examples/Site_XMLS/harvard.xml") #remember to change to where you store the site.xmls -args$continue = TRUE -args$start_date = as.Date(format(Sys.Date()-2, "%Y-%m-%d")) -args$end_date = as.Date(format(Sys.Date()-1, "%Y-%m-%d")) +#Load site.xml, start & end date, (with commandArgs) and outputPath (i.e. 
where the model outputs will be stored) into args +tmp = commandArgs(trailingOnly = TRUE) +args$settings = tmp[1] +args$start_date = as.Date(tmp[2]) +if(length(args)>2){ + args$end_date = as.Date(tmp[3]) +} else { + args$end_date = args$start_date + 35 +} -outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved +if(length(args)>3){ + args$continue = tmp[4] +} else { + args$continue = TRUE +} - +outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved if(!dir.exists(outputPath)){dir.create(outputPath, recursive = TRUE)} setwd(outputPath) From de789170b27558be4100b8d44aedde9a6e393ae6 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 14:19:48 -0400 Subject: [PATCH 1931/2289] Added Roxygen --- .../data.atmosphere/R}/half_hour_downscale.R | 0 scripts/efi_data_process.R | 10 +++++++++- 2 files changed, 9 insertions(+), 1 deletion(-) rename {scripts => modules/data.atmosphere/R}/half_hour_downscale.R (100%) diff --git a/scripts/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R similarity index 100% rename from scripts/half_hour_downscale.R rename to modules/data.atmosphere/R/half_hour_downscale.R diff --git a/scripts/efi_data_process.R b/scripts/efi_data_process.R index c40fc690c60..63df503c26e 100644 --- a/scripts/efi_data_process.R +++ b/scripts/efi_data_process.R @@ -1,5 +1,13 @@ -#### need to create a graph funciton here to call with the args of start time +#### need to create a graph function here to call with the args of start time +#' EFI Data Process +#' +#' @param args completed forecast run settings file +#' +#' @return +#' @export +#' +#' @examples efi.data.process <- function(args){ start_date <- tryCatch(as.POSIXct(args[1]), error = function(e) {NULL} ) if (is.null(start_date)) { From 53837685ff3382f06463d6ab9dfc21fc7ee24bcc Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 14:21:38 -0400 Subject: [PATCH 1932/2289] Added Roxygen --- .../R/download.raw.met.module.R | 27 +++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index ed186d30b3d..72c6548c04f 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -1,3 +1,30 @@ +#' @name download.raw.met.module +#' @title download.raw.met.module +#' +#' @param dir directory to write outputs to +#' @param met source included in input_met +#' @param register register.xml, provided by met.process +#' @param machine machine associated with hostname, provided by met.process +#' @param start_date the start date of the data to be downloaded (will only use the year part of the date) +#' @param end_date the end date of the data to be downloaded (will only use the year part of the date) +#' @param str_ns substitute for site_id if not provided, provided by met.process +#' @param con database connection based on dbparms in met.process +#' @param input_met Which data source to process +#' @param site.id site id +#' @param lat.in site latitude, provided by met.process +#' @param lon.in site longitude, provided by met.process +#' @param host host info from settings file +#' @param site site info from settings file +#' @param username database username +#' @param overwrite whether to force download.raw.met.module to proceed +#' @param dbparms database settings from settings file +#' 
@param Ens.Flag +#' +#' @return +#' @export +#' +#' @examples + .download.raw.met.module <- function(dir, met, From b1f0498ce6daaa3ec83a1062909361457a3dcab5 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 14:22:38 -0400 Subject: [PATCH 1933/2289] Added Roxygen --- modules/data.atmosphere/R/half_hour_downscale.R | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R index d1ddbe8c2c5..50fc595ae2b 100644 --- a/modules/data.atmosphere/R/half_hour_downscale.R +++ b/modules/data.atmosphere/R/half_hour_downscale.R @@ -1,4 +1,15 @@ -source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/GEFS_helper_functions.R') +#' @name half_hour_downscale +#' @title half_hour_downscale +#' +#' @param input_file location of NOAAGEFS_1hr files +#' @param output_file location where to store half_hour files +#' @param overwrite whether to force half_hour_downscale to proceed +#' @param hr set half hour +#' +#' @return +#' @export +#' +#' @examples temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TRUE, hr = 0.5){ # open netcdf From b7ba5989bee4d55435f1d7d94cda72604677b74d Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 14:24:13 -0400 Subject: [PATCH 1934/2289] Added Roxygen --- .../man/download.raw.met.module.Rd | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 modules/data.atmosphere/man/download.raw.met.module.Rd diff --git a/modules/data.atmosphere/man/download.raw.met.module.Rd b/modules/data.atmosphere/man/download.raw.met.module.Rd new file mode 100644 index 00000000000..37556eed8ca --- /dev/null +++ b/modules/data.atmosphere/man/download.raw.met.module.Rd @@ -0,0 +1,71 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/download.raw.met.module.R +\name{download.raw.met.module} +\alias{download.raw.met.module} +\alias{.download.raw.met.module} +\title{download.raw.met.module} +\usage{ +.download.raw.met.module( + dir, + met, + register, + machine, + start_date, + end_date, + str_ns, + con, + input_met, + site.id, + lat.in, + lon.in, + host, + site, + username, + overwrite = FALSE, + dbparms, + Ens.Flag = FALSE +) +} +\arguments{ +\item{dir}{directory to write outputs to} + +\item{met}{source included in input_met} + +\item{register}{register.xml, provided by met.process} + +\item{machine}{machine associated with hostname, provided by met.process} + +\item{start_date}{the start date of the data to be downloaded (will only use the year part of the date)} + +\item{end_date}{the end date of the data to be downloaded (will only use the year part of the date)} + +\item{str_ns}{substitute for site_id if not provided, provided by met.process} + +\item{con}{database connection based on dbparms in met.process} + +\item{input_met}{Which data source to process} + +\item{site.id}{site id} + +\item{lat.in}{site latitude, provided by met.process} + +\item{lon.in}{site longitude, provided by met.process} + +\item{host}{host info from settings file} + +\item{site}{site info from settings file} + +\item{username}{database username} + +\item{overwrite}{whether to force download.raw.met.module to proceed} + +\item{dbparms}{database settings from settings file} + +\item{Ens.Flag}{} +} +\value{ + +} +\description{ +download.raw.met.module +} From 0c771b2f7075a896c7bca179feaf1e63d6fde79 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 14:52:23
-0400 Subject: [PATCH 1935/2289] Added Roxygen to functions --- modules/data.atmosphere/NAMESPACE | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 21649862ff3..e8f38ef760c 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -1,5 +1,6 @@ # Generated by roxygen2: do not edit by hand +export(.download.raw.met.module) export(.extract.nc.module) export(.met2model.module) export(AirDens) From 2b95c375925d52eaf9cf18344e53e01a6d75f849 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 14:53:59 -0400 Subject: [PATCH 1936/2289] args() is now added using commandArgs() --- scripts/EFI_workflow.R | 68 ++++++++++++++++++++++++++---------------- 1 file changed, 43 insertions(+), 25 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index 4758f0b08da..4c108682633 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -1,3 +1,6 @@ +#You must run this script in the terminal using the code: +#Rscript --vanilla EFI_workflow.R "[file path to site xml]" "[file path to output folder]" [start_date] + library("PEcAn.all") library("PEcAn.utils") library("RCurl") @@ -11,25 +14,40 @@ library("dynutils") #set home directory as object (remember to change to your own directory before running this script) homedir <- "/projectnb/dietzelab/ahelgeso" -#Load site.xml, start & end date, (with commandArgs) and outputPath (i.e. where the model outputs will be stored) into args +#Load site.xml, start & end date, (with commandArgs specify args in terminal) and outputPath (i.e. where the model outputs will be stored) into args tmp = commandArgs(trailingOnly = TRUE) +if(length(tmp)<3){ + logger.severe("Missing required arguments") +} +args = list() args$settings = tmp[1] -args$start_date = as.Date(tmp[2]) -if(length(args)>2){ - args$end_date = as.Date(tmp[3]) +if(!file.exists(args$settings)){ + logger.severe("Not a valid xml path") +} +args$outputPath = tmp[2] +if(!isAbsolutePath(args$outputPath)){ + logger.severe("Not a valid outputPath") +} +args$start_date = as.Date(tmp[3]) +if(is.na(args$start_date)){ + logger.severe("No start date provided") +} + +if(length(args)>3){ + args$end_date = as.Date(tmp[4]) } else { args$end_date = args$start_date + 35 } -if(length(args)>3){ - args$continue = tmp[4] +if(length(args)>4){ + args$continue = tmp[5] } else { args$continue = TRUE } -outputPath <- file.path(homedir, "Site_Outputs/Harvard/") #remember to change to where you want the model outputs saved -if(!dir.exists(outputPath)){dir.create(outputPath, recursive = TRUE)} -setwd(outputPath) + +if(!dir.exists(args$outputPath)){dir.create(args$outputPath, recursive = TRUE)} +setwd(args$outputPath) # Open and read in settings file for PEcAn run. settings <- PEcAn.settings::read.settings(args$settings) @@ -160,13 +178,13 @@ if (PEcAn.utils::status.check("OUTPUT") == 0) { PEcAn.utils::status.end() } -# Run ensemble analysis on model output. -if ("ensemble" %in% names(settings) - && PEcAn.utils::status.check("ENSEMBLE") == 0) { - PEcAn.utils::status.start("ENSEMBLE") - runModule.run.ensemble.analysis(settings, TRUE) - PEcAn.utils::status.end() -} +# # Run ensemble analysis on model output. 
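# Sketch, not part of this commit: the ensemble step being commented out in
# this hunk can still be run post hoc against the same settings object,
# assuming the standard PEcAn.uncertainty entry point is available:
library(PEcAn.uncertainty)
run.ensemble.analysis(settings, plot.timeseries = TRUE)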
+# if ("ensemble" %in% names(settings) +# && PEcAn.utils::status.check("ENSEMBLE") == 0) { +# PEcAn.utils::status.start("ENSEMBLE") +# runModule.run.ensemble.analysis(settings, TRUE) +# PEcAn.utils::status.end() +# } # Run state data assimilation if ("state.data.assimilation" %in% names(settings)) { @@ -226,14 +244,14 @@ output_args = c(as.character(wid), site.num, outdir) data = efi.data.process(output_args) #Run SIPNET Outputs -data.final = data %>% - mutate(date = as.Date(date)) %>% - filter(date < end_date) %>% - arrange(ensemble, date) %>% - mutate(time = as.POSIXct(paste(date, Time, sep = " "), format = "%Y-%m-%d %H %M")) %>% +data.final = data %>% + mutate(date = as.Date(date)) %>% + filter(date < end_date) %>% + arrange(ensemble, date) %>% + mutate(time = as.POSIXct(paste(date, Time, sep = " "), format = "%Y-%m-%d %H %M")) %>% mutate(siteID = site.name, - forecast = 1, - data_assimilation = 0, + forecast = 1, + data_assimilation = 0, time = lubridate::force_tz(time, tz = "UTC")) #re-order columns and delete unnecessary columns in data.final datacols <- c("date", "time", "siteID", "ensemble", "nee", "le", "vswc", "forecast", "data_assimilation") @@ -243,10 +261,10 @@ data.final = data.final[datacols] # ggplot(data.final, aes(x = time, y = nee, group = ensemble)) + # geom_line(aes(x = time, y = nee, color = ensemble)) -# +# # ggplot(data.final, aes(x = time, y = le, group = ensemble)) + # geom_line(aes(x = time, y = le, color = ensemble)) -# +# # ggplot(data.final, aes(x = time, y = vswc, group = ensemble)) + # geom_line(aes(x = time, y = vswc, color = ensemble)) From 28cbb043d70b8913786fe94bc0b6e1b0883ab16d Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:38:32 -0400 Subject: [PATCH 1937/2289] Removed source() call --- scripts/EFI_dataprep.R | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/scripts/EFI_dataprep.R b/scripts/EFI_dataprep.R index 26040b29e45..fc5808c819c 100644 --- a/scripts/EFI_dataprep.R +++ b/scripts/EFI_dataprep.R @@ -6,7 +6,6 @@ #set home directory as object (remember to change to your own directory before running this script) homedir <- "/projectnb/dietzelab/ahelgeso" -source(file.path(homedir, 'pecan/scripts/half_hour_downscale.R')) #remember to change to where you store your pecan folder in your directory library(PEcAn.all) library(tidyverse) ########## Site Info ########### @@ -113,9 +112,9 @@ for(h in 1:length(files)){ -} +} #closes files for loop -} +} #closes sitename for loop From 84181fd0d1964f5d806bbcfa36478bb721c3e07a Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:39:35 -0400 Subject: [PATCH 1938/2289] Removed source() call --- scripts/EFI_metprocess.R | 1 - 1 file changed, 1 deletion(-) diff --git a/scripts/EFI_metprocess.R b/scripts/EFI_metprocess.R index 054d95aed7d..3335b630ce2 100644 --- a/scripts/EFI_metprocess.R +++ b/scripts/EFI_metprocess.R @@ -6,7 +6,6 @@ #set home directory as object (remember to change to your own directory before running this script) homedir <- "/projectnb/dietzelab/ahelgeso" -source(file.path(homedir, 'pecan/modules/data.atmosphere/R/download.raw.met.module.R')) #remember to change to where you store your pecan folder in your directory library(PEcAn.all) library(tidyverse) From 2eac98a1728e1edb324e1f33b5ca723a3e44c0ec Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:40:35 -0400 Subject: [PATCH 1939/2289] Removed source() call --- scripts/EFI_workflow.R | 1 - 1 file changed, 1 deletion(-) diff --git a/scripts/EFI_workflow.R 
b/scripts/EFI_workflow.R index 4c108682633..2931ca37a3f 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -232,7 +232,6 @@ library("plotly") library("gganimate") library("thematic") thematic_on() -source(file.path(homedir, 'pecan/scripts/efi_data_process.R')) #remember to change to where you store your pecan folder in your directory #Load Output args site.num <- settings$run$site$id outdir <- outputPath From 0b76563808c54e264da8d2a934cc91e479313c87 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:41:55 -0400 Subject: [PATCH 1940/2289] Edited Roxygen --- modules/data.atmosphere/R/half_hour_downscale.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R index 50fc595ae2b..a851f1c6e0d 100644 --- a/modules/data.atmosphere/R/half_hour_downscale.R +++ b/modules/data.atmosphere/R/half_hour_downscale.R @@ -7,8 +7,8 @@ #' @param hr set half hour #' #' @return +#' #' @export -#' #' @examples temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TRUE, hr = 0.5){ From 859482326fda611556d2def6b8c6881e1ab78b66 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:42:57 -0400 Subject: [PATCH 1941/2289] Edited Roxygen --- .../man/half_hour_downscale.Rd | 29 +++++++++++++++++++ 1 file changed, 29 insertions(+) create mode 100644 modules/data.atmosphere/man/half_hour_downscale.Rd diff --git a/modules/data.atmosphere/man/half_hour_downscale.Rd b/modules/data.atmosphere/man/half_hour_downscale.Rd new file mode 100644 index 00000000000..bb175de3c0d --- /dev/null +++ b/modules/data.atmosphere/man/half_hour_downscale.Rd @@ -0,0 +1,29 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/half_hour_downscale.R +\name{half_hour_downscale} +\alias{half_hour_downscale} +\alias{temporal_downscale_half_hour} +\title{half_hour_downscale} +\usage{ +temporal_downscale_half_hour( + input_file, + output_file, + overwrite = TRUE, + hr = 0.5
) } \arguments{ +\item{input_file}{location of NOAAGEFS_1hr files} + +\item{output_file}{location where to store half_hour files} + +\item{overwrite}{whether to force half_hour_downscale to proceed} + +\item{hr}{set half hour} } \value{ } \description{ +half_hour_downscale } From 6764972ac1851615332d0238bf1a269cfa334f2f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:44:08 -0400 Subject: [PATCH 1942/2289] Changed location of @export in Roxygen --- modules/data.atmosphere/R/met.process.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/met.process.R b/modules/data.atmosphere/R/met.process.R index 48fad73ddb6..c14551a99b4 100644 --- a/modules/data.atmosphere/R/met.process.R +++ b/modules/data.atmosphere/R/met.process.R @@ -1,6 +1,6 @@ ##' @name met.process ##' @title met.process -##' @export +##' ##' ##' @param site Site info from settings file @@ -12,7 +12,7 @@ ##' @param dir directory to write outputs to ##' @param spin spin-up settings passed to model-specific met2model. List containing nyear (number of years of spin-up), nsample (first n years to cycle), and resample (TRUE/FALSE) ##' @param overwrite Whether to force met.process to proceed. -##' @export +##' ##' ##' `overwrite` may be a list with individual components corresponding to ##' `download`, `met2cf`, `standardize`, and `met2model`. If it is instead a simple boolean,
If it is instead a simple boolean, @@ -21,7 +21,7 @@ ##' *except* raw met downloads. I.e., it corresponds to: ##' ##' list(download = FALSE, met2cf = TRUE, standardize = TRUE, met2model = TRUE) -##' +##' @export ##' @author Elizabeth Cowdery, Michael Dietze, Ankur Desai, James Simkins, Ryan Kelly met.process <- function(site, input_met, start_date, end_date, model, host = "localhost", dbparms, dir, browndog = NULL, spin=NULL, From 4a507b025fddae66e8b5a4cb1b9ce7c7c4d59098 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:45:15 -0400 Subject: [PATCH 1943/2289] Changed location of @export in Roxygen --- modules/data.atmosphere/man/met.process.Rd | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/met.process.Rd b/modules/data.atmosphere/man/met.process.Rd index 79023d8b151..7aee712c518 100644 --- a/modules/data.atmosphere/man/met.process.Rd +++ b/modules/data.atmosphere/man/met.process.Rd @@ -37,7 +37,16 @@ met.process( \item{spin}{spin-up settings passed to model-specific met2model. List containing nyear (number of years of spin-up), nsample (first n years to cycle), and resample (TRUE/FALSE)} -\item{overwrite}{Whether to force met.process to proceed.} +\item{overwrite}{Whether to force met.process to proceed. + + + `overwrite` may be a list with individual components corresponding to + `download`, `met2cf`, `standardize`, and `met2model`. If it is instead a simple boolean, + the default behavior for `overwrite=FALSE` is to overwrite nothing, as you might expect. + Note however that the default behavior for `overwrite=TRUE` is to overwrite everything + *except* raw met downloads. I.e., it corresponds to: + + list(download = FALSE, met2cf = TRUE, standardize = TRUE, met2model = TRUE)} } \description{ met.process From fa95ad472e4f02fc9bdf3500c897cd9ecc37ac97 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:46:25 -0400 Subject: [PATCH 1944/2289] Edited Roxygen --- modules/data.atmosphere/R/GEFS_helper_functions.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index 7d626a0ee08..d14b0c4b729 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -648,10 +648,10 @@ temporal_downscale <- function(input_file, output_file, overwrite = TRUE, hr = 1 ##' @param cf_units vector of variable names in order they appear in df ##' @param output_file name, with full path, of the netcdf file that is generated ##' @param overwrite logical to overwrite existing netcdf file -##' @export +##' ##' @return NA ##' -##' +##' @export ##' @author Quinn Thomas ##' ##' From 5055eca7e988b7514e02abe8e619835ebdaba440 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 17 Jun 2021 21:47:27 -0400 Subject: [PATCH 1945/2289] Edited Roxygen --- .../man/write_noaa_gefs_netcdf.Rd | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd index dd8c10ca76e..ecb110855ff 100644 --- a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd +++ b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd @@ -1,9 +1,20 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/GEFS_helper_functions.R +% Please edit documentation in R/GEFS_helper_functions.R, +% R/half_hour_downscale.R 
From 5055eca7e988b7514e02abe8e619835ebdaba440 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Thu, 17 Jun 2021 21:47:27 -0400
Subject: [PATCH 1945/2289] Edited Roxygen

---
 .../man/write_noaa_gefs_netcdf.Rd             | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd
index dd8c10ca76e..ecb110855ff 100644
--- a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd
+++ b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd
@@ -1,9 +1,20 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/GEFS_helper_functions.R
+% Please edit documentation in R/GEFS_helper_functions.R,
+% R/half_hour_downscale.R
 \name{write_noaa_gefs_netcdf}
 \alias{write_noaa_gefs_netcdf}
 \title{Write NOAA GEFS netCDF}
 \usage{
+write_noaa_gefs_netcdf(
+  df,
+  ens = NA,
+  lat,
+  lon,
+  cf_units,
+  output_file,
+  overwrite
+)
+
 write_noaa_gefs_netcdf(
   df,
   ens = NA,
@@ -31,11 +42,17 @@ must start with time with the following columns in the order of `cf_units`}
 \item{overwrite}{logical to overwrite existing netcdf file}
 }
 \value{
+NA
+
 NA
 }
 \description{
+Write NOAA GEFS netCDF
+
 Write NOAA GEFS netCDF
 }
 \author{
+Quinn Thomas
+
 Quinn Thomas
 }

From 9166a7d9f4252084e67b6d9e2489a769bb03d055 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Thu, 17 Jun 2021 21:49:03 -0400
Subject: [PATCH 1946/2289] Edited Roxygen

---
 modules/data.atmosphere/NAMESPACE | 1 +
 1 file changed, 1 insertion(+)

diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE
index e8f38ef760c..a5cf1d22c53 100644
--- a/modules/data.atmosphere/NAMESPACE
+++ b/modules/data.atmosphere/NAMESPACE
@@ -100,6 +100,7 @@ export(sw2par)
 export(sw2ppfd)
 export(temporal.downscale.functions)
 export(temporal_downscale)
+export(temporal_downscale_half_hour)
 export(upscale_met)
 export(wide2long)
 export(write_noaa_gefs_netcdf)

From 408763ee2e879dc58397ab5c4c7093a92a535577 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Fri, 18 Jun 2021 17:11:13 -0400
Subject: [PATCH 1947/2289] Removed source()

---
 scripts/EFI_dataprep.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/EFI_dataprep.R b/scripts/EFI_dataprep.R
index fc5808c819c..112c4438b8b 100644
--- a/scripts/EFI_dataprep.R
+++ b/scripts/EFI_dataprep.R
@@ -18,7 +18,7 @@ siteid <- data_prep$siteid_BETY4
 base_dir <- data_prep$base_dir4
 
 #run info
-start_date = format(Sys.Date()-1, "%Y-%m-%d")
+start_date = format(Sys.Date()-2, "%Y-%m-%d")
 
 for(i in 1:length(sitename)){
 
From 3e07ec382d1dca622001d6c147c55f2d9a04 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Fri, 18 Jun 2021 17:12:13 -0400
Subject: [PATCH 1948/2289] Added source() back for post processing

---
 scripts/EFI_workflow.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R
index 2931ca37a3f..b0bace87b7c 100644
--- a/scripts/EFI_workflow.R
+++ b/scripts/EFI_workflow.R
@@ -232,6 +232,7 @@ library("plotly")
 library("gganimate")
 library("thematic")
 thematic_on()
+source("/projectnb/dietzelab/ahelgeso/pecan/scripts/efi_data_process.R")
 #Load Output args
 site.num <- settings$run$site$id
 outdir <- outputPath

From 248aeb04c8c37569a8cf610f5ae5e18df85e2047 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Fri, 18 Jun 2021 17:13:43 -0400
Subject: [PATCH 1949/2289] Added source() back because the script throws fewer
 errors

---
 scripts/EFI_metprocess.R | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/scripts/EFI_metprocess.R b/scripts/EFI_metprocess.R
index 3335b630ce2..7483ebddae2 100644
--- a/scripts/EFI_metprocess.R
+++ b/scripts/EFI_metprocess.R
@@ -8,6 +8,8 @@ homedir <- "/projectnb/dietzelab/ahelgeso"
 
 library(PEcAn.all)
 library(tidyverse)
+source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/download.NOAA_GEFS.R')
+source('/projectnb/dietzelab/ahelgeso/pecan/modules/data.atmosphere/R/download.raw.met.module.R')
 
 #read in .csv with site info
 setwd(file.path(homedir, "pecan/scripts/")) #remember to change to where you keep your dataprep .csv file with the site info

From 5de71592470fadb2533b66f3b5cb9c217819ddec Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Fri, 18 Jun 2021 17:14:32 -0400
Subject: [PATCH 1950/2289] removed @export

---
scripts/efi_data_process.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/efi_data_process.R b/scripts/efi_data_process.R index 63df503c26e..aed1549acf9 100644 --- a/scripts/efi_data_process.R +++ b/scripts/efi_data_process.R @@ -5,7 +5,7 @@ #' @param args completed forecast run settings file #' #' @return -#' @export +#' #' #' @examples efi.data.process <- function(args){ From 9d9e9447b53f94739aedf3614cc0930c750f87ab Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Fri, 18 Jun 2021 17:20:41 -0400 Subject: [PATCH 1951/2289] Edited Roxygen --- modules/data.atmosphere/R/download.NOAA_GEFS.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/R/download.NOAA_GEFS.R b/modules/data.atmosphere/R/download.NOAA_GEFS.R index e53f6f30af7..d3c6045f8e6 100644 --- a/modules/data.atmosphere/R/download.NOAA_GEFS.R +++ b/modules/data.atmosphere/R/download.NOAA_GEFS.R @@ -36,6 +36,7 @@ ##' @param overwrite logical. Download a fresh version even if a local file with the same name already exists? ##' @param downscale logical, assumed True. Indicated whether data should be downscaled to hourly ##' @param ... Additional optional parameters +##' ##' @export ##' ##' @examples From 8abaf5550ba2e349b402ca88b5a748c88d1bee3f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Fri, 18 Jun 2021 18:14:59 -0400 Subject: [PATCH 1952/2289] removed bart.xml --- base/settings/examples/bart.xml | 83 ------------------- .../assim.sequential/inst}/Site_XMLS/bart.xml | 0 .../inst}/Site_XMLS/harvard.xml | 0 .../assim.sequential/inst}/Site_XMLS/konz.xml | 0 .../assim.sequential/inst}/Site_XMLS/los.xml | 0 .../inst}/Site_XMLS/ordway.xml | 0 .../inst}/Site_XMLS/potato.xml | 0 .../inst}/Site_XMLS/santarita.xml | 0 .../assim.sequential/inst}/Site_XMLS/syv.xml | 0 .../assim.sequential/inst}/Site_XMLS/wcr.xml | 0 .../assim.sequential/inst}/Site_XMLS/wlef.xml | 0 11 files changed, 83 deletions(-) delete mode 100644 base/settings/examples/bart.xml rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/bart.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/harvard.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/konz.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/los.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/ordway.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/potato.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/santarita.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/syv.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/wcr.xml (100%) rename {base/settings/examples => modules/assim.sequential/inst}/Site_XMLS/wlef.xml (100%) diff --git a/base/settings/examples/bart.xml b/base/settings/examples/bart.xml deleted file mode 100644 index bdfc9fa4378..00000000000 --- a/base/settings/examples/bart.xml +++ /dev/null @@ -1,83 +0,0 @@ - - - - EFI Forecast - 1000012038 - ahelgeso - 2021/05/05 13:20:05 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Bartlett/noaa/ - - - - temperate.deciduous.HPDA - - 1 - - 1000022311 - /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA - - - soil.HPDA - - 1 - - 1000022310 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - FALSE - - 
- 1000000030 - /fs/data3/kzarada/US_WCr/data/WillowCreek.param - - - - 1000004924 - - - - NOAA_GEFS - SIPNET - - - /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Bartlett/soil.nc - - - - - localhost - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - diff --git a/base/settings/examples/Site_XMLS/bart.xml b/modules/assim.sequential/inst/Site_XMLS/bart.xml similarity index 100% rename from base/settings/examples/Site_XMLS/bart.xml rename to modules/assim.sequential/inst/Site_XMLS/bart.xml diff --git a/base/settings/examples/Site_XMLS/harvard.xml b/modules/assim.sequential/inst/Site_XMLS/harvard.xml similarity index 100% rename from base/settings/examples/Site_XMLS/harvard.xml rename to modules/assim.sequential/inst/Site_XMLS/harvard.xml diff --git a/base/settings/examples/Site_XMLS/konz.xml b/modules/assim.sequential/inst/Site_XMLS/konz.xml similarity index 100% rename from base/settings/examples/Site_XMLS/konz.xml rename to modules/assim.sequential/inst/Site_XMLS/konz.xml diff --git a/base/settings/examples/Site_XMLS/los.xml b/modules/assim.sequential/inst/Site_XMLS/los.xml similarity index 100% rename from base/settings/examples/Site_XMLS/los.xml rename to modules/assim.sequential/inst/Site_XMLS/los.xml diff --git a/base/settings/examples/Site_XMLS/ordway.xml b/modules/assim.sequential/inst/Site_XMLS/ordway.xml similarity index 100% rename from base/settings/examples/Site_XMLS/ordway.xml rename to modules/assim.sequential/inst/Site_XMLS/ordway.xml diff --git a/base/settings/examples/Site_XMLS/potato.xml b/modules/assim.sequential/inst/Site_XMLS/potato.xml similarity index 100% rename from base/settings/examples/Site_XMLS/potato.xml rename to modules/assim.sequential/inst/Site_XMLS/potato.xml diff --git a/base/settings/examples/Site_XMLS/santarita.xml b/modules/assim.sequential/inst/Site_XMLS/santarita.xml similarity index 100% rename from base/settings/examples/Site_XMLS/santarita.xml rename to modules/assim.sequential/inst/Site_XMLS/santarita.xml diff --git a/base/settings/examples/Site_XMLS/syv.xml b/modules/assim.sequential/inst/Site_XMLS/syv.xml similarity index 100% rename from base/settings/examples/Site_XMLS/syv.xml rename to modules/assim.sequential/inst/Site_XMLS/syv.xml diff --git a/base/settings/examples/Site_XMLS/wcr.xml b/modules/assim.sequential/inst/Site_XMLS/wcr.xml similarity index 100% rename from base/settings/examples/Site_XMLS/wcr.xml rename to modules/assim.sequential/inst/Site_XMLS/wcr.xml diff --git a/base/settings/examples/Site_XMLS/wlef.xml b/modules/assim.sequential/inst/Site_XMLS/wlef.xml similarity index 100% rename from base/settings/examples/Site_XMLS/wlef.xml rename to modules/assim.sequential/inst/Site_XMLS/wlef.xml From 1fbbc7168d66518aecb729f69c7c52668a09dbc1 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 21 Jun 2021 22:57:21 +0530 Subject: [PATCH 1953/2289] issue #2804 --- base/db/R/rename_jags_columns.R | 39 --------------------------------- 1 file changed, 39 deletions(-) delete mode 100644 base/db/R/rename_jags_columns.R diff --git a/base/db/R/rename_jags_columns.R b/base/db/R/rename_jags_columns.R deleted file mode 100644 index 1c1111422a4..00000000000 --- a/base/db/R/rename_jags_columns.R +++ /dev/null @@ -1,39 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. 
This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##-----------------------------------------------------------------------------# -##' renames the variables within output data frame trait.data -##' -##' @param data data frame to with variables to rename -##' -##' @seealso used with \code{\link[PEcAn.MA]{jagify}}; -##' @export -##' @author David LeBauer -rename_jags_columns <- function(data) { - - # Change variable names and calculate obs.prec within data frame - # Swap column names; needed for downstream function pecan.ma() - colnames(data)[colnames(data) %in% c("greenhouse", "ghs")] <- c("ghs", "greenhouse") - colnames(data)[colnames(data) %in% c("site_id", "site")] <- c("site", "site_id") - - transformed <- transform(data, - Y = mean, - se = stat, - obs.prec = 1 / (sqrt(n) * stat) ^2, - trt = trt_id, - cite = citation_id) - - # Subset data frame - selected <- subset(transformed, select = c('Y', 'n', 'site', 'trt', 'ghs', 'obs.prec', - 'se', 'cite', - "greenhouse", "site_id", "treatment_id", "trt_name", "trt_num")) # add original # original versions of greenhouse, site_id, treatment_id, trt_name - # Return subset data frame - return(selected) -} -##=============================================================================# From ad7d9681280ab05298c796e7bb6f5708774963df Mon Sep 17 00:00:00 2001 From: Julius Vira Date: Mon, 21 Jun 2021 12:32:39 -0500 Subject: [PATCH 1954/2289] trying to prevent crashing when exact.dates == FALSE --- base/db/R/dbfiles.R | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/base/db/R/dbfiles.R b/base/db/R/dbfiles.R index d18b264b99b..ba7dccbaf75 100644 --- a/base/db/R/dbfiles.R +++ b/base/db/R/dbfiles.R @@ -285,6 +285,15 @@ dbfile.input.check <- function(siteid, startdate = NULL, enddate = NULL, mimetyp con = con ) } + } else { # not exact dates + inputs <- db.query( + query = paste0( + "SELECT * FROM inputs WHERE site_id=", siteid, + " AND format_id=", formatid, + parent + ), + con = con + ) } if (is.null(inputs) | length(inputs$id) == 0) { From 292d4a17b355be8ef6d9fc253e911bcfe6776fc1 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 21 Jun 2021 23:14:21 +0530 Subject: [PATCH 1955/2289] moving this file to /modules/meta.analysis --- modules/meta.analysis/R/rename_jags_columns.R | 39 +++++++++++++++++++ 1 file changed, 39 insertions(+) create mode 100644 modules/meta.analysis/R/rename_jags_columns.R diff --git a/modules/meta.analysis/R/rename_jags_columns.R b/modules/meta.analysis/R/rename_jags_columns.R new file mode 100644 index 00000000000..1c1111422a4 --- /dev/null +++ b/modules/meta.analysis/R/rename_jags_columns.R @@ -0,0 +1,39 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##-----------------------------------------------------------------------------# +##' renames the variables within output data frame trait.data +##' +##' @param data data frame to with variables to rename +##' +##' @seealso used with \code{\link[PEcAn.MA]{jagify}}; +##' @export +##' @author David LeBauer +rename_jags_columns <- function(data) { + + # Change variable names and calculate obs.prec within data frame + # Swap column names; needed for downstream function pecan.ma() + colnames(data)[colnames(data) %in% c("greenhouse", "ghs")] <- c("ghs", "greenhouse") + colnames(data)[colnames(data) %in% c("site_id", "site")] <- c("site", "site_id") + + transformed <- transform(data, + Y = mean, + se = stat, + obs.prec = 1 / (sqrt(n) * stat) ^2, + trt = trt_id, + cite = citation_id) + + # Subset data frame + selected <- subset(transformed, select = c('Y', 'n', 'site', 'trt', 'ghs', 'obs.prec', + 'se', 'cite', + "greenhouse", "site_id", "treatment_id", "trt_name", "trt_num")) # add original # original versions of greenhouse, site_id, treatment_id, trt_name + # Return subset data frame + return(selected) +} +##=============================================================================# From 7de0d7868d1f84ca6e70a691c9e9caee1d36a7ea Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 21 Jun 2021 23:15:04 +0530 Subject: [PATCH 1956/2289] Delete rename_jags_columns.Rd --- base/db/man/rename_jags_columns.Rd | 20 -------------------- 1 file changed, 20 deletions(-) delete mode 100644 base/db/man/rename_jags_columns.Rd diff --git a/base/db/man/rename_jags_columns.Rd b/base/db/man/rename_jags_columns.Rd deleted file mode 100644 index 5a53963c293..00000000000 --- a/base/db/man/rename_jags_columns.Rd +++ /dev/null @@ -1,20 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/rename_jags_columns.R -\name{rename_jags_columns} -\alias{rename_jags_columns} -\title{renames the variables within output data frame trait.data} -\usage{ -rename_jags_columns(data) -} -\arguments{ -\item{data}{data frame to with variables to rename} -} -\description{ -renames the variables within output data frame trait.data -} -\seealso{ -used with \code{\link[PEcAn.MA]{jagify}}; -} -\author{ -David LeBauer -} From bc45c539ab929b8943a16af9ee5fd9a22a85617b Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 21 Jun 2021 21:08:35 +0200 Subject: [PATCH 1957/2289] update file changes action to avoid error on file rename --- .github/workflows/styler-actions.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml index 9fa9f5f171a..581dab727a4 100644 --- a/.github/workflows/styler-actions.yml +++ b/.github/workflows/styler-actions.yml @@ -9,7 +9,7 @@ jobs: runs-on: macOS-latest steps: - id: file_changes - uses: trilom/file-changes-action@v1.2.3 + uses: trilom/file-changes-action@v1.2.4 - name: list changed files run: echo '${{ steps.file_changes.outputs.files_modified }}' - uses: actions/checkout@v2 @@ -52,7 +52,7 @@ jobs: - name: update dependency lists run: Rscript scripts/generate_dependencies.R - id: file_changes - uses: 
trilom/file-changes-action@v1.2.3 + uses: trilom/file-changes-action@v1.2.4 - name : make shell: bash env: From 52c83b150614926b4d57a5a9de7d4d940060c649 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 21 Jun 2021 17:41:01 -0400 Subject: [PATCH 1958/2289] Added [end_date] to code at the top --- scripts/EFI_workflow.R | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R index b0bace87b7c..8016e91ccf2 100644 --- a/scripts/EFI_workflow.R +++ b/scripts/EFI_workflow.R @@ -1,5 +1,5 @@ #You must run this script in the terminal using the code: -#Rscript --vanilla EFI_workflow.R "[file path to site xml]" "[file path to output folder]" [start_date] +#Rscript --vanilla EFI_workflow.R "[file path to site xml]" "[file path to output folder]" [start_date] [end_date] library("PEcAn.all") library("PEcAn.utils") @@ -45,7 +45,6 @@ if(length(args)>4){ args$continue = TRUE } - if(!dir.exists(args$outputPath)){dir.create(args$outputPath, recursive = TRUE)} setwd(args$outputPath) @@ -235,7 +234,7 @@ thematic_on() source("/projectnb/dietzelab/ahelgeso/pecan/scripts/efi_data_process.R") #Load Output args site.num <- settings$run$site$id -outdir <- outputPath +outdir <- args$outputPath site.name <- settings$run$site$name wid <- settings$workflow$id From e40393c7ef3771661eff343d53fe593098fa6f1c Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 21 Jun 2021 17:50:58 -0400 Subject: [PATCH 1959/2289] Edited Roxygen fixed empy value --- modules/data.atmosphere/R/download.raw.met.module.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index 72c6548c04f..3223fc4a941 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -1,5 +1,8 @@ #' @name download.raw.met.module #' @title download.raw.met.module +#' +#' @return A list of data frames is returned containing information about the data file that can be used to locate it later. Each +#' data frame contains information about one file. #' #' @param dir directory to write outputs to #' @param met source included in input_met @@ -20,7 +23,6 @@ #' @param dbparms database settings from settings file #' @param Ens.Flag #' -#' @return #' @export #' #' @examples From 1504a001e6c3f098db1500f616755c90f1459399 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 21 Jun 2021 17:51:52 -0400 Subject: [PATCH 1960/2289] Edited Roxygen --- modules/data.atmosphere/man/download.raw.met.module.Rd | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/download.raw.met.module.Rd b/modules/data.atmosphere/man/download.raw.met.module.Rd index 37556eed8ca..eec914e1f29 100644 --- a/modules/data.atmosphere/man/download.raw.met.module.Rd +++ b/modules/data.atmosphere/man/download.raw.met.module.Rd @@ -64,7 +64,8 @@ \item{Ens.Flag}{} } \value{ - +A list of data frames is returned containing information about the data file that can be used to locate it later. Each +data frame contains information about one file. 
}
\description{
download.raw.met.module
}

From a8d2295a6a343ac8c18b2c51af26ea294fdda343 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Mon, 21 Jun 2021 17:52:55 -0400
Subject: [PATCH 1961/2289] Fixed missing value in Roxygen

---
 modules/data.atmosphere/R/half_hour_downscale.R | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R
index a851f1c6e0d..1787b407a1b 100644
--- a/modules/data.atmosphere/R/half_hour_downscale.R
+++ b/modules/data.atmosphere/R/half_hour_downscale.R
@@ -1,12 +1,15 @@
 #' @name half_hour_downscale
 #' @title half_hour_downscale
 #'
+#' @return A list of data frames is returned containing information about the data file that can be used to locate it later. Each
+#' data frame contains information about one file.
+#'
 #' @param input_file location of NOAAGEFS_1hr files
 #' @param output_file location where to store half_hour files
 #' @param overwrite whether to force half_hour_downscale to proceed
 #' @param hr set half hour
 #'
-#'
+#'
 #'
 #' @export
 #' @examples

From 47660479858f96d36de2a5b22152c1f2c4396183 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Mon, 21 Jun 2021 17:53:45 -0400
Subject: [PATCH 1962/2289] Edited Roxygen

---
 modules/data.atmosphere/man/half_hour_downscale.Rd | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/man/half_hour_downscale.Rd b/modules/data.atmosphere/man/half_hour_downscale.Rd
index bb175de3c0d..58cfcc6d76d 100644
--- a/modules/data.atmosphere/man/half_hour_downscale.Rd
+++ b/modules/data.atmosphere/man/half_hour_downscale.Rd
@@ -22,7 +22,8 @@ temporal_downscale_half_hour(
 \item{hr}{set half hour}
 }
 \value{
-
+A list of data frames is returned containing information about the data file that can be used to locate it later. Each
+data frame contains information about one file.
} \description{ half_hour_downscale From a56be917a3fc0f17f7da76f34e9dfd70bbe019ea Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 21 Jun 2021 19:32:19 -0400 Subject: [PATCH 1963/2289] Edited Roxygen --- modules/data.atmosphere/R/half_hour_downscale.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R index 1787b407a1b..5152c4315c1 100644 --- a/modules/data.atmosphere/R/half_hour_downscale.R +++ b/modules/data.atmosphere/R/half_hour_downscale.R @@ -12,6 +12,7 @@ #' #' #' @export +#' #' @examples temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TRUE, hr = 0.5){ From b6547ac3f7b71564d00c57c2cd2d907ab52994c3 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Mon, 21 Jun 2021 19:34:54 -0400 Subject: [PATCH 1964/2289] Edited Roxygen --- modules/data.atmosphere/R/download.raw.met.module.R | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index 3223fc4a941..2f6e76f0c29 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -23,6 +23,7 @@ #' @param dbparms database settings from settings file #' @param Ens.Flag #' +#' #' @export #' #' @examples From e9e3c04b5a9e3d6cb9ceaaa0f29d970151f840ba Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 12:23:19 +0530 Subject: [PATCH 1965/2289] Update Rcheck_reference.log --- base/db/tests/Rcheck_reference.log | 14 +------------- 1 file changed, 1 insertion(+), 13 deletions(-) diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log index 7365aeeb96c..10bcf3c60ca 100644 --- a/base/db/tests/Rcheck_reference.log +++ b/base/db/tests/Rcheck_reference.log @@ -53,15 +53,6 @@ match_dbcols: no visible binding for global variable ‘as’ query.priors: no visible binding for global variable ‘settings’ -rename_jags_columns: no visible binding for global variable ‘stat’ -rename_jags_columns: no visible binding for global variable ‘n’ -rename_jags_columns: no visible binding for global variable ‘trt_id’ -rename_jags_columns: no visible binding for global variable ‘site_id’ -rename_jags_columns: no visible binding for global variable - ‘citation_id’ -rename_jags_columns: no visible binding for global variable - ‘greenhouse’ - Undefined global functions or variables: . as author author_family author_given citation_id collect @@ -80,10 +71,7 @@ contains 'methods'). * checking Rd metadata ... OK * checking Rd line widths ... OK * checking Rd cross-references ... NOTE -Unknown package ‘PEcAn.MA’ in Rd xrefs -* checking for missing documentation entries ... OK -* checking for code/documentation mismatches ... OK -* checking Rd \usage sections ... 
WARNING + Undocumented arguments in documentation object 'get_workflow_ids' ‘all.ids’ From 8f58a7e88b0dc555398f52e2fa278eb516286b91 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 12:26:50 +0530 Subject: [PATCH 1966/2289] Update DESCRIPTION --- base/db/DESCRIPTION | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/base/db/DESCRIPTION b/base/db/DESCRIPTION index 7c15e77ec1a..d5eaf264342 100644 --- a/base/db/DESCRIPTION +++ b/base/db/DESCRIPTION @@ -37,7 +37,8 @@ Authors@R: c(person("David", "LeBauer", role = c("aut", "cre"), person("Ryan", "Kelly", role = c("aut")), person("Dan", "Wang", role = c("aut")), person("Carl", "Davidson", role = c("aut")), - person("Xiaohui", "Feng", role = c("aut"))) + person("Xiaohui", "Feng", role = c("aut")), + person("Shashank", "Singh", role = c("aut"))) Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. The goal of PECAn is to From b57d2b91707e54b180c7483becde834c7765a039 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 12:28:32 +0530 Subject: [PATCH 1967/2289] Update DESCRIPTION --- modules/meta.analysis/DESCRIPTION | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index d012296d43e..0c1eddddd96 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -9,7 +9,8 @@ Authors@R: c(person("Mike","Dietze"), person("Dan"," Wang"), person("Carl", "Davidson"), person("Rob","Kooper"), - person("Shawn", "Serbin")) + person("Shawn", "Serbin"), + person("Shashank", "Singh")) Author: David LeBauer, Mike Dietze, Xiaohui Feng, Dan Wang, Carl Davidson, Rob Kooper, Shawn Serbin Maintainer: David LeBauer From 6489a69073e4ca4e50e45ef2692a13cd66947b04 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Tue, 22 Jun 2021 07:08:34 +0000 Subject: [PATCH 1968/2289] automated documentation update --- base/db/NAMESPACE | 1 - base/db/man/PEcAn.DB-package.Rd | 1 + modules/meta.analysis/NAMESPACE | 1 + .../meta.analysis/man/rename_jags_columns.Rd | 20 +++++++++++++++++++ 4 files changed, 22 insertions(+), 1 deletion(-) create mode 100644 modules/meta.analysis/man/rename_jags_columns.Rd diff --git a/base/db/NAMESPACE b/base/db/NAMESPACE index c1ff7c38dc6..2aa2d3a7ad7 100644 --- a/base/db/NAMESPACE +++ b/base/db/NAMESPACE @@ -52,7 +52,6 @@ export(query.trait.data) export(query.traits) export(query_pfts) export(query_priors) -export(rename_jags_columns) export(runs) export(search_references) export(symmetric_setdiff) diff --git a/base/db/man/PEcAn.DB-package.Rd b/base/db/man/PEcAn.DB-package.Rd index cdb16e67324..c972c6ee550 100644 --- a/base/db/man/PEcAn.DB-package.Rd +++ b/base/db/man/PEcAn.DB-package.Rd @@ -28,6 +28,7 @@ Authors: \item Dan Wang \item Carl Davidson \item Xiaohui Feng + \item Shashank Singh } } diff --git a/modules/meta.analysis/NAMESPACE b/modules/meta.analysis/NAMESPACE index 1d376bbdfe7..8702ae5460e 100644 --- a/modules/meta.analysis/NAMESPACE +++ b/modules/meta.analysis/NAMESPACE @@ -5,6 +5,7 @@ export(jagify) export(p.point.in.prior) export(pecan.ma) export(pecan.ma.summary) +export(rename_jags_columns) export(run.meta.analysis) export(runModule.run.meta.analysis) export(single.MA) diff --git a/modules/meta.analysis/man/rename_jags_columns.Rd 
b/modules/meta.analysis/man/rename_jags_columns.Rd new file mode 100644 index 00000000000..5a53963c293 --- /dev/null +++ b/modules/meta.analysis/man/rename_jags_columns.Rd @@ -0,0 +1,20 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rename_jags_columns.R +\name{rename_jags_columns} +\alias{rename_jags_columns} +\title{renames the variables within output data frame trait.data} +\usage{ +rename_jags_columns(data) +} +\arguments{ +\item{data}{data frame to with variables to rename} +} +\description{ +renames the variables within output data frame trait.data +} +\seealso{ +used with \code{\link[PEcAn.MA]{jagify}}; +} +\author{ +David LeBauer +} From a35cdc5869c5ea16ab243aaa6a5f27b1925a176b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 14:35:24 +0530 Subject: [PATCH 1969/2289] Update modules/meta.analysis/DESCRIPTION Co-authored-by: Chris Black --- modules/meta.analysis/DESCRIPTION | 3 --- 1 file changed, 3 deletions(-) diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index 0c1eddddd96..18555feba7d 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -11,9 +11,6 @@ Authors@R: c(person("Mike","Dietze"), person("Rob","Kooper"), person("Shawn", "Serbin"), person("Shashank", "Singh")) -Author: David LeBauer, Mike Dietze, Xiaohui Feng, Dan Wang, Carl - Davidson, Rob Kooper, Shawn Serbin -Maintainer: David LeBauer Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. The goal of PECAn is to From 83638c77427879003736352dda406351b24b7ac9 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 14:39:58 +0530 Subject: [PATCH 1970/2289] Update Rcheck_reference.log --- base/db/tests/Rcheck_reference.log | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/base/db/tests/Rcheck_reference.log b/base/db/tests/Rcheck_reference.log index 10bcf3c60ca..bc7160b8833 100644 --- a/base/db/tests/Rcheck_reference.log +++ b/base/db/tests/Rcheck_reference.log @@ -54,6 +54,7 @@ match_dbcols: no visible binding for global variable ‘as’ query.priors: no visible binding for global variable ‘settings’ + Undefined global functions or variables: . as author author_family author_given citation_id collect container_id container_type created_at cultivar_id ensemble_id folder @@ -70,8 +71,9 @@ contains 'methods'). * checking Rd files ... OK * checking Rd metadata ... OK * checking Rd line widths ... OK -* checking Rd cross-references ... NOTE - +* checking for missing documentation entries ... OK +* checking for code/documentation mismatches ... OK +* checking Rd \usage sections ... 
WARNING Undocumented arguments in documentation object 'get_workflow_ids' ‘all.ids’ From cc657aacc4d741950962d0ee9472932b13201416 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 15:49:17 +0530 Subject: [PATCH 1971/2289] Update DESCRIPTION --- modules/meta.analysis/DESCRIPTION | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index 18555feba7d..8856a3fce07 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -4,13 +4,14 @@ Title: PEcAn functions used for meta-analysis Version: 1.7.1 Date: 2019-09-05 Authors@R: c(person("Mike","Dietze"), - person("David","LeBauer"), + person("David", "LeBauer", role = c("aut", "cre"), email = "dlebauer@email.arizona.edu"), person("Xiaohui", "Feng"), person("Dan"," Wang"), person("Carl", "Davidson"), person("Rob","Kooper"), person("Shawn", "Serbin"), person("Shashank", "Singh")) + Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. The goal of PECAn is to From 3db63c1266e5b3e55451b72c13b276d2819113c3 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 22 Jun 2021 16:26:41 +0530 Subject: [PATCH 1972/2289] Update modules/meta.analysis/DESCRIPTION Co-authored-by: Chris Black --- modules/meta.analysis/DESCRIPTION | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index 8856a3fce07..5ecf850ba77 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -11,7 +11,6 @@ Authors@R: c(person("Mike","Dietze"), person("Rob","Kooper"), person("Shawn", "Serbin"), person("Shashank", "Singh")) - Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. 
The goal of PECAn is to From b4d59d92b980ea321c9f8ff966e5a73c59483222 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 22 Jun 2021 14:34:10 -0400 Subject: [PATCH 1973/2289] Added description for Ens.Flag --- modules/data.atmosphere/R/download.raw.met.module.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index 2f6e76f0c29..d6ae2af96ec 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -21,7 +21,7 @@ #' @param username database username #' @param overwrite whether to force download.raw.met.module to proceed #' @param dbparms database settings from settings file -#' @param Ens.Flag +#' @param Ens.Flag default set to FALSE #' #' #' @export From 6040be34171b3f7f72b9fafe94848c3b57456f2f Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 22 Jun 2021 14:35:11 -0400 Subject: [PATCH 1974/2289] Added description for Ens.Flag --- modules/data.atmosphere/man/download.raw.met.module.Rd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/download.raw.met.module.Rd b/modules/data.atmosphere/man/download.raw.met.module.Rd index eec914e1f29..6ead5a0aeb1 100644 --- a/modules/data.atmosphere/man/download.raw.met.module.Rd +++ b/modules/data.atmosphere/man/download.raw.met.module.Rd @@ -61,7 +61,7 @@ \item{dbparms}{database settings from settings file} -\item{Ens.Flag}{} +\item{Ens.Flag}{default set to FALSE} } \value{ A list of data frames is returned containing information about the data file that can be used to locate it later. Each From 35b2c908bd8aab16425204040b98e501b2392a13 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 22 Jun 2021 16:07:12 -0400 Subject: [PATCH 1975/2289] Edited line 144 to include .data --- modules/data.atmosphere/R/downscaling_helper_functions.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 47962899f87..0ae4aad0ffa 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -141,8 +141,8 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ } #Clean up data frame - data.hrly <- data.hrly %>% dplyr::select("time", .data$lead_var) %>% - dplyr::arrange(time) + data.hrly <- data.hrly %>% dplyr::select(.data$time, .data$lead_var) %>% + dplyr::arrange(data.$time) names(data.hrly) <- c("time", varName) From c02c8243c666c4092da4e57935390f786d13225e Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 22 Jun 2021 18:17:44 -0400 Subject: [PATCH 1976/2289] Deleted one of the .data --- modules/data.atmosphere/R/downscaling_helper_functions.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 0ae4aad0ffa..63f37a8fab2 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -141,7 +141,7 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ } #Clean up data frame - data.hrly <- data.hrly %>% dplyr::select(.data$time, .data$lead_var) %>% + data.hrly <- data.hrly %>% dplyr::select("time", .data$lead_var) %>% dplyr::arrange(data.$time) names(data.hrly) <- c("time", varName) 
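The `.data` edits above (and the `data.$time` typo they introduce, corrected a few patches later) are all about tidy-eval hygiene: in package code, dplyr verbs should reference columns through the `.data` pronoun so R CMD check does not report undefined global variables. A standalone sketch of the intended pattern, with a toy tibble in place of real forecast data:

```r
library(dplyr)
library(rlang)  # provides the .data pronoun (via @importFrom rlang .data in package code)

data.hrly <- tibble::tibble(
  time     = as.POSIXct("2021-06-21 00:00", tz = "UTC") + 3600 * 0:5,
  lead_var = rnorm(6)
)

# Column references go through .data, so there is no bare `time` symbol
# for R CMD check to flag as an undefined global
data.hrly <- data.hrly %>%
  select("time", .data$lead_var) %>%
  arrange(.data$time)
```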
From fed7516208efcd73be1547641f6c27756bba52ba Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Thu, 24 Jun 2021 00:58:08 +0530 Subject: [PATCH 1977/2289] NEWS.md file for PEcAn.DB package --- NEWS.md | 5 +++++ 1 file changed, 5 insertions(+) create mode 100644 NEWS.md diff --git a/NEWS.md b/NEWS.md new file mode 100644 index 00000000000..51b8d3522b9 --- /dev/null +++ b/NEWS.md @@ -0,0 +1,5 @@ +# PEcAn.DB 1.7.1 + +* All changes in 1.7.1 and earlier are recorded in a single file for all of the PEcAn packages; please see https://github.com/PecanProject/pecan/blob/v1.7.1/CHANGELOG.md for details. + +* `rename_jags_columns.R`: previously a `PEcAn.DB` function ,has now been moved to `/modules/meta.analysis`. From 5e73a128580ffe10e55d40fc6e0d416e9889e2a5 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Wed, 23 Jun 2021 21:12:02 +0000 Subject: [PATCH 1978/2289] Make settings optional argument for ED2 postprocessing functions --- models/ed/R/model2netcdf.ED2.R | 35 ++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index ca6dd18bffd..ab6b19a61ec 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -27,7 +27,7 @@ ## further modified by S. Serbin 09/2018 ##' model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, - end_date, pft_names = NULL, settings = NULL) { + end_date, pfts, settings = NULL) { start_year <- lubridate::year(start_date) end_year <- lubridate::year(end_date) @@ -110,7 +110,7 @@ model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, fcn <- match.fun(fcnx) out_list[[rflag]] <- fcn(yr = y, ylist[[rflag]], flist[[rflag]], outdir, start_date, end_date, - pft_names, settings) + pfts, settings) } # generate start/end dates for processing @@ -142,7 +142,7 @@ model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, fcn <- match.fun(fcnx) put_out <- fcn(yr = y, nc_var = nc_var, out = out_list[[rflag]], lat = lat, lon = lon, begins = begin_date, - ends = ends, pft_names, settings) + ends = ends, pfts, settings) nc_var <- put_out$nc_var out_list[[rflag]] <- put_out$out @@ -861,10 +861,18 @@ put_T_values <- function(yr, nc_var, out, lat, lon, begins, ends, ...){ ##' @param yfiles the years on the filenames, will be used to matched efiles for that year ##' ##' @export -read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_names, settings, ...){ +read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, + pfts, settings = NULL, ...){ PEcAn.logger::logger.info(paste0("*** Reading -E- file ***")) + if(missing(outdir)) outdir <- settings$outdir + if(missing(start_date)) start_date <- settings$run$start.date + if(missing(end_date)) end_date <- settings$run$end.date + if(missing(pfts)) pfts <- settings$pfts + stopifnot(!is.null(outdir), !is.null(start_date), !is.null(end_date), + !is.null(pfts)) + # there are multiple -E- files per year ysel <- which(yr == yfiles) @@ -929,6 +937,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n } # end ysel-loop # for now this function does not read any ED variable that has soil as a dimension + pft_names <- pfts$pft$name soil.check <- grepl("soil", pft_names) if (any(soil.check)) { # for now keep soil out @@ -937,14 +946,14 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n npft <- length(pft_names) data(pftmapping, package = 
"PEcAn.ED2") - pfts <- numeric(npft) - names(pfts) <- pft_names + pfts_nums <- numeric(npft) + names(pfts_nums) <- pft_names # Extract the PFT names and numbers for all PFTs - xml_pft_names <- lapply(settings$pfts, "[[", "name") + xml_pft_names <- lapply(pfts, "[[", "name") for (pft in pft_names) { which_pft <- which(xml_pft_names == pft) - xml_pft <- settings$pfts[[which_pft]] + xml_pft <- pfts[[which_pft]] if ("ed2_pft_number" %in% names(xml_pft)) { pft_number <- as.numeric(xml_pft$ed2_pft_number) if (!is.finite(pft_number)) { @@ -956,7 +965,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n } else { pft_number <- pftmapping$ED[pftmapping$PEcAn == x] } - pfts[pft] <- pft_number + pfts_nums[pft] <- pft_number } out <- list() @@ -985,7 +994,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n # For MMEAN_MORT_RATE_CO, it first sums over columns representing different mortality types first, then proceeds with weighting. for (k in 1:npft) { - ind <- (pft == pfts[k]) + ind <- (pft == pfts_nums[k]) if (any(ind)) { for (varname in varnames) { @@ -1007,7 +1016,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n } - out$PFT <- pfts # will write this to the .nc file + out$PFT <- pfts_nums # will write this to the .nc file return(out) @@ -1017,12 +1026,13 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, pft_n ##' Function for put -E- values to nc_var list ##' @export -put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pft_names, settings, ...){ +put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pfts, settings, ...){ s <- length(nc_var) # even if this is a SA run for soil, currently we are not reading any variable that has a soil dimension # "soil" will be passed to read.output as pft.name from upstream, when it's not part of the attribute it will read the sum + pft_names <- pfts$pft$name soil.check <- grepl("soil", pft_names) if(any(soil.check)){ # for now keep soil out @@ -1167,6 +1177,7 @@ read_S_files <- function(sfile, outdir, pft_names, pecan_names = NULL){ # for now this function does not read any ED variable that has soil as a dimension + pft_names <- pfts$pft$name soil.check <- grepl("soil", pft_names) if (any(soil.check)) { # for now keep soil out From 4afb2974fa2ec8fe690a8dbb864111e3eafccf45 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 24 Jun 2021 13:21:13 +0300 Subject: [PATCH 1979/2289] h. 
laplace likelihood fix

---
 modules/assim.batch/R/pda.define.llik.R | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/assim.batch/R/pda.define.llik.R b/modules/assim.batch/R/pda.define.llik.R
index 9e09db1a3c8..bd267c904d4 100644
--- a/modules/assim.batch/R/pda.define.llik.R
+++ b/modules/assim.batch/R/pda.define.llik.R
@@ -98,8 +98,8 @@ pda.calc.error <-function(settings, con, model_out, run.id, inputs, bias.terms){
       # weigh down log-likelihood calculation with neff
       # if we had one beta value (no heteroscadasticity), we could've multiply n_eff*beta
       # now need to multiply every term with n_eff/n
-      SS_p <- - (inputs[[k]]$n_eff/inputs[[k]]$n) * log(beta_p) - resid[[1]][pos]/beta_p
-      SS_n <- - (inputs[[k]]$n_eff/inputs[[k]]$n) * log(beta_n) - resid[[1]][!pos]/beta_n
+      SS_p <- - log(2*beta_p) - resid[[1]][pos]/beta_p
+      SS_n <- - log(2*beta_n) - resid[[1]][!pos]/beta_n
       suppressWarnings(if(length(SS_n) == 0) SS_n <- 0)
       pda.errors[[k]] <- sum(SS_p, SS_n, na.rm = TRUE)
       SSdb[[k]] <- pda.errors[[k]]
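The change above swaps the `n_eff/n`-weighted term for the exact Laplace normalizing constant: for a residual r with scale beta, the Laplace log-density is log f(r) = -log(2*beta) - |r|/beta, applied separately to positive and negative residuals through `beta_p` and `beta_n`. A standalone numerical check of that identity (purely illustrative, not PEcAn code):

```r
# Laplace log-density: log f(r) = -log(2 * beta) - |r| / beta
laplace_loglik <- function(r, beta) {
  -log(2 * beta) - abs(r) / beta
}

# The -log(2 * beta) term is exactly what makes the density integrate to 1
beta <- 1.7
stats::integrate(function(r) exp(laplace_loglik(r, beta)), -Inf, Inf)$value
# ~1, for any positive beta
```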
From 12a81638407f819692cd9e2a01cc1a7635eef9e1 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Thu, 24 Jun 2021 16:25:50 +0530
Subject: [PATCH 1980/2289] Update NEWS.md

Co-authored-by: Chris Black <chris@ckblack.org>
---
 NEWS.md | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/NEWS.md b/NEWS.md
index 51b8d3522b9..2809e79d405 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,10 @@
-# PEcAn.DB 1.7.1
+# PEcAn.DB 1.7.1.9000
+
+## Removed
+
+* `rename_jags_columns()` has been removed from PEcAn.DB but is now available in package `PEcAn.MA` (#2805, @moki1202).
 
-* All changes in 1.7.1 and earlier are recorded in a single file for all of the PEcAn packages; please see https://github.com/PecanProject/pecan/blob/v1.7.1/CHANGELOG.md for details.
 
-* `rename_jags_columns.R`: previously a `PEcAn.DB` function ,has now been moved to `/modules/meta.analysis`.
+
+# PEcAn.DB 1.7.1
+
+* All changes in 1.7.1 and earlier were recorded in a single file for all of the PEcAn packages; please see https://github.com/PecanProject/pecan/blob/v1.7.1/CHANGELOG.md for details.

From 5d5c3bb23bc72a4880184f8b53a1a504c4184fe9 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Thu, 24 Jun 2021 16:26:45 +0530
Subject: [PATCH 1981/2289] Update NEWS.md

---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 2809e79d405..f7302e59928 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -2,7 +2,7 @@
 
 ## Removed
 
-* `rename_jags_columns()` has been removed from PEcAn.DB but is now available in package `PEcAn.MA` (#2805, @moki1202).
+* `rename_jags_columns()` has been removed from `PEcAn.DB` but is now available in package `PEcAn.MA` (#2805, @moki1202).
 
 
 # PEcAn.DB 1.7.1

From b0ea988f0e23ee6379759d42a596944215259251 Mon Sep 17 00:00:00 2001
From: istfer
Date: Thu, 24 Jun 2021 15:35:17 +0300
Subject: [PATCH 1982/2289] remove extra comment lines

---
 modules/assim.batch/R/pda.define.llik.R | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/modules/assim.batch/R/pda.define.llik.R b/modules/assim.batch/R/pda.define.llik.R
index bd267c904d4..e12014bc79b 100644
--- a/modules/assim.batch/R/pda.define.llik.R
+++ b/modules/assim.batch/R/pda.define.llik.R
@@ -95,9 +95,6 @@ pda.calc.error <-function(settings, con, model_out, run.id, inputs, bias.terms){
       # there might not be a negative slope if non-negative variable, assign zero, move on
       suppressWarnings(if(length(beta_n) == 0) beta_n <- 0)
 
-      # weigh down log-likelihood calculation with neff
-      # if we had one beta value (no heteroscadasticity), we could've multiply n_eff*beta
-      # now need to multiply every term with n_eff/n
       SS_p <- - log(2*beta_p) - resid[[1]][pos]/beta_p
       SS_n <- - log(2*beta_n) - resid[[1]][!pos]/beta_n
       suppressWarnings(if(length(SS_n) == 0) SS_n <- 0)

From 2c822c0459d43ec6b775a573bc661f46b8e0a068 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Thu, 24 Jun 2021 14:24:57 -0400
Subject: [PATCH 1983/2289] Edited Roxygen, added .data to functions, and changed function names

---
 .../data.atmosphere/R/half_hour_downscale.R   | 158 +++++-------------
 1 file changed, 46 insertions(+), 112 deletions(-)

diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R
index 5152c4315c1..d65968be899 100644
--- a/modules/data.atmosphere/R/half_hour_downscale.R
+++ b/modules/data.atmosphere/R/half_hour_downscale.R
@@ -9,8 +9,6 @@
 #' @param overwrite whether to force half_hour_downscale to proceed
 #' @param hr set half hour
 #'
-#'
-#'
 #' @export
 #'
 #' @examples
@@ -119,31 +117,29 @@ temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TR
 }
 #temporal_downscale
 
-#' @title Downscale spline to hourly
-#' @return A dataframe of downscaled state variables
-#' @param df, dataframe of data to be downscales
-#' @noRd
+#' @title Downscale spline to half hourly
+#' @param df dataframe of data to be downscaled
+#' @param VarNames variable names to be downscaled
+#' @param hr hour to downscale to- default is 0.5
+#' @return A dataframe of half hourly downscaled state variables
+#' @importFrom rlang .data
 #' @author Laura Puckett
-#'
+#' @export
 #'
 downscale_spline_to_half_hrly <- function(df,VarNames, hr = 0.5){
+  time <- NULL
   t0 = min(df$time)
   df <- df %>%
-    dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days"))
+    dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days"))
 
   interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr))
   noaa_data_interp <- tibble::tibble(time = lubridate::as_datetime(t0 + interp.df.days, tz = "UTC"))
 
   for(Var in 1:length(VarNames)){
-    curr_data <- spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y
+    curr_data <- stats::spline(x = df$days_since_t0, y = unlist(df[VarNames[Var]]), method = "fmm", xout = interp.df.days)$y
     noaa_data_interp <- cbind(noaa_data_interp, curr_data)
   }
@@ -152,26 +148,27 @@ downscale_spline_to_half_hrly <- function(df,VarNames, hr = 0.5){
   return(noaa_data_interp)
 }
 
-#' @title Downscale shortwave to hourly
+#' @title Downscale shortwave to half hourly
 #' @return A dataframe of downscaled state variables
-#'
-#' @param df, data frame of variables
-#' @param lat, lat of site
-#' @param lon, long of site
+#'
+#' @param df data frame of variables
+#' @param lat lat of site
+#' @param lon long of site
+#' @param hr hour to downscale to- default is 0.5
+#' @importFrom rlang .data
 #' @return ShortWave.ds
-#' @noRd
 #' @author Laura Puckett
-#'
+#' @export
 #'
 downscale_ShortWave_to_half_hrly <- function(df,lat, lon, hr = 0.5){
-  ## downscale shortwave to hourly
+  ## downscale shortwave to half hourly
   t0 <- min(df$time)
   df <- df %>%
     dplyr::select("time", "surface_downwelling_shortwave_flux_in_air") %>%
-    dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>%
-    dplyr::mutate(lead_var = dplyr::lead(surface_downwelling_shortwave_flux_in_air, 1))
+    dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) %>%
+    dplyr::mutate(lead_var = dplyr::lead(.data$surface_downwelling_shortwave_flux_in_air, 1))
 
   interp.df.days <- seq(min(df$days_since_t0), as.numeric(max(df$days_since_t0)), 1/(24/hr))
@@ -196,37 +193,41 @@ downscale_ShortWave_to_half_hrly <- function(df,lat, lon, hr = 0.5){
   }
 
   ShortWave.ds <- data.hrly %>%
-    dplyr::mutate(hour = lubridate::hour(time)) %>%
-    dplyr::mutate(doy = lubridate::yday(time) + hour/(24/hr))%>%
-    dplyr::mutate(rpot = downscale_solar_geom(doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry
-    dplyr::group_by(group_6hr) %>%
-    dplyr::mutate(avg.rpot = mean(rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry
+    dplyr::mutate(hour = lubridate::hour(.data$time)) %>%
+    dplyr::mutate(doy = lubridate::yday(.data$time) + .data$hour/(24/hr))%>%
+    dplyr::mutate(rpot = downscale_solar_geom_halfhour(.data$doy, as.vector(lon), as.vector(lat))) %>% # hourly sw flux calculated using solar geometry
+    dplyr::group_by(.data$group_6hr) %>%
+    dplyr::mutate(avg.rpot = mean(.data$rpot, na.rm = TRUE)) %>% # daily sw mean from solar geometry
     dplyr::ungroup() %>%
-    dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(avg.rpot > 0, rpot* (surface_downwelling_shortwave_flux_in_air/avg.rpot),0)) %>%
-    dplyr::select(time,surface_downwelling_shortwave_flux_in_air)
+    dplyr::mutate(surface_downwelling_shortwave_flux_in_air = ifelse(.data$avg.rpot > 0, .data$rpot* (.data$surface_downwelling_shortwave_flux_in_air/.data$avg.rpot),0)) %>%
+    dplyr::select(.data$time, .data$surface_downwelling_shortwave_flux_in_air)
 
   return(ShortWave.ds)
 }
 
-#' @title Downscale repeat to hourly
+#' @title Downscale repeat to half hourly
+#' @param df dataframe of data to be downscaled (Longwave)
+#' @param varName variable names to be downscaled
+#' @param hr hour to downscale to- default is 0.5
 #' @return A dataframe of downscaled data
-#' @param df, dataframe of data to be downscaled (Longwave)
-#' @noRd
+#' @importFrom rlang .data
 #' @author Laura Puckett
-#'
+#' @export
 #'
 downscale_repeat_6hr_to_half_hrly <- function(df, varName, hr = 0.5){
+  #bind variables
+  lead_var <- time <- NULL
   #Get first time point
   t0 <- min(df$time)
 
   df <- df %>%
     dplyr::select("time", all_of(varName)) %>%
     #Calculate time difference
-    dplyr::mutate(days_since_t0 = difftime(.$time, t0, units = "days")) %>%
+    dplyr::mutate(days_since_t0 = difftime(.data$time, t0, units = "days")) %>%
    #Shift valued back because the 6hr value represents the average over the
    #previous 6hr period
dplyr::mutate(lead_var = dplyr::lead(df[,varName], 1)) @@ -253,8 +254,8 @@ downscale_repeat_6hr_to_half_hrly <- function(df, varName, hr = 0.5){ } #Clean up data frame - data.hrly <- data.hrly %>% dplyr::select("time", lead_var) %>% - dplyr::arrange(time) + data.hrly <- data.hrly %>% dplyr::select("time", .data$lead_var) %>% + dplyr::arrange(.data$time) names(data.hrly) <- c("time", varName) @@ -262,90 +263,23 @@ downscale_repeat_6hr_to_half_hrly <- function(df, varName, hr = 0.5){ } #' @title Calculate potential shortwave radiation -#' @return vector of potential shortwave radiation for each doy #' #' @param doy, day of year in decimal #' @param lon, longitude #' @param lat, latitude -#' @return `numeric(1)` +#' @return vector of potential shortwave radiation for each doy +#' #' @author Quinn Thomas -#' @noRd +#' @export #' #' -downscale_solar_geom <- function(doy, lon, lat) { +downscale_solar_geom_halfhour_halfhour <- function(doy, lon, lat) { - dt <- median(diff(doy)) * 86400 # average number of seconds in time interval - hr <- (doy - floor(doy)) * 48# hour of day for each element of doy + dt <- stats::median(diff(doy)) * 86400 # average number of seconds in time interval + hr <- (doy - floor(doy)) * 48 # hour of day for each element of doy ## calculate potential radiation cosz <- cos_solar_zenith_angle(doy, lat, lon, dt, hr) rpot <- 1366 * cosz return(rpot) } - -##' @title Write NOAA GEFS netCDF -##' @param df data frame of meterological variables to be written to netcdf. Columns -##' must start with time with the following columns in the order of `cf_units` -##' @param ens ensemble index used for subsetting df -##' @param lat latitude in degree north -##' @param lon longitude in degree east -##' @param cf_units vector of variable names in order they appear in df -##' @param output_file name, with full path, of the netcdf file that is generated -##' @param overwrite logical to overwrite existing netcdf file -##' @return NA -##' -##' @export -##' -##' @author Quinn Thomas -##' -##' - -write_noaa_gefs_netcdf <- function(df, ens = NA, lat, lon, cf_units, output_file, overwrite){ - - if(!is.na(ens)){ - data <- df - max_index <- max(which(!is.na(data$air_temperature))) - start_time <- min(data$time) - end_time <- data$time[max_index] - - data <- data %>% dplyr::select(-c("time", "NOAA.member")) - }else{ - data <- df - max_index <- max(which(!is.na(data$air_temperature))) - start_time <- min(data$time) - end_time <- data$time[max_index] - - data <- df %>% - dplyr::select(-c("time")) - } - - diff_time <- as.numeric(difftime(df$time, df$time[1])) / (60 * 60) - - cf_var_names <- names(data) - - time_dim <- ncdf4::ncdim_def(name="time", - units = paste("hours since", format(start_time, "%Y-%m-%d %H:%M")), - diff_time, #GEFS forecast starts 6 hours from start time - create_dimvar = TRUE) - lat_dim <- ncdf4::ncdim_def("latitude", "degree_north", lat, create_dimvar = TRUE) - lon_dim <- ncdf4::ncdim_def("longitude", "degree_east", lon, create_dimvar = TRUE) - - dimensions_list <- list(time_dim, lat_dim, lon_dim) - - nc_var_list <- list() - for (i in 1:length(cf_var_names)) { #Each ensemble member will have data on each variable stored in their respective file. - nc_var_list[[i]] <- ncdf4::ncvar_def(cf_var_names[i], cf_units[i], dimensions_list, missval=NaN) - } - - if (!file.exists(output_file) | overwrite) { - nc_flptr <- ncdf4::nc_create(output_file, nc_var_list, verbose = FALSE) - - #For each variable associated with that ensemble - for (j in 1:ncol(data)) { - # "j" is the variable number. 
"i" is the ensemble number. Remember that each row represents an ensemble - ncdf4::ncvar_put(nc_flptr, nc_var_list[[j]], unlist(data[,j])) - } - - ncdf4::nc_close(nc_flptr) #Write to the disk/storage - } -} \ No newline at end of file From cf5f28ae3a81ef6762ad665cd23f439c3f71f41e Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 14:26:14 -0400 Subject: [PATCH 1984/2289] Add .data to functions --- modules/data.atmosphere/R/downscaling_helper_functions.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/downscaling_helper_functions.R b/modules/data.atmosphere/R/downscaling_helper_functions.R index 63f37a8fab2..09c7f1ab83f 100644 --- a/modules/data.atmosphere/R/downscaling_helper_functions.R +++ b/modules/data.atmosphere/R/downscaling_helper_functions.R @@ -142,7 +142,7 @@ downscale_repeat_6hr_to_hrly <- function(df, varName, hr = 1){ #Clean up data frame data.hrly <- data.hrly %>% dplyr::select("time", .data$lead_var) %>% - dplyr::arrange(data.$time) + dplyr::arrange(.data$time) names(data.hrly) <- c("time", varName) From b10d672517377dddc6143c8d747cc847d8891781 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 14:31:38 -0400 Subject: [PATCH 1985/2289] Edited Roxygen --- .../man/downscale_spline_to_half_hrly.Rd | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) create mode 100644 modules/data.atmosphere/man/downscale_spline_to_half_hrly.Rd diff --git a/modules/data.atmosphere/man/downscale_spline_to_half_hrly.Rd b/modules/data.atmosphere/man/downscale_spline_to_half_hrly.Rd new file mode 100644 index 00000000000..f10c7ac7cbe --- /dev/null +++ b/modules/data.atmosphere/man/downscale_spline_to_half_hrly.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/half_hour_downscale.R +\name{downscale_spline_to_half_hrly} +\alias{downscale_spline_to_half_hrly} +\title{Downscale spline to half hourly} +\usage{ +downscale_spline_to_half_hrly(df, VarNames, hr = 0.5) +} +\arguments{ +\item{df}{dataframe of data to be downscales} + +\item{VarNames}{variable names to be downscaled} + +\item{hr}{hour to downscale to- default is 0.5} +} +\value{ +A dataframe of half hourly downscaled state variables +} +\description{ +Downscale spline to half hourly +} +\author{ +Laura Puckett +} From 6d29cfd5c7ffdc7e1148ab8ea9e4dbfdde4737ef Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 14:33:44 -0400 Subject: [PATCH 1986/2289] Edited Roxygen --- .../man/downscale_ShortWave_to_half_hrly.Rd | 28 +++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 modules/data.atmosphere/man/downscale_ShortWave_to_half_hrly.Rd diff --git a/modules/data.atmosphere/man/downscale_ShortWave_to_half_hrly.Rd b/modules/data.atmosphere/man/downscale_ShortWave_to_half_hrly.Rd new file mode 100644 index 00000000000..55b8e6de323 --- /dev/null +++ b/modules/data.atmosphere/man/downscale_ShortWave_to_half_hrly.Rd @@ -0,0 +1,28 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/half_hour_downscale.R +\name{downscale_ShortWave_to_half_hrly} +\alias{downscale_ShortWave_to_half_hrly} +\title{Downscale shortwave to half hourly} +\usage{ +downscale_ShortWave_to_half_hrly(df, lat, lon, hr = 0.5) +} +\arguments{ +\item{df}{data frame of variables} + +\item{lat}{lat of site} + +\item{lon}{long of site} + +\item{hr}{hour to downscale to- default is 1} +} +\value{ +A dataframe of downscaled state variables + +ShortWave.ds +} +\description{ +Downscale shortwave 
to half hourly +} +\author{ +Laura Puckett +} From e350f93ab4d6422be02d66a82cc96a5f274e5ffc Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 14:35:35 -0400 Subject: [PATCH 1987/2289] Edited Roxygen --- .../man/downscale_repeat_6hr_to_half_hrly.Rd | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) create mode 100644 modules/data.atmosphere/man/downscale_repeat_6hr_to_half_hrly.Rd diff --git a/modules/data.atmosphere/man/downscale_repeat_6hr_to_half_hrly.Rd b/modules/data.atmosphere/man/downscale_repeat_6hr_to_half_hrly.Rd new file mode 100644 index 00000000000..b391679862f --- /dev/null +++ b/modules/data.atmosphere/man/downscale_repeat_6hr_to_half_hrly.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/half_hour_downscale.R +\name{downscale_repeat_6hr_to_half_hrly} +\alias{downscale_repeat_6hr_to_half_hrly} +\title{Downscale repeat to half hourly} +\usage{ +downscale_repeat_6hr_to_half_hrly(df, varName, hr = 0.5) +} +\arguments{ +\item{df}{dataframe of data to be downscaled (Longwave)} + +\item{varName}{variable names to be downscaled} + +\item{hr}{hour to downscale to- default is 0.5} +} +\value{ +A dataframe of downscaled data +} +\description{ +Downscale repeat to half hourly +} +\author{ +Laura Puckett +} From b5744e0c02f4a49a86cff0578da57fa254b8ab06 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 14:37:38 -0400 Subject: [PATCH 1988/2289] Edited function name --- modules/data.atmosphere/R/half_hour_downscale.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R index d65968be899..facefc50bc6 100644 --- a/modules/data.atmosphere/R/half_hour_downscale.R +++ b/modules/data.atmosphere/R/half_hour_downscale.R @@ -273,7 +273,7 @@ downscale_repeat_6hr_to_half_hrly <- function(df, varName, hr = 0.5){ #' @export #' #' -downscale_solar_geom_halfhour_halfhour <- function(doy, lon, lat) { +downscale_solar_geom_halfhour <- function(doy, lon, lat) { dt <- stats::median(diff(doy)) * 86400 # average number of seconds in time interval hr <- (doy - floor(doy)) * 48 # hour of day for each element of doy From 2ad3b3d93039a2f49bce41ec97a72f560a674684 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 14:40:12 -0400 Subject: [PATCH 1989/2289] Edited function name and Roxygen --- .../man/downscale_solar_geom_halfhour.Rd | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) create mode 100644 modules/data.atmosphere/man/downscale_solar_geom_halfhour.Rd diff --git a/modules/data.atmosphere/man/downscale_solar_geom_halfhour.Rd b/modules/data.atmosphere/man/downscale_solar_geom_halfhour.Rd new file mode 100644 index 00000000000..bb190f90ac3 --- /dev/null +++ b/modules/data.atmosphere/man/downscale_solar_geom_halfhour.Rd @@ -0,0 +1,24 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/half_hour_downscale.R +\name{downscale_solar_geom_halfhour} +\alias{downscale_solar_geom_halfhour} +\title{Calculate potential shortwave radiation} +\usage{ +downscale_solar_geom_halfhour(doy, lon, lat) +} +\arguments{ +\item{doy, }{day of year in decimal} + +\item{lon, }{longitude} + +\item{lat, }{latitude} +} +\value{ +vector of potential shortwave radiation for each doy +} +\description{ +Calculate potential shortwave radiation +} +\author{ +Quinn Thomas +} From 00943c8b94c20f9a3876c9d7df5834062a90a908 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 
2021 15:31:36 -0400 Subject: [PATCH 1990/2289] Edited Roxygen --- .../man/write_noaa_gefs_netcdf.Rd | 19 +------------------ 1 file changed, 1 insertion(+), 18 deletions(-) diff --git a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd index ecb110855ff..dd8c10ca76e 100644 --- a/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd +++ b/modules/data.atmosphere/man/write_noaa_gefs_netcdf.Rd @@ -1,20 +1,9 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/GEFS_helper_functions.R, -% R/half_hour_downscale.R +% Please edit documentation in R/GEFS_helper_functions.R \name{write_noaa_gefs_netcdf} \alias{write_noaa_gefs_netcdf} \title{Write NOAA GEFS netCDF} \usage{ -write_noaa_gefs_netcdf( - df, - ens = NA, - lat, - lon, - cf_units, - output_file, - overwrite -) - write_noaa_gefs_netcdf( df, ens = NA, @@ -42,17 +31,11 @@ must start with time with the following columns in the order of `cf_units`} \item{overwrite}{logical to overwrite existing netcdf file} } \value{ -NA - NA } \description{ -Write NOAA GEFS netCDF - Write NOAA GEFS netCDF } \author{ -Quinn Thomas - Quinn Thomas } From 634bbb1967d509329abe5fa6182e486692143ee5 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 15:32:24 -0400 Subject: [PATCH 1991/2289] Edited Roxygen --- modules/data.atmosphere/NAMESPACE | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index a5cf1d22c53..2494fbf6fc3 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -38,9 +38,13 @@ export(download.PalEON) export(download.PalEON_ENS) export(download.US_WCr) export(download.US_Wlef) +export(downscale_ShortWave_to_half_hrly) export(downscale_ShortWave_to_hrly) +export(downscale_repeat_6hr_to_half_hrly) export(downscale_repeat_6hr_to_hrly) export(downscale_solar_geom) +export(downscale_solar_geom_halfhour) +export(downscale_spline_to_half_hrly) export(downscale_spline_to_hrly) export(equation_of_time) export(exner) From 5c7c09fc00d4a717d931347182fa64db53e7277b Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 18:12:41 -0400 Subject: [PATCH 1992/2289] Changed temporal downscale to temporal downscale half hour --- modules/data.atmosphere/R/GEFS_helper_functions.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/GEFS_helper_functions.R b/modules/data.atmosphere/R/GEFS_helper_functions.R index d14b0c4b729..7cfc4441f77 100644 --- a/modules/data.atmosphere/R/GEFS_helper_functions.R +++ b/modules/data.atmosphere/R/GEFS_helper_functions.R @@ -508,7 +508,7 @@ process_gridded_noaa_download <- function(lat_list, results_list[[ens]] <- results #Run downscaling - temporal_downscale(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1) + temporal_downscale_half_hour(input_file = output_file, output_file = output_file_ds, overwrite = TRUE, hr = 1) } From 16fc80051b96792948c434391e73838cafb84139 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Thu, 24 Jun 2021 18:48:31 -0400 Subject: [PATCH 1993/2289] Added .data --- modules/data.atmosphere/R/half_hour_downscale.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R index facefc50bc6..d3d46b50b79 100644 --- a/modules/data.atmosphere/R/half_hour_downscale.R +++ b/modules/data.atmosphere/R/half_hour_downscale.R 
@@ -64,8 +64,8 @@ temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TR
     dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity,
                                               temp = forecast_noaa_ds$air_temperature,
                                               press = forecast_noaa_ds$air_pressure)) %>%
-    dplyr::mutate(relative_humidity = relative_humidity,
-                  relative_humidity = ifelse(relative_humidity > 1, 0, relative_humidity))
+    dplyr::mutate(.data$relative_humidity = .data$relative_humidity,
+                  relative_humidity = ifelse(.data$relative_humidity > 1, 0, .data$relative_humidity))
 
   # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period)
   if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){

From 2c658d46a9413eea59700828aa3578b9236d2320 Mon Sep 17 00:00:00 2001
From: KristinaRiemer
Date: Fri, 25 Jun 2021 16:34:49 +0000
Subject: [PATCH 1994/2289] Fix pftmapping subset error in read_E_files

---
 models/ed/R/model2netcdf.ED2.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R
index ab6b19a61ec..7918e66308e 100644
--- a/models/ed/R/model2netcdf.ED2.R
+++ b/models/ed/R/model2netcdf.ED2.R
@@ -963,7 +963,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date,
         )
       }
     } else {
-      pft_number <- pftmapping$ED[pftmapping$PEcAn == x]
+      pft_number <- pftmapping$ED[pftmapping$PEcAn == xml_pft$name]
     }
     pfts_nums[pft] <- pft_number
   }

From 06f732cfd8a35fca2b0608b77decca362693fbf7 Mon Sep 17 00:00:00 2001
From: KristinaRiemer
Date: Fri, 25 Jun 2021 16:35:38 +0000
Subject: [PATCH 1995/2289] Add initial read_E_files input tests

---
 models/ed/tests/testthat.R                   |   1 +
 .../data/analysis-E-2004-04-00-000000-g01.h5 | Bin 0 -> 334092 bytes
 models/ed/tests/testthat/test.read_E_files.R |  53 ++++++++++++++++++
 3 files changed, 54 insertions(+)
 create mode 100644 models/ed/tests/testthat/data/analysis-E-2004-04-00-000000-g01.h5
 create mode 100644 models/ed/tests/testthat/test.read_E_files.R

diff --git a/models/ed/tests/testthat.R b/models/ed/tests/testthat.R
index 5f913145300..241e0a79814 100644
--- a/models/ed/tests/testthat.R
+++ b/models/ed/tests/testthat.R
@@ -8,6 +8,7 @@
 #-------------------------------------------------------------------------------
 library(testthat)
 library(PEcAn.utils)
+library(PEcAn.ED2)
 
 PEcAn.logger::logger.setQuitOnSevere(FALSE)
 test_check("PEcAn.ED2")

diff --git a/models/ed/tests/testthat/data/analysis-E-2004-04-00-000000-g01.h5 b/models/ed/tests/testthat/data/analysis-E-2004-04-00-000000-g01.h5
new file mode 100644
index 0000000000000000000000000000000000000000..0347281fe81da0ceafcaa713bd212a64a18218a0
GIT binary patch
literal 334092
[... 334,092 bytes of base85-encoded HDF5 fixture data omitted: the payload is not human-readable. The 53-line diff adding models/ed/tests/testthat/test.read_E_files.R is lost in the same span, along with the commit-id line of the next patch. ...]
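The test file promised by PATCH 1995 is lost above, so only its 53-line size survives. Purely as an illustration of the kind of check it could contain, a sketch follows; the expectations, fixture path, and use of the internal read_E_files() are assumptions, not the lost file. Only the argument names, taken from the PATCH 1994 hunk header, are grounded in this log.

library(testthat)

test_that("read_E_files parses the bundled monthly -E- output", {
  # Hypothetical sketch; the real test.read_E_files.R is truncated above.
  # read_E_files() may need additional arguments beyond those visible in the
  # PATCH 1994 hunk header.
  result <- PEcAn.ED2:::read_E_files(
    yr = 2004,
    yfiles = 2004,
    efiles = "analysis-E-2004-04-00-000000-g01.h5",
    outdir = "data",
    start_date = "2004-04-01",
    end_date = "2004-04-30"
  )
  expect_true(is.list(result))
})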
zL}f@<XAH)Kfo2*fS#`4GFE;mH|PXr13n%U38+rriZ zDvL=w*Q0EDhafJg>3nuvk`LCY&%cy?U0$kzb>og&&LO_)?(U7DJPUfp;>}n0-}8|b zn||=d&8t83)0;kj{u4JZdH=1|)t64Me)FumR=jiT@2ZRbc=_m(OPZ@o-g;p1p1*8f z{NX=ee$zW+&n;%*HN3iT`OJO)vb6K-!gAW~xvNW;fAozPDrWzprSeZdIjfSE)9+(n ztY7Zh_xt6cOQ)~+b-`>U8{IY6H{6KZxZB2)GS0kx>lc?v3ksKhIXZv2`+pu;p z=)o1cANxhs&+n?Qdg8t=$DjWG?dz&%y)=*3Pvv*pzAbYxy?YM|%kcMap1x%n{kyZM zemVV}>RW%#)vYI$^Ua=9bB-_V>$tvguBmhW`SZ`#;6C0|b9T+Hr#Co?ula?1eBM#9 z=F79ru30qeY3Em-`JofH$$#=sApZpNPayvU&YUM;Uq142Q+l;^`k2enDs~5Y`_P)b zPz5{y8*&He0l+riB;x-A7EDDH!H6>fIBJy(+Yy$|G?U#6D0II%jw zU@!iBW!L=7EK_`C-MCW=t}!+s{PD0$&J~-&gUY%1Kk<}vg+e>bFQ4?Y<%kdFm&cxD z%cX{ENd2ifYeKq~+~JQoobizJV@No`5`?7bp^%T&c<_lw zvx`5(Ou!?q2@*6wBzN$bn+bTxc`mx7h#a$N0EcG$$Zq~9%LF`PKFRC#hkA^l$T4{} zpEDj4>AQGwaZ+4a7W>O8OyNQ8$i1KBX-Ccw8em_Ep?|O~wo=$1ylMRec@Sn3I^s3# zIHoeNHv^Zm?ad(|b4ZSUVE2(nEjPP`u$5-qL4uD1U)iOXEjGnh*5Q%2TQGkm@?2DM z#zW3?k+)lv{7TC*0gt?%tl&|d33%l7WCf2}&Un<|y%6I*f7iMY&IOa=qqghiCSAcS zomXx!g$LD>k3P;ke;?KtcYI(#KtPacB(MK;~YrdI^hKtp%) z(1ItvrgeIm?7#FxC6DM2d8NJy`6{>wMRauAAFpATp4?=LuPnx+(~pv1rXQprnhMSQ!Y#Cb#`%)J^%^oL8 zVcZ!gW{*4D>HgYo_ITF-*S-5$_H`@Zx)m3)ugkv6^lPn?b4mO68S=q76bnEc&;YyN1ONPLTPh{KRP($LAMt*WD7-W&K*3wobizD zmY@fF%tG=WF(~{(QgEO@5_pioV_|gHw@vzH|jGMxP+L2AYU`HwDPe=O3%p}feajj~)tA16Bdu@Hwsv1WVCDr0?Y^zfh zwzwQ^&W1!ui*r@Os@688Ma%o@gyX@>eWT^na)+sHLKlib+v0( zuC8msEy<_3xz5qDw%+A}B68fKJ8r6TtX$h#=ccU$fwXH~wW`giTy9_Ax@t{ZgSJ|I z9;4pva5>hp>uXr`HICNXjR}x`<9cHPq+ag7yKPmgqrOgi1FHLG)XV*6*Vn9N)RX=j zHBk3Ib-k;Jf~C5jx|X`u`nxnZ%lEm)(N@<=-w0KEQr_6S2-m5~8=MaPdmz)f(^Xry z(tu7|U9(~Pl}(Lz(B^9T_3kxFv8KExseH|v5tQoYg_7M_6NF- zwM~un4WvZP=Z>|lt7~L16nd*1b?nV4yY3t<>LHSHHHl z(d}w-IP1tA0=-qM>&Y#XV<#H(+8XK_@1&0b=tFs%!=PLv-qh{Y;!RcVZb(q4{m1lC zUSfs1y1cbvl{-<9=x1&7s-*W4({Z)BliJs})UMTtE9eg{Pt~7#T&a^M?v>6pYirT_ zt$R=8aszoQmp3D=Mj=G{FPCdNq$)R%pTH-o9+CEFt>#))p@KKzA9W3BFPE=Qet!@h zC?)M$8nwsd`sx#b9u=?fJRMCnb+xPT&60Y0jJm0jV^EGRqb(=>re|SYjL<SCN7KwKcS2PFm?`qOT-8 zpVcj^hzuvkwXKfE7OXqz=qhqrxg=??lq;7c<#!}q5XxJX3(9vx@8^!B3zPOppOsY< zr8{sxsC=cY!zvX@d!)k2qdC+lbqjAFJbir6ou6Ku0K#ps5$15F(f$ z?GyK<)t3|Zr7c(A7pY0^r@Fkk5!p(J736-(70u1=J63D#WPGf)lku_I?!w24TvJp~ zLdQri#qHI4DLyvROB3ZIF64KBQWWY zdi=%pFm5WN61^{@@f5c=8c!4DDl+hVl=nk59^&>&xpGRa?5C06M~(iaDsOhTxs)MD zRjyH$RplDJOjWMY%b>h@RU?)p2}6RE%kL|)ND~4N+EaT>IRp3&iyTe=RppxgtI9R~ zSCwn}uPWE{AIg*ZPs)?~5ABosFZ(}3Ac<;1NsO(g&xyWYTQ~q zqej(NOc1U|#Uj04;#i?iiW2ncu}oiS56`cGszcRKEv)E31!kb3zsFzQM7j2b{cW&dUBdinab*fmTalfQqB^~|qnxj!NA(<9FY7rn4d@^B zcj@RR`MY%WT7MU=1bW2Zg$l?q__eUV3n*yHu|1Yl4t7^kIT?1LSVdp9$7r#tT()Bj z%89>=RwVjE{9VcA#NVYY$DSXf{^0Ud{UQD?EgiMLi`0<*X#8Drg{qwRyMU0YT+<;{ zInhfr?Lnr<4lMO`u`E#)CUBKyhPsB(SM+3{PH_i48I_WD;O|n7$@Q_GOiu)QvYw2~ z$T8KE>HDi3Ba@k&sc9$syVT`ce^-Jo=-a4(oDcpka zYswS)ITSc47%CUa0D%bkEa0R*E9f=Fd zcd3*oU6`~_*vDG>3Hw-EuC|X!4WXm6`%#B9<<#!ims7i6Dc9Tm^trj!c0aydps%<4 zY5T-|A%#oTK5<{#a`krt=Zz7)Hjb`;80@(Q1)$R{{YEoX4O%B}}OgpQHi z!0nCXMxtC{V&(ZL{H@}9++HbHPJ#DPZ5PWuQk7G?SYHlyF}VR%d!z9O(>EA@+zGo_ zc@8)pL}eFi%9Z|OyI!tot#h@ai4bUo?1z;^@oKGewyth$TU%SxAipIJ7PBrLIVoN!40w9>|BNHRfqf~)Z&8LHlel&%_qpeWO70+YJiGyIsAfh zbU|njzi4~X4`K&{JzRsniJhIWuSs*|7&VbU*pA2jLiWuXG{J)Hczh^0Yg*SLYfWfs za=RQz-vv8duEQsS9j+X^9BbAhT~?^WBRSGw?`)P&NAW@A5fU76JPtX99AiJD`dGH_ z~uu z!-Zeq!i57Tgb?Dufe-K>@B=ycz<~n?K7a#NUDdTcv%QOV#d?ENd+=<(?y2f|wcTCY zUEU%`eXzZq2LkQwJP^P!?#98se2iyl;P&t;(0_{u`fu?-|1BQqr^N&Pw0NLDF1|*( z$^FOp8p4QQjIZ&_9A5(v)x)5s_!<%9a9m&zbY+UKk!FJ1_!?gxd<*a2C-F6MBgymP zYq*ARqLSllYz^V0{oebqS5SK%AGgsq4LyFn-IHk4h9+L366Kc`iWDYeTC}b zv?)~Y^MLX`aXL_tCx=ekSOnqxJ;^<+w5RVC9?WuNoq}^05=3%n9T!mK@VN^#rFOM= z8Y`ag9T(pt4_ND;`>=^z>YoQU`v*rdgZ&dwoM9Z=TMdEs0o;nFNeVuqsKDc$aflxk 
z+=>Xa_wibL7nj#~E{V(YaP+TzzX{903sXz}y}u6E{BfBQs9k8D!1n$;fn_OhkpMK8?)`G2~=#b@Y{xn+N=9?to<2io*A@Y8Cg({A} z9W8Ij?Nf)RkjG_Xd;2r9Y;0bHsz|c(D20>=X`Z=U2>#Xo-S@SX?#-2Q?Ske-I6c$2 znw~v&AT32<_B*{}vtQ(0V-rtTxxp-YOQuYY_B;6(^*q#%57sWOJ$`@2>~T=!y=>2b zMbI)!rpWWkB>nLLqvk@}d2R_T|A+x|<*7MfHYL@GsXmqP(tOv)dd;;c?}>Z|7aR^5o;{t&~1tS8E@U z=D3=ceY(I=(n+f)CYVM{SabGS9XF8}OK7Ftc2Q8b-f4t9@Q#7M*=u zd2;)-*giMU%RakF@d&=WJW4t?z1vaI{yfclrCD7w-8`!f%Nx7BqG)H8&zQ3hWqV7q zYOK*?jBlUSw_(`w`0;g&dOth+9G|y+?)EwzNKJWgSeb^YI!Rd#G1K#FpPy%9pKhL( zRj+8D)kM#)eNJa;pBprg?#OgEWDNaZ?DJw{{c+{V*PpAOl7CSDNAmOX50=eVvX|0! o+EJ13r=7uEqqr_O`|PZA$J Date: Fri, 25 Jun 2021 22:20:23 +0530 Subject: [PATCH 1996/2289] add ICOS Drought 2018 download script --- .../data.atmosphere/R/download.Drought2018.R | 152 ++++++++++++++++++ .../man/download.Drought2018.Rd | 43 +++++ .../testthat/test.download.Drought2018.R | 14 ++ 3 files changed, 209 insertions(+) create mode 100644 modules/data.atmosphere/R/download.Drought2018.R create mode 100644 modules/data.atmosphere/man/download.Drought2018.Rd create mode 100644 modules/data.atmosphere/tests/testthat/test.download.Drought2018.R diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R new file mode 100644 index 00000000000..e9a09c9b8f9 --- /dev/null +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -0,0 +1,152 @@ +#' Download ICOS Drought 2018 data +#' +#' Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898 +#' +#' Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF +#' +#' Compatible with FLUXNET 2015 format +#' +#' @param sitename ICOS id of the site. Example - "BE-Bra" +#' @param outfolder path to the directory where the output file is stored. If specified directory does not exists, it is created. +#' @param start_date start date of the data request in the form YYYY-MM-DD +#' @param end_date end date area of the data request in the form YYYY-MM-DD +#' @param overwrite should existing files be overwritten. Default False. +#' @return information about the output file +#' @export +#' @author Ayush Prasad +#' @examples download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") +#' +download.Drought2018 <- + function(sitename, + outfolder, + start_date, + end_date, + overwrite = FALSE) { + + # construct zip file name + zip_file_name <- paste0(outfolder, "/Drought", sitename, ".zip") + + stage_download <- TRUE + + if (file.exists(zip_file_name) && !overwrite) { + PEcAn.logger::logger.info("Zip file for the requested site already exists, extracting it...") + stage_download <- FALSE + } + + if (as.Date(end_date) > as.Date("2018-12-31")) { + PEcAn.logger::logger.severe("End date should not exceed 2018-12-31") + } + + if (stage_download) { + # Find dataset product id by using the site name + + # ICOS SPARQL end point + url <- "https://meta.icos-cp.eu/sparql?type=JSON" + + # RDF query to find out the information about the data set using the site name + body <- " + prefix cpmeta: + prefix prov: + select ?dobj + where { + VALUES ?spec {} + ?dobj cpmeta:hasObjectSpec ?spec . + VALUES ?station {} + ?dobj cpmeta:wasAcquiredBy/prov:wasAssociatedWith ?station . 
+ FILTER NOT EXISTS {[] cpmeta:isNextVersionOf ?dobj} + } + " + body <- gsub("sitename", sitename, body) + response <- httr::POST(url, body = body) + response <- httr::content(response, as = "text") + response <- jsonlite::fromJSON(response) + dataset_url <- response$results$bindings$dobj$value + if(is.null(dataset_url)){ + PEcAn.logger::logger.severe("Data is not available for the requested site") + } + dataset_id <- sub(".*/", "", dataset_url) + + # construct the download URL + download_url <- + paste0('https://data.icos-cp.eu/licence_accept?ids=%5B%22', + dataset_id, + '%22%5D') + # Download the zip file + file <- + httr::GET(url = download_url, httr::write_disk( + paste0(outfolder, "/Drought", sitename, ".zip"), + overwrite = TRUE + ), httr::progress()) + } + + file_name <- paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH') + + # extract only the hourly data file + zipped_csv_name <- + grep( + paste0('^', file_name), + unzip(zip_file_name, list = TRUE)$Name, + ignore.case = TRUE, + value = TRUE + ) + unzip(file.path(outfolder, paste0('Drought', sitename, '.zip')), + files = zipped_csv_name, + exdir = outfolder) + `%>%` <- dplyr::`%>%` + # read in the output CSV file and select the variables required + df <- read.csv(file.path(outfolder, zipped_csv_name)) + df <- + subset( + df, + select = c( + "TIMESTAMP_START", + "TA_F", + "SW_IN_F", + "LW_IN_F", + "VPD_F", + "PA_F", + "P_F", + "WS_F", + "WD", + "RH", + "PPFD_IN", + "CO2_F_MDS", + "TS_F_MDS_1", + "TS_F_MDS_2", + "NEE_VUT_REF", + "LE_F_MDS", + "RECO_NT_VUT_REF", + "GPP_NT_VUT_REF" + ) + ) + df <- + df %>% dplyr::filter(( + as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) >= as.Date(start_date) & + as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) <= as.Date(end_date) + )) + # save the csv file + write.csv(df, file.path(outfolder, zipped_csv_name), row.names = FALSE) + + rows <- 1 + results <- data.frame( + file = character(rows), + host = character(rows), + mimetype = character(rows), + formatname = character(rows), + startdate = character(rows), + enddate = character(rows), + dbfile.name = zipped_csv_name, + stringsAsFactors = FALSE + ) + + results$file[rows] <- file.path(outfolder, zipped_csv_name) + results$host[rows] <- PEcAn.remote::fqdn() + results$startdate[rows] <- start_date + results$enddate[rows] <- end_date + results$mimetype[rows] <- "text/csv" + results$formatname[rows] <- "Drought2018_HH" + + # return list of files downloaded + return(results) + + } \ No newline at end of file diff --git a/modules/data.atmosphere/man/download.Drought2018.Rd b/modules/data.atmosphere/man/download.Drought2018.Rd new file mode 100644 index 00000000000..52596e63015 --- /dev/null +++ b/modules/data.atmosphere/man/download.Drought2018.Rd @@ -0,0 +1,43 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/download.Drought2018.R +\name{download.Drought2018} +\alias{download.Drought2018} +\title{Download ICOS Drought 2018 data} +\usage{ +download.Drought2018( + sitename, + outfolder, + start_date, + end_date, + overwrite = FALSE +) +} +\arguments{ +\item{sitename}{ICOS id of the site. Example - "BE-Bra"} + +\item{outfolder}{path to the directory where the output file is stored. If specified directory does not exists, it is created.} + +\item{start_date}{start date of the data request in the form YYYY-MM-DD} + +\item{end_date}{end date area of the data request in the form YYYY-MM-DD} + +\item{overwrite}{should existing files be overwritten. 
Default False.} +} +\value{ +information about the output file +} +\description{ +Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898 +} +\details{ +Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF + +Compatible with FLUXNET 2015 format +} +\examples{ +download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") + +} +\author{ +Ayush Prasad +} diff --git a/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R b/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R new file mode 100644 index 00000000000..1ddfbf705ef --- /dev/null +++ b/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R @@ -0,0 +1,14 @@ +context("Download Drought 2018 product") + +outfolder <- tempdir() +setup(dir.create(outfolder, showWarnings = FALSE, recursive = TRUE)) +teardown(unlink(outfolder, recursive = TRUE)) + +test_that("Drought 2018 download works", { + start_date <- "2016-01-01" + end_date <- "2017-01-01" + sitename <- "FI-Sii" + print(outfolder) + dat <- download.Drought2018(sitename, outfolder, start_date, end_date, overwrite = TRUE) + expect_true(file.exists(dat$file)) +}) \ No newline at end of file From b1570b502b65a437a82bfc9d3f9efa748c82aaf9 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 25 Jun 2021 22:21:30 +0530 Subject: [PATCH 1997/2289] add Drought 2018 to CF converter --- modules/data.atmosphere/NAMESPACE | 2 + .../data.atmosphere/R/met2CF.Drought2018.R | 30 +++++++++++ .../data.atmosphere/man/met2CF.Drought2018.Rd | 50 +++++++++++++++++++ 3 files changed, 82 insertions(+) create mode 100644 modules/data.atmosphere/R/met2CF.Drought2018.R create mode 100644 modules/data.atmosphere/man/met2CF.Drought2018.Rd diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index e2baecdb60a..5983c8148da 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -18,6 +18,7 @@ export(debias.met.regression) export(download.Ameriflux) export(download.AmerifluxLBL) export(download.CRUNCEP) +export(download.Drought2018) export(download.ERA5.old) export(download.FACE) export(download.Fluxnet2015) @@ -65,6 +66,7 @@ export(met.process.stage) export(met2CF.ALMA) export(met2CF.Ameriflux) export(met2CF.AmerifluxLBL) +export(met2CF.Drought2018) export(met2CF.ERA5) export(met2CF.FACE) export(met2CF.Geostreams) diff --git a/modules/data.atmosphere/R/met2CF.Drought2018.R b/modules/data.atmosphere/R/met2CF.Drought2018.R new file mode 100644 index 00000000000..5b716f86a69 --- /dev/null +++ b/modules/data.atmosphere/R/met2CF.Drought2018.R @@ -0,0 +1,30 @@ +#' Convert variables from ICOS Drought 2018 product to CF format. +#' +#' @param in.path path to the input Drought 2018 CSV file +#' @param in.prefix name of the input file +#' @param outfolder path to the directory where the output file is stored. If specified directory does not exists, it is created. 
+#' @param start_date start date of the input file +#' @param end_date end date of the input file +#' @param format format is data frame or list with elements as described below +#' REQUIRED: +#' format$header = number of lines of header +#' format$vars is a data.frame with lists of information for each variable to read, at least airT is required +#' format$vars$input_name = Name in CSV file +#' format$vars$input_units = Units in CSV file +#' format$vars$bety_name = Name in BETY +#' OPTIONAL: +#' format$lat = latitude of site +#' format$lon = longitude of site +#' format$na.strings = list of missing values to convert to NA, such as -9999 +#' format$skip = lines to skip excluding header +#' format$vars$column_number = Column number in CSV file (optional, will use header name first) +#' Columns with NA for bety variable name are dropped. +#' @param overwrite overwrite should existing files be overwritten. Default False. +#' @return information about the output file +#' @export +#' + +met2CF.Drought2018 <- function(in.path, in.prefix, outfolder, start_date, end_date, format, overwrite = FALSE){ + results <- PEcAn.data.atmosphere::met2CF.csv(in.path, in.prefix, outfolder, start_date, end_date, format, overwrite = overwrite) + return(results) +} \ No newline at end of file diff --git a/modules/data.atmosphere/man/met2CF.Drought2018.Rd b/modules/data.atmosphere/man/met2CF.Drought2018.Rd new file mode 100644 index 00000000000..6dff4986606 --- /dev/null +++ b/modules/data.atmosphere/man/met2CF.Drought2018.Rd @@ -0,0 +1,50 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/met2CF.Drought2018.R +\name{met2CF.Drought2018} +\alias{met2CF.Drought2018} +\title{Convert variables from ICOS Drought 2018 product to CF format.} +\usage{ +met2CF.Drought2018( + in.path, + in.prefix, + outfolder, + start_date, + end_date, + format, + overwrite = FALSE +) +} +\arguments{ +\item{in.path}{path to the input Drought 2018 CSV file} + +\item{in.prefix}{name of the input file} + +\item{outfolder}{path to the directory where the output file is stored. If specified directory does not exists, it is created.} + +\item{start_date}{start date of the input file} + +\item{end_date}{end date of the input file} + +\item{format}{format is data frame or list with elements as described below + REQUIRED: + format$header = number of lines of header + format$vars is a data.frame with lists of information for each variable to read, at least airT is required + format$vars$input_name = Name in CSV file + format$vars$input_units = Units in CSV file + format$vars$bety_name = Name in BETY + OPTIONAL: + format$lat = latitude of site + format$lon = longitude of site + format$na.strings = list of missing values to convert to NA, such as -9999 + format$skip = lines to skip excluding header + format$vars$column_number = Column number in CSV file (optional, will use header name first) +Columns with NA for bety variable name are dropped.} + +\item{overwrite}{overwrite should existing files be overwritten. Default False.} +} +\value{ +information about the output file +} +\description{ +Convert variables from ICOS Drought 2018 product to CF format. 
+} From 375178510ce3637c91c1dd470fda71931114d79e Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 26 Jun 2021 09:49:38 +0530 Subject: [PATCH 1998/2289] add comment about output variables --- .../data.atmosphere/R/download.Drought2018.R | 7 ++-- .../data.atmosphere/R/met2CF.Drought2018.R | 34 +++++++++++++++---- .../man/download.Drought2018.Rd | 2 -- .../data.atmosphere/man/met2CF.Drought2018.Rd | 10 ++++-- 4 files changed, 39 insertions(+), 14 deletions(-) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index e9a09c9b8f9..a0e1d1c6e06 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -4,8 +4,6 @@ #' #' Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF #' -#' Compatible with FLUXNET 2015 format -#' #' @param sitename ICOS id of the site. Example - "BE-Bra" #' @param outfolder path to the directory where the output file is stored. If specified directory does not exists, it is created. #' @param start_date start date of the data request in the form YYYY-MM-DD @@ -34,7 +32,9 @@ download.Drought2018 <- } if (as.Date(end_date) > as.Date("2018-12-31")) { - PEcAn.logger::logger.severe("End date should not exceed 2018-12-31") + PEcAn.logger::logger.severe(paste0( + "Requested end date ", as.Date(end_date), " exceeds Drought 2018 availability period" + )) } if (stage_download) { @@ -146,7 +146,6 @@ download.Drought2018 <- results$mimetype[rows] <- "text/csv" results$formatname[rows] <- "Drought2018_HH" - # return list of files downloaded return(results) } \ No newline at end of file diff --git a/modules/data.atmosphere/R/met2CF.Drought2018.R b/modules/data.atmosphere/R/met2CF.Drought2018.R index 5b716f86a69..7e56e6b2f47 100644 --- a/modules/data.atmosphere/R/met2CF.Drought2018.R +++ b/modules/data.atmosphere/R/met2CF.Drought2018.R @@ -1,4 +1,12 @@ -#' Convert variables from ICOS Drought 2018 product to CF format. +#' Convert variables ICOS Drought 2018 variables to CF format. +#' +#' Variables present in the output netCDF file: +#' air_temperature, air_temperature, relative_humidity, +#' specific_humidity, water_vapor_saturation_deficit, +#' surface_downwelling_longwave_flux_in_air, +#' surface_downwelling_shortwave_flux_in_air, +#' surface_downwelling_photosynthetic_photon_flux_in_air, precipitation_flux, +#' eastward_wind, northward_wind #' #' @param in.path path to the input Drought 2018 CSV file #' @param in.prefix name of the input file @@ -18,13 +26,27 @@ #' format$na.strings = list of missing values to convert to NA, such as -9999 #' format$skip = lines to skip excluding header #' format$vars$column_number = Column number in CSV file (optional, will use header name first) -#' Columns with NA for bety variable name are dropped. +#' Columns with NA for bety variable name are dropped. #' @param overwrite overwrite should existing files be overwritten. Default False. 
#' @return information about the output file #' @export #' -met2CF.Drought2018 <- function(in.path, in.prefix, outfolder, start_date, end_date, format, overwrite = FALSE){ - results <- PEcAn.data.atmosphere::met2CF.csv(in.path, in.prefix, outfolder, start_date, end_date, format, overwrite = overwrite) - return(results) -} \ No newline at end of file +met2CF.Drought2018 <- + function(in.path, + in.prefix, + outfolder, + start_date, + end_date, + format, + overwrite = FALSE) { + results <- + PEcAn.data.atmosphere::met2CF.csv(in.path, + in.prefix, + outfolder, + start_date, + end_date, + format, + overwrite = overwrite) + return(results) + } \ No newline at end of file diff --git a/modules/data.atmosphere/man/download.Drought2018.Rd b/modules/data.atmosphere/man/download.Drought2018.Rd index 52596e63015..6b454070adc 100644 --- a/modules/data.atmosphere/man/download.Drought2018.Rd +++ b/modules/data.atmosphere/man/download.Drought2018.Rd @@ -31,8 +31,6 @@ Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898 } \details{ Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF - -Compatible with FLUXNET 2015 format } \examples{ download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") diff --git a/modules/data.atmosphere/man/met2CF.Drought2018.Rd b/modules/data.atmosphere/man/met2CF.Drought2018.Rd index 6dff4986606..cf6cc8f75c0 100644 --- a/modules/data.atmosphere/man/met2CF.Drought2018.Rd +++ b/modules/data.atmosphere/man/met2CF.Drought2018.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/met2CF.Drought2018.R \name{met2CF.Drought2018} \alias{met2CF.Drought2018} -\title{Convert variables from ICOS Drought 2018 product to CF format.} +\title{Convert variables ICOS Drought 2018 variables to CF format.} \usage{ met2CF.Drought2018( in.path, @@ -46,5 +46,11 @@ Columns with NA for bety variable name are dropped.} information about the output file } \description{ -Convert variables from ICOS Drought 2018 product to CF format. 
+Variables present in the output netCDF file: +air_temperature, air_temperature, relative_humidity, +specific_humidity, water_vapor_saturation_deficit, +surface_downwelling_longwave_flux_in_air, +surface_downwelling_shortwave_flux_in_air, +surface_downwelling_photosynthetic_photon_flux_in_air, precipitation_flux, +eastward_wind, northward_wind } From c29a2d7737edf172e31e06a60eb2c6c6ff159861 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 26 Jun 2021 10:23:05 +0530 Subject: [PATCH 1999/2289] add test for product url --- .../data.atmosphere/tests/testthat/test.download.Drought2018.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R b/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R index 1ddfbf705ef..aa26fb504c5 100644 --- a/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R +++ b/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R @@ -8,7 +8,8 @@ test_that("Drought 2018 download works", { start_date <- "2016-01-01" end_date <- "2017-01-01" sitename <- "FI-Sii" - print(outfolder) + res <- httr::GET("https://meta.icos-cp.eu/objects/a8OW2wWfAYqZrj31S8viVLUS") + expect_equal(200, res$status_code) dat <- download.Drought2018(sitename, outfolder, start_date, end_date, overwrite = TRUE) expect_true(file.exists(dat$file)) }) \ No newline at end of file From 12653f3b4b7b0f8f96ee7882fa9b443d519462ad Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 26 Jun 2021 10:44:52 +0530 Subject: [PATCH 2000/2289] remove example --- modules/data.atmosphere/R/download.Drought2018.R | 1 - modules/data.atmosphere/man/download.Drought2018.Rd | 4 ---- 2 files changed, 5 deletions(-) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index a0e1d1c6e06..e689881431c 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -12,7 +12,6 @@ #' @return information about the output file #' @export #' @author Ayush Prasad -#' @examples download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") #' download.Drought2018 <- function(sitename, diff --git a/modules/data.atmosphere/man/download.Drought2018.Rd b/modules/data.atmosphere/man/download.Drought2018.Rd index 6b454070adc..e9af4935f2e 100644 --- a/modules/data.atmosphere/man/download.Drought2018.Rd +++ b/modules/data.atmosphere/man/download.Drought2018.Rd @@ -31,10 +31,6 @@ Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898 } \details{ Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF -} -\examples{ -download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") - } \author{ Ayush Prasad From c779517c509ca27f933dba638ae3aa60bfa768e8 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 26 Jun 2021 10:50:28 +0530 Subject: [PATCH 2001/2289] add corrected example --- modules/data.atmosphere/R/download.Drought2018.R | 4 ++++ modules/data.atmosphere/man/download.Drought2018.Rd | 5 +++++ 2 files changed, 9 insertions(+) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index e689881431c..e7048df7ab6 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -11,6 +11,10 @@ #' @param overwrite 
should existing files be overwritten. Default False. #' @return information about the output file #' @export +#' @examples +#' \dontrun{ +#' download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") +#' } #' @author Ayush Prasad #' download.Drought2018 <- diff --git a/modules/data.atmosphere/man/download.Drought2018.Rd b/modules/data.atmosphere/man/download.Drought2018.Rd index e9af4935f2e..924545ab7fc 100644 --- a/modules/data.atmosphere/man/download.Drought2018.Rd +++ b/modules/data.atmosphere/man/download.Drought2018.Rd @@ -32,6 +32,11 @@ Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898 \details{ Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF } +\examples{ +\dontrun{ +download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") +} +} \author{ Ayush Prasad } From 421d2bd63d6c3218e48af0f3504ebc23cf18d854 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sat, 26 Jun 2021 11:03:42 +0530 Subject: [PATCH 2002/2289] call utils function with namespace --- modules/data.atmosphere/R/download.Drought2018.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index e7048df7ab6..797507b2ba9 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -88,16 +88,16 @@ download.Drought2018 <- zipped_csv_name <- grep( paste0('^', file_name), - unzip(zip_file_name, list = TRUE)$Name, + utils::unzip(zip_file_name, list = TRUE)$Name, ignore.case = TRUE, value = TRUE ) - unzip(file.path(outfolder, paste0('Drought', sitename, '.zip')), + utils::unzip(file.path(outfolder, paste0('Drought', sitename, '.zip')), files = zipped_csv_name, exdir = outfolder) `%>%` <- dplyr::`%>%` # read in the output CSV file and select the variables required - df <- read.csv(file.path(outfolder, zipped_csv_name)) + df <- utils::read.csv(file.path(outfolder, zipped_csv_name)) df <- subset( df, @@ -128,7 +128,7 @@ download.Drought2018 <- as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) <= as.Date(end_date) )) # save the csv file - write.csv(df, file.path(outfolder, zipped_csv_name), row.names = FALSE) + utils::write.csv(df, file.path(outfolder, zipped_csv_name), row.names = FALSE) rows <- 1 results <- data.frame( From e616d8a03b1de956a6b59024480a3a4e199b2d4a Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 27 Jun 2021 12:26:59 +0530 Subject: [PATCH 2003/2289] use sparql to check for start date validity --- .../data.atmosphere/R/download.Drought2018.R | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index 797507b2ba9..7af9615122a 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -36,7 +36,7 @@ download.Drought2018 <- if (as.Date(end_date) > as.Date("2018-12-31")) { PEcAn.logger::logger.severe(paste0( - "Requested end date ", as.Date(end_date), " exceeds Drought 2018 availability period" + "Requested end date ", end_date, " exceeds Drought 2018 availability period" )) } @@ -50,13 +50,14 @@ download.Drought2018 <- body <- " prefix cpmeta: prefix prov: - select ?dobj + select ?dobj ?spec ?timeStart where { - VALUES ?spec {} - 
?dobj cpmeta:hasObjectSpec ?spec . - VALUES ?station {} - ?dobj cpmeta:wasAcquiredBy/prov:wasAssociatedWith ?station . - FILTER NOT EXISTS {[] cpmeta:isNextVersionOf ?dobj} + VALUES ?spec {} + ?dobj cpmeta:hasObjectSpec ?spec . + VALUES ?station {} + ?dobj cpmeta:wasAcquiredBy/prov:wasAssociatedWith ?station . + ?dobj cpmeta:hasStartTime | (cpmeta:wasAcquiredBy / prov:startedAtTime) ?timeStart . + FILTER NOT EXISTS {[] cpmeta:isNextVersionOf ?dobj} } " body <- gsub("sitename", sitename, body) @@ -64,9 +65,13 @@ download.Drought2018 <- response <- httr::content(response, as = "text") response <- jsonlite::fromJSON(response) dataset_url <- response$results$bindings$dobj$value + dataset_start_date <- as.Date(strptime(response$results$bindings$timeStart$value, format = "%Y-%m-%dT%H:%M:%S")) if(is.null(dataset_url)){ PEcAn.logger::logger.severe("Data is not available for the requested site") } + if(dataset_start_date > as.Date(start_date)){ + PEcAn.logger::logger.severe(paste("Data is not available for the requested start date. Please try again with", dataset_start_date, "as start date.") ) + } dataset_id <- sub(".*/", "", dataset_url) # construct the download URL From 39d4683bdfdabf18ce5e376c46ea6a464f1cd4ae Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 27 Jun 2021 13:29:59 +0530 Subject: [PATCH 2004/2289] check for existing CSV file --- .../data.atmosphere/R/download.Drought2018.R | 27 ++++++++++++++----- 1 file changed, 21 insertions(+), 6 deletions(-) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index 7af9615122a..7d06e727606 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -24,14 +24,27 @@ download.Drought2018 <- end_date, overwrite = FALSE) { + stage_download <- TRUE + stage_extract <- TRUE + + # construct output CSV file name + output_file_name <- paste0("FLX_", sitename, "_FLUXNET2015_FULLSET_HH_", as.character(format(as.Date(start_date),'%Y')), "-2018_beta-3.csv") + + if (file.exists(file.path(outfolder, output_file_name)) && !overwrite) { + PEcAn.logger::logger.info("Output CSV file for the requested site already exists") + stage_download <- FALSE + stage_extract <- FALSE + } + + # construct zip file name zip_file_name <- paste0(outfolder, "/Drought", sitename, ".zip") - stage_download <- TRUE - if (file.exists(zip_file_name) && !overwrite) { + if (stage_extract && file.exists(zip_file_name) && !overwrite) { PEcAn.logger::logger.info("Zip file for the requested site already exists, extracting it...") stage_download <- FALSE + stage_extract <- FALSE } if (as.Date(end_date) > as.Date("2018-12-31")) { @@ -87,8 +100,9 @@ download.Drought2018 <- ), httr::progress()) } + if (stage_extract){ file_name <- paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH') - + # extract only the hourly data file zipped_csv_name <- grep( @@ -133,7 +147,8 @@ download.Drought2018 <- as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) <= as.Date(end_date) )) # save the csv file - utils::write.csv(df, file.path(outfolder, zipped_csv_name), row.names = FALSE) + utils::write.csv(df, file.path(outfolder, output_file_name), row.names = FALSE) + } rows <- 1 results <- data.frame( @@ -143,11 +158,11 @@ download.Drought2018 <- formatname = character(rows), startdate = character(rows), enddate = character(rows), - dbfile.name = zipped_csv_name, + dbfile.name = output_file_name, stringsAsFactors = FALSE ) - results$file[rows] <- file.path(outfolder, zipped_csv_name) + 
results$file[rows] <- file.path(outfolder, output_file_name) results$host[rows] <- PEcAn.remote::fqdn() results$startdate[rows] <- start_date results$enddate[rows] <- end_date From 9632c823f5ac1d7b58bc8ed7717c53581798e9e3 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Sun, 27 Jun 2021 13:34:31 +0530 Subject: [PATCH 2005/2289] fix indentation --- .../data.atmosphere/R/download.Drought2018.R | 147 ++++++++++-------- 1 file changed, 84 insertions(+), 63 deletions(-) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R index 7d06e727606..3755c988f40 100644 --- a/modules/data.atmosphere/R/download.Drought2018.R +++ b/modules/data.atmosphere/R/download.Drought2018.R @@ -11,26 +11,33 @@ #' @param overwrite should existing files be overwritten. Default False. #' @return information about the output file #' @export -#' @examples +#' @examples #' \dontrun{ #' download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01") #' } #' @author Ayush Prasad -#' +#' download.Drought2018 <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE) { - stage_download <- TRUE stage_extract <- TRUE # construct output CSV file name - output_file_name <- paste0("FLX_", sitename, "_FLUXNET2015_FULLSET_HH_", as.character(format(as.Date(start_date),'%Y')), "-2018_beta-3.csv") + output_file_name <- + paste0( + "FLX_", + sitename, + "_FLUXNET2015_FULLSET_HH_", + as.character(format(as.Date(start_date), '%Y')), + "-2018_beta-3.csv" + ) - if (file.exists(file.path(outfolder, output_file_name)) && !overwrite) { + if (file.exists(file.path(outfolder, output_file_name)) && + !overwrite) { PEcAn.logger::logger.info("Output CSV file for the requested site already exists") stage_download <- FALSE stage_extract <- FALSE @@ -49,7 +56,9 @@ download.Drought2018 <- if (as.Date(end_date) > as.Date("2018-12-31")) { PEcAn.logger::logger.severe(paste0( - "Requested end date ", end_date, " exceeds Drought 2018 availability period" + "Requested end date ", + end_date, + " exceeds Drought 2018 availability period" )) } @@ -63,7 +72,7 @@ download.Drought2018 <- body <- " prefix cpmeta: prefix prov: - select ?dobj ?spec ?timeStart + select ?dobj ?spec ?timeStart where { VALUES ?spec {} ?dobj cpmeta:hasObjectSpec ?spec . @@ -78,12 +87,21 @@ download.Drought2018 <- response <- httr::content(response, as = "text") response <- jsonlite::fromJSON(response) dataset_url <- response$results$bindings$dobj$value - dataset_start_date <- as.Date(strptime(response$results$bindings$timeStart$value, format = "%Y-%m-%dT%H:%M:%S")) - if(is.null(dataset_url)){ + dataset_start_date <- + as.Date( + strptime(response$results$bindings$timeStart$value, format = "%Y-%m-%dT%H:%M:%S") + ) + if (is.null(dataset_url)) { PEcAn.logger::logger.severe("Data is not available for the requested site") } - if(dataset_start_date > as.Date(start_date)){ - PEcAn.logger::logger.severe(paste("Data is not available for the requested start date. Please try again with", dataset_start_date, "as start date.") ) + if (dataset_start_date > as.Date(start_date)) { + PEcAn.logger::logger.severe( + paste( + "Data is not available for the requested start date. Please try again with", + dataset_start_date, + "as start date." 
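Given the caching and start-date checks introduced above, a hedged usage sketch of `download.Drought2018` may help; the site code and dates mirror the function's own roxygen example, while the output folder is a placeholder:

```r
# A second call with overwrite = FALSE skips download and extraction,
# because the output CSV from the first call is reused.
res <- PEcAn.data.atmosphere::download.Drought2018(
  sitename   = "FI-Sii",
  outfolder  = "/tmp/drought2018",   # assumed output directory
  start_date = "2016-01-01",
  end_date   = "2018-01-01",
  overwrite  = FALSE
)
```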
+ ) + ) } dataset_id <- sub(".*/", "", dataset_url) @@ -94,60 +112,62 @@ download.Drought2018 <- '%22%5D') # Download the zip file file <- - httr::GET(url = download_url, httr::write_disk( - paste0(outfolder, "/Drought", sitename, ".zip"), - overwrite = TRUE - ), httr::progress()) + httr::GET(url = download_url, + httr::write_disk( + paste0(outfolder, "/Drought", sitename, ".zip"), + overwrite = TRUE + ), + httr::progress()) } - if (stage_extract){ - file_name <- paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH') - - # extract only the hourly data file - zipped_csv_name <- - grep( - paste0('^', file_name), - utils::unzip(zip_file_name, list = TRUE)$Name, - ignore.case = TRUE, - value = TRUE - ) - utils::unzip(file.path(outfolder, paste0('Drought', sitename, '.zip')), - files = zipped_csv_name, - exdir = outfolder) - `%>%` <- dplyr::`%>%` - # read in the output CSV file and select the variables required - df <- utils::read.csv(file.path(outfolder, zipped_csv_name)) - df <- - subset( - df, - select = c( - "TIMESTAMP_START", - "TA_F", - "SW_IN_F", - "LW_IN_F", - "VPD_F", - "PA_F", - "P_F", - "WS_F", - "WD", - "RH", - "PPFD_IN", - "CO2_F_MDS", - "TS_F_MDS_1", - "TS_F_MDS_2", - "NEE_VUT_REF", - "LE_F_MDS", - "RECO_NT_VUT_REF", - "GPP_NT_VUT_REF" + if (stage_extract) { + file_name <- paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH') + + # extract only the hourly data file + zipped_csv_name <- + grep( + paste0('^', file_name), + utils::unzip(zip_file_name, list = TRUE)$Name, + ignore.case = TRUE, + value = TRUE ) - ) - df <- - df %>% dplyr::filter(( - as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) >= as.Date(start_date) & - as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) <= as.Date(end_date) - )) - # save the csv file - utils::write.csv(df, file.path(outfolder, output_file_name), row.names = FALSE) + utils::unzip(file.path(outfolder, paste0('Drought', sitename, '.zip')), + files = zipped_csv_name, + exdir = outfolder) + `%>%` <- dplyr::`%>%` + # read in the output CSV file and select the variables required + df <- utils::read.csv(file.path(outfolder, zipped_csv_name)) + df <- + subset( + df, + select = c( + "TIMESTAMP_START", + "TA_F", + "SW_IN_F", + "LW_IN_F", + "VPD_F", + "PA_F", + "P_F", + "WS_F", + "WD", + "RH", + "PPFD_IN", + "CO2_F_MDS", + "TS_F_MDS_1", + "TS_F_MDS_2", + "NEE_VUT_REF", + "LE_F_MDS", + "RECO_NT_VUT_REF", + "GPP_NT_VUT_REF" + ) + ) + df <- + df %>% dplyr::filter(( + as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) >= as.Date(start_date) & + as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) <= as.Date(end_date) + )) + # save the csv file + utils::write.csv(df, file.path(outfolder, output_file_name), row.names = FALSE) } rows <- 1 @@ -162,7 +182,8 @@ download.Drought2018 <- stringsAsFactors = FALSE ) - results$file[rows] <- file.path(outfolder, output_file_name) + results$file[rows] <- + file.path(outfolder, output_file_name) results$host[rows] <- PEcAn.remote::fqdn() results$startdate[rows] <- start_date results$enddate[rows] <- end_date From bddc22483a328a7757ab9696709558435c41d9b6 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:17:56 -0400 Subject: [PATCH 2006/2289] Added param description --- modules/data.atmosphere/R/debias.met.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/debias.met.R b/modules/data.atmosphere/R/debias.met.R index ea05ad0609f..b4d666fddec 100644 --- a/modules/data.atmosphere/R/debias.met.R +++ 
b/modules/data.atmosphere/R/debias.met.R @@ -6,11 +6,11 @@ substrRight <- function(x, n) { ##' @name debias_met ##' @title debias_met ##' @export -##' @param outfolder +##' @param outfolder location where output is stored ##' @param input_met - the source_met dataset that will be altered by the training dataset in NC format. ##' @param train_met - the observed dataset that will be used to train the modeled dataset in NC format ##' @param de_method - select which debias method you would like to use, options are 'normal', 'linear regression' -##' @param site.id +##' @param site.id site id ##' @param overwrite logical: replace output file if it already exists? Currently ignored. ##' @param verbose logical: should \code{\link[ncdf4:ncdf4-package]{ncdf4}} ##' functions print debugging information as they run? From d894a8a8d7cfff99f62cc90d9237fd8ef932adab Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:18:58 -0400 Subject: [PATCH 2007/2289] Added param description --- modules/data.atmosphere/man/debias_met.Rd | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/modules/data.atmosphere/man/debias_met.Rd b/modules/data.atmosphere/man/debias_met.Rd index 7bed62a5028..142c0c422fd 100644 --- a/modules/data.atmosphere/man/debias_met.Rd +++ b/modules/data.atmosphere/man/debias_met.Rd @@ -17,6 +17,8 @@ debias.met( ) } \arguments{ +\item{outfolder}{location where output is stored} + \item{input_met}{- the source_met dataset that will be altered by the training dataset in NC format.} \item{train_met}{- the observed dataset that will be used to train the modeled dataset in NC format} @@ -27,6 +29,8 @@ debias.met( \item{verbose}{logical: should \code{\link[ncdf4:ncdf4-package]{ncdf4}} functions print debugging information as they run?} + +\item{site.id}{site id} } \description{ debias.met takes input_met and debiases it based on statistics from a train_met dataset From 542383459f3d91c2851d203d72997feece2ccde8 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:21:52 -0400 Subject: [PATCH 2008/2289] Added param description --- modules/data.atmosphere/R/download.FACE.R | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/R/download.FACE.R b/modules/data.atmosphere/R/download.FACE.R index 88e00cb2d5b..4bb521811d3 100644 --- a/modules/data.atmosphere/R/download.FACE.R +++ b/modules/data.atmosphere/R/download.FACE.R @@ -3,10 +3,10 @@ ##' @name download.FACE ##' @title download.FACE ##' @export -##' @param sitename -##' @param outfolder -##' @param start_year -##' @param end_year +##' @param sitename sitename +##' @param outfolder location where output is stored +##' @param start_year desired start year +##' @param end_year desired end year ##' @param method Optional. Passed to download.file() function. 
Use this to set custom programs such as ncftp to use when
##' downloading files from FTP sites
##'
##' @author Betsy Cowdery
download.FACE <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE, method, ...) {

From afada94a636bf7293694a75ed04d1226b286e743 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:22:37 -0400
Subject: [PATCH 2009/2289] Added param description

---
 modules/data.atmosphere/man/download.FACE.Rd | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/modules/data.atmosphere/man/download.FACE.Rd b/modules/data.atmosphere/man/download.FACE.Rd
index 69c6bfe60ab..b9ae6a518ab 100644
--- a/modules/data.atmosphere/man/download.FACE.Rd
+++ b/modules/data.atmosphere/man/download.FACE.Rd
@@ -15,8 +15,16 @@ download.FACE(
 )
 }
 \arguments{
+\item{sitename}{sitename}
+
+\item{outfolder}{location where output is stored}
+
 \item{method}{Optional. Passed to download.file() function.
 Use this to set custom programs such as ncftp to use when
 downloading files from FTP sites}
+
+\item{start_year}{desired start year}
+
+\item{end_year}{desired end year}
 }
 \description{
 Download Raw FACE data from the internet

From 37dc064e41111882c7dcea89a84115050973acff Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:24:14 -0400
Subject: [PATCH 2010/2289] Added param description

---
 modules/data.atmosphere/R/download.GLDAS.R | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/modules/data.atmosphere/R/download.GLDAS.R b/modules/data.atmosphere/R/download.GLDAS.R
index 2ba558807e5..71ba4f5974d 100644
--- a/modules/data.atmosphere/R/download.GLDAS.R
+++ b/modules/data.atmosphere/R/download.GLDAS.R
@@ -3,12 +3,12 @@
 ##' Download and convert single grid point GLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface
 ##'
 ##' @export
-##' @param outfolder
-##' @param start_date
-##' @param end_date
-##' @param site_id
-##' @param lat.in
-##' @param lon.in
+##' @param outfolder location where output is stored
+##' @param start_date desired start date
+##' @param end_date desired end date
+##' @param site_id desired site id
+##' @param lat.in latitude of site
+##' @param lon.in longitude of site
 ##'
 ##' @author Christy Rollinson
 download.GLDAS <- function(outfolder, start_date, end_date, site_id, lat.in, lon.in,

From 3c6f9b8f34514af48b5c51c90980a98502f301fb Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:26:34 -0400
Subject: [PATCH 2011/2289] Added param description

---
 modules/data.atmosphere/man/download.GLDAS.Rd | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/man/download.GLDAS.Rd b/modules/data.atmosphere/man/download.GLDAS.Rd
index aeb3f22ee54..a105e1678d6 100644
--- a/modules/data.atmosphere/man/download.GLDAS.Rd
+++ b/modules/data.atmosphere/man/download.GLDAS.Rd
@@ -17,7 +17,17 @@ download.GLDAS(
 )
 }
 \arguments{
-\item{lon.in}{}
+\item{outfolder}{location where output is stored}
+
+\item{start_date}{desired start date}
+
+\item{end_date}{desired end date}
+
+\item{site_id}{desired site id}
+
+\item{lat.in}{latitude of site}
+
+\item{lon.in}{longitude of site}
 }
 \description{
 Download and convert single grid point GLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface

From 806e172489847d0083bce07d54d61fd4adf0b0b1 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:27:56 -0400
Subject: [PATCH 2012/2289] Added param description

---
 modules/data.atmosphere/R/download.MACA.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/R/download.MACA.R
b/modules/data.atmosphere/R/download.MACA.R index b3cb02e6aba..5f866b7bccb 100644 --- a/modules/data.atmosphere/R/download.MACA.R +++ b/modules/data.atmosphere/R/download.MACA.R @@ -2,11 +2,11 @@ ##' @name download.MACA ##' @title download.MACA ##' @export -##' @param outfolder +##' @param outfolder location where output is stored ##' @param start_date , of the format "YEAR-01-01 00:00:00" ##' @param end_date , of the format "YEAR-12-31 23:59:59" -##' @param lat -##' @param lon +##' @param lat latitude of site +##' @param lon longitude of site ##' @param model , select which MACA model to run (options are BNU-ESM, CNRM-CM5, CSIRO-Mk3-6-0, bcc-csm1-1, bcc-csm1-1-m, CanESM2, GFDL-ESM2M, GFDL-ESM2G, HadGEM2-CC365, HadGEM2-ES365, inmcm4, MIROC5, MIROC-ESM, MIROC-ESM-CHEM, MRI-CGCM3, CCSM4, IPSL-CM5A-LR, IPSL-CM5A-MR, IPSL-CM5B-LR, NorESM1-M) ##' @param scenario , select which scenario to run (options are rcp45, rcp85) ##' @param ensemble_member , r1i1p1 is the only ensemble member available for this dataset, CCSM4 uses r6i1p1 instead From 8796a3a2501180cecd34c7f3c31d577e57564958 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:28:51 -0400 Subject: [PATCH 2013/2289] Added param description --- modules/data.atmosphere/man/download.MACA.Rd | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/modules/data.atmosphere/man/download.MACA.Rd b/modules/data.atmosphere/man/download.MACA.Rd index 349c7a3352e..61ac08827e3 100644 --- a/modules/data.atmosphere/man/download.MACA.Rd +++ b/modules/data.atmosphere/man/download.MACA.Rd @@ -20,6 +20,8 @@ download.MACA( ) } \arguments{ +\item{outfolder}{location where output is stored} + \item{start_date}{, of the format "YEAR-01-01 00:00:00"} \item{end_date}{, of the format "YEAR-12-31 23:59:59"} @@ -29,6 +31,10 @@ download.MACA( \item{scenario}{, select which scenario to run (options are rcp45, rcp85)} \item{ensemble_member}{, r1i1p1 is the only ensemble member available for this dataset, CCSM4 uses r6i1p1 instead} + +\item{lat}{latitude of site} + +\item{lon}{longitude of site} } \description{ Download MACA CMIP5 outputs for a single grid point using OPeNDAP and convert to CF From 6f0bbc2c6463069b6ccc4fc5a88c41a4b38ad36c Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:30:38 -0400 Subject: [PATCH 2014/2289] Added param description --- modules/data.atmosphere/R/download.MsTMIP_NARR.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.MsTMIP_NARR.R b/modules/data.atmosphere/R/download.MsTMIP_NARR.R index 73937411785..2dd8ecf6054 100644 --- a/modules/data.atmosphere/R/download.MsTMIP_NARR.R +++ b/modules/data.atmosphere/R/download.MsTMIP_NARR.R @@ -2,7 +2,7 @@ ##' @name download.MsTMIP_NARR ##' @title download.MsTMIP_NARR ##' @export -##' @param outfolder +##' @param outfolder location where output is stored ##' @param start_date YYYY-MM-DD ##' @param end_date YYYY-MM-DD ##' @param lat decimal degrees [-90, 90] From 3c66b7cc3525ebc8b97e0d20d94a31168a57bb8c Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:31:30 -0400 Subject: [PATCH 2015/2289] Added param description --- modules/data.atmosphere/man/download.MsTMIP_NARR.Rd | 2 ++ 1 file changed, 2 insertions(+) diff --git a/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd b/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd index 3ebaeba41ae..61afc87a566 100644 --- a/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd +++ b/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd @@ -17,6 +17,8 
@@ download.MsTMIP_NARR(
 )
 }
 \arguments{
+\item{outfolder}{location where output is stored}
+
 \item{start_date}{YYYY-MM-DD}

 \item{end_date}{YYYY-MM-DD}

From 43d2dddb5848a43a90465c0177e7d2b69c6624 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:34:13 -0400
Subject: [PATCH 2016/2289] Added param description

---
 modules/data.atmosphere/R/download.NARR.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/R/download.NARR.R b/modules/data.atmosphere/R/download.NARR.R
index 364d3110dc6..c92bbbd1d93 100644
--- a/modules/data.atmosphere/R/download.NARR.R
+++ b/modules/data.atmosphere/R/download.NARR.R
@@ -1,8 +1,8 @@
 ##' Download NARR files
 ##'
-##' @param outfolder
-##' @param start_year
-##' @param end_year
+##' @param outfolder location where output is stored
+##' @param start_year desired start year YYYY-MM-DD
+##' @param end_year desired end year YYYY-MM-DD
 ##' @param overwrite Overwrite existing files? Default=FALSE
 ##' @param verbose Turn on verbose output? Default=FALSE
 ##' @param method Method of file retrieval. Can set this using the options(download.ftp.method=[method]) in your Rprofile.

From b8097f77a1da6cf3f0a0dd596d10a3dbd8ed0302 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:35:07 -0400
Subject: [PATCH 2017/2289] Added param description

---
 modules/data.atmosphere/man/download.NARR.Rd | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/modules/data.atmosphere/man/download.NARR.Rd b/modules/data.atmosphere/man/download.NARR.Rd
index 82614526ea9..38b02566889 100644
--- a/modules/data.atmosphere/man/download.NARR.Rd
+++ b/modules/data.atmosphere/man/download.NARR.Rd
@@ -15,12 +15,18 @@ download.NARR(
 )
 }
 \arguments{
+\item{outfolder}{location where output is stored}
+
 \item{overwrite}{Overwrite existing files? Default=FALSE}

 \item{verbose}{Turn on verbose output? Default=FALSE}

 \item{method}{Method of file retrieval. Can set this using the options(download.ftp.method=[method]) in your Rprofile.
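As the NARR documentation above notes, the FTP retrieval program is picked up from a session-level option; a minimal sketch of setting it (the `ncftpget` value is the one given in these docs and assumes that program is installed):

```r
# Set once per session, e.g. in ~/.Rprofile, before calling download.NARR()
options(download.ftp.method = "ncftpget")
```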
example options(download.ftp.method="ncftpget")}
+
+\item{start_year}{desired start year YYYY-MM-DD}
+
+\item{end_year}{desired end year YYYY-MM-DD}
 }
 \description{
 Download NARR files

From 45ea27f78d9a50f6eb7b4b2f4bf0524b0d3a7ba4 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:36:33 -0400
Subject: [PATCH 2018/2289] Added param description

---
 modules/data.atmosphere/R/download.NLDAS.R | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/modules/data.atmosphere/R/download.NLDAS.R b/modules/data.atmosphere/R/download.NLDAS.R
index c8a2fe00661..92c8ac7d3ac 100644
--- a/modules/data.atmosphere/R/download.NLDAS.R
+++ b/modules/data.atmosphere/R/download.NLDAS.R
@@ -2,12 +2,12 @@
 ##'
 ##' Download and convert single grid point NLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface
 ##'
-##' @param outfolder
-##' @param start_date
-##' @param end_date
-##' @param site_id
-##' @param lat
-##' @param lon
+##' @param outfolder location of output
+##' @param start_date desired start date YYYY-MM-DD
+##' @param end_date desired end date YYYY-MM-DD
+##' @param site_id site id (BETY)
+##' @param lat latitude of site
+##' @param lon longitude of site
 ##' @export
 ##'
 ##' @author Christy Rollinson (with help from Ankur Desai)

From 68cd0750e0161b89565968980e080e5c443c5503 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:38:32 -0400
Subject: [PATCH 2019/2289] Added param description

---
 modules/data.atmosphere/man/download.NLDAS.Rd | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/modules/data.atmosphere/man/download.NLDAS.Rd b/modules/data.atmosphere/man/download.NLDAS.Rd
index 8e1880b7507..80a088ba23c 100644
--- a/modules/data.atmosphere/man/download.NLDAS.Rd
+++ b/modules/data.atmosphere/man/download.NLDAS.Rd
@@ -16,6 +16,19 @@ download.NLDAS(
 ...
 )
 }
+\arguments{
+\item{outfolder}{location of output}
+
+\item{start_date}{desired start date YYYY-MM-DD}
+
+\item{end_date}{desired end date YYYY-MM-DD}
+
+\item{site_id}{site id (BETY)}
+
+\item{lat}{latitude of site}
+
+\item{lon}{longitude of site}
+}
 \description{
 Download and convert single grid point NLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface
 }

From 05ca6ece06f989f21e104f96d7cc9a431489bb0c Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 17:40:06 -0400
Subject: [PATCH 2020/2289] Added param description

---
 modules/data.atmosphere/R/download.PalEON.R | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/R/download.PalEON.R b/modules/data.atmosphere/R/download.PalEON.R
index f6bc4261833..c42e3c1fc01 100644
--- a/modules/data.atmosphere/R/download.PalEON.R
+++ b/modules/data.atmosphere/R/download.PalEON.R
@@ -3,9 +3,9 @@
 ##' @name download.PalEON
 ##' @title download.PalEON
 ##' @export
-##' @param outfolder
-##' @param start_date
-##' @param end_date
+##' @param outfolder desired output location
+##' @param start_date desired start date YYYY-MM-DD
+##' @param end_date desired end date YYYY-MM-DD
 ##'
 ##' @author Betsy Cowdery
 download.PalEON <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE, ...)
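Several of the downloaders documented in this series share the same calling pattern; a hedged sketch for `download.NLDAS`, using the argument names the later Roxygen fixes settle on (`lat.in`/`lon.in`), with all paths, dates, ids, and coordinates as placeholders rather than values from the source:

```r
# Hypothetical single-grid-point NLDAS request over the OPeNDAP interface,
# following the parameter documentation in the patches above.
res <- PEcAn.data.atmosphere::download.NLDAS(
  outfolder  = "/tmp/nldas",    # assumed output directory
  start_date = "2005-01-01",
  end_date   = "2005-12-31",
  site_id    = 772,             # assumed BETY site id
  lat.in     = 42.54,           # assumed site coordinates
  lon.in     = -72.17
)
```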
{ From 5d9dfba4c4125f33be0d3fe510a9a109785c4b05 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:40:52 -0400 Subject: [PATCH 2021/2289] Added param description --- modules/data.atmosphere/man/download.PalEON.Rd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/download.PalEON.Rd b/modules/data.atmosphere/man/download.PalEON.Rd index 437ee22391c..f3aca1bfc1c 100644 --- a/modules/data.atmosphere/man/download.PalEON.Rd +++ b/modules/data.atmosphere/man/download.PalEON.Rd @@ -14,7 +14,11 @@ download.PalEON( ) } \arguments{ -\item{end_date}{} +\item{outfolder}{desired output location} + +\item{start_date}{desired start date YYYY-MM-DD} + +\item{end_date}{desired end date YYYY-MM-DD} } \description{ Download PalEON files From 1bed525acebfa7670551ad5731fe5868b8f826d9 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:41:58 -0400 Subject: [PATCH 2022/2289] Added param description --- modules/data.atmosphere/R/download.PalEON_ENS.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/data.atmosphere/R/download.PalEON_ENS.R b/modules/data.atmosphere/R/download.PalEON_ENS.R index 9f2ed487bbd..66a87763005 100644 --- a/modules/data.atmosphere/R/download.PalEON_ENS.R +++ b/modules/data.atmosphere/R/download.PalEON_ENS.R @@ -1,9 +1,9 @@ ##' @title Download PalEON met ensemble files ##' ##' @export -##' @param outfolder -##' @param start_date -##' @param end_date +##' @param outfolder desired output folder +##' @param start_date desired start date YYYY-MM-DD +##' @param end_date desired end date YYYY-MM-DD ##' ##' @author Betsy Cowdery, Mike Dietze download.PalEON_ENS <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE, ...) 
{ From 854bb6df8b9086661dc5719061e5a4853c50c196 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:43:00 -0400 Subject: [PATCH 2023/2289] Added param description --- modules/data.atmosphere/man/download.PalEON_ENS.Rd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/man/download.PalEON_ENS.Rd b/modules/data.atmosphere/man/download.PalEON_ENS.Rd index a9e28836c71..6c7e22055b7 100644 --- a/modules/data.atmosphere/man/download.PalEON_ENS.Rd +++ b/modules/data.atmosphere/man/download.PalEON_ENS.Rd @@ -14,7 +14,11 @@ download.PalEON_ENS( ) } \arguments{ -\item{end_date}{} +\item{outfolder}{desired output folder} + +\item{start_date}{desired start date YYYY-MM-DD} + +\item{end_date}{desired end date YYYY-MM-DD} } \description{ Download PalEON met ensemble files From 98a4dd4cf0dbf0e353feaac1a38eab909a27d499 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:44:00 -0400 Subject: [PATCH 2024/2289] Deleted @examples --- modules/data.atmosphere/R/download.raw.met.module.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index d6ae2af96ec..c7334b243f4 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -26,7 +26,7 @@ #' #' @export #' -#' @examples +#' .download.raw.met.module <- function(dir, From ff1522a1addc834b18c757b43feebd724f0f0755 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Tue, 29 Jun 2021 17:46:52 -0400 Subject: [PATCH 2025/2289] Deleted @examples --- modules/data.atmosphere/R/extract_local_CMIP5.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/extract_local_CMIP5.R b/modules/data.atmosphere/R/extract_local_CMIP5.R index 7ce9c72e952..45952de9d8a 100644 --- a/modules/data.atmosphere/R/extract_local_CMIP5.R +++ b/modules/data.atmosphere/R/extract_local_CMIP5.R @@ -29,7 +29,7 @@ ##' @param verbose logical. to control printing of debug info ##' @param ... 
Other arguments, currently ignored
 ##' @export
-##' @examples
+##'
 # -----------------------------------
 extract.local.CMIP5 <- function(outfolder, in.path, start_date, end_date, lat.in, lon.in, model , scenario , ensemble_member = "r1i1p1", date.origin=NULL, adjust.pr=1,

From 31ff7983672252e8fa0eb818e49b72a6106227bd Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Tue, 29 Jun 2021 18:44:42 -0400
Subject: [PATCH 2026/2289] Changed relative humidity to be 1 if >1

---
 modules/data.atmosphere/R/half_hour_downscale.R | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/modules/data.atmosphere/R/half_hour_downscale.R b/modules/data.atmosphere/R/half_hour_downscale.R
index d3d46b50b79..7272185205f 100644
--- a/modules/data.atmosphere/R/half_hour_downscale.R
+++ b/modules/data.atmosphere/R/half_hour_downscale.R
@@ -61,11 +61,8 @@ temporal_downscale_half_hour <- function(input_file, output_file, overwrite = TR
   # Convert splined SH, temperature, and pressure to RH
   forecast_noaa_ds <- forecast_noaa_ds %>%
-    dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity,
-                                              temp = forecast_noaa_ds$air_temperature,
-                                              press = forecast_noaa_ds$air_pressure)) %>%
-    dplyr::mutate(.data$relative_humidity = .data$relative_humidity,
-                  relative_humidity = ifelse(.data$relative_humidity > 1, 0, .data$relative_humidity))
+    dplyr::mutate(relative_humidity = qair2rh(qair = forecast_noaa_ds$specific_humidity, temp = forecast_noaa_ds$air_temperature, press = forecast_noaa_ds$air_pressure)) %>%
+    dplyr::mutate(relative_humidity = ifelse(.data$relative_humidity > 1, 1, .data$relative_humidity))

   # convert longwave to hourly (just copy 6 hourly values over past 6-hour time period)
   if("surface_downwelling_longwave_flux_in_air" %in% cf_var_names){

From b9143c675cfff6bd6fa733d876d2966304c26c89 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Wed, 30 Jun 2021 13:52:47 +0530
Subject: [PATCH 2027/2289] Update DESCRIPTION

---
 base/logger/DESCRIPTION | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION
index 8d2cd9ab22d..931a5e4b650 100644
--- a/base/logger/DESCRIPTION
+++
--- base/logger/R/logger.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/logger/R/logger.R b/base/logger/R/logger.R index 9feee614b11..b8c4cd1ae81 100644 --- a/base/logger/R/logger.R +++ b/base/logger/R/logger.R @@ -140,11 +140,11 @@ logger.severe <- function(msg, ..., wrap = TRUE) { ##' } logger.message <- function(level, msg, ..., wrap = TRUE) { if (logger.getLevelNumber(level) >= .utils.logger$level) { - dump.frames(dumpto = "dump.log") + utils::dump.frames(dumpto = "dump.log") calls <- names(dump.log) calls <- calls[!grepl("^(#[0-9]+: )?(PEcAn\\.logger::)?logger", calls)] calls <- calls[!grepl("(severe|error|warn|info|debug)ifnot", calls)] - func <- sub("\\(.*", "", tail(calls, 1)) + func <- sub("\\(.*", "", utils::tail(calls, 1)) if (length(func) == 0) { func <- "console" } From d89c1e12a35b94aa101deb80ef7f78bc9f081f82 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 30 Jun 2021 14:05:13 +0530 Subject: [PATCH 2029/2289] Update print2string.R --- base/logger/R/print2string.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/R/print2string.R b/base/logger/R/print2string.R index 8eadea51dd3..67089ec6f8c 100644 --- a/base/logger/R/print2string.R +++ b/base/logger/R/print2string.R @@ -17,6 +17,6 @@ #' logger.debug("Current status:\n", print2string(df, row.names = FALSE), wrap = FALSE) #' @export print2string <- function(x, ...) { - cout <- capture.output(print(x, ...)) + cout <- utils::capture.output(print(x, ...)) paste(cout, collapse = "\n") } From 67c0bf79858272959cf76bd1aea977f1851d8380 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 30 Jun 2021 14:07:12 +0530 Subject: [PATCH 2030/2289] Update logifnot.R fixes NOTE- `* checking Rd line widths ... NOTE Rd file 'severeifnot.Rd': \examples lines wider than 100 characters: severeifnot("I absolutely cannot deal with the fact that something is not a list.", is.list(a), is.list(b))` --- base/logger/R/logifnot.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index b368728ce5a..4c4dae08911 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -18,7 +18,7 @@ #' warnifnot("I would prefer it if you used lists.", is.list(a), is.list(b)) #' errorifnot("You should definitely use lists.", is.list(a), is.list(b)) #' try({ -#' severeifnot("I absolutely cannot deal with the fact that something is not a list.", is.list(a), is.list(b)) +#' severeifnot("I cannot deal with the fact that something is not a list.", is.list(a), is.list(b)) #' }) #' @export severeifnot <- function(msg, ...) { From f43f4165f1c3426adf0269562022cae1f0f823f9 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 14:52:59 -0400 Subject: [PATCH 2031/2289] Added missing Roxygen values --- modules/data.atmosphere/R/debias.met.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/debias.met.R b/modules/data.atmosphere/R/debias.met.R index b4d666fddec..52265855bca 100644 --- a/modules/data.atmosphere/R/debias.met.R +++ b/modules/data.atmosphere/R/debias.met.R @@ -6,13 +6,15 @@ substrRight <- function(x, n) { ##' @name debias_met ##' @title debias_met ##' @export +##' ##' @param outfolder location where output is stored ##' @param input_met - the source_met dataset that will be altered by the training dataset in NC format. 
##' @param train_met - the observed dataset that will be used to train the modeled dataset in NC format
 ##' @param de_method - select which debias method you would like to use, options are 'normal', 'linear regression'
-##' @param site.id site id
 ##' @param overwrite logical: replace output file if it already exists? Currently ignored.
 ##' @param verbose logical: should \code{\link[ncdf4:ncdf4-package]{ncdf4}}
+##' @param site_id BETY site id
+##' @param ... other inputs
 ##' functions print debugging information as they run?
 ##' @author James Simkins
 debias.met <- function(outfolder, input_met, train_met, site_id, de_method = "linear",

From c3f0b5c31c05c51e52265855caXX Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 14:54:44 -0400
Subject: [PATCH 2032/2289] Added missing Roxygen values

---
 modules/data.atmosphere/man/debias_met.Rd | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/man/debias_met.Rd b/modules/data.atmosphere/man/debias_met.Rd
index 142c0c422fd..289f48ae67d 100644
--- a/modules/data.atmosphere/man/debias_met.Rd
+++ b/modules/data.atmosphere/man/debias_met.Rd
@@ -23,14 +23,16 @@ debias.met(
 \item{train_met}{- the observed dataset that will be used to train the modeled dataset in NC format}

+\item{site_id}{BETY site id}
+
 \item{de_method}{- select which debias method you would like to use, options are 'normal', 'linear regression'}

 \item{overwrite}{logical: replace output file if it already exists? Currently ignored.}

-\item{verbose}{logical: should \code{\link[ncdf4:ncdf4-package]{ncdf4}}
-functions print debugging information as they run?}
+\item{verbose}{logical: should \code{\link[ncdf4:ncdf4-package]{ncdf4}}}

-\item{site.id}{site id}
+\item{...}{other inputs
+functions print debugging information as they run?}
 }
 \description{
 debias.met takes input_met and debiases it based on statistics from a train_met dataset

From 067646d0609ec8f8d0871a8e92a06a5bfaf8c2f0 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 14:55:46 -0400
Subject: [PATCH 2033/2289] Added missing Roxygen values

---
 modules/data.atmosphere/R/download.FACE.R | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/modules/data.atmosphere/R/download.FACE.R b/modules/data.atmosphere/R/download.FACE.R
index 4bb521811d3..735571f5e88 100644
--- a/modules/data.atmosphere/R/download.FACE.R
+++ b/modules/data.atmosphere/R/download.FACE.R
@@ -3,12 +3,15 @@
 ##' @name download.FACE
 ##' @title download.FACE
 ##' @export
+##'
 ##' @param sitename sitename
 ##' @param outfolder location where output is stored
-##' @param start_year desired start year
-##' @param end_year desired end year
 ##' @param method Optional. Passed to download.file() function. Use this to set custom programs such as ncftp to use when
 ##' downloading files from FTP sites
+##' @param start_date desired start date YYYY-MM-DD
+##' @param end_date desired end date YYYY-MM-DD
+##' @param overwrite overwrite existing files? Default is FALSE
+##' @param ... other inputs
 ##'
 ##' @author Betsy Cowdery
 download.FACE <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE, method, ...)
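With the `debias.met` parameters now documented, a hedged call sketch may help; everything below other than the argument names and the `de_method` options is a placeholder:

```r
# Debias a modeled met record against a training (observed) dataset,
# following the signature shown in the patches above.
debias.met(
  outfolder = "/tmp/debias",     # assumed output directory
  input_met = "source_met.nc",   # dataset to be corrected (NetCDF)
  train_met = "tower_obs.nc",    # observations used for training (NetCDF)
  site_id   = 772,               # assumed BETY site id
  de_method = "linear"           # default in the signature; docs list 'normal' too
)
```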
{

From 248ce993d0c48126a4c5900443a51e2bd6174de Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 14:56:40 -0400
Subject: [PATCH 2034/2289] Added missing Roxygen values

---
 modules/data.atmosphere/man/download.FACE.Rd | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/modules/data.atmosphere/man/download.FACE.Rd b/modules/data.atmosphere/man/download.FACE.Rd
index b9ae6a518ab..2c6b6d583e4 100644
--- a/modules/data.atmosphere/man/download.FACE.Rd
+++ b/modules/data.atmosphere/man/download.FACE.Rd
@@ -19,12 +19,16 @@ download.FACE(
 \item{outfolder}{location where output is stored}

+\item{start_date}{desired start date YYYY-MM-DD}
+
+\item{end_date}{desired end date YYYY-MM-DD}
+
+\item{overwrite}{overwrite existing files? Default is FALSE}
+
 \item{method}{Optional. Passed to download.file() function.
 Use this to set custom programs such as ncftp to use when
 downloading files from FTP sites}

-\item{start_year}{desired start year}
-
-\item{end_year}{desired end year}
+\item{...}{other inputs}
 }
 \description{
 Download Raw FACE data from the internet

From 7c556e48fe287d504af91001d420705f2e4b78ce Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 14:57:38 -0400
Subject: [PATCH 2035/2289] Added missing Roxygen values

---
 modules/data.atmosphere/R/download.GLDAS.R | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/modules/data.atmosphere/R/download.GLDAS.R b/modules/data.atmosphere/R/download.GLDAS.R
index 71ba4f5974d..13971492f86 100644
--- a/modules/data.atmosphere/R/download.GLDAS.R
+++ b/modules/data.atmosphere/R/download.GLDAS.R
@@ -3,12 +3,16 @@
 ##' Download and convert single grid point GLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface
 ##'
 ##' @export
+##'
 ##' @param outfolder location where output is stored
 ##' @param start_date desired start date
 ##' @param end_date desired end date
 ##' @param site_id desired site id
 ##' @param lat.in latitude of site
 ##' @param lon.in longitude of site
+##' @param overwrite overwrite existing files? Default is FALSE
+##' @param verbose Default is FALSE, used as input for ncdf4::ncvar_def
+##' @param ... other inputs
 ##'
 ##' @author Christy Rollinson
 download.GLDAS <- function(outfolder, start_date, end_date, site_id, lat.in, lon.in,

From 2af05aa1e57b5201d0e0ee879c2701dc0d1106c0 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 14:58:30 -0400
Subject: [PATCH 2036/2289] Added missing Roxygen values

---
 modules/data.atmosphere/man/download.GLDAS.Rd | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/modules/data.atmosphere/man/download.GLDAS.Rd b/modules/data.atmosphere/man/download.GLDAS.Rd
index a105e1678d6..e33ff56b2a1 100644
--- a/modules/data.atmosphere/man/download.GLDAS.Rd
+++ b/modules/data.atmosphere/man/download.GLDAS.Rd
@@ -28,6 +28,12 @@ download.GLDAS(
 \item{lat.in}{latitude of site}

 \item{lon.in}{longitude of site}
+
+\item{overwrite}{overwrite existing files?
Default is FALSE} + +\item{verbose}{Default is FALSE, used as input for ncdf4::ncvar_def} + +\item{...}{other inputs} } \description{ Download and convert single grid point GLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface From 2fd4fddd0a897c206e47ca0124a561e47949bbe0 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 14:59:32 -0400 Subject: [PATCH 2037/2289] Added missing Roxygen values --- modules/data.atmosphere/R/download.MACA.R | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/download.MACA.R b/modules/data.atmosphere/R/download.MACA.R index 5f866b7bccb..f4bc93253fb 100644 --- a/modules/data.atmosphere/R/download.MACA.R +++ b/modules/data.atmosphere/R/download.MACA.R @@ -2,14 +2,19 @@ ##' @name download.MACA ##' @title download.MACA ##' @export +##' ##' @param outfolder location where output is stored ##' @param start_date , of the format "YEAR-01-01 00:00:00" ##' @param end_date , of the format "YEAR-12-31 23:59:59" -##' @param lat latitude of site -##' @param lon longitude of site ##' @param model , select which MACA model to run (options are BNU-ESM, CNRM-CM5, CSIRO-Mk3-6-0, bcc-csm1-1, bcc-csm1-1-m, CanESM2, GFDL-ESM2M, GFDL-ESM2G, HadGEM2-CC365, HadGEM2-ES365, inmcm4, MIROC5, MIROC-ESM, MIROC-ESM-CHEM, MRI-CGCM3, CCSM4, IPSL-CM5A-LR, IPSL-CM5A-MR, IPSL-CM5B-LR, NorESM1-M) ##' @param scenario , select which scenario to run (options are rcp45, rcp85) ##' @param ensemble_member , r1i1p1 is the only ensemble member available for this dataset, CCSM4 uses r6i1p1 instead +##' @param site_id BETY site id +##' @param lat.in latitude of site +##' @param lon.in longitude of site +##' @param overwrite overwrite existing files? Default is FALSE +##' @param verbose Default is FALSE, used as input in ncdf4::ncvar_def +##' @param ... other inputs ##' ##' @author James Simkins download.MACA <- function(outfolder, start_date, end_date, site_id, lat.in, lon.in, model='IPSL-CM5A-LR', scenario='rcp85', ensemble_member='r1i1p1', From 876bcb5b650892fd9ea29aed4ec5e003480d8bc9 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 15:04:45 -0400 Subject: [PATCH 2038/2289] Added missing Roxygen values --- modules/data.atmosphere/man/download.MACA.Rd | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/download.MACA.Rd b/modules/data.atmosphere/man/download.MACA.Rd index 61ac08827e3..3dfb425267f 100644 --- a/modules/data.atmosphere/man/download.MACA.Rd +++ b/modules/data.atmosphere/man/download.MACA.Rd @@ -26,15 +26,23 @@ download.MACA( \item{end_date}{, of the format "YEAR-12-31 23:59:59"} +\item{site_id}{BETY site id} + +\item{lat.in}{latitude of site} + +\item{lon.in}{longitude of site} + \item{model}{, select which MACA model to run (options are BNU-ESM, CNRM-CM5, CSIRO-Mk3-6-0, bcc-csm1-1, bcc-csm1-1-m, CanESM2, GFDL-ESM2M, GFDL-ESM2G, HadGEM2-CC365, HadGEM2-ES365, inmcm4, MIROC5, MIROC-ESM, MIROC-ESM-CHEM, MRI-CGCM3, CCSM4, IPSL-CM5A-LR, IPSL-CM5A-MR, IPSL-CM5B-LR, NorESM1-M)} \item{scenario}{, select which scenario to run (options are rcp45, rcp85)} \item{ensemble_member}{, r1i1p1 is the only ensemble member available for this dataset, CCSM4 uses r6i1p1 instead} -\item{lat}{latitude of site} +\item{overwrite}{overwrite existing files? 
Default is FALSE} + +\item{verbose}{Default is FALSE, used as input in ncdf4::ncvar_def} -\item{lon}{longitude of site} +\item{...}{other inputs} } \description{ Download MACA CMIP5 outputs for a single grid point using OPeNDAP and convert to CF From 796593ce8f288a7e6cc761cee9fece5e6efc6214 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 15:05:59 -0400 Subject: [PATCH 2039/2289] Added missing Roxygen values --- modules/data.atmosphere/R/download.NARR.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/download.NARR.R b/modules/data.atmosphere/R/download.NARR.R index c92bbbd1d93..19e117042d6 100644 --- a/modules/data.atmosphere/R/download.NARR.R +++ b/modules/data.atmosphere/R/download.NARR.R @@ -1,11 +1,12 @@ ##' Download NARR files ##' ##' @param outfolder location where output is stored -##' @param start_year desired start year YYYY-MM-DD -##' @param end_year desired end year YYYY-MM-DD ##' @param overwrite Overwrite existing files? Default=FALSE ##' @param verbose Turn on verbose output? Default=FALSE ##' @param method Method of file retrieval. Can set this using the options(download.ftp.method=[method]) in your Rprofile. +##' @param start_date desired start date YYYY-MM-DD +##' @param end_date desired end date YYYY-MM-DD +##' @param ... other inputs ##' example options(download.ftp.method="ncftpget") ##' @importFrom magrittr %>% ##' From 2ae67953d89ec87138ab6393036a0bf883bb171e Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 15:06:45 -0400 Subject: [PATCH 2040/2289] Added missing Roxygen values --- modules/data.atmosphere/man/download.NARR.Rd | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/modules/data.atmosphere/man/download.NARR.Rd b/modules/data.atmosphere/man/download.NARR.Rd index 38b02566889..baa71aec13f 100644 --- a/modules/data.atmosphere/man/download.NARR.Rd +++ b/modules/data.atmosphere/man/download.NARR.Rd @@ -17,16 +17,18 @@ download.NARR( \arguments{ \item{outfolder}{location where output is stored} +\item{start_date}{desired start date YYYY-MM-DD} + +\item{end_date}{desired end date YYYY-MM-DD} + \item{overwrite}{Overwrite existing files? Default=FALSE} \item{verbose}{Turn on verbose output? Default=FALSE} -\item{method}{Method of file retrieval. Can set this using the options(download.ftp.method=[method]) in your Rprofile. -example options(download.ftp.method="ncftpget")} +\item{method}{Method of file retrieval. 
Can set this using the options(download.ftp.method=[method]) in your Rprofile.} -\item{start_year}{desired start year YYYY-MM-DD} - -\item{end_year}{desired end year YYYY-MM-DD} +\item{...}{other inputs +example options(download.ftp.method="ncftpget")} } \description{ Download NARR files From 8bcec40908a34703594af44467d4f3c8634758f1 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 15:08:03 -0400 Subject: [PATCH 2041/2289] Added missing Roxygen values --- modules/data.atmosphere/R/download.NLDAS.R | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/download.NLDAS.R b/modules/data.atmosphere/R/download.NLDAS.R index 92c8ac7d3ac..7e028ea5fe3 100644 --- a/modules/data.atmosphere/R/download.NLDAS.R +++ b/modules/data.atmosphere/R/download.NLDAS.R @@ -5,9 +5,13 @@ ##' @param outfolder location of output ##' @param start_date desired start date YYYY-MM-DD ##' @param end_date desired end date YYYY-MM-DD +##' @param lat.in latitude of site +##' @param lon.in longitude of site +##' @param overwrite overwrite existing files? Default is FALSE +##' @param verbose Turn on verbose output? Default=FALSE +##' @param ... Other inputs ##' @param site_id site id (BETY) -##' @param lat latitude of site -##' @param lon longitude of site +##' ##' @export ##' ##' @author Christy Rollinson (with help from Ankur Desai) From 5547bf8a6a74e2018279d5a28b0b7c4397b866b1 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 15:09:21 -0400 Subject: [PATCH 2042/2289] Added missing Roxygen values --- modules/data.atmosphere/man/download.NLDAS.Rd | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/man/download.NLDAS.Rd b/modules/data.atmosphere/man/download.NLDAS.Rd index 80a088ba23c..80684fdcad1 100644 --- a/modules/data.atmosphere/man/download.NLDAS.Rd +++ b/modules/data.atmosphere/man/download.NLDAS.Rd @@ -25,9 +25,15 @@ download.NLDAS( \item{site_id}{site id (BETY)} -\item{lat}{latitude of site} +\item{lat.in}{latitude of site} -\item{lon}{longitude of site} +\item{lon.in}{longitude of site} + +\item{overwrite}{overwrite existing files? Default is FALSE} + +\item{verbose}{Turn on verbose output? Default=FALSE} + +\item{...}{Other inputs} } \description{ Download and convert single grid point NLDAS to CF single grid point from hydro1.sci.gsfc.nasa.gov using OPENDAP interface From 23e45772f963f020f57576fdb354766d95945679 Mon Sep 17 00:00:00 2001 From: Alexis Helgeson Date: Wed, 30 Jun 2021 15:10:59 -0400 Subject: [PATCH 2043/2289] Added missing Roxygen values --- modules/data.atmosphere/R/download.PalEON.R | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/modules/data.atmosphere/R/download.PalEON.R b/modules/data.atmosphere/R/download.PalEON.R index c42e3c1fc01..50a55ffe9c5 100644 --- a/modules/data.atmosphere/R/download.PalEON.R +++ b/modules/data.atmosphere/R/download.PalEON.R @@ -3,9 +3,13 @@ ##' @name download.PalEON ##' @title download.PalEON ##' @export +##' ##' @param outfolder desired output location ##' @param start_date desired start date YYYY-MM-DD ##' @param end_date desired end date YYYY-MM-DD +##' @param sitename sitename +##' @param overwrite overwrite existing files? Default is FALSE +##' @param ... Other inputs ##' ##' @author Betsy Cowdery download.PalEON <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE, ...) 
{

From d9f87030d579b54a988eb7d3b88c5cd29f26c30a Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 15:11:43 -0400
Subject: [PATCH 2044/2289] Added missing Roxygen values

---
 modules/data.atmosphere/man/download.PalEON.Rd | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/modules/data.atmosphere/man/download.PalEON.Rd b/modules/data.atmosphere/man/download.PalEON.Rd
index f3aca1bfc1c..969771e593d 100644
--- a/modules/data.atmosphere/man/download.PalEON.Rd
+++ b/modules/data.atmosphere/man/download.PalEON.Rd
@@ -14,11 +14,17 @@ download.PalEON(
 )
 }
 \arguments{
+\item{sitename}{sitename}
+
 \item{outfolder}{desired output location}

 \item{start_date}{desired start date YYYY-MM-DD}

 \item{end_date}{desired end date YYYY-MM-DD}
+
+\item{overwrite}{overwrite existing files? Default is FALSE}
+
+\item{...}{Other inputs}
 }
 \description{
 Download PalEON files

From 54caecda67c57f9398eb238835fb2add2e56a22c Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 15:12:42 -0400
Subject: [PATCH 2045/2289] Added missing Roxygen values

---
 modules/data.atmosphere/R/download.PalEON_ENS.R | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/modules/data.atmosphere/R/download.PalEON_ENS.R b/modules/data.atmosphere/R/download.PalEON_ENS.R
index 66a87763005..2858b6a4447 100644
--- a/modules/data.atmosphere/R/download.PalEON_ENS.R
+++ b/modules/data.atmosphere/R/download.PalEON_ENS.R
@@ -1,9 +1,13 @@
 ##' @title Download PalEON met ensemble files
 ##'
 ##' @export
+##'
 ##' @param outfolder desired output folder
 ##' @param start_date desired start date YYYY-MM-DD
 ##' @param end_date desired end date YYYY-MM-DD
+##' @param sitename sitename
+##' @param overwrite overwrite existing files? Default is FALSE
+##' @param ... Other inputs
 ##'
 ##' @author Betsy Cowdery, Mike Dietze
 download.PalEON_ENS <- function(sitename, outfolder, start_date, end_date, overwrite = FALSE, ...) {

From 211959dc5a12e0734812c3949ad44c54daf7c883 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 15:13:28 -0400
Subject: [PATCH 2046/2289] Added missing Roxygen values

---
 modules/data.atmosphere/man/download.PalEON_ENS.Rd | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/man/download.PalEON_ENS.Rd b/modules/data.atmosphere/man/download.PalEON_ENS.Rd
index a9e28836c71..915a2e33aa5 100644
--- a/modules/data.atmosphere/man/download.PalEON_ENS.Rd
+++ b/modules/data.atmosphere/man/download.PalEON_ENS.Rd
@@ -14,11 +14,17 @@ download.PalEON_ENS(
 )
 }
 \arguments{
+\item{sitename}{sitename}
+
 \item{outfolder}{desired output folder}

 \item{start_date}{desired start date YYYY-MM-DD}

 \item{end_date}{desired end date YYYY-MM-DD}
+
+\item{overwrite}{overwrite existing files?
Default is FALSE}
+
+\item{...}{Other inputs}
 }
 \description{
 Download PalEON met ensemble files

From a28fda68a6f82b5472cf16be9758f99c7071cbbc Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 15:59:42 -0400
Subject: [PATCH 2047/2289] Added missing Roxygen values

---
 modules/data.atmosphere/R/download.MsTMIP_NARR.R | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/modules/data.atmosphere/R/download.MsTMIP_NARR.R b/modules/data.atmosphere/R/download.MsTMIP_NARR.R
index 2dd8ecf6054..5a0c66e86f8 100644
--- a/modules/data.atmosphere/R/download.MsTMIP_NARR.R
+++ b/modules/data.atmosphere/R/download.MsTMIP_NARR.R
@@ -2,11 +2,16 @@
 ##' @name download.MsTMIP_NARR
 ##' @title download.MsTMIP_NARR
 ##' @export
+##'
 ##' @param outfolder location where output is stored
 ##' @param start_date YYYY-MM-DD
+##' @param site_id BETY site id
+##' @param lat.in latitude of site
+##' @param lon.in longitude of site
+##' @param overwrite overwrite existing files? Default is FALSE
+##' @param verbose Default is FALSE, used in ncdf4::ncvar_def
+##' @param ... Other inputs
 ##' @param end_date YYYY-MM-DD
-##' @param lat decimal degrees [-90, 90]
-##' @param lon decimal degrees [-180, 180]
 ##'
 ##' @author James Simkins
 download.MsTMIP_NARR <- function(outfolder, start_date, end_date, site_id, lat.in, lon.in,

From 65f38c43a27940b800de619ddfa4123275ea70c5 Mon Sep 17 00:00:00 2001
From: Alexis Helgeson
Date: Wed, 30 Jun 2021 16:00:34 -0400
Subject: [PATCH 2048/2289] Added missing Roxygen values

---
 modules/data.atmosphere/man/download.MsTMIP_NARR.Rd | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd b/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd
index 61afc87a566..03867540801 100644
--- a/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd
+++ b/modules/data.atmosphere/man/download.MsTMIP_NARR.Rd
@@ -23,9 +23,17 @@ download.MsTMIP_NARR(
 \item{end_date}{YYYY-MM-DD}

-\item{lat}{decimal degrees [-90, 90]}
+\item{site_id}{BETY site id}

-\item{lon}{decimal degrees [-180, 180]}
+\item{lat.in}{latitude of site}
+
+\item{lon.in}{longitude of site}
+
+\item{overwrite}{overwrite existing files? Default is FALSE}
+
+\item{verbose}{Default is FALSE, used in ncdf4::ncvar_def}
+
+\item{...}{Other inputs}
 }
 \description{
 Download and convert to CF NARR single grid point from MsTMIP server using OPENDAP interface

From 9632c823f5ac1d7b58bc8ed7717c53581798e9e3 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Thu, 1 Jul 2021 12:07:50 +0530
Subject: [PATCH 2049/2289] Update base/logger/DESCRIPTION

Co-authored-by: Chris Black

---
 base/logger/DESCRIPTION | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION
index 931a5e4b650..271c95b44bc 100644
--- a/base/logger/DESCRIPTION
+++ b/base/logger/DESCRIPTION
@@ -5,7 +5,7 @@ Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "kooper@illinois.edu"),
              person("Alexey", "Shiklomanov", role = c("aut"), email = "ashiklom@bu.edu"))
 Description: Special logger functions for tracking execution status and the environment.
-imports: utils
+Imports: utils
 Suggests: testthat
 License: BSD_3_clause + file LICENSE
 Encoding: UTF-8

From f697fb0be2005ef104e1aa4c55ec8e4d3df20dca Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Thu, 1 Jul 2021 12:09:52 +0530
Subject: [PATCH 2050/2289] Update DESCRIPTION

---
 base/logger/DESCRIPTION | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION
index 271c95b44bc..ee4358b2330 100644
--- a/base/logger/DESCRIPTION
+++ b/base/logger/DESCRIPTION
@@ -3,7 +3,8 @@ Title: Logger functions for PEcAn
 Version: 1.7.1
 Date: 2019-09-05
 Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "kooper@illinois.edu"),
-             person("Alexey", "Shiklomanov", role = c("aut"), email = "ashiklom@bu.edu"))
+             person("Alexey", "Shiklomanov", role = c("aut"), email = "ashiklom@bu.edu"),
+             person("Shashank", "Singh", role = c("aut"), email = "shashanksingh819@gmail.com"))
 Description: Special logger functions for tracking execution status and the environment.
 Imports: utils
 Suggests: testthat

From a0104242c5b4b11112626309f7077be7adc98e7a Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Thu, 1 Jul 2021 14:25:36 +0530
Subject: [PATCH 2051/2289] Update logifnot.R
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

fix warning - Undocumented arguments in documentation object 'check_conditions' '...'

---
 base/logger/R/logifnot.R | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R
index b368728ce5a..8d02ac27dad 100644
--- a/base/logger/R/logifnot.R
+++ b/base/logger/R/logifnot.R
@@ -1,14 +1,15 @@
 #' Logger message if conditions are not met
 #'
-#' Similar to [base::stopifnot], but allows you to use a custom message and
+#' Similar to [base::stopifnot], but allows you to use a custom message and
 #' logger level. If all conditions are `TRUE`, silently exit.
 #'
-#' Conditions can be vectorized, or can return non-logical values.The
-#' underlying function automatically applies `isTRUE(all(.))` to the
+#' Conditions can be vectorized, or can return non-logical values.The
+#' underlying function automatically applies `isTRUE(all(.))` to the
 #' conditions.
 #'
 #' @param msg Logger message to write, as a single character string.
-#' @param ... Conditions to evaluate
+#' @param ... Conditions to evaluate
+#' @param ... other arguments passed on to severeifnot
 #' @return Invisibly, `TRUE` if conditions are met, `FALSE` otherwise
 #' @examples
 #' a <- 1:5
@@ -30,6 +31,7 @@ severeifnot <- function(msg, ...) {
 }

 #' @rdname severeifnot
+#' @param ... other arguments passed on to errorifnot
 #' @export
 errorifnot <- function(msg, ...) {
   if (!check_conditions(...)) {
@@ -41,6 +43,7 @@ errorifnot <- function(msg, ...) {
 }

 #' @rdname severeifnot
+#' @param ... other arguments passed on to warnifnot
 #' @export
 warnifnot <- function(msg, ...) {
   if (!check_conditions(...)) {
@@ -52,6 +55,7 @@ warnifnot <- function(msg, ...) {
 }

 #' @rdname severeifnot
+#' @param ... other arguments passed on to infoifnot
 #' @export
 infoifnot <- function(msg, ...) {
   if (!check_conditions(...)) {
@@ -63,6 +67,7 @@ infoifnot <- function(msg, ...) {
 }

 #' @rdname severeifnot
+#' @param ... other arguments passed on to debugifnot
 #' @export
 debugifnot <- function(msg, ...)
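The `*ifnot` helpers being documented above all share the same message-plus-conditions pattern; their own Roxygen examples reduce to usage like this:

```r
# Adapted from the roxygen examples in the patch above: each helper
# logs `msg` at its own level when any condition is not TRUE.
a <- 1:5
b <- as.list(1:3)
PEcAn.logger::warnifnot("I would prefer it if you used lists.", is.list(a), is.list(b))
PEcAn.logger::errorifnot("You should definitely use lists.", is.list(a), is.list(b))
```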
{
 }
 
 #' Check a list of conditions
+#’ @param ... other arguments passed on to check_conditions
 check_conditions <- function(...) {
   dots <- list(...)
   conditions <- vapply(dots, is_definitely_true, logical(1))

From a0104242c5b4b11112626309f7077be7adc98e7a Mon Sep 17 00:00:00 2001
From: istfer
Date: Thu, 1 Jul 2021 14:17:05 +0300
Subject: [PATCH 2052/2289] started vignette

---
 .../ICOS_Drought2018_Vignette.Rmd | 57 +++++++++++++++++++
 .../extfiles/ICOS_Drought2018_Sites.csv | 16 ++++++
 2 files changed, 73 insertions(+)
 create mode 100644 documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
 create mode 100644 documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv

diff --git a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
new file mode 100644
index 00000000000..f7fa7a3b34a
--- /dev/null
+++ b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
@@ -0,0 +1,57 @@
+---
+title: "ICOS Drought2018 Vignette"
+author:
+- "Ayush Prasad"
+- "Istem Fer"
+date: "July 1, 2021"
+---
+
+
+
+```{r setup, include=FALSE}
+knitr::opts_chunk$set(echo = TRUE)
+```
+
+## Drought-2018 ecosystem eddy covariance flux product
+
+The Drought-2018 ecosystem eddy covariance flux product is a release of the observational data product for eddy covariance fluxes at 52 stations from the Drought-2018 team, covering the period 1989-2018.
+
+For details see <https://www.icos-cp.eu/data-products/YVR0-4898>.
+
+The following sites and years are available within this dataset:
+
+```{r icos_d2018, echo=FALSE}
+library(ggplot2)
+icos_d2018 <- read.csv('extfiles/ICOS_Drought2018_Sites.csv')
+knitr::kable(icos_d2018)
+world_map <- ggplot2::map_data("world")
+`%>%` <- dplyr::`%>%`
+europe_cropped <-
+  world_map %>% dplyr::filter((
+    world_map$lat >= 30 & world_map$lat <= 75 &
+    world_map$long >= -10 & world_map$long <= 40
+  ))
+# Compute the centroid as the mean longitude and latitude
+# Used as label coordinate for countries' names
+icos_data <- europe_cropped %>%
+  dplyr::group_by(region) %>%
+  dplyr::summarise(long = mean(long), lat = mean(lat))
+ggplot(europe_cropped, aes(x = long, y = lat)) +
+  geom_polygon(aes( group = group, color = region),fill="grey88")+
+  scale_colour_manual(values=rep("black",length(unique(europe_cropped$region))),
+                      guide=FALSE) +
+  theme_void()+
+  theme(legend.position = "none")+
+  geom_point(data=icos_d2018, aes(x=Longitude, y=Latitude))
+
+```
+
+To download Drought2018 files we use the `download.ICOS` function in PEcAn.
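+
+The function takes the ICOS site id, the folder the output should be written
+to, and the requested date range, and returns a data frame with information
+(file path, format name, start and end dates) about the extracted half-hourly
+CSV file:
+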
+```{r download, eval=FALSE} +sitename <- "FI-Sii" +outfolder <- "~/" +start_date <- "2016-01-01" +end_date <- "2018-01-01" +PEcAn.data.atmosphere::download.ICOS(sitename, outfolder, start_date, end_date) + +``` diff --git a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv new file mode 100644 index 00000000000..9630ecd338e --- /dev/null +++ b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv @@ -0,0 +1,16 @@ +Sitename,Site,Country,Latitude,Longitude,MAT,MAP,IGBP,Köppen,AvailableYears,DOI +SE-Svb,Svartberget,Sweden,64.25611,19.7745,1.8,614.00,ENF,Dfc,2014-2018,10.18160/X57W-HWTE +SE-Nor,Norunda,Sweden,60.08649722,17.47950278,5.5,527.00,ENF,Dfb,2014-2018,10.18160/K57M-TVGE +SE-Lnn,Lanna,Sweden,58.3406295776367,13.101767539978,6.0,558.00,CRO,Cfb,2014-2018,10.18160/5GZQ-S6Z0 +SE-Htm,Hyltemossa,Sweden,56.09763,13.41897,7.4,707.00,ENF,Cfb,2015-2018,10.18160/17FF-96RT +SE-Deg,Degero,Sweden,64.182029,19.556539,1.2,523.00,WET,Dfc,2001-2018,10.18160/0T47-MEEU +RU-Fyo,Fyodorovskoye,Russia,56.4615278,32.9220833,3.9,711.00,ENF,Dfb,1998-2018,10.18160/4J2N-DY7S +RU-Fy2,Fyodorovskoye dry spruce stand,Russia,56.447603,32.901878,3.9,711.00,ENF,Dfb,2015-2018,10.18160/4J2N-DY7S +IT-BCi,Borgo Cioffi,Italy,40.52375,14.95744444,18,600.00,CRO,Csa,2004–2018,10.18160/T25N-PD1H +FI-Sii,Siikaneva,Finland,61.83265,24.19285,3.5,701.00,WET,Dfc,2016–2018,10.18160/0RE3-DTWD +ES-LM2,Majadas del Tietar South,Spain,39.934592,-5.775881,16,700.00,SAV,Csa,2014–2018,10.18160/3SVJ-XSB7 +DE-Tha,Tharandt,Germany,50.96256,13.56515,8.2,843.00,ENF,Dfb,1996–2018,10.18160/BSE6-EMVJ +CZ-wet,Trebon,Czech Republic,49.02465,14.77035,7.7,604.00,WET,Dfb,2006–2018,10.18160/W4YS-463W +FR-Hes,Hesse,France,48.6741,7.06465,9.2,820.00,DBF,Cfb,2014–2018,10.18160/WTYC-JVQV +BE-Lon,Lonzee,Belgium,50.5516198,4.7462339,10,800.00,CRO,BSk,2004–2018,10.18160/6SM0-NFES + From f146ee746146a86d0823721eedfb8755e7a2751c Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 1 Jul 2021 16:11:20 +0300 Subject: [PATCH 2053/2289] fill in list --- .../ICOS_Drought2018_Vignette.Rmd | 2 +- .../extfiles/ICOS_Drought2018_Sites.csv | 63 +++++++++++++++---- 2 files changed, 51 insertions(+), 14 deletions(-) diff --git a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd index f7fa7a3b34a..2bee334b280 100644 --- a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd +++ b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd @@ -28,7 +28,7 @@ world_map <- ggplot2::map_data("world") `%>%` <- dplyr::`%>%` europe_cropped <- world_map %>% dplyr::filter(( - world_map$lat >= 30 & world_map$lat <= 75 & + world_map$lat >= 35 & world_map$lat <= 75 & world_map$long >= -10 & world_map$long <= 40 )) # Compute the centroid as the mean longitude and lattitude diff --git a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv index 9630ecd338e..7958c88d7d2 100644 --- a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv +++ b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv @@ -1,16 +1,53 @@ Sitename,Site,Country,Latitude,Longitude,MAT,MAP,IGBP,Köppen,AvailableYears,DOI -SE-Svb,Svartberget,Sweden,64.25611,19.7745,1.8,614.00,ENF,Dfc,2014-2018,10.18160/X57W-HWTE 
-SE-Nor,Norunda,Sweden,60.08649722,17.47950278,5.5,527.00,ENF,Dfb,2014-2018,10.18160/K57M-TVGE -SE-Lnn,Lanna,Sweden,58.3406295776367,13.101767539978,6.0,558.00,CRO,Cfb,2014-2018,10.18160/5GZQ-S6Z0 -SE-Htm,Hyltemossa,Sweden,56.09763,13.41897,7.4,707.00,ENF,Cfb,2015-2018,10.18160/17FF-96RT -SE-Deg,Degero,Sweden,64.182029,19.556539,1.2,523.00,WET,Dfc,2001-2018,10.18160/0T47-MEEU -RU-Fyo,Fyodorovskoye,Russia,56.4615278,32.9220833,3.9,711.00,ENF,Dfb,1998-2018,10.18160/4J2N-DY7S -RU-Fy2,Fyodorovskoye dry spruce stand,Russia,56.447603,32.901878,3.9,711.00,ENF,Dfb,2015-2018,10.18160/4J2N-DY7S -IT-BCi,Borgo Cioffi,Italy,40.52375,14.95744444,18,600.00,CRO,Csa,2004–2018,10.18160/T25N-PD1H -FI-Sii,Siikaneva,Finland,61.83265,24.19285,3.5,701.00,WET,Dfc,2016–2018,10.18160/0RE3-DTWD -ES-LM2,Majadas del Tietar South,Spain,39.934592,-5.775881,16,700.00,SAV,Csa,2014–2018,10.18160/3SVJ-XSB7 +BE-Bra,Brasschaat,Belgium,51.30761,4.51984,9.8,750.00,MF,Cfb,1996–2018,10.18160/F738-634R +BE-Lon,Lonzee,Belgium,50.5516198,4.7462339,10,800.00,CRO,BSk,2004–2018,10.18160/6SM0-NFES +BE-Vie,Vielsalm,Belgium,50.3049329013726,5.99811511350303,7.8,1062.00,MF,Cfb,1996–2018,10.18160/MK3Q-BBEK +CH-Aws,Alp Weissenstein,Switzerland,46.583194,9.790417,2.3,918.00,GRA,,2010–2018,10.18160/3YQE-7BR8 +CH-Cha,Chamau grassland,Switzerland,47.21022222,8.410444444,9.5,1136.00,GRA,Cfb,2005–2018,10.18160/GMMW-5E2D +CH-Dav,Davos,Switzerland,46.81533,9.85591,2.8,1062.00,ENF,Dfc,1997–2018,10.18160/R86M-H3HX +CH-Fru,Fruebuel grassland,Switzerland,47.11583333,8.537777778,7.2,1651.00,GRA,Dfb,2005–2018,10.18160/J938-0MKS +CH-Lae,Laegern,Switzerland,47.478333,8.364389,8.3,1100.00,MF,BWk,2004–2018,10.18160/FABD-SVJJ +CH-Oe2,Oensingen crop,Switzerland,47.286417,7.73375,9.8,1155.00,CRO,BSk,2004–2018,10.18160/N01Y-R7DF +CZ-BK1,Bily Kriz forest,Czechia,49.50207615,18.53688247,6.7,1316.00,ENF,Dwb,2004–2018,10.18160/7QXR-AYEE +CZ-Lnz,Lanzhot,Czechia,48.681611,16.946416,9.3,550.00,MF,,2015–2018,10.18160/84SN-YBSD +CZ-RAJ,Rajec,Czechia,49.4437236,16.6965125,7.1,681,ENF,,2012–2018,10.18160/HFS9-JBTG +CZ-Stn,Stitna,Czechia,49.035975,17.9699,8.7,685.00,DBF,,2010–2018,10.18160/V2JN-DQPJ +CZ-wet,Trebon,Czechia,49.02465,14.77035,7.7,604.00,WET,Dfb,2006–2018,10.18160/W4YS-463W +DE-Akm,Anklam,Germany,53.86617,13.68342,8.7,558.00,WET,BWk,2009–2018,10.18160/24B5-J44F +DE-Geb,Gebesee,Germany,51.09973,10.91463,8.5,470.00,CRO,Cfb,2001–2018,10.18160/ZK18-3YW3 +DE-Gri,Grillenburg,Germany,50.95004,13.51259,8.4,877.00,GRA,Dfb,2004–2018,10.18160/EN60-T3FG +DE-Hai,Hainich,Germany,51.079213,10.452168,8.3,720.00,DBF,Cfb,2000–2018,10.18160/D4ET-BFPS +DE-HoH,Hohes Holz,Germany,52.085306,11.219222,9.1,563.00,DBF,,2015–2018,10.18160/J1YB-YEHC +DE-Hte,Huetelmoor,Germany,54.210278,12.176111,9.2,645.00,WET,,2009–2018,10.18160/63V0-08T4 +DE-Hzd,Hetzdorf,Germany,50.96381,13.48978,7.8,901.00,DBF,,2010–2018,10.18160/PJEC-43XB +DE-Kli,Klingenberg,Germany,50.89306,13.52238,7.6,842.00,CRO,Dfb,2004–2018,10.18160/STT9-TBJZ +DE-Obe,Oberbärenburg,Germany,50.78666,13.72129,5.5,996.00,ENF,Dfb,2008–2018,10.18160/FSM3-RC5F +DE-RuR,Rollesbroich,Germany,50.6219142,6.3041256,7.7,1033.00,GRA,,2011–2018,10.18160/HPV9-K8R1 +DE-RuS,Selhausen Juelich,Germany,50.86590702,6.447144704,10,700.00,CRO,,2011–2018,10.18160/A2TK-QD5U +DE-RuW,Wustebach,Germany,50.50490703,6.33101886,7.5,1250.00,ENF,,2010–2018,10.18160/H7Y6-2R1H DE-Tha,Tharandt,Germany,50.96256,13.56515,8.2,843.00,ENF,Dfb,1996–2018,10.18160/BSE6-EMVJ -CZ-wet,Trebon,Czech Republic,49.02465,14.77035,7.7,604.00,WET,Dfb,2006–2018,10.18160/W4YS-463W 
+DK-Sor,Soroe,Denmark,55.4858694,11.6446444,8.3,660.00,DBF,Cfb,1996–2018,10.18160/BFDT-7HYE +ES-Abr,Albuera,Spain,38.701839,-6.785881,,,SAV,,2015–2018,10.18160/11TP-MX4F +ES-LM1,Majadas del Tietar North,Spain,39.94269,-5.778683,16,700.00,SAV,Csa,2014–2018,10.18160/FDSD-GVRS +ES-LM2,Majadas del Tietar South,Spain,39.934592,-5.775881,16,700.00,SAV,Csa,2014–2018,10.18160/3SVJ-XSB7 +FI-Hyy,Hyytiala,Finland,61.84741,24.29477,3.8,709.00,ENF,Dfb,1996–2018,10.18160/CWKM-YS54 +FI-Let,Lettosuo,Finland,60.64183,23.95952,4.6,627.00,ENF,,2009–2018,10.18160/0JHQ-BZMU +FI-Sii,Siikaneva,Finland,61.83265,24.19285,3.5,701.00,WET,Dfc,2016–2018,10.18160/0RE3-DTWD +FI-Var,Varrio,Finland,67.7549,29.61,-0.5,601.00,ENF,,2016–2018,10.18160/NYH7-5JEB +FR-Bil,Bilos,France,44.493672,-0.956082,12.9,930.00,ENF,,2014–2018,10.18160/ETDC-1K1F +FR-EM2,Estrees-Mons A28,France,49.8721083,3.02065,10.8,680.00,CRO,,2017–2018 FR-Hes,Hesse,France,48.6741,7.06465,9.2,820.00,DBF,Cfb,2014–2018,10.18160/WTYC-JVQV -BE-Lon,Lonzee,Belgium,50.5516198,4.7462339,10,800.00,CRO,BSk,2004–2018,10.18160/6SM0-NFES - +IT-BCi,Borgo Cioffi,Italy,40.52375,14.95744444,18,600.00,CRO,Csa,2004–2018,10.18160/T25N-PD1H +IT-Cp2,Castelporziano2,Italy,41.7042655944824,12.3572931289673,15.2,805.00,EBF,Csa,2012–2018,10.18160/5FPQ-G257 +IT-Lsn,Lison,Italy,45.740481,12.750297,13.1,1083.0,OSH,,2016–2018,10.18160/RTKZ-VTDJ +IT-SR2,San Rossore 2,Italy,43.732022,10.29091,14.2,920.0,ENF,,2013–2018,10.18160/FFK6-8ZV7 +IT-Tor,Torgnon,Italy,45.84444,7.578055,2.9,920.00,GRA,BSk,2008–2018,10.18160/ERMH-PSVW +NL-Loo,Loobos,Netherlands,52.166581,5.743556,9.8,786.00,ENF,Cfb,1996–2018,10.18160/MV3K-WM09 +RU-Fy2,Fyodorovskoye dry spruce stand,Russia,56.447603,32.901878,3.9,711.00,ENF,Dfb,2015-2018,10.18160/4J2N-DY7S +RU-Fyo,Fyodorovskoye,Russia,56.4615278,32.9220833,3.9,711.00,ENF,Dfb,1998-2018,10.18160/4J2N-DY7S +SE-Deg,Degero,Sweden,64.182029,19.556539,1.2,523.00,WET,Dfc,2001-2018,10.18160/0T47-MEEU +SE-Htm,Hyltemossa,Sweden,56.09763,13.41897,7.4,707.00,ENF,Cfb,2015-2018,10.18160/17FF-96RT +SE-Lnn,Lanna,Sweden,58.3406295776367,13.101767539978,6.0,558.00,CRO,Cfb,2014-2018,10.18160/5GZQ-S6Z0 +SE-Nor,Norunda,Sweden,60.08649722,17.47950278,5.5,527.00,ENF,Dfb,2014-2018,10.18160/K57M-TVGE +SE-Ros,Rosinedal-3,Sweden,64.1725,19.738,1.8,614.00,ENF,,2014–2018,10.18160/ZF2F-82Q7 +SE-Svb,Svartberget,Sweden,64.25611,19.7745,1.8,614.00,ENF,Dfc,2014-2018,10.18160/X57W-HWTE From ea866f5dfaf08ed4bab67e0bc9dff6e093666558 Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:00 -0700 Subject: [PATCH 2054/2289] Delete dataprep_10_sites.csv --- base/settings/examples/dataprep_10_sites.csv | 11 ----------- 1 file changed, 11 deletions(-) delete mode 100644 base/settings/examples/dataprep_10_sites.csv diff --git a/base/settings/examples/dataprep_10_sites.csv b/base/settings/examples/dataprep_10_sites.csv deleted file mode 100644 index b19e6192557..00000000000 --- a/base/settings/examples/dataprep_10_sites.csv +++ /dev/null @@ -1,11 +0,0 @@ -"","siteid_BETY4","site_name4","base_dir4","met_download","model_id4","model_name4","input_met_source4","input_met_output4","siteid_NEON4" -"1",646,"HARV","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","HARV" -"2",679,"LOS","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA -"3",1000026756,"POTATO","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA 
-"4",622,"SYV","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA -"5",676,"WCR","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA -"6",678,"WLEF","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","metprocess",1000000030,"SIPNET","NOAA_GEFS","SIPNET",NA -"7",1000004924,"BART","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","BART" -"8",1000004927,"KONZ","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","KONZ" -"9",1000004916,"OSBS","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","OSBS" -"10",1000004876,"SRER","/projectnb/dietzelab/ahelgeso/NOAA_met_data/","dataprep",1000000030,"SIPNET","NOAA_GEFS","SIPNET","SRER" From ab7b0422cd08682654ab45314f0095508777c595 Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:11 -0700 Subject: [PATCH 2055/2289] Delete harvard.xml --- base/settings/examples/harvard.xml | 92 ------------------------------ 1 file changed, 92 deletions(-) delete mode 100755 base/settings/examples/harvard.xml diff --git a/base/settings/examples/harvard.xml b/base/settings/examples/harvard.xml deleted file mode 100755 index b6f181bf97a..00000000000 --- a/base/settings/examples/harvard.xml +++ /dev/null @@ -1,92 +0,0 @@ - - - - Daily Forecast SIPNET Site - 1000012038 - ahelgeso - 2021/05/10 20:42:37 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - temperate.deciduous.HPDA - - 1 - - 1000022311 - /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA - - - soil.ALL - - 1 - - 1000022310 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - - FALSE - FALSE - - 1.2 - AUTO - - - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - - - - 1000000030 - /fs/data3/kzarada/US_WCr/data/WillowCreek.param - SIPNET - FALSE - /fs/data5/pecan.models/sipnet_unk/sipnet - - - - 646 - - - - NOAA_GEFS - SIPNET - - - - - localhost - - From 171388be611b3a9d0a0238c8ec03ef0a8680abd3 Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:19 -0700 Subject: [PATCH 2056/2289] Delete konz.xml --- base/settings/examples/konz.xml | 82 --------------------------------- 1 file changed, 82 deletions(-) delete mode 100644 base/settings/examples/konz.xml diff --git a/base/settings/examples/konz.xml b/base/settings/examples/konz.xml deleted file mode 100644 index 7820dbc3727..00000000000 --- a/base/settings/examples/konz.xml +++ /dev/null @@ -1,82 +0,0 @@ - - - - EFI Forecast - 1000012038 - ahelgeso - 2021/05/06 15:06:40 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - TRUE - - /projectnb/dietzelab/EFI_Forecast_Challenge/Konza/noaa/ - - - - semiarid.grassland_HPDA - - 1 - - 1000016525 - /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland - - - soil.ALL_Arid_GrassHPDA - - 1 - - 1000016524 - /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL - - - - 3000 - FALSE - - - 1000000030 - - - - 1000004927 - - - - NOAA_GEFS - SIPNET - - - /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Konza/soil.nc - - - - - localhost - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - From ea7d470006cfd62128b32fc2b493dc5c0871c885 Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:33 -0700 Subject: [PATCH 2057/2289] 
Delete los.xml --- base/settings/examples/los.xml | 79 ---------------------------------- 1 file changed, 79 deletions(-) delete mode 100755 base/settings/examples/los.xml diff --git a/base/settings/examples/los.xml b/base/settings/examples/los.xml deleted file mode 100755 index 25bdec748de..00000000000 --- a/base/settings/examples/los.xml +++ /dev/null @@ -1,79 +0,0 @@ - - - - Daily Forecast SIPNET Site - 1000012038 - ahelgeso - 2021/05/10 20:44:50 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - temperate.deciduous.HPDA - - 1 - - 1000022311 - /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA - - - soil.HPDA - - 1 - - 1000016485 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - FALSE - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - - 1000000030 - - - - 679 - - - - NOAA_GEFS - SIPNET - - - - - localhost - - From f07456fc98db8f9c3087908844b9293a37d527c6 Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:42 -0700 Subject: [PATCH 2058/2289] Delete ordway.xml --- base/settings/examples/ordway.xml | 82 ------------------------------- 1 file changed, 82 deletions(-) delete mode 100644 base/settings/examples/ordway.xml diff --git a/base/settings/examples/ordway.xml b/base/settings/examples/ordway.xml deleted file mode 100644 index e24f7804763..00000000000 --- a/base/settings/examples/ordway.xml +++ /dev/null @@ -1,82 +0,0 @@ - - - - EFI Forecast - 1000012038 - ahelgeso - 2021/05/05 13:47:07 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /projectnb/dietzelab/EFI_Forecast_Challenge/Ordway/noaa/ - - - - temperate.coniferous - - 1 - - 1000016486 - /fs/data3/hamzed/Projects/HPDA/Helpers/Sites/Me4-Ameri/pft/boreal.coniferous - - - soil.HPDA - - 1 - - 1000016485 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - FALSE - - - 1000000030 - - - - 1000004916 - - - - NOAA_GEFS - SIPNET - - - /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Ordway/soil.nc - - - - - localhost - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - From 9524e7689dd4b7b2b4d7d8a358835525ec54d49b Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:50 -0700 Subject: [PATCH 2059/2289] Delete potato.xml --- base/settings/examples/potato.xml | 79 ------------------------------- 1 file changed, 79 deletions(-) delete mode 100644 base/settings/examples/potato.xml diff --git a/base/settings/examples/potato.xml b/base/settings/examples/potato.xml deleted file mode 100644 index 16d8f2bfd16..00000000000 --- a/base/settings/examples/potato.xml +++ /dev/null @@ -1,79 +0,0 @@ - - - - Daily Forecast SIPNET Site - 1000012038 - ahelgeso - 2021/05/10 20:21:05 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - semiarid.grassland_HPDA - - 1 - - 1000016525 - /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland - - - soil.ALL_Arid_GrassHPDA - - 1 - - 1000016524 - /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL - - - - 3000 - FALSE - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - - 1000000030 - - - - 1000026756 - - - - NOAA_GEFS - SIPNET - - - - - localhost - - From f3fb35459cae2ffe0250a68ee4aea7652d3d787b Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:19:58 -0700 Subject: [PATCH 2060/2289] Delete santarita.xml --- 
base/settings/examples/santarita.xml | 82 ---------------------------- 1 file changed, 82 deletions(-) delete mode 100644 base/settings/examples/santarita.xml diff --git a/base/settings/examples/santarita.xml b/base/settings/examples/santarita.xml deleted file mode 100644 index 1bd31fd696e..00000000000 --- a/base/settings/examples/santarita.xml +++ /dev/null @@ -1,82 +0,0 @@ - - - - EFI Forecast - 1000012038 - ahelgeso - 2021/04/20 15:32:55 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /projectnb/dietzelab/EFI_Forecast_Challenge/Santa_Rita/noaa/ - - - - semiarid.grassland_HPDA - - 1 - - 1000016525 - /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/semiarid.grassland - - - soil.ALL_Arid_GrassHPDA - - 1 - - 1000016524 - /projectnb/dietzelab/hamzed/HPDA/Outputs/Grass-Arid/pft/soil.ALL - - - - 3000 - FALSE - - - 1000000030 - - - - 1000004876 - - - - NOAA_GEFS - SIPNET - - - /projectnb/dietzelab/ahelgeso/EFI_Forecast_Challenge/Santa_Rita/soil.nc - - - - - localhost - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - From f86953356995a9966500b1c19dd074ce549a792b Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:20:05 -0700 Subject: [PATCH 2061/2289] Delete syv.xml --- base/settings/examples/syv.xml | 80 ---------------------------------- 1 file changed, 80 deletions(-) delete mode 100755 base/settings/examples/syv.xml diff --git a/base/settings/examples/syv.xml b/base/settings/examples/syv.xml deleted file mode 100755 index 9d2fe33d664..00000000000 --- a/base/settings/examples/syv.xml +++ /dev/null @@ -1,80 +0,0 @@ - - - - Daily Forecast SIPNET Site - 1000012038 - ahelgeso - 2021/05/10 20:47:17 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - temperate.coniferous - - 1 - - 1000016486 - /fs/data3/hamzed/Projects/HPDA/Helpers/Sites/Me4-Ameri/pft/boreal.coniferous - - - soil.HPDA - - 1 - - 1000016485 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - FALSE - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - - 1000000030 - /fs/data3/kzarada/US_WCr/data/WillowCreek.param - - - - 622 - - - - NOAA_GEFS - SIPNET - - - - - localhost - - From 4d872b1ff77811956b1aff8b98cc8be891e79d9e Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 Jul 2021 15:20:11 -0700 Subject: [PATCH 2062/2289] Delete wcr.xml --- base/settings/examples/wcr.xml | 80 ---------------------------------- 1 file changed, 80 deletions(-) delete mode 100644 base/settings/examples/wcr.xml diff --git a/base/settings/examples/wcr.xml b/base/settings/examples/wcr.xml deleted file mode 100644 index 71a1675e4de..00000000000 --- a/base/settings/examples/wcr.xml +++ /dev/null @@ -1,80 +0,0 @@ - - - - Daily Forecast SIPNET Site - 1000012038 - ahelgeso - 2021/05/10 20:48:20 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - temperate.deciduous.HPDA - - 1 - - 1000022311 - /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA - - - soil.HPDA - - 1 - - 1000022310 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - FALSE - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - - 1000000030 - /fs/data3/kzarada/US_WCr/data/WillowCreek.param - - - - 676 - - - - NOAA_GEFS - SIPNET - - - - - localhost - - From ac760cc9c00c4c48a6147bcaf8e410b6ebb51c0c Mon Sep 17 00:00:00 2001 From: ahelgeso Date: Thu, 1 
Jul 2021 15:20:17 -0700 Subject: [PATCH 2063/2289] Delete wlef.xml --- base/settings/examples/wlef.xml | 92 --------------------------------- 1 file changed, 92 deletions(-) delete mode 100755 base/settings/examples/wlef.xml diff --git a/base/settings/examples/wlef.xml b/base/settings/examples/wlef.xml deleted file mode 100755 index 2792e5db7f1..00000000000 --- a/base/settings/examples/wlef.xml +++ /dev/null @@ -1,92 +0,0 @@ - - - - Daily Forecast SIPNET Site - 1000012038 - ahelgeso - 2021/05/10 20:49:09 +0000 - - - - bety - bety - psql-pecan.bu.edu - bety - PostgreSQL - true - - /fs/data3/kzarada/pecan.data/dbfiles/ - - - - temperate.deciduous.HPDA - - 1 - - 1000022311 - /fs/data2/output//PEcAn_1000010530/pft/temperate.deciduous.HPDA - - - soil.ALL - - 1 - - 1000022310 - /fs/data2/output//PEcAn_1000010530/pft/soil.HPDA - - - - 3000 - - FALSE - FALSE - - 1.2 - AUTO - - - - - 100 - NEE - 2021 - 2021 - - - uniform - - - sampling - - - parameters - - - soil - - - - - - - 1000000030 - /fs/data3/kzarada/US_WCr/data/WillowCreek.param - SIPNET - FALSE - /fs/data5/pecan.models/SIPNET/1023/sipnet - - - - 678 - - - - NOAA_GEFS - SIPNET - - - - - localhost - - From c7514b9a28d2607e6b9d1ed919da17babf56f8bd Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 2 Jul 2021 16:20:32 +0530 Subject: [PATCH 2064/2289] create generic ICOS functions --- modules/data.atmosphere/NAMESPACE | 4 +- .../data.atmosphere/R/download.Drought2018.R | 195 --------------- modules/data.atmosphere/R/download.ICOS.R | 233 ++++++++++++++++++ .../R/{met2CF.Drought2018.R => met2CF.ICOS.R} | 6 +- ...wnload.Drought2018.Rd => download.ICOS.Rd} | 22 +- .../{met2CF.Drought2018.Rd => met2CF.ICOS.Rd} | 12 +- .../testthat/test.download.Drought2018.R | 15 -- .../tests/testthat/test.download.ICOS.R | 25 ++ 8 files changed, 281 insertions(+), 231 deletions(-) delete mode 100644 modules/data.atmosphere/R/download.Drought2018.R create mode 100644 modules/data.atmosphere/R/download.ICOS.R rename modules/data.atmosphere/R/{met2CF.Drought2018.R => met2CF.ICOS.R} (93%) rename modules/data.atmosphere/man/{download.Drought2018.Rd => download.ICOS.Rd} (51%) rename modules/data.atmosphere/man/{met2CF.Drought2018.Rd => met2CF.ICOS.Rd} (86%) delete mode 100644 modules/data.atmosphere/tests/testthat/test.download.Drought2018.R create mode 100644 modules/data.atmosphere/tests/testthat/test.download.ICOS.R diff --git a/modules/data.atmosphere/NAMESPACE b/modules/data.atmosphere/NAMESPACE index 5983c8148da..237ef68e165 100644 --- a/modules/data.atmosphere/NAMESPACE +++ b/modules/data.atmosphere/NAMESPACE @@ -18,7 +18,6 @@ export(debias.met.regression) export(download.Ameriflux) export(download.AmerifluxLBL) export(download.CRUNCEP) -export(download.Drought2018) export(download.ERA5.old) export(download.FACE) export(download.Fluxnet2015) @@ -26,6 +25,7 @@ export(download.FluxnetLaThuile) export(download.GFDL) export(download.GLDAS) export(download.Geostreams) +export(download.ICOS) export(download.MACA) export(download.MERRA) export(download.MsTMIP_NARR) @@ -66,10 +66,10 @@ export(met.process.stage) export(met2CF.ALMA) export(met2CF.Ameriflux) export(met2CF.AmerifluxLBL) -export(met2CF.Drought2018) export(met2CF.ERA5) export(met2CF.FACE) export(met2CF.Geostreams) +export(met2CF.ICOS) export(met2CF.NARR) export(met2CF.PalEON) export(met2CF.PalEONregional) diff --git a/modules/data.atmosphere/R/download.Drought2018.R b/modules/data.atmosphere/R/download.Drought2018.R deleted file mode 100644 index 3755c988f40..00000000000 --- 
a/modules/data.atmosphere/R/download.Drought2018.R
+++ /dev/null
@@ -1,195 +0,0 @@
-#' Download ICOS Drought 2018 data
-#'
-#' Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898
-#'
-#' Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF
-#'
-#' @param sitename ICOS id of the site. Example - "BE-Bra"
-#' @param outfolder path to the directory where the output file is stored. If specified directory does not exists, it is created.
-#' @param start_date start date of the data request in the form YYYY-MM-DD
-#' @param end_date end date area of the data request in the form YYYY-MM-DD
-#' @param overwrite should existing files be overwritten. Default False.
-#' @return information about the output file
-#' @export
-#' @examples
-#' \dontrun{
-#' download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01")
-#' }
-#' @author Ayush Prasad
-#'
-download.Drought2018 <-
-  function(sitename,
-           outfolder,
-           start_date,
-           end_date,
-           overwrite = FALSE) {
-    stage_download <- TRUE
-    stage_extract <- TRUE
-
-    # construct output CSV file name
-    output_file_name <-
-      paste0(
-        "FLX_",
-        sitename,
-        "_FLUXNET2015_FULLSET_HH_",
-        as.character(format(as.Date(start_date), '%Y')),
-        "-2018_beta-3.csv"
-      )
-
-    if (file.exists(file.path(outfolder, output_file_name)) &&
-        !overwrite) {
-      PEcAn.logger::logger.info("Output CSV file for the requested site already exists")
-      stage_download <- FALSE
-      stage_extract <- FALSE
-    }
-
-
-    # construct zip file name
-    zip_file_name <- paste0(outfolder, "/Drought", sitename, ".zip")
-
-
-    if (stage_extract && file.exists(zip_file_name) && !overwrite) {
-      PEcAn.logger::logger.info("Zip file for the requested site already exists, extracting it...")
-      stage_download <- FALSE
-      stage_extract <- FALSE
-    }
-
-    if (as.Date(end_date) > as.Date("2018-12-31")) {
-      PEcAn.logger::logger.severe(paste0(
-        "Requested end date ",
-        end_date,
-        " exceeds Drought 2018 availability period"
-      ))
-    }
-
-    if (stage_download) {
-      # Find dataset product id by using the site name
-
-      # ICOS SPARQL end point
-      url <- "https://meta.icos-cp.eu/sparql?type=JSON"
-
-      # RDF query to find out the information about the data set using the site name
-      body <- "
-      prefix cpmeta: <http://meta.icos-cp.eu/ontologies/cpmeta/>
-      prefix prov: <http://www.w3.org/ns/prov#>
-      select ?dobj ?spec ?timeStart
-      where {
-      VALUES ?spec {<http://meta.icos-cp.eu/resources/cpmeta/dought2018ArchiveProduct>}
-      ?dobj cpmeta:hasObjectSpec ?spec .
-      VALUES ?station {<http://meta.icos-cp.eu/resources/stations/ES_sitename>}
-      ?dobj cpmeta:wasAcquiredBy/prov:wasAssociatedWith ?station .
-      ?dobj cpmeta:hasStartTime | (cpmeta:wasAcquiredBy / prov:startedAtTime) ?timeStart .
-      FILTER NOT EXISTS {[] cpmeta:isNextVersionOf ?dobj}
-      }
-      "
-      body <- gsub("sitename", sitename, body)
-      response <- httr::POST(url, body = body)
-      response <- httr::content(response, as = "text")
-      response <- jsonlite::fromJSON(response)
-      dataset_url <- response$results$bindings$dobj$value
-      dataset_start_date <-
-        as.Date(
-          strptime(response$results$bindings$timeStart$value, format = "%Y-%m-%dT%H:%M:%S")
-        )
-      if (is.null(dataset_url)) {
-        PEcAn.logger::logger.severe("Data is not available for the requested site")
-      }
-      if (dataset_start_date > as.Date(start_date)) {
-        PEcAn.logger::logger.severe(
-          paste(
-            "Data is not available for the requested start date. Please try again with",
-            dataset_start_date,
-            "as start date."
-          )
-        )
-      }
-      dataset_id <- sub(".*/", "", dataset_url)
-
-      # construct the download URL
-      download_url <-
-        paste0('https://data.icos-cp.eu/licence_accept?ids=%5B%22',
-               dataset_id,
-               '%22%5D')
-      # Download the zip file
-      file <-
-        httr::GET(url = download_url,
-                  httr::write_disk(
-                    paste0(outfolder, "/Drought", sitename, ".zip"),
-                    overwrite = TRUE
-                  ),
-                  httr::progress())
-    }
-
-    if (stage_extract) {
-      file_name <- paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH')
-
-      # extract only the hourly data file
-      zipped_csv_name <-
-        grep(
-          paste0('^', file_name),
-          utils::unzip(zip_file_name, list = TRUE)$Name,
-          ignore.case = TRUE,
-          value = TRUE
-        )
-      utils::unzip(file.path(outfolder, paste0('Drought', sitename, '.zip')),
-                   files = zipped_csv_name,
-                   exdir = outfolder)
-      `%>%` <- dplyr::`%>%`
-      # read in the output CSV file and select the variables required
-      df <- utils::read.csv(file.path(outfolder, zipped_csv_name))
-      df <-
-        subset(
-          df,
-          select = c(
-            "TIMESTAMP_START",
-            "TA_F",
-            "SW_IN_F",
-            "LW_IN_F",
-            "VPD_F",
-            "PA_F",
-            "P_F",
-            "WS_F",
-            "WD",
-            "RH",
-            "PPFD_IN",
-            "CO2_F_MDS",
-            "TS_F_MDS_1",
-            "TS_F_MDS_2",
-            "NEE_VUT_REF",
-            "LE_F_MDS",
-            "RECO_NT_VUT_REF",
-            "GPP_NT_VUT_REF"
-          )
-        )
-      df <-
-        df %>% dplyr::filter((
-          as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) >= as.Date(start_date) &
-          as.Date(strptime(df$TIMESTAMP_START, format = "%Y%m%d%H%M")) <= as.Date(end_date)
-        ))
-      # save the csv file
-      utils::write.csv(df, file.path(outfolder, output_file_name), row.names = FALSE)
-    }
-
-    rows <- 1
-    results <- data.frame(
-      file = character(rows),
-      host = character(rows),
-      mimetype = character(rows),
-      formatname = character(rows),
-      startdate = character(rows),
-      enddate = character(rows),
-      dbfile.name = output_file_name,
-      stringsAsFactors = FALSE
-    )
-
-    results$file[rows] <-
-      file.path(outfolder, output_file_name)
-    results$host[rows] <- PEcAn.remote::fqdn()
-    results$startdate[rows] <- start_date
-    results$enddate[rows] <- end_date
-    results$mimetype[rows] <- "text/csv"
-    results$formatname[rows] <- "Drought2018_HH"
-
-    return(results)
-
-  }
\ No newline at end of file
diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R
new file mode 100644
index 00000000000..b1ef13374d2
--- /dev/null
+++ b/modules/data.atmosphere/R/download.ICOS.R
@@ -0,0 +1,233 @@
+#' Download ICOS Ecosystem data products
+#'
+#' Currently available products:
+#' Drought-2018 ecosystem eddy covariance flux product https://www.icos-cp.eu/data-products/YVR0-4898
+#' ICOS Final Fully Quality Controlled Observational Data (Level 2) https://www.icos-cp.eu/data-products/ecosystem-release
+#'
+#'
+#' @param sitename ICOS id of the site. Example - "BE-Bra"
+#' @param outfolder path to the directory where the output file is stored. If the specified directory does not exist, it is created.
+#' @param start_date start date of the data request in the form YYYY-MM-DD
+#' @param end_date end date of the data request in the form YYYY-MM-DD
+#' @param product ICOS product to be downloaded. Currently supported options: "Drought2018", "ETC"
+#' @param overwrite should existing files be overwritten. Default False.
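+#' @details Both products unzip to a half-hourly CSV file in FLUXNET format;
+#'   the data frame returned by the function records the file path, start and
+#'   end timestamps, mimetype and format name, so that the file can be
+#'   registered as a PEcAn input and later converted to CF with
+#'   \code{met2CF.ICOS}. The ETC (Level 2) product is requested the same way
+#'   with \code{product = "ETC"}, assuming Level 2 data exist for the
+#'   requested site and dates.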
+#' @return information about the output file
+#' @export
+#' @examples
+#' \dontrun{
+#' download.ICOS("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01", product="Drought2018")
+#' }
+#' @author Ayush Prasad
+#'
+download.ICOS <-
+  function(sitename,
+           outfolder,
+           start_date,
+           end_date,
+           product,
+           overwrite = FALSE) {
+    download_file_flag <- TRUE
+    extract_file_flag <- TRUE
+
+
+    if (tolower(product) == "drought2018") {
+      # construct output CSV file name
+      output_file_name <-
+        paste0(
+          "FLX_",
+          sitename,
+          "_FLUXNET2015_FULLSET_HH_",
+          as.character(format(as.Date(start_date), '%Y')),
+          "-2018_beta-3.csv"
+        )
+
+      # construct zip file name
+      zip_file_name <-
+        paste0(outfolder, "/Drought", sitename, ".zip")
+
+      # data type, can be found from the machine readable page of the product
+      data_type <-
+        "http://meta.icos-cp.eu/resources/cpmeta/dought2018ArchiveProduct"
+
+      file_name <-
+        paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH')
+
+      format_name <- "ICOS_Drought2018_HH"
+
+    } else if (tolower(product) == "etc") {
+      output_file_name <-
+        paste0(sitename, "/ICOSETC_", sitename, "_FLUXNET_HH_01.csv")
+
+      # construct zip file name
+      zip_file_name <-
+        paste0(outfolder, "/ICOSETC_Archive_", sitename, ".zip")
+
+      # data type, can be found from the machine readable page of the product
+      data_type <-
+        "http://meta.icos-cp.eu/resources/cpmeta/etcArchiveProduct"
+
+      file_name <-
+        paste0(sitename, "/ICOSETC_", sitename, "_FLUXNET_HH")
+
+      format_name <- "ICOS_ETC_HH"
+
+    } else {
+      PEcAn.logger::logger.severe("Invalid product. Product should be one of 'Drought2018', 'ETC' ")
+    }
+
+
+    if (file.exists(file.path(outfolder, output_file_name)) &&
+        !overwrite) {
+      PEcAn.logger::logger.info("Output CSV file for the requested site already exists")
+      download_file_flag <- FALSE
+      extract_file_flag <- FALSE
+    }
+
+    if (extract_file_flag &&
+        file.exists(zip_file_name) && !overwrite) {
+      PEcAn.logger::logger.info("Zip file for the requested site already exists, extracting it...")
+      download_file_flag <- FALSE
+      extract_file_flag <- TRUE
+    }
+
+    if (download_file_flag) {
+      # Find dataset product id by using the site name
+
+      # ICOS SPARQL end point
+      url <- "https://meta.icos-cp.eu/sparql?type=JSON"
+
+      # RDF query to find out the information about the data set using the site name
+      body <- "
+      prefix cpmeta: <http://meta.icos-cp.eu/ontologies/cpmeta/>
+      prefix prov: <http://www.w3.org/ns/prov#>
+      select ?dobj ?spec ?timeStart ?timeEnd
+      where {
+      VALUES ?spec {<data_type>}
+      ?dobj cpmeta:hasObjectSpec ?spec .
+      VALUES ?station {<http://meta.icos-cp.eu/resources/stations/ES_sitename>}
+      ?dobj cpmeta:wasAcquiredBy/prov:wasAssociatedWith ?station .
+      ?dobj cpmeta:hasStartTime | (cpmeta:wasAcquiredBy / prov:startedAtTime) ?timeStart .
+      ?dobj cpmeta:hasEndTime | (cpmeta:wasAcquiredBy / prov:endedAtTime) ?timeEnd .
+ FILTER NOT EXISTS {[] cpmeta:isNextVersionOf ?dobj} + } + " + body <- gsub("data_type", data_type, body) + body <- gsub("sitename", sitename, body) + response <- httr::POST(url, body = body) + response <- httr::content(response, as = "text") + response <- jsonlite::fromJSON(response) + dataset_url <- response$results$bindings$dobj$value + dataset_start_date <- + lubridate::as_datetime( + strptime(response$results$bindings$timeStart$value, format = "%Y-%m-%dT%H:%M:%S") + ) + dataset_end_date <- + lubridate::as_datetime( + strptime(response$results$bindings$timeEnd$value, format = "%Y-%m-%dT%H:%M:%S") + ) + if (is.null(dataset_url)) { + PEcAn.logger::logger.severe("Data is not available for the requested site") + } + if (dataset_start_date > lubridate::as_datetime(start_date)) { + PEcAn.logger::logger.severe( + paste( + "Data is not available for the requested start date. Please try again with", + dataset_start_date, + "as start date." + ) + ) + } + + if (dataset_end_date < lubridate::as_date(end_date)) { + PEcAn.logger::logger.severe( + paste( + "Data is not available for the requested end date. Please try again with", + dataset_end_date, + "as end date." + ) + ) + } + dataset_id <- sub(".*/", "", dataset_url) + + # construct the download URL + download_url <- + paste0('https://data.icos-cp.eu/licence_accept?ids=%5B%22', + dataset_id, + '%22%5D') + # Download the zip file + file <- + httr::GET(url = download_url, + httr::write_disk(zip_file_name, + overwrite = TRUE), + httr::progress()) + } + + if (extract_file_flag) { + # extract only the hourly data file + zipped_csv_name <- + grep( + paste0('^', file_name), + utils::unzip(zip_file_name, list = TRUE)$Name, + ignore.case = TRUE, + value = TRUE + ) + utils::unzip(zip_file_name, + files = zipped_csv_name, + exdir = outfolder) + } + + + # get start and end year of data from file + firstline <- + system(paste0("head -2 ", file.path(outfolder, output_file_name)), intern = TRUE) + firstline <- firstline[2] + lastline <- + system(paste0("tail -1 ", file.path(outfolder, output_file_name)), intern = TRUE) + + firstdate_st <- paste0( + substr(firstline, 1, 4), + "-", + substr(firstline, 5, 6), + "-", + substr(firstline, 7, 8), + " ", + substr(firstline, 9, 10), + ":", + substr(firstline, 11, 12) + ) + lastdate_st <- paste0( + substr(lastline, 1, 4), + "-", + substr(lastline, 5, 6), + "-", + substr(lastline, 7, 8), + " ", + substr(lastline, 9, 10), + ":", + substr(lastline, 11, 12) + ) + + + rows <- 1 + results <- data.frame( + file = character(rows), + host = character(rows), + mimetype = character(rows), + formatname = character(rows), + startdate = character(rows), + enddate = character(rows), + dbfile.name = basename(output_file_name), + stringsAsFactors = FALSE + ) + + results$file[rows] <- + file.path(outfolder, output_file_name) + results$host[rows] <- PEcAn.remote::fqdn() + results$startdate[rows] <- firstdate_st + results$enddate[rows] <- lastdate_st + results$mimetype[rows] <- "text/csv" + results$formatname[rows] <- format_name + + return(results) + + } \ No newline at end of file diff --git a/modules/data.atmosphere/R/met2CF.Drought2018.R b/modules/data.atmosphere/R/met2CF.ICOS.R similarity index 93% rename from modules/data.atmosphere/R/met2CF.Drought2018.R rename to modules/data.atmosphere/R/met2CF.ICOS.R index 7e56e6b2f47..48e9650de14 100644 --- a/modules/data.atmosphere/R/met2CF.Drought2018.R +++ b/modules/data.atmosphere/R/met2CF.ICOS.R @@ -1,4 +1,4 @@ -#' Convert variables ICOS Drought 2018 variables to CF format. 
+#' Convert ICOS variables to CF format.
 #'
 #' Variables present in the output netCDF file:
 #' air_temperature, relative_humidity,
 #' specific_humidity, surface_downwelling_shortwave_flux_in_air,
 #' surface_downwelling_longwave_flux_in_air,
 #' surface_downwelling_photosynthetic_photon_flux_in_air, precipitation_flux,
 #' eastward_wind, northward_wind
 #'
-#' @param in.path path to the input Drought 2018 CSV file
+#' @param in.path path to the input ICOS product CSV file
 #' @param in.prefix name of the input file
 #' @param outfolder path to the directory where the output file is stored. If the specified directory does not exist, it is created.
 #' @param start_date start date of the input file
 #' @export
 
-met2CF.Drought2018 <-
+met2CF.ICOS <-
   function(in.path,
            in.prefix,
            outfolder,
diff --git a/modules/data.atmosphere/man/download.Drought2018.Rd b/modules/data.atmosphere/man/download.ICOS.Rd
similarity index 51%
rename from modules/data.atmosphere/man/download.Drought2018.Rd
rename to modules/data.atmosphere/man/download.ICOS.Rd
index 924545ab7fc..d698cb267f2 100644
--- a/modules/data.atmosphere/man/download.Drought2018.Rd
+++ b/modules/data.atmosphere/man/download.ICOS.Rd
@@ -1,14 +1,15 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/download.Drought2018.R
-\name{download.Drought2018}
-\alias{download.Drought2018}
-\title{Download ICOS Drought 2018 data}
+% Please edit documentation in R/download.ICOS.R
+\name{download.ICOS}
+\alias{download.ICOS}
+\title{Download ICOS Ecosystem data products}
 \usage{
-download.Drought2018(
+download.ICOS(
   sitename,
   outfolder,
   start_date,
   end_date,
+  product,
   overwrite = FALSE
 )
 }
\arguments{
\item{sitename}{ICOS id of the site. Example - "BE-Bra"}

\item{outfolder}{path to the directory where the output file is stored. If the specified directory does not exist, it is created.}

\item{start_date}{start date of the data request in the form YYYY-MM-DD}

\item{end_date}{end date of the data request in the form YYYY-MM-DD}

+\item{product}{ICOS product to be downloaded. Currently supported options: "Drought2018", "ETC"}
+
\item{overwrite}{should existing files be overwritten.
Default False.}
 }
 \value{
 information about the output file
 }
 \description{
-Link to the product: https://www.icos-cp.eu/data-products/YVR0-4898
-}
-\details{
-Variables present in the output CSV file: TA_F, SW_IN_F, LW_IN_F, VPD_F, PA_F, P_F, WS_F, WD, RH, PPFD_IN, CO2_F_MDS, TS_F_MDS_1, TS_F_MDS_2, NEE_VUT_REF, LE_F_MDS, RECO_NT_VUT_REF and GPP_NT_VUT_REF
+Currently available products:
+Drought-2018 ecosystem eddy covariance flux product https://www.icos-cp.eu/data-products/YVR0-4898
+ICOS Final Fully Quality Controlled Observational Data (Level 2) https://www.icos-cp.eu/data-products/ecosystem-release
 }
 \examples{
 \dontrun{
-download.Drought2018("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01")
+download.ICOS("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01", product="Drought2018")
 }
 }
 \author{
diff --git a/modules/data.atmosphere/man/met2CF.Drought2018.Rd b/modules/data.atmosphere/man/met2CF.ICOS.Rd
similarity index 86%
rename from modules/data.atmosphere/man/met2CF.Drought2018.Rd
rename to modules/data.atmosphere/man/met2CF.ICOS.Rd
index cf6cc8f75c0..4276e47d517 100644
--- a/modules/data.atmosphere/man/met2CF.Drought2018.Rd
+++ b/modules/data.atmosphere/man/met2CF.ICOS.Rd
@@ -1,10 +1,10 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/met2CF.Drought2018.R
-\name{met2CF.Drought2018}
-\alias{met2CF.Drought2018}
-\title{Convert variables ICOS Drought 2018 variables to CF format.}
+% Please edit documentation in R/met2CF.ICOS.R
+\name{met2CF.ICOS}
+\alias{met2CF.ICOS}
+\title{Convert ICOS variables to CF format.}
 \usage{
-met2CF.Drought2018(
+met2CF.ICOS(
   in.path,
   in.prefix,
   outfolder,
@@ -15,7 +15,7 @@ met2CF.Drought2018(
 )
 }
 \arguments{
-\item{in.path}{path to the input Drought 2018 CSV file}
+\item{in.path}{path to the input ICOS product CSV file}
 
 \item{in.prefix}{name of the input file}
 
diff --git a/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R b/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R
deleted file mode 100644
index aa26fb504c5..00000000000
--- a/modules/data.atmosphere/tests/testthat/test.download.Drought2018.R
+++ /dev/null
@@ -1,15 +0,0 @@
-context("Download Drought 2018 product")
-
-outfolder <- tempdir()
-setup(dir.create(outfolder, showWarnings = FALSE, recursive = TRUE))
-teardown(unlink(outfolder, recursive = TRUE))
-
-test_that("Drought 2018 download works", {
-  start_date <- "2016-01-01"
-  end_date <- "2017-01-01"
-  sitename <- "FI-Sii"
-  res <- httr::GET("https://meta.icos-cp.eu/objects/a8OW2wWfAYqZrj31S8viVLUS")
-  expect_equal(200, res$status_code)
-  dat <- download.Drought2018(sitename, outfolder, start_date, end_date, overwrite = TRUE)
-  expect_true(file.exists(dat$file))
-})
\ No newline at end of file
diff --git a/modules/data.atmosphere/tests/testthat/test.download.ICOS.R b/modules/data.atmosphere/tests/testthat/test.download.ICOS.R
new file mode 100644
index 00000000000..945d59117f4
--- /dev/null
+++ b/modules/data.atmosphere/tests/testthat/test.download.ICOS.R
@@ -0,0 +1,25 @@
+context("Download ICOS data products")
+
+outfolder <- tempdir()
+setup(dir.create(outfolder, showWarnings = FALSE, recursive = TRUE))
+teardown(unlink(outfolder, recursive = TRUE))
+
+test_that("ICOS Drought 2018 download works", {
+  start_date <- "2016-01-01"
+  end_date <- "2017-01-01"
+  sitename <- "FI-Sii"
+  res <- httr::GET("https://meta.icos-cp.eu/objects/a8OW2wWfAYqZrj31S8viVLUS")
+  expect_equal(200, res$status_code)
+  dat <- download.ICOS(sitename, outfolder, start_date,
end_date, "Drought2018", overwrite = TRUE) + expect_true(file.exists(dat$file)) +}) + +test_that("ICOS ETC download works", { + start_date <- "2019-01-01" + end_date <- "2020-01-01" + sitename <- "FI-Sii" + res <- httr::GET("https://meta.icos-cp.eu/objects/NEt3tFUV47QdjvJ-rgKgaiTE") + expect_equal(200, res$status_code) + dat <- download.ICOS(sitename, outfolder, start_date, end_date, "ETC", overwrite = TRUE) + expect_true(file.exists(dat$file)) +}) \ No newline at end of file From 0ee8935a6bcdd3d6c8c63beb572b5bec34a9ff14 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 2 Jul 2021 16:51:30 +0530 Subject: [PATCH 2065/2289] update vignette Co-authored-by: istfer --- .../tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd | 2 +- .../ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd index 2bee334b280..59c5389a467 100644 --- a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd +++ b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd @@ -42,7 +42,7 @@ ggplot(europe_cropped, aes(x = long, y = lat)) + guide=FALSE) + theme_void()+ theme(legend.position = "none")+ - geom_point(data=icos_d2018, aes(x=Longitude, y=Latitude)) + geom_point(data=icos_d2018, aes(x=Longitude, y=Latitude), color="red", size=2) ``` diff --git a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv index 7958c88d7d2..5689465d62d 100644 --- a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv +++ b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv @@ -18,7 +18,7 @@ DE-Geb,Gebesee,Germany,51.09973,10.91463,8.5,470.00,CRO,Cfb,2001–2018,10.18160 DE-Gri,Grillenburg,Germany,50.95004,13.51259,8.4,877.00,GRA,Dfb,2004–2018,10.18160/EN60-T3FG DE-Hai,Hainich,Germany,51.079213,10.452168,8.3,720.00,DBF,Cfb,2000–2018,10.18160/D4ET-BFPS DE-HoH,Hohes Holz,Germany,52.085306,11.219222,9.1,563.00,DBF,,2015–2018,10.18160/J1YB-YEHC -DE-Hte,Huetelmoor,Germany,54.210278,12.176111,9.2,645.00,WET,,2009–2018,10.18160/63V0-08T4 +DE-Hte,Huetelmoor,Germany,54.210278,12.176111,9.2,645.00,WET,Dfb,2009–2018,10.18160/63V0-08T4 DE-Hzd,Hetzdorf,Germany,50.96381,13.48978,7.8,901.00,DBF,,2010–2018,10.18160/PJEC-43XB DE-Kli,Klingenberg,Germany,50.89306,13.52238,7.6,842.00,CRO,Dfb,2004–2018,10.18160/STT9-TBJZ DE-Obe,Oberbärenburg,Germany,50.78666,13.72129,5.5,996.00,ENF,Dfb,2008–2018,10.18160/FSM3-RC5F @@ -40,7 +40,7 @@ FR-Hes,Hesse,France,48.6741,7.06465,9.2,820.00,DBF,Cfb,2014–2018,10.18160/WTYC IT-BCi,Borgo Cioffi,Italy,40.52375,14.95744444,18,600.00,CRO,Csa,2004–2018,10.18160/T25N-PD1H IT-Cp2,Castelporziano2,Italy,41.7042655944824,12.3572931289673,15.2,805.00,EBF,Csa,2012–2018,10.18160/5FPQ-G257 IT-Lsn,Lison,Italy,45.740481,12.750297,13.1,1083.0,OSH,,2016–2018,10.18160/RTKZ-VTDJ -IT-SR2,San Rossore 2,Italy,43.732022,10.29091,14.2,920.0,ENF,,2013–2018,10.18160/FFK6-8ZV7 +IT-SR2,San Rossore 2,Italy,43.732022,10.29091,14.2,920.0,ENF,Csa,2013–2018,10.18160/FFK6-8ZV7 IT-Tor,Torgnon,Italy,45.84444,7.578055,2.9,920.00,GRA,BSk,2008–2018,10.18160/ERMH-PSVW NL-Loo,Loobos,Netherlands,52.166581,5.743556,9.8,786.00,ENF,Cfb,1996–2018,10.18160/MV3K-WM09 RU-Fy2,Fyodorovskoye dry spruce 
stand,Russia,56.447603,32.901878,3.9,711.00,ENF,Dfb,2015-2018,10.18160/4J2N-DY7S From c5a8fba381e88c1f361843f9eb4347a73a089e50 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 2 Jul 2021 16:54:00 +0530 Subject: [PATCH 2066/2289] update function in vignette --- .../tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd index 59c5389a467..c3a22c92bd9 100644 --- a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd +++ b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd @@ -52,6 +52,7 @@ sitename <- "FI-Sii" outfolder <- "~/" start_date <- "2016-01-01" end_date <- "2018-01-01" -PEcAn.data.atmosphere::download.ICOS(sitename, outfolder, start_date, end_date) +product <- "Drought2018" +PEcAn.data.atmosphere::download.ICOS(sitename, outfolder, start_date, end_date, product) ``` From da84d6b5eb79a4cceb6f6929f70148a7bd10b16a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:00:07 +0530 Subject: [PATCH 2067/2289] Update base/logger/DESCRIPTION Co-authored-by: Chris Black --- base/logger/DESCRIPTION | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index ee4358b2330..585b459d22c 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -2,9 +2,12 @@ Package: PEcAn.logger Title: Logger functions for PEcAn Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "kooper@illinois.edu"), - person("Alexey", "Shiklomanov", role = c("aut"), email = "ashiklom@bu.edu"), - person("Shashank", "Singh", role = c("aut"), email = "shashanksingh819@gmail.com")) +Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), + email = "kooper@illinois.edu"), + person("Alexey", "Shiklomanov", role = c("aut"), + email = "ashiklom@bu.edu"), + person("Shashank", "Singh", role = c("aut"), + email = "shashanksingh819@gmail.com")) Description: Special logger functions for tracking execution status and the environment. Imports: utils Suggests: testthat From cfd1680354e47e78633c95fc3c344fdb2e84b9a2 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:29:32 +0530 Subject: [PATCH 2068/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 8d02ac27dad..0bbf265b902 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -19,7 +19,9 @@ #' warnifnot("I would prefer it if you used lists.", is.list(a), is.list(b)) #' errorifnot("You should definitely use lists.", is.list(a), is.list(b)) #' try({ -#' severeifnot("I cannot deal with the fact that something is not a list.", is.list(a), is.list(b)) +#' severeifnot("I cannot deal with the fact that something is not a list.", +#' is.list(a), +#' is.list(b)) #' }) #' @export severeifnot <- function(msg, ...) 
{ From f1b6c264adcba5a15d5a37e85f407a45858d2097 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:42:22 +0530 Subject: [PATCH 2069/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 0bbf265b902..703244501a3 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -9,7 +9,6 @@ #' #' @param msg Logger message to write, as a single character string. #' @param ... Conditions to evaluate -#' @param ... other arguments passed on to severeifnot #' @return Invisibly, `TRUE` if conditions are met, `FALSE` otherwise #' @examples #' a <- 1:5 From 3efa3203767b9867b55f51daf14ab8f1b332237a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:42:29 +0530 Subject: [PATCH 2070/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 703244501a3..146a34ac2d0 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -32,7 +32,6 @@ severeifnot <- function(msg, ...) { } #' @rdname severeifnot -#’ @param ... other arguments passed on to errorifnot #' @export errorifnot <- function(msg, ...) { if (!check_conditions(...)) { From b8f3de7c8bf4185a55fb3f2c5c75b20e3f2b7183 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:42:35 +0530 Subject: [PATCH 2071/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 146a34ac2d0..922ea929538 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -55,7 +55,6 @@ warnifnot <- function(msg, ...) { } #' @rdname severeifnot -#’ @param ... other arguments passed on to infoifnot #' @export infoifnot <- function(msg, ...) { if (!check_conditions(...)) { From 1e9b1b49eb7aca0884c111ca2c814510b7855974 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:43:44 +0530 Subject: [PATCH 2072/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 922ea929538..8d1b5a9d6ce 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -43,7 +43,6 @@ errorifnot <- function(msg, ...) { } #' @rdname severeifnot -#’ @param ... other arguments passed on to warnifnot #' @export warnifnot <- function(msg, ...) { if (!check_conditions(...)) { From df4873d83d6b49f053969d61c56084d9e8b69072 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:43:50 +0530 Subject: [PATCH 2073/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 8d1b5a9d6ce..3446e5cd87a 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -65,7 +65,6 @@ infoifnot <- function(msg, ...) { } #' @rdname severeifnot -#’ @param ... other arguments passed on to debugifnot #' @export debugifnot <- function(msg, ...) 
{ if (!check_conditions(...)) { From f0427a79af0d6a207f4ab02451d176c59f9cfc67 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:46:20 +0530 Subject: [PATCH 2074/2289] Delete check_conditions.Rd --- base/logger/man/check_conditions.Rd | 11 ----------- 1 file changed, 11 deletions(-) delete mode 100644 base/logger/man/check_conditions.Rd diff --git a/base/logger/man/check_conditions.Rd b/base/logger/man/check_conditions.Rd deleted file mode 100644 index 4ef381d29f6..00000000000 --- a/base/logger/man/check_conditions.Rd +++ /dev/null @@ -1,11 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/logifnot.R -\name{check_conditions} -\alias{check_conditions} -\title{Check a list of conditions} -\usage{ -check_conditions(...) -} -\description{ -Check a list of conditions -} From b4b380e4ac45f4bb93eaada818dc775e7da62ada Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:46:40 +0530 Subject: [PATCH 2075/2289] Delete is_definitely_true.Rd --- base/logger/man/is_definitely_true.Rd | 11 ----------- 1 file changed, 11 deletions(-) delete mode 100644 base/logger/man/is_definitely_true.Rd diff --git a/base/logger/man/is_definitely_true.Rd b/base/logger/man/is_definitely_true.Rd deleted file mode 100644 index 4e060c0879c..00000000000 --- a/base/logger/man/is_definitely_true.Rd +++ /dev/null @@ -1,11 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/logifnot.R -\name{is_definitely_true} -\alias{is_definitely_true} -\title{Robust logical check} -\usage{ -is_definitely_true(x) -} -\description{ -Robust logical check -} From a9e9e15defbfa4089e37fd7bcb33aa160400fe8a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 20:47:51 +0530 Subject: [PATCH 2076/2289] Update base/logger/R/logifnot.R Co-authored-by: Chris Black --- base/logger/R/logifnot.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 3446e5cd87a..0b3149f3d01 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -75,8 +75,7 @@ debugifnot <- function(msg, ...) { } } -#' Check a list of conditions -#’ @param ... other arguments passed on to check_conditions +# Check a list of conditions check_conditions <- function(...) { dots <- list(...) 
conditions <- vapply(dots, is_definitely_true, logical(1)) From 0716cd2e8f69369334722b9e2f03681b66ba4546 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 21:08:15 +0530 Subject: [PATCH 2077/2289] changed roxygen comments to R comments --- base/logger/R/logger.R | 21 +++++++++------------ 1 file changed, 9 insertions(+), 12 deletions(-) diff --git a/base/logger/R/logger.R b/base/logger/R/logger.R index b8c4cd1ae81..17d8cc311fb 100644 --- a/base/logger/R/logger.R +++ b/base/logger/R/logger.R @@ -191,18 +191,15 @@ logger.setLevel <- function(level) { } # logger.setLevel -##' Returns numeric value for string -##' -##' Given the string representation this will return the numeric value -##' ALL = 0 -##' DEBUG = 10 -##' INFO = 20 -##' WARN = 30 -##' ERROR = 40 -##' ALL = 99 -##' -##' @return level the level of the message -##' @author Rob Kooper +## Given the string representation this will return the numeric value +## DEBUG = 10 +## INFO = 20 +## WARN = 30 +## ERROR = 40 +## ALL = 99 +## +##@return level the level of the message +##@author Rob Kooper logger.getLevelNumber <- function(level) { if (toupper(level) == "ALL") { return(0) From c9baaf4572d2bd9fe4482931e2cdcc36bdbec40b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sat, 3 Jul 2021 21:10:25 +0530 Subject: [PATCH 2078/2289] Update logifnot.R --- base/logger/R/logifnot.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/R/logifnot.R b/base/logger/R/logifnot.R index 0b3149f3d01..13faa7d084b 100644 --- a/base/logger/R/logifnot.R +++ b/base/logger/R/logifnot.R @@ -82,7 +82,7 @@ check_conditions <- function(...) { all(conditions) } -#' Robust logical check +# Robust logical check is_definitely_true <- function(x) { if (is.null(x) || length(x) == 0 || !is.logical(x)) { return(FALSE) From 548c1a006b63203d86096c540239a163d6780d40 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 3 Jul 2021 16:32:49 +0000 Subject: [PATCH 2079/2289] automated documentation update --- base/logger/man/logger.getLevelNumber.Rd | 23 ----------------------- base/logger/man/severeifnot.Rd | 4 +++- 2 files changed, 3 insertions(+), 24 deletions(-) delete mode 100644 base/logger/man/logger.getLevelNumber.Rd diff --git a/base/logger/man/logger.getLevelNumber.Rd b/base/logger/man/logger.getLevelNumber.Rd deleted file mode 100644 index e3f4b0a3a9d..00000000000 --- a/base/logger/man/logger.getLevelNumber.Rd +++ /dev/null @@ -1,23 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/logger.R -\name{logger.getLevelNumber} -\alias{logger.getLevelNumber} -\title{Returns numeric value for string} -\usage{ -logger.getLevelNumber(level) -} -\value{ -level the level of the message -} -\description{ -Given the string representation this will return the numeric value -ALL = 0 -DEBUG = 10 -INFO = 20 -WARN = 30 -ERROR = 40 -ALL = 99 -} -\author{ -Rob Kooper -} diff --git a/base/logger/man/severeifnot.Rd b/base/logger/man/severeifnot.Rd index 82f01737ea6..0bc51df1826 100644 --- a/base/logger/man/severeifnot.Rd +++ b/base/logger/man/severeifnot.Rd @@ -43,6 +43,8 @@ infoifnot("Something is not a list.", is.list(a), is.list(b)) warnifnot("I would prefer it if you used lists.", is.list(a), is.list(b)) errorifnot("You should definitely use lists.", is.list(a), is.list(b)) try({ - severeifnot("I absolutely cannot deal with the fact that something is not a list.", is.list(a), is.list(b)) + severeifnot("I cannot deal 
with the fact that something is not a list.", + is.list(a), + is.list(b)) }) } From 4bde1109d3b61b4df64014671dad18eb14e51601 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 3 Jul 2021 22:25:23 +0200 Subject: [PATCH 2080/2289] quiet note about dump.log --- base/logger/DESCRIPTION | 1 - base/logger/R/logger.R | 9 +++++---- base/logger/man/logger.severe.Rd | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 585b459d22c..9c14b8f3f66 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -13,6 +13,5 @@ Imports: utils Suggests: testthat License: BSD_3_clause + file LICENSE Encoding: UTF-8 -LazyData: true RoxygenNote: 7.0.2 Roxygen: list(markdown = TRUE) diff --git a/base/logger/R/logger.R b/base/logger/R/logger.R index 17d8cc311fb..8bdd63d6bb1 100644 --- a/base/logger/R/logger.R +++ b/base/logger/R/logger.R @@ -88,9 +88,9 @@ logger.error <- function(msg, ...) { ##' Prints an severe message and stops execution. ##' ##' This function will print a message and stop execution of the code. This -##' should only be used if the application should terminate. +##' should only be used if the application should terminate. ##' -##' set \code{\link{logger.setQuitOnSevere(FALSE)}}. To avoid terminating +##' set \code{logger.setQuitOnSevere(FALSE)} to avoid terminating ##' the session. This is set by default to TRUE if interactive or running ##' inside Rstudio. ##' @@ -140,8 +140,9 @@ logger.severe <- function(msg, ..., wrap = TRUE) { ##' } logger.message <- function(level, msg, ..., wrap = TRUE) { if (logger.getLevelNumber(level) >= .utils.logger$level) { - utils::dump.frames(dumpto = "dump.log") - calls <- names(dump.log) + call_dump <- NULL # to avoid "no visible binding" note from R check + utils::dump.frames(dumpto = "call_dump") + calls <- names(call_dump) calls <- calls[!grepl("^(#[0-9]+: )?(PEcAn\\.logger::)?logger", calls)] calls <- calls[!grepl("(severe|error|warn|info|debug)ifnot", calls)] func <- sub("\\(.*", "", utils::tail(calls, 1)) diff --git a/base/logger/man/logger.severe.Rd b/base/logger/man/logger.severe.Rd index 1ea4c76cca9..b2d04efab1d 100644 --- a/base/logger/man/logger.severe.Rd +++ b/base/logger/man/logger.severe.Rd @@ -20,7 +20,7 @@ This function will print a message and stop execution of the code. This should only be used if the application should terminate. } \details{ -set \code{\link{logger.setQuitOnSevere(FALSE)}}. To avoid terminating +set \code{logger.setQuitOnSevere(FALSE)} to avoid terminating the session. This is set by default to TRUE if interactive or running inside Rstudio. } From f4d4f28c77213a6e92879f1341453cdee2f647a8 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 3 Jul 2021 23:29:57 +0200 Subject: [PATCH 2081/2289] add myself as contributor, fix link --- base/logger/DESCRIPTION | 3 ++- base/logger/R/logger.R | 2 +- base/logger/man/logger.severe.Rd | 2 +- 3 files changed, 4 insertions(+), 3 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 9c14b8f3f66..5c51ef5c76e 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -7,7 +7,8 @@ Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), person("Alexey", "Shiklomanov", role = c("aut"), email = "ashiklom@bu.edu"), person("Shashank", "Singh", role = c("aut"), - email = "shashanksingh819@gmail.com")) + email = "shashanksingh819@gmail.com"), + person("Chris", "Black", role = c("ctb"))) Description: Special logger functions for tracking execution status and the environment. 
Imports: utils Suggests: testthat diff --git a/base/logger/R/logger.R b/base/logger/R/logger.R index 8bdd63d6bb1..a2133812653 100644 --- a/base/logger/R/logger.R +++ b/base/logger/R/logger.R @@ -90,7 +90,7 @@ logger.error <- function(msg, ...) { ##' This function will print a message and stop execution of the code. This ##' should only be used if the application should terminate. ##' -##' set \code{logger.setQuitOnSevere(FALSE)} to avoid terminating +##' set \code{\link{logger.setQuitOnSevere}(FALSE)} to avoid terminating ##' the session. This is set by default to TRUE if interactive or running ##' inside Rstudio. ##' diff --git a/base/logger/man/logger.severe.Rd b/base/logger/man/logger.severe.Rd index b2d04efab1d..4bda00ef83a 100644 --- a/base/logger/man/logger.severe.Rd +++ b/base/logger/man/logger.severe.Rd @@ -20,7 +20,7 @@ This function will print a message and stop execution of the code. This should only be used if the application should terminate. } \details{ -set \code{logger.setQuitOnSevere(FALSE)} to avoid terminating +set \code{\link{logger.setQuitOnSevere}(FALSE)} to avoid terminating the session. This is set by default to TRUE if interactive or running inside Rstudio. } From cbeab65c15cf3ca850bbf5d7b76e0d08338fb028 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 4 Jul 2021 12:24:15 +0530 Subject: [PATCH 2082/2289] Update DESCRIPTION --- base/logger/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 585b459d22c..3229ce412c1 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -1,5 +1,5 @@ Package: PEcAn.logger -Title: Logger functions for PEcAn +Title: Logger Functions for PEcAn Version: 1.7.1 Date: 2019-09-05 Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), From b0144189b1400e64033e6c766b23a3eeb2a99c9b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 4 Jul 2021 13:49:44 +0530 Subject: [PATCH 2083/2289] Delete NEWS.md --- NEWS.md | 10 ---------- 1 file changed, 10 deletions(-) delete mode 100644 NEWS.md diff --git a/NEWS.md b/NEWS.md deleted file mode 100644 index f7302e59928..00000000000 --- a/NEWS.md +++ /dev/null @@ -1,10 +0,0 @@ -# PEcAn.DB 1.7.1.9000 - -## Removed - -* `rename_jags_columns()` has been removed from `PEcAn.DB` but is now available in package `PEcAn.MA` (#2805, @moki1202). - - -# PEcAn.DB 1.7.1 - -* All changes in 1.7.1 and earlier were recorded in a single file for all of the PEcAn packages; please see https://github.com/PecanProject/pecan/blob/v1.7.1/CHANGELOG.md for details. From 725b3b1d4e52fd2d883099f77815e3be567b164b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 4 Jul 2021 13:50:12 +0530 Subject: [PATCH 2084/2289] Create NEWS.md --- base/db/NEWS.md | 10 ++++++++++ 1 file changed, 10 insertions(+) create mode 100644 base/db/NEWS.md diff --git a/base/db/NEWS.md b/base/db/NEWS.md new file mode 100644 index 00000000000..e607aac3350 --- /dev/null +++ b/base/db/NEWS.md @@ -0,0 +1,10 @@ +# PEcAn.DB 1.7.1.9000 + +## Removed + +* `rename_jags_columns()` has been removed from `PEcAn.DB` but is now available in package `PEcAn.MA` (#2805, @moki1202). + + +# PEcAn.DB 1.7.1 + +* All changes in 1.7.1 and earlier were recorded in a single file for all of the PEcAn packages; please see https://github.com/PecanProject/pecan/blob/v1.7.1/CHANGELOG.md for details. 
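A note on the `dump.log` thread running through this series: patch 2080 above works around the stray global by renaming the `dump.frames()` target, and patch 2087 below drops `dump.frames()` entirely. A minimal sketch of the final mechanism, assuming only a plain R session (none of this is PEcAn code):

```r
# sys.calls() plus utils::limitedLabels() yields the same call-stack labels
# that utils::dump.frames() produced, but with no side effect: dump.frames()
# assigns its dump object into the global environment, which is what left a
# stray `dump.log` behind every logger call.
f <- function() g()
g <- function() utils::limitedLabels(sys.calls())
f()
#> [1] "f()" "g()"   (labels may carry source references when available)
```

The logger then filters these labels for its own frames and reports the function that called the logger as the message source.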
From 358c9535576073934e44a2084d89d44ccc4e2952 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 4 Jul 2021 13:51:32 +0530 Subject: [PATCH 2085/2289] Create NEWS.md --- base/logger/NEWS.md | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 base/logger/NEWS.md diff --git a/base/logger/NEWS.md b/base/logger/NEWS.md new file mode 100644 index 00000000000..60e932d43fd --- /dev/null +++ b/base/logger/NEWS.md @@ -0,0 +1,7 @@ +# PEcAn.logger 1.7.1.9000 + + +#PEcAn.logger 1.7.1 + +* All changes in 1.7.1 and earlier were recorded in a single file for all of the PEcAn packages; please see +https://github.com/PecanProject/pecan/blob/v1.7.1/CHANGELOG.md for details. From 05c60c8ed87481caffed466e634fcea4db9c255c Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 4 Jul 2021 15:30:56 +0530 Subject: [PATCH 2086/2289] Update base/logger/NEWS.md Co-authored-by: Chris Black --- base/logger/NEWS.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/base/logger/NEWS.md b/base/logger/NEWS.md index 60e932d43fd..48168c23984 100644 --- a/base/logger/NEWS.md +++ b/base/logger/NEWS.md @@ -1,5 +1,7 @@ # PEcAn.logger 1.7.1.9000 +## Fixed +* Logger calls no longer create a stray `dump.log` object in the global environment #PEcAn.logger 1.7.1 From c80d2222211238a92ab74e2158b73081f9389800 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 4 Jul 2021 18:04:13 +0530 Subject: [PATCH 2087/2289] Update base/logger/R/logger.R Co-authored-by: Chris Black --- base/logger/R/logger.R | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/base/logger/R/logger.R b/base/logger/R/logger.R index a2133812653..d81ece9a970 100644 --- a/base/logger/R/logger.R +++ b/base/logger/R/logger.R @@ -140,9 +140,7 @@ logger.severe <- function(msg, ..., wrap = TRUE) { ##' } logger.message <- function(level, msg, ..., wrap = TRUE) { if (logger.getLevelNumber(level) >= .utils.logger$level) { - call_dump <- NULL # to avoid "no visible binding" note from R check - utils::dump.frames(dumpto = "call_dump") - calls <- names(call_dump) + calls <- utils::limitedLabels(sys.calls()) calls <- calls[!grepl("^(#[0-9]+: )?(PEcAn\\.logger::)?logger", calls)] calls <- calls[!grepl("(severe|error|warn|info|debug)ifnot", calls)] func <- sub("\\(.*", "", utils::tail(calls, 1)) From ea9c1ab1e26df370dfc87d4dfaa54b0417a7c869 Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Tue, 6 Jul 2021 09:48:17 +0530 Subject: [PATCH 2088/2289] use same format for both the ICOS products --- modules/data.atmosphere/R/download.ICOS.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R index b1ef13374d2..1c128e5d12f 100644 --- a/modules/data.atmosphere/R/download.ICOS.R +++ b/modules/data.atmosphere/R/download.ICOS.R @@ -52,7 +52,7 @@ download.ICOS <- file_name <- paste0('FLX_', sitename, '_FLUXNET2015_FULLSET_HH') - format_name <- "ICOS_Drought2018_HH" + format_name <- "ICOS_ECOSYSTEM_HH" } else if (tolower(product) == "etc") { output_file_name <- @@ -69,7 +69,7 @@ download.ICOS <- file_name <- paste0(sitename, "/ICOSETC_", sitename, "_FLUXNET_HH") - format_name <- "ICOS_ETC_HH" + format_name <- "ICOS_ECOSYSTEM_HH" } else { PEcAn.logger::logger.severe("Inavlid product. 
Product should be one of 'Drought2018', 'ETC' ") From 73e2780bedf0bbfa137d61080934b64bda1df9df Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Wed, 7 Jul 2021 09:57:27 +0530 Subject: [PATCH 2089/2289] remove subfolder Co-authored-by: istfer --- modules/data.atmosphere/R/download.ICOS.R | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R index 1c128e5d12f..85e051c5740 100644 --- a/modules/data.atmosphere/R/download.ICOS.R +++ b/modules/data.atmosphere/R/download.ICOS.R @@ -56,7 +56,7 @@ download.ICOS <- } else if (tolower(product) == "etc") { output_file_name <- - paste0(sitename, "/ICOSETC_", sitename, "_FLUXNET_HH_01.csv") + paste0("ICOSETC_", sitename, "_FLUXNET_HH_01.csv") # construct zip file name zip_file_name <- @@ -67,7 +67,7 @@ download.ICOS <- "http://meta.icos-cp.eu/resources/cpmeta/etcArchiveProduct" file_name <- - paste0(sitename, "/ICOSETC_", sitename, "_FLUXNET_HH") + paste0("ICOSETC_", sitename, "_FLUXNET_HH") format_name <- "ICOS_ECOSYSTEM_HH" @@ -166,13 +166,14 @@ download.ICOS <- # extract only the hourly data file zipped_csv_name <- grep( - paste0('^', file_name), + paste0('*', file_name), utils::unzip(zip_file_name, list = TRUE)$Name, ignore.case = TRUE, value = TRUE ) utils::unzip(zip_file_name, files = zipped_csv_name, + junkpaths = TRUE, exdir = outfolder) } @@ -230,4 +231,4 @@ download.ICOS <- return(results) - } \ No newline at end of file + } From de5ec33247e55f866e1479ea0191491af127a838 Mon Sep 17 00:00:00 2001 From: istfer Date: Wed, 7 Jul 2021 12:11:16 +0300 Subject: [PATCH 2090/2289] pass port --- web/common.php | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/web/common.php b/web/common.php index 9887fe989c9..7765c5692bb 100644 --- a/web/common.php +++ b/web/common.php @@ -76,6 +76,7 @@ function toXML($string) { # ---------------------------------------------------------------------- function open_database() { global $db_bety_hostname; + global $db_bety_port; global $db_bety_username; global $db_bety_password; global $db_bety_database; @@ -83,7 +84,7 @@ function open_database() { global $pdo; try { - $pdo = new PDO("${db_bety_type}:host=${db_bety_hostname};dbname=${db_bety_database}", $db_bety_username, $db_bety_password); + $pdo = new PDO("${db_bety_type}:host=${db_bety_hostname};dbname=${db_bety_database};port=${db_bety_port}", $db_bety_username, $db_bety_password); $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch (PDOException $e) { // handler to input database configurations manually From 5055e47707bdda7034d633764c30ec1f2a8e499a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 9 Jul 2021 11:59:29 +0530 Subject: [PATCH 2091/2289] Update DESCRIPTION --- base/logger/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 93788a632e1..c58b04d59f0 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -10,6 +10,7 @@ Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "shashanksingh819@gmail.com"), person("Chris", "Black", role = c("ctb"))) Description: Special logger functions for tracking execution status and the environment. 
+BugReports: https://github.com/PecanProject/pecan Imports: utils Suggests: testthat License: BSD_3_clause + file LICENSE From 069ae33c8aa6ffbf8d3a124de52f4676d47fb39a Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Fri, 9 Jul 2021 13:28:32 -0500 Subject: [PATCH 2092/2289] set port --- docker/web/config.docker.php | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/web/config.docker.php b/docker/web/config.docker.php index 5cf34391e28..b7c3bbdcdf7 100644 --- a/docker/web/config.docker.php +++ b/docker/web/config.docker.php @@ -3,6 +3,7 @@ # Information to connect to the BETY database $db_bety_type="pgsql"; $db_bety_hostname=getenv('PGHOST', true) ?: "postgres"; +$db_bety_port=getenv('PGPORT', true) ?: 5432; $db_bety_username=getenv('BETYUSER', true) ?: "bety"; $db_bety_password=getenv('BETYPASSWORD', true) ?: "bety"; $db_bety_database=getenv('BETYDATABASE', true) ?: "bety"; From a0f89701b25cc5bbaeb48f28004b22784aa73bf7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 13 Jul 2021 11:20:08 +0200 Subject: [PATCH 2093/2289] use NA or NULL as appropriate for devtools version --- scripts/check_with_errors.R | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/scripts/check_with_errors.R b/scripts/check_with_errors.R index 534d542ffe3..4a4e8701658 100755 --- a/scripts/check_with_errors.R +++ b/scripts/check_with_errors.R @@ -32,6 +32,14 @@ if (!runtests) { args <- c("--timings") } +# devtools 2.4.0 changed values accepted by document argument: +# < 2.4.0: TRUE = yes, FALSE = no, NA = yes if a Roxygen package +# >= 2.4.0: TRUE = yes, FALSE = no, +# NULL = if installed Roxygen is same version as package's RoxygenNote +if ((packageVersion("devtools") >= "2.4.0") && is.na(redocument)) { + redocument <- NULL +} + chk <- devtools::check(pkg, args = args, quiet = TRUE, error_on = die_level, document = redocument) From 57b9d0d7e1de2433b4606b4e6856d44cd698b3e9 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 14 Jul 2021 18:20:51 +0530 Subject: [PATCH 2094/2289] Update rename_jags_columns.R --- modules/meta.analysis/R/rename_jags_columns.R | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/rename_jags_columns.R b/modules/meta.analysis/R/rename_jags_columns.R index 1c1111422a4..ad1d4065d33 100644 --- a/modules/meta.analysis/R/rename_jags_columns.R +++ b/modules/meta.analysis/R/rename_jags_columns.R @@ -21,7 +21,11 @@ rename_jags_columns <- function(data) { # Swap column names; needed for downstream function pecan.ma() colnames(data)[colnames(data) %in% c("greenhouse", "ghs")] <- c("ghs", "greenhouse") colnames(data)[colnames(data) %in% c("site_id", "site")] <- c("site", "site_id") - + + stat <- NULL + n <- NULL + trt <- NULL + citation_id <- NULL transformed <- transform(data, Y = mean, se = stat, From 6e7155e172563797cba16d39f3828df43a6a4fb9 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 14 Jul 2021 18:24:10 +0530 Subject: [PATCH 2095/2289] Update jagify.R --- modules/meta.analysis/R/jagify.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/jagify.R b/modules/meta.analysis/R/jagify.R index e2641739bdd..ab8c1dd0d86 100644 --- a/modules/meta.analysis/R/jagify.R +++ b/modules/meta.analysis/R/jagify.R @@ -65,7 +65,7 @@ jagify <- function(result, use_ghs = TRUE) { r$stat[r$stat <= 0] <- NA } - PEcAn.DB::rename_jags_columns(r) + rename_jags_columns(r) } # jagify # 
==================================================================================================# From 0e12835bfb3197e3f2568f16ba385d9d1c1d3962 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 14 Jul 2021 18:27:12 +0530 Subject: [PATCH 2096/2289] Update DESCRIPTION --- modules/meta.analysis/DESCRIPTION | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index 5ecf850ba77..6471f16d832 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -3,14 +3,15 @@ Type: Package Title: PEcAn functions used for meta-analysis Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Mike","Dietze"), +Authors@R: c(person("Mike","Dietze", role = c("aut")), person("David", "LeBauer", role = c("aut", "cre"), email = "dlebauer@email.arizona.edu"), - person("Xiaohui", "Feng"), - person("Dan"," Wang"), - person("Carl", "Davidson"), - person("Rob","Kooper"), - person("Shawn", "Serbin"), - person("Shashank", "Singh")) + person("Xiaohui", "Feng", role = c("aut")), + person("Dan"," Wang", role = c("aut")), + person("Carl", "Davidson", role = c("aut")), + person("Rob","Kooper", role = c("aut")), + person("Shawn", "Serbin", role = c("aut")), + person("Shashank", "Singh", role = c("aut"))) + Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. The goal of PECAn is to From 3c35b4e93975e7277c2e7d86d62c7569355e697f Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 14 Jul 2021 22:49:23 +0530 Subject: [PATCH 2097/2289] Delete Rcheck_reference.log --- base/logger/tests/Rcheck_reference.log | 87 -------------------------- 1 file changed, 87 deletions(-) delete mode 100644 base/logger/tests/Rcheck_reference.log diff --git a/base/logger/tests/Rcheck_reference.log b/base/logger/tests/Rcheck_reference.log deleted file mode 100644 index a439c196f29..00000000000 --- a/base/logger/tests/Rcheck_reference.log +++ /dev/null @@ -1,87 +0,0 @@ -* using log directory ‘/tmp/Rtmp1Q6pph/PEcAn.logger.Rcheck’ -* using R version 3.5.2 (2018-12-20) -* using platform: x86_64-pc-linux-gnu (64-bit) -* using session charset: UTF-8 -* using options ‘--no-manual --as-cran’ -* checking for file ‘PEcAn.logger/DESCRIPTION’ ... OK -* this is package ‘PEcAn.logger’ version ‘1.7.0’ -* package encoding: UTF-8 -* checking package namespace information ... OK -* checking package dependencies ... OK -* checking if this is a source package ... OK -* checking if there is a namespace ... OK -* checking for executable files ... OK -* checking for hidden files and directories ... OK -* checking for portable file names ... OK -* checking for sufficient/correct file permissions ... OK -* checking serialization versions ... OK -* checking whether package ‘PEcAn.logger’ can be installed ... OK -* checking installed package size ... OK -* checking package directory ... OK -* checking DESCRIPTION meta-information ... NOTE -Authors@R field gives no person with name and roles. -Authors@R field gives no person with maintainer role, valid email -address and non-empty name. -* checking top-level files ... OK -* checking for left-over files ... OK -* checking index information ... OK -* checking package subdirectories ... OK -* checking R files for non-ASCII characters ... 
OK -* checking R files for syntax errors ... OK -* checking whether the package can be loaded ... OK -* checking whether the package can be loaded with stated dependencies ... OK -* checking whether the package can be unloaded cleanly ... OK -* checking whether the namespace can be loaded with stated dependencies ... OK -* checking whether the namespace can be unloaded cleanly ... OK -* checking loading without being on the library search path ... OK -* checking dependencies in R code ... OK -* checking S3 generic/method consistency ... OK -* checking replacement functions ... OK -* checking foreign function calls ... OK -* checking R code for possible problems ... NOTE -logger.message: no visible global function definition for ‘dump.frames’ -logger.message: no visible binding for global variable ‘dump.log’ -logger.message: no visible global function definition for ‘tail’ -print2string: no visible global function definition for - ‘capture.output’ -Undefined global functions or variables: - capture.output dump.frames dump.log tail -Consider adding - importFrom("utils", "capture.output", "dump.frames", "tail") -to your NAMESPACE file. -* checking Rd files ... OK -* checking Rd metadata ... OK -* checking Rd line widths ... NOTE -Rd file 'severeifnot.Rd': - \examples lines wider than 100 characters: - severeifnot("I absolutely cannot deal with the fact that something is not a list.", is.list(a), is.list(b)) - -These lines will be truncated in the PDF manual. -* checking Rd cross-references ... WARNING -Missing link or links in documentation object 'logger.severe.Rd': - ‘logger.setQuitOnSevere(FALSE)’ - -See section 'Cross-references' in the 'Writing R Extensions' manual. - -* checking for missing documentation entries ... OK -* checking for code/documentation mismatches ... OK -* checking Rd \usage sections ... WARNING -Undocumented arguments in documentation object 'check_conditions' - ‘...’ - -Undocumented arguments in documentation object 'is_definitely_true' - ‘x’ - -Undocumented arguments in documentation object 'logger.getLevelNumber' - ‘level’ - -Functions with \usage entries need to have the appropriate \alias -entries, and all their arguments documented. -The \usage entries must correspond to syntactically valid R code. -See chapter ‘Writing R documentation files’ in the ‘Writing R -Extensions’ manual. -* checking Rd contents ... OK -* checking for unstated dependencies in examples ... OK -* checking examples ... OK -* DONE -Status: 2 WARNINGs, 3 NOTEs From 99eba252bdbbe6dc48a1b36e6826aa316691bf67 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 14 Jul 2021 22:03:31 +0200 Subject: [PATCH 2098/2289] typo --- modules/meta.analysis/DESCRIPTION | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION index 6471f16d832..d383b83b9d5 100644 --- a/modules/meta.analysis/DESCRIPTION +++ b/modules/meta.analysis/DESCRIPTION @@ -11,7 +11,6 @@ Authors@R: c(person("Mike","Dietze", role = c("aut")), person("Rob","Kooper", role = c("aut")), person("Shawn", "Serbin", role = c("aut")), person("Shashank", "Singh", role = c("aut"))) - Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model parameterization, execution, and analysis. 
The goal of PECAn is to From 5703b5c3b40adc7fd703960fc56783773bf2a6fc Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Wed, 14 Jul 2021 19:15:27 -0400 Subject: [PATCH 2099/2289] Update rename_jags_columns.R --- modules/meta.analysis/R/rename_jags_columns.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/meta.analysis/R/rename_jags_columns.R b/modules/meta.analysis/R/rename_jags_columns.R index ad1d4065d33..d18fc5bb453 100644 --- a/modules/meta.analysis/R/rename_jags_columns.R +++ b/modules/meta.analysis/R/rename_jags_columns.R @@ -24,7 +24,7 @@ rename_jags_columns <- function(data) { stat <- NULL n <- NULL - trt <- NULL + trt_id <- NULL citation_id <- NULL transformed <- transform(data, Y = mean, From ae785b131b8516415980092e4b01f75214170cb5 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Thu, 15 Jul 2021 14:33:53 +0530 Subject: [PATCH 2100/2289] Update base/logger/DESCRIPTION Co-authored-by: Chris Black --- base/logger/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index c58b04d59f0..bd9c335e9b7 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -10,7 +10,7 @@ Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "shashanksingh819@gmail.com"), person("Chris", "Black", role = c("ctb"))) Description: Special logger functions for tracking execution status and the environment. -BugReports: https://github.com/PecanProject/pecan +BugReports: https://github.com/PecanProject/pecan/issues Imports: utils Suggests: testthat License: BSD_3_clause + file LICENSE From a4e9fd55065579d28e2d7770cf04e78280e60475 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Thu, 15 Jul 2021 12:52:50 +0000 Subject: [PATCH 2101/2289] automated documentation update --- base/workflow/man/model_specific_tags.Rd | 3 +++ 1 file changed, 3 insertions(+) diff --git a/base/workflow/man/model_specific_tags.Rd b/base/workflow/man/model_specific_tags.Rd index 4e4685cbae5..64b33a83eb4 100644 --- a/base/workflow/man/model_specific_tags.Rd +++ b/base/workflow/man/model_specific_tags.Rd @@ -10,6 +10,9 @@ model_specific_tags(settings, model.info) \item{settings}{pecan xml settings} \item{model.info}{model info extracted from bety} +} +\value{ + } \description{ Title From 0c433bbc28c15dc42cf72741e578b60972ee8dbc Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 15 Jul 2021 09:51:38 -0400 Subject: [PATCH 2102/2289] Trying to get build unstuck --- base/workflow/R/create_execute_test_xml.R | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index fa8a48c3216..da4e9806328 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -172,10 +172,7 @@ create_execute_test_xml <- function(model_id, ) } - - - -#' Title +#' model_specific_tags #' #' @param settings pecan xml settings #' @param model.info model info extracted from bety From fd78538e1f1208bf365acf7b621a720e67bfa2ea Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Thu, 15 Jul 2021 15:57:51 +0000 Subject: [PATCH 2103/2289] automated documentation update --- base/workflow/man/model_specific_tags.Rd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/workflow/man/model_specific_tags.Rd b/base/workflow/man/model_specific_tags.Rd index 64b33a83eb4..d1a60ecd442 100644 --- a/base/workflow/man/model_specific_tags.Rd +++ 
b/base/workflow/man/model_specific_tags.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/create_execute_test_xml.R \name{model_specific_tags} \alias{model_specific_tags} -\title{Title} +\title{model_specific_tags} \usage{ model_specific_tags(settings, model.info) } @@ -15,5 +15,5 @@ model_specific_tags(settings, model.info) } \description{ -Title +model_specific_tags } From ac312d5b831e251567cab4762db5b6ba6417f462 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 15 Jul 2021 12:06:59 -0400 Subject: [PATCH 2104/2289] trigger build --- models/linkages/R/met2model.LINKAGES.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/linkages/R/met2model.LINKAGES.R b/models/linkages/R/met2model.LINKAGES.R index a8e119c0f79..d45f5d807a8 100644 --- a/models/linkages/R/met2model.LINKAGES.R +++ b/models/linkages/R/met2model.LINKAGES.R @@ -104,7 +104,7 @@ met2model.LINKAGES <- function(in.path, in.prefix, outfolder, start_date, end_da month_matrix_temp_mean <- matrix(NA, nyear, 12) for (i in seq_len(nyear)) { - + year_txt <- formatC(year[i], width = 4, format = "d", flag = "0") infile <- file.path(in.path, paste0(in.prefix, year_txt, ".nc")) From 4e6a037a412bec8ee74c6a949af050289f0cabd5 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 15 Jul 2021 12:47:40 -0400 Subject: [PATCH 2105/2289] specify model_specific_tags return --- base/workflow/R/create_execute_test_xml.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index da4e9806328..a93a6399114 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -177,7 +177,7 @@ create_execute_test_xml <- function(model_id, #' @param settings pecan xml settings #' @param model.info model info extracted from bety #' -#' @return +#' @return updated settings list #' @export #' model_specific_tags <- function(settings, model.info){ From 53fe3a56513e82d2d2456c73b3f25e665fa07413 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 15 Jul 2021 13:22:04 -0400 Subject: [PATCH 2106/2289] trigger --- models/linkages/R/write.config.LINKAGES.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/models/linkages/R/write.config.LINKAGES.R b/models/linkages/R/write.config.LINKAGES.R index 117f7763b6a..b656d14f294 100644 --- a/models/linkages/R/write.config.LINKAGES.R +++ b/models/linkages/R/write.config.LINKAGES.R @@ -103,7 +103,7 @@ write.config.LINKAGES <- function(defaults = NULL, trait.values, settings, run.i climate_file <- settings$run$inputs$met$path load(climate_file) } - + temp.mat <- matrix(temp.mat[which(rownames(temp.mat)%in%start.year:end.year),],ncol=12,byrow=F) precip.mat <- matrix(precip.mat[which(rownames(precip.mat)%in%start.year:end.year),],ncol=12,byrow=F) @@ -302,4 +302,4 @@ write.config.LINKAGES <- function(defaults = NULL, trait.values, settings, run.i jobsh <- gsub("@PFT_NAMES@", pft_names, jobsh) writeLines(jobsh, con = file.path(settings$rundir, run.id, "job.sh")) Sys.chmod(file.path(settings$rundir, run.id, "job.sh")) -} # write.config.LINKAGES \ No newline at end of file +} # write.config.LINKAGES From 463202f5234f0627f37b612d12c4269671cbea02 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Thu, 15 Jul 2021 17:46:33 +0000 Subject: [PATCH 2107/2289] automated documentation update --- base/workflow/man/model_specific_tags.Rd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/man/model_specific_tags.Rd 
b/base/workflow/man/model_specific_tags.Rd index d1a60ecd442..db22e556966 100644 --- a/base/workflow/man/model_specific_tags.Rd +++ b/base/workflow/man/model_specific_tags.Rd @@ -12,7 +12,7 @@ model_specific_tags(settings, model.info) \item{model.info}{model info extracted from bety} } \value{ - +updated settings list } \description{ model_specific_tags From f5de2e2b57406deb670994789d53f2bd04686165 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 15 Jul 2021 14:01:13 -0400 Subject: [PATCH 2108/2289] trigger --- base/workflow/R/create_execute_test_xml.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index a93a6399114..848b5c1bc36 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -54,7 +54,7 @@ create_execute_test_xml <- function(model_id, if (is.null(db_bety_hostname)) db_bety_hostname <- config.list$db_bety_hostname if (is.null(db_bety_port)) db_bety_port <- config.list$db_bety_port - #opening a connection to bety + #opening a connection to bety con <- PEcAn.DB::db.open(list( user = db_bety_username, password = db_bety_password, From 42c909a2565de26622d125a039150e0e9f4e296a Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 15 Jul 2021 15:04:02 -0400 Subject: [PATCH 2109/2289] Update create_execute_test_xml.R --- base/workflow/R/create_execute_test_xml.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/workflow/R/create_execute_test_xml.R b/base/workflow/R/create_execute_test_xml.R index 848b5c1bc36..aacb05d8b08 100644 --- a/base/workflow/R/create_execute_test_xml.R +++ b/base/workflow/R/create_execute_test_xml.R @@ -119,7 +119,7 @@ create_execute_test_xml <- function(model_id, constants = list(num = 1) ) ) %>% - setNames(rep("pft", length(.))) + stats::setNames(rep("pft", length(.data))) #Meta Analysis settings$meta.analysis <- list(iter = 3000, random.effects = FALSE) From 7d931c0dadb80f59ea51b6bfa5145bc1ec636c9e Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Thu, 22 Jul 2021 21:25:55 +0530 Subject: [PATCH 2110/2289] Update DESCRIPTION --- base/logger/DESCRIPTION | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index bd9c335e9b7..51b4e274a05 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -2,14 +2,18 @@ Package: PEcAn.logger Title: Logger Functions for PEcAn Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), +Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "kooper@illinois.edu"), person("Alexey", "Shiklomanov", role = c("aut"), email = "ashiklom@bu.edu"), person("Shashank", "Singh", role = c("aut"), email = "shashanksingh819@gmail.com"), - person("Chris", "Black", role = c("ctb"))) -Description: Special logger functions for tracking execution status and the environment. + person("Chris", "Black", role = c("ctb")), + person("University of Illinois, NCSA", role = c("cph"))) +Description: This package adds convenience functions for logging outputs. + This is loosely based on the log4j package. This will enable the user to + set what level messages are printed, and if these messages are written to a file, + console or both. 
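Patches 2101–2105 above and 2107 below are all fallout from one roxygen tag: `model_specific_tags()` carried a bare `#' @return` with no text, which roxygen2 renders as an empty `\value{}` section in the Rd file, until patch 2105 filled it in and patch 2107 regenerated the documentation. A hedged sketch of the header the series converges on (the body here is a stub, not the real implementation in `base/workflow`):

```r
#' model_specific_tags
#'
#' @param settings pecan xml settings
#' @param model.info model info extracted from bety
#' @return updated settings list
#' @export
model_specific_tags <- function(settings, model.info) {
  # the real function adjusts `settings` per model before returning it
  settings
}
```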
BugReports: https://github.com/PecanProject/pecan/issues Imports: utils Suggests: testthat From 751f4bf32b8627567f200739075d2dbbead6a668 Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 22 Jul 2021 15:23:03 -0400 Subject: [PATCH 2111/2289] Bug fix to photosynthesis fitA that was incorrectly counting the number of columns in the covariate matrix when there was only one column --- modules/photosynthesis/R/fitA.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/photosynthesis/R/fitA.R b/modules/photosynthesis/R/fitA.R index 39e38f70abf..53c73077a25 100644 --- a/modules/photosynthesis/R/fitA.R +++ b/modules/photosynthesis/R/fitA.R @@ -220,7 +220,7 @@ if ("leaf" %in% V.random) { if (!is.null(XV)) { Vnames <- gsub(" ", "_", colnames(XV)) Vformula <- paste(Vformula, - paste0("+ betaV", Vnames, "*XV[rep[i],", seq_along(XV), "]", collapse = " ")) + paste0("+ betaV", Vnames, "*XV[rep[i],", seq_len(ncol(XV)), "]", collapse = " ")) Vpriors <- paste0(" betaV", Vnames, "~dnorm(0,0.001)", collapse = "\n") my.model <- sub(pattern = "## Vcmax BETAS", Vpriors, my.model) mydat[["XV"]] <- XV From a156f792212f20075f4e06b286c26369be4a07f4 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 23 Jul 2021 13:22:32 +0530 Subject: [PATCH 2112/2289] Update base/logger/DESCRIPTION Co-authored-by: Chris Black --- base/logger/DESCRIPTION | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 51b4e274a05..39e774a1725 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -10,10 +10,15 @@ Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "shashanksingh819@gmail.com"), person("Chris", "Black", role = c("ctb")), person("University of Illinois, NCSA", role = c("cph"))) -Description: This package adds convenience functions for logging outputs. - This is loosely based on the log4j package. This will enable the user to - set what level messages are printed, and if these messages are written to a file, - console or both. +Description: Convenience functions for logging outputs from 'PEcAn', + the Predictive Ecosystem Analyzer (LeBauer et al. 2017; + doi:10.1890/12-0137.1). Enables the user to set what level of messages are + printed, as well as whether these messages are written to the console, + a file, or both. It also allows control over whether severe errors should + stop execution of the PEcAn workflow; this allows strictness when deugging + and lenience when running large batches of simulations that should not be + terminated by errors in individual models. PEcAn.logger is loosely based on + the 'log4j' package. 
BugReports: https://github.com/PecanProject/pecan/issues Imports: utils Suggests: testthat From 5083203a5abf3fb33a190a65a91f6a195ff14531 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 27 Jul 2021 01:32:21 +0530 Subject: [PATCH 2113/2289] Update DESCRIPTION --- base/logger/DESCRIPTION | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 39e774a1725..d0041dda347 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -1,7 +1,7 @@ Package: PEcAn.logger Title: Logger Functions for PEcAn -Version: 1.7.1 -Date: 2019-09-05 +Version: 1.8.0 +Date: 2019-27-07 Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "kooper@illinois.edu"), person("Alexey", "Shiklomanov", role = c("aut"), From ae4c779af8f65174950380e51695bd70f4c752ac Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 27 Jul 2021 01:32:52 +0530 Subject: [PATCH 2114/2289] Update NEWS.md --- base/logger/NEWS.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/NEWS.md b/base/logger/NEWS.md index 48168c23984..c83ce890025 100644 --- a/base/logger/NEWS.md +++ b/base/logger/NEWS.md @@ -1,4 +1,4 @@ -# PEcAn.logger 1.7.1.9000 +# PEcAn.logger 1.8.0 ## Fixed * Logger calls no longer create a stray `dump.log` object in the global environment From 58eb3608c21b19bc94c2dd72c21a5eb4f2965e8a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 27 Jul 2021 02:37:16 +0530 Subject: [PATCH 2115/2289] Update base/logger/DESCRIPTION Co-authored-by: Chris Black --- base/logger/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index d0041dda347..9b003c3d03d 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -1,7 +1,7 @@ Package: PEcAn.logger Title: Logger Functions for PEcAn Version: 1.8.0 -Date: 2019-27-07 +Date: 2021-07-27 Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), email = "kooper@illinois.edu"), person("Alexey", "Shiklomanov", role = c("aut"), From f9a3ede10a271f7f8621a9f82dfec0af9206b878 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 28 Jul 2021 16:42:11 +0200 Subject: [PATCH 2116/2289] logger: fix NOTEs from r-hub spelling and URL checks --- base/logger/DESCRIPTION | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index 9b003c3d03d..b89e04a6e05 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -11,15 +11,16 @@ Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), person("Chris", "Black", role = c("ctb")), person("University of Illinois, NCSA", role = c("cph"))) Description: Convenience functions for logging outputs from 'PEcAn', - the Predictive Ecosystem Analyzer (LeBauer et al. 2017; - doi:10.1890/12-0137.1). Enables the user to set what level of messages are + the Predictive Ecosystem Analyzer (LeBauer et al. 2017) + . Enables the user to set what level of messages are printed, as well as whether these messages are written to the console, a file, or both. 
It also allows control over whether severe errors should - stop execution of the PEcAn workflow; this allows strictness when deugging + stop execution of the 'PEcAn' workflow; this allows strictness when debugging and lenience when running large batches of simulations that should not be - terminated by errors in individual models. PEcAn.logger is loosely based on + terminated by errors in individual models. It is loosely based on the 'log4j' package. BugReports: https://github.com/PecanProject/pecan/issues +URL: https://pecanproject.github.io/ Imports: utils Suggests: testthat License: BSD_3_clause + file LICENSE From 5c9b756ad1afc61c026b07b25c2a944bec79e719 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 28 Jul 2021 17:13:11 +0200 Subject: [PATCH 2117/2289] missed one --- base/logger/DESCRIPTION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/DESCRIPTION b/base/logger/DESCRIPTION index b89e04a6e05..2a6f19a31ee 100644 --- a/base/logger/DESCRIPTION +++ b/base/logger/DESCRIPTION @@ -1,5 +1,5 @@ Package: PEcAn.logger -Title: Logger Functions for PEcAn +Title: Logger Functions for 'PEcAn' Version: 1.8.0 Date: 2021-07-27 Authors@R: c(person("Rob", "Kooper", role = c("aut", "cre"), From 59d73f4ce7b337c2d72f9cd7e2b4ba4ef8b12f6b Mon Sep 17 00:00:00 2001 From: Rob Kooper Date: Thu, 29 Jul 2021 10:08:07 -0500 Subject: [PATCH 2118/2289] add cran comments --- base/logger/.Rbuildignore | 1 + base/logger/cran-comments.md | 21 +++++++++++++++++++++ 2 files changed, 22 insertions(+) create mode 100644 base/logger/.Rbuildignore create mode 100644 base/logger/cran-comments.md diff --git a/base/logger/.Rbuildignore b/base/logger/.Rbuildignore new file mode 100644 index 00000000000..23d05874bd7 --- /dev/null +++ b/base/logger/.Rbuildignore @@ -0,0 +1 @@ +cran-comments.md diff --git a/base/logger/cran-comments.md b/base/logger/cran-comments.md new file mode 100644 index 00000000000..db8c9ad8c3f --- /dev/null +++ b/base/logger/cran-comments.md @@ -0,0 +1,21 @@ +## Test environments +* local OS X install, R 4.0.2 +* Ubuntu Linux 20.04.2 LTS (on github), R 4.0.3 +* Ubuntu Linux 20.04.1 LTS (on R-hub), R-release, GCC +* Fedora Linux (on R-hub), R-devel, clang, gfortran +* Windows Server 2008 R2 SP1 (on R-hub), R-devel, 32/64 bit + +## R CMD check results +There were no ERRORs or WARNINGs. + +There are 2 NOTES: + +This is a new submission, which is one of the two notes. + +Possibly mis-spelled words in DESCRIPTION: +- LeBauer +- et al +- workflow + +LeBaur is the last name of one of the authors, the other words are +common words used. From 1af599827e745f5a29f053e513544066bffb39dd Mon Sep 17 00:00:00 2001 From: Michael Dietze Date: Thu, 29 Jul 2021 22:54:58 -0400 Subject: [PATCH 2119/2289] Update cran-comments.md --- base/logger/cran-comments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/logger/cran-comments.md b/base/logger/cran-comments.md index db8c9ad8c3f..b47a62c78d2 100644 --- a/base/logger/cran-comments.md +++ b/base/logger/cran-comments.md @@ -17,5 +17,5 @@ Possibly mis-spelled words in DESCRIPTION: - et al - workflow -LeBaur is the last name of one of the authors, the other words are +LeBauer is the last name of one of the authors, the other words are common words used. 
From 6331608f16f6a799bda7759dbf93bed2b3eb5b4c Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 12:28:36 +0530 Subject: [PATCH 2120/2289] Update DESCRIPTION --- base/utils/DESCRIPTION | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/base/utils/DESCRIPTION b/base/utils/DESCRIPTION index 2b728da22ed..304d72b6caa 100644 --- a/base/utils/DESCRIPTION +++ b/base/utils/DESCRIPTION @@ -1,19 +1,20 @@ Package: PEcAn.utils Type: Package -Title: PEcAn functions used for ecological forecasts and - reanalysis +Title: PEcAn Functions Used for Ecological Forecasts and + Reanalysis Version: 1.7.1 Date: 2019-09-05 -Authors@R: c(person("Mike","Dietze"), - person("David","LeBauer"), - person("Xiaohui", "Feng"), - person("Dan"," Wang"), - person("Carl", "Davidson"), - person("Rob","Kooper"), - person("Shawn", "Serbin")) -Author: David LeBauer, Mike Dietze, Xiaohui Feng, Dan Wang, - Carl Davidson, Rob Kooper, Shawn Serbin -Maintainer: David LeBauer +Authors@R: c( + person("Rob","Kooper", role = c("aut", "cre"), + email = "kooper@illinois.edu"), + person("David","LeBauer", role = c("aut")), + person("Xiaohui", "Feng", role = c("aut")), + person("Dan"," Wang", role = c("aut")), + person("Carl", "Davidson", role = c("aut")), + person("Shawn", "Serbin", role = c("aut")), + person("Shashank", "Singh", role = c("aut")), + person("Chris", "Black", role = c("aut")), + person("University of Illinois, NCSA", role = c("cph"))) Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific workflow management tool that is designed to simplify the management of model From 5502ad58ea434e2fea33679a40c75ab74f4eb373 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 12:45:12 +0530 Subject: [PATCH 2121/2289] Delete do_conversions.R --- base/utils/R/do_conversions.R | 103 ---------------------------------- 1 file changed, 103 deletions(-) delete mode 100644 base/utils/R/do_conversions.R diff --git a/base/utils/R/do_conversions.R b/base/utils/R/do_conversions.R deleted file mode 100644 index ff7e2211667..00000000000 --- a/base/utils/R/do_conversions.R +++ /dev/null @@ -1,103 +0,0 @@ -##' @export -##' @aliases do.conversions -##' @name do_conversions -##' @title do_conversions -##' @description Input conversion workflow -##' -##' DEPRECATED: This function has been moved to the PEcAn.workflow package and will be removed from PEcAn.utils. 
-##' @param settings PEcAn settings list -##' @param overwrite.met,overwrite.fia,overwrite.ic logical -##' -##' @author Ryan Kelly, Rob Kooper, Betsy Cowdery, Istem Fer - -do_conversions <- function(settings, overwrite.met = FALSE, overwrite.fia = FALSE, overwrite.ic = FALSE) { - - .Deprecated("PEcAn.workflow::do_conversions") - - if (PEcAn.settings::is.MultiSettings(settings)) { - return(PEcAn.settings::papply(settings, do_conversions)) - } - - needsave <- FALSE - if (is.character(settings$run$inputs)) { - settings$run$inputs <- NULL ## check for empty set - } - - dbfiles.local <- settings$database$dbfiles - dbfiles <- ifelse(!PEcAn.remote::is.localhost(settings$host) & !is.null(settings$host$folder), settings$host$folder, dbfiles.local) - PEcAn.logger::logger.debug("do.conversion outdir",dbfiles) - for (i in seq_along(settings$run$inputs)) { - input <- settings$run$inputs[[i]] - if (is.null(input)) { - next - } - - input.tag <- names(settings$run$input)[i] - PEcAn.logger::logger.info("PROCESSING: ",input.tag) - - - ic.flag <- fia.flag <- FALSE - - if ((input.tag %in% c("css", "pss", "site")) && - is.null(input$path) && !is.null(input$source)) { - if(!is.null(input$useic)){ # set TRUE if IC Workflow, leave empty if not - ic.flag <- TRUE - }else if(input$source == "FIA"){ - fia.flag <- TRUE - # possibly a warning for deprecation in the future - } - } - - # IC conversion : for now for ED only, hence the css/pss/site check - # TRUE - if (ic.flag) { - settings <- PEcAn.data.land::ic_process(settings, input, dir = dbfiles, overwrite = overwrite.ic) - needsave <- TRUE - } - - # keep fia.to.psscss - if (fia.flag) { - settings <- PEcAn.data.land::fia.to.psscss(settings, overwrite = overwrite.fia) - needsave <- TRUE - } - - # soil extraction - if(input.tag == "soil"&& is.null(input$path)){ - settings$run$inputs[[i]][['path']] <- PEcAn.data.land::soil_process(settings,input,dbfiles.local,overwrite=FALSE) - needsave <- TRUE - ## NOTES: at the moment only processing soil locally. 
Need to think about how to generalize this - ## because many models will read PEcAn standard in write.configs and write out into settings - ## which is done locally in rundir and then rsync'ed to remote - ## rather than having a model-format soils file that is processed remotely - } - # met conversion - - if (input.tag == "met") { - name <- ifelse(is.null(settings$browndog), "MET Process", "BrownDog") - if ( (PEcAn.utils::status.check(name) == 0)) { ## previously is.null(input$path) && - PEcAn.logger::logger.info("calling met.process: ",settings$run$inputs[[i]][['path']]) - settings$run$inputs[[i]] <- - PEcAn.data.atmosphere::met.process( - site = settings$run$site, - input_met = settings$run$inputs$met, - start_date = settings$run$start.date, - end_date = settings$run$end.date, - model = settings$model$type, - host = settings$host, - dbparms = settings$database$bety, - dir = dbfiles, - browndog = settings$browndog, - spin = settings$spin, - overwrite = overwrite.met) - PEcAn.logger::logger.debug("updated met path: ",settings$run$inputs[[i]][['path']]) - needsave <- TRUE - } - } - } - if (needsave) { - XML::saveXML(PEcAn.settings::listToXml(settings, "pecan"), file = file.path(settings$outdir, "pecan.METProcess.xml")) - } else if (file.exists(file.path(settings$outdir, "pecan.METProcess.xml"))) { - settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.METProcess.xml")) - } - return(settings) -} From 2a00ca3dd4f69d1d45471f8679a9ff91b5cdc462 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 12:45:25 +0530 Subject: [PATCH 2122/2289] Delete run.write.configs.R --- base/utils/R/run.write.configs.R | 172 ------------------------------- 1 file changed, 172 deletions(-) delete mode 100644 base/utils/R/run.write.configs.R diff --git a/base/utils/R/run.write.configs.R b/base/utils/R/run.write.configs.R deleted file mode 100644 index 6d7bb7c8062..00000000000 --- a/base/utils/R/run.write.configs.R +++ /dev/null @@ -1,172 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##' Main driver function to call the ecosystem model specific (e.g. ED, SiPNET) -##' run and configuration file scripts -##' -##' DEPRECATED: This function has been moved to the PEcAn.workflow package and will be removed from PEcAn.utils. -##' -##' @name run.write.configs -##' @title Run model specific write configuration functions -##' @param model the ecosystem model to generate the configuration files for -##' @param write should the runs be written to the database -##' @param ens.sample.method how to sample the ensemble members('halton' sequence or 'uniform' random) -##' @param posterior.files Filenames for posteriors for drawing samples for ensemble and sensitivity -##' analysis (e.g. post.distns.Rdata, or prior.distns.Rdata). Defaults to NA, in which case the -##' most recent posterior or prior (in that order) for the workflow is used. Should be a vector, -##' with one entry for each PFT. 
File name only; PFT outdirs will be appended (this forces use of only -##' files within this workflow, to avoid confusion). -##' -##' @return an updated settings list, which includes ensemble IDs for SA and ensemble analysis -##' -##' @author David LeBauer, Shawn Serbin, Ryan Kelly, Mike Dietze -##' @export -run.write.configs <- function(settings, write = TRUE, ens.sample.method = "uniform", - posterior.files = rep(NA, length(settings$pfts)), - overwrite = TRUE) { - .Deprecated("PEcAn.workflow::run.write.configs") - - con <- PEcAn.DB::db.open(settings$database$bety) - on.exit(PEcAn.DB::db.close(con), add = TRUE) - - ## Which posterior to use? - for (i in seq_along(settings$pfts)) { - ## if posterior.files is specified us that - if (is.na(posterior.files[i])) { - ## otherwise, check to see if posteriorid exists - if (!is.null(settings$pfts[[i]]$posteriorid)) { - files <- PEcAn.DB::dbfile.check("Posterior", - settings$pfts[[i]]$posteriorid, - con, settings$host$name, return.all = TRUE) - pid <- grep("post.distns.*Rdata", files$file_name) ## is there a posterior file? - if (length(pid) == 0) { - pid <- grep("prior.distns.Rdata", files$file_name) ## is there a prior file? - } - if (length(pid) > 0) { - posterior.files[i] <- file.path(files$file_path[pid], files$file_name[pid]) - } ## otherwise leave posteriors as NA - } - ## otherwise leave NA and get.parameter.samples will look for local - } - } - - ## Sample parameters - model <- settings$model$type - scipen <- getOption("scipen") - options(scipen = 12) - #sample from parameters used for both sensitivity analysis and Ens - get.parameter.samples(settings, posterior.files, ens.sample.method) - load(file.path(settings$outdir, "samples.Rdata")) - - ## remove previous runs.txt - if (overwrite && file.exists(file.path(settings$rundir, "runs.txt"))) { - PEcAn.logger::logger.warn("Existing runs.txt file will be removed.") - unlink(file.path(settings$rundir, "runs.txt")) - } - - load.modelpkg(model) - - ## Check for model-specific write configs - - my.write.config <- paste0("write.config.",model) - if (!exists(my.write.config)) { - PEcAn.logger::logger.error(my.write.config, - "does not exist, please make sure that the model package contains a function called", - my.write.config) - } - - ## Prepare for model output. 
Clean up any old config files (if exists) - my.remove.config <- paste0("remove.config.", model) - if (exists(my.remove.config)) { - do.call(my.remove.config, args = list(settings$rundir, settings)) - } - - # TODO RK : need to write to runs_inputs table - - # Save names - pft.names <- names(trait.samples) - trait.names <- lapply(trait.samples, names) - - ### NEED TO IMPLEMENT: Load Environmental Priors and Posteriors - - ### Sensitivity Analysis - if ("sensitivity.analysis" %in% names(settings)) { - - ### Write out SA config files - PEcAn.logger::logger.info("\n ----- Writing model run config files ----") - sa.runs <- write.sa.configs(defaults = settings$pfts, - quantile.samples = sa.samples, - settings = settings, - model = model, - write.to.db = write) - - # Store output in settings and output variables - runs.samples$sa <- sa.run.ids <- sa.runs$runs - settings$sensitivity.analysis$ensemble.id <- sa.ensemble.id <- sa.runs$ensemble.id - - # Save sensitivity analysis info - fname <- sensitivity.filename(settings, "sensitivity.samples", "Rdata", - all.var.yr = TRUE, pft = NULL) - save(sa.run.ids, sa.ensemble.id, sa.samples, pft.names, trait.names, file = fname) - - } ### End of SA - - ### Write ENSEMBLE - if ("ensemble" %in% names(settings)) { - ens.runs <- write.ensemble.configs(defaults = settings$pfts, - ensemble.samples = ensemble.samples, - settings = settings, - model = model, - write.to.db = write) - - # Store output in settings and output variables - runs.samples$ensemble <- ens.run.ids <- ens.runs$runs - settings$ensemble$ensemble.id <- ens.ensemble.id <- ens.runs$ensemble.id - ens.samples <- ensemble.samples # rename just for consistency - - # Save ensemble analysis info - fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", all.var.yr = TRUE) - save(ens.run.ids, ens.ensemble.id, ens.samples, pft.names, trait.names, file = fname) - } else { - PEcAn.logger::logger.info("not writing config files for ensemble, settings are NULL") - } ### End of Ensemble - - PEcAn.logger::logger.info("###### Finished writing model run config files #####") - PEcAn.logger::logger.info("config files samples in ", file.path(settings$outdir, "run")) - - ### Save output from SA/Ensemble runs - # A lot of this is duplicate with the ensemble/sa specific output above, but kept for backwards compatibility. 
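For orientation, a minimal sketch of how later workflow steps reload the objects written by the save() call below (object names assumed from that call):

    # hypothetical consumer of samples.Rdata; object names follow the save() below
    load(file.path(settings$outdir, "samples.Rdata"))
    str(trait.samples)  # per-PFT parameter draws
    str(runs.samples)   # run ids recorded for the SA and ensemble runs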
-  save(ensemble.samples, trait.samples, sa.samples, runs.samples, pft.names, trait.names,
-       file = file.path(settings$outdir, "samples.Rdata"))
-  PEcAn.logger::logger.info("parameter values for runs in ", file.path(settings$outdir, "samples.RData"))
-  options(scipen = scipen)
-
-  return(invisible(settings))
-} # run.write.configs
-
-
-#' @export
-runModule.run.write.configs <- function(settings, overwrite = TRUE) {
-  .Deprecated("PEcAn.workflow::runModule.run.write.configs")
-  if (PEcAn.settings::is.MultiSettings(settings)) {
-    if (overwrite && file.exists(file.path(settings$rundir, "runs.txt"))) {
-      PEcAn.logger::logger.warn("Existing runs.txt file will be removed.")
-      unlink(file.path(settings$rundir, "runs.txt"))
-    }
-    return(PEcAn.settings::papply(settings, runModule.run.write.configs, overwrite = FALSE))
-  } else if (PEcAn.settings::is.Settings(settings)) {
-    write <- settings$database$bety$write
-    # double check making sure we have method for parameter sampling
-    if (is.null(settings$ensemble$samplingspace$parameters$method)) settings$ensemble$samplingspace$parameters$method <- "uniform"
-    ens.sample.method <- settings$ensemble$samplingspace$parameters$method
-    return(run.write.configs(settings, write, ens.sample.method, overwrite = overwrite))
-  } else {
-    stop("runModule.run.write.configs only works with Settings or MultiSettings")
-  }
-} # runModule.run.write.configs

From 2fdbaf409bbc68c800f364b8fea8ea8feae0128b Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 14:07:43 +0530
Subject: [PATCH 2123/2289] Update convert.input.R

No global variable note
---
 base/utils/R/convert.input.R | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R
index 7912dd130fe..f6f46eea1b2 100644
--- a/base/utils/R/convert.input.R
+++ b/base/utils/R/convert.input.R
@@ -96,7 +96,7 @@ convert.input <-
                                    mimetype, site.id, start_date, end_date))
 
     # TODO see issue #18
-    Rbinary <- ifelse(!exists("settings") || is.null(settings$host$Rbinary),"R",settings$host$Rbinary)
+    Rbinary <- ifelse(!exists("settings") || is.null(get("settings")$host$Rbinary), "R", get("settings")$host$Rbinary)
 
     n <- nchar(outfolder)
     if (substr(outfolder, n, n) != "/") {
@@ -168,7 +168,8 @@ convert.input <-
                                              hostname = host$name,
                                              exact.dates = TRUE,
                                              pattern = filename_pattern)
-      
+
+      id <- NULL
       if(nrow(existing.dbfile[[i]]) > 0) {
 
         existing.input[[i]] <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE id=", existing.dbfile[[i]]$container_id),con)

From 4f35f190796d2956085a231c3ea6b862a0a13b29 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 14:16:32 +0530
Subject: [PATCH 2124/2289] Update read.output.R

undeclared dependency PEcAn.benchmark
---
 base/utils/R/read.output.R | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/base/utils/R/read.output.R b/base/utils/R/read.output.R
index ef15e2925e2..1fa2aa4a689 100644
--- a/base/utils/R/read.output.R
+++ b/base/utils/R/read.output.R
@@ -1,7 +1,7 @@
 #-------------------------------------------------------------------------------
 # Copyright (c) 2012 University of Illinois, NCSA.
 # All rights reserved.
This program and the accompanying materials -# are made available under the terms of the +# are made available under the terms of the # University of Illinois/NCSA Open Source License # which accompanies this distribution, and is available at # http://opensource.ncsa.illinois.edu/license.html @@ -11,7 +11,7 @@ ##' ##' Reads the output of a single model run ##' -##' Generic function to convert model output from model-specific format to +##' Generic function to convert model output from model-specific format to ##' a common PEcAn format. This function uses MsTMIP variables except that units of ##' (kg m-2 d-1) are converted to kg ha-1 y-1. Currently this function converts ##' Carbon fluxes: GPP, NPP, NEE, TotalResp, AutoResp, HeteroResp, @@ -31,7 +31,7 @@ ##' variables in output file.. ##' @param dataframe Logical: if TRUE, will return output in a ##' `data.frame` format with a posix column. Useful for -##' [PEcAn.benchmark::align_data()] and plotting. +##' `PEcAn.benchmark::align.data` and plotting. ##' @param pft.name character string, name of the plant functional ##' type (PFT) to read PFT-specific output. If `NULL` no ##' PFT-specific output will be read even the variable has PFT as a @@ -236,7 +236,7 @@ read.output <- function(runid, outdir, # check if the variable has 'pft' as a dimension if ("pft" %in% sapply(nc$var[[v]]$dim, `[[`, "name")) { # means there are PFT specific outputs we want - # the variable *PFT* in standard netcdfs has *pft* dimension, + # the variable *PFT* in standard netcdfs has *pft* dimension, # numbers as values, and full pft names as an attribute # parse pft names and match the requested pft.string <- ncdf4::ncatt_get(nc, "PFT", verbose = verbose) @@ -260,7 +260,7 @@ read.output <- function(runid, outdir, } } } # end of per-pft read - + # Dropping attempt to provide more sensible units because of graph unit errors, # issue #792 # if (v %in% c(cflux, wflux)) { @@ -273,7 +273,7 @@ read.output <- function(runid, outdir, if (print_summary) { result_means <- vapply(result, mean, numeric(1), na.rm = TRUE) - result_medians <- vapply(result, median, numeric(1), na.rm = TRUE) + result_medians <- vapply(result, stats::median, numeric(1), na.rm = TRUE) summary_matrix <- signif(cbind(Mean = result_means, Median = result_medians), 3) rownames(summary_matrix) <- names(result) PEcAn.logger::logger.info( From e16fada24c97eb4f918ac687b417901e478d894d Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 14:21:27 +0530 Subject: [PATCH 2125/2289] Delete plots.R --- base/utils/R/plots.R | 317 ------------------------------------------- 1 file changed, 317 deletions(-) delete mode 100644 base/utils/R/plots.R diff --git a/base/utils/R/plots.R b/base/utils/R/plots.R deleted file mode 100644 index 4bf7e2513d2..00000000000 --- a/base/utils/R/plots.R +++ /dev/null @@ -1,317 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##' Variable-width (dagonally cut) histogram -##' -##' -##' When constructing a histogram, it is common to make all bars the same width. 
-##' One could also choose to make them all have the same area. -##' These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers. -##' We describe a compromise approach which avoids both of these defects. We regard the histogram as an exploratory device, rather than as an estimate of a density. -##' @name dhist -##' @title Diagonally Cut Histogram -##' @param x is a numeric vector (the data) -##' @param a is the scaling factor, default is 5 * IQR -##' @param nbins is the number of bins, default is assigned by the Stuges method -##' @param rx is the range used for the left of the left-most bin to the right of the right-most bin -##' @param eps used to set artificial bound on min width / max height of bins as described in Denby and Mallows (2009) on page 24. -##' @param xlab is label for the x axis -##' @param plot = TRUE produces the plot, FALSE returns the heights, breaks and counts -##' @param lab.spikes = TRUE labels the % of data in the spikes -##' @return list with two elements, heights of length n and breaks of length n+1 indicating the heights and break points of the histogram bars. -##' @author Lorraine Denby, Colin Mallows -##' @references Lorraine Denby, Colin Mallows. Journal of Computational and Graphical Statistics. March 1, 2009, 18(1): 21-31. doi:10.1198/jcgs.2009.0002. -dhist <- function(x, a = 5 * iqr(x), nbins = grDevices::nclass.Sturges(x), rx = range(x, na.rm = TRUE), - eps = 0.15, xlab = "x", plot = TRUE, lab.spikes = TRUE) { - - if (is.character(nbins)) { - nbins <- switch(casefold(nbins), - sturges = grDevices::nclass.Sturges(x), - fd = grDevices::nclass.FD(x), - scott = grDevices::nclass.scott(x), - stop("Nclass method not recognized")) - } else { - if (is.function(nbins)) { - nbins <- nbins(x) - } - } - - x <- sort(x[!is.na(x)]) - if (a == 0) { - a <- diff(range(x)) / 1e+08 - } - if (a != 0 & a != Inf) { - n <- length(x) - h <- (rx[2] + a - rx[1]) / nbins - ybr <- rx[1] + h * (0:nbins) - yupper <- x + (a * seq_len(n)) / n - # upper and lower corners in the ecdf - ylower <- yupper - a / n - # - cmtx <- cbind(cut(yupper, breaks = ybr), - cut(yupper, breaks = ybr, left.include = TRUE), - cut(ylower, breaks = ybr), - cut(ylower, breaks = ybr, left.include = TRUE)) - cmtx[1, 3] <- cmtx[1, 4] <- 1 - # to replace NAs when default r is used - cmtx[n, 1] <- cmtx[n, 2] <- nbins - # checksum <- apply(cmtx, 1, sum) %% 4 - checksum <- (cmtx[, 1] + cmtx[, 2] + cmtx[, 3] + cmtx[, 4]) %% 4 - # will be 2 for obs. 
that straddle two bins - straddlers <- (1:n)[checksum == 2] - # to allow for zero counts - if (length(straddlers) > 0) { - counts <- table(c(1:nbins, cmtx[-straddlers, 1])) - } else { - counts <- table(c(1:nbins, cmtx[, 1])) - } - counts <- counts - 1 - # - if (length(straddlers) > 0) { - for (i in straddlers) { - binno <- cmtx[i, 1] - theta <- ((yupper[i] - ybr[binno]) * n) / a - counts[binno - 1] <- counts[binno - 1] + (1 - theta) - counts[binno] <- counts[binno] + theta - } - } - xbr <- ybr - xbr[-1] <- ybr[-1] - (a * cumsum(counts)) / n - spike <- eps * diff(rx)/nbins - flag.vec <- c(diff(xbr) < spike, FALSE) - if (sum(abs(diff(xbr)) <= spike) > 1) { - xbr.new <- xbr - counts.new <- counts - diff.xbr <- abs(diff(xbr)) - amt.spike <- diff.xbr[length(diff.xbr)] - for (i in rev(2:length(diff.xbr))) { - if (diff.xbr[i - 1] <= spike & diff.xbr[i] <= spike & !is.na(diff.xbr[i])) { - amt.spike <- amt.spike + diff.xbr[i - 1] - counts.new[i - 1] <- counts.new[i - 1] + counts.new[i] - xbr.new[i] <- NA - counts.new[i] <- NA - flag.vec[i - 1] <- TRUE - } else { - amt.spike <- diff.xbr[i - 1] - } - } - flag.vec <- flag.vec[!is.na(xbr.new)] - flag.vec <- flag.vec[-length(flag.vec)] - counts <- counts.new[!is.na(counts.new)] - xbr <- xbr.new[!is.na(xbr.new)] - - } else { - flag.vec <- flag.vec[-length(flag.vec)] - } - widths <- abs(diff(xbr)) - ## N.B. argument 'widths' in barplot must be xbr - heights <- counts/widths - } - bin.size <- length(x) / nbins - cut.pt <- unique(c(min(x) - abs(min(x)) / 1000, - stats::approx(seq(length(x)), x, seq_len(nbins - 1) * bin.size, rule = 2)$y, max(x))) - aa <- graphics::hist(x, breaks = cut.pt, plot = FALSE, probability = TRUE) - if (a == Inf) { - heights <- aa$counts - xbr <- aa$breaks - } - amt.height <- 3 - q75 <- stats::quantile(heights, 0.75) - if (sum(flag.vec) != 0) { - amt <- max(heights[!flag.vec]) - ylim.height <- amt * amt.height - ind.h <- flag.vec & heights > ylim.height - flag.vec[heights < ylim.height * (amt.height - 1) / amt.height] <- FALSE - heights[ind.h] <- ylim.height - } - amt.txt <- 0 - end.y <- (-10000) - if (plot) { - graphics::barplot(heights, abs(diff(xbr)), - space = 0, density = -1, - xlab = xlab, plot = TRUE, - xaxt = "n", yaxt = "n") - at <- pretty(xbr) - graphics::axis(1, at = at - xbr[1], labels = as.character(at)) - if (lab.spikes) { - if (sum(flag.vec) >= 1) { - usr <- graphics::par("usr") - for (i in seq(length(xbr) - 1)) { - if (!flag.vec[i]) { - amt.txt <- 0 - if (xbr[i] - xbr[1] < end.y) { - amt.txt <- 1 - } - } else { - amt.txt <- amt.txt + 1 - end.y <- xbr[i] - xbr[1] + 3 * graphics::par("cxy")[1] - } - if (flag.vec[i]) { - txt <- paste0(" ", format(round(counts[i]/sum(counts) * 100)), "%") - graphics::par(xpd = TRUE) - graphics::text(xbr[i + 1] - xbr[1], - ylim.height - graphics::par("cxy")[2] * (amt.txt -1), txt, adj = 0) - } - } - } else print("no spikes or more than one spike") - } - return(invisible(list(heights = heights, xbr = xbr))) - } else { - return(list(heights = heights, xbr = xbr, counts = counts)) - } -} # dhist - - -#--------------------------------------------------------------------------------------------------# -##' Calculate interquartile range -##' -##' Calculates the 25th and 75th quantiles given a vector x; used in function \link{dhist}. -##' @name iqr -##' @title Interquartile range -##' @param x vector -##' @return numeric vector of length 2, with the 25th and 75th quantiles of input vector x. 
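A quick usage sketch for iqr(), assuming any numeric input vector:

    x <- stats::rnorm(1000)
    iqr(x)  # one number: the 75th minus the 25th percentile of x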
-iqr <- function(x) { - return(diff(stats::quantile(x, c(0.25, 0.75), na.rm = TRUE))) -} # iqr - - -##' Creates empty ggplot object -##' -##' An empty base plot to which layers created by other functions -##' (\code{\link{plot_data}}, \code{\link[PEcAn.priors]{plot_prior.density}}, -##' \code{\link[PEcAn.priors]{plot_posterior.density}}) can be added. -##' @name create.base.plot -##' @title Create Base Plot -##' @return empty ggplot object -##' @export -##' @author David LeBauer -create.base.plot <- function() { - need_packages("ggplot2") - base.plot <- ggplot2::ggplot() - return(base.plot) -} # create.base.plot - - -##--------------------------------------------------------------------------------------------------# -##' Add data to an existing plot or create a new one from \code{\link{create.base.plot}} -##' -##' Used to add raw data or summary statistics to the plot of a distribution. -##' The height of Y is arbitrary, and can be set to optimize visualization. -##' If SE estimates are available, tehse wil be plotted -##' @name plot_data -##' @aliases plot.data -##' @title Add data to plot -##' @param trait.data data to be plotted -##' @param base.plot a ggplot object (grob), -##' created by \code{\link{create.base.plot}} if none provided -##' @param ymax maximum height of y -##' @seealso \code{\link{create.base.plot}} -##' @return updated plot object -##' @author David LeBauer -##' @export plot_data -##' @examples -##' \dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)} -plot_data <- function(trait.data, base.plot = NULL, ymax, color = "black") { - need_packages("ggplot2") - - if (is.null(base.plot)) { - base.plot <- create.base.plot() - } - - n.pts <- nrow(trait.data) - if (n.pts == 1) { - ymax <- ymax / 16 - } else if (n.pts < 5) { - ymax <- ymax / 8 - } else { - ymax <- ymax / 4 - } - y.pts <- seq(0, ymax, length.out = 1 + n.pts)[-1] - - if (!"ghs" %in% names(trait.data)) { - trait.data$ghs <- 1 - } - - plot.data <- data.frame(x = trait.data$Y, - y = y.pts, - se = trait.data$se, - control = !trait.data$trt == 1 & trait.data$ghs == 1) - new.plot <- base.plot + - ggplot2::geom_point(data = plot.data, ggplot2::aes(x = x, y = y, color = control)) + - ggplot2::geom_segment(data = plot.data, - ggplot2::aes(x = x - se, y = y, xend = x + se, yend = y, color = control)) + - ggplot2::scale_color_manual(values = c("black", "grey")) + - ggplot2::theme(legend.position = "none") - return(new.plot) -} # plot_data - - -#--------------------------------------------------------------------------------------------------# -##' Add borders to plot -##' -##' Has ggplot2 display only specified borders, e.g. ('L'-shaped) borders, -##' rather than a rectangle or no border. Note that the order can be significant; -##' for example, if you specify the L border option and then a theme, the theme settings -##' will override the border option, so you need to specify the theme (if any) before the border option, as above. 
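The examples below use opts(), from pre-0.9 ggplot2; a rough modern equivalent of an 'L'-shaped border, assuming current ggplot2 and the df defined in those examples, is:

    # sketch: draw only bottom/left axis lines instead of a full panel border
    ggplot2::ggplot(df, ggplot2::aes(x, y)) + ggplot2::geom_point() +
      ggplot2::theme_bw() +
      ggplot2::theme(panel.border = ggplot2::element_blank(),
                     axis.line = ggplot2::element_line(colour = "black"))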
-##' @name theme_border
-##' @title Theme border for plot
-##' @param type
-##' @param colour
-##' @param size
-##' @param linetype
-##' @return adds borders to ggplot as a side effect
-##' @author Rudolf Cardinal
-##' @author \url{ggplot2 google group}{https://groups.google.com/forum/?fromgroups#!topic/ggplot2/-ZjRE2OL8lE}
-##' @examples
-##' \dontrun{
-##' df = data.frame( x=c(1,2,3), y=c(4,5,6) )
-##' ggplot(data=df, aes(x=x, y=y)) + geom_point() + theme_bw() +
-##' opts(panel.border = theme_border(c('bottom','left')) )
-##' ggplot(data=df, aes(x=x, y=y)) + geom_point() + theme_bw() +
-##' opts(panel.border = theme_border(c('b','l')) )
-##' }
-theme_border <- function(type = c("left", "right", "bottom", "top", "none"),
-                         colour = "black", size = 1, linetype = 1) {
-  type <- match.arg(type, several.ok = TRUE)
-  structure(function(x = 0, y = 0, width = 1, height = 1, ...) {
-    xlist <- c()
-    ylist <- c()
-    idlist <- c()
-    if ("bottom" %in% type) {
-      # bottom
-      xlist <- append(xlist, c(x, x + width))
-      ylist <- append(ylist, c(y, y))
-      idlist <- append(idlist, c(1, 1))
-    }
-    if ("top" %in% type) {
-      # top
-      xlist <- append(xlist, c(x, x + width))
-      ylist <- append(ylist, c(y + height, y + height))
-      idlist <- append(idlist, c(2, 2))
-    }
-    if ("left" %in% type) {
-      # left
-      xlist <- append(xlist, c(x, x))
-      ylist <- append(ylist, c(y, y + height))
-      idlist <- append(idlist, c(3, 3))
-    }
-    if ("right" %in% type) {
-      # right
-      xlist <- append(xlist, c(x + width, x + width))
-      ylist <- append(ylist, c(y, y + height))
-      idlist <- append(idlist, c(4, 4))
-    }
-    grid::polylineGrob(x = xlist, y = ylist,
-                       id = idlist, ...,
-                       default.units = "npc",
-                       gp = grid::gpar(lwd = size,
-                                       col = colour,
-                                       lty = linetype), )
-  }, class = "theme", type = "box", call = match.call())
-} # theme_border

From 436f30a978738d4413fe8b62fd10669053e4fbbe Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 14:22:19 +0530
Subject: [PATCH 2126/2289] Create plots.R

---
 base/visualization/R/plots.R | 317 +++++++++++++++++++++++++++++++++++
 1 file changed, 317 insertions(+)
 create mode 100644 base/visualization/R/plots.R

diff --git a/base/visualization/R/plots.R b/base/visualization/R/plots.R
new file mode 100644
index 00000000000..4bf7e2513d2
--- /dev/null
+++ b/base/visualization/R/plots.R
@@ -0,0 +1,317 @@
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+##' Variable-width (diagonally cut) histogram
+##'
+##'
+##' When constructing a histogram, it is common to make all bars the same width.
+##' One could also choose to make them all have the same area.
+##' These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers.
+##' We describe a compromise approach which avoids both of these defects. We regard the histogram as an exploratory device, rather than as an estimate of a density.
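+##' @examples
+##' \dontrun{
+##' # illustrative sketch only; the input data here are hypothetical
+##' x <- stats::rexp(500)
+##' dh <- dhist(x, nbins = 20)
+##' str(dh)  # invisible list with bar heights and break points
+##' }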
+##' @name dhist
+##' @title Diagonally Cut Histogram
+##' @param x is a numeric vector (the data)
+##' @param a is the scaling factor, default is 5 * IQR
+##' @param nbins is the number of bins, default is assigned by the Sturges method
+##' @param rx is the range used for the left of the left-most bin to the right of the right-most bin
+##' @param eps used to set artificial bound on min width / max height of bins as described in Denby and Mallows (2009) on page 24.
+##' @param xlab is label for the x axis
+##' @param plot = TRUE produces the plot, FALSE returns the heights, breaks and counts
+##' @param lab.spikes = TRUE labels the % of data in the spikes
+##' @return list with two elements, heights of length n and breaks of length n+1 indicating the heights and break points of the histogram bars.
+##' @author Lorraine Denby, Colin Mallows
+##' @references Lorraine Denby, Colin Mallows. Journal of Computational and Graphical Statistics. March 1, 2009, 18(1): 21-31. doi:10.1198/jcgs.2009.0002.
+dhist <- function(x, a = 5 * iqr(x), nbins = grDevices::nclass.Sturges(x), rx = range(x, na.rm = TRUE),
+                  eps = 0.15, xlab = "x", plot = TRUE, lab.spikes = TRUE) {
+
+  if (is.character(nbins)) {
+    nbins <- switch(casefold(nbins),
+                    sturges = grDevices::nclass.Sturges(x),
+                    fd = grDevices::nclass.FD(x),
+                    scott = grDevices::nclass.scott(x),
+                    stop("Nclass method not recognized"))
+  } else {
+    if (is.function(nbins)) {
+      nbins <- nbins(x)
+    }
+  }
+
+  x <- sort(x[!is.na(x)])
+  if (a == 0) {
+    a <- diff(range(x)) / 1e+08
+  }
+  if (a != 0 & a != Inf) {
+    n <- length(x)
+    h <- (rx[2] + a - rx[1]) / nbins
+    ybr <- rx[1] + h * (0:nbins)
+    yupper <- x + (a * seq_len(n)) / n
+    # upper and lower corners in the ecdf
+    ylower <- yupper - a / n
+    #
+    cmtx <- cbind(cut(yupper, breaks = ybr),
+                  cut(yupper, breaks = ybr, left.include = TRUE),
+                  cut(ylower, breaks = ybr),
+                  cut(ylower, breaks = ybr, left.include = TRUE))
+    cmtx[1, 3] <- cmtx[1, 4] <- 1
+    # to replace NAs when default rx is used
+    cmtx[n, 1] <- cmtx[n, 2] <- nbins
+    # checksum <- apply(cmtx, 1, sum) %% 4
+    checksum <- (cmtx[, 1] + cmtx[, 2] + cmtx[, 3] + cmtx[, 4]) %% 4
+    # will be 2 for obs. that straddle two bins
+    straddlers <- (1:n)[checksum == 2]
+    # to allow for zero counts
+    if (length(straddlers) > 0) {
+      counts <- table(c(1:nbins, cmtx[-straddlers, 1]))
+    } else {
+      counts <- table(c(1:nbins, cmtx[, 1]))
+    }
+    counts <- counts - 1
+    #
+    if (length(straddlers) > 0) {
+      for (i in straddlers) {
+        binno <- cmtx[i, 1]
+        theta <- ((yupper[i] - ybr[binno]) * n) / a
+        counts[binno - 1] <- counts[binno - 1] + (1 - theta)
+        counts[binno] <- counts[binno] + theta
+      }
+    }
+    xbr <- ybr
+    xbr[-1] <- ybr[-1] - (a * cumsum(counts)) / n
+    spike <- eps * diff(rx)/nbins
+    flag.vec <- c(diff(xbr) < spike, FALSE)
+    if (sum(abs(diff(xbr)) <= spike) > 1) {
+      xbr.new <- xbr
+      counts.new <- counts
+      diff.xbr <- abs(diff(xbr))
+      amt.spike <- diff.xbr[length(diff.xbr)]
+      for (i in rev(2:length(diff.xbr))) {
+        if (diff.xbr[i - 1] <= spike & diff.xbr[i] <= spike & !is.na(diff.xbr[i])) {
+          amt.spike <- amt.spike + diff.xbr[i - 1]
+          counts.new[i - 1] <- counts.new[i - 1] + counts.new[i]
+          xbr.new[i] <- NA
+          counts.new[i] <- NA
+          flag.vec[i - 1] <- TRUE
+        } else {
+          amt.spike <- diff.xbr[i - 1]
+        }
+      }
+      flag.vec <- flag.vec[!is.na(xbr.new)]
+      flag.vec <- flag.vec[-length(flag.vec)]
+      counts <- counts.new[!is.na(counts.new)]
+      xbr <- xbr.new[!is.na(xbr.new)]
+
+    } else {
+      flag.vec <- flag.vec[-length(flag.vec)]
+    }
+    widths <- abs(diff(xbr))
+    ## N.B. argument 'widths' in barplot must be xbr
+    heights <- counts/widths
+  }
+  bin.size <- length(x) / nbins
+  cut.pt <- unique(c(min(x) - abs(min(x)) / 1000,
+                     stats::approx(seq(length(x)), x, seq_len(nbins - 1) * bin.size, rule = 2)$y, max(x)))
+  aa <- graphics::hist(x, breaks = cut.pt, plot = FALSE, probability = TRUE)
+  if (a == Inf) {
+    heights <- aa$counts
+    xbr <- aa$breaks
+  }
+  amt.height <- 3
+  q75 <- stats::quantile(heights, 0.75)
+  if (sum(flag.vec) != 0) {
+    amt <- max(heights[!flag.vec])
+    ylim.height <- amt * amt.height
+    ind.h <- flag.vec & heights > ylim.height
+    flag.vec[heights < ylim.height * (amt.height - 1) / amt.height] <- FALSE
+    heights[ind.h] <- ylim.height
+  }
+  amt.txt <- 0
+  end.y <- (-10000)
+  if (plot) {
+    graphics::barplot(heights, abs(diff(xbr)),
+                      space = 0, density = -1,
+                      xlab = xlab, plot = TRUE,
+                      xaxt = "n", yaxt = "n")
+    at <- pretty(xbr)
+    graphics::axis(1, at = at - xbr[1], labels = as.character(at))
+    if (lab.spikes) {
+      if (sum(flag.vec) >= 1) {
+        usr <- graphics::par("usr")
+        for (i in seq(length(xbr) - 1)) {
+          if (!flag.vec[i]) {
+            amt.txt <- 0
+            if (xbr[i] - xbr[1] < end.y) {
+              amt.txt <- 1
+            }
+          } else {
+            amt.txt <- amt.txt + 1
+            end.y <- xbr[i] - xbr[1] + 3 * graphics::par("cxy")[1]
+          }
+          if (flag.vec[i]) {
+            txt <- paste0(" ", format(round(counts[i]/sum(counts) * 100)), "%")
+            graphics::par(xpd = TRUE)
+            graphics::text(xbr[i + 1] - xbr[1],
+                           ylim.height - graphics::par("cxy")[2] * (amt.txt -1), txt, adj = 0)
+          }
+        }
+      } else print("no spikes or more than one spike")
+    }
+    return(invisible(list(heights = heights, xbr = xbr)))
+  } else {
+    return(list(heights = heights, xbr = xbr, counts = counts))
+  }
+} # dhist
+
+
+#--------------------------------------------------------------------------------------------------#
+##' Calculate interquartile range
+##'
+##' Calculates the 25th and 75th quantiles given a vector x; used in function \link{dhist}.
+##' @name iqr
+##' @title Interquartile range
+##' @param x vector
+##' @return numeric of length 1: the interquartile range (the 75th minus the 25th quantile) of input vector x.
+iqr <- function(x) {
+  return(diff(stats::quantile(x, c(0.25, 0.75), na.rm = TRUE)))
+} # iqr
+
+
+##' Creates empty ggplot object
+##'
+##' An empty base plot to which layers created by other functions
+##' (\code{\link{plot_data}}, \code{\link[PEcAn.priors]{plot_prior.density}},
+##' \code{\link[PEcAn.priors]{plot_posterior.density}}) can be added.
+##' @name create.base.plot
+##' @title Create Base Plot
+##' @return empty ggplot object
+##' @export
+##' @author David LeBauer
+create.base.plot <- function() {
+  need_packages("ggplot2")
+  base.plot <- ggplot2::ggplot()
+  return(base.plot)
+} # create.base.plot
+
+
+##--------------------------------------------------------------------------------------------------#
+##' Add data to an existing plot or create a new one from \code{\link{create.base.plot}}
+##'
+##' Used to add raw data or summary statistics to the plot of a distribution.
+##' The height of Y is arbitrary, and can be set to optimize visualization.
+##' If SE estimates are available, these will be plotted
+##' @name plot_data
+##' @aliases plot.data
+##' @title Add data to plot
+##' @param trait.data data to be plotted
+##' @param base.plot a ggplot object (grob),
+##' created by \code{\link{create.base.plot}} if none provided
+##' @param ymax maximum height of y
+##' @param color point colour (currently unused; default "black")
+##' @seealso \code{\link{create.base.plot}}
+##' @return updated plot object
+##' @author David LeBauer
+##' @export plot_data
+##' @examples
+##' \dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)}
+plot_data <- function(trait.data, base.plot = NULL, ymax, color = "black") {
+  need_packages("ggplot2")
+
+  if (is.null(base.plot)) {
+    base.plot <- create.base.plot()
+  }
+
+  n.pts <- nrow(trait.data)
+  if (n.pts == 1) {
+    ymax <- ymax / 16
+  } else if (n.pts < 5) {
+    ymax <- ymax / 8
+  } else {
+    ymax <- ymax / 4
+  }
+  y.pts <- seq(0, ymax, length.out = 1 + n.pts)[-1]
+
+  if (!"ghs" %in% names(trait.data)) {
+    trait.data$ghs <- 1
+  }
+
+  plot.data <- data.frame(x = trait.data$Y,
+                          y = y.pts,
+                          se = trait.data$se,
+                          control = !trait.data$trt == 1 & trait.data$ghs == 1)
+  new.plot <- base.plot +
+    ggplot2::geom_point(data = plot.data, ggplot2::aes(x = x, y = y, color = control)) +
+    ggplot2::geom_segment(data = plot.data,
+                          ggplot2::aes(x = x - se, y = y, xend = x + se, yend = y, color = control)) +
+    ggplot2::scale_color_manual(values = c("black", "grey")) +
+    ggplot2::theme(legend.position = "none")
+  return(new.plot)
+} # plot_data
+
+
+#--------------------------------------------------------------------------------------------------#
+##' Add borders to plot
+##'
+##' Has ggplot2 display only specified borders, e.g. ('L'-shaped) borders,
+##' rather than a rectangle or no border. Note that the order can be significant;
+##' for example, if you specify the L border option and then a theme, the theme settings
+##' will override the border option, so you need to specify the theme (if any) before the border option, as in the examples below.
+##' @name theme_border
+##' @title Theme border for plot
+##' @param type which borders to draw: any of "left", "right", "bottom", "top", or "none"
+##' @param colour colour of the border lines
+##' @param size line width of the border lines
+##' @param linetype line type of the border lines
+##' @return adds borders to ggplot as a side effect
+##' @author Rudolf Cardinal
+##' @author \href{https://groups.google.com/forum/?fromgroups#!topic/ggplot2/-ZjRE2OL8lE}{ggplot2 google group}
+##' @examples
+##' \dontrun{
+##' df = data.frame( x=c(1,2,3), y=c(4,5,6) )
+##' ggplot(data=df, aes(x=x, y=y)) + geom_point() + theme_bw() +
+##' opts(panel.border = theme_border(c('bottom','left')) )
+##' ggplot(data=df, aes(x=x, y=y)) + geom_point() + theme_bw() +
+##' opts(panel.border = theme_border(c('b','l')) )
+##' }
+theme_border <- function(type = c("left", "right", "bottom", "top", "none"),
+                         colour = "black", size = 1, linetype = 1) {
+  type <- match.arg(type, several.ok = TRUE)
+  structure(function(x = 0, y = 0, width = 1, height = 1, ...)
{ + xlist <- c() + ylist <- c() + idlist <- c() + if ("bottom" %in% type) { + # bottom + xlist <- append(xlist, c(x, x + width)) + ylist <- append(ylist, c(y, y)) + idlist <- append(idlist, c(1, 1)) + } + if ("top" %in% type) { + # top + xlist <- append(xlist, c(x, x + width)) + ylist <- append(ylist, c(y + height, y + height)) + idlist <- append(idlist, c(2, 2)) + } + if ("left" %in% type) { + # left + xlist <- append(xlist, c(x, x)) + ylist <- append(ylist, c(y, y + height)) + idlist <- append(idlist, c(3, 3)) + } + if ("right" %in% type) { + # right + xlist <- append(xlist, c(x + width, x + width)) + ylist <- append(ylist, c(y, y + height)) + idlist <- append(idlist, c(4, 4)) + } + grid::polylineGrob(x = xlist, y = ylist, + id = idlist, ..., + default.units = "npc", + gp = grid::gpar(lwd = size, + col = colour, + lty = linetype), ) + }, class = "theme", type = "box", call = match.call()) +} # theme_border From b1a0083723f35bc661a40a646940129dd4feb5af Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 14:29:09 +0530 Subject: [PATCH 2127/2289] Delete ensemble.R --- base/utils/R/ensemble.R | 352 ---------------------------------------- 1 file changed, 352 deletions(-) delete mode 100644 base/utils/R/ensemble.R diff --git a/base/utils/R/ensemble.R b/base/utils/R/ensemble.R deleted file mode 100644 index c7fa774c1ad..00000000000 --- a/base/utils/R/ensemble.R +++ /dev/null @@ -1,352 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##' Reads output from model ensemble -##' -##' Reads output for an ensemble of length specified by \code{ensemble.size} and bounded by \code{start.year} -##' and \code{end.year} -##' -##' DEPRECATED: This function has been moved to the \code{PEcAn.uncertainty} package. -##' The version in \code{PEcAn.utils} is deprecated, will not be updated to add any new features, -##' and will be removed in a future release of PEcAn. -##' Please use \code{PEcAn.uncertainty::read.ensemble.output} instead. 
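A hedged migration sketch; the call mirrors the one made in get.results later in this patch series, so the argument names are grounded there:

    # same call, new namespace; start/end years, variable, and run ids as defined in that caller
    ensemble.output <- PEcAn.uncertainty::read.ensemble.output(
      settings$ensemble$size,
      pecandir = settings$outdir,
      outdir = settings$modeloutdir,
      start.year = start.year.ens,
      end.year = end.year.ens,
      variable = variable.ens,
      ens.run.ids = ens.run.ids)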
-##' -##' @title Read ensemble output -##' @return a list of ensemble model output -##' @param ensemble.size the number of ensemble members run -##' @param pecandir specifies where pecan writes its configuration files -##' @param outdir directory with model output to use in ensemble analysis -##' @param start.year first year to include in ensemble analysis -##' @param end.year last year to include in ensemble analysis -##' @param variables target variables for ensemble analysis -##' @export -##' @author Ryan Kelly, David LeBauer, Rob Kooper -#--------------------------------------------------------------------------------------------------# -read.ensemble.output <- function(ensemble.size, pecandir, outdir, start.year, end.year, - variable, ens.run.ids = NULL) { - - .Deprecated( - new = "PEcAn.uncertainty::read.ensemble.output", - msg = paste( - "read.ensemble.output has been moved to PEcAn.uncertainty and is deprecated from PEcAn.utils.", - "Please use PEcAn.uncertainty::read.ensemble.output instead.", - "PEcAn.utils::read.ensemble.output will not be updated and will be removed from a future version of PEcAn.", - sep = "\n")) - - if (is.null(ens.run.ids)) { - samples.file <- file.path(pecandir, "samples.Rdata") - if (file.exists(samples.file)) { - load(samples.file) - ens.run.ids <- runs.samples$ensemble - } else { - stop(samples.file, "not found required by read.ensemble.output") - } - } - - expr <- variable$expression - variables <- variable$variables - - ensemble.output <- list() - for (row in rownames(ens.run.ids)) { - run.id <- ens.run.ids[row, "id"] - PEcAn.logger::logger.info("reading ensemble output from run id: ", run.id) - - for(var in seq_along(variables)){ - out.tmp <- read.output(run.id, file.path(outdir, run.id), start.year, end.year, variables[var]) - assign(variables[var], out.tmp[[variables[var]]]) - } - - # derivation - out <- eval(parse(text = expr)) - - ensemble.output[[row]] <- mean(out, na.rm= TRUE) - - } - return(ensemble.output) -} # read.ensemble.output - - -##' Get parameter values used in ensemble -##' -##' DEPRECATED: This function has been moved to the \code{PEcAn.uncertainty} package. -##' The version in \code{PEcAn.utils} is deprecated, will not be updated to add any new features, -##' and will be removed in a future release of PEcAn. -##' Please use \code{PEcAn.uncertainty::get.ensemble.samples} instead. - -##' Returns a matrix of randomly or quasi-randomly sampled trait values -##' to be assigned to traits over several model runs. -##' given the number of model runs and a list of sample distributions for traits -##' The model run is indexed first by model run, then by trait -##' -##' @title Get Ensemble Samples -##' @name get.ensemble.samples -##' @param ensemble.size number of runs in model ensemble -##' @param pft.samples random samples from parameter distribution, e.g. from a MCMC chain -##' @param env.samples env samples -##' @param method the method used to generate the ensemble samples. Random generators: uniform, uniform with latin hypercube permutation. Quasi-random generators: halton, sobol, torus. Random generation draws random variates whereas quasi-random generation is deterministic but well equidistributed. Default is uniform. For small ensemble size with relatively large parameter number (e.g ensemble size < 5 and # of traits > 5) use methods other than halton. 
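A brief usage sketch of the sampling methods listed above (pft.samples and env.samples are assumed to already exist in the caller):

    # 100 quasi-random ensemble members from a Sobol sequence
    ens <- get.ensemble.samples(100, pft.samples, env.samples, method = "sobol")
    # 100 uniform random members (the default)
    ens <- get.ensemble.samples(100, pft.samples, env.samples)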
-##' @param param.names a list of parameter names that were fitted either by MA or PDA, important argument, if NULL parameters will be resampled independently -##' -##' @return matrix of (quasi-)random samples from trait distributions -##' @export -##' @author David LeBauer, Istem Fer -get.ensemble.samples <- function(ensemble.size, pft.samples, env.samples, - method = "uniform", param.names = NULL, ...) { - - .Deprecated( - new = "PEcAn.uncertainty::get.ensemble.samples", - msg = paste( - "get.ensemble.samples has been moved to PEcAn.uncertainty and is deprecated from PEcAn.utils.", - "Please use PEcAn.uncertainty::get.ensemble.samples instead.", - "PEcAn.utils::get.ensemble.samples will not be updated and will be removed from a future version of PEcAn.", - sep = "\n")) - - if (is.null(method)) { - PEcAn.logger::logger.info("No sampling method supplied, defaulting to uniform random sampling") - method <- "uniform" - } - - ## force as numeric for compatibility with Fortran code in halton() - ensemble.size <- as.numeric(ensemble.size) - if (ensemble.size <= 0) { - ans <- NULL - } else if (ensemble.size == 1) { - ans <- get.sa.sample.list(pft.samples, env.samples, 0.5) - } else { - pft.samples[[length(pft.samples) + 1]] <- env.samples - names(pft.samples)[length(pft.samples)] <- "env" - pft2col <- NULL - for (i in seq_along(pft.samples)) { - pft2col <- c(pft2col, rep(i, length(pft.samples[[i]]))) - } - - - total.sample.num <- sum(sapply(pft.samples, length)) - random.samples <- NULL - - - if (method == "halton") { - need_packages("randtoolbox") - PEcAn.logger::logger.info("Using ", method, "method for sampling") - random.samples <- randtoolbox::halton(n = ensemble.size, dim = total.sample.num, ...) - ## force as a matrix in case length(samples)=1 - random.samples <- as.matrix(random.samples) - } else if (method == "sobol") { - need_packages("randtoolbox") - PEcAn.logger::logger.info("Using ", method, "method for sampling") - random.samples <- randtoolbox::sobol(n = ensemble.size, dim = total.sample.num, ...) - ## force as a matrix in case length(samples)=1 - random.samples <- as.matrix(random.samples) - } else if (method == "torus") { - need_packages("randtoolbox") - PEcAn.logger::logger.info("Using ", method, "method for sampling") - random.samples <- randtoolbox::torus(n = ensemble.size, dim = total.sample.num, ...) 
- ## force as a matrix in case length(samples)=1 - random.samples <- as.matrix(random.samples) - } else if (method == "lhc") { - need_packages("PEcAn.emulator") - PEcAn.logger::logger.info("Using ", method, "method for sampling") - random.samples <- PEcAn.emulator::lhc(t(matrix(0:1, ncol = total.sample.num, nrow = 2)), ensemble.size) - } else if (method == "uniform") { - PEcAn.logger::logger.info("Using ", method, "random sampling") - # uniform random - random.samples <- matrix(stats::runif(ensemble.size * total.sample.num), - ensemble.size, - total.sample.num) - } else { - PEcAn.logger::logger.info("Method ", method, " has not been implemented yet, using uniform random sampling") - # uniform random - random.samples <- matrix(stats::runif(ensemble.size * total.sample.num), - ensemble.size, - total.sample.num) - } - - - ensemble.samples <- list() - - - col.i <- 0 - for (pft.i in seq(pft.samples)) { - ensemble.samples[[pft.i]] <- matrix(nrow = ensemble.size, ncol = length(pft.samples[[pft.i]])) - - # meaning we want to keep MCMC samples together - if(length(pft.samples[[pft.i]])>0 & !is.null(param.names)){ - # TODO: for now we are sampling row numbers uniformly - # stop if other methods were requested - if(method != "uniform"){ - PEcAn.logger::logger.severe("Only uniform sampling is available for joint sampling at the moment. Other approaches are not implemented yet.") - } - same.i <- sample.int(length(pft.samples[[pft.i]][[1]]), ensemble.size) - } - - for (trait.i in seq(pft.samples[[pft.i]])) { - col.i <- col.i + 1 - if(names(pft.samples[[pft.i]])[trait.i] %in% param.names[[pft.i]]){ # keeping samples - ensemble.samples[[pft.i]][, trait.i] <- pft.samples[[pft.i]][[trait.i]][same.i] - }else{ - ensemble.samples[[pft.i]][, trait.i] <- stats::quantile(pft.samples[[pft.i]][[trait.i]], - random.samples[, col.i]) - } - } # end trait - ensemble.samples[[pft.i]] <- as.data.frame(ensemble.samples[[pft.i]]) - colnames(ensemble.samples[[pft.i]]) <- names(pft.samples[[pft.i]]) - } #end pft - names(ensemble.samples) <- names(pft.samples) - ans <- ensemble.samples - } - return(ans) -} # get.ensemble.samples - - -##' Write ensemble config files -##' -##' DEPRECATED: This function has been moved to the \code{PEcAn.uncertainty} package. -##' The version in \code{PEcAn.utils} is deprecated, will not be updated to add any new features, -##' and will be removed in a future release of PEcAn. -##' Please use \code{PEcAn.uncertainty::write.ensemble.configs} instead. -##' -##' Writes config files for use in meta-analysis and returns a list of run ids. -##' Given a pft.xml object, a list of lists as supplied by get.sa.samples, -##' a name to distinguish the output files, and the directory to place the files. -##' @title Write ensemble configs -##' @param defaults pft -##' @param ensemble.samples list of lists supplied by \link{get.ensemble.samples} -##' @param settings list of PEcAn settings -##' @param write.config a model-specific function to write config files, e.g. \link{write.config.ED} -##' @param clean remove old output first? -##' @return list, containing $runs = data frame of runids, and $ensemble.id = the ensemble ID for these runs. 
Also writes sensitivity analysis configuration files as a side effect -##' @export -##' @author David LeBauer, Carl Davidson -write.ensemble.configs <- function(defaults, ensemble.samples, settings, model, - clean = FALSE, write.to.db = TRUE) { - - .Deprecated( - new = "PEcAn.uncertainty::write.ensemble.configs", - msg = paste( - "write.ensemble.configs has been moved to PEcAn.uncertainty and is deprecated from PEcAn.utils.", - "Please use PEcAn.uncertainty::write.ensemble.configs instead.", - "PEcAn.utils::write.ensemble.configs will not be updated and will be removed from a future version of PEcAn.", - sep = "\n")) - - my.write.config <- paste("write.config.", model, sep = "") - - if (is.null(ensemble.samples)) { - return(list(runs = NULL, ensemble.id = NULL)) - } - - # Open connection to database so we can store all run/ensemble information - if (write.to.db) { - con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - if (inherits(con, "try-error")) { - con <- NULL - } else { - on.exit(PEcAn.DB::db.close(con), add = TRUE) - } - } else { - con <- NULL - } - - # Get the workflow id - if ("workflow" %in% names(settings)) { - workflow.id <- settings$workflow$id - } else { - workflow.id <- -1 - } - - # create an ensemble id - if (!is.null(con)) { - # write ensemble first - ensemble.id <- PEcAn.DB::db.query(paste0( - "INSERT INTO ensembles (runtype, workflow_id) ", - "VALUES ('ensemble', ", format(workflow.id, scientific = FALSE), ")", - "RETURNING id"), con = con)[['id']] - - for (pft in defaults) { - PEcAn.DB::db.query(paste0( - "INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) ", - "values (", pft$posteriorid, ", ", ensemble.id, ")"), con = con) - } - } else { - ensemble.id <- NA - } - - # find all inputs that have an id - inputs <- names(settings$run$inputs) - inputs <- inputs[grepl(".id$", inputs)] - - # write configuration for each run of the ensemble - runs <- data.frame() - for (counter in seq_len(settings$ensemble$size)) { - if (!is.null(con)) { - paramlist <- paste("ensemble=", counter, sep = "") - run.id <- PEcAn.DB::db.query(paste0( - "INSERT INTO runs (model_id, site_id, start_time, finish_time, outdir, ensemble_id, parameter_list) ", - "values ('", - settings$model$id, "', '", - settings$run$site$id, "', '", - settings$run$start.date, "', '", - settings$run$end.date, "', '", - settings$run$outdir, "', ", - ensemble.id, ", '", - paramlist, "') ", - "RETURNING id"), con = con)[['id']] - - # associate inputs with runs - if (!is.null(inputs)) { - for (x in inputs) { - PEcAn.DB::db.query(paste0("INSERT INTO inputs_runs (input_id, run_id) ", - "values (", settings$run$inputs[[x]], ", ", run.id, ")"), - con = con) - } - } - - } else { - run.id <- get.run.id("ENS", left.pad.zeros(counter, 5)) - } - runs[counter, "id"] <- run.id - - # create folders (cleaning up old ones if needed) - if (clean) { - unlink(file.path(settings$rundir, run.id)) - unlink(file.path(settings$modeloutdir, run.id)) - } - dir.create(file.path(settings$rundir, run.id), recursive = TRUE) - dir.create(file.path(settings$modeloutdir, run.id), recursive = TRUE) - - # write run information to disk - cat("runtype : ensemble\n", - "workflow id : ", workflow.id, "\n", - "ensemble id : ", ensemble.id, "\n", - "run : ", counter, "/", settings$ensemble$size, "\n", - "run id : ", run.id, "\n", - "pft names : ", as.character(lapply(settings$pfts, function(x) x[['name']])), "\n", - "model : ", model, "\n", - "model id : ", settings$model$id, "\n", - "site : ", settings$run$site$name, "\n", - "site id : 
", settings$run$site$id, "\n", - "met data : ", settings$run$site$met, "\n", - "start date : ", settings$run$start.date, "\n", - "end date : ", settings$run$end.date, "\n", - "hostname : ", settings$host$name, "\n", - "rundir : ", file.path(settings$host$rundir, run.id), "\n", - "outdir : ", file.path(settings$host$outdir, run.id), "\n", - file = file.path(settings$rundir, run.id, "README.txt")) - - do.call(my.write.config, args = list( - defaults = defaults, - trait.values = lapply( - ensemble.samples, function(x, n) { x[n, , drop=FALSE] }, n=counter - ), - settings = settings, - run.id = run.id) - ) - cat(run.id, file = file.path(settings$rundir, "runs.txt"), sep = "\n", append = TRUE) - } - - return(invisible(list(runs = runs, ensemble.id = ensemble.id))) -} # write.ensemble.configs From b4a9b06524b62ce075028e27d0160c082df2232a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 14:31:46 +0530 Subject: [PATCH 2128/2289] Update get.results.R --- base/utils/R/get.results.R | 241 ------------------------------------- 1 file changed, 241 deletions(-) diff --git a/base/utils/R/get.results.R b/base/utils/R/get.results.R index c78d01a9e39..8b137891791 100644 --- a/base/utils/R/get.results.R +++ b/base/utils/R/get.results.R @@ -1,242 +1 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- -##' Reads model output and runs sensitivity and ensemble analyses -##' -##' Output is placed in model output directory (settings$modeloutdir). -##' @name get.results -##' @title Generate model output for PEcAn analyses -##' @export -##' @param settings list, read from settings file (xml) using \code{\link{read.settings}} -##' @author David LeBauer, Shawn Serbin, Mike Dietze, Ryan Kelly -get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, - variable = NULL, start.year = NULL, end.year = NULL) { - - outdir <- settings$outdir - - sensitivity.output <- list() - if ("sensitivity.analysis" %in% names(settings)) { - ### Load PEcAn sa info - # Can specify ensemble ids manually. If not, look in settings. - # if no ensemble ids in settings look in samples.Rdata, - # which for backwards compatibility still contains the sample info for (the most recent) - # sensitivity and ensemble analysis combined. 
- if (!is.null(sa.ensemble.id)) { - fname <- sensitivity.filename(settings, "sensitivity.samples", "Rdata", - ensemble.id = sa.ensemble.id, - all.var.yr = TRUE, - pft = NULL) - } else if (!is.null(settings$sensitivity.analysis$ensemble.id)) { - sa.ensemble.id <- settings$sensitivity.analysis$ensemble.id - fname <- sensitivity.filename(settings, "sensitivity.samples", "Rdata", - ensemble.id = sa.ensemble.id, - all.var.yr = TRUE, - pft = NULL) - } else { - fname <- file.path(outdir, "samples.Rdata") - sa.ensemble.id <- NULL - } - - if (!file.exists(fname)) { - PEcAn.logger::logger.severe("No sensitivity analysis samples file found!") - } - load(fname) - - # For backwards compatibility, define some variables if not just loaded - if (!exists("pft.names")) { - pft.names <- names(trait.samples) - } - if (!exists("trait.names")) { - trait.names <- lapply(trait.samples, names) - } - if (!exists("sa.run.ids")) { - sa.run.ids <- runs.samples$sa - } - - # Set variable and years. Use args first, then settings, then defaults/error - start.year.sa <- start.year - if (is.null(start.year.sa)) { - start.year.sa <- settings$sensitivity.analysis$start.year - } - if (is.null(start.year.sa)) { - start.year.sa <- NA - } - - end.year.sa <- end.year - if (is.null(end.year.sa)) { - end.year.sa <- settings$sensitivity.analysis$end.year - } - if (is.null(end.year.sa)) { - end.year.sa <- NA - } - - variables.sa <- variable - if (is.null(variables.sa)) { - if ("variable" %in% names(settings$sensitivity.analysis)) { - variables.sa <- settings$sensitivity.analysis[names(settings$sensitivity.analysis) == "variable"] - } else { - PEcAn.logger::logger.severe("no variable defined for sensitivity analysis") - } - } - - # Only handling one variable at a time for now - if (length(variables.sa) >= 1) { - for(variable.sa in variables.sa){ - PEcAn.logger::logger.warn(paste0("Currently performing sensitivity analysis on variable ", - variable.sa, ")")) - - # if an expression is provided, convert.expr returns names of the variables accordingly - # if a derivation is not requested it returns the variable name as is - variables <- convert.expr(unlist(variable.sa)) - variable.sa <- variables$variable.eqn - variable.fn <- variables$variable.drv - - for(pft.name in pft.names){ - quantiles <- rownames(sa.samples[[pft.name]]) - traits <- trait.names[[pft.name]] - - # when there is variable-per pft in the outputs, check for the tag for deciding SA per pft - per.pft <- ifelse(!is.null(settings$sensitivity.analysis$perpft), - as.logical(settings$sensitivity.analysis$perpft), FALSE) - sensitivity.output[[pft.name]] <- read.sa.output(traits = traits, - quantiles = quantiles, - pecandir = outdir, - outdir = settings$modeloutdir, - pft.name = pft.name, - start.year = start.year.sa, - end.year = end.year.sa, - variable = variable.sa, - sa.run.ids = sa.run.ids, - per.pft = per.pft) - } - - # Save sensitivity output - - fname <- sensitivity.filename(settings, "sensitivity.output", "Rdata", - all.var.yr = FALSE, - pft = NULL, - ensemble.id = sa.ensemble.id, - variable = variable.fn, - start.year = start.year.sa, - end.year = end.year.sa) - save(sensitivity.output, file = fname) - } - } - } - - ensemble.output <- list() - if ("ensemble" %in% names(settings)) { - ### Load PEcAn ensemble info Can specify ensemble ids manually. If not, look in - ### settings. If none there, just look in samples.Rdata, which for backwards - ### compatibility still contains the sample info for (the most recent) sensitivity - ### and ensemble analysis combined. 
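    # to summarize the branching below: the ensemble id is resolved with the precedence
    #   1. explicit ens.ensemble.id argument
    #   2. settings$ensemble$ensemble.id
    #   3. legacy samples.Rdata in the workflow outdir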
- if (!is.null(ens.ensemble.id)) { - fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", - ensemble.id = ens.ensemble.id, - all.var.yr = TRUE) - } else if (!is.null(settings$ensemble$ensemble.id)) { - ens.ensemble.id <- settings$ensemble$ensemble.id - fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", - ensemble.id = ens.ensemble.id, - all.var.yr = TRUE) - } else { - fname <- file.path(outdir, "samples.Rdata") - } - if (!file.exists(fname)) { - PEcAn.logger::logger.severe("No ensemble samples file found!") - } - load(fname) - - # For backwards compatibility, define some variables if not just loaded - if (!exists("pft.names")) { - pft.names <- names(trait.samples) - } - if (!exists("trait.names")) { - trait.names <- lapply(trait.samples, names) - } - if (!exists("ens.run.ids")) { - ens.run.ids <- runs.samples$ens - } - - # Set variable and years. Use args first, then settings, then defaults/error - start.year.ens <- start.year - if (is.null(start.year.ens)) { - start.year.ens <- settings$ensemble$start.year - } - if (is.null(start.year.ens)) { - start.year.ens <- NA - } - - end.year.ens <- end.year - if (is.null(end.year.ens)) { - end.year.ens <- settings$ensemble$end.year - } - if (is.null(end.year.ens)) { - end.year.ens <- NA - } - - variables.ens <- variable - if (is.null(variables.ens)) { - if ("variable" %in% names(settings$ensemble)) { - nc_var <- which(names(settings$ensemble) == "variable") - for (i in seq_along(nc_var)) { - variables.ens[i] <- settings$ensemble[[nc_var[i]]] - } - } - } - - if (is.null(variables.ens)) - PEcAn.logger::logger.severe("No variables for ensemble analysis!") - - # Only handling one variable at a time for now - if (length(variables.ens) >= 1) { - for(variable.ens in variables.ens){ - PEcAn.logger::logger.warn(paste0("Currently performing ensemble analysis on variable ", - variable.ens, ")")) - - # if an expression is provided, convert.expr returns names of the variables accordingly - # if a derivation is not requested it returns the variable name as is - variables <- convert.expr(variable.ens) - variable.ens <- variables$variable.eqn - variable.fn <- variables$variable.drv - - ensemble.output <- PEcAn.uncertainty::read.ensemble.output( - settings$ensemble$size, - pecandir = outdir, - outdir = settings$modeloutdir, - start.year = start.year.ens, - end.year = end.year.ens, - variable = variable.ens, - ens.run.ids = ens.run.ids - ) - - # Save ensemble output - fname <- ensemble.filename(settings, "ensemble.output", "Rdata", - all.var.yr = FALSE, - ensemble.id = ens.ensemble.id, - variable = variable.fn, - start.year = start.year.ens, - end.year = end.year.ens) - save(ensemble.output, file = fname) - } - } - } -} # get.results - - -##' @export -runModule.get.results <- function(settings) { - if (PEcAn.settings::is.MultiSettings(settings)) { - return(PEcAn.settings::papply(settings, runModule.get.results)) - } else if (PEcAn.settings::is.Settings(settings)) { - return(get.results(settings)) - } else { - stop("runModule.get.results only works with Settings or MultiSettings") - } -} # runModule.get.results From e93d1abf9936904498fb706d37af79e4de74ecd9 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 14:31:55 +0530 Subject: [PATCH 2129/2289] Delete get.results.R --- base/utils/R/get.results.R | 1 - 1 file changed, 1 deletion(-) delete mode 100644 base/utils/R/get.results.R diff --git a/base/utils/R/get.results.R b/base/utils/R/get.results.R deleted file mode 100644 index 
8b137891791..00000000000 --- a/base/utils/R/get.results.R +++ /dev/null @@ -1 +0,0 @@ - From 9ea25de4ff6c7e66b79b12251e8f499e1fa8e073 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 14:32:30 +0530 Subject: [PATCH 2130/2289] Create get.results.R --- modules/uncertainty/R/get.results.R | 242 ++++++++++++++++++++++++++++ 1 file changed, 242 insertions(+) create mode 100644 modules/uncertainty/R/get.results.R diff --git a/modules/uncertainty/R/get.results.R b/modules/uncertainty/R/get.results.R new file mode 100644 index 00000000000..c78d01a9e39 --- /dev/null +++ b/modules/uncertainty/R/get.results.R @@ -0,0 +1,242 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##' Reads model output and runs sensitivity and ensemble analyses +##' +##' Output is placed in model output directory (settings$modeloutdir). +##' @name get.results +##' @title Generate model output for PEcAn analyses +##' @export +##' @param settings list, read from settings file (xml) using \code{\link{read.settings}} +##' @author David LeBauer, Shawn Serbin, Mike Dietze, Ryan Kelly +get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, + variable = NULL, start.year = NULL, end.year = NULL) { + + outdir <- settings$outdir + + sensitivity.output <- list() + if ("sensitivity.analysis" %in% names(settings)) { + ### Load PEcAn sa info + # Can specify ensemble ids manually. If not, look in settings. + # if no ensemble ids in settings look in samples.Rdata, + # which for backwards compatibility still contains the sample info for (the most recent) + # sensitivity and ensemble analysis combined. + if (!is.null(sa.ensemble.id)) { + fname <- sensitivity.filename(settings, "sensitivity.samples", "Rdata", + ensemble.id = sa.ensemble.id, + all.var.yr = TRUE, + pft = NULL) + } else if (!is.null(settings$sensitivity.analysis$ensemble.id)) { + sa.ensemble.id <- settings$sensitivity.analysis$ensemble.id + fname <- sensitivity.filename(settings, "sensitivity.samples", "Rdata", + ensemble.id = sa.ensemble.id, + all.var.yr = TRUE, + pft = NULL) + } else { + fname <- file.path(outdir, "samples.Rdata") + sa.ensemble.id <- NULL + } + + if (!file.exists(fname)) { + PEcAn.logger::logger.severe("No sensitivity analysis samples file found!") + } + load(fname) + + # For backwards compatibility, define some variables if not just loaded + if (!exists("pft.names")) { + pft.names <- names(trait.samples) + } + if (!exists("trait.names")) { + trait.names <- lapply(trait.samples, names) + } + if (!exists("sa.run.ids")) { + sa.run.ids <- runs.samples$sa + } + + # Set variable and years. 
Use args first, then settings, then defaults/error + start.year.sa <- start.year + if (is.null(start.year.sa)) { + start.year.sa <- settings$sensitivity.analysis$start.year + } + if (is.null(start.year.sa)) { + start.year.sa <- NA + } + + end.year.sa <- end.year + if (is.null(end.year.sa)) { + end.year.sa <- settings$sensitivity.analysis$end.year + } + if (is.null(end.year.sa)) { + end.year.sa <- NA + } + + variables.sa <- variable + if (is.null(variables.sa)) { + if ("variable" %in% names(settings$sensitivity.analysis)) { + variables.sa <- settings$sensitivity.analysis[names(settings$sensitivity.analysis) == "variable"] + } else { + PEcAn.logger::logger.severe("no variable defined for sensitivity analysis") + } + } + + # Only handling one variable at a time for now + if (length(variables.sa) >= 1) { + for(variable.sa in variables.sa){ + PEcAn.logger::logger.warn(paste0("Currently performing sensitivity analysis on variable ", + variable.sa, ")")) + + # if an expression is provided, convert.expr returns names of the variables accordingly + # if a derivation is not requested it returns the variable name as is + variables <- convert.expr(unlist(variable.sa)) + variable.sa <- variables$variable.eqn + variable.fn <- variables$variable.drv + + for(pft.name in pft.names){ + quantiles <- rownames(sa.samples[[pft.name]]) + traits <- trait.names[[pft.name]] + + # when there is variable-per pft in the outputs, check for the tag for deciding SA per pft + per.pft <- ifelse(!is.null(settings$sensitivity.analysis$perpft), + as.logical(settings$sensitivity.analysis$perpft), FALSE) + sensitivity.output[[pft.name]] <- read.sa.output(traits = traits, + quantiles = quantiles, + pecandir = outdir, + outdir = settings$modeloutdir, + pft.name = pft.name, + start.year = start.year.sa, + end.year = end.year.sa, + variable = variable.sa, + sa.run.ids = sa.run.ids, + per.pft = per.pft) + } + + # Save sensitivity output + + fname <- sensitivity.filename(settings, "sensitivity.output", "Rdata", + all.var.yr = FALSE, + pft = NULL, + ensemble.id = sa.ensemble.id, + variable = variable.fn, + start.year = start.year.sa, + end.year = end.year.sa) + save(sensitivity.output, file = fname) + } + } + } + + ensemble.output <- list() + if ("ensemble" %in% names(settings)) { + ### Load PEcAn ensemble info Can specify ensemble ids manually. If not, look in + ### settings. If none there, just look in samples.Rdata, which for backwards + ### compatibility still contains the sample info for (the most recent) sensitivity + ### and ensemble analysis combined. + if (!is.null(ens.ensemble.id)) { + fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", + ensemble.id = ens.ensemble.id, + all.var.yr = TRUE) + } else if (!is.null(settings$ensemble$ensemble.id)) { + ens.ensemble.id <- settings$ensemble$ensemble.id + fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", + ensemble.id = ens.ensemble.id, + all.var.yr = TRUE) + } else { + fname <- file.path(outdir, "samples.Rdata") + } + if (!file.exists(fname)) { + PEcAn.logger::logger.severe("No ensemble samples file found!") + } + load(fname) + + # For backwards compatibility, define some variables if not just loaded + if (!exists("pft.names")) { + pft.names <- names(trait.samples) + } + if (!exists("trait.names")) { + trait.names <- lapply(trait.samples, names) + } + if (!exists("ens.run.ids")) { + ens.run.ids <- runs.samples$ens + } + + # Set variable and years. 
Use args first, then settings, then defaults/error + start.year.ens <- start.year + if (is.null(start.year.ens)) { + start.year.ens <- settings$ensemble$start.year + } + if (is.null(start.year.ens)) { + start.year.ens <- NA + } + + end.year.ens <- end.year + if (is.null(end.year.ens)) { + end.year.ens <- settings$ensemble$end.year + } + if (is.null(end.year.ens)) { + end.year.ens <- NA + } + + variables.ens <- variable + if (is.null(variables.ens)) { + if ("variable" %in% names(settings$ensemble)) { + nc_var <- which(names(settings$ensemble) == "variable") + for (i in seq_along(nc_var)) { + variables.ens[i] <- settings$ensemble[[nc_var[i]]] + } + } + } + + if (is.null(variables.ens)) + PEcAn.logger::logger.severe("No variables for ensemble analysis!") + + # Only handling one variable at a time for now + if (length(variables.ens) >= 1) { + for(variable.ens in variables.ens){ + PEcAn.logger::logger.warn(paste0("Currently performing ensemble analysis on variable ", + variable.ens, ")")) + + # if an expression is provided, convert.expr returns names of the variables accordingly + # if a derivation is not requested it returns the variable name as is + variables <- convert.expr(variable.ens) + variable.ens <- variables$variable.eqn + variable.fn <- variables$variable.drv + + ensemble.output <- PEcAn.uncertainty::read.ensemble.output( + settings$ensemble$size, + pecandir = outdir, + outdir = settings$modeloutdir, + start.year = start.year.ens, + end.year = end.year.ens, + variable = variable.ens, + ens.run.ids = ens.run.ids + ) + + # Save ensemble output + fname <- ensemble.filename(settings, "ensemble.output", "Rdata", + all.var.yr = FALSE, + ensemble.id = ens.ensemble.id, + variable = variable.fn, + start.year = start.year.ens, + end.year = end.year.ens) + save(ensemble.output, file = fname) + } + } + } +} # get.results + + +##' @export +runModule.get.results <- function(settings) { + if (PEcAn.settings::is.MultiSettings(settings)) { + return(PEcAn.settings::papply(settings, runModule.get.results)) + } else if (PEcAn.settings::is.Settings(settings)) { + return(get.results(settings)) + } else { + stop("runModule.get.results only works with Settings or MultiSettings") + } +} # runModule.get.results From 982ca447db4e5604ab276b4f2823d72f0997ee81 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 14:42:21 +0530 Subject: [PATCH 2131/2289] years and yieldarray (No global variable) --- base/utils/R/regrid.R | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/base/utils/R/regrid.R b/base/utils/R/regrid.R index 1f618c67865..0acbca2828a 100644 --- a/base/utils/R/regrid.R +++ b/base/utils/R/regrid.R @@ -1,5 +1,5 @@ ##' Regrid dataset to even grid -##' +##' ##' @title regrid ##' @param latlon.data dataframe with lat, lon, and some value to be regridded ##' @return dataframe with regridded data @@ -13,15 +13,15 @@ regrid <- function(latlon.data) { e <- raster::extent(spdf) ## Determine ratio between x and y dimensions ratio <- (e@xmax - e@xmin) / (e@ymax - e@ymin) - + ## Create template raster to sample to r <- raster::raster(nrows = 56, ncols = floor(56 * ratio), ext = raster::extent(spdf)) rf <- raster::rasterize(spdf, r, field = "z", fun = mean) - + # rdf <- data.frame( rasterToPoints( rf ) ) colnames(rdf) <- # colnames(latlon.data) arf <- as.array(rf) - + # return(rdf) return(arf) } # regrid @@ -30,7 +30,10 @@ regrid <- function(latlon.data) { ##' Write gridded data to netcdf file ##' ##' @title 
grid2netcdf
-##' @param grid.data
+##' @param gdata gridded data to write out
+##' @param date date of the data, as a string or Date object
+##' @param outfile full path of the netCDF file to create
+##'
 ##' @return writes netCDF file
 ##' @author David LeBauer
 grid2netcdf <- function(gdata, date = "9999-09-09", outfile = "out.nc") {
@@ -49,22 +52,24 @@ grid2netcdf <- function(gdata, date = "9999-09-09", outfile = "out.nc") {
        "please install `data.table`."
     )
   }
+  years <- NULL
   grid.data <- merge(latlons, gdata, by = c("lat", "lon", "date"), all.x = TRUE)
   lat <- ncdf4::ncdim_def("lat", "degrees_east", vals = lats, longname = "station_latitude")
   lon <- ncdf4::ncdim_def("lon", "degrees_north", vals = lons, longname = "station_longitude")
-  time <- ncdf4::ncdim_def(name = "time", units = paste0("days since 1700-01-01"), 
+  time <- ncdf4::ncdim_def(name = "time", units = paste0("days since 1700-01-01"),
                            vals = as.numeric(lubridate::ymd(paste0(years, "01-01")) - lubridate::ymd("1700-01-01")),
                            calendar = "standard", unlim = TRUE)
-  
+
   yieldvar <- to_ncvar("CropYield", list(lat, lon, time))
   nc <- ncdf4::nc_create(filename = outfile, vars = list(CropYield = yieldvar))
-  
+
   ## Output netCDF data
+  yieldarray <- NULL
   # ncvar_put(nc, varid = yieldvar, vals = grid.data[order(lat, lon, order(lubridate::ymd(date )))]$yield)
   # ncvar_put(nc, varid = yieldvar, vals = grid.data[order(order(lubridate::ymd(date), lat, lon))]$yield)
   ncdf4::ncvar_put(nc, varid = yieldvar, vals = yieldarray)
-  
+
   ncdf4::ncatt_put(nc, 0, "description", "put description here")
   ncdf4::nc_close(nc)
 } # grid2netcdf

From b703b8949a48f099c4095a782728ad2ddf06dcc9 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 14:45:42 +0530
Subject: [PATCH 2132/2289] Update mcmc.list2init.R

---
 base/utils/R/mcmc.list2init.R | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/base/utils/R/mcmc.list2init.R b/base/utils/R/mcmc.list2init.R
index ef3f8b4f954..5b9970aaedd 100644
--- a/base/utils/R/mcmc.list2init.R
+++ b/base/utils/R/mcmc.list2init.R
@@ -38,7 +38,8 @@ mcmc.list2init <- function(dat) {
 
     ## detect variable type (scalar, vector, matrix)
     cols <- which(firstname == uname[v])
-    
+
+    nr <- NULL
     if(length(cols) == 1){
       ## SCALAR
       for(c in seq_len(nc)){

From 3cf93a6ff93711c51955823813d1a0bbd308eab5 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 16:12:40 +0530
Subject: [PATCH 2133/2289] No global variable for runs.samples

---
 base/utils/R/sensitivity.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
index da87d48a069..372f03d75ef 100644
--- a/base/utils/R/sensitivity.R
+++ b/base/utils/R/sensitivity.R
@@ -35,7 +35,8 @@ read.sa.output <- function(traits, quantiles, pecandir, outdir, pft.name = "",
   samples.file <- file.path(pecandir, "samples.Rdata")
   if (file.exists(samples.file)) {
+    runs.samples <- NULL # visible binding for R CMD check; overwritten by load()
     load(samples.file)
     sa.run.ids <- runs.samples$sa
   } else {
     PEcAn.logger::logger.error(samples.file, "not found, this file is required by the read.sa.output function")
   }
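Patches 2131-2133 above all silence the same R CMD check NOTE: names such as `years`, `yieldarray`, `nr`, and `runs.samples` have no visible binding, either because they are restored by `load()` or because the surrounding code is unfinished. A minimal sketch of the pattern, with an illustrative function name that is not part of any patch:

```r
# Sketch of the "declare, then load()" pattern used in the patches above.
# load() restores objects into the calling environment, so the NULL
# assignment creates a visible binding that load() then overwrites.
read_sa_ids <- function(samples_file = "samples.Rdata") {
  runs.samples <- NULL          # visible binding for R CMD check
  load(samples_file)            # expected to restore `runs.samples`
  runs.samples$sa
}
```

A package-level `utils::globalVariables("runs.samples")` declaration is the usual alternative when a NULL assignment would be misleading.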
From 578e64155cf73c057a0a41b3313b22248859ff5ee Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Fri, 30 Jul 2021 16:56:55 +0530
Subject: [PATCH 2134/2289] add load data example to vignette

---
 .../ICOS_Drought2018_Vignette.Rmd | 21 ++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
index c3a22c92bd9..809c06e911b 100644
--- a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
+++ b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
@@ -48,11 +48,26 @@ ggplot(europe_cropped, aes(x = long, y = lat)) +
 To download Drought2018 files we use `download.ICOS` function in PEcAn.
 
 ```{r download, eval=FALSE}
-sitename <- "FI-Sii"
-outfolder <- "~/"
+sitename <- "FI-Hyy"
+outfolder <- "/home/carya/pecan"
 start_date <- "2016-01-01"
 end_date <- "2018-01-01"
 product <- "Drought2018"
-PEcAn.data.atmosphere::download.ICOS(sitename, outfolder, start_date, end_date, product)
+res <- PEcAn.data.atmosphere::download.ICOS(sitename, outfolder, start_date, end_date, product)
 ```
+
+The downloaded observations can then be read with a function like `PEcAn.benchmark::load_data`. Here we load the NEE observations; `dbcon` is an open connection to the BETY database.
+```{r load, eval=FALSE}
+format <- PEcAn.DB::query.format.vars(bety = dbcon, format.id = 1000000136)
+start_year <- lubridate::year("2018-01-01")
+end_year <- lubridate::year("2018-12-31")
+vars.used.index <- which(format$vars$bety_name %in% c("NEE"))
+obs <- PEcAn.benchmark::load_data(data.path = res$file,
+                 format = format, start_year = start_year, end_year = end_year,
+                 site = sitename,
+                 vars.used.index = vars.used.index,
+                 time.row = format$time.row)
+
+```
+

From 68b2b01d4ac41bd58c0c1ba0894310c49bd57814 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 17:02:10 +0530
Subject: [PATCH 2135/2289] Update utils.R

---
 base/utils/R/utils.R | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R
index dad8f489010..16e505c7858 100644
--- a/base/utils/R/utils.R
+++ b/base/utils/R/utils.R
@@ -205,6 +205,8 @@ zero.bounded.density <- function(x, bw = "SJ", n = 1001) {
 ##' @usage summarize.result(result)
 ##' @importFrom rlang .data
 ##' @author David LeBauer, Alexey Shiklomanov
+n <- NULL
+statname <- NULL
 summarize.result <- function(result) {
   ans1 <- result %>%
     dplyr::filter(.data$n == 1) %>%

From b9d5dca28f70e3d52f575b6e1a7e091014e200cb Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 17:15:21 +0530
Subject: [PATCH 2136/2289] \examples lines wider than 100 characters:

---
 base/utils/R/utils.R | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R
index 16e505c7858..885f9ce1fd7 100644
--- a/base/utils/R/utils.R
+++ b/base/utils/R/utils.R
@@ -700,7 +700,9 @@ download.file <- function(url, filename, method) {
 ##' @examples
 ##' \dontrun{
+##' file_host <- 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220'
+##' file_path <- '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'
 ##' dap <- retry.func(
-##' ncdf4::nc_open('https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'),
+##' ncdf4::nc_open(file_host, file_path),
 ##' maxErrors=10,
 ##' sleep=2)
 ##' }
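The example shortened in patch 2136 is easier to follow as plain code. A sketch of how `retry.func` is meant to be called, using the same THREDDS URL as the example; the `maxErrors` and `sleep` values are illustrative:

```r
# Retry opening a remote NetCDF connection up to 10 times, pausing 2 s
# between attempts; nc_open() against a flaky DAP server is the typical use.
file_host <- 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220'
file_path <- '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'
dap <- PEcAn.utils::retry.func(
  ncdf4::nc_open(paste0(file_host, file_path)),
  maxErrors = 10,
  sleep = 2)
ncdf4::nc_close(dap)
```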
From 6b886a09aca68ca771457255ca3bdd47a1cba840 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 17:24:56 +0530
Subject: [PATCH 2137/2289] Update convert.input.R

---
 base/utils/R/convert.input.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R
index f6f46eea1b2..98b3bd6a40b 100644
--- a/base/utils/R/convert.input.R
+++ b/base/utils/R/convert.input.R
@@ -56,6 +56,7 @@
 ##' @param ensemble An integer representing the number of ensembles, or FALSE if it data product is not an ensemble.
 ##' @param ensemble_name If convert.input is being called iteratively for each ensemble, ensemble_name contains the identifying name/number for that ensemble.
 ##' @param ... Additional arguments, passed unchanged to \code{fcn}
+##' @param dbparms list, settings$database info required for opening a connection to DB
 ##'
 ##' @return A list of two BETY IDs (input.id, dbfile.id) identifying a pre-existing file if one was available, or a newly created file if not. Each id may be a vector of ids if the function is processing an entire ensemble at once.
 ##'

From 53abf1642a6bc057a0a41b3313b22248859ff5ee Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 17:42:03 +0530
Subject: [PATCH 2138/2289] Update get.model.output.R

---
 base/utils/R/get.model.output.R | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/base/utils/R/get.model.output.R b/base/utils/R/get.model.output.R
index af2ce092198..469516a8859 100644
--- a/base/utils/R/get.model.output.R
+++ b/base/utils/R/get.model.output.R
@@ -1,7 +1,7 @@
 #-------------------------------------------------------------------------------
 # Copyright (c) 2012 University of Illinois, NCSA.
 # All rights reserved. This program and the accompanying materials
-# are made available under the terms of the 
+# are made available under the terms of the
 # University of Illinois/NCSA Open Source License
 # which accompanies this distribution, and is available at
 # http://opensource.ncsa.illinois.edu/license.html
@@ -13,9 +13,10 @@
 ##' @title Retrieve model output
 ##'
 ##' @param model the ecosystem model run
+##' @param settings list of PEcAn settings.
 ##'
 ##' @export
-##' 
+##'
 ##' @examples
 ##' \dontrun{
 ##' get.model.output(model)
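The patch that follows documents `full.path()`, whose behavior is simple enough to show directly. A sketch under invented paths (none of these appear in the patches):

```r
# full.path() normalizes a path and, when the path is relative, anchors it
# to the current working directory. The return value is invisible().
setwd("/tmp")
p <- PEcAn.utils::full.path("run/outputs")
print(p)                                            # "/tmp/run/outputs"
print(PEcAn.utils::full.path("/already/absolute"))  # unchanged
```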
From bc59d5fc0c163a73fe89939ab5ca1a981fdfb00 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 17:44:04 +0530
Subject: [PATCH 2139/2289] Update full.path.R

---
 base/utils/R/full.path.R | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/base/utils/R/full.path.R b/base/utils/R/full.path.R
index c95481acb1a..7a0b681cdb0 100644
--- a/base/utils/R/full.path.R
+++ b/base/utils/R/full.path.R
@@ -1,7 +1,7 @@
 #-------------------------------------------------------------------------------
 # Copyright (c) 2012 University of Illinois, NCSA.
 # All rights reserved. This program and the accompanying materials
-# are made available under the terms of the 
+# are made available under the terms of the
 # University of Illinois/NCSA Open Source License
 # which accompanies this distribution, and is available at
 # http://opensource.ncsa.illinois.edu/license.html
@@ -15,6 +15,7 @@
 ##'
 ##' @title Creates an absolute path to a folder
 ##' @name full.path
+##' @param folder path of the folder to resolve to an absolute path
 ##' @author Rob Kooper
 ##' @return absolute path
 ##' @export
@@ -23,12 +24,12 @@
 full.path <- function(folder) {
   # normalize pathname
   folder <- normalizePath(folder, mustWork = FALSE)
-  
+
   # add cwd if needed
   if (substr(folder, 1, 1) != "/") {
     folder <- file.path(getwd(), folder)
     folder <- normalizePath(folder, mustWork = FALSE)
   }
-  
+
   return(invisible(folder))
 } # full.path

From e4ce8d6fdb9212e99e334a951ee5303ddadb8cc6 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 17:59:42 +0530
Subject: [PATCH 2140/2289] Update r2bugs.distributions.R

---
 base/utils/R/r2bugs.distributions.R | 32 +++++++++++++++--------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/base/utils/R/r2bugs.distributions.R b/base/utils/R/r2bugs.distributions.R
index 455f3b60150..99052a86ebb 100644
--- a/base/utils/R/r2bugs.distributions.R
+++ b/base/utils/R/r2bugs.distributions.R
@@ -1,17 +1,19 @@
 #-------------------------------------------------------------------------------
 # Copyright (c) 2012 University of Illinois, NCSA.
 # All rights reserved. This program and the accompanying materials
-# are made available under the terms of the 
+# are made available under the terms of the
 # University of Illinois/NCSA Open Source License
 # which accompanies this distribution, and is available at
 # http://opensource.ncsa.illinois.edu/license.html
#-------------------------------------------------------------------------------
 
 ##' convert R parameterizations to BUGS paramaterizations
-##' 
+##'
 ##' R and BUGS have different parameterizations for some distributions. This function transforms the distributions from R defaults to BUGS defaults. BUGS is an implementation of the BUGS language, and these transformations are expected to work for bugs.
 ##' @title convert R parameterizations to BUGS paramaterizations
-##' @param priors data.frame with columns distn = distribution name, parama, paramb using R default parameterizations
+##' @param priors data.frame with columns distn = distribution name, parama, paramb using R default parameterizations.
+##' @param direction direction of the conversion: either "r2bugs" (the default,
+##' R parameterizations to BUGS) or "bugs2r" (the reverse)
 ##' @return priors dataframe using JAGS default parameterizations
 ##' @author David LeBauer, Ben Bolker
 ##' @export
@@ -21,11 +23,11 @@
 ##'                      paramb = c(2, 2, 2, 2))
 ##' r2bugs.distributions(priors)
 r2bugs.distributions <- function(priors, direction = "r2bugs") {
-  
+
   priors$distn <- as.character(priors$distn)
   priors$parama <- as.numeric(priors$parama)
   priors$paramb <- as.numeric(priors$paramb)
-  
+
   ## index dataframe according to distribution
   norm <- priors$distn %in% c("norm", "lnorm") # these have same tramsform
   weib <- grepl("weib", priors$distn) # matches r and bugs version
@@ -33,13 +35,13 @@ r2bugs.distributions <- function(priors, direction = "r2bugs") {
   chsq <- grepl("chisq", priors$distn) # matches r and bugs version
   bin <- priors$distn %in% c("binom", "bin") # matches r and bugs version
   nbin <- priors$distn %in% c("nbinom", "negbin") # matches r and bugs version
-  
+
   ## Check that no rows are categorized into two distributions
   if (max(rowSums(cbind(norm, weib, gamma, chsq, bin, nbin))) > 1) {
     badrow <- rowSums(cbind(norm, weib, gamma, chsq, bin, nbin)) > 1
     stop(paste(unique(priors$distn[badrow])), "are identified as > 1 distribution")
   }
-  
+
   exponent <- ifelse(direction == "r2bugs", -2, -0.5)
   ## Convert sd to precision for norm & lnorm
   priors$paramb[norm] <- priors$paramb[norm]^exponent
@@ -49,11 +51,11 @@ r2bugs.distributions <- function(priors, direction = "r2bugs") {
   } else if (direction == "bugs2r") {
     ## Convert BUGS parameter lambda to BUGS parameter b by b = l^(-1/a)
     priors$paramb[weib] <- priors$paramb[weib] ^ (-1 / priors$parama[weib])
-    
+
   }
   ## Reverse parameter order for binomial and negative binomial
   priors[bin | nbin, c("parama", "paramb")] <- priors[bin | nbin, c("paramb", "parama")]
-  
+
   ## Translate distribution names
   if (direction == "r2bugs") {
     priors$distn[weib] <- "weib"
@@ -81,13 +83,13 @@ bugs2r.distributions <- function(..., direction = "bugs2r") {
 ##' BUGS parameterization, and then samples from the distribution using
 ##' JAGS
 ##' @title bugs.rdist
-##' @param prior dataframe with distribution name and parameters 
+##' @param prior dataframe with distribution name and parameters
 ##' @param n.iter number of samples, output will have n.iter/4 samples
-##' @param n 
+##' @param n
 ##' @return vector of samples
 ##' @export
 ##' @author David LeBauer
-bugs.rdist <- function(prior = data.frame(distn = "norm", parama = 0, paramb = 1), 
+bugs.rdist <- function(prior = data.frame(distn = "norm", parama = 0, paramb = 1),
                        n.iter = 1e+05, n = NULL) {
   need_packages("rjags")
   if (!grepl("chisq", prior$distn)) {
@@ -97,12 +99,12 @@ bugs.rdist <- function(prior = data.frame(distn = "norm", parama = 0, paramb = 1
   } else {
     PEcAn.logger::logger.severe(paste("Unknown model.string", model.string))
   }
-  
+
   writeLines(model.string, con = "test.bug")
   j.model <- rjags::jags.model(file = "test.bug", data = list(x = 1))
   mcmc.object <- stats::window(rjags::coda.samples(model = j.model,
-                                                   variable.names = c("Y"), 
-                                                   n.iter = n.iter, thin = 2), 
+                                                   variable.names = c("Y"),
+                                                   n.iter = n.iter, thin = 2),
                                start = n.iter / 2)
   Y <- as.matrix(mcmc.object)[, "Y"]
   if (!is.null(n)) {
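For a concrete sense of the transformation `r2bugs.distributions()` applies, consider a normal prior. R's `dnorm` takes a standard deviation, while BUGS/JAGS take a precision, and the `exponent` logic above maps one to the other:

```r
# sd -> precision: with direction = "r2bugs" the exponent is -2,
# so paramb (sd = 2) becomes 2^-2 = 0.25 (the precision).
priors <- data.frame(distn = "norm", parama = 0, paramb = 2)
bugs_priors <- PEcAn.utils::r2bugs.distributions(priors)
bugs_priors$paramb  # 0.25
```

Going back with `direction = "bugs2r"` uses exponent -0.5, since precision^(-1/2) recovers the standard deviation.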
From 77053c31cd6be1de8b33a7a834a1bd3d1ac0c635 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 18:01:36 +0530
Subject: [PATCH 2141/2289] Update sensitivity.R

---
 base/utils/R/sensitivity.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
index 372f03d75ef..0a5bca3de48 100644
--- a/base/utils/R/sensitivity.R
+++ b/base/utils/R/sensitivity.R
@@ -22,6 +22,7 @@
 ##' @param end.year last year to include in sensitivity analysis
 ##' @param variables variables to be read from model output
 ##' @param per.pft flag to determine whether we want SA on pft-specific variables
+##' @param sa.run.ids run IDs of the sensitivity analysis runs
 ##' @export
 ##' @importFrom magrittr %>%
 ##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze, Istem Fer

From 09ab92590cf7794d9e4c29f8d28cf43cb86ecb99 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 18:04:01 +0530
Subject: [PATCH 2142/2289] Update utils.R

---
 base/utils/R/utils.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R
index 885f9ce1fd7..61929418c99 100644
--- a/base/utils/R/utils.R
+++ b/base/utils/R/utils.R
@@ -694,6 +694,7 @@ download.file <- function(url, filename, method) {
 ##' @param expr The function to try running
 ##' @param maxErrors The number of times to retry the function
 ##' @param sleep How long to wait before retrying the function call
+##' @param isError function that tests whether the result of \code{expr} should be treated as an error
 ##'
 ##' @return retval returns the results of the function call
 ##'

From b34a8d6544e3014906f40603db18ee18958d1a89 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 30 Jul 2021 18:05:04 +0530
Subject: [PATCH 2143/2289] Update seconds_in_year.R

---
 base/utils/R/seconds_in_year.R | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/base/utils/R/seconds_in_year.R b/base/utils/R/seconds_in_year.R
index f8124d2b419..88f8bb3a259 100644
--- a/base/utils/R/seconds_in_year.R
+++ b/base/utils/R/seconds_in_year.R
@@ -2,7 +2,8 @@
 #'
 #' @author Alexey Shiklomanov
 #' @param year Numeric year (can be a vector)
-#' @param leap_year Default = TRUE. If set to FALSE will always return 31536000
+#' @param leap_year Default = TRUE. If set to FALSE will always return 31536000.
+#' @param ... 
additional arguments #' @examples #' seconds_in_year(2000) # Leap year -- 366 x 24 x 60 x 60 = 31622400 #' seconds_in_year(2001) # Regular year -- 365 x 24 x 60 x 60 = 31536000 From a9350652613b936c629adb1413a805227327f219 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Fri, 30 Jul 2021 12:48:42 +0000 Subject: [PATCH 2144/2289] automated documentation update --- base/utils/NAMESPACE | 10 ---- base/utils/man/bugs.rdist.Rd | 2 - base/utils/man/convert.input.Rd | 2 + base/utils/man/create.base.plot.Rd | 22 --------- base/utils/man/dhist.Rd | 52 -------------------- base/utils/man/do_conversions.Rd | 27 ---------- base/utils/man/full.path.Rd | 3 ++ base/utils/man/get.ensemble.samples.Rd | 45 ----------------- base/utils/man/get.model.output.Rd | 2 + base/utils/man/get.results.Rd | 27 ---------- base/utils/man/grid2netcdf.Rd | 4 +- base/utils/man/iqr.Rd | 20 -------- base/utils/man/{summarize.result.Rd => n.Rd} | 7 ++- base/utils/man/plot_data.Rd | 37 -------------- base/utils/man/r2bugs.distributions.Rd | 5 +- base/utils/man/read.ensemble.output.Rd | 47 ------------------ base/utils/man/read.output.Rd | 2 +- base/utils/man/read.sa.output.Rd | 2 + base/utils/man/retry.func.Rd | 6 ++- base/utils/man/run.write.configs.Rd | 40 --------------- base/utils/man/seconds_in_year.Rd | 4 +- base/utils/man/theme_border.Rd | 48 ------------------ base/utils/man/write.ensemble.configs.Rd | 45 ----------------- 23 files changed, 30 insertions(+), 429 deletions(-) delete mode 100644 base/utils/man/create.base.plot.Rd delete mode 100644 base/utils/man/dhist.Rd delete mode 100644 base/utils/man/do_conversions.Rd delete mode 100644 base/utils/man/get.ensemble.samples.Rd delete mode 100644 base/utils/man/get.results.Rd delete mode 100644 base/utils/man/iqr.Rd rename base/utils/man/{summarize.result.Rd => n.Rd} (79%) delete mode 100644 base/utils/man/plot_data.Rd delete mode 100644 base/utils/man/read.ensemble.output.Rd delete mode 100644 base/utils/man/run.write.configs.Rd delete mode 100644 base/utils/man/theme_border.Rd delete mode 100644 base/utils/man/write.ensemble.configs.Rd diff --git a/base/utils/NAMESPACE b/base/utils/NAMESPACE index 0feee9d2ed5..e48e4108e3c 100644 --- a/base/utils/NAMESPACE +++ b/base/utils/NAMESPACE @@ -8,23 +8,19 @@ export(cf2doy) export(clear.scratch) export(convert.expr) export(convert.input) -export(create.base.plot) export(datetime2cf) export(datetime2doy) export(days_in_year) export(distn.stats) export(distn.table.stats) -export(do_conversions) export(download.file) export(download.url) export(ensemble.filename) export(full.path) export(get.ensemble.inputs) -export(get.ensemble.samples) export(get.model.output) export(get.parameter.stat) export(get.quantiles) -export(get.results) export(get.run.id) export(get.sa.sample.list) export(get.sa.samples) @@ -49,17 +45,12 @@ export(misc.convert) export(mstmipvar) export(n_leap_day) export(paste.stats) -export(plot_data) export(r2bugs.distributions) -export(read.ensemble.output) export(read.output) export(read.sa.output) export(read_web_config) export(retry.func) export(rsync) -export(run.write.configs) -export(runModule.get.results) -export(runModule.run.write.configs) export(seconds_in_year) export(sendmail) export(sensitivity.filename) @@ -79,7 +70,6 @@ export(transformstats) export(tryl) export(units_are_equivalent) export(vecpaste) -export(write.ensemble.configs) export(write.sa.configs) export(zero.truncate) importFrom(PEcAn.logger,logger.debug) diff --git a/base/utils/man/bugs.rdist.Rd b/base/utils/man/bugs.rdist.Rd index 
f6a5a202da8..1fad7159e59 100644 --- a/base/utils/man/bugs.rdist.Rd +++ b/base/utils/man/bugs.rdist.Rd @@ -14,8 +14,6 @@ bugs.rdist( \item{prior}{dataframe with distribution name and parameters} \item{n.iter}{number of samples, output will have n.iter/4 samples} - -\item{n}{} } \value{ vector of samples diff --git a/base/utils/man/convert.input.Rd b/base/utils/man/convert.input.Rd index e8ee2faf5e4..c0fc5e85b71 100644 --- a/base/utils/man/convert.input.Rd +++ b/base/utils/man/convert.input.Rd @@ -77,6 +77,8 @@ Currently only \code{host$name} is used by \code{convert.input}, but whole list \item{ensemble_name}{If convert.input is being called iteratively for each ensemble, ensemble_name contains the identifying name/number for that ensemble.} +\item{dbparms}{list, settings$database info reqired for opening a connection to DB} + \item{...}{Additional arguments, passed unchanged to \code{fcn}} } \value{ diff --git a/base/utils/man/create.base.plot.Rd b/base/utils/man/create.base.plot.Rd deleted file mode 100644 index 6907bdd09dc..00000000000 --- a/base/utils/man/create.base.plot.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/plots.R -\name{create.base.plot} -\alias{create.base.plot} -\title{Create Base Plot} -\usage{ -create.base.plot() -} -\value{ -empty ggplot object -} -\description{ -Creates empty ggplot object -} -\details{ -An empty base plot to which layers created by other functions -(\code{\link{plot_data}}, \code{\link[PEcAn.priors]{plot_prior.density}}, -\code{\link[PEcAn.priors]{plot_posterior.density}}) can be added. -} -\author{ -David LeBauer -} diff --git a/base/utils/man/dhist.Rd b/base/utils/man/dhist.Rd deleted file mode 100644 index f591adcee32..00000000000 --- a/base/utils/man/dhist.Rd +++ /dev/null @@ -1,52 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/plots.R -\name{dhist} -\alias{dhist} -\title{Diagonally Cut Histogram} -\usage{ -dhist( - x, - a = 5 * iqr(x), - nbins = grDevices::nclass.Sturges(x), - rx = range(x, na.rm = TRUE), - eps = 0.15, - xlab = "x", - plot = TRUE, - lab.spikes = TRUE -) -} -\arguments{ -\item{x}{is a numeric vector (the data)} - -\item{a}{is the scaling factor, default is 5 * IQR} - -\item{nbins}{is the number of bins, default is assigned by the Stuges method} - -\item{rx}{is the range used for the left of the left-most bin to the right of the right-most bin} - -\item{eps}{used to set artificial bound on min width / max height of bins as described in Denby and Mallows (2009) on page 24.} - -\item{xlab}{is label for the x axis} - -\item{plot}{= TRUE produces the plot, FALSE returns the heights, breaks and counts} - -\item{lab.spikes}{= TRUE labels the \% of data in the spikes} -} -\value{ -list with two elements, heights of length n and breaks of length n+1 indicating the heights and break points of the histogram bars. -} -\description{ -Variable-width (dagonally cut) histogram -} -\details{ -When constructing a histogram, it is common to make all bars the same width. -One could also choose to make them all have the same area. -These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers. -We describe a compromise approach which avoids both of these defects. 
We regard the histogram as an exploratory device, rather than as an estimate of a density. -} -\references{ -Lorraine Denby, Colin Mallows. Journal of Computational and Graphical Statistics. March 1, 2009, 18(1): 21-31. doi:10.1198/jcgs.2009.0002. -} -\author{ -Lorraine Denby, Colin Mallows -} diff --git a/base/utils/man/do_conversions.Rd b/base/utils/man/do_conversions.Rd deleted file mode 100644 index a93512b675b..00000000000 --- a/base/utils/man/do_conversions.Rd +++ /dev/null @@ -1,27 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/do_conversions.R -\name{do_conversions} -\alias{do_conversions} -\alias{do.conversions} -\title{do_conversions} -\usage{ -do_conversions( - settings, - overwrite.met = FALSE, - overwrite.fia = FALSE, - overwrite.ic = FALSE -) -} -\arguments{ -\item{settings}{PEcAn settings list} - -\item{overwrite.met, overwrite.fia, overwrite.ic}{logical} -} -\description{ -Input conversion workflow - -DEPRECATED: This function has been moved to the PEcAn.workflow package and will be removed from PEcAn.utils. -} -\author{ -Ryan Kelly, Rob Kooper, Betsy Cowdery, Istem Fer -} diff --git a/base/utils/man/full.path.Rd b/base/utils/man/full.path.Rd index 78df75247d8..413c0f16435 100644 --- a/base/utils/man/full.path.Rd +++ b/base/utils/man/full.path.Rd @@ -6,6 +6,9 @@ \usage{ full.path(folder) } +\arguments{ +\item{folder}{folder for file paths.} +} \value{ absolute path } diff --git a/base/utils/man/get.ensemble.samples.Rd b/base/utils/man/get.ensemble.samples.Rd deleted file mode 100644 index aa1da94c78e..00000000000 --- a/base/utils/man/get.ensemble.samples.Rd +++ /dev/null @@ -1,45 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/ensemble.R -\name{get.ensemble.samples} -\alias{get.ensemble.samples} -\title{Get Ensemble Samples} -\usage{ -get.ensemble.samples( - ensemble.size, - pft.samples, - env.samples, - method = "uniform", - param.names = NULL, - ... -) -} -\arguments{ -\item{ensemble.size}{number of runs in model ensemble} - -\item{pft.samples}{random samples from parameter distribution, e.g. from a MCMC chain} - -\item{env.samples}{env samples} - -\item{method}{the method used to generate the ensemble samples. Random generators: uniform, uniform with latin hypercube permutation. Quasi-random generators: halton, sobol, torus. Random generation draws random variates whereas quasi-random generation is deterministic but well equidistributed. Default is uniform. For small ensemble size with relatively large parameter number (e.g ensemble size < 5 and # of traits > 5) use methods other than halton.} - -\item{param.names}{a list of parameter names that were fitted either by MA or PDA, important argument, if NULL parameters will be resampled independently} -} -\value{ -matrix of (quasi-)random samples from trait distributions -} -\description{ -Get parameter values used in ensemble -} -\details{ -DEPRECATED: This function has been moved to the \code{PEcAn.uncertainty} package. -The version in \code{PEcAn.utils} is deprecated, will not be updated to add any new features, -and will be removed in a future release of PEcAn. -Please use \code{PEcAn.uncertainty::get.ensemble.samples} instead. -Returns a matrix of randomly or quasi-randomly sampled trait values -to be assigned to traits over several model runs. 
-given the number of model runs and a list of sample distributions for traits -The model run is indexed first by model run, then by trait -} -\author{ -David LeBauer, Istem Fer -} diff --git a/base/utils/man/get.model.output.Rd b/base/utils/man/get.model.output.Rd index 408d2271f81..bb0aa460865 100644 --- a/base/utils/man/get.model.output.Rd +++ b/base/utils/man/get.model.output.Rd @@ -8,6 +8,8 @@ get.model.output(model, settings) } \arguments{ \item{model}{the ecosystem model run} + +\item{settings}{list of PEcAn settings.} } \description{ This function retrieves model output for further analyses diff --git a/base/utils/man/get.results.Rd b/base/utils/man/get.results.Rd deleted file mode 100644 index 96e99e016fe..00000000000 --- a/base/utils/man/get.results.Rd +++ /dev/null @@ -1,27 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.results.R -\name{get.results} -\alias{get.results} -\title{Generate model output for PEcAn analyses} -\usage{ -get.results( - settings, - sa.ensemble.id = NULL, - ens.ensemble.id = NULL, - variable = NULL, - start.year = NULL, - end.year = NULL -) -} -\arguments{ -\item{settings}{list, read from settings file (xml) using \code{\link{read.settings}}} -} -\description{ -Reads model output and runs sensitivity and ensemble analyses -} -\details{ -Output is placed in model output directory (settings$modeloutdir). -} -\author{ -David LeBauer, Shawn Serbin, Mike Dietze, Ryan Kelly -} diff --git a/base/utils/man/grid2netcdf.Rd b/base/utils/man/grid2netcdf.Rd index 1501952bee2..615d4069ecf 100644 --- a/base/utils/man/grid2netcdf.Rd +++ b/base/utils/man/grid2netcdf.Rd @@ -7,7 +7,9 @@ grid2netcdf(gdata, date = "9999-09-09", outfile = "out.nc") } \arguments{ -\item{grid.data}{} +\item{outfile}{location where output will be stored.} + +\item{Date}{as string or date object.} } \value{ writes netCDF file diff --git a/base/utils/man/iqr.Rd b/base/utils/man/iqr.Rd deleted file mode 100644 index 647629e5678..00000000000 --- a/base/utils/man/iqr.Rd +++ /dev/null @@ -1,20 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/plots.R -\name{iqr} -\alias{iqr} -\title{Interquartile range} -\usage{ -iqr(x) -} -\arguments{ -\item{x}{vector} -} -\value{ -numeric vector of length 2, with the 25th and 75th quantiles of input vector x. -} -\description{ -Calculate interquartile range -} -\details{ -Calculates the 25th and 75th quantiles given a vector x; used in function \link{dhist}. 
-} diff --git a/base/utils/man/summarize.result.Rd b/base/utils/man/n.Rd similarity index 79% rename from base/utils/man/summarize.result.Rd rename to base/utils/man/n.Rd index 28ad6262524..642cbf8cc61 100644 --- a/base/utils/man/summarize.result.Rd +++ b/base/utils/man/n.Rd @@ -1,8 +1,10 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/utils.R -\name{summarize.result} -\alias{summarize.result} +\docType{data} +\name{n} +\alias{n} \title{Summarize Results} +\format{An object of class \code{NULL} of length 0.} \usage{ summarize.result(result) } @@ -18,3 +20,4 @@ Summarize results of replicate observations in trait data query \author{ David LeBauer, Alexey Shiklomanov } +\keyword{datasets} diff --git a/base/utils/man/plot_data.Rd b/base/utils/man/plot_data.Rd deleted file mode 100644 index 77244f9e858..00000000000 --- a/base/utils/man/plot_data.Rd +++ /dev/null @@ -1,37 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/plots.R -\name{plot_data} -\alias{plot_data} -\alias{plot.data} -\title{Add data to plot} -\usage{ -plot_data(trait.data, base.plot = NULL, ymax, color = "black") -} -\arguments{ -\item{trait.data}{data to be plotted} - -\item{base.plot}{a ggplot object (grob), -created by \code{\link{create.base.plot}} if none provided} - -\item{ymax}{maximum height of y} -} -\value{ -updated plot object -} -\description{ -Add data to an existing plot or create a new one from \code{\link{create.base.plot}} -} -\details{ -Used to add raw data or summary statistics to the plot of a distribution. -The height of Y is arbitrary, and can be set to optimize visualization. -If SE estimates are available, tehse wil be plotted -} -\examples{ -\dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)} -} -\seealso{ -\code{\link{create.base.plot}} -} -\author{ -David LeBauer -} diff --git a/base/utils/man/r2bugs.distributions.Rd b/base/utils/man/r2bugs.distributions.Rd index eec7e4fbc91..646c35158d4 100644 --- a/base/utils/man/r2bugs.distributions.Rd +++ b/base/utils/man/r2bugs.distributions.Rd @@ -7,7 +7,10 @@ r2bugs.distributions(priors, direction = "r2bugs") } \arguments{ -\item{priors}{data.frame with columns distn = distribution name, parama, paramb using R default parameterizations} +\item{priors}{data.frame with columns distn = distribution name, parama, paramb using R default parameterizations.} + +\item{direction}{Whether the model will be filtered backward or forward in time. 
options = c("backward", "forward") +(PalEON will go backward, anybody interested in the future will go forward)} } \value{ priors dataframe using JAGS default parameterizations diff --git a/base/utils/man/read.ensemble.output.Rd b/base/utils/man/read.ensemble.output.Rd deleted file mode 100644 index 51732f40ba2..00000000000 --- a/base/utils/man/read.ensemble.output.Rd +++ /dev/null @@ -1,47 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/ensemble.R -\name{read.ensemble.output} -\alias{read.ensemble.output} -\title{Read ensemble output} -\usage{ -read.ensemble.output( - ensemble.size, - pecandir, - outdir, - start.year, - end.year, - variable, - ens.run.ids = NULL -) -} -\arguments{ -\item{ensemble.size}{the number of ensemble members run} - -\item{pecandir}{specifies where pecan writes its configuration files} - -\item{outdir}{directory with model output to use in ensemble analysis} - -\item{start.year}{first year to include in ensemble analysis} - -\item{end.year}{last year to include in ensemble analysis} - -\item{variables}{target variables for ensemble analysis} -} -\value{ -a list of ensemble model output -} -\description{ -Reads output from model ensemble -} -\details{ -Reads output for an ensemble of length specified by \code{ensemble.size} and bounded by \code{start.year} -and \code{end.year} - -DEPRECATED: This function has been moved to the \code{PEcAn.uncertainty} package. -The version in \code{PEcAn.utils} is deprecated, will not be updated to add any new features, -and will be removed in a future release of PEcAn. -Please use \code{PEcAn.uncertainty::read.ensemble.output} instead. -} -\author{ -Ryan Kelly, David LeBauer, Rob Kooper -} diff --git a/base/utils/man/read.output.Rd b/base/utils/man/read.output.Rd index 83f7b4abeb8..89602be6a63 100644 --- a/base/utils/man/read.output.Rd +++ b/base/utils/man/read.output.Rd @@ -35,7 +35,7 @@ variables in output file..} \item{dataframe}{Logical: if TRUE, will return output in a \code{data.frame} format with a posix column. Useful for -\code{\link[PEcAn.benchmark:align_data]{PEcAn.benchmark::align_data()}} and plotting.} +\code{PEcAn.benchmark::align.data} and plotting.} \item{pft.name}{character string, name of the plant functional type (PFT) to read PFT-specific output. 
If \code{NULL} no diff --git a/base/utils/man/read.sa.output.Rd b/base/utils/man/read.sa.output.Rd index 7e29252bdfe..78ece073f8a 100644 --- a/base/utils/man/read.sa.output.Rd +++ b/base/utils/man/read.sa.output.Rd @@ -32,6 +32,8 @@ read.sa.output( \item{end.year}{last year to include in sensitivity analysis} +\item{sa.run.ids}{} + \item{per.pft}{flag to determine whether we want SA on pft-specific variables} \item{variables}{variables to be read from model output} diff --git a/base/utils/man/retry.func.Rd b/base/utils/man/retry.func.Rd index 3f88ece8adf..850b5d3fc98 100644 --- a/base/utils/man/retry.func.Rd +++ b/base/utils/man/retry.func.Rd @@ -14,6 +14,8 @@ retry.func( \arguments{ \item{expr}{The function to try running} +\item{isError}{} + \item{maxErrors}{The number of times to retry the function} \item{sleep}{How long to wait before retrying the function call} @@ -30,7 +32,9 @@ Retry function X times before stopping in error \examples{ \dontrun{ dap <- retry.func( - ncdf4::nc_open('https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'), + file_host = 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220' + file_path = '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4' + ncdf4::nc_open(file_host, file_path), maxErrors=10, sleep=2) } diff --git a/base/utils/man/run.write.configs.Rd b/base/utils/man/run.write.configs.Rd deleted file mode 100644 index 6de44206b99..00000000000 --- a/base/utils/man/run.write.configs.Rd +++ /dev/null @@ -1,40 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/run.write.configs.R -\name{run.write.configs} -\alias{run.write.configs} -\title{Run model specific write configuration functions} -\usage{ -run.write.configs( - settings, - write = TRUE, - ens.sample.method = "uniform", - posterior.files = rep(NA, length(settings$pfts)), - overwrite = TRUE -) -} -\arguments{ -\item{write}{should the runs be written to the database} - -\item{ens.sample.method}{how to sample the ensemble members('halton' sequence or 'uniform' random)} - -\item{posterior.files}{Filenames for posteriors for drawing samples for ensemble and sensitivity -analysis (e.g. post.distns.Rdata, or prior.distns.Rdata). Defaults to NA, in which case the -most recent posterior or prior (in that order) for the workflow is used. Should be a vector, -with one entry for each PFT. File name only; PFT outdirs will be appended (this forces use of only -files within this workflow, to avoid confusion).} - -\item{model}{the ecosystem model to generate the configuration files for} -} -\value{ -an updated settings list, which includes ensemble IDs for SA and ensemble analysis -} -\description{ -Main driver function to call the ecosystem model specific (e.g. ED, SiPNET) -run and configuration file scripts -} -\details{ -DEPRECATED: This function has been moved to the PEcAn.workflow package and will be removed from PEcAn.utils. -} -\author{ -David LeBauer, Shawn Serbin, Ryan Kelly, Mike Dietze -} diff --git a/base/utils/man/seconds_in_year.Rd b/base/utils/man/seconds_in_year.Rd index 663467547f0..6211000fb6c 100644 --- a/base/utils/man/seconds_in_year.Rd +++ b/base/utils/man/seconds_in_year.Rd @@ -9,7 +9,9 @@ seconds_in_year(year, leap_year = TRUE, ...) \arguments{ \item{year}{Numeric year (can be a vector)} -\item{leap_year}{Default = TRUE. If set to FALSE will always return 31536000} +\item{leap_year}{Default = TRUE. 
If set to FALSE will always return 31536000.} + +\item{...}{additional arguments} } \description{ Number of seconds in a given year diff --git a/base/utils/man/theme_border.Rd b/base/utils/man/theme_border.Rd deleted file mode 100644 index c69f43538b1..00000000000 --- a/base/utils/man/theme_border.Rd +++ /dev/null @@ -1,48 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/plots.R -\name{theme_border} -\alias{theme_border} -\title{Theme border for plot} -\usage{ -theme_border( - type = c("left", "right", "bottom", "top", "none"), - colour = "black", - size = 1, - linetype = 1 -) -} -\arguments{ -\item{type}{} - -\item{colour}{} - -\item{size}{} - -\item{linetype}{} -} -\value{ -adds borders to ggplot as a side effect -} -\description{ -Add borders to plot -} -\details{ -Has ggplot2 display only specified borders, e.g. ('L'-shaped) borders, -rather than a rectangle or no border. Note that the order can be significant; -for example, if you specify the L border option and then a theme, the theme settings -will override the border option, so you need to specify the theme (if any) before the border option, as above. -} -\examples{ -\dontrun{ -df = data.frame( x=c(1,2,3), y=c(4,5,6) ) -ggplot(data=df, aes(x=x, y=y)) + geom_point() + theme_bw() + - opts(panel.border = theme_border(c('bottom','left')) ) -ggplot(data=df, aes(x=x, y=y)) + geom_point() + theme_bw() + - opts(panel.border = theme_border(c('b','l')) ) -} -} -\author{ -Rudolf Cardinal - -\url{ggplot2 google group}{https://groups.google.com/forum/?fromgroups#!topic/ggplot2/-ZjRE2OL8lE} -} diff --git a/base/utils/man/write.ensemble.configs.Rd b/base/utils/man/write.ensemble.configs.Rd deleted file mode 100644 index f34890e9689..00000000000 --- a/base/utils/man/write.ensemble.configs.Rd +++ /dev/null @@ -1,45 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/ensemble.R -\name{write.ensemble.configs} -\alias{write.ensemble.configs} -\title{Write ensemble configs} -\usage{ -write.ensemble.configs( - defaults, - ensemble.samples, - settings, - model, - clean = FALSE, - write.to.db = TRUE -) -} -\arguments{ -\item{defaults}{pft} - -\item{ensemble.samples}{list of lists supplied by \link{get.ensemble.samples}} - -\item{settings}{list of PEcAn settings} - -\item{clean}{remove old output first?} - -\item{write.config}{a model-specific function to write config files, e.g. \link{write.config.ED}} -} -\value{ -list, containing $runs = data frame of runids, and $ensemble.id = the ensemble ID for these runs. Also writes sensitivity analysis configuration files as a side effect -} -\description{ -Write ensemble config files -} -\details{ -DEPRECATED: This function has been moved to the \code{PEcAn.uncertainty} package. -The version in \code{PEcAn.utils} is deprecated, will not be updated to add any new features, -and will be removed in a future release of PEcAn. -Please use \code{PEcAn.uncertainty::write.ensemble.configs} instead. - -Writes config files for use in meta-analysis and returns a list of run ids. -Given a pft.xml object, a list of lists as supplied by get.sa.samples, -a name to distinguish the output files, and the directory to place the files. 
-} -\author{ -David LeBauer, Carl Davidson -} From bfff5b286324e67da1b61e543051b2a73a23d99a Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 21:41:00 +0530 Subject: [PATCH 2145/2289] Update utils.R --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 61929418c99..52bd8a621d9 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -703,7 +703,7 @@ download.file <- function(url, filename, method) { ##' dap <- retry.func( ##' file_host = 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220' ##' file_path = '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4' -##' ncdf4::nc_open(file_host, file_path), +##' ncdf4::nc_open(paste0(file_host, file_path)), ##' maxErrors=10, ##' sleep=2) ##' } From 34fe846482453e1de8481fed7852d54abbb8d84f Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 30 Jul 2021 21:41:29 +0530 Subject: [PATCH 2146/2289] Update retry.func.Rd --- base/utils/man/retry.func.Rd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/man/retry.func.Rd b/base/utils/man/retry.func.Rd index 850b5d3fc98..6587092fc97 100644 --- a/base/utils/man/retry.func.Rd +++ b/base/utils/man/retry.func.Rd @@ -34,7 +34,7 @@ Retry function X times before stopping in error dap <- retry.func( file_host = 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220' file_path = '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4' - ncdf4::nc_open(file_host, file_path), + ncdf4::nc_open(paste0(file_host, file_path)), maxErrors=10, sleep=2) } From 8c34ebb5914b2d36569e989936c104ef17485481 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 31 Jul 2021 13:33:34 +0200 Subject: [PATCH 2147/2289] convert last uses of deprecated utils::logger* to logger::loger* --- base/utils/R/mcmc.list2init.R | 2 +- modules/assim.sequential/R/get_ensemble_weights.R | 2 +- modules/data.land/R/InventoryGrowthFusion.R | 2 +- scripts/EFI_workflow.R | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/base/utils/R/mcmc.list2init.R b/base/utils/R/mcmc.list2init.R index ef3f8b4f954..c0a1b0be376 100644 --- a/base/utils/R/mcmc.list2init.R +++ b/base/utils/R/mcmc.list2init.R @@ -66,7 +66,7 @@ mcmc.list2init <- function(dat) { } } else { - PEcAn.utils::logger.severe("dimension not supported",dim,uname[v]) + PEcAn.logger::logger.severe("dimension not supported",dim,uname[v]) } } ## end else VECTOR or MATRIX diff --git a/modules/assim.sequential/R/get_ensemble_weights.R b/modules/assim.sequential/R/get_ensemble_weights.R index ab5d24c8826..d5eabb5de27 100644 --- a/modules/assim.sequential/R/get_ensemble_weights.R +++ b/modules/assim.sequential/R/get_ensemble_weights.R @@ -49,7 +49,7 @@ get_ensemble_weights <- function(settings, time_do){ time_do[tt] & weight_file$climate_model %in% climate_names, 'weights'])) * nens - if(sum(weight_list[[tt]]) != nens) PEcAn.utils::logger.warn(paste('Time',tt,'does not equal the number of ensemble members',nens)) + if(sum(weight_list[[tt]]) != nens) PEcAn.logger::logger.warn(paste('Time',tt,'does not equal the number of ensemble members',nens)) #TO DO: will need to have some way of dealing with sampling too if there are more ensemble members than weights or vice versa diff --git a/modules/data.land/R/InventoryGrowthFusion.R b/modules/data.land/R/InventoryGrowthFusion.R index 94bfa06f34c..dd020a9ae63 100644 --- a/modules/data.land/R/InventoryGrowthFusion.R 
+++ b/modules/data.land/R/InventoryGrowthFusion.R
@@ -29,7 +29,7 @@ InventoryGrowthFusion <- function(data, cov.data=NULL, time_data = NULL, n.iter=
   }
   max.chunks <- ceiling(n.iter/n.chunk)
   if(max.chunks < k_restart){
-    PEcAn.utils::logger.warn("MCMC already complete",max.chunks,k_restart)
+    PEcAn.logger::logger.warn("MCMC already complete",max.chunks,k_restart)
     return(NULL)
   }
   avail.chunks <- k_restart:ceiling(n.iter/n.chunk)
diff --git a/scripts/EFI_workflow.R b/scripts/EFI_workflow.R
index 8016e91ccf2..1ba328895b7 100644
--- a/scripts/EFI_workflow.R
+++ b/scripts/EFI_workflow.R
@@ -113,7 +113,7 @@ if(length(input_check$id) > 0){
   settings$run$inputs$met$id = index_id
   settings$run$inputs$met$path = clim_check
 
-}else{PEcAn.utils::logger.error("No met file found")}
+}else{PEcAn.logger::logger.error("No met file found")}
 
 #settings <- PEcAn.workflow::do_conversions(settings, T, T, T)
 if(is_empty(settings$run$inputs$met$path) & length(clim_check)>0){

From e5112c623f1e689c6f9fcf691fabd316486e8aec Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Sat, 31 Jul 2021 23:57:55 +0530
Subject: [PATCH 2148/2289] Update base/utils/DESCRIPTION

Co-authored-by: Chris Black
---
 base/utils/DESCRIPTION | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/base/utils/DESCRIPTION b/base/utils/DESCRIPTION
index 304d72b6caa..ae4a09a6111 100644
--- a/base/utils/DESCRIPTION
+++ b/base/utils/DESCRIPTION
@@ -5,7 +5,8 @@ Title: PEcAn Functions Used for Ecological Forecasts and
 Version: 1.7.1
 Date: 2019-09-05
 Authors@R: c(
-    person("Rob","Kooper", role = c("aut", "cre"),
+    person("Mike", "Dietze", role = "aut"),
+    person("Rob","Kooper", role = c("aut", "cre"),
            email = "kooper@illinois.edu"),
     person("David","LeBauer", role = c("aut")),
     person("Xiaohui", "Feng", role = c("aut")),

From d2c1f67c40008897a4ecaa21c79cc54310da717e Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Sun, 1 Aug 2021 12:58:27 +0530
Subject: [PATCH 2149/2289] Update utils.R

---
 base/utils/R/utils.R | 2 --
 1 file changed, 2 deletions(-)

diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R
index 52bd8a621d9..fafa6057dbe 100644
--- a/base/utils/R/utils.R
+++ b/base/utils/R/utils.R
@@ -205,8 +205,6 @@ zero.bounded.density <- function(x, bw = "SJ", n = 1001) {
 ##' @usage summarize.result(result)
 ##' @importFrom rlang .data
 ##' @author David LeBauer, Alexey Shiklomanov
-n <- NULL
-statname <- NULL
 summarize.result <- function(result) {
   ans1 <- result %>%
     dplyr::filter(.data$n == 1) %>%
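The patch that follows documents `ensemble.filename()`. What it builds is a dot-separated path under `settings$outdir`; the settings values in this sketch are invented for illustration:

```r
# With all.var.yr = FALSE the variable and year range become part of the name.
settings <- list(
  outdir = "/data/workflows/PEcAn_99",
  ensemble = list(ensemble.id = "1000012345", variable = "NEE",
                  start.year = 2005, end.year = 2011))
PEcAn.utils::ensemble.filename(settings, "ensemble.output", "Rdata",
                               all.var.yr = FALSE)
# "/data/workflows/PEcAn_99/ensemble.output.1000012345.NEE.2005.2011.Rdata"
```

Note that the function also creates `settings$outdir` if it does not already exist.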
From 8121cabeda295ab9f827adea15ec5d420d39907e Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Mon, 2 Aug 2021 13:43:48 +0530
Subject: [PATCH 2150/2289] Update get.analysis.filenames.r

---
 base/utils/R/get.analysis.filenames.r | 57 +++++++++++++++------------
 1 file changed, 32 insertions(+), 25 deletions(-)

diff --git a/base/utils/R/get.analysis.filenames.r b/base/utils/R/get.analysis.filenames.r
index b67afa272c6..1c733ec9a47 100644
--- a/base/utils/R/get.analysis.filenames.r
+++ b/base/utils/R/get.analysis.filenames.r
@@ -1,19 +1,26 @@
 ##' Generate ensemble filenames
-##' 
+##'
 ##' @name ensemble.filename
 ##' @title Generate ensemble filenames
-##' 
+##' @param settings list of PEcAn settings.
+##' @param prefix string to prepend to the output file name
+##' @param suffix file extension, as character (default = "Rdata")
+##' @param all.var.yr logical: filename covers all variables and years (TRUE) or one variable and year range (FALSE)
+##' @param ensemble.id ensemble ID(s)
+##' @param variable character vector of variables to be read
+##' @param start.year,end.year first and last year of output to read.
+##'
 ##' @return a filename
 ##' @export
 ##'
 ##' @details Generally uses values in settings, but can be overwritten for manual uses
 ##' @author Ryan Kelly
-ensemble.filename <- function(settings, prefix = "ensemble.samples", suffix = "Rdata", 
-                              all.var.yr = TRUE, ensemble.id = settings$ensemble$ensemble.id, 
-                              variable = settings$ensemble$variable, 
-                              start.year = settings$ensemble$start.year, 
+ensemble.filename <- function(settings, prefix = "ensemble.samples", suffix = "Rdata",
+                              all.var.yr = TRUE, ensemble.id = settings$ensemble$ensemble.id,
+                              variable = settings$ensemble$variable,
+                              start.year = settings$ensemble$start.year,
                               end.year = settings$ensemble$end.year) {
-  
+
   if (is.null(ensemble.id) || is.na(ensemble.id)) {
     # This shouldn't generally arise, as run.write.configs() appends ensemble.id to
    # settings. However,it will come up if running run.write.configs(..., write=F),
@@ -22,42 +29,42 @@ ensemble.filename <- function(settings, prefix = "ensemble.samples", suffix = "R
    # run.
     ensemble.id <- "NOENSEMBLEID"
   }
-  
+
   ensemble.dir <- settings$outdir
-  
+
   dir.create(ensemble.dir, showWarnings = FALSE, recursive = TRUE)
-  
+
   if (all.var.yr) {
     # All variables and years will be included; omit those from filename
     ensemble.file <- file.path(ensemble.dir, paste(prefix, ensemble.id, suffix, sep = "."))
   } else {
-    ensemble.file <- file.path(ensemble.dir, paste(prefix, ensemble.id, variable, 
+    ensemble.file <- file.path(ensemble.dir, paste(prefix, ensemble.id, variable,
                                                    start.year, end.year, suffix, sep = "."))
   }
-  
+
   return(ensemble.file)
 } # ensemble.filename


 ##' Generate sensitivity analysis filenames
-##' 
+##'
 ##' @name sensitivity.filename
 ##' @title Generate sensitivity analysis filenames
-##' 
+##' @inheritParams ensemble.filename
 ##' @return a filename
 ##' @export
 ##'
 ##' @details Generally uses values in settings, but can be overwritten for manual uses
 ##' @author Ryan Kelly
-sensitivity.filename <- function(settings, 
-                                 prefix = "sensitivity.samples", suffix = "Rdata", 
+sensitivity.filename <- function(settings,
                                 prefix = "sensitivity.samples", suffix = "Rdata",
                                  all.var.yr = TRUE,
                                  pft = NULL,
                                  ensemble.id = settings$sensitivity.analysis$ensemble.id,
                                  variable = settings$sensitivity.analysis$variable,
                                  start.year = settings$sensitivity.analysis$start.year,
                                  end.year = settings$sensitivity.analysis$end.year) {
-  
+
   if(is.null(ensemble.id) || is.na(ensemble.id)) {
     # This shouldn't generally arise, as run.write.configs() appends ensemble.id to settings. However,it will come up if running run.write.configs(..., write=F), because then no ensemble ID is created in the database. A simple workflow will still work in that case, but provenance will be lost if multiple ensembles are run.
     ensemble.id <- "NOENSEMBLEID"
   }
@@ -73,7 +80,7 @@ sensitivity.filename <- function(settings,
   if (is.null(end.year)) {
     end.year <- "NA"
   }
-  
+
   if (is.null(pft)) {
     # Goes in main output directory.
     sensitivity.dir <- settings$outdir
@@ -81,13 +88,13 @@ sensitivity.filename <- function(settings,
     ind <- which(sapply(settings$pfts, function(x) x$name) == pft)
     if (length(ind) == 0) {
       ## no match
-      PEcAn.logger::logger.warn("sensitivity.filename: unmatched PFT = ", pft, " not among ", 
+      PEcAn.logger::logger.warn("sensitivity.filename: unmatched PFT = ", pft, " not among ",
                   sapply(settings$pfts, function(x) x$name))
       sensitivity.dir <- file.path(settings$outdir, "pfts", pft)
     } else {
       if (length(ind) > 1) {
         ## multiple matches
-        PEcAn.logger::logger.warn("sensitivity.filename: multiple matchs of PFT = ", pft, 
+        PEcAn.logger::logger.warn("sensitivity.filename: multiple matchs of PFT = ", pft,
                     " among ", sapply(settings$pfts, function(x) x$name), " USING")
         ind <- ind[1]
       }
@@ -98,10 +105,10 @@ sensitivity.filename <- function(settings,
       sensitivity.dir <- settings$pfts[[ind]]$outdir
     }
   }
-  
+
   dir.create(sensitivity.dir, showWarnings = FALSE, recursive = TRUE)
   if (!dir.exists(sensitivity.dir)) {
-    PEcAn.logger::logger.error("sensitivity.filename: could not create directory, please check permissions ", 
+    PEcAn.logger::logger.error("sensitivity.filename: could not create directory, please check permissions ",
                  sensitivity.dir, " will try ", settings$outdir)
     if (dir.exists(settings$outdir)) {
       sensitivity.dir <- settings$outdir
@@ -109,15 +116,15 @@ sensitivity.filename <- function(settings,
       PEcAn.logger::logger.error("sensitivity.filename: no OUTDIR ", settings$outdir)
     }
   }
-  
+
   if (all.var.yr) {
     # All variables and years will be included; omit those from filename
     sensitivity.file <- file.path(sensitivity.dir, paste(prefix, ensemble.id, suffix, sep = "."))
   } else {
-    sensitivity.file <- file.path(sensitivity.dir, 
+    sensitivity.file <- file.path(sensitivity.dir,
                                   paste(prefix, ensemble.id, variable, start.year, end.year, suffix, sep = "."))
   }
-  
+
   return(sensitivity.file)
 } # sensitivity.filename

From 1207236f3faade8bfd59fe2d66937a1e000ce542 Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Mon, 2 Aug 2021 13:47:31 +0530
Subject: [PATCH 2151/2289] Update sensitivity.R

---
 base/utils/R/sensitivity.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
index 0a5bca3de48..c63eae4b81f 100644
--- a/base/utils/R/sensitivity.R
+++ b/base/utils/R/sensitivity.R
@@ -20,7 +20,7 @@
 ##' @param pft.name name of PFT used in sensitivity analysis (Optional)
 ##' @param start.year first year to include in sensitivity analysis
 ##' @param end.year last year to include in sensitivity analysis
-##' @param variables variables to be read from model output
+##' @param variable variables to be read from model output
 ##' @param per.pft flag to determine whether we want SA on pft-specific variables
 ##' @param sa.run.ids run IDs of the sensitivity analysis runs
 ##' @export

From e690e2c7158e3b1ac65e34967af243aae86f49de Mon Sep 17 00:00:00 2001
From: PEcAn stylebot
Date: Mon, 2 Aug 2021 08:20:13 +0000
Subject: [PATCH 2152/2289] automated documentation update

---
 base/utils/man/ensemble.filename.Rd          | 13 +++++++++++++
 base/utils/man/read.sa.output.Rd             |  4 ++--
 base/utils/man/sensitivity.filename.Rd       | 13 +++++++++++++
 base/utils/man/{n.Rd => summarize.result.Rd} |  7 ++-----
 4 files changed, 30 insertions(+), 7 deletions(-)
 rename base/utils/man/{n.Rd => summarize.result.Rd} (79%)

diff --git a/base/utils/man/ensemble.filename.Rd b/base/utils/man/ensemble.filename.Rd
index d7c8932887d..52bb338dcfa 100644
--- a/base/utils/man/ensemble.filename.Rd
+++ 
sensitivity.dir <- settings$outdir @@ -81,13 +88,13 @@ sensitivity.filename <- function(settings, ind <- which(sapply(settings$pfts, function(x) x$name) == pft) if (length(ind) == 0) { ## no match - PEcAn.logger::logger.warn("sensitivity.filename: unmatched PFT = ", pft, " not among ", + PEcAn.logger::logger.warn("sensitivity.filename: unmatched PFT = ", pft, " not among ", sapply(settings$pfts, function(x) x$name)) sensitivity.dir <- file.path(settings$outdir, "pfts", pft) } else { if (length(ind) > 1) { ## multiple matches - PEcAn.logger::logger.warn("sensitivity.filename: multiple matchs of PFT = ", pft, + PEcAn.logger::logger.warn("sensitivity.filename: multiple matchs of PFT = ", pft, " among ", sapply(settings$pfts, function(x) x$name), " USING") ind <- ind[1] } @@ -98,10 +105,10 @@ sensitivity.filename <- function(settings, sensitivity.dir <- settings$pfts[[ind]]$outdir } } - + dir.create(sensitivity.dir, showWarnings = FALSE, recursive = TRUE) if (!dir.exists(sensitivity.dir)) { - PEcAn.logger::logger.error("sensitivity.filename: could not create directory, please check permissions ", + PEcAn.logger::logger.error("sensitivity.filename: could not create directory, please check permissions ", sensitivity.dir, " will try ", settings$outdir) if (dir.exists(settings$outdir)) { sensitivity.dir <- settings$outdir @@ -109,15 +116,15 @@ sensitivity.filename <- function(settings, PEcAn.logger::logger.error("sensitivity.filename: no OUTDIR ", settings$outdir) } } - + if (all.var.yr) { # All variables and years will be included; omit those from filename sensitivity.file <- file.path(sensitivity.dir, paste(prefix, ensemble.id, suffix, sep = ".")) } else { - sensitivity.file <- file.path(sensitivity.dir, + sensitivity.file <- file.path(sensitivity.dir, paste(prefix, ensemble.id, variable, start.year, end.year, suffix, sep = ".")) } - + return(sensitivity.file) } # sensitivity.filename From 1207236d3faade8bfd59fe2d66937a1e000ce542 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 2 Aug 2021 13:47:31 +0530 Subject: [PATCH 2151/2289] Update sensitivity.R --- base/utils/R/sensitivity.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R index 0a5bca3de48..c63eae4b81f 100644 --- a/base/utils/R/sensitivity.R +++ b/base/utils/R/sensitivity.R @@ -20,7 +20,7 @@ ##' @param pft.name name of PFT used in sensitivity analysis (Optional) ##' @param start.year first year to include in sensitivity analysis ##' @param end.year last year to include in sensitivity analysis -##' @param variables variables to be read from model output +##' @param variable variables to be read from model output ##' @param per.pft flag to determine whether we want SA on pft-specific variables ##' @param sa.run.ids ##' @export From e690e2c7158e3b1ac65e34967af243aae86f49de Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Mon, 2 Aug 2021 08:20:13 +0000 Subject: [PATCH 2152/2289] automated documentation update --- base/utils/man/ensemble.filename.Rd | 13 +++++++++++++ base/utils/man/read.sa.output.Rd | 4 ++-- base/utils/man/sensitivity.filename.Rd | 13 +++++++++++++ base/utils/man/{n.Rd => summarize.result.Rd} | 7 ++----- 4 files changed, 30 insertions(+), 7 deletions(-) rename base/utils/man/{n.Rd => summarize.result.Rd} (79%) diff --git a/base/utils/man/ensemble.filename.Rd b/base/utils/man/ensemble.filename.Rd index d7c8932887d..52bb338dcfa 100644 --- a/base/utils/man/ensemble.filename.Rd +++ 
b/base/utils/man/ensemble.filename.Rd @@ -15,6 +15,19 @@ ensemble.filename( end.year = settings$ensemble$end.year ) } +\arguments{ +\item{settings}{list of PEcAn settings.} + +\item{prefix}{for the rabbitmq api endpoint, default is for no prefix.} + +\item{suffix}{File suffix, as character (default = \code{NULL}).} + +\item{ensemble.id}{ensemble IDs} + +\item{start.year, end.year}{first and last year of output to read.} + +\item{Character}{vector of variables to be read from.} +} \value{ a filename } diff --git a/base/utils/man/read.sa.output.Rd b/base/utils/man/read.sa.output.Rd index 78ece073f8a..bf8760413af 100644 --- a/base/utils/man/read.sa.output.Rd +++ b/base/utils/man/read.sa.output.Rd @@ -32,11 +32,11 @@ read.sa.output( \item{end.year}{last year to include in sensitivity analysis} +\item{variable}{variables to be read from model output} + \item{sa.run.ids}{} \item{per.pft}{flag to determine whether we want SA on pft-specific variables} - -\item{variables}{variables to be read from model output} } \value{ dataframe with one col per quantile analysed and one row per trait, diff --git a/base/utils/man/sensitivity.filename.Rd b/base/utils/man/sensitivity.filename.Rd index 0a6491b2732..b952671e441 100644 --- a/base/utils/man/sensitivity.filename.Rd +++ b/base/utils/man/sensitivity.filename.Rd @@ -16,6 +16,19 @@ sensitivity.filename( end.year = settings$sensitivity.analysis$end.year ) } +\arguments{ +\item{settings}{list of PEcAn settings.} + +\item{prefix}{for the rabbitmq api endpoint, default is for no prefix.} + +\item{suffix}{File suffix, as character (default = \code{NULL}).} + +\item{ensemble.id}{ensemble IDs} + +\item{start.year}{first and last year of output to read.} + +\item{end.year}{first and last year of output to read.} +} \value{ a filename } diff --git a/base/utils/man/n.Rd b/base/utils/man/summarize.result.Rd similarity index 79% rename from base/utils/man/n.Rd rename to base/utils/man/summarize.result.Rd index 642cbf8cc61..28ad6262524 100644 --- a/base/utils/man/n.Rd +++ b/base/utils/man/summarize.result.Rd @@ -1,10 +1,8 @@ % Generated by roxygen2: do not edit by hand % Please edit documentation in R/utils.R -\docType{data} -\name{n} -\alias{n} +\name{summarize.result} +\alias{summarize.result} \title{Summarize Results} -\format{An object of class \code{NULL} of length 0.} \usage{ summarize.result(result) } @@ -20,4 +18,3 @@ Summarize results of replicate observations in trait data query \author{ David LeBauer, Alexey Shiklomanov } -\keyword{datasets} From b57362f4ad3217eb432d0bbf22f7a368edb8de48 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 2 Aug 2021 14:24:47 +0530 Subject: [PATCH 2153/2289] Delete logger.R --- base/utils/R/logger.R | 76 ------------------------------------------- 1 file changed, 76 deletions(-) delete mode 100644 base/utils/R/logger.R diff --git a/base/utils/R/logger.R b/base/utils/R/logger.R deleted file mode 100644 index b4da35028b7..00000000000 --- a/base/utils/R/logger.R +++ /dev/null @@ -1,76 +0,0 @@ -logger_deprecated <- function() { - warning('Logger functions have moved from PEcAn.utils to PEcAn.logger.', - 'This usage is deprecated') -} - -#' Logger functions (imported temporarily from PEcAn.logger) -#' -#' @importFrom PEcAn.logger logger.debug -#' @export -logger.debug <- function(...) { - logger_deprecated() - PEcAn.logger::logger.debug(...) -} - -#' @importFrom PEcAn.logger logger.info -#' @export -logger.info <- function(...) 
{ - logger_deprecated() - PEcAn.logger::logger.info(...) -} - -#' @importFrom PEcAn.logger logger.warn -#' @export -logger.warn <- function(...) { - logger_deprecated() - PEcAn.logger::logger.warn(...) -} - -#' @importFrom PEcAn.logger logger.error -#' @export -logger.error <- function(...) { - logger_deprecated() - PEcAn.logger::logger.error(...) -} - -#' @importFrom PEcAn.logger logger.severe -#' @export -logger.severe <- function(...) { - logger_deprecated() - PEcAn.logger::logger.severe(...) -} - -#' @importFrom PEcAn.logger logger.setLevel -#' @export -logger.setLevel <- function(...) { - logger_deprecated() - PEcAn.logger::logger.setLevel(...) -} - -#' @importFrom PEcAn.logger logger.getLevel -#' @export -logger.getLevel <- function(...) { - logger_deprecated() - PEcAn.logger::logger.getLevel(...) -} - -#' @importFrom PEcAn.logger logger.setOutputFile -#' @export -logger.setOutputFile <- function(...) { - logger_deprecated() - PEcAn.logger::logger.setOutputFile(...) -} - -#' @importFrom PEcAn.logger logger.setQuitOnSevere -#' @export -logger.setQuitOnSevere <- function(...) { - logger_deprecated() - PEcAn.logger::logger.setQuitOnSevere(...) -} - -#' @importFrom PEcAn.logger logger.setWidth -#' @export -logger.setWidth <- function(...) { - logger_deprecated() - PEcAn.logger::logger.setWidth(...) -} From 1562f478024e6b2f9e0d3f0867117d902d6737ba Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 3 Aug 2021 01:37:44 +0530 Subject: [PATCH 2154/2289] Update get.analysis.filenames.r --- base/utils/R/get.analysis.filenames.r | 1 + 1 file changed, 1 insertion(+) diff --git a/base/utils/R/get.analysis.filenames.r b/base/utils/R/get.analysis.filenames.r index 1c733ec9a47..06cd3aaac59 100644 --- a/base/utils/R/get.analysis.filenames.r +++ b/base/utils/R/get.analysis.filenames.r @@ -9,6 +9,7 @@ ##' @param ensemble.id ensemble IDs ##' @param Character vector of variables to be read from. ##' @param start.year,end.year first and last year of output to read. 
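PATCH 2153 above deletes the temporary compatibility layer: every old `PEcAn.utils::logger.*` entry point had been reduced to a shim that warned and delegated. Condensed from the deleted lines (this is the removed pattern, not new API):

logger_deprecated <- function() {
  warning("Logger functions have moved from PEcAn.utils to PEcAn.logger. ",
          "This usage is deprecated")
}

logger.info <- function(...) {
  logger_deprecated()              # warn on every call through the old name
  PEcAn.logger::logger.info(...)   # then delegate to the new package
}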
+##' @param variable variables to be read ##' ##' @return a filename ##' @export From 0514314dd30836b100e4c9b8771645a6bc4cba64 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Mon, 2 Aug 2021 20:15:49 +0000 Subject: [PATCH 2155/2289] automated documentation update --- base/utils/NAMESPACE | 20 -------------------- base/utils/man/ensemble.filename.Rd | 2 ++ base/utils/man/logger.debug.Rd | 11 ----------- base/utils/man/sensitivity.filename.Rd | 2 ++ 4 files changed, 4 insertions(+), 31 deletions(-) delete mode 100644 base/utils/man/logger.debug.Rd diff --git a/base/utils/NAMESPACE b/base/utils/NAMESPACE index e48e4108e3c..d227376cddf 100644 --- a/base/utils/NAMESPACE +++ b/base/utils/NAMESPACE @@ -28,16 +28,6 @@ export(left.pad.zeros) export(listToArgString) export(load.modelpkg) export(load_local) -export(logger.debug) -export(logger.error) -export(logger.getLevel) -export(logger.info) -export(logger.setLevel) -export(logger.setOutputFile) -export(logger.setQuitOnSevere) -export(logger.setWidth) -export(logger.severe) -export(logger.warn) export(match_file) export(mcmc.list2init) export(misc.are.convertible) @@ -72,15 +62,5 @@ export(units_are_equivalent) export(vecpaste) export(write.sa.configs) export(zero.truncate) -importFrom(PEcAn.logger,logger.debug) -importFrom(PEcAn.logger,logger.error) -importFrom(PEcAn.logger,logger.getLevel) -importFrom(PEcAn.logger,logger.info) -importFrom(PEcAn.logger,logger.setLevel) -importFrom(PEcAn.logger,logger.setOutputFile) -importFrom(PEcAn.logger,logger.setQuitOnSevere) -importFrom(PEcAn.logger,logger.setWidth) -importFrom(PEcAn.logger,logger.severe) -importFrom(PEcAn.logger,logger.warn) importFrom(magrittr,"%>%") importFrom(rlang,.data) diff --git a/base/utils/man/ensemble.filename.Rd b/base/utils/man/ensemble.filename.Rd index 52bb338dcfa..1282583cdd5 100644 --- a/base/utils/man/ensemble.filename.Rd +++ b/base/utils/man/ensemble.filename.Rd @@ -24,6 +24,8 @@ ensemble.filename( \item{ensemble.id}{ensemble IDs} +\item{variable}{variables to be read} + \item{start.year, end.year}{first and last year of output to read.} \item{Character}{vector of variables to be read from.} diff --git a/base/utils/man/logger.debug.Rd b/base/utils/man/logger.debug.Rd deleted file mode 100644 index 7a0fb99009b..00000000000 --- a/base/utils/man/logger.debug.Rd +++ /dev/null @@ -1,11 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/logger.R -\name{logger.debug} -\alias{logger.debug} -\title{Logger functions (imported temporarily from PEcAn.logger)} -\usage{ -logger.debug(...) 
-} -\description{ -Logger functions (imported temporarily from PEcAn.logger) -} diff --git a/base/utils/man/sensitivity.filename.Rd b/base/utils/man/sensitivity.filename.Rd index b952671e441..4b9067938ee 100644 --- a/base/utils/man/sensitivity.filename.Rd +++ b/base/utils/man/sensitivity.filename.Rd @@ -25,6 +25,8 @@ sensitivity.filename( \item{ensemble.id}{ensemble IDs} +\item{variable}{variables to be read} + \item{start.year}{first and last year of output to read.} \item{end.year}{first and last year of output to read.} From a9bf9d9b8f964f9ae12de7c637ad62542ca79835 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 3 Aug 2021 18:57:39 +0530 Subject: [PATCH 2156/2289] Update utils.R --- base/utils/R/utils.R | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index fafa6057dbe..9bbbb6dfab6 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -699,9 +699,10 @@ download.file <- function(url, filename, method) { ##' @examples ##' \dontrun{ ##' dap <- retry.func( -##' file_host = 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220' -##' file_path = '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4' -##' ncdf4::nc_open(paste0(file_host, file_path)), +##' file_url <- paste0("https://thredds.daac.ornl.gov/", +##' "thredds/dodsC/ornldaac/1220", +##' "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") +##' ncdf4::nc_open(file_url) ##' maxErrors=10, ##' sleep=2) ##' } From e151c04ec6ea70dcaafa05f87ee3cba1eb6ebe7b Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Tue, 3 Aug 2021 13:30:40 +0000 Subject: [PATCH 2157/2289] automated documentation update --- base/utils/man/retry.func.Rd | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/base/utils/man/retry.func.Rd b/base/utils/man/retry.func.Rd index 6587092fc97..9ea9730ca38 100644 --- a/base/utils/man/retry.func.Rd +++ b/base/utils/man/retry.func.Rd @@ -32,9 +32,10 @@ Retry function X times before stopping in error \examples{ \dontrun{ dap <- retry.func( - file_host = 'https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220' - file_path = '/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4' - ncdf4::nc_open(paste0(file_host, file_path)), + file_url <- paste0("https://thredds.daac.ornl.gov/", + "thredds/dodsC/ornldaac/1220", + "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") + ncdf4::nc_open(file_url) maxErrors=10, sleep=2) } From 308eee368398497fa44f8959df0a2eea30b4aa5b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Tue, 3 Aug 2021 19:57:43 +0530 Subject: [PATCH 2158/2289] Update utils.R --- base/utils/R/utils.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 9bbbb6dfab6..d8572a96d63 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -24,6 +24,7 @@ ##' @param lon longitude if dimension requests it ##' @param time time if dimension requests it ##' @param nsoil nsoil if dimension requests it +##' param silent boolean to indicate if logging should be performed. ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { @@ -650,7 +651,7 @@ convert.expr <- function(expression) { ##' @title download.file ##' @param url complete URL for file download ##' @param filename destination file name -##' @param method Method of file retrieval. 
Can set this using the options(download.ftp.method=[method]) in your Rprofile. +##' @param method Method of file retrieval. Can set this using the options(`download.ftp.method=[method]`) in your Rprofile. ##' example options(download.ftp.method="ncftpget") ##' ##' @examples @@ -702,7 +703,7 @@ download.file <- function(url, filename, method) { ##' file_url <- paste0("https://thredds.daac.ornl.gov/", ##' "thredds/dodsC/ornldaac/1220", ##' "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") -##' ncdf4::nc_open(file_url) +##' ncdf4::nc_open(file_url) ##' maxErrors=10, ##' sleep=2) ##' } From 45295abc3b52b82e3b23f256bafc8f8c86a8ca9d Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Tue, 3 Aug 2021 14:30:40 +0000 Subject: [PATCH 2159/2289] automated documentation update --- base/utils/man/download.file.Rd | 2 +- base/utils/man/mstmipvar.Rd | 3 ++- base/utils/man/retry.func.Rd | 2 +- 3 files changed, 4 insertions(+), 3 deletions(-) diff --git a/base/utils/man/download.file.Rd b/base/utils/man/download.file.Rd index e2e8ecd6461..70573a52693 100644 --- a/base/utils/man/download.file.Rd +++ b/base/utils/man/download.file.Rd @@ -11,7 +11,7 @@ download.file(url, filename, method) \item{filename}{destination file name} -\item{method}{Method of file retrieval. Can set this using the options(download.ftp.method=\link{method}) in your Rprofile. +\item{method}{Method of file retrieval. Can set this using the options(\verb{download.ftp.method=[method]}) in your Rprofile. example options(download.ftp.method="ncftpget")} } \description{ diff --git a/base/utils/man/mstmipvar.Rd b/base/utils/man/mstmipvar.Rd index ccd8c1a50ea..51ce3cf5789 100644 --- a/base/utils/man/mstmipvar.Rd +++ b/base/utils/man/mstmipvar.Rd @@ -15,7 +15,8 @@ mstmipvar(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) \item{time}{time if dimension requests it} -\item{nsoil}{nsoil if dimension requests it} +\item{nsoil}{nsoil if dimension requests it +param silent boolean to indicate if logging should be performed.} } \value{ ncvar based on MstMIP definition diff --git a/base/utils/man/retry.func.Rd b/base/utils/man/retry.func.Rd index 9ea9730ca38..633df861b9f 100644 --- a/base/utils/man/retry.func.Rd +++ b/base/utils/man/retry.func.Rd @@ -35,7 +35,7 @@ dap <- retry.func( file_url <- paste0("https://thredds.daac.ornl.gov/", "thredds/dodsC/ornldaac/1220", "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") - ncdf4::nc_open(file_url) + ncdf4::nc_open(file_url) maxErrors=10, sleep=2) } From ddc273178288d6222bba247a29f71a72f3f13522 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:08:54 +0530 Subject: [PATCH 2160/2289] Update base/utils/R/convert.input.R Co-authored-by: Chris Black --- base/utils/R/convert.input.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R index 98b3bd6a40b..0dc35324ecc 100644 --- a/base/utils/R/convert.input.R +++ b/base/utils/R/convert.input.R @@ -56,7 +56,7 @@ ##' @param ensemble An integer representing the number of ensembles, or FALSE if it data product is not an ensemble. ##' @param ensemble_name If convert.input is being called iteratively for each ensemble, ensemble_name contains the identifying name/number for that ensemble. ##' @param ... 
Additional arguments, passed unchanged to \code{fcn} -##' @param dbparms list, settings$database info reqired for opening a connection to DB +##' @param dbparms list of parameters to use for opening a database connection ##' ##' @return A list of two BETY IDs (input.id, dbfile.id) identifying a pre-existing file if one was available, or a newly created file if not. Each id may be a vector of ids if the function is processing an entire ensemble at once. ##' From a1e85f1967a013badd302f98ab39b3367c78160e Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:09:19 +0530 Subject: [PATCH 2161/2289] Update base/utils/R/convert.input.R Co-authored-by: Chris Black --- base/utils/R/convert.input.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R index 0dc35324ecc..99c013716a1 100644 --- a/base/utils/R/convert.input.R +++ b/base/utils/R/convert.input.R @@ -97,7 +97,7 @@ convert.input <- mimetype, site.id, start_date, end_date)) # TODO see issue #18 - Rbinary <- ifelse(!exists("settings") || is.null(.data$settings$host$Rbinary),"R",.data$settings$host$Rbinary) + Rbinary <- ifelse(!exists("settings") || is.null(settings$host$Rbinary),"R",settings$host$Rbinary) n <- nchar(outfolder) if (substr(outfolder, n, n) != "/") { From e48b4abd58232f36f18be1f838f20965670bd4a3 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:10:27 +0530 Subject: [PATCH 2162/2289] Update base/utils/R/convert.input.R Co-authored-by: Chris Black --- base/utils/R/convert.input.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R index 99c013716a1..1f38ee128aa 100644 --- a/base/utils/R/convert.input.R +++ b/base/utils/R/convert.input.R @@ -170,7 +170,6 @@ convert.input <- exact.dates = TRUE, pattern = filename_pattern) - id <- NULL if(nrow(existing.dbfile[[i]]) > 0) { existing.input[[i]] <- PEcAn.DB::db.query(paste0("SELECT * FROM inputs WHERE id=", existing.dbfile[[i]]$container_id),con) From 63006c73c8da40d0aa5c89a643be06d82d485fc7 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:11:42 +0530 Subject: [PATCH 2163/2289] Update base/utils/R/get.analysis.filenames.r Co-authored-by: Chris Black --- base/utils/R/get.analysis.filenames.r | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/base/utils/R/get.analysis.filenames.r b/base/utils/R/get.analysis.filenames.r index 06cd3aaac59..85ab54d0109 100644 --- a/base/utils/R/get.analysis.filenames.r +++ b/base/utils/R/get.analysis.filenames.r @@ -3,13 +3,13 @@ ##' @name ensemble.filename ##' @title Generate ensemble filenames ##' @param settings list of PEcAn settings. -##' @param prefix for the rabbitmq api endpoint, default is for no prefix. -##' @param suffix File suffix, as character (default = `NULL`). -##' @param all.var.yr -##' @param ensemble.id ensemble IDs -##' @param Character vector of variables to be read from. -##' @param start.year,end.year first and last year of output to read. -##' @param variable variables to be read +##' @param prefix string to appear at the beginning of the filename +##' @param suffix file extension: string to appear at the end of the filename +##' @param all.var.yr logical: does ensemble include all vars and years? 
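PATCH 2161 above removes a misplaced rlang `.data` pronoun: `.data` is only defined inside tidy-evaluation contexts such as dplyr verbs, so `.data$settings` in ordinary code was an error. A hedged contrast:

library(dplyr)
df <- data.frame(n = c(1, 1, 2))
filter(df, .data$n == 1)   # valid: .data refers to the columns of df

settings <- list(host = list(Rbinary = "/usr/local/bin/R"))  # toy value
ifelse(is.null(settings$host$Rbinary), "R", settings$host$Rbinary)  # plain list access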
+##' If FALSE, filename will include years and vars +##' @param ensemble.id ensemble ID(s) +##' @param variable variable(s) included in the ensemble. +##' @param start.year,end.year first and last year simulated. ##' ##' @return a filename ##' @export From ea08fd567ef481932c792797825a427b3fc67ae7 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:12:53 +0530 Subject: [PATCH 2164/2289] Update base/utils/R/utils.R Co-authored-by: Chris Black --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index d8572a96d63..4b039d074ee 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -24,7 +24,7 @@ ##' @param lon longitude if dimension requests it ##' @param time time if dimension requests it ##' @param nsoil nsoil if dimension requests it -##' param silent boolean to indicate if logging should be performed. +##' @param silent logical: suppress log messages about missing variables? ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { From 1cff4af1a2acbe4b63566c81495971f7e614965d Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:13:00 +0530 Subject: [PATCH 2165/2289] Update base/utils/R/utils.R Co-authored-by: Chris Black --- base/utils/R/utils.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 4b039d074ee..99d870e1104 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -693,7 +693,9 @@ download.file <- function(url, filename, method) { ##' @param expr The function to try running ##' @param maxErrors The number of times to retry the function ##' @param sleep How long to wait before retrying the function call -##' @param isError +##' @param isError function to use for checking whether to try again. +##' Must take one argument that contains the result of evaluating `expr` +##' and return TRUE if another retry is needed ##' ##' @return retval returns the results of the function call ##' From cb1290ddfc9ae30871f668c050f9fc083e880f23 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:20:36 +0530 Subject: [PATCH 2166/2289] Update base/utils/R/get.analysis.filenames.r Co-authored-by: Chris Black --- base/utils/R/get.analysis.filenames.r | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/base/utils/R/get.analysis.filenames.r b/base/utils/R/get.analysis.filenames.r index 85ab54d0109..8898ebaf3f7 100644 --- a/base/utils/R/get.analysis.filenames.r +++ b/base/utils/R/get.analysis.filenames.r @@ -11,10 +11,9 @@ ##' @param variable variable(s) included in the ensemble. ##' @param start.year,end.year first and last year simulated. ##' -##' @return a filename +##' @return a vector of filenames, each in the form +##' `[settings$outdir]/[prefix].[ensemble.ID].[variable].[start.year].[end.year][suffix]`. 
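PATCH 2165 above pins down the `isError` contract for `retry.func()`. A compact sketch of that contract follows; the argument names mirror the documented signature, but the body is illustrative, not the PEcAn implementation. Note also that the example in the earlier hunks dropped the comma after `ncdf4::nc_open(file_url)`; as an actual call it needs one, as in the commented usage line:

retry_sketch <- function(expr, maxErrors = 5, sleep = 0,
                         isError = function(x) inherits(x, "try-error")) {
  for (attempt in seq_len(maxErrors)) {
    result <- try(eval.parent(substitute(expr)), silent = TRUE)
    if (!isError(result)) return(result)   # success: hand the value back
    if (sleep > 0) Sys.sleep(sleep)        # wait before the next attempt
  }
  stop("retry_sketch: still failing after ", maxErrors, " attempts")
}

# retry_sketch(ncdf4::nc_open(file_url), maxErrors = 10, sleep = 2)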
##' @export -##' -##' @details Generally uses values in settings, but can be overwritten for manual uses ##' @author Ryan Kelly ensemble.filename <- function(settings, prefix = "ensemble.samples", suffix = "Rdata", all.var.yr = TRUE, ensemble.id = settings$ensemble$ensemble.id, From b2f2c3f377a2bd5d092ebb643e35c8fd92d4026d Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:21:09 +0530 Subject: [PATCH 2167/2289] Update base/utils/R/utils.R Co-authored-by: Chris Black --- base/utils/R/utils.R | 3 --- 1 file changed, 3 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 99d870e1104..0d0afdaedfb 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -702,9 +702,6 @@ download.file <- function(url, filename, method) { ##' @examples ##' \dontrun{ ##' dap <- retry.func( -##' file_url <- paste0("https://thredds.daac.ornl.gov/", -##' "thredds/dodsC/ornldaac/1220", -##' "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") ##' ncdf4::nc_open(file_url) ##' maxErrors=10, ##' sleep=2) From d0ca89ac0d1a9205c7c135d19edbbf6c41d3151b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:21:31 +0530 Subject: [PATCH 2168/2289] Update base/utils/R/utils.R Co-authored-by: Chris Black --- base/utils/R/utils.R | 3 +++ 1 file changed, 3 insertions(+) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 0d0afdaedfb..0dfed978407 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -701,6 +701,9 @@ download.file <- function(url, filename, method) { ##' ##' @examples ##' \dontrun{ +##' file_url <- paste0("https://thredds.daac.ornl.gov/", +##' "thredds/dodsC/ornldaac/1220", +##' "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") ##' dap <- retry.func( ##' ncdf4::nc_open(file_url) ##' maxErrors=10, From 49a3801b956b9f60754c11799794c0bcd3f50719 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:23:03 +0530 Subject: [PATCH 2169/2289] Update base/utils/R/sensitivity.R Co-authored-by: Chris Black --- base/utils/R/sensitivity.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R index c63eae4b81f..06e9c1239f2 100644 --- a/base/utils/R/sensitivity.R +++ b/base/utils/R/sensitivity.R @@ -36,7 +36,7 @@ read.sa.output <- function(traits, quantiles, pecandir, outdir, pft.name = "", samples.file <- file.path(pecandir, "samples.Rdata") if (file.exists(samples.file)) { load(samples.file) - sa.run.ids <- .data$runs.samples$sa + sa.run.ids <- runs.samples$sa } else { PEcAn.logger::logger.error(samples.file, "not found, this file is required by the read.sa.output function") } From f71acdfd723e34aec5ac35b52d1140052cdaa1a8 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:23:18 +0530 Subject: [PATCH 2170/2289] Update base/utils/R/mcmc.list2init.R Co-authored-by: Chris Black --- base/utils/R/mcmc.list2init.R | 2 -- 1 file changed, 2 deletions(-) diff --git a/base/utils/R/mcmc.list2init.R b/base/utils/R/mcmc.list2init.R index 5b9970aaedd..6f7ec6c40c5 100644 --- a/base/utils/R/mcmc.list2init.R +++ b/base/utils/R/mcmc.list2init.R @@ -38,8 +38,6 @@ mcmc.list2init <- function(dat) { ## detect variable type (scalar, vector, matrix) cols <- which(firstname == uname[v]) - - nr <- NULL if(length(cols) == 1){ ## SCALAR for(c in seq_len(nc)){ From 
e6bf4e361ac021ead979499126a018c081873531 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:23:32 +0530 Subject: [PATCH 2171/2289] Update base/utils/R/r2bugs.distributions.R Co-authored-by: Chris Black --- base/utils/R/r2bugs.distributions.R | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/base/utils/R/r2bugs.distributions.R b/base/utils/R/r2bugs.distributions.R index 99052a86ebb..bfd982d88a1 100644 --- a/base/utils/R/r2bugs.distributions.R +++ b/base/utils/R/r2bugs.distributions.R @@ -12,8 +12,7 @@ ##' R and BUGS have different parameterizations for some distributions. This function transforms the distributions from R defaults to BUGS defaults. BUGS is an implementation of the BUGS language, and these transformations are expected to work for bugs. ##' @title convert R parameterizations to BUGS paramaterizations ##' @param priors data.frame with columns distn = distribution name, parama, paramb using R default parameterizations. -##' @param direction Whether the model will be filtered backward or forward in time. options = c("backward", "forward") -##' (PalEON will go backward, anybody interested in the future will go forward) +##' @param direction One of "r2bugs" or "bugs2r" ##' @return priors dataframe using JAGS default parameterizations ##' @author David LeBauer, Ben Bolker ##' @export From ce1ad65844ba49e54260b16a9a0726f8f6cd3bae Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:24:07 +0530 Subject: [PATCH 2172/2289] Update base/utils/R/regrid.R Co-authored-by: Chris Black --- base/utils/R/regrid.R | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/base/utils/R/regrid.R b/base/utils/R/regrid.R index 0acbca2828a..b2acd4dbf96 100644 --- a/base/utils/R/regrid.R +++ b/base/utils/R/regrid.R @@ -30,10 +30,9 @@ regrid <- function(latlon.data) { ##' Write gridded data to netcdf file ##' ##' @title grid2netcdf -##' @param grid.data -##' @param gdata -##' @param Date as string or date object. -##' @param outfile location where output will be stored. +##' @param gdata gridded data to write out +##' @param date currently ignored; date(s) from `gdata` are used instead +##' @param outfile name for generated netCDF file. 
##' @return writes netCDF file ##' @author David LeBauer grid2netcdf <- function(gdata, date = "9999-09-09", outfile = "out.nc") { From 1b77075f1f85463bdcb95e1e6ff119946cc932e1 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:24:32 +0530 Subject: [PATCH 2173/2289] Update base/utils/R/regrid.R Co-authored-by: Chris Black --- base/utils/R/regrid.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/utils/R/regrid.R b/base/utils/R/regrid.R index b2acd4dbf96..18311c384c4 100644 --- a/base/utils/R/regrid.R +++ b/base/utils/R/regrid.R @@ -64,7 +64,6 @@ grid2netcdf <- function(gdata, date = "9999-09-09", outfile = "out.nc") { nc <- ncdf4::nc_create(filename = outfile, vars = list(CropYield = yieldvar)) ## Output netCDF data - yieldarray <- NULL # ncvar_put(nc, varid = yieldvar, vals = grid.data[order(lat, lon, order(lubridate::ymd(date )))]$yield) # ncvar_put(nc, varid = yieldvar, vals = grid.data[order(order(lubridate::ymd(date), lat, lon))]$yield) ncdf4::ncvar_put(nc, varid = yieldvar, vals = yieldarray) From 915b81fc6fcd06be5e1f55fc838c261d1aaf4476 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:25:03 +0530 Subject: [PATCH 2174/2289] Update base/utils/R/seconds_in_year.R Co-authored-by: Chris Black --- base/utils/R/seconds_in_year.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/seconds_in_year.R b/base/utils/R/seconds_in_year.R index 88f8bb3a259..d255fa8e85a 100644 --- a/base/utils/R/seconds_in_year.R +++ b/base/utils/R/seconds_in_year.R @@ -3,7 +3,7 @@ #' @author Alexey Shiklomanov #' @param year Numeric year (can be a vector) #' @param leap_year Default = TRUE. If set to FALSE will always return 31536000. -#' @param ... additional arguments +#' @param ... additional arguments, all currently ignored #' @examples #' seconds_in_year(2000) # Leap year -- 366 x 24 x 60 x 60 = 31622400 #' seconds_in_year(2001) # Regular year -- 365 x 24 x 60 x 60 = 31536000 From 1aabf4a3711571c7778c10eadaf2481448682d75 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Wed, 4 Aug 2021 16:30:12 +0530 Subject: [PATCH 2175/2289] Update base/utils/R/get.analysis.filenames.r Co-authored-by: Chris Black --- base/utils/R/get.analysis.filenames.r | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/base/utils/R/get.analysis.filenames.r b/base/utils/R/get.analysis.filenames.r index 8898ebaf3f7..b77517c7e67 100644 --- a/base/utils/R/get.analysis.filenames.r +++ b/base/utils/R/get.analysis.filenames.r @@ -1,7 +1,18 @@ ##' Generate ensemble filenames ##' -##' @name ensemble.filename -##' @title Generate ensemble filenames +##' Generates a vector of filenames to be used for PEcAn ensemble output files. +##' All paths start from directory `settings$outdir`, +##' which will be created if it does not exist. +##' +##' Typically used by passing only a settings object, +##' but all values can be overridden for manual use. +##' +##' If only a single variable or a subset of years are needed, +##' the generated filename will identify these in the form +## `prefix.ensemble_id.variable.startyear.endyear.suffix` +##' If all vars and years are included, set `all.yr.var` to TRUE +##' to get a filename of the form `prefix.ensemble_id.suffix`. +##' All elements are recycled vectorwise. ##' @param settings list of PEcAn settings. 
##' @param prefix string to appear at the beginning of the filename ##' @param suffix file extension: string to appear at the end of the filename From 69c24812486be5f9f8069ccd51b35b5cfc192b9d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 4 Aug 2021 13:08:54 +0200 Subject: [PATCH 2176/2289] Update base/utils/R/regrid.R --- base/utils/R/regrid.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/utils/R/regrid.R b/base/utils/R/regrid.R index 18311c384c4..6e8768104e2 100644 --- a/base/utils/R/regrid.R +++ b/base/utils/R/regrid.R @@ -51,7 +51,6 @@ grid2netcdf <- function(gdata, date = "9999-09-09", outfile = "out.nc") { "please install `data.table`." ) } - years <- NULL grid.data <- merge(latlons, gdata, by = c("lat", "lon", "date"), all.x = TRUE) lat <- ncdf4::ncdim_def("lat", "degrees_east", vals = lats, longname = "station_latitude") lon <- ncdf4::ncdim_def("lon", "degrees_north", vals = lons, longname = "station_longitude") From 0a27ddf3163bd2ad0e595e7c8fcc764cc2273be5 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Wed, 4 Aug 2021 11:41:04 +0000 Subject: [PATCH 2177/2289] automated documentation update --- base/utils/man/convert.input.Rd | 2 +- base/utils/man/ensemble.filename.Rd | 29 ++++++++++++++++++-------- base/utils/man/grid2netcdf.Rd | 6 ++++-- base/utils/man/mstmipvar.Rd | 5 +++-- base/utils/man/r2bugs.distributions.Rd | 3 +-- base/utils/man/retry.func.Rd | 6 ++++-- base/utils/man/seconds_in_year.Rd | 2 +- base/utils/man/sensitivity.filename.Rd | 15 +++++++------ 8 files changed, 43 insertions(+), 25 deletions(-) diff --git a/base/utils/man/convert.input.Rd b/base/utils/man/convert.input.Rd index c0fc5e85b71..7d69d76ac45 100644 --- a/base/utils/man/convert.input.Rd +++ b/base/utils/man/convert.input.Rd @@ -77,7 +77,7 @@ Currently only \code{host$name} is used by \code{convert.input}, but whole list \item{ensemble_name}{If convert.input is being called iteratively for each ensemble, ensemble_name contains the identifying name/number for that ensemble.} -\item{dbparms}{list, settings$database info reqired for opening a connection to DB} +\item{dbparms}{list of parameters to use for opening a database connection} \item{...}{Additional arguments, passed unchanged to \code{fcn}} } diff --git a/base/utils/man/ensemble.filename.Rd b/base/utils/man/ensemble.filename.Rd index 1282583cdd5..988c514ef3a 100644 --- a/base/utils/man/ensemble.filename.Rd +++ b/base/utils/man/ensemble.filename.Rd @@ -18,26 +18,37 @@ ensemble.filename( \arguments{ \item{settings}{list of PEcAn settings.} -\item{prefix}{for the rabbitmq api endpoint, default is for no prefix.} +\item{prefix}{string to appear at the beginning of the filename} -\item{suffix}{File suffix, as character (default = \code{NULL}).} +\item{suffix}{file extension: string to appear at the end of the filename} -\item{ensemble.id}{ensemble IDs} +\item{all.var.yr}{logical: does ensemble include all vars and years? +If FALSE, filename will include years and vars} -\item{variable}{variables to be read} +\item{ensemble.id}{ensemble ID(s)} -\item{start.year, end.year}{first and last year of output to read.} +\item{variable}{variable(s) included in the ensemble.} -\item{Character}{vector of variables to be read from.} +\item{start.year, end.year}{first and last year simulated.} } \value{ -a filename +a vector of filenames, each in the form +\verb{[settings$outdir]/[prefix].[ensemble.ID].[variable].[start.year].[end.year][suffix]}. 
} \description{ -Generate ensemble filenames +Generates a vector of filenames to be used for PEcAn ensemble output files. +All paths start from directory \code{settings$outdir}, +which will be created if it does not exist. } \details{ -Generally uses values in settings, but can be overwritten for manual uses +Typically used by passing only a settings object, +but all values can be overridden for manual use. + +If only a single variable or a subset of years are needed, +the generated filename will identify these in the form +If all vars and years are included, set \code{all.yr.var} to TRUE +to get a filename of the form \code{prefix.ensemble_id.suffix}. +All elements are recycled vectorwise. } \author{ Ryan Kelly diff --git a/base/utils/man/grid2netcdf.Rd b/base/utils/man/grid2netcdf.Rd index 615d4069ecf..5a133c59940 100644 --- a/base/utils/man/grid2netcdf.Rd +++ b/base/utils/man/grid2netcdf.Rd @@ -7,9 +7,11 @@ grid2netcdf(gdata, date = "9999-09-09", outfile = "out.nc") } \arguments{ -\item{outfile}{location where output will be stored.} +\item{gdata}{gridded data to write out} -\item{Date}{as string or date object.} +\item{date}{currently ignored; date(s) from \code{gdata} are used instead} + +\item{outfile}{name for generated netCDF file.} } \value{ writes netCDF file diff --git a/base/utils/man/mstmipvar.Rd b/base/utils/man/mstmipvar.Rd index 51ce3cf5789..461be026b34 100644 --- a/base/utils/man/mstmipvar.Rd +++ b/base/utils/man/mstmipvar.Rd @@ -15,8 +15,9 @@ mstmipvar(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) \item{time}{time if dimension requests it} -\item{nsoil}{nsoil if dimension requests it -param silent boolean to indicate if logging should be performed.} +\item{nsoil}{nsoil if dimension requests it} + +\item{silent}{logical: suppress log messages about missing variables?} } \value{ ncvar based on MstMIP definition diff --git a/base/utils/man/r2bugs.distributions.Rd b/base/utils/man/r2bugs.distributions.Rd index 646c35158d4..77d11fc8616 100644 --- a/base/utils/man/r2bugs.distributions.Rd +++ b/base/utils/man/r2bugs.distributions.Rd @@ -9,8 +9,7 @@ r2bugs.distributions(priors, direction = "r2bugs") \arguments{ \item{priors}{data.frame with columns distn = distribution name, parama, paramb using R default parameterizations.} -\item{direction}{Whether the model will be filtered backward or forward in time. options = c("backward", "forward") -(PalEON will go backward, anybody interested in the future will go forward)} +\item{direction}{One of "r2bugs" or "bugs2r"} } \value{ priors dataframe using JAGS default parameterizations diff --git a/base/utils/man/retry.func.Rd b/base/utils/man/retry.func.Rd index 633df861b9f..ad501ab215e 100644 --- a/base/utils/man/retry.func.Rd +++ b/base/utils/man/retry.func.Rd @@ -14,7 +14,9 @@ retry.func( \arguments{ \item{expr}{The function to try running} -\item{isError}{} +\item{isError}{function to use for checking whether to try again. 
+Must take one argument that contains the result of evaluating \code{expr} +and return TRUE if another retry is needed} \item{maxErrors}{The number of times to retry the function} @@ -31,10 +33,10 @@ Retry function X times before stopping in error } \examples{ \dontrun{ -dap <- retry.func( file_url <- paste0("https://thredds.daac.ornl.gov/", "thredds/dodsC/ornldaac/1220", "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") +dap <- retry.func( ncdf4::nc_open(file_url) maxErrors=10, sleep=2) diff --git a/base/utils/man/seconds_in_year.Rd b/base/utils/man/seconds_in_year.Rd index 6211000fb6c..720134b31bd 100644 --- a/base/utils/man/seconds_in_year.Rd +++ b/base/utils/man/seconds_in_year.Rd @@ -11,7 +11,7 @@ seconds_in_year(year, leap_year = TRUE, ...) \item{leap_year}{Default = TRUE. If set to FALSE will always return 31536000.} -\item{...}{additional arguments} +\item{...}{additional arguments, all currently ignored} } \description{ Number of seconds in a given year diff --git a/base/utils/man/sensitivity.filename.Rd b/base/utils/man/sensitivity.filename.Rd index 4b9067938ee..609ae39654c 100644 --- a/base/utils/man/sensitivity.filename.Rd +++ b/base/utils/man/sensitivity.filename.Rd @@ -19,17 +19,20 @@ sensitivity.filename( \arguments{ \item{settings}{list of PEcAn settings.} -\item{prefix}{for the rabbitmq api endpoint, default is for no prefix.} +\item{prefix}{string to appear at the beginning of the filename} -\item{suffix}{File suffix, as character (default = \code{NULL}).} +\item{suffix}{file extension: string to appear at the end of the filename} -\item{ensemble.id}{ensemble IDs} +\item{all.var.yr}{logical: does ensemble include all vars and years? +If FALSE, filename will include years and vars} -\item{variable}{variables to be read} +\item{ensemble.id}{ensemble ID(s)} -\item{start.year}{first and last year of output to read.} +\item{variable}{variable(s) included in the ensemble.} -\item{end.year}{first and last year of output to read.} +\item{start.year}{first and last year simulated.} + +\item{end.year}{first and last year simulated.} } \value{ a filename From bbf94e76e53250db00043b2d5f3771ddc905b90f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 4 Aug 2021 15:27:48 +0200 Subject: [PATCH 2178/2289] more missing param defs --- base/utils/R/get.analysis.filenames.r | 2 ++ base/utils/R/r2bugs.distributions.R | 5 +++-- base/utils/R/sensitivity.R | 4 +++- base/utils/man/bugs.rdist.Rd | 4 +++- base/utils/man/read.sa.output.Rd | 4 +++- base/utils/man/sensitivity.filename.Rd | 3 +++ base/utils/tests/Rcheck_reference.log | 6 +----- base/visualization/NAMESPACE | 2 ++ modules/uncertainty/NAMESPACE | 2 ++ 9 files changed, 22 insertions(+), 10 deletions(-) diff --git a/base/utils/R/get.analysis.filenames.r b/base/utils/R/get.analysis.filenames.r index b77517c7e67..621a58b525c 100644 --- a/base/utils/R/get.analysis.filenames.r +++ b/base/utils/R/get.analysis.filenames.r @@ -62,6 +62,8 @@ ensemble.filename <- function(settings, prefix = "ensemble.samples", suffix = "R ##' @name sensitivity.filename ##' @title Generate sensitivity analysis filenames ##' @inheritParams ensemble.filename +##' @param pft name of PFT used for analysis. 
If NULL, assumes all +##' PFTs in run are used and does not add them to the filename ##' @return a filename ##' @export ##' diff --git a/base/utils/R/r2bugs.distributions.R b/base/utils/R/r2bugs.distributions.R index bfd982d88a1..0e832c3183d 100644 --- a/base/utils/R/r2bugs.distributions.R +++ b/base/utils/R/r2bugs.distributions.R @@ -83,8 +83,9 @@ bugs2r.distributions <- function(..., direction = "bugs2r") { ##' JAGS ##' @title bugs.rdist ##' @param prior dataframe with distribution name and parameters -##' @param n.iter number of samples, output will have n.iter/4 samples -##' @param n +##' @param n.iter number of MCMC samples. Output will have n.iter/4 samples +##' @param n number of randomly chosen samples to return. +## If NULL, returns all n.iter/4 of them ##' @return vector of samples ##' @export ##' @author David LeBauer diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R index 06e9c1239f2..c9e0bc9aa9e 100644 --- a/base/utils/R/sensitivity.R +++ b/base/utils/R/sensitivity.R @@ -22,7 +22,9 @@ ##' @param end.year last year to include in sensitivity analysis ##' @param variable variables to be read from model output ##' @param per.pft flag to determine whether we want SA on pft-specific variables -##' @param sa.run.ids +##' @param sa.run.ids list of run ids to read. +##' If NULL, will look in `pecandir` for a file named `samples.Rdata` +##' and read from that ##' @export ##' @importFrom magrittr %>% ##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze, Istem Fer diff --git a/base/utils/man/bugs.rdist.Rd b/base/utils/man/bugs.rdist.Rd index 1fad7159e59..6443e81cc52 100644 --- a/base/utils/man/bugs.rdist.Rd +++ b/base/utils/man/bugs.rdist.Rd @@ -13,7 +13,9 @@ bugs.rdist( \arguments{ \item{prior}{dataframe with distribution name and parameters} -\item{n.iter}{number of samples, output will have n.iter/4 samples} +\item{n.iter}{number of MCMC samples. Output will have n.iter/4 samples} + +\item{n}{number of randomly chosen samples to return.} } \value{ vector of samples diff --git a/base/utils/man/read.sa.output.Rd b/base/utils/man/read.sa.output.Rd index bf8760413af..c6efc9cd762 100644 --- a/base/utils/man/read.sa.output.Rd +++ b/base/utils/man/read.sa.output.Rd @@ -34,7 +34,9 @@ read.sa.output( \item{variable}{variables to be read from model output} -\item{sa.run.ids}{} +\item{sa.run.ids}{list of run ids to read. +If NULL, will look in \code{pecandir} for a file named \code{samples.Rdata} +and read from that} \item{per.pft}{flag to determine whether we want SA on pft-specific variables} } diff --git a/base/utils/man/sensitivity.filename.Rd b/base/utils/man/sensitivity.filename.Rd index 609ae39654c..f65939c0242 100644 --- a/base/utils/man/sensitivity.filename.Rd +++ b/base/utils/man/sensitivity.filename.Rd @@ -26,6 +26,9 @@ sensitivity.filename( \item{all.var.yr}{logical: does ensemble include all vars and years? If FALSE, filename will include years and vars} +\item{pft}{name of PFT used for analysis. If NULL, assumes all +PFTs in run are used and does not add them to the filename} + \item{ensemble.id}{ensemble ID(s)} \item{variable}{variable(s) included in the ensemble.} diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index e2211086872..cbe7338c86c 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -107,11 +107,7 @@ See section 'Cross-references' in the 'Writing R Extensions' manual. * checking for missing documentation entries ... 
WARNING Undocumented code objects: - ‘logger.error’ ‘logger.getLevel’ ‘logger.info’ ‘logger.setLevel’ - ‘logger.setOutputFile’ ‘logger.setQuitOnSevere’ ‘logger.setWidth’ - ‘logger.severe’ ‘logger.warn’ ‘mstmip_local’ ‘mstmip_vars’ - ‘runModule.get.results’ ‘runModule.run.write.configs’ - ‘trait.dictionary’ + ‘mstmip_local’ ‘mstmip_vars’ ‘trait.dictionary’ Undocumented data sets: ‘mstmip_local’ ‘mstmip_vars’ ‘trait.dictionary’ All user-level objects in a package should have documentation entries. diff --git a/base/visualization/NAMESPACE b/base/visualization/NAMESPACE index 92bf09a7c1e..515e6e71b26 100644 --- a/base/visualization/NAMESPACE +++ b/base/visualization/NAMESPACE @@ -2,6 +2,8 @@ export(add_icon) export(ciEnvelope) +export(create.base.plot) export(map.output) +export(plot_data) export(plot_netcdf) export(vwReg) diff --git a/modules/uncertainty/NAMESPACE b/modules/uncertainty/NAMESPACE index db2cf7d815e..ddbcd133a30 100644 --- a/modules/uncertainty/NAMESPACE +++ b/modules/uncertainty/NAMESPACE @@ -7,6 +7,7 @@ export(get.coef.var) export(get.elasticity) export(get.ensemble.samples) export(get.parameter.samples) +export(get.results) export(get.sensitivity) export(input.ens.gen) export(plot_flux_uncertainty) @@ -19,6 +20,7 @@ export(read.ensemble.output) export(read.ensemble.ts) export(run.ensemble.analysis) export(run.sensitivity.analysis) +export(runModule.get.results) export(runModule.run.ensemble.analysis) export(runModule.run.sensitivity.analysis) export(sa.splinefun) From 00c50229f95de53ec6507e87880498b9f997ce18 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 4 Aug 2021 16:00:43 +0200 Subject: [PATCH 2179/2289] Rd files for moved functions --- base/visualization/man/create.base.plot.Rd | 22 +++++++++ base/visualization/man/dhist.Rd | 52 ++++++++++++++++++++++ base/visualization/man/iqr.Rd | 20 +++++++++ base/visualization/man/plot_data.Rd | 37 +++++++++++++++ base/visualization/man/theme_border.Rd | 48 ++++++++++++++++++++ modules/uncertainty/man/get.results.Rd | 27 +++++++++++ 6 files changed, 206 insertions(+) create mode 100644 base/visualization/man/create.base.plot.Rd create mode 100644 base/visualization/man/dhist.Rd create mode 100644 base/visualization/man/iqr.Rd create mode 100644 base/visualization/man/plot_data.Rd create mode 100644 base/visualization/man/theme_border.Rd create mode 100644 modules/uncertainty/man/get.results.Rd diff --git a/base/visualization/man/create.base.plot.Rd b/base/visualization/man/create.base.plot.Rd new file mode 100644 index 00000000000..6907bdd09dc --- /dev/null +++ b/base/visualization/man/create.base.plot.Rd @@ -0,0 +1,22 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/plots.R +\name{create.base.plot} +\alias{create.base.plot} +\title{Create Base Plot} +\usage{ +create.base.plot() +} +\value{ +empty ggplot object +} +\description{ +Creates empty ggplot object +} +\details{ +An empty base plot to which layers created by other functions +(\code{\link{plot_data}}, \code{\link[PEcAn.priors]{plot_prior.density}}, +\code{\link[PEcAn.priors]{plot_posterior.density}}) can be added. 
+} +\author{ +David LeBauer +} diff --git a/base/visualization/man/dhist.Rd b/base/visualization/man/dhist.Rd new file mode 100644 index 00000000000..64e7e54a3de --- /dev/null +++ b/base/visualization/man/dhist.Rd @@ -0,0 +1,52 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/plots.R +\name{dhist} +\alias{dhist} +\title{Diagonally Cut Histogram} +\usage{ +dhist( + x, + a = 5 * iqr(x), + nbins = grDevices::nclass.Sturges(x), + rx = range(x, na.rm = TRUE), + eps = 0.15, + xlab = "x", + plot = TRUE, + lab.spikes = TRUE +) +} +\arguments{ +\item{x}{is a numeric vector (the data)} + +\item{a}{is the scaling factor, default is 5 * IQR} + +\item{nbins}{is the number of bins, default is assigned by the Stuges method} + +\item{rx}{is the range used for the left of the left-most bin to the right of the right-most bin} + +\item{eps}{used to set artificial bound on min width / max height of bins as described in Denby and Mallows (2009) on page 24.} + +\item{xlab}{is label for the x axis} + +\item{plot}{= TRUE produces the plot, FALSE returns the heights, breaks and counts} + +\item{lab.spikes}{= TRUE labels the % of data in the spikes} +} +\value{ +list with two elements, heights of length n and breaks of length n+1 indicating the heights and break points of the histogram bars. +} +\description{ +Variable-width (dagonally cut) histogram +} +\details{ +When constructing a histogram, it is common to make all bars the same width. +One could also choose to make them all have the same area. +These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers. +We describe a compromise approach which avoids both of these defects. We regard the histogram as an exploratory device, rather than as an estimate of a density. +} +\references{ +Lorraine Denby, Colin Mallows. Journal of Computational and Graphical Statistics. March 1, 2009, 18(1): 21-31. doi:10.1198/jcgs.2009.0002. +} +\author{ +Lorraine Denby, Colin Mallows +} diff --git a/base/visualization/man/iqr.Rd b/base/visualization/man/iqr.Rd new file mode 100644 index 00000000000..647629e5678 --- /dev/null +++ b/base/visualization/man/iqr.Rd @@ -0,0 +1,20 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/plots.R +\name{iqr} +\alias{iqr} +\title{Interquartile range} +\usage{ +iqr(x) +} +\arguments{ +\item{x}{vector} +} +\value{ +numeric vector of length 2, with the 25th and 75th quantiles of input vector x. +} +\description{ +Calculate interquartile range +} +\details{ +Calculates the 25th and 75th quantiles given a vector x; used in function \link{dhist}. 
+}
+\author{
+David LeBauer
+}
diff --git a/base/visualization/man/dhist.Rd b/base/visualization/man/dhist.Rd
new file mode 100644
index 00000000000..64e7e54a3de
--- /dev/null
+++ b/base/visualization/man/dhist.Rd
@@ -0,0 +1,52 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{dhist}
+\alias{dhist}
+\title{Diagonally Cut Histogram}
+\usage{
+dhist(
+  x,
+  a = 5 * iqr(x),
+  nbins = grDevices::nclass.Sturges(x),
+  rx = range(x, na.rm = TRUE),
+  eps = 0.15,
+  xlab = "x",
+  plot = TRUE,
+  lab.spikes = TRUE
+)
+}
+\arguments{
+\item{x}{is a numeric vector (the data)}
+
+\item{a}{is the scaling factor, default is 5 * IQR}
+
+\item{nbins}{is the number of bins, default is assigned by the Sturges method}
+
+\item{rx}{is the range used for the left of the left-most bin to the right of the right-most bin}
+
+\item{eps}{used to set artificial bound on min width / max height of bins as described in Denby and Mallows (2009) on page 24.}
+
+\item{xlab}{is label for the x axis}
+
+\item{plot}{= TRUE produces the plot, FALSE returns the heights, breaks and counts}
+
+\item{lab.spikes}{= TRUE labels the \% of data in the spikes}
+}
+\value{
+list with two elements, heights of length n and breaks of length n+1 indicating the heights and break points of the histogram bars.
+}
+\description{
+Variable-width (diagonally cut) histogram
+}
+\details{
+When constructing a histogram, it is common to make all bars the same width.
+One could also choose to make them all have the same area.
+These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers.
+We describe a compromise approach which avoids both of these defects. We regard the histogram as an exploratory device, rather than as an estimate of a density.
+}
+\references{
+Lorraine Denby, Colin Mallows. Journal of Computational and Graphical Statistics. March 1, 2009, 18(1): 21-31. doi:10.1198/jcgs.2009.0002.
+}
+\author{
+Lorraine Denby, Colin Mallows
+}
diff --git a/base/visualization/man/iqr.Rd b/base/visualization/man/iqr.Rd
new file mode 100644
index 00000000000..647629e5678
--- /dev/null
+++ b/base/visualization/man/iqr.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/plots.R
+\name{iqr}
+\alias{iqr}
+\title{Interquartile range}
+\usage{
+iqr(x)
+}
+\arguments{
+\item{x}{vector}
+}
+\value{
+numeric vector of length 2, with the 25th and 75th quantiles of input vector x.
+}
+\description{
+Calculate interquartile range
+}
+\details{
+Calculates the 25th and 75th quantiles given a vector x; used in function \link{dhist}.
From 880cdbd54b4f8943a3bc83253eefd25afcf9a7a3 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 5 Aug 2021 09:07:02 +0530
Subject: [PATCH 2180/2289] Changes to function definition

Co-authored-by: istfer
---
 modules/data.atmosphere/R/download.ICOS.R | 7 +++++--
 modules/data.atmosphere/R/met2CF.ICOS.R   | 4 ++--
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R
index 85e051c5740..39c0a3a33ba 100644
--- a/modules/data.atmosphere/R/download.ICOS.R
+++ b/modules/data.atmosphere/R/download.ICOS.R
@@ -25,10 +25,10 @@ download.ICOS <-
            start_date,
            end_date,
            product,
-           overwrite = FALSE) {
+           overwrite = FALSE, ...) {
     download_file_flag <- TRUE
     extract_file_flag <- TRUE
-
+    sitename <- sub(".* \\((.*)\\)", "\\1", sitename)
     if (tolower(product) == "drought2018") {
       # construct output CSV file name
@@ -175,6 +175,9 @@ download.ICOS <-
         files = zipped_csv_name,
         junkpaths = TRUE,
         exdir = outfolder)
+      if (tolower(product) == "drought2018") {
+        output_file_name <- zipped_csv_name
+      }
     }

diff --git a/modules/data.atmosphere/R/met2CF.ICOS.R b/modules/data.atmosphere/R/met2CF.ICOS.R
index 48e9650de14..fe0cf7d3a62 100644
--- a/modules/data.atmosphere/R/met2CF.ICOS.R
+++ b/modules/data.atmosphere/R/met2CF.ICOS.R
@@ -39,7 +39,7 @@ met2CF.ICOS <-
            start_date,
            end_date,
            format,
-           overwrite = FALSE) {
+           overwrite = FALSE, ...) {
     results <- PEcAn.data.atmosphere::met2CF.csv(in.path,
                                                  in.prefix,
@@ -49,4 +49,4 @@ met2CF.ICOS <-
                                                  overwrite = overwrite)
     return(results)
-  }
\ No newline at end of file
+  }
From c6fb6017df540f7c21fbff8716d9fcc14577cbca Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 5 Aug 2021 09:38:05 +0530
Subject: [PATCH 2181/2289] add registration file

---
 .../inst/registration/register.ICOS.xml      | 10 ++++++++++
 modules/data.atmosphere/man/download.ICOS.Rd |  3 ++-
 modules/data.atmosphere/man/met2CF.ICOS.Rd   |  3 ++-
 3 files changed, 14 insertions(+), 2 deletions(-)
 create mode 100644 modules/data.atmosphere/inst/registration/register.ICOS.xml

diff --git a/modules/data.atmosphere/inst/registration/register.ICOS.xml b/modules/data.atmosphere/inst/registration/register.ICOS.xml
new file mode 100644
index 00000000000..2243512fa20
--- /dev/null
+++ b/modules/data.atmosphere/inst/registration/register.ICOS.xml
@@ -0,0 +1,10 @@
+
+ site
+
+ 1000000136
+ ICOS_ECOSYSTEM_HH
+ text/csv
+ csv
+
+
diff --git a/modules/data.atmosphere/man/download.ICOS.Rd b/modules/data.atmosphere/man/download.ICOS.Rd
index d698cb267f2..9ecc708b83f 100644
--- a/modules/data.atmosphere/man/download.ICOS.Rd
+++ b/modules/data.atmosphere/man/download.ICOS.Rd
@@ -10,7 +10,8 @@ download.ICOS(
   start_date,
   end_date,
   product,
-  overwrite = FALSE
+  overwrite = FALSE,
+  ...
 )
 }
 \arguments{
diff --git a/modules/data.atmosphere/man/met2CF.ICOS.Rd b/modules/data.atmosphere/man/met2CF.ICOS.Rd
index 4276e47d517..83c9a79e8d0 100644
--- a/modules/data.atmosphere/man/met2CF.ICOS.Rd
+++ b/modules/data.atmosphere/man/met2CF.ICOS.Rd
@@ -11,7 +11,8 @@ met2CF.ICOS(
   start_date,
   end_date,
   format,
-  overwrite = FALSE
+  overwrite = FALSE,
+  ...
) } \arguments{ From c6963b5c1736c92f395c8b916d7a1c7c35c98a6f Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 5 Aug 2021 13:02:36 +0300 Subject: [PATCH 2182/2289] Update modules/data.atmosphere/R/download.ICOS.R --- modules/data.atmosphere/R/download.ICOS.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R index 39c0a3a33ba..76dd1449425 100644 --- a/modules/data.atmosphere/R/download.ICOS.R +++ b/modules/data.atmosphere/R/download.ICOS.R @@ -220,7 +220,7 @@ download.ICOS <- formatname = character(rows), startdate = character(rows), enddate = character(rows), - dbfile.name = basename(output_file_name), + dbfile.name = substr(basename(output_file_name), 1, nchar(basename(output_file_name)) - 4), stringsAsFactors = FALSE ) From feae6e012fdcc2e7da94ee5aa74d930b37f6bd82 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 5 Aug 2021 07:06:04 -0400 Subject: [PATCH 2183/2289] fix typo --- modules/data.atmosphere/R/metgapfill.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/metgapfill.R b/modules/data.atmosphere/R/metgapfill.R index 756fb823bdb..fb0224d1aa4 100644 --- a/modules/data.atmosphere/R/metgapfill.R +++ b/modules/data.atmosphere/R/metgapfill.R @@ -220,7 +220,7 @@ metgapfill <- function(in.path, in.prefix, outfolder, start_date, end_date, lst Ts1 <- try(ncdf4::ncvar_get(nc = nc, varid = "soil_temperature"), silent = TRUE) if (!is.numeric(Ts1)) { - Lw <- missingarr + Ts1 <- missingarr myvar <- ncdf4::ncvar_def(name = "soil_temperature", units = "K", dim = xytdim) nc <- ncdf4::ncvar_add(nc = nc, v = myvar) ncdf4::ncvar_put(nc, varid = myvar, missingarr) From 7bc72f5b2e5d81d4f871ff8a50e57503be99fa0f Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 6 Aug 2021 12:12:56 +0530 Subject: [PATCH 2184/2289] add option to read met product type in download.raw.met.module --- modules/data.atmosphere/R/download.raw.met.module.R | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.raw.met.module.R b/modules/data.atmosphere/R/download.raw.met.module.R index c7334b243f4..3d7f9270fe8 100644 --- a/modules/data.atmosphere/R/download.raw.met.module.R +++ b/modules/data.atmosphere/R/download.raw.met.module.R @@ -118,7 +118,8 @@ lat.in = lat.in, lon.in = lon.in, pattern = met, - site_id = site.id + site_id = site.id, + product = input_met$product ) } else { From b232c353e17a5fb21e6ddc20a7e8a9fa545f7f5b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Fri, 6 Aug 2021 15:24:24 +0530 Subject: [PATCH 2185/2289] use same format for ETC --- modules/data.atmosphere/R/download.ICOS.R | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R index 76dd1449425..7c5ce8b34cc 100644 --- a/modules/data.atmosphere/R/download.ICOS.R +++ b/modules/data.atmosphere/R/download.ICOS.R @@ -177,6 +177,12 @@ download.ICOS <- exdir = outfolder) if (tolower(product) == "drought2018") { output_file_name <- zipped_csv_name + }else if (tolower(product) == "etc") { + # reformat file slightly so that both Drought2018 and ETC files can use the same format + tmp_csv <- read.csv(file.path(outfolder, output_file_name)) + new_tmp <- cbind(tmp_csv[, -which(colnames(tmp_csv)=="LW_OUT")], tmp_csv[, which(colnames(tmp_csv)=="LW_OUT")]) + colnames(new_tmp) <- c(colnames(tmp_csv)[-which(colnames(tmp_csv)=="LW_OUT")], "LW_OUT") + write.csv(new_tmp, file = file.path(outfolder, 
output_file_name), row.names = FALSE)
+    }
   }

From 113f7ef21edd860bdaf02d3dccca5f667379e64c Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 9 Aug 2021 10:13:54 +0530
Subject: [PATCH 2186/2289] add option to select icos products via web

---
 web/03-inputs.php   | 8 ++++++++
 web/04-runpecan.php | 3 ++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/web/03-inputs.php b/web/03-inputs.php
index a63b53b4031..3728b7359ac 100644
--- a/web/03-inputs.php
+++ b/web/03-inputs.php
@@ -168,6 +168,14 @@
         if (preg_match("/ \([A-Z]{2}-.*\)$/", $siteinfo["sitename"])) {
           $x['files'][] = array("id"=>"Fluxnet2015." . $type, "name"=>"Use Fluxnet2015");
         }
+        if (preg_match("/ \([A-Z]{2}-.*\)$/", $siteinfo["sitename"])) {
+          $product = ".drought2018";
+          $x['files'][] = array("id"=>"ICOS." . $type .$product, "name"=>"Use ICOS Drought 2018");
+        }
+        if (preg_match("/ \([A-Z]{2}-.*\)$/", $siteinfo["sitename"])) {
+          $product = ".etc";
+          $x['files'][] = array("id"=>"ICOS." . $type .$product, "name"=>"Use ICOS Ecosystem Archive");
+        }
         // check for NARR, this is not exact since it is a conical projection
         if ($siteinfo['lat'] > 1 && $siteinfo['lat'] < 85 && $siteinfo['lon'] < -68 && $siteinfo['lon'] > -145) {
           $x['files'][] = array("id"=>"NARR." . $type, "name"=>"Use NARR");
diff --git a/web/04-runpecan.php b/web/04-runpecan.php
index 2bbf990d70e..5f2f7609201 100644
--- a/web/04-runpecan.php
+++ b/web/04-runpecan.php
@@ -398,9 +398,10 @@
         if (is_numeric($val)) {
           fwrite($fh, " ${val}" . PHP_EOL);
         } else {
-          $parts=explode(".", $val, 2);
+          $parts=explode(".", $val, 3);
           fwrite($fh, " ${parts[0]}" . PHP_EOL);
           fwrite($fh, " ${parts[1]}" . PHP_EOL);
+          fwrite($fh, " ${parts[2]}" . PHP_EOL);
           if (isset($_REQUEST['fluxusername'])) {
             fwrite($fh, " ${_REQUEST['fluxusername']}" . PHP_EOL);
           }
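A hedged sketch of the `pecan.xml` met block that such a web selection feeds into: the `source` and `product` tags follow the book documentation added in the next patch, while the `<output>` value and the exact nesting are assumptions for illustration only.

```
<inputs>
  <met>
    <source>ICOS</source>
    <output>SIPNET</output>
    <product>drought2018</product>
  </met>
</inputs>
```

`download.raw.met.module` (patched earlier in this series) then forwards `input_met$product` on to the ICOS downloader.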
From ea30abfba55502590dc9a122731b855b8b67b68d Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 9 Aug 2021 10:14:44 +0530
Subject: [PATCH 2187/2289] add ICOS documentation to book

---
 .../06_data/01_meteorology.Rmd | 20 +++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/book_source/03_topical_pages/06_data/01_meteorology.Rmd b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
index ee5d5a5e10e..98c81df8382 100644
--- a/book_source/03_topical_pages/06_data/01_meteorology.Rmd
+++ b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
@@ -113,3 +113,23 @@
 Notes: It's important to know that the raw ERA5 tiles need to be downloaded and registered in the database first. Inside the `inst` folder in the data.atmosphere package there are R files for downloading and registering files in BETY. However, it assumes that you have registered and set up your API requirements. Check out how to set up your API [here](https://confluence.ecmwf.int/display/CKB/How+to+download+ERA5#HowtodownloadERA5-3-DownloadERA5datathroughtheCDSAPI). In the `inst` folder you can find two files (`ERA5_db_register.R` and `ERA5_USA_download.R`). If you set up your `ecmwf` account as explained in the link above, `ERA5_USA_download.R` will help you to download all the tiles with all the variables required for the pecan `extract.nc.ERA5` function to generate pecan standard met files. Besides installing the required packages for this file, it should work from top to bottom with no problem. After downloading the tiles, there is a simple script in `ERA5_db_register.R` which helps you register your tiles in BETY. `met.process` later on uses that entry to find the required tiles for extracting met data for your sites.
 There are important points about this file. 1- Make sure you don't change the site id in the script (which is the same as the `ParentSite` in the ERA5 registration xml file). 2- Make sure the start and end date in that script match the downloaded tiles. Set your `ERA5.files.path` to where you downloaded the tiles and then the rest of the script should be working fine.
+## ICOS Drought 2018
+
+Scale: site
+
+Resolution: 1 hr
+
+Availability: Varies by [site](https://meta.icos-cp.eu/collections/ueb_7FcyEcbG6y9-UGo5HUqV)
+
+Notes: To use this option, set source as `ICOS` and a product tag containing `drought2018` in `pecan.xml`
+
+## ICOS Ecosystem Archive
+
+Scale: site
+
+Resolution: 1 hr
+
+Availability: Varies by [site](https://meta.icos-cp.eu/collections/q4V7P1VLZevIrnlsW6SJO1Rz)
+
+Notes: To use this option, set source as `ICOS` and a product tag containing `etc` in `pecan.xml`
+
From e63663c53daf30c7a36bb6e9f7aa32077b40aca8 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 9 Aug 2021 11:37:58 +0530
Subject: [PATCH 2188/2289] small fix in documentations

---
 book_source/03_topical_pages/06_data/01_meteorology.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/book_source/03_topical_pages/06_data/01_meteorology.Rmd b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
index 98c81df8382..31d0d91bfd3 100644
--- a/book_source/03_topical_pages/06_data/01_meteorology.Rmd
+++ b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
@@ -121,7 +121,7 @@ Resolution: 1 hr
 
 Availability: Varies by [site](https://meta.icos-cp.eu/collections/ueb_7FcyEcbG6y9-UGo5HUqV)
 
-Notes: To use this option, set source as `ICOS` and a product tag containing `drought2018` in `pecan.xml`
+Notes: To use this option, set `source` as `ICOS` and a `product` tag containing `drought2018` in `pecan.xml`
 
 ## ICOS Ecosystem Archive
 
@@ -131,5 +131,5 @@ Resolution: 1 hr
 
 Availability: Varies by [site](https://meta.icos-cp.eu/collections/q4V7P1VLZevIrnlsW6SJO1Rz)
 
-Notes: To use this option, set source as `ICOS` and a product tag containing `etc` in `pecan.xml`
+Notes: To use this option, set `source` as `ICOS` and a `product` tag containing `etc` in `pecan.xml`
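A hedged sketch of calling the downloader these patches keep refining; the site name, paths, and dates are hypothetical placeholders, and the signature follows the diffs above:

```
# sketch only; "Hyytiala (FI-Hyy)" and the paths/dates are placeholders
PEcAn.data.atmosphere::download.ICOS(
  sitename   = "Hyytiala (FI-Hyy)",  # the "(CODE)" suffix is extracted by the sub() call above
  outfolder  = "/tmp/ICOS_FI-Hyy",
  start_date = "2018-01-01",
  end_date   = "2018-12-31",
  product    = "Drought2018"         # or "ETC" for the Ecosystem Archive
)
```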
From b25601d82d116c811b62647656b12d2a38d0b0 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 9 Aug 2021 12:53:34 +0530
Subject: [PATCH 2189/2289] change file in check

---
 modules/data.atmosphere/R/download.ICOS.R | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R
index 7c5ce8b34cc..72a0d015b8f 100644
--- a/modules/data.atmosphere/R/download.ICOS.R
+++ b/modules/data.atmosphere/R/download.ICOS.R
@@ -36,9 +36,7 @@ download.ICOS <-
       paste0(
         "FLX_",
         sitename,
-        "_FLUXNET2015_FULLSET_HH_",
-        as.character(format(as.Date(start_date), '%Y')),
-        "-2018_beta-3.csv"
+        "_FLUXNET2015_FULLSET_HH_"
       )
 
     # construct zip file name
@@ -75,12 +73,16 @@ download.ICOS <-
       PEcAn.logger::logger.severe("Invalid product. Product should be one of 'Drought2018', 'ETC' ")
     }
 
-    if (file.exists(file.path(outfolder, output_file_name)) &&
-        !overwrite) {
-      PEcAn.logger::logger.info("Output CSV file for the requested site already exists")
-      download_file_flag <- FALSE
-      extract_file_flag <- FALSE
+    output_file <- list.files(path = outfolder, patt= output_file_name)
+
+    if(length(output_file != 0)){
+      if (file.exists(file.path(outfolder, output_file)) &&
+          !overwrite) {
+        PEcAn.logger::logger.info("Output CSV file for the requested site already exists")
+        download_file_flag <- FALSE
+        extract_file_flag <- FALSE
+        output_file_name <- output_file
+      }
     }
From bddb9fd0697e30a0a65669433614256ac56e4e95 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 9 Aug 2021 12:59:23 +0530
Subject: [PATCH 2190/2289] remove redundant if check

---
 modules/data.atmosphere/R/download.ICOS.R | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R
index 72a0d015b8f..2e6b79ffe29 100644
--- a/modules/data.atmosphere/R/download.ICOS.R
+++ b/modules/data.atmosphere/R/download.ICOS.R
@@ -74,15 +74,11 @@ download.ICOS <-
   }

   output_file <- list.files(path = outfolder, patt= output_file_name)
-
-  if(length(output_file != 0)){
-    if (file.exists(file.path(outfolder, output_file)) &&
-        !overwrite) {
+  if(length(output_file != 0) && !overwrite){
       PEcAn.logger::logger.info("Output CSV file for the requested site already exists")
       download_file_flag <- FALSE
       extract_file_flag <- FALSE
       output_file_name <- output_file
-  }
   }

   if (extract_file_flag &&
From 665794f5cc5a5befb9488fbb61156473c8a8cc3b Mon Sep 17 00:00:00 2001
From: istfer
Date: Mon, 9 Aug 2021 11:34:01 +0300
Subject: [PATCH 2191/2289] Update book_source/03_topical_pages/06_data/01_meteorology.Rmd

---
 book_source/03_topical_pages/06_data/01_meteorology.Rmd | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/book_source/03_topical_pages/06_data/01_meteorology.Rmd b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
index 31d0d91bfd3..c482a33bf98 100644
--- a/book_source/03_topical_pages/06_data/01_meteorology.Rmd
+++ b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
@@ -117,7 +117,7 @@ In the `inst` folder you can find two files (`ERA5_db_register.R` and `ERA5_USA_
 
 Scale: site
 
-Resolution: 1 hr
+Resolution: 30 min
 
 Availability: Varies by [site](https://meta.icos-cp.eu/collections/ueb_7FcyEcbG6y9-UGo5HUqV)
@@ -132,4 +131,3 @@ Resolution: 1 hr
 Availability: Varies by [site](https://meta.icos-cp.eu/collections/q4V7P1VLZevIrnlsW6SJO1Rz)
 
 Notes: To use this option, set `source` as `ICOS` and a `product` tag containing `etc` in `pecan.xml`
-
From 216a71f35b967887aa5b39505c9f029fc00b7958 Mon Sep 17 00:00:00 2001
From: istfer
Date: Mon, 9 Aug 2021 11:34:16 +0300
Subject: [PATCH 2192/2289] Update book_source/03_topical_pages/06_data/01_meteorology.Rmd

---
 book_source/03_topical_pages/06_data/01_meteorology.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/book_source/03_topical_pages/06_data/01_meteorology.Rmd b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
index c482a33bf98..e912ac6545f 100644
--- a/book_source/03_topical_pages/06_data/01_meteorology.Rmd
+++ b/book_source/03_topical_pages/06_data/01_meteorology.Rmd
@@ -127,7 +127,7 @@ Notes: To use this option, set `source` as `ICOS` and a `product` tag containing
 
 Scale: site
 
-Resolution: 1 hr
+Resolution: 30 min
 
 Availability: Varies by [site](https://meta.icos-cp.eu/collections/q4V7P1VLZevIrnlsW6SJO1Rz)
 
 Notes: To use this option, set `source` as `ICOS` and a `product` tag containing `etc` in `pecan.xml`
From a40cd48199a7d4386dea83671b0e822602253313 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Mon, 9 Aug 2021 19:09:19 +0530
Subject: [PATCH 2193/2289] updated changelog

---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index fe44fd4995a..2987dda9f37 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -85,6 +85,7 @@ This is a major change:
 - PEcAn.DB gains new function `get_postgres_envvars`, which tries to look up connection parameters from Postgres environment variables (if they are set) and return them as a list ready to be passed to `db.open`. It should be especially useful when writing tests that need to run on systems with many different database configurations (#2541).
 - New shiny application to show database synchronization status (shiny/dbsync)
 - Ability to run with [MERRA-2 meteorology](https://gmao.gsfc.nasa.gov/reanalysis/MERRA-2/) (reanalysis product based on GEOS-5 model)
+- Ability to run with ICOS Ecosystem products

 ### Removed

From 9b997f9028747c183ed76cd593ec491b36e0eafe Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 9 Aug 2021 23:48:29 -0500
Subject: [PATCH 2194/2289] cleanup/fixes of API

- use NA instead of NULL in functions, openapi does not know what to do
  with NULL.
- don't pass in global_db_pool into function, this does not exist in
  openapi
- fix plumber calls, remove deprecated calls

---
 apps/api/R/auth.R             | 11 +++++-----
 apps/api/R/available-models.R | 16 ++++++--------
 apps/api/R/entrypoint.R       | 15 +++++++++----
 apps/api/R/formats.R          | 19 +++++++----------
 apps/api/R/general.R          |  5 ++---
 apps/api/R/get.file.R         |  9 ++++----
 apps/api/R/inputs.R           | 35 ++++++++++++++----------------
 apps/api/R/models.R           | 15 ++++++-------
 apps/api/R/pfts.R             | 15 ++++++-------
 apps/api/R/runs.R             | 40 ++++++++++++++++------------------
 apps/api/R/sites.R            | 10 ++++-----
 apps/api/R/submit.workflow.R  | 31 ++++++++++++---------------
 apps/api/R/workflows.R        | 27 ++++++++++-------------
 13 files changed, 112 insertions(+), 136 deletions(-)

diff --git a/apps/api/R/auth.R b/apps/api/R/auth.R
index 5273c6297ca..015ef9e314f 100644
--- a/apps/api/R/auth.R
+++ b/apps/api/R/auth.R
@@ -3,11 +3,11 @@ library(dplyr)
 #* Obtain the encrypted password for a user
 #* @param username Username, which is also the 'salt'
 #* @param password Unencrypted password
-#* @param secretkey Secret Key, which if null, is set to 'notasecret'
+#* @param secretkey Secret Key, which if NA, is set to 'notasecret'
 #* @return Encrypted password
 #* @author Tezan Sahu
-get_crypt_pass <- function(username, password, secretkey = NULL) {
-  secretkey <- if(is.null(secretkey)) "notasecret" else secretkey
+get_crypt_pass <- function(username, password, secretkey = NA) {
+  secretkey <- if(is.na(secretkey)) "notasecret" else secretkey
   dig <- secretkey
   salt <- username
   for (i in 1:10) {
@@ -25,12 +25,11 @@
 #* Check if the encrypted password for the user is valid
 #* @param username Username
 #* @param crypt_pass Encrypted password
-#* @param dbcon Database connection object. Default is global database pool.
#* @return TRUE if encrypted password is correct, else FALSE #* @author Tezan Sahu -validate_crypt_pass <- function(username, crypt_pass, dbcon = global_db_pool) { +validate_crypt_pass <- function(username, crypt_pass) { - res <- tbl(dbcon, "users") %>% + res <- tbl(global_db_pool, "users") %>% filter(login == username, crypted_password == crypt_pass) %>% collect() diff --git a/apps/api/R/available-models.R b/apps/api/R/available-models.R index 0c467b843a2..aa93e9db7ad 100644 --- a/apps/api/R/available-models.R +++ b/apps/api/R/available-models.R @@ -3,15 +3,13 @@ library(magrittr, include.only = "%>%") #' List models available on a specific machine #' #' @param machine_name Target machine hostname. Default = `"docker"` -#' @param machine_id Target machine ID. If `NULL` (default), deduced from hostname. -#' @param dbcon Database connection object. Default is global database pool. +#' @param machine_id Target machine ID. If `NA` (default), deduced from hostname. #' @return `data.frame` of information on available models #' @author Alexey Shiklomanov #* @get / -availableModels <- function(machine_name = "docker", machine_id = NULL, - dbcon = global_db_pool) { - if (is.null(machine_id)) { - machines <- dplyr::tbl(dbcon, "machines") +availableModels <- function(machine_name = "docker", machine_id = NA) { + if (is.na(machine_id)) { + machines <- dplyr::tbl(global_db_pool, "machines") machineid <- machines %>% dplyr::filter(hostname == !!machine_name) %>% dplyr::pull(id) @@ -22,12 +20,12 @@ availableModels <- function(machine_name = "docker", machine_id = NULL, stop("Found no machines with name ", machine_name) } } - dbfiles <- dplyr::tbl(dbcon, "dbfiles") %>% + dbfiles <- dplyr::tbl(global_db_pool, "dbfiles") %>% dplyr::filter(machine_id == !!machineid) modelfiles <- dbfiles %>% dplyr::filter(container_type == "Model") - models <- dplyr::tbl(dbcon, "models") - modeltypes <- dplyr::tbl(dbcon, "modeltypes") %>% + models <- dplyr::tbl(global_db_pool, "models") + modeltypes <- dplyr::tbl(global_db_pool, "modeltypes") %>% dplyr::select(modeltype_id = id, modeltype = name) modelfiles %>% diff --git a/apps/api/R/entrypoint.R b/apps/api/R/entrypoint.R index 23ff7911464..5f1d8a3fb94 100755 --- a/apps/api/R/entrypoint.R +++ b/apps/api/R/entrypoint.R @@ -21,6 +21,10 @@ source("general.R") #global_db_pool <- do.call(pool::dbPool, .bety_params) global_db_pool <- PEcAn.DB::betyConnect() +# redirect to trailing slash +plumber::options_plumber(trailingSlash=TRUE) + +# root router root <- plumber::Plumber$new() root$setSerializer(plumber::serializer_unboxed_json()) @@ -65,9 +69,12 @@ root$mount("/api/runs", runs_pr) runs_pr <- plumber::Plumber$new("available-models.R") root$mount("/api/availableModels", runs_pr) +# set swagger documentation +root$setApiSpec("../pecanapi-spec.yml") + +# enable debug +root$setDebug(TRUE) + # The API server is bound to 0.0.0.0 on port 8000 # The Swagger UI for the API draws its source from the pecanapi-spec.yml file -root$run(host="0.0.0.0", port=8000, debug=TRUE, swagger = function(pr, spec, ...) { - spec <- yaml::read_yaml("../pecanapi-spec.yml") - spec -}) +root$run(host="0.0.0.0", port=8000) diff --git a/apps/api/R/formats.R b/apps/api/R/formats.R index 2f3d6ecccd9..d805fc48e7c 100644 --- a/apps/api/R/formats.R +++ b/apps/api/R/formats.R @@ -2,17 +2,16 @@ library(dplyr) #' Retrieve the details of a PEcAn format, based on format_id #' @param format_id Format ID (character) -#' @param dbcon Database connection object. Default is global database pool. 
#' @return Format details #' @author Tezan Sahu #* @get / -getFormat <- function(format_id, res, dbcon = global_db_pool){ +getFormat <- function(format_id, res){ - Format <- tbl(dbcon, "formats") %>% + Format <- tbl(global_db_pool, "formats") %>% select(format_id = id, name, notes, header, mimetype_id) %>% filter(format_id == !!format_id) - Format <- tbl(dbcon, "mimetypes") %>% + Format <- tbl(global_db_pool, "mimetypes") %>% select(mimetype_id = id, mimetype = type_string) %>% inner_join(Format, by = "mimetype_id") %>% select(-mimetype_id) @@ -30,10 +29,10 @@ getFormat <- function(format_id, res, dbcon = global_db_pool){ response[colname] <- qry_res[colname] } - format_vars <- tbl(dbcon, "formats_variables") %>% + format_vars <- tbl(global_db_pool, "formats_variables") %>% select(name, unit, format_id, variable_id) %>% filter(format_id == !!format_id) - format_vars <- tbl(dbcon, "variables") %>% + format_vars <- tbl(global_db_pool, "variables") %>% select(variable_id = id, description, units) %>% inner_join(format_vars, by="variable_id") %>% mutate(unit = ifelse(unit %in% "", units, unit)) %>% @@ -51,20 +50,18 @@ getFormat <- function(format_id, res, dbcon = global_db_pool){ #' @param format_name Format name search string (character) #' @param mimetype Mime type search string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search -#' @param dbcon Database connection object. Default is global database pool. #' @return Formats subset matching the model search string #' @author Tezan Sahu #* @get / -searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res, - dbcon = global_db_pool){ +searchFormats <- function(format_name="", mimetype="", ignore_case=TRUE, res){ format_name <- URLdecode(format_name) mimetype <- URLdecode(mimetype) - Formats <- tbl(dbcon, "formats") %>% + Formats <- tbl(global_db_pool, "formats") %>% select(format_id = id, format_name=name, mimetype_id) %>% filter(grepl(!!format_name, format_name, ignore.case=ignore_case)) - Formats <- tbl(dbcon, "mimetypes") %>% + Formats <- tbl(global_db_pool, "mimetypes") %>% select(mimetype_id = id, mimetype = type_string) %>% inner_join(Formats, by = "mimetype_id") %>% filter(grepl(!!mimetype, mimetype, ignore.case=ignore_case)) %>% diff --git a/apps/api/R/general.R b/apps/api/R/general.R index fca4d0f8f72..5f5c9ec36b2 100644 --- a/apps/api/R/general.R +++ b/apps/api/R/general.R @@ -10,15 +10,14 @@ ping <- function(req){ #* Function to get the status & basic information about the Database Host #* @return Details about the database host #* @author Tezan Sahu -status <- function(dbcon = global_db_pool) { - +status <- function() { ## helper function to obtain environment variables get_env_var = function (item, default = "unknown") { value = Sys.getenv(item) if (value == "") default else value } - res <- list(host_details = PEcAn.DB::dbHostInfo(dbcon)) + res <- list(host_details = PEcAn.DB::dbHostInfo(global_db_pool)) res$host_details$authentication_required = get_env_var("AUTH_REQ") res$pecan_details <- list( diff --git a/apps/api/R/get.file.R b/apps/api/R/get.file.R index 6d5f345cd12..1d1fdeda9c3 100644 --- a/apps/api/R/get.file.R +++ b/apps/api/R/get.file.R @@ -5,10 +5,9 @@ library(dplyr) #' @param filepath Absolute path to file on target machine #' @param userid User ID associated with file (typically the same as the user #' running the corresponding workflow) -#' @param dbcon Database connection object. Default is global database pool. 
#' @return Raw binary file contents
#' @author Tezan Sahu
-get.file <- function(filepath, userid, dbcon = global_db_pool) {
+get.file <- function(filepath, userid) {
   # Check if the file path is valid
   if(! file.exists(filepath)){
     return(list(status = "Error", message = "File not found"))
   }

   if(Sys.getenv("AUTH_REQ") == TRUE) {

-    Run <- tbl(dbcon, "runs") %>%
+    Run <- tbl(global_db_pool, "runs") %>%
       filter(id == !!run_id)
-    Run <- tbl(dbcon, "ensembles") %>%
+    Run <- tbl(global_db_pool, "ensembles") %>%
       select(ensemble_id=id, workflow_id) %>%
       full_join(Run, by="ensemble_id") %>%
       filter(id == !!run_id)
-    user_id <- tbl(dbcon, "workflows") %>%
+    user_id <- tbl(global_db_pool, "workflows") %>%
       select(workflow_id=id, user_id) %>%
       full_join(Run, by="workflow_id") %>%
       filter(id == !!run_id) %>% pull(user_id)
diff --git a/apps/api/R/inputs.R b/apps/api/R/inputs.R
index 110fe1bddec..d1d9b270c20 100644
--- a/apps/api/R/inputs.R
+++ b/apps/api/R/inputs.R
@@ -5,68 +5,66 @@ library(dplyr)
 #' @param site_id Site Id (character)
 #' @param offset
 #' @param limit
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return Information about Inputs based on model & site
 #' @author Tezan Sahu
 #* @get /
-searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_id=NULL, offset=0, limit=50, res,
-                         dbcon = global_db_pool){
+searchInputs <- function(req, model_id=NA, site_id=NA, format_id=NA, host_id=NA, offset=0, limit=50, res){
   if (! limit %in% c(10, 20, 50, 100, 500)) {
     res$status <- 400
     return(list(error = "Invalid value for parameter"))
   }

-  inputs <- tbl(dbcon, "inputs") %>%
+  inputs <- tbl(global_db_pool, "inputs") %>%
     select(input_name=name, id, site_id, format_id, start_date, end_date)

-  inputs <- tbl(dbcon, "dbfiles") %>%
+  inputs <- tbl(global_db_pool, "dbfiles") %>%
     select(file_name, file_path, container_type, id=container_id, machine_id) %>%
     inner_join(inputs, by = "id") %>%
     filter(container_type == 'Input') %>%
     select(-container_type)

-  inputs <- tbl(dbcon, "machines") %>%
+  inputs <- tbl(global_db_pool, "machines") %>%
     select(hostname, machine_id=id) %>%
     inner_join(inputs, by='machine_id')

-  inputs <- tbl(dbcon, "formats") %>%
+  inputs <- tbl(global_db_pool, "formats") %>%
     select(format_id = id, format_name = name, mimetype_id) %>%
     inner_join(inputs, by='format_id')

-  inputs <- tbl(dbcon, "mimetypes") %>%
+  inputs <- tbl(global_db_pool, "mimetypes") %>%
     select(mimetype_id = id, mimetype = type_string) %>%
     inner_join(inputs, by='mimetype_id') %>%
     select(-mimetype_id)

-  inputs <- tbl(dbcon, "sites") %>%
+  inputs <- tbl(global_db_pool, "sites") %>%
     select(site_id = id, sitename) %>%
     inner_join(inputs, by='site_id')

-  if(! is.null(model_id)) {
-    inputs <- tbl(dbcon, "modeltypes_formats") %>%
+  if(! is.na(model_id)) {
+    inputs <- tbl(global_db_pool, "modeltypes_formats") %>%
       select(tag, modeltype_id, format_id, input) %>%
       inner_join(inputs, by='format_id') %>%
       filter(input) %>%
       select(-input)

-    inputs <- tbl(dbcon, "models") %>%
+    inputs <- tbl(global_db_pool, "models") %>%
       select(model_id = id, modeltype_id, model_name, revision) %>%
       inner_join(inputs, by='modeltype_id') %>%
       filter(model_id == !!model_id) %>%
       select(-modeltype_id, -model_id)
   }

-  if(! is.null(site_id)) {
+  if(! is.na(site_id)) {
     inputs <- inputs %>%
       filter(site_id == !!site_id)
   }

-  if(! is.null(format_id)) {
+  if(! is.na(format_id)) {
     inputs <- inputs %>%
       filter(format_id == !!format_id)
   }

-  if(! 
is.null(host_id)) { + if(! is.na(host_id)) { inputs <- inputs %>% filter(machine_id == !!host_id) } @@ -130,18 +128,17 @@ searchInputs <- function(req, model_id=NULL, site_id=NULL, format_id=NULL, host_ #' @param id Input id (character) #' @param filename Optional filename specified if the id points to a folder instead of file (character) #' If this is passed with an id that actually points to a file, this name will be ignored -#' @param dbcon Database connection object. Default is global database pool. #' @return Input file specified by user #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get / -downloadInput <- function(input_id, filename="", req, res, dbcon = global_db_pool){ - db_hostid <- PEcAn.DB::dbHostInfo(dbcon)$hostid +downloadInput <- function(input_id, filename="", req, res){ + db_hostid <- PEcAn.DB::dbHostInfo(global_db_pool)$hostid # This is just for temporary testing due to the existing issue in dbHostInfo() db_hostid <- ifelse(db_hostid == 99, 99000000001, db_hostid) - input <- tbl(dbcon, "dbfiles") %>% + input <- tbl(global_db_pool, "dbfiles") %>% select(file_name, file_path, container_id, machine_id, container_type) %>% filter(machine_id == !!db_hostid) %>% filter(container_type == "Input") %>% diff --git a/apps/api/R/models.R b/apps/api/R/models.R index 22ddc2275d4..1a74b54210e 100644 --- a/apps/api/R/models.R +++ b/apps/api/R/models.R @@ -2,17 +2,16 @@ library(dplyr) #' Retrieve the details of a PEcAn model, based on model_id #' @param model_id Model ID (character) -#' @param dbcon Database connection object. Default is global database pool. #' @return Model details #' @author Tezan Sahu #* @get / -getModel <- function(model_id, res, dbcon = global_db_pool){ +getModel <- function(model_id, res){ - Model <- tbl(dbcon, "models") %>% + Model <- tbl(global_db_pool, "models") %>% select(model_id = id, model_name, revision, modeltype_id) %>% filter(model_id == !!model_id) - Model <- tbl(dbcon, "modeltypes") %>% + Model <- tbl(global_db_pool, "modeltypes") %>% select(modeltype_id = id, model_type = name) %>% inner_join(Model, by = "modeltype_id") @@ -29,7 +28,7 @@ getModel <- function(model_id, res, dbcon = global_db_pool){ response[colname] <- qry_res[colname] } - inputs_req <- tbl(dbcon, "modeltypes_formats") %>% + inputs_req <- tbl(global_db_pool, "modeltypes_formats") %>% filter(modeltype_id == bit64::as.integer64(qry_res$modeltype_id)) %>% select(input=tag, required) %>% collect() response$inputs <- jsonlite::fromJSON(gsub('(\")', '"', jsonlite::toJSON(inputs_req))) @@ -43,16 +42,14 @@ getModel <- function(model_id, res, dbcon = global_db_pool){ #' @param model_name Model name search string (character) #' @param revision Model version/revision search string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search -#' @param dbcon Database connection object. Default is global database pool. 
#' @return Model subset matching the model search string
#' @author Tezan Sahu
#* @get /
-searchModels <- function(model_name="", revision="", ignore_case=TRUE, res,
-                         dbcon = global_db_pool){
+searchModels <- function(model_name="", revision="", ignore_case=TRUE, res){
   model_name <- URLdecode(model_name)
   revision <- URLdecode(revision)

-  Models <- tbl(dbcon, "models") %>%
+  Models <- tbl(global_db_pool, "models") %>%
     select(model_id = id, model_name, revision) %>%
     filter(grepl(!!model_name, model_name, ignore.case=ignore_case)) %>%
     filter(grepl(!!revision, revision, ignore.case=ignore_case)) %>%
diff --git a/apps/api/R/pfts.R b/apps/api/R/pfts.R
index 607ce7d66a3..732340759aa 100644
--- a/apps/api/R/pfts.R
+++ b/apps/api/R/pfts.R
@@ -2,17 +2,16 @@ library(dplyr)
 #' Retrieve the details of a PEcAn PFT, based on pft_id
 #' @param pft_id PFT ID (character)
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return PFT details
 #' @author Tezan Sahu
 #* @get /<pft_id>
-getPfts <- function(pft_id, res, dbcon = global_db_pool){
+getPfts <- function(pft_id, res){

-  pft <- tbl(dbcon, "pfts") %>%
+  pft <- tbl(global_db_pool, "pfts") %>%
     select(pft_id = id, pft_name = name, definition, pft_type, modeltype_id) %>%
     filter(pft_id == !!pft_id)

-  pft <- tbl(dbcon, "modeltypes") %>%
+  pft <- tbl(global_db_pool, "modeltypes") %>%
     select(modeltype_id = id, model_type = name) %>%
     inner_join(pft, by = "modeltype_id")

@@ -42,12 +41,10 @@
 #' @param pft_type PFT type (either 'plant' or 'cultivar') (character)
 #' @param model_type Model type search string (character)
 #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return PFT subset matching the search criteria
 #' @author Tezan Sahu
 #* @get /
-searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE, res,
-                       dbcon = global_db_pool){
+searchPfts <- function(pft_name="", pft_type="", model_type="", ignore_case=TRUE, res){
   pft_name <- URLdecode(pft_name)
   pft_type <- URLdecode(pft_type)
   model_type <- URLdecode(model_type)
@@ -57,10 +54,10 @@
     return(list(error = "Invalid pft_type"))
   }

-  pfts <- tbl(dbcon, "pfts") %>%
+  pfts <- tbl(global_db_pool, "pfts") %>%
     select(pft_id = id, pft_name = name, pft_type, modeltype_id)

-  pfts <- tbl(dbcon, "modeltypes") %>%
+  pfts <- tbl(global_db_pool, "modeltypes") %>%
     select(modeltype_id = id, model_type = name) %>%
     inner_join(pfts, by = "modeltype_id")

diff --git a/apps/api/R/runs.R b/apps/api/R/runs.R
index 8ed18cfeed1..c9d0acb3b4a 100644
--- a/apps/api/R/runs.R
+++ b/apps/api/R/runs.R
@@ -5,25 +5,23 @@ source("get.file.R")
 #' @param workflow_id Workflow id (character)
 #' @param offset
 #' @param limit
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return List of runs (belonging to a particular workflow)
 #' @author Tezan Sahu
 #* @get /
-getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res,
-                    dbcon = global_db_pool){
+getRuns <- function(req, workflow_id=NA, offset=0, limit=50, res){
   if (! 
limit %in% c(10, 20, 50, 100, 500)) { res$status <- 400 return(list(error = "Invalid value for parameter")) } - Runs <- tbl(dbcon, "runs") %>% + Runs <- tbl(global_db_pool, "runs") %>% select(id, model_id, site_id, parameter_list, ensemble_id, start_time, finish_time) - Runs <- tbl(dbcon, "ensembles") %>% + Runs <- tbl(global_db_pool, "ensembles") %>% select(runtype, ensemble_id=id, workflow_id) %>% full_join(Runs, by="ensemble_id") - if(! is.null(workflow_id)){ + if(! is.na(workflow_id)){ Runs <- Runs %>% filter(workflow_id == !!workflow_id) } @@ -84,12 +82,12 @@ getRuns <- function(req, workflow_id=NULL, offset=0, limit=50, res, #' @return Details of requested run #' @author Tezan Sahu #* @get / -getRunDetails <- function(req, run_id, res, dbcon = global_db_pool){ +getRunDetails <- function(req, run_id, res){ - Runs <- tbl(dbcon, "runs") %>% + Runs <- tbl(global_db_pool, "runs") %>% select(-outdir, -outprefix, -setting, -created_at, -updated_at) - Runs <- tbl(dbcon, "ensembles") %>% + Runs <- tbl(global_db_pool, "ensembles") %>% select(runtype, ensemble_id=id, workflow_id) %>% full_join(Runs, by="ensemble_id") %>% filter(id == !!run_id) @@ -97,7 +95,7 @@ getRunDetails <- function(req, run_id, res, dbcon = global_db_pool){ qry_res <- Runs %>% collect() if(Sys.getenv("AUTH_REQ") == TRUE){ - user_id <- tbl(dbcon, "workflows") %>% + user_id <- tbl(global_db_pool, "workflows") %>% select(workflow_id=id, user_id) %>% full_join(Runs, by="workflow_id") %>% filter(id == !!run_id) %>% pull(user_id) @@ -144,17 +142,16 @@ getRunDetails <- function(req, run_id, res, dbcon = global_db_pool){ #' Get the input file specified by user for a run #' @param run_id Run id (character) #' @param filename Name of the input file (character) -#' @param dbcon Database connection object. Default is global database pool. 
#' @return Input file specified by user for the run #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get //input/ -getRunInputFile <- function(req, run_id, filename, res, dbcon = global_db_pool){ +getRunInputFile <- function(req, run_id, filename, res){ - Run <- tbl(dbcon, "runs") %>% + Run <- tbl(global_db_pool, "runs") %>% filter(id == !!run_id) - workflow_id <- tbl(dbcon, "ensembles") %>% + workflow_id <- tbl(global_db_pool, "ensembles") %>% select(ensemble_id=id, workflow_id) %>% full_join(Run, by="ensemble_id") %>% filter(id == !!run_id) %>% @@ -185,12 +182,12 @@ getRunInputFile <- function(req, run_id, filename, res, dbcon = global_db_pool){ #' @author Tezan Sahu #* @serializer contentType list(type="application/octet-stream") #* @get //output/ -getRunOutputFile <- function(req, run_id, filename, res, dbcon = global_db_pool){ +getRunOutputFile <- function(req, run_id, filename, res){ - Run <- tbl(dbcon, "runs") %>% + Run <- tbl(global_db_pool, "runs") %>% filter(id == !!run_id) - workflow_id <- tbl(dbcon, "ensembles") %>% + workflow_id <- tbl(global_db_pool, "ensembles") %>% select(ensemble_id=id, workflow_id) %>% full_join(Run, by="ensemble_id") %>% filter(id == !!run_id) %>% @@ -226,20 +223,19 @@ getRunOutputFile <- function(req, run_id, filename, res, dbcon = global_db_pool) #* @get //graph// #* @serializer contentType list(type='image/png') -plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, height=600, res, - dbcon = global_db_pool) { +plotResults <- function(req, run_id, year, y_var, x_var="time", width=800, height=600, res) { # Get workflow_id for the run - Run <- tbl(dbcon, "runs") %>% + Run <- tbl(global_db_pool, "runs") %>% filter(id == !!run_id) - workflow_id <- tbl(dbcon, "ensembles") %>% + workflow_id <- tbl(global_db_pool, "ensembles") %>% select(ensemble_id=id, workflow_id) %>% full_join(Run, by="ensemble_id") %>% filter(id == !!run_id) %>% pull(workflow_id) if(Sys.getenv("AUTH_REQ") == TRUE){ - user_id <- tbl(dbcon, "workflows") %>% + user_id <- tbl(global_db_pool, "workflows") %>% select(id, user_id) %>% filter(id == !!workflow_id) %>% pull(user_id) diff --git a/apps/api/R/sites.R b/apps/api/R/sites.R index c62bde56618..09b6abba4b2 100644 --- a/apps/api/R/sites.R +++ b/apps/api/R/sites.R @@ -2,13 +2,12 @@ library(dplyr) #' Retrieve the details of a PEcAn site, based on site_id #' @param site_id Site ID (character) -#' @param dbcon Database connection object. Default is global database pool. #' @return Site details #' @author Tezan Sahu #* @get / -getSite <- function(site_id, res, dbcon = global_db_pool){ +getSite <- function(site_id, res){ - site <- tbl(dbcon, "sites") %>% + site <- tbl(global_db_pool, "sites") %>% select(-created_at, -updated_at, -user_id, -geometry) %>% filter(id == !!site_id) @@ -34,14 +33,13 @@ getSite <- function(site_id, res, dbcon = global_db_pool){ #' Search for PEcAn sites containing wildcards for filtering #' @param sitename Site name search string (character) #' @param ignore_case Logical. If `TRUE` (default) use case-insensitive search otherwise, use case-sensitive search -#' @param dbcon Database connection object. Default is global database pool. 
#' @return Site subset matching the site search string #' @author Tezan Sahu #* @get / -searchSite <- function(sitename="", ignore_case=TRUE, res, dbcon = global_db_pool){ +searchSite <- function(sitename="", ignore_case=TRUE, res){ sitename <- URLdecode(sitename) - sites <- tbl(dbcon, "sites") %>% + sites <- tbl(global_db_pool, "sites") %>% select(id, sitename) %>% filter(grepl(!!sitename, sitename, ignore.case=ignore_case)) %>% arrange(id) diff --git a/apps/api/R/submit.workflow.R b/apps/api/R/submit.workflow.R index 4b9f2047c87..ace1c2896f7 100644 --- a/apps/api/R/submit.workflow.R +++ b/apps/api/R/submit.workflow.R @@ -32,10 +32,9 @@ submit.workflow.json <- function(workflowJsonString, userDetails){ #* Submit a workflow (converted to list) #* @param workflowList Workflow parameters expressed as a list #* @param userDetails List containing userid & username -#* @param dbcon Database connection object. Default is global database pool. #* @return ID & status of the submitted workflow #* @author Tezan Sahu -submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_pool) { +submit.workflow.list <- function(workflowList, userDetails) { # Set database details workflowList$database <- list( @@ -54,9 +53,9 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po } # Get model revision and type for the RabbitMQ queue - model_info <- dplyr::tbl(dbcon, "models") %>% + model_info <- dplyr::tbl(global_db_pool, "models") %>% dplyr::filter(id == !!workflowList$model$id) %>% - dplyr::inner_join(dplyr::tbl(dbcon, "modeltypes"), + dplyr::inner_join(dplyr::tbl(global_db_pool, "modeltypes"), by = c("modeltype_id" = "id")) %>% dplyr::collect() @@ -76,7 +75,7 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po model_revision <- model_info$revision # Fix RabbitMQ details - hostInfo <- PEcAn.DB::dbHostInfo(dbcon) + hostInfo <- PEcAn.DB::dbHostInfo(global_db_pool) workflowList$host <- list( rabbitmq = list( uri = Sys.getenv("RABBITMQ_URI", "amqp://guest:guest@localhost/%2F"), @@ -141,14 +140,13 @@ submit.workflow.list <- function(workflowList, userDetails, dbcon = global_db_po #* Insert the workflow into workflows table to obtain the workflow_id #* @param workflowList List containing the workflow details -#* @param dbcon Database connection object. Default is global database pool. #* @return ID of the submitted workflow #* @author Tezan Sahu -insert.workflow <- function(workflowList, dbcon = global_db_pool){ +insert.workflow <- function(workflowList){ model_id <- workflowList$model$id if(is.null(model_id)){ - model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), dbcon) + model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), global_db_pool) } start_time <- Sys.time() @@ -172,9 +170,9 @@ insert.workflow <- function(workflowList, dbcon = global_db_pool){ # NOTE: Have to "checkout" a connection from the pool here to work with # dbSendStatement and friends. We make sure to return the connection when the # function exits (successfully or not). 
- #con <- pool::poolCheckout(dbcon) + #con <- pool::poolCheckout(global_db_pool) #on.exit(pool::poolReturn(con), add = TRUE) - con <- dbcon + con <- global_db_pool insert_query <- glue::glue( "INSERT INTO workflows ", @@ -207,9 +205,8 @@ insert.workflow <- function(workflowList, dbcon = global_db_pool){ #* Insert the workflow into attributes table #* @param workflowList List containing the workflow details -#* @param dbcon Database connection object. Default is global database pool. #* @author Tezan Sahu -insert.attribute <- function(workflowList, dbcon = global_db_pool){ +insert.attribute <- function(workflowList){ # Create an array of PFTs pfts <- c() @@ -220,7 +217,7 @@ insert.attribute <- function(workflowList, dbcon = global_db_pool){ # Obtain the model_id model_id <- workflowList$model$id if(is.null(model_id)){ - model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), dbcon) + model_id <- PEcAn.DB::get.id("models", c("model_name", "revision"), c(workflowList$model$type, workflowList$model$revision), global_db_pool) } # Fill in the properties @@ -231,12 +228,12 @@ insert.attribute <- function(workflowList, dbcon = global_db_pool){ runs = if(is.null(workflowList$ensemble$size)) 1 else workflowList$ensemble$size, modelid = model_id, siteid = bit64::as.integer64(workflowList$run$site$id), - sitename = dplyr::tbl(dbcon, "sites") %>% filter(id == bit64::as.integer64(workflowList$run$site$id)) %>% pull(sitename), + sitename = dplyr::tbl(global_db_pool, "sites") %>% filter(id == bit64::as.integer64(workflowList$run$site$id)) %>% pull(sitename), #sitegroupid <- lat = if(is.null(workflowList$run$site$lat)) "" else workflowList$run$site$lat, lon = if(is.null(workflowList$run$site$lon)) "" else workflowList$run$site$lon, email = if(is.na(workflowList$info$userid) || workflowList$info$userid == -1) "" else - dplyr::tbl(dbcon, "users") %>% filter(id == bit64::as.integer64(workflowList$info$userid)) %>% pull(email), + dplyr::tbl(global_db_pool, "users") %>% filter(id == bit64::as.integer64(workflowList$info$userid)) %>% pull(email), notes = if(is.null(workflowList$info$notes)) "" else workflowList$info$notes, variables = workflowList$ensemble$variable ) @@ -261,9 +258,9 @@ insert.attribute <- function(workflowList, dbcon = global_db_pool){ # Insert properties into attributes table value_json <- as.character(jsonlite::toJSON(properties, auto_unbox = TRUE)) - # con <- pool::poolCheckout(dbcon) + # con <- pool::poolCheckout(global_db_pool) # on.exit(pool::poolReturn(con), add = TRUE) - con <- dbcon + con <- global_db_pool res <- DBI::dbSendStatement(con, "INSERT INTO attributes (container_type, container_id, value) VALUES ($1, $2, $3)", list("workflows", bit64::as.integer64(workflowList$workflow$id), value_json)) diff --git a/apps/api/R/workflows.R b/apps/api/R/workflows.R index 1f3d7b3eac4..44cb9196f18 100644 --- a/apps/api/R/workflows.R +++ b/apps/api/R/workflows.R @@ -6,26 +6,24 @@ source("submit.workflow.R") #' @param site_id Site id (character) #' @param offset #' @param limit Max number of workflows to retrieve (default = 50) -#' @param dbcon Database connection object. Default is global database pool. #' @return List of workflows (using a particular model & site, if specified) #' @author Tezan Sahu #* @get / -getWorkflows <- function(req, model_id=NULL, site_id=NULL, offset=0, limit=50, res, - dbcon = global_db_pool){ +getWorkflows <- function(req, model_id=NA, site_id=NA, offset=0, limit=50, res){ if (! 
limit %in% c(10, 20, 50, 100, 500)) {
     res$status <- 400
     return(list(error = "Invalid value for parameter"))
   }

-  Workflow <- tbl(dbcon, "workflows") %>%
+  Workflow <- tbl(global_db_pool, "workflows") %>%
     select(-created_at, -updated_at, -params, -advanced_edit, -notes)

-  if (!is.null(model_id)) {
+  if (!is.na(model_id)) {
     Workflow <- Workflow %>%
       filter(model_id == !!model_id)
   }

-  if (!is.null(site_id)) {
+  if (!is.na(site_id)) {
     Workflow <- Workflow %>%
       filter(site_id == !!site_id)
   }
@@ -110,15 +108,14 @@ submitWorkflow <- function(req, res){

 #' Get the details of the workflow specified by the id
 #' @param id Workflow id (character)
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return Details of requested workflow
 #' @author Tezan Sahu
 #* @get /<id>
-getWorkflowDetails <- function(id, req, res, dbcon = global_db_pool){
-  Workflow <- tbl(dbcon, "workflows") %>%
+getWorkflowDetails <- function(id, req, res){
+  Workflow <- tbl(global_db_pool, "workflows") %>%
     select(id, model_id, site_id, folder, hostname, user_id)

-  Workflow <- tbl(dbcon, "attributes") %>%
+  Workflow <- tbl(global_db_pool, "attributes") %>%
     select(id = container_id, properties = value) %>%
     full_join(Workflow, by = "id") %>%
     filter(id == !!id)
@@ -164,12 +161,11 @@

 #' Get the status of the workflow specified by the id
 #' @param id Workflow id (character)
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return Details of requested workflow
 #' @author Tezan Sahu
 #* @get /<id>/status
-getWorkflowStatus <- function(req, id, res, dbcon = global_db_pool){
-  Workflow <- tbl(dbcon, "workflows") %>%
+getWorkflowStatus <- function(req, id, res){
+  Workflow <- tbl(global_db_pool, "workflows") %>%
     select(id, user_id) %>%
     filter(id == !!id)
@@ -198,13 +194,12 @@

 #' Get a specified file of the workflow specified by the id
 #' @param id Workflow id (character)
-#' @param dbcon Database connection object. Default is global database pool.
 #' @return Details of requested workflow
 #' @author Tezan Sahu
 #* @serializer contentType list(type="application/octet-stream")
 #* @get /<id>/file/<filename>
-getWorkflowFile <- function(req, id, filename, res, dbcon = global_db_pool){
-  Workflow <- tbl(dbcon, "workflows") %>%
+getWorkflowFile <- function(req, id, filename, res){
+  Workflow <- tbl(global_db_pool, "workflows") %>%
     select(id, user_id) %>%
     filter(id == !!id)

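A quick way to sanity-check this cleanup is to hit the endpoints directly. A hedged sketch, assuming the API is served on port 8000 as configured in entrypoint.R above; the query values are placeholders:

```
# sketch only; requires the httr package and a running API instance
res <- httr::GET("http://localhost:8000/api/availableModels/")
httr::content(res)
# list endpoints now default their filters to NA, so filters can simply be omitted:
httr::content(httr::GET("http://localhost:8000/api/workflows/?limit=50"))
```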
From 85bd5984128b7dd13000683f458ed843cfe8f38a Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 10 Aug 2021 07:36:36 -0500
Subject: [PATCH 2195/2289] set GITHUB_PAT from GITHUB_TOKEN

---
 .github/workflows/book.yml    | 2 ++
 .github/workflows/ci.yml      | 8 ++++++++
 .github/workflows/depends.yml | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index 25301005721..a10e89c89a2 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -14,6 +14,8 @@ on:
 jobs:
   bookdown:
     runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     container:
       image: pecan/depends:R4.0.3
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 47673593c7a..1e700659977 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -30,6 +30,8 @@ jobs:
   build:
     if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build')
     runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     strategy:
       fail-fast: false
@@ -67,6 +69,8 @@ jobs:
   test:
     if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build')
     runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     strategy:
       fail-fast: false
@@ -111,6 +115,8 @@ jobs:
   check:
     if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build')
     runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     strategy:
       fail-fast: false
@@ -162,6 +168,8 @@ jobs:
   sipnet:
     if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build')
     runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     strategy:
       fail-fast: false
diff --git a/.github/workflows/depends.yml b/.github/workflows/depends.yml
index 0d31bc8da67..6e06e33962b 100644
--- a/.github/workflows/depends.yml
+++ b/.github/workflows/depends.yml
@@ -19,6 +19,8 @@ jobs:
   depends:
     if: github.repository == 'PecanProject/pecan'
     runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     strategy:
       fail-fast: false
From 3a0f641dac7ae71e60dcc2a9ed1f6021ad3368 Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Tue, 10 Aug 2021 07:40:49 -0500
Subject: [PATCH 2196/2289] env already exists

---
 .github/workflows/ci.yml | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 1e700659977..19467069d72 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -115,8 +115,6 @@ jobs:
   check:
     if: github.event_name != 'issue_comment' || startsWith(github.event.comment.body, '/build')
     runs-on: ubuntu-latest
-    env:
-      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

     strategy:
       fail-fast: false
@@ -126,6 +124,7 @@ jobs:
         - "4.0.4"

     env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
       _R_CHECK_LENGTH_1_CONDITION_: true
       _R_CHECK_LENGTH_1_LOGIC2_: true
       # Avoid compilation check warnings that come from the system Makevars
From 6d3c07cb51845bd27f9c83475e59e5e64d61441d Mon Sep 17 00:00:00 2001
From: Shashank Singh
Date: Tue, 10 Aug 2021 20:13:07 +0000
Subject: [PATCH 2197/2289] warning fix

---
 base/utils/.Rbuildignore        |  1 
+ base/utils/R/convert.input.R | 2 +- base/utils/R/datasets.R | 2 +- base/utils/R/utils.R | 73 +++++++++++++++------------------ base/utils/man/standard_vars.Rd | 2 +- 5 files changed, 37 insertions(+), 43 deletions(-) diff --git a/base/utils/.Rbuildignore b/base/utils/.Rbuildignore index 91114bf2f2b..3b5d043dea9 100644 --- a/base/utils/.Rbuildignore +++ b/base/utils/.Rbuildignore @@ -1,2 +1,3 @@ ^.*\.Rproj$ ^\.Rproj\.user$ +^scripts \ No newline at end of file diff --git a/base/utils/R/convert.input.R b/base/utils/R/convert.input.R index 1f38ee128aa..e5f488987dd 100644 --- a/base/utils/R/convert.input.R +++ b/base/utils/R/convert.input.R @@ -274,7 +274,7 @@ convert.input <- ) if ("id" %in% colnames(existing.dbfile)) { existing.dbfile <- existing.dbfile %>% - dplyr::filter(id==input.args$dbfile.id) + dplyr::filter(.data$id==input.args$dbfile.id) } }else{ existing.dbfile <- PEcAn.DB::dbfile.input.check(siteid = site.id, diff --git a/base/utils/R/datasets.R b/base/utils/R/datasets.R index 384e0f1ba86..c5ada7f5679 100644 --- a/base/utils/R/datasets.R +++ b/base/utils/R/datasets.R @@ -15,7 +15,7 @@ #' \item{Variable.Name}{Short name suitable for programming with} #' \item{standard_name}{Name used in the NetCDF \href{http://cfconventions.org/standard-names.html}{CF metadata conventions} } #' \item{Units}{Standard units for this variable. Do not call variables by these names if they are in different units. -#' See \code{\link[udunits2]{udunits}} for conversions to and from non-standard units} +#' See \code{\link[udunits2]{ud.convert}} for conversions to and from non-standard units} #' \item{Long.Name}{Human-readable variable name, suitable for e.g. axis labels} #' \item{Category}{What kind of variable is it? (Carbon pool, N flux, dimension, input driver, etc)} #' \item{var_type}{Storage type (character, integer, etc)} diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 0dfed978407..59bd9423ee7 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -24,13 +24,12 @@ ##' @param lon longitude if dimension requests it ##' @param time time if dimension requests it ##' @param nsoil nsoil if dimension requests it -##' @param silent logical: suppress log messages about missing variables? ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] dims <- list() - + if (nrow(nc_var) == 0) { nc_var <- PEcAn.utils::mstmip_local[PEcAn.utils::mstmip_local$Variable.Name == name, ] if (nrow(nc_var) == 0) { @@ -39,12 +38,12 @@ mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = } if (is.na(time)) { time <- ncdf4::ncdim_def(name = "time", units = "days since 1900-01-01 00:00:00", - vals = 1:365, calendar = "standard", unlim = TRUE) + vals = 1:365, calendar = "standard", unlim = TRUE) } return(ncdf4::ncvar_def(name, "", list(time), -999, name)) } } - + for (i in 1:4) { vd <- nc_var[[paste0("dim", i)]] if (vd == "lon" && !is.na(lon)) { @@ -184,7 +183,7 @@ get.run.id <- function(run.type, index, trait = NULL, pft.name = NULL, site.id=N ##' @param bw The smoothing bandwidth to be used. See 'bw.nrd' ##' @param n number of points to use in kernel density estimate. 
See \code{\link[stats]{density}} ##' @return data frame with back-transformed log density estimate -##' @author \href{http://stats.stackexchange.com/q/6588/2750}{Rob Hyndman} +##' @author \href{https://stats.stackexchange.com/q/6588/2750}{Rob Hyndman} ##' @references M. P. Wand, J. S. Marron and D. Ruppert, 1991. Transformations in Density Estimation. Journal of the American Statistical Association. 86(414):343-353 \url{http://www.jstor.org/stable/2290569} zero.bounded.density <- function(x, bw = "SJ", n = 1001) { y <- log(x) @@ -308,24 +307,24 @@ get.parameter.stat <- function(mcmc.summary, parameter) { pdf.stats <- function(distn, A, B) { distn <- as.character(distn) mean <- switch(distn, - gamma = A/B, - lnorm = exp(A + 1/2 * B^2), - beta = A/(A + B), - weibull = B * gamma(1 + 1/A), - norm = A, - f = ifelse(B > 2, - B/(B - 2), - mean(stats::rf(10000, A, B)))) + gamma = A/B, + lnorm = exp(A + 1/2 * B^2), + beta = A/(A + B), + weibull = B * gamma(1 + 1/A), + norm = A, + f = ifelse(B > 2, + B/(B - 2), + mean(stats::rf(10000, A, B)))) var <- switch(distn, - gamma = A/B^2, - lnorm = exp(2 * A + B ^ 2) * (exp(B ^ 2) - 1), - beta = A * B/((A + B) ^ 2 * (A + B + 1)), - weibull = B ^ 2 * (gamma(1 + 2 / A) - - gamma(1 + 1 / A) ^ 2), - norm = B ^ 2, - f = ifelse(B > 4, - 2 * B^2 * (A + B - 2) / (A * (B - 2) ^ 2 * (B - 4)), - stats::var(stats::rf(1e+05, A, B)))) + gamma = A/B^2, + lnorm = exp(2 * A + B ^ 2) * (exp(B ^ 2) - 1), + beta = A * B/((A + B) ^ 2 * (A + B + 1)), + weibull = B ^ 2 * (gamma(1 + 2 / A) - + gamma(1 + 1 / A) ^ 2), + norm = B ^ 2, + f = ifelse(B > 4, + 2 * B^2 * (A + B - 2) / (A * (B - 2) ^ 2 * (B - 4)), + stats::var(stats::rf(1e+05, A, B)))) qci <- get(paste0("q", distn)) ci <- qci(c(0.025, 0.975), A, B) lcl <- ci[1] @@ -540,7 +539,7 @@ load.modelpkg <- function(model) { do.call(require, args = list(pecan.modelpkg)) } else { PEcAn.logger::logger.error("I can't find a package for the ", model, - "model; I expect it to be named ", pecan.modelpkg) + "model; I expect it to be named ", pecan.modelpkg) } } } # load.modelpkg @@ -557,10 +556,10 @@ load.modelpkg <- function(model) { ##' @return val converted values ##' @author Istem Fer, Shawn Serbin misc.convert <- function(x, u1, u2) { - + amC <- 12.0107 # atomic mass of carbon mmH2O <- 18.01528 # molar mass of H2O, g/mol - + if (u1 == "umol C m-2 s-1" & u2 == "kg C m-2 s-1") { val <- udunits2::ud.convert(x, "ug", "kg") * amC } else if (u1 == "kg C m-2 s-1" & u2 == "umol C m-2 s-1") { @@ -577,9 +576,9 @@ misc.convert <- function(x, u1, u2) { u1 <- gsub("gC","g*12",u1) u2 <- gsub("gC","g*12",u2) val <- udunits2::ud.convert(x,u1,u2) - - -# PEcAn.logger::logger.severe(paste("Unknown units", u1, u2)) + + + # PEcAn.logger::logger.severe(paste("Unknown units", u1, u2)) } return(val) } # misc.convert @@ -595,7 +594,7 @@ misc.convert <- function(x, u1, u2) { ##' @return logical ##' @author Istem Fer, Shawn Serbin misc.are.convertible <- function(u1, u2) { - + # make sure the order of vectors match units.from <- c("umol C m-2 s-1", "kg C m-2 s-1", "mol H2O m-2 s-1", "kg H2O m-2 s-1", @@ -603,7 +602,7 @@ misc.are.convertible <- function(u1, u2) { units.to <- c("kg C m-2 s-1", "umol C m-2 s-1", "kg H2O m-2 s-1", "mol H2O m-2 s-1", "kg C m-2", "Mg ha-1") - + if(u1 %in% units.from & u2 %in% units.to) { if (which(units.from == u1) == which(units.to == u2)) { return(TRUE) @@ -628,7 +627,7 @@ convert.expr <- function(expression) { # split equation to LHS and RHS deri.var <- gsub("=.*$", "", expression) # name of the derived variable deri.eqn <- 
gsub(".*=", "", expression) # derivation eqn - + non.match <- gregexpr('[^a-zA-Z_.]', deri.eqn) # match characters that are not "a-zA-Z_." split.chars <- unlist(regmatches(deri.eqn, non.match)) # where to split at # split the expression to retrieve variable names to be used in read.output @@ -638,7 +637,7 @@ convert.expr <- function(expression) { } else { variables <- deri.eqn } - + return(list(variable.drv = deri.var, variable.eqn = list(variables = variables, expression = deri.eqn))) } #--------------------------------------------------------------------------------------------------# @@ -651,7 +650,7 @@ convert.expr <- function(expression) { ##' @title download.file ##' @param url complete URL for file download ##' @param filename destination file name -##' @param method Method of file retrieval. Can set this using the options(`download.ftp.method=[method]`) in your Rprofile. +##' @param method Method of file retrieval. Can set this using the options(download.ftp.method=[method]) in your Rprofile. ##' example options(download.ftp.method="ncftpget") ##' ##' @examples @@ -693,19 +692,13 @@ download.file <- function(url, filename, method) { ##' @param expr The function to try running ##' @param maxErrors The number of times to retry the function ##' @param sleep How long to wait before retrying the function call -##' @param isError function to use for checking whether to try again. -##' Must take one argument that contains the result of evaluating `expr` -##' and return TRUE if another retry is needed ##' ##' @return retval returns the results of the function call ##' ##' @examples ##' \dontrun{ -##' file_url <- paste0("https://thredds.daac.ornl.gov/", -##' "thredds/dodsC/ornldaac/1220", -##' "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") ##' dap <- retry.func( -##' ncdf4::nc_open(file_url) +##' ncdf4::nc_open('https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'), ##' maxErrors=10, ##' sleep=2) ##' } diff --git a/base/utils/man/standard_vars.Rd b/base/utils/man/standard_vars.Rd index 65fe44e88b3..a69bc524c4b 100644 --- a/base/utils/man/standard_vars.Rd +++ b/base/utils/man/standard_vars.Rd @@ -9,7 +9,7 @@ \item{Variable.Name}{Short name suitable for programming with} \item{standard_name}{Name used in the NetCDF \href{http://cfconventions.org/standard-names.html}{CF metadata conventions} } \item{Units}{Standard units for this variable. Do not call variables by these names if they are in different units. -See \code{\link[udunits2]{udunits}} for conversions to and from non-standard units} +See \code{\link[udunits2]{ud.convert}} for conversions to and from non-standard units} \item{Long.Name}{Human-readable variable name, suitable for e.g. axis labels} \item{Category}{What kind of variable is it? 
(Carbon pool, N flux, dimension, input driver, etc)} \item{var_type}{Storage type (character, integer, etc)} From 9b07830d1229ea9c7f6b872c32b79456ffa30eac Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Thu, 12 Aug 2021 15:43:27 +0530 Subject: [PATCH 2198/2289] function documentation fixes --- modules/data.atmosphere/R/download.ICOS.R | 7 ++++--- modules/data.atmosphere/R/met2CF.ICOS.R | 1 + modules/data.atmosphere/README.md | 3 ++- modules/data.atmosphere/man/download.ICOS.Rd | 2 ++ modules/data.atmosphere/man/met2CF.ICOS.Rd | 2 ++ 5 files changed, 11 insertions(+), 4 deletions(-) diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R index 2e6b79ffe29..4003c1f921d 100644 --- a/modules/data.atmosphere/R/download.ICOS.R +++ b/modules/data.atmosphere/R/download.ICOS.R @@ -11,6 +11,7 @@ #' @param end_date end date area of the data request in the form YYYY-MM-DD #' @param product ICOS product to be downloaded. Currently supported options: "Drought2018", "ETC" #' @param overwrite should existing files be overwritten. Default False. +#' @param ... used when extra arguments are present. #' @return information about the output file #' @export #' @examples @@ -70,7 +71,7 @@ download.ICOS <- format_name <- "ICOS_ECOSYSTEM_HH" } else { - PEcAn.logger::logger.severe("Inavlid product. Product should be one of 'Drought2018', 'ETC' ") + PEcAn.logger::logger.severe("Invalid product. Product should be one of 'Drought2018', 'ETC' ") } output_file <- list.files(path = outfolder, patt= output_file_name) @@ -177,10 +178,10 @@ download.ICOS <- output_file_name <- zipped_csv_name }else if (tolower(product) == "etc") { # reformat file slightly so that both Drought2018 and ETC files can use the same format - tmp_csv <- read.csv(file.path(outfolder, output_file_name)) + tmp_csv <- utils::read.csv(file.path(outfolder, output_file_name)) new_tmp <- cbind(tmp_csv[, -which(colnames(tmp_csv)=="LW_OUT")], tmp_csv[, which(colnames(tmp_csv)=="LW_OUT")]) colnames(new_tmp) <- c(colnames(tmp_csv)[-which(colnames(tmp_csv)=="LW_OUT")], "LW_OUT") - write.csv(new_tmp, file = file.path(outfolder, output_file_name), row.names = FALSE) + utils::write.csv(new_tmp, file = file.path(outfolder, output_file_name), row.names = FALSE) } } diff --git a/modules/data.atmosphere/R/met2CF.ICOS.R b/modules/data.atmosphere/R/met2CF.ICOS.R index fe0cf7d3a62..f3775ad0426 100644 --- a/modules/data.atmosphere/R/met2CF.ICOS.R +++ b/modules/data.atmosphere/R/met2CF.ICOS.R @@ -28,6 +28,7 @@ #' format$vars$column_number = Column number in CSV file (optional, will use header name first) #' Columns with NA for bety variable name are dropped. #' @param overwrite overwrite should existing files be overwritten. Default False. +#' @param ... used when extra arguments are present. 
#' @return information about the output file
#' @export
#'
diff --git a/modules/data.atmosphere/README.md b/modules/data.atmosphere/README.md
index 0bbc556c844..225b4e9c000 100644
--- a/modules/data.atmosphere/README.md
+++ b/modules/data.atmosphere/README.md
@@ -11,7 +11,8 @@ Current list of input meteorological formats supported, functions are named `dow
 * FACE
 * ALMA
 * NOAA GEFS
-* arbitrary csv files 
+* arbitrary csv files
+* ICOS
 
 ## Installation
 
diff --git a/modules/data.atmosphere/man/download.ICOS.Rd b/modules/data.atmosphere/man/download.ICOS.Rd
index 9ecc708b83f..09ea66b81a4 100644
--- a/modules/data.atmosphere/man/download.ICOS.Rd
+++ b/modules/data.atmosphere/man/download.ICOS.Rd
@@ -26,6 +26,8 @@ download.ICOS(
 \item{product}{ICOS product to be downloaded. Currently supported options: "Drought2018", "ETC"}
 
 \item{overwrite}{should existing files be overwritten. Default False.}
+
+\item{...}{used when extra arguments are present.}
 }
 \value{
 information about the output file
diff --git a/modules/data.atmosphere/man/met2CF.ICOS.Rd b/modules/data.atmosphere/man/met2CF.ICOS.Rd
index 83c9a79e8d0..417cd2cd630 100644
--- a/modules/data.atmosphere/man/met2CF.ICOS.Rd
+++ b/modules/data.atmosphere/man/met2CF.ICOS.Rd
@@ -42,6 +42,8 @@ met2CF.ICOS(
 Columns with NA for bety variable name are dropped.}
 
 \item{overwrite}{overwrite should existing files be overwritten. Default False.}
+
+\item{...}{used when extra arguments are present.}
 }
 \value{
 information about the output file

From 198d4a169c09c46c1c4fbe260fbf9e7d24825e75 Mon Sep 17 00:00:00 2001
From: Ayush Prasad
Date: Thu, 12 Aug 2021 15:59:08 +0530
Subject: [PATCH 2199/2289] use pattern instead of pat

---
 modules/data.atmosphere/R/download.ICOS.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R
index 4003c1f921d..ecacee30e81 100644
--- a/modules/data.atmosphere/R/download.ICOS.R
+++ b/modules/data.atmosphere/R/download.ICOS.R
@@ -74,7 +74,7 @@ download.ICOS <-
       PEcAn.logger::logger.severe("Invalid product. Product should be one of 'Drought2018', 'ETC' ")
     }
     
-    output_file <- list.files(path = outfolder, patt= output_file_name)
+    output_file <- list.files(path = outfolder, pattern = output_file_name)
     if(length(output_file != 0) && !overwrite){
       PEcAn.logger::logger.info("Output CSV file for the requested site already exists")
       download_file_flag <- FALSE

From 76fdf185c49ba6c9dd29c5fbc135bc80a184e47a Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 12 Aug 2021 13:35:45 -0500
Subject: [PATCH 2200/2289] cleanup

---
 base/utils/R/utils.R                  | 2 +-
 base/utils/tests/Rcheck_reference.log | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R
index f562fdb9b38..94efdaec2c9 100644
--- a/base/utils/R/utils.R
+++ b/base/utils/R/utils.R
@@ -51,7 +51,7 @@ mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent =
       dims[[length(dims) + 1]] <- time
     } else if (vd == "nsoil" && !is.na(nsoil)) {
       dims[[length(dims) + 1]] <- nsoil
-    } else if (vd == "na") {
+    } else if (is.na(vd)) {
       # skip
     } else {
       if (!silent) {
diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log
index e2211086872..df083922969 100644
--- a/base/utils/tests/Rcheck_reference.log
+++ b/base/utils/tests/Rcheck_reference.log
@@ -109,11 +109,11 @@ See section 'Cross-references' in the 'Writing R Extensions' manual.
Undocumented code objects: ‘logger.error’ ‘logger.getLevel’ ‘logger.info’ ‘logger.setLevel’ ‘logger.setOutputFile’ ‘logger.setQuitOnSevere’ ‘logger.setWidth’ - ‘logger.severe’ ‘logger.warn’ ‘mstmip_local’ ‘mstmip_vars’ + ‘logger.severe’ ‘logger.warn’ ‘runModule.get.results’ ‘runModule.run.write.configs’ ‘trait.dictionary’ Undocumented data sets: - ‘mstmip_local’ ‘mstmip_vars’ ‘trait.dictionary’ + ‘trait.dictionary’ All user-level objects in a package should have documentation entries. See chapter ‘Writing R documentation files’ in the ‘Writing R Extensions’ manual. From 87cc61b9da12bfd172093b5f3ba91c712f48a461 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 13 Aug 2021 00:52:16 +0530 Subject: [PATCH 2201/2289] Update utils.R --- base/utils/R/utils.R | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 59bd9423ee7..276fc81e662 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -24,6 +24,7 @@ ##' @param lon longitude if dimension requests it ##' @param time time if dimension requests it ##' @param nsoil nsoil if dimension requests it +##' @param silent logical: suppress log messages about missing variables? ##' @return ncvar based on MstMIP definition ##' @author Rob Kooper mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { @@ -650,7 +651,7 @@ convert.expr <- function(expression) { ##' @title download.file ##' @param url complete URL for file download ##' @param filename destination file name -##' @param method Method of file retrieval. Can set this using the options(download.ftp.method=[method]) in your Rprofile. +##' @param method Method of file retrieval. Can set this using the `options(download.ftp.method=[method])` in your Rprofile. ##' example options(download.ftp.method="ncftpget") ##' ##' @examples @@ -692,13 +693,19 @@ download.file <- function(url, filename, method) { ##' @param expr The function to try running ##' @param maxErrors The number of times to retry the function ##' @param sleep How long to wait before retrying the function call +##' @param isError function to use for checking whether to try again. 
+##' Must take one argument that contains the result of evaluating `expr`
+##' and return TRUE if another retry is needed
 ##'
 ##' @return retval returns the results of the function call
 ##'
 ##' @examples
 ##' \dontrun{
+##' file_url <- paste0("https://thredds.daac.ornl.gov/",
+##'   "thredds/dodsC/ornldaac/1220",
+##'   "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4")
 ##' dap <- retry.func(
-##'  ncdf4::nc_open('https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'),
+##'  ncdf4::nc_open(file_url),
 ##'  maxErrors=10,
 ##'  sleep=2)
 ##' }

From 8d07fd0d3304bd98b9e8e0b214fccee7e4f416f9 Mon Sep 17 00:00:00 2001
From: istfer
Date: Fri, 13 Aug 2021 12:35:28 +0300
Subject: [PATCH 2202/2289] Update documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd

---
 .../ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
index 809c06e911b..91dff5a98f5 100644
--- a/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
+++ b/documentation/tutorials/ICOS_Drought2018/ICOS_Drought2018_Vignette.Rmd
@@ -59,6 +59,16 @@ res <- PEcAn.data.atmosphere::download.ICOS(sitename, outfolder, start_date, end
 To use the downloaded observations a function like `PEcAn.benchmark::load_data` can be used. Here we load the NEE observations.
 
 ```{r load, eval=FALSE}
+  dbcon <- DBI::dbConnect(
+    RPostgres::Postgres(),
+    host = 'localhost',
+    user = 'bety',
+    password = 'bety',
+    dbname = 'bety'
+  )
+  ## or if you have different DB settings try:
+  ## php_config <- ".../pecan/web/config.php" # path to your PHP config file
+  ## dbcon <- betyConnect(php_config)
diff --git a/docker-compose.yml b/docker-compose.yml index a5957241184..9fb83a3e77a 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -90,7 +90,7 @@ services: # rabbitmq to connect to extractors rabbitmq: - image: rabbitmq:management + image: rabbitmq:3.8-management restart: unless-stopped networks: - pecan From 96de52a6bad2898831af920651de81b2d84f8ddb Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Fri, 13 Aug 2021 16:11:28 +0000 Subject: [PATCH 2204/2289] Add newline --- models/ed/tests/testthat/test.read_E_files.R | 1 + 1 file changed, 1 insertion(+) diff --git a/models/ed/tests/testthat/test.read_E_files.R b/models/ed/tests/testthat/test.read_E_files.R index 3aa01757804..a14b502a99e 100644 --- a/models/ed/tests/testthat/test.read_E_files.R +++ b/models/ed/tests/testthat/test.read_E_files.R @@ -1,3 +1,4 @@ + context("test possible inputs to read_E_file") outfolder <- tempdir() From cfcec29c25e5f131517f203b2e1736494adcc8e1 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Fri, 13 Aug 2021 17:51:29 +0000 Subject: [PATCH 2205/2289] automated documentation update --- base/utils/man/download.file.Rd | 2 +- base/utils/man/zero.bounded.density.Rd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/base/utils/man/download.file.Rd b/base/utils/man/download.file.Rd index 70573a52693..b740fac65ef 100644 --- a/base/utils/man/download.file.Rd +++ b/base/utils/man/download.file.Rd @@ -11,7 +11,7 @@ download.file(url, filename, method) \item{filename}{destination file name} -\item{method}{Method of file retrieval. Can set this using the options(\verb{download.ftp.method=[method]}) in your Rprofile. +\item{method}{Method of file retrieval. Can set this using the \verb{options(download.ftp.method=[method])} in your Rprofile. example options(download.ftp.method="ncftpget")} } \description{ diff --git a/base/utils/man/zero.bounded.density.Rd b/base/utils/man/zero.bounded.density.Rd index 0f32e90edc5..c2b423e2c17 100644 --- a/base/utils/man/zero.bounded.density.Rd +++ b/base/utils/man/zero.bounded.density.Rd @@ -29,5 +29,5 @@ One useful approach is to transform to logs, estimate the density using KDE, and M. P. Wand, J. S. Marron and D. Ruppert, 1991. Transformations in Density Estimation. Journal of the American Statistical Association. 
86(414):343-353 \url{http://www.jstor.org/stable/2290569} } \author{ -\href{http://stats.stackexchange.com/q/6588/2750}{Rob Hyndman} +\href{https://stats.stackexchange.com/q/6588/2750}{Rob Hyndman} } From 170b7013e6d70262624cc39f1ffcf3d5231cb0ce Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 13 Aug 2021 13:53:03 -0500 Subject: [PATCH 2206/2289] Update base/utils/.Rbuildignore Mostly to prompt a CI build --- base/utils/.Rbuildignore | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/.Rbuildignore b/base/utils/.Rbuildignore index 3b5d043dea9..0fa6193e633 100644 --- a/base/utils/.Rbuildignore +++ b/base/utils/.Rbuildignore @@ -1,3 +1,3 @@ ^.*\.Rproj$ ^\.Rproj\.user$ -^scripts \ No newline at end of file +^scripts From dd2b0bb0cb93698a9cba148225995369e910a61c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 13 Aug 2021 14:30:55 -0500 Subject: [PATCH 2207/2289] set Roxygen to use Markdown in visualization --- base/visualization/DESCRIPTION | 1 + base/visualization/man/ciEnvelope.Rd | 2 +- base/visualization/man/dhist.Rd | 2 +- base/visualization/man/vwReg.Rd | 2 +- 4 files changed, 4 insertions(+), 3 deletions(-) diff --git a/base/visualization/DESCRIPTION b/base/visualization/DESCRIPTION index b7d1fe42245..8a16caecbab 100644 --- a/base/visualization/DESCRIPTION +++ b/base/visualization/DESCRIPTION @@ -46,3 +46,4 @@ LazyLoad: yes LazyData: FALSE Encoding: UTF-8 RoxygenNote: 7.0.2 +Roxygen: list(markdown = TRUE) diff --git a/base/visualization/man/ciEnvelope.Rd b/base/visualization/man/ciEnvelope.Rd index afd7a5c92c3..fd227771b2c 100644 --- a/base/visualization/man/ciEnvelope.Rd +++ b/base/visualization/man/ciEnvelope.Rd @@ -13,7 +13,7 @@ ciEnvelope(x, ylo, yhi, ...) \item{yhi}{Vector defining top of CI envelope} -\item{...}{further arguments passed on to `graphics::polygon`} +\item{...}{further arguments passed on to \code{graphics::polygon}} } \description{ plots a confidence interval around an x-y plot (e.g. a timeseries) diff --git a/base/visualization/man/dhist.Rd b/base/visualization/man/dhist.Rd index 64e7e54a3de..f591adcee32 100644 --- a/base/visualization/man/dhist.Rd +++ b/base/visualization/man/dhist.Rd @@ -30,7 +30,7 @@ dhist( \item{plot}{= TRUE produces the plot, FALSE returns the heights, breaks and counts} -\item{lab.spikes}{= TRUE labels the % of data in the spikes} +\item{lab.spikes}{= TRUE labels the \% of data in the spikes} } \value{ list with two elements, heights of length n and breaks of length n+1 indicating the heights and break points of the histogram bars. 
diff --git a/base/visualization/man/vwReg.Rd b/base/visualization/man/vwReg.Rd index c7d9649899a..eaf8888b6fc 100644 --- a/base/visualization/man/vwReg.Rd +++ b/base/visualization/man/vwReg.Rd @@ -56,7 +56,7 @@ vwReg( \item{shape}{shape of points} -\item{show.CI}{should the 95\% CI limits be plotted?} +\item{show.CI}{should the 95\\% CI limits be plotted?} \item{method}{the fitting function for the spaghettis; default: loess} From 46937679436d95ef947e599ea06f7350f34d6dc7 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 13 Aug 2021 15:53:52 -0700 Subject: [PATCH 2208/2289] update documentation --- models/ed/R/model2netcdf.ED2.R | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 7918e66308e..863d75df98a 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -18,7 +18,8 @@ ##' @param sitelon Longitude of the site ##' @param start_date Start time of the simulation ##' @param end_date End time of the simulation -##' @param pft_names Names of PFTs used in the run, vector +##' @param pfts Names of PFTs used in the run, vector +##' @param settings pecan settings object ##' @export ##' ##' @author Michael Dietze, Shawn Serbin, Rob Kooper, Toni Viskari, Istem Fer @@ -189,6 +190,10 @@ model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, ##' ##' @param yr the year being processed ##' @param yfiles the years on the filenames, will be used to matched tfiles for that year +##' @param tfiles names of T files to be read +##' @param outdir directory where output will be written to +##' @param start_date start date in YYYY-MM-DD format +##' @param end_date end date in YYYY-MM-DD format ##' @export read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){ @@ -859,6 +864,12 @@ put_T_values <- function(yr, nc_var, out, lat, lon, begins, ends, ...){ ##' ##' @param yr the year being processed ##' @param yfiles the years on the filenames, will be used to matched efiles for that year +##' @param efiles +##' @param outdir +##' @param start_date Start time of the simulation +##' @param end_date End time of the simulation +##' @param pfts Names of PFTs used in the run, vector +##' @param settings pecan settings object ##' ##' @export read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, @@ -1276,4 +1287,4 @@ read_S_files <- function(sfile, outdir, pft_names, pecan_names = NULL){ } # read_S_files ##-------------------------------------------------------------------------------------------------# -### EOF \ No newline at end of file +### EOF From 9e8b7cd008572801fefa2b4cc466c19349dca26d Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 13 Aug 2021 15:54:52 -0700 Subject: [PATCH 2209/2289] Update modules/data.atmosphere/R/load.cfmet.R Co-authored-by: Chris Black --- modules/data.atmosphere/R/load.cfmet.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/load.cfmet.R b/modules/data.atmosphere/R/load.cfmet.R index b8117df52d7..c92d4deed5e 100644 --- a/modules/data.atmosphere/R/load.cfmet.R +++ b/modules/data.atmosphere/R/load.cfmet.R @@ -74,7 +74,7 @@ load.cfmet <- function(met.nc, lat, lon, start.date, end.date) { utils::data(standard_vars, package = "PEcAn.utils", envir = environment()) ## pressure naming hack pending https://github.com/ebimodeling/model-drivers/issues/2 - standard_names <- append(as.character(standard_vars$standard_name), "surface_pressure") + standard_names <- 
append(as.character(PEcAn.utils::standard_vars$standard_name), "surface_pressure") variables <- as.character(standard_names[standard_names %in% c("surface_pressure", attributes(met.nc$var)$names)]) From 71b89d6b94dad9a187c8aff1324b6cd4fceadc84 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 13 Aug 2021 15:55:10 -0700 Subject: [PATCH 2210/2289] Update modules/data.atmosphere/R/load.cfmet.R Co-authored-by: Chris Black --- modules/data.atmosphere/R/load.cfmet.R | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/data.atmosphere/R/load.cfmet.R b/modules/data.atmosphere/R/load.cfmet.R index c92d4deed5e..fb5de4938bd 100644 --- a/modules/data.atmosphere/R/load.cfmet.R +++ b/modules/data.atmosphere/R/load.cfmet.R @@ -71,7 +71,6 @@ load.cfmet <- function(met.nc, lat, lon, start.date, end.date) { results <- list() - utils::data(standard_vars, package = "PEcAn.utils", envir = environment()) ## pressure naming hack pending https://github.com/ebimodeling/model-drivers/issues/2 standard_names <- append(as.character(PEcAn.utils::standard_vars$standard_name), "surface_pressure") From c5b8e42dccb4e49763a35c671b7875435e8c8cdf Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Sun, 15 Aug 2021 02:11:28 +0530 Subject: [PATCH 2211/2289] Update plots.R --- modules/priors/R/plots.R | 80 ++++++++++++++++++++-------------------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/modules/priors/R/plots.R b/modules/priors/R/plots.R index 82979201a46..92390c23191 100644 --- a/modules/priors/R/plots.R +++ b/modules/priors/R/plots.R @@ -1,5 +1,5 @@ ##--------------------------------------------------------------------------------------------------# -##' Plots a prior density from a parameterized probability distribution +##' Plots a prior density from a parameterized probability distribution ##' ##' @param prior.density data frame containing columns x and y ##' @param base.plot a ggplot object (grob), created by \code{\link[PEcAn.utils]{create.base.plot}} if none provided @@ -15,7 +15,7 @@ ##' } plot_prior.density <- function(prior.density, base.plot = NULL, prior.color = "black") { if (is.null(base.plot)) { - base.plot <- PEcAn.utils::create.base.plot() + base.plot <- PEcAn.visualization::create.base.plot() } new.plot <- base.plot + geom_line(data = prior.density, aes(x = x, y = y), color = prior.color) return(new.plot) @@ -34,7 +34,7 @@ plot_prior.density <- function(prior.density, base.plot = NULL, prior.color = "b ##' @author David LeBauer plot_posterior.density <- function(posterior.density, base.plot = NULL) { if (is.null(base.plot)) { - base.plot <- PEcAn.utils::create.base.plot() + base.plot <- PEcAn.visualization::create.base.plot() } new.plot <- base.plot + geom_line(data = posterior.density, aes(x = x, y = y)) return(new.plot) @@ -45,20 +45,20 @@ plot_posterior.density <- function(posterior.density, base.plot = NULL) { ##' Plot prior density and data ##' ##' @name priorfig -##' @title Prior Figure +##' @title Prior Figure ##' @param priordata observations to be plotted as points ##' @param priordensity density of prior distribution, calculated by \code{\link{prior.density}} ##' @param trait name of trait ##' @param xlim limits for x axis ##' @author David LeBauer -##' @return plot / grob of prior distribution with data used to inform the distribution +##' @return plot / grob of prior distribution with data used to inform the distribution ##' @export ##' @importFrom ggplot2 ggplot aes theme_bw scale_x_continuous scale_y_continuous 
element_blank element_text geom_rug geom_line geom_point priorfig <- function(priordata = NA, priordensity = NA, trait = "", xlim = "auto", fontsize = 18) { if (is.data.frame(priordata)) { colnames(priordata) <- "x" } - + if (isTRUE(xlim == "auto")) { x.breaks <- pretty(c(signif(priordensity$x, 2)), 4) xlim <- range(x.breaks) @@ -66,27 +66,27 @@ priorfig <- function(priordata = NA, priordensity = NA, trait = "", xlim = "auto x.breaks <- pretty(signif(xlim, 2), 4) xlim <- range(c(x.breaks, xlim)) } - - priorfigure <- ggplot() + theme_bw() + + + priorfigure <- ggplot() + theme_bw() + scale_x_continuous(limits = xlim, breaks = x.breaks, name = PEcAn.utils::trait.lookup(trait)$units) + - scale_y_continuous(breaks = NULL) + + scale_y_continuous(breaks = NULL) + labs(title = PEcAn.utils::trait.lookup(trait)$figid) + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.text.y = element_blank(), ## hide y axis label axis.text.x = element_text(size = fontsize), axis.title.y = element_blank(), ## hide y axis label - axis.title.x = element_text(size = fontsize * 0.9), + axis.title.x = element_text(size = fontsize * 0.9), plot.title = element_text(size = fontsize * 1.1)) - + if (is.data.frame(priordata)) { priordata <- subset(priordata, subset = !is.na(x)) dx <- with(priordata, min(abs(diff(x)[diff(x) != 0]))) - ## add jitter to separate equal values + ## add jitter to separate equal values priordata <- transform(priordata, x = x + runif(length(x), -dx / 2, dx / 2)) rug <- geom_rug(data = priordata, aes(x)) priorfigure <- priorfigure + rug - } + } if (is.data.frame(priordensity[1])) { dens.line <- geom_line(data = priordensity, aes(x, y)) qpts <- get.quantiles.from.density(priordensity) @@ -102,8 +102,8 @@ priorfig <- function(priordata = NA, priordensity = NA, trait = "", xlim = "auto ##' ##' @param trait character, name of trait to be plotted ##' @param prior named distribution with parameters -##' @param posterior.sample -##' @param trait.df +##' @param posterior.sample +##' @param trait.df ##' @param fontsize,x.lim,y.lim,logx passed on to ggplot ##' @return plot (grob) object ##' @author David LeBauer @@ -129,17 +129,17 @@ plot_trait <- function(trait, x.lim = NULL, y.lim = NULL, logx = FALSE) { - + ## Determine plot components plot_posterior <- !is.null(posterior.sample) plot_prior <- !is.null(prior) plot_data <- !is.null(trait.df) - + ## get units for plot title units <- PEcAn.utils::trait.lookup(trait)$units if(plot_data) trait.df <- PEcAn.MA::jagify(trait.df) - + if (plot_prior) { prior.color <- ifelse(plot_posterior, "grey", "black") prior.density <- create.density.df(distribution = prior) @@ -153,7 +153,7 @@ plot_trait <- function(trait, } else { posterior.density <- data.frame(x = NA, y = NA) } - + if (is.null(x.lim)) { if (!is.null(trait.df)) { data.range <- max(c(trait.df$Y, trait.df$Y + trait.df$se), na.rm = TRUE) @@ -165,10 +165,10 @@ plot_trait <- function(trait, if (is.null(y.lim)) { y.lim <- range(posterior.density$y, prior.density$y, na.rm = TRUE) } - + x.ticks <- pretty(c(0, x.lim[2])) - - base.plot <- PEcAn.utils::create.base.plot() + theme_bw() + + base.plot <- PEcAn.visualization::create.base.plot() + theme_bw() if (plot_prior) { base.plot <- plot_prior.density(prior.density, base.plot = base.plot, prior.color = prior.color) } @@ -178,21 +178,21 @@ plot_trait <- function(trait, if (plot_data) { base.plot <- PEcAn.utils::plot_data(trait.df, base.plot = base.plot, ymax = y.lim[2]) } - - trait.plot <- base.plot + - geom_segment(aes(x = min(x.ticks), 
xend = last(x.ticks), y = 0, yend = 0)) + + + trait.plot <- base.plot + + geom_segment(aes(x = min(x.ticks), xend = last(x.ticks), y = 0, yend = 0)) + scale_x_continuous(limits = range(x.ticks), breaks = x.ticks, name = PEcAn.utils::trait.lookup(trait)$units) + labs(title = PEcAn.utils::trait.lookup(trait)$figid) + - theme(axis.text.x = element_text(size = fontsize$axis), - axis.text.y = element_blank(), - axis.title.x = element_text(size = fontsize$axis), - axis.title.y = element_blank(), - axis.ticks.y = element_blank(), - axis.line.y = element_blank(), - legend.position = "none", - plot.title = element_text(size = fontsize$title), - panel.grid.major = element_blank(), - panel.grid.minor = element_blank(), + theme(axis.text.x = element_text(size = fontsize$axis), + axis.text.y = element_blank(), + axis.title.x = element_text(size = fontsize$axis), + axis.title.y = element_blank(), + axis.ticks.y = element_blank(), + axis.line.y = element_blank(), + legend.position = "none", + plot.title = element_text(size = fontsize$title), + panel.grid.major = element_blank(), + panel.grid.minor = element_blank(), panel.border = element_blank()) return(trait.plot) } # plot_trait @@ -205,19 +205,19 @@ plot_trait <- function(trait, ##' @aliases plot.densities ##' @param density.plot_inputs list containing trait.samples and trait.df ##' @param ... passed on to plot_density -##' @param outdir directory in which to generate figure as pdf +##' @param outdir directory in which to generate figure as pdf ##' @author David LeBauer -##' @return outputs plots in outdir/sensitivity.analysis.pdf file +##' @return outputs plots in outdir/sensitivity.analysis.pdf file plot_densities <- function(density.plot_inputs, outdir, ...) { trait.samples <- density.plot_inputs$trait.samples trait.df <- density.plot_inputs$trait.df prior.trait.samples <- density.plot_inputs$trait.df - + traits <- names(trait.samples) grDevices::pdf(paste0(outdir, "trait.densities.pdf"), height = 12, width = 20) - + for (trait in traits) { - density.plot <- plot_density(trait.sample = trait.samples[, trait], + density.plot <- plot_density(trait.sample = trait.samples[, trait], trait.df = trait.df[[trait]], ...) 
print(density.plot) } From 8a2e40def64dcad1117d528cf7bf71dd08e265ca Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 14 Aug 2021 20:47:48 +0000 Subject: [PATCH 2212/2289] automated documentation update --- modules/priors/man/plot_trait.Rd | 4 ---- 1 file changed, 4 deletions(-) diff --git a/modules/priors/man/plot_trait.Rd b/modules/priors/man/plot_trait.Rd index 4b409622045..f5459e0944b 100644 --- a/modules/priors/man/plot_trait.Rd +++ b/modules/priors/man/plot_trait.Rd @@ -21,10 +21,6 @@ plot_trait( \item{prior}{named distribution with parameters} -\item{posterior.sample}{} - -\item{trait.df}{} - \item{fontsize, x.lim, y.lim, logx}{passed on to ggplot} } \value{ From 8f6b161077d56adf568d2d60a242b69d8198e48b Mon Sep 17 00:00:00 2001 From: Ayush Prasad Date: Mon, 16 Aug 2021 11:25:16 +0530 Subject: [PATCH 2213/2289] add elevation info to drought18 csv file --- .../extfiles/ICOS_Drought2018_Sites.csv | 106 +++++++++--------- 1 file changed, 53 insertions(+), 53 deletions(-) diff --git a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv index 5689465d62d..b29d5e60fbe 100644 --- a/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv +++ b/documentation/tutorials/ICOS_Drought2018/extfiles/ICOS_Drought2018_Sites.csv @@ -1,53 +1,53 @@ -Sitename,Site,Country,Latitude,Longitude,MAT,MAP,IGBP,Köppen,AvailableYears,DOI -BE-Bra,Brasschaat,Belgium,51.30761,4.51984,9.8,750.00,MF,Cfb,1996–2018,10.18160/F738-634R -BE-Lon,Lonzee,Belgium,50.5516198,4.7462339,10,800.00,CRO,BSk,2004–2018,10.18160/6SM0-NFES -BE-Vie,Vielsalm,Belgium,50.3049329013726,5.99811511350303,7.8,1062.00,MF,Cfb,1996–2018,10.18160/MK3Q-BBEK -CH-Aws,Alp Weissenstein,Switzerland,46.583194,9.790417,2.3,918.00,GRA,,2010–2018,10.18160/3YQE-7BR8 -CH-Cha,Chamau grassland,Switzerland,47.21022222,8.410444444,9.5,1136.00,GRA,Cfb,2005–2018,10.18160/GMMW-5E2D -CH-Dav,Davos,Switzerland,46.81533,9.85591,2.8,1062.00,ENF,Dfc,1997–2018,10.18160/R86M-H3HX -CH-Fru,Fruebuel grassland,Switzerland,47.11583333,8.537777778,7.2,1651.00,GRA,Dfb,2005–2018,10.18160/J938-0MKS -CH-Lae,Laegern,Switzerland,47.478333,8.364389,8.3,1100.00,MF,BWk,2004–2018,10.18160/FABD-SVJJ -CH-Oe2,Oensingen crop,Switzerland,47.286417,7.73375,9.8,1155.00,CRO,BSk,2004–2018,10.18160/N01Y-R7DF -CZ-BK1,Bily Kriz forest,Czechia,49.50207615,18.53688247,6.7,1316.00,ENF,Dwb,2004–2018,10.18160/7QXR-AYEE -CZ-Lnz,Lanzhot,Czechia,48.681611,16.946416,9.3,550.00,MF,,2015–2018,10.18160/84SN-YBSD -CZ-RAJ,Rajec,Czechia,49.4437236,16.6965125,7.1,681,ENF,,2012–2018,10.18160/HFS9-JBTG -CZ-Stn,Stitna,Czechia,49.035975,17.9699,8.7,685.00,DBF,,2010–2018,10.18160/V2JN-DQPJ -CZ-wet,Trebon,Czechia,49.02465,14.77035,7.7,604.00,WET,Dfb,2006–2018,10.18160/W4YS-463W -DE-Akm,Anklam,Germany,53.86617,13.68342,8.7,558.00,WET,BWk,2009–2018,10.18160/24B5-J44F -DE-Geb,Gebesee,Germany,51.09973,10.91463,8.5,470.00,CRO,Cfb,2001–2018,10.18160/ZK18-3YW3 -DE-Gri,Grillenburg,Germany,50.95004,13.51259,8.4,877.00,GRA,Dfb,2004–2018,10.18160/EN60-T3FG -DE-Hai,Hainich,Germany,51.079213,10.452168,8.3,720.00,DBF,Cfb,2000–2018,10.18160/D4ET-BFPS -DE-HoH,Hohes Holz,Germany,52.085306,11.219222,9.1,563.00,DBF,,2015–2018,10.18160/J1YB-YEHC -DE-Hte,Huetelmoor,Germany,54.210278,12.176111,9.2,645.00,WET,Dfb,2009–2018,10.18160/63V0-08T4 -DE-Hzd,Hetzdorf,Germany,50.96381,13.48978,7.8,901.00,DBF,,2010–2018,10.18160/PJEC-43XB 
-DE-Kli,Klingenberg,Germany,50.89306,13.52238,7.6,842.00,CRO,Dfb,2004–2018,10.18160/STT9-TBJZ -DE-Obe,Oberbärenburg,Germany,50.78666,13.72129,5.5,996.00,ENF,Dfb,2008–2018,10.18160/FSM3-RC5F -DE-RuR,Rollesbroich,Germany,50.6219142,6.3041256,7.7,1033.00,GRA,,2011–2018,10.18160/HPV9-K8R1 -DE-RuS,Selhausen Juelich,Germany,50.86590702,6.447144704,10,700.00,CRO,,2011–2018,10.18160/A2TK-QD5U -DE-RuW,Wustebach,Germany,50.50490703,6.33101886,7.5,1250.00,ENF,,2010–2018,10.18160/H7Y6-2R1H -DE-Tha,Tharandt,Germany,50.96256,13.56515,8.2,843.00,ENF,Dfb,1996–2018,10.18160/BSE6-EMVJ -DK-Sor,Soroe,Denmark,55.4858694,11.6446444,8.3,660.00,DBF,Cfb,1996–2018,10.18160/BFDT-7HYE -ES-Abr,Albuera,Spain,38.701839,-6.785881,,,SAV,,2015–2018,10.18160/11TP-MX4F -ES-LM1,Majadas del Tietar North,Spain,39.94269,-5.778683,16,700.00,SAV,Csa,2014–2018,10.18160/FDSD-GVRS -ES-LM2,Majadas del Tietar South,Spain,39.934592,-5.775881,16,700.00,SAV,Csa,2014–2018,10.18160/3SVJ-XSB7 -FI-Hyy,Hyytiala,Finland,61.84741,24.29477,3.8,709.00,ENF,Dfb,1996–2018,10.18160/CWKM-YS54 -FI-Let,Lettosuo,Finland,60.64183,23.95952,4.6,627.00,ENF,,2009–2018,10.18160/0JHQ-BZMU -FI-Sii,Siikaneva,Finland,61.83265,24.19285,3.5,701.00,WET,Dfc,2016–2018,10.18160/0RE3-DTWD -FI-Var,Varrio,Finland,67.7549,29.61,-0.5,601.00,ENF,,2016–2018,10.18160/NYH7-5JEB -FR-Bil,Bilos,France,44.493672,-0.956082,12.9,930.00,ENF,,2014–2018,10.18160/ETDC-1K1F -FR-EM2,Estrees-Mons A28,France,49.8721083,3.02065,10.8,680.00,CRO,,2017–2018 -FR-Hes,Hesse,France,48.6741,7.06465,9.2,820.00,DBF,Cfb,2014–2018,10.18160/WTYC-JVQV -IT-BCi,Borgo Cioffi,Italy,40.52375,14.95744444,18,600.00,CRO,Csa,2004–2018,10.18160/T25N-PD1H -IT-Cp2,Castelporziano2,Italy,41.7042655944824,12.3572931289673,15.2,805.00,EBF,Csa,2012–2018,10.18160/5FPQ-G257 -IT-Lsn,Lison,Italy,45.740481,12.750297,13.1,1083.0,OSH,,2016–2018,10.18160/RTKZ-VTDJ -IT-SR2,San Rossore 2,Italy,43.732022,10.29091,14.2,920.0,ENF,Csa,2013–2018,10.18160/FFK6-8ZV7 -IT-Tor,Torgnon,Italy,45.84444,7.578055,2.9,920.00,GRA,BSk,2008–2018,10.18160/ERMH-PSVW -NL-Loo,Loobos,Netherlands,52.166581,5.743556,9.8,786.00,ENF,Cfb,1996–2018,10.18160/MV3K-WM09 -RU-Fy2,Fyodorovskoye dry spruce stand,Russia,56.447603,32.901878,3.9,711.00,ENF,Dfb,2015-2018,10.18160/4J2N-DY7S -RU-Fyo,Fyodorovskoye,Russia,56.4615278,32.9220833,3.9,711.00,ENF,Dfb,1998-2018,10.18160/4J2N-DY7S -SE-Deg,Degero,Sweden,64.182029,19.556539,1.2,523.00,WET,Dfc,2001-2018,10.18160/0T47-MEEU -SE-Htm,Hyltemossa,Sweden,56.09763,13.41897,7.4,707.00,ENF,Cfb,2015-2018,10.18160/17FF-96RT -SE-Lnn,Lanna,Sweden,58.3406295776367,13.101767539978,6.0,558.00,CRO,Cfb,2014-2018,10.18160/5GZQ-S6Z0 -SE-Nor,Norunda,Sweden,60.08649722,17.47950278,5.5,527.00,ENF,Dfb,2014-2018,10.18160/K57M-TVGE -SE-Ros,Rosinedal-3,Sweden,64.1725,19.738,1.8,614.00,ENF,,2014–2018,10.18160/ZF2F-82Q7 -SE-Svb,Svartberget,Sweden,64.25611,19.7745,1.8,614.00,ENF,Dfc,2014-2018,10.18160/X57W-HWTE +Sitename,Site,Country,Latitude,Longitude,MAT,MAP,IGBP,Köppen,AvailableYears,DOI,Elevation +BE-Bra,Brasschaat,Belgium,51.30761,4.51984,9.8,750.0,MF,Cfb,1996–2018,10.18160/F738-634R,16.0 +BE-Lon,Lonzee,Belgium,50.5516198,4.746233900000001,10.0,800.0,CRO,BSk,2004–2018,10.18160/6SM0-NFES,170.0 +BE-Vie,Vielsalm,Belgium,50.3049329013726,5.99811511350303,7.8,1062.0,MF,Cfb,1996–2018,10.18160/MK3Q-BBEK,490.0 +CH-Aws,Alp Weissenstein,Switzerland,46.583194,9.790417,2.3,918.0,GRA,,2010–2018,10.18160/3YQE-7BR8,1988.0 +CH-Cha,Chamau grassland,Switzerland,47.21022222,8.410444444,9.5,1136.0,GRA,Cfb,2005–2018,10.18160/GMMW-5E2D,394.0 
+CH-Dav,Davos,Switzerland,46.815329999999996,9.85591,2.8,1062.0,ENF,Dfc,1997–2018,10.18160/R86M-H3HX,1689.0 +CH-Fru,Fruebuel grassland,Switzerland,47.11583333,8.537777777999999,7.2,1651.0,GRA,Dfb,2005–2018,10.18160/J938-0MKS,975.0 +CH-Lae,Laegern,Switzerland,47.478333,8.364389,8.3,1100.0,MF,BWk,2004–2018,10.18160/FABD-SVJJ,689.0 +CH-Oe2,Oensingen crop,Switzerland,47.286417,7.73375,9.8,1155.0,CRO,BSk,2004–2018,10.18160/N01Y-R7DF,452.0 +CZ-BK1,Bily Kriz forest,Czechia,49.50207615,18.53688247,6.7,1316.0,ENF,Dwb,2004–2018,10.18160/7QXR-AYEE,875.0 +CZ-Lnz,Lanzhot,Czechia,48.681611,16.946416,9.3,550.0,MF,,2015–2018,10.18160/84SN-YBSD,150.0 +CZ-RAJ,Rajec,Czechia,49.443723600000006,16.6965125,7.1,681.0,ENF,,2012–2018,10.18160/HFS9-JBTG,649.0 +CZ-Stn,Stitna,Czechia,49.035975,17.9699,8.7,685.0,DBF,,2010–2018,10.18160/V2JN-DQPJ,562.0 +CZ-wet,Trebon,Czechia,49.02465,14.77035,7.7,604.0,WET,Dfb,2006–2018,10.18160/W4YS-463W,426.0 +DE-Akm,Anklam,Germany,53.86617,13.683420000000002,8.7,558.0,WET,BWk,2009–2018,10.18160/24B5-J44F,-1.0 +DE-Geb,Gebesee,Germany,51.09973,10.91463,8.5,470.0,CRO,Cfb,2001–2018,10.18160/ZK18-3YW3,161.5 +DE-Gri,Grillenburg,Germany,50.95004,13.51259,8.4,877.0,GRA,Dfb,2004–2018,10.18160/EN60-T3FG,385.0 +DE-Hai,Hainich,Germany,51.079213,10.452167999999999,8.3,720.0,DBF,Cfb,2000–2018,10.18160/D4ET-BFPS,438.7 +DE-HoH,Hohes Holz,Germany,52.085306,11.219222,9.1,563.0,DBF,,2015–2018,10.18160/J1YB-YEHC,193.0 +DE-Hte,Huetelmoor,Germany,54.210278,12.176111,9.2,645.0,WET,Dfb,2009–2018,10.18160/63V0-08T4,1.0 +DE-Hzd,Hetzdorf,Germany,50.963809999999995,13.48978,7.8,901.0,DBF,,2010–2018,10.18160/PJEC-43XB,395.0 +DE-Kli,Klingenberg,Germany,50.89306,13.522379999999998,7.6,842.0,CRO,Dfb,2004–2018,10.18160/STT9-TBJZ,478.0 +DE-Obe,Oberbärenburg,Germany,50.78666,13.72129,5.5,996.0,ENF,Dfb,2008–2018,10.18160/FSM3-RC5F,739.0 +DE-RuR,Rollesbroich,Germany,50.621914200000006,6.304125599999999,7.7,1033.0,GRA,,2011–2018,10.18160/HPV9-K8R1,514.7 +DE-RuS,Selhausen Juelich,Germany,50.86590702,6.447144703999999,10.0,700.0,CRO,,2011–2018,10.18160/A2TK-QD5U,103.2 +DE-RuW,Wustebach,Germany,50.50490703,6.33101886,7.5,1250.0,ENF,,2010–2018,10.18160/H7Y6-2R1H,610.0 +DE-Tha,Tharandt,Germany,50.962559999999996,13.56515,8.2,843.0,ENF,Dfb,1996–2018,10.18160/BSE6-EMVJ,380.0 +DK-Sor,Soroe,Denmark,55.4858694,11.6446444,8.3,660.0,DBF,Cfb,1996–2018,10.18160/BFDT-7HYE,40.0 +ES-Abr,Albuera,Spain,38.701839,-6.785881,,,SAV,,2015–2018,10.18160/11TP-MX4F,279.0 +ES-LM1,Majadas del Tietar North,Spain,39.94269,-5.778683,16.0,700.0,SAV,Csa,2014–2018,10.18160/FDSD-GVRS,266.0 +ES-LM2,Majadas del Tietar South,Spain,39.934591999999995,-5.775881,16.0,700.0,SAV,Csa,2014–2018,10.18160/3SVJ-XSB7,270.0 +FI-Hyy,Hyytiala,Finland,61.847409999999996,24.29477,3.8,709.0,ENF,Dfb,1996–2018,10.18160/CWKM-YS54,181.0 +FI-Let,Lettosuo,Finland,60.641830000000006,23.95952,4.6,627.0,ENF,,2009–2018,10.18160/0JHQ-BZMU,125.0 +FI-Sii,Siikaneva,Finland,61.83265,24.19285,3.5,701.0,WET,Dfc,2016–2018,10.18160/0RE3-DTWD ,160.0 +FI-Var,Varrio,Finland,67.7549,29.61,-0.5,601.0,ENF,,2016–2018,10.18160/NYH7-5JEB,395.0 +FR-Bil,Bilos,France,44.493672,-0.956082,12.9,930.0,ENF,,2014–2018,10.18160/ETDC-1K1F,39.18 +FR-EM2,Estrees-Mons A28,France,49.8721083,3.02065,10.8,680.0,CRO,,2017–2018,,85.0 +FR-Hes,Hesse,France,48.6741,7.06465,9.2,820.0,DBF,Cfb,2014–2018,10.18160/WTYC-JVQV,310.0 +IT-BCi,Borgo Cioffi,Italy,40.52375,14.95744444,18.0,600.0,CRO,Csa,2004–2018,10.18160/T25N-PD1H,10.0 
+IT-Cp2,Castelporziano2,Italy,41.7042655944824,12.357293128967301,15.2,805.0,EBF,Csa,2012–2018,10.18160/5FPQ-G257,19.0 +IT-Lsn,Lison,Italy,45.740481,12.750297,13.1,1083.0,OSH,,2016–2018,10.18160/RTKZ-VTDJ,1.0 +IT-SR2,San Rossore 2,Italy,43.732022,10.29091,14.2,920.0,ENF,Csa,2013–2018,10.18160/FFK6-8ZV7,4.0 +IT-Tor,Torgnon,Italy,45.844440000000006,7.578055,2.9,920.0,GRA,BSk,2008–2018,10.18160/ERMH-PSVW ,2168.0 +NL-Loo,Loobos,Netherlands,52.166581,5.743556,9.8,786.0,ENF,Cfb,1996–2018,10.18160/MV3K-WM09,25.0 +RU-Fy2,Fyodorovskoye dry spruce stand,Russia,56.447603,32.901878,3.9,711.0,ENF,Dfb,2015-2018,10.18160/4J2N-DY7S,276.0 +RU-Fyo,Fyodorovskoye,Russia,56.46152779999999,32.9220833,3.9,711.0,ENF,Dfb,1998-2018,10.18160/4J2N-DY7S,274.0 +SE-Deg,Degero,Sweden,64.182029,19.556539,1.2,523.0,WET,Dfc,2001-2018,10.18160/0T47-MEEU,270.0 +SE-Htm,Hyltemossa,Sweden,56.09763,13.418970000000002,7.4,707.0,ENF,Cfb,2015-2018,10.18160/17FF-96RT,115.0 +SE-Lnn,Lanna,Sweden,58.3406295776367,13.101767539978,6.0,558.0,CRO,Cfb,2014-2018,10.18160/5GZQ-S6Z0,71.0 +SE-Nor,Norunda,Sweden,60.08649722,17.47950278,5.5,527.0,ENF,Dfb,2014-2018,10.18160/K57M-TVGE,45.0 +SE-Ros,Rosinedal-3,Sweden,64.1725,19.738,1.8,614.0,ENF,,2014–2018,10.18160/ZF2F-82Q7,157.0 +SE-Svb,Svartberget,Sweden,64.25610999999999,19.7745,1.8,614.0,ENF,Dfc,2014-2018,10.18160/X57W-HWTE,267.0 From cc356a0714a4bc27dba25179bc91ca0b25ce7b42 Mon Sep 17 00:00:00 2001 From: David LeBauer Date: Fri, 20 Aug 2021 13:53:59 -0700 Subject: [PATCH 2214/2289] add XML:: to functions in gssurgo query This should fix ``` library(PEcAn.data.land) extract_soil_gssurgo('/tmp/', lat = 32.69, lon = -114.53, size=1, radius=500, depths=c(0.15,0.30,0.60)) > Error in xmlTreeParse(reader$value()) : could not find function "xmlTreeParse" ``` --- modules/data.land/R/gSSURGO_Query.R | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/modules/data.land/R/gSSURGO_Query.R b/modules/data.land/R/gSSURGO_Query.R index 4f6e6d2d3a0..649e7402d85 100644 --- a/modules/data.land/R/gSSURGO_Query.R +++ b/modules/data.land/R/gSSURGO_Query.R @@ -31,13 +31,13 @@ gSSURGO.Query <- function(mukeys=2747727, #browser() # , ######### Reteiv soil - headerFields = + headerFields <- c(Accept = "text/xml", Accept = "multipart/*", 'Content-Type' = "text/xml; charset=utf-8", SOAPAction = "http://SDMDataAccess.nrcs.usda.gov/Tabular/SDMTabularService.asmx/RunQuery") - body = paste(' + body <- paste(' @@ -53,14 +53,14 @@ gSSURGO.Query <- function(mukeys=2747727, ') reader <- RCurl::basicTextGatherer() - out<-RCurl::curlPerform(url = "https://SDMDataAccess.nrcs.usda.gov/Tabular/SDMTabularService.asmx", + out <- RCurl::curlPerform(url = "https://SDMDataAccess.nrcs.usda.gov/Tabular/SDMTabularService.asmx", httpheader = headerFields, postfields = body, writefunction = reader$update ) suppressWarnings( suppressMessages({ - xml_doc <- xmlTreeParse(reader$value()) - xmltop <- xmlRoot(xml_doc) + xml_doc <- XML::xmlTreeParse(reader$value()) + xmltop <- XML::xmlRoot(xml_doc) tablesxml <- (xmltop[[1]]["RunQueryResponse"][[1]]["RunQueryResult"][[1]]["diffgram"][[1]]["NewDataSet"][[1]]) }) ) @@ -69,21 +69,21 @@ gSSURGO.Query <- function(mukeys=2747727, tryCatch({ suppressMessages( suppressWarnings({ - tables<-getNodeSet(tablesxml,"//Table") + tables <- XML::getNodeSet(tablesxml,"//Table") ##### All datatables below newdataset # This method leaves out the variables are all NAs - so we can't have a fixed naming scheme for this df - dfs<-tables%>% + dfs <- tables %>% 
purrr::map_dfr(function(child){ #converting the xml obj to list - allfields <- xmlToList(child) - remov<-names(allfields) %in% c(".attrs") + allfields <- XML::xmlToList(child) + remov <- names(allfields) %in% c(".attrs") #browser() - names(allfields)[!remov]%>% + names(allfields)[!remov] %>% purrr::map_dfc(function(nfield){ #browser() - outv <-allfields[[nfield]] %>% unlist() %>% as.numeric - ifelse(length(outv)>0, outv, NA) + outv <- allfields[[nfield]] %>% unlist() %>% as.numeric + ifelse(length(outv) > 0, outv, NA) })%>% as.data.frame() %>% `colnames<-`(names(allfields)[!remov]) From 55e4e09e4b6ffa166abdee45ad7c242c8741a247 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 23 Aug 2021 16:37:25 +0000 Subject: [PATCH 2215/2289] Fixes to model2netcdf fxs for GH actions check --- models/ed/R/model2netcdf.ED2.R | 24 +++++++++++++++++++++++- 1 file changed, 23 insertions(+), 1 deletion(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 7918e66308e..63c948676bc 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -189,6 +189,11 @@ model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, ##' ##' @param yr the year being processed ##' @param yfiles the years on the filenames, will be used to matched tfiles for that year +##' @param tfiles names of T h5 files +##' @param outdir path to run outdir +##' @param start_date start time of simulation +##' @param end_date end time of simulation +##' ##' @export read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){ @@ -859,6 +864,12 @@ put_T_values <- function(yr, nc_var, out, lat, lon, begins, ends, ...){ ##' ##' @param yr the year being processed ##' @param yfiles the years on the filenames, will be used to matched efiles for that year +##' @param efiles names of E h5 files +##' @param outdir path to run outdir +##' @param start_date start time of simulation +##' @param end_date end time of simulation +##' @pfts manually input list of Pecan PFT numbers +##' @settings optional Pecan settings object ##' ##' @export read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, @@ -1025,6 +1036,17 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, ##-------------------------------------------------------------------------------------------------# ##' Function for put -E- values to nc_var list +##' +##' @param yr the year being processed +##' @param nc_var list of .nc files +##' @param out path to run outdir +##' @param lat latitude of site +##' @param lon longitude of site +##' @param begins start time of simulation +##' @param ends end time of simulation +##' @param pfts manually input list of Pecan PFT numbers +##' @param settings Pecan settings object +##' ##' @export put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pfts, settings, ...){ @@ -1058,7 +1080,7 @@ put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pfts, settings ) } } else { - pft_number <- pftmapping$ED[pftmapping$PEcAn == x] + pft_number <- pftmapping$ED[pftmapping$PEcAn == xml_pft$name] } pfts[pft] <- pft_number } From 47de10b6495c914e4fdac214e996af8cad290b33 Mon Sep 17 00:00:00 2001 From: istfer Date: Thu, 26 Aug 2021 13:22:19 +0300 Subject: [PATCH 2216/2289] create ourfolder --- modules/data.atmosphere/R/download.ICOS.R | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/modules/data.atmosphere/R/download.ICOS.R b/modules/data.atmosphere/R/download.ICOS.R index ecacee30e81..853f671209c 100644 --- 
a/modules/data.atmosphere/R/download.ICOS.R +++ b/modules/data.atmosphere/R/download.ICOS.R @@ -27,6 +27,13 @@ download.ICOS <- end_date, product, overwrite = FALSE, ...) { + + # make sure output folder exists + if (!file.exists(outfolder)) { + dir.create(outfolder, showWarnings = FALSE, recursive = TRUE) + } + + download_file_flag <- TRUE extract_file_flag <- TRUE sitename <- sub(".* \\((.*)\\)", "\\1", sitename) From 3672356ad43bc07d6bc740345cfeaefb37f020a8 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Fri, 27 Aug 2021 11:39:13 +0200 Subject: [PATCH 2217/2289] Update base/utils/R/utils.R (Prevents a trivial merge conflict when we merge develop) --- base/utils/R/utils.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 276fc81e662..fbd4bf663ec 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -30,7 +30,7 @@ mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = FALSE) { nc_var <- PEcAn.utils::mstmip_vars[PEcAn.utils::mstmip_vars$Variable.Name == name, ] dims <- list() - + if (nrow(nc_var) == 0) { nc_var <- PEcAn.utils::mstmip_local[PEcAn.utils::mstmip_local$Variable.Name == name, ] if (nrow(nc_var) == 0) { From 7b6a3efbac15b933e3f823b03fdabd7915ce8820 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Fri, 27 Aug 2021 18:34:05 +0530 Subject: [PATCH 2218/2289] Update modules/priors/R/plots.R Co-authored-by: Chris Black --- modules/priors/R/plots.R | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/priors/R/plots.R b/modules/priors/R/plots.R index 92390c23191..fc06cde3548 100644 --- a/modules/priors/R/plots.R +++ b/modules/priors/R/plots.R @@ -102,8 +102,10 @@ priorfig <- function(priordata = NA, priordensity = NA, trait = "", xlim = "auto ##' ##' @param trait character, name of trait to be plotted ##' @param prior named distribution with parameters -##' @param posterior.sample -##' @param trait.df +##' @param posterior.sample samples from posterior distribution +##' whose density should be plotted +##' @param trait.df data to be plotted, in a format accepted by +##' \code{\link[PEcAn.MA]{jagify}} ##' @param fontsize,x.lim,y.lim,logx passed on to ggplot ##' @return plot (grob) object ##' @author David LeBauer From 25f09678d2c8aabdd6f696aae351678090e065d5 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Sat, 28 Aug 2021 08:25:58 +0000 Subject: [PATCH 2219/2289] automated documentation update --- modules/priors/man/plot_trait.Rd | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/modules/priors/man/plot_trait.Rd b/modules/priors/man/plot_trait.Rd index f5459e0944b..3ba950c0720 100644 --- a/modules/priors/man/plot_trait.Rd +++ b/modules/priors/man/plot_trait.Rd @@ -21,6 +21,12 @@ plot_trait( \item{prior}{named distribution with parameters} +\item{posterior.sample}{samples from posterior distribution +whose density should be plotted} + +\item{trait.df}{data to be plotted, in a format accepted by +\code{\link[PEcAn.MA]{jagify}}} + \item{fontsize, x.lim, y.lim, logx}{passed on to ggplot} } \value{ From 460318f667ca6d11e6829a36e8a0f7fcdc7d4033 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 10:48:17 +0200 Subject: [PATCH 2220/2289] whitespace to trigger CI build --- base/utils/R/utils.R | 1 - 1 file changed, 1 deletion(-) diff --git a/base/utils/R/utils.R b/base/utils/R/utils.R index 330ce76ec0b..bf0bc04b763 100644 --- a/base/utils/R/utils.R +++ b/base/utils/R/utils.R @@ -41,7 +41,6 @@ 
mstmipvar <- function(name, lat = NA, lon = NA, time = NA, nsoil = NA, silent = } return(ncdf4::ncvar_def(name, "", list(time), -999, name)) } - for (i in 1:4) { vd <- nc_var[[paste0("dim", i)]] if (vd == "lon" && !is.na(lon)) { From 0fa022bb3d915ba1cba3ccc61b43d11584f1eda2 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 13:13:52 +0200 Subject: [PATCH 2221/2289] remove unused color argument --- base/visualization/R/plots.R | 6 +++--- base/visualization/man/plot_data.Rd | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/base/visualization/R/plots.R b/base/visualization/R/plots.R index 4bf7e2513d2..d2f8558f2e3 100644 --- a/base/visualization/R/plots.R +++ b/base/visualization/R/plots.R @@ -203,13 +203,13 @@ create.base.plot <- function() { ##' ##' Used to add raw data or summary statistics to the plot of a distribution. ##' The height of Y is arbitrary, and can be set to optimize visualization. -##' If SE estimates are available, tehse wil be plotted +##' If SE estimates are available, the se wil be plotted ##' @name plot_data ##' @aliases plot.data ##' @title Add data to plot ##' @param trait.data data to be plotted ##' @param base.plot a ggplot object (grob), -##' created by \code{\link{create.base.plot}} if none provided +##' created by \code{\link{create.base.plot}} if none provided ##' @param ymax maximum height of y ##' @seealso \code{\link{create.base.plot}} ##' @return updated plot object @@ -217,7 +217,7 @@ create.base.plot <- function() { ##' @export plot_data ##' @examples ##' \dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)} -plot_data <- function(trait.data, base.plot = NULL, ymax, color = "black") { +plot_data <- function(trait.data, base.plot = NULL, ymax) { need_packages("ggplot2") if (is.null(base.plot)) { diff --git a/base/visualization/man/plot_data.Rd b/base/visualization/man/plot_data.Rd index 77244f9e858..a722e4ff3ab 100644 --- a/base/visualization/man/plot_data.Rd +++ b/base/visualization/man/plot_data.Rd @@ -5,7 +5,7 @@ \alias{plot.data} \title{Add data to plot} \usage{ -plot_data(trait.data, base.plot = NULL, ymax, color = "black") +plot_data(trait.data, base.plot = NULL, ymax) } \arguments{ \item{trait.data}{data to be plotted} @@ -24,7 +24,7 @@ Add data to an existing plot or create a new one from \code{\link{create.base.pl \details{ Used to add raw data or summary statistics to the plot of a distribution. The height of Y is arbitrary, and can be set to optimize visualization. -If SE estimates are available, tehse wil be plotted +If SE estimates are available, the se wil be plotted } \examples{ \dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)} From f1288bac48c6f7b1d3347f56aeea5f71f0ddd845 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 13:15:13 +0200 Subject: [PATCH 2222/2289] add missing argument descriptions --- base/visualization/R/plots.R | 17 ++++++++++------- base/visualization/man/theme_border.Rd | 17 ++++++++++------- 2 files changed, 20 insertions(+), 14 deletions(-) diff --git a/base/visualization/R/plots.R b/base/visualization/R/plots.R index d2f8558f2e3..c188e0b50b5 100644 --- a/base/visualization/R/plots.R +++ b/base/visualization/R/plots.R @@ -256,15 +256,18 @@ plot_data <- function(trait.data, base.plot = NULL, ymax) { ##' Add borders to plot ##' ##' Has ggplot2 display only specified borders, e.g. ('L'-shaped) borders, -##' rather than a rectangle or no border. 
Note that the order can be significant;
-##' for example, if you specify the L border option and then a theme, the theme settings
-##' will override the border option, so you need to specify the theme (if any) before the border option, as above.
+##' rather than a rectangle or no border.
+##' Note that the order can be significant;
+##' for example, if you specify the L border option and then a theme,
+##' the theme settings will override the border option,
+##' so you need to specify the theme (if any) before the border option,
+##' as above.
 ##' @name theme_border
 ##' @title Theme border for plot
-##' @param type
-##' @param colour
-##' @param size
-##' @param linetype
+##' @param type border(s) to display
+##' @param colour what colo(u)r should the border be
+##' @param size relative line thickness
+##' @param linetype "solid", "dashed", etc
 ##' @return adds borders to ggplot as a side effect
 ##' @author Rudolf Cardinal
 ##' @author \url{ggplot2 google group}{https://groups.google.com/forum/?fromgroups#!topic/ggplot2/-ZjRE2OL8lE}
diff --git a/base/visualization/man/theme_border.Rd b/base/visualization/man/theme_border.Rd
index c69f43538b1..39027f06022 100644
--- a/base/visualization/man/theme_border.Rd
+++ b/base/visualization/man/theme_border.Rd
@@ -12,13 +12,13 @@ theme_border(
 )
 }
 \arguments{
-\item{type}{}
+\item{type}{border(s) to display}
 
-\item{colour}{}
+\item{colour}{what colo(u)r should the border be}
 
-\item{size}{}
+\item{size}{relative line thickness}
 
-\item{linetype}{}
+\item{linetype}{"solid", "dashed", etc}
 }
 \value{
 adds borders to ggplot as a side effect
@@ -28,9 +28,12 @@ Add borders to plot
 }
 \details{
 Has ggplot2 display only specified borders, e.g. ('L'-shaped) borders,
-rather than a rectangle or no border. Note that the order can be significant;
-for example, if you specify the L border option and then a theme, the theme settings
-will override the border option, so you need to specify the theme (if any) before the border option, as above.
+rather than a rectangle or no border.
+Note that the order can be significant;
+for example, if you specify the L border option and then a theme,
+the theme settings will override the border option,
+so you need to specify the theme (if any) before the border option,
+as above.
 }
 \examples{
 \dontrun{

From 9712c6d94426467c21462b5faea59858a16ed35a Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sat, 28 Aug 2021 13:16:18 +0200
Subject: [PATCH 2223/2289] percent escaping rules change when using markdown

---
 base/visualization/R/visually.weighted.watercolor.plots.R | 2 +-
 base/visualization/man/vwReg.Rd                           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/base/visualization/R/visually.weighted.watercolor.plots.R b/base/visualization/R/visually.weighted.watercolor.plots.R
index b0bfbf1938c..d224e8b2ab5 100644
--- a/base/visualization/R/visually.weighted.watercolor.plots.R
+++ b/base/visualization/R/visually.weighted.watercolor.plots.R
@@ -53,7 +53,7 @@
 ##' @param spag.color color of spaghetti lines
 ##' @param mweight should the median smoother be visually weighted?
 ##' @param show.lm should the linear regression line be plotted?
-##' @param show.CI should the 95\% CI limits be plotted?
+##' @param show.CI should the 95% CI limits be plotted?
 ##' @param show.median should the median smoother be plotted?
##' @param median.col color of the median smoother ##' @param shape shape of points diff --git a/base/visualization/man/vwReg.Rd b/base/visualization/man/vwReg.Rd index eaf8888b6fc..c7d9649899a 100644 --- a/base/visualization/man/vwReg.Rd +++ b/base/visualization/man/vwReg.Rd @@ -56,7 +56,7 @@ vwReg( \item{shape}{shape of points} -\item{show.CI}{should the 95\\% CI limits be plotted?} +\item{show.CI}{should the 95\% CI limits be plotted?} \item{method}{the fitting function for the spaghettis; default: loess} From f8b65b1cde1c8eb0c83128e0ec81cd5acd2d065d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 13:28:17 +0200 Subject: [PATCH 2224/2289] undefined globals warning --- base/visualization/DESCRIPTION | 1 + base/visualization/NAMESPACE | 1 + base/visualization/R/plots.R | 15 +++++++++++---- 3 files changed, 13 insertions(+), 4 deletions(-) diff --git a/base/visualization/DESCRIPTION b/base/visualization/DESCRIPTION index 8a16caecbab..1e8e4e4bf05 100644 --- a/base/visualization/DESCRIPTION +++ b/base/visualization/DESCRIPTION @@ -33,6 +33,7 @@ Imports: plyr (>= 1.8.4), RCurl, reshape2, + rlang, stringr(>= 1.1.0) Suggests: grid, diff --git a/base/visualization/NAMESPACE b/base/visualization/NAMESPACE index 515e6e71b26..fc03ab08d2a 100644 --- a/base/visualization/NAMESPACE +++ b/base/visualization/NAMESPACE @@ -7,3 +7,4 @@ export(map.output) export(plot_data) export(plot_netcdf) export(vwReg) +importFrom(rlang,.data) diff --git a/base/visualization/R/plots.R b/base/visualization/R/plots.R index c188e0b50b5..8c65b38d650 100644 --- a/base/visualization/R/plots.R +++ b/base/visualization/R/plots.R @@ -214,7 +214,8 @@ create.base.plot <- function() { ##' @seealso \code{\link{create.base.plot}} ##' @return updated plot object ##' @author David LeBauer -##' @export plot_data +##' @export +##' @importFrom rlang .data ##' @examples ##' \dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)} plot_data <- function(trait.data, base.plot = NULL, ymax) { @@ -243,9 +244,15 @@ plot_data <- function(trait.data, base.plot = NULL, ymax) { se = trait.data$se, control = !trait.data$trt == 1 & trait.data$ghs == 1) new.plot <- base.plot + - ggplot2::geom_point(data = plot.data, ggplot2::aes(x = x, y = y, color = control)) + - ggplot2::geom_segment(data = plot.data, - ggplot2::aes(x = x - se, y = y, xend = x + se, yend = y, color = control)) + + ggplot2::geom_point( + data = plot.data, + ggplot2::aes(x = .data$x, y = .data$y, color = .data$control)) + + ggplot2::geom_segment( + data = plot.data, + ggplot2::aes( + x = .data$x - .data$se, y = .data$y, + xend = .data$x + .data$se, yend = .data$y, + color = .data$control)) + ggplot2::scale_color_manual(values = c("black", "grey")) + ggplot2::theme(legend.position = "none") return(new.plot) From 42bc8589a970df8cc2f63f87ceaf74a246e50f54 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 13:29:06 +0200 Subject: [PATCH 2225/2289] ggplot2 is in Imports, can assume installed --- base/visualization/R/plots.R | 2 -- 1 file changed, 2 deletions(-) diff --git a/base/visualization/R/plots.R b/base/visualization/R/plots.R index 8c65b38d650..082d6a0b6d1 100644 --- a/base/visualization/R/plots.R +++ b/base/visualization/R/plots.R @@ -192,7 +192,6 @@ iqr <- function(x) { ##' @export ##' @author David LeBauer create.base.plot <- function() { - need_packages("ggplot2") base.plot <- ggplot2::ggplot() return(base.plot) } # create.base.plot @@ -219,7 +218,6 @@ create.base.plot <- function() { ##' @examples ##' 
\dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)} plot_data <- function(trait.data, base.plot = NULL, ymax) { - need_packages("ggplot2") if (is.null(base.plot)) { base.plot <- create.base.plot() From 378732a3673d018f947aaf25832a24c86920578a Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 22:46:08 +0200 Subject: [PATCH 2226/2289] missed rename of plot_data call --- Makefile.depends | 2 +- modules/priors/DESCRIPTION | 1 + modules/priors/R/plots.R | 8 +++++++- 3 files changed, 9 insertions(+), 2 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index 035af3ba0fc..32ec9eaa4f9 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -19,7 +19,7 @@ $(call depends,modules/data.remote): | .install/base/db .install/base/utils .ins $(call depends,modules/emulator): | .install/base/logger $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .install/base/logger .install/base/settings $(call depends,modules/photosynthesis): | -$(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis +$(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis .install/base/visualization $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger $(call depends,models/basgra): | .install/base/logger .install/modules/data.atmosphere .install/base/utils diff --git a/modules/priors/DESCRIPTION b/modules/priors/DESCRIPTION index e4585adc10b..00adc4fe2ac 100644 --- a/modules/priors/DESCRIPTION +++ b/modules/priors/DESCRIPTION @@ -19,6 +19,7 @@ Imports: ggplot2, MASS Suggests: + PEcAn.visualization, testthat Encoding: UTF-8 RoxygenNote: 7.0.2 diff --git a/modules/priors/R/plots.R b/modules/priors/R/plots.R index fc06cde3548..40678b09562 100644 --- a/modules/priors/R/plots.R +++ b/modules/priors/R/plots.R @@ -132,6 +132,12 @@ plot_trait <- function(trait, y.lim = NULL, logx = FALSE) { + if (!requireNamespace("PEcAn.visualization", quietly = TRUE)) { + PEcAn.logger::logger.severe( + "plot_trait requires package `PEcAn.visualization`,", + "but it is not installed. 
Please install it and try again.") + } + ## Determine plot components plot_posterior <- !is.null(posterior.sample) plot_prior <- !is.null(prior) @@ -178,7 +184,7 @@ plot_trait <- function(trait, base.plot <- plot_posterior.density(posterior.density, base.plot = base.plot) } if (plot_data) { - base.plot <- PEcAn.utils::plot_data(trait.df, base.plot = base.plot, ymax = y.lim[2]) + base.plot <- PEcAn.visualization::plot_data(trait.df, base.plot = base.plot, ymax = y.lim[2]) } trait.plot <- base.plot + From 9c1f5ae47a3c8c57a292f047870c4fab7bc0587e Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 28 Aug 2021 22:50:07 +0200 Subject: [PATCH 2227/2289] delete create.base.plot, replace with bare ggplot() call --- base/visualization/NAMESPACE | 1 - base/visualization/R/plots.R | 22 +++----------------- base/visualization/man/create.base.plot.Rd | 22 -------------------- base/visualization/man/plot_data.Rd | 7 ++----- modules/priors/R/plots.R | 10 ++++----- modules/priors/man/plot_posterior.density.Rd | 2 +- modules/priors/man/plot_prior.density.Rd | 2 +- 7 files changed, 12 insertions(+), 54 deletions(-) delete mode 100644 base/visualization/man/create.base.plot.Rd diff --git a/base/visualization/NAMESPACE b/base/visualization/NAMESPACE index fc03ab08d2a..4ae1df406d6 100644 --- a/base/visualization/NAMESPACE +++ b/base/visualization/NAMESPACE @@ -2,7 +2,6 @@ export(add_icon) export(ciEnvelope) -export(create.base.plot) export(map.output) export(plot_data) export(plot_netcdf) diff --git a/base/visualization/R/plots.R b/base/visualization/R/plots.R index 082d6a0b6d1..bf6aa9a8b89 100644 --- a/base/visualization/R/plots.R +++ b/base/visualization/R/plots.R @@ -181,24 +181,9 @@ iqr <- function(x) { } # iqr -##' Creates empty ggplot object -##' -##' An empty base plot to which layers created by other functions -##' (\code{\link{plot_data}}, \code{\link[PEcAn.priors]{plot_prior.density}}, -##' \code{\link[PEcAn.priors]{plot_posterior.density}}) can be added. -##' @name create.base.plot -##' @title Create Base Plot -##' @return empty ggplot object -##' @export -##' @author David LeBauer -create.base.plot <- function() { - base.plot <- ggplot2::ggplot() - return(base.plot) -} # create.base.plot - ##--------------------------------------------------------------------------------------------------# -##' Add data to an existing plot or create a new one from \code{\link{create.base.plot}} +##' Add data to an existing plot or create a new one ##' ##' Used to add raw data or summary statistics to the plot of a distribution. ##' The height of Y is arbitrary, and can be set to optimize visualization. 
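
The helper deleted in the hunk above was a one-line wrapper around `ggplot2::ggplot()`, so callers can now build the empty base plot inline and add layers to it. A minimal sketch of the replacement pattern follows; the caller and the density data are invented for illustration and are not part of the diff:

```
library(ggplot2)

# before: base.plot <- PEcAn.visualization::create.base.plot()
# after:  construct the empty plot object directly
base.plot <- ggplot()

# add a density layer the same way plot_prior.density() does,
# using the .data pronoun introduced earlier in this series
dens <- data.frame(x = seq(-3, 3, 0.1), y = dnorm(seq(-3, 3, 0.1)))
p <- base.plot + geom_line(data = dens, aes(x = .data$x, y = .data$y))
```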
@@ -208,9 +193,8 @@ create.base.plot <- function() { ##' @title Add data to plot ##' @param trait.data data to be plotted ##' @param base.plot a ggplot object (grob), -##' created by \code{\link{create.base.plot}} if none provided +##' created if none provided ##' @param ymax maximum height of y -##' @seealso \code{\link{create.base.plot}} ##' @return updated plot object ##' @author David LeBauer ##' @export @@ -220,7 +204,7 @@ create.base.plot <- function() { plot_data <- function(trait.data, base.plot = NULL, ymax) { if (is.null(base.plot)) { - base.plot <- create.base.plot() + base.plot <- ggplot2::ggplot() } n.pts <- nrow(trait.data) diff --git a/base/visualization/man/create.base.plot.Rd b/base/visualization/man/create.base.plot.Rd deleted file mode 100644 index 6907bdd09dc..00000000000 --- a/base/visualization/man/create.base.plot.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/plots.R -\name{create.base.plot} -\alias{create.base.plot} -\title{Create Base Plot} -\usage{ -create.base.plot() -} -\value{ -empty ggplot object -} -\description{ -Creates empty ggplot object -} -\details{ -An empty base plot to which layers created by other functions -(\code{\link{plot_data}}, \code{\link[PEcAn.priors]{plot_prior.density}}, -\code{\link[PEcAn.priors]{plot_posterior.density}}) can be added. -} -\author{ -David LeBauer -} diff --git a/base/visualization/man/plot_data.Rd b/base/visualization/man/plot_data.Rd index a722e4ff3ab..6738c18b789 100644 --- a/base/visualization/man/plot_data.Rd +++ b/base/visualization/man/plot_data.Rd @@ -11,7 +11,7 @@ plot_data(trait.data, base.plot = NULL, ymax) \item{trait.data}{data to be plotted} \item{base.plot}{a ggplot object (grob), -created by \code{\link{create.base.plot}} if none provided} +created if none provided} \item{ymax}{maximum height of y} } @@ -19,7 +19,7 @@ created by \code{\link{create.base.plot}} if none provided} updated plot object } \description{ -Add data to an existing plot or create a new one from \code{\link{create.base.plot}} +Add data to an existing plot or create a new one } \details{ Used to add raw data or summary statistics to the plot of a distribution. 
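
To make the documented interface concrete, a hedged usage sketch of `plot_data()` under its post-change signature; the column names `Y`, `se`, `trt`, and `ghs` follow the function body shown earlier in this series, and the values themselves are invented:

```
library(ggplot2)

trait.df <- data.frame(
  Y   = c(1.0, 2.0),  # trait observations, drawn as points
  se  = c(0.2, 0.4),  # standard errors, drawn as horizontal segments
  trt = c(1, 1),      # treatment and greenhouse flags feeding the
  ghs = c(1, 1)       #   control vs. non-control color mapping
)
p <- PEcAn.visualization::plot_data(trait.df, base.plot = ggplot(), ymax = 10)
```

Passing `base.plot = NULL` (the default) would have the same effect, since the function now falls back to a bare `ggplot()` internally.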
@@ -29,9 +29,6 @@ If SE estimates are available, the SE will be plotted
 \examples{
 \dontrun{plot_data(data.frame(Y = c(1, 2), se = c(1,2)), base.plot = NULL, ymax = 10)}
 }
-\seealso{
-\code{\link{create.base.plot}}
-}
 \author{
 David LeBauer
 }
diff --git a/modules/priors/R/plots.R b/modules/priors/R/plots.R
index 40678b09562..6146b6a5e40 100644
--- a/modules/priors/R/plots.R
+++ b/modules/priors/R/plots.R
@@ -2,7 +2,7 @@
 ##' Plots a prior density from a parameterized probability distribution
 ##'
 ##' @param prior.density data frame containing columns x and y
-##' @param base.plot a ggplot object (grob), created by \code{\link[PEcAn.utils]{create.base.plot}} if none provided
+##' @param base.plot a ggplot object (grob), created if none provided
 ##' @param prior.color color of line to be plotted
 ##' @return plot with prior density added
 ##' @seealso \code{\link{pr.dens}}
@@ -15,7 +15,7 @@
 ##' }
 plot_prior.density <- function(prior.density, base.plot = NULL, prior.color = "black") {
   if (is.null(base.plot)) {
-    base.plot <- PEcAn.visualization::create.base.plot()
+    base.plot <- ggplot2::ggplot()
   }
   new.plot <- base.plot + geom_line(data = prior.density, aes(x = x, y = y), color = prior.color)
   return(new.plot)
@@ -26,7 +26,7 @@ plot_prior.density <- function(prior.density, base.plot = NULL, prior.color = "black") {
 ##' Add posterior density to a plot
 ##'
 ##' @param posterior.density data frame containing columns x and y
-##' @param base.plot a ggplot object (grob), created by \code{\link[PEcAn.utils]{create.base.plot}} if none provided
+##' @param base.plot a ggplot object (grob), created if none provided
 ##' @return plot with posterior density line added
 ##' @aliases plot.posterior.density
 ##' @export
 ##' @author David LeBauer
 plot_posterior.density <- function(posterior.density, base.plot = NULL) {
   if (is.null(base.plot)) {
-    base.plot <- PEcAn.visualization::create.base.plot()
+    base.plot <- ggplot2::ggplot()
   }
   new.plot <- base.plot + geom_line(data = posterior.density, aes(x = x, y = y))
   return(new.plot)
@@ -176,7 +176,7 @@ plot_trait <- function(trait,
 
   x.ticks <- pretty(c(0, x.lim[2]))
 
-  base.plot <- PEcAn.visualization::create.base.plot() + theme_bw()
+  base.plot <- ggplot2::ggplot() + theme_bw()
   if (plot_prior) {
     base.plot <- plot_prior.density(prior.density, base.plot = base.plot, prior.color = prior.color)
   }
diff --git a/modules/priors/man/plot_posterior.density.Rd b/modules/priors/man/plot_posterior.density.Rd
index 5e6aaccf7f1..baad7ef7beb 100644
--- a/modules/priors/man/plot_posterior.density.Rd
+++ b/modules/priors/man/plot_posterior.density.Rd
@@ -10,7 +10,7 @@ plot_posterior.density(posterior.density, base.plot = NULL)
 \arguments{
 \item{posterior.density}{data frame containing columns x and y}
 
-\item{base.plot}{a ggplot object (grob), created by \code{\link[PEcAn.utils]{create.base.plot}} if none provided}
+\item{base.plot}{a ggplot object (grob), created if none provided}
 }
 \value{
 plot with posterior density line added
diff --git a/modules/priors/man/plot_prior.density.Rd b/modules/priors/man/plot_prior.density.Rd
index 8fe85c1a612..cc1e841c1f3 100644
--- a/modules/priors/man/plot_prior.density.Rd
+++ b/modules/priors/man/plot_prior.density.Rd
@@ -10,7 +10,7 @@ plot_prior.density(prior.density, base.plot = NULL, prior.color = "black")
 \arguments{
 \item{prior.density}{data frame containing columns x and y}
 
-\item{base.plot}{a ggplot object (grob), created by
\code{\link[PEcAn.utils]{create.base.plot}} if none provided} +\item{base.plot}{a ggplot object (grob), created if none provided} \item{prior.color}{color of line to be plotted} } From 4320216678abe3279adccafb9cd104381cc89c79 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 29 Aug 2021 16:47:02 +0200 Subject: [PATCH 2228/2289] no idea why this is newly failing in this branch ...But the fix is easy, bc data.atm already has its own copy of need_packages --- modules/data.atmosphere/R/download.ERA5.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.ERA5.R b/modules/data.atmosphere/R/download.ERA5.R index a1c11fa4d7e..59d1d134a5f 100644 --- a/modules/data.atmosphere/R/download.ERA5.R +++ b/modules/data.atmosphere/R/download.ERA5.R @@ -51,7 +51,7 @@ download.ERA5.old <- function(outfolder, start_date, end_date, lat.in, lon.in, "This function is an incomplete prototype! Use with caution!" ) - PEcAn.utils:::need_packages("reticulate") + need_packages("reticulate") if (!is.null(reticulate_python)) { reticulate::use_python(reticulate_python) From e253874684bd29710648c3b23195a726c3890b97 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Aug 2021 11:53:15 +0200 Subject: [PATCH 2229/2289] get.results imports settings --- modules/uncertainty/DESCRIPTION | 1 + 1 file changed, 1 insertion(+) diff --git a/modules/uncertainty/DESCRIPTION b/modules/uncertainty/DESCRIPTION index 7095b1c06c8..bc8a8ceda73 100644 --- a/modules/uncertainty/DESCRIPTION +++ b/modules/uncertainty/DESCRIPTION @@ -33,6 +33,7 @@ Imports: PEcAn.DB, PEcAn.emulator, PEcAn.logger, + PEcAn.settings, plyr (>= 1.8.4), purrr, randtoolbox, From e9da48b9a0ad801fbd876b00607fcc23c4214be1 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Aug 2021 12:35:36 +0200 Subject: [PATCH 2230/2289] move sensitivity.filename and ensemble.filename to uncertainty --- base/utils/NAMESPACE | 2 -- base/workflow/R/run.write.configs.R | 4 ++-- modules/uncertainty/NAMESPACE | 2 ++ .../uncertainty}/R/get.analysis.filenames.r | 0 modules/uncertainty/R/get.results.R | 18 +++++++++++++----- modules/uncertainty/R/run.ensemble.analysis.R | 6 +++--- .../uncertainty}/man/ensemble.filename.Rd | 12 ++++++------ modules/uncertainty/man/get.results.Rd | 17 ++++++++++++----- .../uncertainty/man/runModule.get.results.Rd | 17 +++++++++++++++++ .../uncertainty}/man/sensitivity.filename.Rd | 0 10 files changed, 55 insertions(+), 23 deletions(-) rename {base/utils => modules/uncertainty}/R/get.analysis.filenames.r (100%) rename {base/utils => modules/uncertainty}/man/ensemble.filename.Rd (77%) create mode 100644 modules/uncertainty/man/runModule.get.results.Rd rename {base/utils => modules/uncertainty}/man/sensitivity.filename.Rd (100%) diff --git a/base/utils/NAMESPACE b/base/utils/NAMESPACE index d227376cddf..c4fa7fd0d9e 100644 --- a/base/utils/NAMESPACE +++ b/base/utils/NAMESPACE @@ -15,7 +15,6 @@ export(distn.stats) export(distn.table.stats) export(download.file) export(download.url) -export(ensemble.filename) export(full.path) export(get.ensemble.inputs) export(get.model.output) @@ -43,7 +42,6 @@ export(retry.func) export(rsync) export(seconds_in_year) export(sendmail) -export(sensitivity.filename) export(ssh) export(status.check) export(status.end) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 4da4faa581e..0bc746bdd33 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -112,7 +112,7 @@ run.write.configs <- 
function(settings, write = TRUE, ens.sample.method = "unifo settings$sensitivity.analysis$ensemble.id <- sa.ensemble.id <- sa.runs$ensemble.id # Save sensitivity analysis info - fname <- PEcAn.utils::sensitivity.filename(settings, "sensitivity.samples", "Rdata", + fname <- PEcAn.uncertainty::sensitivity.filename(settings, "sensitivity.samples", "Rdata", all.var.yr = TRUE, pft = NULL) save(sa.run.ids, sa.ensemble.id, sa.samples, pft.names, trait.names, file = fname) @@ -132,7 +132,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo ens.samples <- ensemble.samples # rename just for consistency # Save ensemble analysis info - fname <- PEcAn.utils::ensemble.filename(settings, "ensemble.samples", "Rdata", all.var.yr = TRUE) + fname <- PEcAn.uncertainty::ensemble.filename(settings, "ensemble.samples", "Rdata", all.var.yr = TRUE) save(ens.run.ids, ens.ensemble.id, ens.samples, pft.names, trait.names, file = fname) } else { PEcAn.logger::logger.info("not writing config files for ensemble, settings are NULL") diff --git a/modules/uncertainty/NAMESPACE b/modules/uncertainty/NAMESPACE index ddbcd133a30..cfc1bd56af3 100644 --- a/modules/uncertainty/NAMESPACE +++ b/modules/uncertainty/NAMESPACE @@ -1,5 +1,6 @@ # Generated by roxygen2: do not edit by hand +export(ensemble.filename) export(ensemble.ts) export(flux.uncertainty) export(get.change) @@ -26,6 +27,7 @@ export(runModule.run.sensitivity.analysis) export(sa.splinefun) export(sd.var) export(sensitivity.analysis) +export(sensitivity.filename) export(spline.truncate) export(write.ensemble.configs) importFrom(dplyr,"%>%") diff --git a/base/utils/R/get.analysis.filenames.r b/modules/uncertainty/R/get.analysis.filenames.r similarity index 100% rename from base/utils/R/get.analysis.filenames.r rename to modules/uncertainty/R/get.analysis.filenames.r diff --git a/modules/uncertainty/R/get.results.R b/modules/uncertainty/R/get.results.R index c78d01a9e39..57bf3859e8f 100644 --- a/modules/uncertainty/R/get.results.R +++ b/modules/uncertainty/R/get.results.R @@ -9,11 +9,16 @@ ##' Reads model output and runs sensitivity and ensemble analyses ##' -##' Output is placed in model output directory (settings$modeloutdir). -##' @name get.results -##' @title Generate model output for PEcAn analyses +##' Output is placed in model output directory (settings$outdir). ##' @export ##' @param settings list, read from settings file (xml) using \code{\link{read.settings}} +##' @param sa.ensemble.id,ens.ensemble.id ensemble IDs for the sensitivity +##' analysis and ensemble analysis. 
+##' If not provided, they are first looked up from `settings`, +##' then if not found they are not used and the most recent set of results +##' is read from \code{samples.Rdata} in directory \code{settings$outdir} +##' @param variable variables to retrieve, as vector of names or expressions +##' @param start.year,end.year first and last years to retrieve ##' @author David LeBauer, Shawn Serbin, Mike Dietze, Ryan Kelly get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, variable = NULL, start.year = NULL, end.year = NULL) { @@ -229,8 +234,11 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, } } # get.results - -##' @export +#' Apply get.results to each of a list of settings +#' +#' @param settings a PEcAn \code{Settings} or \code{MultiSettings} object +#' @seealso get.results +#' @export runModule.get.results <- function(settings) { if (PEcAn.settings::is.MultiSettings(settings)) { return(PEcAn.settings::papply(settings, runModule.get.results)) diff --git a/modules/uncertainty/R/run.ensemble.analysis.R b/modules/uncertainty/R/run.ensemble.analysis.R index d245fb3bdc2..8ddf74d4cef 100644 --- a/modules/uncertainty/R/run.ensemble.analysis.R +++ b/modules/uncertainty/R/run.ensemble.analysis.R @@ -207,12 +207,12 @@ read.ensemble.ts <- function(settings, ensemble.id = NULL, variable = NULL, ### compatibility still contains the sample info for (the most recent) sensitivity ### and ensemble analysis combined. if (!is.null(ensemble.id)) { - fname <- PEcAn.utils::ensemble.filename(settings, "ensemble.samples", "Rdata", + fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", ensemble.id = ensemble.id, all.var.yr = TRUE) } else if (!is.null(settings$ensemble$ensemble.id)) { ensemble.id <- settings$ensemble$ensemble.id - fname <- PEcAn.utils::ensemble.filename(settings, "ensemble.samples", "Rdata", + fname <- ensemble.filename(settings, "ensemble.samples", "Rdata", ensemble.id = ensemble.id, all.var.yr = TRUE) } else { @@ -272,7 +272,7 @@ read.ensemble.ts <- function(settings, ensemble.id = NULL, variable = NULL, names(ensemble.ts) <- variable.fn # BMR 10/16/13 Save this variable now to operate later on - fname <- PEcAn.utils::ensemble.filename(settings, "ensemble.ts", "Rdata", + fname <- ensemble.filename(settings, "ensemble.ts", "Rdata", all.var.yr = FALSE, ensemble.id = ensemble.id, variable = variable, diff --git a/base/utils/man/ensemble.filename.Rd b/modules/uncertainty/man/ensemble.filename.Rd similarity index 77% rename from base/utils/man/ensemble.filename.Rd rename to modules/uncertainty/man/ensemble.filename.Rd index 988c514ef3a..83bbad2f201 100644 --- a/base/utils/man/ensemble.filename.Rd +++ b/modules/uncertainty/man/ensemble.filename.Rd @@ -33,21 +33,21 @@ If FALSE, filename will include years and vars} } \value{ a vector of filenames, each in the form -\verb{[settings$outdir]/[prefix].[ensemble.ID].[variable].[start.year].[end.year][suffix]}. + `[settings$outdir]/[prefix].[ensemble.ID].[variable].[start.year].[end.year][suffix]`. } \description{ Generates a vector of filenames to be used for PEcAn ensemble output files. -All paths start from directory \code{settings$outdir}, +All paths start from directory `settings$outdir`, which will be created if it does not exist. } \details{ Typically used by passing only a settings object, -but all values can be overridden for manual use. + but all values can be overridden for manual use. 
If only a single variable or a subset of years are needed, -the generated filename will identify these in the form -If all vars and years are included, set \code{all.yr.var} to TRUE -to get a filename of the form \code{prefix.ensemble_id.suffix}. + the generated filename will identify these in the form +If all vars and years are included, set `all.yr.var` to TRUE + to get a filename of the form `prefix.ensemble_id.suffix`. All elements are recycled vectorwise. } \author{ diff --git a/modules/uncertainty/man/get.results.Rd b/modules/uncertainty/man/get.results.Rd index 96e99e016fe..c7275e7f32a 100644 --- a/modules/uncertainty/man/get.results.Rd +++ b/modules/uncertainty/man/get.results.Rd @@ -2,7 +2,7 @@ % Please edit documentation in R/get.results.R \name{get.results} \alias{get.results} -\title{Generate model output for PEcAn analyses} +\title{Reads model output and runs sensitivity and ensemble analyses} \usage{ get.results( settings, @@ -15,12 +15,19 @@ get.results( } \arguments{ \item{settings}{list, read from settings file (xml) using \code{\link{read.settings}}} + +\item{sa.ensemble.id, ens.ensemble.id}{ensemble IDs for the sensitivity +analysis and ensemble analysis. +If not provided, they are first looked up from `settings`, +then if not found they are not used and the most recent set of results +is read from \code{samples.Rdata} in directory \code{settings$outdir}} + +\item{variable}{variables to retrieve, as vector of names or expressions} + +\item{start.year, end.year}{first and last years to retrieve} } \description{ -Reads model output and runs sensitivity and ensemble analyses -} -\details{ -Output is placed in model output directory (settings$modeloutdir). +Output is placed in model output directory (settings$outdir). } \author{ David LeBauer, Shawn Serbin, Mike Dietze, Ryan Kelly diff --git a/modules/uncertainty/man/runModule.get.results.Rd b/modules/uncertainty/man/runModule.get.results.Rd new file mode 100644 index 00000000000..cd5f741932d --- /dev/null +++ b/modules/uncertainty/man/runModule.get.results.Rd @@ -0,0 +1,17 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/get.results.R +\name{runModule.get.results} +\alias{runModule.get.results} +\title{Apply get.results to each of a list of settings} +\usage{ +runModule.get.results(settings) +} +\arguments{ +\item{settings}{a PEcAn \code{Settings} or \code{MultiSettings} object} +} +\description{ +Apply get.results to each of a list of settings +} +\seealso{ +get.results +} diff --git a/base/utils/man/sensitivity.filename.Rd b/modules/uncertainty/man/sensitivity.filename.Rd similarity index 100% rename from base/utils/man/sensitivity.filename.Rd rename to modules/uncertainty/man/sensitivity.filename.Rd From a7ae29307b7694cda7abdc9eeaacd5d446c7b177 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Aug 2021 12:53:59 +0200 Subject: [PATCH 2231/2289] add namespaces to utils functions in moved code --- modules/uncertainty/R/get.results.R | 25 ++++++++++--------- modules/uncertainty/R/run.ensemble.analysis.R | 2 +- .../uncertainty/R/run.sensitivity.analysis.R | 2 +- 3 files changed, 15 insertions(+), 14 deletions(-) diff --git a/modules/uncertainty/R/get.results.R b/modules/uncertainty/R/get.results.R index 57bf3859e8f..58b6e00b7f2 100644 --- a/modules/uncertainty/R/get.results.R +++ b/modules/uncertainty/R/get.results.R @@ -98,7 +98,7 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, # if an expression is provided, convert.expr returns names of the 
variables accordingly # if a derivation is not requested it returns the variable name as is - variables <- convert.expr(unlist(variable.sa)) + variables <- PEcAn.utils::convert.expr(unlist(variable.sa)) variable.sa <- variables$variable.eqn variable.fn <- variables$variable.drv @@ -109,16 +109,17 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, # when there is variable-per pft in the outputs, check for the tag for deciding SA per pft per.pft <- ifelse(!is.null(settings$sensitivity.analysis$perpft), as.logical(settings$sensitivity.analysis$perpft), FALSE) - sensitivity.output[[pft.name]] <- read.sa.output(traits = traits, - quantiles = quantiles, - pecandir = outdir, - outdir = settings$modeloutdir, - pft.name = pft.name, - start.year = start.year.sa, - end.year = end.year.sa, - variable = variable.sa, - sa.run.ids = sa.run.ids, - per.pft = per.pft) + sensitivity.output[[pft.name]] <- PEcAn.utils::read.sa.output( + traits = traits, + quantiles = quantiles, + pecandir = outdir, + outdir = settings$modeloutdir, + pft.name = pft.name, + start.year = start.year.sa, + end.year = end.year.sa, + variable = variable.sa, + sa.run.ids = sa.run.ids, + per.pft = per.pft) } # Save sensitivity output @@ -207,7 +208,7 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, # if an expression is provided, convert.expr returns names of the variables accordingly # if a derivation is not requested it returns the variable name as is - variables <- convert.expr(variable.ens) + variables <- PEcAn.utils::convert.expr(variable.ens) variable.ens <- variables$variable.eqn variable.fn <- variables$variable.drv diff --git a/modules/uncertainty/R/run.ensemble.analysis.R b/modules/uncertainty/R/run.ensemble.analysis.R index 8ddf74d4cef..453232df8dd 100644 --- a/modules/uncertainty/R/run.ensemble.analysis.R +++ b/modules/uncertainty/R/run.ensemble.analysis.R @@ -66,7 +66,7 @@ run.ensemble.analysis <- function(settings, plot.timeseries = NA, ensemble.id = cflux <- c("GPP", "NPP", "NEE", "TotalResp", "AutoResp", "HeteroResp", "DOC_flux", "Fire_flux") #converted to gC/m2/s wflux <- c("Evap", "TVeg", "Qs", "Qsb", "Rainf") #kgH20 m-2 s-1 - variables <- convert.expr(variable) + variables <- PEcAn.utils::convert.expr(variable) variable.ens <- variables$variable.eqn variable.fn <- variables$variable.drv diff --git a/modules/uncertainty/R/run.sensitivity.analysis.R b/modules/uncertainty/R/run.sensitivity.analysis.R index 6d796a18149..ea57be97944 100644 --- a/modules/uncertainty/R/run.sensitivity.analysis.R +++ b/modules/uncertainty/R/run.sensitivity.analysis.R @@ -75,7 +75,7 @@ run.sensitivity.analysis <- function(settings,plot=TRUE, ensemble.id=NULL, varia if(!exists("sa.run.ids")) sa.run.ids <- runs.samples$sa ### Load parsed model results - variables <- convert.expr(variable) + variables <- PEcAn.utils::convert.expr(variable) variable.fn <- variables$variable.drv fname <- sensitivity.filename( From 1d79b97ab63b6ea75033281ab1e6971fc308a02f Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Aug 2021 12:55:24 +0200 Subject: [PATCH 2232/2289] ignore these warnings for now To fix correctly, load samples.Rdata into its own environment and access variables from there rather than load into top-level env --- modules/uncertainty/tests/Rcheck_reference.log | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/modules/uncertainty/tests/Rcheck_reference.log b/modules/uncertainty/tests/Rcheck_reference.log index a97db2893ca..fa30301a0e8 100644 --- 
a/modules/uncertainty/tests/Rcheck_reference.log +++ b/modules/uncertainty/tests/Rcheck_reference.log @@ -72,6 +72,9 @@ get.parameter.samples: no visible global function definition for ‘vecpaste’ get.parameter.samples: no visible global function definition for ‘get.sa.sample.list’ +get.results: no visible binding for global variable ‘trait.samples’ +get.results: no visible binding for global variable ‘runs.samples’ +get.results: no visible binding for global variable ‘sa.samples’ get.sensitivity: no visible global function definition for ‘median’ kurtosis: no visible global function definition for ‘sd’ plot_flux_uncertainty: no visible global function definition for ‘plot’ @@ -220,8 +223,8 @@ Undefined global functions or variables: post.distns quantile read.table required rexp runs.samples sa.samples scale_x_continuous scale_y_continuous sd sensitivity.filename sensitivity.output site_id splinefun tag theme theme_bw theme_classic - theme_set trait.lookup trait.mcmc var variances vecpaste x y - zero.truncate + theme_set trait.lookup trait.samples trait.mcmc var variances vecpaste + x y zero.truncate Consider adding importFrom("graphics", "box", "boxplot", "hist", "legend", "lines", "par", "plot", "points") From a0e8ede640bd3f535c3d0cbb459d37fe120f37d9 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Aug 2021 13:11:12 +0200 Subject: [PATCH 2233/2289] update depends --- Makefile.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index 32ec9eaa4f9..831e24b9839 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -21,7 +21,7 @@ $(call depends,modules/meta.analysis): | .install/base/utils .install/base/db .i $(call depends,modules/photosynthesis): | $(call depends,modules/priors): | .install/base/utils .install/base/logger .install/modules/meta.analysis .install/base/visualization $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed -$(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger +$(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger .install/base/settings $(call depends,models/basgra): | .install/base/logger .install/modules/data.atmosphere .install/base/utils $(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/base/db $(call depends,models/cable): | .install/base/logger .install/base/utils From 22a061e3cbc6888a6cf4a2d93719e2edd21ecc9d Mon Sep 17 00:00:00 2001 From: istfer Date: Mon, 30 Aug 2021 14:22:31 +0300 Subject: [PATCH 2234/2289] correct zip file name --- modules/data.atmosphere/R/download.AmerifluxLBL.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/data.atmosphere/R/download.AmerifluxLBL.R b/modules/data.atmosphere/R/download.AmerifluxLBL.R index 64cd4180a1c..93aa712bdf0 100644 --- a/modules/data.atmosphere/R/download.AmerifluxLBL.R +++ b/modules/data.atmosphere/R/download.AmerifluxLBL.R @@ -65,7 +65,7 @@ download.AmerifluxLBL <- function(sitename, outfolder, start_date, end_date, endname <- strsplit(outfname, "_") endname <- endname[[1]][length(endname[[1]])] - endname <- substr(endname, 1, nchar(endname) - 4) + endname <- gsub("\\..*", "", endname) outcsvname <- paste0(substr(outfname, 1, 15), "_", 
file_timestep_hh, "_", endname, ".csv") output_csv_file <- file.path(outfolder, outcsvname) outcsvname_hr <- paste0(substr(outfname, 1, 15), "_", file_timestep_hr, "_", endname, ".csv") From 25ea80dd183037f2aba65aedcacdc3ac9b9dc85d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 30 Aug 2021 14:25:50 +0200 Subject: [PATCH 2235/2289] biocro needs to know where get.results moved --- Makefile.depends | 2 +- models/biocro/DESCRIPTION | 1 + models/biocro/R/get.model.output.BIOCRO.R | 2 +- 3 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Makefile.depends b/Makefile.depends index 831e24b9839..badbdeb124f 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -23,7 +23,7 @@ $(call depends,modules/priors): | .install/base/utils .install/base/logger .inst $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger .install/base/settings $(call depends,models/basgra): | .install/base/logger .install/modules/data.atmosphere .install/base/utils -$(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/base/db +$(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/modules/uncertainty .install/base/db $(call depends,models/cable): | .install/base/logger .install/base/utils $(call depends,models/clm45): | .install/base/logger .install/base/utils $(call depends,models/dalec): | .install/base/logger .install/base/remote .install/base/utils diff --git a/models/biocro/DESCRIPTION b/models/biocro/DESCRIPTION index 2ccb30f8c0d..5a9f8bdedf8 100644 --- a/models/biocro/DESCRIPTION +++ b/models/biocro/DESCRIPTION @@ -16,6 +16,7 @@ Imports: PEcAn.settings, PEcAn.data.atmosphere, PEcAn.data.land, + PEcAn.uncertainty, udunits2 (>= 0.11), ncdf4 (>= 1.15), lubridate (>= 1.7.0), diff --git a/models/biocro/R/get.model.output.BIOCRO.R b/models/biocro/R/get.model.output.BIOCRO.R index 6dc7d540691..c2a12a00c59 100644 --- a/models/biocro/R/get.model.output.BIOCRO.R +++ b/models/biocro/R/get.model.output.BIOCRO.R @@ -18,7 +18,7 @@ get.model.output.BIOCRO <- function(settings) { ### Get model output on the localhost if (settings$host$name == "localhost") { - PEcAn.utils::get.results(settings = settings) + PEcAn.uncertainty::get.results(settings = settings) } else { print(paste("biocro model specific get.model.output not implemented for\n", "use on remote host; generic get.model.output under development")) From da3c2a86d2e8890f614fed5f6a560aee210fbd24 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 30 Aug 2021 20:31:37 +0000 Subject: [PATCH 2236/2289] More fixes to get GH actions passing --- models/ed/R/model2netcdf.ED2.R | 11 +++++++---- models/ed/man/model2netcdf.ED2.Rd | 7 +++++-- models/ed/man/put_E_values.Rd | 23 ++++++++++++++++++++++- models/ed/man/read_E_files.Rd | 26 +++++++++++++++++++++++++- models/ed/man/read_S_files.Rd | 4 ++-- models/ed/man/read_T_files.Rd | 10 ++++++++++ 6 files changed, 71 insertions(+), 10 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 57b7380dc76..d7c3741717e 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -194,6 +194,7 @@ 
model2netcdf.ED2 <- function(outdir, sitelat, sitelon, start_date, ##' @param outdir directory where output will be written to ##' @param start_date start date in YYYY-MM-DD format ##' @param end_date end date in YYYY-MM-DD format +##' @param ... additional arguments ##' ##' @export read_T_files <- function(yr, yfiles, tfiles, outdir, start_date, end_date, ...){ @@ -865,12 +866,13 @@ put_T_values <- function(yr, nc_var, out, lat, lon, begins, ends, ...){ ##' ##' @param yr the year being processed ##' @param yfiles the years on the filenames, will be used to matched efiles for that year -##' @param efiles -##' @param outdir +##' @param efiles names of E h5 files +##' @param outdir directory where output will be written to ##' @param start_date Start time of the simulation ##' @param end_date End time of the simulation ##' @param pfts Names of PFTs used in the run, vector ##' @param settings pecan settings object +##' @param ... additional arguments ##' ##' @export read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, @@ -1047,6 +1049,7 @@ read_E_files <- function(yr, yfiles, efiles, outdir, start_date, end_date, ##' @param ends end time of simulation ##' @param pfts manually input list of Pecan PFT numbers ##' @param settings Pecan settings object +##' @param ... additional arguments ##' ##' @export put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pfts, settings, ...){ @@ -1156,11 +1159,11 @@ put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pfts, settings #' #' @param sfile history file name e.g. "history-S-1961-01-01-000000-g01.h5" #' @param outdir path to run outdir, where the -S- file is -#' @param pft_names string vector, names of ED2 pfts in the run, e.g. c("temperate.Early_Hardwood", "temperate.Late_Conifer") +#' @param pfts Names of PFTs used in the run, vector #' @param pecan_names string vector, pecan names of requested variables, e.g. c("AGB", "AbvGrndWood") #' #' @export -read_S_files <- function(sfile, outdir, pft_names, pecan_names = NULL){ +read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL){ PEcAn.logger::logger.info(paste0("*** Reading -S- file ***")) diff --git a/models/ed/man/model2netcdf.ED2.Rd b/models/ed/man/model2netcdf.ED2.Rd index d80a651c04b..7b4f164605d 100644 --- a/models/ed/man/model2netcdf.ED2.Rd +++ b/models/ed/man/model2netcdf.ED2.Rd @@ -10,7 +10,8 @@ model2netcdf.ED2( sitelon, start_date, end_date, - pft_names = NULL + pfts, + settings = NULL ) } \arguments{ @@ -24,7 +25,9 @@ model2netcdf.ED2( \item{end_date}{End time of the simulation} -\item{pft_names}{Names of PFTs used in the run, vector} +\item{pfts}{Names of PFTs used in the run, vector} + +\item{settings}{pecan settings object} } \description{ Modified from Code to convert ED2.1's HDF5 output into the NACP Intercomparison format (ALMA using netCDF) diff --git a/models/ed/man/put_E_values.Rd b/models/ed/man/put_E_values.Rd index 07dafd7c7e7..4b917db6289 100644 --- a/models/ed/man/put_E_values.Rd +++ b/models/ed/man/put_E_values.Rd @@ -4,7 +4,28 @@ \alias{put_E_values} \title{Function for put -E- values to nc_var list} \usage{ -put_E_values(yr, nc_var, out, lat, lon, begins, ends, pft_names, ...) +put_E_values(yr, nc_var, out, lat, lon, begins, ends, pfts, settings, ...) 
+} +\arguments{ +\item{yr}{the year being processed} + +\item{nc_var}{list of .nc files} + +\item{out}{path to run outdir} + +\item{lat}{latitude of site} + +\item{lon}{longitude of site} + +\item{begins}{start time of simulation} + +\item{ends}{end time of simulation} + +\item{pfts}{manually input list of Pecan PFT numbers} + +\item{settings}{Pecan settings object} + +\item{...}{additional arguments} } \description{ Function for put -E- values to nc_var list diff --git a/models/ed/man/read_E_files.Rd b/models/ed/man/read_E_files.Rd index aa06d40243b..51cdb5bf2d2 100644 --- a/models/ed/man/read_E_files.Rd +++ b/models/ed/man/read_E_files.Rd @@ -4,12 +4,36 @@ \alias{read_E_files} \title{Function for reading -E- files} \usage{ -read_E_files(yr, yfiles, efiles, outdir, start_date, end_date, pft_names, ...) +read_E_files( + yr, + yfiles, + efiles, + outdir, + start_date, + end_date, + pfts, + settings = NULL, + ... +) } \arguments{ \item{yr}{the year being processed} \item{yfiles}{the years on the filenames, will be used to matched efiles for that year} + +\item{efiles}{names of E h5 files} + +\item{outdir}{directory where output will be written to} + +\item{start_date}{Start time of the simulation} + +\item{end_date}{End time of the simulation} + +\item{pfts}{Names of PFTs used in the run, vector} + +\item{settings}{pecan settings object} + +\item{...}{additional arguments} } \description{ Function for reading -E- files diff --git a/models/ed/man/read_S_files.Rd b/models/ed/man/read_S_files.Rd index 41e709cbfd6..bfb9e36a224 100644 --- a/models/ed/man/read_S_files.Rd +++ b/models/ed/man/read_S_files.Rd @@ -5,14 +5,14 @@ \title{S-file contents are not written to standard netcdfs but are used by read_restart from SDA's perspective it doesn't make sense to write and read to ncdfs because ED restarts from history files} \usage{ -read_S_files(sfile, outdir, pft_names, pecan_names = NULL) +read_S_files(sfile, outdir, pfts, pecan_names = NULL) } \arguments{ \item{sfile}{history file name e.g. "history-S-1961-01-01-000000-g01.h5"} \item{outdir}{path to run outdir, where the -S- file is} -\item{pft_names}{string vector, names of ED2 pfts in the run, e.g. c("temperate.Early_Hardwood", "temperate.Late_Conifer")} +\item{pfts}{Names of PFTs used in the run, vector} \item{pecan_names}{string vector, pecan names of requested variables, e.g. c("AGB", "AbvGrndWood")} } diff --git a/models/ed/man/read_T_files.Rd b/models/ed/man/read_T_files.Rd index baf333e8af3..facdcadd66c 100644 --- a/models/ed/man/read_T_files.Rd +++ b/models/ed/man/read_T_files.Rd @@ -10,6 +10,16 @@ read_T_files(yr, yfiles, tfiles, outdir, start_date, end_date, ...) 
\item{yr}{the year being processed} \item{yfiles}{the years on the filenames, will be used to matched tfiles for that year} + +\item{tfiles}{names of T files to be read} + +\item{outdir}{directory where output will be written to} + +\item{start_date}{start date in YYYY-MM-DD format} + +\item{end_date}{end date in YYYY-MM-DD format} + +\item{...}{additional arguments} } \description{ Function for reading -T- files From 71c8212a0edf777a935ffbf6059aca35ee61af9f Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Mon, 30 Aug 2021 21:00:18 +0000 Subject: [PATCH 2237/2289] Another attempted fix to GH actions check --- models/ed/R/model2netcdf.ED2.R | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index d7c3741717e..882a9fbdad3 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -1212,7 +1212,27 @@ read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL){ npft <- length(pft_names) data(pftmapping, package = "PEcAn.ED2") - pft_nums <- sapply(pft_names, function(x) pftmapping$ED[pftmapping$PEcAn == x]) + pfts <- numeric(npft) + names(pfts) <- pft_names + + # Extract the PFT names and numbers for all PFTs + xml_pft_names <- lapply(settings$pfts, "[[", "name") + for (pft in pft_names) { + which_pft <- which(xml_pft_names == pft) + xml_pft <- settings$pfts[[which_pft]] + if ("ed2_pft_number" %in% names(xml_pft)) { + pft_number <- as.numeric(xml_pft$ed2_pft_number) + if (!is.finite(pft_number)) { + PEcAn.logger::logger.severe( + "ED2 PFT number present but not parseable as number. Value was ", + xml_pft$ed2_pft_number + ) + } + } else { + pft_number <- pftmapping$ED[pftmapping$PEcAn == xml_pft$name] + } + pfts[pft] <- pft_number + } out <- list() for (varname in pecan_names) { From ac73b7881b4a3c1a0eb0516eb8a613ba37d0868c Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 13:27:18 +0200 Subject: [PATCH 2238/2289] delete get.model.output* functions; all are unused since 2013 --- base/utils/NAMESPACE | 1 - base/utils/R/get.model.output.R | 28 ---- base/utils/man/get.model.output.Rd | 24 ---- models/biocro/NAMESPACE | 1 - models/biocro/R/get.model.output.BIOCRO.R | 28 ---- models/biocro/man/get.model.output.BIOCRO.Rd | 17 --- models/ed/NAMESPACE | 4 - models/ed/R/get.model.output.ed.R | 141 ------------------- models/ed/man/get.model.output.ED2.Rd | 16 --- models/ed/man/read.output.ED2.Rd | 40 ------ models/ed/man/read.output.file.ed.Rd | 22 --- models/sipnet/NAMESPACE | 2 - models/sipnet/R/get.model.output.SIPNET.R | 49 ------- models/sipnet/man/get.model.output.SIPNET.Rd | 11 -- 14 files changed, 384 deletions(-) delete mode 100644 base/utils/R/get.model.output.R delete mode 100644 base/utils/man/get.model.output.Rd delete mode 100644 models/biocro/R/get.model.output.BIOCRO.R delete mode 100644 models/biocro/man/get.model.output.BIOCRO.Rd delete mode 100644 models/ed/R/get.model.output.ed.R delete mode 100644 models/ed/man/get.model.output.ED2.Rd delete mode 100644 models/ed/man/read.output.ED2.Rd delete mode 100644 models/ed/man/read.output.file.ed.Rd delete mode 100644 models/sipnet/R/get.model.output.SIPNET.R delete mode 100644 models/sipnet/man/get.model.output.SIPNET.Rd diff --git a/base/utils/NAMESPACE b/base/utils/NAMESPACE index 0feee9d2ed5..be49b260394 100644 --- a/base/utils/NAMESPACE +++ b/base/utils/NAMESPACE @@ -21,7 +21,6 @@ export(ensemble.filename) export(full.path) export(get.ensemble.inputs) export(get.ensemble.samples) 
-export(get.model.output) export(get.parameter.stat) export(get.quantiles) export(get.results) diff --git a/base/utils/R/get.model.output.R b/base/utils/R/get.model.output.R deleted file mode 100644 index af2ce092198..00000000000 --- a/base/utils/R/get.model.output.R +++ /dev/null @@ -1,28 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -##' -##' This function retrieves model output for further analyses -##' @name get.model.output -##' @title Retrieve model output -##' -##' @param model the ecosystem model run -##' -##' @export -##' -##' @examples -##' \dontrun{ -##' get.model.output(model) -##' get.model.output('ED2') -##' } -##' -##' @author Michael Dietze, Shawn Serbin, David LeBauer -get.model.output <- function(model, settings) { - PEcAn.logger::logger.severe("Same as get.results(settings), please update your workflow") -} # get.model.output diff --git a/base/utils/man/get.model.output.Rd b/base/utils/man/get.model.output.Rd deleted file mode 100644 index 408d2271f81..00000000000 --- a/base/utils/man/get.model.output.Rd +++ /dev/null @@ -1,24 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.R -\name{get.model.output} -\alias{get.model.output} -\title{Retrieve model output} -\usage{ -get.model.output(model, settings) -} -\arguments{ -\item{model}{the ecosystem model run} -} -\description{ -This function retrieves model output for further analyses -} -\examples{ -\dontrun{ -get.model.output(model) -get.model.output('ED2') -} - -} -\author{ -Michael Dietze, Shawn Serbin, David LeBauer -} diff --git a/models/biocro/NAMESPACE b/models/biocro/NAMESPACE index b2bf97f059f..cca0c4e51b6 100644 --- a/models/biocro/NAMESPACE +++ b/models/biocro/NAMESPACE @@ -2,7 +2,6 @@ export(cf2biocro) export(convert.samples.BIOCRO) -export(get.model.output.BIOCRO) export(get_biocro_defaults) export(met2model.BIOCRO) export(model2netcdf.BIOCRO) diff --git a/models/biocro/R/get.model.output.BIOCRO.R b/models/biocro/R/get.model.output.BIOCRO.R deleted file mode 100644 index 6dc7d540691..00000000000 --- a/models/biocro/R/get.model.output.BIOCRO.R +++ /dev/null @@ -1,28 +0,0 @@ -##------------------------------------------------------------------------------- -## Copyright (c) 2012 University of Illinois, NCSA. All rights reserved. 
This -## program and the accompanying materials are made available under the terms of -## the University of Illinois/NCSA Open Source License which accompanies this -## distribution, and is available at -## http://opensource.ncsa.illinois.edu/license.html -##------------------------------------------------------------------------------- - -#--------------------------------------------------------------------------------------------------# -##' Function to retrieve model output from local server -##' -##' @name get.model.output.BIOCRO -##' @title Retrieve model output from local server -##' @param settings list generated from \code{\link{read.settings}} function applied to settings file -##' @export -##' @author Mike Dietze, David LeBauer -get.model.output.BIOCRO <- function(settings) { - - ### Get model output on the localhost - if (settings$host$name == "localhost") { - PEcAn.utils::get.results(settings = settings) - } else { - print(paste("biocro model specific get.model.output not implemented for\n", - "use on remote host; generic get.model.output under development")) - return(NULL) - } ### End of if/else - -} # get.model.output.BIOCRO diff --git a/models/biocro/man/get.model.output.BIOCRO.Rd b/models/biocro/man/get.model.output.BIOCRO.Rd deleted file mode 100644 index e3ec464ab5e..00000000000 --- a/models/biocro/man/get.model.output.BIOCRO.Rd +++ /dev/null @@ -1,17 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.BIOCRO.R -\name{get.model.output.BIOCRO} -\alias{get.model.output.BIOCRO} -\title{Retrieve model output from local server} -\usage{ -get.model.output.BIOCRO(settings) -} -\arguments{ -\item{settings}{list generated from \code{\link{read.settings}} function applied to settings file} -} -\description{ -Function to retrieve model output from local server -} -\author{ -Mike Dietze, David LeBauer -} diff --git a/models/ed/NAMESPACE b/models/ed/NAMESPACE index 7d8559612da..33cea79abf9 100644 --- a/models/ed/NAMESPACE +++ b/models/ed/NAMESPACE @@ -20,7 +20,6 @@ export(ed.var) export(example_css) export(example_pss) export(example_site) -export(get.model.output.ED2) export(get_ed2in_dates) export(get_met_dates) export(get_restartfile.ED2) @@ -34,8 +33,6 @@ export(modify_ed2in) export(parse.history) export(put_E_values) export(put_T_values) -export(read.output.ED2) -export(read.output.file.ed) export(read_E_files) export(read_S_files) export(read_T_files) @@ -60,5 +57,4 @@ export(write_ed_veg) export(write_pss) export(write_restart.ED2) export(write_site) -import(PEcAn.utils) importFrom(magrittr,"%>%") diff --git a/models/ed/R/get.model.output.ed.R b/models/ed/R/get.model.output.ed.R deleted file mode 100644 index 595fa4866af..00000000000 --- a/models/ed/R/get.model.output.ed.R +++ /dev/null @@ -1,141 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. 
This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -#--------------------------------------------------------------------------------------------------# -##' Extract ED output for specific variables from an hdf5 file -##' @name read.output.file.ed -##' @title read output - ED -##' @param filename string, name of file with data -##' @param variables variables to extract from file -##' @return single value of output variable from filename. In the case of AGB, it is summed across all plants -##' @export -##' @author David LeBauer, Carl Davidson -read.output.file.ed <- function(filename, variables = c("AGB_CO", "NPLANT")) { - if (filename %in% dir(pattern = "h5")) { - Carbon2Yield <- 20 - nc <- ncdf4::nc_open(filename) - if (all(c("AGB_CO", "NPLANT") %in% variables)) { - result <- (sum(ncdf4::ncvar_get(nc, "AGB_CO") * ncdf4::ncvar_get(nc, "NPLANT"), na.rm = TRUE) * Carbon2Yield) - } else { - result <- sum(sapply(variables, function(x) { - sum(ncdf4::ncvar_get(nc, x)) - })) - } - ncdf4::nc_close(nc) - return(result) - } else { - return(NA) - } -} # read.output.file.ed -# ==================================================================================================# - - -#--------------------------------------------------------------------------------------------------# -##' Reads the output of a single model run -##' -##' This function applies \link{read.output.file.ed} to a list of files from a single run -##' @title Read ED output -##' @name read.output.ED2 -##' @param run.id the id distinguishing the model run -##' @param outdir the directory that the model's output was sent to -##' @param start.year first year -##' @param end.year last year -##' @param variables which ED2 variables to extract -##' @param output.type type of output file to read, can be '-Y-' for annual output, '-M-' for monthly means, '-D-' for daily means, '-T-' for instantaneous fluxes. 
Output types are set in the ED2IN namelist as NL%I[DMYT]OUTPUT -##' @return vector of output variable for all runs within ensemble -##' @export -##' @author David LeBauer, Carl Davidson -read.output.ED2 <- function(run.id, outdir, start.year = NA, end.year = NA, - variables = c("AGB_CO", "NPLANT"), output.type = "Y") { - print(run.id) - # if(any(grep(run.id, dir(outdir, pattern = 'finished')))){ - file.names <- dir(outdir, pattern = run.id, full.names = FALSE) - file.names <- grep(paste("-", output.type, "-", sep = ""), file.names, value = TRUE) - file.names <- grep("([0-9]{4}).*", file.names, value = TRUE) - if (length(file.names) > 0) { - years <- sub("((?!-Y-).)*-Y-([0-9]{4}).*", "\\2", file.names, perl = TRUE) - if (!is.na(start.year) && nchar(start.year) == 4) { - file.names <- file.names[years >= as.numeric(start.year)] - } - if (!is.na(end.year) && nchar(end.year) == 4) { - file.names <- file.names[years <= as.numeric(end.year)] - } - file.names <- file.names[!is.na(file.names)] - print(file.names) - - result <- mean(sapply(file.names, read.output.file.ed, variables)) ## if any are NA, NA is returned - - } else { - warning(cat(paste("no output files in", outdir, "\nfor", run.id, "\n"))) - result <- NA - } - # } else { warning(cat(paste(run.id, 'not finished \n'))) result <- NA } - - return(result) -} # read.output.ED2 -# ==================================================================================================# - - - -#--------------------------------------------------------------------------------------------------# -##' Function to retrieve ED2 HDF model output from local or remote server -##' -##' @name get.model.output.ED2 -##' @title Retrieve ED2 HDF model output from local or remote server -##' -##' @import PEcAn.utils -##' @export -##' -##' @author Shawn Serbin -##' @author David LeBauer -get.model.output.ED2 <- function(settings) { - model <- settings$model$type - - ### Get ED2 model output on the localhost - if (settings$host$name == "localhost") { - # setwd(settings$host$outdir) # Host model output directory - get.results(settings) - ### Move required functions to host TODO: take out functions read.output.file.ed & read.output.ed - ### from write.configs.ed & put into a new file specific for reading ED output - - ### Is the previous necessary for localhost? These functions should be availible within R & should - ### not need to be copied and run but could instead be called within the running R shell. 
SPS - - # setwd(settings$outdir) source('PEcAn.functions.R') # This won't work yet - - - } else { - ### Make a copy of the settings object for use on the remote sever - save(settings, file = paste0(settings$outdir, "settings.Rdata")) - - ### Make a copy of required functions and place in file PEcAn.functions.R - dump(c("get.run.id", "left.pad.zeros", "read.ensemble.output", - "read.sa.output", "read.output", "model2netcdf.ED2", "get.results"), - file = paste0(settings$outdir, "PEcAn.functions.R")) - - ### Add execution of get.results to the end of the PEcAn.functions.R file This will execute all the - ### code needed to extract output on remote host --- added loading of pecan settings object - cat("load(\"settings.Rdata\")", file = paste0(settings$outdir, "PEcAn.functions.R"), append = TRUE) - cat("\n", file = paste0(settings$outdir, "PEcAn.functions.R"), append = TRUE) - cat("get.results(settings)", file = paste0(settings$outdir, "PEcAn.functions.R"), append = TRUE) - - ### Copy required PEcAn.functions.R and settings object to remote host - rsync("-outi", paste0(settings$outdir, "settings.Rdata"), paste0(settings$host$name, ":", settings$host$outdir)) - rsync("-outi", paste0(settings$outdir, "PEcAn.functions.R"), paste0(settings$host$name, ":", settings$host$outdir)) - - ### Run script on remote host - system(paste("ssh -T", settings$host$name, "'", "cd", settings$host$outdir, "; R --vanilla < PEcAn.functions.R'")) - - ### Get PEcAn output from remote host - rsync("-outi", from = paste0(settings$host$name, ":", settings$host$outdir, "output.Rdata"), to = settings$outdir) - - } ### End of if/else - -} # get.model.output.ED2 -# ==================================================================================================# diff --git a/models/ed/man/get.model.output.ED2.Rd b/models/ed/man/get.model.output.ED2.Rd deleted file mode 100644 index 08d5e4bec98..00000000000 --- a/models/ed/man/get.model.output.ED2.Rd +++ /dev/null @@ -1,16 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.ed.R -\name{get.model.output.ED2} -\alias{get.model.output.ED2} -\title{Retrieve ED2 HDF model output from local or remote server} -\usage{ -get.model.output.ED2(settings) -} -\description{ -Function to retrieve ED2 HDF model output from local or remote server -} -\author{ -Shawn Serbin - -David LeBauer -} diff --git a/models/ed/man/read.output.ED2.Rd b/models/ed/man/read.output.ED2.Rd deleted file mode 100644 index 373a89a2823..00000000000 --- a/models/ed/man/read.output.ED2.Rd +++ /dev/null @@ -1,40 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.ed.R -\name{read.output.ED2} -\alias{read.output.ED2} -\title{Read ED output} -\usage{ -read.output.ED2( - run.id, - outdir, - start.year = NA, - end.year = NA, - variables = c("AGB_CO", "NPLANT"), - output.type = "Y" -) -} -\arguments{ -\item{run.id}{the id distinguishing the model run} - -\item{outdir}{the directory that the model's output was sent to} - -\item{start.year}{first year} - -\item{end.year}{last year} - -\item{variables}{which ED2 variables to extract} - -\item{output.type}{type of output file to read, can be '-Y-' for annual output, '-M-' for monthly means, '-D-' for daily means, '-T-' for instantaneous fluxes. 
Output types are set in the ED2IN namelist as NL\%I\link{DMYT}OUTPUT} -} -\value{ -vector of output variable for all runs within ensemble -} -\description{ -Reads the output of a single model run -} -\details{ -This function applies \link{read.output.file.ed} to a list of files from a single run -} -\author{ -David LeBauer, Carl Davidson -} diff --git a/models/ed/man/read.output.file.ed.Rd b/models/ed/man/read.output.file.ed.Rd deleted file mode 100644 index 0bcc4b0cf85..00000000000 --- a/models/ed/man/read.output.file.ed.Rd +++ /dev/null @@ -1,22 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.ed.R -\name{read.output.file.ed} -\alias{read.output.file.ed} -\title{read output - ED} -\usage{ -read.output.file.ed(filename, variables = c("AGB_CO", "NPLANT")) -} -\arguments{ -\item{filename}{string, name of file with data} - -\item{variables}{variables to extract from file} -} -\value{ -single value of output variable from filename. In the case of AGB, it is summed across all plants -} -\description{ -Extract ED output for specific variables from an hdf5 file -} -\author{ -David LeBauer, Carl Davidson -} diff --git a/models/sipnet/NAMESPACE b/models/sipnet/NAMESPACE index 75741c7a77d..198249ae9d4 100644 --- a/models/sipnet/NAMESPACE +++ b/models/sipnet/NAMESPACE @@ -1,6 +1,5 @@ # Generated by roxygen2: do not edit by hand -export(get.model.output.SIPNET) export(met2model.SIPNET) export(model2netcdf.SIPNET) export(read_restart.SIPNET) @@ -10,4 +9,3 @@ export(sipnet2datetime) export(split_inputs.SIPNET) export(write.config.SIPNET) export(write_restart.SIPNET) -import(PEcAn.utils) diff --git a/models/sipnet/R/get.model.output.SIPNET.R b/models/sipnet/R/get.model.output.SIPNET.R deleted file mode 100644 index e6dc1e88a95..00000000000 --- a/models/sipnet/R/get.model.output.SIPNET.R +++ /dev/null @@ -1,49 +0,0 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. 
This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- - -#--------------------------------------------------------------------------------------------------# -##' Function to retrieve model output from local or remote server -##' -##' @name get.model.output.SIPNET -##' @title Retrieve model output from local or remote server -##' -##' @import PEcAn.utils -##' @export -get.model.output.SIPNET <- function(settings) { - - model <- "SIPNET" - - ### Get model output on the localhost - if (settings$host$name == "localhost") { - get.results(settings) - } else { - - ## model output is on a remote host - remoteScript <- paste(settings$outdir, "PEcAn.functions.R", sep = "") - - ### Make a copy of required functions and place in file PEcAn.functions.R - dump(c(paste("model2netcdf", model, sep = "."), "get.run.id", "read.ensemble.output", "read.sa.output", - "read.output", "get.results"), file = remoteScript) - - ### Add execution of get.results to the end of the PEcAn.functions.R file This will execute all the - ### code needed to extract output on remote host - cat("get.results()", file = remoteScript, append = TRUE) - - ### Copy required PEcAn.functions.R to remote host - rsync("-outi", remoteScript, paste(settings$host$name, ":", settings$host$outdir, sep = "")) - - ### Run script on remote host - system(paste("ssh -T", settings$host$name, "'", "cd", settings$host$outdir, "; R --vanilla < PEcAn.functions.R'")) - - ### Get PEcAn output from remote host - rsync("-outi", from = paste(settings$host$name, ":", settings$host$outdir, "output.Rdata", - sep = ""), to = settings$outdir) - } ### End of if/else - -} # get.model.output.SIPNET diff --git a/models/sipnet/man/get.model.output.SIPNET.Rd b/models/sipnet/man/get.model.output.SIPNET.Rd deleted file mode 100644 index fe396667e31..00000000000 --- a/models/sipnet/man/get.model.output.SIPNET.Rd +++ /dev/null @@ -1,11 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/get.model.output.SIPNET.R -\name{get.model.output.SIPNET} -\alias{get.model.output.SIPNET} -\title{Retrieve model output from local or remote server} -\usage{ -get.model.output.SIPNET(settings) -} -\description{ -Function to retrieve model output from local or remote server -} From 7989effe1cacc1d61661a35b9c5e60e496ccfe51 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 13:27:58 +0200 Subject: [PATCH 2239/2289] need namespaces now that (deleted) get.model.output.ED2 is no longer importing utils --- models/ed/DESCRIPTION | 3 +-- models/ed/R/check_ed2in.R | 2 +- models/ed/R/download_edi.R | 2 +- models/ed/R/model2netcdf.ED2.R | 2 +- models/ed/R/write.configs.ed.R | 6 +++--- 5 files changed, 7 insertions(+), 8 deletions(-) diff --git a/models/ed/DESCRIPTION b/models/ed/DESCRIPTION index 6b44a531c21..c2a79f26398 100644 --- a/models/ed/DESCRIPTION +++ b/models/ed/DESCRIPTION @@ -20,14 +20,13 @@ Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific streamline the interaction between data and models, and to improve the efficacy of scientific investigation. This package provides functions to link the Ecosystem Demography Model, version 2, to PEcAn. 
-Depends: - PEcAn.utils Imports: PEcAn.data.atmosphere, PEcAn.data.land, PEcAn.logger, PEcAn.remote, PEcAn.settings, + PEcAn.utils, abind (>= 1.4.5), dplyr, glue, diff --git a/models/ed/R/check_ed2in.R b/models/ed/R/check_ed2in.R index 1082a867916..338bb2faf51 100644 --- a/models/ed/R/check_ed2in.R +++ b/models/ed/R/check_ed2in.R @@ -58,7 +58,7 @@ check_ed2in <- function(ed2in) { } } else { # Check that at least one history file exists - history_files <- match_file(ed2in[["SFILIN"]]) + history_files <- PEcAn.utils::match_file(ed2in[["SFILIN"]]) if (!length(history_files) > 0) { PEcAn.logger::logger.severe( "No history files matched for prefix ", ed2in[["SFILIN"]] diff --git a/models/ed/R/download_edi.R b/models/ed/R/download_edi.R index 6351ea7f81f..c5b77981293 100644 --- a/models/ed/R/download_edi.R +++ b/models/ed/R/download_edi.R @@ -13,7 +13,7 @@ download_edi <- function(directory) { download_link <- "https://files.osf.io/v1/resources/b6umf/providers/osfstorage/5a948ea691b689000fa2a588/?zip=" target_file <- paste0(directory, ".zip") - download.file(download_link, target_file) + PEcAn.utils::download.file(download_link, target_file) unzip(target_file, exdir = directory) invisible(TRUE) } diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 3170d9b8457..19f9d0a290f 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -1165,7 +1165,7 @@ read_S_files <- function(sfile, outdir, pft_names, pecan_names = NULL){ # Aggregate for (l in seq_along(pecan_names)) { - variable <- convert.expr(ed_derivs[l]) # convert + variable <- PEcAn.utils::convert.expr(ed_derivs[l]) # convert expr <- variable$variable.eqn$expression sapply(variable$variable.eqn$variables, function(x) assign(x, ed.dat[[x]], envir = .GlobalEnv)) diff --git a/models/ed/R/write.configs.ed.R b/models/ed/R/write.configs.ed.R index 123464945ba..9d599bd9da3 100644 --- a/models/ed/R/write.configs.ed.R +++ b/models/ed/R/write.configs.ed.R @@ -52,14 +52,14 @@ convert.samples.ED <- function(trait.samples) { if ("root_respiration_rate" %in% names(trait.samples)) { rrr1 <- as.numeric(trait.samples[["root_respiration_rate"]]) rrr2 <- rrr1 * DEFAULT.MAINTENANCE.RESPIRATION - trait.samples[["root_respiration_rate"]] <- arrhenius.scaling(rrr2, old.temp = 25, new.temp = 15) + trait.samples[["root_respiration_rate"]] <- PEcAn.utils::arrhenius.scaling(rrr2, old.temp = 25, new.temp = 15) # model version compatibility (rrr and rrf are the same) trait.samples[["root_respiration_factor"]] <- trait.samples[["root_respiration_rate"]] } if ("Vcmax" %in% names(trait.samples)) { vcmax <- as.numeric(trait.samples[["Vcmax"]]) - trait.samples[["Vcmax"]] <- arrhenius.scaling(vcmax, old.temp = 25, new.temp = 15) + trait.samples[["Vcmax"]] <- PEcAn.utils::arrhenius.scaling(vcmax, old.temp = 25, new.temp = 15) # write as Vm0 for version compatibility (Vm0 = Vcmax @ 15C) trait.samples[["Vm0"]] <- trait.samples[["Vcmax"]] @@ -69,7 +69,7 @@ convert.samples.ED <- function(trait.samples) { ## First scale variables to 15 degC trait.samples[["leaf_respiration_rate_m2"]] <- - arrhenius.scaling(leaf_resp, old.temp = 25, new.temp = 15) + PEcAn.utils::arrhenius.scaling(leaf_resp, old.temp = 25, new.temp = 15) # convert leaf_respiration_rate_m2 to Rd0 (variable used in ED2) trait.samples[["Rd0"]] <- trait.samples[["leaf_respiration_rate_m2"]] From 83fdc247c2dd7155b619cdb812c7556f53ad2ace Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 15:04:43 +0200 Subject: [PATCH 2240/2289] remove get.model.output 
from test examples --- base/all/tests/testthat/test.workflow.R | 2 +- modules/data.atmosphere/tests/testthat/helper.R | 1 - 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/base/all/tests/testthat/test.workflow.R b/base/all/tests/testthat/test.workflow.R index dc540016780..2e2b6e5dcac 100644 --- a/base/all/tests/testthat/test.workflow.R +++ b/base/all/tests/testthat/test.workflow.R @@ -19,4 +19,4 @@ # run.write.configs("ED2") # clear.scratch(settings) # start.model.runs("ED2") -# get.model.output("ED2") +# get.results(settings) diff --git a/modules/data.atmosphere/tests/testthat/helper.R b/modules/data.atmosphere/tests/testthat/helper.R index 3a014c85f35..deba7cbdd0c 100644 --- a/modules/data.atmosphere/tests/testthat/helper.R +++ b/modules/data.atmosphere/tests/testthat/helper.R @@ -9,7 +9,6 @@ #' @param ... other arguments passed on to \code{\link[testthat]{expect_match}} #' @examples #' expect_log(PEcAn.logger::logger.debug("test"), "DEBUG.*test") -#' expect_log(PEcAn.utils::get.model.output(), "update your workflow") #' expect_log(cat("Hello", file = stderr()), "Hello") #' # Only messages on stderr are recognized #' expect_failure(expect_log("Hello", "Hello")) From 2d4963a21578e999c7705765b7d2b1a969b7060a Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 15:44:26 +0200 Subject: [PATCH 2241/2289] need namespace in sipnet now too --- models/sipnet/R/read_restart.SIPNET.R | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/models/sipnet/R/read_restart.SIPNET.R b/models/sipnet/R/read_restart.SIPNET.R index a46b80a137f..b3afee9ab4d 100755 --- a/models/sipnet/R/read_restart.SIPNET.R +++ b/models/sipnet/R/read_restart.SIPNET.R @@ -26,9 +26,9 @@ read_restart.SIPNET <- function(outdir, runid, stop.time, settings, var.names, p var.names <- c(var.names, "fine_root_carbon_content", "coarse_root_carbon_content") # Read ensemble output - ens <- read.output(runid = runid, - outdir = file.path(outdir, runid), - start.year = lubridate::year(stop.time), + ens <- PEcAn.utils::read.output(runid = runid, + outdir = file.path(outdir, runid), + start.year = lubridate::year(stop.time), end.year = lubridate::year(stop.time), variables = var.names) From b7a7254a52f6d499cb93d389a43ac64ab5688da0 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 15:58:36 +0200 Subject: [PATCH 2242/2289] depends update --- Makefile.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index 035af3ba0fc..1c738fcab3f 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -28,7 +28,7 @@ $(call depends,models/cable): | .install/base/logger .install/base/utils $(call depends,models/clm45): | .install/base/logger .install/base/utils $(call depends,models/dalec): | .install/base/logger .install/base/remote .install/base/utils $(call depends,models/dvmdostem): | .install/base/utils -$(call depends,models/ed): | .install/base/utils .install/modules/data.atmosphere .install/modules/data.land .install/base/logger .install/base/remote .install/base/settings +$(call depends,models/ed): | .install/modules/data.atmosphere .install/modules/data.land .install/base/logger .install/base/remote .install/base/settings .install/base/utils $(call depends,models/fates): | .install/base/utils .install/base/logger .install/base/remote $(call depends,models/gday): | .install/base/utils .install/base/logger .install/base/remote $(call depends,models/jules): | .install/base/utils .install/base/logger .install/base/remote From 
7fcc0d63594fd2ad50fdfa06b2681a971b3a20b1 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Tue, 31 Aug 2021 14:57:39 +0000 Subject: [PATCH 2243/2289] More GH action fixes for read_S_files --- models/ed/R/model2netcdf.ED2.R | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/models/ed/R/model2netcdf.ED2.R b/models/ed/R/model2netcdf.ED2.R index 882a9fbdad3..c9778a3d16c 100644 --- a/models/ed/R/model2netcdf.ED2.R +++ b/models/ed/R/model2netcdf.ED2.R @@ -1161,9 +1161,11 @@ put_E_values <- function(yr, nc_var, out, lat, lon, begins, ends, pfts, settings #' @param outdir path to run outdir, where the -S- file is #' @param pfts Names of PFTs used in the run, vector #' @param pecan_names string vector, pecan names of requested variables, e.g. c("AGB", "AbvGrndWood") +#' @param settings pecan settings object +#' @param ... additional arguments #' #' @export -read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL){ +read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL, settings = NULL, ...){ PEcAn.logger::logger.info(paste0("*** Reading -S- file ***")) @@ -1231,7 +1233,7 @@ read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL){ } else { pft_number <- pftmapping$ED[pftmapping$PEcAn == xml_pft$name] } - pfts[pft] <- pft_number + pfts_nums[pft] <- pft_number } out <- list() @@ -1257,7 +1259,7 @@ read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL){ # remove non-pft sublists pars[names(pars) != "pft"] <- NULL # pass pft numbers as sublist names - names(pars) <- pft_nums + names(pars) <- pfts_nums # Aggregate for (l in seq_along(pecan_names)) { @@ -1274,7 +1276,7 @@ read_S_files <- function(sfile, outdir, pfts, pecan_names = NULL){ } else {# per-pft vars for(k in seq_len(npft)) { - ind <- (pft == pft_nums[k]) + ind <- (pft == pfts_nums[k]) if (any(ind)) { # check for different variables/units? From 1df30587709fb5d8697e3874c87cd2d50c3ee977 Mon Sep 17 00:00:00 2001 From: KristinaRiemer Date: Tue, 31 Aug 2021 15:27:31 +0000 Subject: [PATCH 2244/2289] Update docs --- models/ed/man/read_S_files.Rd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/models/ed/man/read_S_files.Rd b/models/ed/man/read_S_files.Rd index bfb9e36a224..14caa902715 100644 --- a/models/ed/man/read_S_files.Rd +++ b/models/ed/man/read_S_files.Rd @@ -5,7 +5,7 @@ \title{S-file contents are not written to standard netcdfs but are used by read_restart from SDA's perspective it doesn't make sense to write and read to ncdfs because ED restarts from history files} \usage{ -read_S_files(sfile, outdir, pfts, pecan_names = NULL) +read_S_files(sfile, outdir, pfts, pecan_names = NULL, settings = NULL, ...) } \arguments{ \item{sfile}{history file name e.g. "history-S-1961-01-01-000000-g01.h5"} @@ -15,6 +15,10 @@ read_S_files(sfile, outdir, pfts, pecan_names = NULL) \item{pfts}{Names of PFTs used in the run, vector} \item{pecan_names}{string vector, pecan names of requested variables, e.g. 
c("AGB", "AbvGrndWood")} + +\item{settings}{pecan settings object} + +\item{...}{additional arguments} } \description{ S-file contents are not written to standard netcdfs but are used by read_restart From 11ea79248738ed1521de8cfe844fe4955ffd420a Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 20:31:22 +0200 Subject: [PATCH 2245/2289] import was added for now-deleted fn --- models/biocro/DESCRIPTION | 1 - 1 file changed, 1 deletion(-) diff --git a/models/biocro/DESCRIPTION b/models/biocro/DESCRIPTION index 5a9f8bdedf8..2ccb30f8c0d 100644 --- a/models/biocro/DESCRIPTION +++ b/models/biocro/DESCRIPTION @@ -16,7 +16,6 @@ Imports: PEcAn.settings, PEcAn.data.atmosphere, PEcAn.data.land, - PEcAn.uncertainty, udunits2 (>= 0.11), ncdf4 (>= 1.15), lubridate (>= 1.7.0), From 5c026bbe21f7fe95f1850023b47760b86bd9e971 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Tue, 31 Aug 2021 20:33:20 +0200 Subject: [PATCH 2246/2289] Update Makefile.depends --- Makefile.depends | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index 43cd5040ed8..1001fcf3e8f 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -23,7 +23,7 @@ $(call depends,modules/priors): | .install/base/utils .install/base/logger .inst $(call depends,modules/rtm): | .install/base/logger .install/modules/assim.batch .install/base/utils .install/models/ed $(call depends,modules/uncertainty): | .install/base/utils .install/modules/priors .install/base/db .install/modules/emulator .install/base/logger .install/base/settings $(call depends,models/basgra): | .install/base/logger .install/modules/data.atmosphere .install/base/utils -$(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/modules/uncertainty .install/base/db +$(call depends,models/biocro): | .install/base/logger .install/base/remote .install/base/utils .install/base/settings .install/modules/data.atmosphere .install/modules/data.land .install/base/db $(call depends,models/cable): | .install/base/logger .install/base/utils $(call depends,models/clm45): | .install/base/logger .install/base/utils $(call depends,models/dalec): | .install/base/logger .install/base/remote .install/base/utils From f3a5f0bd9b40e64a46a50c04b80c57cffd1c56e2 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 1 Sep 2021 11:38:57 +0200 Subject: [PATCH 2247/2289] skip dependency check when on CI Not needed because when on CI we run Rscript docker/depends/pecan.depends.R before invoking make --- Makefile | 2 ++ 1 file changed, 2 insertions(+) diff --git a/Makefile b/Makefile index 7a6caf127a1..0fd902f08f3 100644 --- a/Makefile +++ b/Makefile @@ -143,7 +143,9 @@ $(ALL_PKGS_I) $(ALL_PKGS_C) $(ALL_PKGS_T) $(ALL_PKGS_D): | .install/devtools .in .SECONDEXPANSION: .doc/%: $$(call files_in_dir, %) | $$(@D) +ifeq ($(CI),) # skipped on CI because we start the run by bulk-installing all deps + $(call depends_R_pkg, $(subst .doc/,,$@)) +endif $(call doc_R_pkg, $(subst .doc/,,$@)) echo `date` > $@ From e64324d22bd6f62cf501cf412d74368bd7396813 Mon Sep 17 00:00:00 2001 From: "Shawn P. 
Serbin" Date: Fri, 10 Sep 2021 11:44:58 -0400 Subject: [PATCH 2248/2289] First backup from BNL modex --- .../bmorrison/Multi_Site_Constructors.R | 246 ++++++++++++ .../bmorrison/Multisite_SDA_Bailey.R | 341 ++++++++++++++++ .../bmorrison/Multisite_SDA_Bailey_AGB_LAI.R | 334 ++++++++++++++++ .../Multisite_SDA_Bailey_AGB_LAI_2_sites.R | 233 +++++++++++ ...isite_SDA_Bailey_AGB_LAI_2_sites_with_NA.R | 347 +++++++++++++++++ .../Multisite_SDA_Bailey_AGB_LAI_7_sites.r | 339 ++++++++++++++++ .../Multisite_SDA_Bailey_LAI_8_days.R | 129 ++++++ .../bmorrison/Multisite_SDA_Shawn.R | 168 ++++++++ .../bmorrison/ecoregion_lai_CONUS.R | 317 +++++++++++++++ .../bmorrison/ecoregion_lai_agb_trends.R | 326 ++++++++++++++++ .../bmorrison/extract_500_site_data.R | 288 ++++++++++++++ .../bmorrison/extract_50_sitegroup_data 2.R | 289 ++++++++++++++ .../bmorrison/extract_50_sitegroup_data.R | 289 ++++++++++++++ .../bmorrison/extract_lai_agb_data.R | 287 ++++++++++++++ .../bmorrison/extract_lai_agb_data_500.R | 368 ++++++++++++++++++ .../bmorrison/general_sda_setup 2.R | 301 ++++++++++++++ .../sda_backup/bmorrison/general_sda_setup.R | 301 ++++++++++++++ .../inst/sda_backup/bmorrison/nohuprun.txt | 19 + .../inst/sda_backup/bmorrison/pft_selection.R | 190 +++++++++ .../bmorrison/register_site_group.R | 44 +++ .../sserbin/R_scripts_2/Multisite-3sites.R | 102 +++++ .../sserbin/R_scripts_2/Multisite-4sites.R | 102 +++++ .../sserbin/R_scripts_2/Multisite_SDA_BNL.R | 171 ++++++++ .../R_scripts_2/Multisite_SDA_BNL_updated.R | 165 ++++++++ .../sserbin/R_scripts_2/nohuprun.txt | 23 ++ .../R_scripts_2/workflow_doconversions.R | 68 ++++ .../sserbin/R_scripts_2/workflow_metprocess.R | 76 ++++ .../sserbin/Rscripts/multi_site_LAI_SDA_BNL.R | 252 ++++++++++++ .../sserbin/Rscripts/single_site_SDA_BNL.R | 224 +++++++++++ .../inst/sda_backup/sserbin/nohuprun.txt | 25 ++ .../inst/sda_backup/sserbin/workflow 2.R | 215 ++++++++++ .../inst/sda_backup/sserbin/workflow.R | 215 ++++++++++ 32 files changed, 6794 insertions(+) create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multi_Site_Constructors.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites_with_NA.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_7_sites.r create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_LAI_8_days.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Shawn.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_CONUS.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_agb_trends.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/extract_500_site_data.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data 2.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data_500.R create mode 100755 
modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup 2.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/nohuprun.txt create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/pft_selection.R create mode 100755 modules/assim.sequential/inst/sda_backup/bmorrison/register_site_group.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-3sites.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-4sites.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL_updated.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/nohuprun.txt create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_doconversions.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_metprocess.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/multi_site_LAI_SDA_BNL.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/single_site_SDA_BNL.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/nohuprun.txt create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/workflow 2.R create mode 100755 modules/assim.sequential/inst/sda_backup/sserbin/workflow.R diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multi_Site_Constructors.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multi_Site_Constructors.R new file mode 100755 index 00000000000..12f3f50b9e7 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multi_Site_Constructors.R @@ -0,0 +1,246 @@ +##' @title Contruct.Pf +##' @name Contruct.Pf +##' @author Hamze Dokoohaki +##' +##' @param site.ids a vector name of site ids. +##' @param var.names vector names of state variable names. +##' @param X a matrix of state variables. In this matrix rows represent ensembles, while columns show the variables for different sites. +##' @param localization.FUN This is the function that performs the localization of the Pf matrix and it returns a localized matrix with the same dimensions. +##' @description The argument X needs to have an attribute pointing the state variables to their corresponding site. This attribute needs to be called `Site`. +##' At the moment, the cov between state variables at blocks defining the cov between two sites are assumed zero. +##' @return It returns the var-cov matrix of state variables at multiple sites. +##' @export + + +Contruct.Pf <- function(site.ids, var.names, X, localization.FUN=NULL, t=1, blocked.dis=NULL, ...) { + #setup + nsite <- length(site.ids) + nvariable <- length(var.names) + # I will make a big cov matrix and then I will populate it with the cov of each site + pf.matrix <-matrix(0,(nsite*nvariable),(nsite*nvariable)) + + ## This makes the diagonal of our big matrix - first filters out each site, estimates the cov and puts it where it needs to go. 
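+  ## For illustration: with two sites (s1, s2) and variables (v1, v2) the state
+  ## columns are ordered v1(s1), v2(s1), v1(s2), v2(s2). This first loop fills the
+  ## diagonal (within-site) blocks; the second loop below fills the off-diagonal
+  ## (between-site) blocks, keeping only same-variable cross-site covariances.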
+  for (site in site.ids){
+    #let's find out where this cov (for the current site) needs to go in the main cov matrix
+    pos.in.matrix <- which(attr(X,"Site") %in% site)
+    #foreach site let's get the Xs
+    pf.matrix [pos.in.matrix, pos.in.matrix] <- cov( X [, pos.in.matrix] ,use="complete.obs")
+  }
+
+  # This is where we estimate the cov between state variables of different sites
+  #I put this into a separate loop so we can have more control over it
+  site.cov.orders <- expand.grid(site.ids,site.ids) %>%
+    filter( Var1 != Var2)
+
+  for (i in 1:nrow(site.cov.orders)){
+    # first we need to find out where to put it in the big matrix
+    rows.in.matrix <- which(attr(X,"Site") %in% site.cov.orders[i,1])
+    cols.in.matrix <- which(attr(X,"Site") %in% site.cov.orders[i,2])
+    #estimated between these two sites
+    two.site.cov <- cov( X [, c(rows.in.matrix, cols.in.matrix)],use="complete.obs" )[(nvariable+1):(2*nvariable),1:nvariable]
+    # I'm setting the off diag to zero
+    two.site.cov [which(lower.tri(two.site.cov, diag = FALSE),TRUE) %>% rbind (which(upper.tri(two.site.cov,FALSE),TRUE))] <- 0
+    #putting it back to the main matrix
+    pf.matrix [rows.in.matrix, cols.in.matrix] <- two.site.cov
+  }
+
+  # if a localization function was passed in, run the Pf matrix through it
+  if (!is.null(localization.FUN)) {
+    pf.matrix.out <- localization.FUN (pf.matrix, blocked.dis, ...)
+  } else{
+    pf.matrix.out <- pf.matrix
+  }
+
+  # adding labels to rownames and colnames
+  labelss <- paste0(rep(var.names, length(site.ids)) %>% as.character(),"(",
+                    rep(site.ids, each=length(var.names)),")")
+
+  colnames(pf.matrix.out ) <-labelss
+  rownames(pf.matrix.out ) <-labelss
+
+  return(pf.matrix.out)
+
+}
+
+##' @title Construct.R
+##' @name Construct.R
+##' @author Hamze Dokoohaki
+##'
+##' @param site.ids a vector name of site ids
+##' @param var.names vector names of state variable names
+##' @param obs.t.mean list of vector of means for the time t for different sites.
+##' @param obs.t.cov list of list of cov for the time t for different sites.
+##'
+##'
+##' @description Make sure that both lists are named with siteids.
+##'
+##' @return This function returns a list with Y and R ready to be sent to the analysis functions.
+##' @export
+
+Construct.R<-function(site.ids, var.names, obs.t.mean, obs.t.cov){
+
+  # keeps Hs of sites
+  site.specific.Rs <-list()
+  #
+  nsite <- length(site.ids)
+  #
+  nvariable <- length(var.names)
+  Y<-c()
+
+  for (site in site.ids){
+    choose <- sapply(var.names, agrep, x=names(obs.t.mean[[site]]), max=1, USE.NAMES = FALSE) %>% unlist
+    # if there is no obs for this site
+    if(length(choose)==0){
+      next;
+    }else{
+      Y <- c(Y, unlist(obs.t.mean[[site]][choose]))
+      # collecting them
+      site.specific.Rs <- c(site.specific.Rs, list(as.matrix(obs.t.cov[[site]][choose,choose])) )
+    }
+    #make block matrix out of our collection
+    R <- Matrix::bdiag(site.specific.Rs) %>% as.matrix()
+  }
+
+  return(list(Y=Y, R=R))
+}
+
+
+##' @title block_matrix
+##' @name block_matrix
+##' @author Guy J. Abel
+##'
+##' @param x Vector of numbers to identify each block.
+##' @param b Numeric vector for the size of each block within the matrix, ordered depending on byrow
+##' @param byrow logical value. If FALSE (the default) the blocks are filled by columns, otherwise the blocks in the matrix are filled by rows.
+##' @param dimnames Character string of name attribute for the basis of the block matrix. If NULL a vector of the same length of b provides the basis of row and column names.
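+##'
+##' @examples
+##' # illustrative only: two blocks of size 2 give a 4x4 matrix; with the default
+##' # byrow = FALSE the blocks are filled column-wise, so
+##' # block_matrix(x = 1:4, b = c(2, 2))
+##' # yields 1s in the top-left block, 2s bottom-left, 3s top-right, 4s bottom-right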
+##'
+##'
+##' @description This function is adopted from the migest package.
+##'
+##' @return Returns a matrix with block sizes determined by the b argument. Each block is filled with the same value taken from x.
+##' @export
+block_matrix <- function (x = NULL, b = NULL, byrow = FALSE, dimnames = NULL) {
+  n <- length(b)
+  bb <- rep(1:n, times = b)
+  dn <- NULL
+  if (is.null(dimnames)) {
+    dn <- rep(1:n, times = b)
+    dd <- unlist(sapply(b, seq, from = 1))
+    dn <- paste0(dn, dd)
+    dn <- list(dn, dn)
+  }
+  if (!is.null(dimnames)) {
+    dn <- dimnames
+  }
+  xx <- matrix(NA, nrow = sum(b), ncol = sum(b), dimnames = dn)
+  k <- 1
+  if (byrow == TRUE) {
+    for (i in 1:n) {
+      for (j in 1:n) {
+        xx[i == bb, j == bb] <- x[k]
+        k <- k + 1
+      }
+    }
+  }
+  if (byrow == FALSE) {
+    for (j in 1:n) {
+      for (i in 1:n) {
+        xx[i == bb, j == bb] <- x[k]
+        k <- k + 1
+      }
+    }
+  }
+  return(xx)
+}
+
+##' @title Construct.H.multisite
+##' @name Construct.H.multisite
+##' @author Hamze
+##'
+##' @param site.ids a vector name of site ids
+##' @param var.names vector names of state variable names
+##' @param obs.t.mean list of vector of means for the time t for different sites.
+##'
+##' @description This function builds the blocked observation operator (H) for multiple sites.
+##'
+##' @return Returns the H matrix that maps the stacked multi-site state vector onto the available observations.
+##' @export
+Construct.H.multisite <- function(site.ids, var.names, obs.t.mean){
+
+  site.ids.with.data <- names(obs.t.mean)
+  site.specific.Hs <- list()
+
+
+  nsite <- length(site.ids) # number of sites
+  nsite.ids.with.data <-length(site.ids.with.data) # number of sites with data
+  nvariable <- length(var.names)
+  #This is used inside the loop below for moving between the sites when populating the big H matrix
+  nobs <- obs.t.mean %>% map_dbl(~length(.x)) %>% max # this gives me the max number of obs at sites
+  nobstotal<-obs.t.mean %>% purrr::flatten() %>% length() # this gives me the total number of obs
+
+  #Having the total number of obs as the row number
+  H <- matrix(0, nobstotal, (nvariable*nsite))
+  j<-1
+
+  for(i in seq_along(site.ids))
+  {
+    site <- site.ids[i]
+    obs.names <- names(obs.t.mean[[site]])
+
+    if(is.null(obs.names)) next;
+
+    if (length(obs.names) == 1)
+    {
+
+      # choose <- sapply(var.names, agrep, x = names(obs.t.mean[[site]]),
+      #                  max = 1, USE.NAMES = FALSE) %>% unlist
+      choose.col <- sapply(obs.names, agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist
+      choose.row <- sapply(var.names, agrep, x = obs.names, max = 1, USE.NAMES = FALSE) %>% unlist
+
+      # empty matrix for this site
+      H.this.site <- matrix(0, nrow(H), nvariable)
+      # fill in the ones based on choose
+      H.this.site [choose.row, choose.col] <- 1
+    }
+
+    if (length(obs.names) > 1)
+    {
+      # empty matrix for this site
+      H.this.site <- matrix(0, nobs, nvariable)
+
+      for (n in seq_along(obs.names))
+      {
+        choose.col <- sapply(obs.names[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist
+        H.this.site[n, choose.col] = 1
+
+      }
+      H.this.site = do.call(rbind, replicate(length(obs.names), H.this.site, simplify = FALSE))
+    }
+
+    # for (n in seq_along(obs.names))
+    # {
+    #   choose.col <- sapply(obs.names[n], agrep, x = var.names, max = 1, USE.NAMES = FALSE) %>% unlist
+    #   H.this.obs[n, choose.col] = 1
+    #
+    # }
+    # H.this.site = data.frame()
+    # for (x in seq_along(obs.names))
+    # {
+    #   test = do.call(rbind, replicate(length(obs.names), H.this.obs[x,], simplify = FALSE))
+    #   H.this.site = rbind(H.this.site, test)
+    #
+    # }
+    # H.this.site = 
as.matrix(H.this.site)
+    # }
+    #
+    pos.row = 1:nobstotal
+    #pos.row<- ((nobs*j)-(nobs-1)):(nobs*j)
+    pos.col<- ((nvariable*i)-(nvariable-1)):(nvariable*i)
+
+    H[pos.row,pos.col] <-H.this.site
+
+    j <- j +1
+  }
+
+  return(H)
+}
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey.R
new file mode 100755
index 00000000000..5a5f2aeb815
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey.R
@@ -0,0 +1,341 @@
+####################################################################################################
+#
+#
+#
+#
+# --- Last updated: 02.01.2019 By Shawn P. Serbin
+####################################################################################################
+
+
+#---------------- Close all devices and delete all variables. -------------------------------------#
+rm(list=ls(all=TRUE)) # clear workspace
+graphics.off() # close any open graphics
+closeAllConnections() # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+
+
+# temporary step until we get this code integrated into pecan
+# library(RCurl)
+# script <- getURL("https://raw.githubusercontent.com/serbinsh/pecan/download_osu_agb/modules/data.remote/R/LandTrendr.AGB.R",
+#                  ssl.verifypeer = FALSE)
+# eval(parse(text = script))
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## set run options, some of these should be tweaked or removed as requirements
+work_dir <- "/data/bmorrison/sda/lai"
+setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions
+
+# Define observation - use existing or generate new?
+# set to a specific file, use that.
+#observation <- ""
+#observation = c("1000000048", "796")
+#observation = c("1000000048", "796", "1100", "71", "954", "39")
+
+# delete an old run
+unlink(c('run','out','SDA'),recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("pecan_MultiSite_SDA.xml")
+
+# doesn't work for one site
+# observation <- c()
+# for (i in seq_along(1:length(settings$run))) {
+#   command <- paste0("settings$run$settings.",i,"$site$id")
+#   obs <- eval(parse(text=command))
+#   observation <- c(observation,obs)
+# }
+
+observation = "1000000048"
+
+# what is this step for???? is this to get the site locations for the map??
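+# (this pulls the vector of site IDs out of the MultiSettings object; site.ids
+#  is used below to filter and order the extracted remote-sensing observations)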
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# + + +#---------AGB----------# +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", + data_dir, product_dates=NULL, file.path(work_dir,"Obs")) +sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", + data_dir, product_dates=NULL, file.path(work_dir,"Obs")) + + + +med_agb_data_sda <- med_agb_data[[1]] %>% filter(Site_ID %in% site.ids) +sdev_agb_data_sda <- sdev_agb_data[[1]] %>% filter(Site_ID %in% site.ids) +site.order <- sapply(site.ids,function(x) which(med_agb_data_sda$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() +med_agb_data_sda <- med_agb_data_sda[site.order,] +sdev_agb_data_sda <- sdev_agb_data_sda[site.order,] + + +#----------LAI----------# +#set directory to output MODIS data too +data_dir <- "/data/bmorrison/sda/lai/modis_lai_data" + +# get the site location information to grab the correct lat/lons for site + add info to the point_list +# ################ Not working on interactive job on MODEX +bety <- list(user='bety', password='bety', host='localhost',dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con + +site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = observation, .con = con) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +# site_info = data.frame() +# for (i in seq_along(1:length(settings$run))) { +# id <- eval(parse(text = paste0("settings$run$settings.",i,"$site$id"))) +# name = eval(parse(text = paste0("settings$run$settings.", i, "$site$name"))) +# lat = eval(parse(text = paste0("settings$run$settings.", i, "$site$lat"))) +# lon = eval(parse(text = paste0("settings$run$settings.", i, "$site$lon"))) +# site_info = rbind(site_info,(cbind(id, name, lon, lat)), stringsAsFactors = F) +# } +site_IDs <- qry_results$id +site_names <- qry_results$sitename +site_coords <- data.frame(cbind(qry_results$lon, qry_results$lat)) +site_info = as.data.frame(cbind(site_IDs, site_names, 
site_coords))
+names(site_info) = c("IDs", "Names", "Longitude", "Latitude")
+site_info$Longitude = as.numeric(site_info$Longitude)
+site_info$Latitude = as.numeric(site_info$Latitude)
+
+
+library(doParallel)
+cl <- parallel::makeCluster(5, outfile="")
+doParallel::registerDoParallel(cl)
+
+start = Sys.time()
+data = foreach(i=1:nrow(site_info)) %dopar% PEcAn.data.remote::call_MODIS(start_date = "2000/01/01", end_date = "2010/12/31", band = "Lai_500m", product = "MOD15A2H", lat = site_info$Latitude[i], lon = site_info$Longitude[i], size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T)
+end = Sys.time()
+difference = end-start
+stopCluster(cl)
+
+output = as.data.frame(data)
+
+# LAI is an 8-day composite (MOD15A2H) --> calculate the peak LAI for a year for each site
+load('/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_5site.RData')
+
+for (i in 1:nrow(site_info))
+{
+  name = as.character(site_info$Names[i], stringsAsFactor = F)
+  g = which(round(output$lat, digits = 3) == round(site_info$Latitude[i], digits = 3))
+  output$tile[g] = name
+}
+
+
+
+data = output
+peak_lai = data.frame()
+years = unique(year(as.Date(data$calendar_date, "%Y-%m-%d")))
+for (i in 1:length(years))
+{
+  year = years[i]
+  g = grep(data$calendar_date, pattern = year)
+  d = data[g,]
+  sites = unique(data$tile)
+  for (j in 1:length(sites))
+  {
+    info = site_info[which(site_info$Names == sites[j]),]
+    index = which(round(d$lat, digits = 3) == round(info$Latitude, digits = 3) & round(d$lon, digits = 3) == round(info$Longitude, digits = 3))
+
+    if (length(index) > 0)
+    {
+      site = d[index,]
+      site$band = info$ID
+      max = which(site$data == max(site$data, na.rm = T))
+      peak = site[max[1],]
+      #peak$data = max
+      #peak$sd = mean
+      peak$calendar_date = paste("Year", year, sep = "_")
+      peak$tile = sites[j]
+      peak_lai = rbind(peak_lai, peak)
+    }
+
+  }
+
+}
+
+
+# sort the data by site so the correct values are placed into the resized data frames below.
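+# (the matrix(unlist(t(...)), byrow = T) reshape below assumes peak_lai rows are
+#  grouped by site and kept in year order within each site, hence this sort)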
+
+peak_lai = peak_lai[order(peak_lai$tile), ]
+
+# # separate data into hotdog style dataframes with row == site and columns = info/data for each site
+med_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$data)), byrow = T, length(unique(peak_lai$tile)), length(years))))
+colnames(med_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date))
+
+sdev_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$sd)), byrow = T, length(unique(peak_lai$tile)), length(years))))
+colnames(sdev_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date))
+
+point_list$med_lai_data <- point_list$med_lai_data[[1]] %>% filter(Site_ID %in% site.ids)
+point_list$stdv_lai <- point_list$stdv_lai[[1]] %>% filter(Site_ID %in% site.ids)
+site.order <- sapply(site.ids,function(x) which(point_list$median_lai$Site_ID %in% x)) %>%
+  as.numeric() %>% na.omit()
+point_list$median_lai <- point_list$median_lai[site.order,]
+point_list$stdv_lai <- point_list$stdv_lai[site.order,]
+
+peak_lai_data_sda = point_list$median_lai
+sdev_lai_data_sda = point_list$stdv_lai
+#
+#
+#
+# # make sure agb and lai only use same dates (for now just to test sda, will fix later)
+# date_agb = colnames(med_agb_data_sda)
+# date_lai = colnames(peak_lai_data_sda)
+#
+# if (length(date_agb) > length(date_lai))
+# {
+#   index = which(!(date_agb %in% date_lai))
+#   med_agb_data_sda = med_agb_data_sda[,-index]
+#   sdev_agb_data_sda = sdev_agb_data_sda[,-index]
+# }
+# if (length(date_lai) > length(date_agb))
+# {
+#   index = which(!(date_lai %in% date_agb))
+#   peak_lai_data_sda = peak_lai_data_sda[,-index]
+#   sdev_lai_data_sda = sdev_lai_data_sda[,-index]
+# }
+#
+# # combine agb and lai datasets
+# med_data_sda = list()
+# med_data_da
+# # point_list = list()
+# # point_list$agb$median_agb = med_agb_data_sda
+# # point_list$agb$stdv_agb = sdev_agb_data_sda
+# # point_list$lai$peak_lai = peak_lai_data_sda
+# # point_list$lai$stdv_lai = sdev_lai_data_sda
+#
+# #
+# #point_list$agb$median_agb = as.character(point_list$agb$median_agb[[1]]) %>% filter(site_ID %in% site.ids)
+#
+
+point_list = list()
+point_list$median_lai = med_lai_data
+point_list$sdev_lai = sdev_lai_data
+
+point_list$median_lai <- point_list$median_lai[[1]] %>% filter(site_ID %in% site.ids)
+point_list$stdv_lai <- point_list$stdv_lai[[1]] %>% filter(Site_ID %in% site.ids)
+site.order <- sapply(site.ids,function(x) which(point_list$median_lai$Site_ID %in% x)) %>%
+  as.numeric() %>% na.omit()
+point_list$median_lai <- point_list$median_lai[site.order,]
+point_list$stdv_lai <- point_list$stdv_lai[site.order,]
+
+med_lai_data_sda = point_list$median_lai
+sdev_lai_data_sda = point_list$sdev_lai
+
+# turning lists to dfs for both mean and cov
+date.obs <- strsplit(names(med_lai_data_sda),"_")[3:length(med_lai_data_sda)] %>% map_chr(~.x[2]) %>% paste0(.,"/12/31")
+
+obs.mean <- names(med_lai_data_sda)[3:length(med_lai_data_sda)] %>%
+  map(function(namesl){
+    ((med_lai_data_sda)[[namesl]] %>%
+       map(~.x %>% as.data.frame %>% `colnames<-`(c('LAI'))) %>%
+       setNames(site.ids[1:length(.)]))
+  }) %>% setNames(date.obs)
+
+obs.cov <-names(sdev_lai_data_sda)[3:length(sdev_lai_data_sda)] %>%
+  map(function(namesl) {
+    ((sdev_lai_data_sda)[[namesl]] %>%
+       map( ~ (.x) ^ 2%>% as.matrix()) %>%
+       setNames(site.ids[1:length(.)]))
+  }) %>% setNames(date.obs)
+
+#--------------------------------------------------------------------------------------------------#
+
+
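+# Note on structure: sda.enkf.multisite expects obs.mean and obs.cov as nested
+# lists keyed first by date and then by site id, e.g. (illustrative keys only)
+# obs.mean[["2005/12/31"]][["1000000048"]] is a one-column data.frame ("LAI") and
+# the matching obs.cov entry is its variance (a covariance matrix when more than
+# one state variable is assimilated).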
+#--------------------------------------------------------------------------------------------------# +## generate new settings object +new.settings <- PEcAn.settings::prepare.settings(settings) +#--------------------------------------------------------------------------------------------------# + +#Construct.R(site.ids, "LAI", obs.mean[[1]], obs.cov[[1]]) + + + +#--------------------------------------------------------------------------------------------------# +## Run SDA + + +# sda.enkf(settings, obs.mean =obs.mean ,obs.cov = obs.cov, +# control=list(trace=T, +# FF=F, +# interactivePlot=F, +# TimeseriesPlot=T, +# BiasPlot=F, +# plot.title="LAI SDA, 1 site", +# facet.plots=T, +# debug=F, +# pause=F)) + +#unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=T, + FF=F, + interactivePlot=F, + TimeseriesPlot=T, + BiasPlot=F, + plot.title="Sobol sampling - 5sites/15 Ensemble - LAI", + facet.plots=T, + debug=T, + pause=F)) + + +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Wrap up +# Send email if configured +if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) { + sendmail(settings$email$from, settings$email$to, + paste0("SDA workflow has finished executing at ", base::date())) +} +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +### EOF diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI.R new file mode 100755 index 00000000000..bb74cff9cbc --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI.R @@ -0,0 +1,334 @@ +#################################################################################################### +# +# +# +# +# --- Last updated: 02.01.2019 By Shawn P. Serbin +#################################################################################################### + + +#---------------- Close all devices and delete all variables. 
-------------------------------------#
+rm(list=ls(all=TRUE)) # clear workspace
+graphics.off() # close any open graphics
+closeAllConnections() # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+
+
+# temporary step until we get this code integrated into pecan
+# library(RCurl)
+# script <- getURL("https://raw.githubusercontent.com/serbinsh/pecan/download_osu_agb/modules/data.remote/R/LandTrendr.AGB.R",
+#                  ssl.verifypeer = FALSE)
+# eval(parse(text = script))
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## set run options, some of these should be tweaked or removed as requirements
+work_dir <- "/data/bmorrison/sda/lai"
+setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions
+
+# Define observation - use existing or generate new?
+# set to a specific file, use that.
+#observation <- ""
+#observation = c("1000000048", "796")
+#observation = c("1000000048", "796", "1100", "71", "954", "39")
+
+# delete an old run
+unlink(c('run','out','SDA'),recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB.xml")
+
+# doesn't work for one site
+observation <- c()
+for (i in seq_along(1:length(settings$run))) {
+  command <- paste0("settings$run$settings.",i,"$site$id")
+  obs <- eval(parse(text=command))
+  observation <- c(observation,obs)
+}
+
+#observation = "1000000048"
+
+# what is this step for???? is this to get the site locations for the map??
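+# (same step as in Multisite_SDA_Bailey.R above: pulls the site IDs out of the
+#  MultiSettings object for filtering/ordering the AGB and LAI observations below)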
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# + + +#---------AGB----------# +data_dir <- "/data/bmorrison/sda/lai/modis_lai_data" + +# get the site location information to grab the correct lat/lons for site + add info to the point_list +# ################ Not working on interactive job on MODEX + +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs")) +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs")) +# +# +# +# med_agb_data_sda <- med_agb_data[[1]] %>% filter(Site_ID %in% site.ids) +# sdev_agb_data_sda <- sdev_agb_data[[1]] %>% filter(Site_ID %in% site.ids) +# site.order <- sapply(site.ids,function(x) which(med_agb_data_sda$Site_ID %in% x)) %>% +# as.numeric() %>% na.omit() +# med_agb_data_sda <- med_agb_data_sda[site.order,] +# sdev_agb_data_sda <- sdev_agb_data_sda[site.order,] +# +# save(med_agb_data_sda, file = '/data/bmorrison/sda/lai/modis_lai_data/med_agb_data_5sites.Rdata') +# save(sdev_agb_data_sda, file = '/data/bmorrison/sda/lai/modis_lai_data/sdev_agb_data_5sites.Rdata') + +load('/data/bmorrison/sda/lai/modis_lai_data/med_agb_data_5sites.Rdata') +load( '/data/bmorrison/sda/lai/modis_lai_data/sdev_agb_data_5sites.Rdata') +# med_agb_data_sda = med_agb_data_sda[1,] +# sdev_agb_data_sda = sdev_agb_data_sda[1,] + +#----------LAI----------# + +# library(doParallel) +# cl <- parallel::makeCluster(5, outfile="") +# doParallel::registerDoParallel(cl) +# +# start = Sys.time() +# data = foreach(i=1:length(site_info$site_id), .combine = rbind) %dopar% PEcAn.data.remote::call_MODIS(start_date = "2000/01/01", end_date = "2017/12/31", band = "Lai_500m", product = "MOD15A2H", lat = site_info$lat[i], lon = site_info$lon[i], size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T, progress = T) +# end = Sys.time() +# difference = end-start +# stopCluster(cl) + +#for 1 site +#output2 = PEcAn.data.remote::call_MODIS(start_date = "2000/01/01", end_date = "2010/12/31", band = "Lai_500m", product = 
"MOD15A2H", lat = site_info$lat[i], lon = site_info$lon[i], size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T) + +# output = as.data.frame(data) +# save(output, file = '/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_5sites.Rdata') + +load('/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_5sites.Rdata') + +#rename tiles by actual site name +for (i in 1:length(site_info$site_name)) +{ + name = as.character(site_info$site_name[i], stringsAsFactor = F) + g = which(round(output$lat, digits = 3) == round(site_info$lat[i], digits = 3)) + output$tile[g] = name +} + +# compute peak lai per year +data = output +peak_lai = data.frame() +years = unique(year(as.Date(data$calendar_date, "%Y-%m-%d"))) +for (i in 1:length(years)) +{ + year = years[i] + g = grep(data$calendar_date, pattern = year) + d = data[g,] + sites = unique(data$tile) + for (j in 1:length(sites)) + { + index = which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3)) + #info = site_info[which(site_info$site_name == sites[j]),] + #index = which(round(d$lat, digits = 3) == round(site_info$lat, digits = 3) & round(d$lon, digits = 3) == round(site_info$lon, digits = 3)) + + if (length(index) > 0) + { + site = d[index,] + site$band = site_info$site_id[j] + max = which(site$data == max(site$data, na.rm = T)) + peak = site[max[1],] + #peak$data = max + #peak$sd = mean + peak$calendar_date = paste("Year", year, sep = "_") + peak$tile = sites[j] + peak_lai = rbind(peak_lai, peak) + } + + } + +} + + +# sort the data by site so the correct values are placed into the resized data frames below. + +peak_lai = peak_lai[order(peak_lai$tile), ] + +# # separate data into hotdog style dataframes with row == site and columns = info/data for each site +med_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$data)), byrow = T, length(unique(peak_lai$tile)), length(years)))) +colnames(med_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date)) +med_lai_data$Site_ID = as.character(med_lai_data$Site_ID) +med_lai_data = list(med_lai_data) + +sdev_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$sd)), byrow = T, length(unique(peak_lai$tile)), length(years)))) +colnames(sdev_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date)) +sdev_lai_data$Site_ID = as.character(sdev_lai_data$Site_ID) +sdev_lai_data = list(sdev_lai_data) + +#med_lai_data = list(med_lai_data) +med_lai_data_sda <- med_lai_data[[1]] %>% filter(Site_ID %in% site.ids) +sdev_lai_data_sda <- sdev_lai_data[[1]] %>% filter(Site_ID %in% site.ids) +site.order <- sapply(site.ids,function(x) which(med_lai_data_sda$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() +med_lai_data_sda <- med_lai_data_sda[site.order,] +sdev_lai_data_sda <- sdev_lai_data_sda[site.order,] + + +#make sure agb and lai only use same dates (for now just to test sda, will fix later) +date_agb = colnames(med_agb_data_sda) +date_lai = colnames(med_lai_data_sda) + +if (length(date_agb) > length(date_lai)) +{ + index = which(!(date_agb %in% date_lai)) + med_agb_data_sda = med_agb_data_sda[,-index] + sdev_agb_data_sda = sdev_agb_data_sda[,-index] +} +if (length(date_lai) > length(date_agb)) +{ + index = which(!(date_lai %in% date_agb)) + med_lai_data_sda = med_lai_data_sda[,-index] + sdev_lai_data_sda = sdev_lai_data_sda[,-index] +} + +### REFORMAT ALL DATA 
BY YEAR INSTEAD OF SITE HOTDOG STYLE. COMBINE AGB + LAI INTO 1 MED + 1 SDEV LIST(S). +med_data = as.data.frame(cbind(colnames(med_lai_data_sda[,3:ncol(med_lai_data_sda)]), med_lai_data_sda$Site_ID, unlist(med_lai_data_sda[,3:ncol(med_lai_data_sda)]), unlist(med_agb_data_sda[,3:ncol(med_agb_data_sda)])), row.names = F, stringsAsFactors = F) +names(med_data) = c("date", "site_id", "med_lai", "med_agb") +med_data = med_data[order(med_data$date),] +med_data$date = as.character(med_data$date) +med_data$site_id = as.character(med_data$site_id, stringsAsFactors = F) +med_data$med_lai = as.numeric(med_data$med_lai, stringsAsFactors = F) +med_data$med_agb = as.numeric(med_data$med_agb, stringsAsFactors = F) +med_data = med_data %>% + split(.$date) + +date.obs <- strsplit(names(med_data), "_") %>% + map_chr(~.x[2]) %>% paste0(.,"/07/15") + + +med_data = names(med_data) %>% + map(function(namesl){ + med_data[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4] %>% setNames(c("LAI", "AbvGrndWood"))) %>% + setNames(site.ids) + }) %>% setNames(date.obs) + +names = names(med_data) +for (i in 1:length(names)) +{ + for (j in 1:length(names(med_data[[names[1]]]))) + { + rownames(med_data[[i]][[j]]) = NULL + } +} + + +sdev_data = as.data.frame(cbind(colnames(sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)]), sdev_lai_data_sda$Site_ID, unlist(sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)]), rep(0, nrow(sdev_lai_data_sda)), rep(0, nrow(sdev_lai_data_sda)),unlist(sdev_agb_data_sda[,3:ncol(sdev_agb_data_sda)])), row.names = F, stringsAsFactors =F) +names(sdev_data) = c("date", "site_id", "sdev_lai", "h1", "h2", "sdev_agb") +sdev_data = sdev_data[order(sdev_data$date),] +sdev_data$date = as.character(sdev_data$date, stringsAsFactors = F) +sdev_data$site_id = as.character(sdev_data$site_id, stringsAsFactors = F) +sdev_data$sdev_lai = as.numeric(sdev_data$sdev_lai, stringsAsFactors = F) +sdev_data$sdev_agb = as.numeric(sdev_data$sdev_agb, stringsAsFactors = F) +sdev_data$h1 = as.numeric(sdev_data$h1) +sdev_data$h2 = as.numeric(sdev_data$h2) +sdev_data = sdev_data %>% + split(.$date) + + + +sdev_data = names(sdev_data) %>% + map(function(namesl){ + sdev_data[[namesl]] %>% + split(.$site_id) %>% + map(~matrix(data = .x[3:6]^2, nrow = 2, ncol = 2)) %>% + setNames(site.ids) + }) %>% setNames(date.obs) + + +obs.mean = med_data + +obs.cov = sdev_data +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## generate new settings object +new.settings <- PEcAn.settings::prepare.settings(settings) +#--------------------------------------------------------------------------------------------------# + +#Construct.R(site.ids, "LAI", obs.mean[[1]], obs.cov[[1]]) + + + +#--------------------------------------------------------------------------------------------------# +## Run SDA + + +#unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=TRUE, + FF=FALSE, + interactivePlot=FALSE, + TimeseriesPlot=FALSE, + BiasPlot=FALSE, + plot.title=NULL, + facet.plots=4, + debug=F, + pause=FALSE)) + +#-----------------------------------------------------------------------------------------------# +load('/data/bmorrison/sda/lai/SDA/sda.output.Rdata', verbose = T) +obs.times <- names(obs.mean) +post.analysis.multisite.ggplot(settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=4, 
readsFF=NULL) + + +#--------------------------------------------------------------------------------------------------# +## Wrap up +# Send email if configured +if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) { + sendmail(settings$email$from, settings$email$to, + paste0("SDA workflow has finished executing at ", base::date())) +} +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +### EOF diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites.R new file mode 100755 index 00000000000..2cd058b917e --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites.R @@ -0,0 +1,233 @@ + +#---------------- Close all devices and delete all variables. -------------------------------------# +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) + +#--------------------------------------------------------------------------------------------------# + +# delete an old run +unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_2_Sites.xml") + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + +#observation = "1000000048" + +# what is this step for???? is this to get the site locations for the map?? 
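+# Housekeeping sketch: the BETY connection opened below is never closed in this
+# script; once the site query has been fetched it could be released explicitly
+# (assuming PEcAn.DB::db.close is available in this PEcAn version):
+# PEcAn.DB::db.close(con)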
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>%
+  map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character()
+
+# sample from parameters used for both sensitivity analysis and Ens
+get.parameter.samples(settings,
+                      ens.sample.method = settings$ensemble$samplingspace$parameters$method)
+## Aside: if method were set to unscented, would take minimal changes to do UnKF
+#--------------------------------------------------------------------------------------------------#
+
+
+# get the site location information to grab the correct lat/lons for site + add info to the point_list
+# ################ Not working on interactive job on MODEX
+
+PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****")
+bety <- list(user='bety', password='bety', host='localhost',
+             dbname='bety', driver='PostgreSQL',write=TRUE)
+con <- PEcAn.DB::db.open(bety)
+bety$con <- con
+site_ID <- observation
+suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon,
+                                            ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})",
+                                            ids = site_ID, .con = con))
+suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry))
+suppressWarnings(qry_results <- DBI::dbFetch(qry_results))
+site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat,
+                  lon=qry_results$lon, time_zone=qry_results$time_zone)
+
+
+load('/data/bmorrison/sda/lai/modis_lai_data/med_agb_data.Rdata')
+load( '/data/bmorrison/sda/lai/modis_lai_data/sdev_agb_data.Rdata')
+load('/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_2_site.Rdata')
+
+#rename tiles by actual site name
+for (i in 1:length(site_info$site_name))
+{
+  name = as.character(site_info$site_name[i], stringsAsFactor = F)
+  g = which(round(output$lat, digits = 3) == round(site_info$lat[i], digits = 3))
+  output$tile[g] = name
+}
+
+# compute peak lai per year
+data = output
+peak_lai = data.frame()
+years = unique(year(as.Date(data$calendar_date, "%Y-%m-%d")))
+for (i in 1:length(years))
+{
+  year = years[i]
+  g = grep(data$calendar_date, pattern = year)
+  d = data[g,]
+  sites = unique(data$tile)
+  for (j in 1:length(sites))
+  {
+    #info = site_info[which(site_info$site_name == sites[j]),]
+    # index site_info by j so the lat/lon comparison is site-specific rather than recycled across sites
+    index = which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3))
+
+    if (length(index) > 0)
+    {
+      site = d[index,]
+      site$band = site_info$site_id[j]
+      max = which(site$data == max(site$data, na.rm = T))
+      peak = site[max[1],]
+      #peak$data = max
+      #peak$sd = mean
+      peak$calendar_date = paste("Year", year, sep = "_")
+      peak$tile = sites[j]
+      peak_lai = rbind(peak_lai, peak)
+    }
+
+  }
+
+}
+
+
+# sort the data by site so the correct values are placed into the resized data frames below.
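+# (Why the ordering matters: the wide-format tables built below pair
+# unique(peak_lai$band) with unique(peak_lai$tile) row by row, which is only
+# correct once peak_lai rows are grouped by site.)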
+ +peak_lai = peak_lai[order(peak_lai$tile), ] + +# # separate data into hotdog style dataframes with row == site and columns = info/data for each site +med_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$data)), byrow = T, length(unique(peak_lai$tile)), length(years)))) +colnames(med_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date)) +med_lai_data$Site_ID = as.character(med_lai_data$Site_ID) +med_lai_data = list(med_lai_data) + +sdev_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$sd)), byrow = T, length(unique(peak_lai$tile)), length(years)))) +colnames(sdev_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date)) +sdev_lai_data$Site_ID = as.character(sdev_lai_data$Site_ID) +sdev_lai_data = list(sdev_lai_data) + + +#med_lai_data = list(med_lai_data) +med_lai_data_sda <- med_lai_data[[1]] %>% filter(Site_ID %in% site.ids) +sdev_lai_data_sda <- sdev_lai_data[[1]] %>% filter(Site_ID %in% site.ids) +site.order <- sapply(site.ids,function(x) which(med_lai_data_sda$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() +med_lai_data_sda <- med_lai_data_sda[site.order,] +sdev_lai_data_sda <- sdev_lai_data_sda[site.order,] + + +#make sure agb and lai only use same dates (for now just to test sda, will fix later) +date_agb = colnames(med_agb_data_sda) +date_lai = colnames(med_lai_data_sda) + +if (length(date_agb) > length(date_lai)) +{ + index = which(!(date_agb %in% date_lai)) + med_agb_data_sda = med_agb_data_sda[,-index] + sdev_agb_data_sda = sdev_agb_data_sda[,-index] +} +if (length(date_lai) > length(date_agb)) +{ + index = which(!(date_lai %in% date_agb)) + med_lai_data_sda = med_lai_data_sda[,-index] + sdev_lai_data_sda = sdev_lai_data_sda[,-index] +} + + +### REFORMAT ALL DATA BY YEAR INSTEAD OF SITE HOTDOG STYLE. COMBINE AGB + LAI INTO 1 MED + 1 SDEV LIST(S). 
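+# Target structure, as a sketch (the date and site ID shown are illustrative):
+# obs.mean[["2010/07/15"]][["796"]]  # one-row data.frame: AbvGrndWood, LAI
+# obs.cov[["2010/07/15"]][["796"]]   # 2x2 matrix of squared sdevs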
+med_data = as.data.frame(cbind(colnames(med_lai_data_sda[,3:ncol(med_lai_data_sda)]), med_lai_data_sda$Site_ID, unlist(med_agb_data_sda[,3:ncol(med_agb_data_sda)]), unlist(med_lai_data_sda[,3:ncol(med_lai_data_sda)])), row.names = F, stringsAsFactors = F) +names(med_data) = c("date", "site_id", "med_agb", "med_lai") +med_data = med_data[order(med_data$date),] +med_data$date = as.character(med_data$date) +med_data$site_id = as.character(med_data$site_id, stringsAsFactors = F) +med_data$med_lai = as.numeric(med_data$med_lai, stringsAsFactors = F) +med_data$med_agb = as.numeric(med_data$med_agb, stringsAsFactors = F) +med_data = med_data %>% + split(.$date) + +date.obs <- strsplit(names(med_data), "_") %>% + map_chr(~.x[2]) %>% paste0(.,"/07/15") + + +med_data = names(med_data) %>% + map(function(namesl){ + med_data[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI"))) %>% + setNames(site.ids) + }) %>% setNames(date.obs) + +names = names(med_data) +for (i in 1:length(names)) +{ + for (j in 1:length(names(med_data[[names[1]]]))) + { + rownames(med_data[[i]][[j]]) = NULL + } +} + + +sdev_data = as.data.frame(cbind(colnames(sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)]), sdev_lai_data_sda$Site_ID, unlist(sdev_agb_data_sda[,3:ncol(sdev_agb_data_sda)]), rep(0, nrow(sdev_lai_data_sda)), rep(0, nrow(sdev_lai_data_sda)),unlist(sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)])), row.names = F, stringsAsFactors =F) +names(sdev_data) = c("date", "site_id", "sdev_agb", "h1", "h2", "sdev_lai") +sdev_data = sdev_data[order(sdev_data$date),] +sdev_data$date = as.character(sdev_data$date, stringsAsFactors = F) +sdev_data$site_id = as.character(sdev_data$site_id, stringsAsFactors = F) +sdev_data$sdev_lai = as.numeric(sdev_data$sdev_lai, stringsAsFactors = F) +sdev_data$sdev_agb = as.numeric(sdev_data$sdev_agb, stringsAsFactors = F) +sdev_data$h1 = as.numeric(sdev_data$h1) +sdev_data$h2 = as.numeric(sdev_data$h2) +sdev_data = sdev_data %>% + split(.$date) + +sdev_data = names(sdev_data) %>% + map(function(namesl){ + sdev_data[[namesl]] %>% + split(.$site_id) %>% + map(~matrix(data = .x[3:6]^2, nrow = 2, ncol = 2)) %>% + setNames(site.ids) + }) %>% setNames(date.obs) + + +obs.mean = med_data + +obs.cov = sdev_data + +new.settings <- PEcAn.settings::prepare.settings(settings) + +#unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=TRUE, + FF=FALSE, + interactivePlot=FALSE, + TimeseriesPlot=FALSE, + BiasPlot=FALSE, + plot.title=NULL, + facet.plots=4, + debug=F, + pause=FALSE)) + + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites_with_NA.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites_with_NA.R new file mode 100755 index 00000000000..4e1f19e3190 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_2_sites_with_NA.R @@ -0,0 +1,347 @@ + +#---------------- Close all devices and delete all variables. 
-------------------------------------# +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) + +#--------------------------------------------------------------------------------------------------# + +# delete an old run +#unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_2_Sites.xml") + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + +#observation = "1000000048" + +# what is this step for???? is this to get the site locations for the map?? +if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +# get the site location information to grab the correct lat/lons for site + add info to the point_list +# ################ Not working on interactive job on MODEX + +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + + +load('/data/bmorrison/sda/lai/modis_lai_data/med_agb_data.Rdata') +load( '/data/bmorrison/sda/lai/modis_lai_data/sdev_agb_data.Rdata') +load('/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_2_site.Rdata') + +#rename tiles by actual site name +for (i in 1:length(site_info$site_name)) +{ + name = as.character(site_info$site_name[i], stringsAsFactor = F) + g = which(round(output$lat, digits = 3) == round(site_info$lat[i], digits = 3)) + output$tile[g] = name +} + +# compute peak lai per year +data = output +peak_lai = data.frame() +years = unique(year(as.Date(data$calendar_date, "%Y-%m-%d"))) +for (i in 1:length(years)) +{ + year = years[i] + g = grep(data$calendar_date, pattern = year) + d = data[g,] + sites = unique(data$tile) 
+  for (j in 1:length(sites))
+  {
+    #info = site_info[which(site_info$site_name == sites[j]),]
+    index = which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3))
+
+    if (length(index) > 0)
+    {
+      site = d[index,]
+      site$band = site_info$site_id[j]
+      max = which(site$data == max(site$data[which(site$data <= quantile(site$data, probs = 0.95))], na.rm = T))#max(site$data, na.rm = T))
+      peak = site[max[1],]
+
+      peak$calendar_date = paste("Year", year, sep = "_")
+      peak$tile = sites[j]
+      peak_lai = rbind(peak_lai, peak)
+    }
+
+  }
+
+}
+
+
+# sort the data by site so the correct values are placed into the resized data frames below.
+
+peak_lai = peak_lai[order(peak_lai$tile), ]
+
+# following the methods of Viskari et al 2015 for LAI sd values
+peak_lai$sd[peak_lai$sd < 0.66] = 0.66
+
+# # separate data into hotdog style dataframes with row == site and columns = info/data for each site
+med_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$data)), byrow = T, length(unique(peak_lai$tile)), length(years))))
+colnames(med_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date))
+med_lai_data$Site_ID = as.character(med_lai_data$Site_ID)
+med_lai_data = list(med_lai_data)
+
+sdev_lai_data = cbind(unique(peak_lai$band), unique(peak_lai$tile), as.data.frame(matrix(unlist(t(peak_lai$sd)), byrow = T, length(unique(peak_lai$tile)), length(years))))
+colnames(sdev_lai_data) = c("Site_ID", "Site_Name", unique(peak_lai$calendar_date))
+sdev_lai_data$Site_ID = as.character(sdev_lai_data$Site_ID)
+sdev_lai_data = list(sdev_lai_data)
+
+
+#med_lai_data = list(med_lai_data)
+med_lai_data_sda <- med_lai_data[[1]] %>% filter(Site_ID %in% site.ids)
+sdev_lai_data_sda <- sdev_lai_data[[1]] %>% filter(Site_ID %in% site.ids)
+site.order <- sapply(site.ids,function(x) which(med_lai_data_sda$Site_ID %in% x)) %>%
+  as.numeric() %>% na.omit()
+med_lai_data_sda <- med_lai_data_sda[site.order,]
+sdev_lai_data_sda <- sdev_lai_data_sda[site.order,]
+
+
+#make sure agb and lai only use same dates (for now just to test sda, will fix later)
+date_agb = colnames(med_agb_data_sda)
+date_lai = colnames(med_lai_data_sda)
+
+# if (length(date_agb) > length(date_lai))
+# {
+#   index = which(!(date_agb %in% date_lai))
+#   med_agb_data_sda = med_agb_data_sda[,-index]
+#   sdev_agb_data_sda = sdev_agb_data_sda[,-index]
+# }
+# if (length(date_lai) > length(date_agb))
+# {
+#   index = which(!(date_lai %in% date_agb))
+#   med_lai_data_sda = med_lai_data_sda[,-index]
+#   sdev_lai_data_sda = sdev_lai_data_sda[,-index]
+# }
+
+# fix missing data to feed into SDA
+colnames = sort(unique(c(date_agb, date_lai)))
+
+blank = as.data.frame(matrix(NA, nrow = 2, ncol = length(colnames)))
+colnames(blank) = colnames
+
+lai_same = which(colnames(blank) %in% colnames(med_lai_data_sda))[-(1:2)]
+agb_same = which(colnames(blank) %in% colnames(med_agb_data_sda))[-(1:2)]
+
+if (length(agb_same) < length(colnames(blank)[-(1:2)]))
+{
+  agb_med = blank
+  agb_sdev = blank
+  agb_med[,1:2] = med_agb_data_sda[,1:2]
+  agb_sdev[,1:2] = sdev_agb_data_sda[,1:2]
+  # slot the observed AGB columns into their matching dates (agb_same), leaving NAs elsewhere
+  agb_med[ ,agb_same] = med_agb_data_sda[,3:ncol(med_agb_data_sda)]
+  agb_sdev[ ,agb_same] = sdev_agb_data_sda[,3:ncol(sdev_agb_data_sda)]
+} else {
+  agb_med = med_agb_data_sda
+  agb_sdev = sdev_agb_data_sda
+}
+if (length(lai_same) < length(colnames(blank)[-(1:2)]))
+{
+  lai_med = blank
+  lai_sdev = blank
+  lai_med[,1:2] = med_lai_data_sda[,1:2]
+  lai_sdev[,1:2] = sdev_lai_data_sda[,1:2]
+  lai_med[
,lai_same] = med_lai_data_sda[,3:ncol(med_lai_data_sda)] + lai_sdev[ ,lai_same] = sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)] +} else { + lai_med = med_lai_data_sda + lai_sdev = sdev_lai_data_sda +} + +med_lai_data_sda = lai_med +med_agb_data_sda = agb_med +sdev_lai_data_sda = lai_sdev +sdev_agb_data_sda = agb_sdev + +### REFORMAT ALL DATA BY YEAR INSTEAD OF SITE HOTDOG STYLE. COMBINE AGB + LAI INTO 1 MED + 1 SDEV LIST(S). + +med_data = as.data.frame(cbind(sort(rep(colnames(med_lai_data_sda[,3:ncol(med_lai_data_sda)]), 2)), med_lai_data_sda$Site_ID, unlist(c(med_agb_data_sda[,3:ncol(med_agb_data_sda)]), use.names = F), unlist(c(med_lai_data_sda[,3:ncol(med_lai_data_sda)]), use.names = F))) +names(med_data) = c("date", "site_id", "med_agb", "med_lai") +#med_data = med_data[order(med_data$date),] +med_data$date = as.character(med_data$date) +med_data$site_id = as.character(med_data$site_id, stringsAsFactors = F) +med_data$med_lai = as.numeric(as.character(med_data$med_lai, stringsAsFactors = F))#as.numeric(levels(med_data$med_lai), stringsAsFactors = F)) +med_data$med_agb = as.numeric(as.character(med_data$med_agb, stringsAsFactors = F))#as.numeric(levels(med_data$med_agb), stringsAsFactors = F) +med_data = med_data %>% + split(.$date) + +date.obs <- strsplit(names(med_data), "_") %>% + map_chr(~.x[2]) %>% paste0(.,"/07/15") + +med_data = names(med_data) %>% + map(function(namesl){ + med_data[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI"))) %>% + setNames(site.ids) + }) %>% setNames(date.obs) + +names = names(med_data) +for (i in 1:length(names)) +{ + for (j in 1:length(names(med_data[[names[i]]]))) + { + d = med_data[[i]][[j]] + + if (length(which(is.na(d)))>=1) + { + d = d[-which(is.na(d))] + } + med_data[[i]][[j]] = d + rownames(med_data[[i]][[j]]) = NULL + } +} + + +sdev_data = as.data.frame(cbind(sort(rep(colnames(sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)]), 2)), sdev_lai_data_sda$Site_ID, unlist(c(sdev_agb_data_sda[,3:ncol(sdev_agb_data_sda)]), use.names = F), rep(0, nrow(sdev_lai_data_sda)), rep(0, nrow(sdev_lai_data_sda)), unlist(c(sdev_lai_data_sda[,3:ncol(sdev_lai_data_sda)]), use.names = F))) # +names(sdev_data) = c("date", "site_id", "sdev_agb","h1", "h2", "sdev_lai") #c("date", "site_id", "sdev_agb", "h1", "h2", "sdev_lai") +sdev_data = sdev_data[order(sdev_data$date),] +sdev_data$date = as.character(sdev_data$date, stringsAsFactors = F) +sdev_data$site_id = as.character(sdev_data$site_id, stringsAsFactors = F) +sdev_data$sdev_lai = as.numeric(as.character(sdev_data$sdev_lai, stringsAsFactors = F)) #as.numeric(sdev_data$sdev_lai, stringsAsFactors = F) +sdev_data$sdev_agb = as.numeric(as.character(sdev_data$sdev_agb, stringsAsFactors = F))#as.numeric(sdev_data$sdev_agb, stringsAsFactors = F) +sdev_data$h1 = as.numeric(as.character(sdev_data$h1, stringsAsFactors = F)) +sdev_data$h2 = as.numeric(as.character(sdev_data$h2, stringsAsFactors = F)) + +#sdev_data[is.na(sdev_data$sdev_lai), 4:5] = NA + +sdev_data = sdev_data %>% + split(.$date) + +sdev_data = names(sdev_data) %>% + map(function(namesl){ + sdev_data[[namesl]] %>% + split(.$site_id) %>% + map(~matrix(data = .x[3:6]^2, nrow = 2, ncol = 2)) %>% + setNames(site.ids)}) %>% + setNames(date.obs) + + +names = names(sdev_data) +for (i in 1:length(names)) +{ + for (j in 1:length(names(sdev_data[[names[i]]]))) + { + d = matrix(unlist(sdev_data[[i]][[j]]), nrow = 2, ncol = 2) + + if (length(which(is.na(d)))>=1) + { + index = which(is.na(d)) + d = matrix(d[-index], nrow = 1, ncol = 1) + } 
+ sdev_data[[i]][[j]] = d + # rownames(sdev_data[[i]][[j]]) = NULL + } +} + + + + +obs.mean = med_data + +obs.cov = sdev_data + +new.settings <- PEcAn.settings::prepare.settings(settings) + +# unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=TRUE, + FF=FALSE, + interactivePlot=FALSE, + TimeseriesPlot=FALSE, + BiasPlot=FALSE, + plot.title=NULL, + facet.plots=4, + debug=FALSE, + pause=FALSE)) + +### FOR PLOTTING ONLY +# load('/data/bmorrison/sda/lai/SDA/sda.output.Rdata') +# plot.title=NULL +# facetg=4 +# readsFF=NULL +# +# settings = new.settings +# +# obs.mean = Viz.output[[2]] +# obs.cov = Viz.output[[3]] +# obs.times = names(obs.mean) +# PEcAn.assim.sequential::post.analysis.multisite.ggplot(settings = new.settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=4, readsFF=NULL, observed_vars = c("AbvGrndWood", "LAI")) +# +# +# observed_vars = c("AbvGrndWood", "LAI") +# ## fix values in obs.mean/obs.cov to include NAs so there are the same number of columns for plotting purposes only +# for (name in names(obs.mean)) +# { +# data_mean = obs.mean[name] +# data_cov = obs.cov[name] +# sites = names(data[[1]]) +# for (site in sites) +# { +# d_mean = data_mean[[1]][[site]] +# d_cov = data_cov[[1]][[site]] +# colnames = names(d_mean) +# if (length(colnames) < length(observed_vars)) +# { +# missing = which(!(observed_vars %in% colnames)) +# missing_mean = as.data.frame(NA) +# colnames(missing_mean) = observed_vars[missing] +# d_mean = cbind(d_mean, missing_mean) +# +# missing_cov = matrix(0, nrow = length(observed_vars), ncol = length(observed_vars)) +# diag(missing_cov) = c(diag(d_cov), NA) +# d_cov = missing_cov +# } +# data_mean[[1]][[site]] = d_mean +# data_cov[[1]][[site]] = d_cov +# } +# obs.mean[name] = data_mean +# obs.cov[name] = data_cov +# } + +obs.times = names(obs.mean) diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_7_sites.r b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_7_sites.r new file mode 100755 index 00000000000..a5abc358235 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_AGB_LAI_7_sites.r @@ -0,0 +1,339 @@ + +#---------------- Close all devices and delete all variables. 
-------------------------------------#
+rm(list=ls(all=TRUE)) # clear workspace
+graphics.off() # close any open graphics
+closeAllConnections() # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+library(furrr)
+library(tictoc)
+
+#--------------------------------------------------------------------------------------------------#
+######################################## INITIAL SET UP STUFF #######################################
+work_dir <- "/data/bmorrison/sda/lai"
+setwd(work_dir)
+# delete an old run
+#unlink(c('run','out','SDA'),recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("/data/bmorrison/sda/lai/pecan_MultiSite_SDA_LAI_AGB_8_Sites_2009.xml")
+
+
+# collect site IDs from each run block (note: this loop does not work for a single site)
+observation <- c()
+for (i in seq_along(settings$run)) {
+  command <- paste0("settings$run$settings.",i,"$site$id")
+  obs <- eval(parse(text=command))
+  observation <- c(observation,obs)
+}
+
+#observation = "1000000048"
+
+# what is this step for???? is this to get the site locations for the map??
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>%
+  map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character()
+
+# sample from parameters used for both sensitivity analysis and Ens
+get.parameter.samples(settings,
+                      ens.sample.method = settings$ensemble$samplingspace$parameters$method)
+## Aside: if method were set to unscented, would take minimal changes to do UnKF
+#--------------------------------------------------------------------------------------------------#
+
+
+############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ###########################
+################ Not working on interactive job on MODEX
+
+PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****")
+bety <- list(user='bety', password='bety', host='localhost',
+             dbname='bety', driver='PostgreSQL',write=TRUE)
+con <- PEcAn.DB::db.open(bety)
+bety$con <- con
+site_ID <- observation
+suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon,
+                                            ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})",
+                                            ids = site_ID, .con = con))
+suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry))
+suppressWarnings(qry_results <- DBI::dbFetch(qry_results))
+site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat,
+                  lon=qry_results$lon, time_zone=qry_results$time_zone)
+
+
+###################### EXTRACT AGB DATA + REFORMAT LONG VS.
WIDE STYLE ##################################### +### this is for LandTrendr data ### + +# output folder for the data +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# +# # extract the data +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# +# ### temporary fix to make agb data long vs. wide format to match modis data. ### +ndates = colnames(med_agb_data)[-c(1:2)] + +med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) + +sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE) +sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)]) + +agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value)) +names(agb_data) = c("Site_ID", "Date", "Median", "SD") +agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE) + +# save AGB data into long style +save(agb_data, file = '/data/bmorrison/sda/lai/modis_lai_data/agb_data_update_sites.Rdata') +# +# +# # ####################### Extract MODISTools LAI data ############################## +# +# library(doParallel) +# cl <- parallel::makeCluster(10, outfile="") +# doParallel::registerDoParallel(cl) +# +# start = Sys.time() +# # keep QC_filter on for this because bad LAI values crash the SDA. Progress can be turned off if it annoys you. +# data = foreach(i=1:length(site_info$site_id), .combine = rbind) %dopar% PEcAn.data.remote::call_MODIS(start_date = "2000/01/01", end_date = "2017/12/31", band = "Lai_500m", product = "MOD15A2H", lat = site_info$lat[i], lon = site_info$lon[i], size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T, progress = T) +# end = Sys.time() +# difference = end-start +# stopCluster(cl) +# +# # already in long format style for dataframe +# output = as.data.frame(data) +# save(output, file = '/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_update_sites.Rdata') +# +# # change tile names to the site name +# for (i in 1:length(site_info$site_name)) +# { +# name = as.character(site_info$site_id[i], stringsAsFactor = F) +# g = which(round(output$lat, digits = 3) == round(site_info$lat[i], digits = 3)) +# output$tile[g] = name +# } +# # remove extra data +# output = output[,c(4,2,8,10)] +# colnames(output) = names(agb_data) +# +# # compute peak lai per year +# data = output +# peak_lai = data.frame() +# years = unique(year(as.Date(data$Date, "%Y-%m-%d"))) +# for (i in seq_along(years)) +# { +# d = data[grep(data$Date, pattern = years[i]),] +# sites = unique(d$Site_ID) +# for (j in seq_along(sites)) +# { +# index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3)) +# site = d[index,] +# if (length(index) > 0) +# { +# # peak lai is the max value that is the value <95th quantile to remove potential outlier values +# max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = T))[1] +# peak = 
data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD)
+#       peak_lai = rbind(peak_lai, peak)
+#
+#     }
+#   }
+# }
+#
+# # a fix for low SD values because of an issue with MODIS LAI error calculations. Reference: Viskari et al 2015.
+# peak_lai$SD[peak_lai$SD < 0.66] = 0.66
+#
+# #output data
+# names(peak_lai) = c("Site_ID", "Date", "Median", "SD")
+# save(peak_lai, file = '/data/bmorrison/sda/lai/modis_lai_data/peak_lai_output_update_sites.Rdata')
+#
+#
+# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ########################
+# #################
+load('/data/bmorrison/sda/lai/modis_lai_data/agb_data_update_sites.Rdata')
+load( '/data/bmorrison/sda/lai/modis_lai_data/peak_lai_output_update_sites.Rdata')
+# the loaded output stores these columns as factors, so convert them back to numeric/character
+peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F))
+peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F)
+
+observed_vars = c("AbvGrndWood", "LAI")
+
+
+# merge the agb and lai dataframes, placing NA values where data is missing between the 2 datasets
+observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T)
+names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai")
+
+# order by year
+observed_data = observed_data[order(observed_data$Date),]
+
+#sort by date
+dates = sort(unique(observed_data$Date))
+
+# create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon)
+obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai)
+obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE)
+
+obs.mean = obs.mean %>%
+  split(.$date)
+
+# change the dates to be middle of the year
+date.obs <- strsplit(names(obs.mean), "_") %>%
+  map_chr(~.x[2]) %>% paste0(.,"/07/15")
+
+obs.mean = names(obs.mean) %>%
+  map(function(namesl){
+    obs.mean[[namesl]] %>%
+      split(.$site_id) %>%
+      map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI")) %>% `row.names<-`(NULL)) %>%
+      setNames(site.ids)
+  }) %>% setNames(date.obs)
+
+#remove NA data as this will crash the SDA. Removes row numbers (may not be necessary)
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.mean[[name]]))
+  {
+    na_index = which(!(is.na(obs.mean[[ name]][[site]])))
+    colnames = names(obs.mean[[name]][[site]])
+    if (length(na_index) > 0)
+    {
+      obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index]
+      row.names(obs.mean[[name]][[site]]) = NULL
+    }
+  }
+}
+
+# fillers are 0's for the covariance matrix. This will need to change for differing size matrices when more variables are added in.
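+# Per-site covariance sketch: with the two state variables used here, the list
+# built below reduces to a 2x2 diagonal matrix of variances, roughly
+# diag(c(sdev_agb, sdev_lai)^2); off-diagonals stay 0 because no error
+# covariance between AGB and LAI is assumed.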
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data))) +# names(filler_0) = paste0("h", seq_len(length(observed_vars))) + +# create obs.cov dataframe -->list by date +obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai)#, filler_0) +obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F) + +obs.cov = obs.cov %>% + split(.$date) + +obs.cov = names(obs.cov) %>% + map(function(namesl){ + obs.cov[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4]^2 %>% unlist %>% diag(nrow = 2, ncol = 2) ) %>% + setNames(site.ids) + }) %>% setNames(date.obs) + + +names = date.obs +for (name in names) +{ + for (site in names(obs.cov[[name]])) + { + na_index = which(is.na(obs.cov[[ name]][[site]])) + #colnames = names(obs.cov[[name]][[site]]) + if (length(na_index) > 0) + { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][1] + # row.names(obs.cov[[name]][[site]]) = NULL + # colnames(obs.cov[[name]][[site]]) = NULL + } + } +} +# #sublist by date --> site +# obs.cov = names(obs.cov) %>% +# map(function(namesl){ +# obs.cov[[namesl]] %>% +# split(.$site_id) %>% +# map(~diag(.x[3:4]^2, nrow = 2, ncol = 2)) %>% +# setNames(site.ids)}) %>% +# setNames(date.obs) + +# remove NA/missing observations from covariance matrix and removes NA values to restructure size of covar matrix +# names = names(obs.cov) +# for (name in names) +# { +# for (site in names(obs.cov[[name]])) +# { +# na_index = which(is.na(obs.cov[[ name]][[site]])) +# if (length(na_index) > 0) +# { +# n_good_vars = length(observed_vars)-length(na_index) +# obs.cov[[name]][[site]] = matrix(obs.cov[[name]][[site]][-na_index], nrow = n_good_vars, ncol = n_good_vars) +# } +# } +# } + +# save these lists for future use. 
+save(obs.mean, file = '/data/bmorrison/sda/lai/obs_mean_8_sites_dif_dates.Rdata') +save(obs.cov, file = '/data/bmorrison/sda/lai/obs_cov_8_sites_dif_dates.Rdata') +save(date.obs, file = '/data/bmorrison/sda/lai/date_obs_8_sites_dif_dates.Rdata') + + + +################################ START THE SDA ######################################## +load('/data/bmorrison/sda/lai/obs_mean_8_sites_dif_dates.Rdata') +load('/data/bmorrison/sda/lai/obs_cov_8_sites_dif_dates.Rdata') +date.obs = names(obs.mean) + +outfolder = "/data/bmorrison/sda/lai/easy_run_8_sites" +unlink(c('run','out', outfolder),recursive = T) + +new.settings <- PEcAn.settings::prepare.settings(settings) + +settings = new.settings +Q = NULL +restart = F +keepNC = T +forceRun = T +daily = F +#unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(outfolder = outfolder, + settings = new.settings, + obs.mean = obs.mean, + obs.cov = obs.cov, + keepNC = TRUE, + forceRun = TRUE, + daily = F, + control=list(trace=TRUE, + FF=FALSE, + interactivePlot=FALSE, + TimeseriesPlot=FALSE, + BiasPlot=FALSE, + plot.title=NULL, + facet.plots=4, + debug=FALSE, + pause=FALSE, + Profiling = FALSE, + OutlierDetection=FALSE)) + + + + +### FOR PLOTTING after analysis if TimeseriesPlot == FALSE) +load('/data/bmorrison/sda/lai/8_sites_different_date/sda.output.Rdata') +facetg=4 +readsFF=NULL +plot.title=NULL + +obs.mean = Viz.output[[2]] +obs.cov = Viz.output[[3]] +obs.times = names(obs.mean) +PEcAn.assim.sequential::post.analysis.multisite.ggplot(settings = new.settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=4, readsFF=NULL) + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_LAI_8_days.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_LAI_8_days.R new file mode 100755 index 00000000000..a4f5a9b30be --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Bailey_LAI_8_days.R @@ -0,0 +1,129 @@ + +#---------------- Close all devices and delete all variables. 
-------------------------------------#
+rm(list=ls(all=TRUE)) # clear workspace
+graphics.off() # close any open graphics
+closeAllConnections() # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+library(furrr)
+library(tictoc)
+
+#--------------------------------------------------------------------------------------------------#
+######################################## INITIAL SET UP STUFF #######################################
+work_dir <- "/data/bmorrison/sda/lai"
+
+# delete an old run
+#unlink(c('run','out','SDA'),recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("pecan_MultiSite_SDA_LAI_4_sites_8_days.xml")
+
+
+# collect site IDs from each run block (note: this loop does not work for a single site)
+observation <- c()
+for (i in seq_along(settings$run)) {
+  command <- paste0("settings$run$settings.",i,"$site$id")
+  obs <- eval(parse(text=command))
+  observation <- c(observation,obs)
+}
+
+#observation = "1000000048"
+
+# what is this step for???? is this to get the site locations for the map??
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>%
+  map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character()
+
+# sample from parameters used for both sensitivity analysis and Ens
+get.parameter.samples(settings,
+                      ens.sample.method = settings$ensemble$samplingspace$parameters$method)
+## Aside: if method were set to unscented, would take minimal changes to do UnKF
+#--------------------------------------------------------------------------------------------------#
+
+
+############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ###########################
+################ Not working on interactive job on MODEX
+
+PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****")
+bety <- list(user='bety', password='bety', host='localhost',
+             dbname='bety', driver='PostgreSQL',write=TRUE)
+con <- PEcAn.DB::db.open(bety)
+bety$con <- con
+site_ID <- observation
+suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon,
+                                            ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})",
+                                            ids = site_ID, .con = con))
+suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry))
+suppressWarnings(qry_results <- DBI::dbFetch(qry_results))
+site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat,
+                  lon=qry_results$lon, time_zone=qry_results$time_zone)
+
+
+
+################################ START THE SDA ########################################
+load('/data/bmorrison/sda/lai/obs_mean_4_sites_8_days.Rdata')
+load('/data/bmorrison/sda/lai/obs_cov_4_sites_8_days.Rdata')
+date.obs = names(obs.mean)
+
+
+outfolder = "/data/bmorrison/sda/lai/4_sites_8_days"
+unlink(c('run','out', outfolder),recursive = T)
+
+new.settings <- PEcAn.settings::prepare.settings(settings)
+
+settings = new.settings
+Q = NULL
+restart = F
+keepNC = T
+forceRun = T
+daily = TRUE
+#unlink(c('run','out','SDA'),recursive = T)
+
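+# Note (a reading of the code above, not confirmed intent): Q and restart are
+# defined but not passed to the call below; if a restart were wanted they would
+# presumably need to be supplied explicitly, e.g.
+# sda.enkf.multisite(..., Q = Q, restart = restart)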
+sda.enkf.multisite(outfolder = outfolder,
+                   settings = new.settings,
+                   obs.mean = obs.mean,
+                   obs.cov = obs.cov,
+                   keepNC = TRUE,
+                   forceRun = TRUE,
+                   daily = TRUE,
+                   control=list(trace=TRUE,
+                                FF=FALSE,
+                                interactivePlot=FALSE,
+                                TimeseriesPlot=FALSE,
+                                BiasPlot=FALSE,
+                                plot.title=NULL,
+                                facet.plots=2,
+                                debug=FALSE,
+                                pause=FALSE,
+                                Profiling = FALSE,
+                                OutlierDetection=FALSE))
+
+
+
+
+### FOR PLOTTING after analysis (if TimeseriesPlot == FALSE)
+load('/data/bmorrison/sda/lai/4_sites_8_days/sda.output.Rdata')
+facetg=2
+readsFF=NULL
+settings= new.settings
+settings$outfolder = outfolder
+obs.mean = Viz.output[[2]]
+obs.cov = Viz.output[[3]]
+obs.times = names(obs.mean)
+PEcAn.assim.sequential::post.analysis.multisite.ggplot(settings = settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=2, readsFF=NULL)
+
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Shawn.R b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Shawn.R
new file mode 100755
index 00000000000..780bcc21fa2
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/bmorrison/Multisite_SDA_Shawn.R
@@ -0,0 +1,168 @@
+####################################################################################################
+#
+#
+#
+#
+# --- Last updated: 02.01.2019 By Shawn P. Serbin
+####################################################################################################
+
+
+#---------------- Close all devices and delete all variables. -------------------------------------#
+rm(list=ls(all=TRUE)) # clear workspace
+graphics.off() # close any open graphics
+closeAllConnections() # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+
+
+# temporary step until we get this code integrated into pecan
+library(RCurl)
+script <- getURL("https://raw.githubusercontent.com/serbinsh/pecan/download_osu_agb/modules/data.remote/R/LandTrendr.AGB.R",
+                 ssl.verifypeer = FALSE)
+eval(parse(text = script))
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## set run options; some of these should be tweaked or removed as requirements change
+work_dir <- "/data/bmorrison/sda/lai"
+setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions
+
+# Define observation - use existing or generate new?
+# set to a specific file, use that.
+#observation <- ""
+#observation <- c("1000025731","1000000048","796", "772", "763", "1000000146")
+#observation <- c("1000025731","1000000048","763","796","772","764","765","1000000024","678","1000000146")
+
+# delete an old run
+unlink(c('run','out','SDA'),recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB.xml")
+
+observation <- c()
+for (i in seq_along(settings$run)) {
+  command <- paste0("settings$run$settings.",i,"$site$id")
+  obs <- eval(parse(text=command))
+  observation <- c(observation,obs)
+}
+
+# what is this step for???? is this to get the site locations for the map??
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>%
+  map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character()
+
+# sample from parameters used for both sensitivity analysis and Ens
+get.parameter.samples(settings,
+                      ens.sample.method = settings$ensemble$samplingspace$parameters$method)
+## Aside: if method were set to unscented, would take minimal changes to do UnKF
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## Prepare observational data - still very hacky here
+PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****")
+bety <- list(user='bety', password='bety', host='localhost',
+             dbname='bety', driver='PostgreSQL',write=TRUE)
+con <- PEcAn.DB::db.open(bety)
+bety$con <- con
+site_ID <- observation
+suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon,
+                                            ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})",
+                                            ids = site_ID, .con = con))
+suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry))
+suppressWarnings(qry_results <- DBI::dbFetch(qry_results))
+site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat,
+                  lon=qry_results$lon, time_zone=qry_results$time_zone)
+
+data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data"
+med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean",
+                                       data_dir, product_dates=NULL, file.path(work_dir,"Obs"))
+sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean",
+                                        data_dir, product_dates=NULL, file.path(work_dir,"Obs"))
+
+PEcAn.logger::logger.info("**** Preparing data for SDA ****")
+# for multi-site runs, both mean and cov need to be a list like this
+# +date
+#   +siteid
+#     c(state variables)/matrix(cov state variables)
+#
+#reorder sites in obs
+med_agb_data_sda <- med_agb_data[[1]] %>% filter(Site_ID %in% site.ids)
+sdev_agb_data_sda <- sdev_agb_data[[1]] %>% filter(Site_ID %in% site.ids)
+site.order <- sapply(site.ids,function(x) which(med_agb_data_sda$Site_ID %in% x)) %>%
+  as.numeric() %>% na.omit()
+med_agb_data_sda <- med_agb_data_sda[site.order,]
+sdev_agb_data_sda <- sdev_agb_data_sda[site.order,]
+
+# turning lists to dfs for both mean and cov
+date.obs <- strsplit(names(med_agb_data_sda),"_")[3:length(med_agb_data_sda)] %>%
+  map_chr(~.x[2]) %>% paste0(.,"/12/31")
+
+obs.mean <- names(med_agb_data_sda)[3:length(med_agb_data_sda)] %>%
+  map(function(namesl){
+    ((med_agb_data_sda)[[namesl]] %>%
+       map(~.x %>% as.data.frame %>% `colnames<-`(c('AbvGrndWood'))) %>%
+       setNames(site.ids[1:length(.)]))
+  }) %>% setNames(date.obs)
+
+obs.cov <- names(sdev_agb_data_sda)[3:length(sdev_agb_data_sda)] %>%
+  map(function(namesl) {
((sdev_agb_data_sda)[[namesl]] %>%
+       map( ~ (.x) ^ 2 %>% as.matrix()) %>%
+       setNames(site.ids[1:length(.)]))
+  }) %>% setNames(date.obs)
+
+#--------------------------------------------------------------------------------------------------#
+
+#--------------------------------------------------------------------------------------------------#
+## generate new settings object
+new.settings <- PEcAn.settings::prepare.settings(settings)
+#new.settings = settings
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## Run SDA
+sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov,
+                   control=list(trace=T,
+                                FF=F,
+                                plot = T,
+                                interactivePlot=F,
+                                TimeseriesPlot=T,
+                                BiasPlot=F,
+                                plot.title="Sobol sampling - 2 sites - AGB",
+                                facet.plots=T,
+                                debug=F,
+                                pause=F)
+)
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## Wrap up
+# Send email if configured
+if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) {
+  sendmail(settings$email$from, settings$email$to,
+           paste0("SDA workflow has finished executing at ", base::date()))
+}
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+### EOF
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_CONUS.R b/modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_CONUS.R
new file mode 100755
index 00000000000..2e4f0849a87
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_CONUS.R
@@ -0,0 +1,317 @@
+library(raster)
+library(shapefiles)
+library(sp)
+library(PEcAn.data.remote)
+library(sf)
+library(dplyr)
+library(rgdal)
+library(tidyr)
+library(rgeos)
+library(rgbif)
+library(viridis)
+library(gridExtra)
+library(rasterVis)
+library(doParallel)
+library(PEcAn.utils)
+set.seed(1)
+
+#eco = st_read(dsn = '/data/bmorrison/sda/ecoregion_site_analysis/shapefiles', layer = 'eco_conus_rename')
+setwd('/data/bmorrison/sda/ecoregion_site_analysis/modis_data/CONUS')
+states = st_read(dsn = "/data/bmorrison/sda/ecoregion_site_analysis/shapefiles/states_21basic/states.shp")
+states = as(states, "Spatial")
+
+### testing on 1 ecoregion
+region = states
+region = st_read(dsn = "/data/bmorrison/sda/ecoregion_site_analysis/shapefiles/states_21basic/states.shp")
+region = region[-c(1,28,51),]
+#region = eco[eco$name == eco$name[11],]
+region = st_union(region)
+region = as(region, "Spatial")
+region = spTransform(region, CRS = "+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ")
+region_ll = spTransform(region, CRS = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs ")
+
+# hexagonal tessellation random sampling
+
+# must be in meters
+make_grid <- function(x, cell_diameter, cell_area, clip = FALSE) {
+  if (missing(cell_diameter)) {
+    if (missing(cell_area)) {
+      stop("Must provide cell_diameter or cell_area")
+    } else {
+      cell_diameter <- sqrt(2 * cell_area / sqrt(3))
+    }
+  }
+  ext <- as(extent(x) + cell_diameter, "SpatialPolygons")
+  projection(ext) <- projection(x)
+  # generate array of hexagon centers
+  g <- spsample(ext,
type = "hexagonal", cellsize = cell_diameter, + offset = c(0.5, 0.5)) + # convert center points to hexagons + g <- HexPoints2SpatialPolygons(g, dx = cell_diameter) + # clip to boundary of study area + if (clip) { + g <- gIntersection(g, x, byid = TRUE) + } else { + g <- g[x, ] + } + # clean up feature IDs + row.names(g) <- as.character(1:length(g)) + return(g) +} + +# trick to figure out how many polygons I want vs. cell area of hexagons +n <- 1000 + +area_of_region = raster::area(region) +cell_area= area_of_region/n + +# make hexagonal tesselation grid +hex_grid <- make_grid(region, cell_area = cell_area, clip = FALSE) +#hex_grid <- make_grid(region, cell_diameter = 37894.1, clip = FALSE) + +plot(region, col = "grey50", bg = "light blue") +plot(hex_grid, border = "orange", add = T) +# clip to ecogreion area + +save(hex_grid, file = paste("/data/bmorrison/sda/ecoregion_site_analysis/hex_grid_CONUS.Rdata", sep = "")) +load(paste("/data/bmorrison/sda/ecoregion_site_analysis/hex_grid_CONUS.Rdata", sep = "")) + +# randomly select one point from each hexagon (random) +samples = data.frame() +for (i in 1:length(names(hex_grid))) +{ + hex = hex_grid[i,] + sample = as.data.frame(spsample(hex, n = 1, type = 'random')) + names(sample) = c("x", "y") + samples = rbind(samples, sample) +} +coordinates(samples) = ~x+y +projection(samples) = crs(region) + +# clip out points outside of ecoregion area +samples <- gIntersection(samples, region, byid = TRUE) + +plot(region, col = "grey50", bg = "light blue", axes = TRUE) +plot(hex_grid, border = "orange", add = T) +plot(samples, pch = 20, add = T) +samples = spTransform(samples, CRS = crs(states)) +region = spTransform(region, CRS = crs(states)) + + +xy = as.data.frame(samples) +names(xy) = c("lon", "lat") +save(xy, file = paste('/data/bmorrison/sda/ecoregion_site_analysis/random_sites_CONUS.Rdata', sep = "")) +# extract MODIS data for location + +load("/data/bmorrison/sda/ecoregion_site_analysis/random_sites_CONUS.Rdata") + + +product = "MOD15A2H" + +dates = PEcAn.utils::retry.func(MODISTools::mt_dates(product, lat = xy$lat[1], lon = xy$lon[1]), maxError = 10, sleep = 2) + +starting_dates = dates$calendar_date[grep(dates$calendar_date, pattern = "2001-01")] +start_count = as.data.frame(table(starting_dates), stringsAsFactors = F) +start_date = gsub("-", "/", start_count$starting_dates[1]) + +ending_dates = dates$calendar_date[grep(dates$calendar_date, pattern = "2018-12")] +end_count = as.data.frame(table(ending_dates), stringsAsFactors = F) +end_date = gsub("-", "/", end_count$ending_dates[nrow(end_count)] ) + +# 10 cpu limit because THREADDS has 10 download limit +#xy = xy[1:nrow(xy),] + +cl <- parallel::makeCluster(10) #, outfile= "") +doParallel::registerDoParallel(cl) + +output = data.frame() +for (j in 1:ceiling(nrow(xy)/10)) +{ + if (j == ceiling(nrow(xy)/10)) + { + coords = xy[((j*10)-9):nrow(xy),] + working = print(paste("working on : ", ((j*10)-9), "-", nrow(xy), sep = "")) + + } else { + coords = xy[((j*10)-9):(j*10),] + working = print(paste("working on : ", ((j*10)-9), "-", (j*10), sep = "")) + + } + #siteID = paste(round(coords[i,], digits = 2), collapse = "_") + start = Sys.time() + data = PEcAn.utils::retry.func(foreach(i=1:nrow(coords), .combine = rbind) %dopar% PEcAn.data.remote::call_MODIS(outfolder = getwd(), iter = ((j*10-10)+i), product = "MOD15A2H",band = "Lai_500m", start_date = start_date, end_date = end_date, lat = coords$lat[i], lon = coords$lon[i],size = 0, band_qc = "FparLai_QC", band_sd = "", package_method = "MODISTools", 
QC_filter = T), maxError = 10, sleep = 2) + end = Sys.time() + difference = end-start + time = print(difference) + output = rbind(output, data) +} +stopCluster(cl) + + +save(output, file = paste('/data/bmorrison/sda/ecoregion_site_analysis/modis_data_output_', nrow(xy), '.Rdata', sep = "")) +# +# load(paste('/data/bmorrison/sda/ecoregion_site_analysis/modis_data_output_', nrow(output), '.Rdata', sep = "")) +# output = as.data.frame(output, row.names = NULL) + +# for large datasets to group together +files = list.files(path = '/data/bmorrison/sda/ecoregion_site_analysis/modis_data/CONUS', pattern = '.csv', include.dirs = T, full.names = T) +xy = data.frame() +for (i in 1:length(files)) +{ + f = read.csv(files[i]) + xy = rbind(xy, f) +} + +output = xy +# summarize into anual peak lai from 2001-2018 +years = lubridate::year(start_date):lubridate::year(end_date) + +data = output +sites = output +coordinates(sites) = ~lon+lat +projection(sites) = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs " +sites = as.data.frame(unique(coordinates(sites))) +coordinates(sites) = ~lon+lat +projection(sites) = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs " + + +compute_annual_lai = function(data, sites) +{ + index = which((round(data$lon, digits = 3)== round(sites$lon, digits = 3)) & (round(data$lat, digits = 3) == round(sites$lat, digits = 3))) + if (length(index) > 0) + { + site = data[index,] + years = unique(lubridate::year(site$calendar_date)) + + summary = data.frame() + for (j in 1:length(years)) + { + g = grep(site$calendar_date, pattern = years[j]) + if (length(g) > 0) + { + d = site[g,] + percentile = which(d$data <= quantile(d$data, probs = 0.95, na.rm = T)[1]) + peak = max(d$data[percentile], na.rm = T) + + info = d[1,] + info$data = peak + info$calendar_date = years[j] + + summary = rbind(summary, info) + } + } + peak_lai = summary[1,] + peak_lai$data = max(summary$data[which(summary$data <= quantile(summary$data, probs = 0.95))], na.rm = T) + return(peak_lai) + } +} + +cl <- parallel::makeCluster(10) #, outfile= "") +doParallel::registerDoParallel(cl) + +test = foreach(i=1:nrow(sites), .combine = rbind) %dopar% compute_annual_lai(data = data, sites = sites[i,]) + +stopCluster(cl) + +test = data.frame() +for (i in 1:nrow(coordinates(sites))) +{ + t = compute_annual_lai(data = data, sites = sites[i,]) + test = rbind(test, t) +} + + +# +# +# +# summary =data.frame() +# for (i in 1:nrow(xy)) +# { +# index = which(round(output$lon, digits =3) == round(xy$lon[i], digits = 3) & round(output$lat, digits = 3) == round(xy$lat[i], digits = 3)) +# if (length(index)>0) +# { +# site = output[index,] +# for (j in 1:length(years)) +# { +# g = grep(site$calendar_date, pattern = years[j]) +# if (length(g) > 0) +# { +# d = site[g,] +# percentile = which(d$data <= quantile(d$data, probs = 0.95, na.rm = T)[1]) +# peak = max(d$data[percentile], na.rm = T) +# +# info = d[1,] +# info$data = peak +# info$calendar_date = years[j] +# +# summary = rbind(summary, info) +# } +# } +# } +# } +# +# peak_lai = data.frame() +# for (i in 1:nrow(xy)) +# { +# index = which(round(summary$lat, digits = 3) == round(xy$lat[i], digits = 3) & round(summary$lon, digits = 3) == round(xy$lon[i], digits = 3)) +# +# if (length(index) >0) +# { +# site = summary[index,] +# +# peak = mean(site$data, na.rm = T) +# info = site[1,] +# info$data = peak +# peak_lai = rbind(peak_lai, info) +# } +# } +# +# peak_lai = as.data.frame(peak_lai, row.names = NULL) +# semivariogram analysis + +#1. 
reproject spatial data into aea so distances are in meteres +coordinates(test) = ~lon+lat +projection(test) = crs(sites) +test = spTransform(test, CRS = "+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ") + +library(gstat) +# 1. check that data is normally distributed, if not, transform. +hist(test$data) + +library(MASS) +norm = fitdistr(x = test$data, densfun = "normal") +test$trans= rnorm(test$data, mean = norm$estimate[1], sd = norm$estimate[2]) + +v = variogram(trans~1, data = test) +v.data = v[order(v$dist),] +plot(v) + +v.vgm = vgm( psill = NA, range = NA, model = "Sph", nugget = 1) +v.fit = fit.variogram(v, v.vgm, fit.sills = T, fit.ranges = T, fit.kappa = T) +plot(v, model = v.fit) + + + +cell_area= 37894.1 + +# make hexagonal tesselation grid +hex_grid <- make_grid(region, cell_area = cell_area, clip = FALSE) + +plot(region, col = "grey50", bg = "light blue") +plot(hex_grid, border = "orange", add = T) +# clip to ecogreion area + +samples = data.frame() +for (i in 1:length(names(hex_grid))) +{ + hex = hex_grid[i,] + sample = as.data.frame(spsample(hex, n = 1, type = 'random')) + names(sample) = c("x", "y") + samples = rbind(samples, sample) +} +coordinates(samples) = ~x+y +projection(samples) = crs(region) + +# clip out points outside of ecoregion area +samples <- gIntersection(samples, region, byid = TRUE) + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_agb_trends.R b/modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_agb_trends.R new file mode 100755 index 00000000000..98885da3863 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/ecoregion_lai_agb_trends.R @@ -0,0 +1,326 @@ +library(raster) +library(shapefiles) +library(sp) +library(PEcAn.data.remote) +library(sf) +library(dplyr) +library(rgdal) +library(tidyr) +library(rgeos) +library(rgbif) +library(viridis) +library(gridExtra) +library(rasterVis) +library(doParallel) +library(PEcAn.utils) +set.seed(1) + +eco = st_read(dsn = '/data/bmorrison/sda/ecoregion_site_analysis/shapefiles', layer = 'eco_conus_rename') +setwd('/data/bmorrison/sda/ecoregion_site_analysis/modis_data') + +### testing on 1 ecoregion +region = eco[eco$name == eco$name[11],] +region = st_union(region) +region = as(region, "Spatial") +region = spTransform(region, CRS = "+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ") + + +# hexagonal tesellation random sampling + +# must be in meters +make_grid <- function(x, cell_diameter, cell_area, clip = FALSE) { + if (missing(cell_diameter)) { + if (missing(cell_area)) { + stop("Must provide cell_diameter or cell_area") + } else { + cell_diameter <- sqrt(2 * cell_area / sqrt(3)) + } + } + ext <- as(extent(x) + cell_diameter, "SpatialPolygons") + projection(ext) <- projection(x) + # generate array of hexagon centers + g <- spsample(ext, type = "hexagonal", cellsize = cell_diameter, + offset = c(0.5, 0.5)) + # convert center points to hexagons + g <- HexPoints2SpatialPolygons(g, dx = cell_diameter) + # clip to boundary of study area + if (clip) { + g <- gIntersection(g, x, byid = TRUE) + } else { + g <- g[x, ] + } + # clean up feature IDs + row.names(g) <- as.character(1:length(g)) + return(g) +} + +# trick to figure out how many polygons I want vs. 
cell area of hexagons
+n <- 1000
+
+area_of_region = raster::area(region)
+cell_area = area_of_region/n
+
+# make hexagonal tessellation grid
+hex_grid <- make_grid(region, cell_area = cell_area, clip = FALSE)
+hex_grid <- make_grid(region, cell_diameter = 37894.1, clip = FALSE)
+
+plot(region, col = "grey50", bg = "light blue", axes = TRUE)
+plot(hex_grid, border = "orange", add = T)
+# clip to ecoregion area
+
+save(hex_grid, file = paste("/data/bmorrison/sda/ecoregion_site_analysis/hex_grid_", length(names(hex_grid)), ".Rdata", sep = ""))
+load(paste("/data/bmorrison/sda/ecoregion_site_analysis/hex_grid_1164.Rdata", sep = ""))
+
+# randomly select one point from each hexagon (random)
+samples = data.frame()
+for (i in seq_along(names(hex_grid)))
+{
+  hex = hex_grid[i,]
+  sample = as.data.frame(spsample(hex, n = 1, type = 'random'))
+  names(sample) = c("x", "y")
+  samples = rbind(samples, sample)
+}
+coordinates(samples) = ~x+y
+projection(samples) = crs(region)
+
+# clip out points outside of ecoregion area
+samples <- gIntersection(samples, region, byid = TRUE)
+
+plot(region, col = "grey50", bg = "light blue", axes = TRUE)
+plot(hex_grid, border = "orange", add = T)
+samples = spTransform(samples, CRS = crs(eco))
+region = spTransform(region, CRS = crs(eco))
+
+
+xy = as.data.frame(samples)
+names(xy) = c("lon", "lat")
+save(xy, file = paste('/data/bmorrison/sda/ecoregion_site_analysis/random_sites_', nrow(xy), '.Rdata', sep = ""))
+# extract MODIS data for each location
+
+load("/data/bmorrison/sda/ecoregion_site_analysis/random_sites_989.Rdata")
+
+
+product = "MOD15A2H"
+
+dates = PEcAn.utils::retry.func(MODISTools::mt_dates(product, lat = xy$lat[1], lon = xy$lon[1]), maxError = 10, sleep = 2)
+
+starting_dates = dates$calendar_date[grep(dates$calendar_date, pattern = "2001-01")]
+start_count = as.data.frame(table(starting_dates), stringsAsFactors = F)
+start_date = gsub("-", "/", start_count$starting_dates[1])
+
+ending_dates = dates$calendar_date[grep(dates$calendar_date, pattern = "2018-12")]
+end_count = as.data.frame(table(ending_dates), stringsAsFactors = F)
+end_date = gsub("-", "/", end_count$ending_dates[nrow(end_count)] )
+
+# 10 cpu limit because THREDDS has a 10-download limit
+# xy = xy[601:nrow(xy),]
+#
+# cl <- parallel::makeCluster(10) #, outfile= "")
+# doParallel::registerDoParallel(cl)
+#
+# output = data.frame()
+# for (j in 1:ceiling(nrow(xy)/10))
+# {
+#   if (j == ceiling(nrow(xy)/10))
+#   {
+#     coords = xy[((j*10)-9):nrow(xy),]
+#     working = print(paste("working on : ", ((j*10)-9+600), "-", nrow(xy)+600, sep = ""))
+#
+#   } else {
+#     coords = xy[((j*10)-9):(j*10),]
+#     working = print(paste("working on : ", ((j*10)-9+600), "-", (j*10+600), sep = ""))
+#
+#   }
+#   # siteID = paste(round(coords[i,], digits = 2), collapse = "_")
+#   start = Sys.time()
+#   data = PEcAn.utils::retry.func(foreach(i=1:nrow(coords), .combine = rbind) %dopar% PEcAn.data.remote::call_MODIS(outfolder = getwd(), iter = ((j*10-10)+i+600), product = "MOD15A2H", band = "Lai_500m", start_date = start_date, end_date = end_date, lat = coords$lat[i], lon = coords$lon[i], size = 0, band_qc = "FparLai_QC", band_sd = "", package_method = "MODISTools", QC_filter = T), maxError = 10, sleep = 2)
+#   end = Sys.time()
+#   difference = end-start
+#   time = print(difference)
+#   output = rbind(output, data)
+# }
+#
+# # end = Sys.time()
+# # difference = end-start
+# # difference
+# stopCluster(cl)
+#
+#
+# save(output, file = paste('/data/bmorrison/sda/ecoregion_site_analysis/modis_data_output_', nrow(xy), '.Rdata', sep = ""))
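+#
+# Note on the download pattern above: PEcAn.utils::retry.func() re-runs the wrapped
+# call when it errors (here up to 10 times, sleeping 2 s between attempts), so
+# transient MODIS web-service failures do not kill a multi-hour download loop. The
+# index arithmetic keeps each foreach burst at 10 sites; an equivalent sketch using
+# split() (hypothetical, not used in this script):
+# chunks <- split(seq_len(nrow(xy)), ceiling(seq_len(nrow(xy)) / 10))
+# for (idx in chunks) { coords <- xy[idx, ] }  # then download coords as above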
+# +# load(paste('/data/bmorrison/sda/ecoregion_site_analysis/modis_data_output_', nrow(output), '.Rdata', sep = "")) +# output = as.data.frame(output, row.names = NULL) + + + +# extract AGB data +start_date = "2001/01/01" +end_date = "2018/01/01" + +library(RCurl) +script <- getURL("https://raw.githubusercontent.com/serbinsh/pecan/download_osu_agb/modules/data.remote/R/LandTrendr.AGB.R", + ssl.verifypeer = FALSE) +eval(parse(text = script)) + +# for large datasets to group together +files = list.files(path = '/data/bmorrison/sda/ecoregion_site_analysis/modis_data', pattern = '.csv', include.dirs = T, full.names = T) +xy = data.frame() +for (i in 1:length(files)) +{ + f = read.csv(files[i]) + xy = rbind(xy, f) +} + +output = xy +# summarize into anual peak lai from 2001-2018 +years = lubridate::year(start_date):lubridate::year(end_date) + +data = output +load("/data/bmorrison/sda/ecoregion_site_analysis/random_sites_989.Rdata") + +# sites = xy +# coordinates(sites) = ~lon+lat +# projection(sites) = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs " +# sites = as.data.frame(unique(coordinates(sites))) +sites = SpatialPointsDataFrame(data = xy, coords = xy, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs ")) +compute_annual_lai = function(data, sites) +{ + index = which((round(data$lon, digits = 3)== round(sites$lon, digits = 3)) & (round(data$lat, digits = 3) == round(sites$lat, digits = 3))) + if (length(index) > 0) + { + site = data[index,] + years = unique(lubridate::year(site$calendar_date)) + + summary = data.frame() + for (j in 1:length(years)) + { + g = grep(site$calendar_date, pattern = years[j]) + if (length(g) > 0) + { + d = site[g,] + percentile = which(d$data <= quantile(d$data, probs = 0.95, na.rm = T)[1]) + peak = max(d$data[percentile], na.rm = T) + + info = d[1,] + info$data = peak + info$calendar_date = years[j] + + summary = rbind(summary, info) + } + } + peak_lai = summary[1,] + peak_lai$data = max(summary$data[which(summary$data <= quantile(summary$data, probs = 0.95))], na.rm = T) + return(peak_lai) + } +} +test = data.frame() +for (i in 1:nrow(sites)) +{ + working = print(i) + site = sites[i,] + t = compute_annual_lai(data = data, sites = site) + test = rbind(test,t) +} + +# cl <- parallel::makeCluster(10) #, outfile= "") +# doParallel::registerDoParallel(cl) +# +# test = foreach(i=1:nrow(sites), .combine = rbind) %dopar% compute_annual_lai(data = data, sites = sites[i,]) +# +# stopCluster(cl) + + +# +# +# +# summary =data.frame() +# for (i in 1:nrow(xy)) +# { +# index = which(round(output$lon, digits =3) == round(xy$lon[i], digits = 3) & round(output$lat, digits = 3) == round(xy$lat[i], digits = 3)) +# if (length(index)>0) +# { +# site = output[index,] +# for (j in 1:length(years)) +# { +# g = grep(site$calendar_date, pattern = years[j]) +# if (length(g) > 0) +# { +# d = site[g,] +# percentile = which(d$data <= quantile(d$data, probs = 0.95, na.rm = T)[1]) +# peak = max(d$data[percentile], na.rm = T) +# +# info = d[1,] +# info$data = peak +# info$calendar_date = years[j] +# +# summary = rbind(summary, info) +# } +# } +# } +# } +# +# peak_lai = data.frame() +# for (i in 1:nrow(xy)) +# { +# index = which(round(summary$lat, digits = 3) == round(xy$lat[i], digits = 3) & round(summary$lon, digits = 3) == round(xy$lon[i], digits = 3)) +# +# if (length(index) >0) +# { +# site = summary[index,] +# +# peak = mean(site$data, na.rm = T) +# info = site[1,] +# info$data = peak +# peak_lai = rbind(peak_lai, info) +# } +# } +# +# peak_lai = 
as.data.frame(peak_lai, row.names = NULL) +# semivariogram analysis + +#1. reproject spatial data into aea so distances are in meteres +coordinates(test) = ~lon+lat +projection(test) = crs(eco) +test = spTransform(test, CRS = "+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ") + +library(gstat) +# 1. check that data is normally distributed, if not, transform. +hist(test$data) + +library(MASS) +norm = fitdistr(x = test$data, densfun = "normal") +test$trans= rnorm(test$data, mean = norm$estimate[1], sd = norm$estimate[2]) + +v = variogram(trans~1, data = test) +v.data = v[order(v$dist),] +plot(v) + +v.vgm = vgm( psill = NA, range = NA, model = "Sph", nugget = 0.9) +v.fit = fit.variogram(v, v.vgm, fit.sills = T, fit.ranges = T, fit.kappa = T) +plot(v, model = v.fit) + + + +cell_area= 37894.1 + +# make hexagonal tesselation grid +hex_grid <- make_grid(region, cell_area = cell_area, clip = FALSE) + +plot(region, col = "grey50", bg = "light blue") +plot(hex_grid, border = "orange", add = T) +# clip to ecogreion area + +samples = data.frame() +for (i in 1:length(names(hex_grid))) +{ + hex = hex_grid[i,] + sample = as.data.frame(spsample(hex, n = 1, type = 'random')) + names(sample) = c("x", "y") + samples = rbind(samples, sample) +} +coordinates(samples) = ~x+y +projection(samples) = crs(region) + +# clip out points outside of ecoregion area +samples <- gIntersection(samples, region, byid = TRUE) + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/extract_500_site_data.R b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_500_site_data.R new file mode 100755 index 00000000000..52f5e545282 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_500_site_data.R @@ -0,0 +1,288 @@ +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +library(furrr) +library(tictoc) + +work_dir <- "/data/bmorrison/sda/lai" + +# delete an old run +#unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_sitegroup.xml") + +if ("sitegroup" %in% names(settings)){ + if (is.null(settings$sitegroup$nSite)){ + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id, + nSite = settings$sitegroup$nSite) + } + settings$sitegroup <- NULL ## zero out so don't expand a second time if re-reading +} + + + + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + + + +# what is this step for???? is this to get the site locations for the map?? 
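+# The step below flattens the MultiSettings object into a plain character vector of
+# site IDs; it is used later to label and order the observation lists, not just for
+# mapping. A tidier way to collect the IDs than the eval(parse()) loop above -- a
+# sketch, assuming every element of settings$run carries a site$id entry:
+# observation <- purrr::map_chr(settings$run, ~ .x$site$id)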
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +# get.parameter.samples(settings, +# ens.sample.method = settings$ensemble$samplingspace$parameters$method) +# ## Aside: if method were set to unscented, would take minimal changes to do UnKF +# #--------------------------------------------------------------------------------------------------# + + +############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ########################### +################ Not working on interactive job on MODEX +observations = observation +lai_data = data.frame() +for (i in 1:5) +{ + start = (1+((i-1)*10)) + end = start+9 + + obs = observations[start:end] + + working = print(paste("working on: ", i)) + sites = print(obs) + PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") + bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) + con <- PEcAn.DB::db.open(bety) + bety$con <- con + site_ID <- obs + suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + + suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) + suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) + site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + + + lai = call_MODIS(outdir = NULL, var = "LAI", site_info = site_info, product_dates = c("1980001", "2018365"), + run_parallel = TRUE, ncores = 10, product = "MOD15A2H", band = "Lai_500m", + package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) + + lai_data = rbind(lai_data, lai) + +} +lai_sd = lai_data +save(lai_data, file = '/data/bmorrison/sda/lai/50_site_run/lai_data_sites.Rdata') + +observation = observations +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) +# # output folder for the data +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# +# # # extract the data +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# +# ndates = colnames(med_agb_data)[-c(1:2)] +# +# med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +# med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) +# +# 
sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE)
+# sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)])
+#
+# agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value))
+# names(agb_data) = c("Site_ID", "Date", "Median", "SD")
+# agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE)
+#
+# save AGB data into long format
+#save(agb_data, file = '/data/bmorrison/sda/lai/50_site_run/agb_data_sites.Rdata')
+
+######### calculate peak_lai
+# the LAI data are already in long format
+names(lai_sd) = c("modis_date", "calendar_date", "band", "tile", "site_id", "lat", "lon", "pixels", "sd", "qc")
+output = cbind(lai_data, lai_sd$sd)
+names(output) = c(names(lai_data), "sd")
+#output = as.data.frame(data)
+save(output, file = '/data/bmorrison/sda/lai/50_site_run/all_lai_data.Rdata')
+
+# TODO: change tile names to the site name
+
+# remove extra data
+output = output[,c(5, 2, 9, 11)]
+colnames(output) = names(agb_data)
+
+# compute peak lai per year
+data = output
+peak_lai = data.frame()
+years = unique(year(as.Date(data$Date, "%Y-%m-%d")))
+for (i in seq_along(years))
+{
+  d = data[grep(data$Date, pattern = years[i]),]
+  sites = unique(d$Site_ID)
+  for (j in seq_along(sites))
+  {
+    index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3))
+    site = d[index,]
+    if (length(index) > 0)
+    {
+      # peak LAI = the largest value at or below the 95th percentile, to exclude potential outliers
+      max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = T))[1]
+      peak = data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD)
+      peak_lai = rbind(peak_lai, peak)
+
+    }
+  }
+}
+
+# a floor for low SD values, needed because of an issue with the MODIS LAI error calculations. Reference: Viskari et al. 2014.
+peak_lai$SD[peak_lai$SD < 0.66] = 0.66
+
+# output data
+names(peak_lai) = c("Site_ID", "Date", "Median", "SD")
+save(peak_lai, file = '/data/bmorrison/sda/lai/50_site_run/peak_lai_data.Rdata')
+
+
+# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ########################
+peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F))
+peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F)
+
+observed_vars = c("AbvGrndWood", "LAI")
+
+
+# merge the AGB and LAI data frames, inserting NA where either dataset is missing
+observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T)
+names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai")
+
+# order by year
+observed_data = observed_data[order(observed_data$Date),]
+
+# sort by date
+dates = sort(unique(observed_data$Date))
+
+# create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon)
+obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai)
+obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE)
+
+obs.mean = obs.mean %>%
+  split(.$date)
+
+# change the dates to be middle of the year
+date.obs <- strsplit(names(obs.mean), "_") %>%
+  map_chr(~.x[2]) %>% paste0(.,"/07/15")
+
+obs.mean = names(obs.mean) %>%
+  map(function(namesl){
+    obs.mean[[namesl]] %>%
+      split(.$site_id) %>%
+      map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI")) %>% `row.names<-`(NULL))
+    #setNames(site.ids)
+  }) %>% setNames(date.obs)
+
+# remove NA data as this will crash the SDA; also drops row names (may not be necessary)
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.mean[[name]]))
+  {
+    na_index = which(!(is.na(obs.mean[[name]][[site]])))
+    colnames = names(obs.mean[[name]][[site]])
+    if (length(na_index) > 0)
+    {
+      obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index]
+    }
+  }
+}
+
+# fillers are 0's for the covariance matrix. This will need to change for differing size matrices when more variables are added in.
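+# Sketch of the generalization the comment above anticipates (hypothetical helper,
+# not called in this script): build an n x n diagonal covariance matrix from a
+# vector of standard deviations, assuming independent observation errors.
+make_diag_cov <- function(sdevs) {
+  diag(x = sdevs^2, nrow = length(sdevs), ncol = length(sdevs))
+}
+# e.g. make_diag_cov(c(25, 0.66)) gives the 2 x 2 diagonal form used for AGB + LAI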
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data)))
+# names(filler_0) = paste0("h", seq_len(length(observed_vars)))
+
+# create the obs.cov data frame --> list by date
+obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai)#, filler_0)
+obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F)
+
+obs.cov = obs.cov %>%
+  split(.$date)
+
+obs.cov = names(obs.cov) %>%
+  map(function(namesl){
+    obs.cov[[namesl]] %>%
+      split(.$site_id) %>%
+      map(~.x[3:4]^2 %>% unlist %>% diag(nrow = 2, ncol = 2) )
+  }) %>% setNames(date.obs)
+
+
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.cov[[name]]))
+  {
+    bad = which(apply(obs.cov[[name]][[site]], 2, function(x) any(is.na(x))) == TRUE)
+    if (length(bad) > 0)
+    {
+      obs.cov[[name]][[site]] = obs.cov[[name]][[site]][,-bad]
+      if (is.null(dim(obs.cov[[name]][[site]])))
+      {
+        obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad]
+      } else {
+        obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad,]
+      }
+    }
+  }
+}
+
+
+save(obs.mean, file = '/data/bmorrison/sda/lai/50_site_run/obs_mean_50.Rdata')
+save(obs.cov, file = '/data/bmorrison/sda/lai/50_site_run/obs_cov_50.Rdata')
+
+
+
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data.R b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data.R
new file mode 100755
index 00000000000..f715a1261bf
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data.R
@@ -0,0 +1,289 @@
+rm(list=ls(all=TRUE)) # clear workspace
+graphics.off()        # close any open graphics
+closeAllConnections() # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+library(furrr)
+library(tictoc)
+
+work_dir <- "/data/bmorrison/sda/lai"
+
+# delete an old run
+#unlink(c('run','out','SDA'),recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_sitegroup.xml")
+
+if ("sitegroup" %in% names(settings)){
+  if (is.null(settings$sitegroup$nSite)){
+    settings <- PEcAn.settings::createSitegroupMultiSettings(settings,
+                                                             sitegroupId = settings$sitegroup$id)
+  } else {
+    settings <- PEcAn.settings::createSitegroupMultiSettings(settings,
+                                                             sitegroupId = settings$sitegroup$id,
+                                                             nSite = settings$sitegroup$nSite)
+  }
+  settings$sitegroup <- NULL ## zero out so don't expand a second time if re-reading
+}
+
+
+
+
+# collect the site IDs from the settings (note: this loop doesn't work for a single site)
+observation <- c()
+for (i in seq_along(settings$run)) {
+  command <- paste0("settings$run$settings.",i,"$site$id")
+  obs <- eval(parse(text=command))
+  observation <- c(observation,obs)
+}
+
+
+
+# flatten the MultiSettings into a character vector of site IDs, used below to label the observation data
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +# get.parameter.samples(settings, +# ens.sample.method = settings$ensemble$samplingspace$parameters$method) +# ## Aside: if method were set to unscented, would take minimal changes to do UnKF +# #--------------------------------------------------------------------------------------------------# + + +############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ########################### +################ Not working on interactive job on MODEX +observations = observation +lai_data = data.frame() +for (i in 1:5) +{ + start = (1+((i-1)*10)) + end = start+9 + + obs = observations[start:end] + + working = print(paste("working on: ", i)) + sites = print(obs) + PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") + bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) + con <- PEcAn.DB::db.open(bety) + bety$con <- con + site_ID <- obs + suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + + suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) + suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) + site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + + + lai = call_MODIS(outdir = NULL, var = "LAI", site_info = site_info, product_dates = c("1980001", "2018365"), + run_parallel = TRUE, ncores = 10, product = "MOD15A2H", band = "Lai_500m", + package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) + + lai_data = rbind(lai_data, lai) + +} +lai_sd = lai_data +save(lai_data, file = '/data/bmorrison/sda/lai/50_site_run/lai_data_sites.Rdata') + +observation = observations +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) +# # output folder for the data +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# +# # # extract the data +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# +# ndates = colnames(med_agb_data)[-c(1:2)] +# +# med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +# med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) +# +# 
sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE)
+# sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)])
+#
+# agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value))
+# names(agb_data) = c("Site_ID", "Date", "Median", "SD")
+# agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE)
+#
+# save AGB data into long format
+#save(agb_data, file = '/data/bmorrison/sda/lai/50_site_run/agb_data_sites.Rdata')
+
+######### calculate peak_lai
+# the LAI data are already in long format
+names(lai_sd) = c("modis_date", "calendar_date", "band", "tile", "site_id", "lat", "lon", "pixels", "sd", "qc")
+output = cbind(lai_data, lai_sd$sd)
+names(output) = c(names(lai_data), "sd")
+#output = as.data.frame(data)
+save(output, file = '/data/bmorrison/sda/lai/50_site_run/all_lai_data.Rdata')
+
+# TODO: change tile names to the site name
+
+# remove extra data
+output = output[,c(5, 2, 9, 11)]
+colnames(output) = names(agb_data)
+
+# compute peak lai per year
+data = output
+peak_lai = data.frame()
+years = unique(year(as.Date(data$Date, "%Y-%m-%d")))
+for (i in seq_along(years))
+{
+  d = data[grep(data$Date, pattern = years[i]),]
+  sites = unique(d$Site_ID)
+  for (j in seq_along(sites))
+  {
+    index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3))
+    site = d[index,]
+    if (length(index) > 0)
+    {
+      # peak LAI = the largest value at or below the 95th percentile, to exclude potential outliers
+      max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = T))[1]
+      peak = data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD)
+      peak_lai = rbind(peak_lai, peak)
+
+    }
+  }
+}
+
+# a floor for low SD values, needed because of an issue with the MODIS LAI error calculations. Reference: Viskari et al. 2014.
+peak_lai$SD[peak_lai$SD < 0.66] = 0.66
+
+# output data
+names(peak_lai) = c("Site_ID", "Date", "Median", "SD")
+save(peak_lai, file = '/data/bmorrison/sda/lai/50_site_run/peak_lai_data.Rdata')
+
+
+# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ########################
+peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F))
+peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F)
+
+observed_vars = c("AbvGrndWood", "LAI")
+
+
+# merge the AGB and LAI data frames, inserting NA where either dataset is missing
+observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T)
+names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai")
+
+# order by year
+observed_data = observed_data[order(observed_data$Date),]
+
+# sort by date
+dates = sort(unique(observed_data$Date))
+
+# create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon)
+obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai)
+obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE)
+
+obs.mean = obs.mean %>%
+  split(.$date)
+
+# change the dates to be middle of the year
+date.obs <- strsplit(names(obs.mean), "_") %>%
+  map_chr(~.x[2]) %>% paste0(.,"/07/15")
+
+obs.mean = names(obs.mean) %>%
+  map(function(namesl){
+    obs.mean[[namesl]] %>%
+      split(.$site_id) %>%
+      map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI")) %>% `row.names<-`(NULL))
+    #setNames(site.ids)
+  }) %>% setNames(date.obs)
+
+# remove NA data as this will crash the SDA; also drops row names (may not be necessary)
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.mean[[name]]))
+  {
+    na_index = which(!(is.na(obs.mean[[name]][[site]])))
+    colnames = names(obs.mean[[name]][[site]])
+    if (length(na_index) > 0)
+    {
+      obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index]
+    }
+  }
+}
+
+# fillers are 0's for the covariance matrix. This will need to change for differing size matrices when more variables are added in.
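+# Before handing obs.mean/obs.cov to the SDA, a quick structural check can catch
+# date or site mismatches early. A sketch (hypothetical helper; assumes both lists
+# are keyed by date and then by site ID, as built below):
+check_obs_alignment <- function(obs.mean, obs.cov) {
+  stopifnot(identical(names(obs.mean), names(obs.cov)))  # same assimilation dates
+  for (d in names(obs.mean)) {
+    stopifnot(identical(names(obs.mean[[d]]), names(obs.cov[[d]])))  # same sites
+  }
+  invisible(TRUE)
+}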
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data))) +# names(filler_0) = paste0("h", seq_len(length(observed_vars))) + +# create obs.cov dataframe -->list by date +obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai)#, filler_0) +obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F) + +obs.cov = obs.cov %>% + split(.$date) + +obs.cov = names(obs.cov) %>% + map(function(namesl){ + obs.cov[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4]^2 %>% unlist %>% diag(nrow = 2, ncol = 2) ) + }) %>% setNames(date.obs) + + +names = date.obs +for (name in names) +{ + for (site in names(obs.cov[[name]])) + { + bad = which(apply(obs.cov[[name]][[site]], 2, function(x) any(is.na(x))) == TRUE) + if (length(bad) > 0) + { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][,-bad] + if (is.null(dim(obs.cov[[name]][[site]]))) + { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad] + } else { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad,] + } + } + } +} + + +save(obs.mean, file = '/data/bmorrison/sda/lai/50_site_run/obs_mean_50.Rdata') +save(obs.cov, file = '/data/bmorrison/sda/lai/50_site_run/obs_cov_50.Rdata') + + + + \ No newline at end of file diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data.R b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data.R new file mode 100755 index 00000000000..23af11a8415 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data.R @@ -0,0 +1,287 @@ +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +library(furrr) +library(tictoc) + +work_dir <- "/data/bmorrison/sda/500_site_run" + +# delete an old run +#unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_sitegroup.xml") + +if ("sitegroup" %in% names(settings)){ + if (is.null(settings$sitegroup$nSite)){ + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id, + nSite = settings$sitegroup$nSite) + } + settings$sitegroup <- NULL ## zero out so don't expand a second time if re-reading +} + + + + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + + + +# what is this step for???? is this to get the site locations for the map?? 
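+# (Same preamble as the two extraction scripts above; this copy writes to the
+# 500_site_run workspace -- the MODIS loop below walks 16 batches of 10 sites and
+# pulls the LaiStdDev_500m band so per-pixel standard deviations accompany the LAI.)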
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +# get.parameter.samples(settings, +# ens.sample.method = settings$ensemble$samplingspace$parameters$method) +# ## Aside: if method were set to unscented, would take minimal changes to do UnKF +# #--------------------------------------------------------------------------------------------------# + + +############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ########################### +################ Not working on interactive job on MODEX +observations = observation +lai_data = data.frame() +for (i in 1:16) +{ + start = (1+((i-1)*10)) + end = start+9 + obs = observations[start:end] + + working = print(paste("working on: ", i)) + sites = print(obs) + PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") + bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) + con <- PEcAn.DB::db.open(bety) + bety$con <- con + site_ID <- obs + suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + + suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) + suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) + site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + + + lai = call_MODIS(outdir = NULL, var = "LAI", site_info = site_info, product_dates = c("1980001", "2018365"), + run_parallel = TRUE, ncores = 10, product = "MOD15A2H", band = "LaiStdDev_500m", + package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) + + lai_data = rbind(lai_data, lai) + lai_sd = lai_data + save(lai_sd, file = paste('/data/bmorrison/sda/500_site_run/lai_sd_sites_', i, '.Rdata', sep = "")) + +} + +observation = observations +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) +# # output folder for the data +data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" + +# # extract the data +med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", + data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] + +sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", + data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] + +# +# ndates = colnames(med_agb_data)[-c(1:2)] +# +med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) + 
+sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE)
+sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)])
+
+agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value))
+names(agb_data) = c("Site_ID", "Date", "Median", "SD")
+agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE)
+
+# save AGB data in long format
+save(agb_data, file = '/data/bmorrison/sda/500_site_run/agb_data_sites.Rdata')
+
+######### calculate peak_lai
+# already in long format for the dataframe
+names(lai_sd) = c("modis_date", "calendar_date", "band", "tile", "site_id", "lat", "lon", "pixels", "sd", "qc")
+output = cbind(lai_data, lai_sd$sd)
+names(output) = c(names(lai_data), "sd")
+#output = as.data.frame(data)
+save(output, file = '/data/bmorrison/sda/lai/50_site_run/all_lai_data.Rdata')
+
+# (no tile renaming needed here: the site_id column is kept directly below)
+# remove extra data
+output = output[,c(5, 2, 9, 11)]
+colnames(output) = names(agb_data)
+
+# compute peak lai per year
+data = output
+peak_lai = data.frame()
+years = unique(year(as.Date(data$Date, "%Y-%m-%d")))
+for (i in seq_along(years))
+{
+  d = data[grep(data$Date, pattern = years[i]),]
+  sites = unique(d$Site_ID)
+  for (j in seq_along(sites))
+  {
+    index = which(d$Site_ID == sites[j]) # was site_info$site_id[j], which assumed site_info order matched this year's sites
+    site = d[index,]
+    if (length(index) > 0)
+    {
+      # peak LAI is the maximum of the values at or below the 95th percentile, which screens potential outliers
+      max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = T))[1]
+      peak = data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD)
+      peak_lai = rbind(peak_lai, peak)
+
+    }
+  }
+}
+
+# a fix for low SD values caused by an issue with MODIS LAI error calculations. Reference: Viskari et al. 2014.
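+# (the next line floors the reported LAI standard deviation at 0.66)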
+peak_lai$SD[peak_lai$SD < 0.66] = 0.66
+
+# output data
+names(peak_lai) = c("Site_ID", "Date", "Median", "SD")
+save(peak_lai, file = '/data/bmorrison/sda/lai/50_site_run/peak_lai_data.Rdata')
+
+
+# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ########################
+peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F))
+peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F)
+
+observed_vars = c("AbvGrndWood", "LAI")
+
+
+# merge the AGB and LAI dataframes, placing NA values where data is missing between the two datasets
+observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T)
+names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai")
+
+# order by year
+observed_data = observed_data[order(observed_data$Date),]
+
+# sort by date
+dates = sort(unique(observed_data$Date))
+
+# create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon)
+obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai)
+obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE)
+
+obs.mean = obs.mean %>%
+  split(.$date)
+
+# change the dates to be the middle of the year
+date.obs <- strsplit(names(obs.mean), "_") %>%
+  map_chr(~.x[2]) %>% paste0(.,"/07/15")
+
+obs.mean = names(obs.mean) %>%
+  map(function(namesl){
+    obs.mean[[namesl]] %>%
+      split(.$site_id) %>%
+      map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI")) %>% `row.names<-`(NULL))
+    #setNames(site.ids)
+  }) %>% setNames(date.obs)
+
+# remove NA data as this will crash the SDA. Also drops row names (may not be necessary)
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.mean[[name]]))
+  {
+    na_index = which(!(is.na(obs.mean[[name]][[site]])))
+    colnames = names(obs.mean[[name]][[site]])
+    if (length(na_index) > 0)
+    {
+      obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index]
+    }
+  }
+}
+
+# fillers are 0's for the covariance matrix. This will need to change for differing-size matrices when more variables are added in.
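+# (the obs.cov built below holds, for each date and site, a 2x2 diagonal matrix with
+# the squared AGB and LAI standard deviations on the diagonal; cross-covariances are
+# assumed to be zero)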
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data))) +# names(filler_0) = paste0("h", seq_len(length(observed_vars))) + +# create obs.cov dataframe -->list by date +obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai)#, filler_0) +obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F) + +obs.cov = obs.cov %>% + split(.$date) + +obs.cov = names(obs.cov) %>% + map(function(namesl){ + obs.cov[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4]^2 %>% unlist %>% diag(nrow = 2, ncol = 2) ) + }) %>% setNames(date.obs) + + +names = date.obs +for (name in names) +{ + for (site in names(obs.cov[[name]])) + { + bad = which(apply(obs.cov[[name]][[site]], 2, function(x) any(is.na(x))) == TRUE) + if (length(bad) > 0) + { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][,-bad] + if (is.null(dim(obs.cov[[name]][[site]]))) + { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad] + } else { + obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad,] + } + } + } +} + + +save(obs.mean, file = '/data/bmorrison/sda/lai/50_site_run/obs_mean_50.Rdata') +save(obs.cov, file = '/data/bmorrison/sda/lai/50_site_run/obs_cov_50.Rdata') + + + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data_500.R b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data_500.R new file mode 100755 index 00000000000..0ce0b06efcf --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_lai_agb_data_500.R @@ -0,0 +1,368 @@ +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +library(furrr) +library(tictoc) + +work_dir <- "/data/bmorrison/sda/500_site_run" + +# delete an old run +#unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_sitegroup_500.xml") + +if ("sitegroup" %in% names(settings)){ + if (is.null(settings$sitegroup$nSite)){ + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id, + nSite = settings$sitegroup$nSite) + } + settings$sitegroup <- NULL ## zero out so don't expand a second time if re-reading +} + + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + +# what is this step for???? is this to get the site locations for the map?? 
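+# (same as in the script above: it extracts each site's ID from the MultiSettings
+# object into site.ids)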
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + + +sites_500 = observation +load('/data/bmorrison/sda/500_site_run/all_lai_data.Rdata') +sites_200 = sort(unique(output$site_id)) + +# list for 500 sites +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- sites_500 +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +sites_500 <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +# list for previous 200 sites +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- sites_200 +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +sites_200 <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +# remove sites that were done from the 200 site run +sites_500_xy = as.data.frame(cbind(sites_500$lon, sites_500$lat)) +sites_200_xy = as.data.frame(cbind(sites_200$lon, sites_200$lat)) + +remove = vector() +for (i in 1:nrow(sites_200_xy)) +{ + index = which(sites_500_xy$V1 == sites_200_xy$V1[i] & sites_500_xy$V2 == sites_200_xy$V2[i]) + remove = c(remove, index) +} + +observation = sort(c(sites_500$site_id[-remove])) + + + +############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ########################### +################ Not working on interactive job on MODEX +observations = observation +lai_data = data.frame() +for (i in :37) +{ + start = (1+((i-1)*10)) + end = start+9 + obs = observations[start:end] + + working = print(paste("working on: ", i)) + sites = print(obs) + PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") + bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) + con <- PEcAn.DB::db.open(bety) + bety$con <- con + site_ID <- obs + suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + + suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) + suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) + site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + + + lai = call_MODIS(outdir = NULL, var = "LAI", site_info = site_info, product_dates = c("1980001", "2018365"), + run_parallel = TRUE, ncores = 10, product = 
"MOD15A2H", band = "LaiStdDev_500m", + package_method = "MODISTools", QC_filter = TRUE, progress = FALSE) + + #lai_data = rbind(lai_data, lai) + sd = lai + save(sd, file = paste('/data/bmorrison/sda/500_site_run/lai_sd_sites_', i+16, '.Rdata', sep = "")) + +} + +observation = observations +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) + +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) +# # output folder for the data +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# +# # # extract the data +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# # +# # ndates = colnames(med_agb_data)[-c(1:2)] +# # +# med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +# med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) +# +# sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE) +# sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)]) +# +# agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value)) +# names(agb_data) = c("Site_ID", "Date", "Median", "SD") +# agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE) +# +# #save AGB data into long style +# save(agb_data, file = '/data/bmorrison/sda/500_site_run/agb_data_sites_500.Rdata') + +######### calculate peak_lai +# already in long format style for dataframe +names(lai_sd) = c("modis_date", "calendar_date", "band", "tile", "site_id", "lat", "lon", "pixels", "sd", "qc") +output = cbind(lai_data, lai_sd$sd) +names(output) = c(names(lai_data), "sd") +#output = as.data.frame(data) +save(output, file = '/data/bmorrison/sda/lai/50_site_run/all_lai_data.Rdata') + +# remove extra data +output = output[,c(5, 2, 9, 11)] +colnames(output) = names(agb_data) # should be Site_ID, Date, Median, SD + +data = output +data = data[-which(data$SD > 6),] +peak_lai = data.frame() + + +### Mikes way to do peak LAI for the summer months (weighted precision) +years = unique(year(as.Date(data$Date, "%Y-%m-%d"))) +for (i in 1:length(years)) +{ + d = data[grep(data$Date, pattern = years[i]),] + sites = unique(d$Site_ID) + for (j in 1:length(sites)) + { + index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3)) + site = d[index,] + if (length(index) > 0) + { + dates = as.Date(site$Date, format = "%Y-%m-%d") + index = which(dates >= as.Date(paste(years[i], "06-15", sep = "-")) & dates <= as.Date(paste(years[i], "08-15", sep = 
"-"))) + info = site[index,] + + weights = info$SD/sum(info$SD) + mean = sum(info$Median*weights) + sd = sum(info$SD*weights) + + output = as.data.frame(cbind(sites[j], paste(years[i], "07-15", sep = "-"), mean, sd), stringsAsFactors = F) + names(output) = c("Site_ID", "Date", "Median", "SD") + peak_lai = rbind(peak_lai, output) + } + } +} + +peak_lai$Site_ID = as.numeric(peak_lai$Site_ID) +peak_lai$Date = as.Date(peak_lai$Date) +peak_lai$Median = as.numeric(peak_lai$Median) +peak_lai$SD = as.numeric(peak_lai$SD) +peak_lai$Date = paste("Year", year(peak_lai$Date), sep = "_") + + + + +# compute peak lai per year +# data = output +# peak_lai = data.frame() +# years = unique(year(as.Date(data$Date, "%Y-%m-%d"))) +# for (i in seq_along(years)) +# { +# d = data[grep(data$Date, pattern = years[i]),] +# sites = unique(d$Site_ID) +# for (j in seq_along(sites)) +# { +# index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3)) +# site = d[index,] +# #count = print(nrow(site)) +# if (length(index) > 0) +# { +# # peak lai is the max value that is the value <95th quantile to remove potential outlier values +# max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = T))[1] +# peak = data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD) +# peak_lai = rbind(peak_lai, peak) +# +# } +# } +# } +# +# # a fix for low SD values because of an issue with MODIS LAI error calculations. Reference: VISKARI et al 2014. +# peak_lai$SD[peak_lai$SD < 0.66] = 0.66 +# +# #output data +# names(peak_lai) = c("Site_ID", "Date", "Median", "SD") +# save(peak_lai, file = '/data/bmorrison/sda/500_site_run/peak_lai_data_500.Rdata') + + +# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ######################## +peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F)) +peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F) + +observed_vars = c("AbvGrndWood", "LAI") + + +# merge agb and lai dataframes and places NA values where data is missing between the 2 datasets +observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T) +names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai") + +# order by year +observed_data = observed_data[order(observed_data$Date),] + +#sort by date +dates = sort(unique(observed_data$Date)) + +# create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon) +obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai) +obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE) + +obs.mean = obs.mean %>% + split(.$date) + +# change the dates to be middle of the year +date.obs <- strsplit(names(obs.mean), "_") %>% + map_chr(~.x[2]) %>% paste0(.,"/07/15") + +obs.mean = names(obs.mean) %>% + map(function(namesl){ + obs.mean[[namesl]] %>% + split(.$site_id) %>% + map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI")) %>% `row.names<-`(NULL)) + #setNames(site.ids) + }) %>% setNames(date.obs) + +#remove NA data as this will crash the SDA. 
Also drops row names (may not be necessary)
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.mean[[name]]))
+  {
+    na_index = which(!(is.na(obs.mean[[name]][[site]])))
+    colnames = names(obs.mean[[name]][[site]])
+    if (length(na_index) > 0)
+    {
+      obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index]
+    }
+  }
+}
+
+# fillers are 0's for the covariance matrix. This will need to change for differing-size matrices when more variables are added in.
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data)))
+# names(filler_0) = paste0("h", seq_len(length(observed_vars)))
+
+# create obs.cov dataframe --> list by date
+obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai)#, filler_0)
+obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F)
+
+obs.cov = obs.cov %>%
+  split(.$date)
+
+obs.cov = names(obs.cov) %>%
+  map(function(namesl){
+    obs.cov[[namesl]] %>%
+      split(.$site_id) %>%
+      map(~.x[3:4]^2 %>% unlist %>% diag(nrow = 2, ncol = 2) )
+  }) %>% setNames(date.obs)
+
+# drop any NA rows/columns from each site's covariance matrix so its dimensions match the available observations
+names = date.obs
+for (name in names)
+{
+  for (site in names(obs.cov[[name]]))
+  {
+    bad = which(apply(obs.cov[[name]][[site]], 2, function(x) any(is.na(x))) == TRUE)
+    if (length(bad) > 0)
+    {
+      obs.cov[[name]][[site]] = obs.cov[[name]][[site]][,-bad]
+      if (is.null(dim(obs.cov[[name]][[site]])))
+      {
+        obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad]
+      } else {
+        obs.cov[[name]][[site]] = obs.cov[[name]][[site]][-bad,]
+      }
+    }
+  }
+}
+
+
+save(obs.mean, file = '/data/bmorrison/sda/500_site_run/obs_mean_500_ave.Rdata')
+save(obs.cov, file = '/data/bmorrison/sda/500_site_run/obs_cov_500_ave.Rdata')
+
+
+
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup 2.R b/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup 2.R
new file mode 100755
index 00000000000..174a14f3209
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup 2.R
@@ -0,0 +1,301 @@
+
+#---------------- Close all devices and delete all variables. 
-------------------------------------# +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +library(future) +library(tictoc) +#--------------------------------------------------------------------------------------------------# +######################################## INTIAL SET UP STUFF ####################################### +work_dir <- "/data/bmorrison/sda/lai" + +# delete an old run +unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_8_Sites_2009.xml") + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + +#observation = "1000000048" + +# what is this step for???? is this to get the site locations for the map?? +if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ########################### +################ Not working on interactive job on MODEX + +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +# +# ###################### EXTRACT AGB DATA + REFORMAT LONG VS. 
WIDE STYLE ##################################### +# ### this is for LandTrendr data ### +# +# # output folder for the data +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# +# # extract the data +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# +# ### temporary fix to make agb data long vs. wide format to match modis data. ### +# ndates = colnames(med_agb_data)[-c(1:2)] +# +# med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +# med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) +# +# sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE) +# sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)]) +# +# agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value)) +# names(agb_data) = c("Site_ID", "Date", "Median", "SD") +# agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE) +# +# # save AGB data into long style +# save(agb_data, file = '/data/bmorrison/sda/lai/modis_lai_data/agb_data_update_sites.Rdata') +# +# +# # ####################### Extract MODISTools LAI data ############################## +# +# library(doParallel) +# cl <- parallel::makeCluster(10, outfile="") +# doParallel::registerDoParallel(cl) +# +# start = Sys.time() +# # keep QC_filter on for this because bad LAI values crash the SDA. Progress can be turned off if it annoys you. +# data = foreach(i=1:length(site_info$site_id), .combine = rbind) %dopar% PEcAn.data.remote::call_MODIS(start_date = "2000/01/01", end_date = "2017/12/31", band = "Lai_500m", product = "MOD15A2H", lat = site_info$lat[i], lon = site_info$lon[i], size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T, progress = T) +# end = Sys.time() +# difference = end-start +# stopCluster(cl) +# +# # already in long format style for dataframe +# output = as.data.frame(data) +# save(output, file = '/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_update_sites.Rdata') +# +# # change tile names to the site name +# for (i in 1:length(site_info$site_name)) +# { +# name = as.character(site_info$site_id[i], stringsAsFactor = F) +# g = which(round(output$lat, digits = 3) == round(site_info$lat[i], digits = 3)) +# output$tile[g] = name +# } +# # remove extra data +# output = output[,c(4,2,8,10)] +# colnames(output) = names(agb_data) +# +# # compute peak lai per year +# data = output +# peak_lai = data.frame() +# years = unique(year(as.Date(data$Date, "%Y-%m-%d"))) +# for (i in seq_along(years)) +# { +# d = data[grep(data$Date, pattern = years[i]),] +# sites = unique(d$Site_ID) +# for (j in seq_along(sites)) +# { +# index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3)) +# site = d[index,] +# if (length(index) > 0) +# { +# # peak lai is the max value that is the value <95th quantile to remove potential outlier values +# max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = 
T))[1] +# peak = data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD) +# peak_lai = rbind(peak_lai, peak) +# +# } +# } +# } +# +# # a fix for low SD values because of an issue with MODIS LAI error calculations. Reference: VISKARI et al 2014. +# peak_lai$SD[peak_lai$SD < 0.66] = 0.66 +# +# #output data +# names(peak_lai) = c("Site_ID", "Date", "Median", "SD") +# save(peak_lai, file = '/data/bmorrison/sda/lai/modis_lai_data/peak_lai_output_update_sites.Rdata') +# +# +# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ######################## +# ################# +# load('/data/bmorrison/sda/lai/modis_lai_data/agb_data_update_sites.Rdata') +# load( '/data/bmorrison/sda/lai/modis_lai_data/peak_lai_output_update_sites.Rdata') +# # output likes to make factors ..... :/... so this unfactors them +# peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F)) +# peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F) +# +# observed_vars = c("AbvGrndWood", "LAI") +# +# +# # merge agb and lai dataframes and places NA values where data is missing between the 2 datasets +# observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T) +# names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai") +# +# # order by year +# observed_data = observed_data[order(observed_data$Date),] +# +# #sort by date +# dates = sort(unique(observed_data$Date)) +# +# # create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon) +# obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai) +# obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE) +# +# obs.mean = obs.mean %>% +# split(.$date) +# +# # change the dates to be middle of the year +# date.obs <- strsplit(names(obs.mean), "_") %>% +# map_chr(~.x[2]) %>% paste0(.,"/07/15") +# +# obs.mean = names(obs.mean) %>% +# map(function(namesl){ +# obs.mean[[namesl]] %>% +# split(.$site_id) %>% +# map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI"))) %>% +# setNames(site.ids) +# }) %>% setNames(date.obs) +# +# # remove NA data as this will crash the SDA. Removes rown numbers (may not be nessesary) +# names = date.obs +# for (name in names) +# { +# for (site in names(obs.mean[[name]])) +# { +# na_index = which(!(is.na(obs.mean[[ name]][[site]]))) +# colnames = names(obs.mean[[name]][[site]]) +# if (length(na_index) > 0) +# { +# obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index] +# row.names(obs.mean[[name]][[site]]) = NULL +# } +# } +# } +# +# # fillers are 0's for the covariance matrix. This will need to change for differing size matrixes when more variables are added in. 
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data))) +# names(filler_0) = paste0("h", seq_len(length(observed_vars))) +# +# # create obs.cov dataframe -->list by date +# obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai, filler_0) +# obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F) +# +# obs.cov = obs.cov %>% +# split(.$date) +# +# #sublist by date --> site +# obs.cov = names(obs.cov) %>% +# map(function(namesl){ +# obs.cov[[namesl]] %>% +# split(.$site_id) %>% +# map(~diag(.x[3:4]^2, nrow = 2, ncol = 2)) %>% +# setNames(site.ids)}) %>% +# setNames(date.obs) +# +# # remove NA/missing observations from covariance matrix and removes NA values to restructure size of covar matrix +# names = names(obs.cov) +# for (name in names) +# { +# for (site in names(obs.cov[[name]])) +# { +# na_index = which(is.na(obs.cov[[ name]][[site]])) +# if (length(na_index) > 0) +# { +# n_good_vars = length(observed_vars)-length(na_index) +# obs.cov[[name]][[site]] = matrix(obs.cov[[name]][[site]][-na_index], nrow = n_good_vars, ncol = n_good_vars) +# } +# } +# } +# +# # save these lists for future use. +# save(obs.mean, file = '/data/bmorrison/sda/lai/obs_mean_update_sites.Rdata') +# save(obs.cov, file = '/data/bmorrison/sda/lai/obs_cov_update_sites.Rdata') +# save(date.obs, file = '/data/bmorrison/sda/lai/date_obs_update_sites.Rdata') + + + +################################ START THE SDA ######################################## +load('/data/bmorrison/sda/lai/obs_mean_update_sites.Rdata') +load('/data/bmorrison/sda/lai/obs_cov_update_sites.Rdata') +date.obs = names(obs.mean) + +new.settings <- PEcAn.settings::prepare.settings(settings) + +#unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(new.settings, + obs.mean =obs.mean, + obs.cov = obs.cov, + keepNC = TRUE, + forceRun = TRUE, + control=list(trace=TRUE, + FF=FALSE, + interactivePlot=FALSE, + TimeseriesPlot=TRUE, + BiasPlot=FALSE, + plot.title=NULL, + facet.plots=4, + debug=FALSE, + pause=FALSE, + Profiling = FALSE, + OutlierDetection=FALSE)) + + + + +### FOR PLOTTING after analysis if TimeseriesPlot == FALSE) +load('/data/bmorrison/sda/lai/SDA/sda.output.Rdata') +facetg=4 +readsFF=NULL + +obs.mean = Viz.output[[2]] +obs.cov = Viz.output[[3]] +obs.times = names(obs.mean) +PEcAn.assim.sequential::post.analysis.multisite.ggplot(settings = new.settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=facetg, readsFF=NULL) + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup.R b/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup.R new file mode 100755 index 00000000000..174a14f3209 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup.R @@ -0,0 +1,301 @@ + +#---------------- Close all devices and delete all variables. 
-------------------------------------# +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +library(future) +library(tictoc) +#--------------------------------------------------------------------------------------------------# +######################################## INTIAL SET UP STUFF ####################################### +work_dir <- "/data/bmorrison/sda/lai" + +# delete an old run +unlink(c('run','out','SDA'),recursive = T) + +# grab multi-site XML file +settings <- read.settings("pecan_MultiSite_SDA_LAI_AGB_8_Sites_2009.xml") + +# doesn't work for one site +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + +#observation = "1000000048" + +# what is this step for???? is this to get the site locations for the map?? +if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +############################ EXTRACT SITE INFORMATION FROM XML TO DOWNLOAD DATA + RUN SDA ########################### +################ Not working on interactive job on MODEX + +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +# +# ###################### EXTRACT AGB DATA + REFORMAT LONG VS. 
WIDE STYLE ##################################### +# ### this is for LandTrendr data ### +# +# # output folder for the data +# data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +# +# # extract the data +# med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", +# data_dir, product_dates=NULL, file.path(work_dir,"Obs"))[[1]] +# +# +# ### temporary fix to make agb data long vs. wide format to match modis data. ### +# ndates = colnames(med_agb_data)[-c(1:2)] +# +# med_agb_data$Site_Name = as.character(med_agb_data$Site_Name, stringsAsFactors = FALSE) +# med_agb_data = reshape2::melt(med_agb_data, id.vars = "Site_ID", measure.vars = colnames(med_agb_data)[-c(1:2)]) +# +# sdev_agb_data$Site_Name = as.character(sdev_agb_data$Site_Name, stringsAsFactors = FALSE) +# sdev_agb_data = reshape2::melt(sdev_agb_data, id.vars = "Site_ID", measure.vars = colnames(sdev_agb_data)[-c(1:2)]) +# +# agb_data = as.data.frame(cbind(med_agb_data, sdev_agb_data$value)) +# names(agb_data) = c("Site_ID", "Date", "Median", "SD") +# agb_data$Date = as.character(agb_data$Date, stringsAsFactors = FALSE) +# +# # save AGB data into long style +# save(agb_data, file = '/data/bmorrison/sda/lai/modis_lai_data/agb_data_update_sites.Rdata') +# +# +# # ####################### Extract MODISTools LAI data ############################## +# +# library(doParallel) +# cl <- parallel::makeCluster(10, outfile="") +# doParallel::registerDoParallel(cl) +# +# start = Sys.time() +# # keep QC_filter on for this because bad LAI values crash the SDA. Progress can be turned off if it annoys you. +# data = foreach(i=1:length(site_info$site_id), .combine = rbind) %dopar% PEcAn.data.remote::call_MODIS(start_date = "2000/01/01", end_date = "2017/12/31", band = "Lai_500m", product = "MOD15A2H", lat = site_info$lat[i], lon = site_info$lon[i], size = 0, band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", package_method = "MODISTools", QC_filter = T, progress = T) +# end = Sys.time() +# difference = end-start +# stopCluster(cl) +# +# # already in long format style for dataframe +# output = as.data.frame(data) +# save(output, file = '/data/bmorrison/sda/lai/modis_lai_data/modis_lai_output_update_sites.Rdata') +# +# # change tile names to the site name +# for (i in 1:length(site_info$site_name)) +# { +# name = as.character(site_info$site_id[i], stringsAsFactor = F) +# g = which(round(output$lat, digits = 3) == round(site_info$lat[i], digits = 3)) +# output$tile[g] = name +# } +# # remove extra data +# output = output[,c(4,2,8,10)] +# colnames(output) = names(agb_data) +# +# # compute peak lai per year +# data = output +# peak_lai = data.frame() +# years = unique(year(as.Date(data$Date, "%Y-%m-%d"))) +# for (i in seq_along(years)) +# { +# d = data[grep(data$Date, pattern = years[i]),] +# sites = unique(d$Site_ID) +# for (j in seq_along(sites)) +# { +# index = which(d$Site_ID == site_info$site_id[j]) #which(round(d$lat, digits = 3) == round(site_info$lat[j], digits = 3) & round(d$lon, digits = 3) == round(site_info$lon[j], digits = 3)) +# site = d[index,] +# if (length(index) > 0) +# { +# # peak lai is the max value that is the value <95th quantile to remove potential outlier values +# max = site[which(site$Median == max(site$Median[which(site$Median <= quantile(site$Median, probs = 0.95))], na.rm = T))[1],] #which(d$Median == max(d$Median[index], na.rm = 
T))[1] +# peak = data.frame(max$Site_ID, Date = paste("Year", years[i], sep = "_"), Median = max$Median, SD = max$SD) +# peak_lai = rbind(peak_lai, peak) +# +# } +# } +# } +# +# # a fix for low SD values because of an issue with MODIS LAI error calculations. Reference: VISKARI et al 2014. +# peak_lai$SD[peak_lai$SD < 0.66] = 0.66 +# +# #output data +# names(peak_lai) = c("Site_ID", "Date", "Median", "SD") +# save(peak_lai, file = '/data/bmorrison/sda/lai/modis_lai_data/peak_lai_output_update_sites.Rdata') +# +# +# ######################### TIME TO FIX UP THE OBSERVED DATASETS INTO A FORMAT THAT WORKS TO MAKE OBS.MEAN and OBS.COV FOR SDA ######################## +# ################# +# load('/data/bmorrison/sda/lai/modis_lai_data/agb_data_update_sites.Rdata') +# load( '/data/bmorrison/sda/lai/modis_lai_data/peak_lai_output_update_sites.Rdata') +# # output likes to make factors ..... :/... so this unfactors them +# peak_lai$Site_ID = as.numeric(as.character(peak_lai$Site_ID, stringsAsFactors = F)) +# peak_lai$Date = as.character(peak_lai$Date, stringsAsFactors = F) +# +# observed_vars = c("AbvGrndWood", "LAI") +# +# +# # merge agb and lai dataframes and places NA values where data is missing between the 2 datasets +# observed_data = merge(agb_data, peak_lai, by = c("Site_ID", "Date"), all = T) +# names(observed_data) = c("Site_ID", "Date", "med_agb", "sdev_agb", "med_lai", "sdev_lai") +# +# # order by year +# observed_data = observed_data[order(observed_data$Date),] +# +# #sort by date +# dates = sort(unique(observed_data$Date)) +# +# # create the obs.mean list --> this needs to be adjusted to work with load.data in the future (via hackathon) +# obs.mean = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, med_agb = observed_data$med_agb, med_lai = observed_data$med_lai) +# obs.mean$date = as.character(obs.mean$date, stringsAsFactors = FALSE) +# +# obs.mean = obs.mean %>% +# split(.$date) +# +# # change the dates to be middle of the year +# date.obs <- strsplit(names(obs.mean), "_") %>% +# map_chr(~.x[2]) %>% paste0(.,"/07/15") +# +# obs.mean = names(obs.mean) %>% +# map(function(namesl){ +# obs.mean[[namesl]] %>% +# split(.$site_id) %>% +# map(~.x[3:4] %>% setNames(c("AbvGrndWood", "LAI"))) %>% +# setNames(site.ids) +# }) %>% setNames(date.obs) +# +# # remove NA data as this will crash the SDA. Removes rown numbers (may not be nessesary) +# names = date.obs +# for (name in names) +# { +# for (site in names(obs.mean[[name]])) +# { +# na_index = which(!(is.na(obs.mean[[ name]][[site]]))) +# colnames = names(obs.mean[[name]][[site]]) +# if (length(na_index) > 0) +# { +# obs.mean[[name]][[site]] = obs.mean[[name]][[site]][na_index] +# row.names(obs.mean[[name]][[site]]) = NULL +# } +# } +# } +# +# # fillers are 0's for the covariance matrix. This will need to change for differing size matrixes when more variables are added in. 
+# filler_0 = as.data.frame(matrix(0, ncol = length(observed_vars), nrow = nrow(observed_data))) +# names(filler_0) = paste0("h", seq_len(length(observed_vars))) +# +# # create obs.cov dataframe -->list by date +# obs.cov = data.frame(date = observed_data$Date, site_id = observed_data$Site_ID, sdev_agb = observed_data$sdev_agb, sdev_lai = observed_data$sdev_lai, filler_0) +# obs.cov$date = as.character(obs.cov$date, stringsAsFactors = F) +# +# obs.cov = obs.cov %>% +# split(.$date) +# +# #sublist by date --> site +# obs.cov = names(obs.cov) %>% +# map(function(namesl){ +# obs.cov[[namesl]] %>% +# split(.$site_id) %>% +# map(~diag(.x[3:4]^2, nrow = 2, ncol = 2)) %>% +# setNames(site.ids)}) %>% +# setNames(date.obs) +# +# # remove NA/missing observations from covariance matrix and removes NA values to restructure size of covar matrix +# names = names(obs.cov) +# for (name in names) +# { +# for (site in names(obs.cov[[name]])) +# { +# na_index = which(is.na(obs.cov[[ name]][[site]])) +# if (length(na_index) > 0) +# { +# n_good_vars = length(observed_vars)-length(na_index) +# obs.cov[[name]][[site]] = matrix(obs.cov[[name]][[site]][-na_index], nrow = n_good_vars, ncol = n_good_vars) +# } +# } +# } +# +# # save these lists for future use. +# save(obs.mean, file = '/data/bmorrison/sda/lai/obs_mean_update_sites.Rdata') +# save(obs.cov, file = '/data/bmorrison/sda/lai/obs_cov_update_sites.Rdata') +# save(date.obs, file = '/data/bmorrison/sda/lai/date_obs_update_sites.Rdata') + + + +################################ START THE SDA ######################################## +load('/data/bmorrison/sda/lai/obs_mean_update_sites.Rdata') +load('/data/bmorrison/sda/lai/obs_cov_update_sites.Rdata') +date.obs = names(obs.mean) + +new.settings <- PEcAn.settings::prepare.settings(settings) + +#unlink(c('run','out','SDA'),recursive = T) + +sda.enkf.multisite(new.settings, + obs.mean =obs.mean, + obs.cov = obs.cov, + keepNC = TRUE, + forceRun = TRUE, + control=list(trace=TRUE, + FF=FALSE, + interactivePlot=FALSE, + TimeseriesPlot=TRUE, + BiasPlot=FALSE, + plot.title=NULL, + facet.plots=4, + debug=FALSE, + pause=FALSE, + Profiling = FALSE, + OutlierDetection=FALSE)) + + + + +### FOR PLOTTING after analysis if TimeseriesPlot == FALSE) +load('/data/bmorrison/sda/lai/SDA/sda.output.Rdata') +facetg=4 +readsFF=NULL + +obs.mean = Viz.output[[2]] +obs.cov = Viz.output[[3]] +obs.times = names(obs.mean) +PEcAn.assim.sequential::post.analysis.multisite.ggplot(settings = new.settings, t, obs.times, obs.mean, obs.cov, FORECAST, ANALYSIS, plot.title=NULL, facetg=facetg, readsFF=NULL) + diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/nohuprun.txt b/modules/assim.sequential/inst/sda_backup/bmorrison/nohuprun.txt new file mode 100755 index 00000000000..d3d35a295f0 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/nohuprun.txt @@ -0,0 +1,19 @@ +# model run +nohup Rscript workflow.R > workflow.log 2>&1 & +nohup Rscript workflow.R --settings pecan_US-CZ3_CRUNCEP.xml > workflow.log 2>&1 & + + +# SDA +# interactive qsub, keep enviroment, 3 hours, 1 node 15 CPUs +##qsub -IV -l walltime=03:00:00,nodes=1:ppn=15 +cd /data/bmorrison/sda/ +nohup Rscript Multisite-3sites.R > SDA_workflow.log 2>&1 & + +nohup Rscript Multisite-4sites.R > SDA_workflow.log 2>&1 & + +nohup Rscript Multisite_SDA_BNL.R > SDA_workflow.log 2>&1 & + + +qstat -f + +qstat -ext diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/pft_selection.R b/modules/assim.sequential/inst/sda_backup/bmorrison/pft_selection.R new file 
mode 100755 index 00000000000..baf648cdf71 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/bmorrison/pft_selection.R @@ -0,0 +1,190 @@ +library(raster) +library(shapefiles) +library(PEcAn.DB) + +analysis = readRDS("/Volumes/data2/bmorrison/sda/500_site_run/output_folder/ANALYSIS.RDS") + +dates = names(analysis) +sites = unique(attributes(analysis[[names(analysis)[1]]])$Site) +observations = sites + + +#working = print(paste("working on: ", i)) +sites = observations +bety <- list(user='bety', password='bety', host='modex.bnl.gov', + dbname='betydb', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- sites +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +# load('/Volumes/data/bmorrison/sda/500_site_run/all_lai_data_500.Rdata') +# site_ids = unique(lai_data$site_id) +# sites = data.frame() +# for (i in 1:length(site_ids)) +# { +# index = which(lai_data$site_id == site_ids[i]) +# sites = rbind(sites, lai_data[index[1],]) +# } +# sites = sites[,c(5,6,7)] +sites = as.data.frame(cbind(site_info$site_id,site_info$lon, site_info$lat)) +names(sites) = c("id", "lon", "lat") +coordinates(sites) = ~lon+lat +projection(sites) = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs " +cover = raster('/Volumes/data/bmorrison/sda/500_site_run/NLCD_2001_Land_Cover_L48_20190424.img') + +# sites = shapefile('/data/bmorrison/sda/500_site_run/shapefiles/500_site_selection.shp') +# s = shapefile('/data/bmorrison/sda/500_site_run/shapefiles/500_site_selection.shp') + +#sites = as.data.frame(sites) +# sites = sites[, 11:12] +# names(sites) = c("x", "y") +# coordinates(sites) = ~x+y +# projection(sites) = crs(s) + +sites = spTransform(sites, CRS = crs(cover)) + +# make sure projections match +data = extract(cover, sites) +sites$cover = data + +# bad = which(data == 11 | data == 12 | data == 31) +# site_data = sites[-bad,] +site_data = sites + +ecoregion = shapefile('/Volumes/data2/bmorrison/sda/bailey_paper/data/ecoregions_shapefile/eco_aea_l1.shp') +ecoregion = spTransform(ecoregion, CRS = crs(cover)) +eco_data = extract(ecoregion, site_data) +site_data$region = eco_data$NA_L1CODE +site_data$name = eco_data$NA_L1NAME + +site_data = as.data.frame(site_data) +names(site_data) = c("ID", "cover", "ecoregion", "name", "lon", "lat") +site_data$pft = NA +site_data$cover = as.numeric(site_data$cover) +site_data$ecoregion = as.numeric(site_data$ecoregion) +# remove sites that are categorized as unclassified, water, ice/snow, barren +index = which(site_data$cover == 0 | site_data$cover == 11 | site_data$cover == 12 | site_data$cover == 31) +site_data$pft[index] = NA + +# classify deciduous +index = which(site_data$cover == 41) +site_data$pft[index] = "deciduous" + + +# classify evergreen/conifer +index = which(site_data$cover == 42) +site_data$pft[index] = "conifer" + + +# classify mixed forest +index = which(site_data$cover == 43) +site_data$pft[index] = "mixed forest" + +# classify developed +index = which(site_data$cover == 21 | site_data$cover == 22 | site_data$cover == 23 | site_data$cover == 24) +site_data$pft[index] = "developed" + +# classify 
shrub/scrub +index = which(site_data$cover == 52 & (site_data$ecoregion == 10 | site_data$ecoregion == 11 | site_data$ecoregion == 12 | site_data$ecoregion == 13 | site_data$ecoregion == 14)) +site_data$pft[index] = "arid grassland" + +index = which(site_data$cover == 52 & (site_data$ecoregion == 9 | site_data$ecoregion == 8 | site_data$ecoregion == 6 | site_data$ecoregion == 7)) +site_data$pft[index] = "mesic grassland" + + +# classify herbaceous +index = which(site_data$cover == 71 & (site_data$ecoregion == 10 | site_data$ecoregion == 11 | site_data$ecoregion == 12 | site_data$ecoregion == 13 | site_data$ecoregion == 14)) +site_data$pft[index] = "arid grassland" + +index = which(site_data$cover == 71 & (site_data$ecoregion == 9 | site_data$ecoregion == 15 | site_data$ecoregion == 7 | site_data$ecoregion == 8 | site_data$ecoregion == 5 | site_data$ecoregion == 6)) +site_data$pft[index] = "mesic grassland" + + +# classify hay/pasture crops +index = which((site_data$cover == 81 | site_data$cover == 82) & (site_data$ecoregion == 10 | site_data$ecoregion == 11 | site_data$ecoregion == 12 | site_data$ecoregion == 13 | site_data$ecoregion == 14)) +site_data$pft[index] = "arid grassland" + +index = which((site_data$cover == 81 | site_data$cover == 82) & (site_data$ecoregion == 9 | site_data$ecoregion == 8 | site_data$ecoregion == 7)) +site_data$pft[index] = "mesic grassland" + + +# classify wetlands +index = which(site_data$cover == 95) +site_data$pft[index] = "mesic grassland" + +index = which(site_data$cover == 90) +site_data$pft[index] = "woody wetland" + + +# LAI analysis for forests (mixed + woody wetland) +index = which(site_data$cover == 43 | site_data$cover == 90) +data = site_data[index,] +coordinates(data) = ~lon+lat +projection(data) = crs(sites) +data = spTransform(data, CRS = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs ") +data = as.data.frame(data, stringsAsFactors = F) + + +library(PEcAn.data.remote) +site_id = data$ID +site_name = rep(NA, nrow(data)) +lat = data$lat +lon = data$lon +time_zone = rep("time", nrow(data)) +site_info = list(site_id, site_name, lat, lon, time_zone) +names(site_info) = c("site_id", "site_name", "lat", "lon", "time_zone") + +lai = call_MODIS(outdir = NULL, var = "lai", site_info = site_info, product_dates = c("2001001", "2001365"), run_parallel = T, ncores = 10, product ="MOD15A2H", band = "Lai_500m", package_method = "MODISTools", QC_filter = T, progress = F) +ndvi = call_MODIS(outdir = NULL, var = "NDVI", site_info = site_info, product_dates = c("2001001", "2001365"), run_parallel = T, ncores = 10, product ="MOD13Q1", band = "250m_16_days_NDVI", package_method = "MODISTools", QC_filter = F, progress = F) + +library(lubridate) + +par(mfrow = c(4,5)) +info = data.frame() +data = lai +sites = sort(unique(lai$site_id)) +# xy = data.frame() +# for (i in 1:length(sites)) +# { +# d = data[which(data$site_id == sites[i]),] +# xy = rbind(xy, d[1,c(5,7,6)]) +# } +#data$calendar_date = as.Date(data$calendar_date) + +for (i in 21:40) +{ + site = sites[i] + d = data[which(data$site_id == site),] + d = d[,c(2,5,6,7,9)] + d = d[order(d$calendar_date),] + d$calendar_date = as.Date(d$calendar_date) + min = min(d$data, na.rm = T) + max = max(d$data, na.rm = T) + difference = max-min + # winter = d %>% + # select(calendar_date, site_id, lat, lon, data) %>% + # filter((calendar_date >= month(ymd("2001-01-01")) & calendar_date <= month(ymd("2001-02-28"))) | (calendar_date >= month(ymd("2001-12-01")) & calendar_date <= month(ymd("2001-12-31")))) + # min = 
mean(winter$data, na.rm = T)
+  
+  # summer = d %>%
+  #   select(calendar_date, site_id, lat, lon, data) %>%
+  #   filter(calendar_date >= month(ymd("2001-06-01")) & calendar_date <= month(ymd("2001-08-30")))
+  # max = mean(summer$data, na.rm = T)
+  # difference = max - min
+  
+  info = rbind(info, as.data.frame(cbind(site, d$lon[1], d$lat[1], min, max, difference)))
+  plot(d$calendar_date, d$data, ylim = c(0, max(data$data)+2), main = site)
+}
+
+
+
+
+
+
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/register_site_group.R b/modules/assim.sequential/inst/sda_backup/bmorrison/register_site_group.R
new file mode 100755
index 00000000000..3ad6e3dc4a3
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/bmorrison/register_site_group.R
@@ -0,0 +1,44 @@
+library(raster)
+library(shapefiles)
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+library(furrr)
+library(tictoc)
+
+data = shapefile('/data/bmorrison/sda/500_site_run/shapefiles/500_site_selection_final.shp')
+data = as.data.frame(data)
+names(data) = c("type", "lon", "lat")
+
+
+bety <- list(user='bety', password='bety', host='localhost',
+             dbname='bety', driver='PostgreSQL',write=TRUE)
+con <- PEcAn.DB::db.open(bety)
+bety$con <- con
+#site_ID <- observation
+
+#-- register sites
+site_id <- map2(data$lon, data$lat, function(lon, lat){
+  pr <- paste0(round(lon,0), round(lat,0))
+  out <- db.query(paste0("INSERT INTO sites (sitename, geometry) VALUES ('CMS_500_SDA_",pr,"', ",
+                         "ST_SetSRID(ST_MakePoint(",lon,",", lat,", 1000), 4326) ) RETURNING id, sitename"),
+                  con
+  )
+  out
+})
+# link to site group
+site_id %>%
+  map(~ db.query(paste0("INSERT INTO sitegroups_sites (sitegroup_id , site_id ) VALUES (2000000009, ",
+                        .x[[1]],")"),con))
\ No newline at end of file
diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-3sites.R b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-3sites.R
new file mode 100755
index 00000000000..6d956113e4e
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-3sites.R
@@ -0,0 +1,102 @@
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+#------------------------------------------ Setup -------------------------------------
+setwd("/data/sserbin/Modeling/sipnet/NASA_CMS")
+unlink(c('run','out','SDA'),recursive = T)
+rm(list=ls())
+settings <- read.settings("pecan.SDA.3sites.xml")
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character()
+# sample from parameters used for both sensitivity analysis and Ens
+get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingspace$parameters$method) ## Aside: if method were set to unscented, would take minimal changes to do 
UnKF +#---------------------------------------------------------------- +# OBS data preparation +#--------------------------------------------------------------- +load("Obs/LandTrendr_AGB_output-4sites.RData") +site1 <- point_list +site1$median_AGB[[1]] %>% + filter(Site_ID!='772') -> site1$median_AGB[[1]] + + +site1$stdv_AGB[[1]] %>% + filter(Site_ID!='772') -> site1$stdv_AGB[[1]] + +load("Obs/LandTrendr_AGB_output_796-769.RData") +site2 <- point_list +site2$median_AGB[[1]] %>% + filter(Site_ID!='1000000074') ->site2$median_AGB[[1]] + + +site2$stdv_AGB[[1]] %>% + filter(Site_ID!='1000000074') ->site2$stdv_AGB[[1]] +#listviewer::jsonedit(point_list) +#-------------------------------------------------------------------------------- +#for multi site both mean and cov needs to be a list like this +# +date +# +siteid +# c(state variables)/matrix(cov state variables) +# +#reorder sites in obs +point_list$median_AGB <-rbind(site1$median_AGB[[1]], + site2$median_AGB[[1]]) %>% filter(Site_ID %in% site.ids) +point_list$stdv_AGB <-rbind(site1$stdv_AGB[[1]], + site2$stdv_AGB[[1]])%>% filter(Site_ID %in% site.ids) + +site.order <- sapply(site.ids,function(x) which(point_list$median_AGB$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() + +point_list$median_AGB <- point_list$median_AGB[site.order,] +point_list$stdv_AGB <- point_list$stdv_AGB[site.order,] + +# truning lists to dfs for both mean and cov +date.obs <- strsplit(names(site1$median_AGB[[1]]),"_")[3:length(site1$median_AGB[[1]])] %>% + map_chr(~.x[2]) %>% paste0(.,"/12/31") + +obs.mean <-names(point_list$median_AGB)[3:length(point_list$median_AGB)] %>% + map(function(namesl){ + ((point_list$median_AGB)[[namesl]] %>% + map(~.x %>% as.data.frame %>% `colnames<-`(c('AbvGrndWood'))) %>% + setNames(site.ids[1:length(.)]) + ) + }) %>% setNames(date.obs) + + + +obs.cov <-names(point_list$stdv_AGB)[3:length(point_list$median_AGB)] %>% + map(function(namesl) { + ((point_list$stdv_AGB)[[namesl]] %>% + map( ~ (.x) ^ 2%>% as.matrix()) %>% + setNames(site.ids[1:length(.)])) + + }) %>% setNames(date.obs) + +#---------------------------------------------------------------- +# end OBS data preparation +#--------------------------------------------------------------- +new.settings <- PEcAn.settings::prepare.settings(settings) +#jsonedit(new.settings) +#------------------------------------------ SDA ------------------------------------- +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=T, + FF=F, + interactivePlot=F, + TimeseriesPlot=T, + BiasPlot=F, + plot.title="lhc sampling - 4sites - SF50 - ALL PFTs - small sample size", + facet.plots=T, + debug=F, + pause=F) + ) + + diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-4sites.R b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-4sites.R new file mode 100755 index 00000000000..eb1a12e931e --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite-4sites.R @@ -0,0 +1,102 @@ +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +#------------------------------------------ Setup ------------------------------------- +setwd("/data/sserbin/Modeling/sipnet/NASA_CMS") 
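+# clear any previous run/out/SDA output under the working directory before this run starts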
+unlink(c('run','out','SDA'),recursive = T) +rm(list=ls()) +settings <- read.settings("pecan.SDA.4sites.xml") +if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() +#sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, ens.sample.method = settings$ensemble$samplingspace$parameters$method) ## Aside: if method were set to unscented, would take minimal changes to do UnKF +#---------------------------------------------------------------- +# OBS data preparation +#--------------------------------------------------------------- +load("Obs/LandTrendr_AGB_output-4sites.RData") +site1 <- point_list +site1$median_AGB[[1]] %>% + filter(Site_ID!='772') -> site1$median_AGB[[1]] + + +site1$stdv_AGB[[1]] %>% + filter(Site_ID!='772') -> site1$stdv_AGB[[1]] + +load("Obs/LandTrendr_AGB_output_796-769.RData") +site2 <- point_list +site2$median_AGB[[1]] %>% + filter(Site_ID=='796') -> site2$median_AGB[[1]] + + +site2$stdv_AGB[[1]] %>% + filter(Site_ID=='796') -> site2$stdv_AGB[[1]] +#listviewer::jsonedit(point_list) +#-------------------------------------------------------------------------------- +#for multi site both mean and cov needs to be a list like this +# +date +# +siteid +# c(state variables)/matrix(cov state variables) +# +#reorder sites in obs +point_list$median_AGB <-rbind(site1$median_AGB[[1]], + site2$median_AGB[[1]]) %>% filter(Site_ID %in% site.ids) +point_list$stdv_AGB <-rbind(site1$stdv_AGB[[1]], + site2$stdv_AGB[[1]])%>% filter(Site_ID %in% site.ids) + +site.order <- sapply(site.ids,function(x) which(point_list$median_AGB$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() + +point_list$median_AGB <- point_list$median_AGB[site.order,] +point_list$stdv_AGB <- point_list$stdv_AGB[site.order,] + +# truning lists to dfs for both mean and cov +date.obs <- strsplit(names(site1$median_AGB[[1]]),"_")[3:length(site1$median_AGB[[1]])] %>% + map_chr(~.x[2]) %>% paste0(.,"/12/31") + +obs.mean <-names(point_list$median_AGB)[3:length(point_list$median_AGB)] %>% + map(function(namesl){ + ((point_list$median_AGB)[[namesl]] %>% + map(~.x %>% as.data.frame %>% `colnames<-`(c('AbvGrndWood'))) %>% + setNames(site.ids[1:length(.)]) + ) + }) %>% setNames(date.obs) + + + +obs.cov <-names(point_list$stdv_AGB)[3:length(point_list$median_AGB)] %>% + map(function(namesl) { + ((point_list$stdv_AGB)[[namesl]] %>% + map( ~ (.x) ^ 2%>% as.matrix()) %>% + setNames(site.ids[1:length(.)])) + + }) %>% setNames(date.obs) + +#---------------------------------------------------------------- +# end OBS data preparation +#--------------------------------------------------------------- +new.settings <- PEcAn.settings::prepare.settings(settings) +#jsonedit(new.settings) +#------------------------------------------ SDA ------------------------------------- +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=T, + FF=F, + interactivePlot=F, + TimeseriesPlot=T, + BiasPlot=F, + plot.title="lhc sampling - 4sites - SF50 - ALL PFTs - small sample size", + facet.plots=T, + debug=F, + pause=F) + ) + + diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL.R b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL.R new file mode 100755 index 00000000000..2a599eb0f53 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL.R @@ -0,0 +1,171 @@ 
+####################################################################################################
+#
+#
+#
+#
+# --- Last updated: 03.26.2019 By Shawn P. Serbin
+####################################################################################################
+
+
+#---------------- Close all devices and delete all variables. -------------------------------------#
+rm(list=ls(all=TRUE))   # clear workspace
+graphics.off()          # close any open graphics
+closeAllConnections()   # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+
+
+# temporary step until we get this code integrated into pecan
+library(RCurl)
+script <- getURL("https://raw.githubusercontent.com/serbinsh/pecan/download_osu_agb/modules/data.remote/R/LandTrendr.AGB.R",
+                 ssl.verifypeer = FALSE)
+eval(parse(text = script))
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## set run options, some of these should be tweaked or removed as requirements
+work_dir <- "/data/sserbin/Modeling/sipnet/NASA_CMS"
+setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions
+
+# Define observation - use existing or generate new?
+# set to a specific file, use that.
+#observation <- ""
+observation <- c("1000025731","1000000048","763","796","772","764","765","1000000024","678",
+                 "1000000146")
+
+# delete an old run
+unlink(c('run','out','SDA'), recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("XMLs/pecan_MultiSite_SDA.xml")
+
+
+# what is this step for????
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Prepare observational data - still very hacky here + +# option 1: use existing observation file +# if (observation!="new") { +# load(observation) +# site1 <- point_list +# site1$median_AGB[[1]] %>% +# filter(Site_ID!='772') -> site1$median_AGB[[1]] +# site1$stdv_AGB[[1]] %>% +# filter(Site_ID!='772') -> site1$stdv_AGB[[1]] +# } + +# option 2: run extraction code to generate observation files +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation # BETYdb site IDs +data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +#results <- PEcAn.data.remote::extract.LandTrendr.AGB(coords=site_ID, +results <- extract.LandTrendr.AGB(coords=site_ID, + data_dir = data_dir, con = con, + output_file = file.path(work_dir,"Obs"), + plot_results = FALSE) +load("Obs/LandTrendr_AGB_output.RData") + + +#for multi site both mean and cov needs to be a list like this +# +date +# +siteid +# c(state variables)/matrix(cov state variables) +# +#reorder sites in obs +point_list$median_AGB <- point_list$median_AGB[[1]] %>% filter(Site_ID %in% site.ids) +point_list$stdv_AGB <- point_list$stdv_AGB[[1]] %>% filter(Site_ID %in% site.ids) +site.order <- sapply(site.ids,function(x) which(point_list$median_AGB$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() +point_list$median_AGB <- point_list$median_AGB[site.order,] +point_list$stdv_AGB <- point_list$stdv_AGB[site.order,] + +# truning lists to dfs for both mean and cov +date.obs <- strsplit(names(point_list$median_AGB),"_")[3:length(point_list$median_AGB)] %>% + map_chr(~.x[2]) %>% paste0(.,"/12/31") + +obs.mean <- names(point_list$median_AGB)[3:length(point_list$median_AGB)] %>% + map(function(namesl){ + ((point_list$median_AGB)[[namesl]] %>% + map(~.x %>% as.data.frame %>% `colnames<-`(c('AbvGrndWood'))) %>% + setNames(site.ids[1:length(.)]) + ) + }) %>% setNames(date.obs) + +obs.cov <-names(point_list$stdv_AGB)[3:length(point_list$median_AGB)] %>% + map(function(namesl) { + ((point_list$stdv_AGB)[[namesl]] %>% + map( ~ (.x) ^ 2%>% as.matrix()) %>% + setNames(site.ids[1:length(.)])) + + }) %>% setNames(date.obs) + +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## generate new settings object +new.settings <- PEcAn.settings::prepare.settings(settings, force = FALSE) +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(new.settings, outputfile = "pecan.CHECKED.xml") +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Run 
SDA +sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=T, + FF=F, + interactivePlot=F, + TimeseriesPlot=T, + BiasPlot=F, + plot.title="Uniform sampling - 10 sites", + facet.plots=T, + debug=F, + pause=F) +) +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Wrap up +# Send email if configured +#if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) { +# sendmail(settings$email$from, settings$email$to, +# paste0("SDA workflow has finished executing at ", base::date())) +#} +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +### EOF diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL_updated.R b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL_updated.R new file mode 100755 index 00000000000..097c6ee4f9e --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/Multisite_SDA_BNL_updated.R @@ -0,0 +1,165 @@ +#################################################################################################### +# +# +# +# +# --- Last updated: 03.29.2019 By Shawn P. Serbin +#################################################################################################### + + +#---------------- Close all devices and delete all variables. -------------------------------------# +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) + + +# temporary step until we get this code integrated into pecan +library(RCurl) +script <- getURL("https://raw.githubusercontent.com/serbinsh/pecan/download_osu_agb/modules/data.remote/R/LandTrendr.AGB.R", + ssl.verifypeer = FALSE) +eval(parse(text = script)) +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## set run options, some of these should be tweaked or removed as requirements +work_dir <- "/data/sserbin/Modeling/sipnet/NASA_CMS" +setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions + +# grab multi-site XML file +settings <- read.settings("XMLs/pecan_MultiSite_SDA.xml") + +# grab observation IDs from settings file +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + +# delete an old run +unlink(c('run','out','SDA'),recursive = T) + +# what is this step for???? 
is this to get the site locations for the map?? +if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Prepare observational data - still very hacky here +PEcAn.logger::logger.info("**** Extracting LandTrendr AGB data for model sites ****") +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con +site_ID <- observation +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, +ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = site_ID, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, + lon=qry_results$lon, time_zone=qry_results$time_zone) + +data_dir <- "/data2/RS_GIS_Data/LandTrendr/LandTrendr_AGB_data" +med_agb_data <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", + data_dir, product_dates=NULL, file.path(work_dir,"Obs")) +sdev_agb_data <- extract.LandTrendr.AGB(site_info, "stdv", buffer = NULL, fun = "mean", + data_dir, product_dates=NULL, file.path(work_dir,"Obs")) + +PEcAn.logger::logger.info("**** Preparing data for SDA ****") +#for multi site both mean and cov needs to be a list like this +# +date +# +siteid +# c(state variables)/matrix(cov state variables) +# +#reorder sites in obs +med_agb_data_sda <- med_agb_data[[1]] %>% filter(Site_ID %in% site.ids) +sdev_agb_data_sda <- sdev_agb_data[[1]] %>% filter(Site_ID %in% site.ids) +site.order <- sapply(site.ids,function(x) which(med_agb_data_sda$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() +med_agb_data_sda <- med_agb_data_sda[site.order,] +sdev_agb_data_sda <- sdev_agb_data_sda[site.order,] + +# truning lists to dfs for both mean and cov +date.obs <- strsplit(names(med_agb_data_sda),"_")[3:length(med_agb_data_sda)] %>% + map_chr(~.x[2]) %>% paste0(.,"/12/31") + +obs.mean <- names(med_agb_data_sda)[3:length(med_agb_data_sda)] %>% + map(function(namesl){ + ((med_agb_data_sda)[[namesl]] %>% + map(~.x %>% as.data.frame %>% `colnames<-`(c('AbvGrndWood'))) %>% + setNames(site.ids[1:length(.)])) + }) %>% setNames(date.obs) + +obs.cov <-names(sdev_agb_data_sda)[3:length(sdev_agb_data_sda)] %>% + map(function(namesl) { + ((sdev_agb_data_sda)[[namesl]] %>% + map( ~ (.x) ^ 2%>% as.matrix()) %>% + setNames(site.ids[1:length(.)])) + }) %>% setNames(date.obs) + +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## generate new settings object +new.settings <- PEcAn.settings::prepare.settings(settings, force = FALSE) +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(new.settings, outputfile = "pecan.CHECKED.xml") 
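+# Structural sanity check (a sketch added for illustration, using the
+# obs.mean/obs.cov lists built above): sda.enkf.multisite() expects one list
+# element per date, each holding one per-site data.frame (means) or matrix
+# (covariances), with the sites in the same order in both lists.
+stopifnot(
+  length(obs.mean) == length(obs.cov),
+  identical(names(obs.mean), names(obs.cov)),
+  all(vapply(obs.mean, length, integer(1)) ==
+        vapply(obs.cov, length, integer(1)))
+)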
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+PEcAn.logger::logger.info("**** Run SDA ****")
+## Run SDA
+sda.enkf.multisite(new.settings, obs.mean = obs.mean, obs.cov = obs.cov,
+                   control=list(trace=T,
+                                FF=F,
+                                interactivePlot=F,
+                                TimeseriesPlot=T,
+                                BiasPlot=F,
+                                plot.title="Uniform sampling - 10 sites",
+                                facet.plots=T,
+                                debug=F,
+                                pause=F)
+)
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## Wrap up
+# Send email if configured
+#if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) {
+#  sendmail(settings$email$from, settings$email$to,
+#           paste0("SDA workflow has finished executing at ", base::date()))
+#}
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+### EOF
diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/nohuprun.txt b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/nohuprun.txt
new file mode 100755
index 00000000000..54c805c9203
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/nohuprun.txt
@@ -0,0 +1,23 @@
+# model run
+nohup Rscript workflow.R > workflow.log 2>&1 &
+nohup Rscript workflow.R --settings pecan_US-CZ3_CRUNCEP.xml > workflow.log 2>&1 &
+nohup Rscript workflow.R --settings XMLs/pecan_US-CZ3_CRUNCEP.xml > workflow.log 2>&1 &
+
+
+# SDA
+# interactive qsub, keep environment, 3 hours, 1 node 15 CPUs
+qsub -IV -l walltime=03:00:00,nodes=1:ppn=15
+cd /data/sserbin/Modeling/sipnet/NASA_CMS
+nohup Rscript Multisite-3sites.R > SDA_workflow.log 2>&1 &
+
+nohup Rscript Multisite-4sites.R > SDA_workflow.log 2>&1 &
+
+nohup Rscript Multisite_SDA_BNL.R > SDA_workflow.log 2>&1 &
+
+# latest
+nohup Rscript R_scripts/Multisite_SDA_BNL_updated.R > SDA_workflow.log 2>&1 &
+
+
+qstat -f
+
+qstat -ext
diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_doconversions.R b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_doconversions.R
new file mode 100755
index 00000000000..b97c5f22e84
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_doconversions.R
@@ -0,0 +1,68 @@
+#!/usr/bin/env Rscript
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved.
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +# ---------------------------------------------------------------------- +# Load required libraries +# ---------------------------------------------------------------------- +library(PEcAn.all) +library(PEcAn.utils) +library(RCurl) + +# make sure always to call status.end +options(warn=1) +options(error=quote({ + PEcAn.utils::status.end("ERROR") + PEcAn.remote::kill.tunnel(settings) + if (!interactive()) { + q(status = 1) + } +})) + +#options(warning.expression=status.end("ERROR")) + + +# ---------------------------------------------------------------------- +# PEcAn Workflow +# ---------------------------------------------------------------------- +# Open and read in settings file for PEcAn run. +settings <- PEcAn.settings::read.settings("pecan_US-CZ3_CRUNCEP.xml") + +# Check for additional modules that will require adding settings +if("benchmarking" %in% names(settings)){ + library(PEcAn.benchmark) + settings <- papply(settings, read_settings_BRR) +} + +if("sitegroup" %in% names(settings)){ + if(is.null(settings$sitegroup$nSite)){ + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, sitegroupId = settings$sitegroup$id,nSite = settings$sitegroup$nSite) + } + settings$sitegroup <- NULL ## zero out so don't expand a second time if re-reading +} + +# Update/fix/check settings. Will only run the first time it's called, unless force=TRUE +settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) + +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") + +# start from scratch if no continue is passed in +statusFile <- file.path(settings$outdir, "STATUS") +if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile)) { + file.remove(statusFile) +} + +# Do conversions +settings <- PEcAn.workflow::do_conversions(settings, overwrite.met = list(download = TRUE, met2cf = TRUE, standardize = TRUE, met2model = TRUE)) + +db.print.connections() +print("---------- PEcAn Workflow Complete ----------") diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_metprocess.R b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_metprocess.R new file mode 100755 index 00000000000..98d772912b8 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/sserbin/R_scripts_2/workflow_metprocess.R @@ -0,0 +1,76 @@ +#!/usr/bin/env Rscript +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +# ---------------------------------------------------------------------- +# Load required libraries +# ---------------------------------------------------------------------- +library(PEcAn.all) +library(PEcAn.utils) +library(RCurl) + +# make sure always to call status.end +options(warn=1) +options(error=quote({ + PEcAn.utils::status.end("ERROR") + PEcAn.remote::kill.tunnel(settings) + if (!interactive()) { + q(status = 1) + } +})) + +#options(warning.expression=status.end("ERROR")) + + +# ---------------------------------------------------------------------- +# PEcAn Workflow +# ---------------------------------------------------------------------- +settings <- PEcAn.settings::read.settings("pecan_US-CZ3_CRUNCEP.xml") + +# Check for additional modules that will require adding settings +if("benchmarking" %in% names(settings)){ + library(PEcAn.benchmark) + settings <- papply(settings, read_settings_BRR) +} + +if("sitegroup" %in% names(settings)){ + if(is.null(settings$sitegroup$nSite)){ + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, sitegroupId = settings$sitegroup$id) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, sitegroupId = settings$sitegroup$id,nSite = settings$sitegroup$nSite) + } + settings$sitegroup <- NULL ## zero out so don't expand a second time if re-reading +} + +# Update/fix/check settings. Will only run the first time it's called, unless force=TRUE +settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) + +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") + +# start from scratch if no continue is passed in +statusFile <- file.path(settings$outdir, "STATUS") +if (length(which(commandArgs() == "--continue")) == 0 && file.exists(statusFile)) { + file.remove(statusFile) +} + +# met process +PEcAn.data.atmosphere::met.process( + site = settings$run$site, + input_met = settings$run$inputs$met, + start_date = settings$run$start.date, + end_date = settings$run$end.date, + model = settings$model$type, + host = settings$host, + dbparms = settings$database$bety, + dir = settings$database$dbfiles, + browndog = settings$browndog, + spin = settings$spin, + overwrite = list(download = TRUE, met2cf = TRUE, standardize = TRUE, met2model = TRUE)) + diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/multi_site_LAI_SDA_BNL.R b/modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/multi_site_LAI_SDA_BNL.R new file mode 100755 index 00000000000..4d18e5e3760 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/multi_site_LAI_SDA_BNL.R @@ -0,0 +1,252 @@ +#################################################################################################### +# +# Single site LAI SDA +# +# +# --- Last updated: 03.27.2019 By Shawn P. Serbin +#################################################################################################### + + +#---------------- Close all devices and delete all variables. 
-------------------------------------# +rm(list=ls(all=TRUE)) # clear workspace +graphics.off() # close any open graphics +closeAllConnections() # close any open connections to files +#--------------------------------------------------------------------------------------------------# + + +#---------------- Load required libraries ---------------------------------------------------------# +library(PEcAn.all) +library(PEcAn.SIPNET) +library(PEcAn.LINKAGES) +library(PEcAn.visualization) +library(PEcAn.assim.sequential) +library(nimble) +library(lubridate) +library(PEcAn.visualization) +#PEcAn.assim.sequential:: +library(rgdal) # need to put in assim.sequential +library(ncdf4) # need to put in assim.sequential +library(purrr) +library(listviewer) +library(dplyr) +library(doParallel) + +extract_LAI <- TRUE #TRUE/FALSE +run_SDA <- TRUE #TRUE/FALSE + +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## set run options, some of these should be tweaked or removed as requirements +work_dir <- "/data/sserbin/Modeling/sipnet/NASA_CMS_AGB_LAI" +setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions + +# grab multi-site XML file +settings <- read.settings("XMLs/pecan_MultiSite_LAI_SDA.xml") + +# grab observation IDs from settings file +observation <- c() +for (i in seq_along(1:length(settings$run))) { + command <- paste0("settings$run$settings.",i,"$site$id") + obs <- eval(parse(text=command)) + observation <- c(observation,obs) +} + + +# delete an old run +unlink(c('run','out','SDA'),recursive = T) + +# what is this step for???? is this to get the site locations for the map?? +if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Prepare observational data - still very hacky here + +# where to put MODIS LAI data? 
+data_dir <- "/data/sserbin/Modeling/sipnet/NASA_CMS_AGB_LAI/modis_lai_data" +parameters <- settings$run + +# get MODIS data +bety <- list(user=settings$database$bety$user, password=settings$database$bety$password, + host=settings$database$bety$host, + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con + +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, + ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", + ids = observation, .con = con)) +suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) +suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) +site_IDs <- qry_results$id +site_names <- qry_results$sitename +site_coords <- data.frame(cbind(qry_results$lon, qry_results$lat)) +names(site_coords) <- c("Longitude","Latitude") + +#extract lai using call_MODIS for the lat/lon per site and dates +if (extract_LAI) { + modis_data <- data.frame() + cl <- parallel::makeCluster(5, outfile="") + registerDoParallel(cl) + modis_data <- foreach(i=1:nrow(site_coords)) %dopar% PEcAn.data.remote::call_MODIS(product = "MOD15A2H", + band = "Lai_500m", start_date = "2001001", + end_date = "2010365", lat = site_coords$Latitude[i], + lon = site_coords$Longitude[i], + size = 0, band_qc = "FparLai_QC", + band_sd = "LaiStdDev_500m", + package_method = "MODISTools") + + stopCluster(cl) + modis_data <- do.call(rbind.data.frame, modis_data) + + # modis_data <- data.frame() + # for (i in 1:length(observation)) { + # print(paste("extracting site: ", observation[i], sep = "")) + # data <- PEcAn.data.remote::call_MODIS(lat = site_coords[i,2], lon = site_coords[i,1], + # start_date = "2001001", end_date = "2010365", + # size = 0, product = "MOD15A2H", band = "Lai_500m", + # band_qc = "", band_sd = "LaiStdDev_500m", package_method = "MODISTools") + # modis_data <- rbind(modis_data, data) + # } + # output resuls of call_MODIS + save(modis_data, file = file.path(data_dir,'modis_lai_output.RData')) +} else { + load(file = file.path(data_dir,'modis_lai_output.RData')) +} + +# find peaks +peak_lai <- data.frame() +years <- unique(year(as.Date(modis_data$calendar_date, "%Y-%m-%d"))) +#site_ll <- data.frame(cbind(lon=unique(modis_data$lon),lat=unique(modis_data$lat))) +site_ll <- data.frame(cbind(lat=unique(modis_data$lat),lon=unique(modis_data$lon))) +for (i in 1:length(years)) { + year <- years[i] + g <- grep(modis_data$calendar_date, pattern = year) + d <- modis_data[g,] + for (j in 1:length(site_IDs)) { + pixel <- filter(d, lat == site_ll[j,1] & lon == site_ll[j,2]) + + # using peak + peak <- pixel[which(pixel$data == max(pixel$data, na.rm = T)),][1,] + + # using mean + #mn_data <- mean(pixel$data, na.rm = T) + #mn_sd <- mean(pixel$sd, na.rm = T) + #peak <- pixel[1,] + #peak$data <- mn_data + #peak$sd <- mn_sd + + + peak$calendar_date = paste("Year", year, sep = "_") + peak$tile <- site_names[j] + #peak$tile <- site_IDs[j] + peak_lai <- rbind(peak_lai, peak) + } +} + +# sort the data by site so the correct values are placed into the resized data frames below. 
+peak_lai <- peak_lai[order(peak_lai$tile), ]
+
+# separate data into hotdog style dataframes with row == site and columns = info/data for each site
+median_lai <- cbind(site_IDs, site_names, as.data.frame(matrix(unlist(t(peak_lai$data)), byrow = T, length(site_IDs), length(years))))
+colnames(median_lai) <- c("Site_ID", "Site_Name", unique(peak_lai$calendar_date))
+
+stdv_lai <- cbind(site_IDs, site_names, as.data.frame(matrix(unlist(t(peak_lai$sd)), byrow = T, length(site_IDs), length(years))))
+colnames(stdv_lai) <- c("Site_ID", "Site_Name", unique(peak_lai$calendar_date))
+
+# convert to list
+point_list <- list()
+point_list$median_lai <- list(median_lai)
+point_list$stdv_lai <- list(stdv_lai)
+point_list
+
+point_list$median_lai <- point_list$median_lai[[1]]
+point_list$stdv_lai <- point_list$stdv_lai[[1]]
+
+#point_list$median_lai <- point_list$median_lai[[1]] %>% filter(Site_ID %in% site.ids)
+#point_list$stdv_lai <- point_list$stdv_lai[[1]] %>% filter(Site_ID %in% site.ids)
+#site.order <- sapply(site.ids,function(x) which(point_list$median_lai$Site_ID %in% x)) %>%
+#  as.numeric() %>% na.omit()
+#point_list$median_lai <- point_list$median_lai[site.order,]
+#point_list$stdv_lai <- point_list$stdv_lai[site.order,]
+#point_list
+
+site.order <- sapply(site.ids, function(x) which(point_list$median_lai$Site_ID %in% x)) %>%
+  as.numeric() %>% na.omit()
+point_list$median_lai <- point_list$median_lai[site.order,]
+point_list$stdv_lai <- point_list$stdv_lai[site.order,]
+
+# turning lists to dfs for both mean and cov
+date.obs <- strsplit(names(point_list$median_lai),"_")[3:length(point_list$median_lai)] %>% map_chr(~.x[2]) %>% paste0(.,"/07/15")
+
+obs.mean <- names(point_list$median_lai)[3:length(point_list$median_lai)] %>%
+  map(function(namesl){
+    ((point_list$median_lai)[[namesl]] %>%
+       map(~.x %>% as.data.frame %>% `colnames<-`(c('LAI'))) %>%
+       setNames(site.ids[1:length(.)])
+    )
+  }) %>% setNames(date.obs)
+
+obs.cov <- names(point_list$stdv_lai)[3:length(point_list$median_lai)] %>%
+  map(function(namesl) {
+    ((point_list$stdv_lai)[[namesl]] %>%
+       map(~ (.x) ^ 2 %>% as.matrix()) %>%
+       setNames(site.ids[1:length(.)]))
+  }) %>% setNames(date.obs)
+
+# check input data - after creating list of lists
+PEcAn.assim.sequential::Construct.R(site.ids, "LAI", obs.mean[[1]], obs.cov[[1]])
+PEcAn.assim.sequential::Construct.R(site.ids, "LAI", obs.mean[[10]], obs.cov[[10]])
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## generate new settings object
+new.settings <- PEcAn.settings::prepare.settings(settings)
+# Write pecan.CHECKED.xml
+PEcAn.settings::write.settings(new.settings, outputfile = "pecan.CHECKED.xml")
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## Run SDA
+if (run_SDA) {
+  sda.enkf.multisite(new.settings, obs.mean = obs.mean, obs.cov = obs.cov,
+                     control=list(trace=T,
+                                  FF=F,
+                                  interactivePlot=T,
+                                  TimeseriesPlot=T,
+                                  BiasPlot=T,
+                                  plot.title="LAI SDA, uniform sampling",
+                                  facet.plots=T,
+                                  debug=T,
+                                  pause=F))
+} else {
+  print("*** Not running SDA ***")
+}
+
+#--------------------------------------------------------------------------------------------------#
+
+
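+# Optional diagnostic (editor's sketch, not in the original workflow): tabulate
+# the peak-LAI means that will be assimilated, one row per site and date, from
+# the obs.mean list built above.
+lai_summary <- do.call(rbind, lapply(names(obs.mean), function(d) {
+  data.frame(date = d,
+             site = names(obs.mean[[d]]),
+             LAI  = vapply(obs.mean[[d]], function(x) x$LAI, numeric(1)))
+}))
+print(head(lai_summary))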
+#--------------------------------------------------------------------------------------------------#
+## Wrap up
+# Send email if configured
+#if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) {
+#  sendmail(settings$email$from, settings$email$to,
+#           paste0("SDA workflow has finished executing at ", base::date()))
+#}
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+### EOF
diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/single_site_SDA_BNL.R b/modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/single_site_SDA_BNL.R
new file mode 100755
index 00000000000..1195e2b4ec4
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/sserbin/Rscripts/single_site_SDA_BNL.R
@@ -0,0 +1,224 @@
+####################################################################################################
+#
+# Single site LAI SDA
+#
+#
+# --- Last updated: 03.22.2019 By Shawn P. Serbin
+####################################################################################################
+
+
+#---------------- Close all devices and delete all variables. -------------------------------------#
+rm(list=ls(all=TRUE))   # clear workspace
+graphics.off()          # close any open graphics
+closeAllConnections()   # close any open connections to files
+#--------------------------------------------------------------------------------------------------#
+
+
+#---------------- Load required libraries ---------------------------------------------------------#
+library(PEcAn.all)
+library(PEcAn.SIPNET)
+library(PEcAn.LINKAGES)
+library(PEcAn.visualization)
+library(PEcAn.assim.sequential)
+library(nimble)
+library(lubridate)
+library(PEcAn.visualization)
+#PEcAn.assim.sequential::
+library(rgdal) # need to put in assim.sequential
+library(ncdf4) # need to put in assim.sequential
+library(purrr)
+library(listviewer)
+library(dplyr)
+
+run_SDA <- TRUE #TRUE/FALSE
+#--------------------------------------------------------------------------------------------------#
+
+
+#--------------------------------------------------------------------------------------------------#
+## set run options, some of these should be tweaked or removed as requirements
+work_dir <- "/data/sserbin/Modeling/sipnet/NASA_CMS_AGB_LAI"
+setwd(work_dir) # best not to require setting wd and instead just providing full paths in functions
+
+# Define observation - use existing or generate new?
+# set to a specific file, use that.
+observation <- c("1000000048")
+
+# delete an old run
+unlink(c('run','out','SDA'), recursive = T)
+
+# grab multi-site XML file
+settings <- read.settings("XMLs/pecan_US-CZ3_LAI_SDA.xml")
+
+
+# what is this step for????
+if ("MultiSettings" %in% class(settings)) site.ids <- settings %>% + map(~.x[['run']] ) %>% map('site') %>% map('id') %>% unlist() %>% as.character() + +# sample from parameters used for both sensitivity analysis and Ens +get.parameter.samples(settings, + ens.sample.method = settings$ensemble$samplingspace$parameters$method) +## Aside: if method were set to unscented, would take minimal changes to do UnKF +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Prepare observational data - still very hacky here + +# option 1: use existing observation file +# if (observation!="new") { +# load(observation) +# site1 <- point_list +# site1$median_AGB[[1]] %>% +# filter(Site_ID!='772') -> site1$median_AGB[[1]] +# site1$stdv_AGB[[1]] %>% +# filter(Site_ID!='772') -> site1$stdv_AGB[[1]] +# } + +# where to put MODIS LAI data? +data_dir <- "/data/sserbin/Modeling/sipnet/NASA_CMS_AGB_LAI/modis_lai_data" +parameters <- settings$run + +# get MODIS data +#modis <- PEcAn.data.remote::call_MODIS(lat = as.numeric(parameters$site$lat), lon = as.numeric(parameters$site$lon), +# start_date = parameters$start.date, end_date = parameters$end.date, +# siteID = parameters$site$id, size = 0, product = "MOD15A2H", band = "Lai_500m", +# band_qc = "", band_sd = "LaiStdDev_500m", package_method = "MODISTools") +#modis <- PEcAn.data.remote::call_MODIS(lat = as.numeric(parameters$site$lat), lon = as.numeric(parameters$site$lon), +# start_date = "2001/01/01", end_date = "2002/01/01", +# size = 0, product = "MOD15A2H", band = "Lai_500m", +# band_qc = "", band_sd = "LaiStdDev_500m", package_method = "MODISTools") + +if (!file.exists(file.path(data_dir,'modis_lai_output.RData'))) { + modis <- call_MODIS(product = "MOD15A2H", band = "Lai_500m", start_date = "2001001", end_date = "2010365", + lat = as.numeric(parameters$site$lat), lon = as.numeric(parameters$site$lon), size = 0, + band_qc = "FparLai_QC", band_sd = "LaiStdDev_500m", + package_method = "MODISTools") + save(modis, file = file.path(data_dir,'modis_lai_output.RData')) + +} else { + load(file = file.path(data_dir,'modis_lai_output.RData')) +} + +# +bety <- list(user='bety', password='bety', host='localhost', + dbname='bety', driver='PostgreSQL',write=TRUE) +con <- PEcAn.DB::db.open(bety) +bety$con <- con + +suppressWarnings(Site_Info <- PEcAn.DB::query.site(observation, con)) +Site_Info +Site_ID <- Site_Info$id +Site_Name <- Site_Info$sitename + +#plot(lubridate::as_date(modis$calendar_date), modis$data, type="l") + +peak_lai <- vector() +years <- unique(year(as.Date(modis$calendar_date, "%Y-%m-%d"))) +for (i in seq_along(years)) { + year <- years[i] + g <- grep(modis$calendar_date, pattern = year) + d <- modis[g,] + max <- which(d$data == max(d$data, na.rm = T)) + peak <- d[max,][1,] + peak$calendar_date = paste("Year", year, sep = "_") + peak_lai <- rbind(peak_lai, peak) +} + +# transpose the data +median_lai = as.data.frame(cbind(Site_ID, Site_Name, t(cbind(peak_lai$data))), stringsAsFactors = F) +colnames(median_lai) = c("Site_ID", "Site_Name", peak_lai$calendar_date) +median_lai[3:length(median_lai)] = as.numeric(median_lai[3:length(median_lai)]) + +stdv_lai = as.data.frame(cbind(Site_ID, Site_Name, t(cbind(peak_lai$sd))), stringsAsFactors = F) +colnames(stdv_lai) = c("Site_ID", "Site_Name", peak_lai$calendar_date) +stdv_lai[3:length(stdv_lai)] = as.numeric(stdv_lai[3:length(stdv_lai)]) + +point_list = 
list() +point_list$median_lai = median_lai +point_list$stdv_lai = stdv_lai + +## needed for landtrendr for nested lists. Lai isn't as nested +#point_list$median_lai <- point_list$median_lai[[1]] %>% filter(Site_ID %in% site.ids) +#point_list$stdv_lai <- point_list$stdv_lai[[1]] %>% filter(Site_ID %in% site.ids) +site.order <- sapply(site.ids,function(x) which(point_list$median_lai$Site_ID %in% x)) %>% + as.numeric() %>% na.omit() +point_list$median_lai <- point_list$median_lai[site.order,] +point_list$stdv_lai <- point_list$stdv_lai[site.order,] + +# truning lists to dfs for both mean and cov +date.obs <- strsplit(names(point_list$median_lai),"_")[3:length(point_list$median_lai)] %>% map_chr(~.x[2]) %>% paste0(.,"/07/15") + +obs.mean <- names(point_list$median_lai)[3:length(point_list$median_lai)] %>% + map(function(namesl){ + ((point_list$median_lai)[[namesl]] %>% + map(~.x %>% as.data.frame %>% `colnames<-`(c('LAI'))) %>% + setNames(site.ids[1:length(.)]) + ) + }) %>% setNames(date.obs) + +obs.cov <-names(point_list$stdv_lai)[3:length(point_list$median_lai)] %>% + map(function(namesl) { + ((point_list$stdv_lai)[[namesl]] %>% + map( ~ (.x) ^ 2 %>% as.matrix()) %>% + setNames(site.ids[1:length(.)])) + + }) %>% setNames(date.obs) + +# check input data - after creating list of lists +PEcAn.assim.sequential::Construct.R(site.ids, "LAI", obs.mean[[1]], obs.cov[[1]]) +PEcAn.assim.sequential::Construct.R(site.ids, "LAI", obs.mean[[10]], obs.cov[[10]]) +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## generate new settings object +new.settings <- PEcAn.settings::prepare.settings(settings) +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Run SDA +if (run_SDA) { + #sda.enkf.multisite(new.settings, obs.mean =obs.mean ,obs.cov = obs.cov, + + sda.enkf.multisite(settings, obs.mean =obs.mean ,obs.cov = obs.cov, + control=list(trace=T, + FF=F, + interactivePlot=F, + TimeseriesPlot=T, + BiasPlot=F, + plot.title="LAI SDA, 1 site", + facet.plots=T, + debug=T, + pause=F)) + + # sda.enkf(settings, obs.mean = obs.mean ,obs.cov = obs.cov, + # control=list(trace=T, + # FF=F, + # interactivePlot=F, + # TimeseriesPlot=T, + # BiasPlot=F, + # plot.title="LAI SDA, 1 site", + # facet.plots=T, + # debug=T, + # pause=F)) + +} else { + print("*** Not running SDA ***") +} + +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +## Wrap up +# Send email if configured +#if (!is.null(settings$email) && !is.null(settings$email$to) && (settings$email$to != "")) { +# sendmail(settings$email$from, settings$email$to, +# paste0("SDA workflow has finished executing at ", base::date())) +#} +#--------------------------------------------------------------------------------------------------# + + +#--------------------------------------------------------------------------------------------------# +### EOF diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/nohuprun.txt b/modules/assim.sequential/inst/sda_backup/sserbin/nohuprun.txt new file mode 100755 index 00000000000..c759693b0b2 --- /dev/null +++ 
b/modules/assim.sequential/inst/sda_backup/sserbin/nohuprun.txt
@@ -0,0 +1,25 @@
+# model run
+module load netcdf/4.4.1.1-gnu540 hdf5/1.8.19-gcc540 redland/1.0.17 openmpi/2.1.1-gnu540
+nohup Rscript workflow.R > workflow.log 2>&1 &
+nohup Rscript workflow.R --settings pecan_US-CZ3_CRUNCEP.xml > workflow.log 2>&1 &
+nohup Rscript workflow.R --settings XMLs/pecan_US-CZ3_CRUNCEP.xml > workflow.log 2>&1 &
+
+
+# SDA
+# interactive qsub, keep environment, 3 hours, 1 node 15 CPUs
+qsub -IV -l walltime=03:00:00,nodes=1:ppn=15
+cd /data/sserbin/Modeling/sipnet/NASA_CMS
+nohup Rscript Multisite-3sites.R > SDA_workflow.log 2>&1 &
+
+nohup Rscript Multisite-4sites.R > SDA_workflow.log 2>&1 &
+
+nohup Rscript Multisite_SDA_BNL.R > SDA_workflow.log 2>&1 &
+
+# latest
+module load netcdf/4.4.1.1-gnu540 hdf5/1.8.19-gcc540 redland/1.0.17 openmpi/2.1.1-gnu540
+nohup Rscript R_Scripts/multi_site_LAI_SDA_BNL.R > SDA_workflow.log 2>&1 &
+
+
+qstat -f
+
+qstat -ext
diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/workflow 2.R b/modules/assim.sequential/inst/sda_backup/sserbin/workflow 2.R
new file mode 100755
index 00000000000..317aef7faa7
--- /dev/null
+++ b/modules/assim.sequential/inst/sda_backup/sserbin/workflow 2.R
@@ -0,0 +1,215 @@
+#!/usr/bin/env Rscript
+#-------------------------------------------------------------------------------
+# Copyright (c) 2012 University of Illinois, NCSA.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the
+# University of Illinois/NCSA Open Source License
+# which accompanies this distribution, and is available at
+# http://opensource.ncsa.illinois.edu/license.html
+#-------------------------------------------------------------------------------
+
+# ----------------------------------------------------------------------
+# Load required libraries
+# ----------------------------------------------------------------------
+library("PEcAn.all")
+library("RCurl")
+
+
+# --------------------------------------------------
+# get command-line arguments
+args <- get_args()
+
+# make sure always to call status.end
+options(warn = 1)
+options(error = quote({
+  try(PEcAn.utils::status.end("ERROR"))
+  try(PEcAn.remote::kill.tunnel(settings))
+  if (!interactive()) {
+    q(status = 1)
+  }
+}))
+
+# ----------------------------------------------------------------------
+# PEcAn Workflow
+# ----------------------------------------------------------------------
+# Open and read in settings file for PEcAn run.
+settings <- PEcAn.settings::read.settings(args$settings)
+
+# Check for additional modules that will require adding settings
+if ("benchmarking" %in% names(settings)) {
+  library(PEcAn.benchmark)
+  settings <- papply(settings, read_settings_BRR)
+}
+
+if ("sitegroup" %in% names(settings)) {
+  if (is.null(settings$sitegroup$nSite)) {
+    settings <- PEcAn.settings::createSitegroupMultiSettings(settings,
+      sitegroupId = settings$sitegroup$id
+    )
+  } else {
+    settings <- PEcAn.settings::createSitegroupMultiSettings(
+      settings,
+      sitegroupId = settings$sitegroup$id,
+      nSite = settings$sitegroup$nSite
+    )
+  }
+  # zero out so don't expand a second time if re-reading
+  settings$sitegroup <- NULL
+}
+
+# Update/fix/check settings.
+# Will only run the first time it's called, unless force=TRUE +settings <- + PEcAn.settings::prepare.settings(settings, force = FALSE) + +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") + +# start from scratch if no continue is passed in +status_file <- file.path(settings$outdir, "STATUS") +if (args$continue && file.exists(status_file)) { + file.remove(status_file) +} + +# Do conversions +settings <- PEcAn.workflow::do_conversions(settings) + +# Query the trait database for data and priors +if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings(settings, + outputfile = "pecan.TRAIT.xml" + ) + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.TRAIT.xml")) +} + + +# Run the PEcAn meta.analysis +if (!is.null(settings$meta.analysis)) { + if (PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } +} + +# Write model specific configs +if (PEcAn.utils::status.check("CONFIG") == 0) { + PEcAn.utils::status.start("CONFIG") + settings <- + PEcAn.workflow::runModule.run.write.configs(settings) + PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.CONFIGS.xml")) +} + +if ((length(which(commandArgs() == "--advanced")) != 0) +&& (PEcAn.utils::status.check("ADVANCED") == 0)) { + PEcAn.utils::status.start("ADVANCED") + q() +} + +# Start ecosystem model runs +if (PEcAn.utils::status.check("MODEL") == 0) { + PEcAn.utils::status.start("MODEL") + stop_on_error <- as.logical(settings[[c("run", "stop_on_error")]]) + if (length(stop_on_error) == 0) { + # If we're doing an ensemble run, don't stop. If only a single run, we + # should be stopping. + if (is.null(settings[["ensemble"]]) || + as.numeric(settings[[c("ensemble", "size")]]) == 1) { + stop_on_error <- TRUE + } else { + stop_on_error <- FALSE + } + } + PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = stop_on_error) + PEcAn.utils::status.end() +} + +# Get results of model runs +if (PEcAn.utils::status.check("OUTPUT") == 0) { + PEcAn.utils::status.start("OUTPUT") + runModule.get.results(settings) + PEcAn.utils::status.end() +} + +# Run ensemble analysis on model output. 
+if ("ensemble" %in% names(settings) +&& PEcAn.utils::status.check("ENSEMBLE") == 0) { + PEcAn.utils::status.start("ENSEMBLE") + runModule.run.ensemble.analysis(settings, TRUE) + PEcAn.utils::status.end() +} + +# Run sensitivity analysis and variance decomposition on model output +if ("sensitivity.analysis" %in% names(settings) +&& PEcAn.utils::status.check("SENSITIVITY") == 0) { + PEcAn.utils::status.start("SENSITIVITY") + runModule.run.sensitivity.analysis(settings) + PEcAn.utils::status.end() +} + +# Run parameter data assimilation +if ("assim.batch" %in% names(settings)) { + if (PEcAn.utils::status.check("PDA") == 0) { + PEcAn.utils::status.start("PDA") + settings <- + PEcAn.assim.batch::runModule.assim.batch(settings) + PEcAn.utils::status.end() + } +} + +# Run state data assimilation +if ("state.data.assimilation" %in% names(settings)) { + if (PEcAn.utils::status.check("SDA") == 0) { + PEcAn.utils::status.start("SDA") + settings <- sda.enfk(settings) + PEcAn.utils::status.end() + } +} + +# Run benchmarking +if ("benchmarking" %in% names(settings) +&& "benchmark" %in% names(settings$benchmarking)) { + PEcAn.utils::status.start("BENCHMARKING") + results <- + papply(settings, function(x) { + calc_benchmark(x, bety) + }) + PEcAn.utils::status.end() +} + +# Pecan workflow complete +if (PEcAn.utils::status.check("FINISHED") == 0) { + PEcAn.utils::status.start("FINISHED") + PEcAn.remote::kill.tunnel(settings) + db.query( + paste( + "UPDATE workflows SET finished_at=NOW() WHERE id=", + settings$workflow$id, + "AND finished_at IS NULL" + ), + params = settings$database$bety + ) + + # Send email if configured + if (!is.null(settings$email) + && !is.null(settings$email$to) + && (settings$email$to != "")) { + sendmail( + settings$email$from, + settings$email$to, + paste0("Workflow has finished executing at ", base::date()), + paste0("You can find the results on ", settings$email$url) + ) + } + PEcAn.utils::status.end() +} + +db.print.connections() +print("---------- PEcAn Workflow Complete ----------") diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/workflow.R b/modules/assim.sequential/inst/sda_backup/sserbin/workflow.R new file mode 100755 index 00000000000..317aef7faa7 --- /dev/null +++ b/modules/assim.sequential/inst/sda_backup/sserbin/workflow.R @@ -0,0 +1,215 @@ +#!/usr/bin/env Rscript +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +# ---------------------------------------------------------------------- +# Load required libraries +# ---------------------------------------------------------------------- +library("PEcAn.all") +library("RCurl") + + +# -------------------------------------------------- +# get command-line arguments +args <- get_args() + +# make sure always to call status.end +options(warn = 1) +options(error = quote({ + try(PEcAn.utils::status.end("ERROR")) + try(PEcAn.remote::kill.tunnel(settings)) + if (!interactive()) { + q(status = 1) + } +})) + +# ---------------------------------------------------------------------- +# PEcAn Workflow +# ---------------------------------------------------------------------- +# Open and read in settings file for PEcAn run. +settings <- PEcAn.settings::read.settings(args$settings) + +# Check for additional modules that will require adding settings +if ("benchmarking" %in% names(settings)) { + library(PEcAn.benchmark) + settings <- papply(settings, read_settings_BRR) +} + +if ("sitegroup" %in% names(settings)) { + if (is.null(settings$sitegroup$nSite)) { + settings <- PEcAn.settings::createSitegroupMultiSettings(settings, + sitegroupId = settings$sitegroup$id + ) + } else { + settings <- PEcAn.settings::createSitegroupMultiSettings( + settings, + sitegroupId = settings$sitegroup$id, + nSite = settings$sitegroup$nSite + ) + } + # zero out so don't expand a second time if re-reading + settings$sitegroup <- NULL +} + +# Update/fix/check settings. 
+# Will only run the first time it's called, unless force=TRUE +settings <- + PEcAn.settings::prepare.settings(settings, force = FALSE) + +# Write pecan.CHECKED.xml +PEcAn.settings::write.settings(settings, outputfile = "pecan.CHECKED.xml") + +# start from scratch if no continue is passed in +status_file <- file.path(settings$outdir, "STATUS") +if (args$continue && file.exists(status_file)) { + file.remove(status_file) +} + +# Do conversions +settings <- PEcAn.workflow::do_conversions(settings) + +# Query the trait database for data and priors +if (PEcAn.utils::status.check("TRAIT") == 0) { + PEcAn.utils::status.start("TRAIT") + settings <- PEcAn.workflow::runModule.get.trait.data(settings) + PEcAn.settings::write.settings(settings, + outputfile = "pecan.TRAIT.xml" + ) + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.TRAIT.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.TRAIT.xml")) +} + + +# Run the PEcAn meta.analysis +if (!is.null(settings$meta.analysis)) { + if (PEcAn.utils::status.check("META") == 0) { + PEcAn.utils::status.start("META") + PEcAn.MA::runModule.run.meta.analysis(settings) + PEcAn.utils::status.end() + } +} + +# Write model specific configs +if (PEcAn.utils::status.check("CONFIG") == 0) { + PEcAn.utils::status.start("CONFIG") + settings <- + PEcAn.workflow::runModule.run.write.configs(settings) + PEcAn.settings::write.settings(settings, outputfile = "pecan.CONFIGS.xml") + PEcAn.utils::status.end() +} else if (file.exists(file.path(settings$outdir, "pecan.CONFIGS.xml"))) { + settings <- PEcAn.settings::read.settings(file.path(settings$outdir, "pecan.CONFIGS.xml")) +} + +if ((length(which(commandArgs() == "--advanced")) != 0) +&& (PEcAn.utils::status.check("ADVANCED") == 0)) { + PEcAn.utils::status.start("ADVANCED") + q() +} + +# Start ecosystem model runs +if (PEcAn.utils::status.check("MODEL") == 0) { + PEcAn.utils::status.start("MODEL") + stop_on_error <- as.logical(settings[[c("run", "stop_on_error")]]) + if (length(stop_on_error) == 0) { + # If we're doing an ensemble run, don't stop. If only a single run, we + # should be stopping. + if (is.null(settings[["ensemble"]]) || + as.numeric(settings[[c("ensemble", "size")]]) == 1) { + stop_on_error <- TRUE + } else { + stop_on_error <- FALSE + } + } + PEcAn.remote::runModule.start.model.runs(settings, stop.on.error = stop_on_error) + PEcAn.utils::status.end() +} + +# Get results of model runs +if (PEcAn.utils::status.check("OUTPUT") == 0) { + PEcAn.utils::status.start("OUTPUT") + runModule.get.results(settings) + PEcAn.utils::status.end() +} + +# Run ensemble analysis on model output. 
+if ("ensemble" %in% names(settings)
+&& PEcAn.utils::status.check("ENSEMBLE") == 0) {
+  PEcAn.utils::status.start("ENSEMBLE")
+  runModule.run.ensemble.analysis(settings, TRUE)
+  PEcAn.utils::status.end()
+}
+
+# Run sensitivity analysis and variance decomposition on model output
+if ("sensitivity.analysis" %in% names(settings)
+&& PEcAn.utils::status.check("SENSITIVITY") == 0) {
+  PEcAn.utils::status.start("SENSITIVITY")
+  runModule.run.sensitivity.analysis(settings)
+  PEcAn.utils::status.end()
+}
+
+# Run parameter data assimilation
+if ("assim.batch" %in% names(settings)) {
+  if (PEcAn.utils::status.check("PDA") == 0) {
+    PEcAn.utils::status.start("PDA")
+    settings <-
+      PEcAn.assim.batch::runModule.assim.batch(settings)
+    PEcAn.utils::status.end()
+  }
+}
+
+# Run state data assimilation
+if ("state.data.assimilation" %in% names(settings)) {
+  if (PEcAn.utils::status.check("SDA") == 0) {
+    PEcAn.utils::status.start("SDA")
+    settings <- sda.enkf(settings)
+    PEcAn.utils::status.end()
+  }
+}
+
+# Run benchmarking
+if ("benchmarking" %in% names(settings)
+&& "benchmark" %in% names(settings$benchmarking)) {
+  PEcAn.utils::status.start("BENCHMARKING")
+  results <-
+    papply(settings, function(x) {
+      calc_benchmark(x, bety)
+    })
+  PEcAn.utils::status.end()
+}
+
+# PEcAn workflow complete
+if (PEcAn.utils::status.check("FINISHED") == 0) {
+  PEcAn.utils::status.start("FINISHED")
+  PEcAn.remote::kill.tunnel(settings)
+  db.query(
+    paste(
+      "UPDATE workflows SET finished_at=NOW() WHERE id=",
+      settings$workflow$id,
+      "AND finished_at IS NULL"
+    ),
+    params = settings$database$bety
+  )
+
+  # Send email if configured
+  if (!is.null(settings$email)
+  && !is.null(settings$email$to)
+  && (settings$email$to != "")) {
+    sendmail(
+      settings$email$from,
+      settings$email$to,
+      paste0("Workflow has finished executing at ", base::date()),
+      paste0("You can find the results on ", settings$email$url)
+    )
+  }
+  PEcAn.utils::status.end()
+}
+
+db.print.connections()
+print("---------- PEcAn Workflow Complete ----------")

From 99863acd8fb8a1bcb9c8a6919a9f25c73a5525d5 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sat, 11 Sep 2021 22:56:23 +0200
Subject: [PATCH 2249/2289] make qaqc vignettes compile (and delete empty ones)

---
 base/qaqc/DESCRIPTION | 8 ++++++++
 .../vignettes/Pre-release-database-cleanup.Rmd | 17 ++++++++++-------
 base/qaqc/vignettes/compare_ED2.Rmd | 16 ----------------
 base/qaqc/vignettes/function_relationships.Rmd | 11 ++++++++---
 base/qaqc/vignettes/lebauer2013ffb.Rmd | 15 ---------------
 base/qaqc/vignettes/module_output.Rmd | 14 ++++++++++----
 modules/assim.batch/DESCRIPTION | 1 +
 7 files changed, 37 insertions(+), 45 deletions(-)
 delete mode 100644 base/qaqc/vignettes/compare_ED2.Rmd
 delete mode 100644 base/qaqc/vignettes/lebauer2013ffb.Rmd

diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index 5abc89f4cf3..4c40fae33e0 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -13,10 +13,18 @@ Depends:
 Imports:
   PEcAn.logger
 Suggests:
+  knitr,
+  mvbutils,
+  PEcAn.BIOCRO,
+  PEcAn.ED2,
+  PEcAn.SIPNET,
+  PEcAn.utils,
+  rmarkdown,
   testthat
 License: BSD_3_clause + file LICENSE
 Copyright: Authors
 LazyLoad: yes
 LazyData: FALSE
 Encoding: UTF-8
+VignetteBuilder: knitr
 RoxygenNote: 7.0.2
diff --git a/base/qaqc/vignettes/Pre-release-database-cleanup.Rmd b/base/qaqc/vignettes/Pre-release-database-cleanup.Rmd
index ffbe5e31f60..db9ecf221cd 100644
--- a/base/qaqc/vignettes/Pre-release-database-cleanup.Rmd
+++ b/base/qaqc/vignettes/Pre-release-database-cleanup.Rmd
@@ -1,13 +1,16 @@
 ---
 title: "Pre Release database Cleanup"
-output: html_notebook
+output: html_vignette
+vignette: >
+  %\VignetteIndexEntry{Pre Release database Cleanup}
+  %\VignetteEngine{knitr::rmarkdown}
 ---
 
 This is a quick script for cleaning up the database. For further documentation see the README.rmd file in the main `qaqc` folder.
 
 **Step 1: set up an outdir and a connection to bety**
 The outdir is where temporary editing files will be written, and where a backup of bety will be stored.
-```{r}
+```{r, eval=FALSE}
 con <- RPostgreSQL::dbConnect(RPostgreSQL::PostgreSQL(),
   dbname = "bety",
   password = 'bety',
@@ -21,7 +24,7 @@ options(scipen=999) #To view full id values
 
 **Step 2: Back up bety**
 Before any major deletion processes, it makes sense to make a back-up version of the database. Don't skip this step.
-```{r}
+```{r, eval=FALSE}
 system('TODAY=$( date +"%d" )')
 backup_dir<-paste('pg_dump -U bety -d bety | gzip -9 > ', bety_backup_directory,'/bety-pre-culling${TODAY}.sql.gz', sep="")
 system(backup_dir)
 ```
 
 **Step 3: find all of the entries that should be deleted**
-```{r}
+```{r, eval=FALSE}
 formats<-find_formats_without_inputs(con=con, created_before = "2016-01-01", user_id = NULL, updated_after = "2016-01-01") #Ideally, date should be set to the date of the last release
 inputs<-find_inputs_without_formats(con=con, created_before = "2014-01-01",updated_after = NULL)
 ```
 
 Since a dump of every column can be hard to read, choose just the columns that are important.
-```{r}
+```{r,eval=FALSE}
 column_names<-get_table_column_names(table = formats, con = con)
 column_names$formats
 
 column_names
 ```
 
 **Option 1: Edit an R object**
 This is the most important step! Navigate to the written out table and *delete entries that should remain in the database*.
-```{r}
+```{r, eval=FALSE}
 formats<-formats[colnames(formats) %in% column_names] #subset for easy viewing
 View(formats)
 
 
 
 View(formats)
 
 This is also the most important step! Navigate to the written out table and *delete entries that should remain in the database*. If the tables are difficult to read, change what columns are retained by editing the "relevant_table_columns" parameter.
 
-```{r}
+```{r, eval=FALSE}
 write_out_table(table = formats, outdir = outdir, relevant_table_columns = column_names, table_name = "formats")
 write_out_table(table = inputs,outdir = outdir, relevant_table_columns =c("id", "created_at", "name"), table_name = "inputs")
 ```
diff --git a/base/qaqc/vignettes/compare_ED2.Rmd b/base/qaqc/vignettes/compare_ED2.Rmd
deleted file mode 100644
index a2b273c490c..00000000000
--- a/base/qaqc/vignettes/compare_ED2.Rmd
+++ /dev/null
@@ -1,16 +0,0 @@
-Title
-============
-
-looking at how read.output works
-------
-
-```{r, echo=FALSE, message=FALSE, eval = FALSE}
-library(ncdf4)
-library(PEcAn.utils)
-ed2.2008 <- nc_open ('../output/PEcAn_9/out/9/2004.nc');
-xx <- nc_open ('../output/PEcAn_13/out/13/2004.nc')
-read.output(run.id=1, outdir='../output/PEcAn_1/out/1',
-            start.year=2004, end.year=2009,
-            variables="GPP",
-            model="SIPNET")
-```
diff --git a/base/qaqc/vignettes/function_relationships.Rmd b/base/qaqc/vignettes/function_relationships.Rmd
index 6bee9bf5717..1de4de04f7b 100644
--- a/base/qaqc/vignettes/function_relationships.Rmd
+++ b/base/qaqc/vignettes/function_relationships.Rmd
@@ -1,3 +1,11 @@
+---
+title: "Package Interdependencies"
+output: html_vignette
+vignette: >
+  %\VignetteIndexEntry{Package Interdependencies}
+  %\VignetteEngine{knitr::rmarkdown}
+---
+
-Package Interdependencies
-=========================
-
 This code helps to visualize the interdependence of functions within PEcAn
diff --git a/base/qaqc/vignettes/lebauer2013ffb.Rmd b/base/qaqc/vignettes/lebauer2013ffb.Rmd
deleted file mode 100644
index 5b769e5a31f..00000000000
--- a/base/qaqc/vignettes/lebauer2013ffb.Rmd
+++ /dev/null
@@ -1,15 +0,0 @@
-LeBauer 2013 analysis
-========================================================
-
-
-
-```{r}
-
-```
-
-
-
-```{r fig.width=7, fig.height=6}
-
-```
-
diff --git a/base/qaqc/vignettes/module_output.Rmd b/base/qaqc/vignettes/module_output.Rmd
index 251f1f505d6..4ab3566b0ca 100644
--- a/base/qaqc/vignettes/module_output.Rmd
+++ b/base/qaqc/vignettes/module_output.Rmd
@@ -1,3 +1,11 @@
+---
+title: "Modules and outputs"
+output: html_vignette
+vignette: >
+  %\VignetteIndexEntry{Modules and outputs}
+  %\VignetteEngine{knitr::rmarkdown}
+---
+
-Modules and outputs
-===================
-
 To get a better understanding of what files are created where, Rob created a workflow as an SVG diagram. You can find the diagram at http://isda.ncsa.illinois.edu/~kooper/EBI/workflow.svg
@@ -22,7 +27,8 @@
 are inputs and outputs.
To create this I used the trace functionality in R to capture the files saved/loaded -```{r} +```{r, eval = FALSE} +library("ncdf4") trace(nc_open, quote(cat(c("LOAD : ", filename, "\n"), file="files.txt", append=TRUE))) trace(nc_create, quote(cat(c("SAVE : ", filename, "\n"), file="files.txt", append=TRUE))) trace(save, quote(cat(c("SAVE : ", file, "\n"), file="files.txt", append=TRUE))) diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION index faddbebc3de..1dbdcb9d1fc 100644 --- a/modules/assim.batch/DESCRIPTION +++ b/modules/assim.batch/DESCRIPTION @@ -51,6 +51,7 @@ Imports: mvtnorm Suggests: knitr, + rmarkdown, testthat (>= 1.0.2) License: BSD_3_clause + file LICENSE Copyright: Authors From 58d4e50318287c75c77f1b97a481c77f2b4f0c25 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 11 Sep 2021 23:34:07 +0200 Subject: [PATCH 2250/2289] remove vignette warning from stored check --- base/qaqc/tests/Rcheck_reference.log | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/base/qaqc/tests/Rcheck_reference.log b/base/qaqc/tests/Rcheck_reference.log index 3e0eeb79c09..cb07568781d 100644 --- a/base/qaqc/tests/Rcheck_reference.log +++ b/base/qaqc/tests/Rcheck_reference.log @@ -109,14 +109,9 @@ See chapter ‘Writing R documentation files’ in the ‘Writing R Extensions’ manual. * checking Rd contents ... OK * checking for unstated dependencies in examples ... OK -* checking files in ‘vignettes’ ... WARNING -Files in the 'vignettes' directory but no files in 'inst/doc': - ‘compare_ED2.Rmd’, ‘function_relationships.Rmd’, - ‘lebauer2013ffb.Rmd’, ‘module_output.Rmd’, - ‘Pre-release-database-cleanup.Rmd’ -Package has no Sweave vignette sources and no VignetteBuilder field. +* checking files in ‘vignettes’ ... OK * checking examples ... NONE * checking for unstated dependencies in ‘tests’ ... OK * checking tests ... 
SKIPPED * DONE -Status: 4 WARNINGs, 3 NOTEs +Status: 3 WARNINGs, 3 NOTEs From a074be7f9388ea28e9bbc122f760258d36c40c39 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sat, 11 Sep 2021 23:35:19 +0200 Subject: [PATCH 2251/2289] update deps --- Makefile.depends | 2 +- docker/depends/pecan.depends.R | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/Makefile.depends b/Makefile.depends index 1001fcf3e8f..204a53a9b8c 100644 --- a/Makefile.depends +++ b/Makefile.depends @@ -2,7 +2,7 @@ $(call depends,base/all): | .install/base/db .install/base/settings .install/modules/meta.analysis .install/base/logger .install/base/utils .install/modules/uncertainty .install/modules/data.atmosphere .install/modules/data.land .install/modules/data.remote .install/modules/assim.batch .install/modules/emulator .install/modules/priors .install/modules/benchmark .install/base/remote .install/base/workflow .install/models/ed .install/models/sipnet .install/models/biocro .install/models/dalec .install/models/linkages .install/modules/allometry .install/modules/photosynthesis $(call depends,base/db): | .install/base/logger .install/base/remote .install/base/utils $(call depends,base/logger): | -$(call depends,base/qaqc): | .install/base/logger +$(call depends,base/qaqc): | .install/base/logger .install/models/biocro .install/models/ed .install/models/sipnet .install/base/utils $(call depends,base/remote): | .install/base/logger $(call depends,base/settings): | .install/base/db .install/base/logger .install/base/remote .install/base/utils $(call depends,base/utils): | .install/base/logger .install/base/remote .install/modules/emulator diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 0ccf146953c..ea03199d3d8 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -74,6 +74,7 @@ wanted <- c( 'mlegp', 'mockery', 'MODISTools', +'mvbutils', 'mvtnorm', 'ncdf4', 'neonUtilities', @@ -102,6 +103,7 @@ wanted <- c( 'rjags', 'rjson', 'rlang', +'rmarkdown', 'RPostgres', 'RPostgreSQL', 'Rpreles', From 8050b6162b97210791bab1702f637192594d168d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Sun, 12 Sep 2021 21:52:22 +0200 Subject: [PATCH 2252/2289] add some very simple test cases These are definitely not the pieces that most need testing, just the ones that were easy to see how to test them --- modules/allometry/DESCRIPTION | 3 +- modules/allometry/tests/testthat/ignore | 0 .../allometry/tests/testthat/test_AllomAve.R | 36 +++++++++++++++++++ modules/allometry/tests/testthat/test_coefs.R | 22 ++++++++++++ 4 files changed, 60 insertions(+), 1 deletion(-) delete mode 100644 modules/allometry/tests/testthat/ignore create mode 100644 modules/allometry/tests/testthat/test_AllomAve.R create mode 100644 modules/allometry/tests/testthat/test_coefs.R diff --git a/modules/allometry/DESCRIPTION b/modules/allometry/DESCRIPTION index 5de4c9ff65f..d67be10cfda 100644 --- a/modules/allometry/DESCRIPTION +++ b/modules/allometry/DESCRIPTION @@ -17,7 +17,8 @@ Imports: XML (>= 3.98-1.4) Suggests: testthat (>= 1.0.2), - PEcAn.DB + PEcAn.DB, + withr License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes diff --git a/modules/allometry/tests/testthat/ignore b/modules/allometry/tests/testthat/ignore deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/modules/allometry/tests/testthat/test_AllomAve.R b/modules/allometry/tests/testthat/test_AllomAve.R new file mode 100644 index 00000000000..cd367a0bc92 --- /dev/null +++ 
b/modules/allometry/tests/testthat/test_AllomAve.R @@ -0,0 +1,36 @@ + +test_that("AllomAve writes raw outputs to specified path", { + outdir <- tempfile("allomAve_test") + withr::local_file(outdir) # deletes outdir when test ends + dir.create(outdir, recursive = TRUE) + + pfts <- list(FAGR = data.frame(spcd = 531, acronym = "FAGR")) + allom_stats <- AllomAve( + pfts, + components = 6, + outdir = outdir, + ngibbs = 5, + nchain = 2) + + expect_true(file.exists(file.path(outdir, "Allom.FAGR.6.Rdata"))) + expect_true(file.exists(file.path(outdir, "Allom.FAGR.6.MCMC.pdf"))) + +}) + +test_that("AllomAve writes to cwd by default", { + outdir <- tempfile("allomAve_test_cwd") + withr::local_file(outdir) # deletes outdir when test ends + dir.create(outdir, recursive = TRUE) + withr::local_dir(outdir) # sets working dir until test ends + + pfts <- list(FAGR = data.frame(spcd = 531, acronym = "FAGR")) + allom_stats <- AllomAve( + pfts, + components = 18, + ngibbs = 5, + nchain = 2) + + expect_true(file.exists(file.path(outdir, "Allom.FAGR.18.Rdata"))) + expect_true(file.exists(file.path(outdir, "Allom.FAGR.18.MCMC.pdf"))) + +}) diff --git a/modules/allometry/tests/testthat/test_coefs.R b/modules/allometry/tests/testthat/test_coefs.R new file mode 100644 index 00000000000..8129b375b7f --- /dev/null +++ b/modules/allometry/tests/testthat/test_coefs.R @@ -0,0 +1,22 @@ +test_that("unit conversion", { + expect_equal( + AllomUnitCoef(c("mm", "Mg", "in")), + c(10, 1000, 1 / 2.54)) + + # unknown value of x -> error + expect_error(AllomUnitCoef("invalid")) + + expect_equal( + AllomUnitCoef(x = c("cm", "cm", "m"), tp = c("d.b.h.^2", "crc", "cbh")), + c(NA, pi, 0.01 * pi)) + + # unknown value of tp -> ignored + expect_equal( + AllomUnitCoef(x = "cm", tp = "invalid"), + 1) + + # length(tp) must equal length(x) + expect_error( + AllomUnitCoef(x = c("kg", "cm"), tp = "crc"), + "missing value") +}) \ No newline at end of file From 56fe97333196a126918fa63ddc04a939d21a46c9 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 13 Sep 2021 11:49:24 +0200 Subject: [PATCH 2253/2289] first pass at tests for state variable rescaling --- .../assim.sequential/tests/testthat/ignore | 0 .../tests/testthat/test_rescaling.R | 75 +++++++++++++++++++ 2 files changed, 75 insertions(+) delete mode 100644 modules/assim.sequential/tests/testthat/ignore create mode 100644 modules/assim.sequential/tests/testthat/test_rescaling.R diff --git a/modules/assim.sequential/tests/testthat/ignore b/modules/assim.sequential/tests/testthat/ignore deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/modules/assim.sequential/tests/testthat/test_rescaling.R b/modules/assim.sequential/tests/testthat/test_rescaling.R new file mode 100644 index 00000000000..8b75dda758d --- /dev/null +++ b/modules/assim.sequential/tests/testthat/test_rescaling.R @@ -0,0 +1,75 @@ + +settings <- list( + state.data.assimilation = list( + state.variables = list( + variable = list(variable.name = "a", scaling_factor = 1), + variable = list(variable.name = "b", scaling_factor = 2), + variable = list(variable.name = "c", scaling_factor = 3), + variable = list(variable.name = "z", scaling_factor = 0)))) + +mkdata <- function(...) 
{ + as.matrix(data.frame(...)) +} + +test_that("returns input where no scaling specified", { + expect_identical( + rescaling_stateVars(list(), 1L), + 1L) + + unscalable <- mkdata(d = 1, e = 1) + expect_identical( + rescaling_stateVars(settings, unscalable), + unscalable) + + partly_scaleable <- mkdata(c = 10, d = 10) + expect_equal( + rescaling_stateVars(settings, partly_scaleable), + partly_scaleable * c(3, 1)) +}) + +test_that("multiplies or divides as requested", { + expect_equal( + rescaling_stateVars( + settings, + mkdata(a = 1:3, b = 1:3, c = 1:3)), + mkdata(a = (1:3) * 1, b = (1:3) * 2, c = (1:3) * 3)) + expect_equal( + rescaling_stateVars( + settings, + mkdata(a = 1:3, b = 1:3, c = 1:3), + multiply = FALSE), + mkdata(a = (1:3) / 1, b = (1:3) / 2, c = (1:3) / 3)) +}) + +test_that("handles zeroes in data", { + expect_equal( + rescaling_stateVars(settings, mkdata(c = 0)), + mkdata(c = 0)) + expect_equal( + rescaling_stateVars(settings, mkdata(c = 0), multiply = FALSE), + mkdata(c = 0)) +}) + +test_that("handles zeroes in scalars", { + expect_equal( + rescaling_stateVars(settings, mkdata(z = 10)), + mkdata(z = 0)) + expect_equal( + rescaling_stateVars(settings, mkdata(z = 10), multiply = FALSE), + mkdata(z = Inf)) +}) + +test_that("retains attributes", { + x_attrs <- mkdata(b = 1:3) + attr(x_attrs, "site") <- "foo" + + expect_identical( + attr(rescaling_stateVars(settings, x_attrs), "site"), + "foo") +}) + +test_that("accepts data frames", { + expect_equal( + rescaling_stateVars(settings, data.frame(b = 2:4)), + data.frame(b = (2:4) * 2)) +}) From 61311b6508816b4c657ed8c0ca98d7a4948b6bf7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 13 Sep 2021 11:51:33 +0200 Subject: [PATCH 2254/2289] fix (silent) failure if multiple attributes to be copied --- modules/assim.sequential/R/Helper.functions.R | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index 0afe70dfa90..d19bb4566f9 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -124,7 +124,9 @@ rescaling_stateVars <- function(settings, X, multiply=TRUE) { purrr::discard( ~ .x %in% c("dim", "dimnames")) if (length(attr.X) > 0) { - attr(Y, attr.X) <- attr(X, attr.X) + for (att in attr.X) { + attr(Y, att) <- attr(X, att) + } } }, silent = TRUE) From 97b57f953a8d03e16972cf031f7c72ee86ec4f25 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 13 Sep 2021 12:01:12 +0200 Subject: [PATCH 2255/2289] Do not coerce output to matrix if input wasn't one With coercion, dataframe input was getting corrupted at the attribute copying step. With this change, dataframe input returns a rescaled dataframe. 
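
A minimal sketch of the failure mode, for illustration only: the `site`
attribute mirrors the one exercised in test_rescaling.R, but the object
names below are hypothetical and not part of the package code.

    X <- data.frame(b = 2:4)
    attr(X, "site") <- "foo"

    Y <- as.matrix(X)                  # unconditional coercion (old behavior)
    attr(Y, "site") <- attr(X, "site")
    is.data.frame(Y)                   # FALSE: caller silently gets a matrix

    Y2 <- X                            # guarded coercion (new behavior)
    if (is.matrix(X)) {                # only coerce when input was a matrix
      Y2 <- as.matrix(Y2)
      colnames(Y2) <- colnames(X)
    }
    attr(Y2, "site") <- attr(X, "site")
    is.data.frame(Y2)                  # TRUE: data frame in, data frame out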
--- modules/assim.sequential/R/Helper.functions.R | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/modules/assim.sequential/R/Helper.functions.R b/modules/assim.sequential/R/Helper.functions.R index d19bb4566f9..4c27a0c5cdb 100644 --- a/modules/assim.sequential/R/Helper.functions.R +++ b/modules/assim.sequential/R/Helper.functions.R @@ -112,10 +112,13 @@ rescaling_stateVars <- function(settings, X, multiply=TRUE) { }else{ X[, .x] } - }) %>% - as.matrix() %>% - `colnames<-`(colnames(X)) - + }) + + if (is.matrix(X)) { + Y <- as.matrix(Y) + colnames(Y) <- colnames(X) + } + try({ # I'm trying to give the new transform variable the attributes of the old one # X for example has `site` attribute From d5f8da23a177deaad2610ba97bab6b9dd982f727 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 13 Sep 2021 13:42:46 +0200 Subject: [PATCH 2256/2289] final newline --- modules/allometry/tests/testthat/test_coefs.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/allometry/tests/testthat/test_coefs.R b/modules/allometry/tests/testthat/test_coefs.R index 8129b375b7f..8791c2e4f05 100644 --- a/modules/allometry/tests/testthat/test_coefs.R +++ b/modules/allometry/tests/testthat/test_coefs.R @@ -19,4 +19,4 @@ test_that("unit conversion", { expect_error( AllomUnitCoef(x = c("kg", "cm"), tp = "crc"), "missing value") -}) \ No newline at end of file +}) From 1496b0bf6ad53f4b07c3b243d6ed741de33f2b14 Mon Sep 17 00:00:00 2001 From: PEcAn stylebot Date: Mon, 13 Sep 2021 11:43:59 +0000 Subject: [PATCH 2257/2289] automated documentation update --- docker/depends/pecan.depends.R | 1 + 1 file changed, 1 insertion(+) diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index ea03199d3d8..2de3d831512 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -128,6 +128,7 @@ wanted <- c( 'udunits2', 'urltools', 'utils', +'withr', 'XML', 'xtable', 'xts', From 2088563d0f884e466f6cbd2fa7568a13c0dce5d7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 13 Sep 2021 14:39:40 +0200 Subject: [PATCH 2258/2289] whitespace --- modules/allometry/tests/testthat/test_AllomAve.R | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/allometry/tests/testthat/test_AllomAve.R b/modules/allometry/tests/testthat/test_AllomAve.R index cd367a0bc92..0bda132225b 100644 --- a/modules/allometry/tests/testthat/test_AllomAve.R +++ b/modules/allometry/tests/testthat/test_AllomAve.R @@ -32,5 +32,4 @@ test_that("AllomAve writes to cwd by default", { expect_true(file.exists(file.path(outdir, "Allom.FAGR.18.Rdata"))) expect_true(file.exists(file.path(outdir, "Allom.FAGR.18.MCMC.pdf"))) - }) From b9bea872374771575449494bb80597bc654ce899 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 16 Sep 2021 14:45:19 +0200 Subject: [PATCH 2259/2289] fix Roxygen warnings in QAQC --- base/qaqc/R/cull_database_entries.R | 24 ++++++++++++----------- base/qaqc/R/find_formats_without_inputs.R | 11 ++++++----- base/qaqc/R/find_inputs_without_formats.R | 8 ++++---- base/qaqc/R/get_table_column_names.R | 3 ++- base/qaqc/R/taylor.plot.R | 6 +++--- base/qaqc/R/write_out_table.R | 4 ++-- base/qaqc/man/new.taylor.Rd | 6 +++++- 7 files changed, 35 insertions(+), 27 deletions(-) diff --git a/base/qaqc/R/cull_database_entries.R b/base/qaqc/R/cull_database_entries.R index 56868026162..8f155c95393 100644 --- a/base/qaqc/R/cull_database_entries.R +++ b/base/qaqc/R/cull_database_entries.R @@ -1,20 +1,22 @@ -##' @export cull_database_entries +##' Delete selected records from 
bety
+##'
+##' @export
 ##' @author Tempest McCabe
-##'
-##' @param outdir Directory from which the file will be read, and where the delete_log_FILE_NAME will be read to
-##' @param file_name The name of the file being read in
+##'
+##' @param table data frame containing records to be deleted. Specify either this or `file_name`
+##' @param outdir Directory from which the file will be read, and where the delete_log_FILE_NAME will be written to
+##' @param file_name The name of the file being read in. Specify either this or `table`
 ##' @param con connection to the bety database
 ##' @param machine_id Optional id of the machine that contains the bety entries.
-##'
-##' @description This is a fucntion that takes in a table of records and deletes everything in the file. Please do not run this function without
-##' 1) Backing Up Bety
-##' 2) Checking the the file only contains entries to be deleted.
+##' @param table_name database table from which to delete
+##'
+##' @description This is a function that takes in a table of records and deletes everything in the file. Please do not run this function without
+##' 1) Backing Up Bety
+##' 2) Checking that the file only contains entries to be deleted.
 ##'
 ##' For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
 ##' or look at the README
-##'
-##'
-
+##'
 cull_database_entries<-function(table = NULL, outdir, file_name = NULL, con, machine_id = NULL, table_name = NULL){
diff --git a/base/qaqc/R/find_formats_without_inputs.R b/base/qaqc/R/find_formats_without_inputs.R
index 9e2ee4d71c2..2ca993f4c87 100644
--- a/base/qaqc/R/find_formats_without_inputs.R
+++ b/base/qaqc/R/find_formats_without_inputs.R
@@ -1,7 +1,9 @@
-##' @export find_formats_without_inputs
+##' Find formats in bety that have no input record
+##'
 ##' @author Tempest McCabe
-##'
-##' @param user_id Optional parameter to search by user_id
+##'
+##' @param con database connection object
+##' @param user_id_code Optional parameter to search by user_id
 ##' @param created_after Optional parameter to search by creation date. Date must be in form 'YYYY-MM-DD'.
 ##' @param created_before Optional parameter to search by creation date. Can be used in conjunction with created_after to specify a specific window. Date must be in form 'YYYY-MM-DD'.
 ##' @param updated_after Optional parameter to search all entries updated after a certain date. Date must be in form 'YYYY-MM-DD'.
@@ -13,8 +15,7 @@
 ##'
 ##' For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
 ##' or look at the README
-
-
+##' @export
 find_formats_without_inputs <- function(con, user_id_code = NULL, created_after = NULL, updated_after = NULL, created_before = NULL, updated_before = NULL){
   input_command<-dplyr::tbl(con, 'inputs')
diff --git a/base/qaqc/R/find_inputs_without_formats.R b/base/qaqc/R/find_inputs_without_formats.R
index e68b0b96dfa..25307d4b829 100644
--- a/base/qaqc/R/find_inputs_without_formats.R
+++ b/base/qaqc/R/find_inputs_without_formats.R
@@ -1,9 +1,9 @@
-##' @export find_inputs_without_formats
+##' Find inputs in bety with no format records
 ##' @author Tempest McCabe
 ##'
 ##' @param user_id Optional parameter to search by user_id
-##' @param created_after Optional parameter to search by creation date. Date must be in form 'YYYY-MM-DD'
-##' @param updated_after Optional parameter to search all entried updated after a certain date. Date must be in form 'YYYY-MM-DD'
+##' @param created_before,created_after Optional parameter to search by creation date. Date must be in form 'YYYY-MM-DD'
+##' @param updated_before,updated_after Optional parameter to search all entries updated after a certain date. Date must be in form 'YYYY-MM-DD'
 ##' @param con connection to the bety database
 ##'
 ##'
@@ -12,7 +12,7 @@
 ##'
 ##' For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
 ##' or look at the README
-
+##' @export
 find_inputs_without_formats<-function(con, user_id=NULL, created_after=NULL, updated_after=NULL, created_before = NULL, updated_before = NULL){
   input_command<-dplyr::tbl(con, 'inputs')
diff --git a/base/qaqc/R/get_table_column_names.R b/base/qaqc/R/get_table_column_names.R
index 041a7d9407a..8a5a8bfe5ac 100644
--- a/base/qaqc/R/get_table_column_names.R
+++ b/base/qaqc/R/get_table_column_names.R
@@ -1,4 +1,4 @@
-##' @export get_table_column_names
+##' get_table_column_names
 ##' @author Tempest McCabe
 ##'
 ##' @param table a table that is output from one of the find_* functions,
@@ -11,6 +11,7 @@
 ##'
 ##' For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
 ##' or look at the README
+##' @export
 get_table_column_names<-function(table, con){
   if(is.data.frame(table)){
diff --git a/base/qaqc/R/taylor.plot.R b/base/qaqc/R/taylor.plot.R
index faa9f1a4c24..d28a2ceb09a 100644
--- a/base/qaqc/R/taylor.plot.R
+++ b/base/qaqc/R/taylor.plot.R
@@ -8,10 +8,10 @@
 #-------------------------------------------------------------------------------
 
 ##' Plot Taylor diagram for benchmark sites
-##' @title Taylor Diagram
-##' @param dataset
+##'
+##' @param dataset data to plot
 ##' @param runid a numeric vector with the id(s) of one or more runs (folder in runs) to plot
-##' @param siteid
+##' @param siteid vector of sites to plot
 new.taylor <- function(dataset, runid, siteid) {
   attach(dataset)
   for (run in runid) {
diff --git a/base/qaqc/R/write_out_table.R b/base/qaqc/R/write_out_table.R
index 2c41f9a21d4..252c7e1c824 100644
--- a/base/qaqc/R/write_out_table.R
+++ b/base/qaqc/R/write_out_table.R
@@ -1,4 +1,4 @@
-##' @export write_out_table
+##' write_out_table
 ##' @author Tempest McCabe
 ##'
 ##' @param table a table that is output from one of the find_* functions
@@ -11,7 +11,7 @@
 ##'
 ##' For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
 ##' or look at the README
-
+##' @export
 write_out_table<-function(table,table_name,outdir, relevant_table_columns){
   if(!"id" %in% relevant_table_columns){
diff --git a/base/qaqc/man/new.taylor.Rd b/base/qaqc/man/new.taylor.Rd
index 242b376a2b5..7b0ed1b7221 100644
--- a/base/qaqc/man/new.taylor.Rd
+++ b/base/qaqc/man/new.taylor.Rd
@@ -2,12 +2,16 @@
 % Please edit documentation in R/taylor.plot.R
 \name{new.taylor}
 \alias{new.taylor}
-\title{Taylor Diagram}
+\title{Plot Taylor diagram for benchmark sites}
 \usage{
 new.taylor(dataset, runid, siteid)
 }
\arguments{
+\item{dataset}{data to plot}
+
 \item{runid}{a numeric vector with the id(s) of one or more runs (folder in runs) to plot}
+
+\item{siteid}{vector of sites to plot}
 }
 \description{
 Plot Taylor diagram for benchmark sites

From 57563d8c614166404a5c1c226818c936962d55cc Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 16 Sep 2021 20:03:15 +0200
Subject: [PATCH 2260/2289] remove vignette warning from stored check

---
 base/qaqc/tests/Rcheck_reference.log | 21 +++------------------
 1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/base/qaqc/tests/Rcheck_reference.log b/base/qaqc/tests/Rcheck_reference.log
index cb07568781d..cd722a43288 100644
--- a/base/qaqc/tests/Rcheck_reference.log
+++ b/base/qaqc/tests/Rcheck_reference.log
@@ -89,24 +89,9 @@ See section ‘Good practice’ in ‘?attach’.
 * checking Rd metadata ... OK
 * checking Rd line widths ... OK
 * checking Rd cross-references ... OK
-* checking for missing documentation entries ... WARNING
-Undocumented code objects:
-  ‘cull_database_entries’ ‘find_formats_without_inputs’
-  ‘find_inputs_without_formats’ ‘get_table_column_names’
-  ‘write_out_table’
-All user-level objects in a package should have documentation entries.
-See chapter ‘Writing R documentation files’ in the ‘Writing R
-Extensions’ manual.
+* checking for missing documentation entries ... OK
 * checking for code/documentation mismatches ... OK
-* checking Rd \usage sections ... WARNING
-Undocumented arguments in documentation object 'new.taylor'
-  ‘dataset’ ‘siteid’
-
-Functions with \usage entries need to have the appropriate \alias
-entries, and all their arguments documented.
-The \usage entries must correspond to syntactically valid R code.
-See chapter ‘Writing R documentation files’ in the ‘Writing R
-Extensions’ manual.
+* checking Rd \usage sections ... OK
 * checking Rd contents ... OK
 * checking for unstated dependencies in examples ... OK
 * checking files in ‘vignettes’ ... OK
 * checking examples ... NONE
 * checking for unstated dependencies in ‘tests’ ... OK
 * checking tests ... SKIPPED
 * DONE
-Status: 3 WARNINGs, 3 NOTEs
+Status: 1 WARNING, 3 NOTEs

From a074be7f9388ea28e9bbc122f760258d36c40c39 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Fri, 17 Sep 2021 09:11:58 +0200
Subject: [PATCH 2261/2289] Rd files that were previously not written because of errors

---
 base/qaqc/man/cull_database_entries.Rd | 39 ++++++++++++++++++++
 base/qaqc/man/find_formats_without_inputs.Rd | 37 +++++++++++++++++++
 base/qaqc/man/find_inputs_without_formats.Rd | 34 +++++++++++++++++
 base/qaqc/man/get_table_column_names.Rd | 24 ++++++++++++
 base/qaqc/man/write_out_table.Rd | 26 +++++++++++++
 5 files changed, 160 insertions(+)
 create mode 100644 base/qaqc/man/cull_database_entries.Rd
 create mode 100644 base/qaqc/man/find_formats_without_inputs.Rd
 create mode 100644 base/qaqc/man/find_inputs_without_formats.Rd
 create mode 100644 base/qaqc/man/get_table_column_names.Rd
 create mode 100644 base/qaqc/man/write_out_table.Rd

diff --git a/base/qaqc/man/cull_database_entries.Rd b/base/qaqc/man/cull_database_entries.Rd
new file mode 100644
index 00000000000..463c8f2e444
--- /dev/null
+++ b/base/qaqc/man/cull_database_entries.Rd
@@ -0,0 +1,39 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cull_database_entries.R
+\name{cull_database_entries}
+\alias{cull_database_entries}
+\title{Delete selected records from bety}
+\usage{
+cull_database_entries(
+  table = NULL,
+  outdir,
+  file_name = NULL,
+  con,
+  machine_id = NULL,
+  table_name = NULL
+)
+}
+\arguments{
+\item{table}{data frame containing records to be deleted. Specify either this or `file_name`}
+
+\item{outdir}{Directory from which the file will be read, and where the delete_log_FILE_NAME will be written to}
+
+\item{file_name}{The name of the file being read in. Specify either this or `table`}
+
+\item{con}{connection to the bety database}
+
+\item{machine_id}{Optional id of the machine that contains the bety entries.}
+
+\item{table_name}{database table from which to delete}
+}
+\description{
+This is a function that takes in a table of records and deletes everything in the file. Please do not run this function without
+1) Backing Up Bety
+2) Checking that the file only contains entries to be deleted.
+
+For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
+or look at the README
+}
+\author{
+Tempest McCabe
+}
diff --git a/base/qaqc/man/find_formats_without_inputs.Rd b/base/qaqc/man/find_formats_without_inputs.Rd
new file mode 100644
index 00000000000..c3ab4efdf67
--- /dev/null
+++ b/base/qaqc/man/find_formats_without_inputs.Rd
@@ -0,0 +1,37 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/find_formats_without_inputs.R
+\name{find_formats_without_inputs}
+\alias{find_formats_without_inputs}
+\title{Find formats in bety that have no input record}
+\usage{
+find_formats_without_inputs(
+  con,
+  user_id_code = NULL,
+  created_after = NULL,
+  updated_after = NULL,
+  created_before = NULL,
+  updated_before = NULL
+)
+}
+\arguments{
+\item{con}{connection to the bety database}
+
+\item{user_id_code}{Optional parameter to search by user_id}
+
+\item{created_after}{Optional parameter to search by creation date. Date must be in form 'YYYY-MM-DD'.}
+
+\item{updated_after}{Optional parameter to search all entries updated after a certain date. Date must be in form 'YYYY-MM-DD'.}
+
+\item{created_before}{Optional parameter to search by creation date. Can be used in conjunction with created_after to specify a specific window. Date must be in form 'YYYY-MM-DD'.}
+
+\item{updated_before}{Optional parameter to search all entries updated before a certain date. Date must be in form 'YYYY-MM-DD'.}
+}
+\description{
+This is a function that returns a dataframe with all of the format entries that have no associated input records.
+
+For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
+or look at the README
+}
+\author{
+Tempest McCabe
+}
diff --git a/base/qaqc/man/find_inputs_without_formats.Rd b/base/qaqc/man/find_inputs_without_formats.Rd
new file mode 100644
index 00000000000..72076db503e
--- /dev/null
+++ b/base/qaqc/man/find_inputs_without_formats.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/find_inputs_without_formats.R
+\name{find_inputs_without_formats}
+\alias{find_inputs_without_formats}
+\title{Find inputs in bety with no format records}
+\usage{
+find_inputs_without_formats(
+  con,
+  user_id = NULL,
+  created_after = NULL,
+  updated_after = NULL,
+  created_before = NULL,
+  updated_before = NULL
+)
+}
+\arguments{
+\item{con}{connection to the bety database}
+
+\item{user_id}{Optional parameter to search by user_id}
+
+\item{created_before, created_after}{Optional parameter to search by creation date. Date must be in form 'YYYY-MM-DD'}
+
+\item{updated_before, updated_after}{Optional parameter to search all entries updated after a certain date. Date must be in form 'YYYY-MM-DD'}
+}
+\description{
+This is a function that returns a dataframe with all of the input entries that have no associated format records.
+This is very rare in the database.
+
+For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
+or look at the README
+}
+\author{
+Tempest McCabe
+}
diff --git a/base/qaqc/man/get_table_column_names.Rd b/base/qaqc/man/get_table_column_names.Rd
new file mode 100644
index 00000000000..44f31c778a5
--- /dev/null
+++ b/base/qaqc/man/get_table_column_names.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/get_table_column_names.R
+\name{get_table_column_names}
+\alias{get_table_column_names}
+\title{get_table_column_names}
+\usage{
+get_table_column_names(table, con)
+}
+\arguments{
+\item{table}{a table that is output from one of the find_* functions,
+or a data.frame containing the output from multiple find_* functions. Could also be a vector of table names.}
+
+\item{con}{a connection to the bety database.}
+}
+\description{
+This function will return a vector of the column names for a given table(s) in the bety database.
+Useful for choosing which columns to include in the written-out table.
+
+For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
+or look at the README
+}
+\author{
+Tempest McCabe
+}
diff --git a/base/qaqc/man/write_out_table.Rd b/base/qaqc/man/write_out_table.Rd
new file mode 100644
index 00000000000..2eaf9dc57d5
--- /dev/null
+++ b/base/qaqc/man/write_out_table.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/write_out_table.R
+\name{write_out_table}
+\alias{write_out_table}
+\title{write_out_table}
+\usage{
+write_out_table(table, table_name, outdir, relevant_table_columns)
+}
+\arguments{
+\item{table}{a table that is output from one of the find_* functions}
+
+\item{table_name}{name of table}
+
+\item{outdir}{path to folder into which the editable table will be written}
+
+\item{relevant_table_columns}{a list of all columns to keep. ID and table name will be automatically included.}
+}
+\description{
+This is a function that returns a dataframe with all of the format entries that have no associated input records.
+
+For more information on how to use this function see the "Pre-release-database-cleanup" script in the 'vignettes' folder
+or look at the README
+}
+\author{
+Tempest McCabe
+}

From 57563d8c614166404a5c1c226818c936962d55cc Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Fri, 17 Sep 2021 11:32:33 +0200
Subject: [PATCH 2262/2289] spaces in filename cause Make to skip whole package

...and also cause check warnings at package build time

---
 ...xtract_50_sitegroup_data 2.R => extract_50_sitegroup_data_2.R} | 0
 .../bmorrison/{general_sda_setup 2.R => general_sda_setup_2.R} | 0
 .../inst/sda_backup/sserbin/{workflow 2.R => workflow_2.R} | 0
 3 files changed, 0 insertions(+), 0 deletions(-)
 rename modules/assim.sequential/inst/sda_backup/bmorrison/{extract_50_sitegroup_data 2.R => extract_50_sitegroup_data_2.R} (100%)
 rename modules/assim.sequential/inst/sda_backup/bmorrison/{general_sda_setup 2.R => general_sda_setup_2.R} (100%)
 rename modules/assim.sequential/inst/sda_backup/sserbin/{workflow 2.R => workflow_2.R} (100%)

diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data 2.R b/modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data_2.R
similarity index 100%
rename from modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data 2.R
rename to modules/assim.sequential/inst/sda_backup/bmorrison/extract_50_sitegroup_data_2.R
diff --git a/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup 2.R b/modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup_2.R
similarity index 100%
rename from modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup 2.R
rename to modules/assim.sequential/inst/sda_backup/bmorrison/general_sda_setup_2.R
diff --git a/modules/assim.sequential/inst/sda_backup/sserbin/workflow 2.R b/modules/assim.sequential/inst/sda_backup/sserbin/workflow_2.R
similarity index 100%
rename from modules/assim.sequential/inst/sda_backup/sserbin/workflow 2.R
rename to modules/assim.sequential/inst/sda_backup/sserbin/workflow_2.R

From 8050b6162b97210791bab1702f637192594d168d Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sat, 18 Sep 2021 21:06:00 +0200
Subject: [PATCH 2263/2289] Quiet Roxygen complaints + style tweaks

Mostly to eliminate "inline HTML not supported" and "no parameters to
inherit with @inheritParams" messages.

---
 base/utils/R/seconds_in_year.R | 1 -
 base/utils/R/write.config.utils.R | 33 ++++++++++++++--------------
 base/utils/man/get.quantiles.Rd | 2 +-
 base/utils/man/get.sa.sample.list.Rd | 2 +-
 base/utils/man/get.sa.samples.Rd | 13 ++++++-----
 base/utils/man/met2model.exists.Rd | 8 ++-----
 6 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/base/utils/R/seconds_in_year.R b/base/utils/R/seconds_in_year.R
index d255fa8e85a..d45ab750bd8 100644
--- a/base/utils/R/seconds_in_year.R
+++ b/base/utils/R/seconds_in_year.R
@@ -8,7 +8,6 @@
 #' seconds_in_year(2000) # Leap year -- 366 x 24 x 60 x 60 = 31622400
 #' seconds_in_year(2001) # Regular year -- 365 x 24 x 60 x 60 = 31536000
 #' seconds_in_year(2000:2005) # Vectorized over year
-#' @inheritParams days_in_year
 #' @export
 seconds_in_year <- function(year, leap_year = TRUE, ...) {
   diy <- days_in_year(year, leap_year)
diff --git a/base/utils/R/write.config.utils.R b/base/utils/R/write.config.utils.R
index b349e8636dd..ce30d82adb0 100644
--- a/base/utils/R/write.config.utils.R
+++ b/base/utils/R/write.config.utils.R
@@ -11,10 +11,10 @@
 ### TODO: Generalize this code for all ecosystem models (e.g. ED2.2, SiPNET, etc).
 #--------------------------------------------------------------------------------------------------#
 
-#--------------------------------------------------------------------------------------------------#
-##' Returns a vector of quantiles specified by a given <quantiles> xml tag
+##' Get Quantiles
+##'
+##' Returns a vector of quantiles specified by a given `<quantiles>` xml tag
 ##'
-##' @title Get Quantiles
 ##' @param quantiles.tag specifies tag used to specify quantiles
 ##' @return vector of quantiles
 ##' @export
@@ -40,7 +40,6 @@ get.quantiles <- function(quantiles.tag) {
 
 ##' get sensitivity samples as a list
 ##'
-##' @title get.sa.sample.list
 ##' @param pft Plant Functional Type
 ##' @param env
 ##' @param quantiles quantiles at which to obtain samples from parameter for
@@ -58,16 +57,20 @@ get.sa.sample.list <- function(pft, env, quantiles) {
 } # get.sa.sample.list
 
 
-#--------------------------------------------------------------------------------------------------#
+##' Get sensitivity analysis samples
+##'
 ##' Samples parameters for a model run at specified quantiles.
-##'
-##' Samples from long (>2000) vectors that represent random samples from a trait distribution.
-##' Samples are either the MCMC chains output from the Bayesian meta-analysis or are randomly sampled from
-##' the closed-form distribution of the parameter probability distribution function.
+##'
+##' Samples from long (>2000) vectors that represent random samples from a
+##' trait distribution.
+##' Samples are either the MCMC chains output from the Bayesian meta-analysis
+##' or are randomly sampled from the closed-form distribution of the
+##' parameter probability distribution function.
 ##' The list is indexed first by trait, then by quantile.
-##' @title get sensitivity analysis samples
-##' @param samples random samples from trait distribution
-##' @param quantiles list of quantiles to at which to sample, set in settings file
+##'
+##' @param samples random samples from trait distribution
+##' @param quantiles list of quantiles at which to sample,
+##' set in settings file
 ##' @return a list of lists representing quantile values of trait distributions
 ##' @export
 ##' @author David LeBauer
@@ -83,12 +86,10 @@ get.sa.samples <- function(samples, quantiles) {
 } # get.sa.samples
 
 
-#--------------------------------------------------------------------------------------------------#
 ##' checks that met2model function exists
 ##'
-##' Checks if met2model.<model> exists for a particular
-##' model
-##' @title met2model.exists
+##' Checks if `met2model.<model>` exists for a particular model
+##'
 ##' @param model model package name
 ##' @return logical
 met2model.exists <- function(model) {
diff --git a/base/utils/man/get.quantiles.Rd b/base/utils/man/get.quantiles.Rd
index d0687f9777f..541e2098b56 100644
--- a/base/utils/man/get.quantiles.Rd
+++ b/base/utils/man/get.quantiles.Rd
@@ -13,7 +13,7 @@ get.quantiles(quantiles.tag)
 vector of quantiles
 }
 \description{
-Returns a vector of quantiles specified by a given <quantiles> xml tag
+Returns a vector of quantiles specified by a given \verb{<quantiles>} xml tag
 }
 \author{
 David LeBauer
diff --git a/base/utils/man/get.sa.sample.list.Rd b/base/utils/man/get.sa.sample.list.Rd
index 82f239cca89..7cb0dce163d 100644
--- a/base/utils/man/get.sa.sample.list.Rd
+++ b/base/utils/man/get.sa.sample.list.Rd
@@ -2,7 +2,7 @@
 % Please edit documentation in R/write.config.utils.R
 \name{get.sa.sample.list}
 \alias{get.sa.sample.list}
-\title{get.sa.sample.list}
+\title{get sensitivity samples as a list}
 \usage{
 get.sa.sample.list(pft, env, quantiles)
 }
diff --git a/base/utils/man/get.sa.samples.Rd b/base/utils/man/get.sa.samples.Rd
index 74aa70f1e81..9f52c6bc321 100644
--- a/base/utils/man/get.sa.samples.Rd
+++ b/base/utils/man/get.sa.samples.Rd
@@ -2,14 +2,15 @@
 % Please edit documentation in R/write.config.utils.R
 \name{get.sa.samples}
 \alias{get.sa.samples}
-\title{get sensitivity analysis samples}
+\title{Get sensitivity analysis samples}
 \usage{
 get.sa.samples(samples, quantiles)
 }
 \arguments{
 \item{samples}{random samples from trait distribution}
 
-\item{quantiles}{list of quantiles to at which to sample, set in settings file}
+\item{quantiles}{list of quantiles at which to sample,
+set in settings file}
 }
 \value{
 a list of lists representing quantile values of trait distributions
@@ -18,9 +19,11 @@ a list of lists representing quantile values of trait distributions
 Samples parameters for a model run at specified quantiles.
 }
 \details{
-Samples from long (>2000) vectors that represent random samples from a trait distribution.
-Samples are either the MCMC chains output from the Bayesian meta-analysis or are randomly sampled from
-the closed-form distribution of the parameter probability distribution function.
+Samples from long (>2000) vectors that represent random samples from a
+trait distribution.
+Samples are either the MCMC chains output from the Bayesian meta-analysis
+or are randomly sampled from the closed-form distribution of the
+parameter probability distribution function.
 The list is indexed first by trait, then by quantile.
 }
 \author{
diff --git a/base/utils/man/met2model.exists.Rd b/base/utils/man/met2model.exists.Rd
index 7f9c359ff50..5b1c80ca6a3 100644
--- a/base/utils/man/met2model.exists.Rd
+++ b/base/utils/man/met2model.exists.Rd
@@ -2,7 +2,7 @@
 % Please edit documentation in R/write.config.utils.R
 \name{met2model.exists}
 \alias{met2model.exists}
-\title{met2model.exists}
+\title{checks that met2model function exists}
 \usage{
 met2model.exists(model)
 }
@@ -13,9 +13,5 @@ met2model.exists(model)
 logical
 }
 \description{
-checks that met2model function exists
-}
-\details{
-Checks if met2model.<model> exists for a particular
-model
+Checks if \verb{met2model.<model>} exists for a particular model
 }

From 3d52e7577d30ee77fe9606e952e9fde5ff3a228a Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sat, 18 Sep 2021 14:41:21 +0200
Subject: [PATCH 2264/2289] use snapshot testing for Taylor diagram

---
 base/qaqc/DESCRIPTION | 3 +-
 .../_snaps/test.taylor.plot/new.taylor.png | Bin 0 -> 44400 bytes
 base/qaqc/tests/testthat/test.taylor.plot.R | 51 ++++++++++++++----
 3 files changed, 42 insertions(+), 12 deletions(-)
 create mode 100644 base/qaqc/tests/testthat/_snaps/test.taylor.plot/new.taylor.png

diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index 4c40fae33e0..1a77d153e4d 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -20,11 +20,12 @@ Suggests:
   PEcAn.SIPNET,
   PEcAn.utils,
   rmarkdown,
-  testthat
+  testthat (>= 3.0.0)
 License: BSD_3_clause + file LICENSE
 Copyright: Authors
 LazyLoad: yes
 LazyData: FALSE
 Encoding: UTF-8
 VignetteBuilder: knitr
+Config/testthat/edition: 3
 RoxygenNote: 7.0.2
diff --git a/base/qaqc/tests/testthat/_snaps/test.taylor.plot/new.taylor.png b/base/qaqc/tests/testthat/_snaps/test.taylor.plot/new.taylor.png
new file mode 100644
index 0000000000000000000000000000000000000000..3f48bc883389e4677b13d31428f5d75bf67e895e
GIT binary patch
literal 44400
zcmeFZQ+r-t_$}O|Nt(vCZ8mnB#|L9Q#D
zYp%6!jC+hR6DlVo3J-$=^XbzkcyTddg-@TrV*Y(VLjm9M?PMSO^a%pn{KpSD%O9dY
zEf7a7lgkCC*QggORt4!~;&9khj9#ho!xQkTW#=2ZMpcq{;x%^``4 z#T$6UNf4{n|8TYmJ3zdWg?L{X~7#`y~a{mi*cCreD_WF@@8cC?v%c0NZrpKmYO&^krX?Q` zRmJ=5=Pf6P!GzvK=Z<&Dd_|e3;m<3Lw#8BSJkzgFH+QO?F|I$Nat9fbSfcl-4`caF=T(1)A1F zNQy8IUzYDKLB{){{UIc=hMm33?C54vV1X3f1DgvIBJL+p0&1Zpp8i7Xz3II^UTNq% z9!&>fFzQqopRTp467o16=0wL+NKYrpC>BXgDGJd80a^}=e+g{8Waj1mth_jw$;)9* zS@o#)Q0(mXFh3e4+NeZ06y@G%v)JLB)fK%uIs{OAp+)YL*~hc(jDn?-d#iG?`xaOG@=-^4340 z_^B9(g@_V?x+qSbMna1O{`m{StR<-&0guah!AK(Qi9PRGK{UteuK*oWUvTxNU&L>> zGqUb7g!rKNYG+@Nw_}#^Xi`IMO%=Gm=MOCOreUTh9Z{;!6|BD1GCO1uEXNBVkSIE- zNCZ(9&CChYW-kek2u@B<5bO!*d#3Al=zQ~A-;#(xh18Nkf7p>?R_(NC3L)S z^%0}BYu>n6BBCx=H?W^e>M& zg4GX@BgiwVCJ8~1P8_w3%+M+#zvVk@7VDErfsSe24hEwR3_yLybxy z^%tMD$VbrU{i&FLo3NfdW*}!RSUj69b`Mh^5DI zGZ*p4N2sP2JyubOND+CcyLUy6HAbpZhlWQ3xy$fNIO)yZc^{Y}pVhWR=le5lx6@aQ zzyPFqNj;~)hQKijx)j5jwPt-33=>fMecHNGXe|6ZnF>fpdhx`c54X;AQf5&}paKOW zNlJ+5Gja@d1ssH`um`Dc0~p7i9|Bf7q}xTX3PH67Oc}2nZc!@&rVw)qBoOS?HL+`L zhf~mK16&6gKY#@9FzId}bkpBcmAQuWix3eMA;ONV9zu-#Bqe(_=l%Y4aKh+fD37io zhe6}ZhD{|BQN*O+hdMN0sfjA(Tr*Eh25lq?zzCy3-7_jPR^GK`%!KFyP5!uxGqLB) zU7c4TK)0h~fBgponuv&0)EIKa;{JmH0szc-Wkx~x4=`W?D>eC&i2v9A-+|P-8WiLF z4=88>AZHsqjO;(S@<>Vq(?>|U^&j~7^$$6~QpNoTfi|OksgQgk0)PAus_23Qdg1@O z@c(<%|1Y@yuhNMW4BY4K4)g8hqVp?f)EHeM+8a=hsoR_Jd7S^gzr8e!0cbMQ`C>g& z;fX{FNsJ9haD~W2d6J7>M1cYT$fb-VQ2!{DOfK3VPoDvxZf35FE&v)e8f~bUWGg+!oDDm~F*W?}e+;PomVXCw^OTsk64ocf~E9jfeYE!j=!DB8RjZ%r= zL8c3V*X@+3;~>6^nU=@Hx%A#hVm@Q%E6eW<@7OR8PoUvPk3@gsiLxg^278Y~L4o*A z{0|G3<^2pV<8s(90a|+r(6IiYNHIIp%Exl9gbJpyRr7M&jwcHJSOD1nU^AZauEGIBQulZd1=zmpSxm#I{yB|^CstyLt< zb$Gp0@E(ZQK7?#Uw<7eDiWi$TV?cN!6X_!8BSGLn1NWl_G{}+Ive^~e&PwOA<-!?E zI<*{Er97SmOHjMY)op|fA?L3g_tTm|rBD=dB>L_!C$cf^+S5wXxfe(uEJejX-k?Qt`oP+0B&FOePrgiE^S3Ru#czej+ zjt%8L$Z+^JnIFwx76WDRH^&c$$GL(=x1+7^HmD%T z9p=?dOMxN4+{*jyPW7%IQ`d~{`d|`NrV6S$p06yo-yO=85qNW~%)w>1GNYCr>Nv~` zcf@r|$m8*P;iKr8I3dvdV^xX{o$vv?C=?glI3j4We^VlfBiSUnAAopDfiP*=C$^g@ zl0JUiiB<)MqN#(!0x8k;N=**s0x@JZw`-%jCDUx`gS!(@XI1BQE>a94d5zI9igvO* z(a8d6A?&lYV6n^1I7j&?W2Y8v;8eb7Ie_G;voI7!lNnXfX;o&8MI*3sjfE(a3Je(A4baJ}^#!I`ZX(*hx*=8gU8lsap8Zs? 
[GIT binary patch body omitted: base85-encoded data for the 44400-byte PNG
 snapshot, evidently base/qaqc/tests/testthat/_snaps/test.taylor.plot/new.taylor.png
 given the renames in PATCH 2265 below; binary image data, not human-readable.]
[The same encoding damage swallowed the diff header and opening lines of
 base/qaqc/tests/testthat/test.taylor.plot.R; the text resumes inside its
 expect_plot() helper, apparently at a version guard such as
 `if (utils::packageVersion("testthat") >= "3.0.4") {`.]
+    announce_snapshot_file(name = name)
+  }
+  fig <- save_png(code)
+
+  expect_snapshot_file(fig, name)
+}
+
+
+test_that("taylor diagram", {
+
+  set.seed(1)
+  testdata <- data.frame(
+    site = c(1, 1, 1, 2, 2, 3),
+    date = c(2001, 2001, 2002, 2003, 2004, 2005),
+    obs = rnorm(6, 10, 2),
+    model1 = rnorm(6, 10, 3) + 2,
+    model2 = rnorm(6, 11, 3) + 2)
+
+  expect_plot(
+    "new.taylor.png",
+    new.taylor(testdata, siteid = 1:3, runid = 1:2))
+})

From fe4cd60ade983a09b0a9a1ab5bfef783260439e2 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sat, 18 Sep 2021 15:35:44 +0200
Subject: [PATCH 2265/2289] avoid testthat bug handling dots in filenames

See https://github.com/r-lib/testthat/issues/1425
---
 .../new.taylor.png => taylorplot/taylor.png}  | Bin
 .../{test.taylor.plot.R => test-taylorplot.R} |   2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename base/qaqc/tests/testthat/_snaps/{test.taylor.plot/new.taylor.png => taylorplot/taylor.png} (100%)
 rename base/qaqc/tests/testthat/{test.taylor.plot.R => test-taylorplot.R} (98%)

diff --git a/base/qaqc/tests/testthat/_snaps/test.taylor.plot/new.taylor.png b/base/qaqc/tests/testthat/_snaps/taylorplot/taylor.png
similarity index 100%
rename from base/qaqc/tests/testthat/_snaps/test.taylor.plot/new.taylor.png
rename to base/qaqc/tests/testthat/_snaps/taylorplot/taylor.png
diff --git a/base/qaqc/tests/testthat/test.taylor.plot.R b/base/qaqc/tests/testthat/test-taylorplot.R
similarity index 98%
rename from base/qaqc/tests/testthat/test.taylor.plot.R
rename to base/qaqc/tests/testthat/test-taylorplot.R
index aeaa5d34528..9a7abb6756d 100644
--- a/base/qaqc/tests/testthat/test.taylor.plot.R
+++ b/base/qaqc/tests/testthat/test-taylorplot.R
@@ -47,6 +47,6 @@ test_that("taylor diagram", {
     model2 = rnorm(6, 11, 3) + 2)
 
   expect_plot(
-    "new.taylor.png",
+    "taylor.png",
     new.taylor(testdata, siteid = 1:3, runid = 1:2))
 })

From 555358b6b883aa362e2f6a25acb3d1f314c6a7df Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sat, 18 Sep 2021 14:52:22 +0200
Subject: [PATCH 2266/2289] refactor new.taylor to remove attach()

---
 base/qaqc/DESCRIPTION                |  7 ++++---
 base/qaqc/R/taylor.plot.R            | 20 +++++++++-----------
 base/qaqc/tests/Rcheck_reference.log | 18 +-----------------
 3 files changed, 14 insertions(+), 31 deletions(-)

diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index 1a77d153e4d..3137cf2427c 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -8,10 +8,11 @@ Authors@R: c(person("David","LeBauer"),
 Author: David LeBauer, Tess McCabe
 Maintainer: David LeBauer
Description: PEcAn integration and model skill testing -Depends: - plotrix Imports: - PEcAn.logger + graphics, + PEcAn.logger, + plotrix, + stats Suggests: knitr, mvbutils, diff --git a/base/qaqc/R/taylor.plot.R b/base/qaqc/R/taylor.plot.R index d28a2ceb09a..00986770518 100644 --- a/base/qaqc/R/taylor.plot.R +++ b/base/qaqc/R/taylor.plot.R @@ -13,22 +13,20 @@ ##' @param runid a numeric vector with the id(s) of one or more runs (folder in runs) to plot ##' @param siteid vector of sites to plot new.taylor <- function(dataset, runid, siteid) { - attach(dataset) for (run in runid) { for (si in siteid) { + sitemask <- dataset$site %in% si + obs <- dataset$obs[sitemask] + mod <- dataset[sitemask, paste0("model", run)] + R <- stats::cor(obs, mod, use = "pairwise") + sd.f <- stats::sd(mod) + lab <- paste(paste0("model", run), paste0("site", si)) if (run == runid[1] && si == siteid[1]) { - taylor.diagram(obs[site %in% si], get(paste0("model", run))[site %in% si], pos.cor = FALSE) - R <- cor(obs[site %in% si], get(paste0("model", run))[site %in% si], use = "pairwise") - sd.f <- sd(get(paste0("model", run))[site %in% si]) - lab <- paste(paste0("model", run), paste0("site", si)) - text(sd.f * R, sd.f * sin(acos(R)), labels = lab, pos = 3) + plotrix::taylor.diagram(obs, mod, pos.cor = FALSE) } else { - taylor.diagram(obs[site %in% si], get(paste0("model", run))[site %in% si], pos.cor = FALSE, add = TRUE) - R <- cor(obs[site %in% si], get(paste0("model", run))[site %in% si], use = "pairwise") - sd.f <- sd(get(paste0("model", run))[site %in% si]) - lab <- paste(paste0("model", run), paste0("site", si)) - text(sd.f * R, sd.f * sin(acos(R)), labels = lab, pos = 3) + plotrix::taylor.diagram(obs, mod, pos.cor = FALSE, add = TRUE) } + graphics::text(sd.f * R, sd.f * sin(acos(R)), labels = lab, pos = 3) } } } # new.taylor diff --git a/base/qaqc/tests/Rcheck_reference.log b/base/qaqc/tests/Rcheck_reference.log index cd722a43288..0a786774508 100644 --- a/base/qaqc/tests/Rcheck_reference.log +++ b/base/qaqc/tests/Rcheck_reference.log @@ -41,9 +41,6 @@ Non-standard file/directory found at top level: * checking dependencies in R code ... WARNING '::' or ':::' imports not declared from: ‘dplyr’ ‘PEcAn.DB’ -Package in Depends field not imported from: ‘plotrix’ - These packages need to be imported from (in the NAMESPACE file) - for when this namespace is loaded but not attached. * checking S3 generic/method consistency ... OK * checking replacement functions ... OK * checking foreign function calls ... OK @@ -64,27 +61,14 @@ find_inputs_without_formats: no visible binding for global variable ‘created_at’ find_inputs_without_formats: no visible binding for global variable ‘updated_at’ -new.taylor: no visible global function definition for ‘taylor.diagram’ -new.taylor: no visible binding for global variable ‘obs’ -new.taylor: no visible binding for global variable ‘site’ -new.taylor: no visible global function definition for ‘cor’ -new.taylor: no visible global function definition for ‘sd’ -new.taylor: no visible global function definition for ‘text’ write_out_table: no visible global function definition for ‘write.table’ Undefined global functions or variables: - cor created_at obs read.table sd site taylor.diagram text updated_at + created_at read.table updated_at user_id user_id_code write.table Consider adding - importFrom("graphics", "text") - importFrom("stats", "cor", "sd") importFrom("utils", "read.table", "write.table") to your NAMESPACE file. 
-
-Found the following calls to attach():
-File ‘PEcAn.qaqc/R/taylor.plot.R’:
-  attach(dataset)
-See section ‘Good practice’ in ‘?attach’.
 * checking Rd files ... OK
 * checking Rd metadata ... OK
 * checking Rd line widths ... OK

From f9faa4c8413c9800b89991b223bbb7b7d4b896b6 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sun, 19 Sep 2021 22:57:37 +0200
Subject: [PATCH 2267/2289] use vdiffr for cross-OS reproducibility

PNG snapshots had slightly different kerning on OS X and Linux.

NB vdiffr requires R >= 4.1, but the test will skip gracefully on older R
versions.
---
 .../_snaps/taylorplot/taylor-diagram.svg      | 1605 +++++++++++++++++
 .../testthat/_snaps/taylorplot/taylor.png     |  Bin 44400 -> 0 bytes
 base/qaqc/tests/testthat/test-taylorplot.R    |   35 +-
 3 files changed, 1608 insertions(+), 32 deletions(-)
 create mode 100644 base/qaqc/tests/testthat/_snaps/taylorplot/taylor-diagram.svg
 delete mode 100644 base/qaqc/tests/testthat/_snaps/taylorplot/taylor.png

diff --git a/base/qaqc/tests/testthat/_snaps/taylorplot/taylor-diagram.svg b/base/qaqc/tests/testthat/_snaps/taylorplot/taylor-diagram.svg
new file mode 100644
index 00000000000..f7e9d2a9bf6
--- /dev/null
+++ b/base/qaqc/tests/testthat/_snaps/taylorplot/taylor-diagram.svg
@@ -0,0 +1,1605 @@
[1605 lines of SVG markup omitted: the vdiffr snapshot of the rendered plot.
 Recoverable text content: title "Taylor Diagram"; axis labels "Standard
 deviation", "Standard Deviation", "Centered RMS Difference", and "Correlation
 Coefficient"; correlation ticks from -1 to 1; and point labels "model1 site1",
 "model1 site2", "model2 site1", "model2 site2".]

diff --git a/base/qaqc/tests/testthat/_snaps/taylorplot/taylor.png b/base/qaqc/tests/testthat/_snaps/taylorplot/taylor.png
deleted file mode 100644
index 3f48bc883389e4677b13d31428f5d75bf67e895e..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 44400
[base85 body of the 44400-byte PNG literal omitted: binary image data, not
 human-readable. The remainder of this patch, the 35-line change to
 base/qaqc/tests/testthat/test-taylorplot.R, is not preserved in this section.]
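For orientation, a minimal sketch of what the vdiffr-based test introduced by this patch plausibly looks like. The actual hunk to test-taylorplot.R is lost above, so the doppelganger title, the skip guard, and the function wrapper are assumptions inferred from the snapshot filename taylor-diagram.svg and the commit message, not the literal patch content:

```
# A sketch only, assuming vdiffr's documented API; not the literal patch hunk.
library(testthat)

test_that("taylor diagram", {
  # vdiffr needs R >= 4.1; skipping keeps the suite green on older setups
  skip_if_not_installed("vdiffr")

  set.seed(1)
  testdata <- data.frame(
    site   = c(1, 1, 1, 2, 2, 3),
    date   = c(2001, 2001, 2002, 2003, 2004, 2005),
    obs    = rnorm(6, 10, 2),
    model1 = rnorm(6, 10, 3) + 2,
    model2 = rnorm(6, 11, 3) + 2)

  # expect_doppelganger() snapshots the figure as SVG; the stored file name
  # ("taylor-diagram.svg") is the slugified title. Because SVG keeps text as
  # text instead of rasterizing it, the OS-specific font kerning that made the
  # PNG snapshot differ between OS X and Linux no longer changes the snapshot.
  vdiffr::expect_doppelganger(
    "Taylor diagram",
    function() new.taylor(testdata, siteid = 1:3, runid = 1:2))
})
```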
zEf7a7lgkCC*QggORt4!~;&9khj9#ho!xQkTW#=2ZMpcq{;x%^``4 z#T$6UNf4{n|8TYmJ3zdWg?L{X~7#`y~a{mi*cCreD_WF@@8cC?v%c0NZrpKmYO&^krX?Q` zRmJ=5=Pf6P!GzvK=Z<&Dd_|e3;m<3Lw#8BSJkzgFH+QO?F|I$Nat9fbSfcl-4`caF=T(1)A1F zNQy8IUzYDKLB{){{UIc=hMm33?C54vV1X3f1DgvIBJL+p0&1Zpp8i7Xz3II^UTNq% z9!&>fFzQqopRTp467o16=0wL+NKYrpC>BXgDGJd80a^}=e+g{8Waj1mth_jw$;)9* zS@o#)Q0(mXFh3e4+NeZ06y@G%v)JLB)fK%uIs{OAp+)YL*~hc(jDn?-d#iG?`xaOG@=-^4340 z_^B9(g@_V?x+qSbMna1O{`m{StR<-&0guah!AK(Qi9PRGK{UteuK*oWUvTxNU&L>> zGqUb7g!rKNYG+@Nw_}#^Xi`IMO%=Gm=MOCOreUTh9Z{;!6|BD1GCO1uEXNBVkSIE- zNCZ(9&CChYW-kek2u@B<5bO!*d#3Al=zQ~A-;#(xh18Nkf7p>?R_(NC3L)S z^%0}BYu>n6BBCx=H?W^e>M& zg4GX@BgiwVCJ8~1P8_w3%+M+#zvVk@7VDErfsSe24hEwR3_yLybxy z^%tMD$VbrU{i&FLo3NfdW*}!RSUj69b`Mh^5DI zGZ*p4N2sP2JyubOND+CcyLUy6HAbpZhlWQ3xy$fNIO)yZc^{Y}pVhWR=le5lx6@aQ zzyPFqNj;~)hQKijx)j5jwPt-33=>fMecHNGXe|6ZnF>fpdhx`c54X;AQf5&}paKOW zNlJ+5Gja@d1ssH`um`Dc0~p7i9|Bf7q}xTX3PH67Oc}2nZc!@&rVw)qBoOS?HL+`L zhf~mK16&6gKY#@9FzId}bkpBcmAQuWix3eMA;ONV9zu-#Bqe(_=l%Y4aKh+fD37io zhe6}ZhD{|BQN*O+hdMN0sfjA(Tr*Eh25lq?zzCy3-7_jPR^GK`%!KFyP5!uxGqLB) zU7c4TK)0h~fBgponuv&0)EIKa;{JmH0szc-Wkx~x4=`W?D>eC&i2v9A-+|P-8WiLF z4=88>AZHsqjO;(S@<>Vq(?>|U^&j~7^$$6~QpNoTfi|OksgQgk0)PAus_23Qdg1@O z@c(<%|1Y@yuhNMW4BY4K4)g8hqVp?f)EHeM+8a=hsoR_Jd7S^gzr8e!0cbMQ`C>g& z;fX{FNsJ9haD~W2d6J7>M1cYT$fb-VQ2!{DOfK3VPoDvxZf35FE&v)e8f~bUWGg+!oDDm~F*W?}e+;PomVXCw^OTsk64ocf~E9jfeYE!j=!DB8RjZ%r= zL8c3V*X@+3;~>6^nU=@Hx%A#hVm@Q%E6eW<@7OR8PoUvPk3@gsiLxg^278Y~L4o*A z{0|G3<^2pV<8s(90a|+r(6IiYNHIIp%Exl9gbJpyRr7M&jwcHJSOD1nU^AZauEGIBQulZd1=zmpSxm#I{yB|^CstyLt< zb$Gp0@E(ZQK7?#Uw<7eDiWi$TV?cN!6X_!8BSGLn1NWl_G{}+Ive^~e&PwOA<-!?E zI<*{Er97SmOHjMY)op|fA?L3g_tTm|rBD=dB>L_!C$cf^+S5wXxfe(uEJejX-k?Qt`oP+0B&FOePrgiE^S3Ru#czej+ zjt%8L$Z+^JnIFwx76WDRH^&c$$GL(=x1+7^HmD%T z9p=?dOMxN4+{*jyPW7%IQ`d~{`d|`NrV6S$p06yo-yO=85qNW~%)w>1GNYCr>Nv~` zcf@r|$m8*P;iKr8I3dvdV^xX{o$vv?C=?glI3j4We^VlfBiSUnAAopDfiP*=C$^g@ zl0JUiiB<)MqN#(!0x8k;N=**s0x@JZw`-%jCDUx`gS!(@XI1BQE>a94d5zI9igvO* z(a8d6A?&lYV6n^1I7j&?W2Y8v;8eb7Ie_G;voI7!lNnXfX;o&8MI*3sjfE(a3Je(A4baJ}^#!I`ZX(*hx*=8gU8lsap8Zs? 
zIMy8F6Td(jbPixX07F=1alafU_b`Bu)oySh{pBYBRO*CN zY<0Ge(6{A3T`hRQ!SDtZU?w*^ke*jp@?QP}w1J*-!*S%5ya#L}_eV3qu9@B2TpzDD zl@1wb3|c8oy~X!uE2u7g?zySO`39~Tm=o;XpL~nE>Hfsa@-J~8P&19j(IQEB zkbVHkfv$p-5J(9HmNiJJBN#2dQfG>IMdj(8daV|-Q!cSCr zfA%KiW&aLg8(A0*5ws@AmB45pi7wn({yEG(vpB5drIv+ z%VC=vPDr&aTKpwE)x78o@Zk{Ps7);W%Y zr{92zS?`)q4%9y(5v1*Yr|lz}=c_?NwwJ%TKaPNR6iST)(V7D83APtG6_0BoqXX2q z3;@L0k&rBOKm~szq`!-OnBCZ`=qE`!w0RT|t?%adFmwzdJ?-H6V!?1B?$AguUJs znH1O-I>agTrX@eXW%MOU-Wz#E$%7SHFnOUR5hahShhir1g&XV!$VWO@K;@H2vDTPd zX{pzLIdd8fHn3u5&0)8?;f(iKo5T7L4k7!fS_p`J) zB>0CMBJbrVyDr+Z&XI14WdAPtXy3;k?qfnQu&&Tg8JylgIbI%Ry51xqSohr0SNq5N zO(Vx3i*aMvC-oy?asDE*lXB$4!D2BAu{=SN{cJOaObFqF_dd^-mSd@(EG&!w(fzZX zxT%E0j#ElwmNa;$5Xg3UpeaH#ET17mp0VV1Ar-*b>;N4=*0%HQ51-})q#vChKu*$K zv5JvqKC$-ZRO6)raah&bIIuz&rSqh(hH^s*qqMSw8P>-?@Z!0k*aF zAB9>w4I;L#L2?DU?AIx1nCDyIp-)KVc>%z|==w~N;`=2b#lMZvf*flhJ4B55 z{{obVexi_)u&}CB5|S8=&D{j`^y0_~EZGz;^1U{0p0 zwv%#BZDu(qiyR(mHriB5TQmPmpwn;P15in0&pOFHPf<)9d0!t7OtY1XG+gU!`4M*%g;)I${VB3-q?-4zJciIA-MoXI%ih zO4U6ptSOzhshM?^( z_x_<+(uPD7jJ%o<1b=k&dE~akgKU?&F@ak%y_fsWMd1e`Mw5@Slcf$mO6!%RXd0!( zT%Kp`)+j^uy%LMLOm}I6$*djM0;QP5RUh}jIMPA83~$~O_aq!DT&mS{$2V&y&2N~ zq+NC^+7&^k7a1hqKhCjHfes_F?zEuozP1U_y;Bkp6g5W)0Ty!>;sH==x#K#W@9v0< zA%v5e_k->U*S(1S&8JVRT$A%Bh&~_MS;es^doAA7mKP&Y9n0d>?2Dd z0|2n!y}eK_*Fk+L(X?*s@s5C$dTgqL7}aE=nj)YuEm>hO#CEzs+!09c;8GFSXoY`| zx=_G#ab*CUt-{|Pz#YYUOl$s5!J`bVbTYH87aip@|1^SR%y0T{&+xo=^>I8a%Dl_X z5o>O3IXsIJi4%F<0h2i`PNs*{IXTy_kEar(W!KOeqNyxD&1LJpK|(%gd_i~}9C=iD zzF)CDT66I+y6B{JUA831bedOht5OKAYRbcNc{Ya`1|gVsc>SQb9{VlD0RZHE(rDiA z+O5us+7~iJd&}qmDO6Q0mWS_`gS#nb0#L%sSDOI!-?Y=(!$7h1RHf}ReZ41Xb~Igh z+XIU^-H#bvd@~6GCJ43yU_7|e800K(J3qx0#wAw!jz@`I`0aGgbuO02h2hKt_Pw#H zgS)cNwzFY@wCXFWw@Gvt=;nD0(|v@==Uv3gNhKCgmy9sa+@fMv=0vC*y3yRPw)?AI z)&BY%<_4PqN>84k7IK_0CD2DQLWZ31!1dX?^vhr~xHw$~ z!Dx~#O6aAONuyFhxPTB}{k~*fVU=X56OVCcM(8G-xTQJ)LRf{(pfyl3EfI?)QEGLz zewUQ=OJzwox!mfUsyBbQ2_ehAPgHKU*LL36?ht??53m{PTgSy>5L^=UgZ>IG^iO1j zD9n#I0OrKJld4vk4K`;9fDTyj12)!yuVXazG@i+$Y8w|IiR3!l-+_RXFfhvKBOZ=X zX+m4F$2;AWa^Bg=-C%a**+Wh3K^ejB1@sVVc_N*B@U=yhzd(W8f|GkC;6MS*Dh#~22U$!A2#`>GH6mYV;C|8?kii5@-y9r%U=5bv9Auc%F^ zw1YRV@wc;LsVvr|E*ItMZnw}8we6?_-r;~m0>yALV8-pz*k&{nVdasIIvf~kkJV$V z08G_ygEAiGcgF^fTlsUP6M?vHeN!2UBSL531TW(LRzrPMkgdCf9;eJ}a1op}4^S}u zYI#BC*_Q--&>`3tQ#YEFPQ^kTT}!m38VzOd{%cKFM(;fxG3RY{73a4-R=-&@YqV`x z4)k|ZCs<3cZLyYyR|DEB+GQS3-A^jntWKGJ$iip2TNth@#s%}g{))KxE!8?!ob9dA z8jH!Q!$V_#{u|)ISLI^sIU$|_^`sI+A(LTd?aS)H`cK*8=<0@Lz*sGoPVb)QC12t> z2elD{Xd(%`5M@>OqT{j2x=&~cX84Ix3Vstay6GFaUWvrEyWd&xSp@VeW3hzw>QJIN zj3z0oODS86-`%f-bFq)OxZ^{dAHrhrCTjP6$X@J=YRjQ5;`+m<0@xW15Ia#I8}ggr zig=g^FAAas)`!A3W)7R-*A8Pf0-EDnqKZrME{vJ*3+f0Q;5XEmJc2q)h;}I_r9}XA zh}L}HqF1Ss`yEo%N6nevfuNB8bawDP6H}KrAdO?AM`G80gAG$L8<_Hh|4BB1QM*qh zpW@~EHeD~;%iXjM5p4w}9}G@3Im21W6o}-xuZ?Ut(C^;fUv%+s9!Q@bcS4^@bKLJV zUFoTxI4Or$lM}|*frWkJ+>*rH9JV_(-&8~E4c-D8s$(## z^|He8)Q$lTn?-^%R7ey;V9NGzjF-%YY}68db_ioco3Oo9RF%)bqgashhD z3y+)K3v9*Fk<8gpF7 z@((LtgB-9{phh#Z_)PzTI?oN`SHlG_zW2n zdOw04D*JqS*Zo;*Lz~T7OU>X)m{ppGkJ;Wo;wVkPr$ZH;`RK4BE*BUAqc+1nCyd0# zm@kA-lx~oMEiEzz{3)&-*WqMPtL~cf%WD>O_-5T*V>syGns)v3ZN=rBH*7(IVxUR* zstooBponG|EH;~v8SJVT`XV=@p#172k{K$!es#k~$+7?NbpgQ`vqCIhWo~22o-Nk= zxLW;FrcfAW@S$wy(%lh<_Qgp-;N!J!TMen~XE?_3-GXlP4(nH7?pHJ(?EnGMsNUdm zp%Z`V&U??z=jt^6a**wV=Asc6z~8)I%o1xDSR!mKUc`WqR(P2#!D8KS>;K2}+ZIQc zt=~ppjCnxwJ6TpHD?5DCi{UT{H<|xVt$MBK{$QnHXai2?m-}J-l(Evb@p7~15 z06qWX)QFqSLnFa~^h;5738gxWgRcHD%bDYdX!_K>Qe5dM-oa`oC;1}h%3}M=6CSyJ z1p>i*$rP22?$fe?-9XfEw!rw8GzWZ$Ns8|WB;QH04qBW}rgMY>=(z0T_>hoH!V z0p;9M>;N-_6q7wte7p6^A9sNeyK^kLK$9P)mPSF;2ZNToEyMeF&}$|j*K6FjA3I|{jU2eU< 
z6PX{?gR45Z3BgZgY_$jrQS*Fnt8}`||6i9Sx0%94@ z>5OGRjBpJ#sKa|d)l{icqk*NaSwR&B35m7>%{OI<#Fa64yVG4G7?r3Jv0{n$t z0nn)z7F_+@!UV6Bl19>9Vl0;HOTA0!_oXy zR-M*IYS~YU+|O)?CTr@Z;j9k1OEyYO#4}ab%QTfA8DblGgakTDSFh{X5|s7gGI7Rl zR|}P^*8e&~7FuNZx=XIPJ|s&Pyg|cf?GJBWT0m4Kh0-uuPG2qP|1;h}V=$cXNd8!y zj4}UOipP%bk0*t6wvN9mW1OkeF7UlV)PZQ)OzUg%67^Z?Bg$@ zvW=7kSiv9VVAZsvRsTOEvEW~6Y0WZaI>yQd)oo)eXV?#>5;DJ;!@tOTge-_?Sfbj; zGYpb`0aF(iu=keqqV#xp{}IjqOx23=6oI+U*qHEdEc9GUSQ zjqDt&m%ob-b_;MQf)HuJhwo5=8;%s3QT@(VeDv$qY9Ma@%I9b0+3$sE&{aGXYMg}JQ|FmEtN-TwkSs7B2K%0slO*VNwVeUZEUMHA0F{9)~ufkk9f^_ zI7|2}S)|0-@$HOBw0^LD7%$Ul2WJxN-U;jbqoI`?->)1br=w|dipk3%<`)toMYW}e z#&B~ks|!L`y>GVARw56pGej$6se`M5XW;m%c~K(+WwKX$;q5GoFL5;XGWtQ*@tQo< zHjwdiTFv(TM#bNqBf4TRi@Y7+)OnG35?~GLT#j$C1d0FTsj3&gx0n8a6)JiYEhxOlnQvHQw4ZrRB7(NXMc6zE-CU`mFM)@?E}|(DK6`J$+XW#cN)ZD%OC~Gz%wc# z*r)URFNyO_pQYryaY;PW*)3{>IjP>sEtfH)=`pKlKs48~TW`jWdJk*={%l9T#cYWX zsMDJ|68i>*Q1@XE^EYs-5rVb%7duPbDOEBlOnxp2ApJc%flPM`;_>ha;w6OcI)&_r zUUiPNf%GY8nWw6jW18~9OXg<7FE!NW8q+TqsI*LZ&S%qAHF#iAzdpc`kAqu- z5%6_P3u}JoL*D0uLn7SCAsa1)bzvOW{alw_+uhIiW%_@@DGNx&!K30Sf{`l;DJt+iPn+%J4cM#QS(A=3pY<4%8POc}gnj&m-62&Hd zsc>RC>*C3yH)6Hi1^*CRtW;Y{LzjemtB+r>2>=r zd$IH8{iZ(=X0>_(2!(+$pJ3`}WC+ofhQmPAkQQg@4Gz2OVIPMOC0d?oaS_Ls81=%D z4acbQfn1BZdgL(k-D>@s1&XzckOl}fCu}^{yf+)`uDZ{dr>;qP3hQmnCAyu#TMD5M zjP0nyvpg1S3Jh)l@j@1@UnJJ`tMh00@H3>xZtTj=enkAyw>awXbVN;pb@hvm7aLfP z&TFq{wTjbY+^WtQP_E!+ZvW6Bpl^S4x?<$7f@=h)=f3@p8vZ;N~kxFJj6LF#u|QAy6B z;KNXz-|ji#b|z;=5@At-v!va^w-~Yv)kNbgxreBN+DAw-0IA=n-auH$* zIVn|kRG5roE?EWm3T~E>8411`|9c=tSJ}|%JJkUX0f_rkL$M3Zf94%2s#3qANOQGS zLx5mvJq8mEmef?(laDMw6kRnamjTBBTaVW;|^5<!qp(nnFy%WMK=iOeVh_Iw7bJX(|{o@T6J&pq#aoO{qm6 zS^lRwV!V4H*|u1($hURovml-Y1Z9l(G@>qiqkGN!0R*4KDmDt6CB|SJ6BvF}UH*U> zJaf}ocNhxHmJ|mU73w_9s3d}selD*3V_eNQ?)$X7dY5Mq(x9bSpC#YP6C>#Uv>X+2 zM!;c%)BiF(rM^Vk{rQMFVvfJ%?S6%~U0O7eeorA7s74CG#uDUb8$Q0p8fPp4@Z+NbK{`V|-1+SF|Td7vx{k zU;a{HAqo@E$N*XN`9&&$drg5r>*sIOW&LFX0gi{r{5RfID5nc=SWpn&ru>6gs6q-{ z47QdE;4Xvvg?5z^cev6Bde`uJ+zQ^o7>kr@>)Q`vGZ%{1F$VbbvK>{`5;@|Mk}l5_ z^R4&HVu}zBAr#u7uAhcj?aKtc3I5?sq%jc|igL}>!aI9})ZFD_>%$gLV?c(x=_ zmEAtcf@%SHS|B{q>qw>V6AuX+vesNIjoH1CD^H-g7%es!h5ti+-L-+wEIn^MAl;Lc zc8Q(Km@+QhoLdj-6t24q7M~uP=i=c)mIv`=W7aaNiSczSwW}^RHE?RTFLb=rXn2x~ zX?ZkNyKK&xX`qW594`ApekrP`B8cm|I&p_6+Gp5>k?$1SI$z)pCBv7ZtZ0E`ny;G4}ZUFVjC9x?HpLES}M*3N@ikU~@c{g5m`*D9P^#?4b zOd+O@YnMxKv)j>5;3F9XbufO9{!TDwx$zD&4l@Z~gRx{D>N43!Q)(2ww*Bi8V1!vz z-$;3cllf}@Q3Z(~s=wRzxE&tI{)B7<)hr;uwJ2u3VKt2KYiFo_L#J4CjiTdAR%q|f zl2+S|Vz+#KdG~wuhX+eQmkLZ_tww4Etj)YGgOdgyEP&jyz(2$7a$b>r)9a)wD%{RH zNtFNr4w#JEkLSvhmpgqrw@W!#J9$qrfWnCo;S?*E{v9Q(sTy_SekB`eW%NRS<&}Z=87v(EB z(QtjKRA!I=tk@_5N|z-ZBEzLlmh-Z?#nbg(n5fEeRqMF|Kmv#F#0PFdlYNN03YSbB za)2S0zakw&UQkG&3 zGW}}AmlP`_j$1*PwfSSO?i?=#;_=%W;YXYi70Bes77!&({Qm_Sr;1(1 zqU|S#DZqxM;U-6PAG~5jB=ZCqW&{C)tqG+^@(EKm@Hl4n2;B-WX>%P)WipR{uicn& zGt2&lz?L5&@rE9OOzj=yfsfasge=8`t*2+6-1{yx!V|6^9jg9C=t;@(S>7lI-dQxb zD04rnX9Pk-2(wQ9%e&sw9~X&bG#qwUKp-{`dSRCGM0ELzo$QXlgW9_3&*MhAL+5-F zZ&C*Dc_nWGJ)`Ezky#^;^c98S18`DF@;c{dmvz-;qp z8OGsQux;p0whS_oKN2bh=W8bbE6wB9$1lc#m=E*fVK#4>aY4*~k5QZcIf7Tu_tWIO zr9KW78ou9Vop#UDRxlxrck?abzSdG=;C5y|bJykJcj9uGEb?zd)^)>KmJjVffC>lJ=vw1bt_ z`ZLu7P*%S3Brq!~$%X#H+jUX|>$!lnO}oa$B69TO{XwAIn%7@}w`JiZQiUYD4ImrK ztd?q3xaUc(2w;AS0!CzF#|t;SqA1LfJHm`%eX`hOYszt@ufqDA+OKrL4G1ie$5^CKsySRQ84)lG2zBvpObZKD_lvYZ}+DZS58r z^^5zI5`r7x=^XP=mN!j;(3J>|MQ{a&*P~T4OA}sp<8>OyvZI{S{-ik^uvsv>30YgyMK9rIClu-!Bpr6iHRnx~lp@|R~v+ep#TPBst61FpFc?D{_4l{^<&QboF4GZ4p zp-hDh;o6(IJr(H91lZv~0Es{zb)bQ+Ieuul4qeEp(zp#Nmo;ILU)6Sv9L`88J0_WC zh0m*5d~}c3kVK3F9O>j{sZv|Ew#dFTd_fr|9!w1<92LAq!Mp*pk#9P#m{C5OH!wqZC 
zfO$xq3czrI{-f!s^q@9YYp%!RqveM3#GYP@4Vj6v!Wsyp&Ct?^a!|td9%1;Ru);rs z;=EIttgB)QaJWL(sHSKw7V?+SkeG^JK^}h77dL{b|8!1bC&zKfZLKx_>r2aXpLqT>6Uz3aiu-|afj9o;KPn(A%nk5_i%a?=BW2HB^w-*qu=zj52J8pHE#YRWFaewj?J@2yx z8f`_$z!&}ScHqR%Y5su&LQK3#=$`WWk<+Z&qU#l<1fpfEp;0bgD~3UMc0C%qs~vgc zBk^$kf(}OOZ{Q4rrDbD>>5f=e78SiEp zFmTcAiLqB6;QU?#kbiB0N1flzF+To|^fJp@uF79oX%cDF8ij7lP$Ai>QFwOO%?u0n z$Gv@bp(Qki*ruTT3*W@FwXR4#sRQZF`s(~!E?QOl^|8$91kQAy;OxTpZSB&cpRw~) z9-XFbbbvOvv|9(B=lhH?3~c})HZ|s4XDw+*5_Xa}N&{XFpFJkj;&%)kvn6V%YyGGD*ocgH*80Tn<+GEKFXaDu#yA%x+f9lyfQx97D zG4aUt)22ft{R0mc{QB7VL)ZJIK7s&LyQYKghrfesn8*gA0Q%Q&NnC5PBW-q5X}F;{ z{xrqz3MV$3N=ai28a!ED@}3Rij+CVU+1nfy84mIp9ypej%AN2&H-@=SUb;N3*p}KT zxA5}rxX#1>yc?X${ekJ6krMq`-kK$EHwEmU^7ctCawvONiQH)U1De9gxN2V!AkSLqVs1qO|@3s42^9#8oc5F%!;nWvD-GR4p0KObzm zdIUpdrp~ibT8xqR{uVw{B43+hGe|ynFy$Vh)VK|uz+t~Tcv{xZw8l~>ByoNY?wkw5 zF-KIq7ts!GYHBj+kVRSnkvBJ2`U8)YxaFdh$!U2-4ehU%b2N8S=FQ%!nA!Ahtfaxg_M zs;pF76jBjMAL0>Ugw6qy*U>@;tGNMvM(^Gn`I1brv9z}w-nRV?BX@$x$fzKX@BWzx z(*{uOccR7dXhKolVX5#vYZpS? zToowWO+9g1B-?g%0t5zw*0B&zP85&dS%oT29bSr9uglTugz!{@=PSeyZp&FSS?|Zx zDYt`(k^)s$~mokz_jM~rPB$3yWi9+X13N<*L8A3s_+na>;} z?YOlJY!mo%^p(5296p3Ml>2SS@zS{i@aqS>-t#OzWghIW|7D-)sKD9i+ zwJqOlZ(gQyW^g=r;$@noRs0DIf0rB5Ubv4Kg8$`I)MCY%o?hFXIkA1m^@Mza%J;G` z{IUu906Q>&1Yt*EEKFb8(v+`(-Ji^ckPJ9p%x7Gi_TSTs zqo+w|0iS*O?KlUeVym-i^^4ao${x>$iN^i(s8YN8veU?&WUw!0xp` zs>d(n;&_(tMXK8Oj(ukuZ7GLY9$49)HTmjP%=M$;z~A3Qe7}v`ON_$y!?kUT^*JF9 zWMj)kBOxjpGl%Yi2T%Vv`os#H;Jt)fWi=dtgS5?V>ndHN;CvHW16*-om9ZqfIDHfw z47}>P8u}-73z;Q!;DmgaU7trL-vYIDL^5tB%LXDPlQS;zC}<9Kegl@a>bXxIWSgEy zRp;<`$Az=|0+Ia;j_?I_3*>LqY86n%5_?uJlH^w1}zq-d93Z@U%=$z1U@<_3CeMs zK#;)o^iraMZ+~4qMk#}@;e)|ssNm$Syl?DvcndSk&_wEC`?eiWO`VsRq)zFYD;Uac zb#GO~QMl36nQTd=`@--+i^>7J}c67paIUYLvOIw17Qw^SzH`FR!udb{3&6lSIwTs8PR7p4sbA5OJ*r+2fvXZYmLfm?h2!mz zQA#V8b3dO-2o3sUkD($fLDjI}?qoj1NX^|+rQlx->B#}>sgK4yd9gp8jGkIn}7g5L_(ug>U9HikmJz;G|tB<6!a2b!8eQcg6;AJph z?tj7|#G`oJy0aTL$F&0LVz0^hD&%odx=nQHCuZT(upGeggxAuA(dygGl~=Oe??pQR z4M9{j8Tb{|zz9DQb3HzuI)h^LI1dRtzLU4Yq$dGMpW(yFs(H^WfH}SOFf+TM%t}p6SFenAoU-vKJgS*lIHz<{ zn)+angI+mIdl~dX>QMpXNdkLx2i|HPUG3Q1Cv*>9F#S9jG8TL^hp#-}(Oa>R^sIOL zaHn-R)+tcx^y44Wuh)a^pV!B{9`XHxRMUMMW{$^Xvo!gSneK8JK+jT3ylx=`Ze-T< z(3aLjFRW%h>@)U?9~1Slw0_wE|f_^FeqFi+;&!knYQm& zg9Y!>jaEx}@zqPQ0xX8#)p6JJ&!}Jmee9@w4Ws$ypU)zxi7_@#9f!%24}-&djZt`> z^9KoiGNQQlvG5I_)?7s8I=wN~PrVt_`^|Vl6`=aB@~~uPeHK0PU$_*QG`U7 zE?xBQaO@DjH&x};ZpS%%EuC{t zcz;)R`BBfkUUK*mTD_X6QY9j8UdjkBWEcW&uzo4lE?r57DG!%Y`5UsS)FJBBX*utM znP)D2kEXpmCB@cpOMUt63|(e)2(0v$En7yY!%3zycBY%_0|(|)#fs_enl>$6K^h)& z^pk~;A7CA&bAxOn-|Lw&?eII|9tIDN=F7@YcG!HoB56Fl1}l%M^x}();@*46#fS+C z*hx@?Wy+KxUI*d#sS;gr#TD9HfRpd2yFMD{?bfkxYAkf(59hF~y7=OBFSK8A3|GWA zcM=xZvF>3AF0sp%E3Yb7o~foy`$i|b&Al4_!+Au=QhNlq!_iJ)9VHYl8_+0()ine! 
zt8jI0zaNvA=m?9OprMB$hn0c;OC-~t2id5@PJTG!9v1S-zuj}}M;~ELBu$Ea&B{*} zum#3p33B~p2bGEYG2wBs?&G!|6;xJ6@!?+Ktrx~Fh?l<2W|qfOkggV1h$d<`jzLUy z{*cV5CT1JCvWU(<6|63b6)UD-T>)zhSYh!rmHrX=i^MYR!-K|iq-yV>pcy=RVXLe5 z>YjVz%>%uh0oc@jp=@hkktC0jyw=!NY%Tkp;y8!YBn_ zDTW3N7{DWr%Ie&6&sD8jwbBnDPhIG2e0{7en0FUVF{DcC#n9G7sYkIuGo5Ib4KJ z2By;XDJXO>yj3HP5_NPBKKNk5`*16Zy*u`4$hZm*C7^5=1;Unmdefef=T^7LGe>-> zV-JECxyj&}$QS~K`096d+%ZhH3&?bnYhww0if1X&r~{80Z;?7IWPb%3_#cDQbTIWel?xZb{k?PYwKZ46P%K?J5e zKtVem@J6lAt=!_pixqYw8$t)eGXoh;*bidwRNN{w?G2+qAk&0jHRau82Tk*m!H*Pw zyrfG3hQQK~AkRytUs=9E}h=QAFZO_?>aL}TJU-4nv5SIU^pVCS=)}D2u zv#nudCsWqc6GnlUGMRBS>m$5o9*9zbg|i}zI`Hg-q0E@ZvcF?^_Y!?9;N5N2svUc8 z2yb{0=Dv?)-rPr>*;$Hi7%scny@9p|_Y#RcBG_1FhnJlS^AStCKl^7 z&n#7{6tQcT#D$W`HD;l1B4*Acs?0$bVPfpCnAYshwz|pSUBo)tAnwCZqB;$a*D;y0 zkQuc6W_J0D=Yx$m;+of%#!gtm)Nh#f zue*-(GigAC*q_n-~~VJ^#&6Jyq2yy^@(0>L^29B>A0GunzaqwP)#A_G_fGmL_b zRXO~9*(rz|JAaUKuV({gQJ?cKh|C~7PPz7b!#iZ3LbJbPL1e?C@|tU|QL?rX1g&@; znZk<4#5d)cXWmvldc5e%mxuu-!OuG#A|)!|$g{sh$b$PGzvUKZ8HY%HiH<2F36^0y zc4S-)LGBSsB+=BPXGBqt-st8GYkJ{zdV{3R-v5qYLw z9PRr!qSQ5b@ZcEZmnbrh)-s|%BK-0&-wiwe>sF233W=w-63*AUl{aa>{E}uWX|9tw z8uMkyaE=PX%Eim=Bz9=Z_Vdov*lV|Pu^g$ta;r0_0!c+{-0BY_7gpza&%HS%ip*ZG z{TVRN2HKiTdmnj3u^Ftrmf+`D)4;o82=7p`?ODBgBb)B`F(Ty{0)cQoDg5Tcj)Hu+ z!a&41xQj@Z?CvJcJo8M;IUBzep+s{)H;(8Qs)#4E-CvRMW8|F6_AKnku+Lqa*x6mX z66JPTd4u~Hgu>-%nMiZ?yva+iKRBA;dCBicOrE{1F&P0a!r;RTyFA}MJw__Z2dN*} z03)ZbCI~zA=A5*1-b-4}rcE*8le(}!1E=0x$XeqG(q6BM*_B*qGVjFp?b`|OiXps1 zXbb*;qxm!WL5|g`m7zR&O3<`vD}%5Lj~#&!$|hT}KJmm8dcz-w7F8Icx)J7}!H11R z*d%0kr;W{lB})0FgwD7Rui_}h~v_yPx>e%qxz4a?awZJN8}V1 zTS)00qdR!AedETBs!EkA>LWN9a?tCqzpgI5^isW;jm0?ZE=O6=s8OS|neHZ@ST=v? zq^`K4X{fyujw%CR+9RH!emBAzZ2R`Hk?`F3l1Ve2htro<7@64akA)zEcjS2m|U;Bf>gjN(jF-WIE+qOer|QM~^+=mQ$qtnYX_SPCmcu zp!b2r_p7eDN-L_2>OX+(kw!=46c&F?P=^j3=%kZQ(jGJs51C&I`@l<|j~h3RZn@B60J>C&Zj9D&-kYmssH9w#t`33kuE?A%e; z?G&Gn<4?mX*k-}%WJ&tkOjYn`jCcokBzf0dL&OmJl9>v!#}Dx>bcn&UGbq2EFKXGS zpaw9()HG~kUcMz=P*T{GKJ`a58|cH<9Ho- z<_+231(O!^2yF0Yu<|O0ce-%lo9X%IS917+jOjn@Q`TPX$ZqKYUhC|3OGIt~qIiQ* z+>NluUE&dnm5w>)pAejZ>5DJEprJ#DQk^<=xPvZ5>^6hvu0Nx?bsJgy$)1=F(@ze6 z8CmnbDD&~hANv)F%MV&tT){&L7w{sBadXYnHC}8w9d#{VPBd&7(Y}4eF7FevTbZUK zw8syzG83GbdqV)Qn{K*^W1s9uz@3)GUz|FakSXh1z=Z*2*`kZ@k#Ew6&?dCa1SvyV zbH9m-@}jY8d=FFJO9nbuBJV~qpP zd@9M@letGX*_}F-r4=g{W?28>R%dokb6L*J?zcpQ&1LogQS1!Sb>We1_ley)>lYL+ zD!VqYA`@1vT9tUGIW=g|fY{3^k49Q>B)p*CEUztWQVEClyTmd8F}$k9>?QYL>FFop<#_V{x<#~?cPmPqeifiM@(V9 zz?WJWBL#DBh|TiqtFJo6xnAQpUAuN|pibOPfWd~RI(ai>WvW-No*SE$hIlciZKkZ1 znc&34_ej9vnN_eL(V#(|*;WSH12Q;T85w-S(>d0&%kEh8yM`*wv!3ee&HZ`ZZ|&M= z(eUAq5jIJty8ozKw=Ugx-+dW&Z>h#Au)HH-PL98f6^9Yke~4$?uwg?r1wkYZ5n`F& zaKjDeGUoWyQ%@$-U$PjC~0p-|ohIoVsAuUn$7(KxG8wl1NW)mPRt$2Z5r^8GT)>^!H zuCEcH5MQ964?dW$a^$$wbz_~|SBx%nnEpNX*kk&>Q&D}8dxK?(@v4i!ZUwF>5~=DS zb-4+4D+hDy;lqc!vB#w0*+}GU&us)^uMcM6L3Tk8)Rry^bZkF1DCm^K}eX1wtE55-WlXiKdsICQ`LVIE?ii> z^2#fzY;Km3rF~6g!+}$vCbR1W%Ez{@_)A3mpcAj5SPOb~UMv{*?b}yd(0saWJ{|aP zzdbwVyy9t<>FS!-#-ye40z>UzUOULUX5DvlJL`gBfSIOn=J{QtuAoyq3=v_91nbCs z`t;z1+W;%Kwgv>L3ya>EieyKKR>>gcEanI&X3-ngQ^``$w!+IZrdv7cT}+ih%f}RQ zTtLQ-u?K1NXrflFh)R@jFr?@b_1G!MbM(n4#OAX^0|um*j3Ot!boeua@v>!~)7Y`E z(-TkhOG*F13WUwrV=)Qex~y5^h*lt}fE-E))>zY~O`{&K-bIW|uG z*=1|cO$|E6r8^VMgX&Drh-E=?mNWOebLUQa=bd-x=9_P(<65P;+%S1z zj$!My%Qn9OWPs3_!$J;Qda6~kb0&9g7Y7D?0uD<=`=!Dg9k3i z7nZ$EVfj%Ob#DK8A01bqAa}}kX^GE`@&Ff16M1TmF1zfq#DzlS;=_u=qG)p77yKwa z=@+1cOGd#E6(%Z&SP!^QEMVKsLY-w+ty)De+4kGy=JoTyX-R&RhMjOx; zCpK>UiPE4D@eKI2(txf0`?F`# z5*-sM50o?Q$GNj^q@{vT2W0ttJwEM3=Y0qE0ON+_=P+Lq%p8UD*PZ~D;X}a!-t7-mMb7|(X>HGzohbmn9Hr+eA 
zn^kv^YV0o>`}?B$_3K;3c&^tvx1ZR{*wdZo+=tTdzo(}<%XRA1DLpg6Sr`mqodk!1 z__FpZhYg^WDpiW9*;xh_WcJOZG0e6LXE~I?h%OPKZnVMR@(It=u*rB_5!?&z$!yRp zUc9(=DhWoEMcweGz$WK$WfLVJP}tL>p3>#lKf8_C*PF$T-ww^^*;`kFl~lQM<%kVV zT=UI6JyPpQ>vzDjg=Z`(p-sl=rk@O8Vty0T7QZq<^C5TQP1AW=mgi-K5+Wc9Bq2P+ zCbBd53p@QQvta&$^zz-~3CF19_rMQ+XCN(KzmzVoaV0(Q%H7mwd{3Iwe<5vIaW&=2 z{~JwRG&!#7?Z0g!d=)#%yksWb{sjv&N<(%Jug)@t$9{w}Tj(E~wY2p@8$P=UDom~2 z+UJLbk!6lBImH@F@s8nYA|&9ZLGH|mS;*+eP?!Q$rcf8(RloqJCe#UmUo>G3!Fs(Z(Dv81A&X8o_UuFc5> z-OpC_AG}C?|I-GO6zT|vb;cYwRKbAE{H3q0ivqHzj1A_qyao~2WED{o5nlxav+lZe zyQ>`eb15E8XjAZx9lI$GL#sEH^2kB+@y=z{y?b|c$|&y6Cd`FC%Sbi1wilo>HCvBc;p{6gHs(#<|yK3CGo~r%8 z#VYT4?X}zXw?CVt`b~Jquv$kbkh?~AR%JV$tcFb=tjpRJoW+>ngy;0tU;q52JBzTc z+=J8wlTJ`r+0G`TE`)_eguF0j!)|RHI`kj*K#;cg2lcQ8r~2{7v>JKsggf{e+=*;d z_uO-jV&elU$bSWC z4+T7`VNb~asnMfHt456)X>YT<$x?f5fN3dPwrQmT1?VIwO31O}hG!%@5ZY{Sca%0G z3`9nLR7Gi1Rx2Y@SP4dMYXCQVLCz>%KSoM-L}Ne2)kZ=AN3vIi^_C5iOw!}E|n@+pGp?bO*ap3O+CimOKt18o0WEG za0`9(*h2bf<@;3ewi0ynWw+92YZnsMRNRP_8+CfN9j(}~oPPP^7kYNyU@CXZDU?5N ze!6A&O|<^|^=`#@v6N`mtk^8SX&P`leQ7zTg<>UoU!g86&|=!hxyIpLDs8=L;X>KOc?*!{YShIiwP}Fm<;U3!eKF={8BAAfEG7i4Nc*Mb8aHnX)0r-J~$Le#= zd4k?}<2}P}9q~+no~X@UF%OY_%F3jr`p8Q+f!9L3>Nd|~)MpHG;dugHxrMwD5C!6m z4Mzz4h+po?2{_WTF!%h=ezrq&(5#j3QSX+IP$k|Y@^I@%X~*wBQPamSr)@j8IyloW zYgM|CJ|DV<*1xcg2DE#MUhLVMhCOsOU4G7$v}*WwRQ^=&IeX;2+vg)%FmM?aJf*MN82cC92Z%?>|ci4j$0TX35}HRI+RddUfH8niKnE z*msdW$1)4A3sb{}>EiH`9{BG-dVBI&PmLG1*}|1@>%0JW|nf_t>*? z(V=w#2)*PIc&xqKQf@I7@x}A2hw661^cgO4uwDU0Sf@^H+P{B4ZQ8U+@7pL-CQa+C zSh0^S_RiK0`R*Ltnc!N-5boO$%JBwQjRUZF=%I(Sg&gvK+O^%VB|IAJ;1slF%a(eN zOrB%&&|Q^!5G<&?2yRudz>a$H_sAnv*tB=EZZpc@w>B6rd2*FjbY8MDA|3ANKm0Ix z$;m2_N|ny!HKMwJ0g(N*VPLv8A|qs_nm zNMCMTY4LfTKw%AkcL=Q+xsiT)?N932;^8>fPuDJ>Uf1=Z{Q2_JyGy6)bF{DADdkz& zJx0qmEY@XU{yT5pJbm7IWILYISj*;}MBi>*N6iOZMR{}QrE;B4p>8kUN&63YZ?v>G z0E^-1r0l-5?D>h#O~+t(c$&e}A_~fbz4Fs}4_BdATD?h#O~Q_G15@w1k0sZF zn}`!mI6*Ixqh9j|542E_pt^&2Qx8+ILd8DM1gob2veqPFRfaaX01q{#`xg#89dUdp!49=kLSpF zy^JknzS~GM-g{TaOn7?g6MAHYk(;L*=Ls0y zTtW*Me#YjYTl9!vD$ku^F9+hPU|I_s755(UZiwkEcs#)UVan5wfEi4LLD(>P{q@&V z(HjfU--{JbNu8tPl-Olp1ZC=ig%v^<7$=o4yzoMr#TJjMj%|@of8>~F$1Wqgo|}7d zdv^Tpsb$M=SuwBnV>^2Dv)ZCeR%y>8kBnkB(=%e4b9UECYTw^!3&F_1BKR4%mr+}P z-J;Lk7R12^i>lR|R>dvq@g5H~cJ_;4@g%HSnnhhtCZ7y13o#x+sdKtkR&{$_pgIn_ zK^1PDSGVz^`)jB(?>t?xX>g2Wj&!OjT-8&}Sw35FC#uSJIz{pNigjOKe!o&L`o|RF zi+}h3tDbs$fa*M=gIc-abB&c*hFFltg0hox z6d=n1Aa6L-i{9XYk1P%_rtZD>UR{LucPQ@66_2VE$Nf-wYoyzzJo)5MUPwPll_^t3 zU2{zfYY`oOqOQ5-8pR_R?LNFolO~Egc#S0%(VaM85gk@sZp3RdD%Yu`aug(mMf1xq zzg)*%hs$(agLvPHUeMEZhHEUw-AD!L8H49?WVDVJ5+nMV5_n&jM zsET3E#0&e2;$4Oxqi^K zYSWI5Y^l~!^S&|fHPv%$d?$DR%Xh1*A8n)_e!ZtwOi)@2zM7|?z<9Hyn!GUGAt5ZQ z5)tmp%lz5I4`B;bSAwE={`uz>uU)w2I}@QO@E3$2lXu-K9+||kqk~8Ks8OrbYp)$L z@jAqXHt`-DeP#d6;)HNQB5Y8W*F+LJ&xAFWv4q2K1{~of7vxvBW=irO>Im|OYFL-5 zIdhV%(4%;mdtI_*9b4T!=^;syMS-|H19-6-H*Or)(V(M+ne5=H17lve{%<~_3SFNk z!xTuE2j1m&A|-i^T82)0nw7z2s`Smp^oZrAXIg74uwSKAhoWlsk`EQ!oZG4TofXs{ zd;ic$119y;`&!IBt=jaZRw9PqbigowZa27fVDo3=R2QbO&HzcKF4O@tRP2igLMS)p zr7=NRwM=8;YOl`}Mw>qUbgAOlW}f6$4kHyq1{-s2j&9{lo;l(-fyu5R<6a)uC?7-wT9^m~z!xWA}@d$_liN{3Eo%=m+2#cM*GHhl#%3Packm;zP zFDW|dr>8ujy7Yfd-EiMjeU5AMhdm-0LH71JM=kT_rEjn^bz|yo?dG&m+LCpP)b4$| z^=3hKnPFJ;@tVgjSIfRl+vC92XVy6~C(M6c^>`(1D#%_4X_s`YslM5=Ry{i5VO6fv zsa7ZFwtoiuH=>Z)r*q=}@~b-CYsKu-xwHT6>XN%^#)!p!4*YqM9YZf3r1M}v@9Qx< zUghq;|9->bdGso4FM!_!ZsMJUvi9=4cb50EIJv@I8<}>w;UuJUr&3nB)No1@qqSuB zF4^lolFqaJBKp9A1N8zRTwo)Lpc^3)8u(ztjzvzyu{Ojn74dYHj=_z%*0AmW{r7L# zi|abW5_80YbhXlF#-!n*HZMZ^dEzyTd2xju2|w~;bl*1~q@DY}q=Kc`g~`s%F@L}9 z_O{+KqM|Gq+S_N}*TQSeyvKmp zb0&4U@*ZmSWK){4Y#NU 
z+SB|Q`!@{1ycNGI7V1HXurP)ZQ@X!T=YXhb#UB}j)TZq?rtTg)x z+5mr2#^l!IjT=Yw+;hayFHMU3(a)b4+G+pYj1<%h>DOie@l>1DoFK7E zyzYyQhUo6QEm1Ki_A;;#`|;NwVm4I5KT`QZ73hm$>(~ionyD)Upxnoge@d`sE08bV zy*$jHcjx>gy{_%K=*M4v(B{uKU)Grp{+EmQQ4pPfKGD@zr?GwT`Aquy$25x>n9Tmp z`&Z6npXGKe?rYYQ7A#o6_Wg6U)e{ueXHTxwrp&c#)FEDv(ff4LD~>&@Teq%GKwZ7Z z-A@~STCexO9Gmx8DsudZ1Zy~$GUv&iNAtn*3LiJl&Kn^{*dHT%dQNwkhyv*N@k|?w zF_`FvAy|1~;~@5#*nLxZdB!yzsl5(||4Dn6>p6Z)kee*m0s#}*Niy%LNoO|lQ&k`2 z8CmJI#GmD3+vxf8cXJ<;G5rlZLSsKpcBHglK0CIY+YIM$Fn`2uTDEf0U%rzjO^Uml zas0W5Q3m#O;IDP-*4pe6!J;@KomLnyRmDaxyzJ;;AkT};gY1=`ige@k7kfB@kQueC z+xkt+XvK_Pf{a?C+?TzKebi#w%n={;=}>KaB3kvk~0|K2&!GXwpJcEcka_^c}Z)% zPp4|BYP5I%9gM5k$M{`M?xnk%cGr7!_8<6{JqEP3%D`OL863)F z;%nb{bv9|(esEHQPN zx`WDd`$7YM{R=W?o%HX#<#T4eHU9qenRsIKFe}gq5 zebjQ0#>^TO)3JC-aE@A>d>FNyQRG_fA(g$2yHb#QGSS$ufsM^v^$Uv^ch|06^vk}V z>92qH(OG4A2b_%>oYja{KevH4?)YA(Lb;o|${b&g_UDP8GJ^NwNj;|1?IUldcY1%I zgMI-E6T8JG-a0jk>P}Vy6>?S4&%uMQb*Beg_N9SuJx*txQp4iKv%yXV>GR!-3H1St+(nT#fm-5>m}*)+(u5a$1o2%?**eST@ugVJKK}Ys0Hg0_Ahmu zW7L8vgd1*f)|7r7=AKZBt=lZ+h5$s*y*<=*VrJQ&8!JD0vmTYqQ4K6{RHs@! zYTuv}E&VR-Lt^ECbyoZkJ2O5ORw071b4?agu6YgV;M-GqN{i>Td}!C6T^0v0Gz4seP>_xvyZ6tGU+UB1Q9g)_ zAZ_vE!RGuo-}L6fGl8}wLw+)_=7qH{?DS7{e}hr8eX22;`#2nBv(*R$Lvpxq0AXDN z^M=NS1k%{dQGdbV+)fB|gPqj82Y`pf)u?yx-t_UuA8Qu4fZz>bJRS0=o_EIXVdsLU zT^7zWh1ni>DJOzqEJa%_v;!O)czT2bx9jK^RCnB*30jp`G+O>Okdv-?oM#}pe^l9e#PkfhIGdDTaoyqqE@2eOo;15pf;Je58 z7eZ5T?lymnkqVpQX$Nv}9|0di=10R60OpR_4L0#cEe>P3#&Qwib$d}BQuE?@$xLN> zn7YiU#W@%6)gJp=i`S^dtE@>k_+bQSeh1{iHKy7}=Nx6UBNq7-+us^K>>o(BA zXNK$M@}^bo?xy@Hr?X$P!PMl@hE!xfe4_WJV#Pf?`Zzx2{SEIOig^eA-`;hANl`TII>I42NsxFX z$r%n56chzSB`HXh9F<@K{gB{?f=Cb)^b-U@L_|eE1=Jq}Bq&LVf+Wc(aO5bV|9X3v zy}7yFx!t`R!dw*(J2O2U-=3SAuBxs|c%-*jj@#KLnHh;8wnx4$vF zgUvDev)RYq4?iA0RNqNGRqqeFn?(~mn)_O?j|J_R`&rDRxvvGg;!R@FP@H`W>6)Zb z>`f%j-U;Nzrq4hz8?0kGbmX9F{_KeS7Wg6i((~fx}VWSUG5D2a#vsz=68_%P+sod9zZ}rYCi|oAtvX z8+fluXyU8H_SN4P93tK}Zyvv9bJgPX#bzyKUA1(n*Tb)_1^FQlHVJXTO|P{`8~M;h zI(Ce-7hp3Fju1sb6lM|(Bsemlw%fOk>Dc*WwC2JxU;Gf5Y7ZScl)nCT4i!9CjE44q zo!UJ5w01^VIc;^ST(l~+Dc_OS9$KZV<`=g%|Hj3{?L!pI{&mouvl7z5O(c$o{1cbeaC7Yrfnma_7?LcjqZiOV<9x zu}AXg{WfDezvq*eA3t-Pu3o!J8Pi^ub^1y>pI9+*&Pmj&?OC7O{!msf3Dq0weg5<0 zxE#TiN@ykFRErZfu=IqH4s>8i`M&x1Hag_Mp`p%19ffJjo4WsrGF#^MXXFm zyLarS(I1bb-wv&&MQ{Hcr^m0)985p`zKBj((u_fMc;Nph&vLFR3vtd(-p7pQCgCvenPfhqd@?;lU0|ut9bqtZZ;t%-;!m&rC2CV4R?E z`WpzNS+myA+__t6=FF$H-Ylu!H^GeOR?c%eu-d`g!@!dd1Q_x)m*$ zy^u!q8BK#b45cX(KGF;cAVT+r)u?Gyfz5QFLz7SHJw5O~fcb?r9_APELl8+2)Z>xf zbS(QJ`r*XaROi(ilrwJ*%8@+>O`P>V`gG$*RI^}hoew{VvjG?C!@9ghe=m)p1p^jS z&dj-}amz-Oy-7N{zbJ9^Zc8IQ5Jo(Q&dz|I)41TPEN}A zjlG@Q?m8$zS2D_!FW++Z!h7_lj(G53rGEYO#_B-q14jD;2ijBPUMEK0Y}N!{PyB*q zM}sTa`|3}MMLmi=2`Jugq3&rEt%_EOlL4_3yCTD|<V~!7*IzSaF6Qza?8>YS6!tMKj@%`03eJbj< zb3K_!dpU-?c6Ohm;JC2&fM-?OtaqrrNA_sf{%|k}{Gd-MP&&WLR3w85iQ^GvVotM? 
zEChl(eW$tV_fcOxepCr(mvFsrQe@c|RjJlbsv{?lm`T3l{54-I4vDNj{&9+ekFzR$ zOY3&ZJX1u$Nv4(1zV*NL9?q@zocd}n=X4!zJ^soe-aLy+otjIQ_u{kK+l~b$ZHG&i ze2rz<*&HkN_l4qEAjF%H`BtkI{t`UI=KOwfRpR|~=!X-9^QGqxJ6J>A%W-Dmiwu@^ z@Y@7A^XON`(xF|4Y2|0#crU;Vf?p`;xFBN0IpoK%qJvu!j?Y24?#e~GGjG*h#-_dU zkKkMJUTADPQ3mzW63lQH9THlq3su^^w#Ia zy5*(DZ#+WmXwsTppR?kf^0vzw=YLeEocf-bZZBlF5L>e8M_-8I)Wt{Bzv+MRMaMMD z`m+#gRqJVOA=bHW4|THyl9_HkQ`T~O_CVxl&3)s~A?`GH`QP}~ zZ{Ze`CEE-zXjsp%FRTD}6PN7ot0r+Z-u%)@F(#})*m%J=7z8&$uqqH%Asp>bf1R=- zS@9kqUp}l>I2xeN%c3jAE@||=C7XUysnVrX8@FuKL7yTcBNZ#Q73(B)041DbMbM{D zAJw~eZ@uzCL3pY~Th`z~PYr0Q9)7Ntt`|SN_ei&yDpg8V^MM-r8eUr2ZHQXGe!X7V z@SnP3QJ41E%i>bqwr-zSLa8ECsnKIbt1(kY>M>3EXo^DL=l(E9744drJ@j7nwZ%q- zs`Ybo6`L6Aw1sQGRrkLft;?~PNR62@LVxl53S2+IN}<`C4HY~fZ`rq5uXr#|9EMq+ zV+CXbBsb|ErZ>*o!Dwh#~$9$g~i%_8%5vXg4v>h_$>Q!?t*R@|O^9 z`~J$6EjvNQ#KgC-^Ve>2Ns|?bg}Fl+Z@Pl8BqzzK`DGPKW&Xa_Ln-fe-lQ5+LRKhf z2Va03Q^PkTxD<(siqb0gxq&i)@s0jNwFqa75ih0E<1uF;WKFrwP(3zlZ zIN3zLXL#%XpkH3}wed)?hSa)z2j6wwx^=bF&E?BiPzrYDICA7jg0n*os^!arlf#z} z=(bTed|RRYNs~ULmajiSlXs4%HAjA>D+d&H?%s)tH;tm+GrQ1R?Z?L1(*XyTr%z;{ zOBW2?9y}SZ+qp)6!DmUIg`R8NkN#!9TgOhbX_|!)$NcmD^VnxtQ|dUjmG+Q~+bJSa z=uM39Xly4~h+$W}X(9I1eXXf$eRhMu^wVp9Q1;8QR&HilScq-fyHQ`lBQq?-P!9+e zV$8hsc{tCAF33M-S-l#`V5Z)@WA5CiS%-G|=5^o}Y*`qHWl}b7yhy+O*4dJB<4=zq zA!>cy7oYhD6iP3`VLJZGG;SQ8OYemG8p@cKFDKslYluPcf`_9_SUAC54LsJuNg+&J z;W*HAT|RH#JS}S4G=8UM3h+1pPs{M7g;?e=_r_*XlW;wV-TF3Xd*z{PU!ni&zg+vf zf`Vg8D_ZR7Qu`$(ZX$3|B(O9a~sp#EBqxFVXFgVuJ=xSnkv)~V-m(YJ7 z8%b62M$@IWL|tC(NL`-nPQQ-YNDZT#n9cc)wHmIW11C2Vf8o=;CE{8I#=y-?%2e@0 zOJ?DY4|kUirpAXU5^5x7y3%|P@uSi2?P(N>3OuUCi z-)mUS=>M`UOQ}G|J9Szpa~-dN>j`EZ{+`i5W$fvRDQm5(NRhqlzD5{`6%)3w5Hb-! z3~6@nZeIF5`dixi&3Kz9TK4n>97$4O0TC9PPW#%him*0p*sAj8&8P2SBOmKN6kBR3 z#0+oQvZdlUFZv!fg|fp;9g`d}zu6msN?YG}wAox&i?0y-yHce}YWeczS`@|%L-8IU zwepvhTJg<)6mgFE>*QA5hIZ0Qbcnnl;(`(tVzbhvODo<%ugp!W=nv~-bYBpU93rbJ zQ>N&$X1_45;lqdPJlJe#Zx7dqiEbr?{cSV4nSw|uTgyANI0y=H1n**fh7Lb$&F}9V zp@}h1HTqTdciWf7?p0KgE_qbxo`to@%A8y4#zkM|>nE(2QQHsx87CR#%(8KAB=RCv z(YJX{i-uK8IQ{Wg2JZ&EBZOiR5C#&9g9X1w~g8TPiv_)c19Fyos~^otKKMOa4tx^1N@)w!_h zIOJ(-b$Zu3tiV8KKhQtCsLi?&;~dWt+6KXb2lZOXIwn{wAyl%Rj|KwBc`K6P;NvcD=(2Y{ax-RO^*$J_|NpAyAoTix9hc zp{o&B^+r^*4T2R`p#4XYJ#0bo5@wk!)fRWw|oSkrIh`FG#3+m4ap zx9IP|PpCF~84mDBZ@l4L%3j7fckJC@Uosp8mfh?d<$^5}+e%mhXk`3x1G}OBaSZjY|AH=ede#-%_U=AfJYpq9 zri!E%Z?OU?2*M8Vgs21S@UQcsI9UTRgE*ON7l9a6*@ES1_4Z$Uxl8mY$a){lbB|1tFp6^?lbx6r+wek!!J~! 
zyZPPFvF}6F?fv#N{jW)MJn|^DiS9_OJK>;Ov*|lusKeNnl)CVT?ETtToj$>e%8dSK#*x7d^Bl*d zB#y+uUbw@nF`VbeUrVs6W4)WdTRIK#Lf{vTowx?fkVeG;gmtW;gk!fOfX|W9?1AFtXr#2lq{wG$#$nYmL{EA z-~3Sp1@=emikMyf`>N{ItCwOXqW3?v8u^6c4TkZ?!+xVwt~MF9PQ+J`PZq8JK^5zo zPyO)QLcNn5V>*BFyxMhmr+R5hAN9!K2dvqB=bwMBSXpmeq6~B`2@&>fKxv+(MWpUu z#^xziiS7kepNTyjz4xx(&h7`8tau+je#o~M#ojLV+%N-F911{}v6i#vwO)7X(Mgqy zE~_flsHp#!ty)HP>($MgCEhth&2tsIV?ADm1QLSno3$EynLd3MszHN(v}W<0WA6`) zEBK>CapX#0y4O#V%?bpO+pxSe%_{wU3GhZJf61Ot3+D?hoH771oO0om3A6&SPi6}C zS(U1C4a%@}7iBwshB9BeN~Py5qkGrxr;8Ua#tAM4-3MD?5lmQ9rbtO;dKJ;tJ@daV z#4bJPvDY4=Mz7bSce;I`O>C`gVbubkRu5OKPuVhN)3?&5NlS%u7jay*>^buz^_=un z(5n`R*ZNO=js~}Qopwz;tlRnL<1;jW@Q(?K*Qyp{_Xjpg$B`DJ(ly{v&$b(UX^;LG z>^%LpaUJc*xrw%A|6TvznR7F(ThDgQ4g{wG5ak7M5}dsN04VxNL_t*BhC@|h-@5b8 zGwh6&?aFNgPX+w3p@ZC+KJ{!_JbRhSvm>(&tXZ>W+TA-_x5o9x z!f6LZgDr9makU{)+VN z`GA-L^g!17RIFra8nJu`oj7ocB2%WOB6pOaN2@euC+*9(2Mbr#jU>j65j{xJWar&B-s^fP@duFBG^-JjBgiQ_3<>h#x3 z_&=C^J9v6Ojrn>Vb*_A#U75#M^yi>Fv}f%;U5Q!7e~q!JCypPd2T&2uNjQ4hvR$Hg^ujfVWxpI*Lih_1`buxkGb z+Q5+%J6GS9x;fH@TRA*Kanqg+^yk5CwEyT{cH)?p z@?_6T50-n_Z(yjIOQ#cDm`s2EbDx_Kvm+b+`km?xsznE;9*-j~LtOVB?{%bAY*kgc zNL9+p;<8JZFVWw}57Fj*8|keMwRSQRVgjV6ca-W#PJSsbtS6zs$g&t|k2)>FML= zXHh&x6&;vSp)ary8$I_;_F=V&SW&MTR^Dv+s9vQ8^xxKRQl@m7g6Xo(laPg2bfsz( z@mmCoidm__1QB8N>ybotg8bYz2zYV@;)K3+1p=`*6m#a6kW)Fi=WPTO``Ds`2iXof z14VF1+*^s_F$0HHOE~?8;vK7&3Ed}gDBoy;vqr314xc#0?&RO4&Ag&mFytr4suR0v zajg6%-Go#V%KK0#xvfBwjTM&do#kItQMbpHZUzAzDJ+WeRcnY`^>wU&)#SDK4+bE_Fik9@9&l|Z9QQ#+q^+TuRW_GjAfh(8;bCrG2$Cf2kG_fu`&bssvO_rC>$r%zOnM_q$E zSu%M(d3eHtBv#>6E4~Gw18m1>hNMG{XB#&0AHUwcV7}cei!CfPUebw$|I)TOWfbepm?V2I%&l~+ z817rknUKRsO-~UMF&K*TAvsk&?Gvg2pG4o}pD=jjd~%&U_> zcg`-7X@R*+TLx2!HaHZK)vBsnTPDN>@*^(gCgh1y%YA5cQ*~=aMH6TCEO==fQC5fx z0fR}zYQ=N&jjMfmDW|&|&61IC=G)OLK)>Eq6AD;lBj-&JH%+sK)USz4t>00;zpX5t!(}I4*29NYwr zV+7VOUWc(KC5Ty}FB;Ax{yk5^57805!`){x-yoqqHz`Siyape{J$0T?4z;U)W-dNi zY#Nx26l?jdDk98BV&Ai|CI~L`@9`X6z;?!PzES5%XF4f%yaemdoZgOA{*rK+DN;a` zKk>>1Z&%Vy8Jh0iY*&m|!54xZqr$RgL$qPpxHX+mDEjNTW~CWy^6jVg=IwU}4{*hr zGT$?&J5I5n#HR(Em$u*oj?&lJp4JS&8PZv7TRvrzAOXK(X)A<$a7oHQsJ>bG$j6M_ zVJKW^jHQK(@i^pgO8BP2Fwsb|H#0H#+GdWi_e|+J~B_pM?L7c z0a5R-6WAbo*N)RfT#j)lD{{UOAEo^)iJ%JCp__y$;v;vgCM%(Ck-G@8i;}tA$9*%# z5*!b@$~7Q+MxiuQymzx5%|68XreJ#;Mb<%OP$H3frhEh7q_`*@*TwGo|A12 z@M-bg?Ly9p-%@kNE}n_#o>b${qYWm6li?r{R~8}-GDiny8H*Di zj+eM|-n1BKtwIjg?YvfsvyD~AN0EhHcADYqlJ}C6OD&2q?=UXgWmNoUy+=I!>5aE) zey(8i3c;O@!&bgVi~jz$o`s))b;0m%H)Y%Nth~oqC4u*)>K{sH`jOAwp2+9~8E_J5 ziKW(#lutCKLIfSo+p}UOUg!4g>WJi6{W2&=tOPMRc3GYHvIoL8UwWErPD4fFV6uAx z*eCc#uZl%S!b-ko?j`bFmz$kZlwVe%@7;f-E%sj|#`#9Gx{@awSS#~EYy6H>8`N<@ z=iaZf@#pl3+^zU*0F%Fjg!yru)GeOGH_|0ouIa5z?;G zZyfXi`Jy=b&eZ)R4paR+#Pi~cqA!~OX0{oXHA{afyCJ7At?yU6M6R8<4x0OCFlpEe z`Gk*BZ5!X{P!e&0`c&d1`b*?8Dz78)f+X1jJOWON|QVl$9?abKW@(M~ptb^VDKtgDGpxm%t6rLaI5 z*ol30e4N2Q%i4q{LmgTcoOsmBR>jm6{B-kx8x23nK;0K8yCsGB-Q z*-sozHfPO2CHH~w+Y1!G$FY}b3)aeMZ2U#1$0Pli27KcDvkKyDy6FQU#YDWoXP zd%>+M?+*3syAgu59$tzTH*uQiRg|+2AlnE@(lW8_Dr&DDSXGRzw;9e+K z6%KGFH{p|Nx=u^tXzR6}9MP_t6j^f)P^}AVC!B!C&h%)bVj5FcL>9IAfUpfaroLBk zq1IY z5-2Q(LGhVn>hx&qA!Uy@LuNj|#CZQyIJH&je;@_)UQ09Aq+ohg^Eq@F{jI~ojK9#7OTiYEzX@#3#`*ACF$T1^t}n}h+#<`Ivd<5 zUk08L+e6?d2o`Q4dRsUjR?OaU^m2=UdjMgYutqZDBZ4SkAQX}5zTMBL+y0O$boh}( z*CbZ=yrMe@H0#QkV7=Q^ip2@6je0@QL$FZ!>BNXaLr*NBjgCw4CHT~Ot9f&+hN!Fo zk_Y&f=n}JUBj|znf9)!0XG8?Zk{*TTYA?MYVj*A=8#y7Q+q_sM)y!7KXl64G?T(_i z>(NOFhv3w{DR2*xxV2V_sYaq2EBP_neM<3udm>YO!etn;ESaPJBw zB~{4U2VCjCSty@%Mjym_hO-1vr^71I%Hcoh(I!P9Lm(ryL4j-1i*;cr*{{v(mE7-~S)l@Rm#WIHO_67F4#R`>gp!%_ykZjgYu{~STpwrOQeI62lGs)&gP zBBXj0|4#nAhXKuUzZ`%)Cy=Y{O(`BD;+qs01D5+T9vs*r4En-lTiz;5f!yR2F;6#4 
z8)F_n<8_Rh=P;`^=FrJIG&UKAC}DRz(#Wdwb6j09_RaLIjmFvR6%7^Hky8b!I$}hG zvIrIG~XD^lB0Mbb-=nB}bfTp4zGXnuj1xfnVnerF+P3zEZ zeCO^5Hl$3Qv(*u!F~>F-Vp85|#B@N!Gs}16Q-^ONv&BNKRv0}y$eUB4hY%PLoKHhu zO6PYLMidAy6se>_^R#v)B8!rB4|Pm;)dbL;KL_QVsTUJDIfGL zhVM2mQZKJULXNi9F3c;9R8A0+6`Nwi*d=71v6s0#IM}8J%ai|&RJdl6u~#O>_lHJe z-Thixflb!7+}Q3^872m4p%0%er?9f%EU98Jd7NPqJ9W=+wqJXDvK%Zjr)!po*c>G)VFp!+ZexO*zOCQsp@qR z1|y6S7@;pgp~e1JZaqC}pX5~@;7?iQbt#kkQIrjhrx#9*la+odJ!5QNdZfVI86Y+3 zCE9r3yJaDW;QHt=A7i7h6J^K2ZnT(?*5<+*_-yJaaU=C5Mvmw(s$1hI=(=g_^e*h( zW?JXlb3AhR8@3sm{-M-;LBFxAG@ulwb- z`7XS`uYO;6=Bo0}%I56qTOR+qzrCZ~cF9XecmNq*`l7cTag%}B_hpIqyTd$yb>i5CM!58J^_{@g z&HjuB{PO$lUbFj4ht1q`PZfm3>uc=m3LuJvM5JNJt%2nL%dF-nZI+c8GZR%1_^45& zZSf}+Z;QR$AA@gnYb8%7=fvw=2+YphMXd~!TND+VpCW34^b=7qC*r*m+dOXdKSOzE zx7V#2*Y4cs$lOny@EWH8tL7*32Ejj#f(s<8ms-tMR~ziny|N}kKNI<Q9#2($kNTz&wSWm1+aM4%n|6)aD zk?2Maq?!3Q>s2n7Zd`p)aazW17-!&VU;0EGKVvLSTA+MSYIKANne-zs^)_$oM|L>1 zQE*tR%vAjM5!!;&z|H{HQH#r#&fJc-k2lYA>4M*Vhg`_C;lM)BtU~;1>)YC126da?*lv*0{tEFX*x z?#-)QPkrXg;xPa0i9zxRX9Slk!VR0dkO;_?NA7pd?1ct_Z(H=bVkq3M-m?Z>&~->1 zaXSWCa#^AbIP{TocSVB1go_vgRx=IG!;kC)oq7Co5QfgOCj75T^rVV#xEgDWMkhy# z%@|iOsaOoz&0@W2R|-7nr2-?U5W1rVAx8p|V${XZ!9K!>o=JU` z*EcuK4vN>!Vw@0sJbTxCdj}~5uH&h!|CQ~u-@@s*rC&NGD}DJgLp0}n>O-_%+0-5u zY>ED0tsBkEUt(ON_RliaY3Y9}+v24a#YJ-Im$fc$wzua2H5TgIJetWWBC`dW#Cf8M zCR25^Jo>^GquG-0^mgfwr=7007hAy+qYX_5i_p_zwVWudM#{=zKskqt^QV_k&3xS3 zS3_zwjdWTvH=VM(eH~52o)a#O?Oln#<2y;#%2ua4KkEwlb~s2)!Mlk zRCA1WJpXRmYRjiY(G{&%mS706SIr9Tq^XaHYNk<$PNpv;+B9`AAK2;0g3gHMuBHnS z4`e5tb%2yW5TZAs^M1>K9fg^^&0+uC|Uo5e{Qkk(%hRJyMfuf zb4gF1@_6t{^8S^yAy1H7*r7O)o+~R22CULt6~qf-f8Uk%I0uI89{vp(c{udWK4`(; zlelB-Dq-ESeP{Dv%{R_pik`1Td|4ZEJ1_~Q7fzol6g;pftB8s>OA^x`D7r`xyUkgysLH%??y@LZ zTZ-*IxRMb|X>Gq2toHSfx8tLSsf2waVCV$2k|6Z22g!|f=#I)u$bVoXtclU#xv!d8c0s=5sg_|aC8f6Q!VjIUD;I>$16u9rTb z?lAHyFhXfsVl`ow;4vKtzgOp6UE;cL{Ma!)h)XV0x_E}SagsJc22+>cbJh$j9r&2Y1eBZ6k z4I)2yZR~6nX~<;B@p<7L>#}~Ka{d8wDu=h$p2*&Hi0s*m8tHS(%98+xQP|N&I_(bj zMyD)naWQ>=D?a2vQM=P4Z|#Q_KfclznJ{)-%Vi%vtQV!_K&ABullF47z>cg-2uURE z!rc{GAm-JBI#?o$F-K48GndZ|BE=?b1P8P@_O8z@TcGDs-s9*64NgbVX8!cRJ+YE! 
[GIT binary patch: base85-encoded literal data omitted as unrecoverable]
-    announce_snapshot_file(name = name)
-  }
-  fig <- save_png(code)
-
-  expect_snapshot_file(fig, name)
-}
-
-
 test_that("taylor diagram", {
 
   set.seed(1)
@@ -46,7 +17,7 @@ test_that("taylor diagram", {
                          model1 = rnorm(6, 10, 3) + 2,
                          model2 = rnorm(6, 11, 3) + 2)
 
-  expect_plot(
-    "taylor.png",
-    new.taylor(testdata, siteid = 1:3, runid = 1:2))
+  vdiffr::expect_doppelganger(
+    "taylor diagram",
+    function() new.taylor(testdata, siteid = 1:3, runid = 1:2))
 })

From 829d0c2838337657ce5d1e0da5c394b226a48a88 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Sun, 19 Sep 2021 23:09:10 +0200
Subject: [PATCH 2268/2289] add vdiffr to depends

---
 base/qaqc/DESCRIPTION          | 3 ++-
 docker/depends/pecan.depends.R | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index 3137cf2427c..9cf8e723106 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -21,7 +21,8 @@ Suggests:
     PEcAn.SIPNET,
     PEcAn.utils,
     rmarkdown,
-    testthat (>= 3.0.0)
+    testthat (>= 3.0.0),
+    vdiffr (>= 1.0.2)
 License: BSD_3_clause + file LICENSE
 Copyright: Authors
 LazyLoad: yes
diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R
index 2de3d831512..9db487f17eb 100644
--- a/docker/depends/pecan.depends.R
+++ b/docker/depends/pecan.depends.R
@@ -128,6 +128,7 @@ wanted <- c(
 'udunits2',
 'urltools',
 'utils',
+'vdiffr',
 'withr',
 'XML',
 'xtable',

From b4b77be97d90b5242fa04f836543d3ef3784abb8 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Mon, 20 Sep 2021 11:55:05 +0200
Subject: [PATCH 2269/2289] remove declaration of fixed circular deps

---
 Makefile.depends                | 2 +-
 base/utils/DESCRIPTION          | 4 ----
 scripts/generate_dependencies.R | 3 ---
 3 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/Makefile.depends b/Makefile.depends
index 204a53a9b8c..659c1c6a62b 100644
--- a/Makefile.depends
+++ b/Makefile.depends
@@ -5,7 +5,7 @@ $(call depends,base/logger): |
 $(call depends,base/qaqc): | .install/base/logger .install/models/biocro .install/models/ed .install/models/sipnet .install/base/utils
 $(call depends,base/remote): | .install/base/logger
 $(call depends,base/settings): | .install/base/db .install/base/logger .install/base/remote .install/base/utils
-$(call depends,base/utils): | .install/base/logger .install/base/remote .install/modules/emulator
+$(call depends,base/utils): | .install/base/logger .install/base/remote
 $(call depends,base/visualization): | .install/base/db .install/base/logger .install/base/utils
 $(call depends,base/workflow): | 
.install/modules/data.atmosphere .install/modules/data.land .install/base/db .install/base/logger .install/base/remote .install/base/settings .install/modules/uncertainty .install/base/utils $(call depends,modules/allometry): | .install/base/logger .install/base/db diff --git a/base/utils/DESCRIPTION b/base/utils/DESCRIPTION index ae4a09a6111..fdfae602a68 100644 --- a/base/utils/DESCRIPTION +++ b/base/utils/DESCRIPTION @@ -42,10 +42,6 @@ Suggests: data.table, ggplot2, MASS, - PEcAn.data.atmosphere, - PEcAn.data.land, - PEcAn.emulator, - PEcAn.settings, PEcAn.DB, randtoolbox, raster, diff --git a/scripts/generate_dependencies.R b/scripts/generate_dependencies.R index a3569c73d6c..fa4da91f24b 100755 --- a/scripts/generate_dependencies.R +++ b/scripts/generate_dependencies.R @@ -79,9 +79,6 @@ for (name in names(depends)) { for (p in depends[[name]]) { # TEMP HACK: Don't declare known circular deps in utils Suggests if (name == "base/utils" && p == "PEcAn.DB") next - if (name == "base/utils" && p == "PEcAn.settings") next - if (name == "base/utils" && p == "PEcAn.data.atmosphere") next - if (name == "base/utils" && p == "PEcAn.data.land") next x <- paste0(x, " .install/", pecan[[p]]) } cat(x, file = "Makefile.depends", sep = "\n", append = TRUE) From 9af8908a314c627084f81a2b63ac8e1219ad90db Mon Sep 17 00:00:00 2001 From: Chris Black Date: Mon, 20 Sep 2021 12:18:03 +0200 Subject: [PATCH 2270/2289] update Rcheck reference --- base/utils/tests/Rcheck_reference.log | 144 ++------------------------ 1 file changed, 8 insertions(+), 136 deletions(-) diff --git a/base/utils/tests/Rcheck_reference.log b/base/utils/tests/Rcheck_reference.log index a5741040b01..80690eaefd9 100644 --- a/base/utils/tests/Rcheck_reference.log +++ b/base/utils/tests/Rcheck_reference.log @@ -9,9 +9,7 @@ * package encoding: UTF-8 * checking package namespace information ... OK * checking package dependencies ... NOTE -Packages suggested but not available for checking: - 'PEcAn.data.atmosphere', 'PEcAn.data.land', 'PEcAn.settings', - 'PEcAn.DB' +Package suggested but not available for checking: 'PEcAn.DB' * checking if this is a source package ... OK * checking if there is a namespace ... OK * checking for executable files ... OK @@ -27,9 +25,7 @@ Packages suggested but not available for checking: Authors@R field gives no person with name and roles. Authors@R field gives no person with maintainer role, valid email address and non-empty name. -* checking top-level files ... NOTE -Non-standard file/directory found at top level: - ‘scripts’ +* checking top-level files ... OK * checking for left-over files ... OK * checking index information ... OK * checking package subdirectories ... OK @@ -41,8 +37,7 @@ Non-standard file/directory found at top level: * checking whether the namespace can be loaded with stated dependencies ... OK * checking whether the namespace can be unloaded cleanly ... OK * checking loading without being on the library search path ... OK -* checking dependencies in R code ... WARNING -'::' or ':::' import not declared from: ‘PEcAn.uncertainty’ +* checking dependencies in R code ... OK * checking S3 generic/method consistency ... OK * checking replacement functions ... OK * checking foreign function calls ... 
OK @@ -51,60 +46,17 @@ convert.input: no visible binding for global variable ‘settings’ convert.input: no visible binding for global variable ‘id’ convert.input : log_format_df: no visible binding for global variable ‘.’ -get.results: no visible binding for global variable ‘trait.samples’ -get.results: no visible binding for global variable ‘runs.samples’ -get.results: no visible binding for global variable ‘sa.samples’ grid2netcdf: no visible binding for global variable ‘years’ grid2netcdf: no visible binding for global variable ‘yieldarray’ mcmc.list2init: no visible binding for global variable ‘nr’ -plot_data: no visible binding for global variable ‘x’ -plot_data: no visible binding for global variable ‘y’ -plot_data: no visible binding for global variable ‘control’ -plot_data: no visible binding for global variable ‘se’ -read.ensemble.output: no visible binding for global variable - ‘runs.samples’ -read.output: no visible binding for global variable ‘median’ read.sa.output: no visible binding for global variable ‘runs.samples’ -run.write.configs: no visible global function definition for - ‘get.parameter.samples’ -run.write.configs: no visible binding for global variable - ‘trait.samples’ -run.write.configs: no visible binding for global variable ‘sa.samples’ -run.write.configs: no visible binding for global variable - ‘ensemble.samples’ Undefined global functions or variables: - . citation_id control cultivar_id ensemble.samples - get.parameter.samples greenhouse id median n nr runs.samples - sa.samples se settings site_id specie_id statname time trait.samples - trt_id x y years yieldarray -Consider adding - importFrom("stats", "median", "time") -to your NAMESPACE file. + . nr runs.samples settings years yieldarray * checking Rd files ... OK * checking Rd metadata ... OK -* checking Rd line widths ... NOTE -Rd file 'retry.func.Rd': - \examples lines wider than 100 characters: - ncdf4::nc_open('https://thredds.daac.ornl.gov/thredds/dodsC/ornldaac/1220/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4'), - -These lines will be truncated in the PDF manual. -* checking Rd cross-references ... WARNING -Unknown packages ‘PEcAn.priors’, ‘PEcAn.benchmark’ in Rd xrefs -Missing link or links in documentation object 'download.file.Rd': - ‘method’ - -Missing link or links in documentation object 'get.results.Rd': - ‘read.settings’ - -Missing link or links in documentation object 'standard_vars.Rd': - ‘[udunits2]{udunits}’ - -Missing link or links in documentation object 'write.ensemble.configs.Rd': - ‘write.config.ED’ - -See section 'Cross-references' in the 'Writing R Extensions' manual. - +* checking Rd line widths ... OK +* checking Rd cross-references ... OK * checking for missing documentation entries ... WARNING Undocumented code objects: ‘trait.dictionary’ @@ -114,91 +66,11 @@ All user-level objects in a package should have documentation entries. See chapter ‘Writing R documentation files’ in the ‘Writing R Extensions’ manual. * checking for code/documentation mismatches ... OK -* checking Rd \usage sections ... 
WARNING -Undocumented arguments in documentation object 'convert.input' - ‘dbparms’ - -Undocumented arguments in documentation object 'ensemble.filename' - ‘settings’ ‘prefix’ ‘suffix’ ‘all.var.yr’ ‘ensemble.id’ ‘variable’ - ‘start.year’ ‘end.year’ - -Undocumented arguments in documentation object 'full.path' - ‘folder’ - -Undocumented arguments in documentation object 'get.ensemble.samples' - ‘...’ - -Undocumented arguments in documentation object 'get.model.output' - ‘settings’ - -Undocumented arguments in documentation object 'get.results' - ‘sa.ensemble.id’ ‘ens.ensemble.id’ ‘variable’ ‘start.year’ ‘end.year’ - -Undocumented arguments in documentation object 'grid2netcdf' - ‘gdata’ ‘date’ ‘outfile’ -Documented arguments not in \usage in documentation object 'grid2netcdf': - ‘grid.data’ - -Undocumented arguments in documentation object 'logger.debug' - ‘...’ - -Undocumented arguments in documentation object 'mstmipvar' - ‘silent’ - -Undocumented arguments in documentation object 'plot_data' - ‘color’ - -Undocumented arguments in documentation object 'r2bugs.distributions' - ‘direction’ - -Undocumented arguments in documentation object 'read.ensemble.output' - ‘variable’ ‘ens.run.ids’ -Documented arguments not in \usage in documentation object 'read.ensemble.output': - ‘variables’ - -Undocumented arguments in documentation object 'read.sa.output' - ‘variable’ ‘sa.run.ids’ -Documented arguments not in \usage in documentation object 'read.sa.output': - ‘variables’ - -Undocumented arguments in documentation object 'retry.func' - ‘isError’ - -Undocumented arguments in documentation object 'run.write.configs' - ‘settings’ ‘overwrite’ -Documented arguments not in \usage in documentation object 'run.write.configs': - ‘model’ - -Undocumented arguments in documentation object 'seconds_in_year' - ‘...’ - -Undocumented arguments in documentation object 'sensitivity.filename' - ‘settings’ ‘prefix’ ‘suffix’ ‘all.var.yr’ ‘pft’ ‘ensemble.id’ - ‘variable’ ‘start.year’ ‘end.year’ - -Undocumented arguments in documentation object 'write.ensemble.configs' - ‘model’ ‘write.to.db’ -Documented arguments not in \usage in documentation object 'write.ensemble.configs': - ‘write.config’ - -Functions with \usage entries need to have the appropriate \alias -entries, and all their arguments documented. -The \usage entries must correspond to syntactically valid R code. -See chapter ‘Writing R documentation files’ in the ‘Writing R -Extensions’ manual. +* checking Rd \usage sections ... OK * checking Rd contents ... WARNING -Argument items with no description in Rd object 'bugs.rdist': - ‘n’ - Argument items with no description in Rd object 'get.sa.sample.list': ‘env’ -Argument items with no description in Rd object 'grid2netcdf': - ‘grid.data’ - -Argument items with no description in Rd object 'theme_border': - ‘type’ ‘colour’ ‘size’ ‘linetype’ - * checking for unstated dependencies in examples ... OK * checking contents of ‘data’ directory ... OK * checking data for non-ASCII characters ... OK @@ -210,7 +82,7 @@ Argument items with no description in Rd object 'theme_border': * checking for detritus in the temp directory ... OK * DONE -Status: 5 WARNINGs, 5 NOTEs +Status: 2 WARNINGs, 3 NOTEs See ‘/tmp/Rtmp6InlXl/PEcAn.utils.Rcheck/00check.log’ for details. 
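
For context on the vdiffr switch above: vdiffr::expect_doppelganger() renders a figure to SVG and compares it against a snapshot stored under tests/testthat/_snaps/, which is what replaces the removed save_png()/expect_snapshot_file() helper. A minimal standalone sketch of the pattern, assuming testthat >= 3.0.4 and vdiffr >= 1.0.2 as pinned later in this series (the test name and plotting code here are invented for illustration, not taken from the PEcAn sources):

    # Illustrative only; not part of any patch in this series.
    library(testthat)

    test_that("example plot is stable", {
      # Writes an SVG snapshot on the first run; later runs fail if the
      # newly rendered SVG differs from the stored one.
      vdiffr::expect_doppelganger(
        title = "example plot",                   # names the snapshot file
        fig = function() plot(1:10, (1:10)^2))    # plotting function or ggplot
    })

Wrapping the plot call in function(), as the updated taylor-diagram test does for new.taylor(), defers drawing until vdiffr has opened its SVG device.
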
From cda904297a90b3ba080d8b28b94fa3538c147564 Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 20 Sep 2021 20:27:43 +0530 Subject: [PATCH 2271/2289] move sensitivity.R from utils to uncertainty --- base/utils/R/sensitivity.R | 340 ------------------------------------- 1 file changed, 340 deletions(-) diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R index c9e0bc9aa9e..8b137891791 100644 --- a/base/utils/R/sensitivity.R +++ b/base/utils/R/sensitivity.R @@ -1,341 +1 @@ -#------------------------------------------------------------------------------- -# Copyright (c) 2012 University of Illinois, NCSA. -# All rights reserved. This program and the accompanying materials -# are made available under the terms of the -# University of Illinois/NCSA Open Source License -# which accompanies this distribution, and is available at -# http://opensource.ncsa.illinois.edu/license.html -#------------------------------------------------------------------------------- -##' Reads output of sensitivity analysis runs -##' -##' -##' @title Read Sensitivity Analysis output -##' @return dataframe with one col per quantile analysed and one row per trait, -##' each cell is a list of AGB over time -##' @param traits model parameters included in the sensitivity analysis -##' @param quantiles quantiles selected for sensitivity analysis -##' @param pecandir specifies where pecan writes its configuration files -##' @param outdir directory with model output to use in sensitivity analysis -##' @param pft.name name of PFT used in sensitivity analysis (Optional) -##' @param start.year first year to include in sensitivity analysis -##' @param end.year last year to include in sensitivity analysis -##' @param variable variables to be read from model output -##' @param per.pft flag to determine whether we want SA on pft-specific variables -##' @param sa.run.ids list of run ids to read. 
-##' If NULL, will look in `pecandir` for a file named `samples.Rdata` -##' and read from that -##' @export -##' @importFrom magrittr %>% -##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze, Istem Fer -#--------------------------------------------------------------------------------------------------# -##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze -read.sa.output <- function(traits, quantiles, pecandir, outdir, pft.name = "", - start.year, end.year, variable, sa.run.ids = NULL, per.pft = FALSE) { - - - if (is.null(sa.run.ids)) { - samples.file <- file.path(pecandir, "samples.Rdata") - if (file.exists(samples.file)) { - load(samples.file) - sa.run.ids <- runs.samples$sa - } else { - PEcAn.logger::logger.error(samples.file, "not found, this file is required by the read.sa.output function") - } - } - - sa.output <- matrix(nrow = length(quantiles), - ncol = length(traits), - dimnames = list(quantiles, traits)) - - expr <- variable$expression - variables <- variable$variables - - for(trait in traits){ - for(quantile in quantiles){ - run.id <- sa.run.ids[[pft.name]][quantile, trait] - - for(var in seq_along(variables)){ - # if SA is requested on a variable available per pft, pass pft.name to read.output - # so that it only returns values for that pft - pass_pft <- switch(per.pft + 1, NULL, pft.name) - out.tmp <- read.output(runid = run.id, outdir = file.path(outdir, run.id), - start.year = start.year, end.year = end.year, - variables = variables[var], - pft.name = pass_pft) - assign(variables[var], out.tmp[[variables[var]]]) - } - - # derivation - out <- eval(parse(text = expr)) - - sa.output[quantile, trait] <- mean(out, na.rm=TRUE) - - } ## end loop over quantiles - PEcAn.logger::logger.info("reading sensitivity analysis output for model run at ", quantiles, "quantiles of trait", trait) - } ## end loop over traits - sa.output <- as.data.frame(sa.output) - return(sa.output) -} # read.sa.output - - -##' Write sensitivity analysis config files -##' -##' Writes config files for use in sensitivity analysis. -##' -##' @param defaults named list with default parameter values -##' @param quantile.samples list of lists supplied by \link{get.sa.samples} -##' @param settings list of settings -##' @param model name of model to be run -##' @param clean logical: Delete any existing contents of the directory specified by \code{settings$rundir} before writing to it? -##' @param write.to.db logical: Record this run to BETY? If TRUE, uses connection settings specified in \code{settings$database} -##' -##' @return list, containing $runs = data frame of runids, and $ensemble.id = the ensemble ID for these runs. 
Also writes sensitivity analysis configuration files as a side effect -##' @export -##' @author David LeBauer, Carl Davidson -write.sa.configs <- function(defaults, quantile.samples, settings, model, - clean = FALSE, write.to.db = TRUE) { - scipen <- getOption("scipen") - options(scipen = 12) - my.write.config <- paste("write.config.", model, sep = "") - - if (write.to.db) { - con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) - if (inherits(con, "try-error")) { - con <- NULL - } else { - on.exit(PEcAn.DB::db.close(con), add = TRUE) - } - } else { - con <- NULL - } - - # Get the workflow id - if ("workflow" %in% names(settings)) { - workflow.id <- settings$workflow$id - } else { - workflow.id <- -1 - } - - # find all inputs that have an id - inputs <- names(settings$run$inputs) - inputs <- inputs[grepl(".id$", inputs)] - - runs <- data.frame() - - # Reading the site.pft specific tags from xml - site.pfts.vec <- settings$run$site$site.pft %>% unlist %>% as.character - - if(!is.null(site.pfts.vec)){ - # find the name of pfts defined in the body of pecan.xml - defined.pfts <- settings$pfts %>% purrr::map('name') %>% unlist %>% as.character - # subset ensemble samples based on the pfts that are specified in the site and they are also sampled from. - if (length(which(site.pfts.vec %in% defined.pfts)) > 0 ) - quantile.samples <- quantile.samples [site.pfts.vec[ which(site.pfts.vec %in% defined.pfts) ]] - # warn if there is a pft specified in the site but it's not defined in the pecan xml. - if (length(which(!(site.pfts.vec %in% defined.pfts)))>0) - PEcAn.logger::logger.warn(paste0("The following pfts are specified for the siteid ", settings$run$site$id ," but they are not defined as a pft in pecan.xml:", - site.pfts.vec[which(!(site.pfts.vec %in% defined.pfts))])) - } - - - ## write median run - MEDIAN <- "50" - median.samples <- list() - for (i in seq_along(quantile.samples)) { - median.samples[[i]] <- quantile.samples[[i]][MEDIAN, , drop=FALSE] - } - names(median.samples) <- names(quantile.samples) - - if (!is.null(con)) { - ensemble.id <- PEcAn.DB::db.query(paste0( - "INSERT INTO ensembles (runtype, workflow_id) ", - "VALUES ('sensitivity analysis', ", format(workflow.id, scientific = FALSE), ") ", - "RETURNING id"), con = con)[['id']] - - paramlist <- paste0("quantile=MEDIAN,trait=all,pft=", - paste(lapply(settings$pfts, function(x) x[["name"]]), sep = ",")) - run.id <- PEcAn.DB::db.query(paste0("INSERT INTO runs ", - "(model_id, site_id, start_time, finish_time, outdir, ensemble_id, parameter_list) ", - "values ('", - settings$model$id, "', '", - settings$run$site$id, "', '", - settings$run$start.date, "', '", - settings$run$end.date, "', '", - settings$run$outdir, "', ", - ensemble.id, ", '", - paramlist, "') ", - "RETURNING id"), con = con)[['id']] - - # associate posteriors with ensembles - for (pft in defaults) { - PEcAn.DB::db.query(paste0( - "INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) ", - "values (", pft$posteriorid, ", ", ensemble.id, ")"), con = con) - } - - # associate inputs with runs - if (!is.null(inputs)) { - for (x in inputs) { - PEcAn.DB::db.query(paste0( - "INSERT INTO inputs_runs (input_id, run_id) ", - "values (", settings$run$inputs[[x]], ", ", run.id, ")"), con = con) - } - } - } else { - run.id <- get.run.id("SA", "median") - ensemble.id <- NA - } - medianrun <- run.id - - # create folders (cleaning up old ones if needed) - if (clean) { - unlink(file.path(settings$rundir, run.id)) - unlink(file.path(settings$modeloutdir, run.id)) - } 
- dir.create(file.path(settings$rundir, run.id), recursive = TRUE) - dir.create(file.path(settings$modeloutdir, run.id), recursive = TRUE) - - # write run information to disk TODO need to print list of pft names and trait - # names - cat("runtype : sensitivity analysis\n", - "workflow id : ", workflow.id, "\n", - "ensemble id : ", ensemble.id, "\n", - "pft name : ALL PFT", "\n", - "quantile : MEDIAN\n", - "trait : ALL TRAIT", "\n", - "run id : ", run.id, "\n", - "model : ", model, "\n", - "model id : ", settings$model$id, "\n", - "site : ", settings$run$site$name, "\n", - "site id : ", settings$run$site$id, "\n", - "met data : ", settings$run$site$met, "\n", - "start date : ", settings$run$start.date, "\n", - "end date : ", settings$run$end.date, "\n", - "hostname : ", settings$host$name, "\n", - "rundir : ", file.path(settings$host$rundir, run.id), "\n", - "outdir : ", file.path(settings$host$outdir, run.id), "\n", - file = file.path(settings$rundir, run.id, "README.txt"), - sep = "") - - - # I check to make sure the path under the met is a list. if it's specified what met needs to be used in 'met.id' under sensitivity analysis of pecan xml we used that otherwise, I use the first met. - if (is.list(settings$run$inputs$met$path)){ - # This checks for met.id tag in the settings under sensitivity analysis - if it's not there it creates it. Then it's gonna use what it created. - if (is.null(settings$sensitivity.analysis$met.id)) settings$sensitivity.analysis$met.id <- 1 - - settings$run$inputs$met$path <- settings$run$inputs$met$path[[settings$sensitivity.analysis$met.id]] - - } - - - # write configuration - do.call(my.write.config, args = list(defaults = defaults, - trait.values = median.samples, - settings = settings, - run.id = run.id)) - cat(run.id, file = file.path(settings$rundir, "runs.txt"), sep = "\n", append = TRUE) - - ## loop over pfts - runs <- list() - for (i in seq_along(names(quantile.samples))) { - pftname <- names(quantile.samples)[i] - if (pftname == "env") { - next - } - - traits <- colnames(quantile.samples[[i]]) - quantiles.str <- rownames(quantile.samples[[i]]) - - runs[[pftname]] <- data.frame() - - ## loop over variables - for (trait in traits) { - for (quantile.str in quantiles.str) { - if (quantile.str != MEDIAN) { - quantile <- as.numeric(quantile.str) / 100 - trait.samples <- median.samples - trait.samples[[i]][trait] <- quantile.samples[[i]][quantile.str, trait, drop=FALSE] - - if (!is.null(con)) { - paramlist <- paste0("quantile=", quantile.str, ",trait=", trait, ",pft=", pftname) - insert_result <- PEcAn.DB::db.query(paste0("INSERT INTO runs (model_id, site_id, start_time, finish_time, outdir, ensemble_id, parameter_list) values ('", - settings$model$id, "', '", - settings$run$site$id, "', '", - settings$run$start.date, "', '", - settings$run$end.date, "', '", - settings$run$outdir, "', ", - ensemble.id, ", '", - paramlist, "') RETURNING id"), con = con) - run.id <- insert_result[["id"]] - - # associate posteriors with ensembles - for (pft in defaults) { - PEcAn.DB::db.query(paste0("INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) values (", - pft$posteriorid, ", ", - ensemble.id, ");"), con = con) - } - - # associate inputs with runs - if (!is.null(inputs)) { - for (x in inputs) { - PEcAn.DB::db.query(paste0("INSERT INTO inputs_runs (input_id, run_id) ", - "values (", settings$run$inputs[[x]], ", ", run.id, ");"), - con = con) - } - } - } else { - run.id <- get.run.id("SA", - round(quantile, 3), - trait = trait, - pft.name = 
names(trait.samples)[i]) - } - runs[[pftname]][quantile.str, trait] <- run.id - - # create folders (cleaning up old ones if needed) - if (clean) { - unlink(file.path(settings$rundir, run.id)) - unlink(file.path(settings$modeloutdir, run.id)) - } - dir.create(file.path(settings$rundir, run.id), recursive = TRUE) - dir.create(file.path(settings$modeloutdir, run.id), recursive = TRUE) - - # write run information to disk - cat("runtype : sensitivity analysis\n", - "workflow id : ", workflow.id, "\n", - "ensemble id : ", ensemble.id, "\n", - "pft name : ", names(trait.samples)[i], "\n", - "quantile : ", quantile.str, "\n", - "trait : ", trait, "\n", - "run id : ", run.id, "\n", - "model : ", model, "\n", - "model id : ", settings$model$id, "\n", - "site : ", settings$run$site$name, "\n", - "site id : ", settings$run$site$id, "\n", - "met data : ", settings$run$site$met, "\n", - "start date : ", settings$run$start.date, "\n", - "end date : ", settings$run$end.date, "\n", - "hostname : ", settings$host$name, "\n", - "rundir : ", file.path(settings$host$rundir, run.id), "\n", - "outdir : ", file.path(settings$host$outdir, run.id), "\n", - file = file.path(settings$rundir, run.id, "README.txt"), - sep = "") - - - # write configuration - do.call(my.write.config, args = list(defaults = defaults, - trait.values = trait.samples, - settings = settings, - run.id)) - cat(run.id, file = file.path(settings$rundir, "runs.txt"), sep = "\n", - append = TRUE) - } else { - runs[[pftname]][MEDIAN, trait] <- medianrun - } - } - } - } - - options(scipen = scipen) - return(invisible(list(runs = runs, ensemble.id = ensemble.id))) -} # write.sa.configs From 38347511062c708e7155ad2b0a68a9498719660b Mon Sep 17 00:00:00 2001 From: Shashank Singh <73598347+moki1202@users.noreply.github.com> Date: Mon, 20 Sep 2021 20:36:40 +0530 Subject: [PATCH 2272/2289] Create sensitivity.R --- modules/uncertainty/R/sensitivity.R | 341 ++++++++++++++++++++++++++++ 1 file changed, 341 insertions(+) create mode 100644 modules/uncertainty/R/sensitivity.R diff --git a/modules/uncertainty/R/sensitivity.R b/modules/uncertainty/R/sensitivity.R new file mode 100644 index 00000000000..c9e0bc9aa9e --- /dev/null +++ b/modules/uncertainty/R/sensitivity.R @@ -0,0 +1,341 @@ +#------------------------------------------------------------------------------- +# Copyright (c) 2012 University of Illinois, NCSA. +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the +# University of Illinois/NCSA Open Source License +# which accompanies this distribution, and is available at +# http://opensource.ncsa.illinois.edu/license.html +#------------------------------------------------------------------------------- + +##' Reads output of sensitivity analysis runs +##' +##' +##' @title Read Sensitivity Analysis output +##' @return dataframe with one col per quantile analysed and one row per trait, +##' each cell is a list of AGB over time +##' @param traits model parameters included in the sensitivity analysis +##' @param quantiles quantiles selected for sensitivity analysis +##' @param pecandir specifies where pecan writes its configuration files +##' @param outdir directory with model output to use in sensitivity analysis +##' @param pft.name name of PFT used in sensitivity analysis (Optional) +##' @param start.year first year to include in sensitivity analysis +##' @param end.year last year to include in sensitivity analysis +##' @param variable variables to be read from model output +##' @param per.pft flag to determine whether we want SA on pft-specific variables +##' @param sa.run.ids list of run ids to read. +##' If NULL, will look in `pecandir` for a file named `samples.Rdata` +##' and read from that +##' @export +##' @importFrom magrittr %>% +##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze, Istem Fer +#--------------------------------------------------------------------------------------------------# +##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze +read.sa.output <- function(traits, quantiles, pecandir, outdir, pft.name = "", + start.year, end.year, variable, sa.run.ids = NULL, per.pft = FALSE) { + + + if (is.null(sa.run.ids)) { + samples.file <- file.path(pecandir, "samples.Rdata") + if (file.exists(samples.file)) { + load(samples.file) + sa.run.ids <- runs.samples$sa + } else { + PEcAn.logger::logger.error(samples.file, "not found, this file is required by the read.sa.output function") + } + } + + sa.output <- matrix(nrow = length(quantiles), + ncol = length(traits), + dimnames = list(quantiles, traits)) + + expr <- variable$expression + variables <- variable$variables + + for(trait in traits){ + for(quantile in quantiles){ + run.id <- sa.run.ids[[pft.name]][quantile, trait] + + for(var in seq_along(variables)){ + # if SA is requested on a variable available per pft, pass pft.name to read.output + # so that it only returns values for that pft + pass_pft <- switch(per.pft + 1, NULL, pft.name) + out.tmp <- read.output(runid = run.id, outdir = file.path(outdir, run.id), + start.year = start.year, end.year = end.year, + variables = variables[var], + pft.name = pass_pft) + assign(variables[var], out.tmp[[variables[var]]]) + } + + # derivation + out <- eval(parse(text = expr)) + + sa.output[quantile, trait] <- mean(out, na.rm=TRUE) + + } ## end loop over quantiles + PEcAn.logger::logger.info("reading sensitivity analysis output for model run at ", quantiles, "quantiles of trait", trait) + } ## end loop over traits + sa.output <- as.data.frame(sa.output) + return(sa.output) +} # read.sa.output + + +##' Write sensitivity analysis config files +##' +##' Writes config files for use in sensitivity analysis. 
+##' +##' @param defaults named list with default parameter values +##' @param quantile.samples list of lists supplied by \link{get.sa.samples} +##' @param settings list of settings +##' @param model name of model to be run +##' @param clean logical: Delete any existing contents of the directory specified by \code{settings$rundir} before writing to it? +##' @param write.to.db logical: Record this run to BETY? If TRUE, uses connection settings specified in \code{settings$database} +##' +##' @return list, containing $runs = data frame of runids, and $ensemble.id = the ensemble ID for these runs. Also writes sensitivity analysis configuration files as a side effect +##' @export +##' @author David LeBauer, Carl Davidson +write.sa.configs <- function(defaults, quantile.samples, settings, model, + clean = FALSE, write.to.db = TRUE) { + scipen <- getOption("scipen") + options(scipen = 12) + my.write.config <- paste("write.config.", model, sep = "") + + if (write.to.db) { + con <- try(PEcAn.DB::db.open(settings$database$bety), silent = TRUE) + if (inherits(con, "try-error")) { + con <- NULL + } else { + on.exit(PEcAn.DB::db.close(con), add = TRUE) + } + } else { + con <- NULL + } + + # Get the workflow id + if ("workflow" %in% names(settings)) { + workflow.id <- settings$workflow$id + } else { + workflow.id <- -1 + } + + # find all inputs that have an id + inputs <- names(settings$run$inputs) + inputs <- inputs[grepl(".id$", inputs)] + + runs <- data.frame() + + # Reading the site.pft specific tags from xml + site.pfts.vec <- settings$run$site$site.pft %>% unlist %>% as.character + + if(!is.null(site.pfts.vec)){ + # find the name of pfts defined in the body of pecan.xml + defined.pfts <- settings$pfts %>% purrr::map('name') %>% unlist %>% as.character + # subset ensemble samples based on the pfts that are specified in the site and they are also sampled from. + if (length(which(site.pfts.vec %in% defined.pfts)) > 0 ) + quantile.samples <- quantile.samples [site.pfts.vec[ which(site.pfts.vec %in% defined.pfts) ]] + # warn if there is a pft specified in the site but it's not defined in the pecan xml. 
+ if (length(which(!(site.pfts.vec %in% defined.pfts)))>0) + PEcAn.logger::logger.warn(paste0("The following pfts are specified for the siteid ", settings$run$site$id ," but they are not defined as a pft in pecan.xml:", + site.pfts.vec[which(!(site.pfts.vec %in% defined.pfts))])) + } + + + ## write median run + MEDIAN <- "50" + median.samples <- list() + for (i in seq_along(quantile.samples)) { + median.samples[[i]] <- quantile.samples[[i]][MEDIAN, , drop=FALSE] + } + names(median.samples) <- names(quantile.samples) + + if (!is.null(con)) { + ensemble.id <- PEcAn.DB::db.query(paste0( + "INSERT INTO ensembles (runtype, workflow_id) ", + "VALUES ('sensitivity analysis', ", format(workflow.id, scientific = FALSE), ") ", + "RETURNING id"), con = con)[['id']] + + paramlist <- paste0("quantile=MEDIAN,trait=all,pft=", + paste(lapply(settings$pfts, function(x) x[["name"]]), sep = ",")) + run.id <- PEcAn.DB::db.query(paste0("INSERT INTO runs ", + "(model_id, site_id, start_time, finish_time, outdir, ensemble_id, parameter_list) ", + "values ('", + settings$model$id, "', '", + settings$run$site$id, "', '", + settings$run$start.date, "', '", + settings$run$end.date, "', '", + settings$run$outdir, "', ", + ensemble.id, ", '", + paramlist, "') ", + "RETURNING id"), con = con)[['id']] + + # associate posteriors with ensembles + for (pft in defaults) { + PEcAn.DB::db.query(paste0( + "INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) ", + "values (", pft$posteriorid, ", ", ensemble.id, ")"), con = con) + } + + # associate inputs with runs + if (!is.null(inputs)) { + for (x in inputs) { + PEcAn.DB::db.query(paste0( + "INSERT INTO inputs_runs (input_id, run_id) ", + "values (", settings$run$inputs[[x]], ", ", run.id, ")"), con = con) + } + } + } else { + run.id <- get.run.id("SA", "median") + ensemble.id <- NA + } + medianrun <- run.id + + # create folders (cleaning up old ones if needed) + if (clean) { + unlink(file.path(settings$rundir, run.id)) + unlink(file.path(settings$modeloutdir, run.id)) + } + dir.create(file.path(settings$rundir, run.id), recursive = TRUE) + dir.create(file.path(settings$modeloutdir, run.id), recursive = TRUE) + + # write run information to disk TODO need to print list of pft names and trait + # names + cat("runtype : sensitivity analysis\n", + "workflow id : ", workflow.id, "\n", + "ensemble id : ", ensemble.id, "\n", + "pft name : ALL PFT", "\n", + "quantile : MEDIAN\n", + "trait : ALL TRAIT", "\n", + "run id : ", run.id, "\n", + "model : ", model, "\n", + "model id : ", settings$model$id, "\n", + "site : ", settings$run$site$name, "\n", + "site id : ", settings$run$site$id, "\n", + "met data : ", settings$run$site$met, "\n", + "start date : ", settings$run$start.date, "\n", + "end date : ", settings$run$end.date, "\n", + "hostname : ", settings$host$name, "\n", + "rundir : ", file.path(settings$host$rundir, run.id), "\n", + "outdir : ", file.path(settings$host$outdir, run.id), "\n", + file = file.path(settings$rundir, run.id, "README.txt"), + sep = "") + + + # I check to make sure the path under the met is a list. if it's specified what met needs to be used in 'met.id' under sensitivity analysis of pecan xml we used that otherwise, I use the first met. + if (is.list(settings$run$inputs$met$path)){ + # This checks for met.id tag in the settings under sensitivity analysis - if it's not there it creates it. Then it's gonna use what it created. 
+ if (is.null(settings$sensitivity.analysis$met.id)) settings$sensitivity.analysis$met.id <- 1 + + settings$run$inputs$met$path <- settings$run$inputs$met$path[[settings$sensitivity.analysis$met.id]] + + } + + + # write configuration + do.call(my.write.config, args = list(defaults = defaults, + trait.values = median.samples, + settings = settings, + run.id = run.id)) + cat(run.id, file = file.path(settings$rundir, "runs.txt"), sep = "\n", append = TRUE) + + ## loop over pfts + runs <- list() + for (i in seq_along(names(quantile.samples))) { + pftname <- names(quantile.samples)[i] + if (pftname == "env") { + next + } + + traits <- colnames(quantile.samples[[i]]) + quantiles.str <- rownames(quantile.samples[[i]]) + + runs[[pftname]] <- data.frame() + + ## loop over variables + for (trait in traits) { + for (quantile.str in quantiles.str) { + if (quantile.str != MEDIAN) { + quantile <- as.numeric(quantile.str) / 100 + trait.samples <- median.samples + trait.samples[[i]][trait] <- quantile.samples[[i]][quantile.str, trait, drop=FALSE] + + if (!is.null(con)) { + paramlist <- paste0("quantile=", quantile.str, ",trait=", trait, ",pft=", pftname) + insert_result <- PEcAn.DB::db.query(paste0("INSERT INTO runs (model_id, site_id, start_time, finish_time, outdir, ensemble_id, parameter_list) values ('", + settings$model$id, "', '", + settings$run$site$id, "', '", + settings$run$start.date, "', '", + settings$run$end.date, "', '", + settings$run$outdir, "', ", + ensemble.id, ", '", + paramlist, "') RETURNING id"), con = con) + run.id <- insert_result[["id"]] + + # associate posteriors with ensembles + for (pft in defaults) { + PEcAn.DB::db.query(paste0("INSERT INTO posteriors_ensembles (posterior_id, ensemble_id) values (", + pft$posteriorid, ", ", + ensemble.id, ");"), con = con) + } + + # associate inputs with runs + if (!is.null(inputs)) { + for (x in inputs) { + PEcAn.DB::db.query(paste0("INSERT INTO inputs_runs (input_id, run_id) ", + "values (", settings$run$inputs[[x]], ", ", run.id, ");"), + con = con) + } + } + } else { + run.id <- get.run.id("SA", + round(quantile, 3), + trait = trait, + pft.name = names(trait.samples)[i]) + } + runs[[pftname]][quantile.str, trait] <- run.id + + # create folders (cleaning up old ones if needed) + if (clean) { + unlink(file.path(settings$rundir, run.id)) + unlink(file.path(settings$modeloutdir, run.id)) + } + dir.create(file.path(settings$rundir, run.id), recursive = TRUE) + dir.create(file.path(settings$modeloutdir, run.id), recursive = TRUE) + + # write run information to disk + cat("runtype : sensitivity analysis\n", + "workflow id : ", workflow.id, "\n", + "ensemble id : ", ensemble.id, "\n", + "pft name : ", names(trait.samples)[i], "\n", + "quantile : ", quantile.str, "\n", + "trait : ", trait, "\n", + "run id : ", run.id, "\n", + "model : ", model, "\n", + "model id : ", settings$model$id, "\n", + "site : ", settings$run$site$name, "\n", + "site id : ", settings$run$site$id, "\n", + "met data : ", settings$run$site$met, "\n", + "start date : ", settings$run$start.date, "\n", + "end date : ", settings$run$end.date, "\n", + "hostname : ", settings$host$name, "\n", + "rundir : ", file.path(settings$host$rundir, run.id), "\n", + "outdir : ", file.path(settings$host$outdir, run.id), "\n", + file = file.path(settings$rundir, run.id, "README.txt"), + sep = "") + + + # write configuration + do.call(my.write.config, args = list(defaults = defaults, + trait.values = trait.samples, + settings = settings, + run.id)) + cat(run.id, file = file.path(settings$rundir, 
"runs.txt"), sep = "\n", + append = TRUE) + } else { + runs[[pftname]][MEDIAN, trait] <- medianrun + } + } + } + } + + options(scipen = scipen) + return(invisible(list(runs = runs, ensemble.id = ensemble.id))) +} # write.sa.configs From 5ec041efca98a68a06d2d01acb494ae1836a9858 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 22 Sep 2021 20:23:26 +0200 Subject: [PATCH 2273/2289] roxygen --- base/utils/NAMESPACE | 2 -- modules/uncertainty/NAMESPACE | 3 +++ {base/utils => modules/uncertainty}/man/read.sa.output.Rd | 4 ++-- {base/utils => modules/uncertainty}/man/write.sa.configs.Rd | 0 4 files changed, 5 insertions(+), 4 deletions(-) rename {base/utils => modules/uncertainty}/man/read.sa.output.Rd (91%) rename {base/utils => modules/uncertainty}/man/write.sa.configs.Rd (100%) diff --git a/base/utils/NAMESPACE b/base/utils/NAMESPACE index d20f97dc5ef..83ee767abda 100644 --- a/base/utils/NAMESPACE +++ b/base/utils/NAMESPACE @@ -35,7 +35,6 @@ export(n_leap_day) export(paste.stats) export(r2bugs.distributions) export(read.output) -export(read.sa.output) export(read_web_config) export(retry.func) export(rsync) @@ -57,7 +56,6 @@ export(transformstats) export(tryl) export(units_are_equivalent) export(vecpaste) -export(write.sa.configs) export(zero.truncate) importFrom(magrittr,"%>%") importFrom(rlang,.data) diff --git a/modules/uncertainty/NAMESPACE b/modules/uncertainty/NAMESPACE index cfc1bd56af3..d24100c3c80 100644 --- a/modules/uncertainty/NAMESPACE +++ b/modules/uncertainty/NAMESPACE @@ -19,6 +19,7 @@ export(prep.data.assim) export(read.ameriflux.L2) export(read.ensemble.output) export(read.ensemble.ts) +export(read.sa.output) export(run.ensemble.analysis) export(run.sensitivity.analysis) export(runModule.get.results) @@ -30,4 +31,6 @@ export(sensitivity.analysis) export(sensitivity.filename) export(spline.truncate) export(write.ensemble.configs) +export(write.sa.configs) importFrom(dplyr,"%>%") +importFrom(magrittr,"%>%") diff --git a/base/utils/man/read.sa.output.Rd b/modules/uncertainty/man/read.sa.output.Rd similarity index 91% rename from base/utils/man/read.sa.output.Rd rename to modules/uncertainty/man/read.sa.output.Rd index c6efc9cd762..06a0f07196f 100644 --- a/base/utils/man/read.sa.output.Rd +++ b/modules/uncertainty/man/read.sa.output.Rd @@ -35,14 +35,14 @@ read.sa.output( \item{variable}{variables to be read from model output} \item{sa.run.ids}{list of run ids to read. 
-If NULL, will look in \code{pecandir} for a file named \code{samples.Rdata} +If NULL, will look in `pecandir` for a file named `samples.Rdata` and read from that} \item{per.pft}{flag to determine whether we want SA on pft-specific variables} } \value{ dataframe with one col per quantile analysed and one row per trait, -each cell is a list of AGB over time + each cell is a list of AGB over time } \description{ Reads output of sensitivity analysis runs diff --git a/base/utils/man/write.sa.configs.Rd b/modules/uncertainty/man/write.sa.configs.Rd similarity index 100% rename from base/utils/man/write.sa.configs.Rd rename to modules/uncertainty/man/write.sa.configs.Rd From c15247d24466d65dbc93e40ebcc86224a6d294ef Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 22 Sep 2021 20:31:33 +0200 Subject: [PATCH 2274/2289] update existing calls in other packages --- base/workflow/R/run.write.configs.R | 2 +- modules/uncertainty/R/get.results.R | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/base/workflow/R/run.write.configs.R b/base/workflow/R/run.write.configs.R index 0bc746bdd33..795a45fab77 100644 --- a/base/workflow/R/run.write.configs.R +++ b/base/workflow/R/run.write.configs.R @@ -101,7 +101,7 @@ run.write.configs <- function(settings, write = TRUE, ens.sample.method = "unifo ### Write out SA config files PEcAn.logger::logger.info("\n ----- Writing model run config files ----") - sa.runs <- PEcAn.utils::write.sa.configs(defaults = settings$pfts, + sa.runs <- PEcAn.uncertainty::write.sa.configs(defaults = settings$pfts, quantile.samples = sa.samples, settings = settings, model = model, diff --git a/modules/uncertainty/R/get.results.R b/modules/uncertainty/R/get.results.R index 58b6e00b7f2..80562a8a917 100644 --- a/modules/uncertainty/R/get.results.R +++ b/modules/uncertainty/R/get.results.R @@ -109,7 +109,7 @@ get.results <- function(settings, sa.ensemble.id = NULL, ens.ensemble.id = NULL, # when there is variable-per pft in the outputs, check for the tag for deciding SA per pft per.pft <- ifelse(!is.null(settings$sensitivity.analysis$perpft), as.logical(settings$sensitivity.analysis$perpft), FALSE) - sensitivity.output[[pft.name]] <- PEcAn.utils::read.sa.output( + sensitivity.output[[pft.name]] <- read.sa.output( traits = traits, quantiles = quantiles, pecandir = outdir, From d1a06b1d2d728b8bc81527b5581cb0a052a0c514 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 22 Sep 2021 20:40:44 +0200 Subject: [PATCH 2275/2289] avoid magrittr import --- modules/uncertainty/NAMESPACE | 1 - modules/uncertainty/R/sensitivity.R | 5 ++--- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/modules/uncertainty/NAMESPACE b/modules/uncertainty/NAMESPACE index d24100c3c80..648f2a61c4f 100644 --- a/modules/uncertainty/NAMESPACE +++ b/modules/uncertainty/NAMESPACE @@ -33,4 +33,3 @@ export(spline.truncate) export(write.ensemble.configs) export(write.sa.configs) importFrom(dplyr,"%>%") -importFrom(magrittr,"%>%") diff --git a/modules/uncertainty/R/sensitivity.R b/modules/uncertainty/R/sensitivity.R index c9e0bc9aa9e..beed3d1e036 100644 --- a/modules/uncertainty/R/sensitivity.R +++ b/modules/uncertainty/R/sensitivity.R @@ -26,7 +26,6 @@ ##' If NULL, will look in `pecandir` for a file named `samples.Rdata` ##' and read from that ##' @export -##' @importFrom magrittr %>% ##' @author Ryan Kelly, David LeBauer, Rob Kooper, Mike Dietze, Istem Fer #--------------------------------------------------------------------------------------------------# ##' @author Ryan Kelly, David LeBauer, 
Rob Kooper, Mike Dietze @@ -124,11 +123,11 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model, runs <- data.frame() # Reading the site.pft specific tags from xml - site.pfts.vec <- settings$run$site$site.pft %>% unlist %>% as.character + site.pfts.vec <- as.character(unlist(settings$run$site$site.pft)) if(!is.null(site.pfts.vec)){ # find the name of pfts defined in the body of pecan.xml - defined.pfts <- settings$pfts %>% purrr::map('name') %>% unlist %>% as.character + defined.pfts <- as.character(unlist(purrr::map(settings$pfts, 'name'))) # subset ensemble samples based on the pfts that are specified in the site and they are also sampled from. if (length(which(site.pfts.vec %in% defined.pfts)) > 0 ) quantile.samples <- quantile.samples [site.pfts.vec[ which(site.pfts.vec %in% defined.pfts) ]] From ba674fb365e270bfd38e6c819ecfa7ad409ad9b8 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 22 Sep 2021 21:46:53 +0200 Subject: [PATCH 2276/2289] load file into defined env --- modules/uncertainty/R/sensitivity.R | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/modules/uncertainty/R/sensitivity.R b/modules/uncertainty/R/sensitivity.R index beed3d1e036..934158fb69b 100644 --- a/modules/uncertainty/R/sensitivity.R +++ b/modules/uncertainty/R/sensitivity.R @@ -36,8 +36,9 @@ read.sa.output <- function(traits, quantiles, pecandir, outdir, pft.name = "", if (is.null(sa.run.ids)) { samples.file <- file.path(pecandir, "samples.Rdata") if (file.exists(samples.file)) { - load(samples.file) - sa.run.ids <- runs.samples$sa + samples <- new.env() + load(samples.file, envir = samples) + sa.run.ids <- samples$runs.samples$sa } else { PEcAn.logger::logger.error(samples.file, "not found, this file is required by the read.sa.output function") } From 09cfa1c8e667e5f6141d03b181f7f19b6d83cfe4 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 22 Sep 2021 21:48:54 +0200 Subject: [PATCH 2277/2289] add :: to utils calls that are now out-of-pkg --- modules/uncertainty/R/sensitivity.R | 21 ++++++++++++--------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/modules/uncertainty/R/sensitivity.R b/modules/uncertainty/R/sensitivity.R index 934158fb69b..4335bc0786f 100644 --- a/modules/uncertainty/R/sensitivity.R +++ b/modules/uncertainty/R/sensitivity.R @@ -59,10 +59,12 @@ read.sa.output <- function(traits, quantiles, pecandir, outdir, pft.name = "", # if SA is requested on a variable available per pft, pass pft.name to read.output # so that it only returns values for that pft pass_pft <- switch(per.pft + 1, NULL, pft.name) - out.tmp <- read.output(runid = run.id, outdir = file.path(outdir, run.id), - start.year = start.year, end.year = end.year, - variables = variables[var], - pft.name = pass_pft) + out.tmp <- PEcAn.utils::read.output( + runid = run.id, + outdir = file.path(outdir, run.id), + start.year = start.year, end.year = end.year, + variables = variables[var], + pft.name = pass_pft) assign(variables[var], out.tmp[[variables[var]]]) } @@ -183,7 +185,7 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model, } } } else { - run.id <- get.run.id("SA", "median") + run.id <- PEcAn.utils::get.run.id("SA", "median") ensemble.id <- NA } medianrun <- run.id @@ -285,10 +287,11 @@ write.sa.configs <- function(defaults, quantile.samples, settings, model, } } } else { - run.id <- get.run.id("SA", - round(quantile, 3), - trait = trait, - pft.name = names(trait.samples)[i]) + run.id <- PEcAn.utils::get.run.id( + run.type = "SA", + index = 
round(quantile, 3), + trait = trait, + pft.name = names(trait.samples)[i]) } runs[[pftname]][quantile.str, trait] <- run.id From a08133a96ffe5d8b95c442cf40d5832def8a7aa7 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Wed, 22 Sep 2021 21:52:48 +0200 Subject: [PATCH 2278/2289] remove empty sensitivity.R --- base/utils/R/sensitivity.R | 1 - 1 file changed, 1 deletion(-) delete mode 100644 base/utils/R/sensitivity.R diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R deleted file mode 100644 index 8b137891791..00000000000 --- a/base/utils/R/sensitivity.R +++ /dev/null @@ -1 +0,0 @@ - From 29ff806b8199b6cdb1e50492ac99a69169d9c059 Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 23 Sep 2021 11:18:04 +0200 Subject: [PATCH 2279/2289] use github version of vdiffr to replace too-old RSPM version --- base/qaqc/DESCRIPTION | 7 +++++++ docker/depends/pecan.depends.R | 1 + 2 files changed, 8 insertions(+) diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION index 9cf8e723106..a737c9d5bec 100644 --- a/base/qaqc/DESCRIPTION +++ b/base/qaqc/DESCRIPTION @@ -23,6 +23,13 @@ Suggests: rmarkdown, testthat (>= 3.0.0), vdiffr (>= 1.0.2) +X-Comment-Remotes: + Installing vdiffr from GitHub because as of 2021-09-23, this is the + easiest way to get version >= 1.0.2 onto Docker images that use older + Rstudio Package Manager snapshots. + When these are updated, OK to go back to installing vdiffr from CRAN. +Remotes: + github::r-lib/vdiffr@v1.0.2 License: BSD_3_clause + file LICENSE Copyright: Authors LazyLoad: yes diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R index 9db487f17eb..48f0d24e045 100644 --- a/docker/depends/pecan.depends.R +++ b/docker/depends/pecan.depends.R @@ -12,6 +12,7 @@ lapply(c( 'araiho/linkages_package', 'ebimodeling/biocro', 'MikkoPeltoniemi/Rpreles', +'r-lib/vdiffr@v1.0.2', 'ropensci/geonames', 'ropensci/nneo' ), remotes::install_github, lib = rlib) From 798e017e20452da68352ca684b157e2c71aafb3d Mon Sep 17 00:00:00 2001 From: Chris Black Date: Thu, 23 Sep 2021 12:30:11 +0200 Subject: [PATCH 2280/2289] need newer testthat as well --- base/qaqc/DESCRIPTION | 7 +++++-- docker/depends/pecan.depends.R | 1 + 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION index a737c9d5bec..9b8e5b5ee4c 100644 --- a/base/qaqc/DESCRIPTION +++ b/base/qaqc/DESCRIPTION @@ -21,14 +21,17 @@ Suggests: PEcAn.SIPNET, PEcAn.utils, rmarkdown, - testthat (>= 3.0.0), + testthat (>= 3.0.4), vdiffr (>= 1.0.2) X-Comment-Remotes: Installing vdiffr from GitHub because as of 2021-09-23, this is the easiest way to get version >= 1.0.2 onto Docker images that use older Rstudio Package Manager snapshots. - When these are updated, OK to go back to installing vdiffr from CRAN. + Ditto for testthat, because we need >= 3.0.4 for vdiffr compatibility. + When building on a system that finds these versions on CRAN, + OK to remove these Remotes lines and this comment. 
From a08133a96ffe5d8b95c442cf40d5832def8a7aa7 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Wed, 22 Sep 2021 21:52:48 +0200
Subject: [PATCH 2278/2289] remove empty sensitivity.R

---
 base/utils/R/sensitivity.R | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 base/utils/R/sensitivity.R

diff --git a/base/utils/R/sensitivity.R b/base/utils/R/sensitivity.R
deleted file mode 100644
index 8b137891791..00000000000
--- a/base/utils/R/sensitivity.R
+++ /dev/null
@@ -1 +0,0 @@
-

From 29ff806b8199b6cdb1e50492ac99a69169d9c059 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 23 Sep 2021 11:18:04 +0200
Subject: [PATCH 2279/2289] use github version of vdiffr to replace too-old
 RSPM version

---
 base/qaqc/DESCRIPTION          | 7 +++++++
 docker/depends/pecan.depends.R | 1 +
 2 files changed, 8 insertions(+)

diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index 9cf8e723106..a737c9d5bec 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -23,6 +23,13 @@ Suggests:
     rmarkdown,
     testthat (>= 3.0.0),
     vdiffr (>= 1.0.2)
+X-Comment-Remotes:
+    Installing vdiffr from GitHub because as of 2021-09-23, this is the
+    easiest way to get version >= 1.0.2 onto Docker images that use older
+    Rstudio Package Manager snapshots.
+    When these are updated, OK to go back to installing vdiffr from CRAN.
+Remotes:
+    github::r-lib/vdiffr@v1.0.2
 License: BSD_3_clause + file LICENSE
 Copyright: Authors
 LazyLoad: yes
diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R
index 9db487f17eb..48f0d24e045 100644
--- a/docker/depends/pecan.depends.R
+++ b/docker/depends/pecan.depends.R
@@ -12,6 +12,7 @@ lapply(c(
 'araiho/linkages_package',
 'ebimodeling/biocro',
 'MikkoPeltoniemi/Rpreles',
+'r-lib/vdiffr@v1.0.2',
 'ropensci/geonames',
 'ropensci/nneo'
 ), remotes::install_github, lib = rlib)

From 798e017e20452da68352ca684b157e2c71aafb3d Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 23 Sep 2021 12:30:11 +0200
Subject: [PATCH 2280/2289] need newer testthat as well

---
 base/qaqc/DESCRIPTION          | 7 +++++--
 docker/depends/pecan.depends.R | 1 +
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index a737c9d5bec..9b8e5b5ee4c 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -21,14 +21,17 @@ Suggests:
     PEcAn.SIPNET,
     PEcAn.utils,
     rmarkdown,
-    testthat (>= 3.0.0),
+    testthat (>= 3.0.4),
     vdiffr (>= 1.0.2)
 X-Comment-Remotes:
     Installing vdiffr from GitHub because as of 2021-09-23, this is the
     easiest way to get version >= 1.0.2 onto Docker images that use older
     Rstudio Package Manager snapshots.
-    When these are updated, OK to go back to installing vdiffr from CRAN.
+    Ditto for testthat, because we need >= 3.0.4 for vdiffr compatibility.
+    When building on a system that finds these versions on CRAN,
+    OK to remove these Remotes lines and this comment.
 Remotes:
+    github::r-lib/testthat@v3.0.4,
     github::r-lib/vdiffr@v1.0.2
 License: BSD_3_clause + file LICENSE
 Copyright: Authors
diff --git a/docker/depends/pecan.depends.R b/docker/depends/pecan.depends.R
index 48f0d24e045..27b98a9f05d 100644
--- a/docker/depends/pecan.depends.R
+++ b/docker/depends/pecan.depends.R
@@ -12,6 +12,7 @@ lapply(c(
 'araiho/linkages_package',
 'ebimodeling/biocro',
 'MikkoPeltoniemi/Rpreles',
+'r-lib/testthat@v3.0.4',
 'r-lib/vdiffr@v1.0.2',
 'ropensci/geonames',
 'ropensci/nneo'
 ), remotes::install_github, lib = rlib)
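
Editor's note: both patches above rely on the DESCRIPTION `Remotes:` field, which remotes/devtools-style installers consult when a dependency has to come from somewhere other than the configured CRAN-like repositories; `github::r-lib/vdiffr@v1.0.2` pins owner/repo at a tag. A sketch of how these pinned refs are consumed, mirroring the lapply()-over-install_github pattern already used in pecan.depends.R (assumes network access and the `remotes` package):

```r
install.packages("remotes")  # if not already available
lapply(
  c("r-lib/testthat@v3.0.4", "r-lib/vdiffr@v1.0.2"),
  remotes::install_github
)
# Alternatively, remotes::install_deps("base/qaqc") should honor the
# Remotes: field in that DESCRIPTION while resolving its Suggests.
```
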
From 012c5d4fca23c967be737fb6377939a197c345cc Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 23 Sep 2021 11:21:58 +0200
Subject: [PATCH 2281/2289] rerun pecan.depends.R to install any dependencies
 added since last Docker build

---
 .github/workflows/ci.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 19467069d72..5dcce9bf09a 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -56,6 +56,8 @@ jobs:
         run: Rscript scripts/generate_dependencies.R
       - name: check for out-of-date dependencies files
         uses: infotroph/tree-is-clean@v1
+      - name: install newly-added dependencies
+        run: Rscript docker/depends/pecan.depends.R
 
       # compile PEcAn code
       - name: build

From caa68729adbfca316a1593978c5d7ad0ef335477 Mon Sep 17 00:00:00 2001
From: Shashank Singh
Date: Thu, 23 Sep 2021 21:15:45 +0000
Subject: [PATCH 2282/2289] API changes documentation

---
 CHANGELOG.md | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7cc72921f46..ff007cd4a5e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -40,7 +40,7 @@ This is a major change:
 - fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2753)
 - ensure that control treatments always receives the random effect index of 1; rename madata.Rdata to jagged.data.Rdata and include database ids and names useful for calculating parameter estimates by treatment (#2756)
 - ensure that existing meta-analysis results can be used for pfts with cultivars (#2761)
-
+- Fixed error/notes/warnings after running R CMD checks on utils package([#2823](https://github.com/PecanProject/pecan/pull/2830))
 
 ### Changed
 - Removed deprecated mstmip_vars and mstmip_local; now all functions use the combined standard_vars.csv
@@ -89,6 +89,7 @@ This is a major change:
 - Ability to run with [MERRA-2 meteorology](https://gmao.gsfc.nasa.gov/reanalysis/MERRA-2/) (reanalysis product based on GEOS-5 model)
 - Ability to run with ICOS Ecosystem products
 
+
 ### Removed
 
 - Removed travis integration
@@ -97,7 +98,14 @@ This is a major change:
 - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563).
 - Scripts `dump.pgsql.sh` and `dump.mysql.sh` have been deleted. See the ["BeTY database administration"](https://pecanproject.github.io/pecan-documentation/develop/database.html) chapter of the PEcAn documentation for current recommendations (#2563).
 - Old dependency management scripts `check.dependencies.sh`, `update.dependencies.sh`, and `install_deps.R` have been deleted. Use `generate_dependencies.R` and the automatic dependency handling built into `make install` instead (#2563).
-
+- Some Functions have been removed from `PEcAn.utils` package:([#2823](https://github.com/PecanProject/pecan/issues/2834))
+  - removed already deprecated functions `do_conversions`, `run.write.configs`, `get.ensemble.samples`, `read.ensemble.output`
+  - removed already deprecated `logger.R` functions:
+      - `logger.debug` ,`logger.error`, `logger.getLevel` ,`logger.info` ,`logger.setLevel` ,`logger.setOutputFile` ,`logger.setQuitOnSevere` ,`logger.setWidth` ,`logger.severe` , `logger.warn`
+-- Some functions have been moved out from `PecAn.utils` package:
+  - Function `get.results` has been moved to `PEcAn.uncertainty`.
+  - all `Plot.R` functions (`dhist`, `create.base.plot`, `plot_data`, `theme_border`) have been moved to `PEcAn.visualizaton`.
+  - all `Sensitivity.R` (`read.sa.output` & `write.sa.configs`) functions have been moved from `PEcAn.utils` to `PEcAn.uncertainty` package.([#2856](https://github.com/PecanProject/pecan/issues/2856))
 
 ## [1.7.1] - 2018-09-12
From eb64269f933444b17dad78ff881bf8d51523715f Mon Sep 17 00:00:00 2001
From: Shashank Singh <73598347+moki1202@users.noreply.github.com>
Date: Fri, 24 Sep 2021 12:38:28 +0530
Subject: [PATCH 2283/2289] Update CHANGELOG.md

---
 CHANGELOG.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index ff007cd4a5e..b4407ecb3a3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -101,8 +101,8 @@ This is a major change:
 - Some Functions have been removed from `PEcAn.utils` package:([#2823](https://github.com/PecanProject/pecan/issues/2834))
   - removed already deprecated functions `do_conversions`, `run.write.configs`, `get.ensemble.samples`, `read.ensemble.output`
   - removed already deprecated `logger.R` functions:
-      - `logger.debug` ,`logger.error`, `logger.getLevel` ,`logger.info` ,`logger.setLevel` ,`logger.setOutputFile` ,`logger.setQuitOnSevere` ,`logger.setWidth` ,`logger.severe` , `logger.warn`
--- Some functions have been moved out from `PecAn.utils` package:
+    - `logger.debug` ,`logger.error`, `logger.getLevel` ,`logger.info` ,`logger.setLevel` ,`logger.setOutputFile` ,`logger.setQuitOnSevere` ,`logger.setWidth` ,`logger.severe` , `logger.warn`
+- Some functions have been moved out from `PecAn.utils` package:
   - Function `get.results` has been moved to `PEcAn.uncertainty`.
   - all `Plot.R` functions (`dhist`, `create.base.plot`, `plot_data`, `theme_border`) have been moved to `PEcAn.visualizaton`.
   - all `Sensitivity.R` (`read.sa.output` & `write.sa.configs`) functions have been moved from `PEcAn.utils` to `PEcAn.uncertainty` package.([#2856](https://github.com/PecanProject/pecan/issues/2856))

From c151a0d650183787ce20ec7bd4d9f3b41fbcd583 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Fri, 24 Sep 2021 13:54:13 +0200
Subject: [PATCH 2284/2289] tweak wording and give louder shoutout to
 @moki1202

---
 CHANGELOG.md | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index b4407ecb3a3..c3cb8ca858a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -40,13 +40,14 @@ This is a major change:
 - fix bug in summarize.result to output stat, which is needed to turn on RE in the meta-analysis (#2753)
 - ensure that control treatments always receives the random effect index of 1; rename madata.Rdata to jagged.data.Rdata and include database ids and names useful for calculating parameter estimates by treatment (#2756)
 - ensure that existing meta-analysis results can be used for pfts with cultivars (#2761)
-- Fixed error/notes/warnings after running R CMD checks on utils package([#2823](https://github.com/PecanProject/pecan/pull/2830))
+- Major code cleanup by GSoC student @moki1202, fixing many check warnings across 10 packages
+  (#2771, #2773, #2774, #2775, #2805, #2815, #2826, #2830, #2857)
+
 
 ### Changed
 - Removed deprecated mstmip_vars and mstmip_local; now all functions use the combined standard_vars.csv
 - RabbitMQ is set to be 3.8 since the 3.9 version can no longer be configured with environment variables.
 - Removed old api, now split into rpecanapi and apps/api.
-- Now using R 4.0.2 for Docker images. This is a major change. Newer version of R and using Ubuntu 20.04 instead of Debian.
 - Replaced `tmvtnorm` package with `TruncatedNormal` package for speed up per #2621.
 - Continuous integration changes: Added experimental GitHub Actions CI builds (#2544), streamlined Travis CI builds, added a fourth R version (second-newest old release; currently R 3.5) to Travis test matrix (#2592).
 - Functions that update database entries no longer pass `created_at` or `updated_at` timestamps. The database now updates these itself and ensures they are consistently in UTC (#1083).
@@ -64,6 +65,10 @@ This is a major change:
 - Changed precipitaion downscale in `PEcAn.data.atmosphere::download.NOAA_GEFS_downscale`. Precipitation was being downscaled via a spline which was causing fake rain events. Instead the 6 hr precipitation flux values from GEFS are preserved with 0's filling in the hours between.
 - Changed `dbfile.input.insert` to work with inputs (i.e soils) that don't have start and end dates associated with them
 - Default behavior for `stop_on_error` is now `TRUE` for non-ensemble runs; i.e., workflows that run only one model simulation (or omit the `ensemble` XML group altogether) will fail if the model run fails. For ensemble runs, the old behavior is preserved; i.e., workflows will continue even if one of the model runs failed. This behavior can also be manually controlled by setting the new `run -> stop_on_error` XML tag to `TRUE` or `FALSE`.
+- Several functions have been moved out of `PEcAn.utils` into other packages (#2830, #2857):
+  * `ensemble.filename`, `get.results`, `runModule.get.results`, `read.sa.output`, `sensitivity.filename`,
+    and `write.sa.configs` have been moved to `PEcAn.uncertainty`.
+  * `create.base.plot`, `dhist`, `plot_data` and `theme_border` have been moved to `PEcAn.visualizaton`.
 
 ### Added
 
@@ -98,14 +103,12 @@ This is a major change:
 - Database maintenance scripts `vacuum.bety.sh` and `reindex.bety.sh` have been moved to the [BeTY database repository](https://github.com/PecanProject/bety) (#2563).
 - Scripts `dump.pgsql.sh` and `dump.mysql.sh` have been deleted. See the ["BeTY database administration"](https://pecanproject.github.io/pecan-documentation/develop/database.html) chapter of the PEcAn documentation for current recommendations (#2563).
 - Old dependency management scripts `check.dependencies.sh`, `update.dependencies.sh`, and `install_deps.R` have been deleted. Use `generate_dependencies.R` and the automatic dependency handling built into `make install` instead (#2563).
-- Some Functions have been removed from `PEcAn.utils` package:([#2823](https://github.com/PecanProject/pecan/issues/2834))
-  - removed already deprecated functions `do_conversions`, `run.write.configs`, `get.ensemble.samples`, `read.ensemble.output`
-  - removed already deprecated `logger.R` functions:
-    - `logger.debug` ,`logger.error`, `logger.getLevel` ,`logger.info` ,`logger.setLevel` ,`logger.setOutputFile` ,`logger.setQuitOnSevere` ,`logger.setWidth` ,`logger.severe` , `logger.warn`
-- Some functions have been moved out from `PecAn.utils` package:
-  - Function `get.results` has been moved to `PEcAn.uncertainty`.
-  - all `Plot.R` functions (`dhist`, `create.base.plot`, `plot_data`, `theme_border`) have been moved to `PEcAn.visualizaton`.
-  - all `Sensitivity.R` (`read.sa.output` & `write.sa.configs`) functions have been moved from `PEcAn.utils` to `PEcAn.uncertainty` package.([#2856](https://github.com/PecanProject/pecan/issues/2856))
+- Deprecated copies of functions previously moved to other packages have been removed from `PEcAn.utils` (#2830):
+  * `do_conversions` and `runModule.run.write.configs`, `run.write.configs`. These are now in `PEcAn.workflow`
+  * `get.ensemble.samples`, `read.ensemble.output`, `write.ensemble.configs`. These are now in `PEcAn.uncertainty`
+  * `logger.debug`, `logger.error`, `logger.getLevel`, `logger.info`, `logger.setLevel`,
+    `logger.setOutputFile`, `logger.setQuitOnSevere`, `logger.setWidth`, `logger.severe`, `logger.warn`.
+    These are now in `PEcAn.logger`
 
 ## [1.7.1] - 2018-09-12
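
Editor's note: assuming a library where the post-move 1.7.2 packages are installed (an assumption, not something the patches provide), the relocations documented above can be spot-checked from an R session:

```r
# TRUE once get.results lives in PEcAn.uncertainty:
exists("get.results", envir = asNamespace("PEcAn.uncertainty"), inherits = FALSE)
# FALSE once the deprecated copy is gone from PEcAn.utils:
exists("get.results", envir = asNamespace("PEcAn.utils"), inherits = FALSE)
```
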
From 0fc1289e0214d99c36d5a0ef7c850e887b9c88a6 Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Fri, 24 Sep 2021 13:56:16 +0200
Subject: [PATCH 2285/2289] whitespace

---
 CHANGELOG.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c3cb8ca858a..771577f7976 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -94,7 +94,6 @@ This is a major change:
 - Ability to run with [MERRA-2 meteorology](https://gmao.gsfc.nasa.gov/reanalysis/MERRA-2/) (reanalysis product based on GEOS-5 model)
 - Ability to run with ICOS Ecosystem products
 
-
 ### Removed
 
 - Removed travis integration

From ee6616a103d3efc13c4ad7d58a9567e9c1793f0d Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Tue, 28 Sep 2021 10:00:04 +0200
Subject: [PATCH 2286/2289] one-char typo

---
 DEV-INTRO.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/DEV-INTRO.md b/DEV-INTRO.md
index 871b9c5934b..b9aebd31222 100644
--- a/DEV-INTRO.md
+++ b/DEV-INTRO.md
@@ -65,7 +65,7 @@ You can now use the command `docker-compose` to work with the containers setup f
 
 ### First time setup
 
-The steps in this section only need to be done the fist time you start working with the stack in docker. After this is done you can skip these steps. You can find more detail about the docker commands in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html).
+The steps in this section only need to be done the first time you start working with the stack in docker. After this is done you can skip these steps. You can find more detail about the docker commands in the [pecan documentation](https://pecanproject.github.io/pecan-documentation/master/docker-index.html).
 
 * setup .env file
 * create folders to hold the data
From ffe18580e6a9289abbb90abf362b8e392e135ead Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Thu, 30 Sep 2021 22:48:02 +0200
Subject: [PATCH 2287/2289] share github PAT with document command to allow
 installing from remotes when needed

---
 .github/workflows/styler-actions.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index 581dab727a4..7a10b0f00fd 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -57,6 +57,7 @@ jobs:
         shell: bash
         env:
           FILES: ${{ join(fromJSON(steps.file_changes.outputs.files_modified), ' ') }}
+          GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
         run: |
           echo ${FILES} \
             | tr ' ' '\n' \

From ee72a7d2699286dfb0ce152a9b64ee0d0c3c5eaa Mon Sep 17 00:00:00 2001
From: Chris Black
Date: Fri, 1 Oct 2021 14:23:18 +0200
Subject: [PATCH 2288/2289] run pecan.depends too

---
 .github/workflows/styler-actions.yml | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/.github/workflows/styler-actions.yml b/.github/workflows/styler-actions.yml
index 7a10b0f00fd..e47fcd89d9e 100644
--- a/.github/workflows/styler-actions.yml
+++ b/.github/workflows/styler-actions.yml
@@ -51,6 +51,10 @@ jobs:
           repo-token: ${{ secrets.GITHUB_TOKEN }}
       - name: update dependency lists
         run: Rscript scripts/generate_dependencies.R
+      - name: install any new dependencies
+        env:
+          GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
+        run: Rscript docker/depends/pecan.depends.R
       - id: file_changes
         uses: trilom/file-changes-action@v1.2.4
       - name : make
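
Editor's note: both workflow tweaks above export `GITHUB_PAT` because the remotes package reads that environment variable to authenticate its GitHub API requests, and anonymous CI runners are subject to a much lower unauthenticated rate limit. A small sketch of the behavior the steps rely on (network access assumed):

```r
token <- Sys.getenv("GITHUB_PAT")
if (!nzchar(token)) {
  message("GITHUB_PAT not set; GitHub API calls will be unauthenticated.")
}
# With GITHUB_PAT exported by the workflow, this request is authenticated:
remotes::install_github("r-lib/vdiffr@v1.0.2")
```
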
From c15ab26ae9615fafb263d39e7c231e93e410d3ee Mon Sep 17 00:00:00 2001
From: Rob Kooper
Date: Mon, 4 Oct 2021 14:50:19 -0500
Subject: [PATCH 2289/2289] bump version to 1.7.2

---
 CHANGELOG.md | 2 +-
 base/all/DESCRIPTION | 4 ++--
 base/db/DESCRIPTION | 4 ++--
 base/qaqc/DESCRIPTION | 4 ++--
 base/remote/DESCRIPTION | 4 ++--
 base/settings/DESCRIPTION | 4 ++--
 base/utils/DESCRIPTION | 4 ++--
 base/visualization/DESCRIPTION | 4 ++--
 base/workflow/DESCRIPTION | 4 ++--
 models/basgra/DESCRIPTION | 4 ++--
 models/biocro/DESCRIPTION | 4 ++--
 models/cable/DESCRIPTION | 4 ++--
 models/clm45/DESCRIPTION | 4 ++--
 models/dalec/DESCRIPTION | 4 ++--
 models/dvmdostem/DESCRIPTION | 4 ++--
 models/ed/DESCRIPTION | 4 ++--
 models/fates/DESCRIPTION | 4 ++--
 models/gday/DESCRIPTION | 4 ++--
 models/jules/DESCRIPTION | 4 ++--
 models/linkages/DESCRIPTION | 4 ++--
 models/lpjguess/DESCRIPTION | 4 ++--
 models/maat/DESCRIPTION | 4 ++--
 models/maespa/DESCRIPTION | 4 ++--
 models/preles/DESCRIPTION | 4 ++--
 models/sipnet/DESCRIPTION | 4 ++--
 models/stics/DESCRIPTION | 4 ++--
 models/template/DESCRIPTION | 4 ++--
 modules/allometry/DESCRIPTION | 4 ++--
 modules/assim.batch/DESCRIPTION | 4 ++--
 modules/assim.sequential/DESCRIPTION | 4 ++--
 modules/benchmark/DESCRIPTION | 4 ++--
 modules/data.atmosphere/DESCRIPTION | 4 ++--
 modules/data.hydrology/DESCRIPTION | 4 ++--
 modules/data.land/DESCRIPTION | 4 ++--
 modules/data.mining/DESCRIPTION | 4 ++--
 modules/data.remote/DESCRIPTION | 4 ++--
 modules/emulator/DESCRIPTION | 4 ++--
 modules/meta.analysis/DESCRIPTION | 4 ++--
 modules/photosynthesis/DESCRIPTION | 4 ++--
 modules/priors/DESCRIPTION | 4 ++--
 modules/rtm/DESCRIPTION | 4 ++--
 modules/uncertainty/DESCRIPTION | 4 ++--
 web/common.php | 2 +-
 43 files changed, 84 insertions(+), 84 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 771577f7976..e0ae7dab4b9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,7 @@ section for the next release.
 
 For more information about this file see also [Keep a Changelog](http://keepachangelog.com/) .
 
-## [Unreleased]
+## [1.7.2] - 2021-10-04
 
 ### Due to dependencies, PEcAn is now using R 4.0.3 for Docker images.
diff --git a/base/all/DESCRIPTION b/base/all/DESCRIPTION
index c9dbbc0740f..433a56017b4 100644
--- a/base/all/DESCRIPTION
+++ b/base/all/DESCRIPTION
@@ -2,8 +2,8 @@ Package: PEcAn.all
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("David","LeBauer"),
              person("Xiaohui", "Feng"),
diff --git a/base/db/DESCRIPTION b/base/db/DESCRIPTION
index d5eaf264342..6b653fc3033 100644
--- a/base/db/DESCRIPTION
+++ b/base/db/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.DB
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("David", "LeBauer", role = c("aut", "cre"),
                     email = "dlebauer@email.arizona.edu",
                     comment = c(ORCID = "0000-0001-7228-053X")),
diff --git a/base/qaqc/DESCRIPTION b/base/qaqc/DESCRIPTION
index 9b8e5b5ee4c..28a568be22d 100644
--- a/base/qaqc/DESCRIPTION
+++ b/base/qaqc/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.qaqc
 Type: Package
 Title: QAQC
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("David","LeBauer"),
              person("Tess", "McCabe"))
 Author: David LeBauer, Tess McCabe
diff --git a/base/remote/DESCRIPTION b/base/remote/DESCRIPTION
index d39d6616c5b..fcb5f3873bf 100644
--- a/base/remote/DESCRIPTION
+++ b/base/remote/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.remote
 Type: Package
 Title: PEcAn model execution utilities
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("David","LeBauer"),
              person("Rob","Kooper"),
              person("Shawn", "Serbin"),
diff --git a/base/settings/DESCRIPTION b/base/settings/DESCRIPTION
index 7478de686b1..2a0c13bd227 100644
--- a/base/settings/DESCRIPTION
+++ b/base/settings/DESCRIPTION
@@ -3,8 +3,8 @@ Title: PEcAn Settings package
 Authors@R: c(person("David","LeBauer", role = c("aut", "cre"),
                     email = "dlebauer@arizona.edu"),
              person("Rob","Kooper", rol="aut"))
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 License: BSD_3_clause + file LICENSE
 Copyright: Authors
 LazyLoad: yes
diff --git a/base/utils/DESCRIPTION b/base/utils/DESCRIPTION
index fdfae602a68..b7468806c83 100644
--- a/base/utils/DESCRIPTION
+++ b/base/utils/DESCRIPTION
@@ -2,8 +2,8 @@ Package: PEcAn.utils
 Type: Package
 Title: PEcAn Functions Used for Ecological Forecasts and Reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(
   person("Mike", "Dietze", role = "aut"),
   person("Rob","Kooper", role = c("aut", "cre"),
diff --git a/base/visualization/DESCRIPTION b/base/visualization/DESCRIPTION
index 1e8e4e4bf05..5eb0b1dcff1 100644
--- a/base/visualization/DESCRIPTION
+++ b/base/visualization/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.visualization
 Type: Package
 Title: PEcAn visualization functions
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(
   person("Mike", "Dietze", role = "aut"),
   person("David", "LeBauer",
diff --git a/base/workflow/DESCRIPTION b/base/workflow/DESCRIPTION
index 407d4e86f1d..ce65d898eb4 100644
--- a/base/workflow/DESCRIPTION
+++ b/base/workflow/DESCRIPTION
@@ -2,8 +2,8 @@ Package: PEcAn.workflow
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("David","LeBauer"),
              person("Xiaohui", "Feng"),
diff --git a/models/basgra/DESCRIPTION b/models/basgra/DESCRIPTION
index 9432345def2..e009e8c5e12 100644
--- a/models/basgra/DESCRIPTION
+++ b/models/basgra/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.BASGRA
 Type: Package
 Title: PEcAn package for integration of the BASGRA model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Istem", "Fer", email = "istem.fer@fmi.fi", role = c("aut", "cre")))
 Description: This module provides functions to link the BASGRA model to PEcAn.
 Imports:
diff --git a/models/biocro/DESCRIPTION b/models/biocro/DESCRIPTION
index 2ccb30f8c0d..76d4f509c15 100644
--- a/models/biocro/DESCRIPTION
+++ b/models/biocro/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.BIOCRO
 Type: Package
 Title: PEcAn package for integration of the BioCro model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("David","LeBauer"),
              person("Christopher", "Black"),
              person("Deepak", "Jaiswal"))
diff --git a/models/cable/DESCRIPTION b/models/cable/DESCRIPTION
index bd15089bf54..bb01eb24c99 100644
--- a/models/cable/DESCRIPTION
+++ b/models/cable/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.CABLE
 Type: Package
 Title: PEcAn package for integration of the CABLE model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Kaitlin","Ragosta"),
              person("Tony", "Gardella"))
 Author: Kaitlin Ragosta
diff --git a/models/clm45/DESCRIPTION b/models/clm45/DESCRIPTION
index ade9d7d45bb..937c1827baf 100644
--- a/models/clm45/DESCRIPTION
+++ b/models/clm45/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.CLM45
 Type: Package
 Title: PEcAn package for integration of CLM4.5 model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"))
 Author: Mike Dietze
 Maintainer: Mike Dietze
diff --git a/models/dalec/DESCRIPTION b/models/dalec/DESCRIPTION
index d20a9d9c7b0..9f5e6af6ac0 100644
--- a/models/dalec/DESCRIPTION
+++ b/models/dalec/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.DALEC
 Type: Package
 Title: PEcAn package for integration of the DALEC model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Tristan","Quaife"))
 Author: Mike Dietze, Tristain Quaife
diff --git a/models/dvmdostem/DESCRIPTION b/models/dvmdostem/DESCRIPTION
index 8076c46cfca..54ec7349b72 100644
--- a/models/dvmdostem/DESCRIPTION
+++ b/models/dvmdostem/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.dvmdostem
 Type: Package
 Title: PEcAn package for integration of the dvmdostem model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Shawn", "Serbin"),
              person("Tobey", "Carman"))
 Author: Tobey Carman, Shawn Serbin
diff --git a/models/ed/DESCRIPTION b/models/ed/DESCRIPTION
index c2a79f26398..2fb80f4c4b5 100644
--- a/models/ed/DESCRIPTION
+++ b/models/ed/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.ED2
 Type: Package
 Title: PEcAn package for integration of ED2 model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("David","LeBauer"),
              person("Xiaohui", "Feng"),
diff --git a/models/fates/DESCRIPTION b/models/fates/DESCRIPTION
index 2ddeabbbf83..cca4566fd76 100644
--- a/models/fates/DESCRIPTION
+++ b/models/fates/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.FATES
 Type: Package
 Title: PEcAn package for integration of FATES model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Shawn", "Serbin"))
 Author: Mike Dietze
diff --git a/models/gday/DESCRIPTION b/models/gday/DESCRIPTION
index d6b272cd62a..966d0386e44 100644
--- a/models/gday/DESCRIPTION
+++ b/models/gday/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.GDAY
 Type: Package
 Title: PEcAn package for integration of the GDAY model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Martin","De Kauwe"),
              person("Tony", "Gardella"))
 Author: Martin De Kauwe
diff --git a/models/jules/DESCRIPTION b/models/jules/DESCRIPTION
index c0705c691ab..b27f9ff92f6 100644
--- a/models/jules/DESCRIPTION
+++ b/models/jules/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.JULES
 Type: Package
 Title: PEcAn package for integration of the JULES model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"))
 Author: Mike Dietze
 Maintainer: Mike Dietze
diff --git a/models/linkages/DESCRIPTION b/models/linkages/DESCRIPTION
index 8086ec27253..f8524a247e9 100644
--- a/models/linkages/DESCRIPTION
+++ b/models/linkages/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.LINKAGES
 Type: Package
 Title: PEcAn package for integration of the LINKAGES model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Ann", "Raiho"))
 Author: Ann Raiho, Mike Dietze
diff --git a/models/lpjguess/DESCRIPTION b/models/lpjguess/DESCRIPTION
index 0f5d0335a81..8cfd22d9151 100644
--- a/models/lpjguess/DESCRIPTION
+++ b/models/lpjguess/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.LPJGUESS
 Type: Package
 Title: PEcAn package for integration of the LPJ-GUESS model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Istem", "Fer"),
              person("Tony", "Gardella"))
 Author: Istem Fer, Tony Gardella
diff --git a/models/maat/DESCRIPTION b/models/maat/DESCRIPTION
index 0102d27ca89..cfecb9cf5e4 100644
--- a/models/maat/DESCRIPTION
+++ b/models/maat/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.MAAT
 Type: Package
 Title: PEcAn package for integration of the MAAT model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: as.person(c(
   "Shawn Serbin [aut, cre]",
   "Anthony Walker [aut]"
diff --git a/models/maespa/DESCRIPTION b/models/maespa/DESCRIPTION
index 26525555179..c072ea25fe1 100644
--- a/models/maespa/DESCRIPTION
+++ b/models/maespa/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.MAESPA
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis using MAESPA
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Tony", "Gardella"))
 Author: Tony Gardella
 Maintainer: Tony Gardella
diff --git a/models/preles/DESCRIPTION b/models/preles/DESCRIPTION
index 0825dc88bad..67bdcb60829 100644
--- a/models/preles/DESCRIPTION
+++ b/models/preles/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.PRELES
 Type: Package
 Title: PEcAn package for integration of the PRELES model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Tony", "Gardella"))
 Author: Tony Gardella, Mike Dietze
diff --git a/models/sipnet/DESCRIPTION b/models/sipnet/DESCRIPTION
index 768471c4355..46071fc27f4 100644
--- a/models/sipnet/DESCRIPTION
+++ b/models/sipnet/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.SIPNET
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"))
 Author: Mike Dietze
 Maintainer: Mike Dietze
diff --git a/models/stics/DESCRIPTION b/models/stics/DESCRIPTION
index 56c2d2a7836..3a773dade15 100644
--- a/models/stics/DESCRIPTION
+++ b/models/stics/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.STICS
 Type: Package
 Title: PEcAn package for integration of the STICS model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(
   person("Istem", "Fer",
          email = "istem.fer@fmi.fi",
diff --git a/models/template/DESCRIPTION b/models/template/DESCRIPTION
index 9e895dd92ce..fb191cab2d1 100644
--- a/models/template/DESCRIPTION
+++ b/models/template/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.ModelName
 Type: Package
 Title: PEcAn package for integration of the ModelName model
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(
   person("Jane", "Doe",
          email = "jdoe@illinois.edu",
diff --git a/modules/allometry/DESCRIPTION b/modules/allometry/DESCRIPTION
index d67be10cfda..0186fc9bc82 100644
--- a/modules/allometry/DESCRIPTION
+++ b/modules/allometry/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.allometry
 Type: Package
 Title: PEcAn allometry functions
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"))
 Author: Mike Dietze
 Maintainer: Mike Dietze
diff --git a/modules/assim.batch/DESCRIPTION b/modules/assim.batch/DESCRIPTION
index 1dbdcb9d1fc..686533d170c 100644
--- a/modules/assim.batch/DESCRIPTION
+++ b/modules/assim.batch/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.assim.batch
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze", role = "aut",
                     email = "dietze@bu.edu"),
              person("Istem","Fer", role = c("aut", "cre"),
diff --git a/modules/assim.sequential/DESCRIPTION b/modules/assim.sequential/DESCRIPTION
index 9789dbb29df..6f6f09bb44f 100644
--- a/modules/assim.sequential/DESCRIPTION
+++ b/modules/assim.sequential/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.assim.sequential
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Author: Mike Dietze
 Maintainer: Mike Dietze
 Description: The Predictive Ecosystem Carbon Analyzer (PEcAn) is a scientific
diff --git a/modules/benchmark/DESCRIPTION b/modules/benchmark/DESCRIPTION
index f5680f22155..28b2b776549 100644
--- a/modules/benchmark/DESCRIPTION
+++ b/modules/benchmark/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.benchmark
 Type: Package
 Title: PEcAn functions used for benchmarking
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("David","LeBauer"),
              person("Rob","Kooper"),
diff --git a/modules/data.atmosphere/DESCRIPTION b/modules/data.atmosphere/DESCRIPTION
index 988c58a9db9..86cce921e2e 100644
--- a/modules/data.atmosphere/DESCRIPTION
+++ b/modules/data.atmosphere/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.data.atmosphere
 Type: Package
 Title: PEcAn functions used for managing climate driver data
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze", role = "aut"),
              person("David","LeBauer", role = c("aut", "cre"),
                     email = "dlebauer@illinois.edu"),
diff --git a/modules/data.hydrology/DESCRIPTION b/modules/data.hydrology/DESCRIPTION
index bd8ca1c023b..8c39115f71b 100644
--- a/modules/data.hydrology/DESCRIPTION
+++ b/modules/data.hydrology/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.data.hydrology
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("David","LeBauer"),
              person("Xiaohui", "Feng"),
diff --git a/modules/data.land/DESCRIPTION b/modules/data.land/DESCRIPTION
index 85b67b552df..03062638ecb 100644
--- a/modules/data.land/DESCRIPTION
+++ b/modules/data.land/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.data.land
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze",email="dietze@bu.edu",role="cre"),
              person("David","LeBauer",role="aut"),
              person("Xiaohui", "Feng",role="ctb"),
diff --git a/modules/data.mining/DESCRIPTION b/modules/data.mining/DESCRIPTION
index 4d812539850..1265f9067c2 100644
--- a/modules/data.mining/DESCRIPTION
+++ b/modules/data.mining/DESCRIPTION
@@ -2,8 +2,8 @@ Package: PEcAn.data.mining
 Type: Package
 Title: PEcAn functions used for exploring model residuals and structures
 Description: (Temporary description) PEcAn functions used for exploring model residuals and structures
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"))
 Author: Mike Dietze
 Maintainer: Mike Dietze
diff --git a/modules/data.remote/DESCRIPTION b/modules/data.remote/DESCRIPTION
index 7b6c3663c2d..1a868474833 100644
--- a/modules/data.remote/DESCRIPTION
+++ b/modules/data.remote/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.data.remote
 Type: Package
 Title: PEcAn functions used for extracting remote sensing data
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Bailey", "Morrison"))
 Author: Mike Dietze, Bailey Morrison
 Maintainer: Bailey Morrison
diff --git a/modules/emulator/DESCRIPTION b/modules/emulator/DESCRIPTION
index a782b0ab330..c5f5a4ad47a 100644
--- a/modules/emulator/DESCRIPTION
+++ b/modules/emulator/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.emulator
 Type: Package
 Title: Gausian Process emulator
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"))
 Author: Michael Dietze
 Maintainer: Michael Dietze
diff --git a/modules/meta.analysis/DESCRIPTION b/modules/meta.analysis/DESCRIPTION
index d383b83b9d5..dca7f5f0874 100644
--- a/modules/meta.analysis/DESCRIPTION
+++ b/modules/meta.analysis/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.MA
 Type: Package
 Title: PEcAn functions used for meta-analysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze", role = c("aut")),
              person("David", "LeBauer", role = c("aut", "cre"), email = "dlebauer@email.arizona.edu"),
              person("Xiaohui", "Feng", role = c("aut")),
diff --git a/modules/photosynthesis/DESCRIPTION b/modules/photosynthesis/DESCRIPTION
index 8f8d43fce45..2c0c90bed75 100644
--- a/modules/photosynthesis/DESCRIPTION
+++ b/modules/photosynthesis/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.photosynthesis
 Type: Package
 Title: PEcAn functions used for leaf-level photosynthesis calculations
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Xiaohui", "Feng"),
              person("Shawn", "Serbin"))
diff --git a/modules/priors/DESCRIPTION b/modules/priors/DESCRIPTION
index 00adc4fe2ac..f23d5070849 100644
--- a/modules/priors/DESCRIPTION
+++ b/modules/priors/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAn.priors
 Type: Package
 Title: PEcAn functions used to estimate priors from data
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("David","LeBauer"))
 Author: David LeBauer
 Maintainer: David LeBauer
diff --git a/modules/rtm/DESCRIPTION b/modules/rtm/DESCRIPTION
index 669d3819941..5006b51c480 100644
--- a/modules/rtm/DESCRIPTION
+++ b/modules/rtm/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: PEcAnRTM
 Type: Package
 Title: PEcAn functions used for radiative transfer modeling
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("Shawn", "Serbin"),
              person("Alexey", "Shiklomanov"))
diff --git a/modules/uncertainty/DESCRIPTION b/modules/uncertainty/DESCRIPTION
index bc8a8ceda73..f80eac06cc1 100644
--- a/modules/uncertainty/DESCRIPTION
+++ b/modules/uncertainty/DESCRIPTION
@@ -2,8 +2,8 @@ Package: PEcAn.uncertainty
 Type: Package
 Title: PEcAn functions used for ecological forecasts and reanalysis
-Version: 1.7.1
-Date: 2019-09-05
+Version: 1.7.2
+Date: 2021-10-04
 Authors@R: c(person("Mike","Dietze"),
              person("David","LeBauer"),
              person("Xiaohui", "Feng"),
diff --git a/web/common.php b/web/common.php
index 7765c5692bb..e82330c5abc 100644
--- a/web/common.php
+++ b/web/common.php
@@ -14,7 +14,7 @@ function get_footer() {
     Terrestrial Ecosystems, Department of Energy (ARPA-E #DE-AR0000594 and #DE-AR0000598),
     Department of Defense, the Arizona Experiment Station, the Energy Biosciences Institute,
     and an Amazon AWS in Education Grant.
-    PEcAn Version 1.7.1";
+    PEcAn Version 1.7.2";
 }
 
 function whoami() {